From xen-devel-bounces@lists.xenproject.org Sat May 01 01:13:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 01:13:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120821.228466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lceBb-0006i6-NW; Sat, 01 May 2021 01:12:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120821.228466; Sat, 01 May 2021 01:12:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lceBb-0006hz-KK; Sat, 01 May 2021 01:12:47 +0000
Received: by outflank-mailman (input) for mailman id 120821;
 Sat, 01 May 2021 01:12:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lceBa-0006hn-Ae; Sat, 01 May 2021 01:12:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lceBa-0004X0-4V; Sat, 01 May 2021 01:12:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lceBZ-0000Nx-R4; Sat, 01 May 2021 01:12:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lceBZ-0007k9-Q1; Sat, 01 May 2021 01:12:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nQREZftGdUSYWV6npHiJYe5ctymNWAz6yCq4qCmzypw=; b=zNkm3jgegw4W/tK57zUJIPbC1f
	WlxtNIqGFPvYUK7iitN0ip1qFWgTJZogi7N75kvwrz/5hqqssZqoYhPiqXZeZNnGvjhOtm1VI1N4H
	xkhUUwVUtSvCRX1Lkt8pNforSOftXUhpAaPBnMNc/mcMowf7tSWNs+PDdRFJG3qFkoHU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161551-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161551: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=b8e53a81ba538849b98b0d417436f8be653fa1ff
X-Osstest-Versions-That:
    xen=972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 May 2021 01:12:45 +0000

flight 161551 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161551/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161534
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161534
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161534
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161534
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161534
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161534
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161534
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161534
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161534
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161534
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161534
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b8e53a81ba538849b98b0d417436f8be653fa1ff
baseline version:
 xen                  972ba1d1d4bcb77018b50fd2bb63c0e628859ed3

Last test of basis   161534  2021-04-29 21:04:23 Z    1 days
Testing same since   161551  2021-04-30 11:49:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   972ba1d1d4..b8e53a81ba  b8e53a81ba538849b98b0d417436f8be653fa1ff -> master


From xen-devel-bounces@lists.xenproject.org Sat May 01 02:45:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 02:45:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120835.228496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lcfd0-0005eC-U8; Sat, 01 May 2021 02:45:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120835.228496; Sat, 01 May 2021 02:45:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lcfd0-0005e5-QC; Sat, 01 May 2021 02:45:10 +0000
Received: by outflank-mailman (input) for mailman id 120835;
 Sat, 01 May 2021 02:45:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcfcz-0005dx-HZ; Sat, 01 May 2021 02:45:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcfcz-0002jC-AI; Sat, 01 May 2021 02:45:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcfcz-0004DX-1u; Sat, 01 May 2021 02:45:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lcfcz-0004yV-1O; Sat, 01 May 2021 02:45:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=isxo26T2TxVneloRJfc6DCd5ki7MzZVkQxnJX8YPiYk=; b=5JIP7U3ySL/Cx1Ath3lWxxM6gp
	IVEVuWjzIL7zxiK31brT1SvGrnm1d1TzFX5rAMPaFSa6s8yeELxGxddLsDn8noRi5zidKfN55+igA
	NCzG1TLq2rutunEsN9tW76KCvKWqk1NNIFBkf+8Hjq707aBhMdDbccCxWnZgY4QkM7NU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161559-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161559: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1e6b0394d6c001802dc454ecff19076aaa80f51c
X-Osstest-Versions-That:
    ovmf=ab957f036f6711869283217227480b109aedc8ef
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 May 2021 02:45:09 +0000

flight 161559 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161559/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1e6b0394d6c001802dc454ecff19076aaa80f51c
baseline version:
 ovmf                 ab957f036f6711869283217227480b109aedc8ef

Last test of basis   161530  2021-04-29 20:10:02 Z    1 days
Testing same since   161559  2021-04-30 18:42:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>
  Lendacky, Thomas <thomas.lendacky@amd.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ab957f036f..1e6b0394d6  1e6b0394d6c001802dc454ecff19076aaa80f51c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 01 05:11:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 05:11:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120845.228517 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lchuI-0001i1-Cg; Sat, 01 May 2021 05:11:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120845.228517; Sat, 01 May 2021 05:11:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lchuI-0001hu-9e; Sat, 01 May 2021 05:11:10 +0000
Received: by outflank-mailman (input) for mailman id 120845;
 Sat, 01 May 2021 05:11:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lchuG-0001hm-JO; Sat, 01 May 2021 05:11:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lchuG-0005Yd-FD; Sat, 01 May 2021 05:11:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lchuG-0005gM-4r; Sat, 01 May 2021 05:11:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lchuG-0002HL-4O; Sat, 01 May 2021 05:11:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JWKzAiB9tgPsWN4989m6M36MAacFS5FP0X6ithTAYDM=; b=iazi/UGmERojaU9EGVfnHt0+ph
	zCkpnODVvrgZn31UYktSOlxrbAJojBRulvzbPLAy448QEfLeulw6NVgrbQFIBKPEKZzyE06A5iE0V
	Lq8lcnhQd0SqJociGtW5qS0f2deD3DOQKGYUf/g40TqLkgAA/OpqvY30ElNd17eD7Bng=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161554-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161554: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f38d1ea49711232651a817ec9d04c9d9e4816c44
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 May 2021 05:11:08 +0000

flight 161554 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161554/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                f38d1ea49711232651a817ec9d04c9d9e4816c44
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  253 days
Failing since        152659  2020-08-21 14:07:39 Z  252 days  463 attempts
Testing same since   161554  2021-04-30 16:08:08 Z    0 days    1 attempts

------------------------------------------------------------
478 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 144604 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 01 07:51:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 07:51:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120864.228543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lckP5-0006pA-9L; Sat, 01 May 2021 07:51:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120864.228543; Sat, 01 May 2021 07:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lckP5-0006p3-6Q; Sat, 01 May 2021 07:51:07 +0000
Received: by outflank-mailman (input) for mailman id 120864;
 Sat, 01 May 2021 07:51:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lckP4-0006ov-Ow; Sat, 01 May 2021 07:51:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lckP4-00089S-7U; Sat, 01 May 2021 07:51:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lckP3-0005Al-VM; Sat, 01 May 2021 07:51:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lckP3-0008ID-Uq; Sat, 01 May 2021 07:51:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SCqafZ5wPMGLLMT5W1CjI4RZdeWzI0O6vnuYukLAcP8=; b=E5CC+DQjWC6RAXNoX3V5HcLDZY
	GnHEUJtS/U3+KfraY3awQi4UY/M2wyfavEJA/FnDzcTEpCRZZZSZUbpwF+yT6usVZtknakia9RowP
	NUGyJCPukEFJqvt3e9Uou5dxymPC5lagkGcAixJ981ohdiOtlREmmW6mPNqIcz3VY93Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161570-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161570: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=ec2e3336b8c8df572600043976e1ab5feead656e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 May 2021 07:51:05 +0000

flight 161570 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161570/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              ec2e3336b8c8df572600043976e1ab5feead656e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  295 days
Failing since        151818  2020-07-11 04:18:52 Z  294 days  287 attempts
Testing same since   161516  2021-04-29 04:18:53 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 55101 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 01 08:51:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 08:51:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120875.228565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lclLG-00046V-53; Sat, 01 May 2021 08:51:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120875.228565; Sat, 01 May 2021 08:51:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lclLG-00046O-1C; Sat, 01 May 2021 08:51:14 +0000
Received: by outflank-mailman (input) for mailman id 120875;
 Sat, 01 May 2021 08:51:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lclLF-00046G-8g; Sat, 01 May 2021 08:51:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lclLF-0001Bz-1E; Sat, 01 May 2021 08:51:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lclLE-0007zv-OX; Sat, 01 May 2021 08:51:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lclLE-0008Dt-Nz; Sat, 01 May 2021 08:51:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Go/3mGbK9PRUuJBWD6jBwDYvQ1tslkOzGYuuqT1SNJM=; b=fINX6Xzyne/IJOLqeE7jS6ktRW
	c1jKepM9OXDumEKRiXnXspVzagI1hVmrv53IrcTLJqD6BqTHplwkxShh3DIMVHgc5OrgjIjiDIUkj
	dIDR/IUoNxUp7bbdcv1O5J42i2T4GB/F11F7udDe4jBJR73lPJUI2jcq3TUeRhMczRFM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161560-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161560: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5b280a59c4dd8dad6cc8da28db981b193d10acee
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 May 2021 08:51:12 +0000

flight 161560 xen-4.12-testing real [real]
flight 161574 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161560/
http://logs.test-lab.xenproject.org/osstest/logs/161574/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10   fail REGR. vs. 159418

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159418
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159418
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  5b280a59c4dd8dad6cc8da28db981b193d10acee
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   73 days
Failing since        160128  2021-03-18 14:36:18 Z   43 days   58 attempts
Testing same since   160150  2021-03-20 04:11:48 Z   42 days   56 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 01 08:58:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 08:58:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120882.228580 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lclRm-0004Km-0B; Sat, 01 May 2021 08:57:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120882.228580; Sat, 01 May 2021 08:57:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lclRl-0004Kf-TP; Sat, 01 May 2021 08:57:57 +0000
Received: by outflank-mailman (input) for mailman id 120882;
 Sat, 01 May 2021 08:57:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4H9e=J4=yahoo.com=hack3rcon@srs-us1.protection.inumbo.net>)
 id 1lclRk-0004Ka-Je
 for xen-devel@lists.xenproject.org; Sat, 01 May 2021 08:57:56 +0000
Received: from sonic310-13.consmr.mail.bf2.yahoo.com (unknown [74.6.135.123])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebff1520-ce0c-4805-954e-ff38479c27ea;
 Sat, 01 May 2021 08:57:55 +0000 (UTC)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic310.consmr.mail.bf2.yahoo.com with HTTP; Sat, 1 May 2021 08:57:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebff1520-ce0c-4805-954e-ff38479c27ea
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1619859475; bh=hB8/+OYW3GYzeXUAIIfvp3g6P8jXaVyUB1WVuSQVAg8=; h=Date:From:To:Subject:References:From:Subject:Reply-To; b=odV5r0Vc8/aYAdzYF1K0r33VRcbSW+VpE1WuuQ0Wn1WUD3p5FWqAQQvUqJFFuKh/cjAoar+Uh562CyxsaEsasyyZrLWKHFliqUxPCoqBW2ffdyvTuqKFSZOZhox9CAUgQrxiQmL+wOdzC/VKzR54nr33kgRRmsXMQ1hRHaS7qStQohcY+b2H1/NC7fVhasNMLIJaoSUZehczT3pwA7oXov4llNyjlQTR2IVyLLNYqUEmfk+H7jAokRCrx/6zQ3Pa6piB0pQtS4mTEWFzmo1Xw/GO7QWNz6ed5VadKJfPqEJ11mQAtGNL2GI34lH3RBy+PKNRu9+H+5XPgT+BYnAfgQ==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1619859475; bh=ycSGad6+UX6o5FMG535XVOdcTVf9iraRX2+6dPzXWUi=; h=X-Sonic-MF:Date:From:To:Subject:From:Subject; b=dSK7NlF0RTqtrYePXU5KxG4XfSGhWbwbAl2JBeVn1lTN0IVu/xwMJLA+JleVf34GuhriZqls3/QMWTEkATWfbVJw1gjbsJO6C0dfPYdidy4PwY2yqAk8zt6yl6G9NupyAe0gcVJbHyKrZa1J0pV/xNwQhjct9bAPLjfXiwpH9AhnbaZz3h0u8cBEzp7IV1gK6j03AdwvgZEkBUp15HHbodykSv0oIISMEsw+19+aSwiRK+fbc4xfsz8fdmZAAqKjcqUH5CLrI8CULwXMe9AKDr2srjQnrBiSOGO/99u8XMKt5WFHRtXBMA7Ie4Ec0TnqxLUACuDWngJ8sb95ewnJ6A==
X-Sonic-MF: <hack3rcon@yahoo.com>
Date: Sat, 1 May 2021 08:57:53 +0000 (UTC)
From: Jason Long <hack3rcon@yahoo.com>
To: Xen-users <xen-users@lists.xenproject.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <314217522.1538685.1619859473008@mail.yahoo.com>
Subject: Xen and Microservices.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 7bit
References: <314217522.1538685.1619859473008.ref@mail.yahoo.com>
X-Mailer: WebService/1.1.18138 YMailNorrin Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36
Content-Length: 92

Hello,
Why do microservices use containers like Docker rather than hypervisors like Xen?

Thanks.


From xen-devel-bounces@lists.xenproject.org Sat May 01 11:31:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 11:31:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120915.228622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lcnpn-0001Bz-B7; Sat, 01 May 2021 11:30:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120915.228622; Sat, 01 May 2021 11:30:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lcnpn-0001Bs-7c; Sat, 01 May 2021 11:30:55 +0000
Received: by outflank-mailman (input) for mailman id 120915;
 Sat, 01 May 2021 11:30:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcnpm-0001Bk-Q1; Sat, 01 May 2021 11:30:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcnpm-0003wk-Jr; Sat, 01 May 2021 11:30:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcnpm-0007HR-9K; Sat, 01 May 2021 11:30:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lcnpm-0003Q8-8m; Sat, 01 May 2021 11:30:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ntnRtCfyha6DWcabJYcS5FyxkPQmoKHkiqGAmfloOQs=; b=ywczd5I5FAwC0s1f/972w1UWxw
	MHUufvZU60h+RILzt3R94baQg8EbrsXJi6u5KPxWum/KZKPksK06a5tvvXJSVeIBhp/Vd0FIs602L
	JoKLVa6IN2MqU3rQmzNcn5Ah9tA6U9hu7u53II7tLjaQRqimQTzljO3DsqqrjkylH5L8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161562-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161562: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-start/freebsd.repeat:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=65ec0a7d24913b146cd1500d759b8c340319d55e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 May 2021 11:30:54 +0000

flight 161562 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161562/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd11-amd64 21 guest-start/freebsd.repeat fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                65ec0a7d24913b146cd1500d759b8c340319d55e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  273 days
Failing since        152366  2020-08-01 20:49:34 Z  272 days  456 attempts
Testing same since   161562  2021-04-30 21:12:31 Z    0 days    1 attempts

------------------------------------------------------------
5885 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1578458 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 01 17:05:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 17:05:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120965.228661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lct3V-0004Jn-VP; Sat, 01 May 2021 17:05:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120965.228661; Sat, 01 May 2021 17:05:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lct3V-0004Jg-S8; Sat, 01 May 2021 17:05:25 +0000
Received: by outflank-mailman (input) for mailman id 120965;
 Sat, 01 May 2021 17:05:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lct3U-0004JY-GG; Sat, 01 May 2021 17:05:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lct3U-0001Qa-89; Sat, 01 May 2021 17:05:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lct3T-00080Z-Uw; Sat, 01 May 2021 17:05:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lct3T-0002SR-UO; Sat, 01 May 2021 17:05:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eWonvZ47jUNkuLHmdm7k3vnuZKar2HF7V44B59HgBr4=; b=WDGzq93MgH0B8gucIjPboFqrS6
	cqfmbxXtBDID/fThT8gZnG4rgrjJjro9IMegIDr7cq50UPtkXB8ib97hU3q8DMDy+mwYJp6+8bSXo
	D6w07SjLiKfm1s4fU/1QO/yjVVFUhjGqRdTed3brzdGCWp3aza/sdx96DVDPb02uZNzE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161567-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161567: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
X-Osstest-Versions-That:
    xen=b8e53a81ba538849b98b0d417436f8be653fa1ff
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 May 2021 17:05:23 +0000

flight 161567 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161567/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161551
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161551
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161551
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161551
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161551
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161551
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161551
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161551
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161551
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161551
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161551
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7
baseline version:
 xen                  b8e53a81ba538849b98b0d417436f8be653fa1ff

Last test of basis   161551  2021-04-30 11:49:59 Z    1 days
Testing same since   161567  2021-05-01 01:38:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b8e53a81ba..1f8ee4cb43  1f8ee4cb430e5a9da37096574c41632cf69a0bc7 -> master


From xen-devel-bounces@lists.xenproject.org Sat May 01 17:35:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 17:35:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.120979.228688 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lctWl-0006xE-G0; Sat, 01 May 2021 17:35:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 120979.228688; Sat, 01 May 2021 17:35:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lctWl-0006x7-C9; Sat, 01 May 2021 17:35:39 +0000
Received: by outflank-mailman (input) for mailman id 120979;
 Sat, 01 May 2021 17:35:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=glZs=J4=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lctWj-0006x1-OR
 for xen-devel@lists.xenproject.org; Sat, 01 May 2021 17:35:37 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e5452c6a-0fc6-4cc3-ab1e-2d1692657f60;
 Sat, 01 May 2021 17:35:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e5452c6a-0fc6-4cc3-ab1e-2d1692657f60
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44386249
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,266,1613451600"; 
   d="scan'208";a="44386249"
To: Andy Lutomirski <luto@kernel.org>, X86 ML <x86@kernel.org>, LKML
	<linux-kernel@vger.kernel.org>, David Kaplan <David.Kaplan@amd.com>, David
 Woodhouse <dwmw2@infradead.org>, Josh Poimboeuf <jpoimboe@redhat.com>, Kees
 Cook <keescook@chromium.org>, Jann Horn <jannh@google.com>, xen-devel
	<xen-devel@lists.xenproject.org>
References: <CALCETrXRvhqw0fibE6qom3sDJ+nOa_aEJQeuAjPofh=8h1Cujg@mail.gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Do we need to do anything about "dead µops?"
Message-ID: <9d70fd98-ca47-47af-b5b1-064435ba77f1@citrix.com>
Date: Sat, 1 May 2021 18:35:23 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <CALCETrXRvhqw0fibE6qom3sDJ+nOa_aEJQeuAjPofh=8h1Cujg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0293.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6d35f3e5-1ed3-481d-be04-08d90cc78433
X-MS-TrafficTypeDiagnostic: BYAPR03MB4421:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4421FD466441288E23B070FFBA5D9@BYAPR03MB4421.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 6d35f3e5-1ed3-481d-be04-08d90cc78433
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 01 May 2021 17:35:30.7293
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4421
X-OriginatorOrg: citrix.com

On 01/05/2021 17:26, Andy Lutomirski wrote:
> Hi all-
>
> The "I See Dead µops" paper that is all over the Internet right now is
> interesting, and I think we should discuss the extent to which we
> should do anything about it.  I think there are two separate issues:
>
> First, should we (try to) flush the µop cache across privilege
> boundaries?  I suspect we could find ways to do this, but I don't
> really see the point.  A sufficiently capable attacker (i.e. one who
> can execute their own code in the dangerous speculative window or one
> who can find a capable enough string of gadgets) can put secrets into
> the TLB, various cache levels, etc.  The µop cache is a nice piece of
> analysis, but I don't think it's qualitatively different from anything
> else that we don't flush.  Am I wrong?
>
> Second, the authors claim that their attack works across LFENCE.  I
> think that this is what's happening:
>
> load secret into %rax
> lfence
> call/jmp *%rax
>
> As I understand it, on CPUs from all major vendors, the call/jmp will
> gladly fetch before lfence retires, but the address from which it
> fetches will come from the branch predictors, not from the
> speculatively computed value in %rax.

The vendor-provided literature on pipelines (primarily, the optimisation
guides) has the register file down by the execute units, and not near
the frontend.  Letting the frontend have access to the register value is
distinctly non-trivial.

> So this is nothing new.  If the
> kernel leaks a secret into the branch predictors, we have already
> mostly lost, although we have a degree of protection from the various
> flushes we do.  In other words, if we do:
>
> load secret into %rax
> <-- non-speculative control flow actually gets here
> lfence
> call/jmp *%rax
>
> then we will train our secret into the predictors but also leak it
> into icache, TLB, etc, and we already lose.  We shouldn't be doing
> this in a way that matters.  But, if we do:
>
> mispredicted branch
> load secret into %rax
> <-- this won't retire because the branch was mispredicted
> lfence
> <-- here we're fetching but not dispatching
> call/jmp *%rax
>
> then the leak does not actually occur unless we already did the
> problematic scenario above.
>
> So my tentative analysis is that no action on Linux's part is required.
>
> What do you all think?

Everything here seems to boil down to managing to encode the secret in
the branch predictor state, then managing to recover it via the uop
cache sidechannel.

It is well known and generally understood that once your secret is in
the branch predictor, you have already lost (YHAL).  Code with that
property was broken before this paper, is still broken after this paper,
and needs fixing irrespective.

Viewed in these terms, I don't see what security improvement is gained
from trying to flush the uop cache.

I tentatively agree with your conclusion, that no specific action
concerning the uop cache is required.

~Andrew



From xen-devel-bounces@lists.xenproject.org Sat May 01 20:35:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 01 May 2021 20:35:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121008.228712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lcwKc-0005O7-R6; Sat, 01 May 2021 20:35:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121008.228712; Sat, 01 May 2021 20:35:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lcwKc-0005O0-Nx; Sat, 01 May 2021 20:35:18 +0000
Received: by outflank-mailman (input) for mailman id 121008;
 Sat, 01 May 2021 20:35:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcwKc-0005Ns-5t; Sat, 01 May 2021 20:35:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcwKb-00050G-Ul; Sat, 01 May 2021 20:35:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lcwKb-0001io-LT; Sat, 01 May 2021 20:35:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lcwKb-0007bH-Kx; Sat, 01 May 2021 20:35:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161571-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161571: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8f860d2633baf9c2b6261f703f86e394c6bc22ca
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 01 May 2021 20:35:17 +0000

flight 161571 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161571/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                8f860d2633baf9c2b6261f703f86e394c6bc22ca
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  254 days
Failing since        152659  2020-08-21 14:07:39 Z  253 days  464 attempts
Testing same since   161571  2021-05-01 05:13:23 Z    0 days    1 attempts

------------------------------------------------------------
478 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 145075 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 02 00:23:23 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161576-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161576: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:regression
    xen-4.12-testing:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:heisenbug
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5b280a59c4dd8dad6cc8da28db981b193d10acee
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 00:23:03 +0000

flight 161576 xen-4.12-testing real [real]
flight 161590 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161576/
http://logs.test-lab.xenproject.org/osstest/logs/161590/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2 19 guest-localmigrate/x10 fail in 161560 REGR. vs. 159418

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-xsm       8 xen-boot                   fail pass in 161560
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2        fail pass in 161560

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm     15 migrate-support-check fail in 161560 never pass
 test-arm64-arm64-xl-xsm 16 saverestore-support-check fail in 161560 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159418
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159418
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  5b280a59c4dd8dad6cc8da28db981b193d10acee
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   74 days
Failing since        160128  2021-03-18 14:36:18 Z   44 days   59 attempts
Testing same since   160150  2021-03-20 04:11:48 Z   42 days   57 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 02 03:19:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 03:19:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121040.228766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld2df-0005aZ-S0; Sun, 02 May 2021 03:19:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121040.228766; Sun, 02 May 2021 03:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld2df-0005aQ-K5; Sun, 02 May 2021 03:19:23 +0000
Received: by outflank-mailman (input) for mailman id 121040;
 Sun, 02 May 2021 03:19:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld2de-0005aI-FP; Sun, 02 May 2021 03:19:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld2de-0003Vi-2W; Sun, 02 May 2021 03:19:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld2dd-0007rK-Ni; Sun, 02 May 2021 03:19:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ld2dd-0007pF-NA; Sun, 02 May 2021 03:19:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7KnVPjqybMiNkM2jtR0vA8dRvPu4iw+xwB+Yg4VRsQk=; b=36wXMolOme67c1pHW16U0Ex6wr
	XKLMMAs/SxcWKgewouryNW3GuilTtgzSmGblIoy8JKC4AyR/INlC628Yvz3xtuKhdhYxAwDYBf6h9
	savcHHKk+z6pJy+p7GiHORSkE+lp2MvcRaDUvMRCAfWs3h8tIcV9lX2pVBVD9CYAKghc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161579-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161579: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9f67672a817ec046f7554a885f0fe0d60e1bf99f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 03:19:21 +0000

flight 161579 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161579/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9f67672a817ec046f7554a885f0fe0d60e1bf99f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  274 days
Failing since        152366  2020-08-01 20:49:34 Z  273 days  457 attempts
Testing same since   161579  2021-05-01 11:33:49 Z    0 days    1 attempts

------------------------------------------------------------
5900 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1585526 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 02 08:20:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 08:20:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121092.228817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld7KJ-0006p7-2g; Sun, 02 May 2021 08:19:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121092.228817; Sun, 02 May 2021 08:19:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld7KI-0006p0-VV; Sun, 02 May 2021 08:19:42 +0000
Received: by outflank-mailman (input) for mailman id 121092;
 Sun, 02 May 2021 08:19:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld7KH-0006os-Q7; Sun, 02 May 2021 08:19:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld7KH-0001KV-IU; Sun, 02 May 2021 08:19:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld7KH-0004uh-78; Sun, 02 May 2021 08:19:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ld7KH-0001eS-6i; Sun, 02 May 2021 08:19:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VpMCSITYtObyJ0b/k+4/Eu8R04UJxxhVKIAnZXo9JUY=; b=DXbGLwDT4gIijj6hqb8jutbek4
	BoyNgxuIQN+btapXpYBpxLr7tNskx4JsUr519XBQy8dnj1GiMWlZ/YV0T+yVqt7fLRscZ6mx6G3gM
	ZG1cdNPzDIqnGXkjNW7i9c66FupAZRcIZyeP/OtKHcxzSlc2HGoRniSxs+rEgejv0nWc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161584-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161584: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
X-Osstest-Versions-That:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 08:19:41 +0000

flight 161584 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161584/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 161567
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 161567

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161567
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161567
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161567
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161567
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161567
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161567
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161567
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161567
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161567
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161567
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161567
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7
baseline version:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7

Last test of basis   161584  2021-05-01 17:08:43 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 02 08:53:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 08:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121101.228832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld7rL-0001vW-Uk; Sun, 02 May 2021 08:53:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121101.228832; Sun, 02 May 2021 08:53:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld7rL-0001vP-Qo; Sun, 02 May 2021 08:53:51 +0000
Received: by outflank-mailman (input) for mailman id 121101;
 Sun, 02 May 2021 08:53:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Sey=J5=yahoo.com=hack3rcon@srs-us1.protection.inumbo.net>)
 id 1ld7rK-0001vK-55
 for xen-devel@lists.xenproject.org; Sun, 02 May 2021 08:53:50 +0000
Received: from sonic309-13.consmr.mail.bf2.yahoo.com (unknown [74.6.129.123])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b436ab04-d29d-4d1f-9ec8-624917360900;
 Sun, 02 May 2021 08:53:49 +0000 (UTC)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic309.consmr.mail.bf2.yahoo.com with HTTP; Sun, 2 May 2021 08:53:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b436ab04-d29d-4d1f-9ec8-624917360900
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1619945629; bh=RX+OVaklCNR9AyCjht1hbNKuOhLHjouMDPaxqOy3LcA=; h=Date:From:To:Cc:In-Reply-To:References:Subject:From:Subject:Reply-To; b=COk/UB3ggsXg/jTTminD+LWGjT42UvR/wtv4kXKQ/nS+jTIhUdDARkIbzZdvH4Vmw+odsqBu6bArqbAGx9M1etZ5P+s0QuFUD9u8Gv1Ypl8iMJrZl3gNWQY4QfULRWBpq1JGCSFkPpGubuk+yfLCQ4Ccyh/fI6ja8uwCvAx9yV6VuDkT5S2g4a0Gjx32ZAC+qhLEM4gjrqg3vczD86X6xzK9WskDtOkzSYsN8tpstXfLinXLSaMbDJX/gO7Bs3I7WghdffJNhyjy05zlCJ0BNX+KBHT12RJp+lnBXz16h7XapH7dE5aILGqoT+qilALIC9clybCBCLqsvCiyAJZgsg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1619945629; bh=4WEV62igjjlJZsTqsO/TS+ixU0UhcWXpOJNNs+pEGBG=; h=X-Sonic-MF:Date:From:To:Subject:From:Subject; b=oClQsep9d8FtJN2rwyR2hc3hjHbVQcn+aqtPbQ3eXl1JMwZtSpj0A4sJEX9pdrN6fo0rBd8Iu8IYI497SPTEyMmSvPy70EQ4MKWPMoM+5AasNdYwZgN1JsBmk5zZZ6qc891CjVmlgPxNvc6O5vFCOpLmGoulE4OZteDk+rNiMXnP0V2d19qdsHhmmEic/u5AKvQUUnRgAShLKSkyuhP3UqDKuYZQAfi+zMZLTvqhx4qei/RewTcLo2NICFPDiWX2IvNpYadsKPV/Ab+l7n+uiBiIPGlJIPoL7pgGl8mDYBeqer8DkWyLeooonGP+PRjz00gTDdK5ZEXknP8eX5bURA==
X-YMail-OSG: 6.Zd6qsVM1kQp8UhrSH_m.As0iGvQhpCKgBUKDuM5bHzRVquXFMr9tMSpeS9rOP
 H9R3AXLAzRQ82yRlfvFTia63hLMcZzXnoMmg_T_JP8r2ej9WitswpoBiIRSQS4Rcli15ft6l8wIr
 1Paj3w3ZOaDA9xNAv7GNFPhF8QVypBlxaj2F6GHfuYIT2.EKK0gkLNezUNB0D10JxBtMyflsZHv1
 PRhUrJwXT3c66HbBgKje21ZnXpDZnSO741QMPfK__ArHNtJECADdUcfZFsXqnu81lyI3CIjIUn9W
 S7Woc0vNIt.s3OqSL3py_kFY3zYY72p0Rwsiz6CDkid6MS4prXBSkWqgqMW4S9WRR05emEtn_oZ5
 J6jJVWc0WD06SRIN64r7F_Yhmv33IEhOrOH44UROT3gzLBzpspuliXm326jOpzvUP3IYmr2_mpaD
 yNf_XuLpTV3a_qcZ7n8RQJACfR7n0oryIpsQiOBaQZI799z07gCQQJ3X9UrZ6x6A6SeEufqgDJGp
 985OzaQ_0uWfVy.yOTIpQqQF7ygu3GwcEwEn5eUuTTHr2CLxRKopynhkjhlUCkHA0a6jMTQZFQP4
 PU54UcnKXDXAdAr2xc3djrx9x4c3LdjHHrfmt14jU9NjJ5c0SiamAOqpmewz_UyZY8ZbbNKrUC9Y
 _xVZ3RgWBSEQNspU8RX6NIgB3QEwMRx9PfQgMB.heQiFzto4YLheLYCdiI_ZA0k56LDVRtGCFCyB
 GElNG00zRMTGZC220mnJy_Zw7h7m.cBggOb5FA.QCXvbnROGs9vchRsrNvrvwHe_ZtDG0xnbD7he
 88TyXwOD2lCUlYLTsb7_NBmufbUwXpe6uYr0DFEaDDMwViFrezMcV_TCR7IdiA0kHwAgaDA0d4hN
 qntWgvQNS2dwKYx7TFCHuO5X4ZXWzN29LJxoxzdiwGRWW8e.fkc8ih1.ZEt3M49f7n9zNN7xU.yr
 cbUhRLgqBbV9xKR3544NvzjbqIZL3i5reV41gI0LH0R0pewEGf_R4r2hWg7Jqr6ZBneQ8A0sOmik
 YBWQKncvb6DNCN.YqLM2_p.ckN_tdV0XleaZ1wefA9JasVVqHk4l33vX7thr1ChqkQtOc.z4Ygs.
 rMzyjNd0ABvbHR8VsEMB7z5xFDMkB_ei_b70lSXdqazt0.TMgrRv9jMeoQo3h2598B3_u5FNe9xF
 N_nVDTyAGpeVoSV7t0ymV7D2NnSUxcL8yp.MBqdNgC9CgpN_5v0y884y2zWILg5DQ6qh4lhWX_DP
 fh2Lt8SHSkz11mdKDd7iymKJEsI8nbf_Rfvw8UncuRuwfFFfEJz5PGvf1FAVhhZV4J4Md6FBBJuY
 YKcNl8O9Vwh3PyCUVHSt48I00IVh_5lv5AtTFLgIsF5h5JIisW0EkSSrsJNCV_cO.h8eLVM_L.Ws
 NQ3iC13OYvpKLKjVF7ienciwqzdDwBwukrQK7hC8x2NFfTGrOhboDkfp8M82bwbb5vHqxtcJVl7W
 gtmKsE50mASpvXSXq87wtpK0aFZzGY.MHXLhle4Xd4pFvKZd9AwZ1ulxvYuErWNMbNojEVbfRyBS
 WAusmqIstQ747pNveuC9_F282XeQcr_8mH4A_RnzEW5aUXo3JKVE7wFhsT.E_fx1C33h9gsGw17n
 ygGllPbUYBJlc0BklpXB1yi7zK_ohwaRUKGaT0IHchcaF8jU4QrGgFGKeFpDvX60_bk9OhtUEFwa
 2vXmhQrfxbCtEt6j4XNOW0WSM.48yPXOR5ydgz3DKIcC0fQZOP60dcxh7C36N_Nzti_WNChaE_dg
 751B1kUS5A5COm1_Qz2dU6f2bWHGFWKGuM.ZFFQYOWnH2GaaPRsgBBC3kkOf0vKhlP4t10MIVJfu
 Ut2SPfG_q2uTbvogUDo4snVXtsHzBZsZeqDY_drr_Y32xrD9PlDhgfpRe0WDJI6LjRHbYjdtZ02.
 T5WSnQGiiq0kpnVf9yUGvbiEMCYixG93Ng3KLN6ENJLAm.ezccsWaz.1rHeJR.r0sMxhftOL1KBz
 xu.FMRv_Qw9M8iVM8Zp98RL.mrQByLHsdnWONfqjWuaLVhr12SCGhrh2FJNy20RP6qDH_cSbdaB1
 4I1E0LDqtPSyXxaG86JkuQcfuImsHsUahqdCTfrrMgcjYNXygjoh4LpfgQk2NhrELFI7YjhlESmk
 1SaPHoSrVw9D1Zk9rskGp8X6kxBptTN6C17nMB2Ut0MQEIq4qup8hfAXaa61sCvR13BnDzEJnIQF
 K7KC6zKBo65UtDe7nGbkF2impRiKa5qYhgtjziZ3OwTXUkJaDPD0Cdsj3xVc6NNx2IBkufoBZpYU
 86uINpG5HfQPvumewDaOQXbKGO7r8j60tAPma0mnONrUHTSU5FKnaEMuiNRvtCvf.VupW0U9S8q6
 DKW.W4ug_YgoBxvhX5XMNm.YKp2xHBl39PzjNrQdbYgazcDpDNftw1Rd3y3_s9S69Ctmc661uuca
 tdPkNj7on2b9nP.VRLSbX6xQvcDpTEkECH6TpbPh8YNSsHphPDjc-
X-Sonic-MF: <hack3rcon@yahoo.com>
Date: Sun, 2 May 2021 08:53:40 +0000 (UTC)
From: Jason Long <hack3rcon@yahoo.com>
To: TMC <tmciolek@gmail.com>
Cc: Xen-users <xen-users@lists.xenproject.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <795375038.1654154.1619945620880@mail.yahoo.com>
In-Reply-To: <CAA3FNtPpz=4dwymk3+YeB+ZCOYYo9TirFqdjrf+qgSL39mBWYw@mail.gmail.com>
References: <314217522.1538685.1619859473008.ref@mail.yahoo.com> <314217522.1538685.1619859473008@mail.yahoo.com> <CAA3FNtPpz=4dwymk3+YeB+ZCOYYo9TirFqdjrf+qgSL39mBWYw@mail.gmail.com>
Subject: Re: Xen and Microservices.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Mailer: WebService/1.1.18138 YMailNorrin Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36
Content-Length: 766

Thank you.
How about unikernels?






On Sunday, May 2, 2021, 11:46:01 AM GMT+4:30, TMC <tmciolek@gmail.com> wrote:


Jason,
Containers, like Docker and Kubernetes, are designed to let you sandbox/isolate one application or one service... without having to also host an operating system for each container.

Hypervisors like Xen are designed for operating systems, not single applications.

Hope this helps

Tomasz

On Sat, 1 May 2021 at 18:58, Jason Long <hack3rcon@yahoo.com> wrote:
> Hello,
> Why do microservices use containers like Docker and not hypervisors like Xen?
>
> Thanks.
>

--
--
GPG key fingerprint: 07DF B95B DB58 57B6 9656  682E 830A D092 288E F017
GPG public key available on pgp(dot)net key server



From xen-devel-bounces@lists.xenproject.org Sun May 02 09:31:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 09:31:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121059.228861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld8Rn-0005Xl-1I; Sun, 02 May 2021 09:31:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121059.228861; Sun, 02 May 2021 09:31:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld8Rm-0005Xe-Uj; Sun, 02 May 2021 09:31:30 +0000
Received: by outflank-mailman (input) for mailman id 121059;
 Sun, 02 May 2021 07:15:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oy9B=J5=gmail.com=tmciolek@srs-us1.protection.inumbo.net>)
 id 1ld6KX-00011X-5U
 for xen-devel@lists.xenproject.org; Sun, 02 May 2021 07:15:53 +0000
Received: from mail-lj1-x22c.google.com (unknown [2a00:1450:4864:20::22c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 34a1d84c-2597-433e-ac79-42fc577a944b;
 Sun, 02 May 2021 07:15:47 +0000 (UTC)
Received: by mail-lj1-x22c.google.com with SMTP id s25so3166718lji.0;
 Sun, 02 May 2021 00:15:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34a1d84c-2597-433e-ac79-42fc577a944b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=l8Xd1rI7INI4Zl20Bxxg5yWd6MCp6tK9E+DPZBYLVx8=;
        b=M4EA14G6ytSjlE06pJZIEvKyyfkAAcmJac8oS8Fc0vHiSFkxrwZ21aUeN8JeT8tTuI
         W1SwC5RhNkRsxhrUQZxvSMrpeThXE7TNZ5kfrHNMZ+kai7llJ4L2rCFFo6cH5gF/xseX
         a1Z6x/XG5Q4/znDea/eRNwwVfSZTe2xKHJF/vrPRnZXRev+O1EMLdA99CAUaMLg1BCGC
         metvEn30d2+2LkTF1n8SMRYqdRK5NlRjRW5QxcAu5TosxUdlF0Caafwktla0BcsVCrgP
         lWNGRk1aYqH7lmfbIBFbNBjqGmomJnv1XuXsVM3t5rv1rsToJmlo42mBwtUWy4SkAEYZ
         oG4w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=l8Xd1rI7INI4Zl20Bxxg5yWd6MCp6tK9E+DPZBYLVx8=;
        b=t/6mquqdszYc4F3mQOY3YovMykTgphudLpdHq04trzi8eWg36NdH0LPDiYBuKnv6bo
         4hgE1YDpA2csW1Lcl7mFsNIHdfEkLCcco+iZuuIPVb2t6fr/2jPE/NHICqft+b7M1cFw
         94W4S7dWwmjlSrccTgudxIcbSDM4d4O9Z4t+eZ597s52HlboQJgD3LWqu5i//EdYEgbm
         kGFmPGEKmyXcqTpME/7NAOcM2xjI3ostQkxOODddIg9upNo+9SgyK94eiJrFPyeEUnc9
         wcnqT1KLD7kLcs9B2P3fA0PVPGhgqDO1QF5CJTJDfQL0ouX1C5IQpOtTI1UGACOz3BLo
         tYxA==
X-Gm-Message-State: AOAM531M/KE9VzINk1QstNHTnQweza3l0zsjk1AjMjPiphDIBCwF8loI
	nRzWy03j8TQ6KPNvfpoBRwXKqSzRvODvF3SCcQ==
X-Google-Smtp-Source: ABdhPJyuhEima/k4MbFlCz3r+O1TsoKsFw1udmdI9QcShcidHatnIHk/kvCvuu+ZvYA3MJ59Rb1g9JceByN+0QzmRaI=
X-Received: by 2002:a2e:83cb:: with SMTP id s11mr9201433ljh.462.1619939745849;
 Sun, 02 May 2021 00:15:45 -0700 (PDT)
MIME-Version: 1.0
References: <314217522.1538685.1619859473008.ref@mail.yahoo.com> <314217522.1538685.1619859473008@mail.yahoo.com>
In-Reply-To: <314217522.1538685.1619859473008@mail.yahoo.com>
From: TMC <tmciolek@gmail.com>
Date: Sun, 2 May 2021 17:15:34 +1000
Message-ID: <CAA3FNtPpz=4dwymk3+YeB+ZCOYYo9TirFqdjrf+qgSL39mBWYw@mail.gmail.com>
Subject: Re: Xen and Microservices.
To: Jason Long <hack3rcon@yahoo.com>
Cc: Xen-users <xen-users@lists.xenproject.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: multipart/alternative; boundary="0000000000001be59505c1539d79"

--0000000000001be59505c1539d79
Content-Type: text/plain; charset="UTF-8"

Jason,
Containers, like Docker and Kubernetes, are designed to let you
sandbox/isolate one application or one service... without having to also
host an operating system for each container.

Hypervisors like Xen are designed for operating systems, not single
applications.

Hope this helps

Tomasz
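
The contrast above can be sketched with a minimal Xen guest configuration: a Xen domain carries a complete OS, including its own kernel and disk image, whereas a container started with, say, `docker run` borrows the host's kernel. The names and paths below are purely illustrative, not taken from any real deployment:

```
# guest.cfg -- hypothetical minimal xl domain configuration
name   = "demo-guest"                  # domain name (illustrative)
type   = "pvh"                         # PVH guest
memory = 1024                          # MiB given to the guest OS
vcpus  = 2
kernel = "/var/lib/xen/vmlinuz-guest"  # the guest boots its OWN kernel
disk   = ["file:/var/lib/xen/demo.img,xvda,w"]
# started with: xl create guest.cfg
```

Note the kernel and disk lines: everything a container shares with the host, a Xen guest must provide for itself.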

On Sat, 1 May 2021 at 18:58, Jason Long <hack3rcon@yahoo.com> wrote:

> Hello,
> Why do microservices use containers like Docker and not hypervisors like Xen?
>
> Thanks.
>
>

-- 
--
GPG key fingerprint: 07DF B95B DB58 57B6 9656  682E 830A D092 288E F017
GPG public key available on pgp(dot)net key server

--0000000000001be59505c1539d79--


From xen-devel-bounces@lists.xenproject.org Sun May 02 09:54:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 09:54:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121144.228879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld8nl-0007LR-VW; Sun, 02 May 2021 09:54:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121144.228879; Sun, 02 May 2021 09:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld8nl-0007LK-Sb; Sun, 02 May 2021 09:54:13 +0000
Received: by outflank-mailman (input) for mailman id 121144;
 Sun, 02 May 2021 09:54:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld8nk-0007LC-U0; Sun, 02 May 2021 09:54:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld8nk-0002pQ-NT; Sun, 02 May 2021 09:54:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld8nk-0008QO-FZ; Sun, 02 May 2021 09:54:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ld8nk-00022l-F5; Sun, 02 May 2021 09:54:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=thQWkO41urZz2et5aFIpD56cgyKJDubR6WJ4z13NHoc=; b=0O84JwijIwR2B0S+CItSmY9qHU
	gjt+49fyYU5icFbSkNIwz1nq0MuYABvfqxfewGnVtlpjUD0ZMddHt/Fof7Q9VPYzXXYylDnW6/bgf
	yxZDbmLZ+5IKoACkSp0JXISLckRccDAmzQP6WxW0nlyGjpIW+pSC9xLMxjhw54RIKcsg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161601-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 161601: all pass - PUSHED
X-Osstest-Versions-This:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
X-Osstest-Versions-That:
    xen=972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 09:54:12 +0000

flight 161601 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161601/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7
baseline version:
 xen                  972ba1d1d4bcb77018b50fd2bb63c0e628859ed3

Last test of basis   161502  2021-04-28 09:18:31 Z    4 days
Testing same since   161601  2021-05-02 09:18:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   972ba1d1d4..1f8ee4cb43  1f8ee4cb430e5a9da37096574c41632cf69a0bc7 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 02 10:54:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 10:54:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121164.228895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld9kD-00041v-Kz; Sun, 02 May 2021 10:54:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121164.228895; Sun, 02 May 2021 10:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ld9kD-00041o-Hs; Sun, 02 May 2021 10:54:37 +0000
Received: by outflank-mailman (input) for mailman id 121164;
 Sun, 02 May 2021 10:54:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld9kC-00041g-FI; Sun, 02 May 2021 10:54:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld9kC-0003qN-6O; Sun, 02 May 2021 10:54:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ld9kB-0003Bx-TE; Sun, 02 May 2021 10:54:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ld9kB-0001SL-ST; Sun, 02 May 2021 10:54:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mDhdPCeK3HwHUaILULM8JZ+vuZK4PcFY/9saSqhfPY4=; b=n0WRBl94Ru17behC2EaTPfMmON
	SEAIgzj8xFlj2liOcPWHB9AZzDc7v+70RPomh3LiHVI8x+ePKbox4GoTUh01qfJBFSvj8S3CBJHqn
	zTGsxvZE99TlqBXKKMFyWLYkWhdWLFJmehrJZre2zPA/8KRoT8f3XEvhr8Qy/kCzC5WI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161596-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161596: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=ec2e3336b8c8df572600043976e1ab5feead656e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 10:54:35 +0000

flight 161596 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161596/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              ec2e3336b8c8df572600043976e1ab5feead656e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  296 days
Failing since        151818  2020-07-11 04:18:52 Z  295 days  288 attempts
Testing same since   161516  2021-04-29 04:18:53 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 55101 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 02 11:49:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 11:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121201.228934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldAb5-0000EX-6E; Sun, 02 May 2021 11:49:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121201.228934; Sun, 02 May 2021 11:49:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldAb5-0000EQ-2O; Sun, 02 May 2021 11:49:15 +0000
Received: by outflank-mailman (input) for mailman id 121201;
 Sun, 02 May 2021 11:49:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldAb4-0000EI-Kl; Sun, 02 May 2021 11:49:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldAb4-0004iL-FC; Sun, 02 May 2021 11:49:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldAb4-0007TP-55; Sun, 02 May 2021 11:49:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldAb4-0002xw-4W; Sun, 02 May 2021 11:49:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=J/OECCQiwTuC5FeySSBdlBocJjb/A2TAGhuQXj8tZJ8=; b=wBcrfCFH3YVZt3BAGJ4tchT3Pz
	o8qPAnqZkknKQc9wx0WHEnn60dHMiSdRIqMeZmrMeYyFIF/7LolYU7evH0kSrdEcpfpzrYCZpYhRj
	vKnpjFpneVjqi9HbO9tMruqJ5nIMSVbc6VLEoxO5JWkBtmaaSm5+awMXvDxa/9hnWMtw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161587-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161587: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8f860d2633baf9c2b6261f703f86e394c6bc22ca
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 11:49:14 +0000

flight 161587 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161587/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                8f860d2633baf9c2b6261f703f86e394c6bc22ca
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  255 days
Failing since        152659  2020-08-21 14:07:39 Z  253 days  465 attempts
Testing same since   161571  2021-05-01 05:13:23 Z    1 days    2 attempts

------------------------------------------------------------
478 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 145075 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 02 12:36:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 12:36:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121182.228949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldBK6-0004Z5-5E; Sun, 02 May 2021 12:35:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121182.228949; Sun, 02 May 2021 12:35:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldBK6-0004Yy-2E; Sun, 02 May 2021 12:35:46 +0000
Received: by outflank-mailman (input) for mailman id 121182;
 Sun, 02 May 2021 11:44:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=mFLA=J5=nethence.com=pbraun@srs-us1.protection.inumbo.net>)
 id 1ldAWj-0008NQ-3q
 for xen-devel@lists.xenproject.org; Sun, 02 May 2021 11:44:45 +0000
Received: from xc.os3.su (unknown [62.210.110.7])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f9469670-8036-4a43-bd18-5582919530e8;
 Sun, 02 May 2021 11:44:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f9469670-8036-4a43-bd18-5582919530e8
Subject: Re: Xen and Microservices.
DKIM-Signature: v=1; a=rsa-sha1; c=simple/simple; d=nethence.com; s=sep2020;
	t=1619955886; bh=+uhelc9w5dL/b8f82bHKLBG2we8=;
	h=Subject:To:Cc:References:From:Date:In-Reply-To;
	b=kDD6iFuJXxZJVtcJmj0L1SRBZs8EPNhdyaMCmUQv5cpqcFDBLFQT9tf+Hu96WlD5e
	 BtxRX5ydIgibhCSqLEnfauO6/ewaWmJv7QKizgTR9mcmpVJSf1ZbCUbDLTcQ8ihO3m
	 9LxPb4n2MHjyQZfMFP3MASQNc9bzrCvFhWaQ74yA=
To: Jason Long <hack3rcon@yahoo.com>, TMC <tmciolek@gmail.com>
Cc: Xen-users <xen-users@lists.xenproject.org>,
 Xen-devel <xen-devel@lists.xenproject.org>
References: <314217522.1538685.1619859473008.ref@mail.yahoo.com>
 <314217522.1538685.1619859473008@mail.yahoo.com>
 <CAA3FNtPpz=4dwymk3+YeB+ZCOYYo9TirFqdjrf+qgSL39mBWYw@mail.gmail.com>
 <795375038.1654154.1619945620880@mail.yahoo.com>
From: Pierre-Philipp Braun <pbraun@nethence.com>
Message-ID: <88f93c4d-5e77-20db-e907-65d2e0fc7d25@nethence.com>
Date: Sun, 2 May 2021 14:42:31 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
In-Reply-To: <795375038.1654154.1619945620880@mail.yahoo.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 02/05/2021 11:53, Jason Long wrote:
> Thank you.
> How about Unikernel?

A unikernel is - in short - an attempt to isolate an application even further 
by linking it and its libraries together into a single bootable binary. 
However, it is fairly hard to set up and prepare, and therefore has not gained 
massive adoption.
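The workflow above can be sketched with MirageOS, one of the Xen unikernel
frameworks mentioned below (commands illustrative; they assume opam and the
`mirage` frontend are installed):

```shell
# Illustrative MirageOS build flow for a Xen target.
opam install mirage      # install the mirage build frontend via opam
mirage configure -t xen  # generate a Xen-target build plan from config.ml
make depend              # fetch the target-specific library dependencies
make                     # link the app and its libraries into one kernel image
```

The result is a single image bootable directly as a Xen guest, with no
general-purpose OS underneath.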

I don't know whether a Xen unikernel construct such as MirageOS or a Rump 
kernel can provide even better performance than OS-level virtualization, but I 
suppose that was part of the goal.

-- 
Pierre-Philipp Braun
SMTP Health Campaign: enforce STARTTLS and verify MX certificates
<https://nethence.com/smtp/>


From xen-devel-bounces@lists.xenproject.org Sun May 02 15:57:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 15:57:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121281.228972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldETM-0004Sd-IR; Sun, 02 May 2021 15:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121281.228972; Sun, 02 May 2021 15:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldETM-0004SW-FA; Sun, 02 May 2021 15:57:32 +0000
Received: by outflank-mailman (input) for mailman id 121281;
 Sun, 02 May 2021 15:57:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldETK-0004SL-Nd; Sun, 02 May 2021 15:57:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldETK-0000MB-GL; Sun, 02 May 2021 15:57:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldETK-0003yb-5T; Sun, 02 May 2021 15:57:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldETK-0004Yv-4w; Sun, 02 May 2021 15:57:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=9SFwN7qlKnOAmJjzZASSqYvmjFUDlu1a0I7Bs2T2zSQ=; b=cpnfSBp73z4s2Xo326N4J9Vz7D
	aarYJj7mQEHNdUuodD+mjIGKGa9qPqRbH8sUJ5r4CLfdEhLlkpHFsO8FVEvAFxG3khhRXsG4RzOy9
	Mjf7scTES9QSWL22dYJBIt/HL5aGp9Usf+6AjJfifxUCLgKejs0uImp+p52O1B2UrwIo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm
Message-Id: <E1ldETK-0004Yv-4w@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 15:57:30 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm
testid debian-hvm-install

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8d17adf34f501ded65a106572740760f0a75577c
  Bug not present: e67d8e2928200e24ecb47c7be3ea8270077f2996
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161603/


  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

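In terms of invocation, the change in the bisected commit means a block device
that was previously opened with the generic "file" driver now has to name the
host_device (or host_cdrom) driver explicitly. A sketch, assuming standard
QEMU -blockdev syntax and a hypothetical /dev/sdb backing device:

```shell
# Previously accepted, now rejected for block/char devices:
#   -blockdev driver=file,node-name=disk0,filename=/dev/sdb
# Required form after the change:
qemu-system-x86_64 \
    -blockdev driver=host_device,node-name=disk0,filename=/dev/sdb \
    -device virtio-blk-pci,drive=disk0
```

Regular files keep using driver=file; only block and character special files
are affected.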

For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install --summary-out=tmp/161606.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm debian-hvm-install
Searching for failure / basis pass:
 161587 fail [host=godello1] / 160125 [host=chardonnay0] 160119 [host=pinot1] 160113 [host=fiano1] 160104 [host=godello0] 160097 [host=fiano0] 160091 [host=elbling1] 160082 [host=chardonnay1] 160079 [host=albana1] 160070 [host=albana0] 160066 [host=godello0] 160002 ok.
Failure / basis pass flights: 161587 / 160002
(tree with no url: minios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0\
 dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-1e6b0394d6c001802dc454ecff19076aaa80f51c git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#6f34661b6c97a37a5efc27d31c037ddeda4547e2-8f860d2633baf9c2b6261f703f86e394c6bc22ca git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-b0d61ec\
 ef66eb05bd7a4eb7ada88ec5dab06dfee git://xenbits.xen.org/xen.git#e4bdcc8aef6707027168ea29caed844a7da67b4d-1f8ee4cb430e5a9da37096574c41632cf69a0bc7
From git://cache:9419/git://git.qemu.org/qemu
   53c5433e84..15106f7dc3  staging    -> origin/staging
Loaded 34834 nodes in revision graph
Searching for test results:
 159947 [host=chardonnay0]
 160002 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 160048 []
 160050 []
 160057 []
 160062 []
 160064 []
 160066 [host=godello0]
 160070 [host=albana0]
 160079 [host=albana1]
 160082 [host=chardonnay1]
 160088 []
 160091 [host=elbling1]
 160097 [host=fiano0]
 160104 [host=godello0]
 160113 [host=fiano1]
 160119 [host=pinot1]
 160125 [host=chardonnay0]
 160134 fail irrelevant
 160147 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161548 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 161550 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161552 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161540 fail irrelevant
 161553 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161555 fail irrelevant
 161558 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161561 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 69259911f948ad2755bd1f2c999dd60ac322c890 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161563 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161564 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81cbfd5088690c53541ffd0d74851c8ab055a829 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161565 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161566 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161568 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e31b3a5c34c6e5be7ef60773e607f189eaa15f3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161554 fail irrelevant
 161569 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e51b27fed31eb7b2a2cb4245806c8c7859207f7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0693602a23276b076a679b1e7ed9125a444336b6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161572 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 161578 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dbcbda2cd846ab70bb25418f246604d0b546505f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161573 fail irrelevant
 161575 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 51204c2f188ec1e2a38f14718d38a3772f850a4b b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161577 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 757acb9a8295e8be4a37b2cfc1cd947e357fd29c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161580 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161581 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 549d039667b92f6ff86fac1948d61ac558026996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161582 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 773b0bc2838ede154c6de9d78401b91fafa91062 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 56b89f455894e4628ad7994fe5dd348145d1a9c5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161583 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 313d86c956d4599054a9dcd524668f67797d317a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e8892db93f3fb6a7221f2d47f3c952a7e489737 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161585 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 146f720c55637410062041f68dc908645cd18aaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161571 fail irrelevant
 161586 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161588 fail irrelevant
 161589 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 516990f4df4f7bf9f86d38af71ead7175df15c19 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161591 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161593 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e67d8e2928200e24ecb47c7be3ea8270077f2996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161595 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161597 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e67d8e2928200e24ecb47c7be3ea8270077f2996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161598 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161602 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e67d8e2928200e24ecb47c7be3ea8270077f2996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161587 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161603 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161605 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 161606 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
Searching for interesting versions
 Result found: flight 160002 (pass), for basis pass
 Result found: flight 161587 (fail), for basis failure (at ancestor ~8)
 Repro found: flight 161605 (pass), for basis pass
 Repro found: flight 161606 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e67d8e2928200e24ecb47c7be3ea8270077f2996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 161593 (pass), for last pass
 Result found: flight 161595 (fail), for first failure
 Repro found: flight 161597 (pass), for last pass
 Repro found: flight 161598 (fail), for first failure
 Repro found: flight 161602 (pass), for last pass
 Repro found: flight 161603 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8d17adf34f501ded65a106572740760f0a75577c
  Bug not present: e67d8e2928200e24ecb47c7be3ea8270077f2996
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161603/


  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
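
  [Editor's note: for context, the bisected commit means QEMU can no longer open a
  block or character device node through the generic "file" driver; the
  "host_device" (or "host_cdrom") driver must be named explicitly. A minimal
  sketch of the -blockdev syntax before and after, with illustrative device
  paths and a hypothetical virtio disk frontend:

```shell
# Before: the generic file driver silently accepted block device nodes.
# This form is now rejected when /dev/sdb1 is a block device:
#   qemu-system-x86_64 -blockdev driver=file,filename=/dev/sdb1,node-name=disk0 ...

# After: name the host_device driver explicitly for block device nodes
qemu-system-x86_64 \
  -blockdev driver=host_device,filename=/dev/sdb1,node-name=disk0 \
  -device virtio-blk-pci,drive=disk0
```

  Regular files on a filesystem continue to use driver=file; only raw device
  nodes are affected, which is why tools that pass guest disks through as
  /dev entries, as in this libvirt/Xen test, tripped over the change.]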

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.665306 to fit
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
161606: tolerable FAIL

flight 161606 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/161606/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail baseline untested


jobs:
 build-amd64-xsm                                              pass    
 build-amd64                                                  pass    
 build-amd64-libvirt                                          pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun May 02 16:03:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 16:03:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121290.229006 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldEYv-0005ug-Rv; Sun, 02 May 2021 16:03:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121290.229006; Sun, 02 May 2021 16:03:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldEYv-0005uZ-OC; Sun, 02 May 2021 16:03:17 +0000
Received: by outflank-mailman (input) for mailman id 121290;
 Sun, 02 May 2021 16:03:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Sey=J5=yahoo.com=hack3rcon@srs-us1.protection.inumbo.net>)
 id 1ldEYu-0005sz-AV
 for xen-devel@lists.xenproject.org; Sun, 02 May 2021 16:03:16 +0000
Received: from sonic312-21.consmr.mail.bf2.yahoo.com (unknown [74.6.128.83])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a55ea75c-e6ad-4d08-a4b8-33dae3d94ff5;
 Sun, 02 May 2021 16:03:10 +0000 (UTC)
Received: from sonic.gate.mail.ne1.yahoo.com by
 sonic312.consmr.mail.bf2.yahoo.com with HTTP; Sun, 2 May 2021 16:03:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a55ea75c-e6ad-4d08-a4b8-33dae3d94ff5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1619971390; bh=3C3MM45M/hjdAiAC849f2IgcIvq1FeC1uzvfLOkpDMM=; h=Date:From:To:Cc:In-Reply-To:References:Subject:From:Subject:Reply-To; b=SHyS5M66kGyU9ZIV7LgptIH4bZGcI+aNH59JQpfjQM5r3mbcfbqNIuHyAGzs17ndx5EK1QTmvo5G17/dFrt8y8kANs8zBI9Ahy/l8Ksv7WRkxKph3nyqK4JUNp9HuUbuVCtQFiBK3UtQqLwnQWEuLVgd0wzX3scASKyeOqBOWPZFdbjNHaldE1ZM0/pAOIXB3PTMpDsB+gO3DJLvk+gHlYAll/vusd1+yWs0D1Br8bRunN4rQcOhveKH+9yKpayBIvvHeuAboH7QNRmOuoQI1ojSU1iV1kRl2AmGdBH9+JUjA3gBi1eGHIdhKmVd2JhQuVByG0JabkCvOVUfjv4Ukg==
X-SONIC-DKIM-SIGN: v=1; a=rsa-sha256; c=relaxed/relaxed; d=yahoo.com; s=s2048; t=1619971390; bh=nzTeyeE7eFIS6zjwhUMRqJpebnhGGeYNAcU0zVkRdJY=; h=X-Sonic-MF:Date:From:To:Subject:From:Subject; b=FYL5T1f0HeheXSKv0ZUpsfijsCxHYmoVURxD3P3MQe6qjICCyxE7RqFVwWBl4wgxP3TM4ZS9YMKjhHGH2v3XDcmABKt1t/og8lVfJe1NKX8MZJO9SMnxJvDSBReNd3ClgVFECijDNgZdfDNnPQVhZr0gH2aw1FsYjvvTNUDrX/bU/IPs40fZItZu/tv66J67bI0hrXvljxWZhxadFMzVpo3gDaUaAFuQ5bwuouR6GPa5CVTJNFPZG5F3kMN9/hc3m2UbmMHMkcb44JqmpwIcIdxWWqIzOO0wo1vG4qx4B6OV/l40yuUEYb/99UUvECQvhrkdZ90MxlpWktXuRuK2iQ==
X-Sonic-MF: <hack3rcon@yahoo.com>
Date: Sun, 2 May 2021 16:03:06 +0000 (UTC)
From: Jason Long <hack3rcon@yahoo.com>
To: TMC <tmciolek@gmail.com>, Pierre-Philipp Braun <pbraun@nethence.com>
Cc: Xen-users <xen-users@lists.xenproject.org>, 
	Xen-devel <xen-devel@lists.xenproject.org>
Message-ID: <1494482817.1685841.1619971386483@mail.yahoo.com>
In-Reply-To: <88f93c4d-5e77-20db-e907-65d2e0fc7d25@nethence.com>
References: <314217522.1538685.1619859473008.ref@mail.yahoo.com> <314217522.1538685.1619859473008@mail.yahoo.com> <CAA3FNtPpz=4dwymk3+YeB+ZCOYYo9TirFqdjrf+qgSL39mBWYw@mail.gmail.com> <795375038.1654154.1619945620880@mail.yahoo.com> <88f93c4d-5e77-20db-e907-65d2e0fc7d25@nethence.com>
Subject: Re: Xen and Microservices.
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable
X-Mailer: WebService/1.1.18138 YMailNorrin Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/90.0.4430.93 Safari/537.36
Content-Length: 817

Thanks.
Thus, Unikernel is something like Microservice? Any success?






On Sunday, May 2, 2021, 04:15:01 PM GMT+4:30, Pierre-Philipp Braun <pbraun@nethence.com> wrote:





On 02/05/2021 11:53, Jason Long wrote:

> Thank you.
> How about Unikernel?


Unikernel is, in short, an attempt to isolate an application and its libraries
even further, by binding them together into a single binary. However, it is
pretty hard to set up and prepare, and therefore did not gain massive
adoption.

I don't know whether a Xen unikernel construct such as MirageOS or Rump can
provide even better performance than OS-level virtualization, but I suppose
that was part of the goal.

-- 
Pierre-Philipp Braun
SMTP Health Campaign: enforce STARTTLS and verify MX certificates
<https://nethence.com/smtp/>



From xen-devel-bounces@lists.xenproject.org Sun May 02 16:31:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 16:31:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121321.229021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldF0N-0000EM-5R; Sun, 02 May 2021 16:31:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121321.229021; Sun, 02 May 2021 16:31:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldF0N-0000EF-28; Sun, 02 May 2021 16:31:39 +0000
Received: by outflank-mailman (input) for mailman id 121321;
 Sun, 02 May 2021 16:31:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldF0M-0000E7-NT; Sun, 02 May 2021 16:31:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldF0M-0001RW-CU; Sun, 02 May 2021 16:31:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldF0M-0005tt-32; Sun, 02 May 2021 16:31:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldF0M-0007g1-2X; Sun, 02 May 2021 16:31:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=l4ISkk/QAqtPRde2y8Y3o75A6/7GQiU8j71wPDtrVrE=; b=s6iP8y9o2CvO38malZpFruWLjv
	Knnw0jN/gtkJIJFvaWxKQEM5yRkADn6Z/dOa/iPls0HlUCOiIdT6E0GliK887sp6IN0BioejxsBjd
	Mnxnkf39sd3nQIeZLEkv2Kva5zpWppsVvi7QJIZa5nhKrUkZC1fpHNWa1MWY8Iy04/FQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161592-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161592: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:regression
    xen-4.12-testing:test-arm64-arm64-xl-xsm:xen-boot:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5b280a59c4dd8dad6cc8da28db981b193d10acee
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 16:31:38 +0000

flight 161592 xen-4.12-testing real [real]
flight 161607 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161592/
http://logs.test-lab.xenproject.org/osstest/logs/161607/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2 19 guest-localmigrate/x10 fail in 161560 REGR. vs. 159418

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-xsm       8 xen-boot         fail in 161576 pass in 161592
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2        fail pass in 161560
 test-armhf-armhf-xl           8 xen-boot                   fail pass in 161576
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 161576

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 161560 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 161560 never pass
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159418
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159418
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5b280a59c4dd8dad6cc8da28db981b193d10acee
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   75 days
Failing since        160128  2021-03-18 14:36:18 Z   45 days   60 attempts
Testing same since   160150  2021-03-20 04:11:48 Z   43 days   58 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 02 18:02:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 18:02:18 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161594-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161594: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=17ae69aba89dbfa2139b7f8024b757ab3cc42f59
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 18:01:57 +0000

flight 161594 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161594/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                17ae69aba89dbfa2139b7f8024b757ab3cc42f59
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  274 days
Failing since        152366  2020-08-01 20:49:34 Z  273 days  458 attempts
Testing same since   161594  2021-05-02 03:24:05 Z    0 days    1 attempts

------------------------------------------------------------
5922 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1604057 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 02 23:38:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 02 May 2021 23:38:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121404.229056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldLf8-0001eD-SF; Sun, 02 May 2021 23:38:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121404.229056; Sun, 02 May 2021 23:38:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldLf8-0001e6-PC; Sun, 02 May 2021 23:38:10 +0000
Received: by outflank-mailman (input) for mailman id 121404;
 Sun, 02 May 2021 23:38:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldLf7-0001dy-AI; Sun, 02 May 2021 23:38:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldLf7-0000Ak-2U; Sun, 02 May 2021 23:38:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldLf6-0004Kl-R4; Sun, 02 May 2021 23:38:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldLf6-0001Op-QW; Sun, 02 May 2021 23:38:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gXiDRWGkOYAXfU3rUd5tMmQBcRGiX2naI8AuckSKHWg=; b=lLdK4tiUVJTELjjXIJjePVbhCq
	3qjTs8wqhX4JvFsA8nmpN5wRTiBUNEdPMdfnbnuxZu2emaNE/M0xntTHxBaMHRuxlGv94aKKRkACF
	vJLQrzSfSooysv77g3aZClnv6AqeSPAtmeh3dM5/iVGABN7+j8RBqJ/8yORYTseyDEVU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161599-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161599: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
X-Osstest-Versions-That:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 02 May 2021 23:38:08 +0000

flight 161599 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161599/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 161584 pass in 161599
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 161584 pass in 161599
 test-amd64-amd64-dom0pvh-xl-amd 20 guest-localmigrate/x10  fail pass in 161584

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161584
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161584
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161584
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161584
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161584
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161584
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161584
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161584
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161584
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161584
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161584
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7
baseline version:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7

Last test of basis   161599  2021-05-02 08:21:43 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              fail    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 03 01:36:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 01:36:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121416.229078 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldNVg-00014E-MR; Mon, 03 May 2021 01:36:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121416.229078; Mon, 03 May 2021 01:36:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldNVg-000147-H9; Mon, 03 May 2021 01:36:32 +0000
Received: by outflank-mailman (input) for mailman id 121416;
 Mon, 03 May 2021 01:36:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldNVf-00013z-SM; Mon, 03 May 2021 01:36:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldNVf-000801-EQ; Mon, 03 May 2021 01:36:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldNVe-0000nn-R9; Mon, 03 May 2021 01:36:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldNVe-0002Xo-QZ; Mon, 03 May 2021 01:36:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lzO1+xIgBpKdl4K/YKWYTTtxIFGZeKuDuWtKcpKeO4w=; b=RAwNBmdpCkIUZJHwLg2/NwsPz0
	2jbLOb6JZrFuqTOp60ymBoxr8zEOhVDwz5vKO2Gw0GmitsYn4Ioy+0VhiP37gaDYwizqcT3G7sP8O
	rnaGt6bQueZewxb+vahPEPDzN9i6l6kPkypU5zXb6vqyFYXJc/E/zD3qrVHFcznZyMEw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161600-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 161600: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=370636ffbb8695e6af549011ad91a048c8cab267
X-Osstest-Versions-That:
    linux=19bfeb47e96bb342d8c43a8ba0e68baf053b0dfc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 May 2021 01:36:30 +0000

flight 161600 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161600/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161503
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161503
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161503
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161503
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161503
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161503
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161503
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161503
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161503
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161503
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161503
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                370636ffbb8695e6af549011ad91a048c8cab267
baseline version:
 linux                19bfeb47e96bb342d8c43a8ba0e68baf053b0dfc

Last test of basis   161503  2021-04-28 11:42:08 Z    4 days
Testing same since   161600  2021-05-02 09:11:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Benedict Schlueter <benedict.schlueter@rub.de>
  Daniel Borkmann <daniel@iogearbox.net>
  Fox Chen <foxhlchen@gmail.com>
  Frank van der Linden <fllinden@amazon.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Jon Hunter <jonathanh@nvidia.com>
  Linux Kernel Functional Testing <lkft@linaro.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   19bfeb47e96b..370636ffbb86  370636ffbb8695e6af549011ad91a048c8cab267 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Mon May 03 05:46:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 05:46:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121431.229098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldRP1-0006VK-5e; Mon, 03 May 2021 05:45:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121431.229098; Mon, 03 May 2021 05:45:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldRP1-0006VD-2X; Mon, 03 May 2021 05:45:55 +0000
Received: by outflank-mailman (input) for mailman id 121431;
 Mon, 03 May 2021 05:45:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldROz-0006V5-NK; Mon, 03 May 2021 05:45:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldROz-00052K-CX; Mon, 03 May 2021 05:45:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldROy-0004fq-WB; Mon, 03 May 2021 05:45:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldROy-0001Ui-Vh; Mon, 03 May 2021 05:45:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kPO7GAgz9EuwMlZtVuJ/sqxq8ZP1vDpPNB/y/WeivnI=; b=xLOuKXHsC9QLOKVenG5ZN7cjIU
	s73DRdg53Ws99zTePpYipKuiFQfhMKOP/8ueFU/fvJhtqhBa8hRpbWJmeFBBkoixUL8XIvxLZMDug
	vlg3VYtrIsMm5j3lm+4Zod82uT+2JVRqsu7ofevveO9DYi19L78knJ6Ch5J8sczd1HLM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161604-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161604: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8f860d2633baf9c2b6261f703f86e394c6bc22ca
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 May 2021 05:45:52 +0000

flight 161604 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161604/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                8f860d2633baf9c2b6261f703f86e394c6bc22ca
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  255 days
Failing since        152659  2020-08-21 14:07:39 Z  254 days  466 attempts
Testing same since   161571  2021-05-01 05:13:23 Z    2 days    3 attempts

------------------------------------------------------------
478 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 145075 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 03 06:17:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 06:17:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121439.229113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldRt1-0000tU-Mm; Mon, 03 May 2021 06:16:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121439.229113; Mon, 03 May 2021 06:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldRt1-0000tN-Je; Mon, 03 May 2021 06:16:55 +0000
Received: by outflank-mailman (input) for mailman id 121439;
 Mon, 03 May 2021 06:16:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldRt0-0000tF-DA; Mon, 03 May 2021 06:16:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldRt0-0005e4-2s; Mon, 03 May 2021 06:16:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldRsz-0005kr-QQ; Mon, 03 May 2021 06:16:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldRsz-0002yX-Py; Mon, 03 May 2021 06:16:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=p4nxZvCEGHKzxWlf+fQaLKbD7/jPTMX5g3qpjma59Jg=; b=CrSNqxRegosgx86gvYH3CXYQTR
	jEmP77YiGgr1NJeGW9g+oGHkTLrm7sj0AiM2D+MYH/m//Oot0v8KMnkdSgl06DOL1vIqAvkWHMMGI
	BblqgpUX5L/gpi86aH0V1tmVEH+dFSj1oE1DZw/ZkZxnt3AanFYaTVk65v0WdQ+sSyQo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161615-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161615: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=ec2e3336b8c8df572600043976e1ab5feead656e
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 May 2021 06:16:53 +0000

flight 161615 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161615/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              ec2e3336b8c8df572600043976e1ab5feead656e
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  297 days
Failing since        151818  2020-07-11 04:18:52 Z  296 days  289 attempts
Testing same since   161516  2021-04-29 04:18:53 Z    4 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 55101 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 03 09:29:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 09:29:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121474.229140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldUsm-0000f7-DE; Mon, 03 May 2021 09:28:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121474.229140; Mon, 03 May 2021 09:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldUsm-0000f0-AM; Mon, 03 May 2021 09:28:52 +0000
Received: by outflank-mailman (input) for mailman id 121474;
 Mon, 03 May 2021 09:28:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iacE=J6=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ldUsk-0000ev-Vj
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 09:28:51 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 05e3430f-de8f-4771-aaec-d6d2c661cba1;
 Mon, 03 May 2021 09:28:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 05e3430f-de8f-4771-aaec-d6d2c661cba1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620034129;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=xzwvYaLoVXYRdhMYB3H29yYEV6go5oWRT/nfLjDaI78=;
  b=agOFg6KS/NusuzDN8C5tEtdMuX/PKCuF3SX7DaDtggy9S9A5a8eHFzRS
   O67HSpiE6CxDtx4KRS+ES8xbXEGzNtPs7FJn7HHdatvhDFJniJhEjoIpv
   Qn0kBgaUpe/0pEmYeh0nW3EFI/mIef71ddQTYKP4TMN2icS5GegsMtzkP
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43304593
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,268,1613451600"; 
   d="scan'208";a="43304593"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DuaWpL0iSEv70wi3RbmsuTh1KHsC1e/2eJti0nxqZLMk0wJSYJa0jduMM2hmQlSP0KkA+090sNXXaYNCc5t748iixUpIffUWwmfYlyB3wBaEGzqg3upQZlf+U/uRkqWKShHkD3e1T45SOJpdQU9ZEdDGxF+dU0BzIUKqFTALMRE23w2wXhjpjdf6qrfzv5oj/vvHOo/H0CIjawMhlfSeIVe6VbO+U96GjL98HFBPPx06dS6/93Bm6K0z4MYC/es5kMuvFC8REhI8yYi/dFzWT4AUKx/Wz6EPzjBKUEJxoI1l/GaOu/6x7lSfC6bgYwr9gVlA/PK3/FvwDrOWu2x8sw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NspLC+R+2PU6t7UpGfybhUhxS/7Lh2pavD59GqGib4U=;
 b=g/IADqZo1IBZSScnjWg3/FncI6RoDSTA76DJ4zFaih6lI609EVIvyadm7LEfARPsXhU3q1ll2L7Uf70PCOdKWrhcFsi/P2cWcNHvSUh2jhHQ9JsdZtqvcHrWDlEUNufe/op9Dg5P+m43FZ+DobL8xziD44f+JjwOtRIzXV1HidZMkGB0r+a88sFN0gaAX1EgQNXtZ3kuOWIY3Y97kG9LBHU6zkUUl+DVhvP972U39UwU1NKuTitITjBVelyVTwTLe77Ml2n1gKhp9rESFRNOnGBLoRJJQ1vMjl2eYpP7U2SgK0QCpBlXWcWx9Unm/b4DO/oySMbC82o+RYJCdKmaBQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=NspLC+R+2PU6t7UpGfybhUhxS/7Lh2pavD59GqGib4U=;
 b=eODaDdtWmVOwYURwdXG128fP5L6Ib7EuIfesegZYDVS/+T2rh5hG8XfMfnyyggIjdUgObMGq1WQFSiihyFEQg6Ucb73G2t2OK9gqk2HhDAEZWmkE3+ERwnsSH26Bnxm04fH7lTaBttxS7gLjia1LmFaMOF1TjqNYGR1377vh/5k=
Date: Mon, 3 May 2021 11:28:40 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 01/12] x86/rtc: drop code related to strict mode
Message-ID: <YI/CSKpqWrilNKi8@Air-de-Roger>
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-2-roger.pau@citrix.com>
 <f282a2a2-e5cb-6a65-690a-b9c27c03089a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <f282a2a2-e5cb-6a65-690a-b9c27c03089a@suse.com>
X-ClientProxiedBy: PR0P264CA0065.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1d::29) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4f4c64a2-625e-4de5-90cd-08d90e15d9bd
X-MS-TrafficTypeDiagnostic: DM6PR03MB3578:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB357851A98401CB9564A3AAE48F5B9@DM6PR03MB3578.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 4f4c64a2-625e-4de5-90cd-08d90e15d9bd
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2021 09:28:46.2107
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: ZDkW5Te+YS6/FmUMjB4ORLi6am50Ns8xgaAYf1EcJLq4I9FCsFwMH04RvqcH8CsEYLU3Zyex+Y1C24eiuowViA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3578
X-OriginatorOrg: citrix.com

On Thu, Apr 29, 2021 at 04:53:07PM +0200, Jan Beulich wrote:
> On 20.04.2021 16:07, Roger Pau Monne wrote:
> > --- a/xen/arch/x86/hvm/rtc.c
> > +++ b/xen/arch/x86/hvm/rtc.c
> > @@ -46,15 +46,6 @@
> >  #define epoch_year     1900
> >  #define get_year(x)    (x + epoch_year)
> >  
> > -enum rtc_mode {
> > -   rtc_mode_no_ack,
> > -   rtc_mode_strict
> > -};
> > -
> > -/* This must be in sync with how hvmloader sets the ACPI WAET flags. */
> > -#define mode_is(d, m) ((void)(d), rtc_mode_##m == rtc_mode_no_ack)
> > -#define rtc_mode_is(s, m) mode_is(vrtc_domain(s), m)
> 
> Leaving aside my concerns about this removal, I think some form of
> reference to hvmloader and its respective behavior should remain
> here, presumably in form of a (replacement) comment.

What about adding a comment in rtc_pf_callback:

/*
 * The current RTC implementation will inject an interrupt regardless
 * of whether REG_C has been read since the last interrupt was
 * injected. This is why the ACPI WAET 'RTC good' flag must be
 * unconditionally set by hvmloader.
 */
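
To make the behavioural difference concrete, the two modes could be
modelled roughly like this (a hypothetical toy model for illustration
only, using made-up names; not the actual Xen code):

```c
#include <stdbool.h>

/* Toy model of the two historical RTC modes.  In strict mode a periodic
 * interrupt is only injected if the guest has read REG_C since the
 * previous injection; in no_ack mode (the only mode remaining after
 * this patch) it is injected unconditionally. */
struct toy_rtc {
    bool reg_c_read_since_last_irq;
};

static bool strict_mode_should_inject(const struct toy_rtc *rtc)
{
    return rtc->reg_c_read_since_last_irq;
}

static bool no_ack_mode_should_inject(const struct toy_rtc *rtc)
{
    (void)rtc; /* REG_C state is ignored entirely */
    return true;
}
```

Since no_ack injection never depends on REG_C, hvmloader must always
advertise the 'RTC good' WAET flag, which is what the proposed comment
records.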

> > @@ -337,8 +336,7 @@ int pt_update_irq(struct vcpu *v)
> >      {
> >          if ( pt->pending_intr_nr )
> >          {
> > -            /* RTC code takes care of disabling the timer itself. */
> > -            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) &&
> > +            if ( pt_irq_masked(pt) &&
> >                   /* Level interrupts should be asserted even if masked. */
> >                   !pt->level )
> >              {
> 
> I'm struggling to relate this to any other part of the patch. In
> particular I can't find the case where a periodic timer would be
> registered with RTC_IRQ and a NULL private pointer. The only use
> I can find is with a non-NULL pointer, which would mean the "else"
> path is always taken at present for the RTC case (which you now
> change).

Right, the else case was always taken because, as the comment noted, the
RTC would take care of disabling itself (by calling destroy_periodic_time
from the callback when using strict_mode). When no_ack mode was
implemented this wasn't taken into account AFAICT, and thus the RTC timer
was never removed from the list even when masked.

I think with no_ack mode the RTC shouldn't have this specific handling
in pt_update_irq, as it should behave like any other virtual timer.
I could try to split this out as a separate bugfix, but then I would
have to teach pt_update_irq to differentiate between strict_mode and
no_ack mode.

Would you be fine if the following is added to the commit message
instead:

"Note that the special handling of the RTC timer done in pt_update_irq
is wrong for the no_ack mode, as the RTC timer callback won't disable
the timer anymore when it detects that the guest is not reading REG_C.
As such, remove the code as part of the removal of strict_mode, and
don't special-case the RTC timer in pt_update_irq anymore."

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 03 10:41:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 10:41:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121499.229162 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldW18-0007N5-Ny; Mon, 03 May 2021 10:41:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121499.229162; Mon, 03 May 2021 10:41:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldW18-0007My-KU; Mon, 03 May 2021 10:41:34 +0000
Received: by outflank-mailman (input) for mailman id 121499;
 Mon, 03 May 2021 10:41:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldW16-0007Mt-Kc
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 10:41:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 608c5261-efbc-4468-be7b-b71c26d838d7;
 Mon, 03 May 2021 10:41:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 61D3BB287;
 Mon,  3 May 2021 10:41:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 608c5261-efbc-4468-be7b-b71c26d838d7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620038490; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=fXqoZ+h4dh4kmM6CvMNc6hbkf6NUZCD7jM+SqjhA4w0=;
	b=SLg3BZ/WP8Um3wl1VJZwVpDqPSR2pB2vVNrIHwFvcZDlycCVzCLxzPsWCI70y2QB1cI3CE
	0/AoNTSqJTas6IkXkMl2SpvsF9J7+OG+QkR8CTog7I5LloSAdt0my7TzMGUoA3Fvs/bbMs
	2wi4Ig63MZTWmhU50/WmD3bwsM3auyo=
Subject: Re: [PATCH v3 03/13] libs/guest: allow fetching a specific MSR entry
 from a cpu policy
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-4-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <273ba6f9-dee9-00db-407b-10325d21afae@suse.com>
Date: Mon, 3 May 2021 12:41:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210430155211.3709-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.04.2021 17:52, Roger Pau Monne wrote:
> Introduce an interface that returns a specific MSR entry from a cpu
> policy in xen_msr_entry_t format. Provide a helper to perform a binary
> search against an array of MSR entries.
> 
> This is useful so that callers can peek data from the opaque
> xc_cpu_policy_t type.
> 
> No callers of the interface are introduced in this patch.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v1:
>  - Introduce a helper to perform a binary search of the MSR entries
>    array.
> ---
>  tools/include/xenctrl.h         |  2 ++
>  tools/libs/guest/xg_cpuid_x86.c | 42 +++++++++++++++++++++++++++++++++
>  2 files changed, 44 insertions(+)
> 
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index cbca7209e34..605c632cf30 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -2611,6 +2611,8 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t policy,
>  int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
>                              uint32_t leaf, uint32_t subleaf,
>                              xen_cpuid_leaf_t *out);
> +int xc_cpu_policy_get_msr(xc_interface *xch, const xc_cpu_policy_t policy,
> +                          uint32_t msr, xen_msr_entry_t *out);
>  
>  int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
>  int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
> diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> index de27826f415..9e83daca0e6 100644
> --- a/tools/libs/guest/xg_cpuid_x86.c
> +++ b/tools/libs/guest/xg_cpuid_x86.c
> @@ -850,3 +850,45 @@ int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
>      *out = *tmp;
>      return 0;
>  }
> +
> +static int compare_entries(const void *l, const void *r)
> +{
> +    const xen_msr_entry_t *lhs = l;
> +    const xen_msr_entry_t *rhs = r;
> +
> +    if ( lhs->idx == rhs->idx )
> +        return 0;
> +    return lhs->idx < rhs->idx ? -1 : 1;
> +}
> +
> +static xen_msr_entry_t *find_entry(xen_msr_entry_t *entries,
> +                                   unsigned int nr_entries, unsigned int index)
> +{
> +    const xen_msr_entry_t key = { index };
> +
> +    return bsearch(&key, entries, nr_entries, sizeof(*entries), compare_entries);
> +}

Isn't "entries" / "entry" a little too generic a name here, considering
the CPUID equivalents use "leaves" / "leaf"? (Noticed really while looking
at patch 7.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 10:43:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 10:43:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121502.229174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldW2j-0007TX-2j; Mon, 03 May 2021 10:43:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121502.229174; Mon, 03 May 2021 10:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldW2i-0007TP-Vz; Mon, 03 May 2021 10:43:12 +0000
Received: by outflank-mailman (input) for mailman id 121502;
 Mon, 03 May 2021 10:43:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldW2h-0007TH-Mt
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 10:43:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4b55a956-e558-4f28-9942-fda32340bee3;
 Mon, 03 May 2021 10:43:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76B58AB64;
 Mon,  3 May 2021 10:43:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4b55a956-e558-4f28-9942-fda32340bee3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620038589; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=07thARZPh4LASF5qfa0CAKK7il3OpXViMHivwWcOe1Q=;
	b=B03CS6YsWqi7pHjTSW6FhofGyRszUh0qXggQEJdhzQ2LIzliHfKzeG4B/sbVWtLe/hSmLY
	6zAeRao4263VuUR/GKZ785YYXINyoSSnzYcrEJHH+EqcbSRrBS/3166MmyFy6b+rIA4aDP
	8si/sJFf/dFfvLl0u9T023BUD723ILY=
Subject: Re: [PATCH v3 07/13] libs/guest: obtain a compatible cpu policy from
 two input ones
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-8-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <838e358d-5707-0f34-c8fe-64e29f000a69@suse.com>
Date: Mon, 3 May 2021 12:43:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210430155211.3709-8-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.04.2021 17:52, Roger Pau Monne wrote:
> Introduce a helper to obtain a compatible cpu policy based on two
> input cpu policies. Currently this is done by and'ing all CPUID
> feature leaves and MSR entries, except for MSR_ARCH_CAPABILITIES which
> has the RSBA bit or'ed.
> 
> The _AC macro is pulled from libxl_internal.h into xen-tools/libs.h
> since it's required in order to use the msr-index.h header.
> 
> Note there's no need to place this helper in libx86, since the
> calculation of a compatible policy shouldn't be done from the
> hypervisor.
> 
> No callers of the interface are introduced.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v2:
>  - Add some comments.
>  - Remove stray double semicolon.
>  - AND all 0x7 subleaves (except 0.EAX).
>  - Explicitly handle MSR indexes in a switch statement.
>  - Error out when an unhandled MSR is found.
>  - Add handling of leaf 0x80000021.
> 
> Changes since v1:
>  - Only AND the feature parts of cpuid.
>  - Use a binary search to find the matching leaves and msr entries.
>  - Remove default case from MSR level function.
> ---
>  tools/include/xen-tools/libs.h    |   5 ++
>  tools/include/xenctrl.h           |   4 +
>  tools/libs/guest/xg_cpuid_x86.c   | 137 ++++++++++++++++++++++++++++++
>  tools/libs/light/libxl_internal.h |   2 -
>  4 files changed, 146 insertions(+), 2 deletions(-)
> 
> diff --git a/tools/include/xen-tools/libs.h b/tools/include/xen-tools/libs.h
> index a16e0c38070..b9e89f9a711 100644
> --- a/tools/include/xen-tools/libs.h
> +++ b/tools/include/xen-tools/libs.h
> @@ -63,4 +63,9 @@
>  #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
>  #endif
>  
> +#ifndef _AC
> +#define __AC(X,Y)   (X##Y)
> +#define _AC(X,Y)    __AC(X,Y)
> +#endif
> +
>  #endif	/* __XEN_TOOLS_LIBS__ */
> diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> index 5f699c09509..c41d794683c 100644
> --- a/tools/include/xenctrl.h
> +++ b/tools/include/xenctrl.h
> @@ -2622,6 +2622,10 @@ int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t policy,
>  /* Compatibility calculations. */
>  bool xc_cpu_policy_is_compatible(xc_interface *xch, const xc_cpu_policy_t host,
>                                   const xc_cpu_policy_t guest);
> +int xc_cpu_policy_calc_compatible(xc_interface *xch,
> +                                  const xc_cpu_policy_t p1,
> +                                  const xc_cpu_policy_t p2,
> +                                  xc_cpu_policy_t out);
>  
>  int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
>  int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
> diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> index 6b8bae00334..be2056469aa 100644
> --- a/tools/libs/guest/xg_cpuid_x86.c
> +++ b/tools/libs/guest/xg_cpuid_x86.c
> @@ -32,6 +32,7 @@ enum {
>  #include <xen/arch-x86/cpufeatureset.h>
>  };
>  
> +#include <xen/asm/msr-index.h>
>  #include <xen/asm/x86-vendors.h>
>  
>  #include <xen/lib/x86/cpu-policy.h>
> @@ -949,3 +950,139 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, const xc_cpu_policy_t host,
>  
>      return false;
>  }
> +
> +static bool level_msr(const xen_msr_entry_t *e1, const xen_msr_entry_t *e2,
> +                      xen_msr_entry_t *out)
> +{
> +    *out = (xen_msr_entry_t){ .idx = e1->idx };
> +
> +    switch ( e1->idx )
> +    {
> +    case MSR_INTEL_PLATFORM_INFO:
> +        out->val = e1->val & e2->val;
> +        return true;
> +
> +    case MSR_ARCH_CAPABILITIES:
> +        out->val = e1->val & e2->val;
> +        /*
> +         * Set RSBA if present on any of the input values to notice the guest
> +         * might run on vulnerable hardware at some point.
> +         */
> +        out->val |= (e1->val | e2->val) & ARCH_CAPS_RSBA;
> +        return true;
> +    }
> +
> +    return false;
> +}
> +
> +/* Only level featuresets so far. */

I have to admit that I don't think I see all the implications of
this implementation restriction. All other leaves get dropped by
the caller, but it's not clear to me what this means wrt what the
guest is ultimately going to get to see.

> +static bool level_leaf(const xen_cpuid_leaf_t *l1, const xen_cpuid_leaf_t *l2,
> +                       xen_cpuid_leaf_t *out)
> +{
> +    *out = (xen_cpuid_leaf_t){
> +        .leaf = l1->leaf,
> +        .subleaf = l2->subleaf,

Since ->leaf and ->subleaf ought to match anyway, I think it would
look less odd if both initializers were taken from a consistent source.

> +    };
> +
> +    switch ( l1->leaf )
> +    {
> +    case 0x1:
> +    case 0x80000001:
> +        out->c = l1->c & l2->c;
> +        out->d = l1->d & l2->d;
> +        return true;
> +
> +    case 0xd:
> +        if ( l1->subleaf != 1 )
> +            break;
> +        /*
> +         * Only take Da1 into account, the rest of subleaves will be dropped
> +         * and recalculated by recalculate_xstate.
> +         */
> +        out->a = l1->a & l2->a;
> +        return true;
> +
> +    case 0x7:
> +        if ( l1->subleaf )
> +            /* subleaf 0 EAX contains the max subleaf count. */
> +            out->a = l1->a & l2->a;

        else
            out->a = min(l1->a, l2->a);

? Or is the result from here then further passed to
x86_cpuid_policy_shrink_max_leaves() (not visible from this patch)?
(If not, the same would apply to all other multi-subleaf leaves.)

> +        out->b = l1->b & l2->b;
> +        out->c = l1->c & l2->c;
> +        out->d = l1->d & l2->d;
> +        return true;
> +
> +    case 0x80000007:
> +        out->d = l1->d & l2->d;
> +        return true;
> +
> +    case 0x80000008:
> +        out->b = l1->b & l2->b;
> +        return true;
> +
> +    case 0x80000021:
> +        out->a = l1->a & l2->a;
> +        return true;
> +    }
> +
> +    return false;
> +}
> +
> +int xc_cpu_policy_calc_compatible(xc_interface *xch,
> +                                  const xc_cpu_policy_t p1,
> +                                  const xc_cpu_policy_t p2,
> +                                  xc_cpu_policy_t out)

I have to admit that I find these two "const" misleading here. You
don't equally constify the other two parameters (which would e.g. be
xc_interface *const xch), and I don't think doing so is common
practice elsewhere. And what p1 and p2 point to is specifically non-
const (and cannot be const), due to ...

> +{
> +    unsigned int nr_leaves, nr_msrs, i, index;
> +    unsigned int p1_nr_leaves, p2_nr_leaves;
> +    unsigned int p1_nr_entries, p2_nr_entries;
> +    int rc;
> +
> +    p1_nr_leaves = p2_nr_leaves = ARRAY_SIZE(p1->leaves);
> +    p1_nr_entries = p2_nr_entries = ARRAY_SIZE(p1->entries);
> +
> +    rc = xc_cpu_policy_serialise(xch, p1, p1->leaves, &p1_nr_leaves,
> +                                 p1->entries, &p1_nr_entries);
> +    if ( rc )
> +        return rc;
> +    rc = xc_cpu_policy_serialise(xch, p2, p2->leaves, &p2_nr_leaves,
> +                                 p2->entries, &p2_nr_entries);

... these two calls.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 10:45:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 10:45:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121505.229186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldW4o-0007dh-G7; Mon, 03 May 2021 10:45:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121505.229186; Mon, 03 May 2021 10:45:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldW4o-0007da-D6; Mon, 03 May 2021 10:45:22 +0000
Received: by outflank-mailman (input) for mailman id 121505;
 Mon, 03 May 2021 10:45:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldW4n-0007dS-LG; Mon, 03 May 2021 10:45:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldW4n-00027c-CE; Mon, 03 May 2021 10:45:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldW4n-0004UI-1U; Mon, 03 May 2021 10:45:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldW4n-0001cb-0j; Mon, 03 May 2021 10:45:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eiJGtBPvdCwbqljZpCoCVaDYe0HdK0y1SbJrb9AA21Y=; b=NsUusbQc5yl3TC34uH6Y0KPzSv
	QYRMoBHFgo4EcpN7ZpDc/tzOBnthdaSEH6ZN1zYKdSxIyhzzqD/e81T0Dg3vDKNSphBcZJurJITPg
	F1cu51pA7UG62gRktmSZ2FTin1EbHoztpewBhg1TkGHFKTiADfQplutamySNmsOw1Bpg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161609-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161609: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:regression
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:heisenbug
    xen-4.12-testing:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-saverestore:fail:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5b280a59c4dd8dad6cc8da28db981b193d10acee
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 May 2021 10:45:21 +0000

flight 161609 xen-4.12-testing real [real]
flight 161619 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161609/
http://logs.test-lab.xenproject.org/osstest/logs/161619/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10   fail REGR. vs. 159418

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qcow2 18 guest-saverestore.2 fail in 161592 pass in 161609
 test-armhf-armhf-xl           8 xen-boot         fail in 161592 pass in 161609
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 161592 pass in 161609
 test-amd64-i386-xl-qemut-win7-amd64 15 guest-saverestore   fail pass in 161592

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop   fail in 161592 like 159418
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159418
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159418
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5b280a59c4dd8dad6cc8da28db981b193d10acee
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   75 days
Failing since        160128  2021-03-18 14:36:18 Z   45 days   61 attempts
Testing same since   160150  2021-03-20 04:11:48 Z   44 days   59 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 03 11:09:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 11:09:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121521.229201 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldWSP-00014m-Kl; Mon, 03 May 2021 11:09:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121521.229201; Mon, 03 May 2021 11:09:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldWSP-00014f-G4; Mon, 03 May 2021 11:09:45 +0000
Received: by outflank-mailman (input) for mailman id 121521;
 Mon, 03 May 2021 11:09:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldWSN-00013p-QV
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 11:09:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 872018e6-1c35-485d-bd7f-bf6278e55303;
 Mon, 03 May 2021 11:09:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1293FAE5E;
 Mon,  3 May 2021 11:09:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 872018e6-1c35-485d-bd7f-bf6278e55303
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v3 08/13] libs/guest: make a cpu policy compatible with
 older Xen versions
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-9-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <51ee228a-2d53-2dd4-55cf-233d81ba4958@suse.com>
Date: Mon, 3 May 2021 13:09:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210430155211.3709-9-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.04.2021 17:52, Roger Pau Monne wrote:
> @@ -1086,3 +1075,42 @@ int xc_cpu_policy_calc_compatible(xc_interface *xch,
>  
>      return rc;
>  }
> +
> +int xc_cpu_policy_make_compatible(xc_interface *xch, xc_cpu_policy_t policy,
> +                                  bool hvm)

I'm concerned about the naming, and in particular about the two very
different meanings of "compatible" for xc_cpu_policy_calc_compatible()
and this new one. I'm afraid I don't have a good suggestion though,
short of making the name even longer and inserting "backwards".

Jan

> +{
> +    xc_cpu_policy_t host;
> +    int rc;
> +
> +    host = xc_cpu_policy_init();
> +    if ( !host )
> +    {
> +        errno = ENOMEM;
> +        return -1;
> +    }
> +
> +    rc = xc_cpu_policy_get_system(xch, XEN_SYSCTL_cpu_policy_host, host);
> +    if ( rc )
> +    {
> +        ERROR("Failed to get host policy");
> +        goto out;
> +    }
> +
> +    /*
> +     * Account for features which have been disabled by default since Xen 4.13,
> +     * so migrated-in VM's don't risk seeing features disappearing.
> +     */
> +    policy->cpuid.basic.rdrand = host->cpuid.basic.rdrand;
> +
> +    if ( hvm )
> +        policy->cpuid.feat.mpx = host->cpuid.feat.mpx;
> +
> +    /* Clamp maximum leaves to the ones supported on 4.12. */
> +    policy->cpuid.basic.max_leaf = min(policy->cpuid.basic.max_leaf, 0xdu);
> +    policy->cpuid.feat.max_subleaf = 0;
> +    policy->cpuid.extd.max_leaf = min(policy->cpuid.extd.max_leaf, 0x1cu);
> +
> + out:
> +    xc_cpu_policy_destroy(host);
> +    return rc;
> +}
> 



From xen-devel-bounces@lists.xenproject.org Mon May 03 11:31:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 11:31:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121529.229212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldWnK-0003WJ-DK; Mon, 03 May 2021 11:31:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121529.229212; Mon, 03 May 2021 11:31:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldWnK-0003WC-AN; Mon, 03 May 2021 11:31:22 +0000
Received: by outflank-mailman (input) for mailman id 121529;
 Mon, 03 May 2021 11:31:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iacE=J6=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ldWnJ-0003W7-5M
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 11:31:21 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80c90eec-0b2b-44ca-987d-a3799e44353c;
 Mon, 03 May 2021 11:31:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80c90eec-0b2b-44ca-987d-a3799e44353c
Date: Mon, 3 May 2021 13:31:03 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, "Stefano Stabellini" <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v3 01/22] mm: introduce xvmalloc() et al and use for
 grant table allocations
Message-ID: <YI/e9wyOpsVDkFQi@Air-de-Roger>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <69778de6-3b94-64d1-99d9-1a0fcfa503fd@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <69778de6-3b94-64d1-99d9-1a0fcfa503fd@suse.com>
MIME-Version: 1.0

On Thu, Apr 22, 2021 at 04:43:39PM +0200, Jan Beulich wrote:
> All of the array allocations in grant_table_init() can exceed a page's
> worth of memory, which xmalloc()-based interfaces aren't really suitable
> for after boot. We also don't need any of these allocations to be
> physically contiguous. Introduce interfaces dynamically switching
> between xmalloc() et al and vmalloc() et al, based on requested size,
> and use them instead.
> 
> All the wrappers in the new header get cloned mostly verbatim from
> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
> for sizes and to unsigned int for alignments.

We seem to be growing a non-trivial number of memory allocation
function families: xmalloc, vmalloc and now xvmalloc.

I think from a consumer PoV it would make sense to only have two of
those: one for allocations that are required to be physically
contiguous, and one for allocations that don't require it.

Even then, requesting physically contiguous allocations could be done
by passing a flag to the same interface that's used for non-contiguous
allocations.

Maybe another option would be to expand the existing
v{malloc,realloc,...} set of functions to have your proposed behaviour
for xv{malloc,realloc,...}?
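To make the above concrete, here is a hypothetical userspace sketch of such a flag-based single interface. The names, the flag, and the malloc() stand-ins for the real backends are all assumptions for illustration, not proposed Xen code:

```c
/*
 * Hypothetical sketch (not proposed code) of a single allocation entry
 * point where physical contiguity is requested via a flag rather than
 * via a separate function family.  malloc() stands in for both
 * backends so the sketch compiles in userspace.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

#define XALLOC_PHYS_CONTIG (1u << 0)  /* hypothetical flag */

static bool chose_contiguous;  /* records the backend picked, for illustration */

static void *xalloc_bytes(size_t size, unsigned int flags)
{
    chose_contiguous = (flags & XALLOC_PHYS_CONTIG) != 0;
    /*
     * A real implementation would call an xmalloc()-style allocator
     * when contiguity is requested and a vmalloc()-style one
     * otherwise; both branches are malloc() stand-ins here.
     */
    return malloc(size);
}
```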

> --- /dev/null
> +++ b/xen/include/xen/xvmalloc.h
> @@ -0,0 +1,73 @@
> +
> +#ifndef __XVMALLOC_H__
> +#define __XVMALLOC_H__
> +
> +#include <xen/cache.h>
> +#include <xen/types.h>
> +
> +/*
> + * Xen malloc/free-style interface for allocations possibly exceeding a page's
> + * worth of memory, as long as there's no need to have physically contiguous
> + * memory allocated.  These should be used in preference to xmalloc() et al
> + * whenever the size is not known to be constrained to at most a single page.

Even when it's known that size <= PAGE_SIZE these helpers are
appropriate, as they would end up using xmalloc, so I think it's fine
to recommend them universally as long as there's no need to allocate
physically contiguous memory?

Granted there's a bit more overhead from the logic to decide between
using xmalloc or vmalloc &c, but IMO that's not a big enough deal to
justify not recommending this interface globally for non-contiguous
allocations.
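For what it's worth, the decision logic in question amounts to one size comparison per call. A minimal userspace model of that dispatch (an assumption for illustration, not Xen's actual implementation; PAGE_SIZE and the backend names are stand-ins):

```c
/*
 * Userspace model of the size-based dispatch under discussion:
 * requests of at most a page are routed to the physically contiguous
 * (xmalloc-style) path, larger ones to the vmalloc-style path.
 */
#include <assert.h>
#include <stddef.h>
#include <string.h>

#define PAGE_SIZE 4096u  /* illustrative */

/* Which backend a request of the given size would be routed to. */
static const char *xv_backend(size_t size)
{
    return size <= PAGE_SIZE ? "xmalloc" : "vmalloc";
}
```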

> + */
> +
> +/* Allocate space for typed object. */
> +#define xvmalloc(_type) ((_type *)_xvmalloc(sizeof(_type), __alignof__(_type)))
> +#define xvzalloc(_type) ((_type *)_xvzalloc(sizeof(_type), __alignof__(_type)))
> +
> +/* Allocate space for array of typed objects. */
> +#define xvmalloc_array(_type, _num) \
> +    ((_type *)_xvmalloc_array(sizeof(_type), __alignof__(_type), _num))
> +#define xvzalloc_array(_type, _num) \
> +    ((_type *)_xvzalloc_array(sizeof(_type), __alignof__(_type), _num))
> +
> +/* Allocate space for a structure with a flexible array of typed objects. */
> +#define xvzalloc_flex_struct(type, field, nr) \
> +    ((type *)_xvzalloc(offsetof(type, field[nr]), __alignof__(type)))
> +
> +#define xvmalloc_flex_struct(type, field, nr) \
> +    ((type *)_xvmalloc(offsetof(type, field[nr]), __alignof__(type)))
> +
> +/* Re-allocate space for a structure with a flexible array of typed objects. */
> +#define xvrealloc_flex_struct(ptr, field, nr)                          \
> +    ((typeof(ptr))_xvrealloc(ptr, offsetof(typeof(*(ptr)), field[nr]), \
> +                             __alignof__(typeof(*(ptr)))))
> +
> +/* Allocate untyped storage. */
> +#define xvmalloc_bytes(_bytes) _xvmalloc(_bytes, SMP_CACHE_BYTES)
> +#define xvzalloc_bytes(_bytes) _xvzalloc(_bytes, SMP_CACHE_BYTES)

I see xmalloc does the same, but wouldn't it be enough to align to a
lower value? It seems quite wasteful to align to 128 bytes on x86 by
default.
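To put a rough number on that waste: with the usual power-of-two round-up, a 16-byte request padded to a 128-byte cache line loses 112 bytes. A small illustrative calculation (the request sizes are hypothetical examples):

```c
/*
 * Illustration of the padding cost of a 128-byte (x86 SMP_CACHE_BYTES)
 * default alignment for small allocations.  ROUNDUP is the standard
 * round-up for power-of-two alignments.
 */
#include <assert.h>
#include <stddef.h>

#define SMP_CACHE_BYTES 128u
#define ROUNDUP(x, a) (((x) + (a) - 1) & ~((a) - 1))

/* Bytes of padding lost when a request of 'size' is rounded up. */
static size_t align_waste(size_t size)
{
    return ROUNDUP(size, (size_t)SMP_CACHE_BYTES) - size;
}
```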

> +
> +/* Free any of the above. */
> +extern void xvfree(void *);
> +
> +/* Free an allocation, and zero the pointer to it. */
> +#define XVFREE(p) do { \
> +    xvfree(p);         \
> +    (p) = NULL;        \
> +} while ( false )
> +
> +/* Underlying functions */
> +extern void *_xvmalloc(size_t size, unsigned int align);
> +extern void *_xvzalloc(size_t size, unsigned int align);
> +extern void *_xvrealloc(void *ptr, size_t size, unsigned int align);

Nit: I would drop the 'extern' keyword from the function prototypes.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 03 11:54:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 11:54:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121539.229231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldX97-0005Lj-Dd; Mon, 03 May 2021 11:53:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121539.229231; Mon, 03 May 2021 11:53:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldX97-0005Lc-9m; Mon, 03 May 2021 11:53:53 +0000
Received: by outflank-mailman (input) for mailman id 121539;
 Mon, 03 May 2021 11:53:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldX96-0005LX-9m
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 11:53:52 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 663d979e-6a08-43e1-98b4-c46a35d13160;
 Mon, 03 May 2021 11:53:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 663d979e-6a08-43e1-98b4-c46a35d13160
 WDWgOcIT/iEdk10wiNqntNAlgtlFAU1dfTnDH15mmx23AyRcfIKFN9XrcBPpWy6XPnS/vg6u
 QwsfsF+c+LdkP/Zd6NxfuJM3ptKhbPrXW3SO9tg5ZOpq42vKZyGZ6ecTag7gAw4DwOaOPP0G
 UZS+BHxZqEHKlFVckbYThY8Vokj87nFjpgjiXGRssFOWgwhHraNe6T67XGqbATElSMzTGATm
 W3wml4xbP5RCON2r4RNrIoLUlXYEY67m5+/OnqTfyYNCyaM8VC9kG9KHmzbft0T7WEA6wZqn
 9Bkp21tt7SUyrzwwbLuzRnZopI7ma8WMu3RCaBA/RB/dD/GVOChMKRkYGOpQaybTuwcEIDg4
 JZMWQWc8RYkzEnyLQN7RLacN29nmsV131E4T9mkVbx2o+ppEfjdHs2QDHxs9FxRjlcMn+BkM
 Lf1/OXvU6NuwR45Q==
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="43039139"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=g0+r0RHQUkmpxITkybaQ0ULzD+bQsdAxjLjWggtqay+BqICTmUvsLyjm0eBFm+ZKIrG7z7/x5wzbD/sAEec3NufrKCxxV/j+JpUFimikrM/vqZcPcsN7bX6ZcjeUyxH6/Lkdo26GkMwxptDuHOusLlYa9D0h17zgsaC1TZEcXZdOYU9Henul9hoolPq0Q+1yOgNGpcrebW0u1T3DWLe0uZKN2N8w5D7FURd4hXmcM6Xb+3sg6aExYQzw6u7ppKEzwAV3oMU0XeLiFRuZO4IrrFv/xQMcovyFlHr91eQmWIbjrMxRAUSZgBM3e8ecG/7LnhJND0OrAMWd4icJCTk6vw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CN5kqgy5ofU8tAD/6dZSU/2wGyQeJYxSKvPSU6mxuCg=;
 b=fkbgZ+jVXWpOLEEWsP6Bi0MIe/+EmCin/uZgWzzqPWhlIMlyPEtzKeJN839qxPXkLjp4UDJcX3Hu49qp4Bz3p4MMGp1pzNMUwTnBUHPlR7S5cV2M+gikelEIn+NiiPcn/dYUZunOtEzLwi5HvW+fEZclw76Kz97QBOjAX68ywAro/8nUoHvyORIF1oaO6Jop+f8xFS7zwU8XrkpUl76z9dvilAtMA/qfucgR7OtsloLGz4+PMPSgJb716Vh7XDguj4aZB2RaHJky/0ZrTxSHZUycrm36Bvw3VtRDTv32RqjAwPTIvc4uVp0weBdeCNIRov8xzehq1QTG7mJVvPVYEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CN5kqgy5ofU8tAD/6dZSU/2wGyQeJYxSKvPSU6mxuCg=;
 b=CGYR+VUHIjf5eoiUgNquin6jt1Tiirqxf3N6x+zsbTLmTH3dAiHDckI12wM50gqkdXuq7SoFB+uIwq+duWWXUUvR98MSb8bpSZZpQCJOKpiw8e1cYJBXkCu3gTYbA/DPPZ024ffuFsQICUyeOgnx0pVb2v6iFCg9OLJLy2ZFCLY=
Subject: Re: [PATCH v3 04/22] x86/xstate: re-use valid_xcr0() for boot-time
 checks
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <77b149c5-e7b8-6335-dd86-745c6cc69a06@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <426e6d04-13fb-5ef4-0916-274077d47692@citrix.com>
Date: Mon, 3 May 2021 12:53:40 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <77b149c5-e7b8-6335-dd86-745c6cc69a06@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
MIME-Version: 1.0

On 22/04/2021 15:45, Jan Beulich wrote:
> Instead of (just partially) open-coding it, re-use the function after
> suitably moving it up.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Mon May 03 12:27:02 2021
Subject: Re: [PATCH v4 01/12] x86/rtc: drop code related to strict mode
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-2-roger.pau@citrix.com>
 <f282a2a2-e5cb-6a65-690a-b9c27c03089a@suse.com>
 <YI/CSKpqWrilNKi8@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5b06565e-1f2e-3498-c18f-e7eac0042761@suse.com>
Date: Mon, 3 May 2021 14:26:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <YI/CSKpqWrilNKi8@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 11:28, Roger Pau Monné wrote:
> On Thu, Apr 29, 2021 at 04:53:07PM +0200, Jan Beulich wrote:
>> On 20.04.2021 16:07, Roger Pau Monne wrote:
>>> --- a/xen/arch/x86/hvm/rtc.c
>>> +++ b/xen/arch/x86/hvm/rtc.c
>>> @@ -46,15 +46,6 @@
>>>  #define epoch_year     1900
>>>  #define get_year(x)    (x + epoch_year)
>>>  
>>> -enum rtc_mode {
>>> -   rtc_mode_no_ack,
>>> -   rtc_mode_strict
>>> -};
>>> -
>>> -/* This must be in sync with how hvmloader sets the ACPI WAET flags. */
>>> -#define mode_is(d, m) ((void)(d), rtc_mode_##m == rtc_mode_no_ack)
>>> -#define rtc_mode_is(s, m) mode_is(vrtc_domain(s), m)
>>
>> Leaving aside my concerns about this removal, I think some form of
>> reference to hvmloader and its respective behavior should remain
>> here, presumably in form of a (replacement) comment.
> 
> What about adding a comment in rtc_pf_callback:
> 
> /*
>  * The current RTC implementation will inject an interrupt regardless
>  * of whether REG_C has been read since the last interrupt was
>  * injected. This is why the ACPI WAET 'RTC good' flag must be
>  * unconditionally set by hvmloader.
>  */

For one I'm unconvinced this is "must"; I think it is "may". We're
producing excess interrupts for an unaware guest, aiui. Presumably most
guests can tolerate this, but - second - it may be unnecessary overhead.
Which in turn may be why nobody has complained so far, as this sort of
overhead may be hard to notice. I also suspect the RTC may not be used
very often for generating a periodic interrupt. (I've also not seen the
flag named "RTC good" - the ACPI constant is ACPI_WAET_RTC_NO_ACK, for
example.)

>>> @@ -337,8 +336,7 @@ int pt_update_irq(struct vcpu *v)
>>>      {
>>>          if ( pt->pending_intr_nr )
>>>          {
>>> -            /* RTC code takes care of disabling the timer itself. */
>>> -            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) &&
>>> +            if ( pt_irq_masked(pt) &&
>>>                   /* Level interrupts should be asserted even if masked. */
>>>                   !pt->level )
>>>              {
>>
>> I'm struggling to relate this to any other part of the patch. In
>> particular I can't find the case where a periodic timer would be
>> registered with RTC_IRQ and a NULL private pointer. The only use
>> I can find is with a non-NULL pointer, which would mean the "else"
>> path is always taken at present for the RTC case (which you now
>> change).
> 
> Right, the else case was always taken because as the comment noted RTC
> would take care of disabling itself (by calling destroy_periodic_time
> from the callback when using strict_mode). When no_ack mode was
> implemented this wasn't taken into account AFAICT, and thus the RTC
> was never removed from the list even when masked.
> 
> I think with no_ack mode the RTC shouldn't have this specific handling
> in pt_update_irq, as it should behave like any other virtual timer.
> I could try to split this as a separate bugfix, but then I would have
> to teach pt_update_irq to differentiate between strict_mode and no_ack
> mode.

A fair part of my confusion was about "&& !pt->priv". I've looked back
at 9607327abbd3 ("x86/HVM: properly handle RTC periodic timer even when
!RTC_PIE"), where this was added. It was, afaict, to cover for
hpet_set_timer() passing NULL with RTC_IRQ. Which makes me suspect that
be07023be115 ("x86/vhpet: add support for level triggered interrupts")
may have subtly broken things.

> Would you be fine if the following is added to the commit message
> instead:
> 
> "Note that the special handling of the RTC timer done in pt_update_irq
> is wrong for the no_ack mode, as the RTC timer callback won't disable
> the timer anymore when it detects the guest is not reading REG_C. As
> such remove the code as part of the removal of strict_mode, and don't
> special case the RTC timer anymore in pt_update_irq."

Not sure yet - as per above I'm still not convinced this part of the
change is correct.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 12:37:22 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161610-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161610: regressions - FAIL
X-Osstest-Versions-This:
    linux=17ae69aba89dbfa2139b7f8024b757ab3cc42f59
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 May 2021 12:37:15 +0000

flight 161610 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161610/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-localmigrate/x10 fail pass in 161594
 test-armhf-armhf-libvirt-raw 17 guest-start/debian.repeat  fail pass in 161594

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                17ae69aba89dbfa2139b7f8024b757ab3cc42f59
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  275 days
Failing since        152366  2020-08-01 20:49:34 Z  274 days  459 attempts
Testing same since   161594  2021-05-02 03:24:05 Z    1 days    2 attempts

------------------------------------------------------------
5922 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 fail    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1604057 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 03 12:43:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 12:43:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121572.229270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldXub-0001bm-UW; Mon, 03 May 2021 12:42:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121572.229270; Mon, 03 May 2021 12:42:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldXub-0001bf-Qq; Mon, 03 May 2021 12:42:57 +0000
Received: by outflank-mailman (input) for mailman id 121572;
 Mon, 03 May 2021 12:42:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldXua-0001bZ-PW
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 12:42:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1b63cec-e117-466a-ba60-775a669d03ef;
 Mon, 03 May 2021 12:42:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C6E9AAE4B;
 Mon,  3 May 2021 12:42:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1b63cec-e117-466a-ba60-775a669d03ef
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620045774; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Sn4JIu/EvibUbw7Ii7ZRLMZBwS1rfXhrQK6idYjDfV0=;
	b=GG0c4X0hH84sSAVhA6Lz2xfpDopCYF8LYRKIKyeu0nz0565yqPgFuw35PCZFsEtTNIx4Y8
	sw3zbQLDUBbAwUYuiul+q407tV5VzLltXIL8zKn/tbDTk8HshDqy8wxayXrMMsq0tF+v7R
	N4t5ztDjqRhHpELLhhTyw/PQHyK9Hpo=
Subject: Re: [PATCH] x86: Always have CR4.PKE set in HVM context
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210429221223.28348-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1bc9a7f7-0bf7-2894-1272-3eec6dbf5a8f@suse.com>
Date: Mon, 3 May 2021 14:42:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210429221223.28348-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 30.04.2021 00:12, Andrew Cooper wrote:
> The sole user of read_pkru() is the emulated pagewalk, and guarded behind
> guest_pku_enabled() which restricts the path to HVM (hap, even) context only.
> 
> The commentary in read_pkru() concerning _PAGE_GNTTAB overlapping with
> _PAGE_PKEY_BITS is only applicable to PV guests.
> 
> The context switch path, via write_ptbase() unconditionally writes CR4 on any
> context switch.
> 
> Therefore, we can guarantee to separate CR4.PKE between PV and HVM context at
> no extra cost.  Set PKE in mmu_cr4_features on boot, so it becomes set in HVM
> context, and clear it in pv_make_cr4().
> 
> Rename read_pkru() to rdpkru() now that it is a simple wrapper around the
> instruction.  This saves two CR4 writes on every pagewalk, which typically
> occurs more than once per emulation.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> It also occurs to me that for HVM/Idle => HVM/Idle context switches, we never
> need to change CR4.  I think this is substantially clearer following XSA-293 /
> c/s b2dd00574a4f ("x86/pv: Rewrite guest %cr4 handling from scratch") which
> introduced pv_make_cr4().

Never needing to change CR4 doesn't uniformly mean writes can be avoided.
Part of the purpose of the writes is to flush the TLB. Per-domain as well
as shadow mappings may be in need of such if global mappings are used
anywhere.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 13:15:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 13:15:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121585.229282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldYPg-0004N3-FY; Mon, 03 May 2021 13:15:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121585.229282; Mon, 03 May 2021 13:15:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldYPg-0004Ml-Bv; Mon, 03 May 2021 13:15:04 +0000
Received: by outflank-mailman (input) for mailman id 121585;
 Mon, 03 May 2021 13:15:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldYPe-0004Mg-LU
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 13:15:02 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3d095fb-48af-42ac-a637-02a3aaf78041;
 Mon, 03 May 2021 13:15:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3d095fb-48af-42ac-a637-02a3aaf78041
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620047700;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=8BD5KPdi1LE76JuB/81Z+5hisx8uZi7CZ3Ap1mjbNus=;
  b=ZH9JY/vdd+hHPok+y4hvOPWld72abVGMqebeXGvA/wPBTO+I7DMpjiBk
   VqPQZhtrmhkuE/OarKvHQ8LOeXcId/x4EzDVY9VjDE3q69MBlduWFNfsu
   InR0PTmXIVwvYoQ6xPBegmyG21RxhUcmFOz+dgHyk2cdbnlNqHYQV1PYU
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 42945044
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="42945044"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Yc8+lG531LPjT8Or0AkHWUfvh3Z5sWiPlG34Wtl8If9jSvcIWnT1/vz4loxg3/g2Bgc1sJg2WZ5Mdmbl3WK7kmSaLF8Q+s1T7XlobKyHCBC70ojnG58f/dmVnc5xnuYeAIEdFbk8Kegw7C+ggG7KSIZ+cvKVxQiKmHfZm5radLYbyWpAG1Qj/jAj8bVNxz+kulndvFsKo0ARYmN/O1zepKSQ8SKcug5MgNZ2A7uKq8UK9CX1RGNiGm/r+3TRO0mh70q4CCGHZevfCLMcOob38Rt5eGZu2HcDFRSUNiIGCJxanq55fkbcAiLRGJNN/vd7GkNvpId8KGcRq1bdykyLSg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C5m17tToisrf52j6nqdrPs9TooiC3tVzUWmKZJAxMao=;
 b=EsLV8+LwD5n3lkf176LOfgD5b/7Yj8P+xrhShnjW34hrQjTVG0Vkl1XYSeER2wWWmbzQoq3xd4LWtcT0xgr/aKdgH+nmCNe0JWfZx6/wtAP/BsWFd+ooe3CLJzLEVqaAp71LXqKo6ubKHmRKHl2aMAfdm4WYimWKDMx/KVyUo0r/EMMdkDP1simCJGLhMlXYr32m0Ud+Ds2wk9w9dfCp/hBqxDH6RLbVs9uGcpWPGlFYXFSrwEqPQlllI6gUr46FQAg6b6lcg4+Rn//SuTgQPn0NgyKUEeEosk05GRulFlnfnzTEgBlM9j4WxwVZShmThoiBuNMf/7m4z/mNh96IQw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=C5m17tToisrf52j6nqdrPs9TooiC3tVzUWmKZJAxMao=;
 b=gwq2AlzQw/Qrt5fugm7dIXm+B7KkjoEpb2dJLZzvDdh8vrAC2KepXa8+wiyCTXG0FhQITQjQTqiReZQq2GnG0ia1fG98UHUUQ4mwY2hF8XpLXZn6fF3flUtGbGSaBZ+DUWoh6lQ7nXmW7Wmhc/0AgSQDSnetxri+8dFmqG3Um+Y=
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210429221223.28348-1-andrew.cooper3@citrix.com>
 <1bc9a7f7-0bf7-2894-1272-3eec6dbf5a8f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] x86: Always have CR4.PKE set in HVM context
Message-ID: <13a257b8-bb0d-cf4b-c198-576516b549f4@citrix.com>
Date: Mon, 3 May 2021 14:14:50 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <1bc9a7f7-0bf7-2894-1272-3eec6dbf5a8f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0157.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::25) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d871a406-5703-47e2-ac9a-08d90e3572e6
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5566:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5566714F52A348F99CE42140BA5B9@SJ0PR03MB5566.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: d871a406-5703-47e2-ac9a-08d90e3572e6
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2021 13:14:57.4710
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Hwxc9tEwwhfdb8pdbX0e/+UFSu9b0LB8mt8FngrbodkRFYarUF32GE+7La/GlOMCug/t5103RiCJ4/wGgySWBE0b/HY90z9GaSIg2FEB8EI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5566
X-OriginatorOrg: citrix.com

On 03/05/2021 13:42, Jan Beulich wrote:
> On 30.04.2021 00:12, Andrew Cooper wrote:
>> The sole user of read_pkru() is the emulated pagewalk, and guarded behind
>> guest_pku_enabled() which restricts the path to HVM (hap, even) context only.
>>
>> The commentary in read_pkru() concerning _PAGE_GNTTAB overlapping with
>> _PAGE_PKEY_BITS is only applicable to PV guests.
>>
>> The context switch path, via write_ptbase() unconditionally writes CR4 on any
>> context switch.
>>
>> Therefore, we can guarantee to separate CR4.PKE between PV and HVM context at
>> no extra cost.  Set PKE in mmu_cr4_features on boot, so it becomes set in HVM
>> context, and clear it in pv_make_cr4().
>>
>> Rename read_pkru() to rdpkru() now that it is a simple wrapper around the
>> instruction.  This saves two CR4 writes on every pagewalk, which typically
>> occurs more than once per emulation.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> It also occurs to me that for HVM/Idle => HVM/Idle context switches, we never
>> need to change CR4.  I think this is substantially clearer following XSA-293 /
>> c/s b2dd00574a4f ("x86/pv: Rewrite guest %cr4 handling from scratch") which
>> introduced pv_make_cr4().
> Never needing to change CR4 doesn't uniformly mean writes can be avoided.
> Part of the purpose of the writes is to flush the TLB. Per-domain as well
> as shadow mappings may be in need of such if global mappings are used
> anywhere.

Per-domain mappings are not global.  Shadows from HVM guests are a) surely not
global given their changeability, and b) in a non-root TLB tag.

Details like this do need checking, but it shouldn't be hard to improve
on what we've currently got.

~Andrew



From xen-devel-bounces@lists.xenproject.org Mon May 03 13:48:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 13:48:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121608.229352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldYva-0007FR-Ms; Mon, 03 May 2021 13:48:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121608.229352; Mon, 03 May 2021 13:48:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldYva-0007FK-Jc; Mon, 03 May 2021 13:48:02 +0000
Received: by outflank-mailman (input) for mailman id 121608;
 Mon, 03 May 2021 13:48:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldYvY-0007FF-U5
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 13:48:00 +0000
Received: from mail-lj1-x22d.google.com (unknown [2a00:1450:4864:20::22d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 65a14cdb-26c2-4965-9da6-fd9880529dab;
 Mon, 03 May 2021 13:48:00 +0000 (UTC)
Received: by mail-lj1-x22d.google.com with SMTP id d15so6807243ljo.12
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 06:48:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 65a14cdb-26c2-4965-9da6-fd9880529dab
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=t8VSLvr8273ChxxroJtJkEHq3wxjGJJ/+aPa9QEx1+s=;
        b=F9CoZZqJgnWRblqgPEkmqzuHT2glOTa2rXBm6PsjrSdk06uniSsQCDsnMLEpkS3Nzq
         U+xrUyv2RFWWNYuzcvE+82KyEmaapIE0gR+3gstID3rYjswyj0Fxk836WG7sM18HOPDF
         HlhnKNGx6azN5/fOV6Y8ksoRs6pz6AyGQc5J7NQEeSIz/0hKq50e60u34TnUod958hAR
         Ekemirkg+JLw8z5k6Mm0MBkynGDsR4re67RkUu/PC+tlw3b2IXrxxjRHMAJWWQIzH2cY
         Nr+HVHlZIxeUKJ3t28EiujZLbyGDR2z2zG27NlmtUjU/Uxzlt7V8vaFVffKRaT2J3B5j
         rrOQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=t8VSLvr8273ChxxroJtJkEHq3wxjGJJ/+aPa9QEx1+s=;
        b=jlNYfYXUAfes67TLZWhv6eyMMq6fe0K5u6T+pMzx9IEMWW7+MHvrrtE27EwNxtdynj
         VKKOA9psljrs2TNH9fqj+d32XC9GYx2iQWUA/eQ1f1Ou2pu6jh9buOE+xEa1susouwDG
         aeEmIY/XtLc6CbAW37TUH9pB9CWvPEU1xAr5ZE7XGkSaGaO98EaF0SeQfJFwXQ2kAQun
         FI8NKhwF++XCzzYcvUY0vAc/2v1IiUjGVa44b+qC4SwF/TrxR4HfFrTN6kR5BwlChczO
         xQPXUgYT255kF4ET9tZml7JPLiLGKhdlvQJXxMB3vKUsCs2l6TzVXZbMiajn00QsT6WM
         YPuQ==
X-Gm-Message-State: AOAM530HFMpPp9BPy6AGMzEDqUuhgZTWcH5qO2+QbJaRjBIheUAHA2+P
	getfpjO1yYByO6Wu5x5q5fr5KsJ7tXOgoLQT/X4=
X-Google-Smtp-Source: ABdhPJwKnPqGNCSh0cuSBMwWuEC8Bkcg3MWlqUMNEm7lk3rTxYvTvbS0OJz3Uj0wHzmP4WLRnvlOL4kEbTR93JtBU/0=
X-Received: by 2002:a2e:a7d4:: with SMTP id x20mr13590940ljp.285.1620049679087;
 Mon, 03 May 2021 06:47:59 -0700 (PDT)
MIME-Version: 1.0
References: <20210423161558.224367-1-anthony.perard@citrix.com> <20210423161558.224367-4-anthony.perard@citrix.com>
In-Reply-To: <20210423161558.224367-4-anthony.perard@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 3 May 2021 09:47:47 -0400
Message-ID: <CAKf6xpugza2tpXq52_TgUUvVfZ7_ccPcbszvu6VYO=ryGAAp5g@mail.gmail.com>
Subject: Re: [XEN PATCH 3/8] libxl: Replace deprecated "cpu-add" QMP command
 by "device_add"
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Apr 23, 2021 at 12:16 PM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> The command "cpu-add" for CPU hotplug is deprecated and has been
> removed from QEMU 6.0 (April 2021). We need to add cpus with the
> command "device_add" now.
>
> In order to find out which parameters to pass to "device_add", we first
> make a call to "query-hotpluggable-cpus", which lists the CPU drivers
> and properties.
>
> The algorithm to figure out which CPU to add, and by extension if any
> CPU needs to be hotplugged, is in the function that adds the cpus.
> Because of that, the command "query-hotpluggable-cpus" is always
> called, even when not needed.
>
> In case we are using a version of QEMU older than 2.7 (Sept 2016),
> which doesn't have "query-hotpluggable-cpus", we fall back to using
> "cpu-add".
>
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> ---
>  tools/libs/light/libxl_domain.c | 87 ++++++++++++++++++++++++++++++++-
>  1 file changed, 85 insertions(+), 2 deletions(-)
>
> diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
> index 8c003aa7cb04..e130deb0757f 100644
> --- a/tools/libs/light/libxl_domain.c
> +++ b/tools/libs/light/libxl_domain.c

> +
> +/* Fallback function for QEMU older than 2.7, when
> + * 'query-hotpluggable-cpus' wasn't available and vcpu object couldn't be
> + * added with 'device_add'. */
> +static void set_vcpuonline_qmp_add_cpu(libxl__egc *egc, libxl__ev_qmp *qmp,
> +                                       const libxl__json_object *response,
> +                                       int rc) { STATE_AO_GC(qmp->ao);

STATE_AO_GC should be on a new line.

With that,

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Thanks,
Jason
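For illustration, the selection algorithm the commit message describes — query the hotpluggable slots, then plug unoccupied ones until the target vCPU count is reached — can be sketched in C. This is a minimal, hedged model, not libxl's actual code: the struct and function names are hypothetical, and a real implementation would parse the QMP JSON response and issue real "device_add" commands.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Hypothetical, simplified model of one entry returned by QEMU's
 * "query-hotpluggable-cpus" command: the driver type, its topology
 * properties, and whether a vcpu object already occupies the slot
 * (QEMU reports occupancy via a "qom-path" field). */
struct hotpluggable_cpu {
    const char *type;      /* e.g. "qemu64-x86_64-cpu" */
    int socket_id;
    int core_id;
    bool plugged;          /* true if the entry has a qom-path */
};

/* Count how many "device_add" calls are needed to bring the number of
 * plugged vcpus up to 'target', marking the entries that would be added.
 * Returns the number of additions, or -1 if QEMU doesn't expose enough
 * hotpluggable slots. */
static int plan_cpu_hotplug(struct hotpluggable_cpu *cpus, size_t n,
                            unsigned int target)
{
    unsigned int plugged = 0;
    int added = 0;
    size_t i;

    for ( i = 0; i < n; i++ )
        if ( cpus[i].plugged )
            plugged++;

    for ( i = 0; i < n && plugged < target; i++ )
    {
        if ( cpus[i].plugged )
            continue;
        /* A real implementation would issue:
         *   device_add driver=<type>,socket-id=...,core-id=... */
        cpus[i].plugged = true;
        plugged++;
        added++;
    }

    return plugged == target ? added : -1;
}
```

Note this structure also explains why "query-hotpluggable-cpus" ends up being called unconditionally: whether anything needs plugging is only known once the slot list is in hand.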


From xen-devel-bounces@lists.xenproject.org Mon May 03 13:50:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 13:50:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121611.229364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldYyL-00085Z-4u; Mon, 03 May 2021 13:50:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121611.229364; Mon, 03 May 2021 13:50:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldYyL-00085S-1q; Mon, 03 May 2021 13:50:53 +0000
Received: by outflank-mailman (input) for mailman id 121611;
 Mon, 03 May 2021 13:50:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldYyJ-00085N-Bb
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 13:50:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d92f5217-d3b0-4323-b6fa-ca72bee66489;
 Mon, 03 May 2021 13:50:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 19F1BB062;
 Mon,  3 May 2021 13:50:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d92f5217-d3b0-4323-b6fa-ca72bee66489
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620049849; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9d2ZDMWcYWgpewOQuhG9aaHgilKUbd8W/T81pjlWNtE=;
	b=S9Hqsei5TXp932nsHa4vy+YoMR0N6LeUsf51b7zomx43YhNiXlp1npt+Bju4pxpTXeCJ9R
	wSXjgcThNdPU/LaaJ34CxKj4Sj8MVnHy5TkR7hhF9mKobytZ/UuafWv1SC9Ecv8Ll2RHOr
	ZGGv8nyTQsZ5yCREC4tlXpIxkYV4Ncg=
Subject: Re: [PATCH v3 01/22] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <69778de6-3b94-64d1-99d9-1a0fcfa503fd@suse.com>
 <YI/e9wyOpsVDkFQi@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aeb6aa8e-7c90-be22-1888-21b7b178e1d1@suse.com>
Date: Mon, 3 May 2021 15:50:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <YI/e9wyOpsVDkFQi@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 13:31, Roger Pau Monné wrote:
> On Thu, Apr 22, 2021 at 04:43:39PM +0200, Jan Beulich wrote:
>> All of the array allocations in grant_table_init() can exceed a page's
>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>> for after boot. We also don't need any of these allocations to be
>> physically contiguous. Introduce interfaces dynamically switching
>> between xmalloc() et al and vmalloc() et al, based on requested size,
>> and use them instead.
>>
>> All the wrappers in the new header get cloned mostly verbatim from
>> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
>> for sizes and to unsigned int for alignments.
> 
> We seem to be growing a non-trivial amount of memory allocation
> families of functions: xmalloc, vmalloc and now xvmalloc.
> 
> I think from a consumer PoV it would make sense to only have two of
> those: one for allocations that need to be physically contiguous,
> and one for allocations that don't.
> 
> Even then, requesting physically contiguous allocations could be
> done by passing a flag to the same interface that's used for
> non-contiguous allocations.
> 
> Maybe another option would be to expand the existing
> v{malloc,realloc,...} set of functions to have your proposed behaviour
> for xv{malloc,realloc,...}?

All of this and some of your remarks further down have already been
discussed. A working group has been formed. No progress since. Yes,
a smaller set of interfaces may be the way to go. Controlling
behavior via flags, otoh, is very much not malloc()-like. Making
existing functions have the intended new behavior is a no-go without
auditing all present uses, to find those few which actually may need
physically contiguous allocations.

Having seen similar naming elsewhere, I did propose xnew() /
xdelete() (plus array and flex-struct counterparts) as the single
new recommended interface; didn't hear back yet. But we'd switch to
that gradually, so intermediately there would still be a larger set
of interfaces.

I'm not convinced we should continue to have byte-granular allocation
functions producing physically contiguous memory. I think the page
allocator should be used directly in such cases.
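The core dispatch idea being discussed — pick xmalloc() or vmalloc() based purely on the requested size, so callers needn't know whether a request crosses a page — can be sketched in a few lines. This is a runnable stand-in, not Xen's `_xvmalloc()`: both branches here just call plain `malloc()`, whereas in Xen they would be the physically contiguous and page-granular allocators respectively.

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* 0 = xmalloc()-style path, 1 = vmalloc()-style path. */
static int xv_choose(size_t size)
{
    return size > PAGE_SIZE;
}

/* Illustrative _xvmalloc(): the caller no longer needs to know whether
 * the request fits in a page. Plain malloc() stands in for both Xen
 * allocators so the sketch is self-contained. */
static void *xv_alloc(size_t size)
{
    return xv_choose(size) ? /* vmalloc() in Xen */ malloc(size)
                           : /* xmalloc() in Xen */ malloc(size);
}
```

The "more powerful" variant Jan alludes to would differ only in `xv_choose()` growing fallback logic (e.g. retrying the other path on failure).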

>> --- /dev/null
>> +++ b/xen/include/xen/xvmalloc.h
>> @@ -0,0 +1,73 @@
>> +
>> +#ifndef __XVMALLOC_H__
>> +#define __XVMALLOC_H__
>> +
>> +#include <xen/cache.h>
>> +#include <xen/types.h>
>> +
>> +/*
>> + * Xen malloc/free-style interface for allocations possibly exceeding a page's
>> + * worth of memory, as long as there's no need to have physically contiguous
>> + * memory allocated.  These should be used in preference to xmalloc() et al
>> + * whenever the size is not known to be constrained to at most a single page.
> 
> Even when it's known that size <= PAGE_SIZE these helpers are
> appropriate, as they would end up using xmalloc, so I think it's fine to
> recommend them universally as long as there's no need to alloc
> physically contiguous memory?
> 
> Granted there's a bit more overhead from the logic to decide between
> using xmalloc or vmalloc &c, but that's IMO not a big enough deal to
> avoid recommending this interface globally for non-contiguous
> allocations.

As long as xmalloc() and vmalloc() are meant to stay around as separate
interfaces, I wouldn't want to "forbid" their use when it's sufficiently
clear that they would be chosen by the new function anyway. Otoh, if the
new function became more powerful in terms of falling back to the
respectively other lower level function, that might be an argument in
favor of always using the new interfaces.

>> + */
>> +
>> +/* Allocate space for typed object. */
>> +#define xvmalloc(_type) ((_type *)_xvmalloc(sizeof(_type), __alignof__(_type)))
>> +#define xvzalloc(_type) ((_type *)_xvzalloc(sizeof(_type), __alignof__(_type)))
>> +
>> +/* Allocate space for array of typed objects. */
>> +#define xvmalloc_array(_type, _num) \
>> +    ((_type *)_xvmalloc_array(sizeof(_type), __alignof__(_type), _num))
>> +#define xvzalloc_array(_type, _num) \
>> +    ((_type *)_xvzalloc_array(sizeof(_type), __alignof__(_type), _num))
>> +
>> +/* Allocate space for a structure with a flexible array of typed objects. */
>> +#define xvzalloc_flex_struct(type, field, nr) \
>> +    ((type *)_xvzalloc(offsetof(type, field[nr]), __alignof__(type)))
>> +
>> +#define xvmalloc_flex_struct(type, field, nr) \
>> +    ((type *)_xvmalloc(offsetof(type, field[nr]), __alignof__(type)))
>> +
>> +/* Re-allocate space for a structure with a flexible array of typed objects. */
>> +#define xvrealloc_flex_struct(ptr, field, nr)                          \
>> +    ((typeof(ptr))_xvrealloc(ptr, offsetof(typeof(*(ptr)), field[nr]), \
>> +                             __alignof__(typeof(*(ptr)))))
>> +
>> +/* Allocate untyped storage. */
>> +#define xvmalloc_bytes(_bytes) _xvmalloc(_bytes, SMP_CACHE_BYTES)
>> +#define xvzalloc_bytes(_bytes) _xvzalloc(_bytes, SMP_CACHE_BYTES)
> 
> I see xmalloc does the same, wouldn't it be enough to align to a lower
> value? Seems quite wasteful to align to 128 on x86 by default?

Yes, it would. Personally (see "[PATCH v2 0/8] assorted replacement of
x[mz]alloc_bytes()") I think these ..._bytes() wrappers should all go
away. Hence I don't think it's very important how exactly they behave,
and in turn it's then best to have them match x[mz]alloc_bytes().
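Roger's point about the default SMP_CACHE_BYTES alignment being wasteful is easy to quantify: with power-of-two alignment, every request is rounded up to the next multiple of the alignment, so a 1-byte `..._bytes()` allocation occupies a full 128 bytes on x86. A small sketch (the rounding formula is standard; the function name is made up for illustration):

```c
#include <assert.h>
#include <stddef.h>

/* Bytes effectively consumed when a request is rounded up to the
 * allocation alignment ('align' must be a power of two). */
static size_t aligned_footprint(size_t size, size_t align)
{
    return (size + align - 1) & ~(align - 1);
}
```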

>> +
>> +/* Free any of the above. */
>> +extern void xvfree(void *);
>> +
>> +/* Free an allocation, and zero the pointer to it. */
>> +#define XVFREE(p) do { \
>> +    xvfree(p);         \
>> +    (p) = NULL;        \
>> +} while ( false )
>> +
>> +/* Underlying functions */
>> +extern void *_xvmalloc(size_t size, unsigned int align);
>> +extern void *_xvzalloc(size_t size, unsigned int align);
>> +extern void *_xvrealloc(void *ptr, size_t size, unsigned int align);
> 
> Nit: I would drop the 'extern' keyword from the function prototypes.

Ah yes, will do. Simply a result of taking the other header as basis.

Jan
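The XVFREE() macro quoted above is an instance of a general free-and-NULL pattern: zeroing the caller's pointer in the same step makes later use-after-free and double-free bugs fail safely. A runnable sketch with plain `free()` standing in for `xvfree()` (the `_DEMO` name is just to avoid implying this is Xen's macro):

```c
#include <assert.h>
#include <stdlib.h>

/* Free an allocation and zero the pointer to it, mirroring XVFREE().
 * Because free(NULL) is a no-op, invoking the macro twice is harmless. */
#define XVFREE_DEMO(p) do { \
    free(p);                \
    (p) = NULL;             \
} while ( 0 )
```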


From xen-devel-bounces@lists.xenproject.org Mon May 03 13:53:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 13:53:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121621.229376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZ0i-0008EZ-NH; Mon, 03 May 2021 13:53:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121621.229376; Mon, 03 May 2021 13:53:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZ0i-0008ES-Ju; Mon, 03 May 2021 13:53:20 +0000
Received: by outflank-mailman (input) for mailman id 121621;
 Mon, 03 May 2021 13:53:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldZ0h-0008EN-EJ
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 13:53:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0befd92a-be2d-4645-8de1-b1c785a63f61;
 Mon, 03 May 2021 13:53:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A90DFB036;
 Mon,  3 May 2021 13:53:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0befd92a-be2d-4645-8de1-b1c785a63f61
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620049997; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AtyNzJXblg1TorskkTaOxOcwAAm/1XOy6DtVGC3zw+M=;
	b=kNW/dfUPLuB+hCowpK85ovdD8oU89MLMZWn8xCh+NgfvM9M5h1wpCtJOTM5ppMAxVitx+9
	8DHYe73u2cMpHx53NL0h5Un0XrJLRpsYqDS5y+TY9Vq4kBuy755gtWpMZiq6O7LTPAU7MI
	D2xIKyEwJ8r+kleSZj7H/fkuu9JPf/U=
Subject: Ping: [PATCH v2 8/8] Arm/optee: don't open-code xzalloc_flex_struct()
From: Jan Beulich <jbeulich@suse.com>
To: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <091b4b91-712f-3526-78d1-80d31faf8e41@suse.com>
 <5fa042ac-9609-eab7-b14d-a59782ef4141@suse.com>
Message-ID: <d61a2123-b501-15d6-e2c1-f18aa4790b46@suse.com>
Date: Mon, 3 May 2021 15:53:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <5fa042ac-9609-eab7-b14d-a59782ef4141@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Volodymyr,

On 21.04.2021 16:59, Jan Beulich wrote:
> The current use of xzalloc_bytes() in optee is nearly an open-coded
> version of xzalloc_flex_struct(), which was introduced after the driver
> was merged.
> 
> The main difference is xzalloc_bytes() will also force the allocation to
> be SMP_CACHE_BYTES aligned and therefore avoid sharing the cache line.
> While sharing the cache line can have an impact on the performance, this
> is also true for most of the other users of x*alloc(), x*alloc_array(),
> and x*alloc_flex_struct(). So if we want to prevent sharing cache lines,
> arranging for this should be done in the allocator itself.
> 
> In this case, we don't need stricter alignment than what the allocator 
> provides. Hence replace the call to xzalloc_bytes() with one of
> xzalloc_flex_struct().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Julien Grall <julien@xen.org>
> ---
> v2: Use commit message very close to what was suggested by Julien.

I realize I forgot to Cc you on the v2 submission, but I didn't hear
back on v1 from you either. May I ask for an ack or otherwise?

Thanks, Jan

> --- a/xen/arch/arm/tee/optee.c
> +++ b/xen/arch/arm/tee/optee.c
> @@ -529,8 +529,7 @@ static struct optee_shm_buf *allocate_op
>      while ( unlikely(old != atomic_cmpxchg(&ctx->optee_shm_buf_pages,
>                                             old, new)) );
>  
> -    optee_shm_buf = xzalloc_bytes(sizeof(struct optee_shm_buf) +
> -                                  pages_cnt * sizeof(struct page *));
> +    optee_shm_buf = xzalloc_flex_struct(struct optee_shm_buf, pages, pages_cnt);
>      if ( !optee_shm_buf )
>      {
>          err_code = -ENOMEM;
> 
> 

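The sizing equivalence behind this replacement can be checked directly: `xzalloc_flex_struct(type, field, nr)` asks for `offsetof(type, field[nr])`, while the open-coded form asked for `sizeof(type) + nr * sizeof(elem)`. Since `sizeof` includes any trailing padding of the header, the `offsetof` form never asks for more, and can ask for slightly less. A sketch with a hypothetical struct shaped like `optee_shm_buf`:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical structure mirroring the shape of optee_shm_buf: a fixed
 * header followed by a flexible array of pointers. */
struct flex {
    long cookie;
    unsigned int page_cnt;
    void *pages[];          /* flexible array member */
};

/* What xzalloc_flex_struct(struct flex, pages, n) requests. */
static size_t flex_size(size_t n)
{
    return offsetof(struct flex, pages[n]);
}

/* The open-coded form the patch removes; sizeof includes the header's
 * trailing padding, so this can over-estimate. */
static size_t open_coded_size(size_t n)
{
    return sizeof(struct flex) + n * sizeof(void *);
}
```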


From xen-devel-bounces@lists.xenproject.org Mon May 03 13:57:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 13:57:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121626.229387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZ4i-0008Pb-8Y; Mon, 03 May 2021 13:57:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121626.229387; Mon, 03 May 2021 13:57:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZ4i-0008PR-5d; Mon, 03 May 2021 13:57:28 +0000
Received: by outflank-mailman (input) for mailman id 121626;
 Mon, 03 May 2021 13:57:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldZ4h-0008PM-00
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 13:57:27 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d8ac41e-d727-4a40-bb3b-24a8baa39817;
 Mon, 03 May 2021 13:57:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d8ac41e-d727-4a40-bb3b-24a8baa39817
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620050245;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=2mHMDwnB+Qqg6IwVstKSE+kgeSH2vCxjdn0ovWOL3ug=;
  b=anhN2r/Us4YJbyZAtGEok7+Ionxb8N1WtLDFHuu33i4TZ8ogdsmhL7tV
   iVPxoiKvVWeVfKpZ3I9e2KZxK1NR8fuafsfJ3hVnpcS4yPfSy7mchtCoX
   lHu+k8OEG2IQxjrMTDnEGhNeFtz1GoadP2/FrR4oiXClqh70hig1oOR62
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 42948339
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="42948339"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=izw1mRqaUMxg9D2lHlLxrCisQMULMjbr12UlyD70JA7WFYdPIa+/MaTAEgcONATilS3JMGYUu10JE+y6pyqF7wPdE6OOQaLvqR5680NwSSIoAwK9XhUHpkkdmElkURiUuvE1VPIdJQhF8h9hQpRmIkzCdLtyOhglRRL8S7gnig9BFphwZoruhulZKojAr9cylBeeZEdtntYEmMZFdhb+kpZA77zVLQhHleUkwsr7Y8b4yiX9qQxG/FqBCJNdZzL5mc/jd83erOrYQb4UD10C8KDiVyqcxAif06KcwQTZX+Kdlx+WW9IdLKbgt0RC9kf6+jrwmobXXsdNN+T/XCuXVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2mHMDwnB+Qqg6IwVstKSE+kgeSH2vCxjdn0ovWOL3ug=;
 b=Om8Cs0p5LLt6/z64DRlJS5xjH/zXKk8RaB1bSQF2FhzIRmj1ZkHQWaOwVHqP2VSkcvvDBAZOdOBrvCtmdiB402WJS1GBgIRwLm6mHkkXY5ZkllNLEFSRy4vEUJfwkhAz5HTeXZUMcrzb/VzEMN+MfbJTXGVUuxUIVBW5FZ+bwsnpQ3iWp7Vk5FGDDjcOEv6McmvL+lH5EgGm404UjokloBQplvoHn4NQSh7iQlDkzHL00VyGa1FLqY2ej/W5Z38vhNz71hFrBjaQSQx8+TX2nxV9vyDR/9QhOykHDUxFvDnZe3kQOx0nW5s1FZh6HtmscgRvCpkudF/XW7UNppOdow==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2mHMDwnB+Qqg6IwVstKSE+kgeSH2vCxjdn0ovWOL3ug=;
 b=haIfum2iOZikJRVWcRVNyIVOZUdFeCU/1q4hZVD9C21wttALjrV0TnpPG13Cd1t99bL1PLJCG7loHPhGICp2ZUTTXqBM8vGmQZsLx9lIRZEq7dx8rbcOVNjXgcDcQJJjHq4VhpUGipD/StkxDWHGRps8C6qi5FL9KdnXhMP0iL0=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <8ba8f016-0aed-277b-bbea-80022d057791@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 03/22] x86/xstate: re-size save area when CPUID policy
 changes
Message-ID: <5a954be8-e213-36d8-27da-4c51243dc280@citrix.com>
Date: Mon, 3 May 2021 14:57:15 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <8ba8f016-0aed-277b-bbea-80022d057791@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0033.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::21) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 36858282-df07-4e1d-b9fd-08d90e3b5f91
X-MS-TrafficTypeDiagnostic: BYAPR03MB3862:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB386210DB0922AF26230C5AC3BA5B9@BYAPR03MB3862.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 36858282-df07-4e1d-b9fd-08d90e3b5f91
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2021 13:57:22.1709
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: B1Z6dStPX/UBMfqcjIjvAeHH4orUJIn0Z95uLhL1ocRCcw0GKg7rx/FDcb4Q36r53rGVpW5ajRLRoB0LxAqllq7mkcP/h2VTuASWsUN6EqA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3862
X-OriginatorOrg: citrix.com

On 22/04/2021 15:44, Jan Beulich wrote:
> vCPU-s get maximum size areas allocated initially. Hidden (and in
> particular default-off) features may allow for a smaller size area to
> suffice.
>
> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Use 1ul instead of 1ull. Re-base.
> ---
> This could be further shrunk if we used XSAVEC / if we really used
> XSAVES, as then we don't need to also cover the holes. But since we
> currently use neither of the two in reality, this would require more
> work than just adding the alternative size calculation here.
>
> Seeing that both vcpu_init_fpu() and cpuid_policy_updated() get called
> from arch_vcpu_create(), I'm not sure we really need this two-stage
> approach - the slightly longer period of time during which
> v->arch.xsave_area would remain NULL doesn't look all that problematic.
> But since xstate_alloc_save_area() gets called for idle vCPU-s, it has
> to stay anyway in some form, so the extra code churn may not be worth
> it.
>
> Instead of cpuid_policy_xcr0_max(), cpuid_policy_xstates() may be the
> interface to use here. But it remains to be determined whether the
> xcr0_accum field is meant to be inclusive of XSS (in which case it would
> better be renamed) or exclusive. Right now there's no difference as we
> don't support any XSS-controlled features.

I've been figuring out what we need to do for supervisor states.  The
current code is not in good shape, but I also think some of the
changes in this series take us in an unhelpful direction.

I've got a cleanup series which I will post shortly.  It interacts
textually although not fundamentally with this series, but does fix
several issues.

For supervisor states, we need to use XSAVES unilaterally, even for PV.
This is because XSS_CET_S needs to be the HVM kernel's context, or Xen's
in PV context (specifically, MSR_PL0_SSP which is the shstk equivalent
of TSS.RSP0).


A consequence is that Xen's data handling shall use the compressed
format, and include supervisor states.  (While in principle we could
manage CET_S, CET_U, and potentially PT when vmtrace gets expanded, each
WRMSR there is a similar order of magnitude to an XSAVES/XRSTORS
instruction.)

I'm planning a host xss setting, similar to mmu_cr4_features, which
shall be the setting in context for everything other than HVM vcpus
(which need the guest setting in context, and/or the VT-x bodge to
support host-only states).  Amongst other things, all context switch
paths, including from-HVM, need to step XSS up to the host setting to
let XSAVES function correctly.

However, a consequence of this is that the size of the xsave area needs
deriving from host, as well as guest-max state.  i.e. even if some VMs
aren't using CET, we still need space in the xsave areas to function
correctly when a single VM is using CET.

Another consequence is that we need to rethink our hypercall behaviour.
There is no such thing as supervisor states in an uncompressed XSAVE
image, which means we can't continue with that being the ABI.

I've also found some substantial issues with how we handle
xcr0/xcr0_accum and plan to address these.  There is no such thing as
xcr0 without the bottom bit set, ever, and xcr0_accum needs to default
to X87|SSE seeing as that's how we use it anyway.  However, in a context
switch, I expect we'll still be using xcr0_accum | host_xss when it
comes to the context switch path.

In terms of actual context switching, we want to be using XSAVES/XRSTORS
whenever it is available, even if we're not using supervisor states.
XSAVES has both the inuse and modified optimisations, without the broken
consequence of XSAVEOPT (which is firmly in the "don't ever use this"
bucket now).

There's no point ever using XSAVEC.  There is no hardware where it
exists in the absence of XSAVES, and can't even in theoretical
circumstances due to (perhaps unintentional) linkage of the CPUID data.
XSAVEC also doesn't use the modified optimisation, and is therefore
strictly worse than XSAVES, even when MSR_XSS is 0.

Therefore, our choice of instruction wants to be XSAVES, or XSAVE, or
FXSAVE, depending on hardware capability.

~Andrew



From xen-devel-bounces@lists.xenproject.org Mon May 03 14:14:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 14:14:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121632.229400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZKu-0001kd-Md; Mon, 03 May 2021 14:14:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121632.229400; Mon, 03 May 2021 14:14:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZKu-0001kW-JR; Mon, 03 May 2021 14:14:12 +0000
Received: by outflank-mailman (input) for mailman id 121632;
 Mon, 03 May 2021 14:14:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldZKs-0001kQ-Su
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 14:14:10 +0000
Received: from mail-lj1-x22e.google.com (unknown [2a00:1450:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf206946-606c-4f7e-88b4-5f043f76fbab;
 Mon, 03 May 2021 14:14:09 +0000 (UTC)
Received: by mail-lj1-x22e.google.com with SMTP id a36so6927102ljq.8
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 07:14:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf206946-606c-4f7e-88b4-5f043f76fbab
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=kJVI2bPr4asnrjKn10q9IGKWJGDWc3UWQSKC9TQfcNg=;
        b=KI6PNqEw2xMoiOslciapKHd8xGLlFF3wfpomZqOdCyXQ/eWk4JSDMMBAKazRAVd9mN
         j6kKloReNvs4RhYBjO+UeVF++/jAH0DoMzoBnE7lXouPcRfYiAunuBz6we0zAH68hy8k
         /Q3OEqgl9yZYbQECWTf1GFcBpixZQOBUAShzAa4xdHZBcm5gYUYQ+k2h+sACngU6vHaf
         IxSwMvUvtN904MrqgLUSCnVJuWk+WBHIAXXdY5jdp/KQUnuhhvEFBDHkM82uQj33lSsf
         M5C3Ma7bvnMNr7A7DyOrhkXvu+BAg4/1hDi8iyPxy/gxG8MAvFnsJWpI602RsRcdQduh
         mZ5Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=kJVI2bPr4asnrjKn10q9IGKWJGDWc3UWQSKC9TQfcNg=;
        b=Wkp9+0h96eVQsMnaLdxENCpCW3BQ6d4r8GZsH4kZwnhrq7h12mSdlyfb9Miq2VT/4p
         U7B/RkLNthy0IouaVR8DhijBrEn3c3OavNSlX33YelD5QWCCMVwB3cyzZYaNUpCzlRj3
         YHIF8RGp0t3sdWw/9uJlofG2PIK3IYrYoXg5v0bu7gADmejOY64XDDt7YeRmWZDBqir7
         wLMI/vDmjSIuA9S/rH4gCY38gdON4HzVliPTEt/EQqdotY9GViVY4DTB1ud5Co1F1jru
         kq3+6vIOhky5vUZQnfVggzPWU5kexPAUXpjR8WKJ0TDfa7oSpQQOKOQAxoG7WWI7Xv/i
         vj7Q==
X-Gm-Message-State: AOAM530eqS140i8DM7C8aeHtt8a+OuyBQkxF/aTbBpuW8j3uEWrmcMLC
	l+9hforrrHW0iu7pJXnuREjUYwF9R9BN/BZoYvk=
X-Google-Smtp-Source: ABdhPJyIFJwaKOIF8lg4ByhHypta3OQA3ZshsYGa8p5IG4g1DqB9Uku1elV7Y6KtTNgg54nbNpn038V9vLxOGHTbeFM=
X-Received: by 2002:a2e:7f14:: with SMTP id a20mr14167556ljd.489.1620051248583;
 Mon, 03 May 2021 07:14:08 -0700 (PDT)
MIME-Version: 1.0
References: <20210423161558.224367-1-anthony.perard@citrix.com>
In-Reply-To: <20210423161558.224367-1-anthony.perard@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Mon, 3 May 2021 10:13:57 -0400
Message-ID: <CAKf6xpt_xkpnNwcq2-WS3SN+Qj8gcz33MaGdfCW=30HzfqrWng@mail.gmail.com>
Subject: Re: [XEN PATCH 0/8] Fix libxl with QEMU 6.0 + remove some more
 deprecated usages.
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, Apr 23, 2021 at 12:16 PM Anthony PERARD
<anthony.perard@citrix.com> wrote:
>
> Patch series available in this git branch:
> https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.deprecated-qemu-qmp-and-cmd-v1
>
> The Xen 4.15 release that went out just before QEMU 6.0 won't be compatible
> with the latter. This patch series fixes libxl to replace use of QMP commands
> that have been removed from QEMU and to fix usage of deprecated commands and
> parameters that will be removed from QEMU in the future.
>
> All of the series should be backported to at least Xen 4.15 or it won't be
> possible to migrate, hotplug CPUs, or change the CD-ROM on HVM guests when
> QEMU 6.0 and newer is used. QEMU 6.0 is about to be released, within a week.
>
> Backport: 4.15
>
> Anthony PERARD (8):
>   libxl: Replace deprecated QMP command by "query-cpus-fast"
>   libxl: Replace QEMU's command line short-form boolean option
>   libxl: Replace deprecated "cpu-add" QMP command by "device_add"
>   libxl: Use -device for cd-rom drives
>   libxl: Assert qmp_ev's state in qmp_ev_qemu_compare_version
>   libxl: Export libxl__qmp_ev_qemu_compare_version
>   libxl: Use `id` with the "eject" QMP command
>   libxl: Replace QMP command "change" by "blockdev-change-media"

For the rest of the series besides
libxl: Replace deprecated QMP command by "query-cpus-fast"
and
libxl: Replace deprecated "cpu-add" QMP command by "device_add"

Reviewed-by: Jason Andryuk <jandryuk@gmail.com>


From xen-devel-bounces@lists.xenproject.org Mon May 03 14:22:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 14:22:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121638.229411 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZT9-0002fR-IK; Mon, 03 May 2021 14:22:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121638.229411; Mon, 03 May 2021 14:22:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZT9-0002fK-FO; Mon, 03 May 2021 14:22:43 +0000
Received: by outflank-mailman (input) for mailman id 121638;
 Mon, 03 May 2021 14:22:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldZT8-0002fF-2A
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 14:22:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9e3e2dcf-7981-472b-a806-d40f7047c150;
 Mon, 03 May 2021 14:22:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47EE2B2E3;
 Mon,  3 May 2021 14:22:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e3e2dcf-7981-472b-a806-d40f7047c150
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620051759; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=54ugadqo6jk1DJ6iuEFaNypD8iMhvQy14YPAp7q2tG4=;
	b=W03zWk2Z6oM2XlFVtzLa2FvpQ9Qa1Fegf37WwojQ8rklIP1reaW8mmjDHK/NHoYPQM4HwM
	5MdNoj5oqeUkoZ768YRHCkGR4er7+leh2X1GlqOveHYyhboz2pvOpNATFwEsbmof9S+yU8
	8v9tQHCRJMC+8J42KZgr6F/VDc9mwpc=
Subject: Re: [PATCH v3 03/22] x86/xstate: re-size save area when CPUID policy
 changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <8ba8f016-0aed-277b-bbea-80022d057791@suse.com>
 <5a954be8-e213-36d8-27da-4c51243dc280@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f515fdfb-d1a6-56d8-5db3-ebddeed23806@suse.com>
Date: Mon, 3 May 2021 16:22:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <5a954be8-e213-36d8-27da-4c51243dc280@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 15:57, Andrew Cooper wrote:
> On 22/04/2021 15:44, Jan Beulich wrote:
>> vCPU-s get maximum size areas allocated initially. Hidden (and in
>> particular default-off) features may allow for a smaller size area to
>> suffice.
>>
>> Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> ---
>> v2: Use 1ul instead of 1ull. Re-base.
>> ---
>> This could be further shrunk if we used XSAVEC / if we really used
>> XSAVES, as then we don't need to also cover the holes. But since we
>> currently use neither of the two in reality, this would require more
>> work than just adding the alternative size calculation here.
>>
>> Seeing that both vcpu_init_fpu() and cpuid_policy_updated() get called
>> from arch_vcpu_create(), I'm not sure we really need this two-stage
>> approach - the slightly longer period of time during which
>> v->arch.xsave_area would remain NULL doesn't look all that problematic.
>> But since xstate_alloc_save_area() gets called for idle vCPU-s, it has
>> to stay anyway in some form, so the extra code churn may not be worth
>> it.
>>
>> Instead of cpuid_policy_xcr0_max(), cpuid_policy_xstates() may be the
>> interface to use here. But it remains to be determined whether the
>> xcr0_accum field is meant to be inclusive of XSS (in which case it would
>> better be renamed) or exclusive. Right now there's no difference as we
>> don't support any XSS-controlled features.
> 
> I've been figuring out what we need to do for supervisor states.  The
> current code is not in a good shape, but I also think some of the
> changes in this series take us in an unhelpful direction.

From reading through the rest of your reply I'm not sure I see what you
mean. ORing in host_xss at certain points shouldn't be a big deal.

> I've got a cleanup series which I will post shortly.  It interacts
> textually although not fundamentally with this series, but does fix
> several issues.
> 
> For supervisor states, we need to use XSAVES unilaterally, even for PV. 
> This is because XSS_CET_S needs to be the HVM kernel's context, or Xen's
> in PV context (specifically, MSR_PL0_SSP which is the shstk equivalent
> of TSS.RSP0).
> 
> 
> A consequence is that Xen's data handling shall use the compressed
> format, and include supervisor states.  (While in principle we could
> manage CET_S, CET_U, and potentially PT when vmtrace gets expanded, each
> WRMSR there is a similar order of magnitude to an XSAVES/XRSTORS
> instruction.)

I agree.

> I'm planning a host xss setting, similar to mmu_cr4_features, which
> shall be the setting in context for everything other than HVM vcpus
> (which need the guest setting in context, and/or the VT-x bodge to
> support host-only states).  Amongst other things, all context switch
> paths, including from-HVM, need to step XSS up to the host setting to
> let XSAVES function correctly.
> 
> However, a consequence of this is that the size of the xsave area needs
> deriving from host, as well as guest-max state.  i.e. even if some VMs
> aren't using CET, we still need space in the xsave areas to function
> correctly when a single VM is using CET.

Right - as said above, taking this into consideration here shouldn't
be overly problematic.

> Another consequence is that we need to rethink our hypercall behaviour. 
> There is no such thing as supervisor states in an uncompressed XSAVE
> image, which means we can't continue with that being the ABI.

I don't think the hypercall input / output blob needs to follow any
specific hardware layout.

> I've also found some substantial issues with how we handle
> xcr0/xcr0_accum and plan to address these.  There is no such thing as
> xcr0 without the bottom bit set, ever, and xcr0_accum needs to default
> to X87|SSE seeing as that's how we use it anyway.  However, in a context
> switch, I expect we'll still be using xcr0_accum | host_xss when it
> comes to the context switch path.

Right, and to avoid confusion I think we also want to move from
xcr0_accum to e.g. xstate_accum, covering both XCR0 and XSS parts
all in one go.

> In terms of actual context switching, we want to be using XSAVES/XRSTORS
> whenever it is available, even if we're not using supervisor states. 
> XSAVES has both the inuse and modified optimisations, without the broken
> consequence of XSAVEOPT (which is firmly in the "don't ever use this"
> bucket now).

The XSAVEOPT anomaly affects user mode only, doesn't it? Or are
you talking of something I have forgotten about?

> There's no point ever using XSAVEC.  There is no hardware where it
> exists in the absence of XSAVES, and can't even in theoretical
> circumstances due to (perhaps unintentional) linkage of the CPUID data. 
> XSAVEC also doesn't use the modified optimisation, and is therefore
> strictly worse than XSAVES, even when MSR_XSS is 0.
> 
> Therefore, our choice of instruction wants to be XSAVES, or XSAVE, or
> FXSAVE, depending on hardware capability.

Makes sense to me (perhaps - see above - minus your omission of
XSAVEOPT here).

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 14:26:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 14:26:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121649.229424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZWk-0002r0-87; Mon, 03 May 2021 14:26:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121649.229424; Mon, 03 May 2021 14:26:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZWk-0002qt-46; Mon, 03 May 2021 14:26:26 +0000
Received: by outflank-mailman (input) for mailman id 121649;
 Mon, 03 May 2021 14:26:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ngRQ=J6=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ldZWj-0002qV-5H
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 14:26:25 +0000
Received: from mail-il1-x134.google.com (unknown [2607:f8b0:4864:20::134])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b67027b-9310-414c-b710-5a85310b8e63;
 Mon, 03 May 2021 14:26:24 +0000 (UTC)
Received: by mail-il1-x134.google.com with SMTP id h6so3793967ila.7
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 07:26:24 -0700 (PDT)
Received: from mail-il1-f182.google.com (mail-il1-f182.google.com.
 [209.85.166.182])
 by smtp.gmail.com with ESMTPSA id z25sm5614971iob.26.2021.05.03.07.26.22
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 03 May 2021 07:26:22 -0700 (PDT)
Received: by mail-il1-f182.google.com with SMTP id p15so3814262iln.3
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 07:26:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b67027b-9310-414c-b710-5a85310b8e63
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=ELcNibbwxopbe4wnzShPTb7xRMLVvYpqQ8un28QQp9U=;
        b=nOyrfh2thpmQagEkzKEHJkMTgRy+3FJJdT9Oo6MZndwooAl8msGQ2IZHFMeS6aJ6Md
         +OeRGM111zl+0t7XmNgYC8WpP2ka9OwZM9VN//prTwlXpUXb+Muztthx/dt4YfUeTqwD
         ene0+k2RSvchnHpwNfAnGfkLV6l7GmkYIbRvE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=ELcNibbwxopbe4wnzShPTb7xRMLVvYpqQ8un28QQp9U=;
        b=UOhNSxIFI4CPa00XUq803vxMg2VjRi+Js/VOcw0oJYonAWOV+fn4L4w6hvHxrDQsAc
         S3Z34MUbu29Jn5K8v4bp8zNOu8Q0WragyJDh8lsiX8MbAOl3IdwCbtF3DvGBeZeiX/7u
         qpR7iRRPNccGwypVjBrKD8RNGrSL6t4Z2Mcmw0/CyBuLH1E+myot/jookHgm/Fl6oJ5T
         bFVK7U6Jpy6MXOR7FspMw+iSlRdgahoMHACosBS+aaKWum2qoDPeCi+/N09lFcDPFNAT
         nXxTz7R8l9f+efoOenwTz2yeMJR81jmQROp+eJMrvNuiOG3aKAWpqi5kslxiFtG+QUXr
         LrvQ==
X-Gm-Message-State: AOAM532mb3um9NnV4hEVE7Mv8ZQSX/JPFDDeh+BMn+rG0yuX79WyRxwJ
	y5xzYYm0ehho+hZjgdmS4Jfo/zGz7E8ZSQ==
X-Google-Smtp-Source: ABdhPJyb25KahPJML9LRmuSA0bTQPqmpTuPaaB0/CjLKnl05wCrQa/ybeGSbW8ptyK9nLIhnzcHosQ==
X-Received: by 2002:a92:c0cf:: with SMTP id t15mr16572406ilf.117.1620051983158;
        Mon, 03 May 2021 07:26:23 -0700 (PDT)
X-Received: by 2002:a05:6e02:f4e:: with SMTP id y14mr3397094ilj.18.1620051971892;
 Mon, 03 May 2021 07:26:11 -0700 (PDT)
MIME-Version: 1.0
References: <20210422081508.3942748-1-tientzu@chromium.org>
 <20210422081508.3942748-15-tientzu@chromium.org> <70b895c2-4a39-bbbd-a719-5c8b6b922026@arm.com>
In-Reply-To: <70b895c2-4a39-bbbd-a719-5c8b6b922026@arm.com>
From: Claire Chang <tientzu@chromium.org>
Date: Mon, 3 May 2021 22:26:00 +0800
X-Gmail-Original-Message-ID: <CALiNf28cc5T-cMZxNPZnrTQvqu2Ge_MmZj-teN4mE_-E-6_6XQ@mail.gmail.com>
Message-ID: <CALiNf28cc5T-cMZxNPZnrTQvqu2Ge_MmZj-teN4mE_-E-6_6XQ@mail.gmail.com>
Subject: Re: [PATCH v5 14/16] dma-direct: Allocate memory from restricted DMA
 pool if available
To: Robin Murphy <robin.murphy@arm.com>
Cc: Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
	Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding <treding@nvidia.com>, 
	mingo@kernel.org, bauerman@linux.ibm.com, peterz@infradead.org, 
	Greg KH <gregkh@linuxfoundation.org>, Saravana Kannan <saravanak@google.com>, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, nouveau@lists.freedesktop.org, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Fri, Apr 23, 2021 at 9:46 PM Robin Murphy <robin.murphy@arm.com> wrote:
>
> On 2021-04-22 09:15, Claire Chang wrote:
> > The restricted DMA pool is preferred if available.
> >
> > The restricted DMA pools provide a basic level of protection against the
> > DMA overwriting buffer contents at unexpected times. However, to protect
> > against general data leakage and system memory corruption, the system
> > needs to provide a way to lock down the memory access, e.g., MPU.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > ---
> >   kernel/dma/direct.c | 35 ++++++++++++++++++++++++++---------
> >   1 file changed, 26 insertions(+), 9 deletions(-)
> >
> > diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
> > index 7a27f0510fcc..29523d2a9845 100644
> > --- a/kernel/dma/direct.c
> > +++ b/kernel/dma/direct.c
> > @@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
> >   static void __dma_direct_free_pages(struct device *dev, struct page *page,
> >                                   size_t size)
> >   {
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +     if (swiotlb_free(dev, page, size))
> > +             return;
> > +#endif
> >       dma_free_contiguous(dev, page, size);
> >   }
> >
> > @@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
> >
> >       gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
> >                                          &phys_limit);
> > -     page = dma_alloc_contiguous(dev, size, gfp);
> > +
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +     page = swiotlb_alloc(dev, size);
> > +     if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> > +             __dma_direct_free_pages(dev, page, size);
> > +             page = NULL;
> > +     }
> > +#endif
> > +
> > +     if (!page)
> > +             page = dma_alloc_contiguous(dev, size, gfp);
> >       if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> >               dma_free_contiguous(dev, page, size);
> >               page = NULL;
> > @@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >               gfp |= __GFP_NOWARN;
> >
> >       if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> > -         !force_dma_unencrypted(dev)) {
> > +         !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
> >               page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
> >               if (!page)
> >                       return NULL;
> > @@ -161,8 +175,8 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >       }
> >
> >       if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> > -         !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > -         !dev_is_dma_coherent(dev))
> > +         !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> > +         !is_dev_swiotlb_force(dev))
> >               return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
> >
> >       /*
> > @@ -172,7 +186,9 @@ void *dma_direct_alloc(struct device *dev, size_t size,
> >       if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> >           !gfpflags_allow_blocking(gfp) &&
> >           (force_dma_unencrypted(dev) ||
> > -          (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
> > +          (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > +           !dev_is_dma_coherent(dev))) &&
> > +         !is_dev_swiotlb_force(dev))
> >               return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
> >
> >       /* we always manually zero the memory once we are done */
> > @@ -253,15 +269,15 @@ void dma_direct_free(struct device *dev, size_t size,
> >       unsigned int page_order = get_order(size);
> >
> >       if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
> > -         !force_dma_unencrypted(dev)) {
> > +         !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
> >               /* cpu_addr is a struct page cookie, not a kernel address */
> >               dma_free_contiguous(dev, cpu_addr, size);
> >               return;
> >       }
> >
> >       if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
> > -         !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
> > -         !dev_is_dma_coherent(dev)) {
> > +         !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
> > +         !is_dev_swiotlb_force(dev)) {
> >               arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
> >               return;
> >       }
> > @@ -289,7 +305,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
> >       void *ret;
> >
> >       if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
> > -         force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
> > +         force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
> > +         !is_dev_swiotlb_force(dev))
> >               return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
>
> Wait, this seems broken for non-coherent devices - in that case we need
> to return a non-cacheable address, but we can't simply fall through into
> the remapping path below in GFP_ATOMIC context. That's why we need the
> atomic pool concept in the first place :/

Sorry for the late reply. I'm not very familiar with this. I wonder
whether the memory returned here must be coherent. If so, could we say
that for this case one must set up a separate device coherent pool
(shared-dma-pool) and go with dma_alloc_from_dev_coherent()[1]?

[1] https://elixir.bootlin.com/linux/v5.12/source/kernel/dma/mapping.c#L435
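
For reference, a device coherent pool of the kind mentioned above is
declared through a reserved-memory node with the "shared-dma-pool"
compatible (the addresses, sizes and node names below are placeholders
for illustration):

```dts
reserved-memory {
    #address-cells = <1>;
    #size-cells = <1>;
    ranges;

    /* no-map makes this a per-device coherent pool rather than CMA */
    coherent_pool: coherent-pool@50000000 {
        compatible = "shared-dma-pool";
        reg = <0x50000000 0x400000>;
        no-map;
    };
};

example_dev: device@60000000 {
    compatible = "vendor,example-dev";
    reg = <0x60000000 0x1000>;
    memory-region = <&coherent_pool>;
};
```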

>
> Unless I've overlooked something, we're still using the regular
> cacheable linear map address of the dma_io_tlb_mem buffer, no?
>
> Robin.
>
> >
> >       page = __dma_direct_alloc_pages(dev, size, gfp);
> >


From xen-devel-bounces@lists.xenproject.org Mon May 03 14:38:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 14:38:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121655.229435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZin-0003mu-BA; Mon, 03 May 2021 14:38:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121655.229435; Mon, 03 May 2021 14:38:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZin-0003mn-87; Mon, 03 May 2021 14:38:53 +0000
Received: by outflank-mailman (input) for mailman id 121655;
 Mon, 03 May 2021 14:38:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldZim-0003mi-Iz
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 14:38:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 13357f0e-b62f-41b8-befb-bed81e559c3f;
 Mon, 03 May 2021 14:38:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7B12FB154;
 Mon,  3 May 2021 14:38:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 13357f0e-b62f-41b8-befb-bed81e559c3f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620052730; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EI0/5AZAJ0srZZhXeydEjwnccgoJ4Y6/d/4/L9xJUeU=;
	b=ms0MGcdcfMhSPzt9WJ1HFCIJgRFRWABiTCNVYc0AMImCutqT07ZwHrggz9frmm8sYGQ9rD
	ktCs7asBWa4h45KuaHiu/Rj/T21ZQMYpYQy5G2xs8mCjsO7wXoxVi3AbuK8wvwOQt4Q+Jk
	miyhReHo8WxpsyrzMF/Jza8+E4ILvgY=
Subject: Re: [PATCH v4 2/3] xen/pci: Refactor PCI MSI intercept related code
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1619707144.git.rahul.singh@arm.com>
 <07cb9f45a91a283af1991c42266555bb0bfe3b71.1619707144.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <65539f2a-8b0c-7f1a-6de1-4032140a4e0e@suse.com>
Date: Mon, 3 May 2021 16:38:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <07cb9f45a91a283af1991c42266555bb0bfe3b71.1619707144.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.04.2021 16:46, Rahul Singh wrote:
> --- /dev/null
> +++ b/xen/drivers/passthrough/msi-intercept.c
> @@ -0,0 +1,53 @@
> +/*
> + * Copyright (C) 2008,  Netronome Systems, Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/init.h>
> +#include <xen/pci.h>
> +#include <asm/msi.h>
> +#include <asm/hvm/io.h>
> +
> +int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
> +{
> +    int rc;
> +
> +    if ( pdev->msix )
> +    {
> +        rc = pci_reset_msix_state(pdev);
> +        if ( rc )
> +            return rc;
> +        msixtbl_init(d);
> +    }
> +
> +    return 0;
> +}
> +
> +void pdev_dump_msi(const struct pci_dev *pdev)
> +{
> +    const struct msi_desc *msi;
> +
> +    list_for_each_entry ( msi, &pdev->msi_list, list )
> +        printk("- MSIs < %d >", msi->irq);

Only the %d and a blank should be part of the format string inside the
loop body; the rest wants printing exactly once.
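To make the suggestion concrete, here is a standalone sketch of the intended pattern (the names and the buffer-based output are illustrative only; Xen's real code uses printk() and the struct msi_desc list API, not this helper):

```c
#include <stdio.h>
#include <string.h>

/* Illustration: only the per-entry "%d " belongs inside the loop; the
 * surrounding "- MSIs < " prefix and ">" suffix are emitted exactly once. */
struct msi_desc { int irq; };

void dump_msi(char *buf, size_t len,
              const struct msi_desc *msi, unsigned int count)
{
    size_t off = snprintf(buf, len, "- MSIs < ");

    for ( unsigned int i = 0; i < count; i++ )
        off += snprintf(buf + off, len - off, "%d ", msi[i].irq);

    snprintf(buf + off, len - off, ">");
}
```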

> +static inline size_t vmsix_table_size(const struct vpci *vpci, unsigned int nr)
> +{
> +    return
> +        (nr == VPCI_MSIX_TABLE) ? vpci->msix->max_entries * PCI_MSIX_ENTRY_SIZE
> +                                : ROUNDUP(DIV_ROUND_UP(vpci->msix->max_entries,
> +                                                       8), 8);

I'm afraid I don't view this as an acceptable way of wrapping lines.
How about

    return (nr == VPCI_MSIX_TABLE)
           ? vpci->msix->max_entries * PCI_MSIX_ENTRY_SIZE
           : ROUNDUP(DIV_ROUND_UP(vpci->msix->max_entries, 8), 8);

> @@ -428,6 +458,31 @@ int vpci_make_msix_hole(const struct pci_dev *pdev)
>      return 0;
>  }
>  
> +int vpci_remove_msix_regions(const struct vpci *vpci, struct rangeset *mem)
> +{
> +    const struct vpci_msix *msix = vpci->msix;
> +    unsigned int i;
> +    int rc;
> +
> +    for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
> +    {
> +        unsigned long start = PFN_DOWN(vmsix_table_addr(vpci, i));
> +        unsigned long end = PFN_DOWN(vmsix_table_addr(vpci, i) +
> +                vmsix_table_size(vpci, i) - 1);
> +
> +        rc = rangeset_remove_range(mem, start, end);
> +        if ( rc )
> +        {
> +            printk(XENLOG_G_WARNING
> +                    "Failed to remove MSIX table [%lx, %lx]: %d\n",
> +                    start, end, rc);

Indentation looks to be off by one space on the last two lines.

> --- /dev/null
> +++ b/xen/include/xen/msi-intercept.h
> @@ -0,0 +1,49 @@
> +/*
> + * Copyright (C) 2008,  Netronome Systems, Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#ifndef __XEN_MSI_INTERCEPT_H_
> +#define __XEN_MSI_INTERCEPT_H_
> +
> +#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
> +
> +#include <asm/msi.h>
> +
> +int pdev_msix_assign(struct domain *d, struct pci_dev *pdev);
> +void pdev_dump_msi(const struct pci_dev *pdev);
> +
> +#else /* !CONFIG_HAS_PCI_MSI_INTERCEPT */
> +
> +static inline int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
> +{
> +    return 0;
> +}
> +
> +static inline void pdev_dump_msi(const struct pci_dev *pdev) {}
> +static inline void pci_cleanup_msi(struct pci_dev *pdev) {}

I don't think this last one is intercept related (and hence doesn't belong
here)?

> @@ -148,6 +150,7 @@ struct vpci_vcpu {
>  };
>  
>  #ifdef __XEN__
> +#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT

Since both start and ...

> +static inline void vpci_msi_free(struct vpci *vpci) {}
> +#endif /* CONFIG_HAS_PCI_MSI_INTERCEPT */
>  #endif /* __XEN__ */

... end look to match, may I suggest to simply replace the __XEN__ ones,
as the test harness isn't supposed to (randomly) define CONFIG_*? Or
alternatively at least combine both #ifdef-s?
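For illustration, the combined-guard alternative could look like the sketch below (the defines merely simulate what the compiler and Kconfig would provide in a real Xen build, and VPCI_MSI_DECLS is a made-up name standing in for the guarded declarations):

```c
/* Simulated build environment -- in real Xen code __XEN__ and
 * CONFIG_HAS_PCI_MSI_INTERCEPT come from the toolchain/Kconfig,
 * not from the header itself. */
#define __XEN__
#define CONFIG_HAS_PCI_MSI_INTERCEPT

/* One combined guard instead of two nested #ifdef blocks: */
#if defined(__XEN__) && defined(CONFIG_HAS_PCI_MSI_INTERCEPT)
#define VPCI_MSI_DECLS 1
#else
#define VPCI_MSI_DECLS 0
#endif

int vpci_msi_decls_visible(void)
{
    return VPCI_MSI_DECLS;
}
```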

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 14:46:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 14:46:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121660.229448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZqL-0004i1-5D; Mon, 03 May 2021 14:46:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121660.229448; Mon, 03 May 2021 14:46:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZqL-0004hu-1y; Mon, 03 May 2021 14:46:41 +0000
Received: by outflank-mailman (input) for mailman id 121660;
 Mon, 03 May 2021 14:46:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldZqJ-0004hp-Ew
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 14:46:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3c2a5f07-55ae-437a-8979-a7c9b3462e92;
 Mon, 03 May 2021 14:46:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4FE0FB19B;
 Mon,  3 May 2021 14:46:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c2a5f07-55ae-437a-8979-a7c9b3462e92
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620053197; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=eT4aP9mmfvbYpaF/Aa54jjFTrk212r7i0SUa9wwU5SU=;
	b=Yp0BSqJe/c5j4RNr/AdhmeWekomtajcLiggsSOCB3hXmaRwKW3gdVSj5ipJsRZTeqZU85U
	RJPTBRfYsS9b911uCOQTC8wK5VBeynkKbgL/shzKrD/JUTSVYC48ymB8mAWIiT3UnDuE8O
	VDOsccTwyekXRzC5fs38t/7oyWb9Dwo=
Subject: Re: [PATCH v4 3/3] xen/pci: Refactor MSI code that implements MSI
 functionality within XEN
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Paul Durrant <paul@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
 xen-devel@lists.xenproject.org
References: <cover.1619707144.git.rahul.singh@arm.com>
 <60b4c33fdcc2f7ad68d383ffae191e22b0b32f1c.1619707144.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bbc50008-da47-a5e2-501b-a9c06ce38335@suse.com>
Date: Mon, 3 May 2021 16:46:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <60b4c33fdcc2f7ad68d383ffae191e22b0b32f1c.1619707144.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 29.04.2021 16:46, Rahul Singh wrote:
> MSI code that implements MSI functionality to support MSI within XEN is
> not usable on ARM. Move the code under CONFIG_HAS_PCI_MSI_INTERCEPT flag
> to gate the code for ARM.
> 
> Currently, we have no idea how MSI functionality will be supported for
> other architectures, therefore we have decided to move the code under
> CONFIG_PCI_MSI_INTERCEPT. We know this is not the right flag to gate the
> code, but to avoid an extra flag we decided to use it.

My objection remains: Actively putting code under the wrong gating
CONFIG_* is imo quite a bit worse than keeping it under a too wide one
(e.g. CONFIG_X86), if introducing a separate CONFIG_HAS_PCI_MSI is
deemed undesirable for whatever reason. Otherwise every abuse of
CONFIG_PCI_MSI_INTERCEPT ought to get a comment to the effect of this
being an abuse, which in particular for code you move into
xen/drivers/passthrough/msi-intercept.c would end up sufficiently odd.
(As a minor extra remark, putting deliberately misplaced code at the
top of a file rather than at its bottom is likely to add to possible
confusion down the road.)

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 14:48:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 14:48:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121663.229459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZrk-0004oi-H4; Mon, 03 May 2021 14:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121663.229459; Mon, 03 May 2021 14:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldZrk-0004ob-E3; Mon, 03 May 2021 14:48:08 +0000
Received: by outflank-mailman (input) for mailman id 121663;
 Mon, 03 May 2021 14:48:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iacE=J6=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ldZri-0004oV-Pk
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 14:48:07 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ceffe932-87d1-42c0-8f5c-d106409c7349;
 Mon, 03 May 2021 14:48:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ceffe932-87d1-42c0-8f5c-d106409c7349
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620053285;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Nz7vPGC3Mw3YCooAlDH3j29Mg4awrUWSkegvVxd5JM0=;
  b=GhTHfdIgUbSZQ+dCwPA/mM3KLehR65eZ6GgWLMKwyL2N1d1V/o26ZUjX
   G2PhxXHBGPE3TpISi+HjbR2Ur4cq+sG/txiVNYGtxFC2mlnH2PaFV9TDZ
   0XEElwz789GI/jIGzvkeuORyXSW0fVLvcXAd79mLp58TtUTxUwr6BKWrl
   A=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 42954217
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="42954217"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=YKiN2IftHEkTaeCZDp8th+bQxtjQBA85Gn7nhoEApom0LXreNFCw4DhmOkDJbW4pLhQToY06Jwx7+GC8Umd44QwprTX0yGNNkKV3WEipG2QxiAoIdmZ4vrysfveYBZ4BtDwxcBJmPKmkrawjFDXWmicLNJJJGcpcu18aBp21P3X89jRqbpU/Rk8TtCAXuQ4FgNp/ykmqhueNdEGVYOdODINRVSXIoDMk6uLDrCzV+qKvG2SZ5M5Gygiud31WG8t/nfaJp8DHM9ow53y8903UctzkxH5EHnW0M1wN2pD2MRSiawGPY+ZL9S3o+Uw1mp7zak+k5KCItreTeyrW3592CQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=55J7BXfXNPgVyghL59YD0llMk3eR8G1yedZKy4xUUko=;
 b=TgOfz1nLN0Bef/5Tv38HDNSOnZeNnZlOvrX7z3rZUrS+iQ1qZZi201ouAWg3P/yrnhgtFwo5hA3O+CxCIxKyMrjxS9qBgMMxEfnNAttfOvJEO1sV70ExVhKd3+ZeN+KVwjNe7Kem092tjc/ejAwal1gMHRxcVr32jJI0d5S9QW5+6J1RZLOg3dFbsYhUMcl69K6b+ayEv2pzORnjZNBh5coIQyP5136UGnG7ALKAsd/7wWSovmYhLHVCt1SUvzmZhW9C9bq/JYdFBxoGvs5asy1H6oJloYPJjJ8zlRxMk63jUivbK/wyqBTeRa8/lzzuzbOeJEqHF0bsWbOSmA5HbA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=55J7BXfXNPgVyghL59YD0llMk3eR8G1yedZKy4xUUko=;
 b=UwfWs5G4iiE6yda9+jM27iDu40uZWnbzrvhTozY2OBFWM408pxVR8IYr8FxoN7RSpPKSCMM9dr9fnYSLrW2KYozQCNCVMvretmxBQDpw2Ig6A1V0scLygsXpyxtcsikRIbSIWybomLTEUDm1/aeBWF/yKkjN+hjH/X8L2y34IXk=
Date: Mon, 3 May 2021 16:47:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 01/12] x86/rtc: drop code related to strict mode
Message-ID: <YJANG3LeuA3Ygt/Q@Air-de-Roger>
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-2-roger.pau@citrix.com>
 <f282a2a2-e5cb-6a65-690a-b9c27c03089a@suse.com>
 <YI/CSKpqWrilNKi8@Air-de-Roger>
 <5b06565e-1f2e-3498-c18f-e7eac0042761@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5b06565e-1f2e-3498-c18f-e7eac0042761@suse.com>
X-ClientProxiedBy: MR2P264CA0015.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a79c18fe-435b-4b30-e957-08d90e42734c
X-MS-TrafficTypeDiagnostic: DM5PR03MB3147:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB31475DE68BED9878AF7F0AF58F5B9@DM5PR03MB3147.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: a79c18fe-435b-4b30-e957-08d90e42734c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2021 14:48:01.5901
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: g0ZTyF+WvZX+yGHXOAikeFlFqdA58/mM/3umV8LxDm+7d/kxdi4w9cDA8XFVCuCvjgP3X+8MsELM7IOscpYmJQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3147
X-OriginatorOrg: citrix.com

On Mon, May 03, 2021 at 02:26:51PM +0200, Jan Beulich wrote:
> On 03.05.2021 11:28, Roger Pau Monné wrote:
> > On Thu, Apr 29, 2021 at 04:53:07PM +0200, Jan Beulich wrote:
> >> On 20.04.2021 16:07, Roger Pau Monne wrote:
> >>> --- a/xen/arch/x86/hvm/rtc.c
> >>> +++ b/xen/arch/x86/hvm/rtc.c
> >>> @@ -46,15 +46,6 @@
> >>>  #define epoch_year     1900
> >>>  #define get_year(x)    (x + epoch_year)
> >>>  
> >>> -enum rtc_mode {
> >>> -   rtc_mode_no_ack,
> >>> -   rtc_mode_strict
> >>> -};
> >>> -
> >>> -/* This must be in sync with how hvmloader sets the ACPI WAET flags. */
> >>> -#define mode_is(d, m) ((void)(d), rtc_mode_##m == rtc_mode_no_ack)
> >>> -#define rtc_mode_is(s, m) mode_is(vrtc_domain(s), m)
> >>
> >> Leaving aside my concerns about this removal, I think some form of
> >> reference to hvmloader and its respective behavior should remain
> >> here, presumably in form of a (replacement) comment.
> > 
> > What about adding a comment in rtc_pf_callback:
> > 
> > /*
> >  * The current RTC implementation will inject an interrupt regardless
> >  * of whether REG_C has been read since the last interrupt was
> >  * injected. This is why the ACPI WAET 'RTC good' flag must be
> >  * unconditionally set by hvmloader.
> >  */
> 
> For one I'm unconvinced this is "must"; I think it is "may". We're
> producing excess interrupts for an unaware guest, aiui. Presumably most
> guests can tolerate this, but - second - it may be unnecessary overhead.
> Which in turn may be why nobody has complained so far, as this sort of
> overhead may be hard to notice. I also suspect the RTC may not be used
> very often for generating a periodic interrupt.

I agree that there might be some overhead here, but asking for the
guest to read REG_C in order to receive further interrupts also seems
like quite a lot of overhead because of all the interception involved.
IMO it's best to unconditionally offer the no_ack mode (like Xen has
been doing).

Also strict_mode wasn't really behaving according to the spec either,
as it would inject up to 10 interrupts without the guest having read
REG_C.

> (I've also not seen the
> flag named "RTC good" - the ACPI constant is ACPI_WAET_RTC_NO_ACK, for
> example.)

I'm reading the WAET spec as published by Microsoft:

http://msdn.microsoft.com/en-us/windows/hardware/gg487524.aspx

Where the flag is listed as 'RTC good'. Maybe that's outdated now?
Seems to be the official source for the specification from
https://uefi.org/acpi.

> >>> @@ -337,8 +336,7 @@ int pt_update_irq(struct vcpu *v)
> >>>      {
> >>>          if ( pt->pending_intr_nr )
> >>>          {
> >>> -            /* RTC code takes care of disabling the timer itself. */
> >>> -            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) &&
> >>> +            if ( pt_irq_masked(pt) &&
> >>>                   /* Level interrupts should be asserted even if masked. */
> >>>                   !pt->level )
> >>>              {
> >>
> >> I'm struggling to relate this to any other part of the patch. In
> >> particular I can't find the case where a periodic timer would be
> >> registered with RTC_IRQ and a NULL private pointer. The only use
> >> I can find is with a non-NULL pointer, which would mean the "else"
> >> path is always taken at present for the RTC case (which you now
> >> change).
> > 
> > Right, the else case was always taken because as the comment noted RTC
> > would take care of disabling itself (by calling destroy_periodic_time
> > from the callback when using strict_mode). When no_ack mode was
> > implemented this wasn't taken into account AFAICT, and thus the RTC
> > was never removed from the list even when masked.
> > 
> > I think with no_ack mode the RTC shouldn't have this specific handling
> > in pt_update_irq, as it should behave like any other virtual timer.
> > I could try to split this as a separate bugfix, but then I would have
> > to teach pt_update_irq to differentiate between strict_mode and no_ack
> > mode.
> 
> A fair part of my confusion was about "&& !pt->priv".

I think you meant "|| !pt->priv"?

> I've looked back
> at 9607327abbd3 ("x86/HVM: properly handle RTC periodic timer even when
> !RTC_PIE"), where this was added. It was, afaict, to cover for
> hpet_set_timer() passing NULL with RTC_IRQ.

That's tricky, as hpet_set_timer hardcodes 8 instead of using RTC_IRQ,
which makes it really easy to miss.

> Which makes me suspect that
> be07023be115 ("x86/vhpet: add support for level triggered interrupts")
> may have subtly broken things.

Right - that would have caused the RTC irq, when generated from the
HPET, to no longer be suspended when masked (as pt->priv would no
longer be NULL). Could be fixed with:

diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index ca94e8b4538..f2cbd12f400 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -318,7 +318,8 @@ static void hpet_set_timer(HPETState *h, unsigned int tn,
                          hpet_tick_to_ns(h, diff),
                          oneshot ? 0 : hpet_tick_to_ns(h, h->hpet.period[tn]),
                          irq, timer_level(h, tn) ? hpet_timer_fired : NULL,
-                         (void *)(unsigned long)tn, timer_level(h, tn));
+                         timer_level(h, tn) ? (void *)(unsigned long)tn : NULL,
+                         timer_level(h, tn));
 }
 
 static inline uint64_t hpet_fixup_reg(

Passing again NULL as the callback private data for edge triggered
interrupts.

> > Would you be fine if the following is added to the commit message
> > instead:
> > 
> > "Note that the special handling of the RTC timer done in pt_update_irq
> > is wrong for the no_ack mode, as the RTC timer callback won't disable
> > the timer anymore when it detects the guest is not reading REG_C. As
> > such remove the code as part of the removal of strict_mode, and don't
> > special case the RTC timer anymore in pt_update_irq."
> 
> Not sure yet - as per above I'm still not convinced this part of the
> change is correct.

I believe part of this handling is kind of bogus - for example I'm
unsure Xen should account masked interrupt injections as missed ticks.
A guest might decide to mask it's interrupt source for whatever
reason, and then it shouldn't receive a flurry of interrupts when
unmasked. Ie: missed ticks should only be accounted for interrupts
that should have been delivered but the guest wasn't scheduled. I
think such model would also simplify some of the logic that we
currently have.
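As a toy model of that accounting rule (invented names, not Xen code): drop ticks the guest deliberately masked, and only replay ticks lost because the vCPU wasn't running:

```c
#include <stdbool.h>

/* Toy model, not Xen's implementation: decide whether a periodic tick
 * that could not be injected should be accounted as "missed", i.e.
 * replayed once injection becomes possible again. */
bool account_missed_tick(bool masked_by_guest, bool vcpu_running)
{
    if ( masked_by_guest )
        return false;      /* guest opted out: drop, don't replay */
    return !vcpu_running;  /* lost only because we weren't scheduled */
}
```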

In fact I have a patch on top of this current series, which I haven't
posted yet, that implements this new mode of not accounting masked
interrupts as missed ticks to be delivered later.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 03 14:54:39 2021
Date: Mon, 3 May 2021 16:54:19 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, "Stefano Stabellini" <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v3 01/22] mm: introduce xvmalloc() et al and use for
 grant table allocations
Message-ID: <YJAOm+rmKb5gbYJq@Air-de-Roger>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <69778de6-3b94-64d1-99d9-1a0fcfa503fd@suse.com>
 <YI/e9wyOpsVDkFQi@Air-de-Roger>
 <aeb6aa8e-7c90-be22-1888-21b7b178e1d1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <aeb6aa8e-7c90-be22-1888-21b7b178e1d1@suse.com>

On Mon, May 03, 2021 at 03:50:48PM +0200, Jan Beulich wrote:
> On 03.05.2021 13:31, Roger Pau Monné wrote:
> > On Thu, Apr 22, 2021 at 04:43:39PM +0200, Jan Beulich wrote:
> >> All of the array allocations in grant_table_init() can exceed a page's
> >> worth of memory, which xmalloc()-based interfaces aren't really suitable
> >> for after boot. We also don't need any of these allocations to be
> >> physically contiguous. Introduce interfaces dynamically switching
> >> between xmalloc() et al and vmalloc() et al, based on requested size,
> >> and use them instead.
> >>
> >> All the wrappers in the new header get cloned mostly verbatim from
> >> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
> >> for sizes and to unsigned int for alignments.
> > 
> > We seem to be growing a non-trivial amount of memory allocation
> > families of functions: xmalloc, vmalloc and now xvmalloc.
> > 
> > I think from a consumer PoV it would make sense to only have two of
> > those: one for allocations that need to be physically contiguous,
> > and one for allocations that don't require it.
> > 
> > Even then, requesting physically contiguous allocations could be
> > done by passing a flag to the same interface that's used for
> > non-contiguous allocations.
> > 
> > Maybe another option would be to expand the existing
> > v{malloc,realloc,...} set of functions to have your proposed behaviour
> > for xv{malloc,realloc,...}?
> 
> All of this and some of your remarks further down have already been
> discussed. A working group has been formed. No progress since. Yes,
> a smaller set of interfaces may be the way to go. Controlling
> behavior via flags, otoh, is very much not malloc()-like. Making
> existing functions have the intended new behavior is a no-go without
> auditing all present uses, to find those few which actually may need
> physically contiguous allocations.

But you could make your proposed xvmalloc logic the implementation
behind vmalloc, as that would still be perfectly fine and safe? (ie:
existing users of vmalloc already expect non-physically contiguous
memory). You would just be optimizing the size < PAGE_SIZE case for
that interface?

> Having seen similar naming elsewhere, I did propose xnew() /
> xdelete() (plus array and flex-struct counterparts) as the single
> new recommended interface; didn't hear back yet. But we'd switch to
> that gradually, so intermediately there would still be a larger set
> of interfaces.
> 
> I'm not convinced we should continue to have byte-granular allocation
> functions producing physically contiguous memory. I think the page
> allocator should be used directly in such cases.
> 
> >> --- /dev/null
> >> +++ b/xen/include/xen/xvmalloc.h
> >> @@ -0,0 +1,73 @@
> >> +
> >> +#ifndef __XVMALLOC_H__
> >> +#define __XVMALLOC_H__
> >> +
> >> +#include <xen/cache.h>
> >> +#include <xen/types.h>
> >> +
> >> +/*
> >> + * Xen malloc/free-style interface for allocations possibly exceeding a page's
> >> + * worth of memory, as long as there's no need to have physically contiguous
> >> + * memory allocated.  These should be used in preference to xmalloc() et al
> >> + * whenever the size is not known to be constrained to at most a single page.
> > 
> > Even when it's known that size <= PAGE_SIZE these helpers are
> > appropriate as they would end up using xmalloc, so I think it's fine to
> > recommend them universally as long as there's no need to alloc
> > physically contiguous memory?
> > 
> > Granted there's a bit more overhead from the logic to decide between
> > using xmalloc or vmalloc &c, but that's IMO not that big of a deal in
> > order to not recommend this interface globally for non-contiguous
> > alloc.
> 
> As long as xmalloc() and vmalloc() are meant to stay around as separate
> interfaces, I wouldn't want to "forbid" their use when it's sufficiently
> clear that they would be chosen by the new function anyway. Otoh, if the
> new function became more powerful in terms of falling back to the

What do you mean with more powerful here?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 03 14:58:12 2021
Subject: Re: [PATCH v4 01/12] x86/rtc: drop code related to strict mode
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-2-roger.pau@citrix.com>
 <f282a2a2-e5cb-6a65-690a-b9c27c03089a@suse.com>
 <YI/CSKpqWrilNKi8@Air-de-Roger>
 <5b06565e-1f2e-3498-c18f-e7eac0042761@suse.com>
 <YJANG3LeuA3Ygt/Q@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d8ed89e8-d13a-9ed6-e92b-fc7072b8382e@suse.com>
Date: Mon, 3 May 2021 16:58:07 +0200
In-Reply-To: <YJANG3LeuA3Ygt/Q@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 16:47, Roger Pau Monné wrote:
> On Mon, May 03, 2021 at 02:26:51PM +0200, Jan Beulich wrote:
>> On 03.05.2021 11:28, Roger Pau Monné wrote:
>>> On Thu, Apr 29, 2021 at 04:53:07PM +0200, Jan Beulich wrote:
>>>> On 20.04.2021 16:07, Roger Pau Monne wrote:
>> (I've also not seen the
>> flag named "RTC good" - the ACPI constant is ACPI_WAET_RTC_NO_ACK, for
>> example.)
> 
> I'm reading the WAET spec as published by Microsoft:
> 
> http://msdn.microsoft.com/en-us/windows/hardware/gg487524.aspx
> 
> Where the flag is listed as 'RTC good'. Maybe that's outdated now?
> Seems to be the official source for the specification from
> https://uefi.org/acpi.

Well, I guess the wording wasn't used for the constant's name because
the RTC isn't "bad" otherwise?

>>>>> @@ -337,8 +336,7 @@ int pt_update_irq(struct vcpu *v)
>>>>>      {
>>>>>          if ( pt->pending_intr_nr )
>>>>>          {
>>>>> -            /* RTC code takes care of disabling the timer itself. */
>>>>> -            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) &&
>>>>> +            if ( pt_irq_masked(pt) &&
>>>>>                   /* Level interrupts should be asserted even if masked. */
>>>>>                   !pt->level )
>>>>>              {
>>>>
>>>> I'm struggling to relate this to any other part of the patch. In
>>>> particular I can't find the case where a periodic timer would be
>>>> registered with RTC_IRQ and a NULL private pointer. The only use
>>>> I can find is with a non-NULL pointer, which would mean the "else"
>>>> path is always taken at present for the RTC case (which you now
>>>> change).
>>>
>>> Right, the else case was always taken because as the comment noted RTC
>>> would take care of disabling itself (by calling destroy_periodic_time
>>> from the callback when using strict_mode). When no_ack mode was
>>> implemented this wasn't taken into account AFAICT, and thus the RTC
>>> was never removed from the list even when masked.
>>>
>>> I think with no_ack mode the RTC shouldn't have this specific handling
>>> in pt_update_irq, as it should behave like any other virtual timer.
>>> I could try to split this as a separate bugfix, but then I would have
>>> to teach pt_update_irq to differentiate between strict_mode and no_ack
>>> mode.
>>
>> A fair part of my confusion was about "&& !pt->priv".
> 
> I think you meant "|| !pt->priv"?

Oops, indeed.

>> I've looked back
>> at 9607327abbd3 ("x86/HVM: properly handle RTC periodic timer even when
>> !RTC_PIE"), where this was added. It was, afaict, to cover for
>> hpet_set_timer() passing NULL with RTC_IRQ.
> 
> That's tricky, as hpet_set_timer hardcodes 8 instead of using RTC_IRQ
> which makes it really easy to miss.
> 
>> Which makes me suspect that
>> be07023be115 ("x86/vhpet: add support for level triggered interrupts")
>> may have subtly broken things.
> 
> Right - as that would have made the RTC irq when generated from the
> HPET no longer be suspended when masked (as pt->priv would no longer
> be NULL). Could be fixed with:
> 
> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> index ca94e8b4538..f2cbd12f400 100644
> --- a/xen/arch/x86/hvm/hpet.c
> +++ b/xen/arch/x86/hvm/hpet.c
> @@ -318,7 +318,8 @@ static void hpet_set_timer(HPETState *h, unsigned int tn,
>                           hpet_tick_to_ns(h, diff),
>                           oneshot ? 0 : hpet_tick_to_ns(h, h->hpet.period[tn]),
>                           irq, timer_level(h, tn) ? hpet_timer_fired : NULL,
> -                         (void *)(unsigned long)tn, timer_level(h, tn));
> +                         timer_level(h, tn) ? (void *)(unsigned long)tn : NULL,
> +                         timer_level(h, tn));
>  }
>  
>  static inline uint64_t hpet_fixup_reg(
> 
> Passing again NULL as the callback private data for edge triggered
> interrupts.

Right, plus perhaps at the same time replacing the hardcoded 8.

>>> Would you be fine if the following is added to the commit message
>>> instead:
>>>
>>> "Note that the special handling of the RTC timer done in pt_update_irq
>>> is wrong for the no_ack mode, as the RTC timer callback won't disable
>>> the timer anymore when it detects the guest is not reading REG_C. As
>>> such remove the code as part of the removal of strict_mode, and don't
>>> special case the RTC timer anymore in pt_update_irq."
>>
>> Not sure yet - as per above I'm still not convinced this part of the
>> change is correct.
> 
> I believe part of this handling is kind of bogus - for example I'm
> unsure Xen should account masked interrupt injections as missed ticks.
> A guest might decide to mask its interrupt source for whatever
> reason, and then it shouldn't receive a flurry of interrupts when
> unmasked. Ie: missed ticks should only be accounted for interrupts
> that should have been delivered but the guest wasn't scheduled. I
> think such model would also simplify some of the logic that we
> currently have.
> 
> In fact I have a patch on top of this current series which I haven't
> posted yet that does implement this new mode of not accounting masked
> interrupts as missed ticks to be delivered later.

This may be problematic: Iirc one of the goals of this mode is to cover
for the case where a guest simply doesn't get around to unmasking the
IRQ until the next one occurs. Yes, it feels bogus, but I'm not sure it
can be done away with. I also can't think of a heuristic by which the
two scenarios could be told apart halfway reliably.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 15:21:46 2021
Subject: Re: [PATCH v3 01/22] mm: introduce xvmalloc() et al and use for grant
 table allocations
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <69778de6-3b94-64d1-99d9-1a0fcfa503fd@suse.com>
 <YI/e9wyOpsVDkFQi@Air-de-Roger>
 <aeb6aa8e-7c90-be22-1888-21b7b178e1d1@suse.com>
 <YJAOm+rmKb5gbYJq@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <340fed73-973c-feba-074d-8bfa6eeae6d6@suse.com>
Date: Mon, 3 May 2021 17:21:37 +0200
In-Reply-To: <YJAOm+rmKb5gbYJq@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 16:54, Roger Pau Monné wrote:
> On Mon, May 03, 2021 at 03:50:48PM +0200, Jan Beulich wrote:
>> On 03.05.2021 13:31, Roger Pau Monné wrote:
>>> On Thu, Apr 22, 2021 at 04:43:39PM +0200, Jan Beulich wrote:
>>>> All of the array allocations in grant_table_init() can exceed a page's
>>>> worth of memory, which xmalloc()-based interfaces aren't really suitable
>>>> for after boot. We also don't need any of these allocations to be
>>>> physically contiguous. Introduce interfaces dynamically switching
>>>> between xmalloc() et al and vmalloc() et al, based on requested size,
>>>> and use them instead.
>>>>
>>>> All the wrappers in the new header get cloned mostly verbatim from
>>>> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
>>>> for sizes and to unsigned int for alignments.
>>>
>>> We seem to be growing a non-trivial amount of memory allocation
>>> families of functions: xmalloc, vmalloc and now xvmalloc.
>>>
>>> I think from a consumer PoV it would make sense to only have two of
>>> those: one for allocations that need to be physically contiguous,
>>> and one for allocations that don't require it.
>>>
>>> Even then, requesting physically contiguous allocations could be
>>> done by passing a flag to the same interface that's used for
>>> non-contiguous allocations.
>>>
>>> Maybe another option would be to expand the existing
>>> v{malloc,realloc,...} set of functions to have your proposed behaviour
>>> for xv{malloc,realloc,...}?
>>
>> All of this and some of your remarks further down have already been
>> discussed. A working group has been formed. No progress since. Yes,
>> a smaller set of interfaces may be the way to go. Controlling
>> behavior via flags, otoh, is very much not malloc()-like. Making
>> existing functions have the intended new behavior is a no-go without
>> auditing all present uses, to find those few which actually may need
>> physically contiguous allocations.
> 
> But you could make your proposed xvmalloc logic the implementation
> behind vmalloc, as that would still be perfectly fine and safe? (ie:
> existing users of vmalloc already expect non-physically contiguous
> memory). You would just be optimizing the size < PAGE_SIZE case for
> that interface?

Existing callers of vmalloc() may expect page alignment of the
returned address.

>>>> --- /dev/null
>>>> +++ b/xen/include/xen/xvmalloc.h
>>>> @@ -0,0 +1,73 @@
>>>> +
>>>> +#ifndef __XVMALLOC_H__
>>>> +#define __XVMALLOC_H__
>>>> +
>>>> +#include <xen/cache.h>
>>>> +#include <xen/types.h>
>>>> +
>>>> +/*
>>>> + * Xen malloc/free-style interface for allocations possibly exceeding a page's
>>>> + * worth of memory, as long as there's no need to have physically contiguous
>>>> + * memory allocated.  These should be used in preference to xmalloc() et al
>>>> + * whenever the size is not known to be constrained to at most a single page.
>>>
>>> Even when it's known that size <= PAGE_SIZE these helpers are
>>> appropriate as they would end up using xmalloc, so I think it's fine to
>>> recommend them universally as long as there's no need to alloc
>>> physically contiguous memory?
>>>
>>> Granted there's a bit more overhead from the logic to decide between
>>> using xmalloc or vmalloc &c, but that's IMO not that big of a deal in
>>> order to not recommend this interface globally for non-contiguous
>>> alloc.
>>
>> As long as xmalloc() and vmalloc() are meant to stay around as separate
>> interfaces, I wouldn't want to "forbid" their use when it's sufficiently
>> clear that they would be chosen by the new function anyway. Otoh, if the
>> new function became more powerful in terms of falling back to the
> 
> What do you mean with more powerful here?

Well, right now the function is very simplistic, looking just at the size
and doing no fallback attempts at all. Linux's kvmalloc() goes a little
farther. What I see as an option is for either form of allocation to fall
back to the other form in case the first attempt fails. This would cover
- out of memory Xen heap for small allocs,
- out of VA space for large allocs.
And of course, like Linux does (or at least did at the time I looked at
their code), the choice which of the backing functions to call could also
become more sophisticated over time.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 15:28:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 15:28:57 +0000
Date: Mon, 3 May 2021 17:28:24 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 01/12] x86/rtc: drop code related to strict mode
Message-ID: <YJAWmGCtaTktGRG0@Air-de-Roger>
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-2-roger.pau@citrix.com>
 <f282a2a2-e5cb-6a65-690a-b9c27c03089a@suse.com>
 <YI/CSKpqWrilNKi8@Air-de-Roger>
 <5b06565e-1f2e-3498-c18f-e7eac0042761@suse.com>
 <YJANG3LeuA3Ygt/Q@Air-de-Roger>
 <d8ed89e8-d13a-9ed6-e92b-fc7072b8382e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d8ed89e8-d13a-9ed6-e92b-fc7072b8382e@suse.com>

On Mon, May 03, 2021 at 04:58:07PM +0200, Jan Beulich wrote:
> On 03.05.2021 16:47, Roger Pau Monné wrote:
> > On Mon, May 03, 2021 at 02:26:51PM +0200, Jan Beulich wrote:
> >> On 03.05.2021 11:28, Roger Pau Monné wrote:
> >>> On Thu, Apr 29, 2021 at 04:53:07PM +0200, Jan Beulich wrote:
> >>>> On 20.04.2021 16:07, Roger Pau Monne wrote:
> >> (I've also not seen the
> >> flag named "RTC good" - the ACPI constant is ACPI_WAET_RTC_NO_ACK, for
> >> example.)
> > 
> > I'm reading the WAET spec as published by Microsoft:
> > 
> > http://msdn.microsoft.com/en-us/windows/hardware/gg487524.aspx
> > 
> > Where the flag is listed as 'RTC good'. Maybe that's outdated now?
> > Seems to be the official source for the specification from
> > https://uefi.org/acpi.
> 
> Well, I guess the wording wasn't used for the constant's name because
> the RTC isn't "bad" otherwise?

I guess so, that's the name given by Microsoft anyway. The
description speaks of an 'enhanced RTC', so you could differentiate
between a plain RTC and an enhanced one :).

> >>>>> @@ -337,8 +336,7 @@ int pt_update_irq(struct vcpu *v)
> >>>>>      {
> >>>>>          if ( pt->pending_intr_nr )
> >>>>>          {
> >>>>> -            /* RTC code takes care of disabling the timer itself. */
> >>>>> -            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) &&
> >>>>> +            if ( pt_irq_masked(pt) &&
> >>>>>                   /* Level interrupts should be asserted even if masked. */
> >>>>>                   !pt->level )
> >>>>>              {
> >>>>
> >>>> I'm struggling to relate this to any other part of the patch. In
> >>>> particular I can't find the case where a periodic timer would be
> >>>> registered with RTC_IRQ and a NULL private pointer. The only use
> >>>> I can find is with a non-NULL pointer, which would mean the "else"
> >>>> path is always taken at present for the RTC case (which you now
> >>>> change).
> >>>
> >>> Right, the else case was always taken because as the comment noted RTC
> >>> would take care of disabling itself (by calling destroy_periodic_time
> >>> from the callback when using strict_mode). When no_ack mode was
> >>> implemented this wasn't taken into account AFAICT, and thus the RTC
> >>> was never removed from the list even when masked.
> >>>
> >>> I think with no_ack mode the RTC shouldn't have this specific handling
> >>> in pt_update_irq, as it should behave like any other virtual timer.
> >>> I could try to split this as a separate bugfix, but then I would have
> >>> to teach pt_update_irq to differentiate between strict_mode and no_ack
> >>> mode.
> >>
> >> A fair part of my confusion was about "&& !pt->priv".
> > 
> > I think you meant "|| !pt->priv"?
> 
> Oops, indeed.
> 
> >> I've looked back
> >> at 9607327abbd3 ("x86/HVM: properly handle RTC periodic timer even when
> >> !RTC_PIE"), where this was added. It was, afaict, to cover for
> >> hpet_set_timer() passing NULL with RTC_IRQ.
> > 
> > That's tricky, as hpet_set_timer hardcodes 8 instead of using RTC_IRQ
> > which makes it really easy to miss.
> > 
> >> Which makes me suspect that
> >> be07023be115 ("x86/vhpet: add support for level triggered interrupts")
> >> may have subtly broken things.
> > 
> > Right - as that would have made the RTC irq when generated from the
> > HPET no longer be suspended when masked (as pt->priv would no longer
> > be NULL). Could be fixed with:
> > 
> > diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
> > index ca94e8b4538..f2cbd12f400 100644
> > --- a/xen/arch/x86/hvm/hpet.c
> > +++ b/xen/arch/x86/hvm/hpet.c
> > @@ -318,7 +318,8 @@ static void hpet_set_timer(HPETState *h, unsigned int tn,
> >                           hpet_tick_to_ns(h, diff),
> >                           oneshot ? 0 : hpet_tick_to_ns(h, h->hpet.period[tn]),
> >                           irq, timer_level(h, tn) ? hpet_timer_fired : NULL,
> > -                         (void *)(unsigned long)tn, timer_level(h, tn));
> > +                         timer_level(h, tn) ? (void *)(unsigned long)tn : NULL,
> > +                         timer_level(h, tn));
> >  }
> >  
> >  static inline uint64_t hpet_fixup_reg(
> > 
> > Passing again NULL as the callback private data for edge triggered
> > interrupts.
> 
> Right, plus perhaps at the same time replacing the hardcoded 8.

Right, but if you agree to take this patch and remove strict_mode, then
the emulated RTC won't disable itself anymore, and hence needs to be
handled like any other virtual timer?

I will submit the HPET patch so it can be backported to stable
releases anyway; I just wanted to check whether you would agree to
remove strict_mode, and whether you then agree that the special
handling of the RTC done in pt_update_irq is no longer needed.

> >>> Would you be fine if the following is added to the commit message
> >>> instead:
> >>>
> >>> "Note that the special handling of the RTC timer done in pt_update_irq
> >>> is wrong for the no_ack mode, as the RTC timer callback won't disable
> >>> the timer anymore when it detects the guest is not reading REG_C. As
> >>> such remove the code as part of the removal of strict_mode, and don't
> >>> special case the RTC timer anymore in pt_update_irq."
> >>
> >> Not sure yet - as per above I'm still not convinced this part of the
> >> change is correct.
> > 
> > I believe part of this handling is kind of bogus - for example I'm
> > unsure Xen should account masked interrupt injections as missed ticks.
> > A guest might decide to mask its interrupt source for whatever
> > reason, and then it shouldn't receive a flurry of interrupts when
> > unmasked. Ie: missed ticks should only be accounted for interrupts
> > that should have been delivered but the guest wasn't scheduled. I
> > think such model would also simplify some of the logic that we
> > currently have.
> > 
> > In fact I have a patch on top of this current series which I haven't
> > posted yet that does implement this new mode of not accounting masked
> > interrupts as missed ticks to the delivered later.
> 
> This may be problematic: Iirc one of the goals of this mode is to cover
> for the case where a guest simply doesn't get around to unmasking the
> IRQ until the next one occurs. Yes, it feels bogus, but I'm not sure it
> can be done away with.

Well, an OS shouldn't really mask the interrupt source without being
capable of handling missed interrupts. Even when running natively, an
SMM could steal time from the OS and thus cause timer ticks to be missed?

> I also can't seem to be able to think of a
> heuristic by which the two scenarios could be told apart halfway
> reliably.

I've tested with Windows 7 limited to 2% CPU capacity, and it seems able
to keep track of time correctly when masked timer interrupts are not
accounted as missed ticks. Note that doing the same with
no_missed_ticks_pending leads to time skews (so limiting the CPU
capacity to 2% does indeed force timer ticks to accumulate).

Anyway, we can discuss later, once this initial batch of patches is
in.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Mon May 03 15:39:56 2021
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 5/5] x86/cpuid: Fix handling of xsave dynamic leaves
Date: Mon, 3 May 2021 16:39:38 +0100
Message-ID: <20210503153938.14109-6-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210503153938.14109-1-andrew.cooper3@citrix.com>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

If the max leaf is greater than 0xd but xsave is not available to the guest, then the
current XSAVE size should not be filled in.  This is a latent bug for now as
the guest max leaf is 0xd, but will become problematic in the future.

The comment concerning XSS state is wrong.  VT-x doesn't manage host/guest
state automatically, but there is provision for "host only" bits to be set, so
the implications are still accurate.

Introduce {xstate,hw}_compressed_size() helpers to mirror the uncompressed
ones.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpuid.c         | 23 +++++++--------------
 xen/arch/x86/xstate.c        | 49 ++++++++++++++++++++++++++++++++++++++++++++
 xen/include/asm-x86/xstate.h |  1 +
 3 files changed, 57 insertions(+), 16 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index c7f8388e5d..92745aa63f 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -1041,24 +1041,15 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     case XSTATE_CPUID:
         switch ( subleaf )
         {
+        case 0:
+            if ( p->basic.xsave )
+                res->b = xstate_uncompressed_size(v->arch.xcr0);
+            break;
+
         case 1:
             if ( p->xstate.xsaves )
-            {
-                /*
-                 * TODO: Figure out what to do for XSS state.  VT-x manages
-                 * host vs guest MSR_XSS automatically, so as soon as we start
-                 * supporting any XSS states, the wrong XSS will be in
-                 * context.
-                 */
-                BUILD_BUG_ON(XSTATE_XSAVES_ONLY != 0);
-
-                /*
-                 * Read CPUID[0xD,0/1].EBX from hardware.  They vary with
-                 * enabled XSTATE, and appropraite XCR0|XSS are in context.
-                 */
-        case 0:
-                res->b = cpuid_count_ebx(leaf, subleaf);
-            }
+                res->b = xstate_compressed_size(v->arch.xcr0 |
+                                                v->arch.msrs->xss.raw);
             break;
         }
         break;
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index d4c01da574..03489f0cf4 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -602,6 +602,55 @@ unsigned int xstate_uncompressed_size(uint64_t xcr0)
     return size;
 }
 
+static unsigned int hw_compressed_size(uint64_t xstates)
+{
+    uint64_t curr_xcr0 = get_xcr0(), curr_xss = get_msr_xss();
+    unsigned int size;
+    bool ok;
+
+    ok = set_xcr0(xstates & ~XSTATE_XSAVES_ONLY);
+    ASSERT(ok);
+    set_msr_xss(xstates & XSTATE_XSAVES_ONLY);
+
+    size = cpuid_count_ebx(XSTATE_CPUID, 1);
+
+    ok = set_xcr0(curr_xcr0);
+    ASSERT(ok);
+    set_msr_xss(curr_xss);
+
+    return size;
+}
+
+unsigned int xstate_compressed_size(uint64_t xstates)
+{
+    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
+
+    xstates &= ~XSTATE_FP_SSE;
+    for_each_set_bit ( i, &xstates, 63 )
+    {
+        if ( test_bit(i, &xstate_align) )
+            size = ROUNDUP(size, 64);
+
+        size += xstate_sizes[i];
+    }
+
+    /* In debug builds, cross-check our calculation with hardware. */
+    if ( IS_ENABLED(CONFIG_DEBUG) )
+    {
+        unsigned int hwsize;
+
+        xstates |= XSTATE_FP_SSE;
+        hwsize = hw_compressed_size(xstates);
+
+        if ( size != hwsize )
+            printk_once(XENLOG_ERR "%s(%#"PRIx64") size %#x != hwsize %#x\n",
+                        __func__, xstates, size, hwsize);
+        size = hwsize;
+    }
+
+    return size;
+}
+
 /* Collect the information of processor's extended state */
 void xstate_init(struct cpuinfo_x86 *c)
 {
diff --git a/xen/include/asm-x86/xstate.h b/xen/include/asm-x86/xstate.h
index 02d6f171b8..ecf7bbc5cd 100644
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -108,6 +108,7 @@ void xstate_free_save_area(struct vcpu *v);
 int xstate_alloc_save_area(struct vcpu *v);
 void xstate_init(struct cpuinfo_x86 *c);
 unsigned int xstate_uncompressed_size(uint64_t xcr0);
+unsigned int xstate_compressed_size(uint64_t states);
 
 static inline uint64_t xgetbv(unsigned int index)
 {
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon May 03 15:39:56 2021
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/5] x86/xstate: Elide redundant writes in set_xcr0()
Date: Mon, 3 May 2021 16:39:34 +0100
Message-ID: <20210503153938.14109-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210503153938.14109-1-andrew.cooper3@citrix.com>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

XSETBV is an expensive instruction as, amongst other things, it involves
reconfiguring the instruction decode at the frontend of the pipeline.

We have several paths which reconfigure %xcr0 in quick succession (the context
switch path has 5, including the fpu save/restore helpers), and only a single
caller takes any care to try to skip redundant writes.

Update set_xcr0() to perform amortisation automatically, and simplify the
__context_switch() path as a consequence.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/domain.c |  4 +---
 xen/arch/x86/xstate.c | 15 +++++++++++----
 2 files changed, 12 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/domain.c b/xen/arch/x86/domain.c
index 4dc27f798e..50a27197b5 100644
--- a/xen/arch/x86/domain.c
+++ b/xen/arch/x86/domain.c
@@ -1977,9 +1977,7 @@ static void __context_switch(void)
         memcpy(stack_regs, &n->arch.user_regs, CTXT_SWITCH_STACK_BYTES);
         if ( cpu_has_xsave )
         {
-            u64 xcr0 = n->arch.xcr0 ?: XSTATE_FP_SSE;
-
-            if ( xcr0 != get_xcr0() && !set_xcr0(xcr0) )
+            if ( !set_xcr0(n->arch.xcr0 ?: XSTATE_FP_SSE) )
                 BUG();
 
             if ( cpu_has_xsaves && is_hvm_vcpu(n) )
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index 3794d9a5a5..f82dae8053 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -55,11 +55,18 @@ static inline bool xsetbv(u32 index, u64 xfeatures)
     return lo != 0;
 }
 
-bool set_xcr0(u64 xfeatures)
+bool set_xcr0(u64 val)
 {
-    if ( !xsetbv(XCR_XFEATURE_ENABLED_MASK, xfeatures) )
-        return false;
-    this_cpu(xcr0) = xfeatures;
+    uint64_t *this_xcr0 = &this_cpu(xcr0);
+
+    if ( *this_xcr0 != val )
+    {
+        if ( !xsetbv(XCR_XFEATURE_ENABLED_MASK, val) )
+            return false;
+
+        *this_xcr0 = val;
+    }
+
     return true;
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon May 03 15:39:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 15:39:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121701.229550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldaft-0001Do-Jf; Mon, 03 May 2021 15:39:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121701.229550; Mon, 03 May 2021 15:39:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldaft-0001Df-GH; Mon, 03 May 2021 15:39:57 +0000
Received: by outflank-mailman (input) for mailman id 121701;
 Mon, 03 May 2021 15:39:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldafr-00019M-Qf
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 15:39:55 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3bbfcf4-ecb2-4c99-a13a-5e390c53d09e;
 Mon, 03 May 2021 15:39:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3bbfcf4-ecb2-4c99-a13a-5e390c53d09e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620056390;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=UBDI32eRY/hcnx9jebSRvWF1wyrbcX7Hg6wiMTv7Ipw=;
  b=S8dCWq2rP7/xXKspjjOofl+wDIrtDuFJQLS8Hk6HU38vHvzkiyA+sLwJ
   sN07PFNNkPZ1RmeOR6/LEKzucLuw+eS8fvOan78RU9IKbMDd56Bx1auA3
   8YmqwN7RlAfJb/69VjrNucZH9Pku+4cJZ3Wpl0j6tajNwbxyFxbqr7SCO
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: H5HLuDGyqEByLnCPYbYjLdx7mfF/ERASTva30yvuQ1iok75tvfyaLNmFDOJjcdrg22kapJlzVo
 /DYx6XvBIdcuSXrsWC2CXKgZhZxazfprOKt5dQJz0I1bNkUPTPNQvfw2zrPiN7b+3OEUuUAmYo
 0MkXfJgzwAeV8myO2Kx6f7EqhHyjn6qTUc9mdRVk8FsUxNRXBKZkMNicsQxj1HdFoAfUvv5+VF
 YdqFWwptXKqRhyk06x/iJkSd9+S4a7zYPcFkLhITqW6X+uOX6QhwuT+79ahopfezdA9puN9XoA
 oDg=
X-SBRS: 5.1
X-MesageID: 43332332
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:KLSibKFGXtC7fUmdpLqEF8eALOonbusQ8zAX/mp2TgFYddHdqt
 C2kJ0guSPcpRQwfDUbmd6GMLSdWn+0z/VIyKQYILvKZmbbkUSyKoUK1+Xf6hntATf3+OIY9Y
 oISchDIfnxCVQ/ssrg+gm/FL8boeWvy6yjiefAw3oFd2gDAcxdxj1kAQWWGFAefngkObMFEv
 Onl696jgvlVXMLbtmqQlkpNtKzw+HjpdbdT1orJzNP0njtsQ+V
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="43332332"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 4/5] x86/cpuid: Simplify recalculate_xstate()
Date: Mon, 3 May 2021 16:39:37 +0100
Message-ID: <20210503153938.14109-5-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210503153938.14109-1-andrew.cooper3@citrix.com>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Make use of the new xstate_uncompressed_size() helper rather than maintaining
the running calculation while accumulating feature components.

The rest of the CPUID data can come directly from the raw cpuid policy.  All
per-component data forms an ABI through the behaviour of the X{SAVE,RSTOR}*
instructions, and is constant.

Use for_each_set_bit() rather than opencoding a slightly awkward version of
it.  Mask the attributes in ecx down based on the visible features.  This
isn't actually necessary for any components or attributes defined at the time
of writing (up to AMX), but is added out of an abundance of caution.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

Using min() in for_each_set_bit() leads to awful code generation, as it
prohibits the optimisations for spotting that the bitmap is <= BITS_PER_LONG.
As p->xstate is long enough already, use a BUILD_BUG_ON() instead.
---
 xen/arch/x86/cpuid.c | 52 +++++++++++++++++-----------------------------------
 1 file changed, 17 insertions(+), 35 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 752bf244ea..c7f8388e5d 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -154,8 +154,7 @@ static void sanitise_featureset(uint32_t *fs)
 static void recalculate_xstate(struct cpuid_policy *p)
 {
     uint64_t xstates = XSTATE_FP_SSE;
-    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
-    unsigned int i, Da1 = p->xstate.Da1;
+    unsigned int i, ecx_bits = 0, Da1 = p->xstate.Da1;
 
     /*
      * The Da1 leaf is the only piece of information preserved in the common
@@ -167,61 +166,44 @@ static void recalculate_xstate(struct cpuid_policy *p)
         return;
 
     if ( p->basic.avx )
-    {
         xstates |= X86_XCR0_YMM;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_YMM_POS] +
-                          xstate_sizes[X86_XCR0_YMM_POS]);
-    }
 
     if ( p->feat.mpx )
-    {
         xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
-                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
-    }
 
     if ( p->feat.avx512f )
-    {
         xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
-                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
-    }
 
     if ( p->feat.pku )
-    {
         xstates |= X86_XCR0_PKRU;
-        xstate_size = max(xstate_size,
-                          xstate_offsets[X86_XCR0_PKRU_POS] +
-                          xstate_sizes[X86_XCR0_PKRU_POS]);
-    }
 
-    p->xstate.max_size  =  xstate_size;
+    /* Subleaf 0 */
+    p->xstate.max_size =
+        xstate_uncompressed_size(xstates & ~XSTATE_XSAVES_ONLY);
     p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
     p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
 
+    /* Subleaf 1 */
     p->xstate.Da1 = Da1;
     if ( p->xstate.xsaves )
     {
+        ecx_bits |= 3; /* Align64, XSS */
         p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
         p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
     }
-    else
-        xstates &= ~XSTATE_XSAVES_ONLY;
 
-    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
+    /* Subleafs 2+ */
+    xstates &= ~XSTATE_FP_SSE;
+    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
+    for_each_set_bit ( i, &xstates, 63 )
     {
-        uint64_t curr_xstate = 1ul << i;
-
-        if ( !(xstates & curr_xstate) )
-            continue;
-
-        p->xstate.comp[i].size   = xstate_sizes[i];
-        p->xstate.comp[i].offset = xstate_offsets[i];
-        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
-        p->xstate.comp[i].align  = curr_xstate & xstate_align;
+        /*
+         * Pass through size (eax) and offset (ebx) directly.  Visibility of
+         * attributes in ecx is limited by the visible features in Da1.
+         */
+        p->xstate.raw[i].a = raw_cpuid_policy.xstate.raw[i].a;
+        p->xstate.raw[i].b = raw_cpuid_policy.xstate.raw[i].b;
+        p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;
     }
 }
 
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon May 03 15:40:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 15:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121704.229562 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldagA-00025J-Tt; Mon, 03 May 2021 15:40:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121704.229562; Mon, 03 May 2021 15:40:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldagA-00025C-QN; Mon, 03 May 2021 15:40:14 +0000
Received: by outflank-mailman (input) for mailman id 121704;
 Mon, 03 May 2021 15:40:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldag9-00024k-QG
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 15:40:13 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c3bb019-c786-452a-a47b-286dc0035ea2;
 Mon, 03 May 2021 15:40:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c3bb019-c786-452a-a47b-286dc0035ea2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620056412;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=mXqZftHpRDLeZxeb0x+TL0ZHMLCxUQrWazmLu6mDEQ8=;
  b=JJAR3rXjT2OUAc6yOLMIgW/mih9MaeJjpGOuqVpiyXk5BrqYIM7QXBZG
   Tg5sxDOkQXUKEeD5V7M04LFPCDx0zyYHrzq2rUqRtVlWMXg5QdPr3GZGN
   8+6E+mAFuTuZz5DWCee7mr5pGnzhgduZZXd1F02ZTyrUvyoijSk4jEpxV
   k=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jwVzsGPTzymjEOObQ6hZ4znlS0rM8b+nLlFnWd7hWye6yMWO7OX6rt0v5tVhljq/QhKe37q61J
 AXwBehdG0nft1dO9D2gemwmyLo/LZEnPaiep+pUHzPm4VoF4oE4H6NxMNVykTE9XGl8b8oxbFA
 pm9S8pahP6+KIR4WCZ9hJ/OmZF4A4YknMX5au4fxKHShEDSCAXdsH9JVzKCF8aNdy1h3LI3mVR
 65DzcXE6ybB5rlvDVkQGkLK38W/dpZNsk4zA5diUHv0w0x6vA0L1GXITzIiqIwKFsW3mD7BVW7
 mBM=
X-SBRS: 5.1
X-MesageID: 42942251
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:IC5VBqtOrQYjugcKHjHeSlnJ7skDkNV00zAX/kB9WHVpW+az/v
 rOoN0w0xjohDENHEw6kdebN6WaBV/a/5h54Y4eVI3SOjXOkm2uMY1k8M/e0yTtcheOkdJ1+K
 98f8FFeb7NJHdgi8KS2maFOvIB5PXCz6yyn+fZyB5WPGVXQoVt9R1wBAreMmAefnglObMDGJ
 CR5tVKqlObEBx9BKnWOlA/U/XevNqOrZr6YHc9dmcawTOThjCl4qOSKXil9yoZOgkg/Z4StU
 zMkwn0/cyYwpSG9iM=
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="42942251"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 0/5] x86/xstate: Fixes to size calculations
Date: Mon, 3 May 2021 16:39:33 +0100
Message-ID: <20210503153938.14109-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Various fixes and improvements to xsave image size calculations.

 * Skip redundant xcr0 writes
 * Don't reconfigure xcr0 to query hardware when we can trivially calculate
   the answer ourselves.
 * Fix latent bug with CPUID.0xD[0].ebx.
 * Rework CPUID.0xD[1].ebx to behave correctly when supervisor states are in
   use.

Results from AMD Milan with some prototype CET handling, as well as the debug
cross checks, in place:
  (d1) xstates 0x0001, uncomp 0x240, comp 0x240
  (d1) xstates 0x0003, uncomp 0x240, comp 0x240
  (d1) xstates 0x0007, uncomp 0x340, comp 0x340
  (d1) xstates 0x0207, uncomp 0x988, comp 0x348
  (d1) xstates 0x0a07, uncomp 0x988, comp 0x358
  (d1) xstates 0x1a07, uncomp 0x988, comp 0x370

Andrew Cooper (5):
  x86/xstate: Elide redundant writes in set_xcr0()
  x86/xstate: Rename _xstate_ctxt_size() to hw_uncompressed_size()
  x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
  x86/cpuid: Simplify recalculate_xstate()
  x86/cpuid: Fix handling of xsave dynamic leaves

 xen/arch/x86/cpuid.c         |  75 +++++++++------------------
 xen/arch/x86/domain.c        |   4 +-
 xen/arch/x86/domctl.c        |   2 +-
 xen/arch/x86/hvm/hvm.c       |   2 +-
 xen/arch/x86/xstate.c        | 117 +++++++++++++++++++++++++++++++++++--------
 xen/include/asm-x86/xstate.h |   3 +-
 6 files changed, 126 insertions(+), 77 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon May 03 15:40:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 15:40:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121708.229574 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldagG-00029M-9B; Mon, 03 May 2021 15:40:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121708.229574; Mon, 03 May 2021 15:40:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldagG-00029B-3u; Mon, 03 May 2021 15:40:20 +0000
Received: by outflank-mailman (input) for mailman id 121708;
 Mon, 03 May 2021 15:40:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldagE-00024k-PX
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 15:40:18 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c9f735a-84bf-45b9-93ac-1f6aa835f82c;
 Mon, 03 May 2021 15:40:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c9f735a-84bf-45b9-93ac-1f6aa835f82c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620056413;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=jIHwbXNLvjr4f1oZ4tb+qy7o6+MtjNlDvDcd+ysvfTI=;
  b=BuqCEfw74JtwAkbgUHEaJ4rW7Lvf+g4qEZ2gem1LZsno/9w2T50EiQzC
   1hjkaCF43UD5FvAxD5vkXiWC2hhI4BrEEo02kSn6Jqc2E8LSJeV8PSfO5
   uAM8B2hUPOXRKdQvySIjKreLPUe89c/837zxvqHj5/ubatgU7Hx2dTBIB
   s=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: H8alMazpCCJaYOggVvYZDMfYy/DJMgq68K+15edzm5ilvKT9vOwyZAbK4zXMFQsGjpvIl4Ip/0
 nwLK/2Zp9/y1udBRVGmlXdRyG4omwm65Jugrc0AMhyjBcJM022qsa80ghR3JcJVnUZILxfJav9
 gnpBHL2P//yUwAI+4a8TTlnlUuyxAf04CVtk+J7l2zgXYRXL4S31rkABAslf3MqDY+N23PEZpe
 /VmAKPrEU9vp2EXqUxVFK4ByskrxZ4Io22/XvN3/gNd9OGtI1/Nx3vQrFO1WMR69GPug7arKUI
 t2U=
X-SBRS: 5.1
X-MesageID: 42942252
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:gYQGjKgbg5jonU+AM8TG8wVStHBQXl0ji2hD6mlwRA09T+Wzva
 mV/cgz/xnylToXRTUcgtiGIqaNWjfx8pRy7IkXM96ZLXHbkUGvK5xv6pan/i34F0TFh5dg/I
 ppbqQWMqySMXFUlsD/iTPWL/8Bx529/LmslaPiyR5WPGVXQoVByys8NQqBCE1xQ2B9dPwEPb
 6R/NBOqTblWVl/VLXYOlA/U+LOp8LGmfvdCHZsbXNK1CC0gTyl87L8GRSDty1uNA9n+rs+7X
 PD1zXw+6TLiYDB9jbny2TR455K8eGA9vJ/AqW35PQ9G3HJggasaJ8JYczmgAwI
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="42942252"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 2/5] x86/xstate: Rename _xstate_ctxt_size() to hw_uncompressed_size()
Date: Mon, 3 May 2021 16:39:35 +0100
Message-ID: <20210503153938.14109-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210503153938.14109-1-andrew.cooper3@citrix.com>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The latter is a more descriptive name, as it explicitly highlights that the
size is queried from hardware.

Simplify the internal logic using cpuid_count_ebx(), and drop the curr/max
assertion as this property is guaranteed by the x86 ISA.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/xstate.c | 15 +++++++--------
 1 file changed, 7 insertions(+), 8 deletions(-)

diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index f82dae8053..e6c225a16b 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -554,19 +554,18 @@ void xstate_free_save_area(struct vcpu *v)
     v->arch.xsave_area = NULL;
 }
 
-static unsigned int _xstate_ctxt_size(u64 xcr0)
+static unsigned int hw_uncompressed_size(uint64_t xcr0)
 {
     u64 act_xcr0 = get_xcr0();
-    u32 eax, ebx = 0, ecx, edx;
+    unsigned int size;
     bool ok = set_xcr0(xcr0);
 
     ASSERT(ok);
-    cpuid_count(XSTATE_CPUID, 0, &eax, &ebx, &ecx, &edx);
-    ASSERT(ebx <= ecx);
+    size = cpuid_count_ebx(XSTATE_CPUID, 0);
     ok = set_xcr0(act_xcr0);
     ASSERT(ok);
 
-    return ebx;
+    return size;
 }
 
 /* Fastpath for common xstate size requests, avoiding reloads of xcr0. */
@@ -578,7 +577,7 @@ unsigned int xstate_ctxt_size(u64 xcr0)
     if ( xcr0 == 0 )
         return 0;
 
-    return _xstate_ctxt_size(xcr0);
+    return hw_uncompressed_size(xcr0);
 }
 
 /* Collect the information of processor's extended state */
@@ -635,14 +634,14 @@ void xstate_init(struct cpuinfo_x86 *c)
          * xsave_cntxt_size is the max size required by enabled features.
          * We know FP/SSE and YMM about eax, and nothing about edx at present.
          */
-        xsave_cntxt_size = _xstate_ctxt_size(feature_mask);
+        xsave_cntxt_size = hw_uncompressed_size(feature_mask);
         printk("xstate: size: %#x and states: %#"PRIx64"\n",
                xsave_cntxt_size, xfeature_mask);
     }
     else
     {
         BUG_ON(xfeature_mask != feature_mask);
-        BUG_ON(xsave_cntxt_size != _xstate_ctxt_size(feature_mask));
+        BUG_ON(xsave_cntxt_size != hw_uncompressed_size(feature_mask));
     }
 
     if ( setup_xstate_features(bsp) && bsp )
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Mon May 03 15:40:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 15:40:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121711.229586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldagI-0002Br-Fo; Mon, 03 May 2021 15:40:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121711.229586; Mon, 03 May 2021 15:40:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldagI-0002Bj-CC; Mon, 03 May 2021 15:40:22 +0000
Received: by outflank-mailman (input) for mailman id 121711;
 Mon, 03 May 2021 15:40:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldagG-00023Q-Jm
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 15:40:20 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 503cc877-2ae4-455c-add3-a9ba24c79b18;
 Mon, 03 May 2021 15:40:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 503cc877-2ae4-455c-add3-a9ba24c79b18
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620056414;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=L7RgdbGEd1jyg+A5R9ekb1vmEPCeoLbeT25HF5YV6ys=;
  b=Fo6y/mfH4tilvi2rNZGdCRu3ELjtjTRKZFN75E0ja+DjNRNaU2MumzpS
   FYUtliEd0BmgfxJGvCegInLmKaff+DAncDGV7uJlf4DO/1PoZVI2r23YV
   YkJRjy4tlg8FDtW+QUUaCD/UrXVjyXYx4uH97egXIbZyWhO+lfRsveG2T
   Q=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: GmP/0IMBbxNpZqx3flufAKBbnNuepUNa6zZBqCJSu5LACseXfw6ekok5Z0rgXVko7KX9BIRgd3
 mBvcUPI63l31kjEYczwPlFfJkZjevfhSbt7ca7YvmTuCzQMept3XJ4uupBCOIWMsEBIJcrYHEy
 L/KBJlZXSDmwHxjogGD3hRT4T/5Sc3m5NV3GGD/72f3orz9PTfpxBTAgPapVCWN/tocfcRQrnt
 17bq54ACAzCZsff1MTMinMDTg+wHPOIGPe3KpdlWSOX/JHbWgqF0jdoPbANaDsIGZbWilv+/T9
 NtY=
X-SBRS: 5.1
X-MesageID: 42942253
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:EOUCG6/QoiV6zIOdCWNuk+BfI+orLtY04lQ7vn1ZYzY9SK2lvu
 qpm+kW0gKxtS0YX2sulcvFFK6LR37d8pAd2/hoAZ6JWg76tGy0aLxz9IeK+UyYJwTS/vNQvJ
 0QEJRWJ8b3CTFB4vrSwA79KNo4xcnCzabAv5a7815IbSVHL55t9B14DAHzKDwReCBjCYAiHJ
 SRouprzgDQG0g/VciwCnkbU+WrnbSi//iKDSIuPBIp5BKDijml8tfBYn+l9ywTTi9VxvMa+X
 XF+jaJnZmLie2xyRPXygboj6h+pd2J8LV+Lf3JrsAULzn24zzYAbhcZw==
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="42942253"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 3/5] x86/xstate: Rework xstate_ctxt_size() as xstate_uncompressed_size()
Date: Mon, 3 May 2021 16:39:36 +0100
Message-ID: <20210503153938.14109-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210503153938.14109-1-andrew.cooper3@citrix.com>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

We're soon going to need a compressed helper of the same form.

The size of the uncompressed image is strictly a property of the highest
user state.  This can be calculated trivially with xstate_offsets/sizes, and
is much faster than a CPUID instruction in the first place, let alone the two
XCR0 writes surrounding it.

Retain the cross-check with hardware in debug builds, but forgo it in normal
builds.  In particular, this means that the migration paths don't need to mess
with XCR0 just to sanity check the buffer size.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/domctl.c        |  2 +-
 xen/arch/x86/hvm/hvm.c       |  2 +-
 xen/arch/x86/xstate.c        | 40 +++++++++++++++++++++++++++++++---------
 xen/include/asm-x86/xstate.h |  2 +-
 4 files changed, 34 insertions(+), 12 deletions(-)

diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index e440bd021e..8c3552410d 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -899,7 +899,7 @@ long arch_do_domctl(
         uint32_t offset = 0;
 
 #define PV_XSAVE_HDR_SIZE (2 * sizeof(uint64_t))
-#define PV_XSAVE_SIZE(xcr0) (PV_XSAVE_HDR_SIZE + xstate_ctxt_size(xcr0))
+#define PV_XSAVE_SIZE(xcr0) (PV_XSAVE_HDR_SIZE + xstate_uncompressed_size(xcr0))
 
         ret = -ESRCH;
         if ( (evc->vcpu >= d->max_vcpus) ||
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index 28beacc45b..e5fda6b387 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -1203,7 +1203,7 @@ HVM_REGISTER_SAVE_RESTORE(CPU, hvm_save_cpu_ctxt, hvm_load_cpu_ctxt, 1,
 
 #define HVM_CPU_XSAVE_SIZE(xcr0) (offsetof(struct hvm_hw_cpu_xsave, \
                                            save_area) + \
-                                  xstate_ctxt_size(xcr0))
+                                  xstate_uncompressed_size(xcr0))
 
 static int hvm_save_cpu_xsave_states(struct vcpu *v, hvm_domain_context_t *h)
 {
diff --git a/xen/arch/x86/xstate.c b/xen/arch/x86/xstate.c
index e6c225a16b..d4c01da574 100644
--- a/xen/arch/x86/xstate.c
+++ b/xen/arch/x86/xstate.c
@@ -184,7 +184,7 @@ void expand_xsave_states(struct vcpu *v, void *dest, unsigned int size)
     /* Check there is state to serialise (i.e. at least an XSAVE_HDR) */
     BUG_ON(!v->arch.xcr0_accum);
     /* Check there is the correct room to decompress into. */
-    BUG_ON(size != xstate_ctxt_size(v->arch.xcr0_accum));
+    BUG_ON(size != xstate_uncompressed_size(v->arch.xcr0_accum));
 
     if ( !(xsave->xsave_hdr.xcomp_bv & XSTATE_COMPACTION_ENABLED) )
     {
@@ -246,7 +246,7 @@ void compress_xsave_states(struct vcpu *v, const void *src, unsigned int size)
     u64 xstate_bv, valid;
 
     BUG_ON(!v->arch.xcr0_accum);
-    BUG_ON(size != xstate_ctxt_size(v->arch.xcr0_accum));
+    BUG_ON(size != xstate_uncompressed_size(v->arch.xcr0_accum));
     ASSERT(!xsave_area_compressed(src));
 
     xstate_bv = ((const struct xsave_struct *)src)->xsave_hdr.xstate_bv;
@@ -568,16 +568,38 @@ static unsigned int hw_uncompressed_size(uint64_t xcr0)
     return size;
 }
 
-/* Fastpath for common xstate size requests, avoiding reloads of xcr0. */
-unsigned int xstate_ctxt_size(u64 xcr0)
+unsigned int xstate_uncompressed_size(uint64_t xcr0)
 {
-    if ( xcr0 == xfeature_mask )
-        return xsave_cntxt_size;
+    unsigned int size;
+    int idx = flsl(xcr0) - 1;
 
-    if ( xcr0 == 0 )
-        return 0;
+    /*
+     * The maximum size of an uncompressed XSAVE area is determined by the
+     * highest user state, as the size and offset of each component is fixed.
+     */
+    if ( idx >= 2 )
+    {
+        ASSERT(xstate_offsets[idx] && xstate_sizes[idx]);
+        size = xstate_offsets[idx] + xstate_sizes[idx];
+    }
+    else
+        size = XSTATE_AREA_MIN_SIZE;
 
-    return hw_uncompressed_size(xcr0);
+    /* In debug builds, cross-check our calculation with hardware. */
+    if ( IS_ENABLED(CONFIG_DEBUG) )
+    {
+        unsigned int hwsize;
+
+        xcr0 |= XSTATE_FP_SSE;
+        hwsize = hw_uncompressed_size(xcr0);
+
+        if ( size != hwsize )
+            printk_once(XENLOG_ERR "%s(%#"PRIx64") size %#x != hwsize %#x\n",
+                        __func__, xcr0, size, hwsize);
+        size = hwsize;
+    }
+
+    return size;
 }
 
 /* Collect the information of processor's extended state */
diff --git a/xen/include/asm-x86/xstate.h b/xen/include/asm-x86/xstate.h
index 7ab0bdde89..02d6f171b8 100644
--- a/xen/include/asm-x86/xstate.h
+++ b/xen/include/asm-x86/xstate.h
@@ -107,7 +107,7 @@ void compress_xsave_states(struct vcpu *v, const void *src, unsigned int size);
 void xstate_free_save_area(struct vcpu *v);
 int xstate_alloc_save_area(struct vcpu *v);
 void xstate_init(struct cpuinfo_x86 *c);
-unsigned int xstate_ctxt_size(u64 xcr0);
+unsigned int xstate_uncompressed_size(uint64_t xcr0);
 
 static inline uint64_t xgetbv(unsigned int index)
 {
-- 
2.11.0
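For illustration, the size calculation introduced by the hunk above can be mimicked in user space. The offset/size tables below are made-up stand-ins for what CPUID leaf 0xD sub-leaves report on real hardware, and `uncompressed_size()` is a hypothetical reimplementation, not the hypervisor's function:

```c
#include <assert.h>
#include <stdint.h>

/*
 * Hypothetical per-component offsets/sizes (indices 0..4), standing in
 * for the xstate_offsets[]/xstate_sizes[] tables that Xen fills from
 * CPUID leaf 0xD.  The values are illustrative only.
 */
static const unsigned int xstate_offsets[] = { 0, 160, 576, 960, 1024 };
static const unsigned int xstate_sizes[]   = { 160, 256, 256,  64,   64 };

/* Legacy FXSAVE region (512 bytes) plus the 64-byte XSAVE header. */
#define XSTATE_AREA_MIN_SIZE 576

/*
 * The uncompressed layout has fixed offsets, so the total size is
 * simply the highest enabled component's offset plus its size, or the
 * legacy minimum when only FP/SSE (bits 0 and 1) are enabled.
 * Assumes xcr0 != 0, as the hypervisor's BUG_ON() enforces.
 */
static unsigned int uncompressed_size(uint64_t xcr0)
{
    int idx = 63 - __builtin_clzll(xcr0); /* index of highest set bit */

    if ( idx >= 2 )
        return xstate_offsets[idx] + xstate_sizes[idx];

    return XSTATE_AREA_MIN_SIZE;
}
```

With these illustrative tables, enabling only x87/SSE yields the 576-byte minimum, while enabling component 2 (AVX-like) yields 576 + 256 bytes.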



From xen-devel-bounces@lists.xenproject.org Mon May 03 15:47:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 15:47:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121729.229597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldan9-0002cg-8g; Mon, 03 May 2021 15:47:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121729.229597; Mon, 03 May 2021 15:47:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldan9-0002cZ-5Q; Mon, 03 May 2021 15:47:27 +0000
Received: by outflank-mailman (input) for mailman id 121729;
 Mon, 03 May 2021 15:47:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=lc2c=J6=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ldan7-0002cE-2d
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 15:47:25 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5b6d815-dc03-4f13-a96f-221b87bde8c6;
 Mon, 03 May 2021 15:47:24 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.5 AUTH)
 with ESMTPSA id g034cex43FlH029
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 3 May 2021 17:47:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5b6d815-dc03-4f13-a96f-221b87bde8c6
ARC-Seal: i=1; a=rsa-sha256; t=1620056837; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=AaOW5fBW0PeSjI7JEmiMO5CeTXc9BPGx84g3bK5Es1hAOYlIIEI2ns29sSPDtOjn4B
    qAAQTVdY/gI28AmI3/iPfkepHgcKpDV5TTvLyrQRhwAne1GuphjmVDyoyepDC35ePTfk
    WIjreC2xDHedJjBdHAHLjiDJ6vnpWV/joQc5ZCv2y/iYNoc1LC10qpFoZ0lmm54vwGli
    AtKaQlnZcKUqclUdSn5gpidEUVw6hkegPF01B71dh/9EMAxLLTXiv+7z2gdxi9ZVzdVt
    cFcGi3RX+Jhg4coary7IWNZDFO8Xtrlxao8UgeIifO2ZqRqf8yU2T4gEuailJnRYIH0m
    d4LQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620056837;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=ycCpPkfjO2T5krNvdESPBnNfk/UB5Msvnwpt33xdYeg=;
    b=Z3key5YpWusSNdHDORNH0htvRxhqxpzxtV8Q/INgML2jN9jDchjtKnXWZPEBQgCUuq
    Qt7AgaJoSWoVEjFcqduMKSUFoaRJOCg01FSlo9FkLq4JXTGIGAkMdckbUkO64llu0pWX
    vI2U4CFRNNlcIyu1AlNLFuhmchX/DI/YpPTtKWe0Gkfpk1OeSJMroQqJJZjolQU2kMX0
    KaYwKwlNGvpZAFb+lxiayoubJ5aBEDkMhLO3Zd4V932DiNh6Q8HR3UKhfYYav6dSuwS7
    clJGgWsM3dJnspdju/WMRcM3tRp1kNQxZ0UohQ8C66/X3DlyDogRWQx8VMKh6R/Op8yP
    T9gQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620056837;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=ycCpPkfjO2T5krNvdESPBnNfk/UB5Msvnwpt33xdYeg=;
    b=P9iTvs9pIja6nGxzlYeRAsEppWTHMH70IZNNrex9d2Z+ZorTr2FtZsIxDFcnmJL6es
    S66+3fMnQOEso11Pix4qmUZxq4rhDN4hi7m2I8UNcEw63D+kNktw/r7AaXuzic/4mpwU
    SvUnitKfc2ftP6omg9xW4gqPsVkQ5pgg5Q/O+tnQ5U8x8bxew1M+c92kS5AJyt9Knb6b
    QOejD4xsWBSJzwOyD5RuJhbWiaSEsmn+Iiah0AHTqYMh8ZiKEOewEIInOkS4LkhtRo7W
    8IkuZK1EIkKqo6/b86F/+dwYN805uusSQREKYQQO5VcucbSCAGc+AKHdHQ9JeIPdfqPr
    Jkjw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgtl+1b1FMstFZvCqIQN5N7TvWFg4vzhFVdoKAuQ"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH v1] tools: add newlines to xenstored WRL_LOG
Date: Mon,  3 May 2021 17:47:12 +0200
Message-Id: <20210503154712.508-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

According to syslog(3), the fmt string does not need a trailing newline.
The mini-os implementation of syslog, however, requires one.
Other callers of syslog already include the newline, so add it to WRL_LOG
as well.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/xenstore/xenstored_domain.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 3d4d0649a2..2d333b3ff6 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1132,10 +1132,10 @@ void wrl_apply_debit_actual(struct domain *domain)
 	if (domain->wrl_credit < 0) {
 		if (!domain->wrl_delay_logged) {
 			domain->wrl_delay_logged = true;
-			WRL_LOG(now, "domain %ld is affected",
+			WRL_LOG(now, "domain %ld is affected\n",
 				(long)domain->domid);
 		} else if (!wrl_log_last_warning) {
-			WRL_LOG(now, "rate limiting restarts");
+			WRL_LOG(now, "rate limiting restarts\n");
 		}
 		wrl_log_last_warning = now.sec;
 	}
@@ -1145,7 +1145,7 @@ void wrl_log_periodic(struct wrl_timestampt now)
 {
 	if (wrl_log_last_warning &&
 	    (now.sec - wrl_log_last_warning) > WRL_LOGEVERY) {
-		WRL_LOG(now, "not in force recently");
+		WRL_LOG(now, "not in force recently\n");
 		wrl_log_last_warning = 0;
 	}
 }
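As an aside, a portable wrapper could paper over the behavioural difference instead of requiring every caller to remember the newline. This is only a sketch of that alternative (the helper name is made up; the patch itself just adds the newlines at the call sites):

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/*
 * Format a log message into buf, appending a trailing newline if the
 * message does not already end with one.  This makes the output safe
 * for syslog implementations (such as mini-os's) that require it,
 * while leaving already-terminated messages untouched.
 */
static void format_log_line(char *buf, size_t len, const char *msg)
{
    size_t n = strlen(msg);

    if ( n && msg[n - 1] == '\n' )
        snprintf(buf, len, "%s", msg);
    else
        snprintf(buf, len, "%s\n", msg);
}
```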


From xen-devel-bounces@lists.xenproject.org Mon May 03 15:50:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 15:50:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121737.229609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldaqJ-0003Ud-Rf; Mon, 03 May 2021 15:50:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121737.229609; Mon, 03 May 2021 15:50:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldaqJ-0003UW-Oi; Mon, 03 May 2021 15:50:43 +0000
Received: by outflank-mailman (input) for mailman id 121737;
 Mon, 03 May 2021 15:50:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldaqI-0003UR-Cn
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 15:50:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03490bff-fbd4-4548-9724-83d10e329e84;
 Mon, 03 May 2021 15:50:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 38305B20F;
 Mon,  3 May 2021 15:50:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03490bff-fbd4-4548-9724-83d10e329e84
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620057040; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7vgsUSIt1qrfYBtRqm5wgG+fvrjDxM14z0fDosLzFv8=;
	b=bXipMq5TMsk3/0DHUMOOHXAakdAPEhlBR1L7eSabUUhsPu5/FLGeowjLnICskCq7Xt9YEK
	e4q7GOmpYhCB8Woanbn6jO7CK4fqzWph/em7McWC3y4ACLX0HlbJLYbhvVf8y3vpKC1dFn
	sj9uO7avpHksmi8oNkGbM1R9O4KQtDE=
Subject: Re: [PATCH v4 05/12] x86/hvm: allowing registering EOI callbacks for
 GSIs
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-6-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <19b0b30d-2fd6-4cc3-fd7a-4f4a3ce735f7@suse.com>
Date: Mon, 3 May 2021 17:50:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210420140723.65321-6-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.04.2021 16:07, Roger Pau Monne wrote:
> Such callbacks will be executed once an EOI is performed by the guest,
> regardless of whether the interrupts are injected from the vIO-APIC or
> the vPIC, as ISA IRQs are translated to GSIs and then the
> corresponding callback is executed at EOI.
> 
> The vIO-APIC infrastructure for handling EOIs is built on top of the
> existing vlapic EOI callback functionality, while the vPIC one is
> handled when writing to the vPIC EOI register.
> 
> Note that such callbacks need to be registered and de-registered, and
> that a single GSI can have multiple callbacks associated. That's
> because GSIs can be level triggered and shared, as that's the case
> with legacy PCI interrupts shared between several devices.
> 
> Strictly speaking this is a non-functional change, since the new
> interface it introduces has no users yet.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

In principle, as everything looks functionally correct to me,
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Nevertheless, besides a few remarks further down, I have to admit I'm
concerned about the direct-to-indirect call conversion (not just here,
but also in earlier patches), which (considering we're talking about
EOI) I expect may occur quite frequently for at least some guests.
There aren't that many different callback functions which get
registered, are there? Hence I wonder whether enumerating them and
picking the right one via, say, an enum wouldn't be more efficient,
while still allowing elimination of (in the case here) unconditional
calls to hvm_dpci_eoi() for every EOI.
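To make the suggestion concrete, enum-based dispatch could look roughly like the sketch below. The enum values, counters, and handler bodies are all hypothetical stand-ins for the real callbacks (hvm_dpci_eoi(), the periodic-timer callback, etc.):

```c
#include <assert.h>

/*
 * Instead of storing a function pointer per registered callback, store a
 * small tag and dispatch through a switch.  The compiler can then emit
 * direct calls (or inline the handlers) rather than indirect branches.
 */
enum eoi_cb_kind { EOI_CB_NONE, EOI_CB_DPCI, EOI_CB_TIMER };

/* Counters stand in for the real handlers' side effects. */
static int dpci_count, timer_count;

static void run_eoi_callback(enum eoi_cb_kind kind)
{
    switch ( kind )
    {
    case EOI_CB_DPCI:  dpci_count++;  break; /* would call hvm_dpci_eoi() */
    case EOI_CB_TIMER: timer_count++; break; /* would call the pt handler */
    case EOI_CB_NONE:  break;
    }
}
```

The trade-off is flexibility: every new callback kind needs a new enum value and switch arm, which is acceptable when the set of handlers is small and known, as suggested above.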

> --- a/xen/arch/x86/hvm/irq.c
> +++ b/xen/arch/x86/hvm/irq.c
> @@ -595,6 +595,81 @@ int hvm_local_events_need_delivery(struct vcpu *v)
>      return !hvm_interrupt_blocked(v, intack);
>  }
>  
> +int hvm_gsi_register_callback(struct domain *d, unsigned int gsi,
> +                              struct hvm_gsi_eoi_callback *cb)
> +{
> +    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> +
> +    if ( gsi >= hvm_irq->nr_gsis )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return -EINVAL;
> +    }
> +
> +    write_lock(&hvm_irq->gsi_callbacks_lock);
> +    list_add(&cb->list, &hvm_irq->gsi_callbacks[gsi]);
> +    write_unlock(&hvm_irq->gsi_callbacks_lock);
> +
> +    return 0;
> +}
> +
> +int hvm_gsi_unregister_callback(struct domain *d, unsigned int gsi,
> +                                struct hvm_gsi_eoi_callback *cb)
> +{
> +    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> +    const struct list_head *tmp;
> +    bool found = false;
> +
> +    if ( gsi >= hvm_irq->nr_gsis )
> +    {
> +        ASSERT_UNREACHABLE();
> +        return -EINVAL;
> +    }
> +
> +    write_lock(&hvm_irq->gsi_callbacks_lock);
> +    list_for_each ( tmp, &hvm_irq->gsi_callbacks[gsi] )
> +        if ( tmp == &cb->list )
> +        {
> +            list_del(&cb->list);

Minor remark: Would passing "tmp" here lead to better generated
code?

> @@ -419,13 +421,25 @@ static void eoi_callback(struct vcpu *v, unsigned int vector, void *data)
>              if ( is_iommu_enabled(d) )
>              {
>                  spin_unlock(&d->arch.hvm.irq_lock);
> -                hvm_dpci_eoi(d, vioapic->base_gsi + pin);
> +                hvm_dpci_eoi(d, gsi);
>                  spin_lock(&d->arch.hvm.irq_lock);
>              }
>  
> +            /*
> +             * Callbacks don't expect to be executed with any lock held, so
> +             * drop the lock that protects the vIO-APIC fields from changing.
> +             *
> +             * Note that the redirection entry itself cannot go away, so upon
> +             * retaking the lock we only need to avoid making assumptions on
> +             * redirection entry field values (ie: recheck the IRR field).
> +             */
> +            spin_unlock(&d->arch.hvm.irq_lock);
> +            hvm_gsi_execute_callbacks(d, gsi);
> +            spin_lock(&d->arch.hvm.irq_lock);

While this may be transient in the series, as said before I'm not
happy about this double unlock/relock sequence. I didn't really
understand what would be wrong with

            spin_unlock(&d->arch.hvm.irq_lock);
            if ( is_iommu_enabled(d) )
                hvm_dpci_eoi(d, gsi);
            hvm_gsi_execute_callbacks(d, gsi);
            spin_lock(&d->arch.hvm.irq_lock);

This in particular wouldn't grow but even shrink the later patch
dropping the call to hvm_dpci_eoi().

> --- a/xen/arch/x86/hvm/vpic.c
> +++ b/xen/arch/x86/hvm/vpic.c
> @@ -235,6 +235,8 @@ static void vpic_ioport_write(
>                  unsigned int pin = __scanbit(pending, 8);
>  
>                  ASSERT(pin < 8);
> +                hvm_gsi_execute_callbacks(current->domain,
> +                        hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
>                  hvm_dpci_eoi(current->domain,
>                               hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
>                  __clear_bit(pin, &pending);
> @@ -285,6 +287,8 @@ static void vpic_ioport_write(
>                  /* Release lock and EOI the physical interrupt (if any). */
>                  vpic_update_int_output(vpic);
>                  vpic_unlock(vpic);
> +                hvm_gsi_execute_callbacks(current->domain,
> +                        hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
>                  hvm_dpci_eoi(current->domain,
>                               hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
>                  return; /* bail immediately */

Another presumably minor remark: In the IO-APIC case you insert after
the call to hvm_dpci_eoi(). I wonder if consistency wouldn't help
avoid questions from archaeologists in a couple of years' time.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 15:59:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 15:59:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121742.229622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldayh-0003hK-Os; Mon, 03 May 2021 15:59:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121742.229622; Mon, 03 May 2021 15:59:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldayh-0003hD-KU; Mon, 03 May 2021 15:59:23 +0000
Received: by outflank-mailman (input) for mailman id 121742;
 Mon, 03 May 2021 15:59:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TA2L=J6=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldayg-0003h4-6l
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 15:59:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f22c889c-682e-4513-b001-5a76cd2e7cdc;
 Mon, 03 May 2021 15:59:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 23295B011;
 Mon,  3 May 2021 15:59:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f22c889c-682e-4513-b001-5a76cd2e7cdc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620057560; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3zjsJslUplCDLd2j5GYvhm/B4wkceDi5tiQCLODoQQw=;
	b=RpVqDnKpw/wxx5k4D6kbw3iqh/lvIXGDTRPoBKZL80VxFrWXfjB3MjLix+NSkLGwhMoi09
	5DLNVFT9i3esBKdkdcb/m4JpaTisMHc635bF/t/ddIoyohU84FR9tb0F7GVsfDrvHHWuaL
	2Rn2ptwZawd3eH8k+Z9GdShyoqIjIQc=
Subject: Re: [PATCH v4 01/12] x86/rtc: drop code related to strict mode
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-2-roger.pau@citrix.com>
 <f282a2a2-e5cb-6a65-690a-b9c27c03089a@suse.com>
 <YI/CSKpqWrilNKi8@Air-de-Roger>
 <5b06565e-1f2e-3498-c18f-e7eac0042761@suse.com>
 <YJANG3LeuA3Ygt/Q@Air-de-Roger>
 <d8ed89e8-d13a-9ed6-e92b-fc7072b8382e@suse.com>
 <YJAWmGCtaTktGRG0@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <abed4575-b1f7-5134-483e-2301447d77e0@suse.com>
Date: Mon, 3 May 2021 17:59:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <YJAWmGCtaTktGRG0@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 17:28, Roger Pau Monné wrote:
> On Mon, May 03, 2021 at 04:58:07PM +0200, Jan Beulich wrote:
>> On 03.05.2021 16:47, Roger Pau Monné wrote:
>>> On Mon, May 03, 2021 at 02:26:51PM +0200, Jan Beulich wrote:
>>>> On 03.05.2021 11:28, Roger Pau Monné wrote:
>>>>> On Thu, Apr 29, 2021 at 04:53:07PM +0200, Jan Beulich wrote:
>>>>>> On 20.04.2021 16:07, Roger Pau Monne wrote:
>>>>>>> @@ -337,8 +336,7 @@ int pt_update_irq(struct vcpu *v)
>>>>>>>      {
>>>>>>>          if ( pt->pending_intr_nr )
>>>>>>>          {
>>>>>>> -            /* RTC code takes care of disabling the timer itself. */
>>>>>>> -            if ( (pt->irq != RTC_IRQ || !pt->priv) && pt_irq_masked(pt) &&
>>>>>>> +            if ( pt_irq_masked(pt) &&
>>>>>>>                   /* Level interrupts should be asserted even if masked. */
>>>>>>>                   !pt->level )
>>>>>>>              {
>>>>>>
>>>>>> I'm struggling to relate this to any other part of the patch. In
>>>>>> particular I can't find the case where a periodic timer would be
>>>>>> registered with RTC_IRQ and a NULL private pointer. The only use
>>>>>> I can find is with a non-NULL pointer, which would mean the "else"
>>>>>> path is always taken at present for the RTC case (which you now
>>>>>> change).
>>>>>
>>>>> Right, the else case was always taken because as the comment noted RTC
>>>>> would take care of disabling itself (by calling destroy_periodic_time
>>>>> from the callback when using strict_mode). When no_ack mode was
>>>>> implemented this wasn't taken into account AFAICT, and thus the RTC
>>>>> was never removed from the list even when masked.
>>>>>
>>>>> I think with no_ack mode the RTC shouldn't have this specific handling
>>>>> in pt_update_irq, as it should behave like any other virtual timer.
>>>>> I could try to split this as a separate bugfix, but then I would have
>>>>> to teach pt_update_irq to differentiate between strict_mode and no_ack
>>>>> mode.
>>>>
>>>> A fair part of my confusion was about "&& !pt->priv".
>>>
>>> I think you meant "|| !pt->priv"?
>>
>> Oops, indeed.
>>
>>>> I've looked back
>>>> at 9607327abbd3 ("x86/HVM: properly handle RTC periodic timer even when
>>>> !RTC_PIE"), where this was added. It was, afaict, to cover for
>>>> hpet_set_timer() passing NULL with RTC_IRQ.
>>>
>>> That's tricky, as hpet_set_timer hardcodes 8 instead of using RTC_IRQ
>>> which makes it really easy to miss.
>>>
>>>> Which makes me suspect that
>>>> be07023be115 ("x86/vhpet: add support for level triggered interrupts")
>>>> may have subtly broken things.
>>>
>>> Right - as that would have made the RTC irq when generated from the
>>> HPET no longer be suspended when masked (as pt->priv would no longer
>>> be NULL). Could be fixed with:
>>>
>>> diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
>>> index ca94e8b4538..f2cbd12f400 100644
>>> --- a/xen/arch/x86/hvm/hpet.c
>>> +++ b/xen/arch/x86/hvm/hpet.c
>>> @@ -318,7 +318,8 @@ static void hpet_set_timer(HPETState *h, unsigned int tn,
>>>                           hpet_tick_to_ns(h, diff),
>>>                           oneshot ? 0 : hpet_tick_to_ns(h, h->hpet.period[tn]),
>>>                           irq, timer_level(h, tn) ? hpet_timer_fired : NULL,
>>> -                         (void *)(unsigned long)tn, timer_level(h, tn));
>>> +                         timer_level(h, tn) ? (void *)(unsigned long)tn : NULL,
>>> +                         timer_level(h, tn));
>>>  }
>>>  
>>>  static inline uint64_t hpet_fixup_reg(
>>>
>>> Passing again NULL as the callback private data for edge triggered
>>> interrupts.
>>
>> Right, plus perhaps at the same time replacing the hardcoded 8.
> 
> Right, but if you agree to take this patch and remove strict_mode then
> the emulated RTC won't disable itself anymore, and hence needs to be
> handled as any other virtual timer?

I'm still trying to become convinced, both of the removal of the mode
in general and the particular part of the change I've been struggling
with.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 03 16:12:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 16:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121750.229633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldbBQ-0005wF-T6; Mon, 03 May 2021 16:12:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121750.229633; Mon, 03 May 2021 16:12:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldbBQ-0005w8-QG; Mon, 03 May 2021 16:12:32 +0000
Received: by outflank-mailman (input) for mailman id 121750;
 Mon, 03 May 2021 16:12:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldbBP-0005vx-PO; Mon, 03 May 2021 16:12:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldbBP-00089u-EH; Mon, 03 May 2021 16:12:31 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldbBP-000335-5Y; Mon, 03 May 2021 16:12:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldbBP-0000gf-4i; Mon, 03 May 2021 16:12:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=W7oQG8Ym2Col5++JQ4zbN3VEKhFQwi6j7HMH3NoGVXY=; b=zqZ3ncMvF+36U9Z4MAYJ742Up+
	2e+BK7Q/yKrxS8YA2t32aKNwv20zab2Bu1fyfP2dTscUoeFH+H88qpNPU6UuiAKhvakx/WOqUMBCS
	sIQDJw0HrS6aQLiTnnc/os1tUFuUVQcOOHyC2nOX/Z41svA+jRpJFUHtJVVj25gigeFc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161625-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161625: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
X-Osstest-Versions-That:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 May 2021 16:12:31 +0000

flight 161625 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161625/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
baseline version:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7

Last test of basis   161556  2021-04-30 17:01:28 Z    2 days
Testing same since   161625  2021-05-03 14:01:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1f8ee4cb43..d26c277826  d26c277826dbbd64b3e3cb57159e1ecbfad33bc8 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 03 16:13:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 16:13:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121754.229649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldbCJ-00062Y-8Y; Mon, 03 May 2021 16:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121754.229649; Mon, 03 May 2021 16:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldbCJ-00062R-5W; Mon, 03 May 2021 16:13:27 +0000
Received: by outflank-mailman (input) for mailman id 121754;
 Mon, 03 May 2021 16:13:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldbCH-000624-9l
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 16:13:25 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b024c3e4-3e74-4bda-84f1-4a9e5e505133;
 Mon, 03 May 2021 16:13:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b024c3e4-3e74-4bda-84f1-4a9e5e505133
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620058404;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=jH4UhUCJPHw7dk6mHkWcv0bMFFryveuWTTTIp1cX3c8=;
  b=ZU9A9Fh+PPCawdMUvNB6LUBDjMVybTJZlSZ9eB1frzZDuDIT2Yu6KMKl
   SoBk/e0vW9B87g02wCkLrng1HZ9UbKuexre6isF9lPzP1DqXlfXnCvELs
   R9DY2S876ZBCS0+ye47oxokWlnHIXcED9bTHKaZThz6/jzgm3ALWBTyk9
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43335974
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="43335974"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eb8ipjnEExQEPOyth807koJLyArm6peG/COPSaZYcHuiRRFW6oCDK+TSoyBpmXxqZyRlTEi+gzRnGxgkVEAVFMeJEO+fx40Sc4PpsiBK1qP7FmlfjN9I+DG0HAAhA6xrdbPPBM+yPCb99zpxaWuR1QgoFjw9dzlFqh/d9Vsluops7WTK6cGKQk+Fc/PxwnnvJgMFKz0KT6O3FyIn1ULzHne8hS15fiAxzhREiETWPERBfk7WlxR3slQnBM249B2nKbtRj2iEDt8V+fw4/pr2T2wF1CEwt9m/cgSIYj34I7eYXOazPJlLVWo4zIOUgHT86LJAjBCB1Yk9q0LMxrm5Rg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jH4UhUCJPHw7dk6mHkWcv0bMFFryveuWTTTIp1cX3c8=;
 b=cWr8R7kAQb3QyjR/cB4APLNQmyI+kqjeY9+hFT7RaunyVxmQiw4kUGggHrr4XqfT64URW7jtchzja6SOQ8qavFR8v0JmviL2TBdTitsDwubxG0M8QRV3E8srki9zi/EEIOrb97WGMS5LiDhBihjb7JxlgBLgwG1j7hpv8SE6ZAyaihtnBlfEMLRoz3Nw+/LpUdwrxM7M9pLCsSTm2Wzxm2MTR4t7W7b+nB1Ha6eZxKj0Jq9ElGa5C2ncNuAkg6jFdHt3ebQ9YnXytALw1NmPzRgwvYrPDPk33v+CtGjChHokuTz3hsu0jx12rH8snaeWkfTlpu7HuYQQuuS701HMkg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jH4UhUCJPHw7dk6mHkWcv0bMFFryveuWTTTIp1cX3c8=;
 b=MyYIPBkg61qwLxcq8nRp+lerZwLJPvAIyAK3YFRxnBhTuBE1n4HY9yM0NTuotvLWzo/3HMFNCypDZX0uy5z+Y51/Awj+ZJdF0WUmWQ9vkZBbRd8u/nsgRJssRR/5Tsc2WDefg8m27EoBFqKIk+5xMsENSibRv6uJfu71KY++rps=
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <434705ef-1c34-581d-b956-2322b4413232@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 05/22] x86/xstate: drop xstate_offsets[] and
 xstate_sizes[]
Message-ID: <f3a9b372-c927-70e3-a2ba-fef2bb2c7d7a@citrix.com>
Date: Mon, 3 May 2021 17:10:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <434705ef-1c34-581d-b956-2322b4413232@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0252.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::24) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a705999c-e93f-4b96-ebaf-08d90e4dfc59
X-MS-TrafficTypeDiagnostic: BYAPR03MB4358:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4358A73F550131C16DD999CBBA5B9@BYAPR03MB4358.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: a705999c-e93f-4b96-ebaf-08d90e4dfc59
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2021 16:10:36.2795
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YiYWuhW0vPK0d49p/aQFnCGyySrwDf0XI9S9fEDic0g8iam8o3NwX8qHu4UHVP4JigUYJkbW0pitxgKE8WpNpxQc9edqaOZcXEZH+JI0XQg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4358
X-OriginatorOrg: citrix.com

On 22/04/2021 15:45, Jan Beulich wrote:
> They're redundant with respective fields from the raw CPUID policy; no
> need to keep two copies of the same data.

So before I read this patch of yours, I had a separate cleanup patch
turning the two arrays into static const.

> This also breaks
> recalculate_xstate()'s dependency on xstate_init(),

It doesn't, because you've retained the reference to xstate_align, which
is calculated in xstate_init().  I've posted "[PATCH 4/5] x86/cpuid:
Simplify recalculate_xstate()" which goes rather further.

xstate_align, and xstate_xfd as you've got later in the series, don't
need to be variables.  They're constants, just like the offset/size
information, because they're all a description of the XSAVE ISA
instruction behaviour.

We never turn on states we don't understand, which means we don't
actually need to refer to any component subleaf, other than to cross-check.
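(As an aside, a minimal sketch of what "compile in the constants and
cross-check the runtime-reported values" could look like.  The component
indices and geometry below are invented for illustration; they are not
Xen's real tables, and cross_check() is a hypothetical helper, not an
existing Xen function.)

```c
#include <assert.h>

/* Illustrative only: a compiled-in table of per-component XSAVE geometry,
 * to be cross-checked against values reported at runtime (e.g. from the
 * raw CPUID policy).  The numbers are made up for this sketch. */
struct comp_geom {
    unsigned int offset, size;
};

static const struct comp_geom builtin_geom[] = {
    { 576, 256 },   /* hypothetical component 2 */
    { 832,  64 },   /* hypothetical component 3 */
};

/* Return 0 when the runtime-reported geometry matches the constants. */
static int cross_check(unsigned int idx,
                       unsigned int rt_offset, unsigned int rt_size)
{
    if ( idx >= sizeof(builtin_geom) / sizeof(builtin_geom[0]) )
        return -1;
    return (builtin_geom[idx].offset == rt_offset &&
            builtin_geom[idx].size == rt_size) ? 0 : -1;
}
```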

I'm still on the fence as to whether it is better to compile in the
constants, or to just use the raw policy.  Absolutely nothing good will
come of the constants changing, and one of my backup plans for dealing
with the size of cpuid_policy if it becomes a problem was to not store
these leaves, and generate them dynamically on request.


> allowing host CPUID
> policy calculation to be moved together with that of the raw one (which
> a subsequent change will require anyway).

While breaking up the host/raw calculations from the rest, we really
need to group the MSR policy calculations with their CPUID counterparts.

~Andrew



From xen-devel-bounces@lists.xenproject.org Mon May 03 16:39:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 16:39:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121763.229661 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldbbQ-0007sB-Hx; Mon, 03 May 2021 16:39:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121763.229661; Mon, 03 May 2021 16:39:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldbbQ-0007s4-EP; Mon, 03 May 2021 16:39:24 +0000
Received: by outflank-mailman (input) for mailman id 121763;
 Mon, 03 May 2021 16:39:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iacE=J6=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ldbbP-0007rz-G8
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 16:39:23 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f09e520-f509-43f3-97ff-6d09ba35cfd3;
 Mon, 03 May 2021 16:39:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f09e520-f509-43f3-97ff-6d09ba35cfd3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620059962;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=6o4/YU971fkkFNF5WnOgbS3fC1pIW6vMLXh4yU3wl30=;
  b=h5u6PS6JRh9A1Hlsj2tW4TBCe67FG8ATHJulcdfDsP7Q1o6Q9ZwdG1Bd
   oi2pSQwNP1AMiMMRNAW0vzUzKYfCD1crrH2KHjrT7zsfwjpjl7xew78+z
   je4hYPURk6ozQ7zS3BWTFySBSORQW3yNKcmBnzoBBp10XWFxFyQypf995
   w=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44470930
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="44470930"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dkgsCsqSmrNmqC/DjJxVkHL3tHze3iiVg6bUuXx3+7X+hViTvZYXtFrKATOCw69fODooBqSatPGKntOzAE2gYBIHYHskDuttgMsyqBJgUza2UnNfkN5KI6n4RclULGnTtLW0Zm3G7CpbqM9KEX1rSEApqzLy1YI5w9KQYHRD9ym34cN2pXWykeFy+g6s8OwmQXTdrVDyqmq+l1MerEBDmYniAlrQj2eaADgltO28oP5tFftT5aIpN40zNFoi0e9tRt58uoahvPUaflHtMU8lN9ZDAf+hQJ83YGSbyI2D/HDykJOCRpXnOaqmiQGt3oP66pPD6VU2/Nmqh7a+pCYuZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y2WKKilRHeQ1aKc85MkKclO66P1DzP9xS2Lh9EV+1AQ=;
 b=ioxlohnLrW9SV9sG0q7bXugz5noIJGDt4LMPQwMXRBgC1dipyvDNpSR1zM/CKfPs2nq/ldHkWRexKYMaIh4JGykFu99ljQeC5AQNvB+kMqGaOTJoAemmUlNP6TkA1LZA89ud9sRoDN0/UI3ipn2YfcSHtR+P4zS2lmfN6wGNiJP7c4UpJvdv+6zqVRdvnpHTwsSaNh2wwmA7bKPjjTHND3Mob9MkEQ6DT08qDacxdjqgD+C2od3ETPv1Zn0TXmuQWQ4dNWQ7lpTyo8ZnpQkLd8nBTpL3aAJPgBNFkaOxw8bd6TT7MvYDyD04DJ4QcKh3V1VqQ6zWnb6ZEKzCGrSGvA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y2WKKilRHeQ1aKc85MkKclO66P1DzP9xS2Lh9EV+1AQ=;
 b=g0+ujk7K3IYrs8FVpRRaZWwavoVuTwEG9G9cWvD2xw3B9hsZcRq68neLpBUd7fRryDroGTGV3KA2ROufPA6VIIQ83qJBFS9QA9AnFOBdaYhcIvEwUOpaY9CnfRe2nBA/jBm2w1Q+atmfP98mXMkPiEJM6x9mZq8vfxq62w13Dr8=
Date: Mon, 3 May 2021 18:39:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 01/22] mm: introduce xvmalloc() et al and use for
 grant table allocations
Message-ID: <YJAnMAmob1Y5myp4@Air-de-Roger>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <69778de6-3b94-64d1-99d9-1a0fcfa503fd@suse.com>
 <YI/e9wyOpsVDkFQi@Air-de-Roger>
 <aeb6aa8e-7c90-be22-1888-21b7b178e1d1@suse.com>
 <YJAOm+rmKb5gbYJq@Air-de-Roger>
 <340fed73-973c-feba-074d-8bfa6eeae6d6@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <340fed73-973c-feba-074d-8bfa6eeae6d6@suse.com>
X-ClientProxiedBy: MR2P264CA0046.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::34)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cf06cd30-c278-4a2f-900f-08d90e51feee
X-MS-TrafficTypeDiagnostic: DM5PR03MB2923:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB29234CD7AFE3F38A4F3968F48F5B9@DM5PR03MB2923.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: cf06cd30-c278-4a2f-900f-08d90e51feee
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2021 16:39:18.3137
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EwBzeqZTkydkZM8GvvVrtBehqwAE8fE20l8aFPQjKQnDl1gzSUKnKmhmFqL95uKj0sGYPKsd+gvrknft/UHHgA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2923
X-OriginatorOrg: citrix.com

On Mon, May 03, 2021 at 05:21:37PM +0200, Jan Beulich wrote:
> On 03.05.2021 16:54, Roger Pau Monné wrote:
> > On Mon, May 03, 2021 at 03:50:48PM +0200, Jan Beulich wrote:
> >> On 03.05.2021 13:31, Roger Pau Monné wrote:
> >>> On Thu, Apr 22, 2021 at 04:43:39PM +0200, Jan Beulich wrote:
> >>>> All of the array allocations in grant_table_init() can exceed a page's
> >>>> worth of memory, which xmalloc()-based interfaces aren't really suitable
> >>>> for after boot. We also don't need any of these allocations to be
> >>>> physically contiguous. Introduce interfaces dynamically switching
> >>>> between xmalloc() et al and vmalloc() et al, based on requested size,
> >>>> and use them instead.
> >>>>
> >>>> All the wrappers in the new header get cloned mostly verbatim from
> >>>> xmalloc.h, with the sole adjustment to switch unsigned long to size_t
> >>>> for sizes and to unsigned int for alignments.
> >>>
> >>> We seem to be growing a non-trivial amount of memory allocation
> >>> families of functions: xmalloc, vmalloc and now xvmalloc.
> >>>
> >>> I think from a consumer PoV it would make sense to only have two of
> >>> those: one for allocations that require to be physically contiguous,
> >>> and one for allocation that don't require it.
> >>>
> >>> Even then, requesting for physically contiguous allocations could be
> >>> done by passing a flag to the same interface that's used for
> >>> non-contiguous allocations.
> >>>
> >>> Maybe another option would be to expand the existing
> >>> v{malloc,realloc,...} set of functions to have your proposed behaviour
> >>> for xv{malloc,realloc,...}?
> >>
> >> All of this and some of your remarks further down has already been
> >> discussed. A working group has been formed. No progress since. Yes,
> >> a smaller set of interfaces may be the way to go. Controlling
> >> behavior via flags, otoh, is very much not malloc()-like. Making
> >> existing functions have the intended new behavior is a no-go without
> >> auditing all present uses, to find those few which actually may need
> >> physically contiguous allocations.
> > 
> > But you could make your proposed xvmalloc logic the implementation
> > behind vmalloc, as that would still be perfectly fine and safe? (ie:
> > existing users of vmalloc already expect non-physically contiguous
> > memory). You would just optimize the size < PAGE_SIZE case for that
> > interface?
> 
> Existing callers of vmalloc() may expect page alignment of the
> returned address.

Right - just looked and also the interface is different from
x{v}malloc, so you would have to fixup callers.

> >>>> --- /dev/null
> >>>> +++ b/xen/include/xen/xvmalloc.h
> >>>> @@ -0,0 +1,73 @@
> >>>> +
> >>>> +#ifndef __XVMALLOC_H__
> >>>> +#define __XVMALLOC_H__
> >>>> +
> >>>> +#include <xen/cache.h>
> >>>> +#include <xen/types.h>
> >>>> +
> >>>> +/*
> >>>> + * Xen malloc/free-style interface for allocations possibly exceeding a page's
> >>>> + * worth of memory, as long as there's no need to have physically contiguous
> >>>> + * memory allocated.  These should be used in preference to xmalloc() et al
> >>>> + * whenever the size is not known to be constrained to at most a single page.
> >>>
> >>> Even when it's known that size <= PAGE_SIZE these helpers are
> >>> appropriate, as they would end up using xmalloc, so I think it's fine to
> >>> recommend them universally as long as there's no need to alloc
> >>> physically contiguous memory?
> >>>
> >>> Granted there's a bit more overhead from the logic to decide between
> >>> using xmalloc or vmalloc &c, but that's IMO not a big enough deal to
> >>> avoid recommending this interface globally for non-contiguous
> >>> alloc.
> >>
> >> As long as xmalloc() and vmalloc() are meant to stay around as separate
> >> interfaces, I wouldn't want to "forbid" their use when it's sufficiently
> >> clear that they would be chosen by the new function anyway. Otoh, if the
> >> new function became more powerful in terms of falling back to the
> > 
> > What do you mean with more powerful here?
> 
> Well, right now the function is very simplistic, looking just at the size
> and doing no fallback attempts at all. Linux'es kvmalloc() goes a little
> farther. What I see as an option is for either form of allocation to fall
> back to the other form in case the first attempt fails. This would cover
> - out of memory Xen heap for small allocs,
> - out of VA space for large allocs.
> And of course, like Linux does (or at least did at the time I looked at
> their code), the choice which of the backing functions to call could also
> become more sophisticated over time.

I'm not opposed to any of this, but even your proposed code right now
seems no worse than using either vmalloc or xmalloc, as it's only a
higher level wrapper around those.

What I would prefer is to propose to use function foo for all
allocations that don't require contiguous physical memory, and
function bar for those that do require contiguous physical memory.
It's IMO awkward from a developer PoV to have to select an
allocation function based on the size to be allocated.

I wouldn't mind if you wanted to name this more generic wrapper straight
malloc().

Thanks, Roger.
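
(For readers following the thread, the size-based dispatch under
discussion amounts to something like the sketch below.  fake_xmalloc()
and fake_vmalloc() are hypothetical stand-ins for Xen's xmalloc() and
vmalloc(), both backed by plain malloc() here so the example is
self-contained; the real interfaces, alignment rules, and Jan's proposed
fall-back on allocation failure are more involved.)

```c
#include <assert.h>
#include <stdlib.h>

#define PAGE_SIZE 4096u

/* Records which backend the dispatcher picked: 0 = xmalloc-style
 * (physically contiguous), 1 = vmalloc-style (page-granular). */
static int last_backend;

static void *fake_xmalloc(size_t size)
{
    last_backend = 0;
    return malloc(size);
}

static void *fake_vmalloc(size_t size)
{
    last_backend = 1;
    return malloc(size);
}

/* The xvmalloc() idea: pick the backend purely from the requested size. */
static void *xv_alloc(size_t size)
{
    return size <= PAGE_SIZE ? fake_xmalloc(size) : fake_vmalloc(size);
}
```

Jan's suggested refinement would additionally retry the other backend
when the first choice fails, covering Xen-heap exhaustion for small
requests and VA-space exhaustion for large ones.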


From xen-devel-bounces@lists.xenproject.org Mon May 03 17:56:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 17:56:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121779.229678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldcnq-0006AV-CJ; Mon, 03 May 2021 17:56:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121779.229678; Mon, 03 May 2021 17:56:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldcnq-0006AO-9R; Mon, 03 May 2021 17:56:18 +0000
Received: by outflank-mailman (input) for mailman id 121779;
 Mon, 03 May 2021 17:56:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldcno-0006AG-EC; Mon, 03 May 2021 17:56:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldcno-0001O8-43; Mon, 03 May 2021 17:56:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldcnn-0000UC-Ru; Mon, 03 May 2021 17:56:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldcnn-0005fB-RO; Mon, 03 May 2021 17:56:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1nHz4e29BBsLWwbuRvtHe2Zxs7HR5LxMr+gzUyrsoog=; b=nF/bIPGgCJD0AsajUnkUGxdm6i
	qXNdPRbC6V9f3zAEv1/l7z6LfqrfS4zzpYvAvro+6cMboO8tuNllw9vTlt3boHIZcRdyKVot4zJYN
	1bvJZCQMkx75xcUVVSAwtv5Y+8EswiBTFeQtjjVlMOEZtYWkDdFTpbHtdAzH9m0W3oQo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161613-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161613: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
X-Osstest-Versions-That:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 May 2021 17:56:15 +0000

flight 161613 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161613/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 161584
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161599
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161599
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161599
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161599
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161599
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161599
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161599
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161599
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161599
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161599
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161599
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7
baseline version:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7

Last test of basis   161613  2021-05-03 01:52:43 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 03 18:18:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 18:18:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121789.229694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldd8r-00088E-BX; Mon, 03 May 2021 18:18:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121789.229694; Mon, 03 May 2021 18:18:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldd8r-000887-8O; Mon, 03 May 2021 18:18:01 +0000
Received: by outflank-mailman (input) for mailman id 121789;
 Mon, 03 May 2021 18:17:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gWh3=J6=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldd8p-00087z-6f
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 18:17:59 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1d4790c-0a37-4d53-9555-280004b591a7;
 Mon, 03 May 2021 18:17:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1d4790c-0a37-4d53-9555-280004b591a7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620065877;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=Y9b37xMBFTD9E38XE3lJ9Vu2vpebJECfdcQbYgYEvTk=;
  b=a/6VeYg9xYPlx8/oP+yh4KmIfUrpcM3Oakw8C/iQmUnoqFGOuuNnJaoQ
   gmsyG+egJEAwvUdmsBVVqAKYjpwR4f3Vo61LNwFLX7FZ4twWY+hD0kuex
   HKcvhHdr/n8d1hZ7ZnnHP/o9Zdad+5yFbMBcTEqnbgpC231Pvb39wwcpY
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: f0vie2CDhUh7uhAFGRzbzO0+yGqIzSyk7Lz48kd0i+8UFFnZbBpEKFidN2bG2ap7gXVz9ZTaiC
 FlqQr7PDVRX7olxN2x12Ok0Nl0Dec5aP3UnvhWl1Zjx4kNfDA8D1N1HytvQwh6RiehAz6ew+H7
 fWvLauZ6EqUYdkpyOJC058EFq2gFt/aidAf35vSQQRiKloImt7CCcz3gkNPv0lmLmZPt3fUCkT
 IQpWYI7IcbSBN5RsrLhQ47Hv2y7fh7o8P+BiGLOPtCyQDWjFgF/EwHFp6PGExVzizvMUMlpKCF
 Wzk=
X-SBRS: 5.1
X-MesageID: 43346468
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,270,1613451600"; 
   d="scan'208";a="43346468"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fgQJkSFhDjRWN1m7olL8ZD2sNIukdQrBm8I6zZfCjq0ofZ4sEF3bwgUEX39h3H9Vy9Q8ghjO4wQ46wohCk/B6WYLiiHU4ZuNiThtrpcfqr+rmkYYjA83CVtNZ+TPO/a3eQ3Kg+IFWL8Esth0gFTZ3Kgkqxjl0RbTM5LJhJDnZJP0FYa6RtxKixULQ3htKXtsSGq3Uincfeg/EfzUR6IOQ0tl+Ui8sHqXG5jFcXBHgA455O6XRKyA6lUa3L/L3hvJSDmkDqj7dggeuxMXH+Jn6MPvgJYjgEX5s5+kBSxaviiBtF1iu2HUu/rsS9rHiZfn37KJeH8+LEgLZFpJswkyiQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BMVq/PVM5fTVIL1pWHdqj1g4J5WYn+Z6zPmOXnMPNjU=;
 b=mbMbOwjHtyz5Cv0+y2fs6P9eUYiZ9CLXI3k/ehYKAHJ6OD4N+VHLy/3QwR83YvmYSqXNOgAKko5q5iebxjFqRdFu3aDYxC8MRnMpNkgfxBYqct80SbGsZIB0bsGRRIpqwM3izplnmfer+b7aSBP9KWYXK7A+VNOaWYEKEDJZ4zUdD4XZ+YKg9mgKf71+0rVWcQ+cJpgZQaFMkvXMRpL6cfO/YdE2vuBQsYJZDs7W/VpjVIewjCPFEmvPi/viu7SxC5o+WvmeFKSYY6XuvmJrTrHzgCNNL1l9vXhnenYHplYArrWcm5NuSC+ayxCfJ1MTG1zsYv+qSRIzG8Ge1jlqyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=BMVq/PVM5fTVIL1pWHdqj1g4J5WYn+Z6zPmOXnMPNjU=;
 b=Poo8t0/BYT/zzU9awm+byz10R6bvXn2xcP2QEwiY91n4b41FZ12rtWuOyIsLymyfmdDjEtDIsTUOlKsg9/ZLvHmyqK/WdZ0PSHKj9UzN1BzoCsbysBXiaOjyjAHL+QVHT0EG67gDhqaeI//8YRMO4gfW5b8GQDwBNE9cqnNVmfU=
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Jan Beulich <JBeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-4-andrew.cooper3@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 3/5] x86/xstate: Rework xstate_ctxt_size() as
 xstate_uncompressed_size()
Message-ID: <a8487667-1f47-1aae-1528-4a1224cbda7b@citrix.com>
Date: Mon, 3 May 2021 19:17:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210503153938.14109-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0158.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:188::19) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d32732c2-be5d-4cef-acac-08d90e5fc525
X-MS-TrafficTypeDiagnostic: BYAPR03MB3430:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB34301248D1D284E54C13DF13BA5B9@BYAPR03MB3430.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: d32732c2-be5d-4cef-acac-08d90e5fc525
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 03 May 2021 18:17:54.3697
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: MC41fg+Oht054VjBgW87RRCdIsB3isS2wzI1UqBfTr2RH0vV6kXN7/QPwjL/frrrDEOVu7C6MVJ0Uprlcl739FecMdK0VcPUUI9151HupZA=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3430
X-OriginatorOrg: citrix.com

On 03/05/2021 16:39, Andrew Cooper wrote:
> We're soon going to need a compressed helper of the same form.
>
> The size of the uncompressed image is strictly a property of the highest
> user state.  This can be calculated trivially with xstate_offsets/sizes, and
> is much faster than a CPUID instruction in the first place, let alone the two
> XCR0 writes surrounding it.
>
> Retain the cross-check with hardware in debug builds, but forgo it in normal
> builds.  In particular, this means that the migration paths don't need to mess
> with XCR0 just to sanity check the buffer size.
>
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

The Qemu smoketests have actually found a bug here.

https://gitlab.com/xen-project/patchew/xen/-/jobs/1232118510/artifacts/file/smoke.serial

We call into xstate_uncompressed_size() from
hvm_register_CPU_save_and_restore(), so the previous "xcr0 == 0" path was
critical to Xen not exploding on non-xsave platforms.

This is straight up buggy - we shouldn't be registering xsave handlers
on non-xsave platforms, but the calculation is also wrong (in the safe
direction at least) when we use compressed formats.  Yet another
unexpected surprise for the todo list.

~Andrew



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:05:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:05:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121799.229705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lddsr-0003wN-Vn; Mon, 03 May 2021 19:05:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121799.229705; Mon, 03 May 2021 19:05:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lddsr-0003wG-Rx; Mon, 03 May 2021 19:05:33 +0000
Received: by outflank-mailman (input) for mailman id 121799;
 Mon, 03 May 2021 19:05:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D+pa=J6=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1lddsq-0003wB-Sl
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:05:33 +0000
Received: from galois.linutronix.de (unknown [193.142.43.55])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24b006c1-b8d1-4e9f-8574-c9e3865d71f5;
 Mon, 03 May 2021 19:05:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24b006c1-b8d1-4e9f-8574-c9e3865d71f5
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1620068730;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=FLkve6HXLfgxpJAWW6er1tQK1qnNOtpub3goAX+TdbM=;
	b=r7T+1fS30pNUclSzvMNOId4Jc7ooI632oXoL+wBdH+efvKIyRgBV0QZHYfOBmZnYGuNGcV
	5U7ghgY6ShUSSX3nHIErQsr2UuP7dkwCXL55R4aXlU7ioqtUFsRpa89B63egHbiXRwP0IL
	sMdxQGpp5Xvp+a3O7/M81oE1aTtxmt520+GQS889a8UQhpEFALt5qWyM1/WlcTnEakkjc3
	3BfrS9syoDXogzgl0bBuzQPR8vlDeoWuIrzFrROO3rwKpGRz4JYIc10uCvO3a64e73KTt7
	EkAPlic1cFpwT5pV+b2xkced1EVqatU/4DpvaQlRBQbdRS03ncx6MAizauB9sw==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1620068730;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=FLkve6HXLfgxpJAWW6er1tQK1qnNOtpub3goAX+TdbM=;
	b=uD1KjZuTN8voYLmc+tVb3tu8d2nRizEfGUX5ZzhqdF060ShxUwNwzIyqHVUSEuiKrn0AO5
	2P8oP1s8gcjL/IAw==
To: Lai Jiangshan <jiangshanlai@gmail.com>, linux-kernel@vger.kernel.org
Cc: Lai Jiangshan <laijs@linux.alibaba.com>, Paolo Bonzini <pbonzini@redhat.com>, Sean Christopherson <seanjc@google.com>, Steven Rostedt <rostedt@goodmis.org>, Andi Kleen <ak@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>, kvm@vger.kernel.org, Josh Poimboeuf <jpoimboe@redhat.com>, Uros Bizjak <ubizjak@gmail.com>, Maxim Levitsky <mlevitsk@redhat.com>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Peter Zijlstra <peterz@infradead.org>, Alexandre Chartre <alexandre.chartre@oracle.com>, Joerg Roedel <jroedel@suse.de>, Jian Cai <caij2003@gmail.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/4] x86/xen/entry: Rename xenpv_exc_nmi to noist_exc_nmi
In-Reply-To: <20210426230949.3561-2-jiangshanlai@gmail.com>
References: <20210426230949.3561-1-jiangshanlai@gmail.com> <20210426230949.3561-2-jiangshanlai@gmail.com>
Date: Mon, 03 May 2021 21:05:29 +0200
Message-ID: <87r1ind4ee.ffs@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Tue, Apr 27 2021 at 07:09, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@linux.alibaba.com>
>
> There is no functional change intended.  Just rename it and
> move it to arch/x86/kernel/nmi.c so that we can reuse it later in
> the next patch for early NMI and kvm.

'Reuse it later' is not really a proper explanation why this change is
necessary.

Also this can be simplified by using aliasing which keeps the name
spaces intact.
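As a standalone illustration (outside the Xen/x86 idtentry machinery, with hypothetical names), GCC's alias attribute binds a second symbol to an existing function definition, so no wrapper call is emitted:

```c
#include <assert.h>

/* The real handler body; in the Xen patch this role is played by
 * exc_nmi. */
int real_handler(int regs)
{
    return regs * 2;
}

/* aliased_handler resolves to the same entry point as real_handler;
 * callers see a distinct name but identical code. */
int aliased_handler(int regs) __attribute__((alias("real_handler")));
```

The alias keeps both names visible to the linker while avoiding the indirection of a trampoline function.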

Thanks,

        tglx
---       

--- a/arch/x86/include/asm/idtentry.h
+++ b/arch/x86/include/asm/idtentry.h
@@ -135,6 +135,9 @@ static __always_inline void __##func(str
 #define DEFINE_IDTENTRY_RAW(func)					\
 __visible noinstr void func(struct pt_regs *regs)
 
+#define DEFINE_IDTENTRY_RAW_ALIAS(alias, func)				\
+__visible noinstr void func(struct pt_regs *regs) __alias(alias)
+
 /**
  * DECLARE_IDTENTRY_RAW_ERRORCODE - Declare functions for raw IDT entry points
  *				    Error code pushed by hardware
--- a/arch/x86/kernel/nmi.c
+++ b/arch/x86/kernel/nmi.c
@@ -524,6 +524,8 @@ DEFINE_IDTENTRY_RAW(exc_nmi)
 		mds_user_clear_cpu_buffers();
 }
 
+DEFINE_IDTENTRY_RAW_ALIAS(exc_nmi, xenpv_exc_nmi);
+
 void stop_nmi(void)
 {
 	ignore_nmis++;
--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -565,12 +565,6 @@ static void xen_write_ldt_entry(struct d
 
 void noist_exc_debug(struct pt_regs *regs);
 
-DEFINE_IDTENTRY_RAW(xenpv_exc_nmi)
-{
-	/* On Xen PV, NMI doesn't use IST.  The C part is the same as native. */
-	exc_nmi(regs);
-}
-
 DEFINE_IDTENTRY_RAW_ERRORCODE(xenpv_exc_double_fault)
 {
 	/* On Xen PV, DF doesn't use IST.  The C part is the same as native. */


From xen-devel-bounces@lists.xenproject.org Mon May 03 19:28:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:28:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121805.229730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeF3-0005ju-1p; Mon, 03 May 2021 19:28:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121805.229730; Mon, 03 May 2021 19:28:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeF2-0005jm-Uj; Mon, 03 May 2021 19:28:28 +0000
Received: by outflank-mailman (input) for mailman id 121805;
 Mon, 03 May 2021 19:28:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeF1-0005i5-MB
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:28:27 +0000
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 289b7eb7-366a-4be4-b57f-3045304d52a9;
 Mon, 03 May 2021 19:28:27 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id o5so6300921qkb.0
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:27 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.25
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 289b7eb7-366a-4be4-b57f-3045304d52a9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=g57GzuZpsCctvWoQCtrFFJcfB/69BgTCkSLuWAk+2XM=;
        b=YBSpjJP205ldCrlB+abWxu+m8VbeuPWIJ/XXExUz8EYyTkZUFjL7+LVAckObklbecq
         sPObnk2qR7dwOPgGgFzMo5rBmIb1D7OBaPvJ9SkIf6BJYqvOKybVkKht3Dzq74FhtPHv
         kw9pke3JThZ4BAXAMsWRUDC/bKESzyHV4xcS/vnc1V2z0FG8IDDfyjmYh9mFcjz3eoI1
         9+zZBOGyKlx4LFybEC/LGuvK/i9AAIdT0YTkCXdCxD7EOWOSwspc7zdAoJ0IVbOIWhOp
         E+bDacQAsydRaewH0NtCDSLYfl5pbxI0m5JCQT6cuCi0V9Qjk+SZC5BAstHdQbOfBKeL
         urWA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=g57GzuZpsCctvWoQCtrFFJcfB/69BgTCkSLuWAk+2XM=;
        b=t4hf6fIQpv6sFu+hCTiVgGK6AKE54Ct28Ux/sfJCwrTrNZiBgHVPFQyStCnTyo6uOg
         WDeD9xV576EFubHH7Q0qykN3m0tgzN2l7ZkFABHMVLgSjnOoDTabDdiKwM0W0P1v0jVh
         nZ7tW6FiGpLRtHMNK1BInCfB1txZxDbOTzEgvOHInb7gpaMz1zJReGRA2QzFXeZVVIM3
         yGCk4dKWNxlgx1DYz51kQkM1K7DTKcgix6XYSepsyN9EarKEXYLr4cUypldwf5LpwWMf
         xf6ElMXPF2FCs7awwUeMUXAz5JK6GkmHjqOpsb1QYA4z0xa6fkYrVjlGMee5lbax5C8h
         4juw==
X-Gm-Message-State: AOAM5317CPuYjK9hmeG1Qsy7CTqFA/vhh9L1mxKBbg6BaJLB4AK+OTuM
	dCLrHzYHxMnvDZ5V0rJEzQqO1U6d9OE=
X-Google-Smtp-Source: ABdhPJzItvGOOofE9kprenmOUVxW3jLCJ50TJvXYSkBUnqLA6shwxoxlgBVFLmzU5Z/p3pGvKbu5lg==
X-Received: by 2002:a37:a5d8:: with SMTP id o207mr17798356qke.13.1620070106436;
        Mon, 03 May 2021 12:28:26 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>
Subject: [PATCH 01/13] cpufreq: Allow restricting to internal governors only
Date: Mon,  3 May 2021 15:27:58 -0400
Message-Id: <20210503192810.36084-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For hwp, the standard governors are not usable, and only the internal
one is applicable.  Add the cpufreq_governor_internal boolean to
indicate when an internal governor, like hwp-internal, will be used.
This is set during presmp_initcall, so that it can suppress governor
registration during initcall.  Only a governor with a name containing
"internal" will be allowed in that case.

This way, the unusable governors are not registered, so the internal
one is the only one returned to userspace.  This means incompatible
governors won't be advertised to userspace.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/drivers/cpufreq/cpufreq.c      | 4 ++++
 xen/include/acpi/cpufreq/cpufreq.h | 1 +
 2 files changed, 5 insertions(+)

diff --git a/xen/drivers/cpufreq/cpufreq.c b/xen/drivers/cpufreq/cpufreq.c
index 419aae83ee..6cbf150538 100644
--- a/xen/drivers/cpufreq/cpufreq.c
+++ b/xen/drivers/cpufreq/cpufreq.c
@@ -57,6 +57,7 @@ struct cpufreq_dom {
 };
 static LIST_HEAD_READ_MOSTLY(cpufreq_dom_list_head);
 
+bool __read_mostly cpufreq_governor_internal;
 struct cpufreq_governor *__read_mostly cpufreq_opt_governor;
 LIST_HEAD_READ_MOSTLY(cpufreq_governor_list);
 
@@ -122,6 +123,9 @@ int __init cpufreq_register_governor(struct cpufreq_governor *governor)
     if (!governor)
         return -EINVAL;
 
+    if (cpufreq_governor_internal && strstr(governor->name, "internal") == NULL)
+        return -EINVAL;
+
     if (__find_governor(governor->name) != NULL)
         return -EEXIST;
 
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index e88b20bfed..56df5eebed 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -115,6 +115,7 @@ extern struct cpufreq_governor cpufreq_gov_dbs;
 extern struct cpufreq_governor cpufreq_gov_userspace;
 extern struct cpufreq_governor cpufreq_gov_performance;
 extern struct cpufreq_governor cpufreq_gov_powersave;
+extern bool cpufreq_governor_internal;
 
 extern struct list_head cpufreq_governor_list;
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:28:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:28:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121804.229718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeEx-0005iH-PK; Mon, 03 May 2021 19:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121804.229718; Mon, 03 May 2021 19:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeEx-0005iA-ME; Mon, 03 May 2021 19:28:23 +0000
Received: by outflank-mailman (input) for mailman id 121804;
 Mon, 03 May 2021 19:28:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeEw-0005i5-Pi
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:28:22 +0000
Received: from mail-qk1-x731.google.com (unknown [2607:f8b0:4864:20::731])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78dc73cc-d9a8-4c3d-babd-04d9718dfc6c;
 Mon, 03 May 2021 19:28:21 +0000 (UTC)
Received: by mail-qk1-x731.google.com with SMTP id i67so3146422qkc.4
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:21 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78dc73cc-d9a8-4c3d-babd-04d9718dfc6c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=Co+6Hp6qlbhWNQZqHSxOLclZUpg0DuM7toVdpgmloKc=;
        b=efhDfL/Tbgue8tsD8zGB9Qt7U5UwRRapOgACRafC86iyyJ2/uG4FXq0e0fUbZLOnCk
         yg4/79gQTUI1oKLPU7Ys80rjjdQoNeJsF0EPj9TtM1zPsN+yIjA91Wnkq9sxA8h9aNe4
         4sqk3J434LukpT10bH4M6m5dga/KAfb3vGNgkgfgtv+NeD7xpMV5HQnKnhViJHllMoS1
         pJNdoaxaDTHtyzDKSbXA5czgV/Go3CPDge6XZ63BW13IhxQnyHDXqnqBBMz6RT2EnlNu
         YYGhK8h1UTyNOOYhiKF+imQ0c6oG8rVS/MNEjQ+mwjXbI+nYUYaXglEI72kUJM6HQIs+
         H2tQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=Co+6Hp6qlbhWNQZqHSxOLclZUpg0DuM7toVdpgmloKc=;
        b=WLh3ytkrETI6m4IVWn2NAWcxyVvejUs2etVhAzhwqT4EJUqg1qVWv0h1oj68VcWbwP
         z5mMxSlBfavRVH8YRBY1XnbbfEnQcu67aPHcuy+VXbtdMWJxMWUGcQOqAu8HpFcxxYaQ
         TF+b+PxViQN1ntKr5fkMfuhhBRWD5tpKZjCbSmaCajnRMqb2N/WmGPrrwvEjCApCCGNW
         wxBtIkj3p2rtBF+amAC/RTq7epECELT0Qw/B6P2kPQcu6RhvnlmHrZ2rV+8DLBCJmeug
         z3qvqDiLXE2++NgGHoiSFilrYDC6hXP+9f05Wp/MBe3S5y2aGQyveI5ITrDE7NVbZiCu
         aqRw==
X-Gm-Message-State: AOAM532xxld6mPKQAL8WSMt/1VII2rjW4Jp4NhMpJJzU7s7hRP4gF5QH
	Av0jwU5U6n/azcBrVAsvStxswLmIcIY=
X-Google-Smtp-Source: ABdhPJzNZvZL9DqzsyfXhcULxXw/9tnBoSkZ3w0E+/iO7GgnuycshsMnWxaMqexReotRX5hbeMxtOA==
X-Received: by 2002:ae9:e897:: with SMTP id a145mr18232749qkg.334.1620070100810;
        Mon, 03 May 2021 12:28:20 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH 00/13] Intel Hardware P-States (HWP) support
Date: Mon,  3 May 2021 15:27:57 -0400
Message-Id: <20210503192810.36084-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi,

This patch series adds Hardware-Controlled Performance States (HWP) for
Intel processors to Xen.

With HWP, the processor makes its own frequency selection decisions,
though users can set some parameters and preferences.  There is also
Turbo Boost, which dynamically pushes the max frequency if possible.

The existing governors don't work with HWP since they select frequencies
and HWP doesn't expose those.  Therefore a dummy hwp-internal governor is
used that doesn't do anything.

xenpm get-cpufreq-para is extended to show HWP parameters, and
set-cpufreq-hwp is added to set them.

A lightly loaded OpenXT laptop showed ~1W power savings according to
powertop.  A mostly idle Fedora system (dom0 only) showed a more modest
power savings.

This is for a 10th gen 6-core CPU with a 1600 MHz base and a 4900 MHz
max.  In the default balance mode, Turbo Boost doesn't exceed 4GHz.
Tweaking the energy_perf preference with `xenpm set-cpufreq-hwp balance
ene:64`, I've seen the CPU hit 4.7GHz before throttling down and
bouncing around between 4.3 and 4.5 GHz.  Curiously the other cores read
~4GHz when turbo boost takes effect.  This was done after pinning all
dom0 cores, and using taskset to pin to vCPU/pCPU 11 and running a bash
tightloop.

There are only minor changes since the RFC posting.  Typo fixes and a
few hunks have been folded into earlier patches.  I reordered "xenpm:
Factor out a non-fatal cpuid_parse variant" immediately before "xenpm:
Add set-cpufreq-hwp subcommand" where it is used.

Open questions:

HWP defaults to enabled and running in balanced mode.  This is useful
for testing, but should the old ACPI cpufreq driver remain the default?

This series unilaterally enables Hardware Duty Cycling (HDC), which is
another feature to autonomously power down things.  It is enabled if
HWP is enabled.  Maybe that wants to be configurable?

I've only tested on 8th gen and 10th gen systems.  They don't have
fast MSR support, but they do have activity window and energy_perf
support.  So the respective other modes are untested.

This changes the sysctl_pm_op hypercall, so that wants review.

I wanted to get this out since I know Qubes is also interested.

Regards,
Jason

Jason Andryuk (13):
  cpufreq: Allow restricting to internal governors only
  cpufreq: Add perf_freq to cpuinfo
  cpufreq: Export intel_feature_detect
  cpufreq: Add Hardware P-State (HWP) driver
  xenpm: Change get-cpufreq-para output for internal
  cpufreq: Export HWP parameters to userspace
  libxc: Include hwp_para in definitions
  xenpm: Print HWP parameters
  xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
  libxc: Add xc_set_cpufreq_hwp
  xenpm: Factor out a non-fatal cpuid_parse variant
  xenpm: Add set-cpufreq-hwp subcommand
  CHANGELOG: Add Intel HWP entry

 CHANGELOG.md                              |   2 +
 docs/misc/xen-command-line.pandoc         |   9 +
 tools/include/xenctrl.h                   |   6 +
 tools/libs/ctrl/xc_pm.c                   |  18 +
 tools/misc/xenpm.c                        | 375 +++++++++++-
 xen/arch/x86/acpi/cpufreq/Makefile        |   1 +
 xen/arch/x86/acpi/cpufreq/cpufreq.c       |  15 +-
 xen/arch/x86/acpi/cpufreq/hwp.c           | 671 ++++++++++++++++++++++
 xen/drivers/acpi/pmstat.c                 |  30 +
 xen/drivers/cpufreq/cpufreq.c             |   4 +
 xen/drivers/cpufreq/utility.c             |   1 +
 xen/include/acpi/cpufreq/cpufreq.h        |   9 +
 xen/include/acpi/cpufreq/processor_perf.h |   5 +
 xen/include/asm-x86/cpufeature.h          |  11 +-
 xen/include/asm-x86/msr-index.h           |  17 +
 xen/include/public/sysctl.h               |  52 +-
 16 files changed, 1197 insertions(+), 29 deletions(-)
 create mode 100644 xen/arch/x86/acpi/cpufreq/hwp.c

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:28:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:28:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121806.229742 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeF7-0005nm-De; Mon, 03 May 2021 19:28:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121806.229742; Mon, 03 May 2021 19:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeF7-0005nf-9z; Mon, 03 May 2021 19:28:33 +0000
Received: by outflank-mailman (input) for mailman id 121806;
 Mon, 03 May 2021 19:28:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeF6-0005i5-Ie
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:28:32 +0000
Received: from mail-qk1-x72b.google.com (unknown [2607:f8b0:4864:20::72b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ea1e9d3-6db0-433a-9af7-bf527518f5d2;
 Mon, 03 May 2021 19:28:30 +0000 (UTC)
Received: by mail-qk1-x72b.google.com with SMTP id i67so3146897qkc.4
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:30 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ea1e9d3-6db0-433a-9af7-bf527518f5d2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=o6aeco73TVtnG2vnCKcrrr9RATvAYWd7uArT6elgZ4M=;
        b=jM9llCt0QXCGnbPlb2cvcWJKAmrDSwo4oWznuSXdih4WXdBIIY4x+Yysl99fS+iCtn
         K9hA03/FWdkpL2tmB6NjdnJaqql+sc+6eIL+mfxpMi2Fhkj75eF5IGrtQl+38SCjKGAf
         p4c8czJCGJODZux5wP737n2+HFbKfGOP4nJaeDCkGDIV8P+A6oNYFFBJk7cM3mTLhSTE
         re3LhBQd3hHCToEu/ZXmTnOPyVhvA6zCEJmyf0v31gy9m5sW6R862AlUJX1BUwvMXGNX
         62Xc/51OOE6g3uPf2Evvq80SkGeaDXUO94f0bhIrQWcX7917rGllkm17dViKf2tBIfSA
         FoFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=o6aeco73TVtnG2vnCKcrrr9RATvAYWd7uArT6elgZ4M=;
        b=g8iEJflMYM7i7doqae5epiTktyLcCiEqc0Z0Pu6xsUm1NayH8w3G65PXC7MbNoXkxw
         VyVQY0BZ1pHq2xUIhsTFiWcE8BYiWOSGWXpDE1qm5hfMmL+EwzAyuMxXm7njbKOsbbmL
         QuPw15GzWjs7dlUwkqIYeRpA8MjhS63fkUOTLUdI8Eq+l1yqqIfpXFoDgXK97RUCW/mR
         9qyDRJYvMzMlw0bX5RQ25SZ+JXG2j+7hCGK1s5GeuzTJgFIZWAokD4epTwqBTC/i1Hdu
         osE5+jyXIl0A/vBQqAhbpdbOypSAbEeZpmpIAwsTqvlM+M71qlrfXIOQQP6QWuM9JaJq
         xm8g==
X-Gm-Message-State: AOAM533vM9Fbv62N4S+5VdM3Zvk6gl67w+7NIBr8fBd41aQGG54kNHeF
	q5rhJ6O7ZJam4oVrsMuGKxWyKNzmcYQ=
X-Google-Smtp-Source: ABdhPJzy3E49jxtVyfMBYZkW+p55Hjn3fb981MdlPPXBkc0anCD6xoEj5zqOSMI3ZDWCGDmaiPnFJA==
X-Received: by 2002:ae9:df46:: with SMTP id t67mr20677355qkf.269.1620070109951;
        Mon, 03 May 2021 12:28:29 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 02/13] cpufreq: Add perf_freq to cpuinfo
Date: Mon,  3 May 2021 15:27:59 -0400
Message-Id: <20210503192810.36084-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

acpi-cpufreq scales the aperf/mperf measurements by max_freq, but HWP
needs to scale by base frequency.  Setting max_freq to base_freq
"works", but the code is not obvious, and returning values to userspace
is tricky.  Add an additional perf_freq member which is used for scaling
aperf/mperf measurements.
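A minimal sketch of the scaling (struct and field names hypothetical, mirroring the change in the diff): the measured frequency is the aperf/mperf-derived percentage applied to whichever reference frequency the driver stored in perf_freq:

```c
/* Hypothetical reduced model of the cpuinfo scaling reference:
 * acpi-cpufreq would set perf_freq = max_freq, while HWP would set
 * perf_freq = base_freq. */
struct cpuinfo_sketch {
    unsigned int max_freq;   /* highest advertised frequency */
    unsigned int perf_freq;  /* reference used to scale aperf/mperf */
};

static unsigned int measured_freq(const struct cpuinfo_sketch *ci,
                                  unsigned int perf_percent)
{
    return ci->perf_freq * perf_percent / 100;
}
```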

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
I don't like this, but it seems the best way to re-use the common
aperf/mperf code.  The other option would be to add wrappers that then
do the acpi vs. hwp scaling.
---
 xen/arch/x86/acpi/cpufreq/cpufreq.c | 2 +-
 xen/drivers/cpufreq/utility.c       | 1 +
 xen/include/acpi/cpufreq/cpufreq.h  | 3 +++
 3 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index f1f3c6923f..5eac2f7321 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -317,7 +317,7 @@ unsigned int get_measured_perf(unsigned int cpu, unsigned int flag)
     else
         perf_percent = 0;
 
-    return policy->cpuinfo.max_freq * perf_percent / 100;
+    return policy->cpuinfo.perf_freq * perf_percent / 100;
 }
 
 static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
diff --git a/xen/drivers/cpufreq/utility.c b/xen/drivers/cpufreq/utility.c
index b93895d4dd..788929e079 100644
--- a/xen/drivers/cpufreq/utility.c
+++ b/xen/drivers/cpufreq/utility.c
@@ -236,6 +236,7 @@ int cpufreq_frequency_table_cpuinfo(struct cpufreq_policy *policy,
 
     policy->min = policy->cpuinfo.min_freq = min_freq;
     policy->max = policy->cpuinfo.max_freq = max_freq;
+    policy->cpuinfo.perf_freq = max_freq;
     policy->cpuinfo.second_max_freq = second_max_freq;
 
     if (policy->min == ~0)
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 56df5eebed..b91859ce5d 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -37,6 +37,9 @@ extern struct acpi_cpufreq_data *cpufreq_drv_data[NR_CPUS];
 struct cpufreq_cpuinfo {
     unsigned int        max_freq;
     unsigned int        second_max_freq;    /* P1 if Turbo Mode is on */
+    unsigned int        perf_freq; /* Scaling freq for aperf/mperf.
+                                      acpi-cpufreq uses max_freq, but HWP uses
+                                      base_freq. */
     unsigned int        min_freq;
     unsigned int        transition_latency; /* in 10^(-9) s = nanoseconds */
 };
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:28:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:28:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121807.229754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFC-0005sp-Nn; Mon, 03 May 2021 19:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121807.229754; Mon, 03 May 2021 19:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFC-0005sg-Jn; Mon, 03 May 2021 19:28:38 +0000
Received: by outflank-mailman (input) for mailman id 121807;
 Mon, 03 May 2021 19:28:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFB-0005i5-Il
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:28:37 +0000
Received: from mail-qv1-xf2c.google.com (unknown [2607:f8b0:4864:20::f2c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3aa841dc-cbbf-4658-b61b-53c596a891d0;
 Mon, 03 May 2021 19:28:33 +0000 (UTC)
Received: by mail-qv1-xf2c.google.com with SMTP id dl3so3214318qvb.3
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:33 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3aa841dc-cbbf-4658-b61b-53c596a891d0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=n5SMtjJiYtnXZ6qrr9jd+Aepky6pd8C0k6PHUjXrSqU=;
        b=GPA8CR2Qwl41wS71H7Nrs1TOtettIUx/IgqSGziPcU99C+sHnTjMaToTZL/tTmcdt/
         23214F+Lqdlhzh5/9b/t/MdewCySeHLfdLAq0fCeptshVx6a39sYtoHH50dTV/IrwYIo
         C3J4GFLp2o+rK4DjzTlzj1aqRTTrFuAECppfXNlL2wrqcec+XjejgKSf4muKQZXSNVSd
         AInPzb8U7DswIQAlvV1HcIOkIvqpH6BYGhbvRuY6UGJKsgTfVG+zv0GYIkRabg2OC4GF
         UlziXpnMpqCK+MEhyuM035V6bGYQ4LaoT5Tyy2adfogGWLdvyCjSxmSUvoRRkEcQJv/w
         UaBQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=n5SMtjJiYtnXZ6qrr9jd+Aepky6pd8C0k6PHUjXrSqU=;
        b=GGzzNNQlQg/X8+SGxEvwekG4an1Hl00F/iwcroWN2OSiX5W/smOMKfgiZEMFtfCRJW
         rRZxbIznQGnR3jWTtAp3vCHD3rHqiCw/lOJjIz7gJHdhB7QwcrqHgE4/D4Bm4pEHB0U7
         oLFMc76tVTVLKOaQtU7cC1IOOeHQRbXOi9OBvzRB0pBNLanF2QThes2Od8IzdwVYBC74
         tsMf8gVvL7k6F8sHnPl0mmN4EUkf9DID9/Laxw/GPEEEotYRiW/q3At8NMQXkDkegE3R
         rx/8DMUU7vpkbnn2kG+HRBGL0g3vApcwpRBvDPNZ/xZGmAR1HzeguxBtCJrWnpP90qAK
         8oaQ==
X-Gm-Message-State: AOAM530EmsLtAcnpVPej/pGb7V5VWFM1QCsMYqHgkc9AuUMfiCyxpNR1
	Zx6TXL/cze9BNq6LkzWCT/9B3Hg2w3w=
X-Google-Smtp-Source: ABdhPJxQRpBOQfnCcqOD0VJ2BEUNTsF095m+CkUd/TlpLDyVfgsU/URliWZUx2FLVrEwsNb8kmP/1A==
X-Received: by 2002:ad4:4bc4:: with SMTP id l4mr6967742qvw.46.1620070113001;
        Mon, 03 May 2021 12:28:33 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 03/13] cpufreq: Export intel_feature_detect
Date: Mon,  3 May 2021 15:28:00 -0400
Message-Id: <20210503192810.36084-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Export feature_detect as intel_feature_detect so it can be re-used by
HWP.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/arch/x86/acpi/cpufreq/cpufreq.c       | 4 ++--
 xen/include/acpi/cpufreq/processor_perf.h | 2 ++
 2 files changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index 5eac2f7321..8aae9b534d 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -340,7 +340,7 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
     return extract_freq(get_cur_val(cpumask_of(cpu)), data);
 }
 
-static void feature_detect(void *info)
+void intel_feature_detect(void *info)
 {
     struct cpufreq_policy *policy = info;
     unsigned int eax;
@@ -596,7 +596,7 @@ acpi_cpufreq_cpu_init(struct cpufreq_policy *policy)
     /* Check for APERF/MPERF support in hardware
      * also check for boost support */
     if (c->x86_vendor == X86_VENDOR_INTEL && c->cpuid_level >= 6)
-        on_selected_cpus(cpumask_of(cpu), feature_detect, policy, 1);
+        on_selected_cpus(cpumask_of(cpu), intel_feature_detect, policy, 1);
 
     /*
      * the first call to ->target() should result in us actually
diff --git a/xen/include/acpi/cpufreq/processor_perf.h b/xen/include/acpi/cpufreq/processor_perf.h
index d8a1ba68a6..e2c08f0e6d 100644
--- a/xen/include/acpi/cpufreq/processor_perf.h
+++ b/xen/include/acpi/cpufreq/processor_perf.h
@@ -7,6 +7,8 @@
 
 #define XEN_PX_INIT 0x80000000
 
+void intel_feature_detect(void *info);
+
 int powernow_cpufreq_init(void);
 unsigned int powernow_register_driver(void);
 unsigned int get_measured_perf(unsigned int cpu, unsigned int flag);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:28:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121809.229766 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFI-0005yQ-2o; Mon, 03 May 2021 19:28:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121809.229766; Mon, 03 May 2021 19:28:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFH-0005yH-Uz; Mon, 03 May 2021 19:28:43 +0000
Received: by outflank-mailman (input) for mailman id 121809;
 Mon, 03 May 2021 19:28:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFG-0005i5-It
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:28:42 +0000
Received: from mail-qk1-x734.google.com (unknown [2607:f8b0:4864:20::734])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd0d2f7c-7ff6-4225-af94-1de9a427986a;
 Mon, 03 May 2021 19:28:37 +0000 (UTC)
Received: by mail-qk1-x734.google.com with SMTP id o27so6262492qkj.9
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:37 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd0d2f7c-7ff6-4225-af94-1de9a427986a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=K9DM60LHgbrjnKk6ipk/EiTHiRh7kQUIe70xmtDrRvs=;
        b=P5Aw84DYh1kRXQ5q+yHsFjYfqLSNrAfPHZ9jm+TcO8POWbtHCSO4ouh7MVnMLN8FDS
         iRLfMTTPKeE4HObIsBrvkZ65umxSsZODudoQwF8kmoytYNhenb8BAmKRUeLLYE2fChqH
         ASQwYCgdusZ/HHnzLNK2GPyNanIGTYnZJKO/Ikpj/v+FXCtIdLYIP3YGlu4Qly0kg7qI
         K3zcgpq1+09x7uPMMDDbIx/pwiuFJ0Jvw+/F7IBUIroEaL0/pqRaWMU8QXwTheGPh3D6
         GIZc8mOL6rui+P+znBoRMZ6fK2uzj9O8e4iv3+TANhJKPs/5Rr0X7Cv6bqV42ThKrUWS
         rhTQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=K9DM60LHgbrjnKk6ipk/EiTHiRh7kQUIe70xmtDrRvs=;
        b=BM2uLiMhnboBTfffEvX8Oxl7ykCxWrtnjFdeEMBV77ngipRdYZZiAp/G69IRj1Vz7k
         sVQaX8El+Ny0Jmb4qfsdNwOgx15wXJd79HbAiekzMGvMtTvBJd7CVCgiNMheZsA/d2+9
         DYFgxl7TiOpaxl+gelZoYUorxEbs/Us3X7GED5at2hHhtu/8bwnqylXEYnCevEgNFDli
         YoBlrzLGqi9TZ+IQIgL5OfR7E34EtkRd22RPkSmabedXIvqHgphj/Ex6gzfIEHtaJMb4
         IxER7cB2gand73AcSGfP9GG184vGL7gWn/94YLCrgACKHdXWOwu3bTJ+TEaLMerHhVnV
         VOKw==
X-Gm-Message-State: AOAM531GWP7ALh4V5NhabJCLMojN0/7oP58t1pBM/qG6RL0TyBfS1hHy
	f4CgZhEGfoD3K4LS0waEshv4MNMnhH4=
X-Google-Smtp-Source: ABdhPJx4FLGBNxhoZUHMAUeQ/UHTXIvtNfrRi/JKbCGannSxELP/eThBg90JzOo4DSv2CWkTZ9YerA==
X-Received: by 2002:a37:6004:: with SMTP id u4mr21062213qkb.369.1620070116261;
        Mon, 03 May 2021 12:28:36 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH 04/13] cpufreq: Add Hardware P-State (HWP) driver
Date: Mon,  3 May 2021 15:28:01 -0400
Message-Id: <20210503192810.36084-5-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From the Intel SDM: "Hardware-Controlled Performance States (HWP), which
autonomously selects performance states while utilizing OS supplied
performance guidance hints."

Enable HWP to run in autonomous mode by poking the correct MSRs.

There is no configuration interface yet - the default energy/performance
preference of 0x80 (out of 0x0-0xff) is hardcoded.  xen_sysctl_pm_op/xenpm
will be extended to allow configuration in subsequent patches.

Unscientific powertop measurement of a mostly idle, customized OpenXT
install:
A 10th gen 6-core laptop showed battery discharge drop from ~9.x to
~7.x watts.
An 8th gen 4-core laptop dropped from ~10 to ~9 watts.

Power usage depends on many factors, especially display brightness, but
this does show a power saving in balanced mode when CPU utilization is
low.

HWP isn't compatible with an external governor - it doesn't take
explicit frequency requests.  Therefore a minimal internal governor,
hwp-internal, is also added as a placeholder.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---

We disable on cpuid_level < 0x16.  cpuid(0x16) is used to obtain the CPU
frequencies for the APERF/MPERF calculation.  Without it, things would
still work, but the average CPU frequency output would be wrong.

If HWP Energy_Performance_Preference isn't supported, the code falls
back to IA32_ENERGY_PERF_BIAS.  Right now, we don't check
CPUID.06H:ECX.SETBH[bit 3] before using that MSR.  The SDM reads like
it'll be available, and I assume it was available by the time Skylake
introduced HWP.

My 8th & 10th gen test systems both report:
(XEN) HWP: 1 notify: 1 act_window: 1 energy_perf: 1 pkg_level: 0 peci: 0
(XEN) HWP: FAST_IA32_HWP_REQUEST not supported
(XEN) HWP: Hardware Duty Cycling (HDC) supported
(XEN) HWP: HW_FEEDBACK not supported

So FAST_IA32_HWP_REQUEST and IA32_ENERGY_PERF_BIAS have not been tested.
---
 docs/misc/xen-command-line.pandoc         |   9 +
 xen/arch/x86/acpi/cpufreq/Makefile        |   1 +
 xen/arch/x86/acpi/cpufreq/cpufreq.c       |   9 +-
 xen/arch/x86/acpi/cpufreq/hwp.c           | 533 ++++++++++++++++++++++
 xen/include/acpi/cpufreq/processor_perf.h |   3 +
 xen/include/asm-x86/cpufeature.h          |  11 +-
 xen/include/asm-x86/msr-index.h           |  17 +
 7 files changed, 579 insertions(+), 4 deletions(-)
 create mode 100644 xen/arch/x86/acpi/cpufreq/hwp.c

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index c32a397a12..66363a3d71 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1355,6 +1355,15 @@ Specify whether guests are to be given access to physical port 80
 (often used for debugging purposes), to override the DMI based
 detection of systems known to misbehave upon accesses to that port.
 
+### hwp (x86)
+> `= <boolean>`
+
+> Default: `true`
+
+Specifies whether Xen uses Hardware-Controlled Performance States (HWP)
+on supported Intel hardware.  HWP is a Skylake+ feature which provides
+better CPU power management.
+
 ### idle_latency_factor (x86)
 > `= <integer>`
 
diff --git a/xen/arch/x86/acpi/cpufreq/Makefile b/xen/arch/x86/acpi/cpufreq/Makefile
index f75da9b9ca..db83aa6b14 100644
--- a/xen/arch/x86/acpi/cpufreq/Makefile
+++ b/xen/arch/x86/acpi/cpufreq/Makefile
@@ -1,2 +1,3 @@
 obj-y += cpufreq.o
+obj-y += hwp.o
 obj-y += powernow.o
diff --git a/xen/arch/x86/acpi/cpufreq/cpufreq.c b/xen/arch/x86/acpi/cpufreq/cpufreq.c
index 8aae9b534d..966490bda1 100644
--- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
+++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
@@ -641,9 +641,12 @@ static int __init cpufreq_driver_init(void)
     int ret = 0;
 
     if ((cpufreq_controller == FREQCTL_xen) &&
-        (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL))
-        ret = cpufreq_register_driver(&acpi_cpufreq_driver);
-    else if ((cpufreq_controller == FREQCTL_xen) &&
+        (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)) {
+        if (hwp_available())
+            ret = hwp_register_driver();
+        else
+            ret = cpufreq_register_driver(&acpi_cpufreq_driver);
+    } else if ((cpufreq_controller == FREQCTL_xen) &&
         (boot_cpu_data.x86_vendor &
          (X86_VENDOR_AMD | X86_VENDOR_HYGON)))
         ret = powernow_register_driver();
diff --git a/xen/arch/x86/acpi/cpufreq/hwp.c b/xen/arch/x86/acpi/cpufreq/hwp.c
new file mode 100644
index 0000000000..f8e6fdbd41
--- /dev/null
+++ b/xen/arch/x86/acpi/cpufreq/hwp.c
@@ -0,0 +1,533 @@
+/*
+ * hwp.c cpufreq driver to run Intel Hardware P-States (HWP)
+ *
+ * Copyright (C) 2021 Jason Andryuk <jandryuk@gmail.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or (at
+ * your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful, but
+ * WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License along
+ * with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/cpumask.h>
+#include <xen/init.h>
+#include <xen/param.h>
+#include <xen/xmalloc.h>
+#include <asm/msr.h>
+#include <asm/io.h>
+#include <acpi/cpufreq/cpufreq.h>
+
+static bool feature_hwp;
+static bool feature_hwp_notification;
+static bool feature_hwp_activity_window;
+static bool feature_hwp_energy_perf;
+static bool feature_hwp_pkg_level_ctl;
+static bool feature_hwp_peci;
+
+static bool feature_hdc;
+static bool feature_fast_msr;
+
+bool opt_hwp = true;
+boolean_param("hwp", opt_hwp);
+
+union hwp_request
+{
+    struct
+    {
+        uint64_t min_perf:8;
+        uint64_t max_perf:8;
+        uint64_t desired:8;
+        uint64_t energy_perf:8;
+        uint64_t activity_window:10;
+        uint64_t package_control:1;
+        uint64_t reserved:16;
+        uint64_t activity_window_valid:1;
+        uint64_t energy_perf_valid:1;
+        uint64_t desired_valid:1;
+        uint64_t max_perf_valid:1;
+        uint64_t min_perf_valid:1;
+    };
+    uint64_t raw;
+};
+
+struct hwp_drv_data
+{
+    union
+    {
+        uint64_t hwp_caps;
+        struct
+        {
+            uint64_t hw_highest:8;
+            uint64_t hw_guaranteed:8;
+            uint64_t hw_most_efficient:8;
+            uint64_t hw_lowest:8;
+            uint64_t hw_reserved:32;
+        };
+    };
+    union hwp_request curr_req;
+    uint16_t activity_window;
+    uint8_t minimum;
+    uint8_t maximum;
+    uint8_t desired;
+    uint8_t energy_perf;
+};
+struct hwp_drv_data *hwp_drv_data[NR_CPUS];
+
+#define hwp_err(...)     printk(XENLOG_ERR __VA_ARGS__)
+#define hwp_info(...)    printk(XENLOG_INFO __VA_ARGS__)
+#define hwp_verbose(...)                   \
+({                                         \
+    if ( cpufreq_verbose )                 \
+    {                                      \
+        printk(XENLOG_DEBUG __VA_ARGS__);  \
+    }                                      \
+})
+#define hwp_verbose_cont(...)              \
+({                                         \
+    if ( cpufreq_verbose )                 \
+    {                                      \
+        printk(             __VA_ARGS__);  \
+    }                                      \
+})
+
+static int hwp_governor(struct cpufreq_policy *policy,
+                        unsigned int event)
+{
+    int ret;
+
+    if ( policy == NULL )
+        return -EINVAL;
+
+    switch (event)
+    {
+    case CPUFREQ_GOV_START:
+        ret = 0;
+        break;
+    case CPUFREQ_GOV_STOP:
+        ret = -EINVAL;
+        break;
+    case CPUFREQ_GOV_LIMITS:
+        ret = 0;
+        break;
+    default:
+        ret = -EINVAL;
+    }
+
+    return ret;
+}
+
+static struct cpufreq_governor hwp_cpufreq_governor =
+{
+    .name          = "hwp-internal",
+    .governor      = hwp_governor,
+};
+
+static int __init cpufreq_gov_hwp_init(void)
+{
+    return cpufreq_register_governor(&hwp_cpufreq_governor);
+}
+__initcall(cpufreq_gov_hwp_init);
+
+bool hwp_available(void)
+{
+    uint32_t eax;
+    uint64_t val;
+    bool use_hwp;
+
+    if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
+    {
+        hwp_verbose("cpuid_level (%u) lacks HWP support\n", boot_cpu_data.cpuid_level);
+
+        return false;
+    }
+
+    eax = cpuid_eax(CPUID_PM_LEAF);
+    feature_hwp                 = !!(eax & CPUID6_EAX_HWP);
+    feature_hwp_notification    = !!(eax & CPUID6_EAX_HWP_Notification);
+    feature_hwp_activity_window = !!(eax & CPUID6_EAX_HWP_Activity_Window);
+    feature_hwp_energy_perf     =
+        !!(eax & CPUID6_EAX_HWP_Energy_Performance_Preference);
+    feature_hwp_pkg_level_ctl   =
+        !!(eax & CPUID6_EAX_HWP_Package_Level_Request);
+    feature_hwp_peci            = !!(eax & CPUID6_EAX_HWP_PECI);
+
+    hwp_verbose("HWP: %d notify: %d act_window: %d energy_perf: %d pkg_level: %d peci: %d\n",
+                feature_hwp, feature_hwp_notification,
+                feature_hwp_activity_window, feature_hwp_energy_perf,
+                feature_hwp_pkg_level_ctl, feature_hwp_peci);
+
+    if ( !feature_hwp )
+    {
+        hwp_verbose("Hardware does not support HWP\n");
+
+        return false;
+    }
+
+    if ( boot_cpu_data.cpuid_level < 0x16 )
+    {
+        hwp_info("HWP disabled: cpuid_level %x < 0x16 lacks CPU freq info\n",
+                 boot_cpu_data.cpuid_level);
+
+        return false;
+    }
+
+    hwp_verbose("HWP: FAST_IA32_HWP_REQUEST %ssupported\n",
+                eax & CPUID6_EAX_FAST_HWP_MSR ? "" : "not ");
+    if ( eax & CPUID6_EAX_FAST_HWP_MSR )
+    {
+        if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY, val) )
+            hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY)\n");
+
+        hwp_verbose("HWP: MSR_FAST_UNCORE_MSRS_CAPABILITY: %016lx\n", val);
+        if ( val & FAST_IA32_HWP_REQUEST )
+        {
+            hwp_verbose("HWP: FAST_IA32_HWP_REQUEST MSR available\n");
+            feature_fast_msr = true;
+        }
+    }
+
+    feature_hdc = !!(eax & CPUID6_EAX_HDC);
+
+    hwp_verbose("HWP: Hardware Duty Cycling (HDC) %ssupported\n",
+                feature_hdc ? "" : "not ");
+
+    hwp_verbose("HWP: HW_FEEDBACK %ssupported\n",
+                (eax & CPUID6_EAX_HW_FEEDBACK) ? "" : "not ");
+
+    use_hwp = feature_hwp && opt_hwp;
+    cpufreq_governor_internal = use_hwp;
+
+    if ( use_hwp )
+        hwp_info("Using HWP for cpufreq\n");
+
+    return use_hwp;
+}
+
+static void hdc_set_pkg_hdc_ctl(bool val)
+{
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
+
+        return;
+    }
+
+    msr = val ? IA32_PKG_HDC_CTL_HDC_PKG_Enable : 0;
+
+    if ( wrmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_PKG_HDC_CTL): %016lx\n", msr);
+}
+
+static void hdc_set_pm_ctl1(bool val)
+{
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
+
+        return;
+    }
+
+    msr = val ? IA32_PM_CTL1_HDC_Allow_Block : 0;
+
+    if ( wrmsr_safe(MSR_IA32_PM_CTL1, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_PM_CTL1): %016lx\n", msr);
+}
+
+static void hwp_fast_uncore_msrs_ctl(bool val)
+{
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL, msr) )
+        hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL)\n");
+
+    msr = val ? FAST_IA32_HWP_REQUEST_MSR_ENABLE : 0;
+
+    if ( wrmsr_safe(MSR_FAST_UNCORE_MSRS_CTL, msr) )
+        hwp_err("error wrmsr_safe(MSR_FAST_UNCORE_MSRS_CTL): %016lx\n", msr);
+}
+
+static void hwp_get_cpu_speeds(struct cpufreq_policy *policy)
+{
+    uint32_t base_khz, max_khz, bus_khz, edx;
+
+    cpuid(0x16, &base_khz, &max_khz, &bus_khz, &edx);
+
+    /* aperf/mperf scales base. */
+    policy->cpuinfo.perf_freq = base_khz * 1000;
+    policy->cpuinfo.min_freq = base_khz * 1000;
+    policy->cpuinfo.max_freq = max_khz * 1000;
+    policy->min = base_khz * 1000;
+    policy->max = max_khz * 1000;
+    policy->cur = 0;
+}
+
+static void hwp_read_capabilities(void *info)
+{
+    struct cpufreq_policy *policy = info;
+    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
+
+    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
+                policy->cpu);
+
+        return;
+    }
+
+    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
+
+        return;
+    }
+}
+
+static void hwp_init_msrs(void *info)
+{
+    struct cpufreq_policy *policy = info;
+    uint64_t val;
+
+    /* Package-level MSR, but we don't have a good idea of packages here, so
+     * just do it every time. */
+    if ( rdmsr_safe(MSR_IA32_PM_ENABLE, val) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_PM_ENABLE)\n", policy->cpu);
+
+        return;
+    }
+
+    hwp_verbose("CPU%u: MSR_IA32_PM_ENABLE: %016lx\n", policy->cpu, val);
+    if ( val != IA32_PM_ENABLE_HWP_ENABLE )
+    {
+        val = IA32_PM_ENABLE_HWP_ENABLE;
+        if ( wrmsr_safe(MSR_IA32_PM_ENABLE, val) )
+            hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_PM_ENABLE, %lx)\n",
+                    policy->cpu, val);
+    }
+
+    hwp_read_capabilities(info);
+
+    /* Check for APERF/MPERF support in hardware
+     * also check for boost/turbo support */
+    intel_feature_detect(policy);
+
+    if ( feature_hdc )
+    {
+        hdc_set_pkg_hdc_ctl(true);
+        hdc_set_pm_ctl1(true);
+    }
+
+    if ( feature_fast_msr )
+        hwp_fast_uncore_msrs_ctl(true);
+
+    hwp_get_cpu_speeds(policy);
+}
+
+static int hwp_cpufreq_verify(struct cpufreq_policy *policy)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data = hwp_drv_data[cpu];
+
+    if ( !feature_hwp_energy_perf && data->energy_perf )
+    {
+        if ( data->energy_perf > 15 )
+        {
+            hwp_err("energy_perf %d exceeds IA32_ENERGY_PERF_BIAS range 0-15\n",
+                    data->energy_perf);
+
+            return -EINVAL;
+        }
+    }
+
+    if ( !feature_hwp_activity_window && data->activity_window )
+    {
+        hwp_err("HWP activity window not supported.\n");
+
+        return -EINVAL;
+    }
+
+    return 0;
+}
+
+/* val 0 - highest performance, 15 - maximum energy savings */
+static void hwp_energy_perf_bias(void *info)
+{
+    uint64_t msr;
+    struct hwp_drv_data *data = info;
+    uint8_t val = data->energy_perf;
+
+    ASSERT(val <= 15);
+
+    if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
+    {
+        hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
+
+        return;
+    }
+
+    msr &= ~(0xf);
+    msr |= val;
+
+    if ( wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
+        hwp_err("error wrmsr_safe(MSR_IA32_ENERGY_PERF_BIAS): %016lx\n", msr);
+}
+
+static void hwp_write_request(void *info)
+{
+    struct cpufreq_policy *policy = info;
+    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
+    union hwp_request hwp_req = data->curr_req;
+
+    BUILD_BUG_ON(sizeof(union hwp_request) != sizeof(uint64_t));
+    if ( wrmsr_safe(MSR_IA32_HWP_REQUEST, hwp_req.raw) )
+    {
+        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_HWP_REQUEST, %lx)\n",
+                policy->cpu, hwp_req.raw);
+        rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw);
+    }
+}
+
+static int hwp_cpufreq_target(struct cpufreq_policy *policy,
+                              unsigned int target_freq, unsigned int relation)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data = hwp_drv_data[cpu];
+    union hwp_request hwp_req;
+
+    /* Zero everything to ensure reserved bits are zero... */
+    hwp_req.raw = 0;
+    /* .. and update from there */
+    hwp_req.min_perf = data->minimum;
+    hwp_req.max_perf = data->maximum;
+    hwp_req.desired = data->desired;
+    if ( feature_hwp_energy_perf )
+        hwp_req.energy_perf = data->energy_perf;
+    if ( feature_hwp_activity_window )
+        hwp_req.activity_window = data->activity_window;
+
+    if ( hwp_req.raw == data->curr_req.raw )
+        return 0;
+
+    data->curr_req.raw = hwp_req.raw;
+
+    hwp_verbose("CPU%u: wrmsr HWP_REQUEST %016lx\n", cpu, hwp_req.raw);
+    on_selected_cpus(cpumask_of(cpu), hwp_write_request, policy, 1);
+
+    if ( !feature_hwp_energy_perf && data->energy_perf )
+    {
+        on_selected_cpus(cpumask_of(cpu), hwp_energy_perf_bias,
+                         data, 1);
+    }
+
+    return 0;
+}
+
+static int hwp_cpufreq_cpu_init(struct cpufreq_policy *policy)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data;
+
+    if ( cpufreq_opt_governor )
+    {
+        printk(XENLOG_WARNING
+               "HWP: governor \"%s\" is incompatible with hwp. Using default \"%s\"\n",
+               cpufreq_opt_governor->name, hwp_cpufreq_governor.name);
+    }
+    policy->governor = &hwp_cpufreq_governor;
+
+    data = xzalloc(typeof(*data));
+    if ( !data )
+        return -ENOMEM;
+
+    hwp_drv_data[cpu] = data;
+
+    on_selected_cpus(cpumask_of(cpu), hwp_init_msrs, policy, 1);
+
+    data->minimum = data->hw_lowest;
+    data->maximum = data->hw_highest;
+    data->desired = 0; /* default to HW autonomous */
+    if ( feature_hwp_energy_perf )
+        data->energy_perf = 0x80;
+    else
+        data->energy_perf = 7;
+
+    hwp_verbose("CPU%u: IA32_HWP_CAPABILITIES: %016lx\n", cpu, data->hwp_caps);
+
+    hwp_verbose("CPU%u: rdmsr HWP_REQUEST %016lx\n", cpu, data->curr_req.raw);
+
+    return 0;
+}
+
+static int hwp_cpufreq_cpu_exit(struct cpufreq_policy *policy)
+{
+    unsigned int cpu = policy->cpu;
+
+    xfree(hwp_drv_data[cpu]);
+    hwp_drv_data[cpu] = NULL;
+
+    return 0;
+}
+
+/* The SDM reads like turbo should be disabled with MSR_IA32_PERF_CTL and
+ * PERF_CTL_TURBO_DISENGAGE, but that does not seem to actually work, at least
+ * with my HWP testing.  MSR_IA32_MISC_ENABLE and MISC_ENABLE_TURBO_DISENGAGE
+ * is what Linux uses and seems to work. */
+static void hwp_set_misc_turbo(void *info)
+{
+    struct cpufreq_policy *policy = info;
+    uint64_t msr;
+
+    if ( rdmsr_safe(MSR_IA32_MISC_ENABLE, msr) )
+    {
+        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_MISC_ENABLE)\n", policy->cpu);
+
+        return;
+    }
+
+    if ( policy->turbo == CPUFREQ_TURBO_ENABLED )
+        msr &= ~MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE;
+    else
+        msr |= MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE;
+
+    if ( wrmsr_safe(MSR_IA32_MISC_ENABLE, msr) )
+        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_MISC_ENABLE): %016lx\n",
+                policy->cpu, msr);
+}
+
+static int hwp_cpufreq_update(int cpuid, struct cpufreq_policy *policy)
+{
+    on_selected_cpus(cpumask_of(cpuid), hwp_set_misc_turbo, policy, 1);
+
+    return 0;
+}
+
+static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
+{
+    .name   = "hwp-cpufreq",
+    .verify = hwp_cpufreq_verify,
+    .target = hwp_cpufreq_target,
+    .init   = hwp_cpufreq_cpu_init,
+    .exit   = hwp_cpufreq_cpu_exit,
+    .update = hwp_cpufreq_update,
+};
+
+int hwp_register_driver(void)
+{
+    int ret;
+
+    ret = cpufreq_register_driver(&hwp_cpufreq_driver);
+
+    return ret;
+}
diff --git a/xen/include/acpi/cpufreq/processor_perf.h b/xen/include/acpi/cpufreq/processor_perf.h
index e2c08f0e6d..2e67e667e0 100644
--- a/xen/include/acpi/cpufreq/processor_perf.h
+++ b/xen/include/acpi/cpufreq/processor_perf.h
@@ -9,6 +9,9 @@
 
 void intel_feature_detect(void *info);
 
+bool hwp_available(void);
+int hwp_register_driver(void);
+
 int powernow_cpufreq_init(void);
 unsigned int powernow_register_driver(void);
 unsigned int get_measured_perf(unsigned int cpu, unsigned int flag);
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 33b2257888..1900c90f90 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -26,7 +26,16 @@
 #define CPUID5_ECX_EXTENSIONS_SUPPORTED 0x1
 #define CPUID5_ECX_INTERRUPT_BREAK      0x2
 
-#define CPUID_PM_LEAF                    6
+#define CPUID_PM_LEAF                                6
+#define CPUID6_EAX_HWP                               (_AC(1, U) <<  7)
+#define CPUID6_EAX_HWP_Notification                  (_AC(1, U) <<  8)
+#define CPUID6_EAX_HWP_Activity_Window               (_AC(1, U) <<  9)
+#define CPUID6_EAX_HWP_Energy_Performance_Preference (_AC(1, U) << 10)
+#define CPUID6_EAX_HWP_Package_Level_Request         (_AC(1, U) << 11)
+#define CPUID6_EAX_HDC                               (_AC(1, U) << 13)
+#define CPUID6_EAX_HWP_PECI                          (_AC(1, U) << 16)
+#define CPUID6_EAX_FAST_HWP_MSR                      (_AC(1, U) << 18)
+#define CPUID6_EAX_HW_FEEDBACK                       (_AC(1, U) << 19)
 #define CPUID6_ECX_APERFMPERF_CAPABILITY 0x1
 
 /* CPUID level 0x00000001.edx */
diff --git a/xen/include/asm-x86/msr-index.h b/xen/include/asm-x86/msr-index.h
index bd3a3a1e7f..b8f712e1a3 100644
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -101,6 +101,12 @@
 #define MSR_RTIT_ADDR_A(n)                 (0x00000580 + (n) * 2)
 #define MSR_RTIT_ADDR_B(n)                 (0x00000581 + (n) * 2)
 
+#define MSR_FAST_UNCORE_MSRS_CTL            0x00000657
+#define  FAST_IA32_HWP_REQUEST_MSR_ENABLE   (_AC(1, ULL) <<  0)
+
+#define MSR_FAST_UNCORE_MSRS_CAPABILITY     0x0000065f
+#define  FAST_IA32_HWP_REQUEST              (_AC(1, ULL) <<  0)
+
 #define MSR_U_CET                           0x000006a0
 #define MSR_S_CET                           0x000006a2
 #define  CET_SHSTK_EN                       (_AC(1, ULL) <<  0)
@@ -112,10 +118,20 @@
 #define MSR_PL3_SSP                         0x000006a7
 #define MSR_INTERRUPT_SSP_TABLE             0x000006a8
 
+#define MSR_IA32_PM_ENABLE                  0x00000770
+#define  IA32_PM_ENABLE_HWP_ENABLE          (_AC(1, ULL) <<  0)
+#define MSR_IA32_HWP_CAPABILITIES           0x00000771
+#define MSR_IA32_HWP_REQUEST                0x00000774
+
 #define MSR_PASID                           0x00000d93
 #define  PASID_PASID_MASK                   0x000fffff
 #define  PASID_VALID                        (_AC(1, ULL) << 31)
 
+#define MSR_IA32_PKG_HDC_CTL                0x00000db0
+#define  IA32_PKG_HDC_CTL_HDC_PKG_Enable    (_AC(1, ULL) <<  0)
+#define MSR_IA32_PM_CTL1                    0x00000db1
+#define  IA32_PM_CTL1_HDC_Allow_Block       (_AC(1, ULL) <<  0)
+
 #define MSR_K8_VM_CR                        0xc0010114
 #define  VM_CR_INIT_REDIRECTION             (_AC(1, ULL) <<  1)
 #define  VM_CR_SVM_DISABLE                  (_AC(1, ULL) <<  4)
@@ -460,6 +476,7 @@
 #define MSR_IA32_MISC_ENABLE_LIMIT_CPUID  (1<<22)
 #define MSR_IA32_MISC_ENABLE_XTPR_DISABLE (1<<23)
 #define MSR_IA32_MISC_ENABLE_XD_DISABLE	(1ULL << 34)
+#define MSR_IA32_MISC_ENABLE_TURBO_DISENGAGE (1ULL << 38)
 
 #define MSR_IA32_TSC_DEADLINE		0x000006E0
 #define MSR_IA32_ENERGY_PERF_BIAS	0x000001b0
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:28:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:28:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121811.229778 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFM-000642-Jn; Mon, 03 May 2021 19:28:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121811.229778; Mon, 03 May 2021 19:28:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFM-00063s-GB; Mon, 03 May 2021 19:28:48 +0000
Received: by outflank-mailman (input) for mailman id 121811;
 Mon, 03 May 2021 19:28:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFL-0005i5-J7
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:28:47 +0000
Received: from mail-qk1-x730.google.com (unknown [2607:f8b0:4864:20::730])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 03167565-5537-404f-b71c-fa23f9918af9;
 Mon, 03 May 2021 19:28:39 +0000 (UTC)
Received: by mail-qk1-x730.google.com with SMTP id 197so5973125qkl.12
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:39 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03167565-5537-404f-b71c-fa23f9918af9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=A+tWllvHyW52JnCKAHpzVsmBO/u0B8vwweaHzsv0x+g=;
        b=c3ma4whvihZMr+XtT6Dsf4OvPW1Pd0zYSsA7yv0QPFWlH7p7BuQcQ2rtZziOWKLerT
         QaUC1k0ZsuNbbBHRsmyI5i/82bSOdyDg6ZGDgvxlKgAnxYIXJC5fc+rV5VwY+XKqpHpy
         LS9UTUig3LOABNY1ZWkONeNr2OJpkj9XjTdNSyxIpQgodbKxHkOiiZA6/8Nz707q5vrJ
         2GUOI9OCX0BZKNIWpMIFQBXDrEP/9aEklkD7xjIbYY7zBYn8SJjqzqIHRDTxCzvzGRH8
         d8Mu8fAfzzswLtYjMDAWdPgJaToPoK1kMIr4oolqmMT9WGyRJ+an+624OzwBCrWZRTLP
         31TQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=A+tWllvHyW52JnCKAHpzVsmBO/u0B8vwweaHzsv0x+g=;
        b=typfibQoJ2RX0HlYtl40cgBCp+73WBABo+9PNGLjXENR5/UsQFMwOuNx2suvTedgzc
         Z18x8vg4ECx9NJbAUIp4i4syooELzoQHXDn6FnddMFWGdnaDV6yBb6IFb+ljws8X8qhA
         N4lQbQ2ZR4g7HT4Egj4ShY7pbxkXYHac7bO9ckv7omN5tlpIYVb/zGY/d+0g3FQvRhQ1
         2EDNEtcP8o2/K4oYoh6C4zK7aWhObyFi74PL82zXHPXAmltM/b6Ftc0yvk8RQn22vJ1o
         pm6n2CZIOgXrVlgjW3LbjTfIkldjibQWg4Z3nStqZ7VcacBC6DYBboYu6aMPqzozx05K
         ZIXA==
X-Gm-Message-State: AOAM5308ZAV39HkCTvYgKrNiCKG2tsJ9JREvcnS2Ghf3pRF0nk1/QDn6
	No7f+hBGnayhbVWnVYdMuHEEHJUICWs=
X-Google-Smtp-Source: ABdhPJxhn3QpGVO522p42qDm0dV7XUUdhgdWu+j8lIrefXpjNxHG18JFiuo2CLCG8JYNeqhK+e05BA==
X-Received: by 2002:ae9:dc47:: with SMTP id q68mr21053462qkf.197.1620070119123;
        Mon, 03 May 2021 12:28:39 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 05/13] xenpm: Change get-cpufreq-para output for internal
Date: Mon,  3 May 2021 15:28:02 -0400
Message-Id: <20210503192810.36084-6-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When using HWP, some of the returned data is not applicable, so omit it
to avoid confusing the user.  Print the cpuinfo min and max values as
the base and turbo frequencies, since those are the relevant values for
HWP.  Similarly, stop printing the scaling frequencies, since the
hardware, not a governor, selects the operating frequency.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/misc/xenpm.c | 45 +++++++++++++++++++++++++++++----------------
 1 file changed, 29 insertions(+), 16 deletions(-)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index d0191d4984..562bf939f9 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -711,6 +711,7 @@ void start_gather_func(int argc, char *argv[])
 /* print out parameters about cpu frequency */
 static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
 {
+    bool internal = strstr(p_cpufreq->scaling_governor, "internal");
     int i;
 
     printf("cpu id               : %d\n", cpuid);
@@ -720,10 +721,19 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
         printf(" %d", p_cpufreq->affected_cpus[i]);
     printf("\n");
 
-    printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
-           p_cpufreq->cpuinfo_max_freq,
-           p_cpufreq->cpuinfo_min_freq,
-           p_cpufreq->cpuinfo_cur_freq);
+    if ( internal )
+    {
+        printf("cpuinfo frequency    : base [%u] turbo [%u]\n",
+               p_cpufreq->cpuinfo_min_freq,
+               p_cpufreq->cpuinfo_max_freq);
+    }
+    else
+    {
+        printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
+               p_cpufreq->cpuinfo_max_freq,
+               p_cpufreq->cpuinfo_min_freq,
+               p_cpufreq->cpuinfo_cur_freq);
+    }
 
     printf("scaling_driver       : %s\n", p_cpufreq->scaling_driver);
 
@@ -750,19 +760,22 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
                p_cpufreq->u.ondemand.up_threshold);
     }
 
-    printf("scaling_avail_freq   :");
-    for ( i = 0; i < p_cpufreq->freq_num; i++ )
-        if ( p_cpufreq->scaling_available_frequencies[i] ==
-             p_cpufreq->scaling_cur_freq )
-            printf(" *%d", p_cpufreq->scaling_available_frequencies[i]);
-        else
-            printf(" %d", p_cpufreq->scaling_available_frequencies[i]);
-    printf("\n");
+    if ( !internal )
+    {
+        printf("scaling_avail_freq   :");
+        for ( i = 0; i < p_cpufreq->freq_num; i++ )
+            if ( p_cpufreq->scaling_available_frequencies[i] ==
+                 p_cpufreq->scaling_cur_freq )
+                printf(" *%d", p_cpufreq->scaling_available_frequencies[i]);
+            else
+                printf(" %d", p_cpufreq->scaling_available_frequencies[i]);
+        printf("\n");
 
-    printf("scaling frequency    : max [%u] min [%u] cur [%u]\n",
-           p_cpufreq->scaling_max_freq,
-           p_cpufreq->scaling_min_freq,
-           p_cpufreq->scaling_cur_freq);
+        printf("scaling frequency    : max [%u] min [%u] cur [%u]\n",
+               p_cpufreq->scaling_max_freq,
+               p_cpufreq->scaling_min_freq,
+               p_cpufreq->scaling_cur_freq);
+    }
 
     printf("turbo mode           : %s\n",
            p_cpufreq->turbo_enabled ? "enabled" : "disabled or n/a");
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:28:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:28:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121816.229790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFR-0006A6-VU; Mon, 03 May 2021 19:28:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121816.229790; Mon, 03 May 2021 19:28:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFR-00069v-Rk; Mon, 03 May 2021 19:28:53 +0000
Received: by outflank-mailman (input) for mailman id 121816;
 Mon, 03 May 2021 19:28:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFQ-0005i5-JK
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:28:52 +0000
Received: from mail-qv1-xf2f.google.com (unknown [2607:f8b0:4864:20::f2f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a172d8b-12a9-47aa-99e3-a987c7197e98;
 Mon, 03 May 2021 19:28:42 +0000 (UTC)
Received: by mail-qv1-xf2f.google.com with SMTP id h3so3191784qve.13
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:42 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a172d8b-12a9-47aa-99e3-a987c7197e98
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=zxg8tmL3kR+kA95W5jp8RRRSYuidUeimNwqoUj/JiLc=;
        b=OTQJH6lAkxr/0mZotzGTfOrTZs9LaA1POMdgMjB7jyiSkmDZfcpbpZAEM0BcfddfLD
         Dw6RBH69CKalNwQBzEgpkD71Txqi6A51Dx8lo54SoFaM6dVnQah6+uKJJQF3bkX36mJF
         UPJqgrsccGunCV8P/J4xMrAkooV05O78kBAsUZDzMfcwN6bnKfmswcZok1VHbZBcfpxX
         tCafV/2XFQHgL1rQuBO/cClJvPtL5qc4NBZ4HIxWotg9JeNM7Dwd2kmUH8pm9VnjdqV7
         K1F0ulBKbmdXP8hDvfMXRBvFGAyKc2XPH3MKW2ORXqhpQxFJkWYzcHKr1ZTWeCtTOoVY
         iE4Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zxg8tmL3kR+kA95W5jp8RRRSYuidUeimNwqoUj/JiLc=;
        b=Pc2JJA0jzOBUbMqFu2VYa/+anPOqUCgWeBYTdxT61XnJvv+WEJNq6ycynm9f7sEGFT
         rwZSv9liNsz8wtnyWQmc71vwK3kB0iE6Hlg2e2r2o/rDjYQFETlZjF5dDqOAxiy02Dzy
         +BBAmvXTwIo3Wv5FtheQDSMkfd5VqWEiucINwEqfMJgOMI8m1DauMSdr4YyS42wWdKin
         gW+lowsgNXG6F3IlMsBKs5T8poki2Qw5iiIVKiXWSswVeM/AK+3u+KS4IXgM3B8TDXl+
         b8ooROwPU3dQfDCHkoD5jgI1DH9xpNj6KtQ0o5Q+8YMsjCs337D4nx3sCqBHe11hjFsy
         0Tww==
X-Gm-Message-State: AOAM532cCMpBJVfkg4J/qD1JU7JEv5yyBSN8L5TuSZbfpPF5iJpvjWzJ
	yQg0VqIeWyKxiJ76ICYJOmyZSc9Bzrk=
X-Google-Smtp-Source: ABdhPJzIZGwlNdJ88kvoudaoCZRg4pJghjAdxq+KJiFs4RJwCeJsWtKiGg7so4U+USFNQt5LqdFI0g==
X-Received: by 2002:a05:6214:2b0:: with SMTP id m16mr21927389qvv.4.1620070122177;
        Mon, 03 May 2021 12:28:42 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 06/13] cpufreq: Export HWP parameters to userspace
Date: Mon,  3 May 2021 15:28:03 -0400
Message-Id: <20210503192810.36084-7-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Extend xen_get_cpufreq_para to return the HWP parameters.  These match
the hardware rather closely.

The hw_feature bitmask is needed to indicate which optional fields the
actual hardware supports.

The use of uint8_t parameters matches the hardware size.  uint32_t
entries would grow the sysctl_t past the build assertion in setup.c.
The uint8_t ranges have held across multiple hardware generations, so
hopefully they won't change.

Increment XEN_SYSCTL_INTERFACE_VERSION for the new fields.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/arch/x86/acpi/cpufreq/hwp.c    | 24 ++++++++++++++++++++++++
 xen/drivers/acpi/pmstat.c          |  6 ++++++
 xen/include/acpi/cpufreq/cpufreq.h |  3 +++
 xen/include/public/sysctl.h        | 20 +++++++++++++++++++-
 4 files changed, 52 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/acpi/cpufreq/hwp.c b/xen/arch/x86/acpi/cpufreq/hwp.c
index f8e6fdbd41..92222d6d85 100644
--- a/xen/arch/x86/acpi/cpufreq/hwp.c
+++ b/xen/arch/x86/acpi/cpufreq/hwp.c
@@ -523,6 +523,30 @@ static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
     .update = hwp_cpufreq_update,
 };
 
+int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data = hwp_drv_data[cpu];
+
+    if ( data == NULL )
+        return -EINVAL;
+
+    hwp_para->hw_feature        =
+        (feature_hwp_activity_window ? XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  : 0) |
+        (feature_hwp_energy_perf     ? XEN_SYSCTL_HWP_FEAT_ENERGY_PERF : 0);
+    hwp_para->hw_lowest         = data->hw_lowest;
+    hwp_para->hw_most_efficient = data->hw_most_efficient;
+    hwp_para->hw_guaranteed     = data->hw_guaranteed;
+    hwp_para->hw_highest        = data->hw_highest;
+    hwp_para->minimum           = data->minimum;
+    hwp_para->maximum           = data->maximum;
+    hwp_para->energy_perf       = data->energy_perf;
+    hwp_para->activity_window   = data->activity_window;
+    hwp_para->desired           = data->desired;
+
+    return 0;
+}
+
 int hwp_register_driver(void)
 {
     int ret;
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 1bae635101..3e35c42949 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -290,6 +290,12 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
             &op->u.get_para.u.ondemand.sampling_rate,
             &op->u.get_para.u.ondemand.up_threshold);
     }
+
+    if ( !strncasecmp(op->u.get_para.scaling_governor,
+                      "hwp-internal", CPUFREQ_NAME_LEN) )
+    {
+        ret = get_hwp_para(policy, &op->u.get_para.u.hwp_para);
+    }
     op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);
 
     return ret;
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index b91859ce5d..42146ca2cf 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -246,4 +246,7 @@ int write_userspace_scaling_setspeed(unsigned int cpu, unsigned int freq);
 void cpufreq_dbs_timer_suspend(void);
 void cpufreq_dbs_timer_resume(void);
 
+/********************** hwp hypercall helper *************************/
+int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para);
+
 #endif /* __XEN_CPUFREQ_PM_H__ */
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 039ccf885c..1a6c6397ea 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -35,7 +35,7 @@
 #include "domctl.h"
 #include "physdev.h"
 
-#define XEN_SYSCTL_INTERFACE_VERSION 0x00000013
+#define XEN_SYSCTL_INTERFACE_VERSION 0x00000014
 
 /*
  * Read console content from Xen buffer ring.
@@ -301,6 +301,23 @@ struct xen_ondemand {
     uint32_t up_threshold;
 };
 
+struct xen_hwp_para {
+    uint16_t activity_window; /* 7-bit mantissa and 3-bit exponent */
+/* energy_perf range is 0-255 if set; otherwise 0-15 */
+#define XEN_SYSCTL_HWP_FEAT_ENERGY_PERF (1 << 0)
+/* activity_window is supported if set */
+#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1)
+    uint8_t hw_feature; /* bit flags for features */
+    uint8_t hw_lowest;
+    uint8_t hw_most_efficient;
+    uint8_t hw_guaranteed;
+    uint8_t hw_highest;
+    uint8_t minimum;
+    uint8_t maximum;
+    uint8_t desired;
+    uint8_t energy_perf;
+};
+
 /*
  * cpufreq para name of this structure named
  * same as sysfs file name of native linux
@@ -332,6 +349,7 @@ struct xen_get_cpufreq_para {
     union {
         struct  xen_userspace userspace;
         struct  xen_ondemand ondemand;
+        struct  xen_hwp_para hwp_para;
     } u;
 
     int32_t turbo_enabled;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:29:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:29:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121823.229802 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFc-0006Iw-97; Mon, 03 May 2021 19:29:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121823.229802; Mon, 03 May 2021 19:29:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFc-0006Il-5L; Mon, 03 May 2021 19:29:04 +0000
Received: by outflank-mailman (input) for mailman id 121823;
 Mon, 03 May 2021 19:29:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFa-0005i5-Jn
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:29:02 +0000
Received: from mail-qk1-x72a.google.com (unknown [2607:f8b0:4864:20::72a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 987927ca-06df-4976-8231-de24dc5622a7;
 Mon, 03 May 2021 19:28:46 +0000 (UTC)
Received: by mail-qk1-x72a.google.com with SMTP id 197so5973479qkl.12
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:46 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 987927ca-06df-4976-8231-de24dc5622a7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=uqmeaN/4AXKtgJwT/eGJDLG2d8Jq2K3MPeafs5T5nYM=;
        b=O/puGUfut9mjkvqGS86rfWewHK5E599cwLjEx2+bHI6TOIOUpS4ZZ9oNcqv64wwHK0
         q64nPMyLnCGNL3Fw+sllS/T/vFIOtu+Mv+Ul+uPQI8xdQJKtmvMCWpAN8zP0t3OrvxLp
         3Efpb6rDVyYF+PCKjAjx4oPbKKF6sMAs5uQdixPlakV7IBxVtdzzJIwO2kT/Cww1k0xO
         LJsYSJCkJ2UcPEZ7+zKyFFoTMnsi5U1O7DTKpJo1FXK5XFWkFEAG2V/bu1/jDg6bEvUW
         JdhIKB+xJ5HSvhF3bKXWpt1otDoigLP4X+VLeAOGW6rpUBxs24bKmmY8LzvvL8/Yt7K0
         DrLA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=uqmeaN/4AXKtgJwT/eGJDLG2d8Jq2K3MPeafs5T5nYM=;
        b=EogcMaZAhCEo1wIQ51f8la57OqKB71wXegTyaiybm4ekVOe2lQ5xppNz0SMgus4Bw2
         97dxO3OCrkvmA2p/UA0BaFs6uyUu9AX/SCUrzl70q+X9/MmGK58KxtRSXkIjpUZukR/V
         uvD6MAfc6uGiMELEYaD0a4jWlaAKHhQ+AGZ4gvF0WUw1r9Tcx9DmjWN1aXIPtPqSy7vf
         kCAT3c3mbsK6Vc3Wo0dAUgyvHCx56Gknr+t5f8/7G1svb3ai8OxP/cudpEasVWdzvbxi
         cj1lJM3iCqu1een1fbl3kyAlRwF7G83cnZtgGepLyBtyGTzfFRDAmxln3OKgULXIS2fo
         KftQ==
X-Gm-Message-State: AOAM532bxZXgjU6UxUtoLlXs0iO/C9Yp43kNID1SvHXs3lqT21VLEfLD
	j2J1DhScbX+pTF0wSb/1nek+kfXIC5Y=
X-Google-Smtp-Source: ABdhPJx6ttFNkIn5dzIsgKI9wOkqyfRGwETpXC7M8/giqwZN9obtfIXDzSEvqWjAmhLcaCATHTdqBw==
X-Received: by 2002:a37:a5d8:: with SMTP id o207mr17799675qke.13.1620070125776;
        Mon, 03 May 2021 12:28:45 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 07/13] libxc: Include hwp_para in definitions
Date: Mon,  3 May 2021 15:28:04 -0400
Message-Id: <20210503192810.36084-8-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Expose the hwp_para fields through libxc.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/include/xenctrl.h | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 27cec1b93f..82dfa1613a 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1960,6 +1960,7 @@ int xc_smt_disable(xc_interface *xch);
  */
 typedef struct xen_userspace xc_userspace_t;
 typedef struct xen_ondemand xc_ondemand_t;
+typedef struct xen_hwp_para xc_hwp_para_t;
 
 struct xc_get_cpufreq_para {
     /* IN/OUT variable */
@@ -1987,6 +1988,7 @@ struct xc_get_cpufreq_para {
     union {
         xc_userspace_t userspace;
         xc_ondemand_t ondemand;
+        xc_hwp_para_t hwp_para;
     } u;
 
     int32_t turbo_enabled;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:29:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:29:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121825.229814 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFh-0006Np-JN; Mon, 03 May 2021 19:29:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121825.229814; Mon, 03 May 2021 19:29:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeFh-0006Na-FI; Mon, 03 May 2021 19:29:09 +0000
Received: by outflank-mailman (input) for mailman id 121825;
 Mon, 03 May 2021 19:29:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFf-0005i5-Jn
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:29:07 +0000
Received: from mail-qk1-x736.google.com (unknown [2607:f8b0:4864:20::736])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4ed6410-658a-4a0f-b3e0-88195cd685b6;
 Mon, 03 May 2021 19:28:49 +0000 (UTC)
Received: by mail-qk1-x736.google.com with SMTP id x8so6283459qkl.2
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:49 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4ed6410-658a-4a0f-b3e0-88195cd685b6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=gt85a9/jfQ7Ok17ZWY+qwgah9qHuJa+K9OUnjKLsOOc=;
        b=m22f79isZGNQAsYaUIL3/HfQVDpVWzxexpINlOO7hIcVSY8G1Z3YESSEokpdYnOQ6L
         ufFHbXjfy5TsQpgDkum9bIdbZ54wXP5ix2KuyWgd8ab1SeWYd/V+rCEPIlO9P864Mfjc
         gnw0Az+nem/YdygO/CAen/ToQa9H/d4S/1/F/oNLNqc2sA7COQOT0ay6JP/cujrcqiRA
         FDh16mVHQDxquhIGGmKtg8XpvN1SwdPqcwEy903e1B3LaNyrBR42sXw1uqqg53o1kFdH
         TP1+7ipZeIxSMpU6CWn5KMPEfLkbeyArvRhuvoaad0BGn5XwHju6x+Gn6xZc2KG89We1
         qGHQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=gt85a9/jfQ7Ok17ZWY+qwgah9qHuJa+K9OUnjKLsOOc=;
        b=Nfaf4FAs7rID2AMuKS7VPONwLlBKH6e6275urV1JW44vt0ZKvG2Fuld0tzbaTEHzqO
         X59gT2vdKOzjN7OHhckNwzLBRxOpCqVures7MYYaza0Ae5DUAG5bvhXqJe8gggTag+YR
         cfwG3awdan5/gJbktCt33sxpbzUCRtvSNqAl8GUSn9Angdd1Lk/hmdR4eGIKQQ/6LDdD
         9qTkmAhAr/m+kksUFi535Zgp8eSEXCrVWfs2WW17xYOjRpUmZa8vnFx2242HN/kpl1RS
         8sIEFpX16d6ZN41FMYEXk9hxLniJhuviwuTnfzz3KuEZjkp0aHT+Us5vPxlS8MrkOty+
         5Q8g==
X-Gm-Message-State: AOAM533Aqo4MHR1bIo6VI9huVOdiUIG3oWS5egRdlvMVINUcMJmVeIyX
	wdV4ykKss0j9Zq80g08bDQpBh4pC7HY=
X-Google-Smtp-Source: ABdhPJz/03LMfmKp0x6zG8POk2wzTrZRWriHbDwMe0oz2I8oEm/pX7v3mJQspxQxqeARe/iXRkXpjg==
X-Received: by 2002:a37:a683:: with SMTP id p125mr14770669qke.332.1620070128657;
        Mon, 03 May 2021 12:28:48 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 08/13] xenpm: Print HWP parameters
Date: Mon,  3 May 2021 15:28:05 -0400
Message-Id: <20210503192810.36084-9-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Print HWP-specific parameters.  Some are always present, but others
depend on hardware support.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/misc/xenpm.c | 71 ++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 71 insertions(+)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index 562bf939f9..9588dac991 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -708,6 +708,43 @@ void start_gather_func(int argc, char *argv[])
     pause();
 }
 
+static void calculate_hwp_activity_window(const xc_hwp_para_t *hwp,
+                                          unsigned int *activity_window,
+                                          const char **units)
+{
+    unsigned int mantissa = hwp->activity_window & 0x7f;
+    unsigned int exponent = (hwp->activity_window >> 7) & 0x7;
+    unsigned int multiplier = 1;
+
+    if ( hwp->activity_window == 0 )
+    {
+        *units = "hardware selected";
+        *activity_window = 0;
+
+        return;
+    }
+
+    if ( exponent >= 6 )
+    {
+        *units = "s";
+        exponent -= 6;
+    }
+    else if ( exponent >= 3 )
+    {
+        *units = "ms";
+        exponent -= 3;
+    }
+    else
+    {
+        *units = "us";
+    }
+
+    for ( unsigned int i = 0; i < exponent; i++ )
+        multiplier *= 10;
+
+    *activity_window = mantissa * multiplier;
+}
+
 /* print out parameters about cpu frequency */
 static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
 {
@@ -777,6 +814,40 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
                p_cpufreq->scaling_cur_freq);
     }
 
+    if ( strcmp(p_cpufreq->scaling_governor, "hwp-internal") == 0 )
+    {
+        const xc_hwp_para_t *hwp = &p_cpufreq->u.hwp_para;
+
+        printf("hwp variables        :\n");
+        printf("  hardware limits    : lowest [%u] most_efficient [%u]\n",
+               hwp->hw_lowest, hwp->hw_most_efficient);
+        printf("  hardware limits    : guaranteed [%u] highest [%u]\n",
+               hwp->hw_guaranteed, hwp->hw_highest);
+        printf("  configured limits  : min [%u] max [%u]\n",
+               hwp->minimum, hwp->maximum);
+
+        if ( hwp->hw_feature & XEN_SYSCTL_HWP_FEAT_ENERGY_PERF )
+        {
+            printf("  configured limits  : energy_perf [%u%s]\n",
+                   hwp->energy_perf,
+                   hwp->energy_perf ? "" : " hw autonomous");
+        }
+
+        if ( hwp->hw_feature & XEN_SYSCTL_HWP_FEAT_ACT_WINDOW )
+        {
+            unsigned int activity_window;
+            const char *units;
+
+            calculate_hwp_activity_window(hwp, &activity_window, &units);
+            printf("  configured limits  : activity_window [%u %s]\n",
+                   activity_window, units);
+        }
+
+        printf("  configured limits  : desired [%u%s]\n",
+               hwp->desired,
+               hwp->desired ? "" : " hw autonomous");
+    }
+
     printf("turbo mode           : %s\n",
            p_cpufreq->turbo_enabled ? "enabled" : "disabled or n/a");
     printf("\n");
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:35:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121844.229838 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLR-0007Wv-Jb; Mon, 03 May 2021 19:35:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121844.229838; Mon, 03 May 2021 19:35:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLR-0007Wo-GK; Mon, 03 May 2021 19:35:05 +0000
Received: by outflank-mailman (input) for mailman id 121844;
 Mon, 03 May 2021 19:35:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFu-0005i5-KF
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:29:22 +0000
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 67c84692-857f-406e-b7f5-ce41fbc9cc96;
 Mon, 03 May 2021 19:28:54 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id q127so6277958qkb.1
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:54 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67c84692-857f-406e-b7f5-ce41fbc9cc96
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=W0dWQDpwEXoVAFtJGqplAcn8R3hLDan97osHedktuzA=;
        b=jeAjy/+B/nRqPMGmVF+3m+72zaVTjLaZvFYZSePfzwkhZoOAiPKLpoeYT4Y7f8KNru
         IN8zHh5ajNrJHuLR8D9RfEz7Rb9Qv3hrAlwuXlfu9kj4iM6om55UeBgPlcPbehR7btBj
         FEoF+XgPnyMAhDwL7Cwx+JHYcj8th2ikK4EDPnsJI3y0qfBkm7oyOyAJ6Jh5zBsaML+1
         avV3KbPga57bSfIcmSu0a5y9BwGkLtwzZN1NgW3iZXKBPjABpprnwM6w3DCTewcQgI+U
         GnwxM+wsrY2jsnTmSfGAVi4ngq5nYKDqoR6sHQ7dRVecGh5wtL2TsnbeMBiksn4MOGPP
         SNiQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=W0dWQDpwEXoVAFtJGqplAcn8R3hLDan97osHedktuzA=;
        b=W7vmTRBLEq59hle7farg74MjR34dQLgcLsX+182+L13IFRoEzKmE9DTLgugww0XSqb
         Ll6jyNQRf5jZCqgYbpa+xK417IF63vSQeDdg/JPyWSehBJsHariUAiudZTJDoPJgrPQ2
         PIZxdTiqP7QfNV1uDdw9zU1TtlvT1llQxOWyVKbZm1C0J5WocLO8GJW3U0JJUvOosOJH
         U01YwjImPKVmU989ibfgNvVqWVSvf//sLBfIOpPfXw7t/QTjLDS3OmPIPle/ZKOLpBwO
         mIA09iPUSNMzgAyG1S64Jzehl7tAVRn5yWwM1AOrqjMP5IhTiuKXVPLWNdz4rvlb+Aht
         Cqqg==
X-Gm-Message-State: AOAM530VrBtfAghkJylPN6hZSN781xo6+KSsw0IULQ20PgrwL32TqkM+
	AZSovIfpL4kPI4sTiYow2Y4W1+P8TLI=
X-Google-Smtp-Source: ABdhPJxRgtng+6wCpftZbN46/DsbYWTqBEYMoYU4Bv7uuJjFVyruFKXM/P3Mof7Bhs5NjIskLYxNGQ==
X-Received: by 2002:ae9:c014:: with SMTP id u20mr21902294qkk.387.1620070133953;
        Mon, 03 May 2021 12:28:53 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 10/13] libxc: Add xc_set_cpufreq_hwp
Date: Mon,  3 May 2021 15:28:07 -0400
Message-Id: <20210503192810.36084-11-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add xc_set_cpufreq_hwp to allow calling xen_sysctl_pm_op
SET_CPUFREQ_HWP.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---
Am I allowed to do set_hwp = *set_hwp struct assignment?
---
 tools/include/xenctrl.h |  4 ++++
 tools/libs/ctrl/xc_pm.c | 18 ++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 82dfa1613a..0fd1e756cb 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1994,11 +1994,15 @@ struct xc_get_cpufreq_para {
     int32_t turbo_enabled;
 };
 
+typedef struct xen_set_hwp_para xc_set_hwp_para_t;
+
 int xc_get_cpufreq_para(xc_interface *xch, int cpuid,
                         struct xc_get_cpufreq_para *user_para);
 int xc_set_cpufreq_gov(xc_interface *xch, int cpuid, char *govname);
 int xc_set_cpufreq_para(xc_interface *xch, int cpuid,
                         int ctrl_type, int ctrl_value);
+int xc_set_cpufreq_hwp(xc_interface *xch, int cpuid,
+                       xc_set_hwp_para_t *set_hwp);
 int xc_get_cpufreq_avgfreq(xc_interface *xch, int cpuid, int *avg_freq);
 
 int xc_set_sched_opt_smt(xc_interface *xch, uint32_t value);
diff --git a/tools/libs/ctrl/xc_pm.c b/tools/libs/ctrl/xc_pm.c
index 76d7eb7f26..407a24d2aa 100644
--- a/tools/libs/ctrl/xc_pm.c
+++ b/tools/libs/ctrl/xc_pm.c
@@ -330,6 +330,24 @@ int xc_set_cpufreq_para(xc_interface *xch, int cpuid,
     return xc_sysctl(xch, &sysctl);
 }
 
+int xc_set_cpufreq_hwp(xc_interface *xch, int cpuid,
+                       xc_set_hwp_para_t *set_hwp)
+{
+    DECLARE_SYSCTL;
+
+    if ( !xch )
+    {
+        errno = EINVAL;
+        return -1;
+    }
+    sysctl.cmd = XEN_SYSCTL_pm_op;
+    sysctl.u.pm_op.cmd = SET_CPUFREQ_HWP;
+    sysctl.u.pm_op.cpuid = cpuid;
+    sysctl.u.pm_op.u.set_hwp = *set_hwp;
+
+    return xc_sysctl(xch, &sysctl);
+}
+
 int xc_get_cpufreq_avgfreq(xc_interface *xch, int cpuid, int *avg_freq)
 {
     int ret = 0;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:35:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121843.229826 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLO-0007Up-7c; Mon, 03 May 2021 19:35:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121843.229826; Mon, 03 May 2021 19:35:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLO-0007Ui-4P; Mon, 03 May 2021 19:35:02 +0000
Received: by outflank-mailman (input) for mailman id 121843;
 Mon, 03 May 2021 19:35:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeG4-0005i5-KY
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:29:32 +0000
Received: from mail-qv1-xf2f.google.com (unknown [2607:f8b0:4864:20::f2f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8fdb1852-9169-42f7-9597-236b74c2a607;
 Mon, 03 May 2021 19:28:59 +0000 (UTC)
Received: by mail-qv1-xf2f.google.com with SMTP id jm10so3210244qvb.5
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:59 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.58
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8fdb1852-9169-42f7-9597-236b74c2a607
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=SBP/WUCHYsHwmvFvXna4dIQQTPCdqZeigE28a+rXW1E=;
        b=O6BVnc4qd3xwHDBJXfoQIucfK9gnTGxsJRVTZbya52yOkKK+3zh0/rVotSg+Sl9Hpp
         Tql0xvakVp3Q29gD1H39GXvBQzEipXCd/8vKYbYfC9b8eILOcHb72QlNF9Nh3PIjmc/z
         FWZHtDC2IixKzDJ5IbkNZVIG6TZrQbFfMN7UrNco4svOIsNz8xNIj2DD7zD40z/HDIKK
         cJqcv7PvqopYvl3GimbTVrtMB4SIxB/hPMryBcsl5ZKwcWW2Aqd325ZLdVUXk+KV5te0
         1pTIAso0lQiqiDltBky+pPfwXFXPqRtOsrRAtPf0vDlM+ZN1bTIq96pj+CroXFhOVzDe
         fWcg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=SBP/WUCHYsHwmvFvXna4dIQQTPCdqZeigE28a+rXW1E=;
        b=a58Arko+9KvYrQgPLfaRTRZO8xjMkq8ycF42IjFHPmW5egk3cZcp7RsuwCGAqAdovP
         kzuUxieFtmiIe8ALXZtut24/emQdDmxMf7ZJmu/1532FXBPOcy/xKtrahwoyCfA8maBh
         1NIDegU1R3e+a+FdUe6p+fttEKOrboGaQo0rPpbqucuu+HrhaBpjGSJ7BzUaKCUilkcw
         x6ABd5YXxpyW616FPZ3+ZsB5QG9wT8JPOIQt3WBNklT5mUFPE8hN45XbDghOr13kTPPp
         E72XsmlTwt9fVQLGRCQXXJGsadSvo5rUQ9sVnEwYaEKkbKIUNnVoOa6kDIZ7XbjkIBHn
         aICg==
X-Gm-Message-State: AOAM531rnk1IDXK6CIn8Bw0IaSuDYbJMbNSjXQbMpgwB4eXdEWIOssRI
	h9sh9ixh3/zf3vNfjoqMZKSd450OGPM=
X-Google-Smtp-Source: ABdhPJw9qRujRfcqvQQOj/Nh1hz+fW3OuLKeqluU/Upmh5MZLqNoVIeI+WS4P/Ajl4B+ngqLCxMG+A==
X-Received: by 2002:a0c:c488:: with SMTP id u8mr21113740qvi.47.1620070139057;
        Mon, 03 May 2021 12:28:59 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 12/13] xenpm: Add set-cpufreq-hwp subcommand
Date: Mon,  3 May 2021 15:28:09 -0400
Message-Id: <20210503192810.36084-13-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

set-cpufreq-hwp allows setting the Hardware P-State (HWP) parameters.

It can be run on all CPUs or on a single one.  There are presets for
balance, powersave & performance, which can be further tweaked by
param:val arguments as explained in the usage description.

Parameter names are matched on only their first three characters, to
shorten typing.

Some options are hardware dependent, and ranges can be found in
get-cpufreq-para.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/misc/xenpm.c | 240 +++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 240 insertions(+)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index a686f8f46e..d3bcaf3b58 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -67,6 +67,25 @@ void show_help(void)
             " set-max-cstate        <num>|'unlimited' [<num2>|'unlimited']\n"
             "                                     set the C-State limitation (<num> >= 0) and\n"
             "                                     optionally the C-sub-state limitation (<num2> >= 0)\n"
+            " set-cpufreq-hwp       [cpuid] [balance|performance|powersave] <param:val>*\n"
+            "                                     set Hardware P-State (HWP) parameters\n"
+            "                                     optionally a preset of one of\n"
+            "                                       balance|performance|powersave\n"
+            "                                     an optional list of param:val arguments\n"
+            "                                       minimum:N  hw_lowest ... hw_highest\n"
+            "                                       maximum:N  hw_lowest ... hw_highest\n"
+            "                                       desired:N  hw_lowest ... hw_highest\n"
+            "                                           Set explicit performance target.\n"
+            "                                           non-zero disables auto-HWP mode.\n"
+            "                                       energy_perf:0-255 (or 0-15)\n"
+            "                                                   energy/performance hint\n"
+            "                                                   lower favors performance\n"
+            "                                                   higher favors powersave\n"
+            "                                                   127 (or 7) balance\n"
+            "                                       act_window:N{,m,u}s range 0us-1270s\n"
+            "                                           window for internal calculations.\n"
+            "                                           0 lets the hardware decide.\n"
+            "                                     get-cpufreq-para returns hw_lowest/highest.\n"
             " start [seconds]                     start collect Cx/Px statistics,\n"
             "                                     output after CTRL-C or SIGINT or several seconds.\n"
             " enable-turbo-mode     [cpuid]       enable Turbo Mode for processors that support it.\n"
@@ -1309,6 +1328,226 @@ void disable_turbo_mode(int argc, char *argv[])
                 errno, strerror(errno));
 }
 
+/*
+ * Parse activity_window:NNN{us,ms,s} and validate range.
+ *
+ * Activity window is a 7bit mantissa (0-127) with a 3bit exponent (0-7) base
+ * 10 in microseconds.  So the range is 1 microsecond to 1270 seconds.  A value
+ * of 0 lets the hardware autonomously select the window.
+ *
+ * Return 0 on success
+ *       -1 on error
+ *        1 when not an activity_window argument, i.e. try parsing as another parameter
+ */
+static int parse_activity_window(xc_set_hwp_para_t *set_hwp, char *p)
+{
+    char *param = NULL, *val = NULL, *suffix = NULL;
+    unsigned int u;
+    unsigned int exponent = 0;
+    unsigned int multiplier = 1;
+    int ret;
+
+    ret = sscanf(p, "%m[a-z_A-Z]:%ms", &param, &val);
+    if ( ret != 2 )
+    {
+        return -1;
+    }
+
+    if ( strncasecmp(param, "act", 3) != 0 )
+    {
+        ret = 1;
+
+        goto out;
+    }
+
+    free(param);
+    param = NULL;
+
+    ret = sscanf(val, "%u%ms", &u, &suffix);
+    if ( ret != 1 && ret != 2 )
+    {
+        fprintf(stderr, "invalid activity window: %s\n", val);
+
+        ret = -1;
+
+        goto out;
+    }
+
+    if ( ret == 2 && suffix )
+    {
+        if ( strcasecmp(suffix, "s") == 0 )
+        {
+            multiplier = 1000 * 1000;
+            exponent = 6;
+        }
+        else if ( strcasecmp(suffix, "ms") == 0 )
+        {
+            multiplier = 1000;
+            exponent = 3;
+        }
+        else if ( strcasecmp(suffix, "us") == 0 )
+        {
+            multiplier = 1;
+            exponent = 0;
+        }
+        else
+        {
+            fprintf(stderr, "invalid activity window units: %s\n", suffix);
+
+            ret = -1;
+            goto out;
+        }
+    }
+
+    if ( u > 1270 * 1000 * 1000 / multiplier )
+    {
+        fprintf(stderr, "activity window %s too large\n", val);
+
+        ret = -1;
+        goto out;
+    }
+
+    /* looking for 7 bits of mantissa and 3 bits of exponent */
+    while ( u > 127 )
+    {
+        u /= 10;
+        exponent += 1;
+    }
+
+    set_hwp->activity_window = ( exponent & 0x7 ) << 7 | ( u & 0x7f );
+    set_hwp->set_params |= XEN_SYSCTL_HWP_SET_ACT_WINDOW;
+
+    ret = 0;
+
+ out:
+    free(suffix);
+    free(param);
+    free(val);
+
+    return ret;
+}
+
+static int parse_hwp_opts(xc_set_hwp_para_t *set_hwp, int *cpuid,
+                          int argc, char *argv[])
+{
+    int i = 0;
+
+    if ( argc < 1 )
+        return -1;
+
+    if ( parse_cpuid_non_fatal(argv[i], cpuid) == 0 )
+    {
+        i++;
+    }
+
+    if ( i == argc )
+        return -1;
+
+    if ( strcasecmp(argv[i], "powersave") == 0 )
+    {
+        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE;
+        i++;
+    }
+    else if ( strcasecmp(argv[i], "performance") == 0 )
+    {
+        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE;
+        i++;
+    }
+    else if ( strcasecmp(argv[i], "balance") == 0 )
+    {
+        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_BALANCE;
+        i++;
+    }
+
+    for ( ; i < argc; i++)
+    {
+        unsigned int val;
+        char *param;
+        int ret;
+
+        ret = parse_activity_window(set_hwp, argv[i]);
+        switch ( ret )
+        {
+        case -1:
+            return -1;
+        case 0:
+            /* consumed as activity_window */
+            continue;
+        case 1:
+            /* try other parsing */
+            break;
+        }
+
+        /* sscanf can't split on ':' with "%ms:%u" since %ms would consume the ':' */
+        ret = sscanf(argv[i], "%m[a-zA-Z_]:%u", &param, &val);
+        if ( ret != 2 )
+        {
+            fprintf(stderr, "%s is an invalid hwp parameter.\n", argv[i]);
+            return -1;
+        }
+
+        if ( val > 255 )
+        {
+            fprintf(stderr, "%s value %u is out of range.\n", param, val);
+            return -1;
+        }
+
+        if ( strncasecmp(param, "min", 3) == 0 )
+        {
+            set_hwp->minimum = val;
+            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_MINIMUM;
+        }
+        else if ( strncasecmp(param, "max", 3) == 0 )
+        {
+            set_hwp->maximum = val;
+            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_MAXIMUM;
+        }
+        else if ( strncasecmp(param, "des", 3) == 0 )
+        {
+            set_hwp->desired = val;
+            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_DESIRED;
+        }
+        else if ( strncasecmp(param, "ene", 3) == 0 )
+        {
+            set_hwp->energy_perf = val;
+            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_ENERGY_PERF;
+        }
+        else
+        {
+            fprintf(stderr, "%s is an invalid parameter.\n", param);
+            return -1;
+        }
+
+        free(param);
+    }
+
+    return 0;
+}
+
+static void hwp_set_func(int argc, char *argv[])
+{
+    xc_set_hwp_para_t set_hwp = {};
+    int cpuid = -1;
+    int i = 0;
+
+    if ( parse_hwp_opts(&set_hwp, &cpuid, argc, argv) )
+    {
+        fprintf(stderr, "Missing, excess, or invalid argument(s)\n");
+        exit(EINVAL);
+    }
+
+    if ( cpuid != -1 )
+    {
+        i = cpuid;
+        max_cpu_nr = i + 1;
+    }
+
+    for ( ; i < max_cpu_nr; i++ )
+        if ( xc_set_cpufreq_hwp(xc_handle, i, &set_hwp) )
+            fprintf(stderr, "[CPU%d] failed to set hwp params (%d - %s)\n",
+                    i, errno, strerror(errno));
+}
+
 struct {
     const char *name;
     void (*function)(int argc, char *argv[]);
@@ -1319,6 +1558,7 @@ struct {
     { "get-cpufreq-average", cpufreq_func },
     { "start", start_gather_func },
     { "get-cpufreq-para", cpufreq_para_func },
+    { "set-cpufreq-hwp", hwp_set_func },
     { "set-scaling-maxfreq", scaling_max_freq_func },
     { "set-scaling-minfreq", scaling_min_freq_func },
     { "set-scaling-governor", scaling_governor_func },
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:35:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:35:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121847.229850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLU-0007a9-U2; Mon, 03 May 2021 19:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121847.229850; Mon, 03 May 2021 19:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLU-0007a2-QJ; Mon, 03 May 2021 19:35:08 +0000
Received: by outflank-mailman (input) for mailman id 121847;
 Mon, 03 May 2021 19:35:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFp-0005i5-KA
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:29:17 +0000
Received: from mail-qv1-xf35.google.com (unknown [2607:f8b0:4864:20::f35])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7961687f-6128-4146-ae23-04d37fbfe793;
 Mon, 03 May 2021 19:28:52 +0000 (UTC)
Received: by mail-qv1-xf35.google.com with SMTP id h1so2412836qvv.10
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:52 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7961687f-6128-4146-ae23-04d37fbfe793
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=SA2GZ9hF2Qvi9qam8q5O9Sh0EYcwTdRBSgnWrRejxiQ=;
        b=K7QccnrtgPJmjHYg6sNoXTWMoj+Vc0zwioEjc6/VfQ+P0fQm0cm07QvcejGvl0LYkp
         bjiUU8UfMyEa2DA+5i2PnVXzdLx0eH0sfJhMkV7ivzXalX07OeohPWBpbK66CkPG1qKs
         uak6gkfwR3PBhDcfrkQ1ZwJHMJ0vjnkN9rPc3P4dHTar58C+p+kfMGtAdkN7aevBVBl0
         G3CaB18hkpLlvhi1jUxxOXu5GDRk3c/BtzZSmfpSBOIcl189O6xwIoLOmDW0ga3u92tb
         3z0OsWME0xR6HW2gcMLZGtMvXsvnz6MI3Ub6yBnR4rRwncBZFkQHN/7Gmro27zlmJAiL
         /qRA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=SA2GZ9hF2Qvi9qam8q5O9Sh0EYcwTdRBSgnWrRejxiQ=;
        b=UKJTV6rSV43TJ4DuLtWHCyVKZ2FEdIEh13BSkHe9V9M1oGfxqeXEFmHiLtpTXTCkfw
         qP7VUx+uD8cjSznOjq41BvcarV2GQB/ZzXlroaNEy7nvJLAQ/EY37FXZ5ZP2s29v72oi
         b4BhbDx6FiPxqvBdNtGFCzPWPnyJ39bjfnZg5V2Q691IhDsQPEHUWdJuZVrtbMF6BM6u
         QNj69LLNJ5An45pNJbGa0oxDx43nCPuDK5Nqq7QlnmgNIk9kM0TGsj2uu0bEsgxiZb4z
         Lb9wg8zQgxSgUsq0qaWifHvFTzAUJwU1zGYXgaIJC7TnH+eC8ixZZi4SMqFXvKFoqMju
         QK+Q==
X-Gm-Message-State: AOAM532Z+/mgz6mxzkvjMFLrv2PFRD0COGoOfb62I4MpWtoufWXV/Fy5
	no0Dn+BEJLsbxYE74aiQOOET1hMpMsU=
X-Google-Smtp-Source: ABdhPJz+bMqGsKQPk/vjNNXpvY12fF1VAf59hcwKRvYkQrYHSjWtbGiQ8W9IwjoiBfMqfMcklYxxug==
X-Received: by 2002:ad4:59c7:: with SMTP id el7mr21549307qvb.26.1620070131409;
        Mon, 03 May 2021 12:28:51 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 09/13] xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
Date: Mon,  3 May 2021 15:28:06 -0400
Message-Id: <20210503192810.36084-10-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add SET_CPUFREQ_HWP xen_sysctl_pm_op to set HWP parameters.  The sysctl
supports setting multiple values simultaneously as indicated by the
set_params bits.  This allows atomically applying new HWP configuration
via a single wrmsr.

XEN_SYSCTL_HWP_SET_PRESET_BALANCE/PERFORMANCE/POWERSAVE provide three
common presets.  Setting them depends on hardware limits which the
hypervisor is already caching, so using them avoids a hypercall to query
the limits (hw_lowest/highest) only to set those same values.  The code
is organized to allow a preset to be refined with additional explicit
parameters if desired.

"most_efficient" and "guaranteed" could be additional presets in the
future, but they are not added now.  Those levels can change at runtime,
and we don't yet have code in place to monitor and update for those events.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/arch/x86/acpi/cpufreq/hwp.c    | 114 +++++++++++++++++++++++++++++
 xen/drivers/acpi/pmstat.c          |  24 ++++++
 xen/include/acpi/cpufreq/cpufreq.h |   2 +
 xen/include/public/sysctl.h        |  32 ++++++++
 4 files changed, 172 insertions(+)

diff --git a/xen/arch/x86/acpi/cpufreq/hwp.c b/xen/arch/x86/acpi/cpufreq/hwp.c
index 92222d6d85..0fd70d76a8 100644
--- a/xen/arch/x86/acpi/cpufreq/hwp.c
+++ b/xen/arch/x86/acpi/cpufreq/hwp.c
@@ -547,6 +547,120 @@ int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para)
     return 0;
 }
 
+int set_hwp_para(struct cpufreq_policy *policy,
+                 struct xen_set_hwp_para *set_hwp)
+{
+    unsigned int cpu = policy->cpu;
+    struct hwp_drv_data *data = hwp_drv_data[cpu];
+
+    if ( data == NULL )
+        return -EINVAL;
+
+    /* Validate all parameters first */
+    if ( set_hwp->set_params & ~XEN_SYSCTL_HWP_SET_PARAM_MASK )
+    {
+        hwp_err("Invalid bits in hwp set_params %u\n",
+                set_hwp->set_params);
+
+        return -EINVAL;
+    }
+
+    if ( set_hwp->activity_window & ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK )
+    {
+        hwp_err("Invalid bits in activity window %u\n",
+                set_hwp->activity_window);
+
+        return -EINVAL;
+    }
+
+    if ( !feature_hwp_energy_perf &&
+         set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF &&
+         set_hwp->energy_perf > 0xf )
+    {
+        hwp_err("energy_perf %u out of range for IA32_ENERGY_PERF_BIAS\n",
+                set_hwp->energy_perf);
+
+        return -EINVAL;
+    }
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED &&
+         set_hwp->desired != 0 &&
+         ( set_hwp->desired < data->hw_lowest ||
+           set_hwp->desired > data->hw_highest ) )
+    {
+        hwp_err("hwp desired %u is out of range (%u ... %u)\n",
+                set_hwp->desired, data->hw_lowest, data->hw_highest);
+
+        return -EINVAL;
+    }
+
+    /*
+     * minimum & maximum are not validated as hardware doesn't seem to care
+     * and the SDM says CPUs will clip internally.
+     */
+
+    /* Apply presets */
+    switch ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_PRESET_MASK )
+    {
+    case XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE:
+        data->minimum = data->hw_lowest;
+        data->maximum = data->hw_lowest;
+        data->activity_window = 0;
+        if ( feature_hwp_energy_perf )
+            data->energy_perf = 0xff;
+        else
+            data->energy_perf = 0xf;
+        data->desired = 0;
+        break;
+    case XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE:
+        data->minimum = data->hw_highest;
+        data->maximum = data->hw_highest;
+        data->activity_window = 0;
+        data->energy_perf = 0;
+        data->desired = 0;
+        break;
+    case XEN_SYSCTL_HWP_SET_PRESET_BALANCE:
+        data->minimum = data->hw_lowest;
+        data->maximum = data->hw_highest;
+        data->activity_window = 0;
+        /* 0x80 (HWP) and 0x7 (EPB) are the balanced midpoints */
+        if ( feature_hwp_energy_perf )
+            data->energy_perf = 0x80;
+        else
+            data->energy_perf = 0x7;
+        data->desired = 0;
+        break;
+    case XEN_SYSCTL_HWP_SET_PRESET_NONE:
+        break;
+    default:
+        printk("HWP: Invalid preset value: %u\n",
+               set_hwp->set_params & XEN_SYSCTL_HWP_SET_PRESET_MASK);
+
+        return -EINVAL;
+    }
+
+    /* Further customize presets if needed */
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_MINIMUM )
+        data->minimum = set_hwp->minimum;
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_MAXIMUM )
+        data->maximum = set_hwp->maximum;
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF )
+        data->energy_perf = set_hwp->energy_perf;
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED )
+        data->desired = set_hwp->desired;
+
+    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_ACT_WINDOW )
+        data->activity_window = set_hwp->activity_window &
+                                XEN_SYSCTL_HWP_ACT_WINDOW_MASK;
+
+    hwp_cpufreq_target(policy, 0, 0);
+
+    return 0;
+}
+
 int hwp_register_driver(void)
 {
     int ret;
diff --git a/xen/drivers/acpi/pmstat.c b/xen/drivers/acpi/pmstat.c
index 3e35c42949..016b0445ec 100644
--- a/xen/drivers/acpi/pmstat.c
+++ b/xen/drivers/acpi/pmstat.c
@@ -318,6 +318,24 @@ static int set_cpufreq_gov(struct xen_sysctl_pm_op *op)
     return __cpufreq_set_policy(old_policy, &new_policy);
 }
 
+static int set_cpufreq_hwp(struct xen_sysctl_pm_op *op)
+{
+    struct cpufreq_policy *policy;
+
+    if ( !cpufreq_governor_internal )
+        return -EINVAL;
+
+    policy = per_cpu(cpufreq_cpu_policy, op->cpuid);
+
+    if ( !policy || !policy->governor )
+        return -EINVAL;
+
+    if ( strncasecmp(policy->governor->name, "hwp-internal", CPUFREQ_NAME_LEN) )
+        return -EINVAL;
+
+    return set_hwp_para(policy, &op->u.set_hwp);
+}
+
 static int set_cpufreq_para(struct xen_sysctl_pm_op *op)
 {
     int ret = 0;
@@ -465,6 +483,12 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
         break;
     }
 
+    case SET_CPUFREQ_HWP:
+    {
+        ret = set_cpufreq_hwp(op);
+        break;
+    }
+
     case SET_CPUFREQ_PARA:
     {
         ret = set_cpufreq_para(op);
diff --git a/xen/include/acpi/cpufreq/cpufreq.h b/xen/include/acpi/cpufreq/cpufreq.h
index 42146ca2cf..7ff7d0d4bb 100644
--- a/xen/include/acpi/cpufreq/cpufreq.h
+++ b/xen/include/acpi/cpufreq/cpufreq.h
@@ -248,5 +248,7 @@ void cpufreq_dbs_timer_resume(void);
 
 /********************** hwp hypercall helper *************************/
 int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para);
+int set_hwp_para(struct cpufreq_policy *policy,
+                 struct xen_set_hwp_para *set_hwp);
 
 #endif /* __XEN_CPUFREQ_PM_H__ */
diff --git a/xen/include/public/sysctl.h b/xen/include/public/sysctl.h
index 1a6c6397ea..3f18a3d522 100644
--- a/xen/include/public/sysctl.h
+++ b/xen/include/public/sysctl.h
@@ -318,6 +318,36 @@ struct xen_hwp_para {
     uint8_t energy_perf;
 };
 
+/* Set multiple values simultaneously when the corresponding set_params bit is set. */
+struct xen_set_hwp_para {
+    uint16_t set_params; /* bitflags for valid values */
+#define XEN_SYSCTL_HWP_SET_DESIRED              (1U << 0)
+#define XEN_SYSCTL_HWP_SET_ENERGY_PERF          (1U << 1)
+#define XEN_SYSCTL_HWP_SET_ACT_WINDOW           (1U << 2)
+#define XEN_SYSCTL_HWP_SET_MINIMUM              (1U << 3)
+#define XEN_SYSCTL_HWP_SET_MAXIMUM              (1U << 4)
+#define XEN_SYSCTL_HWP_SET_PRESET_MASK          (0xf000)
+#define XEN_SYSCTL_HWP_SET_PRESET_NONE          (0x0000)
+#define XEN_SYSCTL_HWP_SET_PRESET_BALANCE       (0x1000)
+#define XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE     (0x2000)
+#define XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE   (0x3000)
+#define XEN_SYSCTL_HWP_SET_PARAM_MASK ((uint16_t)( \
+                                  XEN_SYSCTL_HWP_SET_PRESET_MASK | \
+                                  XEN_SYSCTL_HWP_SET_DESIRED     | \
+                                  XEN_SYSCTL_HWP_SET_ENERGY_PERF | \
+                                  XEN_SYSCTL_HWP_SET_ACT_WINDOW  | \
+                                  XEN_SYSCTL_HWP_SET_MINIMUM     | \
+                                  XEN_SYSCTL_HWP_SET_MAXIMUM     ))
+
+    uint16_t activity_window; /* 7bit mantissa and 3bit exponent */
+#define XEN_SYSCTL_HWP_ACT_WINDOW_MASK          (0x03ff)
+    uint8_t minimum;
+    uint8_t maximum;
+    uint8_t desired;
+    uint8_t energy_perf; /* 0-255 or 0-15 depending on HW support */
+};
+
+
 /*
  * cpufreq para name of this structure named
  * same as sysfs file name of native linux
@@ -379,6 +409,7 @@ struct xen_sysctl_pm_op {
     #define SET_CPUFREQ_GOV            (CPUFREQ_PARA | 0x02)
     #define SET_CPUFREQ_PARA           (CPUFREQ_PARA | 0x03)
     #define GET_CPUFREQ_AVGFREQ        (CPUFREQ_PARA | 0x04)
+    #define SET_CPUFREQ_HWP            (CPUFREQ_PARA | 0x05)
 
     /* set/reset scheduler power saving option */
     #define XEN_SYSCTL_pm_op_set_sched_opt_smt    0x21
@@ -405,6 +436,7 @@ struct xen_sysctl_pm_op {
         struct xen_get_cpufreq_para get_para;
         struct xen_set_cpufreq_gov  set_gov;
         struct xen_set_cpufreq_para set_para;
+        struct xen_set_hwp_para     set_hwp;
         uint64_aligned_t get_avgfreq;
         uint32_t                    set_sched_opt_smt;
 #define XEN_SYSCTL_CX_UNLIMITED 0xffffffff
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:35:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:35:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121849.229862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLi-0007jK-7G; Mon, 03 May 2021 19:35:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121849.229862; Mon, 03 May 2021 19:35:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLi-0007jD-43; Mon, 03 May 2021 19:35:22 +0000
Received: by outflank-mailman (input) for mailman id 121849;
 Mon, 03 May 2021 19:35:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeFz-0005i5-KQ
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:29:27 +0000
Received: from mail-qk1-x72a.google.com (unknown [2607:f8b0:4864:20::72a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d36e909b-5832-42f6-8902-22621205925b;
 Mon, 03 May 2021 19:28:57 +0000 (UTC)
Received: by mail-qk1-x72a.google.com with SMTP id o27so6263552qkj.9
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:28:57 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.28.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:28:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d36e909b-5832-42f6-8902-22621205925b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=q7njXXxu6B3Y8DrhUPNrHywzinnQvdu6zM4xzDyyuko=;
        b=nqheekHalLG6tkckUcUjkQBwapbD9B61k9PGZDqmegukYvyYBQGZTl71KVWEqyJePK
         3dl4xE2E2qC+DEFfu+QxxTpPflf+l9Avm16huLJLACvu4i+x9mMpVhFqDIV3WUqONFnz
         xQI6t3BqYhtc6DVOB0u6/vgODsRBEGBjctfvAn3BtNQ4uzUQXrvEOBjyJr9MiVSGCGXc
         eHnsfIxSr9RShifWy3LXVHblEJ2zGvjJ3LCyWaU89jRUnRjda4W2LUJKQ2abbhuRDOWk
         G7liIE5f0+Lm4IbMTsgNExRr5Radb2M4LeF2XxnsfF7cisodMEJO+Mb4RjQ6UwA0QRT8
         hc7w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=q7njXXxu6B3Y8DrhUPNrHywzinnQvdu6zM4xzDyyuko=;
        b=g8g5iB+/xCCQO8HLNxli6Bk3+c0HwAfZ+dOLFxKXoe5AJSJT4R6conY3z2K8NuGO0P
         9IVomnZKRZtkBkkCxG6g5rwcp4ajehpI7reWZQTYkRvW7rMZvcBOuhp/P2NRlJXcBogg
         Mw3CNwc8BrHO9kb9lEXBOL68ttKD0OWZ7NWu7N5rniKmwlgFRu73S/6lELIEeth+MUDY
         JUcf1bdbUxTA354/hrrBiAzilx2vEGqPyN3yW8w2Oe9yTiA17A1dUoMEq12HmVM5URX3
         JKjVfWggllKh9eWI5bJKLTb+x/S0AcjWV+gLN5X0YcKgHjghYqt//PzMyUW8nqjZh9Aj
         DZtQ==
X-Gm-Message-State: AOAM532jnqk7C4t+uMPFjdLsPjRtxihGgmY6X91qmLy1QN3T+xtM5t2R
	ZrfWxsKfYgjQ8h+vzYgpB5eYY2XErv4=
X-Google-Smtp-Source: ABdhPJzYLBRfeN46ykpr3WELiAXepylc6nrMXcD8iVLEyQ3AOfKghnhhLA1pHjHVDIRAhiqWl+Hkyw==
X-Received: by 2002:a05:620a:1036:: with SMTP id a22mr8961172qkk.186.1620070136565;
        Mon, 03 May 2021 12:28:56 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 11/13] xenpm: Factor out a non-fatal cpuid_parse variant
Date: Mon,  3 May 2021 15:28:08 -0400
Message-Id: <20210503192810.36084-12-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Allow cpuid_parse to be re-used without terminating xenpm.  HWP
will re-use it to optionally parse a CPU id.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/misc/xenpm.c | 19 ++++++++++++++-----
 1 file changed, 14 insertions(+), 5 deletions(-)

diff --git a/tools/misc/xenpm.c b/tools/misc/xenpm.c
index 9588dac991..a686f8f46e 100644
--- a/tools/misc/xenpm.c
+++ b/tools/misc/xenpm.c
@@ -79,17 +79,26 @@ void help_func(int argc, char *argv[])
     show_help();
 }
 
-static void parse_cpuid(const char *arg, int *cpuid)
+static int parse_cpuid_non_fatal(const char *arg, int *cpuid)
 {
     if ( sscanf(arg, "%d", cpuid) != 1 || *cpuid < 0 )
     {
         if ( strcasecmp(arg, "all") )
-        {
-            fprintf(stderr, "Invalid CPU identifier: '%s'\n", arg);
-            exit(EINVAL);
-        }
+            return -1;
+
         *cpuid = -1;
     }
+
+    return 0;
+}
+
+static void parse_cpuid(const char *arg, int *cpuid)
+{
+    if ( parse_cpuid_non_fatal(arg, cpuid) )
+    {
+        fprintf(stderr, "Invalid CPU identifier: '%s'\n", arg);
+        exit(EINVAL);
+    }
 }
 
 static void parse_cpuid_and_int(int argc, char *argv[],
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:35:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:35:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121851.229874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLk-0007ls-IK; Mon, 03 May 2021 19:35:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121851.229874; Mon, 03 May 2021 19:35:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeLk-0007lj-Dx; Mon, 03 May 2021 19:35:24 +0000
Received: by outflank-mailman (input) for mailman id 121851;
 Mon, 03 May 2021 19:35:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wh1Q=J6=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldeG9-0005i5-Kh
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:29:37 +0000
Received: from mail-qk1-x734.google.com (unknown [2607:f8b0:4864:20::734])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4c8daf29-0926-4ea5-aa7a-809e84392228;
 Mon, 03 May 2021 19:29:02 +0000 (UTC)
Received: by mail-qk1-x734.google.com with SMTP id a2so6255843qkh.11
 for <xen-devel@lists.xenproject.org>; Mon, 03 May 2021 12:29:02 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:8710:5560:a711:776f])
 by smtp.gmail.com with ESMTPSA id
 g18sm9225209qke.21.2021.05.03.12.29.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 03 May 2021 12:29:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4c8daf29-0926-4ea5-aa7a-809e84392228
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=erQS8MRQTBPyaLr9w4i+Rj4rOpKDbVN/1THD8knRDvQ=;
        b=dQKe7cyABPeD48VAX3db6u/qUzVypeOxvLgjMiBKfIpOVibjTxeqgre/U83sc9GrFc
         AR7pYs/BDNWM50OPm82rfAKqYpRBektJDSp7cey9NoIPTLDg3AS3sFvrhnJpuI7rNUhU
         TKQNAF8+A4cvjClJ608AzTjs9pu5WbLbYOEQTbXw2J52ePciHcTLLVjdVKjCqgC74vf1
         gpxF8JGBCUy1zAHcsz9eLjL0fX40KyAjbtbbNEF2qH/W4X/q2CXhrqWLE0odGNBo4e5n
         pb1aEtdDQfHr4o49IkPhCdLf79/JTUy5PspEGfEK9D/qpzc2Oj63iDzYnQSfHvRSElyi
         EX3Q==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=erQS8MRQTBPyaLr9w4i+Rj4rOpKDbVN/1THD8knRDvQ=;
        b=BVGwDR1WEWGsuHzSOHVIxu+ovVQa1+LIlSSzSrWMYfV+vP7nQyhSdnNIPtSgPyogk/
         wTtO3M65pjrwXV/mZ3vlOv5AE8eah3VFAByzGD7LCHVfjgonnnR/EM9D/UOEzLKE4IS3
         heqiccjbhC5Z62W+dLcsAMiW3Ls0h1B6hcAKwpLRNH1vZng+q4cUx97RPaJY7kGOtbnY
         Zy40hSvQ45NQJaUqBhrKwhQyFaBOL17t45Lr+cm3SYpBHZY0KxHqZ54aaIWuXL9hKgBI
         w3C/OURAFAMvFjL77x+0Yrd4KwJylNrqYY+9vEr3zq6T/hEFg5+XbS6KY1rijQzTKBUx
         SqCg==
X-Gm-Message-State: AOAM532YLWHQZmcdouqPmXkV8svYdiOEZLaFqZstaLq99W1ZdrdLBFTD
	xpkmsFt5twzs5wD39eF1wsLxTAXBxNE=
X-Google-Smtp-Source: ABdhPJyvhYdeVdpTZBj+YymLg3IRFM/mMFSZg3WdY7wyW3IoyBL6i7mCWd0IT67+lDvZ9fiTkPtjZw==
X-Received: by 2002:a37:8443:: with SMTP id g64mr21293992qkd.185.1620070141508;
        Mon, 03 May 2021 12:29:01 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Community Manager <community.manager@xenproject.org>
Subject: [PATCH 13/13] CHANGELOG: Add Intel HWP entry
Date: Mon,  3 May 2021 15:28:10 -0400
Message-Id: <20210503192810.36084-14-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 CHANGELOG.md | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0106fccec1..bbca67bc0b 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -5,6 +5,8 @@ Notable changes to Xen will be documented in this file.
 The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 
 ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
+### Added / support upgraded
+ - Intel Hardware P-States (HWP) cpufreq driver
 
 ## [4.15.0 UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.15.0) - TBD
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Mon May 03 19:41:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 19:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121872.229886 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeRw-0000Rs-9s; Mon, 03 May 2021 19:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121872.229886; Mon, 03 May 2021 19:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldeRw-0000Rl-5a; Mon, 03 May 2021 19:41:48 +0000
Received: by outflank-mailman (input) for mailman id 121872;
 Mon, 03 May 2021 19:41:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D+pa=J6=linutronix.de=tglx@srs-us1.protection.inumbo.net>)
 id 1ldeRv-0000Rg-1z
 for xen-devel@lists.xenproject.org; Mon, 03 May 2021 19:41:47 +0000
Received: from galois.linutronix.de (unknown [2a0a:51c0:0:12e:550::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cfdf417c-dd93-4ec6-bc85-65393b7babae;
 Mon, 03 May 2021 19:41:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfdf417c-dd93-4ec6-bc85-65393b7babae
From: Thomas Gleixner <tglx@linutronix.de>
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020; t=1620070904;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=R89mha3W7dggNcHio8mKK09HcaBedKHLYwCG8+sIhQk=;
	b=baebpRmFzGasqi47KSv9CzosBnOiLYNu8TAQWs4HYL8bMXuO3WHl9PW+6B2wB2AHbx27rY
	oOoUfxBV37pDJYArt3WLnjTje5ja5x3UuYgZe+3XejjGC6KButNajyEr+v07Edoyiy3kHY
	c48uBhmkG0bWfoGuoCo8YiNsgbQDzNgJBfBp97ly7hDxC0o2vlwN5eDjufS0Kj2655+N9I
	1sRye9evLpVS4KWl2tnj1rrHO1TOMwMmvmTZSo/oiNE52v5yb9gRdFgMRILpKdoV9cp4oz
	YN6n4Am8JaNeey9jU2D5lUWhA6o6aTZm1mb5xm32PwtUdtQPjc9kG8iGX5ILXQ==
DKIM-Signature: v=1; a=ed25519-sha256; c=relaxed/relaxed; d=linutronix.de;
	s=2020e; t=1620070904;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=R89mha3W7dggNcHio8mKK09HcaBedKHLYwCG8+sIhQk=;
	b=VsoMRqDg8qyFRWUYjcU0Jkt5wjqQb3YctqCuac4VSxwB6NvvlM+s+7EveLibUxsYy8sW9y
	F+QEywnUKLPnq5CQ==
To: Lai Jiangshan <jiangshanlai@gmail.com>, linux-kernel@vger.kernel.org
Cc: Lai Jiangshan <laijs@linux.alibaba.com>, Paolo Bonzini <pbonzini@redhat.com>, Sean Christopherson <seanjc@google.com>, Steven Rostedt <rostedt@goodmis.org>, Andi Kleen <ak@linux.intel.com>, Andy Lutomirski <luto@kernel.org>, Vitaly Kuznetsov <vkuznets@redhat.com>, Wanpeng Li <wanpengli@tencent.com>, Jim Mattson <jmattson@google.com>, Joerg Roedel <joro@8bytes.org>, kvm@vger.kernel.org, Josh Poimboeuf <jpoimboe@redhat.com>, Uros Bizjak <ubizjak@gmail.com>, Maxim Levitsky <mlevitsk@redhat.com>, Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>, x86@kernel.org, "H. Peter Anvin" <hpa@zytor.com>, Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Peter Zijlstra <peterz@infradead.org>, Alexandre Chartre <alexandre.chartre@oracle.com>, Joerg Roedel <jroedel@suse.de>, Jian Cai <caij2003@gmail.com>, xen-devel@lists.xenproject.org
Subject: Re: [PATCH 1/4] x86/xen/entry: Rename xenpv_exc_nmi to noist_exc_nmi
In-Reply-To: <87r1ind4ee.ffs@nanos.tec.linutronix.de>
References: <20210426230949.3561-1-jiangshanlai@gmail.com> <20210426230949.3561-2-jiangshanlai@gmail.com> <87r1ind4ee.ffs@nanos.tec.linutronix.de>
Date: Mon, 03 May 2021 21:41:44 +0200
Message-ID: <87h7jjk3k7.ffs@nanos.tec.linutronix.de>
MIME-Version: 1.0
Content-Type: text/plain

On Mon, May 03 2021 at 21:05, Thomas Gleixner wrote:

> On Tue, Apr 27 2021 at 07:09, Lai Jiangshan wrote:
>> From: Lai Jiangshan <laijs@linux.alibaba.com>
>>
>> No functional change intended.  Just rename it and
>> move it to arch/x86/kernel/nmi.c so that we can reuse it later in the
>> next patch for early NMI and kvm.
>
> 'Reuse it later' is not really a proper explanation of why this change is
> necessary.
>
> Also this can be simplified by using aliasing which keeps the name
> spaces intact.

Aside from that, this is not required to be part of a fixes series which
needs to be backported.

Thanks,

        tglx


From xen-devel-bounces@lists.xenproject.org Mon May 03 21:18:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 03 May 2021 21:18:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121883.229900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldfxe-00087s-KV; Mon, 03 May 2021 21:18:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121883.229900; Mon, 03 May 2021 21:18:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldfxe-00087l-HK; Mon, 03 May 2021 21:18:38 +0000
Received: by outflank-mailman (input) for mailman id 121883;
 Mon, 03 May 2021 21:18:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldfxd-00087d-4H; Mon, 03 May 2021 21:18:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldfxc-00051y-TL; Mon, 03 May 2021 21:18:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldfxc-0003aN-Hk; Mon, 03 May 2021 21:18:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldfxc-00065m-BG; Mon, 03 May 2021 21:18:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=r5psQ+lXCy58/ALs7k613xcnWuTXWnxnL0ub2pW0iC4=; b=vzIhDC3A3uK6aRGA/MJ1/1TdmP
	713m2tb5N13MthZLp6w3MUbxLgaBDhWlmmfWQSLmKOvf6GMjcDcZZA49x5DLiqe2D6PyLh900fDxV
	X2JacBFLxRmZ1MZzhECzMup+YeGLyuRdds3TZw1uXAa2W8ZIlrsnpEvT7pxCxUuZbqpo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161616-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161616: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=53c5433e84e8935abed8e91d4a2eb813168a0ecf
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 03 May 2021 21:18:36 +0000

flight 161616 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161616/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                53c5433e84e8935abed8e91d4a2eb813168a0ecf
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  256 days
Failing since        152659  2020-08-21 14:07:39 Z  255 days  467 attempts
Testing same since   161616  2021-05-03 05:49:06 Z    0 days    1 attempts

------------------------------------------------------------
480 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 145184 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 02:22:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 02:22:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121920.229922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldkhT-0006Ya-1W; Tue, 04 May 2021 02:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121920.229922; Tue, 04 May 2021 02:22:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldkhS-0006YT-TQ; Tue, 04 May 2021 02:22:14 +0000
Received: by outflank-mailman (input) for mailman id 121920;
 Tue, 04 May 2021 02:22:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldkhR-0006YL-Ho; Tue, 04 May 2021 02:22:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldkhR-0007om-BN; Tue, 04 May 2021 02:22:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldkhQ-0000aN-Tc; Tue, 04 May 2021 02:22:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldkhQ-0006fx-TA; Tue, 04 May 2021 02:22:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Kv0v/NwEtOPTg8lATEu1rCG59WkZMJC5Kz99j0dzw3s=; b=yMmBD6gC9pDQ/AUf65bdEoHdQ7
	pdlfyfCUAGgvU30+gMXVYVd2rKa4RrupTJHOzcxEm9DW2Wn/F/oppD9xqNSdoUQtwgYnlEGF1oz18
	cafWhQCQRlN2s32WO8fZmSdHCbnxgroQC2XJyiuBk1lptKcaq11Of64S1FuSfYCpst1E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161621-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161621: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-saverestore:fail:heisenbug
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-saverestore.2:fail:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5b280a59c4dd8dad6cc8da28db981b193d10acee
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 02:22:12 +0000

flight 161621 xen-4.12-testing real [real]
flight 161632 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161621/
http://logs.test-lab.xenproject.org/osstest/logs/161632/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2 19 guest-localmigrate/x10 fail in 161609 REGR. vs. 159418

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-win7-amd64 15 guest-saverestore fail in 161609 pass in 161621
 test-amd64-amd64-xl-qcow2    18 guest-saverestore.2        fail pass in 161609

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159418
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159418
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5b280a59c4dd8dad6cc8da28db981b193d10acee
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   76 days
Failing since        160128  2021-03-18 14:36:18 Z   46 days   62 attempts
Testing same since   160150  2021-03-20 04:11:48 Z   44 days   60 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 04:21:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 04:21:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121934.229937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldmYK-0008Ga-NG; Tue, 04 May 2021 04:20:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121934.229937; Tue, 04 May 2021 04:20:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldmYK-0008GT-JZ; Tue, 04 May 2021 04:20:56 +0000
Received: by outflank-mailman (input) for mailman id 121934;
 Tue, 04 May 2021 04:20:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldmYJ-0008GL-Uo; Tue, 04 May 2021 04:20:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldmYJ-0001KA-NP; Tue, 04 May 2021 04:20:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldmYJ-0005tG-Cl; Tue, 04 May 2021 04:20:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldmYJ-0004AV-CL; Tue, 04 May 2021 04:20:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8pUjx/ckiWcnP+teYRlqxxl4NiB3bpWnyoHULQ2lQ0A=; b=WUt1Kqvl/yRoDO/JZRYfI3Czls
	jFyFDlBhvR1A5fmsSC2x9KMIAVjSnsFYMVTNXglhI2OiDmAjwIUYiuQNfY1CaBAFnCChlp4bzIF19
	bzB5fLa5CIrdPY8/K5K5VB1o0pgIUUCljEz7FWggXbftm6x7uUny+oWmAqhaCaQEd2aA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161623-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161623: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=9ccce092fc64d19504fa54de4fd659e279cc92e7
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 04:20:55 +0000

flight 161623 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161623/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                9ccce092fc64d19504fa54de4fd659e279cc92e7
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  276 days
Failing since        152366  2020-08-01 20:49:34 Z  275 days  460 attempts
Testing same since   161623  2021-05-03 12:40:45 Z    0 days    1 attempts

------------------------------------------------------------
5922 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1604290 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 05:55:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 05:55:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121939.229952 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldo1j-0007hj-Q0; Tue, 04 May 2021 05:55:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121939.229952; Tue, 04 May 2021 05:55:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldo1j-0007hc-M4; Tue, 04 May 2021 05:55:23 +0000
Received: by outflank-mailman (input) for mailman id 121939;
 Tue, 04 May 2021 05:55:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldo1i-0007hU-I3; Tue, 04 May 2021 05:55:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldo1i-0003FL-78; Tue, 04 May 2021 05:55:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldo1h-0002eS-SP; Tue, 04 May 2021 05:55:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldo1h-0002hW-Rt; Tue, 04 May 2021 05:55:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JYGC+Xmu7+RbavdWM3E+aPWYUeCEdKzIqIYYLCPqAbg=; b=Drdcvv8mEIGVzy8MidXI4vEy8K
	LPn8xxy2LXHaWbHFhhaMbI2Ouikg4zFy613zR5Z5XlLkaus5ZunQnCOGQHqyjp7ZpRzo8E0mnUH2K
	B/Ihj5xOHUy6R46XUP4Abx95+ype7H5TQeuNW/7C1gScXzonqQaQEKK1O0w+T/h0cPno=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161661-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161661: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:build-amd64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-amd64:xen-build:fail:regression
    xen-4.12-testing:build-arm64:xen-build:fail:regression
    xen-4.12-testing:build-amd64-prev:xen-build:fail:regression
    xen-4.12-testing:build-i386:xen-build:fail:regression
    xen-4.12-testing:build-arm64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-i386-xsm:xen-build:fail:regression
    xen-4.12-testing:build-i386-prev:xen-build:fail:regression
    xen-4.12-testing:build-armhf:xen-build:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=5b280a59c4dd8dad6cc8da28db981b193d10acee
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 05:55:21 +0000

flight 161661 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161661/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 159418
 build-amd64                   6 xen-build                fail REGR. vs. 159418
 build-arm64                   6 xen-build                fail REGR. vs. 159418
 build-amd64-prev              6 xen-build                fail REGR. vs. 159418
 build-i386                    6 xen-build                fail REGR. vs. 159418
 build-arm64-xsm               6 xen-build                fail REGR. vs. 159418
 build-i386-xsm                6 xen-build                fail REGR. vs. 159418
 build-i386-prev               6 xen-build                fail REGR. vs. 159418
 build-armhf                   6 xen-build                fail REGR. vs. 159418

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  5b280a59c4dd8dad6cc8da28db981b193d10acee
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   76 days
Failing since        160128  2021-03-18 14:36:18 Z   46 days   63 attempts
Testing same since   160150  2021-03-20 04:11:48 Z   45 days   61 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 06:24:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 06:24:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121950.229973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldoTo-0001z4-5c; Tue, 04 May 2021 06:24:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121950.229973; Tue, 04 May 2021 06:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldoTo-0001yx-1O; Tue, 04 May 2021 06:24:24 +0000
Received: by outflank-mailman (input) for mailman id 121950;
 Tue, 04 May 2021 06:24:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldoTn-0001yp-Js; Tue, 04 May 2021 06:24:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldoTn-0003nx-DB; Tue, 04 May 2021 06:24:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldoTn-0003fx-4V; Tue, 04 May 2021 06:24:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldoTn-0002I7-43; Tue, 04 May 2021 06:24:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=221n0zJfq5BjU44Uz5g7UhsQlwgc0lRWAFNwszUIN00=; b=AAPHuY9Srs4njo1rWZHASOaFer
	l6HclwcADu8SaaqCZjq5jJRI2eVfvq3DfdHKQlzo9th9kDessxwolhpXTmycD63BYbnhRErRJ4lpC
	1S67wVPa0UPGQqjVT1B0tDOkfhg8bTVPCDS3hEFwKUX2Hih8YjBGMXQDJnX/EqJifXfI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161629-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161629: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=8c8f49f0dc86e3c58d94766e6b194b83c1bef5c9
X-Osstest-Versions-That:
    ovmf=1e6b0394d6c001802dc454ecff19076aaa80f51c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 06:24:23 +0000

flight 161629 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161629/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 8c8f49f0dc86e3c58d94766e6b194b83c1bef5c9
baseline version:
 ovmf                 1e6b0394d6c001802dc454ecff19076aaa80f51c

Last test of basis   161559  2021-04-30 18:42:58 Z    3 days
Testing same since   161629  2021-05-03 18:40:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>
  Rebecca Cran <rebecca@bsdio.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1e6b0394d6..8c8f49f0dc  8c8f49f0dc86e3c58d94766e6b194b83c1bef5c9 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 04 07:58:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 07:58:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121969.229999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldpwF-0001Kn-2X; Tue, 04 May 2021 07:57:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121969.229999; Tue, 04 May 2021 07:57:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldpwE-0001Kg-Vt; Tue, 04 May 2021 07:57:50 +0000
Received: by outflank-mailman (input) for mailman id 121969;
 Tue, 04 May 2021 07:57:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldpwE-0001Kb-1f
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 07:57:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a264c15-80b5-4ca9-ac6c-bc694c5ec9e1;
 Tue, 04 May 2021 07:57:48 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A7B4FAECB;
 Tue,  4 May 2021 07:57:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a264c15-80b5-4ca9-ac6c-bc694c5ec9e1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620115067; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=njW/EeGF5q23c7VSb2H2+m0T4TKq3MoSFSByja5muxE=;
	b=mgC7IuhH4+JuoL9JL9fFiaPmYBobYdkuhLhFS/1xlwJSYJ8jTmxF4DEInErKarwewLW70y
	QPuUt5h54NBF0EOUwdmvEyJ0na1KZp7kQt9dVmiaSavOqZiT0WeAls81kzG7PPd4GKqqfU
	V1kCujWbbbIIuP2TzCviLyNZd3QGqrU=
Subject: Re: [PATCH v3 05/22] x86/xstate: drop xstate_offsets[] and
 xstate_sizes[]
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <434705ef-1c34-581d-b956-2322b4413232@suse.com>
 <f3a9b372-c927-70e3-a2ba-fef2bb2c7d7a@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ebf0945a-db78-66de-2f64-860c5067220d@suse.com>
Date: Tue, 4 May 2021 09:57:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <f3a9b372-c927-70e3-a2ba-fef2bb2c7d7a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 18:10, Andrew Cooper wrote:
> On 22/04/2021 15:45, Jan Beulich wrote:
>> They're redundant with respective fields from the raw CPUID policy; no
>> need to keep two copies of the same data.
> 
> So before I read this patch of yours, I had a separate cleanup patch
> turning the two arrays into static const.
> 
>> This also breaks
>> recalculate_xstate()'s dependency on xstate_init(),
> 
> It doesn't, because you've retained the reference to xstate_align, which
> is calculated in xstate_init().

Good point - s/breaks/eliminates some of/.

>  I've posted "[PATCH 4/5] x86/cpuid:
> Simplify recalculate_xstate()" which goes rather further.

I'll see about taking a look soonish.

> xstate_align, and xstate_xfd as you've got later in the series, don't
> need to be variables.  They're constants, just like the offset/size
> information, because they're all a description of the XSAVE ISA
> instruction behaviour.

Hmm, I think multiple views are possible here - for xfd_mask even more
than for xstate_align: XFD is, according to my understanding of the
spec, not a prerequisite feature for AMX. IOW AMX would function fine
without XFD; it's just that lazy allocation of state-saving space then
wouldn't be possible. And I also can't, in principle, see any reason why
largish components like the AVX512 ones couldn't become XFD-sensitive
(in hardware, that is; we of course can't mimic this in software).

(I could take as evidence Intel's SDE reporting AMX but not XFD with
-spr, but I rather suspect this to be an oversight in their CPUID data.
I've posted a question to that effect in their forum.)

If there really were a strict static relationship, I'd have trouble
seeing why the information would need expressing in CPUID at all. That
would at least feel like over-engineering.

> We never turn on states we don't understand, which means we don't
> actually need to refer to any component subleaf, other than to cross-check.
> 
> I'm still on the fence as to whether it is better to compile in the
> constants, or to just use the raw policy.  Absolutely nothing good will
> come of the constants changing, and one of my backup plans for dealing
> with the size of cpuid_policy if it becomes a problem was to not store
> these leaves, and generate them dynamically on request.

Actually, my understanding is that the offsets are expressed via CPUID
because originally they were meant to be able to vary between
implementations (see in particular the placement of the LWP component,
which has resulted in a curious 128-byte gap ahead of the MPX
components). That was the case at least until it was realized what
implications this would have for migration.

>> allowing host CPUID
>> policy calculation to be moved together with that of the raw one (which
>> a subsequent change will require anyway).
> 
> While breaking up the host/raw calculations from the rest, we really
> need to group the MSR policy calculations with their CPUID counterparts.

But that's orthogonal to the change here, i.e. if it belongs in this
series at all, it would be the subject of a separate patch. Plus I have
to admit I'm not sure I see what your plan here would be - cpuid.c and
msr.c so far don't cross-reference one another. And I thought this
separation was intentional.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 08:03:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 08:03:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121976.230012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldq1J-0002lm-Tj; Tue, 04 May 2021 08:03:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121976.230012; Tue, 04 May 2021 08:03:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldq1J-0002lf-Q1; Tue, 04 May 2021 08:03:05 +0000
Received: by outflank-mailman (input) for mailman id 121976;
 Tue, 04 May 2021 08:03:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldq1I-0002lY-9m
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 08:03:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc110e49-9b37-4006-9adb-39b99282f19b;
 Tue, 04 May 2021 08:03:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C746AAFF0;
 Tue,  4 May 2021 08:03:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc110e49-9b37-4006-9adb-39b99282f19b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620115382; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=706C+Rr0FDuR3T16GiMyOKirxaTctxt7XOLLnVP5JrU=;
	b=MAHd+U5Zc6o7yjzJdLmIDjhzdz3fZA1WUs/bjpZ0nJc7S0Uoa55lPxvk7Z/YQdM56nzmvC
	iDpml+U1HndK0MG9FTZukdTdm++l0Rqmo+r8XbV/nWYrPXCm6Gyhc+h9rSA4kbst3mOXNp
	TWkBpS6UDVEHLrwsnnh3LT5SN/6xyb0=
Subject: Re: [PATCH 10/13] libxc: Add xc_set_cpufreq_hwp
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-11-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <398aa86f-13e4-e5d4-29d6-4491a05c920a@suse.com>
Date: Tue, 4 May 2021 10:03:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-11-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> Add xc_set_cpufreq_hwp to allow calling xen_sysctl_pm_op
> SET_CPUFREQ_HWP.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> 
> ---
> Am I allowed to do set_hwp = *set_hwp struct assignment?

I'm puzzled by the question - why would you not be?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 08:25:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 08:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.121994.230076 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqN4-0004oB-F5; Tue, 04 May 2021 08:25:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 121994.230076; Tue, 04 May 2021 08:25:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqN4-0004o4-Bz; Tue, 04 May 2021 08:25:34 +0000
Received: by outflank-mailman (input) for mailman id 121994;
 Tue, 04 May 2021 08:25:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldqN2-0004nw-Ik; Tue, 04 May 2021 08:25:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldqN2-0006Lc-E3; Tue, 04 May 2021 08:25:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldqN2-0001PQ-2s; Tue, 04 May 2021 08:25:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldqN2-0000sJ-2N; Tue, 04 May 2021 08:25:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=U9WxZKjxoDe5ISgEn5i1Dt2KWJg79cBBwES2L5oOMaw=; b=0p+BfR2Mx7kT48o4x66ZXKmvoe
	3Ny/pyTy9FmUDhgsEO03IbtUshph/zmbAMNUL+5A9oTEF8YAvlznM3vytF6uWdF6RY+OnfL9jtzME
	7ABGlYLz0n4+7iELrr7voa7BGaAQfCMCXD5VXkWztFJshRTBZJYlB6sDlbybVgLilex0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161628-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161628: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
X-Osstest-Versions-That:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 08:25:32 +0000

flight 161628 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161628/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 161613

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161613
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161613
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161613
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161613
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161613
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161613
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 161613
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161613
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161613
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161613
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161613
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161613
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
baseline version:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7

Last test of basis   161613  2021-05-03 01:52:43 Z    1 days
Testing same since   161628  2021-05-03 18:07:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1f8ee4cb43..d26c277826  d26c277826dbbd64b3e3cb57159e1ecbfad33bc8 -> master


From xen-devel-bounces@lists.xenproject.org Tue May 04 08:42:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 08:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122004.230091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqdU-0006X8-6W; Tue, 04 May 2021 08:42:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122004.230091; Tue, 04 May 2021 08:42:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqdU-0006X1-34; Tue, 04 May 2021 08:42:32 +0000
Received: by outflank-mailman (input) for mailman id 122004;
 Tue, 04 May 2021 08:42:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4Og=J7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ldqdS-0006Ww-LE
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 08:42:30 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a38950da-1513-4ab9-aa77-0868ca61b5e1;
 Tue, 04 May 2021 08:42:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a38950da-1513-4ab9-aa77-0868ca61b5e1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620117749;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=b/QYWXalV/j1/JHRl6JS+NRs5jZhyCw3xqOSqf1wfSc=;
  b=VJcPAwiFAmqYizRrUgEc19htr2ZM/uw5UcS8vo5XzZQzWjoYF1HnXO08
   Bo3YbSbkNLIWfkRAQJUKhHoyJUIvIj1rQQ/EkAIMD4DgsBxVr+T7GQt/p
   rD1yqgEIsTUQqmoMAcWJmXaSpbii6BIgkGrz5UMzcpWaFudOSgTAiHdW1
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44520022
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,271,1613451600"; 
   d="scan'208";a="44520022"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fGQBlkrc4xCCl6vGAJx+HM2i0m3MFlfJAMbl6tPxM4241a1jLXP1sZIem1rL5ZM2nOTJCSbWCe+8wKeBaIUzBAaCHjL6HzdcWBubkghqT/pjhAR6xzgz6sN0+w/SgtSwZ8qkWVIZvUDiS4wnXUEqv/qqqQlohpqZYxJNyCWvgvlFAi2D+lixVVMlxRvftRmt91YoCHWAP+oMUfQ4PQAwnHr1KFidHdtmA99KAn73woN+FXfiianXqAURDKOMlYzYGzLa6CBdbSYT73l0iZAEIy+/J0o+oUQTVK1xgygzUofqbtW00EnP9jSsjx5Ym2RJQ6wUOB5C2DuhHRb33U5DCQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gMShm7y/KTJWmReJD3OB0IPy3SLCiCtqqCgQhb7UTzU=;
 b=h8wCtZjxkVoXHaG3IPcUkEc87gFz4sEqSNpZxEhEHEVk94aMtwU6fSm+5IzoqCB5J4BjUSeYXnbqMsVmPJjcxME1ZlmRTOsYUFDuN3fuBGO9g8dGSac3TMc9BTHaHgdMWzb3Q///zYy3fZf4apPyD3dmOcP0Rff+kfa1KpWlRHZyAYWOCSidgFtoK4ySQca5km6ISwlBOAKU1nIJhQJJYLTmjrgRRV9gpFaOWCoKeOEP3Rmr7c7j5kSRDpDrK0vdT7aUWC/16Y3xk6d4VYmrGCmECqdzvsY+pw3uEgi6fyaZekQBhcHHTIqmMSiJ4GIbo/PIQG1bhtnIF0dDX3uV4A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=gMShm7y/KTJWmReJD3OB0IPy3SLCiCtqqCgQhb7UTzU=;
 b=BQfjqD3SPFEx5hK99Wk/UtQ1DpMcuoBrLiz9fMSZk+Ad4vT2ja9/8iyJt+lIYtFCkrMV1JPjsJuReJqWoqLpcKXIWMdxU+hxV6XbSJjqocqH1MsnIfvULEBf22yz/aSGzg3AqwwEEknjGksWyNDVcjEAvojLAv2FFyKiBjxiO1o=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] x86/vhpet: fix RTC special casing
Date: Tue,  4 May 2021 10:42:08 +0200
Message-ID: <20210504084208.62823-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0086.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:32::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2953a0e9-0df5-4598-581a-08d90ed88a05
X-MS-TrafficTypeDiagnostic: DM6PR03MB4762:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB47623C69267A3604A5674F9B8F5A9@DM6PR03MB4762.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 2953a0e9-0df5-4598-581a-08d90ed88a05
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 08:42:24.2699
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9DXWT9E0hbOKHZpsGl6ODksplWi+lVRjk5nrCb5Niw4TZWTmGE6VawNAEnTApZTV5tf5Ktx34NBivyUScbFcKg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4762
X-OriginatorOrg: citrix.com

Restore setting the virtual timer callback private data to NULL if the
timer is not level triggered. This fixes the special casing done in
pt_update_irq so that the RTC interrupt when originating from the HPET
is suspended if the interrupt source is masked.

Note the RTC special casing done in pt_update_irq should only apply to
the RTC interrupt originating from the emulated RTC device (which does
set the callback private data), as in that case the callback itself
will destroy the virtual timer if the interrupt is ignored.

While there also use RTC_IRQ instead of 8 when the HPET is configured
in LegacyReplacement Mode.

Fixes: be07023be115 ("x86/vhpet: add support for level triggered interrupts")
Reported-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/hpet.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/hpet.c b/xen/arch/x86/hvm/hpet.c
index ca94e8b4538..ee756abb824 100644
--- a/xen/arch/x86/hvm/hpet.c
+++ b/xen/arch/x86/hvm/hpet.c
@@ -22,6 +22,7 @@
 #include <asm/hvm/trace.h>
 #include <asm/current.h>
 #include <asm/hpet.h>
+#include <asm/mc146818rtc.h>
 #include <xen/sched.h>
 #include <xen/event.h>
 #include <xen/trace.h>
@@ -290,7 +291,7 @@ static void hpet_set_timer(HPETState *h, unsigned int tn,
         /* if LegacyReplacementRoute bit is set, HPET specification requires
            timer0 be routed to IRQ0 in NON-APIC or IRQ2 in the I/O APIC,
            timer1 be routed to IRQ8 in NON-APIC or IRQ8 in the I/O APIC. */
-        irq = (tn == 0) ? 0 : 8;
+        irq = (tn == 0) ? 0 : RTC_IRQ;
         h->pt[tn].source = PTSRC_isa;
     }
     else
@@ -318,7 +319,8 @@ static void hpet_set_timer(HPETState *h, unsigned int tn,
                          hpet_tick_to_ns(h, diff),
                          oneshot ? 0 : hpet_tick_to_ns(h, h->hpet.period[tn]),
                          irq, timer_level(h, tn) ? hpet_timer_fired : NULL,
-                         (void *)(unsigned long)tn, timer_level(h, tn));
+                         timer_level(h, tn) ? (void *)(unsigned long)tn : NULL,
+                         timer_level(h, tn));
 }
 
 static inline uint64_t hpet_fixup_reg(
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Tue May 04 08:42:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 08:42:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122005.230103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqdb-0006Ze-Fj; Tue, 04 May 2021 08:42:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122005.230103; Tue, 04 May 2021 08:42:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqdb-0006ZV-B8; Tue, 04 May 2021 08:42:39 +0000
Received: by outflank-mailman (input) for mailman id 122005;
 Tue, 04 May 2021 08:42:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldqda-0006ZH-HF
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 08:42:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db5d6744-d172-4a28-bd53-2491b2c74f2c;
 Tue, 04 May 2021 08:42:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B381FB158;
 Tue,  4 May 2021 08:42:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db5d6744-d172-4a28-bd53-2491b2c74f2c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620117756; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=t7pOKaQXzpRytMdA4ZFAwyRv6nAp2aFO8FSE0pQddp8=;
	b=RkgIBAkp8+7NeSTaAzy1gImnP6TWxlhE6TRahF2gXrsiKBwF5JzZEEeYQb+n+Xew0ZPu2w
	uciv1rMdM36Ui1XimTetieS8r7+bcVxHtlflRhk7XepNWs/XNBO8O80nuYC2eWXEHle+Ry
	W1hCRbKcaiQ2c6SzNBD92WtPxE5nFns=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v4] gnttab: defer allocation of status frame tracking array
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Message-ID: <74048f89-fee7-06c2-ffd5-6e5a14bdf440@suse.com>
Date: Tue, 4 May 2021 10:42:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This array can be large when many grant frames are permitted; avoid
allocating it when it's not going to be used anyway, by doing this only
in gnttab_populate_status_frames(). While deferring this memory
allocation adds possible reasons for failure to the respective
enclosing operations, those operations already perform other memory
allocations, so callers can't expect them to always succeed anyway.

As to the re-ordering at the end of gnttab_unpopulate_status_frames(),
this is merely to reflect the intended order of actions (shrink the
array bound, then free the higher array entries). If there were racing
accesses, suitable barriers would additionally need to be added.
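The deferred-allocation pattern described above can be sketched generically as follows. This is a minimal illustration, not the actual Xen code: `table_t`, `ensure_status` and the `calloc()` call are hypothetical stand-ins for the grant table structure, the check in gnttab_populate_status_frames() and xzalloc_array(), and `ZERO_BLOCK_PTR` here is just a local sentinel definition mirroring Xen's "valid but empty" marker.

```c
#include <stdlib.h>

/* Local stand-in for Xen's sentinel: a non-NULL "valid but empty" pointer,
 * so readers of the field never see a NULL they might dereference-check
 * differently. */
#define ZERO_BLOCK_PTR ((void *)-1L)

typedef struct {
    void **status;           /* tracking array, allocated on demand */
    unsigned int max_frames; /* upper bound fixed at domain creation */
} table_t;

/* On-demand allocation: performed only when status frames are actually
 * about to be populated, instead of unconditionally at init time. */
static int ensure_status(table_t *t)
{
    if (t->status == ZERO_BLOCK_PTR) {
        t->status = calloc(t->max_frames, sizeof(*t->status));
        if (!t->status) {
            t->status = ZERO_BLOCK_PTR; /* restore sentinel on failure */
            return -1;                  /* -ENOMEM in the real code */
        }
    }
    return 0;                           /* already (or now) allocated */
}
```

On the error path, restoring the sentinel rather than leaving NULL keeps the invariant that the field is always a "safe" pointer, which is exactly why grant_table_init() in the patch stores ZERO_BLOCK_PTR instead of leaving the field unset.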

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v4: Add a comment. Add a few blank lines. Extend description.
v3: Drop smp_wmb(). Re-base.
v2: Defer allocation to when a domain actually switches to the v2 grant
    API.

--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1747,6 +1747,17 @@ gnttab_populate_status_frames(struct dom
     /* Make sure, prior version checks are architectural visible */
     block_speculation();
 
+    if ( gt->status == ZERO_BLOCK_PTR )
+    {
+        gt->status = xzalloc_array(grant_status_t *,
+                                   grant_to_status_frames(gt->max_grant_frames));
+        if ( !gt->status )
+        {
+            gt->status = ZERO_BLOCK_PTR;
+            return -ENOMEM;
+        }
+    }
+
     for ( i = nr_status_frames(gt); i < req_status_frames; i++ )
     {
         if ( (gt->status[i] = alloc_xenheap_page()) == NULL )
@@ -1767,18 +1778,25 @@ status_alloc_failed:
         free_xenheap_page(gt->status[i]);
         gt->status[i] = NULL;
     }
+
+    if ( !nr_status_frames(gt) )
+    {
+        xfree(gt->status);
+        gt->status = ZERO_BLOCK_PTR;
+    }
+
     return -ENOMEM;
 }
 
 static int
 gnttab_unpopulate_status_frames(struct domain *d, struct grant_table *gt)
 {
-    unsigned int i;
+    unsigned int i, n = nr_status_frames(gt);
 
     /* Make sure, prior version checks are architectural visible */
     block_speculation();
 
-    for ( i = 0; i < nr_status_frames(gt); i++ )
+    for ( i = 0; i < n; i++ )
     {
         struct page_info *pg = virt_to_page(gt->status[i]);
         gfn_t gfn = gnttab_get_frame_gfn(gt, true, i);
@@ -1833,12 +1849,11 @@ gnttab_unpopulate_status_frames(struct d
         page_set_owner(pg, NULL);
     }
 
-    for ( i = 0; i < nr_status_frames(gt); i++ )
-    {
-        free_xenheap_page(gt->status[i]);
-        gt->status[i] = NULL;
-    }
     gt->nr_status_frames = 0;
+    for ( i = 0; i < n; i++ )
+        free_xenheap_page(gt->status[i]);
+    xfree(gt->status);
+    gt->status = ZERO_BLOCK_PTR;
 
     return 0;
 }
@@ -1969,11 +1984,11 @@ int grant_table_init(struct domain *d, i
     if ( gt->shared_raw == NULL )
         goto out;
 
-    /* Status pages for grant table - for version 2 */
-    gt->status = xzalloc_array(grant_status_t *,
-                               grant_to_status_frames(gt->max_grant_frames));
-    if ( gt->status == NULL )
-        goto out;
+    /*
+     * Status page tracking array for v2 gets allocated on demand. But don't
+     * leave a NULL pointer there.
+     */
+    gt->status = ZERO_BLOCK_PTR;
 
     grant_write_lock(gt);
 
@@ -4047,11 +4062,13 @@ int gnttab_acquire_resource(
         if ( gt->gt_version != 2 )
             break;
 
+        /* This may change gt->status, so has to happen before setting vaddrs. */ 
+        rc = gnttab_get_status_frame_mfn(d, final_frame, &tmp);
+
         /* Check that void ** is a suitable representation for gt->status. */
         BUILD_BUG_ON(!__builtin_types_compatible_p(
                          typeof(gt->status), grant_status_t **));
         vaddrs = (void **)gt->status;
-        rc = gnttab_get_status_frame_mfn(d, final_frame, &tmp);
         break;
     }
 


From xen-devel-bounces@lists.xenproject.org Tue May 04 08:54:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 08:54:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122012.230115 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqoz-0007bV-Hm; Tue, 04 May 2021 08:54:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122012.230115; Tue, 04 May 2021 08:54:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqoz-0007bO-El; Tue, 04 May 2021 08:54:25 +0000
Received: by outflank-mailman (input) for mailman id 122012;
 Tue, 04 May 2021 08:54:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldqox-0007bJ-LA
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 08:54:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df7881b7-04fb-496b-a336-321115daab1b;
 Tue, 04 May 2021 08:54:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45677AED7;
 Tue,  4 May 2021 08:54:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df7881b7-04fb-496b-a336-321115daab1b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620118461; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TSP9FYSdRM7WifVs0K5hQNwiRkkZf+HdtxO2eRrGRfI=;
	b=Goa7fDAAHu6bz4DmhUWPSWoqT1Gwu0DxihlvmcJO44ItT8uxlHrZJiJTSfIt4DO9N/tmxa
	npNaLDmIJnTrihgUSpYs+9TWi6/Xyr5lu5v3wf5gEZh0uNZqfGpdNACRT8CeJvRPJHW2jV
	frjhuGMJbLzO+Yy8pOIvribHMXt0CSY=
Subject: Re: [PATCH] x86/vhpet: fix RTC special casing
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210504084208.62823-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <15dd7e21-0077-936f-740c-90c2ed991bdf@suse.com>
Date: Tue, 4 May 2021 10:54:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210504084208.62823-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 10:42, Roger Pau Monne wrote:
> Restore setting the virtual timer callback private data to NULL if the
> timer is not level triggered. This fixes the special casing done in
> pt_update_irq so that the RTC interrupt when originating from the HPET
> is suspended if the interrupt source is masked.
> 
> Note the RTC special casing done in pt_update_irq should only apply to
> the RTC interrupt originating from the emulated RTC device (which does
> set the callback private data), as in that case the callback itself
> will destroy the virtual timer if the interrupt is ignored.
> 
> While there also use RTC_IRQ instead of 8 when the HPET is configured
> in LegacyReplacement Mode.
> 
> Fixes: be07023be115 ("x86/vhpet: add support for level triggered interrupts")
> Reported-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> @@ -318,7 +319,8 @@ static void hpet_set_timer(HPETState *h, unsigned int tn,
>                           hpet_tick_to_ns(h, diff),
>                           oneshot ? 0 : hpet_tick_to_ns(h, h->hpet.period[tn]),
>                           irq, timer_level(h, tn) ? hpet_timer_fired : NULL,
> -                         (void *)(unsigned long)tn, timer_level(h, tn));
> +                         timer_level(h, tn) ? (void *)(unsigned long)tn : NULL,
> +                         timer_level(h, tn));

Depending on what further changes to this call may be planned, it
may help readability if we split the call into a level and an edge
one.
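For illustration only, the split suggested here might take roughly the following shape. The stub below is hypothetical and merely records its arguments; it does not reproduce Xen's real create_periodic_time() signature, only the three spots in the quoted call that vary between the two cases (callback, callback data, level flag).

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical stub standing in for create_periodic_time(); records the
 * last callback-data/level pair so the two paths can be compared. */
static void *last_data;
static bool last_level;

static void create_periodic_time_stub(uint64_t delta, uint64_t period,
                                      unsigned int irq,
                                      void *cb_data, bool level)
{
    (void)delta; (void)period; (void)irq;
    last_data = cb_data;
    last_level = level;
}

/* Edge-triggered setup: no callback data, level flag clear. */
static void set_timer_edge(uint64_t delta, uint64_t period, unsigned int irq)
{
    create_periodic_time_stub(delta, period, irq, NULL, false);
}

/* Level-triggered setup: timer number passed as callback data so the
 * fired callback can identify which timer to handle. */
static void set_timer_level(uint64_t delta, uint64_t period, unsigned int irq,
                            unsigned int tn)
{
    create_periodic_time_stub(delta, period, irq,
                              (void *)(unsigned long)tn, true);
}
```

Two dedicated helpers replace the three `timer_level(h, tn) ? ... : ...` conditionals in a single call, which is the readability gain being suggested.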

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 09:04:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 09:04:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122023.230139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqyt-0000Bu-MI; Tue, 04 May 2021 09:04:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122023.230139; Tue, 04 May 2021 09:04:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldqyt-0000Bj-H8; Tue, 04 May 2021 09:04:39 +0000
Received: by outflank-mailman (input) for mailman id 122023;
 Tue, 04 May 2021 09:04:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldqyr-0000BG-RH; Tue, 04 May 2021 09:04:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldqyr-00070O-Jn; Tue, 04 May 2021 09:04:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldqyr-0003S0-7C; Tue, 04 May 2021 09:04:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldqyr-000352-6j; Tue, 04 May 2021 09:04:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VKXogiKFi7H1YOEWMrD4qxuw3pYFGtSX2+lqXOkgonc=; b=fez2tmkHiwtUc+mPu1OOwLAonl
	WLju5Th/De3djKF8VTr+4P29CfTIdASWH5wFQdV8S3FhrDKgq7CkEiQo/o7lUXGYt0+aDuLxK8V+n
	fnVig+eJ1WKY3oXRTQe8kIM0zBnny53C4Y9Uy5S5h14vYJcA7VDfCvwMH2EcF9Fkgic4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161698-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161698: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=b6a02345dc689c9b57e53d7fb37d42a0ba3c1cca
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 09:04:37 +0000

flight 161698 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161698/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              b6a02345dc689c9b57e53d7fb37d42a0ba3c1cca
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  298 days
Failing since        151818  2020-07-11 04:18:52 Z  297 days  290 attempts
Testing same since   161698  2021-05-04 05:42:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 55889 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 09:28:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 09:28:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122033.230154 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrLt-00020p-Qs; Tue, 04 May 2021 09:28:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122033.230154; Tue, 04 May 2021 09:28:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrLt-00020i-Nr; Tue, 04 May 2021 09:28:25 +0000
Received: by outflank-mailman (input) for mailman id 122033;
 Tue, 04 May 2021 09:28:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldrLs-00020d-9E
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 09:28:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd8c3e9b-3621-4132-8b93-1ccd57c16418;
 Tue, 04 May 2021 09:28:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4AC59B19A;
 Tue,  4 May 2021 09:28:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd8c3e9b-3621-4132-8b93-1ccd57c16418
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620120502; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QMnsycFkZceMuPLrD5lDxoO8Cdg8Uoi8uLnpunYETIU=;
	b=iQb0RVdCPYd8Q/yz5NwysuwfazjiJcOfDqkVJPrfcj0Y3OsZYVm3muJwy7f4vv7H8un2HN
	C9P/wMCxoqCX/6a7jWNAY2z+Yjq+AocjwysRPhp67eaQ19SqQEm2IrRfSAragUVz2rbg7L
	uJDV/PFF4disRv8i86qW8yy/nodss1I=
Subject: Re: [PATCH v4 07/12] x86/dpci: switch to use a GSI EOI callback
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-8-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <00e220bc-eb45-800b-b266-1b94e69d44c3@suse.com>
Date: Tue, 4 May 2021 11:28:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210420140723.65321-8-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.04.2021 16:07, Roger Pau Monne wrote:
> @@ -476,6 +476,7 @@ int pt_irq_create_bind(
>      {
>          struct dev_intx_gsi_link *digl = NULL;
>          struct hvm_girq_dpci_mapping *girq = NULL;
> +        struct hvm_gsi_eoi_callback *cb = NULL;

I wonder if this wouldn't benefit from a brief "hwdom only" comment.

> @@ -502,11 +503,22 @@ int pt_irq_create_bind(
>              girq->bus = digl->bus = pt_irq_bind->u.pci.bus;
>              girq->device = digl->device = pt_irq_bind->u.pci.device;
>              girq->intx = digl->intx = pt_irq_bind->u.pci.intx;
> -            list_add_tail(&digl->list, &pirq_dpci->digl_list);
> +            girq->cb.callback = dpci_eoi;
>  
>              guest_gsi = hvm_pci_intx_gsi(digl->device, digl->intx);
>              link = hvm_pci_intx_link(digl->device, digl->intx);
>  
> +            rc = hvm_gsi_register_callback(d, guest_gsi, &girq->cb);
> +            if ( rc )
> +            {
> +                spin_unlock(&d->event_lock);
> +                xfree(girq);
> +                xfree(digl);
> +                return rc;
> +            }
> +
> +            list_add_tail(&digl->list, &pirq_dpci->digl_list);
> +
>              hvm_irq_dpci->link_cnt[link]++;

Could we keep calculation and use of "link" together, please, so the
compiler can avoid spilling the value to the stack or allocating a
callee-saved register for it?

> @@ -514,17 +526,43 @@ int pt_irq_create_bind(
>          }
>          else
>          {
> +            /*
> +             * NB: the callback structure allocated below will never be freed
> +             * once setup because it's used by the hardware domain and will
> +             * never be unregistered.
> +             */
> +            cb = xzalloc(struct hvm_gsi_eoi_callback);
> +
>              ASSERT(is_hardware_domain(d));
>  
> +            if ( !cb )
> +            {
> +                spin_unlock(&d->event_lock);
> +                return -ENOMEM;
> +            }

I'm inclined to ask that the ASSERT() remain first in this "else" block.
In fact, you could ...

>              /* MSI_TRANSLATE is not supported for the hardware domain. */
>              if ( pt_irq_bind->irq_type != PT_IRQ_TYPE_PCI ||
>                   pirq >= hvm_domain_irq(d)->nr_gsis )
>              {
>                  spin_unlock(&d->event_lock);
> -
> +                xfree(cb);

... avoid this extra cleanup by ...

>                  return -EINVAL;
>              }

... putting the allocation here.

>              guest_gsi = pirq;
> +
> +            cb->callback = dpci_eoi;
> +            /*
> +             * IRQ binds created for the hardware domain are never destroyed,
> +             * so it's fine to not keep a reference to cb here.
> +             */
> +            rc = hvm_gsi_register_callback(d, guest_gsi, cb);

In reply to a v3 comment of mine you said "I should replace IRQ with
GSI in the comment above to make it clearer." And while it was
discussed whether the comment is (and will remain) true in the first
place, I would have hoped for the commit message to say a word on
this. If this behavior ever changed, chances are this place would go
unnoticed and unchanged, leading to a memory leak.

> @@ -596,12 +634,17 @@ int pt_irq_create_bind(
>                      list_del(&digl->list);
>                      link = hvm_pci_intx_link(digl->device, digl->intx);
>                      hvm_irq_dpci->link_cnt[link]--;
> +                    hvm_gsi_unregister_callback(d, guest_gsi, &girq->cb);
>                  }
> +                else
> +                    hvm_gsi_unregister_callback(d, guest_gsi, cb);
> +
>                  pirq_dpci->flags = 0;
>                  pirq_cleanup_check(info, d);
>                  spin_unlock(&d->event_lock);
>                  xfree(girq);
>                  xfree(digl);
> +                xfree(cb);

May I suggest that you move the xfree() into the "else" you add, and
perhaps even make it conditional upon the un-register being successful?

> @@ -708,6 +752,11 @@ int pt_irq_destroy_bind(
>                   girq->machine_gsi == machine_gsi )
>              {
>                  list_del(&girq->list);
> +                rc = hvm_gsi_unregister_callback(d, guest_gsi, &girq->cb);
> +                if ( rc )
> +                    printk(XENLOG_G_WARNING
> +                           "%pd: unable to remove callback for GSI %u: %d\n",
> +                           d, guest_gsi, rc);
>                  xfree(girq);

If the un-registration really failed (here as well as further up),
is it safe to free girq?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 09:31:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 09:31:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122036.230166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrOQ-0002pS-85; Tue, 04 May 2021 09:31:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122036.230166; Tue, 04 May 2021 09:31:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrOQ-0002pL-4m; Tue, 04 May 2021 09:31:02 +0000
Received: by outflank-mailman (input) for mailman id 122036;
 Tue, 04 May 2021 09:31:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldrOO-0002pC-RA; Tue, 04 May 2021 09:31:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldrOO-0007PX-Hu; Tue, 04 May 2021 09:31:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldrOO-00047F-AL; Tue, 04 May 2021 09:31:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldrOO-0000vu-9p; Tue, 04 May 2021 09:31:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1r/qbNh8nvaEat4E6k2RxRSbAPBib4z6mBz58MRWnC4=; b=cTTvl/qu7ZZAZKegRCoS8PPLof
	exNHq2DzmPALR46PYT2jfXgVghafrotFKkUfJvYBQMtR+V9yOCE0fH/Fv31kfr0CZP22064hviitJ
	r8wrUND9NJZ5ECsSv/8RMqd49WXGquHju96JTk6IbQe/lQBHYNSWz8RWMwCJkZDMqYC4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161718-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161718: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:build-amd64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-amd64:xen-build:fail:regression
    xen-4.12-testing:build-arm64:xen-build:fail:regression
    xen-4.12-testing:build-amd64-prev:xen-build:fail:regression
    xen-4.12-testing:build-i386:xen-build:fail:regression
    xen-4.12-testing:build-arm64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-i386-xsm:xen-build:fail:regression
    xen-4.12-testing:build-i386-prev:xen-build:fail:regression
    xen-4.12-testing:build-armhf:xen-build:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-armhf-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    xen=5b280a59c4dd8dad6cc8da28db981b193d10acee
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 09:31:00 +0000

flight 161718 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161718/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 159418
 build-amd64                   6 xen-build                fail REGR. vs. 159418
 build-arm64                   6 xen-build                fail REGR. vs. 159418
 build-amd64-prev              6 xen-build                fail REGR. vs. 159418
 build-i386                    6 xen-build                fail REGR. vs. 159418
 build-arm64-xsm               6 xen-build                fail REGR. vs. 159418
 build-i386-xsm                6 xen-build                fail REGR. vs. 159418
 build-i386-prev               6 xen-build                fail REGR. vs. 159418
 build-armhf                   6 xen-build                fail REGR. vs. 159418

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a

version targeted for testing:
 xen                  5b280a59c4dd8dad6cc8da28db981b193d10acee
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   76 days
Failing since        160128  2021-03-18 14:36:18 Z   46 days   64 attempts
Testing same since   160150  2021-03-20 04:11:48 Z   45 days   62 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-prev                                             fail    
 build-i386-prev                                              fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 09:46:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 09:46:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122052.230181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrdC-0003qp-Op; Tue, 04 May 2021 09:46:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122052.230181; Tue, 04 May 2021 09:46:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrdC-0003qi-Lf; Tue, 04 May 2021 09:46:18 +0000
Received: by outflank-mailman (input) for mailman id 122052;
 Tue, 04 May 2021 09:46:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldrdB-0003qd-Fo
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 09:46:17 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 1b4aa7bb-162e-4f20-bda6-96ef96c40985;
 Tue, 04 May 2021 09:46:16 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 33E14ED1;
 Tue,  4 May 2021 02:46:16 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id D16813F73B;
 Tue,  4 May 2021 02:46:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b4aa7bb-162e-4f20-bda6-96ef96c40985
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 0/3] Use Doxygen and sphinx for html documentation
Date: Tue,  4 May 2021 10:46:03 +0100
Message-Id: <20210504094606.7125-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

This series introduces Doxygen into the Sphinx HTML docs generation.
One benefit is that most of the documentation is kept in the Xen
source files, making it more maintainable; on the other hand, there
are some limitations of Doxygen that need to be addressed by
modifying the current codebase (for example, Doxygen can't parse
anonymous structures/unions).

To reproduce the documentation, Xen must be compiled first, because
most of the headers are generated at compile time by the makefiles.

Here are the steps to generate the Sphinx HTML docs; some packages
may be required on your machine, and all of them are suggested by
the autoconf script. Here I'm building the arm64 docs (the only ones
introduced by this series so far):

./configure
make -C xen XEN_TARGET_ARCH="arm64" CROSS_COMPILE="aarch64-linux-gnu-" menuconfig
make -C xen XEN_TARGET_ARCH="arm64" CROSS_COMPILE="aarch64-linux-gnu-"
make -C docs XEN_TARGET_ARCH="arm64" sphinx-html

The generated docs are now in docs/sphinx/html/, starting from the
index.html page.
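The dependency checks done by the autoconf script can be expressed
with the stock AX_PYTHON_MODULE macro that this series bundles in
m4/ax_python_module.m4. The fragment below is an illustrative sketch
(names and structure assumed, not the literal configure.ac hunk from
this series):

```m4
AC_PATH_PROG([SPHINXBUILD], [sphinx-build], [no])
AS_IF([test "$SPHINXBUILD" != "no"], [
    dnl Doxygen and the Python packages are only needed when
    dnl sphinx-build is present; otherwise the Makefile skips
    dnl the sphinx-html target.
    AC_PATH_PROG([DOXYGEN], [doxygen], [no])
    AX_PYTHON_MODULE([breathe], [fatal], [python3])
    AX_PYTHON_MODULE([sphinx_rtd_theme], [fatal], [python3])
])
```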

Luca Fancellu (3):
  docs: add doxygen support for html documentation
  docs: hypercalls sphinx skeleton for generated html
  docs/doxygen: doxygen documentation for grant_table.h

 .gitignore                                    |    7 +
 config/Docs.mk.in                             |    2 +
 docs/Makefile                                 |   46 +-
 docs/conf.py                                  |   48 +-
 docs/configure                                |  258 ++
 docs/configure.ac                             |   15 +
 docs/hypercall-interfaces/arm32.rst           |    4 +
 docs/hypercall-interfaces/arm64.rst           |   33 +
 .../common/grant_tables.rst                   |    8 +
 docs/hypercall-interfaces/index.rst.in        |    7 +
 docs/hypercall-interfaces/x86_64.rst          |    4 +
 docs/index.rst                                |    8 +
 docs/xen-doxygen/customdoxygen.css            |   36 +
 docs/xen-doxygen/doxy-preprocessor.py         |  110 +
 docs/xen-doxygen/doxy_input.list              |    1 +
 docs/xen-doxygen/doxygen_include.h.in         |   32 +
 docs/xen-doxygen/footer.html                  |   21 +
 docs/xen-doxygen/header.html                  |   56 +
 docs/xen-doxygen/mainpage.md                  |    5 +
 docs/xen-doxygen/xen_project_logo_165x67.png  |  Bin 0 -> 18223 bytes
 docs/xen.doxyfile.in                          | 2316 +++++++++++++++++
 m4/ax_python_module.m4                        |   56 +
 m4/docs_tool.m4                               |    9 +
 xen/include/public/grant_table.h              |   68 +-
 24 files changed, 3120 insertions(+), 30 deletions(-)
 create mode 100644 docs/hypercall-interfaces/arm32.rst
 create mode 100644 docs/hypercall-interfaces/arm64.rst
 create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst
 create mode 100644 docs/hypercall-interfaces/index.rst.in
 create mode 100644 docs/hypercall-interfaces/x86_64.rst
 create mode 100644 docs/xen-doxygen/customdoxygen.css
 create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
 create mode 100644 docs/xen-doxygen/doxy_input.list
 create mode 100644 docs/xen-doxygen/doxygen_include.h.in
 create mode 100644 docs/xen-doxygen/footer.html
 create mode 100644 docs/xen-doxygen/header.html
 create mode 100644 docs/xen-doxygen/mainpage.md
 create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png
 create mode 100644 docs/xen.doxyfile.in
 create mode 100644 m4/ax_python_module.m4

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 04 09:46:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 09:46:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122053.230193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrdH-0003sN-1I; Tue, 04 May 2021 09:46:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122053.230193; Tue, 04 May 2021 09:46:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrdG-0003sE-TQ; Tue, 04 May 2021 09:46:22 +0000
Received: by outflank-mailman (input) for mailman id 122053;
 Tue, 04 May 2021 09:46:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldrdG-0003rw-0f
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 09:46:22 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id eb85e37c-4f06-4477-8daa-31c153a25d06;
 Tue, 04 May 2021 09:46:20 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D0266ED1;
 Tue,  4 May 2021 02:46:19 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7E30F3F73B;
 Tue,  4 May 2021 02:46:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb85e37c-4f06-4477-8daa-31c153a25d06
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 2/3] docs: hypercalls sphinx skeleton for generated html
Date: Tue,  4 May 2021 10:46:05 +0100
Message-Id: <20210504094606.7125-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210504094606.7125-1-luca.fancellu@arm.com>
References: <20210504094606.7125-1-luca.fancellu@arm.com>

Create a skeleton for the documentation about hypercalls

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 .gitignore                             |  1 +
 docs/Makefile                          |  4 ++++
 docs/hypercall-interfaces/arm32.rst    |  4 ++++
 docs/hypercall-interfaces/arm64.rst    | 32 ++++++++++++++++++++++++++
 docs/hypercall-interfaces/index.rst.in |  7 ++++++
 docs/hypercall-interfaces/x86_64.rst   |  4 ++++
 docs/index.rst                         |  8 +++++++
 7 files changed, 60 insertions(+)
 create mode 100644 docs/hypercall-interfaces/arm32.rst
 create mode 100644 docs/hypercall-interfaces/arm64.rst
 create mode 100644 docs/hypercall-interfaces/index.rst.in
 create mode 100644 docs/hypercall-interfaces/x86_64.rst

diff --git a/.gitignore b/.gitignore
index d271e0ce6a..a9aab120ae 100644
--- a/.gitignore
+++ b/.gitignore
@@ -64,6 +64,7 @@ docs/xen.doxyfile
 docs/xen.doxyfile.tmp
 docs/xen-doxygen/doxygen_include.h
 docs/xen-doxygen/doxygen_include.h.tmp
+docs/hypercall-interfaces/index.rst
 extras/mini-os*
 install/*
 stubdom/*-minios-config.mk
diff --git a/docs/Makefile b/docs/Makefile
index 2f784c36ce..b02c3dfb79 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -61,6 +61,9 @@ build: html txt pdf man-pages figs
 sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
 ifneq ($(SPHINXBUILD),no)
 	$(DOXYGEN) xen.doxyfile
+	@echo "Generating hypercall-interfaces/index.rst"
+	@sed -e "s,@XEN_TARGET_ARCH@,$(XEN_TARGET_ARCH),g" \
+		hypercall-interfaces/index.rst.in > hypercall-interfaces/index.rst
 	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
 else
 	@echo "Sphinx is not installed; skipping sphinx-html documentation."
@@ -108,6 +111,7 @@ clean: clean-man-pages
 	rm -f xen.doxyfile.tmp
 	rm -f xen-doxygen/doxygen_include.h
 	rm -f xen-doxygen/doxygen_include.h.tmp
+	rm -f hypercall-interfaces/index.rst
 
 .PHONY: distclean
 distclean: clean
diff --git a/docs/hypercall-interfaces/arm32.rst b/docs/hypercall-interfaces/arm32.rst
new file mode 100644
index 0000000000..4e973fbbaf
--- /dev/null
+++ b/docs/hypercall-interfaces/arm32.rst
@@ -0,0 +1,4 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - arm32
+============================
diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
new file mode 100644
index 0000000000..5e701a2adc
--- /dev/null
+++ b/docs/hypercall-interfaces/arm64.rst
@@ -0,0 +1,32 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - arm64
+============================
+
+Starting points
+---------------
+.. toctree::
+   :maxdepth: 2
+
+
+
+Functions
+---------
+
+
+Structs
+-------
+
+
+Enums and sets of #defines
+--------------------------
+
+
+Typedefs
+--------
+
+
+Enum values and individual #defines
+-----------------------------------
+
+
diff --git a/docs/hypercall-interfaces/index.rst.in b/docs/hypercall-interfaces/index.rst.in
new file mode 100644
index 0000000000..e4dcc5db8d
--- /dev/null
+++ b/docs/hypercall-interfaces/index.rst.in
@@ -0,0 +1,7 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces
+====================
+
+.. toctree::
+   @XEN_TARGET_ARCH@
diff --git a/docs/hypercall-interfaces/x86_64.rst b/docs/hypercall-interfaces/x86_64.rst
new file mode 100644
index 0000000000..3ed70dff95
--- /dev/null
+++ b/docs/hypercall-interfaces/x86_64.rst
@@ -0,0 +1,4 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - x86_64
+=============================
diff --git a/docs/index.rst b/docs/index.rst
index b75487a05d..52226a42d8 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -53,6 +53,14 @@ kind of development environment.
    hypervisor-guide/index
 
 
+Hypercall Interfaces documentation
+----------------------------------
+
+.. toctree::
+   :maxdepth: 2
+
+   hypercall-interfaces/index
+
 Miscellanea
 -----------
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 04 09:46:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 09:46:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122054.230205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrdI-0003u9-AQ; Tue, 04 May 2021 09:46:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122054.230205; Tue, 04 May 2021 09:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrdI-0003u0-6K; Tue, 04 May 2021 09:46:24 +0000
Received: by outflank-mailman (input) for mailman id 122054;
 Tue, 04 May 2021 09:46:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldrdG-0003qd-ET
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 09:46:22 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 4de8c80b-d026-41cd-b2e4-72038d720b90;
 Tue, 04 May 2021 09:46:18 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 4AF871063;
 Tue,  4 May 2021 02:46:18 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 67ADD3F73B;
 Tue,  4 May 2021 02:46:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4de8c80b-d026-41cd-b2e4-72038d720b90
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 1/3] docs: add doxygen support for html documentation
Date: Tue,  4 May 2021 10:46:04 +0100
Message-Id: <20210504094606.7125-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210504094606.7125-1-luca.fancellu@arm.com>
References: <20210504094606.7125-1-luca.fancellu@arm.com>

Add Doxygen support to build the HTML documentation with Sphinx.
To do that, the following modifications are applied:

1) Modify docs/configure.ac (and consequently the generated
   configure script) to check for the presence of the sphinx-build
   binary on the system; if it is found, also check for the doxygen
   binary and the breathe and sphinx-rtd-theme Python packages.
   Doxygen and the packages are required only if sphinx-build is
   found; otherwise the Makefile will simply skip the Sphinx HTML
   generation.
   The ax_python_module.m4 support is needed to check for Python
   packages.
2) Add doxygen templates and configuration file
3) Modify docs/Makefile to call doxygen and sphinx-build in
   sequence; the Doxygen configuration file is modified to include
   the Xen absolute path and the list of headers to parse.
   A doxygen_input.h file is generated to include every header
   listed in the doxy_input.list file.
4) Add a preprocessor called by Doxygen before parsing headers; it
   includes in every header a doxygen_include.h file that provides
   missing defines and includes that are usually passed by the
   compiler.

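The idea behind point (4) can be sketched as follows. This is a
rough, hypothetical illustration of such a filter, not the actual
doxy-preprocessor.py shipped in this patch: before Doxygen parses a
header, the filter prepends an include of doxygen_include.h so that
defines normally supplied on the compiler command line are visible.

```python
import sys

# Hypothetical compatibility include, prepended to every filtered
# header so Doxygen sees defines the compiler would normally provide.
EXTRA_INCLUDE = '#include "doxygen_include.h"\n'

def preprocess(source: str) -> str:
    """Return the header text with the compatibility include prepended."""
    return EXTRA_INCLUDE + source

if __name__ == "__main__" and len(sys.argv) > 1:
    # Doxygen invokes an INPUT_FILTER with the header path as its
    # argument and reads the filtered text from stdout.
    with open(sys.argv[1]) as f:
        sys.stdout.write(preprocess(f.read()))
```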
Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v4 changes:
- create alias @keepindent/@endkeepindent for the doxygen
  command @code/@endcode
v3 changes:
- add preprocessor to handle missing defines and anonymous
  structs/unions before doxygen parsing
- modification to Makefile to handle the new process
v2 changes:
- Fix bug in Makefile when sphinx is not found in the system
---
 .gitignore                                   |    6 +
 config/Docs.mk.in                            |    2 +
 docs/Makefile                                |   42 +-
 docs/conf.py                                 |   48 +-
 docs/configure                               |  258 ++
 docs/configure.ac                            |   15 +
 docs/xen-doxygen/customdoxygen.css           |   36 +
 docs/xen-doxygen/doxy-preprocessor.py        |  110 +
 docs/xen-doxygen/doxy_input.list             |    0
 docs/xen-doxygen/doxygen_include.h.in        |   32 +
 docs/xen-doxygen/footer.html                 |   21 +
 docs/xen-doxygen/header.html                 |   56 +
 docs/xen-doxygen/mainpage.md                 |    5 +
 docs/xen-doxygen/xen_project_logo_165x67.png |  Bin 0 -> 18223 bytes
 docs/xen.doxyfile.in                         | 2316 ++++++++++++++++++
 m4/ax_python_module.m4                       |   56 +
 m4/docs_tool.m4                              |    9 +
 17 files changed, 3006 insertions(+), 6 deletions(-)
 create mode 100644 docs/xen-doxygen/customdoxygen.css
 create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
 create mode 100644 docs/xen-doxygen/doxy_input.list
 create mode 100644 docs/xen-doxygen/doxygen_include.h.in
 create mode 100644 docs/xen-doxygen/footer.html
 create mode 100644 docs/xen-doxygen/header.html
 create mode 100644 docs/xen-doxygen/mainpage.md
 create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png
 create mode 100644 docs/xen.doxyfile.in
 create mode 100644 m4/ax_python_module.m4

diff --git a/.gitignore b/.gitignore
index 1c2fa1530b..d271e0ce6a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -58,6 +58,12 @@ docs/man7/
 docs/man8/
 docs/pdf/
 docs/txt/
+docs/doxygen-output
+docs/sphinx
+docs/xen.doxyfile
+docs/xen.doxyfile.tmp
+docs/xen-doxygen/doxygen_include.h
+docs/xen-doxygen/doxygen_include.h.tmp
 extras/mini-os*
 install/*
 stubdom/*-minios-config.mk
diff --git a/config/Docs.mk.in b/config/Docs.mk.in
index e76e5cd5ff..dfd4a02838 100644
--- a/config/Docs.mk.in
+++ b/config/Docs.mk.in
@@ -7,3 +7,5 @@ POD2HTML            := @POD2HTML@
 POD2TEXT            := @POD2TEXT@
 PANDOC              := @PANDOC@
 PERL                := @PERL@
+SPHINXBUILD         := @SPHINXBUILD@
+DOXYGEN             := @DOXYGEN@
diff --git a/docs/Makefile b/docs/Makefile
index 8de1efb6f5..2f784c36ce 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -17,6 +17,18 @@ TXTSRC-y := $(sort $(shell find misc -name '*.txt' -print))
 
 PANDOCSRC-y := $(sort $(shell find designs/ features/ misc/ process/ specs/ \( -name '*.pandoc' -o -name '*.md' \) -print))
 
+# Directory in which the doxygen documentation is created
+# This must be kept in sync with breathe_projects value in conf.py
+DOXYGEN_OUTPUT = doxygen-output
+
+# Doxygen input headers from xen-doxygen/doxy_input.list file
+DOXY_LIST_SOURCES != cat "xen-doxygen/doxy_input.list"
+DOXY_LIST_SOURCES := $(realpath $(addprefix $(XEN_ROOT)/,$(DOXY_LIST_SOURCES)))
+
+DOXY_DEPS := xen.doxyfile \
+			 xen-doxygen/mainpage.md \
+			 xen-doxygen/doxygen_include.h
+
 # Documentation targets
 $(foreach i,$(MAN_SECTIONS), \
   $(eval DOC_MAN$(i) := $(patsubst man/%.$(i),man$(i)/%.$(i), \
@@ -46,8 +58,28 @@ all: build
 build: html txt pdf man-pages figs
 
 .PHONY: sphinx-html
-sphinx-html:
-	sphinx-build -b html . sphinx/html
+sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
+ifneq ($(SPHINXBUILD),no)
+	$(DOXYGEN) xen.doxyfile
+	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
+else
+	@echo "Sphinx is not installed; skipping sphinx-html documentation."
+endif
+
+xen.doxyfile: xen.doxyfile.in xen-doxygen/doxy_input.list
+	@echo "Generating $@"
+	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< \
+		| sed -e "s,@DOXY_OUT@,$(DOXYGEN_OUTPUT),g" > $@.tmp
+	@$(foreach inc,\
+		$(DOXY_LIST_SOURCES),\
+		echo "INPUT += \"$(inc)\"" >> $@.tmp; \
+	)
+	mv $@.tmp $@
+
+xen-doxygen/doxygen_include.h: xen-doxygen/doxygen_include.h.in
+	@echo "Generating $@"
+	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< > $@.tmp
+	@mv $@.tmp $@
 
 .PHONY: html
 html: $(DOC_HTML) html/index.html
@@ -71,7 +103,11 @@ clean: clean-man-pages
 	$(MAKE) -C figs clean
 	rm -rf .word_count *.aux *.dvi *.bbl *.blg *.glo *.idx *~
 	rm -rf *.ilg *.log *.ind *.toc *.bak *.tmp core
-	rm -rf html txt pdf sphinx/html
+	rm -rf html txt pdf sphinx $(DOXYGEN_OUTPUT)
+	rm -f xen.doxyfile
+	rm -f xen.doxyfile.tmp
+	rm -f xen-doxygen/doxygen_include.h
+	rm -f xen-doxygen/doxygen_include.h.tmp
 
 .PHONY: distclean
 distclean: clean
diff --git a/docs/conf.py b/docs/conf.py
index 50e41501db..a48de42331 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -13,13 +13,17 @@
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
 #
-# import os
-# import sys
+import os
+import sys
 # sys.path.insert(0, os.path.abspath('.'))
 
 
 # -- Project information -----------------------------------------------------
 
+if "XEN_ROOT" not in os.environ:
+    sys.exit("$XEN_ROOT environment variable undefined.")
+XEN_ROOT = os.path.abspath(os.environ["XEN_ROOT"])
+
 project = u'Xen'
 copyright = u'2019, The Xen development community'
 author = u'The Xen development community'
@@ -35,6 +39,7 @@ try:
             xen_subver = line.split(u"=")[1].strip()
         elif line.startswith(u"export XEN_EXTRAVERSION"):
             xen_extra = line.split(u"=")[1].split(u"$", 1)[0].strip()
+
 except:
     pass
 finally:
@@ -44,6 +49,15 @@ finally:
     else:
         version = release = u"unknown version"
 
+try:
+    xen_doxygen_output = None
+
+    for line in open(u"Makefile"):
+        if line.startswith(u"DOXYGEN_OUTPUT"):
+                xen_doxygen_output = line.split(u"=")[1].strip()
+except:
+    sys.exit("DOXYGEN_OUTPUT variable undefined.")
+
 # -- General configuration ---------------------------------------------------
 
 # If your documentation needs a minimal Sphinx version, state it here.
@@ -53,7 +67,8 @@ needs_sphinx = '1.4'
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-extensions = []
+# breathe -> extension that integrates doxygen xml output with sphinx
+extensions = ['breathe']
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
@@ -175,6 +190,33 @@ texinfo_documents = [
      'Miscellaneous'),
 ]
 
+# -- Options for Breathe extension -------------------------------------------
+
+breathe_projects = {
+    "Xen": "{}/docs/{}/xml".format(XEN_ROOT, xen_doxygen_output)
+}
+breathe_default_project = "Xen"
+
+breathe_domain_by_extension = {
+    "h": "c",
+    "c": "c",
+}
+breathe_separate_member_pages = True
+breathe_show_enumvalue_initializer = True
+breathe_show_define_initializer = True
+
+# Qualifiers to a function are causing Sphinx/Breathe to warn about
+# Error when parsing function declaration and more.  This is a list
+# of strings that the parser additionally should accept as
+# attributes.
+cpp_id_attributes = [
+    '__syscall', '__deprecated', '__may_alias',
+    '__used', '__unused', '__weak',
+    '__DEPRECATED_MACRO', 'FUNC_NORETURN',
+    '__subsystem',
+]
+c_id_attributes = cpp_id_attributes
+
 
 # -- Options for Epub output -------------------------------------------------
 
diff --git a/docs/configure b/docs/configure
index 569bd4c2ff..0ebf046a79 100755
--- a/docs/configure
+++ b/docs/configure
@@ -588,6 +588,8 @@ ac_unique_file="misc/xen-command-line.pandoc"
 ac_subst_vars='LTLIBOBJS
 LIBOBJS
 PERL
+DOXYGEN
+SPHINXBUILD
 PANDOC
 POD2TEXT
 POD2HTML
@@ -673,6 +675,7 @@ POD2MAN
 POD2HTML
 POD2TEXT
 PANDOC
+DOXYGEN
 PERL'
 
 
@@ -1318,6 +1321,7 @@ Some influential environment variables:
   POD2HTML    Path to pod2html tool
   POD2TEXT    Path to pod2text tool
   PANDOC      Path to pandoc tool
+  DOXYGEN     Path to doxygen tool
   PERL        Path to Perl parser
 
 Use these variables to override the choices made by `configure' or to help
@@ -1800,6 +1804,7 @@ ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.
 
 
 
+
 case "$host_os" in
 *freebsd*) XENSTORED_KVA=/dev/xen/xenstored ;;
 *) XENSTORED_KVA=/proc/xen/xsd_kva ;;
@@ -1812,6 +1817,53 @@ case "$host_os" in
 esac
 
 
+# ===========================================================================
+#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PYTHON_MODULE(modname[, fatal, python])
+#
+# DESCRIPTION
+#
+#   Checks for Python module.
+#
+#   If fatal is non-empty then absence of a module will trigger an error.
+#   The third parameter can either be "python" for Python 2 or "python3" for
+#   Python 3; defaults to Python 3.
+#
+# LICENSE
+#
+#   Copyright (c) 2008 Andrew Collier
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 9
+
+# This is what autoupdate's m4 run will expand.  It fires
+# the warning (with _au_warn_XXX), outputs it into the
+# updated configure.ac (with AC_DIAGNOSE), and then outputs
+# the replacement expansion.
+
+
+# This is an auxiliary macro that is also run when
+# autoupdate runs m4.  It simply calls m4_warning, but
+# we need a wrapper so that each warning is emitted only
+# once.  We break the quoting in m4_warning's argument in
+# order to expand this macro's arguments, not AU_DEFUN's.
+
+
+# Finally, this is the expansion that is picked up by
+# autoconf.  It tells the user to run autoupdate, and
+# then outputs the replacement expansion.  We do not care
+# about autoupdate's warning because that contains
+# information on what to do *after* running autoupdate.
+
+
 
 
 test "x$prefix" = "xNONE" && prefix=$ac_default_prefix
@@ -2232,6 +2284,212 @@ $as_echo "$as_me: WARNING: pandoc is not available so some documentation won't b
 fi
 
 
+# If sphinx-build is installed, also make sure the dependencies needed to
+# build the Sphinx documentation are available.
+for ac_prog in sphinx-build
+do
+  # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_prog_SPHINXBUILD+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test -n "$SPHINXBUILD"; then
+  ac_cv_prog_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_prog_SPHINXBUILD="$ac_prog"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+fi
+fi
+SPHINXBUILD=$ac_cv_prog_SPHINXBUILD
+if test -n "$SPHINXBUILD"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
+$as_echo "$SPHINXBUILD" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+  test -n "$SPHINXBUILD" && break
+done
+test -n "$SPHINXBUILD" || SPHINXBUILD="no"
+
+    if test "x$SPHINXBUILD" = xno; then :
+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: sphinx-build is not available so sphinx documentation \
+won't be built" >&5
+$as_echo "$as_me: WARNING: sphinx-build is not available so sphinx documentation \
+won't be built" >&2;}
+else
+
+            # Extract the first word of "sphinx-build", so it can be a program name with args.
+set dummy sphinx-build; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_SPHINXBUILD+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $SPHINXBUILD in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_path_SPHINXBUILD="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  ;;
+esac
+fi
+SPHINXBUILD=$ac_cv_path_SPHINXBUILD
+if test -n "$SPHINXBUILD"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
+$as_echo "$SPHINXBUILD" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+
+
+    # Extract the first word of "doxygen", so it can be a program name with args.
+set dummy doxygen; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_DOXYGEN+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $DOXYGEN in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_DOXYGEN="$DOXYGEN" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_path_DOXYGEN="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  ;;
+esac
+fi
+DOXYGEN=$ac_cv_path_DOXYGEN
+if test -n "$DOXYGEN"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DOXYGEN" >&5
+$as_echo "$DOXYGEN" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+    if ! test -x "$ac_cv_path_DOXYGEN"; then :
+
+        as_fn_error $? "doxygen is needed" "$LINENO" 5
+
+fi
+
+
+    if test -z $PYTHON;
+    then
+        if test -z "";
+        then
+            PYTHON="python3"
+        else
+            PYTHON=""
+        fi
+    fi
+    PYTHON_NAME=`basename $PYTHON`
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: breathe" >&5
+$as_echo_n "checking $PYTHON_NAME module: breathe... " >&6; }
+    $PYTHON -c "import breathe" 2>/dev/null
+    if test $? -eq 0;
+    then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+        eval HAVE_PYMOD_BREATHE=yes
+    else
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+        eval HAVE_PYMOD_BREATHE=no
+        #
+        if test -n "yes"
+        then
+            as_fn_error $? "failed to find required module breathe" "$LINENO" 5
+            exit 1
+        fi
+    fi
+
+
+    if test -z $PYTHON;
+    then
+        if test -z "";
+        then
+            PYTHON="python3"
+        else
+            PYTHON=""
+        fi
+    fi
+    PYTHON_NAME=`basename $PYTHON`
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: sphinx_rtd_theme" >&5
+$as_echo_n "checking $PYTHON_NAME module: sphinx_rtd_theme... " >&6; }
+    $PYTHON -c "import sphinx_rtd_theme" 2>/dev/null
+    if test $? -eq 0;
+    then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+        eval HAVE_PYMOD_SPHINX_RTD_THEME=yes
+    else
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+        eval HAVE_PYMOD_SPHINX_RTD_THEME=no
+        #
+        if test -n "yes"
+        then
+            as_fn_error $? "failed to find required module sphinx_rtd_theme" "$LINENO" 5
+            exit 1
+        fi
+    fi
+
+
+
+fi
+
 
 # Extract the first word of "perl", so it can be a program name with args.
 set dummy perl; ac_word=$2
diff --git a/docs/configure.ac b/docs/configure.ac
index c2e5edd3b3..a2ff55f30a 100644
--- a/docs/configure.ac
+++ b/docs/configure.ac
@@ -20,6 +20,7 @@ m4_include([../m4/docs_tool.m4])
 m4_include([../m4/path_or_fail.m4])
 m4_include([../m4/features.m4])
 m4_include([../m4/paths.m4])
+m4_include([../m4/ax_python_module.m4])
 
 AX_XEN_EXPAND_CONFIG()
 
@@ -29,6 +30,20 @@ AX_DOCS_TOOL_PROG([POD2HTML], [pod2html])
 AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
 AX_DOCS_TOOL_PROG([PANDOC], [pandoc])
 
+# If sphinx-build is installed, also make sure the dependencies needed to
+# build the Sphinx documentation are available.
+AC_CHECK_PROGS([SPHINXBUILD], [sphinx-build], [no])
+    AS_IF([test "x$SPHINXBUILD" = xno],
+        [AC_MSG_WARN(sphinx-build is not available so sphinx documentation \
+won't be built)],
+        [
+            AC_PATH_PROG([SPHINXBUILD], [sphinx-build])
+            AX_DOCS_TOOL_REQ_PROG([DOXYGEN], [doxygen])
+            AX_PYTHON_MODULE([breathe],[yes])
+            AX_PYTHON_MODULE([sphinx_rtd_theme], [yes])
+        ]
+    )
+
 AC_ARG_VAR([PERL], [Path to Perl parser])
 AX_PATH_PROG_OR_FAIL([PERL], [perl])
 
diff --git a/docs/xen-doxygen/customdoxygen.css b/docs/xen-doxygen/customdoxygen.css
new file mode 100644
index 0000000000..4735e41cf5
--- /dev/null
+++ b/docs/xen-doxygen/customdoxygen.css
@@ -0,0 +1,36 @@
+/* Custom CSS for Doxygen-generated HTML
+ * Copyright (c) 2015 Intel Corporation
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+code {
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  background-color: #D8D8D8;
+  padding: 0 0.25em 0 0.25em;
+}
+
+pre.fragment {
+  display: block;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  padding: 1rem;
+  word-break: break-all;
+  word-wrap: break-word;
+  white-space: pre;
+  background-color: #D8D8D8;
+}
+
+#projectlogo
+{
+  vertical-align: middle;
+}
+
+#projectname
+{
+  font: 200% Tahoma, Arial,sans-serif;
+  color: #3D578C;
+}
+
+#projectbrief
+{
+  color: #3D578C;
+}
diff --git a/docs/xen-doxygen/doxy-preprocessor.py b/docs/xen-doxygen/doxy-preprocessor.py
new file mode 100755
index 0000000000..496899d8e6
--- /dev/null
+++ b/docs/xen-doxygen/doxy-preprocessor.py
@@ -0,0 +1,110 @@
+#!/usr/bin/python3
+#
+# Copyright (c) 2021, Arm Limited.
+#
+# SPDX-License-Identifier: GPL-2.0
+#
+
+import os, sys, re
+
+
+# Variables holding the preprocessed header text and its file name
+output_text = ""
+header_file_name = ""
+
+# Variables to enumerate the anonymous structs/unions
+anonymous_struct_count = 0
+anonymous_union_count = 0
+
+
+def error(text):
+    sys.stderr.write("{}\n".format(text))
+    sys.exit(1)
+
+
+def write_to_output(text):
+    sys.stdout.write(text)
+
+
+def insert_doxygen_header(text):
+    # The strategy here is to insert #include "doxygen_include.h" as the
+    # first line of the header
+    abspath = os.path.dirname(os.path.abspath(__file__))
+    text += "#include \"{}/doxygen_include.h\"\n".format(abspath)
+
+    return text
+
+
+def enumerate_anonymous(match):
+    global anonymous_struct_count
+    global anonymous_union_count
+
+    if "struct" in match.group(1):
+        label = "anonymous_struct_%d" % anonymous_struct_count
+        anonymous_struct_count += 1
+    else:
+        label = "anonymous_union_%d" % anonymous_union_count
+        anonymous_union_count += 1
+
+    return match.group(1) + " " + label + " {"
+
+
+def manage_anonymous_structs_unions(text):
+    # Match anonymous unions/structs with this pattern:
+    # struct/union {
+    #     [...]
+    #
+    # and substitute it in this way:
+    #
+    # struct anonymous_struct_# {
+    #     [...]
+    # or
+    # union anonymous_union_# {
+    #     [...]
+    # where # is a counter starting from zero, different between structs and
+    # unions
+    #
+    # We don't rename anonymous unions/structs that are part of a typedef
+    # because they don't create any issue for Doxygen
+    text = re.sub(
+        "(?<!typedef\s)(struct|union)\s+?\{",
+        enumerate_anonymous,
+        text,
+        flags=re.S
+    )
+
+    return text
+
+
+def main(argv):
+    global output_text
+    global header_file_name
+
+    if len(argv) != 1:
+        error("Script called without arguments!")
+
+    header_file_name = argv[0]
+
+    # Open the header file and read all of its lines; the context manager
+    # closes the file afterwards
+    with open(header_file_name, 'r') as input_header_file:
+        lines = input_header_file.readlines()
+
+    # Inject config.h and some defines in the current header, during compilation
+    # this job is done by the -include argument passed to the compiler.
+    output_text = insert_doxygen_header(output_text)
+
+    # Load file content in a variable
+    for line in lines:
+        output_text += "{}".format(line)
+
+    # Try to get rid of any anonymous union/struct
+    output_text = manage_anonymous_structs_unions(output_text)
+
+    # Final stage of the preprocessor, print the output to stdout
+    write_to_output(output_text)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
+    sys.exit(0)
diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/xen-doxygen/doxygen_include.h.in b/docs/xen-doxygen/doxygen_include.h.in
new file mode 100644
index 0000000000..df284f3931
--- /dev/null
+++ b/docs/xen-doxygen/doxygen_include.h.in
@@ -0,0 +1,32 @@
+/*
+ * Doxygen include header
+ * It supplies the xen/include/xen/config.h that is included using the -include
+ * argument of the compiler in Xen Makefile.
+ * Other macros are defined because they are usually provided by the compiler.
+ *
+ * Copyright (C) 2021 ARM Limited
+ *
+ * Author: Luca Fancellu <luca.fancellu@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#include "@XEN_BASE@/xen/include/xen/config.h"
+
+#if defined(CONFIG_X86_64)
+
+#define __x86_64__ 1
+
+#elif defined(CONFIG_ARM_64)
+
+#define __aarch64__ 1
+
+#elif defined(CONFIG_ARM_32)
+
+#define __arm__ 1
+
+#else
+
+#error Architecture not supported/recognized.
+
+#endif
diff --git a/docs/xen-doxygen/footer.html b/docs/xen-doxygen/footer.html
new file mode 100644
index 0000000000..a24bf2b9b4
--- /dev/null
+++ b/docs/xen-doxygen/footer.html
@@ -0,0 +1,21 @@
+<!-- HTML footer for doxygen 1.8.13-->
+<!-- start footer part -->
+<!--BEGIN GENERATE_TREEVIEW-->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    $navpath
+    <li class="footer">$generatedby
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="$relpath^doxygen.png" alt="doxygen"/></a> $doxygenversion </li>
+  </ul>
+</div>
+<!--END GENERATE_TREEVIEW-->
+<!--BEGIN !GENERATE_TREEVIEW-->
+<hr class="footer"/><address class="footer"><small>
+$generatedby &#160;<a href="http://www.doxygen.org/index.html">
+<img class="footer" src="$relpath^doxygen.png" alt="doxygen"/>
+</a> $doxygenversion
+</small></address>
+<!--END !GENERATE_TREEVIEW-->
+</body>
+</html>
diff --git a/docs/xen-doxygen/header.html b/docs/xen-doxygen/header.html
new file mode 100644
index 0000000000..83ac2f1835
--- /dev/null
+++ b/docs/xen-doxygen/header.html
@@ -0,0 +1,56 @@
+<!-- HTML header for doxygen 1.8.13-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen $doxygenversion"/>
+<meta name="viewport" content="width=device-width, initial-scale=1"/>
+<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
+<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
+<link href="$relpath^tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="$relpath^jquery.js"></script>
+<script type="text/javascript" src="$relpath^dynsections.js"></script>
+$treeview
+$search
+$mathjax
+<link href="$relpath^$stylesheet" rel="stylesheet" type="text/css" />
+$extrastylesheet
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+
+<!--BEGIN TITLEAREA-->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <!--BEGIN PROJECT_LOGO-->
+  <td id="projectlogo"><img alt="Logo" src="$relpath^$projectlogo"/></td>
+  <!--END PROJECT_LOGO-->
+  <!--BEGIN PROJECT_NAME-->
+  <td id="projectalign" style="padding-left: 1em;">
+   <div id="projectname">$projectname
+   <!--BEGIN PROJECT_NUMBER-->&#160;<span id="projectnumber">$projectnumber</span><!--END PROJECT_NUMBER-->
+   </div>
+   <!--BEGIN PROJECT_BRIEF--><div id="projectbrief">$projectbrief</div><!--END PROJECT_BRIEF-->
+  </td>
+  <!--END PROJECT_NAME-->
+  <!--BEGIN !PROJECT_NAME-->
+   <!--BEGIN PROJECT_BRIEF-->
+    <td style="padding-left: 0.5em;">
+    <div id="projectbrief">$projectbrief</div>
+    </td>
+   <!--END PROJECT_BRIEF-->
+  <!--END !PROJECT_NAME-->
+  <!--BEGIN DISABLE_INDEX-->
+   <!--BEGIN SEARCHENGINE-->
+   <td>$searchbox</td>
+   <!--END SEARCHENGINE-->
+  <!--END DISABLE_INDEX-->
+ </tr>
+ </tbody>
+</table>
+</div>
+<!--END TITLEAREA-->
+<!-- end header part -->
diff --git a/docs/xen-doxygen/mainpage.md b/docs/xen-doxygen/mainpage.md
new file mode 100644
index 0000000000..ff548b87fc
--- /dev/null
+++ b/docs/xen-doxygen/mainpage.md
@@ -0,0 +1,5 @@
+# API Documentation   {#index}
+
+## Introduction
+
+## Licensing
diff --git a/docs/xen-doxygen/xen_project_logo_165x67.png b/docs/xen-doxygen/xen_project_logo_165x67.png
new file mode 100644
index 0000000000000000000000000000000000000000..7244959d59cdeb9f23c5202160ea45508dfc7265
GIT binary patch
literal 18223
zcmV+NKn=f%P)<h;3K|Lk000e1NJLTq005-`002V>1^@s6{Wir#00004XF*Lt006O$
zeEU(80000WV@Og>004&%004{+008|`004nN004b?008NW002DY000@xb3BE2000U(
zX+uL$P-t&-Z*ypGa3D!TLm+T+Z)Rz1WdHz3$DNjUR8-d%htIutdZEoQ0#b(FyTAa_
zdy`&8VVD_UC<6{NG_fI~0ue<-nj%P0#DLLIBvwSR5EN9f2P6n6F&ITuEN@2Ei>|D^
z_ww@l<E(G(v-i3C?7h!g7XXr{FPE1FO97C|6YzsPoaqsfQFQD8fB_z0fGGe>Rz|vC
zuzLs)$;-`!o*{AqUjza0dRV*yaMRE;fKCVhpQKsoe1Yhg01=zBIT<Vw7l=3|OOP(M
z&x)8Dmn>!&C1$=TK@rP|Ibo3vKKm@PqnO#LJhq6%Ij6Hz*<$V$@wQAMN5qJ)hzm2h
zoGcOF60t^#FqJFfH{#e-4l@G)6iI9sa9D{VHW4w29}?su;^hF~NC{tY+*d5%WDCTX
za!E_i;d2ub1#}&jF5T4HnnCyEWTkKf0>c0%E1Ah>(_PY1)0w;+02c53Su*0<(nUqK
zG_|(0G&D0Z{i;y^b@OjZ+}lNZ8Th$p5Uu}<?XUdO8USF-iE6X+i!H7SfX*!d$ld#5
z(>MTtq^NHl*T1?CO*}7&0ztZsv2j*bmJyf3G7=Z`5B*PvzoD<bXCyxEkMhu6Iq^(k
zihwSz8!Ig(O~|Kbq%&C@y5XOP_#X%Ubsh#moOlkO!xKe>iKdLpOAxi2$L0#SX*@cY
z_n(^h55xYX#km%V()bZjV~l{*bt*u9?FT3d5g^g~#a;iSZ@&02Abxq_DwB(I|L-^b
zXThc7C4-yrInE_0gw7K3GZ**7&k~>k0Z0NWkO#^@9q0f<U<Ry!EpP;Gz#I635D*Dg
z0~SaGseli%Kpxlx3PCa03HE?$PzM@8GiU|JK_@r`&Vx(f8n^*&gZp3<On_%#7Q6-v
z5CmZ%GDLyoAr(jy(ud3-24oMpLB3EB6bZ#b2@nqwLV3_;s2D1Ps-b$Q8TuYN37v<o
zK!ea-XbhT$euv({2uy;huoA2V8^a9P3HE_Q;8kz}yavvN3*a4aCENfXg*)K$@HO~0
zJPJR9=MaDp5gMY37$OYB1@T9ska&cTtVfEF3ZwyPMY@qb<R&tT%ph-37!(CXM;W4Q
zQJ$z!6brQmwH{T1szx0~b)b4tH&J7#S=2`~8Lf!cN86yi&=KeabQZc0U4d>wx1%qj
zZ=)yBuQ3=54Wo^*!gyjLF-e%Um=erBOdIALW)L%unZshS@>qSW9o8Sq#0s#5*edK%
z>{;v(b^`kbN5rY%%y90wC>#%$kE_5P!JWYk;U;klcqzOl-UjcFXXA75rT9jCH~u<)
z0>40zCTJ7v2qA<d!X`o`p_Oov@PP1=NF=Het%-p|E^#BVl6Z`GnK(v#OOhe!kz7d8
zBq3=B=@980=`QIdnM~FqJCdWw0`d-WGx-Af5&4Y-MZ!qJOM)%2L83;YLt;qcxg=gv
zQ_@LtwPdbjh2#mz>yk54cquI@7b&LHdZ`+zlTss6bJ7%PQ)z$cROu4wBhpu-r)01)
zS~6}jY?%U?gEALn#wiFzo#H}aQ8rT=DHkadR18&{>P1bW7E`~Y4p3)hWn`DhhRJ5j
z*2tcg9i<^OEt(fCg;q*CP8+7ZTcWhYX$fb^_9d-LhL+6BEtPYW<H!}swaML<dnZqq
zcau++-zDEE|4;#?pr;V1kfpF+;iAIKQtDFMrL3hzOOG$TrwA+RDF!L7RXnKJuQ;cq
ztmL7Tu2iLTL1{*rrtGMkq+G6iMtNF=qGGSYRVi0FtMZgCOLwBD&@1V^^jTF!RZmr+
zYQ5@!>VlfKTBusSTASKKb%HuWJzl+By+?gkLq)?+BTu76<DMp7lcAZYxmUAKb6!hZ
zD_m=<R;SjKww$(?cCL1d_5&TVj)Tq`od%s-x)@!CZnEw^-5Ywao`qhbUX9*$eOTX8
zpR2!5f6xGJU~RxNXfPNtBpEsxW*W8_jv3L6e2wyrI*pziYZylv?=tQ){%B%hl48<m
za^F<O)Y~-QwA=J|Gd(kwS&i8(bF#U+`3CbY^B2qXmvNTuUv|fWV&P}8)uPAZgQb-v
z-?G(m+DgMJ)~eQOgh6ElFiIGgt<l!b)*Gx(S--Whv=P`GxB1Q1&^Foji0#yJ?d6>1
zjmyXF)a;mc^>(B7bo*HQ1NNg1st!zt28YLv>W*y3CdWx9U8f|cqfXDAO`Q48?auQq
zHZJR2&bcD49<D{M18y>Ip>EY~kKEPV6Wm+eXFV)D)_R=tM0@&p?(!V*Qu1PXHG9o^
zTY0bZ?)4%01p8F`JoeS|<@<K~!G7L;yZs)l&|JY=(diHTz5I9kKMc?gSQGGLASN&%
zuqN<HkZDj}P+u@5I41Z=@aqugkkXL*p*o?$(4H{Ku;{Snu=#M;@UrmH2;+!#5!WIW
zBDs-WQP`-ksHUj7m2NBdtel9ph%SsCUZuS%d)1ZI3ae9ApN^4?VaA+@MaPE69*KR=
z^k+6O=i<ELYU5^EF08$*XKY7yIeVI8$0_4X#@of0#ZM*JCG1X^PIO4DNSxuiaI3j5
zl01{@lID~BlMf|-N(oPCOU0$erk>=<@RE7GY07EYX@lwd>4oW|Yi!o+Su@M`;WuSK
z8LKk71XR(_RKHM1xJ5XYX`fk>`6eqY>qNG6HZQwBM=xi4&Sb88?zd}EYguc1@>KIS
z<&CX#T35dwS|7K*XM_5Nf(;WJJvJWRMA($P>8E^?{IdL4o5MGE7bq2MEEwP7v8AO@
zqL5!WvekBL-8R%V?zVyL=G&{be=K4bT`e{#t|)$A!YaA?jp;X)-+bB;zhj`(vULAW
z%ue3U;av{94wp%n<(7@__S@Z2PA@Mif3+uO&y|X06?J<Fdxd*PD}5`wsx+#0R=uxI
ztiE02T+>#oSi8M;ejj_^(0<4Lt#wLu#dYrva1Y$6_o(k^&}yhSh&h;f@JVA>W8b%o
zZ=0JGnu?n~9O4}sJsfnnx7n(>`H13?(iXTy*fM=I`sj`CT)*pTHEgYKqqP+u1IL8N
zo_-(u{qS+0<2@%BCt82d{Gqm;(q7a7b>wu+b|!X?c13m#p7cK1({0<`{-e>4hfb-U
zsyQuty7Ua;Ou?B?XLHZaol8GAb3Wnxcu!2v{R<HnZuJKC4qWuPc=?k1r3-ydeP=J*
zT|RZi=E}*djH{j3EU$I+TlBa8Wbsq`faO5Pb*t-LH>_`T4=x`(GvqLI{-*2AOSimk
zUAw*F_TX^n@STz9k<mNsJ5zU4?!LH}d2iwV#s}yJMGvJORy<OC)bO+J&uycYqo>DQ
z$NC=!KfXWC8h`dn#xL(D3Z9UkR7|Q&Hcy#Notk!^zVUSB(}`#4&lYA1f0h2V_PNgU
zAAWQEt$#LRcH#y9#i!p(Udq2b^lI6wp1FXzN3T;~FU%Lck$-deE#qz9yYP3D3t8{6
z?<+s(e(3(_^YOu_)K8!O1p}D#{JO;G(*OVf32;bRa{vGf4*&oQ4*`<-1El}}02*{f
zSaefwW^{L9a%BKeVQFr3E>1;MAa*k@H7+qQF!XYv002BXNkl<ZcwX&&2Ur%z_P#e7
z6Qi+fqA~V@E%vU7y^D&10wPTTDJm);MHEC76-DVCL_kmw?7jDbz4s2N*c<Kq-*@37
z#Au5Dn|t;CJm2#^yWj4#-8pmSoS8GTMLrU$3Dg1V0S)qxwSgKyRp2vyrhkMg0%Wkd
zKntJ?&<p4b^aln21A#&LNB-{z@P1FA6VMc>1$+-w06x=a`rA|vr~**(wFP<rWHvJ1
z+fW~4G{)LwjHRuWrIk&K7A;2c+FM}=GH^GbB|rwP43q&r(`WiaDhp7WH3WVE3K+3O
zi4q#qr%!j?xN+0UGiS~ozkBEIovf@Z$;`}>#~E32=hh>6`k4PSh1Z`vdHU@7wd+^*
z?%lU7ARy4EXV0EvkdBI3DMdQ~?D{JKrGd}%nSMu<T$GGI14?&XzI^%No}LTlpE-Rt
z<;|PSY>>im&!4}Ld-qc1+`03zf8TytnYc<qg2QFa>h*H?@DaIi{(_{Yrpb#JFO=|E
zS&WyBYw34aC9jU}(WB>Bq)!HAH&01S-IQv=XXgA&3Y7=gowf(q#gZ8{6ILWFefi?m
zi=3QX0Yl2ehmXK)mt@z@J(7@+D3Oto;^X5hvuDqiY15{Ot*xy<lFGb!^CU1ZP-3EE
zWYwxwvTxr3xpe7@JbLs*NhdoyM{=@r1&n@V`0(LY$dAm~2WSQS2vBwS2KY?>M~Pi0
zyJ{LP>{f?_hJ^ZOJbd&pCtJWoSqd}Vx^-JFT(}^|jvWJ&?UVKE*GpVnoP>mg$btn6
z#MRYRoSd9w)~s2wXwf1G4GmT9uUofHcJACMM~)m(q$;H=r7RgUH%EZnoC60AZ12*g
zi>hm<?n*13#!yM%GyNYTc9XQIX-!i)4t92);db|K>g{Yuu}m=I^Jg!?kdGxJ;}N9f
zLon1mxpL)-!eC@hV)N$Bl9-q$HOYuPMn^}>#*G_g|Ni}Q>eMMoNlB3tCr-%Kt5+pG
zJzbtXd!}^hxw%q6Z$O(iZAz|MwdzQeh5BY=fDPs|WBwl@TD;W(4%JY19GaB4CNc9(
zj=X-IDNmm~lSdhk!IaMxqa_#IL*0-Jb?Ve<*}8SBczJor`0?XKPft$<4<0PNdi4@W
zJL%rNy9^jGKr}Tq#n8}Drc9Y4OO`B=-Me?o_3Jla{5+9YuU<(`4#ec!1SY+E_wLO;
zefn6SOl&C4fbW1(z-Rg&CQ3*e6|}6?t5m6?bNJBllxI(0%Hzk+APv*xe)@fMvCkDg
zfdG@+w{I)bjuJ6EJY2lJy~V}FMXar@MPFZEh7KJny?ghTK7IO1zkdBiU0q#9j2Iy%
zCMF6~y1BW@;>C-VxLdYtQKT&;<=e#WoV@zv@zZCMCQX`w@=={=18_9pGh_ab(zgI5
zq{5KhyY;lVaQ^C@jEtv}mYObCuUu2QXi7yg<;|Nnm9Co1xMIZ$S+;DM!dQO3{^IB`
zJ=Z|rI9o?&RF<xe?kgiBqvvL3X3xNg&kPI<UXB_yI!jN_AZv`VY4)sH9=V~RVM@22
zl$0bJHf&JzQb@&ob|f?AD2z&7Q&AsYXJ>~5hlZe>LjW=+M+QE3<^N;E3jG1-45({q
zYTI49c;i`m+5@?M?WUYPdrofL$m?Ej-MXbPV@ynp0vap{2?^rk?U!w3W&Og`)HD@F
z&FO^;7j6#-2uO&ChzO63ja{^S`SN+d0x)kdbj!G)prDN~dX6|dJ6{LGKD4#5e-#(E
zJcs%w^hZVSgps4D1srN(xBl|QOQ;ZU6f6Dps~lOgYQt)jmyF2)cMchSw#xs9h(-e?
z&Y*T}J6N1Je(uT58@J`mnR9aZ@L@TB{=6a?j~+cLU@Qp^4i+$*BHe-lgR;#`&F+B_
zkDwcl0mIEmPEOX}yLayZ#Os!kk<kgdXFCYIHbC2#FJE?q6#V(@*|WV49z3WC2AqI2
zef|CYx7yp=UqaXXaMh|+xog+16{I68SFThHB1&ka1tz@@!zx3bK79e*_N4^+M?|CC
zg8>>p94`7A_)MQT(Xin#OaH1>f6(8yWpC=GOIM*M9#@8Is4tQ!XpH#!`YQULpP!!u
z2ZiRKJ5B{7?E^zCTC--2{^`@FyMcLHg83Q(wSj5?8U8nvf4vsa0B8mY+#Y!hg>;+_
zD}4iW%?m&V8Ip{z=$o6j$b$zD6dm^Z_3M%f{WedMr-`$Zn=g{3Rn8e8>cw9HpXn1N
zcH7h=y92`m1D700NjWLowr&@6xys-+CDzKsmCE3^gM)+2m@z}_Y#m>Y9z8k*U2iCu
z)Er%MA9UgEAw8QQ9Zp4l`>%if>r2qTaQ>Hw3<@dQ77}#mx^?T^U@&b4)1M0q4a??O
zCk-M>=rd=|$e}}rWbfX6N@F>C<hX3#y8m^H=B*r&3`Yytz-atXY8D2|#Rm8d%2pJ|
z&-9TJ2ccU7LoCfFM{U@!4NSORJUl$a#>Pe_PoAs{?a+EAIK0!P%P%i*epy>vdn*i>
zXhTE85HP48ENu@0hRkepb92YirAt#CC=15?f*Ji40%MlUXU+~MPMk1>L{8|^rOR18
zJ-y6f!-mPAL4#!M*s;ni5od>-ojF4^UuHPFilxQGr~Uf%OGda59UUDn#F?)u6Jcy@
z>}G9kJsD~SJ(EiQod)xn{&Per$?3zs)vMPu3k!*UwPVLlRQ3#6jBfh;wQJyy52wiJ
z=mSW*Go(_fzsmrShV{I>yn3svt1m<so{^vA2h$}OX%~TA2M*jEpsqd?S^TZW@|pf~
zL@7yWHr(EB=CKXyH^Ycop)e(-ViYXvOV_W-g{xQO<oR=wa_XcUIe9{k0z7{o&R)8x
zFhWjFPOi4D?oy=Rv}n<y#hKxs5t$c63%XI)u3cwdzI^#H(>j0U@;mb)SRR&(;D46&
z+~rG3*)QF=ro=sa`68wNm69b(+9K^1#flaC_A^s{e8h^U_jm7>E!xfSobMx>Hg1uH
z3l}PL)n;a9^1}~5$lm??<=*}KayLC)uEW5%1nGJ4>Q%W6L*tK-@${*}nAdLKe$}v1
zqp8T}M=-%3>T&r@LY=R}gb5R(pFMl_hG|@d)z5t2m5-`CJTlzPyLXlHrote)hce&7
z|C1+A=4feYokO~AKriI;1MoHQx%>Xeh)VxYfaVw@t204?q2li!AfBF{io*mQCu-HI
zC2DGFas?Hhp7Btgym+qQ;giSm;PE3c-V+6no;@vq>KgB>Xz$*=cae_<Q2u?4kk61D
z0Pi0X5^^v<s-!)9pybUwRe7oMC|nMf>*>oE%6(o-y`QdF>6<of%4*oK;Sr=uhU|}g
zY6Fy#pADUlj5v-*ukmgT)tXGRpX!pEup&`mM7otxTEb9~jvYJ7_uqdn0|yR#miFMj
zJVNEa%zPu9m8P%6`#@rtOwJ4D)BO4KRr=*Og&C9QUwrWeyY`yZt5+X;@ZiB4`BCH<
znCx|SmXa2!aQcPw%go7^7jIt4Q#P9C&*Wi7hB9Qdc=6&_Ft#=!Z5yZ$oskzMWC`GN
zxBU?k4IDb_4$&B*y~AUUrvwHD%gmWG6{hs_^^+bwdMM5l9ou(G{pzc)b~-ydKL$zm
zyBsC{0%c`o<!ESVa9U6e4Duxlk<Xkg%TF+9Jnr4Qm)GZi0OjQ7%Inv!mD!(j=g!d)
zBp2!K0<57wwgtXJe#P_i{7fGqad5c_D$3AgjMcM{kZ_qc%~|nz<X9jbDY|v*CPRm6
z<o4^=FQ8AKJ~I$DCN?(q1>MT%74#u=^XAPLwQJWNkG!jbIf~)PvBNHoj*f}1UcLIg
z2gY9{&Wm%lhjZn-cI}ep&6~>)RjZ3ygGO7?0Qvw`07Zv{RHl?<PeJvC9!4Ca<p>19
zbE*)fAkm9`3=GV2;6VMK<t0?(tsHCRz5f}aCwcpynzr(?w3s9zAz?COhKtgD)8&j3
z5|*zF)6lwUV`DSU-rl}<<;s=E0Lgpy?8zbnd?;gX+qUgG^5&3?T8R=RhQ-Cj9nZh@
zLBu(6N^nkr(pR%$#fs9qcOMxzKwZ=aX<b4-oPMP4Ot1B>fO>}x9V`tD4CYLlG$|PR
z^J=Io>j-rBt4vKzeKa*Soe^#rK!;+E;gVs?fU)1JhmwIooJFGl0P~_#3-ja3&UX3N
z=$l#?%>g=4Q<b7x@vr<mC^R@T)e#tkUc}bW(9pf`IPkDx!)QG6*|-{j2J0t1?yAx!
z`}gWGaNFcbcCfa+6jw8L-|p`2ij?G#P{)p)=c1y!g@uLHXbnOfGf1KRmoHz=Wmoy3
zj5%k{oWu3%)mwDp#EE<Pm;N>}Z@SIhhoyZDh8P%3%9k&%lsiw-_mukenq)R<(j;ug
zj2WxfuU~)W%9ShksYc`{@u!rUn)+nt&Yc&1eSJ4}?%a7Yo}Ua>jm-Bp1K>ZIsj8)=
zr6bCdcJ=Djbmm9w-o5)W^M4x~Hf%V(Wy_XhXc!)dOU7rtv_YB8{QdnmpE`Bw_RE(q
zl@W+{5uQIA@9-A%^_Aaz^9>j9{gAJpe{xg;;5wI)n#1(&I63Ccim1A7yi`R>jvA$x
zHDJI1&Ev<9*FSpnXc@fpvUp&&cKgAD2VWpF(82dZ2=Q`J;ji-l{%s;dqO!;|oR`m~
zWJkFaAI-jf`zn&2&vEkP$^01q9)hlV50aE~?37>?@J<R0CY)1BwuMf<H$V7aB0dXx
zr|pQV3kf}Y>(;I3$ZYwy|1#aUaU-LD|Nc%$rw)XzqO*TWL`hk_SkYoeqxDR(X1cnI
zOMchQp&(W&nR0}d_7z=S-A>RUE8&@GF;mB)YZzBDdZ0^BfAr{)(tZ9XiTWe;TDs4z
zS+gb!68B8Sij{KMZMSOG3JuIzVc7Q(nSg1qK`E|q2uq2}=lH9Vf8)lD=c-n%s*U$h
z1@A=Z(s86OD(CEfbprj1q@$yA7}EIU-;v_)A{dG<)YR0>Q4Sh)pVSBgC1sszKh$tC
zG%yp-`FV@FIG0R)l3h2^s#UvS3k!?-cvj9uE8P`9>y_(vxa>6$UHkK=PoFB4{C6l9
zznntDMSn&N`})mm1$15GaL<AT3!bCP+K6(@uT`_w6EY+nHB|H_^NkXo$IPru1!Ov^
z9jY#$R{GhqX9enrB6Z2245Run-|U=hB`(wAxr)x8Kc8NuN)>LZP#N!psvr#{i`%zv
zE8Q*Qs=#=Kk(HgLluJeErr+6$ScbxRzD=t8ET8fpWe*7nIfHU^LYY1(FDY7bYCkn?
z9WYAQ2s-2(MUSKd1_#c`kQ@#wS+dkHbVZf%tX~2uX+XxL70cxkTF~JVH*MaOdHe1i
zx&QD%0Ukce_#GZ(Je2eY_Y2Nd;Z$*W?d|{f(o*Hto!fHl#&x;M>CTH6<jmPKTw=k&
z_=u7vOLF<G6%2s~D(T3w+_-g9iFfDTT_ugQ`{@NB^KII+DO*EBBdtuCGDi`Accn^|
zQlP4&K~2diTozTCQ`6JrZt6WHO+rP>;^NLD{x48#xD=yq%T}$t8D5o!3aV#PL6s)&
zJ>moajw~C?e)IM%B@gDOdLM<;QlW79_>4^NHl7y^_6^FSU#wU$TIinyQKVB+Hfhtk
z)AG@JW5A4a6^7*aB$<=?*zn<+)X}=c$H$kzyZ=o$0EYYmD2azw!(Wp+b?ffhzI%7h
zNl2ZPQ>WxOq}2&XpLgNd>C;LWLUqogDh`jTG*X}s9yoeb_8vSaJNE36&D*z2d_qEY
z^A;^mBHy{lUlT|5zWw^epc~G8imvz|!XHdYkt4^C7o?-YiTq<+Lc*JGOPA*O<$S;b
z5>6jSOTY(Ab>Y<c^95!7UD;5kLrDMdv14-M?p^-RZP~gt*9y`4Z9i)C=xu2aA1dKh
z&si9S(<F|bIH8o8&$xZpF4?$kn?%LN<|M9KmCI|#5J#1_DsEw@(p05&6y-Z};eyny
zTh|rFSTza}l<T7>)fio~4(+<G)gL`Z=Fjs~7?Shd)GK@T>?wwZMjUwNFa}-8e)nK_
z*rEW3NI0xv)4fNJbK7?A0`u&ZUHkUS?)?Ye3*Ik`x9{!PvrjqSzI%^s*s@htt=}L?
zYu3t4_t~$?SE#TV`Po4-(f6}W^%^zo($dmiK6vs>wjlp)yLQXYy?OC@WLiA#IdD)x
z%E{BxxJi>kh&vhO{~1sNIPKU7X>t)-`1(zovsn(tD_lm^k!fy4UfcPs_^&-`)EdO&
z4jzM?oSm<vXFO5DsN((+Ht*Oe>o#qcgjK6^hiVLa`0cmfayoP?u(y4Oj;VYeRaoZz
zHh7)oRAJA-LvrZoF=^De@nWRG=lUe{N-`LS44QZ9(0$9OQAP?=a$1qz<Qx*>Qdl#y
zaoGq%@1gfT6dss{PX)9{F2i-#H!@D!unjMH%QjiNdFuyZ&897~X5(h%xN5^DiBC+D
z*cFMQJ6iuC^5Bw(v5=6h(HJ;<GGK<=tSdC(lh<uj!YgSY{^|`#14mVwYd39`Qx~tu
z`YqdEB3>de0pPlx-T=oUx%<XwAoA#$bJ^=QZ&T8E8_Xkl{YK?{!?qo=X~!-Zp`$~o
zOC8(9CouTQ-Xkf>{ld>x7zNJ=EKgo7A(2rzU?Qpk{(u#pPZ#fW9L}R$X1PCSS>BaS
zVJK+=49hZ=D_?#N8d;rBg(*p#1!&x{efKSTU`mfUbCo5r1SKWM`NoYKmxC}Il;zOE
zd%FNUGz}jP7LNc{dk2NQiC?uwmL;wFV8pIWmS`~JQZS=mXhcrycI`O;&XzwMc~P>_
z-L8JK7A+R<ICM0Z`Nbq8DRFs>AtNIVRa*SNeDzv6ef0)6Ovr|D^YqfCOBt#$L=|xD
zA@AMFcp^*USG*UI%i<*h;CURFYRE9neSH3srAoOjO<0-ByqIU<XDSR;nON4y_~p`f
zz`$FGKO0Qc56{*Js0naT-CBRl*gJpaeX5=r`AcwAOeW$^MH<zCPon!KvGp`)+xq9V
zx;h3j8-@<o+xYwY(`iB(5*j<!gxxm>ifO^Ux3%F#@_zk_6)R3&vu4fRhgpIbogiVc
z@gI!vWy^6~E-tg@Wqx0!%6_Cj9g?GWer3^Fn1)(cQ_Jxe_oSSX;H9w=8WSfWOP49f
z!fC0l^E@Pa8PZ-Mu}N!W1te?I#;uYJ$-*O%uzr(7tw@rfs2K4JUm^h!OWz6qh$!(1
zi4ecAB}(2s`t;q2@));l(>8(WheRV!pl~?~gZEgjV3cQ3U`P(6eiY)9AvyI*Ba*5S
zhu(Db$C}<qgw%akIw~m7sH7P$K}%x9YO?JE#5F}49IpEKnUY3JzK&SCWs45Wb#?Nr
z@1UR{W$g>4W&i&Dl??<ChFub!C4M)x#)k!wIT}N<&b)f{YR2OnkuX$BU}TI0E{Xjh
z1Vw>?mL;IF<8n)U^9}VY2UHq|H~4~CVKlPnZ>j6-KIchf{7NN1e=y5C^ToA*$Y|y0
z8y+RzA>lxT_=2GrCm=trUsx0vF-kf235^8PMJi$52JbQcqQFq`3JAfqNNLr!{aTb~
z!cff-CpYXl@cy#omq)pebLM9?#qJ5>+Toe%0R^)}IC6e_l=W1{^yTX|y+1ty_xOwi
z%h;n&KN^BVkOrrcKYjv376mBXsx)fYJV0}}t~feQld!NbWw|U@3=SMPP(jtI)l3nN
zFNfWIfwaWK@|=v(aq{HJu`gb{$UJiXI$AB6DPBQn%<>OHSnLW}w_~4VWQ$C&wZDTr
z%m7Nr!WG7qkrRTJEXxgxS)s%WiGF{cED!JdM?`~}!o<@zP!{+FgQ>#sKU}<k1^&Sb
zc+P9@!s4JjFrKGRpv+zD4`z<eEmyt*RSP?7JIB;DJNBddq~v;cd3ZhV{j5;RGkExj
z%ZRIuXX03=>iOt2!bNXZUjAXPmnJ~({FCS7F=#2IFY>PTWA!-1?SOPBP(FSFLly%#
zrdIZcsx|C}Ym5>r%LxiIMny$Qe0;pJFpG=B3=PN7AgBhAi4|!HsYM2*XU(9<$jBo|
zOSbG!k?^=giNecMVQJ!;LiqK1-o3Z){}!>y>*e;7Ou6$kQ-*43(-*QGDv2WoN`;{e
zKb9?9&Lw8$x`Mp^DBb9lNHb}@jJ2Hbs!qN7x9T=fyHHQ9;aQ@7A-w&+@b$vsmGcH_
zJU`l`S&Nk@78|d#PgvBOgbmvY%JgpJzXM>&@+4{6yybet{RPj(*&u~*&PHGVr{*m~
zmaW==@~>6my*s^MpFINUc?E`1pLE4LYJhY;ovB9fQMts##LC5r7E>QOSSL$=w2?$c
zL@GO7aFYRUNX5N>X3UtGhTvQ-OE)g|*JRE*jT<+%IC}J`GJ>42Zkt5L!6?CtSq5f`
zNnEQi0MB_O<M0}fh5xI<ybUD`?yuUso5{<LLnpGDwQTK)eCy&JE3RZ<(3+LM``&|j
zM#j^y$xE+rK84d?zGjmoZrCAOx}#H(PB1V9PzK%eL07to0I$>Y#{MsW;W(W~3%*bF
z8a4gnR&A6xFp(<%cR^LY=!7*A616-Bmi9b6PZK;B->pJ8=jcdBjS)Hr_MSMe#8JJQ
zcY)8nEO`SA!_{JK>qvca9MYgO^LuT9kB(5+<sLnH#KGRyy?PG5J3>oWSq8xc|6J$I
zHQgE-8ZvdNJ%@!jRKn>q>L)FAbaWP_r>DQTl=?(MqtRucy9kR-1al<Ah)9%(xRuIz
z1Q?a)s{g#laCz5M_h@}4{|3Cqusl!Nv{PYBbe&JimMd?9{5hXn6ct$=aPkXZn&Tf4
z14CwcK^#?{zamNES8rA@+Sv3a(y&GvtpQr`1l`8C-OFHgb@jPinu})-`y(Js=!g*`
zd^Jaov_?Ey$vt}aAGB@Nmc74uhPR<A504?yE5vS^>tn>7h%(ShF3{U?&Yo<@1RMKX
zTMry7c&<NAGf7FapZd_#h^v7#J`H_RMX2X*ftnpUbZF7G^)I_dXz9r;H+O|8xj_{r
zrk0i#odl$BpZ;?Zg)VCY=FFM1iQX$~p*ML2P|u2h;vWU&Iy&zd3WFgW1_l1}TmjA(
z2S+O5RdEXL<%chfg1AbWOJ&9SZF2vW$aouvD=Y(04lkTTP!q;k+dQ4?9Uz{5AxhfJ
zM^&DJw1BYKm2&*@J#qC|_z3ZA0M0~FC#Hp49~kWB=C)C30e^wwAbFX-fyp@PZtO|a
z4NT0htlN1IMo`{+Q04s_;5y5<I5<icLN6UMeB@Qcr9N2+D6o>z;5g(p)_tMxOETM^
zrOEq?f+CP^gw(22kLm;s!%st>R1w~5DWI08r)T?mbs8<y($tr6<18R0mnj=naqk{l
z-*f|;GQ~E%RH;%ft5>hS$c_Je!eV9Sd>?UJ;3u;e1<IVoK{y5|$2s0Wq5r(ka18G<
zcM-5yT<3W!*BJIU@d_h9uIjjQ%RbqB;H2~$q(S$t_UP`Y=jt?Q)a2MScTbr<XQ9mS
zK>G8L{(Ntl>6uqP_l5rA?iHZqvwVG?)J3Dai0AS{eEnX%dO2{#<e@`{ZqemEC^}II
zTR3mVpS92r@JE`yO8J?Wwf*!bUw>VKT|XtI@gzr=CthGS-e>;r1>S`#%IW6mD-E0c
z#M!CtV4%YKB$=~jC8#I!eM4myo<sFa?*s2MUq?5OEW~w#KFJ>7)AOlB9sF?k@ZnCi
zYSq#lqOS8?M_W%B7UIrn1n0tk`Q;aB)~p#lSe|W8xg^u(ERv}+=ZT~H0_EIkwwD5)
zzx|&_`<e5_&SkFHx_HPG=h<R2eU8`?xaR1-uppc&eqpH6BAoHUHy%1Gi^F1b8a8RB
z@@1qh7uKR}hnLn)Zb;u9>CYD18BE_p0rR$V1(f_2NND^zd7-f8uf+e6Nn6BazHdRk
z3`;o70^B^s-fg~2MLJVZ_Q}Y{*lNnlij}{MLH^vdwMB_<zM1AQ%QM#=@8<0?{0_Kh
zH^T$rJaWG%TGSJIdQ&h^VSSQ@WUDqEx@>Us3KAQHC*GMB!Sr~X?jc%wV>s(&i8MIp
z{pskFDneaUc>MTrYdbr;kzG3V-mjx=Aof!olzk@IjdQxOXRlsz<>5nF9JN}k>}LUP
zGQn}SSWlZP6P)JAL}Hpp9`gSy_YlT%s+*YGx{A4ti<nHBCdT8ZN~312o^|fg@0GcY
ztBiMWD~O|tR~Q9p;>h&u+!o84gO|j@&h>fevgPRjVF1jmU%!4%UA4wyU@=9;T04sA
zWM>&?GadQPz%d_FW}qxCO8HfJsnS-Rk9Tla?pxZ;MA>IDAH;PLvnkWX#M((rCOV0}
zxlL}Hj$PBf_@XGMI}Opzvg>YNuVK?TzMpXjtDtb+Z@~;@F`eWr!*s_yMO@Cm*8mFZ
zlXRBp(`(T1OP2OCmHhua&FNx>=SJB$RmoW|cK@Fqfho%Zjgyj+hBs^0%3W7`Ozvn{
z;XXb-Vr69|4({%<>w2b4a`O>`@s46J!AXoK%@E_sGsI}pbmcf^vWs$5uIJw~oH$MN
z5q`9ly^J!q6J0YK89LfLr^=5tFCyEd;$MBWXYeS~tkG5uO4zs4D-6{=p7UrubFqZ3
zKPm%98q<oNYi@3Ng0oLGYu1#KrAkYiF1=)!frX41V=W_1CX0^g6wxsQ#@UGOI9t)P
zn5sO7k+pLHl=7l1dMqo#X(P@^lSxQpqG%e8ml4Jjr1ub=%=%4Q+$;8F@eRm#HZTko
z&n}YVr5)P-+<UJ!;%Ffb<NYxvDCxKPxfiESM=%YbK%d09;wU^o%kaU7vDSYv&51G!
z&sXKgS}PEj`sAm(qcNEB$IF*5_Z>HG+$31xm-_S`C@osH5zm;l;=S^)j2b^fG{@S=
z2(zi8ZSEl2;~Zq<IQzVFb4TS^@E`ZIOzo8L!^cb(4a12tM1Q<=9XRq$snTWXu)wK8
zx`;V^|6}bF!;B}1<`^)5Szi3#rdt@l-s5rNtN@wrzfyi`_sdPrJZWodi@m+QvYSVn
zHf`j~;$KOPdX43$wq2!5f32K;BgW<q(j6xQfkC?FGFZ=2hKwFBLks{M)kj$Z7Pyxe
zroXnCbm}woO`}$w9{*6Y?zQ4ym)MOmQ!=^$x~TB>0Ig+CchZ<>UZ+uup!!YQoUYTT
z`K{Uwo2C5`5aw=!W^K<^`0fX9#N_~Yfj&vXe=Zp7-l$c_O?4Z$yofk=-%jIg)NRz_
zR>Kw@jw7BK^li=zeUfW*6arIL;+~MuB_~w*{)gn6YOQiZc3cvB?|2z*YAYIGn&BpP
zq6KF76<|IbHNi{-|7)5~m0@7Up~hfFFq^vm1R11nB`rJmc~Z1!u>@o_4qY89vpyJ;
z^XYEQI`()l)MygYw=GC3FTM9e6ODuUIf5z8oEJ%@s?}xW$dMAYBuZJu$Tc}_+qRS1
zHS0;SqF;0Rj?-=XzA05YrF`Wo=YFi)=vFPYW~uo|%SURB`c3ZO{sk}!!_$+U`l1)W
zu}C`0_C?uS0i0RlrdV`B;dEzlJP+48@l!7RFaXsG?!r+RGvC5AfU{KHVUSSwE6^uN
z_|K_ZuE*zUPYyY;F}$6|+u*n<8$5@)*maZ9IpoKoE_!-#*o&hUAHSZ-LZUKTjvhVQ
zX_&G3`Gix?#d204m<)^v1{g_6W$6S=%LmU@$h*cc3`^!34#v~~W2!?6)oap<Q-nc)
z4)WpPFt=;rIv=jq8-V&;+GoVrOy)HLjP`4qh4WBBm49A-de*KoKW?wm7zPg=rc94=
zeAURvNQP^S5cPqYS)Dp{-mIaaF}G#QmS*35_Z=CA1LUkcGEU(NWu<em6&b(;l12+<
z>;rTJXoS=PI0Z<_NGjaba>k@O(jQo@S~aUGRjSwnj)h=fwrp7r%Ig7r!N8P^Edb88
zd=Kz_e35eMWDy0YPbq1FxJZw>W@Dfwzy|S0_z9q!8W*y$>sAX33yVofNx6FT=+SFS
zVq%UpP*dxSM)39N^XH44ICEOb@5qT`MGhZJDRTHkN|B={Pbk-U|J>z^Mb2Njtbpfd
zE}SoN^4uAvd<Ag;p7R>-F&>Y|pYMkajg5mtcLc{wf4BPccZ34c%BxzdPMxuSF)>%R
zTz(}J-Mqzc5_CD6nKH(9mU1+j;;J0q2Co}J*E4`#rVqVL4|=37bT$n`tL*Qq)!+aw
z$2SL|Ae@O&_U&Pz0IqH3dgd`zs@FQAHy#GWMCf+ZA>WZtVW`T&vQ2XLm#7`*(Cv;D
zOX$E*LBz;#tQZ?v$iM-^PFq`ByDnO^sE?_sDJzW5uG9f}PNpbaq5R(rpvxG`%1yY)
zgk%hs^}QPs;5w&1JD}n(K7aoFH8<mW8xJ2oe1ZEn%+1Z&K=~n5I+l<bNnhMbO)Wy)
zPD?9tJ3U>gRDgN15*dzbx&I6dM`x8{s8=q57_xHh+WRNYof9yN>^X2C1NR0);+IK#
z^r*-k<fG*G7I>ZaAH4-$yL<osJL9m-yq}&Q4*9U5)|oM5hU>zG3zvI)d&j!FyK|Sj
zf(3TJBiulP{TD5~1|2(gv^#U=Oxor%sp2><Ky2K+WU`B=SUcxgzZ0gxLghKYYbq$$
zc;CrG#yiax3s~CYU}+B@ZThBknQ}Xjwml?HSLDU+Ns$Ih<{+K}WpJMZ@~*%18Fb&u
z(H-gKS;KGV^}9R~j{m2itiJIZW%c1Jf;6OkyAEP#XeI{wmd{L0&B9l$S~cwD%a=c7
zWMq)J*dW;DlF8mx(Y#J3ATzLhf83(;2@FvFn{U1uee~F|XK5J^l#RF^a?>r~!IQ_5
zn(<KXrl&zm%A@%B_+yCHjuH!hsXo0InVFdiLqNTH=Y17;U&TBuH|V<#9Xd?8$#qJ%
zZz&rz%$qllyUh*6{j!fAKQ8j-&6{_oMHEi2Fwp4U8J@_=$tm*m=~E?t)N7dv6)Frz
zV^qFp!_CIR!^5{Be@gw|>$>?boBBpu6BCnARM@NS$FGWqUxaW>%*E4BrqA<{X&#Hk
z*~43$=Xxvu-v!=d7>+mE&+?T1L$#h4EA}Ow3#Q<ub3mBV=A9xN*I4;<>gbP?CF3SL
zX8u`zDk$NA8S{K)j&HaexR5Gq_nZ{9#y?5FzQeP%N9yi`O5$++`gQKP{=0qF|6PRg
zv!T_fSFheIyvM9JS((ZzOEmJ_Lx&FK9653%hhI<QO3xRs@)oqEr>AE%Y0|_F@i_R;
zvQc+z#B>18F0rel{>MdmR2NhYghGTXNV&@gzv@fpYlf=~CRFxDBID$jsalO1HOxni
z8nu7u(4iN#w6sn)Yu3z%@o=9#78#e!!Tgv8m%K1N4sU%|I6Z>QhN80pQp#}_jJ+HM
z3RfFYU*-4Rxh#e~FI9)OFgOFisLEPpeqyG3_wK5%cW|`;pCym-3(Ps}K2}rgcgL;U
zni~)mC*Gk;Wl>P1EbtGN`M$w2&nHOc;mGqp0>gML4wPA5zS6d17cMd2a0y-Tbtnb#
zaD36s?{~z92H4&30gda_Z;&!)ae&PA4SHu@Z$ni^mf1TrT0&x1icfg7RH#_#KDzPH
zy?gie!u$9U_{UcBq3o=Lmiqeo>-h4yZX69E7fg4%Ql&~qDp#(2pjNF~7kqqtUh*E7
zzCsG|V^&Fs*LBR8G2_>*TXz5w{V4QdR+{d#Y12ZFA3uH(|Ci%`9lS$Ua*fiZOZRtj
za*Eixb?aGfh5*LDla!QnOixdbIx*KsmCk>sEK@z`((ZwQfz125gM$NoQVqd0RAI;v
zbVY2Vt*z~nkdSbMJ8+yofBxQ<En7}Q#h8bCG>#PJef|2iQdjuTay0JRwW~#NaPU?%
zx@(s&UrvY4dSmI*rMrIm=_eXloUU!@>+8F2|Ni|2-2<LIH!d#jHu5@+uADA@+^Z>%
zP^jXKnl^1UXQB78$hZ{}xP*?ak>aiR2L{OkKl&KvVWIz@!z&<I+~&>~=%j+~g3Bl}
z&CJX~@glh_f*<wd+80{dd`PxPG9X(Dr38=l5SA@Y7YNZ9F*4&_`Me8-%ego(L;@nB
zWXbYGnYY-79|myRxpQYDypMm>AtAql`#^8Kz?aYEz5Dj<%fx$LhU;AaLwB}$sK|3%
z6VHtfC`k<r4Csny6c7+lu>U3)Edv#%Y~=wxlnya$;Cy$TEG#ViE?&Htt^@=X-YkI|
z7l28qE^&s7j6@x`y=sRJ=6L}6B(9BsK1l<!8XAmNVq)TP+(S~iiX>B|$qV!6&*$hv
zD+m^<G?b1N%atqFZ^42E8|gQjAMQ8hK#%am(9n?E&X_@6DA@Hx6^75x=R!Bm2Esbe
zBV<5}s6nO5l}%f=ZnL~`lcsyr8a6s!Urp@{eFO8*;9}kS^{>{cSMM&E?QyNzbzaq|
zRVx>ew}Eidu3bB2NQR$G=4X=Wi;0SPk(`{&Zk!u0DIccK2c0S2#t<+jEmNoB#fz^%
z{+IqJ&-VdwGEx4tZasQ#)*C%qFDfdE7V}@y3qO=l$&|vQQR&Lgu-xzq@8CYJ*&^@O
zxZf1#hLC2;=3U$X0Tq^`rl!WF4-R|w?77eDB(58xH035;+{O^yUj)J!LF(+{ebx4c
z+&qCgBAHvYT@EYW%F2pESPhXcrQpyF8#X9AbWnAG6;976EucZurcK8{!rdp6a5IK0
zSFZ5;NZCxAd61wx=5*=Og`0b+RE$B8eAn{t$(=iQlyY!)5`H>{arp&0dwcu!RjO2(
z&yU8kY%CYkWEhs29|eOXW@DqOkVhymsUnzw4g{2B#sG&(3i}mWRjXEQT8kDf=JxH|
zH?CK&UdI|YYI3`GuR+;^25QKlL4(D}$XFa59hJ2_bPC?FV}~3(cu-caUafq-jz@mp
zWc&8*a^%R-H@J5d*H@#^9G_#pDkfF;_xJZh7q)NBnl<T=968*e3X(+ePNFjiEoujQ
z2W1-;y3_G<DRe&W-o1x(@77Ct^yn{5o3?z4#&M!&&z`~k`}a41w5CN()!-jv%KWse
zHE7Vl?Z&NJS(FE?1S2CO8WbFQqLk~^xN&1IRA>gm$kprD`A(?j%tPgQU$}fZD-{*`
z01TFnisOzS=$_xKSh0fskPFSmjoV5_xPAAo!T`{dU!Xp2v~Jz{s*R1!<LjtoZkv4{
zOt5R$uKQ@z9M`N@v0}w3r_Y{!{qWHvg?Xn;nZl6}e)U%)A~NzU)4Y?K%6fYZ=DSp;
zOqs)|>$|t^+{w*Aqj(O9k9WNn;eSTM>gg8{ph$3*0}?wIb@8Ze+qPGJ{`uz{kkT)>
zZ8uaH8gO>it5<i$`@e+e&3TLlaTob`dV0Rn)YQBKwI!u)-MSo-qYj-%c#*U~X?b#5
zi(_E*0hLeTZ!mW3*lsZ6wa`*srca-~Y4~u>%T^Xs-#9zDiSsm9Wlawm@AT=@%F9ek
zOH*3;rAwEUk3aFdJ76^BJKpQpuU9^0$ImMh{1_;~uiWy;dt_vKC(-+z>xAfN%&<(0
zpI+X!ZJPuI1<O>ssWRTm8kj6r7Ph%)DEH7^ZvwMULqiz=iQOFKXE#RO>K~*0^qUMA
zI53f$C0xKeAX7#}MBGDnej+F+=n$U$0=lfst$2oHyoYso);|6E9Y#C{95?Mda)e*0
zke&PX$-2#(bARd9?KaX)#{C7QN|o|WTD$humYq9=m}Fx^M+@qr+&p0pkZ5af|717f
z?}0$redthD{RRy<*4h>6MDIO(SoWu+$N`{DyLMbZWH4mdu;pYfUfZ~Rdls1S1g=xU
z1_GO0-QAxaI&ne{9X~Gf7B0MoYsQ^AclNywX50+Hvu)3w+-^O3(AAC~Rd5G9P~Na6
zO`B5xr8^*dKHb`N>TG1*e7?Ph4oTBy&A5dOXRqiF+X;+L1NT=#B2<QH(b4)P(b9S+
zz;JaS-aA2=wSYuSw6wH5Gu~?IGvCD#;^Q5Ht|~+l6IaTmi<cAyUa>OKg;8XaL9boA
zu6#O=l9UXrVo-hvl#IylE)pt9O9p4Rz^;HyPO(o3!TY|xe&XilE^e-K08g>Ab<5Nq
zIqEW)Fg`judJN*Xg7VL)zrQu4^Pz<FsS0%&I$Uet=AC<GdGZ?BxO0zOyZb=s0(a)h
z4M|wDUZSA8hb)Pf!0;ux>KemXp{{_1p^53a9S06ed{W-h;QoWuInh81nFfO)(%pOF
z{1u5!TqWU4<FYD$_uV>NAA@|m0NnF(QqMkpFRom-QC6&8CvhuRW&c>C26eiT+Ipi?
z4jezFTt~i{rM~@^3}NLm+vDz`6K7=4(c^OA!iCJx(9qOn%a+}MFt`R|<<5cQr(eh8
zIpUMn%H*jIHxS2loaMv|2Tq)pWr@k+;pO!L*Mia0a&m!|d<W>A+z_HC^k@$7)6h{L
zrZqC<$mt7`xOSr~iCvyuu3Y&gxIP&7Dex!|OW^;96W(tP=m#yJ$ZA2F`MJ2b92sM5
zmNwJHD=R2qse}YAk?^o3vSssDxpDo5qU@6?sUK0vrtU%sLQv6VcS4<mM>3r%Jddh#
z>MhiFjzR)OL`2F0&xJC7-Xigw?<;d>`{$aOT0b*08+Qg0DHzgLGc`4}3Cc+&{y#=|
zv7GZBq(9c|O7dnjn$SpD96~2!7>z-pvM3-FhFA~`FMqL|Z2OXHc@UlqNwvjf!j!4^
zLSvT;R~}BZwO1+|P)E8P^3k<%bWTm)x>LAZ*W7wCrKAfa3<vo+E5>Kl9jT{(jmwaI
zLL<b*!}B?q#R1nXr#LvLEd?_NM#YNNBwNOH`l4u2huK~}&$&`GXz4NuiCM1P=Sogx
zHD@?SV8WC#^Voa9K$drr`$F&MT;CKPmmnG=w8^Me$e%qd$9`$?a|WrR>KylnTTgYo
z8<V_Nf{~A{({y^)+o0@>!(Lhu8o2iVN2qXqKd~{SS1(w#PUxC8TU%RSH5qI1c)|RT
zoUo8(5*iHaH!@lh(drK#JOriorov!k%s(PBEv-Lp$45rx=j_(6-yk6&q5K-IEb{Ub
z??re~USVQA!SNOJ+1t=}wn1{c9y@ldKcur7lKE|S@E;(&WLowHGv_aQ#Fd1;s8oAb
z_iP=5F)v0LnLO3eH-6lIutsV<wT9HS!h!LSDqRqd#*ew{yoHZLVpf15Vx{lE!Stx-
z8f^~Qa0R2D^Xz#_x``XN%VfvtG@d3Reho+@O6b}pOO<k5wQbKUzldn@$9o^5Ir0M2
z{klYn>GOSqUPUfX2IH*|^<kRy%(6iEnG5_w-ULT4m&oNwVq!J%*+@NuG;IUp+bHkt
zk$Q%A)HO6yhiU7kqwLqdt5VgkM2QmKi-N;5BjS@}DHwCaD1FtcOWGx>X+$;<G9;I%
zR6zO0dMx&P9u}7<kt>q9CW>)%aIZ4T|Nqa7ZxbrN9;8`2FwO|*iwhPnUcAS`!s5nw
zt0^zMy&`fWLYGTK*fLof6)(|C`I+S;=!ARa*s<es?dml}_hffW>y4Hex8CEYnqh^?
z+BNGSaU=QhQ3-&Q3Gk1QK>sKS^otgUsqSw^k2bhBb?VgpFowL){p%o|wkS^>;2#?t
ze;?(y^y<}X)SMl$JSm&2I74DrNT*-A(VcD;zz+>@JdR_C<4_meWPr079J8(WeU++i
zp)o76!sAy$0<DxzKX=`O>vZ^_qX!4HN7*{NJfJi?b0bZjym*nLuCC5ir5w%ZQm0NG
z8yHNtqgJez;AoCBM`o4!wlpQ}i0U<K`G6U6maST^FjD(YU6hSji+}aiG*7>v*UWSG
z@w3vi-++6#PD_|00nuQxHKwMfbVi8<_-#v_ufP7<ZJtlSYc89<55xSx$urOJEc6>-
zm)(P7q)@-Mlq^}2Lw|g>>XpCy-Wd%ob0r$n@yoXa<z}OtsHUdYwN|ZK%K9q&|MwD@
zfCJKvAk{iUQt890_FuSg;Snn<t29%SaWCxcre*mo3dxO*N|vQd5+!D70vL2T7&T5|
z)R<^8EEzXeqL##>dyWP(E(M|`EF=a@9S>%Vkr~tHW}BOhe`RE7lnx_?pK1<(@nir-
z?g-U@U&iIk$A5+JVmY_lvUArS>-;0)WUhCR%=Zq;tx>Z!H>Dbj+dTm~b0~Tiz{=oM
zCdUw)x9|LOvUl(jna6lO!MQbS*QHyWYQ%uUO#M6m(*5XcFMpZm6D%C`gN~jF<L?|S
z^V3jQo^!%uu6Ll!oVzHyO@~hGYR3Wto40I}5EQjSn7&s)L~fmW4OHIo=Hn*VJn~o+
zAiGj7P*uwD_xHc)<m7b1%ggH`1cs8Pp^3%mFN+t~f@I@Z;@IIMN8RyQ8~~Lh5~UFC
z<Czf|8F>PR=VgwOhD0ZbowM873Kc7Hw4rhBy7hhL`G(~BghmVJur6J?^Z`8v!ucXR
zF8Pk@yv1j}EnSAsND2GT5b~=5G=c8-%Yg$2j8?8(xiBOoWFsv03lk<xxNmGc=J`}x
zhio^O**WNbbC*UYNL+NX#Kk6qQS(5?jm6Q|I~=;ASMD@Nmu!o1Rxgc>jqigI&(EDZ
zcOxwEg<wYGbLY<e0-cn)Bi+RQtArOzMZNc6P2I~jGiJ*q$C+X>-tINJF3x7rb&Se9
zbuKa@r&(2wC}heW1BQ+`&v=ua+{9#p?JLA_MqK(pQs1K68i(xMwDgRhO>}S(+v&5#
z(cMd?&&!)$wRfE-Q=DgsrHxZ|{f13a5Y`Toj#GZ>y#@?94pFHrL$R`Re)IL$CESrd
zcY_;Pvv$3W7B)_~)($h3r4tUbJjHR=0%f_#B$Q>6(=5>%J?32T;$P82O8KI3`3jX{
z%qBVHaJdPWT;xsf`iS$K#WEFn*dxA?x%IWuWy(@lZeF!o^+jBk!t^FP&lX!(4<!$6
zLlauyT-*N7FJ=Ri@LSzY(em2EqU(niWCH0qFFZUvY5x5A$DEy=Z$kfkXkudW%+S#A
zrGbILYfB5uH(<Qi`uh50z-Q*><_~OaY;HhTJhouLf~2spFpq?U1QUL|{`T$L9UvLW
zjK5Az{g(+ZmQ!wh`VJd)edwsMZ-$IA$?VcwokJ*kxJOTzcj`|071FcM5S=TGH&o9I
z@dnc(r{gf)*i@q;oa0t6Rl00+o6bER4AC8%r8{nlj5fEE(UwzXu=beO?Ys85U#46I
zcE22op*7wN7^i7um2Ny~x)_XiklsUeXykDHcU>?h=d~@X)vUX}TYrt`T85Un+GDLn
z*L1RsFdm=Xd+^A|4V$&$Yzy7&=t)%$(o&fM{=Qm`9l!J)@(gL`j><2Gj@gu)K0|e$
zG-=hIT|RwITO*H##fp7t*12c@JHzzNb4HGxDB30ybMbzj)UMw+3}M<J-T&+gOE3;~
zye44KwpXrP>A7pyu9553t+ND!%|O?)AU;0cFE%z7{2IO#T|h*1bhLj=OpIq-T-<cr
zw_3Y)t<I4nM|z{HYK?R_O-P2P;{Q=m{)=q}&3-&2lphd)|C4~WxJL>3&fzFrrxa{f
z?91X-h~o=z+duknva(cj*914CrIQ0c88{o^lgd@9a;S3E>c`8JuSETmdLNf$=z$U1
z7cE+>ZS#&j9CXIdI5;gZDR-#RWEld~ZQ7ci=p2fnV+^-zr1J!wivvo2TV`9uDnA~r
zROQD*B}#tF;U=!`<e)$I>tq8_R0*U{W;X(SN|gF`3&J0(_<gmMZ@w+VRhZOc`P|(Q
zmKz_it~vuwNOM)$3Y8C_VKLkSKoe=vkW+R!`L7U|sV2|>XbR9<hlWXeFl@)WckliT
zowQSCW@dX_YXh_ZC=I!{dUb%3=%XdPSdQbei>3oGE0N$iuf2B}0`KLAp~JH(9?z9{
z=F=AzF?050ICl5+5aI+!J1`TyYseU!OCOD{o-SzZ0L7P}e5LOVM@^XadeWRA(Kfdi
zP17kdP}l6u*WZ-n_$1W{su9ea!#8SRc=~km0|L}T#{irc?}2-K1~Me)yeYLw^j5Er
z`;^9<s$_V^8;NVwohemS{-Hz_TqpCh%$z>f#D6{~9gr0(9oIfS0@GCm{-fiH|4zb-
zWra~!q*UY>oobwp760C058eab8_y#srie#CbP;#IC56M%BIovilrnTc=5h=&4%47+
zTcbgvpXN+)oiF2^+{I$5ix^qiX4bCXkZYoJ!4RBKASoV102zn*@;VuXpi?uik$D;B
zU$gzASO&&X%>t648BP_4@!6Odhs3a|GLw;6W;QDN(=u(<809}YsqvZqCau`8waD^y
zn~N-4y|GA4^7<mN$s3Bqt=WWYTZ$xX++HMc^UflPo3`hN+fpQM^(GZ#u(Da91exTE
z*i{>Nk5Z?4!zMr3>KU6{8m>Jmwa)<cTUD!7<FukBx=TtiMJ-sdev6WKemoVk6;AVC
z%#Zmpf0joThvnt{{BZA%ql877@jSc^u*^zX`Jd^0rvC%PN(XVAmU<i=Yq-{k4i6)6
mojw4Z{rN|I06vV06#0L0BQaiPgq42)0000<MNUMnLSTY`z}`*(

literal 0
HcmV?d00001

diff --git a/docs/xen.doxyfile.in b/docs/xen.doxyfile.in
new file mode 100644
index 0000000000..00969d9b78
--- /dev/null
+++ b/docs/xen.doxyfile.in
@@ -0,0 +1,2316 @@
+# Doxyfile 1.8.13
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project.
+#
+# All text after a double hash (##) is considered a comment and is placed in
+# front of the TAG it is preceding.
+#
+# All text after a single hash (#) is considered a comment and will be ignored.
+# The format is:
+# TAG = value [value, ...]
+# For lists, items can also be appended using:
+# TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (\" \").
+#
+# This file is based on doc/zephyr.doxyfile.in from Zephyr 2.3
+
+#---------------------------------------------------------------------------
+# Project related configuration options
+#---------------------------------------------------------------------------
+
+# This tag specifies the encoding used for all characters in the config file
+# that follow. The default is UTF-8 which is also the encoding used for all text
+# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
+# built into libc) for the transcoding. See
+# https://www.gnu.org/software/libiconv/ for the list of possible encodings.
+# The default value is: UTF-8.
+
+DOXYFILE_ENCODING      = UTF-8
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
+# double-quotes, unless you are using Doxywizard) that should identify the
+# project for which the documentation is generated. This name is used in the
+# title of most generated pages and in a few other places.
+# The default value is: My Project.
+
+PROJECT_NAME           = "Xen Project"
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
+# could be handy for archiving the generated documentation or if some version
+# control system is used.
+
+PROJECT_NUMBER         =
+
+# Using the PROJECT_BRIEF tag one can provide an optional one line description
+# for a project that appears at the top of each page and should give the viewer
+# a quick idea about the purpose of the project. Keep the description short.
+
+PROJECT_BRIEF          = "An Open Source Type 1 Hypervisor"
+
+# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
+# in the documentation. The maximum height of the logo should not exceed 55
+# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
+# the logo to the output directory.
+
+PROJECT_LOGO           = "xen-doxygen/xen_project_logo_165x67.png"
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
+# into which the generated documentation will be written. If a relative path is
+# entered, it will be relative to the location where doxygen was started. If
+# left blank the current directory will be used.
+
+OUTPUT_DIRECTORY       = @DOXY_OUT@
+
+# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
+# directories (in 2 levels) under the output directory of each output format and
+# will distribute the generated files over these directories. Enabling this
+# option can be useful when feeding doxygen a huge amount of source files, where
+# putting all generated files in the same directory would otherwise cause
+# performance problems for the file system.
+# The default value is: NO.
+
+CREATE_SUBDIRS         = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
+# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
+# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
+# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
+# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
+# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
+# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
+# Ukrainian and Vietnamese.
+# The default value is: English.
+
+OUTPUT_LANGUAGE        = English
+
+# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
+# descriptions after the members that are listed in the file and class
+# documentation (similar to Javadoc). Set to NO to disable this.
+# The default value is: YES.
+
+BRIEF_MEMBER_DESC      = YES
+
+# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
+# description of a member or function before the detailed description.
+#
+# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+# The default value is: YES.
+
+REPEAT_BRIEF           = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator that is
+# used to form the text in various listings. Each string in this list, if found
+# as the leading text of the brief description, will be stripped from the text
+# and the result, after processing the whole list, is used as the annotated
+# text. Otherwise, the brief description is used as-is. If left blank, the
+# following values are used ($name is automatically replaced with the name of
+# the entity):The $name class, The $name widget, The $name file, is, provides,
+# specifies, contains, represents, a, an and the.
+
+ABBREVIATE_BRIEF       = YES
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
+# doxygen will generate a detailed section even if there is only a brief
+# description.
+# The default value is: NO.
+
+ALWAYS_DETAILED_SEC    = YES
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
+# inherited members of a class in the documentation of that class as if those
+# members were ordinary class members. Constructors, destructors and assignment
+# operators of the base classes will not be shown.
+# The default value is: NO.
+
+INLINE_INHERITED_MEMB  = YES
+
+# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
+# before file names in the file list and in the header files. If set to NO, the
+# shortest path that makes the file name unique will be used.
+# The default value is: YES.
+
+FULL_PATH_NAMES        = YES
+
+# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
+# Stripping is only done if one of the specified strings matches the left-hand
+# part of the path. The tag can be used to show relative paths in the file list.
+# If left blank the directory from which doxygen is run is used as the path to
+# strip.
+#
+# Note that you can specify absolute paths here, but also relative paths, which
+# will be relative from the directory where doxygen is started.
+# This tag requires that the tag FULL_PATH_NAMES is set to YES.
+
+STRIP_FROM_PATH        = @XEN_BASE@
+
+# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
+# path mentioned in the documentation of a class, which tells the reader which
+# header file to include in order to use a class. If left blank only the name of
+# the header file containing the class definition is used. Otherwise one should
+# specify the list of include paths that are normally passed to the compiler
+# using the -I flag.
+
+STRIP_FROM_INC_PATH    =
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
+# less readable) file names. This can be useful if your file system doesn't
+# support long names, as on DOS, Mac, or CD-ROM.
+# The default value is: NO.
+
+SHORT_NAMES            = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
+# first line (until the first dot) of a Javadoc-style comment as the brief
+# description. If set to NO, the Javadoc-style will behave just like regular Qt-
+# style comments (thus requiring an explicit @brief command for a brief
+# description.)
+# The default value is: NO.
+
+JAVADOC_AUTOBRIEF      = NO
+
+# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
+# line (until the first dot) of a Qt-style comment as the brief description. If
+# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
+# requiring an explicit \brief command for a brief description.)
+# The default value is: NO.
+
+QT_AUTOBRIEF           = NO
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
+# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
+# a brief description. This used to be the default behavior. The new default is
+# to treat a multi-line C++ comment block as a detailed description. Set this
+# tag to YES if you prefer the old behavior instead.
+#
+# Note that setting this tag to YES also means that Rational Rose comments are
+# not recognized any more.
+# The default value is: NO.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
+# documentation from any documented member that it re-implements.
+# The default value is: YES.
+
+INHERIT_DOCS           = YES
+
+# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
+# page for each member. If set to NO, the documentation of a member will be part
+# of the file/class/namespace that contains it.
+# The default value is: NO.
+
+SEPARATE_MEMBER_PAGES  = YES
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
+# uses this value to replace tabs by spaces in code fragments.
+# Minimum value: 1, maximum value: 16, default value: 4.
+
+TAB_SIZE               = 8
+
+# This tag can be used to specify a number of aliases that act as commands in
+# the documentation. An alias has the form:
+# name=value
+# For example adding
+# "sideeffect=@par Side Effects:\n"
+# will allow you to put the command \sideeffect (or @sideeffect) in the
+# documentation, which will result in a user-defined paragraph with heading
+# "Side Effects:". You can put \n's in the value part of an alias to insert
+# newlines.
+
+ALIASES                = "rst=\verbatim embed:rst:leading-asterisk" \
+                         "endrst=\endverbatim" \
+                         "keepindent=\code" \
+                         "endkeepindent=\endcode"
+
+ALIASES += req{1}="\ref XEN_\1 \"XEN-\1\" "
+ALIASES += satisfy{1}="\xrefitem satisfy \"Satisfies requirement\" \"Requirement Implementation\" \1"
+ALIASES += verify{1}="\xrefitem verify \"Verifies requirement\" \"Requirement Verification\" \1"
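+#
+# As an illustration (not part of the Doxygen syntax above), the custom
+# aliases defined here would be used in a C source comment roughly like
+# this, where the requirement identifier "42" is a made-up placeholder:
+#
+#   /**
+#    * Frees a previously allocated page.
+#    *
+#    * \satisfy{42}
+#    */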
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
+# only. Doxygen will then generate output that is more tailored for C. For
+# instance, some of the names that are used will be different. The list of all
+# members will be omitted, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_FOR_C  = YES
+
+# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
+# Python sources only. Doxygen will then generate output that is more tailored
+# for that language. For instance, namespaces will be presented as packages,
+# qualified scopes will look different, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_JAVA   = NO
+
+# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
+# sources. Doxygen will then generate output that is tailored for Fortran.
+# The default value is: NO.
+
+OPTIMIZE_FOR_FORTRAN   = NO
+
+# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
+# sources. Doxygen will then generate output that is tailored for VHDL.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_VHDL   = NO
+
+# Doxygen selects the parser to use depending on the extension of the files it
+# parses. With this tag you can assign which parser to use for a given
+# extension. Doxygen has a built-in mapping, but you can override or extend it
+# using this tag. The format is ext=language, where ext is a file extension, and
+# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
+# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
+# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
+# Fortran. In the latter case the parser tries to guess whether the code is fixed
+# or free formatted code, this is the default for Fortran type files), VHDL. For
+# instance to make doxygen treat .inc files as Fortran files (default is PHP),
+# and .f files as C (default is Fortran), use: inc=Fortran f=C.
+#
+# Note: For files without extension you can use no_extension as a placeholder.
+#
+# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
+# the files are not read by doxygen.
+
+EXTENSION_MAPPING      =
+
+# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
+# according to the Markdown format, which allows for more readable
+# documentation. See http://daringfireball.net/projects/markdown/ for details.
+# The output of markdown processing is further processed by doxygen, so you can
+# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
+# case of backward compatibility issues.
+# The default value is: YES.
+
+MARKDOWN_SUPPORT       = YES
+
+# When enabled doxygen tries to link words that correspond to documented
+# classes, or namespaces to their corresponding documentation. Such a link can
+# be prevented in individual cases by putting a % sign in front of the word or
+# globally by setting AUTOLINK_SUPPORT to NO.
+# The default value is: YES.
+
+AUTOLINK_SUPPORT       = YES
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
+# to include (a tag file for) the STL sources as input, then you should set this
+# tag to YES in order to let doxygen match function declarations and
+# definitions whose arguments contain STL classes (e.g. func(std::string);
+# versus func(std::string) {}). This also makes the inheritance and collaboration
+# diagrams that involve STL classes more complete and accurate.
+# The default value is: NO.
+
+BUILTIN_STL_SUPPORT    = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+# The default value is: NO.
+
+CPP_CLI_SUPPORT        = YES
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
+# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
+# will parse them like normal C++ but will assume all classes use public instead
+# of private inheritance when no explicit protection keyword is present.
+# The default value is: NO.
+
+SIP_SUPPORT            = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate
+# getter and setter methods for a property. Setting this option to YES will make
+# doxygen replace the get and set methods by a property in the documentation.
+# This will only work if the methods are indeed getting or setting a simple
+# type. If this is not the case, or you want to show the methods anyway, you
+# should set this option to NO.
+# The default value is: YES.
+
+IDL_PROPERTY_SUPPORT   = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+# The default value is: NO.
+
+DISTRIBUTE_GROUP_DOC   = NO
+
+# Set the SUBGROUPING tag to YES to allow class member groups of the same type
+# (for instance a group of public functions) to be put as a subgroup of that
+# type (e.g. under the Public Functions section). Set it to NO to prevent
+# subgrouping. Alternatively, this can be done per class using the
+# \nosubgrouping command.
+# The default value is: YES.
+
+SUBGROUPING            = YES
+
+# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
+# are shown inside the group in which they are included (e.g. using \ingroup)
+# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
+# and RTF).
+#
+# Note that this feature does not work in combination with
+# SEPARATE_MEMBER_PAGES.
+# The default value is: NO.
+
+INLINE_GROUPED_CLASSES = NO
+
+# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
+# with only public data fields or simple typedef fields will be shown inline in
+# the documentation of the scope in which they are defined (i.e. file,
+# namespace, or group documentation), provided this scope is documented. If set
+# to NO, structs, classes, and unions are shown on a separate page (for HTML and
+# Man pages) or section (for LaTeX and RTF).
+# The default value is: NO.
+
+INLINE_SIMPLE_STRUCTS  = YES
+
+# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
+# enum is documented as struct, union, or enum with the name of the typedef. So
+# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
+# with name TypeT. When disabled the typedef will appear as a member of a file,
+# namespace, or class. And the struct will be named TypeS. This can typically be
+# useful for C code in case the coding convention dictates that all compound
+# types are typedef'ed and only the typedef is referenced, never the tag name.
+# The default value is: NO.
+
+TYPEDEF_HIDES_STRUCT   = NO
+
+# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
+# cache is used to resolve symbols given their name and scope. Since this can be
+# an expensive process and often the same symbol appears multiple times in the
+# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
+# doxygen will become slower. If the cache is too large, memory is wasted. The
+# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
+# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
+# symbols. At the end of a run doxygen will report the cache usage and suggest
+# the optimal cache size from a speed point of view.
+# Minimum value: 0, maximum value: 9, default value: 0.
+
+LOOKUP_CACHE_SIZE      = 9
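The cache-size formula quoted in the comment above can be sanity-checked with a few lines of Python (illustrative only, not part of the patch; the function name is made up):

```python
def lookup_cache_symbols(n: int) -> int:
    """Number of cached symbols for a given LOOKUP_CACHE_SIZE,
    per the formula 2^(16 + LOOKUP_CACHE_SIZE)."""
    if not 0 <= n <= 9:
        raise ValueError("LOOKUP_CACHE_SIZE must be in 0..9")
    return 2 ** (16 + n)

print(lookup_cache_symbols(0))  # default 0 -> 65536 symbols
print(lookup_cache_symbols(9))  # value chosen here -> 33554432 symbols
```

So setting the tag to its maximum of 9 trades roughly 32M cached symbol slots of memory for faster symbol resolution on a large tree like Xen.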
+
+#---------------------------------------------------------------------------
+# Build related configuration options
+#---------------------------------------------------------------------------
+
+# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
+# documentation are documented, even if no documentation was available. Private
+# class members and static file members will be hidden unless the
+# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
+# Note: This will also disable the warnings about undocumented members that are
+# normally produced when WARNINGS is set to YES.
+# The default value is: NO.
+
+EXTRACT_ALL            = YES
+
+# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
+# be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PRIVATE        = NO
+
+# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
+# scope will be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PACKAGE        = YES
+
+# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
+# included in the documentation.
+# The default value is: NO.
+
+EXTRACT_STATIC         = YES
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
+# locally in source files will be included in the documentation. If set to NO,
+# only classes defined in header files are included. Does not have any effect
+# for Java sources.
+# The default value is: YES.
+
+EXTRACT_LOCAL_CLASSES  = YES
+
+# This flag is only useful for Objective-C code. If set to YES, local methods,
+# which are defined in the implementation section but not in the interface are
+# included in the documentation. If set to NO, only methods in the interface are
+# included.
+# The default value is: NO.
+
+EXTRACT_LOCAL_METHODS  = YES
+
+# If this flag is set to YES, the members of anonymous namespaces will be
+# extracted and appear in the documentation as a namespace called
+# 'anonymous_namespace{file}', where file will be replaced with the base name of
+# the file that contains the anonymous namespace. By default anonymous namespace
+# are hidden.
+# The default value is: NO.
+
+EXTRACT_ANON_NSPACES   = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
+# undocumented members inside documented classes or files. If set to NO these
+# members will be included in the various overviews, but no documentation
+# section is generated. This option has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_MEMBERS     = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy. If set
+# to NO, these classes will be included in the various overviews. This option
+# has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_CLASSES     = NO
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
+# (class|struct|union) declarations. If set to NO, these declarations will be
+# included in the documentation.
+# The default value is: NO.
+
+HIDE_FRIEND_COMPOUNDS  = NO
+
+# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
+# documentation blocks found inside the body of a function. If set to NO, these
+# blocks will be appended to the function's detailed documentation block.
+# The default value is: NO.
+
+HIDE_IN_BODY_DOCS      = NO
+
+# The INTERNAL_DOCS tag determines if documentation that is typed after a
+# \internal command is included. If the tag is set to NO then the documentation
+# will be excluded. Set it to YES to include the internal documentation.
+# The default value is: NO.
+
+INTERNAL_DOCS          = NO
+
+# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
+# names in lower-case letters. If set to YES, upper-case letters are also
+# allowed. This is useful if you have classes or files whose names only differ
+# in case and if your file system supports case sensitive file names. Windows
+# and Mac users are advised to set this option to NO.
+# The default value is: system dependent.
+
+CASE_SENSE_NAMES       = YES
+
+# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
+# their full class and namespace scopes in the documentation. If set to YES, the
+# scope will be hidden.
+# The default value is: NO.
+
+HIDE_SCOPE_NAMES       = NO
+
+# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
+# the files that are included by a file in the documentation of that file.
+# The default value is: YES.
+
+SHOW_INCLUDE_FILES     = YES
+
+# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
+# grouped member an include statement to the documentation, telling the reader
+# which file to include in order to use the member.
+# The default value is: NO.
+
+SHOW_GROUPED_MEMB_INC  = YES
+
+# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
+# files with double quotes in the documentation rather than with sharp brackets.
+# The default value is: NO.
+
+FORCE_LOCAL_INCLUDES   = NO
+
+# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
+# documentation for inline members.
+# The default value is: YES.
+
+INLINE_INFO            = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
+# (detailed) documentation of file and class members alphabetically by member
+# name. If set to NO, the members will appear in declaration order.
+# The default value is: YES.
+
+SORT_MEMBER_DOCS       = YES
+
+# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
+# descriptions of file, namespace and class members alphabetically by member
+# name. If set to NO, the members will appear in declaration order. Note that
+# this will also influence the order of the classes in the class list.
+# The default value is: NO.
+
+SORT_BRIEF_DOCS        = NO
+
+# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
+# (brief and detailed) documentation of class members so that constructors and
+# destructors are listed first. If set to NO the constructors will appear in the
+# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
+# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
+# member documentation.
+# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
+# detailed member documentation.
+# The default value is: NO.
+
+SORT_MEMBERS_CTORS_1ST = NO
+
+# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
+# of group names into alphabetical order. If set to NO the group names will
+# appear in their defined order.
+# The default value is: NO.
+
+SORT_GROUP_NAMES       = YES
+
+# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
+# fully-qualified names, including namespaces. If set to NO, the class list will
+# be sorted only by class name, not including the namespace part.
+# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
+# Note: This option applies only to the class list, not to the alphabetical
+# list.
+# The default value is: NO.
+
+SORT_BY_SCOPE_NAME     = YES
+
+# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
+# type resolution of all parameters of a function it will reject a match between
+# the prototype and the implementation of a member function even if there is
+# only one candidate or it is obvious which candidate to choose by doing a
+# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
+# accept a match between prototype and implementation in such cases.
+# The default value is: NO.
+
+STRICT_PROTO_MATCHING  = YES
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
+# list. This list is created by putting \todo commands in the documentation.
+# The default value is: YES.
+
+GENERATE_TODOLIST      = NO
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
+# list. This list is created by putting \test commands in the documentation.
+# The default value is: YES.
+
+GENERATE_TESTLIST      = NO
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
+# list. This list is created by putting \bug commands in the documentation.
+# The default value is: YES.
+
+GENERATE_BUGLIST       = NO
+
+# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
+# the deprecated list. This list is created by putting \deprecated commands in
+# the documentation.
+# The default value is: YES.
+
+GENERATE_DEPRECATEDLIST= YES
+
+# The ENABLED_SECTIONS tag can be used to enable conditional documentation
+# sections, marked by \if <section_label> ... \endif and \cond <section_label>
+# ... \endcond blocks.
+
+ENABLED_SECTIONS       = YES
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
+# initial value of a variable or macro / define can have for it to appear in the
+# documentation. If the initializer consists of more lines than specified here
+# it will be hidden. Use a value of 0 to hide initializers completely. The
+# appearance of the value of individual variables and macros / defines can be
+# controlled using \showinitializer or \hideinitializer command in the
+# documentation regardless of this setting.
+# Minimum value: 0, maximum value: 10000, default value: 30.
+
+MAX_INITIALIZER_LINES  = 300
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
+# the bottom of the documentation of classes and structs. If set to YES, the
+# list will mention the files that were used to generate the documentation.
+# The default value is: YES.
+
+SHOW_USED_FILES        = YES
+
+# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
+# will remove the Files entry from the Quick Index and from the Folder Tree View
+# (if specified).
+# The default value is: YES.
+
+SHOW_FILES             = YES
+
+# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
+# page. This will remove the Namespaces entry from the Quick Index and from the
+# Folder Tree View (if specified).
+# The default value is: YES.
+
+SHOW_NAMESPACES        = YES
+
+# The FILE_VERSION_FILTER tag can be used to specify a program or script that
+# doxygen should invoke to get the current version for each file (typically from
+# the version control system). Doxygen will invoke the program by executing (via
+# popen()) the command <command> <input-file>, where <command> is the value of
+# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
+# by doxygen. Whatever the program writes to standard output is used as the file
+# version. For an example see the documentation.
+
+FILE_VERSION_FILTER    =
+
+# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
+# by doxygen. The layout file controls the global structure of the generated
+# output files in an output format independent way. To create the layout file
+# that represents doxygen's defaults, run doxygen with the -l option. You can
+# optionally specify a file name after the option, if omitted DoxygenLayout.xml
+# will be used as the name of the layout file.
+#
+# Note that if you run doxygen from a directory containing a file called
+# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
+# tag is left empty.
+
+LAYOUT_FILE            =
+
+# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
+# the reference definitions. This must be a list of .bib files. The .bib
+# extension is automatically appended if omitted. This requires the bibtex tool
+# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
+# For LaTeX the style of the bibliography can be controlled using
+# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
+# search path. See also \cite for info how to create references.
+
+CITE_BIB_FILES         =
+
+#---------------------------------------------------------------------------
+# Configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated to
+# standard output by doxygen. If QUIET is set to YES this implies that the
+# messages are off.
+# The default value is: NO.
+
+QUIET                  = YES
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are
+# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
+# this implies that the warnings are on.
+#
+# Tip: Turn warnings on while writing the documentation.
+# The default value is: YES.
+
+WARNINGS               = YES
+
+# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
+# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
+# will automatically be disabled.
+# The default value is: YES.
+
+WARN_IF_UNDOCUMENTED   = YES
+
+# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
+# potential errors in the documentation, such as not documenting some parameters
+# in a documented function, or documenting parameters that don't exist or using
+# markup commands wrongly.
+# The default value is: YES.
+
+WARN_IF_DOC_ERROR      = YES
+
+# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
+# are documented, but have no documentation for their parameters or return
+# value. If set to NO, doxygen will only warn about wrong or incomplete
+# parameter documentation, but not about the absence of documentation.
+# The default value is: NO.
+
+WARN_NO_PARAMDOC       = NO
+
+# The WARN_FORMAT tag determines the format of the warning messages that doxygen
+# can produce. The string should contain the $file, $line, and $text tags, which
+# will be replaced by the file and line number from which the warning originated
+# and the warning text. Optionally the format may contain $version, which will
+# be replaced by the version of the file (if it could be obtained via
+# FILE_VERSION_FILTER)
+# The default value is: $file:$line: $text.
+
+WARN_FORMAT            = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning and error
+# messages should be written. If left blank the output is written to standard
+# error (stderr).
+
+WARN_LOGFILE           =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag is used to specify the files and/or directories that contain
+# documented source files. You may enter file names like myfile.cpp or
+# directories like /usr/src/myproject. Separate the files or directories with
+# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
+# Note: If this tag is empty the current directory is searched.
+
+INPUT                  = "@XEN_BASE@/docs/xen-doxygen/mainpage.md"
+
+# This tag can be used to specify the character encoding of the source files
+# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
+# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
+# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
+# possible encodings.
+# The default value is: UTF-8.
+
+INPUT_ENCODING         = UTF-8
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# read by doxygen.
+#
+# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
+# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
+# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
+# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
+# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf.
+
+# This MUST be kept in sync with DOXY_SOURCES in doc/CMakeLists.txt
+# for incremental (and faster) builds to work correctly.
+FILE_PATTERNS          = "*.c" \
+                         "*.h" \
+                         "*.S" \
+                         "*.md"
+
+# The RECURSIVE tag can be used to specify whether or not subdirectories should
+# be searched for input files as well.
+# The default value is: NO.
+
+RECURSIVE              = YES
+
+# The EXCLUDE tag can be used to specify files and/or directories that should be
+# excluded from the INPUT source files. This way you can easily exclude a
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+#
+# Note that relative paths are relative to the directory from which doxygen is
+# run.
+
+EXCLUDE                = @XEN_BASE@/include/nothing.h
+
+# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
+# directories that are symbolic links (a Unix file system feature) are excluded
+# from the input.
+# The default value is: NO.
+
+EXCLUDE_SYMLINKS       = NO
+
+# If the value of the INPUT tag contains directories, you can use the
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
+# certain files from those directories.
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories for example use the pattern */test/*
+
+EXCLUDE_PATTERNS       =
+
+# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
+# (namespaces, classes, functions, etc.) that should be excluded from the
+# output. The symbol name can be a fully qualified name, a word, or if the
+# wildcard * is used, a substring. Examples: ANamespace, AClass,
+# AClass::ANamespace, ANamespace::*Test
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories use the pattern */test/*
+
+# Hide internal names (starting with an underscore) and doxygen-generated names
+# for nested unnamed unions, which don't generate meaningful Sphinx output
+# anyway.
+EXCLUDE_SYMBOLS        =
+# _*  *.__unnamed__ z_* Z_*
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or directories
+# that contain example code fragments that are included (see the \include
+# command).
+
+EXAMPLE_PATH           =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
+# *.h) to filter out the source-files in the directories. If left blank all
+# files are included.
+
+EXAMPLE_PATTERNS       =
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude commands
+# irrespective of the value of the RECURSIVE tag.
+# The default value is: NO.
+
+EXAMPLE_RECURSIVE      = YES
+
+# The IMAGE_PATH tag can be used to specify one or more files or directories
+# that contain images that are to be included in the documentation (see the
+# \image command).
+
+IMAGE_PATH             =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command:
+#
+# <filter> <input-file>
+#
+# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
+# name of an input file. Doxygen will then use the output that the filter
+# program writes to standard output. If FILTER_PATTERNS is specified, this tag
+# will be ignored.
+#
+# Note that the filter must not add or remove lines; it is applied before the
+# code is scanned, but not when the output code is generated. If lines are added
+# or removed, the anchors will not be placed correctly.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# properly processed by doxygen.
+
+INPUT_FILTER           =
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
+# basis. Doxygen will compare the file name with each pattern and apply the
+# filter if there is a match. The filters are a list of the form: pattern=filter
+# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
+# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
+# patterns match the file name, INPUT_FILTER is applied.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# properly processed by doxygen.
+
+FILTER_PATTERNS     = *.h="\"@XEN_BASE@/docs/xen-doxygen/doxy-preprocessor.py\""
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER) will also be used to filter the input files that are used for
+# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
+# The default value is: NO.
+
+FILTER_SOURCE_FILES    = NO
+
+# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
+# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
+# it is also possible to disable source filtering for a specific pattern using
+# *.ext= (so without naming a filter).
+# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
+
+FILTER_SOURCE_PATTERNS =
+
+# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
+# is part of the input, its contents will be placed on the main page
+# (index.html). This can be useful if you have a project on for instance GitHub
+# and want to reuse the introduction page also for the doxygen output.
+
+USE_MDFILE_AS_MAINPAGE = "mainpage.md"
+
+#---------------------------------------------------------------------------
+# Configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
+# generated. Documented entities will be cross-referenced with these sources.
+#
+# Note: To get rid of all source code in the generated output, make sure that
+# also VERBATIM_HEADERS is set to NO.
+# The default value is: NO.
+
+SOURCE_BROWSER         = NO
+
+# Setting the INLINE_SOURCES tag to YES will include the body of functions,
+# classes and enums directly into the documentation.
+# The default value is: NO.
+
+INLINE_SOURCES         = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
+# special comment blocks from generated source code fragments. Normal C, C++ and
+# Fortran comments will always remain visible.
+# The default value is: YES.
+
+STRIP_CODE_COMMENTS    = YES
+
+# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
+# function all documented functions referencing it will be listed.
+# The default value is: NO.
+
+REFERENCED_BY_RELATION = NO
+
+# If the REFERENCES_RELATION tag is set to YES then for each documented function
+# all documented entities called/used by that function will be listed.
+# The default value is: NO.
+
+REFERENCES_RELATION    = NO
+
+# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
+# to YES then the hyperlinks from functions in REFERENCES_RELATION and
+# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
+# link to the documentation.
+# The default value is: YES.
+
+REFERENCES_LINK_SOURCE = YES
+
+# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
+# source code will show a tooltip with additional information such as prototype,
+# brief description and links to the definition and documentation. Since this
+# will make the HTML file larger and loading of large files a bit slower, you
+# can opt to disable this feature.
+# The default value is: YES.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+SOURCE_TOOLTIPS        = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code will
+# point to the HTML generated by the htags(1) tool instead of doxygen built-in
+# source browser. The htags tool is part of GNU's global source tagging system
+# (see https://www.gnu.org/software/global/global.html). You will need version
+# 4.8.6 or higher.
+#
+# To use it do the following:
+# - Install the latest version of global
+# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
+# - Make sure the INPUT points to the root of the source tree
+# - Run doxygen as normal
+#
+# Doxygen will invoke htags (and that will in turn invoke gtags), so these
+# tools must be available from the command line (i.e. in the search path).
+#
+# The result: instead of the source browser generated by doxygen, the links to
+# source code will now point to the output of htags.
+# The default value is: NO.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+USE_HTAGS              = NO
+
+# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
+# verbatim copy of the header file for each class for which an include is
+# specified. Set to NO to disable this.
+# See also: Section \class.
+# The default value is: YES.
+
+VERBATIM_HEADERS       = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
+# compounds will be generated. Enable this if the project contains a lot of
+# classes, structs, unions or interfaces.
+# The default value is: YES.
+
+ALPHABETICAL_INDEX     = YES
+
+# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
+# which the alphabetical index list will be split.
+# Minimum value: 1, maximum value: 20, default value: 5.
+# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
+
+COLS_IN_ALPHA_INDEX    = 2
+
+# In case all classes in a project start with a common prefix, all classes will
+# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
+# can be used to specify a prefix (or a list of prefixes) that should be ignored
+# while generating the index headers.
+# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
+
+IGNORE_PREFIX          =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
+# The default value is: YES.
+
+GENERATE_HTML          = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: html.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_OUTPUT            = html
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
+# generated HTML page (for example: .htm, .php, .asp).
+# The default value is: .html.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FILE_EXTENSION    = .html
+
+# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
+# each generated HTML page. If the tag is left blank doxygen will generate a
+# standard header.
+#
+# To get valid HTML, the header file must include any scripts and style sheets
+# that doxygen needs, which depend on the configuration options used (e.g. the
+# setting GENERATE_TREEVIEW). It is highly recommended to start with a
+# default header using
+# doxygen -w html new_header.html new_footer.html new_stylesheet.css
+# YourConfigFile
+# and then modify the file new_header.html. See also section "Doxygen usage"
+# for information on how to generate the default header that doxygen normally
+# uses.
+# Note: The header is subject to change so you typically have to regenerate the
+# default header when upgrading to a newer version of doxygen. For a description
+# of the possible markers and block names see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_HEADER            = xen-doxygen/header.html
+
+# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
+# generated HTML page. If the tag is left blank doxygen will generate a standard
+# footer. See HTML_HEADER for more information on how to generate a default
+# footer and what special commands can be used inside the footer. See also
+# section "Doxygen usage" for information on how to generate the default footer
+# that doxygen normally uses.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FOOTER            =
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
+# sheet that is used by each HTML page. It can be used to fine-tune the look of
+# the HTML output. If left blank doxygen will generate a default style sheet.
+# See also section "Doxygen usage" for information on how to generate the style
+# sheet that doxygen normally uses.
+# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
+# it is more robust and this tag (HTML_STYLESHEET) will in the future become
+# obsolete.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_STYLESHEET        =
+
+# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
+# cascading style sheets that are included after the standard style sheets
+# created by doxygen. Using this option one can overrule certain style aspects.
+# This is preferred over using HTML_STYLESHEET since it does not replace the
+# standard style sheet and is therefore more robust against future updates.
+# Doxygen will copy the style sheet files to the output directory.
+# Note: The order of the extra style sheet files is of importance (e.g. the last
+# style sheet in the list overrules the setting of the previous ones in the
+# list). For an example see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_EXTRA_STYLESHEET  = xen-doxygen/customdoxygen.css
+
+# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the HTML output directory. Note
+# that these files will be copied to the base HTML output directory. Use the
+# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
+# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
+# files will be copied as-is; there are no commands or markers available.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_EXTRA_FILES       =
+
+# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
+# will adjust the colors in the style sheet and background images according to
+# this color. Hue is specified as an angle on a colorwheel, see
+# https://en.wikipedia.org/wiki/Hue for more information. For instance the value
+# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
+# purple, and 360 is red again.
+# Minimum value: 0, maximum value: 359, default value: 220.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_HUE    =
+
+# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
+# in the HTML output. For a value of 0 the output will use grayscales only. A
+# value of 255 will produce the most vivid colors.
+# Minimum value: 0, maximum value: 255, default value: 100.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_SAT    =
+
+# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
+# luminance component of the colors in the HTML output. Values below 100
+# gradually make the output lighter, whereas values above 100 make the output
+# darker. The value divided by 100 is the actual gamma applied, so 80 represents
+# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not
+# change the gamma.
+# Minimum value: 40, maximum value: 240, default value: 80.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_GAMMA  =
+
+# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
+# page will contain the date and time when the page was generated. Setting this
+# to YES can help to show when doxygen was last run and thus if the
+# documentation is up to date.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_TIMESTAMP         = YES
+
+# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
+# documentation will contain sections that can be hidden and shown after the
+# page has loaded.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_DYNAMIC_SECTIONS  = YES
+
+# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
+# shown in the various tree structured indices initially; the user can expand
+# and collapse entries dynamically later on. Doxygen will expand the tree to
+# such a level that at most the specified number of entries are visible (unless
+# a fully collapsed tree already exceeds this amount). So setting the number of
+# entries 1 will produce a full collapsed tree by default. 0 is a special value
+# representing an infinite number of entries and will result in a full expanded
+# tree by default.
+# Minimum value: 0, maximum value: 9999, default value: 100.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_INDEX_NUM_ENTRIES = 100
+
+# If the GENERATE_DOCSET tag is set to YES, additional index files will be
+# generated that can be used as input for Apple's Xcode 3 integrated development
+# environment (see: https://developer.apple.com/tools/xcode/), introduced with
+# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
+# Makefile in the HTML output directory. Running make will produce the docset in
+# that directory and running make install will install the docset in
+# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
+# startup. See https://developer.apple.com/tools/creatingdocsetswithdoxygen.html
+# for more information.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_DOCSET        = YES
+
+# This tag determines the name of the docset feed. A documentation feed provides
+# an umbrella under which multiple documentation sets from a single provider
+# (such as a company or product suite) can be grouped.
+# The default value is: Doxygen generated docs.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_FEEDNAME        = "Doxygen generated docs"
+
+# This tag specifies a string that should uniquely identify the documentation
+# set bundle. This should be a reverse domain-name style string, e.g.
+# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_BUNDLE_ID       = org.doxygen.Project
+
+# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
+# the documentation publisher. This should be a reverse domain-name style
+# string, e.g. com.mycompany.MyDocSet.documentation.
+# The default value is: org.doxygen.Publisher.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_PUBLISHER_ID    = org.doxygen.Publisher
+
+# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
+# The default value is: Publisher.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_PUBLISHER_NAME  = Publisher
+
+# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
+# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
+# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
+# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
+# Windows.
+#
+# The HTML Help Workshop contains a compiler that can convert all HTML output
+# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
+# files have been the standard Windows help format since Windows 98, replacing
+# the older Windows help format (.hlp). Compressed HTML files also contain an
+# index and a table of contents, and allow searching for words in the
+# documentation. The HTML Help Workshop also contains a viewer for
+# compressed HTML files.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_HTMLHELP      = NO
+
+# The CHM_FILE tag can be used to specify the file name of the resulting .chm
+# file. You can add a path in front of the file if the result should not be
+# written to the html output directory.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_FILE               =
+
+# The HHC_LOCATION tag can be used to specify the location (absolute path
+# including file name) of the HTML help compiler (hhc.exe). If non-empty,
+# doxygen will try to run the HTML help compiler on the generated index.hhp.
+# The file has to be specified with full path.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+HHC_LOCATION           =
+
+# The GENERATE_CHI flag controls whether a separate .chi index file is
+# generated (YES) or the index is included in the master .chm file (NO).
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+GENERATE_CHI           = NO
+
+# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
+# and project file content.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_INDEX_ENCODING     =
+
+# The BINARY_TOC flag controls whether a binary table of contents is generated
+# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
+# enables the Previous and Next buttons.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+BINARY_TOC             = YES
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members to
+# the table of contents of the HTML help documentation and to the tree view.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+TOC_EXPAND             = NO
+
+# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
+# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
+# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
+# (.qch) of the generated HTML documentation.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_QHP           = NO
+
+# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
+# the file name of the resulting .qch file. The path specified is relative to
+# the HTML output folder.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QCH_FILE               =
+
+# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
+# Project output. For more information please see Qt Help Project / Namespace
+# (see: http://doc.qt.io/qt-4.8/qthelpproject.html#namespace).
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_NAMESPACE          = org.doxygen.Project
+
+# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
+# Help Project output. For more information please see Qt Help Project / Virtual
+# Folders (see: http://doc.qt.io/qt-4.8/qthelpproject.html#virtual-folders).
+# The default value is: doc.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_VIRTUAL_FOLDER     = doc
+
+# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
+# filter to add. For more information please see Qt Help Project / Custom
+# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_CUST_FILTER_NAME   =
+
+# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
+# custom filter to add. For more information please see Qt Help Project / Custom
+# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_CUST_FILTER_ATTRS  =
+
+# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
+# project's filter section matches. For more information please see Qt Help
+# Project / Filter Attributes (see:
+# http://doc.qt.io/qt-4.8/qthelpproject.html#filter-attributes).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_SECT_FILTER_ATTRS  =
+
+# The QHG_LOCATION tag can be used to specify the location of Qt's
+# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
+# generated .qhp file.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHG_LOCATION           =
+
+# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
+# generated; together with the HTML files, they form an Eclipse help plugin. To
+# install this plugin and make it available under the help contents menu in
+# Eclipse, the contents of the directory containing the HTML and XML files need
+# to be copied into the plugins directory of Eclipse. The name of the directory
+# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
+# After copying, Eclipse needs to be restarted before the help appears.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_ECLIPSEHELP   = NO
+
+# A unique identifier for the Eclipse help plugin. When installing the plugin
+# the directory name containing the HTML and XML files should also have this
+# name. Each documentation set should have its own identifier.
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
+
+ECLIPSE_DOC_ID         = org.doxygen.Project
+
+# If you want full control over the layout of the generated HTML pages it might
+# be necessary to disable the index and replace it with your own. The
+# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
+# of each HTML page. A value of NO enables the index and the value YES disables
+# it. Since the tabs in the index contain the same information as the navigation
+# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+DISABLE_INDEX          = NO
+
+# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
+# structure should be generated to display hierarchical information. If the tag
+# value is set to YES, a side panel will be generated containing a tree-like
+# index structure (just like the one that is generated for HTML Help). For this
+# to work a browser that supports JavaScript, DHTML, CSS and frames is required
+# (i.e. any modern browser). Windows users are probably better off using the
+# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
+# further fine-tune the look of the index. As an example, the default style
+# sheet generated by doxygen has an example that shows how to put an image at
+# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
+# the same information as the tab index, you could consider setting
+# DISABLE_INDEX to YES when enabling this option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_TREEVIEW      = YES
+
+# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
+# doxygen will group on one line in the generated HTML documentation.
+#
+# Note that a value of 0 will completely suppress the enum values from appearing
+# in the overview section.
+# Minimum value: 0, maximum value: 20, default value: 4.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+ENUM_VALUES_PER_LINE   = 4
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
+# to set the initial width (in pixels) of the frame in which the tree is shown.
+# Minimum value: 0, maximum value: 1500, default value: 250.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+TREEVIEW_WIDTH         = 250
+
+# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
+# external symbols imported via tag files in a separate window.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+EXT_LINKS_IN_WINDOW    = NO
+
+# Use this tag to change the font size of LaTeX formulas included as images in
+# the HTML documentation. When you change the font size after a successful
+# doxygen run you need to manually remove any form_*.png images from the HTML
+# output directory to force them to be regenerated.
+# Minimum value: 8, maximum value: 50, default value: 10.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+FORMULA_FONTSIZE       = 10
+
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
+# generated for formulas are transparent PNGs. Transparent PNGs are not
+# supported properly for IE 6.0, but are supported on all modern browsers.
+#
+# Note that when changing this option you need to delete any form_*.png files in
+# the HTML output directory before the changes have effect.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+FORMULA_TRANSPARENT    = YES
+
+# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
+# https://www.mathjax.org) which uses client-side JavaScript for the rendering
+# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
+# installed or if you want the formulas to look prettier in the HTML output.
+# When
+# enabled you may also need to install MathJax separately and configure the path
+# to it using the MATHJAX_RELPATH option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+USE_MATHJAX            = NO
+
+# When MathJax is enabled you can set the default output format to be used for
+# the MathJax output. See the MathJax site (see:
+# http://docs.mathjax.org/en/latest/output.html) for more details.
+# Possible values are: HTML-CSS (which is slower, but has the best
+# compatibility), NativeMML (i.e. MathML) and SVG.
+# The default value is: HTML-CSS.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_FORMAT         = HTML-CSS
+
+# When MathJax is enabled you need to specify the location relative to the HTML
+# output directory using the MATHJAX_RELPATH option. The destination directory
+# should contain the MathJax.js script. For instance, if the mathjax directory
+# is located at the same level as the HTML output directory, then
+# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
+# Content Delivery Network so you can quickly see the result without installing
+# MathJax. However, it is strongly recommended to install a local copy of
+# MathJax from https://www.mathjax.org before deployment.
+# The default value is: http://cdn.mathjax.org/mathjax/latest.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest
+
+# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
+# extension names that should be enabled during MathJax rendering. For example
+# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_EXTENSIONS     =
+
+# The MATHJAX_CODEFILE tag can be used to specify a file with JavaScript pieces
+# of code that will be used on startup of the MathJax code. See the MathJax site
+# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
+# example see the documentation.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_CODEFILE       =
+
+# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
+# the HTML output. The underlying search engine uses JavaScript and DHTML and
+# should work on any modern browser. Note that when using HTML help
+# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
+# there is already a search function, so this one should typically be disabled.
+# For large projects the JavaScript-based search engine can be slow; in that
+# case enabling SERVER_BASED_SEARCH may provide a better solution. It is
+# possible to
+# search using the keyboard; to jump to the search box use <access key> + S
+# (what the <access key> is depends on the OS and browser, but it is typically
+# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
+# key> to jump into the search results window, the results can be navigated
+# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
+# the search. The filter options can be selected when the cursor is inside the
+# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
+# to select a filter and <Enter> or <escape> to activate or cancel the filter
+# option.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+SEARCHENGINE           = YES
+
+# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
+# implemented using a web server instead of a web client using JavaScript.
+# There
+# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
+# setting. When disabled, doxygen will generate a PHP script for searching and
+# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
+# and searching needs to be provided by external tools. See the section
+# "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SERVER_BASED_SEARCH    = NO
+
+# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
+# script for searching. Instead the search results are written to an XML file
+# which needs to be processed by an external indexer. Doxygen will invoke an
+# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
+# search results.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see: https://xapian.org/).
+#
+# See the section "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH        = NO
+
+# The SEARCHENGINE_URL should point to a search engine hosted by a web server
+# which will return the search results when EXTERNAL_SEARCH is enabled.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see: https://xapian.org/). See the section "External Indexing and
+# Searching" for details.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHENGINE_URL       =
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
+# search data is written to a file for indexing by an external tool. With the
+# SEARCHDATA_FILE tag the name of this file can be specified.
+# The default file is: searchdata.xml.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHDATA_FILE        = searchdata.xml
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
+# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
+# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
+# projects and redirect the results back to the right project.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH_ID     =
+
+# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
+# projects other than the one defined by this configuration file, but that are
+# all added to the same external search index. Each project needs to have a
+# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
+# a project to a relative location where the documentation can be found. The
+# format is:
+# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTRA_SEARCH_MAPPINGS  =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
+# The default value is: YES.
+
+GENERATE_LATEX         = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: latex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_OUTPUT           = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
+# invoked.
+#
+# Note that when enabling USE_PDFLATEX this option is only used for generating
+# bitmaps for formulas in the HTML output, but not in the Makefile that is
+# written to the output directory.
+# The default file is: latex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_CMD_NAME         = latex
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
+# index for LaTeX.
+# The default file is: makeindex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+MAKEINDEX_CMD_NAME     = makeindex
+
+# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+COMPACT_LATEX          = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used by the
+# printer.
+# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
+# 14 inches) and executive (7.25 x 10.5 inches).
+# The default value is: a4.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PAPER_TYPE             = a4
+
+# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
+# that should be included in the LaTeX output. The package can be specified just
+# by its name or with the correct syntax as to be used with the LaTeX
+# \usepackage command. To get the times font, for instance, you can specify:
+# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
+# To use the option intlimits with the amsmath package you can specify:
+# EXTRA_PACKAGES=[intlimits]{amsmath}
+# If left blank no extra packages will be included.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+EXTRA_PACKAGES         =
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
+# generated LaTeX document. The header should contain everything until the first
+# chapter. If it is left blank doxygen will generate a standard header. See
+# section "Doxygen usage" for information on how to let doxygen write the
+# default header to a separate file.
+#
+# Note: Only use a user-defined header if you know what you are doing! The
+# following commands have a special meaning inside the header: $title,
+# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
+# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
+# string, for the replacement values of the other commands the user is referred
+# to HTML_HEADER.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HEADER           =
+
+# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
+# generated LaTeX document. The footer should contain everything after the last
+# chapter. If it is left blank doxygen will generate a standard footer. See
+# LATEX_HEADER for more information on how to generate a default footer and what
+# special commands can be used inside the footer.
+#
+# Note: Only use a user-defined footer if you know what you are doing!
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_FOOTER           =
+
+# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the LATEX_OUTPUT output
+# directory. Note that the files will be copied as-is; there are no commands or
+# markers available.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EXTRA_FILES      =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
+# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
+# contain links (just like the HTML output) instead of page references. This
+# makes the output suitable for online browsing using a PDF viewer.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PDF_HYPERLINKS         = YES
+
+# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
+# the PDF file directly from the LaTeX files. Set this option to YES, to get a
+# higher quality PDF documentation.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+USE_PDFLATEX           = YES
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
+# command to the generated LaTeX files. This will instruct LaTeX to keep running
+# if errors occur, instead of asking the user for help. This option is also used
+# when generating formulas in HTML.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BATCHMODE        = NO
+
+# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
+# index chapters (such as File Index, Compound Index, etc.) in the output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HIDE_INDICES     = NO
+
+# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
+# code with syntax highlighting in the LaTeX output.
+#
+# Note that which sources are shown also depends on other settings such as
+# SOURCE_BROWSER.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_SOURCE_CODE      = NO
+
+# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
+# bibliography, e.g. plainnat, or ieeetr. See
+# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
+# The default value is: plain.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BIB_STYLE        = plain
+
+#---------------------------------------------------------------------------
+# Configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
+# RTF output is optimized for Word 97 and may not look too pretty with other RTF
+# readers/editors.
+# The default value is: NO.
+
+GENERATE_RTF           = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: rtf.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_OUTPUT             = rtf
+
+# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+COMPACT_RTF            = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
+# contain hyperlink fields. The RTF file will contain links (just like the HTML
+# output) instead of page references. This makes the output suitable for online
+# browsing using Word or some other Word compatible readers that support those
+# fields.
+#
+# Note: WordPad (write) and others do not support links.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_HYPERLINKS         = YES
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's config
+# file, i.e. a series of assignments. You only have to provide replacements,
+# missing definitions are set to their default value.
+#
+# See also section "Doxygen usage" for information on how to generate the
+# default style sheet that doxygen normally uses.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_STYLESHEET_FILE    =
+
+# Set optional variables used in the generation of an RTF document. Syntax is
+# similar to doxygen's config file. A template extensions file can be generated
+# using doxygen -e rtf extensionFile.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_EXTENSIONS_FILE    =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
+# classes and files.
+# The default value is: NO.
+
+GENERATE_MAN           = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it. A directory man3 will be created inside the directory specified by
+# MAN_OUTPUT.
+# The default directory is: man.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_OUTPUT             = man
+
+# The MAN_EXTENSION tag determines the extension that is added to the generated
+# man pages. In case the manual section does not start with a number, the number
+# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
+# optional.
+# The default value is: .3.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_EXTENSION          = .3
+
+# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
+# will generate one additional man file for each entity documented in the real
+# man page(s). These additional files only source the real man page, but without
+# them the man command would be unable to find the correct page.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_LINKS              = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
+# captures the structure of the code including all documentation.
+# The default value is: NO.
+
+GENERATE_XML           = YES
+
+# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: xml.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_OUTPUT             = xml
+
+# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
+# listings (including syntax highlighting and cross-referencing information) to
+# the XML output. Note that enabling this will significantly increase the size
+# of the XML output.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_PROGRAMLISTING     = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the DOCBOOK output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
+# that can be used to generate PDF.
+# The default value is: NO.
+
+GENERATE_DOCBOOK       = NO
+
+# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
+# front of it.
+# The default directory is: docbook.
+# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
+
+DOCBOOK_OUTPUT         = docbook
+
+#---------------------------------------------------------------------------
+# Configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
+# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
+# the structure of the code including all documentation. Note that this feature
+# is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_AUTOGEN_DEF   = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the Perl module output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
+# file that captures the structure of the code including all documentation.
+#
+# Note that this feature is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_PERLMOD       = NO
+
+# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
+# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
+# output from the Perl module output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_LATEX          = NO
+
+# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
+# formatted so it can be parsed by a human reader. This is useful if you want to
+# understand what is going on. On the other hand, if this tag is set to NO, the
+# size of the Perl module output will be much smaller and Perl will parse it
+# just the same.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_PRETTY         = YES
+
+# The names of the make variables in the generated doxyrules.make file are
+# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
+# so different doxyrules.make files included by the same Makefile don't
+# overwrite each other's variables.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_MAKEVAR_PREFIX =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
+# C-preprocessor directives found in the sources and include files.
+# The default value is: YES.
+
+ENABLE_PREPROCESSING   = YES
+
+# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
+# in the source code. If set to NO, only conditional compilation will be
+# performed. Macro expansion can be done in a controlled way by setting
+# EXPAND_ONLY_PREDEF to YES.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+MACRO_EXPANSION        = YES
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
+# the macro expansion is limited to the macros specified with the PREDEFINED and
+# EXPAND_AS_DEFINED tags.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_ONLY_PREDEF     = NO
+
+# If the SEARCH_INCLUDES tag is set to YES, the include files in the
+# INCLUDE_PATH will be searched if a #include is found.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SEARCH_INCLUDES        = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that
+# contain include files that are not input files but should be processed by the
+# preprocessor.
+# This tag requires that the tag SEARCH_INCLUDES is set to YES.
+
+INCLUDE_PATH           = "@XEN_BASE@/xen/include/generated" \
+                         "@XEN_BASE@/xen/include/"
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
+# patterns (like *.h and *.hpp) to filter out the header-files in the
+# directories. If left blank, the patterns specified with FILE_PATTERNS will be
+# used.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+INCLUDE_FILE_PATTERNS  =
+
+# The PREDEFINED tag can be used to specify one or more macro names that are
+# defined before the preprocessor is started (similar to the -D option of e.g.
+# gcc). The argument of the tag is a list of macros of the form: name or
+# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
+# is assumed. To prevent a macro definition from being undefined via #undef or
+# recursively expanded use the := operator instead of the = operator.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+PREDEFINED             = __attribute__(x)= \
+                         DOXYGEN \
+                         __XEN__
+
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
+# tag can be used to specify a list of macro names that should be expanded. The
+# macro definition that is found in the sources will be used. Use the PREDEFINED
+# tag if you want to use a different macro definition that overrules the
+# definition found in the source code.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_AS_DEFINED      =
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
+# remove all references to function-like macros that are alone on a line, have
+# an all uppercase name, and do not end with a semicolon. Such function macros
+# are typically used for boiler-plate code, and will confuse the parser if not
+# removed.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SKIP_FUNCTION_MACROS   = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to external references
+#---------------------------------------------------------------------------
+
+# The TAGFILES tag can be used to specify one or more tag files. For each tag
+# file the location of the external documentation should be added. The format of
+# a tag file without this location is as follows:
+# TAGFILES = file1 file2 ...
+# Adding location for the tag files is done as follows:
+# TAGFILES = file1=loc1 "file2 = loc2" ...
+# where loc1 and loc2 can be relative or absolute paths or URLs. See the
+# section "Linking to external documentation" for more information about the use
+# of tag files.
+# Note: Each tag file must have a unique name (where the name does NOT include
+# the path). If a tag file is not located in the directory in which doxygen is
+# run, you must also specify the path to the tagfile here.
+
+TAGFILES               =
+
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
+# tag file that is based on the input files it reads. See section "Linking to
+# external documentation" for more information about the usage of tag files.
+
+GENERATE_TAGFILE       =
+
+# If the ALLEXTERNALS tag is set to YES, all external class will be listed in
+# the class index. If set to NO, only the inherited external classes will be
+# listed.
+# The default value is: NO.
+
+ALLEXTERNALS           = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
+# in the modules index. If set to NO, only the current project's groups will be
+# listed.
+# The default value is: YES.
+
+EXTERNAL_GROUPS        = YES
+
+# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
+# the related pages index. If set to NO, only the current project's pages will
+# be listed.
+# The default value is: YES.
+
+EXTERNAL_PAGES         = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool
+#---------------------------------------------------------------------------
+
+# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
+# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
+# NO turns the diagrams off. Note that this option also works with HAVE_DOT
+# disabled, but it is recommended to install and use dot, since it yields more
+# powerful graphs.
+# The default value is: YES.
+
+CLASS_DIAGRAMS         = NO
+
+# You can include diagrams made with dia in doxygen documentation. Doxygen will
+# then run dia to produce the diagram and insert it in the documentation. The
+# DIA_PATH tag allows you to specify the directory where the dia binary resides.
+# If left empty dia is assumed to be found in the default search path.
+
+DIA_PATH               =
+
+# If set to YES the inheritance and collaboration graphs will hide inheritance
+# and usage relations if the target is undocumented or is not a class.
+# The default value is: YES.
+
+HIDE_UNDOC_RELATIONS   = YES
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
+# available from the path. This tool is part of Graphviz (see:
+# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
+# Bell Labs. The other options in this section have no effect if this option is
+# set to NO
+# The default value is: NO.
+
+HAVE_DOT               = NO
+
+# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
+# to run in parallel. When set to 0 doxygen will base this on the number of
+# processors available in the system. You can set it explicitly to a value
+# larger than 0 to get control over the balance between CPU load and processing
+# speed.
+# Minimum value: 0, maximum value: 32, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_NUM_THREADS        = 0
+
+# When you want a differently looking font in the dot files that doxygen
+# generates you can specify the font name using DOT_FONTNAME. You need to make
+# sure dot is able to find the font, which can be done by putting it in a
+# standard location or by setting the DOTFONTPATH environment variable or by
+# setting DOT_FONTPATH to the directory containing the font.
+# The default value is: Helvetica.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTNAME           = Helvetica
+
+# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
+# dot graphs.
+# Minimum value: 4, maximum value: 24, default value: 10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTSIZE           = 10
+
+# By default doxygen will tell dot to use the default font as specified with
+# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
+# the path where dot can find it using this tag.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTPATH           =
+
+# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
+# each documented class showing the direct and indirect inheritance relations.
+# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CLASS_GRAPH            = YES
+
+# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
+# graph for each documented class showing the direct and indirect implementation
+# dependencies (inheritance, containment, and class references variables) of the
+# class with other documented classes.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+COLLABORATION_GRAPH    = YES
+
+# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
+# groups, showing the direct groups dependencies.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GROUP_GRAPHS           = YES
+
+# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
+# collaboration diagrams in a style similar to the OMG's Unified Modeling
+# Language.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+UML_LOOK               = NO
+
+# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
+# class node. If there are many fields or methods and many nodes the graph may
+# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
+# number of items for each type to make the size more manageable. Set this to 0
+# for no limit. Note that the threshold may be exceeded by 50% before the limit
+# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
+# but if the number exceeds 15, the total amount of fields shown is limited to
+# 10.
+# Minimum value: 0, maximum value: 100, default value: 10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+UML_LIMIT_NUM_FIELDS   = 10
+
+# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
+# collaboration graphs will show the relations between templates and their
+# instances.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+TEMPLATE_RELATIONS     = NO
+
+# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
+# YES then doxygen will generate a graph for each documented file showing the
+# direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDE_GRAPH          = YES
+
+# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
+# set to YES then doxygen will generate a graph for each documented file showing
+# the direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDED_BY_GRAPH      = YES
+
+# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable call graphs for selected
+# functions only using the \callgraph command. Disabling a call graph can be
+# accomplished by means of the command \hidecallgraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALL_GRAPH             = NO
+
+# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable caller graphs for selected
+# functions only using the \callergraph command. Disabling a caller graph can be
+# accomplished by means of the command \hidecallergraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALLER_GRAPH           = NO
+
+# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a graphical
+# hierarchy of all classes instead of a textual one.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GRAPHICAL_HIERARCHY    = YES
+
+# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
+# dependencies a directory has on other directories in a graphical way. The
+# dependency relations are determined by the #include relations between the
+# files in the directories.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DIRECTORY_GRAPH        = YES
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
+# generated by dot. For an explanation of the image formats see the section
+# output formats in the documentation of the dot tool (Graphviz (see:
+# http://www.graphviz.org/)).
+# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
+# to make the SVG files visible in IE 9+ (other browsers do not have this
+# requirement).
+# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
+# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
+# png:gdiplus:gdiplus.
+# The default value is: png.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_IMAGE_FORMAT       = png
+
+# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
+# enable generation of interactive SVG images that allow zooming and panning.
+#
+# Note that this requires a modern browser other than Internet Explorer. Tested
+# and working are Firefox, Chrome, Safari, and Opera.
+# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
+# the SVG files visible. Older versions of IE do not have SVG support.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INTERACTIVE_SVG        = NO
+
+# The DOT_PATH tag can be used to specify the path where the dot tool can be
+# found. If left blank, it is assumed the dot tool can be found in the path.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_PATH               =
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that
+# contain dot files that are included in the documentation (see the \dotfile
+# command).
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOTFILE_DIRS           =
+
+# The MSCFILE_DIRS tag can be used to specify one or more directories that
+# contain msc files that are included in the documentation (see the \mscfile
+# command).
+
+MSCFILE_DIRS           =
+
+# The DIAFILE_DIRS tag can be used to specify one or more directories that
+# contain dia files that are included in the documentation (see the \diafile
+# command).
+
+DIAFILE_DIRS           =
+
+# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
+# that will be shown in the graph. If the number of nodes in a graph becomes
+# larger than this value, doxygen will truncate the graph, which is visualized
+# by representing a node as a red box. Note that if the number of direct
+# children of the root node in a graph is already larger than
+# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that
+# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
+# Minimum value: 0, maximum value: 10000, default value: 50.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_GRAPH_MAX_NODES    = 50
+
+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
+# generated by dot. A depth value of 3 means that only nodes reachable from the
+# root by following a path via at most 3 edges will be shown. Nodes that lie
+# further from the root node will be omitted. Note that setting this option to 1
+# or 2 may greatly reduce the computation time needed for large code bases. Also
+# note that the size of a graph can be further restricted by
+# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
+# Minimum value: 0, maximum value: 1000, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+MAX_DOT_GRAPH_DEPTH    = 0
+
+# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
+# background. This is disabled by default, because dot on Windows does not seem
+# to support this out of the box.
+#
+# Warning: Depending on the platform used, enabling this option may lead to
+# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
+# read).
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_TRANSPARENT        = NO
+
+# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
+# files in one run (i.e. multiple -o and -T options on the command line). This
+# makes dot run faster, but since only newer versions of dot (>1.8.10) support
+# this, this feature is disabled by default.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_MULTI_TARGETS      = NO
+
+# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
+# explaining the meaning of the various boxes and arrows in the dot generated
+# graphs.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GENERATE_LEGEND        = YES
+
+# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
+# files that are used to generate the various graphs.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_CLEANUP            = YES
diff --git a/m4/ax_python_module.m4 b/m4/ax_python_module.m4
new file mode 100644
index 0000000000..107d88264a
--- /dev/null
+++ b/m4/ax_python_module.m4
@@ -0,0 +1,56 @@
+# ===========================================================================
+#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PYTHON_MODULE(modname[, fatal, python])
+#
+# DESCRIPTION
+#
+#   Checks for a Python module.
+#
+#   If fatal is non-empty then absence of a module will trigger an error.
+#   The third parameter can either be "python" for Python 2 or "python3" for
+#   Python 3; defaults to Python 3.
+#
+# LICENSE
+#
+#   Copyright (c) 2008 Andrew Collier
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 9
+
+AU_ALIAS([AC_PYTHON_MODULE], [AX_PYTHON_MODULE])
+AC_DEFUN([AX_PYTHON_MODULE],[
+    if test -z "$PYTHON";
+    then
+        if test -z "$3";
+        then
+            PYTHON="python3"
+        else
+            PYTHON="$3"
+        fi
+    fi
+    PYTHON_NAME=`basename "$PYTHON"`
+    AC_MSG_CHECKING($PYTHON_NAME module: $1)
+    $PYTHON -c "import $1" 2>/dev/null
+    if test $? -eq 0;
+    then
+        AC_MSG_RESULT(yes)
+        eval AS_TR_CPP(HAVE_PYMOD_$1)=yes
+    else
+        AC_MSG_RESULT(no)
+        eval AS_TR_CPP(HAVE_PYMOD_$1)=no
+        #
+        if test -n "$2"
+        then
+            AC_MSG_ERROR(failed to find required module $1)
+            exit 1
+        fi
+    fi
+])
\ No newline at end of file
diff --git a/m4/docs_tool.m4 b/m4/docs_tool.m4
index 3e8814ac8d..39aa348026 100644
--- a/m4/docs_tool.m4
+++ b/m4/docs_tool.m4
@@ -15,3 +15,12 @@ dnl
         AC_MSG_WARN([$2 is not available so some documentation won't be built])
     ])
 ])
+
+AC_DEFUN([AX_DOCS_TOOL_REQ_PROG], [
+dnl
+    AC_ARG_VAR([$1], [Path to $2 tool])
+    AC_PATH_PROG([$1], [$2])
+    AS_IF([test ! -x "$ac_cv_path_$1"], [
+        AC_MSG_ERROR([$2 is needed])
+    ])
+])
\ No newline at end of file
-- 
2.17.1
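Stripped of the autoconf plumbing, the core of AX_PYTHON_MODULE above is just an import probe: run the interpreter, try to import the module, report yes or no. A stand-alone sketch of that check (the module names are examples, not anything the macro itself requires):

```shell
# Stand-alone version of the probe AX_PYTHON_MODULE performs: try to import
# the module with the chosen interpreter and report yes/no, as configure would.
PYTHON=${PYTHON:-python3}
check_py_module () {
    if "$PYTHON" -c "import $1" 2>/dev/null; then
        echo "checking $PYTHON module: $1... yes"
    else
        echo "checking $PYTHON module: $1... no"
    fi
}
check_py_module sys                   # always present
check_py_module no_such_module_xyz    # expected to be missing
```

In the real macro the yes/no result is additionally cached in a `HAVE_PYMOD_<name>` variable, and a non-empty second argument turns the "no" case into a hard configure error.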



From xen-devel-bounces@lists.xenproject.org Tue May 04 09:46:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 09:46:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122055.230217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrdM-0003yB-Qf; Tue, 04 May 2021 09:46:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122055.230217; Tue, 04 May 2021 09:46:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldrdM-0003y3-NP; Tue, 04 May 2021 09:46:28 +0000
Received: by outflank-mailman (input) for mailman id 122055;
 Tue, 04 May 2021 09:46:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldrdK-0003rw-Ui
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 09:46:26 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id aa94bf46-2607-42ad-b6bf-b3b3c6f4157f;
 Tue, 04 May 2021 09:46:21 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 64CF8106F;
 Tue,  4 May 2021 02:46:21 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0F4493F73B;
 Tue,  4 May 2021 02:46:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aa94bf46-2607-42ad-b6bf-b3b3c6f4157f
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 3/3] docs/doxygen: doxygen documentation for grant_table.h
Date: Tue,  4 May 2021 10:46:06 +0100
Message-Id: <20210504094606.7125-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210504094606.7125-1-luca.fancellu@arm.com>
References: <20210504094606.7125-1-luca.fancellu@arm.com>

Modifications to include/public/grant_table.h:

1) Add doxygen tags to:
 - create a "Grant Tables" section
 - include variables in the generated documentation
 - enclose comment sections that are indented using
   spaces in @keepindent/@endkeepindent, so that the
   indentation is preserved.
2) Add an .rst file for the grant table on Arm64

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v4 changes:
- Used @keepindent/@endkeepindent doxygen commands
  to keep text with spaces indentation.
- drop changes to grant_entry_v1 comment, it will
  be changed and included in the docs in a future patch
- Move docs .rst to "common" folder
v3 changes:
- removed tags to skip anonymous union/struct
- moved back comment pointed out by Jan
- moved down defines related to struct gnttab_copy
  as pointed out by Jan
v2 changes:
- Revert back to anonymous union/struct
- add doxygen tags to skip anonymous union/struct
---
 docs/hypercall-interfaces/arm64.rst           |  1 +
 .../common/grant_tables.rst                   |  8 +++
 docs/xen-doxygen/doxy_input.list              |  1 +
 xen/include/public/grant_table.h              | 68 ++++++++++++-------
 4 files changed, 54 insertions(+), 24 deletions(-)
 create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst

diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
index 5e701a2adc..cb4c0d13de 100644
--- a/docs/hypercall-interfaces/arm64.rst
+++ b/docs/hypercall-interfaces/arm64.rst
@@ -8,6 +8,7 @@ Starting points
 .. toctree::
    :maxdepth: 2
 
+   common/grant_tables
 
 
 Functions
diff --git a/docs/hypercall-interfaces/common/grant_tables.rst b/docs/hypercall-interfaces/common/grant_tables.rst
new file mode 100644
index 0000000000..8955ec5812
--- /dev/null
+++ b/docs/hypercall-interfaces/common/grant_tables.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Grant Tables
+============
+
+.. doxygengroup:: grant_table
+   :project: Xen
+   :members:
diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
index e69de29bb2..233d692fa7 100644
--- a/docs/xen-doxygen/doxy_input.list
+++ b/docs/xen-doxygen/doxy_input.list
@@ -0,0 +1 @@
+xen/include/public/grant_table.h
diff --git a/xen/include/public/grant_table.h b/xen/include/public/grant_table.h
index 84b1d26b36..d879c3e23e 100644
--- a/xen/include/public/grant_table.h
+++ b/xen/include/public/grant_table.h
@@ -25,15 +25,19 @@
  * Copyright (c) 2004, K A Fraser
  */
 
+/**
+ * @file
+ * @brief Interface for granting foreign access to page frames, and receiving
+ * page-ownership transfers.
+ */
+
 #ifndef __XEN_PUBLIC_GRANT_TABLE_H__
 #define __XEN_PUBLIC_GRANT_TABLE_H__
 
 #include "xen.h"
 
-/*
- * `incontents 150 gnttab Grant Tables
- *
- * Xen's grant tables provide a generic mechanism to memory sharing
+/**
+ * @brief Xen's grant tables provide a generic mechanism for memory sharing
  * between domains. This shared memory interface underpins the split
  * device drivers for block and network IO.
  *
@@ -51,13 +55,10 @@
  * know the real machine address of a page it is sharing. This makes
  * it possible to share memory correctly with domains running in
  * fully virtualised memory.
- */
-
-/***********************************
+ *
  * GRANT TABLE REPRESENTATION
- */
-
-/* Some rough guidelines on accessing and updating grant-table entries
+ *
+ * Some rough guidelines on accessing and updating grant-table entries
  * in a concurrency-safe manner. For more information, Linux contains a
  * reference implementation for guest OSes (drivers/xen/grant_table.c, see
  * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
@@ -66,6 +67,7 @@
  *     compiler barrier will still be required.
  *
  * Introducing a valid entry into the grant table:
+ * @keepindent
  *  1. Write ent->domid.
  *  2. Write ent->frame:
  *      GTF_permit_access:   Frame to which access is permitted.
@@ -73,20 +75,25 @@
  *                           frame, or zero if none.
  *  3. Write memory barrier (WMB).
  *  4. Write ent->flags, inc. valid type.
+ * @endkeepindent
  *
  * Invalidating an unused GTF_permit_access entry:
+ * @keepindent
  *  1. flags = ent->flags.
  *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
  *  NB. No need for WMB as reuse of entry is control-dependent on success of
  *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
+ * @endkeepindent
  *
  * Invalidating an in-use GTF_permit_access entry:
+ *
  *  This cannot be done directly. Request assistance from the domain controller
  *  which can set a timeout on the use of a grant entry and take necessary
  *  action. (NB. This is not yet implemented!).
  *
  * Invalidating an unused GTF_accept_transfer entry:
+ * @keepindent
  *  1. flags = ent->flags.
  *  2. Observe that !(flags & GTF_transfer_committed). [*]
  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
@@ -97,18 +104,24 @@
  *      transferred frame is written. It is safe for the guest to spin waiting
  *      for this to occur (detect by observing GTF_transfer_completed in
  *      ent->flags).
+ * @endkeepindent
  *
  * Invalidating a committed GTF_accept_transfer entry:
  *  1. Wait for (ent->flags & GTF_transfer_completed).
  *
  * Changing a GTF_permit_access from writable to read-only:
+ *
  *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
  *
  * Changing a GTF_permit_access from read-only to writable:
+ *
  *  Use SMP-safe bit-setting instruction.
+ *
+ * @addtogroup grant_table Grant Tables
+ * @{
  */
 
-/*
+/**
  * Reference to a grant entry in a specified domain's grant table.
  */
 typedef uint32_t grant_ref_t;
@@ -129,15 +142,17 @@ typedef uint32_t grant_ref_t;
 #define grant_entry_v1_t grant_entry_t
 #endif
 struct grant_entry_v1 {
-    /* GTF_xxx: various type and flag information.  [XEN,GST] */
+    /** GTF_xxx: various type and flag information.  [XEN,GST] */
     uint16_t flags;
-    /* The domain being granted foreign privileges. [GST] */
+    /** The domain being granted foreign privileges. [GST] */
     domid_t  domid;
-    /*
+    /**
+     * @keepindent
      * GTF_permit_access: GFN that @domid is allowed to map and access. [GST]
      * GTF_accept_transfer: GFN that @domid is allowed to transfer into. [GST]
      * GTF_transfer_completed: MFN whose ownership transferred by @domid
      *                         (non-translated guests only). [XEN]
+     * @endkeepindent
      */
     uint32_t frame;
 };
@@ -228,7 +243,7 @@ struct grant_entry_header {
 };
 typedef struct grant_entry_header grant_entry_header_t;
 
-/*
+/**
  * Version 2 of the grant entry structure.
  */
 union grant_entry_v2 {
@@ -433,7 +448,7 @@ typedef struct gnttab_transfer gnttab_transfer_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
 
 
-/*
+/**
  * GNTTABOP_copy: Hypervisor based copy
  * source and destination can be either MFNs or, for foreign domains,
  * grant references. The foreign domain has to grant read/write access
@@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
  * bytes to be copied.
  */
 
-#define _GNTCOPY_source_gref      (0)
-#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
-#define _GNTCOPY_dest_gref        (1)
-#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
-
 struct gnttab_copy {
     /* IN parameters. */
     struct gnttab_copy_ptr {
@@ -471,6 +481,12 @@ struct gnttab_copy {
     /* OUT parameters. */
     int16_t       status;
 };
+
+#define _GNTCOPY_source_gref      (0)
+#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
+#define _GNTCOPY_dest_gref        (1)
+#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
+
 typedef struct gnttab_copy  gnttab_copy_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_copy_t);
 
@@ -579,7 +595,7 @@ struct gnttab_swap_grant_ref {
 typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
 
-/*
+/**
  * Issue one or more cache maintenance operations on a portion of a
  * page granted to the calling domain by a foreign domain.
  */
@@ -588,8 +604,8 @@ struct gnttab_cache_flush {
         uint64_t dev_bus_addr;
         grant_ref_t ref;
     } a;
-    uint16_t offset; /* offset from start of grant */
-    uint16_t length; /* size within the grant */
+    uint16_t offset; /**< offset from start of grant */
+    uint16_t length; /**< size within the grant */
 #define GNTTAB_CACHE_CLEAN          (1u<<0)
 #define GNTTAB_CACHE_INVAL          (1u<<1)
 #define GNTTAB_CACHE_SOURCE_GREF    (1u<<31)
@@ -673,6 +689,10 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
     "operation not done; try again"             \
 }
 
+/**
+ * @}
+ */
+
 #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 04 10:27:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 10:27:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122079.230229 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldsGt-0007em-VS; Tue, 04 May 2021 10:27:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122079.230229; Tue, 04 May 2021 10:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldsGt-0007ef-SA; Tue, 04 May 2021 10:27:19 +0000
Received: by outflank-mailman (input) for mailman id 122079;
 Tue, 04 May 2021 10:27:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4Og=J7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ldsGt-0007ea-2D
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 10:27:19 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d642e090-c73b-4b09-88dc-ad84ada52c15;
 Tue, 04 May 2021 10:27:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d642e090-c73b-4b09-88dc-ad84ada52c15
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620124037;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=CMKq6+bwsHCfmJ6XKcTLGMWTHi26fPUtc48zVuzErx4=;
  b=c04NjM9A2iUHNd8/Q7QjEmlwO87vZyRZMFYMNoX9dZQ5U21RNu8Ukhoe
   KUT2bHPnW7M0E7eq+gX21wnzaw4+UilwR5B2p21OQ7q4+IW3/gaYoKpxw
   upEMjsBEbWbciMDuzQD2nDfaaL5gYchf/e2sysM4/al1sgt3QtDHeTflL
   0=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43121103
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43121103"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=f+kLTFSEK4mDkfOomMLrnhwT9ZPrg7YlRPK/6GjgnphGUasD2uEeUuO06JORbdo+V4uL+vYizIjV4xqfTfELde4//4dI74nz62ABlrY269iwVeQgWSzV0FILS+t4WBEV54gwNpcGkAIB//psRZ/X/ciXX3jITBn9RnwW0t3HnWAX9kk37LC/fPhhsyFuBRiSR4PayIRqN536oh4wkDwiWYPLD/2TiR3gb1dbBwyFlFLuRDafhnv8QVfEkc+18Y7ffcheHj5LL0zUQ6mwBkz/y30/fg4Yjh6WkUm2xv6uF20RnMP/3dHlqlTIZtgda4mBDl0rUquibkOrhtZ1gNN3Ow==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rMbsXZI1ILwSgYxitoIFgSUXxsw7I48Mp9yc+OCjGCA=;
 b=TVJcjPwuF4eUoEosVOAQjHj5/yGRYVvth36eVqoBM/pKImEGM5KDEcTieS34OvTgWVSG/Dub+GvTMX5INFuhiiHN2nb+2M7RSbi7kF1Dij28BjhPbK4jfrQP1jcI/onYYSHU1q/l/QpJXgKuZMTXNtC8v0WgRWSw2GBUXo7Bka58GfQntWlmShfJfd0NhUHVG8ziVDwWPvliJ17MryKvQfnK+zffWZpaioX46I7d0XaCpbh2y1+1y9TT0+BraSCOOlyHPhnWZKhl1uwj7ukzJY4UTlLBtO4Ho9wXue3qr0m7uBrQJegWwpbwhvf2y1cQF8AOTE+dxGQQNVE86NpG0Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rMbsXZI1ILwSgYxitoIFgSUXxsw7I48Mp9yc+OCjGCA=;
 b=WsaqyG00S8/QE6DdOLHxYg9qoMNiGdMr5bMr7nIUM/5pzrIov2YPjB26NCTw4JXkm+e6I6D9u8LN9KPl6vmaG9+Wcuizpif3K1QjS/h+tlW9moRHzdm2bpESM2W+bkrsSeztuXULX28ImLX0rvuiXzRj7CErl+gTz+V8wijCSao=
Date: Tue, 4 May 2021 12:27:08 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 05/12] x86/hvm: allowing registering EOI callbacks for
 GSIs
Message-ID: <YJEhfO0gSxFJQc8u@Air-de-Roger>
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-6-roger.pau@citrix.com>
 <19b0b30d-2fd6-4cc3-fd7a-4f4a3ce735f7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <19b0b30d-2fd6-4cc3-fd7a-4f4a3ce735f7@suse.com>
X-ClientProxiedBy: MR2P264CA0066.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::30) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0221180a-e5de-4e2e-4241-08d90ee72ed7
X-MS-TrafficTypeDiagnostic: DM4PR03MB6062:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM4PR03MB6062619FEBA95CEC1FAB080B8F5A9@DM4PR03MB6062.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 0221180a-e5de-4e2e-4241-08d90ee72ed7
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 10:27:13.6744
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Qx/ufi2V63PtrE4B8+OZTfMt+L3Gj+RFkFLvAHwnYnsPLbsyV8zNHF9GYPR32aDkkY7QVIoMiKJ1BYcoTKHJGQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6062
X-OriginatorOrg: citrix.com

On Mon, May 03, 2021 at 05:50:39PM +0200, Jan Beulich wrote:
> On 20.04.2021 16:07, Roger Pau Monne wrote:
> > Such callbacks will be executed once an EOI is performed by the guest,
> > regardless of whether the interrupts are injected from the vIO-APIC or
> > the vPIC, as ISA IRQs are translated to GSIs and then the
> > corresponding callback is executed at EOI.
> > 
> > The vIO-APIC infrastructure for handling EOIs is built on top of the
> > existing vlapic EOI callback functionality, while the vPIC one is
> > handled when writing to the vPIC EOI register.
> > 
> > Note that such callbacks need to be registered and de-registered, and
> > that a single GSI can have multiple callbacks associated. That's
> > because GSIs can be level triggered and shared, as that's the case
> > with legacy PCI interrupts shared between several devices.
> > 
> > Strictly speaking this is a non-functional change, since there are no
> > users of this new interface introduced by this change.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> In principle, as everything looks functionally correct to me,
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> Nevertheless, besides a few remarks further down, I have to admit I'm
> concerned about the direct-to-indirect call conversion (not just here,
> but also covering earlier patches), which (considering we're talking
> of EOI) I expect may occur quite frequently for at least some guests.

I would expect the vmexit cost for each EOI to dwarf any gain from
using direct rather than indirect calls.

> There aren't that many different callback functions which get
> registered, are there? Hence I wonder whether enumerating them and
> picking the right one via, say, an enum wouldn't be more efficient
> and still allow elimination of (in the case here) unconditional calls
> to hvm_dpci_eoi() for every EOI.

So for the vlapic (vector) callbacks we have the current consumers:
 - MSI passthrough.
 - vPT.
 - IO-APIC.

For GSI callbacks we have:
 - GSI passthrough.
 - vPT.

I could see about implementing this.

This is also kind of blocked on the RTC stuff, since vPT cannot be
migrated to this new model unless we remove strict_mode or change the
logic here to allow GSI callbacks to de-register themselves.

> 
> > --- a/xen/arch/x86/hvm/irq.c
> > +++ b/xen/arch/x86/hvm/irq.c
> > @@ -595,6 +595,81 @@ int hvm_local_events_need_delivery(struct vcpu *v)
> >      return !hvm_interrupt_blocked(v, intack);
> >  }
> >  
> > +int hvm_gsi_register_callback(struct domain *d, unsigned int gsi,
> > +                              struct hvm_gsi_eoi_callback *cb)
> > +{
> > +    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> > +
> > +    if ( gsi >= hvm_irq->nr_gsis )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return -EINVAL;
> > +    }
> > +
> > +    write_lock(&hvm_irq->gsi_callbacks_lock);
> > +    list_add(&cb->list, &hvm_irq->gsi_callbacks[gsi]);
> > +    write_unlock(&hvm_irq->gsi_callbacks_lock);
> > +
> > +    return 0;
> > +}
> > +
> > +int hvm_gsi_unregister_callback(struct domain *d, unsigned int gsi,
> > +                                struct hvm_gsi_eoi_callback *cb)
> > +{
> > +    struct hvm_irq *hvm_irq = hvm_domain_irq(d);
> > +    const struct list_head *tmp;
> > +    bool found = false;
> > +
> > +    if ( gsi >= hvm_irq->nr_gsis )
> > +    {
> > +        ASSERT_UNREACHABLE();
> > +        return -EINVAL;
> > +    }
> > +
> > +    write_lock(&hvm_irq->gsi_callbacks_lock);
> > +    list_for_each ( tmp, &hvm_irq->gsi_callbacks[gsi] )
> > +        if ( tmp == &cb->list )
> > +        {
> > +            list_del(&cb->list);
> 
> Minor remark: Would passing "tmp" here lead to better generated
> code?

Maybe? I don't mind doing so.

> > @@ -419,13 +421,25 @@ static void eoi_callback(struct vcpu *v, unsigned int vector, void *data)
> >              if ( is_iommu_enabled(d) )
> >              {
> >                  spin_unlock(&d->arch.hvm.irq_lock);
> > -                hvm_dpci_eoi(d, vioapic->base_gsi + pin);
> > +                hvm_dpci_eoi(d, gsi);
> >                  spin_lock(&d->arch.hvm.irq_lock);
> >              }
> >  
> > +            /*
> > +             * Callbacks don't expect to be executed with any lock held, so
> > +             * drop the lock that protects the vIO-APIC fields from changing.
> > +             *
> > +             * Note that the redirection entry itself cannot go away, so upon
> > +             * retaking the lock we only need to avoid making assumptions on
> > +             * redirection entry field values (ie: recheck the IRR field).
> > +             */
> > +            spin_unlock(&d->arch.hvm.irq_lock);
> > +            hvm_gsi_execute_callbacks(d, gsi);
> > +            spin_lock(&d->arch.hvm.irq_lock);
> 
> While this may be transient in the series, as said before I'm not
> happy about this double unlock/relock sequence. I didn't really
> understand what would be wrong with
> 
>             spin_unlock(&d->arch.hvm.irq_lock);
>             if ( is_iommu_enabled(d) )
>                 hvm_dpci_eoi(d, gsi);
>             hvm_gsi_execute_callbacks(d, gsi);
>             spin_lock(&d->arch.hvm.irq_lock);
> 
> This in particular wouldn't grow but even shrink the later patch
> dropping the call to hvm_dpci_eoi().

Sure.

> > --- a/xen/arch/x86/hvm/vpic.c
> > +++ b/xen/arch/x86/hvm/vpic.c
> > @@ -235,6 +235,8 @@ static void vpic_ioport_write(
> >                  unsigned int pin = __scanbit(pending, 8);
> >  
> >                  ASSERT(pin < 8);
> > +                hvm_gsi_execute_callbacks(current->domain,
> > +                        hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
> >                  hvm_dpci_eoi(current->domain,
> >                               hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
> >                  __clear_bit(pin, &pending);
> > @@ -285,6 +287,8 @@ static void vpic_ioport_write(
> >                  /* Release lock and EOI the physical interrupt (if any). */
> >                  vpic_update_int_output(vpic);
> >                  vpic_unlock(vpic);
> > +                hvm_gsi_execute_callbacks(current->domain,
> > +                        hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
> >                  hvm_dpci_eoi(current->domain,
> >                               hvm_isa_irq_to_gsi((addr >> 7) ? (pin | 8) : pin));
> >                  return; /* bail immediately */
> 
> Another presumably minor remark: In the IO-APIC case you insert after
> the call to hvm_dpci_eoi(). I wonder if consistency wouldn't help
> avoid questions of archeologists in a couple of years time.

Hm, sorry, I remember trying to place them in the same order, but
likely messed up the order during some rebase.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 04 10:53:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 10:53:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122090.230244 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldsgD-0001lI-4s; Tue, 04 May 2021 10:53:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122090.230244; Tue, 04 May 2021 10:53:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldsgD-0001lB-0g; Tue, 04 May 2021 10:53:29 +0000
Received: by outflank-mailman (input) for mailman id 122090;
 Tue, 04 May 2021 10:53:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldsgB-0001l3-N4; Tue, 04 May 2021 10:53:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldsgB-0000Pw-Fx; Tue, 04 May 2021 10:53:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldsgB-0008A0-7s; Tue, 04 May 2021 10:53:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldsgB-0002kJ-7N; Tue, 04 May 2021 10:53:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pY5bO1Zb5vhBs+8Hs99oJGmSZZcO2G0WSMxTQbo1wX0=; b=g7FLsItLFsOmJz5qVTR0Q49sja
	6ISrqdht5PniBFtg4/hc2yepCa01W7w6sA23ZCHQu6jpRIn59ZUq2TbJ9S1Ho6gUm1jR60VKq3wTH
	X5LFUJ4iCkVr65ruzTZNm3c0+Ob1ES8lzxwFNXuoqdZ+lsLtF081QsOUnv2q9sgJMCXg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161726-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161726: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=f297b7f20010711e36e981fe45645302cc9d109d
X-Osstest-Versions-That:
    ovmf=8c8f49f0dc86e3c58d94766e6b194b83c1bef5c9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 10:53:27 +0000

flight 161726 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161726/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 f297b7f20010711e36e981fe45645302cc9d109d
baseline version:
 ovmf                 8c8f49f0dc86e3c58d94766e6b194b83c1bef5c9

Last test of basis   161629  2021-05-03 18:40:06 Z    0 days
Testing same since   161726  2021-05-04 06:28:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Getnat Ejigu <getnatejigu@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   8c8f49f0dc..f297b7f200  f297b7f20010711e36e981fe45645302cc9d109d -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 04 10:55:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 10:55:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122097.230259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldshk-0001tu-KT; Tue, 04 May 2021 10:55:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122097.230259; Tue, 04 May 2021 10:55:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldshk-0001tn-HP; Tue, 04 May 2021 10:55:04 +0000
Received: by outflank-mailman (input) for mailman id 122097;
 Tue, 04 May 2021 10:55:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldshj-0001te-1U; Tue, 04 May 2021 10:55:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldshi-0000RL-TF; Tue, 04 May 2021 10:55:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldshi-0008Cx-N4; Tue, 04 May 2021 10:55:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldshi-0004Do-MZ; Tue, 04 May 2021 10:55:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4HLcnkjH/2fDEQoCA+/fiDPeNWiQbn3xhdL1zZTM5tc=; b=TSddti/IN9mSkfpPTpN22lnGrZ
	sqKtpBoJGyw3PiVFo7fB2xPDDbSQibgBWOlW4Oif8l4/Jd6TOyiKvk7Uum5VfnrRtGEDgNDvGZ4nj
	INe8MS07UmEvDPfDNlsZ9+wS/EUU1gLJtNwNpB16RJPoU2z/25q2QQxOOhsI0bQqh39c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161631-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161631: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=15106f7dc3290ff3254611f265849a314a93eb0e
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 10:55:02 +0000

flight 161631 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161631/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                15106f7dc3290ff3254611f265849a314a93eb0e
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  257 days
Failing since        152659  2020-08-21 14:07:39 Z  255 days  468 attempts
Testing same since   161631  2021-05-03 21:39:25 Z    0 days    1 attempts

------------------------------------------------------------
480 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  pass    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 145775 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 10:56:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 10:56:23 +0000
Date: Tue, 4 May 2021 12:56:11 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 03/13] libs/guest: allow fetching a specific MSR entry
 from a cpu policy
Message-ID: <YJEoS6P1S6NbySFd@Air-de-Roger>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-4-roger.pau@citrix.com>
 <273ba6f9-dee9-00db-407b-10325d21afae@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <273ba6f9-dee9-00db-407b-10325d21afae@suse.com>
MIME-Version: 1.0

On Mon, May 03, 2021 at 12:41:29PM +0200, Jan Beulich wrote:
> On 30.04.2021 17:52, Roger Pau Monne wrote:
> > Introduce an interface that returns a specific MSR entry from a cpu
> > policy in xen_msr_entry_t format. Provide a helper to perform a binary
> > search against an array of MSR entries.
> > 
> > This is useful so that callers can peek data from the opaque
> > xc_cpu_policy_t type.
> > 
> > No callers of the interface are introduced in this patch.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Changes since v1:
> >  - Introduce a helper to perform a binary search of the MSR entries
> >    array.
> > ---
> >  tools/include/xenctrl.h         |  2 ++
> >  tools/libs/guest/xg_cpuid_x86.c | 42 +++++++++++++++++++++++++++++++++
> >  2 files changed, 44 insertions(+)
> > 
> > diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> > index cbca7209e34..605c632cf30 100644
> > --- a/tools/include/xenctrl.h
> > +++ b/tools/include/xenctrl.h
> > @@ -2611,6 +2611,8 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t policy,
> >  int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
> >                              uint32_t leaf, uint32_t subleaf,
> >                              xen_cpuid_leaf_t *out);
> > +int xc_cpu_policy_get_msr(xc_interface *xch, const xc_cpu_policy_t policy,
> > +                          uint32_t msr, xen_msr_entry_t *out);
> >  
> >  int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
> >  int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
> > diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> > index de27826f415..9e83daca0e6 100644
> > --- a/tools/libs/guest/xg_cpuid_x86.c
> > +++ b/tools/libs/guest/xg_cpuid_x86.c
> > @@ -850,3 +850,45 @@ int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
> >      *out = *tmp;
> >      return 0;
> >  }
> > +
> > +static int compare_entries(const void *l, const void *r)
> > +{
> > +    const xen_msr_entry_t *lhs = l;
> > +    const xen_msr_entry_t *rhs = r;
> > +
> > +    if ( lhs->idx == rhs->idx )
> > +        return 0;
> > +    return lhs->idx < rhs->idx ? -1 : 1;
> > +}
> > +
> > +static xen_msr_entry_t *find_entry(xen_msr_entry_t *entries,
> > +                                   unsigned int nr_entries, unsigned int index)
> > +{
> > +    const xen_msr_entry_t key = { index };
> > +
> > +    return bsearch(&key, entries, nr_entries, sizeof(*entries), compare_entries);
> > +}
> 
> Isn't "entries" / "entry" a little too generic a name here, considering
> the CPUID equivalents use "leaves" / "leaf"? (Noticed really while looking
> at patch 7.)

Would you be fine with naming the function find_msr and leaving the
rest of the parameter names as-is?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:00:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:00:48 +0000
Subject: Re: [PATCH v4 08/12] x86/vpt: switch interrupt injection model
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 xen-devel@lists.xenproject.org
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-9-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b2e83796-ea71-ce71-4fc4-2bf1fc3bc3dc@suse.com>
Date: Tue, 4 May 2021 13:00:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210420140723.65321-9-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.04.2021 16:07, Roger Pau Monne wrote:
> @@ -295,188 +248,153 @@ static void pt_irq_fired(struct vcpu *v, struct periodic_time *pt)
>              list_del(&pt->list);
>          pt->on_list = false;
>          pt->pending_intr_nr = 0;
> +
> +        return;
>      }
> -    else if ( mode_is(v->domain, one_missed_tick_pending) ||
> -              mode_is(v->domain, no_missed_ticks_pending) )
> -    {
> -        pt->last_plt_gtime = hvm_get_guest_time(v);
> -        pt_process_missed_ticks(pt);
> -        pt->pending_intr_nr = 0; /* 'collapse' all missed ticks */
> -        set_timer(&pt->timer, pt->scheduled);
> -    }
> -    else
> +
> +    pt_process_missed_ticks(pt);
> +    /* 'collapse' missed ticks according to the selected mode. */
> +    switch ( pt->vcpu->domain->arch.hvm.params[HVM_PARAM_TIMER_MODE] )
>      {
> -        pt->last_plt_gtime += pt->period;
> -        if ( --pt->pending_intr_nr == 0 )
> -        {
> -            pt_process_missed_ticks(pt);
> -            if ( pt->pending_intr_nr == 0 )
> -                set_timer(&pt->timer, pt->scheduled);
> -        }
> +    case HVMPTM_one_missed_tick_pending:
> +        pt->pending_intr_nr = min(pt->pending_intr_nr, 1u);
> +        break;
> +
> +    case HVMPTM_no_missed_ticks_pending:
> +        pt->pending_intr_nr = 0;
> +        break;
>      }
>  
> -    if ( mode_is(v->domain, delay_for_missed_ticks) &&
> -         (hvm_get_guest_time(v) < pt->last_plt_gtime) )
> -        hvm_set_guest_time(v, pt->last_plt_gtime);
> +    if ( !pt->pending_intr_nr )
> +        set_timer(&pt->timer, pt->scheduled);
>  }
>  
> -int pt_update_irq(struct vcpu *v)
> +static void pt_timer_fn(void *data)
>  {
> -    struct list_head *head = &v->arch.hvm.tm_list;
> -    struct periodic_time *pt, *temp, *earliest_pt;
> -    uint64_t max_lag;
> -    int irq, pt_vector = -1;
> -    bool level;
> +    struct periodic_time *pt = data;
> +    struct vcpu *v;
> +    time_cb *cb = NULL;
> +    void *cb_priv;
> +    unsigned int irq;
>  
> -    pt_vcpu_lock(v);
> +    pt_lock(pt);
>  
> -    earliest_pt = NULL;
> -    max_lag = -1ULL;
> -    list_for_each_entry_safe ( pt, temp, head, list )
> +    v = pt->vcpu;
> +    irq = pt->irq;
> +
> +    if ( inject_interrupt(pt) )
>      {
> -        if ( pt->pending_intr_nr )
> -        {
> -            if ( pt_irq_masked(pt) &&
> -                 /* Level interrupts should be asserted even if masked. */
> -                 !pt->level )
> -            {
> -                /* suspend timer emulation */
> -                list_del(&pt->list);
> -                pt->on_list = 0;
> -            }
> -            else
> -            {
> -                if ( (pt->last_plt_gtime + pt->period) < max_lag )
> -                {
> -                    max_lag = pt->last_plt_gtime + pt->period;
> -                    earliest_pt = pt;
> -                }
> -            }
> -        }
> +        pt->scheduled += pt->period;
> +        pt->do_not_freeze = 0;

Nit: "false" please.

> +        cb = pt->cb;
> +        cb_priv = pt->priv;
>      }
> -
> -    if ( earliest_pt == NULL )
> +    else
>      {
> -        pt_vcpu_unlock(v);
> -        return -1;
> +        /* Masked. */
> +        if ( pt->on_list )
> +            list_del(&pt->list);
> +        pt->on_list = false;
> +        pt->pending_intr_nr++;
>      }

inject_interrupt() returns whether it was able to deliver the interrupt.
In particular this fails if the interrupt is edge triggered and masked.
Unexpectedly to me, it reports success if a level-triggered interrupt
was already pending. But in either event, the missed-ticks accounting
is, as per my understanding of the comment in hvm/params.h, supposed to
deal only with delivery missed due to preemption. An interrupt being
masked / already pending may not be in that state because the guest got
preempted, though. A guest keeping a timer interrupt masked for an
extended period of time should not get a flood of interrupts later on,
no matter what HVM_PARAM_TIMER_MODE is set to.

However, I'm not going to exclude that little bit of doc is wrong, or
implementation and doc aren't in agreement already before your change.
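
For reference, the delivery rule being discussed can be sketched in
isolation. This is an illustrative simplification only (the struct and
function names are made up, not the actual Xen code): an edge-triggered
interrupt is dropped while masked, whereas a level-triggered one is
still asserted (and reported as delivered) even if masked or already
pending.

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical stand-in for the relevant periodic_time state. */
struct fake_pt {
    bool masked;   /* pin/vector currently masked */
    bool level;    /* level-triggered source */
};

/* Mirrors the check in the patch's inject_interrupt():
 * "Level interrupts should be asserted even if masked." */
static bool would_inject(const struct fake_pt *pt)
{
    if ( pt->masked && !pt->level )
        return false;   /* masked edge interrupt is lost */
    return true;        /* level interrupts always assert */
}
```

So only the masked + edge combination fails, which is why the caller
increments pending_intr_nr in exactly that case.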

> -    earliest_pt->irq_issued = 1;

This looks to be the only place where the field gets set to non-zero.
If the field is unused after this change, it wants deleting. I notice
patch 11 does so, but it may be worthwhile pointing out
- in the description here, that field removal will happen later,
- in the later patch, that this field was already unused (and doesn't
  become dead by the other removal done there).

> -    irq = earliest_pt->irq;
> -    level = earliest_pt->level;
> +    pt_unlock(pt);
>  
> -    pt_vcpu_unlock(v);
> +    if ( cb )
> +        cb(v, cb_priv);
> +}
>  
> -    switch ( earliest_pt->source )
> -    {
> -    case PTSRC_lapic:
> -        /*
> -         * If periodic timer interrupt is handled by lapic, its vector in
> -         * IRR is returned and used to set eoi_exit_bitmap for virtual
> -         * interrupt delivery case. Otherwise return -1 to do nothing.
> -         */
> -        vlapic_set_irq(vcpu_vlapic(v), irq, 0);
> -        pt_vector = irq;
> -        break;
> +static void eoi_callback(struct periodic_time *pt)
> +{
> +    struct vcpu *v = NULL;
> +    time_cb *cb = NULL;
> +    void *cb_priv = NULL;
>  
> -    case PTSRC_isa:
> -        hvm_isa_irq_deassert(v->domain, irq);
> -        if ( platform_legacy_irq(irq) && vlapic_accept_pic_intr(v) &&
> -             v->domain->arch.hvm.vpic[irq >> 3].int_output )
> -            hvm_isa_irq_assert(v->domain, irq, NULL);
> -        else
> +    pt_lock(pt);
> +
> +    irq_eoi(pt);
> +    if ( pt->pending_intr_nr )
> +    {
> +        if ( inject_interrupt(pt) )
>          {
> -            pt_vector = hvm_isa_irq_assert(v->domain, irq, vioapic_get_vector);
> -            /*
> -             * hvm_isa_irq_assert may not set the corresponding bit in vIRR
> -             * when mask field of IOAPIC RTE is set. Check it again.
> -             */
> -            if ( pt_vector < 0 || !vlapic_test_irq(vcpu_vlapic(v), pt_vector) )
> -                pt_vector = -1;
> +            pt->pending_intr_nr--;
> +            cb = pt->cb;
> +            cb_priv = pt->priv;
> +            v = pt->vcpu;
>          }
> -        break;
> -
> -    case PTSRC_ioapic:
> -        pt_vector = hvm_ioapic_assert(v->domain, irq, level);
> -        if ( pt_vector < 0 || !vlapic_test_irq(vcpu_vlapic(v), pt_vector) )
> +        else
>          {
> -            pt_vector = -1;
> -            if ( level )
> -            {
> -                /*
> -                 * Level interrupts are always asserted because the pin assert
> -                 * count is incremented regardless of whether the pin is masked
> -                 * or the vector latched in IRR, so also execute the callback
> -                 * associated with the timer.
> -                 */
> -                time_cb *cb = NULL;
> -                void *cb_priv = NULL;
> -
> -                pt_vcpu_lock(v);
> -                /* Make sure the timer is still on the list. */
> -                list_for_each_entry ( pt, &v->arch.hvm.tm_list, list )
> -                    if ( pt == earliest_pt )
> -                    {
> -                        pt_irq_fired(v, pt);
> -                        cb = pt->cb;
> -                        cb_priv = pt->priv;
> -                        break;
> -                    }
> -                pt_vcpu_unlock(v);
> -
> -                if ( cb != NULL )
> -                    cb(v, cb_priv);
> -            }
> +            /* Masked. */
> +            if ( pt->on_list )
> +                list_del(&pt->list);
> +            pt->on_list = false;
>          }
> -        break;
>      }
>  
> -    return pt_vector;
> +    pt_unlock(pt);
> +
> +    if ( cb )
> +        cb(v, cb_priv);
>  }
>  
> -static struct periodic_time *is_pt_irq(
> -    struct vcpu *v, struct hvm_intack intack)
> +static void vlapic_eoi_callback(struct vcpu *unused, unsigned int unused2,
> +                                void *data)
>  {
> -    struct list_head *head = &v->arch.hvm.tm_list;
> -    struct periodic_time *pt;
> -
> -    list_for_each_entry ( pt, head, list )
> -    {
> -        if ( pt->pending_intr_nr && pt->irq_issued &&
> -             (intack.vector == pt_irq_vector(pt, intack.source)) )
> -            return pt;
> -    }
> +    eoi_callback(data);
> +}
>  
> -    return NULL;
> +static void vioapic_eoi_callback(struct domain *unused, unsigned int unused2,
> +                                 void *data)
> +{
> +    eoi_callback(data);
>  }
>  
> -void pt_intr_post(struct vcpu *v, struct hvm_intack intack)
> +static bool inject_interrupt(struct periodic_time *pt)
>  {
> -    struct periodic_time *pt;
> -    time_cb *cb;
> -    void *cb_priv;
> +    struct vcpu *v = pt->vcpu;
> +    struct domain *d = v->domain;
> +    unsigned int irq = pt->irq;
>  
> -    if ( intack.source == hvm_intsrc_vector )
> -        return;
> +    /* Level interrupts should be asserted even if masked. */
> +    if ( pt_irq_masked(pt) && !pt->level )
> +        return false;
>  
> -    pt_vcpu_lock(v);
> -
> -    pt = is_pt_irq(v, intack);
> -    if ( pt == NULL )
> +    switch ( pt->source )
>      {
> -        pt_vcpu_unlock(v);
> -        return;
> +    case PTSRC_lapic:
> +        vlapic_set_irq_callback(vcpu_vlapic(v), pt->irq, 0, vlapic_eoi_callback,
> +                                pt);
> +        break;
> +
> +    case PTSRC_isa:
> +        hvm_isa_irq_deassert(d, irq);
> +        hvm_isa_irq_assert(d, irq, NULL);
> +        break;
> +
> +    case PTSRC_ioapic:
> +        hvm_ioapic_assert(d, irq, pt->level);
> +        break;
>      }

Why do ISA IRQs get de-asserted first, but IO-APIC ones don't? I
notice e.g. hvm_set_callback_irq_level() and hvm_set_pci_link_route()
have similar apparent asymmetries, so I guess I'm missing something.
In particular I can't spot - even prior to this change - where
hvm_irq->gsi_assert_count[gsi] would get decremented for a
level-triggered IRQ, when hvm_ioapic_deassert() gets called only from
hvm/hpet.c:hpet_write(). I guess the main point is that that's the
only case of a level-triggered timer interrupt for us?
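
For what it's worth, the deassert-then-assert sequence matters for
edge-triggered lines, where only a 0->1 transition latches an
interrupt. A hypothetical sketch (not the actual Xen helpers) of why
asserting an already-asserted edge line is a no-op:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical edge-triggered interrupt line. */
struct fake_line {
    bool asserted;          /* current pin state */
    unsigned int latched;   /* number of rising edges observed */
};

static void line_set(struct fake_line *l, bool level)
{
    if ( !l->asserted && level )
        l->latched++;       /* only a rising edge latches an interrupt */
    l->asserted = level;
}
```

Without the intermediate deassert, a second assert produces no new
edge, so the interrupt would be silently swallowed.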

> @@ -641,20 +590,29 @@ void pt_adjust_global_vcpu_target(struct vcpu *v)
>      write_unlock(&pl_time->vhpet.lock);
>  }
>  
> -
>  static void pt_resume(struct periodic_time *pt)
>  {
> +    struct vcpu *v;
> +    time_cb *cb = NULL;
> +    void *cb_priv;
> +
>      if ( pt->vcpu == NULL )
>          return;
>  
>      pt_lock(pt);
> -    if ( pt->pending_intr_nr && !pt->on_list )
> +    if ( pt->pending_intr_nr && !pt->on_list && inject_interrupt(pt) )
>      {
> +        pt->pending_intr_nr--;
> +        cb = pt->cb;
> +        cb_priv = pt->priv;
> +        v = pt->vcpu;
>          pt->on_list = 1;
>          list_add(&pt->list, &pt->vcpu->arch.hvm.tm_list);
> -        vcpu_kick(pt->vcpu);

Just for my own understanding: the replacement for this is what happens
down the call tree from inject_interrupt()?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:32:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:32:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122119.230298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtHI-0005Vw-3W; Tue, 04 May 2021 11:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122119.230298; Tue, 04 May 2021 11:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtHI-0005Vp-0d; Tue, 04 May 2021 11:31:48 +0000
Received: by outflank-mailman (input) for mailman id 122119;
 Tue, 04 May 2021 11:31:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldtHG-0005Vk-GK
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 11:31:46 +0000
Received: from mail-lj1-x22c.google.com (unknown [2a00:1450:4864:20::22c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca75a0d6-ff98-48b7-be0f-1851e254e5c6;
 Tue, 04 May 2021 11:31:45 +0000 (UTC)
Received: by mail-lj1-x22c.google.com with SMTP id v5so1903476ljg.12
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 04:31:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca75a0d6-ff98-48b7-be0f-1851e254e5c6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=AJ8cGkmcBjEKVQ14Njcx0L8K0NUqs1k2Jl/LKA+r4xM=;
        b=BI3CZWk7tQaYSWhWn0E3v1WHCiBa5BBk+GarLzyDVXAZOHBxhBstKOKpyc1Gam4S1l
         MKshVKBp0JgrV5AKS6EDD0F35xJHL+/RVbJ5H8wPrz/OtSLUfuPeoLN7Yo9dSztVrxjh
         Yw7ZKA54APs1S67g3wotv6H1QogK8RZNhf/fsCsGGmTdGUnOdRWXe1WEHR+WqK4UBVHo
         3nC2BKcTOhWR8lWilE/RByNwKUc6nPsqavGA3DWtXIOJW0eujW8Wl+QG1Ct7LynyADUN
         tWSKjeVoW6xyK1OszBGHZzP7jsSt1aIH9rrQV+YGy5PX5B8rasU6u3a8geoXACcz0SZw
         0mXw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=AJ8cGkmcBjEKVQ14Njcx0L8K0NUqs1k2Jl/LKA+r4xM=;
        b=oUOwtgS95ly6QRjMhu+oMirXheuyoOIZwo7xCGLJuPYNccAgOpVyy3+3B4u4/hnGra
         BwtfcMuSdnJpkGuSrkbzckWjrAbWrIHgzHViyUSDRv6bVkfVuxSYGKBwS9mE7MRS34+k
         TZ42ihwXfdkkW7TbRWMLviajTsf+uQ++w+o90c0SA257NC5NNAJ6behRCw6GL+6m9Ygm
         iSA5LFQ7qRv1JgTTOMEUYp28xL+ePLiN8CfV7GJ+EEb9xm98orQSBOwjgOxau5Q4IrKp
         POOTtOcjNHgUc0zbSXpuXdrolSojX0UvTq0+x16qTl5CubSjqsn8rD3r/yrJFyJatJzS
         LXUg==
X-Gm-Message-State: AOAM531k+oQUVDMi637tyPR1atSy5wiNsdKpULEdOZ6EMU0PxZPa8Lf6
	pz6LTqTkj7YOMOv6XXFv26eR9LjEDi5obQEnuAk=
X-Google-Smtp-Source: ABdhPJzzeHtjx63/vg6E/VRtgoKuSHrf5qwXUvuOdG4o8I4/U/AYByYvI2mMR/4GRIgXvp2FAvXxbxiX5uTAJ6eugsY=
X-Received: by 2002:a2e:a7d4:: with SMTP id x20mr17071038ljp.285.1620127904669;
 Tue, 04 May 2021 04:31:44 -0700 (PDT)
MIME-Version: 1.0
References: <20210503192810.36084-1-jandryuk@gmail.com> <20210503192810.36084-11-jandryuk@gmail.com>
 <398aa86f-13e4-e5d4-29d6-4491a05c920a@suse.com>
In-Reply-To: <398aa86f-13e4-e5d4-29d6-4491a05c920a@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 4 May 2021 07:31:33 -0400
Message-ID: <CAKf6xpvvMLT1ie089dqmjvsNgC+GZjbBkNTr4tS2VwRf0tEGbw@mail.gmail.com>
Subject: Re: [PATCH 10/13] libxc: Add xc_set_cpufreq_hwp
To: Jan Beulich <jbeulich@suse.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Tue, May 4, 2021 at 4:03 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.05.2021 21:28, Jason Andryuk wrote:
> > Add xc_set_cpufreq_hwp to allow calling xen_sysctl_pm_op
> > SET_CPUFREQ_HWP.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> >
> > ---
> > Am I allowed to do set_hwp = *set_hwp struct assignment?
>
> I'm puzzled by the question - why would you not be?

Yes, I thought it perfectly sensible to do. However, I didn't see
other places in the file assigning structs, so I wasn't sure whether
there was some reason against it.
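
For the record, whole-struct assignment is well-defined C: it copies
every member, including embedded arrays, much like a memcpy() of the
object. A minimal sketch with made-up types (not the actual libxc
structures):

```c
#include <assert.h>
#include <string.h>

/* Illustrative stand-in for an HWP parameter block. */
struct fake_hwp {
    unsigned int desired;
    unsigned int energy_perf;
    char governor[16];
};

static struct fake_hwp copy_params(const struct fake_hwp *src)
{
    struct fake_hwp dst = *src;  /* whole-struct copy by assignment */
    return dst;
}
```

Embedded arrays are copied too, which is why the assignment form is
equivalent to (and usually clearer than) an explicit memcpy().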

Thanks for taking a look.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:40:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:40:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122123.230310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtPO-0006QM-07; Tue, 04 May 2021 11:40:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122123.230310; Tue, 04 May 2021 11:40:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtPN-0006QF-SW; Tue, 04 May 2021 11:40:09 +0000
Received: by outflank-mailman (input) for mailman id 122123;
 Tue, 04 May 2021 11:40:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldtPM-0006Q7-Gw; Tue, 04 May 2021 11:40:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldtPM-0001BM-8g; Tue, 04 May 2021 11:40:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldtPM-0001UN-1v; Tue, 04 May 2021 11:40:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldtPM-0002RF-1O; Tue, 04 May 2021 11:40:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JJR9BAJnpf9DuYSxgcL7kc02DNZfoWrovGTB5QB1JPk=; b=ieUWSNZSe9SyLdxdPzXEOnKzpt
	Um3TZQ+jQ4o7Zg54tC4tOo8TM5sR0DAV2mMcKk7exzu43ymYEB3KOIl66M2r+bLNeSdEAgx2xMRov
	l7jp/7nEIQRG8GXJb3p9oLUqeJz6DkKYRroxrlTm6E4iuVdCGNTjX2RcDDCRvHzR4fIU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161758-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161758: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e927a3b89ae82ac875aafedbefd6b4bc46201b7d
X-Osstest-Versions-That:
    xen=d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 11:40:08 +0000

flight 161758 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161758/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e927a3b89ae82ac875aafedbefd6b4bc46201b7d
baseline version:
 xen                  d26c277826dbbd64b3e3cb57159e1ecbfad33bc8

Last test of basis   161625  2021-05-03 14:01:31 Z    0 days
Testing same since   161758  2021-05-04 09:01:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d26c277826..e927a3b89a  e927a3b89ae82ac875aafedbefd6b4bc46201b7d -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:40:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:40:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122125.230325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtPT-0006Sl-8D; Tue, 04 May 2021 11:40:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122125.230325; Tue, 04 May 2021 11:40:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtPT-0006Se-4b; Tue, 04 May 2021 11:40:15 +0000
Received: by outflank-mailman (input) for mailman id 122125;
 Tue, 04 May 2021 11:40:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldtPR-0006SB-Or
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 11:40:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6007ed61-587b-4a88-95d8-5bb06f376c3e;
 Tue, 04 May 2021 11:40:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D227B278;
 Tue,  4 May 2021 11:40:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6007ed61-587b-4a88-95d8-5bb06f376c3e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620128411; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=y/0v17JDLn1s0Zf9Y3DlKqsVYyq+mWO8XcEqT48QLr4=;
	b=BD6xR3Yp16U41B8j4eOhw1JEtqp73v7aqltUBVe1SMO65+4PheCDJVjzvX78KFVuu7SSCJ
	oqb4bGj6WU+OxuXCP+w/HNESfrQ/HgYh7XU95H794f2J7dIIxrBfpjpz5NLjrUUTxJpV3A
	qlJuVon7LWHSUTEkFUFJKOEmVHAQbFM=
Subject: Re: [PATCH v3 03/13] libs/guest: allow fetching a specific MSR entry
 from a cpu policy
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-4-roger.pau@citrix.com>
 <273ba6f9-dee9-00db-407b-10325d21afae@suse.com>
 <YJEoS6P1S6NbySFd@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <54c48a0f-075f-c379-eeb4-60b4439d8907@suse.com>
Date: Tue, 4 May 2021 13:40:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <YJEoS6P1S6NbySFd@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 12:56, Roger Pau Monné wrote:
> On Mon, May 03, 2021 at 12:41:29PM +0200, Jan Beulich wrote:
>> On 30.04.2021 17:52, Roger Pau Monne wrote:
>>> --- a/tools/libs/guest/xg_cpuid_x86.c
>>> +++ b/tools/libs/guest/xg_cpuid_x86.c
>>> @@ -850,3 +850,45 @@ int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
>>>      *out = *tmp;
>>>      return 0;
>>>  }
>>> +
>>> +static int compare_entries(const void *l, const void *r)
>>> +{
>>> +    const xen_msr_entry_t *lhs = l;
>>> +    const xen_msr_entry_t *rhs = r;
>>> +
>>> +    if ( lhs->idx == rhs->idx )
>>> +        return 0;
>>> +    return lhs->idx < rhs->idx ? -1 : 1;
>>> +}
>>> +
>>> +static xen_msr_entry_t *find_entry(xen_msr_entry_t *entries,
>>> +                                   unsigned int nr_entries, unsigned int index)
>>> +{
>>> +    const xen_msr_entry_t key = { index };
>>> +
>>> +    return bsearch(&key, entries, nr_entries, sizeof(*entries), compare_entries);
>>> +}
>>
>> Isn't "entries" / "entry" a little too generic a name here, considering
>> the CPUID equivalents use "leaves" / "leaf"? (Noticed really while looking
>> at patch 7.)
> 
> Would you be fine with naming the function find_msr and leaving the
> rest of the parameters names as-is?

Yes. But recall I'm not the maintainer of this code anyway.
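
As an aside, the qsort()/bsearch() comparator pattern in the hunk above
can be exercised standalone; a minimal sketch with an illustrative
stand-in type (not the real xen_msr_entry_t layout):

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Illustrative stand-in for xen_msr_entry_t: keyed on idx. */
typedef struct {
    unsigned int idx;
    unsigned long long val;
} msr_entry;

static int compare_entries(const void *l, const void *r)
{
    const msr_entry *lhs = l;
    const msr_entry *rhs = r;

    if ( lhs->idx == rhs->idx )
        return 0;
    return lhs->idx < rhs->idx ? -1 : 1;
}

static msr_entry *find_entry(msr_entry *entries, size_t nr_entries,
                             unsigned int index)
{
    const msr_entry key = { .idx = index };

    /* bsearch() requires 'entries' to already be sorted by idx. */
    return bsearch(&key, entries, nr_entries, sizeof(*entries),
                   compare_entries);
}
```

Note the sorted-input precondition: bsearch() silently returns wrong
results on an unsorted array, hence the qsort() call elsewhere in the
patch.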

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:42:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:42:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122131.230340 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtRF-0006es-Lv; Tue, 04 May 2021 11:42:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122131.230340; Tue, 04 May 2021 11:42:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtRF-0006el-Ii; Tue, 04 May 2021 11:42:05 +0000
Received: by outflank-mailman (input) for mailman id 122131;
 Tue, 04 May 2021 11:42:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldtRE-0006ef-Rq
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 11:42:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3d4f0c5-8710-481e-8ead-3e43278b0aa9;
 Tue, 04 May 2021 11:42:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4FE62B1AB;
 Tue,  4 May 2021 11:42:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3d4f0c5-8710-481e-8ead-3e43278b0aa9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620128523; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ryllCNiCw7GWCpYLyQrlqnMw2lLfBO5O1ojPixYPNT4=;
	b=mixHJGkXH6MOVUps68qhyJZA+GZeuTjSYQkaArdXiNll0b9Zhmny1yn8hsHCK34NHQ8AN3
	VGwh2uEJTAg3bXpPU+62Jzwne5UdCH7o8c01+zFNbqKYFgtacTiSGtKGkNCbmBs5btqOEk
	MGzb5YZkj65asSnlaBgkt8HLHnWP+iU=
Subject: Re: [PATCH v4 09/12] x86/irq: remove unused parameter from
 hvm_isa_irq_assert
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-10-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <31aac991-530e-5c4c-2f2c-c66a6f2e92c4@suse.com>
Date: Tue, 4 May 2021 13:42:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210420140723.65321-10-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.04.2021 16:07, Roger Pau Monne wrote:
> There are no callers anymore passing a get_vector function pointer to
> hvm_isa_irq_assert, so drop the parameter.
> 
> No functional change expected.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:42:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122137.230352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtRm-0006lQ-2z; Tue, 04 May 2021 11:42:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122137.230352; Tue, 04 May 2021 11:42:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtRl-0006lJ-WD; Tue, 04 May 2021 11:42:38 +0000
Received: by outflank-mailman (input) for mailman id 122137;
 Tue, 04 May 2021 11:42:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldtRk-0006lA-J8
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 11:42:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4f7a5e8f-b223-4dd4-87fa-8fe8185322d7;
 Tue, 04 May 2021 11:42:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 18090B287;
 Tue,  4 May 2021 11:42:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4f7a5e8f-b223-4dd4-87fa-8fe8185322d7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620128555; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xpOfQ6/LbyqOjgwTFDZZRqRwvL0gA5SO2gz74Xc7MLY=;
	b=N+qkkr2e3/At5UDHXS1D34vSUYlLcTEpc5hE3ldg1SU2o+f+kNw6VZPb3S6OclTQrcLJ73
	epWBGY97d8diHnHtRTlPZ45D9CtCcfkc+vNAlgzxOA5mfsQegTc+kIgsE8GCURcOe+NDsp
	bIFBs21s1UnZD7j/QLxLQ2bvgZ68sEU=
Subject: Re: [PATCH v4 10/12] x86/irq: drop return value from
 hvm_ioapic_assert
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210420140723.65321-1-roger.pau@citrix.com>
 <20210420140723.65321-11-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <06479892-2e2c-4fe6-5f7c-9903827db3b4@suse.com>
Date: Tue, 4 May 2021 13:42:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210420140723.65321-11-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.04.2021 16:07, Roger Pau Monne wrote:
> There's no caller anymore that cares about the injected vector, so
> drop the returned vector from the function.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:48:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:48:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122148.230367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtXo-00071D-Qw; Tue, 04 May 2021 11:48:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122148.230367; Tue, 04 May 2021 11:48:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtXo-000716-My; Tue, 04 May 2021 11:48:52 +0000
Received: by outflank-mailman (input) for mailman id 122148;
 Tue, 04 May 2021 11:48:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldtXn-000711-9R
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 11:48:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 451d6ac6-91b6-4985-8579-eed8b1a69fa6;
 Tue, 04 May 2021 11:48:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4ADB5B1AB;
 Tue,  4 May 2021 11:48:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 451d6ac6-91b6-4985-8579-eed8b1a69fa6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620128929; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=9cWwn9iWhot3yHq5n08aa+U0hOkGHwH+wkF4UOgeBK4=;
	b=UPJ53o2sbXeExC8IY+3+cSWCwTxZ6c0OvTV+AxhNVBXF3CACwNfO9QSjhfUwidbooXdZBQ
	LoSaKlq1USHhMFbq6aSwakX0Pn4n/TzBOycG/JnOhOlmT8AD+dqaKeOFl8HOWT+BBMuFcK
	nanlnAB6U7Z3sB6dTtaaPeD9jckERlk=
Subject: Re: [PATCH v4 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: bertrand.marquis@arm.com, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210504094606.7125-1-luca.fancellu@arm.com>
 <20210504094606.7125-4-luca.fancellu@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <37e5b461-40fe-ac78-59b9-033ff8cdc6d1@suse.com>
Date: Tue, 4 May 2021 13:48:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210504094606.7125-4-luca.fancellu@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.05.2021 11:46, Luca Fancellu wrote:
> @@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>   * bytes to be copied.
>   */
>  
> -#define _GNTCOPY_source_gref      (0)
> -#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
> -#define _GNTCOPY_dest_gref        (1)
> -#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
> -
>  struct gnttab_copy {
>      /* IN parameters. */
>      struct gnttab_copy_ptr {
> @@ -471,6 +481,12 @@ struct gnttab_copy {
>      /* OUT parameters. */
>      int16_t       status;
>  };
> +
> +#define _GNTCOPY_source_gref      (0)
> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
> +#define _GNTCOPY_dest_gref        (1)
> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)

Didn't you say you agreed with moving this back up, next to the
field using these flags?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:51:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:51:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122152.230379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtaW-0007pe-8j; Tue, 04 May 2021 11:51:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122152.230379; Tue, 04 May 2021 11:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtaW-0007pX-4Y; Tue, 04 May 2021 11:51:40 +0000
Received: by outflank-mailman (input) for mailman id 122152;
 Tue, 04 May 2021 11:51:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldtaU-0007pS-Nd
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 11:51:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c489974f-ef13-4bf8-8d83-9682ad0826a6;
 Tue, 04 May 2021 11:51:38 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 38AF7ACC4;
 Tue,  4 May 2021 11:51:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c489974f-ef13-4bf8-8d83-9682ad0826a6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620129097; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=T5ED1M1mFdajQeHDp6a1WU7QjD7jpSFUTnWtGUDsc5Q=;
	b=bImvwcbx2mxS07aTydHhVAV2lQKO6BYWGOWG+ADvhRSe9U4ZQWX4JDvFlJ4PcxPEpinAbA
	PMMVTdZEt0ao/DeI/+7jRcMIiUL6sXAlZTQK11oxjveN/jQtnuSZJoEcVH+OW332aLQtjQ
	kA+tbR3+VPgEu47dao4H6CehnhLXE6w=
Subject: Re: [PATCH 1/5] x86/xstate: Elide redundant writes in set_xcr0()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <39d641e7-107b-0e60-8fc2-6f6c2303f072@suse.com>
Date: Tue, 4 May 2021 13:51:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210503153938.14109-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 17:39, Andrew Cooper wrote:
> XSETBV is an expensive instruction as, amongst other things, it involves
> reconfiguring the instruction decode at the frontend of the pipeline.
> 
> We have several paths which reconfigure %xcr0 in quick succession (the context
> switch path has 5, including the fpu save/restore helpers), and only a single
> caller takes any care to try to skip redundant writes.
> 
> Update set_xcr0() to perform amortisation automatically, and simplify the
> __context_switch() path as a consequence.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
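For illustration, the amortisation the commit message describes amounts to
caching the last value written and skipping the expensive instruction when
the new value matches. This is a hypothetical standalone sketch, not the
actual Xen implementation; cached_xcr0 and xsetbv_count are invented names
(real code would keep a per-CPU cache and execute XSETBV):

```c
#include <stdint.h>

/* Minimal model of set_xcr0() write elision: remember the last value
 * written and return early when the new value is identical, so the
 * (expensive) XSETBV is only issued for genuine changes. */
static uint64_t cached_xcr0;
static unsigned int xsetbv_count; /* counts writes, standing in for XSETBV */

static void set_xcr0(uint64_t val)
{
    if ( val == cached_xcr0 )
        return;                 /* redundant write elided */

    cached_xcr0 = val;
    xsetbv_count++;             /* real code would execute XSETBV here */
}
```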


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:53:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:53:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122156.230390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtc7-0007x2-Jj; Tue, 04 May 2021 11:53:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122156.230390; Tue, 04 May 2021 11:53:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtc7-0007wv-Gl; Tue, 04 May 2021 11:53:19 +0000
Received: by outflank-mailman (input) for mailman id 122156;
 Tue, 04 May 2021 11:53:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldtc6-0007wp-VF
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 11:53:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a975c2e1-829c-4ef6-adcd-6a9958ff0596;
 Tue, 04 May 2021 11:53:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6B091B1BF;
 Tue,  4 May 2021 11:53:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a975c2e1-829c-4ef6-adcd-6a9958ff0596
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620129197; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gCoQbFof7v0vqsM+hR+s0SWCWVBD6bGkJrW0MxBuKS8=;
	b=gBI0k07UujlB0/jjj68BC7Iw+/2L/thWfJlO1p2JcjWzLxYkcBeKNQDipQeSAwlizHuX6+
	P49KiM4hxCGHwweZbHjWyg2FyWKuRklD4MkNIs0+O6Cj/pe5ahldlM0h+wsRWemElcBglJ
	+cPYi9Onpu6ekikY+c+zeVgPOH0OnFo=
Subject: Re: [PATCH 2/5] x86/xstate: Rename _xstate_ctxt_size() to
 hw_uncompressed_size()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d4000f61-12f7-0aa4-3a1a-cb997159828e@suse.com>
Date: Tue, 4 May 2021 13:53:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210503153938.14109-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 17:39, Andrew Cooper wrote:
> The latter is a more descriptive name, as it explicitly highlights the query
> from hardware.
> 
> Simplify the internal logic using cpuid_count_ebx(), and drop the curr/max
> assertion as this property is guaranteed by the x86 ISA.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:57:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 11:57:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122162.230402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtfo-00087m-4a; Tue, 04 May 2021 11:57:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122162.230402; Tue, 04 May 2021 11:57:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtfo-00087f-1V; Tue, 04 May 2021 11:57:08 +0000
Received: by outflank-mailman (input) for mailman id 122162;
 Tue, 04 May 2021 11:57:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4Og=J7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ldtfn-00087a-3L
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 11:57:07 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b1784aca-fad0-457d-b15d-8d245708b444;
 Tue, 04 May 2021 11:57:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1784aca-fad0-457d-b15d-8d245708b444
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620129425;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=4keY9u1EV9zqFYDjpT+eLegpqcQzH9qzOzPqXKH+nm8=;
  b=WEDJK/PS9Jz1jq+w7DU4vpe0XSZ50WefFEbtIRxMCc7gNo/3TrFjKAGr
   oUkQuZQZVZOwa2v7TO/UCrwbWTjd3GUVRzDQYXpuPCklvmTWu6HPQt+0g
   zhyHkjNByEZS+iLW2oGcnE0HQHQ7u7/4G+LmmS0hl74THdI3QSiL5AnjN
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43398568
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43398568"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UhnmjpgW9t++ohBH/Uws4afog9ZTutRZDDXj4+NoAsIwG7GFejlry9Zw79jX9Om1hyrSKiXjnh1rqhldumSpz9q+ZOs97ZI0laqbCq4fVWsY+LhGjffT+sVZDwT/X6mAUX/KR4js7k+TVjGboQ66cloXc2wJQOxfDo5xbDfAiESvTDH1FKwn2kMKgU4XX0iJ57/hxDo+O4D0FFofWGG65f9E85/jhHfYHFwv2aNDRqW4NHB48CQ1w70+RQyalPqCkDvakgWt/IfeVE7eMAr77qb+6yeOd0Lncc7Lier++QQzUwNvwKTN3OAXmHvJf9sm9YUarRcEaP0+quYROdXZYA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aAGZmKUgV3p5/89FFRlnYouNXo8eGBnLdLeQmKZcnSU=;
 b=nJptIdxeExLakYiFDIk1jU9Urxpr4jn0+wG7NhItr8xNnXKyfStDJnPT0VkwTUL+2GcOEWqMZ2W6gyUOdHSwMo5Ar8L51tKvr7p5FuslMowxdEUe7wwxVWaSI1Q5XQwcBlBUON3f7RZFsT9vyovuWqf0xhVe8XJzhqYGMVxZp/13afqGeRyRYb+QEtLYdMOrVAc++ujL2iugm/iI7Cirz2HrD1SeT34NiREpg99RPUflSZ0ac8gBqZQbLhVKlkqJUL8jvQLz2R9Tfj74OhAdl6ooHi3XMYeLIM8Pv4qd16wM17tk9wZx/QKKNi4w+GmvmSlSYUWVjq2EaMETIc/Zpw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aAGZmKUgV3p5/89FFRlnYouNXo8eGBnLdLeQmKZcnSU=;
 b=iWI2F1MXUKqNI6oK5jIOftcGiA93m0o2YZyOVUlr2RMA9pyCJeiPgVb77q7ttZ/JstvfhOgHVE9TBXuWoTzY729skhk9OmkZlpYwNgjA2OwacKaGsuYUn6tcnOg6hQ1fD2Y54QfKMsfWV2Vsk+M8SGe+ef7WQAZ2p6WG4E1hjGA=
Date: Tue, 4 May 2021 13:56:55 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 07/13] libs/guest: obtain a compatible cpu policy from
 two input ones
Message-ID: <YJE2hxPYq2kGrOwV@Air-de-Roger>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-8-roger.pau@citrix.com>
 <838e358d-5707-0f34-c8fe-64e29f000a69@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <838e358d-5707-0f34-c8fe-64e29f000a69@suse.com>
X-ClientProxiedBy: MR1P264CA0027.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:2f::14) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: eb0c6a5d-c8f4-452c-8b50-08d90ef3b98f
X-MS-TrafficTypeDiagnostic: DM4PR03MB5998:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM4PR03MB599892407782346CC5FB70278F5A9@DM4PR03MB5998.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: eb0c6a5d-c8f4-452c-8b50-08d90ef3b98f
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 11:57:00.3854
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EWrCikZSzjhMnOn66ST5IA5zNwhmdFqZqU96U7J+NKInvQRUA62ir1PJCXeGZYQv6fx3xGR1xwEJ7yLled50Eg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB5998
X-OriginatorOrg: citrix.com

On Mon, May 03, 2021 at 12:43:08PM +0200, Jan Beulich wrote:
> On 30.04.2021 17:52, Roger Pau Monne wrote:
> > Introduce a helper to obtain a compatible cpu policy based on two
> > input cpu policies. Currently this is done by and'ing all CPUID
> > feature leaves and MSR entries, except for MSR_ARCH_CAPABILITIES which
> > has the RSBA bit or'ed.
> > 
> > The _AC macro is pulled from libxl_internal.h into xen-tools/libs.h
> > since it's required in order to use the msr-index.h header.
> > 
> > Note there's no need to place this helper in libx86, since the
> > calculation of a compatible policy shouldn't be done from the
> > hypervisor.
> > 
> > No callers of the interface introduced.
> > 
> > Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> > ---
> > Changes since v2:
> >  - Add some comments.
> >  - Remove stray double semicolon.
> >  - AND all 0x7 subleaves (except 0.EAX).
> >  - Explicitly handle MSR indexes in a switch statement.
> >  - Error out when an unhandled MSR is found.
> >  - Add handling of leaf 0x80000021.
> > 
> > Changes since v1:
> >  - Only AND the feature parts of cpuid.
> >  - Use a binary search to find the matching leaves and msr entries.
> >  - Remove default case from MSR level function.
> > ---
> >  tools/include/xen-tools/libs.h    |   5 ++
> >  tools/include/xenctrl.h           |   4 +
> >  tools/libs/guest/xg_cpuid_x86.c   | 137 ++++++++++++++++++++++++++++++
> >  tools/libs/light/libxl_internal.h |   2 -
> >  4 files changed, 146 insertions(+), 2 deletions(-)
> > 
> > diff --git a/tools/include/xen-tools/libs.h b/tools/include/xen-tools/libs.h
> > index a16e0c38070..b9e89f9a711 100644
> > --- a/tools/include/xen-tools/libs.h
> > +++ b/tools/include/xen-tools/libs.h
> > @@ -63,4 +63,9 @@
> >  #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
> >  #endif
> >  
> > +#ifndef _AC
> > +#define __AC(X,Y)   (X##Y)
> > +#define _AC(X,Y)    __AC(X,Y)
> > +#endif
> > +
> >  #endif	/* __XEN_TOOLS_LIBS__ */
> > diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
> > index 5f699c09509..c41d794683c 100644
> > --- a/tools/include/xenctrl.h
> > +++ b/tools/include/xenctrl.h
> > @@ -2622,6 +2622,10 @@ int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t policy,
> >  /* Compatibility calculations. */
> >  bool xc_cpu_policy_is_compatible(xc_interface *xch, const xc_cpu_policy_t host,
> >                                   const xc_cpu_policy_t guest);
> > +int xc_cpu_policy_calc_compatible(xc_interface *xch,
> > +                                  const xc_cpu_policy_t p1,
> > +                                  const xc_cpu_policy_t p2,
> > +                                  xc_cpu_policy_t out);
> >  
> >  int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
> >  int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
> > diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
> > index 6b8bae00334..be2056469aa 100644
> > --- a/tools/libs/guest/xg_cpuid_x86.c
> > +++ b/tools/libs/guest/xg_cpuid_x86.c
> > @@ -32,6 +32,7 @@ enum {
> >  #include <xen/arch-x86/cpufeatureset.h>
> >  };
> >  
> > +#include <xen/asm/msr-index.h>
> >  #include <xen/asm/x86-vendors.h>
> >  
> >  #include <xen/lib/x86/cpu-policy.h>
> > @@ -949,3 +950,139 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, const xc_cpu_policy_t host,
> >  
> >      return false;
> >  }
> > +
> > +static bool level_msr(const xen_msr_entry_t *e1, const xen_msr_entry_t *e2,
> > +                      xen_msr_entry_t *out)
> > +{
> > +    *out = (xen_msr_entry_t){ .idx = e1->idx };
> > +
> > +    switch ( e1->idx )
> > +    {
> > +    case MSR_INTEL_PLATFORM_INFO:
> > +        out->val = e1->val & e2->val;
> > +        return true;
> > +
> > +    case MSR_ARCH_CAPABILITIES:
> > +        out->val = e1->val & e2->val;
> > +        /*
> > +         * Set RSBA if present on any of the input values to notice the guest
> > +         * might run on vulnerable hardware at some point.
> > +         */
> > +        out->val |= (e1->val | e2->val) & ARCH_CAPS_RSBA;
> > +        return true;
> > +    }
> > +
> > +    return false;
> > +}
> > +
> > +/* Only level featuresets so far. */
> 
> I have to admit that I don't think I see all the implications from
> this implementation restriction. All other leaves get dropped by
> the caller, but it's not clear to me what this means wrt what the
> guest is ultimately going to get to see.

This aims to match what XenServer does, which I'm told is to level the
featuresets. The caller of the function will have to fill in the parts
of the policy that cannot be leveled; it's likely new helpers will be
added to do that as required.

One option would be to get the default policy for the guest and then
use xc_cpu_policy_update_cpuid to apply the leveled one.

> > +static bool level_leaf(const xen_cpuid_leaf_t *l1, const xen_cpuid_leaf_t *l2,
> > +                       xen_cpuid_leaf_t *out)
> > +{
> > +    *out = (xen_cpuid_leaf_t){
> > +        .leaf = l1->leaf,
> > +        .subleaf = l2->subleaf,
> 
> Since ->leaf and ->subleaf ought to match anyway, I think it would
> look less odd if both initializers were taken from consistent source.

Sure, my bad.

> > +    };
> > +
> > +    switch ( l1->leaf )
> > +    {
> > +    case 0x1:
> > +    case 0x80000001:
> > +        out->c = l1->c & l2->c;
> > +        out->d = l1->d & l2->d;
> > +        return true;
> > +
> > +    case 0xd:
> > +        if ( l1->subleaf != 1 )
> > +            break;
> > +        /*
> > +         * Only take Da1 into account, the rest of subleaves will be dropped
> > +         * and recalculated by recalculate_xstate.
> > +         */
> > +        out->a = l1->a & l2->a;
> > +        return true;
> > +
> > +    case 0x7:
> > +        if ( l1->subleaf )
> > +            /* subleaf 0 EAX contains the max subleaf count. */
> > +            out->a = l1->a & l2->a;
> 
>         else
>             out->a = min(l1->a, l2->a);
> 
> ? Or is the result from here then further passed to
> x86_cpuid_policy_shrink_max_leaves() (not visible from this patch)?
> (If not, the same would apply to all other multi-subleaf leaves.)

Hm, it might be worth setting all the max fields directly in
xc_cpu_policy_calc_compatible and also adding a call to
x86_cpuid_policy_shrink_max_leaves after the leveling is done.

> > +        out->b = l1->b & l2->b;
> > +        out->c = l1->c & l2->c;
> > +        out->d = l1->d & l2->d;
> > +        return true;
> > +
> > +    case 0x80000007:
> > +        out->d = l1->d & l2->d;
> > +        return true;
> > +
> > +    case 0x80000008:
> > +        out->b = l1->b & l2->b;
> > +        return true;
> > +
> > +    case 0x80000021:
> > +        out->a = l1->a & l2->a;
> > +        return true;
> > +    }
> > +
> > +    return false;
> > +}
> > +
> > +int xc_cpu_policy_calc_compatible(xc_interface *xch,
> > +                                  const xc_cpu_policy_t p1,
> > +                                  const xc_cpu_policy_t p2,
> > +                                  xc_cpu_policy_t out)
> 
> I have to admit that I find these two "const" misleading here. You
> don't equally constify the other two parameters (which would e.g. be
> xc_interface *const xch), and I don't think doing so is common
> practice elsewhere. And what p1 and p2 point to is specifically non-
> const (and cannot be const), due to ...
> 
> > +{
> > +    unsigned int nr_leaves, nr_msrs, i, index;
> > +    unsigned int p1_nr_leaves, p2_nr_leaves;
> > +    unsigned int p1_nr_entries, p2_nr_entries;
> > +    int rc;
> > +
> > +    p1_nr_leaves = p2_nr_leaves = ARRAY_SIZE(p1->leaves);
> > +    p1_nr_entries = p2_nr_entries = ARRAY_SIZE(p1->entries);
> > +
> > +    rc = xc_cpu_policy_serialise(xch, p1, p1->leaves, &p1_nr_leaves,
> > +                                 p1->entries, &p1_nr_entries);
> > +    if ( rc )
> > +        return rc;
> > +    rc = xc_cpu_policy_serialise(xch, p2, p2->leaves, &p2_nr_leaves,
> > +                                 p2->entries, &p2_nr_entries);
> 
> ... these two calls.

Right, that's a leftover from a previous version, where xc_cpu_policy_t
didn't yet contain a buffer to store the serialised version.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:58:24 2021
Date: Tue, 4 May 2021 13:58:11 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 03/13] libs/guest: allow fetching a specific MSR entry
 from a cpu policy
Message-ID: <YJE20/M+OCER2vPn@Air-de-Roger>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-4-roger.pau@citrix.com>
 <273ba6f9-dee9-00db-407b-10325d21afae@suse.com>
 <YJEoS6P1S6NbySFd@Air-de-Roger>
 <54c48a0f-075f-c379-eeb4-60b4439d8907@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <54c48a0f-075f-c379-eeb4-60b4439d8907@suse.com>

On Tue, May 04, 2021 at 01:40:11PM +0200, Jan Beulich wrote:
> On 04.05.2021 12:56, Roger Pau Monné wrote:
> > On Mon, May 03, 2021 at 12:41:29PM +0200, Jan Beulich wrote:
> >> On 30.04.2021 17:52, Roger Pau Monne wrote:
> >>> --- a/tools/libs/guest/xg_cpuid_x86.c
> >>> +++ b/tools/libs/guest/xg_cpuid_x86.c
> >>> @@ -850,3 +850,45 @@ int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
> >>>      *out = *tmp;
> >>>      return 0;
> >>>  }
> >>> +
> >>> +static int compare_entries(const void *l, const void *r)
> >>> +{
> >>> +    const xen_msr_entry_t *lhs = l;
> >>> +    const xen_msr_entry_t *rhs = r;
> >>> +
> >>> +    if ( lhs->idx == rhs->idx )
> >>> +        return 0;
> >>> +    return lhs->idx < rhs->idx ? -1 : 1;
> >>> +}
> >>> +
> >>> +static xen_msr_entry_t *find_entry(xen_msr_entry_t *entries,
> >>> +                                   unsigned int nr_entries, unsigned int index)
> >>> +{
> >>> +    const xen_msr_entry_t key = { index };
> >>> +
> >>> +    return bsearch(&key, entries, nr_entries, sizeof(*entries), compare_entries);
> >>> +}
> >>
> >> Isn't "entries" / "entry" a little too generic a name here, considering
> >> the CPUID equivalents use "leaves" / "leaf"? (Noticed really while looking
> >> at patch 7.)
> > 
> > Would you be fine with naming the function find_msr and leaving the
> > rest of the parameter names as-is?
> 
> Yes. But recall I'm not the maintainer of this code anyway.

You took the time to provide feedback, and I'm happy to make the change.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 04 11:59:54 2021
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-3-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 02/13] libs/guest: allow fetching a specific CPUID leaf
 from a cpu policy
Message-ID: <76e5e596-24bc-9d91-e654-cef1115e5139@citrix.com>
Date: Tue, 4 May 2021 12:59:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210430155211.3709-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB

On 30/04/2021 16:52, Roger Pau Monne wrote:
> @@ -822,3 +825,28 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t p,
>      errno = 0;
>      return 0;
>  }
> +
> +int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
> +                            uint32_t leaf, uint32_t subleaf,
> +                            xen_cpuid_leaf_t *out)
> +{
> +    unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
> +    xen_cpuid_leaf_t *tmp;
> +    int rc;
> +
> +    rc = xc_cpu_policy_serialise(xch, policy, policy->leaves, &nr_leaves,
> +                                 NULL, 0);
> +    if ( rc )
> +        return rc;

Sorry for not spotting this last time.

You don't need to serialise.  You can look up leaf/subleaf in O(1) time
from cpuid_policy, which was a design goal of the structure originally.

It is probably best to adapt most of the first switch statement in
guest_cpuid() to be a libx86 function.  The asserts aren't massively
interesting to keep, and instead of messing around with nospec, just
have the function return a pointer into the cpuid_policy (or NULL), and
have a single block_speculation() in Xen.  We'll also want a unit test
to go with this new function to check that out-of-range leaves don't
result in out-of-bounds reads.

~Andrew



	dM2KaT24y4tgaFbYNdL0XQvJmK9pSYZCMch4JApsrsNVkJp7dmrrMocD4b1ZIANKDav4CtdmemR8I
	Xh8saZ2aI/ZbOkkUjZW5s2fSbJnuQjgFzHJiuV2o79/A9kiEWRfj0QL1IbiECMbgxSVw=;
Content-Type: multipart/mixed; boundary="=separator"; charset="utf-8"
Content-Transfer-Encoding: binary
MIME-Version: 1.0
X-Mailer: MIME-tools 5.509 (Entity 5.509)
To: xen-announce@lists.xen.org, xen-devel@lists.xen.org,
 xen-users@lists.xen.org, oss-security@lists.openwall.com
From: Xen.org security team <security@xen.org>
CC: Xen.org security team <security-team-members@xen.org>
Subject: Xen Security Advisory 370 v2 (CVE-2021-28689) - x86: Speculative
 vulnerabilities with bare (non-shim) 32-bit PV guests
Message-Id: <E1ldtkc-0005td-56@xenbits.xenproject.org>
Date: Tue, 04 May 2021 12:02:06 +0000

--=separator
Content-Type: text/plain; charset="utf-8"
Content-Disposition: inline
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNED MESSAGE-----
Hash: SHA256

            Xen Security Advisory CVE-2021-28689 / XSA-370
                               version 2

   x86: Speculative vulnerabilities with bare (non-shim) 32-bit PV guests

UPDATES IN VERSION 2
====================

The files summary in the Resolution section now notes that the patch is
docs-only, and gives the affected version ranges.

Public release.

ISSUE DESCRIPTION
=================

32-bit x86 PV guest kernels run in ring 1.  At the time when Xen was
developed, this area of the i386 architecture was rarely used, which is why
Xen was able to use it to implement paravirtualisation, Xen's novel
approach to virtualisation.  In AMD64, Xen had to use a different
implementation approach, so Xen does not use ring 1 to support 64-bit
guests.  With the focus now being on 64-bit systems, and the availability
of explicit hardware support for virtualisation, fixing speculation issues
in ring 1 is not a priority for processor companies.

Indirect Branch Restricted Speculation (IBRS) is an architectural x86
extension put together to combat speculative execution sidechannel attacks,
including Spectre v2.  It was retrofitted in microcode to existing CPUs.

For more details on Spectre v2, see:
  http://xenbits.xen.org/xsa/advisory-254.html

However, IBRS does not architecturally protect ring 0 from predictions
learnt in ring 1.

For more details, see:
  https://software.intel.com/security-software-guidance/deep-dives/deep-dive-indirect-branch-restricted-speculation

Similar situations may exist with other mitigations for other kinds of
speculative execution attacks.  The situation is quite likely to be similar
for speculative execution attacks which have yet to be discovered,
disclosed, or mitigated.

IMPACT
======

A malicious 32-bit guest kernel may be able to mount a Spectre v2 attack
against Xen, despite hardware protections being active.

It therefore might be able to infer the contents of arbitrary host memory,
including memory assigned to other guests.

VULNERABLE SYSTEMS
==================

Systems running all versions of Xen are affected.

Only x86 systems are vulnerable, and only CPUs which are potentially
vulnerable to Spectre v2.  Consult your hardware manufacturer.

The vulnerability can only be exploited by 32-bit PV guests which are not
run in PV-Shim.

MITIGATION
==========

Running 32-bit PV guests under PV-Shim avoids the vulnerability when Spectre v2
protections are otherwise enabled on the system.

PV shim is available and fully security-supported in all
security-supported versions of Xen.  Using shim is the recommended
configuration.

Not running 32-bit PV guests avoids the vulnerability.
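For illustration, a 32-bit PV guest can be switched to run under PV-shim
via the `pvshim` option in its xl domain configuration.  The kernel path
and memory size below are placeholders; consult the xl.cfg documentation
for the full set of pvshim-related options.

```
# Illustrative xl domain configuration for a 32-bit PV guest run
# inside PV-shim (kernel path and memory size are placeholders).
type = "pv"
pvshim = 1
kernel = "/path/to/32bit-guest-kernel"
memory = 512
```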

CREDITS
=======

This issue was discovered by Jann Horn of Google Project Zero.

RESOLUTION
==========

There is no resolution available, and none is ever expected.

The patches provided only update the security support statement.

The first patch is an unavoidable consequence of the discussions
above; the support status described is in effect immediately.

The security team does not consider the support status listed in patch
1 to be particularly useful; however, we do not feel we have the
authority to completely de-support non-shim 32-bit PV guests without
community consultation.

The second patch is the long-term support status the security team
proposes to the community. It will not become effective until three
weeks after the XSA-370 embargo lifts, and only if there are no
objections raised before that point.

If you need security support for un-shimmed 32-bit PV guests, please
make your voice heard on xen-devel@lists.xenproject.org (or to
security@xenproject.org) as soon as possible after the embargo lifts.

xsa370/*.patch         Xen unstable (docs only)
<no fix available>     Xen (all versions)

$ sha256sum xsa370* xsa370*/*
ffb6e1be6a849b8e6930386d70817f53970f3d71a0a89980565c87070e85a7e2  xsa370.meta
45c11df550f1900663a388106d6625e84fa280881e613825c830b1984f87b3a9  xsa370/0001-SUPPORT.md-Document-speculative-attacks-status-of-no.patch
48dfe434bcdf4f08b623b639079fd1c9f9b1939b279200550dbae7736340cb53  xsa370/0002-SUPPORT.md-Un-shimmed-32-bit-PV-guests-are-no-longer.patch
$

BARE 32 BIT PV SECURITY SUPPORT STATUS
======================================

This advisory discloses only a (very serious) information disclosure
vulnerability exploitable by bare 32 bit PV guests, using speculative
execution.

We are considering further entirely withdrawing security support for
configurations with non-shim 32 bit PV guests.  Any such decision,
including the precise scope of the (de)support, will be made following
public community discussion.

The result of that public process will be a patch to the security support
statement, backported (as applicable) to the relevant trees.

NOTE REGARDING EMBARGO
======================

In principle, the fact that the new CPU facilities are not capable of
protecting ring 0 Xen from a ring 1 PV guest might be gleaned from the
hardware vendor documentation.

However, in practice this documentation is so difficult to find and
interpret that the implications discussed in this advisory are not
widely recognised, if at all.

DEPLOYMENT DURING EMBARGO
=========================

Deployment of the patches and/or mitigations described above (or
others which are substantially similar) is permitted during the
embargo, even on public-facing systems with untrusted guest users and
administrators.

But: Distribution of updated software is prohibited (except to other
members of the predisclosure list).

Predisclosure list members who wish to deploy significantly different
patches and/or mitigations, please contact the Xen Project Security
Team.


(Note: this during-embargo deployment notice is retained in
post-embargo publicly released Xen Project advisories, even though it
is then no longer applicable.  This is to enable the community to have
oversight of the Xen Project Security Team's decisionmaking.)

For more information about permissible uses of embargoed information,
consult the Xen Project community's agreed Security Policy:
  http://www.xenproject.org/security-policy.html
-----BEGIN PGP SIGNATURE-----

iQFABAEBCAAqFiEEI+MiLBRfRHX6gGCng/4UyVfoK9kFAmCRH6YMHHBncEB4ZW4u
b3JnAAoJEIP+FMlX6CvZNZIIAJiJsQvTRMfiBJ5+Yg4gyT7/T4vVkLZ4+j8FBlXL
1+SnIcOu5wgU0tmOADl58us9nZVZfo6X5xV4A+oJwrYvunI/1oGn27ylr3c0FYUH
PLSa8bGIw3BeeAGEpADL3rPIQtTeiokpGlkRSNaAz1N8kKypcY+4Ds4Pjtgz3Gd4
gk2y7U2wReV7OItk7Sp1lstyBdda1qClXedKJa+dENSzsf/6/o9Nad8sgCosMj+k
dx65CNgUWC2JRsMq+4fMTwhE2CtIh9IL4ylv7RyqI/ICW8UTMS2XOnALyjVIu1bI
96HCYrSCNclebmHI1385PV3CXUk6Goue0EDk3FxRTaBv7SM=
=YLXZ
-----END PGP SIGNATURE-----

--=separator
Content-Type: application/octet-stream; name="xsa370.meta"
Content-Disposition: attachment; filename="xsa370.meta"
Content-Transfer-Encoding: base64

ewogICJYU0EiOiAzNzAsCiAgIlN1cHBvcnRlZFZlcnNpb25zIjogWwogICAg
Im1hc3RlciIKICBdLAogICJUcmVlcyI6IFsKICAgICJ4ZW4iCiAgXSwKICAi
UmVjaXBlcyI6IHsKICAgICJtYXN0ZXIiOiB7CiAgICAgICJSZWNpcGVzIjog
ewogICAgICAgICJ4ZW4iOiB7CiAgICAgICAgICAiU3RhYmxlUmVmIjogIjQ4
MzQ5MzY1NDlmNzg4Mzc4OTE4ZGE4ZTliYzk3ZGY3ZGQzZWUxNmQiLAogICAg
ICAgICAgIlByZXJlcXMiOiBbXSwKICAgICAgICAgICJQYXRjaGVzIjogWwog
ICAgICAgICAgICAieHNhMzcwLyoucGF0Y2giCiAgICAgICAgICBdCiAgICAg
ICAgfQogICAgICB9CiAgICB9CiAgfQp9

--=separator
Content-Type: application/octet-stream;
 name="xsa370/0001-SUPPORT.md-Document-speculative-attacks-status-of-no.patch"
Content-Disposition: attachment;
 filename="xsa370/0001-SUPPORT.md-Document-speculative-attacks-status-of-no.patch"
Content-Transfer-Encoding: base64

RnJvbSA0ZTM3ZTIxZjZlNzE3NTJmYjY5YzI3YWI5ZjE0MTdhNWQxOWViZWRi
IE1vbiBTZXAgMTcgMDA6MDA6MDAgMjAwMQpGcm9tOiBJYW4gSmFja3NvbiA8
aWFuLmphY2tzb25AZXUuY2l0cml4LmNvbT4KRGF0ZTogVHVlLCA5IE1hciAy
MDIxIDE1OjAwOjQ3ICswMDAwClN1YmplY3Q6IFtQQVRDSCAxLzJdIFNVUFBP
UlQubWQ6IERvY3VtZW50IHNwZWN1bGF0aXZlIGF0dGFja3Mgc3RhdHVzIG9m
CiBub24tc2hpbSAzMi1iaXQgUFYKClRoaXMgZG9jdW1lbnRzLCBidXQgZG9l
cyBub3QgZml4LCBYU0EtMzcwLgoKUmVwb3J0ZWQtYnk6IEphbm4gSG9ybiA8
amFubmhAZ29vZ2xlLmNvbT4KU2lnbmVkLW9mZi1ieTogSWFuIEphY2tzb24g
PGlhbi5qYWNrc29uQGV1LmNpdHJpeC5jb20+ClNpZ25lZC1vZmYtYnk6IEdl
b3JnZSBEdW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KQWNrZWQt
Ynk6IEphbiBCZXVsaWNoIDxqYmV1bGljaEBzdXNlLmNvbT4KLS0tCgpOQiB0
aGF0IHRoZSBzZWN1cml0eSB0ZWFtIGRvZXMgbm90IGNvbnNpZGVyIHRoZSBz
ZWN1cml0eSBzdXBwb3J0CnN0YXR1cyBvZiB1bi1zaGltbWVkIDMyLWJpdCBQ
ViBndWVzdHMgaW4gdGhpcyBwYXRjaCB0byBiZSBwYXJ0aWN1bGFybHkKdXNl
ZnVsLiBIb3dldmVyLCB3ZSBkbyBub3QgY29uc2lkZXIgb3Vyc2VsdmVzIHRv
IGhhdmUgdGhlIGF1dGhvcml0eSB0byBkZWNpZGUKdG8gY29tcGxldGVseSBk
ZS1zdXBwb3J0IDMyLWJpdCBQViBndWVzdHMgd2l0aG91dCBjb21tdW5pdHkg
Y29uc3VsdGF0aW9uLgoKVGhlIHN1cHBvcnQgc3RhdHVzIGluIHRoaXMgcGF0
Y2ggc2hvdWxkIHRoZXJlZm9yZSBiZSBjb25zaWRlcmVkCnRyYW5zaXRpb25h
bC4gIEEgcGVybWFuZW50IHN1cHBvcnQgc3RhdHVzIGlzIHByb3Bvc2VkIGlu
IGEgc3Vic2VxdWVudApwYXRjaCBpbiB0aGlzIHNlcmllcy4KCnYyOgotIEZp
eCBkb3VibGUgJ2JlJwotIERvbid0IG1lbnRpb24gdXNlciAtPiBrZXJuZWwg
YXR0YWNrcywgd2hpY2ggaGF2ZSBub3RoaW5nIHRvIGRvIHdpdGggWGVuCi0t
LQogU1VQUE9SVC5tZCB8IDExICsrKysrKysrKystCiAxIGZpbGUgY2hhbmdl
ZCwgMTAgaW5zZXJ0aW9ucygrKSwgMSBkZWxldGlvbigtKQoKZGlmZiAtLWdp
dCBhL1NVUFBPUlQubWQgYi9TVVBQT1JULm1kCmluZGV4IDdkYjQ1NjhmMWEu
LjZkY2Q5M2UyMmYgMTAwNjQ0Ci0tLSBhL1NVUFBPUlQubWQKKysrIGIvU1VQ
UE9SVC5tZApAQCAtODQsNyArODQsMTYgQEAgVHJhZGl0aW9uYWwgWGVuIFBW
IGd1ZXN0CiAKIE5vIGhhcmR3YXJlIHJlcXVpcmVtZW50cwogCi0gICAgU3Rh
dHVzOiBTdXBwb3J0ZWQKKyAgICBTdGF0dXMsIHg4Nl82NDogU3VwcG9ydGVk
CisgICAgU3RhdHVzLCB4ODZfMzIsIHNoaW06IFN1cHBvcnRlZAorICAgIFN0
YXR1cywgeDg2XzMyLCB3aXRob3V0IHNoaW06IFN1cHBvcnRlZCwgd2l0aCBj
YXZlYXRzCisKK0R1ZSB0byBhcmNoaXRlY3R1cmFsIGxpbWl0YXRpb25zLAor
MzItYml0IFBWIGd1ZXN0cyBtdXN0IGJlIGFzc3VtZWQgdG8gYmUgYWJsZSB0
byByZWFkIGFyYml0cmFyeSBob3N0IG1lbW9yeQordXNpbmcgc3BlY3VsYXRp
dmUgZXhlY3V0aW9uIGF0dGFja3MuCitBZHZpc29yaWVzIHdpbGwgY29udGlu
dWUgdG8gYmUgaXNzdWVkCitmb3IgbmV3IHZ1bG5lcmFiaWxpdGllcyByZWxh
dGVkIHRvIHVuLXNoaW1tZWQgMzItYml0IFBWIGd1ZXN0cworZW5hYmxpbmcg
ZGVuaWFsLW9mLXNlcnZpY2UgYXR0YWNrcyBvciBwcml2aWxlZ2UgZXNjYWxh
dGlvbiBhdHRhY2tzLgogCiAjIyMgeDg2L0hWTQogCi0tIAoyLjMwLjIKCg==

--=separator
Content-Type: application/octet-stream;
 name="xsa370/0002-SUPPORT.md-Un-shimmed-32-bit-PV-guests-are-no-longer.patch"
Content-Disposition: attachment;
 filename="xsa370/0002-SUPPORT.md-Un-shimmed-32-bit-PV-guests-are-no-longer.patch"
Content-Transfer-Encoding: base64

RnJvbTogR2VvcmdlIER1bmxhcCA8Z2VvcmdlLmR1bmxhcEBjaXRyaXguY29t
PgpTdWJqZWN0OiBTVVBQT1JULm1kOiBVbi1zaGltbWVkIDMyLWJpdCBQViBn
dWVzdHMgYXJlIG5vIGxvbmdlciBzdXBwb3J0ZWQKClRoZSBzdXBwb3J0IHN0
YXR1cyBvZiAzMi1iaXQgZ3Vlc3RzIGRvZXNuJ3Qgc2VlbSBwYXJ0aWN1bGFy
bHkgdXNlZnVsLgoKV2l0aCBpdCBjaGFuZ2VkIHRvIGZ1bGx5IHVuc3VwcG9y
dGVkIG91dHNpZGUgb2YgUFYtc2hpbSwgYWRqdXN0IHRoZSBQVjMyCktjb25m
aWcgZGVmYXVsdCBhY2NvcmRpbmdseS4KClJlcG9ydGVkLWJ5OiBKYW5uIEhv
cm4gPGphbm5oQGdvb2dsZS5jb20+ClNpZ25lZC1vZmYtYnk6IEdlb3JnZSBE
dW5sYXAgPGdlb3JnZS5kdW5sYXBAY2l0cml4LmNvbT4KU2lnbmVkLW9mZi1i
eTogSmFuIEJldWxpY2ggPGpiZXVsaWNoQHN1c2UuY29tPgotLS0KCk5CIHRo
aXMgcGF0Y2ggc2hvdWxkIGJlIGNvbnNpZGVyZWQgYSBwcm9wb3NhbCB0byB0
aGUgY29tbXVuaXR5LiAgSXQKd2lsbCBub3QgYmVjb21lIGVmZmVjdGl2ZSB1
bnRpbCB0aHJlZSB3ZWVrcyBhZnRlciB0aGUgWFNBLTM3MCBlbWJhcmdvCmxp
ZnRzLCBhbmQgb25seSBpZiB0aGVyZSBhcmUgbm8gb2JqZWN0aW9ucyByYWlz
ZWQgYmVmb3JlIHRoYXQgcG9pbnQuCgpUQkQ6IFNob3VsZCB3ZSBhbHNvIGRl
ZmF1bHQgb3B0X3B2MzIgdG8gZmFsc2Ugd2hlbiBub3QgcnVubmluZyBpbiBz
aGltCiAgICAgbW9kZT8KClRoZSAoZm9yd2FyZCkgZGVwZW5kZW5jeSBvbiBQ
Vl9TSElNIGlzbid0IHZlcnkgdXNlZnVsIGVzcGVjaWFsbHkgd2hlbgpjb25m
aWd1cmluZyBmcm9tIHNjcmF0Y2ggLSB3ZSBtYXkgd2FudCB0byByZS1vcmRl
ciBpdGVtcyBkb3duIHRoZSByb2FkLApzdWNoIHRoYXQgdGhlIHByb21wdCBm
b3IgUFZfU0hJTSBvY2N1cnMgYWhlYWQgb2YgdGhhdCBmb3IgUFYzMi4gWWV0
IHRoZW4KdGhpcyBjb25mbGljdHMgd2l0aCBQVl9TSElNIGFsc28gZGVwZW5k
aW5nIG9uIEdVRVNULgoKdjM6Ci0gQWRkIEtjb25maWcgYWRqdXN0bWVudC4K
CnYyOgotIFBvcnQgb3ZlciBjaGFuZ2VzIGluIHBhdGNoIDEKCi0tLSBhL1NV
UFBPUlQubWQKKysrIGIvU1VQUE9SVC5tZApAQCAtODYsMTQgKzg2LDcgQEAg
Tm8gaGFyZHdhcmUgcmVxdWlyZW1lbnRzCiAKICAgICBTdGF0dXMsIHg4Nl82
NDogU3VwcG9ydGVkCiAgICAgU3RhdHVzLCB4ODZfMzIsIHNoaW06IFN1cHBv
cnRlZAotICAgIFN0YXR1cywgeDg2XzMyLCB3aXRob3V0IHNoaW06IFN1cHBv
cnRlZCwgd2l0aCBjYXZlYXRzCi0KLUR1ZSB0byBhcmNoaXRlY3R1cmFsIGxp
bWl0YXRpb25zLAotMzItYml0IFBWIGd1ZXN0cyBtdXN0IGJlIGFzc3VtZWQg
dG8gYmUgYWJsZSB0byByZWFkIGFyYml0cmFyeSBob3N0IG1lbW9yeQotdXNp
bmcgc3BlY3VsYXRpdmUgZXhlY3V0aW9uIGF0dGFja3MuCi1BZHZpc29yaWVz
IHdpbGwgY29udGludWUgdG8gYmUgaXNzdWVkCi1mb3IgbmV3IHZ1bG5lcmFi
aWxpdGllcyByZWxhdGVkIHRvIHVuLXNoaW1tZWQgMzItYml0IFBWIGd1ZXN0
cwotZW5hYmxpbmcgZGVuaWFsLW9mLXNlcnZpY2UgYXR0YWNrcyBvciBwcml2
aWxlZ2UgZXNjYWxhdGlvbiBhdHRhY2tzLgorICAgIFN0YXR1cywgeDg2XzMy
LCB3aXRob3V0IHNoaW06IFN1cHBvcnRlZCwgbm90IHNlY3VyaXR5IHN1cHBv
cnRlZAogCiAjIyMgeDg2L0hWTQogCi0tLSBhL3hlbi9hcmNoL3g4Ni9LY29u
ZmlnCisrKyBiL3hlbi9hcmNoL3g4Ni9LY29uZmlnCkBAIC01Niw3ICs1Niw3
IEBAIGNvbmZpZyBQVgogY29uZmlnIFBWMzIKIAlib29sICJTdXBwb3J0IGZv
ciAzMmJpdCBQViBndWVzdHMiCiAJZGVwZW5kcyBvbiBQVgotCWRlZmF1bHQg
eQorCWRlZmF1bHQgUFZfU0hJTQogCS0tLWhlbHAtLS0KIAkgIFRoZSAzMmJp
dCBQViBBQkkgdXNlcyBSaW5nMSwgYW4gYXJlYSBvZiB0aGUgeDg2IGFyY2hp
dGVjdHVyZSB3aGljaAogCSAgd2FzIGRlcHJlY2F0ZWQgYW5kIG1vc3RseSBy
ZW1vdmVkIGluIHRoZSBBTUQ2NCBzcGVjLiAgQXMgYSByZXN1bHQsCkBAIC02
Nyw3ICs2NywxMCBAQCBjb25maWcgUFYzMgogCSAgcmVkdWN0aW9uLCBvciBw
ZXJmb3JtYW5jZSByZWFzb25zLiAgQmFja3dhcmRzIGNvbXBhdGliaWxpdHkg
Y2FuIGJlCiAJICBwcm92aWRlZCB2aWEgdGhlIFBWIFNoaW0gbWVjaGFuaXNt
LgogCi0JICBJZiB1bnN1cmUsIHNheSBZLgorCSAgTm90ZSB0aGF0IG91dHNp
ZGUgb2YgUFYgU2hpbSwgMzItYml0IFBWIGd1ZXN0cyBhcmUgbm90IHNlY3Vy
aXR5CisJICBzdXBwb3J0ZWQgYW55bW9yZS4KKworCSAgSWYgdW5zdXJlLCB1
c2UgdGhlIGRlZmF1bHQgc2V0dGluZy4KIAogY29uZmlnIFBWX0xJTkVBUl9Q
VAogICAgICAgIGJvb2wgIlN1cHBvcnQgZm9yIFBWIGxpbmVhciBwYWdldGFi
bGVzIgo=

--=separator--


From xen-devel-bounces@lists.xenproject.org Tue May 04 12:08:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:08:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122225.230486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtqn-0001Wg-5s; Tue, 04 May 2021 12:08:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122225.230486; Tue, 04 May 2021 12:08:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtqn-0001WZ-23; Tue, 04 May 2021 12:08:29 +0000
Received: by outflank-mailman (input) for mailman id 122225;
 Tue, 04 May 2021 12:08:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldtql-0001WU-GV
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:08:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 993177b1-c634-4b88-99ae-a387a9e82b1c;
 Tue, 04 May 2021 12:08:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E3F6EB1D9;
 Tue,  4 May 2021 12:08:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 993177b1-c634-4b88-99ae-a387a9e82b1c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620130106; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HugiPDxWd/SWOnt+uFJRfy4G9pnjfLXVMvwsebfJ/Fg=;
	b=D5fZTiEEvouPpa8z3vRhzjyhRpNi0Dy8oRVlOcpVz6laJM5GF6rYFR1ChC+EXRtJk9+2Ct
	bIYSQA840Bm714rj/CUPt08u49qWSTKsUqXmMapLXNzzbWtr28KNvz0RjNpO8R1FCPe329
	pER8pn3xMfmhEZY7KogcARKlzRstcZU=
Subject: Re: [PATCH 3/5] x86/xstate: Rework xstate_ctxt_size() as
 xstate_uncompressed_size()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-4-andrew.cooper3@citrix.com>
 <a8487667-1f47-1aae-1528-4a1224cbda7b@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <07a03e61-742c-5880-1003-fcded7efc662@suse.com>
Date: Tue, 4 May 2021 14:08:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <a8487667-1f47-1aae-1528-4a1224cbda7b@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 20:17, Andrew Cooper wrote:
> On 03/05/2021 16:39, Andrew Cooper wrote:
>> We're soon going to need a compressed helper of the same form.
>>
>> The size of the uncompressed image is strictly a property of the highest
>> user state.  This can be calculated trivially with xstate_offsets/sizes, and
>> is much faster than a CPUID instruction in the first place, let alone the two
>> XCR0 writes surrounding it.
>>
>> Retain the cross-check with hardware in debug builds, but forgo it in normal
>> builds.  In particular, this means that the migration paths don't need to mess
>> with XCR0 just to sanity check the buffer size.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> 
> The Qemu smoketests have actually found a bug here.
> 
> https://gitlab.com/xen-project/patchew/xen/-/jobs/1232118510/artifacts/file/smoke.serial
> 
> We call into xstate_uncompressed_size() from
> hvm_register_CPU_save_and_restore() so the previous "xcr0 == 0" path was
> critical to Xen not exploding on non-xsave platforms.
> 
> This is straight up buggy - we shouldn't be registering xsave handlers
> on non-xsave platforms, but the calculation is also wrong (in the safe
> directly at least) when we use compressed formats.  Yet another
> unexpected surprise for the todo list.

I don't view this as buggy at all - it was an implementation choice.
Perhaps not the best one, but still correct afaict. Then again I'm
afraid I don't understand "in the safe directly at least", so I may
well be overlooking something. Will wait for your updated patch ...

Jan
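The calculation described in the quoted patch, that the uncompressed image
ends where the highest enabled component's save area ends, might be
sketched as follows.  The offset/size tables here are illustrative values
covering only x87/SSE/AVX, not Xen's actual xstate_offsets/xstate_sizes,
and the function name is hypothetical.

```c
#include <stdint.h>

/* 512-byte legacy XSAVE region plus the 64-byte XSAVE header. */
#define XSAVE_HDR_END 576u

/*
 * Illustrative per-component offsets/sizes, as CPUID leaf 0xd would
 * report them; only x87, SSE and AVX are covered here.
 */
static const unsigned int xstate_offsets[] = { 0, 160, 576 };
static const unsigned int xstate_sizes[]   = { 160, 256, 256 };

/*
 * The uncompressed size depends only on the highest component enabled
 * in xcr0: the image ends where that component's save area ends.
 * Components 0 and 1 live inside the fixed legacy region.
 */
static unsigned int uncompressed_size(uint64_t xcr0)
{
    unsigned int size = XSAVE_HDR_END, i;

    for ( i = 2; i < sizeof(xstate_offsets) / sizeof(xstate_offsets[0]); i++ )
        if ( xcr0 & (1ull << i) )
            size = xstate_offsets[i] + xstate_sizes[i];

    return size;
}
```

No XCR0 writes or CPUID instructions are needed, which is the performance
point the patch makes; a debug-build cross-check against hardware would sit
alongside this.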


From xen-devel-bounces@lists.xenproject.org Tue May 04 12:12:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:12:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122230.230498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtv4-0002OE-Mw; Tue, 04 May 2021 12:12:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122230.230498; Tue, 04 May 2021 12:12:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtv4-0002O7-JN; Tue, 04 May 2021 12:12:54 +0000
Received: by outflank-mailman (input) for mailman id 122230;
 Tue, 04 May 2021 12:12:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldtv2-0002O2-Tm
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:12:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 332afec1-6c43-47ce-94f9-2da946c419d1;
 Tue, 04 May 2021 12:12:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1B77BB2DC;
 Tue,  4 May 2021 12:12:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 332afec1-6c43-47ce-94f9-2da946c419d1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620130371; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Etumk9WiYOFtAAaAsfCpzSWRkB+mKLAtUDQEeNVOb4A=;
	b=sO+iz23Thm2R9P14oMqplr62s963f6FfQGdcqcBDUjP/9pOufmbOKrMV4f6m2vmgH7PmUv
	SeBxCWhYSIXtLH6/NP72a2dX2mH0O+rQ9fO063dSDc2pegfF1eWM3ggu9ZC4eAOet7zgop
	EEUeJ/ms29/NllkTPYn6YKbEG/ZjgTY=
Subject: Re: [PATCH v3 07/13] libs/guest: obtain a compatible cpu policy from
 two input ones
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>, xen-devel@lists.xenproject.org
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-8-roger.pau@citrix.com>
 <838e358d-5707-0f34-c8fe-64e29f000a69@suse.com>
 <YJE2hxPYq2kGrOwV@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0fa9652d-2b38-516d-a371-df90943d93a3@suse.com>
Date: Tue, 4 May 2021 14:12:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <YJE2hxPYq2kGrOwV@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 13:56, Roger Pau Monné wrote:
> On Mon, May 03, 2021 at 12:43:08PM +0200, Jan Beulich wrote:
>> On 30.04.2021 17:52, Roger Pau Monne wrote:
>>> +/* Only level featuresets so far. */
>>
>> I have to admit that I don't think I see all the implications from
>> this implementation restriction. All other leaves get dropped by
>> the caller, but it's not clear to me what this means wrt what the
>> guest is ultimately going to get to see.
> 
> This aims to be based on what XenServer does, which I was told is to
> level the featuresets. I think the caller of the function will have to
> fill the part of the policy that cannot be leveled. It's likely new
> helpers will be added to do that as required.
> 
> One option would be to get the default policy for the guest and then
> use xc_cpu_policy_update_cpuid to apply the leveled one.

Could such further plans perhaps be outlined (to a degree at least)
in the description?

Jan
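The "levelling" Roger refers to amounts to taking the intersection of the
two featuresets: a guest that must be able to run on both hosts can only
rely on features present in each.  A minimal sketch, where the word count
and function name are assumptions rather than libxenguest's actual
representation:

```c
#include <stddef.h>
#include <stdint.h>

/*
 * Level two featuresets: the result contains only the feature bits
 * present in both inputs, i.e. the bitwise AND of the two bitmaps.
 */
static void level_featuresets(uint32_t *out, const uint32_t *a,
                              const uint32_t *b, size_t nr_words)
{
    size_t i;

    for ( i = 0; i < nr_words; i++ )
        out[i] = a[i] & b[i];
}
```

As the thread notes, the non-featureset parts of the policy cannot be
levelled this way and would have to be filled in by the caller.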


From xen-devel-bounces@lists.xenproject.org Tue May 04 12:15:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:15:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122233.230510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtxd-0002Xf-6G; Tue, 04 May 2021 12:15:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122233.230510; Tue, 04 May 2021 12:15:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldtxd-0002XY-15; Tue, 04 May 2021 12:15:33 +0000
Received: by outflank-mailman (input) for mailman id 122233;
 Tue, 04 May 2021 12:15:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldtxb-0002XT-5H
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:15:31 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f400856-5ea6-42ca-b0ff-9c22d068a34b;
 Tue, 04 May 2021 12:15:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f400856-5ea6-42ca-b0ff-9c22d068a34b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620130530;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=bO1/plI8n3xyY78l+HYLvvvHIHf+gjkMgfcGHi7zPlw=;
  b=HApFU2RAo7u/z4Tr4ysI6iGWMhViAzsNmDPkX99LfdPI605W86EOTwTu
   doutJCc/vz7GPtxIw2BynGyh8pOXeU8HrhQf8fQVWCak4YM1YyQ+PI+14
   j8k7UtsBRoGv5G2xWUmGxrmfyhQoGTMg6mvLJZ0x9iVdh1DKcHELmOQe3
   A=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ujbbvgy4WMIDfX7QDtVq8Cl+HsMXiwoo24DmkUb3vMLyTnPOvcGe2Sgr2KXFsIfdn2LP8+KnM/
 vwOcdg7uHJtRwfI6t6kzYgso/AMDZzGAbp+BqqcaISHcZy48SBnbTZKBaoxkwNCjco60ciLdLw
 rXIELmvI5iYF3HVuwR40JFc1JYO26XY63whNKnAMXcb2lj7KzjVTlhrrgnCkyC+hXDaceAwUW0
 lOK9iRnGob4QDqRIB5LAlWuMKu+JZgmGG94Z5C3bWDZ7/cy70r0XhCwRiLUQYO2/RntcIX154D
 87U=
X-SBRS: 5.1
X-MesageID: 43009141
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:86CKHarzoV4YdMZU8tork4waV5rAeYIsi2QD101hICF9WMqeis
 yogbAnxQb54Qx8ZFgMkc2NUZPufVry7phwiLN+AZ6DW03ctHKsPMVe6+LZsl7dMgnf0sIY6q
 t6aah5D7TLYGRSqcrh+gG3H5IB7bC8gcWVrNzTxXtsUg1mApsIh2wSNi+hHkJ7XwVAD5Yifa
 DshPZvnSaqengcc62AZkUtYu6rnbz2va79bQVDLxAq7xTmt0LO1JfKVyKV2RoTSFp0sNMfzV
 Q=
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43009141"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Vfq+1/3VrX/BaGc0kMJcHB23vPC8wQ3KIflJT/7XD3MzgvlUI+WDzOsHjf6tYjvfoBzVgtRWeVDrtsS2Yp7QQJLexGnGWcH+HwRsiPP2lDY5YG6W0syE0VQttrnj1YJzoGoa9qErIgHNrQ4lkQqdPCt2vu2vsGwHOQu+ataoK9Q8IKuHGtcriln+BDPFwM7afQ8+7uYMVNaWXBYqfmcoqhutYtDjdT9mRgYnLnSv9CXU3s3KZZtz6f/fiJNlXOUR4pS30pX+JwD60N7fP7GHRoSfQ5kHtbIWP+KddMezdO9X9+D8S+nkMQKQUGuZJVjv21cKj1nwpWb/EikgFvKSEg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GlMIyk+8wBM13t2FCxK4lt6KVBCkfBaWOQiPDpVj+Jw=;
 b=SU827rx/1RGEb/KSWV24AyJcuLiTTd+0UPsZ7bPzhgc6Tt0dXI459kQSTMTVUm1P0QNNzCupKrMswIrioPjGWxuCJ+RQDFx6NrF3O6yU42siq/fmo2jke8i8+rytaCuID5KclIXFIUc6af7vZdfpLNSYhZnatpPzaqXD9DCEa13/6dtn2KoaiYn68hSq57asvP4tbyWpQcW4ZKwlqSVquOWzBr4BJpx+9yW5u3iauSMayIuhuysxAytNuTDXz+aeH4pZnmAPCe27whUAwrXFCW4AcFTN7sX7I0i71R5eOg24Z3vYCtMq7bUF8+HNokxTyQy7Y626pY7VXQGDcf5G8g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GlMIyk+8wBM13t2FCxK4lt6KVBCkfBaWOQiPDpVj+Jw=;
 b=xRZ1XOediXwjoE9hVE60Kpg07AgcQ9Aq9lfZC1nhC5F2p2X8bLF1unOlBl4mQIPf41NGgTa40x1m62FfOls2AnQFouYVpHc00Fs2urBk2p37L1XioEAnxlUD7GCMNr7KwT66K44tOe+m918zsCQbwWZuZ27OoI2eM1wKYtzT0vo=
Subject: Re: [PATCH 3/5] x86/xstate: Rework xstate_ctxt_size() as
 xstate_uncompressed_size()
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-4-andrew.cooper3@citrix.com>
 <a8487667-1f47-1aae-1528-4a1224cbda7b@citrix.com>
 <07a03e61-742c-5880-1003-fcded7efc662@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <3adb681c-9638-4603-018f-6fe096243cf0@citrix.com>
Date: Tue, 4 May 2021 13:15:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <07a03e61-742c-5880-1003-fcded7efc662@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 04/05/2021 13:08, Jan Beulich wrote:
> On 03.05.2021 20:17, Andrew Cooper wrote:
>> On 03/05/2021 16:39, Andrew Cooper wrote:
>>> We're soon going to need a compressed helper of the same form.
>>>
>>> The size of the uncompressed image is strictly a property of the highest
>>> user state.  This can be calculated trivially with xstate_offsets/sizes, and
>>> is much faster than a CPUID instruction in the first place, let alone the two
>>> XCR0 writes surrounding it.
>>>
>>> Retain the cross-check with hardware in debug builds, but forgo it in normal
>>> builds.  In particular, this means that the migration paths don't need to mess
>>> with XCR0 just to sanity check the buffer size.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> The Qemu smoketests have actually found a bug here.
>>
>> https://gitlab.com/xen-project/patchew/xen/-/jobs/1232118510/artifacts/file/smoke.serial
>>
>> We call into xstate_uncompressed_size() from
>> hvm_register_CPU_save_and_restore() so the previous "xcr0 == 0" path was
>> critical to Xen not exploding on non-xsave platforms.
>>
>> This is straight up buggy - we shouldn't be registering xsave handlers
>> on non-xsave platforms, but the calculation is also wrong (in the safe
>> directly at least) when we use compressed formats.  Yet another
>> unexpected surprise for the todo list.
> I don't view this as buggy at all - it was an implementation choice.
> Perhaps not the best one, but still correct afaict. Then again I'm
> afraid I don't understand "in the safe directly at least", so I may
> well be overlooking something. Will wait for your updated patch ...

For now, it is a patch 2.5/5 which just puts a cpu_has_xsave guard
around the registration.  Everything to do with xsave record processing
is unnecessary overhead on a non-xsave platform.

I don't intend to alter patch 3 as a consequence.
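To illustrate the calculation the commit message describes: the uncompressed image size depends only on the highest enabled user state, so it falls out of a simple offset/size table lookup. Below is a minimal sketch under assumptions, with demo tables standing in for Xen's CPUID-derived xstate_offsets[]/xstate_sizes[] (the YMM and PKRU entries carry the usual architectural values; the helper name and table contents are illustrative, not Xen's actual code):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Demo stand-ins for Xen's xstate_offsets[]/xstate_sizes[].  Only
 * component 2 (YMM) and component 9 (PKRU) are filled in, with the
 * usual architectural values; everything else is zero for brevity.
 */
static const unsigned int demo_offsets[] = {
    0, 0, 576, 0, 0, 0, 0, 0, 0, 2688,
};
static const unsigned int demo_sizes[] = {
    160, 256, 256, 0, 0, 0, 0, 0, 0, 8,
};

static unsigned int uncompressed_size(uint64_t xcr0)
{
    unsigned int i;

    /* The highest enabled extended component dictates the total size. */
    for ( i = 9; i >= 2; --i )
        if ( xcr0 & (1ull << i) )
            return demo_offsets[i] + demo_sizes[i];

    /* Minimum XSAVE area: 512-byte legacy region + 64-byte header. */
    return 576;
}
```

No XCR0 writes or CPUID executions are needed on this path; a debug-build cross-check against hardware would sit alongside it.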

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 04 12:20:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:20:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122241.230522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldu2G-0003QP-RR; Tue, 04 May 2021 12:20:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122241.230522; Tue, 04 May 2021 12:20:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldu2G-0003QI-OS; Tue, 04 May 2021 12:20:20 +0000
Received: by outflank-mailman (input) for mailman id 122241;
 Tue, 04 May 2021 12:20:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldu2F-0003QC-QL
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:20:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 527c3065-17ca-414d-b096-4d75b3d4d810;
 Tue, 04 May 2021 12:20:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B6BD9AE1C;
 Tue,  4 May 2021 12:20:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH 3/5] x86/xstate: Rework xstate_ctxt_size() as
 xstate_uncompressed_size()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4deac8bb-3252-36e0-b728-b78c2132984b@suse.com>
Date: Tue, 4 May 2021 14:20:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210503153938.14109-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 17:39, Andrew Cooper wrote:
> @@ -568,16 +568,38 @@ static unsigned int hw_uncompressed_size(uint64_t xcr0)
>      return size;
>  }
>  
> -/* Fastpath for common xstate size requests, avoiding reloads of xcr0. */
> -unsigned int xstate_ctxt_size(u64 xcr0)
> +unsigned int xstate_uncompressed_size(uint64_t xcr0)

Since you rewrite the function anyway, and since taking into account
the XSS-controlled features here is going to be necessary as well
(even if just down the road, but that's what your ultimate goal is
from all I can tell), how about renaming the parameter to "xstates"
or "states" at the same time?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 12:23:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:23:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122245.230538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldu4u-0003ZA-Bg; Tue, 04 May 2021 12:23:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122245.230538; Tue, 04 May 2021 12:23:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldu4u-0003Z3-8J; Tue, 04 May 2021 12:23:04 +0000
Received: by outflank-mailman (input) for mailman id 122245;
 Tue, 04 May 2021 12:23:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldu4s-0003Yy-Q8
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:23:02 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6114fd68-ecda-4180-a314-1999fd255265;
 Tue, 04 May 2021 12:23:01 +0000 (UTC)
Subject: Re: [PATCH 3/5] x86/xstate: Rework xstate_ctxt_size() as
 xstate_uncompressed_size()
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-4-andrew.cooper3@citrix.com>
 <4deac8bb-3252-36e0-b728-b78c2132984b@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <0015b645-3e76-8a03-4a5f-b81edff43623@citrix.com>
Date: Tue, 4 May 2021 13:22:52 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <4deac8bb-3252-36e0-b728-b78c2132984b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 04/05/2021 13:20, Jan Beulich wrote:
> On 03.05.2021 17:39, Andrew Cooper wrote:
>> @@ -568,16 +568,38 @@ static unsigned int hw_uncompressed_size(uint64_t xcr0)
>>      return size;
>>  }
>>  
>> -/* Fastpath for common xstate size requests, avoiding reloads of xcr0. */
>> -unsigned int xstate_ctxt_size(u64 xcr0)
>> +unsigned int xstate_uncompressed_size(uint64_t xcr0)
> Since you rewrite the function anyway, and since taking into account
> the XSS-controlled features here is going to be necessary as well
> (even if just down the road, but that's what your ultimate goal is
> from all I can tell), how about renaming the parameter to "xstates"
> or "states" at the same time?

I'm working on some cleanup of terminology, which I haven't posted yet.

For this one, I'm not sure.  For uncompressed size, we genuinely mean
user states only.  When there's a suitable constant to use, this will
gain an assertion.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 04 12:44:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:44:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122267.230550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduP7-0005PJ-Df; Tue, 04 May 2021 12:43:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122267.230550; Tue, 04 May 2021 12:43:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduP7-0005PC-8w; Tue, 04 May 2021 12:43:57 +0000
Received: by outflank-mailman (input) for mailman id 122267;
 Tue, 04 May 2021 12:43:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lduP6-0005P7-0r
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:43:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d6ae3262-c64e-4166-a055-bf18bc1d97fa;
 Tue, 04 May 2021 12:43:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1697FAF80;
 Tue,  4 May 2021 12:43:54 +0000 (UTC)
Subject: Re: [PATCH 4/5] x86/cpuid: Simplify recalculate_xstate()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-5-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <17501fdd-b9f0-3493-7d0d-8c5333fafa45@suse.com>
Date: Tue, 4 May 2021 14:43:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210503153938.14109-5-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 03.05.2021 17:39, Andrew Cooper wrote:
> Make use of the new xstate_uncompressed_size() helper rather than maintaining
> the running calculation while accumulating feature components.
> 
> The rest of the CPUID data can come directly from the raw cpuid policy.  All
> per-component data forms an ABI through the behaviour of the X{SAVE,RSTOR}*
> instructions, and is constant.
> 
> Use for_each_set_bit() rather than opencoding a slightly awkward version of
> it.  Mask the attributes in ecx down based on the visible features.  This
> isn't actually necessary for any components or attributes defined at the time
> of writing (up to AMX), but is added out of an abundance of caution.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> Using min() in for_each_set_bit() leads to awful code generation, as it
> prohibits the optimisations for spotting that the bitmap is <= BITS_PER_LONG.
> As p->xstate is long enough already, use a BUILD_BUG_ON() instead.
> ---
>  xen/arch/x86/cpuid.c | 52 +++++++++++++++++-----------------------------------
>  1 file changed, 17 insertions(+), 35 deletions(-)
> 
> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
> index 752bf244ea..c7f8388e5d 100644
> --- a/xen/arch/x86/cpuid.c
> +++ b/xen/arch/x86/cpuid.c
> @@ -154,8 +154,7 @@ static void sanitise_featureset(uint32_t *fs)
>  static void recalculate_xstate(struct cpuid_policy *p)
>  {
>      uint64_t xstates = XSTATE_FP_SSE;
> -    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
> -    unsigned int i, Da1 = p->xstate.Da1;
> +    unsigned int i, ecx_bits = 0, Da1 = p->xstate.Da1;
>  
>      /*
>       * The Da1 leaf is the only piece of information preserved in the common
> @@ -167,61 +166,44 @@ static void recalculate_xstate(struct cpuid_policy *p)
>          return;
>  
>      if ( p->basic.avx )
> -    {
>          xstates |= X86_XCR0_YMM;
> -        xstate_size = max(xstate_size,
> -                          xstate_offsets[X86_XCR0_YMM_POS] +
> -                          xstate_sizes[X86_XCR0_YMM_POS]);
> -    }
>  
>      if ( p->feat.mpx )
> -    {
>          xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
> -        xstate_size = max(xstate_size,
> -                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
> -                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
> -    }
>  
>      if ( p->feat.avx512f )
> -    {
>          xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
> -        xstate_size = max(xstate_size,
> -                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
> -                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
> -    }
>  
>      if ( p->feat.pku )
> -    {
>          xstates |= X86_XCR0_PKRU;
> -        xstate_size = max(xstate_size,
> -                          xstate_offsets[X86_XCR0_PKRU_POS] +
> -                          xstate_sizes[X86_XCR0_PKRU_POS]);
> -    }
>  
> -    p->xstate.max_size  =  xstate_size;
> +    /* Subleaf 0 */
> +    p->xstate.max_size =
> +        xstate_uncompressed_size(xstates & ~XSTATE_XSAVES_ONLY);
>      p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
>      p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
>  
> +    /* Subleaf 1 */
>      p->xstate.Da1 = Da1;
>      if ( p->xstate.xsaves )
>      {
> +        ecx_bits |= 3; /* Align64, XSS */

Align64 is also needed for p->xstate.xsavec afaict. I'm not really
convinced to tie one to the other either. I would rather think this
is a per-state-component attribute independent of other features.
Those state components could in turn have a dependency (like XSS
ones on XSAVES).

I'm also not happy at all to see you use a literal 3 here. We have
a struct for this, after all.

>          p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
>          p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
>      }
> -    else
> -        xstates &= ~XSTATE_XSAVES_ONLY;
>  
> -    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
> +    /* Subleafs 2+ */
> +    xstates &= ~XSTATE_FP_SSE;
> +    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
> +    for_each_set_bit ( i, &xstates, 63 )
>      {
> -        uint64_t curr_xstate = 1ul << i;
> -
> -        if ( !(xstates & curr_xstate) )
> -            continue;
> -
> -        p->xstate.comp[i].size   = xstate_sizes[i];
> -        p->xstate.comp[i].offset = xstate_offsets[i];
> -        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
> -        p->xstate.comp[i].align  = curr_xstate & xstate_align;
> +        /*
>> +         * Pass through size (eax) and offset (ebx) directly.  Visibility of
> +         * attributes in ecx limited by visible features in Da1.
> +         */
> +        p->xstate.raw[i].a = raw_cpuid_policy.xstate.raw[i].a;
> +        p->xstate.raw[i].b = raw_cpuid_policy.xstate.raw[i].b;
> +        p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;

To me, going to raw[].{a,b,c,d} looks like a backwards move, to be
honest. Both this and the literal 3 above make it harder to locate
all the places that need changing if a new bit (like xfd) is to be
added. It would be better if grep-ing for an existing field name
(say "xss") would easily turn up all involved places.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 12:45:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:45:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122270.230561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduQS-0005Ve-Nt; Tue, 04 May 2021 12:45:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122270.230561; Tue, 04 May 2021 12:45:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduQS-0005VX-Kl; Tue, 04 May 2021 12:45:20 +0000
Received: by outflank-mailman (input) for mailman id 122270;
 Tue, 04 May 2021 12:45:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lduQR-0005VQ-FB
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:45:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8ecbd8d-6cb2-4cf1-9f43-0f0dcb8043ba;
 Tue, 04 May 2021 12:45:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D74C3AF21;
 Tue,  4 May 2021 12:45:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8ecbd8d-6cb2-4cf1-9f43-0f0dcb8043ba
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620132317; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VZRmz45DfhcROUIiTINUpEqr8mhThAOu9eoy8T3dNXc=;
	b=SJLZvKp+ixTW1QCIfXqbI9HSdDy+5krV6PCR2I29BNmerWAvfLZeSvSrPj59C3/3LuuOj7
	132gNIAPbCQhktWYIOv7LsApZut8a1Tv1kYEAe4RaLMQr3xTlX9s7bF52il3bcYSDN6jJz
	ZBX0YoPHPYv82nWMEr9YHFGbnnKfuJY=
Subject: Re: [PATCH 3/5] x86/xstate: Rework xstate_ctxt_size() as
 xstate_uncompressed_size()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-4-andrew.cooper3@citrix.com>
 <4deac8bb-3252-36e0-b728-b78c2132984b@suse.com>
 <0015b645-3e76-8a03-4a5f-b81edff43623@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8b6e5904-eebe-b01b-119c-7dc7202d286d@suse.com>
Date: Tue, 4 May 2021 14:45:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <0015b645-3e76-8a03-4a5f-b81edff43623@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 14:22, Andrew Cooper wrote:
> On 04/05/2021 13:20, Jan Beulich wrote:
>> On 03.05.2021 17:39, Andrew Cooper wrote:
>>> @@ -568,16 +568,38 @@ static unsigned int hw_uncompressed_size(uint64_t xcr0)
>>>      return size;
>>>  }
>>>  
>>> -/* Fastpath for common xstate size requests, avoiding reloads of xcr0. */
>>> -unsigned int xstate_ctxt_size(u64 xcr0)
>>> +unsigned int xstate_uncompressed_size(uint64_t xcr0)
>> Since you rewrite the function anyway, and since taking into account
>> the XSS-controlled features here is going to be necessary as well
>> (even if just down the road, but that's what your ultimate goal is
>> from all I can tell), how about renaming the parameter to "xstates"
>> or "states" at the same time?
> 
> I'm working on some cleanup of terminology, which I haven't posted yet.
> 
> For this one, I'm not sure.  For uncompressed size, we genuinely mean
> user states only.

Ah, yes - fair point.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122279.230573 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduU9-0005hj-Ao; Tue, 04 May 2021 12:49:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122279.230573; Tue, 04 May 2021 12:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduU9-0005hc-7Y; Tue, 04 May 2021 12:49:09 +0000
Received: by outflank-mailman (input) for mailman id 122279;
 Tue, 04 May 2021 12:49:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduU7-0005hX-VY
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:08 +0000
Received: from mail-qk1-x72e.google.com (unknown [2607:f8b0:4864:20::72e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b60ccf20-5525-43f5-87ae-b1dfd6a5731d;
 Tue, 04 May 2021 12:49:07 +0000 (UTC)
Received: by mail-qk1-x72e.google.com with SMTP id u20so8296200qku.10
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:07 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b60ccf20-5525-43f5-87ae-b1dfd6a5731d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=3Tal1ALW05zmL2hsosOStx0zh1sfSUh+K85bAjmwB0s=;
        b=g+zsHD5UKRDMcY71P8S37WwU7yTMTXKBdFvm/mB+j42Ll9sJG2HAabiggqE1fKEoLW
         7vzugJF3g4LFJguXY6616H7LfXXkcSRT3YB/SoTAP0fIeKjA6GdhcAkzhVToeKHzK4iC
         k/B9bRsHEkKZ/9jYuBA6uh6Zv9u7bWghKSZYW39ASCOW5vkU3seScFxUnoYueK8P0W1o
         ++nvp1b5QRWiC0UhOEiSnjczqanU9jx5TqJ6eytDl2nVP0+FwE73AcsEGG/36PYE+EYh
         hECuCwk34VkwGB1BTzAlgaR9p/656rfx82QIzggYGbl/i+U/eyzDKNrnCxsakx/VPhY3
         hmUw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=3Tal1ALW05zmL2hsosOStx0zh1sfSUh+K85bAjmwB0s=;
        b=YwkBZmV4esowj89i2QXwl9Bwb5KP5KZxuFvkk0seeb3K9793/+A8BpmtAQUyNiMYN2
         63FZN+usUS8R5WqW8YWeXO6lCWAyeUQ+sPYH8MYIWn6c7FyJAcdVd7MArbnCVDzSk0Tt
         lWdVCRf7kpl8HG7FSOBGn36C3ieS1brn3q7v6YX2zuFg62chUVYRYUoWRlFSIIaWLbi5
         PFRkXHL0dqH5q+b4hzPtxcBswc/fDzF2aS5TT9K/kfPiGvY3NHaNuAyTHNLxwHkemqLn
         j6A55tKPVO8DEGod7zYNTH12NOXPab74wIibCnl20wu5LR4ddtV8H0iFx7r1X730PAr+
         npoA==
X-Gm-Message-State: AOAM532d97bu6TYiMfXvqIpFSGr0ya4nI7skEHRhxsYKtIGOgJIaIJvH
	kyYpo4G4AmDRwrbD7tlsEytGC+NIUyA=
X-Google-Smtp-Source: ABdhPJy5/Jft3DXtjDP9GwNUBxgzKrfsRvUUNHQx+GTa/Eck1jXs+Dxc0b5qzzHOmmmX5f5Gppskng==
X-Received: by 2002:a05:620a:13bc:: with SMTP id m28mr8490152qki.357.1620132546581;
        Tue, 04 May 2021 05:49:06 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Dag Nygren <dag@newtech.fi>
Subject: [PATCH 0/9] vtpmmgr: Some fixes - still incomplete
Date: Tue,  4 May 2021 08:48:33 -0400
Message-Id: <20210504124842.220445-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

vtpmmgr TPM 2.0 support is incomplete.  There is no code to save the
tpm2 keys generated by the vtpmmgr, so it's impossible to restore vtpm
state with tpm2.  The vtpmmgr also issues TPM 1.2 commands to the TPM
2.0 hardware, which naturally fail.  Dag reported this [1][2], and I
independently re-discovered it.

I have not fixed the above issues.  These are some fixes I made while
investigating tpm2 support.  At a minimum, "docs: Warn about incomplete
vtpmmgr TPM 2.0 support" should be applied to warn others.

This is useful for debugging:
vtpmmgr: Print error code to aid debugging

This fixes vtpmmgr output (also noted by Dag [3]), though simply
removing %z might be better:
stubdom: newlib: Enable C99 formats for %z

This gives more flexibility if you are already using the TPM2 hardware:
vtpmmgr: Allow specifying srk_handle for TPM2

These are some changes to unload keys from the TPM hardware (so they
are not still loaded for anything that runs afterwards):
vtpmmgr: Move vtpmmgr_shutdown
vtpmmgr: Flush transient keys on shutdown
vtpmmgr: Flush all transient keys
vtpmmgr: Shutdown more gracefully

This lets vtpms initialize their random pools:
vtpmmgr: Support GetRandom passthrough on TPM 2.0

[1] https://lore.kernel.org/xen-devel/8285393.eUs1EhXEQl@eseries.newtech.fi/
[2] https://lore.kernel.org/xen-devel/1615731.eyaQ0j4tC5@eseries.newtech.fi/
[3] https://lore.kernel.org/xen-devel/3151252.0ZAaMuH7Fy@dag.newtech.fi/

Jason Andryuk (9):
  docs: Warn about incomplete vtpmmgr TPM 2.0 support
  vtpmmgr: Print error code to aid debugging
  stubdom: newlib: Enable C99 formats for %z
  vtpmmgr: Allow specifying srk_handle for TPM2
  vtpmmgr: Move vtpmmgr_shutdown
  vtpmmgr: Flush transient keys on shutdown
  vtpmmgr: Flush all transient keys
  vtpmmgr: Shutdown more gracefully
  vtpmmgr: Support GetRandom passthrough on TPM 2.0

 docs/man/xen-vtpmmgr.7.pod         | 18 +++++++++++
 stubdom/Makefile                   |  2 +-
 stubdom/vtpmmgr/init.c             | 49 ++++++++++++++++++++----------
 stubdom/vtpmmgr/marshal.h          | 10 ++++++
 stubdom/vtpmmgr/tpm.c              |  2 +-
 stubdom/vtpmmgr/tpm2.c             |  2 +-
 stubdom/vtpmmgr/vtpm_cmd_handler.c | 48 +++++++++++++++++++++++++++++
 stubdom/vtpmmgr/vtpmmgr.c          | 12 +++++++-
 8 files changed, 123 insertions(+), 20 deletions(-)

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122280.230586 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUE-0005kA-NH; Tue, 04 May 2021 12:49:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122280.230586; Tue, 04 May 2021 12:49:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUE-0005k1-Id; Tue, 04 May 2021 12:49:14 +0000
Received: by outflank-mailman (input) for mailman id 122280;
 Tue, 04 May 2021 12:49:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUC-0005hX-Qv
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:12 +0000
Received: from mail-qk1-x733.google.com (unknown [2607:f8b0:4864:20::733])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 946902fd-f0d2-4c30-aca5-e82efc7bfee3;
 Tue, 04 May 2021 12:49:11 +0000 (UTC)
Received: by mail-qk1-x733.google.com with SMTP id k127so8295938qkc.6
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:11 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.10
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 946902fd-f0d2-4c30-aca5-e82efc7bfee3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=3KzzfjROMAHRA0Cu1BG5VLwtVA1MVVdfpHeND8oqJ2s=;
        b=Z1mqDmUNXCDfiBRm5rhrsqEJUoJIgMk+Pi+znrsBuB/1g6cQmHDAmwD3dwb4d122CE
         lqlmGYTiyr43E5GscRvhsk55hxa2tQwk4LgHxPzdpuJbZ6TnjlL2A25ALevXUryF7ahc
         DMJvDP8sfZKtdKyQ3rpwsnDCzYMt/Hpyl5fbB1dAD6neI5Bxr9OSG9cjAbSTDqIARZD1
         ozOj0A79gFKVsb+AkNgNdoNJjnfRU7bY6gYF9cVtt8bjiOqHsBW+9utWaUtPZkNfnWLf
         viy96ok1QxMfXLP5wT8EwABmpIZ6RXiT7JQaH13z1O694pZ7pLinehCflueL1Do9pHU8
         SsyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=3KzzfjROMAHRA0Cu1BG5VLwtVA1MVVdfpHeND8oqJ2s=;
        b=ZzjMlzOlaOgrmFpymWGXjJpGSRXvcfcqlVf/eE6Jsl63OZOYweBYDrwjK5xMC4KXXE
         f29ALw0TUqOT5pxqF4Tx74vwrcFtFb8pFccdFx+ZwVQLxf7IR9X8HFJ2wtptljLGYY4n
         dEkT20Nms7TO77fkWuKGG4bqpIDxBFkFZ7W1r8uKD1KRa0voVJAJeZ979bSKU/26Vc45
         dHCaMi6UK112q442kcsTSU8TBH4dtzkTwFIeRnve8M+VtfVT+Y3y99VUxAHd+Iq/y3sQ
         SJv5RR10mkP8rh6R4MQeoeHQwvY2oret7If3VUume8mr6HuZc7BEkDqxCXnXhj9y7oxL
         uwmw==
X-Gm-Message-State: AOAM533Vn6kJjV3PEJ+SJgVpdaCL1MVI0Y0FgU5suo3Fyr3K4Kr6NaLY
	/v/jv4WBunCWk6Nbz0HPUvGtwlRLtzk=
X-Google-Smtp-Source: ABdhPJx33CuWTBGjz85fbgCR8dzRj4XXn1W5IDA4Vv7+cIK1vR9R0m97XLxCBsvYZfnw8LLjv+RuQQ==
X-Received: by 2002:a37:ae44:: with SMTP id x65mr24485179qke.9.1620132550933;
        Tue, 04 May 2021 05:49:10 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH 1/9] docs: Warn about incomplete vtpmmgr TPM 2.0 support
Date: Tue,  4 May 2021 08:48:34 -0400
Message-Id: <20210504124842.220445-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The vtpmmgr TPM 2.0 support is incomplete.  Add a warning about that to
the documentation so others don't have to rediscover that it is broken.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 docs/man/xen-vtpmmgr.7.pod | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
index af825a7ffe..875dcce508 100644
--- a/docs/man/xen-vtpmmgr.7.pod
+++ b/docs/man/xen-vtpmmgr.7.pod
@@ -222,6 +222,17 @@ XSM label, not the kernel.
 
 =head1 Appendix B: vtpmmgr on TPM 2.0
 
+=head2 WARNING: Incomplete - cannot persist data
+
+TPM 2.0 support for vTPM manager is incomplete.  There is no support for
+persisting an encryption key, so vTPM manager regenerates primary and secondary
+key handles each boot.
+
+Also, the vTPM manager group command implementation hardcodes TPM 1.2 commands.
+This means running manage-vtpmmgr.pl fails when the TPM 2.0 hardware rejects
+the TPM 1.2 commands.  vTPM manager with TPM 2.0 cannot create groups and
+therefore cannot persist vTPM contents.
+
 =head2 Manager disk image setup:
 
 The vTPM Manager requires a disk image to store its encrypted data. The image
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122281.230598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUJ-0005o7-UW; Tue, 04 May 2021 12:49:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122281.230598; Tue, 04 May 2021 12:49:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUJ-0005o0-R0; Tue, 04 May 2021 12:49:19 +0000
Received: by outflank-mailman (input) for mailman id 122281;
 Tue, 04 May 2021 12:49:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUH-0005hX-RF
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:17 +0000
Received: from mail-qt1-x82c.google.com (unknown [2607:f8b0:4864:20::82c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37a208f9-55da-474d-b809-2c31378d1d2e;
 Tue, 04 May 2021 12:49:12 +0000 (UTC)
Received: by mail-qt1-x82c.google.com with SMTP id o1so6152340qta.1
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:12 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37a208f9-55da-474d-b809-2c31378d1d2e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=mBt899KCzk+1nbxQuHuD9pe5MRuqdb/uNTd34+igWrM=;
        b=G/Us5UcFLnbOBW2ZxV/xvgiRyKBiZispCo68BGevhBKsXCLZtNTLLTc4oPSfFJalWY
         hFENro5shnFlNzV5GZ+qEECrYCZDtO74ZZ/Y/iebR9qmVb2biicsJUvFdB6scJn6tcaw
         2xuh1FD38bAYlvkqgdyjEG6hvmk1uk90DiDhyjuMuj8FFFNyG0HT/if7yY8dR1tbazYG
         v/b7PUaMKPyY2DfLi8CK9jLSVU2/yU3vP/QS5O8ZLsel1lYVxxOT5YpxtS1qlWhp3UNA
         3mWvvS87E0tKEyFqCNVoZq4bCmtnui3HEmUD7n+eJ93fdAJH2ru32RuUqrS4JV+HtLRC
         34zg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=mBt899KCzk+1nbxQuHuD9pe5MRuqdb/uNTd34+igWrM=;
        b=t0lpoCK2acfOcG3WWFMBGBnOgTuRc3NETXD8u2H5SlFhz3MHvL0iTc+VEa1lMWu8uK
         dwP4XfstadKxt2bKt1QvNbWjlpAcVPB69ZM0O/vwrtyjFN9hTDcdWZ/JS5Inh1kBvka7
         8TP7wvf+04X3bBJe3IHNPw9igsp7frCBQHc7AGKNoygk8CeWv2b3M2eRabBGn9c60bgw
         EWO2hDmNs6ptagub92/o+Baw7KYvMEZPN3ImW2EW08wJewmVq99Hg8HJL+rTL+OE68B7
         ZJ6CpWTD1ra+kQCHJoSisU8/Obl3TAR2SE7zmyp/Za2Al7lK2eV49K3hXwhALQJTXwjv
         rH0A==
X-Gm-Message-State: AOAM5339/iVnZmc5XtJskGhLq/4+gXqQqkDWXKyCpiF3sLBmvpYBniaJ
	pADLNyy2I/zAkoDwZhswh/1BQTw+lL4=
X-Google-Smtp-Source: ABdhPJwNXWm+uBV0EUqUcW0M49kF1m7xKv8ZVNnfm8ZAL8isMlbC1MwV+wwtOvN2wXOH7wVCTmEQCw==
X-Received: by 2002:ac8:5358:: with SMTP id d24mr3180823qto.351.1620132552280;
        Tue, 04 May 2021 05:49:12 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH 2/9] vtpmmgr: Print error code to aid debugging
Date: Tue,  4 May 2021 08:48:35 -0400
Message-Id: <20210504124842.220445-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

tpm_get_error_name returns "Unknown Error Code" when an error string
is not defined.  In that case, we should print the numeric error code
so it can be looked up offline.  tpm_get_error_name returns a const
string, so just have the two callers print the code unconditionally so
it is always available.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/tpm.c  | 2 +-
 stubdom/vtpmmgr/tpm2.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/stubdom/vtpmmgr/tpm.c b/stubdom/vtpmmgr/tpm.c
index 779cddd64e..83b2bc16b2 100644
--- a/stubdom/vtpmmgr/tpm.c
+++ b/stubdom/vtpmmgr/tpm.c
@@ -109,7 +109,7 @@
 			UINT32 rsp_status; \
 			UNPACK_OUT(TPM_RSP_HEADER, &rsp_tag, &rsp_len, &rsp_status); \
 			if (rsp_status != TPM_SUCCESS) { \
-				vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(rsp_status)); \
+				vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s (%x)\n", tpm_get_error_name(rsp_status), rsp_status); \
 				status = rsp_status; \
 				goto abort_egress; \
 			} \
diff --git a/stubdom/vtpmmgr/tpm2.c b/stubdom/vtpmmgr/tpm2.c
index c9f1016ab5..655e6d164c 100644
--- a/stubdom/vtpmmgr/tpm2.c
+++ b/stubdom/vtpmmgr/tpm2.c
@@ -126,7 +126,7 @@
     ptr = unpack_TPM_RSP_HEADER(ptr, \
           &(tag), &(paramSize), &(status));\
     if ((status) != TPM_SUCCESS){ \
-        vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(status));\
+        vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s (%x)\n", tpm_get_error_name(status), (status));\
         goto abort_egress;\
     }\
 } while(0)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122283.230610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUO-0005sJ-9K; Tue, 04 May 2021 12:49:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122283.230610; Tue, 04 May 2021 12:49:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUO-0005s8-5k; Tue, 04 May 2021 12:49:24 +0000
Received: by outflank-mailman (input) for mailman id 122283;
 Tue, 04 May 2021 12:49:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUM-0005hX-RN
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:22 +0000
Received: from mail-qk1-x735.google.com (unknown [2607:f8b0:4864:20::735])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3699f7f1-1e70-41dc-be90-334e67911516;
 Tue, 04 May 2021 12:49:14 +0000 (UTC)
Received: by mail-qk1-x735.google.com with SMTP id a2so8300387qkh.11
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:14 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.12
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3699f7f1-1e70-41dc-be90-334e67911516
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Y48tGnkudDovdl/5JuqPSC9kMpsbk+sIm1W0tcK17X8=;
        b=D9u6t/78OOZobHOleD5mosdiTLzjI0yT/Vwz2SX8CqEyiK890JH2KY4OMfo8XMPaqz
         An7urQlk5XuCejgn29hKhx+liwck/MHjiUeIBW8W6pXVZDJ7UwAoc3rY5G5qt4OkZCDH
         SHlQGsfK+rl9e/JQxbohFfolV/jEuZTtsp3DUIXIRD86cZl7W0vMZ1ACD3/MljF1Fvh4
         a977kXEEITQLooL6KZxDbM7WWSg1LIyN5gPQzYSkGzoZPgOsXJyPYBVclfqGzagKFiP0
         aqbrapmYc2E9wDJkTyE30kNRTizkJQXRexzwZ//E3OwVSOAwwNFUneahEV9sgtRjdkoW
         odIA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Y48tGnkudDovdl/5JuqPSC9kMpsbk+sIm1W0tcK17X8=;
        b=Q9vJHqxATvn6xt6UzX5hWeFojGT7YBOmqTCd6rpjUhHFULm8ccsFUOxfwpp4+pOem+
         hHLrhD4K6/WQPqALDs3mNKWzYyuAaf/2+kplerJ+fmTWPwK9i9ODhW5BXadsI/fgOTku
         zeph9QkEbOFMg3X8ggHkFRVxlI0snrtKNofMg98mWVznZhCZqQou9EhDqvMSCUDFIDQN
         ly/tF6jiTID10Ul5p5A/c8RsvNVuNqKA+t3NXEqWg+YroZ8Fx8qFSS+7bbafw7m9WeT8
         jt9Mh6Ywe2kHWXRxWdLOvqf/eOeF+BeN1PVdxQ0Xd8g/TLpLr8MQN3hy3OpH7Te5rst0
         dN8A==
X-Gm-Message-State: AOAM533LDtjlyE4fKhKVjewBUlqyEbSLAwss+T+yo63A7VHWDpX0q1Oy
	NHeTGEy9zrRNv/yOBNoUbdjdA+Qtn4s=
X-Google-Smtp-Source: ABdhPJwuPnT/sqnCz0dlIhe6T7rERiv//tc6a1gUXO0jBEomuTR9x/exHYo6BgSqVcqURggccyun+A==
X-Received: by 2002:a05:620a:918:: with SMTP id v24mr24268623qkv.54.1620132553430;
        Tue, 04 May 2021 05:49:13 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH 3/9] stubdom: newlib: Enable C99 formats for %z
Date: Tue,  4 May 2021 08:48:36 -0400
Message-Id: <20210504124842.220445-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

vtpmmgr was changed to print size_t with the %z modifier, but newlib
isn't compiled with %z support.  So you get output like:

root seal: zu; sector of 13: zu
root: zu v=zu
itree: 36; sector of 112: zu
group: zu v=zu id=zu md=zu
group seal: zu; 5 in parent: zu; sector of 13: zu
vtpm: zu+zu; sector of 48: zu

Enable the C99 formats in newlib so vtpmmgr prints the numeric values.

Fixes: 9379af08ccc0 ("stubdom: vtpmmgr: Correctly format size_t with %z when printing.")

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
I haven't tried, but the other option would be to cast size_t and avoid
%z.  Since this seems to be the only mini-os use of %z, that may be
better than building a larger newlib.
---
 stubdom/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 90d9ffcd9f..c6de5f68ae 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -105,7 +105,7 @@ cross-newlib: $(NEWLIB_STAMPFILE)
 $(NEWLIB_STAMPFILE): mk-headers-$(XEN_TARGET_ARCH) newlib-$(NEWLIB_VERSION)
 	mkdir -p newlib-$(XEN_TARGET_ARCH)
 	( cd newlib-$(XEN_TARGET_ARCH) && \
-	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --disable-multilib && \
+	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --enable-newlib-io-c99-formats --disable-multilib && \
 	  $(MAKE) DESTDIR= && \
 	  $(MAKE) DESTDIR= install )
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122284.230622 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUT-0005xX-Le; Tue, 04 May 2021 12:49:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122284.230622; Tue, 04 May 2021 12:49:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUT-0005xN-Hg; Tue, 04 May 2021 12:49:29 +0000
Received: by outflank-mailman (input) for mailman id 122284;
 Tue, 04 May 2021 12:49:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUR-0005hX-RX
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:27 +0000
Received: from mail-qk1-x72d.google.com (unknown [2607:f8b0:4864:20::72d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9cf9e786-2da6-4426-a16e-07b399763b73;
 Tue, 04 May 2021 12:49:15 +0000 (UTC)
Received: by mail-qk1-x72d.google.com with SMTP id u20so8296617qku.10
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:15 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.13
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cf9e786-2da6-4426-a16e-07b399763b73
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=NzGJMXzPpKRh7oGL0zGJA40Q9xjMNDkXow9gtE6yA3E=;
        b=Sup3MNbEW57h/VsmIS0kBok+uAZWBc3oAU6hzp6wCLu+3wA1dhIdbFwvmvBkh7bMy+
         Ovfp2rakfnwrj7sLKqbJGlfPPJR/9Phko11nJexPcbALlZb1QMxK56MGL8BC1VKhj+3w
         YQQEKqCbPXeMvHyIxFrj4tzMz/e6SHvlOrGO7cx4h/8o5s1ABtGQOfwdjTIJGfJzitxv
         V5dFPk7/BhOmaHa+JoKJM4PvamLMRayKXIKBF/KJw9Go8uSSAS1k+aEY+2KKBTMMCRTe
         /HQBD9+QDF0cnMkhFwqxk5I00Zrek0e0hk2nrRXLSXjSojI0JFe6AuQ4UNKEvT8z7eWY
         z8QA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=NzGJMXzPpKRh7oGL0zGJA40Q9xjMNDkXow9gtE6yA3E=;
        b=bIY0V9Pdjdkid8La2QOLaHlRaIHIhh408+7TxtCHU1mUd54EZ7rA9MoDLxQYr22bhe
         GCbU62uMncCd7McxKL9YPypxj7IV9bNdEO1NHyZ4WG2BPoVyf8xBI5SqsK7dwiLbG+JY
         1uvTh8HZvybhE1yg9hqEh/EVIK1Lu2cRZsKum2JeA8N7S7VrmTh1VgPaD9pnZ5poWTeZ
         4NbAVCTYkcWwmBSzzJ0lhbY+aHxqypEH9+/HsRctbWRkKXXcf/KG3+l7O1O21NbW5R7Q
         7pEZz9r0cCu1aahdWsDmMtzcMbMJiRcG7fLqjsutWPjgu1WNFpimohqfaljDJob0HBIw
         Aciw==
X-Gm-Message-State: AOAM533JO9Y9I2548LdeNSHGwphOAMKIWkAIofjwz4BsYb23IuhWaxt7
	q6HtCSSvvfKWSWcjTsNpuky2KtO49jk=
X-Google-Smtp-Source: ABdhPJw4AohCj+xilj3RZmADBGUvFfvRHx+tfERd2JyoFkgLdp0H0S6qbdcoyBto7IJ6ROqHQ4eYYQ==
X-Received: by 2002:a37:745:: with SMTP id 66mr20175763qkh.5.1620132554549;
        Tue, 04 May 2021 05:49:14 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH 4/9] vtpmmgr: Allow specifying srk_handle for TPM2
Date: Tue,  4 May 2021 08:48:37 -0400
Message-Id: <20210504124842.220445-5-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Bypass taking ownership of the TPM2 if an srk_handle is specified.

This srk_handle must be usable with Null auth for the time being.
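As a sketch of the option parsing this patch adds (the helper name and its standalone form are illustrative, not the vtpmmgr code itself), a "srk_handle=<hex>" argument can be parsed like this:

```c
#include <assert.h>
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Hypothetical helper mirroring the command-line parsing in this patch:
 * accept "srk_handle=<hex>" and store the handle; return 0 on success. */
static int parse_srk_handle(const char *arg, uint32_t *handle)
{
    if (strncmp(arg, "srk_handle=", 11) != 0)
        return -1;
    /* TPM handles are conventionally written in hex, e.g. 0x81000001;
     * %x accepts an optional 0x prefix. */
    if (sscanf(arg + 11, "%" SCNx32, handle) != 1)
        return -1;
    return 0;
}
```

A persistent parent handle in the 0x81xxxxxx range would typically be passed here, but any handle usable with Null auth works per the commit message.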

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 docs/man/xen-vtpmmgr.7.pod |  7 +++++++
 stubdom/vtpmmgr/init.c     | 11 ++++++++++-
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
index 875dcce508..3286954568 100644
--- a/docs/man/xen-vtpmmgr.7.pod
+++ b/docs/man/xen-vtpmmgr.7.pod
@@ -92,6 +92,13 @@ Valid arguments:
 
 =over 4
 
+=item srk_handle=<HANDLE>
+
+Specify an srk_handle for TPM 2.0.  TPM 2.0 uses a key hierarchy, and
+this allows specifying the parent handle under which vtpmmgr creates
+its own key.  Using this option bypasses vtpmmgr's attempt to take
+ownership of the TPM.
+
 =item owner_auth=<AUTHSPEC>
 
 =item srk_auth=<AUTHSPEC>
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index 1506735051..c01d03e9f4 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -302,6 +302,11 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
             goto err_invalid;
          }
       }
+      else if(!strncmp(argv[i], "srk_handle=", 11)) {
+         if(sscanf(argv[i] + 11, "%x", &vtpm_globals.srk_handle) != 1) {
+            goto err_invalid;
+         }
+      }
       else if(!strncmp(argv[i], "tpmdriver=", 10)) {
          if(!strcmp(argv[i] + 10, "tpm_tis")) {
             opts->tpmdriver = TPMDRV_TPM_TIS;
@@ -586,7 +591,11 @@ TPM_RESULT vtpmmgr2_create(void)
 {
     TPM_RESULT status = TPM_SUCCESS;
 
-    TPMTRYRETURN(tpm2_take_ownership());
+    if ( vtpm_globals.srk_handle == 0 ) {
+        TPMTRYRETURN(tpm2_take_ownership());
+    } else {
+        tpm2_AuthArea_ctor(NULL, 0, &vtpm_globals.srk_auth_area);
+    }
 
    /* create SK */
     TPM2_Create_Params_out out;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122286.230634 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUX-00062d-Vg; Tue, 04 May 2021 12:49:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122286.230634; Tue, 04 May 2021 12:49:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUX-00062U-SF; Tue, 04 May 2021 12:49:33 +0000
Received: by outflank-mailman (input) for mailman id 122286;
 Tue, 04 May 2021 12:49:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUW-0005hX-Rg
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:32 +0000
Received: from mail-qk1-x733.google.com (unknown [2607:f8b0:4864:20::733])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49133945-3ed6-47e0-953f-deaafcdad809;
 Tue, 04 May 2021 12:49:16 +0000 (UTC)
Received: by mail-qk1-x733.google.com with SMTP id i17so8324308qki.3
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:16 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49133945-3ed6-47e0-953f-deaafcdad809
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=EcH+63aPNYFng4f+xHjI7oU7HEm0WiVbIzYTc1mGNtw=;
        b=XicsVm3VzRnST1nHLjHEHOgRNYyW3z62WGVZqJgCZ2WUO2n+pVVbUqhmvPjK64iTqh
         cjWqp2Wqm4fhQPgxTgGtT8Pu5jDLckiRX6+Y9bX5WWn10ufb3LAW9e94l+hesFvbquYE
         UJv/vDZlTzXFfQdYSX3r8Ya8pT2tK9VyTQUhK2vYpfvKOmByM+lnqIW4b9M61x81jKN/
         OGIG/h/2oIjwgMcGUR0lms9XodeXi1ARLH5DuNaxQtg0EaWRBXiVmBpQ3pMd+Jk298//
         /A7eHTBilOMl9WxV2ATRn3NEVHd8bIP83L+ZtZNiT5mkYMDJ8/RswJv5RLnKjDZbdFwa
         zy7g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=EcH+63aPNYFng4f+xHjI7oU7HEm0WiVbIzYTc1mGNtw=;
        b=rUQsA6sVteIZsGY7mN1yYf/wpLSV1J5uV41Oj35HZObAintTk/4xzAl8WuINuVTt6d
         MAAKd2yF1mNrbIJUfW0KYgSbBOUSVypcCVeNbzVLM6s7XjwEPq0aiKjV5i0bsynYb/yf
         QnIqIwN5wGNamqMNbWAZmCJzm0/vkS9ZdS0cjBwL0wW+P3R/gsakvOjvXsTfzBRLL6FK
         6mTeDtCoKWcFqwg9uDWHhHeJ6c6RrOw7VDKpYqbVKUGFYUr8hjLMnwtGcWQMjLZHIoG6
         1W0k2+0sVltzWLUErKKMq89TYnjtuZP64l53qR2dpZMKzBK0OsID/Mt3TxaI7/RT0lcT
         njaQ==
X-Gm-Message-State: AOAM533RuI/aW+gd/NSmSsEHSB1oyUOd+yaYiXiUOwo8MRKoN/wNCq1F
	BoXgmGSWhzXTM4+7vIFbeuS3OpPh080=
X-Google-Smtp-Source: ABdhPJw/+ohffaaA5Um631ptKNm5wa1RdxFRcwvA4+8PC+zZc3l8NcdkTmCvJba514s3Jrryj5dFhg==
X-Received: by 2002:a05:620a:1230:: with SMTP id v16mr1496277qkj.14.1620132555775;
        Tue, 04 May 2021 05:49:15 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH 5/9] vtpmmgr: Move vtpmmgr_shutdown
Date: Tue,  4 May 2021 08:48:38 -0400
Message-Id: <20210504124842.220445-6-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Reposition vtpmmgr_shutdown so it can call flush_tpm2 without a forward
declaration.
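The motivation is the usual C ordering rule, sketched below with toy function names (not the vtpmmgr ones): a caller defined after its callee needs no separate prototype.

```c
#include <assert.h>

/* Minimal illustration of the ordering issue this patch addresses:
 * if cleanup() appeared before helper(), C would require a forward
 * declaration of helper().  Defining the callee first avoids that. */
static int helper(void)      /* defined first... */
{
    return 42;
}

static int cleanup(void)     /* ...so cleanup() can call it directly */
{
    return helper();
}
```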

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/init.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index c01d03e9f4..569b0dd1dc 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -503,20 +503,6 @@ egress:
    return status;
 }
 
-void vtpmmgr_shutdown(void)
-{
-   /* Cleanup TPM resources */
-   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
-
-   /* Close tpmback */
-   shutdown_tpmback();
-
-   /* Close tpmfront/tpm_tis */
-   close(vtpm_globals.tpm_fd);
-
-   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
-}
-
 /* TPM 2.0 */
 
 static void tpm2_AuthArea_ctor(const char *authValue, UINT32 authLen,
@@ -797,3 +783,17 @@ abort_egress:
 egress:
     return status;
 }
+
+void vtpmmgr_shutdown(void)
+{
+   /* Cleanup TPM resources */
+   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
+
+   /* Close tpmback */
+   shutdown_tpmback();
+
+   /* Close tpmfront/tpm_tis */
+   close(vtpm_globals.tpm_fd);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
+}
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122289.230646 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUd-00068p-9l; Tue, 04 May 2021 12:49:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122289.230646; Tue, 04 May 2021 12:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUd-00068f-5y; Tue, 04 May 2021 12:49:39 +0000
Received: by outflank-mailman (input) for mailman id 122289;
 Tue, 04 May 2021 12:49:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUb-0005hX-Rs
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:37 +0000
Received: from mail-qk1-x730.google.com (unknown [2607:f8b0:4864:20::730])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac142a89-455c-4029-b545-9857656015dc;
 Tue, 04 May 2021 12:49:17 +0000 (UTC)
Received: by mail-qk1-x730.google.com with SMTP id 197so8021009qkl.12
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:17 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.15
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac142a89-455c-4029-b545-9857656015dc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=a4b7pqUsgCM7Gh4S5TbhfMLGNActBOFdM2Cun502Sq8=;
        b=dzhHDDK46ArIp4L2A5VhPTN8YohkMLSNo9JYU6cZlKUIkuHStMu1ReFAgPMe7FVxTl
         uLziA6EBg5n26usbiMR2T3R6SmhJbj368s4xUf2BQKSOCzMvSXUslFDSUhrj4eRsuFFx
         UANM4zEKlnJiS2lDaJoU+FvR4kh/fdpcjU9c+DA34rVIQDN2odsg4cOnSt6/gwzeYjgu
         252RdKvHrCPWPnC6PtPncPtBYy7o410oVieV+YA+xtoJzU9S0AoW/UtJUYQM5bUJU2tD
         rF0pEGL/NUpGRZLqIwiHoQnxLYDUyPJotd7a9c9AhM69EKeLIkddA15UmRxByIE4zYN9
         5/vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=a4b7pqUsgCM7Gh4S5TbhfMLGNActBOFdM2Cun502Sq8=;
        b=FGv6CxZSqM8qTymPOWi8iq4wcZhuTbBEQWKqpHTLo/m1oEjlOFiAAUSn+5QtJEfzbF
         NO8JNetjx47hopwu5y33bVHsLZuNaG9xQn/QBuWyK0DBGGweoGxNpieQ6nJxd4fFUUdO
         G9xd8O8276Qe66VeDeV+1ipGaSBOL2sMuoWoUxKwgSbaSMW2fWYEyK0poZnf/D/RJr7j
         rbGl38jZxsTy91kRBPjn4P5G8RuFFqwrleEg8KwDkjYLqjvZD9XLIomcfMM/iDv5G5uf
         McwJMKcYHzxk/6ozjC4YVqQ/CvYtxDrCPjA/JMXnZfJJaf9yfDsdsHqigCaX7OT/mFKN
         +aVw==
X-Gm-Message-State: AOAM533oFdooG3aDFLWW8kQ6/D0M+BvwSe988xlJlZzZLl8pqr2M9n5d
	LqisxSBCb5sF2XtNxubsuMbufgj8/8o=
X-Google-Smtp-Source: ABdhPJxkM0wWqRyiW4XWD1IwsuRGHA7MegVMGGjyxEmpughkogQ4MUtOqxkmzUjUKAxsAAUl+Klk5w==
X-Received: by 2002:a37:a8c6:: with SMTP id r189mr20972326qke.446.1620132556773;
        Tue, 04 May 2021 05:49:16 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH 6/9] vtpmmgr: Flush transient keys on shutdown
Date: Tue,  4 May 2021 08:48:39 -0400
Message-Id: <20210504124842.220445-7-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove our key so it isn't left in the TPM for someone to come along
after vtpmmgr shuts down.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/init.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index 569b0dd1dc..d9fefa9be6 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -792,6 +792,14 @@ void vtpmmgr_shutdown(void)
    /* Close tpmback */
    shutdown_tpmback();
 
+    if (hw_is_tpm2()) {
+        /* Blow away all stale handles left in the tpm */
+        if (flush_tpm2() != TPM_SUCCESS) {
+            vtpmlogerror(VTPM_LOG_TPM,
+                         "TPM2_FlushResources failed, continuing shutdown..\n");
+        }
+    }
+
    /* Close tpmfront/tpm_tis */
    close(vtpm_globals.tpm_fd);
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122293.230657 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUh-0006Es-PW; Tue, 04 May 2021 12:49:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122293.230657; Tue, 04 May 2021 12:49:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUh-0006Ej-Lv; Tue, 04 May 2021 12:49:43 +0000
Received: by outflank-mailman (input) for mailman id 122293;
 Tue, 04 May 2021 12:49:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUg-0005hX-S2
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:42 +0000
Received: from mail-qk1-x733.google.com (unknown [2607:f8b0:4864:20::733])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 08ac263e-1521-4c02-b1de-0a817ac8cc6b;
 Tue, 04 May 2021 12:49:18 +0000 (UTC)
Received: by mail-qk1-x733.google.com with SMTP id v20so8316949qkv.5
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:18 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08ac263e-1521-4c02-b1de-0a817ac8cc6b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=IY3688lCpOSYrT/VQUB4uhc+PuFxy7DpqKpBTNFfpKA=;
        b=Mmc0fH56zWwQ1LV01pYXV2jkRXyynZHWJ0rK/6gwah0byGawyFX+lzPw27/dn4vimR
         2Z0A+1mrOZzPJa635380x7QGdS2hUTtoU/IGhRUE4b6qhvfnW4UREWvlZG2mXdTQLApm
         wAdtgNk8NgJl1weCwggzmNt1lT4iLv50X1xulz/Zkax8Mqy20a+A1/aK9jf8J7WYja+T
         0fZxu5dobo999pwIni2ajUiWmWg0tT3fZAklntLd/Q090Rm0Fn92eybjhq8/JKQvfV8T
         Fu2WxaVYcWlBDGsGk1/xuN9lGn1hyQTl23/ss2aTxahrVg/4vtHpOZUqMKDVYxhYaqg6
         wYeg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=IY3688lCpOSYrT/VQUB4uhc+PuFxy7DpqKpBTNFfpKA=;
        b=dGtoihiYnQn3Bpw4Jwm8CJGesOzU1rXoMBovYcq5Sei61MHNqQ34+baLywxaNkVyWz
         n+CnPChuAm4IPJdU5kq/afeD6Y8UqSWCAYgVVl8UT/hBgyLUOdFzTigPg5SJuSZxibdc
         58W4xsZJpg1Qwcru8zBtkYxsfL2N/3nU6Y07h+Vtt8M9/IDZ+FkPNgsKFqnd1xaaqQBL
         Bw6L6nn37YjtOD/XdDwhxkewTG7wODjVQMz0k4dd1KrcOEs8Mg9j28jDSk8tlM3X7+eP
         jaynb29h/Z49+eODwu2wkq3sY6v1VFrgwT1HgfAw2H5t5Rmx07/EHGVT0y8fh2EtzT2d
         5U2g==
X-Gm-Message-State: AOAM533+UCjOHvSPOUWqVAATkgHzFHci0fM/m0URpnTu6wBzEZRKTKpw
	/FPcxtGJYOFM3MFG+eXS7/bnOMuwv8A=
X-Google-Smtp-Source: ABdhPJy0DBjg5zaFaKwENUOB2K3O2SgQN2b1tHKM5coz0xMFMeNl88Nbu+peZEA7nvp5D8XkuqesaQ==
X-Received: by 2002:a37:ae44:: with SMTP id x65mr24485640qke.9.1620132557845;
        Tue, 04 May 2021 05:49:17 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH 7/9] vtpmmgr: Flush all transient keys
Date: Tue,  4 May 2021 08:48:40 -0400
Message-Id: <20210504124842.220445-8-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We're only flushing 2 transients, but there are 3 handles.  Use <= to also
flush the third handle.

The number of transient handles/keys is hardware-dependent, so this
should really query the TPM for the limit.  Handle assignment is also
assumed to be sequential from the minimum; that may not be guaranteed,
but it works with my TPM 2.0.
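The one-character fix is an inclusive-bound correction; a sketch with hypothetical range constants (three transient handles, as in the situation described above) shows why `<=` visits the last handle:

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical transient-handle range, for illustration only: three
 * handles 0x80000000..0x80000002, matching the commit message. */
#define TRANSIENT_FIRST 0x80000000u
#define TRANSIENT_LAST  0x80000002u

/* Count how many handles a loop with an inclusive upper bound visits. */
static int count_flushed(void)
{
    int flushed = 0;
    uint32_t i;

    for (i = TRANSIENT_FIRST; i <= TRANSIENT_LAST; i++)
        flushed++;   /* stands in for TPM2_FlushContext(i) */

    return flushed;
}
```

With `<` instead of `<=`, the loop body would run only twice and the third handle would be left behind.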

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index d9fefa9be6..e0dbcac3ad 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -656,7 +656,7 @@ static TPM_RC flush_tpm2(void)
 {
     int i;
 
-    for (i = TRANSIENT_FIRST; i < TRANSIENT_LAST; i++)
+    for (i = TRANSIENT_FIRST; i <= TRANSIENT_LAST; i++)
          TPM2_FlushContext(i);
 
     return TPM_SUCCESS;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122296.230670 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUn-0006L6-59; Tue, 04 May 2021 12:49:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122296.230670; Tue, 04 May 2021 12:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUn-0006Kv-1d; Tue, 04 May 2021 12:49:49 +0000
Received: by outflank-mailman (input) for mailman id 122296;
 Tue, 04 May 2021 12:49:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUl-0005hX-Rw
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:47 +0000
Received: from mail-qk1-x72f.google.com (unknown [2607:f8b0:4864:20::72f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e116294-23d2-48e3-bd0e-8ded2f2daf19;
 Tue, 04 May 2021 12:49:19 +0000 (UTC)
Received: by mail-qk1-x72f.google.com with SMTP id i67so5189471qkc.4
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:19 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.17
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e116294-23d2-48e3-bd0e-8ded2f2daf19
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=0DdizkBoM7pfPhbNOdVIyMYXA8hmL6MF3OIfF/uZavE=;
        b=kF+VGY1G0vz1TpF0CPNN+SmSe6Y8SqQTFCN3JoDULv2uIpxig3DWNaiRYODW+uCzCi
         IdeK8d3SmjNxsWKShWgMc2NBX+g4ciSEG2pXKM4KDqUi+ZbZ0/3c0B1r3Zv6ORT/1E7/
         /riXqDqO8aZfOUCpBiOsoCLbuWLdWVXkdu7q5NppT1/gDSeFEM5vBrEhwEu2td1IjGyh
         HbiZQXtI5nmVyyVM/Pd/+mDABFMgWYmiBQJe8gb4gzXxtFyL1u+7vS8R5L9xn76ZSj/U
         uSLKIOmlaLW2jJ51vHx0S8kWVnvpBk+GqPHcJXt+6O7VPJCjbRTkynx9H9ev7W+yyDjW
         G17w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=0DdizkBoM7pfPhbNOdVIyMYXA8hmL6MF3OIfF/uZavE=;
        b=hpF4bUkiF2G+Kn4dsFIGCKs5t/2dqtfC3y9kzOm70W3br38vNkt7z1Hp9MCffbrqXG
         JvoCkmyNXiFQ9gJL4iWpMpT41GLpqzCIjbdFOGdKM33bAVZWwLwHIpJuK0bPF8DWivKn
         foMIM4KzPk7IQ7y2AF3ON67BSVz2/ftca3xLWw7hIqJEsnxKn8hGMZxwSCYT5xBaXJRZ
         ML2Mhm2zqq99aOKVabQH7aB0GjlqG1lFXGWf80nx4fz9qO6KclgZVTG4Vh15j7oLAXWO
         6Zh/2Rr2ll3pvkQAD3hze4rIiuDcRZZYJdlu8evOHdRtdZ9OWt1j+xELXRqfl9ypDu3r
         6Dcw==
X-Gm-Message-State: AOAM532rN0zKs1R4Dkceho9tE4ZSQLjhBoASBkOSgrhiHO+6a5kZ3oRa
	mLI9H2i3eGtSUBvoI7GrMeeXrirknU4=
X-Google-Smtp-Source: ABdhPJz4nGrzzdi22sWorVw6Psa6zTWQYJGfOf+8+5iLbGs1LACC/jy+Xj5P4ntNBDR5nYeuxHN1SQ==
X-Received: by 2002:a05:620a:918:: with SMTP id v24mr24268993qkv.54.1620132558949;
        Tue, 04 May 2021 05:49:18 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH 8/9] vtpmmgr: Shutdown more gracefully
Date: Tue,  4 May 2021 08:48:41 -0400
Message-Id: <20210504124842.220445-9-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

vtpmmgr uses the default, weak app_shutdown, which immediately calls the
shutdown hypercall.  This short-circuits the vtpmmgr cleanup logic.  We
need to perform the cleanup to actually flush our key out of the TPM.

Setting do_shutdown is one step in that direction, but vtpmmgr will most
likely be waiting in tpmback_req_any.  We need to call shutdown_tpmback
to cancel the wait inside tpmback and perform the shutdown.
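A toy model of the pattern (illustrative names, not the stubdom API): a shutdown callback sets a flag and unblocks the wait, and the main loop re-checks the flag each iteration.

```c
#include <assert.h>

/* Toy model of the shutdown flow described above.  wait_for_request()
 * stands in for the blocking tpmback_req_any(); returning 0 simulates
 * the wait being cancelled by the shutdown callback. */
static int do_shutdown;
static int iterations;

static int wait_for_request(void)
{
    return do_shutdown ? 0 : 1;
}

static void request_shutdown(void)
{
    do_shutdown = 1;   /* the real app_shutdown() also cancels the wait */
}

static void main_loop(void)
{
    while (!do_shutdown) {
        if (!wait_for_request())
            continue;             /* woken by shutdown: re-check flag */
        iterations++;
        if (iterations == 3)
            request_shutdown();   /* simulate an external shutdown event */
    }
}
```

The key point mirrored here is that the flag alone is not enough: the loop only notices it after the blocking wait returns, which is why shutdown_tpmback must be called as well.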

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/vtpmmgr.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
index 9fddaa24f8..46ea018921 100644
--- a/stubdom/vtpmmgr/vtpmmgr.c
+++ b/stubdom/vtpmmgr/vtpmmgr.c
@@ -67,11 +67,21 @@ int hw_is_tpm2(void)
     return (hardware_version.hw_version == TPM2_HARDWARE) ? 1 : 0;
 }
 
+static int do_shutdown;
+
+void app_shutdown(unsigned int reason)
+{
+    printk("Shutdown requested: %d\n", reason);
+    do_shutdown = 1;
+
+    shutdown_tpmback();
+}
+
 void main_loop(void) {
    tpmcmd_t* tpmcmd;
    uint8_t respbuf[TCPA_MAX_BUFFER_LENGTH];
 
-   while(1) {
+   while (!do_shutdown) {
       /* Wait for requests from a vtpm */
       vtpmloginfo(VTPM_LOG_VTPM, "Waiting for commands from vTPM's:\n");
       if((tpmcmd = tpmback_req_any()) == NULL) {
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:49:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:49:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122300.230682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUs-0006Ra-Eu; Tue, 04 May 2021 12:49:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122300.230682; Tue, 04 May 2021 12:49:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduUs-0006RQ-Ah; Tue, 04 May 2021 12:49:54 +0000
Received: by outflank-mailman (input) for mailman id 122300;
 Tue, 04 May 2021 12:49:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lduUq-0005hX-S5
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:49:52 +0000
Received: from mail-qk1-x730.google.com (unknown [2607:f8b0:4864:20::730])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ac7165f-96f8-4fbd-9072-2d634d77c41a;
 Tue, 04 May 2021 12:49:20 +0000 (UTC)
Received: by mail-qk1-x730.google.com with SMTP id 197so8021191qkl.12
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:49:20 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:92e5:6d58:b544:4daa])
 by smtp.gmail.com with ESMTPSA id
 i11sm2355001qtv.8.2021.05.04.05.49.19
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 04 May 2021 05:49:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ac7165f-96f8-4fbd-9072-2d634d77c41a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=5P9pw08EZqXqNEkdmsF7ZDqPU01GGuwd0g313RI0CAY=;
        b=rPD2BYjdtrTPILExTmxr8PkQw+BPrfyypfxVEThxmTXarRi8DWP1Oz7eN7dYciLKqk
         zTVeVXKDd9JR5A5EiAA2AuMkPi2if0f5SjQIiQMwT/ht6S73gyS9LtsDOdCJe67Ec4Nf
         Pep50LlBoi6lc/lCvP5fxaglzepfoqv5WnHmKYXqrS8ZFkHwV5DbXPdQQm36wmrLscgv
         uUoo6ctdKY6C4rkqrieI8acygzu3xaLILQzqJVlCatB+eW2kO1v38ic6owVD80e5UJL6
         TACTV9wgxXwz8qqr1IDCvZN1nYzh80eYobn7yUCp2fpclsZXS8fnPW5kCiKIxecSGuyO
         AxFA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=5P9pw08EZqXqNEkdmsF7ZDqPU01GGuwd0g313RI0CAY=;
        b=R5WL5Di/lkWxym6SDUqvSBR/BIsUW07TL0ob/AEoin+yJ87B6jKjksa8qNidS/heVE
         PHsBTckQU24gH8s6D0jH7BVAKfuEdgpjqUcmAJIn7g5/M/4dew6+k9E4R4aZIQ5hyGGp
         qJm7/jjnxTKx/CvNENEv+SDl1x+1fR1VnsKbIsTPCxkWOrUsicw8aZgBxBEtPgBlvbnR
         0oE8qS6CjOsG80v5HqyHjkkVAg61dJECh9Qx69w875ZccpjZEamWbOa2aOsqN0e8blul
         bZNMQyX+v9u7oq8qufWB6ongltPfO1lRh/18bVLVbmwq0Shxl1RVsviXv2yCEMuX22or
         ARDw==
X-Gm-Message-State: AOAM531VNsQ2gm51umrz7T5K3Gh8bMZTsk5e4Nx1MqA5uxh4/airEhwc
	HdSgENk71q9zqyrK4xehq16uRukNqzI=
X-Google-Smtp-Source: ABdhPJyTEspJiSpERqd+zFpQhhwsfGAiBJpSYZUHyVrbKmTHn9E1iL/VVWvjSNNVZmi7/qjezf9V0g==
X-Received: by 2002:a37:a24b:: with SMTP id l72mr15678342qke.189.1620132560052;
        Tue, 04 May 2021 05:49:20 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH 9/9] vtpmmgr: Support GetRandom passthrough on TPM 2.0
Date: Tue,  4 May 2021 08:48:42 -0400
Message-Id: <20210504124842.220445-10-jandryuk@gmail.com>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <20210504124842.220445-1-jandryuk@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

GetRandom passthrough currently fails when using vtpmmgr with a hardware
TPM 2.0.
vtpmmgr (8): INFO[VTPM]: Passthrough: TPM_GetRandom
vtpm (12): vtpm_cmd.c:120: Error: TPM_GetRandom() failed with error code (30)

When running on TPM 2.0 hardware, vtpmmgr needs to convert the TPM 1.2
TPM_ORD_GetRandom ordinal into a TPM2 TPM_CC_GetRandom command.  Besides
the differing ordinal, TPM 1.2 uses 32-bit sizes for the request and
response byte counts, whereas TPM2 uses 16-bit sizes.

Place the random output directly into tpmcmd->resp and build the
packet around it.  This avoids bouncing through an extra buffer, but the
header has to be written after grabbing the random bytes, once the
number of bytes to include in the size field is known.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/marshal.h          | 10 +++++++
 stubdom/vtpmmgr/vtpm_cmd_handler.c | 48 ++++++++++++++++++++++++++++++
 2 files changed, 58 insertions(+)

diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
index dce19c6439..20da22af09 100644
--- a/stubdom/vtpmmgr/marshal.h
+++ b/stubdom/vtpmmgr/marshal.h
@@ -890,6 +890,15 @@ inline int sizeof_TPM_AUTH_SESSION(const TPM_AUTH_SESSION* auth) {
 	return rv;
 }
 
+static
+inline int sizeof_TPM_RQU_HEADER(BYTE* ptr) {
+	int rv = 0;
+	rv += sizeof_UINT16(ptr);
+	rv += sizeof_UINT32(ptr);
+	rv += sizeof_UINT32(ptr);
+	return rv;
+}
+
 static
 inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
 		TPM_TAG tag,
@@ -923,5 +932,6 @@ inline int unpack3_TPM_RQU_HEADER(BYTE* ptr, UINT32* pos, UINT32 max,
 #define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
 #define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
 #define unpack3_TPM_RSP_HEADER(p, l, m, t, s, r) unpack3_TPM_RQU_HEADER(p, l, m, t, s, r)
+#define sizeof_TPM_RSP_HEADER(p) sizeof_TPM_RQU_HEADER(p)
 
 #endif
diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
index 2ac14fae77..7ca1d9df94 100644
--- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
+++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
@@ -47,6 +47,7 @@
 #include "vtpm_disk.h"
 #include "vtpmmgr.h"
 #include "tpm.h"
+#include "tpm2.h"
 #include "tpmrsa.h"
 #include "tcg.h"
 #include "mgmt_authority.h"
@@ -772,6 +773,52 @@ static int vtpmmgr_permcheck(struct tpm_opaque *opq)
 	return 1;
 }
 
+TPM_RESULT vtpmmgr_handle_getrandom(struct tpm_opaque *opaque,
+				    tpmcmd_t* tpmcmd)
+{
+	TPM_RESULT status = TPM_SUCCESS;
+	TPM_TAG tag;
+	UINT32 size;
+	UINT32 rand_offset;
+	UINT32 rand_size;
+	TPM_COMMAND_CODE ord;
+	BYTE *p;
+
+	p = unpack_TPM_RQU_HEADER(tpmcmd->req, &tag, &size, &ord);
+
+	if (!hw_is_tpm2()) {
+		size = TCPA_MAX_BUFFER_LENGTH;
+		TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len,
+					      tpmcmd->resp, &size));
+		tpmcmd->resp_len = size;
+
+		return TPM_SUCCESS;
+	}
+
+	/* TPM_GetRandom req: <header><uint32 num bytes> */
+	unpack_UINT32(p, &rand_size);
+
+	/* Call TPM2_GetRandom but return a TPM_GetRandom response. */
+	/* TPM_GetRandom resp: <header><uint32 num bytes><num random bytes> */
+        rand_offset = sizeof_TPM_RSP_HEADER(tpmcmd->resp) +
+		      sizeof_UINT32(tpmcmd->resp);
+
+	TPMTRYRETURN(TPM2_GetRandom(&rand_size, tpmcmd->resp + rand_offset));
+
+	p = pack_TPM_RSP_HEADER(tpmcmd->resp, TPM_TAG_RSP_COMMAND,
+				rand_offset + rand_size, status);
+	p = pack_UINT32(p, rand_size);
+	tpmcmd->resp_len = rand_offset + rand_size;
+
+	return status;
+
+abort_egress:
+	tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+	pack_TPM_RSP_HEADER(tpmcmd->resp, tag + 3, tpmcmd->resp_len, status);
+
+	return status;
+}
+
 TPM_RESULT vtpmmgr_handle_cmd(
 		struct tpm_opaque *opaque,
 		tpmcmd_t* tpmcmd)
@@ -842,6 +889,7 @@ TPM_RESULT vtpmmgr_handle_cmd(
 		switch(ord) {
 		case TPM_ORD_GetRandom:
 			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
+			return vtpmmgr_handle_getrandom(opaque, tpmcmd);
 			break;
 		case TPM_ORD_PcrRead:
 			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 04 12:56:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 12:56:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122329.230694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduaz-0007cR-7H; Tue, 04 May 2021 12:56:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122329.230694; Tue, 04 May 2021 12:56:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduaz-0007cK-49; Tue, 04 May 2021 12:56:13 +0000
Received: by outflank-mailman (input) for mailman id 122329;
 Tue, 04 May 2021 12:56:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lduay-0007cF-7i
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:56:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e08573d-170b-41bd-bba4-1d686dc25568;
 Tue, 04 May 2021 12:56:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 56E4FAF21;
 Tue,  4 May 2021 12:56:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e08573d-170b-41bd-bba4-1d686dc25568
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620132970; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=n6Xu8DcTkC1lSYhllsdu5KJARVtwMcv4p/dT5dLxCGQ=;
	b=Jph2VrmhCwElMbdsQ6V6olyyrPPqKtBke/5ptmPysff1gEgdReMSVok4brxvohUzYhsqaQ
	DcXp+vQi4G/BkvVELtnyWs87wh9xhEBADf+5OYCx869OQ3L9B1O32eK2dJOkN4ezQARh1t
	21p0vVY8Pq0hJbsZb+siO2Lk9cBhAPE=
Subject: Re: [PATCH 5/5] x86/cpuid: Fix handling of xsave dynamic leaves
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-6-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <5e6511ca-83bd-8a43-202e-949b4d19b1ab@suse.com>
Date: Tue, 4 May 2021 14:56:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210503153938.14109-6-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 17:39, Andrew Cooper wrote:
> If max leaf is greater than 0xd but xsave is not available to the guest, then the
> current XSAVE size should not be filled in.  This is a latent bug for now as
> the guest max leaf is 0xd, but will become problematic in the future.
> 
> The comment concerning XSS state is wrong.  VT-x doesn't manage host/guest
> state automatically, but there is provision for "host only" bits to be set, so
> the implications are still accurate.
> 
> Introduce {xstate,hw}_compressed_size() helpers to mirror the uncompressed
> ones.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
albeit with a remark:

> +unsigned int xstate_compressed_size(uint64_t xstates)
> +{
> +    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
> +
> +    xstates &= ~XSTATE_FP_SSE;
> +    for_each_set_bit ( i, &xstates, 63 )
> +    {
> +        if ( test_bit(i, &xstate_align) )
> +            size = ROUNDUP(size, 64);
> +
> +        size += xstate_sizes[i];
> +    }
> +
> +    /* In debug builds, cross-check our calculation with hardware. */
> +    if ( IS_ENABLED(CONFIG_DEBUG) )
> +    {
> +        unsigned int hwsize;
> +
> +        xstates |= XSTATE_FP_SSE;
> +        hwsize = hw_compressed_size(xstates);
> +
> +        if ( size != hwsize )
> +            printk_once(XENLOG_ERR "%s(%#"PRIx64") size %#x != hwsize %#x\n",
> +                        __func__, xstates, size, hwsize);
> +        size = hwsize;

To be honest, already on the earlier patch I was wondering whether
it does any good to override size here: that will lead to different
behavior in debug vs release builds. If the log message is not
paid attention to, we'd then end up with longer-term breakage.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:03:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122332.230725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduhh-0000CH-8Z; Tue, 04 May 2021 13:03:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122332.230725; Tue, 04 May 2021 13:03:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduhh-0000CA-4b; Tue, 04 May 2021 13:03:09 +0000
Received: by outflank-mailman (input) for mailman id 122332;
 Tue, 04 May 2021 12:59:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y5oy=J7=linuxfoundation.org=rromoff@srs-us1.protection.inumbo.net>)
 id 1ldudg-0007lc-CQ
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 12:59:00 +0000
Received: from mail-lf1-x136.google.com (unknown [2a00:1450:4864:20::136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f45fbebd-a8b9-4eff-a2b2-ce2623eded6b;
 Tue, 04 May 2021 12:58:59 +0000 (UTC)
Received: by mail-lf1-x136.google.com with SMTP id t11so11617408lfl.11
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 05:58:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f45fbebd-a8b9-4eff-a2b2-ce2623eded6b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linuxfoundation.org; s=google;
        h=mime-version:from:date:message-id:subject:to;
        bh=mbiIFB7WbyDEjQWTDse1ymM07k6kH/Kt8YZWaQjE/sE=;
        b=h0FYrxCYXICIJ/LpY1Gsc8SdUhZP7OBd+tGjMvwNTY4SvQdcbtb05ctyUD5pPXNbeO
         FcMC36bPrBim6e7ZviJxiBiALVf6bzvWi6cYemmoKeXsXhWrPLveo6YgQtIAr5Cylb8l
         4O9sOX8zRjifUeOe3T9vfEGR0VqQx6Y1CbMDA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=mbiIFB7WbyDEjQWTDse1ymM07k6kH/Kt8YZWaQjE/sE=;
        b=ov44lvLM2CFYqO9MZXZ3hC0eU1NP2QmipTEQA1twYTUkdivQ+zAIOnVBFHExnYDU8+
         zEtWan+CcztOQG1dMAZJNkEw4r/GUSp9bcxw4OnrqTSiKkLgk4DXSUhY2wlxxnxRs7Uz
         DdS+HlCTaBZZNSBfOM23ZTfWdQUA6tB5h1/Xrm4znd7cfS+0iFkrmQkMBPcQakuCWl4s
         FuAGbTiIrJwLR2Ti2KzQ5PhmgEmzlwYxDUjMVgSPwQ1L9k5fHA4hNhbiH+cIiAHNxvQ4
         5UneVR5BPTGetWj1mI/4amNNvfH6HizbX9rPKYlZboVM6RN0seoKdkjvf+wD62Sjb9fb
         u1SA==
X-Gm-Message-State: AOAM531VPfIiYuKtYvSH1h5iGPIC5HNQlXWOtjWIDF21Q5FbHVdOCHvw
	To2ocuLeI8KgPD99I5SueemFeKN+C/XPQJ4LF++XnucOZ5y5Ag==
X-Google-Smtp-Source: ABdhPJxnucCbyUv7pL6HcNOl3r9b/8k4AbpHvv0F2dVJSh/pUWyDzK86Hcw6x1rkqjeeUJyAi/dzW6SIhOckM3jHLao=
X-Received: by 2002:a05:6512:2287:: with SMTP id f7mr5004393lfu.475.1620133138060;
 Tue, 04 May 2021 05:58:58 -0700 (PDT)
MIME-Version: 1.0
From: Rachel Romoff <rromoff@linuxfoundation.org>
Date: Tue, 4 May 2021 07:58:45 -0500
Message-ID: <CA+1LEQuO5Z_N6VKo3TwrE8Ri6+L1V4ssa_-vxKjXKJ-6ycvi7w@mail.gmail.com>
Subject: Xen PR Alias
To: xen-devel@lists.xenproject.org
Content-Type: multipart/alternative; boundary="0000000000002ef2a405c180a40f"

--0000000000002ef2a405c180a40f
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hi All-


For better collaboration with the Xen Project Community on content and PR
ideas, we are reviving the PR email alias.

The list should be used for:

   - New blog ideas
   - Volunteering for a developer profile (details below)
   - To discuss anything new or newsworthy that can be turned into
     content, or pitched to media related to Xen.
   - Flagging conferences or speaking opportunities.
   - Links to be tweeted or posted from Xen social media


This is a semi-private list in that embargoed news could be discussed.
When you agree to join this list, you agree to not share embargoed content
publicly.

To join this list, please email George Dunlap <george.dunlap@citrix.com> or
Rachel Romoff.

We will also share opportunities from time to time with this list and may
ask for feedback on certain topics or programs we are working on.

Open Opportunities

Use cases for Xen - Are you using Xen for something interesting? Let us
know!

Case Studies - Has your organization gotten measurable value out of its use
of Xen? Have you used Xen on a personal project and found interesting data?
Let us know!

Developer Profiles - Are you a Xen Project developer, or tech-adjacent
professional? We want to profile the work you've done.

If you have any questions please let us know,
Rachel


-- 
Rachel Romoff
Pronouns: She/Her
(210) 241-8284
Twitter @rachelromoff


--0000000000002ef2a405c180a40f--


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:03:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:03:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122345.230738 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduiN-0000Kl-N0; Tue, 04 May 2021 13:03:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122345.230738; Tue, 04 May 2021 13:03:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduiN-0000Ke-JH; Tue, 04 May 2021 13:03:51 +0000
Received: by outflank-mailman (input) for mailman id 122345;
 Tue, 04 May 2021 13:03:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1lduiM-0000Jn-NH
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:03:50 +0000
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1dfb8e11-3890-4c33-9d0f-d3eadac30016;
 Tue, 04 May 2021 13:03:47 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id E9C72140;
 Tue,  4 May 2021 15:03:45 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id Ka3Jwbg04sll; Tue,  4 May 2021 15:03:45 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 0C8B2AF;
 Tue,  4 May 2021 15:03:44 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lduiF-00Fp58-VK; Tue, 04 May 2021 15:03:43 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1dfb8e11-3890-4c33-9d0f-d3eadac30016
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 15:03:43 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 2/9] vtpmmgr: Print error code to aid debugging
Message-ID: <20210504130343.dwhvlewrphufjd7d@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-3-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504124842.220445-3-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: E9C72140
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[4];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 TAGGED_RCPT(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue. 04 May 2021 08:48:35 -0400, wrote:
> tpm_get_error_name returns "Unknown Error Code" when an error string
> is not defined.  In that case, we should print the Error Code so it can
> be looked up offline.  tpm_get_error_name returns a const string, so
> just have the two callers always print the error code so it is always
> available.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/tpm.c  | 2 +-
>  stubdom/vtpmmgr/tpm2.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/tpm.c b/stubdom/vtpmmgr/tpm.c
> index 779cddd64e..83b2bc16b2 100644
> --- a/stubdom/vtpmmgr/tpm.c
> +++ b/stubdom/vtpmmgr/tpm.c
> @@ -109,7 +109,7 @@
>  			UINT32 rsp_status; \
>  			UNPACK_OUT(TPM_RSP_HEADER, &rsp_tag, &rsp_len, &rsp_status); \
>  			if (rsp_status != TPM_SUCCESS) { \
> -				vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(rsp_status)); \
> +				vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s (%x)\n", tpm_get_error_name(rsp_status), rsp_status); \
>  				status = rsp_status; \
>  				goto abort_egress; \
>  			} \
> diff --git a/stubdom/vtpmmgr/tpm2.c b/stubdom/vtpmmgr/tpm2.c
> index c9f1016ab5..655e6d164c 100644
> --- a/stubdom/vtpmmgr/tpm2.c
> +++ b/stubdom/vtpmmgr/tpm2.c
> @@ -126,7 +126,7 @@
>      ptr = unpack_TPM_RSP_HEADER(ptr, \
>            &(tag), &(paramSize), &(status));\
>      if ((status) != TPM_SUCCESS){ \
> -        vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(status));\
> +        vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s (%x)\n", tpm_get_error_name(status), (status));\
>          goto abort_egress;\
>      }\
>  } while(0)
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:08:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:08:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122353.230750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldumm-0000W4-8E; Tue, 04 May 2021 13:08:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122353.230750; Tue, 04 May 2021 13:08:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldumm-0000Vx-55; Tue, 04 May 2021 13:08:24 +0000
Received: by outflank-mailman (input) for mailman id 122353;
 Tue, 04 May 2021 13:08:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1lduml-0000Vs-8l
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:08:23 +0000
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b62c44fc-f32a-4f59-ac2a-6e7c51da99a1;
 Tue, 04 May 2021 13:08:22 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 4673DAF;
 Tue,  4 May 2021 15:08:21 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id N_alPIsPs8l7; Tue,  4 May 2021 15:08:20 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id D5F2240;
 Tue,  4 May 2021 15:08:15 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1ldumc-00FpLS-U6; Tue, 04 May 2021 15:08:14 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b62c44fc-f32a-4f59-ac2a-6e7c51da99a1
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 15:08:14 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/9] stubdom: newlib: Enable C99 formats for %z
Message-ID: <20210504130814.ztkdptd2p4xxtc6i@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-4-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504124842.220445-4-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: 4673DAF
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[4];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue, 04 May 2021 08:48:36 -0400, wrote:
> vtpmmgr was changed to print size_t with the %z modifier, but newlib
> isn't compiled with %z support.  So you get output like:
> 
> root seal: zu; sector of 13: zu
> root: zu v=zu
> itree: 36; sector of 112: zu
> group: zu v=zu id=zu md=zu
> group seal: zu; 5 in parent: zu; sector of 13: zu
> vtpm: zu+zu; sector of 48: zu
> 
> Enable the C99 formats in newlib so vtpmmgr prints the numeric values.
> 
> Fixes 9379af08ccc0 "stubdom: vtpmmgr: Correctly format size_t with %z
> when printing."
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
> I haven't tried, but the other option would be to cast size_t and avoid
> %z.  Since this seems to be the only mini-os use of %z, that may be
> better than building a larger newlib.

The size difference will be very small. I believe we would rather have a
working %z than be surprised to find it not working.

> ---
>  stubdom/Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index 90d9ffcd9f..c6de5f68ae 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -105,7 +105,7 @@ cross-newlib: $(NEWLIB_STAMPFILE)
>  $(NEWLIB_STAMPFILE): mk-headers-$(XEN_TARGET_ARCH) newlib-$(NEWLIB_VERSION)
>  	mkdir -p newlib-$(XEN_TARGET_ARCH)
>  	( cd newlib-$(XEN_TARGET_ARCH) && \
> -	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --disable-multilib && \
> +	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --enable-newlib-io-c99-formats --disable-multilib && \
>  	  $(MAKE) DESTDIR= && \
>  	  $(MAKE) DESTDIR= install )
>  
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:09:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:09:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122356.230762 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldunz-0000e4-KY; Tue, 04 May 2021 13:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122356.230762; Tue, 04 May 2021 13:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldunz-0000dv-Fx; Tue, 04 May 2021 13:09:39 +0000
Received: by outflank-mailman (input) for mailman id 122356;
 Tue, 04 May 2021 13:09:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldunx-0000dm-RV
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:09:37 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1a::625])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b324017d-e981-45fe-be44-863ad137db14;
 Tue, 04 May 2021 13:09:35 +0000 (UTC)
Received: from AM6PR0502CA0068.eurprd05.prod.outlook.com
 (2603:10a6:20b:56::45) by AM0PR08MB3107.eurprd08.prod.outlook.com
 (2603:10a6:208:60::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.35; Tue, 4 May
 2021 13:09:33 +0000
Received: from VE1EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:56:cafe::c4) by AM6PR0502CA0068.outlook.office365.com
 (2603:10a6:20b:56::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.27 via Frontend
 Transport; Tue, 4 May 2021 13:09:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT017.mail.protection.outlook.com (10.152.18.90) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4087.27 via Frontend Transport; Tue, 4 May 2021 13:09:33 +0000
Received: ("Tessian outbound aff50003470c:v91");
 Tue, 04 May 2021 13:09:32 +0000
Received: from a858f6b1c09d.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 DECB788A-BE9A-4E67-BD91-207C6C5325E9.1; 
 Tue, 04 May 2021 13:09:21 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a858f6b1c09d.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 04 May 2021 13:09:21 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR08MB5520.eurprd08.prod.outlook.com (2603:10a6:803:135::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.41; Tue, 4 May
 2021 13:09:17 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9%4]) with mapi id 15.20.4087.044; Tue, 4 May 2021
 13:09:16 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0313.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:197::12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4087.43 via Frontend Transport; Tue, 4 May 2021 13:09:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b324017d-e981-45fe-be44-863ad137db14
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u8dFEG5o76gj/8Zyki+POmpxp3gzwAy7gksoi/WRfFQ=;
 b=McDgnugKVhUymVghASUQJPfcM24ECMhyLCNLYYB+er5RGJT/VHC8xlzJUasGzBtY8qBQN4MUe6r75JJqvIERhUNbb+bKnqDJy++Y5MwsSAbvQnrxZw+vt06pm2pPp8Le7EBVBwHsB18WJ2EW+a8iNYJAuK3gI4aS+eqeqsmMhT4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a59e3e2be832c0ce
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=CIYhVptjxI3wrwT8OGi6ofFr7U3VQxfREcX+Zs22NNiSe4QHzLkp3ZK2HSmgHgGCBPuzDxurMj0C/yrTE8ztCGMVOgdZCcwifWAasK4XTRu+gJFTKcMxSVhjirkK/OO4B/4ia8ZcW/oMqxnZzFMcm4aEBmZxA48AagqfswLbLR+1fUBYbf8y3qii/9p6PLgFVj6gUDGwJuWICulDxT1QoKN1J7uLLWtIs1dGhHMttoe8cda31Rz/BmFjVp7vRr4CKIWNfP+HagKNs244Z/kRLk8ZGyZ8Ywe2aG7z+LaLM48JNeYqsk8sQgfXjykHqjI/l7xhu3uj1jJR8UY57AOAYw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u8dFEG5o76gj/8Zyki+POmpxp3gzwAy7gksoi/WRfFQ=;
 b=TdbeRfAerBSyDWQ3NhWGXRWZOg6LkjXOOPOMAvi1d755KMkl135hCZDP2vbuvPT8/z7WiJkVCnhaOJO/3xRqasiUYt6O95yhJc8xh7GXBPeVKJGspCw28V1eFYgfN+LZezTTQWv0/U+SytZwAraLH8i+IYP+6+rty40499wnfTr2jWQTbqMUI7KGHzA11SfRS3hdA5uYSOnke0PZZORaAYTAGSLy2ilx3dOFiT/p+/c8cyEARsEtxAc3TOcrrIq0Oj9GGZafoZYJwOXKarp95GQ4CBL6SFsP7PJZLG4JxWfhSzGDiUhDn5iZ4BMg+qDyZ7LlG8m3TAsnk7kRHBDqtw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=u8dFEG5o76gj/8Zyki+POmpxp3gzwAy7gksoi/WRfFQ=;
 b=McDgnugKVhUymVghASUQJPfcM24ECMhyLCNLYYB+er5RGJT/VHC8xlzJUasGzBtY8qBQN4MUe6r75JJqvIERhUNbb+bKnqDJy++Y5MwsSAbvQnrxZw+vt06pm2pPp8Le7EBVBwHsB18WJ2EW+a8iNYJAuK3gI4aS+eqeqsmMhT4=
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=utf-8
Subject: Re: [PATCH v4 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <37e5b461-40fe-ac78-59b9-033ff8cdc6d1@suse.com>
Date: Tue, 4 May 2021 14:09:09 +0100
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Content-Transfer-Encoding: quoted-printable
Message-Id: <1853929B-AC45-42AF-8FE4-7B23C700B2E2@arm.com>
References: <20210504094606.7125-1-luca.fancellu@arm.com>
 <20210504094606.7125-4-luca.fancellu@arm.com>
 <37e5b461-40fe-ac78-59b9-033ff8cdc6d1@suse.com>
To: Jan Beulich <jbeulich@suse.com>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO4P123CA0313.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:197::12) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2dca6f1c-1aa6-44e4-5953-08d90efddc17
X-MS-TrafficTypeDiagnostic: VI1PR08MB5520:|AM0PR08MB3107:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3107E4360815A6DA0D858C4EE45A9@AM0PR08MB3107.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5520
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	d08ae94f-b275-4249-7ac5-08d90efdd1df
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 13:09:33.0408
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 2dca6f1c-1aa6-44e4-5953-08d90efddc17
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3107



> On 4 May 2021, at 12:48, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 04.05.2021 11:46, Luca Fancellu wrote:
>> @@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>>  * bytes to be copied.
>>  */
>> 
>> -#define _GNTCOPY_source_gref      (0)
>> -#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>> -#define _GNTCOPY_dest_gref        (1)
>> -#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>> -
>> struct gnttab_copy {
>>     /* IN parameters. */
>>     struct gnttab_copy_ptr {
>> @@ -471,6 +481,12 @@ struct gnttab_copy {
>>     /* OUT parameters. */
>>     int16_t       status;
>> };
>> +
>> +#define _GNTCOPY_source_gref      (0)
>> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>> +#define _GNTCOPY_dest_gref        (1)
>> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
> 
> Didn't you say you agree with moving this back up some, next to the
> field using these?

Hi Jan,

My mistake! I'll move it in the next patch. Did you spot anything else
I might have forgotten of what we agreed?

Cheers,
Luca

> 
> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 04 13:13:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:13:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122361.230774 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldurn-0001UF-6P; Tue, 04 May 2021 13:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122361.230774; Tue, 04 May 2021 13:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldurn-0001U8-1L; Tue, 04 May 2021 13:13:35 +0000
Received: by outflank-mailman (input) for mailman id 122361;
 Tue, 04 May 2021 13:13:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1ldurl-0001U3-Er
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:13:33 +0000
Received: from hera.aquilenet.fr (unknown [2a0c:e300::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef317a00-2862-44bd-baa3-f39faf501c39;
 Tue, 04 May 2021 13:13:31 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 7A996140;
 Tue,  4 May 2021 15:13:30 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id sPl5ECzdqkG6; Tue,  4 May 2021 15:13:29 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 8C278AF;
 Tue,  4 May 2021 15:13:29 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1ldurg-00FpSo-Gn; Tue, 04 May 2021 15:13:28 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef317a00-2862-44bd-baa3-f39faf501c39
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 15:13:28 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 4/9] vtpmmgr: Allow specifying srk_handle for TPM2
Message-ID: <20210504131328.wtoe4swz7nyzyuts@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-5-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504124842.220445-5-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: 7A996140
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 TAGGED_RCPT(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 RCPT_COUNT_FIVE(0.00)[6];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue, 04 May 2021 08:48:37 -0400, wrote:
> Bypass taking ownership of the TPM2 if an srk_handle is specified.
> 
> This srk_handle must be usable with Null auth for the time being.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  docs/man/xen-vtpmmgr.7.pod |  7 +++++++
>  stubdom/vtpmmgr/init.c     | 11 ++++++++++-
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
> index 875dcce508..3286954568 100644
> --- a/docs/man/xen-vtpmmgr.7.pod
> +++ b/docs/man/xen-vtpmmgr.7.pod
> @@ -92,6 +92,13 @@ Valid arguments:
>  
>  =over 4
>  
> +=item srk_handle=<HANDLE>

Is this actually srk_handle= or srk_handle: ?

The code tests for the latter. The same ambiguity also "exists" for
owner_auth: and srk_auth:, but there both = and : happen to work,
because the strncmp length stops just short of the delimiter.

We'd better clean this up to avoid confusion.

Samuel

> +
> +Specify a srk_handle for TPM 2.0.  TPM 2.0 uses a key hierarchy, and
> +this allows specifying the parent handle for vtpmmgr to create its own
> +key under.  Using this option bypasses vtpmmgr trying to take ownership
> +of the TPM.
> +
>  =item owner_auth=<AUTHSPEC>
>  
>  =item srk_auth=<AUTHSPEC>
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 1506735051..c01d03e9f4 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -302,6 +302,11 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
>              goto err_invalid;
>           }
>        }
> +      else if(!strncmp(argv[i], "srk_handle:", 11)) {
> +         if(sscanf(argv[i] + 11, "%x", &vtpm_globals.srk_handle) != 1) {
> +            goto err_invalid;
> +         }
> +      }
>        else if(!strncmp(argv[i], "tpmdriver=", 10)) {
>           if(!strcmp(argv[i] + 10, "tpm_tis")) {
>              opts->tpmdriver = TPMDRV_TPM_TIS;
> @@ -586,7 +591,11 @@ TPM_RESULT vtpmmgr2_create(void)
>  {
>      TPM_RESULT status = TPM_SUCCESS;
>  
> -    TPMTRYRETURN(tpm2_take_ownership());
> +    if ( vtpm_globals.srk_handle == 0 ) {
> +        TPMTRYRETURN(tpm2_take_ownership());
> +    } else {
> +        tpm2_AuthArea_ctor(NULL, 0, &vtpm_globals.srk_auth_area);
> +    }
>  
>     /* create SK */
>      TPM2_Create_Params_out out;
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:14:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:14:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122367.230786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldusd-0001d5-KG; Tue, 04 May 2021 13:14:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122367.230786; Tue, 04 May 2021 13:14:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldusd-0001cy-H8; Tue, 04 May 2021 13:14:27 +0000
Received: by outflank-mailman (input) for mailman id 122367;
 Tue, 04 May 2021 13:14:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1ldusc-0001cq-Nz
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:14:26 +0000
Received: from hera.aquilenet.fr (unknown [2a0c:e300::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9020e9be-d7b3-485b-bff9-8b16bd45550e;
 Tue, 04 May 2021 13:14:25 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 0CFDF319;
 Tue,  4 May 2021 15:14:25 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 3U7TsUZRm3T1; Tue,  4 May 2021 15:14:24 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 30AE4301;
 Tue,  4 May 2021 15:14:24 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1ldusZ-00FpWx-8n; Tue, 04 May 2021 15:14:23 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9020e9be-d7b3-485b-bff9-8b16bd45550e
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 15:14:23 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 5/9] vtpmmgr: Move vtpmmgr_shutdown
Message-ID: <20210504131423.52sqzrnn3yu34r2u@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-6-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504124842.220445-6-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: 0CFDF319
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[4];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 TAGGED_RCPT(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue, 04 May 2021 08:48:38 -0400, wrote:
> Reposition vtpmmgr_shutdown so it can call flush_tpm2 without a forward
> declaration.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/init.c | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index c01d03e9f4..569b0dd1dc 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -503,20 +503,6 @@ egress:
>     return status;
>  }
>  
> -void vtpmmgr_shutdown(void)
> -{
> -   /* Cleanup TPM resources */
> -   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
> -
> -   /* Close tpmback */
> -   shutdown_tpmback();
> -
> -   /* Close tpmfront/tpm_tis */
> -   close(vtpm_globals.tpm_fd);
> -
> -   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
> -}
> -
>  /* TPM 2.0 */
>  
>  static void tpm2_AuthArea_ctor(const char *authValue, UINT32 authLen,
> @@ -797,3 +783,17 @@ abort_egress:
>  egress:
>      return status;
>  }
> +
> +void vtpmmgr_shutdown(void)
> +{
> +   /* Cleanup TPM resources */
> +   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
> +
> +   /* Close tpmback */
> +   shutdown_tpmback();
> +
> +   /* Close tpmfront/tpm_tis */
> +   close(vtpm_globals.tpm_fd);
> +
> +   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
> +}
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:15:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:15:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122370.230798 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldutc-0001l7-Tw; Tue, 04 May 2021 13:15:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122370.230798; Tue, 04 May 2021 13:15:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldutc-0001l0-R3; Tue, 04 May 2021 13:15:28 +0000
Received: by outflank-mailman (input) for mailman id 122370;
 Tue, 04 May 2021 13:15:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1ldutc-0001ks-25
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:15:28 +0000
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 46662dda-9f14-4d1b-bc15-399849c508a0;
 Tue, 04 May 2021 13:15:27 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 6C3D4237;
 Tue,  4 May 2021 15:15:26 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id rooe7fDGSXGH; Tue,  4 May 2021 15:15:25 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 717481E8;
 Tue,  4 May 2021 15:15:25 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1ldutY-00FpXJ-Ka; Tue, 04 May 2021 15:15:24 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46662dda-9f14-4d1b-bc15-399849c508a0
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 15:15:24 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 6/9] vtpmmgr: Flush transient keys on shutdown
Message-ID: <20210504131524.5emfxq2eykdjj6av@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-7-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504124842.220445-7-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: 6C3D4237
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[4];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 TAGGED_RCPT(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue 04 May 2021 08:48:39 -0400, wrote:
> Remove our key so it isn't left in the TPM for someone to come along
> after vtpmmgr shuts down.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/init.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 569b0dd1dc..d9fefa9be6 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -792,6 +792,14 @@ void vtpmmgr_shutdown(void)
>     /* Close tpmback */
>     shutdown_tpmback();
>  
> +    if (hw_is_tpm2()) {
> +        /* Blow away all stale handles left in the tpm*/
> +        if (flush_tpm2() != TPM_SUCCESS) {
> +            vtpmlogerror(VTPM_LOG_TPM,
> +                         "TPM2_FlushResources failed, continuing shutdown..\n");
> +        }
> +    }
> +
>     /* Close tpmfront/tpm_tis */
>     close(vtpm_globals.tpm_fd);
>  
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:16:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:16:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122374.230810 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduud-0001rx-89; Tue, 04 May 2021 13:16:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122374.230810; Tue, 04 May 2021 13:16:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduud-0001rq-4p; Tue, 04 May 2021 13:16:31 +0000
Received: by outflank-mailman (input) for mailman id 122374;
 Tue, 04 May 2021 13:16:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1lduub-0001rd-NC
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:16:29 +0000
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db7c420d-2f84-4127-a455-19cc2312fd9e;
 Tue, 04 May 2021 13:16:28 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 0BB71365;
 Tue,  4 May 2021 15:16:28 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id XyzKQs3AJ772; Tue,  4 May 2021 15:16:27 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 42FA3301;
 Tue,  4 May 2021 15:16:27 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lduuY-00FpYD-3f; Tue, 04 May 2021 15:16:26 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db7c420d-2f84-4127-a455-19cc2312fd9e
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 15:16:26 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 7/9] vtpmmgr: Flush all transient keys
Message-ID: <20210504131626.h2ylaamk35evw6yg@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-8-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504124842.220445-8-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: 0BB71365
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[4];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 TAGGED_RCPT(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue 04 May 2021 08:48:40 -0400, wrote:
> We're only flushing 2 transients, but there are 3 handles.  Use <= to also
> flush the third handle.
> 
> The number of transient handles/keys is hardware dependent, so this
> should query for the limit.  And assignment of handles is assumed to be
> sequential from the minimum.  That may not be guaranteed, but seems okay
> with my tpm2.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Maybe make it explicit in the commit log that TRANSIENT_LAST is actually inclusive?

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/init.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index d9fefa9be6..e0dbcac3ad 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -656,7 +656,7 @@ static TPM_RC flush_tpm2(void)
>  {
>      int i;
>  
> -    for (i = TRANSIENT_FIRST; i < TRANSIENT_LAST; i++)
> +    for (i = TRANSIENT_FIRST; i <= TRANSIENT_LAST; i++)
>           TPM2_FlushContext(i);
>  
>      return TPM_SUCCESS;
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:18:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:18:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122380.230821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduwr-00022c-Jq; Tue, 04 May 2021 13:18:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122380.230821; Tue, 04 May 2021 13:18:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lduwr-00022V-Gr; Tue, 04 May 2021 13:18:49 +0000
Received: by outflank-mailman (input) for mailman id 122380;
 Tue, 04 May 2021 13:18:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1lduwp-00021e-PX
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:18:47 +0000
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ccd55b99-cfc5-422d-b7d3-854f8fb372a3;
 Tue, 04 May 2021 13:18:46 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id C5AAD41B;
 Tue,  4 May 2021 15:18:45 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id fSDWyE44xkTu; Tue,  4 May 2021 15:18:43 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 779DB8F;
 Tue,  4 May 2021 15:18:43 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lduwk-00FpZq-Ez; Tue, 04 May 2021 15:18:42 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccd55b99-cfc5-422d-b7d3-854f8fb372a3
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 15:18:42 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 8/9] vtpmmgr: Shutdown more gracefully
Message-ID: <20210504131842.cas3s2rpd4cvr46q@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-9-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504124842.220445-9-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: C5AAD41B
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[4];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 TAGGED_RCPT(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue 04 May 2021 08:48:41 -0400, wrote:
> vtpmmgr uses the default, weak app_shutdown, which immediately calls the
> shutdown hypercall.  This short-circuits the vtpmmgr cleanup logic.  We
> need to perform the cleanup to actually flush our key out of the TPM.
> 
> Setting do_shutdown is one step in that direction, but vtpmmgr will most
> likely be waiting in tpmback_req_any.  We need to call shutdown_tpmback
> to cancel the wait inside tpmback and perform the shutdown.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/vtpmmgr.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
> index 9fddaa24f8..46ea018921 100644
> --- a/stubdom/vtpmmgr/vtpmmgr.c
> +++ b/stubdom/vtpmmgr/vtpmmgr.c
> @@ -67,11 +67,21 @@ int hw_is_tpm2(void)
>      return (hardware_version.hw_version == TPM2_HARDWARE) ? 1 : 0;
>  }
>  
> +static int do_shutdown;
> +
> +void app_shutdown(unsigned int reason)
> +{
> +    printk("Shutdown requested: %d\n", reason);
> +    do_shutdown = 1;
> +
> +    shutdown_tpmback();
> +}
> +
>  void main_loop(void) {
>     tpmcmd_t* tpmcmd;
>     uint8_t respbuf[TCPA_MAX_BUFFER_LENGTH];
>  
> -   while(1) {
> +   while (!do_shutdown) {
>        /* Wait for requests from a vtpm */
>        vtpmloginfo(VTPM_LOG_VTPM, "Waiting for commands from vTPM's:\n");
>        if((tpmcmd = tpmback_req_any()) == NULL) {
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:28:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:28:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122389.230834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldv5z-0002xG-Ff; Tue, 04 May 2021 13:28:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122389.230834; Tue, 04 May 2021 13:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldv5z-0002x9-CN; Tue, 04 May 2021 13:28:15 +0000
Received: by outflank-mailman (input) for mailman id 122389;
 Tue, 04 May 2021 13:28:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldv5x-0002wz-VB
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:28:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eb30eb11-e99d-4291-9783-5a5d3127cae0;
 Tue, 04 May 2021 13:28:12 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1C885AF2F;
 Tue,  4 May 2021 13:28:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb30eb11-e99d-4291-9783-5a5d3127cae0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620134892; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AaFdGfB8BZpSS75FzAWGpKYG+lMXN2kqMmw3gSrRoSo=;
	b=fcJ8k92/e/CFi0J1rMoovhZYquKhkQeAtXHNHCk8aQoqplc/lBZ0oohvfwaRUZ+4yN8CC5
	oPHN/GY1ay+5jxN0AqCSw+4yOWN0bnQER/E7BWsU/cgYbvA/P2DUEtFNvpDb2/swJiJ5oN
	ki0Vwr+kAOmVAcxmZugGjalxkF+fddo=
Subject: Re: [PATCH v4 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: Bertrand Marquis <bertrand.marquis@arm.com>, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210504094606.7125-1-luca.fancellu@arm.com>
 <20210504094606.7125-4-luca.fancellu@arm.com>
 <37e5b461-40fe-ac78-59b9-033ff8cdc6d1@suse.com>
 <1853929B-AC45-42AF-8FE4-7B23C700B2E2@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e3f816df-a3ee-f880-ad6f-68c9cc2db517@suse.com>
Date: Tue, 4 May 2021 15:28:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <1853929B-AC45-42AF-8FE4-7B23C700B2E2@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 15:09, Luca Fancellu wrote:
>> On 4 May 2021, at 12:48, Jan Beulich <jbeulich@suse.com> wrote:
>> On 04.05.2021 11:46, Luca Fancellu wrote:
>>> @@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>>>  * bytes to be copied.
>>>  */
>>>
>>> -#define _GNTCOPY_source_gref      (0)
>>> -#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>>> -#define _GNTCOPY_dest_gref        (1)
>>> -#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>>> -
>>> struct gnttab_copy {
>>>     /* IN parameters. */
>>>     struct gnttab_copy_ptr {
>>> @@ -471,6 +481,12 @@ struct gnttab_copy {
>>>     /* OUT parameters. */
>>>     int16_t       status;
>>> };
>>> +
>>> +#define _GNTCOPY_source_gref      (0)
>>> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>>> +#define _GNTCOPY_dest_gref        (1)
>>> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>>
>> Didn't you say you agree with moving this back up some, next to the
>> field using these?
> 
My mistake! I’ll move it in the next patch. Did you spot anything else I might have forgotten from what we agreed?

No, thanks. I don't think I have any more comments to make on this
series (once this last aspect is addressed, and assuming no new
issues get introduced). But to be clear on that side as well - I
don't think I'm up to actually ack-ing the patch (let alone the
entire series).

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:32:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122394.230845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldv9k-0003o8-Vw; Tue, 04 May 2021 13:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122394.230845; Tue, 04 May 2021 13:32:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldv9k-0003o1-Sw; Tue, 04 May 2021 13:32:08 +0000
Received: by outflank-mailman (input) for mailman id 122394;
 Tue, 04 May 2021 13:32:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldv9k-0003nw-C3
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:32:08 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 27a0024d-6ef3-424d-bd26-79b8e379d72a;
 Tue, 04 May 2021 13:32:03 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 9BEAF106F;
 Tue,  4 May 2021 06:31:57 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3D6F23F73B;
 Tue,  4 May 2021 06:31:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27a0024d-6ef3-424d-bd26-79b8e379d72a
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 2/3] docs: hypercalls sphinx skeleton for generated html
Date: Tue,  4 May 2021 14:31:44 +0100
Message-Id: <20210504133145.767-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210504133145.767-1-luca.fancellu@arm.com>
References: <20210504133145.767-1-luca.fancellu@arm.com>

Create a skeleton for the documentation about hypercalls

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 .gitignore                             |  1 +
 docs/Makefile                          |  4 ++++
 docs/hypercall-interfaces/arm32.rst    |  4 ++++
 docs/hypercall-interfaces/arm64.rst    | 32 ++++++++++++++++++++++++++
 docs/hypercall-interfaces/index.rst.in |  7 ++++++
 docs/hypercall-interfaces/x86_64.rst   |  4 ++++
 docs/index.rst                         |  8 +++++++
 7 files changed, 60 insertions(+)
 create mode 100644 docs/hypercall-interfaces/arm32.rst
 create mode 100644 docs/hypercall-interfaces/arm64.rst
 create mode 100644 docs/hypercall-interfaces/index.rst.in
 create mode 100644 docs/hypercall-interfaces/x86_64.rst

diff --git a/.gitignore b/.gitignore
index d271e0ce6a..a9aab120ae 100644
--- a/.gitignore
+++ b/.gitignore
@@ -64,6 +64,7 @@ docs/xen.doxyfile
 docs/xen.doxyfile.tmp
 docs/xen-doxygen/doxygen_include.h
 docs/xen-doxygen/doxygen_include.h.tmp
+docs/hypercall-interfaces/index.rst
 extras/mini-os*
 install/*
 stubdom/*-minios-config.mk
diff --git a/docs/Makefile b/docs/Makefile
index 2f784c36ce..b02c3dfb79 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -61,6 +61,9 @@ build: html txt pdf man-pages figs
 sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
 ifneq ($(SPHINXBUILD),no)
 	$(DOXYGEN) xen.doxyfile
+	@echo "Generating hypercall-interfaces/index.rst"
+	@sed -e "s,@XEN_TARGET_ARCH@,$(XEN_TARGET_ARCH),g" \
+		hypercall-interfaces/index.rst.in > hypercall-interfaces/index.rst
 	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
 else
 	@echo "Sphinx is not installed; skipping sphinx-html documentation."
@@ -108,6 +111,7 @@ clean: clean-man-pages
 	rm -f xen.doxyfile.tmp
 	rm -f xen-doxygen/doxygen_include.h
 	rm -f xen-doxygen/doxygen_include.h.tmp
+	rm -f hypercall-interfaces/index.rst
 
 .PHONY: distclean
 distclean: clean
diff --git a/docs/hypercall-interfaces/arm32.rst b/docs/hypercall-interfaces/arm32.rst
new file mode 100644
index 0000000000..4e973fbbaf
--- /dev/null
+++ b/docs/hypercall-interfaces/arm32.rst
@@ -0,0 +1,4 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - arm32
+============================
diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
new file mode 100644
index 0000000000..5e701a2adc
--- /dev/null
+++ b/docs/hypercall-interfaces/arm64.rst
@@ -0,0 +1,32 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - arm64
+============================
+
+Starting points
+---------------
+.. toctree::
+   :maxdepth: 2
+
+
+
+Functions
+---------
+
+
+Structs
+-------
+
+
+Enums and sets of #defines
+--------------------------
+
+
+Typedefs
+--------
+
+
+Enum values and individual #defines
+-----------------------------------
+
+
diff --git a/docs/hypercall-interfaces/index.rst.in b/docs/hypercall-interfaces/index.rst.in
new file mode 100644
index 0000000000..e4dcc5db8d
--- /dev/null
+++ b/docs/hypercall-interfaces/index.rst.in
@@ -0,0 +1,7 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces
+====================
+
+.. toctree::
+   @XEN_TARGET_ARCH@
diff --git a/docs/hypercall-interfaces/x86_64.rst b/docs/hypercall-interfaces/x86_64.rst
new file mode 100644
index 0000000000..3ed70dff95
--- /dev/null
+++ b/docs/hypercall-interfaces/x86_64.rst
@@ -0,0 +1,4 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - x86_64
+=============================
diff --git a/docs/index.rst b/docs/index.rst
index b75487a05d..52226a42d8 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -53,6 +53,14 @@ kind of development environment.
    hypervisor-guide/index
 
 
+Hypercall Interfaces documentation
+----------------------------------
+
+.. toctree::
+   :maxdepth: 2
+
+   hypercall-interfaces/index
+
 Miscellanea
 -----------
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 04 13:32:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:32:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122395.230858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldv9q-0003qJ-A2; Tue, 04 May 2021 13:32:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122395.230858; Tue, 04 May 2021 13:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldv9q-0003q9-4I; Tue, 04 May 2021 13:32:14 +0000
Received: by outflank-mailman (input) for mailman id 122395;
 Tue, 04 May 2021 13:32:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldv9p-0003nw-9G
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:32:13 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id f25d994e-633f-4031-ab52-29192e9e6662;
 Tue, 04 May 2021 13:31:56 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 0B3A61063;
 Tue,  4 May 2021 06:31:56 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 337253F73B;
 Tue,  4 May 2021 06:31:54 -0700 (PDT)
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 1/3] docs: add doxygen support for html documentation
Date: Tue,  4 May 2021 14:31:43 +0100
Message-Id: <20210504133145.767-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210504133145.767-1-luca.fancellu@arm.com>
References: <20210504133145.767-1-luca.fancellu@arm.com>

Add doxygen support to build html documentation with sphinx.
To do that, the following modifications are applied:

1) Modify docs/configure.ac (and consequently the generated
   configure script) to check for the presence of the
   sphinx-build binary on the system; if it is found, also
   check for the doxygen binary and for the breathe and
   sphinx-rtd-theme python packages.
   Doxygen and the python packages are required only when
   sphinx-build is found; otherwise the Makefile simply skips
   the sphinx html generation.
   The ax_python_module.m4 support is needed to check for the
   python packages.
2) Add the doxygen templates and configuration file.
3) Modify docs/Makefile to call doxygen and sphinx-build in
   sequence; the doxygen configuration file is generated from
   a template, filling in the xen absolute path and appending
   an INPUT entry for every header listed in the
   xen-doxygen/doxy_input.list file.
4) Add a preprocessor, called by doxygen before parsing each
   header, that includes in every header a doxygen_include.h
   file providing the missing defines and includes that are
   usually passed by the compiler.
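As a minimal illustration of point 4 (not part of the patch; the sample header
text below is made up), the anonymous struct/union renaming performed by
docs/xen-doxygen/doxy-preprocessor.py can be sketched as:

```python
import re

# One counter per kind, so struct and union labels are numbered
# independently, mirroring the counters in the preprocessor.
counts = {"struct": 0, "union": 0}

def enumerate_anonymous(match):
    kind = match.group(1)
    label = "anonymous_%s_%d" % (kind, counts[kind])
    counts[kind] += 1
    return "%s %s {" % (kind, label)

header = "struct {\n    int a;\n};\ntypedef struct {\n    int b;\n} b_t;\n"

# Name anonymous structs/unions so doxygen can document them; the
# negative lookbehind skips typedef'd ones, which doxygen handles fine.
renamed = re.sub(r"(?<!typedef\s)(struct|union)\s+?\{",
                 enumerate_anonymous, header, flags=re.S)
print(renamed)
```

The real script additionally prepends an #include of the generated
doxygen_include.h before handing the header text to doxygen.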

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v4 changes:
- create alias @keepindent/@endkeepindent for the doxygen
  command @code/@endcode
v3 changes:
- add preprocessor to handle missing defines and anonymous
  structs/unions before doxygen parsing
- modification to Makefile to handle the new process
v2 changes:
- Fix bug in Makefile when sphinx is not found in the system
---
 .gitignore                                   |    6 +
 config/Docs.mk.in                            |    2 +
 docs/Makefile                                |   42 +-
 docs/conf.py                                 |   48 +-
 docs/configure                               |  258 ++
 docs/configure.ac                            |   15 +
 docs/xen-doxygen/customdoxygen.css           |   36 +
 docs/xen-doxygen/doxy-preprocessor.py        |  110 +
 docs/xen-doxygen/doxy_input.list             |    0
 docs/xen-doxygen/doxygen_include.h.in        |   32 +
 docs/xen-doxygen/footer.html                 |   21 +
 docs/xen-doxygen/header.html                 |   56 +
 docs/xen-doxygen/mainpage.md                 |    5 +
 docs/xen-doxygen/xen_project_logo_165x67.png |  Bin 0 -> 18223 bytes
 docs/xen.doxyfile.in                         | 2316 ++++++++++++++++++
 m4/ax_python_module.m4                       |   56 +
 m4/docs_tool.m4                              |    9 +
 17 files changed, 3006 insertions(+), 6 deletions(-)
 create mode 100644 docs/xen-doxygen/customdoxygen.css
 create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
 create mode 100644 docs/xen-doxygen/doxy_input.list
 create mode 100644 docs/xen-doxygen/doxygen_include.h.in
 create mode 100644 docs/xen-doxygen/footer.html
 create mode 100644 docs/xen-doxygen/header.html
 create mode 100644 docs/xen-doxygen/mainpage.md
 create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png
 create mode 100644 docs/xen.doxyfile.in
 create mode 100644 m4/ax_python_module.m4

diff --git a/.gitignore b/.gitignore
index 1c2fa1530b..d271e0ce6a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -58,6 +58,12 @@ docs/man7/
 docs/man8/
 docs/pdf/
 docs/txt/
+docs/doxygen-output
+docs/sphinx
+docs/xen.doxyfile
+docs/xen.doxyfile.tmp
+docs/xen-doxygen/doxygen_include.h
+docs/xen-doxygen/doxygen_include.h.tmp
 extras/mini-os*
 install/*
 stubdom/*-minios-config.mk
diff --git a/config/Docs.mk.in b/config/Docs.mk.in
index e76e5cd5ff..dfd4a02838 100644
--- a/config/Docs.mk.in
+++ b/config/Docs.mk.in
@@ -7,3 +7,5 @@ POD2HTML            := @POD2HTML@
 POD2TEXT            := @POD2TEXT@
 PANDOC              := @PANDOC@
 PERL                := @PERL@
+SPHINXBUILD         := @SPHINXBUILD@
+DOXYGEN             := @DOXYGEN@
diff --git a/docs/Makefile b/docs/Makefile
index 8de1efb6f5..2f784c36ce 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -17,6 +17,18 @@ TXTSRC-y := $(sort $(shell find misc -name '*.txt' -print))
 
 PANDOCSRC-y := $(sort $(shell find designs/ features/ misc/ process/ specs/ \( -name '*.pandoc' -o -name '*.md' \) -print))
 
+# Directory in which the doxygen documentation is created
+# This must be kept in sync with breathe_projects value in conf.py
+DOXYGEN_OUTPUT = doxygen-output
+
+# Doxygen input headers from xen-doxygen/doxy_input.list file
+DOXY_LIST_SOURCES != cat "xen-doxygen/doxy_input.list"
+DOXY_LIST_SOURCES := $(realpath $(addprefix $(XEN_ROOT)/,$(DOXY_LIST_SOURCES)))
+
+DOXY_DEPS := xen.doxyfile \
+			 xen-doxygen/mainpage.md \
+			 xen-doxygen/doxygen_include.h
+
 # Documentation targets
 $(foreach i,$(MAN_SECTIONS), \
   $(eval DOC_MAN$(i) := $(patsubst man/%.$(i),man$(i)/%.$(i), \
@@ -46,8 +58,28 @@ all: build
 build: html txt pdf man-pages figs
 
 .PHONY: sphinx-html
-sphinx-html:
-	sphinx-build -b html . sphinx/html
+sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
+ifneq ($(SPHINXBUILD),no)
+	$(DOXYGEN) xen.doxyfile
+	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
+else
+	@echo "Sphinx is not installed; skipping sphinx-html documentation."
+endif
+
+xen.doxyfile: xen.doxyfile.in xen-doxygen/doxy_input.list
+	@echo "Generating $@"
+	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< \
+		| sed -e "s,@DOXY_OUT@,$(DOXYGEN_OUTPUT),g" > $@.tmp
+	@$(foreach inc,\
+		$(DOXY_LIST_SOURCES),\
+		echo "INPUT += \"$(inc)\"" >> $@.tmp; \
+	)
+	mv $@.tmp $@
+
+xen-doxygen/doxygen_include.h: xen-doxygen/doxygen_include.h.in
+	@echo "Generating $@"
+	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< > $@.tmp
+	@mv $@.tmp $@
 
 .PHONY: html
 html: $(DOC_HTML) html/index.html
@@ -71,7 +103,11 @@ clean: clean-man-pages
 	$(MAKE) -C figs clean
 	rm -rf .word_count *.aux *.dvi *.bbl *.blg *.glo *.idx *~
 	rm -rf *.ilg *.log *.ind *.toc *.bak *.tmp core
-	rm -rf html txt pdf sphinx/html
+	rm -rf html txt pdf sphinx $(DOXYGEN_OUTPUT)
+	rm -f xen.doxyfile
+	rm -f xen.doxyfile.tmp
+	rm -f xen-doxygen/doxygen_include.h
+	rm -f xen-doxygen/doxygen_include.h.tmp
 
 .PHONY: distclean
 distclean: clean
diff --git a/docs/conf.py b/docs/conf.py
index 50e41501db..a48de42331 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -13,13 +13,17 @@
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
 #
-# import os
-# import sys
+import os
+import sys
 # sys.path.insert(0, os.path.abspath('.'))
 
 
 # -- Project information -----------------------------------------------------
 
+if "XEN_ROOT" not in os.environ:
+    sys.exit("$XEN_ROOT environment variable undefined.")
+XEN_ROOT = os.path.abspath(os.environ["XEN_ROOT"])
+
 project = u'Xen'
 copyright = u'2019, The Xen development community'
 author = u'The Xen development community'
@@ -35,6 +39,7 @@ try:
             xen_subver = line.split(u"=")[1].strip()
         elif line.startswith(u"export XEN_EXTRAVERSION"):
             xen_extra = line.split(u"=")[1].split(u"$", 1)[0].strip()
+
 except:
     pass
 finally:
@@ -44,6 +49,15 @@ finally:
     else:
         version = release = u"unknown version"
 
+try:
+    xen_doxygen_output = None
+
+    for line in open(u"Makefile"):
+        if line.startswith(u"DOXYGEN_OUTPUT"):
+            xen_doxygen_output = line.split(u"=")[1].strip()
+except:
+    sys.exit("DOXYGEN_OUTPUT variable undefined.")
+
 # -- General configuration ---------------------------------------------------
 
 # If your documentation needs a minimal Sphinx version, state it here.
@@ -53,7 +67,8 @@ needs_sphinx = '1.4'
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-extensions = []
+# breathe -> extension that integrates doxygen xml output with sphinx
+extensions = ['breathe']
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
@@ -175,6 +190,33 @@ texinfo_documents = [
      'Miscellaneous'),
 ]
 
+# -- Options for Breathe extension -------------------------------------------
+
+breathe_projects = {
+    "Xen": "{}/docs/{}/xml".format(XEN_ROOT, xen_doxygen_output)
+}
+breathe_default_project = "Xen"
+
+breathe_domain_by_extension = {
+    "h": "c",
+    "c": "c",
+}
+breathe_separate_member_pages = True
+breathe_show_enumvalue_initializer = True
+breathe_show_define_initializer = True
+
+# Qualifiers on a function cause Sphinx/Breathe to warn about
+# errors when parsing function declarations.  This is a list
+# of strings that the parser should additionally accept as
+# attributes.
+cpp_id_attributes = [
+    '__syscall', '__deprecated', '__may_alias',
+    '__used', '__unused', '__weak',
+    '__DEPRECATED_MACRO', 'FUNC_NORETURN',
+    '__subsystem',
+]
+c_id_attributes = cpp_id_attributes
+
 
 # -- Options for Epub output -------------------------------------------------
 
diff --git a/docs/configure b/docs/configure
index 569bd4c2ff..0ebf046a79 100755
--- a/docs/configure
+++ b/docs/configure
@@ -588,6 +588,8 @@ ac_unique_file="misc/xen-command-line.pandoc"
 ac_subst_vars='LTLIBOBJS
 LIBOBJS
 PERL
+DOXYGEN
+SPHINXBUILD
 PANDOC
 POD2TEXT
 POD2HTML
@@ -673,6 +675,7 @@ POD2MAN
 POD2HTML
 POD2TEXT
 PANDOC
+DOXYGEN
 PERL'
 
 
@@ -1318,6 +1321,7 @@ Some influential environment variables:
   POD2HTML    Path to pod2html tool
   POD2TEXT    Path to pod2text tool
   PANDOC      Path to pandoc tool
+  DOXYGEN     Path to doxygen tool
   PERL        Path to Perl parser
 
 Use these variables to override the choices made by `configure' or to help
@@ -1800,6 +1804,7 @@ ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.
 
 
 
+
 case "$host_os" in
 *freebsd*) XENSTORED_KVA=/dev/xen/xenstored ;;
 *) XENSTORED_KVA=/proc/xen/xsd_kva ;;
@@ -1812,6 +1817,53 @@ case "$host_os" in
 esac
 
 
+# ===========================================================================
+#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PYTHON_MODULE(modname[, fatal, python])
+#
+# DESCRIPTION
+#
+#   Checks for Python module.
+#
+#   If fatal is non-empty then absence of a module will trigger an error.
+#   The third parameter can either be "python" for Python 2 or "python3" for
+#   Python 3; defaults to Python 3.
+#
+# LICENSE
+#
+#   Copyright (c) 2008 Andrew Collier
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 9
+
+# This is what autoupdate's m4 run will expand.  It fires
+# the warning (with _au_warn_XXX), outputs it into the
+# updated configure.ac (with AC_DIAGNOSE), and then outputs
+# the replacement expansion.
+
+
+# This is an auxiliary macro that is also run when
+# autoupdate runs m4.  It simply calls m4_warning, but
+# we need a wrapper so that each warning is emitted only
+# once.  We break the quoting in m4_warning's argument in
+# order to expand this macro's arguments, not AU_DEFUN's.
+
+
+# Finally, this is the expansion that is picked up by
+# autoconf.  It tells the user to run autoupdate, and
+# then outputs the replacement expansion.  We do not care
+# about autoupdate's warning because that contains
+# information on what to do *after* running autoupdate.
+
+
 
 
 test "x$prefix" = "xNONE" && prefix=$ac_default_prefix
@@ -2232,6 +2284,212 @@ $as_echo "$as_me: WARNING: pandoc is not available so some documentation won't b
 fi
 
 
+# If sphinx is installed, also make sure that the dependencies needed to
+# build the Sphinx documentation are available.
+for ac_prog in sphinx-build
+do
+  # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_prog_SPHINXBUILD+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test -n "$SPHINXBUILD"; then
+  ac_cv_prog_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_prog_SPHINXBUILD="$ac_prog"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+fi
+fi
+SPHINXBUILD=$ac_cv_prog_SPHINXBUILD
+if test -n "$SPHINXBUILD"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
+$as_echo "$SPHINXBUILD" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+  test -n "$SPHINXBUILD" && break
+done
+test -n "$SPHINXBUILD" || SPHINXBUILD="no"
+
+    if test "x$SPHINXBUILD" = xno; then :
+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: sphinx-build is not available so sphinx documentation \
+won't be built" >&5
+$as_echo "$as_me: WARNING: sphinx-build is not available so sphinx documentation \
+won't be built" >&2;}
+else
+
+            # Extract the first word of "sphinx-build", so it can be a program name with args.
+set dummy sphinx-build; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_SPHINXBUILD+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $SPHINXBUILD in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_path_SPHINXBUILD="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  ;;
+esac
+fi
+SPHINXBUILD=$ac_cv_path_SPHINXBUILD
+if test -n "$SPHINXBUILD"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
+$as_echo "$SPHINXBUILD" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+
+
+    # Extract the first word of "doxygen", so it can be a program name with args.
+set dummy doxygen; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_DOXYGEN+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $DOXYGEN in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_DOXYGEN="$DOXYGEN" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_path_DOXYGEN="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  ;;
+esac
+fi
+DOXYGEN=$ac_cv_path_DOXYGEN
+if test -n "$DOXYGEN"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DOXYGEN" >&5
+$as_echo "$DOXYGEN" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+    if ! test -x "$ac_cv_path_DOXYGEN"; then :
+
+        as_fn_error $? "doxygen is needed" "$LINENO" 5
+
+fi
+
+
+    if test -z $PYTHON;
+    then
+        if test -z "";
+        then
+            PYTHON="python3"
+        else
+            PYTHON=""
+        fi
+    fi
+    PYTHON_NAME=`basename $PYTHON`
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: breathe" >&5
+$as_echo_n "checking $PYTHON_NAME module: breathe... " >&6; }
+    $PYTHON -c "import breathe" 2>/dev/null
+    if test $? -eq 0;
+    then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+        eval HAVE_PYMOD_BREATHE=yes
+    else
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+        eval HAVE_PYMOD_BREATHE=no
+        #
+        if test -n "yes"
+        then
+            as_fn_error $? "failed to find required module breathe" "$LINENO" 5
+            exit 1
+        fi
+    fi
+
+
+    if test -z $PYTHON;
+    then
+        if test -z "";
+        then
+            PYTHON="python3"
+        else
+            PYTHON=""
+        fi
+    fi
+    PYTHON_NAME=`basename $PYTHON`
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: sphinx_rtd_theme" >&5
+$as_echo_n "checking $PYTHON_NAME module: sphinx_rtd_theme... " >&6; }
+    $PYTHON -c "import sphinx_rtd_theme" 2>/dev/null
+    if test $? -eq 0;
+    then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+        eval HAVE_PYMOD_SPHINX_RTD_THEME=yes
+    else
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+        eval HAVE_PYMOD_SPHINX_RTD_THEME=no
+        #
+        if test -n "yes"
+        then
+            as_fn_error $? "failed to find required module sphinx_rtd_theme" "$LINENO" 5
+            exit 1
+        fi
+    fi
+
+
+
+fi
+
 
 # Extract the first word of "perl", so it can be a program name with args.
 set dummy perl; ac_word=$2
diff --git a/docs/configure.ac b/docs/configure.ac
index c2e5edd3b3..a2ff55f30a 100644
--- a/docs/configure.ac
+++ b/docs/configure.ac
@@ -20,6 +20,7 @@ m4_include([../m4/docs_tool.m4])
 m4_include([../m4/path_or_fail.m4])
 m4_include([../m4/features.m4])
 m4_include([../m4/paths.m4])
+m4_include([../m4/ax_python_module.m4])
 
 AX_XEN_EXPAND_CONFIG()
 
@@ -29,6 +30,20 @@ AX_DOCS_TOOL_PROG([POD2HTML], [pod2html])
 AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
 AX_DOCS_TOOL_PROG([PANDOC], [pandoc])
 
+# If sphinx is installed, also make sure that the dependencies needed to
+# build the Sphinx documentation are available.
+AC_CHECK_PROGS([SPHINXBUILD], [sphinx-build], [no])
+    AS_IF([test "x$SPHINXBUILD" = xno],
+        [AC_MSG_WARN(sphinx-build is not available so sphinx documentation \
+won't be built)],
+        [
+            AC_PATH_PROG([SPHINXBUILD], [sphinx-build])
+            AX_DOCS_TOOL_REQ_PROG([DOXYGEN], [doxygen])
+            AX_PYTHON_MODULE([breathe],[yes])
+            AX_PYTHON_MODULE([sphinx_rtd_theme], [yes])
+        ]
+    )
+
 AC_ARG_VAR([PERL], [Path to Perl parser])
 AX_PATH_PROG_OR_FAIL([PERL], [perl])
 
diff --git a/docs/xen-doxygen/customdoxygen.css b/docs/xen-doxygen/customdoxygen.css
new file mode 100644
index 0000000000..4735e41cf5
--- /dev/null
+++ b/docs/xen-doxygen/customdoxygen.css
@@ -0,0 +1,36 @@
+/* Custom CSS for Doxygen-generated HTML
+ * Copyright (c) 2015 Intel Corporation
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+code {
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  background-color: #D8D8D8;
+  padding: 0 0.25em 0 0.25em;
+}
+
+pre.fragment {
+  display: block;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  padding: 1rem;
+  word-break: break-all;
+  word-wrap: break-word;
+  white-space: pre;
+  background-color: #D8D8D8;
+}
+
+#projectlogo
+{
+  vertical-align: middle;
+}
+
+#projectname
+{
+  font: 200% Tahoma, Arial,sans-serif;
+  color: #3D578C;
+}
+
+#projectbrief
+{
+  color: #3D578C;
+}
diff --git a/docs/xen-doxygen/doxy-preprocessor.py b/docs/xen-doxygen/doxy-preprocessor.py
new file mode 100755
index 0000000000..496899d8e6
--- /dev/null
+++ b/docs/xen-doxygen/doxy-preprocessor.py
@@ -0,0 +1,110 @@
+#!/usr/bin/python3
+#
+# Copyright (c) 2021, Arm Limited.
+#
+# SPDX-License-Identifier: GPL-2.0
+#
+
+import os, sys, re
+
+
+# Variables that hold the preprocessed header text
+output_text = ""
+header_file_name = ""
+
+# Variables to enumerate the anonymous structs/unions
+anonymous_struct_count = 0
+anonymous_union_count = 0
+
+
+def error(text):
+    sys.stderr.write("{}\n".format(text))
+    sys.exit(1)
+
+
+def write_to_output(text):
+    sys.stdout.write(text)
+
+
+def insert_doxygen_header(text):
+    # Here the strategy is to insert the #include <doxygen_include.h> in the
+    # first line of the header
+    abspath = os.path.dirname(os.path.abspath(__file__))
+    text += "#include \"{}/doxygen_include.h\"\n".format(abspath)
+
+    return text
+
+
+def enumerate_anonymous(match):
+    global anonymous_struct_count
+    global anonymous_union_count
+
+    if "struct" in match.group(1):
+        label = "anonymous_struct_%d" % anonymous_struct_count
+        anonymous_struct_count += 1
+    else:
+        label = "anonymous_union_%d" % anonymous_union_count
+        anonymous_union_count += 1
+
+    return match.group(1) + " " + label + " {"
+
+
+def manage_anonymous_structs_unions(text):
+    # Match anonymous unions/structs with this pattern:
+    # struct/union {
+    #     [...]
+    #
+    # and substitute it in this way:
+    #
+    # struct anonymous_struct_# {
+    #     [...]
+    # or
+    # union anonymous_union_# {
+    #     [...]
+    # where # is a counter starting from zero, different between structs and
+    # unions
+    #
+    # We don't count anonymous union/struct that are part of a typedef because
+    # they don't create any issue for doxygen
+    text = re.sub(
+        "(?<!typedef\s)(struct|union)\s+?\{",
+        enumerate_anonymous,
+        text,
+        flags=re.S
+    )
+
+    return text
+
+
+def main(argv):
+    global output_text
+    global header_file_name
+
+    if len(argv) != 1:
+        error("Script called without arguments!")
+
+    header_file_name = argv[0]
+
+    # Open header file
+    input_header_file = open(header_file_name, 'r')
+    # Read all lines
+    lines = input_header_file.readlines()
+
+    # Inject config.h and some defines into the current header; during
+    # compilation this is done by the -include argument passed to the compiler.
+    output_text = insert_doxygen_header(output_text)
+
+    # Load file content in a variable
+    for line in lines:
+        output_text += "{}".format(line)
+
+    # Try to get rid of any anonymous union/struct
+    output_text = manage_anonymous_structs_unions(output_text)
+
+    # Final stage of the preprocessor, print the output to stdout
+    write_to_output(output_text)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
+    sys.exit(0)
diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/xen-doxygen/doxygen_include.h.in b/docs/xen-doxygen/doxygen_include.h.in
new file mode 100644
index 0000000000..df284f3931
--- /dev/null
+++ b/docs/xen-doxygen/doxygen_include.h.in
@@ -0,0 +1,32 @@
+/*
+ * Doxygen include header
+ * It supplies the xen/include/xen/config.h that is included using the -include
+ * argument of the compiler in Xen Makefile.
+ * Other macros are defined because they are usually provided by the compiler.
+ *
+ * Copyright (C) 2021 ARM Limited
+ *
+ * Author: Luca Fancellu <luca.fancellu@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#include "@XEN_BASE@/xen/include/xen/config.h"
+
+#if defined(CONFIG_X86_64)
+
+#define __x86_64__ 1
+
+#elif defined(CONFIG_ARM_64)
+
+#define __aarch64__ 1
+
+#elif defined(CONFIG_ARM_32)
+
+#define __arm__ 1
+
+#else
+
+#error Architecture not supported/recognized.
+
+#endif
diff --git a/docs/xen-doxygen/footer.html b/docs/xen-doxygen/footer.html
new file mode 100644
index 0000000000..a24bf2b9b4
--- /dev/null
+++ b/docs/xen-doxygen/footer.html
@@ -0,0 +1,21 @@
+<!-- HTML footer for doxygen 1.8.13-->
+<!-- start footer part -->
+<!--BEGIN GENERATE_TREEVIEW-->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    $navpath
+    <li class="footer">$generatedby
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="$relpath^doxygen.png" alt="doxygen"/></a> $doxygenversion </li>
+  </ul>
+</div>
+<!--END GENERATE_TREEVIEW-->
+<!--BEGIN !GENERATE_TREEVIEW-->
+<hr class="footer"/><address class="footer"><small>
+$generatedby &#160;<a href="http://www.doxygen.org/index.html">
+<img class="footer" src="$relpath^doxygen.png" alt="doxygen"/>
+</a> $doxygenversion
+</small></address>
+<!--END !GENERATE_TREEVIEW-->
+</body>
+</html>
diff --git a/docs/xen-doxygen/header.html b/docs/xen-doxygen/header.html
new file mode 100644
index 0000000000..83ac2f1835
--- /dev/null
+++ b/docs/xen-doxygen/header.html
@@ -0,0 +1,56 @@
+<!-- HTML header for doxygen 1.8.13-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen $doxygenversion"/>
+<meta name="viewport" content="width=device-width, initial-scale=1"/>
+<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
+<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
+<link href="$relpath^tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="$relpath^jquery.js"></script>
+<script type="text/javascript" src="$relpath^dynsections.js"></script>
+$treeview
+$search
+$mathjax
+<link href="$relpath^$stylesheet" rel="stylesheet" type="text/css" />
+$extrastylesheet
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+
+<!--BEGIN TITLEAREA-->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <!--BEGIN PROJECT_LOGO-->
+  <td id="projectlogo"><img alt="Logo" src="$relpath^$projectlogo"/></td>
+  <!--END PROJECT_LOGO-->
+  <!--BEGIN PROJECT_NAME-->
+  <td id="projectalign" style="padding-left: 1em;">
+   <div id="projectname">$projectname
+   <!--BEGIN PROJECT_NUMBER-->&#160;<span id="projectnumber">$projectnumber</span><!--END PROJECT_NUMBER-->
+   </div>
+   <!--BEGIN PROJECT_BRIEF--><div id="projectbrief">$projectbrief</div><!--END PROJECT_BRIEF-->
+  </td>
+  <!--END PROJECT_NAME-->
+  <!--BEGIN !PROJECT_NAME-->
+   <!--BEGIN PROJECT_BRIEF-->
+    <td style="padding-left: 0.5em;">
+    <div id="projectbrief">$projectbrief</div>
+    </td>
+   <!--END PROJECT_BRIEF-->
+  <!--END !PROJECT_NAME-->
+  <!--BEGIN DISABLE_INDEX-->
+   <!--BEGIN SEARCHENGINE-->
+   <td>$searchbox</td>
+   <!--END SEARCHENGINE-->
+  <!--END DISABLE_INDEX-->
+ </tr>
+ </tbody>
+</table>
+</div>
+<!--END TITLEAREA-->
+<!-- end header part -->
diff --git a/docs/xen-doxygen/mainpage.md b/docs/xen-doxygen/mainpage.md
new file mode 100644
index 0000000000..ff548b87fc
--- /dev/null
+++ b/docs/xen-doxygen/mainpage.md
@@ -0,0 +1,5 @@
+# API Documentation   {#index}
+
+## Introduction
+
+## Licensing
diff --git a/docs/xen-doxygen/xen_project_logo_165x67.png b/docs/xen-doxygen/xen_project_logo_165x67.png
new file mode 100644
index 0000000000000000000000000000000000000000..7244959d59cdeb9f23c5202160ea45508dfc7265
GIT binary patch
literal 18223
zcmV+NKn=f%P)<h;3K|Lk000e1NJLTq005-`002V>1^@s6{Wir#00004XF*Lt006O$
zeEU(80000WV@Og>004&%004{+008|`004nN004b?008NW002DY000@xb3BE2000U(
zX+uL$P-t&-Z*ypGa3D!TLm+T+Z)Rz1WdHz3$DNjUR8-d%htIutdZEoQ0#b(FyTAa_
zdy`&8VVD_UC<6{NG_fI~0ue<-nj%P0#DLLIBvwSR5EN9f2P6n6F&ITuEN@2Ei>|D^
z_ww@l<E(G(v-i3C?7h!g7XXr{FPE1FO97C|6YzsPoaqsfQFQD8fB_z0fGGe>Rz|vC
zuzLs)$;-`!o*{AqUjza0dRV*yaMRE;fKCVhpQKsoe1Yhg01=zBIT<Vw7l=3|OOP(M
z&x)8Dmn>!&C1$=TK@rP|Ibo3vKKm@PqnO#LJhq6%Ij6Hz*<$V$@wQAMN5qJ)hzm2h
zoGcOF60t^#FqJFfH{#e-4l@G)6iI9sa9D{VHW4w29}?su;^hF~NC{tY+*d5%WDCTX
za!E_i;d2ub1#}&jF5T4HnnCyEWTkKf0>c0%E1Ah>(_PY1)0w;+02c53Su*0<(nUqK
zG_|(0G&D0Z{i;y^b@OjZ+}lNZ8Th$p5Uu}<?XUdO8USF-iE6X+i!H7SfX*!d$ld#5
z(>MTtq^NHl*T1?CO*}7&0ztZsv2j*bmJyf3G7=Z`5B*PvzoD<bXCyxEkMhu6Iq^(k
zihwSz8!Ig(O~|Kbq%&C@y5XOP_#X%Ubsh#moOlkO!xKe>iKdLpOAxi2$L0#SX*@cY
z_n(^h55xYX#km%V()bZjV~l{*bt*u9?FT3d5g^g~#a;iSZ@&02Abxq_DwB(I|L-^b
zXThc7C4-yrInE_0gw7K3GZ**7&k~>k0Z0NWkO#^@9q0f<U<Ry!EpP;Gz#I635D*Dg
z0~SaGseli%Kpxlx3PCa03HE?$PzM@8GiU|JK_@r`&Vx(f8n^*&gZp3<On_%#7Q6-v
z5CmZ%GDLyoAr(jy(ud3-24oMpLB3EB6bZ#b2@nqwLV3_;s2D1Ps-b$Q8TuYN37v<o
zK!ea-XbhT$euv({2uy;huoA2V8^a9P3HE_Q;8kz}yavvN3*a4aCENfXg*)K$@HO~0
zJPJR9=MaDp5gMY37$OYB1@T9ska&cTtVfEF3ZwyPMY@qb<R&tT%ph-37!(CXM;W4Q
zQJ$z!6brQmwH{T1szx0~b)b4tH&J7#S=2`~8Lf!cN86yi&=KeabQZc0U4d>wx1%qj
zZ=)yBuQ3=54Wo^*!gyjLF-e%Um=erBOdIALW)L%unZshS@>qSW9o8Sq#0s#5*edK%
z>{;v(b^`kbN5rY%%y90wC>#%$kE_5P!JWYk;U;klcqzOl-UjcFXXA75rT9jCH~u<)
z0>40zCTJ7v2qA<d!X`o`p_Oov@PP1=NF=Het%-p|E^#BVl6Z`GnK(v#OOhe!kz7d8
zBq3=B=@980=`QIdnM~FqJCdWw0`d-WGx-Af5&4Y-MZ!qJOM)%2L83;YLt;qcxg=gv
zQ_@LtwPdbjh2#mz>yk54cquI@7b&LHdZ`+zlTss6bJ7%PQ)z$cROu4wBhpu-r)01)
zS~6}jY?%U?gEALn#wiFzo#H}aQ8rT=DHkadR18&{>P1bW7E`~Y4p3)hWn`DhhRJ5j
z*2tcg9i<^OEt(fCg;q*CP8+7ZTcWhYX$fb^_9d-LhL+6BEtPYW<H!}swaML<dnZqq
zcau++-zDEE|4;#?pr;V1kfpF+;iAIKQtDFMrL3hzOOG$TrwA+RDF!L7RXnKJuQ;cq
ztmL7Tu2iLTL1{*rrtGMkq+G6iMtNF=qGGSYRVi0FtMZgCOLwBD&@1V^^jTF!RZmr+
zYQ5@!>VlfKTBusSTASKKb%HuWJzl+By+?gkLq)?+BTu76<DMp7lcAZYxmUAKb6!hZ
zD_m=<R;SjKww$(?cCL1d_5&TVj)Tq`od%s-x)@!CZnEw^-5Ywao`qhbUX9*$eOTX8
zpR2!5f6xGJU~RxNXfPNtBpEsxW*W8_jv3L6e2wyrI*pziYZylv?=tQ){%B%hl48<m
za^F<O)Y~-QwA=J|Gd(kwS&i8(bF#U+`3CbY^B2qXmvNTuUv|fWV&P}8)uPAZgQb-v
z-?G(m+DgMJ)~eQOgh6ElFiIGgt<l!b)*Gx(S--Whv=P`GxB1Q1&^Foji0#yJ?d6>1
zjmyXF)a;mc^>(B7bo*HQ1NNg1st!zt28YLv>W*y3CdWx9U8f|cqfXDAO`Q48?auQq
zHZJR2&bcD49<D{M18y>Ip>EY~kKEPV6Wm+eXFV)D)_R=tM0@&p?(!V*Qu1PXHG9o^
zTY0bZ?)4%01p8F`JoeS|<@<K~!G7L;yZs)l&|JY=(diHTz5I9kKMc?gSQGGLASN&%
zuqN<HkZDj}P+u@5I41Z=@aqugkkXL*p*o?$(4H{Ku;{Snu=#M;@UrmH2;+!#5!WIW
zBDs-WQP`-ksHUj7m2NBdtel9ph%SsCUZuS%d)1ZI3ae9ApN^4?VaA+@MaPE69*KR=
z^k+6O=i<ELYU5^EF08$*XKY7yIeVI8$0_4X#@of0#ZM*JCG1X^PIO4DNSxuiaI3j5
zl01{@lID~BlMf|-N(oPCOU0$erk>=<@RE7GY07EYX@lwd>4oW|Yi!o+Su@M`;WuSK
z8LKk71XR(_RKHM1xJ5XYX`fk>`6eqY>qNG6HZQwBM=xi4&Sb88?zd}EYguc1@>KIS
z<&CX#T35dwS|7K*XM_5Nf(;WJJvJWRMA($P>8E^?{IdL4o5MGE7bq2MEEwP7v8AO@
zqL5!WvekBL-8R%V?zVyL=G&{be=K4bT`e{#t|)$A!YaA?jp;X)-+bB;zhj`(vULAW
z%ue3U;av{94wp%n<(7@__S@Z2PA@Mif3+uO&y|X06?J<Fdxd*PD}5`wsx+#0R=uxI
ztiE02T+>#oSi8M;ejj_^(0<4Lt#wLu#dYrva1Y$6_o(k^&}yhSh&h;f@JVA>W8b%o
zZ=0JGnu?n~9O4}sJsfnnx7n(>`H13?(iXTy*fM=I`sj`CT)*pTHEgYKqqP+u1IL8N
zo_-(u{qS+0<2@%BCt82d{Gqm;(q7a7b>wu+b|!X?c13m#p7cK1({0<`{-e>4hfb-U
zsyQuty7Ua;Ou?B?XLHZaol8GAb3Wnxcu!2v{R<HnZuJKC4qWuPc=?k1r3-ydeP=J*
zT|RZi=E}*djH{j3EU$I+TlBa8Wbsq`faO5Pb*t-LH>_`T4=x`(GvqLI{-*2AOSimk
zUAw*F_TX^n@STz9k<mNsJ5zU4?!LH}d2iwV#s}yJMGvJORy<OC)bO+J&uycYqo>DQ
z$NC=!KfXWC8h`dn#xL(D3Z9UkR7|Q&Hcy#Notk!^zVUSB(}`#4&lYA1f0h2V_PNgU
zAAWQEt$#LRcH#y9#i!p(Udq2b^lI6wp1FXzN3T;~FU%Lck$-deE#qz9yYP3D3t8{6
z?<+s(e(3(_^YOu_)K8!O1p}D#{JO;G(*OVf32;bRa{vGf4*&oQ4*`<-1El}}02*{f
zSaefwW^{L9a%BKeVQFr3E>1;MAa*k@H7+qQF!XYv002BXNkl<ZcwX&&2Ur%z_P#e7
z6Qi+fqA~V@E%vU7y^D&10wPTTDJm);MHEC76-DVCL_kmw?7jDbz4s2N*c<Kq-*@37
z#Au5Dn|t;CJm2#^yWj4#-8pmSoS8GTMLrU$3Dg1V0S)qxwSgKyRp2vyrhkMg0%Wkd
zKntJ?&<p4b^aln21A#&LNB-{z@P1FA6VMc>1$+-w06x=a`rA|vr~**(wFP<rWHvJ1
z+fW~4G{)LwjHRuWrIk&K7A;2c+FM}=GH^GbB|rwP43q&r(`WiaDhp7WH3WVE3K+3O
zi4q#qr%!j?xN+0UGiS~ozkBEIovf@Z$;`}>#~E32=hh>6`k4PSh1Z`vdHU@7wd+^*
z?%lU7ARy4EXV0EvkdBI3DMdQ~?D{JKrGd}%nSMu<T$GGI14?&XzI^%No}LTlpE-Rt
z<;|PSY>>im&!4}Ld-qc1+`03zf8TytnYc<qg2QFa>h*H?@DaIi{(_{Yrpb#JFO=|E
zS&WyBYw34aC9jU}(WB>Bq)!HAH&01S-IQv=XXgA&3Y7=gowf(q#gZ8{6ILWFefi?m
zi=3QX0Yl2ehmXK)mt@z@J(7@+D3Oto;^X5hvuDqiY15{Ot*xy<lFGb!^CU1ZP-3EE
zWYwxwvTxr3xpe7@JbLs*NhdoyM{=@r1&n@V`0(LY$dAm~2WSQS2vBwS2KY?>M~Pi0
zyJ{LP>{f?_hJ^ZOJbd&pCtJWoSqd}Vx^-JFT(}^|jvWJ&?UVKE*GpVnoP>mg$btn6
z#MRYRoSd9w)~s2wXwf1G4GmT9uUofHcJACMM~)m(q$;H=r7RgUH%EZnoC60AZ12*g
zi>hm<?n*13#!yM%GyNYTc9XQIX-!i)4t92);db|K>g{Yuu}m=I^Jg!?kdGxJ;}N9f
zLon1mxpL)-!eC@hV)N$Bl9-q$HOYuPMn^}>#*G_g|Ni}Q>eMMoNlB3tCr-%Kt5+pG
zJzbtXd!}^hxw%q6Z$O(iZAz|MwdzQeh5BY=fDPs|WBwl@TD;W(4%JY19GaB4CNc9(
zj=X-IDNmm~lSdhk!IaMxqa_#IL*0-Jb?Ve<*}8SBczJor`0?XKPft$<4<0PNdi4@W
zJL%rNy9^jGKr}Tq#n8}Drc9Y4OO`B=-Me?o_3Jla{5+9YuU<(`4#ec!1SY+E_wLO;
zefn6SOl&C4fbW1(z-Rg&CQ3*e6|}6?t5m6?bNJBllxI(0%Hzk+APv*xe)@fMvCkDg
zfdG@+w{I)bjuJ6EJY2lJy~V}FMXar@MPFZEh7KJny?ghTK7IO1zkdBiU0q#9j2Iy%
zCMF6~y1BW@;>C-VxLdYtQKT&;<=e#WoV@zv@zZCMCQX`w@=={=18_9pGh_ab(zgI5
zq{5KhyY;lVaQ^C@jEtv}mYObCuUu2QXi7yg<;|Nnm9Co1xMIZ$S+;DM!dQO3{^IB`
zJ=Z|rI9o?&RF<xe?kgiBqvvL3X3xNg&kPI<UXB_yI!jN_AZv`VY4)sH9=V~RVM@22
zl$0bJHf&JzQb@&ob|f?AD2z&7Q&AsYXJ>~5hlZe>LjW=+M+QE3<^N;E3jG1-45({q
zYTI49c;i`m+5@?M?WUYPdrofL$m?Ej-MXbPV@ynp0vap{2?^rk?U!w3W&Og`)HD@F
z&FO^;7j6#-2uO&ChzO63ja{^S`SN+d0x)kdbj!G)prDN~dX6|dJ6{LGKD4#5e-#(E
zJcs%w^hZVSgps4D1srN(xBl|QOQ;ZU6f6Dps~lOgYQt)jmyF2)cMchSw#xs9h(-e?
z&Y*T}J6N1Je(uT58@J`mnR9aZ@L@TB{=6a?j~+cLU@Qp^4i+$*BHe-lgR;#`&F+B_
zkDwcl0mIEmPEOX}yLayZ#Os!kk<kgdXFCYIHbC2#FJE?q6#V(@*|WV49z3WC2AqI2
zef|CYx7yp=UqaXXaMh|+xog+16{I68SFThHB1&ka1tz@@!zx3bK79e*_N4^+M?|CC
zg8>>p94`7A_)MQT(Xin#OaH1>f6(8yWpC=GOIM*M9#@8Is4tQ!XpH#!`YQULpP!!u
z2ZiRKJ5B{7?E^zCTC--2{^`@FyMcLHg83Q(wSj5?8U8nvf4vsa0B8mY+#Y!hg>;+_
zD}4iW%?m&V8Ip{z=$o6j$b$zD6dm^Z_3M%f{WedMr-`$Zn=g{3Rn8e8>cw9HpXn1N
zcH7h=y92`m1D700NjWLowr&@6xys-+CDzKsmCE3^gM)+2m@z}_Y#m>Y9z8k*U2iCu
z)Er%MA9UgEAw8QQ9Zp4l`>%if>r2qTaQ>Hw3<@dQ77}#mx^?T^U@&b4)1M0q4a??O
zCk-M>=rd=|$e}}rWbfX6N@F>C<hX3#y8m^H=B*r&3`Yytz-atXY8D2|#Rm8d%2pJ|
z&-9TJ2ccU7LoCfFM{U@!4NSORJUl$a#>Pe_PoAs{?a+EAIK0!P%P%i*epy>vdn*i>
zXhTE85HP48ENu@0hRkepb92YirAt#CC=15?f*Ji40%MlUXU+~MPMk1>L{8|^rOR18
zJ-y6f!-mPAL4#!M*s;ni5od>-ojF4^UuHPFilxQGr~Uf%OGda59UUDn#F?)u6Jcy@
z>}G9kJsD~SJ(EiQod)xn{&Per$?3zs)vMPu3k!*UwPVLlRQ3#6jBfh;wQJyy52wiJ
z=mSW*Go(_fzsmrShV{I>yn3svt1m<so{^vA2h$}OX%~TA2M*jEpsqd?S^TZW@|pf~
zL@7yWHr(EB=CKXyH^Ycop)e(-ViYXvOV_W-g{xQO<oR=wa_XcUIe9{k0z7{o&R)8x
zFhWjFPOi4D?oy=Rv}n<y#hKxs5t$c63%XI)u3cwdzI^#H(>j0U@;mb)SRR&(;D46&
z+~rG3*)QF=ro=sa`68wNm69b(+9K^1#flaC_A^s{e8h^U_jm7>E!xfSobMx>Hg1uH
z3l}PL)n;a9^1}~5$lm??<=*}KayLC)uEW5%1nGJ4>Q%W6L*tK-@${*}nAdLKe$}v1
zqp8T}M=-%3>T&r@LY=R}gb5R(pFMl_hG|@d)z5t2m5-`CJTlzPyLXlHrote)hce&7
z|C1+A=4feYokO~AKriI;1MoHQx%>Xeh)VxYfaVw@t204?q2li!AfBF{io*mQCu-HI
zC2DGFas?Hhp7Btgym+qQ;giSm;PE3c-V+6no;@vq>KgB>Xz$*=cae_<Q2u?4kk61D
z0Pi0X5^^v<s-!)9pybUwRe7oMC|nMf>*>oE%6(o-y`QdF>6<of%4*oK;Sr=uhU|}g
zY6Fy#pADUlj5v-*ukmgT)tXGRpX!pEup&`mM7otxTEb9~jvYJ7_uqdn0|yR#miFMj
zJVNEa%zPu9m8P%6`#@rtOwJ4D)BO4KRr=*Og&C9QUwrWeyY`yZt5+X;@ZiB4`BCH<
znCx|SmXa2!aQcPw%go7^7jIt4Q#P9C&*Wi7hB9Qdc=6&_Ft#=!Z5yZ$oskzMWC`GN
zxBU?k4IDb_4$&B*y~AUUrvwHD%gmWG6{hs_^^+bwdMM5l9ou(G{pzc)b~-ydKL$zm
zyBsC{0%c`o<!ESVa9U6e4Duxlk<Xkg%TF+9Jnr4Qm)GZi0OjQ7%Inv!mD!(j=g!d)
zBp2!K0<57wwgtXJe#P_i{7fGqad5c_D$3AgjMcM{kZ_qc%~|nz<X9jbDY|v*CPRm6
z<o4^=FQ8AKJ~I$DCN?(q1>MT%74#u=^XAPLwQJWNkG!jbIf~)PvBNHoj*f}1UcLIg
z2gY9{&Wm%lhjZn-cI}ep&6~>)RjZ3ygGO7?0Qvw`07Zv{RHl?<PeJvC9!4Ca<p>19
zbE*)fAkm9`3=GV2;6VMK<t0?(tsHCRz5f}aCwcpynzr(?w3s9zAz?COhKtgD)8&j3
z5|*zF)6lwUV`DSU-rl}<<;s=E0Lgpy?8zbnd?;gX+qUgG^5&3?T8R=RhQ-Cj9nZh@
zLBu(6N^nkr(pR%$#fs9qcOMxzKwZ=aX<b4-oPMP4Ot1B>fO>}x9V`tD4CYLlG$|PR
z^J=Io>j-rBt4vKzeKa*Soe^#rK!;+E;gVs?fU)1JhmwIooJFGl0P~_#3-ja3&UX3N
z=$l#?%>g=4Q<b7x@vr<mC^R@T)e#tkUc}bW(9pf`IPkDx!)QG6*|-{j2J0t1?yAx!
z`}gWGaNFcbcCfa+6jw8L-|p`2ij?G#P{)p)=c1y!g@uLHXbnOfGf1KRmoHz=Wmoy3
zj5%k{oWu3%)mwDp#EE<Pm;N>}Z@SIhhoyZDh8P%3%9k&%lsiw-_mukenq)R<(j;ug
zj2WxfuU~)W%9ShksYc`{@u!rUn)+nt&Yc&1eSJ4}?%a7Yo}Ua>jm-Bp1K>ZIsj8)=
zr6bCdcJ=Djbmm9w-o5)W^M4x~Hf%V(Wy_XhXc!)dOU7rtv_YB8{QdnmpE`Bw_RE(q
zl@W+{5uQIA@9-A%^_Aaz^9>j9{gAJpe{xg;;5wI)n#1(&I63Ccim1A7yi`R>jvA$x
zHDJI1&Ev<9*FSpnXc@fpvUp&&cKgAD2VWpF(82dZ2=Q`J;ji-l{%s;dqO!;|oR`m~
zWJkFaAI-jf`zn&2&vEkP$^01q9)hlV50aE~?37>?@J<R0CY)1BwuMf<H$V7aB0dXx
zr|pQV3kf}Y>(;I3$ZYwy|1#aUaU-LD|Nc%$rw)XzqO*TWL`hk_SkYoeqxDR(X1cnI
zOMchQp&(W&nR0}d_7z=S-A>RUE8&@GF;mB)YZzBDdZ0^BfAr{)(tZ9XiTWe;TDs4z
zS+gb!68B8Sij{KMZMSOG3JuIzVc7Q(nSg1qK`E|q2uq2}=lH9Vf8)lD=c-n%s*U$h
z1@A=Z(s86OD(CEfbprj1q@$yA7}EIU-;v_)A{dG<)YR0>Q4Sh)pVSBgC1sszKh$tC
zG%yp-`FV@FIG0R)l3h2^s#UvS3k!?-cvj9uE8P`9>y_(vxa>6$UHkK=PoFB4{C6l9
zznntDMSn&N`})mm1$15GaL<AT3!bCP+K6(@uT`_w6EY+nHB|H_^NkXo$IPru1!Ov^
z9jY#$R{GhqX9enrB6Z2245Run-|U=hB`(wAxr)x8Kc8NuN)>LZP#N!psvr#{i`%zv
zE8Q*Qs=#=Kk(HgLluJeErr+6$ScbxRzD=t8ET8fpWe*7nIfHU^LYY1(FDY7bYCkn?
z9WYAQ2s-2(MUSKd1_#c`kQ@#wS+dkHbVZf%tX~2uX+XxL70cxkTF~JVH*MaOdHe1i
zx&QD%0Ukce_#GZ(Je2eY_Y2Nd;Z$*W?d|{f(o*Hto!fHl#&x;M>CTH6<jmPKTw=k&
z_=u7vOLF<G6%2s~D(T3w+_-g9iFfDTT_ugQ`{@NB^KII+DO*EBBdtuCGDi`Accn^|
zQlP4&K~2diTozTCQ`6JrZt6WHO+rP>;^NLD{x48#xD=yq%T}$t8D5o!3aV#PL6s)&
zJ>moajw~C?e)IM%B@gDOdLM<;QlW79_>4^NHl7y^_6^FSU#wU$TIinyQKVB+Hfhtk
z)AG@JW5A4a6^7*aB$<=?*zn<+)X}=c$H$kzyZ=o$0EYYmD2azw!(Wp+b?ffhzI%7h
zNl2ZPQ>WxOq}2&XpLgNd>C;LWLUqogDh`jTG*X}s9yoeb_8vSaJNE36&D*z2d_qEY
z^A;^mBHy{lUlT|5zWw^epc~G8imvz|!XHdYkt4^C7o?-YiTq<+Lc*JGOPA*O<$S;b
z5>6jSOTY(Ab>Y<c^95!7UD;5kLrDMdv14-M?p^-RZP~gt*9y`4Z9i)C=xu2aA1dKh
z&si9S(<F|bIH8o8&$xZpF4?$kn?%LN<|M9KmCI|#5J#1_DsEw@(p05&6y-Z};eyny
zTh|rFSTza}l<T7>)fio~4(+<G)gL`Z=Fjs~7?Shd)GK@T>?wwZMjUwNFa}-8e)nK_
z*rEW3NI0xv)4fNJbK7?A0`u&ZUHkUS?)?Ye3*Ik`x9{!PvrjqSzI%^s*s@htt=}L?
zYu3t4_t~$?SE#TV`Po4-(f6}W^%^zo($dmiK6vs>wjlp)yLQXYy?OC@WLiA#IdD)x
z%E{BxxJi>kh&vhO{~1sNIPKU7X>t)-`1(zovsn(tD_lm^k!fy4UfcPs_^&-`)EdO&
z4jzM?oSm<vXFO5DsN((+Ht*Oe>o#qcgjK6^hiVLa`0cmfayoP?u(y4Oj;VYeRaoZz
zHh7)oRAJA-LvrZoF=^De@nWRG=lUe{N-`LS44QZ9(0$9OQAP?=a$1qz<Qx*>Qdl#y
zaoGq%@1gfT6dss{PX)9{F2i-#H!@D!unjMH%QjiNdFuyZ&897~X5(h%xN5^DiBC+D
z*cFMQJ6iuC^5Bw(v5=6h(HJ;<GGK<=tSdC(lh<uj!YgSY{^|`#14mVwYd39`Qx~tu
z`YqdEB3>de0pPlx-T=oUx%<XwAoA#$bJ^=QZ&T8E8_Xkl{YK?{!?qo=X~!-Zp`$~o
zOC8(9CouTQ-Xkf>{ld>x7zNJ=EKgo7A(2rzU?Qpk{(u#pPZ#fW9L}R$X1PCSS>BaS
zVJK+=49hZ=D_?#N8d;rBg(*p#1!&x{efKSTU`mfUbCo5r1SKWM`NoYKmxC}Il;zOE
zd%FNUGz}jP7LNc{dk2NQiC?uwmL;wFV8pIWmS`~JQZS=mXhcrycI`O;&XzwMc~P>_
z-L8JK7A+R<ICM0Z`Nbq8DRFs>AtNIVRa*SNeDzv6ef0)6Ovr|D^YqfCOBt#$L=|xD
zA@AMFcp^*USG*UI%i<*h;CURFYRE9neSH3srAoOjO<0-ByqIU<XDSR;nON4y_~p`f
zz`$FGKO0Qc56{*Js0naT-CBRl*gJpaeX5=r`AcwAOeW$^MH<zCPon!KvGp`)+xq9V
zx;h3j8-@<o+xYwY(`iB(5*j<!gxxm>ifO^Ux3%F#@_zk_6)R3&vu4fRhgpIbogiVc
z@gI!vWy^6~E-tg@Wqx0!%6_Cj9g?GWer3^Fn1)(cQ_Jxe_oSSX;H9w=8WSfWOP49f
z!fC0l^E@Pa8PZ-Mu}N!W1te?I#;uYJ$-*O%uzr(7tw@rfs2K4JUm^h!OWz6qh$!(1
zi4ecAB}(2s`t;q2@));l(>8(WheRV!pl~?~gZEgjV3cQ3U`P(6eiY)9AvyI*Ba*5S
zhu(Db$C}<qgw%akIw~m7sH7P$K}%x9YO?JE#5F}49IpEKnUY3JzK&SCWs45Wb#?Nr
z@1UR{W$g>4W&i&Dl??<ChFub!C4M)x#)k!wIT}N<&b)f{YR2OnkuX$BU}TI0E{Xjh
z1Vw>?mL;IF<8n)U^9}VY2UHq|H~4~CVKlPnZ>j6-KIchf{7NN1e=y5C^ToA*$Y|y0
z8y+RzA>lxT_=2GrCm=trUsx0vF-kf235^8PMJi$52JbQcqQFq`3JAfqNNLr!{aTb~
z!cff-CpYXl@cy#omq)pebLM9?#qJ5>+Toe%0R^)}IC6e_l=W1{^yTX|y+1ty_xOwi
z%h;n&KN^BVkOrrcKYjv376mBXsx)fYJV0}}t~feQld!NbWw|U@3=SMPP(jtI)l3nN
zFNfWIfwaWK@|=v(aq{HJu`gb{$UJiXI$AB6DPBQn%<>OHSnLW}w_~4VWQ$C&wZDTr
z%m7Nr!WG7qkrRTJEXxgxS)s%WiGF{cED!JdM?`~}!o<@zP!{+FgQ>#sKU}<k1^&Sb
zc+P9@!s4JjFrKGRpv+zD4`z<eEmyt*RSP?7JIB;DJNBddq~v;cd3ZhV{j5;RGkExj
z%ZRIuXX03=>iOt2!bNXZUjAXPmnJ~({FCS7F=#2IFY>PTWA!-1?SOPBP(FSFLly%#
zrdIZcsx|C}Ym5>r%LxiIMny$Qe0;pJFpG=B3=PN7AgBhAi4|!HsYM2*XU(9<$jBo|
zOSbG!k?^=giNecMVQJ!;LiqK1-o3Z){}!>y>*e;7Ou6$kQ-*43(-*QGDv2WoN`;{e
zKb9?9&Lw8$x`Mp^DBb9lNHb}@jJ2Hbs!qN7x9T=fyHHQ9;aQ@7A-w&+@b$vsmGcH_
zJU`l`S&Nk@78|d#PgvBOgbmvY%JgpJzXM>&@+4{6yybet{RPj(*&u~*&PHGVr{*m~
zmaW==@~>6my*s^MpFINUc?E`1pLE4LYJhY;ovB9fQMts##LC5r7E>QOSSL$=w2?$c
zL@GO7aFYRUNX5N>X3UtGhTvQ-OE)g|*JRE*jT<+%IC}J`GJ>42Zkt5L!6?CtSq5f`
zNnEQi0MB_O<M0}fh5xI<ybUD`?yuUso5{<LLnpGDwQTK)eCy&JE3RZ<(3+LM``&|j
zM#j^y$xE+rK84d?zGjmoZrCAOx}#H(PB1V9PzK%eL07to0I$>Y#{MsW;W(W~3%*bF
z8a4gnR&A6xFp(<%cR^LY=!7*A616-Bmi9b6PZK;B->pJ8=jcdBjS)Hr_MSMe#8JJQ
zcY)8nEO`SA!_{JK>qvca9MYgO^LuT9kB(5+<sLnH#KGRyy?PG5J3>oWSq8xc|6J$I
zHQgE-8ZvdNJ%@!jRKn>q>L)FAbaWP_r>DQTl=?(MqtRucy9kR-1al<Ah)9%(xRuIz
z1Q?a)s{g#laCz5M_h@}4{|3Cqusl!Nv{PYBbe&JimMd?9{5hXn6ct$=aPkXZn&Tf4
z14CwcK^#?{zamNES8rA@+Sv3a(y&GvtpQr`1l`8C-OFHgb@jPinu})-`y(Js=!g*`
zd^Jaov_?Ey$vt}aAGB@Nmc74uhPR<A504?yE5vS^>tn>7h%(ShF3{U?&Yo<@1RMKX
zTMry7c&<NAGf7FapZd_#h^v7#J`H_RMX2X*ftnpUbZF7G^)I_dXz9r;H+O|8xj_{r
zrk0i#odl$BpZ;?Zg)VCY=FFM1iQX$~p*ML2P|u2h;vWU&Iy&zd3WFgW1_l1}TmjA(
z2S+O5RdEXL<%chfg1AbWOJ&9SZF2vW$aouvD=Y(04lkTTP!q;k+dQ4?9Uz{5AxhfJ
zM^&DJw1BYKm2&*@J#qC|_z3ZA0M0~FC#Hp49~kWB=C)C30e^wwAbFX-fyp@PZtO|a
z4NT0htlN1IMo`{+Q04s_;5y5<I5<icLN6UMeB@Qcr9N2+D6o>z;5g(p)_tMxOETM^
zrOEq?f+CP^gw(22kLm;s!%st>R1w~5DWI08r)T?mbs8<y($tr6<18R0mnj=naqk{l
z-*f|;GQ~E%RH;%ft5>hS$c_Je!eV9Sd>?UJ;3u;e1<IVoK{y5|$2s0Wq5r(ka18G<
zcM-5yT<3W!*BJIU@d_h9uIjjQ%RbqB;H2~$q(S$t_UP`Y=jt?Q)a2MScTbr<XQ9mS
zK>G8L{(Ntl>6uqP_l5rA?iHZqvwVG?)J3Dai0AS{eEnX%dO2{#<e@`{ZqemEC^}II
zTR3mVpS92r@JE`yO8J?Wwf*!bUw>VKT|XtI@gzr=CthGS-e>;r1>S`#%IW6mD-E0c
z#M!CtV4%YKB$=~jC8#I!eM4myo<sFa?*s2MUq?5OEW~w#KFJ>7)AOlB9sF?k@ZnCi
zYSq#lqOS8?M_W%B7UIrn1n0tk`Q;aB)~p#lSe|W8xg^u(ERv}+=ZT~H0_EIkwwD5)
zzx|&_`<e5_&SkFHx_HPG=h<R2eU8`?xaR1-uppc&eqpH6BAoHUHy%1Gi^F1b8a8RB
z@@1qh7uKR}hnLn)Zb;u9>CYD18BE_p0rR$V1(f_2NND^zd7-f8uf+e6Nn6BazHdRk
z3`;o70^B^s-fg~2MLJVZ_Q}Y{*lNnlij}{MLH^vdwMB_<zM1AQ%QM#=@8<0?{0_Kh
zH^T$rJaWG%TGSJIdQ&h^VSSQ@WUDqEx@>Us3KAQHC*GMB!Sr~X?jc%wV>s(&i8MIp
z{pskFDneaUc>MTrYdbr;kzG3V-mjx=Aof!olzk@IjdQxOXRlsz<>5nF9JN}k>}LUP
zGQn}SSWlZP6P)JAL}Hpp9`gSy_YlT%s+*YGx{A4ti<nHBCdT8ZN~312o^|fg@0GcY
ztBiMWD~O|tR~Q9p;>h&u+!o84gO|j@&h>fevgPRjVF1jmU%!4%UA4wyU@=9;T04sA
zWM>&?GadQPz%d_FW}qxCO8HfJsnS-Rk9Tla?pxZ;MA>IDAH;PLvnkWX#M((rCOV0}
zxlL}Hj$PBf_@XGMI}Opzvg>YNuVK?TzMpXjtDtb+Z@~;@F`eWr!*s_yMO@Cm*8mFZ
zlXRBp(`(T1OP2OCmHhua&FNx>=SJB$RmoW|cK@Fqfho%Zjgyj+hBs^0%3W7`Ozvn{
z;XXb-Vr69|4({%<>w2b4a`O>`@s46J!AXoK%@E_sGsI}pbmcf^vWs$5uIJw~oH$MN
z5q`9ly^J!q6J0YK89LfLr^=5tFCyEd;$MBWXYeS~tkG5uO4zs4D-6{=p7UrubFqZ3
zKPm%98q<oNYi@3Ng0oLGYu1#KrAkYiF1=)!frX41V=W_1CX0^g6wxsQ#@UGOI9t)P
zn5sO7k+pLHl=7l1dMqo#X(P@^lSxQpqG%e8ml4Jjr1ub=%=%4Q+$;8F@eRm#HZTko
z&n}YVr5)P-+<UJ!;%Ffb<NYxvDCxKPxfiESM=%YbK%d09;wU^o%kaU7vDSYv&51G!
z&sXKgS}PEj`sAm(qcNEB$IF*5_Z>HG+$31xm-_S`C@osH5zm;l;=S^)j2b^fG{@S=
z2(zi8ZSEl2;~Zq<IQzVFb4TS^@E`ZIOzo8L!^cb(4a12tM1Q<=9XRq$snTWXu)wK8
zx`;V^|6}bF!;B}1<`^)5Szi3#rdt@l-s5rNtN@wrzfyi`_sdPrJZWodi@m+QvYSVn
zHf`j~;$KOPdX43$wq2!5f32K;BgW<q(j6xQfkC?FGFZ=2hKwFBLks{M)kj$Z7Pyxe
zroXnCbm}woO`}$w9{*6Y?zQ4ym)MOmQ!=^$x~TB>0Ig+CchZ<>UZ+uup!!YQoUYTT
z`K{Uwo2C5`5aw=!W^K<^`0fX9#N_~Yfj&vXe=Zp7-l$c_O?4Z$yofk=-%jIg)NRz_
zR>Kw@jw7BK^li=zeUfW*6arIL;+~MuB_~w*{)gn6YOQiZc3cvB?|2z*YAYIGn&BpP
zq6KF76<|IbHNi{-|7)5~m0@7Up~hfFFq^vm1R11nB`rJmc~Z1!u>@o_4qY89vpyJ;
z^XYEQI`()l)MygYw=GC3FTM9e6ODuUIf5z8oEJ%@s?}xW$dMAYBuZJu$Tc}_+qRS1
zHS0;SqF;0Rj?-=XzA05YrF`Wo=YFi)=vFPYW~uo|%SURB`c3ZO{sk}!!_$+U`l1)W
zu}C`0_C?uS0i0RlrdV`B;dEzlJP+48@l!7RFaXsG?!r+RGvC5AfU{KHVUSSwE6^uN
z_|K_ZuE*zUPYyY;F}$6|+u*n<8$5@)*maZ9IpoKoE_!-#*o&hUAHSZ-LZUKTjvhVQ
zX_&G3`Gix?#d204m<)^v1{g_6W$6S=%LmU@$h*cc3`^!34#v~~W2!?6)oap<Q-nc)
z4)WpPFt=;rIv=jq8-V&;+GoVrOy)HLjP`4qh4WBBm49A-de*KoKW?wm7zPg=rc94=
zeAURvNQP^S5cPqYS)Dp{-mIaaF}G#QmS*35_Z=CA1LUkcGEU(NWu<em6&b(;l12+<
z>;rTJXoS=PI0Z<_NGjaba>k@O(jQo@S~aUGRjSwnj)h=fwrp7r%Ig7r!N8P^Edb88
zd=Kz_e35eMWDy0YPbq1FxJZw>W@Dfwzy|S0_z9q!8W*y$>sAX33yVofNx6FT=+SFS
zVq%UpP*dxSM)39N^XH44ICEOb@5qT`MGhZJDRTHkN|B={Pbk-U|J>z^Mb2Njtbpfd
zE}SoN^4uAvd<Ag;p7R>-F&>Y|pYMkajg5mtcLc{wf4BPccZ34c%BxzdPMxuSF)>%R
zTz(}J-Mqzc5_CD6nKH(9mU1+j;;J0q2Co}J*E4`#rVqVL4|=37bT$n`tL*Qq)!+aw
z$2SL|Ae@O&_U&Pz0IqH3dgd`zs@FQAHy#GWMCf+ZA>WZtVW`T&vQ2XLm#7`*(Cv;D
zOX$E*LBz;#tQZ?v$iM-^PFq`ByDnO^sE?_sDJzW5uG9f}PNpbaq5R(rpvxG`%1yY)
zgk%hs^}QPs;5w&1JD}n(K7aoFH8<mW8xJ2oe1ZEn%+1Z&K=~n5I+l<bNnhMbO)Wy)
zPD?9tJ3U>gRDgN15*dzbx&I6dM`x8{s8=q57_xHh+WRNYof9yN>^X2C1NR0);+IK#
z^r*-k<fG*G7I>ZaAH4-$yL<osJL9m-yq}&Q4*9U5)|oM5hU>zG3zvI)d&j!FyK|Sj
zf(3TJBiulP{TD5~1|2(gv^#U=Oxor%sp2><Ky2K+WU`B=SUcxgzZ0gxLghKYYbq$$
zc;CrG#yiax3s~CYU}+B@ZThBknQ}Xjwml?HSLDU+Ns$Ih<{+K}WpJMZ@~*%18Fb&u
z(H-gKS;KGV^}9R~j{m2itiJIZW%c1Jf;6OkyAEP#XeI{wmd{L0&B9l$S~cwD%a=c7
zWMq)J*dW;DlF8mx(Y#J3ATzLhf83(;2@FvFn{U1uee~F|XK5J^l#RF^a?>r~!IQ_5
zn(<KXrl&zm%A@%B_+yCHjuH!hsXo0InVFdiLqNTH=Y17;U&TBuH|V<#9Xd?8$#qJ%
zZz&rz%$qllyUh*6{j!fAKQ8j-&6{_oMHEi2Fwp4U8J@_=$tm*m=~E?t)N7dv6)Frz
zV^qFp!_CIR!^5{Be@gw|>$>?boBBpu6BCnARM@NS$FGWqUxaW>%*E4BrqA<{X&#Hk
z*~43$=Xxvu-v!=d7>+mE&+?T1L$#h4EA}Ow3#Q<ub3mBV=A9xN*I4;<>gbP?CF3SL
zX8u`zDk$NA8S{K)j&HaexR5Gq_nZ{9#y?5FzQeP%N9yi`O5$++`gQKP{=0qF|6PRg
zv!T_fSFheIyvM9JS((ZzOEmJ_Lx&FK9653%hhI<QO3xRs@)oqEr>AE%Y0|_F@i_R;
zvQc+z#B>18F0rel{>MdmR2NhYghGTXNV&@gzv@fpYlf=~CRFxDBID$jsalO1HOxni
z8nu7u(4iN#w6sn)Yu3z%@o=9#78#e!!Tgv8m%K1N4sU%|I6Z>QhN80pQp#}_jJ+HM
z3RfFYU*-4Rxh#e~FI9)OFgOFisLEPpeqyG3_wK5%cW|`;pCym-3(Ps}K2}rgcgL;U
zni~)mC*Gk;Wl>P1EbtGN`M$w2&nHOc;mGqp0>gML4wPA5zS6d17cMd2a0y-Tbtnb#
zaD36s?{~z92H4&30gda_Z;&!)ae&PA4SHu@Z$ni^mf1TrT0&x1icfg7RH#_#KDzPH
zy?gie!u$9U_{UcBq3o=Lmiqeo>-h4yZX69E7fg4%Ql&~qDp#(2pjNF~7kqqtUh*E7
zzCsG|V^&Fs*LBR8G2_>*TXz5w{V4QdR+{d#Y12ZFA3uH(|Ci%`9lS$Ua*fiZOZRtj
za*Eixb?aGfh5*LDla!QnOixdbIx*KsmCk>sEK@z`((ZwQfz125gM$NoQVqd0RAI;v
zbVY2Vt*z~nkdSbMJ8+yofBxQ<En7}Q#h8bCG>#PJef|2iQdjuTay0JRwW~#NaPU?%
zx@(s&UrvY4dSmI*rMrIm=_eXloUU!@>+8F2|Ni|2-2<LIH!d#jHu5@+uADA@+^Z>%
zP^jXKnl^1UXQB78$hZ{}xP*?ak>aiR2L{OkKl&KvVWIz@!z&<I+~&>~=%j+~g3Bl}
z&CJX~@glh_f*<wd+80{dd`PxPG9X(Dr38=l5SA@Y7YNZ9F*4&_`Me8-%ego(L;@nB
zWXbYGnYY-79|myRxpQYDypMm>AtAql`#^8Kz?aYEz5Dj<%fx$LhU;AaLwB}$sK|3%
z6VHtfC`k<r4Csny6c7+lu>U3)Edv#%Y~=wxlnya$;Cy$TEG#ViE?&Htt^@=X-YkI|
z7l28qE^&s7j6@x`y=sRJ=6L}6B(9BsK1l<!8XAmNVq)TP+(S~iiX>B|$qV!6&*$hv
zD+m^<G?b1N%atqFZ^42E8|gQjAMQ8hK#%am(9n?E&X_@6DA@Hx6^75x=R!Bm2Esbe
zBV<5}s6nO5l}%f=ZnL~`lcsyr8a6s!Urp@{eFO8*;9}kS^{>{cSMM&E?QyNzbzaq|
zRVx>ew}Eidu3bB2NQR$G=4X=Wi;0SPk(`{&Zk!u0DIccK2c0S2#t<+jEmNoB#fz^%
z{+IqJ&-VdwGEx4tZasQ#)*C%qFDfdE7V}@y3qO=l$&|vQQR&Lgu-xzq@8CYJ*&^@O
zxZf1#hLC2;=3U$X0Tq^`rl!WF4-R|w?77eDB(58xH035;+{O^yUj)J!LF(+{ebx4c
z+&qCgBAHvYT@EYW%F2pESPhXcrQpyF8#X9AbWnAG6;976EucZurcK8{!rdp6a5IK0
zSFZ5;NZCxAd61wx=5*=Og`0b+RE$B8eAn{t$(=iQlyY!)5`H>{arp&0dwcu!RjO2(
z&yU8kY%CYkWEhs29|eOXW@DqOkVhymsUnzw4g{2B#sG&(3i}mWRjXEQT8kDf=JxH|
zH?CK&UdI|YYI3`GuR+;^25QKlL4(D}$XFa59hJ2_bPC?FV}~3(cu-caUafq-jz@mp
zWc&8*a^%R-H@J5d*H@#^9G_#pDkfF;_xJZh7q)NBnl<T=968*e3X(+ePNFjiEoujQ
z2W1-;y3_G<DRe&W-o1x(@77Ct^yn{5o3?z4#&M!&&z`~k`}a41w5CN()!-jv%KWse
zHE7Vl?Z&NJS(FE?1S2CO8WbFQqLk~^xN&1IRA>gm$kprD`A(?j%tPgQU$}fZD-{*`
z01TFnisOzS=$_xKSh0fskPFSmjoV5_xPAAo!T`{dU!Xp2v~Jz{s*R1!<LjtoZkv4{
zOt5R$uKQ@z9M`N@v0}w3r_Y{!{qWHvg?Xn;nZl6}e)U%)A~NzU)4Y?K%6fYZ=DSp;
zOqs)|>$|t^+{w*Aqj(O9k9WNn;eSTM>gg8{ph$3*0}?wIb@8Ze+qPGJ{`uz{kkT)>
zZ8uaH8gO>it5<i$`@e+e&3TLlaTob`dV0Rn)YQBKwI!u)-MSo-qYj-%c#*U~X?b#5
zi(_E*0hLeTZ!mW3*lsZ6wa`*srca-~Y4~u>%T^Xs-#9zDiSsm9Wlawm@AT=@%F9ek
zOH*3;rAwEUk3aFdJ76^BJKpQpuU9^0$ImMh{1_;~uiWy;dt_vKC(-+z>xAfN%&<(0
zpI+X!ZJPuI1<O>ssWRTm8kj6r7Ph%)DEH7^ZvwMULqiz=iQOFKXE#RO>K~*0^qUMA
zI53f$C0xKeAX7#}MBGDnej+F+=n$U$0=lfst$2oHyoYso);|6E9Y#C{95?Mda)e*0
zke&PX$-2#(bARd9?KaX)#{C7QN|o|WTD$humYq9=m}Fx^M+@qr+&p0pkZ5af|717f
z?}0$redthD{RRy<*4h>6MDIO(SoWu+$N`{DyLMbZWH4mdu;pYfUfZ~Rdls1S1g=xU
z1_GO0-QAxaI&ne{9X~Gf7B0MoYsQ^AclNywX50+Hvu)3w+-^O3(AAC~Rd5G9P~Na6
zO`B5xr8^*dKHb`N>TG1*e7?Ph4oTBy&A5dOXRqiF+X;+L1NT=#B2<QH(b4)P(b9S+
zz;JaS-aA2=wSYuSw6wH5Gu~?IGvCD#;^Q5Ht|~+l6IaTmi<cAyUa>OKg;8XaL9boA
zu6#O=l9UXrVo-hvl#IylE)pt9O9p4Rz^;HyPO(o3!TY|xe&XilE^e-K08g>Ab<5Nq
zIqEW)Fg`judJN*Xg7VL)zrQu4^Pz<FsS0%&I$Uet=AC<GdGZ?BxO0zOyZb=s0(a)h
z4M|wDUZSA8hb)Pf!0;ux>KemXp{{_1p^53a9S06ed{W-h;QoWuInh81nFfO)(%pOF
z{1u5!TqWU4<FYD$_uV>NAA@|m0NnF(QqMkpFRom-QC6&8CvhuRW&c>C26eiT+Ipi?
z4jezFTt~i{rM~@^3}NLm+vDz`6K7=4(c^OA!iCJx(9qOn%a+}MFt`R|<<5cQr(eh8
zIpUMn%H*jIHxS2loaMv|2Tq)pWr@k+;pO!L*Mia0a&m!|d<W>A+z_HC^k@$7)6h{L
zrZqC<$mt7`xOSr~iCvyuu3Y&gxIP&7Dex!|OW^;96W(tP=m#yJ$ZA2F`MJ2b92sM5
zmNwJHD=R2qse}YAk?^o3vSssDxpDo5qU@6?sUK0vrtU%sLQv6VcS4<mM>3r%Jddh#
z>MhiFjzR)OL`2F0&xJC7-Xigw?<;d>`{$aOT0b*08+Qg0DHzgLGc`4}3Cc+&{y#=|
zv7GZBq(9c|O7dnjn$SpD96~2!7>z-pvM3-FhFA~`FMqL|Z2OXHc@UlqNwvjf!j!4^
zLSvT;R~}BZwO1+|P)E8P^3k<%bWTm)x>LAZ*W7wCrKAfa3<vo+E5>Kl9jT{(jmwaI
zLL<b*!}B?q#R1nXr#LvLEd?_NM#YNNBwNOH`l4u2huK~}&$&`GXz4NuiCM1P=Sogx
zHD@?SV8WC#^Voa9K$drr`$F&MT;CKPmmnG=w8^Me$e%qd$9`$?a|WrR>KylnTTgYo
z8<V_Nf{~A{({y^)+o0@>!(Lhu8o2iVN2qXqKd~{SS1(w#PUxC8TU%RSH5qI1c)|RT
zoUo8(5*iHaH!@lh(drK#JOriorov!k%s(PBEv-Lp$45rx=j_(6-yk6&q5K-IEb{Ub
z??re~USVQA!SNOJ+1t=}wn1{c9y@ldKcur7lKE|S@E;(&WLowHGv_aQ#Fd1;s8oAb
z_iP=5F)v0LnLO3eH-6lIutsV<wT9HS!h!LSDqRqd#*ew{yoHZLVpf15Vx{lE!Stx-
z8f^~Qa0R2D^Xz#_x``XN%VfvtG@d3Reho+@O6b}pOO<k5wQbKUzldn@$9o^5Ir0M2
z{klYn>GOSqUPUfX2IH*|^<kRy%(6iEnG5_w-ULT4m&oNwVq!J%*+@NuG;IUp+bHkt
zk$Q%A)HO6yhiU7kqwLqdt5VgkM2QmKi-N;5BjS@}DHwCaD1FtcOWGx>X+$;<G9;I%
zR6zO0dMx&P9u}7<kt>q9CW>)%aIZ4T|Nqa7ZxbrN9;8`2FwO|*iwhPnUcAS`!s5nw
zt0^zMy&`fWLYGTK*fLof6)(|C`I+S;=!ARa*s<es?dml}_hffW>y4Hex8CEYnqh^?
z+BNGSaU=QhQ3-&Q3Gk1QK>sKS^otgUsqSw^k2bhBb?VgpFowL){p%o|wkS^>;2#?t
ze;?(y^y<}X)SMl$JSm&2I74DrNT*-A(VcD;zz+>@JdR_C<4_meWPr079J8(WeU++i
zp)o76!sAy$0<DxzKX=`O>vZ^_qX!4HN7*{NJfJi?b0bZjym*nLuCC5ir5w%ZQm0NG
z8yHNtqgJez;AoCBM`o4!wlpQ}i0U<K`G6U6maST^FjD(YU6hSji+}aiG*7>v*UWSG
z@w3vi-++6#PD_|00nuQxHKwMfbVi8<_-#v_ufP7<ZJtlSYc89<55xSx$urOJEc6>-
zm)(P7q)@-Mlq^}2Lw|g>>XpCy-Wd%ob0r$n@yoXa<z}OtsHUdYwN|ZK%K9q&|MwD@
zfCJKvAk{iUQt890_FuSg;Snn<t29%SaWCxcre*mo3dxO*N|vQd5+!D70vL2T7&T5|
z)R<^8EEzXeqL##>dyWP(E(M|`EF=a@9S>%Vkr~tHW}BOhe`RE7lnx_?pK1<(@nir-
z?g-U@U&iIk$A5+JVmY_lvUArS>-;0)WUhCR%=Zq;tx>Z!H>Dbj+dTm~b0~Tiz{=oM
zCdUw)x9|LOvUl(jna6lO!MQbS*QHyWYQ%uUO#M6m(*5XcFMpZm6D%C`gN~jF<L?|S
z^V3jQo^!%uu6Ll!oVzHyO@~hGYR3Wto40I}5EQjSn7&s)L~fmW4OHIo=Hn*VJn~o+
zAiGj7P*uwD_xHc)<m7b1%ggH`1cs8Pp^3%mFN+t~f@I@Z;@IIMN8RyQ8~~Lh5~UFC
z<Czf|8F>PR=VgwOhD0ZbowM873Kc7Hw4rhBy7hhL`G(~BghmVJur6J?^Z`8v!ucXR
zF8Pk@yv1j}EnSAsND2GT5b~=5G=c8-%Yg$2j8?8(xiBOoWFsv03lk<xxNmGc=J`}x
zhio^O**WNbbC*UYNL+NX#Kk6qQS(5?jm6Q|I~=;ASMD@Nmu!o1Rxgc>jqigI&(EDZ
zcOxwEg<wYGbLY<e0-cn)Bi+RQtArOzMZNc6P2I~jGiJ*q$C+X>-tINJF3x7rb&Se9
zbuKa@r&(2wC}heW1BQ+`&v=ua+{9#p?JLA_MqK(pQs1K68i(xMwDgRhO>}S(+v&5#
z(cMd?&&!)$wRfE-Q=DgsrHxZ|{f13a5Y`Toj#GZ>y#@?94pFHrL$R`Re)IL$CESrd
zcY_;Pvv$3W7B)_~)($h3r4tUbJjHR=0%f_#B$Q>6(=5>%J?32T;$P82O8KI3`3jX{
z%qBVHaJdPWT;xsf`iS$K#WEFn*dxA?x%IWuWy(@lZeF!o^+jBk!t^FP&lX!(4<!$6
zLlauyT-*N7FJ=Ri@LSzY(em2EqU(niWCH0qFFZUvY5x5A$DEy=Z$kfkXkudW%+S#A
zrGbILYfB5uH(<Qi`uh50z-Q*><_~OaY;HhTJhouLf~2spFpq?U1QUL|{`T$L9UvLW
zjK5Az{g(+ZmQ!wh`VJd)edwsMZ-$IA$?VcwokJ*kxJOTzcj`|071FcM5S=TGH&o9I
z@dnc(r{gf)*i@q;oa0t6Rl00+o6bER4AC8%r8{nlj5fEE(UwzXu=beO?Ys85U#46I
zcE22op*7wN7^i7um2Ny~x)_XiklsUeXykDHcU>?h=d~@X)vUX}TYrt`T85Un+GDLn
z*L1RsFdm=Xd+^A|4V$&$Yzy7&=t)%$(o&fM{=Qm`9l!J)@(gL`j><2Gj@gu)K0|e$
zG-=hIT|RwITO*H##fp7t*12c@JHzzNb4HGxDB30ybMbzj)UMw+3}M<J-T&+gOE3;~
zye44KwpXrP>A7pyu9553t+ND!%|O?)AU;0cFE%z7{2IO#T|h*1bhLj=OpIq-T-<cr
zw_3Y)t<I4nM|z{HYK?R_O-P2P;{Q=m{)=q}&3-&2lphd)|C4~WxJL>3&fzFrrxa{f
z?91X-h~o=z+duknva(cj*914CrIQ0c88{o^lgd@9a;S3E>c`8JuSETmdLNf$=z$U1
z7cE+>ZS#&j9CXIdI5;gZDR-#RWEld~ZQ7ci=p2fnV+^-zr1J!wivvo2TV`9uDnA~r
zROQD*B}#tF;U=!`<e)$I>tq8_R0*U{W;X(SN|gF`3&J0(_<gmMZ@w+VRhZOc`P|(Q
zmKz_it~vuwNOM)$3Y8C_VKLkSKoe=vkW+R!`L7U|sV2|>XbR9<hlWXeFl@)WckliT
zowQSCW@dX_YXh_ZC=I!{dUb%3=%XdPSdQbei>3oGE0N$iuf2B}0`KLAp~JH(9?z9{
z=F=AzF?050ICl5+5aI+!J1`TyYseU!OCOD{o-SzZ0L7P}e5LOVM@^XadeWRA(Kfdi
zP17kdP}l6u*WZ-n_$1W{su9ea!#8SRc=~km0|L}T#{irc?}2-K1~Me)yeYLw^j5Er
z`;^9<s$_V^8;NVwohemS{-Hz_TqpCh%$z>f#D6{~9gr0(9oIfS0@GCm{-fiH|4zb-
zWra~!q*UY>oobwp760C058eab8_y#srie#CbP;#IC56M%BIovilrnTc=5h=&4%47+
zTcbgvpXN+)oiF2^+{I$5ix^qiX4bCXkZYoJ!4RBKASoV102zn*@;VuXpi?uik$D;B
zU$gzASO&&X%>t648BP_4@!6Odhs3a|GLw;6W;QDN(=u(<809}YsqvZqCau`8waD^y
zn~N-4y|GA4^7<mN$s3Bqt=WWYTZ$xX++HMc^UflPo3`hN+fpQM^(GZ#u(Da91exTE
z*i{>Nk5Z?4!zMr3>KU6{8m>Jmwa)<cTUD!7<FukBx=TtiMJ-sdev6WKemoVk6;AVC
z%#Zmpf0joThvnt{{BZA%ql877@jSc^u*^zX`Jd^0rvC%PN(XVAmU<i=Yq-{k4i6)6
mojw4Z{rN|I06vV06#0L0BQaiPgq42)0000<MNUMnLSTY`z}`*(

literal 0
HcmV?d00001

diff --git a/docs/xen.doxyfile.in b/docs/xen.doxyfile.in
new file mode 100644
index 0000000000..00969d9b78
--- /dev/null
+++ b/docs/xen.doxyfile.in
@@ -0,0 +1,2316 @@
+# Doxyfile 1.8.13
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project.
+#
+# All text after a double hash (##) is considered a comment and is placed in
+# front of the TAG it is preceding.
+#
+# All text after a single hash (#) is considered a comment and will be ignored.
+# The format is:
+# TAG = value [value, ...]
+# For lists, items can also be appended using:
+# TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (\" \").
+#
+# This file is based on doc/zephyr.doxyfile.in from Zephyr 2.3
+
+#---------------------------------------------------------------------------
+# Project related configuration options
+#---------------------------------------------------------------------------
+
+# This tag specifies the encoding used for all characters in the config file
+# that follow. The default is UTF-8 which is also the encoding used for all text
+# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
+# built into libc) for the transcoding. See
+# https://www.gnu.org/software/libiconv/ for the list of possible encodings.
+# The default value is: UTF-8.
+
+DOXYFILE_ENCODING      = UTF-8
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
+# double-quotes, unless you are using Doxywizard) that should identify the
+# project for which the documentation is generated. This name is used in the
+# title of most generated pages and in a few other places.
+# The default value is: My Project.
+
+PROJECT_NAME           = "Xen Project"
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
+# could be handy for archiving the generated documentation or if some version
+# control system is used.
+
+PROJECT_NUMBER         =
+
+# Using the PROJECT_BRIEF tag one can provide an optional one line description
+# for a project that appears at the top of each page and should give viewer a
+# quick idea about the purpose of the project. Keep the description short.
+
+PROJECT_BRIEF          = "An Open Source Type 1 Hypervisor"
+
+# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
+# in the documentation. The maximum height of the logo should not exceed 55
+# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
+# the logo to the output directory.
+
+PROJECT_LOGO           = "xen-doxygen/xen_project_logo_165x67.png"
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
+# into which the generated documentation will be written. If a relative path is
+# entered, it will be relative to the location where doxygen was started. If
+# left blank the current directory will be used.
+
+OUTPUT_DIRECTORY       = @DOXY_OUT@
+
+# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
+# directories (in 2 levels) under the output directory of each output format and
+# will distribute the generated files over these directories. Enabling this
+# option can be useful when feeding doxygen a huge amount of source files, where
+# putting all generated files in the same directory would otherwise causes
+# performance problems for the file system.
+# The default value is: NO.
+
+CREATE_SUBDIRS         = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
+# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
+# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
+# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
+# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
+# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
+# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
+# Ukrainian and Vietnamese.
+# The default value is: English.
+
+OUTPUT_LANGUAGE        = English
+
+# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
+# descriptions after the members that are listed in the file and class
+# documentation (similar to Javadoc). Set to NO to disable this.
+# The default value is: YES.
+
+BRIEF_MEMBER_DESC      = YES
+
+# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
+# description of a member or function before the detailed description
+#
+# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+# The default value is: YES.
+
+REPEAT_BRIEF           = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator that is
+# used to form the text in various listings. Each string in this list, if found
+# as the leading text of the brief description, will be stripped from the text
+# and the result, after processing the whole list, is used as the annotated
+# text. Otherwise, the brief description is used as-is. If left blank, the
+# following values are used ($name is automatically replaced with the name of
+# the entity):The $name class, The $name widget, The $name file, is, provides,
+# specifies, contains, represents, a, an and the.
+
+ABBREVIATE_BRIEF       = YES
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
+# doxygen will generate a detailed section even if there is only a brief
+# description.
+# The default value is: NO.
+
+ALWAYS_DETAILED_SEC    = YES
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
+# inherited members of a class in the documentation of that class as if those
+# members were ordinary class members. Constructors, destructors and assignment
+# operators of the base classes will not be shown.
+# The default value is: NO.
+
+INLINE_INHERITED_MEMB  = YES
+
+# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
+# before files name in the file list and in the header files. If set to NO the
+# shortest path that makes the file name unique will be used
+# The default value is: YES.
+
+FULL_PATH_NAMES        = YES
+
+# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
+# Stripping is only done if one of the specified strings matches the left-hand
+# part of the path. The tag can be used to show relative paths in the file list.
+# If left blank the directory from which doxygen is run is used as the path to
+# strip.
+#
+# Note that you can specify absolute paths here, but also relative paths, which
+# will be relative from the directory where doxygen is started.
+# This tag requires that the tag FULL_PATH_NAMES is set to YES.
+
+STRIP_FROM_PATH        = @XEN_BASE@
+
+# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
+# path mentioned in the documentation of a class, which tells the reader which
+# header file to include in order to use a class. If left blank only the name of
+# the header file containing the class definition is used. Otherwise one should
+# specify the list of include paths that are normally passed to the compiler
+# using the -I flag.
+
+STRIP_FROM_INC_PATH    =
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
+# less readable) file names. This can be useful is your file systems doesn't
+# support long names like on DOS, Mac, or CD-ROM.
+# The default value is: NO.
+
+SHORT_NAMES            = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
+# first line (until the first dot) of a Javadoc-style comment as the brief
+# description. If set to NO, the Javadoc-style will behave just like regular Qt-
+# style comments (thus requiring an explicit @brief command for a brief
+# description.)
+# The default value is: NO.
+
+JAVADOC_AUTOBRIEF      = NO
+
+# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
+# line (until the first dot) of a Qt-style comment as the brief description. If
+# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
+# requiring an explicit \brief command for a brief description.)
+# The default value is: NO.
+
+QT_AUTOBRIEF           = NO
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
+# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
+# a brief description. This used to be the default behavior. The new default is
+# to treat a multi-line C++ comment block as a detailed description. Set this
+# tag to YES if you prefer the old behavior instead.
+#
+# Note that setting this tag to YES also means that rational rose comments are
+# not recognized any more.
+# The default value is: NO.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
+# documentation from any documented member that it re-implements.
+# The default value is: YES.
+
+INHERIT_DOCS           = YES
+
+# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
+# page for each member. If set to NO, the documentation of a member will be part
+# of the file/class/namespace that contains it.
+# The default value is: NO.
+
+SEPARATE_MEMBER_PAGES  = YES
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
+# uses this value to replace tabs by spaces in code fragments.
+# Minimum value: 1, maximum value: 16, default value: 4.
+
+TAB_SIZE               = 8
+
+# This tag can be used to specify a number of aliases that act as commands in
+# the documentation. An alias has the form:
+# name=value
+# For example adding
+# "sideeffect=@par Side Effects:\n"
+# will allow you to put the command \sideeffect (or @sideeffect) in the
+# documentation, which will result in a user-defined paragraph with heading
+# "Side Effects:". You can put \n's in the value part of an alias to insert
+# newlines.
+
+ALIASES                = "rst=\verbatim embed:rst:leading-asterisk" \
+                         "endrst=\endverbatim" \
+                         "keepindent=\code" \
+                         "endkeepindent=\endcode"
+
+ALIASES += req{1}="\ref XEN_\1 \"XEN-\1\" "
+ALIASES += satisfy{1}="\xrefitem satisfy \"Satisfies requirement\" \"Requirement Implementation\" \1"
+ALIASES += verify{1}="\xrefitem verify \"Verifies requirement\" \"Requirement Verification\" \1"
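+#
+# Illustrative only: with the aliases above, a C doc comment could tag an
+# implementation against a requirement like so (XEN_ARM64_0001 is a
+# hypothetical requirement identifier, not one defined by this patch):
+#
+#   /**
+#    * \satisfy{\req{ARM64_0001}}
+#    */
+#
+# \req expands to a \ref link to the requirement, and \satisfy / \verify
+# collect the tagged items into cross-referenced "Requirement
+# Implementation" / "Requirement Verification" pages via \xrefitem.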
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
+# only. Doxygen will then generate output that is more tailored for C. For
+# instance, some of the names that are used will be different. The list of all
+# members will be omitted, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_FOR_C  = YES
+
+# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
+# Python sources only. Doxygen will then generate output that is more tailored
+# for that language. For instance, namespaces will be presented as packages,
+# qualified scopes will look different, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_JAVA   = NO
+
+# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
+# sources. Doxygen will then generate output that is tailored for Fortran.
+# The default value is: NO.
+
+OPTIMIZE_FOR_FORTRAN   = NO
+
+# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
+# sources. Doxygen will then generate output that is tailored for VHDL.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_VHDL   = NO
+
+# Doxygen selects the parser to use depending on the extension of the files it
+# parses. With this tag you can assign which parser to use for a given
+# extension. Doxygen has a built-in mapping, but you can override or extend it
+# using this tag. The format is ext=language, where ext is a file extension, and
+# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
+# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
+# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
+# Fortran. In the latter case the parser tries to guess whether the code is fixed
+# or free formatted code, this is the default for Fortran type files), VHDL. For
+# instance to make doxygen treat .inc files as Fortran files (default is PHP),
+# and .f files as C (default is Fortran), use: inc=Fortran f=C.
+#
+# Note: For files without extension you can use no_extension as a placeholder.
+#
+# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
+# the files are not read by doxygen.
+
+EXTENSION_MAPPING      =
+
+# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
+# according to the Markdown format, which allows for more readable
+# documentation. See http://daringfireball.net/projects/markdown/ for details.
+# The output of markdown processing is further processed by doxygen, so you can
+# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
+# case of backward compatibility issues.
+# The default value is: YES.
+
+MARKDOWN_SUPPORT       = YES
+
+# When enabled doxygen tries to link words that correspond to documented
+# classes, or namespaces to their corresponding documentation. Such a link can
+# be prevented in individual cases by putting a % sign in front of the word or
+# globally by setting AUTOLINK_SUPPORT to NO.
+# The default value is: YES.
+
+AUTOLINK_SUPPORT       = YES
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
+# to include (a tag file for) the STL sources as input, then you should set this
+# tag to YES in order to let doxygen match functions declarations and
+# definitions whose arguments contain STL classes (e.g. func(std::string);
+# versus func(std::string) {}). This also makes the inheritance and collaboration
+# diagrams that involve STL classes more complete and accurate.
+# The default value is: NO.
+
+BUILTIN_STL_SUPPORT    = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+# The default value is: NO.
+
+CPP_CLI_SUPPORT        = YES
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
+# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
+# will parse them like normal C++ but will assume all classes use public instead
+# of private inheritance when no explicit protection keyword is present.
+# The default value is: NO.
+
+SIP_SUPPORT            = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate
+# getter and setter methods for a property. Setting this option to YES will make
+# doxygen replace the get and set methods by a property in the documentation.
+# This will only work if the methods are indeed getting or setting a simple
+# type. If this is not the case, or you want to show the methods anyway, you
+# should set this option to NO.
+# The default value is: YES.
+
+IDL_PROPERTY_SUPPORT   = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+# The default value is: NO.
+
+DISTRIBUTE_GROUP_DOC   = NO
+
+# Set the SUBGROUPING tag to YES to allow class member groups of the same type
+# (for instance a group of public functions) to be put as a subgroup of that
+# type (e.g. under the Public Functions section). Set it to NO to prevent
+# subgrouping. Alternatively, this can be done per class using the
+# \nosubgrouping command.
+# The default value is: YES.
+
+SUBGROUPING            = YES
+
+# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
+# are shown inside the group in which they are included (e.g. using \ingroup)
+# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
+# and RTF).
+#
+# Note that this feature does not work in combination with
+# SEPARATE_MEMBER_PAGES.
+# The default value is: NO.
+
+INLINE_GROUPED_CLASSES = NO
+
+# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
+# with only public data fields or simple typedef fields will be shown inline in
+# the documentation of the scope in which they are defined (i.e. file,
+# namespace, or group documentation), provided this scope is documented. If set
+# to NO, structs, classes, and unions are shown on a separate page (for HTML and
+# Man pages) or section (for LaTeX and RTF).
+# The default value is: NO.
+
+INLINE_SIMPLE_STRUCTS  = YES
+
+# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
+# enum is documented as struct, union, or enum with the name of the typedef. So
+# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
+# with name TypeT. When disabled the typedef will appear as a member of a file,
+# namespace, or class. And the struct will be named TypeS. This can typically be
+# useful for C code in case the coding convention dictates that all compound
+# types are typedef'ed and only the typedef is referenced, never the tag name.
+# The default value is: NO.
+
+TYPEDEF_HIDES_STRUCT   = NO
+
+# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
+# cache is used to resolve symbols given their name and scope. Since this can be
+# an expensive process and often the same symbol appears multiple times in the
+# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
+# doxygen will become slower. If the cache is too large, memory is wasted. The
+# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
+# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
+# symbols. At the end of a run doxygen will report the cache usage and suggest
+# the optimal cache size from a speed point of view.
+# Minimum value: 0, maximum value: 9, default value: 0.
+
+LOOKUP_CACHE_SIZE      = 9
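+
+# Per the formula above, the value 9 gives a cache of 2^(16+9) = 2^25
+# (roughly 33.5 million) symbols, the largest size doxygen supports.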
+
+#---------------------------------------------------------------------------
+# Build related configuration options
+#---------------------------------------------------------------------------
+
+# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
+# documentation are documented, even if no documentation was available. Private
+# class members and static file members will be hidden unless the
+# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
+# Note: This will also disable the warnings about undocumented members that are
+# normally produced when WARNINGS is set to YES.
+# The default value is: NO.
+
+EXTRACT_ALL            = YES
+
+# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
+# be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PRIVATE        = NO
+
+# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
+# scope will be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PACKAGE        = YES
+
+# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
+# included in the documentation.
+# The default value is: NO.
+
+EXTRACT_STATIC         = YES
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
+# locally in source files will be included in the documentation. If set to NO,
+# only classes defined in header files are included. Does not have any effect
+# for Java sources.
+# The default value is: YES.
+
+EXTRACT_LOCAL_CLASSES  = YES
+
+# This flag is only useful for Objective-C code. If set to YES, local methods,
+# which are defined in the implementation section but not in the interface are
+# included in the documentation. If set to NO, only methods in the interface are
+# included.
+# The default value is: NO.
+
+EXTRACT_LOCAL_METHODS  = YES
+
+# If this flag is set to YES, the members of anonymous namespaces will be
+# extracted and appear in the documentation as a namespace called
+# 'anonymous_namespace{file}', where file will be replaced with the base name of
+# the file that contains the anonymous namespace. By default anonymous
+# namespaces are hidden.
+# The default value is: NO.
+
+EXTRACT_ANON_NSPACES   = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
+# undocumented members inside documented classes or files. If set to NO these
+# members will be included in the various overviews, but no documentation
+# section is generated. This option has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_MEMBERS     = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy. If set
+# to NO, these classes will be included in the various overviews. This option
+# has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_CLASSES     = NO
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
+# (class|struct|union) declarations. If set to NO, these declarations will be
+# included in the documentation.
+# The default value is: NO.
+
+HIDE_FRIEND_COMPOUNDS  = NO
+
+# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
+# documentation blocks found inside the body of a function. If set to NO, these
+# blocks will be appended to the function's detailed documentation block.
+# The default value is: NO.
+
+HIDE_IN_BODY_DOCS      = NO
+
+# The INTERNAL_DOCS tag determines if documentation that is typed after a
+# \internal command is included. If the tag is set to NO then the documentation
+# will be excluded. Set it to YES to include the internal documentation.
+# The default value is: NO.
+
+INTERNAL_DOCS          = NO
+
+# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
+# names in lower-case letters. If set to YES, upper-case letters are also
+# allowed. This is useful if you have classes or files whose names only differ
+# in case and if your file system supports case sensitive file names. Windows
+# and Mac users are advised to set this option to NO.
+# The default value is: system dependent.
+
+CASE_SENSE_NAMES       = YES
+
+# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
+# their full class and namespace scopes in the documentation. If set to YES, the
+# scope will be hidden.
+# The default value is: NO.
+
+HIDE_SCOPE_NAMES       = NO
+
+# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
+# the files that are included by a file in the documentation of that file.
+# The default value is: YES.
+
+SHOW_INCLUDE_FILES     = YES
+
+# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
+# grouped member an include statement to the documentation, telling the reader
+# which file to include in order to use the member.
+# The default value is: NO.
+
+SHOW_GROUPED_MEMB_INC  = YES
+
+# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
+# files with double quotes in the documentation rather than with sharp brackets.
+# The default value is: NO.
+
+FORCE_LOCAL_INCLUDES   = NO
+
+# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
+# documentation for inline members.
+# The default value is: YES.
+
+INLINE_INFO            = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
+# (detailed) documentation of file and class members alphabetically by member
+# name. If set to NO, the members will appear in declaration order.
+# The default value is: YES.
+
+SORT_MEMBER_DOCS       = YES
+
+# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
+# descriptions of file, namespace and class members alphabetically by member
+# name. If set to NO, the members will appear in declaration order. Note that
+# this will also influence the order of the classes in the class list.
+# The default value is: NO.
+
+SORT_BRIEF_DOCS        = NO
+
+# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
+# (brief and detailed) documentation of class members so that constructors and
+# destructors are listed first. If set to NO the constructors will appear in the
+# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
+# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
+# member documentation.
+# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
+# detailed member documentation.
+# The default value is: NO.
+
+SORT_MEMBERS_CTORS_1ST = NO
+
+# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
+# of group names into alphabetical order. If set to NO the group names will
+# appear in their defined order.
+# The default value is: NO.
+
+SORT_GROUP_NAMES       = YES
+
+# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
+# fully-qualified names, including namespaces. If set to NO, the class list will
+# be sorted only by class name, not including the namespace part.
+# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
+# Note: This option applies only to the class list, not to the alphabetical
+# list.
+# The default value is: NO.
+
+SORT_BY_SCOPE_NAME     = YES
+
+# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
+# type resolution of all parameters of a function it will reject a match between
+# the prototype and the implementation of a member function even if there is
+# only one candidate or it is obvious which candidate to choose by doing a
+# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
+# accept a match between prototype and implementation in such cases.
+# The default value is: NO.
+
+STRICT_PROTO_MATCHING  = YES
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
+# list. This list is created by putting \todo commands in the documentation.
+# The default value is: YES.
+
+GENERATE_TODOLIST      = NO
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
+# list. This list is created by putting \test commands in the documentation.
+# The default value is: YES.
+
+GENERATE_TESTLIST      = NO
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
+# list. This list is created by putting \bug commands in the documentation.
+# The default value is: YES.
+
+GENERATE_BUGLIST       = NO
+
+# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
+# the deprecated list. This list is created by putting \deprecated commands in
+# the documentation.
+# The default value is: YES.
+
+GENERATE_DEPRECATEDLIST= YES
+
+# The ENABLED_SECTIONS tag can be used to enable conditional documentation
+# sections, marked by \if <section_label> ... \endif and \cond <section_label>
+# ... \endcond blocks.
+
+ENABLED_SECTIONS       = YES
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
+# initial value of a variable or macro / define can have for it to appear in the
+# documentation. If the initializer consists of more lines than specified here
+# it will be hidden. Use a value of 0 to hide initializers completely. The
+# appearance of the value of individual variables and macros / defines can be
+# controlled using \showinitializer or \hideinitializer command in the
+# documentation regardless of this setting.
+# Minimum value: 0, maximum value: 10000, default value: 30.
+
+MAX_INITIALIZER_LINES  = 300
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
+# the bottom of the documentation of classes and structs. If set to YES, the
+# list will mention the files that were used to generate the documentation.
+# The default value is: YES.
+
+SHOW_USED_FILES        = YES
+
+# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
+# will remove the Files entry from the Quick Index and from the Folder Tree View
+# (if specified).
+# The default value is: YES.
+
+SHOW_FILES             = YES
+
+# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
+# page. This will remove the Namespaces entry from the Quick Index and from the
+# Folder Tree View (if specified).
+# The default value is: YES.
+
+SHOW_NAMESPACES        = YES
+
+# The FILE_VERSION_FILTER tag can be used to specify a program or script that
+# doxygen should invoke to get the current version for each file (typically from
+# the version control system). Doxygen will invoke the program by executing (via
+# popen()) the command command input-file, where command is the value of the
+# FILE_VERSION_FILTER tag, and input-file is the name of an input file provided
+# by doxygen. Whatever the program writes to standard output is used as the file
+# version. For an example see the documentation.
+
+FILE_VERSION_FILTER    =
+
+# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
+# by doxygen. The layout file controls the global structure of the generated
+# output files in an output format independent way. To create the layout file
+# that represents doxygen's defaults, run doxygen with the -l option. You can
+# optionally specify a file name after the option, if omitted DoxygenLayout.xml
+# will be used as the name of the layout file.
+#
+# Note that if you run doxygen from a directory containing a file called
+# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
+# tag is left empty.
+
+LAYOUT_FILE            =
+
+# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
+# the reference definitions. This must be a list of .bib files. The .bib
+# extension is automatically appended if omitted. This requires the bibtex tool
+# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
+# For LaTeX the style of the bibliography can be controlled using
+# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
+# search path. See also \cite for info how to create references.
+
+CITE_BIB_FILES         =
+
+#---------------------------------------------------------------------------
+# Configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated to
+# standard output by doxygen. If QUIET is set to YES this implies that the
+# messages are off.
+# The default value is: NO.
+
+QUIET                  = YES
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are
+# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
+# this implies that the warnings are on.
+#
+# Tip: Turn warnings on while writing the documentation.
+# The default value is: YES.
+
+WARNINGS               = YES
+
+# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
+# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
+# will automatically be disabled.
+# The default value is: YES.
+
+WARN_IF_UNDOCUMENTED   = YES
+
+# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
+# potential errors in the documentation, such as not documenting some parameters
+# in a documented function, or documenting parameters that don't exist or using
+# markup commands wrongly.
+# The default value is: YES.
+
+WARN_IF_DOC_ERROR      = YES
+
+# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
+# are documented, but have no documentation for their parameters or return
+# value. If set to NO, doxygen will only warn about wrong or incomplete
+# parameter documentation, but not about the absence of documentation.
+# The default value is: NO.
+
+WARN_NO_PARAMDOC       = NO
+
+# The WARN_FORMAT tag determines the format of the warning messages that doxygen
+# can produce. The string should contain the $file, $line, and $text tags, which
+# will be replaced by the file and line number from which the warning originated
+# and the warning text. Optionally the format may contain $version, which will
+# be replaced by the version of the file (if it could be obtained via
+# FILE_VERSION_FILTER).
+# The default value is: $file:$line: $text.
+
+WARN_FORMAT            = "$file:$line: $text"
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning and error
+# messages should be written. If left blank the output is written to standard
+# error (stderr).
+
+WARN_LOGFILE           =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag is used to specify the files and/or directories that contain
+# documented source files. You may enter file names like myfile.cpp or
+# directories like /usr/src/myproject. Separate the files or directories with
+# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING.
+# Note: If this tag is empty the current directory is searched.
+
+INPUT                  = "@XEN_BASE@/docs/xen-doxygen/mainpage.md"
+
+# This tag can be used to specify the character encoding of the source files
+# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
+# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
+# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
+# possible encodings.
+# The default value is: UTF-8.
+
+INPUT_ENCODING         = UTF-8
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# read by doxygen.
+#
+# If left blank the following patterns are tested:*.c, *.cc, *.cxx, *.cpp,
+# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
+# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
+# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
+# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf.
+
+# This MUST be kept in sync with DOXY_SOURCES in doc/CMakeLists.txt
+# for incremental (and faster) builds to work correctly.
+FILE_PATTERNS          = "*.c" \
+                         "*.h" \
+                         "*.S" \
+                         "*.md"
+
+# The RECURSIVE tag can be used to specify whether or not subdirectories should
+# be searched for input files as well.
+# The default value is: NO.
+
+RECURSIVE              = YES
+
+# The EXCLUDE tag can be used to specify files and/or directories that should be
+# excluded from the INPUT source files. This way you can easily exclude a
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+#
+# Note that relative paths are relative to the directory from which doxygen is
+# run.
+
+EXCLUDE                = @XEN_BASE@/include/nothing.h
+
+# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
+# directories that are symbolic links (a Unix file system feature) are excluded
+# from the input.
+# The default value is: NO.
+
+EXCLUDE_SYMLINKS       = NO
+
+# If the value of the INPUT tag contains directories, you can use the
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
+# certain files from those directories.
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories for example use the pattern */test/*
+
+EXCLUDE_PATTERNS       =
+
+# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
+# (namespaces, classes, functions, etc.) that should be excluded from the
+# output. The symbol name can be a fully qualified name, a word, or if the
+# wildcard * is used, a substring. Examples: ANamespace, AClass,
+# AClass::ANamespace, ANamespace::*Test
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories use the pattern */test/*
+
+# Hide internal names (starting with an underscore, and doxygen-generated names
+# for nested unnamed unions) that don't generate meaningful sphinx output anyway.
+EXCLUDE_SYMBOLS        =
+# _*  *.__unnamed__ z_* Z_*
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or directories
+# that contain example code fragments that are included (see the \include
+# command).
+
+EXAMPLE_PATH           =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories. If left blank all
+# files are included.
+
+EXAMPLE_PATTERNS       =
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude commands
+# irrespective of the value of the RECURSIVE tag.
+# The default value is: NO.
+
+EXAMPLE_RECURSIVE      = YES
+
+# The IMAGE_PATH tag can be used to specify one or more files or directories
+# that contain images that are to be included in the documentation (see the
+# \image command).
+
+IMAGE_PATH             =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command:
+#
+# <filter> <input-file>
+#
+# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
+# name of an input file. Doxygen will then use the output that the filter
+# program writes to standard output. If FILTER_PATTERNS is specified, this tag
+# will be ignored.
+#
+# Note that the filter must not add or remove lines; it is applied before the
+# code is scanned, but not when the output code is generated. If lines are added
+# or removed, the anchors will not be placed correctly.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# properly processed by doxygen.
+
+INPUT_FILTER           =
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
+# basis. Doxygen will compare the file name with each pattern and apply the
+# filter if there is a match. The filters are a list of the form: pattern=filter
+# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
+# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
+# patterns match the file name, INPUT_FILTER is applied.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# properly processed by doxygen.
+
+FILTER_PATTERNS     = *.h="\"@XEN_BASE@/docs/xen-doxygen/doxy-preprocessor.py\""
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER) will also be used to filter the input files that are used for
+# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
+# The default value is: NO.
+
+FILTER_SOURCE_FILES    = NO
+
+# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
+# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
+# it is also possible to disable source filtering for a specific pattern using
+# *.ext= (so without naming a filter).
+# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
+
+FILTER_SOURCE_PATTERNS =
+
+# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
+# is part of the input, its contents will be placed on the main page
+# (index.html). This can be useful if you have a project on for instance GitHub
+# and want to reuse the introduction page also for the doxygen output.
+
+USE_MDFILE_AS_MAINPAGE = "mainpage.md"
+
+#---------------------------------------------------------------------------
+# Configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
+# generated. Documented entities will be cross-referenced with these sources.
+#
+# Note: To get rid of all source code in the generated output, make sure that
+# also VERBATIM_HEADERS is set to NO.
+# The default value is: NO.
+
+SOURCE_BROWSER         = NO
+
+# Setting the INLINE_SOURCES tag to YES will include the body of functions,
+# classes and enums directly into the documentation.
+# The default value is: NO.
+
+INLINE_SOURCES         = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
+# special comment blocks from generated source code fragments. Normal C, C++ and
+# Fortran comments will always remain visible.
+# The default value is: YES.
+
+STRIP_CODE_COMMENTS    = YES
+
+# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
+# function all documented functions referencing it will be listed.
+# The default value is: NO.
+
+REFERENCED_BY_RELATION = NO
+
+# If the REFERENCES_RELATION tag is set to YES then for each documented function
+# all documented entities called/used by that function will be listed.
+# The default value is: NO.
+
+REFERENCES_RELATION    = NO
+
+# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
+# to YES then the hyperlinks from functions in REFERENCES_RELATION and
+# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
+# link to the documentation.
+# The default value is: YES.
+
+REFERENCES_LINK_SOURCE = YES
+
+# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
+# source code will show a tooltip with additional information such as prototype,
+# brief description and links to the definition and documentation. Since this
+# will make the HTML file larger and loading of large files a bit slower, you
+# can opt to disable this feature.
+# The default value is: YES.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+SOURCE_TOOLTIPS        = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code will
+# point to the HTML generated by the htags(1) tool instead of doxygen built-in
+# source browser. The htags tool is part of GNU's global source tagging system
+# (see https://www.gnu.org/software/global/global.html). You will need version
+# 4.8.6 or higher.
+#
+# To use it do the following:
+# - Install the latest version of global
+# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
+# - Make sure the INPUT points to the root of the source tree
+# - Run doxygen as normal
+#
+# Doxygen will invoke htags (and that will in turn invoke gtags), so these
+# tools must be available from the command line (i.e. in the search path).
+#
+# The result: instead of the source browser generated by doxygen, the links to
+# source code will now point to the output of htags.
+# The default value is: NO.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+USE_HTAGS              = NO
+
+# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
+# verbatim copy of the header file for each class for which an include is
+# specified. Set to NO to disable this.
+# See also: Section \class.
+# The default value is: YES.
+
+VERBATIM_HEADERS       = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
+# compounds will be generated. Enable this if the project contains a lot of
+# classes, structs, unions or interfaces.
+# The default value is: YES.
+
+ALPHABETICAL_INDEX     = YES
+
+# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
+# which the alphabetical index list will be split.
+# Minimum value: 1, maximum value: 20, default value: 5.
+# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
+
+COLS_IN_ALPHA_INDEX    = 2
+
+# In case all classes in a project start with a common prefix, all classes will
+# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
+# can be used to specify a prefix (or a list of prefixes) that should be ignored
+# while generating the index headers.
+# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
+
+IGNORE_PREFIX          =
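+#
+# For illustration only (hypothetical prefixes, not part of this patch's
+# setup): a project whose symbols mostly start with a common prefix such as
+# xen_ or libxl_ could keep the index headers meaningful with a setting like:
+# IGNORE_PREFIX = xen_ libxl_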
+
+#---------------------------------------------------------------------------
+# Configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
+# The default value is: YES.
+
+GENERATE_HTML          = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: html.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_OUTPUT            = html
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
+# generated HTML page (for example: .htm, .php, .asp).
+# The default value is: .html.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FILE_EXTENSION    = .html
+
+# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
+# each generated HTML page. If the tag is left blank doxygen will generate a
+# standard header.
+#
+# To get valid HTML, the header file must include any scripts and style sheets
+# that doxygen needs, which depend on the configuration options used (e.g. the
+# setting GENERATE_TREEVIEW). It is highly recommended to start with a
+# default header using
+# doxygen -w html new_header.html new_footer.html new_stylesheet.css
+# YourConfigFile
+# and then modify the file new_header.html. See also section "Doxygen usage"
+# for information on how to generate the default header that doxygen normally
+# uses.
+# Note: The header is subject to change so you typically have to regenerate the
+# default header when upgrading to a newer version of doxygen. For a description
+# of the possible markers and block names see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_HEADER            = xen-doxygen/header.html
+
+# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
+# generated HTML page. If the tag is left blank doxygen will generate a standard
+# footer. See HTML_HEADER for more information on how to generate a default
+# footer and what special commands can be used inside the footer. See also
+# section "Doxygen usage" for information on how to generate the default footer
+# that doxygen normally uses.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FOOTER            =
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
+# sheet that is used by each HTML page. It can be used to fine-tune the look of
+# the HTML output. If left blank doxygen will generate a default style sheet.
+# See also section "Doxygen usage" for information on how to generate the style
+# sheet that doxygen normally uses.
+# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
+# it is more robust and this tag (HTML_STYLESHEET) will in the future become
+# obsolete.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_STYLESHEET        =
+
+# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
+# cascading style sheets that are included after the standard style sheets
+# created by doxygen. Using this option one can overrule certain style aspects.
+# This is preferred over using HTML_STYLESHEET since it does not replace the
+# standard style sheet and is therefore more robust against future updates.
+# Doxygen will copy the style sheet files to the output directory.
+# Note: The order of the extra style sheet files is of importance (e.g. the last
+# style sheet in the list overrules the setting of the previous ones in the
+# list). For an example see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_EXTRA_STYLESHEET  = xen-doxygen/customdoxygen.css
+
+# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the HTML output directory. Note
+# that these files will be copied to the base HTML output directory. Use the
+# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
+# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
+# files will be copied as-is; there are no commands or markers available.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_EXTRA_FILES       =
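+#
+# For illustration only (hypothetical file name, not part of this patch's
+# setup): an extra image could be copied into the HTML output with
+# HTML_EXTRA_FILES = xen-doxygen/logo.png
+# and then loaded from HTML_HEADER as $relpath^logo.png, since extra files
+# land in the base HTML output directory.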
+
+# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
+# will adjust the colors in the style sheet and background images according to
+# this color. Hue is specified as an angle on a colorwheel, see
+# https://en.wikipedia.org/wiki/Hue for more information. For instance the value
+# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
+# is purple, and 360 is red again.
+# Minimum value: 0, maximum value: 359, default value: 220.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_HUE    =
+
+# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
+# in the HTML output. For a value of 0 the output will use grayscales only. A
+# value of 255 will produce the most vivid colors.
+# Minimum value: 0, maximum value: 255, default value: 100.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_SAT    =
+
+# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
+# luminance component of the colors in the HTML output. Values below 100
+# gradually make the output lighter, whereas values above 100 make the output
+# darker. The value divided by 100 is the actual gamma applied, so 80 represents
+# a gamma of 0.8, The value 220 represents a gamma of 2.2, and 100 does not
+# change the gamma.
+# Minimum value: 40, maximum value: 240, default value: 80.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_GAMMA  =
+
+# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
+# page will contain the date and time when the page was generated. Setting this
+# to YES can help to show when doxygen was last run and thus if the
+# documentation is up to date.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_TIMESTAMP         = YES
+
+# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
+# documentation will contain sections that can be hidden and shown after the
+# page has loaded.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_DYNAMIC_SECTIONS  = YES
+
+# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
+# shown in the various tree structured indices initially; the user can expand
+# and collapse entries dynamically later on. Doxygen will expand the tree to
+# such a level that at most the specified number of entries are visible (unless
+# a fully collapsed tree already exceeds this amount). So setting the number of
+# entries to 1 will produce a fully collapsed tree by default. 0 is a special
+# value representing an infinite number of entries and will result in a fully
+# expanded tree by default.
+# Minimum value: 0, maximum value: 9999, default value: 100.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_INDEX_NUM_ENTRIES = 100
+
+# If the GENERATE_DOCSET tag is set to YES, additional index files will be
+# generated that can be used as input for Apple's Xcode 3 integrated development
+# environment (see: https://developer.apple.com/tools/xcode/), introduced with
+# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
+# Makefile in the HTML output directory. Running make will produce the docset in
+# that directory and running make install will install the docset in
+# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
+# startup. See https://developer.apple.com/tools/creatingdocsetswithdoxygen.html
+# for more information.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_DOCSET        = YES
+
+# This tag determines the name of the docset feed. A documentation feed provides
+# an umbrella under which multiple documentation sets from a single provider
+# (such as a company or product suite) can be grouped.
+# The default value is: Doxygen generated docs.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_FEEDNAME        = "Doxygen generated docs"
+
+# This tag specifies a string that should uniquely identify the documentation
+# set bundle. This should be a reverse domain-name style string, e.g.
+# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_BUNDLE_ID       = org.doxygen.Project
+
+# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
+# the documentation publisher. This should be a reverse domain-name style
+# string, e.g. com.mycompany.MyDocSet.documentation.
+# The default value is: org.doxygen.Publisher.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_PUBLISHER_ID    = org.doxygen.Publisher
+
+# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
+# The default value is: Publisher.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_PUBLISHER_NAME  = Publisher
+
+# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
+# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
+# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
+# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
+# Windows.
+#
+# The HTML Help Workshop contains a compiler that can convert all HTML output
+# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
+# files have been the standard Windows help format since Windows 98, replacing
+# the older WinHelp format (.hlp). Compressed
+# HTML files also contain an index, a table of contents, and you can search for
+# words in the documentation. The HTML workshop also contains a viewer for
+# compressed HTML files.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_HTMLHELP      = NO
+
+# The CHM_FILE tag can be used to specify the file name of the resulting .chm
+# file. You can add a path in front of the file if the result should not be
+# written to the html output directory.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_FILE               =
+
+# The HHC_LOCATION tag can be used to specify the location (absolute path
+# including file name) of the HTML help compiler (hhc.exe). If non-empty,
+# doxygen will try to run the HTML help compiler on the generated index.hhp.
+# The file has to be specified with full path.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+HHC_LOCATION           =
+
+# The GENERATE_CHI flag controls if a separate .chi index file is generated
+# (YES) or whether it should be included in the master .chm file (NO).
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+GENERATE_CHI           = NO
+
+# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
+# and project file content.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_INDEX_ENCODING     =
+
+# The BINARY_TOC flag controls whether a binary table of contents is generated
+# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
+# enables the Previous and Next buttons.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+BINARY_TOC             = YES
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members to
+# the table of contents of the HTML help documentation and to the tree view.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+TOC_EXPAND             = NO
+
+# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
+# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
+# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
+# (.qch) of the generated HTML documentation.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_QHP           = NO
+
+# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
+# the file name of the resulting .qch file. The path specified is relative to
+# the HTML output folder.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QCH_FILE               =
+
+# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
+# Project output. For more information please see Qt Help Project / Namespace
+# (see: http://doc.qt.io/qt-4.8/qthelpproject.html#namespace).
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_NAMESPACE          = org.doxygen.Project
+
+# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
+# Help Project output. For more information please see Qt Help Project / Virtual
+# Folders (see: http://doc.qt.io/qt-4.8/qthelpproject.html#virtual-folders).
+# The default value is: doc.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_VIRTUAL_FOLDER     = doc
+
+# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
+# filter to add. For more information please see Qt Help Project / Custom
+# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_CUST_FILTER_NAME   =
+
+# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
+# custom filter to add. For more information please see Qt Help Project / Custom
+# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_CUST_FILTER_ATTRS  =
+
+# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
+# project's filter section matches. Qt Help Project / Filter Attributes (see:
+# http://doc.qt.io/qt-4.8/qthelpproject.html#filter-attributes).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_SECT_FILTER_ATTRS  =
+
+# The QHG_LOCATION tag can be used to specify the location of Qt's
+# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
+# generated .qhp file.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHG_LOCATION           =
+
+# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
+# generated that, together with the HTML files, form an Eclipse help plugin. To
+# install this plugin and make it available under the help contents menu in
+# Eclipse, the contents of the directory containing the HTML and XML files need
+# to be copied into the plugins directory of Eclipse. The name of the directory
+# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
+# After copying, Eclipse needs to be restarted before the help appears.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_ECLIPSEHELP   = NO
+
+# A unique identifier for the Eclipse help plugin. When installing the plugin
+# the directory name containing the HTML and XML files should also have this
+# name. Each documentation set should have its own identifier.
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
+
+ECLIPSE_DOC_ID         = org.doxygen.Project
+
+# If you want full control over the layout of the generated HTML pages it might
+# be necessary to disable the index and replace it with your own. The
+# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
+# of each HTML page. A value of NO enables the index and the value YES disables
+# it. Since the tabs in the index contain the same information as the navigation
+# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+DISABLE_INDEX          = NO
+
+# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
+# structure should be generated to display hierarchical information. If the tag
+# value is set to YES, a side panel will be generated containing a tree-like
+# index structure (just like the one that is generated for HTML Help). For this
+# to work a browser that supports JavaScript, DHTML, CSS and frames is required
+# (i.e. any modern browser). Windows users are probably better off using the
+# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
+# further fine-tune the look of the index. As an example, the default style
+# sheet generated by doxygen has an example that shows how to put an image at
+# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
+# the same information as the tab index, you could consider setting
+# DISABLE_INDEX to YES when enabling this option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_TREEVIEW      = YES
+
+# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
+# doxygen will group on one line in the generated HTML documentation.
+#
+# Note that a value of 0 will completely suppress the enum values from appearing
+# in the overview section.
+# Minimum value: 0, maximum value: 20, default value: 4.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+ENUM_VALUES_PER_LINE   = 4
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
+# to set the initial width (in pixels) of the frame in which the tree is shown.
+# Minimum value: 0, maximum value: 1500, default value: 250.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+TREEVIEW_WIDTH         = 250
+
+# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
+# external symbols imported via tag files in a separate window.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+EXT_LINKS_IN_WINDOW    = NO
+
+# Use this tag to change the font size of LaTeX formulas included as images in
+# the HTML documentation. When you change the font size after a successful
+# doxygen run you need to manually remove any form_*.png images from the HTML
+# output directory to force them to be regenerated.
+# Minimum value: 8, maximum value: 50, default value: 10.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+FORMULA_FONTSIZE       = 10
+
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
+# generated for formulas are transparent PNGs. Transparent PNGs are not
+# supported properly for IE 6.0, but are supported on all modern browsers.
+#
+# Note that when changing this option you need to delete any form_*.png files in
+# the HTML output directory before the changes have effect.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+FORMULA_TRANSPARENT    = YES
+
+# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
+# https://www.mathjax.org) which uses client side Javascript for the rendering
+# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
+# installed or if you want the formulas to look prettier in the HTML output.
+# When
+# enabled you may also need to install MathJax separately and configure the path
+# to it using the MATHJAX_RELPATH option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+USE_MATHJAX            = NO
+
+# When MathJax is enabled you can set the default output format to be used for
+# the MathJax output. See the MathJax site (see:
+# http://docs.mathjax.org/en/latest/output.html) for more details.
+# Possible values are: HTML-CSS (which is slower, but has the best
+# compatibility), NativeMML (i.e. MathML) and SVG.
+# The default value is: HTML-CSS.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_FORMAT         = HTML-CSS
+
+# When MathJax is enabled you need to specify the location relative to the HTML
+# output directory using the MATHJAX_RELPATH option. The destination directory
+# should contain the MathJax.js script. For instance, if the mathjax directory
+# is located at the same level as the HTML output directory, then
+# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
+# Content Delivery Network so you can quickly see the result without installing
+# MathJax. However, it is strongly recommended to install a local copy of
+# MathJax from https://www.mathjax.org before deployment.
+# The default value is: http://cdn.mathjax.org/mathjax/latest.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest
+
+# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
+# extension names that should be enabled during MathJax rendering. For example
+# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_EXTENSIONS     =
+
+# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
+# of code that will be used on startup of the MathJax code. See the MathJax site
+# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
+# example see the documentation.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_CODEFILE       =
+
+# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
+# the HTML output. The underlying search engine uses javascript and DHTML and
+# should work on any modern browser. Note that when using HTML help
+# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
+# there is already a search function so this one should typically be disabled.
+# For large projects the javascript based search engine can be slow, then
+# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
+# search using the keyboard; to jump to the search box use <access key> + S
+# (what the <access key> is depends on the OS and browser, but it is typically
+# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
+# key> to jump into the search results window, the results can be navigated
+# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
+# the search. The filter options can be selected when the cursor is inside the
+# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
+# to select a filter and <Enter> or <escape> to activate or cancel the filter
+# option.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+SEARCHENGINE           = YES
+
+# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
+# implemented using a web server instead of a web client using Javascript. There
+# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
+# setting. When disabled, doxygen will generate a PHP script for searching and
+# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
+# and searching needs to be provided by external tools. See the section
+# "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SERVER_BASED_SEARCH    = NO
+
+# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
+# script for searching. Instead the search results are written to an XML file
+# which needs to be processed by an external indexer. Doxygen will invoke an
+# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
+# search results.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see: https://xapian.org/).
+#
+# See the section "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH        = NO
+
+# The SEARCHENGINE_URL should point to a search engine hosted by a web server
+# which will return the search results when EXTERNAL_SEARCH is enabled.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see: https://xapian.org/). See the section "External Indexing and
+# Searching" for details.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHENGINE_URL       =
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
+# search data is written to a file for indexing by an external tool. With the
+# SEARCHDATA_FILE tag the name of this file can be specified.
+# The default file is: searchdata.xml.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHDATA_FILE        = searchdata.xml
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
+# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
+# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
+# projects and redirect the results back to the right project.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH_ID     =
+
+# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
+# projects other than the one defined by this configuration file, but that are
+# all added to the same external search index. Each project needs to have a
+# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
+# each project to a relative location where the documentation can be found. The
+# format is:
+# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTRA_SEARCH_MAPPINGS  =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
+# The default value is: YES.
+
+GENERATE_LATEX         = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: latex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_OUTPUT           = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
+# invoked.
+#
+# Note that when enabling USE_PDFLATEX this option is only used for generating
+# bitmaps for formulas in the HTML output, but not in the Makefile that is
+# written to the output directory.
+# The default file is: latex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_CMD_NAME         = latex
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
+# index for LaTeX.
+# The default file is: makeindex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+MAKEINDEX_CMD_NAME     = makeindex
+
+# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+COMPACT_LATEX          = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used by the
+# printer.
+# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
+# 14 inches) and executive (7.25 x 10.5 inches).
+# The default value is: a4.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PAPER_TYPE             = a4
+
+# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
+# that should be included in the LaTeX output. The package can be specified just
+# by its name or with the correct syntax as to be used with the LaTeX
+# \usepackage command. To get the Times font, for instance, you can specify:
+# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
+# To use the option intlimits with the amsmath package you can specify:
+# EXTRA_PACKAGES=[intlimits]{amsmath}
+# If left blank no extra packages will be included.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+EXTRA_PACKAGES         =
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
+# generated LaTeX document. The header should contain everything until the first
+# chapter. If it is left blank doxygen will generate a standard header. See
+# section "Doxygen usage" for information on how to let doxygen write the
+# default header to a separate file.
+#
+# Note: Only use a user-defined header if you know what you are doing! The
+# following commands have a special meaning inside the header: $title,
+# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
+# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
+# string, for the replacement values of the other commands the user is referred
+# to HTML_HEADER.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HEADER           =
+
+# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
+# generated LaTeX document. The footer should contain everything after the last
+# chapter. If it is left blank doxygen will generate a standard footer. See
+# LATEX_HEADER for more information on how to generate a default footer and what
+# special commands can be used inside the footer.
+#
+# Note: Only use a user-defined footer if you know what you are doing!
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_FOOTER           =
+
+# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the LATEX_OUTPUT output
+# directory. Note that the files will be copied as-is; there are no commands or
+# markers available.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EXTRA_FILES      =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
+# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
+# contain links (just like the HTML output) instead of page references. This
+# makes the output suitable for online browsing using a PDF viewer.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PDF_HYPERLINKS         = YES
+
+# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
+# the PDF file directly from the LaTeX files. Set this option to YES to get
+# higher quality PDF documentation.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+USE_PDFLATEX           = YES
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
+# command to the generated LaTeX files. This will instruct LaTeX to keep running
+# if errors occur, instead of asking the user for help. This option is also used
+# when generating formulas in HTML.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BATCHMODE        = NO
+
+# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
+# index chapters (such as File Index, Compound Index, etc.) in the output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HIDE_INDICES     = NO
+
+# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
+# code with syntax highlighting in the LaTeX output.
+#
+# Note that which sources are shown also depends on other settings such as
+# SOURCE_BROWSER.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_SOURCE_CODE      = NO
+
+# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
+# bibliography, e.g. plainnat, or ieeetr. See
+# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
+# The default value is: plain.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BIB_STYLE        = plain
+
+#---------------------------------------------------------------------------
+# Configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
+# RTF output is optimized for Word 97 and may not look too pretty with other RTF
+# readers/editors.
+# The default value is: NO.
+
+GENERATE_RTF           = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: rtf.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_OUTPUT             = rtf
+
+# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+COMPACT_RTF            = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
+# contain hyperlink fields. The RTF file will contain links (just like the HTML
+# output) instead of page references. This makes the output suitable for online
+# browsing using Word or some other Word compatible readers that support those
+# fields.
+#
+# Note: WordPad (write) and others do not support links.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_HYPERLINKS         = YES
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's config
+# file, i.e. a series of assignments. You only have to provide replacements,
+# missing definitions are set to their default value.
+#
+# See also section "Doxygen usage" for information on how to generate the
+# default style sheet that doxygen normally uses.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_STYLESHEET_FILE    =
+
+# Set optional variables used in the generation of an RTF document. Syntax is
+# similar to doxygen's config file. A template extensions file can be generated
+# using doxygen -e rtf extensionFile.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_EXTENSIONS_FILE    =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
+# classes and files.
+# The default value is: NO.
+
+GENERATE_MAN           = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it. A directory man3 will be created inside the directory specified by
+# MAN_OUTPUT.
+# The default directory is: man.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_OUTPUT             = man
+
+# The MAN_EXTENSION tag determines the extension that is added to the generated
+# man pages. In case the manual section does not start with a number, the number
+# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
+# optional.
+# The default value is: .3.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_EXTENSION          = .3
+
+# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
+# will generate one additional man file for each entity documented in the real
+# man page(s). These additional files only source the real man page, but without
+# them the man command would be unable to find the correct page.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_LINKS              = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
+# captures the structure of the code including all documentation.
+# The default value is: NO.
+
+GENERATE_XML           = YES
+
+# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: xml.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_OUTPUT             = xml
+
+# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
+# listings (including syntax highlighting and cross-referencing information) to
+# the XML output. Note that enabling this will significantly increase the size
+# of the XML output.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_PROGRAMLISTING     = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the DOCBOOK output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
+# that can be used to generate PDF.
+# The default value is: NO.
+
+GENERATE_DOCBOOK       = NO
+
+# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
+# front of it.
+# The default directory is: docbook.
+# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
+
+DOCBOOK_OUTPUT         = docbook
+
+#---------------------------------------------------------------------------
+# Configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
+# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
+# the structure of the code including all documentation. Note that this feature
+# is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_AUTOGEN_DEF   = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the Perl module output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
+# file that captures the structure of the code including all documentation.
+#
+# Note that this feature is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_PERLMOD       = NO
+
+# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
+# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
+# output from the Perl module output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_LATEX          = NO
+
+# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
+# formatted so it can be parsed by a human reader. This is useful if you want to
+# understand what is going on. On the other hand, if this tag is set to NO, the
+# size of the Perl module output will be much smaller and Perl will parse it
+# just the same.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_PRETTY         = YES
+
+# The names of the make variables in the generated doxyrules.make file are
+# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
+# so different doxyrules.make files included by the same Makefile don't
+# overwrite each other's variables.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_MAKEVAR_PREFIX =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
+# C-preprocessor directives found in the sources and include files.
+# The default value is: YES.
+
+ENABLE_PREPROCESSING   = YES
+
+# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
+# in the source code. If set to NO, only conditional compilation will be
+# performed. Macro expansion can be done in a controlled way by setting
+# EXPAND_ONLY_PREDEF to YES.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+MACRO_EXPANSION        = YES
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
+# the macro expansion is limited to the macros specified with the PREDEFINED and
+# EXPAND_AS_DEFINED tags.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_ONLY_PREDEF     = NO
+
+# If the SEARCH_INCLUDES tag is set to YES, the include files in the
+# INCLUDE_PATH will be searched if a #include is found.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SEARCH_INCLUDES        = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that
+# contain include files that are not input files but should be processed by the
+# preprocessor.
+# This tag requires that the tag SEARCH_INCLUDES is set to YES.
+
+INCLUDE_PATH           = "@XEN_BASE@/xen/include/generated" \
+                         "@XEN_BASE@/xen/include/"
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
+# patterns (like *.h and *.hpp) to filter out the header-files in the
+# directories. If left blank, the patterns specified with FILE_PATTERNS will be
+# used.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+INCLUDE_FILE_PATTERNS  =
+
+# The PREDEFINED tag can be used to specify one or more macro names that are
+# defined before the preprocessor is started (similar to the -D option of e.g.
+# gcc). The argument of the tag is a list of macros of the form: name or
+# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
+# is assumed. To prevent a macro definition from being undefined via #undef or
+# recursively expanded use the := operator instead of the = operator.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+PREDEFINED             = __attribute__(x)= \
+                         DOXYGEN \
+                         __XEN__
+
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
+# tag can be used to specify a list of macro names that should be expanded. The
+# macro definition that is found in the sources will be used. Use the PREDEFINED
+# tag if you want to use a different macro definition that overrules the
+# definition found in the source code.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_AS_DEFINED      =
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
+# remove all references to function-like macros that are alone on a line, have
+# an all uppercase name, and do not end with a semicolon. Such function macros
+# are typically used for boiler-plate code, and will confuse the parser if not
+# removed.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SKIP_FUNCTION_MACROS   = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to external references
+#---------------------------------------------------------------------------
+
+# The TAGFILES tag can be used to specify one or more tag files. For each tag
+# file the location of the external documentation should be added. The format of
+# a tag file without this location is as follows:
+# TAGFILES = file1 file2 ...
+# Adding location for the tag files is done as follows:
+# TAGFILES = file1=loc1 "file2 = loc2" ...
+# where loc1 and loc2 can be relative or absolute paths or URLs. See the
+# section "Linking to external documentation" for more information about the use
+# of tag files.
+# Note: Each tag file must have a unique name (where the name does NOT include
+# the path). If a tag file is not located in the directory in which doxygen is
+# run, you must also specify the path to the tagfile here.
+
+TAGFILES               =
+
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
+# tag file that is based on the input files it reads. See section "Linking to
+# external documentation" for more information about the usage of tag files.
+
+GENERATE_TAGFILE       =
+
+# If the ALLEXTERNALS tag is set to YES, all external class will be listed in
+# the class index. If set to NO, only the inherited external classes will be
+# listed.
+# The default value is: NO.
+
+ALLEXTERNALS           = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
+# in the modules index. If set to NO, only the current project's groups will be
+# listed.
+# The default value is: YES.
+
+EXTERNAL_GROUPS        = YES
+
+# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
+# the related pages index. If set to NO, only the current project's pages will
+# be listed.
+# The default value is: YES.
+
+EXTERNAL_PAGES         = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool
+#---------------------------------------------------------------------------
+
+# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
+# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
+# NO turns the diagrams off. Note that this option also works with HAVE_DOT
+# disabled, but it is recommended to install and use dot, since it yields more
+# powerful graphs.
+# The default value is: YES.
+
+CLASS_DIAGRAMS         = NO
+
+# You can include diagrams made with dia in doxygen documentation. Doxygen will
+# then run dia to produce the diagram and insert it in the documentation. The
+# DIA_PATH tag allows you to specify the directory where the dia binary resides.
+# If left empty dia is assumed to be found in the default search path.
+
+DIA_PATH               =
+
+# If set to YES the inheritance and collaboration graphs will hide inheritance
+# and usage relations if the target is undocumented or is not a class.
+# The default value is: YES.
+
+HIDE_UNDOC_RELATIONS   = YES
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
+# available from the path. This tool is part of Graphviz (see:
+# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
+# Bell Labs. The other options in this section have no effect if this option is
+# set to NO
+# The default value is: NO.
+
+HAVE_DOT               = NO
+
+# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
+# to run in parallel. When set to 0 doxygen will base this on the number of
+# processors available in the system. You can set it explicitly to a value
+# larger than 0 to get control over the balance between CPU load and processing
+# speed.
+# Minimum value: 0, maximum value: 32, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_NUM_THREADS        = 0
+
+# When you want a differently looking font in the dot files that doxygen
+# generates you can specify the font name using DOT_FONTNAME. You need to make
+# sure dot is able to find the font, which can be done by putting it in a
+# standard location or by setting the DOTFONTPATH environment variable or by
+# setting DOT_FONTPATH to the directory containing the font.
+# The default value is: Helvetica.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTNAME           = Helvetica
+
+# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
+# dot graphs.
+# Minimum value: 4, maximum value: 24, default value: 10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTSIZE           = 10
+
+# By default doxygen will tell dot to use the default font as specified with
+# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
+# the path where dot can find it using this tag.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTPATH           =
+
+# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
+# each documented class showing the direct and indirect inheritance relations.
+# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CLASS_GRAPH            = YES
+
+# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
+# graph for each documented class showing the direct and indirect implementation
+# dependencies (inheritance, containment, and class references variables) of the
+# class with other documented classes.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+COLLABORATION_GRAPH    = YES
+
+# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
+# groups, showing the direct groups dependencies.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GROUP_GRAPHS           = YES
+
+# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
+# collaboration diagrams in a style similar to the OMG's Unified Modeling
+# Language.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+UML_LOOK               = NO
+
+# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
+# class node. If there are many fields or methods and many nodes the graph may
+# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
+# number of items for each type to make the size more manageable. Set this to 0
+# for no limit. Note that the threshold may be exceeded by 50% before the limit
+# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
+# but if the number exceeds 15, the total amount of fields shown is limited to
+# 10.
+# Minimum value: 0, maximum value: 100, default value: 10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+UML_LIMIT_NUM_FIELDS   = 10
+
+# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
+# collaboration graphs will show the relations between templates and their
+# instances.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+TEMPLATE_RELATIONS     = NO
+
+# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
+# YES then doxygen will generate a graph for each documented file showing the
+# direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDE_GRAPH          = YES
+
+# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
+# set to YES then doxygen will generate a graph for each documented file showing
+# the direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDED_BY_GRAPH      = YES
+
+# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable call graphs for selected
+# functions only using the \callgraph command. Disabling a call graph can be
+# accomplished by means of the command \hidecallgraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALL_GRAPH             = NO
+
+# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable caller graphs for selected
+# functions only using the \callergraph command. Disabling a caller graph can be
+# accomplished by means of the command \hidecallergraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALLER_GRAPH           = NO
+
+# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a
+# graphical hierarchy of all classes instead of a textual one.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GRAPHICAL_HIERARCHY    = YES
+
+# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
+# dependencies a directory has on other directories in a graphical way. The
+# dependency relations are determined by the #include relations between the
+# files in the directories.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DIRECTORY_GRAPH        = YES
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
+# generated by dot. For an explanation of the image formats see the section
+# output formats in the documentation of the dot tool (Graphviz (see:
+# http://www.graphviz.org/)).
+# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
+# to make the SVG files visible in IE 9+ (other browsers do not have this
+# requirement).
+# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
+# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
+# png:gdiplus:gdiplus.
+# The default value is: png.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_IMAGE_FORMAT       = png
+
+# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
+# enable generation of interactive SVG images that allow zooming and panning.
+#
+# Note that this requires a modern browser other than Internet Explorer. Tested
+# and working are Firefox, Chrome, Safari, and Opera.
+# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
+# the SVG files visible. Older versions of IE do not have SVG support.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INTERACTIVE_SVG        = NO
+
+# The DOT_PATH tag can be used to specify the path where the dot tool can be
+# found. If left blank, it is assumed the dot tool can be found in the path.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_PATH               =
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that
+# contain dot files that are included in the documentation (see the \dotfile
+# command).
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOTFILE_DIRS           =
+
+# The MSCFILE_DIRS tag can be used to specify one or more directories that
+# contain msc files that are included in the documentation (see the \mscfile
+# command).
+
+MSCFILE_DIRS           =
+
+# The DIAFILE_DIRS tag can be used to specify one or more directories that
+# contain dia files that are included in the documentation (see the \diafile
+# command).
+
+DIAFILE_DIRS           =
+
+# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
+# that will be shown in the graph. If the number of nodes in a graph becomes
+# larger than this value, doxygen will truncate the graph, which is visualized
+# by representing a node as a red box. Note that if the number of direct
+# children of the root node in a graph is already larger than
+# DOT_GRAPH_MAX_NODES, the graph will not be shown at all. Also note that
+# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
+# Minimum value: 0, maximum value: 10000, default value: 50.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_GRAPH_MAX_NODES    = 50
+
+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
+# generated by dot. A depth value of 3 means that only nodes reachable from the
+# root by following a path via at most 3 edges will be shown. Nodes that lie
+# further from the root node will be omitted. Note that setting this option to 1
+# or 2 may greatly reduce the computation time needed for large code bases. Also
+# note that the size of a graph can be further restricted by
+# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
+# Minimum value: 0, maximum value: 1000, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+MAX_DOT_GRAPH_DEPTH    = 0
+
+# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
+# background. This is disabled by default, because dot on Windows does not seem
+# to support this out of the box.
+#
+# Warning: Depending on the platform used, enabling this option may lead to
+# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
+# read).
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_TRANSPARENT        = NO
+
+# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
+# files in one run (i.e. multiple -o and -T options on the command line). This
+# makes dot run faster, but since only newer versions of dot (>1.8.10) support
+# this, this feature is disabled by default.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_MULTI_TARGETS      = NO
+
+# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
+# explaining the meaning of the various boxes and arrows in the dot generated
+# graphs.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GENERATE_LEGEND        = YES
+
+# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
+# files that are used to generate the various graphs.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_CLEANUP            = YES
diff --git a/m4/ax_python_module.m4 b/m4/ax_python_module.m4
new file mode 100644
index 0000000000..107d88264a
--- /dev/null
+++ b/m4/ax_python_module.m4
@@ -0,0 +1,56 @@
+# ===========================================================================
+#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PYTHON_MODULE(modname[, fatal, python])
+#
+# DESCRIPTION
+#
+#   Checks for Python module.
+#
+#   If fatal is non-empty then absence of a module will trigger an error.
+#   The third parameter can either be "python" for Python 2 or "python3" for
+#   Python 3; defaults to Python 3.
+#
+# LICENSE
+#
+#   Copyright (c) 2008 Andrew Collier
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 9
+
+AU_ALIAS([AC_PYTHON_MODULE], [AX_PYTHON_MODULE])
+AC_DEFUN([AX_PYTHON_MODULE],[
+    if test -z "$PYTHON";
+    then
+        if test -z "$3";
+        then
+            PYTHON="python3"
+        else
+            PYTHON="$3"
+        fi
+    fi
+    PYTHON_NAME=`basename "$PYTHON"`
+    AC_MSG_CHECKING($PYTHON_NAME module: $1)
+    $PYTHON -c "import $1" 2>/dev/null
+    if test $? -eq 0;
+    then
+        AC_MSG_RESULT(yes)
+        eval AS_TR_CPP(HAVE_PYMOD_$1)=yes
+    else
+        AC_MSG_RESULT(no)
+        eval AS_TR_CPP(HAVE_PYMOD_$1)=no
+        #
+        if test -n "$2"
+        then
+            AC_MSG_ERROR(failed to find required module $1)
+            exit 1
+        fi
+    fi
+])
\ No newline at end of file
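
For context, the configure-time check that AX_PYTHON_MODULE expands to boils
down to the following shell sketch (the "json" module name and the
HAVE_PYMOD_JSON variable are illustrative, not part of the patch):

```shell
# Sketch of what AX_PYTHON_MODULE([json]) does when configure runs:
# try to import the module with the selected interpreter and record
# the result in a HAVE_PYMOD_* variable.
PYTHON="${PYTHON:-python3}"
if "$PYTHON" -c "import json" 2>/dev/null; then
    echo "checking python3 module: json... yes"
    HAVE_PYMOD_JSON=yes
else
    echo "checking python3 module: json... no"
    HAVE_PYMOD_JSON=no
fi
```

With the second (fatal) argument set, a "no" result would instead abort
configure via AC_MSG_ERROR.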
diff --git a/m4/docs_tool.m4 b/m4/docs_tool.m4
index 3e8814ac8d..39aa348026 100644
--- a/m4/docs_tool.m4
+++ b/m4/docs_tool.m4
@@ -15,3 +15,12 @@ dnl
         AC_MSG_WARN([$2 is not available so some documentation won't be built])
     ])
 ])
+
+AC_DEFUN([AX_DOCS_TOOL_REQ_PROG], [
+dnl
+    AC_ARG_VAR([$1], [Path to $2 tool])
+    AC_PATH_PROG([$1], [$2])
+    AS_IF([! test -x "$ac_cv_path_$1"], [
+        AC_MSG_ERROR([$2 is needed])
+    ])
+])
\ No newline at end of file
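
The new AX_DOCS_TOOL_REQ_PROG macro above behaves roughly like this shell
sketch at configure time (the "ls" tool name is illustrative only; the real
callers would pass a documentation tool such as the one named in the second
macro argument):

```shell
# Rough equivalent of AX_DOCS_TOOL_REQ_PROG([TOOL], [ls]): locate the
# program on PATH, and fail the configure run if it is not executable.
tool=ls
tool_path=$(command -v "$tool" || true)
if [ -x "$tool_path" ]; then
    echo "found $tool at $tool_path"
else
    echo "configure: error: $tool is needed" >&2
    exit 1
fi
```

Unlike the pre-existing AX_DOCS_TOOL_PROG, which only warns and skips the
affected documentation, this variant makes the tool a hard requirement.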
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 04 13:32:19 2021
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 3/3] docs/doxygen: doxygen documentation for grant_table.h
Date: Tue,  4 May 2021 14:31:45 +0100
Message-Id: <20210504133145.767-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210504133145.767-1-luca.fancellu@arm.com>
References: <20210504133145.767-1-luca.fancellu@arm.com>

Modifications to include/public/grant_table.h:

1) Add doxygen tags to:
 - create the Grant Tables section
 - include variables in the generated documentation
 - use @keepindent/@endkeepindent to enclose comment
   sections that are indented with spaces, so that the
   indentation is preserved.
2) Add an .rst file for grant tables on Arm64

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v5 changes:
- Move GNTCOPY_* define next to the flags field
v4 changes:
- Use the @keepindent/@endkeepindent doxygen commands
  to preserve the space-based indentation of the text.
- Drop changes to the grant_entry_v1 comment; it will
  be reworked and included in the docs in a future patch
- Move docs .rst to "common" folder
v3 changes:
- removed tags to skip anonymous union/struct
- moved back comment pointed out by Jan
- moved down defines related to struct gnttab_copy
  as pointed out by Jan
v2 changes:
- Revert back to anonymous union/struct
- add doxygen tags to skip anonymous union/struct
---
 docs/hypercall-interfaces/arm64.rst           |  1 +
 .../common/grant_tables.rst                   |  8 +++
 docs/xen-doxygen/doxy_input.list              |  1 +
 xen/include/public/grant_table.h              | 66 ++++++++++++-------
 4 files changed, 52 insertions(+), 24 deletions(-)
 create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst

diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
index 5e701a2adc..cb4c0d13de 100644
--- a/docs/hypercall-interfaces/arm64.rst
+++ b/docs/hypercall-interfaces/arm64.rst
@@ -8,6 +8,7 @@ Starting points
 .. toctree::
    :maxdepth: 2
 
+   common/grant_tables
 
 
 Functions
diff --git a/docs/hypercall-interfaces/common/grant_tables.rst b/docs/hypercall-interfaces/common/grant_tables.rst
new file mode 100644
index 0000000000..8955ec5812
--- /dev/null
+++ b/docs/hypercall-interfaces/common/grant_tables.rst
@@ -0,0 +1,8 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Grant Tables
+============
+
+.. doxygengroup:: grant_table
+   :project: Xen
+   :members:
diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
index e69de29bb2..233d692fa7 100644
--- a/docs/xen-doxygen/doxy_input.list
+++ b/docs/xen-doxygen/doxy_input.list
@@ -0,0 +1 @@
+xen/include/public/grant_table.h
diff --git a/xen/include/public/grant_table.h b/xen/include/public/grant_table.h
index 84b1d26b36..e1fb91dfc6 100644
--- a/xen/include/public/grant_table.h
+++ b/xen/include/public/grant_table.h
@@ -25,15 +25,19 @@
  * Copyright (c) 2004, K A Fraser
  */
 
+/**
+ * @file
+ * @brief Interface for granting foreign access to page frames, and receiving
+ * page-ownership transfers.
+ */
+
 #ifndef __XEN_PUBLIC_GRANT_TABLE_H__
 #define __XEN_PUBLIC_GRANT_TABLE_H__
 
 #include "xen.h"
 
-/*
- * `incontents 150 gnttab Grant Tables
- *
- * Xen's grant tables provide a generic mechanism to memory sharing
+/**
+ * @brief Xen's grant tables provide a generic mechanism to memory sharing
  * between domains. This shared memory interface underpins the split
  * device drivers for block and network IO.
  *
@@ -51,13 +55,10 @@
  * know the real machine address of a page it is sharing. This makes
  * it possible to share memory correctly with domains running in
  * fully virtualised memory.
- */
-
-/***********************************
+ *
  * GRANT TABLE REPRESENTATION
- */
-
-/* Some rough guidelines on accessing and updating grant-table entries
+ *
+ * Some rough guidelines on accessing and updating grant-table entries
  * in a concurrency-safe manner. For more information, Linux contains a
  * reference implementation for guest OSes (drivers/xen/grant_table.c, see
  * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
@@ -66,6 +67,7 @@
  *     compiler barrier will still be required.
  *
  * Introducing a valid entry into the grant table:
+ * @keepindent
  *  1. Write ent->domid.
  *  2. Write ent->frame:
  *      GTF_permit_access:   Frame to which access is permitted.
@@ -73,20 +75,25 @@
  *                           frame, or zero if none.
  *  3. Write memory barrier (WMB).
  *  4. Write ent->flags, inc. valid type.
+ * @endkeepindent
  *
  * Invalidating an unused GTF_permit_access entry:
+ * @keepindent
  *  1. flags = ent->flags.
  *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
  *  NB. No need for WMB as reuse of entry is control-dependent on success of
  *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
+ * @endkeepindent
  *
  * Invalidating an in-use GTF_permit_access entry:
+ *
  *  This cannot be done directly. Request assistance from the domain controller
  *  which can set a timeout on the use of a grant entry and take necessary
  *  action. (NB. This is not yet implemented!).
  *
  * Invalidating an unused GTF_accept_transfer entry:
+ * @keepindent
  *  1. flags = ent->flags.
  *  2. Observe that !(flags & GTF_transfer_committed). [*]
  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
@@ -97,18 +104,24 @@
  *      transferred frame is written. It is safe for the guest to spin waiting
  *      for this to occur (detect by observing GTF_transfer_completed in
  *      ent->flags).
+ * @endkeepindent
  *
  * Invalidating a committed GTF_accept_transfer entry:
  *  1. Wait for (ent->flags & GTF_transfer_completed).
  *
  * Changing a GTF_permit_access from writable to read-only:
+ *
  *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
  *
  * Changing a GTF_permit_access from read-only to writable:
+ *
  *  Use SMP-safe bit-setting instruction.
+ *
+ * @addtogroup grant_table Grant Tables
+ * @{
  */
 
-/*
+/**
  * Reference to a grant entry in a specified domain's grant table.
  */
 typedef uint32_t grant_ref_t;
@@ -129,15 +142,17 @@ typedef uint32_t grant_ref_t;
 #define grant_entry_v1_t grant_entry_t
 #endif
 struct grant_entry_v1 {
-    /* GTF_xxx: various type and flag information.  [XEN,GST] */
+    /** GTF_xxx: various type and flag information.  [XEN,GST] */
     uint16_t flags;
-    /* The domain being granted foreign privileges. [GST] */
+    /** The domain being granted foreign privileges. [GST] */
     domid_t  domid;
-    /*
+    /**
+     * @keepindent
      * GTF_permit_access: GFN that @domid is allowed to map and access. [GST]
      * GTF_accept_transfer: GFN that @domid is allowed to transfer into. [GST]
      * GTF_transfer_completed: MFN whose ownership transferred by @domid
      *                         (non-translated guests only). [XEN]
+     * @endkeepindent
      */
     uint32_t frame;
 };
@@ -228,7 +243,7 @@ struct grant_entry_header {
 };
 typedef struct grant_entry_header grant_entry_header_t;
 
-/*
+/**
  * Version 2 of the grant entry structure.
  */
 union grant_entry_v2 {
@@ -433,7 +448,7 @@ typedef struct gnttab_transfer gnttab_transfer_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
 
 
-/*
+/**
  * GNTTABOP_copy: Hypervisor based copy
  * source and destinations can be eithers MFNs or, for foreign domains,
  * grant references. the foreign domain has to grant read/write access
@@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
  * bytes to be copied.
  */
 
-#define _GNTCOPY_source_gref      (0)
-#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
-#define _GNTCOPY_dest_gref        (1)
-#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
-
 struct gnttab_copy {
     /* IN parameters. */
     struct gnttab_copy_ptr {
@@ -468,6 +478,10 @@ struct gnttab_copy {
     } source, dest;
     uint16_t      len;
     uint16_t      flags;          /* GNTCOPY_* */
+#define _GNTCOPY_source_gref      (0)
+#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
+#define _GNTCOPY_dest_gref        (1)
+#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
     /* OUT parameters. */
     int16_t       status;
 };
@@ -579,7 +593,7 @@ struct gnttab_swap_grant_ref {
 typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
 
-/*
+/**
  * Issue one or more cache maintenance operations on a portion of a
  * page granted to the calling domain by a foreign domain.
  */
@@ -588,8 +602,8 @@ struct gnttab_cache_flush {
         uint64_t dev_bus_addr;
         grant_ref_t ref;
     } a;
-    uint16_t offset; /* offset from start of grant */
-    uint16_t length; /* size within the grant */
+    uint16_t offset; /**< offset from start of grant */
+    uint16_t length; /**< size within the grant */
 #define GNTTAB_CACHE_CLEAN          (1u<<0)
 #define GNTTAB_CACHE_INVAL          (1u<<1)
 #define GNTTAB_CACHE_SOURCE_GREF    (1u<<31)
@@ -673,6 +687,10 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
     "operation not done; try again"             \
 }
 
+/**
+ * @}
+ */
+
 #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 04 13:32:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:32:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122397.230882 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvA1-0003y3-1j; Tue, 04 May 2021 13:32:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122397.230882; Tue, 04 May 2021 13:32:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvA0-0003xw-UV; Tue, 04 May 2021 13:32:24 +0000
Received: by outflank-mailman (input) for mailman id 122397;
 Tue, 04 May 2021 13:32:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldv9z-0003nw-9s
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:32:23 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c4ce2a06-0283-4f51-99ab-0c1796251262;
 Tue, 04 May 2021 13:32:04 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 06073ED1;
 Tue,  4 May 2021 06:31:54 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 8E25F3F73B;
 Tue,  4 May 2021 06:31:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c4ce2a06-0283-4f51-99ab-0c1796251262
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v5 0/3] Use Doxygen and sphinx for html documentation
Date: Tue,  4 May 2021 14:31:42 +0100
Message-Id: <20210504133145.767-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

This series introduces Doxygen into the sphinx html docs generation.
One benefit is that most of the documentation stays in the Xen
source files, where it is easier to maintain; on the other hand,
Doxygen has some limitations that have to be addressed by modifying
the current codebase (for example, Doxygen can't parse anonymous
structures/unions).

To reproduce the documentation, Xen must be compiled first, because
most of the headers are generated at compile time by the makefiles.

The steps to generate the sphinx html docs follow; some packages
may be required on your machine, and the autoconf script will
suggest whatever is missing. Here I'm building the arm64 docs (the
only ones introduced for now by this series):

./configure
make -C xen XEN_TARGET_ARCH="arm64" CROSS_COMPILE="aarch64-linux-gnu-" menuconfig
make -C xen XEN_TARGET_ARCH="arm64" CROSS_COMPILE="aarch64-linux-gnu-"
make -C docs XEN_TARGET_ARCH="arm64" sphinx-html

Now docs/sphinx/html/ contains the generated docs, starting
from the index.html page.

Luca Fancellu (3):
  docs: add doxygen support for html documentation
  docs: hypercalls sphinx skeleton for generated html
  docs/doxygen: doxygen documentation for grant_table.h

 .gitignore                                    |    7 +
 config/Docs.mk.in                             |    2 +
 docs/Makefile                                 |   46 +-
 docs/conf.py                                  |   48 +-
 docs/configure                                |  258 ++
 docs/configure.ac                             |   15 +
 docs/hypercall-interfaces/arm32.rst           |    4 +
 docs/hypercall-interfaces/arm64.rst           |   33 +
 .../common/grant_tables.rst                   |    8 +
 docs/hypercall-interfaces/index.rst.in        |    7 +
 docs/hypercall-interfaces/x86_64.rst          |    4 +
 docs/index.rst                                |    8 +
 docs/xen-doxygen/customdoxygen.css            |   36 +
 docs/xen-doxygen/doxy-preprocessor.py         |  110 +
 docs/xen-doxygen/doxy_input.list              |    1 +
 docs/xen-doxygen/doxygen_include.h.in         |   32 +
 docs/xen-doxygen/footer.html                  |   21 +
 docs/xen-doxygen/header.html                  |   56 +
 docs/xen-doxygen/mainpage.md                  |    5 +
 docs/xen-doxygen/xen_project_logo_165x67.png  |  Bin 0 -> 18223 bytes
 docs/xen.doxyfile.in                          | 2316 +++++++++++++++++
 m4/ax_python_module.m4                        |   56 +
 m4/docs_tool.m4                               |    9 +
 xen/include/public/grant_table.h              |   66 +-
 24 files changed, 3118 insertions(+), 30 deletions(-)
 create mode 100644 docs/hypercall-interfaces/arm32.rst
 create mode 100644 docs/hypercall-interfaces/arm64.rst
 create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst
 create mode 100644 docs/hypercall-interfaces/index.rst.in
 create mode 100644 docs/hypercall-interfaces/x86_64.rst
 create mode 100644 docs/xen-doxygen/customdoxygen.css
 create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
 create mode 100644 docs/xen-doxygen/doxy_input.list
 create mode 100644 docs/xen-doxygen/doxygen_include.h.in
 create mode 100644 docs/xen-doxygen/footer.html
 create mode 100644 docs/xen-doxygen/header.html
 create mode 100644 docs/xen-doxygen/mainpage.md
 create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png
 create mode 100644 docs/xen.doxyfile.in
 create mode 100644 m4/ax_python_module.m4

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 04 13:33:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122409.230893 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvB5-0004DE-C6; Tue, 04 May 2021 13:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122409.230893; Tue, 04 May 2021 13:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvB5-0004D7-8s; Tue, 04 May 2021 13:33:31 +0000
Received: by outflank-mailman (input) for mailman id 122409;
 Tue, 04 May 2021 13:33:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8884=J7=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ldvB4-0004D0-6M
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:33:30 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0d::60a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47bc4bf8-73f5-4d4d-9b37-b8835af71570;
 Tue, 04 May 2021 13:33:27 +0000 (UTC)
Received: from DB7PR02CA0018.eurprd02.prod.outlook.com (2603:10a6:10:52::31)
 by VI1PR08MB5341.eurprd08.prod.outlook.com (2603:10a6:803:135::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.35; Tue, 4 May
 2021 13:33:24 +0000
Received: from DB5EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:52:cafe::e7) by DB7PR02CA0018.outlook.office365.com
 (2603:10a6:10:52::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.27 via Frontend
 Transport; Tue, 4 May 2021 13:33:24 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT020.mail.protection.outlook.com (10.152.20.134) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4087.27 via Frontend Transport; Tue, 4 May 2021 13:33:24 +0000
Received: ("Tessian outbound 8ca198b738d3:v91");
 Tue, 04 May 2021 13:33:24 +0000
Received: from 518802a104b1.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 775F3061-7E55-434F-AF8B-6D4CF107A98A.1; 
 Tue, 04 May 2021 13:33:13 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 518802a104b1.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 04 May 2021 13:33:13 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR08MB4238.eurprd08.prod.outlook.com (2603:10a6:803:f2::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.38; Tue, 4 May
 2021 13:33:10 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9%4]) with mapi id 15.20.4087.044; Tue, 4 May 2021
 13:33:10 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0308.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:197::7) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.24 via Frontend Transport; Tue, 4 May 2021 13:33:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47bc4bf8-73f5-4d4d-9b37-b8835af71570
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SCmY5xtt53UnXwyGAt02nSthVV0hK+IwmbNXSyzE5M0=;
 b=d8rz+cbzY6OUlA/LXGZW+6rqtLulFgqqj+9OH9zX1P7CD+av8fJowabmOUf5zMCOB6z5NqZdfOtX1MfT1qdIX8fODDejbIV/ifNqYNwNXJ+8PYzB19clhzoP+mdCboLhsvmStoomH899IntewPyE+BwBAR5hVTwz5YySATKkWZg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: c72431353c9cc090
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=utf-8
Subject: Re: [PATCH v4 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <e3f816df-a3ee-f880-ad6f-68c9cc2db517@suse.com>
Date: Tue, 4 May 2021 14:33:02 +0100
Cc: Bertrand Marquis <bertrand.marquis@arm.com>,
 wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
Content-Transfer-Encoding: quoted-printable
Message-Id: <5D19A76C-DBD5-463D-975C-65FBDA0297C4@arm.com>
References: <20210504094606.7125-1-luca.fancellu@arm.com>
 <20210504094606.7125-4-luca.fancellu@arm.com>
 <37e5b461-40fe-ac78-59b9-033ff8cdc6d1@suse.com>
 <1853929B-AC45-42AF-8FE4-7B23C700B2E2@arm.com>
 <e3f816df-a3ee-f880-ad6f-68c9cc2db517@suse.com>
To: Jan Beulich <jbeulich@suse.com>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO4P123CA0308.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:197::7) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e29ac58d-ec28-43d9-4b1d-08d90f013156
X-MS-TrafficTypeDiagnostic: VI1PR08MB4238:|VI1PR08MB5341:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB5341163DB338410D4A13486BE45A9@VI1PR08MB5341.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4238
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	80e855ea-4f70-444c-ad6e-08d90f012870
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 13:33:24.6673
 (UTC)



> On 4 May 2021, at 14:28, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 04.05.2021 15:09, Luca Fancellu wrote:
>>> On 4 May 2021, at 12:48, Jan Beulich <jbeulich@suse.com> wrote:
>>> On 04.05.2021 11:46, Luca Fancellu wrote:
>>>> @@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>>>> * bytes to be copied.
>>>> */
>>>>
>>>> -#define _GNTCOPY_source_gref      (0)
>>>> -#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>>>> -#define _GNTCOPY_dest_gref        (1)
>>>> -#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>>>> -
>>>> struct gnttab_copy {
>>>>    /* IN parameters. */
>>>>    struct gnttab_copy_ptr {
>>>> @@ -471,6 +481,12 @@ struct gnttab_copy {
>>>>    /* OUT parameters. */
>>>>    int16_t       status;
>>>> };
>>>> +
>>>> +#define _GNTCOPY_source_gref      (0)
>>>> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>>>> +#define _GNTCOPY_dest_gref        (1)
>>>> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>>>
>>> Didn't you say you agree with moving this back up some, next to the
>>> field using these?
>>
>> My mistake! I'll move it in the next patch, did you spot anything else I might have forgotten from what we agreed?
>
> No, thanks. I don't think I have any more comments to make on this
> series (once this last aspect got addressed, and assuming no new
> issues get introduced). But to be clear on that side as well - I
> don't think I'm up to actually ack-ing the patch (let alone the
> entire series).

Ok, at least would you mind doing a review of the patches we discussed
together?

Cheers,
Luca

>
> Jan



From xen-devel-bounces@lists.xenproject.org Tue May 04 13:33:38 2021
Date: Tue, 4 May 2021 15:33:32 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 9/9] vtpmmgr: Support GetRandom passthrough on TPM 2.0
Message-ID: <20210504133332.pt56xjrxvbnz2htd@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-10-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504124842.220445-10-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Tue, 04 May 2021 08:48:42 -0400, wrote:
> GetRandom passthrough currently fails when using vtpmmgr with a hardware
> TPM 2.0.
> vtpmmgr (8): INFO[VTPM]: Passthrough: TPM_GetRandom
> vtpm (12): vtpm_cmd.c:120: Error: TPM_GetRandom() failed with error code (30)
> 
> When running on TPM 2.0 hardware, vtpmmgr needs to convert the TPM 1.2
> TPM_ORD_GetRandom into a TPM2 TPM_CC_GetRandom command.  Besides the
> differing ordinal, the TPM 1.2 uses 32bit sizes for the request and
> response (vs. 16bit for TPM2).
> 
> Place the random output directly into the tpmcmd->resp and build the
> packet around it.  This avoids bouncing through an extra buffer, but the
> header has to be written after grabbing the random bytes so we have the
> number of bytes to include in the size.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
>  stubdom/vtpmmgr/marshal.h          | 10 +++++++
>  stubdom/vtpmmgr/vtpm_cmd_handler.c | 48 ++++++++++++++++++++++++++++++
>  2 files changed, 58 insertions(+)
> 
> diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
> index dce19c6439..20da22af09 100644
> --- a/stubdom/vtpmmgr/marshal.h
> +++ b/stubdom/vtpmmgr/marshal.h
> @@ -890,6 +890,15 @@ inline int sizeof_TPM_AUTH_SESSION(const TPM_AUTH_SESSION* auth) {
>  	return rv;
>  }
>  
> +static
> +inline int sizeof_TPM_RQU_HEADER(BYTE* ptr) {
> +	int rv = 0;
> +	rv += sizeof_UINT16(ptr);
> +	rv += sizeof_UINT32(ptr);
> +	rv += sizeof_UINT32(ptr);
> +	return rv;
> +}
> +
>  static
>  inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
>  		TPM_TAG tag,
> @@ -923,5 +932,6 @@ inline int unpack3_TPM_RQU_HEADER(BYTE* ptr, UINT32* pos, UINT32 max,
>  #define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
>  #define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
>  #define unpack3_TPM_RSP_HEADER(p, l, m, t, s, r) unpack3_TPM_RQU_HEADER(p, l, m, t, s, r)
> +#define sizeof_TPM_RSP_HEADER(p) sizeof_TPM_RQU_HEADER(p)
>  
>  #endif
> diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> index 2ac14fae77..7ca1d9df94 100644
> --- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
> +++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> @@ -47,6 +47,7 @@
>  #include "vtpm_disk.h"
>  #include "vtpmmgr.h"
>  #include "tpm.h"
> +#include "tpm2.h"
>  #include "tpmrsa.h"
>  #include "tcg.h"
>  #include "mgmt_authority.h"
> @@ -772,6 +773,52 @@ static int vtpmmgr_permcheck(struct tpm_opaque *opq)
>  	return 1;
>  }
>  
> +TPM_RESULT vtpmmgr_handle_getrandom(struct tpm_opaque *opaque,
> +				    tpmcmd_t* tpmcmd)
> +{
> +	TPM_RESULT status = TPM_SUCCESS;
> +	TPM_TAG tag;
> +	UINT32 size;
> +	UINT32 rand_offset;
> +	UINT32 rand_size;
> +	TPM_COMMAND_CODE ord;
> +	BYTE *p;
> +
> +	p = unpack_TPM_RQU_HEADER(tpmcmd->req, &tag, &size, &ord);
> +
> +	if (!hw_is_tpm2()) {
> +		size = TCPA_MAX_BUFFER_LENGTH;
> +		TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len,
> +					      tpmcmd->resp, &size));
> +		tpmcmd->resp_len = size;
> +
> +		return TPM_SUCCESS;
> +	}


We need to check for the size of the request before unpacking (which
doesn't check for it), don't we?

> +	/* TPM_GetRandom req: <header><uint32 num bytes> */
> +	unpack_UINT32(p, &rand_size);
> +
> +	/* Call TPM2_GetRandom but return a TPM_GetRandom response. */
> +	/* TPM_GetRandom resp: <header><uint32 num bytes><num random bytes> */
> +        rand_offset = sizeof_TPM_RSP_HEADER(tpmcmd->resp) +
> +		      sizeof_UINT32(tpmcmd->resp);

There is a spurious indentation here, at first sight I even thought it
was part of the comment.


We also need to check that rand_size is not too large?
- that the returned data won't overflow tpmcmd->resp + rand_offset
- that it fits in a UINT16

Also, TPM2_GetRandom casts bytesRequested into UINT16*, that's bogus, it
should use a local UINT16 variable and assign *bytesRequested.

> +	TPMTRYRETURN(TPM2_GetRandom(&rand_size, tpmcmd->resp + rand_offset));
> +
> +	p = pack_TPM_RSP_HEADER(tpmcmd->resp, TPM_TAG_RSP_COMMAND,
> +				rand_offset + rand_size, status);
> +	p = pack_UINT32(p, rand_size);
> +	tpmcmd->resp_len = rand_offset + rand_size;
> +
> +	return status;
> +
> +abort_egress:
> +	tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
> +	pack_TPM_RSP_HEADER(tpmcmd->resp, tag + 3, tpmcmd->resp_len, status);
> +
> +	return status;
> +}
> +
>  TPM_RESULT vtpmmgr_handle_cmd(
>  		struct tpm_opaque *opaque,
>  		tpmcmd_t* tpmcmd)
> @@ -842,6 +889,7 @@ TPM_RESULT vtpmmgr_handle_cmd(
>  		switch(ord) {
>  		case TPM_ORD_GetRandom:
>  			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
> +			return vtpmmgr_handle_getrandom(opaque, tpmcmd);
>  			break;

Drop the break, then. I would say also move (or drop) the log, like the
other cases.

>  		case TPM_ORD_PcrRead:
>  			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
> -- 
> 2.30.2
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:36:42 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161700-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161700: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 13:36:39 +0000

flight 161700 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161700/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5e321ded302da4d8c5d5dd953423d9b748ab3775
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  276 days
Failing since        152366  2020-08-01 20:49:34 Z  275 days  461 attempts
Testing same since   161700  2021-05-04 05:42:29 Z    0 days    1 attempts

------------------------------------------------------------
5925 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1605801 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:48:15 2021
Date: Tue, 4 May 2021 15:47:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v3 02/13] libs/guest: allow fetching a specific CPUID
 leaf from a cpu policy
Message-ID: <YJFQifk/0nXCuMJT@Air-de-Roger>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-3-roger.pau@citrix.com>
 <76e5e596-24bc-9d91-e654-cef1115e5139@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <76e5e596-24bc-9d91-e654-cef1115e5139@citrix.com>
MIME-Version: 1.0

On Tue, May 04, 2021 at 12:59:43PM +0100, Andrew Cooper wrote:
> On 30/04/2021 16:52, Roger Pau Monne wrote:
> > @@ -822,3 +825,28 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t p,
> >      errno = 0;
> >      return 0;
> >  }
> > +
> > +int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
> > +                            uint32_t leaf, uint32_t subleaf,
> > +                            xen_cpuid_leaf_t *out)
> > +{
> > +    unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
> > +    xen_cpuid_leaf_t *tmp;
> > +    int rc;
> > +
> > +    rc = xc_cpu_policy_serialise(xch, policy, policy->leaves, &nr_leaves,
> > +                                 NULL, 0);
> > +    if ( rc )
> > +        return rc;
> 
> Sorry for not spotting this last time.
> 
> You don't need to serialise.  You can look up leaf/subleaf in O(1) time
> from cpuid_policy, which was a design goal of the structure originally.
> 
> It is probably best to adapt most of the first switch statement in
> guest_cpuid() to be a libx86 function.  The asserts aren't massively
> interesting to keep, and instead of messing around with nospec, just
> have the function return a pointer into the cpuid_policy (or NULL), and
> have a single block_speculation() in Xen.

libx86 already has array_access_nospec, so would it be fine to just
leave the code as-is, rather than adding a block_speculation in Xen and
dropping the array_access_nospec accessors?

> We'll also want a unit test
> to go with this new function to check that out-of-range leaves don't
> result in out-of-bounds reads.

Sure.

Also, what's your opinion regarding xc_cpu_policy_get_msr? Should I
also split part of guest_rdmsr into libx86, in order to fetch the MSRs
present in msr_policy?

Thanks, Roger.
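For illustration, the O(1) lookup Andrew describes — a helper that returns a
pointer into the policy, or NULL for out-of-range leaves — can be sketched
roughly as below. The struct layout, array bounds, and names here are invented
stand-ins, not the real cpuid_policy from xen/lib/x86; only the shape of the
helper follows the suggestion in the thread.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Toy stand-in for xen_cpuid_leaf_t. */
typedef struct {
    uint32_t a, b, c, d;
} cpuid_leaf_t;

/* Invented bounds for illustration only. */
#define NR_BASIC 8
#define NR_EXTD  8

/* Toy stand-in for cpuid_policy: fixed-size arrays indexed by leaf,
 * which is what makes constant-time lookup possible. */
struct toy_policy {
    cpuid_leaf_t basic[NR_BASIC]; /* leaves 0x00000000-0x00000007 */
    cpuid_leaf_t extd[NR_EXTD];   /* leaves 0x80000000-0x80000007 */
};

/* Return a pointer into the policy for (leaf, subleaf), or NULL when
 * the leaf is out of range, so callers never read out of bounds.
 * Sub-leaves are omitted in this toy version. */
static const cpuid_leaf_t *
toy_find_leaf(const struct toy_policy *p, uint32_t leaf, uint32_t subleaf)
{
    if ( subleaf != 0 )
        return NULL;

    if ( leaf < NR_BASIC )
        return &p->basic[leaf];

    if ( leaf >= 0x80000000u && (leaf - 0x80000000u) < NR_EXTD )
        return &p->extd[leaf - 0x80000000u];

    return NULL;
}
```

A unit test of the kind requested would then assert that in-range leaves yield
the expected pointer and that out-of-range leaves (e.g. NR_BASIC or
0x90000000) yield NULL rather than an out-of-bounds read.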


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:50:33 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools: fix incorrect suggestions for XENCONSOLED_TRACE on FreeBSD
Date: Tue,  4 May 2021 15:50:21 +0200
Message-Id: <20210504135021.8394-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

--log does not take a file name; it specifies what is supposed to be logged.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/hotplug/FreeBSD/rc.d/xencommons.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/hotplug/FreeBSD/rc.d/xencommons.in b/tools/hotplug/FreeBSD/rc.d/xencommons.in
index ccd5a9b055..36dd717944 100644
--- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
+++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
@@ -23,7 +23,7 @@ required_files="/dev/xen/xenstored"
 
 XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
 XENCONSOLED_PIDFILE="@XEN_RUN_DIR@/xenconsoled.pid"
-#XENCONSOLED_TRACE="@XEN_LOG_DIR@/xenconsole-trace.log"
+#XENCONSOLED_TRACE="none|guest|hv|all"
 #XENSTORED_TRACE="@XEN_LOG_DIR@/xen/xenstore-trace.log"
 
 load_rc_config $name
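The distinction the patch fixes can be summarised as a config fragment. This
is a sketch only: the exact way xencommons passes the variable through, and
the --log-dir flag, are assumptions here, not taken from the patch.

```shell
# XENCONSOLED_TRACE selects *what* xenconsoled logs, not where the log
# goes -- so a pathname is the wrong kind of value for it.
XENCONSOLED_TRACE="guest"    # one of: none|guest|hv|all

# The destination is controlled separately (flag name assumed):
#   xenconsoled --log=${XENCONSOLED_TRACE} --log-dir=/var/log/xen/console
```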


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:54:08 2021
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools: fix incorrect suggestions for XENCONSOLED_TRACE on NetBSD
Date: Tue,  4 May 2021 15:53:55 +0200
Message-Id: <20210504135355.8668-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

--log does not take a file name; it specifies what is supposed to be logged.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/hotplug/NetBSD/rc.d/xencommons.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/hotplug/NetBSD/rc.d/xencommons.in b/tools/hotplug/NetBSD/rc.d/xencommons.in
index 3981787eac..7f739206c1 100644
--- a/tools/hotplug/NetBSD/rc.d/xencommons.in
+++ b/tools/hotplug/NetBSD/rc.d/xencommons.in
@@ -22,7 +22,7 @@ required_files="/kern/xen/privcmd"
 
 XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
 XENCONSOLED_PIDFILE="@XEN_RUN_DIR@/xenconsoled.pid"
-#XENCONSOLED_TRACE="@XEN_LOG_DIR@/xenconsole-trace.log"
+#XENCONSOLED_TRACE="none|guest|hv|all"
 #XENSTORED_TRACE="@XEN_LOG_DIR@/xenstore-trace.log"
 
 xen_precmd()


From xen-devel-bounces@lists.xenproject.org Tue May 04 13:58:57 2021
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-5-andrew.cooper3@citrix.com>
 <17501fdd-b9f0-3493-7d0d-8c5333fafa45@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/5] x86/cpuid: Simplify recalculate_xstate()
Message-ID: <3f9ae28f-2fb7-0f4f-511b-93ba74ec3aeb@citrix.com>
Date: Tue, 4 May 2021 14:58:42 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <17501fdd-b9f0-3493-7d0d-8c5333fafa45@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0201.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 95fcebb9-f397-4408-fa00-08d90f04bdc4
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5534:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB55347B4880C344CE03549DFEBA5A9@SJ0PR03MB5534.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 95fcebb9-f397-4408-fa00-08d90f04bdc4
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 13:58:48.9353
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: i78OzWjweZvroikYv3YgRzcrILd0fNAW46ow0Hh7XrbYDpU3o8Qj7mQHD4mMangYDMTV8DcJy9ZlkdQ62J+d+7TppJmbi0NZj/ycI3JhJW8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5534
X-OriginatorOrg: citrix.com

On 04/05/2021 13:43, Jan Beulich wrote:
> On 03.05.2021 17:39, Andrew Cooper wrote:
>> Make use of the new xstate_uncompressed_size() helper rather than maintaining
>> the running calculation while accumulating feature components.
>>
>> The rest of the CPUID data can come direct from the raw cpuid policy.  All
>> per-component data forms an ABI through the behaviour of the X{SAVE,RSTOR}*
>> instructions, and are constant.
>>
>> Use for_each_set_bit() rather than opencoding a slightly awkward version of
>> it.  Mask the attributes in ecx down based on the visible features.  This
>> isn't actually necessary for any components or attributes defined at the time
>> of writing (up to AMX), but is added out of an abundance of caution.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> Using min() in for_each_set_bit() leads to awful code generation, as it
>> prohibits the optimisations for spotting that the bitmap is <= BITS_PER_LONG.
>> As p->xstate is long enough already, use a BUILD_BUG_ON() instead.
>> ---
>>  xen/arch/x86/cpuid.c | 52 +++++++++++++++++-----------------------------------
>>  1 file changed, 17 insertions(+), 35 deletions(-)
>>
>> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
>> index 752bf244ea..c7f8388e5d 100644
>> --- a/xen/arch/x86/cpuid.c
>> +++ b/xen/arch/x86/cpuid.c
>> @@ -154,8 +154,7 @@ static void sanitise_featureset(uint32_t *fs)
>>  static void recalculate_xstate(struct cpuid_policy *p)
>>  {
>>      uint64_t xstates = XSTATE_FP_SSE;
>> -    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
>> -    unsigned int i, Da1 = p->xstate.Da1;
>> +    unsigned int i, ecx_bits = 0, Da1 = p->xstate.Da1;
>>
>>      /*
>>       * The Da1 leaf is the only piece of information preserved in the common
>> @@ -167,61 +166,44 @@ static void recalculate_xstate(struct cpuid_policy *p)
>>          return;
>>
>>      if ( p->basic.avx )
>> -    {
>>          xstates |= X86_XCR0_YMM;
>> -        xstate_size = max(xstate_size,
>> -                          xstate_offsets[X86_XCR0_YMM_POS] +
>> -                          xstate_sizes[X86_XCR0_YMM_POS]);
>> -    }
>>
>>      if ( p->feat.mpx )
>> -    {
>>          xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
>> -        xstate_size = max(xstate_size,
>> -                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
>> -                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
>> -    }
>>
>>      if ( p->feat.avx512f )
>> -    {
>>          xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
>> -        xstate_size = max(xstate_size,
>> -                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
>> -                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
>> -    }
>>
>>      if ( p->feat.pku )
>> -    {
>>          xstates |= X86_XCR0_PKRU;
>> -        xstate_size = max(xstate_size,
>> -                          xstate_offsets[X86_XCR0_PKRU_POS] +
>> -                          xstate_sizes[X86_XCR0_PKRU_POS]);
>> -    }
>>
>> -    p->xstate.max_size  =  xstate_size;
>> +    /* Subleaf 0 */
>> +    p->xstate.max_size =
>> +        xstate_uncompressed_size(xstates & ~XSTATE_XSAVES_ONLY);
>>      p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
>>      p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
>>
>> +    /* Subleaf 1 */
>>      p->xstate.Da1 = Da1;
>>      if ( p->xstate.xsaves )
>>      {
>> +        ecx_bits |= 3; /* Align64, XSS */
> Align64 is also needed for p->xstate.xsavec afaict. I'm not really
> convinced to tie one to the other either. I would rather think this
> is a per-state-component attribute independent of other features.
> Those state components could in turn have a dependency (like XSS
> ones on XSAVES).

There is no such thing as a system with xsavec != xsaves (although there
does appear to be one line of AMD CPU with xsaves and not xgetbv1).

Through some (likely unintentional) coupling of data in CPUID, the
compressed dynamic size (CPUID.0xd[1].ebx) is required for xsavec, and
is strictly defined as XCR0|XSS, which forces xsaves into the mix.

In fact, an error with the spec is that userspace can calculate the
kernel's choice of MSR_XSS using CPUID data alone - there is not
currently an ambiguous combination of sizes of supervisor state
components.  This fact also makes XSAVEC suboptimal even for userspace
to use, because it is forced to allocate larger-than-necessary buffers.

In principle, we could ignore the coupling and support xsavec without
xsaves, but given that XSAVES is strictly more useful than XSAVEC, I'm
not sure it is worth trying to support.
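
The point above, that the compressed size in CPUID.0xd[1].ebx is fully determined by the XCR0|XSS component data, can be illustrated with a rough standalone sketch. This is not Xen code: the component table below uses made-up sample values, and it simply applies the XSAVEC/XSAVES layout rule (legacy area plus header, then each enabled component >= 2 in order, 64-byte aligned where a component is Align64):

```c
#include <assert.h>
#include <stdint.h>

/* Per-component sizes, indexed by state-component number.  Sample values
 * for illustration only; components 0/1 (x87/SSE) live inside the fixed
 * 512-byte legacy area.  Pretend component 8 has the Align64 attribute. */
static const uint32_t comp_size[] = { 0, 0, 256, 64, 64, 64, 512, 1024, 8 };
static const uint64_t comp_align64 = 1ull << 8;

/* Compressed (XSAVEC/XSAVES) image size for a given XCR0|XSS mask:
 * 512-byte legacy area + 64-byte XSAVE header, then each enabled
 * component >= 2 in component order, rounded up to a 64-byte boundary
 * first if the component is Align64. */
static uint32_t compressed_size(uint64_t states)
{
    uint32_t size = 512 + 64;
    unsigned int i;

    for ( i = 2; i < sizeof(comp_size) / sizeof(comp_size[0]); ++i )
    {
        if ( !(states & (1ull << i)) )
            continue;
        if ( comp_align64 & (1ull << i) )
            size = (size + 63) & ~63u; /* round up to 64-byte boundary */
        size += comp_size[i];
    }

    return size;
}
```

Because every supervisor (XSS) component the kernel enables feeds into this single ebx total, userspace watching CPUID.0xd[1].ebx can infer the kernel's MSR_XSS choice, which is the observation made above.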

>
> I'm also not happy at all to see you use a literal 3 here. We have
> a struct for this, after all.
>
>>          p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
>>          p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
>>      }
>> -    else
>> -        xstates &= ~XSTATE_XSAVES_ONLY;
>>
>> -    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
>> +    /* Subleafs 2+ */
>> +    xstates &= ~XSTATE_FP_SSE;
>> +    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
>> +    for_each_set_bit ( i, &xstates, 63 )
>>      {
>> -        uint64_t curr_xstate = 1ul << i;
>> -
>> -        if ( !(xstates & curr_xstate) )
>> -            continue;
>> -
>> -        p->xstate.comp[i].size   = xstate_sizes[i];
>> -        p->xstate.comp[i].offset = xstate_offsets[i];
>> -        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
>> -        p->xstate.comp[i].align  = curr_xstate & xstate_align;
>> +        /*
>> +         * Pass through size (eax) and offset (ebx) directly.  Visibility of
>> +         * attributes in ecx limited by visible features in Da1.
>> +         */
>> +        p->xstate.raw[i].a = raw_cpuid_policy.xstate.raw[i].a;
>> +        p->xstate.raw[i].b = raw_cpuid_policy.xstate.raw[i].b;
>> +        p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;
> To me, going to raw[].{a,b,c,d} looks like a backwards move, to be
> honest. Both this and the literal 3 above make it harder to locate
> all the places that need changing if a new bit (like xfd) is to be
> added. It would be better if grep-ing for an existing field name
> (say "xss") would easily turn up all involved places.

It's specifically to reduce the number of areas needing editing when a
new state is added, and therefore the number of opportunities to screw things up.

As said in the commit message, I'm not even convinced that the ecx_bits
mask is necessary, as new attributes only come in with new behaviours of
new state components.

If we choose to skip the ecx masking, then this loop body becomes even
simpler.  Just p->xstate.raw[i] = raw_cpuid_policy.xstate.raw[i].

Even if Intel do break with tradition, and retrofit new attributes into
existing subleafs, leaking them to guests won't cause anything to
explode (the bits are still reserved after all), and we can fix anything
necessary at that point.
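
As a rough illustration of the loop under discussion, the following is a standalone sketch, not the actual Xen structures: "struct leaf" and the open-coded bit loop stand in for Xen's cpuid_leaf and for_each_set_bit():

```c
#include <assert.h>
#include <stdint.h>

struct leaf { uint32_t a, b, c, d; };

/*
 * Sketch of the subleaf-2+ logic: size (eax) and offset (ebx) pass
 * straight through from the raw policy, while the attribute bits in
 * ecx are masked down to what is visible to the guest.
 */
static void copy_xstate_subleaves(struct leaf *guest, const struct leaf *raw,
                                  uint64_t xstates, uint32_t ecx_mask)
{
    unsigned int i;

    for ( i = 2; i < 63; ++i )   /* open-coded for_each_set_bit() */
    {
        if ( !(xstates & (1ull << i)) )
            continue;

        guest[i].a = raw[i].a;            /* size */
        guest[i].b = raw[i].b;            /* offset */
        guest[i].c = raw[i].c & ecx_mask; /* attributes: XSS, Align64 */
        guest[i].d = 0;                   /* reserved */
    }
}
```

Dropping the ecx_mask argument reduces the loop body to a single struct assignment per subleaf, which is the further simplification mentioned above.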

~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 04 13:59:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 13:59:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122456.230981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvZn-0006nk-CQ; Tue, 04 May 2021 13:59:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122456.230981; Tue, 04 May 2021 13:59:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvZn-0006nd-8i; Tue, 04 May 2021 13:59:03 +0000
Received: by outflank-mailman (input) for mailman id 122456;
 Tue, 04 May 2021 13:59:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c6am=J7=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ldvZm-0006nU-OI
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 13:59:02 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a9b4bee1-97c0-41d4-8398-32571905c029;
 Tue, 04 May 2021 13:59:01 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.6 AUTH)
 with ESMTPSA id J02652x44Dwu0xu
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 4 May 2021 15:58:56 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9b4bee1-97c0-41d4-8398-32571905c029
ARC-Seal: i=1; a=rsa-sha256; t=1620136736; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=gl8n6LEUf177IKD+nxWuzBd3rcbyssXGgAUhGGr1d5Rd9FFzP2ZDqKr3QIPf79G/P3
    HYXpQ1FAH4/gOuRblkORui+7SxNkoeMsEgkZl9i133B2gcUvLGENNIajUMYZc+Scoy/4
    h5GEbiuZ7OQIswKwBgVLGvaDXkXxwzmmev2abTuuyp+Q1k/daez7diEdoKRr98VvKBt2
    KV3wJcdqAPVpuX0lBRGavOJ+oGf0cZPHXJuB/ncYGWpDBJRJjqEY9PcYUWOgoop5G+SM
    01Q0ZIJlLAyr4brqg+uMjaV69zCOROa8bBW2g6FKGW8ck/35d6NPY7vJjoZyNpRGHGwu
    6VQQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620136736;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=aoG9+qL2ivaPMEoU1EGRyiASZMMlBWNhdK9xa+nEY4A=;
    b=LoVBmF6ooWY1hxCAZuOXiOKteM11Ww3a3sv4c7Spjl7TNBBhBLV97vZH/3fmgAJymv
    v8cBe9WvJNT8zEJTD0q67bILopKV015Q/c3paTpzSb0kBWQMOIhaXEXjxHk4A8MgjY29
    EofCIf773fri8J9ehTNRI9FpUartpQGcGSMj2/3VxomQg7FpoyvDy4tG2/PZYhkZFLAc
    mBqNHwpRaGdVdLGgtz0C58Y7I8KrbEdQ9JJCUoygq23HyxHbYJxE9MdpiafHwEYxWx+U
    vKXD5qAPBvVXe40MaoDHC4d/qZ24aOLdpL8MVJ6PjPNOSs07VDdH7rGDaGe+GU8yopXD
    dBUQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620136736;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=aoG9+qL2ivaPMEoU1EGRyiASZMMlBWNhdK9xa+nEY4A=;
    b=XoouJV1kjuF5ZNcfSJjrAnpwUJju3FgFcx5tMdsiLv9M4pfAkA6IpfvqM5R09zg4Rb
    slxU7/ePlO/XYwtrR0jtCAUe/J4f4nkppI7jQkvKf7k1DzWI985RnM3MsAWtEv2O0tN8
    xRnXFfVDQmUT5bSGv1SJq6OagQfSPQa6eSkQxVrSVF6FTe0vhASrujm6zeKev4EB4c4E
    EzV5po0vCOPDAUy4yuZu8aGotfReijbTdg/a+OdXQz8NIT1HaOhqd19YuIbLJid3gz5a
    FtVZ+zpCvtNh3Fa8O518fkyNOKBNp2bH7Zngbz8FLFPRxf8fEmKZXV/i9xEM8ybpMj12
    3J0Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgtl+1b1FMstFZvCqIQN5N7TvWFg4vzhFVdoKAuQ"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools: handle missing xencommons in xenconsoled.service
Date: Tue,  4 May 2021 15:58:54 +0200
Message-Id: <20210504135854.10355-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

sysconfig files are not mandatory.
Adjust xenconsoled.service to handle a missing sysconfig file by
prepending a dash to the to-be-sourced filename.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/hotplug/Linux/systemd/xenconsoled.service.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/hotplug/Linux/systemd/xenconsoled.service.in b/tools/hotplug/Linux/systemd/xenconsoled.service.in
index 8e333b114e..1f03de9041 100644
--- a/tools/hotplug/Linux/systemd/xenconsoled.service.in
+++ b/tools/hotplug/Linux/systemd/xenconsoled.service.in
@@ -9,7 +9,7 @@ Type=simple
 Environment=XENCONSOLED_ARGS=
 Environment=XENCONSOLED_TRACE=none
 Environment=XENCONSOLED_LOG_DIR=@XEN_LOG_DIR@/console
-EnvironmentFile=@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
+EnvironmentFile=-@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
 ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
 ExecStartPre=/bin/mkdir -p ${XENCONSOLED_LOG_DIR}
 ExecStart=@sbindir@/xenconsoled -i --log=${XENCONSOLED_TRACE} --log-dir=${XENCONSOLED_LOG_DIR} $XENCONSOLED_ARGS
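
For context (not part of the patch): systemd treats an EnvironmentFile= path prefixed with '-' as optional, so a missing file is silently skipped instead of failing unit start. A minimal sketch of the resulting behaviour, with an illustrative path:

```ini
[Service]
# The leading '-' tells systemd to ignore this file if it does not
# exist, rather than failing the service; without it, a missing
# sysconfig file prevents xenconsoled from starting.
EnvironmentFile=-/etc/sysconfig/xencommons
```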


From xen-devel-bounces@lists.xenproject.org Tue May 04 14:07:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 14:07:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122469.230993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvi1-0007tq-7e; Tue, 04 May 2021 14:07:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122469.230993; Tue, 04 May 2021 14:07:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvi1-0007tj-4Y; Tue, 04 May 2021 14:07:33 +0000
Received: by outflank-mailman (input) for mailman id 122469;
 Tue, 04 May 2021 14:07:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldvhz-0007tb-It
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 14:07:31 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e4882147-9be8-4eb2-aa55-0198c304447f;
 Tue, 04 May 2021 14:07:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4882147-9be8-4eb2-aa55-0198c304447f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620137250;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=Evkx80lpR4d0ahpeYk3ZR0XUOTpDMjEy6K7Hw1hpxJ8=;
  b=ITKEqzGFHb9h1rmc3gWLbLmggr0nz+SGK5+wr9zHU9y6vwWqpvRk+DMc
   IrFYpQt66QO0/RE5Z0wilruD62XU0rYnZ+qm7/ow7NsSmNv9t+XcXNDa1
   m5e9I1pG0rla/NOHZCPseHDIr8r8TGVjnbguaMXJ2ryaHZ4EKjJudP7cd
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 43020350
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43020350"
Date: Tue, 4 May 2021 15:07:24 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v3 01/13] libxl: don't ignore the return value from
 xc_cpuid_apply_policy
Message-ID: <YJFVHGJpTHAypuS4@perard>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-2-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210430155211.3709-2-roger.pau@citrix.com>

On Fri, Apr 30, 2021 at 05:51:59PM +0200, Roger Pau Monne wrote:
> Also change libxl__cpuid_legacy to propagate the error from
> xc_cpuid_apply_policy into callers.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> ---
> Changes since v2:
>  - Use 'r' for xc_cpuid_apply_policy return value.
>  - Use LOGEVD to print error message.
> 
> Changes since v1:
>  - Return ERROR_FAIL on error.
> ---

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 04 14:08:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 14:08:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122472.231005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldviY-0007zJ-HE; Tue, 04 May 2021 14:08:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122472.231005; Tue, 04 May 2021 14:08:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldviY-0007zB-DV; Tue, 04 May 2021 14:08:06 +0000
Received: by outflank-mailman (input) for mailman id 122472;
 Tue, 04 May 2021 14:08:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldviW-0007z3-PO
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 14:08:04 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0595fd8d-2ce1-4a50-b58d-bf5c74176926;
 Tue, 04 May 2021 14:08:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0595fd8d-2ce1-4a50-b58d-bf5c74176926
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620137283;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=UmSxk6F4LN5hsPAmNnS6XBgeWQsTTA6KoKvWWnscwcA=;
  b=CrCjDqEzV0iBQ0A49M/BMbvaZWcCYp0KJvEHSjCzM1liqZSZPNErW74e
   h5+MxOV3Rorc/Jpkzl6xPyjc123qUjS9NI1Imk1BJn5plYhqY9FX0A9Er
   MIy0enoi42QNJTLDLJbkA5RMhJWrKPAvsRBSY6dXiw0KgiznTaJsufU4Q
   Y=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 44545315
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="44545315"
Date: Tue, 4 May 2021 15:08:00 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v3 12/13] libs/{light,guest}: implement
 xc_cpuid_apply_policy in libxl
Message-ID: <YJFVQMSfTyOeuw3u@perard>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-13-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210430155211.3709-13-roger.pau@citrix.com>

On Fri, Apr 30, 2021 at 05:52:10PM +0200, Roger Pau Monne wrote:
> With the addition of the xc_cpu_policy_* interfaces, libxl can now have
> better control over the cpu policy.  This allows removing the
> xc_cpuid_apply_policy function and instead coding the required bits
> directly in libxl__cpuid_legacy.
> 
> Remove xc_cpuid_apply_policy.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v2:
>  - Use 'r' for libxc return values.
>  - Fix comment about making a cpu policy compatible.
>  - Use LOG*D macros.
> ---

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 04 14:09:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 14:09:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122475.231017 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvjO-000888-QE; Tue, 04 May 2021 14:08:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122475.231017; Tue, 04 May 2021 14:08:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvjO-000881-N7; Tue, 04 May 2021 14:08:58 +0000
Received: by outflank-mailman (input) for mailman id 122475;
 Tue, 04 May 2021 14:08:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldvjN-00087t-77
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 14:08:57 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4abb085-0306-432c-91c7-969a1a7edb80;
 Tue, 04 May 2021 14:08:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4abb085-0306-432c-91c7-969a1a7edb80
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620137336;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:content-transfer-encoding:in-reply-to;
  bh=NlUUtvPrySGI4Wn+g8O2ZK/jJcGmGAP2doqCMUNxoBs=;
  b=WCaZwlhZ9My+MwxmgrIFrmPyN0XUrn4XYklrYLQ2VF4/nTA1UdUGfpVw
   2p9q18Dso/jZYDzZzwGklzO9KG1zqxftyRZUlzmCbFHiw6zXTDo301/jq
   Vp9ptVddBt+98LTE6/Q1dYUinZDtuKT8TJ9urHOEBZDnPI8ib1DN0HkP+
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 43038264
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43038264"
Date: Tue, 4 May 2021 15:08:51 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Roger Pau Monne <roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH v3 13/13] libs/guest: (re)move xc_cpu_policy_apply_cpuid
Message-ID: <YJFVc97ZFYMhWdxg@perard>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-14-roger.pau@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="iso-8859-1"
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210430155211.3709-14-roger.pau@citrix.com>

On Fri, Apr 30, 2021 at 05:52:11PM +0200, Roger Pau Monne wrote:
> Move the logic from xc_cpu_policy_apply_cpuid into libxl, now that the
> xc_cpu_policy_* helpers allow modifying a cpu policy. By moving such
> parsing into libxl directly we can get rid of xc_xend_cpuid, as libxl
> will now implement its own private type for storing CPUID
> information, which currently matches xc_xend_cpuid.
> 
> Note the function logic is moved as-is, but requires adapting to the
> libxl coding style.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v2:
>  - Use LOG*D.
>  - Pass a gc to apply_policy.
>  - Use 'r' for libxc return values.
> ---

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 04 14:17:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 14:17:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122482.231029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvri-0000d2-NQ; Tue, 04 May 2021 14:17:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122482.231029; Tue, 04 May 2021 14:17:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldvri-0000cv-Jr; Tue, 04 May 2021 14:17:34 +0000
Received: by outflank-mailman (input) for mailman id 122482;
 Tue, 04 May 2021 14:17:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldvrg-0000cq-J8
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 14:17:32 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b7f01392-f16a-42c5-b0b6-22756509218c;
 Tue, 04 May 2021 14:17:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7f01392-f16a-42c5-b0b6-22756509218c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620137851;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=eufgxCqONvYW1r7dAxyrmXvPvv9Cmpg2WSksENOOIto=;
  b=G0ToPfZy/HkD79rWflTPfzk0Nmx1RbTHK4/y9Nv7x7Fo9O+3hCLdXRZZ
   ImukAqBSQlZCgIwN7mysa9ItEclLBxqmiiRi36q6QdE2vBhi92bAe4bys
   eQq5q8Fk4iL/sVOuRJynBT/zpqxd1+NBtNFHGG4XeyoYI6wl4Xo8NEYMF
   M=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43021780
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43021780"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=clnQt8P5nvnK1+aC5eiQSc6I+BaRQYkd6HoO+pIyvYWzlDjKT/CgcZloMVlHYVcZLGBgUWCmAymweK31xnl+t24+6BHe5b8D8ABLGgK577Daf6SYuJZ9/PQEPKWxi0L431dug7kDcE2UQrYn8G1a4/Ulw2mkJNS6T50EHVQMBJpLICdNVf+m8VPm6iUzd8USnH5bzL7Njd1STYsk3M2KgPufzFsidiHGmGOtPVE64yngeTPvRrfhNTkOato8FRlGNZt9/3QcdUVgoTIaPexcb2ydal/SdPVQ40xH6/LJMz5q2KvwCGIZ25TE6hk9HEJtoAPJAikYRvpvjK5isHCP3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0IzHNUnHkHRN05OTXJBVEIrofGBAQ94N1TE62xdLd3k=;
 b=Y6xsKFJJflhHmWVzxX32W61NjJ7Rl+Y88bnjSAIQU4YGgYcoEwrdHRWwrqd9HxeqrV+pvjmRvj9S0XD2Z8PlMEcQwLwCZ119vX5MUwsUkPUouPp0e0DSstQ8k07b7K1qFdqST6gHZ4KWNDAVFMUmjW4gJ2GE6YeJ3fiO/bY+LMwOGfZpXIu0/Of0EAk/mo+Wja97nZfsTBBCzIPvSCY3hYycNwxBgHY41csPImEBj2Z/kVnNzcqcwQ6+fOmoHk0d7Txw8/8DVWiUX4AL9Y6ZUjXlKtpiAPmeGA57JizdyQ++DttviGNwOhun2T90S5vGw62E6bVwPAsggGjS9Ds3xA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0IzHNUnHkHRN05OTXJBVEIrofGBAQ94N1TE62xdLd3k=;
 b=hJumQfsPpscaZxP0InR2EFo02wVGn51uEk5zMeDF/ZIEPYQzVu0WyBJZbUP8OEoFYsRM8MXL5D/TdPyUiLjC8KOXdx3JvNSoge1cY1NQqD8HDHmNW/1sEOauroA0YEGw2wpaPjqhXjg15gj0M+aEAmIKV2aahUI7EquZgcgQCiU=
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-6-andrew.cooper3@citrix.com>
 <5e6511ca-83bd-8a43-202e-949b4d19b1ab@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/5] x86/cpuid: Fix handling of xsave dynamic leaves
Message-ID: <1279476a-f99d-59a4-7fed-1aee37dbe204@citrix.com>
Date: Tue, 4 May 2021 15:17:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <5e6511ca-83bd-8a43-202e-949b4d19b1ab@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0002.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:150::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9c95041f-901f-4f06-043b-08d90f075259
X-MS-TrafficTypeDiagnostic: BYAPR03MB4119:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4119823AA5CA3565ABFA230FBA5A9@BYAPR03MB4119.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 9c95041f-901f-4f06-043b-08d90f075259
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 14:17:17.1458
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 8hn4P1ia/iBlCjpfNC78qmYsbCH6XXUggQN1JMvmndW9z7ePGn/bKyaVw3LwXvRztC1gyKM7WZQyMFiUfhRj0tCo2s7ERQzTck+65cw/j1c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4119
X-OriginatorOrg: citrix.com

On 04/05/2021 13:56, Jan Beulich wrote:
> On 03.05.2021 17:39, Andrew Cooper wrote:
>> If max leaf is greater than 0xd but xsave not available to the guest, then the
>> current XSAVE size should not be filled in.  This is a latent bug for now as
>> the guest max leaf is 0xd, but will become problematic in the future.
>>
>> The comment concerning XSS state is wrong.  VT-x doesn't manage host/guest
>> state automatically, but there is provision for "host only" bits to be set, so
>> the implications are still accurate.
>>
>> Introduce {xstate,hw}_compressed_size() helpers to mirror the uncompressed
>> ones.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> albeit with a remark:
>
>> +unsigned int xstate_compressed_size(uint64_t xstates)
>> +{
>> +    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
>> +
>> +    xstates &= ~XSTATE_FP_SSE;
>> +    for_each_set_bit ( i, &xstates, 63 )
>> +    {
>> +        if ( test_bit(i, &xstate_align) )
>> +            size = ROUNDUP(size, 64);
>> +
>> +        size += xstate_sizes[i];
>> +    }
>> +
>> +    /* In debug builds, cross-check our calculation with hardware. */
>> +    if ( IS_ENABLED(CONFIG_DEBUG) )
>> +    {
>> +        unsigned int hwsize;
>> +
>> +        xstates |= XSTATE_FP_SSE;
>> +        hwsize = hw_compressed_size(xstates);
>> +
>> +        if ( size != hwsize )
>> +            printk_once(XENLOG_ERR "%s(%#"PRIx64") size %#x != hwsize %#x\n",
>> +                        __func__, xstates, size, hwsize);
>> +        size = hwsize;
> To be honest, already on the earlier patch I was wondering whether
> it does any good to override size here: That'll lead to different
> behavior on debug vs release builds. If the log message is not
> paid attention to, we'd then end up with longer term breakage.

Well - our options are pass hardware size, or BUG(), because getting
this wrong will cause memory corruption.

The BUG() option is a total pain when developing new support - the first
version of this patch did use BUG(), but after hitting it 4 times in a
row (caused by issues with constants elsewhere), I decided against it.

If we had something which was a mix between WARN_ONCE() and a proper
printk() explaining what was going on, then I would have used that.
Maybe it's time to introduce one...

~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 04 14:31:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 14:31:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122495.231040 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldw5K-0002Ko-4p; Tue, 04 May 2021 14:31:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122495.231040; Tue, 04 May 2021 14:31:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldw5K-0002Kh-1u; Tue, 04 May 2021 14:31:38 +0000
Received: by outflank-mailman (input) for mailman id 122495;
 Tue, 04 May 2021 14:31:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=c6am=J7=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ldw5I-0002Kc-Io
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 14:31:36 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b18d0a1f-964e-4a84-9ed0-25c3bdd4e64a;
 Tue, 04 May 2021 14:31:35 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.6 AUTH)
 with ESMTPSA id J02652x44EVU18F
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Tue, 4 May 2021 16:31:30 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b18d0a1f-964e-4a84-9ed0-25c3bdd4e64a
ARC-Seal: i=1; a=rsa-sha256; t=1620138691; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=kSmVLoVjuW7ipSwAwohtZoqtJDhnwRhD1u14OrAk3cGarmM5BqjhAtlZdpKIt0jZio
    OzFSx0LIrnUxindCy9E1klF9ACPDwme5zOavzjOzHWpc3NWLadVXuzWc84zSSm/16HFW
    mt8lstMAafQpu8PvfI+UA9egk6i3Z96zYw8If5/QB33FzxvqmAW4QvSLKkZ2ej7fMg+6
    iGJKfRt4MfQE1Z+X6wdAec5TqRitZg4xV83e1Vb+UW0zh6X5z0tNXHkHqlIT0AXT4BSV
    dOeHFHxFMSJiCZZPxaTl8668nTGrLz5AJThL1ihOznnEB3PfKsUc7XEK3F1p1DiepXx1
    TfwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620138691;
    s=strato-dkim-0002; d=strato.com;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=PMt+6Y2zd1tLCLAyzAtE9v1z1cHR2MrHavRovAkAb3k=;
    b=T+7riswwGJG6XnBywZ6Q9KbJMFtolNqdJskFgVFWLp0ypM6OYGpTcXlJiTe+VXh8WI
    5wNGsUJEIKEFKLJyW6j5pBrDa+Yf9SxJ4eDgD/ntblkJSB6a1Swr3MO5wct9f5Lle9sl
    IUo0JoIA4NcHz95Tj66dV5j3mB6ZvhR8OUG4YjmyHI4Y6mS73CmKjq5R4XvVjM4xckcH
    8W0sWF294F4DYedIHqZY3UbZMTLRvoViA/cYtPlp6HUEW10yK2CcKkY0YkXLWqlz24xW
    hDKPfSlusPf54stm6P05crB+OF/pEJLS8aXTNKi/j/5LX145y8RgI19QoRB8RWlbp4/B
    dT9g==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620138691;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-Id:Date:Subject:Cc:To:From:Cc:Date:From:Subject:Sender;
    bh=PMt+6Y2zd1tLCLAyzAtE9v1z1cHR2MrHavRovAkAb3k=;
    b=e39bkJMH6P+puU/IcojW2Y2rzZSA0HAHZxbu/Qlm7iWzs2F3PZKKfzrPmI/OpFLnTG
    Ps8TSRnswKaGw+JJou7I75MONufqWo9aCrq135OXU49WYUxUMUUQsBIdmUi8yeDmieey
    YSlfiB8y7DSAvPD5QD1GLJUEOqPC+Q76gFWxcmCsx6Pkim+w7yyROjW+mvPq64quIQKW
    nE2iSWsM7J4be65uId4wJ94XBVhY6VuQCkxaQUuW44E6UvDED4D+nyQl5cDrWVPAS1i3
    775inPhxUuB4LQGR3URcUjcIsZwlZVo6MY1h+1WEaLne0dTICYXIdIxsq6yaudQ41YJ8
    As+A==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QXkBR9MXjAuzpIG0mv9coXAgtl+1b1FMstFZvCqIQN5N7TvWFg4vzhFVdoKAuQ"
X-RZG-CLASS-ID: mo00
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v1] tools: handle missing xencommons in xen-init-dom0.service
Date: Tue,  4 May 2021 16:31:28 +0200
Message-Id: <20210504143128.16456-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

sysconfig files are not mandatory.
Adjust xen-init-dom0.service to handle a missing sysconfig file by
prepending a dash to the to-be-sourced filename.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/hotplug/Linux/systemd/xen-init-dom0.service.in | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/hotplug/Linux/systemd/xen-init-dom0.service.in b/tools/hotplug/Linux/systemd/xen-init-dom0.service.in
index beed3126c6..98779b8507 100644
--- a/tools/hotplug/Linux/systemd/xen-init-dom0.service.in
+++ b/tools/hotplug/Linux/systemd/xen-init-dom0.service.in
@@ -7,7 +7,7 @@ ConditionPathExists=/proc/xen/capabilities
 [Service]
 Type=oneshot
 RemainAfterExit=true
-EnvironmentFile=@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
+EnvironmentFile=-@CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons
 ExecStartPre=/bin/grep -q control_d /proc/xen/capabilities
 ExecStart=@LIBEXEC_BIN@/xen-init-dom0 $XEN_DOM0_UUID
 
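[Editorial note: the "-" prefix added above tells systemd to ignore a missing
EnvironmentFile rather than fail the unit. A rough shell analogue of that
semantic, using a hypothetical path (systemd parses the file itself; this
only models the behaviour):]

```shell
# "-" prefix analogue: a readable file is sourced, a missing one is
# silently skipped instead of causing a failure.
conf="$(mktemp -d)/xencommons"      # hypothetical config location

load_env() {
    [ -r "$1" ] && . "$1"
    return 0
}

load_env "$conf"                    # file absent: no error, nothing set
printf 'before: %s\n' "${XEN_DOM0_UUID:-unset}"

echo 'XEN_DOM0_UUID=00000000-0000-0000-0000-000000000001' > "$conf"
load_env "$conf"                    # file present: variables are loaded
printf 'after: %s\n' "$XEN_DOM0_UUID"
```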


From xen-devel-bounces@lists.xenproject.org Tue May 04 14:39:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 14:39:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122500.231053 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldwCn-0002Z4-Tw; Tue, 04 May 2021 14:39:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122500.231053; Tue, 04 May 2021 14:39:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldwCn-0002Yx-Qp; Tue, 04 May 2021 14:39:21 +0000
Received: by outflank-mailman (input) for mailman id 122500;
 Tue, 04 May 2021 14:39:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldwCm-0002Ys-O8
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 14:39:20 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2270ca45-af1e-4ff0-a54a-c848ddfeb789;
 Tue, 04 May 2021 14:39:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2270ca45-af1e-4ff0-a54a-c848ddfeb789
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620139159;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=5PynkD+3oVBooqyRQbevV7cegVq//k1+w7uE7ORl0TU=;
  b=Q5YeS6SyH7skE7PdDCTGkvUk738JNYDasr9u1XygxMEo7xdQeRkbHTYq
   d1yDOmpHUqdPF0isIDWHtFqYcYkECYdyKyH6FayEvMEjXrkPfL/QFZF24
   1JsZeA+1JCQODfCi7GF3EY1Bxv+k1BCg2a7RrGf3IlteXgN07OwD/RJJd
   Y=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 43041665
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43041665"
Date: Tue, 4 May 2021 15:39:15 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: <xen-devel@lists.xenproject.org>, <george.dunlap@citrix.com>, "Nick
 Rosbrook" <rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RFC v2 1/7] libxl: remove extra whitespace from gentypes.py
Message-ID: <YJFckz3BLlOr8/I+@perard>
References: <cover.1614734296.git.rosbrookn@ainfosec.com>
 <7a75b14f66acac499a0b17ab1c5595549421bac7.1614734296.git.rosbrookn@ainfosec.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <7a75b14f66acac499a0b17ab1c5595549421bac7.1614734296.git.rosbrookn@ainfosec.com>

On Tue, Mar 02, 2021 at 08:46:13PM -0500, Nick Rosbrook wrote:
> No functional change, just remove the extra whitespace from gentypes.py.
> 
> Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>

Acked-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 04 15:03:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 15:03:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122512.231065 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldwZh-000560-Rz; Tue, 04 May 2021 15:03:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122512.231065; Tue, 04 May 2021 15:03:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldwZh-00055t-Or; Tue, 04 May 2021 15:03:01 +0000
Received: by outflank-mailman (input) for mailman id 122512;
 Tue, 04 May 2021 15:03:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldwZg-00055m-Rm
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 15:03:00 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c89b14fa-7a20-4a7f-8e60-a47346bcbc15;
 Tue, 04 May 2021 15:02:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Tue, 4 May 2021 16:02:55 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: <xen-devel@lists.xenproject.org>, <george.dunlap@citrix.com>, "Nick
 Rosbrook" <rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RFC v2 6/7] libxl: implement device add/remove/destroy
 functions generation
Message-ID: <YJFiH9dFdlq2l87k@perard>
References: <cover.1614734296.git.rosbrookn@ainfosec.com>
 <5986715fe1d677533b67c06e9561cd716716d46a.1614734296.git.rosbrookn@ainfosec.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <5986715fe1d677533b67c06e9561cd716716d46a.1614734296.git.rosbrookn@ainfosec.com>

On Tue, Mar 02, 2021 at 08:46:18PM -0500, Nick Rosbrook wrote:
> +def libxl_func_define_device_add(func):
> +    s = ''
> +
> +    return_type = func.return_type.typename
> +    if isinstance(func.return_type, idl.Enumeration):
> +        return_type = idl.integer.typename
> +
> +    params = ', '.join([ ty.make_arg(name) for (name,ty) in func.params ])
> +
> +    s += '{0} {1}({2})\n'.format(return_type, func.name, params)
> +    s += '{\n'
> +    s += '\tAO_CREATE(ctx, domid, ao_how);\n'
> +    s += '\tlibxl__ao_device *aodev;\n\n'
> +    s += '\tGCNEW(aodev);\n'
> +    s += '\tlibxl__prepare_ao_device(ao, aodev);\n'
> +    s += '\taodev->action = LIBXL__DEVICE_ACTION_ADD;\n'
> +    s += '\taodev->callback = device_addrm_aocomplete;\n'
> +    s += '\taodev->update_json = true;\n'
> +    s += '\tlibxl__{0}(egc, domid, type, aodev);\n\n'.format(func.rawname)
> +    s += '\treturn AO_INPROGRESS;\n'
> +    s += '}\n'

That's kind of hard to read. I think we could use Python's triple-quote
(or triple double-quote), ''' or """, to get a multi-line string and
remove all those \t and \n escapes.
Something like:

    s = '''
    {ret} {func}({params})
    {{
        libxl__{rawname}(gc);
        return ERROR_FAIL;
    }}
    '''.format(ret=return_type, func=func.name, params=params,
               rawname=func.rawname)

That would produce some extra indentation in the generated C file, but
that doesn't matter to me. It could be removed with textwrap.dedent()
if needed.
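For illustration, here is a standalone sketch of that suggestion (not part of the patch; the helper name and its parameters are made up, and the C body is borrowed from the quoted generator above): a triple-quoted template is dedented with textwrap.dedent() and then filled in with str.format(), instead of concatenating '\t'/'\n' escapes line by line.

```python
import textwrap

def make_device_add(ret, func, params, rawname):
    # Hypothetical helper: build the generated C function from a
    # triple-quoted template. textwrap.dedent() strips the template's
    # common leading indentation, so the emitted C has no extra indent.
    return textwrap.dedent('''\
        {ret} {func}({params})
        {{
            libxl__{rawname}(gc);
            return AO_INPROGRESS;
        }}
        ''').format(ret=ret, func=func, params=params, rawname=rawname)

print(make_device_add('int', 'libxl_device_vfb_add',
                      'libxl_ctx *ctx, uint32_t domid',
                      'device_vfb_add'))
```

Note that dedent() must run before format(), so that substituted values cannot disturb the computed margin.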

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 04 15:34:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 15:34:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122525.231077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldx47-0007rB-DY; Tue, 04 May 2021 15:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122525.231077; Tue, 04 May 2021 15:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldx47-0007r4-A1; Tue, 04 May 2021 15:34:27 +0000
Received: by outflank-mailman (input) for mailman id 122525;
 Tue, 04 May 2021 15:34:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=n4Og=J7=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ldx46-0007qz-Cs
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 15:34:26 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id be18ebcb-b508-4eef-833f-7de75807f3bf;
 Tue, 04 May 2021 15:34:25 +0000 (UTC)
Date: Tue, 4 May 2021 17:34:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 08/13] libs/guest: make a cpu policy compatible with
 older Xen versions
Message-ID: <YJFpdA8qmYca9bUO@Air-de-Roger>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-9-roger.pau@citrix.com>
 <51ee228a-2d53-2dd4-55cf-233d81ba4958@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <51ee228a-2d53-2dd4-55cf-233d81ba4958@suse.com>
MIME-Version: 1.0

On Mon, May 03, 2021 at 01:09:41PM +0200, Jan Beulich wrote:
> On 30.04.2021 17:52, Roger Pau Monne wrote:
> > @@ -1086,3 +1075,42 @@ int xc_cpu_policy_calc_compatible(xc_interface *xch,
> >  
> >      return rc;
> >  }
> > +
> > +int xc_cpu_policy_make_compatible(xc_interface *xch, xc_cpu_policy_t policy,
> > +                                  bool hvm)
> 
> I'm concerned about the naming, and in particular the two very different
> meanings of "compatible" for xc_cpu_policy_calc_compatible() and this
> new one. I'm afraid I don't have a good suggestion though, short of
> making the name even longer and inserting "backwards".

Would xc_cpu_policy_make_compat_412 be acceptable?

That's the most concise one I can think of.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 04 15:43:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 15:43:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122532.231089 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxCw-0000OA-AM; Tue, 04 May 2021 15:43:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122532.231089; Tue, 04 May 2021 15:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxCw-0000O3-6Y; Tue, 04 May 2021 15:43:34 +0000
Received: by outflank-mailman (input) for mailman id 122532;
 Tue, 04 May 2021 15:43:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldxCu-0000Ny-GT
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 15:43:32 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 353851e9-a008-44b6-a43e-7882a7291ca8;
 Tue, 04 May 2021 15:43:31 +0000 (UTC)
Date: Tue, 4 May 2021 16:43:27 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: <xen-devel@lists.xenproject.org>, <george.dunlap@citrix.com>, "Nick
 Rosbrook" <rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RFC v2 5/7] libxl: add device function definitions to
 libxl_types.idl
Message-ID: <YJFrn7+4AQt7K2Fa@perard>
References: <cover.1614734296.git.rosbrookn@ainfosec.com>
 <2cd96b7e884c6f0c2667ef7499ff7179b99ea635.1614734296.git.rosbrookn@ainfosec.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <2cd96b7e884c6f0c2667ef7499ff7179b99ea635.1614734296.git.rosbrookn@ainfosec.com>

On Tue, Mar 02, 2021 at 08:46:17PM -0500, Nick Rosbrook wrote:
> diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> index 5b85a7419f..550af7a1c7 100644
> --- a/tools/libs/light/libxl_types.idl
> +++ b/tools/libs/light/libxl_types.idl
> @@ -666,6 +668,24 @@ libxl_device_vfb = Struct("device_vfb", [
>      ("keymap",        string),
>      ])
>  
> +libxl_device_vfb_add = DeviceAddFunction("device_vfb_add",
> +    device_param=("vfb", libxl_device_vfb),
> +    extra_params=[("ao_how", libxl_asyncop_how)],
> +    return_type=libxl_error
> +)
> +
> +libxl_device_vfb_remove = DeviceRemoveFunction("device_vfb_remove",
> +    device_param=("vfb", libxl_device_vfb),
> +    extra_params=[("ao_how", libxl_asyncop_how)],
> +    return_type=libxl_error
> +)
> +
> +libxl_device_vfb_destroy = DeviceDestroyFunction("device_vfb_destroy",
> +    device_param=("vfb", libxl_device_vfb),
> +    extra_params=[("ao_how", libxl_asyncop_how)],
> +    return_type=libxl_error
> +)
> +
>  libxl_device_vkb = Struct("device_vkb", [
>      ("backend_domid", libxl_domid),
>      ("backend_domname", string),

In a future version of the series that is deemed ready, I think it would
be useful to have this change in libxl_types.idl and the change that
removes the macro call from the C file in the same patch. That would make
it possible to spot discrepancies during review.

The change in the idl for vfb is different from the change in the C
file:

> --- a/tools/libs/light/libxl_console.c
> +++ b/tools/libs/light/libxl_console.c
> @@ -723,8 +723,6 @@ static LIBXL_DEFINE_UPDATE_DEVID(vfb)
>  static LIBXL_DEFINE_DEVICE_FROM_TYPE(vfb)
> 
>  /* vfb */
> -LIBXL_DEFINE_DEVICE_REMOVE(vfb)
> -
>  DEFINE_DEVICE_TYPE_STRUCT(vfb, VFB, vfbs,
>      .skip_attach = 1,
>      .set_xenstore_config = (device_set_xenstore_config_fn_t)

No add function ;-)

And libxl doesn't build anymore with the last patch applied. There may
also be issues with functions that are static and thus not accessible
from other C files.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 04 15:47:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 15:47:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122536.231101 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxGE-0000ZX-QF; Tue, 04 May 2021 15:46:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122536.231101; Tue, 04 May 2021 15:46:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxGE-0000ZQ-N2; Tue, 04 May 2021 15:46:58 +0000
Received: by outflank-mailman (input) for mailman id 122536;
 Tue, 04 May 2021 15:46:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldxGC-0000ZL-Sp
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 15:46:56 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b636abbc-61c9-4747-adca-f6747d3d3dea;
 Tue, 04 May 2021 15:46:55 +0000 (UTC)
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, "Wei
 Liu" <wl@xen.org>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-3-roger.pau@citrix.com>
 <76e5e596-24bc-9d91-e654-cef1115e5139@citrix.com>
 <YJFQifk/0nXCuMJT@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 02/13] libs/guest: allow fetching a specific CPUID leaf
 from a cpu policy
Message-ID: <73c52a4c-b801-59f3-eba6-e00d7200bdd5@citrix.com>
Date: Tue, 4 May 2021 16:46:44 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <YJFQifk/0nXCuMJT@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0082.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:138::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 61234598-3f56-417a-860b-08d90f13d59e
X-MS-TrafficTypeDiagnostic: BY5PR03MB5345:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB5345388746B49F5B8ED4D854BA5A9@BY5PR03MB5345.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 61234598-3f56-417a-860b-08d90f13d59e
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 15:46:51.5075
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 3T/PHhWZwZNW+lrW435f1N9wKpPW5D1JG97FOvM0SD6z1w2VXVLkFfO4OTvt0yVDl89f19jgrTA4FUPVg5HbCDwwi39+ZgZdJtIMhnrGA1c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5345
X-OriginatorOrg: citrix.com

On 04/05/2021 14:47, Roger Pau Monné wrote:
> On Tue, May 04, 2021 at 12:59:43PM +0100, Andrew Cooper wrote:
>> On 30/04/2021 16:52, Roger Pau Monne wrote:
>>> @@ -822,3 +825,28 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t p,
>>>      errno = 0;
>>>      return 0;
>>>  }
>>> +
>>> +int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
>>> +                            uint32_t leaf, uint32_t subleaf,
>>> +                            xen_cpuid_leaf_t *out)
>>> +{
>>> +    unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
>>> +    xen_cpuid_leaf_t *tmp;
>>> +    int rc;
>>> +
>>> +    rc = xc_cpu_policy_serialise(xch, policy, policy->leaves, &nr_leaves,
>>> +                                 NULL, 0);
>>> +    if ( rc )
>>> +        return rc;
>> Sorry for not spotting this last time.
>>
>> You don't need to serialise.  You can look up leaf/subleaf in O(1) time
>> from cpuid_policy, which was a design goal of the structure originally.
>>
>> It is probably best to adapt most of the first switch statement in
>> guest_cpuid() to be a libx86 function.  The asserts aren't massively
>> interesting to keep, and instead of messing around with nospec, just
>> have the function return a pointer into the cpuid_policy (or NULL), and
>> have a single block_speculation() in Xen.
> libx86 already has array_access_nospec, so I think it's fine to just
> leave the code as-is instead of adding a block_speculation in Xen and
> dropping the array_access_nospec accessors?

The same libx86 function should be used to simplify
x86_cpuid_copy_from_buffer(), which has a similar open-coded construct
for looking up the leaf/subleaf.

You might need some macro trickery to make const and non-const
versions, or have the main version non-const, plus a const-qualified
inline helper which casts away constness on the input but restores it
on the output.
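A minimal sketch of that const-wrapper pattern (the types and function names below are invented for illustration; they are not the real Xen/libx86 structures):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Illustrative stand-in types only. */
typedef struct { uint32_t a, b, c, d; } leaf_t;
struct policy { leaf_t leaves[4]; };

/* The main, non-const version of the lookup (hypothetical name). */
static leaf_t *policy_get_leaf(struct policy *p, uint32_t idx)
{
    return idx < 4 ? &p->leaves[idx] : NULL;
}

/*
 * Const-qualified inline helper: casts constness away on the input and
 * restores it on the output, so the lookup logic is written only once.
 */
static inline const leaf_t *policy_get_leaf_const(const struct policy *p,
                                                  uint32_t idx)
{
    return policy_get_leaf((struct policy *)p, idx);
}
```

The cast is safe here because the wrapper never writes through the pointer; it merely re-applies the const qualifier that the caller started with.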

The new code can't use array_access_nospec() because it is no longer
accessing the array - merely returning a pointer.  array_index_nospec()
might be an acceptable alternative.
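The lookup helper being discussed could look roughly like the sketch below. The types, array sizes, and the function name are simplified stand-ins, not the real libx86 layout; the point is only the O(1), bounds-checked lookup that returns a pointer (or NULL) into the policy:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Simplified stand-ins for the real libx86 types. */
typedef struct {
    uint32_t a, b, c, d;
} cpuid_leaf_t;

struct cpuid_policy {
    cpuid_leaf_t basic[8];   /* leaves 0x0 .. 0x7 (illustrative bound) */
    cpuid_leaf_t extd[4];    /* leaves 0x80000000 .. 0x80000003 */
};

/*
 * O(1) lookup of a leaf inside the policy: returns a pointer into the
 * structure, or NULL for out-of-range input, so a single speculation
 * barrier at the caller suffices.  A real version would also dispatch
 * on subleaf for the multi-subleaf leaves (0x4, 0x7, 0xd, ...).
 */
static const cpuid_leaf_t *x86_cpuid_get_leaf(const struct cpuid_policy *p,
                                              uint32_t leaf)
{
    if ( leaf < 8 )
        return &p->basic[leaf];
    if ( leaf >= 0x80000000u && leaf - 0x80000000u < 4 )
        return &p->extd[leaf - 0x80000000u];
    return NULL;  /* out of range: caller sees NULL, no out-of-bounds read */
}
```

Out-of-range leaves fall through to NULL rather than indexing past either array, which is exactly the property a unit test would assert.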

>> We'll also want a unit test
>> to go with this new function to check that out-of-range leaves don't
>> result in out-of-bounds reads.
> Sure.
>
> Also, what's your opinion regarding xc_cpu_policy_get_msr: should I
> also split part of guest_rdmsr and place it in libx86 in order to
> fetch the MSRs present in msr_policy?

That's harder to say.  I'd like to avoid the serialise call, but the
current msr_policy structure uses uint32_t for space reasons, so you
can't just create a uint64_t pointer to it.

Perhaps we should bite the bullet and use uint64_t uniformly, so we can
create a lookup_msr_by_index() or equivalent.  The next big block of
MSRs going into the policy are the VT-x ones, and they'll need to be 64
bits wide.
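A hypothetical sketch of what such a lookup_msr_by_index() could look like if the policy stored every value as uint64_t. The structure layout and entry names here are invented for illustration, not the real msr_policy:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Invented layout: each known MSR as an (index, 64-bit value) pair. */
struct msr_entry {
    uint32_t idx;    /* MSR index */
    uint64_t val;    /* uniformly 64 bits, wide enough for e.g. VT-x MSRs */
};

struct msr_policy {
    struct msr_entry entries[2];
};

/* Return a direct pointer into the policy, or NULL if the MSR is absent. */
static const uint64_t *lookup_msr_by_index(const struct msr_policy *p,
                                           uint32_t idx)
{
    for ( size_t i = 0; i < sizeof(p->entries) / sizeof(p->entries[0]); i++ )
        if ( p->entries[i].idx == idx )
            return &p->entries[i].val;
    return NULL;  /* MSR not present in the policy */
}
```

With a uniform uint64_t representation, callers can read the value through the returned pointer directly, avoiding the serialise round-trip entirely.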

~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 04 15:47:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 15:47:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122537.231113 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxGJ-0000bx-8K; Tue, 04 May 2021 15:47:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122537.231113; Tue, 04 May 2021 15:47:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxGJ-0000bp-4N; Tue, 04 May 2021 15:47:03 +0000
Received: by outflank-mailman (input) for mailman id 122537;
 Tue, 04 May 2021 15:47:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldxGH-0000ZL-Rm
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 15:47:01 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 69d36ea5-fa61-4ef7-bed3-36c7949a9bf2;
 Tue, 04 May 2021 15:46:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 69d36ea5-fa61-4ef7-bed3-36c7949a9bf2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620143216;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=34vg3/jZvUrCWPff6Y7VRYmZB2dC0EvCkOqX+rXqM+k=;
  b=GrQ3WjIW3v+OVpbuK73wy+mkWI2SUrw+oXWGZX6wIxFIQkLZBcNb5Nhk
   ITQ4Fx4zaNKWxgNI5pvsPf+1d6IMDWUwF27EBckq0wDbxqAKW0BmlfIPU
   MsWarmV9VW5DPYFVX9ADrpIhiCylLDjGdQSMFO7fIKLuobKmf5WhkBsoD
   s=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 44559840
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="44559840"
Date: Tue, 4 May 2021 16:46:52 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Nick Rosbrook <rosbrookn@gmail.com>
CC: <xen-devel@lists.xenproject.org>, <george.dunlap@citrix.com>, "Nick
 Rosbrook" <rosbrookn@ainfosec.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [RFC v2 0/7] add function support to IDL
Message-ID: <YJFsbHruoGA6aGMY@perard>
References: <cover.1614734296.git.rosbrookn@ainfosec.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <cover.1614734296.git.rosbrookn@ainfosec.com>

On Tue, Mar 02, 2021 at 08:46:12PM -0500, Nick Rosbrook wrote:
> At a Xen Summit design session for the golang bindings (see [1]), we
> agreed that it would be beneficial to expand the libxl IDL with function
> support. In addition to benefiting libxl itself, this would allow other
> language bindings to easily generate function wrappers.
> 
> The first version of this RFC is quite old [1]. I did address comments
> on the original RFC, but also expanded the scope a bit. As a way to
> evaluate function support, I worked on using this addition to the IDL to
> generate device add/remove/destroy functions, and removing the
> corresponding macros in libxl_internal.h. However, I stopped short of
> actually completing a build with this in place, as I thought it made
> sense to get feedback on the idea before working on the next step.

The series looks good to me, besides a few details.

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 04 15:51:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 15:51:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122547.231125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxKq-0001Yg-RX; Tue, 04 May 2021 15:51:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122547.231125; Tue, 04 May 2021 15:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxKq-0001YZ-OA; Tue, 04 May 2021 15:51:44 +0000
Received: by outflank-mailman (input) for mailman id 122547;
 Tue, 04 May 2021 15:51:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldxKp-0001YM-9b
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 15:51:43 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7bde7c29-1ae5-4789-b829-ad5f4d00b060;
 Tue, 04 May 2021 15:51:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bde7c29-1ae5-4789-b829-ad5f4d00b060
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620143502;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=z8IVQVmIERO9GuJm71umaVXvQ0TkO9n2zadsCgYDiXE=;
  b=BCLSMd16gv0le0nTJyNCnYsMCmB/WE5zTRDu3eSHpT3kUw6ZDGNAeFTQ
   9+vaNGZVW5PN8BA6RZza+84Cdms61Bx5sLmzzfFusVQ8DJVesYAGK8n0z
   1Puelli5lO4wnhSX2KVPvAU4gmpuTkx90pMJoJuRVJq4wrluPaMczLjjx
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 43426162
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43426162"
Date: Tue, 4 May 2021 16:51:38 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, "Wei
 Liu" <wl@xen.org>
Subject: Re: [XEN PATCH] xl: constify cmd_table entries
Message-ID: <YJFtijH6TktVYDmp@perard>
References: <20210427161105.91731-1-anthony.perard@citrix.com>
 <5cbe94d4-2d07-b517-af9f-c5f1e47f7588@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <5cbe94d4-2d07-b517-af9f-c5f1e47f7588@xen.org>

On Wed, Apr 28, 2021 at 01:54:39PM +0100, Julien Grall wrote:
> > -int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
> > +const int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
> 
> NIT: This can be replaced with ARRAY_SIZE().

I've thought of using it but the macro isn't available to "xl". But it's
probably a good time to add the macro and start using it.
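For reference, the classic form of the macro in question (the one-liner that the patch would make available to xl, e.g. via xen-tools/libs.h):

```c
#include <assert.h>
#include <stddef.h>

/*
 * Number of elements in a true array.  Only valid on arrays, not on
 * pointers: sizeof a pointer says nothing about how many elements it
 * points at.
 */
#define ARRAY_SIZE(a) (sizeof(a) / sizeof((a)[0]))
```

Replacing the open-coded sizeof(cmd_table)/sizeof(struct cmd_spec) with ARRAY_SIZE(cmd_table) also removes the risk of the element type in the divisor drifting out of sync with the array's actual type.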


> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 04 16:06:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 16:06:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122553.231137 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxYg-00036u-34; Tue, 04 May 2021 16:06:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122553.231137; Tue, 04 May 2021 16:06:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxYg-00036n-09; Tue, 04 May 2021 16:06:02 +0000
Received: by outflank-mailman (input) for mailman id 122553;
 Tue, 04 May 2021 16:06:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldxYe-00036f-NF; Tue, 04 May 2021 16:06:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldxYe-0006Nn-DN; Tue, 04 May 2021 16:06:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ldxYe-0007Rs-5u; Tue, 04 May 2021 16:06:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ldxYe-0000Md-5C; Tue, 04 May 2021 16:06:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=mEbQnF+kd1LLO1PXH5Wdjx0zOW74Xz/UuQ6vcsP9DeM=; b=eWyRrqFoLB8GPj5CXJClug5nrO
	yXfdKS5jvtNkXddZI+7lvvPRA8FHNkRD5SM+cJyNuao2ZQcUGvlCcQJYmG1PssuBe2d+xMVi4623M
	YdtjJUm8vHeeuBn80l/c1HlMCuDB74hnHpSA1tFohcriN8QGG+ywbeS7MFXA2hkKeNZs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161768-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161768: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=ec4b43107f8663b4a3cf6b9605e1d80152a89f49
X-Osstest-Versions-That:
    xen=e927a3b89ae82ac875aafedbefd6b4bc46201b7d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 16:06:00 +0000

flight 161768 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161768/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  ec4b43107f8663b4a3cf6b9605e1d80152a89f49
baseline version:
 xen                  e927a3b89ae82ac875aafedbefd6b4bc46201b7d

Last test of basis   161758  2021-05-04 09:01:35 Z    0 days
Testing same since   161768  2021-05-04 13:01:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e927a3b89a..ec4b43107f  ec4b43107f8663b4a3cf6b9605e1d80152a89f49 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 04 16:14:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 16:14:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122563.231151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxh4-000431-Uo; Tue, 04 May 2021 16:14:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122563.231151; Tue, 04 May 2021 16:14:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxh4-00042u-Rv; Tue, 04 May 2021 16:14:42 +0000
Received: by outflank-mailman (input) for mailman id 122563;
 Tue, 04 May 2021 16:14:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1gXq=J7=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ldxh3-000422-CU
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 16:14:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7ecddd5-e8a9-49bb-a161-748115da0f87;
 Tue, 04 May 2021 16:14:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3416AB1A6;
 Tue,  4 May 2021 16:14:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7ecddd5-e8a9-49bb-a161-748115da0f87
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620144879; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=xW1+wKt2bKdnpd6LFoRvJpieoTt7d2HR0ECbtlV39XE=;
	b=QirFRRnIpP+btT0j59d9nu/4h6xSnC+3HlphhpdtPUKxkHEJtc2F8BY+uApX00g8M7FAX6
	9mvCe4WLSZLDRzD20jkoFQwOUZVd2D2LOLFkxYtvFfCX5mBnL4mJBeOPDeRKtjGtXq73UL
	CcwKM2GHyoGs/fSdFTyiHqSB5ueFLDw=
Subject: Re: [PATCH v4 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: Bertrand Marquis <bertrand.marquis@arm.com>, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210504094606.7125-1-luca.fancellu@arm.com>
 <20210504094606.7125-4-luca.fancellu@arm.com>
 <37e5b461-40fe-ac78-59b9-033ff8cdc6d1@suse.com>
 <1853929B-AC45-42AF-8FE4-7B23C700B2E2@arm.com>
 <e3f816df-a3ee-f880-ad6f-68c9cc2db517@suse.com>
 <5D19A76C-DBD5-463D-975C-65FBDA0297C4@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c910c146-453c-23e5-e2df-0b8790fb3624@suse.com>
Date: Tue, 4 May 2021 18:14:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <5D19A76C-DBD5-463D-975C-65FBDA0297C4@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 15:33, Luca Fancellu wrote:
> 
> 
>> On 4 May 2021, at 14:28, Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 04.05.2021 15:09, Luca Fancellu wrote:
>>>> On 4 May 2021, at 12:48, Jan Beulich <jbeulich@suse.com> wrote:
>>>> On 04.05.2021 11:46, Luca Fancellu wrote:
>>>>> @@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>>>>> * bytes to be copied.
>>>>> */
>>>>>
>>>>> -#define _GNTCOPY_source_gref      (0)
>>>>> -#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>>>>> -#define _GNTCOPY_dest_gref        (1)
>>>>> -#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>>>>> -
>>>>> struct gnttab_copy {
>>>>>    /* IN parameters. */
>>>>>    struct gnttab_copy_ptr {
>>>>> @@ -471,6 +481,12 @@ struct gnttab_copy {
>>>>>    /* OUT parameters. */
>>>>>    int16_t       status;
>>>>> };
>>>>> +
>>>>> +#define _GNTCOPY_source_gref      (0)
>>>>> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>>>>> +#define _GNTCOPY_dest_gref        (1)
>>>>> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>>>>
>>>> Didn't you say you agree with moving this back up some, next to the
>>>> field using these?
>>>
>>> My mistake! I’ll move it in the next patch. Did you spot anything else I might have forgotten from what we agreed?
>>
>> No, thanks. I don't think I have any more comments to make on this
>> series (once this last aspect got addressed, and assuming no new
>> issues get introduced). But to be clear on that side as well - I
>> don't think I'm up to actually ack-ing the patch (let alone the
>> entire series).
> 
> Ok, at least would you mind doing a review-by of the patches we discussed together?

I'm afraid I don't understand: I did look over this one.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 04 16:15:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 16:15:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122564.231164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxhS-000495-82; Tue, 04 May 2021 16:15:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122564.231164; Tue, 04 May 2021 16:15:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxhS-00048x-4j; Tue, 04 May 2021 16:15:06 +0000
Received: by outflank-mailman (input) for mailman id 122564;
 Tue, 04 May 2021 16:15:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HwvY=J7=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ldxhQ-00048f-If
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 16:15:04 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a0874d7-1103-4742-a191-09a704b2fbd8;
 Tue, 04 May 2021 16:15:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a0874d7-1103-4742-a191-09a704b2fbd8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620144903;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=RbE9Angj5xJVvHKryCVkZNxXmr6wlbJe7cuv7VsWjZQ=;
  b=OjLu0BIMP6VKLAJiCv3cg2YkW3Qiw5masuTjw74ymIXZ94WKOVVcxWjO
   9mcNGvSennboACpdddUi8Ar09HPklevJy9GuFjdu5MBBRChDx6itrsBND
   7bUrweZYZR4VWnJsLR6+wX4MtLzLGOgHIs/Xh9HG/9813Lo8d65QKRnM9
   A=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
X-SBRS: 5.1
X-MesageID: 44563269
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="44563269"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Julien Grall
	<jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2] xl: constify cmd_table entries
Date: Tue, 4 May 2021 17:14:36 +0100
Message-ID: <20210504161436.613782-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Also constify cmdtable_len and make use of ARRAY_SIZE, which is
available in "xen-tools/libs.h".

The entries in cmd_table don't need to be modified once xl is running.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---

Notes:
    v2:
    - use ARRAY_SIZE()
    - rework commit message

 tools/xl/xl.c          | 4 ++--
 tools/xl/xl.h          | 6 +++---
 tools/xl/xl_cmdtable.c | 9 +++++----
 3 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/tools/xl/xl.c b/tools/xl/xl.c
index 3a8929580212..4107d10fd469 100644
--- a/tools/xl/xl.c
+++ b/tools/xl/xl.c
@@ -362,7 +362,7 @@ int main(int argc, char **argv)
 {
     int opt = 0;
     char *cmd = 0;
-    struct cmd_spec *cspec;
+    const struct cmd_spec *cspec;
     int ret;
     void *config_data = 0;
     int config_len = 0;
@@ -462,7 +462,7 @@ int child_report(xlchildnum child)
 void help(const char *command)
 {
     int i;
-    struct cmd_spec *cmd;
+    const struct cmd_spec *cmd;
 
     if (!command || !strcmp(command, "help")) {
         printf("Usage xl [-vfNtT] <subcommand> [args]\n\n");
diff --git a/tools/xl/xl.h b/tools/xl/xl.h
index 137a29077c1e..e5a106dfbc82 100644
--- a/tools/xl/xl.h
+++ b/tools/xl/xl.h
@@ -218,10 +218,10 @@ int main_qemu_monitor_command(int argc, char **argv);
 void help(const char *command);
 
 extern const char *common_domname;
-extern struct cmd_spec cmd_table[];
-extern int cmdtable_len;
+extern const struct cmd_spec cmd_table[];
+extern const int cmdtable_len;
 /* Look up a command in the table, allowing unambiguous truncation */
-struct cmd_spec *cmdtable_lookup(const char *s);
+const struct cmd_spec *cmdtable_lookup(const char *s);
 
 extern libxl_ctx *ctx;
 extern xentoollog_logger_stdiostream *logger;
diff --git a/tools/xl/xl_cmdtable.c b/tools/xl/xl_cmdtable.c
index 07f54daabec3..661323d4884e 100644
--- a/tools/xl/xl_cmdtable.c
+++ b/tools/xl/xl_cmdtable.c
@@ -15,10 +15,11 @@
 #include <string.h>
 
 #include <libxl.h>
+#include <xen-tools/libs.h>
 
 #include "xl.h"
 
-struct cmd_spec cmd_table[] = {
+const struct cmd_spec cmd_table[] = {
     { "create",
       &main_create, 1, 1,
       "Create a domain from config file <filename>",
@@ -631,12 +632,12 @@ struct cmd_spec cmd_table[] = {
     },
 };
 
-int cmdtable_len = sizeof(cmd_table)/sizeof(struct cmd_spec);
+const int cmdtable_len = ARRAY_SIZE(cmd_table);
 
 /* Look up a command in the table, allowing unambiguous truncation */
-struct cmd_spec *cmdtable_lookup(const char *s)
+const struct cmd_spec *cmdtable_lookup(const char *s)
 {
-    struct cmd_spec *cmd = NULL;
+    const struct cmd_spec *cmd = NULL;
     size_t len;
     int i, count = 0;
 
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 04 16:20:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 16:20:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122571.231175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxmB-0004n7-Rd; Tue, 04 May 2021 16:19:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122571.231175; Tue, 04 May 2021 16:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxmB-0004n0-Oc; Tue, 04 May 2021 16:19:59 +0000
Received: by outflank-mailman (input) for mailman id 122571;
 Tue, 04 May 2021 16:19:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldxmA-0004mv-Nl
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 16:19:58 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f1b0039-b9c3-453c-8fa5-358e2f2ac074;
 Tue, 04 May 2021 16:19:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f1b0039-b9c3-453c-8fa5-358e2f2ac074
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43429236
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
Subject: Re: [PATCH v3 04/13] libs/guest: allow updating a cpu policy CPUID
 data
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-5-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <d0508c89-1540-d3cc-f756-c24af75306ce@citrix.com>
Date: Tue, 4 May 2021 17:19:46 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210430155211.3709-5-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0100.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:139::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c70fcd2b-bf47-4f85-4683-08d90f1872bf
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5648:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB56484BA2B5A16705CBACA0B6BA5A9@SJ0PR03MB5648.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: c70fcd2b-bf47-4f85-4683-08d90f1872bf
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 16:19:53.1723
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1x6S6x8L/ILIy1Q+rxdQ7QAX+rUmdyIa1RVRjry4Qty2ZDv85FOWpg/nWHRXP9CP4L6S9m88Oebq25mHs3Nud18bijBpyGoSJWrG9NK8h8Y=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5648
X-OriginatorOrg: citrix.com

On 30/04/2021 16:52, Roger Pau Monne wrote:
> Introduce a helper to update the CPUID policy using an array of
> xen_cpuid_leaf_t entries. Note that leaves present in the input
> xen_cpuid_leaf_t array will replace any existing leaves in the policy.
>
> No user of the interface is introduced in this patch.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 04 16:20:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 16:20:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122576.231188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxms-0005bK-Ab; Tue, 04 May 2021 16:20:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122576.231188; Tue, 04 May 2021 16:20:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxms-0005bD-5g; Tue, 04 May 2021 16:20:42 +0000
Received: by outflank-mailman (input) for mailman id 122576;
 Tue, 04 May 2021 16:20:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldxmq-0005b4-Nv
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 16:20:40 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc2d8228-76ae-4fef-b6ec-ff6481e970ea;
 Tue, 04 May 2021 16:20:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc2d8228-76ae-4fef-b6ec-ff6481e970ea
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43154837
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
Subject: Re: [PATCH v3 05/13] libs/guest: allow updating a cpu policy MSR data
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-6-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <361b3f93-09c6-7eee-e231-bda07167f06e@citrix.com>
Date: Tue, 4 May 2021 17:20:20 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210430155211.3709-6-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P123CA0097.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:139::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f432d402-ec42-40f0-90de-08d90f188688
X-MS-TrafficTypeDiagnostic: BYAPR03MB4166:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4166F11663F59141B68B1947BA5A9@BYAPR03MB4166.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: f432d402-ec42-40f0-90de-08d90f188688
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 16:20:26.3242
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Fs23i83ZiyoswGdHhPrplaPtNWUW6m1U5cY08VOiH6IMLeLLvToXKdXlvEYwNlkp0CrxLol3MaQg7Q2CxHfSAslI3ztIcE4vp8o1QD9+6BE=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4166
X-OriginatorOrg: citrix.com

On 30/04/2021 16:52, Roger Pau Monne wrote:
> Introduce a helper to update the MSR policy using an array of
> xen_msr_entry_t entries. Note that MSRs present in the input
> xen_msr_entry_t array will replace any existing entries in the
> policy.
>
> No user of the interface is introduced in this patch.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 04 16:28:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 16:28:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122584.231200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxuI-0005pt-4Y; Tue, 04 May 2021 16:28:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122584.231200; Tue, 04 May 2021 16:28:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldxuI-0005pm-17; Tue, 04 May 2021 16:28:22 +0000
Received: by outflank-mailman (input) for mailman id 122584;
 Tue, 04 May 2021 16:28:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldxuG-0005ph-Js
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 16:28:20 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dda30a9e-cb7e-42ea-90e3-50d9c4f82914;
 Tue, 04 May 2021 16:28:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dda30a9e-cb7e-42ea-90e3-50d9c4f82914
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43430130
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
Subject: Re: [PATCH v3 06/13] libs/guest: introduce helper to check cpu policy
 compatibility
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-7-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <4a476df7-8038-01a8-a957-2de669609cf4@citrix.com>
Date: Tue, 4 May 2021 17:27:51 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210430155211.3709-7-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0315.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a4::15) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b0f75a4f-483c-4952-ff8e-08d90f19940f
X-MS-TrafficTypeDiagnostic: BYAPR03MB4806:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4806C1469F5D92185874DE85BA5A9@BYAPR03MB4806.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1775;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: sF8PjUeTVMx1d0xTcqquxjumksjVCKkikGy5qxg1qcUOVgthhEznXHWyMaXO+9dJrVudx1z7051cG/4CamJCZWuJteYH14vLHqONcK7Oxlqykm7/a4T1BdzAbkuUJPLOqYO+sbmd9PPvF1tDGi0sccWu0TQ70/kWefjT5/cI5S4sgDxOBYfb4zMHyhwXF9cg1DA6hR4sl2T16gWl8wXob5XZNWS8MeiugqMc9lRv0ArX/D0WM9/UbreaTNkJJBZWiuDRNlfbA0jMl3syqzOMkPIaWCaGrdQnunUDKjn4YCUlry1H09ZruPJ6WD7pvJv/Q4/ovfmCPJiyKnkwuhh3FLQKOvxPurhayIeitnlfRsOL1qxTF4ur0wP/q3zfKTY+ZSLjublJXp8Rg4Y3477kQQW5I+7o2OUmuMK8oEXqgsjTcp7thr2NsT7jgzSJE7qS4bw18bgdew2C33CVDTA34KilJ/AHDKOExwYGFpPfDTmf3UEZ14Om3aqHFxGiWZ6dJHtbuMLFVOYbOIIE8eqdbaik6rMVukIMLqT+40W8FQizL4kXBegBcGBD0kqygDi7YZEkfVL9FViT3AE0kykBGtcOlloaaPCVLF+wpkKKDAEOdNcNSQj/JJZyLDyZxGNIp7U5kNKW2ZF5SLg1awviOZVx+sQ5MxFIoPhluDwPSQaOXkNqT5mIaKPKCQerxLIO
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(346002)(396003)(376002)(366004)(38100700002)(6486002)(8676002)(4326008)(31696002)(36756003)(6666004)(53546011)(66946007)(2616005)(956004)(478600001)(2906002)(31686004)(16526019)(4744005)(316002)(16576012)(66476007)(66556008)(26005)(186003)(5660300002)(8936002)(54906003)(86362001)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?VWNYQXZKVlpjaG1ldTRtK3J5b1YxMTNCbEw4amJpRmxaM3pPOGFvUEh4dVA5?=
 =?utf-8?B?dHdrSmovbnpJZUo2RHRXb0Zic0I2eE9hM1Aza0VhdFhaVER3MmpmVW1RYS84?=
 =?utf-8?B?eEIxWUl5cjI5S0NGT3BLa1dWTm9OeFBta0tzTEhmcjNtSzNlOHA5VnJ4RlBR?=
 =?utf-8?B?Tml5UUxzL3d6Z2ZQWkk5ZGF1Z2hNNFBsZ0ZDbW5xYUtUV1dFZWhGOTdYRWF4?=
 =?utf-8?B?Z1F1SlZhUTI0ZWpmejQzQm5ENjRMZWtqTW5Zc0RjN3prNUtMREw4V28xYlpW?=
 =?utf-8?B?R0RpelJrc0JRT3JjQnNhMEFUOE10N0dMc1RET2dtWnZ0K0c5YS9ZSVBCQStn?=
 =?utf-8?B?RS8zUDBNMmlQMXhWOXN5dWduSkZQSWRMRHQxeUg2VmdoUk5IM1p1N1Zoam54?=
 =?utf-8?B?bFZubXVrQzl3dGVhMmRES01rclJLSG1rUnJpZ3hQQUI4alVXNXRYbkxFWGJs?=
 =?utf-8?B?QXdnU2FyeVlSblJ1ZU5qUmtybTN4NU5IbTVuRXNTY3VKNkNOTStYb2Q0U3RH?=
 =?utf-8?B?amJnd0hGWnpteUlvUlZZWlZQT3ZSVGR0bCtKWlV5WXliNkIwSk5sOVdGeEx4?=
 =?utf-8?B?ZmUvdERFTGtzcURQVkJQcUQ3MGJjcG1DUzJvUlYrMThGQmlnOUlONWdUZVlZ?=
 =?utf-8?B?NHQ1Y2gwa0hoTFZWL2VYZzZBRmQxelZyM3BpMnZKaVBNM2NKQWlUWXNCTHZt?=
 =?utf-8?B?VXRjRTNyZW4vdUZSWTdHWWx6QXBoL1IyZGlQa3JxKzVNakN4d0dLZGlCNGxy?=
 =?utf-8?B?UXFMSlpHT1poRktKYzBBNFFsQzZoWWprRFVzaGhJbDlHeGpRQmxBQXVJZkMz?=
 =?utf-8?B?M2c1U2VJbWE3MS9ZYUtBcFp3U0NNVTYvNGlURnV5Ly8waVpqbjJPdVRGR04z?=
 =?utf-8?B?UFJXWDV0dEt2QmZWYkhwWHFkdHR0UW8wZnNkYitkOGpMMzRZZmZTcm9HQk9U?=
 =?utf-8?B?WStzWi9STUdkNzVJTTN0RXI4VTdsQmNMNEFiT3J4TStLS1ZuWmowaERjVkt2?=
 =?utf-8?B?U1drdHdRY3pzQ3paRnhWM0QwZ1Z2eWxabGd6cFdlN1k3WTM2Z2pKTGlUMUEv?=
 =?utf-8?B?aVpORXlWZWJIMURmdFpNY3dpcjBFQzRIVmRRVzlicVBlZnpTbEtvb0R6L01k?=
 =?utf-8?B?VVh1dmZGK0FDL3NwaDVaZlBDemtmOFV0clQ0VnlIcHFQVFUrTkQyY0JVcEZp?=
 =?utf-8?B?TFl2dWhCTDRYeWtlOE1UZ2ZiNzJPRklwdzNtY2hJTjdvdmxQTU9QYjVST0wx?=
 =?utf-8?B?bWo3bVJ3MGhZN2pnbGRYU3M4RTFjQloveDZZcjc2YWorcmVjeTRPVjBPT3RO?=
 =?utf-8?B?elczTTkyQ2I4cE40MFlnTTRRTVhGc3cxYm1iQlBWUDFkeTc1RE1VakRXUjhP?=
 =?utf-8?B?cDVJT0x6SWtoa1gwbTJqYWNlaUFqNVMwLzhGSnhRaCttS2hBS3Y2d3FJV21X?=
 =?utf-8?B?YmxkcWVWcEI5d2M5eWJYdlNSWUw5c3R3Q216cjJUSXBldCt1eXp5YTJWbjZx?=
 =?utf-8?B?WkNIbDd0Q0piOFVnM1dCY3JJcE9QU29EWUdYZFpVQVVDV1B1cnJ0Q2FZcGhi?=
 =?utf-8?B?WGVMUGYwU1ZldkE0K1pVcm41S0IxZXRGcjAyTmNPcGNEYklRb0VUbmFUNTBx?=
 =?utf-8?B?RFJFcFpkdHFnMmtGK3FDaDlOMnk4VEI1b0E4enI4Mnd0b1Nkc3k0SlZ3MWpw?=
 =?utf-8?B?RThEUy9yUHZ6VTQxb2hLMTNWQVZUUW54cU9UZ1B0K0lBMVRaN1dDSktzcUhX?=
 =?utf-8?Q?i+wsX81wilgbWOjLZtiL5tsGN5KfSAj1hsu0EUo?=
X-MS-Exchange-CrossTenant-Network-Message-Id: b0f75a4f-483c-4952-ff8e-08d90f19940f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 16:27:58.5959
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: tbqJAnWezA3Ak0XwdJoLpgJJLg5jK7X9Tkt+P3tv2C7WtYbA6HDOgkw145HFb7sLlSELwWTaqKFATHmXDPs26Q3rPC4kZyfrRj090k7CqMY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4806
X-OriginatorOrg: citrix.com

On 30/04/2021 16:52, Roger Pau Monne wrote:
> Such a helper is just a wrapper around the existing
> x86_cpu_policies_are_compatible function. This requires also building
> policy.c from libx86 in userland.
>
> No user of the interface introduced.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 04 16:50:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 16:50:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122591.231211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyG1-0008M7-0P; Tue, 04 May 2021 16:50:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122591.231211; Tue, 04 May 2021 16:50:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyG0-0008M0-Td; Tue, 04 May 2021 16:50:48 +0000
Received: by outflank-mailman (input) for mailman id 122591;
 Tue, 04 May 2021 16:50:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldyG0-0008Lv-44
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 16:50:48 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b695add4-05a7-47be-a028-d058b0283cc5;
 Tue, 04 May 2021 16:50:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b695add4-05a7-47be-a028-d058b0283cc5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620147046;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=XhwbN2WKRaPogBinXEEbYCDzLOhLHAc6060zm++ufT4=;
  b=TQMXn4V/7NKSjo3vZPnodcwjruNk6ee83J9ko5sZ17WbpK8j4SXRwkb4
   +qCzkMEM3KISCCGWEDv6rOXx2XibTWcrTKByLJu5/35kViHC6CK6znrfO
   NGA8LGD6Q+8ACZvtJJUS/nTDCOCL4jXBphbUt5x+U346yEMeloanel7DI
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: GGDSXdT0gdZ6kpoGsdVwKR+ZShN0z4kckMb30y0rb00P5YRTfhZQYfvJd+I/3zvc7rLs/kYlSL
 qXJ0B4WBmmg7kxGtripyVS5dDx8jFjmWATUoQmhafmDeMK9ZFOzPKLbiI8A7KB7lq3oAfDo9RD
 N1BE/XgNGhv3ulZQ7z/++X9A5u6oJbtPFKX6FskmKuG2jJksrUHWv1dY5b/uAf/P4UVfBuZjsK
 ynUkcmJW5cCm/jRKH8QLbe+2ZBTMaclSCUGDX+C16rfkYzVS4uUUABzwOjOSJa0Rc+Xf1ie20U
 lLU=
X-SBRS: 5.1
X-MesageID: 43040040
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:oDAu46H0UVEzBENPpLqFcpTXdLJzesId70hD6mlYcjYQWtCEls
 yogfQQ3QL1jjFUY307hdWcIsC7L0/03aVepa0cJ62rUgWjgmunK4l+8ZDvqgeOJwTXzcQY76
 tpdsFFZOHYJ158kMr8/U2EA88tqeP3v5yAqMX/6zNWTQ9sY7x99AsRMGemO2B/WQUuP+tBKL
 Oy/cxCzgDQG0g/TsP+PXUdWviGmtujruOdXTcjJzoKrDaDlimp7rmSKWnX4j47XylUybkvtU
 jp+jaJnpmLiP2wxh/C22K71f0/87GNqqohOOW2hscYMTnqgAqzDb4RIIGqhzwpvPqprG8jjd
 ikmWZnA+1I93jTcmupyCGdvzXI7Tc053fujX+ejHfzyPaJIg4SNstbiYpVNibe8kor1esMt5
 5j4mTxjeszMTrw2Azg+t6NbB1xj0yyu3Znq/ILlmdSS5F2Us4skaUvuGduVLsQFiPz744qVM
 N0CtvH2fpQeVSGK1jEo2hG2rWXLzUONybDZnJHlt2e0jBQknw85VAf3tYjknAJ8494Y4VY5t
 7DLr9jmNh1P48rRJM4IN1Ebdq8C2TLTx6JGnmVO07bGKYOPG+IjJLr/rMv5qWPdIYTxJU/3L
 TNOWko9lIaSgbLM4mjzZdL+hfCTCGWRjL20PxT4JB/p/nyX7zuPSqfSE0/kseprvkFa/erHs
 qbCdZzObvOPGHuEYFG00nVQJ9JM0QTV8UTp5I6Vju104b2A7yvktaeXOfYJbLrHzphcHj4GG
 E/UD/6I9gF6FuqVH/+iB3YQGjsZUT74JJ1HMHhjqou4blIErcJnhkeiFy/6M3OCTpItL1zR1
 d6LKmit6W8vACNjBn1xlQsHiAYIlde4b3mXX8PmBQDNFnsd60f//+Ff3pJ4XeBLhhjbs/fHQ
 JFvW5r8aavI5H4/1FkN/uXdkahy1oavjajUooVkKzr37aZRroISrIdHJFXOSqOPRpvggpuoH
 pEc2Y/NzHiPwKrr76kgpwSDPzYbP9mjm6QUJdpgHrCqESRotwuTHMHXzioFdWamxoqWiA8vC
 wBz4YWnKeH3SyyIm8+nfk1PTR3GReqKaMDAwKfaIpOnLf3PAl2UGeRnDSfzwo+Y2zw6iwp9y
 HcBDzRffHAGVxGvH9Elq7s7VNvb22YFngAIUxSoMl4FW7cvGx03vLObq2v03GJYl9Hxu0GKj
 nKbX8TJQxprurHniK9iXKHFX88wI8pMfGYBLM/c6vL0nfoMZaWj8g9bolp1YcgMMqrvv4AUO
 qZdQPQIj/+B/ggxgCZ4nIoIjN9pnVhi/X1wxfohVLIrkIXEL7XOhBrVrsbK9aT4yz8T+2F14
 g8id4up+O/PiHwZ7e9uNHqRi8GLgmWrX+9Tukup5wRp640ubdpF5TQUDfD1hh8rV0DBdaxkF
 lbTLVw4bjHNIMqYtcbfDhB+EE10NuIN0kmv2XNc6MDVEBoi2WeOdyH47DF8+VyRkKAoRb9Il
 mZ/WlW+ezfUy6KyL4dDOYxLA1tGTwBwWUn+PnHcYvaTBiufaVE+lGxN3emar9TSKSfA9wr31
 9HysDNm/XSbjby3QDboCByLa1P+Xu2WM/aOnP4JcdYt9ihfUmWiqSk4MSvnC76RDuyZUMfn5
 BEfyUrH7N+tgU=
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43040040"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iyDygCyxKE/PYFoRE5IgDOmXNJvLwIkvLB0Ey/OcvCh116VmDEJvxAMuy9DiCsni0efHyCrQBYKmmlY2CHs07N+duJ2P2s7VtNY1FwjRsweTpOv/Q7yYCA2xoaSYtolTJO2jAT4DupaBd7Uxr27yBqaFMZRM2UuVP4ic7ObI3CxCsTE4+igXf6IkVIuckp5ld8E0dHaOnx+AzJDan6h0b7BUTsozwx5BtQ1Misdw1R/JzESq6trruQvN+s7IgVNlL4KiGfaMLupgomcbyneXXJlizbK90qC+nw0nvVNOrcUoHYOzq5is5qJaY3lYz0s3GU0wb6D+jygOcO7rMps0pA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ARxJWajNZ7W1OZf1kb8wuakHh0okaJWDH+NHh7IFqi8=;
 b=OkPrilEWpsG3Ime5u9A0Z3x05XUru3G2Kt8WLuIRpzF/0qM+R6tTV6ggYvknwRWXlO7shMvt/r1Qn6CLS7cPI6koufOHARZsoax2BittU83RRvXZgUgO/AkJ7ByVNa0brybEa+/rD7VpDXh9L0PtD+QBqhWq5K7c232vz4MekhDGT6b/axHg0jQ3onTmIwcZYcRXYs8VyYeh/eIrauxgflbAoksgA8VT++L0SXosOuU0cb7V78aF3H4syw2OBax+sQoc7yv35t6Ij1TNJViWrm5aYCskhiHNsCGxs3NrdT493xAWXGvQd9JMVhjd0zAYxuf4Crxskz3pCRUPan/DLg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ARxJWajNZ7W1OZf1kb8wuakHh0okaJWDH+NHh7IFqi8=;
 b=WLZdDJgwr5SzGFNlsUTNm5nQvCN7LpAip3Hl9IRwCIOXWfugnuJm0ec2kpykZfWG1evPidfwTKIPstu7whYz+yoiJmt215HcoPSHSxcZjYmFxOhIMO7Y9F3W2ZPOpPkLyxWW5NXX0XpXh6m3rrj5ev40ObbhXrtVypGBzEil40I=
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-8-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 07/13] libs/guest: obtain a compatible cpu policy from
 two input ones
Message-ID: <b12d35d6-68f8-6284-d423-e99c43ba9e90@citrix.com>
Date: Tue, 4 May 2021 17:50:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210430155211.3709-8-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0156.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:9::24) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e3170b2a-1712-4d1b-088d-08d90f1cc0ef
X-MS-TrafficTypeDiagnostic: BYAPR03MB4168:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4168FCCED1E5E2A33FE26567BA5A9@BYAPR03MB4168.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: ulWt3tFZt8K2PQDtX5yx/wDxhyY2q92t7srqv7x006RuqRdSQYnPXPEAz3+i5BV2gOhdJa3JTEWmQza1N5Z+bWenSpXmH5coRYhQI/01xWUHyVgZfV+IchptMNtoctrPMaWsvxm9sKF9bAfhT8ppnzfXT7n/dcYkbiJ8/u8SjhZlu1Wiwjmtn1phTcDbsDBy2TRt6jV67riJFBSHyJPpupU5CK1d9urM6jScfFBz732GPgUrPCMX2mlJMsMcaBbZIS01lMxQ27nCDXYJk0euufo0dYB7sZoZJyRUotkykE5YUH1JmypSMFGfTDifNguBAaBgyzMUHKjn283ysZ2Xgq8Z0hdYVaJZ31dRsmRYXp3cGvYHEdxYMJmt6XiymVhIhHpivFNyy4iCnm/S0jnFnLBkE5FNai/goUMpiKle4314n3pWkanEClO33WbeWDCM+0luToh6rcgZwsfXILzgOS0/BCXk7kbZ4uoDI3qiWIxVC80njW4eCXcRRutIUXO7pvcVGg4qSl9rTfjdT0wPz86X8Wz07+ahrTQEu7xZtmw5BhDu++iGWjhDF8DVdKAT9FUUUryqBCgu3xM3uJ6i50DiuR67ym3l1xUkDDad3Au4LL5wo3Sd/yi5AeSehWwk/R8u7eeqQ4gqpuVyIUWR7+59CP+11MhIRT5t6aFH/qI+cem3CbZexVzigJ80HZCd
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(136003)(396003)(376002)(39860400002)(346002)(16576012)(31686004)(316002)(54906003)(8676002)(16526019)(36756003)(956004)(6666004)(2906002)(83380400001)(186003)(4326008)(2616005)(6486002)(66946007)(8936002)(478600001)(66476007)(66556008)(86362001)(31696002)(38100700002)(5660300002)(26005)(53546011)(107886003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?WUhKWUEyUjdkZHIySjhPYmpRbWtXaE1NN0thUlQ0M0xqNGRLbWZSQWU2eVlu?=
 =?utf-8?B?SjgxMFdheTlxZEtJN2hPbnpSMjhtcDNJTmt2N2J6UkJpTXFrMm1lSnhmeFMx?=
 =?utf-8?B?UjY3eW02TmxGelZtUml4cmpoaWlaZHRjYzVyWTRzL3ZCa2pjM3R0N29oSFAw?=
 =?utf-8?B?blJRbkttYVNUeGRia1A1b0FsdFBSMDFyeGFjUHBnWUIyejhKbHJFVmd5S00x?=
 =?utf-8?B?SWxzMll2OHA0VDJGRVllOTNVK3d3Z2ZkdGhSMmFRQnY1cTRGWUxwUFVoTGxM?=
 =?utf-8?B?cGUvVlV4ZTBtMXRJMnZ6TUtVQ1YwUmlneGZsS3R5bDdKL2liL0pNYlJzSUFE?=
 =?utf-8?B?bnk4d0xTcmdJM3VxR1kzeFpXM285NlVtcEJxQlhtSG95Z0s0OURRQ1dlbzJM?=
 =?utf-8?B?YTMzTmVsNmpLbS91c09vRVRPYUpUYUVPdVVXNzBCVVkrOEk0SE85bE5CK1g4?=
 =?utf-8?B?Q2hHclpjTTE4Vlc2VG9xb1FsY1NNY2h6WUZqRDRFR1ZtNW1PVVI5OElTd0kx?=
 =?utf-8?B?T2lBNWY2RWZDUkhiQ3ZkZTBCNW80T0tzUGlUS1IvUGpnbnd5bVR6UFJLb1FF?=
 =?utf-8?B?R2pyd1oyd1FKOU1haEhqZUpRSzkzTk1zTytqTnJYd21xYlBlaVVQMEc5UXlD?=
 =?utf-8?B?bVNadFRhbjNrSFI0eGRLcEREMisyNkhNTGVyTlcvYUFLYnMvc3hNVFl1UUdU?=
 =?utf-8?B?MWZRYml2Q3laQ0hIaDVReHdWaXZLc2RTb3pDb3JlU0ZTNTZ2ajlYb1RlT2tW?=
 =?utf-8?B?VXZCNFpCQVJQQkhmOHhwaHhaWW5YYnhHU2tyTTd6WGtGTis1TnJGVG00WTJN?=
 =?utf-8?B?cUpWRmRXeFIxSEdvQUdYK0VBcjQvSk9VQ2ttWEMvWmZsZlhsd1grWStFdk5K?=
 =?utf-8?B?Ump5QzZacGtNd29YNVlmamVOcXFIUDhBVXRsME1GWlk2TjJwVHhTL29xUFFJ?=
 =?utf-8?B?ODRMNTZLaGt1VmFGMFdCT3VtVm55UlhndGhMM1N0L3J1L3RSd1pMb1F4UDkz?=
 =?utf-8?B?NmFUOUJ5NVRqVldSTHpOMW91djZiblNSaGxTeEJxMmM3K25QS2lkeFdUWWha?=
 =?utf-8?B?R3BtMk5MaHdpQU9jRXEyaVpIYnMwTEdpSllrdUgwd1Y2YlBBY3FzRUdCdnNy?=
 =?utf-8?B?TVpRY0Y3VmpGeitUNzIvdmVXZkZ1WjhPaE9iQUkxT1M5ZnRKaFBnNS90RCtr?=
 =?utf-8?B?d3VLd05aaktpS2R4dGNSV2NKa1lCRVhXZXZGZlJuKzRESE40KzJtdFhhRXgv?=
 =?utf-8?B?UTkvd1hUeVNuOGxhZWYxQzV6RGFRT2haSCtHVmJyVEc4UUR2bVFHb0t6aWJz?=
 =?utf-8?B?bEJtWmQ1TDNmcEdOck5GbmJ6d2pHYXIwQlkwaUsrdXczZjFIR1NwUERRS2ha?=
 =?utf-8?B?SWpmRkpwcDEwOWV2Z0xvQlF6dTJIMlM4MVZHZlhqaTk2TlF0WktyOVAxeHIr?=
 =?utf-8?B?cExNNGREQTc0QVVDUTRIbDEyMi9raGp2bS90WFRHeW9kekZ5WU1weTFSQjRZ?=
 =?utf-8?B?UHlTUlpkREFHTFA5K2xhcDlwMTBiREJ0VnQ5Tk9POVhUUWhLQ0ozUit2UjhZ?=
 =?utf-8?B?TmlKLzhiK0xBVCtZcWRPQVNqYmg1S0hya0RLdnlObDBjVVJKNGRXSEh3SkpW?=
 =?utf-8?B?NHhzUjVINjREMWJLT0dqL2tUZTM0ek14Zkc2U1J6S0h0MjFCK1duV2t4MVZx?=
 =?utf-8?B?eUJJWk5FNFpMRlp3ZnlnY3FmWWROTkp1blorSzBKTHMveUlhWWdGeWRscFFo?=
 =?utf-8?Q?3ZzZ4uCCp/2Oz7+b0BlsjvF8GjrtriqjN16KxNl?=
X-MS-Exchange-CrossTenant-Network-Message-Id: e3170b2a-1712-4d1b-088d-08d90f1cc0ef
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 16:50:42.1436
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5K/AN8XxPR5ahvoNsf8TVeR6KlkRzhDYLbs2AAa0xdELuEK769oZyT97rl9pXoXWefH7nlRLiko+PAEJkwIkw36Mty45E3W7NFsk9VC5B0M=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4168
X-OriginatorOrg: citrix.com

On 30/04/2021 16:52, Roger Pau Monne wrote:
> Introduce a helper to obtain a compatible cpu policy based on two
> input cpu policies. Currently this is done by and'ing all CPUID
> feature leaves and MSR entries, except for MSR_ARCH_CAPABILITIES which
> has the RSBA bit or'ed.
>
> The _AC macro is pulled from libxl_internal.h into xen-tools/libs.h
> since it's required in order to use the msr-index.h header.
>
> Note there's no need to place this helper in libx86, since the
> calculation of a compatible policy shouldn't be done from the
> hypervisor.
>
> No callers of the interface introduced.
>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v2:
>  - Add some comments.
>  - Remove stray double semicolon.
>  - AND all 0x7 subleaves (except 0.EAX).
>  - Explicitly handle MSR indexes in a switch statement.
>  - Error out when an unhandled MSR is found.
>  - Add handling of leaf 0x80000021.
>
> Changes since v1:
>  - Only AND the feature parts of cpuid.
>  - Use a binary search to find the matching leaves and msr entries.
>  - Remove default case from MSR level function.
> ---
>  tools/include/xen-tools/libs.h    |   5 ++
>  tools/include/xenctrl.h           |   4 +
>  tools/libs/guest/xg_cpuid_x86.c   | 137 ++++++++++++++++++++++++++++++
>  tools/libs/light/libxl_internal.h |   2 -

This *needs* to be in libx86.  I don't particularly mind if you start
with it behind #ifdef __XEN__ (I'm still sure we'll need it in the
hypervisor), but this, more than just about anything else, needs to be
covered by the unit tests.

Next, you need to follow the same structure as Xen's cpuid.c for
deriving policies.  You can't just loop through the two serialised forms
like this.

To start with, you want to calculate the min of a/b->max_leaf, then loop
over that pulling information sideways from a/b.
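A minimal sketch of that shape, with hypothetical struct and field names (Xen's real cpuid_policy is far larger and needs leaf-specific handling, which is exactly why following cpuid.c's structure matters):

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical, simplified policy layout for illustration only. */
struct leaf { uint32_t a, b, c, d; };
struct policy {
    uint32_t max_leaf;
    struct leaf basic[8];
};

static void policy_intersect(struct policy *out,
                             const struct policy *x,
                             const struct policy *y)
{
    /* Start from the min of the two max leaves ... */
    out->max_leaf = x->max_leaf < y->max_leaf ? x->max_leaf : y->max_leaf;

    /* ... then walk the leaves, pulling data sideways from x and y.
     * A plain AND is only valid for feature bitmaps; leaves carrying
     * values (cache topology etc.) need per-leaf logic. */
    for (uint32_t i = 0; i <= out->max_leaf && i < 8; i++) {
        out->basic[i].a = x->basic[i].a & y->basic[i].a;
        out->basic[i].b = x->basic[i].b & y->basic[i].b;
        out->basic[i].c = x->basic[i].c & y->basic[i].c;
        out->basic[i].d = x->basic[i].d & y->basic[i].d;
    }
}
```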

For MSRs, all but MSR_INTEL_PLATFORM_INFO are CPUID qualified, so need
to look like:

if ( out.cpuid.feat.arch_caps )
    out.msr.arch_caps.raw = ((a.msr.arch_caps.raw ^ INV_MASK) &
                             (b.msr.arch_caps.raw ^ INV_MASK)) ^ INV_MASK;

Where INV_MASK is the mask of arch caps bits which want inverted
polarity.  (Name subject to change - perhaps ARCH_CAPS_POLARITY_MASK?)
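As a self-contained sketch of that XOR trick (using the ARCH_CAPS_POLARITY_MASK name floated above; the choice of RSBA as the only inverted-polarity bit, at bit 2 of the MSR, is an assumption for illustration):

```c
#include <assert.h>
#include <stdint.h>

/* Assumed: RSBA (bit 2 of MSR_ARCH_CAPABILITIES) is the one bit where
 * "set" means degraded, so compatibility wants OR rather than AND. */
#define ARCH_CAPS_POLARITY_MASK (1ull << 2)

static uint64_t arch_caps_compatible(uint64_t a, uint64_t b)
{
    /* Flip the inverted-polarity bits, AND everything uniformly,
     * then flip them back: normal bits get AND, inverted bits get OR. */
    return ((a ^ ARCH_CAPS_POLARITY_MASK) &
            (b ^ ARCH_CAPS_POLARITY_MASK)) ^ ARCH_CAPS_POLARITY_MASK;
}
```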


I'm sure I had some work starting this somewhere.  I'll see if I can
locate it.

~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 04 17:05:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:05:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122600.231224 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyTm-0000za-Ez; Tue, 04 May 2021 17:05:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122600.231224; Tue, 04 May 2021 17:05:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyTm-0000zT-BN; Tue, 04 May 2021 17:05:02 +0000
Received: by outflank-mailman (input) for mailman id 122600;
 Tue, 04 May 2021 17:05:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldyTl-0000zO-6d
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:05:01 +0000
Received: from mail-lf1-x12a.google.com (unknown [2a00:1450:4864:20::12a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2edc572d-e4cb-4bab-bdd7-bd894fc5898d;
 Tue, 04 May 2021 17:05:00 +0000 (UTC)
Received: by mail-lf1-x12a.google.com with SMTP id x2so14332430lff.10
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 10:04:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2edc572d-e4cb-4bab-bdd7-bd894fc5898d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
        bh=0r0tC/MhFmMETe0s08jqzXCoIJHWHVmJwmREtMKC6vc=;
        b=qbfa/S3lo7VCXNpguTZfuIo8ZZjLrZY2EaKDvdNxVLQUdOJr+40bL+waQk5OxpLah8
         ky96tnJRM8dGDZPEQDo34VhwJ6uG5fDECcIlCbaDv9Zgs3IgK6UpN6WviUQkwKfOfw4z
         puY63pU5FqAiWpnQZ73xqGMdVfHrq7vpBl2rW5bw9utYZbvAcBBqUgptlAQGkXAmeH/u
         ACcV/8+sH7gum5+6ZV8WK1IGavnMRnu9Vo9xtFXzl8kns5Yo57+CZIOGg8jGOFJ4zl2X
         bXVq6tvqlH3TWDTvBR37z9Sc35j09NBuIBMO5V053AfIYeVP5tTNYqSuMeBvUDTmeDvR
         6MJQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to;
        bh=0r0tC/MhFmMETe0s08jqzXCoIJHWHVmJwmREtMKC6vc=;
        b=KfivOtEg3APKhd3p/gmJMa25GewPBgDIzcYqtRq7nw87QhDJBi7corHxx1WIaRA1BP
         hQndViu4Tqe+GAxmaRA9DR6X6HFC+lAm6gnJutNuagRTSt+UbAW7IlHpVO3Ds6HwlPA/
         io0wUGyIX3ZapCKIab4qUUNxhYH65QUDKP7quZO+ILQvRnSeXl7u+Yf3Ij6/DljXkAV6
         KscBrcC0aJCj888JjQAOLOVt0r+NW7VUn7erCcW+s7kqD7xf4rklLr84HashIzaTn6//
         ucJhFmmOwwEPPinWgHF3oc8YEuLsm97aWFRD8zJHs5bM16+XlEA22oaDinW4T2/lWvCm
         ZlgA==
X-Gm-Message-State: AOAM532JL5v3PEJt/+tr3tgYHryEsmdj2h2x5GIZNF72srWKCTpMKfZZ
	Q6wb1jdNwZ5BA+FfWb3x9lruiJgGWJiyCh+UeWM=
X-Google-Smtp-Source: ABdhPJyMLek6re+2+pYqsYJXyl/oyFpvtsky7d+Cv692pS7IG9XKh6KrQOlb/FbFk39iZWLalxMLiG3jklyi4z1U7LE=
X-Received: by 2002:a05:6512:3e7:: with SMTP id n7mr17460446lfq.150.1620147898935;
 Tue, 04 May 2021 10:04:58 -0700 (PDT)
MIME-Version: 1.0
References: <20210504124842.220445-1-jandryuk@gmail.com> <20210504124842.220445-5-jandryuk@gmail.com>
 <20210504131328.wtoe4swz7nyzyuts@begin>
In-Reply-To: <20210504131328.wtoe4swz7nyzyuts@begin>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 4 May 2021 13:04:47 -0400
Message-ID: <CAKf6xpsVJQ7LeV63hb8Sm_6gq+xjCwMDOkuMKNsn+-vqHF=9rQ@mail.gmail.com>
Subject: Re: [PATCH 4/9] vtpmmgr: Allow specifying srk_handle for TPM2
To: Samuel Thibault <samuel.thibault@ens-lyon.org>, Jason Andryuk <jandryuk@gmail.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, May 4, 2021 at 9:13 AM Samuel Thibault
<samuel.thibault@ens-lyon.org> wrote:
>
> Jason Andryuk, on Tue 04 May 2021 08:48:37 -0400, wrote:
> > Bypass taking ownership of the TPM2 if an srk_handle is specified.
> >
> > This srk_handle must be usable with Null auth for the time being.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> >  docs/man/xen-vtpmmgr.7.pod |  7 +++++++
> >  stubdom/vtpmmgr/init.c     | 11 ++++++++++-
> >  2 files changed, 17 insertions(+), 1 deletion(-)
> >
> > diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
> > index 875dcce508..3286954568 100644
> > --- a/docs/man/xen-vtpmmgr.7.pod
> > +++ b/docs/man/xen-vtpmmgr.7.pod
> > @@ -92,6 +92,13 @@ Valid arguments:
> >
> >  =over 4
> >
> > +=item srk_handle=<HANDLE>
>
> Is this actually srk_handle= or srk_handle: ?

Whoops.  It's srk_handle: .  I just copied and pasted here.

> The code tests for the latter. The problem also seems to "exist" for
> owner_auth: and srk_auth:, but both = and : actually work because
> strncmp is told not to check for the = or : character.

owner_auth & srk_auth don't check for :, but they also don't skip the :
or = when passing the string on to parse_auth_string.  So they can't
work properly?

srk_handle: does check for that entire string.

> We'd better clean this up to avoid confusions.

Right, so what do we want?  I'm leaning toward standardizing on =,
since the tpm.*= options appear to parse properly.  Given that : doesn't
seem like it could ever have worked, we don't need to attempt to
maintain backwards compatibility.
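For illustration, a parser along the "=" lines being proposed might look like the following (parse_srk_handle is a hypothetical helper name; vtpmmgr's real code hands the value off differently). The point is matching the full option name including the separator, then skipping past it before converting:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical sketch: match "srk_handle=" including the separator,
 * then convert only the value part as hex.  Returns 0 when the
 * argument is not an srk_handle option. */
static unsigned long parse_srk_handle(const char *arg)
{
    static const char prefix[] = "srk_handle=";

    if (strncmp(arg, prefix, sizeof(prefix) - 1) != 0)
        return 0;
    return strtoul(arg + sizeof(prefix) - 1, NULL, 16);
}
```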

Thanks for the review.

-Jason


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:05:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:05:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122603.231235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyUU-000158-Np; Tue, 04 May 2021 17:05:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122603.231235; Tue, 04 May 2021 17:05:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyUU-000151-Kx; Tue, 04 May 2021 17:05:46 +0000
Received: by outflank-mailman (input) for mailman id 122603;
 Tue, 04 May 2021 17:05:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldyUT-00014v-Tj
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:05:45 +0000
Received: from mail-lf1-x12b.google.com (unknown [2a00:1450:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4181a9dc-ea2e-47aa-aad3-fd080edb2212;
 Tue, 04 May 2021 17:05:45 +0000 (UTC)
Received: by mail-lf1-x12b.google.com with SMTP id n138so14391297lfa.3
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 10:05:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4181a9dc-ea2e-47aa-aad3-fd080edb2212
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
        bh=tcon4j7AC3gVKmFEWSyXjWC+cBOcg4Zdc/1k3QPFtDY=;
        b=YDRNSyYpZ7k/nWhfWNE01Tnc+lZIzIxcIIVPlH1+EMLvHl82DYJogoN0fLHf8zjWad
         1+lQVTtsVDJh6Lq9UjW5vvwsmIotkQ4MKuVKB1HjECaRUp0kLqWj3M21NXDIKML7MCf2
         4Ja13Nkj0if+jSxy9Ag/AxmNUguKH7Pd/ndgOyzSwediIY7Ti9qvd5Z+Pnym3IpI8vSn
         dE7u+JuoIV0xpESfXIIBSQKzImSAwRVh57CsrNnUV+K37EBv9QnJ3YgldlyZU1E/dyxV
         vUiamwxHMKByU1/oVu85i/CUlB9EGaDQswai/0PWbp6wxfzUFoB2SYW2iAxcz5LVuGWB
         YOPQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to;
        bh=tcon4j7AC3gVKmFEWSyXjWC+cBOcg4Zdc/1k3QPFtDY=;
        b=tMeicLSzoeHznEpkTm1u6j8KM5oEv4i1cOxB1kHbQy5AxNuOmykuOaSscbTPpjg/2s
         52xHh7IZQHWqf2GqwGLp/juX/RUfEplUQgqt4Dc36FV/YsdCuzKXyYiWW3j63FUe+Nlj
         lyOTYwDeeNbjCL9Cw/BiYu6lNmDCHCWj6WN7AHH/VOjIs4Ml2D8UIRH6VJSbZ4q3pZLK
         0BrFA/kdstam/afvgQD1VahS+mzIcM7l5K2awQDCv+/CSNcNEf6hCPfB1oktmq5oxm0f
         +grdvtKB22Dgq9i1rSQ8DH1mZAR5RbNcMuydwpdHjstOw0fY/DTC/fdUTp/0/Mc8ECUd
         aSwg==
X-Gm-Message-State: AOAM5300dtkZTMM2dAS/9H3Y1VqoIVqbbNlDvI3ZcAmq771z3hRTvpkO
	yOCEtUX/UTtkeGut0Zy57hGOKqwQIr9j7vUnEkTsqs9I
X-Google-Smtp-Source: ABdhPJykU0l+G7JZnihj+z4nWpGgKUrkaB9GXMva3mQFt0zvMYOrYdbKJH3r/92STDkpZjc7agcXHaiOyY+Ni6ZDdc0=
X-Received: by 2002:ac2:5e36:: with SMTP id o22mr8124029lfg.529.1620147944309;
 Tue, 04 May 2021 10:05:44 -0700 (PDT)
MIME-Version: 1.0
References: <20210504124842.220445-1-jandryuk@gmail.com> <20210504124842.220445-8-jandryuk@gmail.com>
 <20210504131626.h2ylaamk35evw6yg@begin>
In-Reply-To: <20210504131626.h2ylaamk35evw6yg@begin>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 4 May 2021 13:05:33 -0400
Message-ID: <CAKf6xptnmC9FQzZ_2Z1uBaFYS1s0F7VAVu0cm=a_5YKyPSEdxA@mail.gmail.com>
Subject: Re: [PATCH 7/9] vtpmmgr: Flush all transient keys
To: Samuel Thibault <samuel.thibault@ens-lyon.org>, Jason Andryuk <jandryuk@gmail.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, 
	Quan Xu <quan.xu0@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, May 4, 2021 at 9:16 AM Samuel Thibault
<samuel.thibault@ens-lyon.org> wrote:
>
> Jason Andryuk, on Tue, 04 May 2021 08:48:40 -0400, wrote:
> > We're only flushing 2 transients, but there are 3 handles.  Use <= to also
> > flush the third handle.
> >
> > The number of transient handles/keys is hardware dependent, so this
> > should query for the limit.  And assignment of handles is assumed to be
> > sequential from the minimum.  That may not be guaranteed, but seems okay
> > with my tpm2.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
>
> Maybe explicit in the log that TRANSIENT_LAST is actually inclusive?

In the commit message?  Sounds good to me.

> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

Thanks,
Jason
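As an aside, the inclusive-bound fix discussed above can be sketched as below. The handle constants and the flush call are hypothetical stand-ins for the vtpmmgr definitions (the real number of transient handles is hardware dependent, and TRANSIENT_LAST is inclusive, as noted):

```c
#include <assert.h>

/* Hypothetical stand-ins for the vtpmmgr transient-handle range. */
#define TPM2_TRANSIENT_FIRST 0x80000000u
#define TPM2_TRANSIENT_LAST  0x80000002u   /* inclusive upper bound */

/* Count how many handles the loop visits; with '<' the last handle
 * (TPM2_TRANSIENT_LAST itself) is never flushed. */
static unsigned int count_flushed(int inclusive)
{
    unsigned int n = 0;

    for (unsigned int h = TPM2_TRANSIENT_FIRST;
         inclusive ? h <= TPM2_TRANSIENT_LAST : h < TPM2_TRANSIENT_LAST;
         h++)
        n++; /* the real loop would call TPM2_FlushContext(h) here */

    return n;
}
```

With the exclusive `<` only two of the three handles are visited, which is exactly the off-by-one the patch fixes.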


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:07:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:07:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122606.231248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyW5-0001D2-3X; Tue, 04 May 2021 17:07:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122606.231248; Tue, 04 May 2021 17:07:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyW5-0001Cv-0G; Tue, 04 May 2021 17:07:25 +0000
Received: by outflank-mailman (input) for mailman id 122606;
 Tue, 04 May 2021 17:07:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1ldyW3-0001Cp-Lm
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:07:23 +0000
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf359cd7-3e35-47c1-81fb-22870f7a37d9;
 Tue, 04 May 2021 17:07:22 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 784BD1E8;
 Tue,  4 May 2021 19:07:21 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id v6cCmkSJ2K4K; Tue,  4 May 2021 19:07:20 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id BDC24AF;
 Tue,  4 May 2021 19:07:20 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1ldyVz-00GOoe-Dw; Tue, 04 May 2021 19:07:19 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf359cd7-3e35-47c1-81fb-22870f7a37d9
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 19:07:19 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 4/9] vtpmmgr: Allow specifying srk_handle for TPM2
Message-ID: <20210504170719.mnu3e3av7klsvyuq@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-5-jandryuk@gmail.com>
 <20210504131328.wtoe4swz7nyzyuts@begin>
 <CAKf6xpsVJQ7LeV63hb8Sm_6gq+xjCwMDOkuMKNsn+-vqHF=9rQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKf6xpsVJQ7LeV63hb8Sm_6gq+xjCwMDOkuMKNsn+-vqHF=9rQ@mail.gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Tue, 04 May 2021 13:04:47 -0400, wrote:
> owner_auth & srk_auth don't check :, but then they don't skip : or =
> when passing the string to parse_auth_string.  So they can't work
> properly?

They happen to "work" just because there is no other parameter prefixed
the same.

> > We'd better clean this up to avoid confusion.
> 
> Right, so what do we want?  I'm leaning toward standardizing on =
> since the tpm.*= options look to parse properly.

I'd say so too. Also because that's what is apparently documented.

Samuel
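As an aside, the '='-delimited matching being standardized on here can be sketched as below; `match_opt` is a hypothetical helper for illustration, not the actual vtpmmgr parser:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Hypothetical helper: accept only "name=value", and return a pointer
 * past the '=' so the value parser (e.g. parse_auth_string) never sees
 * the separator itself. */
static const char *match_opt(const char *arg, const char *name)
{
    size_t len = strlen(name);

    if (strncmp(arg, name, len) == 0 && arg[len] == '=')
        return arg + len + 1;
    return NULL;
}
```

Requiring the separator (and skipping it) avoids the situation described above, where an option only "works" because no other parameter happens to share its prefix.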


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:07:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:07:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122607.231259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyWJ-0001Gj-BB; Tue, 04 May 2021 17:07:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122607.231259; Tue, 04 May 2021 17:07:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyWJ-0001Gc-8D; Tue, 04 May 2021 17:07:39 +0000
Received: by outflank-mailman (input) for mailman id 122607;
 Tue, 04 May 2021 17:07:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1ldyWI-0001GQ-CR
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:07:38 +0000
Received: from hera.aquilenet.fr (unknown [2a0c:e300::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc6671eb-9236-4983-b6bb-1b732a0c2503;
 Tue, 04 May 2021 17:07:37 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id 56B8B365;
 Tue,  4 May 2021 19:07:36 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id bkCf6oQ-HwxQ; Tue,  4 May 2021 19:07:35 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 67417362;
 Tue,  4 May 2021 19:07:35 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1ldyWE-00GOos-JT; Tue, 04 May 2021 19:07:34 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc6671eb-9236-4983-b6bb-1b732a0c2503
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 19:07:34 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 7/9] vtpmmgr: Flush all transient keys
Message-ID: <20210504170734.nc4hfmo5olhlikes@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-8-jandryuk@gmail.com>
 <20210504131626.h2ylaamk35evw6yg@begin>
 <CAKf6xptnmC9FQzZ_2Z1uBaFYS1s0F7VAVu0cm=a_5YKyPSEdxA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKf6xptnmC9FQzZ_2Z1uBaFYS1s0F7VAVu0cm=a_5YKyPSEdxA@mail.gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Tue, 04 May 2021 13:05:33 -0400, wrote:
> On Tue, May 4, 2021 at 9:16 AM Samuel Thibault
> <samuel.thibault@ens-lyon.org> wrote:
> >
> > Jason Andryuk, on Tue, 04 May 2021 08:48:40 -0400, wrote:
> > > We're only flushing 2 transients, but there are 3 handles.  Use <= to also
> > > flush the third handle.
> > >
> > > The number of transient handles/keys is hardware dependent, so this
> > > should query for the limit.  And assignment of handles is assumed to be
> > > sequential from the minimum.  That may not be guaranteed, but seems okay
> > > with my tpm2.
> > >
> > > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> >
> > Maybe explicit in the log that TRANSIENT_LAST is actually inclusive?
> 
> In the commit message?  Sounds good to me.

Yes, please.

Samuel


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:11:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122616.231271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyaD-0002Bx-T2; Tue, 04 May 2021 17:11:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122616.231271; Tue, 04 May 2021 17:11:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyaD-0002Bq-Q8; Tue, 04 May 2021 17:11:41 +0000
Received: by outflank-mailman (input) for mailman id 122616;
 Tue, 04 May 2021 17:11:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldyaC-0002Bl-Ns
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:11:40 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fcef880-6c1b-475f-9abe-a2e657fb6501;
 Tue, 04 May 2021 17:11:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fcef880-6c1b-475f-9abe-a2e657fb6501
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Jan Beulich
	<jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	<xen-devel@lists.xenproject.org>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-4-roger.pau@citrix.com>
 <273ba6f9-dee9-00db-407b-10325d21afae@suse.com>
 <YJEoS6P1S6NbySFd@Air-de-Roger>
 <54c48a0f-075f-c379-eeb4-60b4439d8907@suse.com>
 <YJE20/M+OCER2vPn@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v3 03/13] libs/guest: allow fetching a specific MSR entry
 from a cpu policy
Message-ID: <66d6596b-5d90-7bf8-a383-ce2b6b1fe03f@citrix.com>
Date: Tue, 4 May 2021 18:11:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <YJE20/M+OCER2vPn@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 04/05/2021 12:58, Roger Pau Monné wrote:
> On Tue, May 04, 2021 at 01:40:11PM +0200, Jan Beulich wrote:
>> On 04.05.2021 12:56, Roger Pau Monné wrote:
>>> On Mon, May 03, 2021 at 12:41:29PM +0200, Jan Beulich wrote:
>>>> On 30.04.2021 17:52, Roger Pau Monne wrote:
>>>>> --- a/tools/libs/guest/xg_cpuid_x86.c
>>>>> +++ b/tools/libs/guest/xg_cpuid_x86.c
>>>>> @@ -850,3 +850,45 @@ int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
>>>>>      *out = *tmp;
>>>>>      return 0;
>>>>>  }
>>>>> +
>>>>> +static int compare_entries(const void *l, const void *r)
>>>>> +{
>>>>> +    const xen_msr_entry_t *lhs = l;
>>>>> +    const xen_msr_entry_t *rhs = r;
>>>>> +
>>>>> +    if ( lhs->idx == rhs->idx )
>>>>> +        return 0;
>>>>> +    return lhs->idx < rhs->idx ? -1 : 1;
>>>>> +}
>>>>> +
>>>>> +static xen_msr_entry_t *find_entry(xen_msr_entry_t *entries,
>>>>> +                                   unsigned int nr_entries, unsigned int index)
>>>>> +{
>>>>> +    const xen_msr_entry_t key = { index };
>>>>> +
>>>>> +    return bsearch(&key, entries, nr_entries, sizeof(*entries), compare_entries);
>>>>> +}
>>>> Isn't "entries" / "entry" a little too generic a name here, considering
>>>> the CPUID equivalents use "leaves" / "leaf"? (Noticed really while looking
>>>> at patch 7.)
>>> Would you be fine with naming the function find_msr and leaving the
>>> rest of the parameters names as-is?
>> Yes. But recall I'm not the maintainer of this code anyway.

This file in particular has been entirely within the x86 remit for
multiple years now, as have the other cpuid bits in misc/ and libxl.

> You cared to provide feedback, and I'm happy to make the change.

find_msr() would be a better name.  As for entries and nr_entries,
suggestions welcome.  I couldn't think of anything better for the low
level helpers.

~Andrew
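Taken out of context, the helpers under discussion look like the sketch below, with the find_msr name from this thread applied; xen_msr_entry_t is reduced here to the fields the comparison needs (the real layout lives in the Xen public headers), so this is a self-contained illustration, not the committed code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Reduced stand-in for xen_msr_entry_t. */
typedef struct {
    unsigned int idx;
    unsigned long long val;
} xen_msr_entry_t;

/* Three-way comparison on the MSR index, as in the quoted patch. */
static int compare_entries(const void *l, const void *r)
{
    const xen_msr_entry_t *lhs = l;
    const xen_msr_entry_t *rhs = r;

    if (lhs->idx == rhs->idx)
        return 0;
    return lhs->idx < rhs->idx ? -1 : 1;
}

/* bsearch() requires the entries array to be sorted by idx;
 * returns NULL when the index is not present. */
static xen_msr_entry_t *find_msr(xen_msr_entry_t *entries,
                                 unsigned int nr_entries,
                                 unsigned int index)
{
    const xen_msr_entry_t key = { index };

    return bsearch(&key, entries, nr_entries, sizeof(*entries),
                   compare_entries);
}
```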



From xen-devel-bounces@lists.xenproject.org Tue May 04 17:24:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:24:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122625.231284 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldymL-0003Bz-4c; Tue, 04 May 2021 17:24:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122625.231284; Tue, 04 May 2021 17:24:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldymL-0003Bs-1X; Tue, 04 May 2021 17:24:13 +0000
Received: by outflank-mailman (input) for mailman id 122625;
 Tue, 04 May 2021 17:24:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldymI-0003Bm-Q2
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:24:10 +0000
Received: from mail-lj1-x231.google.com (unknown [2a00:1450:4864:20::231])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c232f10-9ce1-438e-b501-a314e73fb1fc;
 Tue, 04 May 2021 17:24:09 +0000 (UTC)
Received: by mail-lj1-x231.google.com with SMTP id w4so3462748ljw.9
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 10:24:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c232f10-9ce1-438e-b501-a314e73fb1fc
X-Received: by 2002:a05:651c:14c:: with SMTP id c12mr18470942ljd.437.1620149048503;
 Tue, 04 May 2021 10:24:08 -0700 (PDT)
MIME-Version: 1.0
References: <20210504124842.220445-1-jandryuk@gmail.com> <20210504124842.220445-10-jandryuk@gmail.com>
 <20210504133332.pt56xjrxvbnz2htd@begin>
In-Reply-To: <20210504133332.pt56xjrxvbnz2htd@begin>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 4 May 2021 13:23:57 -0400
Message-ID: <CAKf6xpuS_FoWpkBxMEudJsOwfKG96f8_Vd8p6tcU5C1f01PT6Q@mail.gmail.com>
Subject: Re: [PATCH 9/9] vtpmmgr: Support GetRandom passthrough on TPM 2.0
To: Samuel Thibault <samuel.thibault@ens-lyon.org>, Jason Andryuk <jandryuk@gmail.com>, 
	xen-devel <xen-devel@lists.xenproject.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, 
	Quan Xu <quan.xu0@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, May 4, 2021 at 9:33 AM Samuel Thibault
<samuel.thibault@ens-lyon.org> wrote:
>
> Jason Andryuk, on Tue, 04 May 2021 08:48:42 -0400, wrote:
> > GetRandom passthrough currently fails when using vtpmmgr with a hardware
> > TPM 2.0.
> > vtpmmgr (8): INFO[VTPM]: Passthrough: TPM_GetRandom
> > vtpm (12): vtpm_cmd.c:120: Error: TPM_GetRandom() failed with error code (30)
> >
> > When running on TPM 2.0 hardware, vtpmmgr needs to convert the TPM 1.2
> > TPM_ORD_GetRandom into a TPM2 TPM_CC_GetRandom command.  Besides the
> > differing ordinal, the TPM 1.2 uses 32bit sizes for the request and
> > response (vs. 16bit for TPM2).
> >
> > Place the random output directly into the tpmcmd->resp and build the
> > packet around it.  This avoids bouncing through an extra buffer, but the
> > header has to be written after grabbing the random bytes so we have the
> > number of bytes to include in the size.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> >  stubdom/vtpmmgr/marshal.h          | 10 +++++++
> >  stubdom/vtpmmgr/vtpm_cmd_handler.c | 48 ++++++++++++++++++++++++++++++
> >  2 files changed, 58 insertions(+)
> >
> > diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
> > index dce19c6439..20da22af09 100644
> > --- a/stubdom/vtpmmgr/marshal.h
> > +++ b/stubdom/vtpmmgr/marshal.h
> > @@ -890,6 +890,15 @@ inline int sizeof_TPM_AUTH_SESSION(const TPM_AUTH_SESSION* auth) {
> >       return rv;
> >  }
> >
> > +static
> > +inline int sizeof_TPM_RQU_HEADER(BYTE* ptr) {
> > +     int rv = 0;
> > +     rv += sizeof_UINT16(ptr);
> > +     rv += sizeof_UINT32(ptr);
> > +     rv += sizeof_UINT32(ptr);
> > +     return rv;
> > +}
> > +
> >  static
> >  inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
> >               TPM_TAG tag,
> > @@ -923,5 +932,6 @@ inline int unpack3_TPM_RQU_HEADER(BYTE* ptr, UINT32* pos, UINT32 max,
> >  #define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
> >  #define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
> >  #define unpack3_TPM_RSP_HEADER(p, l, m, t, s, r) unpack3_TPM_RQU_HEADER(p, l, m, t, s, r)
> > +#define sizeof_TPM_RSP_HEADER(p) sizeof_TPM_RQU_HEADER(p)
> >
> >  #endif
> > diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> > index 2ac14fae77..7ca1d9df94 100644
> > --- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
> > +++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> > @@ -47,6 +47,7 @@
> >  #include "vtpm_disk.h"
> >  #include "vtpmmgr.h"
> >  #include "tpm.h"
> > +#include "tpm2.h"
> >  #include "tpmrsa.h"
> >  #include "tcg.h"
> >  #include "mgmt_authority.h"
> > @@ -772,6 +773,52 @@ static int vtpmmgr_permcheck(struct tpm_opaque *opq)
> >       return 1;
> >  }
> >
> > +TPM_RESULT vtpmmgr_handle_getrandom(struct tpm_opaque *opaque,
> > +                                 tpmcmd_t* tpmcmd)
> > +{
> > +     TPM_RESULT status = TPM_SUCCESS;
> > +     TPM_TAG tag;
> > +     UINT32 size;
> > +     UINT32 rand_offset;
> > +     UINT32 rand_size;
> > +     TPM_COMMAND_CODE ord;
> > +     BYTE *p;
> > +
> > +     p = unpack_TPM_RQU_HEADER(tpmcmd->req, &tag, &size, &ord);
> > +
> > +     if (!hw_is_tpm2()) {
> > +             size = TCPA_MAX_BUFFER_LENGTH;
> > +             TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len,
> > +                                           tpmcmd->resp, &size));
> > +             tpmcmd->resp_len = size;
> > +
> > +             return TPM_SUCCESS;
> > +     }
>
>
> We need to check for the size of the request before unpacking (which
> doesn't check for it), don't we?

Yes, good catch.  vtpmmgr_handle_cmd doesn't check either.

> > +     /* TPM_GetRandom req: <header><uint32 num bytes> */
> > +     unpack_UINT32(p, &rand_size);
> > +
> > +     /* Call TPM2_GetRandom but return a TPM_GetRandom response. */
> > +     /* TPM_GetRandom resp: <header><uint32 num bytes><num random bytes> */
> > +        rand_offset = sizeof_TPM_RSP_HEADER(tpmcmd->resp) +
> > +                   sizeof_UINT32(tpmcmd->resp);
>
> There is a spurious indentation here, at first sight I even thought it
> was part of the comment.

Sorry about that - it was 8 spaces instead of a tab.

> We also need to check that rand_size is not too large?
> - that the returned data won't overflow tpmcmd->resp + rand_offset
> - that it fits in a UINT16

Yes, will do.

> Also, TPM2_GetRandom casts bytesRequested into UINT16*, which is bogus; it
> should use a local UINT16 variable and assign *bytesRequested.

Good catch.  I'll do that in a new patch.

> > +     TPMTRYRETURN(TPM2_GetRandom(&rand_size, tpmcmd->resp + rand_offset));
> > +
> > +     p = pack_TPM_RSP_HEADER(tpmcmd->resp, TPM_TAG_RSP_COMMAND,
> > +                             rand_offset + rand_size, status);
> > +     p = pack_UINT32(p, rand_size);
> > +     tpmcmd->resp_len = rand_offset + rand_size;
> > +
> > +     return status;
> > +
> > +abort_egress:
> > +     tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
> > +     pack_TPM_RSP_HEADER(tpmcmd->resp, tag + 3, tpmcmd->resp_len, status);
> > +
> > +     return status;
> > +}
> > +
> >  TPM_RESULT vtpmmgr_handle_cmd(
> >               struct tpm_opaque *opaque,
> >               tpmcmd_t* tpmcmd)
> > @@ -842,6 +889,7 @@ TPM_RESULT vtpmmgr_handle_cmd(
> >               switch(ord) {
> >               case TPM_ORD_GetRandom:
> >                       vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
> > +                     return vtpmmgr_handle_getrandom(opaque, tpmcmd);
> >                       break;
>
> Drop the break, then. I would say also move (or drop) the log, like the
> other cases.

Will drop the break.  I would just leave the log since it matches the
other cases in this case statement.  But I can remove it if you want.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:26:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:26:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122628.231296 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyoj-0003M7-Jp; Tue, 04 May 2021 17:26:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122628.231296; Tue, 04 May 2021 17:26:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyoj-0003M0-F9; Tue, 04 May 2021 17:26:41 +0000
Received: by outflank-mailman (input) for mailman id 122628;
 Tue, 04 May 2021 17:26:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E2aX=J7=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1ldyoj-0003Lv-13
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:26:41 +0000
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ce38a9c-592a-4b90-9b97-b800ec854132;
 Tue, 04 May 2021 17:26:40 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id o27so9339825qkj.9
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 10:26:40 -0700 (PDT)
Received: from six (c-73-89-138-5.hsd1.vt.comcast.net. [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id g25sm935209qtu.93.2021.05.04.10.26.38
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Tue, 04 May 2021 10:26:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ce38a9c-592a-4b90-9b97-b800ec854132
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to:user-agent;
        bh=Q0A+THcWBgMm6tqQiWxwcEZdK2rcBHSm8cjiNNwjTMs=;
        b=FWHc9H2k/L66PF9EKRO6TYr6gPDB12UyoSnWh65ewlR7QF/CuU2HeAh4+oJMDIDfs8
         BK8rymhueB2j0BHJG5Qi/W0opvdXqgUQrl8dKtuFZEARHm4FoUKedRDe3C+qbd06iNpt
         IQNJIJPh1JlSl/DVx8FQD5uAAVkHOSUrnBcYqXtsWx+OH4RTez5lfyd+Ms3WxeXVXGgy
         07iRmAdZG6mJ+bTNvJ0kcPlM1rkRHU3UqJ6LCc0x7YopNkZplXMYsz11Oyif5i+su5Oh
         0BPDtKjHq0e2x1bIAw+8RbnDyfrGjVjjsG+PQdG3UVZlaVi1iIzyA+IAy2a/8pl/KiBz
         IvdQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=Q0A+THcWBgMm6tqQiWxwcEZdK2rcBHSm8cjiNNwjTMs=;
        b=Xj245EylRPbjQutanaaMmas9u+bTbM3VAiiaDClNGqj1t4BEyvN6iunsiA7VPTF1Gb
         xSrtMgazPCkGubcz6Lq0jB3i1Sd4OPaapJZ/OQBVouxHUiVnmYTVmkr7zZlCc4E9vpi1
         rx873dgyNZc+h3O6K/bm/MuFrLgTc0ZpbSoXqVUyMoYjZ3PLrayevXC3NioebWbbEJ/0
         1V9P/jrJT3ykQkZLbA2y4C2e0KpXkS5xDPgx3LFUmyJVV7zduL0cTF/ifKOu9Xu1vMbU
         sn3lAjF5LFjTren1FADdi3u4f24dSc0c/7spDfEZiN3NYNkaPm87XVKWx7jFJdh+HpZk
         MmAw==
X-Gm-Message-State: AOAM533CuSPWKjbL595vsRCoI6AXDjauPlIaIuOlaUENIaGtC3BF2yTI
	Gg5n16gPar7Y4uxlY48qRsY=
X-Google-Smtp-Source: ABdhPJxW4R0R8Pp38jsXDjxIPkp5ZAamG2jqOVe6u+ZY5sjFiJLffceFBXByVWFSjbYsyU80KWCW2g==
X-Received: by 2002:a05:620a:138f:: with SMTP id k15mr13272719qki.471.1620149200029;
        Tue, 04 May 2021 10:26:40 -0700 (PDT)
Date: Tue, 4 May 2021 13:26:37 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, george.dunlap@citrix.com,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RFC v2 5/7] libxl: add device function definitions to
 libxl_types.idl
Message-ID: <20210504172637.GA7941@six>
References: <cover.1614734296.git.rosbrookn@ainfosec.com>
 <2cd96b7e884c6f0c2667ef7499ff7179b99ea635.1614734296.git.rosbrookn@ainfosec.com>
 <YJFrn7+4AQt7K2Fa@perard>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YJFrn7+4AQt7K2Fa@perard>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Tue, May 04, 2021 at 04:43:27PM +0100, Anthony PERARD wrote:
> On Tue, Mar 02, 2021 at 08:46:17PM -0500, Nick Rosbrook wrote:
> > diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
> > index 5b85a7419f..550af7a1c7 100644
> > --- a/tools/libs/light/libxl_types.idl
> > +++ b/tools/libs/light/libxl_types.idl
> > @@ -666,6 +668,24 @@ libxl_device_vfb = Struct("device_vfb", [
> >      ("keymap",        string),
> >      ])
> >  
> > +libxl_device_vfb_add = DeviceAddFunction("device_vfb_add",
> > +    device_param=("vfb", libxl_device_vfb),
> > +    extra_params=[("ao_how", libxl_asyncop_how)],
> > +    return_type=libxl_error
> > +)
> > +
> > +libxl_device_vfb_remove = DeviceRemoveFunction("device_vfb_remove",
> > +    device_param=("vfb", libxl_device_vfb),
> > +    extra_params=[("ao_how", libxl_asyncop_how)],
> > +    return_type=libxl_error
> > +)
> > +
> > +libxl_device_vfb_destroy = DeviceDestroyFunction("device_vfb_destroy",
> > +    device_param=("vfb", libxl_device_vfb),
> > +    extra_params=[("ao_how", libxl_asyncop_how)],
> > +    return_type=libxl_error
> > +)
> > +
> >  libxl_device_vkb = Struct("device_vkb", [
> >      ("backend_domid", libxl_domid),
> >      ("backend_domname", string),
> 
> In a future version of the series that is deemed ready, I think it would be
> useful to have this change in libxl_types.idl and the change that removes
> the macro call from the C file in the same patch. That would make it
> possible to review discrepancies.
> 
> The change in the idl for vfb is different from the change in the C
> file:
> 
> > --- a/tools/libs/light/libxl_console.c
> > +++ b/tools/libs/light/libxl_console.c
> > @@ -723,8 +723,6 @@ static LIBXL_DEFINE_UPDATE_DEVID(vfb)
> >  static LIBXL_DEFINE_DEVICE_FROM_TYPE(vfb)
> > 
> >  /* vfb */
> > -LIBXL_DEFINE_DEVICE_REMOVE(vfb)
> > -
> >  DEFINE_DEVICE_TYPE_STRUCT(vfb, VFB, vfbs,
> >      .skip_attach = 1,
> >      .set_xenstore_config = (device_set_xenstore_config_fn_t)
> 
> No add function ;-)
> 

Good catch, thanks. I will consolidate these two patches in the next
version.

> And libxl doesn't build anymore with the last patch applied. There may
> also be issues with functions that are static and thus not accessible
> from other C files.

Yes, I wanted to receive a bit of feedback on the code generation
approach before mangling things to build. As you say, there are
currently static functions in libxl_<device>.c files that will need to
be accessible from the generated C file. I will address this in v3.

Thanks,
NR


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:27:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:27:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122632.231308 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldypp-0003TL-TA; Tue, 04 May 2021 17:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122632.231308; Tue, 04 May 2021 17:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldypp-0003TE-Q1; Tue, 04 May 2021 17:27:49 +0000
Received: by outflank-mailman (input) for mailman id 122632;
 Tue, 04 May 2021 17:27:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tMRT=J7=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1ldypo-0003T8-PS
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:27:48 +0000
Received: from mail-lf1-x130.google.com (unknown [2a00:1450:4864:20::130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 11b6ea1c-dac8-4593-bb01-3fbcfee913c9;
 Tue, 04 May 2021 17:27:48 +0000 (UTC)
Received: by mail-lf1-x130.google.com with SMTP id c3so10410194lfs.7
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 10:27:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 11b6ea1c-dac8-4593-bb01-3fbcfee913c9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to;
        bh=v0CVkEbf9Ck0jddg6MwqLyIswronxdSbnoHybmIUOck=;
        b=Mug5UL+qcO57rvyTmXzlCgCzqYZiCpOmzlaZ8iZMSyyESNOyGpkIhy9W/TpmjLtY/H
         8JxjeOq65PabpjnCZl7kYoyAKlY6f/wLef/WxxC/9PBplGWQk2cOTt0WwRtT7cwXZacv
         7jcZM88wlfvcxdg9Jxb92R3mbb1dw2iyPomAywIWaV7m16GXd83dO6uuj9m09nsdViOy
         WztbkRH5TkArbFCKMunL+9ijubupBsD/nBMN1I/Ak9Pzyvo9FyiaNWVt0B8NOzCnEcG4
         GMswuksjVybkW8rLwuEgWpzbQemU7ITgp1VPJ5UJ8QpuOHJgLzAgisUZ9/AXGoV7fnuc
         j9GA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to;
        bh=v0CVkEbf9Ck0jddg6MwqLyIswronxdSbnoHybmIUOck=;
        b=SuYl5VJeTow5CkBtg8a3YP7oST64y6aXct7BeocfU1CkUJwlWDg9J5N7hhFL9+me00
         /mLaJc2/IvDcgtoiDxeAxcMV59UZZaAYpvNNhtSsXzbr8365qG7hjNh8QjZRV8Gd9Dke
         u1QXdYJ3OiBfGD0yMTAzOrURc175UsBRX3D2QPASl7fXBDd5B6wogvQgCIUSnuGoUqdM
         /jpb2AZMPnNlH6MJsJl6txG+0WE2IN3+Hmj3DwDQ6BcglIcUUOIRnoyyGj19Pt2IuG7P
         mtzPOIbKYmUGE7S8dg8KOpYBXdMCIjMp5B0GH5jE8iz0LcUmHK4UU8TWMKxEto+gJeVc
         LJEA==
X-Gm-Message-State: AOAM530txqdeOzVApCV+eLeNdzUqbVTnid6zapvnmzpn92AqyDf0BvT8
	qU3DcA2dLps45fhFGLdXGQsZmjCItcviTEofQ/0=
X-Google-Smtp-Source: ABdhPJyvULL6JNy2sHMgHfno0mV+fsqDnG6koDpptwQNp8/tuS+MmI9d9Oc6KUBjWuv7m+8oGYAKRlOV7vvuFL2m8C0=
X-Received: by 2002:a05:6512:3e7:: with SMTP id n7mr17519120lfq.150.1620149267110;
 Tue, 04 May 2021 10:27:47 -0700 (PDT)
MIME-Version: 1.0
References: <20210504124842.220445-1-jandryuk@gmail.com> <20210504124842.220445-5-jandryuk@gmail.com>
 <20210504131328.wtoe4swz7nyzyuts@begin> <CAKf6xpsVJQ7LeV63hb8Sm_6gq+xjCwMDOkuMKNsn+-vqHF=9rQ@mail.gmail.com>
 <20210504170719.mnu3e3av7klsvyuq@begin>
In-Reply-To: <20210504170719.mnu3e3av7klsvyuq@begin>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Tue, 4 May 2021 13:27:36 -0400
Message-ID: <CAKf6xpvP2TCqZwew8_ykYEcXfsmhsef2TefcV++h2u4BsWVo2A@mail.gmail.com>
Subject: Re: [PATCH 4/9] vtpmmgr: Allow specifying srk_handle for TPM2
To: Samuel Thibault <samuel.thibault@ens-lyon.org>, 
	xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>
Content-Type: text/plain; charset="UTF-8"

On Tue, May 4, 2021 at 1:07 PM Samuel Thibault
<samuel.thibault@ens-lyon.org> wrote:
>
> Jason Andryuk, on Tue, 04 May 2021 13:04:47 -0400, wrote:
> > owner_auth & srk_auth don't check :, but then they don't skip : or =
> > when passing the string to parse_auth_string.  So they can't work
> > properly?
>
> They happen to "work" just because there is no other parameter prefixed
> the same.

parse_auth_string fails on the ":".

Just tested "owner_auth:well-known":
ERROR[VTPM]: Invalid auth string :well-known
ERROR[VTPM]: Invalid Option owner_auth:well-known
ERROR[VTPM]: Command line parsing failed! exiting..

> > > We'd better clean this up to avoid confusions.
> >
> > Right, so what do we want?  I'm leaning toward standardizing on =
> > since the tpm.*= options look to parse properly.
>
> I'd say so too. Also because that's what is apparently documented.

Ok, thanks.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:29:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:29:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122636.231320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyri-0003d2-8i; Tue, 04 May 2021 17:29:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122636.231320; Tue, 04 May 2021 17:29:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldyri-0003cv-5g; Tue, 04 May 2021 17:29:46 +0000
Received: by outflank-mailman (input) for mailman id 122636;
 Tue, 04 May 2021 17:29:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E2aX=J7=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1ldyrg-0003cq-FZ
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:29:44 +0000
Received: from mail-qv1-xf34.google.com (unknown [2607:f8b0:4864:20::f34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56b1b6b4-cd4e-4bdd-b672-162de0ee2689;
 Tue, 04 May 2021 17:29:43 +0000 (UTC)
Received: by mail-qv1-xf34.google.com with SMTP id i8so4865560qvv.0
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 10:29:43 -0700 (PDT)
Received: from six (c-73-89-138-5.hsd1.vt.comcast.net. [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id k1sm11377700qkh.5.2021.05.04.10.29.42
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Tue, 04 May 2021 10:29:42 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56b1b6b4-cd4e-4bdd-b672-162de0ee2689
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to:user-agent;
        bh=prttDhQN8uYHL1d//3odkJa/kJ4jXGOhgl6bvJXgbHY=;
        b=DhCgFeoAclfq0YjXw1hYxIsPOp2pDIupFTeD2pHMwoxAnH/zrq+TEFQ692aue3f2fV
         gjwlE0reGloapoj69Varw9Pb+cMGKvMENoaJ6/KtpI1zzCONbpsXqOgWDpHJue4JODUN
         diTVGM86yHH5YHVN4acVhQK7GEuyJaoOmHtUQL99AFQeZJZ08RC4CblXNvmgUkqslAlk
         Hr0sLt2yWEGp2JnPyTRKe3I8mDwHDgGhBbcEoo6XqMLewNzIiX1uwTE5cgG2p06NJQCn
         EqNmqzccnXeqMoy0c2AzHYGa0rrYK8HoQhMVySl66gj5T1VMnoBaYg79f5RE984lXv+5
         y4/A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=prttDhQN8uYHL1d//3odkJa/kJ4jXGOhgl6bvJXgbHY=;
        b=eKyo3EOAoXYQREl8BllL6peyu9CWuOGoJRCre5huqRZ6Ec2Q9WYNkE2YlYB+sZHwev
         ntW/KqmFmxbLQnkl+OWHXuEVqGQki1JkOWebQkbwFKInlyLmR3m4cT/inZvfzAp+tMtN
         d9pwS4uoqdKZsm3xIT/B3N+2OmTGmTzpCSWI5/EUczXc1SiamRt45ymRRxr++jbqSfAA
         wRiisg/9CNGEAl6a/g1LzPzVprvxb8zf4RKII/YrnkUt6gZlaCnqlh4ynmonXEcfT3eD
         f/dnYqeRdZs9cGHDc9n4N6R6+vFxfCbidbtdoGFTjlCvJxVIcJO4oOWLUW3e/lvv/s2w
         foXg==
X-Gm-Message-State: AOAM5320QBOelO0xahEPswWItpOmjCuakkaZeJnboA1GIYhPpI80pXuI
	dWoJ4sB5E8fzmsX12OP99AA=
X-Google-Smtp-Source: ABdhPJye0JTFhTt9LWMNpuVfSlGSvKsk+xrKtYLXt26swZoAikXcO41OUe3nq9BsySCzhOYuvfrx7Q==
X-Received: by 2002:ad4:54c5:: with SMTP id j5mr27393657qvx.4.1620149383222;
        Tue, 04 May 2021 10:29:43 -0700 (PDT)
Date: Tue, 4 May 2021 13:29:40 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, george.dunlap@citrix.com,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RFC v2 6/7] libxl: implement device add/remove/destroy
 functions generation
Message-ID: <20210504172940.GB7941@six>
References: <cover.1614734296.git.rosbrookn@ainfosec.com>
 <5986715fe1d677533b67c06e9561cd716716d46a.1614734296.git.rosbrookn@ainfosec.com>
 <YJFiH9dFdlq2l87k@perard>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YJFiH9dFdlq2l87k@perard>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Tue, May 04, 2021 at 04:02:55PM +0100, Anthony PERARD wrote:
> On Tue, Mar 02, 2021 at 08:46:18PM -0500, Nick Rosbrook wrote:
> > +def libxl_func_define_device_add(func):
> > +    s = ''
> > +
> > +    return_type = func.return_type.typename
> > +    if isinstance(func.return_type, idl.Enumeration):
> > +        return_type = idl.integer.typename
> > +
> > +    params = ', '.join([ ty.make_arg(name) for (name,ty) in func.params ])
> > +
> > +    s += '{0} {1}({2})\n'.format(return_type, func.name, params)
> > +    s += '{\n'
> > +    s += '\tAO_CREATE(ctx, domid, ao_how);\n'
> > +    s += '\tlibxl__ao_device *aodev;\n\n'
> > +    s += '\tGCNEW(aodev);\n'
> > +    s += '\tlibxl__prepare_ao_device(ao, aodev);\n'
> > +    s += '\taodev->action = LIBXL__DEVICE_ACTION_ADD;\n'
> > +    s += '\taodev->callback = device_addrm_aocomplete;\n'
> > +    s += '\taodev->update_json = true;\n'
> > +    s += '\tlibxl__{0}(egc, domid, type, aodev);\n\n'.format(func.rawname)
> > +    s += '\treturn AO_INPROGRESS;\n'
> > +    s += '}\n'
> 
> That's kind of hard to read. I think we could use Python's triple-quote
> (or triple double-quote) ''' or """ to have a multi-line string and
> remove all those \t \n
> Something like:
> 
>     s = '''
>     {ret} {func}({params})
>     {{
>         return ERROR_FAIL;
>         libxl__{rawname}(gc);
>     }}
>     '''.format(ret=return_type, func=func.name, params=params,
>                rawname=func.rawname)
> 
> That would produce some extra indentation in the generated C file, but
> that doesn't matter to me. They could be removed with textwrap.dedent()
> if needed.
> 
That sounds good to me.
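Combining the triple-quoted template with textwrap.dedent() would look
something like this sketch (the function body and parameter names here are
illustrative, not the real generated code):

```python
import textwrap

# Sketch of the suggested style: a dedented triple-quoted template for
# the generated C function, with {{ }} escaping the literal C braces.
def gen_device_add(ret, func, params, rawname):
    return textwrap.dedent('''\
        {ret} {func}({params})
        {{
            AO_CREATE(ctx, domid, ao_how);
            libxl__{rawname}(egc, domid, aodev);
            return AO_INPROGRESS;
        }}
    ''').format(ret=ret, func=func, params=params, rawname=rawname)

print(gen_device_add('int', 'libxl_device_vfb_add',
                     'libxl_ctx *ctx', 'device_vfb_add'))
```

dedent() strips the common leading whitespace, so the emitted C has no extra
indentation at all.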

Thanks,
NR


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:31:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:31:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122640.231332 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldytg-0004RM-MT; Tue, 04 May 2021 17:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122640.231332; Tue, 04 May 2021 17:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldytg-0004RF-Ij; Tue, 04 May 2021 17:31:48 +0000
Received: by outflank-mailman (input) for mailman id 122640;
 Tue, 04 May 2021 17:31:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=E2aX=J7=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1ldyte-0004RA-H0
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:31:46 +0000
Received: from mail-qk1-x72d.google.com (unknown [2607:f8b0:4864:20::72d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64aafcbf-3934-4acd-b9eb-f716ad624911;
 Tue, 04 May 2021 17:31:45 +0000 (UTC)
Received: by mail-qk1-x72d.google.com with SMTP id u20so9349633qku.10
 for <xen-devel@lists.xenproject.org>; Tue, 04 May 2021 10:31:45 -0700 (PDT)
Received: from six (c-73-89-138-5.hsd1.vt.comcast.net. [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id h188sm11696162qkd.23.2021.05.04.10.31.44
 (version=TLS1_2 cipher=ECDHE-ECDSA-CHACHA20-POLY1305 bits=256/256);
 Tue, 04 May 2021 10:31:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64aafcbf-3934-4acd-b9eb-f716ad624911
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to:user-agent;
        bh=sJD6MEx691KAn1LgCzQ6M7yBTwppjZ9fnSlAHAME3fA=;
        b=BwaU+zyLfV4ULlzlYISsPd8lKqdE5xflCJKcuobtUMAZdfbMSGjEY9zbPC9NriQQkp
         6NklOhg69nV6I/QG6jivkK87I4GMepvRVxyCfWQy42lTi/+0ZklKUOaeu+5rcDefVF8k
         9dXvkG/hOd8vpCSX66matGZ+luQzfeR9JjbDHWBMYsHy08h1vlX6PB7JiYdHtrWEmLga
         Tcomqtc7N3QXBr9HhuNJ3ZClrIpQP2pN+yVYd8K7xoOQt+jN5x/3DkO69ZxDVjdXrNfQ
         CYmrBd3ks39fTu9YlxfHlZwCfn5mEK4ji94/hRyKCMEnFgRtGQh7Giu9MpvZDh5aVcXH
         LWGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to:user-agent;
        bh=sJD6MEx691KAn1LgCzQ6M7yBTwppjZ9fnSlAHAME3fA=;
        b=X91N3SchaOJocarIapxMpyMRzqGYehdoZ+CNCc3wOZtHG9Hkjtak6TJkAIluyqMPXm
         qyrvlxyjBi+BSMPhiex8T6b2ho9Gx29NFcZx0fQo8spsI9lI1pX4C30vFv2ECpmsnMmb
         LeDmKmfByGmuq28ZQNjyxVjVdwIiwo6QBHFcy7+hGJO7p65FgAVXOQ4a2dR6WtK4uyXW
         N/RcZrYC5T0LA7Ez9dJKUdcHlG8u4vJRg9H1Crz5/dv3JbbKnfGZxi7yOzKuPIWNVHdo
         rCTlNf+M6/z4Nkshikvl3ZckZFZz9DIZ7mdML3KNRB809fCEMM2Peh69EeFroa7a4Tn9
         hIBQ==
X-Gm-Message-State: AOAM533NNFATaqdONRpL3T///osJAzlUTDI9ATj5NggVVxuUO2EoLOkZ
	x/PK36KXmdeKu+pB6XjePIU=
X-Google-Smtp-Source: ABdhPJy9J/h0ycF5GVMKcXiYEOaaNcMWgKGoz9eJwKbOjAYimSBfXYPv/nfLTFrNGf45osIkFb8nGQ==
X-Received: by 2002:a37:e47:: with SMTP id 68mr26768440qko.372.1620149505695;
        Tue, 04 May 2021 10:31:45 -0700 (PDT)
Date: Tue, 4 May 2021 13:31:43 -0400
From: Nick Rosbrook <rosbrookn@gmail.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, george.dunlap@citrix.com,
	Nick Rosbrook <rosbrookn@ainfosec.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [RFC v2 0/7] add function support to IDL
Message-ID: <20210504173143.GC7941@six>
References: <cover.1614734296.git.rosbrookn@ainfosec.com>
 <YJFsbHruoGA6aGMY@perard>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YJFsbHruoGA6aGMY@perard>
User-Agent: Mutt/1.9.4 (2018-02-28)

On Tue, May 04, 2021 at 04:46:52PM +0100, Anthony PERARD wrote:
> On Tue, Mar 02, 2021 at 08:46:12PM -0500, Nick Rosbrook wrote:
> > At a Xen Summit design session for the golang bindings (see [1]), we
> > agreed that it would be beneficial to expand the libxl IDL with function
> > support. In addition to benefiting libxl itself, this would allow other
> > language bindings to easily generate function wrappers.
> > 
> > The first version of this RFC is quite old [1]. I did address comments
> > on the original RFC, but also expanded the scope a bit. As a way to
> > evaluate function support, I worked on using this addition to the IDL to
> > generate device add/remove/destroy functions, and removing the
> > corresponding macros in libxl_internal.h. However, I stopped short of
> > actually completing a build with this in place, as I thought it made
> > sense to get feedback on the idea before working on the next step.
> 
> The series looks good to me, beside a few detail.
> 
> Cheers,
> 
> -- 
> Anthony PERARD

Thanks for reviewing!

-NR


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:47:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:47:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122648.231344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldz8k-0005VP-6t; Tue, 04 May 2021 17:47:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122648.231344; Tue, 04 May 2021 17:47:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldz8k-0005VI-3q; Tue, 04 May 2021 17:47:22 +0000
Received: by outflank-mailman (input) for mailman id 122648;
 Tue, 04 May 2021 17:47:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1ldz8j-0005VC-0k
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:47:21 +0000
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b795809-35b4-40da-9382-d2fe0a56d345;
 Tue, 04 May 2021 17:47:19 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id B66EE1E8;
 Tue,  4 May 2021 19:47:18 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id h1G5CTwCoOK3; Tue,  4 May 2021 19:47:18 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id 1BE7490;
 Tue,  4 May 2021 19:47:18 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1ldz8f-00GPn2-9z; Tue, 04 May 2021 19:47:17 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b795809-35b4-40da-9382-d2fe0a56d345
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 19:47:17 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 9/9] vtpmmgr: Support GetRandom passthrough on TPM 2.0
Message-ID: <20210504174717.yzvqyc37twoitlns@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-10-jandryuk@gmail.com>
 <20210504133332.pt56xjrxvbnz2htd@begin>
 <CAKf6xpuS_FoWpkBxMEudJsOwfKG96f8_Vd8p6tcU5C1f01PT6Q@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKf6xpuS_FoWpkBxMEudJsOwfKG96f8_Vd8p6tcU5C1f01PT6Q@mail.gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: B66EE1E8
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[4];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 TAGGED_RCPT(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 TO_DN_ALL(0.00)[];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue, 04 May 2021 13:23:57 -0400, wrote:
> > > @@ -842,6 +889,7 @@ TPM_RESULT vtpmmgr_handle_cmd(
> > >               switch(ord) {
> > >               case TPM_ORD_GetRandom:
> > >                       vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
> > > +                     return vtpmmgr_handle_getrandom(opaque, tpmcmd);
> > >                       break;
> >
> > Drop the break, then. I would say also move (or drop) the log, like the
> > other cases.
> 
> Will drop the break.  I would just leave the log since it matches the
> other cases in this case statement.

Mmm, right, these are all pass-through cases, so it's better to log them all the same way.
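For illustration, a minimal C sketch of the pattern under discussion (the ordinal value, enum, and helper names are stand-ins, not the real vtpmmgr code): once the case returns, a trailing `break` is unreachable and can be dropped, while the passthrough log line stays consistent with the other cases.

```c
#include <stdio.h>

/* Stand-in ordinal; the real values live in the TPM headers. */
#define TPM_ORD_GetRandom 0x46

enum result { RES_HANDLED, RES_PASSTHROUGH };

/* Stands in for vtpmmgr_handle_getrandom(). */
static enum result handle_getrandom(void)
{
    return RES_HANDLED;
}

static enum result handle_cmd(unsigned int ord)
{
    switch (ord) {
    case TPM_ORD_GetRandom:
        printf("Passthrough: TPM_GetRandom\n"); /* log kept, matching the other cases */
        return handle_getrandom();              /* no break: it would be unreachable */
    default:
        return RES_PASSTHROUGH;
    }
}
```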

Samuel


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:47:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:47:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122651.231356 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldz9F-0005am-GI; Tue, 04 May 2021 17:47:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122651.231356; Tue, 04 May 2021 17:47:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldz9F-0005af-CN; Tue, 04 May 2021 17:47:53 +0000
Received: by outflank-mailman (input) for mailman id 122651;
 Tue, 04 May 2021 17:47:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldz9D-0005aX-Qn
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:47:51 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9957dda9-a73a-4c6b-8a05-587b76299324;
 Tue, 04 May 2021 17:47:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9957dda9-a73a-4c6b-8a05-587b76299324
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620150470;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=3VCWHNbh2yxzLC/u0igMuQk+qIam3UdIDz+UrWmvYm8=;
  b=eHKScAzM0u/QGwtxhZg5u/owVPn2RNbci3b754vL+0+SiZvmCj+4sSZu
   QRMXsvWvDat+YzokBu2f+kRLbEihAzoVkds2rMX62g8x4jcmIt1eaxPYY
   FAV6NUwH4Rgx38jrCASc9wFozGmsnp52dlJg44ni8Scw3K1MWaOwGMywz
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Xu46TS6FTTbKhTA6PIZWl3g/Un9KqRc3YSfevXA1XMvIHQp21aPrAg/+n8MxS5GSxXZXYA4093
 JkB0Hn+iq67NdcG4V0KZ9lho13sEbWGlGTXZerCWFBfqs4mHZeUcEkkBDUP+ow7+PSOduGI0Rc
 HXObh9YThcFPU4O3KYz8PjbnbiW3+jfvlgGMMhVM/w4AEHGvUyd2ROtlJKiExYA32TgcB/z748
 OeB1ypKz7J92XhKBFOhCS6c6/HyaHD/2spGj5eJUm1fJN+I1zDUGLRCSiRTqDtl2cmVeSbPukb
 p6I=
X-SBRS: 5.1
X-MesageID: 43046289
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43046289"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cIgS6//X48sLEFDEdV6/IzkrbYAy+cEWgENg+HlPh2UrPr7UQTJ8wT2QR1ZEjgyHdEKnkeV9+S/I+ugOtP2OIyOa4v5qAw8XCqriO+rtvHa9i2b74/lfls/DUJiFMv1Xqw249CR5c8GDabVhbahBN8+z7sjImSj44xItbpQMgR8bwFcqeKefVR4M5b6Tk750qLeV39t4yC4piz5zACmF/SWKQo4KVfuweyqEUMAdZNJkyvAZ0MGIITD9XXkbW+XxBRYMXhyGNQCHEn+o4j7uqyPgM9IEs3AauIlG2Bgyvg29IAuRLPrnWYd+YTQHaPnanwl5Z0dgXHf7YM2tXhaeWQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kS+af/9wuEjAZ5jXY0405C5CYjrYskh69LYhnX2usEg=;
 b=ceAq+7UuYIit97h5Qbtapee0DE3XvBj2AhC1DTNP9wYJRxsBhSUcH5BLDn7L5MW/Rz5boIbjGu+Wrkgqd/bNqLClvqcBySQiIwMiIhab9wQRVSt8H5aSAPc2o8T1HKUdYS4brwuN6QhTRUtXsDF9lCw1DWSQwyWRwMxYYLNWrNJxgcaTbEPwVYkJjcSxxHp/BH9Ehag2pndGxVgjkiiu8sRdySmAYsEIxscPgosryDh8CCCQcveozoXPpjDt77xBjpd7uKTRfTMxghVtYQNcuOFYUyKBThwmbJlUkaUFiv92/s98OVZEzGjnSIIukm5oA3VqKOgR1Qt7OQBtUiI5Lg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kS+af/9wuEjAZ5jXY0405C5CYjrYskh69LYhnX2usEg=;
 b=fyg+FNYNexje7nNHVFhXXS8z2HuTGhYCyZRDrHYCYfOQVhkd5n7kOvQym+IzHSs1U1afW9PzBNLevGa5i2EJTevgx4AaCXsOhdl2ZK9CbgwjHIOqp7y4Hu44vKWhoEwsBkm2MFeHRg3mgAl64GU2tbvcSgg5HCVPd1GRX+IUeKc=
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210504135021.8394-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v1] tools: fix incorrect suggestions for XENCONSOLED_TRACE
 on FreeBSD
Message-ID: <c71658e6-422b-4852-6d21-4688d09d8b8e@citrix.com>
Date: Tue, 4 May 2021 18:47:12 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210504135021.8394-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0420.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d9019baf-ea8f-485f-a021-08d90f24ac47
X-MS-TrafficTypeDiagnostic: BYAPR03MB4296:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4296B299C37170F3FB2D21FBBA5A9@BYAPR03MB4296.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2582;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: d9019baf-ea8f-485f-a021-08d90f24ac47
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 17:47:23.5838
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: dT1CMMwoWLAsnJf8ovsnlR2KlawB1V7e7xMFKNMM+wSIKNPfS4YVoucFjM/lJV0ogf2/pxB62h/R/VhIHkjIRoU9OueTTkW4P8GY3xceEdU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4296
X-OriginatorOrg: citrix.com

On 04/05/2021 14:50, Olaf Hering wrote:
> --log does not take a file; it specifies what is supposed to be logged.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>.  That said, ...

> ---
>  tools/hotplug/FreeBSD/rc.d/xencommons.in | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/tools/hotplug/FreeBSD/rc.d/xencommons.in b/tools/hotplug/FreeBSD/rc.d/xencommons.in
> index ccd5a9b055..36dd717944 100644
> --- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
> +++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
> @@ -23,7 +23,7 @@ required_files="/dev/xen/xenstored"
> 
>  XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
>  XENCONSOLED_PIDFILE="@XEN_RUN_DIR@/xenconsoled.pid"
> -#XENCONSOLED_TRACE="@XEN_LOG_DIR@/xenconsole-trace.log"
> +#XENCONSOLED_TRACE="none|guest|hv|all"
>  #XENSTORED_TRACE="@XEN_LOG_DIR@/xen/xenstore-trace.log"

It would probably be clearer to untangle these in one go, leaving the
result looking like:

XENCONSOLED_PIDFILE="@XEN_RUN_DIR@/xenconsoled.pid"
#XENCONSOLED_TRACE="none|guest|hv|all"

XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
#XENSTORED_TRACE="@XEN_LOG_DIR@/xen/xenstore-trace.log"

I'd also be tempted to fold this and the NetBSD change together.  It's
not as if these bugfixes are distro-specific.


It looks like a bug in NetBSD in c/s 2e8644e1d90, which was copied into
FreeBSD by c/s 5dcdb2bf569.  (P.S. Sorry Roger - both your bugs,
starting from a decade ago.)  It really is idiotic that we've got a
commonly named *_TRACE variable with totally different semantics for the
two daemons.  Then again, it's far too late to fix this :(
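To make the semantic difference concrete, here is a small C sketch (an assumption based only on this thread, not the daemons' actual parsing code): XENCONSOLED_TRACE, fed to xenconsoled's `--log`, must be one of the mode keywords, whereas a file path like the old suggested value is not a valid mode.

```c
#include <string.h>

/* Accepted values for XENCONSOLED_TRACE / xenconsoled --log, per the fix. */
static int valid_xenconsoled_trace(const char *v)
{
    static const char *const modes[] = { "none", "guest", "hv", "all" };
    size_t i;

    for (i = 0; i < sizeof(modes) / sizeof(modes[0]); i++)
        if (strcmp(v, modes[i]) == 0)
            return 1;
    return 0;   /* anything else - e.g. a log file path - is not a mode */
}
```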

~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 04 17:48:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:48:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122652.231368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldz9U-0005fV-Nx; Tue, 04 May 2021 17:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122652.231368; Tue, 04 May 2021 17:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldz9U-0005fO-Km; Tue, 04 May 2021 17:48:08 +0000
Received: by outflank-mailman (input) for mailman id 122652;
 Tue, 04 May 2021 17:48:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=c7IS=J7=ens-lyon.org=samuel.thibault@srs-us1.protection.inumbo.net>)
 id 1ldz9T-0005fA-DH
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:48:07 +0000
Received: from hera.aquilenet.fr (unknown [185.233.100.1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d37579af-6b4c-41d4-ab2d-24f9ce1e8629;
 Tue, 04 May 2021 17:48:06 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by hera.aquilenet.fr (Postfix) with ESMTP id A25F91D9;
 Tue,  4 May 2021 19:48:05 +0200 (CEST)
Received: from hera.aquilenet.fr ([127.0.0.1])
 by localhost (hera.aquilenet.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id K0HDWytAod5G; Tue,  4 May 2021 19:48:05 +0200 (CEST)
Received: from begin (unknown [IPv6:2a01:cb19:956:1b00:de41:a9ff:fe47:ec49])
 by hera.aquilenet.fr (Postfix) with ESMTPSA id DC73F194;
 Tue,  4 May 2021 19:48:04 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1ldz9Q-00GPoq-0s; Tue, 04 May 2021 19:48:04 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d37579af-6b4c-41d4-ab2d-24f9ce1e8629
X-Virus-Scanned: Debian amavisd-new at aquilenet.fr
Date: Tue, 4 May 2021 19:48:03 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH 4/9] vtpmmgr: Allow specifying srk_handle for TPM2
Message-ID: <20210504174803.p6wonh4qeqbmk2gq@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>,
	xen-devel <xen-devel@lists.xenproject.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-5-jandryuk@gmail.com>
 <20210504131328.wtoe4swz7nyzyuts@begin>
 <CAKf6xpsVJQ7LeV63hb8Sm_6gq+xjCwMDOkuMKNsn+-vqHF=9rQ@mail.gmail.com>
 <20210504170719.mnu3e3av7klsvyuq@begin>
 <CAKf6xpvP2TCqZwew8_ykYEcXfsmhsef2TefcV++h2u4BsWVo2A@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CAKf6xpvP2TCqZwew8_ykYEcXfsmhsef2TefcV++h2u4BsWVo2A@mail.gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)
X-Spamd-Bar: --
Authentication-Results: hera.aquilenet.fr
X-Rspamd-Server: hera
X-Rspamd-Queue-Id: A25F91D9
X-Spamd-Result: default: False [-2.50 / 15.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 FREEMAIL_ENVRCPT(0.00)[gmail.com];
	 TAGGED_RCPT(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 RCPT_COUNT_FIVE(0.00)[6];
	 HAS_ORG_HEADER(0.00)[];
	 RCVD_COUNT_THREE(0.00)[3];
	 TO_DN_ALL(0.00)[];
	 FREEMAIL_TO(0.00)[gmail.com];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MID_RHS_NOT_FQDN(0.50)[];
	 BAYES_HAM(-3.00)[100.00%]

Jason Andryuk, on Tue, 04 May 2021 13:27:36 -0400, wrote:
> On Tue, May 4, 2021 at 1:07 PM Samuel Thibault
> <samuel.thibault@ens-lyon.org> wrote:
> >
> > Jason Andryuk, on Tue, 04 May 2021 13:04:47 -0400, wrote:
> > > owner_auth & srk_auth don't check :, but then they don't skip : or =
> > > when passing the string to parse_auth_string.  So they can't work
> > > properly?
> >
> > They happen to "work" just because there is no other parameter prefixed
> > the same.
> 
> parse_auth_string fails on the ":".
> 
> Just tested "owner_auth:well-known"

owner_auth happens to have the proper size, but srk_auth doesn't.
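A toy C model of the behaviour being described (the helper names are made up; this is not the real vtpmmgr parsing code): if the code matches only the bare parameter name and does not skip past the ':', the value handed to the parser still starts with ':' and is rejected, so whether a given parameter "works" depends entirely on the skip length used.

```c
#include <string.h>

/* Model of parse_auth_string(): reject any value containing ':'. */
static int parse_auth_ok(const char *s)
{
    return strchr(s, ':') == NULL;
}

/*
 * Model of the argument handling under review: match a prefix, then
 * skip a fixed number of characters before parsing the value.
 */
static int handle_auth_arg(const char *arg, const char *name, size_t skip)
{
    if (strncmp(arg, name, strlen(name)) != 0)
        return 0;                      /* not this parameter */
    return parse_auth_ok(arg + skip);  /* only works if skip covered the ':' */
}
```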

Samuel


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:50:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:50:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122657.231380 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldzBg-0006Yq-59; Tue, 04 May 2021 17:50:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122657.231380; Tue, 04 May 2021 17:50:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldzBg-0006Yj-1Z; Tue, 04 May 2021 17:50:24 +0000
Received: by outflank-mailman (input) for mailman id 122657;
 Tue, 04 May 2021 17:50:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldzBe-0006Yd-KN
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:50:22 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0c5a74da-ec3d-4116-8c7f-69c25def076e;
 Tue, 04 May 2021 17:50:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0c5a74da-ec3d-4116-8c7f-69c25def076e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620150621;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=DoQ4unHTciOtlEQit/zkvIA8oGYZiCo6EJOuUCqqpLs=;
  b=amHtQwu/mfcEdCZW5SdLlYhEtZAKzQ98Ome2e0oJkWvpd6ePackbfCNv
   hLTbl/JCGSeR+gBRzZLmZjUcU8+jUYYPGZQ9pWJNP9mwRiP5xKpoM0Ufl
   H3A6WkYMsLdraonZM8xhKmgDZ5/Z6x+CeG2ONrRP2ZId+7yz/DVtOeGt/
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ap4bwLnWsQrLiA4GFVvMD3z2AaZKNxux0+wf3qK6IUaiG8afQ5AWk+fPj2zAHVPIQTbeMt6Zdv
 uNl15kQsuSDAjnNL0dXnqYeEbUkeP30SH9+dcX5d9ctgFw53h6ToTIqDsPxhXo5HEyOT+uNVHR
 Yyw92XncENNKEQhJK7y+xvaVbIeQsy27pczrffCbu83wERxpZ5G5yY7ku+UfgJxEytFBMsZ1Pl
 Lb5GOHdRykXpMT4eCCmy51ZxvokGMq12RRMPrz95+p7VWK2Lw0oV+UTN92btIR4d6ugk4qWTZQ
 PCM=
X-SBRS: 5.1
X-MesageID: 42859347
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="42859347"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OBFXCxojq5kjfM49+BIFyrZrIwBWd2iJLcWqnIlLFnq5DKeQoX2eUUM6pmMLdg1UPMPaAyhLA+S/dhFS6fKVZuLujLB7//P1Yn7lAayBXlW4DlmKosxRNBKJ+L6NJVynUL6hlBndCrW5aOPRBHFX+ULNYdcdnMNHB0CTFLWZvFJiDQbjs3UhURfK5q60Tlor1nJ8BV/DVjb+GMPj1tWI4CTumSw2xjkAsh4GKB0jSmMjNiW4kAyLao1RQxYFW/K6ORlV3wW6CL7drJGZCu3w7T9QexNBBZOBj+mMkMhkYCtqpvrg5DS9a/nYNO3DtA/wQAiJYolLzFLUquZEPGAj+g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DoQ4unHTciOtlEQit/zkvIA8oGYZiCo6EJOuUCqqpLs=;
 b=PDf3Ir+ssYwJ/eWjEqa0vKcdXeioP4XONHeahWAvvsuyXdbuaSbbzSA8KOPPP+AFU2BleBARocC+fR8VOXU7ifyMCy55wWC+L35iCxOzBeMwAjh6x4cG+xhmeYZVCU3BAtuwITqLasFaShOmrjbZQcAdJlHUgSxmraBgChuyrng3M7+cxSGhfECovo5Ek9GcNdlfXaK7HCqbgpWxffc55gU0Til/5x214M9avZL2MwDZqvgFxS8hk+tz1m+l8S8K4I273+4obqb9ddGaJ0g0o27aJnzXsjEFZTEMmcqKet5iCRYoyCuRbFffvUryaRUN5o46cjwxoytHyrn0x99d2Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DoQ4unHTciOtlEQit/zkvIA8oGYZiCo6EJOuUCqqpLs=;
 b=OJWlnNnh0NjEm8l+bHSuFV5iFUGKkifiKVCCJDmSCHmRdzDSP2BVFhSaqknSNnSnyI3OYCy4iDwk2+D8kgaeLi/7qqpX2vC1BhARLt4W70Gh4hnR7QzPsoeda+sR872WVI19njNOQ8U0lqdGNGuCleacBqb0BMYS0d9MtJCZsPg=
Subject: Re: [PATCH v1] tools: handle missing xencommons in
 xenconsoled.service
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210504135854.10355-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <2b207f7b-3a97-810b-8c15-32be55a3d5b2@citrix.com>
Date: Tue, 4 May 2021 18:50:11 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210504135854.10355-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0328.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18c::9) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ebffa245-1460-4f0d-a869-08d90f25143b
X-MS-TrafficTypeDiagnostic: BYAPR03MB4743:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4743A93BEB4FDE2429DEFC10BA5A9@BYAPR03MB4743.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:989;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: ebffa245-1460-4f0d-a869-08d90f25143b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 17:50:18.1613
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: p5l64C64F4ZuFhXuYGKDe+gKUU3awW0f4SBgjhWzgtuxELdXjpmW2nx+HhH9ieAFjs83Kii1n4IMd6f3fpYEgVrbt4XiAN4CwE9jRypXsSk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4743
X-OriginatorOrg: citrix.com

On 04/05/2021 14:58, Olaf Hering wrote:
> sysconfig files are not mandatory.
> Adjust xenconsoled.service to handle a missing sysconfig file by
> prepending a dash to the to-be-sourced filename.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
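
[Archive note: the "dash" in the patch description refers to systemd's convention that an `EnvironmentFile=` path prefixed with `-` is optional — if the file is missing, systemd skips it instead of failing the unit at start-up. A minimal sketch of the idea; the path and ExecStart line are illustrative, not the actual unit contents:]

```ini
[Service]
# The leading "-" tells systemd to ignore this file if it does not
# exist, rather than refusing to start the service.
EnvironmentFile=-/etc/sysconfig/xenconsoled
ExecStart=/usr/sbin/xenconsoled
```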


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:51:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:51:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122663.231392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldzCA-0006g5-JU; Tue, 04 May 2021 17:50:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122663.231392; Tue, 04 May 2021 17:50:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldzCA-0006fy-FZ; Tue, 04 May 2021 17:50:54 +0000
Received: by outflank-mailman (input) for mailman id 122663;
 Tue, 04 May 2021 17:50:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldzC9-0006fs-SX
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:50:53 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d67874f8-41cf-4c4d-acd6-42b85d002d38;
 Tue, 04 May 2021 17:50:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d67874f8-41cf-4c4d-acd6-42b85d002d38
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620150652;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=2zwOtvbYCeCrrEEu3ky6mj/LhH83mJ6RRHaJM5BD6f8=;
  b=Suv70BuYCTJ4eLtWqsOqHhYNa3bjrC0ihaWvn7sXnz6w8fheX4CnrIok
   MOJqIaNjUQZ0epyypvKBMTCcX4FOlRFUxNWUy7Ojulm4MxyUU0n02uCRg
   8IrfwOLAbYXO56rQ7PK4QYJt+Q4CjT6Tg9Sbl06yRHgFtpuwKuMyDJwNf
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43063475
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43063475"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hoSh307uEBxEr8pfIoj3dxW3099US50Zb6QD1pVtECbF1JkiYrC8vfNH+68XmIAujb3CMPKXk7WuaH+n7G3oNGKE1L6+qaey00BNLEh6sUQOL3RHPGYo7GByx9G587FiwjXLtyL21C03/m00WEyf2YdvVT7NXPj8L1ZCCKFfQFxou9AA4yXphOf9AdRIVkh6lSb9bCRvX5yAggi7F/bi7B/JI4/xOFYePIIU+JjWBBzeR+fa3Ee1P+0Y2tc49gtZHTgYHGX5dkieiQZGg/0SO7B3Y3cbpsxHt/GWnWZyWdzPmZRSMe3BwQgsTL1cDk2OL1Ccj26IzbNahX4k++v6gg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2zwOtvbYCeCrrEEu3ky6mj/LhH83mJ6RRHaJM5BD6f8=;
 b=XR2UYx79mLzTCQ7oLb91cu8J5nKVueNJY3zAjGjz/hDXcpavQ8BtN7ePi0baDjzcv74dIUaYktxobFcarmPNjA4BqEzhpXfUGQHfpkey6JV8HRO7whxka6clVUdwcOgAuv3U/Vnj4rvmCQcpUHhKRUns0os6s8LEbzqO6Jx3LZN2ASuM7B+b6+0pFUEnkJf786jDaHDg//29/RcKwVSIXyNe++B03UyOdPOj1P9O29X8yEC48hlb5BOYR7hrQhtMDdn/WSYIs/F87RtNxMv0bmuNAV1K12Nh+aHst+/gYS/DBbJHFhws1f1qgeMjWpH67IVroShmiCcacqEPx6sq+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2zwOtvbYCeCrrEEu3ky6mj/LhH83mJ6RRHaJM5BD6f8=;
 b=upw3dibWr+1yCgPJ2QVGmEjiPA1touOBnWjFrKP6M7ifU2QgtNxYYwIycTlBNkAFdXeFCS38NTb2uwOYVKLsJHqhK5nca26zC8eY3pzzz6IHd7Qg0yFgWUkEqVOvf2BieFekncTHStE3AI/Q0+DIHPTv/hGLlow4VMyZVg2Ilbk=
Subject: Re: [PATCH v1] tools: handle missing xencommons in
 xen-init-dom0.service
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210504143128.16456-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <fe3987b5-c04f-24f3-88c6-24553710352d@citrix.com>
Date: Tue, 4 May 2021 18:50:36 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210504143128.16456-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0342.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18c::23) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 352bac7b-dd66-4fa7-b4c2-08d90f2526a9
X-MS-TrafficTypeDiagnostic: BYAPR03MB4743:
X-Microsoft-Antispam-PRVS: <BYAPR03MB47434B736082E62DBF1C24A4BA5A9@BYAPR03MB4743.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:989;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 352bac7b-dd66-4fa7-b4c2-08d90f2526a9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 17:50:49.4194
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: YQx+Nc0vQsG7W9Ssa298wkC1RCSIXZJzwB0L6KeAucRJKJfd8mwIU9E3JRVDZE6w4XEOuvmV1LhP302PhNOouY8GJ3zS6fdzQgLvlGSaQ4s=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4743
X-OriginatorOrg: citrix.com

On 04/05/2021 15:31, Olaf Hering wrote:
> sysconfig files are not mandatory.
> Adjust xen-init-dom0.service to handle a missing sysconfig file by
> prepending a dash to the to-be-sourced filename.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 04 17:55:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 17:55:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122674.231404 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldzGl-0006tp-7R; Tue, 04 May 2021 17:55:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122674.231404; Tue, 04 May 2021 17:55:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ldzGl-0006ti-2i; Tue, 04 May 2021 17:55:39 +0000
Received: by outflank-mailman (input) for mailman id 122674;
 Tue, 04 May 2021 17:55:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ldzGj-0006td-Di
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 17:55:37 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dde4598f-1052-4501-b90c-289fc0cdaa15;
 Tue, 04 May 2021 17:55:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dde4598f-1052-4501-b90c-289fc0cdaa15
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620150936;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=lOygGWTFXkTQhfHzoqhd4sj3sNICY0WxoSyJxLU7WRY=;
  b=Vz62G9upixUGS/TxvYMMQu+xNpuh79OAn+JbAPfPCVr2au1iEH3YnnqQ
   1EUWdg5MdB2Q4BFJ++Nrmx5JIDL3EksgIuscLjHFjIB04ZREAswTseIin
   1NxNI6/8zjHbVy/KGkjFEgdaQTnzv3OVsYamjsm/FbJP/RJOf9ZeSZxy3
   o=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44573406
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="44573406"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q1OtcKe2WEH+plddZqbt1AmKnDBjG5IvFkEMamjX2uRROORArscYWM0uqWgFgwnc5KI5iBd/ZVwAVRWjfR61iXJBQnIEiIGZR/V/vGu9DyLGMbqd9Juqm65VYm+3WixNnCQcffX/IiiqAvrmi6JEtpAItRE06ib3w729N2rXBp8aEU97Pe3SowCFrR6nKYYR712HWrZ7nT7UQqC88uppw0zwX6Efz9ik697h3USAZETcSiajeO3i7+GBvg5X7rRjRY2zrwq4fjctM5VBt2ih1QKm0SCkxoDTWGrILvo9ki0Ed5ftaPie2ZyMT+MzgyBlmI/CL59NyA+dxkqsQ7+zRw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DnQ8wFkuqBJ7z1MMAuFXc3wb0ZpD++bpbaJswgZMIrM=;
 b=XQE1Ee5oGjcmyI1hBhvFrvu3pxQp96YMf498NN9inkhfV68BMaEkj3hH81PvGo6T+iIeO1UT1ySBUANRYXhahnj1uyIt7jI1GFB+qwvkUQPmK1UrcWoIQgHBnQRlmaiNCMAF96R54kCTSDgR9qILJ9vrqjz4aQtMZQinvsulTnnVyEPK7ay0IDu9cUheloPQDBGEbbyIm5sRekaXMuPrOp9kYwIb+HmgI+EkscxAeZraXno9hIzYzoK411hqqzlXFXBKMdud1iOu+y++DeSt1yAj6vjMsjiKCXh4nniwBHtGWieW+zOrUllP8h5NHBM/jUK4thws/HGlpxn0lgA+dg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DnQ8wFkuqBJ7z1MMAuFXc3wb0ZpD++bpbaJswgZMIrM=;
 b=gYXX+XjtRTaydtbJKPQRBLjXXl9LuDRZPkucLNFiOK6hA8EZz5XPFHlH0I1C0hmSd57S5/A3l94XWLoTBO4pvM7cR38U64xr+0K30POGSOkX2ZYk9kMa9VFj/cTkIwZtIvZlBCJZMlD7duN0Os7vp4KwlSYDpNuPX70QM3AWWMk=
Subject: Re: [PATCH 1/9] docs: Warn about incomplete vtpmmgr TPM 2.0 support
To: Jason Andryuk <jandryuk@gmail.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-2-jandryuk@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <59277d56-8b3c-b18a-5eb1-f4b87c1a65da@citrix.com>
Date: Tue, 4 May 2021 18:55:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210504124842.220445-2-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LNXP265CA0052.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6f67d418-8cd2-43cf-ba3c-08d90f25cf7a
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5854:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB585446A37CE5D2041A9A7FC6BA5A9@SJ0PR03MB5854.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 6f67d418-8cd2-43cf-ba3c-08d90f25cf7a
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 04 May 2021 17:55:32.2686
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6IIJv1sWmrxUf8CuguEbH4ZMGdgNukQPEDk3/qOHKzkgN2S6VhBhuwGNHZcFRmdqkhzcQhJ5y8ZG5Jyq501Wl9fn7R2S0AjbjXIyaBmAEio=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5854
X-OriginatorOrg: citrix.com

On 04/05/2021 13:48, Jason Andryuk wrote:
> The vtpmmgr TPM 2.0 support is incomplete.  Add a warning about that to
> the documentation so others don't have to work through discovering it is
> broken.
>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

This is definitely the kind of health warning needed for people playing
in this area.


From xen-devel-bounces@lists.xenproject.org Tue May 04 18:49:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 18:49:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122691.231451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le06o-00038x-QI; Tue, 04 May 2021 18:49:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122691.231451; Tue, 04 May 2021 18:49:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le06o-00038q-ND; Tue, 04 May 2021 18:49:26 +0000
Received: by outflank-mailman (input) for mailman id 122691;
 Tue, 04 May 2021 18:49:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le06n-00038i-Ut; Tue, 04 May 2021 18:49:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le06n-0000mg-Kp; Tue, 04 May 2021 18:49:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le06n-00082o-8M; Tue, 04 May 2021 18:49:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1le06n-0008AC-7h; Tue, 04 May 2021 18:49:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=dik1D0qQnDuIN45zERB76HFbNYX2pXNvAAaCKRyU8t0=; b=3mey45w8K1p+JZwot1iy+Nyfa5
	l+B2HQIOX6L2u4YkIOD7N+SC2/GkZHVYZC+vByHzJlQ6uGQG5yNpdTA4rf556GpnXWUlm3dLGu44r
	qj97i7iGHB2Csk0Br2Qwwe3nhjb1n+1RrHUyCDYAG1Ukg1nPI9vO6toVcEye/8tL/QwQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161761-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161761: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:build-amd64-xsm:xen-build:fail:regression
    xen-4.12-testing:build-amd64:xen-build:fail:regression
    xen-4.12-testing:build-amd64-prev:xen-build:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:build-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-livepatch:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-migrupgrade:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-2:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-3:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-4:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-xtf-amd64-amd64-5:build-check(1):blocked:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5b280a59c4dd8dad6cc8da28db981b193d10acee
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 18:49:25 +0000

flight 161761 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161761/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-xsm               6 xen-build                fail REGR. vs. 159418
 build-amd64                   6 xen-build                fail REGR. vs. 159418
 build-amd64-prev              6 xen-build                fail REGR. vs. 159418

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-1        1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-livepatch    1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-migrupgrade  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-livepatch     1 build-check(1)               blocked  n/a
 test-amd64-i386-migrupgrade   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-xtf-amd64-amd64-2        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-3        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-4        1 build-check(1)               blocked  n/a
 test-xtf-amd64-amd64-5        1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5b280a59c4dd8dad6cc8da28db981b193d10acee
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   77 days
Failing since        160128  2021-03-18 14:36:18 Z   47 days   65 attempts
Testing same since   160150  2021-03-20 04:11:48 Z   45 days   63 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  fail    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             fail    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       blocked 
 test-xtf-amd64-amd64-2                                       blocked 
 test-xtf-amd64-amd64-3                                       blocked 
 test-xtf-amd64-amd64-4                                       blocked 
 test-xtf-amd64-amd64-5                                       blocked 
 test-amd64-amd64-xl                                          blocked 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        blocked 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-livepatch                                   blocked 
 test-amd64-i386-livepatch                                    blocked 
 test-amd64-amd64-migrupgrade                                 blocked 
 test-amd64-i386-migrupgrade                                  blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 311 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 04 18:53:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 18:53:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122698.231467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le0Ay-00041P-L0; Tue, 04 May 2021 18:53:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122698.231467; Tue, 04 May 2021 18:53:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le0Ay-00041I-Hx; Tue, 04 May 2021 18:53:44 +0000
Received: by outflank-mailman (input) for mailman id 122698;
 Tue, 04 May 2021 18:53:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1le0Ax-00041B-29
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 18:53:43 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id df988fa9-4297-41c4-8bbd-4a2350a05d30;
 Tue, 04 May 2021 18:53:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df988fa9-4297-41c4-8bbd-4a2350a05d30
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620154421;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=06f3Upass2mWyTQoV9Tdjou4hgcCqgB0oTGnDLZQanY=;
  b=CdppCgqWXpGAwPP4Ji1cAe96T8rk6QHaDGxrrd47Qzzk0qjQ27Dr0gY7
   izJGqd2lH2Iqu5Uyj2gaevlvstkJn9YTfIezXMjQWGkpGh1BCCVnf9w0t
   lg4saG8j6YjiSseBSbiZHf10im2NK4paSPaXfiQJ5kW2mVlqbPsZZ1HEf
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: yqvrvkrZkOrUIcOWtZ8oScsOZW/874cgoVpvnmfyx4k5mMt/ftmY+Z3zO/+VjdZuGUC3l+J6mN
 Y+1fGA+aLyX78Ky71cklJoGpj4DQ8OgvP5fIWKJg7Md2wjZ7shydRnmz4WhMJQYMuIikekZS9F
 ozRUS3uIDl4AW/xi5hINhAvis8OSiAOwEwjhIzIftJbYUZKvb+NRwYjOxyHk/t8SPVbnir8cf7
 uk9Wr/42BuzJxFEEagafbDQB7WDhqMtYHVMDw4JogTvlW8sX/hRIW65x8Pvf/FM5Qf43fXgYU7
 seE=
X-SBRS: 5.1
X-MesageID: 43068859
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:4vriwK7hI2kB2EfKOgPXwRqBI+orLtY04lQ7vn1ZYQBJc8Ceis
 CllOka0xixszoKRHQ8g7m7VZWoa3m0z/5IyKMWOqqvWxSjhXuwIOhZnO/f6hDDOwm7zO5S0q
 98b7NzYeebMXFWhdv3iTPWL/8O29+CmZrHuc7771NACT5ncLth6QARMHf/LmRTSBNdDZQ0UL
 qwj/A3xAaIQngcYsSlCnRtZYGqy+Hjr576fQUAQycu9Qjmt1iVwYTnGBuV1Ap2aUIs/Z4e9w
 H+8jDR1+GYnNyQjjTd0GLS6Jo+oqqd9vJzQPaip+JQBjHligODbJlsVbuYrFkO0Z2SwWdvqv
 bgiVMNONly9mPwcwiO0GTQ8jil6hkCwTvDzkKVmnTqq8CRfkNFN+Nxwbh3XzGczmhIhqAa7I
 t7m1i3mrASMDb72AP63NTMXwECrDvOnVMS1dQ9olYabZETc9Zq3Ooi1XIQKrgsNgTg5rsqFe
 F/Zfusnsp+QBehY3fVsnIH+q3UYl0DWhOPQk01sseIyTRhnHdg00sCxMAE901wjK4Adw==
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="43068859"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] libs/guest: Don't hide the indirection on xc_cpu_policy_t
Date: Tue, 4 May 2021 19:53:22 +0100
Message-ID: <20210504185322.19306-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Hiding the indirection behind a typedef is bad form in C, perhaps best
demonstrated by trying to read xc_cpu_policy_destroy(), and it causes const
qualification to have less-than-obvious behaviour (the hidden pointer becomes
const, not the thing it points at).
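
The const subtlety can be shown with a minimal sketch (the `obj`/`obj_t` names
are hypothetical, not Xen code):

```c
struct obj { int x; };
typedef struct obj *obj_t;   /* hidden indirection, pre-patch style */

/* 'const obj_t o' expands to 'struct obj *const o': the pointer itself
 * is const, but the object it points at remains freely mutable. */
void mutate(const obj_t o)
{
    o->x = 42;               /* compiles without complaint */
}
```

With the post-patch style, `typedef struct obj obj_t;` plus a
`const obj_t *` parameter, the same assignment would be rejected by the
compiler.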

xc_cpu_policy_set_domain() needs to drop its (now normal) const qualification,
as the policy object is modified by the serialisation operation.

This also shows up a problem with x86_cpu_policies_are_compatible(), where
the intermediate pointers are non-const.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>

Discovered while trying to start the integration into XenServer.  This wants
fixing ASAP, before further uses get added.

Unsure what to do about x86_cpu_policies_are_compatible().  It would be nice
to have xc_cpu_policy_is_compatible() sensibly const'd, but maybe that means
we need a struct const_cpu_policy and that smells like it is spiralling out of
control.
---
 tools/include/xenctrl.h             | 22 +++++++++++-----------
 tools/libs/guest/xg_cpuid_x86.c     | 22 +++++++++++-----------
 tools/libs/guest/xg_sr_common_x86.c |  2 +-
 tools/misc/xen-cpuid.c              |  2 +-
 4 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 0fdb2e8885..58d3377d6a 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -2590,33 +2590,33 @@ int xc_psr_get_domain_data(xc_interface *xch, uint32_t domid,
 int xc_psr_get_hw_info(xc_interface *xch, uint32_t socket,
                        xc_psr_feat_type type, xc_psr_hw_info *hw_info);
 
-typedef struct xc_cpu_policy *xc_cpu_policy_t;
+typedef struct xc_cpu_policy xc_cpu_policy_t;
 
 /* Create and free a xc_cpu_policy object. */
-xc_cpu_policy_t xc_cpu_policy_init(void);
-void xc_cpu_policy_destroy(xc_cpu_policy_t policy);
+xc_cpu_policy_t *xc_cpu_policy_init(void);
+void xc_cpu_policy_destroy(xc_cpu_policy_t *policy);
 
 /* Retrieve a system policy, or get/set a domains policy. */
 int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
-                             xc_cpu_policy_t policy);
+                             xc_cpu_policy_t *policy);
 int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
-                             xc_cpu_policy_t policy);
+                             xc_cpu_policy_t *policy);
 int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
-                             const xc_cpu_policy_t policy);
+                             xc_cpu_policy_t *policy);
 
 /* Manipulate a policy via architectural representations. */
-int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t policy,
+int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *policy,
                             xen_cpuid_leaf_t *leaves, uint32_t *nr_leaves,
                             xen_msr_entry_t *msrs, uint32_t *nr_msrs);
-int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t policy,
+int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
                                const xen_cpuid_leaf_t *leaves,
                                uint32_t nr);
-int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t policy,
+int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
                               const xen_msr_entry_t *msrs, uint32_t nr);
 
 /* Compatibility calculations. */
-bool xc_cpu_policy_is_compatible(xc_interface *xch, const xc_cpu_policy_t host,
-                                 const xc_cpu_policy_t guest);
+bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
+                                 xc_cpu_policy_t *guest);
 
 int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
 int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index d4e02cecb1..1ebc108213 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -672,18 +672,18 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     return rc;
 }
 
-xc_cpu_policy_t xc_cpu_policy_init(void)
+xc_cpu_policy_t *xc_cpu_policy_init(void)
 {
     return calloc(1, sizeof(struct xc_cpu_policy));
 }
 
-void xc_cpu_policy_destroy(xc_cpu_policy_t policy)
+void xc_cpu_policy_destroy(xc_cpu_policy_t *policy)
 {
     if ( policy )
         free(policy);
 }
 
-static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t policy,
+static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t *policy,
                               unsigned int nr_leaves, unsigned int nr_entries)
 {
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
@@ -713,7 +713,7 @@ static int deserialize_policy(xc_interface *xch, xc_cpu_policy_t policy,
 }
 
 int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
-                             xc_cpu_policy_t policy)
+                             xc_cpu_policy_t *policy)
 {
     unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
     unsigned int nr_entries = ARRAY_SIZE(policy->entries);
@@ -738,7 +738,7 @@ int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
 }
 
 int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
-                             xc_cpu_policy_t policy)
+                             xc_cpu_policy_t *policy)
 {
     unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
     unsigned int nr_entries = ARRAY_SIZE(policy->entries);
@@ -763,7 +763,7 @@ int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
 }
 
 int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
-                             const xc_cpu_policy_t policy)
+                             xc_cpu_policy_t *policy)
 {
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     unsigned int nr_leaves = ARRAY_SIZE(policy->leaves);
@@ -791,7 +791,7 @@ int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
     return rc;
 }
 
-int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t p,
+int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *p,
                             xen_cpuid_leaf_t *leaves, uint32_t *nr_leaves,
                             xen_msr_entry_t *msrs, uint32_t *nr_msrs)
 {
@@ -823,7 +823,7 @@ int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t p,
     return 0;
 }
 
-int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t policy,
+int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
                                const xen_cpuid_leaf_t *leaves,
                                uint32_t nr)
 {
@@ -843,7 +843,7 @@ int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t policy,
     return rc;
 }
 
-int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t policy,
+int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
                               const xen_msr_entry_t *msrs, uint32_t nr)
 {
     unsigned int err_msr = -1;
@@ -861,8 +861,8 @@ int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t policy,
     return rc;
 }
 
-bool xc_cpu_policy_is_compatible(xc_interface *xch, const xc_cpu_policy_t host,
-                                 const xc_cpu_policy_t guest)
+bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
+                                 xc_cpu_policy_t *guest)
 {
     struct cpu_policy_errors err = INIT_CPU_POLICY_ERRORS;
     struct cpu_policy h = { &host->cpuid, &host->msr };
diff --git a/tools/libs/guest/xg_sr_common_x86.c b/tools/libs/guest/xg_sr_common_x86.c
index 15265e7a33..563b4f0168 100644
--- a/tools/libs/guest/xg_sr_common_x86.c
+++ b/tools/libs/guest/xg_sr_common_x86.c
@@ -48,7 +48,7 @@ int write_x86_cpu_policy_records(struct xc_sr_context *ctx)
     struct xc_sr_record cpuid = { .type = REC_TYPE_X86_CPUID_POLICY, };
     struct xc_sr_record msrs  = { .type = REC_TYPE_X86_MSR_POLICY, };
     uint32_t nr_leaves = 0, nr_msrs = 0;
-    xc_cpu_policy_t policy = NULL;
+    xc_cpu_policy_t *policy = NULL;
     int rc;
 
     if ( xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs) < 0 )
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index b2a36deacc..d4bc83d8c9 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -468,7 +468,7 @@ int main(int argc, char **argv)
         uint32_t i, max_leaves, max_msrs;
 
         xc_interface *xch = xc_interface_open(0, 0, 0);
-        xc_cpu_policy_t policy = xc_cpu_policy_init();
+        xc_cpu_policy_t *policy = xc_cpu_policy_init();
 
         if ( !xch )
             err(1, "xc_interface_open");
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 04 21:31:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 21:31:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122722.231485 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le2dp-0000eK-Vc; Tue, 04 May 2021 21:31:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122722.231485; Tue, 04 May 2021 21:31:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le2dp-0000eD-Rq; Tue, 04 May 2021 21:31:41 +0000
Received: by outflank-mailman (input) for mailman id 122722;
 Tue, 04 May 2021 21:31:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6Poa=J7=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1le2do-0000e8-JO
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 21:31:40 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 23da2c29-fc50-4385-afbb-46e6f4840b6a;
 Tue, 04 May 2021 21:31:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23da2c29-fc50-4385-afbb-46e6f4840b6a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620163898;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=iN1qtu5eTkaQjuDskxiwa8CLM7Iy3U/d+exGSEFZ19E=;
  b=QLoWXeoHAwgk5AMSJMcMhagoxoKDC3eh2oNPg/xG7wfURgqhKBDpxD5a
   dfnmsARE2gqGRfct/pzmqcOBshtogh6qYCgupVO9hsYSYUdR/vTqdi0J0
   LZMF/u7iKyG3a2y1Xjm72d1NGaZ+F4hVLcrvt4YZkCPDUgoXoCS2avtuS
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: PwvmcrGaVqCWE91Y1AP4NeRQoNLeqbubh6VFpJiiqWIZ84KydYhUbLPFvrCFBOCiRgwZZxzTw+
 fRr4vKJKRkQglFqPqOJYAvCTtiPQ7XkEIxpKQugj6BdsxG78ya0y+z2iZKuAeHaK4wzXbrkvyw
 /r4nSRgml5RX4VB8w29DCtOsRoE+pF32lwGhM5Plm7MI3NVRfKZzvjGah8w9sBqJExpDCLz9XI
 f24Jg8YLLzLyPBFacNX1hAz6wXYGxmkBugfaJzGzM0GSG2G7LpuUbEK1o7XuBt0JhOg+xXveTi
 m1A=
X-SBRS: 5.1
X-MesageID: 42875491
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:A3qW86sXVK3aAbxvGaFOXU5E7skDkNV00zAX/kB9WHVpW+az/v
 rOoN0w0xjohDENHEw6kdebN6WaBV/a/5h54Y4eVI3SOjXOkm2uMY1k8M/e0yTtcheOkdJ1+K
 98f8FFeb7NJHdgi8KS2maFOvIB5PXCz6yyn+fZyB5WPGVXQoVt9R1wBAreMmAefnglObMDGJ
 CR5tVKqlObEBx9BKnWOlA/U/XevNqOrZr6YHc9dmcawTOThjCl4qOSKXil9yoZOgkg/Z4StU
 zMkwn0/cyYwpSG9iM=
X-IronPort-AV: E=Sophos;i="5.82,272,1613451600"; 
   d="scan'208";a="42875491"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible() with MSR_ARCH_CAPS handling
Date: Tue, 4 May 2021 22:31:20 +0100
Message-ID: <20210504213120.4179-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Just as with x86_cpu_policies_are_compatible(), make a start on this function
with some token handling.

Add levelling support for MSR_ARCH_CAPS, because RSBA has interesting
properties, and introduce test_calculate_compatible_success() to the unit
tests, covering various cases where the arch_caps CPUID bit falls out, and
with RSBA accumulating rather than intersecting across the two.
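The accumulate-vs-intersect arithmetic can be sketched in isolation (a minimal model, assuming an illustrative bit position for RSBA rather than the real MSR layout):

```c
#include <assert.h>
#include <stdint.h>

/* Sketch of the levelling used for MSR_ARCH_CAPS: most bits intersect
 * (AND) across two hosts, but bits in POL_MASK (RSBA here) accumulate
 * (OR), because "RSB Alternative" set anywhere in the pool means RSB
 * stuffing may be unsafe somewhere.  Bit position is illustrative. */
#define POL_MASK 0x4u

static uint32_t level(uint32_t a, uint32_t b)
{
    /* ((a ^ M) & (b ^ M)) ^ M: AND for bits outside M, OR inside M. */
    return ((a ^ POL_MASK) & (b ^ POL_MASK)) ^ POL_MASK;
}
```

Checking the two behaviours: an ordinary bit survives only when both inputs have it, while the RSBA bit survives when either input has it.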

Extend x86_cpu_policies_are_compatible() with a check for MSR_ARCH_CAPS, which
was arguably missing from c/s e32605b07ef "x86: Begin to introduce support for
MSR_ARCH_CAPS".

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/include/xen-tools/libs.h           |   5 ++
 tools/tests/cpu-policy/test-cpu-policy.c | 150 +++++++++++++++++++++++++++++++
 xen/include/xen/lib/x86/cpu-policy.h     |  22 +++++
 xen/lib/x86/policy.c                     |  47 ++++++++++
 4 files changed, 224 insertions(+)

diff --git a/tools/include/xen-tools/libs.h b/tools/include/xen-tools/libs.h
index a16e0c3807..4de10efdea 100644
--- a/tools/include/xen-tools/libs.h
+++ b/tools/include/xen-tools/libs.h
@@ -63,4 +63,9 @@
 #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
 #endif
 
+#ifndef _AC
+#define __AC(X, Y)   (X ## Y)
+#define _AC(X, Y)    __AC(X, Y)
+#endif
+
 #endif	/* __XEN_TOOLS_LIBS__ */
diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 75973298df..455b4fe3c0 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
     }
 }
 
+static void test_calculate_compatible_success(void)
+{
+    static struct test {
+        const char *name;
+        struct {
+            struct cpuid_policy p;
+            struct msr_policy m;
+        } a, b, out;
+    } tests[] = {
+        {
+            "arch_caps, b short max_leaf",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 6,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 6,
+            },
+        },
+        {
+            "arch_caps, b feat missing",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 7,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 7,
+            },
+        },
+        {
+            "arch_caps, b rdcl_no missing",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+            },
+        },
+        {
+            "arch_caps, rdcl_no ok",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rdcl_no = true,
+            },
+        },
+        {
+            "arch_caps, rsba accum",
+            .a = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rsba = true,
+            },
+            .b = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+            },
+            .out = {
+                .p.basic.max_leaf = 7,
+                .p.feat.arch_caps = true,
+                .m.arch_caps.rsba = true,
+            },
+        },
+    };
+    struct cpu_policy_errors no_errors = INIT_CPU_POLICY_ERRORS;
+
+    printf("Testing calculate compatibility success:\n");
+
+    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        struct test *t = &tests[i];
+        struct cpuid_policy *p = malloc(sizeof(struct cpuid_policy));
+        struct msr_policy *m = malloc(sizeof(struct msr_policy));
+        struct cpu_policy a = {
+            &t->a.p,
+            &t->a.m,
+        }, b = {
+            &t->b.p,
+            &t->b.m,
+        }, out = {
+            p,
+            m,
+        };
+        struct cpu_policy_errors e;
+        int res;
+
+        if ( !p || !m )
+            err(1, "%s() malloc failure", __func__);
+
+        res = x86_cpu_policy_calculate_compatible(&a, &b, &out, &e);
+
+        /* Check the expected error output. */
+        if ( res != 0 || memcmp(&no_errors, &e, sizeof(no_errors)) )
+        {
+            fail("  Test '%s' expected no errors\n"
+                 "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
+                 t->name, res, e.leaf, e.subleaf, e.msr);
+            goto test_done;
+        }
+
+        if ( memcmp(&t->out.p, p, sizeof(*p)) )
+        {
+            fail("  Test '%s' resulting CPUID policy not as expected\n",
+                 t->name);
+            goto test_done;
+        }
+
+        if ( memcmp(&t->out.m, m, sizeof(*m)) )
+        {
+            fail("  Test '%s' resulting MSR policy not as expected\n",
+                 t->name);
+            goto test_done;
+        }
+
+    test_done:
+        free(p);
+        free(m);
+    }
+}
+
 int main(int argc, char **argv)
 {
     printf("CPU Policy unit tests\n");
@@ -793,6 +941,8 @@ int main(int argc, char **argv)
     test_is_compatible_success();
     test_is_compatible_failure();
 
+    test_calculate_compatible_success();
+
     if ( nr_failures )
         printf("Done: %u failures\n", nr_failures);
     else
diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
index 5a2c4c7b2d..0422a15557 100644
--- a/xen/include/xen/lib/x86/cpu-policy.h
+++ b/xen/include/xen/lib/x86/cpu-policy.h
@@ -37,6 +37,28 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
                                     const struct cpu_policy *guest,
                                     struct cpu_policy_errors *err);
 
+/*
+ * Given two policies, calculate one which is compatible with each.
+ *
+ * i.e. Given host @a and host @b, calculate what to give a VM so it can live
+ * migrate between the two.
+ *
+ * @param a        A cpu_policy.
+ * @param b        Another cpu_policy.
+ * @param out      A policy compatible with @a and @b.
+ * @param err      Optional hint for error diagnostics.
+ * @returns -errno
+ *
+ * For typical usage, @a and @b should be system policies of the same type
+ * (i.e. PV/HVM default/max) from different hosts.  In the case that an
+ * incompatibility is detected, the optional err pointer may identify the
+ * problematic leaf/subleaf and/or MSR.
+ */
+int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
+                                        const struct cpu_policy *b,
+                                        struct cpu_policy *out,
+                                        struct cpu_policy_errors *err);
+
 #endif /* !XEN_LIB_X86_POLICIES_H */
 
 /*
diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
index f6cea4e2f9..06039e8aa8 100644
--- a/xen/lib/x86/policy.c
+++ b/xen/lib/x86/policy.c
@@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
     if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
         FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
 
+    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
+        FAIL_MSR(MSR_ARCH_CAPABILITIES);
+
 #undef FAIL_MSR
 #undef FAIL_CPUID
 #undef NA
@@ -43,6 +46,50 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
     return ret;
 }
 
+int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
+                                        const struct cpu_policy *b,
+                                        struct cpu_policy *out,
+                                        struct cpu_policy_errors *err)
+{
+    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
+    const struct msr_policy *am = a->msr, *bm = b->msr;
+    struct cpuid_policy *cp = out->cpuid;
+    struct msr_policy *mp = out->msr;
+
+    memset(cp, 0, sizeof(*cp));
+    memset(mp, 0, sizeof(*mp));
+
+    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
+
+    if ( cp->basic.max_leaf >= 7 )
+    {
+        cp->feat.max_subleaf = min(ap->feat.max_subleaf, bp->feat.max_subleaf);
+
+        cp->feat.raw[0].b = ap->feat.raw[0].b & bp->feat.raw[0].b;
+        cp->feat.raw[0].c = ap->feat.raw[0].c & bp->feat.raw[0].c;
+        cp->feat.raw[0].d = ap->feat.raw[0].d & bp->feat.raw[0].d;
+    }
+
+    /* TODO: Far more. */
+
+    mp->platform_info.raw = am->platform_info.raw & bm->platform_info.raw;
+
+    if ( cp->feat.arch_caps )
+    {
+        /*
+         * RSBA means "RSB Alternative", i.e. RSB stuffing not necessarily
+         * safe.  It needs to accumulate rather than intersect across a
+         * resource pool.
+         */
+#define POL_MASK ARCH_CAPS_RSBA
+        mp->arch_caps.raw = ((am->arch_caps.raw ^ POL_MASK) &
+                             (bm->arch_caps.raw ^ POL_MASK)) ^ POL_MASK;
+#undef POL_MASK
+    }
+
+    return 0;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Tue May 04 21:57:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 21:57:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122726.231497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le32i-0002VH-1U; Tue, 04 May 2021 21:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122726.231497; Tue, 04 May 2021 21:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le32h-0002VA-UW; Tue, 04 May 2021 21:57:23 +0000
Received: by outflank-mailman (input) for mailman id 122726;
 Tue, 04 May 2021 21:57:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le32h-0002V2-Cr; Tue, 04 May 2021 21:57:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le32h-00042p-3B; Tue, 04 May 2021 21:57:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le32g-0001MY-Rz; Tue, 04 May 2021 21:57:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1le32g-000603-RU; Tue, 04 May 2021 21:57:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oIFQsgDyHrngQOVWQZz8FaYLWQIckqiIqU53TpnvCWM=; b=HvAUibw6vCAlKGLW7JywNFG66g
	Uyzvm26QfqxnGpVG8xtr6qDKu2l8sOLpqH10GZl6wrHBaLc2ZOJSBsXBRedGAXd8pvCQFiKvGO88z
	C21LiM0JwNXEOssR5FiNDqYt28BB6DezF8DSPLcNmk3qOnoIEMLI7vm4AC2HQLCML44c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161775-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161775: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8cccd6438e86112ab383e41b433b5a7e73be9621
X-Osstest-Versions-That:
    xen=ec4b43107f8663b4a3cf6b9605e1d80152a89f49
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 21:57:22 +0000

flight 161775 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161775/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8cccd6438e86112ab383e41b433b5a7e73be9621
baseline version:
 xen                  ec4b43107f8663b4a3cf6b9605e1d80152a89f49

Last test of basis   161768  2021-05-04 13:01:32 Z    0 days
Testing same since   161775  2021-05-04 19:02:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   ec4b43107f..8cccd6438e  8cccd6438e86112ab383e41b433b5a7e73be9621 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 04 22:27:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 22:27:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122735.231512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le3Vb-00056X-Hi; Tue, 04 May 2021 22:27:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122735.231512; Tue, 04 May 2021 22:27:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le3Vb-00056Q-Ds; Tue, 04 May 2021 22:27:15 +0000
Received: by outflank-mailman (input) for mailman id 122735;
 Tue, 04 May 2021 22:27:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jbig=J7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1le3Va-00056L-LH
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 22:27:14 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a32b8bc-ef8f-498f-9786-30049ea68465;
 Tue, 04 May 2021 22:27:14 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 55728613BA;
 Tue,  4 May 2021 22:27:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a32b8bc-ef8f-498f-9786-30049ea68465
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620167233;
	bh=qjKiDAYu2hOOH+tXFxKg6w9AVWLQiKItwUQ3EUwWhUs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Zv1Qyohkx3imw86kergc1/tMUwmSpXQUcIgqKy59j6GNrd/AKYVAsxhJt5kJWfC91
	 BYkl5gsAxKpmouRtYXOFlCOfjR9JZX60HrsVJaQpvX9JPhri+se+03TkRnp9rdXGRJ
	 9SIqK2T35bqsD9J8NWo6vK3IPc/rxSVdQa4iSTY5sd2oiXCamv2BQWObY/dfxeoNCs
	 fITml+ek6CjwpqzrFAazOgMtbaEsZt9+xn9pyYL1L3BlK7PAdV2GImDEGZq5RUrZkh
	 +1zToKh0ukTkjMPauiES+RVCkQFW0jCOpaOuoFJSUVeKvDybkmk17kiD20+g4AwAyu
	 gLhgiX/l9DtHQ==
Date: Tue, 4 May 2021 15:27:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
In-Reply-To: <20210504133145.767-4-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.21.2105041514260.5018@sstabellini-ThinkPad-T480s>
References: <20210504133145.767-1-luca.fancellu@arm.com> <20210504133145.767-4-luca.fancellu@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 4 May 2021, Luca Fancellu wrote:
> Modification to include/public/grant_table.h:
> 
> 1) Add doxygen tags to:
>  - Create Grant tables section
>  - include variables in the generated documentation
>  - Used @keepindent/@endkeepindent to enclose comment
>    sections that are indented using spaces, to keep
>    the indentation.
> 2) Add .rst file for grant table for Arm64

Thanks Luca for your hard work on this. It is going to make things a lot
better!

I reviewed this patch while looking at
https://luca.fancellu.gitlab.io/xen-docs/hypercall-interfaces/arm64/grant_tables.html

In short, I think these changes look fine except for a trivial code style
issue in the very last comment at the bottom of the patch.

All my questions below are basically about other in-code comments that
currently exist in the code but are not output in the HTML file.
Is there an easy way to add them?



> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> v5 changes:
> - Move GNTCOPY_* define next to the flags field
> v4 changes:
> - Used @keepindent/@endkeepindent doxygen commands
>   to keep text with spaces indentation.
> - drop changes to grant_entry_v1 comment, it will
>   be changed and included in the docs in a future patch
> - Move docs .rst to "common" folder
> v3 changes:
> - removed tags to skip anonymous union/struct
> - moved back comment pointed out by Jan
> - moved down defines related to struct gnttab_copy
>   as pointed out by Jan
> v2 changes:
> - Revert back to anonymous union/struct
> - add doxygen tags to skip anonymous union/struct
> ---
>  docs/hypercall-interfaces/arm64.rst           |  1 +
>  .../common/grant_tables.rst                   |  8 +++
>  docs/xen-doxygen/doxy_input.list              |  1 +
>  xen/include/public/grant_table.h              | 66 ++++++++++++-------
>  4 files changed, 52 insertions(+), 24 deletions(-)
>  create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst
> 
> diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
> index 5e701a2adc..cb4c0d13de 100644
> --- a/docs/hypercall-interfaces/arm64.rst
> +++ b/docs/hypercall-interfaces/arm64.rst
> @@ -8,6 +8,7 @@ Starting points
>  .. toctree::
>     :maxdepth: 2
>  
> +   common/grant_tables
>  
>  
>  Functions
> diff --git a/docs/hypercall-interfaces/common/grant_tables.rst b/docs/hypercall-interfaces/common/grant_tables.rst
> new file mode 100644
> index 0000000000..8955ec5812
> --- /dev/null
> +++ b/docs/hypercall-interfaces/common/grant_tables.rst
> @@ -0,0 +1,8 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Grant Tables
> +============
> +
> +.. doxygengroup:: grant_table
> +   :project: Xen
> +   :members:
> diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
> index e69de29bb2..233d692fa7 100644
> --- a/docs/xen-doxygen/doxy_input.list
> +++ b/docs/xen-doxygen/doxy_input.list
> @@ -0,0 +1 @@
> +xen/include/public/grant_table.h
> diff --git a/xen/include/public/grant_table.h b/xen/include/public/grant_table.h
> index 84b1d26b36..e1fb91dfc6 100644
> --- a/xen/include/public/grant_table.h
> +++ b/xen/include/public/grant_table.h
> @@ -25,15 +25,19 @@
>   * Copyright (c) 2004, K A Fraser
>   */
>  
> +/**
> + * @file
> + * @brief Interface for granting foreign access to page frames, and receiving
> + * page-ownership transfers.
> + */
> +
>  #ifndef __XEN_PUBLIC_GRANT_TABLE_H__
>  #define __XEN_PUBLIC_GRANT_TABLE_H__
>  
>  #include "xen.h"
>  
> -/*
> - * `incontents 150 gnttab Grant Tables
> - *
> - * Xen's grant tables provide a generic mechanism to memory sharing
> +/**
> + * @brief Xen's grant tables provide a generic mechanism to memory sharing
>   * between domains. This shared memory interface underpins the split
>   * device drivers for block and network IO.
>   *
> @@ -51,13 +55,10 @@
>   * know the real machine address of a page it is sharing. This makes
>   * it possible to share memory correctly with domains running in
>   * fully virtualised memory.
> - */
> -
> -/***********************************
> + *
>   * GRANT TABLE REPRESENTATION
> - */
> -
> -/* Some rough guidelines on accessing and updating grant-table entries
> + *
> + * Some rough guidelines on accessing and updating grant-table entries
>   * in a concurrency-safe manner. For more information, Linux contains a
>   * reference implementation for guest OSes (drivers/xen/grant_table.c, see
>   * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
> @@ -66,6 +67,7 @@
>   *     compiler barrier will still be required.
>   *
>   * Introducing a valid entry into the grant table:
> + * @keepindent
>   *  1. Write ent->domid.
>   *  2. Write ent->frame:
>   *      GTF_permit_access:   Frame to which access is permitted.
> @@ -73,20 +75,25 @@
>   *                           frame, or zero if none.
>   *  3. Write memory barrier (WMB).
>   *  4. Write ent->flags, inc. valid type.
> + * @endkeepindent
>   *
>   * Invalidating an unused GTF_permit_access entry:
> + * @keepindent
>   *  1. flags = ent->flags.
>   *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
>   *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>   *  NB. No need for WMB as reuse of entry is control-dependent on success of
>   *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
> + * @endkeepindent
>   *
>   * Invalidating an in-use GTF_permit_access entry:
> + *
>   *  This cannot be done directly. Request assistance from the domain controller
>   *  which can set a timeout on the use of a grant entry and take necessary
>   *  action. (NB. This is not yet implemented!).
>   *
>   * Invalidating an unused GTF_accept_transfer entry:
> + * @keepindent
>   *  1. flags = ent->flags.
>   *  2. Observe that !(flags & GTF_transfer_committed). [*]
>   *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
> @@ -97,18 +104,24 @@
>   *      transferred frame is written. It is safe for the guest to spin waiting
>   *      for this to occur (detect by observing GTF_transfer_completed in
>   *      ent->flags).
> + * @endkeepindent
>   *
>   * Invalidating a committed GTF_accept_transfer entry:
>   *  1. Wait for (ent->flags & GTF_transfer_completed).
>   *
>   * Changing a GTF_permit_access from writable to read-only:
> + *
>   *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
>   *
>   * Changing a GTF_permit_access from read-only to writable:
> + *
>   *  Use SMP-safe bit-setting instruction.
> + *
> + * @addtogroup grant_table Grant Tables
> + * @{
>   */
>  
> -/*
> +/**
>   * Reference to a grant entry in a specified domain's grant table.
>   */
>  typedef uint32_t grant_ref_t;

Just below this typedef there is the following comment:

/*
 * A grant table comprises a packed array of grant entries in one or more
 * page frames shared between Xen and a guest.
 * [XEN]: This field is written by Xen and read by the sharing guest.
 * [GST]: This field is written by the guest and read by Xen.
 */

I noticed it doesn't appear in the output HTML. Is there any way we can
retain it somewhere? Maybe we have to move it together with the larger
comment above?


> @@ -129,15 +142,17 @@ typedef uint32_t grant_ref_t;
>  #define grant_entry_v1_t grant_entry_t
>  #endif
>  struct grant_entry_v1 {
> -    /* GTF_xxx: various type and flag information.  [XEN,GST] */
> +    /** GTF_xxx: various type and flag information.  [XEN,GST] */
>      uint16_t flags;
> -    /* The domain being granted foreign privileges. [GST] */
> +    /** The domain being granted foreign privileges. [GST] */
>      domid_t  domid;
> -    /*
> +    /**
> +     * @keepindent
>       * GTF_permit_access: GFN that @domid is allowed to map and access. [GST]
>       * GTF_accept_transfer: GFN that @domid is allowed to transfer into. [GST]
>       * GTF_transfer_completed: MFN whose ownership transferred by @domid
>       *                         (non-translated guests only). [XEN]
> +     * @endkeepindent
>       */
>      uint32_t frame;
>  };
> @@ -228,7 +243,7 @@ struct grant_entry_header {
>  };
>  typedef struct grant_entry_header grant_entry_header_t;


Also this comment is missing from the output:

/*
 * Type of grant entry.
 *  GTF_invalid: This grant entry grants no privileges.
 *  GTF_permit_access: Allow @domid to map/access @frame.
 *  GTF_accept_transfer: Allow @domid to transfer ownership of one page frame
 *                       to this guest. Xen writes the page number to @frame.
 *  GTF_transitive: Allow @domid to transitively access a subrange of
 *                  @trans_grant in @trans_domid.  No mappings are allowed.
 */

Is there a way to keep it?


Similarly for the other subflags descriptions.


> -/*
> +/**
>   * Version 2 of the grant entry structure.
>   */
>  union grant_entry_v2 {
> @@ -433,7 +448,7 @@ typedef struct gnttab_transfer gnttab_transfer_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);

What about the comments for each member of the union? Basically I am
asking whether we can output almost all of the comments currently living
in this header.


> -/*
> +/**
>   * GNTTABOP_copy: Hypervisor based copy
>   * source and destinations can be eithers MFNs or, for foreign domains,
>   * grant references. the foreign domain has to grant read/write access
> @@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>   * bytes to be copied.
>   */
>  
> -#define _GNTCOPY_source_gref      (0)
> -#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
> -#define _GNTCOPY_dest_gref        (1)
> -#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
> -
>  struct gnttab_copy {
>      /* IN parameters. */
>      struct gnttab_copy_ptr {
> @@ -468,6 +478,10 @@ struct gnttab_copy {
>      } source, dest;
>      uint16_t      len;
>      uint16_t      flags;          /* GNTCOPY_* */
> +#define _GNTCOPY_source_gref      (0)
> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
> +#define _GNTCOPY_dest_gref        (1)
> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)

I think this is OK


>      /* OUT parameters. */
>      int16_t       status;
>  };
> @@ -579,7 +593,7 @@ struct gnttab_swap_grant_ref {
>  typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
>  DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
>  
> -/*
> +/**
>   * Issue one or more cache maintenance operations on a portion of a
>   * page granted to the calling domain by a foreign domain.
>   */
> @@ -588,8 +602,8 @@ struct gnttab_cache_flush {
>          uint64_t dev_bus_addr;
>          grant_ref_t ref;
>      } a;
> -    uint16_t offset; /* offset from start of grant */
> -    uint16_t length; /* size within the grant */
> +    uint16_t offset; /**< offset from start of grant */
> +    uint16_t length; /**< size within the grant */
>  #define GNTTAB_CACHE_CLEAN          (1u<<0)
>  #define GNTTAB_CACHE_INVAL          (1u<<1)
>  #define GNTTAB_CACHE_SOURCE_GREF    (1u<<31)
> @@ -673,6 +687,10 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
>      "operation not done; try again"             \
>  }
>  
> +/**
> + * @}
> +*/

Nit: the alignment of the * is off here.


>  #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
>  
>  /*
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 22:30:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 22:30:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122738.231524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le3Yv-0005xp-16; Tue, 04 May 2021 22:30:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122738.231524; Tue, 04 May 2021 22:30:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le3Yu-0005xh-U3; Tue, 04 May 2021 22:30:40 +0000
Received: by outflank-mailman (input) for mailman id 122738;
 Tue, 04 May 2021 22:30:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jbig=J7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1le3Yt-0005xc-I4
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 22:30:39 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2f42bea6-9701-419f-9a12-0afd3d9f8679;
 Tue, 04 May 2021 22:30:38 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 95282613BA;
 Tue,  4 May 2021 22:30:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f42bea6-9701-419f-9a12-0afd3d9f8679
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620167438;
	bh=N5DNmNqIILQGKdjmxbsySJMTjU5VA/cIrCKROhYvYKI=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=D7uVxlL+36S+hjMQMYbihYmqEgkvjuZQGZbBIR4lmRPoj6gnxs7HKl+DfY6ebdFdU
	 wrIAaZhQ7ar5G8SO0+7RI094WdW+p9T+E2ebVCzHpMYKXWoquYkNoaZgPznKMoc16P
	 Yr/tVP1a20dYUdhZ/BycENRZqqDiCftSgfb6vgfxfydJTB7V7Dls/a8nw0gCpM6l+6
	 5H9seCcTdwjPwJdJnH6BDO73B+MEhlGOvjggbQQHY3shmF2FDWbavOq9TA3LNRiuAM
	 7Y1VK01G/gWqY+OiJrTHUF6mhzTIcGncxm0MKm+ZFNG3KeErbDqgw34IEhCVLfJNKB
	 Y2DQS0ia6X5EA==
Date: Tue, 4 May 2021 15:30:36 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 2/3] docs: hypercalls sphinx skeleton for generated
 html
In-Reply-To: <20210504133145.767-3-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.21.2105041527550.5018@sstabellini-ThinkPad-T480s>
References: <20210504133145.767-1-luca.fancellu@arm.com> <20210504133145.767-3-luca.fancellu@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 4 May 2021, Luca Fancellu wrote:
> Create a skeleton for the documentation about hypercalls

Why is there a difference between the arm32, arm64 and x86_64 skeletons?
Shouldn't we just have one? Or, if we have to have three, why are they
not identical?


> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
>  .gitignore                             |  1 +
>  docs/Makefile                          |  4 ++++
>  docs/hypercall-interfaces/arm32.rst    |  4 ++++
>  docs/hypercall-interfaces/arm64.rst    | 32 ++++++++++++++++++++++++++
>  docs/hypercall-interfaces/index.rst.in |  7 ++++++
>  docs/hypercall-interfaces/x86_64.rst   |  4 ++++
>  docs/index.rst                         |  8 +++++++
>  7 files changed, 60 insertions(+)
>  create mode 100644 docs/hypercall-interfaces/arm32.rst
>  create mode 100644 docs/hypercall-interfaces/arm64.rst
>  create mode 100644 docs/hypercall-interfaces/index.rst.in
>  create mode 100644 docs/hypercall-interfaces/x86_64.rst
> 
> diff --git a/.gitignore b/.gitignore
> index d271e0ce6a..a9aab120ae 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -64,6 +64,7 @@ docs/xen.doxyfile
>  docs/xen.doxyfile.tmp
>  docs/xen-doxygen/doxygen_include.h
>  docs/xen-doxygen/doxygen_include.h.tmp
> +docs/hypercall-interfaces/index.rst
>  extras/mini-os*
>  install/*
>  stubdom/*-minios-config.mk
> diff --git a/docs/Makefile b/docs/Makefile
> index 2f784c36ce..b02c3dfb79 100644
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -61,6 +61,9 @@ build: html txt pdf man-pages figs
>  sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
>  ifneq ($(SPHINXBUILD),no)
>  	$(DOXYGEN) xen.doxyfile
> +	@echo "Generating hypercall-interfaces/index.rst"
> +	@sed -e "s,@XEN_TARGET_ARCH@,$(XEN_TARGET_ARCH),g" \
> +		hypercall-interfaces/index.rst.in > hypercall-interfaces/index.rst
>  	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
>  else
>  	@echo "Sphinx is not installed; skipping sphinx-html documentation."
> @@ -108,6 +111,7 @@ clean: clean-man-pages
>  	rm -f xen.doxyfile.tmp
>  	rm -f xen-doxygen/doxygen_include.h
>  	rm -f xen-doxygen/doxygen_include.h.tmp
> +	rm -f hypercall-interfaces/index.rst
>  
>  .PHONY: distclean
>  distclean: clean
> diff --git a/docs/hypercall-interfaces/arm32.rst b/docs/hypercall-interfaces/arm32.rst
> new file mode 100644
> index 0000000000..4e973fbbaf
> --- /dev/null
> +++ b/docs/hypercall-interfaces/arm32.rst
> @@ -0,0 +1,4 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Hypercall Interfaces - arm32
> +============================
> diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
> new file mode 100644
> index 0000000000..5e701a2adc
> --- /dev/null
> +++ b/docs/hypercall-interfaces/arm64.rst
> @@ -0,0 +1,32 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Hypercall Interfaces - arm64
> +============================
> +
> +Starting points
> +---------------
> +.. toctree::
> +   :maxdepth: 2
> +
> +
> +
> +Functions
> +---------
> +
> +
> +Structs
> +-------
> +
> +
> +Enums and sets of #defines
> +--------------------------
> +
> +
> +Typedefs
> +--------
> +
> +
> +Enum values and individual #defines
> +-----------------------------------
> +
> +
> diff --git a/docs/hypercall-interfaces/index.rst.in b/docs/hypercall-interfaces/index.rst.in
> new file mode 100644
> index 0000000000..e4dcc5db8d
> --- /dev/null
> +++ b/docs/hypercall-interfaces/index.rst.in
> @@ -0,0 +1,7 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Hypercall Interfaces
> +====================
> +
> +.. toctree::
> +   @XEN_TARGET_ARCH@
> diff --git a/docs/hypercall-interfaces/x86_64.rst b/docs/hypercall-interfaces/x86_64.rst
> new file mode 100644
> index 0000000000..3ed70dff95
> --- /dev/null
> +++ b/docs/hypercall-interfaces/x86_64.rst
> @@ -0,0 +1,4 @@
> +.. SPDX-License-Identifier: CC-BY-4.0
> +
> +Hypercall Interfaces - x86_64
> +=============================
> diff --git a/docs/index.rst b/docs/index.rst
> index b75487a05d..52226a42d8 100644
> --- a/docs/index.rst
> +++ b/docs/index.rst
> @@ -53,6 +53,14 @@ kind of development environment.
>     hypervisor-guide/index
>  
>  
> +Hypercall Interfaces documentation
> +----------------------------------
> +
> +.. toctree::
> +   :maxdepth: 2
> +
> +   hypercall-interfaces/index
> +
>  Miscellanea
>  -----------
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 04 22:46:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 22:46:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122745.231536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le3oO-0006xp-G5; Tue, 04 May 2021 22:46:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122745.231536; Tue, 04 May 2021 22:46:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le3oO-0006xi-Bt; Tue, 04 May 2021 22:46:40 +0000
Received: by outflank-mailman (input) for mailman id 122745;
 Tue, 04 May 2021 22:46:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le3oM-0006xa-Re; Tue, 04 May 2021 22:46:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le3oM-0004qD-MM; Tue, 04 May 2021 22:46:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1le3oM-00087x-BK; Tue, 04 May 2021 22:46:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1le3oM-0002go-9U; Tue, 04 May 2021 22:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ASDgWw9gQPhPSLw/ZBh9rMkO03SfT5tph7A/+N7niIY=; b=bJ7cvXBT+18yG2mB+Oo42JCffj
	BbFce8ByOpCrfnqVHbW/SGo50gIPljDxGfjCImYBNjNJufG6Nho0NacaS1uteQBEz0rqoom3ekrbP
	SrAPvZHO6ADmA8T1xhZseKzMSmX5LiT6uAByWkjRwJhQfC1PY/tHppUXUdgSS8YjOLfs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161755-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161755: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
X-Osstest-Versions-That:
    xen=d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 04 May 2021 22:46:38 +0000

flight 161755 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161755/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10       fail  like 161628
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161628
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161628
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161628
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161628
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161628
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161628
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 161628
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161628
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161628
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161628
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161628
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161628
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
baseline version:
 xen                  d26c277826dbbd64b3e3cb57159e1ecbfad33bc8

Last test of basis   161755  2021-05-04 08:28:51 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue May 04 22:55:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 04 May 2021 22:55:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122753.231551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le3x5-0007uf-Gd; Tue, 04 May 2021 22:55:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122753.231551; Tue, 04 May 2021 22:55:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1le3x5-0007uY-Cf; Tue, 04 May 2021 22:55:39 +0000
Received: by outflank-mailman (input) for mailman id 122753;
 Tue, 04 May 2021 22:55:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jbig=J7=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1le3x4-0007uT-7s
 for xen-devel@lists.xenproject.org; Tue, 04 May 2021 22:55:38 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 28f15641-e996-4c9f-b49b-d1434d3201fc;
 Tue, 04 May 2021 22:55:36 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id DB394613D8;
 Tue,  4 May 2021 22:55:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 28f15641-e996-4c9f-b49b-d1434d3201fc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620168935;
	bh=gAocMcTEQVz/SSdkt1ne1WB+VXRcmOjHfzI5QMHs+xM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HTh6xiNVYqDco5/wGDBoGpAB7eabdcA2K3Xbv5a7wf5jDlwXI5eMZsnSSElWd2etQ
	 FumCKWA/BTNBqQmxuP//l1UE1y2vgE9JKUcTZ6mMMFWNX5lB4vGM4dYSQIHHRspkaK
	 YOIB+uDNP2hJSRwJuKDVCVVqKpI75fjoCI5TWnjz8P51JoBqktmmRaDtJvRZhaU7fh
	 aUAE806P6HyXiUIY2g8T4D8D/oX82LK2gUqn8I/2SY1TVe7sB3IMrk91DZojLOBCm0
	 AVoPacS1zaIAdipptS7zRWN7wXJShN4mU3IRP5qv8pJID4IWHHnyUUxx9eWdTr8Ws+
	 CRaX9Y5YLEHqA==
Date: Tue, 4 May 2021 15:55:34 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Luca Fancellu <luca.fancellu@arm.com>
cc: xen-devel@lists.xenproject.org, bertrand.marquis@arm.com, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v5 1/3] docs: add doxygen support for html
 documentation
In-Reply-To: <20210504133145.767-2-luca.fancellu@arm.com>
Message-ID: <alpine.DEB.2.21.2105041532030.5018@sstabellini-ThinkPad-T480s>
References: <20210504133145.767-1-luca.fancellu@arm.com> <20210504133145.767-2-luca.fancellu@arm.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 4 May 2021, Luca Fancellu wrote:
> Add doxygen support to build html documentation with sphinx;
> to do that, the following modifications are applied:
> 
> 1) Modify docs/configure.ac and consequently the configure script
>    to check, through ./configure, for the presence in the
>    system of the sphinx-build binary; if it is found, then
>    it also checks for the presence of the doxygen binary and of
>    the breathe and sphinx-rtd-theme python packages.
>    Doxygen and the packages are required only if sphinx-build
>    is found; otherwise the Makefile will simply skip the
>    sphinx html generation.
>    The ax_python_module.m4 support is needed to check for
>    python packages.
> 2) Add doxygen templates and a configuration file
> 3) Modify docs/Makefile to call doxygen and sphinx-build in
>    sequence; the doxygen configuration file will be
>    modified to include the xen absolute path and
>    a list of headers to parse.
>    A doxygen_input.h file is generated to include every
>    header listed in the doxy_input.list file.
> 4) Add a preprocessor called by doxygen before parsing headers;
>    it will include in every header a doxygen_include.h file
>    that provides missing defines and includes that are
>    usually passed by the compiler.
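[The preprocessing step described in point 4 can be sketched roughly as below. This is a minimal illustration of the idea, not the actual doxy-preprocessor.py from the patch; the function name and the exact include line are assumptions based on the description above.]

```python
# Sketch of point 4: before doxygen parses a header, prepend an
# include of doxygen_include.h so that defines and includes normally
# supplied by the compiler are visible to doxygen's parser.
def inject_doxygen_include(header_text, include_path="doxygen_include.h"):
    """Return header_text with an include of doxygen_include.h prepended."""
    inject_line = '#include "{}"\n'.format(include_path)
    # Avoid double-injection if the header was already processed.
    if inject_line in header_text:
        return header_text
    return inject_line + header_text
```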

This clearly is a gargantuan amount of work! Thank you so much!

Is there a way we can split this patch somehow? Given the nature of the
work, obviously we can't do a line-by-line review of everything, but I
think it would be good to split it so that we can more easily ack parts
of it and review others.

For instance something like the following:

- one patch for docs/xen.doxyfile.in for which you can already add my
  acked-by judging by the output[1]
- one patch to add the png image, also add my acked-by
- one patch for mainpage.md, header.html, footer.html, and
  customdoxygen.css, I think they can all be on a single patch and have
  my acked-by
- one patch to add the new m4 macros under m4/
- one patch with changes to docs/configure.ac, docs/configure,
  config/Docs.mk.in
- one patch with doxy-preprocessor.py, doxy_input.list,
  doxygen_include.h.in
- one patch with changes to docs/Makefile and docs/conf.py

Does this sound reasonable to you?


[1] https://luca.fancellu.gitlab.io/xen-docs/hypercall-interfaces/arm64/grant_tables.html


> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
> ---
> v4 changes:
> - create alias @keepindent/@endkeepindent for the doxygen
>   command @code/@endcode
> v3 changes:
> - add preprocessor to handle missing defines and anonymous
>   structs/unions before doxygen parsing
> - modification to Makefile to handle the new process
> v2 changes:
> - Fix bug in Makefile when sphinx is not found in the system
> ---
>  .gitignore                                   |    6 +
>  config/Docs.mk.in                            |    2 +
>  docs/Makefile                                |   42 +-
>  docs/conf.py                                 |   48 +-
>  docs/configure                               |  258 ++
>  docs/configure.ac                            |   15 +
>  docs/xen-doxygen/customdoxygen.css           |   36 +
>  docs/xen-doxygen/doxy-preprocessor.py        |  110 +
>  docs/xen-doxygen/doxy_input.list             |    0
>  docs/xen-doxygen/doxygen_include.h.in        |   32 +
>  docs/xen-doxygen/footer.html                 |   21 +
>  docs/xen-doxygen/header.html                 |   56 +
>  docs/xen-doxygen/mainpage.md                 |    5 +
>  docs/xen-doxygen/xen_project_logo_165x67.png |  Bin 0 -> 18223 bytes
>  docs/xen.doxyfile.in                         | 2316 ++++++++++++++++++
>  m4/ax_python_module.m4                       |   56 +
>  m4/docs_tool.m4                              |    9 +
>  17 files changed, 3006 insertions(+), 6 deletions(-)
>  create mode 100644 docs/xen-doxygen/customdoxygen.css
>  create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
>  create mode 100644 docs/xen-doxygen/doxy_input.list
>  create mode 100644 docs/xen-doxygen/doxygen_include.h.in
>  create mode 100644 docs/xen-doxygen/footer.html
>  create mode 100644 docs/xen-doxygen/header.html
>  create mode 100644 docs/xen-doxygen/mainpage.md
>  create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png
>  create mode 100644 docs/xen.doxyfile.in
>  create mode 100644 m4/ax_python_module.m4
> 
> diff --git a/.gitignore b/.gitignore
> index 1c2fa1530b..d271e0ce6a 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -58,6 +58,12 @@ docs/man7/
>  docs/man8/
>  docs/pdf/
>  docs/txt/
> +docs/doxygen-output
> +docs/sphinx
> +docs/xen.doxyfile
> +docs/xen.doxyfile.tmp
> +docs/xen-doxygen/doxygen_include.h
> +docs/xen-doxygen/doxygen_include.h.tmp
>  extras/mini-os*
>  install/*
>  stubdom/*-minios-config.mk
> diff --git a/config/Docs.mk.in b/config/Docs.mk.in
> index e76e5cd5ff..dfd4a02838 100644
> --- a/config/Docs.mk.in
> +++ b/config/Docs.mk.in
> @@ -7,3 +7,5 @@ POD2HTML            := @POD2HTML@
>  POD2TEXT            := @POD2TEXT@
>  PANDOC              := @PANDOC@
>  PERL                := @PERL@
> +SPHINXBUILD         := @SPHINXBUILD@
> +DOXYGEN             := @DOXYGEN@
> diff --git a/docs/Makefile b/docs/Makefile
> index 8de1efb6f5..2f784c36ce 100644
> --- a/docs/Makefile
> +++ b/docs/Makefile
> @@ -17,6 +17,18 @@ TXTSRC-y := $(sort $(shell find misc -name '*.txt' -print))
>  
>  PANDOCSRC-y := $(sort $(shell find designs/ features/ misc/ process/ specs/ \( -name '*.pandoc' -o -name '*.md' \) -print))
>  
> +# Directory in which the doxygen documentation is created
> +# This must be kept in sync with breathe_projects value in conf.py
> +DOXYGEN_OUTPUT = doxygen-output
> +
> +# Doxygen input headers from xen-doxygen/doxy_input.list file
> +DOXY_LIST_SOURCES != cat "xen-doxygen/doxy_input.list"
> +DOXY_LIST_SOURCES := $(realpath $(addprefix $(XEN_ROOT)/,$(DOXY_LIST_SOURCES)))
> +
> +DOXY_DEPS := xen.doxyfile \
> +			 xen-doxygen/mainpage.md \
> +			 xen-doxygen/doxygen_include.h
> +
>  # Documentation targets
>  $(foreach i,$(MAN_SECTIONS), \
>    $(eval DOC_MAN$(i) := $(patsubst man/%.$(i),man$(i)/%.$(i), \
> @@ -46,8 +58,28 @@ all: build
>  build: html txt pdf man-pages figs
>  
>  .PHONY: sphinx-html
> -sphinx-html:
> -	sphinx-build -b html . sphinx/html
> +sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
> +ifneq ($(SPHINXBUILD),no)
> +	$(DOXYGEN) xen.doxyfile
> +	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
> +else
> +	@echo "Sphinx is not installed; skipping sphinx-html documentation."
> +endif
> +
> +xen.doxyfile: xen.doxyfile.in xen-doxygen/doxy_input.list
> +	@echo "Generating $@"
> +	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< \
> +		| sed -e "s,@DOXY_OUT@,$(DOXYGEN_OUTPUT),g" > $@.tmp
> +	@$(foreach inc,\
> +		$(DOXY_LIST_SOURCES),\
> +		echo "INPUT += \"$(inc)\"" >> $@.tmp; \
> +	)
> +	mv $@.tmp $@
> +
> +xen-doxygen/doxygen_include.h: xen-doxygen/doxygen_include.h.in
> +	@echo "Generating $@"
> +	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< > $@.tmp
> +	@mv $@.tmp $@
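[The two sed-based rules above are plain placeholder substitution over a template. As a standalone illustration of what they do (the placeholder names @XEN_BASE@ and @DOXY_OUT@ are taken from the rules above; the function itself is hypothetical, not part of the patch):]

```python
def expand_template(template, substitutions):
    """Replace @KEY@ placeholders in a template string, mirroring the
    sed substitutions performed by the Makefile rules above."""
    for key, value in substitutions.items():
        template = template.replace("@{}@".format(key), value)
    return template

# e.g. expand_template("INPUT = @XEN_BASE@/xen", {"XEN_BASE": "/src/xen"})
```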
>  
>  .PHONY: html
>  html: $(DOC_HTML) html/index.html
> @@ -71,7 +103,11 @@ clean: clean-man-pages
>  	$(MAKE) -C figs clean
>  	rm -rf .word_count *.aux *.dvi *.bbl *.blg *.glo *.idx *~
>  	rm -rf *.ilg *.log *.ind *.toc *.bak *.tmp core
> -	rm -rf html txt pdf sphinx/html
> +	rm -rf html txt pdf sphinx $(DOXYGEN_OUTPUT)
> +	rm -f xen.doxyfile
> +	rm -f xen.doxyfile.tmp
> +	rm -f xen-doxygen/doxygen_include.h
> +	rm -f xen-doxygen/doxygen_include.h.tmp
>  
>  .PHONY: distclean
>  distclean: clean
> diff --git a/docs/conf.py b/docs/conf.py
> index 50e41501db..a48de42331 100644
> --- a/docs/conf.py
> +++ b/docs/conf.py
> @@ -13,13 +13,17 @@
>  # add these directories to sys.path here. If the directory is relative to the
>  # documentation root, use os.path.abspath to make it absolute, like shown here.
>  #
> -# import os
> -# import sys
> +import os
> +import sys
>  # sys.path.insert(0, os.path.abspath('.'))
>  
>  
>  # -- Project information -----------------------------------------------------
>  
> +if "XEN_ROOT" not in os.environ:
> +    sys.exit("$XEN_ROOT environment variable undefined.")
> +XEN_ROOT = os.path.abspath(os.environ["XEN_ROOT"])
> +
>  project = u'Xen'
>  copyright = u'2019, The Xen development community'
>  author = u'The Xen development community'
> @@ -35,6 +39,7 @@ try:
>              xen_subver = line.split(u"=")[1].strip()
>          elif line.startswith(u"export XEN_EXTRAVERSION"):
>              xen_extra = line.split(u"=")[1].split(u"$", 1)[0].strip()
> +
>  except:
>      pass
>  finally:
> @@ -44,6 +49,15 @@ finally:
>      else:
>          version = release = u"unknown version"
>  
> +try:
> +    xen_doxygen_output = None
> +
> +    for line in open(u"Makefile"):
> +        if line.startswith(u"DOXYGEN_OUTPUT"):
> +                xen_doxygen_output = line.split(u"=")[1].strip()
> +except:
> +    sys.exit("DOXYGEN_OUTPUT variable undefined.")
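[The hunk above scrapes DOXYGEN_OUTPUT out of the Makefile. A standalone sketch of that parse is below; note that in the quoted code a missing DOXYGEN_OUTPUT line raises no exception, so the except branch would not fire and xen_doxygen_output would silently stay None. The helper name here is hypothetical, not from the patch.]

```python
def read_makefile_var(lines, name):
    """Return the value of a 'NAME = value' Makefile assignment,
    or None when the variable is not found (so callers can fail
    explicitly instead of relying on an exception)."""
    for line in lines:
        if line.startswith(name):
            _, _, value = line.partition("=")
            return value.strip()
    return None
```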
> +
>  # -- General configuration ---------------------------------------------------
>  
>  # If your documentation needs a minimal Sphinx version, state it here.
> @@ -53,7 +67,8 @@ needs_sphinx = '1.4'
>  # Add any Sphinx extension module names here, as strings. They can be
>  # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
>  # ones.
> -extensions = []
> +# breathe -> extension that integrates doxygen xml output with sphinx
> +extensions = ['breathe']
>  
>  # Add any paths that contain templates here, relative to this directory.
>  templates_path = ['_templates']
> @@ -175,6 +190,33 @@ texinfo_documents = [
>       'Miscellaneous'),
>  ]
>  
> +# -- Options for Breathe extension -------------------------------------------
> +
> +breathe_projects = {
> +    "Xen": "{}/docs/{}/xml".format(XEN_ROOT, xen_doxygen_output)
> +}
> +breathe_default_project = "Xen"
> +
> +breathe_domain_by_extension = {
> +    "h": "c",
> +    "c": "c",
> +}
> +breathe_separate_member_pages = True
> +breathe_show_enumvalue_initializer = True
> +breathe_show_define_initializer = True
> +
> +# Qualifiers to a function are causing Sphinx/Breathe to warn about
> +# "Error when parsing function declaration" and more.  This is a list
> +# of strings that the parser should additionally accept as
> +# attributes.
> +cpp_id_attributes = [
> +    '__syscall', '__deprecated', '__may_alias',
> +    '__used', '__unused', '__weak',
> +    '__DEPRECATED_MACRO', 'FUNC_NORETURN',
> +    '__subsystem',
> +]
> +c_id_attributes = cpp_id_attributes
> +
>  
>  # -- Options for Epub output -------------------------------------------------
>  
> diff --git a/docs/configure b/docs/configure
> index 569bd4c2ff..0ebf046a79 100755
> --- a/docs/configure
> +++ b/docs/configure
> @@ -588,6 +588,8 @@ ac_unique_file="misc/xen-command-line.pandoc"
>  ac_subst_vars='LTLIBOBJS
>  LIBOBJS
>  PERL
> +DOXYGEN
> +SPHINXBUILD
>  PANDOC
>  POD2TEXT
>  POD2HTML
> @@ -673,6 +675,7 @@ POD2MAN
>  POD2HTML
>  POD2TEXT
>  PANDOC
> +DOXYGEN
>  PERL'
>  
>  
> @@ -1318,6 +1321,7 @@ Some influential environment variables:
>    POD2HTML    Path to pod2html tool
>    POD2TEXT    Path to pod2text tool
>    PANDOC      Path to pandoc tool
> +  DOXYGEN     Path to doxygen tool
>    PERL        Path to Perl parser
>  
>  Use these variables to override the choices made by `configure' or to help
> @@ -1800,6 +1804,7 @@ ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.
>  
>  
>  
> +
>  case "$host_os" in
>  *freebsd*) XENSTORED_KVA=/dev/xen/xenstored ;;
>  *) XENSTORED_KVA=/proc/xen/xsd_kva ;;
> @@ -1812,6 +1817,53 @@ case "$host_os" in
>  esac
>  
>  
> +# ===========================================================================
> +#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
> +# ===========================================================================
> +#
> +# SYNOPSIS
> +#
> +#   AX_PYTHON_MODULE(modname[, fatal, python])
> +#
> +# DESCRIPTION
> +#
> +#   Checks for Python module.
> +#
> +#   If fatal is non-empty then absence of a module will trigger an error.
> +#   The third parameter can either be "python" for Python 2 or "python3" for
> +#   Python 3; defaults to Python 3.
> +#
> +# LICENSE
> +#
> +#   Copyright (c) 2008 Andrew Collier
> +#
> +#   Copying and distribution of this file, with or without modification, are
> +#   permitted in any medium without royalty provided the copyright notice
> +#   and this notice are preserved. This file is offered as-is, without any
> +#   warranty.
> +
> +#serial 9
> +
> +# This is what autoupdate's m4 run will expand.  It fires
> +# the warning (with _au_warn_XXX), outputs it into the
> +# updated configure.ac (with AC_DIAGNOSE), and then outputs
> +# the replacement expansion.
> +
> +
> +# This is an auxiliary macro that is also run when
> +# autoupdate runs m4.  It simply calls m4_warning, but
> +# we need a wrapper so that each warning is emitted only
> +# once.  We break the quoting in m4_warning's argument in
> +# order to expand this macro's arguments, not AU_DEFUN's.
> +
> +
> +# Finally, this is the expansion that is picked up by
> +# autoconf.  It tells the user to run autoupdate, and
> +# then outputs the replacement expansion.  We do not care
> +# about autoupdate's warning because that contains
> +# information on what to do *after* running autoupdate.
> +
> +
>  
>  
>  test "x$prefix" = "xNONE" && prefix=$ac_default_prefix
> @@ -2232,6 +2284,212 @@ $as_echo "$as_me: WARNING: pandoc is not available so some documentation won't b
>  fi
>  
>  
> +# If sphinx is installed, make sure to have also the dependencies to build
> +# Sphinx documentation.
> +for ac_prog in sphinx-build
> +do
> +  # Extract the first word of "$ac_prog", so it can be a program name with args.
> +set dummy $ac_prog; ac_word=$2
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
> +$as_echo_n "checking for $ac_word... " >&6; }
> +if ${ac_cv_prog_SPHINXBUILD+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +  if test -n "$SPHINXBUILD"; then
> +  ac_cv_prog_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test.
> +else
> +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
> +for as_dir in $PATH
> +do
> +  IFS=$as_save_IFS
> +  test -z "$as_dir" && as_dir=.
> +    for ac_exec_ext in '' $ac_executable_extensions; do
> +  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
> +    ac_cv_prog_SPHINXBUILD="$ac_prog"
> +    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
> +    break 2
> +  fi
> +done
> +  done
> +IFS=$as_save_IFS
> +
> +fi
> +fi
> +SPHINXBUILD=$ac_cv_prog_SPHINXBUILD
> +if test -n "$SPHINXBUILD"; then
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
> +$as_echo "$SPHINXBUILD" >&6; }
> +else
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
> +$as_echo "no" >&6; }
> +fi
> +
> +
> +  test -n "$SPHINXBUILD" && break
> +done
> +test -n "$SPHINXBUILD" || SPHINXBUILD="no"
> +
> +    if test "x$SPHINXBUILD" = xno; then :
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: sphinx-build is not available so sphinx documentation \
> +won't be built" >&5
> +$as_echo "$as_me: WARNING: sphinx-build is not available so sphinx documentation \
> +won't be built" >&2;}
> +else
> +
> +            # Extract the first word of "sphinx-build", so it can be a program name with args.
> +set dummy sphinx-build; ac_word=$2
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
> +$as_echo_n "checking for $ac_word... " >&6; }
> +if ${ac_cv_path_SPHINXBUILD+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +  case $SPHINXBUILD in
> +  [\\/]* | ?:[\\/]*)
> +  ac_cv_path_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test with a path.
> +  ;;
> +  *)
> +  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
> +for as_dir in $PATH
> +do
> +  IFS=$as_save_IFS
> +  test -z "$as_dir" && as_dir=.
> +    for ac_exec_ext in '' $ac_executable_extensions; do
> +  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
> +    ac_cv_path_SPHINXBUILD="$as_dir/$ac_word$ac_exec_ext"
> +    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
> +    break 2
> +  fi
> +done
> +  done
> +IFS=$as_save_IFS
> +
> +  ;;
> +esac
> +fi
> +SPHINXBUILD=$ac_cv_path_SPHINXBUILD
> +if test -n "$SPHINXBUILD"; then
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
> +$as_echo "$SPHINXBUILD" >&6; }
> +else
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
> +$as_echo "no" >&6; }
> +fi
> +
> +
> +
> +
> +    # Extract the first word of "doxygen", so it can be a program name with args.
> +set dummy doxygen; ac_word=$2
> +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
> +$as_echo_n "checking for $ac_word... " >&6; }
> +if ${ac_cv_path_DOXYGEN+:} false; then :
> +  $as_echo_n "(cached) " >&6
> +else
> +  case $DOXYGEN in
> +  [\\/]* | ?:[\\/]*)
> +  ac_cv_path_DOXYGEN="$DOXYGEN" # Let the user override the test with a path.
> +  ;;
> +  *)
> +  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
> +for as_dir in $PATH
> +do
> +  IFS=$as_save_IFS
> +  test -z "$as_dir" && as_dir=.
> +    for ac_exec_ext in '' $ac_executable_extensions; do
> +  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
> +    ac_cv_path_DOXYGEN="$as_dir/$ac_word$ac_exec_ext"
> +    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
> +    break 2
> +  fi
> +done
> +  done
> +IFS=$as_save_IFS
> +
> +  ;;
> +esac
> +fi
> +DOXYGEN=$ac_cv_path_DOXYGEN
> +if test -n "$DOXYGEN"; then
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DOXYGEN" >&5
> +$as_echo "$DOXYGEN" >&6; }
> +else
> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
> +$as_echo "no" >&6; }
> +fi
> +
> +
> +    if ! test -x "$ac_cv_path_DOXYGEN"; then :
> +
> +        as_fn_error $? "doxygen is needed" "$LINENO" 5
> +
> +fi
> +
> +
> +    if test -z $PYTHON;
> +    then
> +        if test -z "";
> +        then
> +            PYTHON="python3"
> +        else
> +            PYTHON=""
> +        fi
> +    fi
> +    PYTHON_NAME=`basename $PYTHON`
> +    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: breathe" >&5
> +$as_echo_n "checking $PYTHON_NAME module: breathe... " >&6; }
> +    $PYTHON -c "import breathe" 2>/dev/null
> +    if test $? -eq 0;
> +    then
> +        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
> +$as_echo "yes" >&6; }
> +        eval HAVE_PYMOD_BREATHE=yes
> +    else
> +        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
> +$as_echo "no" >&6; }
> +        eval HAVE_PYMOD_BREATHE=no
> +        #
> +        if test -n "yes"
> +        then
> +            as_fn_error $? "failed to find required module breathe" "$LINENO" 5
> +            exit 1
> +        fi
> +    fi
> +
> +
> +    if test -z $PYTHON;
> +    then
> +        if test -z "";
> +        then
> +            PYTHON="python3"
> +        else
> +            PYTHON=""
> +        fi
> +    fi
> +    PYTHON_NAME=`basename $PYTHON`
> +    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: sphinx_rtd_theme" >&5
> +$as_echo_n "checking $PYTHON_NAME module: sphinx_rtd_theme... " >&6; }
> +    $PYTHON -c "import sphinx_rtd_theme" 2>/dev/null
> +    if test $? -eq 0;
> +    then
> +        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
> +$as_echo "yes" >&6; }
> +        eval HAVE_PYMOD_SPHINX_RTD_THEME=yes
> +    else
> +        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
> +$as_echo "no" >&6; }
> +        eval HAVE_PYMOD_SPHINX_RTD_THEME=no
> +        #
> +        if test -n "yes"
> +        then
> +            as_fn_error $? "failed to find required module sphinx_rtd_theme" "$LINENO" 5
> +            exit 1
> +        fi
> +    fi
> +
> +
> +
> +fi
> +
>  
>  # Extract the first word of "perl", so it can be a program name with args.
>  set dummy perl; ac_word=$2
> diff --git a/docs/configure.ac b/docs/configure.ac
> index c2e5edd3b3..a2ff55f30a 100644
> --- a/docs/configure.ac
> +++ b/docs/configure.ac
> @@ -20,6 +20,7 @@ m4_include([../m4/docs_tool.m4])
>  m4_include([../m4/path_or_fail.m4])
>  m4_include([../m4/features.m4])
>  m4_include([../m4/paths.m4])
> +m4_include([../m4/ax_python_module.m4])
>  
>  AX_XEN_EXPAND_CONFIG()
>  
> @@ -29,6 +30,20 @@ AX_DOCS_TOOL_PROG([POD2HTML], [pod2html])
>  AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
>  AX_DOCS_TOOL_PROG([PANDOC], [pandoc])
>  
> +# If sphinx-build is installed, also make sure the dependencies needed to
> +# build the Sphinx documentation are available.
> +AC_CHECK_PROGS([SPHINXBUILD], [sphinx-build], [no])
> +    AS_IF([test "x$SPHINXBUILD" = xno],
> +        [AC_MSG_WARN(sphinx-build is not available so sphinx documentation \
> +won't be built)],
> +        [
> +            AC_PATH_PROG([SPHINXBUILD], [sphinx-build])
> +            AX_DOCS_TOOL_REQ_PROG([DOXYGEN], [doxygen])
> +            AX_PYTHON_MODULE([breathe],[yes])
> +            AX_PYTHON_MODULE([sphinx_rtd_theme], [yes])
> +        ]
> +    )
> +
>  AC_ARG_VAR([PERL], [Path to Perl parser])
>  AX_PATH_PROG_OR_FAIL([PERL], [perl])
>  
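Not part of the patch, just an aside for reviewers: the generated AX_PYTHON_MODULE check above boils down to running the interpreter with `-c "import <module>"` and looking at the exit status. A minimal standalone sketch of that probe (module names here are examples, not the real docs dependencies):

```python
import subprocess
import sys

def have_pymod(python, module):
    # Run `python -c "import <module>"` and report whether it succeeded,
    # which is all the generated configure check does.
    result = subprocess.run(
        [python, "-c", "import {}".format(module)],
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

# "sys" always imports; the second name is deliberately bogus.
print(have_pymod(sys.executable, "sys"))               # True
print(have_pymod(sys.executable, "not_a_real_module"))  # False
```

The same probe works for breathe and sphinx_rtd_theme once they are installed for the selected interpreter.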
> diff --git a/docs/xen-doxygen/customdoxygen.css b/docs/xen-doxygen/customdoxygen.css
> new file mode 100644
> index 0000000000..4735e41cf5
> --- /dev/null
> +++ b/docs/xen-doxygen/customdoxygen.css
> @@ -0,0 +1,36 @@
> +/* Custom CSS for Doxygen-generated HTML
> + * Copyright (c) 2015 Intel Corporation
> + * SPDX-License-Identifier: Apache-2.0
> + */
> +
> +code {
> +  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
> +  background-color: #D8D8D8;
> +  padding: 0 0.25em 0 0.25em;
> +}
> +
> +pre.fragment {
> +  display: block;
> +  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
> +  padding: 1rem;
> +  word-break: break-all;
> +  word-wrap: break-word;
> +  white-space: pre;
> +  background-color: #D8D8D8;
> +}
> +
> +#projectlogo
> +{
> +  vertical-align: middle;
> +}
> +
> +#projectname
> +{
> +  font: 200% Tahoma, Arial,sans-serif;
> +  color: #3D578C;
> +}
> +
> +#projectbrief
> +{
> +  color: #3D578C;
> +}
> diff --git a/docs/xen-doxygen/doxy-preprocessor.py b/docs/xen-doxygen/doxy-preprocessor.py
> new file mode 100755
> index 0000000000..496899d8e6
> --- /dev/null
> +++ b/docs/xen-doxygen/doxy-preprocessor.py
> @@ -0,0 +1,110 @@
> +#!/usr/bin/python3
> +#
> +# Copyright (c) 2021, Arm Limited.
> +#
> +# SPDX-License-Identifier: GPL-2.0
> +#
> +
> +import os, sys, re
> +
> +
> +# Variables that hold the preprocessed header text
> +output_text = ""
> +header_file_name = ""
> +
> +# Variables to enumerate the anonymous structs/unions
> +anonymous_struct_count = 0
> +anonymous_union_count = 0
> +
> +
> +def error(text):
> +    sys.stderr.write("{}\n".format(text))
> +    sys.exit(1)
> +
> +
> +def write_to_output(text):
> +    sys.stdout.write(text)
> +
> +
> +def insert_doxygen_header(text):
> +    # The strategy here is to insert a #include of doxygen_include.h as the
> +    # first line of the header
> +    abspath = os.path.dirname(os.path.abspath(__file__))
> +    text += "#include \"{}/doxygen_include.h\"\n".format(abspath)
> +
> +    return text
> +
> +
> +def enumerate_anonymous(match):
> +    global anonymous_struct_count
> +    global anonymous_union_count
> +
> +    if "struct" in match.group(1):
> +        label = "anonymous_struct_%d" % anonymous_struct_count
> +        anonymous_struct_count += 1
> +    else:
> +        label = "anonymous_union_%d" % anonymous_union_count
> +        anonymous_union_count += 1
> +
> +    return match.group(1) + " " + label + " {"
> +
> +
> +def manage_anonymous_structs_unions(text):
> +    # Match anonymous unions/structs with this pattern:
> +    # struct/union {
> +    #     [...]
> +    #
> +    # and substitute it in this way:
> +    #
> +    # struct anonymous_struct_# {
> +    #     [...]
> +    # or
> +    # union anonymous_union_# {
> +    #     [...]
> +    # where # is a counter starting from zero, kept separately for structs
> +    # and unions
> +    #
> +    # We don't rename anonymous unions/structs that are part of a typedef
> +    # because they don't create any issue for doxygen
> +    text = re.sub(
> +        r"(?<!typedef\s)(struct|union)\s+?\{",
> +        enumerate_anonymous,
> +        text,
> +        flags=re.S
> +    )
> +
> +    return text
> +
> +
> +def main(argv):
> +    global output_text
> +    global header_file_name
> +
> +    if len(argv) != 1:
> +        error("Script requires exactly one header file argument!")
> +
> +    header_file_name = argv[0]
> +
> +    # Open the header file and read all of its lines
> +    with open(header_file_name, 'r') as input_header_file:
> +        lines = input_header_file.readlines()
> +
> +    # Inject config.h and some defines into the current header; during
> +    # compilation this job is done by the -include argument passed to the
> +    # compiler.
> +    output_text = insert_doxygen_header(output_text)
> +
> +    # Load file content in a variable
> +    for line in lines:
> +        output_text += "{}".format(line)
> +
> +    # Try to get rid of any anonymous union/struct
> +    output_text = manage_anonymous_structs_unions(output_text)
> +
> +    # Final stage of the preprocessor: print the output to stdout
> +    write_to_output(output_text)
> +
> +
> +if __name__ == "__main__":
> +    main(sys.argv[1:])
> +    sys.exit(0)
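Aside, not part of the patch: the anonymous struct/union renaming can be exercised standalone. This sketch uses the same regular expression and the same naming scheme on a made-up header snippet, to show which declarations get renamed and that typedef'd anonymous structs are left alone:

```python
import re

counts = {"struct": 0, "union": 0}

def enumerate_anonymous(match):
    # Same renaming scheme as the patch: anonymous_struct_N / anonymous_union_N,
    # with an independent counter per keyword.
    kind = match.group(1)
    label = "anonymous_{}_{}".format(kind, counts[kind])
    counts[kind] += 1
    return match.group(1) + " " + label + " {"

sample = (
    "struct foo {\n"
    "    union {\n"
    "        int a;\n"
    "    } u;\n"
    "    struct {\n"
    "        int b;\n"
    "    } s;\n"
    "};\n"
    "typedef struct {\n"
    "    int c;\n"
    "} bar_t;\n"
)

out = re.sub(r"(?<!typedef\s)(struct|union)\s+?\{",
             enumerate_anonymous, sample, flags=re.S)
print(out)
```

Here `struct foo` is untouched (a name already sits between the keyword and the brace), the inner anonymous union and struct become `anonymous_union_0` and `anonymous_struct_0`, and the `typedef struct` is skipped by the negative lookbehind.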
> diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
> new file mode 100644
> index 0000000000..e69de29bb2
> diff --git a/docs/xen-doxygen/doxygen_include.h.in b/docs/xen-doxygen/doxygen_include.h.in
> new file mode 100644
> index 0000000000..df284f3931
> --- /dev/null
> +++ b/docs/xen-doxygen/doxygen_include.h.in
> @@ -0,0 +1,32 @@
> +/*
> + * Doxygen include header
> + * It supplies xen/include/xen/config.h, which during a normal build is
> + * included via the compiler's -include argument in the Xen Makefile.
> + * Other macros are defined here because they are usually provided by the
> + * compiler.
> + *
> + * Copyright (C) 2021 ARM Limited
> + *
> + * Author: Luca Fancellu <luca.fancellu@arm.com>
> + *
> + * SPDX-License-Identifier: GPL-2.0
> + */
> +
> +#include "@XEN_BASE@/xen/include/xen/config.h"
> +
> +#if defined(CONFIG_X86_64)
> +
> +#define __x86_64__ 1
> +
> +#elif defined(CONFIG_ARM_64)
> +
> +#define __aarch64__ 1
> +
> +#elif defined(CONFIG_ARM_32)
> +
> +#define __arm__ 1
> +
> +#else
> +
> +#error Architecture not supported/recognized.
> +
> +#endif
> diff --git a/docs/xen-doxygen/footer.html b/docs/xen-doxygen/footer.html
> new file mode 100644
> index 0000000000..a24bf2b9b4
> --- /dev/null
> +++ b/docs/xen-doxygen/footer.html
> @@ -0,0 +1,21 @@
> +<!-- HTML footer for doxygen 1.8.13-->
> +<!-- start footer part -->
> +<!--BEGIN GENERATE_TREEVIEW-->
> +<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
> +  <ul>
> +    $navpath
> +    <li class="footer">$generatedby
> +    <a href="http://www.doxygen.org/index.html">
> +    <img class="footer" src="$relpath^doxygen.png" alt="doxygen"/></a> $doxygenversion </li>
> +  </ul>
> +</div>
> +<!--END GENERATE_TREEVIEW-->
> +<!--BEGIN !GENERATE_TREEVIEW-->
> +<hr class="footer"/><address class="footer"><small>
> +$generatedby &#160;<a href="http://www.doxygen.org/index.html">
> +<img class="footer" src="$relpath^doxygen.png" alt="doxygen"/>
> +</a> $doxygenversion
> +</small></address>
> +<!--END !GENERATE_TREEVIEW-->
> +</body>
> +</html>
> diff --git a/docs/xen-doxygen/header.html b/docs/xen-doxygen/header.html
> new file mode 100644
> index 0000000000..83ac2f1835
> --- /dev/null
> +++ b/docs/xen-doxygen/header.html
> @@ -0,0 +1,56 @@
> +<!-- HTML header for doxygen 1.8.13-->
> +<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
> +<html xmlns="http://www.w3.org/1999/xhtml">
> +<head>
> +<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
> +<meta http-equiv="X-UA-Compatible" content="IE=9"/>
> +<meta name="generator" content="Doxygen $doxygenversion"/>
> +<meta name="viewport" content="width=device-width, initial-scale=1"/>
> +<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
> +<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
> +<link href="$relpath^tabs.css" rel="stylesheet" type="text/css"/>
> +<script type="text/javascript" src="$relpath^jquery.js"></script>
> +<script type="text/javascript" src="$relpath^dynsections.js"></script>
> +$treeview
> +$search
> +$mathjax
> +<link href="$relpath^$stylesheet" rel="stylesheet" type="text/css" />
> +$extrastylesheet
> +</head>
> +<body>
> +<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
> +
> +<!--BEGIN TITLEAREA-->
> +<div id="titlearea">
> +<table cellspacing="0" cellpadding="0">
> + <tbody>
> + <tr style="height: 56px;">
> +  <!--BEGIN PROJECT_LOGO-->
> +  <td id="projectlogo"><img alt="Logo" src="$relpath^$projectlogo"/></td>
> +  <!--END PROJECT_LOGO-->
> +  <!--BEGIN PROJECT_NAME-->
> +  <td id="projectalign" style="padding-left: 1em;">
> +   <div id="projectname">$projectname
> +   <!--BEGIN PROJECT_NUMBER-->&#160;<span id="projectnumber">$projectnumber</span><!--END PROJECT_NUMBER-->
> +   </div>
> +   <!--BEGIN PROJECT_BRIEF--><div id="projectbrief">$projectbrief</div><!--END PROJECT_BRIEF-->
> +  </td>
> +  <!--END PROJECT_NAME-->
> +  <!--BEGIN !PROJECT_NAME-->
> +   <!--BEGIN PROJECT_BRIEF-->
> +    <td style="padding-left: 0.5em;">
> +    <div id="projectbrief">$projectbrief</div>
> +    </td>
> +   <!--END PROJECT_BRIEF-->
> +  <!--END !PROJECT_NAME-->
> +  <!--BEGIN DISABLE_INDEX-->
> +   <!--BEGIN SEARCHENGINE-->
> +   <td>$searchbox</td>
> +   <!--END SEARCHENGINE-->
> +  <!--END DISABLE_INDEX-->
> + </tr>
> + </tbody>
> +</table>
> +</div>
> +<!--END TITLEAREA-->
> +<!-- end header part -->
> diff --git a/docs/xen-doxygen/mainpage.md b/docs/xen-doxygen/mainpage.md
> new file mode 100644
> index 0000000000..ff548b87fc
> --- /dev/null
> +++ b/docs/xen-doxygen/mainpage.md
> @@ -0,0 +1,5 @@
> +# API Documentation   {#index}
> +
> +## Introduction
> +
> +## Licensing
> diff --git a/docs/xen-doxygen/xen_project_logo_165x67.png b/docs/xen-doxygen/xen_project_logo_165x67.png
> new file mode 100644
> index 0000000000000000000000000000000000000000..7244959d59cdeb9f23c5202160ea45508dfc7265
> GIT binary patch
> literal 18223
> zcmV+NKn=f%P)<h;3K|Lk000e1NJLTq005-`002V>1^@s6{Wir#00004XF*Lt006O$
> zeEU(80000WV@Og>004&%004{+008|`004nN004b?008NW002DY000@xb3BE2000U(
> zX+uL$P-t&-Z*ypGa3D!TLm+T+Z)Rz1WdHz3$DNjUR8-d%htIutdZEoQ0#b(FyTAa_
> zdy`&8VVD_UC<6{NG_fI~0ue<-nj%P0#DLLIBvwSR5EN9f2P6n6F&ITuEN@2Ei>|D^
> z_ww@l<E(G(v-i3C?7h!g7XXr{FPE1FO97C|6YzsPoaqsfQFQD8fB_z0fGGe>Rz|vC
> zuzLs)$;-`!o*{AqUjza0dRV*yaMRE;fKCVhpQKsoe1Yhg01=zBIT<Vw7l=3|OOP(M
> z&x)8Dmn>!&C1$=TK@rP|Ibo3vKKm@PqnO#LJhq6%Ij6Hz*<$V$@wQAMN5qJ)hzm2h
> zoGcOF60t^#FqJFfH{#e-4l@G)6iI9sa9D{VHW4w29}?su;^hF~NC{tY+*d5%WDCTX
> za!E_i;d2ub1#}&jF5T4HnnCyEWTkKf0>c0%E1Ah>(_PY1)0w;+02c53Su*0<(nUqK
> zG_|(0G&D0Z{i;y^b@OjZ+}lNZ8Th$p5Uu}<?XUdO8USF-iE6X+i!H7SfX*!d$ld#5
> z(>MTtq^NHl*T1?CO*}7&0ztZsv2j*bmJyf3G7=Z`5B*PvzoD<bXCyxEkMhu6Iq^(k
> zihwSz8!Ig(O~|Kbq%&C@y5XOP_#X%Ubsh#moOlkO!xKe>iKdLpOAxi2$L0#SX*@cY
> z_n(^h55xYX#km%V()bZjV~l{*bt*u9?FT3d5g^g~#a;iSZ@&02Abxq_DwB(I|L-^b
> zXThc7C4-yrInE_0gw7K3GZ**7&k~>k0Z0NWkO#^@9q0f<U<Ry!EpP;Gz#I635D*Dg
> z0~SaGseli%Kpxlx3PCa03HE?$PzM@8GiU|JK_@r`&Vx(f8n^*&gZp3<On_%#7Q6-v
> z5CmZ%GDLyoAr(jy(ud3-24oMpLB3EB6bZ#b2@nqwLV3_;s2D1Ps-b$Q8TuYN37v<o
> zK!ea-XbhT$euv({2uy;huoA2V8^a9P3HE_Q;8kz}yavvN3*a4aCENfXg*)K$@HO~0
> zJPJR9=MaDp5gMY37$OYB1@T9ska&cTtVfEF3ZwyPMY@qb<R&tT%ph-37!(CXM;W4Q
> zQJ$z!6brQmwH{T1szx0~b)b4tH&J7#S=2`~8Lf!cN86yi&=KeabQZc0U4d>wx1%qj
> zZ=)yBuQ3=54Wo^*!gyjLF-e%Um=erBOdIALW)L%unZshS@>qSW9o8Sq#0s#5*edK%
> z>{;v(b^`kbN5rY%%y90wC>#%$kE_5P!JWYk;U;klcqzOl-UjcFXXA75rT9jCH~u<)
> z0>40zCTJ7v2qA<d!X`o`p_Oov@PP1=NF=Het%-p|E^#BVl6Z`GnK(v#OOhe!kz7d8
> zBq3=B=@980=`QIdnM~FqJCdWw0`d-WGx-Af5&4Y-MZ!qJOM)%2L83;YLt;qcxg=gv
> zQ_@LtwPdbjh2#mz>yk54cquI@7b&LHdZ`+zlTss6bJ7%PQ)z$cROu4wBhpu-r)01)
> zS~6}jY?%U?gEALn#wiFzo#H}aQ8rT=DHkadR18&{>P1bW7E`~Y4p3)hWn`DhhRJ5j
> z*2tcg9i<^OEt(fCg;q*CP8+7ZTcWhYX$fb^_9d-LhL+6BEtPYW<H!}swaML<dnZqq
> zcau++-zDEE|4;#?pr;V1kfpF+;iAIKQtDFMrL3hzOOG$TrwA+RDF!L7RXnKJuQ;cq
> ztmL7Tu2iLTL1{*rrtGMkq+G6iMtNF=qGGSYRVi0FtMZgCOLwBD&@1V^^jTF!RZmr+
> zYQ5@!>VlfKTBusSTASKKb%HuWJzl+By+?gkLq)?+BTu76<DMp7lcAZYxmUAKb6!hZ
> zD_m=<R;SjKww$(?cCL1d_5&TVj)Tq`od%s-x)@!CZnEw^-5Ywao`qhbUX9*$eOTX8
> zpR2!5f6xGJU~RxNXfPNtBpEsxW*W8_jv3L6e2wyrI*pziYZylv?=tQ){%B%hl48<m
> za^F<O)Y~-QwA=J|Gd(kwS&i8(bF#U+`3CbY^B2qXmvNTuUv|fWV&P}8)uPAZgQb-v
> z-?G(m+DgMJ)~eQOgh6ElFiIGgt<l!b)*Gx(S--Whv=P`GxB1Q1&^Foji0#yJ?d6>1
> zjmyXF)a;mc^>(B7bo*HQ1NNg1st!zt28YLv>W*y3CdWx9U8f|cqfXDAO`Q48?auQq
> zHZJR2&bcD49<D{M18y>Ip>EY~kKEPV6Wm+eXFV)D)_R=tM0@&p?(!V*Qu1PXHG9o^
> zTY0bZ?)4%01p8F`JoeS|<@<K~!G7L;yZs)l&|JY=(diHTz5I9kKMc?gSQGGLASN&%
> zuqN<HkZDj}P+u@5I41Z=@aqugkkXL*p*o?$(4H{Ku;{Snu=#M;@UrmH2;+!#5!WIW
> zBDs-WQP`-ksHUj7m2NBdtel9ph%SsCUZuS%d)1ZI3ae9ApN^4?VaA+@MaPE69*KR=
> z^k+6O=i<ELYU5^EF08$*XKY7yIeVI8$0_4X#@of0#ZM*JCG1X^PIO4DNSxuiaI3j5
> zl01{@lID~BlMf|-N(oPCOU0$erk>=<@RE7GY07EYX@lwd>4oW|Yi!o+Su@M`;WuSK
> z8LKk71XR(_RKHM1xJ5XYX`fk>`6eqY>qNG6HZQwBM=xi4&Sb88?zd}EYguc1@>KIS
> z<&CX#T35dwS|7K*XM_5Nf(;WJJvJWRMA($P>8E^?{IdL4o5MGE7bq2MEEwP7v8AO@
> zqL5!WvekBL-8R%V?zVyL=G&{be=K4bT`e{#t|)$A!YaA?jp;X)-+bB;zhj`(vULAW
> z%ue3U;av{94wp%n<(7@__S@Z2PA@Mif3+uO&y|X06?J<Fdxd*PD}5`wsx+#0R=uxI
> ztiE02T+>#oSi8M;ejj_^(0<4Lt#wLu#dYrva1Y$6_o(k^&}yhSh&h;f@JVA>W8b%o
> zZ=0JGnu?n~9O4}sJsfnnx7n(>`H13?(iXTy*fM=I`sj`CT)*pTHEgYKqqP+u1IL8N
> zo_-(u{qS+0<2@%BCt82d{Gqm;(q7a7b>wu+b|!X?c13m#p7cK1({0<`{-e>4hfb-U
> zsyQuty7Ua;Ou?B?XLHZaol8GAb3Wnxcu!2v{R<HnZuJKC4qWuPc=?k1r3-ydeP=J*
> zT|RZi=E}*djH{j3EU$I+TlBa8Wbsq`faO5Pb*t-LH>_`T4=x`(GvqLI{-*2AOSimk
> zUAw*F_TX^n@STz9k<mNsJ5zU4?!LH}d2iwV#s}yJMGvJORy<OC)bO+J&uycYqo>DQ
> z$NC=!KfXWC8h`dn#xL(D3Z9UkR7|Q&Hcy#Notk!^zVUSB(}`#4&lYA1f0h2V_PNgU
> zAAWQEt$#LRcH#y9#i!p(Udq2b^lI6wp1FXzN3T;~FU%Lck$-deE#qz9yYP3D3t8{6
> z?<+s(e(3(_^YOu_)K8!O1p}D#{JO;G(*OVf32;bRa{vGf4*&oQ4*`<-1El}}02*{f
> zSaefwW^{L9a%BKeVQFr3E>1;MAa*k@H7+qQF!XYv002BXNkl<ZcwX&&2Ur%z_P#e7
> z6Qi+fqA~V@E%vU7y^D&10wPTTDJm);MHEC76-DVCL_kmw?7jDbz4s2N*c<Kq-*@37
> z#Au5Dn|t;CJm2#^yWj4#-8pmSoS8GTMLrU$3Dg1V0S)qxwSgKyRp2vyrhkMg0%Wkd
> zKntJ?&<p4b^aln21A#&LNB-{z@P1FA6VMc>1$+-w06x=a`rA|vr~**(wFP<rWHvJ1
> z+fW~4G{)LwjHRuWrIk&K7A;2c+FM}=GH^GbB|rwP43q&r(`WiaDhp7WH3WVE3K+3O
> zi4q#qr%!j?xN+0UGiS~ozkBEIovf@Z$;`}>#~E32=hh>6`k4PSh1Z`vdHU@7wd+^*
> z?%lU7ARy4EXV0EvkdBI3DMdQ~?D{JKrGd}%nSMu<T$GGI14?&XzI^%No}LTlpE-Rt
> z<;|PSY>>im&!4}Ld-qc1+`03zf8TytnYc<qg2QFa>h*H?@DaIi{(_{Yrpb#JFO=|E
> zS&WyBYw34aC9jU}(WB>Bq)!HAH&01S-IQv=XXgA&3Y7=gowf(q#gZ8{6ILWFefi?m
> zi=3QX0Yl2ehmXK)mt@z@J(7@+D3Oto;^X5hvuDqiY15{Ot*xy<lFGb!^CU1ZP-3EE
> zWYwxwvTxr3xpe7@JbLs*NhdoyM{=@r1&n@V`0(LY$dAm~2WSQS2vBwS2KY?>M~Pi0
> zyJ{LP>{f?_hJ^ZOJbd&pCtJWoSqd}Vx^-JFT(}^|jvWJ&?UVKE*GpVnoP>mg$btn6
> z#MRYRoSd9w)~s2wXwf1G4GmT9uUofHcJACMM~)m(q$;H=r7RgUH%EZnoC60AZ12*g
> zi>hm<?n*13#!yM%GyNYTc9XQIX-!i)4t92);db|K>g{Yuu}m=I^Jg!?kdGxJ;}N9f
> zLon1mxpL)-!eC@hV)N$Bl9-q$HOYuPMn^}>#*G_g|Ni}Q>eMMoNlB3tCr-%Kt5+pG
> zJzbtXd!}^hxw%q6Z$O(iZAz|MwdzQeh5BY=fDPs|WBwl@TD;W(4%JY19GaB4CNc9(
> zj=X-IDNmm~lSdhk!IaMxqa_#IL*0-Jb?Ve<*}8SBczJor`0?XKPft$<4<0PNdi4@W
> zJL%rNy9^jGKr}Tq#n8}Drc9Y4OO`B=-Me?o_3Jla{5+9YuU<(`4#ec!1SY+E_wLO;
> zefn6SOl&C4fbW1(z-Rg&CQ3*e6|}6?t5m6?bNJBllxI(0%Hzk+APv*xe)@fMvCkDg
> zfdG@+w{I)bjuJ6EJY2lJy~V}FMXar@MPFZEh7KJny?ghTK7IO1zkdBiU0q#9j2Iy%
> zCMF6~y1BW@;>C-VxLdYtQKT&;<=e#WoV@zv@zZCMCQX`w@=={=18_9pGh_ab(zgI5
> zq{5KhyY;lVaQ^C@jEtv}mYObCuUu2QXi7yg<;|Nnm9Co1xMIZ$S+;DM!dQO3{^IB`
> zJ=Z|rI9o?&RF<xe?kgiBqvvL3X3xNg&kPI<UXB_yI!jN_AZv`VY4)sH9=V~RVM@22
> zl$0bJHf&JzQb@&ob|f?AD2z&7Q&AsYXJ>~5hlZe>LjW=+M+QE3<^N;E3jG1-45({q
> zYTI49c;i`m+5@?M?WUYPdrofL$m?Ej-MXbPV@ynp0vap{2?^rk?U!w3W&Og`)HD@F
> z&FO^;7j6#-2uO&ChzO63ja{^S`SN+d0x)kdbj!G)prDN~dX6|dJ6{LGKD4#5e-#(E
> zJcs%w^hZVSgps4D1srN(xBl|QOQ;ZU6f6Dps~lOgYQt)jmyF2)cMchSw#xs9h(-e?
> z&Y*T}J6N1Je(uT58@J`mnR9aZ@L@TB{=6a?j~+cLU@Qp^4i+$*BHe-lgR;#`&F+B_
> zkDwcl0mIEmPEOX}yLayZ#Os!kk<kgdXFCYIHbC2#FJE?q6#V(@*|WV49z3WC2AqI2
> zef|CYx7yp=UqaXXaMh|+xog+16{I68SFThHB1&ka1tz@@!zx3bK79e*_N4^+M?|CC
> zg8>>p94`7A_)MQT(Xin#OaH1>f6(8yWpC=GOIM*M9#@8Is4tQ!XpH#!`YQULpP!!u
> z2ZiRKJ5B{7?E^zCTC--2{^`@FyMcLHg83Q(wSj5?8U8nvf4vsa0B8mY+#Y!hg>;+_
> zD}4iW%?m&V8Ip{z=$o6j$b$zD6dm^Z_3M%f{WedMr-`$Zn=g{3Rn8e8>cw9HpXn1N
> zcH7h=y92`m1D700NjWLowr&@6xys-+CDzKsmCE3^gM)+2m@z}_Y#m>Y9z8k*U2iCu
> z)Er%MA9UgEAw8QQ9Zp4l`>%if>r2qTaQ>Hw3<@dQ77}#mx^?T^U@&b4)1M0q4a??O
> zCk-M>=rd=|$e}}rWbfX6N@F>C<hX3#y8m^H=B*r&3`Yytz-atXY8D2|#Rm8d%2pJ|
> z&-9TJ2ccU7LoCfFM{U@!4NSORJUl$a#>Pe_PoAs{?a+EAIK0!P%P%i*epy>vdn*i>
> zXhTE85HP48ENu@0hRkepb92YirAt#CC=15?f*Ji40%MlUXU+~MPMk1>L{8|^rOR18
> zJ-y6f!-mPAL4#!M*s;ni5od>-ojF4^UuHPFilxQGr~Uf%OGda59UUDn#F?)u6Jcy@
> z>}G9kJsD~SJ(EiQod)xn{&Per$?3zs)vMPu3k!*UwPVLlRQ3#6jBfh;wQJyy52wiJ
> z=mSW*Go(_fzsmrShV{I>yn3svt1m<so{^vA2h$}OX%~TA2M*jEpsqd?S^TZW@|pf~
> zL@7yWHr(EB=CKXyH^Ycop)e(-ViYXvOV_W-g{xQO<oR=wa_XcUIe9{k0z7{o&R)8x
> zFhWjFPOi4D?oy=Rv}n<y#hKxs5t$c63%XI)u3cwdzI^#H(>j0U@;mb)SRR&(;D46&
> z+~rG3*)QF=ro=sa`68wNm69b(+9K^1#flaC_A^s{e8h^U_jm7>E!xfSobMx>Hg1uH
> z3l}PL)n;a9^1}~5$lm??<=*}KayLC)uEW5%1nGJ4>Q%W6L*tK-@${*}nAdLKe$}v1
> zqp8T}M=-%3>T&r@LY=R}gb5R(pFMl_hG|@d)z5t2m5-`CJTlzPyLXlHrote)hce&7
> z|C1+A=4feYokO~AKriI;1MoHQx%>Xeh)VxYfaVw@t204?q2li!AfBF{io*mQCu-HI
> zC2DGFas?Hhp7Btgym+qQ;giSm;PE3c-V+6no;@vq>KgB>Xz$*=cae_<Q2u?4kk61D
> z0Pi0X5^^v<s-!)9pybUwRe7oMC|nMf>*>oE%6(o-y`QdF>6<of%4*oK;Sr=uhU|}g
> zY6Fy#pADUlj5v-*ukmgT)tXGRpX!pEup&`mM7otxTEb9~jvYJ7_uqdn0|yR#miFMj
> zJVNEa%zPu9m8P%6`#@rtOwJ4D)BO4KRr=*Og&C9QUwrWeyY`yZt5+X;@ZiB4`BCH<
> znCx|SmXa2!aQcPw%go7^7jIt4Q#P9C&*Wi7hB9Qdc=6&_Ft#=!Z5yZ$oskzMWC`GN
> zxBU?k4IDb_4$&B*y~AUUrvwHD%gmWG6{hs_^^+bwdMM5l9ou(G{pzc)b~-ydKL$zm
> zyBsC{0%c`o<!ESVa9U6e4Duxlk<Xkg%TF+9Jnr4Qm)GZi0OjQ7%Inv!mD!(j=g!d)
> zBp2!K0<57wwgtXJe#P_i{7fGqad5c_D$3AgjMcM{kZ_qc%~|nz<X9jbDY|v*CPRm6
> z<o4^=FQ8AKJ~I$DCN?(q1>MT%74#u=^XAPLwQJWNkG!jbIf~)PvBNHoj*f}1UcLIg
> z2gY9{&Wm%lhjZn-cI}ep&6~>)RjZ3ygGO7?0Qvw`07Zv{RHl?<PeJvC9!4Ca<p>19
> zbE*)fAkm9`3=GV2;6VMK<t0?(tsHCRz5f}aCwcpynzr(?w3s9zAz?COhKtgD)8&j3
> z5|*zF)6lwUV`DSU-rl}<<;s=E0Lgpy?8zbnd?;gX+qUgG^5&3?T8R=RhQ-Cj9nZh@
> zLBu(6N^nkr(pR%$#fs9qcOMxzKwZ=aX<b4-oPMP4Ot1B>fO>}x9V`tD4CYLlG$|PR
> z^J=Io>j-rBt4vKzeKa*Soe^#rK!;+E;gVs?fU)1JhmwIooJFGl0P~_#3-ja3&UX3N
> z=$l#?%>g=4Q<b7x@vr<mC^R@T)e#tkUc}bW(9pf`IPkDx!)QG6*|-{j2J0t1?yAx!
> z`}gWGaNFcbcCfa+6jw8L-|p`2ij?G#P{)p)=c1y!g@uLHXbnOfGf1KRmoHz=Wmoy3
> zj5%k{oWu3%)mwDp#EE<Pm;N>}Z@SIhhoyZDh8P%3%9k&%lsiw-_mukenq)R<(j;ug
> zj2WxfuU~)W%9ShksYc`{@u!rUn)+nt&Yc&1eSJ4}?%a7Yo}Ua>jm-Bp1K>ZIsj8)=
> zr6bCdcJ=Djbmm9w-o5)W^M4x~Hf%V(Wy_XhXc!)dOU7rtv_YB8{QdnmpE`Bw_RE(q
> zl@W+{5uQIA@9-A%^_Aaz^9>j9{gAJpe{xg;;5wI)n#1(&I63Ccim1A7yi`R>jvA$x
> zHDJI1&Ev<9*FSpnXc@fpvUp&&cKgAD2VWpF(82dZ2=Q`J;ji-l{%s;dqO!;|oR`m~
> zWJkFaAI-jf`zn&2&vEkP$^01q9)hlV50aE~?37>?@J<R0CY)1BwuMf<H$V7aB0dXx
> zr|pQV3kf}Y>(;I3$ZYwy|1#aUaU-LD|Nc%$rw)XzqO*TWL`hk_SkYoeqxDR(X1cnI
> zOMchQp&(W&nR0}d_7z=S-A>RUE8&@GF;mB)YZzBDdZ0^BfAr{)(tZ9XiTWe;TDs4z
> zS+gb!68B8Sij{KMZMSOG3JuIzVc7Q(nSg1qK`E|q2uq2}=lH9Vf8)lD=c-n%s*U$h
> z1@A=Z(s86OD(CEfbprj1q@$yA7}EIU-;v_)A{dG<)YR0>Q4Sh)pVSBgC1sszKh$tC
> zG%yp-`FV@FIG0R)l3h2^s#UvS3k!?-cvj9uE8P`9>y_(vxa>6$UHkK=PoFB4{C6l9
> zznntDMSn&N`})mm1$15GaL<AT3!bCP+K6(@uT`_w6EY+nHB|H_^NkXo$IPru1!Ov^
> z9jY#$R{GhqX9enrB6Z2245Run-|U=hB`(wAxr)x8Kc8NuN)>LZP#N!psvr#{i`%zv
> zE8Q*Qs=#=Kk(HgLluJeErr+6$ScbxRzD=t8ET8fpWe*7nIfHU^LYY1(FDY7bYCkn?
> z9WYAQ2s-2(MUSKd1_#c`kQ@#wS+dkHbVZf%tX~2uX+XxL70cxkTF~JVH*MaOdHe1i
> zx&QD%0Ukce_#GZ(Je2eY_Y2Nd;Z$*W?d|{f(o*Hto!fHl#&x;M>CTH6<jmPKTw=k&
> z_=u7vOLF<G6%2s~D(T3w+_-g9iFfDTT_ugQ`{@NB^KII+DO*EBBdtuCGDi`Accn^|
> zQlP4&K~2diTozTCQ`6JrZt6WHO+rP>;^NLD{x48#xD=yq%T}$t8D5o!3aV#PL6s)&
> zJ>moajw~C?e)IM%B@gDOdLM<;QlW79_>4^NHl7y^_6^FSU#wU$TIinyQKVB+Hfhtk
> z)AG@JW5A4a6^7*aB$<=?*zn<+)X}=c$H$kzyZ=o$0EYYmD2azw!(Wp+b?ffhzI%7h
> zNl2ZPQ>WxOq}2&XpLgNd>C;LWLUqogDh`jTG*X}s9yoeb_8vSaJNE36&D*z2d_qEY
> z^A;^mBHy{lUlT|5zWw^epc~G8imvz|!XHdYkt4^C7o?-YiTq<+Lc*JGOPA*O<$S;b
> z5>6jSOTY(Ab>Y<c^95!7UD;5kLrDMdv14-M?p^-RZP~gt*9y`4Z9i)C=xu2aA1dKh
> z&si9S(<F|bIH8o8&$xZpF4?$kn?%LN<|M9KmCI|#5J#1_DsEw@(p05&6y-Z};eyny
> zTh|rFSTza}l<T7>)fio~4(+<G)gL`Z=Fjs~7?Shd)GK@T>?wwZMjUwNFa}-8e)nK_
> z*rEW3NI0xv)4fNJbK7?A0`u&ZUHkUS?)?Ye3*Ik`x9{!PvrjqSzI%^s*s@htt=}L?
> zYu3t4_t~$?SE#TV`Po4-(f6}W^%^zo($dmiK6vs>wjlp)yLQXYy?OC@WLiA#IdD)x
> z%E{BxxJi>kh&vhO{~1sNIPKU7X>t)-`1(zovsn(tD_lm^k!fy4UfcPs_^&-`)EdO&
> z4jzM?oSm<vXFO5DsN((+Ht*Oe>o#qcgjK6^hiVLa`0cmfayoP?u(y4Oj;VYeRaoZz
> zHh7)oRAJA-LvrZoF=^De@nWRG=lUe{N-`LS44QZ9(0$9OQAP?=a$1qz<Qx*>Qdl#y
> zaoGq%@1gfT6dss{PX)9{F2i-#H!@D!unjMH%QjiNdFuyZ&897~X5(h%xN5^DiBC+D
> z*cFMQJ6iuC^5Bw(v5=6h(HJ;<GGK<=tSdC(lh<uj!YgSY{^|`#14mVwYd39`Qx~tu
> z`YqdEB3>de0pPlx-T=oUx%<XwAoA#$bJ^=QZ&T8E8_Xkl{YK?{!?qo=X~!-Zp`$~o
> zOC8(9CouTQ-Xkf>{ld>x7zNJ=EKgo7A(2rzU?Qpk{(u#pPZ#fW9L}R$X1PCSS>BaS
> zVJK+=49hZ=D_?#N8d;rBg(*p#1!&x{efKSTU`mfUbCo5r1SKWM`NoYKmxC}Il;zOE
> zd%FNUGz}jP7LNc{dk2NQiC?uwmL;wFV8pIWmS`~JQZS=mXhcrycI`O;&XzwMc~P>_
> z-L8JK7A+R<ICM0Z`Nbq8DRFs>AtNIVRa*SNeDzv6ef0)6Ovr|D^YqfCOBt#$L=|xD
> zA@AMFcp^*USG*UI%i<*h;CURFYRE9neSH3srAoOjO<0-ByqIU<XDSR;nON4y_~p`f
> zz`$FGKO0Qc56{*Js0naT-CBRl*gJpaeX5=r`AcwAOeW$^MH<zCPon!KvGp`)+xq9V
> zx;h3j8-@<o+xYwY(`iB(5*j<!gxxm>ifO^Ux3%F#@_zk_6)R3&vu4fRhgpIbogiVc
> z@gI!vWy^6~E-tg@Wqx0!%6_Cj9g?GWer3^Fn1)(cQ_Jxe_oSSX;H9w=8WSfWOP49f
> z!fC0l^E@Pa8PZ-Mu}N!W1te?I#;uYJ$-*O%uzr(7tw@rfs2K4JUm^h!OWz6qh$!(1
> zi4ecAB}(2s`t;q2@));l(>8(WheRV!pl~?~gZEgjV3cQ3U`P(6eiY)9AvyI*Ba*5S
> zhu(Db$C}<qgw%akIw~m7sH7P$K}%x9YO?JE#5F}49IpEKnUY3JzK&SCWs45Wb#?Nr
> z@1UR{W$g>4W&i&Dl??<ChFub!C4M)x#)k!wIT}N<&b)f{YR2OnkuX$BU}TI0E{Xjh
> z1Vw>?mL;IF<8n)U^9}VY2UHq|H~4~CVKlPnZ>j6-KIchf{7NN1e=y5C^ToA*$Y|y0
> z8y+RzA>lxT_=2GrCm=trUsx0vF-kf235^8PMJi$52JbQcqQFq`3JAfqNNLr!{aTb~
> z!cff-CpYXl@cy#omq)pebLM9?#qJ5>+Toe%0R^)}IC6e_l=W1{^yTX|y+1ty_xOwi
> z%h;n&KN^BVkOrrcKYjv376mBXsx)fYJV0}}t~feQld!NbWw|U@3=SMPP(jtI)l3nN
> zFNfWIfwaWK@|=v(aq{HJu`gb{$UJiXI$AB6DPBQn%<>OHSnLW}w_~4VWQ$C&wZDTr
> z%m7Nr!WG7qkrRTJEXxgxS)s%WiGF{cED!JdM?`~}!o<@zP!{+FgQ>#sKU}<k1^&Sb
> zc+P9@!s4JjFrKGRpv+zD4`z<eEmyt*RSP?7JIB;DJNBddq~v;cd3ZhV{j5;RGkExj
> z%ZRIuXX03=>iOt2!bNXZUjAXPmnJ~({FCS7F=#2IFY>PTWA!-1?SOPBP(FSFLly%#
> zrdIZcsx|C}Ym5>r%LxiIMny$Qe0;pJFpG=B3=PN7AgBhAi4|!HsYM2*XU(9<$jBo|
> zOSbG!k?^=giNecMVQJ!;LiqK1-o3Z){}!>y>*e;7Ou6$kQ-*43(-*QGDv2WoN`;{e
> zKb9?9&Lw8$x`Mp^DBb9lNHb}@jJ2Hbs!qN7x9T=fyHHQ9;aQ@7A-w&+@b$vsmGcH_
> zJU`l`S&Nk@78|d#PgvBOgbmvY%JgpJzXM>&@+4{6yybet{RPj(*&u~*&PHGVr{*m~
> zmaW==@~>6my*s^MpFINUc?E`1pLE4LYJhY;ovB9fQMts##LC5r7E>QOSSL$=w2?$c
> zL@GO7aFYRUNX5N>X3UtGhTvQ-OE)g|*JRE*jT<+%IC}J`GJ>42Zkt5L!6?CtSq5f`
> zNnEQi0MB_O<M0}fh5xI<ybUD`?yuUso5{<LLnpGDwQTK)eCy&JE3RZ<(3+LM``&|j
> zM#j^y$xE+rK84d?zGjmoZrCAOx}#H(PB1V9PzK%eL07to0I$>Y#{MsW;W(W~3%*bF
> z8a4gnR&A6xFp(<%cR^LY=!7*A616-Bmi9b6PZK;B->pJ8=jcdBjS)Hr_MSMe#8JJQ
> zcY)8nEO`SA!_{JK>qvca9MYgO^LuT9kB(5+<sLnH#KGRyy?PG5J3>oWSq8xc|6J$I
> zHQgE-8ZvdNJ%@!jRKn>q>L)FAbaWP_r>DQTl=?(MqtRucy9kR-1al<Ah)9%(xRuIz
> z1Q?a)s{g#laCz5M_h@}4{|3Cqusl!Nv{PYBbe&JimMd?9{5hXn6ct$=aPkXZn&Tf4
> z14CwcK^#?{zamNES8rA@+Sv3a(y&GvtpQr`1l`8C-OFHgb@jPinu})-`y(Js=!g*`
> zd^Jaov_?Ey$vt}aAGB@Nmc74uhPR<A504?yE5vS^>tn>7h%(ShF3{U?&Yo<@1RMKX
> zTMry7c&<NAGf7FapZd_#h^v7#J`H_RMX2X*ftnpUbZF7G^)I_dXz9r;H+O|8xj_{r
> zrk0i#odl$BpZ;?Zg)VCY=FFM1iQX$~p*ML2P|u2h;vWU&Iy&zd3WFgW1_l1}TmjA(
> z2S+O5RdEXL<%chfg1AbWOJ&9SZF2vW$aouvD=Y(04lkTTP!q;k+dQ4?9Uz{5AxhfJ
> zM^&DJw1BYKm2&*@J#qC|_z3ZA0M0~FC#Hp49~kWB=C)C30e^wwAbFX-fyp@PZtO|a
> z4NT0htlN1IMo`{+Q04s_;5y5<I5<icLN6UMeB@Qcr9N2+D6o>z;5g(p)_tMxOETM^
> zrOEq?f+CP^gw(22kLm;s!%st>R1w~5DWI08r)T?mbs8<y($tr6<18R0mnj=naqk{l
> z-*f|;GQ~E%RH;%ft5>hS$c_Je!eV9Sd>?UJ;3u;e1<IVoK{y5|$2s0Wq5r(ka18G<
> zcM-5yT<3W!*BJIU@d_h9uIjjQ%RbqB;H2~$q(S$t_UP`Y=jt?Q)a2MScTbr<XQ9mS
> zK>G8L{(Ntl>6uqP_l5rA?iHZqvwVG?)J3Dai0AS{eEnX%dO2{#<e@`{ZqemEC^}II
> zTR3mVpS92r@JE`yO8J?Wwf*!bUw>VKT|XtI@gzr=CthGS-e>;r1>S`#%IW6mD-E0c
> z#M!CtV4%YKB$=~jC8#I!eM4myo<sFa?*s2MUq?5OEW~w#KFJ>7)AOlB9sF?k@ZnCi
> zYSq#lqOS8?M_W%B7UIrn1n0tk`Q;aB)~p#lSe|W8xg^u(ERv}+=ZT~H0_EIkwwD5)
> zzx|&_`<e5_&SkFHx_HPG=h<R2eU8`?xaR1-uppc&eqpH6BAoHUHy%1Gi^F1b8a8RB
> z@@1qh7uKR}hnLn)Zb;u9>CYD18BE_p0rR$V1(f_2NND^zd7-f8uf+e6Nn6BazHdRk
> z3`;o70^B^s-fg~2MLJVZ_Q}Y{*lNnlij}{MLH^vdwMB_<zM1AQ%QM#=@8<0?{0_Kh
> zH^T$rJaWG%TGSJIdQ&h^VSSQ@WUDqEx@>Us3KAQHC*GMB!Sr~X?jc%wV>s(&i8MIp
> z{pskFDneaUc>MTrYdbr;kzG3V-mjx=Aof!olzk@IjdQxOXRlsz<>5nF9JN}k>}LUP
> zGQn}SSWlZP6P)JAL}Hpp9`gSy_YlT%s+*YGx{A4ti<nHBCdT8ZN~312o^|fg@0GcY
> ztBiMWD~O|tR~Q9p;>h&u+!o84gO|j@&h>fevgPRjVF1jmU%!4%UA4wyU@=9;T04sA
> zWM>&?GadQPz%d_FW}qxCO8HfJsnS-Rk9Tla?pxZ;MA>IDAH;PLvnkWX#M((rCOV0}
> zxlL}Hj$PBf_@XGMI}Opzvg>YNuVK?TzMpXjtDtb+Z@~;@F`eWr!*s_yMO@Cm*8mFZ
> zlXRBp(`(T1OP2OCmHhua&FNx>=SJB$RmoW|cK@Fqfho%Zjgyj+hBs^0%3W7`Ozvn{
> z;XXb-Vr69|4({%<>w2b4a`O>`@s46J!AXoK%@E_sGsI}pbmcf^vWs$5uIJw~oH$MN
> z5q`9ly^J!q6J0YK89LfLr^=5tFCyEd;$MBWXYeS~tkG5uO4zs4D-6{=p7UrubFqZ3
> zKPm%98q<oNYi@3Ng0oLGYu1#KrAkYiF1=)!frX41V=W_1CX0^g6wxsQ#@UGOI9t)P
> zn5sO7k+pLHl=7l1dMqo#X(P@^lSxQpqG%e8ml4Jjr1ub=%=%4Q+$;8F@eRm#HZTko
> z&n}YVr5)P-+<UJ!;%Ffb<NYxvDCxKPxfiESM=%YbK%d09;wU^o%kaU7vDSYv&51G!
> z&sXKgS}PEj`sAm(qcNEB$IF*5_Z>HG+$31xm-_S`C@osH5zm;l;=S^)j2b^fG{@S=
> z2(zi8ZSEl2;~Zq<IQzVFb4TS^@E`ZIOzo8L!^cb(4a12tM1Q<=9XRq$snTWXu)wK8
> zx`;V^|6}bF!;B}1<`^)5Szi3#rdt@l-s5rNtN@wrzfyi`_sdPrJZWodi@m+QvYSVn
> zHf`j~;$KOPdX43$wq2!5f32K;BgW<q(j6xQfkC?FGFZ=2hKwFBLks{M)kj$Z7Pyxe
> zroXnCbm}woO`}$w9{*6Y?zQ4ym)MOmQ!=^$x~TB>0Ig+CchZ<>UZ+uup!!YQoUYTT
> z`K{Uwo2C5`5aw=!W^K<^`0fX9#N_~Yfj&vXe=Zp7-l$c_O?4Z$yofk=-%jIg)NRz_
> zR>Kw@jw7BK^li=zeUfW*6arIL;+~MuB_~w*{)gn6YOQiZc3cvB?|2z*YAYIGn&BpP
> zq6KF76<|IbHNi{-|7)5~m0@7Up~hfFFq^vm1R11nB`rJmc~Z1!u>@o_4qY89vpyJ;
> z^XYEQI`()l)MygYw=GC3FTM9e6ODuUIf5z8oEJ%@s?}xW$dMAYBuZJu$Tc}_+qRS1
> zHS0;SqF;0Rj?-=XzA05YrF`Wo=YFi)=vFPYW~uo|%SURB`c3ZO{sk}!!_$+U`l1)W
> zu}C`0_C?uS0i0RlrdV`B;dEzlJP+48@l!7RFaXsG?!r+RGvC5AfU{KHVUSSwE6^uN
> z_|K_ZuE*zUPYyY;F}$6|+u*n<8$5@)*maZ9IpoKoE_!-#*o&hUAHSZ-LZUKTjvhVQ
> zX_&G3`Gix?#d204m<)^v1{g_6W$6S=%LmU@$h*cc3`^!34#v~~W2!?6)oap<Q-nc)
> z4)WpPFt=;rIv=jq8-V&;+GoVrOy)HLjP`4qh4WBBm49A-de*KoKW?wm7zPg=rc94=
> zeAURvNQP^S5cPqYS)Dp{-mIaaF}G#QmS*35_Z=CA1LUkcGEU(NWu<em6&b(;l12+<
> z>;rTJXoS=PI0Z<_NGjaba>k@O(jQo@S~aUGRjSwnj)h=fwrp7r%Ig7r!N8P^Edb88
> zd=Kz_e35eMWDy0YPbq1FxJZw>W@Dfwzy|S0_z9q!8W*y$>sAX33yVofNx6FT=+SFS
> zVq%UpP*dxSM)39N^XH44ICEOb@5qT`MGhZJDRTHkN|B={Pbk-U|J>z^Mb2Njtbpfd
> zE}SoN^4uAvd<Ag;p7R>-F&>Y|pYMkajg5mtcLc{wf4BPccZ34c%BxzdPMxuSF)>%R
> zTz(}J-Mqzc5_CD6nKH(9mU1+j;;J0q2Co}J*E4`#rVqVL4|=37bT$n`tL*Qq)!+aw
> z$2SL|Ae@O&_U&Pz0IqH3dgd`zs@FQAHy#GWMCf+ZA>WZtVW`T&vQ2XLm#7`*(Cv;D
> zOX$E*LBz;#tQZ?v$iM-^PFq`ByDnO^sE?_sDJzW5uG9f}PNpbaq5R(rpvxG`%1yY)
> zgk%hs^}QPs;5w&1JD}n(K7aoFH8<mW8xJ2oe1ZEn%+1Z&K=~n5I+l<bNnhMbO)Wy)
> zPD?9tJ3U>gRDgN15*dzbx&I6dM`x8{s8=q57_xHh+WRNYof9yN>^X2C1NR0);+IK#
> z^r*-k<fG*G7I>ZaAH4-$yL<osJL9m-yq}&Q4*9U5)|oM5hU>zG3zvI)d&j!FyK|Sj
> zf(3TJBiulP{TD5~1|2(gv^#U=Oxor%sp2><Ky2K+WU`B=SUcxgzZ0gxLghKYYbq$$
> zc;CrG#yiax3s~CYU}+B@ZThBknQ}Xjwml?HSLDU+Ns$Ih<{+K}WpJMZ@~*%18Fb&u
> z(H-gKS;KGV^}9R~j{m2itiJIZW%c1Jf;6OkyAEP#XeI{wmd{L0&B9l$S~cwD%a=c7
> zWMq)J*dW;DlF8mx(Y#J3ATzLhf83(;2@FvFn{U1uee~F|XK5J^l#RF^a?>r~!IQ_5
> zn(<KXrl&zm%A@%B_+yCHjuH!hsXo0InVFdiLqNTH=Y17;U&TBuH|V<#9Xd?8$#qJ%
> zZz&rz%$qllyUh*6{j!fAKQ8j-&6{_oMHEi2Fwp4U8J@_=$tm*m=~E?t)N7dv6)Frz
> zV^qFp!_CIR!^5{Be@gw|>$>?boBBpu6BCnARM@NS$FGWqUxaW>%*E4BrqA<{X&#Hk
> z*~43$=Xxvu-v!=d7>+mE&+?T1L$#h4EA}Ow3#Q<ub3mBV=A9xN*I4;<>gbP?CF3SL
> zX8u`zDk$NA8S{K)j&HaexR5Gq_nZ{9#y?5FzQeP%N9yi`O5$++`gQKP{=0qF|6PRg
> zv!T_fSFheIyvM9JS((ZzOEmJ_Lx&FK9653%hhI<QO3xRs@)oqEr>AE%Y0|_F@i_R;
> zvQc+z#B>18F0rel{>MdmR2NhYghGTXNV&@gzv@fpYlf=~CRFxDBID$jsalO1HOxni
> z8nu7u(4iN#w6sn)Yu3z%@o=9#78#e!!Tgv8m%K1N4sU%|I6Z>QhN80pQp#}_jJ+HM
> z3RfFYU*-4Rxh#e~FI9)OFgOFisLEPpeqyG3_wK5%cW|`;pCym-3(Ps}K2}rgcgL;U
> zni~)mC*Gk;Wl>P1EbtGN`M$w2&nHOc;mGqp0>gML4wPA5zS6d17cMd2a0y-Tbtnb#
> zaD36s?{~z92H4&30gda_Z;&!)ae&PA4SHu@Z$ni^mf1TrT0&x1icfg7RH#_#KDzPH
> zy?gie!u$9U_{UcBq3o=Lmiqeo>-h4yZX69E7fg4%Ql&~qDp#(2pjNF~7kqqtUh*E7
> zzCsG|V^&Fs*LBR8G2_>*TXz5w{V4QdR+{d#Y12ZFA3uH(|Ci%`9lS$Ua*fiZOZRtj
> za*Eixb?aGfh5*LDla!QnOixdbIx*KsmCk>sEK@z`((ZwQfz125gM$NoQVqd0RAI;v
> zbVY2Vt*z~nkdSbMJ8+yofBxQ<En7}Q#h8bCG>#PJef|2iQdjuTay0JRwW~#NaPU?%
> zx@(s&UrvY4dSmI*rMrIm=_eXloUU!@>+8F2|Ni|2-2<LIH!d#jHu5@+uADA@+^Z>%
> zP^jXKnl^1UXQB78$hZ{}xP*?ak>aiR2L{OkKl&KvVWIz@!z&<I+~&>~=%j+~g3Bl}
> z&CJX~@glh_f*<wd+80{dd`PxPG9X(Dr38=l5SA@Y7YNZ9F*4&_`Me8-%ego(L;@nB
> zWXbYGnYY-79|myRxpQYDypMm>AtAql`#^8Kz?aYEz5Dj<%fx$LhU;AaLwB}$sK|3%
> z6VHtfC`k<r4Csny6c7+lu>U3)Edv#%Y~=wxlnya$;Cy$TEG#ViE?&Htt^@=X-YkI|
> z7l28qE^&s7j6@x`y=sRJ=6L}6B(9BsK1l<!8XAmNVq)TP+(S~iiX>B|$qV!6&*$hv
> zD+m^<G?b1N%atqFZ^42E8|gQjAMQ8hK#%am(9n?E&X_@6DA@Hx6^75x=R!Bm2Esbe
> zBV<5}s6nO5l}%f=ZnL~`lcsyr8a6s!Urp@{eFO8*;9}kS^{>{cSMM&E?QyNzbzaq|
> zRVx>ew}Eidu3bB2NQR$G=4X=Wi;0SPk(`{&Zk!u0DIccK2c0S2#t<+jEmNoB#fz^%
> z{+IqJ&-VdwGEx4tZasQ#)*C%qFDfdE7V}@y3qO=l$&|vQQR&Lgu-xzq@8CYJ*&^@O
> zxZf1#hLC2;=3U$X0Tq^`rl!WF4-R|w?77eDB(58xH035;+{O^yUj)J!LF(+{ebx4c
> z+&qCgBAHvYT@EYW%F2pESPhXcrQpyF8#X9AbWnAG6;976EucZurcK8{!rdp6a5IK0
> zSFZ5;NZCxAd61wx=5*=Og`0b+RE$B8eAn{t$(=iQlyY!)5`H>{arp&0dwcu!RjO2(
> z&yU8kY%CYkWEhs29|eOXW@DqOkVhymsUnzw4g{2B#sG&(3i}mWRjXEQT8kDf=JxH|
> zH?CK&UdI|YYI3`GuR+;^25QKlL4(D}$XFa59hJ2_bPC?FV}~3(cu-caUafq-jz@mp
> zWc&8*a^%R-H@J5d*H@#^9G_#pDkfF;_xJZh7q)NBnl<T=968*e3X(+ePNFjiEoujQ
> z2W1-;y3_G<DRe&W-o1x(@77Ct^yn{5o3?z4#&M!&&z`~k`}a41w5CN()!-jv%KWse
> zHE7Vl?Z&NJS(FE?1S2CO8WbFQqLk~^xN&1IRA>gm$kprD`A(?j%tPgQU$}fZD-{*`
> z01TFnisOzS=$_xKSh0fskPFSmjoV5_xPAAo!T`{dU!Xp2v~Jz{s*R1!<LjtoZkv4{
> zOt5R$uKQ@z9M`N@v0}w3r_Y{!{qWHvg?Xn;nZl6}e)U%)A~NzU)4Y?K%6fYZ=DSp;
> zOqs)|>$|t^+{w*Aqj(O9k9WNn;eSTM>gg8{ph$3*0}?wIb@8Ze+qPGJ{`uz{kkT)>
> zZ8uaH8gO>it5<i$`@e+e&3TLlaTob`dV0Rn)YQBKwI!u)-MSo-qYj-%c#*U~X?b#5
> zi(_E*0hLeTZ!mW3*lsZ6wa`*srca-~Y4~u>%T^Xs-#9zDiSsm9Wlawm@AT=@%F9ek
> zOH*3;rAwEUk3aFdJ76^BJKpQpuU9^0$ImMh{1_;~uiWy;dt_vKC(-+z>xAfN%&<(0
> zpI+X!ZJPuI1<O>ssWRTm8kj6r7Ph%)DEH7^ZvwMULqiz=iQOFKXE#RO>K~*0^qUMA
> zI53f$C0xKeAX7#}MBGDnej+F+=n$U$0=lfst$2oHyoYso);|6E9Y#C{95?Mda)e*0
> zke&PX$-2#(bARd9?KaX)#{C7QN|o|WTD$humYq9=m}Fx^M+@qr+&p0pkZ5af|717f
> z?}0$redthD{RRy<*4h>6MDIO(SoWu+$N`{DyLMbZWH4mdu;pYfUfZ~Rdls1S1g=xU
> z1_GO0-QAxaI&ne{9X~Gf7B0MoYsQ^AclNywX50+Hvu)3w+-^O3(AAC~Rd5G9P~Na6
> zO`B5xr8^*dKHb`N>TG1*e7?Ph4oTBy&A5dOXRqiF+X;+L1NT=#B2<QH(b4)P(b9S+
> zz;JaS-aA2=wSYuSw6wH5Gu~?IGvCD#;^Q5Ht|~+l6IaTmi<cAyUa>OKg;8XaL9boA
> zu6#O=l9UXrVo-hvl#IylE)pt9O9p4Rz^;HyPO(o3!TY|xe&XilE^e-K08g>Ab<5Nq
> zIqEW)Fg`judJN*Xg7VL)zrQu4^Pz<FsS0%&I$Uet=AC<GdGZ?BxO0zOyZb=s0(a)h
> z4M|wDUZSA8hb)Pf!0;ux>KemXp{{_1p^53a9S06ed{W-h;QoWuInh81nFfO)(%pOF
> z{1u5!TqWU4<FYD$_uV>NAA@|m0NnF(QqMkpFRom-QC6&8CvhuRW&c>C26eiT+Ipi?
> z4jezFTt~i{rM~@^3}NLm+vDz`6K7=4(c^OA!iCJx(9qOn%a+}MFt`R|<<5cQr(eh8
> zIpUMn%H*jIHxS2loaMv|2Tq)pWr@k+;pO!L*Mia0a&m!|d<W>A+z_HC^k@$7)6h{L
> zrZqC<$mt7`xOSr~iCvyuu3Y&gxIP&7Dex!|OW^;96W(tP=m#yJ$ZA2F`MJ2b92sM5
> zmNwJHD=R2qse}YAk?^o3vSssDxpDo5qU@6?sUK0vrtU%sLQv6VcS4<mM>3r%Jddh#
> z>MhiFjzR)OL`2F0&xJC7-Xigw?<;d>`{$aOT0b*08+Qg0DHzgLGc`4}3Cc+&{y#=|
> zv7GZBq(9c|O7dnjn$SpD96~2!7>z-pvM3-FhFA~`FMqL|Z2OXHc@UlqNwvjf!j!4^
> zLSvT;R~}BZwO1+|P)E8P^3k<%bWTm)x>LAZ*W7wCrKAfa3<vo+E5>Kl9jT{(jmwaI
> zLL<b*!}B?q#R1nXr#LvLEd?_NM#YNNBwNOH`l4u2huK~}&$&`GXz4NuiCM1P=Sogx
> zHD@?SV8WC#^Voa9K$drr`$F&MT;CKPmmnG=w8^Me$e%qd$9`$?a|WrR>KylnTTgYo
> z8<V_Nf{~A{({y^)+o0@>!(Lhu8o2iVN2qXqKd~{SS1(w#PUxC8TU%RSH5qI1c)|RT
> zoUo8(5*iHaH!@lh(drK#JOriorov!k%s(PBEv-Lp$45rx=j_(6-yk6&q5K-IEb{Ub
> z??re~USVQA!SNOJ+1t=}wn1{c9y@ldKcur7lKE|S@E;(&WLowHGv_aQ#Fd1;s8oAb
> z_iP=5F)v0LnLO3eH-6lIutsV<wT9HS!h!LSDqRqd#*ew{yoHZLVpf15Vx{lE!Stx-
> z8f^~Qa0R2D^Xz#_x``XN%VfvtG@d3Reho+@O6b}pOO<k5wQbKUzldn@$9o^5Ir0M2
> z{klYn>GOSqUPUfX2IH*|^<kRy%(6iEnG5_w-ULT4m&oNwVq!J%*+@NuG;IUp+bHkt
> zk$Q%A)HO6yhiU7kqwLqdt5VgkM2QmKi-N;5BjS@}DHwCaD1FtcOWGx>X+$;<G9;I%
> zR6zO0dMx&P9u}7<kt>q9CW>)%aIZ4T|Nqa7ZxbrN9;8`2FwO|*iwhPnUcAS`!s5nw
> zt0^zMy&`fWLYGTK*fLof6)(|C`I+S;=!ARa*s<es?dml}_hffW>y4Hex8CEYnqh^?
> z+BNGSaU=QhQ3-&Q3Gk1QK>sKS^otgUsqSw^k2bhBb?VgpFowL){p%o|wkS^>;2#?t
> ze;?(y^y<}X)SMl$JSm&2I74DrNT*-A(VcD;zz+>@JdR_C<4_meWPr079J8(WeU++i
> zp)o76!sAy$0<DxzKX=`O>vZ^_qX!4HN7*{NJfJi?b0bZjym*nLuCC5ir5w%ZQm0NG
> z8yHNtqgJez;AoCBM`o4!wlpQ}i0U<K`G6U6maST^FjD(YU6hSji+}aiG*7>v*UWSG
> z@w3vi-++6#PD_|00nuQxHKwMfbVi8<_-#v_ufP7<ZJtlSYc89<55xSx$urOJEc6>-
> zm)(P7q)@-Mlq^}2Lw|g>>XpCy-Wd%ob0r$n@yoXa<z}OtsHUdYwN|ZK%K9q&|MwD@
> zfCJKvAk{iUQt890_FuSg;Snn<t29%SaWCxcre*mo3dxO*N|vQd5+!D70vL2T7&T5|
> z)R<^8EEzXeqL##>dyWP(E(M|`EF=a@9S>%Vkr~tHW}BOhe`RE7lnx_?pK1<(@nir-
> z?g-U@U&iIk$A5+JVmY_lvUArS>-;0)WUhCR%=Zq;tx>Z!H>Dbj+dTm~b0~Tiz{=oM
> zCdUw)x9|LOvUl(jna6lO!MQbS*QHyWYQ%uUO#M6m(*5XcFMpZm6D%C`gN~jF<L?|S
> z^V3jQo^!%uu6Ll!oVzHyO@~hGYR3Wto40I}5EQjSn7&s)L~fmW4OHIo=Hn*VJn~o+
> zAiGj7P*uwD_xHc)<m7b1%ggH`1cs8Pp^3%mFN+t~f@I@Z;@IIMN8RyQ8~~Lh5~UFC
> z<Czf|8F>PR=VgwOhD0ZbowM873Kc7Hw4rhBy7hhL`G(~BghmVJur6J?^Z`8v!ucXR
> zF8Pk@yv1j}EnSAsND2GT5b~=5G=c8-%Yg$2j8?8(xiBOoWFsv03lk<xxNmGc=J`}x
> zhio^O**WNbbC*UYNL+NX#Kk6qQS(5?jm6Q|I~=;ASMD@Nmu!o1Rxgc>jqigI&(EDZ
> zcOxwEg<wYGbLY<e0-cn)Bi+RQtArOzMZNc6P2I~jGiJ*q$C+X>-tINJF3x7rb&Se9
> zbuKa@r&(2wC}heW1BQ+`&v=ua+{9#p?JLA_MqK(pQs1K68i(xMwDgRhO>}S(+v&5#
> z(cMd?&&!)$wRfE-Q=DgsrHxZ|{f13a5Y`Toj#GZ>y#@?94pFHrL$R`Re)IL$CESrd
> zcY_;Pvv$3W7B)_~)($h3r4tUbJjHR=0%f_#B$Q>6(=5>%J?32T;$P82O8KI3`3jX{
> z%qBVHaJdPWT;xsf`iS$K#WEFn*dxA?x%IWuWy(@lZeF!o^+jBk!t^FP&lX!(4<!$6
> zLlauyT-*N7FJ=Ri@LSzY(em2EqU(niWCH0qFFZUvY5x5A$DEy=Z$kfkXkudW%+S#A
> zrGbILYfB5uH(<Qi`uh50z-Q*><_~OaY;HhTJhouLf~2spFpq?U1QUL|{`T$L9UvLW
> zjK5Az{g(+ZmQ!wh`VJd)edwsMZ-$IA$?VcwokJ*kxJOTzcj`|071FcM5S=TGH&o9I
> z@dnc(r{gf)*i@q;oa0t6Rl00+o6bER4AC8%r8{nlj5fEE(UwzXu=beO?Ys85U#46I
> zcE22op*7wN7^i7um2Ny~x)_XiklsUeXykDHcU>?h=d~@X)vUX}TYrt`T85Un+GDLn
> z*L1RsFdm=Xd+^A|4V$&$Yzy7&=t)%$(o&fM{=Qm`9l!J)@(gL`j><2Gj@gu)K0|e$
> zG-=hIT|RwITO*H##fp7t*12c@JHzzNb4HGxDB30ybMbzj)UMw+3}M<J-T&+gOE3;~
> zye44KwpXrP>A7pyu9553t+ND!%|O?)AU;0cFE%z7{2IO#T|h*1bhLj=OpIq-T-<cr
> zw_3Y)t<I4nM|z{HYK?R_O-P2P;{Q=m{)=q}&3-&2lphd)|C4~WxJL>3&fzFrrxa{f
> z?91X-h~o=z+duknva(cj*914CrIQ0c88{o^lgd@9a;S3E>c`8JuSETmdLNf$=z$U1
> z7cE+>ZS#&j9CXIdI5;gZDR-#RWEld~ZQ7ci=p2fnV+^-zr1J!wivvo2TV`9uDnA~r
> zROQD*B}#tF;U=!`<e)$I>tq8_R0*U{W;X(SN|gF`3&J0(_<gmMZ@w+VRhZOc`P|(Q
> zmKz_it~vuwNOM)$3Y8C_VKLkSKoe=vkW+R!`L7U|sV2|>XbR9<hlWXeFl@)WckliT
> zowQSCW@dX_YXh_ZC=I!{dUb%3=%XdPSdQbei>3oGE0N$iuf2B}0`KLAp~JH(9?z9{
> z=F=AzF?050ICl5+5aI+!J1`TyYseU!OCOD{o-SzZ0L7P}e5LOVM@^XadeWRA(Kfdi
> zP17kdP}l6u*WZ-n_$1W{su9ea!#8SRc=~km0|L}T#{irc?}2-K1~Me)yeYLw^j5Er
> z`;^9<s$_V^8;NVwohemS{-Hz_TqpCh%$z>f#D6{~9gr0(9oIfS0@GCm{-fiH|4zb-
> zWra~!q*UY>oobwp760C058eab8_y#srie#CbP;#IC56M%BIovilrnTc=5h=&4%47+
> zTcbgvpXN+)oiF2^+{I$5ix^qiX4bCXkZYoJ!4RBKASoV102zn*@;VuXpi?uik$D;B
> zU$gzASO&&X%>t648BP_4@!6Odhs3a|GLw;6W;QDN(=u(<809}YsqvZqCau`8waD^y
> zn~N-4y|GA4^7<mN$s3Bqt=WWYTZ$xX++HMc^UflPo3`hN+fpQM^(GZ#u(Da91exTE
> z*i{>Nk5Z?4!zMr3>KU6{8m>Jmwa)<cTUD!7<FukBx=TtiMJ-sdev6WKemoVk6;AVC
> z%#Zmpf0joThvnt{{BZA%ql877@jSc^u*^zX`Jd^0rvC%PN(XVAmU<i=Yq-{k4i6)6
> mojw4Z{rN|I06vV06#0L0BQaiPgq42)0000<MNUMnLSTY`z}`*(
> 
> literal 0
> HcmV?d00001
> 
> diff --git a/docs/xen.doxyfile.in b/docs/xen.doxyfile.in
> new file mode 100644
> index 0000000000..00969d9b78
> --- /dev/null
> +++ b/docs/xen.doxyfile.in
> @@ -0,0 +1,2316 @@
> +# Doxyfile 1.8.13
> +
> +# This file describes the settings to be used by the documentation system
> +# doxygen (www.doxygen.org) for a project.
> +#
> +# All text after a double hash (##) is considered a comment and is placed in
> +# front of the TAG it is preceding.
> +#
> +# All text after a single hash (#) is considered a comment and will be ignored.
> +# The format is:
> +# TAG = value [value, ...]
> +# For lists, items can also be appended using:
> +# TAG += value [value, ...]
> +# Values that contain spaces should be placed between quotes (\" \").
> +#
> +# This file is based on doc/zephyr.doxyfile.in from Zephyr 2.3
> +
> +#---------------------------------------------------------------------------
> +# Project related configuration options
> +#---------------------------------------------------------------------------
> +
> +# This tag specifies the encoding used for all characters in the config file
> +# that follow. The default is UTF-8 which is also the encoding used for all text
> +# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
> +# built into libc) for the transcoding. See
> +# https://www.gnu.org/software/libiconv/ for the list of possible encodings.
> +# The default value is: UTF-8.
> +
> +DOXYFILE_ENCODING      = UTF-8
> +
> +# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
> +# double-quotes, unless you are using Doxywizard) that should identify the
> +# project for which the documentation is generated. This name is used in the
> +# title of most generated pages and in a few other places.
> +# The default value is: My Project.
> +
> +PROJECT_NAME           = "Xen Project"
> +
> +# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
> +# could be handy for archiving the generated documentation or if some version
> +# control system is used.
> +
> +PROJECT_NUMBER         =
> +
> +# Using the PROJECT_BRIEF tag one can provide an optional one line description
> +# for a project that appears at the top of each page and should give the viewer a
> +# quick idea about the purpose of the project. Keep the description short.
> +
> +PROJECT_BRIEF          = "An Open Source Type 1 Hypervisor"
> +
> +# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
> +# in the documentation. The maximum height of the logo should not exceed 55
> +# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
> +# the logo to the output directory.
> +
> +PROJECT_LOGO           = "xen-doxygen/xen_project_logo_165x67.png"
> +
> +# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
> +# into which the generated documentation will be written. If a relative path is
> +# entered, it will be relative to the location where doxygen was started. If
> +# left blank the current directory will be used.
> +
> +OUTPUT_DIRECTORY       = @DOXY_OUT@
> +
> +# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
> +# directories (in 2 levels) under the output directory of each output format and
> +# will distribute the generated files over these directories. Enabling this
> +# option can be useful when feeding doxygen a huge amount of source files, where
> +# putting all generated files in the same directory would otherwise cause
> +# performance problems for the file system.
> +# The default value is: NO.
> +
> +CREATE_SUBDIRS         = NO
> +
> +# The OUTPUT_LANGUAGE tag is used to specify the language in which all
> +# documentation generated by doxygen is written. Doxygen will use this
> +# information to generate all constant output in the proper language.
> +# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
> +# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
> +# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
> +# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
> +# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
> +# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
> +# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
> +# Ukrainian and Vietnamese.
> +# The default value is: English.
> +
> +OUTPUT_LANGUAGE        = English
> +
> +# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
> +# descriptions after the members that are listed in the file and class
> +# documentation (similar to Javadoc). Set to NO to disable this.
> +# The default value is: YES.
> +
> +BRIEF_MEMBER_DESC      = YES
> +
> +# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
> +# description of a member or function before the detailed description
> +#
> +# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
> +# brief descriptions will be completely suppressed.
> +# The default value is: YES.
> +
> +REPEAT_BRIEF           = YES
> +
> +# This tag implements a quasi-intelligent brief description abbreviator that is
> +# used to form the text in various listings. Each string in this list, if found
> +# as the leading text of the brief description, will be stripped from the text
> +# and the result, after processing the whole list, is used as the annotated
> +# text. Otherwise, the brief description is used as-is. If left blank, the
> +# following values are used ($name is automatically replaced with the name of
> +# the entity):The $name class, The $name widget, The $name file, is, provides,
> +# specifies, contains, represents, a, an and the.
> +
> +ABBREVIATE_BRIEF       = YES
> +
> +# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
> +# doxygen will generate a detailed section even if there is only a brief
> +# description.
> +# The default value is: NO.
> +
> +ALWAYS_DETAILED_SEC    = YES
> +
> +# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
> +# inherited members of a class in the documentation of that class as if those
> +# members were ordinary class members. Constructors, destructors and assignment
> +# operators of the base classes will not be shown.
> +# The default value is: NO.
> +
> +INLINE_INHERITED_MEMB  = YES
> +
> +# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
> +# before files name in the file list and in the header files. If set to NO the
> +# shortest path that makes the file name unique will be used
> +# The default value is: YES.
> +
> +FULL_PATH_NAMES        = YES
> +
> +# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
> +# Stripping is only done if one of the specified strings matches the left-hand
> +# part of the path. The tag can be used to show relative paths in the file list.
> +# If left blank the directory from which doxygen is run is used as the path to
> +# strip.
> +#
> +# Note that you can specify absolute paths here, but also relative paths, which
> +# will be relative from the directory where doxygen is started.
> +# This tag requires that the tag FULL_PATH_NAMES is set to YES.
> +
> +STRIP_FROM_PATH        = @XEN_BASE@
> +
> +# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
> +# path mentioned in the documentation of a class, which tells the reader which
> +# header file to include in order to use a class. If left blank only the name of
> +# the header file containing the class definition is used. Otherwise one should
> +# specify the list of include paths that are normally passed to the compiler
> +# using the -I flag.
> +
> +STRIP_FROM_INC_PATH    =
> +
> +# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
> +# less readable) file names. This can be useful if your file system doesn't
> +# support long names like on DOS, Mac, or CD-ROM.
> +# The default value is: NO.
> +
> +SHORT_NAMES            = NO
> +
> +# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
> +# first line (until the first dot) of a Javadoc-style comment as the brief
> +# description. If set to NO, the Javadoc-style will behave just like regular Qt-
> +# style comments (thus requiring an explicit @brief command for a brief
> +# description.)
> +# The default value is: NO.
> +
> +JAVADOC_AUTOBRIEF      = NO
> +
> +# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
> +# line (until the first dot) of a Qt-style comment as the brief description. If
> +# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
> +# requiring an explicit \brief command for a brief description.)
> +# The default value is: NO.
> +
> +QT_AUTOBRIEF           = NO
> +
> +# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
> +# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
> +# a brief description. This used to be the default behavior. The new default is
> +# to treat a multi-line C++ comment block as a detailed description. Set this
> +# tag to YES if you prefer the old behavior instead.
> +#
> +# Note that setting this tag to YES also means that Rational Rose comments are
> +# not recognized any more.
> +# The default value is: NO.
> +
> +MULTILINE_CPP_IS_BRIEF = NO
> +
> +# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
> +# documentation from any documented member that it re-implements.
> +# The default value is: YES.
> +
> +INHERIT_DOCS           = YES
> +
> +# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
> +# page for each member. If set to NO, the documentation of a member will be part
> +# of the file/class/namespace that contains it.
> +# The default value is: NO.
> +
> +SEPARATE_MEMBER_PAGES  = YES
> +
> +# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
> +# uses this value to replace tabs by spaces in code fragments.
> +# Minimum value: 1, maximum value: 16, default value: 4.
> +
> +TAB_SIZE               = 8
> +
> +# This tag can be used to specify a number of aliases that act as commands in
> +# the documentation. An alias has the form:
> +# name=value
> +# For example adding
> +# "sideeffect=@par Side Effects:\n"
> +# will allow you to put the command \sideeffect (or @sideeffect) in the
> +# documentation, which will result in a user-defined paragraph with heading
> +# "Side Effects:". You can put \n's in the value part of an alias to insert
> +# newlines.
> +
> +ALIASES                = "rst=\verbatim embed:rst:leading-asterisk" \
> +                         "endrst=\endverbatim" \
> +                         "keepindent=\code" \
> +                         "endkeepindent=\endcode"
> +
> +ALIASES += req{1}="\ref XEN_\1 \"XEN-\1\" "
> +ALIASES += satisfy{1}="\xrefitem satisfy \"Satisfies requirement\" \"Requirement Implementation\" \1"
> +ALIASES += verify{1}="\xrefitem verify \"Verifies requirement\" \"Requirement Verification\" \1"
> +
> +# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
> +# only. Doxygen will then generate output that is more tailored for C. For
> +# instance, some of the names that are used will be different. The list of all
> +# members will be omitted, etc.
> +# The default value is: NO.
> +
> +OPTIMIZE_OUTPUT_FOR_C  = YES
> +
> +# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
> +# Python sources only. Doxygen will then generate output that is more tailored
> +# for that language. For instance, namespaces will be presented as packages,
> +# qualified scopes will look different, etc.
> +# The default value is: NO.
> +
> +OPTIMIZE_OUTPUT_JAVA   = NO
> +
> +# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
> +# sources. Doxygen will then generate output that is tailored for Fortran.
> +# The default value is: NO.
> +
> +OPTIMIZE_FOR_FORTRAN   = NO
> +
> +# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
> +# sources. Doxygen will then generate output that is tailored for VHDL.
> +# The default value is: NO.
> +
> +OPTIMIZE_OUTPUT_VHDL   = NO
> +
> +# Doxygen selects the parser to use depending on the extension of the files it
> +# parses. With this tag you can assign which parser to use for a given
> +# extension. Doxygen has a built-in mapping, but you can override or extend it
> +# using this tag. The format is ext=language, where ext is a file extension, and
> +# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
> +# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
> +# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
> +# Fortran. In the latter case the parser tries to guess whether the code is fixed
> +# or free formatted code, this is the default for Fortran type files), VHDL. For
> +# instance to make doxygen treat .inc files as Fortran files (default is PHP),
> +# and .f files as C (default is Fortran), use: inc=Fortran f=C.
> +#
> +# Note: For files without extension you can use no_extension as a placeholder.
> +#
> +# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
> +# the files are not read by doxygen.
> +
> +EXTENSION_MAPPING      =
> +
> +# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
> +# according to the Markdown format, which allows for more readable
> +# documentation. See http://daringfireball.net/projects/markdown/ for details.
> +# The output of markdown processing is further processed by doxygen, so you can
> +# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
> +# case of backward compatibilities issues.
> +# The default value is: YES.
> +
> +MARKDOWN_SUPPORT       = YES
> +
> +# When enabled doxygen tries to link words that correspond to documented
> +# classes, or namespaces to their corresponding documentation. Such a link can
> +# be prevented in individual cases by putting a % sign in front of the word or
> +# globally by setting AUTOLINK_SUPPORT to NO.
> +# The default value is: YES.
> +
> +AUTOLINK_SUPPORT       = YES
> +
> +# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
> +# to include (a tag file for) the STL sources as input, then you should set this
> +# tag to YES in order to let doxygen match functions declarations and
> +# definitions whose arguments contain STL classes (e.g. func(std::string);
> +# versus func(std::string) {}). This also makes the inheritance and collaboration
> +# diagrams that involve STL classes more complete and accurate.
> +# The default value is: NO.
> +
> +BUILTIN_STL_SUPPORT    = NO
> +
> +# If you use Microsoft's C++/CLI language, you should set this option to YES to
> +# enable parsing support.
> +# The default value is: NO.
> +
> +CPP_CLI_SUPPORT        = YES
> +
> +# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
> +# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
> +# will parse them like normal C++ but will assume all classes use public instead
> +# of private inheritance when no explicit protection keyword is present.
> +# The default value is: NO.
> +
> +SIP_SUPPORT            = NO
> +
> +# For Microsoft's IDL there are propget and propput attributes to indicate
> +# getter and setter methods for a property. Setting this option to YES will make
> +# doxygen to replace the get and set methods by a property in the documentation.
> +# This will only work if the methods are indeed getting or setting a simple
> +# type. If this is not the case, or you want to show the methods anyway, you
> +# should set this option to NO.
> +# The default value is: YES.
> +
> +IDL_PROPERTY_SUPPORT   = YES
> +
> +# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
> +# tag is set to YES then doxygen will reuse the documentation of the first
> +# member in the group (if any) for the other members of the group. By default
> +# all members of a group must be documented explicitly.
> +# The default value is: NO.
> +
> +DISTRIBUTE_GROUP_DOC   = NO
> +
> +# Set the SUBGROUPING tag to YES to allow class member groups of the same type
> +# (for instance a group of public functions) to be put as a subgroup of that
> +# type (e.g. under the Public Functions section). Set it to NO to prevent
> +# subgrouping. Alternatively, this can be done per class using the
> +# \nosubgrouping command.
> +# The default value is: YES.
> +
> +SUBGROUPING            = YES
> +
> +# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
> +# are shown inside the group in which they are included (e.g. using \ingroup)
> +# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
> +# and RTF).
> +#
> +# Note that this feature does not work in combination with
> +# SEPARATE_MEMBER_PAGES.
> +# The default value is: NO.
> +
> +INLINE_GROUPED_CLASSES = NO
> +
> +# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
> +# with only public data fields or simple typedef fields will be shown inline in
> +# the documentation of the scope in which they are defined (i.e. file,
> +# namespace, or group documentation), provided this scope is documented. If set
> +# to NO, structs, classes, and unions are shown on a separate page (for HTML and
> +# Man pages) or section (for LaTeX and RTF).
> +# The default value is: NO.
> +
> +INLINE_SIMPLE_STRUCTS  = YES
> +
> +# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
> +# enum is documented as struct, union, or enum with the name of the typedef. So
> +# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
> +# with name TypeT. When disabled the typedef will appear as a member of a file,
> +# namespace, or class. And the struct will be named TypeS. This can typically be
> +# useful for C code in case the coding convention dictates that all compound
> +# types are typedef'ed and only the typedef is referenced, never the tag name.
> +# The default value is: NO.
> +
> +TYPEDEF_HIDES_STRUCT   = NO
> +
> +# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
> +# cache is used to resolve symbols given their name and scope. Since this can be
> +# an expensive process and often the same symbol appears multiple times in the
> +# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
> +# doxygen will become slower. If the cache is too large, memory is wasted. The
> +# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
> +# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
> +# symbols. At the end of a run doxygen will report the cache usage and suggest
> +# the optimal cache size from a speed point of view.
> +# Minimum value: 0, maximum value: 9, default value: 0.
> +
> +LOOKUP_CACHE_SIZE      = 9
> +
> +#---------------------------------------------------------------------------
> +# Build related configuration options
> +#---------------------------------------------------------------------------
> +
> +# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
> +# documentation are documented, even if no documentation was available. Private
> +# class members and static file members will be hidden unless the
> +# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
> +# Note: This will also disable the warnings about undocumented members that are
> +# normally produced when WARNINGS is set to YES.
> +# The default value is: NO.
> +
> +EXTRACT_ALL            = YES
> +
> +# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
> +# be included in the documentation.
> +# The default value is: NO.
> +
> +EXTRACT_PRIVATE        = NO
> +
> +# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
> +# scope will be included in the documentation.
> +# The default value is: NO.
> +
> +EXTRACT_PACKAGE        = YES
> +
> +# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
> +# included in the documentation.
> +# The default value is: NO.
> +
> +EXTRACT_STATIC         = YES
> +
> +# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
> +# locally in source files will be included in the documentation. If set to NO,
> +# only classes defined in header files are included. Does not have any effect
> +# for Java sources.
> +# The default value is: YES.
> +
> +EXTRACT_LOCAL_CLASSES  = YES
> +
> +# This flag is only useful for Objective-C code. If set to YES, local methods,
> +# which are defined in the implementation section but not in the interface are
> +# included in the documentation. If set to NO, only methods in the interface are
> +# included.
> +# The default value is: NO.
> +
> +EXTRACT_LOCAL_METHODS  = YES
> +
> +# If this flag is set to YES, the members of anonymous namespaces will be
> +# extracted and appear in the documentation as a namespace called
> +# 'anonymous_namespace{file}', where file will be replaced with the base name of
> +# the file that contains the anonymous namespace. By default anonymous namespace
> +# are hidden.
> +# The default value is: NO.
> +
> +EXTRACT_ANON_NSPACES   = NO
> +
> +# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
> +# undocumented members inside documented classes or files. If set to NO these
> +# members will be included in the various overviews, but no documentation
> +# section is generated. This option has no effect if EXTRACT_ALL is enabled.
> +# The default value is: NO.
> +
> +HIDE_UNDOC_MEMBERS     = NO
> +
> +# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
> +# undocumented classes that are normally visible in the class hierarchy. If set
> +# to NO, these classes will be included in the various overviews. This option
> +# has no effect if EXTRACT_ALL is enabled.
> +# The default value is: NO.
> +
> +HIDE_UNDOC_CLASSES     = NO
> +
> +# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
> +# (class|struct|union) declarations. If set to NO, these declarations will be
> +# included in the documentation.
> +# The default value is: NO.
> +
> +HIDE_FRIEND_COMPOUNDS  = NO
> +
> +# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
> +# documentation blocks found inside the body of a function. If set to NO, these
> +# blocks will be appended to the function's detailed documentation block.
> +# The default value is: NO.
> +
> +HIDE_IN_BODY_DOCS      = NO
> +
> +# The INTERNAL_DOCS tag determines if documentation that is typed after a
> +# \internal command is included. If the tag is set to NO then the documentation
> +# will be excluded. Set it to YES to include the internal documentation.
> +# The default value is: NO.
> +
> +INTERNAL_DOCS          = NO
> +
> +# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
> +# names in lower-case letters. If set to YES, upper-case letters are also
> +# allowed. This is useful if you have classes or files whose names only differ
> +# in case and if your file system supports case sensitive file names. Windows
> +# and Mac users are advised to set this option to NO.
> +# The default value is: system dependent.
> +
> +CASE_SENSE_NAMES       = YES
> +
> +# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
> +# their full class and namespace scopes in the documentation. If set to YES, the
> +# scope will be hidden.
> +# The default value is: NO.
> +
> +HIDE_SCOPE_NAMES       = NO
> +
> +# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
> +# the files that are included by a file in the documentation of that file.
> +# The default value is: YES.
> +
> +SHOW_INCLUDE_FILES     = YES
> +
> +# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
> +# grouped member an include statement to the documentation, telling the reader
> +# which file to include in order to use the member.
> +# The default value is: NO.
> +
> +SHOW_GROUPED_MEMB_INC  = YES
> +
> +# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
> +# files with double quotes in the documentation rather than with sharp brackets.
> +# The default value is: NO.
> +
> +FORCE_LOCAL_INCLUDES   = NO
> +
> +# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
> +# documentation for inline members.
> +# The default value is: YES.
> +
> +INLINE_INFO            = YES
> +
> +# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
> +# (detailed) documentation of file and class members alphabetically by member
> +# name. If set to NO, the members will appear in declaration order.
> +# The default value is: YES.
> +
> +SORT_MEMBER_DOCS       = YES
> +
> +# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
> +# descriptions of file, namespace and class members alphabetically by member
> +# name. If set to NO, the members will appear in declaration order. Note that
> +# this will also influence the order of the classes in the class list.
> +# The default value is: NO.
> +
> +SORT_BRIEF_DOCS        = NO
> +
> +# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
> +# (brief and detailed) documentation of class members so that constructors and
> +# destructors are listed first. If set to NO the constructors will appear in the
> +# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
> +# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
> +# member documentation.
> +# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
> +# detailed member documentation.
> +# The default value is: NO.
> +
> +SORT_MEMBERS_CTORS_1ST = NO
> +
> +# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
> +# of group names into alphabetical order. If set to NO the group names will
> +# appear in their defined order.
> +# The default value is: NO.
> +
> +SORT_GROUP_NAMES       = YES
> +
> +# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
> +# fully-qualified names, including namespaces. If set to NO, the class list will
> +# be sorted only by class name, not including the namespace part.
> +# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
> +# Note: This option applies only to the class list, not to the alphabetical
> +# list.
> +# The default value is: NO.
> +
> +SORT_BY_SCOPE_NAME     = YES
> +
> +# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
> +# type resolution of all parameters of a function it will reject a match between
> +# the prototype and the implementation of a member function even if there is
> +# only one candidate or it is obvious which candidate to choose by doing a
> +# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
> +# accept a match between prototype and implementation in such cases.
> +# The default value is: NO.
> +
> +STRICT_PROTO_MATCHING  = YES
> +
> +# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
> +# list. This list is created by putting \todo commands in the documentation.
> +# The default value is: YES.
> +
> +GENERATE_TODOLIST      = NO
> +
> +# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
> +# list. This list is created by putting \test commands in the documentation.
> +# The default value is: YES.
> +
> +GENERATE_TESTLIST      = NO
> +
> +# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
> +# list. This list is created by putting \bug commands in the documentation.
> +# The default value is: YES.
> +
> +GENERATE_BUGLIST       = NO
> +
> +# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
> +# the deprecated list. This list is created by putting \deprecated commands in
> +# the documentation.
> +# The default value is: YES.
> +
> +GENERATE_DEPRECATEDLIST= YES
> +
> +# The ENABLED_SECTIONS tag can be used to enable conditional documentation
> +# sections, marked by \if <section_label> ... \endif and \cond <section_label>
> +# ... \endcond blocks.
> +
> +ENABLED_SECTIONS       = YES
> +
> +# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
> +# initial value of a variable or macro / define can have for it to appear in the
> +# documentation. If the initializer consists of more lines than specified here
> +# it will be hidden. Use a value of 0 to hide initializers completely. The
> +# appearance of the value of individual variables and macros / defines can be
> +# controlled using \showinitializer or \hideinitializer command in the
> +# documentation regardless of this setting.
> +# Minimum value: 0, maximum value: 10000, default value: 30.
> +
> +MAX_INITIALIZER_LINES  = 300
> +
> +# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
> +# the bottom of the documentation of classes and structs. If set to YES, the
> +# list will mention the files that were used to generate the documentation.
> +# The default value is: YES.
> +
> +SHOW_USED_FILES        = YES
> +
> +# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
> +# will remove the Files entry from the Quick Index and from the Folder Tree View
> +# (if specified).
> +# The default value is: YES.
> +
> +SHOW_FILES             = YES
> +
> +# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
> +# page. This will remove the Namespaces entry from the Quick Index and from the
> +# Folder Tree View (if specified).
> +# The default value is: YES.
> +
> +SHOW_NAMESPACES        = YES
> +
> +# The FILE_VERSION_FILTER tag can be used to specify a program or script that
> +# doxygen should invoke to get the current version for each file (typically from
> +# the version control system). Doxygen will invoke the program by executing (via
> +# popen()) the command <command> <input-file>, where <command> is the value of the
> +# FILE_VERSION_FILTER tag, and <input-file> is the name of an input file provided
> +# by doxygen. Whatever the program writes to standard output is used as the file
> +# version. For an example see the documentation.
> +
> +FILE_VERSION_FILTER    =
> +
> +# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
> +# by doxygen. The layout file controls the global structure of the generated
> +# output files in an output format independent way. To create the layout file
> +# that represents doxygen's defaults, run doxygen with the -l option. You can
> +# optionally specify a file name after the option, if omitted DoxygenLayout.xml
> +# will be used as the name of the layout file.
> +#
> +# Note that if you run doxygen from a directory containing a file called
> +# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
> +# tag is left empty.
> +
> +LAYOUT_FILE            =
> +
> +# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
> +# the reference definitions. This must be a list of .bib files. The .bib
> +# extension is automatically appended if omitted. This requires the bibtex tool
> +# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
> +# For LaTeX the style of the bibliography can be controlled using
> +# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
> +# search path. See also \cite for info how to create references.
> +
> +CITE_BIB_FILES         =
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to warning and progress messages
> +#---------------------------------------------------------------------------
> +
> +# The QUIET tag can be used to turn on/off the messages that are generated to
> +# standard output by doxygen. If QUIET is set to YES this implies that the
> +# messages are off.
> +# The default value is: NO.
> +
> +QUIET                  = YES
> +
> +# The WARNINGS tag can be used to turn on/off the warning messages that are
> +# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
> +# this implies that the warnings are on.
> +#
> +# Tip: Turn warnings on while writing the documentation.
> +# The default value is: YES.
> +
> +WARNINGS               = YES
> +
> +# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
> +# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
> +# will automatically be disabled.
> +# The default value is: YES.
> +
> +WARN_IF_UNDOCUMENTED   = YES
> +
> +# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
> +# potential errors in the documentation, such as not documenting some parameters
> +# in a documented function, or documenting parameters that don't exist or using
> +# markup commands wrongly.
> +# The default value is: YES.
> +
> +WARN_IF_DOC_ERROR      = YES
> +
> +# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
> +# are documented, but have no documentation for their parameters or return
> +# value. If set to NO, doxygen will only warn about wrong or incomplete
> +# parameter documentation, but not about the absence of documentation.
> +# The default value is: NO.
> +
> +WARN_NO_PARAMDOC       = NO
> +
> +# The WARN_FORMAT tag determines the format of the warning messages that doxygen
> +# can produce. The string should contain the $file, $line, and $text tags, which
> +# will be replaced by the file and line number from which the warning originated
> +# and the warning text. Optionally the format may contain $version, which will
> +# be replaced by the version of the file (if it could be obtained via
> +# FILE_VERSION_FILTER)
> +# The default value is: $file:$line: $text.
> +
> +WARN_FORMAT            = "$file:$line: $text"
> +
> +# The WARN_LOGFILE tag can be used to specify a file to which warning and error
> +# messages should be written. If left blank the output is written to standard
> +# error (stderr).
> +
> +WARN_LOGFILE           =
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the input files
> +#---------------------------------------------------------------------------
> +
> +# The INPUT tag is used to specify the files and/or directories that contain
> +# documented source files. You may enter file names like myfile.cpp or
> +# directories like /usr/src/myproject. Separate the files or directories with
> +# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
> +# Note: If this tag is empty the current directory is searched.
> +
> +INPUT                  = "@XEN_BASE@/docs/xen-doxygen/mainpage.md"
> +
> +# This tag can be used to specify the character encoding of the source files
> +# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
> +# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
> +# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
> +# possible encodings.
> +# The default value is: UTF-8.
> +
> +INPUT_ENCODING         = UTF-8
> +
> +# If the value of the INPUT tag contains directories, you can use the
> +# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
> +# *.h) to filter out the source-files in the directories.
> +#
> +# Note that for custom extensions or not directly supported extensions you also
> +# need to set EXTENSION_MAPPING for the extension otherwise the files are not
> +# read by doxygen.
> +#
> +# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
> +# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
> +# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
> +# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
> +# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf.
> +
> +# This MUST be kept in sync with DOXY_SOURCES in doc/CMakeLists.txt
> +# for incremental (and faster) builds to work correctly.
> +FILE_PATTERNS          = "*.c" \
> +                         "*.h" \
> +                         "*.S" \
> +                         "*.md"
> +
> +# The RECURSIVE tag can be used to specify whether or not subdirectories should
> +# be searched for input files as well.
> +# The default value is: NO.
> +
> +RECURSIVE              = YES
> +
> +# The EXCLUDE tag can be used to specify files and/or directories that should be
> +# excluded from the INPUT source files. This way you can easily exclude a
> +# subdirectory from a directory tree whose root is specified with the INPUT tag.
> +#
> +# Note that relative paths are relative to the directory from which doxygen is
> +# run.
> +
> +EXCLUDE                = @XEN_BASE@/include/nothing.h
> +
> +# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
> +# directories that are symbolic links (a Unix file system feature) are excluded
> +# from the input.
> +# The default value is: NO.
> +
> +EXCLUDE_SYMLINKS       = NO
> +
> +# If the value of the INPUT tag contains directories, you can use the
> +# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
> +# certain files from those directories.
> +#
> +# Note that the wildcards are matched against the file with absolute path, so to
> +# exclude all test directories for example use the pattern */test/*
> +
> +EXCLUDE_PATTERNS       =
> +
> +# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
> +# (namespaces, classes, functions, etc.) that should be excluded from the
> +# output. The symbol name can be a fully qualified name, a word, or if the
> +# wildcard * is used, a substring. Examples: ANamespace, AClass,
> +# AClass::ANamespace, ANamespace::*Test
> +#
> +# Note that the wildcards are matched against the file with absolute path, so to
> +# exclude all test directories use the pattern */test/*
> +
> +# Hide internal names (starting with an underscore, and doxygen-generated names
> +# for nested unnamed unions) that don't generate meaningful sphinx output anyway.
> +EXCLUDE_SYMBOLS        =
> +# _*  *.__unnamed__ z_* Z_*
> +
> +# The EXAMPLE_PATH tag can be used to specify one or more files or directories
> +# that contain example code fragments that are included (see the \include
> +# command).
> +
> +EXAMPLE_PATH           =
> +
> +# If the value of the EXAMPLE_PATH tag contains directories, you can use the
> +# EXAMPLE_PATTERNS tag to specify one or more wildcard pattern (like *.cpp and
> +# *.h) to filter out the source-files in the directories. If left blank all
> +# files are included.
> +
> +EXAMPLE_PATTERNS       =
> +
> +# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
> +# searched for input files to be used with the \include or \dontinclude commands
> +# irrespective of the value of the RECURSIVE tag.
> +# The default value is: NO.
> +
> +EXAMPLE_RECURSIVE      = YES
> +
> +# The IMAGE_PATH tag can be used to specify one or more files or directories
> +# that contain images that are to be included in the documentation (see the
> +# \image command).
> +
> +IMAGE_PATH             =
> +
> +# The INPUT_FILTER tag can be used to specify a program that doxygen should
> +# invoke to filter for each input file. Doxygen will invoke the filter program
> +# by executing (via popen()) the command:
> +#
> +# <filter> <input-file>
> +#
> +# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
> +# name of an input file. Doxygen will then use the output that the filter
> +# program writes to standard output. If FILTER_PATTERNS is specified, this tag
> +# will be ignored.
> +#
> +# Note that the filter must not add or remove lines; it is applied before the
> +# code is scanned, but not when the output code is generated. If lines are added
> +# or removed, the anchors will not be placed correctly.
> +#
> +# Note that for custom extensions or not directly supported extensions you also
> +# need to set EXTENSION_MAPPING for the extension otherwise the files are not
> +# properly processed by doxygen.
> +
> +INPUT_FILTER           =
> +
> +# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
> +# basis. Doxygen will compare the file name with each pattern and apply the
> +# filter if there is a match. The filters are a list of the form: pattern=filter
> +# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
> +# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
> +# patterns match the file name, INPUT_FILTER is applied.
> +#
> +# Note that for custom extensions or not directly supported extensions you also
> +# need to set EXTENSION_MAPPING for the extension otherwise the files are not
> +# properly processed by doxygen.
> +
> +FILTER_PATTERNS     = *.h="\"@XEN_BASE@/docs/xen-doxygen/doxy-preprocessor.py\""
> +
> +# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
> +# INPUT_FILTER) will also be used to filter the input files that are used for
> +# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
> +# The default value is: NO.
> +
> +FILTER_SOURCE_FILES    = NO
> +
> +# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
> +# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
> +# it is also possible to disable source filtering for a specific pattern using
> +# *.ext= (so without naming a filter).
> +# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
> +
> +FILTER_SOURCE_PATTERNS =
> +
> +# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
> +# is part of the input, its contents will be placed on the main page
> +# (index.html). This can be useful if you have a project on for instance GitHub
> +# and want to reuse the introduction page also for the doxygen output.
> +
> +USE_MDFILE_AS_MAINPAGE = "mainpage.md"
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to source browsing
> +#---------------------------------------------------------------------------
> +
> +# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
> +# generated. Documented entities will be cross-referenced with these sources.
> +#
> +# Note: To get rid of all source code in the generated output, make sure that
> +# also VERBATIM_HEADERS is set to NO.
> +# The default value is: NO.
> +
> +SOURCE_BROWSER         = NO
> +
> +# Setting the INLINE_SOURCES tag to YES will include the body of functions,
> +# classes and enums directly into the documentation.
> +# The default value is: NO.
> +
> +INLINE_SOURCES         = NO
> +
> +# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
> +# special comment blocks from generated source code fragments. Normal C, C++ and
> +# Fortran comments will always remain visible.
> +# The default value is: YES.
> +
> +STRIP_CODE_COMMENTS    = YES
> +
> +# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
> +# function all documented functions referencing it will be listed.
> +# The default value is: NO.
> +
> +REFERENCED_BY_RELATION = NO
> +
> +# If the REFERENCES_RELATION tag is set to YES then for each documented function
> +# all documented entities called/used by that function will be listed.
> +# The default value is: NO.
> +
> +REFERENCES_RELATION    = NO
> +
> +# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
> +# to YES then the hyperlinks from functions in REFERENCES_RELATION and
> +# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
> +# link to the documentation.
> +# The default value is: YES.
> +
> +REFERENCES_LINK_SOURCE = YES
> +
> +# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
> +# source code will show a tooltip with additional information such as prototype,
> +# brief description and links to the definition and documentation. Since this
> +# will make the HTML file larger and loading of large files a bit slower, you
> +# can opt to disable this feature.
> +# The default value is: YES.
> +# This tag requires that the tag SOURCE_BROWSER is set to YES.
> +
> +SOURCE_TOOLTIPS        = YES
> +
> +# If the USE_HTAGS tag is set to YES then the references to source code will
> +# point to the HTML generated by the htags(1) tool instead of doxygen built-in
> +# source browser. The htags tool is part of GNU's global source tagging system
> +# (see https://www.gnu.org/software/global/global.html). You will need version
> +# 4.8.6 or higher.
> +#
> +# To use it do the following:
> +# - Install the latest version of global
> +# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
> +# - Make sure the INPUT points to the root of the source tree
> +# - Run doxygen as normal
> +#
> +# Doxygen will invoke htags (and that will in turn invoke gtags), so these
> +# tools must be available from the command line (i.e. in the search path).
> +#
> +# The result: instead of the source browser generated by doxygen, the links to
> +# source code will now point to the output of htags.
> +# The default value is: NO.
> +# This tag requires that the tag SOURCE_BROWSER is set to YES.
> +
> +USE_HTAGS              = NO
> +
> +# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
> +# verbatim copy of the header file for each class for which an include is
> +# specified. Set to NO to disable this.
> +# See also: Section \class.
> +# The default value is: YES.
> +
> +VERBATIM_HEADERS       = YES
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the alphabetical class index
> +#---------------------------------------------------------------------------
> +
> +# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
> +# compounds will be generated. Enable this if the project contains a lot of
> +# classes, structs, unions or interfaces.
> +# The default value is: YES.
> +
> +ALPHABETICAL_INDEX     = YES
> +
> +# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
> +# which the alphabetical index list will be split.
> +# Minimum value: 1, maximum value: 20, default value: 5.
> +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
> +
> +COLS_IN_ALPHA_INDEX    = 2
> +
> +# In case all classes in a project start with a common prefix, all classes will
> +# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
> +# can be used to specify a prefix (or a list of prefixes) that should be ignored
> +# while generating the index headers.
> +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
> +
> +IGNORE_PREFIX          =
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the HTML output
> +#---------------------------------------------------------------------------
> +
> +# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
> +# The default value is: YES.
> +
> +GENERATE_HTML          = YES
> +
> +# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
> +# it.
> +# The default directory is: html.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_OUTPUT            = html
> +
> +# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
> +# generated HTML page (for example: .htm, .php, .asp).
> +# The default value is: .html.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_FILE_EXTENSION    = .html
> +
> +# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
> +# each generated HTML page. If the tag is left blank doxygen will generate a
> +# standard header.
> +#
> +# To get valid HTML, the header file must include any scripts and style sheets
> +# that doxygen needs, which depend on the configuration options used (e.g. the
> +# setting GENERATE_TREEVIEW). It is highly recommended to start with a default
> +# header using
> +# doxygen -w html new_header.html new_footer.html new_stylesheet.css
> +# YourConfigFile
> +# and then modify the file new_header.html. See also section "Doxygen usage"
> +# for information on how to generate the default header that doxygen normally
> +# uses.
> +# Note: The header is subject to change so you typically have to regenerate the
> +# default header when upgrading to a newer version of doxygen. For a description
> +# of the possible markers and block names see the documentation.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_HEADER            = xen-doxygen/header.html
> +
> +# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
> +# generated HTML page. If the tag is left blank doxygen will generate a standard
> +# footer. See HTML_HEADER for more information on how to generate a default
> +# footer and what special commands can be used inside the footer. See also
> +# section "Doxygen usage" for information on how to generate the default footer
> +# that doxygen normally uses.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_FOOTER            =
> +
> +# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
> +# sheet that is used by each HTML page. It can be used to fine-tune the look of
> +# the HTML output. If left blank doxygen will generate a default style sheet.
> +# See also section "Doxygen usage" for information on how to generate the style
> +# sheet that doxygen normally uses.
> +# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
> +# it is more robust and this tag (HTML_STYLESHEET) will in the future become
> +# obsolete.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_STYLESHEET        =
> +
> +# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
> +# cascading style sheets that are included after the standard style sheets
> +# created by doxygen. Using this option one can overrule certain style aspects.
> +# This is preferred over using HTML_STYLESHEET since it does not replace the
> +# standard style sheet and is therefore more robust against future updates.
> +# Doxygen will copy the style sheet files to the output directory.
> +# Note: The order of the extra style sheet files is of importance (e.g. the last
> +# style sheet in the list overrules the setting of the previous ones in the
> +# list). For an example see the documentation.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_EXTRA_STYLESHEET  = xen-doxygen/customdoxygen.css
> +
> +# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
> +# other source files which should be copied to the HTML output directory. Note
> +# that these files will be copied to the base HTML output directory. Use the
> +# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
> +# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
> +# files will be copied as-is; there are no commands or markers available.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_EXTRA_FILES       =
> +
> +# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
> +# will adjust the colors in the style sheet and background images according to
> +# this color. Hue is specified as an angle on a colorwheel, see
> +# https://en.wikipedia.org/wiki/Hue for more information. For instance the value
> +# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
> +# purple, and 360 is red again.
> +# Minimum value: 0, maximum value: 359, default value: 220.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_COLORSTYLE_HUE    =
> +
> +# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
> +# in the HTML output. For a value of 0 the output will use grayscales only. A
> +# value of 255 will produce the most vivid colors.
> +# Minimum value: 0, maximum value: 255, default value: 100.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_COLORSTYLE_SAT    =
> +
> +# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
> +# luminance component of the colors in the HTML output. Values below 100
> +# gradually make the output lighter, whereas values above 100 make the output
> +# darker. The value divided by 100 is the actual gamma applied, so 80 represents
> +# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not
> +# change the gamma.
> +# Minimum value: 40, maximum value: 240, default value: 80.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_COLORSTYLE_GAMMA  =
> +
> +# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
> +# page will contain the date and time when the page was generated. Setting this
> +# to YES can help to show when doxygen was last run and thus if the
> +# documentation is up to date.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_TIMESTAMP         = YES
> +
> +# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
> +# documentation will contain sections that can be hidden and shown after the
> +# page has loaded.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_DYNAMIC_SECTIONS  = YES
> +
> +# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
> +# shown in the various tree structured indices initially; the user can expand
> +# and collapse entries dynamically later on. Doxygen will expand the tree to
> +# such a level that at most the specified number of entries are visible (unless
> +# a fully collapsed tree already exceeds this amount). So setting the number of
> +# entries to 1 will produce a fully collapsed tree by default. 0 is a special
> +# value representing an infinite number of entries and will result in a fully
> +# expanded tree by default.
> +# Minimum value: 0, maximum value: 9999, default value: 100.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +HTML_INDEX_NUM_ENTRIES = 100
> +
> +# If the GENERATE_DOCSET tag is set to YES, additional index files will be
> +# generated that can be used as input for Apple's Xcode 3 integrated development
> +# environment (see: https://developer.apple.com/tools/xcode/), introduced with
> +# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
> +# Makefile in the HTML output directory. Running make will produce the docset in
> +# that directory and running make install will install the docset in
> +# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
> +# startup. See https://developer.apple.com/tools/creatingdocsetswithdoxygen.html
> +# for more information.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +GENERATE_DOCSET        = YES
> +
> +# This tag determines the name of the docset feed. A documentation feed provides
> +# an umbrella under which multiple documentation sets from a single provider
> +# (such as a company or product suite) can be grouped.
> +# The default value is: Doxygen generated docs.
> +# This tag requires that the tag GENERATE_DOCSET is set to YES.
> +
> +DOCSET_FEEDNAME        = "Doxygen generated docs"
> +
> +# This tag specifies a string that should uniquely identify the documentation
> +# set bundle. This should be a reverse domain-name style string, e.g.
> +# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
> +# The default value is: org.doxygen.Project.
> +# This tag requires that the tag GENERATE_DOCSET is set to YES.
> +
> +DOCSET_BUNDLE_ID       = org.doxygen.Project
> +
> +# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
> +# the documentation publisher. This should be a reverse domain-name style
> +# string, e.g. com.mycompany.MyDocSet.documentation.
> +# The default value is: org.doxygen.Publisher.
> +# This tag requires that the tag GENERATE_DOCSET is set to YES.
> +
> +DOCSET_PUBLISHER_ID    = org.doxygen.Publisher
> +
> +# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
> +# The default value is: Publisher.
> +# This tag requires that the tag GENERATE_DOCSET is set to YES.
> +
> +DOCSET_PUBLISHER_NAME  = Publisher
> +
> +# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
> +# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
> +# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
> +# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
> +# Windows.
> +#
> +# The HTML Help Workshop contains a compiler that can convert all HTML output
> +# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
> +# files are now used as the Windows 98 help format, and will replace the old
> +# Windows help format (.hlp) on all Windows platforms in the future. Compressed
> +# HTML files also contain an index, a table of contents, and you can search for
> +# words in the documentation. The HTML workshop also contains a viewer for
> +# compressed HTML files.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +GENERATE_HTMLHELP      = NO
> +
> +# The CHM_FILE tag can be used to specify the file name of the resulting .chm
> +# file. You can add a path in front of the file if the result should not be
> +# written to the html output directory.
> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
> +
> +CHM_FILE               =
> +
> +# The HHC_LOCATION tag can be used to specify the location (absolute path
> +# including file name) of the HTML help compiler (hhc.exe). If non-empty,
> +# doxygen will try to run the HTML help compiler on the generated index.hhp.
> +# The file has to be specified with full path.
> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
> +
> +HHC_LOCATION           =
> +
> +# The GENERATE_CHI flag controls whether a separate .chi index file is generated
> +# (YES) or whether it is included in the master .chm file (NO).
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
> +
> +GENERATE_CHI           = NO
> +
> +# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
> +# and project file content.
> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
> +
> +CHM_INDEX_ENCODING     =
> +
> +# The BINARY_TOC flag controls whether a binary table of contents is generated
> +# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
> +# enables the Previous and Next buttons.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
> +
> +BINARY_TOC             = YES
> +
> +# The TOC_EXPAND flag can be set to YES to add extra items for group members to
> +# the table of contents of the HTML help documentation and to the tree view.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
> +
> +TOC_EXPAND             = NO
> +
> +# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
> +# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
> +# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
> +# (.qch) of the generated HTML documentation.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +GENERATE_QHP           = NO
> +
> +# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
> +# the file name of the resulting .qch file. The path specified is relative to
> +# the HTML output folder.
> +# This tag requires that the tag GENERATE_QHP is set to YES.
> +
> +QCH_FILE               =
> +
> +# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
> +# Project output. For more information please see Qt Help Project / Namespace
> +# (see: http://doc.qt.io/qt-4.8/qthelpproject.html#namespace).
> +# The default value is: org.doxygen.Project.
> +# This tag requires that the tag GENERATE_QHP is set to YES.
> +
> +QHP_NAMESPACE          = org.doxygen.Project
> +
> +# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
> +# Help Project output. For more information please see Qt Help Project / Virtual
> +# Folders (see: http://doc.qt.io/qt-4.8/qthelpproject.html#virtual-folders).
> +# The default value is: doc.
> +# This tag requires that the tag GENERATE_QHP is set to YES.
> +
> +QHP_VIRTUAL_FOLDER     = doc
> +
> +# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
> +# filter to add. For more information please see Qt Help Project / Custom
> +# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
> +# This tag requires that the tag GENERATE_QHP is set to YES.
> +
> +QHP_CUST_FILTER_NAME   =
> +
> +# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
> +# custom filter to add. For more information please see Qt Help Project / Custom
> +# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
> +# This tag requires that the tag GENERATE_QHP is set to YES.
> +
> +QHP_CUST_FILTER_ATTRS  =
> +
> +# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
> +# project's filter section matches. Qt Help Project / Filter Attributes (see:
> +# http://doc.qt.io/qt-4.8/qthelpproject.html#filter-attributes).
> +# This tag requires that the tag GENERATE_QHP is set to YES.
> +
> +QHP_SECT_FILTER_ATTRS  =
> +
> +# The QHG_LOCATION tag can be used to specify the location of Qt's
> +# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
> +# generated .qhp file.
> +# This tag requires that the tag GENERATE_QHP is set to YES.
> +
> +QHG_LOCATION           =
> +
> +# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
> +# generated; together with the HTML files they form an Eclipse help plugin. To
> +# install this plugin and make it available under the help contents menu in
> +# Eclipse, the contents of the directory containing the HTML and XML files needs
> +# to be copied into the plugins directory of eclipse. The name of the directory
> +# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
> +# After copying Eclipse needs to be restarted before the help appears.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +GENERATE_ECLIPSEHELP   = NO
> +
> +# A unique identifier for the Eclipse help plugin. When installing the plugin
> +# the directory name containing the HTML and XML files should also have this
> +# name. Each documentation set should have its own identifier.
> +# The default value is: org.doxygen.Project.
> +# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
> +
> +ECLIPSE_DOC_ID         = org.doxygen.Project
> +
> +# If you want full control over the layout of the generated HTML pages it might
> +# be necessary to disable the index and replace it with your own. The
> +# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
> +# of each HTML page. A value of NO enables the index and the value YES disables
> +# it. Since the tabs in the index contain the same information as the navigation
> +# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +DISABLE_INDEX          = NO
> +
> +# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
> +# structure should be generated to display hierarchical information. If the tag
> +# value is set to YES, a side panel will be generated containing a tree-like
> +# index structure (just like the one that is generated for HTML Help). For this
> +# to work a browser that supports JavaScript, DHTML, CSS and frames is required
> +# (i.e. any modern browser). Windows users are probably better off using the
> +# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
> +# further fine-tune the look of the index. As an example, the default style
> +# sheet generated by doxygen has an example that shows how to put an image at
> +# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
> +# the same information as the tab index, you could consider setting
> +# DISABLE_INDEX to YES when enabling this option.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +GENERATE_TREEVIEW      = YES
> +
> +# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
> +# doxygen will group on one line in the generated HTML documentation.
> +#
> +# Note that a value of 0 will completely suppress the enum values from appearing
> +# in the overview section.
> +# Minimum value: 0, maximum value: 20, default value: 4.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +ENUM_VALUES_PER_LINE   = 4
> +
> +# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
> +# to set the initial width (in pixels) of the frame in which the tree is shown.
> +# Minimum value: 0, maximum value: 1500, default value: 250.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +TREEVIEW_WIDTH         = 250
> +
> +# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
> +# external symbols imported via tag files in a separate window.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +EXT_LINKS_IN_WINDOW    = NO
> +
> +# Use this tag to change the font size of LaTeX formulas included as images in
> +# the HTML documentation. When you change the font size after a successful
> +# doxygen run you need to manually remove any form_*.png images from the HTML
> +# output directory to force them to be regenerated.
> +# Minimum value: 8, maximum value: 50, default value: 10.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +FORMULA_FONTSIZE       = 10
> +
> +# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
> +# generated for formulas are transparent PNGs. Transparent PNGs are not
> +# supported properly for IE 6.0, but are supported on all modern browsers.
> +#
> +# Note that when changing this option you need to delete any form_*.png files in
> +# the HTML output directory before the changes have effect.
> +# The default value is: YES.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +FORMULA_TRANSPARENT    = YES
> +
> +# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
> +# https://www.mathjax.org) which uses client side Javascript for the rendering
> +# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
> +# installed or if you want the formulas to look prettier in the HTML output. When
> +# enabled you may also need to install MathJax separately and configure the path
> +# to it using the MATHJAX_RELPATH option.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +USE_MATHJAX            = NO
> +
> +# When MathJax is enabled you can set the default output format to be used for
> +# the MathJax output. See the MathJax site (see:
> +# http://docs.mathjax.org/en/latest/output.html) for more details.
> +# Possible values are: HTML-CSS (which is slower, but has the best
> +# compatibility), NativeMML (i.e. MathML) and SVG.
> +# The default value is: HTML-CSS.
> +# This tag requires that the tag USE_MATHJAX is set to YES.
> +
> +MATHJAX_FORMAT         = HTML-CSS
> +
> +# When MathJax is enabled you need to specify the location relative to the HTML
> +# output directory using the MATHJAX_RELPATH option. The destination directory
> +# should contain the MathJax.js script. For instance, if the mathjax directory
> +# is located at the same level as the HTML output directory, then
> +# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
> +# Content Delivery Network so you can quickly see the result without installing
> +# MathJax. However, it is strongly recommended to install a local copy of
> +# MathJax from https://www.mathjax.org before deployment.
> +# The default value is: http://cdn.mathjax.org/mathjax/latest.
> +# This tag requires that the tag USE_MATHJAX is set to YES.
> +
> +MATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest
> +
> +# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
> +# extension names that should be enabled during MathJax rendering. For example
> +# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
> +# This tag requires that the tag USE_MATHJAX is set to YES.
> +
> +MATHJAX_EXTENSIONS     =
> +
> +# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
> +# of code that will be used on startup of the MathJax code. See the MathJax site
> +# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
> +# example see the documentation.
> +# This tag requires that the tag USE_MATHJAX is set to YES.
> +
> +MATHJAX_CODEFILE       =
> +
> +# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
> +# the HTML output. The underlying search engine uses javascript and DHTML and
> +# should work on any modern browser. Note that when using HTML help
> +# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
> +# there is already a search function so this one should typically be disabled.
> +# For large projects the javascript based search engine can be slow; in that case
> +# enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
> +# search using the keyboard; to jump to the search box use <access key> + S
> +# (what the <access key> is depends on the OS and browser, but it is typically
> +# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
> +# key> to jump into the search results window, the results can be navigated
> +# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
> +# the search. The filter options can be selected when the cursor is inside the
> +# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
> +# to select a filter and <Enter> or <escape> to activate or cancel the filter
> +# option.
> +# The default value is: YES.
> +# This tag requires that the tag GENERATE_HTML is set to YES.
> +
> +SEARCHENGINE           = YES
> +
> +# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
> +# implemented using a web server instead of a web client using Javascript. There
> +# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
> +# setting. When disabled, doxygen will generate a PHP script for searching and
> +# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
> +# and searching needs to be provided by external tools. See the section
> +# "External Indexing and Searching" for details.
> +# The default value is: NO.
> +# This tag requires that the tag SEARCHENGINE is set to YES.
> +
> +SERVER_BASED_SEARCH    = NO
> +
> +# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
> +# script for searching. Instead the search results are written to an XML file
> +# which needs to be processed by an external indexer. Doxygen will invoke an
> +# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
> +# search results.
> +#
> +# Doxygen ships with an example indexer (doxyindexer) and search engine
> +# (doxysearch.cgi) which are based on the open source search engine library
> +# Xapian (see: https://xapian.org/).
> +#
> +# See the section "External Indexing and Searching" for details.
> +# The default value is: NO.
> +# This tag requires that the tag SEARCHENGINE is set to YES.
> +
> +EXTERNAL_SEARCH        = NO
> +
> +# The SEARCHENGINE_URL should point to a search engine hosted by a web server
> +# which will return the search results when EXTERNAL_SEARCH is enabled.
> +#
> +# Doxygen ships with an example indexer (doxyindexer) and search engine
> +# (doxysearch.cgi) which are based on the open source search engine library
> +# Xapian (see: https://xapian.org/). See the section "External Indexing and
> +# Searching" for details.
> +# This tag requires that the tag SEARCHENGINE is set to YES.
> +
> +SEARCHENGINE_URL       =
> +
> +# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
> +# search data is written to a file for indexing by an external tool. With the
> +# SEARCHDATA_FILE tag the name of this file can be specified.
> +# The default file is: searchdata.xml.
> +# This tag requires that the tag SEARCHENGINE is set to YES.
> +
> +SEARCHDATA_FILE        = searchdata.xml
> +
> +# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
> +# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
> +# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
> +# projects and redirect the results back to the right project.
> +# This tag requires that the tag SEARCHENGINE is set to YES.
> +
> +EXTERNAL_SEARCH_ID     =
> +
> +# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
> +# projects other than the one defined by this configuration file, but that are
> +# all added to the same external search index. Each project needs to have a
> +# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id
> +# to a relative location where the documentation can be found. The format is:
> +# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
> +# This tag requires that the tag SEARCHENGINE is set to YES.
> +
> +EXTRA_SEARCH_MAPPINGS  =
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the LaTeX output
> +#---------------------------------------------------------------------------
> +
> +# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
> +# The default value is: YES.
> +
> +GENERATE_LATEX         = NO
> +
> +# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
> +# it.
> +# The default directory is: latex.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_OUTPUT           = latex
> +
> +# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
> +# invoked.
> +#
> +# Note that when enabling USE_PDFLATEX this option is only used for generating
> +# bitmaps for formulas in the HTML output, but not in the Makefile that is
> +# written to the output directory.
> +# The default file is: latex.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_CMD_NAME         = latex
> +
> +# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
> +# index for LaTeX.
> +# The default file is: makeindex.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +MAKEINDEX_CMD_NAME     = makeindex
> +
> +# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
> +# documents. This may be useful for small projects and may help to save some
> +# trees in general.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +COMPACT_LATEX          = NO
> +
> +# The PAPER_TYPE tag can be used to set the paper type that is used by the
> +# printer.
> +# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
> +# 14 inches) and executive (7.25 x 10.5 inches).
> +# The default value is: a4.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +PAPER_TYPE             = a4
> +
> +# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
> +# that should be included in the LaTeX output. The package can be specified just
> +# by its name or with the correct syntax as to be used with the LaTeX
> +# \usepackage command. To get the times font for instance you can specify:
> +# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
> +# To use the option intlimits with the amsmath package you can specify:
> +# EXTRA_PACKAGES=[intlimits]{amsmath}
> +# If left blank no extra packages will be included.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +EXTRA_PACKAGES         =
> +
> +# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
> +# generated LaTeX document. The header should contain everything until the first
> +# chapter. If it is left blank doxygen will generate a standard header. See
> +# section "Doxygen usage" for information on how to let doxygen write the
> +# default header to a separate file.
> +#
> +# Note: Only use a user-defined header if you know what you are doing! The
> +# following commands have a special meaning inside the header: $title,
> +# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
> +# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
> +# string, for the replacement values of the other commands the user is referred
> +# to HTML_HEADER.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_HEADER           =
> +
> +# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
> +# generated LaTeX document. The footer should contain everything after the last
> +# chapter. If it is left blank doxygen will generate a standard footer. See
> +# LATEX_HEADER for more information on how to generate a default footer and what
> +# special commands can be used inside the footer.
> +#
> +# Note: Only use a user-defined footer if you know what you are doing!
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_FOOTER           =
> +
> +# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
> +# other source files which should be copied to the LATEX_OUTPUT output
> +# directory. Note that the files will be copied as-is; there are no commands or
> +# markers available.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_EXTRA_FILES      =
> +
> +# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
> +# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
> +# contain links (just like the HTML output) instead of page references. This
> +# makes the output suitable for online browsing using a PDF viewer.
> +# The default value is: YES.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +PDF_HYPERLINKS         = YES
> +
> +# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
> +# the PDF file directly from the LaTeX files. Set this option to YES to get a
> +# higher quality PDF documentation.
> +# The default value is: YES.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +USE_PDFLATEX           = YES
> +
> +# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
> +# command to the generated LaTeX files. This will instruct LaTeX to keep running
> +# if errors occur, instead of asking the user for help. This option is also used
> +# when generating formulas in HTML.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_BATCHMODE        = NO
> +
> +# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
> +# index chapters (such as File Index, Compound Index, etc.) in the output.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_HIDE_INDICES     = NO
> +
> +# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
> +# code with syntax highlighting in the LaTeX output.
> +#
> +# Note that which sources are shown also depends on other settings such as
> +# SOURCE_BROWSER.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_SOURCE_CODE      = NO
> +
> +# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
> +# bibliography, e.g. plainnat, or ieeetr. See
> +# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
> +# The default value is: plain.
> +# This tag requires that the tag GENERATE_LATEX is set to YES.
> +
> +LATEX_BIB_STYLE        = plain
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the RTF output
> +#---------------------------------------------------------------------------
> +
> +# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
> +# RTF output is optimized for Word 97 and may not look too pretty with other RTF
> +# readers/editors.
> +# The default value is: NO.
> +
> +GENERATE_RTF           = NO
> +
> +# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
> +# it.
> +# The default directory is: rtf.
> +# This tag requires that the tag GENERATE_RTF is set to YES.
> +
> +RTF_OUTPUT             = rtf
> +
> +# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
> +# documents. This may be useful for small projects and may help to save some
> +# trees in general.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_RTF is set to YES.
> +
> +COMPACT_RTF            = NO
> +
> +# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
> +# contain hyperlink fields. The RTF file will contain links (just like the HTML
> +# output) instead of page references. This makes the output suitable for online
> +# browsing using Word or some other Word compatible readers that support those
> +# fields.
> +#
> +# Note: WordPad (write) and others do not support links.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_RTF is set to YES.
> +
> +RTF_HYPERLINKS         = YES
> +
> +# Load stylesheet definitions from file. Syntax is similar to doxygen's config
> +# file, i.e. a series of assignments. You only have to provide replacements,
> +# missing definitions are set to their default value.
> +#
> +# See also section "Doxygen usage" for information on how to generate the
> +# default style sheet that doxygen normally uses.
> +# This tag requires that the tag GENERATE_RTF is set to YES.
> +
> +RTF_STYLESHEET_FILE    =
> +
> +# Set optional variables used in the generation of an RTF document. Syntax is
> +# similar to doxygen's config file. A template extensions file can be generated
> +# using doxygen -e rtf extensionFile.
> +# This tag requires that the tag GENERATE_RTF is set to YES.
> +
> +RTF_EXTENSIONS_FILE    =
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the man page output
> +#---------------------------------------------------------------------------
> +
> +# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
> +# classes and files.
> +# The default value is: NO.
> +
> +GENERATE_MAN           = NO
> +
> +# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
> +# it. A directory man3 will be created inside the directory specified by
> +# MAN_OUTPUT.
> +# The default directory is: man.
> +# This tag requires that the tag GENERATE_MAN is set to YES.
> +
> +MAN_OUTPUT             = man
> +
> +# The MAN_EXTENSION tag determines the extension that is added to the generated
> +# man pages. In case the manual section does not start with a number, the number
> +# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
> +# optional.
> +# The default value is: .3.
> +# This tag requires that the tag GENERATE_MAN is set to YES.
> +
> +MAN_EXTENSION          = .3
> +
> +# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
> +# will generate one additional man file for each entity documented in the real
> +# man page(s). These additional files only source the real man page, but without
> +# them the man command would be unable to find the correct page.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_MAN is set to YES.
> +
> +MAN_LINKS              = NO
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the XML output
> +#---------------------------------------------------------------------------
> +
> +# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
> +# captures the structure of the code including all documentation.
> +# The default value is: NO.
> +
> +GENERATE_XML           = YES
> +
> +# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
> +# it.
> +# The default directory is: xml.
> +# This tag requires that the tag GENERATE_XML is set to YES.
> +
> +XML_OUTPUT             = xml
> +
> +# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
> +# listings (including syntax highlighting and cross-referencing information) to
> +# the XML output. Note that enabling this will significantly increase the size
> +# of the XML output.
> +# The default value is: YES.
> +# This tag requires that the tag GENERATE_XML is set to YES.
> +
> +XML_PROGRAMLISTING     = YES
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the DOCBOOK output
> +#---------------------------------------------------------------------------
> +
> +# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
> +# that can be used to generate PDF.
> +# The default value is: NO.
> +
> +GENERATE_DOCBOOK       = NO
> +
> +# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
> +# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
> +# front of it.
> +# The default directory is: docbook.
> +# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
> +
> +DOCBOOK_OUTPUT         = docbook
> +
> +#---------------------------------------------------------------------------
> +# Configuration options for the AutoGen Definitions output
> +#---------------------------------------------------------------------------
> +
> +# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
> +# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
> +# the structure of the code including all documentation. Note that this feature
> +# is still experimental and incomplete at the moment.
> +# The default value is: NO.
> +
> +GENERATE_AUTOGEN_DEF   = NO
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the Perl module output
> +#---------------------------------------------------------------------------
> +
> +# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
> +# file that captures the structure of the code including all documentation.
> +#
> +# Note that this feature is still experimental and incomplete at the moment.
> +# The default value is: NO.
> +
> +GENERATE_PERLMOD       = NO
> +
> +# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
> +# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
> +# output from the Perl module output.
> +# The default value is: NO.
> +# This tag requires that the tag GENERATE_PERLMOD is set to YES.
> +
> +PERLMOD_LATEX          = NO
> +
> +# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
> +# formatted so it can be parsed by a human reader. This is useful if you want to
> +# understand what is going on. On the other hand, if this tag is set to NO, the
> +# size of the Perl module output will be much smaller and Perl will parse it
> +# just the same.
> +# The default value is: YES.
> +# This tag requires that the tag GENERATE_PERLMOD is set to YES.
> +
> +PERLMOD_PRETTY         = YES
> +
> +# The names of the make variables in the generated doxyrules.make file are
> +# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
> +# so different doxyrules.make files included by the same Makefile don't
> +# overwrite each other's variables.
> +# This tag requires that the tag GENERATE_PERLMOD is set to YES.
> +
> +PERLMOD_MAKEVAR_PREFIX =
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the preprocessor
> +#---------------------------------------------------------------------------
> +
> +# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
> +# C-preprocessor directives found in the sources and include files.
> +# The default value is: YES.
> +
> +ENABLE_PREPROCESSING   = YES
> +
> +# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
> +# in the source code. If set to NO, only conditional compilation will be
> +# performed. Macro expansion can be done in a controlled way by setting
> +# EXPAND_ONLY_PREDEF to YES.
> +# The default value is: NO.
> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
> +
> +MACRO_EXPANSION        = YES
> +
> +# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
> +# the macro expansion is limited to the macros specified with the PREDEFINED and
> +# EXPAND_AS_DEFINED tags.
> +# The default value is: NO.
> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
> +
> +EXPAND_ONLY_PREDEF     = NO
> +
> +# If the SEARCH_INCLUDES tag is set to YES, the include files in the
> +# INCLUDE_PATH will be searched if a #include is found.
> +# The default value is: YES.
> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
> +
> +SEARCH_INCLUDES        = YES
> +
> +# The INCLUDE_PATH tag can be used to specify one or more directories that
> +# contain include files that are not input files but should be processed by the
> +# preprocessor.
> +# This tag requires that the tag SEARCH_INCLUDES is set to YES.
> +
> +INCLUDE_PATH           = "@XEN_BASE@/xen/include/generated" \
> +                         "@XEN_BASE@/xen/include/"
> +
> +# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
> +# patterns (like *.h and *.hpp) to filter out the header-files in the
> +# directories. If left blank, the patterns specified with FILE_PATTERNS will be
> +# used.
> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
> +
> +INCLUDE_FILE_PATTERNS  =
> +
> +# The PREDEFINED tag can be used to specify one or more macro names that are
> +# defined before the preprocessor is started (similar to the -D option of e.g.
> +# gcc). The argument of the tag is a list of macros of the form: name or
> +# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
> +# is assumed. To prevent a macro definition from being undefined via #undef or
> +# recursively expanded use the := operator instead of the = operator.
> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
> +
> +PREDEFINED             = __attribute__(x)= \
> +                         DOXYGEN \
> +                         __XEN__
> +
> +# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
> +# tag can be used to specify a list of macro names that should be expanded. The
> +# macro definition that is found in the sources will be used. Use the PREDEFINED
> +# tag if you want to use a different macro definition that overrules the
> +# definition found in the source code.
> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
> +
> +EXPAND_AS_DEFINED      =
> +
> +# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
> +# remove all references to function-like macros that are alone on a line, have
> +# an all uppercase name, and do not end with a semicolon. Such function macros
> +# are typically used for boiler-plate code, and will confuse the parser if not
> +# removed.
> +# The default value is: YES.
> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
> +
> +SKIP_FUNCTION_MACROS   = NO
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to external references
> +#---------------------------------------------------------------------------
> +
> +# The TAGFILES tag can be used to specify one or more tag files. For each tag
> +# file the location of the external documentation should be added. The format of
> +# a tag file without this location is as follows:
> +# TAGFILES = file1 file2 ...
> +# Adding location for the tag files is done as follows:
> +# TAGFILES = file1=loc1 "file2 = loc2" ...
> +# where loc1 and loc2 can be relative or absolute paths or URLs. See the
> +# section "Linking to external documentation" for more information about the use
> +# of tag files.
> +# Note: Each tag file must have a unique name (where the name does NOT include
> +# the path). If a tag file is not located in the directory in which doxygen is
> +# run, you must also specify the path to the tagfile here.
> +
> +TAGFILES               =
> +
> +# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
> +# tag file that is based on the input files it reads. See section "Linking to
> +# external documentation" for more information about the usage of tag files.
> +
> +GENERATE_TAGFILE       =
> +
> +# If the ALLEXTERNALS tag is set to YES, all external classes will be listed in
> +# the class index. If set to NO, only the inherited external classes will be
> +# listed.
> +# The default value is: NO.
> +
> +ALLEXTERNALS           = NO
> +
> +# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
> +# in the modules index. If set to NO, only the current project's groups will be
> +# listed.
> +# The default value is: YES.
> +
> +EXTERNAL_GROUPS        = YES
> +
> +# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
> +# the related pages index. If set to NO, only the current project's pages will
> +# be listed.
> +# The default value is: YES.
> +
> +EXTERNAL_PAGES         = YES
> +
> +#---------------------------------------------------------------------------
> +# Configuration options related to the dot tool
> +#---------------------------------------------------------------------------
> +
> +# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
> +# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
> +# NO turns the diagrams off. Note that this option also works with HAVE_DOT
> +# disabled, but it is recommended to install and use dot, since it yields more
> +# powerful graphs.
> +# The default value is: YES.
> +
> +CLASS_DIAGRAMS         = NO
> +
> +# You can include diagrams made with dia in doxygen documentation. Doxygen will
> +# then run dia to produce the diagram and insert it in the documentation. The
> +# DIA_PATH tag allows you to specify the directory where the dia binary resides.
> +# If left empty dia is assumed to be found in the default search path.
> +
> +DIA_PATH               =
> +
> +# If set to YES the inheritance and collaboration graphs will hide inheritance
> +# and usage relations if the target is undocumented or is not a class.
> +# The default value is: YES.
> +
> +HIDE_UNDOC_RELATIONS   = YES
> +
> +# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
> +# available from the path. This tool is part of Graphviz (see:
> +# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
> +# Bell Labs. The other options in this section have no effect if this option is
> +# set to NO
> +# The default value is: NO.
> +
> +HAVE_DOT               = NO
> +
> +# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
> +# to run in parallel. When set to 0 doxygen will base this on the number of
> +# processors available in the system. You can set it explicitly to a value
> +# larger than 0 to get control over the balance between CPU load and processing
> +# speed.
> +# Minimum value: 0, maximum value: 32, default value: 0.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_NUM_THREADS        = 0
> +
> +# When you want a differently looking font in the dot files that doxygen
> +# generates you can specify the font name using DOT_FONTNAME. You need to make
> +# sure dot is able to find the font, which can be done by putting it in a
> +# standard location or by setting the DOTFONTPATH environment variable or by
> +# setting DOT_FONTPATH to the directory containing the font.
> +# The default value is: Helvetica.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_FONTNAME           = Helvetica
> +
> +# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
> +# dot graphs.
> +# Minimum value: 4, maximum value: 24, default value: 10.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_FONTSIZE           = 10
> +
> +# By default doxygen will tell dot to use the default font as specified with
> +# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
> +# the path where dot can find it using this tag.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_FONTPATH           =
> +
> +# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
> +# each documented class showing the direct and indirect inheritance relations.
> +# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +CLASS_GRAPH            = YES
> +
> +# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
> +# graph for each documented class showing the direct and indirect implementation
> +# dependencies (inheritance, containment, and class references variables) of the
> +# class with other documented classes.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +COLLABORATION_GRAPH    = YES
> +
> +# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
> +# groups, showing the direct groups dependencies.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +GROUP_GRAPHS           = YES
> +
> +# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
> +# collaboration diagrams in a style similar to the OMG's Unified Modeling
> +# Language.
> +# The default value is: NO.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +UML_LOOK               = NO
> +
> +# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
> +# class node. If there are many fields or methods and many nodes the graph may
> +# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
> +# number of items for each type to make the size more manageable. Set this to 0
> +# for no limit. Note that the threshold may be exceeded by 50% before the limit
> +# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
> +# but if the number exceeds 15, the total amount of fields shown is limited to
> +# 10.
> +# Minimum value: 0, maximum value: 100, default value: 10.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +UML_LIMIT_NUM_FIELDS   = 10
> +
> +# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
> +# collaboration graphs will show the relations between templates and their
> +# instances.
> +# The default value is: NO.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +TEMPLATE_RELATIONS     = NO
> +
> +# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
> +# YES then doxygen will generate a graph for each documented file showing the
> +# direct and indirect include dependencies of the file with other documented
> +# files.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +INCLUDE_GRAPH          = YES
> +
> +# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
> +# set to YES then doxygen will generate a graph for each documented file showing
> +# the direct and indirect include dependencies of the file with other documented
> +# files.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +INCLUDED_BY_GRAPH      = YES
> +
> +# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
> +# dependency graph for every global function or class method.
> +#
> +# Note that enabling this option will significantly increase the time of a run.
> +# So in most cases it will be better to enable call graphs for selected
> +# functions only using the \callgraph command. Disabling a call graph can be
> +# accomplished by means of the command \hidecallgraph.
> +# The default value is: NO.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +CALL_GRAPH             = NO
> +
> +# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
> +# dependency graph for every global function or class method.
> +#
> +# Note that enabling this option will significantly increase the time of a run.
> +# So in most cases it will be better to enable caller graphs for selected
> +# functions only using the \callergraph command. Disabling a caller graph can be
> +# accomplished by means of the command \hidecallergraph.
> +# The default value is: NO.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +CALLER_GRAPH           = NO
> +
> +# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a
> +# graphical hierarchy of all classes instead of a textual one.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +GRAPHICAL_HIERARCHY    = YES
> +
> +# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
> +# dependencies a directory has on other directories in a graphical way. The
> +# dependency relations are determined by the #include relations between the
> +# files in the directories.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DIRECTORY_GRAPH        = YES
> +
> +# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
> +# generated by dot. For an explanation of the image formats see the section
> +# output formats in the documentation of the dot tool (Graphviz (see:
> +# http://www.graphviz.org/)).
> +# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
> +# to make the SVG files visible in IE 9+ (other browsers do not have this
> +# requirement).
> +# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
> +# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
> +# png:gdiplus:gdiplus.
> +# The default value is: png.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_IMAGE_FORMAT       = png
> +
> +# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
> +# enable generation of interactive SVG images that allow zooming and panning.
> +#
> +# Note that this requires a modern browser other than Internet Explorer. Tested
> +# and working are Firefox, Chrome, Safari, and Opera.
> +# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
> +# the SVG files visible. Older versions of IE do not have SVG support.
> +# The default value is: NO.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +INTERACTIVE_SVG        = NO
> +
> +# The DOT_PATH tag can be used to specify the path where the dot tool can be
> +# found. If left blank, it is assumed the dot tool can be found in the path.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_PATH               =
> +
> +# The DOTFILE_DIRS tag can be used to specify one or more directories that
> +# contain dot files that are included in the documentation (see the \dotfile
> +# command).
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOTFILE_DIRS           =
> +
> +# The MSCFILE_DIRS tag can be used to specify one or more directories that
> +# contain msc files that are included in the documentation (see the \mscfile
> +# command).
> +
> +MSCFILE_DIRS           =
> +
> +# The DIAFILE_DIRS tag can be used to specify one or more directories that
> +# contain dia files that are included in the documentation (see the \diafile
> +# command).
> +
> +DIAFILE_DIRS           =
> +
> +# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
> +# that will be shown in the graph. If the number of nodes in a graph becomes
> +# larger than this value, doxygen will truncate the graph, which is visualized
> +# by representing a node as a red box. Note that if the number of direct
> +# children of the root node in a graph is already larger than
> +# DOT_GRAPH_MAX_NODES then the graph will not be shown at all. Also note that
> +# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
> +# Minimum value: 0, maximum value: 10000, default value: 50.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_GRAPH_MAX_NODES    = 50
> +
> +# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
> +# generated by dot. A depth value of 3 means that only nodes reachable from the
> +# root by following a path via at most 3 edges will be shown. Nodes that lay
> +# further from the root node will be omitted. Note that setting this option to 1
> +# or 2 may greatly reduce the computation time needed for large code bases. Also
> +# note that the size of a graph can be further restricted by
> +# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
> +# Minimum value: 0, maximum value: 1000, default value: 0.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +MAX_DOT_GRAPH_DEPTH    = 0
> +
> +# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
> +# background. This is disabled by default, because dot on Windows does not seem
> +# to support this out of the box.
> +#
> +# Warning: Depending on the platform used, enabling this option may lead to
> +# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
> +# read).
> +# The default value is: NO.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_TRANSPARENT        = NO
> +
> +# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
> +# files in one run (i.e. multiple -o and -T options on the command line). This
> +# makes dot run faster, but since only newer versions of dot (>1.8.10) support
> +# this, this feature is disabled by default.
> +# The default value is: NO.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_MULTI_TARGETS      = NO
> +
> +# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
> +# explaining the meaning of the various boxes and arrows in the dot generated
> +# graphs.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +GENERATE_LEGEND        = YES
> +
> +# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
> +# files that are used to generate the various graphs.
> +# The default value is: YES.
> +# This tag requires that the tag HAVE_DOT is set to YES.
> +
> +DOT_CLEANUP            = YES
> diff --git a/m4/ax_python_module.m4 b/m4/ax_python_module.m4
> new file mode 100644
> index 0000000000..107d88264a
> --- /dev/null
> +++ b/m4/ax_python_module.m4
> @@ -0,0 +1,56 @@
> +# ===========================================================================
> +#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
> +# ===========================================================================
> +#
> +# SYNOPSIS
> +#
> +#   AX_PYTHON_MODULE(modname[, fatal, python])
> +#
> +# DESCRIPTION
> +#
> +#   Checks for Python module.
> +#
> +#   If fatal is non-empty then absence of a module will trigger an error.
> +#   The third parameter can either be "python" for Python 2 or "python3" for
> +#   Python 3; defaults to Python 3.
> +#
> +# LICENSE
> +#
> +#   Copyright (c) 2008 Andrew Collier
> +#
> +#   Copying and distribution of this file, with or without modification, are
> +#   permitted in any medium without royalty provided the copyright notice
> +#   and this notice are preserved. This file is offered as-is, without any
> +#   warranty.
> +
> +#serial 9
> +
> +AU_ALIAS([AC_PYTHON_MODULE], [AX_PYTHON_MODULE])
> +AC_DEFUN([AX_PYTHON_MODULE],[
> +    if test -z "$PYTHON";
> +    then
> +        if test -z "$3";
> +        then
> +            PYTHON="python3"
> +        else
> +            PYTHON="$3"
> +        fi
> +    fi
> +    PYTHON_NAME=`basename $PYTHON`
> +    AC_MSG_CHECKING($PYTHON_NAME module: $1)
> +    $PYTHON -c "import $1" 2>/dev/null
> +    if test $? -eq 0;
> +    then
> +        AC_MSG_RESULT(yes)
> +        eval AS_TR_CPP(HAVE_PYMOD_$1)=yes
> +    else
> +        AC_MSG_RESULT(no)
> +        eval AS_TR_CPP(HAVE_PYMOD_$1)=no
> +        #
> +        if test -n "$2"
> +        then
> +            AC_MSG_ERROR(failed to find required module $1)
> +            exit 1
> +        fi
> +    fi
> +])
> \ No newline at end of file
> diff --git a/m4/docs_tool.m4 b/m4/docs_tool.m4
> index 3e8814ac8d..39aa348026 100644
> --- a/m4/docs_tool.m4
> +++ b/m4/docs_tool.m4
> @@ -15,3 +15,12 @@ dnl
>          AC_MSG_WARN([$2 is not available so some documentation won't be built])
>      ])
>  ])
> +
> +AC_DEFUN([AX_DOCS_TOOL_REQ_PROG], [
> +dnl
> +    AC_ARG_VAR([$1], [Path to $2 tool])
> +    AC_PATH_PROG([$1], [$2])
> +    AS_IF([! test -x "$ac_cv_path_$1"], [
> +        AC_MSG_ERROR([$2 is needed])
> +    ])
> +])
> \ No newline at end of file
> -- 
> 2.17.1
> 
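
[Editor's note: the check AX_PYTHON_MODULE performs can be illustrated as a
standalone shell sketch. This is not part of the patch; the module name `json`
and the variable names are example choices, mirroring what the macro does when
it sets HAVE_PYMOD_<NAME>.]

```shell
#!/bin/sh
# Mimic AX_PYTHON_MODULE's core check outside autoconf: try to import
# the module with the chosen interpreter and record yes/no, the way the
# macro records HAVE_PYMOD_<NAME>.
PYTHON=${PYTHON:-python3}   # third macro argument, defaulting to python3
MODULE=json                 # first macro argument ($1); json is just an example

if "$PYTHON" -c "import $MODULE" 2>/dev/null; then
    result=yes
else
    result=no
fi
echo "checking $PYTHON module: $MODULE... $result"
```

With a stock python3 this prints `checking python3 module: json... yes`; a
missing module yields `no`, which in the macro becomes a fatal error only when
the second ("fatal") argument is non-empty.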


From xen-devel-bounces@lists.xenproject.org Tue May 04 23:50:16 2021
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH v2 8/8] Arm/optee: don't open-code
 xzalloc_flex_struct()
Date: Tue, 4 May 2021 23:49:44 +0000
Message-ID: <874kfixdnr.fsf@epam.com>
References: <091b4b91-712f-3526-78d1-80d31faf8e41@suse.com>
 <5fa042ac-9609-eab7-b14d-a59782ef4141@suse.com>
 <d61a2123-b501-15d6-e2c1-f18aa4790b46@suse.com>
In-Reply-To: <d61a2123-b501-15d6-e2c1-f18aa4790b46@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0


Hi Jan,

Jan Beulich writes:

> On 21.04.2021 16:59, Jan Beulich wrote:
>> The current use of xzalloc_bytes() in optee is nearly an open-coded
>> version of xzalloc_flex_struct(), which was introduced after the driver
>> was merged.
>>
>> The main difference is xzalloc_bytes() will also force the allocation to
>> be SMP_CACHE_BYTES aligned and therefore avoid sharing the cache line.
>> While sharing the cache line can have an impact on the performance, this
>> is also true for most of the other users of x*alloc(), x*alloc_array(),
>> and x*alloc_flex_struct(). So if we want to prevent sharing cache lines,
>> arranging for this should be done in the allocator itself.
>>
>> In this case, we don't need stricter alignment than what the allocator
>> provides. Hence replace the call to xzalloc_bytes() with one of
>> xzalloc_flex_struct().
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>> Signed-off-by: Julien Grall <julien@xen.org>
>> ---
>> v2: Use commit message very close to what was suggested by Julien.
>
> I realize I forgot to Cc you on the v2 submission, but I didn't hear
> back on v1 from you either. May I ask for an ack or otherwise?

You already had a discussion on v1, so I just wanted to see how it would
conclude before stepping in.

I'm fine with this change:

Acked-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

>
>> --- a/xen/arch/arm/tee/optee.c
>> +++ b/xen/arch/arm/tee/optee.c
>> @@ -529,8 +529,7 @@ static struct optee_shm_buf *allocate_op
>>      while ( unlikely(old != atomic_cmpxchg(&ctx->optee_shm_buf_pages,
>>                                             old, new)) );
>>
>> -    optee_shm_buf = xzalloc_bytes(sizeof(struct optee_shm_buf) +
>> -                                  pages_cnt * sizeof(struct page *));
>> +    optee_shm_buf = xzalloc_flex_struct(struct optee_shm_buf, pages, pages_cnt);
>>      if ( !optee_shm_buf )
>>      {
>>          err_code = -ENOMEM;
>>
>>


-- 
Volodymyr Babchuk at EPAM


From xen-devel-bounces@lists.xenproject.org Wed May 05 02:02:52 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161766-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161766: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-multivcpu:guest-localmigrate:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 02:02:31 +0000

flight 161766 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161766/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-multivcpu 18 guest-localmigrate      fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  257 days
Failing since        152659  2020-08-21 14:07:39 Z  256 days  469 attempts
Testing same since   161766  2021-05-04 10:57:59 Z    0 days    1 attempts

------------------------------------------------------------
480 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 146393 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 05 06:27:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 06:27:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122796.231614 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leB0F-0002uG-7S; Wed, 05 May 2021 06:27:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122796.231614; Wed, 05 May 2021 06:27:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leB0F-0002u9-3u; Wed, 05 May 2021 06:27:23 +0000
Received: by outflank-mailman (input) for mailman id 122796;
 Wed, 05 May 2021 06:27:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leB0E-0002u0-3T
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 06:27:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e39df27-9194-47bf-b90d-cfb38bc068f3;
 Wed, 05 May 2021 06:27:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 16286AE04;
 Wed,  5 May 2021 06:27:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e39df27-9194-47bf-b90d-cfb38bc068f3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620196039; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PvU/67kargRclV1BzSMI07cnKNaj4Cl2Fntp3vzI4w4=;
	b=guv7oRC9XIU1Q++iBeh4QtSOYAuzDc7UrAD6k7EygnNNFLVDDWLVZyiXRI0cVZxGHlSs2i
	StqYWa5JB8x5h4nrGrtJI+YUNM+TZC1xFO9zAevA2uiyqnSsiyjAv5MVMa2dM4muAVgpeE
	nQzRPKBdcx/XapfSz2qs0G+X3qcm0pY=
Subject: Re: [PATCH] libs/guest: Don't hide the indirection on xc_cpu_policy_t
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210504185322.19306-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4920e6e0-6cd2-fc20-d426-5b12934a2565@suse.com>
Date: Wed, 5 May 2021 08:27:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210504185322.19306-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.05.2021 20:53, Andrew Cooper wrote:
> Unsure what to do about x86_cpu_policies_are_compatible().  It would be nice
> to have xc_cpu_policy_is_compatible() sensibly const'd, but maybe that means
> we need a struct const_cpu_policy and that smells like it is spiralling out of
> control.

Not sure it would be that bad. In fact, looking at the few uses of struct
cpu_policy in the hypervisor, I wonder if the two contained pointers
couldn't be pointer-to-const right there. Once you've constructed a full
struct cpu_policy instance, it shouldn't typically be modified anymore,
should it? Would require the respective struct arch_domain fields to also
become pointer-to-const. And if the tool stack had any need for a
container with pointer-to-non-const, it could locally define a type,
keeping lib/x86/ tidy.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 06:39:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 06:39:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122800.231626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leBCC-0004MO-CH; Wed, 05 May 2021 06:39:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122800.231626; Wed, 05 May 2021 06:39:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leBCC-0004MH-98; Wed, 05 May 2021 06:39:44 +0000
Received: by outflank-mailman (input) for mailman id 122800;
 Wed, 05 May 2021 06:39:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leBCA-0004MB-Q3
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 06:39:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 169ad996-15c0-48fc-aedc-2436081af118;
 Wed, 05 May 2021 06:39:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E5647AEFE;
 Wed,  5 May 2021 06:39:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 169ad996-15c0-48fc-aedc-2436081af118
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620196781; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CHD5V8bhN/m1JC1gKBf/BeAqVoy9fPW1pdj/Ey6MlXY=;
	b=ssXbijVH91b65Yqovz303z1luVbREDJmrFIpFS3LRAPUeQMyNdv1yzHk9uAiAU7cKXRYpi
	UcFWEJ+Ks2nvq1MLI7/BdD7UJO1csABsZEhn+yMSykFX6QXF3YNiSZ/27UzxKNHblYUs5c
	UPj1jmXt7vMmJ5oQ+zT8YwwS49Uy+jU=
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <28384167-fbd0-d3ff-c064-ee88f5891580@suse.com>
Date: Wed, 5 May 2021 08:39:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210504213120.4179-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 04.05.2021 23:31, Andrew Cooper wrote:
> --- a/tools/include/xen-tools/libs.h
> +++ b/tools/include/xen-tools/libs.h
> @@ -63,4 +63,9 @@
>  #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
>  #endif
>  
> +#ifndef _AC
> +#define __AC(X, Y)   (X ## Y)
> +#define _AC(X, Y)    __AC(X, Y)
> +#endif

Somewhere in Roger's recent / pending work I recall he moved these
from somewhere, instead of adding new instances.

> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> @@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
>      }
>  }
>  
> +static void test_calculate_compatible_success(void)
> +{
> +    static struct test {
> +        const char *name;
> +        struct {
> +            struct cpuid_policy p;
> +            struct msr_policy m;
> +        } a, b, out;
> +    } tests[] = {
> +        {
> +            "arch_caps, b short max_leaf",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 6,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,

Is this legitimate input in the first place?

> --- a/xen/lib/x86/policy.c
> +++ b/xen/lib/x86/policy.c
> @@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>      if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
>          FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
>  
> +    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
> +        FAIL_MSR(MSR_ARCH_CAPABILITIES);

Doesn't this need special treatment of RSBA, just like it needs special
treatment below?

> @@ -43,6 +46,50 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>      return ret;
>  }
>  
> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
> +                                        const struct cpu_policy *b,
> +                                        struct cpu_policy *out,
> +                                        struct cpu_policy_errors *err)
> +{
> +    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
> +    const struct msr_policy *am = a->msr, *bm = b->msr;
> +    struct cpuid_policy *cp = out->cpuid;
> +    struct msr_policy *mp = out->msr;

Hmm, okay - this would not work with my proposal in reply to your
other patch. The output would instead need to have pointers
allocated here then.

> +    memset(cp, 0, sizeof(*cp));
> +    memset(mp, 0, sizeof(*mp));
> +
> +    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);

Any reason you don't do the same right away for the max extended
leaf?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 07:07:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:07:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122804.231638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leBd6-0007Sy-LT; Wed, 05 May 2021 07:07:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122804.231638; Wed, 05 May 2021 07:07:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leBd6-0007Sr-IU; Wed, 05 May 2021 07:07:32 +0000
Received: by outflank-mailman (input) for mailman id 122804;
 Wed, 05 May 2021 07:07:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leBd5-0007Si-VC
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:07:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30847763-fba8-429a-8753-3fefd6499226;
 Wed, 05 May 2021 07:07:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 335E4AD21;
 Wed,  5 May 2021 07:07:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30847763-fba8-429a-8753-3fefd6499226
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620198450; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=kYTzdg2mxmFflEqULrHidRTMw57MXGtclBCaf+QyRYc=;
	b=plqUjaTrzBtuHB7WnUGwH4cOoa9GeVXeF7OsdnXxIYcHSAk7mzich15R0MzBDHzhOtP06D
	MG+Do7TVCuaT6XuSduxLM6ETrKSThbo4W27UTjRo5zg5mV9gOSsn4ErCsx8/T/MsxoQrN/
	1m/UKJeW6iBFIAG5l/FTAQlieVI+svE=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/p2m: please Clang after making certain parts HVM-only
Message-ID: <cfac6284-d4ec-af2f-6be4-c114c7c10009@suse.com>
Date: Wed, 5 May 2021 09:07:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Move a few #ifdef-s, to account for diagnostics like

p2m.c:549:1: error: non-void function does not return a value in all control paths [-Werror,-Wreturn-type]

which appear despite paging_mode_translate() resolving to constant
"false" when !HVM. All of the affected functions are intended to become
fully HVM-only anyway, with their non-translated stub handling split off
elsewhere.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Fixes: 8d012d3ddffc ("x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -497,21 +497,23 @@ mfn_t __get_gfn_type_access(struct p2m_d
 #ifdef CONFIG_HVM
     mfn_t mfn;
     gfn_t gfn = _gfn(gfn_l);
-#endif
-
-    /* Unshare makes no sense withuot populate. */
-    if ( q & P2M_UNSHARE )
-        q |= P2M_ALLOC;
 
     if ( !p2m || !paging_mode_translate(p2m->domain) )
     {
-        /* Not necessarily true, but for non-translated guests, we claim
-         * it's the most generic kind of memory */
+#endif
+        /*
+         * Not necessarily true, but for non-translated guests we claim
+         * it's the most generic kind of memory.
+         */
         *t = p2m_ram_rw;
         return _mfn(gfn_l);
+#ifdef CONFIG_HVM
     }
 
-#ifdef CONFIG_HVM
+    /* Unshare makes no sense without populate. */
+    if ( q & P2M_UNSHARE )
+        q |= P2M_ALLOC;
+
     if ( locked )
         /* Grab the lock here, don't release until put_gfn */
         gfn_lock(p2m, gfn, 0);
@@ -1417,18 +1419,18 @@ int set_identity_p2m_entry(struct domain
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret;
-#endif
 
     if ( !paging_mode_translate(d) )
     {
+#endif
         if ( !is_iommu_enabled(d) )
             return 0;
         return iommu_legacy_map(d, _dfn(gfn_l), _mfn(gfn_l),
                                 1ul << PAGE_ORDER_4K,
                                 IOMMUF_readable | IOMMUF_writable);
+#ifdef CONFIG_HVM
     }
 
-#ifdef CONFIG_HVM
     gfn_lock(p2m, gfn, 0);
 
     mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);
@@ -1464,16 +1466,16 @@ int clear_identity_p2m_entry(struct doma
     mfn_t mfn;
     struct p2m_domain *p2m = p2m_get_hostp2m(d);
     int ret;
-#endif
 
     if ( !paging_mode_translate(d) )
     {
+#endif
         if ( !is_iommu_enabled(d) )
             return 0;
         return iommu_legacy_unmap(d, _dfn(gfn_l), 1ul << PAGE_ORDER_4K);
+#ifdef CONFIG_HVM
     }
 
-#ifdef CONFIG_HVM
     gfn_lock(p2m, gfn, 0);
 
     mfn = p2m->get_entry(p2m, gfn, &p2mt, &a, 0, NULL, NULL);


From xen-devel-bounces@lists.xenproject.org Wed May 05 07:39:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122812.231650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leC7b-00028h-69; Wed, 05 May 2021 07:39:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122812.231650; Wed, 05 May 2021 07:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leC7b-00028a-2z; Wed, 05 May 2021 07:39:03 +0000
Received: by outflank-mailman (input) for mailman id 122812;
 Wed, 05 May 2021 07:39:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leC7a-00028U-1R
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:39:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d129cc43-c15c-41b4-bcb6-3c6dc79fb36b;
 Wed, 05 May 2021 07:39:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 14D49AFB1;
 Wed,  5 May 2021 07:39:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d129cc43-c15c-41b4-bcb6-3c6dc79fb36b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620200340; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4HW9WrgK6dvz0OuYcED+vbJDOMECKjTtfnLHOgyY0xE=;
	b=qL2CXqgHfBn5ahfIjh28uS2e1Fi+8cOpsDhpl/1nv8hraITM23Yx4nmekvLFu/ua0DThbV
	lRWra5HIr0ldMhIa2RADkI7hjPmnyqTyxqN6urf6drBfG6tzB9w69X4lkkkRMRnoUKry28
	n+ou1qrXNegKPk6EHcKbRMbaKFq5vpE=
Subject: Re: [PATCH v3 03/13] libs/guest: allow fetching a specific MSR entry
 from a cpu policy
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-4-roger.pau@citrix.com>
 <273ba6f9-dee9-00db-407b-10325d21afae@suse.com>
 <YJEoS6P1S6NbySFd@Air-de-Roger>
 <54c48a0f-075f-c379-eeb4-60b4439d8907@suse.com>
 <YJE20/M+OCER2vPn@Air-de-Roger>
 <66d6596b-5d90-7bf8-a383-ce2b6b1fe03f@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <05b49cd5-31cb-1bf7-c678-846c18a88fbc@suse.com>
Date: Wed, 5 May 2021 09:38:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <66d6596b-5d90-7bf8-a383-ce2b6b1fe03f@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 19:11, Andrew Cooper wrote:
> On 04/05/2021 12:58, Roger Pau Monné wrote:
>> On Tue, May 04, 2021 at 01:40:11PM +0200, Jan Beulich wrote:
>>> On 04.05.2021 12:56, Roger Pau Monné wrote:
>>>> On Mon, May 03, 2021 at 12:41:29PM +0200, Jan Beulich wrote:
>>>>> On 30.04.2021 17:52, Roger Pau Monne wrote:
>>>>>> --- a/tools/libs/guest/xg_cpuid_x86.c
>>>>>> +++ b/tools/libs/guest/xg_cpuid_x86.c
>>>>>> @@ -850,3 +850,45 @@ int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t policy,
>>>>>>      *out = *tmp;
>>>>>>      return 0;
>>>>>>  }
>>>>>> +
>>>>>> +static int compare_entries(const void *l, const void *r)
>>>>>> +{
>>>>>> +    const xen_msr_entry_t *lhs = l;
>>>>>> +    const xen_msr_entry_t *rhs = r;
>>>>>> +
>>>>>> +    if ( lhs->idx == rhs->idx )
>>>>>> +        return 0;
>>>>>> +    return lhs->idx < rhs->idx ? -1 : 1;
>>>>>> +}
>>>>>> +
>>>>>> +static xen_msr_entry_t *find_entry(xen_msr_entry_t *entries,
>>>>>> +                                   unsigned int nr_entries, unsigned int index)
>>>>>> +{
>>>>>> +    const xen_msr_entry_t key = { index };
>>>>>> +
>>>>>> +    return bsearch(&key, entries, nr_entries, sizeof(*entries), compare_entries);
>>>>>> +}
>>>>> Isn't "entries" / "entry" a little too generic a name here, considering
>>>>> the CPUID equivalents use "leaves" / "leaf"? (Noticed really while looking
>>>>> at patch 7.)
>>>> Would you be fine with naming the function find_msr and leaving the
>>>> rest of the parameters names as-is?
>>> Yes. But recall I'm not the maintainer of this code anyway.
> 
> This file in particular has been entirely within the x86 remit for
> multiple years now, as have the other cpuid bits in misc/ and libxl.

Well, it's somewhere in the middle imo: its maintainers have kind of
permanently delegated it to the x86 maintainers. That still doesn't
make it part of "X86 ARCHITECTURE" until such time as the file actually
gets named there. Note how e.g. tools/misc/xen-cpuid.c already is.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 07:42:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:42:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122815.231662 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCAe-0003Td-N7; Wed, 05 May 2021 07:42:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122815.231662; Wed, 05 May 2021 07:42:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCAe-0003TW-Id; Wed, 05 May 2021 07:42:12 +0000
Received: by outflank-mailman (input) for mailman id 122815;
 Wed, 05 May 2021 07:42:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leCAd-0003TQ-DQ
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:42:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 25167c04-4e5c-4942-be75-4ae7b7b966ae;
 Wed, 05 May 2021 07:42:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A78B3AF3E;
 Wed,  5 May 2021 07:42:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 25167c04-4e5c-4942-be75-4ae7b7b966ae
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620200529; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3zWsE1sxa136Lt1kOqw+xOkbfvzZsgPmokPr5KESenI=;
	b=eWj6MqDGg6giYvDvR45i8JPO8Anm/7o+MoNDpR8ilAoZ9hbue3+xXXtL86dEFOrCqlWumA
	JvIkk1r39dzvm7yfoggCDomxjvMA4pVgrMCp9oe9R8paAgtPsqAStOrK/wpQcrtV48RWzc
	ogc+ihj/C4KXMmfRLX36TyQmriIQccE=
Subject: Re: [PATCH v3 08/13] libs/guest: make a cpu policy compatible with
 older Xen versions
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-9-roger.pau@citrix.com>
 <51ee228a-2d53-2dd4-55cf-233d81ba4958@suse.com>
 <YJFpdA8qmYca9bUO@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6a3bc5cd-10a2-1d13-0033-c22d16da25b7@suse.com>
Date: Wed, 5 May 2021 09:42:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <YJFpdA8qmYca9bUO@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 17:34, Roger Pau Monné wrote:
> On Mon, May 03, 2021 at 01:09:41PM +0200, Jan Beulich wrote:
>> On 30.04.2021 17:52, Roger Pau Monne wrote:
>>> @@ -1086,3 +1075,42 @@ int xc_cpu_policy_calc_compatible(xc_interface *xch,
>>>  
>>>      return rc;
>>>  }
>>> +
>>> +int xc_cpu_policy_make_compatible(xc_interface *xch, xc_cpu_policy_t policy,
>>> +                                  bool hvm)
>>
>> I'm concerned of the naming, and in particular the two very different
>> meanings of "compatible" for xc_cpu_policy_calc_compatible() and this
>> new one. I'm afraid I don't have a good suggestion though, short of
>> making the name even longer and inserting "backwards".
> 
> Would xc_cpu_policy_make_compat_412 be acceptable?
> 
> That's the more concise one I can think of.

Hmm, maybe (perhaps with an underscore inserted between 4 and 12). Yet
(sorry) a comment in the function says "since Xen 4.13", which includes
changes that have happened later. Therefore it's not really clear to me
whether the function really _only_ deals with the 4.12 / 4.13 boundary.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122819.231674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBl-000456-Vw; Wed, 05 May 2021 07:43:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122819.231674; Wed, 05 May 2021 07:43:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBl-00044z-SX; Wed, 05 May 2021 07:43:21 +0000
Received: by outflank-mailman (input) for mailman id 122819;
 Wed, 05 May 2021 07:43:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCBj-00044r-QI
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:19 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id a3525501-7e8c-465c-8b62-8bd864c11538;
 Wed, 05 May 2021 07:43:17 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 88181D6E;
 Wed,  5 May 2021 00:43:17 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 704903F718;
 Wed,  5 May 2021 00:43:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3525501-7e8c-465c-8b62-8bd864c11538
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com
Subject: [PATCH v3 00/10] arm64: Get rid of READ/WRITE_SYSREG32
Date: Wed,  5 May 2021 09:42:58 +0200
Message-Id: <20210505074308.11016-1-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The purpose of this patch series is to remove the usage of the 32bit
helper macros READ/WRITE_SYSREG32 on arm64, as they do not follow the
latest ARMv8 specification: the mrs/msr instructions expect 64bit
values.
According to ARM DDI 0487G.a, all the AArch64 system registers
are 64bit wide, even though many of them have the upper 32 bits
reserved. This does not mean that some of these sysregs cannot be
widened in newer versions of ARMv8 or in a future architecture.
Also, when dealing with system registers we should use the register_t
type.

This patch series removes the use of READ/WRITE_SYSREG32
and replaces these calls with READ/WRITE_SYSREG. The change was
split into several small patches to make the review process easier.

This series focuses on removing the READ/WRITE_SYSREG32 call sites.
The next step is to also get rid of vreg_emulate_sysreg32 and the
parts related to it, like the TVM_REG macro. The final part will be to
completely remove the READ/WRITE_SYSREG32 macros themselves.

Michal Orzel (10):
  arm64/vfp: Get rid of READ/WRITE_SYSREG32
  arm/domain: Get rid of READ/WRITE_SYSREG32
  arm: Modify type of actlr to register_t
  arm/gic: Remove member hcr of structure gic_v3
  arm/gic: Get rid of READ/WRITE_SYSREG32
  arm/p2m: Get rid of READ/WRITE_SYSREG32
  xen/arm: Always access SCTLR_EL2 using READ/WRITE_SYSREG()
  arm/page: Get rid of READ/WRITE_SYSREG32
  arm/time,vtimer: Get rid of READ/WRITE_SYSREG32
  arm64: Change type of hsr, cpsr, spsr_el1 to uint64_t

 xen/arch/arm/arm64/entry.S            |  4 +-
 xen/arch/arm/arm64/traps.c            |  2 +-
 xen/arch/arm/arm64/vfp.c              | 12 ++--
 xen/arch/arm/arm64/vsysreg.c          |  3 +-
 xen/arch/arm/domain.c                 | 22 +++---
 xen/arch/arm/gic-v3-lpi.c             |  2 +-
 xen/arch/arm/gic-v3.c                 | 98 ++++++++++++++-------------
 xen/arch/arm/mm.c                     |  2 +-
 xen/arch/arm/p2m.c                    |  8 +--
 xen/arch/arm/time.c                   | 28 ++++----
 xen/arch/arm/traps.c                  | 34 ++++++----
 xen/arch/arm/vcpreg.c                 | 13 ++--
 xen/arch/arm/vtimer.c                 | 10 +--
 xen/include/asm-arm/arm64/processor.h | 11 +--
 xen/include/asm-arm/arm64/vfp.h       |  6 +-
 xen/include/asm-arm/domain.h          |  6 +-
 xen/include/asm-arm/gic.h             |  6 +-
 xen/include/asm-arm/gic_v3_defs.h     |  2 +
 xen/include/asm-arm/hsr.h             |  2 +-
 xen/include/asm-arm/page.h            |  4 +-
 xen/include/asm-arm/processor.h       |  5 +-
 xen/include/public/arch-arm.h         |  4 +-
 xen/include/public/domctl.h           |  2 +-
 xen/include/public/vm_event.h         |  3 +-
 24 files changed, 153 insertions(+), 136 deletions(-)

-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122820.231686 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBo-0004Mc-8D; Wed, 05 May 2021 07:43:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122820.231686; Wed, 05 May 2021 07:43:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBo-0004MV-41; Wed, 05 May 2021 07:43:24 +0000
Received: by outflank-mailman (input) for mailman id 122820;
 Wed, 05 May 2021 07:43:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCBn-0004M4-Hq
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:23 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 18460ea0-c944-4e09-9008-4025e38dbf26;
 Wed, 05 May 2021 07:43:22 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3E7731063;
 Wed,  5 May 2021 00:43:22 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 03CAB3F718;
 Wed,  5 May 2021 00:43:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 18460ea0-c944-4e09-9008-4025e38dbf26
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com
Subject: [PATCH v3 03/10] arm: Modify type of actlr to register_t
Date: Wed,  5 May 2021 09:43:01 +0200
Message-Id: <20210505074308.11016-4-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AArch64 registers are 64bit, whereas AArch32 registers
are 32bit or 64bit. MSR/MRS expect 64bit values, thus
we should get rid of the READ/WRITE_SYSREG32 helpers
in favour of READ/WRITE_SYSREG.
We should also use the register_t type when reading sysregs,
which corresponds to uint64_t or uint32_t as appropriate.
Even though many AArch64 registers have the upper 32 bits reserved,
it does not mean that they can't be widened in the future.

The ACTLR_EL1 system register bits are implementation defined,
which means truncating the register is possibly a latent bug on
current HW, as the CPU implementer may already have decided to use
the top 32 bits.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
Changes since v2:
-Modify the commit message
---
 xen/arch/arm/domain.c        | 2 +-
 xen/include/asm-arm/domain.h | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index 621f518b83..c021a03c61 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -330,7 +330,7 @@ static void schedule_tail(struct vcpu *prev)
 
 static void continue_new_vcpu(struct vcpu *prev)
 {
-    current->arch.actlr = READ_SYSREG32(ACTLR_EL1);
+    current->arch.actlr = READ_SYSREG(ACTLR_EL1);
     processor_vcpu_initialise(current);
 
     schedule_tail(prev);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index c6b59ee755..2d4f38c669 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -156,7 +156,7 @@ struct arch_vcpu
 
     /* Control Registers */
     register_t sctlr;
-    uint32_t actlr;
+    register_t actlr;
     uint32_t cpacr;
 
     uint32_t contextidr;
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122821.231698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBp-0004eW-Nh; Wed, 05 May 2021 07:43:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122821.231698; Wed, 05 May 2021 07:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBp-0004eL-J2; Wed, 05 May 2021 07:43:25 +0000
Received: by outflank-mailman (input) for mailman id 122821;
 Wed, 05 May 2021 07:43:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCBo-00044r-PH
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:24 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 88af7043-99c7-4d16-ab26-f8a73ae7904f;
 Wed, 05 May 2021 07:43:19 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 365441063;
 Wed,  5 May 2021 00:43:19 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id CC5BF3F718;
 Wed,  5 May 2021 00:43:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88af7043-99c7-4d16-ab26-f8a73ae7904f
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 01/10] arm64/vfp: Get rid of READ/WRITE_SYSREG32
Date: Wed,  5 May 2021 09:42:59 +0200
Message-Id: <20210505074308.11016-2-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AArch64 registers are 64bit, whereas AArch32 registers
are 32bit or 64bit. MSR/MRS expect 64bit values, thus
we should get rid of the READ/WRITE_SYSREG32 helpers
in favour of READ/WRITE_SYSREG.
We should also use the register_t type when reading sysregs,
which corresponds to uint64_t or uint32_t as appropriate.
Even though many AArch64 registers have the upper 32 bits reserved,
it does not mean that they can't be widened in the future.

Modify the types of FPCR, FPSR and FPEXC32_EL2 to register_t.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/arm64/vfp.c        | 12 ++++++------
 xen/include/asm-arm/arm64/vfp.h |  6 +++---
 2 files changed, 9 insertions(+), 9 deletions(-)

diff --git a/xen/arch/arm/arm64/vfp.c b/xen/arch/arm/arm64/vfp.c
index 999a0d58a5..47885e76ba 100644
--- a/xen/arch/arm/arm64/vfp.c
+++ b/xen/arch/arm/arm64/vfp.c
@@ -26,10 +26,10 @@ void vfp_save_state(struct vcpu *v)
                  "stp q30, q31, [%1, #16 * 30]\n\t"
                  : "=Q" (*v->arch.vfp.fpregs) : "r" (v->arch.vfp.fpregs));
 
-    v->arch.vfp.fpsr = READ_SYSREG32(FPSR);
-    v->arch.vfp.fpcr = READ_SYSREG32(FPCR);
+    v->arch.vfp.fpsr = READ_SYSREG(FPSR);
+    v->arch.vfp.fpcr = READ_SYSREG(FPCR);
     if ( is_32bit_domain(v->domain) )
-        v->arch.vfp.fpexc32_el2 = READ_SYSREG32(FPEXC32_EL2);
+        v->arch.vfp.fpexc32_el2 = READ_SYSREG(FPEXC32_EL2);
 }
 
 void vfp_restore_state(struct vcpu *v)
@@ -55,8 +55,8 @@ void vfp_restore_state(struct vcpu *v)
                  "ldp q30, q31, [%1, #16 * 30]\n\t"
                  : : "Q" (*v->arch.vfp.fpregs), "r" (v->arch.vfp.fpregs));
 
-    WRITE_SYSREG32(v->arch.vfp.fpsr, FPSR);
-    WRITE_SYSREG32(v->arch.vfp.fpcr, FPCR);
+    WRITE_SYSREG(v->arch.vfp.fpsr, FPSR);
+    WRITE_SYSREG(v->arch.vfp.fpcr, FPCR);
     if ( is_32bit_domain(v->domain) )
-        WRITE_SYSREG32(v->arch.vfp.fpexc32_el2, FPEXC32_EL2);
+        WRITE_SYSREG(v->arch.vfp.fpexc32_el2, FPEXC32_EL2);
 }
diff --git a/xen/include/asm-arm/arm64/vfp.h b/xen/include/asm-arm/arm64/vfp.h
index 6ab5d36c6c..e6e8c363bc 100644
--- a/xen/include/asm-arm/arm64/vfp.h
+++ b/xen/include/asm-arm/arm64/vfp.h
@@ -7,9 +7,9 @@
 struct vfp_state
 {
     uint64_t fpregs[64] __vfp_aligned;
-    uint32_t fpcr;
-    uint32_t fpexc32_el2;
-    uint32_t fpsr;
+    register_t fpcr;
+    register_t fpexc32_el2;
+    register_t fpsr;
 };
 
 #endif /* _ARM_ARM64_VFP_H */
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122822.231710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBu-00052B-0w; Wed, 05 May 2021 07:43:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122822.231710; Wed, 05 May 2021 07:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBt-00051z-TW; Wed, 05 May 2021 07:43:29 +0000
Received: by outflank-mailman (input) for mailman id 122822;
 Wed, 05 May 2021 07:43:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCBs-00050P-Rv
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:28 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 26bd840a-f6da-4ce2-8f2b-87ef214a59a5;
 Wed, 05 May 2021 07:43:28 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id CA06A11FB;
 Wed,  5 May 2021 00:43:27 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 728C93F718;
 Wed,  5 May 2021 00:43:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26bd840a-f6da-4ce2-8f2b-87ef214a59a5
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 06/10] arm/p2m: Get rid of READ/WRITE_SYSREG32
Date: Wed,  5 May 2021 09:43:04 +0200
Message-Id: <20210505074308.11016-7-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
helpers READ/WRITE_SYSREG32 in favour of READ/WRITE_SYSREG.
We should also use the register_t type when reading sysregs, as it
can correspond to uint64_t or uint32_t.
Even though many AArch64 registers have their upper 32 bits reserved,
it does not mean that they can't be widened in the future.

Modify the type of vtcr to register_t.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/p2m.c   | 8 ++++----
 xen/arch/arm/traps.c | 2 +-
 2 files changed, 5 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index ac50312620..d414c4feb9 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -1973,11 +1973,11 @@ void __init p2m_restrict_ipa_bits(unsigned int ipa_bits)
 }
 
 /* VTCR value to be configured by all CPUs. Set only once by the boot CPU */
-static uint32_t __read_mostly vtcr;
+static register_t __read_mostly vtcr;
 
 static void setup_virt_paging_one(void *data)
 {
-    WRITE_SYSREG32(vtcr, VTCR_EL2);
+    WRITE_SYSREG(vtcr, VTCR_EL2);
 
     /*
      * ARM64_WORKAROUND_AT_SPECULATE: We want to keep the TLBs free from
@@ -2000,7 +2000,7 @@ static void setup_virt_paging_one(void *data)
 void __init setup_virt_paging(void)
 {
     /* Setup Stage 2 address translation */
-    unsigned long val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
+    register_t val = VTCR_RES1|VTCR_SH0_IS|VTCR_ORGN0_WBWA|VTCR_IRGN0_WBWA;
 
 #ifdef CONFIG_ARM_32
     if ( p2m_ipa_bits < 40 )
@@ -2089,7 +2089,7 @@ void __init setup_virt_paging(void)
            pa_range_info[pa_range].pabits,
            ( MAX_VMID == MAX_VMID_16_BIT ) ? 16 : 8);
 #endif
-    printk("P2M: %d levels with order-%d root, VTCR 0x%lx\n",
+    printk("P2M: %d levels with order-%d root, VTCR 0x%"PRIregister"\n",
            4 - P2M_ROOT_LEVEL, P2M_ROOT_ORDER, val);
 
     p2m_vmid_allocator_init();
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index ccc0827107..c7acdb2087 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -911,7 +911,7 @@ static void _show_registers(const struct cpu_user_regs *regs,
         show_registers_32(regs, ctxt, guest_mode, v);
 #endif
     }
-    printk("  VTCR_EL2: %08"PRIx32"\n", READ_SYSREG32(VTCR_EL2));
+    printk("  VTCR_EL2: %"PRIregister"\n", READ_SYSREG(VTCR_EL2));
     printk(" VTTBR_EL2: %016"PRIx64"\n", ctxt->vttbr_el2);
     printk("\n");
 
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122823.231714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBu-00055e-Ex; Wed, 05 May 2021 07:43:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122823.231714; Wed, 05 May 2021 07:43:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBu-00054e-7B; Wed, 05 May 2021 07:43:30 +0000
Received: by outflank-mailman (input) for mailman id 122823;
 Wed, 05 May 2021 07:43:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCBt-00044r-PI
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:29 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 821e5e6c-2cea-4e11-87c1-7ae54060627c;
 Wed, 05 May 2021 07:43:20 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B1A5D11FB;
 Wed,  5 May 2021 00:43:20 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7CECB3F718;
 Wed,  5 May 2021 00:43:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 821e5e6c-2cea-4e11-87c1-7ae54060627c
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com
Subject: [PATCH v3 02/10] arm/domain: Get rid of READ/WRITE_SYSREG32
Date: Wed,  5 May 2021 09:43:00 +0200
Message-Id: <20210505074308.11016-3-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
helpers READ/WRITE_SYSREG32 in favour of READ/WRITE_SYSREG.
We should also use the register_t type when reading sysregs, as it
can correspond to uint64_t or uint32_t.
Even though many AArch64 registers have their upper 32 bits reserved,
it does not mean that they can't be widened in the future.

Modify the type of the register cntkctl to register_t.

Modify accesses to the ThumbEE registers to use READ/WRITE_SYSREG.
The ThumbEE registers are only usable by a 32-bit domain and in fact
should only be accessed on ARMv7, as they were retrospectively
dropped from ARMv8.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
Changes since v2:
-Modify the commit message
Changes since v1:
-Move modification of ACTLR into separate patch
---
 xen/arch/arm/domain.c        | 18 +++++++++---------
 xen/include/asm-arm/domain.h |  2 +-
 2 files changed, 10 insertions(+), 10 deletions(-)

diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index bdd3d3e5b5..621f518b83 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -113,13 +113,13 @@ static void ctxt_switch_from(struct vcpu *p)
     p->arch.tpidr_el1 = READ_SYSREG(TPIDR_EL1);
 
     /* Arch timer */
-    p->arch.cntkctl = READ_SYSREG32(CNTKCTL_EL1);
+    p->arch.cntkctl = READ_SYSREG(CNTKCTL_EL1);
     virt_timer_save(p);
 
     if ( is_32bit_domain(p->domain) && cpu_has_thumbee )
     {
-        p->arch.teecr = READ_SYSREG32(TEECR32_EL1);
-        p->arch.teehbr = READ_SYSREG32(TEEHBR32_EL1);
+        p->arch.teecr = READ_SYSREG(TEECR32_EL1);
+        p->arch.teehbr = READ_SYSREG(TEEHBR32_EL1);
     }
 
 #ifdef CONFIG_ARM_32
@@ -175,7 +175,7 @@ static void ctxt_switch_from(struct vcpu *p)
 
 static void ctxt_switch_to(struct vcpu *n)
 {
-    uint32_t vpidr;
+    register_t vpidr;
 
     /* When the idle VCPU is running, Xen will always stay in hypervisor
      * mode. Therefore we don't need to restore the context of an idle VCPU.
@@ -183,8 +183,8 @@ static void ctxt_switch_to(struct vcpu *n)
     if ( is_idle_vcpu(n) )
         return;
 
-    vpidr = READ_SYSREG32(MIDR_EL1);
-    WRITE_SYSREG32(vpidr, VPIDR_EL2);
+    vpidr = READ_SYSREG(MIDR_EL1);
+    WRITE_SYSREG(vpidr, VPIDR_EL2);
     WRITE_SYSREG(n->arch.vmpidr, VMPIDR_EL2);
 
     /* VGIC */
@@ -257,8 +257,8 @@ static void ctxt_switch_to(struct vcpu *n)
 
     if ( is_32bit_domain(n->domain) && cpu_has_thumbee )
     {
-        WRITE_SYSREG32(n->arch.teecr, TEECR32_EL1);
-        WRITE_SYSREG32(n->arch.teehbr, TEEHBR32_EL1);
+        WRITE_SYSREG(n->arch.teecr, TEECR32_EL1);
+        WRITE_SYSREG(n->arch.teehbr, TEEHBR32_EL1);
     }
 
 #ifdef CONFIG_ARM_32
@@ -274,7 +274,7 @@ static void ctxt_switch_to(struct vcpu *n)
 
     /* This is could trigger an hardware interrupt from the virtual
      * timer. The interrupt needs to be injected into the guest. */
-    WRITE_SYSREG32(n->arch.cntkctl, CNTKCTL_EL1);
+    WRITE_SYSREG(n->arch.cntkctl, CNTKCTL_EL1);
     virt_timer_restore(n);
 }
 
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 0a74df9931..c6b59ee755 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -190,7 +190,7 @@ struct arch_vcpu
     struct vgic_cpu vgic;
 
     /* Timer registers  */
-    uint32_t cntkctl;
+    register_t cntkctl;
 
     struct vtimer phys_timer;
     struct vtimer virt_timer;
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122824.231734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBz-0005ss-Qy; Wed, 05 May 2021 07:43:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122824.231734; Wed, 05 May 2021 07:43:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCBz-0005sh-LZ; Wed, 05 May 2021 07:43:35 +0000
Received: by outflank-mailman (input) for mailman id 122824;
 Wed, 05 May 2021 07:43:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCBy-0005l9-8c
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:34 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 646ac69b-4168-46e5-9002-ca2397ac07e5;
 Wed, 05 May 2021 07:43:33 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E473411FB;
 Wed,  5 May 2021 00:43:32 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 82C083F718;
 Wed,  5 May 2021 00:43:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 646ac69b-4168-46e5-9002-ca2397ac07e5
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 09/10] arm/time,vtimer: Get rid of READ/WRITE_SYSREG32
Date: Wed,  5 May 2021 09:43:07 +0200
Message-Id: <20210505074308.11016-10-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
helpers READ/WRITE_SYSREG32 in favour of READ/WRITE_SYSREG.
We should also use the register_t type when reading sysregs, as it
can correspond to uint64_t or uint32_t.
Even though many AArch64 registers have their upper 32 bits reserved,
it does not mean that they can't be widened in the future.

Modify the type of the vtimer structure's ctl member to register_t.

Add a macro CNTFRQ_MASK containing the mask for the timer clock
frequency field of the CNTFRQ_EL0 register.

Modify the CNTx_CTL_* macros to return unsigned long instead of
unsigned int, as ctl is now of type register_t.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/time.c             | 28 ++++++++++++++--------------
 xen/arch/arm/vtimer.c           | 10 +++++-----
 xen/include/asm-arm/domain.h    |  2 +-
 xen/include/asm-arm/processor.h |  5 ++++-
 4 files changed, 24 insertions(+), 21 deletions(-)

diff --git a/xen/arch/arm/time.c b/xen/arch/arm/time.c
index b0021c2c69..7dbd363537 100644
--- a/xen/arch/arm/time.c
+++ b/xen/arch/arm/time.c
@@ -145,7 +145,7 @@ void __init preinit_xen_time(void)
         preinit_acpi_xen_time();
 
     if ( !cpu_khz )
-        cpu_khz = READ_SYSREG32(CNTFRQ_EL0) / 1000;
+        cpu_khz = (READ_SYSREG(CNTFRQ_EL0) & CNTFRQ_MASK) / 1000;
 
     res = platform_init_time();
     if ( res )
@@ -205,13 +205,13 @@ int reprogram_timer(s_time_t timeout)
 
     if ( timeout == 0 )
     {
-        WRITE_SYSREG32(0, CNTHP_CTL_EL2);
+        WRITE_SYSREG(0, CNTHP_CTL_EL2);
         return 1;
     }
 
     deadline = ns_to_ticks(timeout) + boot_count;
     WRITE_SYSREG64(deadline, CNTHP_CVAL_EL2);
-    WRITE_SYSREG32(CNTx_CTL_ENABLE, CNTHP_CTL_EL2);
+    WRITE_SYSREG(CNTx_CTL_ENABLE, CNTHP_CTL_EL2);
     isb();
 
     /* No need to check for timers in the past; the Generic Timer fires
@@ -223,23 +223,23 @@ int reprogram_timer(s_time_t timeout)
 static void timer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 {
     if ( irq == (timer_irq[TIMER_HYP_PPI]) &&
-         READ_SYSREG32(CNTHP_CTL_EL2) & CNTx_CTL_PENDING )
+         READ_SYSREG(CNTHP_CTL_EL2) & CNTx_CTL_PENDING )
     {
         perfc_incr(hyp_timer_irqs);
         /* Signal the generic timer code to do its work */
         raise_softirq(TIMER_SOFTIRQ);
         /* Disable the timer to avoid more interrupts */
-        WRITE_SYSREG32(0, CNTHP_CTL_EL2);
+        WRITE_SYSREG(0, CNTHP_CTL_EL2);
     }
 
     if ( irq == (timer_irq[TIMER_PHYS_NONSECURE_PPI]) &&
-         READ_SYSREG32(CNTP_CTL_EL0) & CNTx_CTL_PENDING )
+         READ_SYSREG(CNTP_CTL_EL0) & CNTx_CTL_PENDING )
     {
         perfc_incr(phys_timer_irqs);
         /* Signal the generic timer code to do its work */
         raise_softirq(TIMER_SOFTIRQ);
         /* Disable the timer to avoid more interrupts */
-        WRITE_SYSREG32(0, CNTP_CTL_EL0);
+        WRITE_SYSREG(0, CNTP_CTL_EL0);
     }
 }
 
@@ -260,8 +260,8 @@ static void vtimer_interrupt(int irq, void *dev_id, struct cpu_user_regs *regs)
 
     perfc_incr(virt_timer_irqs);
 
-    current->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
-    WRITE_SYSREG32(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
+    current->arch.virt_timer.ctl = READ_SYSREG(CNTV_CTL_EL0);
+    WRITE_SYSREG(current->arch.virt_timer.ctl | CNTx_CTL_MASK, CNTV_CTL_EL0);
     vgic_inject_irq(current->domain, current, current->arch.virt_timer.irq, true);
 }
 
@@ -297,9 +297,9 @@ void init_timer_interrupt(void)
     /* Sensible defaults */
     WRITE_SYSREG64(0, CNTVOFF_EL2);     /* No VM-specific offset */
     /* Do not let the VMs program the physical timer, only read the physical counter */
-    WRITE_SYSREG32(CNTHCTL_EL2_EL1PCTEN, CNTHCTL_EL2);
-    WRITE_SYSREG32(0, CNTP_CTL_EL0);    /* Physical timer disabled */
-    WRITE_SYSREG32(0, CNTHP_CTL_EL2);   /* Hypervisor's timer disabled */
+    WRITE_SYSREG(CNTHCTL_EL2_EL1PCTEN, CNTHCTL_EL2);
+    WRITE_SYSREG(0, CNTP_CTL_EL0);    /* Physical timer disabled */
+    WRITE_SYSREG(0, CNTHP_CTL_EL2);   /* Hypervisor's timer disabled */
     isb();
 
     request_irq(timer_irq[TIMER_HYP_PPI], 0, timer_interrupt,
@@ -320,8 +320,8 @@ void init_timer_interrupt(void)
  */
 static void deinit_timer_interrupt(void)
 {
-    WRITE_SYSREG32(0, CNTP_CTL_EL0);    /* Disable physical timer */
-    WRITE_SYSREG32(0, CNTHP_CTL_EL2);   /* Disable hypervisor's timer */
+    WRITE_SYSREG(0, CNTP_CTL_EL0);    /* Disable physical timer */
+    WRITE_SYSREG(0, CNTHP_CTL_EL2);   /* Disable hypervisor's timer */
     isb();
 
     release_irq(timer_irq[TIMER_HYP_PPI], NULL);
diff --git a/xen/arch/arm/vtimer.c b/xen/arch/arm/vtimer.c
index c2b27915c6..167fc6127a 100644
--- a/xen/arch/arm/vtimer.c
+++ b/xen/arch/arm/vtimer.c
@@ -138,8 +138,8 @@ void virt_timer_save(struct vcpu *v)
 {
     ASSERT(!is_idle_vcpu(v));
 
-    v->arch.virt_timer.ctl = READ_SYSREG32(CNTV_CTL_EL0);
-    WRITE_SYSREG32(v->arch.virt_timer.ctl & ~CNTx_CTL_ENABLE, CNTV_CTL_EL0);
+    v->arch.virt_timer.ctl = READ_SYSREG(CNTV_CTL_EL0);
+    WRITE_SYSREG(v->arch.virt_timer.ctl & ~CNTx_CTL_ENABLE, CNTV_CTL_EL0);
     v->arch.virt_timer.cval = READ_SYSREG64(CNTV_CVAL_EL0);
     if ( (v->arch.virt_timer.ctl & CNTx_CTL_ENABLE) &&
          !(v->arch.virt_timer.ctl & CNTx_CTL_MASK))
@@ -159,7 +159,7 @@ void virt_timer_restore(struct vcpu *v)
 
     WRITE_SYSREG64(v->domain->arch.virt_timer_base.offset, CNTVOFF_EL2);
     WRITE_SYSREG64(v->arch.virt_timer.cval, CNTV_CVAL_EL0);
-    WRITE_SYSREG32(v->arch.virt_timer.ctl, CNTV_CTL_EL0);
+    WRITE_SYSREG(v->arch.virt_timer.ctl, CNTV_CTL_EL0);
 }
 
 static bool vtimer_cntp_ctl(struct cpu_user_regs *regs, uint32_t *r, bool read)
@@ -347,7 +347,7 @@ bool vtimer_emulate(struct cpu_user_regs *regs, union hsr hsr)
 }
 
 static void vtimer_update_irq(struct vcpu *v, struct vtimer *vtimer,
-                              uint32_t vtimer_ctl)
+                              register_t vtimer_ctl)
 {
     bool level;
 
@@ -389,7 +389,7 @@ void vtimer_update_irqs(struct vcpu *v)
      * but this requires reworking the arch timer to implement this.
      */
     vtimer_update_irq(v, &v->arch.virt_timer,
-                      READ_SYSREG32(CNTV_CTL_EL0) & ~CNTx_CTL_MASK);
+                      READ_SYSREG(CNTV_CTL_EL0) & ~CNTx_CTL_MASK);
 
     /* For the physical timer we rely on our emulated state. */
     vtimer_update_irq(v, &v->arch.phys_timer, v->arch.phys_timer.ctl);
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index 2d4f38c669..c9277b5c6d 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -36,7 +36,7 @@ struct vtimer {
     struct vcpu *v;
     int irq;
     struct timer timer;
-    uint32_t ctl;
+    register_t ctl;
     uint64_t cval;
 };
 
diff --git a/xen/include/asm-arm/processor.h b/xen/include/asm-arm/processor.h
index 5c1768cdec..2577e9a244 100644
--- a/xen/include/asm-arm/processor.h
+++ b/xen/include/asm-arm/processor.h
@@ -485,9 +485,12 @@ extern register_t __cpu_logical_map[];
 
 /* Timer control registers */
 #define CNTx_CTL_ENABLE   (1u<<0)  /* Enable timer */
-#define CNTx_CTL_MASK     (1u<<1)  /* Mask IRQ */
+#define CNTx_CTL_MASK     (1ul<<1)  /* Mask IRQ */
 #define CNTx_CTL_PENDING  (1u<<2)  /* IRQ pending */
 
+/* Timer frequency mask */
+#define CNTFRQ_MASK       GENMASK(31, 0)
+
 /* Exception Vector offsets */
 /* ... ARM32 */
 #define VECTOR32_RST  0
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122825.231740 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCC0-00060P-BX; Wed, 05 May 2021 07:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122825.231740; Wed, 05 May 2021 07:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCC0-0005wm-25; Wed, 05 May 2021 07:43:36 +0000
Received: by outflank-mailman (input) for mailman id 122825;
 Wed, 05 May 2021 07:43:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCBy-00044r-PU
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:34 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 3f8ccee7-9bfa-46d6-9356-2e342b9ded43;
 Wed, 05 May 2021 07:43:24 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 377581063;
 Wed,  5 May 2021 00:43:24 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7FBD83F718;
 Wed,  5 May 2021 00:43:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f8ccee7-9bfa-46d6-9356-2e342b9ded43
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 04/10] arm/gic: Remove member hcr of structure gic_v3
Date: Wed,  5 May 2021 09:43:02 +0200
Message-Id: <20210505074308.11016-5-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

... as it is never used even in the patch introducing it.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/asm-arm/gic.h | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index ad0f7452d0..5069ab4aac 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -171,7 +171,7 @@
  * GICv3 registers that needs to be saved/restored
  */
 struct gic_v3 {
-    uint32_t hcr, vmcr, sre_el1;
+    uint32_t vmcr, sre_el1;
     uint32_t apr0[4];
     uint32_t apr1[4];
     uint64_t lr[16];
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122827.231758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCC4-0006rF-Mp; Wed, 05 May 2021 07:43:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122827.231758; Wed, 05 May 2021 07:43:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCC4-0006r4-I2; Wed, 05 May 2021 07:43:40 +0000
Received: by outflank-mailman (input) for mailman id 122827;
 Wed, 05 May 2021 07:43:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCC3-00044r-Pa
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:39 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id cbe60f71-16dc-42fd-806e-3a109b5e7248;
 Wed, 05 May 2021 07:43:26 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2B7121063;
 Wed,  5 May 2021 00:43:26 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 9246E3F718;
 Wed,  5 May 2021 00:43:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cbe60f71-16dc-42fd-806e-3a109b5e7248
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com
Subject: [PATCH v3 05/10] arm/gic: Get rid of READ/WRITE_SYSREG32
Date: Wed,  5 May 2021 09:43:03 +0200
Message-Id: <20210505074308.11016-6-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
helpers READ/WRITE_SYSREG32 in favour of READ/WRITE_SYSREG.
We should also use the register_t type when reading sysregs, as it
can correspond to uint64_t or uint32_t.
Even though many AArch64 registers have their upper 32 bits reserved,
it does not mean that they can't be widened in the future.

Modify the types of the following members of struct gic_v3 to
register_t:
-vmcr
-sre_el1
-apr0
-apr1

Add a new macro GICC_IAR_INTID_MASK containing the mask for the
INTID field of the ICC_IAR0/1_EL1 register, as only bits [23:0] of
IAR contain the interrupt number; the rest are RES0.
Therefore, take the opportunity to mask out the top bits, as they
should not be used for an IRQ number (we don't know how the top bits
will be used in the future).

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
Changes since v2:
-Modify the commit message
Changes since v1:
-Remove hcr member of gic_v3 in a separate patch
-Add macro GICC_IAR_INTID_MASK
-Remove explicit cast in favor of implicit cast
---
 xen/arch/arm/gic-v3-lpi.c         |  2 +-
 xen/arch/arm/gic-v3.c             | 98 ++++++++++++++++---------------
 xen/include/asm-arm/gic.h         |  6 +-
 xen/include/asm-arm/gic_v3_defs.h |  2 +
 4 files changed, 58 insertions(+), 50 deletions(-)

diff --git a/xen/arch/arm/gic-v3-lpi.c b/xen/arch/arm/gic-v3-lpi.c
index 869bc97fa1..e1594dd20e 100644
--- a/xen/arch/arm/gic-v3-lpi.c
+++ b/xen/arch/arm/gic-v3-lpi.c
@@ -178,7 +178,7 @@ void gicv3_do_LPI(unsigned int lpi)
     irq_enter();
 
     /* EOI the LPI already. */
-    WRITE_SYSREG32(lpi, ICC_EOIR1_EL1);
+    WRITE_SYSREG(lpi, ICC_EOIR1_EL1);
 
     /* Find out if a guest mapped something to this physical LPI. */
     hlpip = gic_get_host_lpi(lpi);
diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index ac28013c19..b86f040589 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -246,12 +246,12 @@ static void gicv3_ich_write_lr(int lr, uint64_t val)
  */
 static void gicv3_enable_sre(void)
 {
-    uint32_t val;
+    register_t val;
 
-    val = READ_SYSREG32(ICC_SRE_EL2);
+    val = READ_SYSREG(ICC_SRE_EL2);
     val |= GICC_SRE_EL2_SRE;
 
-    WRITE_SYSREG32(val, ICC_SRE_EL2);
+    WRITE_SYSREG(val, ICC_SRE_EL2);
     isb();
 }
 
@@ -315,16 +315,16 @@ static void restore_aprn_regs(const union gic_state_data *d)
     switch ( gicv3.nr_priorities )
     {
     case 7:
-        WRITE_SYSREG32(d->v3.apr0[2], ICH_AP0R2_EL2);
-        WRITE_SYSREG32(d->v3.apr1[2], ICH_AP1R2_EL2);
+        WRITE_SYSREG(d->v3.apr0[2], ICH_AP0R2_EL2);
+        WRITE_SYSREG(d->v3.apr1[2], ICH_AP1R2_EL2);
         /* Fall through */
     case 6:
-        WRITE_SYSREG32(d->v3.apr0[1], ICH_AP0R1_EL2);
-        WRITE_SYSREG32(d->v3.apr1[1], ICH_AP1R1_EL2);
+        WRITE_SYSREG(d->v3.apr0[1], ICH_AP0R1_EL2);
+        WRITE_SYSREG(d->v3.apr1[1], ICH_AP1R1_EL2);
         /* Fall through */
     case 5:
-        WRITE_SYSREG32(d->v3.apr0[0], ICH_AP0R0_EL2);
-        WRITE_SYSREG32(d->v3.apr1[0], ICH_AP1R0_EL2);
+        WRITE_SYSREG(d->v3.apr0[0], ICH_AP0R0_EL2);
+        WRITE_SYSREG(d->v3.apr1[0], ICH_AP1R0_EL2);
         break;
     default:
         BUG();
@@ -338,16 +338,16 @@ static void save_aprn_regs(union gic_state_data *d)
     switch ( gicv3.nr_priorities )
     {
     case 7:
-        d->v3.apr0[2] = READ_SYSREG32(ICH_AP0R2_EL2);
-        d->v3.apr1[2] = READ_SYSREG32(ICH_AP1R2_EL2);
+        d->v3.apr0[2] = READ_SYSREG(ICH_AP0R2_EL2);
+        d->v3.apr1[2] = READ_SYSREG(ICH_AP1R2_EL2);
         /* Fall through */
     case 6:
-        d->v3.apr0[1] = READ_SYSREG32(ICH_AP0R1_EL2);
-        d->v3.apr1[1] = READ_SYSREG32(ICH_AP1R1_EL2);
+        d->v3.apr0[1] = READ_SYSREG(ICH_AP0R1_EL2);
+        d->v3.apr1[1] = READ_SYSREG(ICH_AP1R1_EL2);
         /* Fall through */
     case 5:
-        d->v3.apr0[0] = READ_SYSREG32(ICH_AP0R0_EL2);
-        d->v3.apr1[0] = READ_SYSREG32(ICH_AP1R0_EL2);
+        d->v3.apr0[0] = READ_SYSREG(ICH_AP0R0_EL2);
+        d->v3.apr1[0] = READ_SYSREG(ICH_AP1R0_EL2);
         break;
     default:
         BUG();
@@ -371,15 +371,15 @@ static void gicv3_save_state(struct vcpu *v)
     dsb(sy);
     gicv3_save_lrs(v);
     save_aprn_regs(&v->arch.gic);
-    v->arch.gic.v3.vmcr = READ_SYSREG32(ICH_VMCR_EL2);
-    v->arch.gic.v3.sre_el1 = READ_SYSREG32(ICC_SRE_EL1);
+    v->arch.gic.v3.vmcr = READ_SYSREG(ICH_VMCR_EL2);
+    v->arch.gic.v3.sre_el1 = READ_SYSREG(ICC_SRE_EL1);
 }
 
 static void gicv3_restore_state(const struct vcpu *v)
 {
-    uint32_t val;
+    register_t val;
 
-    val = READ_SYSREG32(ICC_SRE_EL2);
+    val = READ_SYSREG(ICC_SRE_EL2);
     /*
      * Don't give access to system registers when the guest is using
      * GICv2
@@ -388,7 +388,7 @@ static void gicv3_restore_state(const struct vcpu *v)
         val &= ~GICC_SRE_EL2_ENEL1;
     else
         val |= GICC_SRE_EL2_ENEL1;
-    WRITE_SYSREG32(val, ICC_SRE_EL2);
+    WRITE_SYSREG(val, ICC_SRE_EL2);
 
     /*
      * VFIQEn is RES1 if ICC_SRE_EL1.SRE is 1. This causes a Group0
@@ -398,9 +398,9 @@ static void gicv3_restore_state(const struct vcpu *v)
      * want before starting to mess with the rest of the GIC, and
      * VMCR_EL1 in particular.
      */
-    WRITE_SYSREG32(v->arch.gic.v3.sre_el1, ICC_SRE_EL1);
+    WRITE_SYSREG(v->arch.gic.v3.sre_el1, ICC_SRE_EL1);
     isb();
-    WRITE_SYSREG32(v->arch.gic.v3.vmcr, ICH_VMCR_EL2);
+    WRITE_SYSREG(v->arch.gic.v3.vmcr, ICH_VMCR_EL2);
     restore_aprn_regs(&v->arch.gic);
     gicv3_restore_lrs(v);
 
@@ -468,24 +468,25 @@ static void gicv3_mask_irq(struct irq_desc *irqd)
 static void gicv3_eoi_irq(struct irq_desc *irqd)
 {
     /* Lower the priority */
-    WRITE_SYSREG32(irqd->irq, ICC_EOIR1_EL1);
+    WRITE_SYSREG(irqd->irq, ICC_EOIR1_EL1);
     isb();
 }
 
 static void gicv3_dir_irq(struct irq_desc *irqd)
 {
     /* Deactivate */
-    WRITE_SYSREG32(irqd->irq, ICC_DIR_EL1);
+    WRITE_SYSREG(irqd->irq, ICC_DIR_EL1);
     isb();
 }
 
 static unsigned int gicv3_read_irq(void)
 {
-    unsigned int irq = READ_SYSREG32(ICC_IAR1_EL1);
+    register_t irq = READ_SYSREG(ICC_IAR1_EL1);
 
     dsb(sy);
 
-    return irq;
+    /* IRQs are encoded using 24 bits (INTID field, bits [23:0]). */
+    return (irq & GICC_IAR_INTID_MASK);
 }
 
 /*
@@ -857,16 +858,16 @@ static int gicv3_cpu_init(void)
     gicv3_enable_sre();
 
     /* No priority grouping */
-    WRITE_SYSREG32(0, ICC_BPR1_EL1);
+    WRITE_SYSREG(0, ICC_BPR1_EL1);
 
     /* Set priority mask register */
-    WRITE_SYSREG32(DEFAULT_PMR_VALUE, ICC_PMR_EL1);
+    WRITE_SYSREG(DEFAULT_PMR_VALUE, ICC_PMR_EL1);
 
     /* EOI drops priority, DIR deactivates the interrupt (mode 1) */
-    WRITE_SYSREG32(GICC_CTLR_EL1_EOImode_drop, ICC_CTLR_EL1);
+    WRITE_SYSREG(GICC_CTLR_EL1_EOImode_drop, ICC_CTLR_EL1);
 
     /* Enable Group1 interrupts */
-    WRITE_SYSREG32(1, ICC_IGRPEN1_EL1);
+    WRITE_SYSREG(1, ICC_IGRPEN1_EL1);
 
     /* Sync at once at the end of cpu interface configuration */
     isb();
@@ -876,15 +877,15 @@ static int gicv3_cpu_init(void)
 
 static void gicv3_cpu_disable(void)
 {
-    WRITE_SYSREG32(0, ICC_CTLR_EL1);
+    WRITE_SYSREG(0, ICC_CTLR_EL1);
     isb();
 }
 
 static void gicv3_hyp_init(void)
 {
-    uint32_t vtr;
+    register_t vtr;
 
-    vtr = READ_SYSREG32(ICH_VTR_EL2);
+    vtr = READ_SYSREG(ICH_VTR_EL2);
     gicv3_info.nr_lrs  = (vtr & ICH_VTR_NRLRGS) + 1;
     gicv3.nr_priorities = ((vtr >> ICH_VTR_PRIBITS_SHIFT) &
                           ICH_VTR_PRIBITS_MASK) + 1;
@@ -892,8 +893,8 @@ static void gicv3_hyp_init(void)
     if ( !((gicv3.nr_priorities > 4) && (gicv3.nr_priorities < 8)) )
         panic("GICv3: Invalid number of priority bits\n");
 
-    WRITE_SYSREG32(ICH_VMCR_EOI | ICH_VMCR_VENG1, ICH_VMCR_EL2);
-    WRITE_SYSREG32(GICH_HCR_EN, ICH_HCR_EL2);
+    WRITE_SYSREG(ICH_VMCR_EOI | ICH_VMCR_VENG1, ICH_VMCR_EL2);
+    WRITE_SYSREG(GICH_HCR_EN, ICH_HCR_EL2);
 }
 
 /* Set up the per-CPU parts of the GIC for a secondary CPU */
@@ -917,11 +918,11 @@ out:
 
 static void gicv3_hyp_disable(void)
 {
-    uint32_t hcr;
+    register_t hcr;
 
-    hcr = READ_SYSREG32(ICH_HCR_EL2);
+    hcr = READ_SYSREG(ICH_HCR_EL2);
     hcr &= ~GICH_HCR_EN;
-    WRITE_SYSREG32(hcr, ICH_HCR_EL2);
+    WRITE_SYSREG(hcr, ICH_HCR_EL2);
     isb();
 }
 
@@ -1140,39 +1141,47 @@ static void gicv3_write_lr(int lr_reg, const struct gic_lr *lr)
 
 static void gicv3_hcr_status(uint32_t flag, bool status)
 {
-    uint32_t hcr;
+    register_t hcr;
 
-    hcr = READ_SYSREG32(ICH_HCR_EL2);
+    hcr = READ_SYSREG(ICH_HCR_EL2);
     if ( status )
-        WRITE_SYSREG32(hcr | flag, ICH_HCR_EL2);
+        WRITE_SYSREG(hcr | flag, ICH_HCR_EL2);
     else
-        WRITE_SYSREG32(hcr & (~flag), ICH_HCR_EL2);
+        WRITE_SYSREG(hcr & (~flag), ICH_HCR_EL2);
     isb();
 }
 
 static unsigned int gicv3_read_vmcr_priority(void)
 {
-   return ((READ_SYSREG32(ICH_VMCR_EL2) >> ICH_VMCR_PRIORITY_SHIFT) &
+   return ((READ_SYSREG(ICH_VMCR_EL2) >> ICH_VMCR_PRIORITY_SHIFT) &
            ICH_VMCR_PRIORITY_MASK);
 }
 
 /* Only support reading GRP1 APRn registers */
 static unsigned int gicv3_read_apr(int apr_reg)
 {
+    register_t apr;
+
     switch ( apr_reg )
     {
     case 0:
         ASSERT(gicv3.nr_priorities > 4 && gicv3.nr_priorities < 8);
-        return READ_SYSREG32(ICH_AP1R0_EL2);
+        apr = READ_SYSREG(ICH_AP1R0_EL2);
+        break;
     case 1:
         ASSERT(gicv3.nr_priorities > 5 && gicv3.nr_priorities < 8);
-        return READ_SYSREG32(ICH_AP1R1_EL2);
+        apr = READ_SYSREG(ICH_AP1R1_EL2);
+        break;
     case 2:
         ASSERT(gicv3.nr_priorities > 6 && gicv3.nr_priorities < 8);
-        return READ_SYSREG32(ICH_AP1R2_EL2);
+        apr = READ_SYSREG(ICH_AP1R2_EL2);
+        break;
     default:
         BUG();
     }
+
+    /* The number of priority levels does not exceed 32 bits. */
+    return apr;
 }
 
 static bool gicv3_read_pending_state(struct irq_desc *irqd)
diff --git a/xen/include/asm-arm/gic.h b/xen/include/asm-arm/gic.h
index 5069ab4aac..c7f0c343d1 100644
--- a/xen/include/asm-arm/gic.h
+++ b/xen/include/asm-arm/gic.h
@@ -171,9 +171,9 @@
  * GICv3 registers that needs to be saved/restored
  */
 struct gic_v3 {
-    uint32_t vmcr, sre_el1;
-    uint32_t apr0[4];
-    uint32_t apr1[4];
+    register_t vmcr, sre_el1;
+    register_t apr0[4];
+    register_t apr1[4];
     uint64_t lr[16];
 };
 #endif
diff --git a/xen/include/asm-arm/gic_v3_defs.h b/xen/include/asm-arm/gic_v3_defs.h
index 5a578e7c11..34ed5f857d 100644
--- a/xen/include/asm-arm/gic_v3_defs.h
+++ b/xen/include/asm-arm/gic_v3_defs.h
@@ -45,6 +45,8 @@
 #define GICC_SRE_EL2_DIB             (1UL << 2)
 #define GICC_SRE_EL2_ENEL1           (1UL << 3)
 
+#define GICC_IAR_INTID_MASK          (0xFFFFFF)
+
 /* Additional bits in GICD_TYPER defined by GICv3 */
 #define GICD_TYPE_ID_BITS_SHIFT 19
 #define GICD_TYPE_ID_BITS(r)    ((((r) >> GICD_TYPE_ID_BITS_SHIFT) & 0x1f) + 1)
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122830.231770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCCA-0007dd-C6; Wed, 05 May 2021 07:43:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122830.231770; Wed, 05 May 2021 07:43:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCCA-0007dI-7m; Wed, 05 May 2021 07:43:46 +0000
Received: by outflank-mailman (input) for mailman id 122830;
 Wed, 05 May 2021 07:43:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCC8-00044r-Pp
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:44 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id f23ef506-3155-4de1-a8c8-171db1729383;
 Wed, 05 May 2021 07:43:29 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 642841063;
 Wed,  5 May 2021 00:43:29 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 1C31D3F718;
 Wed,  5 May 2021 00:43:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f23ef506-3155-4de1-a8c8-171db1729383
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com
Subject: [PATCH v3 07/10] xen/arm: Always access SCTLR_EL2 using READ/WRITE_SYSREG()
Date: Wed,  5 May 2021 09:43:05 +0200
Message-Id: <20210505074308.11016-8-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The Armv8 specification describes SCTLR_EL2 as a 64-bit register
on AArch64 and a 32-bit register on AArch32 (same as Armv7).

Unfortunately, Xen is accessing the register using
READ/WRITE_SYSREG32(), which means the top 32 bits are clobbered.

This is only a latent bug so far because Xen does not yet use the top
32 bits.

There is also no change in behavior because arch/arm/arm64/head.S will
initialize SCTLR_EL2 to a sane value with the top 32 bits zeroed.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
Changes since v2:
-Modify the commit message
Changes since v1:
-Update commit message with SCTLR_EL2 analysis
---
 xen/arch/arm/mm.c    | 2 +-
 xen/arch/arm/traps.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 59f8a3f15f..0e07335291 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -613,7 +613,7 @@ void __init remove_early_mappings(void)
  */
 static void xen_pt_enforce_wnx(void)
 {
-    WRITE_SYSREG32(READ_SYSREG32(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2);
+    WRITE_SYSREG(READ_SYSREG(SCTLR_EL2) | SCTLR_Axx_ELx_WXN, SCTLR_EL2);
     /*
      * The TLBs may cache SCTLR_EL2.WXN. So ensure it is synchronized
      * before flushing the TLBs.
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index c7acdb2087..e7384381cc 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -915,7 +915,7 @@ static void _show_registers(const struct cpu_user_regs *regs,
     printk(" VTTBR_EL2: %016"PRIx64"\n", ctxt->vttbr_el2);
     printk("\n");
 
-    printk(" SCTLR_EL2: %08"PRIx32"\n", READ_SYSREG32(SCTLR_EL2));
+    printk(" SCTLR_EL2: %"PRIregister"\n", READ_SYSREG(SCTLR_EL2));
     printk("   HCR_EL2: %"PRIregister"\n", READ_SYSREG(HCR_EL2));
     printk(" TTBR0_EL2: %016"PRIx64"\n", READ_SYSREG64(TTBR0_EL2));
     printk("\n");
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122833.231782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCCE-0008Kk-OQ; Wed, 05 May 2021 07:43:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122833.231782; Wed, 05 May 2021 07:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCCE-0008KP-JF; Wed, 05 May 2021 07:43:50 +0000
Received: by outflank-mailman (input) for mailman id 122833;
 Wed, 05 May 2021 07:43:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCCD-00044r-Pv
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:49 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 545043b5-8bdc-44a9-b255-5356f035047e;
 Wed, 05 May 2021 07:43:31 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 3AD831063;
 Wed,  5 May 2021 00:43:31 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id AC22B3F718;
 Wed,  5 May 2021 00:43:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 545043b5-8bdc-44a9-b255-5356f035047e
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 08/10] arm/page: Get rid of READ/WRITE_SYSREG32
Date: Wed,  5 May 2021 09:43:06 +0200
Message-Id: <20210505074308.11016-9-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
We should also use the register_t type when reading sysregs,
which corresponds to uint64_t or uint32_t as appropriate.
Even though many AArch64 registers have their upper 32 bits reserved,
it does not mean that they can't be widened in the future.

Modify accesses to CTR_EL0 to use READ/WRITE_SYSREG.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/asm-arm/page.h | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
index 131507a517..c6f9fb0d4e 100644
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -137,10 +137,10 @@ extern size_t dcache_line_bytes;
 
 static inline size_t read_dcache_line_bytes(void)
 {
-    uint32_t ctr;
+    register_t ctr;
 
     /* Read CTR */
-    ctr = READ_SYSREG32(CTR_EL0);
+    ctr = READ_SYSREG(CTR_EL0);
 
     /* Bits 16-19 are the log2 number of words in the cacheline. */
     return (size_t) (4 << ((ctr >> 16) & 0xf));
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:43:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:43:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122841.231794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCCK-0000g9-44; Wed, 05 May 2021 07:43:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122841.231794; Wed, 05 May 2021 07:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCCK-0000fe-0E; Wed, 05 May 2021 07:43:56 +0000
Received: by outflank-mailman (input) for mailman id 122841;
 Wed, 05 May 2021 07:43:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J0XF=KA=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leCCI-00044r-Py
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:43:54 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 80cab1ab-83b0-42a0-a8d2-bf91fa95ecc9;
 Wed, 05 May 2021 07:43:36 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 20019D6E;
 Wed,  5 May 2021 00:43:36 -0700 (PDT)
Received: from e123311-lin.arm.com (unknown [10.57.0.42])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 388713F718;
 Wed,  5 May 2021 00:43:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80cab1ab-83b0-42a0-a8d2-bf91fa95ecc9
From: Michal Orzel <michal.orzel@arm.com>
To: xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Wei Liu <wl@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	bertrand.marquis@arm.com,
	wei.chen@arm.com
Subject: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to uint64_t
Date: Wed,  5 May 2021 09:43:08 +0200
Message-Id: <20210505074308.11016-11-michal.orzel@arm.com>
X-Mailer: git-send-email 2.29.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
References: <20210505074308.11016-1-michal.orzel@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
We should also use the register_t type when reading sysregs,
which corresponds to uint64_t or uint32_t as appropriate.
Even though many AArch64 registers have their upper 32 bits reserved,
it does not mean that they can't be widened in the future.

Modify the type of hsr, cpsr and spsr_el1 to uint64_t.
Previously we relied on the padding after SPSR_EL1. As the padding is
removed, make the union 64-bit so we don't corrupt SPSR_FIQ.
There is no need to modify the assembly code because the accesses were
already based on 64-bit registers, given the 32-bit padding after SPSR_EL1.

Remove 32bit padding in cpu_user_regs before spsr_fiq
as it is no longer needed due to upper union being 64bit now.
Add 64bit padding in cpu_user_regs before spsr_el1
because offset of spsr_el1 must be a multiple of 8.

Change the type of cpsr to uint64_t in the public interface
"public/arch-arm.h" to keep the ABI compatible between 32-bit and 64-bit.
Increment XEN_DOMCTL_INTERFACE_VERSION.

Change the type of cpsr to uint64_t in the public interface
"public/vm_event.h" to keep the ABI compatible between 32-bit and 64-bit.

Signed-off-by: Michal Orzel <michal.orzel@arm.com>
---
Changes since v2:
-Remove _res0 members from structures inside hsr union
-Update commit message
-Modify type of cpsr to uint64_t in public/arch-arm.h
-Increment XEN_DOMCTL_INTERFACE_VERSION
Changes since v1:
-Modify type of cpsr, spsr_el1
-Remove ifdefery in hsr union protecting _res0 members
-Fix formatting of printk calls
---
 xen/arch/arm/arm64/entry.S            |  4 ++--
 xen/arch/arm/arm64/traps.c            |  2 +-
 xen/arch/arm/arm64/vsysreg.c          |  3 ++-
 xen/arch/arm/domain.c                 |  2 +-
 xen/arch/arm/traps.c                  | 30 +++++++++++++++------------
 xen/arch/arm/vcpreg.c                 | 13 ++++++------
 xen/include/asm-arm/arm64/processor.h | 11 +++++-----
 xen/include/asm-arm/hsr.h             |  2 +-
 xen/include/public/arch-arm.h         |  4 ++--
 xen/include/public/domctl.h           |  2 +-
 xen/include/public/vm_event.h         |  3 +--
 11 files changed, 41 insertions(+), 35 deletions(-)

diff --git a/xen/arch/arm/arm64/entry.S b/xen/arch/arm/arm64/entry.S
index ab9a65fc14..fc3811ad0a 100644
--- a/xen/arch/arm/arm64/entry.S
+++ b/xen/arch/arm/arm64/entry.S
@@ -155,7 +155,7 @@
         add     x21, sp, #UREGS_CPSR
         mrs     x22, spsr_el2
         mrs     x23, esr_el2
-        stp     w22, w23, [x21]
+        stp     x22, x23, [x21]
 
         .endm
 
@@ -432,7 +432,7 @@ return_from_trap:
         msr     daifset, #IFLAGS___I_ /* Mask interrupts */
 
         ldr     x21, [sp, #UREGS_PC]            /* load ELR */
-        ldr     w22, [sp, #UREGS_CPSR]          /* load SPSR */
+        ldr     x22, [sp, #UREGS_CPSR]          /* load SPSR */
 
         pop     x0, x1
         pop     x2, x3
diff --git a/xen/arch/arm/arm64/traps.c b/xen/arch/arm/arm64/traps.c
index babfc1d884..9113a15c7a 100644
--- a/xen/arch/arm/arm64/traps.c
+++ b/xen/arch/arm/arm64/traps.c
@@ -36,7 +36,7 @@ void do_bad_mode(struct cpu_user_regs *regs, int reason)
     union hsr hsr = { .bits = regs->hsr };
 
     printk("Bad mode in %s handler detected\n", handler[reason]);
-    printk("ESR=0x%08"PRIx32":  EC=%"PRIx32", IL=%"PRIx32", ISS=%"PRIx32"\n",
+    printk("ESR=%#"PRIregister":  EC=%"PRIx32", IL=%"PRIx32", ISS=%"PRIx32"\n",
            hsr.bits, hsr.ec, hsr.len, hsr.iss);
 
     local_irq_disable();
diff --git a/xen/arch/arm/arm64/vsysreg.c b/xen/arch/arm/arm64/vsysreg.c
index 41f18612c6..caf17174b8 100644
--- a/xen/arch/arm/arm64/vsysreg.c
+++ b/xen/arch/arm/arm64/vsysreg.c
@@ -368,7 +368,8 @@ void do_sysreg(struct cpu_user_regs *regs,
                      sysreg.op2,
                      sysreg.read ? "=>" : "<=",
                      sysreg.reg, regs->pc);
-            gdprintk(XENLOG_ERR, "unhandled 64-bit sysreg access %#x\n",
+            gdprintk(XENLOG_ERR,
+                     "unhandled 64-bit sysreg access %#"PRIregister"\n",
                      hsr.bits & HSR_SYSREG_REGS_MASK);
             inject_undef_exception(regs, hsr);
             return;
diff --git a/xen/arch/arm/domain.c b/xen/arch/arm/domain.c
index c021a03c61..74bdbb9082 100644
--- a/xen/arch/arm/domain.c
+++ b/xen/arch/arm/domain.c
@@ -845,7 +845,7 @@ static int is_guest_pv32_psr(uint32_t psr)
 
 
 #ifdef CONFIG_ARM_64
-static int is_guest_pv64_psr(uint32_t psr)
+static int is_guest_pv64_psr(uint64_t psr)
 {
     if ( psr & PSR_MODE_BIT )
         return 0;
diff --git a/xen/arch/arm/traps.c b/xen/arch/arm/traps.c
index e7384381cc..c8f9773566 100644
--- a/xen/arch/arm/traps.c
+++ b/xen/arch/arm/traps.c
@@ -546,7 +546,7 @@ void inject_undef64_exception(struct cpu_user_regs *regs, int instr_len)
         PSR_IRQ_MASK | PSR_DBG_MASK;
     regs->pc = handler;
 
-    WRITE_SYSREG32(esr.bits, ESR_EL1);
+    WRITE_SYSREG(esr.bits, ESR_EL1);
 }
 
 /* Inject an abort exception into a 64 bit guest */
@@ -580,7 +580,7 @@ static void inject_abt64_exception(struct cpu_user_regs *regs,
     regs->pc = handler;
 
     WRITE_SYSREG(addr, FAR_EL1);
-    WRITE_SYSREG32(esr.bits, ESR_EL1);
+    WRITE_SYSREG(esr.bits, ESR_EL1);
 }
 
 static void inject_dabt64_exception(struct cpu_user_regs *regs,
@@ -717,7 +717,7 @@ struct reg_ctxt {
     uint64_t vttbr_el2;
 };
 
-static const char *mode_string(uint32_t cpsr)
+static const char *mode_string(register_t cpsr)
 {
     uint32_t mode;
     static const char *mode_strings[] = {
@@ -756,14 +756,16 @@ static void show_registers_32(const struct cpu_user_regs *regs,
 #ifdef CONFIG_ARM_64
     BUG_ON( ! (regs->cpsr & PSR_MODE_BIT) );
     printk("PC:     %08"PRIx32"\n", regs->pc32);
+    printk("CPSR:   %016"PRIx64" MODE:%s\n", regs->cpsr,
+           mode_string(regs->cpsr));
 #else
     printk("PC:     %08"PRIx32, regs->pc);
     if ( !guest_mode )
         printk(" %pS", _p(regs->pc));
     printk("\n");
-#endif
     printk("CPSR:   %08"PRIx32" MODE:%s\n", regs->cpsr,
            mode_string(regs->cpsr));
+#endif
     printk("     R0: %08"PRIx32" R1: %08"PRIx32" R2: %08"PRIx32" R3: %08"PRIx32"\n",
            regs->r0, regs->r1, regs->r2, regs->r3);
     printk("     R4: %08"PRIx32" R5: %08"PRIx32" R6: %08"PRIx32" R7: %08"PRIx32"\n",
@@ -846,7 +848,7 @@ static void show_registers_64(const struct cpu_user_regs *regs,
     {
         printk("SP:     %016"PRIx64"\n", regs->sp);
     }
-    printk("CPSR:   %08"PRIx32" MODE:%s\n", regs->cpsr,
+    printk("CPSR:   %016"PRIx64" MODE:%s\n", regs->cpsr,
            mode_string(regs->cpsr));
     printk("     X0: %016"PRIx64"  X1: %016"PRIx64"  X2: %016"PRIx64"\n",
            regs->x0, regs->x1, regs->x2);
@@ -919,7 +921,7 @@ static void _show_registers(const struct cpu_user_regs *regs,
     printk("   HCR_EL2: %"PRIregister"\n", READ_SYSREG(HCR_EL2));
     printk(" TTBR0_EL2: %016"PRIx64"\n", READ_SYSREG64(TTBR0_EL2));
     printk("\n");
-    printk("   ESR_EL2: %08"PRIx32"\n", regs->hsr);
+    printk("   ESR_EL2: %"PRIregister"\n", regs->hsr);
     printk(" HPFAR_EL2: %"PRIregister"\n", READ_SYSREG(HPFAR_EL2));
 
 #ifdef CONFIG_ARM_32
@@ -1599,7 +1601,7 @@ static const unsigned short cc_map[16] = {
 
 int check_conditional_instr(struct cpu_user_regs *regs, const union hsr hsr)
 {
-    unsigned long cpsr, cpsr_cond;
+    register_t cpsr, cpsr_cond;
     int cond;
 
     /*
@@ -1661,7 +1663,7 @@ int check_conditional_instr(struct cpu_user_regs *regs, const union hsr hsr)
 
 void advance_pc(struct cpu_user_regs *regs, const union hsr hsr)
 {
-    unsigned long itbits, cond, cpsr = regs->cpsr;
+    register_t itbits, cond, cpsr = regs->cpsr;
     bool is_thumb = psr_mode_is_32bit(regs) && (cpsr & PSR_THUMB);
 
     if ( is_thumb && (cpsr & PSR_IT_MASK) )
@@ -2004,13 +2006,15 @@ static void do_trap_stage2_abort_guest(struct cpu_user_regs *regs,
 
         break;
     default:
-        gprintk(XENLOG_WARNING, "Unsupported FSC: HSR=%#x DFSC=%#x\n",
+        gprintk(XENLOG_WARNING,
+                "Unsupported FSC: HSR=%#"PRIregister" DFSC=%#x\n",
                 hsr.bits, xabt.fsc);
     }
 
 inject_abt:
-    gdprintk(XENLOG_DEBUG, "HSR=0x%x pc=%#"PRIregister" gva=%#"PRIvaddr
-             " gpa=%#"PRIpaddr"\n", hsr.bits, regs->pc, gva, gpa);
+    gdprintk(XENLOG_DEBUG,
+             "HSR=%#"PRIregister" pc=%#"PRIregister" gva=%#"PRIvaddr" gpa=%#"PRIpaddr"\n",
+             hsr.bits, regs->pc, gva, gpa);
     if ( is_data )
         inject_dabt_exception(regs, gva, hsr.len);
     else
@@ -2204,7 +2208,7 @@ void do_trap_guest_sync(struct cpu_user_regs *regs)
 
     default:
         gprintk(XENLOG_WARNING,
-                "Unknown Guest Trap. HSR=0x%x EC=0x%x IL=%x Syndrome=0x%"PRIx32"\n",
+                "Unknown Guest Trap. HSR=%#"PRIregister" EC=0x%x IL=%x Syndrome=0x%"PRIx32"\n",
                 hsr.bits, hsr.ec, hsr.len, hsr.iss);
         inject_undef_exception(regs, hsr);
     }
@@ -2242,7 +2246,7 @@ void do_trap_hyp_sync(struct cpu_user_regs *regs)
         break;
     }
     default:
-        printk("Hypervisor Trap. HSR=0x%x EC=0x%x IL=%x Syndrome=0x%"PRIx32"\n",
+        printk("Hypervisor Trap. HSR=%#"PRIregister" EC=0x%x IL=%x Syndrome=0x%"PRIx32"\n",
                hsr.bits, hsr.ec, hsr.len, hsr.iss);
         do_unexpected_trap("Hypervisor", regs);
     }
diff --git a/xen/arch/arm/vcpreg.c b/xen/arch/arm/vcpreg.c
index 55351fc087..f0cdcc8a54 100644
--- a/xen/arch/arm/vcpreg.c
+++ b/xen/arch/arm/vcpreg.c
@@ -385,7 +385,7 @@ void do_cp15_32(struct cpu_user_regs *regs, const union hsr hsr)
                  "%s p15, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                  cp32.read ? "mrc" : "mcr",
                  cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
-        gdprintk(XENLOG_ERR, "unhandled 32-bit CP15 access %#x\n",
+        gdprintk(XENLOG_ERR, "unhandled 32-bit CP15 access %#"PRIregister"\n",
                  hsr.bits & HSR_CP32_REGS_MASK);
         inject_undef_exception(regs, hsr);
         return;
@@ -454,7 +454,8 @@ void do_cp15_64(struct cpu_user_regs *regs, const union hsr hsr)
                      "%s p15, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
                      cp64.read ? "mrrc" : "mcrr",
                      cp64.op1, cp64.reg1, cp64.reg2, cp64.crm, regs->pc);
-            gdprintk(XENLOG_ERR, "unhandled 64-bit CP15 access %#x\n",
+            gdprintk(XENLOG_ERR,
+                     "unhandled 64-bit CP15 access %#"PRIregister"\n",
                      hsr.bits & HSR_CP64_REGS_MASK);
             inject_undef_exception(regs, hsr);
             return;
@@ -585,7 +586,7 @@ void do_cp14_32(struct cpu_user_regs *regs, const union hsr hsr)
                  "%s p14, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                   cp32.read ? "mrc" : "mcr",
                   cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
-        gdprintk(XENLOG_ERR, "unhandled 32-bit cp14 access %#x\n",
+        gdprintk(XENLOG_ERR, "unhandled 32-bit cp14 access %#"PRIregister"\n",
                  hsr.bits & HSR_CP32_REGS_MASK);
         inject_undef_exception(regs, hsr);
         return;
@@ -627,7 +628,7 @@ void do_cp14_64(struct cpu_user_regs *regs, const union hsr hsr)
              "%s p14, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
              cp64.read ? "mrrc" : "mcrr",
              cp64.op1, cp64.reg1, cp64.reg2, cp64.crm, regs->pc);
-    gdprintk(XENLOG_ERR, "unhandled 64-bit CP14 access %#x\n",
+    gdprintk(XENLOG_ERR, "unhandled 64-bit CP14 access %#"PRIregister"\n",
              hsr.bits & HSR_CP64_REGS_MASK);
     inject_undef_exception(regs, hsr);
 }
@@ -658,7 +659,7 @@ void do_cp14_dbg(struct cpu_user_regs *regs, const union hsr hsr)
              "%s p14, %d, r%d, r%d, cr%d @ 0x%"PRIregister"\n",
              cp64.read ? "mrrc" : "mcrr",
              cp64.op1, cp64.reg1, cp64.reg2, cp64.crm, regs->pc);
-    gdprintk(XENLOG_ERR, "unhandled 64-bit CP14 DBG access %#x\n",
+    gdprintk(XENLOG_ERR, "unhandled 64-bit CP14 DBG access %#"PRIregister"\n",
              hsr.bits & HSR_CP64_REGS_MASK);
 
     inject_undef_exception(regs, hsr);
@@ -692,7 +693,7 @@ void do_cp10(struct cpu_user_regs *regs, const union hsr hsr)
                  "%s p10, %d, r%d, cr%d, cr%d, %d @ 0x%"PRIregister"\n",
                  cp32.read ? "mrc" : "mcr",
                  cp32.op1, cp32.reg, cp32.crn, cp32.crm, cp32.op2, regs->pc);
-        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#x\n",
+        gdprintk(XENLOG_ERR, "unhandled 32-bit CP10 access %#"PRIregister"\n",
                  hsr.bits & HSR_CP32_REGS_MASK);
         inject_undef_exception(regs, hsr);
         return;
diff --git a/xen/include/asm-arm/arm64/processor.h b/xen/include/asm-arm/arm64/processor.h
index 81dfc5e615..0e86079cbb 100644
--- a/xen/include/asm-arm/arm64/processor.h
+++ b/xen/include/asm-arm/arm64/processor.h
@@ -63,18 +63,19 @@ struct cpu_user_regs
 
     /* Return address and mode */
     __DECL_REG(pc,           pc32);             /* ELR_EL2 */
-    uint32_t cpsr;                              /* SPSR_EL2 */
-    uint32_t hsr;                               /* ESR_EL2 */
+    uint64_t cpsr;                              /* SPSR_EL2 */
+    uint64_t hsr;                               /* ESR_EL2 */
+
+    /* Offset of spsr_el1 must be a multiple of 8 */
+    uint64_t pad0;
 
     /* Outer guest frame only from here on... */
 
     union {
-        uint32_t spsr_el1;       /* AArch64 */
+        uint64_t spsr_el1;       /* AArch64 */
         uint32_t spsr_svc;       /* AArch32 */
     };
 
-    uint32_t pad1; /* Doubleword-align the user half of the frame */
-
     /* AArch32 guests only */
     uint32_t spsr_fiq, spsr_irq, spsr_und, spsr_abt;
 
diff --git a/xen/include/asm-arm/hsr.h b/xen/include/asm-arm/hsr.h
index 29d4531f40..9b91b28c48 100644
--- a/xen/include/asm-arm/hsr.h
+++ b/xen/include/asm-arm/hsr.h
@@ -16,7 +16,7 @@ enum dabt_size {
 };
 
 union hsr {
-    uint32_t bits;
+    register_t bits;
     struct {
         unsigned long iss:25;  /* Instruction Specific Syndrome */
         unsigned long len:1;   /* Instruction length */
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 713fd65317..64a2ca30da 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
 
     /* Return address and mode */
     __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
-    uint32_t cpsr;                              /* SPSR_EL2 */
+    uint64_t cpsr;                              /* SPSR_EL2 */
 
     union {
-        uint32_t spsr_el1;       /* AArch64 */
+        uint64_t spsr_el1;       /* AArch64 */
         uint32_t spsr_svc;       /* AArch32 */
     };
 
diff --git a/xen/include/public/domctl.h b/xen/include/public/domctl.h
index 4dbf107785..d576bfabd6 100644
--- a/xen/include/public/domctl.h
+++ b/xen/include/public/domctl.h
@@ -38,7 +38,7 @@
 #include "hvm/save.h"
 #include "memory.h"
 
-#define XEN_DOMCTL_INTERFACE_VERSION 0x00000013
+#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
 
 /*
  * NB. xen_domctl.domain is an IN/OUT parameter for this operation.
diff --git a/xen/include/public/vm_event.h b/xen/include/public/vm_event.h
index 36135ba4f1..bb003d21d0 100644
--- a/xen/include/public/vm_event.h
+++ b/xen/include/public/vm_event.h
@@ -266,8 +266,7 @@ struct vm_event_regs_arm {
     uint64_t ttbr1;
     uint64_t ttbcr;
     uint64_t pc;
-    uint32_t cpsr;
-    uint32_t _pad;
+    uint64_t cpsr;
 };
 
 /*
-- 
2.29.0



From xen-devel-bounces@lists.xenproject.org Wed May 05 07:51:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 07:51:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122871.231807 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCJv-0003JG-2k; Wed, 05 May 2021 07:51:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122871.231807; Wed, 05 May 2021 07:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCJu-0003J9-Te; Wed, 05 May 2021 07:51:46 +0000
Received: by outflank-mailman (input) for mailman id 122871;
 Wed, 05 May 2021 07:51:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Q0w6=KA=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1leCJs-0003J3-Ut
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 07:51:45 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0d::612])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62ce0968-639f-4806-b820-863912caccdd;
 Wed, 05 May 2021 07:51:42 +0000 (UTC)
Received: from AM5PR0202CA0022.eurprd02.prod.outlook.com
 (2603:10a6:203:69::32) by AM5PR0802MB2420.eurprd08.prod.outlook.com
 (2603:10a6:203:9e::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.40; Wed, 5 May
 2021 07:51:40 +0000
Received: from AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:69:cafe::88) by AM5PR0202CA0022.outlook.office365.com
 (2603:10a6:203:69::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Wed, 5 May 2021 07:51:40 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT017.mail.protection.outlook.com (10.152.16.89) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Wed, 5 May 2021 07:51:40 +0000
Received: ("Tessian outbound 9a5bb9d11315:v91");
 Wed, 05 May 2021 07:51:39 +0000
Received: from fbb2d1a33498.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0CCFF538-6831-4542-855D-2F78452B5B97.1; 
 Wed, 05 May 2021 07:51:33 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id fbb2d1a33498.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 05 May 2021 07:51:33 +0000
Received: from AS8PR08MB6919.eurprd08.prod.outlook.com (2603:10a6:20b:39e::10)
 by AS8PR08MB6517.eurprd08.prod.outlook.com (2603:10a6:20b:31b::8)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25; Wed, 5 May
 2021 07:51:23 +0000
Received: from AS8PR08MB6919.eurprd08.prod.outlook.com
 ([fe80::856e:d103:212c:8f50]) by AS8PR08MB6919.eurprd08.prod.outlook.com
 ([fe80::856e:d103:212c:8f50%4]) with mapi id 15.20.4087.044; Wed, 5 May 2021
 07:51:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62ce0968-639f-4806-b820-863912caccdd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ei+MbNbWj9fMRiHITeteag47v3N76yJUMY45bCUTQxg=;
 b=pH9MHbqph+1D6nzAc+MtBLsovZbYdrJlsYtEj5ARGNjEreu3jOkdk8dZdRUYZbA5BIiL9Ilbryuelwzv98HOff743dvmfuTVPP3RGqqnG0cth1E9sTelCBn7pHYQlEwd8NapacBBpIvYkKrq7uYQyH+nlTenv9vVDnuNQnVhCQ4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1fdb80b0f37297a8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=O4T1dZcR8z+3GxqKSqSXTztuVaxSW0YvyCdftEaLJdCkVBKWPE5w+S+CTA2MXbCiU86Ke6cNbyI5wILWvG6DAgDVGa33JSuosRAgbXyNmwUTGn61QDQjViZ6Kn3Tu+7suP9Rp50tXnO1L6H6CPNxbKSTxrvWZUm5dDwdBBeQROt+wYf8u2DOxIeFXx37uIKsIiRFeUPvaVnhqu9Fl1RLVXdK8jos2Qt22CJR+AYAcwRrXArgqflKo3bcs5TlrwQCsBwT8ms3ADmmSEVTZw1GhHTqPo3V8Og1AfbLcVrTRygtRyoMLYkG5q/7RMVEZM/cIO7/ygdG46xE+6yHohm41Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ei+MbNbWj9fMRiHITeteag47v3N76yJUMY45bCUTQxg=;
 b=GxUX4ZBnjH6OvXcl1KPVI7huPY86zQ2yWHZocA/2yiOsBpMfEuxl3WRtdZrFdfgrcta/I/qNYxMVJ5rI9Yznc+M0AoHNDaUiY8ciMHcZhaltKWhGoi5kZxUVc4Xi7yS8ik5GJznmSWmKqFqlrkYIkI3FN9FSC6s3H9qPNcqFJ+0e/mBifEw2ecGNjJ13szL5rNWoUpWirJOC6eZNa3GmU3EDvEnJsjwqpIuCtQkX9ZkrKV5vdZ0rXlkS1pa9KrZXr2wTOnBC8XVQV7hyZ/zCG5e6rAYamFKPXBLpmGqet43TRUsG07MgNnPXvmIgP+lEcvbcYjeEQ9AnpYLWCKWlaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, =?utf-8?B?Um9nZXIgUGF1IE1vbm7DqQ==?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant
	<paul@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 2/3] xen/pci: Refactor PCI MSI intercept related code
Thread-Topic: [PATCH v4 2/3] xen/pci: Refactor PCI MSI intercept related code
Thread-Index: AQHXPQaiQSZBgPx8y0KxY0XhVGEZGarR2gGAgAKy0wA=
Date: Wed, 5 May 2021 07:51:22 +0000
Message-ID: <2E9765F3-B820-4B8A-BE73-37583202196F@arm.com>
References: <cover.1619707144.git.rahul.singh@arm.com>
 <07cb9f45a91a283af1991c42266555bb0bfe3b71.1619707144.git.rahul.singh@arm.com>
 <65539f2a-8b0c-7f1a-6de1-4032140a4e0e@suse.com>
In-Reply-To: <65539f2a-8b0c-7f1a-6de1-4032140a4e0e@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [80.1.41.211]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 3dc48887-a028-46a7-e157-08d90f9a9e14
x-ms-traffictypediagnostic: AS8PR08MB6517:|AM5PR0802MB2420:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM5PR0802MB2420F31569679CD016C87E80FC599@AM5PR0802MB2420.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4714;OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <AE79DE77437CB74E905604AB29DE9D49@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6517
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	de661956-c109-4abc-875c-08d90f9a93e0
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2021 07:51:40.0355
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3dc48887-a028-46a7-e157-08d90f9a9e14
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT017.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0802MB2420

Hi Jan,

> On 3 May 2021, at 3:38 pm, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 29.04.2021 16:46, Rahul Singh wrote:
>> --- /dev/null
>> +++ b/xen/drivers/passthrough/msi-intercept.c
>> @@ -0,0 +1,53 @@
>> +/*
>> + * Copyright (C) 2008,  Netronome Systems, Inc.
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#include <xen/init.h>
>> +#include <xen/pci.h>
>> +#include <asm/msi.h>
>> +#include <asm/hvm/io.h>
>> +
>> +int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
>> +{
>> +    int rc;
>> +
>> +    if ( pdev->msix )
>> +    {
>> +        rc = pci_reset_msix_state(pdev);
>> +        if ( rc )
>> +            return rc;
>> +        msixtbl_init(d);
>> +    }
>> +
>> +    return 0;
>> +}
>> +
>> +void pdev_dump_msi(const struct pci_dev *pdev)
>> +{
>> +    const struct msi_desc *msi;
>> +
>> +    list_for_each_entry ( msi, &pdev->msi_list, list )
>> +        printk("- MSIs < %d >", msi->irq);
> 
> Only the %d and a blank should be part of the format string inside the
> loop body; the rest wants printing exactly once.

Yes I agree I missed this I will fix this in next patch.
 
> 
>> +static inline size_t vmsix_table_size(const struct vpci *vpci, unsigned int nr)
>> +{
>> +    return
>> +        (nr == VPCI_MSIX_TABLE) ? vpci->msix->max_entries * PCI_MSIX_ENTRY_SIZE
>> +                                : ROUNDUP(DIV_ROUND_UP(vpci->msix->max_entries,
>> +                                                       8), 8);
> 
> I'm afraid I don't view this as an acceptable way of wrapping lines.
> How about
> 
>    return (nr == VPCI_MSIX_TABLE)
>           ? vpci->msix->max_entries * PCI_MSIX_ENTRY_SIZE
>           : ROUNDUP(DIV_ROUND_UP(vpci->msix->max_entries, 8), 8);

Ok.

> 
>> @@ -428,6 +458,31 @@ int vpci_make_msix_hole(const struct pci_dev *pdev)
>>     return 0;
>> }
>> 
>> +int vpci_remove_msix_regions(const struct vpci *vpci, struct rangeset *mem)
>> +{
>> +    const struct vpci_msix *msix = vpci->msix;
>> +    unsigned int i;
>> +    int rc;
>> +
>> +    for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
>> +    {
>> +        unsigned long start = PFN_DOWN(vmsix_table_addr(vpci, i));
>> +        unsigned long end = PFN_DOWN(vmsix_table_addr(vpci, i) +
>> +                vmsix_table_size(vpci, i) - 1);
>> +
>> +        rc = rangeset_remove_range(mem, start, end);
>> +        if ( rc )
>> +        {
>> +            printk(XENLOG_G_WARNING
>> +                    "Failed to remove MSIX table [%lx, %lx]: %d\n",
>> +                   start, end, rc);
> 
> Indentation looks to be off by one space on the last two lines.

Ok.

> 
>> --- /dev/null
>> +++ b/xen/include/xen/msi-intercept.h
>> @@ -0,0 +1,49 @@
>> +/*
>> + * Copyright (C) 2008,  Netronome Systems, Inc.
>> + *
>> + * This program is free software; you can redistribute it and/or modify it
>> + * under the terms and conditions of the GNU General Public License,
>> + * version 2, as published by the Free Software Foundation.
>> + *
>> + * This program is distributed in the hope it will be useful, but WITHOUT
>> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
>> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
>> + * more details.
>> + *
>> + * You should have received a copy of the GNU General Public License along with
>> + * this program; If not, see <http://www.gnu.org/licenses/>.
>> + */
>> +
>> +#ifndef __XEN_MSI_INTERCEPT_H_
>> +#define __XEN_MSI_INTERCEPT_H_
>> +
>> +#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
>> +
>> +#include <asm/msi.h>
>> +
>> +int pdev_msix_assign(struct domain *d, struct pci_dev *pdev);
>> +void pdev_dump_msi(const struct pci_dev *pdev);
>> +
>> +#else /* !CONFIG_HAS_PCI_MSI_INTERCEPT */
>> +
>> +static inline int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
>> +{
>> +    return 0;
>> +}
>> +
>> +static inline void pdev_dump_msi(const struct pci_dev *pdev) {}
>> +static inline void pci_cleanup_msi(struct pci_dev *pdev) {}
> 
> I don't think this last one is intercept related (and hence doesn't belong
> here)?
> 
Ok I will move this to next patch in series.
>> @@ -148,6 +150,7 @@ struct vpci_vcpu {
>> };
>> 
>> #ifdef __XEN__
>> +#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
> 
> Since both start and ...
> 
>> +static inline void vpci_msi_free(struct vpci *vpci) {}
>> +#endif /* CONFIG_HAS_PCI_MSI_INTERCEPT */
>> #endif /* __XEN__ */
> 
> ... end look to match, may I suggest to simply replace the __XEN__ ones,
> as the test harness isn't supposed to (randomly) define CONFIG_*? Or
> alternatively at least combine both #ifdef-s?

Ok I will replace the line "#ifdef __XEN__"  with  " #ifdef CONFIG_HAS_PCI_MSI_INTERCEPT"

Regards,
Rahul

> 
> Jan
> 


From xen-devel-bounces@lists.xenproject.org Wed May 05 08:00:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 08:00:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122883.231818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCS2-0005DF-5U; Wed, 05 May 2021 08:00:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122883.231818; Wed, 05 May 2021 08:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCS2-0005D8-1y; Wed, 05 May 2021 08:00:10 +0000
Received: by outflank-mailman (input) for mailman id 122883;
 Wed, 05 May 2021 08:00:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leCS1-0005D2-EK
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 08:00:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f0757c91-ea72-4346-987f-2d75b82459c9;
 Wed, 05 May 2021 08:00:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B362BB07B;
 Wed,  5 May 2021 08:00:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f0757c91-ea72-4346-987f-2d75b82459c9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620201607; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8MLrzM8DJ5vvj2ns/MlzkH9tsXQ2lFuGE80d21MMdhI=;
	b=Z870sb4ltm4o5ZE+uI8VsSzDlpJJSnHgR8W3/h8oZ50Ii1lNthKwP6DenSQ52YRQCHqm/J
	dyVir7RW9Mr/rXMtr+y3cgsk+SYK0cQWEWB4kWeBLYsnfSnxzhGVc7jzGY6wgZg1KMCRv6
	N0VtO/pANlUkJ3zAudfx2VASVzA0sL0=
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Michal Orzel <michal.orzel@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com, xen-devel@lists.xenproject.org
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
Date: Wed, 5 May 2021 10:00:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210505074308.11016-11-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.05.2021 09:43, Michal Orzel wrote:
> --- a/xen/include/public/arch-arm.h
> +++ b/xen/include/public/arch-arm.h
> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>  
>      /* Return address and mode */
>      __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
> -    uint32_t cpsr;                              /* SPSR_EL2 */
> +    uint64_t cpsr;                              /* SPSR_EL2 */
>  
>      union {
> -        uint32_t spsr_el1;       /* AArch64 */
> +        uint64_t spsr_el1;       /* AArch64 */
>          uint32_t spsr_svc;       /* AArch32 */
>      };

This change affects, besides domctl, also default_initialise_vcpu(),
which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
only allows two unrelated VCPUOP_* to pass, but then I don't
understand why arch_initialise_vcpu() doesn't simply return e.g.
-EOPNOTSUPP. Hence I suspect I'm missing something.

> --- a/xen/include/public/domctl.h
> +++ b/xen/include/public/domctl.h
> @@ -38,7 +38,7 @@
>  #include "hvm/save.h"
>  #include "memory.h"
>  
> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000013
> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014

So this is to cover for the struct vcpu_guest_core_regs change.

> --- a/xen/include/public/vm_event.h
> +++ b/xen/include/public/vm_event.h
> @@ -266,8 +266,7 @@ struct vm_event_regs_arm {
>      uint64_t ttbr1;
>      uint64_t ttbcr;
>      uint64_t pc;
> -    uint32_t cpsr;
> -    uint32_t _pad;
> +    uint64_t cpsr;
>  };

Then I wonder why this isn't accompanied by a similar bump of
VM_EVENT_INTERFACE_VERSION. I don't see you drop any checking /
filling of the _pad field, so existing callers may pass garbage
there, and new callers need to be prevented from looking at the
upper half when running on an older hypervisor.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 08:01:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 08:01:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122886.231829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCTK-0005vw-GG; Wed, 05 May 2021 08:01:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122886.231829; Wed, 05 May 2021 08:01:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCTK-0005vp-DE; Wed, 05 May 2021 08:01:30 +0000
Received: by outflank-mailman (input) for mailman id 122886;
 Wed, 05 May 2021 08:01:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sTpK=KA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leCTI-0005vh-PB
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 08:01:29 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 271df542-3632-4be8-8043-8d125720b3e4;
 Wed, 05 May 2021 08:01:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 271df542-3632-4be8-8043-8d125720b3e4
Date: Wed, 5 May 2021 10:01:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>
Subject: Re: [PATCH] x86/p2m: please Clang after making certain parts HVM-only
Message-ID: <YJJQzJxro7ZnpFuR@Air-de-Roger>
References: <cfac6284-d4ec-af2f-6be4-c114c7c10009@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <cfac6284-d4ec-af2f-6be4-c114c7c10009@suse.com>
MIME-Version: 1.0

On Wed, May 05, 2021 at 09:07:30AM +0200, Jan Beulich wrote:
> Move a few #ifdef-s, to account for diagnostics like
> 
> p2m.c:549:1: error: non-void function does not return a value in all control paths [-Werror,-Wreturn-type]

There's also __builtin_unreachable, but that would get even messier,
and I'm not sure it's supported by all gcc versions we care about.

> which appear despite paging_mode_translate() resolving to constant
> "false" when !HVM. All of the affected functions are intended to become
> fully HVM-only anyway, with their non-translated stub handling split off
> elsewhere.
> 
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Fixes: 8d012d3ddffc ("x86/p2m: {get,set}_entry hooks and p2m-pt.c are HVM-only")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 05 08:12:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 08:12:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122893.231845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCe5-0007Ty-J5; Wed, 05 May 2021 08:12:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122893.231845; Wed, 05 May 2021 08:12:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCe5-0007Tr-G0; Wed, 05 May 2021 08:12:37 +0000
Received: by outflank-mailman (input) for mailman id 122893;
 Wed, 05 May 2021 08:12:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leCe3-0007Tl-9d
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 08:12:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leCe2-00055F-1R; Wed, 05 May 2021 08:12:34 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leCe1-0004tx-RK; Wed, 05 May 2021 08:12:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH] public/gnttab: relax v2 recommendation
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Luca Fancellu <luca.fancellu@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <dcd9ede1-9471-6866-4ba7-b6a7664b5e35@suse.com>
 <8eac6f09-4d1d-6fcc-4218-8c9a0760a6bb@xen.org>
 <71e61d09-5d92-94dc-ae0c-ce09cb49b4ce@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c468856b-8ac6-2ab1-0f5f-eabc26d47293@xen.org>
Date: Wed, 5 May 2021 09:12:31 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <71e61d09-5d92-94dc-ae0c-ce09cb49b4ce@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 30/04/2021 09:36, Jan Beulich wrote:
> On 30.04.2021 10:19, Julien Grall wrote:
>> On 29/04/2021 14:10, Jan Beulich wrote:
>>> With there being a way to disable v2 support, telling new guests to use
>>> v2 exclusively is not a good suggestion.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> --- a/xen/include/public/grant_table.h
>>> +++ b/xen/include/public/grant_table.h
>>> @@ -121,8 +121,10 @@ typedef uint32_t grant_ref_t;
>>>     */
>>>    
>>>    /*
>>> - * Version 1 of the grant table entry structure is maintained purely
>>> - * for backwards compatibility.  New guests should use version 2.
>>> + * Version 1 of the grant table entry structure is maintained largely for
>>> + * backwards compatibility.  New guests are recommended to support using
>>> + * version 2 to overcome version 1 limitations, but to be able to fall back
>>> + * to version 1.
>>
>> v2 is not supported on Arm and I don't see it coming anytime soon.
>> AFAIK, Linux will also not use grant table v2 unless the guest has an
>> address space larger than 44 (?) bits.
> 
> Yes, as soon as there are frame numbers not representable in 32 bits.
> 
>> I can't remember why Linux decided to not use it everywhere, but this is
>> a sign that v2 is not always desired.
>>
>> So I think it would be better to recommend new guests to use v1 unless
>> they hit the limitations (to be detailed).
> 
> IOW you'd prefer s/be able to fall back/default/? I'd be fine that way

Yes. We would also need to document the limitations as they don't seem 
to be (clearly?) written down in the headers.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 08:19:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 08:19:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122902.231859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCkU-0008Bo-Ca; Wed, 05 May 2021 08:19:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122902.231859; Wed, 05 May 2021 08:19:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCkU-0008Bh-9E; Wed, 05 May 2021 08:19:14 +0000
Received: by outflank-mailman (input) for mailman id 122902;
 Wed, 05 May 2021 08:19:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leCkT-0008Bb-3p
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 08:19:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc45eb16-6f87-478f-b28c-7768e7abaef9;
 Wed, 05 May 2021 08:19:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E936AAFA9;
 Wed,  5 May 2021 08:19:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc45eb16-6f87-478f-b28c-7768e7abaef9
Subject: Re: [PATCH 4/5] x86/cpuid: Simplify recalculate_xstate()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-5-andrew.cooper3@citrix.com>
 <17501fdd-b9f0-3493-7d0d-8c5333fafa45@suse.com>
 <3f9ae28f-2fb7-0f4f-511b-93ba74ec3aeb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ced9f20a-d420-6639-b041-710f7ec59613@suse.com>
Date: Wed, 5 May 2021 10:19:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <3f9ae28f-2fb7-0f4f-511b-93ba74ec3aeb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 15:58, Andrew Cooper wrote:
> On 04/05/2021 13:43, Jan Beulich wrote:
>> On 03.05.2021 17:39, Andrew Cooper wrote:
>>> Make use of the new xstate_uncompressed_size() helper rather than maintaining
>>> the running calculation while accumulating feature components.
>>>
>>> The rest of the CPUID data can come direct from the raw cpuid policy.  All
>>> per-component data forms an ABI through the behaviour of the X{SAVE,RSTOR}*
>>> instructions, and are constant.
>>>
>>> Use for_each_set_bit() rather than opencoding a slightly awkward version of
>>> it.  Mask the attributes in ecx down based on the visible features.  This
>>> isn't actually necessary for any components or attributes defined at the time
>>> of writing (up to AMX), but is added out of an abundance of caution.
>>>
>>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>> ---
>>> CC: Jan Beulich <JBeulich@suse.com>
>>> CC: Roger Pau Monné <roger.pau@citrix.com>
>>> CC: Wei Liu <wl@xen.org>
>>>
>>> Using min() in for_each_set_bit() leads to awful code generation, as it
>>> prohibits the optimiations for spotting that the bitmap is <= BITS_PER_LONG.
>>> As p->xstate is long enough already, use a BUILD_BUG_ON() instead.
>>> ---
>>>  xen/arch/x86/cpuid.c | 52 +++++++++++++++++-----------------------------------
>>>  1 file changed, 17 insertions(+), 35 deletions(-)
>>>
>>> diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
>>> index 752bf244ea..c7f8388e5d 100644
>>> --- a/xen/arch/x86/cpuid.c
>>> +++ b/xen/arch/x86/cpuid.c
>>> @@ -154,8 +154,7 @@ static void sanitise_featureset(uint32_t *fs)
>>>  static void recalculate_xstate(struct cpuid_policy *p)
>>>  {
>>>      uint64_t xstates = XSTATE_FP_SSE;
>>> -    uint32_t xstate_size = XSTATE_AREA_MIN_SIZE;
>>> -    unsigned int i, Da1 = p->xstate.Da1;
>>> +    unsigned int i, ecx_bits = 0, Da1 = p->xstate.Da1;
>>>  
>>>      /*
>>>       * The Da1 leaf is the only piece of information preserved in the common
>>> @@ -167,61 +166,44 @@ static void recalculate_xstate(struct cpuid_policy *p)
>>>          return;
>>>  
>>>      if ( p->basic.avx )
>>> -    {
>>>          xstates |= X86_XCR0_YMM;
>>> -        xstate_size = max(xstate_size,
>>> -                          xstate_offsets[X86_XCR0_YMM_POS] +
>>> -                          xstate_sizes[X86_XCR0_YMM_POS]);
>>> -    }
>>>  
>>>      if ( p->feat.mpx )
>>> -    {
>>>          xstates |= X86_XCR0_BNDREGS | X86_XCR0_BNDCSR;
>>> -        xstate_size = max(xstate_size,
>>> -                          xstate_offsets[X86_XCR0_BNDCSR_POS] +
>>> -                          xstate_sizes[X86_XCR0_BNDCSR_POS]);
>>> -    }
>>>  
>>>      if ( p->feat.avx512f )
>>> -    {
>>>          xstates |= X86_XCR0_OPMASK | X86_XCR0_ZMM | X86_XCR0_HI_ZMM;
>>> -        xstate_size = max(xstate_size,
>>> -                          xstate_offsets[X86_XCR0_HI_ZMM_POS] +
>>> -                          xstate_sizes[X86_XCR0_HI_ZMM_POS]);
>>> -    }
>>>  
>>>      if ( p->feat.pku )
>>> -    {
>>>          xstates |= X86_XCR0_PKRU;
>>> -        xstate_size = max(xstate_size,
>>> -                          xstate_offsets[X86_XCR0_PKRU_POS] +
>>> -                          xstate_sizes[X86_XCR0_PKRU_POS]);
>>> -    }
>>>  
>>> -    p->xstate.max_size  =  xstate_size;
>>> +    /* Subleaf 0 */
>>> +    p->xstate.max_size =
>>> +        xstate_uncompressed_size(xstates & ~XSTATE_XSAVES_ONLY);
>>>      p->xstate.xcr0_low  =  xstates & ~XSTATE_XSAVES_ONLY;
>>>      p->xstate.xcr0_high = (xstates & ~XSTATE_XSAVES_ONLY) >> 32;
>>>  
>>> +    /* Subleaf 1 */
>>>      p->xstate.Da1 = Da1;
>>>      if ( p->xstate.xsaves )
>>>      {
>>> +        ecx_bits |= 3; /* Align64, XSS */
>> Align64 is also needed for p->xstate.xsavec afaict. I'm not really
>> convinced to tie one to the other either. I would rather think this
>> is a per-state-component attribute independent of other features.
>> Those state components could in turn have a dependency (like XSS
>> ones on XSAVES).
> 
> There is no such thing as a system with xsavec != xsaves (although there
> does appear to be one line of AMD CPU with xsaves and not xgetbv1).

If we believed there was such a dependency, gen-cpuid.py should imo
already express it. At the latest when we make ourselves depend on such
a connection (which I remain not fully convinced of), the dependency
would need adding, such that it becomes impossible to turn off xsaves
without also turning off xsavec. (Of course, a way to express this
symbolically doesn't currently exist, and is only being added as a
"side effect" of "x86: XFD enabling".)

> Through some (likely unintentional) coupling of data in CPUID, the
> compressed dynamic size (CPUID.0xd[1].ebx) is required for xsavec, and
> is strictly defined as XCR0|XSS, which forces xsaves into the mix.
> 
> In fact, an error with the spec is that userspace can calculate the
> kernel's choice of MSR_XSS using CPUID data alone - there is not
> currently an ambiguous combination of sizes of supervisor state
> components.  This fact also makes XSAVEC suboptimal even for userspace
> to use, because it is forced to allocate larger-than-necessary buffers.

But space-wise it's still better that way than using the uncompressed
format.

> In principle, we could ignore the coupling and support xsavec without
> xsaves, but given that XSAVES is strictly more useful than XSAVEC, I'm
> not sure it is worth trying to support.

I think we should, but I'm not going to object to the alternative as
long as the dependencies are properly put in place.

>> I'm also not happy at all to see you use a literal 3 here. We have
>> a struct for this, after all.
>>
>>>          p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
>>>          p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
>>>      }
>>> -    else
>>> -        xstates &= ~XSTATE_XSAVES_ONLY;
>>>  
>>> -    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
>>> +    /* Subleafs 2+ */
>>> +    xstates &= ~XSTATE_FP_SSE;
>>> +    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
>>> +    for_each_set_bit ( i, &xstates, 63 )
>>>      {
>>> -        uint64_t curr_xstate = 1ul << i;
>>> -
>>> -        if ( !(xstates & curr_xstate) )
>>> -            continue;
>>> -
>>> -        p->xstate.comp[i].size   = xstate_sizes[i];
>>> -        p->xstate.comp[i].offset = xstate_offsets[i];
>>> -        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
>>> -        p->xstate.comp[i].align  = curr_xstate & xstate_align;
>>> +        /*
>>> +         * Pass through size (eax) and offset (ebx) directly.  Visibility of
>>> +         * attributes in ecx limited by visible features in Da1.
>>> +         */
>>> +        p->xstate.raw[i].a = raw_cpuid_policy.xstate.raw[i].a;
>>> +        p->xstate.raw[i].b = raw_cpuid_policy.xstate.raw[i].b;
>>> +        p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;
>> To me, going to raw[].{a,b,c,d} looks like a backwards move, to be
>> honest. Both this and the literal 3 above make it harder to locate
>> all the places that need changing if a new bit (like xfd) is to be
>> added. It would be better if grep-ing for an existing field name
>> (say "xss") would easily turn up all involved places.
> 
> It's specifically to reduce the number of areas needing editing when a
> new state, and therefore the number of opportunities to screw things up.
> 
> As said in the commit message, I'm not even convinced that the ecx_bits
> mask is necessary, as new attributes only come in with new behaviours of
> new state components.
> 
> If we choose to skip the ecx masking, then this loop body becomes even
> more simple.  Just p->xstate.raw[i] = raw_cpuid_policy.xstate.raw[i].
> 
> Even if Intel do break with tradition, and retrofit new attributes into
> existing subleafs, leaking them to guests won't cause anything to
> explode (the bits are still reserved after all), and we can fix anything
> necessary at that point.

I don't think this would necessarily go without breakage. What if,
assuming XFD support is in, an existing component got XFD sensitivity
added to it? If, like you were suggesting elsewhere, and like I had
it initially, we used a build-time constant for XFD-affected
components, we'd break consuming guests. The per-component XFD bit
(just to again take as example) also isn't strictly speaking tied to
the general XFD feature flag (but to me it makes sense for us to
enforce respective consistency). Plus, in general, the moment a flag
is no longer reserved in the spec, it is not reserved anywhere
anymore: An aware (newer) guest running on unaware (older) Xen ought
to still function correctly.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 08:24:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 08:24:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122910.231872 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leCpO-0001CN-4c; Wed, 05 May 2021 08:24:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122910.231872; Wed, 05 May 2021 08:24:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
Content-Type: text/plain;
	charset=utf-8
Subject: Re: [PATCH v5 2/3] docs: hypercalls sphinx skeleton for generated
 html
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <alpine.DEB.2.21.2105041527550.5018@sstabellini-ThinkPad-T480s>
Date: Wed, 5 May 2021 09:23:47 +0100
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>
Message-Id: <9A27C1DD-F549-4B0D-8027-FB3D3D697981@arm.com>
References: <20210504133145.767-1-luca.fancellu@arm.com>
 <20210504133145.767-3-luca.fancellu@arm.com>
 <alpine.DEB.2.21.2105041527550.5018@sstabellini-ThinkPad-T480s>
To: Stefano Stabellini <sstabellini@kernel.org>



> On 4 May 2021, at 23:30, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 4 May 2021, Luca Fancellu wrote:
>> Create a skeleton for the documentation about hypercalls
>
> Why is there a difference between the arm32, arm64 and x86_64 skeletons?
> Shouldn't we just have one? Or if we have to have three, why are they
> not identical?

Hi Stefano,

Thanks for your feedback, yes I can put the same sections for all the skeletons.
I'll push the changes soon in the next patch.

Cheers,
Luca

>
>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> .gitignore                             |  1 +
>> docs/Makefile                          |  4 ++++
>> docs/hypercall-interfaces/arm32.rst    |  4 ++++
>> docs/hypercall-interfaces/arm64.rst    | 32 ++++++++++++++++++++++++++
>> docs/hypercall-interfaces/index.rst.in |  7 ++++++
>> docs/hypercall-interfaces/x86_64.rst   |  4 ++++
>> docs/index.rst                         |  8 +++++++
>> 7 files changed, 60 insertions(+)
>> create mode 100644 docs/hypercall-interfaces/arm32.rst
>> create mode 100644 docs/hypercall-interfaces/arm64.rst
>> create mode 100644 docs/hypercall-interfaces/index.rst.in
>> create mode 100644 docs/hypercall-interfaces/x86_64.rst
>>
>> diff --git a/.gitignore b/.gitignore
>> index d271e0ce6a..a9aab120ae 100644
>> --- a/.gitignore
>> +++ b/.gitignore
>> @@ -64,6 +64,7 @@ docs/xen.doxyfile
>> docs/xen.doxyfile.tmp
>> docs/xen-doxygen/doxygen_include.h
>> docs/xen-doxygen/doxygen_include.h.tmp
>> +docs/hypercall-interfaces/index.rst
>> extras/mini-os*
>> install/*
>> stubdom/*-minios-config.mk
>> diff --git a/docs/Makefile b/docs/Makefile
>> index 2f784c36ce..b02c3dfb79 100644
>> --- a/docs/Makefile
>> +++ b/docs/Makefile
>> @@ -61,6 +61,9 @@ build: html txt pdf man-pages figs
>> sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
>> ifneq ($(SPHINXBUILD),no)
>> 	$(DOXYGEN) xen.doxyfile
>> +	@echo "Generating hypercall-interfaces/index.rst"
>> +	@sed -e "s,@XEN_TARGET_ARCH@,$(XEN_TARGET_ARCH),g" \
>> +		hypercall-interfaces/index.rst.in > hypercall-interfaces/index.rst
>> 	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
>> else
>> 	@echo "Sphinx is not installed; skipping sphinx-html documentation."
>> @@ -108,6 +111,7 @@ clean: clean-man-pages
>> 	rm -f xen.doxyfile.tmp
>> 	rm -f xen-doxygen/doxygen_include.h
>> 	rm -f xen-doxygen/doxygen_include.h.tmp
>> +	rm -f hypercall-interfaces/index.rst
>>
>> .PHONY: distclean
>> distclean: clean
>> diff --git a/docs/hypercall-interfaces/arm32.rst b/docs/hypercall-interfaces/arm32.rst
>> new file mode 100644
>> index 0000000000..4e973fbbaf
>> --- /dev/null
>> +++ b/docs/hypercall-interfaces/arm32.rst
>> @@ -0,0 +1,4 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Hypercall Interfaces - arm32
>> +============================
>> diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
>> new file mode 100644
>> index 0000000000..5e701a2adc
>> --- /dev/null
>> +++ b/docs/hypercall-interfaces/arm64.rst
>> @@ -0,0 +1,32 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Hypercall Interfaces - arm64
>> +============================
>> +
>> +Starting points
>> +---------------
>> +.. toctree::
>> +   :maxdepth: 2
>> +
>> +
>> +
>> +Functions
>> +---------
>> +
>> +
>> +Structs
>> +-------
>> +
>> +
>> +Enums and sets of #defines
>> +--------------------------
>> +
>> +
>> +Typedefs
>> +--------
>> +
>> +
>> +Enum values and individual #defines
>> +-----------------------------------
>> +
>> +
>> diff --git a/docs/hypercall-interfaces/index.rst.in b/docs/hypercall-interfaces/index.rst.in
>> new file mode 100644
>> index 0000000000..e4dcc5db8d
>> --- /dev/null
>> +++ b/docs/hypercall-interfaces/index.rst.in
>> @@ -0,0 +1,7 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Hypercall Interfaces
>> +====================
>> +
>> +.. toctree::
>> +   @XEN_TARGET_ARCH@
>> diff --git a/docs/hypercall-interfaces/x86_64.rst b/docs/hypercall-interfaces/x86_64.rst
>> new file mode 100644
>> index 0000000000..3ed70dff95
>> --- /dev/null
>> +++ b/docs/hypercall-interfaces/x86_64.rst
>> @@ -0,0 +1,4 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Hypercall Interfaces - x86_64
>> +=============================
>> diff --git a/docs/index.rst b/docs/index.rst
>> index b75487a05d..52226a42d8 100644
>> --- a/docs/index.rst
>> +++ b/docs/index.rst
>> @@ -53,6 +53,14 @@ kind of development environment.
>>    hypervisor-guide/index
>>
>>
>> +Hypercall Interfaces documentation
>> +----------------------------------
>> +
>> +.. toctree::
>> +   :maxdepth: 2
>> +
>> +   hypercall-interfaces/index
>> +
>> Miscellanea
>> -----------
>>
>> --
>> 2.17.1
>>



From xen-devel-bounces@lists.xenproject.org Wed May 05 08:24:39 2021
Subject: Re: [PATCH] public/gnttab: relax v2 recommendation
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Luca Fancellu <luca.fancellu@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <dcd9ede1-9471-6866-4ba7-b6a7664b5e35@suse.com>
 <8eac6f09-4d1d-6fcc-4218-8c9a0760a6bb@xen.org>
 <71e61d09-5d92-94dc-ae0c-ce09cb49b4ce@suse.com>
 <c468856b-8ac6-2ab1-0f5f-eabc26d47293@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <51c29a91-8659-7525-a565-5b9fcfc935f3@suse.com>
Date: Wed, 5 May 2021 10:24:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <c468856b-8ac6-2ab1-0f5f-eabc26d47293@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.05.2021 10:12, Julien Grall wrote:
> Hi Jan,
> 
> On 30/04/2021 09:36, Jan Beulich wrote:
>> On 30.04.2021 10:19, Julien Grall wrote:
>>> On 29/04/2021 14:10, Jan Beulich wrote:
>>>> With there being a way to disable v2 support, telling new guests to use
>>>> v2 exclusively is not a good suggestion.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/include/public/grant_table.h
>>>> +++ b/xen/include/public/grant_table.h
>>>> @@ -121,8 +121,10 @@ typedef uint32_t grant_ref_t;
>>>>     */
>>>>    
>>>>    /*
>>>> - * Version 1 of the grant table entry structure is maintained purely
>>>> - * for backwards compatibility.  New guests should use version 2.
>>>> + * Version 1 of the grant table entry structure is maintained largely for
>>>> + * backwards compatibility.  New guests are recommended to support using
>>>> + * version 2 to overcome version 1 limitations, but to be able to fall back
>>>> + * to version 1.
>>>
>>> v2 is not supported on Arm and I don't see it coming anytime soon.
>>> AFAIK, Linux will also not use grant table v2 unless the guest has an
>>> address space larger than 44 (?) bits.
>>
>> Yes, as soon as there are frame numbers not representable in 32 bits.
>>
>>> I can't remember why Linux decided to not use it everywhere, but this is
>>> a sign that v2 is not always desired.
>>>
>>> So I think it would be better to recommend new guests to use v1 unless
>>> they hit the limitations (to be detailed).
>>
>> IOW you'd prefer s/be able to fall back/default/? I'd be fine that way
> 
> Yes.

Okay, I've changed that part, but ...

> We would also need to document the limitations as they don't seem 
> to be (clearly?) written down in the headers.

... I'm struggling to see where (and perhaps even why) this would be
needed. The v1 and v2 grant table entry formats are all there. I'm
inclined to consider this an orthogonal addition to make by whoever
thinks such an addition is needed in the first place.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 08:26:42 2021
Subject: Re: [PATCH v4] gnttab: defer allocation of status frame tracking
 array
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <74048f89-fee7-06c2-ffd5-6e5a14bdf440@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f23ed85d-a906-a8b3-edba-48eb376c0633@xen.org>
Date: Wed, 5 May 2021 09:26:35 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <74048f89-fee7-06c2-ffd5-6e5a14bdf440@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 04/05/2021 09:42, Jan Beulich wrote:
> This array can be large when many grant frames are permitted; avoid
> allocating it when it's not going to be used anyway, by doing this only
> in gnttab_populate_status_frames(). While the delaying of the respective
> memory allocation adds possible reasons for failure of the respective
> enclosing operations, there are other memory allocations there already,
> so callers can't expect these operations to always succeed anyway.
> 
> As to the re-ordering at the end of gnttab_unpopulate_status_frames(),
> this is merely to represent intended order of actions (shrink array
> bound, then free higher array entries). If there were racing accesses,
> suitable barriers would need adding in addition.

Please drop the last sentence; this is at best misleading because you
can't just add barriers to make it race free (see the discussion on v2 [1]
for more details).

With the sentence dropped:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,


[1] <f82ddfe7-853d-ca15-2373-a38068f65ef7@xen.org>


-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 08:33:08 2021
Subject: Re: [PATCH 5/5] x86/cpuid: Fix handling of xsave dynamic leaves
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-6-andrew.cooper3@citrix.com>
 <5e6511ca-83bd-8a43-202e-949b4d19b1ab@suse.com>
 <1279476a-f99d-59a4-7fed-1aee37dbe204@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d951dc24-e613-8a1d-13ea-b1e439048165@suse.com>
Date: Wed, 5 May 2021 10:33:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <1279476a-f99d-59a4-7fed-1aee37dbe204@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 04.05.2021 16:17, Andrew Cooper wrote:
> On 04/05/2021 13:56, Jan Beulich wrote:
>> On 03.05.2021 17:39, Andrew Cooper wrote:
>>> +unsigned int xstate_compressed_size(uint64_t xstates)
>>> +{
>>> +    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
>>> +
>>> +    xstates &= ~XSTATE_FP_SSE;
>>> +    for_each_set_bit ( i, &xstates, 63 )
>>> +    {
>>> +        if ( test_bit(i, &xstate_align) )
>>> +            size = ROUNDUP(size, 64);
>>> +
>>> +        size += xstate_sizes[i];
>>> +    }
>>> +
>>> +    /* In debug builds, cross-check our calculation with hardware. */
>>> +    if ( IS_ENABLED(CONFIG_DEBUG) )
>>> +    {
>>> +        unsigned int hwsize;
>>> +
>>> +        xstates |= XSTATE_FP_SSE;
>>> +        hwsize = hw_compressed_size(xstates);
>>> +
>>> +        if ( size != hwsize )
>>> +            printk_once(XENLOG_ERR "%s(%#"PRIx64") size %#x != hwsize %#x\n",
>>> +                        __func__, xstates, size, hwsize);
>>> +        size = hwsize;
>> To be honest, already on the earlier patch I was wondering whether
>> it does any good to override size here: That'll lead to different
>> behavior on debug vs release builds. If the log message is not
>> paid attention to, we'd then end up with longer term breakage.
> 
> Well - our options are to pass the hardware size, or BUG(), because
> getting this wrong will cause memory corruption.

I'm afraid I'm lost: Neither passing the hardware size nor BUG() would
happen in a release build, so getting this wrong does mean memory
corruption there. And I'm of the clear opinion that debug builds
shouldn't differ in behavior in such respects.

If there weren't an increasing number of possible combinations, I
would be inclined to suggest that in all builds we check during
boot that the calculated and hardware-provided values match for all
possible (valid) combinations.
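The calculation under discussion can be sketched as a standalone function (a hypothetical illustration: `compressed_size`, its table arguments, and the sample values used below are made up for this sketch; real per-component sizes come from CPUID leaf 0xD):

```c
#include <stdint.h>

/* 512-byte FXSAVE legacy region + 64-byte XSAVE header. */
#define XSTATE_AREA_MIN_SIZE 576

static inline unsigned int roundup(unsigned int v, unsigned int align)
{
    return (v + align - 1) & ~(align - 1);
}

/*
 * Compute the compressed (XSAVES) area size for the given component bitmap.
 * sizes[i] is the size of component i; align64 marks components that must
 * start on a 64-byte boundary.  Components 0 and 1 (x87/SSE) live in the
 * legacy region already counted in XSTATE_AREA_MIN_SIZE.
 */
unsigned int compressed_size(uint64_t xstates,
                             const unsigned int *sizes, uint64_t align64)
{
    unsigned int size = XSTATE_AREA_MIN_SIZE;

    xstates &= ~(uint64_t)3;                 /* Drop the X87 and SSE bits. */
    for ( unsigned int i = 2; i < 63; i++ )
    {
        if ( !(xstates & (1ULL << i)) )
            continue;
        if ( align64 & (1ULL << i) )
            size = roundup(size, 64);
        size += sizes[i];
    }
    return size;
}
```

A boot-time cross-check against the hardware-reported size, as suggested above, would simply compare this result with CPUID.0xD[1].EBX for each candidate bitmap.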

> The BUG() option is a total pain when developing new support - the first
> version of this patch did use BUG(), but after hitting it 4 times in a
> row (caused by issues with constants elsewhere), I decided against it.

I can fully understand this aspect. I'm not opposed to printk_once()
at all. My comment was purely towards the override.

> If we had something which was a mix between WARN_ONCE() and a proper
> printk() explaining what was going on, then I would have used that. 
> Maybe it's time to introduce one...

I don't think there's a need here - the info you put into the log is
imo sufficient.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 08:51:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 08:51:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122932.231920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leDFg-00067m-9m; Wed, 05 May 2021 08:51:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122932.231920; Wed, 05 May 2021 08:51:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leDFg-00067f-6g; Wed, 05 May 2021 08:51:28 +0000
Received: by outflank-mailman (input) for mailman id 122932;
 Wed, 05 May 2021 08:51:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leDFe-00067Z-Qn
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 08:51:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leDFd-0005nC-OI; Wed, 05 May 2021 08:51:25 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leDFd-0007gh-Hl; Wed, 05 May 2021 08:51:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=SgylSejJpaI/1BO/qxOCs4qT3nc5sKVp4tOQs1g1Xu0=; b=EHcROL44DHbBP5BYMVwjTQEqYp
	2artOGHQcqBy0WnocWi+7pJ4qPwn4VFRxFRTuqCki47+n5egomjl2vCejWRsvvNJtldPtu4hY/NwM
	czc266VmBt5OVjlxOLZKSJ8fPQJQyPVhFaEOSv6AQJUZEnPZR7dG+JIDGkhfJ+mAqrAU=;
Subject: Re: [PATCH] public/gnttab: relax v2 recommendation
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Luca Fancellu <luca.fancellu@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <dcd9ede1-9471-6866-4ba7-b6a7664b5e35@suse.com>
 <8eac6f09-4d1d-6fcc-4218-8c9a0760a6bb@xen.org>
 <71e61d09-5d92-94dc-ae0c-ce09cb49b4ce@suse.com>
 <c468856b-8ac6-2ab1-0f5f-eabc26d47293@xen.org>
 <51c29a91-8659-7525-a565-5b9fcfc935f3@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9b8fb87c-a2fb-f371-5914-f0d175c18b02@xen.org>
Date: Wed, 5 May 2021 09:51:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <51c29a91-8659-7525-a565-5b9fcfc935f3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 05/05/2021 09:24, Jan Beulich wrote:
> On 05.05.2021 10:12, Julien Grall wrote:
>> Hi Jan,
>>
>> On 30/04/2021 09:36, Jan Beulich wrote:
>>> On 30.04.2021 10:19, Julien Grall wrote:
>>>> On 29/04/2021 14:10, Jan Beulich wrote:
>>>>> With there being a way to disable v2 support, telling new guests to use
>>>>> v2 exclusively is not a good suggestion.
>>>>>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> --- a/xen/include/public/grant_table.h
>>>>> +++ b/xen/include/public/grant_table.h
>>>>> @@ -121,8 +121,10 @@ typedef uint32_t grant_ref_t;
>>>>>      */
>>>>>     
>>>>>     /*
>>>>> - * Version 1 of the grant table entry structure is maintained purely
>>>>> - * for backwards compatibility.  New guests should use version 2.
>>>>> + * Version 1 of the grant table entry structure is maintained largely for
>>>>> + * backwards compatibility.  New guests are recommended to support using
>>>>> + * version 2 to overcome version 1 limitations, but to be able to fall back
>>>>> + * to version 1.
>>>>
>>>> v2 is not supported on Arm and I don't see it coming anytime soon.
>>>> AFAIK, Linux will also not use grant table v2 unless the guest has an
>>>> address space larger than 44 (?) bits.
>>>
>>> Yes, as soon as there are frame numbers not representable in 32 bits.
>>>
>>>> I can't remember why Linux decided to not use it everywhere, but this is
>>>> a sign that v2 is not always desired.
>>>>
>>>> So I think it would be better to recommend new guests to use v1 unless
>>>> they hit the limitations (to be detailed).
>>>
>>> IOW you'd prefer s/be able to fall back/default/? I'd be fine that way
>>
>> Yes.
> 
> Okay, I've changed that part, but ...
> 
>> We would also need to document the limitations as they don't seem
>> to be (clearly?) written down in the headers.
> 
> ... I'm struggling to see where (and perhaps even why) this would be
> needed. The v1 and v2 grant table entry formats are all there. I'm
> inclined to consider this an orthogonal addition to make by whoever
> thinks such an addition is needed in the first place.

The current comment doesn't mention the limitations but instead says 
"new OSes should use v2". Your proposal is to say "default to v1 but 
use v2 if you hit limitations".

As a Xen developer, I am aware of a single limitation (the 44-bit 
one). But here you suggest there are multiple. I could probably figure 
out the others if I dug into the code...

Now imagine you are an OS developer new to Xen. I don't think it is 
fair to say "there are limitations, but I will not tell you directly; 
instead you should try to infer them from the definitions". There is a 
chance they may miss some of the limitations, and therefore the 
decision to switch between v1 and v2 would be made incorrectly.

In addition, it also means they may end up implementing both versions 
when implementing v1 alone may be sufficient (custom OSes may never 
need 44 bits' worth of address space).
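The "44 bits" limit follows directly from the v1 entry layout: v1 grant entries carry the frame number in a uint32_t, so with 4 KiB pages only pages starting below 2^(32+12) = 2^44 bytes are representable. A minimal sketch (hypothetical helper, not Xen code):

```c
#include <stdint.h>
#include <stdbool.h>

#define PAGE_SHIFT 12  /* 4 KiB pages */

/*
 * Grant table v1 entries store the frame number in a uint32_t, so a page
 * is only grantable via v1 if its frame number fits in 32 bits, i.e. the
 * page starts below 2^44 bytes.  This is the limitation that pushes
 * guests with wider address spaces towards v2.
 */
static bool frame_fits_v1(uint64_t addr)
{
    return (addr >> PAGE_SHIFT) <= UINT32_MAX;
}
```

An OS that can prove all its grantable memory sits below this boundary never needs v2 at all, which is exactly the point being made here.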

This is not a very friendly way to work on Xen. FAOD, I am not saying 
that the other headers are perfect... Instead, I am saying we ought to 
improve any new wording to make the project a bit more welcoming.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 09:16:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 09:16:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122938.231932 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leDdm-0008Rw-Bd; Wed, 05 May 2021 09:16:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122938.231932; Wed, 05 May 2021 09:16:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leDdm-0008Rp-8Y; Wed, 05 May 2021 09:16:22 +0000
Received: by outflank-mailman (input) for mailman id 122938;
 Wed, 05 May 2021 09:16:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sTpK=KA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leDdk-0008Rj-Ah
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 09:16:20 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6b102be3-8686-4b06-bf93-d983d096eade;
 Wed, 05 May 2021 09:16:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b102be3-8686-4b06-bf93-d983d096eade
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620206178;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=HLUPrNUUw1lPse9vW7Mh371NOJtc/27AFoO2jahLfKg=;
  b=LIfnJgt6ALbD/wbK7yDPn9oWQsmGu4iqz80JxrmfDfGCDjBfbBUYxwx9
   4vdOJzc0JOfvkaMeVywVf01AYK1z65p3pplpJhl9z6V/XlIM26lcOJ1NM
   ura6+mTx9IuOW9WdylXOUBuwAHWeR2bYx93XUegSKehTcVFeLua5LLEyB
   Q=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: iP/ZWPTKv0jJrjmGJ+n5zNfk7QT56JvPS+jm3LnlTic9CT7JbdEEg+S7MOA9CSckSbmTIWb6Fs
 H9SpHTBy0LW0nE3azCaAgL3mWKA6aRoSMfhG5NFu/OhKfaZRhaVs7xLzqkEMkJ7h0a3g+KWgdw
 th3166COomKVzGodGkVQ1sBvZQfibPQTuujuzhhp8VxypAT6Z7BAYd8lban/XtGkhScyPV6GOq
 0rgkMRW//T+4sRG/Jd6lnjMQ5L/3Vmqy1RgsVZ8y99g+SkBCo4n7h2mw/c24zPw+u9C4hcyfhh
 7Fc=
X-SBRS: 5.1
X-MesageID: 42902628
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:nULHL6GEWUXqdQwKpLqFWJTXdLJzesId70hD6mlYcjYQWtCEls
 yogfQQ3QL1jjFUY307hdWcIsC7Lk/03aVepa0cJ62rUgWjgmunK4l+8ZDvqgeNJwTXzcQY76
 tpdsFFZeHYJURmjMr8/QmzG8shxt7Cy6yzmeLC1R5WLD1CQYsI1XYfNi+wFEpqSA5aQbc4Do
 Ob/MpbpzymEE5nFPiTLH8DQuTFupn3j5rgexELHFoK7wOJgDOu5tfBYmWl9z0ZVC5CxqpnzH
 jdn2XCl96emtyY6juZ7W/c6JxKhMDso+EsOOWggtUYQw+c6DqAS59mX9S5zVUIicGprG0nid
 zd5yonVv4Dl0/5WkGQjV/T1xL70DAogkWSuWOwpXf4u8T2SHYbJqN69PtkWyDU4UYho91wuZ
 gjtwny1+s1fGH9tR/w6NTSWxZhmlDcmwtbrccpg2FCSoxbUbdNrOUkjTJoOa0dFyH34p1PKp
 gJMOjg4p9tADenRkGclGxuzNuwZ280DxeLT2MT0/blrQR+rTRXyVAVy9cYmWpF3JUhS4Nc7+
 CBCahwkqpSJ/VmIZ5VNaMke4+aG2bNSRXDPCa7JknmLrgOPzbop4Ts6Ls4yem2cPUzvdQPsa
 WEdGkdmX85ekroB8HL9oZM6ArxTGK0Wimo4t1C5rBi04eMBIbDAGmmchQDgsGgq/IQDonwQP
 CoIq9bBPflMC/HBZtJ5QvjQJNfQENuEfE9i5IeYRajs8jLIorluqjwa/DIPofgFj4iRyfRGX
 0GcD/vJNhRz0yiV3Pi6SKhGU/FSwjax9ZdAaLa9+8cxMwmLYtXqDUYjly/+4WqJFR5w+kLVX
 o7BImivrKwpGGw82qNxX5uIABhAkFc56ild3tLoAQNIn7laLprgaTaRUlimF+8YjNvRcLfFw
 BS435t/7isEpCWzSc+T/WqL3ydlHlWgH6RVZ8Tlumi6K7eC90FJ6djfJY0ORTAFhRzlwovgn
 xEchU4SkjWES6rr76kgpwSDOT2bMJ9nw+vHM5RpRvkxAahjPBqYkFecy+lUMaRjwprbSFTnE
 dN/6gWh6fFpSyiMlIlgOMzMERFbUOeBL4uNnXDWKxk3pTQPC1gR2aDgjKXzzU+YHDj+Ukpim
 v9FiGMYv3QDl1BundX77by/DpPBxagVnM1Tko/nZx2FGzAtHo26+ONa6ap+0a6a1cJwIgmQX
 r4SApXBjkr68G81RaTljrHKG4vwY82OPfBSJ45davI53+rIIqUtK0PEvNO5qx5PNT2vuJja5
 PYRyalaBfDT8850Q2coXgofBRuoH4/iPXyxVnL6nO70HNXO4ulHH1WA5UgZ/eS4GjvS6zWjN
 FXjdcpsfCxNWu0QNic0q3TZyNCLBSWgWPedZBdlblk+YYJ8J10FN3ndBGN8ldt9hA3Nt31m0
 MTW74T2sGIBqZfO+gpPxtE9V8onumVJEQlsgbKEvYzFGtd+kPzDpes2f70srIhDU2KmRvoNX
 Se+yNb+e3ZXyHr789sN4sAZUBXYlM78nJs4aercJDREhyjc4h4jRCHG074VL9WU66eH7oM6j
 58/tGThueSMw71whrZszc+AqVA9Q+cMI6PKTPJPe5D6NqhP1uQxoOs/c6olT/yDQKBVH5wv/
 wNSWUgKuJZijcji4Ur0i+9DozPy3hV7Gd20HVAjV7i2o+v/WHBO1pJWDep26lrYQ==
X-IronPort-AV: E=Sophos;i="5.82,274,1613451600"; 
   d="scan'208";a="42902628"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UBepOm+DekX9tVmLPZIw/855zo9N3extc8/fI9wYDa7jul45Fgl7+o0Htt4FeQBCHRygW3+od1CY9yXT1WC6tXz7B+e14US3w5fBgpU9Y4m/emS/eMa3WFSOOOQ0xqK84TAZG9RFiGUQetG6tsZWhRmoaNUQMMvfLhonuYt+DMhkV48HebMxk8v0LhmY6ECO67L8Kw33Wp0tNqjFNrIsYQ1DjrpDI2e1sd4y1pZR26LNiv006vsGWSQ1LU0VwfUGHQeDkHuq6E9clZBsYFUo3HU/iuG8N/R49E8S9VtXunzb8OP1StlpHu34gB6fbb8U7iPwkvTZ61fFa4wI/kwgmQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wKHhj6j9usx28MOesGH7ZYw6c5IjxCdUm47/6L3H3Bc=;
 b=D+G56WWaYtZS4avfJGrd16ddOrQkkN1INzmXMG4F3DrCFUbFvJsfiJRc925WtSZbVYo5LpspL2SdCH0WA8J3gTeQ7vvFVj1XfUmIMzARg434lSCfodnsi5aKx3TJp9GWV1tLh4MEkzsbiX3kqHmZ16qicQRJ8B7uDkbruWd1t+wQYj6LctuinbBXRVwZl5S3cTCHycnvMZmOY0HN0anq4c7F5aXZut7liF6BEwQVNv68kzETESd2Zu8Mmnr7WQcD9KBqCHH39PCoSOkQkNuW5YRuDcIOlxmHRffhVhNYLn0gx/Y4hObX8D66uvbvm6+7adwJuZZZMvyanFZn3n4EVA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wKHhj6j9usx28MOesGH7ZYw6c5IjxCdUm47/6L3H3Bc=;
 b=ubwzAeDZaaEiozCaB3ICaXYXO9ASufvPT8eAyNq2OmnV9n3qFCvPdx1X8U5u0y8fIivz4iMCEz+8fCdzz2kfIoM32xUt+CHr+KYvKIMgaVmVsXd+Q5Fg5uCqpX8DqlB899SZqh9y8MFNyMC1DZ65be3lMybo+7e4mExHlPbN5KU=
Date: Wed, 5 May 2021 11:16:08 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] libs/guest: Don't hide the indirection on xc_cpu_policy_t
Message-ID: <YJJiWLqGoHLSnj01@Air-de-Roger>
References: <20210504185322.19306-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210504185322.19306-1-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MR2P264CA0111.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::27) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 68b17c64-28d7-4c9d-7741-08d90fa66ee3
X-MS-TrafficTypeDiagnostic: DM6PR03MB4300:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB43007F259EB53544CC14FCCE8F599@DM6PR03MB4300.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: QzC9gI8pHhBr3JJxgAQAKgVpyOGj0WEdguF42IIumt9+TsLRW+V2GLHl2xH0S/SXXJsa1m+0WoSwmtVhI7NaErsszEVY02o27DGqs7nXWkdmznphdxAXg1YH+8eItwMHm8NwzvHaFJ/8KpLT4/61xp6Ef0E9zqcxHY73xa5Kilu4tKYohwgsU0SEAg55tP75RmK5Z6sTaGAfDL/Anav34Vx/a6hlUS6vDSe+NwjwJ5RBiG/MOGEzPfQgpluueJcEHvf+81eWV09DlOTNjfTCRbGNiJ5LUxptrt+2ugOKitUNFDEGBmY/vPNXujOj3S1jxVIcRC/zgOivvooVLN9fFmbXoQNarzUD4SRgYhmH/LkSHbcwCqiMe61NWongS66EzIP4IQR7cdj2YGz4eODm1hge7qlQcHaD0RNVR0ME9g02vVGg6DeQRgqBuXnA/CmvqDJ9AwWMq+Mo1+cFORnbt428FENXTB4ltHATQvMPweZxWif4PkQbnkF5cNZD8/s1+J4ynZta7xOz6IQzTpHhd2ct9Cw0VIToZjK3dWVDgLw5MM3bI91O5rUBaUeI06KfG4D4bgRGXjJ5dvh6fyzbEmBk6oNzdWguJmm3iyOJo0M=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(136003)(376002)(39860400002)(396003)(366004)(2906002)(6636002)(6666004)(26005)(956004)(16526019)(186003)(85182001)(33716001)(66556008)(9686003)(86362001)(66946007)(5660300002)(478600001)(316002)(38100700002)(4326008)(66476007)(8676002)(6862004)(6486002)(8936002)(6496006)(54906003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?MUMyOGhhWVVLclMxMTIzTzd4cWMrUXdYVS9zNFdNSzBMa1Z2UkZhei81WnNR?=
 =?utf-8?B?QzZic0l3eW5pMXhkY2Ywd1R6UU9EV2d0UkMzdFM3VXpjTWFzdDF4ZUJ2Z3Iv?=
 =?utf-8?B?OEF1WnRJQnVrcktTa0R0ekJjSTZZOXVWbEdtNDlieWxmMkt0WHBNNFh4Qm1F?=
 =?utf-8?B?RDUzVDZTYSt2UUxXalBnUkVSM2pQL0lIOTVhSzAxWkRoaGwrR054VEFDV1Nm?=
 =?utf-8?B?WDI4dTNySFFjMmtON1Awa01RR04reGFFbkpETXRjNEZNc1Nla1ZMeFVYS2ZI?=
 =?utf-8?B?Q3Z2Q3E1c1Z0TjczektWTEtTZ0luRmNmQ1ZESm1oZXJGSUdiNitEaHAzWUdJ?=
 =?utf-8?B?NFV5cDJDcEJHNjZudCtUZXYrcHVBbEZYZ2RicjJxY0pTSmwrM2VGMUIrM3RN?=
 =?utf-8?B?YzFKQTFDYnY0b1VtRGQ3ZG00R3ZOTmJtUGRycUNCU0hDNFR3R3FieU9IQmhs?=
 =?utf-8?B?RDI0UU8rR29tbW82QU1TRU03MUJBcUdqREd2SWdOSS8wclFrSGk5Q0wvaGl4?=
 =?utf-8?B?SHVjaHBkT3ZmYVdkdkM2Q2JZamxIT0lqVE45WWNNaTdQME1WdFpWZkI0NTZN?=
 =?utf-8?B?Q3pTRzJTTlM3aW5LTzJSbGEwYm1ucHoyZE92Q00xbkZSOGV3V3ExMDZKb0gw?=
 =?utf-8?B?U25tb0UrUEU4TXlmYm1rN25EUG1ZaW1qcnRIeU50L2M5OVJiRy9jNVl4cWhu?=
 =?utf-8?B?d3EyeXRubDRudU5JN0NVV2FzUUF1VTFwdlRsMlZtVEFJYU1KTlNKZmN4djNV?=
 =?utf-8?B?TnR6aFZOeUYvemt3Kzl5NmdQdWJhR2xwWDlOQldNaTU0WlBXcmd2Mnd1UUVy?=
 =?utf-8?B?ci9XRGs1a1lMb050dmEwK3o5MXVGS1lGN25MOHU4TFJtdy84a0tVZFY0S2Zi?=
 =?utf-8?B?RkJOMkpWdE5VTjZqRGtTQ0RTQmkvWThvL2YrVGc0VDRWM3l4VEpPRllzMDBV?=
 =?utf-8?B?Y3Q2OU1Lb2hnWk9CYUpsazlXdkJySlpaSzR2cnoxN2VaVU8yUmxjYmdXc1Q4?=
 =?utf-8?B?RTBDS0tuZHUrZmlKZkpYeUY3eGpWNDZZZEhjY08zTjJ6THgvVmJFRm14anBZ?=
 =?utf-8?B?OTV1Y0llTmo4dlFUTHhIQk9qQ1N3cStJVG1SSHF5VXBtOUMzeVErNjF6QVhw?=
 =?utf-8?B?TC84c0tLR2crRkRwajRnamIxLzRKZ1lyQU1ZUEltWkMzNm9NWnVYbEhBczJi?=
 =?utf-8?B?S09ERlF5b2FjTkZIUXBIcDNSKzkvSkJOVEs1ZHVIdkZpRFVNanltd0dEbExI?=
 =?utf-8?B?QnZUOE41N2hKc2FsdWRGbE53TmlQaDVmSy9iL1RaSXVaNEk5Rk5UOWVNZm1K?=
 =?utf-8?B?VTNwOStaM1dQRy9Bays2ZENHVWJ4d3p2dHhyTkI4UjFiQ2N0ZG1hV3JQS25r?=
 =?utf-8?B?NHlDcGd4YTd1bC9xeVN1RmcweVV0Yk9TWklGaDdVL09QeC8zSllqNDVxVkd3?=
 =?utf-8?B?MzQ3UnZXNzNWTWRtNStkKy9DK0VydEdGZkxUaFpoWERaK1hpdGkweUc5UVlI?=
 =?utf-8?B?ZGU4a2ZBWk44WUQwQmpteStxNG1sL3V1cDdqRnZTTC82MEtrQXdWODdxaldD?=
 =?utf-8?B?VTRZV1VBMnp1bWtTMnpUQVJjWDUvRW5WOFExa3pqU3BGUHVKTW5GaFZRb0Rr?=
 =?utf-8?B?dnc0YStpcFUxNkZBNkdKVmV1cU1KemZIWXlKeTVGdVVweWY4YWU4QjlObTdl?=
 =?utf-8?B?bnV5bStKRVpHZ1RqZGZ0TWFqS01GMGVwKzhEd1kzQ0VRR1h3djg2N2FheUgz?=
 =?utf-8?Q?QKmt+FngPhb8wcPVMpMJ5lgTU2ekNp0ERbccKzg?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 68b17c64-28d7-4c9d-7741-08d90fa66ee3
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2021 09:16:15.1234
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1dmkTG2dALY/Xw/IIHivVdZ9tPTXrHocsnjUIXeRwr5oGzWykgMdGQ0TsfO2yZMNiVO4PmRHd7vWqLfA6tSyoQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4300
X-OriginatorOrg: citrix.com

On Tue, May 04, 2021 at 07:53:22PM +0100, Andrew Cooper wrote:
> It is bad form in C, perhaps best demonstrated by trying to read
> xc_cpu_policy_destroy(), and causes const qualification to have
> less-than-obvious behaviour (the hidden pointer becomes const, not the thing
> it points at).

Would this also affect cpuid_leaf_buffer_t and msr_entry_buffer_t
which hide an array behind a typedef?
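The const pitfall described above can be demonstrated with a hypothetical typedef (the names are illustrative, not the actual libxenguest types):

```c
struct policy { int leaf; };

typedef struct policy *policy_handle_t;   /* pointer hidden in a typedef */

/*
 * 'const policy_handle_t' means 'struct policy *const': the pointer
 * itself is const, but the pointed-to object is freely modifiable -
 * probably not what the API author intended.
 */
int demo(const policy_handle_t h)
{
    h->leaf = 42;        /* compiles fine: only the pointer is const */
    /* h = NULL; */      /* this, by contrast, would not compile */
    return h->leaf;
}
```

Making the pointer explicit in the API lets callers write `const struct policy *`, which const-qualifies the pointee as expected.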

> xc_cpu_policy_set_domain() needs to drop its (now normal) const qualification,
> as the policy object is modified by the serialisation operation.
> 
> This also shows up a problem with the x86_cpu_policies_are_compatible(), where
> the intermediate pointers are non-const.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> 
> Discovered while trying to start the integration into XenServer.  This wants
> fixing ASAP, before further uses get added.
> 
> Unsure what to do about x86_cpu_policies_are_compatible().  It would be nice
> to have xc_cpu_policy_is_compatible() sensibly const'd, but maybe that means
> we need a struct const_cpu_policy and that smells like it is spiralling out of
> control.

Not sure TBH, I cannot think of any alternative right now, but
introducing a const_cpu_policy feels like code duplication.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 05 09:58:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 09:58:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122952.231944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leEIO-0003w8-LA; Wed, 05 May 2021 09:58:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122952.231944; Wed, 05 May 2021 09:58:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leEIO-0003w1-Hv; Wed, 05 May 2021 09:58:20 +0000
Received: by outflank-mailman (input) for mailman id 122952;
 Wed, 05 May 2021 09:58:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leEIM-0003vr-V1; Wed, 05 May 2021 09:58:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leEIM-0006xE-RD; Wed, 05 May 2021 09:58:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leEIM-0000Lr-FS; Wed, 05 May 2021 09:58:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leEIM-0007NK-Ey; Wed, 05 May 2021 09:58:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=g2pX0JlFalaJ7bXsW3Pd5ggfSXizL2EamAxcDXZuPcE=; b=JIFD+mWnLCDWFACgVDmbiWJRNe
	wB9U8B/Q4ZaIEY0cWlk0M4KeBfty+89XQ47mM/4eygKSVe6FLvarl+m6R3aC18cwtq6dsFGxS3onM
	X852Jmpl9cwNcsPV6yE9pR2zKPk98Ic6i6EuAC5YkYIdxeO7BD+VfhfMufjOWThKj2cg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161786-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 161786: all pass - PUSHED
X-Osstest-Versions-This:
    xen=8cccd6438e86112ab383e41b433b5a7e73be9621
X-Osstest-Versions-That:
    xen=1f8ee4cb430e5a9da37096574c41632cf69a0bc7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 09:58:18 +0000

flight 161786 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161786/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  8cccd6438e86112ab383e41b433b5a7e73be9621
baseline version:
 xen                  1f8ee4cb430e5a9da37096574c41632cf69a0bc7

Last test of basis   161601  2021-05-02 09:18:26 Z    3 days
Testing same since   161786  2021-05-05 09:18:31 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Rahul Singh <rahul.singh@arm.com>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Tim Deegan <tim@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   1f8ee4cb43..8cccd6438e  8cccd6438e86112ab383e41b433b5a7e73be9621 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed May 05 10:04:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 10:04:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122957.231958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leEOV-0005PV-Bu; Wed, 05 May 2021 10:04:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122957.231958; Wed, 05 May 2021 10:04:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leEOV-0005PO-90; Wed, 05 May 2021 10:04:39 +0000
Received: by outflank-mailman (input) for mailman id 122957;
 Wed, 05 May 2021 10:04:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sTpK=KA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leEOU-0005PI-7c
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 10:04:38 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 03aaa212-772d-4033-8ee3-88a76aec57bb;
 Wed, 05 May 2021 10:04:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03aaa212-772d-4033-8ee3-88a76aec57bb
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620209076;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=MxJ5VTvESIL5Fmj/E1Zd8IgpWtP592ks+pd4juYkTXU=;
  b=eNWfFhS3wCbE7Q2IpB16kW7ZVheu5kdpn+ULpf1IEx9j24W3Cqi3mSH+
   D7+wKtgbW56F0unbZoeC2Qci2MVkrJLxm1NhVCs2TZ0N8F/XbK45aHoT3
   ngLmVHAEylGfFFT5q7GCxPR6vG8H6ZYO8hgbxLAhUFygewOQITnosWOCA
   8=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: XvV3tTP5rZgjrQgr1z12BlG6GuSl4GTb3XU4D3iD4KqSSUz7UACumwnxRUzn7hGlKk38dEFW/P
 Kz/9j7+k0sAKVX6OelCmamMiC0yjbZuq8Wr7sKB8FwTWPLTde2ENoxyy8/Tu+ESCOrLfDk38hp
 5vhDKYjTJcmnUxpVaHcBf2fzFSRy5tMVXVjE0Sx/jHlbjKDv+6Cw6bTfRXKiWVC280hOLQszJE
 NoOlQkkPry3IveUXwRjMk/chYPz50r0RDtL0qsKdgfQxubYthU0rVv+pglNoksiYxcMbe1VLrn
 exA=
X-SBRS: 5.1
X-MesageID: 43211373
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:i621cK4EdA/bQNVdZgPXwdKCI+orLtY04lQ7vn1ZYRZeftWE0+
 Wnm/oG3RH54QxhP00Is9aGJaWGXDf4/Zl6/YEeMd6ZLW/bkUGvK5xv6pan/i34F0TFh5tg/I
Date: Wed, 5 May 2021 12:04:27 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
Message-ID: <YJJtqyDOIkMxjvxW@Air-de-Roger>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210504213120.4179-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0

On Tue, May 04, 2021 at 10:31:20PM +0100, Andrew Cooper wrote:
> Just as with x86_cpu_policies_are_compatible(), make a start on this function
> with some token handling.
> 
> Add levelling support for MSR_ARCH_CAPS, because RSBA has interesting
> properties, and introduce test_calculate_compatible_success() to the unit
> tests, covering various cases where the arch_caps CPUID bit falls out, and
> with RSBA accumulating rather than intersecting across the two.
> 
> Extend x86_cpu_policies_are_compatible() with a check for MSR_ARCH_CAPS, which
> was arguably missing from c/s e32605b07ef "x86: Begin to introduce support for
> MSR_ARCH_CAPS".
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> ---
>  tools/include/xen-tools/libs.h           |   5 ++
>  tools/tests/cpu-policy/test-cpu-policy.c | 150 +++++++++++++++++++++++++++++++
>  xen/include/xen/lib/x86/cpu-policy.h     |  22 +++++
>  xen/lib/x86/policy.c                     |  47 ++++++++++
>  4 files changed, 224 insertions(+)
> 
> diff --git a/tools/include/xen-tools/libs.h b/tools/include/xen-tools/libs.h
> index a16e0c3807..4de10efdea 100644
> --- a/tools/include/xen-tools/libs.h
> +++ b/tools/include/xen-tools/libs.h
> @@ -63,4 +63,9 @@
>  #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
>  #endif
>  
> +#ifndef _AC
> +#define __AC(X, Y)   (X ## Y)
> +#define _AC(X, Y)    __AC(X, Y)
> +#endif

You need to remove those definitions from libxl_internal.h.

>  #endif	/* __XEN_TOOLS_LIBS__ */
> diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
> index 75973298df..455b4fe3c0 100644
> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> @@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
>      }
>  }
>  
> +static void test_calculate_compatible_success(void)
> +{
> +    static struct test {
> +        const char *name;
> +        struct {
> +            struct cpuid_policy p;
> +            struct msr_policy m;
> +        } a, b, out;
> +    } tests[] = {
> +        {
> +            "arch_caps, b short max_leaf",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 6,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 6,
> +            },
> +        },
> +        {
> +            "arch_caps, b feat missing",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 7,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 7,
> +            },
> +        },
> +        {
> +            "arch_caps, b rdcl_no missing",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +            },
> +        },
> +        {
> +            "arch_caps, rdcl_no ok",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rdcl_no = true,
> +            },
> +        },
> +        {
> +            "arch_caps, rsba accum",
> +            .a = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rsba = true,
> +            },
> +            .b = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +            },
> +            .out = {
> +                .p.basic.max_leaf = 7,
> +                .p.feat.arch_caps = true,
> +                .m.arch_caps.rsba = true,
> +            },
> +        },
> +    };
> +    struct cpu_policy_errors no_errors = INIT_CPU_POLICY_ERRORS;
> +
> +    printf("Testing calculate compatibility success:\n");
> +
> +    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
> +    {
> +        struct test *t = &tests[i];
> +        struct cpuid_policy *p = malloc(sizeof(struct cpuid_policy));
> +        struct msr_policy *m = malloc(sizeof(struct msr_policy));
> +        struct cpu_policy a = {
> +            &t->a.p,
> +            &t->a.m,
> +        }, b = {
> +            &t->b.p,
> +            &t->b.m,
> +        }, out = {
> +            p,
> +            m,
> +        };
> +        struct cpu_policy_errors e;
> +        int res;
> +
> +        if ( !p || !m )
> +            err(1, "%s() malloc failure", __func__);
> +
> +        res = x86_cpu_policy_calculate_compatible(&a, &b, &out, &e);
> +
> +        /* Check the expected error output. */
> +        if ( res != 0 || memcmp(&no_errors, &e, sizeof(no_errors)) )
> +        {
> +            fail("  Test '%s' expected no errors\n"
> +                 "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
> +                 t->name, res, e.leaf, e.subleaf, e.msr);
> +            goto test_done;
> +        }
> +
> +        if ( memcmp(&t->out.p, p, sizeof(*p)) )
> +        {
> +            fail("  Test '%s' resulting CPUID policy not as expected\n",
> +                 t->name);
> +            goto test_done;
> +        }
> +
> +        if ( memcmp(&t->out.m, m, sizeof(*m)) )
> +        {
> +            fail("  Test '%s' resulting MSR policy not as expected\n",
> +                 t->name);
> +            goto test_done;
> +        }

In order to assert that we don't mess things up, I would also add an
x86_cpu_policies_are_compatible() check here, so the generated policy
is verified to be compatible with both inputs:

res = x86_cpu_policies_are_compatible(&a, &out, &e);
if ( res )
    fail("  Test '%s' created incompatible policy\n"
         "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
         t->name, res, e.leaf, e.subleaf, e.msr);
res = x86_cpu_policies_are_compatible(&b, &out, &e);
if ( res )
    fail("  Test '%s' created incompatible policy\n"
         "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
         t->name, res, e.leaf, e.subleaf, e.msr);

> +
> +    test_done:
> +        free(p);
> +        free(m);
> +    }
> +}
> +
>  int main(int argc, char **argv)
>  {
>      printf("CPU Policy unit tests\n");
> @@ -793,6 +941,8 @@ int main(int argc, char **argv)
>      test_is_compatible_success();
>      test_is_compatible_failure();
>  
> +    test_calculate_compatible_success();
> +
>      if ( nr_failures )
>          printf("Done: %u failures\n", nr_failures);
>      else
> diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
> index 5a2c4c7b2d..0422a15557 100644
> --- a/xen/include/xen/lib/x86/cpu-policy.h
> +++ b/xen/include/xen/lib/x86/cpu-policy.h
> @@ -37,6 +37,28 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>                                      const struct cpu_policy *guest,
>                                      struct cpu_policy_errors *err);
>  
> +/*
> + * Given two policies, calculate one which is compatible with each.
> + *
> + * i.e. Given host @a and host @b, calculate what to give a VM so it can live
> + * migrate between the two.
> + *
> + * @param a        A cpu_policy.
> + * @param b        Another cpu_policy.
> + * @param out      A policy compatible with @a and @b.
> + * @param err      Optional hint for error diagnostics.
> + * @returns -errno
> + *
> + * For typical usage, @a and @b should be system policies of the same type
> + * (i.e. PV/HVM default/max) from different hosts.  In the case that an
> + * incompatibility is detected, the optional err pointer may identify the
> + * problematic leaf/subleaf and/or MSR.
> + */
> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
> +                                        const struct cpu_policy *b,
> +                                        struct cpu_policy *out,
> +                                        struct cpu_policy_errors *err);
> +
>  #endif /* !XEN_LIB_X86_POLICIES_H */
>  
>  /*
> diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
> index f6cea4e2f9..06039e8aa8 100644
> --- a/xen/lib/x86/policy.c
> +++ b/xen/lib/x86/policy.c
> @@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>      if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
>          FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
>  
> +    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
> +        FAIL_MSR(MSR_ARCH_CAPABILITIES);

It might be nice to expand test_is_compatible_{success,failure} to
account for arch_caps being checked now.

Shouldn't this check also take into account that the host might not
have RSBA set, while it's still legitimate for a guest policy to have
it?

if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw & ~POL_MASK )
    FAIL_MSR(MSR_ARCH_CAPABILITIES);

Maybe POL_MASK should be renamed and defined in a header so it's
widely available?

> +
>  #undef FAIL_MSR
>  #undef FAIL_CPUID
>  #undef NA
> @@ -43,6 +46,50 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>      return ret;
>  }
>  
> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
> +                                        const struct cpu_policy *b,
> +                                        struct cpu_policy *out,
> +                                        struct cpu_policy_errors *err)

I think this should be in an #ifndef __XEN__ protected region?

There's no need to expose this to the hypervisor, as I would expect it
will never have to do compatible policy generation (i.e. it will
always be done by the toolstack)?

> +{
> +    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
> +    const struct msr_policy *am = a->msr, *bm = b->msr;
> +    struct cpuid_policy *cp = out->cpuid;
> +    struct msr_policy *mp = out->msr;
> +
> +    memset(cp, 0, sizeof(*cp));
> +    memset(mp, 0, sizeof(*mp));
> +
> +    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
> +
> +    if ( cp->basic.max_leaf >= 7 )
> +    {
> +        cp->feat.max_subleaf = min(ap->feat.max_subleaf, bp->feat.max_subleaf);
> +
> +        cp->feat.raw[0].b = ap->feat.raw[0].b & bp->feat.raw[0].b;
> +        cp->feat.raw[0].c = ap->feat.raw[0].c & bp->feat.raw[0].c;
> +        cp->feat.raw[0].d = ap->feat.raw[0].d & bp->feat.raw[0].d;
> +    }
> +
> +    /* TODO: Far more. */

Right, my proposed patch (07/13) went a bit further and also leveled
1c, 1d, Da1, e1c, e1d, e7d, e8b and e21a, and we also need to level
a couple of max_leaf fields.

I'm happy for this to go in first, and I can rebase the extra logic I
have on top of this one.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 05 10:29:52 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161769-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.11-testing test] 161769: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 10:29:42 +0000

flight 161769 xen-4.11-testing real [real]
flight 161787 xen-4.11-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161769/
http://logs.test-lab.xenproject.org/osstest/logs/161787/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-freebsd10-i386 21 guest-start/freebsd.repeat fail pass in 161787-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 160155
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 12 debian-hvm-install fail like 160155
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 160155
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 160155
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 160155
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 160155
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 160155
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 160155
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 160155
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 160155
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 160155
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 160155
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 160155
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b1e46bc369bb490b721c77f15d2583bbf466152d
baseline version:
 xen                  8bce4698f6264f4b359643233b1878d0f61ed7b0

Last test of basis   160155  2021-03-20 13:37:02 Z   45 days
Testing same since   161769  2021-05-04 13:07:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8bce4698f6..b1e46bc369  b1e46bc369bb490b721c77f15d2583bbf466152d -> stable-4.11


From xen-devel-bounces@lists.xenproject.org Wed May 05 10:33:49 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161771-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.14-testing test] 161771: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.14-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.14-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=10f0b2d49376865d49680f06c52b451fabce3bb5
X-Osstest-Versions-That:
    xen=207c70c5b622e87682152728b4ea4f0970ad2aad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 10:33:44 +0000

flight 161771 xen-4.14-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161771/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161484
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161484
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161484
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161484
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161484
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161484
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161484
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161484
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161484
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161484
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161484
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 xen                  10f0b2d49376865d49680f06c52b451fabce3bb5
baseline version:
 xen                  207c70c5b622e87682152728b4ea4f0970ad2aad

Last test of basis   161484  2021-04-27 13:06:22 Z    7 days
Testing same since   161771  2021-05-04 13:07:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   207c70c5b6..10f0b2d493  10f0b2d49376865d49680f06c52b451fabce3bb5 -> stable-4.14


From xen-devel-bounces@lists.xenproject.org Wed May 05 10:49:47 2021
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <74048f89-fee7-06c2-ffd5-6e5a14bdf440@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v4] gnttab: defer allocation of status frame tracking
 array
Message-ID: <4450085e-be97-a1ba-dbd8-c72468406fd5@citrix.com>
Date: Wed, 5 May 2021 11:49:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <74048f89-fee7-06c2-ffd5-6e5a14bdf440@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 04/05/2021 09:42, Jan Beulich wrote:
> This array can be large when many grant frames are permitted; avoid
> allocating it when it's not going to be used anyway, by doing this only
> in gnttab_populate_status_frames(). While the delaying of the respective
> memory allocation adds possible reasons for failure of the respective
> enclosing operations, there are other memory allocations there already,
> so callers can't expect these operations to always succeed anyway.
>
> As to the re-ordering at the end of gnttab_unpopulate_status_frames(),
> this is merely to represent intended order of actions (shrink array
> bound, then free higher array entries). If there were racing accesses,
> suitable barriers would need adding in addition.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Nack.

The argument you make says that the grant status frame array is "large
enough" to care about not allocating.  (I frankly disagree, but that
isn't relevant to my nack.)

The change in logic here moves a failure from DOMCTL_createdomain to
GNTTABOP_setversion.  We know, from the last-minute screwups in XSA-226,
that there are versions of Windows and Linux in the field, used by
customers, which will BUG()/BSOD when GNTTABOP_setversion doesn't succeed.

You're literally adding even more complexity to the grant table, to also
increase the chance of regressing VMs in the wild.  This is not ok.

The only form of this patch which is in any way acceptable, is to avoid
the allocation when you know *in DOMCTL_createdomain* that it will never
be needed by the VM.  So far, that is from Kconfig and/or the command
line options.
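
The acceptable shape described above can be sketched roughly as follows.
This is purely illustrative C with hypothetical names —
cfg_grant_v2_enabled, opt_max_grant_version and grant_table_init are
stand-ins, not actual Xen symbols: decide at domain-creation time
whether grant v2 can ever be used, and skip the allocation only when it
provably cannot.

```c
#include <stdbool.h>
#include <stdlib.h>

/* Hypothetical stand-ins for Kconfig / command-line knowledge that is
 * already available at DOMCTL_createdomain time (NOT real Xen symbols). */
static bool cfg_grant_v2_enabled = true;   /* build-time choice */
static int  opt_max_grant_version = 2;     /* command-line choice */

struct grant_table {
    unsigned int max_status_frames;
    void **status;                         /* status frame tracking array */
};

/* Decide at domain creation whether the VM can ever use grant v2; only
 * skip the allocation when that is provably impossible.  Any allocation
 * failure then surfaces in DOMCTL_createdomain, which toolstacks handle,
 * never in GNTTABOP_setversion, which some guests BUG()/BSOD on. */
static int grant_table_init(struct grant_table *gt, unsigned int max_frames)
{
    gt->max_status_frames = 0;
    gt->status = NULL;

    if (!cfg_grant_v2_enabled || opt_max_grant_version < 2)
        return 0;                          /* v2 impossible: safe to skip */

    gt->status = calloc(max_frames, sizeof(*gt->status));
    if (!gt->status)
        return -1;                         /* fails at createdomain time */
    gt->max_status_frames = max_frames;
    return 0;
}
```

With this shape, the new failure path stays in domain creation rather
than being introduced into a hypercall that existing guests assume
cannot fail.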

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 05 10:52:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 10:52:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.122991.232027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leF8W-0003rU-A3; Wed, 05 May 2021 10:52:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 122991.232027; Wed, 05 May 2021 10:52:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leF8W-0003rN-6h; Wed, 05 May 2021 10:52:12 +0000
Received: by outflank-mailman (input) for mailman id 122991;
 Wed, 05 May 2021 10:52:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3EPH=KA=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1leF8V-0003rF-2E
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 10:52:11 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com (unknown
 [40.107.14.57]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0617f20a-4073-49d1-96d5-d206e008667c;
 Wed, 05 May 2021 10:51:59 +0000 (UTC)
Received: from AM5PR0701CA0054.eurprd07.prod.outlook.com (2603:10a6:203:2::16)
 by VI1PR08MB2784.eurprd08.prod.outlook.com (2603:10a6:802:25::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.38; Wed, 5 May
 2021 10:51:46 +0000
Received: from VE1EUR03FT010.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::92) by AM5PR0701CA0054.outlook.office365.com
 (2603:10a6:203:2::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.9 via Frontend
 Transport; Wed, 5 May 2021 10:51:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT010.mail.protection.outlook.com (10.152.18.113) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Wed, 5 May 2021 10:51:46 +0000
Received: ("Tessian outbound 8ca198b738d3:v91");
 Wed, 05 May 2021 10:51:46 +0000
Received: from 3183eb751811.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C2E83D08-C077-4856-9140-0FBAA0644FA4.1; 
 Wed, 05 May 2021 10:51:37 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3183eb751811.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 05 May 2021 10:51:37 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR08MB3758.eurprd08.prod.outlook.com (2603:10a6:803:bd::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.41; Wed, 5 May
 2021 10:51:29 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9%4]) with mapi id 15.20.4087.044; Wed, 5 May 2021
 10:51:29 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0281.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:195::16) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Wed, 5 May 2021 10:51:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Content-Type: text/plain;
	charset=utf-8
Subject: Re: [PATCH v5 1/3] docs: add doxygen support for html documentation
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <alpine.DEB.2.21.2105041532030.5018@sstabellini-ThinkPad-T480s>
Date: Wed, 5 May 2021 11:51:22 +0100
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <A0FCD64F-722B-4FC2-97C2-C9130E457920@arm.com>
References: <20210504133145.767-1-luca.fancellu@arm.com>
 <20210504133145.767-2-luca.fancellu@arm.com>
 <alpine.DEB.2.21.2105041532030.5018@sstabellini-ThinkPad-T480s>
To: Stefano Stabellini <sstabellini@kernel.org>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
MIME-Version: 1.0



> On 4 May 2021, at 23:55, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 4 May 2021, Luca Fancellu wrote:
>> Add doxygen support to build html documentation with sphinx;
>> to do that, the following modifications are applied:
>>
>> 1) Modify docs/configure.ac and consequently the configure script
>>   to check, through the ./configure script, for the presence in
>>   the system of the sphinx-build binary; if it is found then
>>   it also checks for the presence of the doxygen binary and the
>>   breathe and sphinx-rtd-theme python packages.
>>   Doxygen and the packages are required only if sphinx-build
>>   is found, otherwise the Makefile will simply skip the
>>   sphinx html generation.
>>   The ax_python_module.m4 support is needed to check for
>>   python packages.
>> 2) Add doxygen templates and configuration file
>> 3) Modify docs/Makefile to call in sequence doxygen and
>>   sphinx-build; the doxygen configuration file will be
>>   modified to include the xen absolute path and
>>   a list of headers to parse.
>>   A doxygen_input.h file is generated to include every
>>   header listed in the doxy_input.list file.
>> 4) Add a preprocessor called by doxygen before parsing headers;
>>   it will include in every header a doxygen_include.h file
>>   that provides missing defines and includes that are
>>   usually passed by the compiler.
>
> This is clearly gargantuan work! Thank you so much!
>
> Is there a way we can split this patch somehow? Given the nature of the
> work, obviously we can't do a line-by-line review of everything, but I
> think it would be good to split it so that we can more easily ack parts
> of it and review others.
>
> For instance something like the following:
>
> - one patch for docs/xen.doxyfile.in for which you can already add my
>  acked-by judging by the output[1]
> - one patch to add the png image, also add my acked-by
> - one patch for mainpage.md, header.html, footer.html, and
>  customdoxygen.css, I think they can all be on a single patch and have
>  my acked-by
> - one patch to add the new m4 macros under m4/
> - one patch with changes to docs/configure.ac, docs/configure,
>  config/Docs.mk.in
> - one patch with doxy-preprocessor.py, doxy_input.list,
>  doxygen_include.h.in
> - one patch with changes to docs/Makefile and docs/conf.py
>
> Does this sound reasonable to you?

Hi Stefano,

Yes, I'll split the patch and I'll push everything in v6.

Cheers,
Luca
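
The preprocessor described in point 4 of the quoted commit message can
be pictured roughly as below. This is a simplified sketch of the idea —
prepending a compatibility include so doxygen sees the definitions a
compiler invocation would normally supply — and not the actual
doxy-preprocessor.py, which also handles things like anonymous
structs/unions; the function name is an assumption for illustration.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return a newly allocated copy of a header's text with a synthetic
 * include prepended, so a documentation parser sees the defines that
 * are usually passed by the compiler.  Caller frees the result. */
static char *preprocess_header(const char *header_text)
{
    static const char prefix[] = "#include \"doxygen_include.h\"\n";
    /* sizeof(prefix) counts the trailing NUL, so this is exact. */
    size_t len = sizeof(prefix) + strlen(header_text);
    char *out = malloc(len);

    if (!out)
        return NULL;
    strcpy(out, prefix);
    strcat(out, header_text);
    return out;
}
```

The real pipeline then feeds the rewritten header text to doxygen in
place of the original file.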

>
>
> [1] https://luca.fancellu.gitlab.io/xen-docs/hypercall-interfaces/arm64/grant_tables.html
>
>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> v4 changes:
>> - create alias @keepindent/@endkeepindent for the doxygen
>>  command @code/@endcode
>> v3 changes:
>> - add preprocessor to handle missing defines and anonymous
>>  structs/unions before doxygen parsing
>> - modification to Makefile to handle the new process
>> v2 changes:
>> - Fix bug in Makefile when sphinx is not found in the system
>> ---
>> .gitignore                                   |    6 +
>> config/Docs.mk.in                            |    2 +
>> docs/Makefile                                |   42 +-
>> docs/conf.py                                 |   48 +-
>> docs/configure                               |  258 ++
>> docs/configure.ac                            |   15 +
>> docs/xen-doxygen/customdoxygen.css           |   36 +
>> docs/xen-doxygen/doxy-preprocessor.py        |  110 +
>> docs/xen-doxygen/doxy_input.list             |    0
>> docs/xen-doxygen/doxygen_include.h.in        |   32 +
>> docs/xen-doxygen/footer.html                 |   21 +
>> docs/xen-doxygen/header.html                 |   56 +
>> docs/xen-doxygen/mainpage.md                 |    5 +
>> docs/xen-doxygen/xen_project_logo_165x67.png |  Bin 0 -> 18223 bytes
>> docs/xen.doxyfile.in                         | 2316 ++++++++++++++++++
>> m4/ax_python_module.m4                       |   56 +
>> m4/docs_tool.m4                              |    9 +
>> 17 files changed, 3006 insertions(+), 6 deletions(-)
>> create mode 100644 docs/xen-doxygen/customdoxygen.css
>> create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
>> create mode 100644 docs/xen-doxygen/doxy_input.list
>> create mode 100644 docs/xen-doxygen/doxygen_include.h.in
>> create mode 100644 docs/xen-doxygen/footer.html
>> create mode 100644 docs/xen-doxygen/header.html
>> create mode 100644 docs/xen-doxygen/mainpage.md
>> create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png
>> create mode 100644 docs/xen.doxyfile.in
>> create mode 100644 m4/ax_python_module.m4
>>
>> diff --git a/.gitignore b/.gitignore
>> index 1c2fa1530b..d271e0ce6a 100644
>> --- a/.gitignore
>> +++ b/.gitignore
>> @@ -58,6 +58,12 @@ docs/man7/
>> docs/man8/
>> docs/pdf/
>> docs/txt/
>> +docs/doxygen-output
>> +docs/sphinx
>> +docs/xen.doxyfile
>> +docs/xen.doxyfile.tmp
>> +docs/xen-doxygen/doxygen_include.h
>> +docs/xen-doxygen/doxygen_include.h.tmp
>> extras/mini-os*
>> install/*
>> stubdom/*-minios-config.mk
>> diff --git a/config/Docs.mk.in b/config/Docs.mk.in
>> index e76e5cd5ff..dfd4a02838 100644
>> --- a/config/Docs.mk.in
>> +++ b/config/Docs.mk.in
>> @@ -7,3 +7,5 @@ POD2HTML            := @POD2HTML@
>> POD2TEXT            := @POD2TEXT@
>> PANDOC              := @PANDOC@
>> PERL                := @PERL@
>> +SPHINXBUILD         := @SPHINXBUILD@
>> +DOXYGEN             := @DOXYGEN@
>> diff --git a/docs/Makefile b/docs/Makefile
>> index 8de1efb6f5..2f784c36ce 100644
>> --- a/docs/Makefile
>> +++ b/docs/Makefile
>> @@ -17,6 +17,18 @@ TXTSRC-y := $(sort $(shell find misc -name '*.txt' -print))
>>
>> PANDOCSRC-y := $(sort $(shell find designs/ features/ misc/ process/ specs/ \( -name '*.pandoc' -o -name '*.md' \) -print))
>>
>> +# Directory in which the doxygen documentation is created
>> +# This must be kept in sync with breathe_projects value in conf.py
>> +DOXYGEN_OUTPUT = doxygen-output
>> +
>> +# Doxygen input headers from xen-doxygen/doxy_input.list file
>> +DOXY_LIST_SOURCES != cat "xen-doxygen/doxy_input.list"
>> +DOXY_LIST_SOURCES := $(realpath $(addprefix $(XEN_ROOT)/,$(DOXY_LIST_SOURCES)))
>> +
>> +DOXY_DEPS := xen.doxyfile \
>> +			 xen-doxygen/mainpage.md \
>> +			 xen-doxygen/doxygen_include.h
>> +
>> # Documentation targets
>> $(foreach i,$(MAN_SECTIONS), \
>>   $(eval DOC_MAN$(i) := $(patsubst man/%.$(i),man$(i)/%.$(i), \
>> @@ -46,8 +58,28 @@ all: build
>> build: html txt pdf man-pages figs
>>
>> .PHONY: sphinx-html
>> -sphinx-html:
>> -	sphinx-build -b html . sphinx/html
>> +sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
>> +ifneq ($(SPHINXBUILD),no)
>> +	$(DOXYGEN) xen.doxyfile
>> +	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
>> +else
>> +	@echo "Sphinx is not installed; skipping sphinx-html documentation."
>> +endif
>> +
>> +xen.doxyfile: xen.doxyfile.in xen-doxygen/doxy_input.list
>> +	@echo "Generating $@"
>> +	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< \
>> +		| sed -e "s,@DOXY_OUT@,$(DOXYGEN_OUTPUT),g" > $@.tmp
>> +	@$(foreach inc,\
>> +		$(DOXY_LIST_SOURCES),\
>> +		echo "INPUT += \"$(inc)\"" >> $@.tmp; \
>> +	)
>> +	mv $@.tmp $@
>> +
>> +xen-doxygen/doxygen_include.h: xen-doxygen/doxygen_include.h.in
>> +	@echo "Generating $@"
>> +	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< > $@.tmp
>> +	@mv $@.tmp $@
>>
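Side note for reviewers: the placeholder substitution these two rules perform (replace @XEN_BASE@/@DOXY_OUT@, then append one INPUT line per listed header) is easy to sanity-check outside make. The template text, XEN_ROOT value, and header path below are made-up stand-ins, not the real xen.doxyfile.in contents:

```python
# Minimal sketch of what the xen.doxyfile generation rule produces.
# Template and paths are illustrative stand-ins only.
template = "OUTPUT_DIRECTORY = @DOXY_OUT@\nSTRIP_FROM_PATH = @XEN_BASE@\n"

xen_root = "/src/xen"                      # stands in for $(realpath $(XEN_ROOT))
doxygen_output = "doxygen-output"          # stands in for $(DOXYGEN_OUTPUT)
headers = ["xen/include/public/grant_table.h"]  # stands in for doxy_input.list

doxyfile = (template
            .replace("@XEN_BASE@", xen_root)
            .replace("@DOXY_OUT@", doxygen_output))
# One INPUT += line per listed header, as the $(foreach ...) loop emits.
for h in headers:
    doxyfile += 'INPUT += "{}/{}"\n'.format(xen_root, h)

print(doxyfile)
```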
>> .PHONY: html
>> html: $(DOC_HTML) html/index.html
>> @@ -71,7 +103,11 @@ clean: clean-man-pages
>> 	$(MAKE) -C figs clean
>> 	rm -rf .word_count *.aux *.dvi *.bbl *.blg *.glo *.idx *~
>> 	rm -rf *.ilg *.log *.ind *.toc *.bak *.tmp core
>> -	rm -rf html txt pdf sphinx/html
>> +	rm -rf html txt pdf sphinx $(DOXYGEN_OUTPUT)
>> +	rm -f xen.doxyfile
>> +	rm -f xen.doxyfile.tmp
>> +	rm -f xen-doxygen/doxygen_include.h
>> +	rm -f xen-doxygen/doxygen_include.h.tmp
>>=20
>> .PHONY: distclean
>> distclean: clean
>> diff --git a/docs/conf.py b/docs/conf.py
>> index 50e41501db..a48de42331 100644
>> --- a/docs/conf.py
>> +++ b/docs/conf.py
>> @@ -13,13 +13,17 @@
>> # add these directories to sys.path here. If the directory is relative to the
>> # documentation root, use os.path.abspath to make it absolute, like shown here.
>> #
>> -# import os
>> -# import sys
>> +import os
>> +import sys
>> # sys.path.insert(0, os.path.abspath('.'))
>>
>>
>> # -- Project information -----------------------------------------------------
>>
>> +if "XEN_ROOT" not in os.environ:
>> +    sys.exit("$XEN_ROOT environment variable undefined.")
>> +XEN_ROOT = os.path.abspath(os.environ["XEN_ROOT"])
>> +
>> project = u'Xen'
>> copyright = u'2019, The Xen development community'
>> author = u'The Xen development community'
>> @@ -35,6 +39,7 @@ try:
>>             xen_subver = line.split(u"=")[1].strip()
>>         elif line.startswith(u"export XEN_EXTRAVERSION"):
>>             xen_extra = line.split(u"=")[1].split(u"$", 1)[0].strip()
>> +
>> except:
>>     pass
>> finally:
>> @@ -44,6 +49,15 @@ finally:
>>     else:
>>         version = release = u"unknown version"
>>
>> +try:
>> +    xen_doxygen_output = None
>> +
>> +    for line in open(u"Makefile"):
>> +        if line.startswith(u"DOXYGEN_OUTPUT"):
>> +                xen_doxygen_output = line.split(u"=")[1].strip()
>> +except:
>> +    sys.exit("DOXYGEN_OUTPUT variable undefined.")
>> +
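The DOXYGEN_OUTPUT lookup that conf.py does here can be exercised standalone; the Makefile text below is a stand-in for the real docs/Makefile:

```python
# Sketch of the Makefile scan conf.py performs for DOXYGEN_OUTPUT.
# makefile_text is an illustrative stand-in, not the real docs/Makefile.
makefile_text = """\
PANDOCSRC-y := $(sort ...)
DOXYGEN_OUTPUT = doxygen-output
"""

xen_doxygen_output = None
for line in makefile_text.splitlines():
    if line.startswith("DOXYGEN_OUTPUT"):
        # Everything after the first '=' is the value.
        xen_doxygen_output = line.split("=", 1)[1].strip()

print(xen_doxygen_output)
```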
>> # -- General configuration ---------------------------------------------------
>>
>> # If your documentation needs a minimal Sphinx version, state it here.
>> @@ -53,7 +67,8 @@ needs_sphinx = '1.4'
>> # Add any Sphinx extension module names here, as strings. They can be
>> # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
>> # ones.
>> -extensions = []
>> +# breathe -> extension that integrates doxygen xml output with sphinx
>> +extensions = ['breathe']
>>
>> # Add any paths that contain templates here, relative to this directory.
>> templates_path = ['_templates']
>> @@ -175,6 +190,33 @@ texinfo_documents = [
>>      'Miscellaneous'),
>> ]
>>
>> +# -- Options for Breathe extension --------------------------------------------
>> +
>> +breathe_projects = {
>> +    "Xen": "{}/docs/{}/xml".format(XEN_ROOT, xen_doxygen_output)
>> +}
>> +breathe_default_project = "Xen"
>> +
>> +breathe_domain_by_extension = {
>> +    "h": "c",
>> +    "c": "c",
>> +}
>> +breathe_separate_member_pages = True
>> +breathe_show_enumvalue_initializer = True
>> +breathe_show_define_initializer = True
>> +
>> +# Qualifiers on a function declaration cause Sphinx/Breathe to warn with
>> +# "Error when parsing function declaration" and similar messages.  This
>> +# is a list of strings that the parser should additionally accept as
>> +# attributes.
>> +cpp_id_attributes = [
>> +    '__syscall', '__deprecated', '__may_alias',
>> +    '__used', '__unused', '__weak',
>> +    '__DEPRECATED_MACRO', 'FUNC_NORETURN',
>> +    '__subsystem',
>> +]
>> +c_id_attributes = cpp_id_attributes
>> +
>>
>> # -- Options for Epub output --------------------------------------------------
>>
>> diff --git a/docs/configure b/docs/configure
>> index 569bd4c2ff..0ebf046a79 100755
>> --- a/docs/configure
>> +++ b/docs/configure
>> @@ -588,6 +588,8 @@ ac_unique_file="misc/xen-command-line.pandoc"
>> ac_subst_vars='LTLIBOBJS
>> LIBOBJS
>> PERL
>> +DOXYGEN
>> +SPHINXBUILD
>> PANDOC
>> POD2TEXT
>> POD2HTML
>> @@ -673,6 +675,7 @@ POD2MAN
>> POD2HTML
>> POD2TEXT
>> PANDOC
>> +DOXYGEN
>> PERL'
>>=20
>>=20
>> @@ -1318,6 +1321,7 @@ Some influential environment variables:
>>   POD2HTML    Path to pod2html tool
>>   POD2TEXT    Path to pod2text tool
>>   PANDOC      Path to pandoc tool
>> +  DOXYGEN     Path to doxygen tool
>>   PERL        Path to Perl parser
>>=20
>> Use these variables to override the choices made by `configure' or to help
>> @@ -1800,6 +1804,7 @@ ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.
>>
>>
>>
>> +
>> case "$host_os" in
>> *freebsd*) XENSTORED_KVA=/dev/xen/xenstored ;;
>> *) XENSTORED_KVA=/proc/xen/xsd_kva ;;
>> @@ -1812,6 +1817,53 @@ case "$host_os" in
>> esac
>>=20
>>=20
>> +# ===========================================================================
>> +#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
>> +# ===========================================================================
>> +#
>> +# SYNOPSIS
>> +#
>> +#   AX_PYTHON_MODULE(modname[, fatal, python])
>> +#
>> +# DESCRIPTION
>> +#
>> +#   Checks for Python module.
>> +#
>> +#   If fatal is non-empty then absence of a module will trigger an error.
>> +#   The third parameter can either be "python" for Python 2 or "python3" for
>> +#   Python 3; defaults to Python 3.
>> +#
>> +# LICENSE
>> +#
>> +#   Copyright (c) 2008 Andrew Collier
>> +#
>> +#   Copying and distribution of this file, with or without modification, are
>> +#   permitted in any medium without royalty provided the copyright notice
>> +#   and this notice are preserved. This file is offered as-is, without any
>> +#   warranty.
>> +
>> +#serial 9
>> +
>> +# This is what autoupdate's m4 run will expand.  It fires
>> +# the warning (with _au_warn_XXX), outputs it into the
>> +# updated configure.ac (with AC_DIAGNOSE), and then outputs
>> +# the replacement expansion.
>> +
>> +
>> +# This is an auxiliary macro that is also run when
>> +# autoupdate runs m4.  It simply calls m4_warning, but
>> +# we need a wrapper so that each warning is emitted only
>> +# once.  We break the quoting in m4_warning's argument in
>> +# order to expand this macro's arguments, not AU_DEFUN's.
>> +
>> +
>> +# Finally, this is the expansion that is picked up by
>> +# autoconf.  It tells the user to run autoupdate, and
>> +# then outputs the replacement expansion.  We do not care
>> +# about autoupdate's warning because that contains
>> +# information on what to do *after* running autoupdate.
>> +
>> +
>>=20
>>=20
>> test "x$prefix" = "xNONE" && prefix=$ac_default_prefix
>> @@ -2232,6 +2284,212 @@ $as_echo "$as_me: WARNING: pandoc is not available so some documentation won't b
>>=20
>>=20
>> +# If sphinx is installed, make sure to have also the dependencies to build
>> +# Sphinx documentation.
>> +for ac_prog in sphinx-build
>> +do
>> +  # Extract the first word of "$ac_prog", so it can be a program name with args.
>> +set dummy $ac_prog; ac_word=$2
>> +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
>> +$as_echo_n "checking for $ac_word... " >&6; }
>> +if ${ac_cv_prog_SPHINXBUILD+:} false; then :
>> +  $as_echo_n "(cached) " >&6
>> +else
>> +  if test -n "$SPHINXBUILD"; then
>> +  ac_cv_prog_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test.
>> +else
>> +as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
>> +for as_dir in $PATH
>> +do
>> +  IFS=$as_save_IFS
>> +  test -z "$as_dir" && as_dir=.
>> +    for ac_exec_ext in '' $ac_executable_extensions; do
>> +  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
>> +    ac_cv_prog_SPHINXBUILD="$ac_prog"
>> +    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
>> +    break 2
>> +  fi
>> +done
>> +  done
>> +IFS=$as_save_IFS
>> +
>> +fi
>> +fi
>> +SPHINXBUILD=$ac_cv_prog_SPHINXBUILD
>> +if test -n "$SPHINXBUILD"; then
>> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
>> +$as_echo "$SPHINXBUILD" >&6; }
>> +else
>> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
>> +$as_echo "no" >&6; }
>> +fi
>> +
>> +
>> +  test -n "$SPHINXBUILD" && break
>> +done
>> +test -n "$SPHINXBUILD" || SPHINXBUILD="no"
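For readers less familiar with autoconf output: the generated PATH walk above amounts to "find sphinx-build on PATH, else set SPHINXBUILD=no". A rough Python equivalent (the fake executable below is only for demonstration):

```python
# What the generated AC_CHECK_PROGS shell boils down to: look the tool up
# on PATH and fall back to the literal string "no", as configure does.
import os
import shutil
import stat
import tempfile

def find_tool(name, path=None):
    """Return the tool's path if found on PATH (or `path`), else "no"."""
    found = shutil.which(name, path=path)
    return found if found is not None else "no"

# Demonstrate with a throwaway PATH entry containing a fake executable.
tmpdir = tempfile.mkdtemp()
fake = os.path.join(tmpdir, "sphinx-build")
with open(fake, "w") as f:
    f.write("#!/bin/sh\n")
os.chmod(fake, os.stat(fake).st_mode | stat.S_IXUSR | stat.S_IXGRP | stat.S_IXOTH)

print(find_tool("sphinx-build", path=tmpdir))
print(find_tool("no-such-tool", path=tmpdir))
```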
>> +
>> +    if test "x$SPHINXBUILD" = xno; then :
>> +  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: sphinx-build is not available so sphinx documentation \
>> +won't be built" >&5
>> +$as_echo "$as_me: WARNING: sphinx-build is not available so sphinx documentation \
>> +won't be built" >&2;}
>> +else
>> +
>> +            # Extract the first word of "sphinx-build", so it can be a program name with args.
>> +set dummy sphinx-build; ac_word=$2
>> +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
>> +$as_echo_n "checking for $ac_word... " >&6; }
>> +if ${ac_cv_path_SPHINXBUILD+:} false; then :
>> +  $as_echo_n "(cached) " >&6
>> +else
>> +  case $SPHINXBUILD in
>> +  [\\/]* | ?:[\\/]*)
>> +  ac_cv_path_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test with a path.
>> +  ;;
>> +  *)
>> +  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
>> +for as_dir in $PATH
>> +do
>> +  IFS=$as_save_IFS
>> +  test -z "$as_dir" && as_dir=.
>> +    for ac_exec_ext in '' $ac_executable_extensions; do
>> +  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
>> +    ac_cv_path_SPHINXBUILD="$as_dir/$ac_word$ac_exec_ext"
>> +    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
>> +    break 2
>> +  fi
>> +done
>> +  done
>> +IFS=$as_save_IFS
>> +
>> +  ;;
>> +esac
>> +fi
>> +SPHINXBUILD=$ac_cv_path_SPHINXBUILD
>> +if test -n "$SPHINXBUILD"; then
>> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
>> +$as_echo "$SPHINXBUILD" >&6; }
>> +else
>> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
>> +$as_echo "no" >&6; }
>> +fi
>> +
>> +
>> +
>> +
>> +    # Extract the first word of "doxygen", so it can be a program name with args.
>> +set dummy doxygen; ac_word=$2
>> +{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
>> +$as_echo_n "checking for $ac_word... " >&6; }
>> +if ${ac_cv_path_DOXYGEN+:} false; then :
>> +  $as_echo_n "(cached) " >&6
>> +else
>> +  case $DOXYGEN in
>> +  [\\/]* | ?:[\\/]*)
>> +  ac_cv_path_DOXYGEN="$DOXYGEN" # Let the user override the test with a path.
>> +  ;;
>> +  *)
>> +  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
>> +for as_dir in $PATH
>> +do
>> +  IFS=$as_save_IFS
>> +  test -z "$as_dir" && as_dir=.
>> +    for ac_exec_ext in '' $ac_executable_extensions; do
>> +  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
>> +    ac_cv_path_DOXYGEN="$as_dir/$ac_word$ac_exec_ext"
>> +    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
>> +    break 2
>> +  fi
>> +done
>> +  done
>> +IFS=$as_save_IFS
>> +
>> +  ;;
>> +esac
>> +fi
>> +DOXYGEN=$ac_cv_path_DOXYGEN
>> +if test -n "$DOXYGEN"; then
>> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DOXYGEN" >&5
>> +$as_echo "$DOXYGEN" >&6; }
>> +else
>> +  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
>> +$as_echo "no" >&6; }
>> +fi
>> +
>> +
>> +    if ! test -x "$ac_cv_path_DOXYGEN"; then :
>> +
>> +        as_fn_error $? "doxygen is needed" "$LINENO" 5
>> +
>> +fi
>> +
>> +
>> +    if test -z $PYTHON;
>> +    then
>> +        if test -z "";
>> +        then
>> +            PYTHON="python3"
>> +        else
>> +            PYTHON=""
>> +        fi
>> +    fi
>> +    PYTHON_NAME=`basename $PYTHON`
>> +    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: breathe" >&5
>> +$as_echo_n "checking $PYTHON_NAME module: breathe... " >&6; }
>> +    $PYTHON -c "import breathe" 2>/dev/null
>> +    if test $? -eq 0;
>> +    then
>> +        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
>> +$as_echo "yes" >&6; }
>> +        eval HAVE_PYMOD_BREATHE=yes
>> +    else
>> +        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
>> +$as_echo "no" >&6; }
>> +        eval HAVE_PYMOD_BREATHE=no
>> +        #
>> +        if test -n "yes"
>> +        then
>> +            as_fn_error $? "failed to find required module breathe" "$LINENO" 5
>> +            exit 1
>> +        fi
>> +    fi
>> +
>> +
>> +    if test -z $PYTHON;
>> +    then
>> +        if test -z "";
>> +        then
>> +            PYTHON="python3"
>> +        else
>> +            PYTHON=""
>> +        fi
>> +    fi
>> +    PYTHON_NAME=`basename $PYTHON`
>> +    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: sphinx_rtd_theme" >&5
>> +$as_echo_n "checking $PYTHON_NAME module: sphinx_rtd_theme... " >&6; }
>> +    $PYTHON -c "import sphinx_rtd_theme" 2>/dev/null
>> +    if test $? -eq 0;
>> +    then
>> +        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
>> +$as_echo "yes" >&6; }
>> +        eval HAVE_PYMOD_SPHINX_RTD_THEME=yes
>> +    else
>> +        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
>> +$as_echo "no" >&6; }
>> +        eval HAVE_PYMOD_SPHINX_RTD_THEME=no
>> +        #
>> +        if test -n "yes"
>> +        then
>> +            as_fn_error $? "failed to find required module sphinx_rtd_theme" "$LINENO" 5
>> +            exit 1
>> +        fi
>> +    fi
>> +
>> +
>> +
>> +fi
>> +
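The AX_PYTHON_MODULE checks above run `$PYTHON -c "import breathe"` and treat failure as fatal. The same probe can be done from Python without actually importing the module; `breathe` availability of course depends on the local installation, so the demo uses a stdlib module:

```python
# Python-side equivalent of the AX_PYTHON_MODULE importability check.
import importlib.util

def have_module(modname):
    """True if `modname` is importable by this interpreter."""
    return importlib.util.find_spec(modname) is not None

# 'json' ships with the standard library, so this check always passes;
# whether have_module("breathe") passes depends on the installation.
print(have_module("json"))
```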
>>=20
>> # Extract the first word of "perl", so it can be a program name with args.
>> set dummy perl; ac_word=$2
>> diff --git a/docs/configure.ac b/docs/configure.ac
>> index c2e5edd3b3..a2ff55f30a 100644
>> --- a/docs/configure.ac
>> +++ b/docs/configure.ac
>> @@ -20,6 +20,7 @@ m4_include([../m4/docs_tool.m4])
>> m4_include([../m4/path_or_fail.m4])
>> m4_include([../m4/features.m4])
>> m4_include([../m4/paths.m4])
>> +m4_include([../m4/ax_python_module.m4])
>>=20
>> AX_XEN_EXPAND_CONFIG()
>>=20
>> @@ -29,6 +30,20 @@ AX_DOCS_TOOL_PROG([POD2HTML], [pod2html])
>> AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
>> AX_DOCS_TOOL_PROG([PANDOC], [pandoc])
>>=20
>> +# If sphinx is installed, make sure to have also the dependencies to build
>> +# Sphinx documentation.
>> +AC_CHECK_PROGS([SPHINXBUILD], [sphinx-build], [no])
>> +    AS_IF([test "x$SPHINXBUILD" = xno],
>> +        [AC_MSG_WARN(sphinx-build is not available so sphinx documentation \
>> +won't be built)],
>> +        [
>> +            AC_PATH_PROG([SPHINXBUILD], [sphinx-build])
>> +            AX_DOCS_TOOL_REQ_PROG([DOXYGEN], [doxygen])
>> +            AX_PYTHON_MODULE([breathe],[yes])
>> +            AX_PYTHON_MODULE([sphinx_rtd_theme], [yes])
>> +        ]
>> +    )
>> +
>> AC_ARG_VAR([PERL], [Path to Perl parser])
>> AX_PATH_PROG_OR_FAIL([PERL], [perl])
>>=20
>> diff --git a/docs/xen-doxygen/customdoxygen.css b/docs/xen-doxygen/customdoxygen.css
>> new file mode 100644
>> index 0000000000..4735e41cf5
>> --- /dev/null
>> +++ b/docs/xen-doxygen/customdoxygen.css
>> @@ -0,0 +1,36 @@
>> +/* Custom CSS for Doxygen-generated HTML
>> + * Copyright (c) 2015 Intel Corporation
>> + * SPDX-License-Identifier: Apache-2.0
>> + */
>> +
>> +code {
>> +  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
>> +  background-color: #D8D8D8;
>> +  padding: 0 0.25em 0 0.25em;
>> +}
>> +
>> +pre.fragment {
>> +  display: block;
>> +  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
>> +  padding: 1rem;
>> +  word-break: break-all;
>> +  word-wrap: break-word;
>> +  white-space: pre;
>> +  background-color: #D8D8D8;
>> +}
>> +
>> +#projectlogo
>> +{
>> +  vertical-align: middle;
>> +}
>> +
>> +#projectname
>> +{
>> +  font: 200% Tahoma, Arial,sans-serif;
>> +  color: #3D578C;
>> +}
>> +
>> +#projectbrief
>> +{
>> +  color: #3D578C;
>> +}
>> diff --git a/docs/xen-doxygen/doxy-preprocessor.py b/docs/xen-doxygen/doxy-preprocessor.py
>> new file mode 100755
>> index 0000000000..496899d8e6
>> --- /dev/null
>> +++ b/docs/xen-doxygen/doxy-preprocessor.py
>> @@ -0,0 +1,110 @@
>> +#!/usr/bin/python3
>> +#
>> +# Copyright (c) 2021, Arm Limited.
>> +#
>> +# SPDX-License-Identifier: GPL-2.0
>> +#
>> +
>> +import os, sys, re
>> +
>> +
>> +# Variables that hold the preprocessed header text
>> +output_text = ""
>> +header_file_name = ""
>> +
>> +# Variables to enumerate the anonymous structs/unions
>> +anonymous_struct_count = 0
>> +anonymous_union_count = 0
>> +
>> +
>> +def error(text):
>> +    sys.stderr.write("{}\n".format(text))
>> +    sys.exit(1)
>> +
>> +
>> +def write_to_output(text):
>> +    sys.stdout.write(text)
>> +
>> +
>> +def insert_doxygen_header(text):
>> +    # Here the strategy is to insert the #include <doxygen_include.h> in the
>> +    # first line of the header
>> +    abspath = os.path.dirname(os.path.abspath(__file__))
>> +    text += "#include \"{}/doxygen_include.h\"\n".format(abspath)
>> +
>> +    return text
>> +
>> +
>> +def enumerate_anonymous(match):
>> +    global anonymous_struct_count
>> +    global anonymous_union_count
>> +
>> +    if "struct" in match.group(1):
>> +        label = "anonymous_struct_%d" % anonymous_struct_count
>> +        anonymous_struct_count += 1
>> +    else:
>> +        label = "anonymous_union_%d" % anonymous_union_count
>> +        anonymous_union_count += 1
>> +
>> +    return match.group(1) + " " + label + " {"
>> +
>> +
>> +def manage_anonymous_structs_unions(text):
>> +    # Match anonymous unions/structs with this pattern:
>> +    # struct/union {
>> +    #     [...]
>> +    #
>> +    # and substitute it in this way:
>> +    #
>> +    # struct anonymous_struct_# {
>> +    #     [...]
>> +    # or
>> +    # union anonymous_union_# {
>> +    #     [...]
>> +    # where # is a counter starting from zero, kept separately for structs
>> +    # and unions.
>> +    #
>> +    # We don't count anonymous unions/structs that are part of a typedef
>> +    # because they don't create any issue for doxygen.
>> +    text = re.sub(
>> +        r"(?<!typedef\s)(struct|union)\s+?\{",
>> +        enumerate_anonymous,
>> +        text,
>> +        flags=re.S
>> +    )
>> +
>> +    return text
>> +
>> +
>> +def main(argv):
>> +    global output_text
>> +    global header_file_name
>> +
>> +    if len(argv) != 1:
>> +        error("Script requires exactly one header file as argument!")
>> +
>> +    header_file_name = argv[0]
>> +
>> +    # Open the header file and read all lines
>> +    input_header_file = open(header_file_name, 'r')
>> +    lines = input_header_file.readlines()
>> +
>> +    # Inject config.h and some defines in the current header; during
>> +    # compilation this job is done by the -include argument passed to the
>> +    # compiler.
>> +    output_text = insert_doxygen_header(output_text)
>> +
>> +    # Load the file content in a variable
>> +    for line in lines:
>> +        output_text += "{}".format(line)
>> +
>> +    # Try to get rid of any anonymous union/struct
>> +    output_text = manage_anonymous_structs_unions(output_text)
>> +
>> +    # Final stage of the preprocessor: print the output to stdout
>> +    write_to_output(output_text)
>> +
>> +
>> +if __name__ == "__main__":
>> +    main(sys.argv[1:])
>> +    sys.exit(0)
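The anonymous-struct/union renaming is the subtle part of this script, so it is worth checking in isolation. The C snippet below is a made-up sample; it shows that a plain `struct {` gets a generated name while a `typedef struct {` is left alone by the negative lookbehind:

```python
# Standalone exercise of the renaming regex used by doxy-preprocessor.py.
import re

counts = {"struct": 0, "union": 0}

def rename(match):
    # Same counter-per-kind scheme as enumerate_anonymous() in the patch.
    kind = match.group(1)
    label = "anonymous_%s_%d" % (kind, counts[kind])
    counts[kind] += 1
    return "%s %s {" % (kind, label)

sample = """\
struct {
    int a;
};
typedef struct {
    int b;
} foo_t;
union {
    int c;
};
"""

renamed = re.sub(r"(?<!typedef\s)(struct|union)\s+?\{", rename, sample, flags=re.S)
print(renamed)
```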
>> diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
>> new file mode 100644
>> index 0000000000..e69de29bb2
>> diff --git a/docs/xen-doxygen/doxygen_include.h.in b/docs/xen-doxygen/doxygen_include.h.in
>> new file mode 100644
>> index 0000000000..df284f3931
>> --- /dev/null
>> +++ b/docs/xen-doxygen/doxygen_include.h.in
>> @@ -0,0 +1,32 @@
>> +/*
>> + * Doxygen include header
>> + * It supplies the xen/include/xen/config.h that is included using the -include
>> + * argument of the compiler in the Xen Makefile.
>> + * Other macros are defined because they are usually provided by the compiler.
>> + *
>> + * Copyright (C) 2021 ARM Limited
>> + *
>> + * Author: Luca Fancellu <luca.fancellu@arm.com>
>> + *
>> + * SPDX-License-Identifier: GPL-2.0
>> + */
>> +
>> +#include "@XEN_BASE@/xen/include/xen/config.h"
>> +
>> +#if defined(CONFIG_X86_64)
>> +
>> +#define __x86_64__ 1
>> +
>> +#elif defined(CONFIG_ARM_64)
>> +
>> +#define __aarch64__ 1
>> +
>> +#elif defined(CONFIG_ARM_32)
>> +
>> +#define __arm__ 1
>> +
>> +#else
>> +
>> +#error Architecture not supported/recognized.
>> +
>> +#endif
>> diff --git a/docs/xen-doxygen/footer.html b/docs/xen-doxygen/footer.html
>> new file mode 100644
>> index 0000000000..a24bf2b9b4
>> --- /dev/null
>> +++ b/docs/xen-doxygen/footer.html
>> @@ -0,0 +1,21 @@
>> +<!-- HTML footer for doxygen 1.8.13-->
>> +<!-- start footer part -->
>> +<!--BEGIN GENERATE_TREEVIEW-->
>> +<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
>> +  <ul>
>> +    $navpath
>> +    <li class="footer">$generatedby
>> +    <a href="http://www.doxygen.org/index.html">
>> +    <img class="footer" src="$relpath^doxygen.png" alt="doxygen"/></a> $doxygenversion </li>
>> +  </ul>
>> +</div>
>> +<!--END GENERATE_TREEVIEW-->
>> +<!--BEGIN !GENERATE_TREEVIEW-->
>> +<hr class="footer"/><address class="footer"><small>
>> +$generatedby &#160;<a href="http://www.doxygen.org/index.html">
>> +<img class="footer" src="$relpath^doxygen.png" alt="doxygen"/>
>> +</a> $doxygenversion
>> +</small></address>
>> +<!--END !GENERATE_TREEVIEW-->
>> +</body>
>> +</html>
>> diff --git a/docs/xen-doxygen/header.html b/docs/xen-doxygen/header.html
>> new file mode 100644
>> index 0000000000..83ac2f1835
>> --- /dev/null
>> +++ b/docs/xen-doxygen/header.html
>> @@ -0,0 +1,56 @@
>> +<!-- HTML header for doxygen 1.8.13-->
>> +<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
>> +<html xmlns="http://www.w3.org/1999/xhtml">
>> +<head>
>> +<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
>> +<meta http-equiv="X-UA-Compatible" content="IE=9"/>
>> +<meta name="generator" content="Doxygen $doxygenversion"/>
>> +<meta name="viewport" content="width=device-width, initial-scale=1"/>
>> +<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
>> +<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
>> +<link href="$relpath^tabs.css" rel="stylesheet" type="text/css"/>
>> +<script type="text/javascript" src="$relpath^jquery.js"></script>
>> +<script type="text/javascript" src="$relpath^dynsections.js"></script>
>> +$treeview
>> +$search
>> +$mathjax
>> +<link href="$relpath^$stylesheet" rel="stylesheet" type="text/css" />
>> +$extrastylesheet
>> +</head>
>> +<body>
>> +<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
>> +
>> +<!--BEGIN TITLEAREA-->
>> +<div id="titlearea">
>> +<table cellspacing="0" cellpadding="0">
>> + <tbody>
>> + <tr style="height: 56px;">
>> +  <!--BEGIN PROJECT_LOGO-->
>> +  <td id="projectlogo"><img alt="Logo" src="$relpath^$projectlogo"/></td>
>> +  <!--END PROJECT_LOGO-->
>> +  <!--BEGIN PROJECT_NAME-->
>> +  <td id="projectalign" style="padding-left: 1em;">
>> +   <div id="projectname">$projectname
>> +   <!--BEGIN PROJECT_NUMBER-->&#160;<span id="projectnumber">$projectnumber</span><!--END PROJECT_NUMBER-->
>> +   </div>
>> +   <!--BEGIN PROJECT_BRIEF--><div id="projectbrief">$projectbrief</div><!--END PROJECT_BRIEF-->
>> +  </td>
>> +  <!--END PROJECT_NAME-->
>> +  <!--BEGIN !PROJECT_NAME-->
>> +   <!--BEGIN PROJECT_BRIEF-->
>> +    <td style="padding-left: 0.5em;">
>> +    <div id="projectbrief">$projectbrief</div>
>> +    </td>
>> +   <!--END PROJECT_BRIEF-->
>> +  <!--END !PROJECT_NAME-->
>> +  <!--BEGIN DISABLE_INDEX-->
>> +   <!--BEGIN SEARCHENGINE-->
>> +   <td>$searchbox</td>
>> +   <!--END SEARCHENGINE-->
>> +  <!--END DISABLE_INDEX-->
>> + </tr>
>> + </tbody>
>> +</table>
>> +</div>
>> +<!--END TITLEAREA-->
>> +<!-- end header part -->
>> diff --git a/docs/xen-doxygen/mainpage.md b/docs/xen-doxygen/mainpage.md
>> new file mode 100644
>> index 0000000000..ff548b87fc
>> --- /dev/null
>> +++ b/docs/xen-doxygen/mainpage.md
>> @@ -0,0 +1,5 @@
>> +# API Documentation   {#index}
>> +
>> +## Introduction
>> +
>> +## Licensing
>> diff --git a/docs/xen-doxygen/xen_project_logo_165x67.png b/docs/xen-doxygen/xen_project_logo_165x67.png
>> new file mode 100644
>> index 0000000000000000000000000000000000000000..7244959d59cdeb9f23c5202160ea45508dfc7265
>> GIT binary patch
>> literal 18223
>> [binary PNG data for xen_project_logo_165x67.png omitted]
>> z=3D$l#?%>g=3D4Q<b7x@vr<mC^R@T)e#tkUc}bW(9pf`IPkDx!)QG6*|-{j2J0t1?yAx!
>> z`}gWGaNFcbcCfa+6jw8L-|p`2ij?G#P{)p)=3Dc1y!g@uLHXbnOfGf1KRmoHz=3DWmoy3
>> zj5%k{oWu3%)mwDp#EE<Pm;N>}Z@SIhhoyZDh8P%3%9k&%lsiw-_mukenq)R<(j;ug
>> zj2WxfuU~)W%9ShksYc`{@u!rUn)+nt&Yc&1eSJ4}?%a7Yo}Ua>jm-Bp1K>ZIsj8)=3D
>> zr6bCdcJ=3DDjbmm9w-o5)W^M4x~Hf%V(Wy_XhXc!)dOU7rtv_YB8{QdnmpE`Bw_RE(q
>> zl@W+{5uQIA@9-A%^_Aaz^9>j9{gAJpe{xg;;5wI)n#1(&I63Ccim1A7yi`R>jvA$x
>> zHDJI1&Ev<9*FSpnXc@fpvUp&&cKgAD2VWpF(82dZ2=3DQ`J;ji-l{%s;dqO!;|oR`m~
>> zWJkFaAI-jf`zn&2&vEkP$^01q9)hlV50aE~?37>?@J<R0CY)1BwuMf<H$V7aB0dXx
>> zr|pQV3kf}Y>(;I3$ZYwy|1#aUaU-LD|Nc%$rw)XzqO*TWL`hk_SkYoeqxDR(X1cnI
>> zOMchQp&(W&nR0}d_7z=3DS-A>RUE8&@GF;mB)YZzBDdZ0^BfAr{)(tZ9XiTWe;TDs4z
>> zS+gb!68B8Sij{KMZMSOG3JuIzVc7Q(nSg1qK`E|q2uq2}=3DlH9Vf8)lD=3Dc-n%s*U$h
>> z1@A=3DZ(s86OD(CEfbprj1q@$yA7}EIU-;v_)A{dG<)YR0>Q4Sh)pVSBgC1sszKh$tC
>> zG%yp-`FV@FIG0R)l3h2^s#UvS3k!?-cvj9uE8P`9>y_(vxa>6$UHkK=3DPoFB4{C6l9
>> zznntDMSn&N`})mm1$15GaL<AT3!bCP+K6(@uT`_w6EY+nHB|H_^NkXo$IPru1!Ov^
>> z9jY#$R{GhqX9enrB6Z2245Run-|U=3DhB`(wAxr)x8Kc8NuN)>LZP#N!psvr#{i`%zv
>> zE8Q*Qs=3D#=3DKk(HgLluJeErr+6$ScbxRzD=3Dt8ET8fpWe*7nIfHU^LYY1(FDY7bYCkn?
>> z9WYAQ2s-2(MUSKd1_#c`kQ@#wS+dkHbVZf%tX~2uX+XxL70cxkTF~JVH*MaOdHe1i
>> zx&QD%0Ukce_#GZ(Je2eY_Y2Nd;Z$*W?d|{f(o*Hto!fHl#&x;M>CTH6<jmPKTw=3Dk&
>> z_=3Du7vOLF<G6%2s~D(T3w+_-g9iFfDTT_ugQ`{@NB^KII+DO*EBBdtuCGDi`Accn^|
>> zQlP4&K~2diTozTCQ`6JrZt6WHO+rP>;^NLD{x48#xD=3Dyq%T}$t8D5o!3aV#PL6s)&
>> zJ>moajw~C?e)IM%B@gDOdLM<;QlW79_>4^NHl7y^_6^FSU#wU$TIinyQKVB+Hfhtk
>> z)AG@JW5A4a6^7*aB$<=3D?*zn<+)X}=3Dc$H$kzyZ=3Do$0EYYmD2azw!(Wp+b?ffhzI%7h
>> zNl2ZPQ>WxOq}2&XpLgNd>C;LWLUqogDh`jTG*X}s9yoeb_8vSaJNE36&D*z2d_qEY
>> z^A;^mBHy{lUlT|5zWw^epc~G8imvz|!XHdYkt4^C7o?-YiTq<+Lc*JGOPA*O<$S;b
>> z5>6jSOTY(Ab>Y<c^95!7UD;5kLrDMdv14-M?p^-RZP~gt*9y`4Z9i)C=3Dxu2aA1dKh
>> z&si9S(<F|bIH8o8&$xZpF4?$kn?%LN<|M9KmCI|#5J#1_DsEw@(p05&6y-Z};eyny
>> zTh|rFSTza}l<T7>)fio~4(+<G)gL`Z=3DFjs~7?Shd)GK@T>?wwZMjUwNFa}-8e)nK_
>> z*rEW3NI0xv)4fNJbK7?A0`u&ZUHkUS?)?Ye3*Ik`x9{!PvrjqSzI%^s*s@htt=3D}L?
>> zYu3t4_t~$?SE#TV`Po4-(f6}W^%^zo($dmiK6vs>wjlp)yLQXYy?OC@WLiA#IdD)x
>> z%E{BxxJi>kh&vhO{~1sNIPKU7X>t)-`1(zovsn(tD_lm^k!fy4UfcPs_^&-`)EdO&
>> z4jzM?oSm<vXFO5DsN((+Ht*Oe>o#qcgjK6^hiVLa`0cmfayoP?u(y4Oj;VYeRaoZz
>> zHh7)oRAJA-LvrZoF=3D^De@nWRG=3DlUe{N-`LS44QZ9(0$9OQAP?=3Da$1qz<Qx*>Qdl#y
>> zaoGq%@1gfT6dss{PX)9{F2i-#H!@D!unjMH%QjiNdFuyZ&897~X5(h%xN5^DiBC+D
>> z*cFMQJ6iuC^5Bw(v5=3D6h(HJ;<GGK<=3DtSdC(lh<uj!YgSY{^|`#14mVwYd39`Qx~tu
>> z`YqdEB3>de0pPlx-T=3DoUx%<XwAoA#$bJ^=3DQZ&T8E8_Xkl{YK?{!?qo=3DX~!-Zp`$~o
>> zOC8(9CouTQ-Xkf>{ld>x7zNJ=3DEKgo7A(2rzU?Qpk{(u#pPZ#fW9L}R$X1PCSS>BaS
>> zVJK+=3D49hZ=3DD_?#N8d;rBg(*p#1!&x{efKSTU`mfUbCo5r1SKWM`NoYKmxC}Il;zOE
>> zd%FNUGz}jP7LNc{dk2NQiC?uwmL;wFV8pIWmS`~JQZS=3DmXhcrycI`O;&XzwMc~P>_
>> z-L8JK7A+R<ICM0Z`Nbq8DRFs>AtNIVRa*SNeDzv6ef0)6Ovr|D^YqfCOBt#$L=3D|xD
>> zA@AMFcp^*USG*UI%i<*h;CURFYRE9neSH3srAoOjO<0-ByqIU<XDSR;nON4y_~p`f
>> zz`$FGKO0Qc56{*Js0naT-CBRl*gJpaeX5=3Dr`AcwAOeW$^MH<zCPon!KvGp`)+xq9V
>> zx;h3j8-@<o+xYwY(`iB(5*j<!gxxm>ifO^Ux3%F#@_zk_6)R3&vu4fRhgpIbogiVc
>> z@gI!vWy^6~E-tg@Wqx0!%6_Cj9g?GWer3^Fn1)(cQ_Jxe_oSSX;H9w=3D8WSfWOP49f
>> z!fC0l^E@Pa8PZ-Mu}N!W1te?I#;uYJ$-*O%uzr(7tw@rfs2K4JUm^h!OWz6qh$!(1
>> zi4ecAB}(2s`t;q2@));l(>8(WheRV!pl~?~gZEgjV3cQ3U`P(6eiY)9AvyI*Ba*5S
>> zhu(Db$C}<qgw%akIw~m7sH7P$K}%x9YO?JE#5F}49IpEKnUY3JzK&SCWs45Wb#?Nr
>> z@1UR{W$g>4W&i&Dl??<ChFub!C4M)x#)k!wIT}N<&b)f{YR2OnkuX$BU}TI0E{Xjh
>> z1Vw>?mL;IF<8n)U^9}VY2UHq|H~4~CVKlPnZ>j6-KIchf{7NN1e=3Dy5C^ToA*$Y|y0
>> z8y+RzA>lxT_=3D2GrCm=3DtrUsx0vF-kf235^8PMJi$52JbQcqQFq`3JAfqNNLr!{aTb~
>> z!cff-CpYXl@cy#omq)pebLM9?#qJ5>+Toe%0R^)}IC6e_l=3DW1{^yTX|y+1ty_xOwi
>> z%h;n&KN^BVkOrrcKYjv376mBXsx)fYJV0}}t~feQld!NbWw|U@3=3DSMPP(jtI)l3nN
>> zFNfWIfwaWK@|=3Dv(aq{HJu`gb{$UJiXI$AB6DPBQn%<>OHSnLW}w_~4VWQ$C&wZDTr
>> z%m7Nr!WG7qkrRTJEXxgxS)s%WiGF{cED!JdM?`~}!o<@zP!{+FgQ>#sKU}<k1^&Sb
>> zc+P9@!s4JjFrKGRpv+zD4`z<eEmyt*RSP?7JIB;DJNBddq~v;cd3ZhV{j5;RGkExj
>> z%ZRIuXX03=3D>iOt2!bNXZUjAXPmnJ~({FCS7F=3D#2IFY>PTWA!-1?SOPBP(FSFLly%#
>> zrdIZcsx|C}Ym5>r%LxiIMny$Qe0;pJFpG=3DB3=3DPN7AgBhAi4|!HsYM2*XU(9<$jBo|
>> zOSbG!k?^=3DgiNecMVQJ!;LiqK1-o3Z){}!>y>*e;7Ou6$kQ-*43(-*QGDv2WoN`;{e
>> zKb9?9&Lw8$x`Mp^DBb9lNHb}@jJ2Hbs!qN7x9T=3DfyHHQ9;aQ@7A-w&+@b$vsmGcH_
>> zJU`l`S&Nk@78|d#PgvBOgbmvY%JgpJzXM>&@+4{6yybet{RPj(*&u~*&PHGVr{*m~
>> zmaW=3D=3D@~>6my*s^MpFINUc?E`1pLE4LYJhY;ovB9fQMts##LC5r7E>QOSSL$=3Dw2?$c
>> zL@GO7aFYRUNX5N>X3UtGhTvQ-OE)g|*JRE*jT<+%IC}J`GJ>42Zkt5L!6?CtSq5f`
>> zNnEQi0MB_O<M0}fh5xI<ybUD`?yuUso5{<LLnpGDwQTK)eCy&JE3RZ<(3+LM``&|j
>> zM#j^y$xE+rK84d?zGjmoZrCAOx}#H(PB1V9PzK%eL07to0I$>Y#{MsW;W(W~3%*bF
>> z8a4gnR&A6xFp(<%cR^LY=3D!7*A616-Bmi9b6PZK;B->pJ8=3DjcdBjS)Hr_MSMe#8JJQ
>> zcY)8nEO`SA!_{JK>qvca9MYgO^LuT9kB(5+<sLnH#KGRyy?PG5J3>oWSq8xc|6J$I
>> zHQgE-8ZvdNJ%@!jRKn>q>L)FAbaWP_r>DQTl=3D?(MqtRucy9kR-1al<Ah)9%(xRuIz
>> z1Q?a)s{g#laCz5M_h@}4{|3Cqusl!Nv{PYBbe&JimMd?9{5hXn6ct$=3DaPkXZn&Tf4
>> z14CwcK^#?{zamNES8rA@+Sv3a(y&GvtpQr`1l`8C-OFHgb@jPinu})-`y(Js=3D!g*`
>> zd^Jaov_?Ey$vt}aAGB@Nmc74uhPR<A504?yE5vS^>tn>7h%(ShF3{U?&Yo<@1RMKX
>> zTMry7c&<NAGf7FapZd_#h^v7#J`H_RMX2X*ftnpUbZF7G^)I_dXz9r;H+O|8xj_{r
>> zrk0i#odl$BpZ;?Zg)VCY=3DFFM1iQX$~p*ML2P|u2h;vWU&Iy&zd3WFgW1_l1}TmjA(
>> z2S+O5RdEXL<%chfg1AbWOJ&9SZF2vW$aouvD=3DY(04lkTTP!q;k+dQ4?9Uz{5AxhfJ
>> zM^&DJw1BYKm2&*@J#qC|_z3ZA0M0~FC#Hp49~kWB=3DC)C30e^wwAbFX-fyp@PZtO|a
>> z4NT0htlN1IMo`{+Q04s_;5y5<I5<icLN6UMeB@Qcr9N2+D6o>z;5g(p)_tMxOETM^
>> zrOEq?f+CP^gw(22kLm;s!%st>R1w~5DWI08r)T?mbs8<y($tr6<18R0mnj=3Dnaqk{l
>> z-*f|;GQ~E%RH;%ft5>hS$c_Je!eV9Sd>?UJ;3u;e1<IVoK{y5|$2s0Wq5r(ka18G<
>> zcM-5yT<3W!*BJIU@d_h9uIjjQ%RbqB;H2~$q(S$t_UP`Y=3Djt?Q)a2MScTbr<XQ9mS
>> zK>G8L{(Ntl>6uqP_l5rA?iHZqvwVG?)J3Dai0AS{eEnX%dO2{#<e@`{ZqemEC^}II
>> zTR3mVpS92r@JE`yO8J?Wwf*!bUw>VKT|XtI@gzr=3DCthGS-e>;r1>S`#%IW6mD-E0c
>> z#M!CtV4%YKB$=3D~jC8#I!eM4myo<sFa?*s2MUq?5OEW~w#KFJ>7)AOlB9sF?k@ZnCi
>> zYSq#lqOS8?M_W%B7UIrn1n0tk`Q;aB)~p#lSe|W8xg^u(ERv}+=3DZT~H0_EIkwwD5)
>> zzx|&_`<e5_&SkFHx_HPG=3Dh<R2eU8`?xaR1-uppc&eqpH6BAoHUHy%1Gi^F1b8a8RB
>> z@@1qh7uKR}hnLn)Zb;u9>CYD18BE_p0rR$V1(f_2NND^zd7-f8uf+e6Nn6BazHdRk
>> z3`;o70^B^s-fg~2MLJVZ_Q}Y{*lNnlij}{MLH^vdwMB_<zM1AQ%QM#=3D@8<0?{0_Kh
>> zH^T$rJaWG%TGSJIdQ&h^VSSQ@WUDqEx@>Us3KAQHC*GMB!Sr~X?jc%wV>s(&i8MIp
>> z{pskFDneaUc>MTrYdbr;kzG3V-mjx=3DAof!olzk@IjdQxOXRlsz<>5nF9JN}k>}LUP
>> zGQn}SSWlZP6P)JAL}Hpp9`gSy_YlT%s+*YGx{A4ti<nHBCdT8ZN~312o^|fg@0GcY
>> ztBiMWD~O|tR~Q9p;>h&u+!o84gO|j@&h>fevgPRjVF1jmU%!4%UA4wyU@=3D9;T04sA
>> zWM>&?GadQPz%d_FW}qxCO8HfJsnS-Rk9Tla?pxZ;MA>IDAH;PLvnkWX#M((rCOV0}
>> zxlL}Hj$PBf_@XGMI}Opzvg>YNuVK?TzMpXjtDtb+Z@~;@F`eWr!*s_yMO@Cm*8mFZ
>> zlXRBp(`(T1OP2OCmHhua&FNx>=3DSJB$RmoW|cK@Fqfho%Zjgyj+hBs^0%3W7`Ozvn{
>> z;XXb-Vr69|4({%<>w2b4a`O>`@s46J!AXoK%@E_sGsI}pbmcf^vWs$5uIJw~oH$MN
>> z5q`9ly^J!q6J0YK89LfLr^=3D5tFCyEd;$MBWXYeS~tkG5uO4zs4D-6{=3Dp7UrubFqZ3
>> zKPm%98q<oNYi@3Ng0oLGYu1#KrAkYiF1=3D)!frX41V=3DW_1CX0^g6wxsQ#@UGOI9t)P
>> zn5sO7k+pLHl=3D7l1dMqo#X(P@^lSxQpqG%e8ml4Jjr1ub=3D%=3D%4Q+$;8F@eRm#HZTko
>> z&n}YVr5)P-+<UJ!;%Ffb<NYxvDCxKPxfiESM=3D%YbK%d09;wU^o%kaU7vDSYv&51G!
>> z&sXKgS}PEj`sAm(qcNEB$IF*5_Z>HG+$31xm-_S`C@osH5zm;l;=3DS^)j2b^fG{@S=3D
>> z2(zi8ZSEl2;~Zq<IQzVFb4TS^@E`ZIOzo8L!^cb(4a12tM1Q<=3D9XRq$snTWXu)wK8
>> zx`;V^|6}bF!;B}1<`^)5Szi3#rdt@l-s5rNtN@wrzfyi`_sdPrJZWodi@m+QvYSVn
>> zHf`j~;$KOPdX43$wq2!5f32K;BgW<q(j6xQfkC?FGFZ=3D2hKwFBLks{M)kj$Z7Pyxe
>> zroXnCbm}woO`}$w9{*6Y?zQ4ym)MOmQ!=3D^$x~TB>0Ig+CchZ<>UZ+uup!!YQoUYTT
>> z`K{Uwo2C5`5aw=3D!W^K<^`0fX9#N_~Yfj&vXe=3DZp7-l$c_O?4Z$yofk=3D-%jIg)NRz_
>> zR>Kw@jw7BK^li=3DzeUfW*6arIL;+~MuB_~w*{)gn6YOQiZc3cvB?|2z*YAYIGn&BpP
>> zq6KF76<|IbHNi{-|7)5~m0@7Up~hfFFq^vm1R11nB`rJmc~Z1!u>@o_4qY89vpyJ;
>> z^XYEQI`()l)MygYw=3DGC3FTM9e6ODuUIf5z8oEJ%@s?}xW$dMAYBuZJu$Tc}_+qRS1
>> zHS0;SqF;0Rj?-=3DXzA05YrF`Wo=3DYFi)=3DvFPYW~uo|%SURB`c3ZO{sk}!!_$+U`l1)W
>> zu}C`0_C?uS0i0RlrdV`B;dEzlJP+48@l!7RFaXsG?!r+RGvC5AfU{KHVUSSwE6^uN
>> z_|K_ZuE*zUPYyY;F}$6|+u*n<8$5@)*maZ9IpoKoE_!-#*o&hUAHSZ-LZUKTjvhVQ
>> zX_&G3`Gix?#d204m<)^v1{g_6W$6S=3D%LmU@$h*cc3`^!34#v~~W2!?6)oap<Q-nc)
>> z4)WpPFt=3D;rIv=3Djq8-V&;+GoVrOy)HLjP`4qh4WBBm49A-de*KoKW?wm7zPg=3Drc94=
=3D
>> zeAURvNQP^S5cPqYS)Dp{-mIaaF}G#QmS*35_Z=3DCA1LUkcGEU(NWu<em6&b(;l12+<
>> z>;rTJXoS=3DPI0Z<_NGjaba>k@O(jQo@S~aUGRjSwnj)h=3Dfwrp7r%Ig7r!N8P^Edb88
>> zd=3DKz_e35eMWDy0YPbq1FxJZw>W@Dfwzy|S0_z9q!8W*y$>sAX33yVofNx6FT=3D+SFS
>> zVq%UpP*dxSM)39N^XH44ICEOb@5qT`MGhZJDRTHkN|B=3D{Pbk-U|J>z^Mb2Njtbpfd
>> zE}SoN^4uAvd<Ag;p7R>-F&>Y|pYMkajg5mtcLc{wf4BPccZ34c%BxzdPMxuSF)>%R
>> zTz(}J-Mqzc5_CD6nKH(9mU1+j;;J0q2Co}J*E4`#rVqVL4|=3D37bT$n`tL*Qq)!+aw
>> z$2SL|Ae@O&_U&Pz0IqH3dgd`zs@FQAHy#GWMCf+ZA>WZtVW`T&vQ2XLm#7`*(Cv;D
>> zOX$E*LBz;#tQZ?v$iM-^PFq`ByDnO^sE?_sDJzW5uG9f}PNpbaq5R(rpvxG`%1yY)
>> zgk%hs^}QPs;5w&1JD}n(K7aoFH8<mW8xJ2oe1ZEn%+1Z&K=3D~n5I+l<bNnhMbO)Wy)
>> zPD?9tJ3U>gRDgN15*dzbx&I6dM`x8{s8=3Dq57_xHh+WRNYof9yN>^X2C1NR0);+IK#
>> z^r*-k<fG*G7I>ZaAH4-$yL<osJL9m-yq}&Q4*9U5)|oM5hU>zG3zvI)d&j!FyK|Sj
>> zf(3TJBiulP{TD5~1|2(gv^#U=3DOxor%sp2><Ky2K+WU`B=3DSUcxgzZ0gxLghKYYbq$$
>> zc;CrG#yiax3s~CYU}+B@ZThBknQ}Xjwml?HSLDU+Ns$Ih<{+K}WpJMZ@~*%18Fb&u
>> z(H-gKS;KGV^}9R~j{m2itiJIZW%c1Jf;6OkyAEP#XeI{wmd{L0&B9l$S~cwD%a=3Dc7
>> zWMq)J*dW;DlF8mx(Y#J3ATzLhf83(;2@FvFn{U1uee~F|XK5J^l#RF^a?>r~!IQ_5
>> zn(<KXrl&zm%A@%B_+yCHjuH!hsXo0InVFdiLqNTH=3DY17;U&TBuH|V<#9Xd?8$#qJ%
>> zZz&rz%$qllyUh*6{j!fAKQ8j-&6{_oMHEi2Fwp4U8J@_=3D$tm*m=3D~E?t)N7dv6)Frz
>> zV^qFp!_CIR!^5{Be@gw|>$>?boBBpu6BCnARM@NS$FGWqUxaW>%*E4BrqA<{X&#Hk
>> z*~43$=3DXxvu-v!=3Dd7>+mE&+?T1L$#h4EA}Ow3#Q<ub3mBV=3DA9xN*I4;<>gbP?CF3SL
>> zX8u`zDk$NA8S{K)j&HaexR5Gq_nZ{9#y?5FzQeP%N9yi`O5$++`gQKP{=3D0qF|6PRg
>> zv!T_fSFheIyvM9JS((ZzOEmJ_Lx&FK9653%hhI<QO3xRs@)oqEr>AE%Y0|_F@i_R;
>> zvQc+z#B>18F0rel{>MdmR2NhYghGTXNV&@gzv@fpYlf=3D~CRFxDBID$jsalO1HOxni
>> z8nu7u(4iN#w6sn)Yu3z%@o=3D9#78#e!!Tgv8m%K1N4sU%|I6Z>QhN80pQp#}_jJ+HM
>> z3RfFYU*-4Rxh#e~FI9)OFgOFisLEPpeqyG3_wK5%cW|`;pCym-3(Ps}K2}rgcgL;U
>> zni~)mC*Gk;Wl>P1EbtGN`M$w2&nHOc;mGqp0>gML4wPA5zS6d17cMd2a0y-Tbtnb#
>> zaD36s?{~z92H4&30gda_Z;&!)ae&PA4SHu@Z$ni^mf1TrT0&x1icfg7RH#_#KDzPH
>> zy?gie!u$9U_{UcBq3o=3DLmiqeo>-h4yZX69E7fg4%Ql&~qDp#(2pjNF~7kqqtUh*E7
>> zzCsG|V^&Fs*LBR8G2_>*TXz5w{V4QdR+{d#Y12ZFA3uH(|Ci%`9lS$Ua*fiZOZRtj
>> za*Eixb?aGfh5*LDla!QnOixdbIx*KsmCk>sEK@z`((ZwQfz125gM$NoQVqd0RAI;v
>> zbVY2Vt*z~nkdSbMJ8+yofBxQ<En7}Q#h8bCG>#PJef|2iQdjuTay0JRwW~#NaPU?%
>> zx@(s&UrvY4dSmI*rMrIm=3D_eXloUU!@>+8F2|Ni|2-2<LIH!d#jHu5@+uADA@+^Z>%
>> zP^jXKnl^1UXQB78$hZ{}xP*?ak>aiR2L{OkKl&KvVWIz@!z&<I+~&>~=3D%j+~g3Bl}
>> z&CJX~@glh_f*<wd+80{dd`PxPG9X(Dr38=3Dl5SA@Y7YNZ9F*4&_`Me8-%ego(L;@nB
>> zWXbYGnYY-79|myRxpQYDypMm>AtAql`#^8Kz?aYEz5Dj<%fx$LhU;AaLwB}$sK|3%
>> z6VHtfC`k<r4Csny6c7+lu>U3)Edv#%Y~=3Dwxlnya$;Cy$TEG#ViE?&Htt^@=3DX-YkI|
>> z7l28qE^&s7j6@x`y=3DsRJ=3D6L}6B(9BsK1l<!8XAmNVq)TP+(S~iiX>B|$qV!6&*$hv
>> zD+m^<G?b1N%atqFZ^42E8|gQjAMQ8hK#%am(9n?E&X_@6DA@Hx6^75x=3DR!Bm2Esbe
>> zBV<5}s6nO5l}%f=3DZnL~`lcsyr8a6s!Urp@{eFO8*;9}kS^{>{cSMM&E?QyNzbzaq|
>> zRVx>ew}Eidu3bB2NQR$G=3D4X=3DWi;0SPk(`{&Zk!u0DIccK2c0S2#t<+jEmNoB#fz^%
>> z{+IqJ&-VdwGEx4tZasQ#)*C%qFDfdE7V}@y3qO=3Dl$&|vQQR&Lgu-xzq@8CYJ*&^@O
>> zxZf1#hLC2;=3D3U$X0Tq^`rl!WF4-R|w?77eDB(58xH035;+{O^yUj)J!LF(+{ebx4c
>> z+&qCgBAHvYT@EYW%F2pESPhXcrQpyF8#X9AbWnAG6;976EucZurcK8{!rdp6a5IK0
>> zSFZ5;NZCxAd61wx=3D5*=3DOg`0b+RE$B8eAn{t$(=3DiQlyY!)5`H>{arp&0dwcu!RjO2(
>> z&yU8kY%CYkWEhs29|eOXW@DqOkVhymsUnzw4g{2B#sG&(3i}mWRjXEQT8kDf=3DJxH|
>> zH?CK&UdI|YYI3`GuR+;^25QKlL4(D}$XFa59hJ2_bPC?FV}~3(cu-caUafq-jz@mp
>> zWc&8*a^%R-H@J5d*H@#^9G_#pDkfF;_xJZh7q)NBnl<T=3D968*e3X(+ePNFjiEoujQ
>> z2W1-;y3_G<DRe&W-o1x(@77Ct^yn{5o3?z4#&M!&&z`~k`}a41w5CN()!-jv%KWse
>> zHE7Vl?Z&NJS(FE?1S2CO8WbFQqLk~^xN&1IRA>gm$kprD`A(?j%tPgQU$}fZD-{*`
>> z01TFnisOzS=3D$_xKSh0fskPFSmjoV5_xPAAo!T`{dU!Xp2v~Jz{s*R1!<LjtoZkv4{
>> zOt5R$uKQ@z9M`N@v0}w3r_Y{!{qWHvg?Xn;nZl6}e)U%)A~NzU)4Y?K%6fYZ=3DDSp;
>> zOqs)|>$|t^+{w*Aqj(O9k9WNn;eSTM>gg8{ph$3*0}?wIb@8Ze+qPGJ{`uz{kkT)>
>> zZ8uaH8gO>it5<i$`@e+e&3TLlaTob`dV0Rn)YQBKwI!u)-MSo-qYj-%c#*U~X?b#5
>> zi(_E*0hLeTZ!mW3*lsZ6wa`*srca-~Y4~u>%T^Xs-#9zDiSsm9Wlawm@AT=3D@%F9ek
>> zOH*3;rAwEUk3aFdJ76^BJKpQpuU9^0$ImMh{1_;~uiWy;dt_vKC(-+z>xAfN%&<(0
>> zpI+X!ZJPuI1<O>ssWRTm8kj6r7Ph%)DEH7^ZvwMULqiz=3DiQOFKXE#RO>K~*0^qUMA
>> zI53f$C0xKeAX7#}MBGDnej+F+=3Dn$U$0=3Dlfst$2oHyoYso);|6E9Y#C{95?Mda)e*0
>> zke&PX$-2#(bARd9?KaX)#{C7QN|o|WTD$humYq9=3Dm}Fx^M+@qr+&p0pkZ5af|717f
>> z?}0$redthD{RRy<*4h>6MDIO(SoWu+$N`{DyLMbZWH4mdu;pYfUfZ~Rdls1S1g=3DxU
>> z1_GO0-QAxaI&ne{9X~Gf7B0MoYsQ^AclNywX50+Hvu)3w+-^O3(AAC~Rd5G9P~Na6
>> zO`B5xr8^*dKHb`N>TG1*e7?Ph4oTBy&A5dOXRqiF+X;+L1NT=3D#B2<QH(b4)P(b9S+
>> zz;JaS-aA2=3DwSYuSw6wH5Gu~?IGvCD#;^Q5Ht|~+l6IaTmi<cAyUa>OKg;8XaL9boA
>> zu6#O=3Dl9UXrVo-hvl#IylE)pt9O9p4Rz^;HyPO(o3!TY|xe&XilE^e-K08g>Ab<5Nq
>> zIqEW)Fg`judJN*Xg7VL)zrQu4^Pz<FsS0%&I$Uet=3DAC<GdGZ?BxO0zOyZb=3Ds0(a)h
>> z4M|wDUZSA8hb)Pf!0;ux>KemXp{{_1p^53a9S06ed{W-h;QoWuInh81nFfO)(%pOF
>> z{1u5!TqWU4<FYD$_uV>NAA@|m0NnF(QqMkpFRom-QC6&8CvhuRW&c>C26eiT+Ipi?
>> z4jezFTt~i{rM~@^3}NLm+vDz`6K7=3D4(c^OA!iCJx(9qOn%a+}MFt`R|<<5cQr(eh8
>> zIpUMn%H*jIHxS2loaMv|2Tq)pWr@k+;pO!L*Mia0a&m!|d<W>A+z_HC^k@$7)6h{L
>> zrZqC<$mt7`xOSr~iCvyuu3Y&gxIP&7Dex!|OW^;96W(tP=3Dm#yJ$ZA2F`MJ2b92sM5
>> zmNwJHD=3DR2qse}YAk?^o3vSssDxpDo5qU@6?sUK0vrtU%sLQv6VcS4<mM>3r%Jddh#
>> z>MhiFjzR)OL`2F0&xJC7-Xigw?<;d>`{$aOT0b*08+Qg0DHzgLGc`4}3Cc+&{y#=3D|
>> zv7GZBq(9c|O7dnjn$SpD96~2!7>z-pvM3-FhFA~`FMqL|Z2OXHc@UlqNwvjf!j!4^
>> zLSvT;R~}BZwO1+|P)E8P^3k<%bWTm)x>LAZ*W7wCrKAfa3<vo+E5>Kl9jT{(jmwaI
>> zLL<b*!}B?q#R1nXr#LvLEd?_NM#YNNBwNOH`l4u2huK~}&$&`GXz4NuiCM1P=3DSogx
>> zHD@?SV8WC#^Voa9K$drr`$F&MT;CKPmmnG=3Dw8^Me$e%qd$9`$?a|WrR>KylnTTgYo
>> z8<V_Nf{~A{({y^)+o0@>!(Lhu8o2iVN2qXqKd~{SS1(w#PUxC8TU%RSH5qI1c)|RT
>> zoUo8(5*iHaH!@lh(drK#JOriorov!k%s(PBEv-Lp$45rx=3Dj_(6-yk6&q5K-IEb{Ub
>> z??re~USVQA!SNOJ+1t=3D}wn1{c9y@ldKcur7lKE|S@E;(&WLowHGv_aQ#Fd1;s8oAb
>> z_iP=3D5F)v0LnLO3eH-6lIutsV<wT9HS!h!LSDqRqd#*ew{yoHZLVpf15Vx{lE!Stx-
>> z8f^~Qa0R2D^Xz#_x``XN%VfvtG@d3Reho+@O6b}pOO<k5wQbKUzldn@$9o^5Ir0M2
>> z{klYn>GOSqUPUfX2IH*|^<kRy%(6iEnG5_w-ULT4m&oNwVq!J%*+@NuG;IUp+bHkt
>> zk$Q%A)HO6yhiU7kqwLqdt5VgkM2QmKi-N;5BjS@}DHwCaD1FtcOWGx>X+$;<G9;I%
>> zR6zO0dMx&P9u}7<kt>q9CW>)%aIZ4T|Nqa7ZxbrN9;8`2FwO|*iwhPnUcAS`!s5nw
>> zt0^zMy&`fWLYGTK*fLof6)(|C`I+S;=3D!ARa*s<es?dml}_hffW>y4Hex8CEYnqh^?
>> z+BNGSaU=3DQhQ3-&Q3Gk1QK>sKS^otgUsqSw^k2bhBb?VgpFowL){p%o|wkS^>;2#?t
>> ze;?(y^y<}X)SMl$JSm&2I74DrNT*-A(VcD;zz+>@JdR_C<4_meWPr079J8(WeU++i
>> zp)o76!sAy$0<DxzKX=3D`O>vZ^_qX!4HN7*{NJfJi?b0bZjym*nLuCC5ir5w%ZQm0NG
>> z8yHNtqgJez;AoCBM`o4!wlpQ}i0U<K`G6U6maST^FjD(YU6hSji+}aiG*7>v*UWSG
>> z@w3vi-++6#PD_|00nuQxHKwMfbVi8<_-#v_ufP7<ZJtlSYc89<55xSx$urOJEc6>-
>> zm)(P7q)@-Mlq^}2Lw|g>>XpCy-Wd%ob0r$n@yoXa<z}OtsHUdYwN|ZK%K9q&|MwD@
>> zfCJKvAk{iUQt890_FuSg;Snn<t29%SaWCxcre*mo3dxO*N|vQd5+!D70vL2T7&T5|
>> z)R<^8EEzXeqL##>dyWP(E(M|`EF=3Da@9S>%Vkr~tHW}BOhe`RE7lnx_?pK1<(@nir-
>> z?g-U@U&iIk$A5+JVmY_lvUArS>-;0)WUhCR%=3DZq;tx>Z!H>Dbj+dTm~b0~Tiz{=3DoM
>> zCdUw)x9|LOvUl(jna6lO!MQbS*QHyWYQ%uUO#M6m(*5XcFMpZm6D%C`gN~jF<L?|S
>> z^V3jQo^!%uu6Ll!oVzHyO@~hGYR3Wto40I}5EQjSn7&s)L~fmW4OHIo=3DHn*VJn~o+
>> zAiGj7P*uwD_xHc)<m7b1%ggH`1cs8Pp^3%mFN+t~f@I@Z;@IIMN8RyQ8~~Lh5~UFC
>> z<Czf|8F>PR=3DVgwOhD0ZbowM873Kc7Hw4rhBy7hhL`G(~BghmVJur6J?^Z`8v!ucXR
>> zF8Pk@yv1j}EnSAsND2GT5b~=3D5G=3Dc8-%Yg$2j8?8(xiBOoWFsv03lk<xxNmGc=3DJ`}x
>> zhio^O**WNbbC*UYNL+NX#Kk6qQS(5?jm6Q|I~=3D;ASMD@Nmu!o1Rxgc>jqigI&(EDZ
>> zcOxwEg<wYGbLY<e0-cn)Bi+RQtArOzMZNc6P2I~jGiJ*q$C+X>-tINJF3x7rb&Se9
>> zbuKa@r&(2wC}heW1BQ+`&v=3Dua+{9#p?JLA_MqK(pQs1K68i(xMwDgRhO>}S(+v&5#
>> z(cMd?&&!)$wRfE-Q=3DDgsrHxZ|{f13a5Y`Toj#GZ>y#@?94pFHrL$R`Re)IL$CESrd
>> zcY_;Pvv$3W7B)_~)($h3r4tUbJjHR=3D0%f_#B$Q>6(=3D5>%J?32T;$P82O8KI3`3jX{
>> z%qBVHaJdPWT;xsf`iS$K#WEFn*dxA?x%IWuWy(@lZeF!o^+jBk!t^FP&lX!(4<!$6
>> zLlauyT-*N7FJ=3DRi@LSzY(em2EqU(niWCH0qFFZUvY5x5A$DEy=3DZ$kfkXkudW%+S#A
>> zrGbILYfB5uH(<Qi`uh50z-Q*><_~OaY;HhTJhouLf~2spFpq?U1QUL|{`T$L9UvLW
>> zjK5Az{g(+ZmQ!wh`VJd)edwsMZ-$IA$?VcwokJ*kxJOTzcj`|071FcM5S=3DTGH&o9I
>> z@dnc(r{gf)*i@q;oa0t6Rl00+o6bER4AC8%r8{nlj5fEE(UwzXu=3DbeO?Ys85U#46I
>> zcE22op*7wN7^i7um2Ny~x)_XiklsUeXykDHcU>?h=3Dd~@X)vUX}TYrt`T85Un+GDLn
>> z*L1RsFdm=3DXd+^A|4V$&$Yzy7&=3Dt)%$(o&fM{=3DQm`9l!J)@(gL`j><2Gj@gu)K0|e$
>> zG-=3DhIT|RwITO*H##fp7t*12c@JHzzNb4HGxDB30ybMbzj)UMw+3}M<J-T&+gOE3;~
>> zye44KwpXrP>A7pyu9553t+ND!%|O?)AU;0cFE%z7{2IO#T|h*1bhLj=3DOpIq-T-<cr
>> zw_3Y)t<I4nM|z{HYK?R_O-P2P;{Q=3Dm{)=3Dq}&3-&2lphd)|C4~WxJL>3&fzFrrxa{f
>> z?91X-h~o=3Dz+duknva(cj*914CrIQ0c88{o^lgd@9a;S3E>c`8JuSETmdLNf$=3Dz$U1
>> z7cE+>ZS#&j9CXIdI5;gZDR-#RWEld~ZQ7ci=3Dp2fnV+^-zr1J!wivvo2TV`9uDnA~r
>> zROQD*B}#tF;U=3D!`<e)$I>tq8_R0*U{W;X(SN|gF`3&J0(_<gmMZ@w+VRhZOc`P|(Q
>> zmKz_it~vuwNOM)$3Y8C_VKLkSKoe=3DvkW+R!`L7U|sV2|>XbR9<hlWXeFl@)WckliT
>> zowQSCW@dX_YXh_ZC=3DI!{dUb%3=3D%XdPSdQbei>3oGE0N$iuf2B}0`KLAp~JH(9?z9{
>> z=3DF=3DAzF?050ICl5+5aI+!J1`TyYseU!OCOD{o-SzZ0L7P}e5LOVM@^XadeWRA(Kfdi
>> zP17kdP}l6u*WZ-n_$1W{su9ea!#8SRc=3D~km0|L}T#{irc?}2-K1~Me)yeYLw^j5Er
>> z`;^9<s$_V^8;NVwohemS{-Hz_TqpCh%$z>f#D6{~9gr0(9oIfS0@GCm{-fiH|4zb-
>> zWra~!q*UY>oobwp760C058eab8_y#srie#CbP;#IC56M%BIovilrnTc=3D5h=3D&4%47+
>> zTcbgvpXN+)oiF2^+{I$5ix^qiX4bCXkZYoJ!4RBKASoV102zn*@;VuXpi?uik$D;B
>> zU$gzASO&&X%>t648BP_4@!6Odhs3a|GLw;6W;QDN(=3Du(<809}YsqvZqCau`8waD^y
>> zn~N-4y|GA4^7<mN$s3Bqt=3DWWYTZ$xX++HMc^UflPo3`hN+fpQM^(GZ#u(Da91exTE
>> z*i{>Nk5Z?4!zMr3>KU6{8m>Jmwa)<cTUD!7<FukBx=3DTtiMJ-sdev6WKemoVk6;AVC
>> z%#Zmpf0joThvnt{{BZA%ql877@jSc^u*^zX`Jd^0rvC%PN(XVAmU<i=3DYq-{k4i6)6
>> mojw4Z{rN|I06vV06#0L0BQaiPgq42)0000<MNUMnLSTY`z}`*(
>>
>> literal 0
>> HcmV?d00001
>>
>> diff --git a/docs/xen.doxyfile.in b/docs/xen.doxyfile.in
>> new file mode 100644
>> index 0000000000..00969d9b78
>> --- /dev/null
>> +++ b/docs/xen.doxyfile.in
>> @@ -0,0 +1,2316 @@
>> +# Doxyfile 1.8.13
>> +
>> +# This file describes the settings to be used by the documentation system
>> +# doxygen (www.doxygen.org) for a project.
>> +#
>> +# All text after a double hash (##) is considered a comment and is placed in
>> +# front of the TAG it is preceding.
>> +#
>> +# All text after a single hash (#) is considered a comment and will be ignored.
>> +# The format is:
>> +# TAG = value [value, ...]
>> +# For lists, items can also be appended using:
>> +# TAG += value [value, ...]
>> +# Values that contain spaces should be placed between quotes (\" \").
>> +#
>> +# This file is based on doc/zephyr.doxyfile.in (Zephyr 2.3)
>> +
>> +#---------------------------------------------------------------------------
>> +# Project related configuration options
>> +#---------------------------------------------------------------------------
>> +
>> +# This tag specifies the encoding used for all characters in the config file
>> +# that follow. The default is UTF-8 which is also the encoding used for all text
>> +# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
>> +# built into libc) for the transcoding. See
>> +# https://www.gnu.org/software/libiconv/ for the list of possible encodings.
>> +# The default value is: UTF-8.
>> +
>> +DOXYFILE_ENCODING      = UTF-8
>> +
>> +# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
>> +# double-quotes, unless you are using Doxywizard) that should identify the
>> +# project for which the documentation is generated. This name is used in the
>> +# title of most generated pages and in a few other places.
>> +# The default value is: My Project.
>> +
>> +PROJECT_NAME           = "Xen Project"
>> +
>> +# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
>> +# could be handy for archiving the generated documentation or if some version
>> +# control system is used.
>> +
>> +PROJECT_NUMBER         =
>> +
>> +# Using the PROJECT_BRIEF tag one can provide an optional one line description
>> +# for a project that appears at the top of each page and should give viewer a
>> +# quick idea about the purpose of the project. Keep the description short.
>> +
>> +PROJECT_BRIEF          = "An Open Source Type 1 Hypervisor"
>> +
>> +# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
>> +# in the documentation. The maximum height of the logo should not exceed 55
>> +# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
>> +# the logo to the output directory.
>> +
>> +PROJECT_LOGO           = "xen-doxygen/xen_project_logo_165x67.png"
>> +
>> +# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
>> +# into which the generated documentation will be written. If a relative path is
>> +# entered, it will be relative to the location where doxygen was started. If
>> +# left blank the current directory will be used.
>> +
>> +OUTPUT_DIRECTORY       = @DOXY_OUT@
>> +
>> +# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
>> +# directories (in 2 levels) under the output directory of each output format and
>> +# will distribute the generated files over these directories. Enabling this
>> +# option can be useful when feeding doxygen a huge amount of source files, where
>> +# putting all generated files in the same directory would otherwise cause
>> +# performance problems for the file system.
>> +# The default value is: NO.
>> +
>> +CREATE_SUBDIRS         = NO
>> +
>> +# The OUTPUT_LANGUAGE tag is used to specify the language in which all
>> +# documentation generated by doxygen is written. Doxygen will use this
>> +# information to generate all constant output in the proper language.
>> +# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
>> +# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
>> +# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
>> +# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
>> +# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
>> +# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
>> +# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
>> +# Ukrainian and Vietnamese.
>> +# The default value is: English.
>> +
>> +OUTPUT_LANGUAGE        = English
>> +
>> +# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
>> +# descriptions after the members that are listed in the file and class
>> +# documentation (similar to Javadoc). Set to NO to disable this.
>> +# The default value is: YES.
>> +
>> +BRIEF_MEMBER_DESC      = YES
>> +
>> +# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
>> +# description of a member or function before the detailed description
>> +#
>> +# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
>> +# brief descriptions will be completely suppressed.
>> +# The default value is: YES.
>> +
>> +REPEAT_BRIEF           = YES
>> +
>> +# This tag implements a quasi-intelligent brief description abbreviator that is
>> +# used to form the text in various listings. Each string in this list, if found
>> +# as the leading text of the brief description, will be stripped from the text
>> +# and the result, after processing the whole list, is used as the annotated
>> +# text. Otherwise, the brief description is used as-is. If left blank, the
>> +# following values are used ($name is automatically replaced with the name of
>> +# the entity):The $name class, The $name widget, The $name file, is, provides,
>> +# specifies, contains, represents, a, an and the.
>> +
>> +ABBREVIATE_BRIEF       = YES
>> +
>> +# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
>> +# doxygen will generate a detailed section even if there is only a brief
>> +# description.
>> +# The default value is: NO.
>> +
>> +ALWAYS_DETAILED_SEC    = YES
>> +
>> +# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
>> +# inherited members of a class in the documentation of that class as if those
>> +# members were ordinary class members. Constructors, destructors and assignment
>> +# operators of the base classes will not be shown.
>> +# The default value is: NO.
>> +
>> +INLINE_INHERITED_MEMB  = YES
>> +
>> +# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
>> +# before file names in the file list and in the header files. If set to NO the
>> +# shortest path that makes the file name unique will be used.
>> +# The default value is: YES.
>> +
>> +FULL_PATH_NAMES        = YES
>> +
>> +# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
>> +# Stripping is only done if one of the specified strings matches the left-hand
>> +# part of the path. The tag can be used to show relative paths in the file list.
>> +# If left blank the directory from which doxygen is run is used as the path to
>> +# strip.
>> +#
>> +# Note that you can specify absolute paths here, but also relative paths, which
>> +# will be relative from the directory where doxygen is started.
>> +# This tag requires that the tag FULL_PATH_NAMES is set to YES.
>> +
>> +STRIP_FROM_PATH        = @XEN_BASE@
>> +
>> +# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
>> +# path mentioned in the documentation of a class, which tells the reader which
>> +# header file to include in order to use a class. If left blank only the name of
>> +# the header file containing the class definition is used. Otherwise one should
>> +# specify the list of include paths that are normally passed to the compiler
>> +# using the -I flag.
>> +
>> +STRIP_FROM_INC_PATH    =
>> +
>> +# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
>> +# less readable) file names. This can be useful if your file system doesn't
>> +# support long names like on DOS, Mac, or CD-ROM.
>> +# The default value is: NO.
>> +
>> +SHORT_NAMES            = NO
>> +
>> +# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
>> +# first line (until the first dot) of a Javadoc-style comment as the brief
>> +# description. If set to NO, the Javadoc-style will behave just like regular Qt-
>> +# style comments (thus requiring an explicit @brief command for a brief
>> +# description.)
>> +# The default value is: NO.
>> +
>> +JAVADOC_AUTOBRIEF      = NO
>> +
>> +# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
>> +# line (until the first dot) of a Qt-style comment as the brief description. If
>> +# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
>> +# requiring an explicit \brief command for a brief description.)
>> +# The default value is: NO.
>> +
>> +QT_AUTOBRIEF           = NO
>> +
>> +# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
>> +# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
>> +# a brief description. This used to be the default behavior. The new default is
>> +# to treat a multi-line C++ comment block as a detailed description. Set this
>> +# tag to YES if you prefer the old behavior instead.
>> +#
>> +# Note that setting this tag to YES also means that rational rose comments are
>> +# not recognized any more.
>> +# The default value is: NO.
>> +
>> +MULTILINE_CPP_IS_BRIEF = NO
>> +
>> +# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
>> +# documentation from any documented member that it re-implements.
>> +# The default value is: YES.
>> +
>> +INHERIT_DOCS           = YES
>> +
>> +# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
>> +# page for each member. If set to NO, the documentation of a member will be part
>> +# of the file/class/namespace that contains it.
>> +# The default value is: NO.
>> +
>> +SEPARATE_MEMBER_PAGES  = YES
>> +
>> +# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
>> +# uses this value to replace tabs by spaces in code fragments.
>> +# Minimum value: 1, maximum value: 16, default value: 4.
>> +
>> +TAB_SIZE               = 8
>> +
>> +# This tag can be used to specify a number of aliases that act as commands in
>> +# the documentation. An alias has the form:
>> +# name=value
>> +# For example adding
>> +# "sideeffect=@par Side Effects:\n"
>> +# will allow you to put the command \sideeffect (or @sideeffect) in the
>> +# documentation, which will result in a user-defined paragraph with heading
>> +# "Side Effects:". You can put \n's in the value part of an alias to insert
>> +# newlines.
>> +
>> +ALIASES                = "rst=\verbatim embed:rst:leading-asterisk" \
>> +                         "endrst=\endverbatim" \
>> +                         "keepindent=\code" \
>> +                         "endkeepindent=\endcode"
>> +
>> +ALIASES += req{1}="\ref XEN_\1 \"XEN-\1\" "
>> +ALIASES += satisfy{1}="\xrefitem satisfy \"Satisfies requirement\" \"Requirement Implementation\" \1"
>> +ALIASES += verify{1}="\xrefitem verify \"Verifies requirement\" \"Requirement Verification\" \1"
>> +
>> +# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
>> +# only. Doxygen will then generate output that is more tailored for C. For
>> +# instance, some of the names that are used will be different. The list of all
>> +# members will be omitted, etc.
>> +# The default value is: NO.
>> +
>> +OPTIMIZE_OUTPUT_FOR_C  = YES
>> +
>> +# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
>> +# Python sources only. Doxygen will then generate output that is more tailored
>> +# for that language. For instance, namespaces will be presented as packages,
>> +# qualified scopes will look different, etc.
>> +# The default value is: NO.
>> +
>> +OPTIMIZE_OUTPUT_JAVA   = NO
>> +
>> +# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
>> +# sources. Doxygen will then generate output that is tailored for Fortran.
>> +# The default value is: NO.
>> +
>> +OPTIMIZE_FOR_FORTRAN   = NO
>> +
>> +# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
>> +# sources. Doxygen will then generate output that is tailored for VHDL.
>> +# The default value is: NO.
>> +
>> +OPTIMIZE_OUTPUT_VHDL   = NO
>> +
>> +# Doxygen selects the parser to use depending on the extension of the files it
>> +# parses. With this tag you can assign which parser to use for a given
>> +# extension. Doxygen has a built-in mapping, but you can override or extend it
>> +# using this tag. The format is ext=language, where ext is a file extension, and
>> +# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
>> +# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
>> +# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
>> +# Fortran. In the latter case the parser tries to guess whether the code is fixed
>> +# or free formatted code, this is the default for Fortran type files), VHDL. For
>> +# instance to make doxygen treat .inc files as Fortran files (default is PHP),
>> +# and .f files as C (default is Fortran), use: inc=Fortran f=C.
>> +#
>> +# Note: For files without extension you can use no_extension as a placeholder.
>> +#
>> +# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
>> +# the files are not read by doxygen.
>> +
>> +EXTENSION_MAPPING      =
>> +
>> +# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all=
 comments
>> +# according to the Markdown format, which allows for more readable
>> +# documentation. See http://daringfireball.net/projects/markdown/ for d=
etails.
>> +# The output of markdown processing is further processed by doxygen, so=
 you can
>> +# mix doxygen, HTML, and XML commands with Markdown formatting. Disable=
 only in
>> +# case of backward compatibilities issues.
>> +# The default value is: YES.
>> +
>> +MARKDOWN_SUPPORT       =3D YES
>> +
>> +# When enabled doxygen tries to link words that correspond to documente=
d
>> +# classes, or namespaces to their corresponding documentation. Such a l=
ink can
>> +# be prevented in individual cases by putting a % sign in front of the =
word or
>> +# globally by setting AUTOLINK_SUPPORT to NO.
>> +# The default value is: YES.
>> +
>> +AUTOLINK_SUPPORT       =3D YES
>> +
>> +# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
>> +# to include (a tag file for) the STL sources as input, then you should set this
>> +# tag to YES in order to let doxygen match function declarations and
>> +# definitions whose arguments contain STL classes (e.g. func(std::string);
>> +# versus func(std::string) {}). This also makes the inheritance and collaboration
>> +# diagrams that involve STL classes more complete and accurate.
>> +# The default value is: NO.
>> +
>> +BUILTIN_STL_SUPPORT    = NO
>> +
>> +# If you use Microsoft's C++/CLI language, you should set this option to YES to
>> +# enable parsing support.
>> +# The default value is: NO.
>> +
>> +CPP_CLI_SUPPORT        = YES
>> +
>> +# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
>> +# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
>> +# will parse them like normal C++ but will assume all classes use public instead
>> +# of private inheritance when no explicit protection keyword is present.
>> +# The default value is: NO.
>> +
>> +SIP_SUPPORT            = NO
>> +
>> +# For Microsoft's IDL there are propget and propput attributes to indicate
>> +# getter and setter methods for a property. Setting this option to YES will make
>> +# doxygen replace the get and set methods by a property in the documentation.
>> +# This will only work if the methods are indeed getting or setting a simple
>> +# type. If this is not the case, or you want to show the methods anyway, you
>> +# should set this option to NO.
>> +# The default value is: YES.
>> +
>> +IDL_PROPERTY_SUPPORT   = YES
>> +
>> +# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
>> +# tag is set to YES then doxygen will reuse the documentation of the first
>> +# member in the group (if any) for the other members of the group. By default
>> +# all members of a group must be documented explicitly.
>> +# The default value is: NO.
>> +
>> +DISTRIBUTE_GROUP_DOC   = NO
>> +
>> +# Set the SUBGROUPING tag to YES to allow class member groups of the same type
>> +# (for instance a group of public functions) to be put as a subgroup of that
>> +# type (e.g. under the Public Functions section). Set it to NO to prevent
>> +# subgrouping. Alternatively, this can be done per class using the
>> +# \nosubgrouping command.
>> +# The default value is: YES.
>> +
>> +SUBGROUPING            = YES
>> +
>> +# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
>> +# are shown inside the group in which they are included (e.g. using \ingroup)
>> +# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
>> +# and RTF).
>> +#
>> +# Note that this feature does not work in combination with
>> +# SEPARATE_MEMBER_PAGES.
>> +# The default value is: NO.
>> +
>> +INLINE_GROUPED_CLASSES = NO
>> +
>> +# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
>> +# with only public data fields or simple typedef fields will be shown inline in
>> +# the documentation of the scope in which they are defined (i.e. file,
>> +# namespace, or group documentation), provided this scope is documented. If set
>> +# to NO, structs, classes, and unions are shown on a separate page (for HTML and
>> +# Man pages) or section (for LaTeX and RTF).
>> +# The default value is: NO.
>> +
>> +INLINE_SIMPLE_STRUCTS  = YES
>> +
>> +# When the TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
>> +# enum is documented as struct, union, or enum with the name of the typedef. So
>> +# typedef struct TypeS {} TypeT will appear in the documentation as a struct
>> +# with name TypeT. When disabled, the typedef will appear as a member of a file,
>> +# namespace, or class, and the struct will be named TypeS. This can typically be
>> +# useful for C code in case the coding convention dictates that all compound
>> +# types are typedef'ed and only the typedef is referenced, never the tag name.
>> +# The default value is: NO.
>> +
>> +TYPEDEF_HIDES_STRUCT   = NO
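The C idiom this tag affects, using the same names as the comment's own example: with TYPEDEF_HIDES_STRUCT = YES the documentation would show only a struct named TypeT, while with NO it shows struct TypeS plus a separate TypeT typedef entry.

```c
#include <assert.h>

/* Typedef'ed compound type: the tag name is TypeS, the typedef name is
 * TypeT. TYPEDEF_HIDES_STRUCT controls which of the two names the
 * generated documentation presents. */
typedef struct TypeS {
    int field;
} TypeT;
```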
>> +
>> +# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
>> +# cache is used to resolve symbols given their name and scope. Since this can be
>> +# an expensive process and often the same symbol appears multiple times in the
>> +# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
>> +# doxygen will become slower. If the cache is too large, memory is wasted. The
>> +# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
>> +# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
>> +# symbols. At the end of a run doxygen will report the cache usage and suggest
>> +# the optimal cache size from a speed point of view.
>> +# Minimum value: 0, maximum value: 9, default value: 0.
>> +
>> +LOOKUP_CACHE_SIZE      = 9
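As a sanity check on the formula in the comment, the maximum value 9 chosen here corresponds to a 2^25-symbol cache; a quick sketch of the arithmetic:

```c
#include <assert.h>

/* Cache size in symbols is 2^(16 + LOOKUP_CACHE_SIZE), per the formula
 * in the Doxygen configuration comment above. */
static unsigned long lookup_cache_symbols(unsigned int cache_size_tag)
{
    return 1UL << (16 + cache_size_tag);
}
```

So the setting trades roughly 512x more memory for faster symbol resolution on a tree the size of Xen.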
>> +
>> +#---------------------------------------------------------------------------
>> +# Build related configuration options
>> +#---------------------------------------------------------------------------
>> +
>> +# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
>> +# documentation are documented, even if no documentation was available. Private
>> +# class members and static file members will be hidden unless the
>> +# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
>> +# Note: This will also disable the warnings about undocumented members that are
>> +# normally produced when WARNINGS is set to YES.
>> +# The default value is: NO.
>> +
>> +EXTRACT_ALL            = YES
>> +
>> +# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
>> +# be included in the documentation.
>> +# The default value is: NO.
>> +
>> +EXTRACT_PRIVATE        = NO
>> +
>> +# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
>> +# scope will be included in the documentation.
>> +# The default value is: NO.
>> +
>> +EXTRACT_PACKAGE        = YES
>> +
>> +# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
>> +# included in the documentation.
>> +# The default value is: NO.
>> +
>> +EXTRACT_STATIC         = YES
>> +
>> +# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
>> +# locally in source files will be included in the documentation. If set to NO,
>> +# only classes defined in header files are included. Does not have any effect
>> +# for Java sources.
>> +# The default value is: YES.
>> +
>> +EXTRACT_LOCAL_CLASSES  = YES
>> +
>> +# This flag is only useful for Objective-C code. If set to YES, local methods,
>> +# which are defined in the implementation section but not in the interface, are
>> +# included in the documentation. If set to NO, only methods in the interface are
>> +# included.
>> +# The default value is: NO.
>> +
>> +EXTRACT_LOCAL_METHODS  = YES
>> +
>> +# If this flag is set to YES, the members of anonymous namespaces will be
>> +# extracted and appear in the documentation as a namespace called
>> +# 'anonymous_namespace{file}', where file will be replaced with the base name of
>> +# the file that contains the anonymous namespace. By default anonymous
>> +# namespaces are hidden.
>> +# The default value is: NO.
>> +
>> +EXTRACT_ANON_NSPACES   = NO
>> +
>> +# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
>> +# undocumented members inside documented classes or files. If set to NO, these
>> +# members will be included in the various overviews, but no documentation
>> +# section is generated. This option has no effect if EXTRACT_ALL is enabled.
>> +# The default value is: NO.
>> +
>> +HIDE_UNDOC_MEMBERS     = NO
>> +
>> +# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
>> +# undocumented classes that are normally visible in the class hierarchy. If set
>> +# to NO, these classes will be included in the various overviews. This option
>> +# has no effect if EXTRACT_ALL is enabled.
>> +# The default value is: NO.
>> +
>> +HIDE_UNDOC_CLASSES     = NO
>> +
>> +# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
>> +# (class|struct|union) declarations. If set to NO, these declarations will be
>> +# included in the documentation.
>> +# The default value is: NO.
>> +
>> +HIDE_FRIEND_COMPOUNDS  = NO
>> +
>> +# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
>> +# documentation blocks found inside the body of a function. If set to NO, these
>> +# blocks will be appended to the function's detailed documentation block.
>> +# The default value is: NO.
>> +
>> +HIDE_IN_BODY_DOCS      = NO
>> +
>> +# The INTERNAL_DOCS tag determines if documentation that is typed after a
>> +# \internal command is included. If the tag is set to NO then the documentation
>> +# will be excluded. Set it to YES to include the internal documentation.
>> +# The default value is: NO.
>> +
>> +INTERNAL_DOCS          = NO
>> +
>> +# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
>> +# names in lower-case letters. If set to YES, upper-case letters are also
>> +# allowed. This is useful if you have classes or files whose names only differ
>> +# in case and if your file system supports case sensitive file names. Windows
>> +# and Mac users are advised to set this option to NO.
>> +# The default value is: system dependent.
>> +
>> +CASE_SENSE_NAMES       = YES
>> +
>> +# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
>> +# their full class and namespace scopes in the documentation. If set to YES, the
>> +# scope will be hidden.
>> +# The default value is: NO.
>> +
>> +HIDE_SCOPE_NAMES       = NO
>> +
>> +# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
>> +# the files that are included by a file in the documentation of that file.
>> +# The default value is: YES.
>> +
>> +SHOW_INCLUDE_FILES     = YES
>> +
>> +# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
>> +# grouped member an include statement to the documentation, telling the reader
>> +# which file to include in order to use the member.
>> +# The default value is: NO.
>> +
>> +SHOW_GROUPED_MEMB_INC  = YES
>> +
>> +# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
>> +# files with double quotes in the documentation rather than with sharp brackets.
>> +# The default value is: NO.
>> +
>> +FORCE_LOCAL_INCLUDES   = NO
>> +
>> +# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
>> +# documentation for inline members.
>> +# The default value is: YES.
>> +
>> +INLINE_INFO            = YES
>> +
>> +# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
>> +# (detailed) documentation of file and class members alphabetically by member
>> +# name. If set to NO, the members will appear in declaration order.
>> +# The default value is: YES.
>> +
>> +SORT_MEMBER_DOCS       = YES
>> +
>> +# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
>> +# descriptions of file, namespace and class members alphabetically by member
>> +# name. If set to NO, the members will appear in declaration order. Note that
>> +# this will also influence the order of the classes in the class list.
>> +# The default value is: NO.
>> +
>> +SORT_BRIEF_DOCS        = NO
>> +
>> +# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
>> +# (brief and detailed) documentation of class members so that constructors and
>> +# destructors are listed first. If set to NO the constructors will appear in the
>> +# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
>> +# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
>> +# member documentation.
>> +# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
>> +# detailed member documentation.
>> +# The default value is: NO.
>> +
>> +SORT_MEMBERS_CTORS_1ST = NO
>> +
>> +# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
>> +# of group names into alphabetical order. If set to NO the group names will
>> +# appear in their defined order.
>> +# The default value is: NO.
>> +
>> +SORT_GROUP_NAMES       = YES
>> +
>> +# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
>> +# fully-qualified names, including namespaces. If set to NO, the class list will
>> +# be sorted only by class name, not including the namespace part.
>> +# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
>> +# Note: This option applies only to the class list, not to the alphabetical
>> +# list.
>> +# The default value is: NO.
>> +
>> +SORT_BY_SCOPE_NAME     = YES
>> +
>> +# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
>> +# type resolution of all parameters of a function it will reject a match between
>> +# the prototype and the implementation of a member function even if there is
>> +# only one candidate or it is obvious which candidate to choose by doing a
>> +# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
>> +# accept a match between prototype and implementation in such cases.
>> +# The default value is: NO.
>> +
>> +STRICT_PROTO_MATCHING  = YES
>> +
>> +# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
>> +# list. This list is created by putting \todo commands in the documentation.
>> +# The default value is: YES.
>> +
>> +GENERATE_TODOLIST      = NO
>> +
>> +# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
>> +# list. This list is created by putting \test commands in the documentation.
>> +# The default value is: YES.
>> +
>> +GENERATE_TESTLIST      = NO
>> +
>> +# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
>> +# list. This list is created by putting \bug commands in the documentation.
>> +# The default value is: YES.
>> +
>> +GENERATE_BUGLIST       = NO
>> +
>> +# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
>> +# the deprecated list. This list is created by putting \deprecated commands in
>> +# the documentation.
>> +# The default value is: YES.
>> +
>> +GENERATE_DEPRECATEDLIST= YES
>> +
>> +# The ENABLED_SECTIONS tag can be used to enable conditional documentation
>> +# sections, marked by \if <section_label> ... \endif and \cond <section_label>
>> +# ... \endcond blocks.
>> +
>> +ENABLED_SECTIONS       = YES
>> +
>> +# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
>> +# initial value of a variable or macro / define can have for it to appear in the
>> +# documentation. If the initializer consists of more lines than specified here
>> +# it will be hidden. Use a value of 0 to hide initializers completely. The
>> +# appearance of the value of individual variables and macros / defines can be
>> +# controlled using the \showinitializer or \hideinitializer command in the
>> +# documentation regardless of this setting.
>> +# Minimum value: 0, maximum value: 10000, default value: 30.
>> +
>> +MAX_INITIALIZER_LINES  = 300
>> +
>> +# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
>> +# the bottom of the documentation of classes and structs. If set to YES, the
>> +# list will mention the files that were used to generate the documentation.
>> +# The default value is: YES.
>> +
>> +SHOW_USED_FILES        = YES
>> +
>> +# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
>> +# will remove the Files entry from the Quick Index and from the Folder Tree View
>> +# (if specified).
>> +# The default value is: YES.
>> +
>> +SHOW_FILES             = YES
>> +
>> +# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
>> +# page. This will remove the Namespaces entry from the Quick Index and from the
>> +# Folder Tree View (if specified).
>> +# The default value is: YES.
>> +
>> +SHOW_NAMESPACES        = YES
>> +
>> +# The FILE_VERSION_FILTER tag can be used to specify a program or script that
>> +# doxygen should invoke to get the current version for each file (typically from
>> +# the version control system). Doxygen will invoke the program by executing (via
>> +# popen()) the command <command> <input-file>, where <command> is the value of
>> +# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
>> +# provided by doxygen. Whatever the program writes to standard output is used as
>> +# the file version. For an example see the documentation.
>> +
>> +FILE_VERSION_FILTER    =
>> +
>> +# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
>> +# by doxygen. The layout file controls the global structure of the generated
>> +# output files in an output format independent way. To create the layout file
>> +# that represents doxygen's defaults, run doxygen with the -l option. You can
>> +# optionally specify a file name after the option; if omitted, DoxygenLayout.xml
>> +# will be used as the name of the layout file.
>> +#
>> +# Note that if you run doxygen from a directory containing a file called
>> +# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
>> +# tag is left empty.
>> +
>> +LAYOUT_FILE            =
>> +
>> +# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
>> +# the reference definitions. This must be a list of .bib files. The .bib
>> +# extension is automatically appended if omitted. This requires the bibtex tool
>> +# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
>> +# For LaTeX the style of the bibliography can be controlled using
>> +# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
>> +# search path. See also \cite for info on how to create references.
>> +
>> +CITE_BIB_FILES         =
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to warning and progress messages
>> +#---------------------------------------------------------------------------
>> +
>> +# The QUIET tag can be used to turn on/off the messages that are generated to
>> +# standard output by doxygen. If QUIET is set to YES this implies that the
>> +# messages are off.
>> +# The default value is: NO.
>> +
>> +QUIET                  = YES
>> +
>> +# The WARNINGS tag can be used to turn on/off the warning messages that are
>> +# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
>> +# this implies that the warnings are on.
>> +#
>> +# Tip: Turn warnings on while writing the documentation.
>> +# The default value is: YES.
>> +
>> +WARNINGS               = YES
>> +
>> +# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
>> +# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
>> +# will automatically be disabled.
>> +# The default value is: YES.
>> +
>> +WARN_IF_UNDOCUMENTED   = YES
>> +
>> +# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
>> +# potential errors in the documentation, such as not documenting some parameters
>> +# in a documented function, documenting parameters that don't exist, or using
>> +# markup commands wrongly.
>> +# The default value is: YES.
>> +
>> +WARN_IF_DOC_ERROR      = YES
>> +
>> +# The WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
>> +# are documented, but have no documentation for their parameters or return
>> +# value. If set to NO, doxygen will only warn about wrong or incomplete
>> +# parameter documentation, but not about the absence of documentation.
>> +# The default value is: NO.
>> +
>> +WARN_NO_PARAMDOC       = NO
>> +
>> +# The WARN_FORMAT tag determines the format of the warning messages that doxygen
>> +# can produce. The string should contain the $file, $line, and $text tags, which
>> +# will be replaced by the file and line number from which the warning originated
>> +# and the warning text. Optionally the format may contain $version, which will
>> +# be replaced by the version of the file (if it could be obtained via
>> +# FILE_VERSION_FILTER).
>> +# The default value is: $file:$line: $text.
>> +
>> +WARN_FORMAT            = "$file:$line: $text"
>> +
>> +# The WARN_LOGFILE tag can be used to specify a file to which warning and error
>> +# messages should be written. If left blank the output is written to standard
>> +# error (stderr).
>> +
>> +WARN_LOGFILE           =
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the input files
>> +#---------------------------------------------------------------------------
>> +
>> +# The INPUT tag is used to specify the files and/or directories that contain
>> +# documented source files. You may enter file names like myfile.cpp or
>> +# directories like /usr/src/myproject. Separate the files or directories with
>> +# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING.
>> +# Note: If this tag is empty the current directory is searched.
>> +
>> +INPUT                  = "@XEN_BASE@/docs/xen-doxygen/mainpage.md"
>> +
>> +# This tag can be used to specify the character encoding of the source files
>> +# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
>> +# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
>> +# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
>> +# possible encodings.
>> +# The default value is: UTF-8.
>> +
>> +INPUT_ENCODING         = UTF-8
>> +
>> +# If the value of the INPUT tag contains directories, you can use the
>> +# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
>> +# *.h) to filter out the source-files in the directories.
>> +#
>> +# Note that for custom extensions or not directly supported extensions you also
>> +# need to set EXTENSION_MAPPING for the extension, otherwise the files are not
>> +# read by doxygen.
>> +#
>> +# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
>> +# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
>> +# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
>> +# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
>> +# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf.
>> +
>> +# This MUST be kept in sync with DOXY_SOURCES in doc/CMakeLists.txt
>> +# for incremental (and faster) builds to work correctly.
>> +FILE_PATTERNS          = "*.c" \
>> +                         "*.h" \
>> +                         "*.S" \
>> +                         "*.md"
>> +
>> +# The RECURSIVE tag can be used to specify whether or not subdirectories should
>> +# be searched for input files as well.
>> +# The default value is: NO.
>> +
>> +RECURSIVE              = YES
>> +
>> +# The EXCLUDE tag can be used to specify files and/or directories that should be
>> +# excluded from the INPUT source files. This way you can easily exclude a
>> +# subdirectory from a directory tree whose root is specified with the INPUT tag.
>> +#
>> +# Note that relative paths are relative to the directory from which doxygen is
>> +# run.
>> +
>> +EXCLUDE                = @XEN_BASE@/include/nothing.h
>> +
>> +# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
>> +# directories that are symbolic links (a Unix file system feature) are excluded
>> +# from the input.
>> +# The default value is: NO.
>> +
>> +EXCLUDE_SYMLINKS       = NO
>> +
>> +# If the value of the INPUT tag contains directories, you can use the
>> +# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
>> +# certain files from those directories.
>> +#
>> +# Note that the wildcards are matched against the file with absolute path, so to
>> +# exclude all test directories for example use the pattern */test/*
>> +
>> +EXCLUDE_PATTERNS       =
>> +
>> +# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
>> +# (namespaces, classes, functions, etc.) that should be excluded from the
>> +# output. The symbol name can be a fully qualified name, a word, or if the
>> +# wildcard * is used, a substring. Examples: ANamespace, AClass,
>> +# AClass::ANamespace, ANamespace::*Test
>> +#
>> +# Note that the wildcards are matched against the file with absolute path, so to
>> +# exclude all test directories use the pattern */test/*
>> +
>> +# Hide internal names (starting with an underscore, and doxygen-generated names
>> +# for nested unnamed unions) that don't generate meaningful sphinx output anyway.
>> +EXCLUDE_SYMBOLS        =
>> +# _*  *.__unnamed__ z_* Z_*
>> +
>> +# The EXAMPLE_PATH tag can be used to specify one or more files or directories
>> +# that contain example code fragments that are included (see the \include
>> +# command).
>> +
>> +EXAMPLE_PATH           =
>> +
>> +# If the value of the EXAMPLE_PATH tag contains directories, you can use the
>> +# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
>> +# *.h) to filter out the source-files in the directories. If left blank all
>> +# files are included.
>> +
>> +EXAMPLE_PATTERNS       =
>> +
>> +# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
>> +# searched for input files to be used with the \include or \dontinclude commands
>> +# irrespective of the value of the RECURSIVE tag.
>> +# The default value is: NO.
>> +
>> +EXAMPLE_RECURSIVE      = YES
>> +
>> +# The IMAGE_PATH tag can be used to specify one or more files or directories
>> +# that contain images that are to be included in the documentation (see the
>> +# \image command).
>> +
>> +IMAGE_PATH             =
>> +
>> +# The INPUT_FILTER tag can be used to specify a program that doxygen should
>> +# invoke to filter for each input file. Doxygen will invoke the filter program
>> +# by executing (via popen()) the command:
>> +#
>> +# <filter> <input-file>
>> +#
>> +# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
>> +# name of an input file. Doxygen will then use the output that the filter
>> +# program writes to standard output. If FILTER_PATTERNS is specified, this tag
>> +# will be ignored.
>> +#
>> +# Note that the filter must not add or remove lines; it is applied before the
>> +# code is scanned, but not when the output code is generated. If lines are added
>> +# or removed, the anchors will not be placed correctly.
>> +#
>> +# Note that for custom extensions or not directly supported extensions you also
>> +# need to set EXTENSION_MAPPING for the extension, otherwise the files are not
>> +# properly processed by doxygen.
>> +
>> +INPUT_FILTER           =
>> +
>> +# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
>> +# basis. Doxygen will compare the file name with each pattern and apply the
>> +# filter if there is a match. The filters are a list of the form: pattern=filter
>> +# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
>> +# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
>> +# patterns match the file name, INPUT_FILTER is applied.
>> +#
>> +# Note that for custom extensions or not directly supported extensions you also
>> +# need to set EXTENSION_MAPPING for the extension, otherwise the files are not
>> +# properly processed by doxygen.
>> +
>> +FILTER_PATTERNS        = *.h="\"@XEN_BASE@/docs/xen-doxygen/doxy-preprocessor.py\""
>> +
>> +# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
>> +# INPUT_FILTER) will also be used to filter the input files that are used for
>> +# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
>> +# The default value is: NO.
>> +
>> +FILTER_SOURCE_FILES    = NO
>> +
>> +# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
>> +# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
>> +# it is also possible to disable source filtering for a specific pattern using
>> +# *.ext= (so without naming a filter).
>> +# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
>> +
>> +FILTER_SOURCE_PATTERNS =
>> +
>> +# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
>> +# is part of the input, its contents will be placed on the main page
>> +# (index.html). This can be useful if you have a project on for instance GitHub
>> +# and want to reuse the introduction page also for the doxygen output.
>> +
>> +USE_MDFILE_AS_MAINPAGE = "mainpage.md"
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to source browsing
>> +#---------------------------------------------------------------------------
>> +
>> +# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
>> +# generated. Documented entities will be cross-referenced with these sources.
>> +#
>> +# Note: To get rid of all source code in the generated output, make sure that
>> +# also VERBATIM_HEADERS is set to NO.
>> +# The default value is: NO.
>> +
>> +SOURCE_BROWSER         = NO
>> +
>> +# Setting the INLINE_SOURCES tag to YES will include the body of functions,
>> +# classes and enums directly into the documentation.
>> +# The default value is: NO.
>> +
>> +INLINE_SOURCES         = NO
>> +
>> +# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
>> +# special comment blocks from generated source code fragments. Normal C, C++ and
>> +# Fortran comments will always remain visible.
>> +# The default value is: YES.
>> +
>> +STRIP_CODE_COMMENTS    = YES
>> +
>> +# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
>> +# function all documented functions referencing it will be listed.
>> +# The default value is: NO.
>> +
>> +REFERENCED_BY_RELATION = NO
>> +
>> +# If the REFERENCES_RELATION tag is set to YES then for each documented function
>> +# all documented entities called/used by that function will be listed.
>> +# The default value is: NO.
>> +
>> +REFERENCES_RELATION    = NO
>> +
>> +# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
>> +# to YES then the hyperlinks from functions in REFERENCES_RELATION and
>> +# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
>> +# link to the documentation.
>> +# The default value is: YES.
>> +
>> +REFERENCES_LINK_SOURCE = YES
>> +
>> +# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink=
 in the
>> +# source code will show a tooltip with additional information such as p=
rototype,
>> +# brief description and links to the definition and documentation. Sinc=
e this
>> +# will make the HTML file larger and loading of large files a bit slowe=
r, you
>> +# can opt to disable this feature.
>> +# The default value is: YES.
>> +# This tag requires that the tag SOURCE_BROWSER is set to YES.
>> +
>> +SOURCE_TOOLTIPS        =3D YES
>> +
>> +# If the USE_HTAGS tag is set to YES then the references to source code=
 will
>> +# point to the HTML generated by the htags(1) tool instead of doxygen b=
uilt-in
>> +# source browser. The htags tool is part of GNU's global source tagging=
 system
>> +# (see https://www.gnu.org/software/global/global.html). You will need =
version
>> +# 4.8.6 or higher.
>> +#
>> +# To use it do the following:
>> +# - Install the latest version of global
>> +# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
>> +# - Make sure the INPUT points to the root of the source tree
>> +# - Run doxygen as normal
>> +#
>> +# Doxygen will invoke htags (and that will in turn invoke gtags), so th=
ese
>> +# tools must be available from the command line (i.e. in the search pat=
h).
>> +#
>> +# The result: instead of the source browser generated by doxygen, the l=
inks to
>> +# source code will now point to the output of htags.
>> +# The default value is: NO.
>> +# This tag requires that the tag SOURCE_BROWSER is set to YES.
>> +
>> +USE_HTAGS              =3D NO
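
The htags workflow listed in the comment above reduces to a handful of
related settings; a minimal sketch, in which the INPUT path is illustrative
only and not taken from this patch:

```
# Replace doxygen's built-in source browser with GNU global's htags output.
SOURCE_BROWSER = YES
USE_HTAGS      = YES
# htags requires INPUT to point at the root of the source tree
# (illustrative path).
INPUT          = @XEN_BASE@/xen
```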
>> +
>> +# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
>> +# verbatim copy of the header file for each class for which an include is
>> +# specified. Set to NO to disable this.
>> +# See also: Section \class.
>> +# The default value is: YES.
>> +
>> +VERBATIM_HEADERS       = YES
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the alphabetical class index
>> +#---------------------------------------------------------------------------
>> +
>> +# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
>> +# compounds will be generated. Enable this if the project contains a lot of
>> +# classes, structs, unions or interfaces.
>> +# The default value is: YES.
>> +
>> +ALPHABETICAL_INDEX     = YES
>> +
>> +# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
>> +# which the alphabetical index list will be split.
>> +# Minimum value: 1, maximum value: 20, default value: 5.
>> +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
>> +
>> +COLS_IN_ALPHA_INDEX    = 2
>> +
>> +# In case all classes in a project start with a common prefix, all classes will
>> +# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
>> +# can be used to specify a prefix (or a list of prefixes) that should be ignored
>> +# while generating the index headers.
>> +# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
>> +
>> +IGNORE_PREFIX          =
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the HTML output
>> +#---------------------------------------------------------------------------
>> +
>> +# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output.
>> +# The default value is: YES.
>> +
>> +GENERATE_HTML          = YES
>> +
>> +# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
>> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
>> +# it.
>> +# The default directory is: html.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_OUTPUT            = html
>> +
>> +# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
>> +# generated HTML page (for example: .htm, .php, .asp).
>> +# The default value is: .html.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_FILE_EXTENSION    = .html
>> +
>> +# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
>> +# each generated HTML page. If the tag is left blank doxygen will generate a
>> +# standard header.
>> +#
>> +# To get valid HTML, the header file must include any scripts and style sheets
>> +# that doxygen needs, which depends on the configuration options used (e.g.
>> +# the setting GENERATE_TREEVIEW). It is highly recommended to start with a
>> +# default header using
>> +# doxygen -w html new_header.html new_footer.html new_stylesheet.css
>> +# YourConfigFile
>> +# and then modify the file new_header.html. See also section "Doxygen usage"
>> +# for information on how to generate the default header that doxygen normally
>> +# uses.
>> +# Note: The header is subject to change so you typically have to regenerate the
>> +# default header when upgrading to a newer version of doxygen. For a description
>> +# of the possible markers and block names see the documentation.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_HEADER            = xen-doxygen/header.html
>> +
>> +# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
>> +# generated HTML page. If the tag is left blank doxygen will generate a standard
>> +# footer. See HTML_HEADER for more information on how to generate a default
>> +# footer and what special commands can be used inside the footer. See also
>> +# section "Doxygen usage" for information on how to generate the default footer
>> +# that doxygen normally uses.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_FOOTER            =
>> +
>> +# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
>> +# sheet that is used by each HTML page. It can be used to fine-tune the look of
>> +# the HTML output. If left blank doxygen will generate a default style sheet.
>> +# See also section "Doxygen usage" for information on how to generate the style
>> +# sheet that doxygen normally uses.
>> +# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
>> +# it is more robust and this tag (HTML_STYLESHEET) will in the future become
>> +# obsolete.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_STYLESHEET        =
>> +
>> +# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
>> +# cascading style sheets that are included after the standard style sheets
>> +# created by doxygen. Using this option one can overrule certain style aspects.
>> +# This is preferred over using HTML_STYLESHEET since it does not replace the
>> +# standard style sheet and is therefore more robust against future updates.
>> +# Doxygen will copy the style sheet files to the output directory.
>> +# Note: The order of the extra style sheet files is of importance (e.g. the last
>> +# style sheet in the list overrules the setting of the previous ones in the
>> +# list). For an example see the documentation.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_EXTRA_STYLESHEET  = xen-doxygen/customdoxygen.css
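
Because later extra style sheets override earlier ones, site-specific tweaks
can be layered on top of the base sheet; a hypothetical example (the second
file name is invented, not part of this patch):

```
# Later sheets win: rules in site-tweaks.css override customdoxygen.css.
HTML_EXTRA_STYLESHEET  = xen-doxygen/customdoxygen.css \
                         xen-doxygen/site-tweaks.css
```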
>> +
>> +# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
>> +# other source files which should be copied to the HTML output directory. Note
>> +# that these files will be copied to the base HTML output directory. Use the
>> +# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
>> +# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
>> +# files will be copied as-is; there are no commands or markers available.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_EXTRA_FILES       =
>> +
>> +# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
>> +# will adjust the colors in the style sheet and background images according to
>> +# this color. Hue is specified as an angle on a colorwheel, see
>> +# https://en.wikipedia.org/wiki/Hue for more information. For instance the value
>> +# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
>> +# purple, and 360 is red again.
>> +# Minimum value: 0, maximum value: 359, default value: 220.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_COLORSTYLE_HUE    =
>> +
>> +# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
>> +# in the HTML output. For a value of 0 the output will use grayscales only. A
>> +# value of 255 will produce the most vivid colors.
>> +# Minimum value: 0, maximum value: 255, default value: 100.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_COLORSTYLE_SAT    =
>> +
>> +# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
>> +# luminance component of the colors in the HTML output. Values below 100
>> +# gradually make the output lighter, whereas values above 100 make the output
>> +# darker. The value divided by 100 is the actual gamma applied, so 80 represents
>> +# a gamma of 0.8, the value 220 represents a gamma of 2.2, and 100 does not
>> +# change the gamma.
>> +# Minimum value: 40, maximum value: 240, default value: 80.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_COLORSTYLE_GAMMA  =
>> +
>> +# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
>> +# page will contain the date and time when the page was generated. Setting this
>> +# to YES can help to show when doxygen was last run and thus if the
>> +# documentation is up to date.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_TIMESTAMP         = YES
>> +
>> +# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
>> +# documentation will contain sections that can be hidden and shown after the
>> +# page has loaded.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_DYNAMIC_SECTIONS  = YES
>> +
>> +# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
>> +# shown in the various tree structured indices initially; the user can expand
>> +# and collapse entries dynamically later on. Doxygen will expand the tree to
>> +# such a level that at most the specified number of entries are visible (unless
>> +# a fully collapsed tree already exceeds this amount). So setting the number of
>> +# entries to 1 will produce a fully collapsed tree by default. 0 is a special
>> +# value representing an infinite number of entries and will result in a fully
>> +# expanded tree by default.
>> +# Minimum value: 0, maximum value: 9999, default value: 100.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +HTML_INDEX_NUM_ENTRIES = 100
>> +
>> +# If the GENERATE_DOCSET tag is set to YES, additional index files will be
>> +# generated that can be used as input for Apple's Xcode 3 integrated development
>> +# environment (see: https://developer.apple.com/tools/xcode/), introduced with
>> +# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
>> +# Makefile in the HTML output directory. Running make will produce the docset in
>> +# that directory and running make install will install the docset in
>> +# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
>> +# startup. See https://developer.apple.com/tools/creatingdocsetswithdoxygen.html
>> +# for more information.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +GENERATE_DOCSET        = YES
>> +
>> +# This tag determines the name of the docset feed. A documentation feed provides
>> +# an umbrella under which multiple documentation sets from a single provider
>> +# (such as a company or product suite) can be grouped.
>> +# The default value is: Doxygen generated docs.
>> +# This tag requires that the tag GENERATE_DOCSET is set to YES.
>> +
>> +DOCSET_FEEDNAME        = "Doxygen generated docs"
>> +
>> +# This tag specifies a string that should uniquely identify the documentation
>> +# set bundle. This should be a reverse domain-name style string, e.g.
>> +# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
>> +# The default value is: org.doxygen.Project.
>> +# This tag requires that the tag GENERATE_DOCSET is set to YES.
>> +
>> +DOCSET_BUNDLE_ID       = org.doxygen.Project
>> +
>> +# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
>> +# the documentation publisher. This should be a reverse domain-name style
>> +# string, e.g. com.mycompany.MyDocSet.documentation.
>> +# The default value is: org.doxygen.Publisher.
>> +# This tag requires that the tag GENERATE_DOCSET is set to YES.
>> +
>> +DOCSET_PUBLISHER_ID    = org.doxygen.Publisher
>> +
>> +# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
>> +# The default value is: Publisher.
>> +# This tag requires that the tag GENERATE_DOCSET is set to YES.
>> +
>> +DOCSET_PUBLISHER_NAME  = Publisher
>> +
>> +# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
>> +# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
>> +# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
>> +# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
>> +# Windows.
>> +#
>> +# The HTML Help Workshop contains a compiler that can convert all HTML output
>> +# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
>> +# files are now used as the Windows 98 help format, and will replace the old
>> +# Windows help format (.hlp) on all Windows platforms in the future. Compressed
>> +# HTML files also contain an index, a table of contents, and you can search for
>> +# words in the documentation. The HTML workshop also contains a viewer for
>> +# compressed HTML files.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +GENERATE_HTMLHELP      = NO
>> +
>> +# The CHM_FILE tag can be used to specify the file name of the resulting .chm
>> +# file. You can add a path in front of the file if the result should not be
>> +# written to the html output directory.
>> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
>> +
>> +CHM_FILE               =
>> +
>> +# The HHC_LOCATION tag can be used to specify the location (absolute path
>> +# including file name) of the HTML help compiler (hhc.exe). If non-empty,
>> +# doxygen will try to run the HTML help compiler on the generated index.hhp.
>> +# The file has to be specified with full path.
>> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
>> +
>> +HHC_LOCATION           =
>> +
>> +# The GENERATE_CHI flag controls whether a separate .chi index file is generated
>> +# (YES) or whether it should be included in the master .chm file (NO).
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
>> +
>> +GENERATE_CHI           = NO
>> +
>> +# The CHM_INDEX_ENCODING tag is used to encode the HtmlHelp index (hhk), content
>> +# (hhc) and project file content.
>> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
>> +
>> +CHM_INDEX_ENCODING     =
>> +
>> +# The BINARY_TOC flag controls whether a binary table of contents is generated
>> +# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
>> +# enables the Previous and Next buttons.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
>> +
>> +BINARY_TOC             = YES
>> +
>> +# The TOC_EXPAND flag can be set to YES to add extra items for group members to
>> +# the table of contents of the HTML help documentation and to the tree view.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
>> +
>> +TOC_EXPAND             = NO
>> +
>> +# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
>> +# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
>> +# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
>> +# (.qch) of the generated HTML documentation.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +GENERATE_QHP           = NO
>> +
>> +# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
>> +# the file name of the resulting .qch file. The path specified is relative to
>> +# the HTML output folder.
>> +# This tag requires that the tag GENERATE_QHP is set to YES.
>> +
>> +QCH_FILE               =
>> +
>> +# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
>> +# Project output. For more information please see Qt Help Project / Namespace
>> +# (see: http://doc.qt.io/qt-4.8/qthelpproject.html#namespace).
>> +# The default value is: org.doxygen.Project.
>> +# This tag requires that the tag GENERATE_QHP is set to YES.
>> +
>> +QHP_NAMESPACE          = org.doxygen.Project
>> +
>> +# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
>> +# Help Project output. For more information please see Qt Help Project / Virtual
>> +# Folders (see: http://doc.qt.io/qt-4.8/qthelpproject.html#virtual-folders).
>> +# The default value is: doc.
>> +# This tag requires that the tag GENERATE_QHP is set to YES.
>> +
>> +QHP_VIRTUAL_FOLDER     = doc
>> +
>> +# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
>> +# filter to add. For more information please see Qt Help Project / Custom
>> +# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
>> +# This tag requires that the tag GENERATE_QHP is set to YES.
>> +
>> +QHP_CUST_FILTER_NAME   =
>> +
>> +# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
>> +# custom filter to add. For more information please see Qt Help Project / Custom
>> +# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
>> +# This tag requires that the tag GENERATE_QHP is set to YES.
>> +
>> +QHP_CUST_FILTER_ATTRS  =
>> +
>> +# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
>> +# project's filter section matches. See Qt Help Project / Filter Attributes
>> +# (see: http://doc.qt.io/qt-4.8/qthelpproject.html#filter-attributes).
>> +# This tag requires that the tag GENERATE_QHP is set to YES.
>> +
>> +QHP_SECT_FILTER_ATTRS  =
>> +
>> +# The QHG_LOCATION tag can be used to specify the location of Qt's
>> +# qhelpgenerator. If non-empty doxygen will try to run qhelpgenerator on the
>> +# generated .qhp file.
>> +# This tag requires that the tag GENERATE_QHP is set to YES.
>> +
>> +QHG_LOCATION           =
>> +
>> +# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
>> +# generated; together with the HTML files, they form an Eclipse help plugin. To
>> +# install this plugin and make it available under the help contents menu in
>> +# Eclipse, the contents of the directory containing the HTML and XML files needs
>> +# to be copied into the plugins directory of Eclipse. The name of the directory
>> +# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
>> +# After copying, Eclipse needs to be restarted before the help appears.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +GENERATE_ECLIPSEHELP   = NO
>> +
>> +# A unique identifier for the Eclipse help plugin. When installing the plugin
>> +# the directory name containing the HTML and XML files should also have this
>> +# name. Each documentation set should have its own identifier.
>> +# The default value is: org.doxygen.Project.
>> +# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
>> +
>> +ECLIPSE_DOC_ID         = org.doxygen.Project
>> +
>> +# If you want full control over the layout of the generated HTML pages it might
>> +# be necessary to disable the index and replace it with your own. The
>> +# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at top
>> +# of each HTML page. A value of NO enables the index and the value YES disables
>> +# it. Since the tabs in the index contain the same information as the navigation
>> +# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +DISABLE_INDEX          = NO
>> +
>> +# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
>> +# structure should be generated to display hierarchical information. If the tag
>> +# value is set to YES, a side panel will be generated containing a tree-like
>> +# index structure (just like the one that is generated for HTML Help). For this
>> +# to work a browser that supports JavaScript, DHTML, CSS and frames is required
>> +# (i.e. any modern browser). Windows users are probably better off using the
>> +# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
>> +# further fine-tune the look of the index. As an example, the default style
>> +# sheet generated by doxygen has an example that shows how to put an image at
>> +# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
>> +# the same information as the tab index, you could consider setting
>> +# DISABLE_INDEX to YES when enabling this option.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +GENERATE_TREEVIEW      = YES
>> +
>> +# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
>> +# doxygen will group on one line in the generated HTML documentation.
>> +#
>> +# Note that a value of 0 will completely suppress the enum values from appearing
>> +# in the overview section.
>> +# Minimum value: 0, maximum value: 20, default value: 4.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +ENUM_VALUES_PER_LINE   = 4
>> +
>> +# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
>> +# to set the initial width (in pixels) of the frame in which the tree is shown.
>> +# Minimum value: 0, maximum value: 1500, default value: 250.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +TREEVIEW_WIDTH         = 250
>> +
>> +# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
>> +# external symbols imported via tag files in a separate window.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +EXT_LINKS_IN_WINDOW    = NO
>> +
>> +# Use this tag to change the font size of LaTeX formulas included as images in
>> +# the HTML documentation. When you change the font size after a successful
>> +# doxygen run you need to manually remove any form_*.png images from the HTML
>> +# output directory to force them to be regenerated.
>> +# Minimum value: 8, maximum value: 50, default value: 10.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +FORMULA_FONTSIZE       = 10
>> +
>> +# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
>> +# generated for formulas are transparent PNGs. Transparent PNGs are not
>> +# supported properly by IE 6.0, but are supported on all modern browsers.
>> +#
>> +# Note that when changing this option you need to delete any form_*.png files in
>> +# the HTML output directory before the changes take effect.
>> +# The default value is: YES.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +FORMULA_TRANSPARENT    = YES
>> +
>> +# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
>> +# https://www.mathjax.org) which uses client side Javascript for the rendering
>> +# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
>> +# installed or if you want the formulas to look prettier in the HTML output.
>> +# When enabled you may also need to install MathJax separately and configure the
>> +# path to it using the MATHJAX_RELPATH option.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +USE_MATHJAX            = NO
>> +
>> +# When MathJax is enabled you can set the default output format to be used for
>> +# the MathJax output. See the MathJax site (see:
>> +# http://docs.mathjax.org/en/latest/output.html) for more details.
>> +# Possible values are: HTML-CSS (which is slower, but has the best
>> +# compatibility), NativeMML (i.e. MathML) and SVG.
>> +# The default value is: HTML-CSS.
>> +# This tag requires that the tag USE_MATHJAX is set to YES.
>> +
>> +MATHJAX_FORMAT         = HTML-CSS
>> +
>> +# When MathJax is enabled you need to specify the location relative to the HTML
>> +# output directory using the MATHJAX_RELPATH option. The destination directory
>> +# should contain the MathJax.js script. For instance, if the mathjax directory
>> +# is located at the same level as the HTML output directory, then
>> +# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
>> +# Content Delivery Network so you can quickly see the result without installing
>> +# MathJax. However, it is strongly recommended to install a local copy of
>> +# MathJax from https://www.mathjax.org before deployment.
>> +# The default value is: http://cdn.mathjax.org/mathjax/latest.
>> +# This tag requires that the tag USE_MATHJAX is set to YES.
>> +
>> +MATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest
>> +
>> +# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
>> +# extension names that should be enabled during MathJax rendering. For example:
>> +# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
>> +# This tag requires that the tag USE_MATHJAX is set to YES.
>> +
>> +MATHJAX_EXTENSIONS     =
>> +
>> +# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
>> +# of code that will be used on startup of the MathJax code. See the MathJax site
>> +# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
>> +# example see the documentation.
>> +# This tag requires that the tag USE_MATHJAX is set to YES.
>> +
>> +MATHJAX_CODEFILE       =
>> +
>> +# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
>> +# the HTML output. The underlying search engine uses javascript and DHTML and
>> +# should work on any modern browser. Note that when using HTML help
>> +# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
>> +# there is already a search function so this one should typically be disabled.
>> +# For large projects the javascript based search engine can be slow; in that
>> +# case enabling SERVER_BASED_SEARCH may provide a better solution. It is
>> +# possible to search using the keyboard; to jump to the search box use
>> +# <access key> + S (what the <access key> is depends on the OS and browser, but
>> +# it is typically <CTRL>, <ALT>/<option>, or both). Inside the search box use
>> +# the <cursor down key> to jump into the search results window, the results can
>> +# be navigated using the <cursor keys>. Press <Enter> to select an item or
>> +# <escape> to cancel the search. The filter options can be selected when the
>> +# cursor is inside the search box by pressing <Shift>+<cursor down>. Also here
>> +# use the <cursor keys> to select a filter and <Enter> or <escape> to activate
>> +# or cancel the filter option.
>> +# The default value is: YES.
>> +# This tag requires that the tag GENERATE_HTML is set to YES.
>> +
>> +SEARCHENGINE           = YES
>> +
>> +# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
>> +# implemented using a web server instead of a web client using Javascript. There
>> +# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
>> +# setting. When disabled, doxygen will generate a PHP script for searching and
>> +# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
>> +# and searching needs to be provided by external tools. See the section
>> +# "External Indexing and Searching" for details.
>> +# The default value is: NO.
>> +# This tag requires that the tag SEARCHENGINE is set to YES.
>> +
>> +SERVER_BASED_SEARCH    = NO
>> +
>> +# When the EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the
>> +# PHP script for searching. Instead the search results are written to an XML
>> +# file which needs to be processed by an external indexer. Doxygen will invoke
>> +# an external search engine pointed to by the SEARCHENGINE_URL option to obtain
>> +# the search results.
>> +#
>> +# Doxygen ships with an example indexer (doxyindexer) and search engine
>> +# (doxysearch.cgi) which are based on the open source search engine library
>> +# Xapian (see: https://xapian.org/).
>> +#
>> +# See the section "External Indexing and Searching" for details.
>> +# The default value is: NO.
>> +# This tag requires that the tag SEARCHENGINE is set to YES.
>> +
>> +EXTERNAL_SEARCH        = NO
>> +
>> +# The SEARCHENGINE_URL should point to a search engine hosted by a web server
>> +# which will return the search results when EXTERNAL_SEARCH is enabled.
>> +#
>> +# Doxygen ships with an example indexer (doxyindexer) and search engine
>> +# (doxysearch.cgi) which are based on the open source search engine library
>> +# Xapian (see: https://xapian.org/). See the section "External Indexing and
>> +# Searching" for details.
>> +# This tag requires that the tag SEARCHENGINE is set to YES.
>> +
>> +SEARCHENGINE_URL       =
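
For reference, the search-related tags above combine as follows when handing
search over to an external indexer; the URL is a made-up placeholder, not a
real deployment:

```
# Hand indexing and searching over to external tools (doxyindexer plus a
# hosted doxysearch.cgi).
SERVER_BASED_SEARCH = YES
EXTERNAL_SEARCH     = YES
# Illustrative URL only; point this at wherever doxysearch.cgi is hosted.
SEARCHENGINE_URL    = https://docs.example.org/cgi-bin/doxysearch.cgi
```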
>> +
>> +# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
>> +# search data is written to a file for indexing by an external tool. With the
>> +# SEARCHDATA_FILE tag the name of this file can be specified.
>> +# The default file is: searchdata.xml.
>> +# This tag requires that the tag SEARCHENGINE is set to YES.
>> +
>> +SEARCHDATA_FILE        = searchdata.xml
>> +
>> +# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
>> +# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
>> +# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
>> +# projects and redirect the results back to the right project.
>> +# This tag requires that the tag SEARCHENGINE is set to YES.
>> +
>> +EXTERNAL_SEARCH_ID     =
>> +
>> +# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through=
 doxygen
>> +# projects other than the one defined by this configuration file, but t=
hat are
>> +# all added to the same external search index. Each project needs to ha=
ve a
>> +# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps th=
e id of
>> +# to a relative location where the documentation can be found. The form=
at is:
>> +# EXTRA_SEARCH_MAPPINGS =3D tagname1=3Dloc1 tagname2=3Dloc2 ...
>> +# This tag requires that the tag SEARCHENGINE is set to YES.
>> +
>> +EXTRA_SEARCH_MAPPINGS  =3D
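(Reviewer's note on the mapping syntax above, since both tags are left empty in this patch: for a hypothetical second doxygen project sharing the same external index, the configuration would look roughly like the following. The ids and paths are invented for illustration only.)

```
# In this project's Doxyfile:
EXTERNAL_SEARCH_ID     = xen-hypervisor
# Map the other project's id to the relative location of its HTML output:
EXTRA_SEARCH_MAPPINGS  = xen-tools=../tools/html
```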
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the LaTeX output
>> +#---------------------------------------------------------------------------
>> +
>> +# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
>> +# The default value is: YES.
>> +
>> +GENERATE_LATEX         = NO
>> +
>> +# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
>> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
>> +# it.
>> +# The default directory is: latex.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_OUTPUT           = latex
>> +
>> +# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
>> +# invoked.
>> +#
>> +# Note that when enabling USE_PDFLATEX this option is only used for generating
>> +# bitmaps for formulas in the HTML output, but not in the Makefile that is
>> +# written to the output directory.
>> +# The default file is: latex.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_CMD_NAME         = latex
>> +
>> +# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
>> +# index for LaTeX.
>> +# The default file is: makeindex.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +MAKEINDEX_CMD_NAME     = makeindex
>> +
>> +# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
>> +# documents. This may be useful for small projects and may help to save some
>> +# trees in general.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +COMPACT_LATEX          = NO
>> +
>> +# The PAPER_TYPE tag can be used to set the paper type that is used by the
>> +# printer.
>> +# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
>> +# 14 inches) and executive (7.25 x 10.5 inches).
>> +# The default value is: a4.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +PAPER_TYPE             = a4
>> +
>> +# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
>> +# that should be included in the LaTeX output. The package can be specified just
>> +# by its name or with the correct syntax as to be used with the LaTeX
>> +# \usepackage command. To get the times font for instance you can specify:
>> +# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
>> +# To use the option intlimits with the amsmath package you can specify:
>> +# EXTRA_PACKAGES=[intlimits]{amsmath}
>> +# If left blank no extra packages will be included.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +EXTRA_PACKAGES         =
>> +
>> +# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
>> +# generated LaTeX document. The header should contain everything until the first
>> +# chapter. If it is left blank doxygen will generate a standard header. See
>> +# section "Doxygen usage" for information on how to let doxygen write the
>> +# default header to a separate file.
>> +#
>> +# Note: Only use a user-defined header if you know what you are doing! The
>> +# following commands have a special meaning inside the header: $title,
>> +# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
>> +# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
>> +# string, for the replacement values of the other commands the user is referred
>> +# to HTML_HEADER.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_HEADER           =
>> +
>> +# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
>> +# generated LaTeX document. The footer should contain everything after the last
>> +# chapter. If it is left blank doxygen will generate a standard footer. See
>> +# LATEX_HEADER for more information on how to generate a default footer and what
>> +# special commands can be used inside the footer.
>> +#
>> +# Note: Only use a user-defined footer if you know what you are doing!
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_FOOTER           =
>> +
>> +# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
>> +# other source files which should be copied to the LATEX_OUTPUT output
>> +# directory. Note that the files will be copied as-is; there are no commands or
>> +# markers available.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_EXTRA_FILES      =
>> +
>> +# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
>> +# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
>> +# contain links (just like the HTML output) instead of page references. This
>> +# makes the output suitable for online browsing using a PDF viewer.
>> +# The default value is: YES.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +PDF_HYPERLINKS         = YES
>> +
>> +# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
>> +# the PDF file directly from the LaTeX files. Set this option to YES, to get a
>> +# higher quality PDF documentation.
>> +# The default value is: YES.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +USE_PDFLATEX           = YES
>> +
>> +# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
>> +# command to the generated LaTeX files. This will instruct LaTeX to keep running
>> +# if errors occur, instead of asking the user for help. This option is also used
>> +# when generating formulas in HTML.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_BATCHMODE        = NO
>> +
>> +# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
>> +# index chapters (such as File Index, Compound Index, etc.) in the output.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_HIDE_INDICES     = NO
>> +
>> +# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
>> +# code with syntax highlighting in the LaTeX output.
>> +#
>> +# Note that which sources are shown also depends on other settings such as
>> +# SOURCE_BROWSER.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_SOURCE_CODE      = NO
>> +
>> +# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
>> +# bibliography, e.g. plainnat, or ieeetr. See
>> +# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
>> +# The default value is: plain.
>> +# This tag requires that the tag GENERATE_LATEX is set to YES.
>> +
>> +LATEX_BIB_STYLE        = plain
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the RTF output
>> +#---------------------------------------------------------------------------
>> +
>> +# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
>> +# RTF output is optimized for Word 97 and may not look too pretty with other RTF
>> +# readers/editors.
>> +# The default value is: NO.
>> +
>> +GENERATE_RTF           = NO
>> +
>> +# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
>> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
>> +# it.
>> +# The default directory is: rtf.
>> +# This tag requires that the tag GENERATE_RTF is set to YES.
>> +
>> +RTF_OUTPUT             = rtf
>> +
>> +# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
>> +# documents. This may be useful for small projects and may help to save some
>> +# trees in general.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_RTF is set to YES.
>> +
>> +COMPACT_RTF            = NO
>> +
>> +# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
>> +# contain hyperlink fields. The RTF file will contain links (just like the HTML
>> +# output) instead of page references. This makes the output suitable for online
>> +# browsing using Word or some other Word compatible readers that support those
>> +# fields.
>> +#
>> +# Note: WordPad (write) and others do not support links.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_RTF is set to YES.
>> +
>> +RTF_HYPERLINKS         = YES
>> +
>> +# Load stylesheet definitions from file. Syntax is similar to doxygen's config
>> +# file, i.e. a series of assignments. You only have to provide replacements,
>> +# missing definitions are set to their default value.
>> +#
>> +# See also section "Doxygen usage" for information on how to generate the
>> +# default style sheet that doxygen normally uses.
>> +# This tag requires that the tag GENERATE_RTF is set to YES.
>> +
>> +RTF_STYLESHEET_FILE    =
>> +
>> +# Set optional variables used in the generation of an RTF document. Syntax is
>> +# similar to doxygen's config file. A template extensions file can be generated
>> +# using doxygen -e rtf extensionFile.
>> +# This tag requires that the tag GENERATE_RTF is set to YES.
>> +
>> +RTF_EXTENSIONS_FILE    =
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the man page output
>> +#---------------------------------------------------------------------------
>> +
>> +# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
>> +# classes and files.
>> +# The default value is: NO.
>> +
>> +GENERATE_MAN           = NO
>> +
>> +# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
>> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
>> +# it. A directory man3 will be created inside the directory specified by
>> +# MAN_OUTPUT.
>> +# The default directory is: man.
>> +# This tag requires that the tag GENERATE_MAN is set to YES.
>> +
>> +MAN_OUTPUT             = man
>> +
>> +# The MAN_EXTENSION tag determines the extension that is added to the generated
>> +# man pages. In case the manual section does not start with a number, the number
>> +# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
>> +# optional.
>> +# The default value is: .3.
>> +# This tag requires that the tag GENERATE_MAN is set to YES.
>> +
>> +MAN_EXTENSION          = .3
>> +
>> +# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
>> +# will generate one additional man file for each entity documented in the real
>> +# man page(s). These additional files only source the real man page, but without
>> +# them the man command would be unable to find the correct page.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_MAN is set to YES.
>> +
>> +MAN_LINKS              = NO
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the XML output
>> +#---------------------------------------------------------------------------
>> +
>> +# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
>> +# captures the structure of the code including all documentation.
>> +# The default value is: NO.
>> +
>> +GENERATE_XML           = YES
>> +
>> +# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
>> +# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
>> +# it.
>> +# The default directory is: xml.
>> +# This tag requires that the tag GENERATE_XML is set to YES.
>> +
>> +XML_OUTPUT             = xml
>> +
>> +# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
>> +# listings (including syntax highlighting and cross-referencing information) to
>> +# the XML output. Note that enabling this will significantly increase the size
>> +# of the XML output.
>> +# The default value is: YES.
>> +# This tag requires that the tag GENERATE_XML is set to YES.
>> +
>> +XML_PROGRAMLISTING     = YES
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the DOCBOOK output
>> +#---------------------------------------------------------------------------
>> +
>> +# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
>> +# that can be used to generate PDF.
>> +# The default value is: NO.
>> +
>> +GENERATE_DOCBOOK       = NO
>> +
>> +# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
>> +# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
>> +# front of it.
>> +# The default directory is: docbook.
>> +# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
>> +
>> +DOCBOOK_OUTPUT         = docbook
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options for the AutoGen Definitions output
>> +#---------------------------------------------------------------------------
>> +
>> +# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
>> +# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
>> +# the structure of the code including all documentation. Note that this feature
>> +# is still experimental and incomplete at the moment.
>> +# The default value is: NO.
>> +
>> +GENERATE_AUTOGEN_DEF   = NO
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the Perl module output
>> +#---------------------------------------------------------------------------
>> +
>> +# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
>> +# file that captures the structure of the code including all documentation.
>> +#
>> +# Note that this feature is still experimental and incomplete at the moment.
>> +# The default value is: NO.
>> +
>> +GENERATE_PERLMOD       = NO
>> +
>> +# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
>> +# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
>> +# output from the Perl module output.
>> +# The default value is: NO.
>> +# This tag requires that the tag GENERATE_PERLMOD is set to YES.
>> +
>> +PERLMOD_LATEX          = NO
>> +
>> +# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
>> +# formatted so it can be parsed by a human reader. This is useful if you want to
>> +# understand what is going on. On the other hand, if this tag is set to NO, the
>> +# size of the Perl module output will be much smaller and Perl will parse it
>> +# just the same.
>> +# The default value is: YES.
>> +# This tag requires that the tag GENERATE_PERLMOD is set to YES.
>> +
>> +PERLMOD_PRETTY         = YES
>> +
>> +# The names of the make variables in the generated doxyrules.make file are
>> +# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
>> +# so different doxyrules.make files included by the same Makefile don't
>> +# overwrite each other's variables.
>> +# This tag requires that the tag GENERATE_PERLMOD is set to YES.
>> +
>> +PERLMOD_MAKEVAR_PREFIX =
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to the preprocessor
>> +#---------------------------------------------------------------------------
>> +
>> +# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
>> +# C-preprocessor directives found in the sources and include files.
>> +# The default value is: YES.
>> +
>> +ENABLE_PREPROCESSING   = YES
>> +
>> +# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
>> +# in the source code. If set to NO, only conditional compilation will be
>> +# performed. Macro expansion can be done in a controlled way by setting
>> +# EXPAND_ONLY_PREDEF to YES.
>> +# The default value is: NO.
>> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
>> +
>> +MACRO_EXPANSION        = YES
>> +
>> +# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
>> +# the macro expansion is limited to the macros specified with the PREDEFINED and
>> +# EXPAND_AS_DEFINED tags.
>> +# The default value is: NO.
>> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
>> +
>> +EXPAND_ONLY_PREDEF     = NO
>> +
>> +# If the SEARCH_INCLUDES tag is set to YES, the include files in the
>> +# INCLUDE_PATH will be searched if a #include is found.
>> +# The default value is: YES.
>> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
>> +
>> +SEARCH_INCLUDES        = YES
>> +
>> +# The INCLUDE_PATH tag can be used to specify one or more directories that
>> +# contain include files that are not input files but should be processed by the
>> +# preprocessor.
>> +# This tag requires that the tag SEARCH_INCLUDES is set to YES.
>> +
>> +INCLUDE_PATH           = "@XEN_BASE@/xen/include/generated" \
>> +                         "@XEN_BASE@/xen/include/"
>> +
>> +# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
>> +# patterns (like *.h and *.hpp) to filter out the header-files in the
>> +# directories. If left blank, the patterns specified with FILE_PATTERNS will be
>> +# used.
>> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
>> +
>> +INCLUDE_FILE_PATTERNS  =
>> +
>> +# The PREDEFINED tag can be used to specify one or more macro names that are
>> +# defined before the preprocessor is started (similar to the -D option of e.g.
>> +# gcc). The argument of the tag is a list of macros of the form: name or
>> +# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
>> +# is assumed. To prevent a macro definition from being undefined via #undef or
>> +# recursively expanded use the := operator instead of the = operator.
>> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
>> +
>> +PREDEFINED             = __attribute__(x)= \
>> +                         DOXYGEN \
>> +                         __XEN__
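(Reviewer's note: the three PREDEFINED forms described in the comment above can be illustrated as follows; the `name=definition` form is what lets the `__attribute__(x)=` entry in this patch strip GCC attribute annotations before doxygen parses the C sources. The `FOO` entry is invented for illustration and not part of the patch.)

```
# name            -> "=1" is assumed, i.e. behaves like -DDOXYGEN=1
# name=definition -> __attribute__(x) expands to nothing while parsing
# name:=definition -> as above, but any #undef in the sources is ignored
PREDEFINED = DOXYGEN \
             __attribute__(x)= \
             FOO:=1
```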
>> +
>> +# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
>> +# tag can be used to specify a list of macro names that should be expanded. The
>> +# macro definition that is found in the sources will be used. Use the PREDEFINED
>> +# tag if you want to use a different macro definition that overrules the
>> +# definition found in the source code.
>> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
>> +
>> +EXPAND_AS_DEFINED      =
>> +
>> +# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
>> +# remove all references to function-like macros that are alone on a line, have
>> +# an all uppercase name, and do not end with a semicolon. Such function macros
>> +# are typically used for boiler-plate code, and will confuse the parser if not
>> +# removed.
>> +# The default value is: YES.
>> +# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
>> +
>> +SKIP_FUNCTION_MACROS   = NO
>> +
>> +#---------------------------------------------------------------------------
>> +# Configuration options related to external references
>> +#---------------------------------------------------------------------------
>> +
>> +# The TAGFILES tag can be used to specify one or more tag files. For each tag
>> +# file the location of the external documentation should be added. The format of
>> +# a tag file without this location is as follows:
>> +# TAGFILES = file1 file2 ...
>> +# Adding location for the tag files is done as follows:
>> +# TAGFILES = file1=loc1 "file2 = loc2" ...
>> +# where loc1 and loc2 can be relative or absolute paths or URLs. See the
>> +# section "Linking to external documentation" for more information about the use
>> +# of tag files.
>> +# Note: Each tag file must have a unique name (where the name does NOT include
>> +# the path). If a tag file is not located in the directory in which doxygen is
>> +# run, you must also specify the path to the tagfile here.
>> +
>> +TAGFILES               =
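(Reviewer's note: should cross-linking against another doxygen build ever be wanted, the syntax documented above would be used roughly as follows; the tag file name and path are invented for illustration.)

```
# The other project's doxygen run sets GENERATE_TAGFILE = other.tag;
# here that tag file is mapped to the relative location of its HTML:
TAGFILES = other.tag=../other-project/html
```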
>> +
>> +# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
>> +# tag file that is based on the input files it reads. See section "Linking to
>> +# external documentation" for more information about the usage of tag files.
>> +
>> +GENERATE_TAGFILE       =
>> +
>> +# If the ALLEXTERNALS tag is set to YES, all external classes will be listed in
>> +# the class index. If set to NO, only the inherited external classes will be
>> +# listed.
>> +# The default value is: NO.
>> +
>> +ALLEXTERNALS           = NO
>> +
>> +# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
>> +# in the modules index. If set to NO, only the current project's groups will be
>> +# listed.
>> +# The default value is: YES.
>> +
>> +EXTERNAL_GROUPS        = YES
>> +
>> +# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
>> +# the related pages index. If set to NO, only the current project's pages will
>> +# be listed.
>> +# The default value is: YES.
>> +
>> +EXTERNAL_PAGES         = YES
>> +
>> +#----------------------------------------------------------------------=
-----
>> +# Configuration options related to the dot tool
>> +#----------------------------------------------------------------------=
-----
>> +
>> +# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a clas=
s diagram
>> +# (in HTML and LaTeX) for classes with base or super classes. Setting t=
he tag to
>> +# NO turns the diagrams off. Note that this option also works with HAVE=
_DOT
>> +# disabled, but it is recommended to install and use dot, since it yiel=
ds more
>> +# powerful graphs.
>> +# The default value is: YES.
>> +
>> +CLASS_DIAGRAMS         =3D NO
>> +
>> +# You can include diagrams made with dia in doxygen documentation. Doxy=
gen will
>> +# then run dia to produce the diagram and insert it in the documentatio=
n. The
>> +# DIA_PATH tag allows you to specify the directory where the dia binary=
 resides.
>> +# If left empty dia is assumed to be found in the default search path.
>> +
>> +DIA_PATH               =3D
>> +
>> +# If set to YES the inheritance and collaboration graphs will hide inhe=
ritance
>> +# and usage relations if the target is undocumented or is not a class.
>> +# The default value is: YES.
>> +
>> +HIDE_UNDOC_RELATIONS   =3D YES
>> +
>> +# If you set the HAVE_DOT tag to YES then doxygen will assume the dot t=
ool is
>> +# available from the path. This tool is part of Graphviz (see:
>> +# http://www.graphviz.org/), a graph visualization toolkit from AT&T an=
d Lucent
>> +# Bell Labs. The other options in this section have no effect if this o=
ption is
>> +# set to NO
>> +# The default value is: NO.
>> +
>> +HAVE_DOT               =3D NO
>> +
>> +# The DOT_NUM_THREADS specifies the number of dot invocations doxygen i=
s allowed
>> +# to run in parallel. When set to 0 doxygen will base this on the numbe=
r of
>> +# processors available in the system. You can set it explicitly to a va=
lue
>> +# larger than 0 to get control over the balance between CPU load and pr=
ocessing
>> +# speed.
>> +# Minimum value: 0, maximum value: 32, default value: 0.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_NUM_THREADS        =3D 0
>> +
>> +# When you want a differently looking font in the dot files that doxyge=
n
>> +# generates you can specify the font name using DOT_FONTNAME. You need =
to make
>> +# sure dot is able to find the font, which can be done by putting it in=
 a
>> +# standard location or by setting the DOTFONTPATH environment variable =
or by
>> +# setting DOT_FONTPATH to the directory containing the font.
>> +# The default value is: Helvetica.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_FONTNAME           =3D Helvetica
>> +
>> +# The DOT_FONTSIZE tag can be used to set the size (in points) of the f=
ont of
>> +# dot graphs.
>> +# Minimum value: 4, maximum value: 24, default value: 10.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_FONTSIZE           =3D 10
>> +
>> +# By default doxygen will tell dot to use the default font as specified=
 with
>> +# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you =
can set
>> +# the path where dot can find it using this tag.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_FONTPATH           =3D
>> +
>> +# If the CLASS_GRAPH tag is set to YES then doxygen will generate a gra=
ph for
>> +# each documented class showing the direct and indirect inheritance rel=
ations.
>> +# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +CLASS_GRAPH            =3D YES
>> +
>> +# If the COLLABORATION_GRAPH tag is set to YES then doxygen will genera=
te a
>> +# graph for each documented class showing the direct and indirect imple=
mentation
>> +# dependencies (inheritance, containment, and class references variable=
s) of the
>> +# class with other documented classes.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +COLLABORATION_GRAPH    =3D YES
>> +
>> +# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a gr=
aph for
>> +# groups, showing the direct groups dependencies.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +GROUP_GRAPHS           =3D YES
>> +
>> +# If the UML_LOOK tag is set to YES, doxygen will generate inheritance =
and
>> +# collaboration diagrams in a style similar to the OMG's Unified Modeli=
ng
>> +# Language.
>> +# The default value is: NO.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +UML_LOOK               =3D NO
>> +
>> +# If the UML_LOOK tag is enabled, the fields and methods are shown insi=
de the
>> +# class node. If there are many fields or methods and many nodes the gr=
aph may
>> +# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limit=
s the
>> +# number of items for each type to make the size more manageable. Set t=
his to 0
>> +# for no limit. Note that the threshold may be exceeded by 50% before t=
he limit
>> +# is enforced. So when you set the threshold to 10, up to 15 fields may=
 appear,
>> +# but if the number exceeds 15, the total amount of fields shown is lim=
ited to
>> +# 10.
>> +# Minimum value: 0, maximum value: 100, default value: 10.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +UML_LIMIT_NUM_FIELDS   =3D 10
>> +
>> +# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
>> +# collaboration graphs will show the relations between templates and th=
eir
>> +# instances.
>> +# The default value is: NO.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +TEMPLATE_RELATIONS     =3D NO
>> +
>> +# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags a=
re set to
>> +# YES then doxygen will generate a graph for each documented file showi=
ng the
>> +# direct and indirect include dependencies of the file with other docum=
ented
>> +# files.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +INCLUDE_GRAPH          =3D YES
>> +
>> +# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES ta=
gs are
>> +# set to YES then doxygen will generate a graph for each documented fil=
e showing
>> +# the direct and indirect include dependencies of the file with other d=
ocumented
>> +# files.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +INCLUDED_BY_GRAPH      =3D YES
>> +
>> +# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
>> +# dependency graph for every global function or class method.
>> +#
>> +# Note that enabling this option will significantly increase the time o=
f a run.
>> +# So in most cases it will be better to enable call graphs for selected
>> +# functions only using the \callgraph command. Disabling a call graph c=
an be
>> +# accomplished by means of the command \hidecallgraph.
>> +# The default value is: NO.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +CALL_GRAPH             =3D NO
>> +
>> +# If the CALLER_GRAPH tag is set to YES then doxygen will generate a ca=
ller
>> +# dependency graph for every global function or class method.
>> +#
>> +# Note that enabling this option will significantly increase the time o=
f a run.
>> +# So in most cases it will be better to enable caller graphs for select=
ed
>> +# functions only using the \callergraph command. Disabling a caller gra=
ph can be
>> +# accomplished by means of the command \hidecallergraph.
>> +# The default value is: NO.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +CALLER_GRAPH           =3D NO
>> +
>> +# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will graphi=
cal
>> +# hierarchy of all classes instead of a textual one.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +GRAPHICAL_HIERARCHY    =3D YES
>> +
>> +# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
>> +# dependencies a directory has on other directories in a graphical way. The
>> +# dependency relations are determined by the #include relations between the
>> +# files in the directories.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DIRECTORY_GRAPH        = YES
>> +
>> +# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
>> +# generated by dot. For an explanation of the image formats see the section
>> +# output formats in the documentation of the dot tool (Graphviz (see:
>> +# http://www.graphviz.org/)).
>> +# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
>> +# to make the SVG files visible in IE 9+ (other browsers do not have this
>> +# requirement).
>> +# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
>> +# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
>> +# png:gdiplus:gdiplus.
>> +# The default value is: png.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_IMAGE_FORMAT       = png
>> +
>> +# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
>> +# enable generation of interactive SVG images that allow zooming and panning.
>> +#
>> +# Note that this requires a modern browser other than Internet Explorer. Tested
>> +# and working are Firefox, Chrome, Safari, and Opera.
>> +# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
>> +# the SVG files visible. Older versions of IE do not have SVG support.
>> +# The default value is: NO.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +INTERACTIVE_SVG        = NO
>> +
>> +# The DOT_PATH tag can be used to specify the path where the dot tool can be
>> +# found. If left blank, it is assumed the dot tool can be found in the path.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_PATH               =
>> +
>> +# The DOTFILE_DIRS tag can be used to specify one or more directories that
>> +# contain dot files that are included in the documentation (see the \dotfile
>> +# command).
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOTFILE_DIRS           =
>> +
>> +# The MSCFILE_DIRS tag can be used to specify one or more directories that
>> +# contain msc files that are included in the documentation (see the \mscfile
>> +# command).
>> +
>> +MSCFILE_DIRS           =
>> +
>> +# The DIAFILE_DIRS tag can be used to specify one or more directories that
>> +# contain dia files that are included in the documentation (see the \diafile
>> +# command).
>> +
>> +DIAFILE_DIRS           =
>> +
>> +# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
>> +# that will be shown in the graph. If the number of nodes in a graph becomes
>> +# larger than this value, doxygen will truncate the graph, which is visualized
>> +# by representing a node as a red box. Note that if the number of direct
>> +# children of the root node in a graph is already larger than
>> +# DOT_GRAPH_MAX_NODES then doxygen will not show the graph at all. Also note
>> +# that the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
>> +# Minimum value: 0, maximum value: 10000, default value: 50.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_GRAPH_MAX_NODES    = 50
>> +
>> +# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
>> +# generated by dot. A depth value of 3 means that only nodes reachable from the
>> +# root by following a path via at most 3 edges will be shown. Nodes that lie
>> +# further from the root node will be omitted. Note that setting this option to 1
>> +# or 2 may greatly reduce the computation time needed for large code bases. Also
>> +# note that the size of a graph can be further restricted by
>> +# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
>> +# Minimum value: 0, maximum value: 1000, default value: 0.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +MAX_DOT_GRAPH_DEPTH    = 0
>> +
>> +# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
>> +# background. This is disabled by default, because dot on Windows does not seem
>> +# to support this out of the box.
>> +#
>> +# Warning: Depending on the platform used, enabling this option may lead to
>> +# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
>> +# read).
>> +# The default value is: NO.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_TRANSPARENT        = NO
>> +
>> +# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
>> +# files in one run (i.e. multiple -o and -T options on the command line). This
>> +# makes dot run faster, but since only newer versions of dot (>1.8.10) support
>> +# this, this feature is disabled by default.
>> +# The default value is: NO.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_MULTI_TARGETS      = NO
>> +
>> +# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
>> +# explaining the meaning of the various boxes and arrows in the dot generated
>> +# graphs.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +GENERATE_LEGEND        = YES
>> +
>> +# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
>> +# files that are used to generate the various graphs.
>> +# The default value is: YES.
>> +# This tag requires that the tag HAVE_DOT is set to YES.
>> +
>> +DOT_CLEANUP            = YES
>> diff --git a/m4/ax_python_module.m4 b/m4/ax_python_module.m4
>> new file mode 100644
>> index 0000000000..107d88264a
>> --- /dev/null
>> +++ b/m4/ax_python_module.m4
>> @@ -0,0 +1,56 @@
>> +# ===========================================================================
>> +#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
>> +# ===========================================================================
>> +#
>> +# SYNOPSIS
>> +#
>> +#   AX_PYTHON_MODULE(modname[, fatal, python])
>> +#
>> +# DESCRIPTION
>> +#
>> +#   Checks for Python module.
>> +#
>> +#   If fatal is non-empty then absence of a module will trigger an error.
>> +#   The third parameter can either be "python" for Python 2 or "python3" for
>> +#   Python 3; defaults to Python 3.
>> +#
>> +# LICENSE
>> +#
>> +#   Copyright (c) 2008 Andrew Collier
>> +#
>> +#   Copying and distribution of this file, with or without modification, are
>> +#   permitted in any medium without royalty provided the copyright notice
>> +#   and this notice are preserved. This file is offered as-is, without any
>> +#   warranty.
>> +
>> +#serial 9
>> +
>> +AU_ALIAS([AC_PYTHON_MODULE], [AX_PYTHON_MODULE])
>> +AC_DEFUN([AX_PYTHON_MODULE],[
>> +    if test -z $PYTHON;
>> +    then
>> +        if test -z "$3";
>> +        then
>> +            PYTHON="python3"
>> +        else
>> +            PYTHON="$3"
>> +        fi
>> +    fi
>> +    PYTHON_NAME=`basename $PYTHON`
>> +    AC_MSG_CHECKING($PYTHON_NAME module: $1)
>> +    $PYTHON -c "import $1" 2>/dev/null
>> +    if test $? -eq 0;
>> +    then
>> +        AC_MSG_RESULT(yes)
>> +        eval AS_TR_CPP(HAVE_PYMOD_$1)=yes
>> +    else
>> +        AC_MSG_RESULT(no)
>> +        eval AS_TR_CPP(HAVE_PYMOD_$1)=no
>> +        #
>> +        if test -n "$2"
>> +        then
>> +            AC_MSG_ERROR(failed to find required module $1)
>> +            exit 1
>> +        fi
>> +    fi
>> +])
>> \ No newline at end of file
>> diff --git a/m4/docs_tool.m4 b/m4/docs_tool.m4
>> index 3e8814ac8d..39aa348026 100644
>> --- a/m4/docs_tool.m4
>> +++ b/m4/docs_tool.m4
>> @@ -15,3 +15,12 @@ dnl
>>         AC_MSG_WARN([$2 is not available so some documentation won't be built])
>>     ])
>> ])
>> +
>> +AC_DEFUN([AX_DOCS_TOOL_REQ_PROG], [
>> +dnl
>> +    AC_ARG_VAR([$1], [Path to $2 tool])
>> +    AC_PATH_PROG([$1], [$2])
>> +    AS_IF([! test -x "$ac_cv_path_$1"], [
>> +        AC_MSG_ERROR([$2 is needed])
>> +    ])
>> +])
>> \ No newline at end of file
>> -- 
>> 2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 05 10:57:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 10:57:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123000.232038 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFDV-0004cd-5m; Wed, 05 May 2021 10:57:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123000.232038; Wed, 05 May 2021 10:57:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFDV-0004cW-2r; Wed, 05 May 2021 10:57:21 +0000
Received: by outflank-mailman (input) for mailman id 123000;
 Wed, 05 May 2021 10:57:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leFDT-0004cP-2z
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 10:57:19 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 07912adf-9ee7-4154-b7a1-9c4f3b3ad2f0;
 Wed, 05 May 2021 10:57:18 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2B660B008;
 Wed,  5 May 2021 10:57:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 07912adf-9ee7-4154-b7a1-9c4f3b3ad2f0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620212237; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WNIUz+uee6Hj37SiZqFBZ4oIFLeTiPOCaSq5FTsNibo=;
	b=a0BVc9NIA151mrdAnbJnuBdo9kJDoshSiE+0fMgSBqwJJy1sNbs0I/d3MtiumueRnPApEj
	GhvGTQ0JrRqyZFY0HBAC5C46gr+xl08ePGFkXUUR4bHDO1nHlPeJI+5n9kgxxJvdkzmojC
	rZThExtXkndYBBxLmc9Qr4PEXGUH6q0=
Subject: Re: [PATCH] public/gnttab: relax v2 recommendation
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Luca Fancellu <luca.fancellu@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <dcd9ede1-9471-6866-4ba7-b6a7664b5e35@suse.com>
 <8eac6f09-4d1d-6fcc-4218-8c9a0760a6bb@xen.org>
 <71e61d09-5d92-94dc-ae0c-ce09cb49b4ce@suse.com>
 <c468856b-8ac6-2ab1-0f5f-eabc26d47293@xen.org>
 <51c29a91-8659-7525-a565-5b9fcfc935f3@suse.com>
 <9b8fb87c-a2fb-f371-5914-f0d175c18b02@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7d55a2c1-f630-3e43-fb6a-39f28f716bca@suse.com>
Date: Wed, 5 May 2021 12:57:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <9b8fb87c-a2fb-f371-5914-f0d175c18b02@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.05.2021 10:51, Julien Grall wrote:
> Hi Jan,
> 
> On 05/05/2021 09:24, Jan Beulich wrote:
>> On 05.05.2021 10:12, Julien Grall wrote:
>>> Hi Jan,
>>>
>>> On 30/04/2021 09:36, Jan Beulich wrote:
>>>> On 30.04.2021 10:19, Julien Grall wrote:
>>>>> On 29/04/2021 14:10, Jan Beulich wrote:
>>>>>> With there being a way to disable v2 support, telling new guests to use
>>>>>> v2 exclusively is not a good suggestion.
>>>>>>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>
>>>>>> --- a/xen/include/public/grant_table.h
>>>>>> +++ b/xen/include/public/grant_table.h
>>>>>> @@ -121,8 +121,10 @@ typedef uint32_t grant_ref_t;
>>>>>>      */
>>>>>>     
>>>>>>     /*
>>>>>> - * Version 1 of the grant table entry structure is maintained purely
>>>>>> - * for backwards compatibility.  New guests should use version 2.
>>>>>> + * Version 1 of the grant table entry structure is maintained largely for
>>>>>> + * backwards compatibility.  New guests are recommended to support using
>>>>>> + * version 2 to overcome version 1 limitations, but to be able to fall back
>>>>>> + * to version 1.
>>>>>
>>>>> v2 is not supported on Arm and I don't see it coming anytime soon.
>>>>> AFAIK, Linux will also not use grant table v2 unless the guest has an
>>>>> address space larger than 44 (?) bits.
>>>>
>>>> Yes, as soon as there are frame numbers not representable in 32 bits.
>>>>
>>>>> I can't remember why Linux decided to not use it everywhere, but this is
>>>>> a sign that v2 is not always desired.
>>>>>
>>>>> So I think it would be better to recommend new guests to use v1 unless
>>>>> they hit the limitations (to be detailed).
>>>>
>>>> IOW you'd prefer s/be able to fall back/default/? I'd be fine that way
>>>
>>> Yes.
>>
>> Okay, I've changed that part, but ...
>>
>>> We would also need to document the limitations as they don't seem
>>> to be (clearly?) written down in the headers.
>>
>> ... I'm struggling to see where (and perhaps even why) this would be
>> needed. The v1 and v2 grant table entry formats are all there. I'm
>> inclined to consider this an orthogonal addition to make by whoever
>> thinks such an addition is needed in the first place.
> 
> The current comment does not mention the limitations but instead says 
> "new OS should use v2". Your proposal is to say "default to v1 but use 
> v2 if you hit limitations".

I've intentionally not said "if you hit limitations".

> As a Xen developer, I am aware of a single limitation (the 44 bits). But 
> here you suggest there are multiple ones. I could probably figure out 
> the others if I dug into the code...
> 
> Now imagine you are an OS developer new to Xen. I don't think it is 
> fair to say "there are limitations but I will not tell you directly. 
> Instead you should try to infer them from the definitions". There is a 
> chance he/she may have missed some of the limitations and therefore the 
> decision to switch between v1 and v2 would be made incorrectly.
> 
> In addition to that, it also means she/he may end up implementing both 
> versions when implementing v1 alone may be sufficient (custom OSes 
> may never need 44 bits worth of address space).

This limitation is based on guest attributes only for HVM. For PV,
host properties matter, so PV guests strictly should be able to
fall back to v2, or they are liable to end up not working on large
hosts.

The other limitation is more a performance one - v2 separates grant
attributes from grant status.

> This is not a very friendly way to work on Xen. FAOD, I am not saying 
> that the other headers are perfect... Instead, I am saying we ought to 
> improve any new wording to make the project a bit more welcoming.

I don't think the public header is the place to go into such lengths,
especially when all the information is already there. Textually
describing the same aspects should be done elsewhere imo. I'm of the
firm opinion that the patch as is represents an improvement. There's
no suggestion anywhere that things couldn't be further improved, as
is almost always the case.

Since I created this patch only because my request to correct the
statement led to me being asked to provide the suggested new text,
may I suggest that you pick up this patch or create one from scratch
to accommodate all your wishes, if you believe this extra information
really belongs there? All I'm after is to correct a statement that's
actively misleading.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 11:12:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 11:12:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123006.232051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFSI-0006rf-HC; Wed, 05 May 2021 11:12:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123006.232051; Wed, 05 May 2021 11:12:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFSI-0006rY-Dv; Wed, 05 May 2021 11:12:38 +0000
Received: by outflank-mailman (input) for mailman id 123006;
 Wed, 05 May 2021 11:12:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leFSH-0006rS-M2
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 11:12:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b0a2f8d9-1c02-4594-82fc-598509bff442;
 Wed, 05 May 2021 11:12:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7ECCCB172;
 Wed,  5 May 2021 11:12:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b0a2f8d9-1c02-4594-82fc-598509bff442
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620213155; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5cRZYvjUUKBn+qHSaA9gkKGP4aqsnrsEZ1ZGLLWmiEw=;
	b=NL3PaJNeR+prAtq29hkpHudH1DDcvCQOD5O4homxXn9lmXs/QkwiCYViPfSMCGH/CX680t
	TJZURjtpLF4PqdWemtsWerQGg3V9r7V5V5/22lCnjQt+MtXIeBehISgJAh+SDdpFMSYBAI
	ZGIA3WZpEQIZ2PTT/zLdUr7NZ9mF/s0=
Subject: Re: [PATCH v4] gnttab: defer allocation of status frame tracking
 array
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <74048f89-fee7-06c2-ffd5-6e5a14bdf440@suse.com>
 <4450085e-be97-a1ba-dbd8-c72468406fd5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7d0dc687-d667-95ea-18a3-a6d7da9529d6@suse.com>
Date: Wed, 5 May 2021 13:12:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <4450085e-be97-a1ba-dbd8-c72468406fd5@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.05.2021 12:49, Andrew Cooper wrote:
> On 04/05/2021 09:42, Jan Beulich wrote:
>> This array can be large when many grant frames are permitted; avoid
>> allocating it when it's not going to be used anyway, by doing this only
>> in gnttab_populate_status_frames(). While the delaying of the respective
>> memory allocation adds possible reasons for failure of the respective
>> enclosing operations, there are other memory allocations there already,
>> so callers can't expect these operations to always succeed anyway.
>>
>> As to the re-ordering at the end of gnttab_unpopulate_status_frames(),
>> this is merely to represent intended order of actions (shrink array
>> bound, then free higher array entries). If there were racing accesses,
>> suitable barriers would need adding in addition.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Nack.
> 
> The argument you make says that the grant status frames are "large
> enough" to care about not allocating.  (I frankly disagree, but that
> isn't relevant to my nack).
> 
> The change in logic here moves a failure in DOMCTL_createdomain, to
> GNTTABOP_setversion.  We know, from the last minute screwups in XSA-226,
> that there are versions of Windows and Linux in the field, used by
> customers, which will BUG()/BSOD when GNTTABOP_setversion doesn't succeed.
> 
> You're literally adding even more complexity to the grant table, to also
> increase the chance of regressing VMs in the wild.  This is not ok.

To me, arguing like this is not okay. The code should have been written
like this in the first place. There's absolutely no reason to allocate
memory up front which is never going to be used.

> The only form of this patch which is in any way acceptable, is to avoid
> the allocation when you know *in DOMCTL_createdomain* that it will never
> be needed by the VM.  So far, that is from Kconfig and/or the command
> line options.

Well, may I remind you that this is how this patch had started? And
may I further remind you that it was Julien who asked me to convert to
this model? And may I then remind you that I already did suggest that
the two of you should come to an agreement? I'm not going to act as a
moderator in this process. But I insist that it is really bad practice
to drop the ball and leave patches (and the like) hanging in the air.

Just to be clear - in case I don't observe the two of you agreeing on
whichever outcome in the foreseeable future, I'm going to make attempts
to get another opinion from yet another REST maintainer. Since I can
live with both forms of the change (but would prefer the more
aggressive savings as done in this version), I can also live with
whatever 4th opinion would surface. But in case the result was not in
your favor, I'd then consider your Nack overruled.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 11:17:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 11:17:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123009.232063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFWZ-0007YP-3k; Wed, 05 May 2021 11:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123009.232063; Wed, 05 May 2021 11:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFWZ-0007YI-0S; Wed, 05 May 2021 11:17:03 +0000
Received: by outflank-mailman (input) for mailman id 123009;
 Wed, 05 May 2021 11:17:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sTpK=KA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leFWX-0007YB-LV
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 11:17:01 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24d0530d-f50a-4d6b-aece-2cebf12cb607;
 Wed, 05 May 2021 11:17:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24d0530d-f50a-4d6b-aece-2cebf12cb607
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620213420;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=IEv5J0a3Yb64RUsVoPmC0OSXkoGKJxR6yV+Y6hxAXDE=;
  b=haKk5AOflyseaZN2CubrTt4WZQmdJtr8/+N5Huq+9yVaQ49i2J6XlFmg
   rrJOgcDoTqPLORa9AzaoMQrPY30ouf1gcEy1NeR2SPAH23VMpy40RUZl+
   nV0Irxa4Hu2GN4QIz4AJZxTLitRgsaQd1wOGDG2AuG8DFPj8NDd5i5VYG
   Y=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: dB/AvBxhu0khi0C/B41B7S4F99/9K4G4CCb3rQT2Vnw9kuW6f/6ERrO/H0fxVsNTg9wFLKjF3U
 V/KR81VPmowsDIFtclloC+FMgZNu7+SK1zqfDvh2qu9pJ8a5SFrL+H9HlirriqbNvm8S90bQQq
 7sFkJvtSSScfvJmVO9aYQjhfdDGvStnTzlkSv8bh7dZ6qlQtf8JY586FGxqbS+PXL6ay4TZ0Rt
 JsWgHdprNVwFTTMJrNZftvg2aeig2yafbWTzrR8tIi3XroaYlu/to8IOr4GjQ0KPnMWzuSiGi7
 bVI=
X-SBRS: 5.1
X-MesageID: 43490432
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,274,1613451600"; 
   d="scan'208";a="43490432"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aYFHTpQUlcp69nTmQ/bYYFGoG4He246QuzuVgp2R5kGBlD5ky3bCpmsfFybo90Ct3DYiAamZ4RINDmOMc5JcJXt8CEwgc4Flxy6hBAw0ycwmhRPKRFtEhN1YkEbhVwl8c9CBrTauUWlo1iyQw3jxRLnfjn29EJuI8owKxVJwUUNCGLOeqAiayL/PAVsc5162i43VL4zma94itrRAzdj95KwqKE0dLeOk5yqfLFOyJmu5qrLXYzYVh4wVMizPKpSYo+fFrx+ahcX6agfykXQMGfLAhH8H/5/4UOXrWL1UNCBcfy/N61pBeYejCDDCQL6RgNUiz6Ua6rshhp9amY6LJA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WnlliABA1MOSnvjHoZCf/ytWk5CO5Q2ObzTDCNg3FPA=;
 b=MFfA5Vn9CcCtGZ9IPRDS3SVFDJTleQLEDqypFuNmo8ReuTLrvwg/y/K3jUozuDgsHyTXq0ETA8ggblNWM3xxUPsuH0lhhiVZDw21m228GKsKfx2IkUUr0ODOmP90HPljo0cvtu1Jp9nwYUC1LmvzyxoXilYuA2StVMz0wXwM7ZGtIAXIxuwxGjgv8P022hxQq4ttleGQPTTV6fFyJdo74ehjhCjJ1chHL3SYxT/KsEfjzpkrkORd2+H5Y90RvxsUl9COSXkd464SGsVYJ0i7amZ8B5bU/XWExt67NnGSQw0eg4+fcP4ViL/fis4S5ugWNGJkqnu41q8lAbt5/5Gauw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WnlliABA1MOSnvjHoZCf/ytWk5CO5Q2ObzTDCNg3FPA=;
 b=cLj9mZOf32vmI1OxqulM5uTnmOyxBCtwNNJAr5RjP0nayVYFpfE7/XzxP+JGLl7p7C7ZxLMqdlRFcNhowQHsCyDYXFGdIswBbToXb5JqeSZVa3JwNjzuZUjy/NAseEh9YMnHOIeBioM8pWW1bcKIKQpkkCAYtuiPIl1pHHue/MI=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>
Subject: [PATCH] tools/libs: move cpu policy related prototypes to xenguest.h
Date: Wed,  5 May 2021 13:15:08 +0200
Message-ID: <20210505111508.82956-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0119.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:33::35) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cfda7b39-9b26-471d-7cb7-08d90fb74ab2
X-MS-TrafficTypeDiagnostic: DM6PR03MB5273:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB52731205A0F9D6C2714F489D8F599@DM6PR03MB5273.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2958;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: cfda7b39-9b26-471d-7cb7-08d90fb74ab2
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2021 11:16:55.7682
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: RERXI4fgQnHLGBxpUc3W7xtCTCyb5g4kcjl0JRkVfgRxgL84RMeCvTiBbih7LluFIxTyz3BbpnjpKSLnmbxFTA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5273
X-OriginatorOrg: citrix.com

Do this before adding any more stuff to xg_cpuid_x86.c.

The placement in xenctrl.h is wrong, as these functions are implemented
by the xenguest library. Note that xg_cpuid_x86.c now needs to include
xg_private.h; in turn, also fix xg_private.h to include xc_bitops.h.

As a result, also modify xen-cpuid to include xenguest.h.
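
For illustration, a minimal consumer of the moved declarations might look
like the sketch below. This is only a sketch, not part of the patch: it
requires the Xen headers and a running Xen host to build and run, and the
use of policy index 0 as "the host policy" is an assumption, not something
this patch establishes.

```c
/* Sketch: fetch and free a system CPU policy via the API that now
 * lives in xenguest.h (previously declared in xenctrl.h). */
#include <stdio.h>
#include <xenctrl.h>
#include <xenguest.h>

int main(void)
{
    /* Open a libxc interface handle; fails when not run on a Xen host. */
    xc_interface *xch = xc_interface_open(NULL, NULL, 0);
    if ( !xch )
        return 1;

    xc_cpu_policy_t *policy = xc_cpu_policy_init();
    if ( policy &&
         /* Policy index 0 used here purely as an example value. */
         !xc_cpu_policy_get_system(xch, 0, policy) )
        printf("fetched a system CPU policy\n");

    xc_cpu_policy_destroy(policy);
    xc_interface_close(xch);
    return 0;
}
```

Note that after this patch such a caller must include xenguest.h in
addition to xenctrl.h, exactly as done for xen-cpuid.c below.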

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Note this is based on top of Andrew's xc_cpu_policy_t type change.
---
 tools/include/xenctrl.h         | 55 --------------------------------
 tools/include/xenguest.h        | 56 +++++++++++++++++++++++++++++++++
 tools/libs/guest/xg_cpuid_x86.c |  3 +-
 tools/libs/guest/xg_private.h   |  1 +
 tools/misc/xen-cpuid.c          |  1 +
 5 files changed, 59 insertions(+), 57 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 58d3377d6ab..e894c5c392d 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -2589,61 +2589,6 @@ int xc_psr_get_domain_data(xc_interface *xch, uint32_t domid,
                            uint64_t *data);
 int xc_psr_get_hw_info(xc_interface *xch, uint32_t socket,
                        xc_psr_feat_type type, xc_psr_hw_info *hw_info);
-
-typedef struct xc_cpu_policy xc_cpu_policy_t;
-
-/* Create and free a xc_cpu_policy object. */
-xc_cpu_policy_t *xc_cpu_policy_init(void);
-void xc_cpu_policy_destroy(xc_cpu_policy_t *policy);
-
-/* Retrieve a system policy, or get/set a domains policy. */
-int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
-                             xc_cpu_policy_t *policy);
-int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
-                             xc_cpu_policy_t *policy);
-int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
-                             xc_cpu_policy_t *policy);
-
-/* Manipulate a policy via architectural representations. */
-int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *policy,
-                            xen_cpuid_leaf_t *leaves, uint32_t *nr_leaves,
-                            xen_msr_entry_t *msrs, uint32_t *nr_msrs);
-int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
-                               const xen_cpuid_leaf_t *leaves,
-                               uint32_t nr);
-int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
-                              const xen_msr_entry_t *msrs, uint32_t nr);
-
-/* Compatibility calculations. */
-bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
-                                 xc_cpu_policy_t *guest);
-
-int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
-int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
-                          uint32_t *nr_features, uint32_t *featureset);
-
-int xc_cpu_policy_get_size(xc_interface *xch, uint32_t *nr_leaves,
-                           uint32_t *nr_msrs);
-int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
-                             uint32_t nr_leaves, xen_cpuid_leaf_t *leaves,
-                             uint32_t nr_msrs, xen_msr_entry_t *msrs,
-                             uint32_t *err_leaf_p, uint32_t *err_subleaf_p,
-                             uint32_t *err_msr_p);
-
-uint32_t xc_get_cpu_featureset_size(void);
-
-enum xc_static_cpu_featuremask {
-    XC_FEATUREMASK_KNOWN,
-    XC_FEATUREMASK_SPECIAL,
-    XC_FEATUREMASK_PV_MAX,
-    XC_FEATUREMASK_PV_DEF,
-    XC_FEATUREMASK_HVM_SHADOW_MAX,
-    XC_FEATUREMASK_HVM_SHADOW_DEF,
-    XC_FEATUREMASK_HVM_HAP_MAX,
-    XC_FEATUREMASK_HVM_HAP_DEF,
-};
-const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
-
 #endif
 
 int xc_livepatch_upload(xc_interface *xch,
diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 217022b6e76..03c813a0d78 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -719,4 +719,60 @@ xen_pfn_t *xc_map_m2p(xc_interface *xch,
                       unsigned long max_mfn,
                       int prot,
                       unsigned long *mfn0);
+
+#if defined(__i386__) || defined(__x86_64__)
+typedef struct xc_cpu_policy xc_cpu_policy_t;
+
+/* Create and free a xc_cpu_policy object. */
+xc_cpu_policy_t *xc_cpu_policy_init(void);
+void xc_cpu_policy_destroy(xc_cpu_policy_t *policy);
+
+/* Retrieve a system policy, or get/set a domains policy. */
+int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
+                             xc_cpu_policy_t *policy);
+int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
+                             xc_cpu_policy_t *policy);
+int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
+                             xc_cpu_policy_t *policy);
+
+/* Manipulate a policy via architectural representations. */
+int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *policy,
+                            xen_cpuid_leaf_t *leaves, uint32_t *nr_leaves,
+                            xen_msr_entry_t *msrs, uint32_t *nr_msrs);
+int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
+                               const xen_cpuid_leaf_t *leaves,
+                               uint32_t nr);
+int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
+                              const xen_msr_entry_t *msrs, uint32_t nr);
+
+/* Compatibility calculations. */
+bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
+                                 xc_cpu_policy_t *guest);
+
+int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
+int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
+                          uint32_t *nr_features, uint32_t *featureset);
+
+int xc_cpu_policy_get_size(xc_interface *xch, uint32_t *nr_leaves,
+                           uint32_t *nr_msrs);
+int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
+                             uint32_t nr_leaves, xen_cpuid_leaf_t *leaves,
+                             uint32_t nr_msrs, xen_msr_entry_t *msrs,
+                             uint32_t *err_leaf_p, uint32_t *err_subleaf_p,
+                             uint32_t *err_msr_p);
+
+uint32_t xc_get_cpu_featureset_size(void);
+
+enum xc_static_cpu_featuremask {
+    XC_FEATUREMASK_KNOWN,
+    XC_FEATUREMASK_SPECIAL,
+    XC_FEATUREMASK_PV_MAX,
+    XC_FEATUREMASK_PV_DEF,
+    XC_FEATUREMASK_HVM_SHADOW_MAX,
+    XC_FEATUREMASK_HVM_SHADOW_DEF,
+    XC_FEATUREMASK_HVM_HAP_MAX,
+    XC_FEATUREMASK_HVM_HAP_DEF,
+};
+const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
+#endif /* __i386__ || __x86_64__ */
 #endif /* XENGUEST_H */
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 1ebc108213d..144b5a5aee6 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -22,8 +22,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <limits.h>
-#include "xc_private.h"
-#include "xc_bitops.h"
+#include "xg_private.h"
 #include <xen/hvm/params.h>
 #include <xen-tools/libs.h>
 
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index 8f9b257a2f3..db93521c567 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -27,6 +27,7 @@
 #include <sys/stat.h>
 
 #include "xc_private.h"
+#include "xc_bitops.h"
 #include "xenguest.h"
 
 #include <xen/memory.h>
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index d4bc83d8c92..2b1a0492b30 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -8,6 +8,7 @@
 #include <inttypes.h>
 
 #include <xenctrl.h>
+#include <xenguest.h>
 
 #include <xen-tools/libs.h>
 
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed May 05 11:42:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 11:42:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123023.232075 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFvQ-0002Pr-Av; Wed, 05 May 2021 11:42:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123023.232075; Wed, 05 May 2021 11:42:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFvQ-0002Pk-71; Wed, 05 May 2021 11:42:44 +0000
Received: by outflank-mailman (input) for mailman id 123023;
 Wed, 05 May 2021 11:42:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leFvP-0002Pa-QG; Wed, 05 May 2021 11:42:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leFvP-0000M2-K9; Wed, 05 May 2021 11:42:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leFvP-0006NJ-7c; Wed, 05 May 2021 11:42:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leFvP-0003Ma-75; Wed, 05 May 2021 11:42:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/xcCz8OTCW7s0XUzsPhSSkg3JPtvU9rCVaDiL3wRRKo=; b=dptndKCVIG80aoJgLEDoNjqh+U
	TyNlg9EDJdV6deezu+F+TRi6isx8zTE8MIl+NohX0JAbxGzr9DLzzsngzh6yBVWO+kUsKXQFdjZnL
	VGYquNZ4h1JkHB2p5VKpe78apfmbNjn1QLkwkXAF5Fq89+0ZfaiLA8UwelpT47lh9Chk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161770-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.13-testing test] 161770: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ovmf-amd64:debian-hvm-install:fail:heisenbug
    xen-4.13-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.13-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=284132938900ce8c3b11babf7255f5c6dbb21716
X-Osstest-Versions-That:
    xen=33049e3ad9c3a0f7432969f4b5d7b1a287b5b8f9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 11:42:43 +0000

flight 161770 xen-4.13-testing real [real]
flight 161789 xen-4.13-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161770/
http://logs.test-lab.xenproject.org/osstest/logs/161789/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-ovmf-amd64 12 debian-hvm-install fail pass in 161789-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161323
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161323
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161323
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161323
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161323
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161323
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 161323
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161323
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161323
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161323
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161323
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161323
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  284132938900ce8c3b11babf7255f5c6dbb21716
baseline version:
 xen                  33049e3ad9c3a0f7432969f4b5d7b1a287b5b8f9

Last test of basis   161323  2021-04-20 10:36:16 Z   15 days
Testing same since   161770  2021-05-04 13:07:00 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   33049e3ad9..2841329389  284132938900ce8c3b11babf7255f5c6dbb21716 -> stable-4.13


From xen-devel-bounces@lists.xenproject.org Wed May 05 11:45:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 11:45:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123029.232094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFxx-00035X-R7; Wed, 05 May 2021 11:45:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123029.232094; Wed, 05 May 2021 11:45:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFxx-00035Q-Np; Wed, 05 May 2021 11:45:21 +0000
Received: by outflank-mailman (input) for mailman id 123029;
 Wed, 05 May 2021 11:45:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Xmi=KA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1leFxw-00035J-89
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 11:45:20 +0000
Received: from mail-wr1-f42.google.com (unknown [209.85.221.42])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1e3b9e7-d487-485d-9c57-e7b523225091;
 Wed, 05 May 2021 11:45:19 +0000 (UTC)
Received: by mail-wr1-f42.google.com with SMTP id a4so1500026wrr.2
 for <xen-devel@lists.xenproject.org>; Wed, 05 May 2021 04:45:19 -0700 (PDT)
Received: from
 liuwe-devbox-debian-v2.j3c5onc20sse1dnehy4noqpfcg.zx.internal.cloudapp.net
 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id i13sm19515910wrs.12.2021.05.05.04.45.18
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 May 2021 04:45:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1e3b9e7-d487-485d-9c57-e7b523225091
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=yc3qfvRBIlQciQW275VLB+NRL0ob1SeLZz9oTRIVCJA=;
        b=GaF//sjryWaAq21U9OLjOsP8xNDSQUOtgWOeVJjjIfmGtOfCC1zcOcFV+V0WQ9TDsJ
         AlxsXqyzQ6kdVZp8yY/tD2t9BBwmiSvch0yiWZ7KQHQIeTi6r7IvfIcdXV7nDw9pJneN
         xHshA9NiDsxHKZAgh3TpVenMEkUrSgNzYuhSlHvbSi4T25Jrbj1pBXP1pSf7mmdvSCxZ
         qayjKrK4jpFPIQ5a6GwwluRnbh9XPvJYH6lP/KG4PToAvOq/ibpTex0s6Cd84G56sXHs
         RIRx7NEIupkE3QBjx/qSjFpf204a2mj27PKjE10hHnAo5zgn46/eehHG5GgGoO0ZhjGx
         9bKg==
X-Gm-Message-State: AOAM531cw89GaiHmBkrhqq4W1QJmyAjeSLmyaSGHJwG9Tqk4U8rOsT63
	GO/5rayc+S2DM2kzABxknswEGn1hp0s=
X-Google-Smtp-Source: ABdhPJyk+TYvxrK3SpblcqFCwSOwuMhJpEZTef+/8yupHOI2LLdo2P2qW0cEv2+EnCOEvcnFCtCjYQ==
X-Received: by 2002:a5d:59a9:: with SMTP id p9mr36316187wrr.289.1620215118757;
        Wed, 05 May 2021 04:45:18 -0700 (PDT)
From: Wei Liu <wl@xen.org>
To: Xen Development List <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH] automation: save xen config before building
Date: Wed,  5 May 2021 11:45:16 +0000
Message-Id: <20210505114516.456201-1-wl@xen.org>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

It is reported that failed randconfig runs are missing the config file,
which makes debugging impossible. Fix this by moving the line that
copies the config file so that it runs before the build.

Signed-off-by: Wei Liu <wl@xen.org>
---
 automation/scripts/build | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/automation/scripts/build b/automation/scripts/build
index eaf70b11d1cb..46b6903d2922 100755
--- a/automation/scripts/build
+++ b/automation/scripts/build
@@ -16,6 +16,10 @@ else
     make -j$(nproc) -C xen defconfig
 fi
 
+# Save the config file before building because build failure causes the script
+# to exit early -- bash is invoked with -e.
+cp xen/.config xen-config
+
 # arm32 only cross-compiles the hypervisor
 if [[ "${XEN_TARGET_ARCH}" = "arm32" ]]; then
     hypervisor_only="y"
@@ -59,7 +63,6 @@ else
 fi
 
 # Extract artifacts to avoid getting rewritten by customised builds
-cp xen/.config xen-config
 mkdir binaries
 if [[ "${XEN_TARGET_ARCH}" != "x86_32" ]]; then
     cp xen/xen binaries/xen
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Wed May 05 11:47:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 11:47:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123034.232108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFzZ-0003iB-7a; Wed, 05 May 2021 11:47:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123034.232108; Wed, 05 May 2021 11:47:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leFzZ-0003i4-4Z; Wed, 05 May 2021 11:47:01 +0000
Received: by outflank-mailman (input) for mailman id 123034;
 Wed, 05 May 2021 11:46:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Xmi=KA=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1leFzX-0003hy-Db
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 11:46:59 +0000
Received: from mail-wm1-f49.google.com (unknown [209.85.128.49])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4fa12e28-e515-4361-91f8-a1cc8d2861e8;
 Wed, 05 May 2021 11:46:58 +0000 (UTC)
Received: by mail-wm1-f49.google.com with SMTP id
 l18-20020a1ced120000b029014c1adff1edso3236712wmh.4
 for <xen-devel@lists.xenproject.org>; Wed, 05 May 2021 04:46:58 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id a15sm18874106wrr.53.2021.05.05.04.46.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 05 May 2021 04:46:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fa12e28-e515-4361-91f8-a1cc8d2861e8
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=PBdsPUy4DRPAzZuVsLYDfVk9sY5Ky4uTdfEw+XMXhSI=;
        b=Fw+j79j6XeLkAHNw+dx8WUtyrxa56hzqE29lK9gLqt75qgcePabqNIj7sEIi3+7gZ0
         MOoT1GsSjvyPzLlHTo825JAAXCzG1/nd86asep+Ry3IbuD2ObIVn5T9arWHqQWuBT1hz
         tK0tgRMsyLAxmKruIccNp7i+0obuMQJpD5DXsCbXWSS1k4o6O3CqiUByMakiSBEXOLK1
         qf3hApv55rb2lHxNUyfrbeGcI5Lm2rCTwqpA2wlIH622W35cP3Ez3+lu2tfF4bPY8NFT
         EVl/UzP/GJ4GM/P9Mn82sGtgp5qMZYkgH3dtjlLbp98jpVcWkVa0QVp2ocrzdsA16L+y
         hbcg==
X-Gm-Message-State: AOAM5338zINrAmeoaBfzT7FCdYJQReaImn8OsssqRXC8gLmbvmZ+GW3j
	JC3g+p0MmgEZOoXE4+UMUz8=
X-Google-Smtp-Source: ABdhPJyU8OIY52MWZyxOw6ncxPZ0GYVaG8zTkh7cJh2GJqP/+sFlTELVApgmfey2wOdrH5aJLQfYSw==
X-Received: by 2002:a1c:b342:: with SMTP id c63mr9499069wmf.179.1620215217661;
        Wed, 05 May 2021 04:46:57 -0700 (PDT)
Date: Wed, 5 May 2021 11:46:55 +0000
From: Wei Liu <wl@xen.org>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH v2] xl: constify cmd_table entries
Message-ID: <20210505114655.ejblmjv3u4aq65ia@liuwe-devbox-debian-v2>
References: <20210504161436.613782-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210504161436.613782-1-anthony.perard@citrix.com>

On Tue, May 04, 2021 at 05:14:36PM +0100, Anthony PERARD wrote:
> Also constify cmdtable_len and make use of ARRAY_SIZE, which is
> available in "xen-tools/libs.h".
> 
> The entries in cmd_table don't need to be modified once xl is running.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Wed May 05 11:49:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 11:49:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123043.232120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leG2H-0004Nq-MD; Wed, 05 May 2021 11:49:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123043.232120; Wed, 05 May 2021 11:49:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leG2H-0004Nj-JN; Wed, 05 May 2021 11:49:49 +0000
Received: by outflank-mailman (input) for mailman id 123043;
 Wed, 05 May 2021 11:49:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=rAkL=KA=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1leG2G-0004NU-6h
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 11:49:48 +0000
Received: from MTA-06-3.privateemail.com (unknown [198.54.127.59])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 264fca86-719b-4a67-b3b5-5a57718f1982;
 Wed, 05 May 2021 11:49:47 +0000 (UTC)
Received: from MTA-06.privateemail.com (localhost [127.0.0.1])
 by MTA-06.privateemail.com (Postfix) with ESMTP id 97FF260172
 for <xen-devel@lists.xenproject.org>; Wed,  5 May 2021 07:49:46 -0400 (EDT)
Received: from mail-wm1-f44.google.com (unknown [10.20.151.226])
 by MTA-06.privateemail.com (Postfix) with ESMTPA id 5F5596014C
 for <xen-devel@lists.xenproject.org>; Wed,  5 May 2021 07:49:46 -0400 (EDT)
Received: by mail-wm1-f44.google.com with SMTP id
 82-20020a1c01550000b0290142562ff7c9so950829wmb.3
 for <xen-devel@lists.xenproject.org>; Wed, 05 May 2021 04:49:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 264fca86-719b-4a67-b3b5-5a57718f1982
X-Gm-Message-State: AOAM5314EE1e3qXLTsmjfJx6qmvvgy84hmH8pg3AiUv1pj83V+EVHP+/
	1UByUwsMfQbA9VNEXk6bt5V2uSSes8huz0crHEQ=
X-Google-Smtp-Source: ABdhPJx/OoMFcquB0xNcq8c6oE0Ypy0RqvVuc1wxqa2rbuffTuNMZX0NE0op8aI5NzZwekZIIxIeqztKrSWcXMd/+Hc=
X-Received: by 2002:a05:600c:4fc4:: with SMTP id o4mr33030729wmq.51.1620215384957;
 Wed, 05 May 2021 04:49:44 -0700 (PDT)
MIME-Version: 1.0
References: <20210505074308.11016-1-michal.orzel@arm.com> <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
In-Reply-To: <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
From: Tamas K Lengyel <tamas@tklengyel.com>
Date: Wed, 5 May 2021 07:49:05 -0400
X-Gmail-Original-Message-ID: <CABfawhneqLE4uFkQW6rDR3Yc04SsohpUcAzqz9XkY-x5KfZ3vw@mail.gmail.com>
Message-ID: <CABfawhneqLE4uFkQW6rDR3Yc04SsohpUcAzqz9XkY-x5KfZ3vw@mail.gmail.com>
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to uint64_t
To: Jan Beulich <jbeulich@suse.com>
Cc: Michal Orzel <michal.orzel@arm.com>, Stefano Stabellini <sstabellini@kernel.org>, 
	Julien Grall <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, 
	Alexandru Isaila <aisaila@bitdefender.com>, Petre Pircalabu <ppircalabu@bitdefender.com>, 
	bertrand.marquis@arm.com, wei.chen@arm.com, 
	Xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"
X-Virus-Scanned: ClamAV using ClamSMTP

On Wed, May 5, 2021 at 4:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 05.05.2021 09:43, Michal Orzel wrote:
> > --- a/xen/include/public/arch-arm.h
> > +++ b/xen/include/public/arch-arm.h
> > @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
> >
> >      /* Return address and mode */
> >      __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
> > -    uint32_t cpsr;                              /* SPSR_EL2 */
> > +    uint64_t cpsr;                              /* SPSR_EL2 */
> >
> >      union {
> > -        uint32_t spsr_el1;       /* AArch64 */
> > +        uint64_t spsr_el1;       /* AArch64 */
> >          uint32_t spsr_svc;       /* AArch32 */
> >      };
>
> This change affects, besides domctl, also default_initialise_vcpu(),
> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
> only allows two unrelated VCPUOP_* to pass, but then I don't
> understand why arch_initialise_vcpu() doesn't simply return e.g.
> -EOPNOTSUPP. Hence I suspect I'm missing something.
>
> > --- a/xen/include/public/domctl.h
> > +++ b/xen/include/public/domctl.h
> > @@ -38,7 +38,7 @@
> >  #include "hvm/save.h"
> >  #include "memory.h"
> >
> > -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000013
> > +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
>
> So this is to cover for the struct vcpu_guest_core_regs change.
>
> > --- a/xen/include/public/vm_event.h
> > +++ b/xen/include/public/vm_event.h
> > @@ -266,8 +266,7 @@ struct vm_event_regs_arm {
> >      uint64_t ttbr1;
> >      uint64_t ttbcr;
> >      uint64_t pc;
> > -    uint32_t cpsr;
> > -    uint32_t _pad;
> > +    uint64_t cpsr;
> >  };
>
> Then I wonder why this isn't accompanied by a similar bump of
> VM_EVENT_INTERFACE_VERSION. I don't see you drop any checking /
> filling of the _pad field, so existing callers may pass garbage
> there, and new callers need to be prevented from looking at the
> upper half when running on an older hypervisor.

No, there is no need to bump the vm event interface version here. They
are folding the _pad into the cpsr field, and the structure is always
zero-initialized, so there is never "garbage" in the _pad field. As such
there is no change to the structure layout, nor any impact on anyone
using it with a header compiled against an older version. I asked them
not to bump the version for this change.

Thanks,
Tamas


From xen-devel-bounces@lists.xenproject.org Wed May 05 11:50:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 11:50:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123047.232133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leG2c-0005as-3i; Wed, 05 May 2021 11:50:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123047.232133; Wed, 05 May 2021 11:50:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leG2c-0005al-0S; Wed, 05 May 2021 11:50:10 +0000
Received: by outflank-mailman (input) for mailman id 123047;
 Wed, 05 May 2021 11:50:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leG2b-0005aS-8X
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 11:50:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leG2a-0000Uc-7h; Wed, 05 May 2021 11:50:08 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leG2a-0005AN-15; Wed, 05 May 2021 11:50:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=leW/vPVQUAgcKVCqD2t9+AhymmGmdPEmE7kqUXRoE9Q=; b=aSrtavGjwcKcYYmPy2+5E5XvBQ
	wxMpw4Q9gDPcMT1+kWyctk/OLz/HX11LV0n2fzwlS3qjQXVGGHEvtAq71yps1e0dMQ3KinpEdEi6J
	jh+fL0MKqzUhShfG5ARuiODgRq4Yrd+I8lUDMpkivqf3SOcKuDUZGKnQCVqdE2XZ2Ln0=;
Subject: Re: [PATCH] public/gnttab: relax v2 recommendation
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Luca Fancellu <luca.fancellu@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <dcd9ede1-9471-6866-4ba7-b6a7664b5e35@suse.com>
 <8eac6f09-4d1d-6fcc-4218-8c9a0760a6bb@xen.org>
 <71e61d09-5d92-94dc-ae0c-ce09cb49b4ce@suse.com>
 <c468856b-8ac6-2ab1-0f5f-eabc26d47293@xen.org>
 <51c29a91-8659-7525-a565-5b9fcfc935f3@suse.com>
 <9b8fb87c-a2fb-f371-5914-f0d175c18b02@xen.org>
 <7d55a2c1-f630-3e43-fb6a-39f28f716bca@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <00fa7de9-9f0c-1453-af7e-156d99bbd1f3@xen.org>
Date: Wed, 5 May 2021 12:50:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <7d55a2c1-f630-3e43-fb6a-39f28f716bca@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 05/05/2021 11:57, Jan Beulich wrote:
> On 05.05.2021 10:51, Julien Grall wrote:
>> On 05/05/2021 09:24, Jan Beulich wrote:
>>> On 05.05.2021 10:12, Julien Grall wrote:
>>>> Hi Jan,
>>>>
>>>> On 30/04/2021 09:36, Jan Beulich wrote:
>>>>> On 30.04.2021 10:19, Julien Grall wrote:
>>>>>> On 29/04/2021 14:10, Jan Beulich wrote:
>>>>>>> With there being a way to disable v2 support, telling new guests to use
>>>>>>> v2 exclusively is not a good suggestion.
>>>>>>>
>>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>>
>>>>>>> --- a/xen/include/public/grant_table.h
>>>>>>> +++ b/xen/include/public/grant_table.h
>>>>>>> @@ -121,8 +121,10 @@ typedef uint32_t grant_ref_t;
>>>>>>>       */
>>>>>>>      
>>>>>>>      /*
>>>>>>> - * Version 1 of the grant table entry structure is maintained purely
>>>>>>> - * for backwards compatibility.  New guests should use version 2.
>>>>>>> + * Version 1 of the grant table entry structure is maintained largely for
>>>>>>> + * backwards compatibility.  New guests are recommended to support using
>>>>>>> + * version 2 to overcome version 1 limitations, but to be able to fall back
>>>>>>> + * to version 1.
>>>>>>
>>>>>> v2 is not supported on Arm and I don't see it coming anytime soon.
> >>>>>> AFAIK, Linux will also not use grant table v2 unless the guest has an
> >>>>>> address space larger than 44 (?) bits.
>>>>>
>>>>> Yes, as soon as there are frame numbers not representable in 32 bits.
>>>>>
>>>>>> I can't remember why Linux decided to not use it everywhere, but this is
>>>>>> a sign that v2 is not always desired.
>>>>>>
> >>>>>> So I think it would be better to recommend new guests to use v1 unless
> >>>>>> they hit the limitations (to be detailed).
>>>>>
>>>>> IOW you'd prefer s/be able to fall back/default/? I'd be fine that way
>>>>
>>>> Yes.
>>>
>>> Okay, I've changed that part, but ...
>>>
>>>> We would also need to document the limitations as they don't seem
>>>> to be (clearly?) written down in the headers.
>>>
>>> ... I'm struggling to see where (and perhaps even why) this would be
>>> needed. The v1 and v2 grant table entry formats are all there. I'm
>>> inclined to consider this an orthogonal addition to make by whoever
>>> thinks such an addition is needed in the first place.
>>
>> The current comment does not mention the limitations but instead says
>> "new OS should use v2". Your proposal is to say "default to v1 but use
>> v2 if you hit limitations".
> 
> I've intentionally not said "if you hit limitations".

This doesn't really change the point I made. :)

>> This is not a very friendly way to work on Xen. FAOD, I am not saying
>> that the other headers are perfect... Instead, I am saying we ought to
>> improve new wording to make the project a bit more welcoming.
> 
> I don't think the public header is the place to go into such lengths,
> especially when all the information is already there. Textually
> describing the same aspects should be done elsewhere imo.

The goal of comments is to document anything that cannot be easily 
inferred. This is the case for the limitations you mention but don't 
describe.

> I'm of the firm opinion that the patch as is represents an improvement.

I haven't suggested that the patch wasn't an improvement. However, I think 
it can easily be improved further.

> There's
> no suggestion anywhere that things couldn't be further improved, as
> is almost always the case.
> 
> Since I created this patch only because my request to correct the
> statement led to me being asked to provide the suggested new text,
> may I suggest that you pick up this patch or create one from scratch
> to accommodate all your wishes, if you believe this extra information
> really belongs there? All I'm after is to correct a statement that's
> actively misleading.

I am a bit confused by this answer. Are you saying you are not willing 
to write it, but if someone else does you will accept it?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 12:13:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 12:13:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123074.232147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leGOu-0008UO-HT; Wed, 05 May 2021 12:13:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123074.232147; Wed, 05 May 2021 12:13:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leGOu-0008UH-EW; Wed, 05 May 2021 12:13:12 +0000
Received: by outflank-mailman (input) for mailman id 123074;
 Wed, 05 May 2021 12:13:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leGOs-0008UB-Eg
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 12:13:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leGOr-0000to-0A; Wed, 05 May 2021 12:13:09 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leGOq-0006sx-PX; Wed, 05 May 2021 12:13:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=tqhIAv4pX6PBB5WS4GGoRGak1M8/u1BgecNF+D+R/Kg=; b=c3ymz6Z/UD6sR3brUXBjmGWpbI
	/fxQaSO0NN+l/Lgsp0APPPRg1c0fVlkHbS3qg8UlOcFzX6cgunZYE9UvMM8edN09ETRb910HmCQrs
	aBFHS7izP4cHC1jE6K7vIXLwKeOcHNI4jqo0mAONgNluhq1W6wxZJw4T/MOkNwGuAuT0=;
Subject: Re: [PATCH v4] gnttab: defer allocation of status frame tracking
 array
To: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
 <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <74048f89-fee7-06c2-ffd5-6e5a14bdf440@suse.com>
 <4450085e-be97-a1ba-dbd8-c72468406fd5@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <af447491-c0bd-6fce-3c21-df4af95a1273@xen.org>
Date: Wed, 5 May 2021 13:13:06 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <4450085e-be97-a1ba-dbd8-c72468406fd5@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Andrew,

On 05/05/2021 11:49, Andrew Cooper wrote:
> On 04/05/2021 09:42, Jan Beulich wrote:
>> This array can be large when many grant frames are permitted; avoid
>> allocating it when it's not going to be used anyway, by doing this only
>> in gnttab_populate_status_frames(). While the delaying of the respective
>> memory allocation adds possible reasons for failure of the respective
>> enclosing operations, there are other memory allocations there already,
>> so callers can't expect these operations to always succeed anyway.
>>
>> As to the re-ordering at the end of gnttab_unpopulate_status_frames(),
>> this is merely to represent intended order of actions (shrink array
>> bound, then free higher array entries). If there were racing accesses,
>> suitable barriers would need adding in addition.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Nack.
> 
> The argument you make says that the grant status frame array is "large
> enough" to care about not allocating.  (I frankly disagree, but that
> isn't relevant to my nack).
> 
> The change in logic here moves a failure in DOMCTL_createdomain, to
> GNTTABOP_setversion.  We know, from the last minute screwups in XSA-226,
> that there are versions of Windows and Linux in the field, used by
> customers, which will BUG()/BSOD when GNTTABOP_setversion doesn't succeed.

Unfortunately, my reply to this on v2 was left unanswered by you. So I 
will restate it here with more details in the hope that it leads to a 
consensus now.

 From a discussion during January's community call, the behavior was 
related to an old version of the Windows PV drivers. Newer versions 
will not use Grant Table v2.

However, your comment seems to suggest that GNTTABOP_setversion can 
never fail today. AFAICT, this is not true, because Xen will already 
return -ENOMEM when it fails to populate the status frames (see the 
call to gnttab_populate_status_frames()).

Therefore...

> 
> You're literally adding even more complexity to the grant table, to also
> increase the chance of regressing VMs in the wild.  This is not ok.

... I am not sure why you are saying this is a regression.

> The only form of this patch which is in any way acceptable, is to avoid
> the allocation when you know *in DOMCTL_createdomain* that it will never
> be needed by the VM.  So far, that is from Kconfig and/or the command
> line options.

I can see how allocating memory upfront is easier for accounting 
purposes. However, it also means we may not be able to reduce the 
footprint when we don't know which version the guest OS will use.

This can happen, for instance, if you let your customers run arbitrary 
OSes and never restrict them to v1.

I believe that in most cases v1 will be used by the guest. But we 
technically can't restrict it after the fact (e.g. on reboot) as this 
may regress customers' workloads.

That's where the current approach is quite appealing: because the 
allocation is done at runtime, it caters to many more use cases 
than your suggestion.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 12:16:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 12:16:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123077.232160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leGRq-0000fw-1j; Wed, 05 May 2021 12:16:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123077.232160; Wed, 05 May 2021 12:16:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leGRp-0000fp-TT; Wed, 05 May 2021 12:16:13 +0000
Received: by outflank-mailman (input) for mailman id 123077;
 Wed, 05 May 2021 12:16:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Yav=KA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1leGRp-0000fj-19
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 12:16:13 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 203085a4-c52c-425d-936b-22bff8f1bd81;
 Wed, 05 May 2021 12:16:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 203085a4-c52c-425d-936b-22bff8f1bd81
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620216971;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=NH0iUGY5MfJN9dfWHScNXgaUZWJIg8vIlVGF9rvJaP0=;
  b=VWdqjqNfkai0WoMSYeHaO4hznLzHjTcQ/IKQRBFGocrA+61tO8vYYIwV
   g+zA7hcETFJYNpopE5TqO4GXBqcXAYxsvMRSKXBXop+uSUOMKWsS4oJsu
   EPODxsAqokL6L0OjdwLmGTgDm09/hjiYXFnxWWDbYaZdZTlLmSSh+pHMs
   E=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43119902
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,274,1613451600"; 
   d="scan'208";a="43119902"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OH7QpByD9YgOwGV0X4BIYRx8ZIw0DdUYGbLtYsK+5XEy/1wAgUS3cfqr2CIBJX2W72HPfhg9tdFB8FrvnOEOn6HvnMNnvdx72lYrwrTraE2xuWmO16vorC3llBFPQhdgpPqlUgUvvDQjmiASKY6edgCvXewOGbo6h/bBA3vArAxKQJq8l3DCd2HfufDP8IiumlGnPq3l9Pc6jgg+0sgv0c5hs8GI9s/LhwuZO/L4teG/IseULVJfmOKZcGyPR7aaqWBYfXOZqV/ImIwiJ21prM0eOpZGF39tFO8NVZsjPg50UmDKWu7qRt3Z5S+84MAsg5kmrQcX7JsxkCQXhfI4Mw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H1Drzt6jytOK6IrykjjYlFRA7GKm8M8Kqpr9RDHzNV8=;
 b=nVT4UCFnbijLvIRSdeVJd8kzVRHA131aE3151hlvLqR/BZp5yBAjf2NE30KNYd7tnPwU3DfmIHrzueKE0HXruXcS2RpQ/wzk880eHh1eWE4x1vq4z9kDUf9LRVSJZRnKbRS4geEj8ldzdQ7i/6EU67tWmjGbkcAcNVsnholycpJDnRoAAKqPmj2HlZmKKpSudEmuGD5/5ETUnFkyd5kg//TztbBOSREu59FTS7g1f74+lT07eAUU5W5itiyoSk0NZqbHQvOQ863rgx54rR5AQIIYfLUSn+UDJ6Eb2sQY4Z0gCv7fYmCIT4HHDHymyQDpfRlctEKQ/eD1djM3D8XW2g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=H1Drzt6jytOK6IrykjjYlFRA7GKm8M8Kqpr9RDHzNV8=;
 b=MUgqs9k0ud/8efdCmzyC3m/hWRifUKbkkktCKUe9PKgsOEKtrSofLzlEWjPTfYXtymscN+oOuiIDeV/o6Zs6Et1fe6DI4bK4Sz0kQfYgQjtA03qJbwV6ZscOYF+7VqaEL5FSyS9G1UrtBUAIKQ4GxFqMQ+7kJSsYbnu0f4v6dmk=
To: Jan Beulich <jbeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <28384167-fbd0-d3ff-c064-ee88f5891580@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
Message-ID: <4def95ca-7488-09bf-19fa-d1fa0fa55427@citrix.com>
Date: Wed, 5 May 2021 13:15:59 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <28384167-fbd0-d3ff-c064-ee88f5891580@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0209.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1b73b332-7cc3-441e-d1aa-08d90fbf8e99
X-MS-TrafficTypeDiagnostic: BYAPR03MB3622:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB36220F76E290AB5DE4266142BA599@BYAPR03MB3622.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 1b73b332-7cc3-441e-d1aa-08d90fbf8e99
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2021 12:16:05.6850
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qeXPKt1T9VDJmCX+UquT5Pq8aWOnMaIRZoj5Se1Ow76WeJYUUZ6noWLt+zBtlPPu5/wniddpPWYuQH+wHFbc31wu3GoGjfUjg+FAWGkM6Wo=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3622
X-OriginatorOrg: citrix.com

On 05/05/2021 07:39, Jan Beulich wrote:
> On 04.05.2021 23:31, Andrew Cooper wrote:
>> --- a/tools/tests/cpu-policy/test-cpu-policy.c
>> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
>> @@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
>>      }
>>  }
>>
>> +static void test_calculate_compatible_success(void)
>> +{
>> +    static struct test {
>> +        const char *name;
>> +        struct {
>> +            struct cpuid_policy p;
>> +            struct msr_policy m;
>> +        } a, b, out;
>> +    } tests[] = {
>> +        {
>> +            "arch_caps, b short max_leaf",
>> +            .a = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +            .b = {
>> +                .p.basic.max_leaf = 6,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
> Is this legitimate input in the first place?

I've been debating this for a long time, and have decided "yes".

We have the xend format, and libxl format, and

cpuid=["host:max_leaf=6"]

ought not to be rejected simply because it hasn't also gone and zeroed
every higher piece of information as well.

In production, this function will be used once per host in the pool, and
once more if any custom configuration is specified.

Requiring both inputs to be of the normal form isn't necessary for this
to function correctly (and indeed, would only add extra overhead), but
the eventual result will be cleaned/shrunk/etc as appropriate.

>> --- a/xen/lib/x86/policy.c
>> +++ b/xen/lib/x86/policy.c
>> @@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>      if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
>>          FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
>>
>> +    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
>> +        FAIL_MSR(MSR_ARCH_CAPABILITIES);
> Doesn't this need special treatment of RSBA, just like it needs specially
> treating below?

No.  If RSBA is clear in 'host', then Xen doesn't know about it, and it
cannot be set in the policy, and cannot be offered to guests.

At the moment, our ARCH_CAPS in the policy is special for dom0 alone to
see, which is why RSBA isn't unilaterally set, but it will just as soon
as the toolstack logic for handling MSRs properly is in place.

>> @@ -43,6 +46,50 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>      return ret;
>>  }
>>
>> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
>> +                                        const struct cpu_policy *b,
>> +                                        struct cpu_policy *out,
>> +                                        struct cpu_policy_errors *err)
>> +{
>> +    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
>> +    const struct msr_policy *am = a->msr, *bm = b->msr;
>> +    struct cpuid_policy *cp = out->cpuid;
>> +    struct msr_policy *mp = out->msr;
> Hmm, okay - this would not work with my proposal in reply to your
> other patch. The output would instead need to have pointers
> allocated here then.
>
>> +    memset(cp, 0, sizeof(*cp));
>> +    memset(mp, 0, sizeof(*mp));
>> +
>> +    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
> Any reason you don't do the same right away for the max extended
> leaf?

I did the minimum for RSBA testing.  The line needs to be drawn
somewhere.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 05 12:18:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 12:18:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123081.232172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leGU3-0001Jt-DV; Wed, 05 May 2021 12:18:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123081.232172; Wed, 05 May 2021 12:18:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leGU3-0001Jm-Ab; Wed, 05 May 2021 12:18:31 +0000
Received: by outflank-mailman (input) for mailman id 123081;
 Wed, 05 May 2021 12:18:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leGU1-0001Je-Rg
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 12:18:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 83c4dbef-bd9d-4897-8080-8834efe6ecac;
 Wed, 05 May 2021 12:18:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0532DACC4;
 Wed,  5 May 2021 12:18:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83c4dbef-bd9d-4897-8080-8834efe6ecac
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620217108; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PcEUYtfw1J9spaESNT8h/QUK42wl5s/bGU2tboUCcm8=;
	b=XoXpUhJ4OAdlWnT/CtcDWRmhDXdfmOqjOXKxQz0LkSi3f0eAGM5VXgnxm08lzHbuY/zcfQ
	qX6GmX3Iveu8pPn3dPNqBmMZbh3RuX0jAifA3X4G3NJvmN+D2VvLXwgqIs3jz70PhQ3d8x
	WOBoQLtxvhUqUZmWH0iDL2Wmrnu3SNE=
Subject: Re: [PATCH] public/gnttab: relax v2 recommendation
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Luca Fancellu <luca.fancellu@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <dcd9ede1-9471-6866-4ba7-b6a7664b5e35@suse.com>
 <8eac6f09-4d1d-6fcc-4218-8c9a0760a6bb@xen.org>
 <71e61d09-5d92-94dc-ae0c-ce09cb49b4ce@suse.com>
 <c468856b-8ac6-2ab1-0f5f-eabc26d47293@xen.org>
 <51c29a91-8659-7525-a565-5b9fcfc935f3@suse.com>
 <9b8fb87c-a2fb-f371-5914-f0d175c18b02@xen.org>
 <7d55a2c1-f630-3e43-fb6a-39f28f716bca@suse.com>
 <00fa7de9-9f0c-1453-af7e-156d99bbd1f3@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a56820e6-d30a-4647-1949-1198fa4af1d3@suse.com>
Date: Wed, 5 May 2021 14:18:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <00fa7de9-9f0c-1453-af7e-156d99bbd1f3@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.05.2021 13:50, Julien Grall wrote:
> On 05/05/2021 11:57, Jan Beulich wrote:
>> Since I created this patch only because my request to correct the
>> statement led to me being asked to provide the suggested new text,
>> may I suggest that you pick up this patch or create one from scratch
>> to accommodate all your wishes, if you believe this extra information
>> really belongs there? All I'm after is to correct a statement that's
>> actively misleading.
> 
> I am a bit confused with this answer. Are you saying you are not willing 
> to write it but if someone else does you will accept it?

Along these lines, yes. One problem is that if I start enumerating the
limitations, arguing could easily start over whether I cover them all,
whether some are worth mentioning at all, etc. Plus, as said, I'm not
convinced this is the right place for the information. In such a
case I can accept someone else thinking differently and making
such a change, but I'd like to be convinced of changes I'm doing
myself.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 12:23:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 12:23:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123089.232184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leGZB-0002k5-5a; Wed, 05 May 2021 12:23:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123089.232184; Wed, 05 May 2021 12:23:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leGZB-0002jy-0s; Wed, 05 May 2021 12:23:49 +0000
Received: by outflank-mailman (input) for mailman id 123089;
 Wed, 05 May 2021 12:23:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sTpK=KA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leGZ9-0002js-L1
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 12:23:47 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3e8cd0b9-2124-4255-8322-661ef7aa84e9;
 Wed, 05 May 2021 12:23:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e8cd0b9-2124-4255-8322-661ef7aa84e9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620217426;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=IStPD8hZzviPScV4zEwwK7+8HGHSP8ydKF4ea/UA2fQ=;
  b=gNacauTZVRwp+PfpXRC+3gLf9N2rjtDgoft6PduTc1CtKbzPUgx6NxdH
   BjjIZEHmlEdKAzMr1SYplsDr7eV+ITt8b4N5nQk8IUjJtOvNOrMPM/sNM
   czieUEnC0jIOR4fL2q8iVAjKKdasyXdHX/9LffUVOpQdIA+f8RmSr3i/l
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43120754
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,274,1613451600"; 
   d="scan'208";a="43120754"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=C7hr7D6yfh1+jfbjVrsllNfN7sJTcrQVpy7XDhlE6MLVjMjYt2/TtCCH1SjMUXv+iTftjtA8Hm8YcGQzGFt++RRqmoKCXiHFKal+7yyVeHivBaWLDcnYBJKFo+hC80KD0JaoCKtiXMtMAW8Pyh58J6jogQ7dS7M72XaR/8u8ivJKvIacLbVQKabn2BpCpvMb22d1NkVbh88ykYYD0OepqlvhPa/7hDLcyH6J8IieZlT1LPFvcN2vXyRxJSHdmuIhSH+PXyKJ4J1FDArRsFXatGrjmFYU27NDEeJ84HqI6+uzI1wvd94K3v0wzlpB+ZYUMIMGNESb9LeF6XPqS91/0A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AQXbRTOV/O+7P3DEn863dmhXsUirKZNZN4i5Jtwqpmo=;
 b=dOs8gf0wV63vysabjMk+cQmuw5ppvSzdqX9KagSSE0N/EBOgim3qQiP0oljoBNS2QvhCvIWUwiJb8nM9SjIb2PT1j5w64g8756/DLIFlv4G67PXeabEeYIrs3LTJdY15NqCRSa6kcdG6RjrG1HaoP3PO71pFGYlJ+PPket9E+McT408MMqZXQZcUqQ+TPOq4WaoIABb9ZPoa8Yzc6Uj9cZIS1IDbyB0t83ViyQzgbl2yFB0LxCesF/3LG/VVW/AVRfHEc0ES9daEdkI2K5MpqOMuevoZR/V/x/HEgDSZTgtRwFgPecRMO2zrs5J7idDWc713ameRuZdcFkYqCpTc3g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AQXbRTOV/O+7P3DEn863dmhXsUirKZNZN4i5Jtwqpmo=;
 b=h6yC7+lZqGJVc8VKXcI0gBsJ67zdpDrEOyKC2xnCUf9omhQa/LF2z9t5ibXQBB2e24kpiVlKef0I8Chfcvh+akBxMOs6b2HU4igO2jXPYheW2jboey00oWXMOU54W3mjOWPcezB4QKUqzgXtspTG9BjWubQv/wcdSqHUhdT+VwI=
Date: Wed, 5 May 2021 14:23:37 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Wei Liu <wl@xen.org>
CC: Xen Development List <xen-devel@lists.xenproject.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH] automation: save xen config before building
Message-ID: <YJKOSW62716AdMoM@Air-de-Roger>
References: <20210505114516.456201-1-wl@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210505114516.456201-1-wl@xen.org>
X-ClientProxiedBy: PR3P189CA0088.EURP189.PROD.OUTLOOK.COM
 (2603:10a6:102:b4::33) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 00f1a331-f340-4374-e542-08d90fc09f79
X-MS-TrafficTypeDiagnostic: DM4PR03MB6062:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM4PR03MB6062EEA0CA25285E2971FFD88F599@DM4PR03MB6062.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:813;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 00f1a331-f340-4374-e542-08d90fc09f79
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2021 12:23:43.4533
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 9AE6FL6bMKwJA2IyXjEg/O693Ngnlo/vLvLkvg859h5xwYDGrv3IjDaSQiBS9Y3T7zt/4S3thGSIwMpSpPXm+Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB6062
X-OriginatorOrg: citrix.com

On Wed, May 05, 2021 at 11:45:16AM +0000, Wei Liu wrote:
> It is reported that failed randconfig runs are missing the config file
> which makes debugging impossible. Fix this by moving the line that
> copies the config file before the build is executed.
> 
> Signed-off-by: Wei Liu <wl@xen.org>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks!


From xen-devel-bounces@lists.xenproject.org Wed May 05 12:32:25 2021
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <28384167-fbd0-d3ff-c064-ee88f5891580@suse.com>
 <4def95ca-7488-09bf-19fa-d1fa0fa55427@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <64d4288b-badc-5a3a-894c-3fb3c4c31baa@suse.com>
Date: Wed, 5 May 2021 14:32:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <4def95ca-7488-09bf-19fa-d1fa0fa55427@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.05.2021 14:15, Andrew Cooper wrote:
> On 05/05/2021 07:39, Jan Beulich wrote:
>> On 04.05.2021 23:31, Andrew Cooper wrote:
>>> --- a/tools/tests/cpu-policy/test-cpu-policy.c
>>> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
>>> @@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
>>>      }
>>>  }
>>>  
>>> +static void test_calculate_compatible_success(void)
>>> +{
>>> +    static struct test {
>>> +        const char *name;
>>> +        struct {
>>> +            struct cpuid_policy p;
>>> +            struct msr_policy m;
>>> +        } a, b, out;
>>> +    } tests[] = {
>>> +        {
>>> +            "arch_caps, b short max_leaf",
>>> +            .a = {
>>> +                .p.basic.max_leaf = 7,
>>> +                .p.feat.arch_caps = true,
>>> +                .m.arch_caps.rdcl_no = true,
>>> +            },
>>> +            .b = {
>>> +                .p.basic.max_leaf = 6,
>>> +                .p.feat.arch_caps = true,
>>> +                .m.arch_caps.rdcl_no = true,
>> Is this legitimate input in the first place?
> 
> I've been debating this for a long time, and have decided "yes".
> 
> We have the xend format, and libxl format, and
> 
> cpuid=["host:max_leaf=6"]
> 
> ought not to be rejected simply because it hasn't also gone and zeroed
> every higher piece of information as well.
> 
> In production, this function will be used once per host in the pool, and
> once more if any custom configuration is specified.
> 
> Requiring both inputs to be of the normal form isn't necessary for this
> to function correctly (and indeed, would only add extra overhead), but
> the eventual result will be cleaned/shrunk/etc as appropriate.

Makes sense to me.

>>> --- a/xen/lib/x86/policy.c
>>> +++ b/xen/lib/x86/policy.c
>>> @@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>>      if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
>>>          FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
>>>  
>>> +    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
>>> +        FAIL_MSR(MSR_ARCH_CAPABILITIES);
>> Doesn't this need special treatment of RSBA, just like it needs specially
>> treating below?
> 
> No.  If RSBA is clear in 'host', then Xen doesn't know about it, and it
> cannot be set in the policy, and cannot be offered to guests.

How does this play together with the comment in
x86_cpu_policy_calculate_compatible() saying "accumulate"? If it's
clear in one policy because it has to be clear in the host's, how
can it be valid for it to get set there in the result, just because
it was set in the other input?
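Jan's question can be restated with a toy sketch (hypothetical field names, not the real struct cpu_policy layout): ordinary feature bits intersect across the two inputs, while RSBA, being a statement of a flaw rather than a capability, accumulates:

```c
#include <assert.h>
#include <stdint.h>

/* Toy stand-in for the MSR_ARCH_CAPS fields under discussion. */
struct caps {
    uint8_t rdcl_no;  /* "good" bit: hardware is NOT vulnerable */
    uint8_t rsba;     /* "bad" bit: alternate RSB behaviour present */
};

/* Level two policies: good bits intersect, bad bits accumulate. */
static struct caps level_caps(struct caps a, struct caps b)
{
    struct caps out = {
        .rdcl_no = a.rdcl_no & b.rdcl_no,
        .rsba    = a.rsba    | b.rsba,
    };

    return out;
}
```

On this reading, the point of contention is that the result can have rsba set even though one input, derived from a host without RSBA, had it clear.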

>>> @@ -43,6 +46,50 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>>      return ret;
>>>  }
>>>  
>>> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
>>> +                                        const struct cpu_policy *b,
>>> +                                        struct cpu_policy *out,
>>> +                                        struct cpu_policy_errors *err)
>>> +{
>>> +    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
>>> +    const struct msr_policy *am = a->msr, *bm = b->msr;
>>> +    struct cpuid_policy *cp = out->cpuid;
>>> +    struct msr_policy *mp = out->msr;
>> Hmm, okay - this would not work with my proposal in reply to your
>> other patch. The output would instead need to have pointers
>> allocated here then.
>>
>>> +    memset(cp, 0, sizeof(*cp));
>>> +    memset(mp, 0, sizeof(*mp));
>>> +
>>> +    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
>> Any reason you don't do the same right away for the max extended
>> leaf?
> 
> I did the minimum for RSBA testing.  The line needs to be drawn somewhere.

I see. I was thinking that it would be nice if the various related
pieces could remain in sync (or at least new code not staying behind
what we already handle elsewhere), i.e. this one doing at least the
equivalent of what x86_cpu_policies_are_compatible() is currently
capable of dealing with. This, again, would make it easier to spot
all the places that need adjusting when something new is to be added.
But I'm not going to insist - this is largely your realm anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 12:38:07 2021
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <YJJtqyDOIkMxjvxW@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
Message-ID: <8f6f339b-f025-2cd0-e666-a3083e79af3a@citrix.com>
Date: Wed, 5 May 2021 13:37:48 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <YJJtqyDOIkMxjvxW@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 05/05/2021 11:04, Roger Pau Monné wrote:
> On Tue, May 04, 2021 at 10:31:20PM +0100, Andrew Cooper wrote:
>> Just as with x86_cpu_policies_are_compatible(), make a start on this function
>> with some token handling.
>>
>> Add levelling support for MSR_ARCH_CAPS, because RSBA has interesting
>> properties, and introduce test_calculate_compatible_success() to the unit
>> tests, covering various cases where the arch_caps CPUID bit falls out, and
>> with RSBA accumulating rather than intersecting across the two.
>>
>> Extend x86_cpu_policies_are_compatible() with a check for MSR_ARCH_CAPS, which
>> was arguably missing from c/s e32605b07ef "x86: Begin to introduce support for
>> MSR_ARCH_CAPS".
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> ---
>>  tools/include/xen-tools/libs.h           |   5 ++
>>  tools/tests/cpu-policy/test-cpu-policy.c | 150 +++++++++++++++++++++++++++++++
>>  xen/include/xen/lib/x86/cpu-policy.h     |  22 +++++
>>  xen/lib/x86/policy.c                     |  47 ++++++++++
>>  4 files changed, 224 insertions(+)
>>
>> diff --git a/tools/include/xen-tools/libs.h b/tools/include/xen-tools/libs.h
>> index a16e0c3807..4de10efdea 100644
>> --- a/tools/include/xen-tools/libs.h
>> +++ b/tools/include/xen-tools/libs.h
>> @@ -63,4 +63,9 @@
>>  #define ROUNDUP(_x,_w) (((unsigned long)(_x)+(1UL<<(_w))-1) & ~((1UL<<(_w))-1))
>>  #endif
>>
>> +#ifndef _AC
>> +#define __AC(X, Y)   (X ## Y)
>> +#define _AC(X, Y)    __AC(X, Y)
>> +#endif
> You need to remove those definitions from libxl_internal.h.

Ok.
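For reference, the `_AC` pair being moved into libs.h is the usual token-pasting idiom for attaching integer-constant suffixes; a standalone illustration follows (the `EXAMPLE_*` names are invented for this sketch, not from the patch):

```c
#include <assert.h>

/* The definitions from the hunk above. */
#define __AC(X, Y)   (X ## Y)
#define _AC(X, Y)    __AC(X, Y)

/*
 * The two-level expansion lets X and Y themselves be macros.  In Xen
 * headers the same _AC() spelling can expand to a bare constant when
 * the header is included from assembly, where suffixes such as UL are
 * not valid tokens.
 */
#define EXAMPLE_SHIFT 12
#define EXAMPLE_SIZE  (_AC(1, UL) << EXAMPLE_SHIFT)
```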

>
>>  #endif	/* __XEN_TOOLS_LIBS__ */
>> diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
>> index 75973298df..455b4fe3c0 100644
>> --- a/tools/tests/cpu-policy/test-cpu-policy.c
>> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
>> @@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
>>      }
>>  }
>>
>> +static void test_calculate_compatible_success(void)
>> +{
>> +    static struct test {
>> +        const char *name;
>> +        struct {
>> +            struct cpuid_policy p;
>> +            struct msr_policy m;
>> +        } a, b, out;
>> +    } tests[] = {
>> +        {
>> +            "arch_caps, b short max_leaf",
>> +            .a = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +            .b = {
>> +                .p.basic.max_leaf = 6,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +            .out = {
>> +                .p.basic.max_leaf = 6,
>> +            },
>> +        },
>> +        {
>> +            "arch_caps, b feat missing",
>> +            .a = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +            .b = {
>> +                .p.basic.max_leaf = 7,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +            .out = {
>> +                .p.basic.max_leaf = 7,
>> +            },
>> +        },
>> +        {
>> +            "arch_caps, b rdcl_no missing",
>> +            .a = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +            .b = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +            },
>> +            .out = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +            },
>> +        },
>> +        {
>> +            "arch_caps, rdcl_no ok",
>> +            .a = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +            .b = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +            .out = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rdcl_no = true,
>> +            },
>> +        },
>> +        {
>> +            "arch_caps, rsba accum",
>> +            .a = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rsba = true,
>> +            },
>> +            .b = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +            },
>> +            .out = {
>> +                .p.basic.max_leaf = 7,
>> +                .p.feat.arch_caps = true,
>> +                .m.arch_caps.rsba = true,
>> +            },
>> +        },
>> +    };
>> +    struct cpu_policy_errors no_errors = INIT_CPU_POLICY_ERRORS;
>> +
>> +    printf("Testing calculate compatibility success:\n");
>> +
>> +    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
>> +    {
>> +        struct test *t = &tests[i];
>> +        struct cpuid_policy *p = malloc(sizeof(struct cpuid_policy));
>> +        struct msr_policy *m = malloc(sizeof(struct msr_policy));
>> +        struct cpu_policy a = {
>> +            &t->a.p,
>> +            &t->a.m,
>> +        }, b = {
>> +            &t->b.p,
>> +            &t->b.m,
>> +        }, out = {
>> +            p,
>> +            m,
>> +        };
>> +        struct cpu_policy_errors e;
>> +        int res;
>> +
>> +        if ( !p || !m )
>> +            err(1, "%s() malloc failure", __func__);
>> +
>> +        res = x86_cpu_policy_calculate_compatible(&a, &b, &out, &e);
>> +
>> +        /* Check the expected error output. */
>> +        if ( res != 0 || memcmp(&no_errors, &e, sizeof(no_errors)) )
>> +        {
>> +            fail("  Test '%s' expected no errors\n"
>> +                 "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
>> +                 t->name, res, e.leaf, e.subleaf, e.msr);
>> +            goto test_done;
>> +        }
>> +
>> +        if ( memcmp(&t->out.p, p, sizeof(*p)) )
>> +        {
>> +            fail("  Test '%s' resulting CPUID policy not as expected\n",
>> +                 t->name);
>> +            goto test_done;
>> +        }
>> +
>> +        if ( memcmp(&t->out.m, m, sizeof(*m)) )
>> +        {
>> +            fail("  Test '%s' resulting MSR policy not as expected\n",
>> +                 t->name);
>> +            goto test_done;
>> +        }
> In order to assert that we don't mess things up, I would also add a
> x86_cpu_policies_are_compatible check here:
>
> res = x86_cpu_policies_are_compatible(&a, &out, &e);
> if ( res )
>     fail("  Test '%s' created incompatible policy\n"
>          "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
>          t->name, res, e.leaf, e.subleaf, e.msr);
> res = x86_cpu_policies_are_compatible(&b, &out, &e);
> if ( res )
>     fail("  Test '%s' created incompatible policy\n"
>          "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
>          t->name, res, e.leaf, e.subleaf, e.msr);

That's potentially problematic, hence not including it the first time
around.  See the discussion below.

>
>> +
>> +    test_done:
>> +        free(p);
>> +        free(m);
>> +    }
>> +}
>> +
>>  int main(int argc, char **argv)
>>  {
>>      printf("CPU Policy unit tests\n");
>> @@ -793,6 +941,8 @@ int main(int argc, char **argv)
>>      test_is_compatible_success();
>>      test_is_compatible_failure();
>>
>>      test_calculate_compatible_success();
>>
>>      if ( nr_failures )
>>          printf("Done: %u failures\n", nr_failures);
>>      else
>> diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
>> index 5a2c4c7b2d..0422a15557 100644
>> --- a/xen/include/xen/lib/x86/cpu-policy.h
>> +++ b/xen/include/xen/lib/x86/cpu-policy.h
>> @@ -37,6 +37,28 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>                                      const struct cpu_policy *guest,
>>                                      struct cpu_policy_errors *err);
>>
>> +/*
>> + * Given two policies, calculate one which is compatible with each.
>> + *
>> + * i.e. Given host @a and host @b, calculate what to give a VM so it can live
>> + * migrate between the two.
>> + *
>> + * @param a        A cpu_policy.
>> + * @param b        Another cpu_policy.
>> + * @param out      A policy compatible with @a and @b.
>> + * @param err      Optional hint for error diagnostics.
>> + * @returns -errno
>> + *
>> + * For typical usage, @a and @b should be system policies of the same type
>> + * (i.e. PV/HVM default/max) from different hosts.  In the case that an
>> + * incompatibility is detected, the optional err pointer may identify the
>> + * problematic leaf/subleaf and/or MSR.
>> + */
>> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
>> +                                        const struct cpu_policy *b,
>> +                                        struct cpu_policy *out,
>> +                                        struct cpu_policy_errors *err);
>> +
>>  #endif /* !XEN_LIB_X86_POLICIES_H */
>>
>>  /*
>> diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
>> index f6cea4e2f9..06039e8aa8 100644
>> --- a/xen/lib/x86/policy.c
>> +++ b/xen/lib/x86/policy.c
>> @@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>      if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
>>          FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
>>
>> +    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
>> +        FAIL_MSR(MSR_ARCH_CAPABILITIES);
> It might be nice to expand test_is_compatible_{success,failure} to
> account for arch_caps being checked now.

At some point we're going to need to stop unit testing "does the AND
operator work", and limit testing to the interesting corner cases.

> Shouldn't this check also take into account that host might not have
> RSBA set, but it's legit for a guest policy to have it?

When we expose this properly to guests, the max policies will have RSBA
set, and the default policies will have RSBA forwarded from hardware
and/or the model table.

Therefore, we can accept any VM RSBA configuration, irrespective of the
particulars of this host, but if you e.g. have a pool of Haswells, the
default policy will have RSBA clear across the board, and the VM won't
see it.

> if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw & ~POL_MASK )
>     FAIL_MSR(MSR_ARCH_CAPABILITIES);
>
> Maybe POL_MASK should be renamed and defined in a header so it's
> widely available?

No - this would be incorrect.  The polarity of certain bits only matters
for levelling calculations, not for "can this VM run on this host"
calculations.

If the VM has seen RSBA, and Xen doesn't know about it, the VM cannot run.
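The distinction being drawn here can be sketched as follows (a simplified stand-alone check, not the library code; bit positions are per the IA32_ARCH_CAPABILITIES definitions, where RDCL_NO is bit 0 and RSBA is bit 2): runnability is a plain subset test on the raw value, with no polarity special-casing.

```c
#include <assert.h>
#include <stdint.h>

#define ARCH_CAPS_RDCL_NO  (UINT64_C(1) << 0)
#define ARCH_CAPS_RSBA     (UINT64_C(1) << 2)

/*
 * "Can this VM run on this host?" is a subset test on the raw MSR
 * value: the guest may not have seen any bit the host (i.e. Xen)
 * doesn't know about.  This intentionally includes RSBA.
 */
static int vm_can_run(uint64_t host_caps, uint64_t guest_caps)
{
    return !(~host_caps & guest_caps);
}
```

Polarity only enters the picture when levelling two policies together; for the runnability check, a guest which has seen RSBA on a host where Xen doesn't know about it simply cannot run.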

>
>> +
>>  #undef FAIL_MSR
>>  #undef FAIL_CPUID
>>  #undef NA
>> @@ -43,6 +46,50 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>      return ret;
>>  }
>>
>> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
>> +                                        const struct cpu_policy *b,
>> +                                        struct cpu_policy *out,
>> +                                        struct cpu_policy_errors *err)
> I think this should be in an #ifndef __XEN__ protected region?
>
> There's no need to expose this to the hypervisor, as I would expect it
> will never have to do compatible policy generation? (ie: it will
> always be done by the toolstack?)

As indicated previously, I still think we want this in Xen for the boot
paths, but I suppose the guard was my suggestion to you, so is only fair
at this point.

>
>> +{
>> +    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
>> +    const struct msr_policy *am = a->msr, *bm = b->msr;
>> +    struct cpuid_policy *cp = out->cpuid;
>> +    struct msr_policy *mp = out->msr;
>> +
>> +    memset(cp, 0, sizeof(*cp));
>> +    memset(mp, 0, sizeof(*mp));
>> +
>> +    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
>> +
>> +    if ( cp->basic.max_leaf >= 7 )
>> +    {
>> +        cp->feat.max_subleaf = min(ap->feat.max_subleaf, bp->feat.max_subleaf);
>> +
>> +        cp->feat.raw[0].b = ap->feat.raw[0].b & bp->feat.raw[0].b;
>> +        cp->feat.raw[0].c = ap->feat.raw[0].c & bp->feat.raw[0].c;
>> +        cp->feat.raw[0].d = ap->feat.raw[0].d & bp->feat.raw[0].d;
>> +    }
>> +
>> +    /* TODO: Far more. */
> Right, my proposed patch (07/13) went a bit further and also leveled
> 1c, 1d, Da1, e1c, e1d, e7d, e8b and e21a, and we also need to level
> a couple of max_leaf fields.
>
> I'm happy for this to go in first, and I can rebase the extra logic I
> have on top of this one.

There is a lot of work to do.

One thing I haven't addressed yet is things which don't level, e.g.
vendor.  You've got to pick one, and there isn't a mathematical
relationship to use between a and b.

I think for that, we ought to document that we strictly take from a.
This makes the operation not commutative, and in particular, I don't
think we want to waste too much time/effort trying to make cross-vendor
cases work - it was a stunt a decade ago, with a huge number of sharp
corners, as well as creating a number of XSAs due to poor implementation.

For v1, I suggest we firmly stick to the same-vendor case.  It's not as
if there is a lack of things to do to make this work.
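The "strictly take from a" convention could look something like this (hypothetical cut-down types for illustration, not the real cpu_policy): non-levellable fields copy from @a, numeric ranges take the minimum, so merge(a, b) and merge(b, a) differ by design.

```c
#include <assert.h>
#include <string.h>

/* Hypothetical cut-down policy: one non-levellable field, one levellable. */
struct toy_policy {
    char vendor[13];        /* no mathematical meet: taken from @a */
    unsigned int max_leaf;  /* levels via min() */
};

static struct toy_policy merge_policies(const struct toy_policy *a,
                                        const struct toy_policy *b)
{
    struct toy_policy out;

    memcpy(out.vendor, a->vendor, sizeof(out.vendor));
    out.max_leaf = a->max_leaf < b->max_leaf ? a->max_leaf : b->max_leaf;

    return out;
}
```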

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 05 12:48:44 2021
To: Roger Pau Monné <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20210504185322.19306-1-andrew.cooper3@citrix.com>
 <YJJiWLqGoHLSnj01@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libs/guest: Don't hide the indirection on xc_cpu_policy_t
Message-ID: <deb19b01-aa07-6faf-42c8-67fe372ede64@citrix.com>
Date: Wed, 5 May 2021 13:48:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <YJJiWLqGoHLSnj01@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB

On 05/05/2021 10:16, Roger Pau Monné wrote:
> On Tue, May 04, 2021 at 07:53:22PM +0100, Andrew Cooper wrote:
>> It is bad form in C, perhaps best demonstrated by trying to read
>> xc_cpu_policy_destroy(), and causes const qualification to have
>> less-than-obvious behaviour (the hidden pointer becomes const, not the
>> thing it points at).
> Would this also affect cpuid_leaf_buffer_t and msr_entry_buffer_t
> which hide an array behind a typedef?

They're a total pain because in userspace, they're plain arrays, and in
Xen, they're GUEST_HANDLEs.

Hiding arrays in a typedef like that (unlike hiding pointers) doesn't
change the interaction with const.

So the code there is correct AFAICT, even if it doesn't appear so.

>> xc_cpu_policy_set_domain() needs to drop its (now normal) const qualification,
>> as the policy object is modified by the serialisation operation.
>>
>> This also shows up a problem with x86_cpu_policies_are_compatible(), where
>> the intermediate pointers are non-const.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>>
>> Discovered while trying to start the integration into XenServer.  This wants
>> fixing ASAP, before further uses get added.
>>
>> Unsure what to do about x86_cpu_policies_are_compatible().  It would be nice
>> to have xc_cpu_policy_is_compatible() sensibly const'd, but maybe that means
>> we need a struct const_cpu_policy and that smells like it is spiralling out
>> of control.
> Not sure TBH, I cannot think of any alternative right now, but
> introducing a const_cpu_policy feels like a kind of code duplication.

At least this is all internals.  We've got time and flexibility to
experiment.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 05 13:02:48 2021
Date: Wed, 5 May 2021 15:02:31 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
Message-ID: <YJKXZyCHpRg32tyc@Air-de-Roger>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <YJJtqyDOIkMxjvxW@Air-de-Roger>
 <8f6f339b-f025-2cd0-e666-a3083e79af3a@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8f6f339b-f025-2cd0-e666-a3083e79af3a@citrix.com>

On Wed, May 05, 2021 at 01:37:48PM +0100, Andrew Cooper wrote:
> On 05/05/2021 11:04, Roger Pau Monné wrote:
> > On Tue, May 04, 2021 at 10:31:20PM +0100, Andrew Cooper wrote:
> >> diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
> >> index 75973298df..455b4fe3c0 100644
> >> --- a/tools/tests/cpu-policy/test-cpu-policy.c
> >> +++ b/tools/tests/cpu-policy/test-cpu-policy.c
> >> @@ -775,6 +775,154 @@ static void test_is_compatible_failure(void)
> >>      }
> >>  }
> >>  
> >> +static void test_calculate_compatible_success(void)
> >> +{
> >> +    static struct test {
> >> +        const char *name;
> >> +        struct {
> >> +            struct cpuid_policy p;
> >> +            struct msr_policy m;
> >> +        } a, b, out;
> >> +    } tests[] = {
> >> +        {
> >> +            "arch_caps, b short max_leaf",
> >> +            .a = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rdcl_no = true,
> >> +            },
> >> +            .b = {
> >> +                .p.basic.max_leaf = 6,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rdcl_no = true,
> >> +            },
> >> +            .out = {
> >> +                .p.basic.max_leaf = 6,
> >> +            },
> >> +        },
> >> +        {
> >> +            "arch_caps, b feat missing",
> >> +            .a = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rdcl_no = true,
> >> +            },
> >> +            .b = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .m.arch_caps.rdcl_no = true,
> >> +            },
> >> +            .out = {
> >> +                .p.basic.max_leaf = 7,
> >> +            },
> >> +        },
> >> +        {
> >> +            "arch_caps, b rdcl_no missing",
> >> +            .a = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rdcl_no = true,
> >> +            },
> >> +            .b = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +            },
> >> +            .out = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +            },
> >> +        },
> >> +        {
> >> +            "arch_caps, rdcl_no ok",
> >> +            .a = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rdcl_no = true,
> >> +            },
> >> +            .b = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rdcl_no = true,
> >> +            },
> >> +            .out = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rdcl_no = true,
> >> +            },
> >> +        },
> >> +        {
> >> +            "arch_caps, rsba accum",
> >> +            .a = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rsba = true,
> >> +            },
> >> +            .b = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +            },
> >> +            .out = {
> >> +                .p.basic.max_leaf = 7,
> >> +                .p.feat.arch_caps = true,
> >> +                .m.arch_caps.rsba = true,
> >> +            },
> >> +        },
> >> +    };
> >> +    struct cpu_policy_errors no_errors = INIT_CPU_POLICY_ERRORS;
> >> +
> >> +    printf("Testing calculate compatibility success:\n");
> >> +
> >> +    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
> >> +    {
> >> +        struct test *t = &tests[i];
> >> +        struct cpuid_policy *p = malloc(sizeof(struct cpuid_policy));
> >> +        struct msr_policy *m = malloc(sizeof(struct msr_policy));
> >> +        struct cpu_policy a = {
> >> +            &t->a.p,
> >> +            &t->a.m,
> >> +        }, b = {
> >> +            &t->b.p,
> >> +            &t->b.m,
> >> +        }, out = {
> >> +            p,
> >> +            m,
> >> +        };
> >> +        struct cpu_policy_errors e;
> >> +        int res;
> >> +
> >> +        if ( !p || !m )
> >> +            err(1, "%s() malloc failure", __func__);
> >> +
> >> +        res = x86_cpu_policy_calculate_compatible(&a, &b, &out, &e);
> >> +
> >> +        /* Check the expected error output. */
> >> +        if ( res != 0 || memcmp(&no_errors, &e, sizeof(no_errors)) )
> >> +        {
> >> +            fail("  Test '%s' expected no errors\n"
> >> +                 "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
> >> +                 t->name, res, e.leaf, e.subleaf, e.msr);
> >> +            goto test_done;
> >> +        }
> >> +
> >> +        if ( memcmp(&t->out.p, p, sizeof(*p)) )
> >> +        {
> >> +            fail("  Test '%s' resulting CPUID policy not as expected\n",
> >> +                 t->name);
> >> +            goto test_done;
> >> +        }
> >> +
> >> +        if ( memcmp(&t->out.m, m, sizeof(*m)) )
> >> +        {
> >> +            fail("  Test '%s' resulting MSR policy not as expected\n",
> >> +                 t->name);
> >> +            goto test_done;
> >> +        }
> > In order to assert that we don't mess things up, I would also add a
> > x86_cpu_policies_are_compatible check here:
> >
> > res = x86_cpu_policies_are_compatible(&a, &out, &e);
> > if ( res )
> >     fail("  Test '%s' created incompatible policy\n"
> >          "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
> >          t->name, res, e.leaf, e.subleaf, e.msr);
> > res = x86_cpu_policies_are_compatible(&b, &out, &e);
> > if ( res )
> >     fail("  Test '%s' created incompatible policy\n"
> >          "    got res %d { leaf %08x, subleaf %08x, msr %08x }\n",
> >          t->name, res, e.leaf, e.subleaf, e.msr);
> 
> That's potentially problematic, hence not including it the first time
> around.  See the discussion below.

Right, I think being able to do what I propose would also allow us to
detect missing bits between x86_cpu_policy_calculate_compatible and
x86_cpu_policies_are_compatible.

> >> +
> >> +    test_done:
> >> +        free(p);
> >> +        free(m);
> >> +    }
> >> +}
> >> +
> >>  int main(int argc, char **argv)
> >>  {
> >>      printf("CPU Policy unit tests\n");
> >> @@ -793,6 +941,8 @@ int main(int argc, char **argv)
> >>      test_is_compatible_success();
> >>      test_is_compatible_failure();
> >>  
> >> +    test_calculate_compatible_success();
> >> +
> >>      if ( nr_failures )
> >>          printf("Done: %u failures\n", nr_failures);
> >>      else
> >> diff --git a/xen/include/xen/lib/x86/cpu-policy.h b/xen/include/xen/lib/x86/cpu-policy.h
> >> index 5a2c4c7b2d..0422a15557 100644
> >> --- a/xen/include/xen/lib/x86/cpu-policy.h
> >> +++ b/xen/include/xen/lib/x86/cpu-policy.h
> >> @@ -37,6 +37,28 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
> >>                                      const struct cpu_policy *guest,
> >>                                      struct cpu_policy_errors *err);
> >>  
> >> +/*
> >> + * Given two policies, calculate one which is compatible with each.
> >> + *
> >> + * i.e. Given host @a and host @b, calculate what to give a VM so it can live
> >> + * migrate between the two.
> >> + *
> >> + * @param a        A cpu_policy.
> >> + * @param b        Another cpu_policy.
> >> + * @param out      A policy compatible with @a and @b.
> >> + * @param err      Optional hint for error diagnostics.
> >> + * @returns -errno
> >> + *
> >> + * For typical usage, @a and @b should be system policies of the same type
> >> + * (i.e. PV/HVM default/max) from different hosts.  In the case that an
> >> + * incompatibility is detected, the optional err pointer may identify the
> >> + * problematic leaf/subleaf and/or MSR.
> >> + */
> >> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
> >> +                                        const struct cpu_policy *b,
> >> +                                        struct cpu_policy *out,
> >> +                                        struct cpu_policy_errors *err);
> >> +
> >>  #endif /* !XEN_LIB_X86_POLICIES_H */
> >>  
> >>  /*
> >> diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
> >> index f6cea4e2f9..06039e8aa8 100644
> >> --- a/xen/lib/x86/policy.c
> >> +++ b/xen/lib/x86/policy.c
> >> @@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
> >>      if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
> >>          FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
> >>  
> >> +    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
> >> +        FAIL_MSR(MSR_ARCH_CAPABILITIES);
> > It might be nice to expand test_is_compatible_{success,failure} to
> > account for arch_caps being checked now.
> 
> At some point we're going to need to stop unit testing "does the AND
> operator work", and limit testing to the interesting corner cases.
> 
> > Shouldn't this check also take into account that host might not have
> > RSBA set, but it's legit for a guest policy to have it?
> 
> When we expose this properly to guests, the max policies will have RSBA
> set, and the default policies will have RSBA forwarded from hardware
> and/or the model table.
> 
> Therefore, we can accept any VM RSBA configuration, irrespective of the
> particulars of this host, but if you e.g. have a pool of Haswells, the
> default policy will have RSBA clear across the board, and the VM won't
> see it.
> 
> > if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw & ~POL_MASK )
> >     FAIL_MSR(MSR_ARCH_CAPABILITIES);
> >
> > Maybe POL_MASK should be renamed and defined in a header so it's
> > widely available?
> 
> No - this would be incorrect.  The polarity of certain bits only matters
> for levelling calculations, not for "can this VM run on this host"
> calculations.
> 
> If the VM has seen RSBA, and Xen doesn't know about it, the VM cannot run.

But then the logical relationship between
x86_cpu_policy_calculate_compatible and
x86_cpu_policies_are_compatible is broken AFAICT.

If you give x86_cpu_policy_calculate_compatible one policy with RSBA set
and one without, it will generate a compatible policy, yet that output
will be regarded as not compatible if fed into
x86_cpu_policies_are_compatible against the policy that doesn't have
RSBA set.

I think the output from x86_cpu_policy_calculate_compatible should
always pass when checked against either of the inputs using
x86_cpu_policies_are_compatible, or else we need to document the
exception somewhere, because I don't think it's the expected behavior.
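[The invariant under discussion can be modelled with a hypothetical
one-field policy (a stand-in for the real struct cpu_policy, not Xen's
actual types): plain AND levelling never produces a bit either input
lacks, so the output stays compatible with both inputs; the breakage
only appears if levelling special-cases inverted-polarity bits like
RSBA by ORing them in.]

```c
#include <stdint.h>

/* Hypothetical one-field policy, standing in for struct cpu_policy. */
struct toy_policy {
    uint64_t arch_caps;
};

#define RSBA (1ull << 2)  /* RSBA bit position in MSR_ARCH_CAPABILITIES */

/*
 * Mirrors the x86_cpu_policies_are_compatible() check quoted above:
 * the guest must not have any bit the host lacks.
 * Returns 0 if compatible, -1 otherwise.
 */
static int toy_are_compatible(const struct toy_policy *host,
                              const struct toy_policy *guest)
{
    return (~host->arch_caps & guest->arch_caps) ? -1 : 0;
}

/* Mirrors x86_cpu_policy_calculate_compatible(): plain AND levelling. */
static void toy_calculate_compatible(const struct toy_policy *a,
                                     const struct toy_policy *b,
                                     struct toy_policy *out)
{
    out->arch_caps = a->arch_caps & b->arch_caps;
}
```

[With these definitions, levelling {RSBA} against {0} yields {0}, which
passes toy_are_compatible() against both inputs, while a guest with
RSBA set remains incompatible with a host without it.]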

> >
> >> +
> >>  #undef FAIL_MSR
> >>  #undef FAIL_CPUID
> >>  #undef NA
> >> @@ -43,6 +46,50 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
> >>      return ret;
> >>  }
> >>  
> >> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
> >> +                                        const struct cpu_policy *b,
> >> +                                        struct cpu_policy *out,
> >> +                                        struct cpu_policy_errors *err)
> > I think this should be in an #ifndef __XEN__ protected region?
> >
> > There's no need to expose this to the hypervisor, as I would expect it
> > will never have to do compatible policy generation? (ie: it will
> > always be done by the toolstack?)
> 
> As indicated previously, I still think we want this in Xen for the boot
> paths, but I suppose the guard was my suggestion to you, so it's only fair
> at this point.

TBH I replied before seeing your email that also had this suggestion.
If it's indeed going to be used by Xen itself then that's fine, but I
couldn't figure out why the hypervisor would need to generate
compatible policies itself.

Maybe it will be used to generate the initial policies?

> >
> >> +{
> >> +    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
> >> +    const struct msr_policy *am = a->msr, *bm = b->msr;
> >> +    struct cpuid_policy *cp = out->cpuid;
> >> +    struct msr_policy *mp = out->msr;
> >> +
> >> +    memset(cp, 0, sizeof(*cp));
> >> +    memset(mp, 0, sizeof(*mp));
> >> +
> >> +    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
> >> +
> >> +    if ( cp->basic.max_leaf >= 7 )
> >> +    {
> >> +        cp->feat.max_subleaf = min(ap->feat.max_subleaf, bp->feat.max_subleaf);
> >> +
> >> +        cp->feat.raw[0].b = ap->feat.raw[0].b & bp->feat.raw[0].b;
> >> +        cp->feat.raw[0].c = ap->feat.raw[0].c & bp->feat.raw[0].c;
> >> +        cp->feat.raw[0].d = ap->feat.raw[0].d & bp->feat.raw[0].d;
> >> +    }
> >> +
> >> +    /* TODO: Far more. */
> > Right, my proposed patch (07/13) went a bit further and also leveled
> > 1c, 1d, Da1, e1c, e1d, e7d, e8b and e21a, and we also need to level
> > a couple of max_leaf fields.
> >
> > I'm happy for this to go in first, and I can rebase the extra logic I
> > have on top of this one.
> 
> There is a lot of work to do.
> 
> One thing I haven't addressed yet is the fact that some things don't
> level, e.g. vendor.  You've got to pick one, and there isn't a
> mathematical relationship to use between a and b.
> 
> I think for that, we ought to document that we strictly take from a. 
> This makes the operation not commutative, and in particular, I don't
> think we want to waste too much time/effort trying to make cross-vendor
> cases work - it was a stunt a decade ago, with a huge number of sharp
> corners, as well as creating a number of XSAs due to poor implementation.
> 
> For v1, I suggest we firmly stick to the same-vendor case.  It's not as
> if there is a lack of things to do to make this work.

OK, so level all the feature fields and pick the non-feature parts of
cpuid strictly from one of the inputs.
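[That scheme can be sketched with a hypothetical reduced policy (the
field names below are illustrative, not the real struct cpuid_policy):
feature words are ANDed, max leaves are min()ed, and non-feature data
such as the vendor is taken strictly from the first input, making the
operation deliberately non-commutative.]

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical reduced policy, just enough to show the scheme. */
struct toy_cpuid {
    uint32_t max_leaf;
    uint32_t vendor;        /* non-feature data: no mathematical merge */
    uint32_t feat_7_0_ebx;  /* leaf 7, subleaf 0, %ebx feature word */
};

static uint32_t min_u32(uint32_t a, uint32_t b)
{
    return a < b ? a : b;
}

/*
 * Level @a against @b into @out: AND the feature words, min() the max
 * leaves, and take non-feature fields strictly from @a.
 */
static void toy_level(const struct toy_cpuid *a,
                      const struct toy_cpuid *b,
                      struct toy_cpuid *out)
{
    memset(out, 0, sizeof(*out));

    out->vendor = a->vendor;                        /* strictly from a */
    out->max_leaf = min_u32(a->max_leaf, b->max_leaf);

    if ( out->max_leaf >= 7 )
        out->feat_7_0_ebx = a->feat_7_0_ebx & b->feat_7_0_ebx;
}
```

[Because the vendor comes strictly from @a, toy_level(a, b, out) and
toy_level(b, a, out) can legitimately differ, matching the "strictly
take from a" documentation point above.]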

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 05 13:30:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 13:30:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123125.232250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leHbH-000319-M0; Wed, 05 May 2021 13:30:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123125.232250; Wed, 05 May 2021 13:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leHbH-00030Y-Hz; Wed, 05 May 2021 13:30:03 +0000
Received: by outflank-mailman (input) for mailman id 123125;
 Wed, 05 May 2021 13:30:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sTpK=KA=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leHbG-0002oA-DA
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 13:30:02 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e02a03ac-5093-4c2c-b0e3-61fc45255c85;
 Wed, 05 May 2021 13:30:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Wed, 5 May 2021 15:29:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 02/22] x86/xstate: use xvzalloc() for save area
 allocation
Message-ID: <YJKdz1irxJq8Yckn@Air-de-Roger>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <1fec148f-a5b2-5102-a790-e908d6f040c9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <1fec148f-a5b2-5102-a790-e908d6f040c9@suse.com>

On Thu, Apr 22, 2021 at 04:44:36PM +0200, Jan Beulich wrote:
> This is in preparation for the area size exceeding a page's worth of
> space, as will happen with AMX as well as Architectural LBR.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

The Ack stands even if the naming of the new helpers (_xvzalloc) is changed.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 05 13:31:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 13:31:27 +0000
Subject: Re: [PATCH] tools/libs: move cpu policy related prototypes to
 xenguest.h
To: Roger Pau Monne <roger.pau@citrix.com>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Jan Beulich
	<jbeulich@suse.com>
References: <20210505111508.82956-1-roger.pau@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <41c22d5a-c84b-1ff0-087f-e4bb1e9108eb@citrix.com>
Date: Wed, 5 May 2021 14:31:13 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210505111508.82956-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB

On 05/05/2021 12:15, Roger Pau Monne wrote:
> Do this before adding any more stuff to xg_cpuid_x86.c.
>
> The placement in xenctrl.h is wrong, as they are implemented by the
> xenguest library. Note that xg_cpuid_x86.c needs to include
> xg_private.h, and in turn also fix xg_private.h to include
> xc_bitops.h.
>
> As a result also modify xen-cpuid to include xenguest.h.
>
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

I'll commit this alongside my type changes.


From xen-devel-bounces@lists.xenproject.org Wed May 05 14:24:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 14:24:15 +0000
Date: Wed, 5 May 2021 14:24:02 +0000
From: Wei Liu <wl@xen.org>
To: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: Wei Liu <wl@xen.org>,
	Xen Development List <xen-devel@lists.xenproject.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: Re: [PATCH] automation: save xen config before building
Message-ID: <20210505142402.7zpvc76smuhbeo4y@liuwe-devbox-debian-v2>
References: <20210505114516.456201-1-wl@xen.org>
 <YJKOSW62716AdMoM@Air-de-Roger>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <YJKOSW62716AdMoM@Air-de-Roger>

On Wed, May 05, 2021 at 02:23:37PM +0200, Roger Pau Monné wrote:
> On Wed, May 05, 2021 at 11:45:16AM +0000, Wei Liu wrote:
> > It is reported that failed randconfig runs are missing the config file
> > which makes debugging impossible. Fix this by moving the line that
> > copies the config file before the build is executed.
> > 
> > Signed-off-by: Wei Liu <wl@xen.org>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

A patchew run shows this indeed fixes the issue. I've pushed this to
staging.

Wei.

> 
> Thanks!


From xen-devel-bounces@lists.xenproject.org Wed May 05 14:29:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 14:29:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123140.232286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leIWx-00010u-ME; Wed, 05 May 2021 14:29:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123140.232286; Wed, 05 May 2021 14:29:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leIWx-00010m-Io; Wed, 05 May 2021 14:29:39 +0000
Received: by outflank-mailman (input) for mailman id 123140;
 Wed, 05 May 2021 14:29:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Yav=KA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1leIWw-00010b-Nc
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 14:29:38 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7f200ba-8d3d-4a72-b54a-749f596f45ea;
 Wed, 05 May 2021 14:29:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7f200ba-8d3d-4a72-b54a-749f596f45ea
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620224976;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=8OGsTjhqalkfxGekwq98QPFllSJ+iC0LeZE8rigX0ek=;
  b=h/JlqvfWx/Kb7lhQLid3iWJXAfqI4D/HL9OZwSzH57TLlKeSqylN6KmS
   ZruLMOeP43J7uDvaCsO/iu9xZVH1UjJRsbdkH/zgOgPOktbIhWveY/q+j
   fEPco91QwCneXBJytBpZJFml5gWH2345RtPZOLYl+nR4UAeohs+q8XLti
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43508936
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,275,1613451600"; 
   d="scan'208";a="43508936"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=A0gnPzerL+8JDgnQQcIFV8e7yGC5Fvb+KGUmjTERKWGC1rqitXY7FxHbVHOV9Q7RI6jn8vH1y8e81K+U2cbXUI/fwxIx/xgtZlX6ewErJQc6JaaHVmje86kCxbYrKW/4eu9/RqeLZ8KY2yFZER1WXYNKmlwWV41qlOJeg3z16kj6V0txykGzO7SWkrRJnoMKPcvE3xj3u7H+ymMUMH2jg9rwo93y+kQA3ZEk3c63Zb+mukwAL29X0SgpgcFHSQR8fEB9t+2h+a1ENnJ+H9XNkP1dGTR0kAqKuJZFageURWzg6XbvvENzWtO6WtFKx+Wm1h7RHgYKGzC5K/IGMc7BKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r2mwY0Dq6XSJzaHvRvMrUpwEzdVWGP1uhjH1InxBqHQ=;
 b=ocMTrzanRj8KbG4/20IeTf8Sxhg+/DcAXdGqratTMvCoKyv8Vnk6ePuJqWJ0QnrxLkqO+NoMY3y60fsTXaCk2gJCQ1rnDYNA6MNSRzCf2ezbBuv4SjghekXP5U4xCNUvdqd4jiutnx1bZvbYF742df2GJ5xmrLU5IEzhgcQJ97dHuXCPuSuGOykRLA+vk6Sw3naaseC1OlR8pV7C2NRkLPq7wYimFv6wOECIX+efe8jbvycTzpLyZ7IY59G8waBGXlyoo6DTnTpjIIebyenl/z2SOTXJtLP6uzH73wAPINO9R29G7HgnR0xmMVyuweWTjyQ9Hii0Td6+Go1Y5Bsmmg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=r2mwY0Dq6XSJzaHvRvMrUpwEzdVWGP1uhjH1InxBqHQ=;
 b=H0UgieouH9irxfYvAGvlaPzX4vmO0HnawK3IL7nAZMFqI3Wpi/DzUDAewdRjD5HFJq4xBX38WiKKnP5aNhe+AmWLKb/pzqOd6s9IFT1Rb93eQx4OPWROUpytwIurT+O80rDjq1foguD+VoNJ3DtESumWkZywy/t9X22jukVvMV0=
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <YJJtqyDOIkMxjvxW@Air-de-Roger>
 <8f6f339b-f025-2cd0-e666-a3083e79af3a@citrix.com>
 <YJKXZyCHpRg32tyc@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
Message-ID: <38f5b74f-b005-784b-a92d-8ddb9e1b8d3c@citrix.com>
Date: Wed, 5 May 2021 15:29:26 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <YJKXZyCHpRg32tyc@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0209.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a5::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 42ffaa77-126b-46af-464e-08d90fd23398
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5695:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5695AE769A72315C0329A12EBA599@SJ0PR03MB5695.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 42ffaa77-126b-46af-464e-08d90fd23398
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2021 14:29:33.3981
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qRzCOY0tk7YaYMmGh+6x/VBczYaz2AcNSzAw/LDpOhUQ559cXa+tK/hNRxi1B8QvXuColptMlsy9waUhBhyTLAlZWr+VD3WALU8H6oQ9qP8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5695
X-OriginatorOrg: citrix.com

On 05/05/2021 14:02, Roger Pau Monné wrote:
> On Wed, May 05, 2021 at 01:37:48PM +0100, Andrew Cooper wrote:
>> On 05/05/2021 11:04, Roger Pau Monné wrote:
>>> On Tue, May 04, 2021 at 10:31:20PM +0100, Andrew Cooper wrote:
>>>> diff --git a/xen/lib/x86/policy.c b/xen/lib/x86/policy.c
>>>> index f6cea4e2f9..06039e8aa8 100644
>>>> --- a/xen/lib/x86/policy.c
>>>> +++ b/xen/lib/x86/policy.c
>>>> @@ -29,6 +29,9 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>>>      if ( ~host->msr->platform_info.raw & guest->msr->platform_info.raw )
>>>>          FAIL_MSR(MSR_INTEL_PLATFORM_INFO);
>>>>
>>>> +    if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw )
>>>> +        FAIL_MSR(MSR_ARCH_CAPABILITIES);
>>> It might be nice to expand test_is_compatible_{success,failure} to
>>> account for arch_caps being checked now.
>> At some point we're going to need to stop unit testing "does the AND
>> operator work", and limit testing to the interesting corner cases.
>>
>>> Shouldn't this check also take into account that host might not have
>>> RSBA set, but it's legit for a guest policy to have it?
>> When we expose this properly to guests, the max policies will have RSBA
>> set, and the default policies will have RSBA forwarded from hardware
>> and/or the model table.
>>
>> Therefore, we can accept any VM RSBA configuration, irrespective of the
>> particulars of this host, but if you e.g. have a pool of haswell's, the
>> default policy will have RSBA clear across the board, and the VM won't
>> see it.
>>
>>> if ( ~host->msr->arch_caps.raw & guest->msr->arch_caps.raw & ~POL_MASK )
>>>     FAIL_MSR(MSR_ARCH_CAPABILITIES);
>>>
>>> Maybe POL_MASK should be renamed and defined in a header so it's
>>> widely available?
>> No - this would be incorrect.  The polarity of certain bits only matters
>> for levelling calculations, not for "can this VM run on this host"
>> calculations.
>>
>> If the VM has seen RSBA, and Xen doesn't know about it, the VM cannot run.
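[The check being discussed reduces to a pure bit-subset predicate: the guest policy may not set any ARCH_CAPS bit the host-derived policy lacks. A minimal detached sketch of that predicate (hypothetical helper name, not the actual libx86 API; RSBA is bit 2 of IA32_ARCH_CAPABILITIES per the SDM):]

```c
#include <stdint.h>

/*
 * Sketch of the FAIL_MSR() condition in x86_cpu_policies_are_compatible():
 * a guest MSR value is acceptable only if it sets no bits absent from the
 * reference (max) policy.  Bit polarity is deliberately not modelled here;
 * as discussed, RSBA-style bits are handled by having the max policy set
 * them, not by special-casing this predicate.
 */
static int msr_value_is_subset(uint64_t host_raw, uint64_t guest_raw)
{
    return (~host_raw & guest_raw) == 0;
}
```

[E.g. a guest that has seen RSBA (bit 2) fails against a host policy with that bit clear, which is exactly the "VM cannot run" case above.]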
> But then the logic relation between
> x86_cpu_policy_calculate_compatible and
> x86_cpu_policies_are_compatible is broken AFAICT.
>
> If you give x86_cpu_policy_calculate_compatible one policy with RSBA set
> and one without it will generate a compatible policy, yet that output
> will be regarded as not compatible if feed into
> x86_cpu_policies_are_compatible against the policy that doesn't have
> RSBA set.
>
> I think the output from x86_cpu_policy_calculate_compatible should
> strictly return true when checked against any of the inputs using
> x86_cpu_policies_are_compatible, or else we need to note it somewhere
> because I think it's not the expected behavior.

Welcome to the monumental complexity, and the reason why this isn't 5
minutes of work.  This is just the tip of the iceberg.

"Please create me a policy for a VM" is conducted across PV/HVM default
policies, and/or user settings, while "Can this VM run on this host" is
checked against the max policy.  This split is necessary to cope with
the corner cases.

So no - levelling max policies isn't expected to result in anything
useful, and calling is_compatible with a default (rather than max) host
setting also isn't going to result in a useful answer.

And yes - for some changes, RSBA included, you're going to need to
update all your Xen's across the pool before migration is going to work,
but that's already the case now.


Tangentially, we haven't started yet on

struct irritating_corner_cases {
    bool vm_not_using_fcs_fds;
    bool vm_not_using_lbr;
    ...
};

which will require explicit user opt-in to override the "No - you can't
migrate across the IvyBridge/Haswell, or pre-Zen/Zen boundary".

Technically, MXCSR_MASK is also a hard blocker to migration, but we
don't even have that data in a consumable form, and we just might be
extremely lucky and discover that it is restricted to non-64-bit CPUs.

Migration with a VM having turned on LBR is still a disaster.  For now,
we drop everything on the floor, and let the VM explode if the
LBR_FORMAT has changed, or if the number of stack entries changes (which
does change with Hyperthreading enabled/disabled in firmware).
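[The LBR_FORMAT hazard above is visible in IA32_PERF_CAPABILITIES, whose low 6 bits encode the LBR format per the Intel SDM. As a hedged sketch (helper name and usage invented, not anything libx86 currently does), a migration pre-check would look like:]

```c
#include <stdint.h>

/*
 * IA32_PERF_CAPABILITIES[5:0] encodes the LBR format (Intel SDM).
 * If it differs between source and destination, an LBR-using VM
 * cannot be migrated safely - matching the observation above that
 * we currently just let such a VM explode.
 */
#define PERF_CAP_LBR_FMT_MASK 0x3fULL

static int lbr_format_compatible(uint64_t src_perf_caps,
                                 uint64_t dst_perf_caps)
{
    return (src_perf_caps & PERF_CAP_LBR_FMT_MASK) ==
           (dst_perf_caps & PERF_CAP_LBR_FMT_MASK);
}
```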

>>>> +
>>>>  #undef FAIL_MSR
>>>>  #undef FAIL_CPUID
>>>>  #undef NA
>>>> @@ -43,6 +46,50 @@ int x86_cpu_policies_are_compatible(const struct cpu_policy *host,
>>>>      return ret;
>>>>  }
>>>>
>>>> +int x86_cpu_policy_calculate_compatible(const struct cpu_policy *a,
>>>> +                                        const struct cpu_policy *b,
>>>> +                                        struct cpu_policy *out,
>>>> +                                        struct cpu_policy_errors *err)
>>> I think this should be in an #ifndef __XEN__ protected region?
>>>
>>> There's no need to expose this to the hypervisor, as I would expect it
>>> will never have to do compatible policy generation? (ie: it will
>>> always be done by the toolstack?)
>> As indicated previously, I still think we want this in Xen for the boot
>> paths, but I suppose the guard was my suggestion to you, so is only fair
>> at this point.
> TBH I replied before seeing your email that also had this suggestion.
> If it's indeed going to be used by Xen itself then that's fine, but I
> couldn't figure out why the hypervisor would need to generate
> compatible policies itself.
>
> Maybe it will be used to generate the initial policies?

Yes.

>
>>>> +{
>>>> +    const struct cpuid_policy *ap = a->cpuid, *bp = b->cpuid;
>>>> +    const struct msr_policy *am = a->msr, *bm = b->msr;
>>>> +    struct cpuid_policy *cp = out->cpuid;
>>>> +    struct msr_policy *mp = out->msr;
>>>> +
>>>> +    memset(cp, 0, sizeof(*cp));
>>>> +    memset(mp, 0, sizeof(*mp));
>>>> +
>>>> +    cp->basic.max_leaf = min(ap->basic.max_leaf, bp->basic.max_leaf);
>>>> +
>>>> +    if ( cp->basic.max_leaf >= 7 )
>>>> +    {
>>>> +        cp->feat.max_subleaf = min(ap->feat.max_subleaf, bp->feat.max_subleaf);
>>>> +
>>>> +        cp->feat.raw[0].b = ap->feat.raw[0].b & bp->feat.raw[0].b;
>>>> +        cp->feat.raw[0].c = ap->feat.raw[0].c & bp->feat.raw[0].c;
>>>> +        cp->feat.raw[0].d = ap->feat.raw[0].d & bp->feat.raw[0].d;
>>>> +    }
>>>> +
>>>> +    /* TODO: Far more. */
>>> Right, my proposed patch (07/13) went a bit further and also leveled
>>> 1c, 1d, Da1, e1c, e1d, e7d, e8b and e21a, and we also need to level
>>> a couple of max_leaf fields.
>>>
>>> I'm happy for this to go in first, and I can rebase the extra logic I
>>> have on top of this one.
>> There is a lot of work to do.
>>
>> One thing I haven't addressed yet is the fact that things which don't
>> level, e.g. vendor.  You've got to pick one, and there isn't a
>> mathematical relationship to use between a and b.
>>
>> I think for that, we ought to document that we strictly take from a.
>> This makes the operation not commutative, and in particular, I don't
>> think we want to waste too much time/effort trying to make cross-vendor
>> cases work - it was a stunt a decade ago, with a huge number of sharp
>> corners, as well as creating a number of XSAs due to poor implementation.
>>
>> For v1, I suggest we firmly stick to the same-vendor case.  It's not as
>> if there is a lack of things to do to make this work.
> OK, so level all the feature fields and pick the non feature parts of
> cpuid strictly from one of the inputs.

The awkward part to address is that we've still got simultaneous
equations with feature handling.  I'm fairly certain that the simple
and's which both you and I did won't be sufficient in due course.
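[For same-polarity bits, the simple AND levelling referred to does satisfy the invariant Roger asked about: the intersection is by construction a subset of either input, so it passes the subset-style compatibility check. It is only inverted-polarity bits (RSBA et al.) that escape this. A detached toy illustration (a single 64-bit feature set, not the real structures):]

```c
#include <stdint.h>

/* Toy model: "levelling" two feature sets is the bitwise AND... */
static uint64_t level_features(uint64_t a, uint64_t b)
{
    return a & b;
}

/* ...and "compatible" means setting no bit beyond the reference set.
 * level_features(a, b) is always compatible with both a and b, since
 * (a & b) can contain no bit outside either input. */
static int features_compatible(uint64_t ref, uint64_t test)
{
    return (~ref & test) == 0;
}
```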

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 05 14:48:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 14:48:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123150.232302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leIoq-0003Nl-AT; Wed, 05 May 2021 14:48:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123150.232302; Wed, 05 May 2021 14:48:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leIoq-0003Ne-7b; Wed, 05 May 2021 14:48:08 +0000
Received: by outflank-mailman (input) for mailman id 123150;
 Wed, 05 May 2021 14:48:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leIop-0003NX-2n
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 14:48:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21bd878d-f90f-4771-8434-5a8edd9d45a5;
 Wed, 05 May 2021 14:48:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4DDD1AFD8;
 Wed,  5 May 2021 14:48:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21bd878d-f90f-4771-8434-5a8edd9d45a5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620226085; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3Un/nF7d9cHfDIEG6yL1v2j45fHDcpkbofyMtRhUi1o=;
	b=LG5RLWpoRdnEbUBy1U/luIc/GcdmgLKlro0+QZkst+LUL2U2MYzBIK6n0QNN1zA402RF2O
	jC55wMrVy8raU77RM3Aep8vih9OGZCV8rxPqTlaE7vuz3dASCB97sSMwscYYXOnMg5/yiA
	sc0hyr2Ou9lkHjrq0jJoHcJvJE+DBaQ=
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <YJJtqyDOIkMxjvxW@Air-de-Roger>
 <8f6f339b-f025-2cd0-e666-a3083e79af3a@citrix.com>
 <YJKXZyCHpRg32tyc@Air-de-Roger>
 <38f5b74f-b005-784b-a92d-8ddb9e1b8d3c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bb2089cf-74f3-a7f2-7001-21a0d009440f@suse.com>
Date: Wed, 5 May 2021 16:48:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <38f5b74f-b005-784b-a92d-8ddb9e1b8d3c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 05.05.2021 16:29, Andrew Cooper wrote:
> Technically, MXCSR_MASK is also a hard blocker to migration, but we
> don't even have that data in a consumable form, and we just might be
> extremely lucky and discover that it is restricted to non-64-bit CPUs.

"it" being what here? The value's presence / absence in an {F,}XSAVE
image? Or the precise value of it?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 14:50:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 14:50:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123154.232314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leIrJ-0004iF-Oq; Wed, 05 May 2021 14:50:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123154.232314; Wed, 05 May 2021 14:50:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leIrJ-0004i8-LG; Wed, 05 May 2021 14:50:41 +0000
Received: by outflank-mailman (input) for mailman id 123154;
 Wed, 05 May 2021 14:50:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Yav=KA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1leIrI-0004hz-KO
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 14:50:40 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 17976c9d-5390-4184-b2b4-5939c94f929d;
 Wed, 05 May 2021 14:50:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 17976c9d-5390-4184-b2b4-5939c94f929d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620226238;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=qOv3lO6oyab0Ylfk09R7N9VExSynhyxDJZBR0IAyefE=;
  b=F6yDUdNjZSaCHY1JW9ndJO876w58zMUDbMWiFjpF27zJZELhjQqM4Fyk
   0+Bmq2Olk+29u5RWZomrtB8NcQ+GgGWZw1Vxo6eAF8JVo1R9eXj9pcqaG
   Gy9mo3dcNzd1BWP3SXwfj6veYjUxhUoM/MF5EJJxKQZtlL3aXSZkSIJZK
   U=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43136826
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,275,1613451600"; 
   d="scan'208";a="43136826"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=XcYI9CNVaA33R3kJAOrFn7/6e6FHLe2PIER+tgbEWK5EcnQr6jluRLPSep1R+G2A963vEfd18SKe5Qi37rBjv2uvujE/D/5jdDa27bSzQDKJAdGlyjS0wlhGR3yCRXlbpzqSSbPS9AhPGw8cHOASxitILSTurU8+ZL9UKKHuqDiCiOEt8TCk9ubROu97tWfhFLFWftthpzmL1EqbUwE0hJCFSIKBw8WexulZtNIbyJpeNuD1qPtMqYpx1eu9TKtGiXTq3KCaCC4GKQUf1VpqxtfqFvzQAsy2fB4U20wCfKm4Wra5eWgtP7rHRDdK1xWfFrX7qxVDfJMdeLh7M0fDdw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qOv3lO6oyab0Ylfk09R7N9VExSynhyxDJZBR0IAyefE=;
 b=m+A2jm479sWQZXLT5Rn8h+dNn+O1NtOQHP/Q9dhd14xMcuFtV0mIh5OTk/GhV4rti++le0NXTwNO9J2aeAzKBopgwfqnd1c5t1kaPreKyWEAobqpDoyeMfBzlcJePtQF8fvFvCcJKPbcryNZKR8jJIz7uGPyqSIwKbTagzDtq8GXOVrq0z5YhpFluIga9sofge0ANodZ95nvmxFWvsFamowy4/vmQXVmxC8437+EV7FmewpbcdJr43QanaKfT3lkr9D6TYXENW87XQpmdQ6jYP5dkZI8jw2xUkKNXnmeVysGn9uYQBDDChlVOIsx7nLNmdFb+UpWqbeFGuavK4n1dQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qOv3lO6oyab0Ylfk09R7N9VExSynhyxDJZBR0IAyefE=;
 b=SRWkba/PKdCsESbZBxLcOvk31tQEmeqUpP7G9h3pm1xRINZQW1t2KYNoC/BRTMA1GDCzsJbF61Y0bCBYaWMi9SI3XH4rKLLUm6t4YitPvNOlajOpRaINWOP0Dy2xq/sC4gaQUaM0soeNGz7g+uk/wB21XaZ08G/3FMqXQHa0Ysk=
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <YJJtqyDOIkMxjvxW@Air-de-Roger>
 <8f6f339b-f025-2cd0-e666-a3083e79af3a@citrix.com>
 <YJKXZyCHpRg32tyc@Air-de-Roger>
 <38f5b74f-b005-784b-a92d-8ddb9e1b8d3c@citrix.com>
 <bb2089cf-74f3-a7f2-7001-21a0d009440f@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <8d786331-0d9a-2ec7-0fe5-ba86d4a2547c@citrix.com>
Date: Wed, 5 May 2021 15:50:28 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <bb2089cf-74f3-a7f2-7001-21a0d009440f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0451.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1aa::6) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0feeebd5-5422-4f7a-db69-08d90fd52304
X-MS-TrafficTypeDiagnostic: SJ0PR03MB5503:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <SJ0PR03MB5503A82BFEFD87C90A038F65BA599@SJ0PR03MB5503.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 0feeebd5-5422-4f7a-db69-08d90fd52304
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2021 14:50:34.2405
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VeFZ+taopqK9ZcaFG34OJ961/L6USVCCroIcafVRvH1y+y4u2anGtGU8F7whBhob7phIkk3u35yYTUwzkpUqMBMzEBnbk+oCwx+DEZNKDlk=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5503
X-OriginatorOrg: citrix.com

On 05/05/2021 15:48, Jan Beulich wrote:
> On 05.05.2021 16:29, Andrew Cooper wrote:
>> Technically, MXCSR_MASK is also a hard blocker to migration, but we
>> don't even have that data in a consumable form, and we just might be
>> extremely lucky and discover that it is restricted to non-64-bit CPUs.
> "it" being what here? The value's presence / absence in an {F,}XSAVE
> image? Or the precise value of it?

The precise value of it.  Migrating across the boundary where the
default changed will cause {F,}XRSTOR instructions to #GP.
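The rule being described can be sketched as a simple mask-subset check.  The helper name and constants below are illustrative only, not Xen's actual code:

```c
/*
 * Illustrative only, not Xen's interface: a source host whose
 * MXCSR_MASK permits bits that the destination host reserves is unsafe
 * to migrate from, because a restored MXCSR with such a bit set makes
 * FXRSTOR/XRSTOR raise #GP on the destination.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* 0xffbf: pre-DAZ default MXCSR_MASK; 0xffff: mask with DAZ (bit 6). */
#define MXCSR_MASK_LEGACY 0xffbfu
#define MXCSR_MASK_DAZ    0xffffu

bool mxcsr_mask_compatible(uint32_t src_mask, uint32_t dst_mask)
{
    /* Every MXCSR bit writable on the source must also be writable on
     * the destination, or a migrated image may trip a reserved bit. */
    return (src_mask & ~dst_mask) == 0;
}
```

Under this sketch, migrating from a pre-DAZ host to a DAZ-capable one is fine, but the reverse direction has to be blocked, since the guest may already have DAZ set in its saved MXCSR.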

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 05 14:54:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 14:54:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123161.232328 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leIua-0005QB-8t; Wed, 05 May 2021 14:54:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123161.232328; Wed, 05 May 2021 14:54:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leIua-0005Q4-5Z; Wed, 05 May 2021 14:54:04 +0000
Received: by outflank-mailman (input) for mailman id 123161;
 Wed, 05 May 2021 14:54:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+Yav=KA=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1leIuY-0005Py-6z
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 14:54:02 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c0509dc-768d-47a8-942a-4c866bba0c8b;
 Wed, 05 May 2021 14:54:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c0509dc-768d-47a8-942a-4c866bba0c8b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620226440;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=cmhikC7c3lMtBjGqm5v0z2QAiAMrj5DLOIpvqJ8HE40=;
  b=JKuHb+fms1DZ0WrfDSKlwq8faBteKerefXBvyZg5VvL5RymhGrs31qNY
   rZQuhn+U6Qr9sYtBn/zilfKuDq2BVFPsfVghbqT3rkamCGaP79guVQydL
   henaAG+Kukoyz6JnQBlEWmPWA1a2io6HFaywNAzGBSlY2DbGVOSeIt3wz
   8=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43137146
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,275,1613451600"; 
   d="scan'208";a="43137146"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=R/W8m54rXawgzNQUvM5+onSbZD6QEtey4arkDOoCvmUgignWOEW746n8BF9jf+Ty+eRNw+kEkfcmdwzt+zuogiRPZ7cYXU/DaQ8VM+c/dax1psQ9s3bLnPGEvscEF5SZTM0RojHGa6z76IUbp7U6vlx1hd+0RAFdCzuWDXSeOoGLhP9trR6TA2KZo9nss5pBW00GQmGfJJtUFNWsZ8zFHFvz+RvChq7+n4su5w2bdeu4ZSW+9PNi0U1hLyJVdD/TxWrLJ3TkOILgewvDGNBVYNAsM0gIgOyk5J6kan3sqz7LAA43iY28Y9/3lYoidlzLc39qESW/4zlsT0NfIo7FmA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0BeIkc5YgV60FmIqnHHCOUHZuk1rbyq/d/rUB/u3jwQ=;
 b=niSk8a4UGbOJHPVIR0A4vSOXLq79+SCqeJ3soQLmuIoVrQ/EJf7HoEKt4+6/1s8Gvfq1dHX0Hub8nVQs/5pae8Bm/eUnL/vfeJrJMFsCqyo9XdbynDrQRUjU7fYLIWtFztM4PPTTdtlm6u6Np3SBNxjEMh9IqhANkCjPlC/ORofBRGreFPP5IZwTqXTk5lK3SzQSF549B3qYzoC2yan+QpIX6qQ9TUnIjehMOtHSIeiFP5XlaJz1TV205L79IqhE+52Ef3EpgzMu+OIyZDauZSqkjY9k2pyNxWJE8KpZsi9oJdI6XKNVuGYnepd5Q9PP/e22kQ52u+VFr/hNdI+vhg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0BeIkc5YgV60FmIqnHHCOUHZuk1rbyq/d/rUB/u3jwQ=;
 b=SyGsQkou6wlltpQ4Ws0mx5kwPGXEKVRBpeJxQTfLBBtC7h5lM/WLK072js9enZGE/D7Mqn6r5sNBhwcoo0Kih1GBQ1BnGWNaeReJqY7W5ZvbnWHbsJmi3guW5oBqKQp1QhlxFvI8FSNXJbsSwFSElLgr5oV6ypYdTp/pZWVcKLg=
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-5-andrew.cooper3@citrix.com>
 <17501fdd-b9f0-3493-7d0d-8c5333fafa45@suse.com>
 <3f9ae28f-2fb7-0f4f-511b-93ba74ec3aeb@citrix.com>
 <ced9f20a-d420-6639-b041-710f7ec59613@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 4/5] x86/cpuid: Simplify recalculate_xstate()
Message-ID: <d918c2b6-7c31-6868-eb50-6a3db54fa4eb@citrix.com>
Date: Wed, 5 May 2021 15:53:49 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <ced9f20a-d420-6639-b041-710f7ec59613@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0295.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:196::12) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 711b21d6-c386-4099-1c78-08d90fd59b8d
X-MS-TrafficTypeDiagnostic: BYAPR03MB4246:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4246D78809CCC39CE1E7E639BA599@BYAPR03MB4246.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 711b21d6-c386-4099-1c78-08d90fd59b8d
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 05 May 2021 14:53:56.3012
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wx3c4OpE7xY5sxDnsLoaqIYvWgaipwKcYAKhiego5+5PNpHHNkHMmQ2U/utGhW2CT1YqDZ+3UjMhNwJzjpFrBEE3aFFb7+oPoZRhycog5CM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4246
X-OriginatorOrg: citrix.com

On 05/05/2021 09:19, Jan Beulich wrote:
> On 04.05.2021 15:58, Andrew Cooper wrote:
>> On 04/05/2021 13:43, Jan Beulich wrote:
>>> I'm also not happy at all to see you use a literal 3 here. We have
>>> a struct for this, after all.
>>>
>>>>          p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
>>>>          p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
>>>>      }
>>>> -    else
>>>> -        xstates &= ~XSTATE_XSAVES_ONLY;
>>>>  
>>>> -    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
>>>> +    /* Subleafs 2+ */
>>>> +    xstates &= ~XSTATE_FP_SSE;
>>>> +    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
>>>> +    for_each_set_bit ( i, &xstates, 63 )
>>>>      {
>>>> -        uint64_t curr_xstate = 1ul << i;
>>>> -
>>>> -        if ( !(xstates & curr_xstate) )
>>>> -            continue;
>>>> -
>>>> -        p->xstate.comp[i].size   = xstate_sizes[i];
>>>> -        p->xstate.comp[i].offset = xstate_offsets[i];
>>>> -        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
>>>> -        p->xstate.comp[i].align  = curr_xstate & xstate_align;
>>>> +        /*
>>>> +         * Pass through size (eax) and offset (ebx) directly.  Visbility of
>>>> +         * attributes in ecx limited by visible features in Da1.
>>>> +         */
>>>> +        p->xstate.raw[i].a = raw_cpuid_policy.xstate.raw[i].a;
>>>> +        p->xstate.raw[i].b = raw_cpuid_policy.xstate.raw[i].b;
>>>> +        p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;
>>> To me, going to raw[].{a,b,c,d} looks like a backwards move, to be
>>> honest. Both this and the literal 3 above make it harder to locate
>>> all the places that need changing if a new bit (like xfd) is to be
>>> added. It would be better if grep-ing for an existing field name
>>> (say "xss") would easily turn up all involved places.
>> It's specifically to reduce the number of areas needing editing when a
>> new state is added, and therefore the number of opportunities to screw
>> things up.
>>
>> As said in the commit message, I'm not even convinced that the ecx_bits
>> mask is necessary, as new attributes only come in with new behaviours of
>> new state components.
>>
>> If we choose to skip the ecx masking, then this loop body becomes even
>> more simple.  Just p->xstate.raw[i] = raw_cpuid_policy.xstate.raw[i].
>>
>> Even if Intel do break with tradition, and retrofit new attributes into
>> existing subleafs, leaking them to guests won't cause anything to
>> explode (the bits are still reserved after all), and we can fix anything
>> necessary at that point.
> I don't think this would necessarily go without breakage. What if,
> assuming XFD support is in, an existing component got XFD sensitivity
> added to it?

I think that is exceedingly unlikely to happen.

>  If, like you were suggesting elsewhere, and like I had
> it initially, we used a build-time constant for XFD-affected
> components, we'd break consuming guests. The per-component XFD bit
> (just to again take as example) also isn't strictly speaking tied to
> the general XFD feature flag (but to me it makes sense for us to
> enforce respective consistency). Plus, in general, the moment a flag
> is no longer reserved in the spec, it is not reserved anywhere
> anymore: An aware (newer) guest running on unaware (older) Xen ought
> to still function correctly.

They're still technically reserved, because of the masking of the XFD
bit in the feature leaf.

However, having pondered this for some time, we do need to retain the
attributes masking, because e.g. AMX && !XSAVEC is a legal (if very
unwise) combination to expose, and the align64 bits want to disappear
from the TILE state attributes.

Also, in terms of implementation, the easiest way to do something
plausible here is a dependency chain of XSAVE => XSAVEC (implies align64
masking) => XSAVES (implies xss masking) => CET_*.

XFD would depend on XSAVE, and would imply masking the xfd attribute.

This still leaves the broken corner case of the dynamic compressed size,
but I think I can live with that to avoid the added complexity of trying
to force XSAVEC == XSAVES.
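A minimal sketch of that dependency chain, where the struct and flag names are invented for illustration and are not the actual cpuid-policy fields:

```c
/*
 * Illustrative sketch of the masking chain described above; the names
 * here are made up for the example, not Xen's policy code.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define ATTR_XSS     (1u << 0)  /* component is XSAVES-only */
#define ATTR_ALIGN64 (1u << 1)  /* 64-byte aligned in compressed format */

struct feats { bool xsave, xsavec, xsaves; };

/* Which per-component attribute bits may remain visible to the guest. */
uint32_t visible_attr_mask(const struct feats *f)
{
    uint32_t mask = 0;

    if ( !f->xsave )
        return 0;               /* No XSAVE: the whole leaf is hidden. */
    if ( f->xsavec )
        mask |= ATTR_ALIGN64;   /* align64 only matters with XSAVEC. */
    if ( f->xsaves )
        mask |= ATTR_XSS;       /* xss only matters with XSAVES. */

    return mask;
}
```

XFD would slot in the same way: another link depending only on XSAVE, masking the xfd attribute bit when the feature is hidden.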

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 05 15:00:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 15:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123170.232344 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leJ1B-0006u5-67; Wed, 05 May 2021 15:00:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123170.232344; Wed, 05 May 2021 15:00:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leJ1B-0006ty-2O; Wed, 05 May 2021 15:00:53 +0000
Received: by outflank-mailman (input) for mailman id 123170;
 Wed, 05 May 2021 15:00:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leJ1A-0006ts-B1
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 15:00:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 50acb3b7-c15b-4179-8b82-9ee63b785d3b;
 Wed, 05 May 2021 15:00:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B9FF0AC2C;
 Wed,  5 May 2021 15:00:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50acb3b7-c15b-4179-8b82-9ee63b785d3b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620226850; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=WPZwS1jNW6X3xDIu0uFqYhK1qaYZcQoLoCHFd9oWP38=;
	b=aAbN8fQbGZSseffMb6iMpLqeLFM26cNHcN8JoA1TedST5b7NOYqJ9J0WF9Q9LULDt6zAn3
	7IIm+4J8Iw/u67eGn1C9vLfKxrRw8FTqL0rM/pogrk/69BZsQspmn7r4yEXAIgF/xjneC+
	aJIE7kzzUB1/w9LjzAqvHRnKVPpM7t4=
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <YJJtqyDOIkMxjvxW@Air-de-Roger>
 <8f6f339b-f025-2cd0-e666-a3083e79af3a@citrix.com>
 <YJKXZyCHpRg32tyc@Air-de-Roger>
 <38f5b74f-b005-784b-a92d-8ddb9e1b8d3c@citrix.com>
 <bb2089cf-74f3-a7f2-7001-21a0d009440f@suse.com>
 <8d786331-0d9a-2ec7-0fe5-ba86d4a2547c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d9e03aa4-bbf4-a1ef-bfab-2803e707f498@suse.com>
Date: Wed, 5 May 2021 17:00:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <8d786331-0d9a-2ec7-0fe5-ba86d4a2547c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.05.2021 16:50, Andrew Cooper wrote:
> On 05/05/2021 15:48, Jan Beulich wrote:
>> On 05.05.2021 16:29, Andrew Cooper wrote:
>>> Technically, MXCSR_MASK is also a hard blocker to migration, but we
>>> don't even have that data in a consumable form, and we just might be
>>> extremely lucky and discover that it is restricted to non-64-bit CPUs.
>> "it" being what here? The value's presence / absence in an {F,}XSAVE
>> image? Or the precise value of it?
> 
> The precise value of it.

Not sure whether DAZ is new enough, but relatively sure MM isn't.

>  Migrating across the boundary where the
> default changed will cause {F,}XRSTOR instructions to #GP.

That's understood. Is there actually anything standing in the way
of treating MXCSR_MASK like yet another feature flag group?

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 15:14:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 15:14:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123176.232355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leJER-0008O3-CR; Wed, 05 May 2021 15:14:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123176.232355; Wed, 05 May 2021 15:14:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leJER-0008Nw-90; Wed, 05 May 2021 15:14:35 +0000
Received: by outflank-mailman (input) for mailman id 123176;
 Wed, 05 May 2021 15:14:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6083=KA=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leJEQ-0008Nq-3g
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 15:14:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10a82fc6-a399-4bc1-a02e-b8c439005b35;
 Wed, 05 May 2021 15:14:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3C639AF2F;
 Wed,  5 May 2021 15:14:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10a82fc6-a399-4bc1-a02e-b8c439005b35
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620227672; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XjWSEWRU6/nWzzgsdY8cavazHB5HOUBR89Ly9J9tORI=;
	b=ljVaXIsKu7K59qRvLyJuGqGtAzeBTl2mUCoJmHEeGbPrPNSVobsTbgu6DgMXna+LVd6LMH
	G93yO4mAjdzFlPuSKOpl17npgw1uPdLUBb5GTWF+uLVnv3n3X+H1suaurPv3IJmpzYv5hv
	F/9y/wPCbdbzV973g1PbtbGAMPyiQSw=
Subject: Re: [PATCH 4/5] x86/cpuid: Simplify recalculate_xstate()
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-5-andrew.cooper3@citrix.com>
 <17501fdd-b9f0-3493-7d0d-8c5333fafa45@suse.com>
 <3f9ae28f-2fb7-0f4f-511b-93ba74ec3aeb@citrix.com>
 <ced9f20a-d420-6639-b041-710f7ec59613@suse.com>
 <d918c2b6-7c31-6868-eb50-6a3db54fa4eb@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4fc9bfc9-bf68-b7fc-a898-2c795ced28b4@suse.com>
Date: Wed, 5 May 2021 17:14:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <d918c2b6-7c31-6868-eb50-6a3db54fa4eb@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.05.2021 16:53, Andrew Cooper wrote:
> On 05/05/2021 09:19, Jan Beulich wrote:
>> On 04.05.2021 15:58, Andrew Cooper wrote:
>>> On 04/05/2021 13:43, Jan Beulich wrote:
>>>> I'm also not happy at all to see you use a literal 3 here. We have
>>>> a struct for this, after all.
>>>>
>>>>>          p->xstate.xss_low   =  xstates & XSTATE_XSAVES_ONLY;
>>>>>          p->xstate.xss_high  = (xstates & XSTATE_XSAVES_ONLY) >> 32;
>>>>>      }
>>>>> -    else
>>>>> -        xstates &= ~XSTATE_XSAVES_ONLY;
>>>>>  
>>>>> -    for ( i = 2; i < min(63ul, ARRAY_SIZE(p->xstate.comp)); ++i )
>>>>> +    /* Subleafs 2+ */
>>>>> +    xstates &= ~XSTATE_FP_SSE;
>>>>> +    BUILD_BUG_ON(ARRAY_SIZE(p->xstate.comp) < 63);
>>>>> +    for_each_set_bit ( i, &xstates, 63 )
>>>>>      {
>>>>> -        uint64_t curr_xstate = 1ul << i;
>>>>> -
>>>>> -        if ( !(xstates & curr_xstate) )
>>>>> -            continue;
>>>>> -
>>>>> -        p->xstate.comp[i].size   = xstate_sizes[i];
>>>>> -        p->xstate.comp[i].offset = xstate_offsets[i];
>>>>> -        p->xstate.comp[i].xss    = curr_xstate & XSTATE_XSAVES_ONLY;
>>>>> -        p->xstate.comp[i].align  = curr_xstate & xstate_align;
>>>>> +        /*
>>>>> +         * Pass through size (eax) and offset (ebx) directly.  Visbility of
>>>>> +         * attributes in ecx limited by visible features in Da1.
>>>>> +         */
>>>>> +        p->xstate.raw[i].a = raw_cpuid_policy.xstate.raw[i].a;
>>>>> +        p->xstate.raw[i].b = raw_cpuid_policy.xstate.raw[i].b;
>>>>> +        p->xstate.raw[i].c = raw_cpuid_policy.xstate.raw[i].c & ecx_bits;
>>>> To me, going to raw[].{a,b,c,d} looks like a backwards move, to be
>>>> honest. Both this and the literal 3 above make it harder to locate
>>>> all the places that need changing if a new bit (like xfd) is to be
>>>> added. It would be better if grep-ing for an existing field name
>>>> (say "xss") would easily turn up all involved places.
>>> It's specifically to reduce the number of areas needing editing when a
>>> new state is added, and therefore the number of opportunities to screw things up.
>>>
>>> As said in the commit message, I'm not even convinced that the ecx_bits
>>> mask is necessary, as new attributes only come in with new behaviours of
>>> new state components.
>>>
>>> If we choose to skip the ecx masking, then this loop body becomes even
>>> simpler.  Just p->xstate.raw[i] = raw_cpuid_policy.xstate.raw[i].
>>>
>>> Even if Intel do break with tradition, and retrofit new attributes into
>>> existing subleafs, leaking them to guests won't cause anything to
>>> explode (the bits are still reserved after all), and we can fix anything
>>> necessary at that point.
>> I don't think this would necessarily go without breakage. What if,
>> assuming XFD support is in, an existing component got XFD sensitivity
>> added to it?
> 
> I think that is exceedingly unlikely to happen.
> 
>>  If, like you were suggesting elsewhere, and like I had
>> it initially, we used a build-time constant for XFD-affected
>> components, we'd break consuming guests. The per-component XFD bit
>> (just to again take as example) also isn't strictly speaking tied to
>> the general XFD feature flag (but to me it makes sense for us to
>> enforce respective consistency). Plus, in general, the moment a flag
>> is no longer reserved in the spec, it is not reserved anywhere
>> anymore: An aware (newer) guest running on unaware (older) Xen ought
>> to still function correctly.
> 
> They're still technically reserved, because of the masking of the XFD
> bit in the feature leaf.
> 
> However, having pondered this for some time, we do need to retain the
> attributes masking, because e.g. AMX && !XSAVEC is a legal (if very
> unwise) combination to expose, and the align64 bits want to disappear
> from the TILE state attributes.
> 
> Also, in terms of implementation, the easiest way to do something
> plausible here is a dependency chain of XSAVE => XSAVEC (implies align64
> masking) => XSAVES (implies xss masking) => CET_*.
> 
> XFD would depend on XSAVE, and would imply masking the xfd attribute.

Okay, let's go with this then.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 15:18:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 15:18:27 +0000
To: Jan Beulich <jbeulich@suse.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210504213120.4179-1-andrew.cooper3@citrix.com>
 <YJJtqyDOIkMxjvxW@Air-de-Roger>
 <8f6f339b-f025-2cd0-e666-a3083e79af3a@citrix.com>
 <YJKXZyCHpRg32tyc@Air-de-Roger>
 <38f5b74f-b005-784b-a92d-8ddb9e1b8d3c@citrix.com>
 <bb2089cf-74f3-a7f2-7001-21a0d009440f@suse.com>
 <8d786331-0d9a-2ec7-0fe5-ba86d4a2547c@citrix.com>
 <d9e03aa4-bbf4-a1ef-bfab-2803e707f498@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH] libx86: Introduce x86_cpu_policy_calculate_compatible()
 with MSR_ARCH_CAPS handling
Message-ID: <03b94402-6233-b6a9-6621-064b5c6bce26@citrix.com>
Date: Wed, 5 May 2021 16:18:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <d9e03aa4-bbf4-a1ef-bfab-2803e707f498@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
MIME-Version: 1.0

On 05/05/2021 16:00, Jan Beulich wrote:
> On 05.05.2021 16:50, Andrew Cooper wrote:
>> On 05/05/2021 15:48, Jan Beulich wrote:
>>> On 05.05.2021 16:29, Andrew Cooper wrote:
>>>> Technically, MXCSR_MASK is also a hard blocker to migration, but we
>>>> don't even have that data in a consumable form, and we just might be
>>>> extremely lucky and discover that it is restricted to non-64-bit CPUs.
>>> "it" being what here? The value's presence / absence in an {F,}XSAVE
>>> image? Or the precise value of it?
>> The precise value of it.
> Not sure whether DAZ is new enough, but relatively sure MM isn't.
>
>>   Migrating across the boundary where the
>> default changed will cause {F,}XRSTOR instructions to #GP.
> That's understood. Is there actually anything standing in the way
> of treating MXCSR_MASK like yet another feature flag group?

Well - we'd need to find somewhere to put it.  It doesn't fit in
architectural CPUID or MSR information.

We could add a 3rd category of information in "a cpu policy", and that
is always an option.

However, this issue hasn't been a problem so far, affects only very
legacy CPUs at this point, and isn't high on my list of worries.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 05 16:09:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 16:09:39 +0000
From: Rahul Singh <Rahul.Singh@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 3/3] xen/pci: Refactor MSI code that implements MSI
 functionality within XEN
Date: Wed, 5 May 2021 16:09:04 +0000
Message-ID: <95E9A7E3-D13B-4DFD-8423-5951BB6976BB@arm.com>
References: <cover.1619707144.git.rahul.singh@arm.com>
 <60b4c33fdcc2f7ad68d383ffae191e22b0b32f1c.1619707144.git.rahul.singh@arm.com>
 <bbc50008-da47-a5e2-501b-a9c06ce38335@suse.com>
In-Reply-To: <bbc50008-da47-a5e2-501b-a9c06ce38335@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <29852C72184D584D90D4F2A9B3B4BD3C@eurprd08.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Jan,

> On 3 May 2021, at 3:46 pm, Jan Beulich <jbeulich@suse.com> wrote:
> 
> On 29.04.2021 16:46, Rahul Singh wrote:
>> MSI code that implements MSI functionality to support MSI within XEN is
>> not usable on ARM. Move the code under CONFIG_HAS_PCI_MSI_INTERCEPT flag
>> to gate the code for ARM.
>> 
>> Currently, we have no idea how MSI functionality will be supported for
>> other architecture therefore we have decided to move the code under
>> CONFIG_PCI_MSI_INTERCEPT. We know this is not the right flag to gate the
>> code but to avoid an extra flag we decided to use this.
> 
> My objection remains: Actively putting code under the wrong gating
> CONFIG_* is imo quite a bit worse than keeping it under a too wide one
> (e.g. CONFIG_X86), if introducing a separate CONFIG_HAS_PCI_MSI is
> deemed undesirable for whatever reason. Otherwise every abuse of
> CONFIG_PCI_MSI_INTERCEPT ought to get a comment to the effect of this
> being an abuse, which in particular for code you move into
> xen/drivers/passthrough/msi-intercept.c would end up sufficiently odd.
> (As a minor extra remark, putting deliberately misplaced code at the
> top of a file rather than at its bottom is likely to add to possible
> confusion down the road.)
> 

I understand that this is not the correct flag to gate the code. If we choose to
move the code under CONFIG_X86 there will be #ifdef in the common file
"passthrough/pci.c" that I think will make the code harder to understand. The only
option left is to introduce the new CONFIG_HAS_PCI_MSI option and new
non-arch files (msi.c, msi.h), and move all non-intercept-related code to those files.

As I mentioned earlier, as of now we have no data on how MSI will be supported
for non-x86 architectures, which is why we decided it is better to move the code
under CONFIG_PCI_MSI_INTERCEPT; at a later point in time we can modify the
code once a non-x86 architecture implements MSI functionality in XEN, if required.

I will move the code to the bottom of the file to avoid confusion.

Regards,
Rahul
> Jan


From xen-devel-bounces@lists.xenproject.org Wed May 05 16:59:47 2021
To: Jan Beulich <jbeulich@suse.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
	<wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-6-andrew.cooper3@citrix.com>
 <5e6511ca-83bd-8a43-202e-949b4d19b1ab@suse.com>
 <1279476a-f99d-59a4-7fed-1aee37dbe204@citrix.com>
 <d951dc24-e613-8a1d-13ea-b1e439048165@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 5/5] x86/cpuid: Fix handling of xsave dynamic leaves
Message-ID: <7ce89520-7cca-6c9f-f229-37515116d74c@citrix.com>
Date: Wed, 5 May 2021 17:59:28 +0100
In-Reply-To: <d951dc24-e613-8a1d-13ea-b1e439048165@suse.com>

On 05/05/2021 09:33, Jan Beulich wrote:
> On 04.05.2021 16:17, Andrew Cooper wrote:
>> On 04/05/2021 13:56, Jan Beulich wrote:
>>> On 03.05.2021 17:39, Andrew Cooper wrote:
>>>> +unsigned int xstate_compressed_size(uint64_t xstates)
>>>> +{
>>>> +    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
>>>> +
>>>> +    xstates &= ~XSTATE_FP_SSE;
>>>> +    for_each_set_bit ( i, &xstates, 63 )
>>>> +    {
>>>> +        if ( test_bit(i, &xstate_align) )
>>>> +            size = ROUNDUP(size, 64);
>>>> +
>>>> +        size += xstate_sizes[i];
>>>> +    }
>>>> +
>>>> +    /* In debug builds, cross-check our calculation with hardware. */
>>>> +    if ( IS_ENABLED(CONFIG_DEBUG) )
>>>> +    {
>>>> +        unsigned int hwsize;
>>>> +
>>>> +        xstates |= XSTATE_FP_SSE;
>>>> +        hwsize = hw_compressed_size(xstates);
>>>> +
>>>> +        if ( size != hwsize )
>>>> +            printk_once(XENLOG_ERR "%s(%#"PRIx64") size %#x != hwsize %#x\n",
>>>> +                        __func__, xstates, size, hwsize);
>>>> +        size = hwsize;
>>> To be honest, already on the earlier patch I was wondering whether
>>> it does any good to override size here: That'll lead to different
>>> behavior on debug vs release builds. If the log message is not
>>> paid attention to, we'd then end up with longer term breakage.
>> Well - our options are pass hardware size, or BUG(), because getting
>> this wrong will cause memory corruption.
> I'm afraid I'm lost: Neither passing hardware size nor BUG() would
> happen in a release build, so getting this wrong does mean memory
> corruption there. And I'm of the clear opinion that debug builds
> shouldn't differ in behavior in such regards.

The point of not cross-checking with hardware in release builds is to
remove a bunch of very expensive operations from a path which is hit
several times on every fork(), so it isn't exactly rare.

But yes - the consequence of being wrong, for whatever reason, is memory
corruption (especially as the obvious way it goes wrong is having an
xsave_size[] of 0, so we end up under-reporting).

So one way or another, we need to ensure that every newly exposed option
is tested in a debug build first. The integration is either complete,
or it isn't, so I don't think this is terribly onerous to do.


As for actually having a behaviour difference between debug and release,
I'm not concerned.

Hitting this failure means "there is definitely a serious error in Xen,
needing fixing", but it will only be encountered during the development
of a new feature, so it is, for all practical purposes, limited to the
development of the new feature in question.

BUG() gets the point across very obviously, but is unhelpful when in
practice the test system won't explode, because the dom0 kernel won't be
using this new state yet.

> If there wasn't an increasing number of possible combinations I
> would be inclined to suggest that in all builds we check during
> boot that calculation and hardware provided values match for all
> possible (valid) combinations.

I was actually considering an XTF test on the matter, which would be a
cleaned-up version of the one that generated the example in the cover letter.

As above, the logic is only liable to be wrong during development of support
for a new state component, which is why it is reasonable to take away
the overhead in release builds.

>> The BUG() option is a total pain when developing new support - the first
>> version of this patch did use BUG(), but after hitting it 4 times in a
>> row (caused by issues with constants elsewhere), I decided against it.
> I can fully understand this aspect. I'm not opposed to printk_once()
> at all. My comment was purely towards the override.
>
>> If we had something which was a mix between WARN_ONCE() and a proper
>> printk() explaining what was going on, then I would have used that.
>> Maybe it's time to introduce one...
> I don't think there's a need here - what you have in terms of info
> put into the log is imo sufficient.

Well - it needs to be sufficiently obvious to people who aren't you and
me. There are also other areas in Xen which would benefit from changing
their diagnostics to be as described.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 05 17:12:28 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161793-mainreport@xen.org>
Subject: [xen-unstable-smoke test] 161793: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 17:12:12 +0000

flight 161793 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161793/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  b066bd195c04f15ca396ce427c03da1e14849197
baseline version:
 xen                  8cccd6438e86112ab383e41b433b5a7e73be9621

Last test of basis   161775  2021-05-04 19:02:53 Z    0 days
Testing same since   161793  2021-05-05 15:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8cccd6438e..b066bd195c  b066bd195c04f15ca396ce427c03da1e14849197 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 05 17:29:39 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161772-mainreport@xen.org>
Subject: [xen-4.15-testing test] 161772: tolerable FAIL - PUSHED
X-Osstest-Versions-This:
    xen=280d472f4fca070a10377e318d90cabfc2540810
X-Osstest-Versions-That:
    xen=eb1f325186be9e02c3e89619cd154eb57f202eba
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 17:29:32 +0000

flight 161772 xen-4.15-testing real [real]
flight 161794 xen-4.15-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161772/
http://logs.test-lab.xenproject.org/osstest/logs/161794/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-libvirt-xsm  8 xen-boot            fail pass in 161794-retest
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 161794-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 161794 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 161794 never pass
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161322
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161322
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161322
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161322
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161322
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161322
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161322
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161322
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161322
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161322
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161322
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  280d472f4fca070a10377e318d90cabfc2540810
baseline version:
 xen                  eb1f325186be9e02c3e89619cd154eb57f202eba

Last test of basis   161322  2021-04-20 10:06:36 Z   15 days
Testing same since   161772  2021-05-04 13:07:50 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   eb1f325186..280d472f4f  280d472f4fca070a10377e318d90cabfc2540810 -> stable-4.15


From xen-devel-bounces@lists.xenproject.org Wed May 05 17:35:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 17:35:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123241.232444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLQS-0007Uu-B6; Wed, 05 May 2021 17:35:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123241.232444; Wed, 05 May 2021 17:35:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLQS-0007Un-83; Wed, 05 May 2021 17:35:08 +0000
Received: by outflank-mailman (input) for mailman id 123241;
 Wed, 05 May 2021 17:35:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leLQR-0007Uh-PL
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 17:35:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLQQ-0006rC-Px; Wed, 05 May 2021 17:35:06 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLQQ-0001zW-Jo; Wed, 05 May 2021 17:35:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=vO2/1BMS+rEXM2serPa+0DZbUGvvSNfRrgTNgUBOWD8=; b=waXZYXBAXPwQrd/f+fy1SucVLd
	6uitCS/n0zeyQq6A0yNpP5w+O21WmRIh3vqS1CHjBh5FnH1C1/haF2X1SjefVjNhxou/I52Bls8tM
	eNqseV1Iw/FKFNjmZjgKrYezXPQXX/8FOAtMT10lMfFW7h+Sb0G1ClQrJzXbbYZnqWU4=;
Subject: Re: [PATCH v4 3/3] unzstd: make helper symbols static
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12054cba-4386-0dbf-46fd-41ace0344f8e@suse.com>
 <759c8524-cc01-fac8-bc62-0ba6558477bd@suse.com>
 <cb8fa703-f421-ce55-811a-d4a649bc201a@xen.org>
 <1696e5f2-481a-5a7f-258d-b2a0679b041f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f6e00fd9-a207-858e-37e8-fb25427cf8de@xen.org>
Date: Wed, 5 May 2021 18:35:04 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <1696e5f2-481a-5a7f-258d-b2a0679b041f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 29/04/2021 14:26, Jan Beulich wrote:
> On 29.04.2021 13:27, Julien Grall wrote:
>> On 21/04/2021 11:22, Jan Beulich wrote:
>>> While for the original library's purposes these functions of course want
>>> to be externally exposed, we don't need this, and we also don't want
>>> this both to prevent unintended use and to keep the name space tidy.
>>> (When functions have no callers at all, wrap them with a suitable
>>> #ifdef.) This has the added benefit of reducing the resulting binary
>>> size - while this is all .init code, it's still desirable to not carry
>>> dead code.
>>
>> So I understand the desire to keep the code close to Linux and removing
>> the dead code. However I am still not convinced that the approach taken
>> is actually worth the amount of memory saved.
>>
>> How much memory are we talking about here?
> 
> There are no (runtime) memory savings, as is being said by the
> description. There are savings on the image and symbol table sizes
> (see below - .*.0/ holding files as produced without the patch
> applied, while .*.1/ holding output with it in place), the image
> size reduction part of which is - as also expressed by the
> description - a nice side effect, but not the main motivation for
> the change.

Thanks for providing the information. I had misunderstood your 
original intention.

Reading them again, I have to admit this doesn't really change my view 
here. You are trading code maintainability/readability for a smaller 
name space and the prevention of unintended use (it is not clear what 
would be wrong with calling them).

At the same time, this is not code I usually work on (even if I am 
meant to maintain it). I will leave it to another maintainer to make 
the decision here.

Cheers,

-- 
Julien Grall
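
[Editorial note: for readers following the thread, the trade-off under
discussion — giving imported library helpers internal linkage and
compiling out the ones with no callers — can be sketched roughly as
below. The names (xz_crc32_update, NEED_CRC_RESET) are hypothetical,
not the actual unzstd code.]

```c
#include <stdint.h>
#include <stddef.h>

/* Was exported by the original library; making it static keeps it
 * out of the global symbol table and off the image's symbol list. */
static uint32_t xz_crc32_update(uint32_t crc, const uint8_t *buf, size_t len)
{
    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int k = 0; k < 8; k++)
            crc = (crc >> 1) ^ (0xEDB88320u & -(crc & 1));
    }
    return crc;
}

#ifdef NEED_CRC_RESET
/* No callers at present: wrapped in an #ifdef so the dead code is
 * compiled out entirely unless a future user defines NEED_CRC_RESET. */
static uint32_t xz_crc32_reset(void)
{
    return 0xFFFFFFFFu;
}
#endif

/* The one entry point the embedding code actually needs stays visible. */
uint32_t xz_crc32(const uint8_t *buf, size_t len)
{
    return xz_crc32_update(0xFFFFFFFFu, buf, len) ^ 0xFFFFFFFFu;
}
```

The readability cost Julien raises is visible here too: the #ifdef adds
noise and diverges from the upstream source being tracked.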


From xen-devel-bounces@lists.xenproject.org Wed May 05 17:42:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 17:42:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123250.232455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLXL-0000S5-3u; Wed, 05 May 2021 17:42:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123250.232455; Wed, 05 May 2021 17:42:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLXL-0000Ry-12; Wed, 05 May 2021 17:42:15 +0000
Received: by outflank-mailman (input) for mailman id 123250;
 Wed, 05 May 2021 17:42:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zu7P=KA=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1leLXJ-0000Rs-I0
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 17:42:13 +0000
Received: from mail-wr1-x42c.google.com (unknown [2a00:1450:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 030085a5-d1c5-4ed7-8586-a6807b44342a;
 Wed, 05 May 2021 17:42:12 +0000 (UTC)
Received: by mail-wr1-x42c.google.com with SMTP id l2so2738284wrm.9
 for <xen-devel@lists.xenproject.org>; Wed, 05 May 2021 10:42:12 -0700 (PDT)
Received: from [192.168.1.186]
 (host86-180-176-157.range86-180.btcentralplus.com. [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id q12sm5465702wmj.7.2021.05.05.10.42.11
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 05 May 2021 10:42:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 030085a5-d1c5-4ed7-8586-a6807b44342a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=5ck4hA6Xxm2yKJj3VL+ZRwH+wBOpSKpRNDlAP7W/xkY=;
        b=sqQ1P5bx1mEgzNpROO/fxDPLD91nT9gDAoarQLa7BmVjX+WVZbS3rBTTNHLYFjIFDH
         5zj2pxzaeX4XMXJCmuJ6jzw0XeQOFEt2APMz30Aom4+taAvFdx5rM2KAHCYZhx/hu95P
         TrvajzfUn6kYigiFSjofr5NVPr/kEHrz2NAfon1+zIAojtRgbhDWmBW3hWhXus1zpYbi
         0OqHDmOwMs97yExn2XSUhf6RKK2G3i7E0Ji0i/uUwDcXEcqLiywbnNAbo21yz/g1kPW6
         hakyTnxKg2W8Do32DUD9qoCuw6A4LfozyWo2O6VPiBK27Rqpzme96FXVe1X9FavAI7/T
         phCQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=5ck4hA6Xxm2yKJj3VL+ZRwH+wBOpSKpRNDlAP7W/xkY=;
        b=Y2ymKZtJX6t6Vrb9e4tNXE6lFj1LKx8dItQM8b6IjdFkbZTFwUNl4odrIf+sZTg8SD
         RwvGJvMiAOOeXYyOZQXpihRvj+AI5X7/lUR5ciuX4nDZUTBroRJ5kTsKSf8UDoYRf9gj
         gplOJaifwfdVBR3YObuCPclGRz9b0iU18Jhg3uhi8jZRlu5o/SLbGQAJgQDALFInApRr
         od9QknUryiYn2r1ih3t4Yzc2XPTzYhgxR/efoOgKtMdez0EkdehlIYDSjcaT8cpbS2TY
         SVGmQrGfOrozo5aIOupzwb2B2YyJHHJu9xxiSA0BlwMXlvpJNWOp04fm6OiN2jSeLXJq
         48MQ==
X-Gm-Message-State: AOAM531dMD2LbiVhGtyZDL8xyohVQhkPvtZy25283vubLXG2kgPWyT63
	Hb6Gtj8MNeCe4pAFv/k6VNuDgym6zRs=
X-Google-Smtp-Source: ABdhPJyKeePxveTaMDHr01H0ue6iC6KDu7zCB1oEN0SxiOtXKt6kgydav7ZtpWMIR5bgS3/BkLoc6w==
X-Received: by 2002:adf:a316:: with SMTP id c22mr229727wrb.202.1620236531877;
        Wed, 05 May 2021 10:42:11 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH] xen: Free xenforeignmemory_resource at exit
To: Anthony PERARD <anthony.perard@citrix.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
References: <20210430163742.469739-1-anthony.perard@citrix.com>
Message-ID: <32263046-97a5-b163-ff23-746effb5c7e4@xen.org>
Date: Wed, 5 May 2021 18:42:10 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210430163742.469739-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30/04/2021 17:37, Anthony PERARD wrote:
> From: Anthony PERARD <anthony.perard@citrix.com>
> 
> Because Coverity complains about it and this is one leak that Valgrind
> reports.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Paul Durrant <paul@xen.org>

> ---
>   hw/i386/xen/xen-hvm.c       | 9 ++++++---
>   include/hw/xen/xen_common.h | 6 ++++++
>   2 files changed, 12 insertions(+), 3 deletions(-)
> 
> diff --git a/hw/i386/xen/xen-hvm.c b/hw/i386/xen/xen-hvm.c
> index 7ce672e5a5c3..47ed7772fa39 100644
> --- a/hw/i386/xen/xen-hvm.c
> +++ b/hw/i386/xen/xen-hvm.c
> @@ -109,6 +109,7 @@ typedef struct XenIOState {
>       shared_iopage_t *shared_page;
>       shared_vmport_iopage_t *shared_vmport_page;
>       buffered_iopage_t *buffered_io_page;
> +    xenforeignmemory_resource_handle *fres;
>       QEMUTimer *buffered_io_timer;
>       CPUState **cpu_by_vcpu_id;
>       /* the evtchn port for polling the notification, */
> @@ -1254,6 +1255,9 @@ static void xen_exit_notifier(Notifier *n, void *data)
>       XenIOState *state = container_of(n, XenIOState, exit);
>   
>       xen_destroy_ioreq_server(xen_domid, state->ioservid);
> +    if (state->fres != NULL) {
> +        xenforeignmemory_unmap_resource(xen_fmem, state->fres);
> +    }
>   
>       xenevtchn_close(state->xce_handle);
>       xs_daemon_close(state->xenstore);
> @@ -1321,7 +1325,6 @@ static void xen_wakeup_notifier(Notifier *notifier, void *data)
>   static int xen_map_ioreq_server(XenIOState *state)
>   {
>       void *addr = NULL;
> -    xenforeignmemory_resource_handle *fres;
>       xen_pfn_t ioreq_pfn;
>       xen_pfn_t bufioreq_pfn;
>       evtchn_port_t bufioreq_evtchn;
> @@ -1333,12 +1336,12 @@ static int xen_map_ioreq_server(XenIOState *state)
>        */
>       QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_bufioreq != 0);
>       QEMU_BUILD_BUG_ON(XENMEM_resource_ioreq_server_frame_ioreq(0) != 1);
> -    fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
> +    state->fres = xenforeignmemory_map_resource(xen_fmem, xen_domid,
>                                            XENMEM_resource_ioreq_server,
>                                            state->ioservid, 0, 2,
>                                            &addr,
>                                            PROT_READ | PROT_WRITE, 0);
> -    if (fres != NULL) {
> +    if (state->fres != NULL) {
>           trace_xen_map_resource_ioreq(state->ioservid, addr);
>           state->buffered_io_page = addr;
>           state->shared_page = addr + TARGET_PAGE_SIZE;
> diff --git a/include/hw/xen/xen_common.h b/include/hw/xen/xen_common.h
> index 82e56339dd7e..a8118b41acfb 100644
> --- a/include/hw/xen/xen_common.h
> +++ b/include/hw/xen/xen_common.h
> @@ -134,6 +134,12 @@ static inline xenforeignmemory_resource_handle *xenforeignmemory_map_resource(
>       return NULL;
>   }
>   
> +static inline int xenforeignmemory_unmap_resource(
> +    xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
> +{
> +    return 0;
> +}
> +
>   #endif /* CONFIG_XEN_CTRL_INTERFACE_VERSION < 41100 */
>   
>   #if CONFIG_XEN_CTRL_INTERFACE_VERSION < 41000
> 



From xen-devel-bounces@lists.xenproject.org Wed May 05 17:44:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 17:44:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123253.232468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLZY-00015c-JP; Wed, 05 May 2021 17:44:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123253.232468; Wed, 05 May 2021 17:44:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLZY-00015V-EC; Wed, 05 May 2021 17:44:32 +0000
Received: by outflank-mailman (input) for mailman id 123253;
 Wed, 05 May 2021 17:44:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zu7P=KA=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1leLZX-00015P-CA
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 17:44:31 +0000
Received: from mail-wm1-x331.google.com (unknown [2a00:1450:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e7ec46c-3aae-4770-a7b3-43cbc90c91ee;
 Wed, 05 May 2021 17:44:30 +0000 (UTC)
Received: by mail-wm1-x331.google.com with SMTP id
 a10-20020a05600c068ab029014dcda1971aso1794126wmn.3
 for <xen-devel@lists.xenproject.org>; Wed, 05 May 2021 10:44:30 -0700 (PDT)
Received: from [192.168.1.186]
 (host86-180-176-157.range86-180.btcentralplus.com. [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id f6sm24181718wru.72.2021.05.05.10.44.29
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 05 May 2021 10:44:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e7ec46c-3aae-4770-a7b3-43cbc90c91ee
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=ojVaaEUcq5yFWfKGP37S9IFum4ExwHTnU4DANKZUVC0=;
        b=F09lbqS2Jp9Mk94IzrycRvHhuUtDpQvmXn4KhrImDMIDNiSHov9ij6EwzeRyN1lgcn
         jAq5QqmOOuAc62s6ORKq9D/oujZaaYIwEVzedas63clK3twA1mhcLhzWuEFCmWAQUo5U
         Q7QyzJZVLxL7DcDbxREEeHRGz0V5FxBJOrO/MWWUv78ipZheqCE8qzmuaBBx9K0zPpyZ
         fQaZ4o1cGL9ivwi2/oFqGFsnYIWFy2Oyw946ZjLm0x0QA2Z21FkjpcMDtIwCoVVkR8Os
         J6fI7XREPbQe4GnIb9MeK0VeYGNoXwK5itvvTdPV9MfXF2/19mOEJkbTtbxS+VEbioXt
         sUEA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=ojVaaEUcq5yFWfKGP37S9IFum4ExwHTnU4DANKZUVC0=;
        b=MP+0KswdHhm9eeRsWuxMQlAKUMLrh8mhAbwGCjs9vtHnX1wwb3E6IZMYteRlcO6F+B
         scJT+f6HEqpuRCsCQfisc2SuSwYQUAfscLGtD352F05wl0H5N6DYdQb3kf/KNu65FqS2
         TJP+jyHY/igDT9Twed13zymm4RPQbuPRxBae0Uc7UQvGhrEU1POQE2ddhb9kUO6Aher6
         ZreMQf1HVN4H5d2Wm7y76Pspj8f5B5duGg/ULI7VmQDgOFH4SYEI6+EyRqfC76343aXM
         gHFks7YFyWDTx+Z+2MDjt/kXPvzvGTuy1xokw+7/YQ8ZvCfgIAzwZtVPxLHqtNBWg+/3
         VOLg==
X-Gm-Message-State: AOAM5304Fc+qPLlCC/YWvyN0l2GG+mg/JVicZW+kaW6Ss/EbP1Z8u5F9
	qX/6tlXgAuL6/8iJOFGNIYU=
X-Google-Smtp-Source: ABdhPJyD4Xmy2bYWbn2ga0q7SPczJa65W03cnti5XHp4QguJjivbml8Wn45eEAn/1KgXS4ug/r7a9Q==
X-Received: by 2002:a1c:6606:: with SMTP id a6mr10938884wmc.160.1620236669808;
        Wed, 05 May 2021 10:44:29 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH] xen-block: Use specific blockdev driver
To: Anthony PERARD <anthony.perard@citrix.com>, qemu-devel@nongnu.org
Cc: Stefano Stabellini <sstabellini@kernel.org>, Kevin Wolf
 <kwolf@redhat.com>, Max Reitz <mreitz@redhat.com>,
 xen-devel@lists.xenproject.org, qemu-block@nongnu.org
References: <20210430163432.468894-1-anthony.perard@citrix.com>
Message-ID: <05554fc3-e900-e5b2-eef7-3155f8c9b4b4@xen.org>
Date: Wed, 5 May 2021 18:44:28 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210430163432.468894-1-anthony.perard@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30/04/2021 17:34, Anthony PERARD wrote:
> From: Anthony PERARD <anthony.perard@citrix.com>
> 
> ... when a xen-block backend instance is created via xenstore.
> 
> Following 8d17adf34f50 ("block: remove support for using "file" driver
> with block/char devices"), using the "file" blockdev driver for
> everything doesn't work anymore, we need to use the "host_device"
> driver when the disk image is a block device and "file" driver when it
> is a regular file.
> 
> Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>

Acked-by: Paul Durrant <paul@xen.org>

> ---
>   hw/block/xen-block.c | 14 +++++++++++++-
>   1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/hw/block/xen-block.c b/hw/block/xen-block.c
> index 83754a434481..674953f1adee 100644
> --- a/hw/block/xen-block.c
> +++ b/hw/block/xen-block.c
> @@ -728,6 +728,8 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
>       XenBlockDrive *drive = NULL;
>       QDict *file_layer;
>       QDict *driver_layer;
> +    struct stat st;
> +    int rc;
>   
>       if (params) {
>           char **v = g_strsplit(params, ":", 2);
> @@ -761,7 +763,17 @@ static XenBlockDrive *xen_block_drive_create(const char *id,
>       file_layer = qdict_new();
>       driver_layer = qdict_new();
>   
> -    qdict_put_str(file_layer, "driver", "file");
> +    rc = stat(filename, &st);
> +    if (rc) {
> +        error_setg_errno(errp, errno, "Could not stat file '%s'", filename);
> +        goto done;
> +    }
> +    if (S_ISBLK(st.st_mode)) {
> +        qdict_put_str(file_layer, "driver", "host_device");
> +    } else {
> +        qdict_put_str(file_layer, "driver", "file");
> +    }
> +
>       qdict_put_str(file_layer, "filename", filename);
>       g_free(filename);
>   
> 



From xen-devel-bounces@lists.xenproject.org Wed May 05 18:03:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 18:03:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123259.232483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLrz-0003Sy-7T; Wed, 05 May 2021 18:03:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123259.232483; Wed, 05 May 2021 18:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLrz-0003Sr-4X; Wed, 05 May 2021 18:03:35 +0000
Received: by outflank-mailman (input) for mailman id 123259;
 Wed, 05 May 2021 18:03:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leLrx-0003Sl-Rj
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 18:03:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLrw-0007S0-N5; Wed, 05 May 2021 18:03:32 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLrw-0004Ht-HD; Wed, 05 May 2021 18:03:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=pJbHyV4z2ohhk1VnAZPH6PxVRo4S+PfURaaaI670+8g=; b=btRFDEabB9PoYjH5FPWbkyS9WJ
	b/8YBpXvkF0jWOw/6D3viB1vjIhd0eYpsQzgBq3RGlvIh+UICmuKGjLFyKJ9jaMU5uBE83yo40kAG
	IKMS9N2uGzplriWxT5PHuNkaA0JxsLQb+x4K4wX2dRiqSxkmEpNROAqsGt3GHf5n39zg=;
Subject: Re: [PATCH v3 02/10] arm/domain: Get rid of READ/WRITE_SYSREG32
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-3-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fd49021f-c437-fd0c-b3a8-e3a237e633be@xen.org>
Date: Wed, 5 May 2021 19:03:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210505074308.11016-3-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Michal,

On 05/05/2021 08:43, Michal Orzel wrote:
> AArch64 registers are 64bit whereas AArch32 registers
> are 32bit or 64bit. MSR/MRS are expecting 64bit values thus
> we should get rid of helpers READ/WRITE_SYSREG32
> in favour of using READ/WRITE_SYSREG.
> We should also use register_t type when reading sysregs
> which can correspond to uint64_t or uint32_t.
> Even though many AArch64 registers have upper 32bit reserved
> it does not mean that they can't be widened in the future.
> 
> Modify type of register cntkctl to register_t.
> 
> Modify accesses to ThumbEE registers to use READ/WRITE_SYSREG.
> ThumbEE registers are only usable by a 32bit domain and in fact
> should only be accessed on ARMv7, as they were retrospectively dropped
> on ARMv8.

Sorry for not replying on v2. How about:

"
Thumbee registers are only usable by a 32-bit domain and therefore we 
can just store the bottom 32-bit (IOW there is no type change). In fact, 
this could technically be restricted to Armv7 HW (the support was 
dropped retrospectively in Armv8) but leave it as-is for now.
"

If you are happy with it, I will do it on commit.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 18:04:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 18:04:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123262.232496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLtL-00043P-Jb; Wed, 05 May 2021 18:04:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123262.232496; Wed, 05 May 2021 18:04:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLtL-00043I-Gh; Wed, 05 May 2021 18:04:59 +0000
Received: by outflank-mailman (input) for mailman id 123262;
 Wed, 05 May 2021 18:04:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leLtK-000436-92
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 18:04:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLtI-0007TF-W6; Wed, 05 May 2021 18:04:56 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLtI-0004No-QH; Wed, 05 May 2021 18:04:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/DYUmA5f9c+lzpD5sDjrB0XYy8SP54I5BlRu7i/Rd+4=; b=Gd6P4ukskumqdlntqbLqkC+Iv4
	yD/ecaOeYGZ6m3cFO7bECFuFgmsqDPzZvpr+/Yc6rCuaJVJcX5CHL/2X41IBlkI8JaXzvCM9B7O9n
	A2UGdgDqFSzCwk41Oxbvp3OmzFUZqPMGuVR7odSBx6tTXgGotU2YPBfLFGvcxywLYhtU=;
Subject: Re: [PATCH v3 03/10] arm: Modify type of actlr to register_t
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-4-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a54f17a9-b977-a264-afd0-551cc56d6f52@xen.org>
Date: Wed, 5 May 2021 19:04:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210505074308.11016-4-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Michal,

On 05/05/2021 08:43, Michal Orzel wrote:
> AArch64 registers are 64bit whereas AArch32 registers
> are 32bit or 64bit. MSR/MRS are expecting 64bit values thus
> we should get rid of helpers READ/WRITE_SYSREG32
> in favour of using READ/WRITE_SYSREG.
> We should also use register_t type when reading sysregs
> which can correspond to uint64_t or uint32_t.
> Even though many AArch64 registers have upper 32bit reserved
> it does not mean that they can't be widened in the future.
> 
> ACTLR_EL1 system register bits are implementation defined
> which means it is possibly a latent bug on current HW as the CPU
> implementer may already have decided to use the top 32 bits.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

This patch will want to be backported.

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 18:06:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 18:06:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123265.232508 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLv1-0004fi-VO; Wed, 05 May 2021 18:06:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123265.232508; Wed, 05 May 2021 18:06:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLv1-0004fb-S6; Wed, 05 May 2021 18:06:43 +0000
Received: by outflank-mailman (input) for mailman id 123265;
 Wed, 05 May 2021 18:06:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leLv1-0004fV-1C
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 18:06:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLuz-0007Wa-Tp; Wed, 05 May 2021 18:06:41 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLuz-0004WL-Nz; Wed, 05 May 2021 18:06:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=tNAWVrdaQhtGTqwXr1cIycPbH/1Akop+EME/lo8dk5o=; b=cykerznRqq0MqqVIYAFbrxFvyG
	pxr/+UwH7GnSQ/LLPrqB+wkogXLoRcyEuqEPCX0CIVqtJChU4nANiAjN43ahmmNCk5W7qiFcH8DLm
	2rQSFw+Bg6VRi4MrJrLlgOcovUSyFWT5zt89wXy0LKL+q/n48NiQDhygo0IYxG9uHihY=;
Subject: Re: [PATCH v3 05/10] arm/gic: Get rid of READ/WRITE_SYSREG32
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-6-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a2cdf940-bec8-05cc-2c08-b66aff357bc2@xen.org>
Date: Wed, 5 May 2021 19:06:39 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210505074308.11016-6-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Michal,

On 05/05/2021 08:43, Michal Orzel wrote:
> AArch64 registers are 64bit whereas AArch32 registers
> are 32bit or 64bit. MSR/MRS are expecting 64bit values thus
> we should get rid of helpers READ/WRITE_SYSREG32
> in favour of using READ/WRITE_SYSREG.
> We should also use register_t type when reading sysregs
> which can correspond to uint64_t or uint32_t.
> Even though many AArch64 registers have upper 32bit reserved
> it does not mean that they can't be widened in the future.
> 
> Modify types of following members of struct gic_v3 to register_t:
> -vmcr
> -sre_el1
> -apr0
> -apr1
> 
> Add new macro GICC_IAR_INTID_MASK containing the mask
> for the INTID field of the ICC_IAR0/1_EL1 register, as only the first
> 24 bits of IAR contain the interrupt number. The rest are RES0.
> Therefore, take the opportunity to mask out bits [31:24] as
> they should not be used for an IRQ number (we don't know how the top
> bits will be used).
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 18:08:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 18:08:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123269.232520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLwJ-0005JT-Ag; Wed, 05 May 2021 18:08:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123269.232520; Wed, 05 May 2021 18:08:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leLwJ-0005JM-7U; Wed, 05 May 2021 18:08:03 +0000
Received: by outflank-mailman (input) for mailman id 123269;
 Wed, 05 May 2021 18:08:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1leLwH-0005J9-Mx
 for xen-devel@lists.xenproject.org; Wed, 05 May 2021 18:08:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLwG-0007Xy-M4; Wed, 05 May 2021 18:08:00 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1leLwG-0004d3-GN; Wed, 05 May 2021 18:08:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zp4bXx3kOruJRP+AQYdn7Zevw+C9xfLRB4BzJ1fEqH4=; b=vthv/7/bfAtmlcya/HbcUuu9gp
	pPrQtX8k1H62qEOr4cTEh0jiU27jYItxkRxopzdYEfLE5NCKAeo8AkUeQG2Vnjj0vNni9nFEutYvt
	QKeEBx8WBOvdQBm2CDAZcUoIYLkOb0W2KfjnjdEsFkB8YSOR+3D6WNvyTk0HGKgpV9sY=;
Subject: Re: [PATCH v3 07/10] xen/arm: Always access SCTLR_EL2 using
 READ/WRITE_SYSREG()
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-8-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <758c4de0-f31e-78fc-7db6-878acb5f6f54@xen.org>
Date: Wed, 5 May 2021 19:07:58 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210505074308.11016-8-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Michal,

On 05/05/2021 08:43, Michal Orzel wrote:
> The Armv8 specification describes the system register as a 64-bit value
> on AArch64 and 32-bit value on AArch32 (same as ARMv7).
> 
> Unfortunately, Xen is accessing the system registers using
> READ/WRITE_SYSREG32() which means the top 32-bit are clobbered.
> 
> This is only a latent bug so far because Xen will not yet use the top
> 32-bit.
> 
> There is also no change in behavior because arch/arm/arm64/head.S will
> initialize SCTLR_EL2 to a sane value with the top 32-bit zeroed.
> 
> Signed-off-by: Michal Orzel <michal.orzel@arm.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 05 20:14:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 20:14:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123295.232538 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leNuT-0008Ns-Jy; Wed, 05 May 2021 20:14:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123295.232538; Wed, 05 May 2021 20:14:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leNuT-0008Nl-Ei; Wed, 05 May 2021 20:14:17 +0000
Received: by outflank-mailman (input) for mailman id 123295;
 Wed, 05 May 2021 20:14:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leNuR-0008Nb-Fx; Wed, 05 May 2021 20:14:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leNuR-0001Fa-AG; Wed, 05 May 2021 20:14:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leNuR-0000fY-1i; Wed, 05 May 2021 20:14:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leNuR-0000w6-1F; Wed, 05 May 2021 20:14:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=yV1iZBr16ZHtpevgQZacWNnswAkgzJ+VsW+Rcc+lusc=; b=KGgNz1Ym/STpLut3t3iGDdzEvi
	3unsrOua9izrxM3J3OcK+JvmpJNylxaSiRSUTNPZQRL2YeUrl3FK7N0lzQx3+A3cUpW0fwbNpQR3u
	sVEkvXPMNKnGsoQ/0aads5hAQDyOtSwEwaz2z+LjwXI0FHci6TFOcRwlxh9NRcbt4Aoc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161796-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161796: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=e921931feccc8406d5a968e3f97827095b02ce96
X-Osstest-Versions-That:
    xen=b066bd195c04f15ca396ce427c03da1e14849197
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 20:14:15 +0000

flight 161796 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161796/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  e921931feccc8406d5a968e3f97827095b02ce96
baseline version:
 xen                  b066bd195c04f15ca396ce427c03da1e14849197

Last test of basis   161793  2021-05-05 15:00:27 Z    0 days
Testing same since   161796  2021-05-05 18:01:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   b066bd195c..e921931fec  e921931feccc8406d5a968e3f97827095b02ce96 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 05 21:27:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 05 May 2021 21:27:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123308.232558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leP3K-0006NM-PG; Wed, 05 May 2021 21:27:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123308.232558; Wed, 05 May 2021 21:27:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leP3K-0006NF-MD; Wed, 05 May 2021 21:27:30 +0000
Received: by outflank-mailman (input) for mailman id 123308;
 Wed, 05 May 2021 21:27:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leP3I-0006N5-Qt; Wed, 05 May 2021 21:27:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leP3I-0002SD-K8; Wed, 05 May 2021 21:27:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leP3I-00065t-As; Wed, 05 May 2021 21:27:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leP3I-0003cj-AA; Wed, 05 May 2021 21:27:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zvg4elclsNVQAd5e9tKMLupzgpcNxXLOE1g/MC0+Wx4=; b=tTmLQ9b7rQPlwIrP3PpCNqxs7p
	9Jn8Y0UxgIg48DYznI2RdCksN9/+gcic/bg2dLIAzEhqQ62J4eXXdU+EUMybIJoL7HQuywweE4Mdb
	/GBtfmFybhRg3CvMBUK7QNuv6WspfiwiXVBWCtJgHNFBrpeDQYb2IrkG2Tbib/K5Unt8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161773-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161773: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:guest-start.2:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5e321ded302da4d8c5d5dd953423d9b748ab3775
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 05 May 2021 21:27:28 +0000

flight 161773 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161773/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 161700 pass in 161773
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 161700 pass in 161773
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 161700
 test-amd64-amd64-xl-xsm      23 guest-start.2              fail pass in 161700

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                5e321ded302da4d8c5d5dd953423d9b748ab3775
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  278 days
Failing since        152366  2020-08-01 20:49:34 Z  277 days  462 attempts
Testing same since   161700  2021-05-04 05:42:29 Z    1 days    2 attempts

------------------------------------------------------------
5925 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1605801 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 06 00:47:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 00:47:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123321.232588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leSAr-0007gT-Ge; Thu, 06 May 2021 00:47:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123321.232588; Thu, 06 May 2021 00:47:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leSAr-0007gM-Dk; Thu, 06 May 2021 00:47:29 +0000
Received: by outflank-mailman (input) for mailman id 123321;
 Thu, 06 May 2021 00:47:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leSAp-0007g9-FT; Thu, 06 May 2021 00:47:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leSAp-0006Fr-9r; Thu, 06 May 2021 00:47:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leSAp-0001Oq-0h; Thu, 06 May 2021 00:47:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leSAp-0005fU-0G; Thu, 06 May 2021 00:47:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jlY0A/jy813fci3XUm5X3vMXdzjY64RgxoByxZlaD2c=; b=oQIZIlidZK5PtKvIfomkblUY0j
	A/DVlHrLLkSfTb7wupGcDXNi+BBqg5RRlnExiZmRpeYnFNTwv5POCvOCvkH5B3c+bBgW6rfiBnuZi
	yD0WN8U7BQ2FtMa+7Oal9GFziOqaadfTJk9V0f2Nj2c7DAYhYupLTSOraQuIMqSIH8d4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161800-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161800: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
X-Osstest-Versions-That:
    xen=e921931feccc8406d5a968e3f97827095b02ce96
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 00:47:27 +0000

flight 161800 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161800/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
baseline version:
 xen                  e921931feccc8406d5a968e3f97827095b02ce96

Last test of basis   161796  2021-05-05 18:01:42 Z    0 days
Testing same since   161800  2021-05-05 22:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   e921931fec..09fc903c5a  09fc903c5ac042e2e1eb54e58ea7f207ed12ee16 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 06 01:23:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 01:23:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123330.232603 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leSk3-0000Rt-HX; Thu, 06 May 2021 01:23:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123330.232603; Thu, 06 May 2021 01:23:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leSk3-0000Rm-Ea; Thu, 06 May 2021 01:23:51 +0000
Received: by outflank-mailman (input) for mailman id 123330;
 Thu, 06 May 2021 01:23:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leSk2-0000Rc-JJ; Thu, 06 May 2021 01:23:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leSk2-0003ga-8u; Thu, 06 May 2021 01:23:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leSk1-0004or-Sh; Thu, 06 May 2021 01:23:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leSk1-0006RL-SD; Thu, 06 May 2021 01:23:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=22pPE/UX7J57R6nIPpP2swsqcrF74onfSnzZHjDEvHQ=; b=NqOTMTqF+gL76l8E0yIlAk+ks7
	IefKLDCeYTq4wVuoQqJLRvih0GRIPBMiuGzZd4nF+HVAimTVSF68Xwa016NxKf6hBvxgg+vUaVIT/
	f7bN0rrcmBjTKCsn/ptmM4xd0RhAJTdsmsi1QIrISnsdtggCuPri9x3+L8WWGn78DII0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161783-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161783: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=dbc50839ba9c433071dc314d8b289526d026362c
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 01:23:49 +0000

flight 161783 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161783/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              dbc50839ba9c433071dc314d8b289526d026362c
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  299 days
Failing since        151818  2020-07-11 04:18:52 Z  298 days  291 attempts
Testing same since   161783  2021-05-05 04:18:49 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 55927 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 06 06:13:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 06:13:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123347.232635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leXGR-0000gA-GR; Thu, 06 May 2021 06:13:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123347.232635; Thu, 06 May 2021 06:13:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leXGR-0000g3-Dc; Thu, 06 May 2021 06:13:35 +0000
Received: by outflank-mailman (input) for mailman id 123347;
 Thu, 06 May 2021 06:13:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=cpQT=KB=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1leXGQ-0000fx-9j
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 06:13:34 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id b32e68c9-72d0-4190-a103-88f350fca7ca;
 Thu, 06 May 2021 06:13:33 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 95F8E31B;
 Wed,  5 May 2021 23:13:32 -0700 (PDT)
Received: from [10.57.27.152] (unknown [10.57.27.152])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 631B13F70D;
 Wed,  5 May 2021 23:13:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b32e68c9-72d0-4190-a103-88f350fca7ca
Subject: Re: [PATCH v3 02/10] arm/domain: Get rid of READ/WRITE_SYSREG32
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-3-michal.orzel@arm.com>
 <fd49021f-c437-fd0c-b3a8-e3a237e633be@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
Message-ID: <795c63a5-76fc-0de4-d3be-ac3b9d90fa58@arm.com>
Date: Thu, 6 May 2021 08:13:21 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <fd49021f-c437-fd0c-b3a8-e3a237e633be@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Julien,

On 05.05.2021 20:03, Julien Grall wrote:
> Hi Michal,
> 
> On 05/05/2021 08:43, Michal Orzel wrote:
>> AArch64 registers are 64bit whereas AArch32 registers
>> are 32bit or 64bit. MSR/MRS are expecting 64bit values thus
>> we should get rid of helpers READ/WRITE_SYSREG32
>> in favour of using READ/WRITE_SYSREG.
>> We should also use register_t type when reading sysregs
>> which can correspond to uint64_t or uint32_t.
>> Even though many AArch64 registers have upper 32bit reserved
>> it does not mean that they can't be widened in the future.
>>
>> Modify type of register cntkctl to register_t.
>>
>> Modify accesses to thumbee registers to use READ/WRITE_SYSREG.
>> Thumbee registers are only usable by a 32bit domain and in fact
>> should be only accessed on ARMv7 as they were retrospectively dropped
>> on ARMv8.
> 
> Sorry for not replying on v2. How about:
> 
> "
> Thumbee registers are only usable by a 32-bit domain and therefore we can just store the bottom 32-bit (IOW there is no type change). In fact, this could technically be restricted to Armv7 HW (the support was dropped retrospectively in Armv8) but leave it as-is for now.
> "
> 
> If you are happy with it, I will do it on commit.
> 
I am happy with it. Please ack and change it on commit.
Thanks.
> Cheers,
> 

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Thu May 06 06:17:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 06:17:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123350.232648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leXKO-0001KE-2g; Thu, 06 May 2021 06:17:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123350.232648; Thu, 06 May 2021 06:17:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leXKN-0001K7-Va; Thu, 06 May 2021 06:17:39 +0000
Received: by outflank-mailman (input) for mailman id 123350;
 Thu, 06 May 2021 06:17:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EnkQ=KB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leXKM-0001Jw-Kj
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 06:17:38 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [194.104.109.102])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 463ef56e-ad5b-4eb3-97af-e84ba5c3b088;
 Thu, 06 May 2021 06:17:37 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2112.outbound.protection.outlook.com [104.47.17.112])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-11-pyIp8JIMOuSLpZMIhmiWQg-1; Thu, 06 May 2021 08:17:34 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VI1PR0401MB2445.eurprd04.prod.outlook.com (2603:10a6:800:55::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.41; Thu, 6 May
 2021 06:17:33 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4108.026; Thu, 6 May 2021
 06:17:33 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 PR0P264CA0116.FRAP264.PROD.OUTLOOK.COM (2603:10a6:100:19::32) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Thu, 6 May 2021 06:17:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 463ef56e-ad5b-4eb3-97af-e84ba5c3b088
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1620281856;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UlHoQFkYAd6QQQK+p8KDmXW/sb6uT2XuuteBhUJFrcg=;
	b=jeEhpKLWKQDRI0aPbCB58D/MwFY1c2rK3eJyjvI80e0HpFvTme+YQUdo7dJeb+rpDsVmHe
	JuLaPywrRsuD+dDjrqYPjxVS3MimwcjSCenauwcjv2fsLpbsSp6REeKq4jNRg49K4TSFjq
	tjKbbVOpLO6ulk4p6s4ui+J5W5IEPAo=
X-MC-Unique: pyIp8JIMOuSLpZMIhmiWQg-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH 5/5] x86/cpuid: Fix handling of xsave dynamic leaves
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210503153938.14109-1-andrew.cooper3@citrix.com>
 <20210503153938.14109-6-andrew.cooper3@citrix.com>
 <5e6511ca-83bd-8a43-202e-949b4d19b1ab@suse.com>
 <1279476a-f99d-59a4-7fed-1aee37dbe204@citrix.com>
 <d951dc24-e613-8a1d-13ea-b1e439048165@suse.com>
 <7ce89520-7cca-6c9f-f229-37515116d74c@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2948dca5-2c1f-8ce0-e647-1a2ff4e85436@suse.com>
Date: Thu, 6 May 2021 08:17:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
In-Reply-To: <7ce89520-7cca-6c9f-f229-37515116d74c@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: quoted-printable
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: PR0P264CA0116.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:19::32) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4e0aac5b-bf11-476a-46dc-08d91056a2a6
X-MS-TrafficTypeDiagnostic: VI1PR0401MB2445:
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4e0aac5b-bf11-476a-46dc-08d91056a2a6
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2021 06:17:33.2713
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: SM3ZSd5RNJ3Vi+OpdHblFWEECKQMYBthTGS70H3jklpZT+nQQ9THlJz08kQhQJNWR7JWTIMFL1Uv27LKj/JDcA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0401MB2445

On 05.05.2021 18:59, Andrew Cooper wrote:
> On 05/05/2021 09:33, Jan Beulich wrote:
>> On 04.05.2021 16:17, Andrew Cooper wrote:
>>> On 04/05/2021 13:56, Jan Beulich wrote:
>>>> On 03.05.2021 17:39, Andrew Cooper wrote:
>>>>> +unsigned int xstate_compressed_size(uint64_t xstates)
>>>>> +{
>>>>> +    unsigned int i, size = XSTATE_AREA_MIN_SIZE;
>>>>> +
>>>>> +    xstates &= ~XSTATE_FP_SSE;
>>>>> +    for_each_set_bit ( i, &xstates, 63 )
>>>>> +    {
>>>>> +        if ( test_bit(i, &xstate_align) )
>>>>> +            size = ROUNDUP(size, 64);
>>>>> +
>>>>> +        size += xstate_sizes[i];
>>>>> +    }
>>>>> +
>>>>> +    /* In debug builds, cross-check our calculation with hardware. */
>>>>> +    if ( IS_ENABLED(CONFIG_DEBUG) )
>>>>> +    {
>>>>> +        unsigned int hwsize;
>>>>> +
>>>>> +        xstates |= XSTATE_FP_SSE;
>>>>> +        hwsize = hw_compressed_size(xstates);
>>>>> +
>>>>> +        if ( size != hwsize )
>>>>> +            printk_once(XENLOG_ERR "%s(%#"PRIx64") size %#x != hwsize %#x\n",
>>>>> +                        __func__, xstates, size, hwsize);
>>>>> +        size = hwsize;
>>>> To be honest, already on the earlier patch I was wondering whether
>>>> it does any good to override size here: That'll lead to different
>>>> behavior on debug vs release builds. If the log message is not
>>>> paid attention to, we'd then end up with longer term breakage.
>>> Well - our options are pass hardware size, or BUG(), because getting
>>> this wrong will cause memory corruption.
>> I'm afraid I'm lost: Neither passing hardware size nor BUG() would
>> happen in a release build, so getting this wrong does mean memory
>> corruption there. And I'm of the clear opinion that debug builds
>> shouldn't differ in behavior in such regards.
>
> The point of not cross-checking with hardware in release builds is to
> remove a bunch of very expensive operations from a path which is hit
> several times on every fork(), so it isn't exactly rare.
>
> But yes - the consequence of being wrong, for whatever reason, is memory
> corruption (especially as the obvious way it goes wrong is having an
> xsave_size[] of 0, so we end up under-reporting).
>
> So one way or another, we need to ensure that every newly exposed option
> is tested in a debug build first.  The integration is either complete,
> or it isn't, so I don't think this is terribly onerous to do.
>
>
> As for actually having a behaviour difference between debug and release,
> I'm not concerned.
>
> Hitting this failure means "there is definitely a serious error in Xen,
> needing fixing", but it will only be encountered the during development
> of a new feature, so is for all practical concerns, limited to
> development of the new feature in question.
>
> BUG() gets the point across very obviously, but is unhelpful when in
> practice the test system won't explode because the dom0 kernel won't be
> using this new state yet.
>
>> If there wasn't an increasing number of possible combinations I
>> would be inclined to suggest that in all builds we check during
>> boot that calculation and hardware provided values match for all
>> possible (valid) combinations.
>
> I was actually considering an XTF test on the matter, which would be a
> cleaning up of the one generating the example in the cover letter.
>=20
> As above, logic is only liable to be wrong during development of support
> for a new state component, which is why it is reasonable to take away
> the overhead in release builds.

Well, okay then - let's hope all bugs here are obviously exposed
during initial development, and no corner cases get missed.

>>> The BUG() option is a total pain when developing new support - the first
>>> version of this patch did use BUG(), but after hitting it 4 times in a
>>> row (caused by issues with constants elsewhere), I decided against it.
>> I can fully understand this aspect. I'm not opposed to printk_once()
>> at all. My comment was purely towards the override.
>>
>>> If we had something which was a mix between WARN_ONCE() and a proper
>>> printk() explaining what was going on, then I would have used that.
>>> Maybe it's time to introduce one...
>> I don't think there's a need here - what you have in terms of info
>> put into the log is imo sufficient.
>
> Well - it needs to be sufficiently obvious to people who aren't you and
> me.  There are also other areas in Xen which would benefit from changing
> their diagnostics to be as described.

I generally disagree: A log message of this kind needs to be detailed
enough to easily find where it originates in source. Then the source
there should have enough information. Things are different when a log
message implies an admin may be able to take some corrective action
without actually changing source code in any way. That's not the case
here afaict.

Jan



From xen-devel-bounces@lists.xenproject.org Thu May 06 06:21:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 06:21:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123358.232660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leXOD-0002lf-OP; Thu, 06 May 2021 06:21:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123358.232660; Thu, 06 May 2021 06:21:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leXOD-0002lY-LI; Thu, 06 May 2021 06:21:37 +0000
Received: by outflank-mailman (input) for mailman id 123358;
 Thu, 06 May 2021 06:21:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EnkQ=KB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leXOC-0002lS-EG
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 06:21:36 +0000
Received: from de-smtp-delivery-102.mimecast.com (unknown [62.140.7.102])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7a96cb2-6e69-41e4-b083-b18df531dc2c;
 Thu, 06 May 2021 06:21:33 +0000 (UTC)
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2107.outbound.protection.outlook.com [104.47.17.107])
 (Using TLS) by relay.mimecast.com with ESMTP id
 de-mta-32-g_n2gosZMdCIL_OhKNJivw-1; Thu, 06 May 2021 08:21:30 +0200
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com (2603:10a6:803:e7::16)
 by VE1PR04MB6384.eurprd04.prod.outlook.com (2603:10a6:803:126::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.25; Thu, 6 May
 2021 06:21:28 +0000
Received: from VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36]) by VI1PR04MB5600.eurprd04.prod.outlook.com
 ([fe80::f06c:6f5d:34d2:1c36%5]) with mapi id 15.20.4108.026; Thu, 6 May 2021
 06:21:28 +0000
Received: from [10.156.60.236] (37.24.206.209) by
 AM8P191CA0029.EURP191.PROD.OUTLOOK.COM (2603:10a6:20b:21a::34) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Thu, 6 May 2021 06:21:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7a96cb2-6e69-41e4-b083-b18df531dc2c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=mimecast20200619;
	t=1620282092;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=o2u+aTEFkhpm5mkgF4eOF0+Eb37M6Aq5WLNTwUg+GkY=;
	b=eUCDIj3Nw6M9IfVL4cwmuo6kCN8xjINozocXF48ZL04SvvcGdh12sXOghX6Duyew1hNwrG
	WW7hsnMVrktt8eXV4dFe/LyZ+9lKxGwg7yfkueLim1OZeIoZoJEdAKA26/eN/7RN8ffd43
	/PJu38ADtLQGxoJKoAgU6tesWx7MVOQ=
X-MC-Unique: g_n2gosZMdCIL_OhKNJivw-1
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=suse.com; dmarc=pass action=none header.from=suse.com;
 dkim=pass header.d=suse.com; arc=none
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=suse.com;
Subject: Re: [PATCH v4 3/3] unzstd: make helper symbols static
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12054cba-4386-0dbf-46fd-41ace0344f8e@suse.com>
 <759c8524-cc01-fac8-bc62-0ba6558477bd@suse.com>
 <cb8fa703-f421-ce55-811a-d4a649bc201a@xen.org>
 <1696e5f2-481a-5a7f-258d-b2a0679b041f@suse.com>
 <f6e00fd9-a207-858e-37e8-fb25427cf8de@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cb4ee5ef-fba8-e70d-79ae-c640ed853d53@suse.com>
Date: Thu, 6 May 2021 08:21:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
In-Reply-To: <f6e00fd9-a207-858e-37e8-fb25427cf8de@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [37.24.206.209]
X-ClientProxiedBy: AM8P191CA0029.EURP191.PROD.OUTLOOK.COM
 (2603:10a6:20b:21a::34) To VI1PR04MB5600.eurprd04.prod.outlook.com
 (2603:10a6:803:e7::16)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 06901323-4f01-4ea4-a9df-08d910572ed2
X-MS-TrafficTypeDiagnostic: VE1PR04MB6384:
X-Microsoft-Antispam-PRVS:
	<VE1PR04MB6384D2BB3B97075B4BDFF123B3589@VE1PR04MB6384.eurprd04.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	G4kd8C+hua7umrZKvCVwnhcoYEUZp5VmwRY85t3i6es3gR4rg1QfNk4WtaAfwJj1gqaTaiFvIA59ccxOdca6mG14PM46H3/NisV4hYNwE3UEHTvbrqxyblB5Ubmi/c7hV1+31dmj8bI7J+Sn4zfBK3zbWETW/PE5KS8N4t/nZ044JZZbiR76wt5mu/qnR3Ijws/HHlWCNaVI/VkRaOLsNPQbP1Og42xDzWIHHM9vaU1XCc1UY86jWD42IXo97DbTdRCOz7trxTE/B3A0qnm0oMB/Y9jjm10ejCBdq07VDYoh/fLrYrH4F6A4+1h5KcvBC+Xb8McrGEUoe0V6o8ergmbLW6ZAATADdQDVcBJmMxCm2fyfnILqIzMibHXM80cW2N/a6R2GSojc8l74bLcE3prqIVoIsq+mMTnCgjXVgBVyY9oUPMJUEBDHeatLUbYwzpACIOHks0URmCZcpOYXypbQhmuqNgyRaCYUBKRGDi1LuON51TPrpiMjyRSgAw6aZS/2Wc7Wg943kMYptYLlpI44/6HmLZPMzFdS5Tp16hkhergkMyGMnp38wI0kzRKePmJ1LlIVIN6Xzn8xX3W2ccfQhoMSEQRTUrRaa/FphAk4GPzt1N74vPYY/iFaAm7stHTGs71Vyayq4Qdp1Veah2C2PLIpXmUvImgeOITl+DTiKorBmdZuqJD2hsFackCHkOudJyYbFGQbMf4Xk4OcOw==
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR04MB5600.eurprd04.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(39860400002)(136003)(366004)(376002)(346002)(31686004)(316002)(4326008)(5660300002)(6486002)(26005)(31696002)(36756003)(16526019)(52116002)(478600001)(2906002)(54906003)(2616005)(86362001)(53546011)(66946007)(6916009)(956004)(83380400001)(66476007)(8676002)(8936002)(38350700002)(66556008)(38100700002)(16576012)(186003)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?MFArSFNmaHR2SzV1SmJxMStuRGtQaGRFZXJNOUd0S2haS3VFSnM2NjJZSi8r?=
 =?utf-8?B?K3czZEV0c2dLZHBCb01QUm1sUCtlSUJrZFltK2tPbXFJakk4dHE3RnlGRHE2?=
 =?utf-8?B?TDhrVTBpTGZHWjNSK2lSeGdxMnFzdVJwTnRLcXVpYmtKR09ianFoc2N3V2N5?=
 =?utf-8?B?Zmc5K1ZYaFdVeEFoczJySXZxdUx1dkEvSUZ3dVloQ3AvQ3lIelIvYXF5c1I1?=
 =?utf-8?B?SHIxTVczdkhNMEZZcW1NQitJVC9pVGFtSXk5bVRFc3EzT1o0aXdCSkdOWVJC?=
 =?utf-8?B?RlYrV1BGVkMvQXNXVjZDK3V5MWYwZHhmSU1qOUU4b2VpU1NIb2FjQ0t2dTJm?=
 =?utf-8?B?ZGl1R2tBUmlxenBIM25teU9jUWdycitwV21oZ0c3Z3Bib2RIWFdQZ2I3ZU5o?=
 =?utf-8?B?eGV3RlptM3VWUS9qdVpPaERRSzZkT2lJVENRUDVHakRrVCt4dG9WZFg4WUd5?=
 =?utf-8?B?QnBtcDBuWEJlRnNTUWlOaXZCdG9PWEFnaG44TXZhNVR0ZktXL0xtY2pURU1T?=
 =?utf-8?B?dTkwNUFRQmVWTktuY1VPdzl1Nzc1bWgxUEVlUVhqdk5uajI0alFTT21OK2l6?=
 =?utf-8?B?eWRlSVFXYjJjRHAzUmlkbkVDWWVLZVl0OWtKL1lMV1hEYWpiR2I5azJPTndI?=
 =?utf-8?B?dzJjRlFtdUkzL1RVYmorMWZqaktEU09LSDRJTGtKMEY2WHhmbXd6Rk8yM2g0?=
 =?utf-8?B?ZUU5UWYwSVBTT0FYd3N5WEQ3REtWb3l3Tlc4RW9yT1JoNWZadFRMdXdtdlVB?=
 =?utf-8?B?TU1CVFBSSXpvRnprWnZvUDFEb3UydVpZeEVqc0ZGbHpsTjFscndLRkxTYTVo?=
 =?utf-8?B?amJMOSt1aThPTEMydmhYWnZoVU9DYWNqM2xxZys5M2FRQ1Budk5lcDY4dVRC?=
 =?utf-8?B?SjhGY3VqZnY4UzVkUENFdzM4TG1PZnVhdzR1SGMrOVZNVVNUR1pHeTBGckNF?=
 =?utf-8?B?bjJRY1RFNSt4TEE3ZEpEWlN2Vi96RnVHQmZsL1hOUFlrT0U4OFhoaFkzMFRZ?=
 =?utf-8?B?SzlCSkx4ek1PM2dGSStKZ01zNVg4cXlueExaTHp2bGVrRThHNUpxTHlvbmI4?=
 =?utf-8?B?cm4zTm56U1IrTjJFZ2I1YnhvTkd5eCt0M1F4WjY3dlBsakxiR3JkdXRMbitn?=
 =?utf-8?B?WStZSEhLSHVEcEFvMHpRWkJGU2xVbHlSWHpUbHYvTzVvU1oyeUtBQ3pRODRv?=
 =?utf-8?B?QzNwYy9OQnMyeGsyTXN3bXZpaWw4d2pObTUxKzFTaGEwSUtpNnp3MVVvcUxM?=
 =?utf-8?B?cnVUTHBBQlp6MFdnNWRucmpxMW9NcXdBcWJyZUFoSld5MW9tbjcrcWRKSlc5?=
 =?utf-8?B?OGc5VWtqT2hsNWJlbUxJd2VhYTNldHFQdmU5cFNFUllVcERtODhwWW9lVFgz?=
 =?utf-8?B?akdreHlKNzM0RXh6MktiSGViMjg2RUVxU0pLOGVwMmE5b2F5R29pdVA4RWpU?=
 =?utf-8?B?NkVCV09HcmJSd01QNVVRN2EwY2tPcDVzbzdudGF1UGZYOThLUklURDJPc1p1?=
 =?utf-8?B?c0lMSjhFYTlXSHg2TWxNbEtRckNjb0VLaDdrd3QrZTlxKzc4OGNERE1SdFRL?=
 =?utf-8?B?MkJJcmVFbjN5N1JqaEo5ZHpDZTh3cEp4K1hFT25DZjNFWmVtSkEvTERTY2E5?=
 =?utf-8?B?QUd3SHQ0ZkY3NStkallIWTk2Sy9vV1YyQ0NpSGVDT3ZnTkY3dXZGZUxqZU5C?=
 =?utf-8?B?S0JFSFF3c2NwZ0ZNOXE0UmNlTzZsVHpKMS9yNVUrcmp2bDNhUmpNa0lVRC9E?=
 =?utf-8?Q?g2QyYlsunjg6ud9+yGk75OiG/9LktwbyIfr3nHm?=
X-OriginatorOrg: suse.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 06901323-4f01-4ea4-a9df-08d910572ed2
X-MS-Exchange-CrossTenant-AuthSource: VI1PR04MB5600.eurprd04.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2021 06:21:28.4402
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: f7a17af6-1c5c-4a36-aa8b-f5be247aa4ba
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: p2pKn9+xZ4uyVKiM+i2oIOMlTq0So5NB+X03k5AUEecNqL7+GXDoUJrIHh4//cQVCf+mVjSpuSodlFlozyzr/Q==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR04MB6384

On 05.05.2021 19:35, Julien Grall wrote:
> 
> 
> On 29/04/2021 14:26, Jan Beulich wrote:
>> On 29.04.2021 13:27, Julien Grall wrote:
>>> On 21/04/2021 11:22, Jan Beulich wrote:
>>>> While for the original library's purposes these functions of course want
>>>> to be externally exposed, we don't need this, and we also don't want
>>>> this both to prevent unintended use and to keep the name space tidy.
>>>> (When functions have no callers at all, wrap them with a suitable
>>>> #ifdef.) This has the added benefit of reducing the resulting binary
>>>> size - while this is all .init code, it's still desirable to not carry
>>>> dead code.
>>>
>>> So I understand the desire to keep the code close to Linux and removing
>>> the dead code. However I am still not convinced that the approach taken
>>> is actually worth the amount of memory saved.
>>>
>>> How much memory are we talking about here?
>>
>> There are no (runtime) memory savings, as is being said by the
>> description. There are savings on the image and symbol table sizes
>> (see below - .*.0/ holding files as produced without the patch
>> applied, while .*.1/ holding output with it in place), the image
>> size reduction part of which is - as also expressed by the
>> description - a nice side effect, but not the main motivation for
>> the change.
> 
> Thanks for providing the information. I had misunderstood your 
> original intention.
> 
> Reading them again, I have to admit this doesn't really change my view 
> here. You are trading a smaller name space and prevention of unintended 
> use (it is not clear what would be wrong with calling them) against 
> code maintainability/readability.

Well, I mean mainly the usual issue of us, short of having Linux-like
section reference checking, being at risk of calling __init functions
from non-__init code. This risk exists with every single such symbol,
and hence I view it as helpful to reduce overall risk by limiting the
number of such symbols to the actually useful ones.

> At the same time, this is not a code I usually work on (even if I am 
> meant to maintain it). I will leave another maintainer to make the 
> decision here.

Fair enough. Thanks anyway for looking here, and for all the other
acks on this (originally larger) series.

Jan



From xen-devel-bounces@lists.xenproject.org Thu May 06 07:33:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 07:33:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123362.232672 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leYVj-0000rX-25; Thu, 06 May 2021 07:33:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123362.232672; Thu, 06 May 2021 07:33:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leYVi-0000rQ-VK; Thu, 06 May 2021 07:33:26 +0000
Received: by outflank-mailman (input) for mailman id 123362;
 Thu, 06 May 2021 07:33:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leYVi-0000rG-FV; Thu, 06 May 2021 07:33:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leYVi-0002IZ-8t; Thu, 06 May 2021 07:33:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leYVh-00056k-TU; Thu, 06 May 2021 07:33:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leYVh-0001C0-Sx; Thu, 06 May 2021 07:33:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6WtDA124wiEiH+10T+A7oE+0ZZCa8piOQJGptGU4Fas=; b=w2nxGOPRGqN0hfRslKkhAssl63
	LGBCIY0De590fNEbs3SQcZFKvfZzgd12Fiij0UO2CqoSTr3v58o9zFY/OTrPn1rrcAas4eiVYn3yF
	E0b++QxRxoZ2Pjkvic5QbGf56paBn1FSOVQ+EPwGWFKUTEyIAsu0gA5q7Abf/CTqQ1N8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161804-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161804: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=68e8fbe6b18030733601c12f41ed00be84c8b739
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 07:33:25 +0000

flight 161804 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161804/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              68e8fbe6b18030733601c12f41ed00be84c8b739
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  300 days
Failing since        151818  2020-07-11 04:18:52 Z  299 days  292 attempts
Testing same since   161804  2021-05-06 04:18:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 56252 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 06 07:51:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 07:51:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123369.232687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leYmd-000365-KM; Thu, 06 May 2021 07:50:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123369.232687; Thu, 06 May 2021 07:50:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leYmd-00035y-Gw; Thu, 06 May 2021 07:50:55 +0000
Received: by outflank-mailman (input) for mailman id 123369;
 Thu, 06 May 2021 07:50:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leYmc-00035o-Cx; Thu, 06 May 2021 07:50:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leYmc-0002b4-6v; Thu, 06 May 2021 07:50:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leYmb-0005y2-SW; Thu, 06 May 2021 07:50:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leYmb-0003sp-S2; Thu, 06 May 2021 07:50:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IPgvRM4zbwzrETSChg0Ixn4wyQ9eSJ9pUMmgDuCyBP4=; b=KJ0vHe6k9ALGhWOsMdEFMRUTXQ
	0YULMRcc1/6lfV/DuDKcgr8LEzZrWuB+vxT4bcEA35KP7u2VQzic/inf8395ahuyl0iEPqtmPGDDz
	OuvtOeGc8LksccrPX42wpL9rzSKpcGVXV6vUesv5tWVO8KQbyjA0qDQclVMwUy4tGed8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161776-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161776: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5984905b2638df87a0262d1ee91f0a6e14a86df6
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 07:50:53 +0000

flight 161776 xen-4.12-testing real [real]
flight 161806 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161776/
http://logs.test-lab.xenproject.org/osstest/logs/161806/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10   fail REGR. vs. 159418

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159418
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159418
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5984905b2638df87a0262d1ee91f0a6e14a86df6
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   78 days
Failing since        160128  2021-03-18 14:36:18 Z   48 days   66 attempts
Testing same since   161776  2021-05-04 19:06:01 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 324 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 06 08:49:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 08:49:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123385.232707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leZgk-0000Cf-13; Thu, 06 May 2021 08:48:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123385.232707; Thu, 06 May 2021 08:48:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leZgj-0000CY-Tp; Thu, 06 May 2021 08:48:53 +0000
Received: by outflank-mailman (input) for mailman id 123385;
 Thu, 06 May 2021 08:48:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HxhD=KB=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1leZgi-0000CS-Dq
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 08:48:52 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.3.51]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e82bb38e-b752-42fa-8193-e667b23c54db;
 Thu, 06 May 2021 08:48:48 +0000 (UTC)
Received: from DB6PR1001CA0014.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:4:b7::24)
 by AS8PR08MB6518.eurprd08.prod.outlook.com (2603:10a6:20b:33d::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.35; Thu, 6 May
 2021 08:48:45 +0000
Received: from DB5EUR03FT008.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:b7:cafe::8e) by DB6PR1001CA0014.outlook.office365.com
 (2603:10a6:4:b7::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Thu, 6 May 2021 08:48:45 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT008.mail.protection.outlook.com (10.152.20.98) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Thu, 6 May 2021 08:48:45 +0000
Received: ("Tessian outbound 6c4b4bc1cefb:v91");
 Thu, 06 May 2021 08:48:45 +0000
Received: from f7acd78ffa71.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 FA20B708-B5E3-4BE7-91EA-D3DFC98DF7CF.1; 
 Thu, 06 May 2021 08:48:38 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id f7acd78ffa71.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 06 May 2021 08:48:38 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR0801MB1935.eurprd08.prod.outlook.com (2603:10a6:800:87::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25; Thu, 6 May
 2021 08:48:34 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9%4]) with mapi id 15.20.4087.044; Thu, 6 May 2021
 08:48:34 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0424.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:18b::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.24 via Frontend Transport; Thu, 6 May 2021 08:48:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e82bb38e-b752-42fa-8193-e667b23c54db
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=fzsf7XMA928xiZ+0qr0pgnrFwUw9jtIKFdlT/aXfFAA=;
 b=RyTpf14bY50Ld8IWgdl3madgYY0ahk06OfUHYOvr8BYR+1aeWjj3xXtEhPNbW+dsHj6/bmP6fImljfxL905Wt300kuM4qRyUaDhDEBbvF+0N8pdHL33xVzUswN62QiguIXqEbfSK5cDEx7ujsjBVDR8nbbRuh0v3H1ZMEprPbGI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: f32bcfaa69b931c9
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=utf-8
Subject: Re: [PATCH v5 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <alpine.DEB.2.21.2105041514260.5018@sstabellini-ThinkPad-T480s>
Date: Thu, 6 May 2021 09:48:27 +0100
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <9E7D7B58-0ABA-4800-B2D3-9EE3E29CF599@arm.com>
References: <20210504133145.767-1-luca.fancellu@arm.com>
 <20210504133145.767-4-luca.fancellu@arm.com>
 <alpine.DEB.2.21.2105041514260.5018@sstabellini-ThinkPad-T480s>
To: Stefano Stabellini <sstabellini@kernel.org>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO4P123CA0424.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18b::15) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c02a9492-2c95-4af0-3805-08d9106bc21f
X-MS-TrafficTypeDiagnostic: VI1PR0801MB1935:|AS8PR08MB6518:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<AS8PR08MB6518D8321070E84096EFFE97E4589@AS8PR08MB6518.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1935
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	61faa67e-7cde-45f3-7f89-08d9106bbb33
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2021 08:48:45.3926
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c02a9492-2c95-4af0-3805-08d9106bc21f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT008.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6518



> On 4 May 2021, at 23:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
>
> On Tue, 4 May 2021, Luca Fancellu wrote:
>> Modification to include/public/grant_table.h:
>>
>> 1) Add doxygen tags to:
>> - Create Grant tables section
>> - include variables in the generated documentation
>> - Used @keepindent/@endkeepindent to enclose comment
>>   sections that are indented using spaces, to keep
>>   the indentation.
>> 2) Add .rst file for grant table for Arm64
>
> Thanks Luca for your hard work on this. It is going to make things a lot
> better!
>
> I reviewed this patch while looking at
> https://luca.fancellu.gitlab.io/xen-docs/hypercall-interfaces/arm64/grant_tables.html
>
> In short, I think these changes look fine except for a trivial code style
> issue on the very last comment at the bottom of the patch.
>
> All my questions below are basically about some other in-code comments,
> currently existing in the code, but not outputted in the html file.
> Is there an easy way to add them?
>
>
>
>> Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
>> ---
>> v5 changes:
>> - Move GNTCOPY_* define next to the flags field
>> v4 changes:
>> - Used @keepindent/@endkeepindent doxygen commands
>>  to keep text with spaces indentation.
>> - drop changes to grant_entry_v1 comment, it will
>>  be changed and included in the docs in a future patch
>> - Move docs .rst to "common" folder
>> v3 changes:
>> - removed tags to skip anonymous union/struct
>> - moved back comment pointed out by Jan
>> - moved down defines related to struct gnttab_copy
>>  as pointed out by Jan
>> v2 changes:
>> - Revert back to anonymous union/struct
>> - add doxygen tags to skip anonymous union/struct
>> ---
>> docs/hypercall-interfaces/arm64.rst           |  1 +
>> .../common/grant_tables.rst                   |  8 +++
>> docs/xen-doxygen/doxy_input.list              |  1 +
>> xen/include/public/grant_table.h              | 66 ++++++++++++-------
>> 4 files changed, 52 insertions(+), 24 deletions(-)
>> create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst
>>
>> diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
>> index 5e701a2adc..cb4c0d13de 100644
>> --- a/docs/hypercall-interfaces/arm64.rst
>> +++ b/docs/hypercall-interfaces/arm64.rst
>> @@ -8,6 +8,7 @@ Starting points
>> .. toctree::
>>    :maxdepth: 2
>>
>> +   common/grant_tables
>>
>>
>> Functions
>> diff --git a/docs/hypercall-interfaces/common/grant_tables.rst b/docs/hypercall-interfaces/common/grant_tables.rst
>> new file mode 100644
>> index 0000000000..8955ec5812
>> --- /dev/null
>> +++ b/docs/hypercall-interfaces/common/grant_tables.rst
>> @@ -0,0 +1,8 @@
>> +.. SPDX-License-Identifier: CC-BY-4.0
>> +
>> +Grant Tables
>> +============
>> +
>> +.. doxygengroup:: grant_table
>> +   :project: Xen
>> +   :members:
>> diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
>> index e69de29bb2..233d692fa7 100644
>> --- a/docs/xen-doxygen/doxy_input.list
>> +++ b/docs/xen-doxygen/doxy_input.list
>> @@ -0,0 +1 @@
>> +xen/include/public/grant_table.h
>> diff --git a/xen/include/public/grant_table.h b/xen/include/public/grant_table.h
>> index 84b1d26b36..e1fb91dfc6 100644
>> --- a/xen/include/public/grant_table.h
>> +++ b/xen/include/public/grant_table.h
>> @@ -25,15 +25,19 @@
>>  * Copyright (c) 2004, K A Fraser
>>  */
>>
>> +/**
>> + * @file
>> + * @brief Interface for granting foreign access to page frames, and receiving
>> + * page-ownership transfers.
>> + */
>> +
>> #ifndef __XEN_PUBLIC_GRANT_TABLE_H__
>> #define __XEN_PUBLIC_GRANT_TABLE_H__
>>
>> #include "xen.h"
>>
>> -/*
>> - * `incontents 150 gnttab Grant Tables
>> - *
>> - * Xen's grant tables provide a generic mechanism to memory sharing
>> +/**
>> + * @brief Xen's grant tables provide a generic mechanism to memory sharing
>>  * between domains. This shared memory interface underpins the split
>>  * device drivers for block and network IO.
>>  *
>> @@ -51,13 +55,10 @@
>>  * know the real machine address of a page it is sharing. This makes
>>  * it possible to share memory correctly with domains running in
>>  * fully virtualised memory.
>> - */
>> -
>> -/***********************************
>> + *
>>  * GRANT TABLE REPRESENTATION
>> - */
>> -
>> -/* Some rough guidelines on accessing and updating grant-table entries
>> + *
>> + * Some rough guidelines on accessing and updating grant-table entries
>>  * in a concurrency-safe manner. For more information, Linux contains a
>>  * reference implementation for guest OSes (drivers/xen/grant_table.c, see
>>  * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
>> @@ -66,6 +67,7 @@
>>  *     compiler barrier will still be required.
>>  *
>>  * Introducing a valid entry into the grant table:
>> + * @keepindent
>>  *  1. Write ent->domid.
>>  *  2. Write ent->frame:
>>  *      GTF_permit_access:   Frame to which access is permitted.
>> @@ -73,20 +75,25 @@
>>  *                           frame, or zero if none.
>>  *  3. Write memory barrier (WMB).
>>  *  4. Write ent->flags, inc. valid type.
>> + * @endkeepindent
>>  *
>>  * Invalidating an unused GTF_permit_access entry:
>> + * @keepindent
>>  *  1. flags = ent->flags.
>>  *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
>>  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>>  *  NB. No need for WMB as reuse of entry is control-dependent on success of
>>  *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
>> + * @endkeepindent
>>  *
>>  * Invalidating an in-use GTF_permit_access entry:
>> + *
>>  *  This cannot be done directly. Request assistance from the domain controller
>>  *  which can set a timeout on the use of a grant entry and take necessary
>>  *  action. (NB. This is not yet implemented!).
>>  *
>>  * Invalidating an unused GTF_accept_transfer entry:
>> + * @keepindent
>>  *  1. flags = ent->flags.
>>  *  2. Observe that !(flags & GTF_transfer_committed). [*]
>>  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>> @@ -97,18 +104,24 @@
>>  *      transferred frame is written. It is safe for the guest to spin waiting
>>  *      for this to occur (detect by observing GTF_transfer_completed in
>>  *      ent->flags).
>> + * @endkeepindent
>>  *
>>  * Invalidating a committed GTF_accept_transfer entry:
>>  *  1. Wait for (ent->flags & GTF_transfer_completed).
>>  *
>>  * Changing a GTF_permit_access from writable to read-only:
>> + *
>>  *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
>>  *
>>  * Changing a GTF_permit_access from read-only to writable:
>> + *
>>  *  Use SMP-safe bit-setting instruction.
>> + *
>> + * @addtogroup grant_table Grant Tables
>> + * @{
>>  */
>>
>> -/*
>> +/**
>>  * Reference to a grant entry in a specified domain's grant table.
>>  */
>> typedef uint32_t grant_ref_t;
>
> Just below this typedef there is the following comment:
>
> /*
> * A grant table comprises a packed array of grant entries in one or more
> * page frames shared between Xen and a guest.
> * [XEN]: This field is written by Xen and read by the sharing guest.
> * [GST]: This field is written by the guest and read by Xen.
> */
>
> I noticed it doesn't appear in the output html. Any way we can retain it
> somewhere? Maybe we have to move it together with the larger comment
> above?

I agree with you, this comment should appear in the HTML docs, but to do so
it has to be moved together with the larger comment above.

In the original patch it was like that, but I had to revert it due to an
objection. My proposal is to put it together with the larger comment and
write something like this to maintain good readability:

   *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
   *
   * Changing a GTF_permit_access from read-only to writable:
   *
   *  Use SMP-safe bit-setting instruction.
 + *
 + * A grant table comprises a packed array of grant entries in one or more
 + * page frames shared between Xen and a guest.
 + *
 + * Data structure fields or defines described below have the following tags:
 + * * [XEN]: This field is written by Xen and read by the sharing guest.
 + * * [GST]: This field is written by the guest and read by Xen.
   *
   * @addtogroup grant_table Grant Tables
   * @{

If we all agree, I will put it in v6.

>
>
>> @@ -129,15 +142,17 @@ typedef uint32_t grant_ref_t;
>> #define grant_entry_v1_t grant_entry_t
>> #endif
>> struct grant_entry_v1 {
>> -    /* GTF_xxx: various type and flag information.  [XEN,GST] */
>> +    /** GTF_xxx: various type and flag information.  [XEN,GST] */
>>     uint16_t flags;
>> -    /* The domain being granted foreign privileges. [GST] */
>> +    /** The domain being granted foreign privileges. [GST] */
>>     domid_t  domid;
>> -    /*
>> +    /**
>> +     * @keepindent
>>      * GTF_permit_access: GFN that @domid is allowed to map and access. [GST]
>>      * GTF_accept_transfer: GFN that @domid is allowed to transfer into. [GST]
>>      * GTF_transfer_completed: MFN whose ownership transferred by @domid
>>      *                         (non-translated guests only). [XEN]
>> +     * @endkeepindent
>>      */
>>     uint32_t frame;
>> };
>> @@ -228,7 +243,7 @@ struct grant_entry_header {
>> };
>> typedef struct grant_entry_header grant_entry_header_t;
>
>
> Also this comment is missing from the output:
>
> /*
> * Type of grant entry.
> *  GTF_invalid: This grant entry grants no privileges.
> *  GTF_permit_access: Allow @domid to map/access @frame.
> *  GTF_accept_transfer: Allow @domid to transfer ownership of one page frame
> *                       to this guest. Xen writes the page number to @frame.
> *  GTF_transitive: Allow @domid to transitively access a subrange of
> *                  @trans_grant in @trans_domid.  No mappings are allowed.
> */
>
> Is there a way to keep it?
>
>
> Similarly for the other subflags descriptions.

Here we could split the comment and do something like this:

/** GTF_invalid: This grant entry grants no privileges. */
#define GTF_invalid         (0U<<0)

/** GTF_permit_access: Allow \@domid to map/access \@frame. */
#define GTF_permit_access   (1U<<0)

/**
 * GTF_accept_transfer: Allow \@domid to transfer
 * ownership of one page frame to this guest. Xen
 * writes the page number to \@frame.
 */
#define GTF_accept_transfer (2U<<0)

/**
 * GTF_transitive: Allow \@domid to transitively access a subrange of
 * \@trans_grant in \@trans_domid.  No mappings are allowed.
 */
#define GTF_transitive      (3U<<0)

In this way each define will have its own description, and I think both the
source code and the HTML docs would be improved.

>
>
>> -/*
>> +/**
>>  * Version 2 of the grant entry structure.
>>  */
>> union grant_entry_v2 {
>> @@ -433,7 +448,7 @@ typedef struct gnttab_transfer gnttab_transfer_t;
>> DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>
> What about the comments for each members of the union? Basically I am
> trying to ask if we can output almost all comments currently living in
> this header.

Yes we can: every comment placed near a documented data structure just needs
its /* changed to /** to become part of the HTML documentation.

There are other cases, as above, where the comment could be split. I can do
that in v6; I just need to know whether we agree on a solution like this, so
I don't do work that I'll have to throw away later.

>
>
>> -/*
>> +/**
>>  * GNTTABOP_copy: Hypervisor based copy
>>  * source and destinations can be eithers MFNs or, for foreign domains,
>>  * grant references. the foreign domain has to grant read/write access
>> @@ -451,11 +466,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
>>  * bytes to be copied.
>>  */
>>
>> -#define _GNTCOPY_source_gref      (0)
>> -#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>> -#define _GNTCOPY_dest_gref        (1)
>> -#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>> -
>> struct gnttab_copy {
>>     /* IN parameters. */
>>     struct gnttab_copy_ptr {
>> @@ -468,6 +478,10 @@ struct gnttab_copy {
>>     } source, dest;
>>     uint16_t      len;
>>     uint16_t      flags;          /* GNTCOPY_* */
>> +#define _GNTCOPY_source_gref      (0)
>> +#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
>> +#define _GNTCOPY_dest_gref        (1)
>> +#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
>
> I think this is OK
>
>
>>     /* OUT parameters. */
>>     int16_t       status;
>> };
>> @@ -579,7 +593,7 @@ struct gnttab_swap_grant_ref {
>> typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
>> DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
>>
>> -/*
>> +/**
>>  * Issue one or more cache maintenance operations on a portion of a
>>  * page granted to the calling domain by a foreign domain.
>>  */
>> @@ -588,8 +602,8 @@ struct gnttab_cache_flush {
>>         uint64_t dev_bus_addr;
>>         grant_ref_t ref;
>>     } a;
>> -    uint16_t offset; /* offset from start of grant */
>> -    uint16_t length; /* size within the grant */
>> +    uint16_t offset; /**< offset from start of grant */
>> +    uint16_t length; /**< size within the grant */
>> #define GNTTAB_CACHE_CLEAN          (1u<<0)
>> #define GNTTAB_CACHE_INVAL          (1u<<1)
>> #define GNTTAB_CACHE_SOURCE_GREF    (1u<<31)
>> @@ -673,6 +687,10 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
>>     "operation not done; try again"             \
>> }
>>=20
>> +/**
>> + * @}
>> +*/
>=20
> Alignment of the *

Sure, I'll fix it in v6.

I'll wait a couple of days for replies; then I'll assume everything above is
agreed and push v6.


Cheers,
Luca

>
>
>> #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
>>
>> /*
>> --
>> 2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 06 09:57:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 09:57:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123399.232720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lealK-0006hy-Tw; Thu, 06 May 2021 09:57:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123399.232720; Thu, 06 May 2021 09:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lealK-0006hr-QR; Thu, 06 May 2021 09:57:42 +0000
Received: by outflank-mailman (input) for mailman id 123399;
 Thu, 06 May 2021 09:57:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HEOH=KB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lealI-0006hl-Te
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 09:57:41 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 788cfdf2-b699-4c15-87e7-ea6064af0ac3;
 Thu, 06 May 2021 09:57:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 788cfdf2-b699-4c15-87e7-ea6064af0ac3
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Jan Beulich <jbeulich@suse.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>
Subject: [PATCH v2] tools/libs: move cpu policy related prototypes to xenguest.h
Date: Thu,  6 May 2021 11:57:05 +0200
Message-ID: <20210506095705.1796-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Do this before adding any more stuff to xg_cpuid_x86.c.

The placement in xenctrl.h is wrong, as they are implemented by the
xenguest library. Note that xg_cpuid_x86.c needs to include
xg_private.h, and in turn also fix xg_private.h to include
xc_bitops.h. The bitops definition of BITS_PER_LONG needs to be
changed to not be an expression, so that xxhash.h can use it in a
preprocessor if directive.

As a result also modify xen-cpuid and the ocaml stubs to include
xenguest.h.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes since v1:
 - Include xenguest.h in ocaml stubs.
 - Change BITS_PER_LONG definition in xc_bitops.h.
---
 tools/include/xenctrl.h             | 55 ----------------------------
 tools/include/xenguest.h            | 56 +++++++++++++++++++++++++++++
 tools/libs/ctrl/xc_bitops.h         |  6 +++-
 tools/libs/guest/xg_cpuid_x86.c     |  3 +-
 tools/libs/guest/xg_private.h       |  1 +
 tools/misc/xen-cpuid.c              |  1 +
 tools/ocaml/libs/xc/xenctrl_stubs.c |  1 +
 7 files changed, 65 insertions(+), 58 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 58d3377d6ab..e894c5c392d 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -2589,61 +2589,6 @@ int xc_psr_get_domain_data(xc_interface *xch, uint32_t domid,
                            uint64_t *data);
 int xc_psr_get_hw_info(xc_interface *xch, uint32_t socket,
                        xc_psr_feat_type type, xc_psr_hw_info *hw_info);
-
-typedef struct xc_cpu_policy xc_cpu_policy_t;
-
-/* Create and free a xc_cpu_policy object. */
-xc_cpu_policy_t *xc_cpu_policy_init(void);
-void xc_cpu_policy_destroy(xc_cpu_policy_t *policy);
-
-/* Retrieve a system policy, or get/set a domains policy. */
-int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
-                             xc_cpu_policy_t *policy);
-int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
-                             xc_cpu_policy_t *policy);
-int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
-                             xc_cpu_policy_t *policy);
-
-/* Manipulate a policy via architectural representations. */
-int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *policy,
-                            xen_cpuid_leaf_t *leaves, uint32_t *nr_leaves,
-                            xen_msr_entry_t *msrs, uint32_t *nr_msrs);
-int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
-                               const xen_cpuid_leaf_t *leaves,
-                               uint32_t nr);
-int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
-                              const xen_msr_entry_t *msrs, uint32_t nr);
-
-/* Compatibility calculations. */
-bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
-                                 xc_cpu_policy_t *guest);
-
-int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
-int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
-                          uint32_t *nr_features, uint32_t *featureset);
-
-int xc_cpu_policy_get_size(xc_interface *xch, uint32_t *nr_leaves,
-                           uint32_t *nr_msrs);
-int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
-                             uint32_t nr_leaves, xen_cpuid_leaf_t *leaves,
-                             uint32_t nr_msrs, xen_msr_entry_t *msrs,
-                             uint32_t *err_leaf_p, uint32_t *err_subleaf_p,
-                             uint32_t *err_msr_p);
-
-uint32_t xc_get_cpu_featureset_size(void);
-
-enum xc_static_cpu_featuremask {
-    XC_FEATUREMASK_KNOWN,
-    XC_FEATUREMASK_SPECIAL,
-    XC_FEATUREMASK_PV_MAX,
-    XC_FEATUREMASK_PV_DEF,
-    XC_FEATUREMASK_HVM_SHADOW_MAX,
-    XC_FEATUREMASK_HVM_SHADOW_DEF,
-    XC_FEATUREMASK_HVM_HAP_MAX,
-    XC_FEATUREMASK_HVM_HAP_DEF,
-};
-const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
-
 #endif
 
 int xc_livepatch_upload(xc_interface *xch,
diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 217022b6e76..03c813a0d78 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -719,4 +719,60 @@ xen_pfn_t *xc_map_m2p(xc_interface *xch,
                       unsigned long max_mfn,
                       int prot,
                       unsigned long *mfn0);
+
+#if defined(__i386__) || defined(__x86_64__)
+typedef struct xc_cpu_policy xc_cpu_policy_t;
+
+/* Create and free a xc_cpu_policy object. */
+xc_cpu_policy_t *xc_cpu_policy_init(void);
+void xc_cpu_policy_destroy(xc_cpu_policy_t *policy);
+
+/* Retrieve a system policy, or get/set a domains policy. */
+int xc_cpu_policy_get_system(xc_interface *xch, unsigned int policy_idx,
+                             xc_cpu_policy_t *policy);
+int xc_cpu_policy_get_domain(xc_interface *xch, uint32_t domid,
+                             xc_cpu_policy_t *policy);
+int xc_cpu_policy_set_domain(xc_interface *xch, uint32_t domid,
+                             xc_cpu_policy_t *policy);
+
+/* Manipulate a policy via architectural representations. */
+int xc_cpu_policy_serialise(xc_interface *xch, const xc_cpu_policy_t *policy,
+                            xen_cpuid_leaf_t *leaves, uint32_t *nr_leaves,
+                            xen_msr_entry_t *msrs, uint32_t *nr_msrs);
+int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
+                               const xen_cpuid_leaf_t *leaves,
+                               uint32_t nr);
+int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
+                              const xen_msr_entry_t *msrs, uint32_t nr);
+
+/* Compatibility calculations. */
+bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
+                                 xc_cpu_policy_t *guest);
+
+int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
+int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
+                          uint32_t *nr_features, uint32_t *featureset);
+
+int xc_cpu_policy_get_size(xc_interface *xch, uint32_t *nr_leaves,
+                           uint32_t *nr_msrs);
+int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
+                             uint32_t nr_leaves, xen_cpuid_leaf_t *leaves,
+                             uint32_t nr_msrs, xen_msr_entry_t *msrs,
+                             uint32_t *err_leaf_p, uint32_t *err_subleaf_p,
+                             uint32_t *err_msr_p);
+
+uint32_t xc_get_cpu_featureset_size(void);
+
+enum xc_static_cpu_featuremask {
+    XC_FEATUREMASK_KNOWN,
+    XC_FEATUREMASK_SPECIAL,
+    XC_FEATUREMASK_PV_MAX,
+    XC_FEATUREMASK_PV_DEF,
+    XC_FEATUREMASK_HVM_SHADOW_MAX,
+    XC_FEATUREMASK_HVM_SHADOW_DEF,
+    XC_FEATUREMASK_HVM_HAP_MAX,
+    XC_FEATUREMASK_HVM_HAP_DEF,
+};
+const uint32_t *xc_get_static_cpu_featuremask(enum xc_static_cpu_featuremask);
+#endif /* __i386__ || __x86_64__ */
 #endif /* XENGUEST_H */
diff --git a/tools/libs/ctrl/xc_bitops.h b/tools/libs/ctrl/xc_bitops.h
index f0bac4a071d..4a776dc3a57 100644
--- a/tools/libs/ctrl/xc_bitops.h
+++ b/tools/libs/ctrl/xc_bitops.h
@@ -6,7 +6,11 @@
 #include <stdlib.h>
 #include <string.h>
 
-#define BITS_PER_LONG (sizeof(unsigned long) * 8)
+#ifdef __LP64__
+#define BITS_PER_LONG 64
+#else
+#define BITS_PER_LONG 32
+#endif
 
 #define BITMAP_ENTRY(_nr,_bmap) ((_bmap))[(_nr) / 8]
 #define BITMAP_SHIFT(_nr) ((_nr) % 8)
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 1ebc108213d..144b5a5aee6 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -22,8 +22,7 @@
 #include <stdlib.h>
 #include <stdbool.h>
 #include <limits.h>
-#include "xc_private.h"
-#include "xc_bitops.h"
+#include "xg_private.h"
 #include <xen/hvm/params.h>
 #include <xen-tools/libs.h>
 
diff --git a/tools/libs/guest/xg_private.h b/tools/libs/guest/xg_private.h
index 8f9b257a2f3..db93521c567 100644
--- a/tools/libs/guest/xg_private.h
+++ b/tools/libs/guest/xg_private.h
@@ -27,6 +27,7 @@
 #include <sys/stat.h>
 
 #include "xc_private.h"
+#include "xc_bitops.h"
 #include "xenguest.h"
 
 #include <xen/memory.h>
diff --git a/tools/misc/xen-cpuid.c b/tools/misc/xen-cpuid.c
index d4bc83d8c92..2b1a0492b30 100644
--- a/tools/misc/xen-cpuid.c
+++ b/tools/misc/xen-cpuid.c
@@ -8,6 +8,7 @@
 #include <inttypes.h>
 
 #include <xenctrl.h>
+#include <xenguest.h>
 
 #include <xen-tools/libs.h>
 
diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
index d05d7bb30ec..6e4bc567f53 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -32,6 +32,7 @@
 
 #define XC_WANT_COMPAT_MAP_FOREIGN_API
 #include <xenctrl.h>
+#include <xenguest.h>
 #include <xen-tools/libs.h>
 
 #include "mmap_stubs.h"
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Thu May 06 09:58:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 09:58:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123405.232732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leamA-0007KA-Ad; Thu, 06 May 2021 09:58:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123405.232732; Thu, 06 May 2021 09:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leamA-0007K3-7Q; Thu, 06 May 2021 09:58:34 +0000
Received: by outflank-mailman (input) for mailman id 123405;
 Thu, 06 May 2021 09:58:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EnkQ=KB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leam8-0007Jv-Lr
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 09:58:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c6d73a61-f6dc-4d8e-b965-465957ebd4de;
 Thu, 06 May 2021 09:58:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EAEC9AFE5;
 Thu,  6 May 2021 09:58:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6d73a61-f6dc-4d8e-b965-465957ebd4de
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620295111; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QUOUlIUOvJykJmlLzC6jcydGahCRVB5E9NbdpyDKMzI=;
	b=oL+5pMen2EjjLiwJJ3/8qqYWB17gwl62bpwVa+kIpHz74UE18w0dAgYxM+jzKakojFy7+G
	QO5+36FPxY+OQrnfX81rGPJXbO4amK9S5JHvC9o9LCm319Dl7umSsLpMPWckRW7rLZZyfV
	aUsmbZB5mKU10EnenWCFimvTGCGh9mk=
Subject: Re: [PATCH v5 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210504133145.767-1-luca.fancellu@arm.com>
 <20210504133145.767-4-luca.fancellu@arm.com>
 <alpine.DEB.2.21.2105041514260.5018@sstabellini-ThinkPad-T480s>
 <9E7D7B58-0ABA-4800-B2D3-9EE3E29CF599@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8fada713-9ae5-ddd3-585b-0f090748fc49@suse.com>
Date: Thu, 6 May 2021 11:58:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <9E7D7B58-0ABA-4800-B2D3-9EE3E29CF599@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.05.2021 10:48, Luca Fancellu wrote:
>> On 4 May 2021, at 23:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
>> On Tue, 4 May 2021, Luca Fancellu wrote:
>>> @@ -51,13 +55,10 @@
>>>  * know the real machine address of a page it is sharing. This makes
>>>  * it possible to share memory correctly with domains running in
>>>  * fully virtualised memory.
>>> - */
>>> -
>>> -/***********************************
>>> + *
>>>  * GRANT TABLE REPRESENTATION
>>> - */
>>> -
>>> -/* Some rough guidelines on accessing and updating grant-table entries
>>> + *
>>> + * Some rough guidelines on accessing and updating grant-table entries
>>>  * in a concurrency-safe manner. For more information, Linux contains a
>>>  * reference implementation for guest OSes (drivers/xen/grant_table.c, see
>>>  * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
>>> @@ -66,6 +67,7 @@
>>>  *     compiler barrier will still be required.
>>>  *
>>>  * Introducing a valid entry into the grant table:
>>> + * @keepindent
>>>  *  1. Write ent->domid.
>>>  *  2. Write ent->frame:
>>>  *      GTF_permit_access:   Frame to which access is permitted.
>>> @@ -73,20 +75,25 @@
>>>  *                           frame, or zero if none.
>>>  *  3. Write memory barrier (WMB).
>>>  *  4. Write ent->flags, inc. valid type.
>>> + * @endkeepindent
>>>  *
>>>  * Invalidating an unused GTF_permit_access entry:
>>> + * @keepindent
>>>  *  1. flags = ent->flags.
>>>  *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
>>>  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>>>  *  NB. No need for WMB as reuse of entry is control-dependent on success of
>>>  *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
>>> + * @endkeepindent
>>>  *
>>>  * Invalidating an in-use GTF_permit_access entry:
>>> + *
>>>  *  This cannot be done directly. Request assistance from the domain controller
>>>  *  which can set a timeout on the use of a grant entry and take necessary
>>>  *  action. (NB. This is not yet implemented!).
>>>  *
>>>  * Invalidating an unused GTF_accept_transfer entry:
>>> + * @keepindent
>>>  *  1. flags = ent->flags.
>>>  *  2. Observe that !(flags & GTF_transfer_committed). [*]
>>>  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>>> @@ -97,18 +104,24 @@
>>>  *      transferred frame is written. It is safe for the guest to spin waiting
>>>  *      for this to occur (detect by observing GTF_transfer_completed in
>>>  *      ent->flags).
>>> + * @endkeepindent
>>>  *
>>>  * Invalidating a committed GTF_accept_transfer entry:
>>>  *  1. Wait for (ent->flags & GTF_transfer_completed).
>>>  *
>>>  * Changing a GTF_permit_access from writable to read-only:
>>> + *
>>>  *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
>>>  *
>>>  * Changing a GTF_permit_access from read-only to writable:
>>> + *
>>>  *  Use SMP-safe bit-setting instruction.
>>> + *
>>> + * @addtogroup grant_table Grant Tables
>>> + * @{
>>>  */
>>>
>>> -/*
>>> +/**
>>>  * Reference to a grant entry in a specified domain's grant table.
>>>  */
>>> typedef uint32_t grant_ref_t;
>>
>> Just below this typedef there is the following comment:
>>
>> /*
>> * A grant table comprises a packed array of grant entries in one or more
>> * page frames shared between Xen and a guest.
>> * [XEN]: This field is written by Xen and read by the sharing guest.
>> * [GST]: This field is written by the guest and read by Xen.
>> */
>>
>> I noticed it doesn't appear in the output html. Any way we can retain it
>> somewhere? Maybe we have to move it together with the larger comment
>> above?
> 
> I agree with you, this comment should appear in the html docs, but to do so
> it has to be moved together with the larger comment above.
> 
> In the original patch it was like that, but I had to revert it due to an
> objection. My proposal is to put it together with the larger comment and
> write something like this to maintain good readability:
> 
>    *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
>    *
>    * Changing a GTF_permit_access from read-only to writable:
>    *
>    *  Use SMP-safe bit-setting instruction.
> + *
> + * A grant table comprises a packed array of grant entries in one or more
> + * page frames shared between Xen and a guest.

I think if this part was moved to the top of this big comment while ...

> + * Data structure fields or defines described below have the following tags:
> + * * [XEN]: This field is written by Xen and read by the sharing guest.
> + * * [GST]: This field is written by the guest and read by Xen.

... this part was, as suggested by you, left near its bottom, I could
agree.

However, your making this suggestion caused me to look more closely at
what the comments actually describe. If there's an effort to make the
documentation more easily accessible by extracting it from the header, I
wonder whether - like with the v1 vs v2 comment pointed out previously
as misleading - we shouldn't, as a prerequisite step, make an attempt to
actually get the documentation correct. For example I found this:

/*
 * Version 1 and version 2 grant entries share a common prefix.  The
 * fields of the prefix are documented as part of struct
 * grant_entry_v1.
 */
struct grant_entry_header {
    uint16_t flags;
    domid_t  domid;
};

The comment is wrong. "flags" here is only holding what's tagged
[GST] for v1. The [XEN] tagged bits actually live in grant_status_t.
This can perhaps best be seen in gnttab_set_version()'s code
dealing with the first 8 entries. However, contrary to v2's
intentions, GTF_transfer_committed and GTF_transfer_completed (which
aren't properly tagged either way) get set by Xen in shared entries,
not status ones. Maybe this was considered "okay" because the frame
field also gets written in this case (i.e. the cache line will get
dirtied in any event).
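For illustration, the "invalidating an unused GTF_permit_access entry" sequence from the comment being discussed can be sketched in C. This is a simplified model, not the Xen or Linux implementation: the flag values and entry layout follow grant_table.h, but the surrounding locking and hypervisor context are omitted, and the function name is made up.

```c
#include <stdbool.h>
#include <stdint.h>

/* Flag/type values as in grant_table.h; layout matches grant_entry_v1. */
#define GTF_permit_access  1u         /* entry type: grant access to frame */
#define GTF_reading        (1u << 3)  /* [XEN] a reader currently exists   */
#define GTF_writing        (1u << 4)  /* [XEN] a writer currently exists   */

struct grant_entry_v1 {
    uint16_t flags;   /* type and GTF_* status bits */
    uint16_t domid;   /* domain being granted access */
    uint32_t frame;   /* frame to which access is permitted */
};

/*
 * Invalidate an unused GTF_permit_access entry, following the commented
 * steps: (1) read flags, (2) observe no reader/writer, (3) SMP-safe
 * CMPXCHG of the observed value against 0, so that any concurrent use
 * of the entry makes the CAS fail.  No WMB is needed: reuse of the
 * entry is control-dependent on the CAS succeeding.
 */
static bool gnttab_try_end_access(struct grant_entry_v1 *ent)
{
    uint16_t flags = __atomic_load_n(&ent->flags, __ATOMIC_RELAXED);

    if (flags & (GTF_reading | GTF_writing))
        return false;                    /* step 2 failed: still in use */

    return __atomic_compare_exchange_n(&ent->flags, &flags, 0,
                                       false, __ATOMIC_SEQ_CST,
                                       __ATOMIC_RELAXED);
}
```

If a reader or writer bit appears between the load and the CAS, the CAS fails and the caller must retry or give up, which is exactly why step 3 must be a compare-exchange rather than a plain store.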

Similarly I'd like to refer to my still pending "gnttab: GTF_sub_page
is a v2-only flag", which also corrects documentation in this regard.
And perhaps there's more.

As an alternative to correcting the (as it seems) v2-related issues, it
may be worth considering to extract only the v1 documentation in this
initial phase.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 06 10:23:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 10:23:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123412.232743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebA4-000226-AL; Thu, 06 May 2021 10:23:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123412.232743; Thu, 06 May 2021 10:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebA4-00021z-7O; Thu, 06 May 2021 10:23:16 +0000
Received: by outflank-mailman (input) for mailman id 123412;
 Thu, 06 May 2021 10:23:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HEOH=KB=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lebA3-00021t-Bq
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 10:23:15 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7b47145d-937e-4ad0-954c-96aad88669b2;
 Thu, 06 May 2021 10:23:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7b47145d-937e-4ad0-954c-96aad88669b2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620296594;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=goP9xVIK56Mz2ZpJxi9hKqiayxFFOFAu8LDQX33BaCI=;
  b=Fft93TY2WpyrAcc79REj7qVDRryA2uX9i/ajNYe5TyHKRfnQpkR98+YC
   njFVUTJjh0L+0vv9LRwPbhP44JKL2QubgUWMSXC51juiINRHdE6+4pWJz
   4nVGVBr+pS1Bp4IKvIWwUKiyYdBwvoEmtuHhSR0uy8tuEVYe8c3YdW5kZ
   w=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44718656
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,277,1613451600"; 
   d="scan'208";a="44718656"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PykVm8fUOc5EQC4hppjmTZ2sjd7VPszm3doGXRsCktivccxxt6/3nViNVa1PSgOjAceyswyZAFYvVDJCYu1EjZ0MTmVxlqVEw52Y/Y0bcxmabNI3VTdlVkFljna8dDup4QwsyqnccAVs5oegJkjkyRG8+ss8zDuHNbHnS1hbebl4+hpru0Lk8if9U6KRT3mrveis+v/TvPYW/vhmGvjWALJRN2dmqc7OApMiFZgfvD0Yl6JgEcUHVJb6GFpPaJ4U6MouhDz9SxLjrd+gc+3sHuHQbhmg+1xyVvd7Qntw0UCR8k8fVeWBKUoLbS5zYxbe9U8sX+++nkTJARJ8bu4YWQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xZfjdiCL9zYhnCF7SZwghC1t18RKGyGWZ8vorFpFRZ0=;
 b=GUg1WpeXoBR9cwMicQmaNRyJa02IjvZeOV6oxLy1ys2iNsLYiMFDjRelBOru9u9Bduh7wQeTXk/PEaUua1pgI2lPQxR1UISAhpQDReB9tQsmbmyPYco8vgKp+lxrk0d0XiP0L9sNwPH1sbnzGteAVulvzC2/u6VTM6xjOI2tUguu0lbmnseUD9VqBW0Fgk941j/+QT4n99fj7dQPLCJX9VYFF+YDuoWIojPirki7B1hDQfTb+/zfICRp153Rw+XS5syRjG0BiGjrQccVDn9tdmlDyJqUuu6yl2X3fcz4cB5FnQlpWGEBkTSPATgZTa5eEFWMzQlC/wFOOvqWHNGYqA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xZfjdiCL9zYhnCF7SZwghC1t18RKGyGWZ8vorFpFRZ0=;
 b=k+FJiL3/4H37NWpxIh7iYpfDV4ixrGoCBPg0xMxWzOCXDlKuoYWz7MG78SzAlKcTmiSK5e0Jie/6xhhrh46SMmpahbTu1jIyNtdI4MBf9+BVo5PN1BdB9LpwFkR1MdZWZL1YWB/Qyi4gehzNx53hx4+dMHC5fZ+d0XWHJ676zzo=
Date: Thu, 6 May 2021 12:23:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 08/13] libs/guest: make a cpu policy compatible with
 older Xen versions
Message-ID: <YJPDigyn0TFxDLT/@Air-de-Roger>
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-9-roger.pau@citrix.com>
 <51ee228a-2d53-2dd4-55cf-233d81ba4958@suse.com>
 <YJFpdA8qmYca9bUO@Air-de-Roger>
 <6a3bc5cd-10a2-1d13-0033-c22d16da25b7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6a3bc5cd-10a2-1d13-0033-c22d16da25b7@suse.com>
X-ClientProxiedBy: AM9P195CA0008.EURP195.PROD.OUTLOOK.COM
 (2603:10a6:20b:21f::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b8fcd5b5-5d15-4b07-a988-08d91078f2e5
X-MS-TrafficTypeDiagnostic: DM5PR03MB3372:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB337213BAEDADBE841296B4488F589@DM5PR03MB3372.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: b8fcd5b5-5d15-4b07-a988-08d91078f2e5
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2021 10:23:10.9491
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: aE61hPrdCi4wuneAk+T+I+0d69oCe71n6f+lEv8BwNP7zHKVntLVHwj43uBZN8QNNDMkWt1qhzdEt47XoCoz8w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3372
X-OriginatorOrg: citrix.com

On Wed, May 05, 2021 at 09:42:09AM +0200, Jan Beulich wrote:
> On 04.05.2021 17:34, Roger Pau Monné wrote:
> > On Mon, May 03, 2021 at 01:09:41PM +0200, Jan Beulich wrote:
> >> On 30.04.2021 17:52, Roger Pau Monne wrote:
> >>> @@ -1086,3 +1075,42 @@ int xc_cpu_policy_calc_compatible(xc_interface *xch,
> >>>  
> >>>      return rc;
> >>>  }
> >>> +
> >>> +int xc_cpu_policy_make_compatible(xc_interface *xch, xc_cpu_policy_t policy,
> >>> +                                  bool hvm)
> >>
> >> I'm concerned of the naming, and in particular the two very different
> >> meanings of "compatible" for xc_cpu_policy_calc_compatible() and this
> >> new one. I'm afraid I don't have a good suggestion though, short of
> >> making the name even longer and inserting "backwards".
> > 
> > Would xc_cpu_policy_make_compat_412 be acceptable?
> > 
> > That's the more concise one I can think of.
> 
> Hmm, maybe (perhaps with an underscore inserted between 4 and 12). Yet
> (sorry) a comment in the function says "since Xen 4.13", which includes
> changes that have happened later. Therefore it's not really clear to me
> whether the function really _only_ deals with the 4.12 / 4.13 boundary.

It should deal with any changes in the default cpuid policy that
happened after (and not including) Xen 4.12, so the resulting policy is
compatible with the behaviour that Xen 4.12 had. Any changes made in
Xen 4.13 and later versions should be accounted for here.
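Conceptually, "making a policy compatible with 4.12" amounts to masking out default-visible features that only appeared in later releases. The sketch below illustrates only that idea, not the libxc code: the bitmap layout, the baseline mask, and the function name are all made up for the example.

```c
#include <stddef.h>
#include <stdint.h>

#define NR_FEAT_WORDS 4

/* Hypothetical bitmap of the features Xen 4.12 exposed by default. */
static const uint32_t xen_4_12_default[NR_FEAT_WORDS] = {
    0xffffffffu, 0x00ffffffu, 0x0000ffffu, 0x00000000u,
};

/*
 * Clamp a feature bitmap to what Xen 4.12 would have offered, dropping
 * any bits that became default-on in 4.13 or later.
 */
static void policy_make_compat_4_12(uint32_t feat[NR_FEAT_WORDS])
{
    for (size_t i = 0; i < NR_FEAT_WORDS; i++)
        feat[i] &= xen_4_12_default[i];  /* drop post-4.12 additions */
}
```

The naming question in the thread is then about whether the function's name should encode that baseline (e.g. a "4_12" suffix) so the contract is visible at the call site.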

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 06 10:43:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 10:43:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123419.232780 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebTO-0004ty-Jg; Thu, 06 May 2021 10:43:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123419.232780; Thu, 06 May 2021 10:43:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebTO-0004tr-F9; Thu, 06 May 2021 10:43:14 +0000
Received: by outflank-mailman (input) for mailman id 123419;
 Thu, 06 May 2021 10:43:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lebTN-0004cw-4B
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 10:43:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lebTJ-000652-Nr; Thu, 06 May 2021 10:43:09 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lebTJ-0002l1-FD; Thu, 06 May 2021 10:43:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	 bh=hsYrq9gCUN9/sBh5lySpQIJ5bBTu7DfTjVtwFhrf7bg=; b=Er2XOMbbw53B75nvXe24LJmNA
	4oPwroYgwxIuo3/s0++bOaDipB64gygv+vcIWw4ejGrRJF3qMHXEFmYy4fnMzPiUTwVVE5cCMtLZJ
	iQv5SqfTjvWgp4bE0KpgSLU27aPW8olHvhEyOi4zGl4xQV8My3B4g1mIuw+cssUw7DSKU=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: dwmw2@infradead.org,
	paul@xen.org,
	hongyxia@amazon.com,
	raphning@amazon.com,
	maghul@amazon.com,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH RFC 2/2] xen/kexec: Reserve KEXEC_TYPE_LIVEUPDATE and KEXEC_RANGE_MA_LIVEUPDATE
Date: Thu,  6 May 2021 11:42:59 +0100
Message-Id: <20210506104259.16928-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210506104259.16928-1-julien@xen.org>
References: <20210506104259.16928-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Unfortunately, the code to support Live Update has already been merged in
kexec-tools and shipped since 2.0.21. Reserve the IDs used by kexec-tools
before they end up being re-used for a different purpose.

This patch reserves two IDs:
    * KEXEC_TYPE_LIVEUPDATE: new operation to request Live Update
    * KEXEC_RANGE_MA_LIVEUPDATE: new range to query the Live Update
      area below Xen
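For context, a caller would query the new range much like the existing KEXEC_RANGE_MA_* ones. The structure below mirrors xen_kexec_range_t from xen/include/public/kexec.h, but the hypercall is replaced by a stub and the placement values are made up, so this only sketches the intended calling convention:

```c
#include <stddef.h>

#define KEXEC_RANGE_MA_LIVEUPDATE 7

/* Mirrors xen_kexec_range_t from the public kexec.h header. */
typedef struct xen_kexec_range {
    int range;            /* which region: KEXEC_RANGE_MA_* */
    int nr;               /* sub-region index, if applicable */
    unsigned long size;   /* filled in by Xen */
    unsigned long start;  /* filled in by Xen */
} xen_kexec_range_t;

/*
 * Stand-in for HYPERVISOR_kexec_op(KEXEC_CMD_kexec_get_range, &r):
 * report a made-up Live Update area placed below Xen.
 */
static int fake_kexec_get_range(xen_kexec_range_t *r)
{
    if (r == NULL || r->range != KEXEC_RANGE_MA_LIVEUPDATE)
        return -1;
    r->start = 0x80000000ul;   /* illustrative placement */
    r->size  = 0x4000000ul;    /* illustrative size */
    return 0;
}
```

Reserving the numeric ID now means such a query keeps its meaning even if upstream Xen later implements the range differently.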

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/include/public/kexec.h | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/xen/include/public/kexec.h b/xen/include/public/kexec.h
index 3f2a118381ba..650d2feb036f 100644
--- a/xen/include/public/kexec.h
+++ b/xen/include/public/kexec.h
@@ -71,17 +71,22 @@
  */
 
 /*
- * Kexec supports two types of operation:
+ * Kexec supports three types of operation:
  * - kexec into a regular kernel, very similar to a standard reboot
  *   - KEXEC_TYPE_DEFAULT is used to specify this type
  * - kexec into a special "crash kernel", aka kexec-on-panic
  *   - KEXEC_TYPE_CRASH is used to specify this type
  *   - parts of our system may be broken at kexec-on-panic time
  *     - the code should be kept as simple and self-contained as possible
+ * - Live update into a new Xen, preserving all running domains
+ *   - KEXEC_TYPE_LIVEUPDATE is used to specify this type
+ *   - Xen performs non-cooperative live migration and stores live
+ *     update state in memory, passing it to the new Xen.
  */
 
-#define KEXEC_TYPE_DEFAULT 0
-#define KEXEC_TYPE_CRASH   1
+#define KEXEC_TYPE_DEFAULT      0
+#define KEXEC_TYPE_CRASH        1
+#define KEXEC_TYPE_LIVEUPDATE   2
 
 
 /* The kexec implementation for Xen allows the user to load two
@@ -150,6 +155,8 @@ typedef struct xen_kexec_load_v1 {
 #define KEXEC_RANGE_MA_EFI_MEMMAP 5 /* machine address and size of
                                      * of the EFI Memory Map */
 #define KEXEC_RANGE_MA_VMCOREINFO 6 /* machine address and size of vmcoreinfo */
+/* machine address and size of the Live Update area below Xen */
+#define KEXEC_RANGE_MA_LIVEUPDATE 7
 
 /*
  * Find the address and size of certain memory areas
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 06 10:43:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 10:43:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123418.232768 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebTN-0004cy-6q; Thu, 06 May 2021 10:43:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123418.232768; Thu, 06 May 2021 10:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebTN-0004cq-3c; Thu, 06 May 2021 10:43:13 +0000
Received: by outflank-mailman (input) for mailman id 123418;
 Thu, 06 May 2021 10:43:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lebTM-0004NU-0s
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 10:43:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lebTH-000650-Nm; Thu, 06 May 2021 10:43:07 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lebTH-0002l1-B2; Thu, 06 May 2021 10:43:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:
	References:In-Reply-To:Message-Id:Date:Subject:Cc:To:From;
	bh=DgxRhrQIFfjxPAv/IQMiKpMR916qTHixAm/Vojyp6T8=; b=STPxHbZ4qaFE1AoVq+PuE+wx9R
	rPFL4MP0N9MNRWd+Ar3wQYFgAVRX8S4BSWt/Lf24RGRK6fVsWZUYb7683CKEI5v0pfpi0nfFLkhIL
	4jt78Pf+pWyFk8LWrktK7SoUQbqNJOuaYy6mphGUamyxqdw8J5kDLyzbpBdsKdcE7jwM=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: dwmw2@infradead.org,
	paul@xen.org,
	hongyxia@amazon.com,
	raphning@amazon.com,
	maghul@amazon.com,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH RFC 1/2] docs/design: Add a design document for Live Update
Date: Thu,  6 May 2021 11:42:58 +0100
Message-Id: <20210506104259.16928-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210506104259.16928-1-julien@xen.org>
References: <20210506104259.16928-1-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <jgrall@amazon.com>

Administrators often require updating the Xen hypervisor to address
security vulnerabilities, introduce new features, or fix software defects.
Currently, we offer the following methods to perform the update:

    * Rebooting the guests and the host: this is highly disruptive to running
      guests.
    * Migrating the guests off, then rebooting the host: this currently
      requires the guest to cooperate (see [1] for a non-cooperative solution)
      and it may not always be possible to migrate a guest off (e.g. lack of
      capacity, use of local storage...).
    * Live patching: this is the least disruptive of the existing methods.
      However, it can be difficult to prepare the livepatch if the change is
      large or there are data structures to update.

This patch introduces a new proposal called "Live Update", which will
activate new software without noticeable downtime (i.e. no - or minimal -
customer pain).

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 docs/designs/liveupdate.md | 254 +++++++++++++++++++++++++++++++++++++
 1 file changed, 254 insertions(+)
 create mode 100644 docs/designs/liveupdate.md

diff --git a/docs/designs/liveupdate.md b/docs/designs/liveupdate.md
new file mode 100644
index 000000000000..32993934f4fe
--- /dev/null
+++ b/docs/designs/liveupdate.md
@@ -0,0 +1,254 @@
+# Live Updating Xen
+
+## Background
+
+Administrators often require updating the Xen hypervisor to address security
+vulnerabilities, introduce new features, or fix software defects.  Currently,
+we offer the following methods to perform the update:
+
+    * Rebooting the guests and the host: this is highly disruptive to running
+      guests.
+    * Migrating the guests off, then rebooting the host: this currently
+      requires the guest to cooperate (see [1] for a non-cooperative solution)
+      and it may not always be possible to migrate a guest off (e.g. lack of
+      capacity, use of local storage...).
+    * Live patching: this is the least disruptive of the existing methods.
+      However, it can be difficult to prepare the livepatch if the change is
+      large or there are data structures to update.
+
+This document presents a new approach called "Live Update", which will
+activate new software without noticeable downtime (i.e. no - or minimal -
+customer pain).
+
+## Terminology
+
+xen#1: The Xen version currently active and running on a host.  This is the
+“source” for the Live Update operation.  This version can actually be newer
+than xen#2 in case of a rollback operation.
+
+xen#2: The Xen version that is the “target” of the Live Update operation.
+This version will become the active version after a successful Live Update.
+This version of Xen can actually be older than xen#1 in the case of a
+rollback operation.
+
+## High-level overview
+
+Xen has a framework to bring a new image of the Xen hypervisor into memory
+using kexec.  The existing framework does not meet the baseline functionality
+for Live Update, since kexec results in a restart for the hypervisor, host,
+Dom0, and all the guests.
+
+The operation can be divided into roughly four parts:
+
+    1. Trigger: The operation will be triggered from outside the hypervisor
+       (e.g. dom0 userspace).
+    2. Save: The state will be stabilized by pausing the domains and
+       serialized by xen#1.
+    3. Hand-over: xen#1 will pass the serialized state and transfer control to
+       xen#2.
+    4. Restore: The state will be deserialized by xen#2.
+
+All the domains will be paused before xen#1 starts to save their states,
+and any domain that was running before Live Update will be unpaused after
+xen#2 has finished restoring the states.  This prevents a domain from
+modifying the state of another domain while it is being saved/restored.
+
+The current approach could be seen as non-cooperative migration with a twist:
+none of the domains (including dom0) are expected to be involved in the Live
+Update process.
+
+The major differences compared to live migration are:
+
+    * The state is not transferred to another host, but instead locally to
+      xen#2.
+    * The memory content and device state (for passthrough) do not need to
+      be part of the stream.  Instead, they need to be preserved in place.
+    * PV backends, device emulators, and xenstored are not recreated but
+      preserved (as these are part of dom0).
+
+
+Domains in the process of being destroyed (*XEN\_DOMCTL\_destroydomain*) will
+need to be preserved because another entity may have mappings (e.g. foreign,
+grant) on them.
+
+## Trigger
+
+Live update is built on top of the kexec interface to prepare the command line,
+load xen#2 and trigger the operation.  A new kexec type has been introduced
+(*KEXEC\_TYPE\_LIVE\_UPDATE*) to instruct Xen to perform a Live Update.
+
+The Live Update will be triggered from outside the hypervisor (e.g. dom0
+userspace).  Support for the operation has been added in kexec-tools 2.0.21.
+
+All the domains will be paused before xen#1 starts to save their states.
+In Xen, *domain\_pause()* will pause the vCPUs as soon as they can be
+re-scheduled.  In other words, a pause request will not wait for asynchronous
+requests (e.g. I/O) to finish.  For Live Update, this is not an ideal time to
+pause because it would require more xen#1 internal state to be transferred.
+Therefore, all the domains will be paused at an architecturally restartable
+boundary.
+
+Live update will not happen synchronously with the request but once all the
+domains are quiescent.  As domains running device emulators (e.g. Dom0) will
+be part of the process of quiescing HVM domains, we will need to let them run
+until xen#1 actually starts to save the state.  HVM vCPUs will be paused
+as soon as any pending asynchronous request has finished.
+
+In the current implementation, all PV domains will continue to run while the
+rest will be paused as soon as possible.  Note this approach assumes that
+device emulators only run in PV domains.
+
+It should be easy to extend this to PVH domains that do not require device
+emulation.  It will require more thought if we need to run device models in
+HVM domains, as there might be inter-dependencies.
+
+## Save
+
+xen#1 will be responsible for preserving and serializing the state of each
+existing domain and any system-wide state (e.g. M2P).
+
+Each domain will be serialized independently using a modified migration
+stream.  If there are any dependencies between domains (such as for an IOREQ
+server), they will be recorded using a domid.  All the complexity of resolving
+the dependencies is left to the restore path in xen#2 (more in the *Restore*
+section).
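
As a toy illustration (in Python, with an invented record layout rather than
Xen's actual stream format), a dependency such as "this HVM domain's IOREQ
server runs in that domain" can be saved as a bare domid, with resolution
deferred entirely to the restore side:

```python
import struct

# Hypothetical record layout: each domain's stream stores its own domid and
# the domid of any domain it depends on (e.g. the one running its IOREQ
# server).  0xFFFF is used here as a "no dependency" sentinel.
def save_domain(domid, ioreq_server_domid):
    dep = ioreq_server_domid if ioreq_server_domid is not None else 0xFFFF
    return struct.pack("<HH", domid, dep)

def restore_all(streams):
    # First materialise every domain, then resolve the domid references, so
    # the order in which the streams are restored does not matter.
    domains = {}
    for blob in streams:
        domid, dep = struct.unpack("<HH", blob)
        domains[domid] = {"domid": domid, "dep": dep}
    for d in domains.values():
        d["ioreq_server"] = domains[d["dep"]] if d["dep"] != 0xFFFF else None
    return domains
```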
+
+At the moment, the domains are saved one by one in a single thread, but it
+would be possible to consider multi-threading if it takes too long, although
+this may require some adjustments to the stream format.
+
+As we want to be able to Live Update between major versions of Xen (e.g. Xen
+4.11 -> Xen 4.15), the preserved state should not be a dump of Xen's internal
+structures but instead the minimal information that allows us to recreate the
+domains.
+
+For instance, we don't want to preserve the frametable (and therefore
+*struct page\_info*) as-is because the refcounting may differ between xen#1
+and xen#2 (see XSA-299). Instead, we want to be able to recreate
+*struct page\_info* based on minimal information that is considered stable
+(such as the page type).
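
As a toy illustration (in Python, with invented field names rather than Xen's
actual *struct page\_info* layout), the save side emits only stable facts and
the restore side recomputes any version-specific state:

```python
# Toy model: xen#1 emits only stable, minimal facts about each page.
def save_page(page):
    return {"mfn": page["mfn"], "owner": page["owner"], "type": page["type"]}

# xen#2 rebuilds its own page_info, recomputing implementation-specific
# state (here, the reference count scheme) instead of trusting xen#1's
# in-memory layout, which may have changed between versions.
def restore_page(record, refcount_for_type):
    return {
        "mfn": record["mfn"],
        "owner": record["owner"],
        "type": record["type"],
        "count": refcount_for_type(record["type"]),
    }
```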
+
+Note that upgrading between versions of Xen will also require all the
+hypercalls to be stable. This is not covered by this document.
+
+## Hand over
+
+### Memory usage restrictions
+
+xen#2 must take care not to use any memory pages which already belong to
+guests.  To facilitate this, a number of contiguous regions of memory are
+reserved for the boot allocator, known as *live update bootmem*.
+
+xen#1 will always reserve a region just below Xen (the size is controlled by
+the Xen command line parameter *liveupdate*) to allow Xen to grow and to
+provide information about Live Update (see the section *Breadcrumb*).  The
+region will be passed to xen#2 using the same command line option but with
+the base address specified.
+
+For simplicity, additional regions will be provided in the stream.  They will
+consist of regions that can be re-used by xen#2 during boot (such as
+xen#1's frametable memory).
+
+xen#2 must not use any pages outside those regions until it has consumed the
+Live Update data stream and determined which pages are already in use by
+running domains or need to be re-used as-is by Xen (e.g M2P).
+
+At run time, Xen may use memory from the reserved region for any purpose that
+does not require preservation over a Live Update; in particular it __must__
+not be mapped to a domain or used by any Xen state that requires preservation
+(e.g. M2P).  In other words, xenheap pages could be allocated from the
+reserved regions if we remove the concept of shared xenheap pages.
+
+xen#2's binary may be bigger (or smaller) compared to xen#1's binary.  So,
+for the purpose of loading the xen#2 binary, kexec should treat the reserved
+memory right below xen#1 and xen#1's own region as a single contiguous space.
+xen#2 will be loaded right at the top of this contiguous space and the rest
+of the memory will become the new reserved memory (which may shrink or grow).
+For that reason, freed init memory from the xen#1 image is also treated as
+reserved live update bootmem.
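
The layout arithmetic above can be sketched as a toy model (Python, with
made-up addresses and no alignment handling):

```python
def place_xen2(xen1_base, xen1_size, reserved_base, xen2_size):
    # The reserved region sits directly below xen#1, so together they form
    # one contiguous space: [reserved_base, xen1_base + xen1_size).
    top = xen1_base + xen1_size
    # xen#2 is loaded right at the top of that space...
    xen2_base = top - xen2_size
    # ...and whatever remains below it becomes the new reserved bootmem,
    # which therefore shrinks or grows with the size of xen#2's binary.
    new_reserved = (reserved_base, xen2_base - reserved_base)
    return xen2_base, new_reserved
```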
+
+### Live Update data stream
+
+During handover, xen#1 creates a Live Update data stream containing all the
+information required by xen#2 to restore all the domains.
+
+Data pages for this stream may be allocated anywhere in physical memory outside
+the *live update bootmem* regions.
+
+As calling __vmap()__/__vunmap()__ adds to the downtime, we want to reduce
+the number of calls to __vmap()__ when restoring the stream.  Therefore the
+stream will be virtually contiguously mapped in xen#2.  xen#1 will create an
+array of the MFNs of the allocated data pages, suitable for passing to
+__vmap()__.  The array itself will be physically contiguous but the MFNs it
+contains don't need to be.
+
+### Breadcrumb
+
+Since the Live Update data stream is created during the final **kexec\_exec**
+hypercall, its address cannot be passed on the command line to the new Xen
+since the command line needs to have been set up by **kexec(8)** in userspace
+long beforehand.
+
+Thus, to allow the new Xen to find the data stream, xen#1 places a breadcrumb
+in the first words of the Live Update bootmem, containing the number of data
+pages, and the physical address of the contiguous MFN array.
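
A toy model of the breadcrumb (Python; the two-word little-endian layout used
here is an assumption for illustration, not the actual format):

```python
import struct

# Hypothetical layout: two 64-bit little-endian words at the very start of
# the live update bootmem region.
BREADCRUMB_FMT = "<QQ"

def write_breadcrumb(bootmem, nr_pages, mfn_array_paddr):
    # xen#1 places the breadcrumb in the first words of the bootmem during
    # the final kexec_exec hypercall.
    bootmem[: struct.calcsize(BREADCRUMB_FMT)] = struct.pack(
        BREADCRUMB_FMT, nr_pages, mfn_array_paddr)

def read_breadcrumb(bootmem):
    # xen#2 learns the bootmem base from its command line, so this fixed
    # location is the only thing the two sides need to agree on.
    return struct.unpack_from(BREADCRUMB_FMT, bootmem)
```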
+
+### IOMMU
+
+Where devices are passed through to domains, it may not be possible to quiesce
+those devices for the purpose of performing the update.
+
+If performing Live Update with assigned devices, xen#1 will leave the IOMMU
+mappings active during the handover (thus implying that IOMMU page tables may
+not be allocated in the *live update bootmem* region either).
+
+xen#2 must take control of the IOMMU without causing those mappings to become
+invalid even for a short period of time.  In other words, xen#2 should not
+re-setup the IOMMUs.  On hardware which does not support Posted Interrupts,
+interrupts may need to be generated on resume.
+
+## Restore
+
+After xen#2 has initialized itself and mapped the stream, it will be
+responsible for restoring the state of the system and of each domain.
+
+Unlike the save part, it is not possible to restore a domain in a single pass.
+There are dependencies between:
+
+    1. different states of a domain.  For instance, the event channel ABI
+       used (2l vs fifo) has to be restored before restoring the event
+       channels.
+    2. the same "state" within a domain.  For instance, in the case of a PV
+       domain, the pages' ownership has to be restored before restoring the
+       type of the pages (e.g. is it an L4, L1... table?).
+    3. domains.  For instance, when restoring a grant mapping, it will be
+       necessary to have the page's owner at hand to do proper refcounting.
+       Therefore the pages' ownership has to be restored first.
+
+Dependencies will be resolved using either multiple passes (for dependency
+types 2 and 3) or a specific ordering between records (for dependency
+type 1).
+
+Each domain will be restored in 3 passes:
+
+    * Pass 0: Create the domain and restore the P2M for HVM. This can be
+      broken down into 3 parts:
+      * Allocate a domain via _domain\_create()_ but skip the parts that
+        require extra records (e.g. HAP, P2M).
+      * Restore any parts which need to be done before creating the vCPUs,
+        including restoring the P2M and whether HAP is used.
+      * Create the vCPUs. Note this doesn't restore the state of the vCPUs.
+    * Pass 1: Restore the pages' ownership and the grant-table frames.
+    * Pass 2: Restore any domain state (e.g. vCPU state, event channels)
+      that wasn't restored in the previous passes.
+
+A domain should not have a dependency on another domain within the same pass.
+Therefore it would be possible to take advantage of all the CPUs to restore
+domains in parallel and reduce the overall downtime.
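
The pass structure can be sketched as a toy model (Python; the function and
field names are invented, and the real restore path works on migration-stream
records rather than dicts):

```python
# Toy model of the three restore passes.  Because no domain depends on
# another domain within the same pass, each pass could in principle run
# over all the domains in parallel.
def restore_domains(streams):
    domains = {}
    # Pass 0: create the domain shells (and, for HVM, restore the P2M).
    for s in streams:
        domains[s["domid"]] = {"domid": s["domid"], "pages": {}, "state": None}
    # Pass 1: restore page ownership, needed before grant/type handling.
    for s in streams:
        for mfn in s["pages"]:
            domains[s["domid"]]["pages"][mfn] = "owned"
    # Pass 2: restore the remaining per-domain state (vCPUs, event
    # channels, ...), which may reference pages restored in pass 1.
    for s in streams:
        domains[s["domid"]]["state"] = s["state"]
    return domains
```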
+
+Once all the domains have been restored, they will be unpaused if they were
+running before Live Update.
+
+* * *
+[1] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/designs/non-cooperative-migration.md;h=4b876d809fb5b8aac02d29fd7760a5c0d5b86d87;hb=HEAD
+
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 06 10:43:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 10:43:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123417.232756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebTL-0004MG-VA; Thu, 06 May 2021 10:43:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123417.232756; Thu, 06 May 2021 10:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebTL-0004M9-S4; Thu, 06 May 2021 10:43:11 +0000
Received: by outflank-mailman (input) for mailman id 123417;
 Thu, 06 May 2021 10:43:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lebTK-0004M2-NY
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 10:43:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lebTF-00064f-KY; Thu, 06 May 2021 10:43:05 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lebTF-0002l1-At; Thu, 06 May 2021 10:43:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=680kVgfT7Oe/mttQMr/p0AciRnNxgeCFuFajL/iBUTw=; b=22Q4OSMth/CnR9cTEE5rrgwhhS
	N+c7qFFwhWFbavfnKHE2U8YL0vmUZb5Ox45FDPJOuT/5lsrq/oY6ZOG4WAPEOfLgzOlFqGSgAaRh2
	Gq0ACotRudXndGp3qHOeykINX4lTk/1p9P6a+jstnd3IL6HplMQAnDIPpWgmmpfWVBEs=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: dwmw2@infradead.org,
	paul@xen.org,
	hongyxia@amazon.com,
	raphning@amazon.com,
	maghul@amazon.com,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH RFC 0/2] Add a design document for Live Updating Xen
Date: Thu,  6 May 2021 11:42:57 +0100
Message-Id: <20210506104259.16928-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

A couple of years ago, AWS presented at Xen Summit [1] a new method called
"Live Update" to replace the underlying hypervisor without
rebooting/migrating VMs.

Since then we have worked on implementing the feature in Xen and now have
a working PoC. This series is a split of David's series sent last year [2],
focusing on the design of the feature. We will start sending the code soon.

We will give an update and demonstrate the feature working during the
new Xen Summit.

More details on the feature can be found in patch #1, which introduces the
design document.

Best regards,

[1] https://www.youtube.com/watch?v=ANaDS9BUhuA
[2] <a92287c03fed310e08ba40063e370038569b94ac.camel@infradead.org>

Julien Grall (2):
  docs/design: Add a design document for Live Update
  xen/kexec: Reserve KEXEC_TYPE_LIVEUPDATE and KEXEC_RANGE_MA_LIVEUPDATE

 docs/designs/liveupdate.md | 254 +++++++++++++++++++++++++++++++++++++
 xen/include/public/kexec.h |  13 +-
 2 files changed, 264 insertions(+), 3 deletions(-)
 create mode 100644 docs/designs/liveupdate.md

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 06 10:52:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 10:52:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123433.232792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebby-00073r-FI; Thu, 06 May 2021 10:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123433.232792; Thu, 06 May 2021 10:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lebby-00073k-BU; Thu, 06 May 2021 10:52:06 +0000
Received: by outflank-mailman (input) for mailman id 123433;
 Thu, 06 May 2021 10:52:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EnkQ=KB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lebbx-00073e-ET
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 10:52:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66e85463-ec22-4a1b-a030-17644b6631dc;
 Thu, 06 May 2021 10:52:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A555AB1B9;
 Thu,  6 May 2021 10:52:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66e85463-ec22-4a1b-a030-17644b6631dc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620298323; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=zk3yGUXMbAjoQwDbgENX+9tDnv1jIXSwj9GofSbqNjI=;
	b=rDsAM5lu29X0684YZjmZloEu9cQfb8oYPa/3el440JjAzvTHJKzTmW++tS9dq3VSzGsS1I
	vcfHKk8/+6FXzHiLchxn+oaxkZpkBPzctOjDuoRH8CIzQPI6PF5vdVCapqBp2DcAWl/889
	uS5XW7zPiJUiW1m7OSNwwNoJGlc/HkU=
Subject: Re: [PATCH v3 08/13] libs/guest: make a cpu policy compatible with
 older Xen versions
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210430155211.3709-1-roger.pau@citrix.com>
 <20210430155211.3709-9-roger.pau@citrix.com>
 <51ee228a-2d53-2dd4-55cf-233d81ba4958@suse.com>
 <YJFpdA8qmYca9bUO@Air-de-Roger>
 <6a3bc5cd-10a2-1d13-0033-c22d16da25b7@suse.com>
 <YJPDigyn0TFxDLT/@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bb9d62f1-eaf2-7fb0-2c2b-794317099e71@suse.com>
Date: Thu, 6 May 2021 12:52:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJPDigyn0TFxDLT/@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06.05.2021 12:23, Roger Pau Monné wrote:
> On Wed, May 05, 2021 at 09:42:09AM +0200, Jan Beulich wrote:
>> On 04.05.2021 17:34, Roger Pau Monné wrote:
>>> On Mon, May 03, 2021 at 01:09:41PM +0200, Jan Beulich wrote:
>>>> On 30.04.2021 17:52, Roger Pau Monne wrote:
>>>>> @@ -1086,3 +1075,42 @@ int xc_cpu_policy_calc_compatible(xc_interface *xch,
>>>>>  
>>>>>      return rc;
>>>>>  }
>>>>> +
>>>>> +int xc_cpu_policy_make_compatible(xc_interface *xch, xc_cpu_policy_t policy,
>>>>> +                                  bool hvm)
>>>>
>>>> I'm concerned of the naming, and in particular the two very different
>>>> meanings of "compatible" for xc_cpu_policy_calc_compatible() and this
>>>> new one. I'm afraid I don't have a good suggestion though, short of
>>>> making the name even longer and inserting "backwards".
>>>
>>> Would xc_cpu_policy_make_compat_412 be acceptable?
>>>
>>> That's the more concise one I can think of.
>>
>> Hmm, maybe (perhaps with an underscore inserted between 4 and 12). Yet
>> (sorry) a comment in the function says "since Xen 4.13", which includes
>> changes that have happened later. Therefore it's not really clear to me
>> whether the function really _only_ deals with the 4.12 / 4.13 boundary.
> 
> It should deal with any changes in the default cpuid policy that
> happened after (and not including) Xen 4.12. So resulting policy is
> compatible with the behaviour that Xen 4.12 had. Any changes made in
> Xen 4.13 and further versions should be accounted for here.

I see.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 06 12:28:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 12:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123445.232809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1led6p-0006sx-Tn; Thu, 06 May 2021 12:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123445.232809; Thu, 06 May 2021 12:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1led6p-0006sq-Qk; Thu, 06 May 2021 12:28:03 +0000
Received: by outflank-mailman (input) for mailman id 123445;
 Thu, 06 May 2021 12:28:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1led6o-0006sf-Dm; Thu, 06 May 2021 12:28:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1led6o-0007s7-5q; Thu, 06 May 2021 12:28:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1led6n-0004Xw-T5; Thu, 06 May 2021 12:28:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1led6n-0004zA-Sd; Thu, 06 May 2021 12:28:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7wto2RKxoe0O3TTnF1uxAvvBT0rkwBPPeUvIVcr8bag=; b=A0k/ido5gcYMMcZ4DVL7C6FGPY
	A0aVxf7vFq0gNSkxLZC0f18jumaUIomC2OilFuySqFV4v9hXrX5iR42oYR/VZfQtSLLSgvjnPmVyW
	KgDmYbF+LLgyEL1A2lUaJMj5loy0e7z++3ZAJhkczxR3wQLd+Wkk4nxNXfLLWayMfZlk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161778-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161778: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=8cccd6438e86112ab383e41b433b5a7e73be9621
X-Osstest-Versions-That:
    xen=d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 12:28:01 +0000

flight 161778 xen-unstable real [real]
flight 161809 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161778/
http://logs.test-lab.xenproject.org/osstest/logs/161809/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 161809-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 161755
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161755
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161755
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161755
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161755
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161755
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161755
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161755
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161755
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161755
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161755
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161755
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  8cccd6438e86112ab383e41b433b5a7e73be9621
baseline version:
 xen                  d26c277826dbbd64b3e3cb57159e1ecbfad33bc8

Last test of basis   161755  2021-05-04 08:28:51 Z    2 days
Testing same since   161778  2021-05-04 23:09:09 Z    1 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monne <roger.pau@citrix.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d26c277826..8cccd6438e  8cccd6438e86112ab383e41b433b5a7e73be9621 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 06 12:29:28 2021
From: George Dunlap <george.dunlap@citrix.com>
To: <security@xenproject.org>
CC: <xen-devel@lists.xenproject.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
Date: Thu, 6 May 2021 13:29:15 +0100
Message-ID: <20210506122915.65108-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The support status of 32-bit guests doesn't seem particularly useful.

Signed-off-by: George Dunlap <george.dunlap@citrix.com>
---

NB this patch should be considered a proposal to the community, as a
follow-on to XSA-370.  As mentioned in the advisory, we will wait
until 25 May for comments before checking it in.
---
 SUPPORT.md | 9 +--------
 1 file changed, 1 insertion(+), 8 deletions(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index d0d4fc6f4f..a29680e04c 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -86,14 +86,7 @@ No hardware requirements
 
     Status, x86_64: Supported
     Status, x86_32, shim: Supported
-    Status, x86_32, without shim: Supported, with caveats
-
-Due to architectural limitations,
-32-bit PV guests must be assumed to be able to read arbitrary host memory
-using speculative execution attacks.
-Advisories will continue to be issued
-for new vulnerabilities related to un-shimmed 32-bit PV guests
-enabling denial-of-service attacks or privilege escalation attacks.
+    Status, x86_32, without shim: Supported, not security supported
 
 ### x86/HVM
 
-- 
2.30.2
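As context for the shim distinction the patch relies on: a 32-bit PV guest remains
security supported only when run under the PV shim, which in xl is a per-domain
setting. A minimal, hypothetical xl.cfg sketch (guest name, paths, and sizes are
illustrative only and not taken from the patch; `pvshim` is the documented xl.cfg
boolean that boots the guest inside the shim):

```
# Hypothetical example: a 32-bit PV guest kept inside the PV shim,
# the configuration that stays security supported under this change.
type    = "pv"
name    = "demo32"                    # example name
memory  = 512
vcpus   = 2
kernel  = "/path/to/vmlinuz-32bit"    # example path
ramdisk = "/path/to/initrd-32bit"     # example path
pvshim  = 1                           # run the guest under the PV shim
disk    = [ "/dev/vg/demo32,raw,xvda,rw" ]
```

Without `pvshim = 1` (or with it set to 0), the same configuration boots an
un-shimmed 32-bit PV guest, which the patch marks as not security supported.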



From xen-devel-bounces@lists.xenproject.org Thu May 06 12:36:30 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161780-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161780: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 12:36:26 +0000

flight 161780 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161780/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3e13d8e34b53d8f9a3421a816ccfbdc5fa874e98
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  259 days
Failing since        152659  2020-08-21 14:07:39 Z  257 days  470 attempts
Testing same since   161780  2021-05-05 02:05:40 Z    1 days    1 attempts

------------------------------------------------------------
480 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 146475 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 06 12:45:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 12:45:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123475.232904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ledNb-0002aj-Rh; Thu, 06 May 2021 12:45:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123475.232904; Thu, 06 May 2021 12:45:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ledNb-0002ac-NS; Thu, 06 May 2021 12:45:23 +0000
Received: by outflank-mailman (input) for mailman id 123475;
 Thu, 06 May 2021 12:45:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9FTm=KB=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1ledNa-0002aW-CH
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 12:45:22 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 95bb6067-7974-4556-98cc-441e32b81550;
 Thu, 06 May 2021 12:45:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95bb6067-7974-4556-98cc-441e32b81550
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620305120;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=CaFMvaXG+IcmbGP4ApcA1HnATW8o6SK5UhElbJFmBUw=;
  b=DVUOLOa0osJUOB0+85VqJ1tahwQKL3r2754j8yzG6TDVyArFf/XQz1aU
   +BP0hJcLqMRUMOCWfKX7DQ/UydZOGA5myFeAazjIqDRtRVVEGlMpYb57B
   70fWNpS/NrpW06LS58ZrFO3n6DCip84QrPefq1XzuUmVATGfExI7cCMlI
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: xk3sQ0vbkSYG4Koe/MxUSlNtmghQ7DNZ2UewGez7TJBb3EnCM8KgmFFlmY0tINIy9amTyZC08y
 InCGIeNYx6hRdFMFbNu6w0CAy/LkJPOGp96RmSerf02eVfydX3l2QA31Ca12VlTcupgQ/gx+dl
 xflCiZj212skQEV3TsdNMhOxqdVSB/9z0uhSXeWlEBv+ybmFYAwJI1lo3WKGAzHSgQELr4res3
 0gGDAYBf9Kfo0Ddd+uAcGHGAdgzqebDdU0Gelf5O9z0X79UIrPcHdG1mOzxhpGvixgQiYQN5af
 1+0=
X-SBRS: 5.1
X-MesageID: 43200045
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:uB90sK2cjzvp9pze/UL7pQqjBXtyeYIsimQD101hICG9Lfb3qy
 n+ppsmPEHP5Ar5AEtQ5expOMG7MBfhHQYc2/hRAV7QZniYhILOFvAj0WKC+UyvJ8SazI9gPM
 hbAtBD4bHLfDpHZIPBkXSF+rUbsZq6GcKT9JzjJh5WJGkAAcwBnmRE40SgYzdLrWF9dMcE/f
 Gnl616Tk+bCA0qh7OAdx84tob41rj2vaOjRSRDKw8s6QGIgz/twqX9CQKk0hAXVC4K6as+8E
 De+jaJpJmLgrWe8FvxxmXT55NZlJ/K0d1YHvGBjcATN3HFlhuoXoJ8QLeP1QpF5d1HqWxa1O
 UkkS1Qefib2EmhJ11dZiGdgzUI5QxerEMKD2Xo2kcL7/aJHg7SQPAx+76xOiGpmnbI+usMjJ
 6jlljpxKZ/HFfOmj/w6MPPUAwvnk2ooWA6mepWlHBHV5ACAYUh4LD2bCtuYec99Q/Bmcsa+d
 NVfYvhDTdtACWnhnvizyVSKRyXLzwO9zK9Mwc/U+CuokxrdUFCvgIlLZYk7wI9HboGOu55Ds
 r/Q9ZVqI0=
X-IronPort-AV: E=Sophos;i="5.82,277,1613451600"; 
   d="scan'208";a="43200045"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W839nPeYOCxdb8X8rn50eyYkB4E2dhjfh0Pp+inaKgR6ZRA85KYqo5l3XjjQzjUaLo9H2Md/4Z8qPsYYiTy75m3ccl2HcFXfTm38GbWVnPkTYnWreeh4wEmOY1PXAlqTP9IL+keBtSvRlmhAtp9jr4H//T35hniyZ/kazUGCuZIU5QSDojfk3KODSProM1x1weImggB2eM0SI7Y7rN7GK2yN6CnZHDNEzUXcEr0klT/8jXBXfou+/MPR/Pw8tVb05/lhRuncEO9jsO0RLT50xBJh0yKCsGssGo3cHjfO0aEH+b0t5BuNNEggdWfWX5W7ikqmj1OEW+Mu85HMighZ7Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IJOnOu+ywRNBf0OYecsn/LJgkcJRORzGaERRQc3EHSE=;
 b=Zijn8m3+NjyMp1BJ+hRZH/x8Ix3S46fgAThEqDNbVXtyCvM5ktYl3Mdpl6+OBBrePbBwLoNn5Chng3DG5+85SP5ZF6PQ8CYg/UXO+ojPTzpx5JI8aGWrXhcPAvTvbQrWZoCiyDm75YBd/EEI9aVhRrCf4z8ig64Pf9dYxFcDLjQRgK5HL3GPbG3XM/BFy1yOGLrJIOWgQ1mQodIXTkQtFmJ/lnE8nPRrVSIYQbuou1CIAUU9YrsoY+y5QQZLAewC+Fp/TeSzqkPBfGtgJU+Y+HD9gkaCCrgyY0WCt5PmC+7RKiXAaBg38DKEwUvJbXVWKKOxAF696z2k3j4+8v+GfA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IJOnOu+ywRNBf0OYecsn/LJgkcJRORzGaERRQc3EHSE=;
 b=PCuMVax6yT+9FNFwNUXmtbgO8rdQf4s9dzumMruBQrBLw1aTc081UCTrFwDQfUaVh4kY4fTjCkFa5WWItuOdBaU3Lc9gAyqmW4q0Qh0JdLyyttW4/8nFmrhO/eiKyv7u/VXU9cdrfVrXmiqZhOpEsa4Q3jiDijENQvedLAf8TBs=
From: George Dunlap <George.Dunlap@citrix.com>
To: "security@xenproject.org" <security@xenproject.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
Thread-Topic: [PATCH] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
Thread-Index: AQHXQnNye/i9R6SCGUuiaMyKv7X1oqrWZmuA
Date: Thu, 6 May 2021 12:45:16 +0000
Message-ID: <1D2E57BF-DE91-49C7-B88D-F282523DE32B@citrix.com>
References: <20210506122915.65108-1-george.dunlap@citrix.com>
In-Reply-To: <20210506122915.65108-1-george.dunlap@citrix.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ad7a8932-2a5c-4821-94f2-08d9108ccccb
x-ms-traffictypediagnostic: PH0PR03MB6235:
x-microsoft-antispam-prvs: <PH0PR03MB6235B06A3C22D7E4A944FA9299589@PH0PR03MB6235.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="us-ascii"
Content-ID: <3F15A1593561924796C76D0841FD5AA5@namprd03.prod.outlook.com>
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ad7a8932-2a5c-4821-94f2-08d9108ccccb
X-MS-Exchange-CrossTenant-originalarrivaltime: 06 May 2021 12:45:16.5442
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: vzDk9llbTEO1thUPMIUleuw4aRS5pNDB1jjQZAPwPz8mszO4l8KawxyRymJ5kRLydkj1m43QqCSejKP8CpV0F5jhOL6xQTcem47ayEw0Zl8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6235
X-OriginatorOrg: citrix.com



> On May 6, 2021, at 1:29 PM, George Dunlap <george.dunlap@citrix.com> wrote:
> 
> The support status of 32-bit guests doesn't seem particularly useful.
> 
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> ---
> 
> NB this patch should be considered a proposal to the community, as a
> follow-on to XSA-370.  As mentioned in the advisory, we will wait
> until 25 May for comments before checking it in.

Sorry, this is an old version of the patch; sending out a new version.

 -George


From xen-devel-bounces@lists.xenproject.org Thu May 06 12:48:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 12:48:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123478.232916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ledQH-0003EN-9L; Thu, 06 May 2021 12:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123478.232916; Thu, 06 May 2021 12:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ledQH-0003EG-5X; Thu, 06 May 2021 12:48:09 +0000
Received: by outflank-mailman (input) for mailman id 123478;
 Thu, 06 May 2021 12:48:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9FTm=KB=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1ledQG-0003EA-0W
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 12:48:08 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 83dfbb23-4d24-459d-9618-a6c0e1de1eda;
 Thu, 06 May 2021 12:48:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83dfbb23-4d24-459d-9618-a6c0e1de1eda
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620305287;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=W/QiWTX4D53Ih8gtoglcoiha9d5V6Smayp5mH/TvMyo=;
  b=AJnYQgoo6JJ51sJU7/8IjiA+rngLtdOAmrwa5MO/vIq/8AO/ZqH2fhz6
   zq7hpbuqeNFiYU17Vw1M9dBhErux8GpJALSNrOK8Ini7/FUH5GTmm5lLL
   MwiPUN5laEhwpMbFbYRUX9VRgWH0fyQLyCNHdTg9sQX2SexETmC3khTy/
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: k3dVrGYnbc1o0t+2TzrWEtJNgCDIeGscwqWazKQboJWKrTe8E7Rk7o6HWi2PBRas9dAOmmsdYL
 w4bGFe3aiwGqp6G8zz7RTfuiWjqZOHeRVRTfjVA/qTMG16THvhV9/ZhTm/IWBDhkX6Oe/6gEch
 m3amJfTGGbScSp4pW2VepvsqLXoiF37ai1U5NooTpgscw7SNp0DftS1acX6ykqqW9qeUfihH2B
 lC0XgrVhSTMbEseYKc7s3LbaEwGtTJ+brApNu8O/AZKiloU9YT/DEfUeeS/WwmR1sFJdYDKSm9
 StI=
X-SBRS: 5.1
X-MesageID: 43009711
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:B9TjFa86UBl7NJFjquxuk+DWI+orL9Y04lQ7vn2YSXRuE/Bw8P
 re+8jztCWE8Qr5N0tQ+uxoVJPufZq+z+8Q3WByB8bBYOCOggLBR+sOgbcKqweQfREWndQ86U
 4PScZD4aXLfD1Hsfo=
X-IronPort-AV: E=Sophos;i="5.82,277,1613451600"; 
   d="scan'208";a="43009711"
From: George Dunlap <george.dunlap@citrix.com>
To: <security@xenproject.org>
CC: <xen-devel@lists.xenproject.org>, George Dunlap
	<george.dunlap@citrix.com>, Jann Horn <jannh@google.com>, Jan Beulich
	<jbeulich@suse.com>
Subject: [PATCH v2] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer supported
Date: Thu, 6 May 2021 13:47:52 +0100
Message-ID: <20210506124752.65844-1-george.dunlap@citrix.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The support status of 32-bit guests doesn't seem particularly useful.

With it changed to fully unsupported outside of PV-shim, adjust the PV32
Kconfig default accordingly.

Reported-by: Jann Horn <jannh@google.com>
Signed-off-by: George Dunlap <george.dunlap@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2:
 - add in Kconfig from advisory, ported over c/s d23d792478d
---
 SUPPORT.md           | 9 +--------
 xen/arch/x86/Kconfig | 7 +++++--
 2 files changed, 6 insertions(+), 10 deletions(-)

diff --git a/SUPPORT.md b/SUPPORT.md
index d0d4fc6f4f..a29680e04c 100644
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -86,14 +86,7 @@ No hardware requirements
 
     Status, x86_64: Supported
     Status, x86_32, shim: Supported
-    Status, x86_32, without shim: Supported, with caveats
-
-Due to architectural limitations,
-32-bit PV guests must be assumed to be able to read arbitrary host memory
-using speculative execution attacks.
-Advisories will continue to be issued
-for new vulnerabilities related to un-shimmed 32-bit PV guests
-enabling denial-of-service attacks or privilege escalation attacks.
+    Status, x86_32, without shim: Supported, not security supported
 
 ### x86/HVM
 
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index e55e029b79..9b164db641 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -55,7 +55,7 @@ config PV
 config PV32
 	bool "Support for 32bit PV guests"
 	depends on PV
-	default y
+	default PV_SHIM
 	select COMPAT
 	---help---
 	  The 32bit PV ABI uses Ring1, an area of the x86 architecture which
@@ -67,7 +67,10 @@ config PV32
 	  reduction, or performance reasons.  Backwards compatibility can be
 	  provided via the PV Shim mechanism.
 
-	  If unsure, say Y.
+	  Note that outside of PV Shim, 32-bit PV guests are not security
+	  supported anymore.
+
+	  If unsure, use the default setting.
 
 config PV_LINEAR_PT
        bool "Support for PV linear pagetables"
-- 
2.30.2
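
For readers less familiar with Kconfig semantics, a note on the change above:
`default PV_SHIM` makes PV32's default track the value of the PV_SHIM symbol,
so shim builds still get PV32 enabled by default while all other builds
default it off. A minimal sketch (same symbol names as the patch; the PV_SHIM
prompt text is illustrative, not copied from the tree):

```kconfig
config PV_SHIM
	bool "Build as the PV shim"   # prompt text illustrative only

config PV32
	bool "Support for 32bit PV guests"
	depends on PV
	# Defaults to y exactly when PV_SHIM=y, n otherwise; the user can
	# still override the default either way when configuring.
	default PV_SHIM
```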



From xen-devel-bounces@lists.xenproject.org Thu May 06 12:54:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 12:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123485.232946 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ledWI-00058k-PV; Thu, 06 May 2021 12:54:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123485.232946; Thu, 06 May 2021 12:54:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ledWI-00058d-MO; Thu, 06 May 2021 12:54:22 +0000
Received: by outflank-mailman (input) for mailman id 123485;
 Thu, 06 May 2021 12:54:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ledWG-00058U-Rl
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 12:54:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ledWG-0008MU-0S; Thu, 06 May 2021 12:54:20 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ledWF-0004ra-QI; Thu, 06 May 2021 12:54:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=1f8VE+gKWT1fXlZMtDLksPX2illJCsUSfiLEZL/+XNE=; b=wURuQ3NztQECS5UDV9e2mllNeF
	2o8abmnlOReTAwkpDzcZy4fgKuQp8us6SL/j1dEiX0Aq6isGH32Bs5POAu6UxguqNGo990RjWhKNm
	f8LBk3VaWbRSQmb0K8mTtxVKPCvhkvlh4Wc5bxXzd/GePqr75A9qFqoV9cefINJoxL8g=;
Subject: Re: [PATCH v4 3/3] unzstd: make helper symbols static
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12054cba-4386-0dbf-46fd-41ace0344f8e@suse.com>
 <759c8524-cc01-fac8-bc62-0ba6558477bd@suse.com>
 <cb8fa703-f421-ce55-811a-d4a649bc201a@xen.org>
 <1696e5f2-481a-5a7f-258d-b2a0679b041f@suse.com>
 <f6e00fd9-a207-858e-37e8-fb25427cf8de@xen.org>
 <cb4ee5ef-fba8-e70d-79ae-c640ed853d53@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a03685fe-191b-07e5-78af-eccc9bb4ff05@xen.org>
Date: Thu, 6 May 2021 13:54:16 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <cb4ee5ef-fba8-e70d-79ae-c640ed853d53@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 06/05/2021 07:21, Jan Beulich wrote:
> On 05.05.2021 19:35, Julien Grall wrote:
>>
>>
>> On 29/04/2021 14:26, Jan Beulich wrote:
>>> On 29.04.2021 13:27, Julien Grall wrote:
>>>> On 21/04/2021 11:22, Jan Beulich wrote:
>>>>> While for the original library's purposes these functions of course want
>>>>> to be externally exposed, we don't need this, and we also don't want
>>>>> this both to prevent unintended use and to keep the name space tidy.
>>>>> (When functions have no callers at all, wrap them with a suitable
>>>>> #ifdef.) This has the added benefit of reducing the resulting binary
>>>>> size - while this is all .init code, it's still desirable to not carry
>>>>> dead code.
>>>>
>>>> So I understand the desire to keep the code close to Linux and removing
>>>> the dead code. However I am still not convinced that the approach taken
>>>> is actually worth the amount of memory saved.
>>>>
>>>> How much memory are we talking about here?
>>>
>>> There are no (runtime) memory savings, as is being said by the
>>> description. There are savings on the image and symbol table sizes
>>> (see below - .*.0/ holding files as produced without the patch
>>> applied, while .*.1/ holding output with it in place), the image
>>> size reduction part of which is - as also expressed by the
>>> description - a nice side effect, but not the main motivation for
>>> the change.
>>
>> Thanks for providing the information. I had misunderstood your
>> original intention.
>>
>> Reading them again, I have to admit this doesn't really change my view
>> here. You are trading code maintainability/readability for a smaller
>> name space and prevention of unintended use (it is not clear what
>> would be wrong with calling them).
> 
> Well, I mean mainly the usual issue of us, short of having Linux-like
> section reference checking, being at risk of calling __init functions
> from non-__init code.

Right, we rely on review to not mix the two. In the past, I have 
successfully used the Linux script on the Xen binary. It might be a good 
idea to import a cut-down version into Xen.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 06 13:09:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 13:09:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123517.232957 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ledl5-0007M5-51; Thu, 06 May 2021 13:09:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123517.232957; Thu, 06 May 2021 13:09:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ledl5-0007Ly-2A; Thu, 06 May 2021 13:09:39 +0000
Received: by outflank-mailman (input) for mailman id 123517;
 Thu, 06 May 2021 13:09:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EnkQ=KB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ledl3-0007Ls-Io
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 13:09:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab39dffb-6bf9-4a90-9513-ba8c7dd6bb01;
 Thu, 06 May 2021 13:09:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9CC25AE39;
 Thu,  6 May 2021 13:09:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab39dffb-6bf9-4a90-9513-ba8c7dd6bb01
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620306575; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HY+Dmj9Nv1eHRXrLW69Mj3Wx9sD2RlBHWbfeyc7i/wY=;
	b=TeWCR5omn409NuJ3KxhPf9MgweeMukJ49BbF8dFQG/oTF6eKItFGh7gQg+vuoFp2xIE/+v
	Hl+xYtLBve933/Tu8Rzaqo8JUMkP1QcBateBqot6h1qOsdQLejapvBHTnmtW9WF3II9vI8
	Gx9TmbR9yI4y5pFzfAIj4mTiJA/P3VU=
Subject: Re: [PATCH v2] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
To: George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jann Horn <jannh@google.com>
References: <20210506124752.65844-1-george.dunlap@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <acd695c1-dc04-4606-5212-5fd993e355b1@suse.com>
Date: Thu, 6 May 2021 15:09:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210506124752.65844-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.05.2021 14:47, George Dunlap wrote:
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -55,7 +55,7 @@ config PV
>  config PV32
>  	bool "Support for 32bit PV guests"
>  	depends on PV
> -	default y
> +	default PV_SHIM
>  	select COMPAT
>  	---help---
>  	  The 32bit PV ABI uses Ring1, an area of the x86 architecture which
> @@ -67,7 +67,10 @@ config PV32
>  	  reduction, or performance reasons.  Backwards compatibility can be
>  	  provided via the PV Shim mechanism.
>  
> -	  If unsure, say Y.
> +	  Note that outside of PV Shim, 32-bit PV guests are not security
> +	  supported anymore.
> +
> +	  If unsure, use the default setting.

Alongside this I wonder whether we should also default opt_pv32 to false
then, unless running in shim mode.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123533.232988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeXl-0003zy-JF; Thu, 06 May 2021 13:59:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123533.232988; Thu, 06 May 2021 13:59:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeXl-0003zp-F7; Thu, 06 May 2021 13:59:57 +0000
Received: by outflank-mailman (input) for mailman id 123533;
 Thu, 06 May 2021 13:59:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeXj-0003iB-N7
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 13:59:55 +0000
Received: from mail-qt1-x82b.google.com (unknown [2607:f8b0:4864:20::82b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d060e54-3546-4113-998d-8797937aaeb2;
 Thu, 06 May 2021 13:59:55 +0000 (UTC)
Received: by mail-qt1-x82b.google.com with SMTP id o1so4050262qta.1
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 06:59:55 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.06.59.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 06:59:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d060e54-3546-4113-998d-8797937aaeb2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=poW51U2VVY5gXkXvZ25b1vYYpwTn8lqPWe1EiAs4Tbg=;
        b=HD7SXwT1NtfGxVWBp6YBENsTVJbzVh4hYQ7yBUVP/I57IT5WiRkPD0qxJ9Z9Odak6e
         z4kGtG4S2UPVug+BzfZGv1fEQWFV3enWUR4GuK9XqXF7VZPYJuSrjfXltS54iyhVBI73
         PsUgwc01UpMbTgfI8Mj0Kq6FQ64bEw345mssQe2pvL0ptGC322h0SzK1iEaKHB9N+NFm
         z55TXBqM5jGrZcPdoQ6ywyn33/L8aEyYL+q0nrmBBpuNSdpc7tsrc9MVJKJZginFtWNS
         sH+U+ZqOChZWLlHSD2YHv/nh3InxnkDdr+7ORPZg8BhVbnMMhv3kLf/xk+D1FGIIcN4g
         v4CA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=poW51U2VVY5gXkXvZ25b1vYYpwTn8lqPWe1EiAs4Tbg=;
        b=kaXmgcg1hYdqIIqeb9OBT7iHqkxBFyl5YP8GfBXRvoS9b8tUz7QjZylbcC0j54H6uf
         +vKSYkqIfbbP3lKWkPkofZ/O43BsD1azJau8ZGAJ22o7e1p4FhZHIrgWivq0W4urCxWy
         wd/zfGa4xYrp9Ge6qSzXSIi2BhcUP8VOvQ/W6tfNAU59w665G5p+n7TOopehgvA1wiDc
         dwozjxRzArr+gyKYF03aIYiEP/nn9SnolUgTXRW839JRFkXLop5oCrPVijl0wx0bD1NC
         7ObOzVH5G2ssjXq9fgbT274Q3A6ybijUV1Wf3FSAjgK2am7SrFYylBZRbsQE6IyexjWX
         YN5Q==
X-Gm-Message-State: AOAM532cX/DS+vsp73ODkBU7TF2uRkp2FY+qRPqtgsY4KbcvCUQ9rbfQ
	wJzypJNXFKp9zFWC23p1iAi5yHCN77c=
X-Google-Smtp-Source: ABdhPJx0LAeDDpflzjiEYTOwZOIRcLGZhioDrPhNBATKFN1RUoXo5q/WK1BQeG6fvr4KxQdFiBnfrg==
X-Received: by 2002:aed:2128:: with SMTP id 37mr4311335qtc.163.1620309594470;
        Thu, 06 May 2021 06:59:54 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: [PATCH v2 01/13] docs: Warn about incomplete vtpmmgr TPM 2.0 support
Date: Thu,  6 May 2021 09:59:11 -0400
Message-Id: <20210506135923.161427-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The vtpmmgr TPM 2.0 support is incomplete.  Add a warning to the
documentation so others don't have to discover the breakage themselves.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
 docs/man/xen-vtpmmgr.7.pod | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
index af825a7ffe..875dcce508 100644
--- a/docs/man/xen-vtpmmgr.7.pod
+++ b/docs/man/xen-vtpmmgr.7.pod
@@ -222,6 +222,17 @@ XSM label, not the kernel.
 
 =head1 Appendix B: vtpmmgr on TPM 2.0
 
+=head2 WARNING: Incomplete - cannot persist data
+
+TPM 2.0 support for vTPM manager is incomplete.  There is no support for
+persisting an encryption key, so vTPM manager regenerates primary and secondary
+key handles each boot.
+
+Also, the vTPM manager group command implementation hardcodes TPM 1.2 commands.
+This means running manage-vtpmmgr.pl fails when the TPM 2.0 hardware rejects
+the TPM 1.2 commands.  vTPM manager with TPM 2.0 cannot create groups and
+therefore cannot persist vTPM contents.
+
 =head2 Manager disk image setup:
 
 The vTPM Manager requires a disk image to store its encrypted data. The image
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123534.233000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeXp-0004X9-TO; Thu, 06 May 2021 14:00:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123534.233000; Thu, 06 May 2021 14:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeXp-0004WY-P4; Thu, 06 May 2021 14:00:01 +0000
Received: by outflank-mailman (input) for mailman id 123534;
 Thu, 06 May 2021 14:00:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeXo-0003iB-NM
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:00 +0000
Received: from mail-qk1-x72b.google.com (unknown [2607:f8b0:4864:20::72b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35cf9597-2d37-4779-b7e5-b5274f56732d;
 Thu, 06 May 2021 13:59:56 +0000 (UTC)
Received: by mail-qk1-x72b.google.com with SMTP id a22so4385472qkl.10
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 06:59:56 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.06.59.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 06:59:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35cf9597-2d37-4779-b7e5-b5274f56732d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=SCid/Fz7LIrkhT623U47nCiexIJBPSb/z7U4jQky+9M=;
        b=o74AxszkG4ApqEkoZ8gmh9ZcWJHzVmj9wYURxh2AI+TkrexELduphfW+qEXIX+gkuo
         adrRZWaGh77TNrzEn+VCKwwlm68QOuah312/X/FjOIjE759FbPWIxCBDUySfHoG1gRv2
         LYskFUGFsNJZGi1at+wWQh4EfpnYfZ54tSd/qtnoG9ZMBDdnE6a3lQjTECSeyi3WmG0S
         ee1YQnzPQ13t2/ho7lrg/VvZoUTLqsLPYUuX9qczGd1GdFjd31BBUcd2TAfkHNrSyals
         rt+cV2ODjk+1uKVvjs1Xkus0cF3gfNr+NtP7cBXK6GU2f0gKX8nsgF5zqVKu/RPXfoaB
         4zAw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=SCid/Fz7LIrkhT623U47nCiexIJBPSb/z7U4jQky+9M=;
        b=TrNukfYeITWXslD2Sf4E5xCfTC6jGiSPdOn9X0EQF6D94BS6Fb4KqS98dWkWrzsWqa
         mwLvLd0D7xns4JMUi6LxaVs9mgyTrFIe19/RfU8ZtjJaNxij8iZuiIexdY+A0RAN5T/1
         mvKFk5zqEg0LCt8KlCX/5U3tTgNsn+6TYIpK4ishGDEVVaGeDt7TyGIlgPOE3fgM+914
         xFxmWkr4UPkqMEuwvoMb0185NwDgoGB+8zeDhkL37ZopOhlEarFt9ANWu2VS5m/3xOJk
         5Fo2+cxHXmrdpHdhPNnCw7QSXhhzWFjkgkxQdK9DYj/mNBcj8q0/5dvFAfI4XB8UMV+T
         Xzfw==
X-Gm-Message-State: AOAM53016wWfpxgHw88D9MdkQPpuNZH2v0J+JIG7Xd1gMM1pJFNfdnAq
	4Uhy/SF/phg2xYy22R3oSsEBEvG67n8=
X-Google-Smtp-Source: ABdhPJznW38Pts006EYI4l39hd8n1Uo8kITRc1YT80eVCcvk6D4zMIvvHlnpALjfDJcYZCDkjDXYlw==
X-Received: by 2002:a05:620a:e05:: with SMTP id y5mr4149112qkm.250.1620309595574;
        Thu, 06 May 2021 06:59:55 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 02/13] vtpmmgr: Print error code to aid debugging
Date: Thu,  6 May 2021 09:59:12 -0400
Message-Id: <20210506135923.161427-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

tpm_get_error_name returns "Unknown Error Code" when an error string
is not defined.  In that case, we should print the Error Code so it can
be looked up offline.  tpm_get_error_name returns a const string, so
just have the two callers always print the error code so it is always
available.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 stubdom/vtpmmgr/tpm.c  | 2 +-
 stubdom/vtpmmgr/tpm2.c | 2 +-
 2 files changed, 2 insertions(+), 2 deletions(-)

diff --git a/stubdom/vtpmmgr/tpm.c b/stubdom/vtpmmgr/tpm.c
index 779cddd64e..83b2bc16b2 100644
--- a/stubdom/vtpmmgr/tpm.c
+++ b/stubdom/vtpmmgr/tpm.c
@@ -109,7 +109,7 @@
 			UINT32 rsp_status; \
 			UNPACK_OUT(TPM_RSP_HEADER, &rsp_tag, &rsp_len, &rsp_status); \
 			if (rsp_status != TPM_SUCCESS) { \
-				vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(rsp_status)); \
+				vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s (%x)\n", tpm_get_error_name(rsp_status), rsp_status); \
 				status = rsp_status; \
 				goto abort_egress; \
 			} \
diff --git a/stubdom/vtpmmgr/tpm2.c b/stubdom/vtpmmgr/tpm2.c
index c9f1016ab5..655e6d164c 100644
--- a/stubdom/vtpmmgr/tpm2.c
+++ b/stubdom/vtpmmgr/tpm2.c
@@ -126,7 +126,7 @@
     ptr = unpack_TPM_RSP_HEADER(ptr, \
           &(tag), &(paramSize), &(status));\
     if ((status) != TPM_SUCCESS){ \
-        vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(status));\
+        vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s (%x)\n", tpm_get_error_name(status), (status));\
         goto abort_egress;\
     }\
 } while(0)
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123532.232976 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeXg-0003iO-5i; Thu, 06 May 2021 13:59:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123532.232976; Thu, 06 May 2021 13:59:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeXg-0003iH-1x; Thu, 06 May 2021 13:59:52 +0000
Received: by outflank-mailman (input) for mailman id 123532;
 Thu, 06 May 2021 13:59:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeXe-0003iB-Qx
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 13:59:50 +0000
Received: from mail-qk1-x734.google.com (unknown [2607:f8b0:4864:20::734])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc84a090-afb3-4664-8baf-d61aabdbd6c8;
 Thu, 06 May 2021 13:59:49 +0000 (UTC)
Received: by mail-qk1-x734.google.com with SMTP id a22so4385115qkl.10
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 06:59:49 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.06.59.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 06:59:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc84a090-afb3-4664-8baf-d61aabdbd6c8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=wJPHvd3uXZn/zFCWAhnJJxqwOJkY4HFy/cNCIYr5uhI=;
        b=TlS7P/oafqjoWmOEY6hdfn/Uy3yJzrqiWQdzUwImeaNjKtMPkozlOWP3lHcxj8SQeU
         28ZltUWOrkvt9T59eY4RL1boZ4MwYUKA9OhAxtuBuo24uskLb9Ff50bZ285bG79sHRtE
         gljAXM9tXHGKkGNAJ8l5ksZYIEBXTQtOqORUKhVTTTa/LuxJTgwUjEIAb9j+tpbfYSGJ
         2URLkoOB6VSN0kn5FQ08wDXWEOsU4TAf8TEtyUw/CtCSkqM8y1tAtrUuHUvkbZBqRiGo
         w3vfSvButAqBKbM7+pmEa58vtDClbk02TF/aefOaR60wpAKr2+okl1TtF2OeIIPFGKkR
         qylQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=wJPHvd3uXZn/zFCWAhnJJxqwOJkY4HFy/cNCIYr5uhI=;
        b=UqHxgBXt3cI22FZHj+qnITwxbYNN+lEz096xex34Ix49vS0o8HoBU75arZVr69eEMY
         pqeroAgAGRrd7o4xqWX87lCL+3CWkYCvcHuQu5Za0iictpub4gSZiy720dhQH+za8Xzx
         hjFdITW7e1u/SXhNAwtFHbKyY9ZA0GYMbFYehwQck2GrNT7alvW8FOjNtJnuJuI43GYh
         q1lPlRRcoepSxOvVG2ZEhpDNRHpOSt974CK+KU56lyvrzY/odZ+O98NoEQaYEwoSuMIu
         3mKvanVvLaXTVII7Z41zgW8T1zUXsg/rWO9uoS/EBOR9BGWYCOf2/WGK+dn1NSrIVlMe
         pBgg==
X-Gm-Message-State: AOAM532iJ9ld+dtQuqoU1dw8cssL5dKbJK4/xFovGxwzDokLLBwVktYM
	GyqJVhir19wwEuUxq+vCt1FesHHKmbA=
X-Google-Smtp-Source: ABdhPJwPMblysiMTeE3oIEWgXNGAOX9GprQ24myuK+G2/9r4u2lzsfoDGkI5VWuUa7pAbLDqW0oa1Q==
X-Received: by 2002:a37:8ec4:: with SMTP id q187mr4081232qkd.381.1620309589082;
        Thu, 06 May 2021 06:59:49 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 00/13] vtpmmgr: Some fixes - still incomplete
Date: Thu,  6 May 2021 09:59:10 -0400
Message-Id: <20210506135923.161427-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

vtpmmgr TPM 2.0 support is incomplete.  There is no code to save the
tpm2 keys generated by the vtpmmgr, so it's impossible to restore vtpm
state with tpm2.  The vtpmmgr also issues TPM 1.2 commands to the TPM
2.0 hardware which naturally fails.  Dag reported this [1][2], and I
independently re-discovered it.

I have not fixed the above issues.  These are some fixes I made while
investigating tpm2 support.  At a minimum, "docs: Warn about incomplete
vtpmmgr TPM 2.0 support" should be applied to warn others.

This is useful for debugging:
vtpmmgr: Print error code to aid debugging

This fixes vtpmmgr output (also noted by Dag [3]):
stubdom: newlib: Enable C99 formats for %z

This gives more flexibility if you are already using the TPM2 hardware:
vtpmmgr: Allow specifying srk_handle for TPM2

These are some changes to unload keys from the TPM hardware (so they
are not still loaded for anything that runs afterwards):
vtpmmgr: Move vtpmmgr_shutdown
vtpmmgr: Flush transient keys on shutdown
vtpmmgr: Flush all transient keys
vtpmmgr: Shutdown more gracefully

This lets vtpms initialize their random pools:
vtpmmgr: Support GetRandom passthrough on TPM 2.0

New in v2:
TPM2_GetRandom fix per Samuel:
vtpmmgr: Remove bogus cast from TPM2_GetRandom

Change ":" to "=":
vtpmmgr: Fix owner_auth & srk_auth parsing

Follow on from comments from Samuel
vtpmmgr: Check req_len before unpacking command

Fix for vtpm emulator to work with Linux 5.4
vtpm: Correct timeout units and command duration

Changes in v2:
Added R-by & Ack-by to 1-3,5-8
Updated #4 to use srk_handle=
Updated #7 commit message
Updated #9 per Samuel
Added #10-13

[1] https://lore.kernel.org/xen-devel/8285393.eUs1EhXEQl@eseries.newtech.fi/
[2] https://lore.kernel.org/xen-devel/1615731.eyaQ0j4tC5@eseries.newtech.fi/
[3] https://lore.kernel.org/xen-devel/3151252.0ZAaMuH7Fy@dag.newtech.fi/

Jason Andryuk (13):
  docs: Warn about incomplete vtpmmgr TPM 2.0 support
  vtpmmgr: Print error code to aid debugging
  stubdom: newlib: Enable C99 formats for %z
  vtpmmgr: Allow specifying srk_handle for TPM2
  vtpmmgr: Move vtpmmgr_shutdown
  vtpmmgr: Flush transient keys on shutdown
  vtpmmgr: Flush all transient keys
  vtpmmgr: Shutdown more gracefully
  vtpmmgr: Support GetRandom passthrough on TPM 2.0
  vtpmmgr: Remove bogus cast from TPM2_GetRandom
  vtpmmgr: Fix owner_auth & srk_auth parsing
  vtpmmgr: Check req_len before unpacking command
  vtpm: Correct timeout units and command duration

 docs/man/xen-vtpmmgr.7.pod              | 18 +++++++
 stubdom/Makefile                        |  4 +-
 stubdom/vtpm-command-duration.patch     | 52 +++++++++++++++++++
 stubdom/vtpm-microsecond-duration.patch | 52 +++++++++++++++++++
 stubdom/vtpmmgr/init.c                  | 57 +++++++++++++--------
 stubdom/vtpmmgr/marshal.h               | 15 ++++++
 stubdom/vtpmmgr/tpm.c                   |  2 +-
 stubdom/vtpmmgr/tpm2.c                  | 15 ++++--
 stubdom/vtpmmgr/vtpm_cmd_handler.c      | 67 ++++++++++++++++++++++++-
 stubdom/vtpmmgr/vtpmmgr.c               | 12 ++++-
 10 files changed, 266 insertions(+), 28 deletions(-)
 create mode 100644 stubdom/vtpm-command-duration.patch
 create mode 100644 stubdom/vtpm-microsecond-duration.patch

-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123536.233011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeXv-0005Vu-5W; Thu, 06 May 2021 14:00:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123536.233011; Thu, 06 May 2021 14:00:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeXv-0005Vk-1s; Thu, 06 May 2021 14:00:07 +0000
Received: by outflank-mailman (input) for mailman id 123536;
 Thu, 06 May 2021 14:00:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeXt-0003iB-NZ
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:05 +0000
Received: from mail-qt1-x831.google.com (unknown [2607:f8b0:4864:20::831])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5510c360-3273-4078-818c-feb0b1c5fd49;
 Thu, 06 May 2021 13:59:57 +0000 (UTC)
Received: by mail-qt1-x831.google.com with SMTP id g13so4036987qts.4
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 06:59:57 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.06.59.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 06:59:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5510c360-3273-4078-818c-feb0b1c5fd49
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=K2IS4DIH6mLVlW67hNLVvhjqoD6ItQVTxI1gVdmna5w=;
        b=O+YCoBZo1K0G1sBZPrSk8KEPWO7UHlD2AFTiezkwB7XInHsh+bWguOGAMV4/rQBY0R
         Ls0TzFg8BhB0XLkw61Aa5YGgPo/TuX+rp+pSiBXH76F8ssEmyyifotDaKAcdFDsOpfMH
         T+//z3/qW3h5W1wduRU1Q4UeJd4COqUgdCLbrbPuA1Py2ml7u5v9o10stk2J92NznM6j
         cP0i+b2aCn/OP+wc9lGfBtGlnJB4FaqM0WaQoLl9uV2b5EFzCH/iuXbKjll6IEhuOXOA
         kP9IJTbpuk6GkRmQXDo//H5jzwry6bCwECTt7ATv1G9ecxq+3fEOafagQWArZTfTYZXX
         Y/zQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=K2IS4DIH6mLVlW67hNLVvhjqoD6ItQVTxI1gVdmna5w=;
        b=NQzSQeLUkNwSJ+mhfzeDs5LbalCOmE6NgoVFQXPQ6nsxaUfNuQkxf6Np8lYvo2DgML
         ETZgHKAhyjFtlI5U3/qaYake8hxen8ieW9U48/w8p1YC3bdks4k9wC2fAS86HytcpJgK
         0YzXbtxKKFt/HSDEigpKnA05x/S5/Qf7HtoYOAGw86iegEw/C9smcEfY01kx7tmPREpm
         t8iQ7YJLiLFGaxRcO73F0nL3sWCbYbc3K4vNiBFipiyHMQyn7kM/4NyPd0eRz2PotvvF
         pP2WB2gYj3H1fVgr7mZKlVP4DbpY6jNM0GoRTE62LpbTWE6PgT6egkReaM+vtejJIF32
         W2vQ==
X-Gm-Message-State: AOAM533Te1rqf7JP9x933/QX+EIh3Pbh9FXyEakBAkuYkixNRUAgdCF8
	vfXgT3QzMo+3jjwwDjGwzphcSF3Ltqs=
X-Google-Smtp-Source: ABdhPJzKmDpKlFkDdTij8Y3/yo50zJDY90tSPO+GddOvwyita+2uSsv2W35Iln1+DCgigN2dW8CdvQ==
X-Received: by 2002:ac8:6f4c:: with SMTP id n12mr4580168qtv.22.1620309596651;
        Thu, 06 May 2021 06:59:56 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 03/13] stubdom: newlib: Enable C99 formats for %z
Date: Thu,  6 May 2021 09:59:13 -0400
Message-Id: <20210506135923.161427-4-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

vtpmmgr was changed to print size_t with the %z modifier, but newlib
isn't compiled with %z support.  So you get output like:

root seal: zu; sector of 13: zu
root: zu v=zu
itree: 36; sector of 112: zu
group: zu v=zu id=zu md=zu
group seal: zu; 5 in parent: zu; sector of 13: zu
vtpm: zu+zu; sector of 48: zu

Enable the C99 formats in newlib so vtpmmgr prints the numeric values.

Fixes: 9379af08ccc0 ("stubdom: vtpmmgr: Correctly format size_t with %z
when printing")

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 stubdom/Makefile | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stubdom/Makefile b/stubdom/Makefile
index 90d9ffcd9f..c6de5f68ae 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -105,7 +105,7 @@ cross-newlib: $(NEWLIB_STAMPFILE)
 $(NEWLIB_STAMPFILE): mk-headers-$(XEN_TARGET_ARCH) newlib-$(NEWLIB_VERSION)
 	mkdir -p newlib-$(XEN_TARGET_ARCH)
 	( cd newlib-$(XEN_TARGET_ARCH) && \
-	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --disable-multilib && \
+	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --enable-newlib-io-c99-formats --disable-multilib && \
 	  $(MAKE) DESTDIR= && \
 	  $(MAKE) DESTDIR= install )
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123537.233024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeY0-0005wI-Gt; Thu, 06 May 2021 14:00:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123537.233024; Thu, 06 May 2021 14:00:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeY0-0005w6-Cj; Thu, 06 May 2021 14:00:12 +0000
Received: by outflank-mailman (input) for mailman id 123537;
 Thu, 06 May 2021 14:00:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeXy-0003iB-Nv
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:10 +0000
Received: from mail-qk1-x72a.google.com (unknown [2607:f8b0:4864:20::72a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffb43a8e-7703-447f-b2a4-bb32925fd77c;
 Thu, 06 May 2021 13:59:58 +0000 (UTC)
Received: by mail-qk1-x72a.google.com with SMTP id q136so4938929qka.7
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 06:59:58 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.06.59.56
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 06:59:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffb43a8e-7703-447f-b2a4-bb32925fd77c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=tdayrghQPCz70YHcLBlydERZJ3sNcRxWDd88jzJUALc=;
        b=Xl/6BPBiUg0uCStcneFXNCblFkA8fBAAtMd99bvBPHjbgibrYEP1QXTqKXo9sFK2Z5
         gBXVLKXVqBdSWl02EOWV4NOjwuUZlL1zDlb4luMTk5auVsFRYYvfR54w9Ouss6yRGKXe
         gLxa1QleBm2hdwpQhzM21wn6jgKwxK/wI3FL9rd8ZY1a9qyPfB5p78CHL3H6IazL6DaJ
         +Twf8D/zUSS12p6Iv6KpGpI59jm+qNGigZ7ni1m7wDlw2b0tVfXo5ETW7391UCIuuHiq
         fwb8P3vbExxHQ5LERzLIOzdspbb6V+PxP4HksLyRBhPIEJmthbRcL8RlN6anKQl9HJzQ
         GLLw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=tdayrghQPCz70YHcLBlydERZJ3sNcRxWDd88jzJUALc=;
        b=StLLP4vrM1Wj2H1RksxUkl/v5DHqkdkWpN1dmCrIWnKsirisxU9SK1ejfr8nmslAKW
         bup5jblHYQYQ4xIrCtRZGB1P1oh84opsSgluTyU6Z0P5yzx+x0eOQ5Hnjeeo86zexFlm
         B4jkWVl3yxyChze8qdai4C8zOgFg2Yg8/0nnTHPdh2EzF7amWJ6gJ/iKyYpFg69uM/rT
         sjjFbd2/dTsIYsW8vFUrytYxiT1aeFT040oJxfmCzWMtHVvFq4i2NZsg3UETh8TDvAMD
         Zc1QaKvbcM0NCnFiF4marSDdCloZl1bLa3853GhjsXOA9SOFIXrVsHYgCgvPKEdRqYNI
         k4lw==
X-Gm-Message-State: AOAM531S2OZpzbJxAFx0632UgCgqMrZPdMRC0eDBAp4yzdhWq61kNbqA
	3do/VdikcKv4CCWXnbjXBi6XJ/G0UKs=
X-Google-Smtp-Source: ABdhPJyk/YfrpM7kXpRPeGB7iQ5jVt6CcqzOeDiOnzgAYg+rwg1wDpdcBKYx0JbzqpuI+EhhMyuCqA==
X-Received: by 2002:a05:620a:13e2:: with SMTP id h2mr3875260qkl.235.1620309597850;
        Thu, 06 May 2021 06:59:57 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 04/13] vtpmmgr: Allow specifying srk_handle for TPM2
Date: Thu,  6 May 2021 09:59:14 -0400
Message-Id: <20210506135923.161427-5-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Bypass taking ownership of the TPM2 if an srk_handle is specified.

This srk_handle must be usable with Null auth for the time being.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
v2: Use "=" separator
---
 docs/man/xen-vtpmmgr.7.pod |  7 +++++++
 stubdom/vtpmmgr/init.c     | 11 ++++++++++-
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
index 875dcce508..3286954568 100644
--- a/docs/man/xen-vtpmmgr.7.pod
+++ b/docs/man/xen-vtpmmgr.7.pod
@@ -92,6 +92,13 @@ Valid arguments:
 
 =over 4
 
+=item srk_handle=<HANDLE>
+
+Specify an srk_handle for TPM 2.0.  TPM 2.0 uses a key hierarchy, and
+this allows specifying the parent handle under which vtpmmgr creates
+its own key.  Using this option bypasses vtpmmgr trying to take
+ownership of the TPM.
+
 =item owner_auth=<AUTHSPEC>
 
 =item srk_auth=<AUTHSPEC>
diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index 1506735051..130e4f4bf6 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -302,6 +302,11 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
             goto err_invalid;
          }
       }
+      else if(!strncmp(argv[i], "srk_handle=", 11)) {
+         if(sscanf(argv[i] + 11, "%x", &vtpm_globals.srk_handle) != 1) {
+            goto err_invalid;
+         }
+      }
       else if(!strncmp(argv[i], "tpmdriver=", 10)) {
          if(!strcmp(argv[i] + 10, "tpm_tis")) {
             opts->tpmdriver = TPMDRV_TPM_TIS;
@@ -586,7 +591,11 @@ TPM_RESULT vtpmmgr2_create(void)
 {
     TPM_RESULT status = TPM_SUCCESS;
 
-    TPMTRYRETURN(tpm2_take_ownership());
+    if ( vtpm_globals.srk_handle == 0 ) {
+        TPMTRYRETURN(tpm2_take_ownership());
+    } else {
+        tpm2_AuthArea_ctor(NULL, 0, &vtpm_globals.srk_auth_area);
+    }
 
    /* create SK */
     TPM2_Create_Params_out out;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123539.233036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeY5-0006Yi-SD; Thu, 06 May 2021 14:00:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123539.233036; Thu, 06 May 2021 14:00:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeY5-0006YV-P3; Thu, 06 May 2021 14:00:17 +0000
Received: by outflank-mailman (input) for mailman id 123539;
 Thu, 06 May 2021 14:00:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeY3-0003iB-Ni
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:15 +0000
Received: from mail-qk1-x72c.google.com (unknown [2607:f8b0:4864:20::72c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f2b0a8a-1094-4fc4-b780-cb8eb8f45673;
 Thu, 06 May 2021 13:59:59 +0000 (UTC)
Received: by mail-qk1-x72c.google.com with SMTP id x8so4970604qkl.2
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 06:59:59 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.06.59.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 06:59:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f2b0a8a-1094-4fc4-b780-cb8eb8f45673
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=jlS7YCsZWAJRCh7VQeSElH9c4MLj2HkvcGmj37dX+gg=;
        b=pVdEJP/iDrGELOtr9qRnBVOIPZ2ZN0l0ZfloCS2HRZitIANAaey0IPnF7UmqjLIn96
         n/JRq1SY5EL/IBGqmnbMFmSx/QWGztcQle9WuuUrSG+BYWRzpZUdxmV3QHR8hnGfbv//
         Ije3dTarTSN0P+MyqQ4/EE+KKcrfzzZBZPb8JGfjOk7LxYam3vXT8WnN7h2GIv0ulPSa
         YwGlJoaSSHzoCBFDdTVzVcOl2lKURv4aLZOM7AaagnwNM6E4jKxJ0F8lXwrObKix13KL
         R5MYM0VNqy9uYBMR4ePJ2VwPQCPuBsHewBg/JntIZBzqBT3yUxgO6eoGi7dL1JFsLSie
         202g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=jlS7YCsZWAJRCh7VQeSElH9c4MLj2HkvcGmj37dX+gg=;
        b=kaaDUKbLUOxh+JHrKWsSui9oApRPegu1IA5IAwLPrCItDq4qxgXAnl37/dhXQMCuwn
         bZQhjmeQVugg85aWoTjCNuhoI6Dap7TX07gvtevN1dq80omX5/XarPQBfTnmcgUOH3hZ
         sG62ZdDBF6Y69+fqOya1FECrQ9d71/xfGKLtawZSmeNRm8A9BNGGS54LqtotB4gRIWwS
         wAbH65sLQse31ZLDPY34UEYOJb5+HVuVTnVI641GQ+4A28UkeuDreQ7u+Pgtsoypif0t
         XPAyNDko6ADA4D3DeToVR9c11T9fuFGR2NS6cH7guwbtf1OaOpwGcwllDFcuA4jeAx40
         TGsg==
X-Gm-Message-State: AOAM533NpPqWQu000l+X+eMI6yKVLdIYorP+pyI2SAXY8lsJzkc+6etK
	yEF9wTNvubcg1lRs04L3Urzb2DPCpz4=
X-Google-Smtp-Source: ABdhPJwSnWOKEZY6X4zaxq3gEapAmbN8i/Fn9uZbztWZrw1J2bhZYr6B32L2xUaFQxx4LT/XNVZUpQ==
X-Received: by 2002:a37:8744:: with SMTP id j65mr4376689qkd.304.1620309598982;
        Thu, 06 May 2021 06:59:58 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 05/13] vtpmmgr: Move vtpmmgr_shutdown
Date: Thu,  6 May 2021 09:59:15 -0400
Message-Id: <20210506135923.161427-6-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Reposition vtpmmgr_shutdown so it can call flush_tpm2 without a forward
declaration.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 stubdom/vtpmmgr/init.c | 28 ++++++++++++++--------------
 1 file changed, 14 insertions(+), 14 deletions(-)

diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index 130e4f4bf6..decf8e8b4d 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -503,20 +503,6 @@ egress:
    return status;
 }
 
-void vtpmmgr_shutdown(void)
-{
-   /* Cleanup TPM resources */
-   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
-
-   /* Close tpmback */
-   shutdown_tpmback();
-
-   /* Close tpmfront/tpm_tis */
-   close(vtpm_globals.tpm_fd);
-
-   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
-}
-
 /* TPM 2.0 */
 
 static void tpm2_AuthArea_ctor(const char *authValue, UINT32 authLen,
@@ -797,3 +783,17 @@ abort_egress:
 egress:
     return status;
 }
+
+void vtpmmgr_shutdown(void)
+{
+   /* Cleanup TPM resources */
+   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
+
+   /* Close tpmback */
+   shutdown_tpmback();
+
+   /* Close tpmfront/tpm_tis */
+   close(vtpm_globals.tpm_fd);
+
+   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
+}
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123542.233048 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeYA-000767-5z; Thu, 06 May 2021 14:00:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123542.233048; Thu, 06 May 2021 14:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeYA-00075y-1f; Thu, 06 May 2021 14:00:22 +0000
Received: by outflank-mailman (input) for mailman id 123542;
 Thu, 06 May 2021 14:00:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeY8-0003iB-Nn
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:20 +0000
Received: from mail-qv1-xf34.google.com (unknown [2607:f8b0:4864:20::f34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ddcf99ed-964d-469e-ac1c-2014191c38bb;
 Thu, 06 May 2021 14:00:00 +0000 (UTC)
Received: by mail-qv1-xf34.google.com with SMTP id j3so3073994qvs.1
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:00:00 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.06.59.59
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 06:59:59 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ddcf99ed-964d-469e-ac1c-2014191c38bb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=bPkiWxfn9TxWci7jUlEYMQpJFIgv4BLfsaWuHMIMW38=;
        b=RzyJ90eZH5ru3q2u6T6ceR4GUw/nWf3Qjqw0TYkDxibbXmtpbqI/pO8Qes2hxE9/Vk
         gKsiYCh2ThxppnjzMny+C/OZ9y5Bjl03yJQHVE1e98bQ2Ti/CB51ChHj9Fk7ZAlMtyEd
         LJH75WHXqRvFcddwwwn1SgMGPYhYq4kdpBOTTaYH3BZgrmkdC0nkUrxc+fhbK/xujhFx
         YP7uKNW7N9vEgvva0k+6mM8KSxaUmx1lEnU847HUs+ji8AjQI1lKn67ZLaOMIhOmuJCJ
         MSsR0bfOLLZmP325f/ZhDGfEkpR7qaQesqRAqcSe0s9gWu53h2EC244GwrIsp6dAXpa/
         i7rQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=bPkiWxfn9TxWci7jUlEYMQpJFIgv4BLfsaWuHMIMW38=;
        b=SbpByoOCh4LWoozyAKSQnXlP4OLmOf8u7apkHkLDa8fbHUokqApOyFIUPEwpbIQ+A4
         hEuYqn5VuIUB9J7rFsOo2Lcq5+kOxziPl6U6GaIRco0gMM4m8di0Mwx72bTMmVnJ5JQe
         hsCf2PAGixaWbznIgUyLrtZOrXriAzvDzm4voEuKwmZtaE2WjjdfQnUy7Vx8Wn9aLk66
         ZVf3gIQknn8ikgakNBiiOaQh0GXCbLGxLffiEqyT3iWzAT5UmQ+yJBJwUri9pzHBZ1xG
         ZYO+ZfPRMi4tsLo/58g0JbPDfcZv6rUA1N/lB9Q1Sk44/rOLz8UzaMisjpMqy2jm135u
         gElQ==
X-Gm-Message-State: AOAM531qW9NDVvier8rx5wGS/ZUwkkcJA5XUEH3X5QoawCCsrNwd/N55
	AYuf4iM6rrth8CZLKaGLsCHalapTtsM=
X-Google-Smtp-Source: ABdhPJwy73jUFiBa9zPp9bCMtDNse2DIEKkyvSQXE/Nck56eaqMgVEvaQ4s+ylVd4ZsZu+rieyg+jg==
X-Received: by 2002:a05:6214:766:: with SMTP id f6mr4327467qvz.17.1620309600120;
        Thu, 06 May 2021 07:00:00 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 06/13] vtpmmgr: Flush transient keys on shutdown
Date: Thu,  6 May 2021 09:59:16 -0400
Message-Id: <20210506135923.161427-7-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Remove our key so it isn't left in the TPM for someone to come along
after vtpmmgr shuts down.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
 stubdom/vtpmmgr/init.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index decf8e8b4d..56b4be85b3 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -792,6 +792,14 @@ void vtpmmgr_shutdown(void)
    /* Close tpmback */
    shutdown_tpmback();
 
+    if (hw_is_tpm2()) {
+        /* Blow away all stale handles left in the tpm*/
+        if (flush_tpm2() != TPM_SUCCESS) {
+            vtpmlogerror(VTPM_LOG_TPM,
+                         "TPM2_FlushResources failed, continuing shutdown..\n");
+        }
+    }
+
    /* Close tpmfront/tpm_tis */
    close(vtpm_globals.tpm_fd);
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123545.233060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeYF-0007kN-Nv; Thu, 06 May 2021 14:00:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123545.233060; Thu, 06 May 2021 14:00:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeYF-0007j2-H1; Thu, 06 May 2021 14:00:27 +0000
Received: by outflank-mailman (input) for mailman id 123545;
 Thu, 06 May 2021 14:00:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeYD-0003iB-Nt
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:25 +0000
Received: from mail-qt1-x833.google.com (unknown [2607:f8b0:4864:20::833])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10e26b75-1e83-45b8-9c17-a6a2f03adc2a;
 Thu, 06 May 2021 14:00:01 +0000 (UTC)
Received: by mail-qt1-x833.google.com with SMTP id c11so3938904qth.2
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:00:01 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.07.00.00
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 07:00:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10e26b75-1e83-45b8-9c17-a6a2f03adc2a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=jlN6pfqKDZP3Jzb3yYX+3eh3XU7QBQjlR000MozME8g=;
        b=olpNpbuLF3uupDB/Jnzg/BOqPZl3l1tFa8X5wgL8uGPhV+PEUVQmFfEAPwLTRmKseC
         lKHYkl+/+hRXM5y5vtL3IPop5/gM4SssGwRL7KAAS3peLOvVFqknK66658dkwnZ06Tkb
         p4qBWpN/bbXC+RVs9e+CXixT+xptECMK5q2DUGHE3BPi3SqgxJo5lVxBW2tWGhqQRkYs
         1lgQi7T0aI3dmrJD0o5vU4sJyw9wza5FkA4+WMb29ydenUjVybcH688U0EXbGT//7Bm2
         dJ38P8YAYDmXrKmKZUwr8MEazYjxrbMcWRwEOOedbCWcRZMKT33XY/HeQt8sMHmqun8j
         h8GA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=jlN6pfqKDZP3Jzb3yYX+3eh3XU7QBQjlR000MozME8g=;
        b=t0a4ZfZSxjvgjH03qHeUtIpM4i1mnc7wwRTUtH4DxykfMIIIvxMhZ7DWHTmGqgkMjK
         PKct7ud5T2ahuj36E7f4fuXCjMi0CU2+kSj16wV3663r4ImpGwNUMXOELk38pY/8cXDt
         Jpysi0aOIsoeGBr6ilsSPiQSHxhcb1em55rS89RJCnXurXoKaa35ko7t0luswTvZn8Kw
         yvZ5tmNavCHN29UCu9PAsrJ9u7X8wUKoeAnUqLZYzCkPPUlHxjVjGF7RdRnFLje+ISsi
         LaLQ6puUHBgAg2feLPAEFrXquf4VLkTNJCo+fHTULSUNp2jnQs7veTkEDXOYBiBGZx3P
         O5ng==
X-Gm-Message-State: AOAM532EkoYO9FRqdkRFW9UpAPdtO7BjVXyePJA8QJf+1qw9Phl7nojm
	BOschGpfC9J/cHbnSFK5TBCPKeW1jFI=
X-Google-Smtp-Source: ABdhPJwNpmmBhCUN1SbJ1hejGCXJjDYOTk3/yub1eoBDphC5tpbqaqa9CF5zSno/mhIN93VXZAWRjQ==
X-Received: by 2002:ac8:5c14:: with SMTP id i20mr4477274qti.175.1620309601296;
        Thu, 06 May 2021 07:00:01 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 07/13] vtpmmgr: Flush all transient keys
Date: Thu,  6 May 2021 09:59:17 -0400
Message-Id: <20210506135923.161427-8-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

We're only flushing 2 transient handles, but there are 3.  Use <= to
also flush the third handle, since TRANSIENT_LAST is inclusive.

The number of transient handles/keys is hardware-dependent, so this
should really query the TPM for the limit.  Handle assignment is also
assumed to be sequential from the minimum; that may not be guaranteed,
but it works with my TPM 2.0.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
---
v2 add "since TRANSIENT_LAST is inclusive" to commit message.
---
 stubdom/vtpmmgr/init.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index 56b4be85b3..4ae34a4fcb 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -656,7 +656,7 @@ static TPM_RC flush_tpm2(void)
 {
     int i;
 
-    for (i = TRANSIENT_FIRST; i < TRANSIENT_LAST; i++)
+    for (i = TRANSIENT_FIRST; i <= TRANSIENT_LAST; i++)
          TPM2_FlushContext(i);
 
     return TPM_SUCCESS;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123548.233072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeYK-0008I7-0d; Thu, 06 May 2021 14:00:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123548.233072; Thu, 06 May 2021 14:00:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeYJ-0008Hv-TA; Thu, 06 May 2021 14:00:31 +0000
Received: by outflank-mailman (input) for mailman id 123548;
 Thu, 06 May 2021 14:00:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeYI-0003iB-O2
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:30 +0000
Received: from mail-qk1-x72c.google.com (unknown [2607:f8b0:4864:20::72c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d772169a-d0f4-4cdc-a684-7476d154bdc6;
 Thu, 06 May 2021 14:00:03 +0000 (UTC)
Received: by mail-qk1-x72c.google.com with SMTP id 197so4935247qkl.12
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:00:03 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.07.00.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 07:00:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d772169a-d0f4-4cdc-a684-7476d154bdc6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=ybgo2iqpAthKznwUf2rKB8sj2C1FVcmokKqjDDD9c5k=;
        b=QEMn2z8Au1zTcK9KWNUAWf4/pwLJGCha5xCHPbaVcHvwTs/0WGeDRi+dSSLkj+PFGd
         4DPTDrPpCoBGdEGFejSupnuB+2ecOeU06MvzCQFF4Z4YIc0V09aZGj5Jea66cvYjABzy
         upkTZ1/LVmbDH46CpQSVtDMmhaQRMlMJ50/DPtppaq76ByrVqcXk7PZVdjbnqRBYfrK6
         o5uIPQNEnoCso1VhvhvrN8PTVvJsleyC5gE2wrCqOomdQP6iHpSBulGhGXC4/z5+jybO
         zE41gDLiMQffA77a8KAFAyKdaMoJhCp5S+VEkXRu+IFYewK60CD6hvfCjuscdjXREmzS
         D3PQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ybgo2iqpAthKznwUf2rKB8sj2C1FVcmokKqjDDD9c5k=;
        b=X4L74DWmgglI3gIgXV84TsVIHfaiZc2Gz1qwQINdvk7hQ7E0p7DTwxqTI1MKJE2HJS
         Kwr5wVqtXaxTq0wOAznR/zocitG/C7yM917IfYU2U3+hGr23mhUEseBPznzMMRzH0h5M
         YWWz/zq0Y/d9Hl1mPu9Ox4RzNniqIqKTZkivb42Y4MkHF+5pa38sUNzDK4hjUbHbbJjO
         3VgQUBD9pksXMeEM8LeFLJyn19l0kP6Ch/hZ6HxPCUS1ohDuww3wm1UM3feahMsem/tf
         g4y21C6oHM3y9jeLUS7sCxghL1drxPVZbVT/+taHqXYBbzo3eWQSH5NdQpA++zLNMOSA
         7JqA==
X-Gm-Message-State: AOAM530+lpr04dCiaNyEpN1GVx9HBD8mvQQlYG8MPnW3cKRUXJCfSIeM
	V9dvm/FR/NTogq689ZgKD2VmLqa5aWA=
X-Google-Smtp-Source: ABdhPJwYESUQtl/XOHs0n5w17jqWUNLlZiVMYKCuK2J/fCzM3i3lA5rxNuBIThjTw2IEM0dxq30QDw==
X-Received: by 2002:a37:45d3:: with SMTP id s202mr4141671qka.424.1620309602421;
        Thu, 06 May 2021 07:00:02 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Samuel Thibault <samuel.thibaut@ens-lyon.org>
Subject: [PATCH v2 08/13] vtpmmgr: Shutdown more gracefully
Date: Thu,  6 May 2021 09:59:18 -0400
Message-Id: <20210506135923.161427-9-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

vtpmmgr uses the default, weak app_shutdown, which immediately calls the
shutdown hypercall.  This short-circuits the vtpmmgr cleanup logic.  We
need to perform the cleanup to actually flush our key out of the TPM.

Setting do_shutdown is one step in that direction, but vtpmmgr will most
likely be waiting in tpmback_req_any.  We need to call shutdown_tpmback
to cancel the wait inside tpmback and perform the shutdown.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
Reviewed-by: Samuel Thibault <samuel.thibaut@ens-lyon.org>
---
 stubdom/vtpmmgr/vtpmmgr.c | 12 +++++++++++-
 1 file changed, 11 insertions(+), 1 deletion(-)

diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
index 9fddaa24f8..46ea018921 100644
--- a/stubdom/vtpmmgr/vtpmmgr.c
+++ b/stubdom/vtpmmgr/vtpmmgr.c
@@ -67,11 +67,21 @@ int hw_is_tpm2(void)
     return (hardware_version.hw_version == TPM2_HARDWARE) ? 1 : 0;
 }
 
+static int do_shutdown;
+
+void app_shutdown(unsigned int reason)
+{
+    printk("Shutdown requested: %d\n", reason);
+    do_shutdown = 1;
+
+    shutdown_tpmback();
+}
+
 void main_loop(void) {
    tpmcmd_t* tpmcmd;
    uint8_t respbuf[TCPA_MAX_BUFFER_LENGTH];
 
-   while(1) {
+   while (!do_shutdown) {
       /* Wait for requests from a vtpm */
       vtpmloginfo(VTPM_LOG_VTPM, "Waiting for commands from vTPM's:\n");
       if((tpmcmd = tpmback_req_any()) == NULL) {
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:00:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:00:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123552.233084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeYO-0000Oo-9s; Thu, 06 May 2021 14:00:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123552.233084; Thu, 06 May 2021 14:00:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeYO-0000Oh-5K; Thu, 06 May 2021 14:00:36 +0000
Received: by outflank-mailman (input) for mailman id 123552;
 Thu, 06 May 2021 14:00:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeYN-0003iB-OC
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:35 +0000
Received: from mail-qk1-x730.google.com (unknown [2607:f8b0:4864:20::730])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c37acdc0-f19d-4581-89c9-16376abafabc;
 Thu, 06 May 2021 14:00:04 +0000 (UTC)
Received: by mail-qk1-x730.google.com with SMTP id l129so4940985qke.8
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:00:04 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.07.00.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 07:00:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c37acdc0-f19d-4581-89c9-16376abafabc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Xs1qSTBmNZ6MyDxAQr1ybF/sqzOXtnBmJm894lxdIZ8=;
        b=tHfz+NTyXpk78wHnIcVo4zHaklFwprxZ+8yfagEEfzwYJA1ki77Li6jrJ9g4/Izhjr
         8RDshlJK9cScWT3MN7e9ew7fjc0Aq0kZ77I8nWtUnWhde3PNEsHbA4s/AGO/pcDNS7kW
         hNSUWw1CLHbhg5MlASYImhKgySpag9kgCHvnWRV1mDCgfSdc4h2cQSPcPnD5TQRR5RGT
         ZpyQsVBohskhLxCh0ZFvXclm/wes9b28NrZeDD3oRtpz1wyiu6TJaOYG86xIX3ycD8h3
         gE2gomoM0DDwoheWf5AulpKBJJMISGRQsW2sM0MJEtVQDOkHOwqbHh0Mbz9uVCgk0XV+
         kLHg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Xs1qSTBmNZ6MyDxAQr1ybF/sqzOXtnBmJm894lxdIZ8=;
        b=MBe6Q3npVLesLz7/eAB/W+nfThB3DtR3Kir5jkEk9xkJz/38Tja/DS9LBemsQvo5hX
         jZ2yswzSCF7m1bViw2SVsPfjEmW/lucVIlsinMH7CUljyATSylAu80amyMlrrFq+BSdx
         sTm1g8e/NOWbcJ8LwkLdSU0FWBEtMlwogBv9RtnJV66lAFgvk4Py7PQHbnsbR+sUzTOL
         t6LdwPv9E65dCZtpcLpd3oBWxavxa79tN67oj9/SB+FzqG43Iw5mz+B/QSDKgee4kOMT
         nBnT9y5SGNbBgxe1T7g2/0EWMBTEZ+io6MknKNDC2YbBFb0AjmZgue4BbkNvz40nuM4+
         +01w==
X-Gm-Message-State: AOAM5307iIyPenfmU0LHn7a17IQi0pH3IJMDsssg9C7R3+z57cFehxik
	sFRooB9KRuJ9tVda80OOZOTzGSH09qA=
X-Google-Smtp-Source: ABdhPJw9AC8SqgTmEg5gThVactbzhJlZGRBSwxSqIJewZU3bb/+e3+8m84on7iyL531mU6tp0UWEgA==
X-Received: by 2002:a05:620a:8d0:: with SMTP id z16mr711055qkz.394.1620309603611;
        Thu, 06 May 2021 07:00:03 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 09/13] vtpmmgr: Support GetRandom passthrough on TPM 2.0
Date: Thu,  6 May 2021 09:59:19 -0400
Message-Id: <20210506135923.161427-10-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

GetRandom passthrough currently fails when using vtpmmgr with a hardware
TPM 2.0.
vtpmmgr (8): INFO[VTPM]: Passthrough: TPM_GetRandom
vtpm (12): vtpm_cmd.c:120: Error: TPM_GetRandom() failed with error code (30)

When running on TPM 2.0 hardware, vtpmmgr needs to convert the TPM 1.2
TPM_ORD_GetRandom command into a TPM2 TPM_CC_GetRandom command.  Besides
the differing ordinal, TPM 1.2 uses 32-bit sizes for the request and
response byte counts (vs. 16-bit for TPM2).

Place the random output directly into the tpmcmd->resp and build the
packet around it.  This avoids bouncing through an extra buffer, but the
header has to be written after grabbing the random bytes so we have the
number of bytes to include in the size.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

---
v2:
Add bounds and size checks
Whitespace fixup
---
 stubdom/vtpmmgr/marshal.h          | 15 ++++++++
 stubdom/vtpmmgr/vtpm_cmd_handler.c | 61 +++++++++++++++++++++++++++++-
 2 files changed, 75 insertions(+), 1 deletion(-)

diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
index dce19c6439..f1037a7976 100644
--- a/stubdom/vtpmmgr/marshal.h
+++ b/stubdom/vtpmmgr/marshal.h
@@ -890,6 +890,15 @@ inline int sizeof_TPM_AUTH_SESSION(const TPM_AUTH_SESSION* auth) {
 	return rv;
 }
 
+static
+inline int sizeof_TPM_RQU_HEADER(BYTE* ptr) {
+	int rv = 0;
+	rv += sizeof_UINT16(ptr);
+	rv += sizeof_UINT32(ptr);
+	rv += sizeof_UINT32(ptr);
+	return rv;
+}
+
 static
 inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
 		TPM_TAG tag,
@@ -920,8 +929,14 @@ inline int unpack3_TPM_RQU_HEADER(BYTE* ptr, UINT32* pos, UINT32 max,
 		unpack3_UINT32(ptr, pos, max, ord);
 }
 
+static
+inline int sizeof_TPM_RQU_GetRandom(BYTE* ptr) {
+	return sizeof_TPM_RQU_HEADER(ptr) + sizeof_UINT32(ptr);
+}
+
 #define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
 #define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
 #define unpack3_TPM_RSP_HEADER(p, l, m, t, s, r) unpack3_TPM_RQU_HEADER(p, l, m, t, s, r)
+#define sizeof_TPM_RSP_HEADER(p) sizeof_TPM_RQU_HEADER(p)
 
 #endif
diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
index 2ac14fae77..c879b24c13 100644
--- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
+++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
@@ -47,6 +47,7 @@
 #include "vtpm_disk.h"
 #include "vtpmmgr.h"
 #include "tpm.h"
+#include "tpm2.h"
 #include "tpmrsa.h"
 #include "tcg.h"
 #include "mgmt_authority.h"
@@ -772,6 +773,64 @@ static int vtpmmgr_permcheck(struct tpm_opaque *opq)
 	return 1;
 }
 
+TPM_RESULT vtpmmgr_handle_getrandom(struct tpm_opaque *opaque,
+				    tpmcmd_t* tpmcmd)
+{
+	TPM_RESULT status = TPM_SUCCESS;
+	TPM_TAG tag;
+	UINT32 size;
+	const int max_rand_size = TCPA_MAX_BUFFER_LENGTH -
+				  sizeof_TPM_RQU_GetRandom(tpmcmd->req);
+	UINT32 rand_offset;
+	UINT32 rand_size;
+	TPM_COMMAND_CODE ord;
+	BYTE *p;
+
+	if (tpmcmd->req_len != sizeof_TPM_RQU_GetRandom(tpmcmd->req)) {
+		status = TPM_BAD_PARAMETER;
+		tag = TPM_TAG_RQU_COMMAND;
+		goto abort_egress;
+	}
+
+	p = unpack_TPM_RQU_HEADER(tpmcmd->req, &tag, &size, &ord);
+
+	if (!hw_is_tpm2()) {
+		size = TCPA_MAX_BUFFER_LENGTH;
+		TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len,
+					      tpmcmd->resp, &size));
+		tpmcmd->resp_len = size;
+
+		return TPM_SUCCESS;
+	}
+
+	/* TPM_GetRandom req: <header><uint32 num bytes> */
+	unpack_UINT32(p, &rand_size);
+
+	/* Returning fewer bytes is acceptable per the spec. */
+	if (rand_size > max_rand_size)
+		rand_size = max_rand_size;
+
+	/* Call TPM2_GetRandom but return a TPM_GetRandom response. */
+	/* TPM_GetRandom resp: <header><uint32 num bytes><num random bytes> */
+	rand_offset = sizeof_TPM_RSP_HEADER(tpmcmd->resp) +
+		      sizeof_UINT32(tpmcmd->resp);
+
+	TPMTRYRETURN(TPM2_GetRandom(&rand_size, tpmcmd->resp + rand_offset));
+
+	p = pack_TPM_RSP_HEADER(tpmcmd->resp, TPM_TAG_RSP_COMMAND,
+				rand_offset + rand_size, status);
+	p = pack_UINT32(p, rand_size);
+	tpmcmd->resp_len = rand_offset + rand_size;
+
+	return status;
+
+abort_egress:
+	tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
+	pack_TPM_RSP_HEADER(tpmcmd->resp, tag + 3, tpmcmd->resp_len, status);
+
+	return status;
+}
+
 TPM_RESULT vtpmmgr_handle_cmd(
 		struct tpm_opaque *opaque,
 		tpmcmd_t* tpmcmd)
@@ -842,7 +901,7 @@ TPM_RESULT vtpmmgr_handle_cmd(
 		switch(ord) {
 		case TPM_ORD_GetRandom:
 			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
-			break;
+			return vtpmmgr_handle_getrandom(opaque, tpmcmd);
 		case TPM_ORD_PcrRead:
 			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
 			// Quotes also need to be restricted to hide PCR values
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:04:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:04:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123573.233096 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leecK-0002OJ-Rj; Thu, 06 May 2021 14:04:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123573.233096; Thu, 06 May 2021 14:04:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leecK-0002OC-OV; Thu, 06 May 2021 14:04:40 +0000
Received: by outflank-mailman (input) for mailman id 123573;
 Thu, 06 May 2021 14:04:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leecJ-0002O0-6z
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:04:39 +0000
Received: from mail-lj1-x22e.google.com (unknown [2a00:1450:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3eb1cdc7-de8c-4eb5-9a96-bda0214f6a85;
 Thu, 06 May 2021 14:04:38 +0000 (UTC)
Received: by mail-lj1-x22e.google.com with SMTP id o16so7219105ljp.3
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:04:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3eb1cdc7-de8c-4eb5-9a96-bda0214f6a85
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Iv4rh0azE3wyuDutQfPSy04R7NYdGndNnYAgr/OMFeg=;
        b=pj/5fOnn8TnE5gNgEuCZCSMg1KRasWlUjpOJl0y8JA1f8XFXtQ7kqC7Crtj0LksDLL
         Ew8ZwVieI+gM/sb7Tu2gakIdZF3lJwhXYGLhIQA2t6GLMSziftIIQcLLifOjyt/3xIpH
         83olv1LMwNTR2sF5mVtk3nvSuDEiWWIAe+3IAMj//SAZ61horCH3yjxoKHo9+bCG7Qr/
         ixwZNucZN97DQ3EDiKorxUYsGR4tSL0httvnVXdB6Nf+MFIIJJlKBZwP9fM8+8Zp296X
         SI6BT4snB5kgW9a+PsOL9HxYWkvenxRsfI4SYMHVNJHp9axKpDuwt5oBmkU9VWy1Ta2G
         jCqA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Iv4rh0azE3wyuDutQfPSy04R7NYdGndNnYAgr/OMFeg=;
        b=lV27YFsdUlnwEHyMIuTA9cCbn/GPC9Xm1HNsuP1VqsD7KgV/HawuP3pkZiNMWrOXMG
         YOg55TtddOv6HYx4c9QITTnVKrW2r905Ciml+m4vXQpQ5xsclJ+udxOUa4ePGMYrCG30
         kC6VrikaAlJTMlV+m8UR+xrT7n0YDSBFXUkVTRTKw11PQghWnuwzZmisKwyWm08985Hd
         iaQ0rWzLNZTBN+iX45MnamCK6+liWw3YREO+mT+YtjjlzOZ+16H6TfnT1GMWsHyf/kcE
         iCKEgMT1KfICeGoeIvs5Gi7M87ycUYQcOSrPlY02cHlciYOe3mnnQZ6ntK1OiTz2X206
         ZalQ==
X-Gm-Message-State: AOAM531HkBL0fzg7YRt6lslQUdpknukfPM0aQXA0onI7VD1TB2TazYXA
	docLU+hI0jQK0Z+GE0khekOD0BlwWB7iJcysqAwZUM4C
X-Google-Smtp-Source: ABdhPJzueLl1gPUv2yD0a3QJtKHmPXTsCEIt/UCkovPWKoufUqHtzsUNPQaFrGy48U5rcsnW97xK8bJbcfReHsEbEGM=
X-Received: by 2002:a2e:8356:: with SMTP id l22mr3558079ljh.204.1620309876985;
 Thu, 06 May 2021 07:04:36 -0700 (PDT)
MIME-Version: 1.0
References: <20210506135923.161427-1-jandryuk@gmail.com> <20210506135923.161427-9-jandryuk@gmail.com>
In-Reply-To: <20210506135923.161427-9-jandryuk@gmail.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 6 May 2021 10:04:25 -0400
Message-ID: <CAKf6xptH+p3KmNsaok_5dy+1WHZ5RHJfqKfmQVmsiauAo8FakA@mail.gmail.com>
Subject: Re: [PATCH v2 08/13] vtpmmgr: Shutdown more gracefully
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>, 
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, May 6, 2021 at 10:00 AM Jason Andryuk <jandryuk@gmail.com> wrote:
>
> vtpmmgr uses the default, weak app_shutdown, which immediately calls the
> shutdown hypercall.  This short circuits the vtpmmgr clean up logic.  We
> need to perform the clean up to actually Flush our key out of the tpm.
>
> Setting do_shutdown is one step in that direction, but vtpmmgr will most
> likely be waiting in tpmback_req_any.  We need to call shutdown_tpmback
> to cancel the wait inside tpmback and perform the shutdown.
>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> Reviewed-by: Samuel Thibault <samuel.thibaut@ens-lyon.org>

Whoops, this should be "Reviewed-by: Samuel Thibault
<samuel.thibault@ens-lyon.org>".  The above is missing an "l" in the
last name.

-Jason


From xen-devel-bounces@lists.xenproject.org Thu May 06 14:10:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:10:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123579.233108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leei9-0003mX-HW; Thu, 06 May 2021 14:10:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123579.233108; Thu, 06 May 2021 14:10:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leei9-0003mQ-EH; Thu, 06 May 2021 14:10:41 +0000
Received: by outflank-mailman (input) for mailman id 123579;
 Thu, 06 May 2021 14:10:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeYS-0003iB-ON
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:40 +0000
Received: from mail-qk1-x732.google.com (unknown [2607:f8b0:4864:20::732])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bdbc5300-8ed8-4b5d-a25c-9cea716549e0;
 Thu, 06 May 2021 14:00:05 +0000 (UTC)
Received: by mail-qk1-x732.google.com with SMTP id a22so4386201qkl.10
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:00:05 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.07.00.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 07:00:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bdbc5300-8ed8-4b5d-a25c-9cea716549e0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=1sC6PaVJyxmKcXNwmv0ZrJDDSAEP8iNaKd4bhckdCXE=;
        b=ebg8vvSndFSKXEg6iZxUqqYDaQQx/5MfRXdnR/tjWhdDLlXhrtUnRj2sCNEekwYmjh
         ybyz9bp0xcbBAbf9CTB3rOt7mCBeOW8um2t9B8Dt6nB4miBlMkNFuA0caiwU9QD8O1JC
         8PK8RZppD6CN90dh4484V28rThyThXJXQ/rq2gWc3qK6mL5hkXrhu86I4u2HiwLtjKIK
         kXJb+w4et4cK8LiQ4r00nSjywYKd/X96/yegCFgjHKfQcXNvMe3qHcF0WNTZpDGbHfAh
         Qr9i5G5Q72f8CLDTt8obeuVHMd8aZ0aEZW2VrtKs4RClpx4L8vp2ARgyafVC6KqT0r0y
         vWRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=1sC6PaVJyxmKcXNwmv0ZrJDDSAEP8iNaKd4bhckdCXE=;
        b=aAwDLugZYiuTuUqoHrE+/faCmTLp8eFybamzV3Mt6mtX81AMTYFBCwdbRVUpVjuVKp
         LM36HVPCpjUcGAzrPMI7UEjS0J5PsAtTz6zKNpw1hOFpUDws+SLOE6C82Vny30WKP9qt
         mVgM4/X1v06mmcr6C5GKU6ryGaYYg61A7/4HTq1dYliVivV+B/5prbwKZH2rU2vNC+/B
         EYeiJZHeLC+d7iUvK/Cjp8stJndJSl7Sl/JFSTYR+RJG5M2I7X6m5Ge7AP8iHJwiXFyu
         T1+JycNTqW+vq/jTGbyFa192b3X4z41VnXKMKl0zX5eUxYyKxrTXkQtMFvz/NYrCQJ9m
         +wfg==
X-Gm-Message-State: AOAM5319+PRiFDgvIiadgDBZ4gRr5k7jDQmTtvRKgeTnrcm7dS+e8r8a
	FeiKe5OTOkns1MSjBJJVBYXumZRoB6U=
X-Google-Smtp-Source: ABdhPJzL4DGsx5vrbgjgb3flP4SmhP7g6Cx9swZiXeM7Pt8mE0yentKObQYuauanntkIQoyK/yOXlQ==
X-Received: by 2002:ae9:ed44:: with SMTP id c65mr4076178qkg.271.1620309604778;
        Thu, 06 May 2021 07:00:04 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 10/13] vtpmmgr: Remove bogus cast from TPM2_GetRandom
Date: Thu,  6 May 2021 09:59:20 -0400
Message-Id: <20210506135923.161427-11-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The UINT32 <-> UINT16 casting in TPM2_GetRandom is incorrect.  Use a
local UINT16 as needed for the TPM hardware command and assign the
result.
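
As a hypothetical standalone demo (not vtpmmgr code) of why the removed
cast was bogus: storing a UINT16 through a pointer cast from UINT32* only
writes two of the four bytes, so the result depends on the old contents
and the host byte order.  memcpy is used here to model that partial store
in a well-defined way:

```c
#include <stdint.h>
#include <string.h>

/* unpack_UINT16(ptr, (UINT16 *)bytesRequested) effectively stores two
 * bytes into a four-byte object, leaving the other two bytes stale. */
int getrandom_cast_demo(void) {
    uint32_t bytes_requested = 0xDEAD0000u; /* stale upper bytes */
    uint16_t from_tpm = 0x0020;             /* TPM reported 32 bytes */

    /* The buggy pattern: write a UINT16 into part of the UINT32. */
    memcpy(&bytes_requested, &from_tpm, sizeof from_tpm);
    int buggy_ok = (bytes_requested == 0x20); /* false: stale bytes remain */

    /* The fixed pattern: a local UINT16, widened by plain assignment. */
    uint16_t local = from_tpm;
    bytes_requested = local;
    int fixed_ok = (bytes_requested == 0x20);

    return fixed_ok && !buggy_ok; /* 1 when only the fix behaves */
}
```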

Suggested-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/tpm2.c | 13 ++++++++++---
 1 file changed, 10 insertions(+), 3 deletions(-)

diff --git a/stubdom/vtpmmgr/tpm2.c b/stubdom/vtpmmgr/tpm2.c
index 655e6d164c..ebd06eac74 100644
--- a/stubdom/vtpmmgr/tpm2.c
+++ b/stubdom/vtpmmgr/tpm2.c
@@ -427,15 +427,22 @@ abort_egress:
 
 TPM_RC TPM2_GetRandom(UINT32 * bytesRequested, BYTE * randomBytes)
 {
+    UINT16 bytesReq;
     TPM_BEGIN(TPM_ST_NO_SESSIONS, TPM_CC_GetRandom);
 
-    ptr = pack_UINT16(ptr, (UINT16)*bytesRequested);
+    if (*bytesRequested > UINT16_MAX)
+        bytesReq = UINT16_MAX;
+    else
+        bytesReq = *bytesRequested;
+
+    ptr = pack_UINT16(ptr, bytesReq);
 
     TPM_TRANSMIT();
     TPM_UNPACK_VERIFY();
 
-    ptr = unpack_UINT16(ptr, (UINT16 *)bytesRequested);
-    ptr = unpack_TPM_BUFFER(ptr, randomBytes, *bytesRequested);
+    ptr = unpack_UINT16(ptr, &bytesReq);
+    *bytesRequested = bytesReq;
+    ptr = unpack_TPM_BUFFER(ptr, randomBytes, bytesReq);
 
 abort_egress:
     return status;
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:10:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:10:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123583.233120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeiM-00048u-S2; Thu, 06 May 2021 14:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123583.233120; Thu, 06 May 2021 14:10:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeiM-00048n-NQ; Thu, 06 May 2021 14:10:54 +0000
Received: by outflank-mailman (input) for mailman id 123583;
 Thu, 06 May 2021 14:10:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeYh-0003iB-OU
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:55 +0000
Received: from mail-qv1-xf32.google.com (unknown [2607:f8b0:4864:20::f32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3021232-c65c-49fc-ae5a-973073c5306d;
 Thu, 06 May 2021 14:00:09 +0000 (UTC)
Received: by mail-qv1-xf32.google.com with SMTP id u1so3042441qvg.11
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:00:09 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.07.00.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 07:00:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3021232-c65c-49fc-ae5a-973073c5306d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=INJlvdLEiMRlxDVXoqvU7bWzaHIWnzxg1XcmDrQV7tE=;
        b=hzDTz2nGVFQefH2LMpgFDEFvOb/lGoQmVVe3Meg2/PusuSTC0hqWI2vO7FnpbedbVa
         VMqXxtJAdCD9yvOW8NBc4fY/pQrae9CWlfVudRO6TdwQKm0UQvcvAIueXYZ6I3zUIknf
         dFonW8qa3bo/iXTtt7etwOBahlooJw+SEZ2Q9MuowQulRfgIgqBFdZDSEDMvY0LP7Pzo
         rKr1U5U9X/q2glIKloDNJwFB5X1gMKPOqVWi126Dv3ULLodTeWVn9MMqPvPR/8CMV/sY
         CsXSBmPDxtEMvoKD8iEGoW2JiOaAHyFKt7sZkdT/AuVc08KxtQv32TmJOv5vnoAFnmA6
         7+5g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=INJlvdLEiMRlxDVXoqvU7bWzaHIWnzxg1XcmDrQV7tE=;
        b=NAnIojiTEZnVAcF+oZhaGiu2uDSubddlZ5R5DV5KcBNO/PHhsPGqOhIexIuHzqLPZR
         Y8RDhhxE1jCnEt/oaGJfw1d7TGRglegL1BA9Tzp447h8SHKoD9Ky/A8JEfd0uKRE8LEV
         NgJyvs0Y1UVQe/PpQU2kViXyYSth2LWUdpgrKHHqoOmO4qYN6WYH9KPmgADAwzx37ed1
         oByagZV6HdBskpaT9z0nhU5AVbJQZVDs2jDRL5J6JHxbINZNkLb5aGuUd8PDTXZzl/q7
         35Dwp+IYDSe6m+xVS0HBnu/Sa1EPBmjUfcmICPnCPY8X/ncvGE93xjz80U6CEVFzDqxo
         ISxg==
X-Gm-Message-State: AOAM5305N1E///mqXpDYle95QaTTb8VeLBRYuAk6RkSiKWyWkcCFrd9i
	ayHxt/+p/ZxB8RhnbsUsvGfkTONoRcI=
X-Google-Smtp-Source: ABdhPJxTqMFHefgYQlpQl9V3rpHD48J1Hf1gvVoQC47X7dXoL2/OtjrZzrC0QRpN22L5c1VOFkd7Kw==
X-Received: by 2002:a0c:f0c4:: with SMTP id d4mr4347832qvl.54.1620309608256;
        Thu, 06 May 2021 07:00:08 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 13/13] vtpm: Correct timeout units and command duration
Date: Thu,  6 May 2021 09:59:23 -0400
Message-Id: <20210506135923.161427-14-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add two patches:
vtpm-microsecond-duration.patch fixes the units for timeouts and command
durations.
vtpm-command-duration.patch increases the timeout linux uses to allow
commands to succeed.

Linux works around low timeouts, but not low durations.  The second
patch allows commands that often timed out under the lower command
durations to complete.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/Makefile                        |  2 +
 stubdom/vtpm-command-duration.patch     | 52 +++++++++++++++++++++++++
 stubdom/vtpm-microsecond-duration.patch | 52 +++++++++++++++++++++++++
 3 files changed, 106 insertions(+)
 create mode 100644 stubdom/vtpm-command-duration.patch
 create mode 100644 stubdom/vtpm-microsecond-duration.patch

diff --git a/stubdom/Makefile b/stubdom/Makefile
index c6de5f68ae..06aa69d8bc 100644
--- a/stubdom/Makefile
+++ b/stubdom/Makefile
@@ -239,6 +239,8 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
 	patch -d $@ -p1 < vtpm-implicit-fallthrough.patch
 	patch -d $@ -p1 < vtpm_TPM_ChangeAuthAsymFinish.patch
 	patch -d $@ -p1 < vtpm_extern.patch
+	patch -d $@ -p1 < vtpm-microsecond-duration.patch
+	patch -d $@ -p1 < vtpm-command-duration.patch
 	mkdir $@/build
 	cd $@/build; CC=${CC} $(CMAKE) .. -DCMAKE_C_FLAGS:STRING="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
 	touch $@
diff --git a/stubdom/vtpm-command-duration.patch b/stubdom/vtpm-command-duration.patch
new file mode 100644
index 0000000000..6fdf2fc9be
--- /dev/null
+++ b/stubdom/vtpm-command-duration.patch
@@ -0,0 +1,52 @@
+From e7c976b5864e7d2649292d90ea60d5aea091a990 Mon Sep 17 00:00:00 2001
+From: Jason Andryuk <jandryuk@gmail.com>
+Date: Sun, 14 Mar 2021 12:46:34 -0400
+Subject: [PATCH 2/2] Increase command durations
+
+With Linux 5.4 xen-tpmfront and a Xen vtpm-stubdom, xen-tpmfront was
+failing commands with -ETIME:
+tpm tpm0: tpm_try_transmit: send(): error-62
+
+The vtpm was returning the data, but it was after the duration timeout
+in vtpm_send.  Linux may have started being more stringent about timing?
+
+The vtpm-stubdom has a little delay since it writes its disk before
+returning the response.
+
+Anyway, the durations are rather low.  When they were 1/10/1000 before
+converting to microseconds, Linux showed all three durations rounded to
+10000.  Update them with values from a physical TPM1.2.  These were
+taken from a WEC which was software downgraded from a TPM2 to a TPM1.2.
+They might be excessive, but I'd rather have a command succeed than
+return -ETIME.
+
+An IFX physical TPM1.2 uses:
+1000000
+1500000
+150000000
+
+Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
+---
+ tpm/tpm_data.c | 6 +++---
+ 1 file changed, 3 insertions(+), 3 deletions(-)
+
+diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
+index bebaf10..844afca 100644
+--- a/tpm/tpm_data.c
++++ b/tpm/tpm_data.c
+@@ -71,9 +71,9 @@ static void init_timeouts(void)
+   tpmData.permanent.data.tis_timeouts[1] = 2000000;
+   tpmData.permanent.data.tis_timeouts[2] = 750000;
+   tpmData.permanent.data.tis_timeouts[3] = 750000;
+-  tpmData.permanent.data.cmd_durations[0] = 1000;
+-  tpmData.permanent.data.cmd_durations[1] = 10000;
+-  tpmData.permanent.data.cmd_durations[2] = 1000000;
++  tpmData.permanent.data.cmd_durations[0] = 3000000;
++  tpmData.permanent.data.cmd_durations[1] = 3000000;
++  tpmData.permanent.data.cmd_durations[2] = 600000000;
+ }
+ 
+ void tpm_init_data(void)
+-- 
+2.30.2
+
diff --git a/stubdom/vtpm-microsecond-duration.patch b/stubdom/vtpm-microsecond-duration.patch
new file mode 100644
index 0000000000..7a906e72c5
--- /dev/null
+++ b/stubdom/vtpm-microsecond-duration.patch
@@ -0,0 +1,52 @@
+From 5a510e0afd7c288e3f0fb3523ec749ba1366ad61 Mon Sep 17 00:00:00 2001
+From: Jason Andryuk <jandryuk@gmail.com>
+Date: Sun, 14 Mar 2021 12:42:10 -0400
+Subject: [PATCH 1/2] Use microseconds for timeouts and durations
+
+The timeout and duration fields should be in microseconds according to
+the spec.
+
+TPM_CAP_PROP_TIS_TIMEOUT:
+A 4 element array of UINT32 values each denoting the timeout value in
+microseconds for the following in this order:
+
+TPM_CAP_PROP_DURATION:
+A 3 element array of UINT32 values each denoting the duration value in
+microseconds of the duration of the three classes of commands:
+
+Linux will scale the timeouts up by 1000, but not the durations.  Change
+the units for both sets as appropriate.
+
+Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
+---
+ tpm/tpm_data.c | 14 +++++++-------
+ 1 file changed, 7 insertions(+), 7 deletions(-)
+
+diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
+index a3a79ef..bebaf10 100644
+--- a/tpm/tpm_data.c
++++ b/tpm/tpm_data.c
+@@ -67,13 +67,13 @@ static void init_nv_storage(void)
+ static void init_timeouts(void)
+ {
+   /* for the timeouts we use the PC platform defaults */
+-  tpmData.permanent.data.tis_timeouts[0] = 750;
+-  tpmData.permanent.data.tis_timeouts[1] = 2000;
+-  tpmData.permanent.data.tis_timeouts[2] = 750;
+-  tpmData.permanent.data.tis_timeouts[3] = 750;
+-  tpmData.permanent.data.cmd_durations[0] = 1;
+-  tpmData.permanent.data.cmd_durations[1] = 10;
+-  tpmData.permanent.data.cmd_durations[2] = 1000;
++  tpmData.permanent.data.tis_timeouts[0] = 750000;
++  tpmData.permanent.data.tis_timeouts[1] = 2000000;
++  tpmData.permanent.data.tis_timeouts[2] = 750000;
++  tpmData.permanent.data.tis_timeouts[3] = 750000;
++  tpmData.permanent.data.cmd_durations[0] = 1000;
++  tpmData.permanent.data.cmd_durations[1] = 10000;
++  tpmData.permanent.data.cmd_durations[2] = 1000000;
+ }
+ 
+ void tpm_init_data(void)
+-- 
+2.30.2
+
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:10:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:10:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123584.233132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeiQ-0004Um-9C; Thu, 06 May 2021 14:10:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123584.233132; Thu, 06 May 2021 14:10:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeiQ-0004Ud-5C; Thu, 06 May 2021 14:10:58 +0000
Received: by outflank-mailman (input) for mailman id 123584;
 Thu, 06 May 2021 14:10:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeYX-0003iB-OQ
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:45 +0000
Received: from mail-qk1-x733.google.com (unknown [2607:f8b0:4864:20::733])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f8f56f0f-9e3a-445c-945a-17631d221a2c;
 Thu, 06 May 2021 14:00:06 +0000 (UTC)
Received: by mail-qk1-x733.google.com with SMTP id i17so4956110qki.3
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:00:06 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.07.00.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 07:00:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8f56f0f-9e3a-445c-945a-17631d221a2c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fT9IdXW7l+JAkySBAGlsnWyPD7h3K3UgTIcjVV3hRCI=;
        b=tNt/BCVDGnQIFABGe2ZWB18X8SFKSiqnSWCqUj8FdMxLUkBTbWcJie1AAC/WPCi/wr
         yTsCdoxBmvrUix2iPiKDSJr8B0qdoGoZ+aWPqYUXLVMtjnDA0ifjuoaZPoHRiFEYWPCk
         yD42j08omdX8kbxovmFHYVdcOZ+jCXZBL996OKWjcYsCgE5gyBV+n+CxkcCFory+x7eN
         ywxS8GHY+3ititjoEiLiBO5+4bkIWxJfwcPLJ14HoUZog5AKrDJ36InkVgwmk3cIr2aW
         b8cqiGm7m8YLi/7fkluSTe90PBIsnZybF4VhRCSSpzEW/niUgAb+nOebMzCowBPSMs71
         fyuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fT9IdXW7l+JAkySBAGlsnWyPD7h3K3UgTIcjVV3hRCI=;
        b=sq2ru581JBa0VdT4ZL/HlQj8bYkrK2ptl3T3nKbByNavUXnzUUdXexMqbe+zaqVgJn
         YATrQBJ4vJ5om0elzKWGW/N/SIZ8pqGE3j43eejax29wuYHLF7F1nSWUC20l2H2BqUQZ
         ZzTHIg2IRT+Nt712e0p7Htk8/hX3E26bUT0En+QYF/mMfgKdxRzTWm4PJJtZ5vAtiVFI
         NM3tjIVpq5kpdQ3N+rttsESYI0CBYHodi6Dy5OGJS+RXIGnN/DHF8D5JhcxmCQYMA9o8
         dAYN0kjrRtTqOibJQnAeFzjiMc28o5wwQ7EC/r53ZtDKrtmGB71WUSaVN8SM73aRR856
         0y4g==
X-Gm-Message-State: AOAM533NmrQWL9D6ioseVASwPBmLJKta6bq5CCmLpPcHnvgCfHsmTNVS
	zd4Sn/iUPwkohgez/lkCgYbm6cnzohw=
X-Google-Smtp-Source: ABdhPJwh1m7BK20Tg9fj5e50WDNGt2C1hyl1Lxkxwuoo3bDiYFRPyAiyclRu7H+WgfOjXHnPvzhblA==
X-Received: by 2002:a37:6691:: with SMTP id a139mr3933885qkc.229.1620309605989;
        Thu, 06 May 2021 07:00:05 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 11/13] vtpmmgr: Fix owner_auth & srk_auth parsing
Date: Thu,  6 May 2021 09:59:21 -0400
Message-Id: <20210506135923.161427-12-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Argument parsing only matches up to the character before ':', and the
string with the leading ':' still attached is then passed to
parse_auth_string, which fails to parse it.  Extend the length to
include the separator in the match.

While here, switch the separator to "=".  The man page documented "="
and the other tpm.* arguments already use "=".  Since it didn't work
before, we don't need to worry about backwards compatibility.
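
The off-by-one can be sketched in isolation (function names here are
made up for the demo; the real code lives in parse_cmdline_opts):

```c
#include <string.h>
#include <stddef.h>

/* The broken pattern: length 10 compares only "owner_auth", so the
 * separator is never checked and the returned value still carries it. */
static const char *auth_value_buggy(const char *arg) {
    if (strncmp(arg, "owner_auth:", 10) == 0)
        return arg + 10;          /* e.g. ":secret" -- fails to parse */
    return NULL;
}

/* The fixed pattern: length 11 covers the separator too, so only a
 * genuine "owner_auth=" prefix matches and the remainder is the value. */
static const char *auth_value(const char *arg) {
    if (strncmp(arg, "owner_auth=", 11) == 0)
        return arg + 11;          /* e.g. "secret" */
    return NULL;
}
```

The short length also makes the buggy version accept unrelated arguments
such as "owner_authority", since only the first ten characters are compared.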

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/init.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
index 4ae34a4fcb..62dc5994de 100644
--- a/stubdom/vtpmmgr/init.c
+++ b/stubdom/vtpmmgr/init.c
@@ -289,16 +289,16 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
    memcpy(vtpm_globals.srk_auth, WELLKNOWN_AUTH, sizeof(TPM_AUTHDATA));
 
    for(i = 1; i < argc; ++i) {
-      if(!strncmp(argv[i], "owner_auth:", 10)) {
-         if((rc = parse_auth_string(argv[i] + 10, vtpm_globals.owner_auth)) < 0) {
+      if(!strncmp(argv[i], "owner_auth=", 11)) {
+         if((rc = parse_auth_string(argv[i] + 11, vtpm_globals.owner_auth)) < 0) {
             goto err_invalid;
          }
          if(rc == 1) {
             opts->gen_owner_auth = 1;
          }
       }
-      else if(!strncmp(argv[i], "srk_auth:", 8)) {
-         if((rc = parse_auth_string(argv[i] + 8, vtpm_globals.srk_auth)) != 0) {
+      else if(!strncmp(argv[i], "srk_auth=", 9)) {
+         if((rc = parse_auth_string(argv[i] + 9, vtpm_globals.srk_auth)) != 0) {
             goto err_invalid;
          }
       }
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:11:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:11:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123594.233144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeir-0005XJ-Hu; Thu, 06 May 2021 14:11:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123594.233144; Thu, 06 May 2021 14:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leeir-0005XC-EF; Thu, 06 May 2021 14:11:25 +0000
Received: by outflank-mailman (input) for mailman id 123594;
 Thu, 06 May 2021 14:11:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NmKg=KB=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1leeYc-0003iB-OS
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:00:50 +0000
Received: from mail-qv1-xf32.google.com (unknown [2607:f8b0:4864:20::f32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f2608d6-893c-4fef-a0dc-3062f788dbe2;
 Thu, 06 May 2021 14:00:07 +0000 (UTC)
Received: by mail-qv1-xf32.google.com with SMTP id i8so3089726qvv.0
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:00:07 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:6095:81da:832e:3929])
 by smtp.gmail.com with ESMTPSA id
 189sm2069992qkh.99.2021.05.06.07.00.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 07:00:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f2608d6-893c-4fef-a0dc-3062f788dbe2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=/E99YQTRx0RTZe0/WLzL6pyfS/6uYlG8h+BkItkooCA=;
        b=UaCztVZ/FvgmmGfmCyRvuOBoUlaURs6FyQg4qli38IOSMD2KRlepGKyjkoUZQoPNLz
         lKHaRMoQiD0JsidacRHL+lDV7hJLYzEufuD0VzEyZ4n3MPg2UfG/0zJcNBR3Ca6myBTG
         sxg1oUbygQfedRgXWi19MI7621vvkuKeqw4uq82nV1wsF/ph+mzW0xFxYnArgHu+Ha1l
         ry9VCBRbrw3kDRaYlou3tg2T+zsItnOKPy2EKfoQpVjyjL07O7bQowF3ndNdESIjW4lM
         F5D05iQ3AKf6eMRoRHG5v3I7G3pELEwPxWVbSWtrPMl/YZRvFGKGzrVVeaoQ7fHVGz9I
         TUMQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=/E99YQTRx0RTZe0/WLzL6pyfS/6uYlG8h+BkItkooCA=;
        b=CisDO9qDcWcI41/9/g7zxouud4tVGCzSEGM6zH09mEDHrBJlrV35pE/HdSaRKB0sEp
         Bl6HClJxXF/A1OENLo4LIqxtqUg8QelYzpPo5RqPCuQAlvuxY7KofvQEeSGraOdKM0UJ
         c1N10OHBfBYhyO9vAdtNdkzF8lSojwRqACdAmiNoF/6ZnU2md/kbyzWRiBX3kyvpArhg
         siuuusmbZ2THYBHXNNfbr38Yv2Gheoo3TY6G/rWi55e3+UcMCOKoBWMAxeHPwcKK4I1z
         uihe65n+sm83z5DqPF5ExKb86JZ3+dzcS8Eh8XeKar7UClC8fvQKKFLNQdW0nIByX0UY
         ZDJg==
X-Gm-Message-State: AOAM531SCcfg5i7a9gawC2JNW+yuOMVCKkmgyH8sj281+pjeb4CZwleR
	NIByq7wfvFOG/ggimSG3MJPIvrtJu8w=
X-Google-Smtp-Source: ABdhPJxleNtmX+v0KHt/6+jkeldv+4xkcWlSn4GhoIc/oVW82SbEtnz/Kcgb+4eNSUSrCMmthwqHgg==
X-Received: by 2002:a05:6214:241:: with SMTP id k1mr4430410qvt.29.1620309607051;
        Thu, 06 May 2021 07:00:07 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Jason Andryuk <jandryuk@gmail.com>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>,
	Samuel Thibault <samuel.thibault@ens-lyon.org>
Subject: [PATCH v2 12/13] vtpmmgr: Check req_len before unpacking command
Date: Thu,  6 May 2021 09:59:22 -0400
Message-Id: <20210506135923.161427-13-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210506135923.161427-1-jandryuk@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

vtpmmgr_handle_cmd doesn't ensure there is enough space before unpacking
the req buffer.  Add a minimum size check.  Called functions will have
to do their own checking if they need more data from the request.

The error case is tricky since abort_egress wants to reply with a
corresponding tag.  Just hardcode TPM_TAG_RQU_COMMAND since the vtpm is
sending malformed commands in the first place.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 stubdom/vtpmmgr/vtpm_cmd_handler.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
index c879b24c13..5586be6997 100644
--- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
+++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
@@ -840,6 +840,12 @@ TPM_RESULT vtpmmgr_handle_cmd(
 	UINT32 size;
 	TPM_COMMAND_CODE ord;
 
+	if (tpmcmd->req_len < sizeof_TPM_RQU_HEADER(tpmcmd->req)) {
+		status = TPM_BAD_PARAMETER;
+		tag = TPM_TAG_RQU_COMMAND;
+		goto abort_egress;
+	}
+
 	unpack_TPM_RQU_HEADER(tpmcmd->req,
 			&tag, &size, &ord);
 
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:32:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:32:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123602.233156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lef3N-000817-Sl; Thu, 06 May 2021 14:32:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123602.233156; Thu, 06 May 2021 14:32:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lef3N-000810-PY; Thu, 06 May 2021 14:32:37 +0000
Received: by outflank-mailman (input) for mailman id 123602;
 Thu, 06 May 2021 14:32:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YkD6=KB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lef3M-00080u-O1
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:32:37 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 93fff602-a754-4639-90aa-719ba1a9daef;
 Thu, 06 May 2021 14:32:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 93fff602-a754-4639-90aa-719ba1a9daef
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620311555;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=sdRPK8q8efr0Xe/39l8foRLnAZqcI/oQMMPRnx/wias=;
  b=hqmRRrCgx4MwriDUWJn2BhYMD/o9QPh7jz18FmVt/oZqSENrIYPHVI9n
   sRq+TfNlmNICWRD8UPZLtCEej6IFhTzOXOKGgwGUZhlZ1GElxxv+xxF7+
   wooFdb/v4nwoD9NJELBIZ9Gmx+2tdtStKo/gMZq9iIbxJlFe7dkclnLQU
   M=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: zJO1A1rdFC/aAU1yWCBTnNnbjTqoGuCBWMcgXPRQ9rV5KDY5/IbUz/TcmLuU5QqjwmIz3ETBpJ
 jYmS2/XLOV4bOz7ExKz5R1KezeetTsT6/BR/w4UvvN7w89d6fi/2jWAQuErmrNB419mqqj0y8h
 JeTPezdV+nOF4VtW8AW5zny8cAFiRyVVuYOEy02XAR4hSkBg1jQsHHAkJU5RdIJuZsUo7X/0zj
 UdokDf6s/wE/iZfihB66GbLEKQSGSqrPOSCrPFq/cIGjsi41RfGSIp0lVaEougiuTc83Af0bOp
 w/M=
X-SBRS: 5.1
X-MesageID: 43213090
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:bbd0yqP+BrhcGcBcT//155DYdb4zR+YMi2TDiHoedfUFSKOlfp
 6V8MjztSWVtN4QMEtQ/exoS5PwP080kqQFnrX5XI3SIDUO3VHIEGgM1/qY/9SNIVyZygcZ79
 YcT0EcMqyDMbEZt7eD3ODQKb9Jq7PrgcPY55ar854ud3AMV0gJ1XYLNu/xKDwOeOApP+tdKH
 PR3Ls8m9L2Ek5nHvhTS0N1EdQqyLbw5dPbSC9DIyRixBiFjDuu5rK/OQOfxA0iXzRGxqpn2X
 TZkiTij5/T8c2T+1v57Sv+/p5WkNzuxp9oH8qXkPUYLT3ql0KBeJlhYbufpzo4ydvfq2rCqO
 O85yvIAv4DrE84JgqO0F3QMkjboXYTAkbZuBqlaSCJm72heNpSYPAx8L6wcXPimgcdVZ9Hof
 p2N8/wjeseMfr6plWK2zH/bWAhqqOFmwtUrQcttQ0XbWI/Us4ckWVNxjIbLH8/dBiKo7zPR9
 Meff00oswmKm+nUw==
X-IronPort-AV: E=Sophos;i="5.82,277,1613451600"; 
   d="scan'208";a="43213090"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Or7wzHy4xluRofA06Z838MGWy9jwkt88BAjNb7GZvICJWgJ8WqM2H8SD462koImB1pdVsD8WEOPIDHtJYzJaFsyt45AfMQfGxuNkK41yMIPMIJTw0HrUdW0KVysOI28Zqk3yZNBJORlWMidPcvQVY16vPh3Gr/L+BrOIpJedDj/9wzdQ2ebgHkMW66OPoO3RIa9svBjF7KQU66O7rr7Wl/3lkMKNcZ7lpijpA6GfhvVMfqwDJ1brlaQGyPmi7p3nHTU1YUw11dfomt3/2X8TVa2N/yRAuvITA+Svtj3+4mdY1S9IiBSb1nNsLzn6GHKzo/xfGbzZfok9Bdn63sDWCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5FNwnAB3vIXhH5Kj/Dmqd51S5dHgDYQhMWt3N+k9sfo=;
 b=ICESqaqrTEkTLsLLnrTYV6ofxkXrshz1TdoOeLHKa5WXVtERpE4VE/FtcVdewyd3RHgGF8aquQ3WyBU/H2/dybLOTbEpd8dMX8wOhmufQt4CiZIFf1wyXJujSsaghB1m2A5PLVkTG5yGQAOyY5fXLtrNucPawCeSBZtrt2Y0OScQRE8A2qeSW99FLziIAsztGUBJ3zHLxwghJJ+pg8RuR0ObpjMfXwcs8CU6Bg1ZojMyRF5w+G8gv+Km4OB7AyEXlLzP2z8/S0SZ03zA9zqbCEvySqTDcbD2s9YyTs5PYAMxDUnJPKxWffnIRykS9pKAW8ESLU8JkJyXAKCy4f2Hfw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5FNwnAB3vIXhH5Kj/Dmqd51S5dHgDYQhMWt3N+k9sfo=;
 b=NyWUmzLyqe94lA1asmwApo7AUiuIdAj3FTGJl4pRIw7t/fVh82vnALwm//WHaqKWeeYYroIOqvIflFuNHApF/meVPetU41Ab3o7IKeZs+nIIVlMcuLtS5+UUww7EzfLbh81m2rENfzlo5nPa/Bvkyrzo4S5abyXD9fviBAB1Qpw=
To: Jan Beulich <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>,
	Anthony PERARD <anthony.perard@citrix.com>
References: <20210506124752.65844-1-george.dunlap@citrix.com>
 <acd695c1-dc04-4606-5212-5fd993e355b1@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
Message-ID: <5b4c9bce-9335-b5f7-0c90-e47ae264d896@citrix.com>
Date: Thu, 6 May 2021 15:32:25 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <acd695c1-dc04-4606-5212-5fd993e355b1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0252.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:8a::24) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 64c10801-6e7e-4812-44dc-08d9109bc7f9
X-MS-TrafficTypeDiagnostic: BYAPR03MB3989:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB3989C7B62153874F204E6BC1BA589@BYAPR03MB3989.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 64c10801-6e7e-4812-44dc-08d9109bc7f9
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2021 14:32:31.2921
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +G763cEQ7XZs3E1mmn9BPfkuLLpNAy0yWjSJOWOZwbjxUlGxICZjnJdzJoCIJMSMq/gKiAuFmT3H2EPn6+9qe37BEAxtYUDJxIXUD6JQly8=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3989
X-OriginatorOrg: citrix.com

On 06/05/2021 14:09, Jan Beulich wrote:
> On 06.05.2021 14:47, George Dunlap wrote:
>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -55,7 +55,7 @@ config PV
>>  config PV32
>>  	bool "Support for 32bit PV guests"
>>  	depends on PV
>> -	default y
>> +	default PV_SHIM
>>  	select COMPAT
>>  	---help---
>>  	  The 32bit PV ABI uses Ring1, an area of the x86 architecture which
>> @@ -67,7 +67,10 @@ config PV32
>>  	  reduction, or performance reasons.  Backwards compatibility can be
>>  	  provided via the PV Shim mechanism.
>>
>> -	  If unsure, say Y.
>> +	  Note that outside of PV Shim, 32-bit PV guests are not security
>> +	  supported anymore.
>> +
>> +	  If unsure, use the default setting.
> Alongside this I wonder whether we should also default opt_pv32 to false
> then, unless running in shim mode.

No - that's just rude to users.

Anyone who's enabled CONFIG_PV32 may potentially want to run such guests.

It's easy to avoid issues here by not running 32bit guests, or not
running untrusted guests, but forcing everyone to reboot a second time
to specify pv=32 to unbreak their environment is downright unhelpful.


Perhaps tangentially, xl/libxl needs some remedial work as a followup,
because:

Executing 'xl create -p tests/example/test-pv32pae-example.cfg'
Parsing config from tests/example/test-pv32pae-example.cfg
xc: error: panic: xg_dom_boot.c:121: xc_dom_boot_mem_init: can't
allocate low memory for domain: Out of memory
libxl: error: libxl_dom.c:586:libxl__build_dom: xc_dom_boot_mem_init
failed: Operation not supported
libxl: error: libxl_create.c:1572:domcreate_rebuild_done: Domain
3:cannot (re-)build domain: -3
libxl: error: libxl_domain.c:1182:libxl__destroy_domid: Domain
3:Non-existant domain
libxl: error: libxl_domain.c:1136:domain_destroy_callback: Domain
3:Unable to destroy guest
libxl: error: libxl_domain.c:1063:domain_destroy_cb: Domain
3:Destruction of domain failed

is what the user gets back when Xen has correctly reported that it isn't
pv32-capable, and rejects the switch_compat() hypercall.

~Andrew



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:36:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123606.233168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lef7V-0000FX-Ei; Thu, 06 May 2021 14:36:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123606.233168; Thu, 06 May 2021 14:36:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lef7V-0000FQ-Ba; Thu, 06 May 2021 14:36:53 +0000
Received: by outflank-mailman (input) for mailman id 123606;
 Thu, 06 May 2021 14:36:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PObI=KB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lef7T-0000FJ-Lj
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:36:51 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::8])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fed315e-0670-4af7-a42c-d1f5a6aa8de8;
 Thu, 06 May 2021 14:36:50 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.6 AUTH)
 with ESMTPSA id V0bf6dx46Eai09q
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 6 May 2021 16:36:44 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fed315e-0670-4af7-a42c-d1f5a6aa8de8
ARC-Seal: i=1; a=rsa-sha256; t=1620311805; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=A0fi9cxmEccSeZo13VD187rseUtue+ML/C3IYMpapeT7OUUsl0ueG8yIJcAL2na0JR
    1xwTx1PJfXjpbMygq10zJNdIxOogmw6IYjm0P70dW3CbFwhoBg7kSY5ABxCqq0V/ZTBh
    gB3aZ+V2+JW4BlU8EGLXtpq+vIIfvmT0VCmhBp6FlsegID3u5CQ6C0cPh23qouNxBqyD
    g5ziy79NwZyS4xQplIAcO1FY1CcYq5l+Ey4qGsLbOWT9Vo0dky7HQm9DknEL/xZKHM1I
    mwbhe91Vp0vWINAdmmV9mRSw5VlZN8mdwGtdyZb+29i0FHionSU5ZazPcz/fMDuyx1qv
    LT3A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620311805;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=ozekZX4ssam+kLWNjeFPeBX66g31SOfC75RWGeZRjD8=;
    b=hrzID5RoCFQMdFIMmLuuJl4HJu4cReH0BoWvE6yJWeLSqg0O49ygth9vySmvP8wuqc
    Ekz7ITWH6RvyGNmfUUXzTl+pbsXSSvaNIXb/pFsqnCtO6nmHGMPUwj7VHsogoRpeA0d/
    h5xFHjspjWBfB6rnQkkii3G7OETgzR2Mt2kJYguNmbCwsd9vIMiXTr4LQdYmR+yMKLMc
    jo+uYje+h/7Jk4bJFt6xGr5duvf9q1+xPCs8Ko9O2GdadWiUuSgxsDyeOfFLxROcwmh8
    62EmQ4iDbWc9p0MgraC5NRoAcyub9dcehRltFdBWaiP5xpa7VsMCZLOEkAdZYnX4Jm7k
    XK/w==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620311805;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=ozekZX4ssam+kLWNjeFPeBX66g31SOfC75RWGeZRjD8=;
    b=aw/+6PZg5RrTDs3UDzQ63wHA5Cjb+mq2tTW+1ShkXjQGVMkBo5L6U4r+7/3jLrnBIF
    YVbnMmGU9IDZ2n4oJM8I6bp+F9vTk3CzfvgB5rljIqTo7+3S7+d2OEfyogK5B/w7IiF2
    DHQPZvFXWWs5wcDSuSaAOYvRdIihXCv7XbQzPK/SkN5K8DnxDcx/bGPgAJoU7+660jCl
    l4Ex2VR9fOuXM5KWcJBKEsAwid83CwZILiAakgIsd8oYb633CNrpKabyJSNsCuUbNexw
    OWUmC3W6sPlcph1hUl4eODohjr2Yh3NEBbbMXgn+cOAegstrJTFvIcOlV+lV5ATjfj7J
    R/ng==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisEsVR6GbU3svGYLiH9zFD0wpO7hNb2D/K88Gfjs"
X-RZG-CLASS-ID: mo00
Date: Thu, 6 May 2021 16:36:27 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>, Roger Pau =?UTF-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: [PATCH v1] tools: fix incorrect suggestions for
 XENCONSOLED_TRACE on FreeBSD
Message-ID: <20210506163627.12d191cc.olaf@aepfle.de>
In-Reply-To: <c71658e6-422b-4852-6d21-4688d09d8b8e@citrix.com>
References: <20210504135021.8394-1-olaf@aepfle.de>
	<c71658e6-422b-4852-6d21-4688d09d8b8e@citrix.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_//BBTQIA3Ch69rwHHbimLKXV";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_//BBTQIA3Ch69rwHHbimLKXV
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Am Tue, 4 May 2021 18:47:12 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> I'd also be tempted to fold this and the NetBSD change together.  It's
> not as if these bugfixes are distro-specific.

I will redo the BSD patches as you suggested.

Olaf

--Sig_//BBTQIA3Ch69rwHHbimLKXV
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCT/usACgkQ86SN7mm1
DoCOBA/9GUb+i7nmx088f5C3OPm/jQDCmVbnm7q+MFcEhcNNCM2nZyoqRxL3z20A
zMKeRna9esNj4gh9B3I/XAnXTkN/1dC7jkztPpuKopjTkjuVk8CR0b44yldfQDTq
f0kJ3mdOi2PvotQ6mm5BQc0Vqq4yiauQD4aV/i3vPupzcclcn/1YQiRgm1YyHlrH
fjl22W7CARrPWAvEMMI61cO7Uc1dSuDgQBrNNBdsxAfTrgKb+PKHE8wJ6ICGHpZ0
cFQ0hPOG9HvboZl8KUSyIckJs3tCsOxPfB2eBUNYw5gxKdCYTayj+c+pYSGBZ0Bb
D1zV6NyJ9XA1gIXmb8Bfosf1NwyCOrEihPpOXRmXf9H7EhJtl6T8X7yDQFHmVkoi
dGxZR5V/Eg2Hb0L8KBuCdCD4Q/oW/5WoosjASClWorquNG+QJn0vyTuFAEBQqcHL
sUkah25f4aeDs6m3cPbbp7erjt8Q+7ex9D4V6rX5hOjfZxsMhu43su5j8niKUTB3
kA8xzbxv/bP40a9DyQOBtd+lMP948eu4S64050knKjAjZopS2A6KO3epHY0VET6v
WFEkE0uM3IxDCzArBZ9KfOYKMQDHpqb9zQ/7xGp74Fayhj1fsbjnbaimNwb+39g2
bEqeLBuwj2VGKiUOj5UKkE40x9sY9cLjpSigt62GaeiPZAs6wpw=
=glPz
-----END PGP SIGNATURE-----

--Sig_//BBTQIA3Ch69rwHHbimLKXV--


From xen-devel-bounces@lists.xenproject.org Thu May 06 14:37:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:37:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123608.233180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lef7z-0000nQ-NG; Thu, 06 May 2021 14:37:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123608.233180; Thu, 06 May 2021 14:37:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lef7z-0000nJ-KG; Thu, 06 May 2021 14:37:23 +0000
Received: by outflank-mailman (input) for mailman id 123608;
 Thu, 06 May 2021 14:37:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=r5QE=KB=thesusis.net=phill@srs-us1.protection.inumbo.net>)
 id 1lef7y-0000n3-2n
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:37:22 +0000
Received: from vps.thesusis.net (unknown [34.202.238.73])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83adba90-03e3-426e-bcd9-924df4c61081;
 Thu, 06 May 2021 14:37:21 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by vps.thesusis.net (Postfix) with ESMTP id 5B79E2F0DC;
 Thu,  6 May 2021 10:37:21 -0400 (EDT)
Received: from vps.thesusis.net ([127.0.0.1])
 by localhost (vps.thesusis.net [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id OyGslXEEArnl; Thu,  6 May 2021 10:37:21 -0400 (EDT)
Received: from debian.. (097-068-109-042.biz.spectrum.com [97.68.109.42])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits) server-digest SHA256)
 (No client certificate requested) (Authenticated sender: psusi)
 by vps.thesusis.net (Postfix) with ESMTPSA id E857B2F0DB;
 Thu,  6 May 2021 10:37:20 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83adba90-03e3-426e-bcd9-924df4c61081
From: Phillip Susi <phill@thesusis.net>
To: phill@thesusis.net
Cc: xen-devel@lists.xenproject.org,
	linux-input@vger.kernel.org,
	dmitry.torokhov@gmail.com
Subject: [PATCH] Xen Keyboard: don't advertise every key known to man
Date: Thu,  6 May 2021 14:36:54 +0000
Message-Id: <20210506143654.17924-1-phill@thesusis.net>
X-Mailer: git-send-email 2.30.2
In-Reply-To: <87o8dw52jc.fsf@vps.thesusis.net>
References: <87o8dw52jc.fsf@vps.thesusis.net>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

For reasons I still don't understand, the input subsystem allows
input devices to advertise what keys they have, and adds this
information to the modalias for the device.  The Xen Virtual
Keyboard was advertising every known key, which resulted in a
modalias string over 2 KiB in length, causing uevents to fail
with -ENOMEM when trying to add the modalias to the env.
Remove this advertisement.
---
 drivers/input/misc/xen-kbdfront.c | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/drivers/input/misc/xen-kbdfront.c b/drivers/input/misc/xen-kbdfront.c
index 4ff5cd2a6d8d..d4bd558afda9 100644
--- a/drivers/input/misc/xen-kbdfront.c
+++ b/drivers/input/misc/xen-kbdfront.c
@@ -254,11 +254,6 @@ static int xenkbd_probe(struct xenbus_device *dev,
 		kbd->id.product = 0xffff;
 
 		__set_bit(EV_KEY, kbd->evbit);
-		for (i = KEY_ESC; i < KEY_UNKNOWN; i++)
-			__set_bit(i, kbd->keybit);
-		for (i = KEY_OK; i < KEY_MAX; i++)
-			__set_bit(i, kbd->keybit);
-
 		ret = input_register_device(kbd);
 		if (ret) {
 			input_free_device(kbd);
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:44:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:44:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123618.233191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefEK-0002L5-KJ; Thu, 06 May 2021 14:43:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123618.233191; Thu, 06 May 2021 14:43:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefEK-0002Ky-HG; Thu, 06 May 2021 14:43:56 +0000
Received: by outflank-mailman (input) for mailman id 123618;
 Thu, 06 May 2021 14:43:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=sEA5=KB=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1lefEJ-0002Ks-NY
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:43:55 +0000
Received: from mail-wm1-x32a.google.com (unknown [2a00:1450:4864:20::32a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4d0b5e78-daf3-42c9-84ba-2aa4180077dd;
 Thu, 06 May 2021 14:43:52 +0000 (UTC)
Received: by mail-wm1-x32a.google.com with SMTP id
 b11-20020a7bc24b0000b0290148da0694ffso5450533wmj.2
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 07:43:52 -0700 (PDT)
Received: from [192.168.1.186]
 (host86-180-176-157.range86-180.btcentralplus.com. [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id o15sm4540204wru.42.2021.05.06.07.43.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 06 May 2021 07:43:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d0b5e78-daf3-42c9-84ba-2aa4180077dd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=Ml5++Jmo/jaNHi4JfexBhOqrQdGVQl2n4qPdFHUxiT0=;
        b=mLkgDE2yM4VpG0cUsA79H+eBo4OGaodagcUEj2lwfpkYwkhF1ruZA9gMUEDeqBoX3n
         HrBRt2aF5buUvEhSFzW3mN/KCtDcdsyCO7/B1XhB2Dqx4VM+hLG2/vdvIa9ju9P7wWnU
         wuhFM0j2TPBsVWKoGzPN4nB4Kzt0/PXAzdmGC0k1AN1uebtN4nYkGyQS3qmQ4klHE59b
         4FR5Na6u7T1qm8i9RZvovwRjCq1e+gBD2ztoEKGFL96KnWQMFlEYzhkvZ0FNfuAK+FLr
         K/S87goOrw/x/omhETe7bpMEzSJlIuc3uEBKZ1IMmKggQ50gI6585GN9yql21Fq2DNRP
         30vg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=Ml5++Jmo/jaNHi4JfexBhOqrQdGVQl2n4qPdFHUxiT0=;
        b=MtZPdzELho2E43ZGYBqIMvCk6FSN/dEcDKK4Newn7OIgvnu5/quFwfsGgqzr3HRT9i
         wZp4n6b0akooZv3W39sG5ZIrn/wz5hM0/VU1UWFaPEOR7WbRUYh+DBZJiKyrmVa45g2U
         Pit2Y3csti/R8onarpU7yxgQ9sDkHvQaPjozLwVcKWmHbE4Llx3vdEbRWNzedLYDCcz9
         XZc2TxJ7yvB6RQwwvmCvhdSh2PNJ7y20s1Cv1hRVM++FlBxmPyIVoOM7YTSFroC1qIJN
         iJz4rjqgF9pOxMStjZVv6M1lNGEW5gFAqnHxZan9NSgrG6NZEpsxzFn9gKcf2ps/8KNk
         oTLA==
X-Gm-Message-State: AOAM53181+vcwOMKxwYTsi0Wx12N3PE3sUxwR1+GTxeN8mi63ofIbcoS
	3hnNWp6IL5wmNJiS+5H2SbM=
X-Google-Smtp-Source: ABdhPJyYb7fEMw/HUg92rZQsqfqaGjOwXILUu1v+CGWYvH/G5bQt6f30ARgUe6lI11rjKnLYzFdxFw==
X-Received: by 2002:a1c:c28a:: with SMTP id s132mr15400176wmf.145.1620312231584;
        Thu, 06 May 2021 07:43:51 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: dwmw2@infradead.org, hongyxia@amazon.com, raphning@amazon.com,
 maghul@amazon.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210506104259.16928-1-julien@xen.org>
 <20210506104259.16928-2-julien@xen.org>
Message-ID: <00db0c29-337f-4afd-d3a9-ee44b5b74146@xen.org>
Date: Thu, 6 May 2021 15:43:50 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506104259.16928-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06/05/2021 11:42, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 

Looks good in general... just a few comments below...

> Administrators often require updating the Xen hypervisor to address
> security vulnerabilities, introduce new features, or fix software defects.
> Currently, we offer the following methods to perform the update:
> 
>      * Rebooting the guests and the host: this is highly disrupting to running
>        guests.
>      * Migrating off the guests, rebooting the host: this currently requires
>        the guest to cooperate (see [1] for a non-cooperative solution) and it
>        may not always be possible to migrate it off (i.e lack of capacity, use
>        of local storage...).
>      * Live patching: This is the less disruptive of the existing methods.
>        However, it can be difficult to prepare the livepatch if the change is
>        large or there are data structures to update.
> 
> This patch will introduce a new proposal called "Live Update" which will
> activate new software without noticeable downtime (i.e no - or minimal -
> customer pain).
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>   docs/designs/liveupdate.md | 254 +++++++++++++++++++++++++++++++++++++
>   1 file changed, 254 insertions(+)
>   create mode 100644 docs/designs/liveupdate.md
> 
> diff --git a/docs/designs/liveupdate.md b/docs/designs/liveupdate.md
> new file mode 100644
> index 000000000000..32993934f4fe
> --- /dev/null
> +++ b/docs/designs/liveupdate.md
> @@ -0,0 +1,254 @@
> +# Live Updating Xen
> +
> +## Background
> +
> +Administrators often require updating the Xen hypervisor to address security
> +vulnerabilities, introduce new features, or fix software defects.  Currently,
> +we offer the following methods to perform the update:
> +
> +    * Rebooting the guests and the host: this is highly disrupting to running
> +      guests.
> +    * Migrating off the guests, rebooting the host: this currently requires
> +      the guest to cooperate (see [1] for a non-cooperative solution) and it
> +      may not always be possible to migrate it off (i.e lack of capacity, use
> +      of local storage...).
> +    * Live patching: This is the less disruptive of the existing methods.
> +      However, it can be difficult to prepare the livepatch if the change is
> +      large or there are data structures to update.
> +
> +This document will present a new approach called "Live Update" which will
> +activate new software without noticeable downtime (i.e no - or minimal -
> +customer pain).
> +
> +## Terminology
> +
> +xen#1: Xen version currently active and running on a droplet.  This is the
> +“source” for the Live Update operation.  This version can actually be newer
> +than xen#2 in case of a rollback operation.
> +
> +xen#2: Xen version that's the “target” of the Live Update operation. This
> +version will become the active version after successful Live Update.  This
> +version of Xen can actually be older than xen#1 in case of a rollback
> +operation.
> +
> +## High-level overview
> +
> +Xen has a framework to bring a new image of the Xen hypervisor in memory using
> +kexec.  The existing framework does not meet the baseline functionality for
> +Live Update, since kexec results in a restart for the hypervisor, host, Dom0,
> +and all the guests.

Feels like there's a sentence or two missing here. The subject has 
jumped from a framework that is not fit for purpose to 'the operation'.

> +
> +The operation can be divided in roughly 4 parts:
> +
> +    1. Trigger: The operation will by triggered from outside the hypervisor
> +       (e.g. dom0 userspace).
> +    2. Save: The state will be stabilized by pausing the domains and
> +       serialized by xen#1.
> +    3. Hand-over: xen#1 will pass the serialized state and transfer control to
> +       xen#2.
> +    4. Restore: The state will be deserialized by xen#2.
> +
> +All the domains will be paused before xen#1 is starting to save the states,

s/is starting/starts

> +and any domain that was running before Live Update will be unpaused after
> +xen#2 has finished to restore the states.  This is to prevent a domain to try

s/finished to restore/finished restoring

and

s/domain to try/domain trying

> +to modify the state of another domain while it is being saved/restored.
> +
> +The current approach could be seen as non-cooperative migration with a twist:
> +all the domains (including dom0) are not expected to be involved in the Live
> +Update process.
> +
> +The major differences compare to live migration are:

s/compare/compared

> +
> +    * The state is not transferred to another host, but instead locally to
> +      xen#2.
> +    * The memory content or device state (for passthrough) does not need to
> +      be part of the stream. Instead we need to preserve it.
> +    * PV backends, device emulators, xenstored are not recreated but preserved
> +      (as these are part of dom0).
> +
> +
> +Domains in the process of being destroyed (*XEN\_DOMCTL\_destroydomain*) will need
> +to be preserved because another entity may have mappings (e.g foreign, grant)
> +on them.
> +
> +## Trigger
> +
> +Live update is built on top of the kexec interface to prepare the command line,
> +load xen#2 and trigger the operation.  A new kexec type has been introduced
> +(*KEXEC\_TYPE\_LIVE\_UPDATE*) to notify Xen to Live Update.
> +
> +The Live Update will be triggered from outside the hypervisor (e.g. dom0
> +userspace).  Support for the operation has been added in kexec-tools 2.0.21.
> +
> +All the domains will be paused before xen#1 is starting to save the states.

You already said this in the previous section.

> +In Xen, *domain\_pause()* will pause the vCPUs as soon as they can be re-
> +scheduled.  In other words, a pause request will not wait for asynchronous
> +requests (e.g. I/O) to finish.  For Live Update, this is not an ideal time to
> +pause because it will require more xen#1 internal state to be transferred.
> +Therefore, all the domains will be paused at an architectural restartable
> +boundary.
> +
> +Live update will not happen synchronously to the request but when all the
> +domains are quiescent.  As domains running device emulators (e.g Dom0) will
> +be part of the process to quiesce HVM domains, we will need to let them run
> +until xen#1 is actually starting to save the state.  HVM vCPUs will be paused
> +as soon as any pending asynchronous request has finished.
> +
> +In the current implementation, all PV domains will continue to run while the
> +rest will be paused as soon as possible.  Note this approach is assuming that
> +device emulators are only running in PV domains.
> +
> +It should be easy to extend to PVH domains not requiring device emulations.
> +It will require more thought if we need to run device models in HVM domains as
> +there might be inter-dependency.
> +
> +## Save
> +
> +xen#1 will be responsible to preserve and serialize the state of each existing
> +domain and any system-wide state (e.g M2P).

s/to preserve and serialize/for preserving and serializing

> +
> +Each domain will be serialized independently using a modified migration stream.
> +If there is any dependency between domains (such as for IOREQ server), it will
> +be recorded using a domid. All the complexity of resolving the dependencies is
> +left to the restore path in xen#2 (more in the *Restore* section).
> +
> +At the moment, the domains are saved one by one in a single thread, but it
> +would be possible to consider multi-threading if it takes too long, although
> +this may require some adjustment in the stream format.
> +
> +As we want to be able to Live Update between major versions of Xen (e.g Xen
> +4.11 -> Xen 4.15), the state preserved should not be a dump of Xen internal
> +structures but instead the minimal information that allows us to recreate the
> +domains.
> +
> +For instance, we don't want to preserve the frametable (and therefore
> +*struct page\_info*) as-is because the refcounting may be different
> +between xen#1 and xen#2 (see XSA-299). Instead, we want to be able to recreate
> +*struct page\_info* based on minimal information that is considered stable
> +(such as the page type).
> +
> +Note that upgrading between versions of Xen will also require all the hypercalls
> +to be stable. This will not be covered by this document.
> +
> +## Hand over
> +
> +### Memory usage restrictions
> +
> +xen#2 must take care not to use any memory pages which already belong to
> +guests.  To facilitate this, a number of contiguous regions of memory are
> +reserved for the boot allocator, known as *live update bootmem*.
> +
> +xen#1 will always reserve a region just below Xen (the size is controlled by
> +the Xen command line parameter liveupdate) to allow Xen to grow and to provide
> +information about LiveUpdate (see the section *Breadcrumb*).  The region will be
> +passed to xen#2 using the same command line option but with the base address
> +specified.
> +
> +For simplicity, additional regions will be provided in the stream.  They will
> +consist of region that could be re-used by xen#2 during boot (such as the

s/region/a region

   Paul

> +xen#1's frametable memory).
> +
> +xen#2 must not use any pages outside those regions until it has consumed the
> +Live Update data stream and determined which pages are already in use by
> +running domains or need to be re-used as-is by Xen (e.g M2P).
> +
> +At run time, Xen may use memory from the reserved region for any purpose that
> +does not require preservation over a Live Update; in particular it __must__ not be
> +mapped to a domain or used by any Xen state requiring to be preserved (e.g
> +M2P).  In other words, the xenheap pages could be allocated from the reserved
> +regions if we remove the concept of shared xenheap pages.
> +
> +xen#2's binary may be bigger (or smaller) compared to xen#1's binary.  So
> +for the purpose of loading xen#2 binary, kexec should treat the reserved memory
> +right below xen#1 and its region as a single contiguous space. xen#2 will be
> +loaded right at the top of the contiguous space and the rest of the memory will
> +be the new reserved memory (this may shrink or grow).  For that reason, freed
> +init memory from xen#1 image is also treated as reserved liveupdate
> +bootmem.
> +
> +### Live Update data stream
> +
> +During handover, xen#1 creates a Live Update data stream containing all the
> +information required by the new xen#2 to restore all the domains.
> +
> +Data pages for this stream may be allocated anywhere in physical memory outside
> +the *live update bootmem* regions.
> +
> +As calling __vmap()__/__vunmap()__ has a cost on the downtime, we want to
> +reduce the number of calls to __vmap()__ when restoring the stream.  Therefore the stream
> +will be contiguously virtually mapped in xen#2.  xen#1 will create an array of
> +MFNs of the allocated data pages, suitable for passing to __vmap()__.  The
> +array will be physically contiguous but the MFNs don't need to be physically
> +contiguous.
> +
> +### Breadcrumb
> +
> +Since the Live Update data stream is created during the final **kexec\_exec**
> +hypercall, its address cannot be passed on the command line to the new Xen
> +since the command line needs to have been set up by **kexec(8)** in userspace
> +long beforehand.
> +
> +Thus, to allow the new Xen to find the data stream, xen#1 places a breadcrumb
> +in the first words of the Live Update bootmem, containing the number of data
> +pages, and the physical address of the contiguous MFN array.
> +
> +### IOMMU
> +
> +Where devices are passed through to domains, it may not be possible to quiesce
> +those devices for the purpose of performing the update.
> +
> +If performing Live Update with assigned devices, xen#1 will leave the IOMMU
> +mappings active during the handover (thus implying that IOMMU page tables may
> +not be allocated in the *live update bootmem* region either).
> +
> +xen#2 must take control of the IOMMU without causing those mappings to become
> +invalid even for a short period of time.  In other words, xen#2 should not
> +re-setup the IOMMUs.  On hardware which does not support Posted Interrupts,
> +interrupts may need to be generated on resume.
> +
> +## Restore
> +
> +After xen#2 has initialized itself and mapped the stream, it will be
> +responsible for restoring the state of the system and each domain.
> +
> +Unlike the save part, it is not possible to restore a domain in a single pass.
> +There are dependencies between:
> +
> +    1. different states of a domain.  For instance, the event channels ABI
> +       used (2l vs fifo) needs to be restored before restoring the event
> +       channels.
> +    2. the same "state" within a domain.  For instance, in the case of a PV
> +       domain, the pages' ownership needs to be restored before restoring the type
> +       of the page (e.g is it an L4, L1... table?).
> +
> +    3. domains.  For instance when restoring the grant mapping, it will be
> +       necessary to have the page's owner in hand to do proper refcounting.
> +       Therefore the pages' ownership has to be restored first.
> +
> +Dependencies will be resolved using either multiple passes (for dependency
> +type 2 and 3) or using a specific ordering between records (for dependency
> +type 1).
> +
> +Each domain will be restored in 3 passes:
> +
> +    * Pass 0: Create the domain and restore the P2M for HVM. This can be broken
> +      down in 3 parts:
> +      * Allocate a domain via _domain\_create()_ but skip parts that require
> +        extra records (e.g HAP, P2M).
> +      * Restore any parts which need to be done before creating the vCPUs.
> +        This includes restoring the P2M and whether HAP is used.
> +      * Create the vCPUs. Note this doesn't restore the state of the vCPUs.
> +    * Pass 1: Restore the pages' ownership and the grant-table frames.
> +    * Pass 2: This step will restore any remaining domain state (e.g vCPU
> +      state, event channels) that wasn't covered by the previous passes.
> +
> +A domain should not have a dependency on another domain within the same pass.
> +Therefore it would be possible to take advantage of all the CPUs to restore
> +domains in parallel and reduce the overall downtime.
> +
> +Once all the domains have been restored, they will be unpaused if they were
> +running before Live Update.
> +
> +* * *
> +[1] https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=docs/designs/non-cooperative-migration.md;h=4b876d809fb5b8aac02d29fd7760a5c0d5b86d87;hb=HEAD
> +
> 



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:44:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:44:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123619.233204 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefEP-0002cz-TY; Thu, 06 May 2021 14:44:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123619.233204; Thu, 06 May 2021 14:44:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefEP-0002cs-Pa; Thu, 06 May 2021 14:44:01 +0000
Received: by outflank-mailman (input) for mailman id 123619;
 Thu, 06 May 2021 14:44:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HxhD=KB=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lefEO-0002cI-QK
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:44:00 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.66]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id eaa7c4fd-7dff-4d53-bf86-09fed8913b8f;
 Thu, 06 May 2021 14:43:59 +0000 (UTC)
Received: from DU2PR04CA0261.eurprd04.prod.outlook.com (2603:10a6:10:28e::26)
 by DB7PR08MB3404.eurprd08.prod.outlook.com (2603:10a6:10:4c::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4087.35; Thu, 6 May
 2021 14:43:50 +0000
Received: from DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:28e:cafe::5a) by DU2PR04CA0261.outlook.office365.com
 (2603:10a6:10:28e::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.24 via Frontend
 Transport; Thu, 6 May 2021 14:43:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT035.mail.protection.outlook.com (10.152.20.65) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Thu, 6 May 2021 14:43:50 +0000
Received: ("Tessian outbound 13cdc29c30b8:v91");
 Thu, 06 May 2021 14:43:50 +0000
Received: from 836d516b62f3.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 84EEA4D9-07A8-4D8F-9868-DE76C025A24B.1; 
 Thu, 06 May 2021 14:43:31 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 836d516b62f3.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 06 May 2021 14:43:31 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR08MB5375.eurprd08.prod.outlook.com (2603:10a6:803:130::7)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.26; Thu, 6 May
 2021 14:43:29 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9%4]) with mapi id 15.20.4087.044; Thu, 6 May 2021
 14:43:29 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LNXP265CA0051.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:5d::15) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Thu, 6 May 2021 14:43:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eaa7c4fd-7dff-4d53-bf86-09fed8913b8f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ElkAIVTIBIywb4RaTMWfi7okWmexFlLgEKKbWzchDuQ=;
 b=+OslYPquYaTPOYQ5b4Kw3p0B7mSEIYFJRdxWbs12rzXIObRsyEcAFM/us5kmCedENXasKK1mkbZ3r55PJ9//gD0ErQiQZE5C4+StJvCXsf+9yqavbKRcgzfCqfn6CTiY+k4u0g2wPB8x5EvNIduQLa5+CFVk/PFyHFAnf5XzpNs=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: cd67c82459f39719
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QoB7cJkL3DpPpjnSIqAnC1RNGTwzacHYN+Hs0U7lCEQdm2iISXEw/Ewioh0lRIpG9LvWLskLI3RK536Q2Icc0zPtNcr8guKSN21ITSq/LLPagH3wAtK8o2Qi4AixDRTfSIOI4VHhZ4crz4INvXAjj0iMrJ4ZF7pOnMQ5pe/2ciV5mCVu3YFYTxUTgKyors8tNCEDSFlELl5CdPqjOfq9GO1yJ5r+sOb8ZhjTiHKSHKv9vjoetaapUR/fzSqbFopWcvNuSt+jOMvdgd+3qhG9u/j4lrxktQhbUo9pPCeG91AHlh4v1OeorKMkONe7CzrUlTAN04wwldMM+/MrZl+a1Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ElkAIVTIBIywb4RaTMWfi7okWmexFlLgEKKbWzchDuQ=;
 b=kLCnDkkeau0Sq4dEeL2twyEYE9UppHZY4LtmPPwsrkgcAUvpe3b6oOMJpFSI9xmAju0UwUasLMX0waq/+cXoYjYxdZKCi9t3S9ULGJ0OnS7bzdvA5BvpnHf/z+lXs2qSsb+SoYNTnSly50tfOtQj13jctRFihj8MSqvxczLnOF0P6yuSq48VAxNlxkKoPp1hjWvEtpbQfZ0ro+DStJZF+UdMsk6J7V2v2Mo9S+yeWuqPuzxjMrQt5Fe+NPu2S/Z5wsGxbV7trJTCbYsgAdFgpBUxcSeD0ANMX5ISYu63FUXzb82W6Dr3PpPP5maD3c2bIGWbuLlHR00Ofz/Ve7TBCQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ElkAIVTIBIywb4RaTMWfi7okWmexFlLgEKKbWzchDuQ=;
 b=+OslYPquYaTPOYQ5b4Kw3p0B7mSEIYFJRdxWbs12rzXIObRsyEcAFM/us5kmCedENXasKK1mkbZ3r55PJ9//gD0ErQiQZE5C4+StJvCXsf+9yqavbKRcgzfCqfn6CTiY+k4u0g2wPB8x5EvNIduQLa5+CFVk/PFyHFAnf5XzpNs=
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=utf-8
Subject: Re: [PATCH v5 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <8fada713-9ae5-ddd3-585b-0f090748fc49@suse.com>
Date: Thu, 6 May 2021 15:43:21 +0100
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <BFE99431-6929-4A14-BE57-BFD6D6AA2997@arm.com>
References: <20210504133145.767-1-luca.fancellu@arm.com>
 <20210504133145.767-4-luca.fancellu@arm.com>
 <alpine.DEB.2.21.2105041514260.5018@sstabellini-ThinkPad-T480s>
 <9E7D7B58-0ABA-4800-B2D3-9EE3E29CF599@arm.com>
 <8fada713-9ae5-ddd3-585b-0f090748fc49@suse.com>
To: Jan Beulich <jbeulich@suse.com>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LNXP265CA0051.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:5d::15) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 360639a5-2629-469d-d163-08d9109d5cdf
X-MS-TrafficTypeDiagnostic: VI1PR08MB5375:|DB7PR08MB3404:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB340432CA6C41D620EE25224AE4589@DB7PR08MB3404.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5375
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5e0ebae0-41de-43e7-4f9d-08d9109d5003
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	2nbGOO2ZJ+MZWB4qKykFCdGtfTCLR8uWyu9cpWRWco9FZ7hfAV3SuM9FFYZ1fSyNg56wG7zhr3BLqElBfPc3AdkV+/wZS2y7B+3eMCCl7Re5JUIh+QOCMLVfwHwQ1zvfma1+X/ug2dp38aoqxaJRBHzqLSuVt2J2USpd5BXOewgCeOt3AuUNrgZJL9YX7CYUjsUNZU03/SbCMhGdk/mufPI1Zrx/fT60JiSLPvk0VSzxmyclh5K8/Pvf7QOo6eYMTzZ9nVlU3f3IxdUd1wxN0EJOXw+5M6T1U9jYA8MKYVub0zFTXjOn4GZKGmT3sU5sF67G1saYzd1VhEiHix5Spxahbz4CWHD3YqWc4O2yW0KHMdeTpG8rEbbMG2ZGpOjseV2z6S8SSh3VS4ccfg5UDlofRXQG0gpezmo7rOx/XUUFpKxgAI2qbJst4VMp0+64C2SKxW3SmsN7zb7OwQ+zTPsPJDK9qIPwxNbuNjoLNc419Jd1qgp0BDrX00n6j1Pe9lMC+2Y8BVIsXFfGuomiENoxtJJWpD5Myp/BLH0ZG/rsxEVhz0Z41um38zpPtYo9Z3tbSWhDOExWanitdR0CYXJC9CFz/zbFiK5Lde/uxcii2/sNuUjtjF6UpKRmf6XQ4lTnH4FdvtJyxgjwC6Fa6XfC3tWlFtsWHs6ewkzENBBouN4rgZ06+ubowb4i+i7SNBu2WK8H5MjI7/bDEKVZ8F1yBM0sEMM+cruRhmD9F7s=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(396003)(39850400004)(136003)(36840700001)(46966006)(356005)(478600001)(26005)(956004)(54906003)(8676002)(33656002)(44832011)(316002)(82740400003)(81166007)(966005)(8936002)(47076005)(6506007)(70206006)(2616005)(6512007)(70586007)(107886003)(83380400001)(6862004)(4326008)(186003)(336012)(86362001)(5660300002)(6666004)(2906002)(36860700001)(53546011)(16526019)(36756003)(6486002)(82310400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2021 14:43:50.3637
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 360639a5-2629-469d-d163-08d9109d5cdf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3404



> On 6 May 2021, at 10:58, Jan Beulich <jbeulich@suse.com> wrote:
>
> On 06.05.2021 10:48, Luca Fancellu wrote:
>>> On 4 May 2021, at 23:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>> On Tue, 4 May 2021, Luca Fancellu wrote:
>>>> @@ -51,13 +55,10 @@
>>>> * know the real machine address of a page it is sharing. This makes
>>>> * it possible to share memory correctly with domains running in
>>>> * fully virtualised memory.
>>>> - */
>>>> -
>>>> -/***********************************
>>>> + *
>>>> * GRANT TABLE REPRESENTATION
>>>> - */
>>>> -
>>>> -/* Some rough guidelines on accessing and updating grant-table entrie=
s
>>>> + *
>>>> + * Some rough guidelines on accessing and updating grant-table entrie=
s
>>>> * in a concurrency-safe manner. For more information, Linux contains a
>>>> * reference implementation for guest OSes (drivers/xen/grant_table.c, =
see
>>>> * http://git.kernel.org/?p=3Dlinux/kernel/git/torvalds/linux.git;a=3Db=
lob;f=3Ddrivers/xen/grant-table.c;hb=3DHEAD
>>>> @@ -66,6 +67,7 @@
>>>> *     compiler barrier will still be required.
>>>> *
>>>> * Introducing a valid entry into the grant table:
>>>> + * @keepindent
>>>> *  1. Write ent->domid.
>>>> *  2. Write ent->frame:
>>>> *      GTF_permit_access:   Frame to which access is permitted.
>>>> @@ -73,20 +75,25 @@
>>>> *                           frame, or zero if none.
>>>> *  3. Write memory barrier (WMB).
>>>> *  4. Write ent->flags, inc. valid type.
>>>> + * @endkeepindent
>>>> *
>>>> * Invalidating an unused GTF_permit_access entry:
>>>> + * @keepindent
>>>> *  1. flags =3D ent->flags.
>>>> *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
>>>> *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>>>> *  NB. No need for WMB as reuse of entry is control-dependent on succe=
ss of
>>>> *      step 3, and all architectures guarantee ordering of ctrl-dep wr=
ites.
>>>> + * @endkeepindent
>>>> *
>>>> * Invalidating an in-use GTF_permit_access entry:
>>>> + *
>>>> *  This cannot be done directly. Request assistance from the domain co=
ntroller
>>>> *  which can set a timeout on the use of a grant entry and take necess=
ary
>>>> *  action. (NB. This is not yet implemented!).
>>>> *
>>>> * Invalidating an unused GTF_accept_transfer entry:
>>>> + * @keepindent
>>>> *  1. flags =3D ent->flags.
>>>> *  2. Observe that !(flags & GTF_transfer_committed). [*]
>>>> *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>>>> @@ -97,18 +104,24 @@
>>>> *      transferred frame is written. It is safe for the guest to spin =
waiting
>>>> *      for this to occur (detect by observing GTF_transfer_completed i=
n
>>>> *      ent->flags).
>>>> + * @endkeepindent
>>>> *
>>>> * Invalidating a committed GTF_accept_transfer entry:
>>>> *  1. Wait for (ent->flags & GTF_transfer_completed).
>>>> *
>>>> * Changing a GTF_permit_access from writable to read-only:
>>>> + *
>>>> *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writi=
ng.
>>>> *
>>>> * Changing a GTF_permit_access from read-only to writable:
>>>> + *
>>>> *  Use SMP-safe bit-setting instruction.
>>>> + *
>>>> + * @addtogroup grant_table Grant Tables
>>>> + * @{
>>>> */
>>>>=20
>>>> -/*
>>>> +/**
>>>> * Reference to a grant entry in a specified domain's grant table.
>>>> */
>>>> typedef uint32_t grant_ref_t;
>>>=20
>>> Just below this typedef there is the following comment:
>>>=20
>>> /*
>>> * A grant table comprises a packed array of grant entries in one or mor=
e
>>> * page frames shared between Xen and a guest.
>>> * [XEN]: This field is written by Xen and read by the sharing guest.
>>> * [GST]: This field is written by the guest and read by Xen.
>>> */
>>>=20
>>> I noticed it doesn't appear in the output html. Any way we can retain i=
t
>>> somewhere? Maybe we have to move it together with the larger comment
>>> above?
>>=20
>> I agree with you, this comment should appear in the html docs, but to do=
 so
>> It has to be moved together with the larger comment above.
>>=20
>> In the original patch it was like that but I had to revert it back due t=
o objection, my proposal is
>> to put it together with the larger comment and write something like this=
 to
>> maintain a good readability:
>>=20
>>   *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writi=
ng.
>>   *
>>   * Changing a GTF_permit_access from read-only to writable:
>>   *
>>   *  Use SMP-safe bit-setting instruction.
>> + *
>> + * A grant table comprises a packed array of grant entries in one or mo=
re
>> + * page frames shared between Xen and a guest.
>=20
> I think if this part was moved to the top of this big comment while ...
>=20
>> + * Data structure fields or defines described below have the following =
tags:
>> + * * [XEN]: This field is written by Xen and read by the sharing guest.
>> + * * [GST]: This field is written by the guest and read by Xen.
>=20
> ... this part was, as suggested by you, left near its bottom, I could
> agree.

Hi Jan,

Just to be sure that we are on the same page, something like this could be =
ok?

 * fully virtualised memory.
 *
 * GRANT TABLE REPRESENTATION
 *
+ * A grant table comprises a packed array of grant entries in one or more
+ * page frames shared between Xen and a guest.
+ *
 * Some rough guidelines on accessing and updating grant-table entries
 * in a concurrency-safe manner. For more information, Linux contains a
[=E2=80=A6]
 * Changing a GTF_permit_access from read-only to writable:
 *
 *  Use SMP-safe bit-setting instruction.
 *
+ * Data structure fields or defines described below have the following tag=
s:
+ * * [XEN]: This field is written by Xen and read by the sharing guest.
+ * * [GST]: This field is written by the guest and read by Xen.
 *
 * @addtogroup grant_table Grant Tables
 * @{


>=20
> However, you making this suggestion caused me to look more closely at
> what the comments actually describe. If there's effort to make the
> documentation easier accessible by extracting it from the header, I
> wonder whether - like with the v1 vs v2 comment pointed out previously
> as misleading - we shouldn't, as a prereq step, make an attempt to
> actually have the documentation be correct. For example I found this:
>=20
> /*
> * Version 1 and version 2 grant entries share a common prefix.  The
> * fields of the prefix are documented as part of struct
> * grant_entry_v1.
> */
> struct grant_entry_header {
>    uint16_t flags;
>    domid_t  domid;
> };
>=20
> The comment is wrong. "flags" here is only holding what's tagged
> [GST] for v1. The [XEN] tagged bits actually live in grant_status_t.
> This can perhaps best be seen in gnttab_set_version()'s code
> dealing with the first 8 entries. However, contrary to v2's
> intentions, GTF_transfer_committed and GTF_transfer_completed (which
> aren't properly tagged either way) get set by Xen in shared entries,
> not status ones. Maybe this was considered "okay" because the frame
> field also gets written in this case (i.e. the cache line will get
> dirtied in any event).
>=20
> Similarly I'd like to refer to my still pending "gnttab: GTF_sub_page
> is a v2-only flag", which also corrects documentation in this regard.
> And perhaps there's more.
>=20
> An alternative to correcting the (as it seems) v2 related issues, it
> may be worth considering to extract only v1 documentation in this
> initial phase.
>=20
> Jan
>=20



From xen-devel-bounces@lists.xenproject.org Thu May 06 14:49:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 14:49:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123629.233216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefJn-0003it-OE; Thu, 06 May 2021 14:49:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123629.233216; Thu, 06 May 2021 14:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefJn-0003im-K7; Thu, 06 May 2021 14:49:35 +0000
Received: by outflank-mailman (input) for mailman id 123629;
 Thu, 06 May 2021 14:49:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EnkQ=KB=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lefJm-0003ig-A0
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 14:49:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcaeb701-3a31-4c01-b779-095b100a0db7;
 Thu, 06 May 2021 14:49:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 34087B201;
 Thu,  6 May 2021 14:49:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcaeb701-3a31-4c01-b779-095b100a0db7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v5 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>, wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210504133145.767-1-luca.fancellu@arm.com>
 <20210504133145.767-4-luca.fancellu@arm.com>
 <alpine.DEB.2.21.2105041514260.5018@sstabellini-ThinkPad-T480s>
 <9E7D7B58-0ABA-4800-B2D3-9EE3E29CF599@arm.com>
 <8fada713-9ae5-ddd3-585b-0f090748fc49@suse.com>
 <BFE99431-6929-4A14-BE57-BFD6D6AA2997@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b66c7973-385d-2380-21f6-8948cca7f5c7@suse.com>
Date: Thu, 6 May 2021 16:49:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <BFE99431-6929-4A14-BE57-BFD6D6AA2997@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 06.05.2021 16:43, Luca Fancellu wrote:
>> On 6 May 2021, at 10:58, Jan Beulich <jbeulich@suse.com> wrote:
>> On 06.05.2021 10:48, Luca Fancellu wrote:
>>>> On 4 May 2021, at 23:27, Stefano Stabellini <sstabellini@kernel.org> wrote:
>>>> On Tue, 4 May 2021, Luca Fancellu wrote:
>>>>> @@ -51,13 +55,10 @@
>>>>> * know the real machine address of a page it is sharing. This makes
>>>>> * it possible to share memory correctly with domains running in
>>>>> * fully virtualised memory.
>>>>> - */
>>>>> -
>>>>> -/***********************************
>>>>> + *
>>>>> * GRANT TABLE REPRESENTATION
>>>>> - */
>>>>> -
>>>>> -/* Some rough guidelines on accessing and updating grant-table entries
>>>>> + *
>>>>> + * Some rough guidelines on accessing and updating grant-table entries
>>>>> * in a concurrency-safe manner. For more information, Linux contains a
>>>>> * reference implementation for guest OSes (drivers/xen/grant_table.c, see
>>>>> * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
>>>>> @@ -66,6 +67,7 @@
>>>>> *     compiler barrier will still be required.
>>>>> *
>>>>> * Introducing a valid entry into the grant table:
>>>>> + * @keepindent
>>>>> *  1. Write ent->domid.
>>>>> *  2. Write ent->frame:
>>>>> *      GTF_permit_access:   Frame to which access is permitted.
>>>>> @@ -73,20 +75,25 @@
>>>>> *                           frame, or zero if none.
>>>>> *  3. Write memory barrier (WMB).
>>>>> *  4. Write ent->flags, inc. valid type.
>>>>> + * @endkeepindent
>>>>> *
>>>>> * Invalidating an unused GTF_permit_access entry:
>>>>> + * @keepindent
>>>>> *  1. flags = ent->flags.
>>>>> *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
>>>>> *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>>>>> *  NB. No need for WMB as reuse of entry is control-dependent on success of
>>>>> *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
>>>>> + * @endkeepindent
>>>>> *
>>>>> * Invalidating an in-use GTF_permit_access entry:
>>>>> + *
>>>>> *  This cannot be done directly. Request assistance from the domain controller
>>>>> *  which can set a timeout on the use of a grant entry and take necessary
>>>>> *  action. (NB. This is not yet implemented!).
>>>>> *
>>>>> * Invalidating an unused GTF_accept_transfer entry:
>>>>> + * @keepindent
>>>>> *  1. flags = ent->flags.
>>>>> *  2. Observe that !(flags & GTF_transfer_committed). [*]
>>>>> *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
>>>>> @@ -97,18 +104,24 @@
>>>>> *      transferred frame is written. It is safe for the guest to spin waiting
>>>>> *      for this to occur (detect by observing GTF_transfer_completed in
>>>>> *      ent->flags).
>>>>> + * @endkeepindent
>>>>> *
>>>>> * Invalidating a committed GTF_accept_transfer entry:
>>>>> *  1. Wait for (ent->flags & GTF_transfer_completed).
>>>>> *
>>>>> * Changing a GTF_permit_access from writable to read-only:
>>>>> + *
>>>>> *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
>>>>> *
>>>>> * Changing a GTF_permit_access from read-only to writable:
>>>>> + *
>>>>> *  Use SMP-safe bit-setting instruction.
>>>>> + *
>>>>> + * @addtogroup grant_table Grant Tables
>>>>> + * @{
>>>>> */
>>>>>
>>>>> -/*
>>>>> +/**
>>>>> * Reference to a grant entry in a specified domain's grant table.
>>>>> */
>>>>> typedef uint32_t grant_ref_t;
>>>>
>>>> Just below this typedef there is the following comment:
>>>>
>>>> /*
>>>> * A grant table comprises a packed array of grant entries in one or more
>>>> * page frames shared between Xen and a guest.
>>>> * [XEN]: This field is written by Xen and read by the sharing guest.
>>>> * [GST]: This field is written by the guest and read by Xen.
>>>> */
>>>>
>>>> I noticed it doesn't appear in the output html. Any way we can retain it
>>>> somewhere? Maybe we have to move it together with the larger comment
>>>> above?
>>>
>>> I agree with you, this comment should appear in the html docs, but to do so
>>> it has to be moved together with the larger comment above.
>>>
>>> In the original patch it was like that, but I had to revert it due to an objection;
>>> my proposal is to put it together with the larger comment and write something
>>> like this to maintain good readability:
>>>
>>>   *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
>>>   *
>>>   * Changing a GTF_permit_access from read-only to writable:
>>>   *
>>>   *  Use SMP-safe bit-setting instruction.
>>> + *
>>> + * A grant table comprises a packed array of grant entries in one or more
>>> + * page frames shared between Xen and a guest.
>>
>> I think if this part was moved to the top of this big comment while ...
>>
>>> + * Data structure fields or defines described below have the following tags:
>>> + * * [XEN]: This field is written by Xen and read by the sharing guest.
>>> + * * [GST]: This field is written by the guest and read by Xen.
>>
>> ... this part was, as suggested by you, left near its bottom, I could
>> agree.
> 
> Hi Jan,
> 
> Just to be sure that we are on the same page, something like this could be ok?
> 
>  * fully virtualised memory.
>  *
>  * GRANT TABLE REPRESENTATION
>  *
> + * A grant table comprises a packed array of grant entries in one or more
> + * page frames shared between Xen and a guest.
> + *
>  * Some rough guidelines on accessing and updating grant-table entries
>  * in a concurrency-safe manner. For more information, Linux contains a
> […]
>  * Changing a GTF_permit_access from read-only to writable:
>  *
>  *  Use SMP-safe bit-setting instruction.
>  *
> + * Data structure fields or defines described below have the following tags:
> + * * [XEN]: This field is written by Xen and read by the sharing guest.
> + * * [GST]: This field is written by the guest and read by Xen.
>  *
>  * @addtogroup grant_table Grant Tables
>  * @{

I think so, yes. Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 06 15:17:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 15:17:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123635.233234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefk6-0006xw-0D; Thu, 06 May 2021 15:16:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123635.233234; Thu, 06 May 2021 15:16:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefk5-0006xp-St; Thu, 06 May 2021 15:16:45 +0000
Received: by outflank-mailman (input) for mailman id 123635;
 Thu, 06 May 2021 15:16:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PObI=KB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lefk3-0006xi-UD
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 15:16:44 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba675642-be28-4a39-b110-4539c4f17dc5;
 Thu, 06 May 2021 15:16:42 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.6 AUTH)
 with ESMTPSA id V0bf6dx46FGa0Kh
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 6 May 2021 17:16:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba675642-be28-4a39-b110-4539c4f17dc5
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] tools: remove unused sysconfig variable XENSTORED_ROOTDIR
Date: Thu,  6 May 2021 17:16:31 +0200
Message-Id: <20210506151631.1227-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The sysconfig variable XENSTORED_ROOTDIR is no longer used.
It used to point to a directory with tdb files, which is now a tmpfs.

If the database is not on a tmpfs, as on SysV and BSD systems,
xenstored will truncate the existing database files during startup.

Fixes commit 2ef6ace428dec4795b8b0a67fff6949e817013de

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/hotplug/FreeBSD/rc.d/xencommons.in           | 5 -----
 tools/hotplug/Linux/init.d/sysconfig.xencommons.in | 7 -------
 tools/hotplug/Linux/launch-xenstore.in             | 1 -
 tools/hotplug/NetBSD/rc.d/xencommons.in            | 5 -----
 4 files changed, 18 deletions(-)

diff --git a/tools/hotplug/FreeBSD/rc.d/xencommons.in b/tools/hotplug/FreeBSD/rc.d/xencommons.in
index 4c61d8c94e..fddcce314c 100644
--- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
+++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
@@ -42,11 +42,6 @@ xen_startcmd()
 
 	xenstored_pid=$(check_pidfile ${XENSTORED_PIDFILE} ${XENSTORED})
 	if test -z "$xenstored_pid"; then
-		printf "Cleaning xenstore database.\n"
-		if [ -z "${XENSTORED_ROOTDIR}" ]; then
-			XENSTORED_ROOTDIR="@XEN_LIB_STORED@"
-		fi
-		rm -f ${XENSTORED_ROOTDIR}/tdb* >/dev/null 2>&1
 		printf "Starting xenservices: xenstored, xenconsoled."
 		XENSTORED_ARGS=" --pid-file ${XENSTORED_PIDFILE}"
 		if [ -n "${XENSTORED_TRACE}" ]; then
diff --git a/tools/hotplug/Linux/init.d/sysconfig.xencommons.in b/tools/hotplug/Linux/init.d/sysconfig.xencommons.in
index b059a2910d..00cf7f91d4 100644
--- a/tools/hotplug/Linux/init.d/sysconfig.xencommons.in
+++ b/tools/hotplug/Linux/init.d/sysconfig.xencommons.in
@@ -48,13 +48,6 @@ XENSTORED_ARGS=
 # Only evaluated if XENSTORETYPE is "daemon".
 #XENSTORED_TRACE=[yes|on|1]
 
-## Type: string
-## Default: "@XEN_LIB_STORED@"
-#
-# Running xenstored on XENSTORED_ROOTDIR
-# Only evaluated if XENSTORETYPE is "daemon".
-#XENSTORED_ROOTDIR=@XEN_LIB_STORED@
-
 ## Type: string
 ## Default: @LIBEXEC@/boot/xenstore-stubdom.gz
 #
diff --git a/tools/hotplug/Linux/launch-xenstore.in b/tools/hotplug/Linux/launch-xenstore.in
index fa4ea4af49..d40c66482a 100644
--- a/tools/hotplug/Linux/launch-xenstore.in
+++ b/tools/hotplug/Linux/launch-xenstore.in
@@ -66,7 +66,6 @@ test -f @CONFIG_DIR@/@CONFIG_LEAF_DIR@/xencommons && . @CONFIG_DIR@/@CONFIG_LEAF
 [ "$XENSTORETYPE" = "" ] && XENSTORETYPE=daemon
 
 [ "$XENSTORETYPE" = "daemon" ] && {
-	[ -z "$XENSTORED_ROOTDIR" ] && XENSTORED_ROOTDIR="@XEN_LIB_STORED@"
 	[ -z "$XENSTORED_TRACE" ] || XENSTORED_ARGS="$XENSTORED_ARGS -T @XEN_LOG_DIR@/xenstored-trace.log"
 	[ -z "$XENSTORED" ] && XENSTORED=@XENSTORED@
 	[ -x "$XENSTORED" ] || {
diff --git a/tools/hotplug/NetBSD/rc.d/xencommons.in b/tools/hotplug/NetBSD/rc.d/xencommons.in
index 80e518f5de..cf2af06596 100644
--- a/tools/hotplug/NetBSD/rc.d/xencommons.in
+++ b/tools/hotplug/NetBSD/rc.d/xencommons.in
@@ -38,11 +38,6 @@ xen_startcmd()
 
 	xenstored_pid=$(check_pidfile ${XENSTORED_PIDFILE} ${sbindir}/xenstored)
 	if test -z "$xenstored_pid"; then
-		printf "Cleaning xenstore database.\n"
-		if [ -z "${XENSTORED_ROOTDIR}" ]; then
-			XENSTORED_ROOTDIR="@XEN_LIB_STORED@"
-		fi
-		rm -f ${XENSTORED_ROOTDIR}/tdb* >/dev/null 2>&1
 		printf "Starting xenservices: xenstored, xenconsoled."
 		XENSTORED_ARGS=" --pid-file ${XENSTORED_PIDFILE}"
 		if [ -n "${XENSTORED_TRACE}" ]; then


From xen-devel-bounces@lists.xenproject.org Thu May 06 15:17:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 15:17:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123636.233246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefkV-0007QI-97; Thu, 06 May 2021 15:17:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123636.233246; Thu, 06 May 2021 15:17:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lefkV-0007QB-4t; Thu, 06 May 2021 15:17:11 +0000
Received: by outflank-mailman (input) for mailman id 123636;
 Thu, 06 May 2021 15:17:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=PObI=KB=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lefkT-0007NO-BU
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 15:17:09 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1fcd6c39-16e5-4867-b9f5-96a4ad40523b;
 Thu, 06 May 2021 15:17:08 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.6 AUTH)
 with ESMTPSA id V0bf6dx46FH30Kr
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 6 May 2021 17:17:03 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1fcd6c39-16e5-4867-b9f5-96a4ad40523b
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Cc: Olaf Hering <olaf@aepfle.de>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] tools: fix incorrect suggestions for XENCONSOLED_TRACE on BSD
Date: Thu,  6 May 2021 17:17:01 +0200
Message-Id: <20210506151701.1343-1-olaf@aepfle.de>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

--log does not take a file; it specifies what is supposed to be logged.

Also, separate the XENSTORED and XENCONSOLED variables with a blank line.

Signed-off-by: Olaf Hering <olaf@aepfle.de>
---
 tools/hotplug/FreeBSD/rc.d/xencommons.in | 5 +++--
 tools/hotplug/NetBSD/rc.d/xencommons.in  | 5 +++--
 2 files changed, 6 insertions(+), 4 deletions(-)

diff --git a/tools/hotplug/FreeBSD/rc.d/xencommons.in b/tools/hotplug/FreeBSD/rc.d/xencommons.in
index ccd5a9b055..4c61d8c94e 100644
--- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
+++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
@@ -21,9 +21,10 @@ status_cmd="xen_status"
 extra_commands="status"
 required_files="/dev/xen/xenstored"
 
-XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
 XENCONSOLED_PIDFILE="@XEN_RUN_DIR@/xenconsoled.pid"
-#XENCONSOLED_TRACE="@XEN_LOG_DIR@/xenconsole-trace.log"
+#XENCONSOLED_TRACE="none|guest|hv|all"
+
+XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
 #XENSTORED_TRACE="@XEN_LOG_DIR@/xen/xenstore-trace.log"
 
 load_rc_config $name
diff --git a/tools/hotplug/NetBSD/rc.d/xencommons.in b/tools/hotplug/NetBSD/rc.d/xencommons.in
index 3981787eac..80e518f5de 100644
--- a/tools/hotplug/NetBSD/rc.d/xencommons.in
+++ b/tools/hotplug/NetBSD/rc.d/xencommons.in
@@ -20,9 +20,10 @@ status_cmd="xen_status"
 extra_commands="status"
 required_files="/kern/xen/privcmd"
 
-XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
 XENCONSOLED_PIDFILE="@XEN_RUN_DIR@/xenconsoled.pid"
-#XENCONSOLED_TRACE="@XEN_LOG_DIR@/xenconsole-trace.log"
+#XENCONSOLED_TRACE="none|guest|hv|all"
+
+XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
 #XENSTORED_TRACE="@XEN_LOG_DIR@/xenstore-trace.log"
 
 xen_precmd()


From xen-devel-bounces@lists.xenproject.org Thu May 06 16:12:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 16:12:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123645.233257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1legc3-0005OO-Gr; Thu, 06 May 2021 16:12:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123645.233257; Thu, 06 May 2021 16:12:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1legc3-0005OH-Dx; Thu, 06 May 2021 16:12:31 +0000
Received: by outflank-mailman (input) for mailman id 123645;
 Thu, 06 May 2021 16:12:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1legc2-0005OB-0i
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 16:12:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1legc0-00047q-M1; Thu, 06 May 2021 16:12:28 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1legc0-0003uf-CI; Thu, 06 May 2021 16:12:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=taAzfylmniFMr/i/NXbHSTg9722Z1jzHz8t2M1whrOw=; b=oCuBi89tY6Ij+vI5tcKSZ8QzwJ
	1UnmlL6irFx2hokVbNHwE2Zk90srkcMDTeU+psg8ahVFfoPe9HhAAS8upaC7pycbdZy2SE+FFudPj
	vh1nvce0sNobV9xpb7SsTICO6UN0e2Yg+BFrTIygsZmQ8dB4mgNBGp4aaGHw28Jrj644=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] tools/xenstored: Prevent a buffer overflow in dump_state_node_perms()
Date: Thu,  6 May 2021 17:12:23 +0100
Message-Id: <20210506161223.15984-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

ASAN reported one issue when Live Updating Xenstored:

=================================================================
==873==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffc194f53e0 at pc 0x555c6b323292 bp 0x7ffc194f5340 sp 0x7ffc194f5338
WRITE of size 1 at 0x7ffc194f53e0 thread T0
    #0 0x555c6b323291 in dump_state_node_perms xen/tools/xenstore/xenstored_core.c:2468
    #1 0x555c6b32746e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1257
    #2 0x555c6b32a702 in dump_state_special_nodes xen/tools/xenstore/xenstored_domain.c:1273
    #3 0x555c6b32ddb3 in lu_dump_state xen/tools/xenstore/xenstored_control.c:521
    #4 0x555c6b32e380 in do_lu_start xen/tools/xenstore/xenstored_control.c:660
    #5 0x555c6b31b461 in call_delayed xen/tools/xenstore/xenstored_core.c:278
    #6 0x555c6b32275e in main xen/tools/xenstore/xenstored_core.c:2357
    #7 0x7f95eecf3d09 in __libc_start_main ../csu/libc-start.c:308
    #8 0x555c6b3197e9 in _start (/usr/local/sbin/xenstored+0xc7e9)

Address 0x7ffc194f53e0 is located in stack of thread T0 at offset 80 in frame
    #0 0x555c6b32713e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1232

  This frame has 2 object(s):
    [32, 40) 'head' (line 1233)
    [64, 80) 'sn' (line 1234) <== Memory access at offset 80 overflows this variable

This is happening because the callers are passing a pointer to a variable
allocated on the stack. However, the field perms is a dynamic array, so
Xenstored will end up writing outside of the variable.

Rework the code so the permissions are written one by one to the fd.

Fixes: ed6eebf17d2c ("tools/xenstore: dump the xenstore state for live update")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_core.c   | 26 ++++++++++++++------------
 tools/xenstore/xenstored_core.h   |  3 +--
 tools/xenstore/xenstored_domain.c |  2 +-
 3 files changed, 16 insertions(+), 15 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index d54a6042a9f7..f68da12b5b23 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2447,34 +2447,36 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
 	return NULL;
 }
 
-const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
-				  const struct xs_permissions *perms,
+const char *dump_state_node_perms(FILE *fp, const struct xs_permissions *perms,
 				  unsigned int n_perms)
 {
 	unsigned int p;
 
 	for (p = 0; p < n_perms; p++) {
+		struct xs_state_node_perm sp;
+
 		switch ((int)perms[p].perms & ~XS_PERM_IGNORE) {
 		case XS_PERM_READ:
-			sn->perms[p].access = XS_STATE_NODE_PERM_READ;
+			sp.access = XS_STATE_NODE_PERM_READ;
 			break;
 		case XS_PERM_WRITE:
-			sn->perms[p].access = XS_STATE_NODE_PERM_WRITE;
+			sp.access = XS_STATE_NODE_PERM_WRITE;
 			break;
 		case XS_PERM_READ | XS_PERM_WRITE:
-			sn->perms[p].access = XS_STATE_NODE_PERM_BOTH;
+			sp.access = XS_STATE_NODE_PERM_BOTH;
 			break;
 		default:
-			sn->perms[p].access = XS_STATE_NODE_PERM_NONE;
+			sp.access = XS_STATE_NODE_PERM_NONE;
 			break;
 		}
-		sn->perms[p].flags = (perms[p].perms & XS_PERM_IGNORE)
+		sp.flags = (perms[p].perms & XS_PERM_IGNORE)
 				     ? XS_STATE_NODE_PERM_IGNORE : 0;
-		sn->perms[p].domid = perms[p].id;
-	}
+		sp.domid = perms[p].id;
 
-	if (fwrite(sn->perms, sizeof(*sn->perms), n_perms, fp) != n_perms)
-		return "Dump node permissions error";
+		if (fwrite(&sp, sizeof(sp), 1, fp) != 1)
+			return "Dump node permissions error";
+
+	}
 
 	return NULL;
 }
@@ -2519,7 +2521,7 @@ static const char *dump_state_node_tree(FILE *fp, char *path)
 	if (fwrite(&sn, sizeof(sn), 1, fp) != 1)
 		return "Dump node state error";
 
-	ret = dump_state_node_perms(fp, &sn, hdr->perms, hdr->num_perms);
+	ret = dump_state_node_perms(fp, hdr->perms, hdr->num_perms);
 	if (ret)
 		return ret;
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 1cdbc3dcb5f7..b50ea3f57d5a 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -271,8 +271,7 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
 				     const struct connection *conn,
 				     struct xs_state_connection *sc);
 const char *dump_state_nodes(FILE *fp, const void *ctx);
-const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
-				  const struct xs_permissions *perms,
+const char *dump_state_node_perms(FILE *fp, const struct xs_permissions *perms,
 				  unsigned int n_perms);
 
 void read_state_global(const void *ctx, const void *state);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 3d4d0649a243..580ed454a3f5 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -1254,7 +1254,7 @@ static const char *dump_state_special_node(FILE *fp, const char *name,
 	if (fwrite(&sn, sizeof(sn), 1, fp) != 1)
 		return "Dump special node error";
 
-	ret = dump_state_node_perms(fp, &sn, perms->p, perms->num);
+	ret = dump_state_node_perms(fp, perms->p, perms->num);
 	if (ret)
 		return ret;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Thu May 06 16:28:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 16:28:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123649.233270 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1legqf-0006uo-Qz; Thu, 06 May 2021 16:27:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123649.233270; Thu, 06 May 2021 16:27:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1legqf-0006uh-NU; Thu, 06 May 2021 16:27:37 +0000
Received: by outflank-mailman (input) for mailman id 123649;
 Thu, 06 May 2021 16:27:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1legqe-0006ub-UU
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 16:27:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1legqe-0004Mx-Rg
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 16:27:36 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1legqe-00050f-Qe
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 16:27:36 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1legqb-0004TI-BZ; Thu, 06 May 2021 17:27:33 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=T1O6LXfc5jVUnaMVIQVHoBhu8P1I5m/Y1cyYd7XUx7U=; b=SNDVgZmIgzXLs8/M2y1IfnjOF1
	kIe0eB55q9eTDYJ7z8tkQpWPIf6ZnS44oxjztUiJ9Rfl/SW48IlNEmqs5g249WhrV906GmEjP1O1P
	i5tOyit7bdb3UXQdHmL/ue6CsutTOq5FE/yc6CqP8dd9U+nxMYrdS/rdRtjABcHjUqA0=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24724.6389.95487.1868@mariner.uk.xensource.com>
Date: Thu, 6 May 2021 17:27:33 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
    George Dunlap <george.dunlap@citrix.com>
Subject: Re: [xen-4.12-testing test] 161776: regressions - FAIL
In-Reply-To: <osstest-161776-mainreport@xen.org>
References: <osstest-161776-mainreport@xen.org>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

osstest service owner writes ("[xen-4.12-testing test] 161776: regressions - FAIL"):
> flight 161776 xen-4.12-testing real [real]
> flight 161806 xen-4.12-testing real-retest [real]
> http://logs.test-lab.xenproject.org/osstest/logs/161776/
> http://logs.test-lab.xenproject.org/osstest/logs/161806/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10   fail REGR. vs. 159418

This has been failing for 48 days.

  http://logs.test-lab.xenproject.org/osstest/logs/161776/test-amd64-amd64-xl-qcow2/19.ts-guest-localmigrate.log

shows this:

  libxl: error: libxl_dom_suspend.c:377:suspend_common_wait_guest_timeout: Domain 6:guest did not suspend, timed out

as the first thing that goes wrong.  This is after the guest has
acknowledged the suspend request.

osstest tried to bisect it but was not able to reproduce the basis
pass.  That means either that we got (un)lucky with the basis pass, or
that something not version-controlled by osstest is responsible.  In
this case I think the dom0 and domU kernels, as well as the usual
pieces, are all properly version controlled by osstest.  The non-Xen
userland tools are not, but I doubt they are the cause.

So I think this is not a real regression.  In lieu of a fix, I propose
to force push 5984905b2638df87a0262d1ee91f0a6e14a86df6 to stable-4.12.

Ian.


From xen-devel-bounces@lists.xenproject.org Thu May 06 16:49:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 16:49:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123656.233282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lehBn-0000sb-Q9; Thu, 06 May 2021 16:49:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123656.233282; Thu, 06 May 2021 16:49:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lehBn-0000sU-N5; Thu, 06 May 2021 16:49:27 +0000
Received: by outflank-mailman (input) for mailman id 123656;
 Thu, 06 May 2021 16:49:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YkD6=KB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lehBm-0000sO-Gv
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 16:49:26 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9a6dfc79-5257-4f67-9a65-f856308390e4;
 Thu, 06 May 2021 16:49:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a6dfc79-5257-4f67-9a65-f856308390e4
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620319764;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=MNmEysxKi1C8vLv+oGi3bIL1QdN9QGF6+tqFhukZcLg=;
  b=GXAicGKVtJMZ/IGGCRlJWY2BbpcwwqO4mewaaJwmI2kZPVihBpaRSDq3
   GEwvuoveQUHF/SF7/deR4jXNFTOMF7EhBlgPp4V90FrvvP4iJR2uKFHAs
   /zHO16JUzlkD4Wk4RxxOG7JlX7VINwNoDafyq4yfdQUhqVscvkq/U4MKg
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: q1Kh7AZqKddfJKQ8PxL5eifappJWea3o4KrEF/4c/HLAQXHrksWJ4GGGri8SZZBC9Dzvb5TFGt
 V4DsEd5f7IELu8Mzaxs9ISHMtctGvlBcZRxmH8cEEujRg1MidZEWFOlF6BZ8gSoWU7cJcjnfZx
 8LGZubgYWqopOkAPhiuviVR1QDeaChGCqaGKeszi0WStEcStMKzCKRaIoD9LB1GTMImZPFjek4
 mPSvaciB0pcAe4cBBNmeh1RWrBJj+JEj4JOZDL4Y4nKAnB8bmgdHMwRWSRLstegPAoPfNoL8XM
 BK4=
X-SBRS: 5.1
X-MesageID: 43345083
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:Ccxmv672LRCYgpCN0gPXwDLXdLJyesId70hD6qkQc3FomwKj9/
 xG/c5rsSMc7Qx6ZJhOo7+90cW7L080lqQFhLX5X43SPzUO0VHARO1fBOPZqAEIcBeOlNK1u5
 0AT0B/YueAcGSTj6zBkXWF+wBL+qj5zEiq792usUuEVWtRGsZdB58SMHfhLqVxLjM2Y6YRJd
 6nyedsgSGvQngTZtTTPAh+YwCSz+e77a4PeHQ9dmYa1DU=
X-IronPort-AV: E=Sophos;i="5.82,278,1613451600"; 
   d="scan'208";a="43345083"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=G0J+OrFYWvLqr4wYhhEOdu9THMw1eIDR5W3U/MZbVfK8epGRwEr+a6F26QR12p3/WTBAvSxhuXusLWShtTBW0swGnbj6rFpClMR462rVEvWc70hc9SroLULstfjp4Al523NEIL7w8e4GsMnTr1IV6G3icUz7KIxL/2x9oZHxgvCKbvd/CTP/zb6dWhxbE7AJjY+NSwLR9hNsbj9mr/utBS4y/9Sw9q5//bWgjj56DmNPD49+O3cseB9CeepvdpPNsKoI7E6Uh4U3E3bvu5iWm+eRkrDwCdckgHSE/MFrt2ybNT8V73H9y3A8kSJbAitpRwJ5WgMYy6N+RLWHX7T/IQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MNmEysxKi1C8vLv+oGi3bIL1QdN9QGF6+tqFhukZcLg=;
 b=Yq1+FharPZQ/sf4TifOL+J8zOPKEzxsFVjfgwIQHx3cqrVSYL4ApVXMwe3can7U7IyU1qaOy8sV++Lnk2UZxnjA+6I2nfCvKPNkfR7qJkAP1kUvihCmAab9CAMoyJxprzaCukAGNt7x3jeXbrnKpX9eV2EDlh/R7IPB97Fubns4KqBBO5F0g54Qi/619faXbWeNqe310iWu8s9M4S5AVPMuFYtmB5A3gx05HT0sQitKMCD2pZPE1ae/nTBXV3DsYujYoQXddEjvJFlM2bRofRkLcK8uQ+/c2i0900xbLpIWEXVVgLs8o4HhIylq1v4swLGk/nAqHRzUXyMcyK00PKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=MNmEysxKi1C8vLv+oGi3bIL1QdN9QGF6+tqFhukZcLg=;
 b=aD6sZhkjEQt//wXMccJENZJk2+oe6RfEe5p/8xp4129KkicADZoHAgcIeWCAScCJlea7O6XCAqGPNczyWvwuoEQDVpawChYHBSsRR8XYBYYg4v/4h8yHUO+tn06JPFXjGmIPiylyEZdFUBop0aQ6qD4fpeFQ0vDkMPKpyIa4omQ=
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210506151631.1227-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] tools: remove unused sysconfig variable
 XENSTORED_ROOTDIR
Message-ID: <a236f079-1771-7808-bb16-97b9dc5ed733@citrix.com>
Date: Thu, 6 May 2021 17:49:14 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210506151631.1227-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0479.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a8::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 05666179-fee8-4cce-4ec0-08d910aee5b0
X-MS-TrafficTypeDiagnostic: BY5PR03MB4997:
X-Microsoft-Antispam-PRVS: <BY5PR03MB49975D9A31337F6A52936E4EBA589@BY5PR03MB4997.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1443;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: X5XFYTFlA0HMR3m4vuSyuczcVz4aHHvctzxLhag9Jf5rjnjLDprei2ynHBJ8jRaAnKFdDdtD9XRCY5JQip8fEfg2K0xThmI4GshJ6Tfi2ajFV+RRBlNePUr2noo6Kk/tXxwoqanjwAvspCpyrNKOKJggoGky6qJ2jvzR5cPUEtYtN/ZKOU1vGBc6QIHAs7LLwFkYRAE1jXyal9bUfzYgWKngDcLgDNAhkav7pSvgOY3ErjxqYUocTkcaMo4fkgzUyP0fCIAJE0aEMv+22q9qLfuqKBXDK5yhAXL0lOdFdTzs34GcpetsdrKvUOSQYB+BF5J0ulllrn2MTfH7aJShd7S+Ht5/OPvphrhKb97eB8VVWkGdZdSpQ6ldA2pC3xNLYN+hUZsqRUIAZVh6ueoNdsFtR3Vcxl6o+6CEwn5JUD92gqTQA1+axm9XAXPVYNLF+MTGIky2Uvo6Rf2xT4DJtAYpqBxIo8bYK+MdTM3h8jm40PTRWtSr8Y/SZe4ei303r5ZOVI4DqJoZf42QPSyxqx1YM+A2h4dguBZWcRNZTSZlxsJ0P4XWoR/pZZLRLW6zDW6K+9Vtnb4VJYnRbqfnLCmo+AEP2jFbOlO82fBm8g4wCJRyp8uydjQiRG16/d+BNZstU3Xz9IHR57Sxnbl+4XYHaWnj7UGZtZDRz/kSHzIYMwFLEwgfpiT0RLNJkwW/F4QFDKCpA47JzLlEIy+ZLsWTXIxJorj1pCQwLKUcSNJ7s1eXlxWydnt9ydV7gQESIa0CD0EO7n9KkJ+coepQ2g==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(396003)(39860400002)(366004)(376002)(4326008)(54906003)(16576012)(86362001)(8676002)(5660300002)(6486002)(66556008)(8936002)(316002)(6666004)(66946007)(66476007)(36756003)(956004)(38100700002)(16526019)(186003)(2616005)(26005)(31686004)(53546011)(31696002)(83380400001)(2906002)(478600001)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?ZGVSY1ZNdTNjR1dyamp0em1nSVN3dnJUVWYwZ0VYS0tpVjNoQmsvTHpVSWpk?=
 =?utf-8?B?VVlDLzYvMG5tSEdWUnNqazF4eWx6TlZMOExkN2pKNnJIQnlGbTVoUmh1RkM1?=
 =?utf-8?B?Q3RpREgzdkJKU3MrUHZ5RXpmNysyc3Z4a1pFL1RzSU5uYmdSL3o2NThvVllv?=
 =?utf-8?B?aEFUZVdxTU54RkloYkhZN2o5aHBuTHU3bmh1OTdwU1V2YXljRWtlaEd6cDFF?=
 =?utf-8?B?c2tKeE53b041WndnbDdLR0dRcHVPMEMzNmVCblY3SllWSnA0YnczK2xYN0U2?=
 =?utf-8?B?MU1FRTVZT0NZTVhiemEzb3dQbGFDb09sY0ZQbm1iWjg3U2dhZXFPd25SaVZi?=
 =?utf-8?B?SGxRL1F5amdldzNvTmx5MDVpTm85dU5hVjRrV2N5L0h4M0d2RGE1N3VoaEhK?=
 =?utf-8?B?SkJOSm5nbVRNSUYreWU3bEdrQWt5eDJyYU43N0luRmpnekZZbmRnQ2JRWTVj?=
 =?utf-8?B?WnErZ253dUYxZ2JBdThVeng3UVNGb0R1SzJWYkJFL2NPMVEwVUlEYUtqbWZ4?=
 =?utf-8?B?c0VQY2R1aGZPNU9yQ2R0dysxa2M1aTRHVjhvcUtNTERpcmF2elJsWTROWXZW?=
 =?utf-8?B?alM1NFEzdjgyd25qTFFvSGljazc4SGRxYi95T0p0MUhIOEUyNUR0NkIzd1ps?=
 =?utf-8?B?MEh3NjRuUlVlWjBhaUxzQzI2VFo0Q2hBRXNoNldydllhTVc4S0dsa3ExdmtW?=
 =?utf-8?B?clFOQTlZd0JGMWFTTUNDdzJkejEzMjZLOTk4aXU5NHJMa0Jlci9sYlY5MW1n?=
 =?utf-8?B?MmVLcEUrZjJhV0tkK0MvZkhxd2czMng5SVBqZ0Z4N2Q0aDc1QmkvZEFabEFk?=
 =?utf-8?B?MmVVZlRuTk4wdGJ5OHo1NHNyaitxeTkwT0xuaUU3cGU4RDNXWUpka3FJc3Ja?=
 =?utf-8?B?SkN1d1o1eHpVSktlcHdYb0dYdXRvUGl2WHNEY3g1UGtZYjFLYW5TK09wbWcz?=
 =?utf-8?B?eHRMclhBY1JpdG95alBhdEtETmRXQlEyNEtKZmRhMzJveWlaQ09UNGxuNExL?=
 =?utf-8?B?aTFzTFBBU3VOYThYZHVyVkRRNnZuMU9RcGpGcDl3bXlTaThxWWY3cHJRQTND?=
 =?utf-8?B?bGVjS1FoRU5FeW1BczV6bGQ2dlpHbFlxLzE5VkFlME9BVnBZOWd3STBZS2NG?=
 =?utf-8?B?VkpxS1Q0RGtQU1k5QjRjbmpCYTdicHZPM1VIK1N2MWVHRVhwaVVPckR5TjBp?=
 =?utf-8?B?cVR5YnRMV08zMVI5NmhaWmg3NnBqUXcrK0h6cTBuckc1c0EyTHByQzlhVDNH?=
 =?utf-8?B?OThzcE9sOVJzM3dzL25mWXdNUFpURnpUZXR6ZGtQTUZoMnVkLzJoWWZxaTJz?=
 =?utf-8?B?elBrZ0FwVnFmUmkrNVlnbUhoTlVWYUF1ZkdZOE1EVy8vR254dk81Wlh5clZn?=
 =?utf-8?B?TGZNK3NHYzY3b1BoY1FvOFNVdFZleDQ1RHVSSXpxc2VMQUlybzNGak9Zc0pH?=
 =?utf-8?B?dEVkTWVEWHFnaXNMRjNGU0ZwRTVOMUcyelVWUWRwWWNsbnRtRXU3bXd3TDlS?=
 =?utf-8?B?b3pDaFNwbUI0MURtaTd5SVpUMzNpQTNzT3gyNzZGOTVTZmwvOU5QUDdBa1hL?=
 =?utf-8?B?VWczYzcwTmV3b2VBcitwbnhibUxldUxvR1FSKytxdFMwZ3BROS85N2NCVFRk?=
 =?utf-8?B?Q0I4ZWh1UlBmbGU2elljSXFlbnVZeHNRbm1yZmlyZHA0TjVXZWR1VHNScldY?=
 =?utf-8?B?S1JnNFhacWNra0lFZlgxSUV6cFhjQ3o4clpDVjY4cG9rTGR3ZWRCYlhNZFps?=
 =?utf-8?Q?LqfVSdFHYNx9EaGBOGZr2xHhkvRFK1c38SmY5Md?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 05666179-fee8-4cce-4ec0-08d910aee5b0
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 06 May 2021 16:49:21.5345
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: H4ZMWGnXNmshG9jjTab8CmA/dlENm6wQLP4KlueP0e01se6N5sTwD54kuH4MrrWq0xxXbOoPFNeoANHgy3Q4rZKwCa7F38gCFn1i6gs+Zjs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB4997
X-OriginatorOrg: citrix.com

On 06/05/2021 16:16, Olaf Hering wrote:
> The sysconfig variable XENSTORED_ROOTDIR is not used anymore.
> It used to point to a directory with tdb files, which is now a tmpfs.
>
> In case the database is not in tmpfs, like on sysv and BSD systems,
> xenstored will truncate existing database files during start.
>
> Fixes commit 2ef6ace428dec4795b8b0a67fff6949e817013de
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although as we're
trying to keep on top of the changelog this time around, we probably
want the following hunk:

diff --git a/CHANGELOG.md b/CHANGELOG.md
index 0106fccec1..6896d70757 100644
--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,6 +6,10 @@ The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
 
 ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging)
 - TBD
 
+### Removed
+ - XENSTORED_ROOTDIR environment variable from configuration files and
+   initscripts, due to being unused.
+
 ## [4.15.0 UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.15.0)
 - TBD
 
 ### Added / support upgraded

~Andrew



From xen-devel-bounces@lists.xenproject.org Thu May 06 17:43:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 17:43:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123664.233294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lei1b-0006dV-Vw; Thu, 06 May 2021 17:42:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123664.233294; Thu, 06 May 2021 17:42:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lei1b-0006dO-Sy; Thu, 06 May 2021 17:42:59 +0000
Received: by outflank-mailman (input) for mailman id 123664;
 Thu, 06 May 2021 17:42:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=YkD6=KB=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lei1a-0006dH-7h
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 17:42:58 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a1d6d50-e5fe-4c22-8764-1cc99a18ffaa;
 Thu, 06 May 2021 17:42:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a1d6d50-e5fe-4c22-8764-1cc99a18ffaa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620322976;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=f9RWOyGzduxpQPt3HwpkryFrUOgR0rHocQ9KV7zROqw=;
  b=NedX9teQWs0d2uRn0Uua/lpX6YXf2S0MRSyyuuzxPuKWssuNLSm0bqKX
   J0V8drFl4BgS+I35j1pIJLp0LiLUMcbCK+y3kddGkjq0HLrRie8dOGe0B
   v8F16pX+SBBp/NYuZapKgw/sCALoQPTlI9XkS2HI7v0azAB1wsru98UoE
   s=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: c/vF+ohYXDIAvKFNcvcGEQed8/lB2ob/taLJPi9hxrwMjlj22D6+BHuWCBsUX5YhxWv9sR9ZBT
 XHy7VmV51VM5CaDZG8mNELOZG+JkScuMsku4McVINqQX7NOuY2O2d55W75NsHtI/GqBV/fbdpy
 VQvLkmMoentzkbbrndIOhkcKe6TOjeFBhJTlHeTatzP6+paPjwSNp+fLUPBozAC1WMjFMPlavO
 m7Xfpjf7nPlkcUU07kFIgj51KvPiEv5uhP9/A/TRJI3Zk+kjLySla0ljQjKgoTqfA9RTYAruPw
 LuU=
X-SBRS: 5.1
X-MesageID: 43626351
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:IfvvwKmIUO+HuxcYuIRjurwoySTpDfMEimdD5ihNYBxZY6Wkfp
 +V8cjzhCWftN9OYhodcIi7SdG9qADnhOVICO4qTP2ftWjdySCVxeRZgbcKrAeQfxEWmtQ96U
 4kSdkGNDSSNyk2sS+Z2njeLz9I+rDun86VbKXlvhFQpGpRGsJdBnJCe2Om+zpNNWt77PQCdK
 a0145inX6NaH4XZsO0Cj0uRO7YveDGk5rgfFovGwMnwBPmt0Lm1JfKVzyjmjsOWTJGxrkvtU
 LflRbi26mlu/anjjfBym7o6YhMkteJ8KoMOCXMsLlVFtzfsHfqWG1TYczBgNnzmpDr1L8eqq
 iNn/7nBbU215qeRBDznfKn4Xib7N9n0Q6e9bbfuwqunSWxfkNHN+NRwY1eaRfX8EwmoZV117
 9KxXuQs95NAQrHhzmV3am/a/hGrDvBnZMZq59ls5Wfa/psVFZbl/1XwKqUKuZ0IMve0vFuLA
 BDNrCs2B9mSyLpU5mChBgQ/DWFZAVCIv6peDl8hvCo
X-IronPort-AV: E=Sophos;i="5.82,278,1613451600"; 
   d="scan'208";a="43626351"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n2MnDBwxG9EMwTaneE4sNqUq4nfakw8hyNN50Z29JKQ7kLp5ArD7dVpQ1uvelBkj8KuDM7IQoBbgyGKbIDP9mktpmiE9ftEyIY0wxuxCKsK0UmOM7rsi/j3pFOiA9Ju+bNRHbjdeO67Hsgc5TQoK0VNGKIqqX1wxCMupsnxPdoRUfhHV/ZqvgJFbU0zBTYvOLKibxKMOGSrfrenyrlg9rJd2YEv9QG4MFPReRQBsz/sXGOXbKdLD1IDHkBXiDMfS2rovkYVukTESe0SG/IzDDKZhPmR2m5wGg+N+uFXpQ3ZqGxec1pC1tCcOHc9FgGLLeElKl2avZLDRSMk1TvGkEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f9RWOyGzduxpQPt3HwpkryFrUOgR0rHocQ9KV7zROqw=;
 b=lT22mFLcTwuIpNFd1nU8+DwrCWLAxoTyfkkyvjqdx90NiyEFbJznUtxz3QDNRwF/Xg6a0KoBZedEfNd3J1Kf8w25Dask1hSD9Jvq0/HiQhHfwAwYOnOu2JuK8uQvoHDJ4tibsF0XD/dVD7mg+7QVjn59TkYKjjUVyac9lZr9Y5v/7rAgqpemZgYx/GrRXuTVeffSjkeSMkXT73fsE4qvI+wQCGAZWTiNcHbRKOw6fZr7Cpm3IEtMLbKYMW+MKrw8/mNPvKr3pbO9NkKJaDBVlF3TtjqDCHxUrpaIvAJZTqEQ6zwq1AD+enRhy4HUdflwmpPNjrMGUv3cd/zetx4hQA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=f9RWOyGzduxpQPt3HwpkryFrUOgR0rHocQ9KV7zROqw=;
 b=kiYAKbDWNJxAM1WmwcvgDVY9iOadZSjn6pYnCrM0xm6C4sK/kuayXv272ftyqupr3zM2cWGqJx+CCHM3ug8Gcq6HEeFrdBpalfuTlTb9j2SVjhms2f/1hqY9Tg2+Evn2SJfqx2LrAQJywrK4Na5HqQGIa1N+iQ4arWEO+QOmU94=
Subject: Re: [PATCH v2] tools: fix incorrect suggestions for XENCONSOLED_TRACE
 on BSD
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210506151701.1343-1-olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <77615b8f-6565-beea-4f23-8a2d81619bec@citrix.com>
Date: Thu, 6 May 2021 18:42:45 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210506151701.1343-1-olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB

On 06/05/2021 16:17, Olaf Hering wrote:
> --log does not take a file, it specifies what is supposed to be logged.
>
> Also separate the XENSTORED and XENCONSOLED variables by a newline.
>
> Signed-off-by: Olaf Hering <olaf@aepfle.de>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Thu May 06 17:55:22 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-xl-qemuu-ws16-amd64
Message-Id: <E1leiDR-0007vF-QN@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 17:55:13 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-xl-qemuu-ws16-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8a40754bca14df63c6d2ffe473b68a270dc50679
  Bug not present: dc04d25e2f3f7e26f7f97b860992076b5f04afdb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161802/


  (Revision log too long, omitted.)


*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1b507e55f8199eaad99744613823f6929e4d57c6
  Bug not present: 4083904bc9fe5da580f7ca397b1e828fbc322732
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161810/


  commit 1b507e55f8199eaad99744613823f6929e4d57c6
  Merge: 4083904bc9 8d17adf34f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Thu Mar 18 19:00:49 2021 +0000
  
      Merge remote-tracking branch 'remotes/berrange-gitlab/tags/dep-many-pull-request' into staging
      
      Remove many old deprecated features
      
      The following features have been deprecated for well over the two
      release cycles we promise
      
        ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
        ``-vnc acl`` (since 4.0.0)
        ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
        ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
        ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
        ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
        ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
        ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
        ``query-cpus`` (since 2.12.0)
        ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
        ``query-events`` (since 4.0)
        chardev client socket with ``wait`` option (since 4.0)
        ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
        ``ide-drive`` (since 4.2)
        ``scsi-disk`` (since 4.2)
      
      # gpg: Signature made Thu 18 Mar 2021 09:23:39 GMT
      # gpg:                using RSA key DAF3A6FDB26B62912D0E8E3FBE86EBB415104FDF
      # gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>" [full]
      # gpg:                 aka "Daniel P. Berrange <berrange@redhat.com>" [full]
      # Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF
      
      * remotes/berrange-gitlab/tags/dep-many-pull-request:
        block: remove support for using "file" driver with block/char devices
        block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
        block: remove dirty bitmaps 'status' field
        block: remove 'encryption_key_missing' flag from QAPI
        hw/scsi: remove 'scsi-disk' device
        hw/ide: remove 'ide-drive' device
        chardev: reject use of 'wait' flag for socket client chardevs
        machine: remove 'arch' field from 'query-cpus-fast' QMP command
        machine: remove 'query-cpus' QMP command
        migrate: remove QMP/HMP commands for speed, downtime and cache size
        monitor: remove 'query-events' QMP command
        monitor: raise error when 'pretty' option is used with HMP
        ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
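  [Editorial illustration, not part of the original report: with this change a
  raw block or character device must name the dedicated driver explicitly. The
  device paths and node names below are hypothetical.]

  ```shell
  # Previously accepted (now rejected) when /dev/sdb is a block device:
  #   -blockdev driver=file,filename=/dev/sdb,node-name=disk0
  # The dedicated drivers must now be named explicitly:
  qemu-system-x86_64 \
      -blockdev driver=host_device,filename=/dev/sdb,node-name=disk0 \
      -blockdev driver=host_cdrom,filename=/dev/sr0,node-name=cd0
  ```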
  
  commit e67d8e2928200e24ecb47c7be3ea8270077f2996
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:22:36 2021 +0000
  
      block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
      
      The same data is available in the 'BlockDeviceInfo' struct.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 81cbfd5088690c53541ffd0d74851c8ab055a829
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:19:54 2021 +0000
  
      block: remove dirty bitmaps 'status' field
      
      The same information is available via the 'recording' and 'busy' fields.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit ad1324e044240ae9fcf67e4c215481b7a35591b9
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:53:17 2021 +0000
  
      block: remove 'encryption_key_missing' flag from QAPI
      
      This has been hardcoded to "false" since 2.10.0, since secrets required
      to unlock block devices are now always provided up front instead of using
      interactive prompts.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 879be3af49132d232602e0ca783ec9b4112530fa
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/scsi: remove 'scsi-disk' device
      
      The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit b50101833987b47e0740f1621de48637c468c3d1
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/ide: remove 'ide-drive' device
      
      The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 24e13a4dc1eb1630eceffc7ab334145d902e763d
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:47:17 2021 +0000
  
      chardev: reject use of 'wait' flag for socket client chardevs
      
      This only makes sense conceptually when used with listener chardevs.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
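  [Editorial illustration, not part of the original report: a sketch of the
  two chardev modes; the ids and socket paths are hypothetical.]

  ```shell
  # 'wait' only applies to a listening socket (server=on), where it delays
  # startup until a client connects:
  #   -chardev socket,id=mon0,path=/tmp/qmp.sock,server=on,wait=on
  # A client chardev connects outward, so 'wait' is meaningless there and
  # is now rejected:
  #   -chardev socket,id=mon0,path=/tmp/qmp.sock,wait=off
  ```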
  
  commit 445a5b4087567bf4d4ce76d394adf78d9d5c88a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:29:31 2021 +0000
  
      machine: remove 'arch' field from 'query-cpus-fast' QMP command
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
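  [Editorial illustration, not part of the original report: a minimal sketch
  of the QMP command shapes involved; the field renames noted in the comments
  are examples based on the documented 'query-cpus-fast' schema.]

  ```python
  import json

  # Removed command (could interrupt vCPU execution to collect state):
  old_cmd = {"execute": "query-cpus"}

  # Side-effect-free replacement; note renamed result fields, e.g. the old
  # "CPU" index is reported as "cpu-index" and "qom_path" as "qom-path":
  new_cmd = {"execute": "query-cpus-fast"}

  # A QMP client sends the command as a single JSON object on the monitor
  # socket:
  wire = json.dumps(new_cmd)
  print(wire)
  ```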
  
  commit cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:40:12 2021 +0000
  
      migrate: remove QMP/HMP commands for speed, downtime and cache size
      
      The generic 'migrate_set_parameters' command handles all types of parameter.
      
      Only the QMP commands were documented in the deprecations page, but the
      rationale for deprecating applies equally to HMP, and the replacements
      exist. Furthermore the HMP commands are just shims to the QMP commands,
      so removing the latter breaks the former unless they get re-implemented.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
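  [Editorial illustration, not part of the original report: a sketch of how
  the generic replacement covers the removed commands; the example values are
  hypothetical.]

  ```python
  import json

  # Each removed command set exactly one migration knob:
  #   {"execute": "migrate_set_speed",    "arguments": {"value": 1048576}}
  #   {"execute": "migrate_set_downtime", "arguments": {"value": 0.5}}
  # The generic replacement covers all of them (and the cache size) at once:
  new_cmd = {
      "execute": "migrate-set-parameters",
      "arguments": {
          "max-bandwidth": 1048576,       # bytes/sec, replaces migrate_set_speed
          "downtime-limit": 500,          # milliseconds, replaces migrate_set_downtime
          "xbzrle-cache-size": 67108864,  # bytes, replaces migrate-set-cache-size
      },
  }
  print(json.dumps(new_cmd))
  ```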
  
  commit 8becb36063fb14df1e3ae4916215667e2cb65fa2
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:35:15 2021 +0000
  
      monitor: remove 'query-events' QMP command
      
      The code comment suggests removing the QAPIEvent_(str|lookup) symbols too;
      however, these are both auto-generated as standard for any enum in
      QAPI. As such, they'll exist whether we use them or not.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 283d845c9164f57f5dba020a4783bb290493802f
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:56:13 2021 +0000
  
      monitor: raise error when 'pretty' option is used with HMP
      
      This is only semantically useful for QMP.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 5994dcb8d8525ac044a31913c6bceeee788ec700
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:47:31 2021 +0000
  
      ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      The VNC ACL concept has been replaced by the pluggable "authz" framework
      which does not use monitor commands.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ws16-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ws16-amd64.guest-saverestore --summary-out=tmp/161815.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-xl-qemuu-ws16-amd64 guest-saverestore
Searching for failure / basis pass:
 161780 fail [host=elbling1] / 160125 ok.
Failure / basis pass flights: 161780 / 160125
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3e13d8e34b53d8f9a3421a816ccfbdc5fa874e98 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#030ba3097a6e7d08b99f8a9d19a322f61409c1f6-f297b7f20010711e36e981fe45645302cc9d109d git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#b12498fc575f2ad30f09fe78badc7fef526e2d76-3e13d8e34b53d8f9a3421a816ccfbdc5fa874e98 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee git://xenbits.xen.org/xen.git#21657ad4f01a634beac570c64c0691e51b9cf366-d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
Loaded 63552 nodes in revision graph
Searching for test results:
 160125 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 15106f7dc3290ff3254611f265849a314a93eb0e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161767 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161777 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 15106f7dc3290ff3254611f265849a314a93eb0e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161779 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161781 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161782 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 17422da082ffcecb38bd1f2e2de6d56a61e8cd9c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161784 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161785 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0f418a207696b37f05d38f978c8873ee0a4f9815 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161788 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161790 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161791 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 87a80dc4f2f5e51894db143685a5e39c8ce6f651 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161792 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161795 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161797 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161798 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161801 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161802 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161803 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161805 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161808 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161780 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3e13d8e34b53d8f9a3421a816ccfbdc5fa874e98 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161810 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161813 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161815 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3e13d8e34b53d8f9a3421a816ccfbdc5fa874e98 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
Searching for interesting versions
 Result found: flight 160125 (pass), for basis pass
 Result found: flight 161780 (fail), for basis failure
 Repro found: flight 161813 (pass), for basis pass
 Repro found: flight 161815 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
No revisions left to test, checking graph state.
 Result found: flight 161784 (pass), for last pass
 Result found: flight 161790 (fail), for first failure
 Repro found: flight 161797 (pass), for last pass
 Repro found: flight 161798 (fail), for first failure
 Repro found: flight 161801 (pass), for last pass
 Repro found: flight 161802 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8a40754bca14df63c6d2ffe473b68a270dc50679
  Bug not present: dc04d25e2f3f7e26f7f97b860992076b5f04afdb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161802/


  (Revision log too long, omitted.)

 Result found: flight 161792 (pass), for last pass
 Result found: flight 161795 (fail), for first failure
 Repro found: flight 161803 (pass), for last pass
 Repro found: flight 161805 (fail), for first failure
 Repro found: flight 161808 (pass), for last pass
 Repro found: flight 161810 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1b507e55f8199eaad99744613823f6929e4d57c6
  Bug not present: 4083904bc9fe5da580f7ca397b1e828fbc322732
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161810/


  commit 1b507e55f8199eaad99744613823f6929e4d57c6
  Merge: 4083904bc9 8d17adf34f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Thu Mar 18 19:00:49 2021 +0000
  
      Merge remote-tracking branch 'remotes/berrange-gitlab/tags/dep-many-pull-request' into staging
      
      Remove many old deprecated features
      
      The following features have been deprecated for well over the two
      release cycles we promise
      
        ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
        ``-vnc acl`` (since 4.0.0)
        ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
        ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
        ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
        ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
        ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
        ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
        ``query-cpus`` (since 2.12.0)
        ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
        ``query-events`` (since 4.0)
        chardev client socket with ``wait`` option (since 4.0)
        ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
        ``ide-drive`` (since 4.2)
        ``scsi-disk`` (since 4.2)
      
      # gpg: Signature made Thu 18 Mar 2021 09:23:39 GMT
      # gpg:                using RSA key DAF3A6FDB26B62912D0E8E3FBE86EBB415104FDF
      # gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>" [full]
      # gpg:                 aka "Daniel P. Berrange <berrange@redhat.com>" [full]
      # Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF
      
      * remotes/berrange-gitlab/tags/dep-many-pull-request:
        block: remove support for using "file" driver with block/char devices
        block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
        block: remove dirty bitmaps 'status' field
        block: remove 'encryption_key_missing' flag from QAPI
        hw/scsi: remove 'scsi-disk' device
        hw/ide: remove 'ide-drive' device
        chardev: reject use of 'wait' flag for socket client chardevs
        machine: remove 'arch' field from 'query-cpus-fast' QMP command
        machine: remove 'query-cpus' QMP command
        migrate: remove QMP/HMP commands for speed, downtime and cache size
        monitor: remove 'query-events' QMP command
        monitor: raise error when 'pretty' option is used with HMP
        ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit e67d8e2928200e24ecb47c7be3ea8270077f2996
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:22:36 2021 +0000
  
      block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
      
      The same data is available in the 'BlockDeviceInfo' struct.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 81cbfd5088690c53541ffd0d74851c8ab055a829
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:19:54 2021 +0000
  
      block: remove dirty bitmaps 'status' field
      
      The same information is available via the 'recording' and 'busy' fields.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit ad1324e044240ae9fcf67e4c215481b7a35591b9
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:53:17 2021 +0000
  
      block: remove 'encryption_key_missing' flag from QAPI
      
      This has been hardcoded to "false" since 2.10.0, since secrets required
      to unlock block devices are now always provided up front instead of using
      interactive prompts.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 879be3af49132d232602e0ca783ec9b4112530fa
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/scsi: remove 'scsi-disk' device
      
      The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit b50101833987b47e0740f1621de48637c468c3d1
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/ide: remove 'ide-drive' device
      
      The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 24e13a4dc1eb1630eceffc7ab334145d902e763d
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:47:17 2021 +0000
  
      chardev: reject use of 'wait' flag for socket client chardevs
      
      This only makes sense conceptually when used with listener chardevs.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 445a5b4087567bf4d4ce76d394adf78d9d5c88a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:29:31 2021 +0000
  
      machine: remove 'arch' field from 'query-cpus-fast' QMP command
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
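  The swap described above is mechanical at the QMP wire level. A minimal
  sketch (the helper name is illustrative; only the two command names come
  from the commit) shows that on the request side just the name changes,
  while the reply field names differ (e.g. 'CPU' becomes 'cpu-index'):

```python
import json

def qmp_command(name, **arguments):
    """Serialize a QMP command to its JSON wire form."""
    cmd = {"execute": name}
    if arguments:
        cmd["arguments"] = arguments
    return json.dumps(cmd)

# 'query-cpus' (removed here) and 'query-cpus-fast' (its replacement)
# both take no arguments; on the wire only the command name differs.
old_wire = qmp_command("query-cpus")
new_wire = qmp_command("query-cpus-fast")
```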
  
  commit cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:40:12 2021 +0000
  
      migrate: remove QMP/HMP commands for speed, downtime and cache size
      
      The generic 'migrate_set_parameters' command handles all types of
      parameters.
      
      Only the QMP commands were documented in the deprecations page, but the
      rationale for deprecating applies equally to HMP, and the replacements
      exist. Furthermore the HMP commands are just shims to the QMP commands,
      so removing the latter breaks the former unless they get re-implemented.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
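  From a client's side, the consolidation above can be sketched as one
  generic request (the values below are made up; the parameter names are
  the 'migrate-set-parameters' equivalents of the removed commands):

```python
import json

# One generic command covers what migrate_set_speed, migrate_set_downtime
# and migrate-set-cache-size used to do individually.
request = {
    "execute": "migrate-set-parameters",
    "arguments": {
        "max-bandwidth": 32 * 1024 * 1024,      # bytes/sec (was migrate_set_speed)
        "downtime-limit": 300,                  # milliseconds (was migrate_set_downtime)
        "xbzrle-cache-size": 64 * 1024 * 1024,  # bytes (was migrate-set-cache-size)
    },
}
wire = json.dumps(request)
```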
  
  commit 8becb36063fb14df1e3ae4916215667e2cb65fa2
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:35:15 2021 +0000
  
      monitor: remove 'query-events' QMP command
      
      The code comment suggests removing QAPIEvent_(str|lookup) symbols too,
      however, these are both auto-generated as standard for any enum in
      QAPI. As such they'll exist whether we use them or not.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 283d845c9164f57f5dba020a4783bb290493802f
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:56:13 2021 +0000
  
      monitor: raise error when 'pretty' option is used with HMP
      
      This is only semantically useful for QMP.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 5994dcb8d8525ac044a31913c6bceeee788ec700
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:47:31 2021 +0000
  
      ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      The VNC ACL concept has been replaced by the pluggable "authz" framework
      which does not use monitor commands.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.720978 to fit
pnmtopng: 203 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-xl-qemuu-ws16-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
161815: tolerable FAIL

flight 161815 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/161815/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail baseline untested


jobs:
 build-amd64                                                  pass    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu May 06 18:18:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 18:18:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123678.233324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leia6-0002Dg-9L; Thu, 06 May 2021 18:18:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123678.233324; Thu, 06 May 2021 18:18:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leia6-0002DZ-51; Thu, 06 May 2021 18:18:38 +0000
Received: by outflank-mailman (input) for mailman id 123678;
 Thu, 06 May 2021 18:18:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leia5-0002DP-Dq; Thu, 06 May 2021 18:18:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leia5-0006OM-2t; Thu, 06 May 2021 18:18:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leia4-0002CY-Mj; Thu, 06 May 2021 18:18:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leia4-0007in-ME; Thu, 06 May 2021 18:18:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UhH+SSa0UCezwQo2lELDHwiYJUI8Ktcr2PDRwkoi11c=; b=eTz2UIv6EP/0DXY3kY1iTAm/fq
	Q9EqiHA56R6CmKXAB7Px84k4cnP3SZDVi/7Pa9Vq2jAjgzeI6T6JRkVOz1D+Z/3sOWv+gqmPo7SHG
	k+zDllW9/W0D1eyj5f8kr1u1IMP/omsgo5t41mMaxkr/IF3PLSDezs9e5gyAKQWmkx2s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161799-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161799: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8404c9fbc84b741f66cff7d4934a25dd2c344452
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 18:18:36 +0000

flight 161799 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161799/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate       fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                8404c9fbc84b741f66cff7d4934a25dd2c344452
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  278 days
Failing since        152366  2020-08-01 20:49:34 Z  277 days  463 attempts
Testing same since   161799  2021-05-05 21:30:43 Z    0 days    1 attempts

------------------------------------------------------------
5980 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1621988 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 06 20:27:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 20:27:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123705.233363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lekaF-0006ik-1N; Thu, 06 May 2021 20:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123705.233363; Thu, 06 May 2021 20:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lekaE-0006id-U5; Thu, 06 May 2021 20:26:54 +0000
Received: by outflank-mailman (input) for mailman id 123705;
 Thu, 06 May 2021 20:26:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aTiY=KB=gmail.com=dmitry.torokhov@srs-us1.protection.inumbo.net>)
 id 1lekaD-0006iX-UG
 for xen-devel@lists.xenproject.org; Thu, 06 May 2021 20:26:54 +0000
Received: from mail-pg1-x535.google.com (unknown [2607:f8b0:4864:20::535])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 63008893-2cb0-4786-a232-5334353f20ac;
 Thu, 06 May 2021 20:26:53 +0000 (UTC)
Received: by mail-pg1-x535.google.com with SMTP id p12so5518648pgj.10
 for <xen-devel@lists.xenproject.org>; Thu, 06 May 2021 13:26:53 -0700 (PDT)
Received: from google.com ([2620:15c:202:201:5228:3770:a497:742])
 by smtp.gmail.com with ESMTPSA id b2sm10735266pji.28.2021.05.06.13.26.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 06 May 2021 13:26:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 63008893-2cb0-4786-a232-5334353f20ac
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=6PS4r5ihm0TuTFmfAJ2NxYhkHmZSjT5J6bjZKfQ3GX4=;
        b=lJX8V3dsO+IedLBuCkDF8k0tvFt6ddIGfxv2EHMfH8XYiqjCm/tMVd46N+tDLqBBSO
         uVwM8SLavveoTp/5so3jz4RDXzHuvGavDxV00ZmfzespKFCvm7A8QAIRhm2luq+CX5YD
         kR1HyIpgjjKOLXFZmGd0w5iqP8MIdvhkwIGKxjYYOqrvgrYMae/AQ20xUC1mQdtH/whN
         17rHE5ybJVP66G87wFYxfYtX2/5PjYKbGbENgp5siKPyBNOFZloGfur4kjXXZpf0OHPl
         MZBgcfdxgf1SsT4LmajECuDWxLLhz/c2DGind/Orbv0so7nK3vSrriqZtGCvizuvY91N
         6rzQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=6PS4r5ihm0TuTFmfAJ2NxYhkHmZSjT5J6bjZKfQ3GX4=;
        b=kENfDmgjeHVcP8LdZG1mn6rIar9TJRAAz8fE7Y++hnEzFe64PbV+abLDPpFYziKHvT
         ao2yPOVP0qi4DZE4xkm6XbS6zcG7KBqmFOduMZe+1630ZZ9Jl3vBK2fi7B2uTfPfS9xJ
         2EFXM+dVG+Y5/OWvzx93WWelplSLd4MMpKmj5e4/YQe8+PnSvrq1UJEVIw7Op/iB+tX9
         8UXNAl6tgKerGHUdg294OyOzeUfrMDUF6DfV37Jf8XDY0HXlwmsZOUqsi+DBpVJ70EOp
         9H0h/YFhGWHoZBTt4KPxyIUNngh1zya8mCd29fWfrPfm3C0pEw853lvo4z7lP6zW1qgV
         tyIQ==
X-Gm-Message-State: AOAM531FM4jurKoc9keupwjc8vvW2wCuZhPiH2n6SHI7L26uZN79Zxma
	WG0NCzRJwK/xwMsngAAXRM8=
X-Google-Smtp-Source: ABdhPJwyOd9a/hNWBoTSoQpiUH3+3o7sEIxqT9F2v1nZL3DFH01uXnAbmQ64QQV0BapasH5VDSAa2w==
X-Received: by 2002:a62:1517:0:b029:28e:a871:ffb2 with SMTP id 23-20020a6215170000b029028ea871ffb2mr6632315pfv.19.1620332812328;
        Thu, 06 May 2021 13:26:52 -0700 (PDT)
Date: Thu, 6 May 2021 13:26:48 -0700
From: Dmitry Torokhov <dmitry.torokhov@gmail.com>
To: Phillip Susi <phill@thesusis.net>
Cc: xen-devel@lists.xenproject.org, linux-input@vger.kernel.org
Subject: Re: [PATCH] Xen Keyboard: don't advertise every key known to man
Message-ID: <YJRRCEJrQOwVymdP@google.com>
References: <87o8dw52jc.fsf@vps.thesusis.net>
 <20210506143654.17924-1-phill@thesusis.net>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506143654.17924-1-phill@thesusis.net>

On Thu, May 06, 2021 at 02:36:54PM +0000, Phillip Susi wrote:
> For reasons I still don't understand, the input subsystem allows
> input devices to advertise what keys they have, and adds this
> information to the modalias for the device.  The Xen Virtual
> Keyboard was advertising every known key, which resulted in a
> modalias string over 2 KiB in length, which caused uevents to
> fail with -ENOMEM (when trying to add the modalias to the env).
> Remove this advertisement.
> ---
>  drivers/input/misc/xen-kbdfront.c | 5 -----
>  1 file changed, 5 deletions(-)
> 
> diff --git a/drivers/input/misc/xen-kbdfront.c b/drivers/input/misc/xen-kbdfront.c
> index 4ff5cd2a6d8d..d4bd558afda9 100644
> --- a/drivers/input/misc/xen-kbdfront.c
> +++ b/drivers/input/misc/xen-kbdfront.c
> @@ -254,11 +254,6 @@ static int xenkbd_probe(struct xenbus_device *dev,
>  		kbd->id.product = 0xffff;
>  
>  		__set_bit(EV_KEY, kbd->evbit);
> -		for (i = KEY_ESC; i < KEY_UNKNOWN; i++)
> -			__set_bit(i, kbd->keybit);
> -		for (i = KEY_OK; i < KEY_MAX; i++)
> -			__set_bit(i, kbd->keybit);
> -

By doing this, you are stopping delivery of all key events from this
device.

Thanks.

-- 
Dmitry


From xen-devel-bounces@lists.xenproject.org Thu May 06 20:46:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 20:46:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123710.233375 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lektW-0000XN-Ly; Thu, 06 May 2021 20:46:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123710.233375; Thu, 06 May 2021 20:46:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lektW-0000XG-Ii; Thu, 06 May 2021 20:46:50 +0000
Received: by outflank-mailman (input) for mailman id 123710;
 Thu, 06 May 2021 20:46:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lektV-0000X6-71; Thu, 06 May 2021 20:46:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lektU-0000Sq-Tt; Thu, 06 May 2021 20:46:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lektU-0000cB-Gh; Thu, 06 May 2021 20:46:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lektU-0001K1-GB; Thu, 06 May 2021 20:46:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=64BtrviUNV5WRfW/a4jHoPWXYaiRl4Scwq4AT4nBt0U=; b=V0VDnq5nXgLSS5lfHu6WkfbvpG
	jNlgFqzh3Gp3AhvmlHzLUA0kea6z31kSNXtPeKIs/Ah5EiaLy+/6yYq8b9GAOlIwzMkL7WCQxBqG9
	J9+5N7GTIymdPQaNRTbdymBKY86mzWI9yIELtzi9ipTVE41T5DMes7g+KeVzmQgj3Eas=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161814-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 161814: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=880092854e5473558af77289bb7c01a9fa9dda5a
X-Osstest-Versions-That:
    xtf=2cf3168661355565e2b80395ef16c5ee3cfc5f55
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 20:46:48 +0000

flight 161814 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161814/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  880092854e5473558af77289bb7c01a9fa9dda5a
baseline version:
 xtf                  2cf3168661355565e2b80395ef16c5ee3cfc5f55

Last test of basis   161375  2021-04-22 11:11:15 Z   14 days
Testing same since   161814  2021-05-06 14:40:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michal Orzel <michal.orzel@arm.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xtf.git
   2cf3168..8800928  880092854e5473558af77289bb7c01a9fa9dda5a -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 06 21:41:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 21:41:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123720.233390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leljt-0006jx-O9; Thu, 06 May 2021 21:40:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123720.233390; Thu, 06 May 2021 21:40:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leljt-0006jq-Kv; Thu, 06 May 2021 21:40:57 +0000
Received: by outflank-mailman (input) for mailman id 123720;
 Thu, 06 May 2021 21:40:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leljr-0006jg-S7; Thu, 06 May 2021 21:40:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leljr-0001ZC-KA; Thu, 06 May 2021 21:40:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leljr-0003iC-CQ; Thu, 06 May 2021 21:40:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leljr-0001OA-Bx; Thu, 06 May 2021 21:40:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IaspGgB/lYcC/JT6PsdG1PtsyvehEO4Gs1qfazOdsbI=; b=mKQk+x6CaI4jSx2fV8yCGGHKjM
	iyEY7SoWkGtqUbuy22GWvj2Ze52LDxvRfcM73481Qrxxi+5PlSbS4MErZbZJOxxNOeZdNRN4vqcEY
	tns/GmC1fUU6TeV71fxJcC34aajnLA28fXm2Ttkwvr3DscF2uaJ79gQZ0PETlMpp9z8Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161807-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161807: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5984905b2638df87a0262d1ee91f0a6e14a86df6
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 21:40:55 +0000

flight 161807 xen-4.12-testing real [real]
flight 161819 xen-4.12-testing real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161807/
http://logs.test-lab.xenproject.org/osstest/logs/161819/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10   fail REGR. vs. 159418

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159418
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159418
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5984905b2638df87a0262d1ee91f0a6e14a86df6
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   79 days
Failing since        160128  2021-03-18 14:36:18 Z   49 days   67 attempts
Testing same since   161776  2021-05-04 19:06:01 Z    2 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 324 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 06 23:54:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 06 May 2021 23:54:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123727.233410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lenoq-0001OD-5S; Thu, 06 May 2021 23:54:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123727.233410; Thu, 06 May 2021 23:54:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lenoq-0001O6-2I; Thu, 06 May 2021 23:54:12 +0000
Received: by outflank-mailman (input) for mailman id 123727;
 Thu, 06 May 2021 23:54:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lenoo-0001Nw-UO; Thu, 06 May 2021 23:54:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lenoo-0003ob-MC; Thu, 06 May 2021 23:54:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lenoo-0001jY-AI; Thu, 06 May 2021 23:54:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lenoo-0001He-9k; Thu, 06 May 2021 23:54:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JWiU5/Y0w5FAAuJvdg1AYjsSZhsW8f6YN/LfeBb/tLk=; b=PdaAO08x5edZWsN3g/ahbOfkst
	6ZLjZonvxR57MjrH2aM0Fb7eWs/A6ImmvMIOIyu4p2aHkj1wefKiWPEXQ/OinxVHgDHBTWpg4wsqv
	PfsGVhVZ73tCvZ53P1EapgOIfKoy1ndE4l6pD50zA2gOOxQ+jmN5IifyssA6NH9xLcBg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161811-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161811: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
X-Osstest-Versions-That:
    xen=8cccd6438e86112ab383e41b433b5a7e73be9621
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 06 May 2021 23:54:10 +0000

flight 161811 xen-unstable real [real]
flight 161822 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161811/
http://logs.test-lab.xenproject.org/osstest/logs/161822/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1 18 guest-start/debian.repeat fail pass in 161822-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161778
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161778
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161778
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161778
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161778
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161778
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161778
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161778
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161778
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161778
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161778
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
baseline version:
 xen                  8cccd6438e86112ab383e41b433b5a7e73be9621

Last test of basis   161778  2021-05-04 23:09:09 Z    2 days
Testing same since   161811  2021-05-06 12:30:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8cccd6438e..09fc903c5a  09fc903c5ac042e2e1eb54e58ea7f207ed12ee16 -> master


From xen-devel-bounces@lists.xenproject.org Fri May 07 01:40:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 01:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123743.233448 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lepTC-0000Me-DR; Fri, 07 May 2021 01:39:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123743.233448; Fri, 07 May 2021 01:39:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lepTC-0000MX-9m; Fri, 07 May 2021 01:39:58 +0000
Received: by outflank-mailman (input) for mailman id 123743;
 Fri, 07 May 2021 01:39:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=awOp=KC=epam.com=prvs=5761306b0d=volodymyr_babchuk@srs-us1.protection.inumbo.net>)
 id 1lepT9-0000MR-Ul
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 01:39:56 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 917d60dd-7417-4261-9b81-5c6ec8ea50e7;
 Fri, 07 May 2021 01:39:54 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 1471VOM7029814; Fri, 7 May 2021 01:39:51 GMT
Received: from eur04-db3-obe.outbound.protection.outlook.com
 (mail-db3eur04lp2056.outbound.protection.outlook.com [104.47.12.56])
 by mx0a-0039f301.pphosted.com with ESMTP id 38csvw05by-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 07 May 2021 01:39:51 +0000
Received: from AM0PR03MB4372.eurprd03.prod.outlook.com (2603:10a6:208:cd::14)
 by AM9PR03MB7027.eurprd03.prod.outlook.com (2603:10a6:20b:286::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25; Fri, 7 May
 2021 01:39:47 +0000
Received: from AM0PR03MB4372.eurprd03.prod.outlook.com
 ([fe80::e123:973d:6af2:97c]) by AM0PR03MB4372.eurprd03.prod.outlook.com
 ([fe80::e123:973d:6af2:97c%5]) with mapi id 15.20.4108.026; Fri, 7 May 2021
 01:39:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 917d60dd-7417-4261-9b81-5c6ec8ea50e7
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nfAAxvj2k9zxJrIMqKh5MLLAw7C1npJ3BLFG2zAX4jAUi5kVxG+LWDTWY6aSP259DUkOBkxJoru9fN0RhdRtIfYrEFfm33zzTAeAoOsm7dBfl3d7C4LwvuilRHTqeIYSH1O7/Nit/VWUTOIo3HjTs+7WyzVNLFsMawBjSw9u5DolezzTr+TIAeEX0F9AopA8r+SugSxRpmWesbIIS6Jo7FgdB61DTP5XsliW39VdzIKFSdDBomdaoVJ+JCMOqf8sqUXAxSWVwdne7YQZ0SeJPcZ7z+mzi4K5kr/34u8kwcxAAhet2mtOJvWQYNwOJqLxnjEQc2b/EN3mj8CkHNjARQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7wuDkXPIYyz+us8xX2jgYYjsbG6TmWiFvdgkn5m+tMs=;
 b=AfVNo3NLNuBUjsHMlsUhI7BGqX3f4cq3HLjJS59Gwp9k/0VF0uonaI/NT6hUkZhyCPb5qOrFC/y3H3c1TCFwrDIhKa7pvTHzVCY3ckI19qDNq1RvosRwyLtpmlQplCzkJu+GGV8JTcewjueXDbyeAjbFB+wiikJF1A2PtZ8Xv/DMAPlRdItmZtvlWuXPzciOtpfFYoI3thQIxt4lSzP8WmuFJvKKznhDXZ8ttPuw3aMNhl644LLao27JsXTAJi45KsVoTDeMy5EwUz8VJfRNz2v57qQQxUVp6WykRmIEG7GSr9FDccUiJx2MmRpDCuoRP8V9F+0Pf9H7v4avVbWvyg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=7wuDkXPIYyz+us8xX2jgYYjsbG6TmWiFvdgkn5m+tMs=;
 b=i2QA0+TKz6XXqWyVh14DC/2+U9H9c8hgyGGmvoomjrzRHv1oXENtd+IqPamyd1KuKYxYjkNSU8ouf4vf/rR0dHYNw8o9SZePk/9EK9jN9iASz1VJ4zx9gJF1nK4QNONu7hwimOZO0VPcwvjkmDLbSKpMQP6R3JlLon7h371TiRzo8uqEshWknYELELcxS77txEpJ/8sJFqLVRHcVhhMpNOFTs9P6pJXkQzGtIFaUH3sRONUBAvHWFdrlRPhxfeNU3R2BoR6zmA8rFkzt614MJTr74a9fgkFeGYEeuo1uBYHZGgJdpq87wuqRkoqRvt8MzNCJcdesNN9LRIZnPnNAxg==
From: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
        Stefano Stabellini
	<sstabellini@kernel.org>,
        Julien Grall <julien@xen.org>
CC: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>
Subject: [PATCH] optee: enable OPTEE_SMC_SEC_CAP_MEMREF_NULL capability
Thread-Topic: [PATCH] optee: enable OPTEE_SMC_SEC_CAP_MEMREF_NULL capability
Thread-Index: AQHXQuHcC5ITAI62gUKYhtJEOCBguA==
Date: Fri, 7 May 2021 01:39:47 +0000
Message-ID: <20210507013938.676142-1-volodymyr_babchuk@epam.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-mailer: git-send-email 2.31.1
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.48.175]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 429301b4-28e4-4555-71f4-08d910f8ff9a
x-ms-traffictypediagnostic: AM9PR03MB7027:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: 
 <AM9PR03MB70279D257CAFF5B0C18FDDF7E6579@AM9PR03MB7027.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 B3OLqLbFWHhLVDZ83+oWBknpsb4MNa6CnuUnXKU3AidG0HPjOeRE3MaPrnpM4Hrm7JylUXZZ670p29JeelOlYOLPE6CR02goaJyWgsy1u2KgH6gPutx8j1C81RcfpHzdx3t7B6M1GPAlwSVKWuE+rYKNFelnqyS52/mzGgRvZ9tbeqcw/dxlPyhpmlb6wQXNyFtiRB+JQvpEZvzDkjd1Xy2CoEZdNyi3wFcuUoS4NT2WuDphydDHsAVswQNGo2GL4Xoc0xQtWotAl0FiszTwRM8EDlPAkajbTAUH8XOG1iARxcFyl8pRWO0P5OL5nhjnPFXXUeztDNR4/f80YXcxXza0CJ95x2cHjG/X0bKPQyqA8/HJ8T65urhHsUE0nAe9edqZMVfZFAc5L9LLfryhGCaKRkdeAf2YYYwYENXUyVP+qdne22oM5ZJfrbIgJhK5irgB5NQata050Xt7jDUHaZEqVUsqNPA/nYI26iF8bJoID+7xNKlOpfUX0CeGs00j6A80oksRX24xjls8KV7xeVrLSj8b5uwR1/5SFI53LNtglHlUlOs30QavCR1z2LweL5TXjN+GVllTEGAqAKpc6uZJFea2hpvu9uV98m4wsQo=
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM0PR03MB4372.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39850400004)(136003)(346002)(396003)(376002)(366004)(4326008)(1076003)(66556008)(64756008)(186003)(66476007)(66446008)(66946007)(5660300002)(6512007)(6506007)(71200400001)(91956017)(76116006)(55236004)(2906002)(2616005)(316002)(86362001)(36756003)(478600001)(8936002)(8676002)(122000001)(110136005)(38100700002)(6486002)(83380400001)(26005);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?iso-8859-1?Q?0lPl7BxTNUA25BZ8EBv/bDFH1BvHKXOTXjSjbI4E3LHCBg2QfP402eDnWl?=
 =?iso-8859-1?Q?6OoTTr4hoYnjMQ/rn+dPsE6JwWpjLePzxrPFXQfa0P3l5ZJ4AXRv4WnK7N?=
 =?iso-8859-1?Q?iAJIPX1mO/xkg3YNULMeq0zpPNsXVm9mgMpMZ+6H5kxjG93fs1FKuc2bqW?=
 =?iso-8859-1?Q?17mYOjQHGDsXo6V5FjQYBouvxGS9kjK7BrrMNlqELO3rYOXX/WlUR2jqph?=
 =?iso-8859-1?Q?wkD9cLXOLpHbKrCn0RZ6xvd9y7xeA4a4FPsg31bbout/9g2daq3VcML2Ta?=
 =?iso-8859-1?Q?Ch0g8fRZvb37/XemyoxEE2ibTX9q9LqOhCh7pQvst2YO82c8/lBrwiIZCC?=
 =?iso-8859-1?Q?qz2rkW0pVtBBeLjkvR5calMKLycbUfvLNuAPggHPD840ZfXFwdzrCQ8qfi?=
 =?iso-8859-1?Q?Q8XeelQkrciO6opsUwtunvLA1LvfLdZGNY8tH5ZTNwp1O4fCidsE1CH3T2?=
 =?iso-8859-1?Q?hvZisiwEqSQ1YqSps6PIjMXsZCDCNps3S5Alg3dH5ByBRfxII4JBiJDVvl?=
 =?iso-8859-1?Q?LogEoaiTHUhtKBuAV9DBXLA9VBBVOMRuvkWBwz3Sn6COATAT7o7ULCgClu?=
 =?iso-8859-1?Q?YsNi734x+pX4hbGsNZaB4rLYo2lHlr+JRsR2bSAB3wazYkZosZ+QP0yAY5?=
 =?iso-8859-1?Q?X/gNWhDvLCj2ysaBSJuGCBpiz/ZcX2vJCa2YgTIk2jg4nD1EGWRGHp3N0N?=
 =?iso-8859-1?Q?kxppBXNER1WNWvfXszn9NFcIpeDSM10G/y31B7tvTF+0JZIpEzQmlzRUJN?=
 =?iso-8859-1?Q?4PlZlu4P5++dxtEDGl/ietVDTdZ5Oe/5gznSJjBcxQXqW85p2U4c1XKrlh?=
 =?iso-8859-1?Q?Qw5y5rql3l0LCC/+qAxIPtPjtlMq4phkpYeitchVDpUOnP6vHh6J1Fczef?=
 =?iso-8859-1?Q?qYR7PErvjznQwFVd4BAr0AcjpZQi5ptbV63nh9nXszwHTEmPd2JtdB5z/g?=
 =?iso-8859-1?Q?DzQuVdbNOxpkShKV6BVtVPmbsMuHmJRY+XxdabcTyXhk/GPp7841Gtf+Ar?=
 =?iso-8859-1?Q?tb0Sna0AhjT6YK+koYb+NNNoAy0xW8qGtuHf7IBHXKQ10fjkmynCzxEWae?=
 =?iso-8859-1?Q?nXBagJEnhBoxJc/+OA877DZUt3sh2bzmKCmitGryxpQJSOHNrxWn6aBBiU?=
 =?iso-8859-1?Q?Q9CggYBwmYzTVbq96HJoKn/IHtf2Cd/AhcKee29x625aeqozOWnxBPO9zm?=
 =?iso-8859-1?Q?Rdcww9O9iKeK58WnR1AK4jFy6NrkW1xEHzOlF3Z3Mkdbz5MGBuktHjHy9f?=
 =?iso-8859-1?Q?RU6K7EGE7xzkpmBIBJRD8Yu+G1meJqCOeq+ELTUPClrL7ninEfw5lY6qba?=
 =?iso-8859-1?Q?p/ZYbWmCvfVdMeTEeh/TxBe5PnBy3C2rXte2nSf+nZjO0Upn4/mKvZQMf5?=
 =?iso-8859-1?Q?QgpZHiLrD7?=
Content-Type: text/plain; charset="iso-8859-1"
Content-Transfer-Encoding: quoted-printable
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB4372.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 429301b4-28e4-4555-71f4-08d910f8ff9a
X-MS-Exchange-CrossTenant-originalarrivaltime: 07 May 2021 01:39:47.3752
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: a9hBaoBFgJ+X4zY0nMuaRZKtrIEeSB0V3kl4dSN/sIjTy4P7vx3zDLnfFpPQEnTBMTm8qhy5Vfw9RuXiTLFCrb62CLNB4+o0QaP1l28zqnI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR03MB7027
X-Proofpoint-GUID: yWQpnpTgGPGsV_wC6lwr9b0KuIGPnoEa
X-Proofpoint-ORIG-GUID: yWQpnpTgGPGsV_wC6lwr9b0KuIGPnoEa
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 clxscore=1011 mlxscore=0
 malwarescore=0 adultscore=0 mlxlogscore=999 bulkscore=0 priorityscore=1501
 phishscore=0 spamscore=0 impostorscore=0 lowpriorityscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105070007

The OP-TEE mediator already has support for NULL memory references. It
was added in commit 0dbed3ad336 ("optee: allow plain TMEM buffers with
NULL address"). But it does not propagate the
OPTEE_SMC_SEC_CAP_MEMREF_NULL capability flag to a guest, so a
well-behaving guest can't use this feature.

Note: the Linux OP-TEE driver honors this capability flag when handling
buffers from userspace clients, but ignores it when working with
internal calls. For instance, the __optee_enumerate_devices() function
uses a NULL argument to get a buffer size hint from OP-TEE. This was
the reason why "optee: allow plain TMEM buffers with NULL address" was
introduced in the first place.

This patch adds the mentioned capability to the list of known
capabilities. From the Linux point of view this means that userspace
clients can use this feature, which is confirmed by the OP-TEE test
suite:

* regression_1025 Test memref NULL and/or 0 bytes size
o regression_1025.1 Invalid NULL buffer memref registration
  regression_1025.1 OK
o regression_1025.2 Input/Output MEMREF Buffer NULL - Size 0 bytes
  regression_1025.2 OK
o regression_1025.3 Input MEMREF Buffer NULL - Size non 0 bytes
  regression_1025.3 OK
o regression_1025.4 Input MEMREF Buffer NULL over PTA invocation
  regression_1025.4 OK
  regression_1025 OK

Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
---
 xen/arch/arm/tee/optee.c            | 3 ++-
 xen/include/asm-arm/tee/optee_smc.h | 3 +++
 2 files changed, 5 insertions(+), 1 deletion(-)

diff --git a/xen/arch/arm/tee/optee.c b/xen/arch/arm/tee/optee.c
index 9570dc6771..6b59027964 100644
--- a/xen/arch/arm/tee/optee.c
+++ b/xen/arch/arm/tee/optee.c
@@ -96,7 +96,8 @@
 #define OPTEE_KNOWN_NSEC_CAPS OPTEE_SMC_NSEC_CAP_UNIPROCESSOR
 #define OPTEE_KNOWN_SEC_CAPS (OPTEE_SMC_SEC_CAP_HAVE_RESERVED_SHM | \
                               OPTEE_SMC_SEC_CAP_UNREGISTERED_SHM | \
-                              OPTEE_SMC_SEC_CAP_DYNAMIC_SHM)
+                              OPTEE_SMC_SEC_CAP_DYNAMIC_SHM | \
+                              OPTEE_SMC_SEC_CAP_MEMREF_NULL)
 
 enum optee_call_state {
     OPTEE_CALL_NORMAL,
diff --git a/xen/include/asm-arm/tee/optee_smc.h b/xen/include/asm-arm/tee/optee_smc.h
index d568bb2fe1..2f5c702326 100644
--- a/xen/include/asm-arm/tee/optee_smc.h
+++ b/xen/include/asm-arm/tee/optee_smc.h
@@ -244,6 +244,9 @@
  */
 #define OPTEE_SMC_SEC_CAP_DYNAMIC_SHM		(1 << 2)
 
+/* Secure world supports Shared Memory with a NULL reference */
+#define OPTEE_SMC_SEC_CAP_MEMREF_NULL		(1 << 4)
+
 #define OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES	9
 #define OPTEE_SMC_EXCHANGE_CAPABILITIES \
 	OPTEE_SMC_FAST_CALL_VAL(OPTEE_SMC_FUNCID_EXCHANGE_CAPABILITIES)
-- 
2.31.0


From xen-devel-bounces@lists.xenproject.org Fri May 07 02:31:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 02:31:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123749.233466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leqGn-0006H4-ER; Fri, 07 May 2021 02:31:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123749.233466; Fri, 07 May 2021 02:31:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leqGn-0006Gx-Aj; Fri, 07 May 2021 02:31:13 +0000
Received: by outflank-mailman (input) for mailman id 123749;
 Fri, 07 May 2021 02:31:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leqGm-0006Gn-3E; Fri, 07 May 2021 02:31:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leqGl-0004Hs-Qk; Fri, 07 May 2021 02:31:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leqGl-00016g-IA; Fri, 07 May 2021 02:31:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leqGl-0005Cs-Hh; Fri, 07 May 2021 02:31:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0NfM2yZ8h87GhSk/uh51+X9k/TQ/6EkLrBktlWypWG4=; b=ajl4O9cQ+wwj6OmKhNwweJTquF
	x0K/NOgqQTb4uZklIgYvAsauZ2Cygd8xBqUquNlGn6BWY3FlOYTwTJF1/CnZrStamcoNkPTtvewnA
	0bjI4VcriK2CnrZq59k61CORWwj34Y6EC1CaU9ZeVEKaFh1O7B5YoyO+yumP/O7YXId4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161812-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161812: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d45a5270d075ea589f0b0ddcf963a5fea1f500ac
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 02:31:11 +0000

flight 161812 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161812/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d45a5270d075ea589f0b0ddcf963a5fea1f500ac
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  259 days
Failing since        152659  2020-08-21 14:07:39 Z  258 days  471 attempts
Testing same since   161812  2021-05-06 12:38:46 Z    0 days    1 attempts

------------------------------------------------------------
483 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 147203 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 07 04:07:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 04:07:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123759.233481 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lerlS-0006Df-DL; Fri, 07 May 2021 04:06:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123759.233481; Fri, 07 May 2021 04:06:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lerlS-0006DY-AF; Fri, 07 May 2021 04:06:58 +0000
Received: by outflank-mailman (input) for mailman id 123759;
 Fri, 07 May 2021 04:06:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=agx6=KC=arm.com=henry.wang@srs-us1.protection.inumbo.net>)
 id 1lerlQ-0006DS-Km
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 04:06:56 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.44]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d1bdfedc-3c58-42dc-9da7-110fb137233e;
 Fri, 07 May 2021 04:06:55 +0000 (UTC)
Received: from DB6PR0601CA0024.eurprd06.prod.outlook.com (2603:10a6:4:7b::34)
 by AM8PR08MB5603.eurprd08.prod.outlook.com (2603:10a6:20b:1d4::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.26; Fri, 7 May
 2021 04:06:53 +0000
Received: from DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:7b:cafe::c9) by DB6PR0601CA0024.outlook.office365.com
 (2603:10a6:4:7b::34) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Fri, 7 May 2021 04:06:53 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT027.mail.protection.outlook.com (10.152.20.121) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Fri, 7 May 2021 04:06:52 +0000
Received: ("Tessian outbound 52fcc5bd9d3a:v91");
 Fri, 07 May 2021 04:06:52 +0000
Received: from 61a990d8c294.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 60DD2C07-0A7C-431B-8096-2587661F36B0.1; 
 Fri, 07 May 2021 04:06:46 +0000
Received: from FRA01-PR2-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 61a990d8c294.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 07 May 2021 04:06:46 +0000
Received: from PA4PR08MB6253.eurprd08.prod.outlook.com (2603:10a6:102:e4::8)
 by PR2PR08MB4748.eurprd08.prod.outlook.com (2603:10a6:101:1f::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.26; Fri, 7 May
 2021 04:06:37 +0000
Received: from PA4PR08MB6253.eurprd08.prod.outlook.com
 ([fe80::19f9:d346:b9af:5cad]) by PA4PR08MB6253.eurprd08.prod.outlook.com
 ([fe80::19f9:d346:b9af:5cad%3]) with mapi id 15.20.4108.029; Fri, 7 May 2021
 04:06:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1bdfedc-3c58-42dc-9da7-110fb137233e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hx5ewWZhaU7kL7WXqUZVcrXrshUKMDtOMIsINzIIukw=;
 b=Z5+F5p5THg7doCKyRFU0BaqzWNlcAzLrkUEuJvKddBYRtzZKjdjQs6Duw0/wTbFxfd0aJh+XNlrIdSW942+cPeuXmIAOrnHRtrpSEykXYy/17YrFkp4rZ8ZJkuzL0KAHKS/TA6Ju0rh2/TlXoNYQU3BHBSQBdMVyT4JFEhZHhxc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Mkuf0gsNGEsM2y9SEkCKroe3ukIaMT9jaSSukWHfhKPTMoxOlcqw+BKdtZLGKiDnDVzBSdmogMBp30bKEoUD7/86jO7CLCTyUFdqVyhHyNygc/dFjGyjV0OQXXit+54pas6hT+/D3c5m55hBQomGYHp1p54wcvOxpg77uZADon7zKPRyGH46y2aGeShBKXVCJh3ntcEVLaGq5VDUVmEnBRVCGAZQPn7MN9aVLwr+lpGiAKK+9tqJlQd80XkGtI+5Ka9aCEhcKe8vKD1bcCEdeuhtaUNVUstBJfpaAoAmKKjqY+okF1DBHAfIt+RkieX50fZjouL18AOsjyHDk6fBZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Hx5ewWZhaU7kL7WXqUZVcrXrshUKMDtOMIsINzIIukw=;
 b=fOyBeaD8ne3phbK+2OaTi+fvHfq3K4R0nwMfVs2zPCvMsoj0AUoq2lJTPZMgjyKfSRbWfC1lQVG2/Iw+UXfEWWo0cunsm7DyU21aHIfNXRPMSN4TBLbDz6ZXkBRmcHYtuNODWzFW7UlRcfjuAtJCtDWYMgA5uXX6WDo4lLDe5nxc8tpXA6KC9eXN8WHZ0TuuIrGahwgT7MNo2N1lvMYUmou+YpUkEke+GEqAb8evafZ+JImjc1xSIHj42hdUrxUrUmVb/0WRn6XuFzLaIjMmpsyy2mMOCd+J6TOJ4S4Ujz1TLeiA6tzoz+v+7ZuDXlEvmIoZgZTzsTb0oYC7ell/gQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>
Subject: RE: Discussion of Xenheap problems on AArch64
Thread-Topic: Discussion of Xenheap problems on AArch64
Thread-Index:
 Adc2dyA8lkZGRqbyRiSglHolanVkwQAFhaqAAACgy/AA4CfqgABHcHyAADhcqlAABznSAAGrycWA
Date: Fri, 7 May 2021 04:06:33 +0000
Message-ID:
 <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
References:
 <PA4PR08MB6253F49C13ED56811BA5B64E92479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <cdde98ca-4183-c92b-adca-801330992fc5@xen.org>
 <PA4PR08MB62538BBA256E66A0415F0C7192479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <f14aa1d6-35d2-a9a3-0672-7f0d3ae3ec89@xen.org>
 <PA4PR08MB62534C4130B59CAA9A8A8BF792419@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <PA4PR08MB6253FBC7F5E690DB74F2E11F92409@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
In-Reply-To: <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 4DE9EA2C25A43B4A97526981C69DD760.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 83e3d666-1f06-431d-9f73-08d9110d8bed
x-ms-traffictypediagnostic: PR2PR08MB4748:|AM8PR08MB5603:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB56032372398586C215E42B3B92579@AM8PR08MB5603.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PA4PR08MB6253.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(366004)(396003)(39860400002)(8936002)(966005)(53546011)(478600001)(316002)(186003)(6506007)(26005)(38100700002)(8676002)(122000001)(52536014)(5660300002)(33656002)(7696005)(2906002)(66476007)(83380400001)(66446008)(64756008)(66556008)(9686003)(76116006)(54906003)(86362001)(71200400001)(55016002)(110136005)(4326008)(66946007);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR2PR08MB4748
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	5eb538a3-b6cc-4631-f41d-08d9110d81da
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(39860400002)(396003)(346002)(136003)(36840700001)(46966006)(336012)(356005)(81166007)(82740400003)(53546011)(70586007)(70206006)(86362001)(83380400001)(82310400003)(4326008)(966005)(26005)(6506007)(186003)(33656002)(7696005)(5660300002)(52536014)(8676002)(316002)(47076005)(9686003)(54906003)(8936002)(478600001)(55016002)(36860700001)(2906002)(110136005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 04:06:52.9418
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 83e3d666-1f06-431d-9f73-08d9110d8bed
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5603

Hi Julien,

> From: Julien Grall <julien@xen.org>
> On 28/04/2021 10:28, Henry Wang wrote:
> > Hi Julien,
> 
> Hi Henry,
> 
> >
> > I've done some testing of the patch series in
> > https://xenbits.xen.org/gitweb/?p=people/julieng/xen-
> unstable.git;a=shortlog;h=refs/heads/pt/rfc-v2
> >
> 
> Thank you for the testing. Some questions below.
> 
> I am a bit confused with the output with and without my patches. Both of
> them are showing a data abort in clear_page().
> 
> Above, you suggested that there is a big gap between the two memory
> banks. Do the banks still point to actual RAM?

Again, sorry for the very late reply; we had a 5-day public holiday in
China, and it also took me some time to figure out how to configure the
FVP (it turned out I have to set -C bp.secure_memory=false to access
some parts of memory higher than 4G).

Yes, you are absolutely right. In my previous test, the higher memory was
not valid. After turning off FVP secure memory, this time I tried 2 test cases:

1. Using reg = <0x00 0x80000000 0x00 0x7f000000 0xf8 0x00000000 0x00 0x80000000>;

In this case, the guest can be booted successfully.

2. Using reg = <0x00 0x80000000 0x00 0x7f000000 0xf9 0x00000000 0x00 0x80000000>;

First, I confirmed the memory is valid by using the md command on the
u-boot command line:

VExpress64# md 0xf900000000
f900000000: dfdfdfcf cfdfdfdf dfdfdfcf cfdfdfdf    ................
VExpress64# md 0xf980000000
f980000000: dfdfdfcf cfdfdfdf dfdfdfcf cfdfdfdf    ................

When I continued booting Xen, I got the following error log:

(XEN) CPU:    0
(XEN) PC:     00000000002b5a5c alloc_boot_pages+0x94/0x98
(XEN) LR:     00000000002ca3bc
(XEN) SP:     00000000002ffde0
(XEN) CPSR:   600003c9 MODE:64-bit EL2h (Hypervisor, handler)
(XEN)
(XEN)   VTCR_EL2: 80000000
(XEN)  VTTBR_EL2: 0000000000000000
(XEN)
(XEN)  SCTLR_EL2: 30cd183d
(XEN)    HCR_EL2: 0000000000000038
(XEN)  TTBR0_EL2: 000000008413c000
(XEN)
(XEN)    ESR_EL2: f2000001
(XEN)  HPFAR_EL2: 0000000000000000
(XEN)    FAR_EL2: 0000000000000000
(XEN)
(XEN) Xen call trace:
(XEN)    [<00000000002b5a5c>] alloc_boot_pages+0x94/0x98 (PC)
(XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108 (LR)
(XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
(XEN)    [<00000000002cb988>] start_xen+0x344/0xbcc
(XEN)    [<00000000002001c0>] arm64/head.o#primary_switched+0x10/0x30
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Xen BUG at page_alloc.c:432
(XEN) ****************************************

We can continue our discussion from here. Thanks ^^

Kind regards,

Henry

> 
> Cheers,
> 
> --
> Julien Grall
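[Editor's note: the reg properties in the message above each describe two memory banks, with two 32-bit cells for the base address and two for the size (#address-cells = #size-cells = 2). A minimal sketch of how those cells decode into bank ranges; the decode_reg helper is hypothetical, not part of Xen or u-boot:]

```python
def decode_reg(cells, address_cells=2, size_cells=2):
    """Group 32-bit device-tree cells into (base, size) pairs of 64-bit values."""
    step = address_cells + size_cells
    banks = []
    for i in range(0, len(cells), step):
        group = cells[i:i + step]
        base = 0
        for c in group[:address_cells]:      # fold address cells, big-endian cell order
            base = (base << 32) | c
        size = 0
        for c in group[address_cells:]:      # fold size cells the same way
            size = (size << 32) | c
        banks.append((base, size))
    return banks

# Test case 2 from the mail:
# reg = <0x00 0x80000000 0x00 0x7f000000 0xf9 0x00000000 0x00 0x80000000>;
cells = [0x00, 0x80000000, 0x00, 0x7f000000, 0xf9, 0x00000000, 0x00, 0x80000000]
for base, size in decode_reg(cells):
    print(f"bank: base=0x{base:x} size=0x{size:x} end=0x{base + size - 1:x}")
# prints:
#   bank: base=0x80000000 size=0x7f000000 end=0xfeffffff
#   bank: base=0xf900000000 size=0x80000000 end=0xf97fffffff
```

[For test case 2 this places the second bank at 0xf900000000, far above 4G and nearly 1 TiB beyond the first bank, which is consistent with both the "big gap between the two memory banks" observation and the need for -C bp.secure_memory=false to make that region accessible on the FVP.]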


From xen-devel-bounces@lists.xenproject.org Fri May 07 05:10:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 05:10:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123765.233498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leslD-0004c8-Ah; Fri, 07 May 2021 05:10:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123765.233498; Fri, 07 May 2021 05:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leslD-0004c1-7g; Fri, 07 May 2021 05:10:47 +0000
Received: by outflank-mailman (input) for mailman id 123765;
 Fri, 07 May 2021 05:10:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leslC-0004br-CE; Fri, 07 May 2021 05:10:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leslC-0007LV-2F; Fri, 07 May 2021 05:10:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1leslB-0001eU-LZ; Fri, 07 May 2021 05:10:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1leslB-0007nP-L1; Fri, 07 May 2021 05:10:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FjE+1Ei8uYeVM5Phh9JQcaMohGW4gVZtWnX8va8EQ/s=; b=q8kq2Cgf5inbt48Rwt7BE26Zz/
	2r2CKvogf69wYsaw6NzW5WgbrBl5N60xiafK90RbWx2R4Z8SjPt6DWZkaSPtOB0YXf8m2F56vKUVM
	JnlcAU1z8AQc/1f8Yc2Gdh8i/SQZ9KgKdvN2sRMN/VKCm8bE80EWhs6QEwP0fTuZM2Gc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161817-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161817: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=38182162b50aa4e970e5997df0a0c4288147a153
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 05:10:45 +0000

flight 161817 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161817/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                38182162b50aa4e970e5997df0a0c4288147a153
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  279 days
Failing since        152366  2020-08-01 20:49:34 Z  278 days  464 attempts
Testing same since   161817  2021-05-06 18:21:44 Z    0 days    1 attempts

------------------------------------------------------------
5989 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1624971 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 07 06:23:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 06:23:00 +0000
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/shim: fix build when !PV32
Message-ID: <08ea57f0-732e-fe12-409c-5487fb493429@suse.com>
Date: Fri, 7 May 2021 08:22:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When !PV32, compat headers don't get generated (and aren't needed).
The changes made by 527922008bce ("x86: slim down hypercall handling
when !PV32") also weren't quite sufficient for this case.

Try to limit #ifdef-ary by introducing two "fallback" #define-s.

Fixes: d23d792478db ("x86: avoid building COMPAT code when !HVM && !PV32")
Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -34,8 +34,6 @@
 #include <public/arch-x86/cpuid.h>
 #include <public/hvm/params.h>
 
-#include <compat/grant_table.h>
-
 #undef virt_to_mfn
 #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
 
@@ -300,8 +298,10 @@ static void write_start_info(struct doma
                                           &si->console.domU.mfn) )
         BUG();
 
+#ifdef CONFIG_PV32
     if ( compat )
         xlat_start_info(si, XLAT_start_info_console_domU);
+#endif
 
     unmap_domain_page(si);
 }
@@ -675,6 +675,13 @@ void pv_shim_inject_evtchn(unsigned int
     }
 }
 
+#ifdef CONFIG_PV32
+# include <compat/grant_table.h>
+#else
+# define compat_gnttab_setup_table gnttab_setup_table
+# define compat_handle_okay guest_handle_okay
+#endif
+
 static long pv_shim_grant_table_op(unsigned int cmd,
                                    XEN_GUEST_HANDLE_PARAM(void) uop,
                                    unsigned int count)
@@ -704,10 +711,13 @@ static long pv_shim_grant_table_op(unsig
             rc = -EFAULT;
             break;
         }
+
+#ifdef CONFIG_PV32
         if ( compat )
 #define XLAT_gnttab_setup_table_HNDL_frame_list(d, s)
             XLAT_gnttab_setup_table(&nat, &cmp);
 #undef XLAT_gnttab_setup_table_HNDL_frame_list
+#endif
 
         nat.status = GNTST_okay;
 
@@ -778,6 +788,7 @@ static long pv_shim_grant_table_op(unsig
             }
 
             ASSERT(grant_frames[i]);
+#ifdef CONFIG_PV32
             if ( compat )
             {
                 compat_pfn_t pfn = grant_frames[i];
@@ -789,8 +800,10 @@ static long pv_shim_grant_table_op(unsig
                     break;
                 }
             }
-            else if ( __copy_to_guest_offset(nat.frame_list, i,
-                                             &grant_frames[i], 1) )
+            else
+#endif
+            if ( __copy_to_guest_offset(nat.frame_list, i,
+                                        &grant_frames[i], 1) )
             {
                 nat.status = GNTST_bad_virt_addr;
                 rc = -EFAULT;
@@ -799,10 +812,12 @@ static long pv_shim_grant_table_op(unsig
         }
         spin_unlock(&grant_lock);
 
+#ifdef CONFIG_PV32
         if ( compat )
 #define XLAT_gnttab_setup_table_HNDL_frame_list(d, s)
             XLAT_gnttab_setup_table(&cmp, &nat);
 #undef XLAT_gnttab_setup_table_HNDL_frame_list
+#endif
 
         if ( unlikely(compat ? __copy_to_guest(uop, &cmp, 1)
                              : __copy_to_guest(uop, &nat, 1)) )


From xen-devel-bounces@lists.xenproject.org Fri May 07 07:26:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 07:26:07 +0000
Date: Fri, 7 May 2021 09:25:38 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v1] tools: fix incorrect suggestions for
 XENCONSOLED_TRACE on FreeBSD
Message-ID: <YJTrch06dWF9SOLf@Air-de-Roger>
References: <20210504135021.8394-1-olaf@aepfle.de>
 <c71658e6-422b-4852-6d21-4688d09d8b8e@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c71658e6-422b-4852-6d21-4688d09d8b8e@citrix.com>
MIME-Version: 1.0

On Tue, May 04, 2021 at 06:47:12PM +0100, Andrew Cooper wrote:
> On 04/05/2021 14:50, Olaf Hering wrote:
> > --log does not take a file, it specifies what is supposed to be logged.
> >
> > Signed-off-by: Olaf Hering <olaf@aepfle.de>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>.  That said, ...
> 
> > ---
> >  tools/hotplug/FreeBSD/rc.d/xencommons.in | 2 +-
> >  1 file changed, 1 insertion(+), 1 deletion(-)
> >
> > diff --git a/tools/hotplug/FreeBSD/rc.d/xencommons.in b/tools/hotplug/FreeBSD/rc.d/xencommons.in
> > index ccd5a9b055..36dd717944 100644
> > --- a/tools/hotplug/FreeBSD/rc.d/xencommons.in
> > +++ b/tools/hotplug/FreeBSD/rc.d/xencommons.in
> > @@ -23,7 +23,7 @@ required_files="/dev/xen/xenstored"
> >  
> >  XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
> >  XENCONSOLED_PIDFILE="@XEN_RUN_DIR@/xenconsoled.pid"
> > -#XENCONSOLED_TRACE="@XEN_LOG_DIR@/xenconsole-trace.log"
> > +#XENCONSOLED_TRACE="none|guest|hv|all"
> >  #XENSTORED_TRACE="@XEN_LOG_DIR@/xen/xenstore-trace.log"
> 
> It would probably be clearer to untangle these in one go, leaving the
> result looking like:
> 
> XENCONSOLED_PIDFILE="@XEN_RUN_DIR@/xenconsoled.pid"
> #XENCONSOLED_TRACE="none|guest|hv|all"
> 
> XENSTORED_PIDFILE="@XEN_RUN_DIR@/xenstored.pid"
> #XENSTORED_TRACE="@XEN_LOG_DIR@/xen/xenstore-trace.log"
> 
> I'd also be tempted to fold this and the NetBSD change together.  It's
> not as if these bugfixes are distro-specific.
> 
> 
> It looks like a bug in NetBSD in c/s 2e8644e1d90, which was copied into
> FreeBSD by c/s 5dcdb2bf569.  (P.S. Sorry Roger - both your bugs,
> starting from a decade ago).  It really is idiotic that we've got a
> commonly named *_TRACE variable with totally different semantics for the
> two daemons.  Then again, it's far too late to fix this :(

Oops, sorry. Feel free to fix those in one go, and add my:

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.
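
For readers of the archive, the distinction the thread turns on can be
sketched as follows (a hypothetical fragment in the style of
xencommons; the log path is illustrative, with the @XEN_LOG_DIR@
substitution spelled out):

```shell
# XENCONSOLED_TRACE selects *what* xenconsoled logs, not a file path:
# it must be one of none|guest|hv|all.
XENCONSOLED_TRACE="all"

# XENSTORED_TRACE, despite the similarly named variable, names the
# trace *file* for xenstored.
XENSTORED_TRACE="/var/log/xen/xenstore-trace.log"

# A simple validity check for the xenconsoled setting:
case "$XENCONSOLED_TRACE" in
    none|guest|hv|all) echo "valid xenconsoled trace mode" ;;
    *) echo "error: not a trace mode" >&2; exit 1 ;;
esac
```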


From xen-devel-bounces@lists.xenproject.org Fri May 07 07:47:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 07:47:07 +0000
Subject: Re: [PATCH v4] gnttab: defer allocation of status frame tracking
 array
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <74048f89-fee7-06c2-ffd5-6e5a14bdf440@suse.com>
 <4450085e-be97-a1ba-dbd8-c72468406fd5@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <eb882824-55a8-bd59-9c31-ade25c2b8da9@suse.com>
Date: Fri, 7 May 2021 09:46:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <4450085e-be97-a1ba-dbd8-c72468406fd5@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 05.05.2021 12:49, Andrew Cooper wrote:
> On 04/05/2021 09:42, Jan Beulich wrote:
>> This array can be large when many grant frames are permitted; avoid
>> allocating it when it's not going to be used anyway, by doing this only
>> in gnttab_populate_status_frames(). While the delaying of the respective
>> memory allocation adds possible reasons for failure of the respective
>> enclosing operations, there are other memory allocations there already,
>> so callers can't expect these operations to always succeed anyway.
>>
>> As to the re-ordering at the end of gnttab_unpopulate_status_frames(),
>> this is merely to represent intended order of actions (shrink array
>> bound, then free higher array entries). If there were racing accesses,
>> suitable barriers would need adding in addition.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Nack.
> 
> The argument you make says that the grant status frames array is "large
> enough" to care about not allocating.  (I frankly disagree, but that
> isn't relevant to my nack).
> 
> The change in logic here moves a failure from DOMCTL_createdomain to
> GNTTABOP_set_version.  We know, from the last-minute screwups in XSA-226,
> that there are versions of Windows and Linux in the field, used by
> customers, which will BUG()/BSOD when GNTTABOP_set_version doesn't succeed.

So after you re-stated the Linux aspect of this on the call yesterday,
I went and looked. In the first phase of Linux supporting v2 at all
(3.3 - 3.16) there was indeed a panic() upon certain kinds of failures.
However, only up to 3.13 was there an actual attempt to use v2. Also,
while there was some code there to support actual v2 features (sub-page
or transitive grants), all of it was dead. Hence their claim

		/*
		 * If we've already used version 2 features,
		 * but then suddenly discover that they're not
		 * available (e.g. migrating to an older
		 * version of Xen), almost unbounded badness
		 * can happen.
		 */

was bogus at best. If there had been no way at all for set_version to
fail prior to the change here, I could probably accept your position.
But as Julien said before, there are pre-existing memory allocations
(of typically larger size) there, and hence any guest assuming the call
can't fail is flawed already anyway. And no, I don't view this as a
reason for us to try to eliminate all memory allocations from that
hypercall (which would in particular mean populating status frames
irrespective of whether a guest would ever switch to v2). Guest flaws
would better be addressed in the guests themselves.

Upon re-introduction in 4.15 no such fatal behavior was put back in
place. I notice though that even up-to-date Linux has problematic
behavior around GNTTABOP_setup_table - the call is followed (after an
explicit -ENOSYS check) by "BUG_ON(rc || setup.status)". The amount of
memory needed here (including the status table) is potentially far
higher than for set_version. And what's important: setup_table gets
invoked only after set_version, so any table expansion would happen
here, with - if any table growing is needed at all - a far higher risk
for failure.

This ordering of operations (set_version before setup_table) as well
as the "error checking" after setup_table was also in effect up to
3.13. Therefore the same conclusion can be drawn there: The main risk
for crashing the guest due to memory shortage in Xen is there, not
with the version switch. What you've observed with XSA-226 (or rather,
I assume, with a custom extension of yours at the time, to prohibit use
of v2, which was upstreamed only later) was entirely unrelated to memory
shortage, but was instead a result of v2 suddenly becoming unavailable
altogether.

As to Windows, in the pvdrivers/win/ git repos I haven't been able to
find any use of GrantTableSetVersion(). Of course I can't exclude
other versions or other driver variants making use of set_version in an
inappropriate way. But as long as they don't grow the number of grant
table entries before such a call, the main risk of memory allocation
failure would again be with other hypercalls.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 07:59:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 07:59:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123789.233556 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levOL-00040Y-9D; Fri, 07 May 2021 07:59:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123789.233556; Fri, 07 May 2021 07:59:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levOL-00040R-6D; Fri, 07 May 2021 07:59:21 +0000
Received: by outflank-mailman (input) for mailman id 123789;
 Fri, 07 May 2021 07:59:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W7X5=KC=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1levOK-00040L-I4
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 07:59:20 +0000
Received: from mail-wr1-x436.google.com (unknown [2a00:1450:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ebc3d453-9203-42e5-aa87-41626d0a2214;
 Fri, 07 May 2021 07:59:19 +0000 (UTC)
Received: by mail-wr1-x436.google.com with SMTP id n2so8221477wrm.0
 for <xen-devel@lists.xenproject.org>; Fri, 07 May 2021 00:59:19 -0700 (PDT)
Received: from [192.168.1.186]
 (host86-180-176-157.range86-180.btcentralplus.com. [86.180.176.157])
 by smtp.gmail.com with ESMTPSA id 6sm14742646wmg.9.2021.05.07.00.59.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 May 2021 00:59:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ebc3d453-9203-42e5-aa87-41626d0a2214
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=oIGKBYwFUgUlzXnGLZjKJAY4IuJYeR1zWo2ZVMftHfs=;
        b=tiaLtJERz+NvI77p/SE+0q8bdgH+AihgKM35FplYjfXLuVsjftg70P7dNjYSmZFoTj
         +3J9/Yy14ABGFBmGhXL3PzG7cRCEEm1ZkqX+qvywowK4nLPrZ2bo7ddr53eun8QrqDcb
         sw6O11kJWKjVZYL9Wcc8HZdxetxEVoRf9mNt7LlDzrBS4xHdOmjDYT46fgxdDt6P/P8C
         2lxd9hIQM0YlhYxTlipd6He4QRq8u1tCc6QRSetTjiLmJLXlatNYxTSLRa15PYIjuSaJ
         ze7OudKzb72ZP4SRgdHYrbgUjGS08UZ9bt3jZUgvhy8Augf0jloLdB2uEbcRewxv0oeN
         u06g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=oIGKBYwFUgUlzXnGLZjKJAY4IuJYeR1zWo2ZVMftHfs=;
        b=JC2O9yAc/wDR4Z3+oUVtWWGR/O3GCY0l7CvBl0wSK3IoBplvxhczkns7Q4qe/HTYqn
         y95eKxIN/tH7admU+o2R3MiZV+QuVnBolxxHr1OZTloqAURgOwZyiGlpAP3bhD/lo4wI
         l+YauVUL08l+jyQ1a+05lB/Sj+PAvahOSau/9kjNC/k/aYBSPwTQhvtwnRpLEYuKDEjl
         RbHqBr8IoxkgDTH95TzTAu0MfqUmVInRu8HboT7hs6GSBQWl4f+hQzVRif9ebiw1SCoj
         c4tNq7QgMyKmmOJc77UDUslZH8fqWiO7ZUeecUwjnBlWnpr9Awr0WTFxmOVRgJ3hpUaf
         LfnQ==
X-Gm-Message-State: AOAM5308+o3uLcXBqhNw//p8ucbJoIdb9Vw9owbTALbXCsQxo9wNbHyS
	Ur/4EpAwb6wT4pAEebWchj0=
X-Google-Smtp-Source: ABdhPJxMRjBTa2+pXlnxk6Ds39p0P6CSSRTbJTLadwC7WSyxk5ctNWVGygIBK0drnR8BKvzBVkX2Fg==
X-Received: by 2002:a5d:40c3:: with SMTP id b3mr10609326wrq.304.1620374358571;
        Fri, 07 May 2021 00:59:18 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH RFC 2/2] xen/kexec: Reserve KEXEC_TYPE_LIVEUPDATE and
 KEXEC_RANGE_MA_LIVEUPDATE
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: dwmw2@infradead.org, hongyxia@amazon.com, raphning@amazon.com,
 maghul@amazon.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210506104259.16928-1-julien@xen.org>
 <20210506104259.16928-3-julien@xen.org>
Message-ID: <a2fcd673-2500-917e-16f2-d18e553fe2db@xen.org>
Date: Fri, 7 May 2021 08:59:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506104259.16928-3-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06/05/2021 11:42, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Unfortunately, the code to support Live Update has already been merged into
> Kexec and shipped since 2.0.21. Reserve the IDs used by Kexec before they
> end up being re-used for a different purpose.
> 
> This patch reserves two IDs:
>      * KEXEC_TYPE_LIVEUPDATE: New operation to request Live Update
>      * KEXEC_RANGE_MA_LIVEUPDATE: New range to query the Live Update
>        area below Xen
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Paul Durrant <paul@xen.org>

> ---
>   xen/include/public/kexec.h | 13 ++++++++++---
>   1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/include/public/kexec.h b/xen/include/public/kexec.h
> index 3f2a118381ba..650d2feb036f 100644
> --- a/xen/include/public/kexec.h
> +++ b/xen/include/public/kexec.h
> @@ -71,17 +71,22 @@
>    */
>   
>   /*
> - * Kexec supports two types of operation:
> + * Kexec supports three types of operation:
>    * - kexec into a regular kernel, very similar to a standard reboot
>    *   - KEXEC_TYPE_DEFAULT is used to specify this type
>    * - kexec into a special "crash kernel", aka kexec-on-panic
>    *   - KEXEC_TYPE_CRASH is used to specify this type
>    *   - parts of our system may be broken at kexec-on-panic time
>    *     - the code should be kept as simple and self-contained as possible
> + * - Live update into a new Xen, preserving all running domains
> + *   - KEXEC_TYPE_LIVEUPDATE is used to specify this type
> + *   - Xen performs non-cooperative live migration and stores live
> + *     update state in memory, passing it to the new Xen.
>    */
>   
> -#define KEXEC_TYPE_DEFAULT 0
> -#define KEXEC_TYPE_CRASH   1
> +#define KEXEC_TYPE_DEFAULT      0
> +#define KEXEC_TYPE_CRASH        1
> +#define KEXEC_TYPE_LIVEUPDATE   2
>   
>   
>   /* The kexec implementation for Xen allows the user to load two
> @@ -150,6 +155,8 @@ typedef struct xen_kexec_load_v1 {
>   #define KEXEC_RANGE_MA_EFI_MEMMAP 5 /* machine address and size of
>                                        * of the EFI Memory Map */
>   #define KEXEC_RANGE_MA_VMCOREINFO 6 /* machine address and size of vmcoreinfo */
> +/* machine address and size of the Live Update area below Xen */
> +#define KEXEC_RANGE_MA_LIVEUPDATE 7
>   
>   /*
>    * Find the address and size of certain memory areas
> 



From xen-devel-bounces@lists.xenproject.org Fri May 07 07:59:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 07:59:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123794.233568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levOo-0004d4-M9; Fri, 07 May 2021 07:59:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123794.233568; Fri, 07 May 2021 07:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levOo-0004cx-IE; Fri, 07 May 2021 07:59:50 +0000
Received: by outflank-mailman (input) for mailman id 123794;
 Fri, 07 May 2021 07:59:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1levOn-0004bb-AT; Fri, 07 May 2021 07:59:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1levOn-0001oK-4M; Fri, 07 May 2021 07:59:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1levOm-0001P5-Sl; Fri, 07 May 2021 07:59:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1levOm-0006bv-SH; Fri, 07 May 2021 07:59:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JYbbVtZyKjvgnmCv/329LnssnyWUY4PH4xM7x0y+cOg=; b=v6We8q0awLgqtHKZjvNhHyt/Yx
	KYEvFZjKvZoybAU5U9gj3+NbsDKmle5TvwWlhhVqw6BJU7Hk3/U5Y76fM0zDvmfeZ7SH38y5nyFB9
	Hskk3GKds8PiDIP0oO9h1CysR+ISW7Y6wY2gDCcpxJyEy2e/82+vseYlo0uVxKmXl+cg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161827-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161827: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=4ef4476d3abd3d4eeff3a71ad9324df072c8405f
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 07:59:48 +0000

flight 161827 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161827/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              4ef4476d3abd3d4eeff3a71ad9324df072c8405f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  301 days
Failing since        151818  2020-07-11 04:18:52 Z  300 days  293 attempts
Testing same since   161827  2021-05-07 04:19:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 56554 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 07 08:14:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:14:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123803.233582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levci-0007Z8-47; Fri, 07 May 2021 08:14:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123803.233582; Fri, 07 May 2021 08:14:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levci-0007Z1-1G; Fri, 07 May 2021 08:14:12 +0000
Received: by outflank-mailman (input) for mailman id 123803;
 Fri, 07 May 2021 08:14:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1levcg-0007Ys-1S
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:14:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fa8b46ab-451b-4d15-94d9-5a93f7496d9c;
 Fri, 07 May 2021 08:14:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 56D02B12E;
 Fri,  7 May 2021 08:14:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fa8b46ab-451b-4d15-94d9-5a93f7496d9c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620375247; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ouOrfKbce9WN0n8msMSMMyCEi4fK4JyvSh7mKSKsy8s=;
	b=TLVEBQw8PrRVodntv69+s4NPYh3ckQ0bLkmVKUG15fPEcSwQusVO+t1lNZR0mj6caxEkLw
	pWQ7qDuRlJv153jhWlN4beLK2hSXRvzU6qShNH0BEjFLz/DzbfcgYhWvXPCsq3hjumN6Ty
	xJG4RUKt28hPgrtCPok143Fq5VpAJ4A=
Subject: Re: [xen-4.12-testing test] 161776: regressions - FAIL
To: Ian Jackson <iwj@xenproject.org>
Cc: xen-devel@lists.xenproject.org, George Dunlap <george.dunlap@citrix.com>
References: <osstest-161776-mainreport@xen.org>
 <24724.6389.95487.1868@mariner.uk.xensource.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d0e817e2-4097-239a-ee16-95f23e9ca52d@suse.com>
Date: Fri, 7 May 2021 10:14:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <24724.6389.95487.1868@mariner.uk.xensource.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.05.2021 18:27, Ian Jackson wrote:
> osstest service owner writes ("[xen-4.12-testing test] 161776: regressions - FAIL"):
>> flight 161776 xen-4.12-testing real [real]
>> flight 161806 xen-4.12-testing real-retest [real]
>> http://logs.test-lab.xenproject.org/osstest/logs/161776/
>> http://logs.test-lab.xenproject.org/osstest/logs/161806/
>>
>> Regressions :-(
>>
>> Tests which did not succeed and are blocking,
>> including tests which could not be run:
>>  test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10   fail REGR. vs. 159418
> 
> This has been failing for 48 days.
> 
>   http://logs.test-lab.xenproject.org/osstest/logs/161776/test-amd64-amd64-xl-qcow2/19.ts-guest-localmigrate.log
> 
> shows this:
> 
>   libxl: error: libxl_dom_suspend.c:377:suspend_common_wait_guest_timeout: Domain
> 6:guest did not suspend, timed out
> 
> as the first thing that goes wrong.  This is after the guest has
> acknowledged the suspend request.
> 
> osstest tried to bisect it but was not able to reproduce the basis
> pass.  That means either that we got (un)lucky with the basis pass, or
> that something not version-controlled by osstest is responsible.  In
> this case I think the dom0 and domU kernels, as well as the usual
> pieces, are all properly version controlled by osstest.  The non-Xen
> userland tools are not but I doubt they are the cause.
> 
> So I think this is not a real regression.  In lieu of a fix, I propose
> to force push 5984905b2638df87a0262d1ee91f0a6e14a86df6 to stable-4.12.

I did consider this as an option, but I don't think it's that simple.
Neither 4.11 and older nor 4.13 and newer exhibit such behavior. In
fact on 4.12 we now appear to see pushes blocked because this test
succeeded once, in flight 159418. So while this may not be a
regression within 4.12 (and hence a force push may still be an
appropriate step), I would say there is something wrong with 4.12.
It being out of (general) support may of course mean we want to
leave it at that. Better, for the remaining time the branch is in
security-only maintenance, would of course be to identify the
(presumably) missing backport ... Of course that's easy for me to
say, because I don't think I would realistically be the one to
undertake such an exercise.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 08:21:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:21:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123807.233594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levjc-0000Yn-Si; Fri, 07 May 2021 08:21:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123807.233594; Fri, 07 May 2021 08:21:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levjc-0000Yg-Pb; Fri, 07 May 2021 08:21:20 +0000
Received: by outflank-mailman (input) for mailman id 123807;
 Fri, 07 May 2021 08:21:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HO4=KC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1levjc-0000YY-0U
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:21:20 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e4481b7a-8875-4225-b720-f651ab1f4270;
 Fri, 07 May 2021 08:21:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4481b7a-8875-4225-b720-f651ab1f4270
Date: Fri, 7 May 2021 10:21:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/shim: fix build when !PV32
Message-ID: <YJT4cV62lqFgFKq/@Air-de-Roger>
References: <08ea57f0-732e-fe12-409c-5487fb493429@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <08ea57f0-732e-fe12-409c-5487fb493429@suse.com>
MIME-Version: 1.0

On Fri, May 07, 2021 at 08:22:38AM +0200, Jan Beulich wrote:
> In this case compat headers don't get generated (and aren't needed).
> The changes made by 527922008bce ("x86: slim down hypercall handling
> when !PV32") also weren't quite sufficient for this case.
> 
> Try to limit #ifdef-ary by introducing two "fallback" #define-s.
> 
> Fixes: d23d792478db ("x86: avoid building COMPAT code when !HVM && !PV32")
> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/pv/shim.c
> +++ b/xen/arch/x86/pv/shim.c
> @@ -34,8 +34,6 @@
>  #include <public/arch-x86/cpuid.h>
>  #include <public/hvm/params.h>
>  
> -#include <compat/grant_table.h>
> -
>  #undef virt_to_mfn
>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
>  
> @@ -300,8 +298,10 @@ static void write_start_info(struct doma
>                                            &si->console.domU.mfn) )
>          BUG();
>  
> +#ifdef CONFIG_PV32
>      if ( compat )
>          xlat_start_info(si, XLAT_start_info_console_domU);
> +#endif

Would it help the compiler's analysis if the 'compat' local variable
was made const?

I'm wondering whether there's a way we can get the compiler to perform
DCE (dead code elimination) and avoid the #ifdefs around the uses of
compat.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 07 08:22:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123810.233607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levkN-00017A-7z; Fri, 07 May 2021 08:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123810.233607; Fri, 07 May 2021 08:22:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levkN-000172-31; Fri, 07 May 2021 08:22:07 +0000
Received: by outflank-mailman (input) for mailman id 123810;
 Fri, 07 May 2021 08:22:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1levkM-00016s-3c
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:22:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b8d3eea5-1665-4286-96b8-ebdc39f12fc9;
 Fri, 07 May 2021 08:22:05 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 38AFEADDB;
 Fri,  7 May 2021 08:22:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8d3eea5-1665-4286-96b8-ebdc39f12fc9
Subject: Re: Ping: [PATCH] x86emul: fix test harness build for gas 2.36
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <723af87e-329d-6f52-ece4-fc3314796960@suse.com>
 <5e6f7769-ba07-bccb-9d73-4c7c0db67f89@citrix.com>
 <62d6134b-cf49-275e-d1a8-1b47d9152888@suse.com>
 <c6bff966-4341-8648-49d6-b243a2d821ac@suse.com>
Message-ID: <749efbb9-5b8a-abdd-5531-f4f24315e6ff@suse.com>
Date: Fri, 7 May 2021 10:22:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <c6bff966-4341-8648-49d6-b243a2d821ac@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.04.2021 11:24, Jan Beulich wrote:
> On 19.04.2021 17:51, Jan Beulich wrote:
>> On 19.04.2021 17:41, Andrew Cooper wrote:
>>> On 19/04/2021 16:30, Jan Beulich wrote:
>>>> All of a sudden, besides .text, .rodata, and the like, an
>>>> always-present .note.gnu.property section has appeared. This section,
>>>> when converting to binary format output, gets placed according to its
>>>> link address, causing the resulting blobs to be about 128MB in size.
>>>> The resulting headers with a C representation of the binary blobs are
>>>> then, of course, all a multiple of that size (and take accordingly
>>>> long to create). I didn't bother waiting to see what size the final
>>>> test_x86_emulator binary would have ended up being.
>>>>
>>>> See also https://sourceware.org/bugzilla/show_bug.cgi?id=27753.
>>>>
>>>> Rather than figuring out whether gas supports -mx86-used-note=, simply
>>>> remove the section while creating *.bin.
>>>>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/tools/tests/x86_emulator/testcase.mk
>>>> +++ b/tools/tests/x86_emulator/testcase.mk
>>>> @@ -12,11 +12,11 @@ all: $(TESTCASE).bin
>>>>  %.bin: %.c
>>>>  	$(CC) $(filter-out -M% .%,$(CFLAGS)) -c $<
>>>>  	$(LD) $(LDFLAGS_DIRECT) -N -Ttext 0x100000 -o $*.tmp $*.o
>>>> -	$(OBJCOPY) -O binary $*.tmp $@
>>>> +	$(OBJCOPY) -O binary -R .note.gnu.property $*.tmp $@
>>>>  	rm -f $*.tmp
>>>>  
>>>>  %-opmask.bin: opmask.S
>>>>  	$(CC) $(filter-out -M% .%,$(CFLAGS)) -c $< -o $(basename $@).o
>>>>  	$(LD) $(LDFLAGS_DIRECT) -N -Ttext 0x100000 -o $(basename $@).tmp $(basename $@).o
>>>> -	$(OBJCOPY) -O binary $(basename $@).tmp $@
>>>> +	$(OBJCOPY) -O binary -R .note.gnu.property $(basename $@).tmp $@
>>>>  	rm -f $(basename $@).tmp
>>>
>>> Hmm - this is very ugly.  We don't really want to be stripping this
>>> information, because it covers various properties of the binary which
>>> ought not to be lost, including stack-clash mitigations and CET status.
>>
>> Could you clarify who you think wants to consume this information from
>> these format-less binary blobs? They're strictly internal to the test
>> harness.
>>
>>> We might be able to get away with saying that we're operating strictly
>>> with defaults, and folding these *.bin's back into a program which is
>>> also linked with defaults, at which point the resulting binary ought to
>>> end up with a compatible .note.gnu.property section, but I'm not sure
>>> how convinced I am by this argument.
>>
>> Well, if we want to make it complicated, we could of course extract
>> the notes into a separate ELF object, and include that object in the
>> linking process. But these notes are wrong anyway: Whichever insns the
>> blobs use, test_x86_emulator does _not_ need CPU support for them, as
>> it'll suitably avoid executing any of the blobs. Similarly stack and
>> CET related information is not of interest for the blobs, only for the
>> "normal" object files.
> 
> Besides there not having been any response from you so far, I'd like to
> point out that stripping the section is also what H.J. suggests in the
> referenced bugzilla entry. (As said there, I don't view this as an
> excuse to break use cases like ours by default, but that's orthogonal.)

Unless I receive constructive feedback over the next week (during
which I'll be on PTO), I'm going to commit this as is the week after,
if need be without any acks. We shouldn't leave this broken; if you
see better ways to address the original issue, you can always submit
an incremental change later on.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 08:24:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:24:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123816.233619 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levmg-0001th-P7; Fri, 07 May 2021 08:24:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123816.233619; Fri, 07 May 2021 08:24:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levmg-0001ta-Kf; Fri, 07 May 2021 08:24:30 +0000
Received: by outflank-mailman (input) for mailman id 123816;
 Fri, 07 May 2021 08:24:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1levmf-0001tL-Cn
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:24:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1levmc-0002qE-HZ; Fri, 07 May 2021 08:24:26 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=ua82172827c7b5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1levmc-0003L4-6T; Fri, 07 May 2021 08:24:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Message-ID: <8773723448ea05a6ea0c843e408f6f05a04c2fd6.camel@xen.org>
Subject: Re: [PATCH RFC 2/2] xen/kexec: Reserve KEXEC_TYPE_LIVEUPDATE and
 KEXEC_RANGE_MA_LIVEUPDATE
From: Hongyan Xia <hx242@xen.org>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: dwmw2@infradead.org, paul@xen.org, raphning@amazon.com,
 maghul@amazon.com,  Julien Grall <jgrall@amazon.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Date: Fri, 07 May 2021 09:24:22 +0100
In-Reply-To: <20210506104259.16928-3-julien@xen.org>
References: <20210506104259.16928-1-julien@xen.org>
	 <20210506104259.16928-3-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 7bit

On Thu, 2021-05-06 at 11:42 +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Unfortunately, the code to support Live Update has already been merged
> in Kexec and shipped since 2.0.21. Reserve the IDs used by Kexec before
> they end up being re-used for a different purpose.
> 
> This patch reserves two IDs:
>     * KEXEC_TYPE_LIVEUPDATE: New operation to request Live Update
>     * KEXEC_RANGE_MA_LIVEUPDATE: New range to query the Live Update
>       area below Xen
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Hongyan Xia <hongyxia@amazon.com>

> ---
>  xen/include/public/kexec.h | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/include/public/kexec.h b/xen/include/public/kexec.h
> index 3f2a118381ba..650d2feb036f 100644
> --- a/xen/include/public/kexec.h
> +++ b/xen/include/public/kexec.h
> @@ -71,17 +71,22 @@
>   */
>  
>  /*
> - * Kexec supports two types of operation:
> + * Kexec supports three types of operation:
>   * - kexec into a regular kernel, very similar to a standard reboot
>   *   - KEXEC_TYPE_DEFAULT is used to specify this type
>   * - kexec into a special "crash kernel", aka kexec-on-panic
>   *   - KEXEC_TYPE_CRASH is used to specify this type
>   *   - parts of our system may be broken at kexec-on-panic time
>   *     - the code should be kept as simple and self-contained as
> possible
> + * - Live update into a new Xen, preserving all running domains
> + *   - KEXEC_TYPE_LIVEUPDATE is used to specify this type
> + *   - Xen performs non-cooperative live migration and stores live
> + *     update state in memory, passing it to the new Xen.
>   */
>  
> -#define KEXEC_TYPE_DEFAULT 0
> -#define KEXEC_TYPE_CRASH   1
> +#define KEXEC_TYPE_DEFAULT      0
> +#define KEXEC_TYPE_CRASH        1
> +#define KEXEC_TYPE_LIVEUPDATE   2
>  
>  
>  /* The kexec implementation for Xen allows the user to load two
> @@ -150,6 +155,8 @@ typedef struct xen_kexec_load_v1 {
>  #define KEXEC_RANGE_MA_EFI_MEMMAP 5 /* machine address and size of
>                                       * of the EFI Memory Map */
>  #define KEXEC_RANGE_MA_VMCOREINFO 6 /* machine address and size of
> vmcoreinfo */
> +/* machine address and size of the Live Update area below Xen */
> +#define KEXEC_RANGE_MA_LIVEUPDATE 7

Very minor nit: I tend to say "right below" Xen, since plain "below"
sounds like it could be anywhere. In the design doc we also said "just
below".

Hongyan



From xen-devel-bounces@lists.xenproject.org Fri May 07 08:25:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:25:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123821.233631 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levnu-0002Vq-2H; Fri, 07 May 2021 08:25:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123821.233631; Fri, 07 May 2021 08:25:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levnt-0002Vj-VH; Fri, 07 May 2021 08:25:45 +0000
Received: by outflank-mailman (input) for mailman id 123821;
 Fri, 07 May 2021 08:25:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1levns-0002Vb-8V
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:25:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2951210f-fd18-43b1-bf02-dd7eab40f013;
 Fri, 07 May 2021 08:25:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 99048AD21;
 Fri,  7 May 2021 08:25:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2951210f-fd18-43b1-bf02-dd7eab40f013
Subject: Ping: [PATCH] x86/AMD: also determine L3 cache size
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <7ffeec9f-2ce4-9122-4699-32c3ffb06a5d@suse.com>
 <3ff79e34-da70-85c3-0324-efa50313d5b4@citrix.com>
 <487bed52-bd1d-ceee-a85a-9bed9aad4712@suse.com>
Message-ID: <ebfb246f-ace8-f0eb-1860-70f74d894b4c@suse.com>
Date: Fri, 7 May 2021 10:25:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <487bed52-bd1d-ceee-a85a-9bed9aad4712@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 29.04.2021 11:21, Jan Beulich wrote:
> On 16.04.2021 16:21, Andrew Cooper wrote:
>> On 16/04/2021 14:20, Jan Beulich wrote:
>>> For Intel CPUs we record L3 cache size, hence we should also do so for
>>> AMD and alike.
>>>
>>> While making these additions, also make sure (throughout the function)
>>> that we don't needlessly overwrite prior values when the new value to be
>>> stored is zero.
>>>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>> ---
>>> I have to admit though that I'm not convinced the sole real use of the
>>> field (in flush_area_local()) is a good one - flushing an entire L3's
>>> worth of lines via CLFLUSH may not be more efficient than using WBINVD.
>>> But I didn't measure it (yet).
>>
>> WBINVD always needs a broadcast IPI to work correctly.
>>
>> CLFLUSH and friends let you do this from a single CPU, using cache
>> coherency to DTRT with the line, wherever it is.
>>
>>
>> Looking at that logic in flush_area_local(), I don't see how it can be
>> correct.  The WBINVD path is a decomposition inside the IPI, but in the
>> higher level helpers, I don't see how the "area too big, convert to
>> WBINVD" can be safe.
>>
>> All users of FLUSH_CACHE are flush_all(), except two PCI
>> Passthrough-restricted cases. MMUEXT_FLUSH_CACHE_GLOBAL looks to be
>> safe, while vmx_do_resume() has very dubious reasoning, and is dead code
>> I think, because I'm not aware of a VT-x capable CPU without WBINVD-exiting.
> 
> Besides my prior question on your reply, may I also ask what all of
> this means for the patch itself? After all you've been replying to
> the post-commit-message remark only so far.

As with the other patch I just pinged again, unless I hear back on the
patch itself by then, I intend to commit this the week after next, if
need be without any acks.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 08:28:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123826.233643 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levqJ-0003Ap-Hs; Fri, 07 May 2021 08:28:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123826.233643; Fri, 07 May 2021 08:28:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levqJ-0003Ai-DS; Fri, 07 May 2021 08:28:15 +0000
Received: by outflank-mailman (input) for mailman id 123826;
 Fri, 07 May 2021 08:28:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1levqI-0003Aa-QQ
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:28:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a8444e05-a5cf-4a51-bde7-62ddadee6161;
 Fri, 07 May 2021 08:28:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30E20AD21;
 Fri,  7 May 2021 08:28:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a8444e05-a5cf-4a51-bde7-62ddadee6161
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620376093; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kKfxfZCvK01MxBI+cBeAxrxYVh6JrA/8NF4V4f20pGw=;
	b=OeH9NUsHQ8e3Y0KsNIqpup9oIIfSRt0iBuXJ2fbgPdl3qr62aigNTY4qBDw6ZqJdz+ojib
	S9jCSN+BCg2qC1EbRk6Qw8srgBZnuu2GuJwPBE2u6X4Z18gYQQ8MtJX1UPfJimx2eHANwb
	gJJQtYMTCke+xpuM4HoL2ke3TZYapzo=
Subject: Ping: [PATCH] x86: fix build race when generating temporary object
 files (take 2)
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c35ad629-0dea-688a-199d-895186aeffb2@suse.com>
Message-ID: <a1c855ff-00c8-5cca-b938-38693ca53429@suse.com>
Date: Fri, 7 May 2021 10:28:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <c35ad629-0dea-688a-199d-895186aeffb2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.04.2021 11:54, Jan Beulich wrote:
> The original commit wasn't quite sufficient: Emptying DEPS is helpful
> only when nothing will get added to it subsequently, yet xen/Rules.mk,
> after including the local Makefile, amends DEPS with dependencies for
> objects living in sub-directories. For the purpose of suppressing the
> makefiles' dependencies on the .*.d2 files (and thus avoiding their
> re-generation) it is, however, not necessary at all to play with
> DEPS. Instead we can override DEPS_INCLUDE (which generally is a late-
> expansion variable).
> 
> Fixes: 761bb575ce97 ("x86: fix build race when generating temporary object files")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Ping?

Ian, I'm also including you here because IIRC the .*.d2 mechanism was
an invention of yours, so you may have an opinion.

Jan

> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -314,5 +314,5 @@ clean::
>  # Suppress loading of DEPS files for internal, temporary target files.  This
>  # then also suppresses re-generation of the respective .*.d2 files.
>  ifeq ($(filter-out .xen%.o,$(notdir $(MAKECMDGOALS))),)
> -DEPS:=
> +DEPS_INCLUDE:=
>  endif
> 
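
For reference, the late-expansion behaviour the patch relies on can be
sketched with a tiny stand-alone makefile (hypothetical file names; this
is not Xen's actual Rules.mk logic). Because DEPS_INCLUDE is recursively
expanded, it would pick up additions made to DEPS after the local
Makefile is read; overriding DEPS_INCLUDE itself therefore suppresses
inclusion even when DEPS is amended afterwards, which emptying DEPS
alone could not guarantee:

```shell
cat > /tmp/deps-demo.mk <<'EOF'
DEPS := .xen.o.d
DEPS_INCLUDE = $(DEPS:.d=.d2)   # late expansion: evaluated at use time
DEPS_INCLUDE :=                 # the override from the patch
DEPS += sub/.extra.o.d          # Rules.mk-style later amendment of DEPS
all:; @echo "would include: [$(DEPS_INCLUDE)]"
EOF
make -s -f /tmp/deps-demo.mk
```

With the override in place the recipe prints an empty list; dropping the
`DEPS_INCLUDE :=` line instead yields both .d2 names, including the one
added to DEPS after the fact.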



From xen-devel-bounces@lists.xenproject.org Fri May 07 08:30:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:30:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123830.233655 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levsA-0004Vd-UK; Fri, 07 May 2021 08:30:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123830.233655; Fri, 07 May 2021 08:30:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levsA-0004VW-QL; Fri, 07 May 2021 08:30:10 +0000
Received: by outflank-mailman (input) for mailman id 123830;
 Fri, 07 May 2021 08:30:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1levsA-0004VQ-0t
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:30:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c1f5e22-b35c-4624-8df6-b4e645a3b469;
 Fri, 07 May 2021 08:30:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 435D4AD21;
 Fri,  7 May 2021 08:30:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c1f5e22-b35c-4624-8df6-b4e645a3b469
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620376208; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oRIk+UB+w/f1mAxF+tpHlmcPAWvTPnhSeyrW5cMHe4M=;
	b=nQ0W2tGh1H15+TdXQsFFCrjei5sINpQrP1Il9CKJEI9xL4ASAxVcPj3dT9kmbPZzu20PDC
	QNTGEe3goYhteuOHIkNOMSvkJVhRdG+L8VfFkV3Y10iKg9AUQlMxyCHQQfxczUeuaoCy5k
	aXsT58Oah85jjNgrT9Rz4gqdXGYfN0c=
Subject: Re: [PATCH RFC 2/2] xen/kexec: Reserve KEXEC_TYPE_LIVEUPDATE and
 KEXEC_RANGE_MA_LIVEUPDATE
To: Hongyan Xia <hx242@xen.org>
Cc: dwmw2@infradead.org, paul@xen.org, raphning@amazon.com,
 maghul@amazon.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
References: <20210506104259.16928-1-julien@xen.org>
 <20210506104259.16928-3-julien@xen.org>
 <8773723448ea05a6ea0c843e408f6f05a04c2fd6.camel@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6b66d3a3-e4cd-f154-c8a4-05adef518733@suse.com>
Date: Fri, 7 May 2021 10:30:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <8773723448ea05a6ea0c843e408f6f05a04c2fd6.camel@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.05.2021 10:24, Hongyan Xia wrote:
> On Thu, 2021-05-06 at 11:42 +0100, Julien Grall wrote:
>> @@ -150,6 +155,8 @@ typedef struct xen_kexec_load_v1 {
>>  #define KEXEC_RANGE_MA_EFI_MEMMAP 5 /* machine address and size of
>>                                       * of the EFI Memory Map */
>>  #define KEXEC_RANGE_MA_VMCOREINFO 6 /* machine address and size of
>> vmcoreinfo */
>> +/* machine address and size of the Live Update area below Xen */
>> +#define KEXEC_RANGE_MA_LIVEUPDATE 7
> 
> Very minor nit: I tend to say "right below" Xen, since "below" sounds
> like it could be anywhere. In the design doc we also said "just below".

But is this a hard requirement, i.e. something that needs specifying
here?

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 08:34:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:34:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123835.233667 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levwI-0005CS-Gg; Fri, 07 May 2021 08:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123835.233667; Fri, 07 May 2021 08:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1levwI-0005CL-CN; Fri, 07 May 2021 08:34:26 +0000
Received: by outflank-mailman (input) for mailman id 123835;
 Fri, 07 May 2021 08:34:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1levwH-0005CF-6H
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:34:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10b54ca1-f322-4339-afc5-97740e4874f5;
 Fri, 07 May 2021 08:34:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A3AD4AF59;
 Fri,  7 May 2021 08:34:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10b54ca1-f322-4339-afc5-97740e4874f5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620376463; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=K9fVSfr49FNfNZyjj3LhpqYGR4JDCqjaghMVjN6Vub0=;
	b=UIw1iqQG8iO7eyz8jxU5BkYKWotgwJAtO50ZHms4/RugUeWetZ2EeOFoLCk+cuJCVx8p2y
	dXGNNrpgxJXPMti+Bf+719ZmwJqC1gp7cqTWStnpvU1H68ae+jN0X2X8iK+cBtB8+APv7g
	lYmlExSGlWnRcfAfRiJih2sf0RU3BaA=
Subject: Re: [PATCH] x86/shim: fix build when !PV32
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <08ea57f0-732e-fe12-409c-5487fb493429@suse.com>
 <YJT4cV62lqFgFKq/@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c00c73a5-0d9c-9e1e-20d7-5d65ac23976e@suse.com>
Date: Fri, 7 May 2021 10:34:24 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJT4cV62lqFgFKq/@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.05.2021 10:21, Roger Pau Monné wrote:
> On Fri, May 07, 2021 at 08:22:38AM +0200, Jan Beulich wrote:
>> In this case compat headers don't get generated (and aren't needed).
>> The changes made by 527922008bce ("x86: slim down hypercall handling
>> when !PV32") also weren't quite sufficient for this case.
>>
>> Try to limit #ifdef-ary by introducing two "fallback" #define-s.
>>
>> Fixes: d23d792478db ("x86: avoid building COMPAT code when !HVM && !PV32")
>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> --- a/xen/arch/x86/pv/shim.c
>> +++ b/xen/arch/x86/pv/shim.c
>> @@ -34,8 +34,6 @@
>>  #include <public/arch-x86/cpuid.h>
>>  #include <public/hvm/params.h>
>>  
>> -#include <compat/grant_table.h>
>> -
>>  #undef virt_to_mfn
>>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
>>  
>> @@ -300,8 +298,10 @@ static void write_start_info(struct doma
>>                                            &si->console.domU.mfn) )
>>          BUG();
>>  
>> +#ifdef CONFIG_PV32
>>      if ( compat )
>>          xlat_start_info(si, XLAT_start_info_console_domU);
>> +#endif
> 
> Would it help the compiler logic if the 'compat' local variable was
> made const?

No, because XLAT_start_info_console_domU is undeclared when there are
no compat headers.

> I'm wondering if there's a way we can force DCE from the compiler and
> avoid the ifdefs around the usage of compat.

The issue isn't with DCE - I believe the compiler does okay in that
regard. The issue is with things simply lacking a declaration /
definition. That's no different from e.g. struct fields living
inside an #ifdef - all their uses then need to live inside one as
well, no matter whether the compiler is capable of otherwise
recognizing the code as dead.
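
A minimal stand-alone sketch of that point (CONFIG_PV32 and
XLAT_START_INFO stand in for the real Xen identifiers here): dead-code
elimination only happens after the whole translation unit has parsed,
so every identifier must be declared even in a provably dead branch.

```c
/* Only the compat-enabled build provides this constant, mirroring how
 * the XLAT_* macros come from the generated compat headers. */
#ifdef CONFIG_PV32
# define XLAT_START_INFO 1
#endif

static int write_start_info_sketch(int compat)
{
#ifdef CONFIG_PV32
    if ( compat )
        return XLAT_START_INFO;  /* without the surrounding #ifdef this
                                  * line fails to compile when CONFIG_PV32
                                  * is off, dead or not */
#endif
    return 0;
}
```

Making `compat` const (or even a literal 0) doesn't help: the reference
to the undeclared macro is a compile error before any optimisation runs.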

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 08:40:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 08:40:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123839.233679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lew2H-0006Zh-59; Fri, 07 May 2021 08:40:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123839.233679; Fri, 07 May 2021 08:40:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lew2H-0006Za-26; Fri, 07 May 2021 08:40:37 +0000
Received: by outflank-mailman (input) for mailman id 123839;
 Fri, 07 May 2021 08:40:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lew2F-0006ZU-9S
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 08:40:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c57a31a-961e-4422-bd31-daa7bc2dffe5;
 Fri, 07 May 2021 08:40:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 477F3AF1F;
 Fri,  7 May 2021 08:40:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c57a31a-961e-4422-bd31-daa7bc2dffe5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620376833; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=lSP494LFhPRsS9tN9QB1W7zthLUosYhvW9R78Mu49p0=;
	b=VWOOTltafbDR208fCN3YxKjNbvWpYFimCIeMFcvUYqIGLyrIo9tcj7XebO8RnurwqRf6tW
	7BWDIdGPQaAnNH8D15zB2bf+PxkyIxZ07EJB0e33/1F9SbXD5WveWkFti52FOTWwi9K9oy
	kc56ChdF3NWnM2WC99yWhnoeFtkrFpA=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/CPUID: don't shrink hypervisor leaves
Message-ID: <0f5fe8d3-4c43-e60f-c585-67b2f23383ab@suse.com>
Date: Fri, 7 May 2021 10:40:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

This is a partial revert of 540d911c2813 ("x86/CPUID: shrink
max_{,sub}leaf fields according to actual leaf contents"). Andrew points
out that XXX.

Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Obviously the XXX wants filling in. So far I haven't really understood
what bad consequences there might be, but I can agree to undoing this
part of the original change, along the lines of why the Viridian-side
adjustment was also requested to be dropped (before the patch went in).

--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -964,15 +964,13 @@ void cpuid_hypervisor_leaves(const struc
     uint32_t base = is_viridian_domain(d) ? 0x40000100 : 0x40000000;
     uint32_t idx  = leaf - base;
     unsigned int limit = is_viridian_domain(d) ? p->hv2_limit : p->hv_limit;
-    unsigned int dflt = is_pv_domain(d) ? XEN_CPUID_MAX_PV_NUM_LEAVES
-                                        : XEN_CPUID_MAX_HVM_NUM_LEAVES;
 
     if ( limit == 0 )
         /* Default number of leaves */
-        limit = dflt;
+        limit = XEN_CPUID_MAX_NUM_LEAVES;
     else
         /* Clamp toolstack value between 2 and MAX_NUM_LEAVES. */
-        limit = min(max(limit, 2u), dflt);
+        limit = min(max(limit, 2u), XEN_CPUID_MAX_NUM_LEAVES + 0u);
 
     if ( idx > limit )
         return;
--- a/xen/include/public/arch-x86/cpuid.h
+++ b/xen/include/public/arch-x86/cpuid.h
@@ -113,10 +113,6 @@
 /* Max. address width in bits taking memory hotplug into account. */
 #define XEN_CPUID_MACHINE_ADDRESS_WIDTH_MASK (0xffu << 0)
 
-#define XEN_CPUID_MAX_PV_NUM_LEAVES 5
-#define XEN_CPUID_MAX_HVM_NUM_LEAVES 4
-#define XEN_CPUID_MAX_NUM_LEAVES \
-    (XEN_CPUID_MAX_PV_NUM_LEAVES > XEN_CPUID_MAX_HVM_NUM_LEAVES ? \
-     XEN_CPUID_MAX_PV_NUM_LEAVES : XEN_CPUID_MAX_HVM_NUM_LEAVES)
+#define XEN_CPUID_MAX_NUM_LEAVES 5
 
 #endif /* __XEN_PUBLIC_ARCH_X86_CPUID_H__ */
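
For illustration only (hypothetical helper name, not part of the patch),
the clamping in cpuid_hypervisor_leaves() after this change behaves as
follows: a toolstack-supplied limit of 0 selects the default, and any
other value is clamped into [2, XEN_CPUID_MAX_NUM_LEAVES].

```c
#define XEN_CPUID_MAX_NUM_LEAVES 5  /* value from the hunk above */

/* Plain-C sketch of the limit computation. */
static unsigned int clamp_leaf_limit(unsigned int limit)
{
    if ( limit == 0 )
        return XEN_CPUID_MAX_NUM_LEAVES;   /* default number of leaves */
    if ( limit < 2u )                      /* lower bound */
        limit = 2u;
    if ( limit > XEN_CPUID_MAX_NUM_LEAVES )/* upper bound */
        limit = XEN_CPUID_MAX_NUM_LEAVES;
    return limit;
}
```

(The `+ 0u` in the patch serves only to give both min() operands the
same unsigned type; it doesn't change the arithmetic sketched here.)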


From xen-devel-bounces@lists.xenproject.org Fri May 07 09:08:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 09:08:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123877.233717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lewTK-0001i1-Rm; Fri, 07 May 2021 09:08:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123877.233717; Fri, 07 May 2021 09:08:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lewTK-0001hu-Om; Fri, 07 May 2021 09:08:34 +0000
Received: by outflank-mailman (input) for mailman id 123877;
 Fri, 07 May 2021 09:08:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HO4=KC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lewTJ-0001ho-IK
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 09:08:33 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 83febd23-3032-4600-a440-487bd5397f40;
 Fri, 07 May 2021 09:08:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83febd23-3032-4600-a440-487bd5397f40
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620378512;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=bttZZDHKhTCsJHYHd2TqwvJKWpdXhYD5+OubWwFU1F8=;
  b=RYMyFjQ/GxoEKzAYKfDd++MlSNmhXh5boax4op6wVcbG237EWQ3+wBUF
   Xt53WETdf0PM/WI85tOMsZq3LD7o7kJKtffR2aYPZ3mm0ZTRjVMePhVcu
   z53ZJ2Z5XerX3Nl/LW8H+S5ggM6EnhqkYV/a5RslcaFI+EoUgUAAYY/fY
   s=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43089387
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,280,1613451600"; 
   d="scan'208";a="43089387"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JGwjTfrO8bP7NCr+E1tLDO8BODvEadnXZWI9+4YJMBEI/ShlX1RwsLiyNfY6CS28hK7SMYq/rLHvb09g/tEFt6OKI5pBwWsCAsrMInyYpRfqxrF/CIagczcO2bfSTKvqhfSOjwI/n70kkjMTrLq5IEzNeaq0cFIW4zHZ4fv2eXojWl2NUzsyq8JrA4HUri+ZYdal1AxzybLI3faf2Ix7iSzsNzCaP9gL9vRmee9txeh/wd2UVc/K2qEwPZXTFD/MpVDWoIiz6Zn4i1UuJJmVLVBzVTSl7g21bqDcDJFGz1Jp3Cqjn6AtNJrC1rYRC4gUweY94zLwzMbD+f9k45le0Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9IkQx80Hb3xVr4x6PYdH59pgvItJyMAQ6I1uyU1oXWE=;
 b=jaXSaOaE4nwvoBqhOE9eOTnM1Emb5ZcM5gClexhxwA8FSZOW+Qixou7NBjJdGxuSsFu/Cco6b0M6va9bASYFdneX1tZjofDVybJ8shvEDfKqh9EeoePWRcmMkJKsnGigBi8k39aDIBzv67HGecgsHGHmQdGXmMm3VEmIrHJC9tfQrJKmjBk+Gy3pJr1gHgumdM/GvON8e7cOndRK8LlWLMh0AEo7mcD7zJqra6utWlsJh0GfkMq6SQuCHrS/ziNbfKv4n+EhnlRgVZWeHE2dmgZi9na3giafL4mRM+XzhqlfSxWJQjOJxX5KgaRlfgBPClNUxSK76c1K5iFzr1MtlQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9IkQx80Hb3xVr4x6PYdH59pgvItJyMAQ6I1uyU1oXWE=;
 b=wmBG9q/IsBXmUh2RIrg/iYhTjHYevWCvHvWAiO0aPcz6q/b9JErD0yxVCKhCHRtcUxIQ5k46ZJLZRVIN4vATBxh5TMofknfLxjAjVT0BPsdS07lxA2IFvPRV4xOfk+mpG/JNsBevI2zIhhWW8Gt0ImVi8bUhonEr4vSZnmrDsFk=
Date: Fri, 7 May 2021 11:08:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/shim: fix build when !PV32
Message-ID: <YJUDhB1dBVDF8vmd@Air-de-Roger>
References: <08ea57f0-732e-fe12-409c-5487fb493429@suse.com>
 <YJT4cV62lqFgFKq/@Air-de-Roger>
 <c00c73a5-0d9c-9e1e-20d7-5d65ac23976e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <c00c73a5-0d9c-9e1e-20d7-5d65ac23976e@suse.com>
X-ClientProxiedBy: MR2P264CA0071.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::35) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2b691b29-d32b-4f23-39c6-08d91137ade3
X-MS-TrafficTypeDiagnostic: DM5PR03MB3371:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3371B7AA53898CFAF9E71E768F579@DM5PR03MB3371.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 2b691b29-d32b-4f23-39c6-08d91137ade3
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 09:08:28.9750
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7Q4Oktz3wXWx4CJn1g0yH2kiCOKszqDRqGPQiptBW+8nXpeKvJC2V0YTDuujxrNCdJcsVc8vGOv0xUutxnQ+KA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3371
X-OriginatorOrg: citrix.com

On Fri, May 07, 2021 at 10:34:24AM +0200, Jan Beulich wrote:
> On 07.05.2021 10:21, Roger Pau Monné wrote:
> > On Fri, May 07, 2021 at 08:22:38AM +0200, Jan Beulich wrote:
> >> In this case compat headers don't get generated (and aren't needed).
> >> The changes made by 527922008bce ("x86: slim down hypercall handling
> >> when !PV32") also weren't quite sufficient for this case.
> >>
> >> Try to limit #ifdef-ary by introducing two "fallback" #define-s.
> >>
> >> Fixes: d23d792478db ("x86: avoid building COMPAT code when !HVM && !PV32")
> >> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>
> >> --- a/xen/arch/x86/pv/shim.c
> >> +++ b/xen/arch/x86/pv/shim.c
> >> @@ -34,8 +34,6 @@
> >>  #include <public/arch-x86/cpuid.h>
> >>  #include <public/hvm/params.h>
> >>  
> >> -#include <compat/grant_table.h>
> >> -
> >>  #undef virt_to_mfn
> >>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> >>  
> >> @@ -300,8 +298,10 @@ static void write_start_info(struct doma
> >>                                            &si->console.domU.mfn) )
> >>          BUG();
> >>  
> >> +#ifdef CONFIG_PV32
> >>      if ( compat )
> >>          xlat_start_info(si, XLAT_start_info_console_domU);
> >> +#endif
> > 
> > Would it help the compiler logic if the 'compat' local variable was
> > made const?
> 
> No, because XLAT_start_info_console_domU is undeclared when there are
> no compat headers.
> 
> > I'm wondering if there's a way we can force DCE from the compiler and
> > avoid the ifdefs around the usage of compat.
> 
> The issue isn't with DCE - I believe the compiler does okay in that
> regard. The issue is with things simply lacking a declaration /
> definition. That's no different from e.g. struct fields living
> inside an #ifdef - all their uses then need to live inside one as
> well, no matter whether the compiler is capable of otherwise
> recognizing the code as dead.

Right, I see those are no longer declared anywhere. Since this is
gating compat code, would it make more sense to use CONFIG_COMPAT
rather than CONFIG_PV32 here?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 07 09:17:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 09:17:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123886.233732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lewbw-0003Fl-Tx; Fri, 07 May 2021 09:17:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123886.233732; Fri, 07 May 2021 09:17:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lewbw-0003Fe-Qg; Fri, 07 May 2021 09:17:28 +0000
Received: by outflank-mailman (input) for mailman id 123886;
 Fri, 07 May 2021 09:17:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lewbu-0003FC-TQ
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 09:17:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 74b66c6e-be2b-4029-9fb0-70f492ec8228;
 Fri, 07 May 2021 09:17:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F0BBDAF26;
 Fri,  7 May 2021 09:17:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74b66c6e-be2b-4029-9fb0-70f492ec8228
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620379045; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gDtyuZNsX6fgkwJ7eE+6AnDA+t3eK0Inq7oGPZZCrS4=;
	b=rI6qREDSGhMeOVNXCZdUzXLJKwA4pePvssrU2q7jXfNKdzVvPFHJhlILD2KLB1+XsMS4XX
	02KXJQZUbRaiq+4dpQk5fXlixRG2E9AafjLzPhyfrqavGMc1JjRYRSoFCSzPn3/gxn1ZpB
	uiBmxSq2tfAsylno09/XB1H9MK+ioVs=
Subject: Re: [PATCH] x86/shim: fix build when !PV32
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
References: <08ea57f0-732e-fe12-409c-5487fb493429@suse.com>
 <YJT4cV62lqFgFKq/@Air-de-Roger>
 <c00c73a5-0d9c-9e1e-20d7-5d65ac23976e@suse.com>
 <YJUDhB1dBVDF8vmd@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <31ce12b7-b6c0-c8c6-13f3-fbc6826d2810@suse.com>
Date: Fri, 7 May 2021 11:17:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJUDhB1dBVDF8vmd@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.05.2021 11:08, Roger Pau Monné wrote:
> On Fri, May 07, 2021 at 10:34:24AM +0200, Jan Beulich wrote:
>> On 07.05.2021 10:21, Roger Pau Monné wrote:
>>> On Fri, May 07, 2021 at 08:22:38AM +0200, Jan Beulich wrote:
>>>> In this case compat headers don't get generated (and aren't needed).
>>>> The changes made by 527922008bce ("x86: slim down hypercall handling
>>>> when !PV32") also weren't quite sufficient for this case.
>>>>
>>>> Try to limit #ifdef-ary by introducing two "fallback" #define-s.
>>>>
>>>> Fixes: d23d792478db ("x86: avoid building COMPAT code when !HVM && !PV32")
>>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> --- a/xen/arch/x86/pv/shim.c
>>>> +++ b/xen/arch/x86/pv/shim.c
>>>> @@ -34,8 +34,6 @@
>>>>  #include <public/arch-x86/cpuid.h>
>>>>  #include <public/hvm/params.h>
>>>>  
>>>> -#include <compat/grant_table.h>
>>>> -
>>>>  #undef virt_to_mfn
>>>>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
>>>>  
>>>> @@ -300,8 +298,10 @@ static void write_start_info(struct doma
>>>>                                            &si->console.domU.mfn) )
>>>>          BUG();
>>>>  
>>>> +#ifdef CONFIG_PV32
>>>>      if ( compat )
>>>>          xlat_start_info(si, XLAT_start_info_console_domU);
>>>> +#endif
>>>
>>> Would it help the compiler logic if the 'compat' local variable was
>>> made const?
>>
>> No, because XLAT_start_info_console_domU is undeclared when there are
>> no compat headers.
>>
>>> I'm wondering if there's a way we can force DCE from the compiler and
>>> avoid the ifdefs around the usage of compat.
>>
>> The issue isn't with DCE - I believe the compiler does okay in that
>> regard. The issue is with things simply lacking a declaration /
>> definition. That's no different from e.g. struct fields living
>> inside an #ifdef - all uses then need to as well, no matter whether
>> the compiler is capable of otherwise recognizing the code as dead.
> 
> Right, I see those are no longer declared anywhere. Since this is
> gating compat code, would it make more sense to use CONFIG_COMPAT
> rather than CONFIG_PV32 here?

I don't think so, no, from the abstract perspective that it's really
PV that the shim cares about, and hence other causes of COMPAT getting
selected shouldn't count.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 09:18:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 09:18:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123889.233744 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lewcg-0003om-7k; Fri, 07 May 2021 09:18:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123889.233744; Fri, 07 May 2021 09:18:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lewcg-0003of-4S; Fri, 07 May 2021 09:18:14 +0000
Received: by outflank-mailman (input) for mailman id 123889;
 Fri, 07 May 2021 09:18:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>) id 1lewcf-0003oX-Kr
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 09:18:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1lewcc-0003n8-U3; Fri, 07 May 2021 09:18:10 +0000
Received: from 54-240-197-239.amazon.com ([54.240.197.239]
 helo=ua82172827c7b5a.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.2:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <hx242@xen.org>)
 id 1lewcc-0007Je-HL; Fri, 07 May 2021 09:18:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Mime-Version:Content-Type:
	References:In-Reply-To:Date:Cc:To:From:Subject:Message-ID;
	bh=R7SEY595wkS8fOnvg/gOY/3rkZgF5EuFwck16pzVcAU=; b=5Ds4XHo+4MIYbDq5Xmle4RxQnB
	8AWH8EL+uhh/NhCP5mvMg4avIWEeBIDx67u0b3WLq5ZRJ7fnQbo6D33LUfg2rYEnV3WJomd/1F+WC
	ZFL9gCt5Htm6bNb9+mzvky0+bIRGawuOZKPuHjiqEqP6OsGO+y3SHVhHGmJrR/vyXUSs=;
Message-ID: <288e5af69a356060b9fce6c6fa77324950dac2c2.camel@xen.org>
Subject: Re: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
From: Hongyan Xia <hx242@xen.org>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: dwmw2@infradead.org, paul@xen.org, raphning@amazon.com,
 maghul@amazon.com,  Julien Grall <jgrall@amazon.com>, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Ian
 Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Date: Fri, 07 May 2021 10:18:06 +0100
In-Reply-To: <20210506104259.16928-2-julien@xen.org>
References: <20210506104259.16928-1-julien@xen.org>
	 <20210506104259.16928-2-julien@xen.org>
Content-Type: text/plain; charset="UTF-8"
X-Mailer: Evolution 3.28.5-0ubuntu0.18.04.2 
Mime-Version: 1.0
Content-Transfer-Encoding: 8bit

On Thu, 2021-05-06 at 11:42 +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Administrators often require updating the Xen hypervisor to address
> security vulnerabilities, introduce new features, or fix software
> defects.
> Currently, we offer the following methods to perform the update:
> 
>     * Rebooting the guests and the host: this is highly disrupting to
> running
>       guests.
>     * Migrating off the guests, rebooting the host: this currently
> requires
>       the guest to cooperate (see [1] for a non-cooperative solution)
> and it
>       may not always be possible to migrate it off (i.e lack of
> capacity, use
>       of local storage...).
>     * Live patching: This is the less disruptive of the existing
> methods.
>       However, it can be difficult to prepare the livepatch if the
> change is
>       large or there are data structures to update.

Might want to mention that live patching slowly consumes memory,
fragments the Xen image, and degrades performance (especially when the
patched code is on the critical path).

> 
> This patch will introduce a new proposal called "Live Update" which
> will
> activate new software without noticeable downtime (i.e no - or
> minimal -
> customer).
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>  docs/designs/liveupdate.md | 254
> +++++++++++++++++++++++++++++++++++++
>  1 file changed, 254 insertions(+)
>  create mode 100644 docs/designs/liveupdate.md
> 
> diff --git a/docs/designs/liveupdate.md b/docs/designs/liveupdate.md
> new file mode 100644
> index 000000000000..32993934f4fe
> --- /dev/null
> +++ b/docs/designs/liveupdate.md
> @@ -0,0 +1,254 @@
> +# Live Updating Xen
> +
> +## Background
> +
> +Administrators often require updating the Xen hypervisor to address
> security
> +vulnerabilities, introduce new features, or fix software
> defects.  Currently,
> +we offer the following methods to perform the update:
> +
> +    * Rebooting the guests and the host: this is highly disrupting
> to running
> +      guests.
> +    * Migrating off the guests, rebooting the host: this currently
> requires
> +      the guest to cooperate (see [1] for a non-cooperative
> solution) and it
> +      may not always be possible to migrate it off (i.e lack of
> capacity, use
> +      of local storage...).
> +    * Live patching: This is the less disruptive of the existing
> methods.
> +      However, it can be difficult to prepare the livepatch if the
> change is
> +      large or there are data structures to update.
> +
> +This document will present a new approach called "Live Update" which
> will
> +activate new software without noticeable downtime (i.e no - or
> minimal -
> +customer pain).
> +
> +## Terminology
> +
> +xen#1: Xen version currently active and running on a droplet.  This
> is the
> +“source” for the Live Update operation.  This version can actually
> be newer
> +than xen#2 in case of a rollback operation.
> +
> +xen#2: Xen version that's the “target” of the Live Update operation.
> This
> +version will become the active version after successful Live
> Update.  This
> +version of Xen can actually be older than xen#1 in case of a
> rollback
> +operation.

A bit redundant, since this was already mentioned in the xen#1 description.

> +
> +## High-level overview
> +
> +Xen has a framework to bring a new image of the Xen hypervisor in
> memory using
> +kexec.  The existing framework does not meet the baseline
> functionality for
> +Live Update, since kexec results in a restart for the hypervisor,
> host, Dom0,
> +and all the guests.
> +
> +The operation can be divided in roughly 4 parts:
> +
> +    1. Trigger: The operation will by triggered from outside the
> hypervisor
> +       (e.g. dom0 userspace).
> +    2. Save: The state will be stabilized by pausing the domains and
> +       serialized by xen#1.
> +    3. Hand-over: xen#1 will pass the serialized state and transfer
> control to
> +       xen#2.
> +    4. Restore: The state will be deserialized by xen#2.
> +
> +All the domains will be paused before xen#1 is starting to save the
> states,
> +and any domain that was running before Live Update will be unpaused
> after
> +xen#2 has finished to restore the states.  This is to prevent a
> domain to try
> +to modify the state of another domain while it is being
> saved/restored.
> +
> +The current approach could be seen as non-cooperative migration with
> a twist:
> +all the domains (including dom0) are not expected be involved in the
> Live
> +Update process.
> +
> +The major differences compare to live migration are:
> +
> +    * The state is not transferred to another host, but instead
> locally to
> +      xen#2.
> +    * The memory content or device state (for passthrough) does not
> need to
> +      be part of the stream. Instead we need to preserve it.
> +    * PV backends, device emulators, xenstored are not recreated but
> preserved
> +      (as these are part of dom0).
> +
> +
> +Domains in process of being destroyed (*XEN\_DOMCTL\_destroydomain*)
> will need
> +to be preserved because another entity may have mappings (e.g
> foreign, grant)
> +on them.
> +
> +## Trigger
> +
> +Live update is built on top of the kexec interface to prepare the
> command line,
> +load xen#2 and trigger the operation.  A new kexec type has been
> introduced
> +(*KEXEC\_TYPE\_LIVE\_UPDATE*) to notify Xen to Live Update.
> +
> +The Live Update will be triggered from outside the hypervisor (e.g.
> dom0
> +userspace).  Support for the operation has been added in kexec-tools 
> 2.0.21.
> +
> +All the domains will be paused before xen#1 is starting to save the
> states.
> +In Xen, *domain\_pause()* will pause the vCPUs as soon as they can
> be re-
> +scheduled.  In other words, a pause request will not wait for
> asynchronous
> +requests (e.g. I/O) to finish.  For Live Update, this is not an
> ideal time to
> +pause because it will require more xen#1 internal state to be
> transferred.
> +Therefore, all the domains will be paused at an architectural
> restartable
> +boundary.
> +
> +Live update will not happen synchronously to the request but when
> all the
> +domains are quiescent.  As domains running device emulators (e.g
> Dom0) will
> +be part of the process to quiesce HVM domains, we will need to let
> them run
> +until xen#1 is actually starting to save the state.  HVM vCPUs will
> be paused
> +as soon as any pending asynchronous request has finished.
> +
> +In the current implementation, all PV domains will continue to run
> while the
> +rest will be paused as soon as possible.  Note this approach is
> assuming that
> +device emulators are only running in PV domains.
> +
> +It should be easy to extend to PVH domains not requiring device
> emulations.
> +It will require more thought if we need to run device models in HVM
> domains as
> +there might be inter-dependency.
> +
> +## Save
> +
> +xen#1 will be responsible to preserve and serialize the state of
> each existing
> +domain and any system-wide state (e.g M2P).
> +
> +Each domain will be serialized independently using a modified
> migration stream,
> +if there is any dependency between domains (such as for IOREQ
> server) they will
> +be recorded using a domid. All the complexity of resolving the
> dependencies are
> +left to the restore path in xen#2 (more in the *Restore* section).
> +
> +At the moment, the domains are saved one by one in a single thread,
> but it
> +would be possible to consider multi-threading if it takes too long.
> Although
> +this may require some adjustment in the stream format.
> +
> +As we want to be able to Live Update between major versions of Xen
> (e.g Xen
> +4.11 -> Xen 4.15), the states preserved should not be a dump of Xen
> internal
> +structure but instead the minimal information that allow us to
> recreate the
> +domains.
> +
> +For instance, we don't want to preserve the frametable (and
> therefore
> +*struct page\_info*) as-is because the refcounting may be different
> across
> +between xen#1 and xen#2 (see XSA-299). Instead, we want to be able
> to recreate
> +*struct page\_info* based on minimal information that are considered
> stable
> +(such as the page type).
> +
> +Note that upgrading between version of Xen will also require all the
> hypercalls
> +to be stable. This will not be covered by this document.
> +
> +## Hand over
> +
> +### Memory usage restrictions
> +
> +xen#2 must take care not to use any memory pages which already
> belong to
> +guests.  To facilitate this, a number of contiguous region of memory
> are
> +reserved for the boot allocator, known as *live update bootmem*.
> +
> +xen#1 will always reserve a region just below Xen (the size is
> controlled by
> +the Xen command line parameter liveupdate) to allow Xen growing and
> provide
> +information about LiveUpdate (see the section *Breadcrumb*).  The
> region will be
> +passed to xen#2 using the same command line option but with the base
> address
> +specified.

The size passed via the command line option may not be the same,
depending on the size of xen#2.

> +
> +For simplicity, additional regions will be provided in the
> stream.  They will
> +consist of region that could be re-used by xen#2 during boot (such
> as the
> +xen#1's frametable memory).
> +
> +xen#2 must not use any pages outside those regions until it has
> consumed the
> +Live Update data stream and determined which pages are already in
> use by
> +running domains or need to be re-used as-is by Xen (e.g M2P).
> +
> +At run time, Xen may use memory from the reserved region for any
> purpose that
> +does not require preservation over a Live Update; in particular it
> __must__ not be
> +mapped to a domain or used by any Xen state requiring to be
> preserved (e.g
> +M2P).  In other word, the xenheap pages could be allocated from the
> reserved
> +regions if we remove the concept of shared xenheap pages.
> +
> +The xen#2's binary may be bigger (or smaller) compare to xen#1's
> binary.  So
> +for the purpose of loading xen#2 binary, kexec should treat the
> reserved memory
> +right below xen#1 and its region as a single contiguous space. xen#2
> will be
> +loaded right at the top of the contiguous space and the rest of the
> memory will
> +be the new reserved memory (this may shrink or grow).  For that
> reason, freed
> +init memory from xen#1 image is also treated as reserved liveupdate
> update

s/update//

This is explained quite well actually, but I wonder if we can move this
part closer to the liveupdate command line section (they both talk
about the initial bootmem region and Xen size changes). After that, we
can talk about multiple regions and how we should use them.

> +bootmem.
> +
> +### Live Update data stream
> +
> +During handover, xen#1 creates a Live Update data stream containing
> all the
> +information required by the new Xen#2 to restore all the domains.
> +
> +Data pages for this stream may be allocated anywhere in physical
> memory outside
> +the *live update bootmem* regions.
> +
> +As calling __vmap()__/__vunmap()__ has a cost on the downtime.  We
> want to reduce the
> +number of call to __vmap()__ when restoring the stream.  Therefore
> the stream
> +will be contiguously virtually mapped in xen#2.  xen#1 will create
> an array of

Using vmap during restore for a contiguous range sounds more like an
implementation and optimisation detail to me than an ABI requirement,
so I would s/the stream will be/the stream can be/.

> +MFNs of the allocated data pages, suitable for passing to
> __vmap()__.  The
> +array will be physically contiguous but the MFNs don't need to be
> physically
> +contiguous.
> +
> +### Breadcrumb
> +
> +Since the Live Update data stream is created during the final
> **kexec\_exec**
> +hypercall, its address cannot be passed on the command line to the
> new Xen
> +since the command line needs to have been set up by **kexec(8)** in
> userspace
> +long beforehand.
> +
> +Thus, to allow the new Xen to find the data stream, xen#1 places a
> breadcrumb
> +in the first words of the Live Update bootmem, containing the number
> of data
> +pages, and the physical address of the contiguous MFN array.
> +
> +### IOMMU
> +
> +Where devices are passed through to domains, it may not be possible
> to quiesce
> +those devices for the purpose of performing the update.
> +
> +If performing Live Update with assigned devices, xen#1 will leave
> the IOMMU
> +mappings active during the handover (thus implying that IOMMU page
> tables may
> +not be allocated in the *live update bootmem* region either).
> +
> +xen#2 must take control of the IOMMU without causing those mappings
> to become
> +invalid even for a short period of time.  In other words, xen#2
> should not
> +re-setup the IOMMUs.  On hardware which does not support Posted
> Interrupts,
> +interrupts may need to be generated on resume.
> +
> +## Restore
> +
> +After xen#2 initialized itself and map the stream, it will be
> responsible to
> +restore the state of the system and each domain.
> +
> +Unlike the save part, it is not possible to restore a domain in a
> single pass.
> +There are dependencies between:
> +
> +    1. different states of a domain.  For instance, the event
> channels ABI
> +       used (2l vs fifo) requires to be restored before restoring
> the event
> +       channels.
> +    2. the same "state" within a domain.  For instance, in case of
> PV domain,
> +       the pages' ownership requires to be restored before restoring
> the type
> +       of the page (e.g is it an L4, L1... table?).
> +
> +    3. domains.  For instance when restoring the grant mapping, it
> will be
> +       necessary to have the page's owner in hand to do proper
> refcounting.
> +       Therefore the pages' ownership have to be restored first.
> +
> +Dependencies will be resolved using either multiple passes (for
> dependency
> +type 2 and 3) or using a specific ordering between records (for
> dependency
> +type 1).
> +
> +Each domain will be restored in 3 passes:
> +
> +    * Pass 0: Create the domain and restore the P2M for HVM. This
> can be broken
> +      down in 3 parts:
> +      * Allocate a domain via _domain\_create()_ but skip part that
> requires
> +        extra records (e.g HAP, P2M).
> +      * Restore any parts which needs to be done before create the
> vCPUs. This
> +        including restoring the P2M and whether HAP is used.
> +      * Create the vCPUs. Note this doesn't restore the state of the
> vCPUs.
> +    * Pass 1: It will restore the pages' ownership and the grant-
> table frames
> +    * Pass 2: This steps will restore any domain states (e.g vCPU
> state, event
> +      channels) that wasn't

Sentence seems incomplete.

Hongyan



From xen-devel-bounces@lists.xenproject.org Fri May 07 09:20:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 09:20:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123893.233756 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leweU-0005BG-KA; Fri, 07 May 2021 09:20:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123893.233756; Fri, 07 May 2021 09:20:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leweU-0005B9-Gt; Fri, 07 May 2021 09:20:06 +0000
Received: by outflank-mailman (input) for mailman id 123893;
 Fri, 07 May 2021 09:20:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1leweT-000575-HB
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 09:20:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4fdd598-9783-4bc4-90bf-7f8ed22ed8f9;
 Fri, 07 May 2021 09:20:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0B59FAF26;
 Fri,  7 May 2021 09:20:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4fdd598-9783-4bc4-90bf-7f8ed22ed8f9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620379204; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=zvDMYiChOJui9Rp1r0kGRrMB0y61WeFkYTxR3fddviI=;
	b=k/JzqFRq6lFQ++Q4xBvyTvtHpA6GxiUlR25CXW8VHYwwtfXAfaYXNUePqqbiFJ+1Aqjh+I
	PZH/CspRRB29n8jpO8GeViTlURxWxT3C/AjCNvlVkcLFrWjTTt/HLV7+qHe284L3tEjsDn
	KQk3kcRZ8/IdPHZ39NJrmZz5HQ0wewE=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Christopher Clark <christopher.w.clark@gmail.com>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] Argo/XSM: add SILO hooks
Message-ID: <f47a6aa0-3624-5819-2e3a-43c5e492ae1b@suse.com>
Date: Fri, 7 May 2021 11:20:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In SILO mode, restrictions on inter-domain communication should apply
here along the lines of those for evtchn and gnttab.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
Really I was first thinking about the shim: shouldn't that proxy Argo
requests just like it does for gnttab ones? Only then did it occur to
me that there's also an implication for SILO mode.

--- a/xen/xsm/silo.c
+++ b/xen/xsm/silo.c
@@ -81,12 +81,35 @@ static int silo_grant_copy(struct domain
     return -EPERM;
 }
 
+#ifdef CONFIG_ARGO
+
+static int silo_argo_register_single_source(const struct domain *d1,
+                                            const struct domain *d2)
+{
+    if ( silo_mode_dom_check(d1, d2) )
+        return xsm_argo_register_single_source(d1, d2);
+    return -EPERM;
+}
+
+static int silo_argo_send(const struct domain *d1, const struct domain *d2)
+{
+    if ( silo_mode_dom_check(d1, d2) )
+        return xsm_argo_send(d1, d2);
+    return -EPERM;
+}
+
+#endif
+
 static struct xsm_operations silo_xsm_ops = {
     .evtchn_unbound = silo_evtchn_unbound,
     .evtchn_interdomain = silo_evtchn_interdomain,
     .grant_mapref = silo_grant_mapref,
     .grant_transfer = silo_grant_transfer,
     .grant_copy = silo_grant_copy,
+#ifdef CONFIG_ARGO
+    .argo_register_single_source = silo_argo_register_single_source,
+    .argo_send = silo_argo_send,
+#endif
 };
 
 void __init silo_init(void)


From xen-devel-bounces@lists.xenproject.org Fri May 07 09:52:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 09:52:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123902.233770 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lex9A-0008Mz-40; Fri, 07 May 2021 09:51:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123902.233770; Fri, 07 May 2021 09:51:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lex9A-0008Ms-10; Fri, 07 May 2021 09:51:48 +0000
Received: by outflank-mailman (input) for mailman id 123902;
 Fri, 07 May 2021 09:51:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lex98-0008Mm-Fv
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 09:51:46 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lex98-0004Uj-D7
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 09:51:46 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1lex98-0001VO-C9
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 09:51:46 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1lex95-0006Yk-1i; Fri, 07 May 2021 10:51:43 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=O7hDt9LQn4hn+QaJ1ujSB3S61m9ILuUEVtCZVqhdcLA=; b=BsfAtd5DHgdsDXubZ4+3yJUx71
	ZY+w+Fja72bP6m+9IV+C0BpUZRG9y+EtHKgw26COggLnfeMogpCPMcipoL2dwlBWAqqO4+RwTzpDS
	h0pmkwmFP6gaC2KCw++e3OA4fPw28dcyq6/tPhy6SO5SkIcrNU/pfW89cu21SShsAXKg=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24725.3502.744677.621162@mariner.uk.xensource.com>
Date: Fri, 7 May 2021 10:51:42 +0100
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org,
    George Dunlap <george.dunlap@citrix.com>
Subject: Re: [xen-4.12-testing test] 161776: regressions - FAIL
In-Reply-To: <d0e817e2-4097-239a-ee16-95f23e9ca52d@suse.com>
References: <osstest-161776-mainreport@xen.org>
	<24724.6389.95487.1868@mariner.uk.xensource.com>
	<d0e817e2-4097-239a-ee16-95f23e9ca52d@suse.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Jan Beulich writes ("Re: [xen-4.12-testing test] 161776: regressions - FAIL"):
> I did consider this as an option, but I don't think it's this simple.
> Neither 4.11 and older nor 4.13 and newer exhibit such behavior. In
> fact in 4.12 we appear to see pushes blocked now because there was a
> success of this test once, in flight 159418. So while this may not
> be a regression within 4.12 (and hence a force push may still be an
> appropriate step), there is something wrong there with 4.12, I would
> say. It being out of (general) support may of course mean we want to
> leave it at that. Better, for the remaining time the branch is in
> security-only maintenance state, would of course be to identify the
> (presumably) missing backport ... Of course that's easy to say for
> me, because I don't think I would realistically be the one to
> undertake such an exercise.

Hrm.  For now I have force pushed it as I suggested.

Ian.


From xen-devel-bounces@lists.xenproject.org Fri May 07 09:52:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 09:52:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123905.233783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lexAF-0000VL-Fa; Fri, 07 May 2021 09:52:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123905.233783; Fri, 07 May 2021 09:52:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lexAF-0000VE-As; Fri, 07 May 2021 09:52:55 +0000
Received: by outflank-mailman (input) for mailman id 123905;
 Fri, 07 May 2021 09:52:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lexAE-0000V6-9P
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 09:52:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0db9f69d-22b7-4b5e-a4d7-e2226770eeb7;
 Fri, 07 May 2021 09:52:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5285CB11A;
 Fri,  7 May 2021 09:52:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0db9f69d-22b7-4b5e-a4d7-e2226770eeb7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620381171; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FioxbNzS+5ZvnGrJ3hcyi3r9bxQktgZckmy+alRpHMk=;
	b=ilyAKBJxbNp6zFANHnWHgR7xAbsrSzkTxGQebDM+HFQnWAVZXsN3DAKB6QhZ9EeBow2pHr
	5JrTk7Wy0fZ9e6+muYHGR9pMlPPVkzYhN3yNmlk8B/X55H5U6T0Of2xnEix7ZCPigMp1bF
	dSgcYiF9bCD4oKJ+XAT11fcA2tZt3Ag=
Subject: Re: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
To: Julien Grall <julien@xen.org>
Cc: dwmw2@infradead.org, paul@xen.org, hongyxia@amazon.com,
 raphning@amazon.com, maghul@amazon.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210506104259.16928-1-julien@xen.org>
 <20210506104259.16928-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f51b2ef6-3998-7371-cea9-502c5c9f8afa@suse.com>
Date: Fri, 7 May 2021 11:52:52 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210506104259.16928-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 06.05.2021 12:42, Julien Grall wrote:
> +## High-level overview
> +
> +Xen has a framework to bring a new image of the Xen hypervisor in memory using
> +kexec.  The existing framework does not meet the baseline functionality for
> +Live Update, since kexec results in a restart for the hypervisor, host, Dom0,
> +and all the guests.
> +
> +The operation can be divided into roughly 4 parts:
> +
> +    1. Trigger: The operation will be triggered from outside the hypervisor
> +       (e.g. dom0 userspace).
> +    2. Save: The state will be stabilized by pausing the domains and
> +       serialized by xen#1.
> +    3. Hand-over: xen#1 will pass the serialized state and transfer control to
> +       xen#2.
> +    4. Restore: The state will be deserialized by xen#2.
> +
> +All the domains will be paused before xen#1 starts to save the states,
> +and any domain that was running before Live Update will be unpaused after
> +xen#2 has finished restoring the states.  This is to prevent a domain from
> +trying to modify the state of another domain while it is being saved/restored.
> +
> +The current approach could be seen as non-cooperative migration with a twist:
> +none of the domains (including dom0) are expected to be involved in the Live
> +Update process.
> +
> +The major differences compared to live migration are:
> +
> +    * The state is not transferred to another host, but instead locally to
> +      xen#2.
> +    * The memory content or device state (for passthrough) does not need to
> +      be part of the stream. Instead we need to preserve it.
> +    * PV backends, device emulators, xenstored are not recreated but preserved
> +      (as these are part of dom0).

Isn't dom0 too limiting here?

> +## Trigger
> +
> +Live update is built on top of the kexec interface to prepare the command line,
> +load xen#2 and trigger the operation.  A new kexec type has been introduced
> +(*KEXEC\_TYPE\_LIVE\_UPDATE*) to tell Xen to perform a Live Update.
> +
> +The Live Update will be triggered from outside the hypervisor (e.g. dom0
> +userspace).  Support for the operation has been added in kexec-tools 2.0.21.
> +
> +All the domains will be paused before xen#1 starts to save the states.
> +In Xen, *domain\_pause()* will pause the vCPUs as soon as they can be
> +rescheduled.  In other words, a pause request will not wait for asynchronous
> +requests (e.g. I/O) to finish.  For Live Update, this is not an ideal time to
> +pause because it would require more xen#1 internal state to be transferred.
> +Therefore, all the domains will be paused at an architecturally restartable
> +boundary.

To me this leaves entirely unclear what this then means. domain_pause()
not being suitable is one thing, but what _is_ suitable seems worth
mentioning. Among other things I'd be curious to know what this would
mean for pending hypercall continuations.
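
As an illustration only, the intended high-level sequencing (pause
everything, serialize in xen#1, hand over, restore in xen#2, then unpause
only the previously-running domains) can be sketched as below.  All names
and data shapes here are invented for the sketch; the real implementation
lives inside the hypervisor:

```python
def live_update(domains, serialize, deserialize):
    """Model of the 4-part flow: trigger -> save -> hand-over -> restore.

    domains: list of dicts with "name" and "state" keys (illustrative).
    serialize/deserialize: stand-ins for xen#1's save and xen#2's restore.
    """
    # Remember which domains were running before the operation started.
    running = {d["name"] for d in domains if d["state"] == "running"}

    # Save: all domains are paused before any state is serialized, so no
    # domain can mutate another domain's state mid-serialization.
    for d in domains:
        d["state"] = "paused"
    stream = [serialize(d) for d in domains]

    # Hand-over + Restore: xen#2 deserializes the stream.
    restored = [deserialize(rec) for rec in stream]

    # Only domains that were running before Live Update are unpaused.
    for d in restored:
        d["state"] = "running" if d["name"] in running else "paused"
    return restored
```

The point the model makes is the invariant, not the mechanics: pausing is
global and happens strictly before the first byte is saved, and unpausing
happens strictly after the last byte is restored.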

> +## Save
> +
> +xen#1 will be responsible for preserving and serializing the state of each
> +existing domain and any system-wide state (e.g. M2P).
> +
> +Each domain will be serialized independently using a modified migration stream;
> +if there are any dependencies between domains (such as for an IOREQ server),
> +they will be recorded using a domid. All the complexity of resolving the
> +dependencies is left to the restore path in xen#2 (more in the *Restore*
> +section).
> +
> +At the moment, the domains are saved one by one in a single thread, but it
> +would be possible to consider multi-threading if it takes too long, although
> +this may require some adjustment to the stream format.
> +
> +As we want to be able to Live Update between major versions of Xen (e.g. Xen
> +4.11 -> Xen 4.15), the state preserved should not be a dump of Xen internal
> +structures but instead the minimal information that allows us to recreate the
> +domains.
> +
> +For instance, we don't want to preserve the frametable (and therefore
> +*struct page\_info*) as-is because the refcounting may be different
> +between xen#1 and xen#2 (see XSA-299). Instead, we want to be able to recreate
> +*struct page\_info* based on minimal information that is considered stable
> +(such as the page type).

Perhaps leaving it at this very generic description is fine, but I can
easily see cases (which may not even be corner ones) where this quickly
gets problematic: What if xen#2 has state xen#1 didn't (properly) record?
Such information may not be possible to take out of thin air. Is the
consequence then that in such a case LU won't work? If so, is it perhaps
worthwhile having a Limitations section somewhere?
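
The "minimal stable information" idea above can be modelled roughly as
follows.  The field names are invented for the illustration; the point is
that version-specific state (here, the reference count, whose rules changed
with XSA-299) is recomputed by xen#2 rather than carried in the stream:

```python
# Facts both xen#1 and xen#2 are assumed to agree on (illustrative names).
STABLE_FIELDS = ("mfn", "owner_domid", "page_type")

def save_page(page_info):
    """xen#1: extract only the stable facts about a page for the stream."""
    return {f: page_info[f] for f in STABLE_FIELDS}

def restore_page(record, refcount_rules):
    """xen#2: rebuild its own page_info from the stable facts.

    refcount_rules is xen#2's (possibly different) policy, applied to the
    stable page type rather than copied from xen#1's count_info.
    """
    page = dict(record)
    page["count_info"] = refcount_rules(record["page_type"])
    return page
```

Jan's question about state xen#1 never recorded still stands: this scheme
only works for information that is derivable from the stable facts.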

> +## Hand over
> +
> +### Memory usage restrictions
> +
> +xen#2 must take care not to use any memory pages which already belong to
> +guests.  To facilitate this, a number of contiguous regions of memory are
> +reserved for the boot allocator, known as *live update bootmem*.
> +
> +xen#1 will always reserve a region just below Xen (the size is controlled by
> +the Xen command line parameter *liveupdate*) to allow Xen to grow and to
> +provide information about Live Update (see the section *Breadcrumb*).  The
> +region will be passed to xen#2 using the same command line option but with the
> +base address specified.

I particularly don't understand the "to allow Xen growing" aspect here:
xen#2 needs to be placed in a different memory range anyway until xen#1
has handed over control. Are you suggesting it gets moved over to xen#1's
original physical range (not necessarily an exact match), and then
perhaps to start below where xen#1 started? Why would you do this? Xen
intentionally lives at a 2Mb boundary, such that in principle (for EFI:
in fact) large page mappings are possible. I also see no reason to reuse
the same physical area of memory for Xen itself - all you need is for
Xen's virtual addresses to be properly mapped to the new physical range.
I wonder what I'm missing here.

> +For simplicity, additional regions will be provided in the stream.  They will
> +consist of regions that can be re-used by xen#2 during boot (such as
> +xen#1's frametable memory).
> +
> +xen#2 must not use any pages outside those regions until it has consumed the
> +Live Update data stream and determined which pages are already in use by
> +running domains or need to be re-used as-is by Xen (e.g M2P).

Is the M2P really in the "need to be re-used" group, not just "can
be re-used for simplicity and efficiency reasons"?
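
The memory-usage restriction can be sketched as a boot allocator that
refuses to fall back to general memory until the stream has been consumed.
This is a model of the rule, not of Xen's actual boot allocator; the class
and region layout are invented:

```python
class LiveUpdateBootAllocator:
    """Boot-time frame allocator restricted to the reserved LU bootmem."""

    def __init__(self, regions):
        # regions: list of (start_mfn, end_mfn) half-open ranges that
        # xen#1 guaranteed contain no guest-owned pages.
        self.free = [mfn for start, end in regions for mfn in range(start, end)]
        self.stream_consumed = False

    def alloc(self):
        if not self.free:
            if not self.stream_consumed:
                # Refuse to touch general memory: any other frame may
                # belong to a running domain we haven't identified yet.
                raise MemoryError("LU bootmem exhausted before stream consumed")
            raise MemoryError("out of memory")
        return self.free.pop()
```

Once the stream has been processed and the in-use pages are known, the
allocator could be reseeded with the genuinely free frames.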

> +## Restore
> +
> +After xen#2 has initialized itself and mapped the stream, it will be
> +responsible for restoring the state of the system and each domain.
> +
> +Unlike the save part, it is not possible to restore a domain in a single pass.
> +There are dependencies between:
> +
> +    1. different states of a domain.  For instance, the event channel ABI
> +       used (2l vs fifo) needs to be restored before restoring the event
> +       channels.
> +    2. the same "state" within a domain.  For instance, in the case of a PV
> +       domain, the pages' ownership needs to be restored before restoring the
> +       type of each page (e.g. is it an L4, L1... table?).
> +
> +    3. domains.  For instance when restoring the grant mappings, it will be
> +       necessary to have the page's owner in hand to do proper refcounting.
> +       Therefore the pages' ownership has to be restored first.
> +
> +Dependencies will be resolved using either multiple passes (for dependency
> +type 2 and 3) or using a specific ordering between records (for dependency
> +type 1).
> +
> +Each domain will be restored in 3 passes:
> +
> +    * Pass 0: Create the domain and restore the P2M for HVM. This can be broken
> +      down into 3 parts:
> +      * Allocate a domain via _domain\_create()_ but skip the parts that
> +        require extra records (e.g. HAP, P2M).
> +      * Restore any parts which need to be done before creating the vCPUs.
> +        This includes restoring the P2M and whether HAP is used.
> +      * Create the vCPUs. Note this doesn't restore the state of the vCPUs.
> +    * Pass 1: Restore the pages' ownership and the grant-table frames.
> +    * Pass 2: Restore any domain state (e.g. vCPU state, event
> +      channels) that wasn't covered by the previous passes.

What about foreign mappings (which are part of the P2M)? Can they be
validly restored prior to restoring page ownership? And to what extent do
you trust xen#1's state to be fully consistent anyway, rather than
perhaps checking it?
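
For reference, the pass ordering described above boils down to "every
domain finishes pass N before any domain starts pass N+1", so that
cross-domain facts (e.g. page ownership, needed for grant-mapping
refcounts later) are globally available.  A minimal model, with invented
pass labels condensed from the document:

```python
# Per-domain restore passes, in order (labels are illustrative).
RESTORE_PASSES = (
    "create_domain_and_p2m",   # pass 0: domain_create(), P2M/HAP, vCPUs
    "pages_and_grant_frames",  # pass 1: page ownership, grant-table frames
    "remaining_state",         # pass 2: vCPU state, event channels, ...
)

def restore_all(domains, apply_pass):
    """Run each pass over all domains before moving to the next pass.

    Within one pass, domains have no dependency on each other, so the
    inner loop could in principle run in parallel.
    """
    log = []
    for pass_name in RESTORE_PASSES:
        for dom in domains:
            apply_pass(dom, pass_name)
            log.append((pass_name, dom))
    return log
```

The pass boundary is the synchronization point; that is what makes the
claimed per-pass parallelism plausible, modulo the "dependency" ambiguity
raised below.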

> +A domain should not have a dependency on another domain within the same pass.
> +Therefore it would be possible to take advantage of all the CPUs to restore
> +domains in parallel and reduce the overall downtime.

"Dependency" may be ambiguous here. For example, an interdomain event
channel to me necessarily expresses a dependency between two domains.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 09:54:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 09:54:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123911.233795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lexBL-0001C3-Ti; Fri, 07 May 2021 09:54:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123911.233795; Fri, 07 May 2021 09:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lexBL-0001Bv-QZ; Fri, 07 May 2021 09:54:03 +0000
Received: by outflank-mailman (input) for mailman id 123911;
 Fri, 07 May 2021 09:54:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lexBL-0001Bm-Fd; Fri, 07 May 2021 09:54:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lexBL-0004Wu-CM; Fri, 07 May 2021 09:54:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lexBL-0007LV-0H; Fri, 07 May 2021 09:54:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lexBK-0000wR-W2; Fri, 07 May 2021 09:54:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LWFeMZE+4iDGGxXOF2BSKl31fNzT8hcmNzJZIEHu3H0=; b=Q7FuT1abFDNr55eyUgvZo9fag9
	mOzR2W0oSypszugEKEOnnltCXm5W4/VZRiKZnuQZc0RnfJbZKaWhfhYNEKhTDnmqusE8WBxXSNEmW
	jzURW1QgQhBQbGqd8dbDgAaLxbl27nK2sSjosyX9xZ7B2zCmK6kF3V/ngV4/n9H1DcFk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161821-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-4.12-testing test] 161821: regressions - FAIL
X-Osstest-Failures:
    xen-4.12-testing:test-amd64-amd64-xl-qcow2:guest-localmigrate/x10:fail:regression
    xen-4.12-testing:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-start/debianhvm.repeat:fail:heisenbug
    xen-4.12-testing:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-4.12-testing:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=5984905b2638df87a0262d1ee91f0a6e14a86df6
X-Osstest-Versions-That:
    xen=4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 09:54:02 +0000

flight 161821 xen-4.12-testing real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161821/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qcow2    19 guest-localmigrate/x10   fail REGR. vs. 159418

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 20 guest-start/debianhvm.repeat fail pass in 161807

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159418
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 159418
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159418
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 159418
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159418
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 159418
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159418
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 159418
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 159418
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  5984905b2638df87a0262d1ee91f0a6e14a86df6
baseline version:
 xen                  4cf5929606adc2fb1ab4e2921c14ba4b8046ecd1

Last test of basis   159418  2021-02-16 15:06:11 Z   79 days
Failing since        160128  2021-03-18 14:36:18 Z   49 days   68 attempts
Testing same since   161776  2021-05-04 19:06:01 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Anthony PERARD <anthony.perard@citrix.com>
  Christian Lindig <christian.lindig@citrix.com>
  Edwin Török <edvin.torok@citrix.com>
  George Dunlap <george.dunlap@citrix.com>
  Ian Jackson <ian.jackson@eu.citrix.com>
  Igor Druzhinin <igor.druzhinin@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Stefano Stabellini <sstabellini@kernel.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 324 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 07 10:00:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 10:00:17 +0000
Subject: Re: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
To: Hongyan Xia <hx242@xen.org>, xen-devel@lists.xenproject.org
Cc: dwmw2@infradead.org, paul@xen.org, raphning@amazon.com,
 maghul@amazon.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210506104259.16928-1-julien@xen.org>
 <20210506104259.16928-2-julien@xen.org>
 <288e5af69a356060b9fce6c6fa77324950dac2c2.camel@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <e5d53be2-7533-c1d6-98c1-7a310fd85e07@xen.org>
Date: Fri, 7 May 2021 11:00:02 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <288e5af69a356060b9fce6c6fa77324950dac2c2.camel@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit


Hi Hongyan,

On 07/05/2021 10:18, Hongyan Xia wrote:
> On Thu, 2021-05-06 at 11:42 +0100, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Administrators often require updating the Xen hypervisor to address
>> security vulnerabilities, introduce new features, or fix software
>> defects.
>> Currently, we offer the following methods to perform the update:
>>
>>      * Rebooting the guests and the host: this is highly disrupting to
>> running
>>        guests.
>>      * Migrating off the guests, rebooting the host: this currently
>> requires
>>        the guest to cooperate (see [1] for a non-cooperative solution)
>> and it
>>        may not always be possible to migrate it off (i.e lack of
>> capacity, use
>>        of local storage...).
>>      * Live patching: This is the less disruptive of the existing
>> methods.
>>        However, it can be difficult to prepare the livepatch if the
>> change is
>>        large or there are data structures to update.
> 
> Might want to mention that live patching slowly consumes memory and
> fragments the Xen image and degrades performance (especially when the
> patched code is on the critical path).
My goal wasn't to list all the drawbacks of each existing method. 
Instead, I wanted to give a simple, important reason for each of them.

I would prefer to keep the list as it is unless someone needs more 
arguments about introducing a new approach.

> 
>>
>> This patch will introduce a new proposal called "Live Update" which
>> will
>> activate new software without noticeable downtime (i.e no - or
>> minimal -
>> customer pain).
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>>   docs/designs/liveupdate.md | 254
>> +++++++++++++++++++++++++++++++++++++
>>   1 file changed, 254 insertions(+)
>>   create mode 100644 docs/designs/liveupdate.md
>>
>> diff --git a/docs/designs/liveupdate.md b/docs/designs/liveupdate.md
>> new file mode 100644
>> index 000000000000..32993934f4fe
>> --- /dev/null
>> +++ b/docs/designs/liveupdate.md
>> @@ -0,0 +1,254 @@
>> +# Live Updating Xen
>> +
>> +## Background
>> +
>> +Administrators often require updating the Xen hypervisor to address
>> security
>> +vulnerabilities, introduce new features, or fix software
>> defects.  Currently,
>> +we offer the following methods to perform the update:
>> +
>> +    * Rebooting the guests and the host: this is highly disrupting
>> to running
>> +      guests.
>> +    * Migrating off the guests, rebooting the host: this currently
>> requires
>> +      the guest to cooperate (see [1] for a non-cooperative
>> solution) and it
>> +      may not always be possible to migrate it off (i.e lack of
>> capacity, use
>> +      of local storage...).
>> +    * Live patching: This is the less disruptive of the existing
>> methods.
>> +      However, it can be difficult to prepare the livepatch if the
>> change is
>> +      large or there are data structures to update.
>> +
>> +This document will present a new approach called "Live Update" which
>> will
>> +activate new software without noticeable downtime (i.e no - or
>> minimal -
>> +customer pain).
>> +
>> +## Terminology
>> +
>> +xen#1: Xen version currently active and running on a droplet.  This
>> is the
>> +“source” for the Live Update operation.  This version can actually
>> be newer
>> +than xen#2 in case of a rollback operation.
>> +
>> +xen#2: Xen version that's the “target” of the Live Update operation.
>> This
>> +version will become the active version after successful Live
>> Update.  This
>> +version of Xen can actually be older than xen#1 in case of a
>> rollback
>> +operation.
> 
> A bit redundant since it was mentioned in Xen 1 already.

Definitions tend to be redundant, so I would prefer to keep it like that.

> 
>> +
>> +## High-level overview
>> +
>> +Xen has a framework to bring a new image of the Xen hypervisor in
>> memory using
>> +kexec.  The existing framework does not meet the baseline
>> functionality for
>> +Live Update, since kexec results in a restart for the hypervisor,
>> host, Dom0,
>> +and all the guests.
>> +
>> +The operation can be divided in roughly 4 parts:
>> +
>> +    1. Trigger: The operation will by triggered from outside the
>> hypervisor
>> +       (e.g. dom0 userspace).
>> +    2. Save: The state will be stabilized by pausing the domains and
>> +       serialized by xen#1.
>> +    3. Hand-over: xen#1 will pass the serialized state and transfer
>> control to
>> +       xen#2.
>> +    4. Restore: The state will be deserialized by xen#2.
>> +
>> +All the domains will be paused before xen#1 is starting to save the
>> states,
>> +and any domain that was running before Live Update will be unpaused
>> after
>> +xen#2 has finished to restore the states.  This is to prevent a
>> domain to try
>> +to modify the state of another domain while it is being
>> saved/restored.
>> +
>> +The current approach could be seen as non-cooperative migration with
>> a twist:
>> +all the domains (including dom0) are not expected to be involved in the
>> Live
>> +Update process.
>> +
>> +The major differences compare to live migration are:
>> +
>> +    * The state is not transferred to another host, but instead
>> locally to
>> +      xen#2.
>> +    * The memory content or device state (for passthrough) does not
>> need to
>> +      be part of the stream. Instead we need to preserve it.
>> +    * PV backends, device emulators, xenstored are not recreated but
>> preserved
>> +      (as these are part of dom0).
>> +
>> +
>> +Domains in process of being destroyed (*XEN\_DOMCTL\_destroydomain*)
>> will need
>> +to be preserved because another entity may have mappings (e.g
>> foreign, grant)
>> +on them.
>> +
>> +## Trigger
>> +
>> +Live update is built on top of the kexec interface to prepare the
>> command line,
>> +load xen#2 and trigger the operation.  A new kexec type has been
>> introduced
>> +(*KEXEC\_TYPE\_LIVE\_UPDATE*) to notify Xen to Live Update.
>> +
>> +The Live Update will be triggered from outside the hypervisor (e.g.
>> dom0
>> +userspace).  Support for the operation has been added in kexec-tools
>> 2.0.21.
>> +
>> +All the domains will be paused before xen#1 is starting to save the
>> states.
>> +In Xen, *domain\_pause()* will pause the vCPUs as soon as they can
>> be re-
>> +scheduled.  In other words, a pause request will not wait for
>> asynchronous
>> +requests (e.g. I/O) to finish.  For Live Update, this is not an
>> ideal time to
>> +pause because it will require more xen#1 internal state to be
>> transferred.
>> +Therefore, all the domains will be paused at an architectural
>> restartable
>> +boundary.
>> +
>> +Live update will not happen synchronously to the request but when
>> all the
>> +domains are quiescent.  As domains running device emulators (e.g
>> Dom0) will
>> +be part of the process to quiesce HVM domains, we will need to let
>> them run
>> +until xen#1 is actually starting to save the state.  HVM vCPUs will
>> be paused
>> +as soon as any pending asynchronous request has finished.
>> +
>> +In the current implementation, all PV domains will continue to run
>> while the
>> +rest will be paused as soon as possible.  Note this approach is
>> assuming that
>> +device emulators are only running in PV domains.
>> +
>> +It should be easy to extend to PVH domains not requiring device
>> emulations.
>> +It will require more thought if we need to run device models in HVM
>> domains as
>> +there might be inter-dependency.
>> +
>> +## Save
>> +
>> +xen#1 will be responsible to preserve and serialize the state of
>> each existing
>> +domain and any system-wide state (e.g M2P).
>> +
>> +Each domain will be serialized independently using a modified
>> migration stream,
>> +if there is any dependency between domains (such as for IOREQ
>> server) they will
>> +be recorded using a domid. All the complexity of resolving the
>> dependencies are
>> +left to the restore path in xen#2 (more in the *Restore* section).
>> +
>> +At the moment, the domains are saved one by one in a single thread,
>> but it
>> +would be possible to consider multi-threading if it takes too long.
>> Although
>> +this may require some adjustment in the stream format.
>> +
>> +As we want to be able to Live Update between major versions of Xen
>> (e.g Xen
>> +4.11 -> Xen 4.15), the states preserved should not be a dump of Xen
>> internal
>> +structure but instead the minimal information that allow us to
>> recreate the
>> +domains.
>> +
>> +For instance, we don't want to preserve the frametable (and
>> therefore
>> +*struct page\_info*) as-is because the refcounting may be different
>> across
>> +between xen#1 and xen#2 (see XSA-299). Instead, we want to be able
>> to recreate
>> +*struct page\_info* based on minimal information that are considered
>> stable
>> +(such as the page type).
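For illustration, a preserved per-page record along these lines could be as small as the following (purely a sketch; the names and fields are hypothetical and not part of the proposed stream format):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical minimal per-page record: only facts considered stable
 * across Xen versions are serialized, and xen#2 rebuilds struct
 * page_info (refcounts included) from them on restore. */
enum lu_page_type {
    LU_PT_NONE,
    LU_PT_WRITABLE,
    LU_PT_L1,
    LU_PT_L2,
    LU_PT_L3,
    LU_PT_L4,
};

struct lu_page_record {
    uint64_t mfn;    /* which physical frame this record describes */
    uint16_t owner;  /* domid of the owning domain */
    uint16_t type;   /* enum lu_page_type */
};
```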
>> +
>> +Note that upgrading between version of Xen will also require all the
>> hypercalls
>> +to be stable. This will not be covered by this document.
>> +
>> +## Hand over
>> +
>> +### Memory usage restrictions
>> +
>> +xen#2 must take care not to use any memory pages which already
>> belong to
>> +guests.  To facilitate this, a number of contiguous region of memory
>> are
>> +reserved for the boot allocator, known as *live update bootmem*.
>> +
>> +xen#1 will always reserve a region just below Xen (the size is
>> controlled by
>> +the Xen command line parameter liveupdate) to allow Xen growing and
>> provide
>> +information about LiveUpdate (see the section *Breadcrumb*).  The
>> region will be
>> +passed to xen#2 using the same command line option but with the base
>> address
>> +specified.
> 
> The size of the command line option may not be the same depending on
> the size of Xen #2.

Good point, I will update it.

> 
>> +
>> +For simplicity, additional regions will be provided in the
>> stream.  They will
>> +consist of region that could be re-used by xen#2 during boot (such
>> as the
>> +xen#1's frametable memory).
>> +
>> +xen#2 must not use any pages outside those regions until it has
>> consumed the
>> +Live Update data stream and determined which pages are already in
>> use by
>> +running domains or need to be re-used as-is by Xen (e.g M2P).
>> +
>> +At run time, Xen may use memory from the reserved region for any
>> purpose that
>> +does not require preservation over a Live Update; in particular it
>> __must__ not be
>> +mapped to a domain or used by any Xen state requiring to be
>> preserved (e.g
>> +M2P).  In other word, the xenheap pages could be allocated from the
>> reserved
>> +regions if we remove the concept of shared xenheap pages.
>> +
>> +The xen#2's binary may be bigger (or smaller) compare to xen#1's
>> binary.  So
>> +for the purpose of loading xen#2 binary, kexec should treat the
>> reserved memory
>> +right below xen#1 and its region as a single contiguous space. xen#2
>> will be
>> +loaded right at the top of the contiguous space and the rest of the
>> memory will
>> +be the new reserved memory (this may shrink or grow).  For that
>> reason, freed
>> +init memory from xen#1 image is also treated as reserved liveupdate
>> update
> 
> s/update//
> 
> This is explained quite well actually, but I wonder if we can move this
> part closer to the liveupdate command line section (they both talk
> about the initial bootmem region and Xen size changes). After that, we
> then talk about multiple regions and how we should use them.

Just for clarification, do you mean moving after "The region will be 
passed to xen#2 using the same command line option but with the base 
address specified."?

>> +bootmem.
>> +
>> +### Live Update data stream
>> +
>> +During handover, xen#1 creates a Live Update data stream containing
>> all the
>> +information required by the new Xen#2 to restore all the domains.
>> +
>> +Data pages for this stream may be allocated anywhere in physical
>> memory outside
>> +the *live update bootmem* regions.
>> +
>> +As calling __vmap()__/__vunmap()__ has a cost on the downtime.  We
>> want to reduce the
>> +number of call to __vmap()__ when restoring the stream.  Therefore
>> the stream
>> +will be contiguously virtually mapped in xen#2.  xen#1 will create
>> an array of
> 
> Using vmap during restore for a contiguous range sounds more like
> implementation and optimisation detail to me rather than an ABI
> requirement, so I would s/the stream will be/the stream can be/.

I will do.

> 
>> +MFNs of the allocated data pages, suitable for passing to
>> __vmap()__.  The
>> +array will be physically contiguous but the MFNs don't need to be
>> physically
>> +contiguous.
>> +
>> +### Breadcrumb
>> +
>> +Since the Live Update data stream is created during the final
>> **kexec\_exec**
>> +hypercall, its address cannot be passed on the command line to the
>> new Xen
>> +since the command line needs to have been set up by **kexec(8)** in
>> userspace
>> +long beforehand.
>> +
>> +Thus, to allow the new Xen to find the data stream, xen#1 places a
>> breadcrumb
>> +in the first words of the Live Update bootmem, containing the number
>> of data
>> +pages, and the physical address of the contiguous MFN array.
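The breadcrumb described above could be as small as two words at the base of the bootmem region, e.g. (an illustrative sketch only; the real layout is whatever xen#1 and xen#2 agree on, and these names are hypothetical):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical breadcrumb written by xen#1 into the first words of the
 * Live Update bootmem; xen#2 reads it before touching any other memory. */
struct lu_breadcrumb {
    uint64_t nr_pages;        /* number of Live Update data pages */
    uint64_t mfn_array_paddr; /* physical address of the contiguous MFN array */
};
```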
>> +
>> +### IOMMU
>> +
>> +Where devices are passed through to domains, it may not be possible
>> to quiesce
>> +those devices for the purpose of performing the update.
>> +
>> +If performing Live Update with assigned devices, xen#1 will leave
>> the IOMMU
>> +mappings active during the handover (thus implying that IOMMU page
>> tables may
>> +not be allocated in the *live update bootmem* region either).
>> +
>> +xen#2 must take control of the IOMMU without causing those mappings
>> to become
>> +invalid even for a short period of time.  In other words, xen#2
>> should not
>> +re-setup the IOMMUs.  On hardware which does not support Posted
>> Interrupts,
>> +interrupts may need to be generated on resume.
>> +
>> +## Restore
>> +
>> +After xen#2 initialized itself and map the stream, it will be
>> responsible to
>> +restore the state of the system and each domain.
>> +
>> +Unlike the save part, it is not possible to restore a domain in a
>> single pass.
>> +There are dependencies between:
>> +
>> +    1. different states of a domain.  For instance, the event
>> channels ABI
>> +       used (2l vs fifo) requires to be restored before restoring
>> the event
>> +       channels.
>> +    2. the same "state" within a domain.  For instance, in case of
>> PV domain,
>> +       the pages' ownership requires to be restored before restoring
>> the type
>> +       of the page (e.g is it an L4, L1... table?).
>> +
>> +    3. domains.  For instance when restoring the grant mapping, it
>> will be
>> +       necessary to have the page's owner in hand to do proper
>> refcounting.
>> +       Therefore the pages' ownership have to be restored first.
>> +
>> +Dependencies will be resolved using either multiple passes (for
>> dependency
>> +type 2 and 3) or using a specific ordering between records (for
>> dependency
>> +type 1).
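Resolving dependency types 2 and 3 by pass ordering alone might look like the following toy model (a sketch of the idea under the stated design, not the actual implementation; the pass names are made up):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Toy model: every record carries the pass in which it may be restored.
 * Running the passes strictly in order guarantees, e.g., that a page's
 * ownership is restored before any grant mapping refcounts against it. */
enum lu_pass {
    LU_PASS_CREATE,     /* domain creation, P2M            */
    LU_PASS_OWNERSHIP,  /* page ownership, grant frames    */
    LU_PASS_STATE,      /* vCPU state, event channels, ... */
    LU_NR_PASSES,
};

struct lu_record {
    enum lu_pass pass;
    bool restored;
};

static void lu_restore_all(struct lu_record *recs, size_t n)
{
    for (int p = LU_PASS_CREATE; p < LU_NR_PASSES; p++)
        for (size_t i = 0; i < n; i++)
            if (recs[i].pass == (enum lu_pass)p && !recs[i].restored)
                recs[i].restored = true; /* deserialize the record here */
}
```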
>> +
>> +Each domain will be restored in 3 passes:
>> +
>> +    * Pass 0: Create the domain and restore the P2M for HVM. This
>> can be broken
>> +      down in 3 parts:
>> +      * Allocate a domain via _domain\_create()_ but skip part that
>> requires
>> +        extra records (e.g HAP, P2M).
>> +      * Restore any parts which needs to be done before create the
>> vCPUs. This
>> +        including restoring the P2M and whether HAP is used.
>> +      * Create the vCPUs. Note this doesn't restore the state of the
>> vCPUs.
>> +    * Pass 1: It will restore the pages' ownership and the grant-
>> table frames
>> +    * Pass 2: This steps will restore any domain states (e.g vCPU
>> state, event
>> +      channels) that wasn't
> 
> Sentence seems incomplete.

I can add 'already restored' if that clarifies it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 07 10:15:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 10:15:06 +0000
Date: Fri, 7 May 2021 12:14:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/shim: fix build when !PV32
Message-ID: <YJUTFZWAt2VFqXw6@Air-de-Roger>
References: <08ea57f0-732e-fe12-409c-5487fb493429@suse.com>
 <YJT4cV62lqFgFKq/@Air-de-Roger>
 <c00c73a5-0d9c-9e1e-20d7-5d65ac23976e@suse.com>
 <YJUDhB1dBVDF8vmd@Air-de-Roger>
 <31ce12b7-b6c0-c8c6-13f3-fbc6826d2810@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <31ce12b7-b6c0-c8c6-13f3-fbc6826d2810@suse.com>
MIME-Version: 1.0

On Fri, May 07, 2021 at 11:17:26AM +0200, Jan Beulich wrote:
> On 07.05.2021 11:08, Roger Pau Monné wrote:
> > On Fri, May 07, 2021 at 10:34:24AM +0200, Jan Beulich wrote:
> >> On 07.05.2021 10:21, Roger Pau Monné wrote:
> >>> On Fri, May 07, 2021 at 08:22:38AM +0200, Jan Beulich wrote:
> >>>> In this case compat headers don't get generated (and aren't needed).
> >>>> The changes made by 527922008bce ("x86: slim down hypercall handling
> >>>> when !PV32") also weren't quite sufficient for this case.
> >>>>
> >>>> Try to limit #ifdef-ary by introducing two "fallback" #define-s.
> >>>>
> >>>> Fixes: d23d792478db ("x86: avoid building COMPAT code when !HVM && !PV32")
> >>>> Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
> >>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> >>>>
> >>>> --- a/xen/arch/x86/pv/shim.c
> >>>> +++ b/xen/arch/x86/pv/shim.c
> >>>> @@ -34,8 +34,6 @@
> >>>>  #include <public/arch-x86/cpuid.h>
> >>>>  #include <public/hvm/params.h>
> >>>>  
> >>>> -#include <compat/grant_table.h>
> >>>> -
> >>>>  #undef virt_to_mfn
> >>>>  #define virt_to_mfn(va) _mfn(__virt_to_mfn(va))
> >>>>  
> >>>> @@ -300,8 +298,10 @@ static void write_start_info(struct doma
> >>>>                                            &si->console.domU.mfn) )
> >>>>          BUG();
> >>>>  
> >>>> +#ifdef CONFIG_PV32
> >>>>      if ( compat )
> >>>>          xlat_start_info(si, XLAT_start_info_console_domU);
> >>>> +#endif
> >>>
> >>> Would it help the compiler logic if the 'compat' local variable was
> >>> made const?
> >>
> >> No, because XLAT_start_info_console_domU is undeclared when there are
> >> no compat headers.
> >>
> >>> I'm wondering if there's a way we can force DCE from the compiler and
> >>> avoid the ifdefs around the usage of compat.
> >>
> >> The issue isn't with DCE - I believe the compiler does okay in that
> >> regard. The issue is with things simply lacking a declaration /
> >> definition. That's no different from e.g. struct fields living
> >> inside an #ifdef - all uses then need to as well, no matter whether
> >> the compiler is capable of otherwise recognizing the code as dead.
> > 
> > Right, I see those are no longer declared anywhere. Since this is
> > gating compat code, would it make more sense to use CONFIG_COMPAT
> > rather than CONFIG_PV32 here?
> 
> I don't think so, no, from the abstract perspective that it's really
> PV that the shim cares about, and hence other causes of COMPAT getting
> selected shouldn't count.

Ack, and we already use CONFIG_PV32 for similar stuff in the file
anyway.

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

It's just becoming slightly trickier to figure out what you need to
gate with CONFIG_PV32, IMO.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 07 10:26:32 2021
Date: Fri, 7 May 2021 12:26:15 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: George Dunlap <george.dunlap@citrix.com>
CC: <security@xenproject.org>, <xen-devel@lists.xenproject.org>, Jann Horn
	<jannh@google.com>, Jan Beulich <jbeulich@suse.com>
Subject: Re: [PATCH v2] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
Message-ID: <YJUVx8WMn/4f0gMS@Air-de-Roger>
References: <20210506124752.65844-1-george.dunlap@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210506124752.65844-1-george.dunlap@citrix.com>
X-OriginatorOrg: citrix.com

On Thu, May 06, 2021 at 01:47:52PM +0100, George Dunlap wrote:
> The support status of 32-bit guests doesn't seem particularly useful.
> 
> With it changed to fully unsupported outside of PV-shim, adjust the PV32
> Kconfig default accordingly.
> 
> Reported-by: Jann Horn <jannh@google.com>
> Signed-off-by: George Dunlap <george.dunlap@citrix.com>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2:
>  - add in Kconfig from advisory, ported over c/s d23d792478d
> ---
>  SUPPORT.md           | 9 +--------
>  xen/arch/x86/Kconfig | 7 +++++--
>  2 files changed, 6 insertions(+), 10 deletions(-)
> 
> diff --git a/SUPPORT.md b/SUPPORT.md
> index d0d4fc6f4f..a29680e04c 100644
> --- a/SUPPORT.md
> +++ b/SUPPORT.md
> @@ -86,14 +86,7 @@ No hardware requirements
>  
>      Status, x86_64: Supported
>      Status, x86_32, shim: Supported
> -    Status, x86_32, without shim: Supported, with caveats
> -
> -Due to architectural limitations,
> -32-bit PV guests must be assumed to be able to read arbitrary host memory
> -using speculative execution attacks.
> -Advisories will continue to be issued
> -for new vulnerabilities related to un-shimmed 32-bit PV guests
> -enabling denial-of-service attacks or privilege escalation attacks.
> +    Status, x86_32, without shim: Supported, not security supported
>  
>  ### x86/HVM
>  
> diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
> index e55e029b79..9b164db641 100644
> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -55,7 +55,7 @@ config PV
>  config PV32
>  	bool "Support for 32bit PV guests"
>  	depends on PV
> -	default y
> +	default PV_SHIM
>  	select COMPAT
>  	---help---
>  	  The 32bit PV ABI uses Ring1, an area of the x86 architecture which
> @@ -67,7 +67,10 @@ config PV32
>  	  reduction, or performance reasons.  Backwards compatibility can be
>  	  provided via the PV Shim mechanism.
>  
> -	  If unsure, say Y.
> +	  Note that outside of PV Shim, 32-bit PV guests are not security
> +	  supported anymore.
> +
> +	  If unsure, use the default setting.

While not opposed to this, I wonder whether we should give people some
time to adapt. We have in the past not blocked vulnerable
configurations by default (e.g. the SMT stuff).

It might be less disruptive for users to start by printing a warning
message at boot (either on the serial console for dom0 or in the
toolstack for domU) and switch the Kconfig default slightly later?

Note I don't have any specific interest in 32bit PV, so I'm not going
to argue strongly against this if everyone else is fine with it.

I also wonder whether xl shouldn't try to boot PV 32bit guests by
default using the shim now if the hypervisor has been built without
CONFIG_PV32, or at least print a message so users know how to deal
with the fallout.

Roger.
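On the xl fallback idea: xl already lets a guest be placed under the PV shim explicitly via its configuration file (the `pvshim` option in xl.cfg(5); availability should be checked against the toolstack version in use). A minimal guest config along these lines (paths and sizes are placeholders):

```
# 32-bit PV guest forced to run under the PV shim
name   = "pv32-guest"
type   = "pv"
pvshim = 1
kernel = "/path/to/guest-kernel"
memory = 512
```

The open question in the thread is whether xl should select something like this automatically (or at least print a pointer to it) when the hypervisor lacks CONFIG_PV32.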


From xen-devel-bounces@lists.xenproject.org Fri May 07 11:05:42 2021
Subject: Re: [PATCH v2] SUPPORT.md: Un-shimmed 32-bit PV guests are no longer
 supported
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Jann Horn <jannh@google.com>,
 George Dunlap <george.dunlap@citrix.com>
References: <20210506124752.65844-1-george.dunlap@citrix.com>
 <YJUVx8WMn/4f0gMS@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0a61c24d-4bd3-c6af-7297-7a3b7bcd90b8@suse.com>
Date: Fri, 7 May 2021 13:05:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJUVx8WMn/4f0gMS@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 07.05.2021 12:26, Roger Pau Monné wrote:
> On Thu, May 06, 2021 at 01:47:52PM +0100, George Dunlap wrote:
>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -55,7 +55,7 @@ config PV
>>  config PV32
>>  	bool "Support for 32bit PV guests"
>>  	depends on PV
>> -	default y
>> +	default PV_SHIM
>>  	select COMPAT
>>  	---help---
>>  	  The 32bit PV ABI uses Ring1, an area of the x86 architecture which
>> @@ -67,7 +67,10 @@ config PV32
>>  	  reduction, or performance reasons.  Backwards compatibility can be
>>  	  provided via the PV Shim mechanism.
>>  
>> -	  If unsure, say Y.
>> +	  Note that outside of PV Shim, 32-bit PV guests are not security
>> +	  supported anymore.
>> +
>> +	  If unsure, use the default setting.
> 
> While not opposed to this, I wonder whether we should give people some
> time to adapt. We have in the past not blocked vulnerable
> configurations by default (e.g. the SMT stuff).
> 
> It might be less disruptive for users to start by printing a warning
> message at boot (either on the serial console for dom0 or in the
> toolstack for domU) and switch the Kconfig default slightly later?

But by changing the default we don't disrupt anyone or anything. Or
are you suggesting people really caring about Xen build it with the
default config without even looking?

> Note I don't have any specific interest in 32bit PV, so I'm not going
> to argue strongly against this if everyone else is fine with it.
> 
> I also wonder whether xl shouldn't try to boot PV 32bit guests by
> default using the shim now if the hypervisor has been built without
> CONFIG_PV32, or at least print a message so users know how to deal
> with the fallout.

I, too, have been considering suggesting this. IIRC Andrew already
pointed out that the error messages resulting from xl aren't really
helpful for understanding what the problem is (IIRC he said they claim
an out-of-memory situation).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 11:06:49 2021
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: [PATCH v4 00/10] libs/guest: new CPUID/MSR interface
Date: Fri,  7 May 2021 13:04:12 +0200
Message-ID: <20210507110422.24608-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-OriginatorOrg: citrix.com

Hello,

The following series introduces a new CPUID/MSR interface for the
xenguest library. This interface handles both CPUID and MSRs using the
same opaque object, and provides helpers for the user to peek at or
modify the data without exposing the backing type. This is useful for
future development: CPUID and MSRs are closely related, so keeping them
inside the same object makes handling them much easier (e.g. a change
to a CPUID bit might expose or hide an MSR).

In this patch series libxl and other in-tree users have been switched
to use the new interface, so it shouldn't result in any functional
change from a user's point of view.

Note there are likely still some missing pieces. The way to modify
CPUID data is not ideal, as it requires fetching a leaf and modifying
it directly. We might want some kind of interface to set specific
CPUID features more easily, but that's still to be discussed, and
would be done as a follow-up series.

The addition of a helper to generate compatible policies given two
inputs has been removed from this iteration, since Andrew Cooper has
posted a patch to lay the foundation for that, and further work should
be done against that baseline.

Thanks, Roger.

Roger Pau Monne (10):
  libx86: introduce helper to fetch cpuid leaf
  libs/guest: allow fetching a specific CPUID leaf from a cpu policy
  libx86: introduce helper to fetch msr entry
  libs/guest: allow fetching a specific MSR entry from a cpu policy
  libs/guest: make a cpu policy compatible with older Xen versions
  libs/guest: introduce helper to set cpu topology in cpu policy
  libs/guest: rework xc_cpuid_xend_policy
  libs/guest: apply a featureset into a cpu policy
  libs/{light,guest}: implement xc_cpuid_apply_policy in libxl
  libs/guest: (re)move xc_cpu_policy_apply_cpuid

 tools/include/libxl.h                    |   6 +-
 tools/include/xenctrl.h                  |  44 --
 tools/include/xenguest.h                 |  18 +
 tools/libs/guest/xg_cpuid_x86.c          | 633 ++++++++---------------
 tools/libs/light/libxl_cpuid.c           | 228 +++++++-
 tools/libs/light/libxl_internal.h        |  26 +
 tools/tests/cpu-policy/test-cpu-policy.c | 123 ++++-
 xen/arch/x86/cpuid.c                     |  55 +-
 xen/include/xen/lib/x86/cpuid.h          |  18 +
 xen/include/xen/lib/x86/msr.h            |  19 +-
 xen/lib/x86/cpuid.c                      |  51 ++
 xen/lib/x86/msr.c                        |  41 +-
 12 files changed, 717 insertions(+), 545 deletions(-)

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:06:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 11:06:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123956.233926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyJt-0002wm-MI; Fri, 07 May 2021 11:06:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123956.233926; Fri, 07 May 2021 11:06:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyJt-0002wc-Gy; Fri, 07 May 2021 11:06:57 +0000
Received: by outflank-mailman (input) for mailman id 123956;
 Fri, 07 May 2021 11:06:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HO4=KC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leyJs-0002vm-3t
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 11:06:56 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c89341f9-f751-4ab8-8fb3-44538a7d9b43;
 Fri, 07 May 2021 11:06:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c89341f9-f751-4ab8-8fb3-44538a7d9b43
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620385614;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=0X8TxPH7KufbkpJ3x7eqxsup3Sr7s5HEON67QQy9Z/I=;
  b=Fu1P4DFRv+cZ94asXtr8nn1RSsO5YhL5tou5AI2AfSGp2ocMKy4AA3lj
   ziA6XKBx3Mfj9nvQMACaMJZeUXh2uYwLVW3RLqyFeWyi1DZ8AVSeJQ/Co
   E98mVlcJXSTGrC8+J/qLrH6bb1UKmhlwcARvrwh9h3Fo88sfT7L914Ccj
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43096201
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,280,1613451600"; 
   d="scan'208";a="43096201"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OpljV9Bfwb8R7D/1BiqmW3b2N1q6g6sFvB1gND0540ZiWICWsyJNqLCdN8KDkTCcPNx+NdQXdaCoz8xsQxjT5sTB6EuQ7V7aPrDA4PCyEC6SvGGRsr7F8TTVSWS3ESF93tcr95lywdrpLa6pHPAajwyVR5XV8WxqPLPSsfxMNiOgsrVL4Bz7HDCyiiY6EHeoY95Dgefv12UVmD78jTQWpDJh8iA2r3GuGJVrTjWh5JcGcdxkw1gkpREYobQ8eke6WxETIHZMwnIn8rRFGkNtVSeUR+pQEnsajZ1HsKQ7+MqccCcRPRd2x8aEp31xM2bMLqF67IczWVBVJZIj+DjqTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=64awfqRB5ESkjIM24i+z107CJ81OENwIGfRhEqoVoZE=;
 b=OTCWx9kZDsrDaaYDwl845walkFvK8QKtVoRrMB1h0V9WCXQEFxsZxhO9sjASD85kmhY5/SkVAnAy9KO7QvYfPjDKhNeBhng3rWlBq6RNejyVfrydjrPNnEjzJiIfNSdfaXLuVw/1W9pANmuBEhLOsUb3BDX8wz4FKINJLAUtnLr7CkVBYYTx5pORaEygxn4Gj6tD14OmrP+/hX6QS+7zM99KJj1jxw4F3L1Wbjrx32p/4Yy/hhwvp/hexDog/e3kOhiYR7hxOsXOByZCWB7LLuQViF4Fl6Vi2XABPhjfNpBicBGhEUKx91g6Ay3EjinBYCstajNu4O8U9pU0570XkA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=64awfqRB5ESkjIM24i+z107CJ81OENwIGfRhEqoVoZE=;
 b=IxLaZG7RffYOeOdF/Qr3PNSFCKNgt9hJZ+Nbl3SNue6QtZX8DeOt58M43OABlLRAfxuYCO8T+/VeMLlTyVzi8t5120XCgki27j0Y+Ocz2i89Ek9YrdLumOfNiKtlWeJWCA2kIYdyw4hIuVnZ3sSQQFmz8FMt8A+w/o0uqXMXTSI=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v4 01/10] libx86: introduce helper to fetch cpuid leaf
Date: Fri,  7 May 2021 13:04:13 +0200
Message-ID: <20210507110422.24608-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0154.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:1::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 03aa3151-bb8f-41a5-8ce8-08d911483614
X-MS-TrafficTypeDiagnostic: DM6PR03MB4475:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4475BFE994B3E18C4C8DAAE08F579@DM6PR03MB4475.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:530;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 03aa3151-bb8f-41a5-8ce8-08d911483614
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 11:06:49.7148
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lKZKwf/VV/yFNlrPEhUSzYDsEMiW9ZSFhsCH0/04nozO7xLRoAycDRPzdp+mXYt8m1GLi/Ko9Vl0jhaV8IFCHQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4475
X-OriginatorOrg: citrix.com

Introduce a helper based on the current Xen guest_cpuid code in order
to fetch a cpuid leaf from a policy. The newly introduced function in
cpuid.c should not be called directly; instead, use the provided
x86_cpuid_get_leaf macro, which properly deals with both const and
non-const inputs.

Also add a test to check that the introduced helper doesn't go over
the bounds of the policy.

Note the code in x86_cpuid_copy_from_buffer is not switched to use the
new function because of the boundary checks against the max fields of
the policy, which might not be properly set at the point where
x86_cpuid_copy_from_buffer gets called, for example when filling an
empty policy from scratch.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - New in this version.
---
Regarding the safety of using array_access_nospec to obtain a pointer
to an element of an array: there are already other instances of this
usage, for example in viridian_time_wrmsr, so I would assume this is
fine.
---
 tools/tests/cpu-policy/test-cpu-policy.c | 75 ++++++++++++++++++++++++
 xen/arch/x86/cpuid.c                     | 55 +++--------------
 xen/include/xen/lib/x86/cpuid.h          | 18 ++++++
 xen/lib/x86/cpuid.c                      | 51 ++++++++++++++++
 4 files changed, 151 insertions(+), 48 deletions(-)

diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 75973298dfd..81de9720c8d 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -670,6 +670,80 @@ static void test_cpuid_maximum_leaf_shrinking(void)
     }
 }
 
+static void test_cpuid_get_leaf_failure(void)
+{
+    static const struct test {
+        struct cpuid_policy p;
+        const char *name;
+        uint32_t leaf, subleaf;
+    } tests[] = {
+        /* Bound checking logic. */
+        {
+            .name = "Basic max leaf >= array size",
+            .p = {
+                .basic.max_leaf = CPUID_GUEST_NR_BASIC,
+            },
+        },
+        {
+            .name = "Feature max leaf >= array size",
+            .p = {
+                .basic.max_leaf = CPUID_GUEST_NR_BASIC - 1,
+                .feat.max_subleaf = CPUID_GUEST_NR_FEAT,
+            },
+            .leaf = 0x00000007,
+        },
+        {
+            .name = "Extended max leaf >= array size",
+            .p = {
+                .extd.max_leaf = 0x80000000 + CPUID_GUEST_NR_EXTD,
+            },
+            .leaf = 0x80000000,
+        },
+
+        {
+            .name = "Basic leaf >= max leaf",
+            .p = {
+                .basic.max_leaf = CPUID_GUEST_NR_BASIC - 1,
+            },
+            .leaf = CPUID_GUEST_NR_BASIC,
+        },
+        {
+            .name = "Feature leaf >= max leaf",
+            .p = {
+                .basic.max_leaf = CPUID_GUEST_NR_BASIC - 1,
+                .feat.max_subleaf = CPUID_GUEST_NR_FEAT - 1,
+            },
+            .leaf = 0x00000007,
+            .subleaf = CPUID_GUEST_NR_FEAT,
+        },
+        {
+            .name = "Extended leaf >= max leaf",
+            .p = {
+                .extd.max_leaf = 0x80000000 + CPUID_GUEST_NR_EXTD - 1,
+            },
+            .leaf = 0x80000000 + CPUID_GUEST_NR_EXTD,
+        },
+    };
+    const struct cpuid_policy pc;
+    const struct cpuid_leaf *lc;
+    struct cpuid_policy p;
+    struct cpuid_leaf *l;
+
+    /* Constness build test. */
+    lc = x86_cpuid_get_leaf(&pc, 0, 0);
+    l = x86_cpuid_get_leaf(&p, 0, 0);
+
+    printf("Testing CPUID get leaf bound checking:\n");
+
+    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        const struct test *t = &tests[i];
+
+        if ( x86_cpuid_get_leaf(&t->p, t->leaf, t->subleaf) )
+            fail("  Test %s get leaf fail\n", t->name);
+    }
+}
+
 static void test_is_compatible_success(void)
 {
     static struct test {
@@ -786,6 +860,7 @@ int main(int argc, char **argv)
     test_cpuid_deserialise_failure();
     test_cpuid_out_of_range_clearing();
     test_cpuid_maximum_leaf_shrinking();
+    test_cpuid_get_leaf_failure();
 
     test_msr_serialise_success();
     test_msr_deserialise_failure();
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 752bf244ea3..f7481c3168d 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -779,48 +779,16 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     switch ( leaf )
     {
     case 0 ... CPUID_GUEST_NR_BASIC - 1:
-        ASSERT(p->basic.max_leaf < ARRAY_SIZE(p->basic.raw));
-        if ( leaf > min_t(uint32_t, p->basic.max_leaf,
-                          ARRAY_SIZE(p->basic.raw) - 1) )
-            return;
-
-        switch ( leaf )
-        {
-        case 0x4:
-            if ( subleaf >= ARRAY_SIZE(p->cache.raw) )
-                return;
-
-            *res = array_access_nospec(p->cache.raw, subleaf);
-            break;
-
-        case 0x7:
-            ASSERT(p->feat.max_subleaf < ARRAY_SIZE(p->feat.raw));
-            if ( subleaf > min_t(uint32_t, p->feat.max_subleaf,
-                                 ARRAY_SIZE(p->feat.raw) - 1) )
-                return;
-
-            *res = array_access_nospec(p->feat.raw, subleaf);
-            break;
-
-        case 0xb:
-            if ( subleaf >= ARRAY_SIZE(p->topo.raw) )
-                return;
-
-            *res = array_access_nospec(p->topo.raw, subleaf);
-            break;
-
-        case XSTATE_CPUID:
-            if ( !p->basic.xsave || subleaf >= ARRAY_SIZE(p->xstate.raw) )
-                return;
+    case 0x80000000 ... 0x80000000 + CPUID_GUEST_NR_EXTD - 1:
+    {
+        const struct cpuid_leaf *tmp = x86_cpuid_get_leaf(p, leaf, subleaf);
 
-            *res = array_access_nospec(p->xstate.raw, subleaf);
-            break;
+        if ( !tmp )
+            return;
 
-        default:
-            *res = array_access_nospec(p->basic.raw, leaf);
-            break;
-        }
+        *res = *tmp;
         break;
+    }
 
     case 0x40000000 ... 0x400000ff:
         if ( is_viridian_domain(d) )
@@ -835,15 +803,6 @@ void guest_cpuid(const struct vcpu *v, uint32_t leaf,
     case 0x40000100 ... 0x400001ff:
         return cpuid_hypervisor_leaves(v, leaf, subleaf, res);
 
-    case 0x80000000 ... 0x80000000 + CPUID_GUEST_NR_EXTD - 1:
-        ASSERT((p->extd.max_leaf & 0xffff) < ARRAY_SIZE(p->extd.raw));
-        if ( (leaf & 0xffff) > min_t(uint32_t, p->extd.max_leaf & 0xffff,
-                                     ARRAY_SIZE(p->extd.raw) - 1) )
-            return;
-
-        *res = array_access_nospec(p->extd.raw, leaf & 0xffff);
-        break;
-
     default:
         return;
     }
diff --git a/xen/include/xen/lib/x86/cpuid.h b/xen/include/xen/lib/x86/cpuid.h
index 2300faf03e2..4e8797f9f5f 100644
--- a/xen/include/xen/lib/x86/cpuid.h
+++ b/xen/include/xen/lib/x86/cpuid.h
@@ -438,6 +438,24 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *policy,
                                uint32_t nr_entries, uint32_t *err_leaf,
                                uint32_t *err_subleaf);
 
+/**
+ * Get a cpuid leaf from a policy object.
+ *
+ * @param policy      The cpuid_policy object.
+ * @param leaf        The leaf index.
+ * @param subleaf     The subleaf index.
+ * @returns a pointer to the requested leaf or NULL in case of error.
+ *
+ * The function will perform out of bound checks. Do not call this function
+ * directly and instead use x86_cpuid_get_leaf that will deal with both const
+ * and non-const policies returning a pointer with constness matching that of
+ * the input.
+ */
+const struct cpuid_leaf *_x86_cpuid_get_leaf(const struct cpuid_policy *p,
+                                             uint32_t leaf, uint32_t subleaf);
+#define x86_cpuid_get_leaf(p, l, s) \
+    ((__typeof__(&(p)->basic.raw[0]))_x86_cpuid_get_leaf(p, l, s))
+
 #endif /* !XEN_LIB_X86_CPUID_H */
 
 /*
diff --git a/xen/lib/x86/cpuid.c b/xen/lib/x86/cpuid.c
index 1409c254c8e..3637466ff9f 100644
--- a/xen/lib/x86/cpuid.c
+++ b/xen/lib/x86/cpuid.c
@@ -532,6 +532,57 @@ int x86_cpuid_copy_from_buffer(struct cpuid_policy *p,
     return -ERANGE;
 }
 
+const struct cpuid_leaf *_x86_cpuid_get_leaf(const struct cpuid_policy *p,
+                                             uint32_t leaf, uint32_t subleaf)
+{
+    switch ( leaf )
+    {
+    case 0 ... CPUID_GUEST_NR_BASIC - 1:
+        if ( p->basic.max_leaf >= ARRAY_SIZE(p->basic.raw) ||
+             leaf > p->basic.max_leaf )
+            return NULL;
+
+        switch ( leaf )
+        {
+        case 0x4:
+            if ( subleaf >= ARRAY_SIZE(p->cache.raw) )
+                return NULL;
+
+            return &array_access_nospec(p->cache.raw, subleaf);
+
+        case 0x7:
+            if ( p->feat.max_subleaf >= ARRAY_SIZE(p->feat.raw) ||
+                 subleaf > p->feat.max_subleaf )
+                return NULL;
+
+            return &array_access_nospec(p->feat.raw, subleaf);
+
+        case 0xb:
+            if ( subleaf >= ARRAY_SIZE(p->topo.raw) )
+                return NULL;
+
+            return &array_access_nospec(p->topo.raw, subleaf);
+
+        case 0xd:
+            if ( !p->basic.xsave || subleaf >= ARRAY_SIZE(p->xstate.raw) )
+                return NULL;
+
+            return &array_access_nospec(p->xstate.raw, subleaf);
+        }
+
+        return &array_access_nospec(p->basic.raw, leaf);
+
+    case 0x80000000 ... 0x80000000 + CPUID_GUEST_NR_EXTD - 1:
+        if ( (p->extd.max_leaf & 0xffff) >= ARRAY_SIZE(p->extd.raw) ||
+             leaf > p->extd.max_leaf )
+            return NULL;
+
+        return &array_access_nospec(p->extd.raw, leaf & 0xffff);
+    }
+
+    return NULL;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 11:07:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123958.233938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyK0-0003M7-W7; Fri, 07 May 2021 11:07:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123958.233938; Fri, 07 May 2021 11:07:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyK0-0003Lz-SF; Fri, 07 May 2021 11:07:04 +0000
Received: by outflank-mailman (input) for mailman id 123958;
 Fri, 07 May 2021 11:07:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HO4=KC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leyJz-0003Kd-IM
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 11:07:03 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f6424572-4684-4312-959c-930c5b1c21bf;
 Fri, 07 May 2021 11:07:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6424572-4684-4312-959c-930c5b1c21bf
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620385622;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=Elil3Dm30+598w9fi/VGaVFUHkrCfnckvWeQZr64YXI=;
  b=UULqZ6spiIjRziwfhlh+wTrPLkL+VBUIQ98ZvgeTnPQYYWx4Tz2/cbkz
   vNr7ycO9HM5eNNygjq9uXdahrA+GrX1bG9CFDnhPYKzsoy5SFaNsd0FVM
   pssuLOsHU2PEw2vqyLsPA8aZu1jsIinXjdVU+5EaumXxLX1t0/5yEzXw4
   g=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43304404
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,280,1613451600"; 
   d="scan'208";a="43304404"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hjzPXbhvnFEbXX7cCcIyoBTT0QLqD49qzP+6v9RGBX61kHFIEwMaJX51BGs3AnIB7pdH2jowxTo5F+rrircmWgvPnd3drnF3pW5tuGIZc65unwuDR4CL443wHSU7xMGkfHwqcUzVcsZsAIO+3WuMpwOd+e+dy2Dpn0l8k2OHuqyO+HpH5d1n+TWaZKoBQgI/ByshAtEsrrn14rWAsYposbjKist2a5p2Sn8KHCHg6qV+mOaYHjLqAdfpeUIhbhD45Kh23/cnJ+PFuBL9b/uNKWpUWUPdxCAQN9InR2g2NiLZ4GtN7M0LFqs2yR4GRzkT/sA0Pra9s6LdcDX9wtBeTw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZdUXkdSYmtbpTEHoBo3BerLgBTlLuL2N/x2tV5e9+AQ=;
 b=S+ARTF047uxONLDRhH+ir5109MsbWv4kMQ1dBVb3+ws7oRjxTQBpT2iU4HXhhqOGNzIzMfJ2v0G1kOemsxTuYlJ3vk2lkVZRkMI37lqYoSF/YTy/GFWrToUJrQTJFV39w5R6rTlfKD1oDqYQ0JGP/ErjjF3FoiZNqWsrP9Kqd/rVvdR4Td1nwAbR7nmhuR1/yhctt1VSQilbq9M5Tki8WYaDy1hS3NZMbzloT+gBBwtXD8I56b6GflfHapl5Ladvi23jGiLnrr+T44giujWCjyIoN62g4MIh4bniMqkrKWyte88AWu/sAvgHQ5V0RWyAbh/hL1YPijbQhuCBPfS3QQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZdUXkdSYmtbpTEHoBo3BerLgBTlLuL2N/x2tV5e9+AQ=;
 b=Zp7tvoYPIdo1qkvtXBIg1YcWqGAxRtlk6URixiw07JRefZvTtscp03sOZvs0ZO60lUO9iKPip12m7Cbhjfx8WLLy9++nnAKCepAKjig4D9fTcy6y5XUi34cu+eTClwNuGp0XpnwgpDgFUn7HNHqok1acBKHUj+uJvl/rF8orWOE=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v4 02/10] libs/guest: allow fetching a specific CPUID leaf from a cpu policy
Date: Fri,  7 May 2021 13:04:14 +0200
Message-ID: <20210507110422.24608-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0125.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 50ed9f27-5c71-4d82-33bf-08d91148392c
X-MS-TrafficTypeDiagnostic: DM6PR03MB4475:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4475215F8B684F381FB59ACF8F579@DM6PR03MB4475.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: 50ed9f27-5c71-4d82-33bf-08d91148392c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 11:06:54.6063
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: WYfdFJTREafTERBYaPJkpspV5/KM5VJ5giTlA5p7XwO3gFZ69ni0HbsYXmy2A0FBry9v//loTLAB0bSi80dVIg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4475
X-OriginatorOrg: citrix.com

Introduce an interface that returns a specific leaf/subleaf from a cpu
policy in xen_cpuid_leaf_t format.

This is useful so callers can peek data from the opaque
xc_cpu_policy_t type.

No callers of the interface are introduced in this patch.

Note that callers of find_leaf need to be slightly adjusted to use the
new helper parameters.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Use x86_cpuid_get_leaf.

Changes since v1:
 - Use find leaf.
---
 tools/include/xenguest.h        |  3 +++
 tools/libs/guest/xg_cpuid_x86.c | 23 +++++++++++++++++++++++
 2 files changed, 26 insertions(+)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 03c813a0d78..7001e04e88d 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -744,6 +744,9 @@ int xc_cpu_policy_update_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
                                uint32_t nr);
 int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
                               const xen_msr_entry_t *msrs, uint32_t nr);
+int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t *policy,
+                            uint32_t leaf, uint32_t subleaf,
+                            xen_cpuid_leaf_t *out);
 
 /* Compatibility calculations. */
 bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 144b5a5aee6..460512d063b 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -860,6 +860,29 @@ int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
     return rc;
 }
 
+int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t *policy,
+                            uint32_t leaf, uint32_t subleaf,
+                            xen_cpuid_leaf_t *out)
+{
+    const struct cpuid_leaf *tmp;
+
+    tmp = x86_cpuid_get_leaf(&policy->cpuid, leaf, subleaf);
+    if ( !tmp )
+    {
+        /* Unable to find a matching leaf. */
+        errno = ENOENT;
+        return -1;
+    }
+
+    out->leaf = leaf;
+    out->subleaf = subleaf;
+    out->a = tmp->a;
+    out->b = tmp->b;
+    out->c = tmp->c;
+    out->d = tmp->d;
+    return 0;
+}
+
 bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
                                  xc_cpu_policy_t *guest)
 {
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 11:07:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123960.233950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyK3-0003hX-FN; Fri, 07 May 2021 11:07:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123960.233950; Fri, 07 May 2021 11:07:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyK3-0003hH-Bp; Fri, 07 May 2021 11:07:07 +0000
Received: by outflank-mailman (input) for mailman id 123960;
 Fri, 07 May 2021 11:07:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HO4=KC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leyK1-0003Kd-EV
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 11:07:05 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7635f833-24cd-44d3-b015-a594af157653;
 Fri, 07 May 2021 11:07:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7635f833-24cd-44d3-b015-a594af157653
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620385624;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=41gCcCU9e+JnlIhloGR5SydFBIlo+duLNASfJYuUK20=;
  b=FNwfgpIMIWdg/roSyxH9L4gGsD4m9TCPP7dEmBCTmyY3x3n3g6E/aPlY
   r10OAEgG/MixRgDJqYYEqi9VWxnWJlpgOlCa9qXuotr3HAg2N1G6NM7gQ
   ov+TGZy9e92HxxcJ+9GzR6xXwuD0zdpyN7E4oRnWoF+YNxA9Ij5gxGv+5
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Jyi1DOdmM5VRxq7JvscwCSqhYN/ret+O7+iU/hNmm2OVbYY3v3y/0mvtDlfukQQYusrMZgqUrP
 2XjsIvzxSeUkV9940qfR/VRuaDiE8Hj8FpJJlm6NTAQ0vC7G8zqobPhtcaf+bkkQ3KvZh0o48i
 AYQGQw4avSd2M+YN6UrDQQZaLF7OGmOn3T05AeSw2xuInuXgHOtYQ1rvuAvFxoqZAjF+CiSP5C
 YedQUhv/kYxaboFsvV3lm6Xdll28Ve3y3XoFBH7V5PgovlhRmd1SuD6GUjCzP5ZhRxj7viC766
 kqg=
X-SBRS: 5.1
X-MesageID: 43678467
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:AhWnyKr9vHU6BBSU4iranfwaV5sZL9V00zEX/kB9WHVpm5Oj+v
 xGzc5w6farsl0ssSkb6Ki90dq7MAjhHP9OkMIs1NKZMDUO11HYSL2KgbGC/9SkIVyGygc/79
 YrT0EdMqyWMbESt6+Tj2eF+pQbsb+6GcuT9ITjJgJWPGRXgtZbnmVE42igcnFedU1jP94UBZ
 Cc7s1Iq36LYnIMdPm2AXEDQqzqu8DLvIiOW29LOzcXrC21yR+44r/zFBaVmj0EVSlU/Lsk+W
 /Z1yTk+6SYte2hwBO07R6d030Woqqu9jJwPr3NtiEnEESutu9uXvUiZ1S2hkF1nAho0idurD
 CDmWZlAy050QKqQoj8m2qR5+Cn6kdi15aq8y7lvVLz5cP+Xz40EMxHmMZQdQbY8VMpuJVm3L
 tMxH/xjesgMfrsplWI2zHzbWAcqqN0mwtQrQcZtQ0XbWLfUs4lkWU7xjIcLH4tJlOK1GkXKp
 gdMCiH3ocpTbqzVQGogoBA+q3SYkgO
X-IronPort-AV: E=Sophos;i="5.82,280,1613451600"; 
   d="scan'208";a="43678467"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=VEDFK8KY3HK8vtTwJ82fAQ+Dn3nGKiart75Tto7tuZ4o9cMshnilVbLfFTLeataXxofkdNndJh5HY1iw0TVspcSAhAsr3rfUMFqg19LvnO2wNa2wfd2WfFC1GKhBtjQCyM1OtxR4cO2dNLxoY0MPQ4RMEXzbJFyXmFN27l0fh1qXr3EyvLDgwCCzZvVQ0nQzTa3rM07Q06thw8+SnW3igJ5RNskXzc0lY7ppGvRulfsBzOqbT0CdWtt3+Wpo1413rv1FK5BbprH9YGQi8MawTs1g8k/65Vw98eGEwYJXgii3CC2uGd0NrFV6UoazXrlFepQxEeFEwcrk1l7GWQmN8Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=L6y66yUxlQCuDtTKUVE0naL1zKUgsBlYMEUqArZCk+0=;
 b=HJTfbB6QyFwl8v8/ogg6w+rOqd05aSTLbiy0O6gvz7pRtv9gdlYi3+EJMqxSRsbhoyoFp6rUCIL1sx/5SxAMFm5EJjTD0HutakUYYfHCQ7pDUvh/61WZJdp/OJ1ytD38ef6bnH8HCofo1b8rP2VzKZpRUboWNtEfiDYEcIN9gDIdrDD9xXVVh0LzpFrlMo16QeyNMl071OCWMt6ektyIZA31w/OLGFEX28RBuFEkABeXEIS+G0z89b5g74gYDUU+bsQTk7R6aYXkCIUKZ+byQdc0C8LwdsNQN8w7FSiwGKEL4GyNVGCqUHMxKaV8rsMeLE8do3Bo5GVFSkcMhVwdmg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=L6y66yUxlQCuDtTKUVE0naL1zKUgsBlYMEUqArZCk+0=;
 b=nN0oeOsZfMEXjzHD5GgXtM0IOGCoHGUAYJfAMNZQoJSSyPxVSnaxvQbBV9z18eY7AqhZqcVpMMrrzob3hpP7607x8JM4m67thuoXloEu9L2xGO1HrFckNItgSaJ3TlZdc+rd7Aom5mU2OQJaSBLewdl8jgUAFhiFNk8QEk7b+H0=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>, Wei Liu
	<wl@xen.org>, Ian Jackson <iwj@xenproject.org>
Subject: [PATCH v4 03/10] libx86: introduce helper to fetch msr entry
Date: Fri,  7 May 2021 13:04:15 +0200
Message-ID: <20210507110422.24608-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR1P264CA0033.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:2f::20) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d63b2104-3c98-4d8d-78f3-08d911483cf3
X-MS-TrafficTypeDiagnostic: DM6PR03MB4475:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4475B3AF9D901A90077FD83B8F579@DM6PR03MB4475.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: d63b2104-3c98-4d8d-78f3-08d911483cf3
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 11:07:00.9543
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: L9wg+0jwBTtXn9ER+EzgzyXGOOmDjbLhmZEAd8SuDWccrzQoRYEab4i/sTipZVV4Ca4PJjWZbnABxLeZHOUziQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4475
X-OriginatorOrg: citrix.com

Use the new helper to replace the code in x86_msr_copy_from_buffer.
Note the introduced helper should not be called directly; instead,
x86_msr_get_entry should be used, which properly deals with both const
and non-const inputs.

Note this requires making the raw fields uint64_t so that they can
accommodate full-width MSR values, and in turn removing the truncation
tests.

Suggested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - New in this version.
---
 tools/tests/cpu-policy/test-cpu-policy.c | 48 +++++++++++++++++++-----
 xen/include/xen/lib/x86/msr.h            | 19 +++++++++-
 xen/lib/x86/msr.c                        | 41 ++++++++++----------
 3 files changed, 75 insertions(+), 33 deletions(-)

diff --git a/tools/tests/cpu-policy/test-cpu-policy.c b/tools/tests/cpu-policy/test-cpu-policy.c
index 81de9720c8d..854883fbb39 100644
--- a/tools/tests/cpu-policy/test-cpu-policy.c
+++ b/tools/tests/cpu-policy/test-cpu-policy.c
@@ -389,16 +389,6 @@ static void test_msr_deserialise_failure(void)
             .msr = { .idx = 0xce, .flags = 1 },
             .rc = -EINVAL,
         },
-        {
-            .name = "truncated val",
-            .msr = { .idx = 0xce, .val = ~0ull },
-            .rc = -EOVERFLOW,
-        },
-        {
-            .name = "truncated val",
-            .msr = { .idx = 0x10a, .val = ~0ull },
-            .rc = -EOVERFLOW,
-        },
     };
 
     printf("Testing MSR deserialise failure:\n");
@@ -744,6 +734,43 @@ static void test_cpuid_get_leaf_failure(void)
     }
 }
 
+static void test_msr_get_entry(void)
+{
+    static const struct test {
+        const char *name;
+        unsigned int idx;
+        bool success;
+    } tests[] = {
+        {
+            .name = "bad msr index",
+            .idx = -1,
+        },
+        {
+            .name = "good msr index",
+            .idx = 0xce,
+            .success = true,
+        },
+    };
+    const struct msr_policy pc;
+    const uint64_t *ec;
+    struct msr_policy p;
+    uint64_t *e;
+
+    /* Constness build test. */
+    ec = x86_msr_get_entry(&pc, 0);
+    e = x86_msr_get_entry(&p, 0);
+
+    printf("Testing MSR get leaf:\n");
+
+    for ( size_t i = 0; i < ARRAY_SIZE(tests); ++i )
+    {
+        const struct test *t = &tests[i];
+
+        if ( !!x86_msr_get_entry(&pc, t->idx) != t->success )
+            fail("  Test %s failed\n", t->name);
+    }
+}
+
 static void test_is_compatible_success(void)
 {
     static struct test {
@@ -864,6 +891,7 @@ int main(int argc, char **argv)
 
     test_msr_serialise_success();
     test_msr_deserialise_failure();
+    test_msr_get_entry();
 
     test_is_compatible_success();
     test_is_compatible_failure();
diff --git a/xen/include/xen/lib/x86/msr.h b/xen/include/xen/lib/x86/msr.h
index 48ba4a59c03..9d5bcfad886 100644
--- a/xen/include/xen/lib/x86/msr.h
+++ b/xen/include/xen/lib/x86/msr.h
@@ -17,7 +17,7 @@ struct msr_policy
      * is dependent on real hardware support.
      */
     union {
-        uint32_t raw;
+        uint64_t raw;
         struct {
             uint32_t :31;
             bool cpuid_faulting:1;
@@ -32,7 +32,7 @@ struct msr_policy
      * fixed in hardware.
      */
     union {
-        uint32_t raw;
+        uint64_t raw;
         struct {
             bool rdcl_no:1;
             bool ibrs_all:1;
@@ -91,6 +91,21 @@ int x86_msr_copy_from_buffer(struct msr_policy *policy,
                              const msr_entry_buffer_t msrs, uint32_t nr_entries,
                              uint32_t *err_msr);
 
+/**
+ * Get a MSR entry from a policy object.
+ *
+ * @param policy      The msr_policy object.
+ * @param idx         The index.
+ * @returns a pointer to the requested leaf or NULL in case of error.
+ *
+ * Do not call this function directly and instead use x86_msr_get_entry that
+ * will deal with both const and non-const policies returning a pointer with
+ * constness matching that of the input.
+ */
+const uint64_t *_x86_msr_get_entry(const struct msr_policy *policy,
+                                   uint32_t idx);
+#define x86_msr_get_entry(p, i) \
+    ((__typeof__(&(p)->platform_info.raw))_x86_msr_get_entry(p, i))
 #endif /* !XEN_LIB_X86_MSR_H */
 
 /*
diff --git a/xen/lib/x86/msr.c b/xen/lib/x86/msr.c
index 7d71e92a380..4b5e3553e34 100644
--- a/xen/lib/x86/msr.c
+++ b/xen/lib/x86/msr.c
@@ -74,6 +74,8 @@ int x86_msr_copy_from_buffer(struct msr_policy *p,
 
     for ( i = 0; i < nr_entries; i++ )
     {
+        uint64_t *val;
+
         if ( copy_from_buffer_offset(&data, msrs, i, 1) )
             return -EFAULT;
 
@@ -83,31 +85,13 @@ int x86_msr_copy_from_buffer(struct msr_policy *p,
             goto err;
         }
 
-        switch ( data.idx )
+        val = x86_msr_get_entry(p, data.idx);
+        if ( !val )
         {
-            /*
-             * Assign data.val to p->field, checking for truncation if the
-             * backing storage for field is smaller than uint64_t
-             */
-#define ASSIGN(field)                             \
-({                                                \
-    if ( (typeof(p->field))data.val != data.val ) \
-    {                                             \
-        rc = -EOVERFLOW;                          \
-        goto err;                                 \
-    }                                             \
-    p->field = data.val;                          \
-})
-
-        case MSR_INTEL_PLATFORM_INFO: ASSIGN(platform_info.raw); break;
-        case MSR_ARCH_CAPABILITIES:   ASSIGN(arch_caps.raw);     break;
-
-#undef ASSIGN
-
-        default:
             rc = -ERANGE;
             goto err;
         }
+        *val = data.val;
     }
 
     return 0;
@@ -119,6 +103,21 @@ int x86_msr_copy_from_buffer(struct msr_policy *p,
     return rc;
 }
 
+const uint64_t *_x86_msr_get_entry(const struct msr_policy *policy,
+                                   uint32_t idx)
+{
+    switch ( idx )
+    {
+    case MSR_INTEL_PLATFORM_INFO:
+        return &policy->platform_info.raw;
+
+    case MSR_ARCH_CAPABILITIES:
+        return &policy->arch_caps.raw;
+    }
+
+    return NULL;
+}
+
 /*
  * Local variables:
  * mode: C
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 11:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123963.233962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyKA-0004Iq-Qc; Fri, 07 May 2021 11:07:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123963.233962; Fri, 07 May 2021 11:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyKA-0004Id-Mj; Fri, 07 May 2021 11:07:14 +0000
Received: by outflank-mailman (input) for mailman id 123963;
 Fri, 07 May 2021 11:07:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HO4=KC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leyK9-0004Ft-9F
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 11:07:13 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d44f0357-35f2-497e-9f5d-e0d4b2da5183;
 Fri, 07 May 2021 11:07:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d44f0357-35f2-497e-9f5d-e0d4b2da5183
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620385632;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=gLlCE9etf3gr9KUeqpSgZ+qsCzt5Isc3p8Bma0wZMcA=;
  b=V31Xg3E7EXRueT539etYQesTh0S6n2vxkEi6ID7/nn/YiG3NmNvtQtHq
   Fq/jWEt7jtQbX/lPWh8jxHLvBsjxHdeA9F9wr8q6xL+pu+gscGXRqYGCH
   FkEiDh4A6C4cghVYNuf39l0+iEqf1qbUeJE0aspu0V5S6Qrarqqbe2Nfz
   U=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 0XEVxeCkgPP3MJ8zvkOM9bW3fa7tRXOPah6AlJsP6IJK/7PSvjmNkIANF+uJHcu9GdVauWf2/R
 2uX+pmAc4MJneBqeE7IH8o6XMAI4xRSCOQbB0b6bDopLeWSm4i9UFIJKmU4Wqm+81oyV4s3KgH
 n2etB9Y6HwLrxv5xowRZPmKu7sSViByCYndlIsegkhTAi5oJu/+nZHaEP2QJR9DuHEbynlALPL
 yhsCZD5CzjKtaqTQoimhgAw4hgSpuO1gDVNgf3m8GzELCUuJAOtSMjRv6pG3j9UDxrFfpbSKpl
 NtA=
X-SBRS: 5.1
X-MesageID: 43096222
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:gXwzZ6zyPKc9zSPfDdMmKrPw1r1zdoMgy1knxilNoHxuH/BwWf
 rPoB17726RtN91YhsdcL+7V5VoLUmzyXcX2/h1AV7BZniEhILAFugLgbcKqweKJ8SUzJ8+6U
 4PSclD4N2bNykGsS75ijPIb+rJFrO8gd+VbeS19QYScelzAZsQiDuQkmygYzZLrA8tP+teKL
 OsovBpihCHYnotYsGyFhA+LpL+T42iruOeXfYebSRXkDWzsQ==
X-IronPort-AV: E=Sophos;i="5.82,280,1613451600"; 
   d="scan'208";a="43096222"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TtBUQR/5Vfr9FzBxfvpAet6RymtnqRWfcpqg4BtAiwED9JhDre5rvgf+jBR8qsu3KMdHccTgmxod7yMLwRllWXM/dXMmrzTSzDbWrPUTwEkyHRkEPwNl8yHS3WlGtea34p/kXzbk9KsqNsYNKR2ggItS00HNgCT1t2ADYonFf4qRt89nnxQIOuAqftwSlTFTtT6PdhML6XVgFytOHq5atn/TjcXtHyaEu0OiLQHYHDv5A+X8Rrb4FxCfwkm+soJ858clzQa8K9Jn+zjAhQcSChXqDi9EdKNo1t5Qy571E7u4AeKDmGPsvEYVQKr9w1HfO3B+lqeVvueQOavd9bWw+Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZFiEJE/7tweLfCqiWGwMSdcuoNYoTfrGmGb2P+8nYGc=;
 b=lhP3T70AXPGiR/NGd+UtLjlkgEAHJfirCuYZGnMLdbRfRWrDvUjT3poFh/w2WCCkqA4L0x1A2W3b06v7uCG4Q+DrIExAJu2WV9/0foygMMFoaC6Rhhi9IBgNW4QrAeC6Zw81h8IzPWwdK6BfcNcfxfwcRVjxVO//dDN4M1Y6+rHHImVfT6W0yH+YwV+oYr+ilcUL4EQIfjm04iYNHa2N4j0HSRtVrlBMKd896Nli49b7kwfVAUQ6VfRRVbQCpu5gGrokLBj3ZZZb+/CutfLc7zNTM6BtPvd+jfbE6cBAnTTSQ/lftJsAMzT5AXiJ0lmjtd4v1thutKLAfkXZpaUluQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZFiEJE/7tweLfCqiWGwMSdcuoNYoTfrGmGb2P+8nYGc=;
 b=Hnz1VIO9BchCJEoq8j5sJ+hRvrx32LDiNL/JfzFk7naRAXN6s+FMT0zLhOHMtORX29J8enBPYqDGfG1hv0gyVthFThIr/z3T5Kh3VVVHvtHASpEVVRXQQSdP63bpgumBQV1n1dJjh60eU/CQQAlLsIeIF7IHESiCUEQn0tTEp5g=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v4 04/10] libs/guest: allow fetching a specific MSR entry from a cpu policy
Date: Fri,  7 May 2021 13:04:16 +0200
Message-ID: <20210507110422.24608-5-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0055.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::19) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 5beccb37-55fd-4398-1d13-08d911484079
X-MS-TrafficTypeDiagnostic: DM6PR03MB4475:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB44758B09089D36328509588F8F579@DM6PR03MB4475.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 5beccb37-55fd-4398-1d13-08d911484079
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 11:07:06.8474
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: q7nbra2GcX7zG0CQF8deHXrdv9YigLlAXYqIGkqIFsFrsf2CuelA/FRg8Qf3KXEv7sL7VfCKqaVZwClQP75c/g==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4475
X-OriginatorOrg: citrix.com

Introduce an interface that returns a specific MSR entry from a cpu
policy in xen_msr_entry_t format.

This is useful so that callers can peek at data in the opaque
xc_cpu_policy_t type.

No caller of the interface is introduced in this patch.
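
As a minimal sketch of the lookup this interface performs: v1 of the patch
used an open-coded binary search over the index-sorted MSR entries array
(now replaced by x86_msr_get_entry()). The struct and function names below
are invented for illustration and are not the library's API.

```c
#include <stdint.h>
#include <stddef.h>
#include <errno.h>
#include <assert.h>

/* Illustrative stand-in for an MSR policy entry; not the real type. */
typedef struct {
    uint32_t idx;
    uint64_t val;
} msr_entry_t;

/*
 * Binary search over an array sorted by ascending MSR index.  Returns a
 * pointer to the value for 'msr', or NULL with errno set to ENOENT, the
 * same error convention xc_cpu_policy_get_msr() uses on a miss.
 */
static const uint64_t *msr_find(const msr_entry_t *entries, size_t nr,
                                uint32_t msr)
{
    size_t lo = 0, hi = nr;

    while ( lo < hi )
    {
        size_t mid = lo + (hi - lo) / 2;

        if ( entries[mid].idx == msr )
            return &entries[mid].val;
        if ( entries[mid].idx < msr )
            lo = mid + 1;
        else
            hi = mid;
    }

    errno = ENOENT;
    return NULL;
}
```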

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Use x86_msr_get_entry.

Changes since v1:
 - Introduce a helper to perform a binary search of the MSR entries
   array.
---
 tools/include/xenguest.h        |  2 ++
 tools/libs/guest/xg_cpuid_x86.c | 20 ++++++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 7001e04e88d..8e8461b0660 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -747,6 +747,8 @@ int xc_cpu_policy_update_msrs(xc_interface *xch, xc_cpu_policy_t *policy,
 int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t *policy,
                             uint32_t leaf, uint32_t subleaf,
                             xen_cpuid_leaf_t *out);
+int xc_cpu_policy_get_msr(xc_interface *xch, const xc_cpu_policy_t *policy,
+                          uint32_t msr, xen_msr_entry_t *out);
 
 /* Compatibility calculations. */
 bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 460512d063b..cdfc79a86e7 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -883,6 +883,26 @@ int xc_cpu_policy_get_cpuid(xc_interface *xch, const xc_cpu_policy_t *policy,
     return 0;
 }
 
+int xc_cpu_policy_get_msr(xc_interface *xch, const xc_cpu_policy_t *policy,
+                          uint32_t msr, xen_msr_entry_t *out)
+{
+    const uint64_t *val;
+
+    *out = (xen_msr_entry_t){};
+
+    val = x86_msr_get_entry(&policy->msr, msr);
+    if ( !val )
+    {
+        errno = ENOENT;
+        return -1;
+    }
+
+    out->idx = msr;
+    out->val = *val;
+
+    return 0;
+}
+
 bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
                                  xc_cpu_policy_t *guest)
 {
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 11:07:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123966.233974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyKF-0004pB-5c; Fri, 07 May 2021 11:07:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123966.233974; Fri, 07 May 2021 11:07:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyKF-0004ox-1r; Fri, 07 May 2021 11:07:19 +0000
Received: by outflank-mailman (input) for mailman id 123966;
 Fri, 07 May 2021 11:07:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HO4=KC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leyKE-0004jI-1X
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 11:07:18 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f452d81-4586-43ab-b104-aa97b66944c0;
 Fri, 07 May 2021 11:07:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f452d81-4586-43ab-b104-aa97b66944c0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620385636;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=DK5Oj288zwi/pAzruQfeshpv1Ya6qkXUi9NgpMqGtpg=;
  b=R7tY8x0G4c2mpGV0JrgGN5/sF/hFILVOrCqtoOwtMot4ZXzKjG6FVmLx
   j0agdiTrkrKmF0M3cJHIo6qHWpCePJGu/BX7zEx0oNQaLpN2ghB7otl+E
   4nPGhSBRpU7TFpHXFj4267QIiXQku7hpJaiAifqkA8T5tlgTvqyzxAqWq
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: wBLs1wbGBlphFIMXsJra93bid4GkQexU8WJqk2kgcZC+JcxMxYIUsQW5WyMab5z+kY4Bm/1F6P
 AIxsXPswh9K1ClVbsKYj+z6sy4aQWSMIi1WEHNJI0JRvEgOdA06qvhUYt42l11lur6Wi/kIcBS
 y65/M3lZKyEU/TfgcEp5dwEouxFAYQunc+3Sht4qCMTfuZy6YRMMaqh9634ye1enSfMCOIpjjK
 i7w/faXl7Nf6s+sedB19DtbS4eSOBZ89vJ8IWRtYJOoekh9boOB1yY12AkZ56Si/AY05zqHtGp
 ut0=
X-SBRS: 5.1
X-MesageID: 43304421
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:/50QaKs7KHJAbbCB6XXX9Z807skCRIMji2hC6mlwRA09TyXGra
 2TdaUgvyMc1gx7ZJh5o6H5BEDyewKmyXcV2/hbAV7GZmXbUQSTXeVfBOfZogEIXheOj9K1tp
 0QOZSWaueAamSS5PySiGbXLz9j+qjgzEnCv5a8854Zd3AOV0gW1XYaNu/0KC1LbTgDIaB8OI
 uX58JBqTblU28QdN6HCn4MWPWGj8HXlbr9CCR2SyIP2U2rt3eF+bT6Gx+X0lM1SDVU24ov9m
 DDjkjQ+rijifem0RXRvlWjoKi+2eGRhOerNvb8yvT9GQ+cyTpAo74RGYFqiQpF4d1HLmxa1e
 Uk7S1Qe/iboEmhBF1d6SGdpjUIlgxepkMKgGXo/UfLsIj3Qik3BNFGgp8cehzF61A4tNU5y6
 5T2XmF3qAney8osR6Nk+QgbSsa4XZcYEBS4tL7hEYvGLf2qIUh2LD32XklWKvoMBiKmbzPId
 Mefv00vswmD29yR0qpzlWH7ubcIUgOIg==
X-IronPort-AV: E=Sophos;i="5.82,280,1613451600"; 
   d="scan'208";a="43304421"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SwW2R73A/0+ioVdnKEkHQHsR7vMSjrqBtJ2MkI6uowitw7PkCupMOUGHkwBa8SlzfIxB1dawC9P3Rn/WnwdP3+dF1wdEWXT9vrP5JDuE1JTxvxeMF5w5vEBKKcB8WHBJQvuv4HW1cgbyAVaiQsEkDTWpaUXuCFYOQDY5GZjFcubOn5iP/qoYQvagpH4BRGvsByQ+WIggs+KgipPiZiJy9/i2psHr8EyGSEFKTHouZEsAAyZ//W63rvax7QvAsosYzAIG0Hy4TGvTIdlWRGpI6Ay7k2QRC465ltySk+bn3RIzerzXaJe3zeUPI89v/HObxrrFnKeXWkW5eHH1Di137A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v1TbxRetccB3cqkr4MGIU/LGmnU1UmJDCQ3gYxtvhWg=;
 b=KNl9EbkDfJ/eVyq90odXqQsnEOIVgKOv6PGczUZtpQpMsyyyeKCg78P+kITAfCHnS60zGpUCS0E9SCUVNKYs2R9VpEV+Jemz7dtsbymu7/YVYoqQMuSDZTgBAxDikrpQho8xqdLuWf+Wo2CXdlLEgmDq8RZVgtlPuFweAc8EhwRdenprJljyebef673uBmTBXrID7YS8JPzvzV/wUfkJJAM9IJWPFXjdh9bAXFsviKBXdYYiM5vaJP2cq45TKiVLLNAX086XA3G+4R3ZcDiRfBvrxjozHHLvfveh5p8X0JIDNg8oa1QzzrtEIeMoobrGsk18TmmeWtpvTNwpC/qw3w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v1TbxRetccB3cqkr4MGIU/LGmnU1UmJDCQ3gYxtvhWg=;
 b=SrwvTE9EKfw1wqlj6BuH9oAidxpkVESk8I76+y0nfOkpKYHYT66bkZ/JKjMn6vrMWMat7hZMvG8SULUQx8LJKXQSsr5r4YDoXf7SthpTnJy7uWuHNB1HEWBz6XswLN4DVcZdSrNYgobbHWi+Lw4B84HU1XplpCKKBd1icJ7Zn8A=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v4 05/10] libs/guest: make a cpu policy compatible with older Xen versions
Date: Fri,  7 May 2021 13:04:17 +0200
Message-ID: <20210507110422.24608-6-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0062.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 79327a89-74ea-48ae-fe47-08d91148444c
X-MS-TrafficTypeDiagnostic: DM6PR03MB4475:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB44751F66998272AEE46142138F579@DM6PR03MB4475.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3513;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Nji8Zv8BmCfGnXxEJYveZmQ11ipTMPUIcDHyEsm9HmvEw0LrtFWogjMd01JCagj/IJ35FZDQJQi6f45rhiIn5/BTfI0eDkPaokwjHxI8V+qHUlcEKyE03DtFGew4t2NOKy7TMvdWM8hMl6hYTT3PL8rnlxRPvgMIY6NhYpYu+Cj7jQyp1soEp57v8VD6JpC2OhDExL0ukS5V31yaxddSNJcg2a7DZF9RphrWHoK1DON2OLQwHuyI5sZBYDgqb3oAc5Aux7iiClrd2y8HyMgqOvsGvBhPhayQnU0KrY4wWB1PnmpLzH2LZ82bsZJJ77aMYw/XOe5DhOsNroUVjlE5MN1rwyhtSLLoqr46yIUa9ymYAVFe5HQokFT4G8/H2j8Gw9kjVIJM32Mxqt82mItWtEzr8YJFl0vAfrKa7TK96JuqlV3s0aK60giLaSdcPF/N0jY+3iyy4/i4+StHz3RBceWJB5f2KylzVYdIZY2VeS0bCQUQ6Z+gBSfpsgA17VhqQrUxBew7X87N8MnqPIHH/EurP/YPef1j+2F50tz2uSFKRznbNxaB2p3H3p9HK181pEtfQnuZWT74A1axnjWPMw7+LcRUPOznJTXF/YQs2KE=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(376002)(346002)(39860400002)(366004)(8676002)(6496006)(26005)(66946007)(5660300002)(66556008)(66476007)(83380400001)(478600001)(956004)(1076003)(6916009)(316002)(38100700002)(54906003)(6486002)(2616005)(86362001)(8936002)(36756003)(4326008)(186003)(16526019)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?dytKaFpFZ2tyWkU1aGY1bXA4d1l1UGFxaHdIRktjYmgwZDBsNVBoQWRWN2U1?=
 =?utf-8?B?dm81R3JUNWk2cmJnQnhtZEZVbi9DOWNCQ255VkNLL0RzMjAxcWhXWW5wTzNI?=
 =?utf-8?B?Q01aNUx6NjAvcGpNdHhRYlR6cDV1RTMvM2VxMmVaUzZHQVZOaDZWS3U5N1Fq?=
 =?utf-8?B?eUltaXQrTWxiSExtUDZ1Z0ZNMU44Lzg3V3RRZGRqVHozR20zbm85eVZaWUsz?=
 =?utf-8?B?eDZ5TEh5SzF2ZEVRamxtZ1pXNkg0cE1ZSlBQaUdjNmhJeExwdGVkSjZMQkw5?=
 =?utf-8?B?bHRPdTFSTWtSRjF0ZHVrZElOL2VJOStwNzNjNFUzNkhDdGdHVGlMbVpHVm91?=
 =?utf-8?B?alpsZG1PWlBqQnVPaDByUUUyL05vckw0Q0x5QkVydUwvQ2ZnUUFWWGpkd2h1?=
 =?utf-8?B?SkJ3R2VNeXBONnpaVnFOZFlReW42SUdFMXhhaUQ5L0xMcmFSQXdoaEp5bVZh?=
 =?utf-8?B?b3NCTEY2RXhXUG9uRFFGZURWNTg1N1NDMXo4cXhjazFlN1IxK3F2NXllUmhP?=
 =?utf-8?B?emU3S3NNcFJnUXo5WFJvZUJqcm5yYW9hY0tubzVrdE54dVN1WGxkRDlDVjRV?=
 =?utf-8?B?VGgreDhmUytkUGtnSk9CNHB1OThwRG90d2ljbWltZFlRQkZiTHphODlPNHgx?=
 =?utf-8?B?c3FObzh6ZHZwNldkaGV3SmRPNXlXOHl6OXdPVHo2N2ZENlhlZVNQUWhNRU50?=
 =?utf-8?B?dlAyQmdTN3h1aTh2clhZQXl4enlwak5MU2xWbUxpTXZ6V3U5OGZsVE8vTUo2?=
 =?utf-8?B?RG5YQkVCdm80UGNpQ1FYNFQ2NFBybjM1MXY5N0hYbng5bHZRM2lic1MyNC9h?=
 =?utf-8?B?RWc1dE14VE1UNk5jSzR2VXc4REltNmI5QnZuUmhGa1N4Nnp4OHg0bTZkeElM?=
 =?utf-8?B?OVprNjQ3dzdkdFlXSzdEN0x2ZHBQM1I3SEc2YjRzZ01KWHRuQWhXdHNab3ZU?=
 =?utf-8?B?SkxXeVQ0QUlrb2lUSnFOVytveVBJaTQ0NlpvL1BBNTlReDZiWGpFb0ZTTWp4?=
 =?utf-8?B?NVR1SDV1L1pEcGRsWlNXOTM2TDdYdk8wa1dmM1V0TTlLb2Uwb3NHTmp6b2o0?=
 =?utf-8?B?ZVlmcElrcjZVNkRENkZuZHdzL0ZVbWduNHRVbmY4R1BDVy9oOU1Nd2NVL2sr?=
 =?utf-8?B?WjNySWJlUEw4cUpiSHJJcmlZOXhBMmdGc0dUTGVDVHBQQ0FtZDRhRHloMDRR?=
 =?utf-8?B?OTJmVDRiY2xNd3ptNGliTXBzSmNEZ1A0cFlYTS9zdGkzVTRnYjNlM05YK29R?=
 =?utf-8?B?b1hQNlRjYkI0djNmUmRCWGRGS1FCbWpsYnRyUjQvc1kvMHZaU1IxR3grZVdq?=
 =?utf-8?B?T1VyZ3FCVlQ4Y29kWkxsSjN2VWhpbzk1MnhILzFhSTNYSVpYUEJPZlZWQStl?=
 =?utf-8?B?dWtXRmkxYS9LbDJ1R2s2RHE0bk5BbFlJVms3K1RaNTE1VmQvd2VWMlBXNkcy?=
 =?utf-8?B?VVgzSlZGYnBvTTFRYmFXQjlmNkZTclZFVU1QZWhHYU54dnYwOWJtUk1mdXMx?=
 =?utf-8?B?TFl6alMzZXNyMmZDQ1dCWk5ObDB0bys3ZXR5N1dTMzNuN3dpaHpOV3FqY3ZY?=
 =?utf-8?B?Ym5DU0FxdGNzRlM0TnB4NVlWcURRbjFrU244RWZkcElML1VWUW1TeUNsZ2F2?=
 =?utf-8?B?akptM01vdDVmc2poZm56ZGNrdURSdGlWZU1nVjk2TWdrN3Znbnd2VVY3U1hP?=
 =?utf-8?B?Qk1PaFEwYlJrT1Urd3B1RTNwTEczeHo4M2FPa2RHQi83RW9WUXNJWVRrd2dk?=
 =?utf-8?Q?kZAxIrzSXt1bxsSVqzBZ/8AP0ZgeEIc7PgAYxgS?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 79327a89-74ea-48ae-fe47-08d91148444c
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 11:07:13.2691
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 1MQwioud98NuvP4W1aNsUynhRKCtLm7wRUBkg0xjHdS54zRjy5DFzN3Eb+tWL6SfWhMAaNPr9iCunGcNN371Jw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4475
X-OriginatorOrg: citrix.com

Older Xen versions used to expose some CPUID bits which are no longer
exposed by default. In order to keep behavior compatible for guests
migrated from versions of Xen that don't encode the CPUID data in the
migration stream, introduce a function that sets the same bits as older
Xen versions.
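
The leaf clamping applied by the new function can be sketched standalone as
below; the struct and field names are made up for the sketch, and the clamp
values mirror those in xc_cpu_policy_make_compat_4_12().

```c
#include <stdint.h>
#include <assert.h>

static inline uint32_t min_u32(uint32_t a, uint32_t b)
{
    return a < b ? a : b;
}

/* Illustrative flattened view of the CPUID policy maxima. */
struct compat_policy {
    uint32_t basic_max_leaf;   /* CPUID leaf 0 EAX */
    uint32_t feat_max_subleaf; /* CPUID leaf 7 EAX */
    uint32_t extd_max_leaf;    /* low bits of CPUID leaf 0x80000000 EAX */
};

/* Clamp maximum leaves to the ones supported on pre-4.13 Xen. */
static void clamp_to_4_12(struct compat_policy *p)
{
    p->basic_max_leaf = min_u32(p->basic_max_leaf, 0xd);
    p->feat_max_subleaf = 0;
    p->extd_max_leaf = min_u32(p->extd_max_leaf, 0x1c);
}
```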

This is pulled out from xc_cpuid_apply_policy which already has this
logic present.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Rename function to xc_cpu_policy_make_compat_4_12.

Changes since v1:
 - Move comments and explicitly mention pre-4.13 Xen.
---
 tools/include/xenguest.h        |  4 +++
 tools/libs/guest/xg_cpuid_x86.c | 58 ++++++++++++++++++++++++---------
 2 files changed, 47 insertions(+), 15 deletions(-)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 8e8461b0660..576e976d069 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -754,6 +754,10 @@ int xc_cpu_policy_get_msr(xc_interface *xch, const xc_cpu_policy_t *policy,
 bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
                                  xc_cpu_policy_t *guest);
 
+/* Make a policy compatible with pre-4.13 Xen versions. */
+int xc_cpu_policy_make_compat_4_12(xc_interface *xch, xc_cpu_policy_t *policy,
+                                   bool hvm);
+
 int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
 int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
                           uint32_t *nr_features, uint32_t *featureset);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index cdfc79a86e7..fccbc54a400 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -441,6 +441,7 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
     unsigned int i, nr_leaves, nr_msrs;
     xen_cpuid_leaf_t *leaves = NULL;
     struct cpuid_policy *p = NULL;
+    struct xc_cpu_policy policy = { };
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
     uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
     uint32_t len = ARRAY_SIZE(host_featureset);
@@ -505,21 +506,9 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
 
     if ( restore )
     {
-        /*
-         * Account for feature which have been disabled by default since Xen 4.13,
-         * so migrated-in VM's don't risk seeing features disappearing.
-         */
-        p->basic.rdrand = test_bit(X86_FEATURE_RDRAND, host_featureset);
-
-        if ( di.hvm )
-        {
-            p->feat.mpx = test_bit(X86_FEATURE_MPX, host_featureset);
-        }
-
-        /* Clamp maximum leaves to the ones supported on 4.12. */
-        p->basic.max_leaf = min(p->basic.max_leaf, 0xdu);
-        p->feat.max_subleaf = 0;
-        p->extd.max_leaf = min(p->extd.max_leaf, 0x1cu);
+        policy.cpuid = *p;
+        xc_cpu_policy_make_compat_4_12(xch, &policy, di.hvm);
+        *p = policy.cpuid;
     }
 
     if ( featureset )
@@ -921,3 +910,42 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
 
     return false;
 }
+
+int xc_cpu_policy_make_compat_4_12(xc_interface *xch, xc_cpu_policy_t *policy,
+                                   bool hvm)
+{
+    xc_cpu_policy_t *host;
+    int rc;
+
+    host = xc_cpu_policy_init();
+    if ( !host )
+    {
+        errno = ENOMEM;
+        return -1;
+    }
+
+    rc = xc_cpu_policy_get_system(xch, XEN_SYSCTL_cpu_policy_host, host);
+    if ( rc )
+    {
+        ERROR("Failed to get host policy");
+        goto out;
+    }
+
+    /*
+     * Account for features which have been disabled by default since Xen 4.13,
+     * so migrated-in VM's don't risk seeing features disappearing.
+     */
+    policy->cpuid.basic.rdrand = host->cpuid.basic.rdrand;
+
+    if ( hvm )
+        policy->cpuid.feat.mpx = host->cpuid.feat.mpx;
+
+    /* Clamp maximum leaves to the ones supported on pre-4.13. */
+    policy->cpuid.basic.max_leaf = min(policy->cpuid.basic.max_leaf, 0xdu);
+    policy->cpuid.feat.max_subleaf = 0;
+    policy->cpuid.extd.max_leaf = min(policy->cpuid.extd.max_leaf, 0x1cu);
+
+ out:
+    xc_cpu_policy_destroy(host);
+    return rc;
+}
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 11:07:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.123969.233986 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyKM-0005W4-O0; Fri, 07 May 2021 11:07:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 123969.233986; Fri, 07 May 2021 11:07:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1leyKM-0005Vp-KI; Fri, 07 May 2021 11:07:26 +0000
Received: by outflank-mailman (input) for mailman id 123969;
 Fri, 07 May 2021 11:07:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=4HO4=KC=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1leyKK-0004jI-Bh
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 11:07:24 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0c0e2fe-2e4b-4ac0-add2-deba8a8c49ac;
 Fri, 07 May 2021 11:07:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0c0e2fe-2e4b-4ac0-add2-deba8a8c49ac
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620385643;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=t57/ixwUki9bA5ay9uC2e1ZSSlVDTWnCzfy83CT1KWI=;
  b=cjilg0uq7HyTFE6dFoRv90P8m6jP9ib+gBgLeqsJajLt4/wnFj6TV43i
   vXHvuesBiCVhU+b8OE65jtjKKhgCzgjMkhtFqoLpKN272tn7CoVquSP61
   onk+XZ775x4w2V3n5T2sdWtuxUbehr9k7K9+CEuoKawo7tZwxaRN2Wph6
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: qGDFAcDFgm/sItkJgbJlWxBnHS8sQh5BU0nuZEXQ2/BK52qUIzURyAA2jR4/J2+a632WmT3oT/
 AesPYv35r1nfyuocTc6BCzYPxjIjPCrhemlJvwBQFpJPiGZSoIk9HaYluHHtBZ/gySXW7azYZh
 ovlQEjSGIETLP8Brr66Z+7orkQng565sQ34Qld7rqeg0k+KnSk4rZQEwIkYVe1Zcfr/7PIEFh3
 ihq7nW3e3vjrpRj7nqlrxUuWFC5nvgXddYgskCtxaR4khERvpNz34OB4MG4EegS2xAOSfB1X5m
 wks=
X-SBRS: 5.1
X-MesageID: 43403575
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:LIgl8K140I0oRdy59ZU15AqjBYByeYIsimQD101hICG9Lfb0qy
 n+pp4mPEHP4wr5AEtQ4exoS5PwOk80lKQFqrX5WI3PYOCIghrNEGgP1+rfKnjbalTDH41mpO
 1dmspFebrN5DFB5K6UjjVQUexQpuVvm5rY5ts2uk0dKD2CHJsQjTuRZDz6LmRGAC19QbYpHp
 uV4cRK4xC6f24MU8i9Dn4ZG8DeutzijvvdEFI7Li9izDPLoSKj6bb8HRTd9AwZSSlzzbAr9n
 WAuxDl55+kr+qwxnbnpiDuBtVt6ZXcI+l4dYyxY/suW3bRY8GTFcZcsoi5zXEISSeUmRMXeZ
 f30lMd1o9ImgnslymO0GbQMk/boXsTAjbZuCOlqGqmrsrjSD0gDc1dwYpfbxvC8kIl+Mpxya
 RRwguixuxq5D777VDADuLzJmZXf4uP0AkfeOUo/jViuEslGcpsRKkkjQto+bs7bVPHAbEcYZ
 tT5ZvnlYhrmHuhHgDkV0dUsaORYkg=
X-IronPort-AV: E=Sophos;i="5.82,280,1613451600"; 
   d="scan'208";a="43403575"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PxRfu4w3h0ZZrzPVF6oKFTfyGZW4ANJAE8TIFNS9LkOGMZkfqNyba8H5av/kn/aMFqEs/P1KoY36cNlTB0nocuJrGFPx+9CTw1G9PCHsTMdrgUgszSvHZ2LAZN/d2ordhKpzPqjptPXUBmb4G6kvRTAQV5N77UNf75pi7ktXNXt1sfPri7CcR7rYMda9OzyJYlv7NQnn3SRMIzJNUySicwFiRxEW2m68yXPEr2SHmqRqXPNamFrStqiSbP12zyI6INziFN4kYTYShrHB8ZhavVRS+UygGy34nrPYfU3/e+dz0Il5dTPZxdPG+5VQgdtosItySFM4IUa5CTHit6CjVQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kNLcteEg52/2FZe+Swj7NwnygUOC6JFeRCiZi3dQW6s=;
 b=Ua2vYJCR5wOMfeovySQRhKmGZWIbHhNl4k/gfLkavL+T4Yem6FIfP5lqWYuDVX+NX4q1ZSBGFe9BB5vHhogjpwDTmwkWz+OioVatbpZpQIHmrSRO0SfLqRaPespec+O/j7Rbheh4S+Pk8AAMsabk+EalThAKii+U8G8j7EleCU5u+bMJsi9iY2Z91ZjH6gdnkjEPY9lXqoowoWNoARzkqgw4gJC5mPtO/RAeRdFVZVTAWwzlM7L+keZliEvRFfzGvXGoou43V44Z8ru6wU90C2nhAfqob6aUvy7bvKp05Rv6+eEbUuzizhBWcafEW7Aw68eYer5xpjksZP9aLdgZLw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=kNLcteEg52/2FZe+Swj7NwnygUOC6JFeRCiZi3dQW6s=;
 b=iqVgg4aDLQd2k51ckx9ReRwlnQvui8oBph0vVoAMW2cgz12gpA5yc2xOwsjwaGwratVgJY4/aBk+fJSaljXlOLbIbKqjEQGIBW118zoqSvtOhm41vIMR83qUAAeqltJkwRs/jaBydb0m+TKd4N7bkNr3twGgxv3PIwcJzmJi9B8=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v4 06/10] libs/guest: introduce helper to set cpu topology in cpu policy
Date: Fri,  7 May 2021 13:04:18 +0200
Message-ID: <20210507110422.24608-7-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0078.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:32::18) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e316ac50-33bc-4c4c-1eb9-08d911484850
X-MS-TrafficTypeDiagnostic: DM6PR03MB4475:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB44752050858E4E31CEB3E8308F579@DM6PR03MB4475.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1169;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: h5/6rjML9zy/FQ6tNvLhI6DttyGBiloW5xNAAh3EKyT14vOTFjfBZp3G4IvtiO7saAsS1nvQScUhy8HVqPH9R87+evndX1cT2sLIo9nUbVd7ba6EckzeoXV3olRQftS7zfCitKFQThh/STwyrFOL7Nc2tdqoK1jeRcgEWFM5omlTYnui4PkvwL7pN+GXX1z3QqmztsQJnsdTwRo6LiXNMgvLHqzqnf6/aec2pzjArRVdDWFpx6M87g/TriDvQ00bG8AUikcEuj1qU+DRtBCp6AuceKehHlId7n/dORN9WmTe4ui7p9hQmAwcUbl4avURJIfMcpkceuRBtDb4rlCOtAMDU7t5gxcE/lv49YU5D895pDu7KHCBh0Ln4UWzNDdkbyRMNwkL4avxusAn5aPqp6uaBy/aR65Uc7v9ZWP+Gb1YykV0geDqsFSEA2Og3TYyFNK7CKIBTmPOOD8gnxqdrvTxEaolhKXILStB1WlG3lg+5ID4yAvO2P8RPJy4gl5i+SRIaud5FVGlC82IjVOuOpiQog0nPfgOcjBMG9SNi1esVZLPOPTuxH0r9c6k4CbT1o40SlKxdsvjXyWxnQpyUqLOeuTamdO6gp42Kjs80bs=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(376002)(346002)(39860400002)(366004)(8676002)(6496006)(26005)(66946007)(5660300002)(66556008)(66476007)(83380400001)(478600001)(956004)(1076003)(6916009)(316002)(38100700002)(54906003)(6486002)(2616005)(86362001)(8936002)(36756003)(4326008)(186003)(16526019)(2906002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?MmlRL1VZVUdmeHlsU0NzSk9FS1YrM2FXM3lldjg1ZDQ2R24yYlJPbUFOMlZX?=
 =?utf-8?B?UFpUbWY0NVRQRTVWR0tXSUJvSkhISzB1dG53R2doZFdTZVZyNndVNzdndzha?=
 =?utf-8?B?UVZRcVZmWUMyczJXOVNQU1lqdG1qRU1raVpWTFJObjFRRm9GU0VuZ25VS1lI?=
 =?utf-8?B?QU1nYk9MNktFVy9Xdm1laFVic3U0ekYrMVpXRHJWOWRxcTBwR2xUdmsvWG5F?=
 =?utf-8?B?WHRBcGlsOUNST1pINHcxOXJuOXA2QTVNWW51TURiRVVhbWJBMzJnSkpRUkdV?=
 =?utf-8?B?eDlYMzM0a3ZqNStZbW52VlZPVFFLcXNqc3dlcjJJYjNiQTJxWkNTNmU2dm5n?=
 =?utf-8?B?NE10Ky9EclFUMHRSQ2t4cUJmay83NUxKTkJtK2syT2d0UlA1Yjl5MFhkQ0ZF?=
 =?utf-8?B?cXZKZEt5RmY1WGNXdEpXV2JFanNIY1RVMWFMRHJsWVFVTENzY2xMdVRURlNT?=
 =?utf-8?B?OTBhZUpDbkdheUo3QlhBY1hoMkcwZUdNOXZEWkNJbUk0c2ovNXlDUk5meU4x?=
 =?utf-8?B?L1d3dXNzVDIrektwWkxLTE1JNi82VGVhYTN1cUZVWWNEanJYeFNtQkp1OERP?=
 =?utf-8?B?ZS9kakdGRlhjRkF3UVV1UG9vT3lLL1IxeGFjWVM4enJtUVZFZTQvOXA3Y1Jp?=
 =?utf-8?B?d0QvSVdBM3FOdnI3aW1IeTZyY3ZHZEVETmF2OTU0WmNmYTVIYVJxaUFIMDhG?=
 =?utf-8?B?T1djTG1FU25CVnZheEdTVEhQSFArWkpQS0NwS09PSXE1ZllpYUNOVkN1ZWNO?=
 =?utf-8?B?dkh3dWlUUkJJNmErSURBeENhL0NuNzN0VWRmTEFhM3h5MEx3ektiaHBwSHp5?=
 =?utf-8?B?dm1Sc2xvQmNXUzdHVjAwYXJ1M2VLSUxaSkFpNkt0WTNTZm9EaEF0Z0JBWUsz?=
 =?utf-8?B?RFZ1V3RwZVMrcGxPWVpyazJiSmFpR2lNQURYMDMyR0tmZGxjL1BaWmZEYnVY?=
 =?utf-8?B?OXlCQzluVE5oVTI2bkZUSTBZamFxMFdFQjFRVzFoQ25Ub2IwTmRVT2g3akgv?=
 =?utf-8?B?Y2l4Wk04RmdLNkRHcEs4dkNtY3VJRDJFWnJPaUpFOUwrRUl3TkdJRFgxOUNo?=
 =?utf-8?B?Y1pzVi9uZUNXZWI3L2dGWENqSWFoOHhKcFlqRDVHUnhMdGhCdlRrdU1CU2NE?=
 =?utf-8?B?UkJEM3Q4a0QvZFVwWkdGQzNWSTg2Nyt0anh6TTRtVThCSVhKNDZ0Y1JTYnJQ?=
 =?utf-8?B?ZjBLaTNzYjc0TXpIeXFHZWJNWkZPa1NLUnRyKzBHWS84a2MxVXl0b0p2dTZK?=
 =?utf-8?B?RWZnS01VWWNHOEZzTHFrajIwQVMvZEFzZEIwYzYrQXZEYld6ZkgySFNFaTVh?=
 =?utf-8?B?OHFGTXAvQm91aTRKV1ZFZzMyZ3FaN2ZGUTlEcUVuc2oySTJGYUNYQ1UyenFl?=
 =?utf-8?B?WUlpWTdXc05IbFlxT2tzZ1JxdTV3YzFvR2hZeUE0OGpOSEh6ZDBCVGNQSkNs?=
 =?utf-8?B?OWpqVUtNR0RVbWJrZkp6WjJMOG9UTXFzbGJ2NlNQblZERDlrdG5LVUZpZTFr?=
 =?utf-8?B?U0FSeTRhVmdZYzRzYlJRRElUVHN3MHEyUTBLUy9OLzcwdTRuN3JtOE5CZTFv?=
 =?utf-8?B?aVFZTFdFWU9TRTR1Y1ZGNHhEMXdlUW5TUEIyVGVPYzdPRFpTWHpNWmd3Q0lP?=
 =?utf-8?B?dHluVzR1ZDA4Vnc1UGVXSC82dlZjbEx1MUgrUGZlSUNqV2FPKzl2VWtONEJL?=
 =?utf-8?B?dng0TUtmNGJhaTRtbzJJcy9PNVY5ZDV1ZFdUVElkOFN2L3AwMlJYbmdwVHYx?=
 =?utf-8?Q?WulZ533t3M4I8CEPqOAw2Benli9wTfGZMxdmQy5?=
X-MS-Exchange-CrossTenant-Network-Message-Id: e316ac50-33bc-4c4c-1eb9-08d911484850
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 11:07:19.9428
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gdGm4fyEJdqKZwxigS956n9Srmfl9QH5ZbtGqZl1Wa+yf7paMNgzavY1+Genjv1hkmPU0EDQouKM2lsATSoDmQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4475
X-OriginatorOrg: citrix.com

This logic is pulled out of xc_cpuid_apply_policy and placed into a
separate helper. Note the "legacy" naming of the introduced function:
long term, Xen will require a proper topology setter function capable
of expressing a more diverse set of topologies.
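
One piece of the HVM leg of this helper, the doubling of leaf 1
EBX[23:16] to reflect vLAPIC_ID = vCPU_ID * 2, can be sketched on its own
as below. The function name is invented for illustration; the overflow
guard matches the one in the moved code.

```c
#include <stdint.h>
#include <assert.h>

/*
 * Leaf 1 EBX[23:16] is Maximum Logical Processors Per Package.  Double it
 * to account for the APIC IDs being spaced two apart (giving the illusion
 * of no SMT), but skip the adjustment when doubling would overflow the
 * 8-bit field.
 */
static uint8_t adjust_lppp(uint8_t lppp)
{
    if ( !(lppp & 0x80) )
        lppp *= 2;

    return lppp;
}
```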

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 - s/xc_cpu_policy_topology/xc_cpu_policy_legacy_topology/
---
 tools/include/xenguest.h        |   4 +
 tools/libs/guest/xg_cpuid_x86.c | 182 +++++++++++++++++---------------
 2 files changed, 103 insertions(+), 83 deletions(-)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 576e976d069..6fe01ae292b 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -758,6 +758,10 @@ bool xc_cpu_policy_is_compatible(xc_interface *xch, xc_cpu_policy_t *host,
 int xc_cpu_policy_make_compat_4_12(xc_interface *xch, xc_cpu_policy_t *policy,
                                    bool hvm);
 
+/* Setup the legacy policy topology. */
+int xc_cpu_policy_legacy_topology(xc_interface *xch, xc_cpu_policy_t *policy,
+                                  bool hvm);
+
 int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
 int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
                           uint32_t *nr_features, uint32_t *featureset);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index fccbc54a400..2c89c59cccb 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -438,13 +438,11 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
 {
     int rc;
     xc_dominfo_t di;
-    unsigned int i, nr_leaves, nr_msrs;
+    unsigned int nr_leaves, nr_msrs;
     xen_cpuid_leaf_t *leaves = NULL;
     struct cpuid_policy *p = NULL;
     struct xc_cpu_policy policy = { };
     uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
-    uint32_t host_featureset[FEATURESET_NR_ENTRIES] = {};
-    uint32_t len = ARRAY_SIZE(host_featureset);
 
     if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
          di.domid != domid )
@@ -467,22 +465,6 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
          (p = calloc(1, sizeof(*p))) == NULL )
         goto out;
 
-    /* Get the host policy. */
-    rc = xc_get_cpu_featureset(xch, XEN_SYSCTL_cpu_featureset_host,
-                               &len, host_featureset);
-    if ( rc )
-    {
-        /* Tolerate "buffer too small", as we've got the bits we need. */
-        if ( errno == ENOBUFS )
-            rc = 0;
-        else
-        {
-            PERROR("Failed to obtain host featureset");
-            rc = -errno;
-            goto out;
-        }
-    }
-
     /* Get the domain's default policy. */
     nr_msrs = 0;
     rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
@@ -566,70 +548,11 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         }
     }
 
-    if ( !di.hvm )
-    {
-        /*
-         * On hardware without CPUID Faulting, PV guests see real topology.
-         * As a consequence, they also need to see the host htt/cmp fields.
-         */
-        p->basic.htt       = test_bit(X86_FEATURE_HTT, host_featureset);
-        p->extd.cmp_legacy = test_bit(X86_FEATURE_CMP_LEGACY, host_featureset);
-    }
-    else
-    {
-        /*
-         * Topology for HVM guests is entirely controlled by Xen.  For now, we
-         * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT.
-         */
-        p->basic.htt = true;
-        p->extd.cmp_legacy = false;
-
-        /*
-         * Leaf 1 EBX[23:16] is Maximum Logical Processors Per Package.
-         * Update to reflect vLAPIC_ID = vCPU_ID * 2, but make sure to avoid
-         * overflow.
-         */
-        if ( !(p->basic.lppp & 0x80) )
-            p->basic.lppp *= 2;
-
-        switch ( p->x86_vendor )
-        {
-        case X86_VENDOR_INTEL:
-            for ( i = 0; (p->cache.subleaf[i].type &&
-                          i < ARRAY_SIZE(p->cache.raw)); ++i )
-            {
-                p->cache.subleaf[i].cores_per_package =
-                    (p->cache.subleaf[i].cores_per_package << 1) | 1;
-                p->cache.subleaf[i].threads_per_cache = 0;
-            }
-            break;
-
-        case X86_VENDOR_AMD:
-        case X86_VENDOR_HYGON:
-            /*
-             * Leaf 0x80000008 ECX[15:12] is ApicIdCoreSize.
-             * Leaf 0x80000008 ECX[7:0] is NumberOfCores (minus one).
-             * Update to reflect vLAPIC_ID = vCPU_ID * 2.  But avoid
-             * - overflow,
-             * - going out of sync with leaf 1 EBX[23:16],
-             * - incrementing ApicIdCoreSize when it's zero (which changes the
-             *   meaning of bits 7:0).
-             *
-             * UPDATE: I addition to avoiding overflow, some
-             * proprietary operating systems have trouble with
-             * apic_id_size values greater than 7.  Limit the value to
-             * 7 for now.
-             */
-            if ( p->extd.nc < 0x7f )
-            {
-                if ( p->extd.apic_id_size != 0 && p->extd.apic_id_size < 0x7 )
-                    p->extd.apic_id_size++;
-
-                p->extd.nc = (p->extd.nc << 1) | 1;
-            }
-            break;
-        }
-    }
+    policy.cpuid = *p;
+    rc = xc_cpu_policy_legacy_topology(xch, &policy, di.hvm);
+    if ( rc )
+        goto out;
+    *p = policy.cpuid;
 
     rc = x86_cpuid_copy_to_buffer(p, leaves, &nr_leaves);
     if ( rc )
@@ -949,3 +872,96 @@ int xc_cpu_policy_make_compat_4_12(xc_interface *xch, xc_cpu_policy_t *policy,
     xc_cpu_policy_destroy(host);
     return rc;
 }
+
+int xc_cpu_policy_legacy_topology(xc_interface *xch, xc_cpu_policy_t *policy,
+                                  bool hvm)
+{
+    if ( !hvm )
+    {
+        xc_cpu_policy_t *host;
+        int rc;
+
+        host = xc_cpu_policy_init();
+        if ( !host )
+        {
+            errno = ENOMEM;
+            return -1;
+        }
+
+        rc = xc_cpu_policy_get_system(xch, XEN_SYSCTL_cpu_policy_host, host);
+        if ( rc )
+        {
+            ERROR("Failed to get host policy");
+            xc_cpu_policy_destroy(host);
+            return rc;
+        }
+
+
+        /*
+         * On hardware without CPUID Faulting, PV guests see real topology.
+         * As a consequence, they also need to see the host htt/cmp fields.
+         */
+        policy->cpuid.basic.htt = host->cpuid.basic.htt;
+        policy->cpuid.extd.cmp_legacy = host->cpuid.extd.cmp_legacy;
+    }
+    else
+    {
+        unsigned int i;
+
+        /*
+         * Topology for HVM guests is entirely controlled by Xen.  For now, we
+         * hardcode APIC_ID = vcpu_id * 2 to give the illusion of no SMT.
+         */
+        policy->cpuid.basic.htt = true;
+        policy->cpuid.extd.cmp_legacy = false;
+
+        /*
+         * Leaf 1 EBX[23:16] is Maximum Logical Processors Per Package.
+         * Update to reflect vLAPIC_ID = vCPU_ID * 2, but make sure to avoid
+         * overflow.
+         */
+        if ( !(policy->cpuid.basic.lppp & 0x80) )
+            policy->cpuid.basic.lppp *= 2;
+
+        switch ( policy->cpuid.x86_vendor )
+        {
+        case X86_VENDOR_INTEL:
+            for ( i = 0; (policy->cpuid.cache.subleaf[i].type &&
+                          i < ARRAY_SIZE(policy->cpuid.cache.raw)); ++i )
+            {
+                policy->cpuid.cache.subleaf[i].cores_per_package =
+                  (policy->cpuid.cache.subleaf[i].cores_per_package << 1) | 1;
+                policy->cpuid.cache.subleaf[i].threads_per_cache = 0;
+            }
+            break;
+
+        case X86_VENDOR_AMD:
+        case X86_VENDOR_HYGON:
+            /*
+             * Leaf 0x80000008 ECX[15:12] is ApicIdCoreSize.
+             * Leaf 0x80000008 ECX[7:0] is NumberOfCores (minus one).
+             * Update to reflect vLAPIC_ID = vCPU_ID * 2.  But avoid
+             * - overflow,
+             * - going out of sync with leaf 1 EBX[23:16],
+             * - incrementing ApicIdCoreSize when it's zero (which changes the
+             *   meaning of bits 7:0).
+             *
+             * UPDATE: In addition to avoiding overflow, some
+             * proprietary operating systems have trouble with
+             * apic_id_size values greater than 7.  Limit the value to
+             * 7 for now.
+             */
+            if ( policy->cpuid.extd.nc < 0x7f )
+            {
+                if ( policy->cpuid.extd.apic_id_size != 0 &&
+                     policy->cpuid.extd.apic_id_size < 0x7 )
+                    policy->cpuid.extd.apic_id_size++;
+
+                policy->cpuid.extd.nc = (policy->cpuid.extd.nc << 1) | 1;
+            }
+            break;
+        }
+    }
+
+    return 0;
+}
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:35 2021
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v4 08/10] libs/guest: apply a featureset into a cpu policy
Date: Fri,  7 May 2021 13:04:20 +0200
Message-ID: <20210507110422.24608-9-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Pull the code that applies a featureset to a cpu policy out of
xc_cpuid_apply_policy and place it in its own standalone function,
as part of the public interface.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/include/xenguest.h        |  5 ++
 tools/libs/guest/xg_cpuid_x86.c | 95 ++++++++++++++++++++-------------
 2 files changed, 62 insertions(+), 38 deletions(-)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 3e2fe7f3654..134c00f29a0 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -766,6 +766,11 @@ int xc_cpu_policy_legacy_topology(xc_interface *xch, xc_cpu_policy_t *policy,
 int xc_cpu_policy_apply_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
                               const struct xc_xend_cpuid *cpuid, bool hvm);
 
+/* Apply a featureset to the policy. */
+int xc_cpu_policy_apply_featureset(xc_interface *xch, xc_cpu_policy_t *policy,
+                                   const uint32_t *featureset,
+                                   unsigned int nr_features);
+
 int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
 int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
                           uint32_t *nr_features, uint32_t *featureset);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 65b49753d3e..778bc2130ea 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -452,46 +452,15 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
 
     if ( featureset )
     {
-        uint32_t disabled_features[FEATURESET_NR_ENTRIES],
-            feat[FEATURESET_NR_ENTRIES] = {};
-        static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
-        unsigned int i, b;
-
-        /*
-         * The user supplied featureset may be shorter or longer than
-         * FEATURESET_NR_ENTRIES.  Shorter is fine, and we will zero-extend.
-         * Longer is fine, so long as it only padded with zeros.
-         */
-        unsigned int user_len = min(FEATURESET_NR_ENTRIES + 0u, nr_features);
-
-        /* Check for truncated set bits. */
-        rc = -EOPNOTSUPP;
-        for ( i = user_len; i < nr_features; ++i )
-            if ( featureset[i] != 0 )
-                goto out;
-
-        memcpy(feat, featureset, sizeof(*featureset) * user_len);
-
-        /* Disable deep dependencies of disabled features. */
-        for ( i = 0; i < ARRAY_SIZE(disabled_features); ++i )
-            disabled_features[i] = ~feat[i] & deep_features[i];
-
-        for ( b = 0; b < sizeof(disabled_features) * CHAR_BIT; ++b )
+        policy.cpuid = *p;
+        rc = xc_cpu_policy_apply_featureset(xch, &policy, featureset,
+                                            nr_features);
+        if ( rc )
         {
-            const uint32_t *dfs;
-
-            if ( !test_bit(b, disabled_features) ||
-                 !(dfs = x86_cpuid_lookup_deep_deps(b)) )
-                continue;
-
-            for ( i = 0; i < ARRAY_SIZE(disabled_features); ++i )
-            {
-                feat[i] &= ~dfs[i];
-                disabled_features[i] &= ~dfs[i];
-            }
+            ERROR("Failed to apply featureset to policy");
+            goto out;
         }
-
-        cpuid_featureset_to_policy(feat, p);
+        *p = policy.cpuid;
     }
     else
     {
@@ -923,3 +892,53 @@ int xc_cpu_policy_legacy_topology(xc_interface *xch, xc_cpu_policy_t *policy,
 
     return 0;
 }
+
+int xc_cpu_policy_apply_featureset(xc_interface *xch, xc_cpu_policy_t *policy,
+                                   const uint32_t *featureset,
+                                   unsigned int nr_features)
+{
+    uint32_t disabled_features[FEATURESET_NR_ENTRIES],
+        feat[FEATURESET_NR_ENTRIES] = {};
+    static const uint32_t deep_features[] = INIT_DEEP_FEATURES;
+    unsigned int i, b;
+
+    /*
+     * The user supplied featureset may be shorter or longer than
+     * FEATURESET_NR_ENTRIES.  Shorter is fine, and we will zero-extend.
+     * Longer is fine, so long as it is only padded with zeros.
+     */
+    unsigned int user_len = min(FEATURESET_NR_ENTRIES + 0u, nr_features);
+
+    /* Check for truncated set bits. */
+    for ( i = user_len; i < nr_features; ++i )
+        if ( featureset[i] != 0 )
+        {
+            errno = EOPNOTSUPP;
+            return -1;
+        }
+
+    memcpy(feat, featureset, sizeof(*featureset) * user_len);
+
+    /* Disable deep dependencies of disabled features. */
+    for ( i = 0; i < ARRAY_SIZE(disabled_features); ++i )
+        disabled_features[i] = ~feat[i] & deep_features[i];
+
+    for ( b = 0; b < sizeof(disabled_features) * CHAR_BIT; ++b )
+    {
+        const uint32_t *dfs;
+
+        if ( !test_bit(b, disabled_features) ||
+             !(dfs = x86_cpuid_lookup_deep_deps(b)) )
+            continue;
+
+        for ( i = 0; i < ARRAY_SIZE(disabled_features); ++i )
+        {
+            feat[i] &= ~dfs[i];
+            disabled_features[i] &= ~dfs[i];
+        }
+    }
+
+    cpuid_featureset_to_policy(feat, &policy->cpuid);
+
+    return 0;
+}
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:40 2021
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v4 07/10] libs/guest: rework xc_cpuid_xend_policy
Date: Fri,  7 May 2021 13:04:19 +0200
Message-ID: <20210507110422.24608-8-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Rename xc_cpuid_xend_policy to xc_cpu_policy_apply_cpuid and make it
public. Modify the function internally to use the new xc_cpu_policy_*
set of functions. Also, rather than applying the passed policy to a
domain directly, modify the provided xc_cpu_policy_t; the caller is
responsible for applying the modified cpu policy to the domain.

Note that further patches will end up removing this function, as the
callers should have the necessary helpers to modify an xc_cpu_policy_t
themselves.

The find_leaf helper and its related comparison function are also
removed, as they are no longer needed: matching CPUID leaves are now
found using xc_cpu_policy_get_cpuid.

No functional change intended.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v3:
 - Drop find_leaf and comparison helper.
---
 tools/include/xenguest.h        |   4 +
 tools/libs/guest/xg_cpuid_x86.c | 200 +++++++++++++-------------------
 2 files changed, 83 insertions(+), 121 deletions(-)

diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 6fe01ae292b..3e2fe7f3654 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -762,6 +762,10 @@ int xc_cpu_policy_make_compat_4_12(xc_interface *xch, xc_cpu_policy_t *policy,
 int xc_cpu_policy_legacy_topology(xc_interface *xch, xc_cpu_policy_t *policy,
                                   bool hvm);
 
+/* Apply an xc_xend_cpuid object to the policy. */
+int xc_cpu_policy_apply_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
+                              const struct xc_xend_cpuid *cpuid, bool hvm);
+
 int xc_get_cpu_levelling_caps(xc_interface *xch, uint32_t *caps);
 int xc_get_cpu_featureset(xc_interface *xch, uint32_t index,
                           uint32_t *nr_features, uint32_t *featureset);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 2c89c59cccb..65b49753d3e 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -263,144 +263,107 @@ int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
     return ret;
 }
 
-static int compare_leaves(const void *l, const void *r)
-{
-    const xen_cpuid_leaf_t *lhs = l;
-    const xen_cpuid_leaf_t *rhs = r;
-
-    if ( lhs->leaf != rhs->leaf )
-        return lhs->leaf < rhs->leaf ? -1 : 1;
-
-    if ( lhs->subleaf != rhs->subleaf )
-        return lhs->subleaf < rhs->subleaf ? -1 : 1;
-
-    return 0;
-}
-
-static xen_cpuid_leaf_t *find_leaf(
-    xen_cpuid_leaf_t *leaves, unsigned int nr_leaves,
-    const struct xc_xend_cpuid *xend)
-{
-    const xen_cpuid_leaf_t key = { xend->leaf, xend->subleaf };
-
-    return bsearch(&key, leaves, nr_leaves, sizeof(*leaves), compare_leaves);
-}
-
-static int xc_cpuid_xend_policy(
-    xc_interface *xch, uint32_t domid, const struct xc_xend_cpuid *xend)
+int xc_cpu_policy_apply_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
+                              const struct xc_xend_cpuid *cpuid, bool hvm)
 {
     int rc;
-    xc_dominfo_t di;
-    unsigned int nr_leaves, nr_msrs;
-    uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
-    /*
-     * Three full policies.  The host, default for the domain type,
-     * and domain current.
-     */
-    xen_cpuid_leaf_t *host = NULL, *def = NULL, *cur = NULL;
-    unsigned int nr_host, nr_def, nr_cur;
+    xc_cpu_policy_t *host = NULL, *def = NULL;
 
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
-    {
-        ERROR("Failed to obtain d%d info", domid);
-        rc = -ESRCH;
-        goto fail;
-    }
-
-    rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
-    if ( rc )
-    {
-        PERROR("Failed to obtain policy info size");
-        rc = -errno;
-        goto fail;
-    }
-
-    rc = -ENOMEM;
-    if ( (host = calloc(nr_leaves, sizeof(*host))) == NULL ||
-         (def  = calloc(nr_leaves, sizeof(*def)))  == NULL ||
-         (cur  = calloc(nr_leaves, sizeof(*cur)))  == NULL )
-    {
-        ERROR("Unable to allocate memory for %u CPUID leaves", nr_leaves);
-        goto fail;
-    }
-
-    /* Get the domain's current policy. */
-    nr_msrs = 0;
-    nr_cur = nr_leaves;
-    rc = get_domain_cpu_policy(xch, domid, &nr_cur, cur, &nr_msrs, NULL);
-    if ( rc )
+    host = xc_cpu_policy_init();
+    def = xc_cpu_policy_init();
+    if ( !host || !def )
     {
-        PERROR("Failed to obtain d%d current policy", domid);
-        rc = -errno;
-        goto fail;
+        PERROR("Failed to init policies");
+        rc = -ENOMEM;
+        goto out;
     }
 
     /* Get the domain type's default policy. */
-    nr_msrs = 0;
-    nr_def = nr_leaves;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+    rc = xc_cpu_policy_get_system(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
                                            : XEN_SYSCTL_cpu_policy_pv_default,
-                               &nr_def, def, &nr_msrs, NULL);
+                                  def);
     if ( rc )
     {
-        PERROR("Failed to obtain %s def policy", di.hvm ? "hvm" : "pv");
-        rc = -errno;
-        goto fail;
+        PERROR("Failed to obtain %s def policy", hvm ? "hvm" : "pv");
+        goto out;
     }
 
     /* Get the host policy. */
-    nr_msrs = 0;
-    nr_host = nr_leaves;
-    rc = get_system_cpu_policy(xch, XEN_SYSCTL_cpu_policy_host,
-                               &nr_host, host, &nr_msrs, NULL);
+    rc = xc_cpu_policy_get_system(xch, XEN_SYSCTL_cpu_policy_host, host);
     if ( rc )
     {
         PERROR("Failed to obtain host policy");
-        rc = -errno;
-        goto fail;
+        goto out;
     }
 
     rc = -EINVAL;
-    for ( ; xend->leaf != XEN_CPUID_INPUT_UNUSED; ++xend )
+    for ( ; cpuid->leaf != XEN_CPUID_INPUT_UNUSED; ++cpuid )
     {
-        xen_cpuid_leaf_t *cur_leaf = find_leaf(cur, nr_cur, xend);
-        const xen_cpuid_leaf_t *def_leaf = find_leaf(def, nr_def, xend);
-        const xen_cpuid_leaf_t *host_leaf = find_leaf(host, nr_host, xend);
+        xen_cpuid_leaf_t cur_leaf;
+        xen_cpuid_leaf_t def_leaf;
+        xen_cpuid_leaf_t host_leaf;
 
-        if ( cur_leaf == NULL || def_leaf == NULL || host_leaf == NULL )
+        rc = xc_cpu_policy_get_cpuid(xch, policy, cpuid->leaf, cpuid->subleaf,
+                                     &cur_leaf);
+        if ( rc )
+        {
+            ERROR("Failed to get current policy leaf %#x subleaf %#x",
+                  cpuid->leaf, cpuid->subleaf);
+            goto out;
+        }
+        rc = xc_cpu_policy_get_cpuid(xch, def, cpuid->leaf, cpuid->subleaf,
+                                     &def_leaf);
+        if ( rc )
+        {
+            ERROR("Failed to get def policy leaf %#x subleaf %#x",
+                  cpuid->leaf, cpuid->subleaf);
+            goto out;
+        }
+        rc = xc_cpu_policy_get_cpuid(xch, host, cpuid->leaf, cpuid->subleaf,
+                                     &host_leaf);
+        if ( rc )
         {
-            ERROR("Missing leaf %#x, subleaf %#x", xend->leaf, xend->subleaf);
-            goto fail;
+            ERROR("Failed to get host policy leaf %#x subleaf %#x",
+                  cpuid->leaf, cpuid->subleaf);
+            goto out;
         }
 
-        for ( unsigned int i = 0; i < ARRAY_SIZE(xend->policy); i++ )
+        for ( unsigned int i = 0; i < ARRAY_SIZE(cpuid->policy); i++ )
         {
-            uint32_t *cur_reg = &cur_leaf->a + i;
-            const uint32_t *def_reg = &def_leaf->a + i;
-            const uint32_t *host_reg = &host_leaf->a + i;
+            uint32_t *cur_reg = &cur_leaf.a + i;
+            const uint32_t *def_reg = &def_leaf.a + i;
+            const uint32_t *host_reg = &host_leaf.a + i;
 
-            if ( xend->policy[i] == NULL )
+            if ( cpuid->policy[i] == NULL )
                 continue;
 
             for ( unsigned int j = 0; j < 32; j++ )
             {
                 bool val;
 
-                if ( xend->policy[i][j] == '1' )
+                switch ( cpuid->policy[i][j] )
+                {
+                case '1':
                     val = true;
-                else if ( xend->policy[i][j] == '0' )
+                    break;
+
+                case '0':
                     val = false;
-                else if ( xend->policy[i][j] == 'x' )
+                    break;
+
+                case 'x':
                     val = test_bit(31 - j, def_reg);
-                else if ( xend->policy[i][j] == 'k' ||
-                          xend->policy[i][j] == 's' )
+                    break;
+
+                case 'k':
+                case 's':
                     val = test_bit(31 - j, host_reg);
-                else
-                {
+                    break;
+
+                default:
                     ERROR("Bad character '%c' in policy[%d] string '%s'",
-                          xend->policy[i][j], i, xend->policy[i]);
-                    goto fail;
+                          cpuid->policy[i][j], i, cpuid->policy[i]);
+                    goto out;
                 }
 
                 clear_bit(31 - j, cur_reg);
@@ -408,25 +371,19 @@ static int xc_cpuid_xend_policy(
                     set_bit(31 - j, cur_reg);
             }
         }
-    }
 
-    /* Feed the transformed currrent policy back up to Xen. */
-    rc = xc_set_domain_cpu_policy(xch, domid, nr_cur, cur, 0, NULL,
-                                  &err_leaf, &err_subleaf, &err_msr);
-    if ( rc )
-    {
-        PERROR("Failed to set d%d's policy (err leaf %#x, subleaf %#x, msr %#x)",
-               domid, err_leaf, err_subleaf, err_msr);
-        rc = -errno;
-        goto fail;
+        rc = xc_cpu_policy_update_cpuid(xch, policy, &cur_leaf, 1);
+        if ( rc )
+        {
+            PERROR("Failed to set policy leaf %#x subleaf %#x",
+                   cpuid->leaf, cpuid->subleaf);
+            goto out;
+        }
     }
 
-    /* Success! */
-
- fail:
-    free(cur);
-    free(def);
-    free(host);
+ out:
+    xc_cpu_policy_destroy(def);
+    xc_cpu_policy_destroy(host);
 
     return rc;
 }
@@ -434,7 +391,7 @@ static int xc_cpuid_xend_policy(
 int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
                           const uint32_t *featureset, unsigned int nr_features,
                           bool pae, bool itsc, bool nested_virt,
-                          const struct xc_xend_cpuid *xend)
+                          const struct xc_xend_cpuid *cpuid)
 {
     int rc;
     xc_dominfo_t di;
@@ -554,6 +511,10 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         goto out;
     *p = policy.cpuid;
 
+    rc = xc_cpu_policy_apply_cpuid(xch, &policy, cpuid, di.hvm);
+    if ( rc )
+        goto out;
+
     rc = x86_cpuid_copy_to_buffer(p, leaves, &nr_leaves);
     if ( rc )
     {
@@ -571,9 +532,6 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
         goto out;
     }
 
-    if ( xend && (rc = xc_cpuid_xend_policy(xch, domid, xend)) )
-        goto out;
-
     rc = 0;
 
 out:
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:51 2021
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 10/10] libs/guest: (re)move xc_cpu_policy_apply_cpuid
Date: Fri,  7 May 2021 13:04:22 +0200
Message-ID: <20210507110422.24608-11-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Move the logic from xc_cpu_policy_apply_cpuid into libxl, now that the
xc_cpu_policy_* helpers allow modifying a CPU policy. By moving such
parsing into libxl directly we can get rid of xc_xend_cpuid, as libxl
will now implement its own private type for storing CPUID
information, which currently matches xc_xend_cpuid.

Note that the function logic is moved as-is, apart from the
adjustments required by the libxl coding style.

No functional change intended.
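One such coding-style adjustment is that the xc test_bit/set_bit/clear_bit helpers are not available in libxl, so the moved code defines local macros operating on a single uint32_t register. A minimal standalone check of their behaviour (macro bodies copied from the patch):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Local bit helpers as defined in the moved libxl code.  They operate
 * on a single uint32_t register (bit 0 is the least significant bit),
 * rather than on an arbitrary-length bitmap like the xc versions.
 */
#define test_bit(i, r) !!(*(r) & (1u << (i)))
#define set_bit(i, r) (*(r) |= (1u << (i)))
#define clear_bit(i, r)  (*(r) &= ~(1u << (i)))
```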

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
Changes since v2:
 - Use LOG*D.
 - Pass a gc to apply_policy.
 - Use 'r' for libxc return values.
---
 tools/include/libxl.h             |   6 +-
 tools/include/xenctrl.h           |  26 ------
 tools/include/xenguest.h          |   4 -
 tools/libs/guest/xg_cpuid_x86.c   | 125 --------------------------
 tools/libs/light/libxl_cpuid.c    | 142 ++++++++++++++++++++++++++++--
 tools/libs/light/libxl_internal.h |  26 ++++++
 6 files changed, 165 insertions(+), 164 deletions(-)

diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ae7fe27c1f2..150b7ba85ac 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -1375,10 +1375,10 @@ void libxl_bitmap_init(libxl_bitmap *map);
 void libxl_bitmap_dispose(libxl_bitmap *map);
 
 /*
- * libxl_cpuid_policy is opaque in the libxl ABI.  Users of both libxl and
- * libxc may not make assumptions about xc_xend_cpuid.
+ * libxl_cpuid_policy is opaque in the libxl ABI. Users of libxl may not make
+ * assumptions about libxl__cpuid_policy.
  */
-typedef struct xc_xend_cpuid libxl_cpuid_policy;
+typedef struct libxl__cpuid_policy libxl_cpuid_policy;
 typedef libxl_cpuid_policy * libxl_cpuid_policy_list;
 void libxl_cpuid_dispose(libxl_cpuid_policy_list *cpuid_list);
 int libxl_cpuid_policy_list_length(const libxl_cpuid_policy_list *l);
diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index 17fa3734800..30c2a5f5284 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1864,32 +1864,6 @@ int xc_domain_debug_control(xc_interface *xch,
 
 #if defined(__i386__) || defined(__x86_64__)
 
-/*
- * CPUID policy data, expressed in the legacy XEND format.
- *
- * Policy is an array of strings, 32 chars long:
- *   policy[0] = eax
- *   policy[1] = ebx
- *   policy[2] = ecx
- *   policy[3] = edx
- *
- * The format of the string is the following:
- *   '1' -> force to 1
- *   '0' -> force to 0
- *   'x' -> we don't care (use default)
- *   'k' -> pass through host value
- *   's' -> legacy alias for 'k'
- */
-struct xc_xend_cpuid {
-    union {
-        struct {
-            uint32_t leaf, subleaf;
-        };
-        uint32_t input[2];
-    };
-    char *policy[4];
-};
-
 int xc_mca_op(xc_interface *xch, struct xen_mc *mc);
 int xc_mca_op_inject_v2(xc_interface *xch, unsigned int flags,
                         xc_cpumap_t cpumap, unsigned int nr_cpus);
diff --git a/tools/include/xenguest.h b/tools/include/xenguest.h
index 134c00f29a0..414baa30171 100644
--- a/tools/include/xenguest.h
+++ b/tools/include/xenguest.h
@@ -762,10 +762,6 @@ int xc_cpu_policy_make_compat_4_12(xc_interface *xch, xc_cpu_policy_t *policy,
 int xc_cpu_policy_legacy_topology(xc_interface *xch, xc_cpu_policy_t *policy,
                                   bool hvm);
 
-/* Apply an xc_xend_cpuid object to the policy. */
-int xc_cpu_policy_apply_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
-                              const struct xc_xend_cpuid *cpuid, bool hvm);
-
 /* Apply a featureset to the policy. */
 int xc_cpu_policy_apply_featureset(xc_interface *xch, xc_cpu_policy_t *policy,
                                    const uint32_t *featureset,
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 8c62c3ac360..e0fff98fe77 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -263,131 +263,6 @@ int xc_set_domain_cpu_policy(xc_interface *xch, uint32_t domid,
     return ret;
 }
 
-int xc_cpu_policy_apply_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
-                              const struct xc_xend_cpuid *cpuid, bool hvm)
-{
-    int rc;
-    xc_cpu_policy_t *host = NULL, *def = NULL;
-
-    host = xc_cpu_policy_init();
-    def = xc_cpu_policy_init();
-    if ( !host || !def )
-    {
-        PERROR("Failed to init policies");
-        rc = -ENOMEM;
-        goto out;
-    }
-
-    /* Get the domain type's default policy. */
-    rc = xc_cpu_policy_get_system(xch, hvm ? XEN_SYSCTL_cpu_policy_hvm_default
-                                           : XEN_SYSCTL_cpu_policy_pv_default,
-                                  def);
-    if ( rc )
-    {
-        PERROR("Failed to obtain %s def policy", hvm ? "hvm" : "pv");
-        goto out;
-    }
-
-    /* Get the host policy. */
-    rc = xc_cpu_policy_get_system(xch, XEN_SYSCTL_cpu_policy_host, host);
-    if ( rc )
-    {
-        PERROR("Failed to obtain host policy");
-        goto out;
-    }
-
-    rc = -EINVAL;
-    for ( ; cpuid->leaf != XEN_CPUID_INPUT_UNUSED; ++cpuid )
-    {
-        xen_cpuid_leaf_t cur_leaf;
-        xen_cpuid_leaf_t def_leaf;
-        xen_cpuid_leaf_t host_leaf;
-
-        rc = xc_cpu_policy_get_cpuid(xch, policy, cpuid->leaf, cpuid->subleaf,
-                                     &cur_leaf);
-        if ( rc )
-        {
-            ERROR("Failed to get current policy leaf %#x subleaf %#x",
-                  cpuid->leaf, cpuid->subleaf);
-            goto out;
-        }
-        rc = xc_cpu_policy_get_cpuid(xch, def, cpuid->leaf, cpuid->subleaf,
-                                     &def_leaf);
-        if ( rc )
-        {
-            ERROR("Failed to get def policy leaf %#x subleaf %#x",
-                  cpuid->leaf, cpuid->subleaf);
-            goto out;
-        }
-        rc = xc_cpu_policy_get_cpuid(xch, host, cpuid->leaf, cpuid->subleaf,
-                                     &host_leaf);
-        if ( rc )
-        {
-            ERROR("Failed to get host policy leaf %#x subleaf %#x",
-                  cpuid->leaf, cpuid->subleaf);
-            goto out;
-        }
-
-        for ( unsigned int i = 0; i < ARRAY_SIZE(cpuid->policy); i++ )
-        {
-            uint32_t *cur_reg = &cur_leaf.a + i;
-            const uint32_t *def_reg = &def_leaf.a + i;
-            const uint32_t *host_reg = &host_leaf.a + i;
-
-            if ( cpuid->policy[i] == NULL )
-                continue;
-
-            for ( unsigned int j = 0; j < 32; j++ )
-            {
-                bool val;
-
-                switch ( cpuid->policy[i][j] )
-                {
-                case '1':
-                    val = true;
-                    break;
-
-                case '0':
-                    val = false;
-                    break;
-
-                case 'x':
-                    val = test_bit(31 - j, def_reg);
-                    break;
-
-                case 'k':
-                case 's':
-                    val = test_bit(31 - j, host_reg);
-                    break;
-
-                default:
-                    ERROR("Bad character '%c' in policy[%d] string '%s'",
-                          cpuid->policy[i][j], i, cpuid->policy[i]);
-                    goto out;
-                }
-
-                clear_bit(31 - j, cur_reg);
-                if ( val )
-                    set_bit(31 - j, cur_reg);
-            }
-        }
-
-        rc = xc_cpu_policy_update_cpuid(xch, policy, &cur_leaf, 1);
-        if ( rc )
-        {
-            PERROR("Failed to set policy leaf %#x subleaf %#x",
-                   cpuid->leaf, cpuid->subleaf);
-            goto out;
-        }
-    }
-
- out:
-    xc_cpu_policy_destroy(def);
-    xc_cpu_policy_destroy(host);
-
-    return rc;
-}
-
 xc_cpu_policy_t *xc_cpu_policy_init(void)
 {
     return calloc(1, sizeof(struct xc_cpu_policy));
diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index 6d17e89191f..6c11b7a7c1f 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -298,7 +298,7 @@ int libxl_cpuid_parse_config(libxl_cpuid_policy_list *cpuid, const char* str)
     char *sep, *val, *endptr;
     int i;
     const struct cpuid_flags *flag;
-    struct xc_xend_cpuid *entry;
+    struct libxl__cpuid_policy *entry;
     unsigned long num;
     char flags[33], *resstr;
 
@@ -376,7 +376,7 @@ int libxl_cpuid_parse_config_xend(libxl_cpuid_policy_list *cpuid,
     char *endptr;
     unsigned long value;
     uint32_t leaf, subleaf = XEN_CPUID_INPUT_UNUSED;
-    struct xc_xend_cpuid *entry;
+    struct libxl__cpuid_policy *entry;
 
     /* parse the leaf number */
     value = strtoul(str, &endptr, 0);
@@ -426,6 +426,137 @@ int libxl_cpuid_parse_config_xend(libxl_cpuid_policy_list *cpuid,
     return 0;
 }
 
+static int apply_cpuid(libxl__gc *gc, xc_cpu_policy_t *policy,
+                       libxl_cpuid_policy_list cpuid, bool hvm, domid_t domid)
+{
+    int r, rc = 0;
+    xc_cpu_policy_t *host = NULL, *def = NULL;
+
+    host = xc_cpu_policy_init();
+    def = xc_cpu_policy_init();
+    if (!host || !def) {
+        LOGD(ERROR, domid, "Failed to init policies");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    /* Get the domain type's default policy. */
+    r = xc_cpu_policy_get_system(CTX->xch,
+                                 hvm ? XEN_SYSCTL_cpu_policy_hvm_default
+                                     : XEN_SYSCTL_cpu_policy_pv_default,
+                                 def);
+    if (r) {
+        LOGED(ERROR, domid, "Failed to obtain %s def policy",
+              hvm ? "hvm" : "pv");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    /* Get the host policy. */
+    r = xc_cpu_policy_get_system(CTX->xch, XEN_SYSCTL_cpu_policy_host, host);
+    if (r) {
+        LOGED(ERROR, domid, "Failed to obtain host policy");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    for (; cpuid->leaf != XEN_CPUID_INPUT_UNUSED; ++cpuid) {
+        xen_cpuid_leaf_t cur_leaf;
+        xen_cpuid_leaf_t def_leaf;
+        xen_cpuid_leaf_t host_leaf;
+
+        r = xc_cpu_policy_get_cpuid(CTX->xch, policy, cpuid->leaf,
+                                    cpuid->subleaf, &cur_leaf);
+        if (r) {
+            LOGED(ERROR, domid,
+                  "Failed to get current policy leaf %#x subleaf %#x",
+                  cpuid->leaf, cpuid->subleaf);
+            r = ERROR_FAIL;
+            goto out;
+        }
+        r = xc_cpu_policy_get_cpuid(CTX->xch, def, cpuid->leaf, cpuid->subleaf,
+                                    &def_leaf);
+        if (r) {
+            LOGED(ERROR, domid,
+                  "Failed to get def policy leaf %#x subleaf %#x",
+                  cpuid->leaf, cpuid->subleaf);
+            rc = ERROR_FAIL;
+            goto out;
+        }
+        r = xc_cpu_policy_get_cpuid(CTX->xch, host, cpuid->leaf,
+                                    cpuid->subleaf, &host_leaf);
+        if (r) {
+            LOGED(ERROR, domid,
+                  "Failed to get host policy leaf %#x subleaf %#x",
+                  cpuid->leaf, cpuid->subleaf);
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        for (unsigned int i = 0; i < ARRAY_SIZE(cpuid->policy); i++) {
+            uint32_t *cur_reg = &cur_leaf.a + i;
+            const uint32_t *def_reg = &def_leaf.a + i;
+            const uint32_t *host_reg = &host_leaf.a + i;
+
+            if (cpuid->policy[i] == NULL)
+                continue;
+
+#define test_bit(i, r) !!(*(r) & (1u << (i)))
+#define set_bit(i, r) (*(r) |= (1u << (i)))
+#define clear_bit(i, r)  (*(r) &= ~(1u << (i)))
+            for (unsigned int j = 0; j < 32; j++) {
+                bool val;
+
+                switch (cpuid->policy[i][j]) {
+                case '1':
+                    val = true;
+                    break;
+
+                case '0':
+                    val = false;
+                    break;
+
+                case 'x':
+                    val = test_bit(31 - j, def_reg);
+                    break;
+
+                case 'k':
+                case 's':
+                    val = test_bit(31 - j, host_reg);
+                    break;
+
+                default:
+                    LOGD(ERROR, domid,
+                         "Bad character '%c' in policy[%d] string '%s'",
+                         cpuid->policy[i][j], i, cpuid->policy[i]);
+                    rc = ERROR_FAIL;
+                    goto out;
+                }
+
+                clear_bit(31 - j, cur_reg);
+                if (val)
+                    set_bit(31 - j, cur_reg);
+            }
+#undef clear_bit
+#undef set_bit
+#undef test_bit
+        }
+
+        r = xc_cpu_policy_update_cpuid(CTX->xch, policy, &cur_leaf, 1);
+        if (r) {
+            LOGED(ERROR, domid, "Failed to set policy leaf %#x subleaf %#x",
+                  cpuid->leaf, cpuid->subleaf);
+            rc = ERROR_FAIL;
+            goto out;
+        }
+    }
+
+ out:
+    xc_cpu_policy_destroy(def);
+    xc_cpu_policy_destroy(host);
+    return rc;
+}
+
 int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
                         libxl_domain_build_info *info)
 {
@@ -541,10 +672,9 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
     }
 
     /* Apply the bits from info->cpuid if any. */
-    r = xc_cpu_policy_apply_cpuid(ctx->xch, policy, info->cpuid, hvm);
-    if (r) {
-        LOGEVD(ERROR, domid, -r, "Failed to apply CPUID changes");
-        rc = ERROR_FAIL;
+    rc = apply_cpuid(gc, policy, info->cpuid, hvm, domid);
+    if (rc) {
+        LOGD(ERROR, domid, "Failed to apply CPUID changes");
         goto out;
     }
 
diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 44a2f3c8fe3..f271a729fa7 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -2052,6 +2052,32 @@ typedef yajl_gen_status (*libxl__gen_json_callback)(yajl_gen hand, void *);
 _hidden char *libxl__object_to_json(libxl_ctx *ctx, const char *type,
                                     libxl__gen_json_callback gen, void *p);
 
+/*
+ * CPUID policy data, expressed in the internal libxl format.
+ *
+ * Policy is an array of strings, 32 chars long:
+ *   policy[0] = eax
+ *   policy[1] = ebx
+ *   policy[2] = ecx
+ *   policy[3] = edx
+ *
+ * The format of the string is the following:
+ *   '1' -> force to 1
+ *   '0' -> force to 0
+ *   'x' -> we don't care (use default)
+ *   'k' -> pass through host value
+ *   's' -> legacy alias for 'k'
+ */
+struct libxl__cpuid_policy {
+    union {
+        struct {
+            uint32_t leaf, subleaf;
+        };
+        uint32_t input[2];
+    };
+    char *policy[4];
+};
+
 _hidden int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
                                 libxl_domain_build_info *info);
 
-- 
2.31.1
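The '1'/'0'/'x'/'k'/'s' semantics documented in the libxl__cpuid_policy comment above can be sketched as a standalone helper. This is a hypothetical illustration, not libxl code: bit 31 of the register corresponds to the first character of the policy string, mirroring the `31 - j` indexing in the apply_cpuid() loop.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

/*
 * Hypothetical helper mirroring the apply_cpuid() loop: apply a
 * 32-character policy string to one CPUID register.
 *   '1' / '0' -> force the bit to 1 / 0
 *   'x'       -> take the bit from the default policy
 *   'k' / 's' -> take the bit from the host policy
 * Returns false on a bad character.
 */
static bool apply_policy_string(const char *policy, uint32_t def,
                                uint32_t host, uint32_t *reg)
{
    uint32_t out = 0;

    for (unsigned int j = 0; j < 32; j++) {
        bool val;

        switch (policy[j]) {
        case '1': val = true; break;
        case '0': val = false; break;
        case 'x': val = !!(def & (1u << (31 - j))); break;
        case 'k':
        case 's': val = !!(host & (1u << (31 - j))); break;
        default:  return false; /* bad character (including early NUL) */
        }

        if (val)
            out |= 1u << (31 - j);
    }

    *reg = out;
    return true;
}
```

Note that, as in the real loop, forcing and pass-through operate per bit, so a single register can mix fixed bits with default and host values.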



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:07:51 2021
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau Monne
	<roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu
	<wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Subject: [PATCH v4 09/10] libs/{light,guest}: implement xc_cpuid_apply_policy in libxl
Date: Fri,  7 May 2021 13:04:21 +0200
Message-ID: <20210507110422.24608-10-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210507110422.24608-1-roger.pau@citrix.com>
References: <20210507110422.24608-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

With the addition of the xc_cpu_policy_* interfaces, libxl can now have
better control over the CPU policy. This allows removing the
xc_cpuid_apply_policy function and instead coding the required bits
directly in libxl's libxl__cpuid_legacy.

Remove xc_cpuid_apply_policy.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
---
Changes since v2:
 - Use 'r' for libxc return values.
 - Fix comment about making a cpu policy compatible.
 - Use LOG*D macros.
---
 tools/include/xenctrl.h         |  18 -----
 tools/libs/guest/xg_cpuid_x86.c | 122 --------------------------------
 tools/libs/light/libxl_cpuid.c  |  94 ++++++++++++++++++++++--
 3 files changed, 87 insertions(+), 147 deletions(-)

diff --git a/tools/include/xenctrl.h b/tools/include/xenctrl.h
index e894c5c392d..17fa3734800 100644
--- a/tools/include/xenctrl.h
+++ b/tools/include/xenctrl.h
@@ -1890,24 +1890,6 @@ struct xc_xend_cpuid {
     char *policy[4];
 };
 
-/*
- * Make adjustments to the CPUID settings for a domain.
- *
- * This path is used in two cases.  First, for fresh boots of the domain, and
- * secondly for migrate-in/restore of pre-4.14 guests (where CPUID data was
- * missing from the stream).  The @restore parameter distinguishes these
- * cases, and the generated policy must be compatible with a 4.13.
- *
- * Either pass a full new @featureset (and @nr_features), or adjust individual
- * features (@pae, @itsc, @nested_virt).
- *
- * Then (optionally) apply legacy XEND overrides (@xend) to the result.
- */
-int xc_cpuid_apply_policy(xc_interface *xch,
-                          uint32_t domid, bool restore,
-                          const uint32_t *featureset,
-                          unsigned int nr_features, bool pae, bool itsc,
-                          bool nested_virt, const struct xc_xend_cpuid *xend);
 int xc_mca_op(xc_interface *xch, struct xen_mc *mc);
 int xc_mca_op_inject_v2(xc_interface *xch, unsigned int flags,
                         xc_cpumap_t cpumap, unsigned int nr_cpus);
diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 778bc2130ea..8c62c3ac360 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -388,128 +388,6 @@ int xc_cpu_policy_apply_cpuid(xc_interface *xch, xc_cpu_policy_t *policy,
     return rc;
 }
 
-int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
-                          const uint32_t *featureset, unsigned int nr_features,
-                          bool pae, bool itsc, bool nested_virt,
-                          const struct xc_xend_cpuid *cpuid)
-{
-    int rc;
-    xc_dominfo_t di;
-    unsigned int nr_leaves, nr_msrs;
-    xen_cpuid_leaf_t *leaves = NULL;
-    struct cpuid_policy *p = NULL;
-    struct xc_cpu_policy policy = { };
-    uint32_t err_leaf = -1, err_subleaf = -1, err_msr = -1;
-
-    if ( xc_domain_getinfo(xch, domid, 1, &di) != 1 ||
-         di.domid != domid )
-    {
-        ERROR("Failed to obtain d%d info", domid);
-        rc = -ESRCH;
-        goto out;
-    }
-
-    rc = xc_cpu_policy_get_size(xch, &nr_leaves, &nr_msrs);
-    if ( rc )
-    {
-        PERROR("Failed to obtain policy info size");
-        rc = -errno;
-        goto out;
-    }
-
-    rc = -ENOMEM;
-    if ( (leaves = calloc(nr_leaves, sizeof(*leaves))) == NULL ||
-         (p = calloc(1, sizeof(*p))) == NULL )
-        goto out;
-
-    /* Get the domain's default policy. */
-    nr_msrs = 0;
-    rc = get_system_cpu_policy(xch, di.hvm ? XEN_SYSCTL_cpu_policy_hvm_default
-                                           : XEN_SYSCTL_cpu_policy_pv_default,
-                               &nr_leaves, leaves, &nr_msrs, NULL);
-    if ( rc )
-    {
-        PERROR("Failed to obtain %s default policy", di.hvm ? "hvm" : "pv");
-        rc = -errno;
-        goto out;
-    }
-
-    rc = x86_cpuid_copy_from_buffer(p, leaves, nr_leaves,
-                                    &err_leaf, &err_subleaf);
-    if ( rc )
-    {
-        ERROR("Failed to deserialise CPUID (err leaf %#x, subleaf %#x) (%d = %s)",
-              err_leaf, err_subleaf, -rc, strerror(-rc));
-        goto out;
-    }
-
-    if ( restore )
-    {
-        policy.cpuid = *p;
-        xc_cpu_policy_make_compat_4_12(xch, &policy, di.hvm);
-        *p = policy.cpuid;
-    }
-
-    if ( featureset )
-    {
-        policy.cpuid = *p;
-        rc = xc_cpu_policy_apply_featureset(xch, &policy, featureset,
-                                            nr_features);
-        if ( rc )
-        {
-            ERROR("Failed to apply featureset to policy");
-            goto out;
-        }
-        *p = policy.cpuid;
-    }
-    else
-    {
-        p->extd.itsc = itsc;
-
-        if ( di.hvm )
-        {
-            p->basic.pae = pae;
-            p->basic.vmx = nested_virt;
-            p->extd.svm = nested_virt;
-        }
-    }
-
-    policy.cpuid = *p;
-    rc = xc_cpu_policy_legacy_topology(xch, &policy, di.hvm);
-    if ( rc )
-        goto out;
-    *p = policy.cpuid;
-
-    rc = xc_cpu_policy_apply_cpuid(xch, &policy, cpuid, di.hvm);
-    if ( rc )
-        goto out;
-
-    rc = x86_cpuid_copy_to_buffer(p, leaves, &nr_leaves);
-    if ( rc )
-    {
-        ERROR("Failed to serialise CPUID (%d = %s)", -rc, strerror(-rc));
-        goto out;
-    }
-
-    rc = xc_set_domain_cpu_policy(xch, domid, nr_leaves, leaves, 0, NULL,
-                                  &err_leaf, &err_subleaf, &err_msr);
-    if ( rc )
-    {
-        PERROR("Failed to set d%d's policy (err leaf %#x, subleaf %#x, msr %#x)",
-               domid, err_leaf, err_subleaf, err_msr);
-        rc = -errno;
-        goto out;
-    }
-
-    rc = 0;
-
-out:
-    free(p);
-    free(leaves);
-
-    return rc;
-}
-
 xc_cpu_policy_t *xc_cpu_policy_init(void)
 {
     return calloc(1, sizeof(struct xc_cpu_policy));
diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
index eb6feaa96d1..6d17e89191f 100644
--- a/tools/libs/light/libxl_cpuid.c
+++ b/tools/libs/light/libxl_cpuid.c
@@ -430,9 +430,11 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
                         libxl_domain_build_info *info)
 {
     GC_INIT(ctx);
+    xc_cpu_policy_t *policy = NULL;
+    bool hvm = info->type == LIBXL_DOMAIN_TYPE_HVM;
     bool pae = true;
     bool itsc;
-    int r;
+    int r, rc = 0;
 
     /*
      * Gross hack.  Using libxl_defbool_val() here causes libvirt to crash in
@@ -443,6 +445,41 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
      */
     bool nested_virt = info->nested_hvm.val > 0;
 
+    policy = xc_cpu_policy_init();
+    if (!policy) {
+        LOGED(ERROR, domid, "Failed to init CPU policy");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    r = xc_cpu_policy_get_domain(ctx->xch, domid, policy);
+    if (r) {
+        LOGED(ERROR, domid, "Failed to fetch domain CPU policy");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    if (restore) {
+        /*
+         * Make sure the policy is compatible with pre Xen 4.13. Note that
+         * newer Xen versions will pass policy data on the restore stream, so
+         * any adjustments done here will be superseded.
+         */
+        r = xc_cpu_policy_make_compat_4_12(ctx->xch, policy, hvm);
+        if (r) {
+            LOGED(ERROR, domid, "Failed to setup compatible CPU policy");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+    }
+
+    r = xc_cpu_policy_legacy_topology(ctx->xch, policy, hvm);
+    if (r) {
+        LOGED(ERROR, domid, "Failed to setup CPU policy topology");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
     /*
      * For PV guests, PAE is Xen-controlled (it is the 'p' that differentiates
      * the xen-3.0-x86_32 and xen-3.0-x86_32p ABIs).  It is mandatory as Xen
@@ -453,8 +490,15 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
      *
      * HVM guests get a top-level choice of whether PAE is available.
      */
-    if (info->type == LIBXL_DOMAIN_TYPE_HVM)
+    if (hvm)
         pae = libxl_defbool_val(info->u.hvm.pae);
+    rc = libxl_cpuid_parse_config(&info->cpuid, GCSPRINTF("pae=%d", pae));
+    if (rc) {
+        LOGD(ERROR, domid, "Failed to set PAE CPUID flag");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
 
     /*
      * Advertising Invariant TSC to a guest means that the TSC frequency won't
@@ -470,14 +514,50 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
      */
     itsc = (libxl_defbool_val(info->disable_migrate) ||
             info->tsc_mode == LIBXL_TSC_MODE_ALWAYS_EMULATE);
+    rc = libxl_cpuid_parse_config(&info->cpuid, GCSPRINTF("invtsc=%d", itsc));
+    if (rc) {
+        LOGD(ERROR, domid, "Failed to set Invariant TSC CPUID flag");
+        rc = ERROR_FAIL;
+        goto out;
+    }
 
-    r = xc_cpuid_apply_policy(ctx->xch, domid, restore, NULL, 0,
-                              pae, itsc, nested_virt, info->cpuid);
-    if (r)
-        LOGEVD(ERROR, -r, domid, "Failed to apply CPUID policy");
+    /* Set Nested virt CPUID bits for HVM. */
+    if (hvm) {
+        rc = libxl_cpuid_parse_config(&info->cpuid, GCSPRINTF("vmx=%d",
+                                                              nested_virt));
+        if (rc) {
+            LOGD(ERROR, domid, "Failed to set VMX CPUID flag");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+
+        rc = libxl_cpuid_parse_config(&info->cpuid, GCSPRINTF("svm=%d",
+                                                              nested_virt));
+        if (rc) {
+            LOGD(ERROR, domid, "Failed to set SVM CPUID flag");
+            rc = ERROR_FAIL;
+            goto out;
+        }
+    }
+
+    /* Apply the bits from info->cpuid if any. */
+    r = xc_cpu_policy_apply_cpuid(ctx->xch, policy, info->cpuid, hvm);
+    if (r) {
+        LOGEVD(ERROR, domid, -r, "Failed to apply CPUID changes");
+        rc = ERROR_FAIL;
+        goto out;
+    }
+
+    r = xc_cpu_policy_set_domain(ctx->xch, domid, policy);
+    if (r) {
+        LOGED(ERROR, domid, "Failed to set domain CPUID policy");
+        rc = ERROR_FAIL;
+    }
 
+ out:
+    xc_cpu_policy_destroy(policy);
     GC_FREE;
-    return r ? ERROR_FAIL : 0;
+    return rc;
 }
 
 static const char *input_names[2] = { "leaf", "subleaf" };
-- 
2.31.1
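The changelog note "Use 'r' for libxc return values" refers to the libxl convention of keeping the raw libxc return value in `r` while the function's own result travels in `rc`, as seen throughout libxl__cpuid_legacy above. A minimal sketch of the pattern (the stub names and the ERROR_FAIL value here are illustrative, not libxl's actual definitions):

```c
#include <assert.h>

/* Illustrative error code; libxl defines its own ERROR_* values. */
enum { OK = 0, ERROR_FAIL = -3 };

/* Stand-in for a libxc call returning 0 on success, -1 on failure. */
static int fake_xc_call(int fail) { return fail ? -1 : 0; }

/*
 * Sketch of the r/rc convention: 'r' carries the raw libxc return
 * value, 'rc' carries the libxl error code the function returns.
 */
static int libxl_style_fn(int make_xc_fail)
{
    int r, rc = OK;

    r = fake_xc_call(make_xc_fail);
    if (r) {
        /* Translate the libxc failure into a libxl error code. */
        rc = ERROR_FAIL;
        goto out;
    }

 out:
    return rc;
}
```

Keeping the two variables separate avoids accidentally returning a raw libxc/errno value where a libxl ERROR_* code is expected.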



From xen-devel-bounces@lists.xenproject.org Fri May 07 11:44:23 2021
Subject: Re: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
To: Jan Beulich <jbeulich@suse.com>
Cc: dwmw2@infradead.org, paul@xen.org, hongyxia@amazon.com,
 raphning@amazon.com, mahgul@amazon.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210506104259.16928-1-julien@xen.org>
 <20210506104259.16928-2-julien@xen.org>
 <f51b2ef6-3998-7371-cea9-502c5c9f8afa@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2a497e4c-d5a3-1da2-699e-1e31740a81f0@xen.org>
Date: Fri, 7 May 2021 12:44:08 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <f51b2ef6-3998-7371-cea9-502c5c9f8afa@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 07/05/2021 10:52, Jan Beulich wrote:
> On 06.05.2021 12:42, Julien Grall wrote:
>> +## High-level overview
>> +
>> +Xen has a framework to bring a new image of the Xen hypervisor in memory using
>> +kexec.  The existing framework does not meet the baseline functionality for
>> +Live Update, since kexec results in a restart for the hypervisor, host, Dom0,
>> +and all the guests.
>> +
>> +The operation can be divided in roughly 4 parts:
>> +
>> +    1. Trigger: The operation will by triggered from outside the hypervisor
>> +       (e.g. dom0 userspace).
>> +    2. Save: The state will be stabilized by pausing the domains and
>> +       serialized by xen#1.
>> +    3. Hand-over: xen#1 will pass the serialized state and transfer control to
>> +       xen#2.
>> +    4. Restore: The state will be deserialized by xen#2.
>> +
>> +All the domains will be paused before xen#1 is starting to save the states,
>> +and any domain that was running before Live Update will be unpaused after
>> +xen#2 has finished to restore the states.  This is to prevent a domain to try
>> +to modify the state of another domain while it is being saved/restored.
>> +
>> +The current approach could be seen as non-cooperative migration with a twist:
>> +all the domains (including dom0) are not expected be involved in the Live
>> +Update process.
>> +
>> +The major differences compare to live migration are:
>> +
>> +    * The state is not transferred to another host, but instead locally to
>> +      xen#2.
>> +    * The memory content or device state (for passthrough) does not need to
>> +      be part of the stream. Instead we need to preserve it.
>> +    * PV backends, device emulators, xenstored are not recreated but preserved
>> +      (as these are part of dom0).
> 
> Isn't dom0 too limiting here?

Good point. I can replace with "as these are living outside of the 
hypervisor".

> 
>> +## Trigger
>> +
>> +Live update is built on top of the kexec interface to prepare the command line,
>> +load xen#2 and trigger the operation.  A new kexec type has been introduced
>> +(*KEXEC\_TYPE\_LIVE\_UPDATE*) to notify Xen to Live Update.
>> +
>> +The Live Update will be triggered from outside the hypervisor (e.g. dom0
>> +userspace).  Support for the operation has been added in kexec-tools 2.0.21.
>> +
>> +All the domains will be paused before xen#1 is starting to save the states.
>> +In Xen, *domain\_pause()* will pause the vCPUs as soon as they can be re-
>> +scheduled.  In other words, a pause request will not wait for asynchronous
>> +requests (e.g. I/O) to finish.  For Live Update, this is not an ideal time to
>> +pause because it will require more xen#1 internal state to be transferred.
>> +Therefore, all the domains will be paused at an architectural restartable
>> +boundary.
> 
> To me this leaves entirely unclear what this then means. domain_pause()
> not being suitable is one thing, but what _is_ suitable seems worth
> mentioning.

I haven't mentioned anything because there is nothing directly suitable 
for Live Update. What we want is a behavior similar to 
``domain_shutdown()`` but without clobbering ``d->shutdown_code`` as 
we would need to transfer it.

This is quite similar to what live migration is doing as, AFAICT, it 
will "shutdown" the domain with the reason SHUTDOWN_suspend.

> Among other things I'd be curious to know what this would
> mean for pending hypercall continuations.

Most of the hypercalls are fine because the state is encoded in the vCPU 
registers and can continue on a new Xen.

The problematic one are:
   1) Hypercalls running in a tasklet (mostly SYSCTL_*)
   2) XEN_DOMCTL_destroydomain
   3) EVTCHNOP_reset{,_cont}

For 1), we need to make sure the tasklets are completed before Live 
Update happens.

For 2), we could decide to wait until it is finished, but it can take a 
while (in some of our testing it takes ~20ish to destroy) or it can 
never finish (e.g. zombie domain). The question is still open on how to 
deal with them because we can't really recreate them using 
domain_create() (some state may have already been relinquished).

For 3), you may remember the discussion we had on security ML during 
XSA-344. One possibility would be to restart the command from scratch 
(or not transfer the event channel at all).
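
To illustrate why register-encoded state survives the update, here is a toy 
model in plain C (all names are made up; this is not Xen code) of a 
preemptible operation whose only progress marker is a continuation cursor 
held in a vCPU register:

```c
#include <stdint.h>

/* Toy stand-in for the part of a vCPU register file that a migration
 * stream preserves: rdi carries the hypercall's continuation cursor. */
struct vcpu_regs { uint64_t rdi; };

/* Process up to 10 items per invocation.  Returns 1 if a continuation is
 * needed, 0 once the operation is complete.  Because all progress lives
 * in the register, xen#2 can simply re-issue the operation. */
static int do_op(struct vcpu_regs *r, uint64_t total)
{
    uint64_t done = r->rdi;
    uint64_t left = total - done;
    uint64_t chunk = left < 10 ? left : 10;

    r->rdi = done + chunk;      /* continuation state saved in the register */
    return r->rdi < total;      /* more work => hypercall continuation */
}
```

The three problematic cases above are exactly the ones whose progress lives 
in hypervisor memory (tasklets, struct domain, evtchn state) rather than in 
the vCPU registers.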

> 
>> +## Save
>> +
>> +xen#1 will be responsible for preserving and serializing the state of each existing
>> +domain and any system-wide state (e.g M2P).
>> +
>> +Each domain will be serialized independently using a modified migration stream.
>> +If there is any dependency between domains (such as for IOREQ server), it will
>> +be recorded using a domid. All the complexity of resolving the dependencies is
>> +left to the restore path in xen#2 (more in the *Restore* section).
>> +
>> +At the moment, the domains are saved one by one in a single thread, but it
>> +would be possible to consider multi-threading if it takes too long, although
>> +this may require some adjustment in the stream format.
>> +
>> +As we want to be able to Live Update between major versions of Xen (e.g Xen
>> +4.11 -> Xen 4.15), the states preserved should not be a dump of Xen internal
>> +structures but instead the minimal information that allows us to recreate the
>> +domains.
>> +
>> +For instance, we don't want to preserve the frametable (and therefore
>> +*struct page\_info*) as-is because the refcounting may be different across
>> +between xen#1 and xen#2 (see XSA-299). Instead, we want to be able to recreate
>> +*struct page\_info* based on minimal information that are considered stable
>> +(such as the page type).
> 
> Perhaps leaving it at this very generic description is fine, but I can
> easily see cases (which may not even be corner ones) where this quickly
> gets problematic: What if xen#2 has state xen#1 didn't (properly) record?
> Such information may not be possible to take out of thin air. Is the
> consequence then that in such a case LU won't work?
I can see cases where the state may not be recorded by xen#1, but so far I 
am struggling to find a case where we could not fake them in xen#2. Do 
you have any example?

>> +## Hand over
>> +
>> +### Memory usage restrictions
>> +
>> +xen#2 must take care not to use any memory pages which already belong to
>> +guests.  To facilitate this, a number of contiguous regions of memory are
>> +reserved for the boot allocator, known as *live update bootmem*.
>> +
>> +xen#1 will always reserve a region just below Xen (the size is controlled by
>> +the Xen command line parameter liveupdate) to allow Xen to grow and to provide
>> +information about LiveUpdate (see the section *Breadcrumb*).  The region will be
>> +passed to xen#2 using the same command line option but with the base address
>> +specified.
> 
> I particularly don't understand the "to allow Xen growing" aspect here:
> xen#2 needs to be placed in a different memory range anyway until xen#1
> has handed over control.
> Are you suggesting it gets moved over to xen#1's
> original physical range (not necessarily an exact match), and then
> perhaps to start below where xen#1 started? 

That's correct.

> Why would you do this?

There are a few reasons:
   1) kexec-tools is in charge of selecting the physical address where 
the kernel (or Xen in our case) will be loaded. So we need to tell kexec 
where a good place to load the new binary is.
   2) xen#2 may end up being loaded in a "random" and therefore possibly 
inconvenient place.

> Xen intentionally lives at a 2Mb boundary, such that in principle (for EFI:
> in fact) large page mappings are possible.

Right, xen#2 will still be loaded at a 2MB boundary. But it may be 2MB 
lower than the original one.
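
The placement rule could be sketched like this (illustrative C only; these 
helpers exist neither in Xen nor in kexec-tools): pick the highest 
2MB-aligned base that keeps xen#2's image at or below xen#1's original end, 
so a larger xen#2 simply starts some 2MB slots lower.

```c
#include <stdint.h>

#define MB(x)            ((uint64_t)(x) << 20)
#define ALIGN_DOWN(a, g) ((a) & ~((uint64_t)(g) - 1))

/* Hypothetical helper: given the end of xen#1's image and the size of the
 * xen#2 image, return a 2MB-aligned load address below xen#1's original
 * range, suitable for passing to kexec. */
static uint64_t lu_xen2_base(uint64_t xen1_end, uint64_t xen2_size)
{
    return ALIGN_DOWN(xen1_end - xen2_size, MB(2));
}
```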

> I also see no reason to reuse
> the same physical area of memory for Xen itself - all you need is for
> Xen's virtual addresses to be properly mapped to the new physical range.
> I wonder what I'm missing here.
It is a known convenient place. It may be difficult to find a similar 
spot on hosts that have been long-running.

> 
>> +For simplicity, additional regions will be provided in the stream.  They will
>> +consist of regions that could be re-used by xen#2 during boot (such as the
>> +xen#1's frametable memory).
>> +
>> +xen#2 must not use any pages outside those regions until it has consumed the
>> +Live Update data stream and determined which pages are already in use by
>> +running domains or need to be re-used as-is by Xen (e.g M2P).
> 
> Is the M2P really in the "need to be re-used" group, not just "can
> be re-used for simplicity and efficiency reasons"?

The MFNs are shared with privileged guests (e.g. dom0). So, I believe, 
the M2P needs to reside at the same place.

The efficiency is an additional benefit.
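
A toy model in plain C (nothing Xen-specific; all names invented) of why the 
M2P is position-critical: a PV guest maps the M2P frames directly and keeps 
a raw pointer into them, so xen#2 must keep writing the same frames rather 
than rebuilding the table elsewhere:

```c
#include <stdint.h>

/* Stand-in for the machine frames backing the M2P table. */
static uint64_t m2p_frames[16];

/* A PV guest maps the M2P once and holds the pointer across the update. */
static const uint64_t *guest_map_m2p(void)
{
    return m2p_frames;
}

/* xen#2 must update the *same* frames; if it rebuilt the table in a new
 * location, the guest's existing mapping would read stale translations. */
static void xen_set_m2p_entry(uint64_t mfn, uint64_t pfn)
{
    m2p_frames[mfn] = pfn;
}
```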

> 
>> +## Restore
>> +
>> +After xen#2 has initialized itself and mapped the stream, it will be
>> +responsible for restoring the state of the system and each domain.
>> +
>> +Unlike the save part, it is not possible to restore a domain in a single pass.
>> +There are dependencies between:
>> +
>> +    1. different states of a domain.  For instance, the event channel ABI
>> +       in use (2l vs fifo) must be restored before the event channels
>> +       themselves.
>> +    2. the same "state" within a domain.  For instance, in the case of a PV
>> +       domain, the pages' ownership must be restored before the type of each
>> +       page (e.g is it an L4, L1... table?).
>> +
>> +    3. domains.  For instance, when restoring the grant mappings, it will be
>> +       necessary to have the page's owner in hand to do proper refcounting.
>> +       Therefore the pages' ownership has to be restored first.
>> +
>> +Dependencies will be resolved using either multiple passes (for dependency
>> +type 2 and 3) or using a specific ordering between records (for dependency
>> +type 1).
>> +
>> +Each domain will be restored in 3 passes:
>> +
>> +    * Pass 0: Create the domain and restore the P2M for HVM. This can be broken
>> +      down into 3 parts:
>> +      * Allocate a domain via _domain\_create()_ but skip the parts that
>> +        require extra records (e.g HAP, P2M).
>> +      * Restore any parts which need to be done before creating the vCPUs.
>> +        This includes restoring the P2M and whether HAP is used.
>> +      * Create the vCPUs. Note this doesn't restore the state of the vCPUs.
>> +    * Pass 1: It will restore the pages' ownership and the grant-table frames.
>> +    * Pass 2: This step will restore any domain state (e.g vCPU state, event
>> +      channels) that wasn't restored by the previous passes.
> 
> What about foreign mappings (which are part of the P2M)? Can they be
> validly restored prior to restoring page ownership?

Our plan is to transfer the P2M as-is because it is used by the IOMMU. 
So the P2M may be restored before it is fully validated.

> In how far do you
> fully trust xen#1's state to be fully consistent anyway, rather than
> perhaps checking it?

This is a tricky question. If the state is not consistent, then it may 
be difficult to get around it. To continue with the example of foreign 
mapping, what if xen#2 thinks dom0 does not have the right to map it? We 
can't easily (?) recover from that.

So far, you need to put some trust in xen#1's state. IOW, you would not be 
able to blindly replace a reboot with LiveUpdating the hypervisor. This 
will need to be tested.

In our PoC, we decided to crash as soon as there is an inconsistent 
state. This avoids continuing to run on a possibly unsafe setup but 
will be painful for the VM users.

As a future improvement, we could look at being able to Live Update with 
inconsistent state.
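
The PoC policy could be sketched like this (hypothetical helper names; the 
real check sites, error codes and messages will differ):

```c
#include <stdio.h>
#include <stdlib.h>

/* lu_panic() stands in for Xen's panic(): any inconsistency found while
 * consuming the stream is fatal rather than silently carried forward. */
static void lu_panic(const char *what)
{
    fprintf(stderr, "LU restore: inconsistent state: %s\n", what);
    exit(1);
}

/* Example validation: a page record naming a non-existent owner domain. */
static int lu_owner_valid(unsigned int owner, unsigned int nr_doms)
{
    return owner < nr_doms;
}

static void lu_check_owner(unsigned int owner, unsigned int nr_doms)
{
    if (!lu_owner_valid(owner, nr_doms))
        lu_panic("page owned by unknown domain");
}
```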

> 
>> +A domain should not have a dependency on another domain within the same pass.
>> +Therefore it would be possible to take advantage of all the CPUs to restore
>> +domains in parallel and reduce the overall downtime.
> 
> "Dependency" may be ambiguous here. For example, an interdomain event
> channel to me necessarily expresses a dependency between two domains.

That's a good point. AFAICT, an interdomain event channel can be restored 
either way. So I will rephrase it.

Thank you for the feedback.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 07 12:15:43 2021
Subject: Re: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
To: Julien Grall <julien@xen.org>
Cc: dwmw2@infradead.org, paul@xen.org, hongyxia@amazon.com,
 raphning@amazon.com, mahgul@amazon.com, Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210506104259.16928-1-julien@xen.org>
 <20210506104259.16928-2-julien@xen.org>
 <f51b2ef6-3998-7371-cea9-502c5c9f8afa@suse.com>
 <2a497e4c-d5a3-1da2-699e-1e31740a81f0@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <324f10b9-2b1f-ec61-1816-44c960c285f8@suse.com>
Date: Fri, 7 May 2021 14:15:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <2a497e4c-d5a3-1da2-699e-1e31740a81f0@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.05.2021 13:44, Julien Grall wrote:
> On 07/05/2021 10:52, Jan Beulich wrote:
>> On 06.05.2021 12:42, Julien Grall wrote:
>>> +## Trigger
>>> +
>>> +Live update is built on top of the kexec interface to prepare the command line,
>>> +load xen#2 and trigger the operation.  A new kexec type has been introduced
>>> +(*KEXEC\_TYPE\_LIVE\_UPDATE*) to notify Xen to perform a Live Update.
>>> +
>>> +The Live Update will be triggered from outside the hypervisor (e.g. dom0
>>> +userspace).  Support for the operation has been added in kexec-tools 2.0.21.
>>> +
>>> +All the domains will be paused before xen#1 starts to save the states.
>>> +In Xen, *domain\_pause()* will pause the vCPUs as soon as they can be
>>> +rescheduled.  In other words, a pause request will not wait for asynchronous
>>> +requests (e.g. I/O) to finish.  For Live Update, this is not an ideal time to
>>> +pause because it will require more xen#1 internal state to be transferred.
>>> +Therefore, all the domains will be paused at an architectural restartable
>>> +boundary.
>>
>> To me this leaves entirely unclear what this then means. domain_pause()
>> not being suitable is one thing, but what _is_ suitable seems worth
>> mentioning.
> 
> I haven't mentioned anything because there is nothing directly suitable 
> for Live Update. What we want is a behavior similar to 
> ``domain_shutdown()`` but without clobbering ``d->shutdown_code`` as 
> we would need to transfer it.
> 
> This is quite similar to what live migration is doing as, AFAICT, it 
> will "shutdown" the domain with the reason SHUTDOWN_suspend.
> 
>> Among other things I'd be curious to know what this would
>> mean for pending hypercall continuations.
> 
> Most of the hypercalls are fine because the state is encoded in the vCPU 
> registers and can continue on a new Xen.
> 
> The problematic ones are:
>    1) Hypercalls running in a tasklet (mostly SYSCTL_*)
>    2) XEN_DOMCTL_destroydomain
>    3) EVTCHNOP_reset{,_cont}

4) paging_domctl_continuation
5) various PV mm hypercalls leaving state in struct page_info or
the old_guest_table per-vCPU field

> For 1), we need to make sure the tasklets are completed before Live 
> Update happens.
> 
> For 2), we could decide to wait until it is finished, but it can take a 
> while (in some of our testing it takes ~20ish to destroy) or it can 
> never finish (e.g. zombie domain). The question is still open on how to 
> deal with them because we can't really recreate them using 
> domain_create() (some state may have already been relinquished).
> 
> For 3), you may remember the discussion we had on security ML during 
> XSA-344. One possibility would be to restart the command from scratch 
> (or not transfer the event channel at all).

Yes, I do recall that.

>>> +## Save
>>> +
>>> +xen#1 will be responsible for preserving and serializing the state of each existing
>>> +domain and any system-wide state (e.g M2P).
>>> +
>>> +Each domain will be serialized independently using a modified migration stream.
>>> +If there is any dependency between domains (such as for IOREQ server), it will
>>> +be recorded using a domid. All the complexity of resolving the dependencies is
>>> +left to the restore path in xen#2 (more in the *Restore* section).
>>> +
>>> +At the moment, the domains are saved one by one in a single thread, but it
>>> +would be possible to consider multi-threading if it takes too long, although
>>> +this may require some adjustment in the stream format.
>>> +
>>> +As we want to be able to Live Update between major versions of Xen (e.g Xen
>>> +4.11 -> Xen 4.15), the states preserved should not be a dump of Xen internal
>>> +structures but instead the minimal information that allows us to recreate the
>>> +domains.
>>> +
>>> +For instance, we don't want to preserve the frametable (and therefore
>>> +*struct page\_info*) as-is because the refcounting may be different across
>>> +between xen#1 and xen#2 (see XSA-299). Instead, we want to be able to recreate
>>> +*struct page\_info* based on minimal information that are considered stable
>>> +(such as the page type).
>>
>> Perhaps leaving it at this very generic description is fine, but I can
>> easily see cases (which may not even be corner ones) where this quickly
>> gets problematic: What if xen#2 has state xen#1 didn't (properly) record?
>> Such information may not be possible to take out of thin air. Is the
>> consequence then that in such a case LU won't work?
> I can see cases where the state may not be recorded by xen#1, but so far I 
> am struggling to find a case where we could not fake them in xen#2. Do 
> you have any example?

The thing that came to mind were the state representation (and logic)
changes done for XSA-299.

>>> +## Hand over
>>> +
>>> +### Memory usage restrictions
>>> +
>>> +xen#2 must take care not to use any memory pages which already belong to
>>> +guests.  To facilitate this, a number of contiguous regions of memory are
>>> +reserved for the boot allocator, known as *live update bootmem*.
>>> +
>>> +xen#1 will always reserve a region just below Xen (the size is controlled by
>>> +the Xen command line parameter liveupdate) to allow Xen to grow and to provide
>>> +information about LiveUpdate (see the section *Breadcrumb*).  The region will be
>>> +passed to xen#2 using the same command line option but with the base address
>>> +specified.
>>
>> I particularly don't understand the "to allow Xen growing" aspect here:
>> xen#2 needs to be placed in a different memory range anyway until xen#1
>> has handed over control.
>> Are you suggesting it gets moved over to xen#1's
>> original physical range (not necessarily an exact match), and then
>> perhaps to start below where xen#1 started? 
> 
> That's correct.
> 
>> Why would you do this?
> 
> There are a few reasons:
>    1) kexec-tools is in charge of selecting the physical address where 
> the kernel (or Xen in our case) will be loaded. So we need to tell kexec 
> where a good place to load the new binary is.
>    2) xen#2 may end up being loaded in a "random" and therefore possibly 
> inconvenient place.

"Inconvenient" should be avoidable as long as the needed alignment
can be guaranteed. In particular I don't think there's too much in
the way in order to have (x86) Xen run on physical memory above
4Gb.

>> Xen intentionally lives at a 2Mb boundary, such that in principle (for EFI:
>> in fact) large page mappings are possible.
> 
> Right, xen#2 will still be loaded at a 2MB boundary. But it may be 2MB 
> lower than the original one.

Oh, I see. The wording made me think you would move it down in
smaller steps. I think somewhere (perhaps in a reply to someone
else) it was said that you'd place it such that its upper address
matches that of xen#1.

>> I also see no reason to reuse
>> the same physical area of memory for Xen itself - all you need is for
>> Xen's virtual addresses to be properly mapped to the new physical range.
>> I wonder what I'm missing here.
> It is a known convenient place. It may be difficult to find a similar 
> spot on hosts that have been long-running.

I'm not convinced: If it was placed in the kexec area at a 2Mb
boundary, it could just run from there. If the kexec area is
large enough, this would work any number of times (as occupied
ranges become available again when the next LU cycle ends).
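
That scheme could be sketched as a round-robin allocator of 2MB-aligned 
slots inside the kexec area (purely illustrative C; none of these names 
exist in Xen or kexec-tools):

```c
#include <stdint.h>

#define MB2 (2ULL << 20)

struct kexec_area { uint64_t base, size, next; };

/* Hand out 2MB-aligned slots round-robin within the kexec area.  By the
 * time the area wraps, the slot used by the previous LU cycle has been
 * handed back, so the scheme works for any number of updates. */
static uint64_t lu_alloc_slot(struct kexec_area *a, uint64_t img_size)
{
    uint64_t need = (img_size + MB2 - 1) & ~(MB2 - 1); /* round up to 2MB */
    uint64_t slot;

    if (a->next + need > a->size)
        a->next = 0;                  /* wrap to the start of the area */
    slot = a->base + a->next;
    a->next += need;
    return slot;
}
```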

>>> +For simplicity, additional regions will be provided in the stream.  They will
>>> +consist of regions that could be re-used by xen#2 during boot (such as the
>>> +xen#1's frametable memory).
>>> +
>>> +xen#2 must not use any pages outside those regions until it has consumed the
>>> +Live Update data stream and determined which pages are already in use by
>>> +running domains or need to be re-used as-is by Xen (e.g M2P).
>>
>> Is the M2P really in the "need to be re-used" group, not just "can
>> be re-used for simplicity and efficiency reasons"?
> 
> The MFNs are shared with privileged guests (e.g. dom0). So, I believe, 
> the M2P needs to reside at the same place.

Oh, yes, good point.

>>> +## Restore
>>> +
>>> +After xen#2 has initialized itself and mapped the stream, it will be
>>> +responsible for restoring the state of the system and each domain.
>>> +
>>> +Unlike the save part, it is not possible to restore a domain in a single pass.
>>> +There are dependencies between:
>>> +
>>> +    1. different states of a domain.  For instance, the event channel ABI
>>> +       in use (2l vs fifo) must be restored before the event channels
>>> +       themselves.
>>> +    2. the same "state" within a domain.  For instance, in the case of a PV
>>> +       domain, the pages' ownership must be restored before the type of each
>>> +       page (e.g is it an L4, L1... table?).
>>> +
>>> +    3. domains.  For instance, when restoring the grant mappings, it will be
>>> +       necessary to have the page's owner in hand to do proper refcounting.
>>> +       Therefore the pages' ownership has to be restored first.
>>> +
>>> +Dependencies will be resolved using either multiple passes (for dependency
>>> +type 2 and 3) or using a specific ordering between records (for dependency
>>> +type 1).
>>> +
>>> +Each domain will be restored in 3 passes:
>>> +
>>> +    * Pass 0: Create the domain and restore the P2M for HVM. This can be broken
>>> +      down into 3 parts:
>>> +      * Allocate a domain via _domain\_create()_ but skip the parts that
>>> +        require extra records (e.g HAP, P2M).
>>> +      * Restore any parts which need to be done before creating the vCPUs.
>>> +        This includes restoring the P2M and whether HAP is used.
>>> +      * Create the vCPUs. Note this doesn't restore the state of the vCPUs.
>>> +    * Pass 1: It will restore the pages' ownership and the grant-table frames.
>>> +    * Pass 2: This step will restore any domain state (e.g vCPU state, event
>>> +      channels) that wasn't restored by the previous passes.
>>
>> What about foreign mappings (which are part of the P2M)? Can they be
>> validly restored prior to restoring page ownership?
> 
> Our plan is to transfer the P2M as-is because it is used by the IOMMU. 
> So the P2M may be restored before it is fully validated.
> 
>> In how far do you
>> fully trust xen#1's state to be fully consistent anyway, rather than
>> perhaps checking it?
> 
> This is a tricky question. If the state is not consistent, then it may 
> be difficult to get around it. To continue with the example of foreign 
> mapping, what if xen#2 thinks dom0 does not have the right to map it? We 
> can't easily (?) recover from that.
> 
> So far, you need to put some trust in xen#1's state. IOW, you would not be 
> able to blindly replace a reboot with LiveUpdating the hypervisor. This 
> will need to be tested.

But this then eliminates a subset of the intended use cases: If e.g.
a refcounting bug needed to be fixed in Xen, and if you don't know
whether xen#1 has actually accumulated any badness, you still won't
be able to avoid the reboot.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 12:56:14 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161831-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161831: tolerable all pass - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 12:56:04 +0000

flight 161831 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161831/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
baseline version:
 xen                  09fc903c5ac042e2e1eb54e58ea7f207ed12ee16

Last test of basis   161800  2021-05-05 22:00:26 Z    1 days
Testing same since   161831  2021-05-07 09:01:42 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   09fc903c5a..7a2b787880  7a2b787880bddbb3bd68b18efe1d6fe339df6ff1 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 07 13:40:28 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161825-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161825: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
X-Osstest-Versions-That:
    xen=09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 13:40:09 +0000

flight 161825 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161825/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161811
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161811
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161811
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161811
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161811
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161811
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161811
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161811
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161811
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161811
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161811
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
baseline version:
 xen                  09fc903c5ac042e2e1eb54e58ea7f207ed12ee16

Last test of basis   161825  2021-05-07 01:56:04 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri May 07 14:59:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 14:59:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124059.234111 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf1wt-0006UR-WF; Fri, 07 May 2021 14:59:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124059.234111; Fri, 07 May 2021 14:59:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf1wt-0006UK-TJ; Fri, 07 May 2021 14:59:27 +0000
Received: by outflank-mailman (input) for mailman id 124059;
 Fri, 07 May 2021 14:59:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=gK0J=KC=amazon.com=prvs=754d3a1f3=hongyxia@srs-us1.protection.inumbo.net>)
 id 1lf1ws-0006UE-SP
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 14:59:26 +0000
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d74a335-5171-42be-9093-763b573671ae;
 Fri, 07 May 2021 14:59:25 +0000 (UTC)
Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO
 email-inbound-relay-1d-5dd976cd.us-east-1.amazon.com) ([10.25.36.210])
 by smtp-border-fw-9102.sea19.amazon.com with ESMTP; 07 May 2021 14:59:16 +0000
Received: from EX13D39EUA001.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan3.iad.amazon.com [10.40.163.38])
 by email-inbound-relay-1d-5dd976cd.us-east-1.amazon.com (Postfix) with ESMTPS
 id E2BB3A2591; Fri,  7 May 2021 14:59:14 +0000 (UTC)
Received: from EX13D37EUA003.ant.amazon.com (10.43.165.7) by
 EX13D39EUA001.ant.amazon.com (10.43.165.90) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Fri, 7 May 2021 14:59:14 +0000
Received: from EX13D37EUA003.ant.amazon.com ([10.43.165.7]) by
 EX13D37EUA003.ant.amazon.com ([10.43.165.7]) with mapi id 15.00.1497.015;
 Fri, 7 May 2021 14:59:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d74a335-5171-42be-9093-763b573671ae
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1620399566; x=1651935566;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=BqqMRIsOwJmULanbAJ47vyruUAfOqJsmS9xhTlGiscE=;
  b=GJAKXggzPAhhgfnWub58teKFZ1RVOhNtH8RLDN5OkRddtBssTZhtPxhE
   QsMTdBehz0U6g/nfbuZYO/tpyyO58ix+2Sh76ZxeQpUndApKMY2U/kj2f
   hPqaL8ZkOIR4P6hDzbVYZvVAGA7U+qDYj8AnZ8TWJaOSXf7uq1jBqC4Ug
   I=;
X-IronPort-AV: E=Sophos;i="5.82,281,1613433600"; 
   d="scan'208";a="133868623"
From: "Xia, Hongyan" <hongyxia@amazon.com>
To: "jbeulich@suse.com" <jbeulich@suse.com>, "julien@xen.org" <julien@xen.org>
CC: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>, "Gul, Mahircan"
	<mahgul@amazon.co.uk>, "paul@xen.org" <paul@xen.org>, "Ning, Raphael"
	<raphning@amazon.com>, "wl@xen.org" <wl@xen.org>, "iwj@xenproject.org"
	<iwj@xenproject.org>, "Grall, Julien" <jgrall@amazon.co.uk>,
	"george.dunlap@citrix.com" <george.dunlap@citrix.com>, "dwmw2@infradead.org"
	<dwmw2@infradead.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
Thread-Topic: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
Thread-Index: AQHXQ1GL5yDA2hXTmkq1WsWucyAoqw==
Date: Fri, 7 May 2021 14:59:14 +0000
Message-ID: <28b31476e2044161f94bfd85d1d3c8b2f6dfb806.camel@amazon.com>
References: <20210506104259.16928-1-julien@xen.org>
	 <20210506104259.16928-2-julien@xen.org>
	 <f51b2ef6-3998-7371-cea9-502c5c9f8afa@suse.com>
	 <2a497e4c-d5a3-1da2-699e-1e31740a81f0@xen.org>
	 <324f10b9-2b1f-ec61-1816-44c960c285f8@suse.com>
In-Reply-To: <324f10b9-2b1f-ec61-1816-44c960c285f8@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-messagesentrepresentingtype: 1
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.160.119]
Content-Type: text/plain; charset="utf-8"
Content-ID: <98995F9367FD964BBC9A84C0B10B47D8@amazon.com>
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0
Precedence: Bulk

On Fri, 2021-05-07 at 14:15 +0200, Jan Beulich wrote:
> On 07.05.2021 13:44, Julien Grall wrote:
[...]
> > 
> > It is a known convenient place. It may be difficult to find a
> > similar 
> > spot on host that have been long-running.
> 
> I'm not convinced: If it was placed in the kexec area at a 2Mb
> boundary, it could just run from there. If the kexec area is
> large enough, this would work any number of times (as occupied
> ranges become available again when the next LU cycle ends).

To make sure the next Xen can be loaded and run anywhere in case kexec
cannot find large enough memory under 4G, we need to:

1. teach kexec to load the whole image contiguously. At the moment
kexec prepares scattered 4K pages which are not runnable until they are
copied to a contiguous destination. (What if it can't find a contiguous
range?)

2. teach Xen that it can be jumped into with some existing page tables
which point to itself above 4G. We can't do real/protected mode entry
because it needs to start below 4G physically. Maybe a modified version
of the EFI entry path (my familiarity with Xen EFI entry is limited)?

3. rewrite all the early boot bits that assume Xen is under 4G and its
bundled page tables for below 4G.

These are the obstacles off the top of my head. So I think there is no
fundamental reason why we have to place Xen #2 where Xen #1 was, but
doing so is a massive reduction of pain which allows us to reuse much
of the existing Xen code.

Maybe, this part does not have to be part of the ABI and we just
suggest this as one way of loading the next Xen to cope with growth?
This is the best way I can think of (loading Xen where it was and
expand into the reserved bootmem if needed) that does not need to
rewrite a lot of early boot code and can pretty much guarantee success
even if memory is tight and fragmented.

Hongyan
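[Editor's note: the first obstacle raised in this thread is that kexec stages the image as scattered 4K pages, which only become runnable after being copied into one contiguous destination. A minimal sketch of that scatter-to-contiguous step follows. This is an illustrative simulation, not Xen or kexec-tools code; the names (`struct seg`, `coalesce`) are hypothetical.]

```c
/* Illustrative only: simulates coalescing scattered 4K source pages
 * into one contiguous image, the step that fails when no contiguous
 * range large enough can be found. */
#include <stdlib.h>
#include <string.h>

#define PAGE_SIZE 4096

/* One scattered source page and the offset it must land at. */
struct seg {
    unsigned char src[PAGE_SIZE]; /* simulated 4K source page */
    size_t dst_off;               /* offset within the contiguous image */
};

/* Copy each scattered page to its place in a single contiguous buffer.
 * Returns NULL if the contiguous allocation fails, analogous to kexec
 * being unable to find a large enough contiguous range. */
static unsigned char *coalesce(const struct seg *segs, size_t n, size_t total)
{
    unsigned char *dst = malloc(total);

    if ( !dst )
        return NULL;

    for ( size_t i = 0; i < n; i++ )
        memcpy(dst + segs[i].dst_off, segs[i].src, PAGE_SIZE);

    return dst;
}
```

The point of contention above is whether this copy step can be avoided entirely by loading the image contiguously in the first place, so it can run in place.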


From xen-devel-bounces@lists.xenproject.org Fri May 07 15:10:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 15:10:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124063.234123 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf27O-0000Gs-0Y; Fri, 07 May 2021 15:10:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124063.234123; Fri, 07 May 2021 15:10:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf27N-0000Gk-Tt; Fri, 07 May 2021 15:10:17 +0000
Received: by outflank-mailman (input) for mailman id 124063;
 Fri, 07 May 2021 15:10:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kza/=KC=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lf27M-0000Ge-Ai
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 15:10:16 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04751cac-fd3c-44b1-aa93-838881fbc321;
 Fri, 07 May 2021 15:10:15 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1620400192346523.3740762787533;
 Fri, 7 May 2021 08:09:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04751cac-fd3c-44b1-aa93-838881fbc321
ARC-Seal: i=1; a=rsa-sha256; t=1620400205; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=CmoaEAsg+sWjmivXCPaJQbEOQM7ik9nSFdpkL7WxDQwG4C722qoefHgGQvSwMRz9oJIvgAMHdAIhadPRwKUN5SCo/e07V6mDqkejgyM3ac81SFxrdG48J3fWaQhgi+T4D0+8FBQSa84Ib+bwNXIYy3R6MsgkDfOZaFN9ZXo03fs=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1620400205; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=WTihv1F2nV1abmzrN1LRdfqxi3o6PNbr7Kg84wpaQWw=; 
	b=hAglI96u7OMhu93D/sAjcpFk6742MTPKYpKr65TsqQJxtBZKUETdruSqgbddnQGl3/HbRHLUmz6FNDP+WN4exavOxvvYGWj+cp/BDdBKbnYzvC3H2g1RCoE9GSnuTLGY1DzZUzYkVtVUHIEz1Us9mLKpjzDMT+dTtrdV4aI8Y2U=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1620400205;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=WTihv1F2nV1abmzrN1LRdfqxi3o6PNbr7Kg84wpaQWw=;
	b=IrKwcN6fDDJQ6uziWzZYSKCQj0UAXKiIM7SoFKyj9tkVLtfxZIi+MdYFhG3Bxo0q
	tU/ZzlPngSXJqtoryRoIlrvo2KnM9yNVA20PqrV8G6uXy5nhN8FKmzUhx08cpLd1Vlk
	5MxKqf0F0LhCbvOtVVq+H10QeuU6qkWhrzw0V5uw=
Subject: Re: [PATCH] Argo/XSM: add SILO hooks
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Christopher Clark <christopher.w.clark@gmail.com>,
 Daniel de Graaf <dgdegra@tycho.nsa.gov>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <f47a6aa0-3624-5819-2e3a-43c5e492ae1b@suse.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <ac40709a-e1a2-03ae-6d44-be811f545bca@apertussolutions.com>
Date: Fri, 7 May 2021 11:09:50 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.9.0
MIME-Version: 1.0
In-Reply-To: <f47a6aa0-3624-5819-2e3a-43c5e492ae1b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/7/21 5:20 AM, Jan Beulich wrote:
> In SILO mode restrictions for inter-domain communication should apply
> here along the lines of those for evtchn and gnttab.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

> ---
> Really I was first thinking about the shim: Shouldn't that proxy argo
> requests just like it does for gnttab ones? It only then occurred to me
> that there's also an implication for SILO mode.
> 
> --- a/xen/xsm/silo.c
> +++ b/xen/xsm/silo.c
> @@ -81,12 +81,35 @@ static int silo_grant_copy(struct domain
>      return -EPERM;
>  }
>  
> +#ifdef CONFIG_ARGO
> +
> +static int silo_argo_register_single_source(const struct domain *d1,
> +                                            const struct domain *d2)
> +{
> +    if ( silo_mode_dom_check(d1, d2) )
> +        return xsm_argo_register_single_source(d1, d2);
> +    return -EPERM;
> +}
> +
> +static int silo_argo_send(const struct domain *d1, const struct domain *d2)
> +{
> +    if ( silo_mode_dom_check(d1, d2) )
> +        return xsm_argo_send(d1, d2);
> +    return -EPERM;
> +}
> +
> +#endif
> +
>  static struct xsm_operations silo_xsm_ops = {
>      .evtchn_unbound = silo_evtchn_unbound,
>      .evtchn_interdomain = silo_evtchn_interdomain,
>      .grant_mapref = silo_grant_mapref,
>      .grant_transfer = silo_grant_transfer,
>      .grant_copy = silo_grant_copy,
> +#ifdef CONFIG_ARGO
> +    .argo_register_single_source = silo_argo_register_single_source,
> +    .argo_send = silo_argo_send,
> +#endif
>  };
>  
>  void __init silo_init(void)
> 



From xen-devel-bounces@lists.xenproject.org Fri May 07 15:28:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 15:28:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124067.234136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2PC-0001rI-In; Fri, 07 May 2021 15:28:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124067.234136; Fri, 07 May 2021 15:28:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2PC-0001rB-FW; Fri, 07 May 2021 15:28:42 +0000
Received: by outflank-mailman (input) for mailman id 124067;
 Fri, 07 May 2021 15:28:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Rq7T=KC=tklengyel.com=tamas@srs-us1.protection.inumbo.net>)
 id 1lf2PB-0001r5-DE
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 15:28:41 +0000
Received: from MTA-09-3.privateemail.com (unknown [68.65.122.19])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d34aa4ec-29a2-408a-aec3-2b265d88776f;
 Fri, 07 May 2021 15:28:40 +0000 (UTC)
Received: from MTA-09.privateemail.com (localhost [127.0.0.1])
 by MTA-09.privateemail.com (Postfix) with ESMTP id E137660059;
 Fri,  7 May 2021 11:28:39 -0400 (EDT)
Received: from toma-xps.lan (unknown [10.20.151.239])
 by MTA-09.privateemail.com (Postfix) with ESMTPA id 3DC206004F;
 Fri,  7 May 2021 11:28:39 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d34aa4ec-29a2-408a-aec3-2b265d88776f
From: Tamas K Lengyel <tamas@tklengyel.com>
To: xen-devel@lists.xenproject.org
Cc: Tamas K Lengyel <tamas@tklengyel.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/misc/xen-vmtrace: handle more signals and install by default
Date: Fri,  7 May 2021 11:28:36 -0400
Message-Id: <20210507152836.20026-1-tamas@tklengyel.com>
X-Mailer: git-send-email 2.27.0
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-Virus-Scanned: ClamAV using ClamSMTP

Signed-off-by: Tamas K Lengyel <tamas@tklengyel.com>
---
 tools/misc/Makefile      |  2 +-
 tools/misc/xen-vmtrace.c | 12 +++++++++---
 2 files changed, 10 insertions(+), 4 deletions(-)

diff --git a/tools/misc/Makefile b/tools/misc/Makefile
index 2b683819d4..c32c42d546 100644
--- a/tools/misc/Makefile
+++ b/tools/misc/Makefile
@@ -25,6 +25,7 @@ INSTALL_SBIN-$(CONFIG_X86)     += xen-lowmemd
 INSTALL_SBIN-$(CONFIG_X86)     += xen-memshare
 INSTALL_SBIN-$(CONFIG_X86)     += xen-mfndump
 INSTALL_SBIN-$(CONFIG_X86)     += xen-ucode
+INSTALL_SBIN-$(CONFIG_X86)     += xen-vmtrace
 INSTALL_SBIN                   += xencov
 INSTALL_SBIN                   += xenhypfs
 INSTALL_SBIN                   += xenlockprof
@@ -51,7 +52,6 @@ TARGETS_COPY += xenpvnetboot
 TARGETS_BUILD := $(filter-out $(TARGETS_COPY),$(TARGETS_ALL))
 
 # ... including build-only targets
-TARGETS_BUILD-$(CONFIG_X86)    += xen-vmtrace
 TARGETS_BUILD += $(TARGETS_BUILD-y)
 
 .PHONY: all build
diff --git a/tools/misc/xen-vmtrace.c b/tools/misc/xen-vmtrace.c
index 35d14c6a9b..5b688a54af 100644
--- a/tools/misc/xen-vmtrace.c
+++ b/tools/misc/xen-vmtrace.c
@@ -44,7 +44,7 @@ static size_t size;
 static char *buf;
 
 static sig_atomic_t interrupted;
-static void int_handler(int signum)
+static void close_handler(int signum)
 {
     interrupted = 1;
 }
@@ -78,8 +78,14 @@ int main(int argc, char **argv)
     int rc, exit = 1;
     xenforeignmemory_resource_handle *fres = NULL;
 
-    if ( signal(SIGINT, int_handler) == SIG_ERR )
-        err(1, "Failed to register signal handler\n");
+    struct sigaction act;
+    act.sa_handler = close_handler;
+    act.sa_flags = 0;
+    sigemptyset(&act.sa_mask);
+    sigaction(SIGHUP,  &act, NULL);
+    sigaction(SIGTERM, &act, NULL);
+    sigaction(SIGINT,  &act, NULL);
+    sigaction(SIGALRM, &act, NULL);
 
     if ( argc != 3 )
     {
-- 
2.27.0



From xen-devel-bounces@lists.xenproject.org Fri May 07 15:28:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 15:28:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124068.234148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2PM-0002B4-QK; Fri, 07 May 2021 15:28:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124068.234148; Fri, 07 May 2021 15:28:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2PM-0002Av-NA; Fri, 07 May 2021 15:28:52 +0000
Received: by outflank-mailman (input) for mailman id 124068;
 Fri, 07 May 2021 15:28:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rJTn=KC=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lf2PL-0002AF-A7
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 15:28:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b1f40b92-7b39-49c7-b6dc-b29de329a397;
 Fri, 07 May 2021 15:28:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 80C62AC6A;
 Fri,  7 May 2021 15:28:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b1f40b92-7b39-49c7-b6dc-b29de329a397
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620401329; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qpu+bOi5Xpc3dDo4ngOHPAFA8yoMvUdr5hcWr7/Zdbw=;
	b=BnUC0RNoZLylqQEfVX9C+DYGNrejBMcsJX0TezJZiotIls856hfdaf0OKy0ajKQU1hST5Z
	J3in7jn15vuUUUe492U55Ows1a/ndgMaU9fSCv/yeRQPRDnhn49wX2npGInzXw2Ukj1Eeh
	fq8m3XeQ7WPl8gZODQ1KlRe/yqZbGGA=
Subject: Re: [PATCH RFC 1/2] docs/design: Add a design document for Live
 Update
To: "Xia, Hongyan" <hongyxia@amazon.com>
Cc: "andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "Gul, Mahircan" <mahgul@amazon.co.uk>, "paul@xen.org" <paul@xen.org>,
 "Ning, Raphael" <raphning@amazon.com>, "wl@xen.org" <wl@xen.org>,
 "iwj@xenproject.org" <iwj@xenproject.org>,
 "Grall, Julien" <jgrall@amazon.co.uk>,
 "george.dunlap@citrix.com" <george.dunlap@citrix.com>,
 "dwmw2@infradead.org" <dwmw2@infradead.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "julien@xen.org" <julien@xen.org>
References: <20210506104259.16928-1-julien@xen.org>
 <20210506104259.16928-2-julien@xen.org>
 <f51b2ef6-3998-7371-cea9-502c5c9f8afa@suse.com>
 <2a497e4c-d5a3-1da2-699e-1e31740a81f0@xen.org>
 <324f10b9-2b1f-ec61-1816-44c960c285f8@suse.com>
 <28b31476e2044161f94bfd85d1d3c8b2f6dfb806.camel@amazon.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4e161632-e558-039c-2a2a-398fe492a7fc@suse.com>
Date: Fri, 7 May 2021 17:28:49 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <28b31476e2044161f94bfd85d1d3c8b2f6dfb806.camel@amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.05.2021 16:59, Xia, Hongyan wrote:
> On Fri, 2021-05-07 at 14:15 +0200, Jan Beulich wrote:
>> On 07.05.2021 13:44, Julien Grall wrote:
> [...]
>>>
>>> It is a known convenient place. It may be difficult to find a
>>> similar spot on hosts that have been long-running.

>>
>> I'm not convinced: If it was placed in the kexec area at a 2Mb
>> boundary, it could just run from there. If the kexec area is
>> large enough, this would work any number of times (as occupied
>> ranges become available again when the next LU cycle ends).
> 
> To make sure the next Xen can be loaded and run anywhere in case kexec
> cannot find large enough memory under 4G, we need to:
> 
> 1. teach kexec to load the whole image contiguously. At the moment
> kexec prepares scattered 4K pages which are not runnable until they are
> copied to a contiguous destination. (What if it can't find a contiguous
> range?)
> 
> 2. teach Xen that it can be jumped into with some existing page tables
> which point to itself above 4G. We can't do real/protected mode entry
> because it needs to start below 4G physically. Maybe a modified version
> of the EFI entry path (my familiarity with Xen EFI entry is limited)?
> 
> 3. rewrite all the early boot bits that assume Xen is under 4G and its
> bundled page tables for below 4G.
> 
> These are the obstacles off the top of my head. So I think there is no
> fundamental reason why we have to place Xen #2 where Xen #1 was, but
> doing so is a massive reduction of pain which allows us to reuse much
> of the existing Xen code.
> 
> Maybe, this part does not have to be part of the ABI and we just
> suggest this as one way of loading the next Xen to cope with growth?
> This is the best way I can think of (loading Xen where it was and
> expand into the reserved bootmem if needed) that does not need to
> rewrite a lot of early boot code and can pretty much guarantee success
> even if memory is tight and fragmented.

Yeah, all of this as an initial implementation plan sounds fine to
me. But it should then be called out as such (rather than as part of
how things ought to [remain to] be).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 07 15:31:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 15:31:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124073.234160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2Rp-0003pk-6g; Fri, 07 May 2021 15:31:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124073.234160; Fri, 07 May 2021 15:31:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2Rp-0003pd-3B; Fri, 07 May 2021 15:31:25 +0000
Received: by outflank-mailman (input) for mailman id 124073;
 Fri, 07 May 2021 15:31:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qbhn=KC=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1lf2Rm-0003pV-SO
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 15:31:22 +0000
Received: from mail-qt1-x833.google.com (unknown [2607:f8b0:4864:20::833])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8cc72b0-c544-459d-9c2e-998fb391e780;
 Fri, 07 May 2021 15:31:22 +0000 (UTC)
Received: by mail-qt1-x833.google.com with SMTP id c11so6771278qth.2
 for <xen-devel@lists.xenproject.org>; Fri, 07 May 2021 08:31:22 -0700 (PDT)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net.
 [72.81.132.2])
 by smtp.gmail.com with ESMTPSA id q13sm5140503qkj.43.2021.05.07.08.31.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 May 2021 08:31:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8cc72b0-c544-459d-9c2e-998fb391e780
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=71cCRQn+HQWCN9MH5Z6ZnIp2rjcsD2U6fT/zul98+C4=;
        b=qhBjN9gW2haeADbEBgmmibhvdS6XPIbpvsOWGy+XbdvWCkqkBCA58O6F2t2zW5KPxh
         qCN40LL0HdjO8yCitVoc1TIXfwVD2lLrlS3/mKOqv2A2FWB2/rkYdW7gsVvTNeQapVfW
         uK9/mytUMbGhRyGSa/vncCzsvEIw1F+1C15kEehLKbG1iaFcyUI/9f9P170pCXinIOpF
         xjUqVxPbJc0+bCJj2htURf17ozO+/bWB/Fps34ghe0usJvftv91D+n4+fsRTq4SXpSW/
         Ci22yNH9U9iv6J5j5508GjMa4utqMN94h3UY7T88p0RXVy1F+g7dD6pUmC+anR4oVV1R
         wpKw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=71cCRQn+HQWCN9MH5Z6ZnIp2rjcsD2U6fT/zul98+C4=;
        b=WE/THk9QFIw73P/S/4zAWKt6TJb/NonGGhlq7LO+QS7OSLtE/TvIKH5LnxCe8AJcYk
         IgZ+Oy9EioDLCIboYE+B89O1gwCyB+v03vC7PxdWOQqYvhJwzGlkAUDfh/vTD/GAU9u6
         oGZOXCKYNIUDWISx3KYlwQrbqvP8WOvuQrlQq/vhcMKcMLkqq9E+mOoIZRBBJogwXZ4F
         mAw0LjNBuJ5sk69Rs2W+MgvUHe5r8g/lwvQYgpdbDsw/jObWNhX8izlJe2TqkMbioz3D
         cQ2oWjrIDxLqSjmixINHr9sYGlQUMS9UG+Py8qu+eAnr6y9p3Wh5naUlzvkO4XAemk01
         g7Xg==
X-Gm-Message-State: AOAM532ZATplPk28BTK84F/Xn+Mb6Udc16M6nje+K+QYP7n6HS6hZvBm
	ylXQrFdPY7Wh56xJlMMQ6xc=
X-Google-Smtp-Source: ABdhPJzgKy0K3vlah+CPHtMoJFL4kY+jX19KDcna3mc9Tk983PzGYi5tWMzvo1v4fxnzJhZgdz6wsw==
X-Received: by 2002:a05:622a:13c6:: with SMTP id p6mr10113062qtk.288.1620401481957;
        Fri, 07 May 2021 08:31:21 -0700 (PDT)
Subject: Re: [PATCH 1/9] docs: Warn about incomplete vtpmmgr TPM 2.0 support
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-2-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <cea36fb6-9464-5e20-fbd7-fba367fd9ced@gmail.com>
Date: Fri, 7 May 2021 11:31:19 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.9.0
MIME-Version: 1.0
In-Reply-To: <20210504124842.220445-2-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/4/21 8:48 AM, Jason Andryuk wrote:
> The vtpmmgr TPM 2.0 support is incomplete.  Add a warning about that to
> the documentation so others don't have to work through discovering it is
> broken.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  docs/man/xen-vtpmmgr.7.pod | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
> index af825a7ffe..875dcce508 100644
> --- a/docs/man/xen-vtpmmgr.7.pod
> +++ b/docs/man/xen-vtpmmgr.7.pod
> @@ -222,6 +222,17 @@ XSM label, not the kernel.
>  
>  =head1 Appendix B: vtpmmgr on TPM 2.0
>  
> +=head2 WARNING: Incomplete - cannot persist data
> +
> +TPM 2.0 support for vTPM manager is incomplete.  There is no support for
> +persisting an encryption key, so vTPM manager regenerates primary and secondary
> +key handles each boot.
> +
> +Also, the vTPM manager group command implementation hardcodes TPM 1.2 commands.
> +This means running manage-vtpmmgr.pl fails when the TPM 2.0 hardware rejects
> +the TPM 1.2 commands.  vTPM manager with TPM 2.0 cannot create groups and
> +therefore cannot persist vTPM contents.
> +
>  =head2 Manager disk image setup:
>  
>  The vTPM Manager requires a disk image to store its encrypted data. The image
> 



From xen-devel-bounces@lists.xenproject.org Fri May 07 15:33:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 15:33:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124078.234171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2TU-0004SQ-ID; Fri, 07 May 2021 15:33:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124078.234171; Fri, 07 May 2021 15:33:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2TU-0004SJ-F9; Fri, 07 May 2021 15:33:08 +0000
Received: by outflank-mailman (input) for mailman id 124078;
 Fri, 07 May 2021 15:33:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qbhn=KC=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1lf2TT-0004SD-Lv
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 15:33:07 +0000
Received: from mail-qt1-x835.google.com (unknown [2607:f8b0:4864:20::835])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83e66737-124c-4028-a073-d46273eac366;
 Fri, 07 May 2021 15:33:07 +0000 (UTC)
Received: by mail-qt1-x835.google.com with SMTP id n22so6859887qtk.9
 for <xen-devel@lists.xenproject.org>; Fri, 07 May 2021 08:33:07 -0700 (PDT)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net.
 [72.81.132.2])
 by smtp.gmail.com with ESMTPSA id v11sm4721702qtx.79.2021.05.07.08.33.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 May 2021 08:33:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83e66737-124c-4028-a073-d46273eac366
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=98jDhbtFXAmDoRmGETuRAFMHEHy1Q9q7QLyn2c0vfkQ=;
        b=OUXd4kJBl7YxwcnKYxlU1s9iVb1vrCg9+/eXATKUPQMuLDFqlAJeh1grlI24tcfshg
         NB2qlY9/1fupGnPfzxtje8gO3SyIK/q1rwzacf/bxtatES6X0C7hr+fZBdOoaXB9Cg1c
         HAPyg11ab6BK/m6uVTs+7T++bgwiScnYFQjamjSfLyZSqWwVEsLdUAG+3y4Y3bNyPuBD
         qDs5iqiht1xQpTYO+8BYenJY//TEkov9OxEauKTmG0Tj/a3xSgc4ELpqC1YbNO7uktk9
         J9zmo24lxthMwx2oF1w/jBobs+T//J3X4UDlGL0qz9udqUf4BZSYSP3lhJZ0y2ed/Am2
         trHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=98jDhbtFXAmDoRmGETuRAFMHEHy1Q9q7QLyn2c0vfkQ=;
        b=j+86NK4s7mZAz3K6y/R2GI/4dKdMJjhaORteB/qvgfLyFAvkAvAPgZVoMhBSt6pwne
         /xzUUpsjSvFBeO5U5Ce4u7g8CEkFZh/zgmqNC9UiL5KdpqfVrX6hgNe+tye68Rs/iL6s
         zE8Ohwse7QOe+/bhq567UqO8h9YdWF0TAnsoPoX7m8FestA4PvorqsW2SNcuqFIHxy5y
         N9rtuPThGLHCOd0mJieEYv7Y/FRDNvip0ZEkWlWpufhR/fp4+mU3mJTCZSVuVcaIAMPB
         EZgT7QH8XTfobb27qwmKmkJKM+ymiuSXkdyCDfFdv1dQj+PKtmMrUeOk3IXkJSxqU4nP
         l0TA==
X-Gm-Message-State: AOAM533+Zzvvej6LU2Cfc8x0PswF4EQoxxshl/nZoTbs9eb/Z2fJmJVR
	FWZBSFccyMtbxsLJy848GcmNNFjKOA4=
X-Google-Smtp-Source: ABdhPJxPjTFzhvjpTP94UNrt1REvxwOL5XhV6xDVIEy21VX9ABEnepRCA8TP7z9EVGzHrGAWKDAEGw==
X-Received: by 2002:ac8:5a16:: with SMTP id n22mr9773658qta.103.1620401586802;
        Fri, 07 May 2021 08:33:06 -0700 (PDT)
Subject: Re: [PATCH 2/9] vtpmmgr: Print error code to aid debugging
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-3-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <a79284c5-669c-0749-8234-86f2bc6f4d46@gmail.com>
Date: Fri, 7 May 2021 11:33:05 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.9.0
MIME-Version: 1.0
In-Reply-To: <20210504124842.220445-3-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/4/21 8:48 AM, Jason Andryuk wrote:
> tpm_get_error_name returns "Unknown Error Code" when an error string
> is not defined.  In that case, we should print the Error Code so it can
> be looked up offline.  tpm_get_error_name returns a const string, so
> just have the two callers always print the error code so it is always
> available.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/tpm.c  | 2 +-
>  stubdom/vtpmmgr/tpm2.c | 2 +-
>  2 files changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/tpm.c b/stubdom/vtpmmgr/tpm.c
> index 779cddd64e..83b2bc16b2 100644
> --- a/stubdom/vtpmmgr/tpm.c
> +++ b/stubdom/vtpmmgr/tpm.c
> @@ -109,7 +109,7 @@
>  			UINT32 rsp_status; \
>  			UNPACK_OUT(TPM_RSP_HEADER, &rsp_tag, &rsp_len, &rsp_status); \
>  			if (rsp_status != TPM_SUCCESS) { \
> -				vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(rsp_status)); \
> +				vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s (%x)\n", tpm_get_error_name(rsp_status), rsp_status); \
>  				status = rsp_status; \
>  				goto abort_egress; \
>  			} \
> diff --git a/stubdom/vtpmmgr/tpm2.c b/stubdom/vtpmmgr/tpm2.c
> index c9f1016ab5..655e6d164c 100644
> --- a/stubdom/vtpmmgr/tpm2.c
> +++ b/stubdom/vtpmmgr/tpm2.c
> @@ -126,7 +126,7 @@
>      ptr = unpack_TPM_RSP_HEADER(ptr, \
>            &(tag), &(paramSize), &(status));\
>      if ((status) != TPM_SUCCESS){ \
> -        vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s\n", tpm_get_error_name(status));\
> +        vtpmlogerror(VTPM_LOG_TPM, "Failed with return code %s (%x)\n", tpm_get_error_name(status), (status));\
>          goto abort_egress;\
>      }\
>  } while(0)
> 



From xen-devel-bounces@lists.xenproject.org Fri May 07 15:37:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 15:37:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124086.234184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2XS-0005Dd-8a; Fri, 07 May 2021 15:37:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124086.234184; Fri, 07 May 2021 15:37:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2XS-0005DW-4N; Fri, 07 May 2021 15:37:14 +0000
Received: by outflank-mailman (input) for mailman id 124086;
 Fri, 07 May 2021 15:37:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qbhn=KC=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1lf2XQ-0005DQ-No
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 15:37:12 +0000
Received: from mail-qv1-xf33.google.com (unknown [2607:f8b0:4864:20::f33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ca09348a-3a89-4d8d-8ee0-490229ad5236;
 Fri, 07 May 2021 15:37:11 +0000 (UTC)
Received: by mail-qv1-xf33.google.com with SMTP id l19so4973779qvu.8
 for <xen-devel@lists.xenproject.org>; Fri, 07 May 2021 08:37:11 -0700 (PDT)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net.
 [72.81.132.2])
 by smtp.gmail.com with ESMTPSA id p10sm4894437qkg.74.2021.05.07.08.37.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 May 2021 08:37:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ca09348a-3a89-4d8d-8ee0-490229ad5236
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=L8mlQNSwZyVckYDAuK46yfO+eT1ItLiJvvt/PtTX1Rg=;
        b=IxvhU0VoX0tyTZXrFG+FMr1aIeEbYajOX3D8HemhMMpC3kIgmzXDNEfaFc0eeueO7C
         8uWvVFX50/UFMbQFns+BU3V76KL4szsFbedqtZDpCSKtv4BSkXQ9aX+BXDDz+M6G360M
         6zkqrIX2Cq3+n6fCMA49Z+Jn9ODomFrfC15D2REsmquGM8h3IBY2XQBOkiEIi6wWZl8K
         Goqr1frf0X/3yORrU44XfvgbDg81XH65idLtrMAMIl5sS01OJdVpMDsEDxFwawE6o8hf
         Okouv1//YoGp8LLDo1vfpF3IYDFLjl+ZhRt1kHvMhv5UESQJDBY2xK1IyxmpyAHcWzHB
         7/3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=L8mlQNSwZyVckYDAuK46yfO+eT1ItLiJvvt/PtTX1Rg=;
        b=Yw6+KDO092DXMBRNWni32mTxlI/E2CRvhiqziI7kWc8eq65r5bYht3bp1ECsNHE4AO
         1EClozVZ7E0MllQXUNrMMLnvIgu5yyJjIg2y/SVwfZbgM0e67zZ9lx5sPSENYJSCfaDK
         ZS/UKJyrkQYiST5d8Xw4sYUqq1/WpIh0LQOoiXt/OalQjj1QlQV+aoD/98UYngro7XOQ
         W4FiHlDyU0qtF1DcSLQbmZ/QXySpA2bfktp09GDH396nGLBvV+eFpGOWQHLSY6Rw3cad
         Psuq6dbAnM7EWSs3UKGXQ9/NpMTuigatiJcCgQAMB1lkoqHYlrgfQtCzMhA1JLo9O42x
         O/yA==
X-Gm-Message-State: AOAM531yj/bEeHYvPVX78jVxcHp6mi6M642IO6cfg5WBTd4wte/EjP/v
	QLQbjv5ZEjCJ514iYdxgfrg=
X-Google-Smtp-Source: ABdhPJxTeTGuFqdflSAXGhDX0CP2eA7VHJwF41W67yovilbuxldH6oYzZzaevwdQ4VDGyW7XxU20Fw==
X-Received: by 2002:a05:6214:d4c:: with SMTP id 12mr795724qvr.2.1620401831732;
        Fri, 07 May 2021 08:37:11 -0700 (PDT)
Subject: Re: [PATCH 3/9] stubdom: newlib: Enable C99 formats for %z
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210504124842.220445-1-jandryuk@gmail.com>
 <20210504124842.220445-4-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <d6ce6e80-5d4b-12cf-518f-e495c894f29c@gmail.com>
Date: Fri, 7 May 2021 11:37:10 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.9.0
MIME-Version: 1.0
In-Reply-To: <20210504124842.220445-4-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/4/21 8:48 AM, Jason Andryuk wrote:
> vtpmmgr was changed to print size_t with the %z modifier, but newlib
> isn't compiled with %z support.  So you get output like:
> 
> root seal: zu; sector of 13: zu
> root: zu v=zu
> itree: 36; sector of 112: zu
> group: zu v=zu id=zu md=zu
> group seal: zu; 5 in parent: zu; sector of 13: zu
> vtpm: zu+zu; sector of 48: zu
> 
> Enable the C99 formats in newlib so vtpmmgr prints the numeric values.
> 
> Fixes: 9379af08ccc0 ("stubdom: vtpmmgr: Correctly format size_t with %z
> when printing.")
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

> I haven't tried, but the other option would be to cast size_t and avoid
> %z.  Since this seems to be the only mini-os use of %z, that may be
> better than building a larger newlib.
> ---
>  stubdom/Makefile | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index 90d9ffcd9f..c6de5f68ae 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -105,7 +105,7 @@ cross-newlib: $(NEWLIB_STAMPFILE)
>  $(NEWLIB_STAMPFILE): mk-headers-$(XEN_TARGET_ARCH) newlib-$(NEWLIB_VERSION)
>  	mkdir -p newlib-$(XEN_TARGET_ARCH)
>  	( cd newlib-$(XEN_TARGET_ARCH) && \
> -	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --disable-multilib && \
> +	  CC_FOR_TARGET="$(CC) $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) $(NEWLIB_CFLAGS)" AR_FOR_TARGET=$(AR) LD_FOR_TARGET=$(LD) RANLIB_FOR_TARGET=$(RANLIB) ../newlib-$(NEWLIB_VERSION)/configure --prefix=$(CROSS_PREFIX) --verbose --target=$(GNU_TARGET_ARCH)-xen-elf --enable-newlib-io-long-long --enable-newlib-io-c99-formats --disable-multilib && \
>  	  $(MAKE) DESTDIR= && \
>  	  $(MAKE) DESTDIR= install )
>  
> 



From xen-devel-bounces@lists.xenproject.org Fri May 07 15:45:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 15:45:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124090.234195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2ff-0006jZ-3Z; Fri, 07 May 2021 15:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124090.234195; Fri, 07 May 2021 15:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2ff-0006jS-0L; Fri, 07 May 2021 15:45:43 +0000
Received: by outflank-mailman (input) for mailman id 124090;
 Fri, 07 May 2021 15:45:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ji/y=KC=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lf2fd-0006jJ-Du
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 15:45:41 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.72]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c0f2831-f004-4470-a482-091a921a21ee;
 Fri, 07 May 2021 15:45:38 +0000 (UTC)
Received: from AM6PR0502CA0043.eurprd05.prod.outlook.com
 (2603:10a6:20b:56::20) by AS8PR08MB6646.eurprd08.prod.outlook.com
 (2603:10a6:20b:350::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.26; Fri, 7 May
 2021 15:45:36 +0000
Received: from VE1EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:56:cafe::29) by AM6PR0502CA0043.outlook.office365.com
 (2603:10a6:20b:56::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.24 via Frontend
 Transport; Fri, 7 May 2021 15:45:36 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT031.mail.protection.outlook.com (10.152.18.69) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Fri, 7 May 2021 15:45:36 +0000
Received: ("Tessian outbound 9a5bb9d11315:v91");
 Fri, 07 May 2021 15:45:35 +0000
Received: from d05685a4c72a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 030FDC34-E323-4730-8C4A-781C7AC727CB.1; 
 Fri, 07 May 2021 15:45:26 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d05685a4c72a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 07 May 2021 15:45:26 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR0801MB2112.eurprd08.prod.outlook.com (2603:10a6:800:8c::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.28; Fri, 7 May
 2021 15:45:24 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::4502:9762:8b3b:63d9%4]) with mapi id 15.20.4087.044; Fri, 7 May 2021
 15:45:23 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0379.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:18f::6) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Fri, 7 May 2021 15:45:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c0f2831-f004-4470-a482-091a921a21ee
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EcDk2yTj1V8Y63wIGJ70iSRdC+Auzgf93jwe7QFeIeo=;
 b=HsqivNOu0h8cGZUuIYO0xDimGiJG2ZewWrCEBST6TlXeFuicr+4pQRXEGqtyPqvdnG7GUhkbIpx+1iQIfI6q0rayCtJMk77yEeirHrdcFohzdrqf8JP3OCShmkdReYflO2PAV3QhVs4YZHwkGIQqkIDVK2bY2G2dxpEtPpbw2Jg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 098160f6ba61a2b8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hn91IHlrYHej6LBMXMZL+ldkGZdiRJJWoB8pMRrzmNPsy3oPrM/nVx3kNYLj9aDB1m/h4wem2yoq6b5bLf6XDIh9pzeT7FYw89pVHTkvi+TF0Z8b8sT/Py+Y4Cexlb5LM6DefIszBTQkJxQlQEkMMS72DoF6V9QgV7BA7cOKmW2G4mTsgrLZp3xLpXcQ0Nj2hjo8NQwmi6HH8ysa5/Ledu1dEW5jaN2b5Ov+pzjk0R/iq+G+eIdU7au7x0miSUpEBS6+BO8oQ3GUpqeEaN8jeQVetKPxs8O7skiUV1nLx9G36GjnRCEd5zneIPi0BPY2IT5n2FfMTgj84rt5SZfRIg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=EcDk2yTj1V8Y63wIGJ70iSRdC+Auzgf93jwe7QFeIeo=;
 b=l2rL8FdI2oMHIzEoAsXYRAFiNoYJK5d1lXKo1jzaRyNWcdlJ1kms41d8jigMFYYleMtpDeeIAjKphQqcEvJWphm/qcVXWtaf1HPKudp98qxTci+Tn2FBvWJddTwbrUIF3ddQIv+4F5dY3ngI90xUM0O+b53jdNYMtUrfGJBzQy33vtEC25n5NtIw2rpc1MpH8MufLOBiAutYU/5zy5chdbZFdiqZt+6QolDZIuMjGCK2hUXGOCOd8W6Q7f3lFS6sB/fSFfCNNtazhXZmHjgnnjU0k9id2f36Mdbk9eLi5X/ChzsWQErwnfejIZPNnikFaKEnhrHOOzL7w5vwKWJ8SA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH] tools/xenstored: Prevent a buffer overflow in
 dump_state_node_perms()
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210506161223.15984-1-julien@xen.org>
Date: Fri, 7 May 2021 16:45:16 +0100
Cc: xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <25AAA5C9-D7DB-4E04-99D3-57A50E2A2726@arm.com>
References: <20210506161223.15984-1-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO4P123CA0379.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:18f::6) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6b45ad36-52d3-4da1-cc69-08d9116f282b
X-MS-TrafficTypeDiagnostic: VI1PR0801MB2112:|AS8PR08MB6646:
X-Microsoft-Antispam-PRVS:
	<AS8PR08MB664681ADBE65FAFE64EA4517E4579@AS8PR08MB6646.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB2112
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ff9bc706-9e9c-4b08-a616-08d9116f208a
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 07 May 2021 15:45:36.1347
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 6b45ad36-52d3-4da1-cc69-08d9116f282b
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6646



> On 6 May 2021, at 17:12, Julien Grall <julien@xen.org> wrote:
>
> From: Julien Grall <jgrall@amazon.com>
>
> ASAN reported one issue when Live Updating Xenstored:
>
> =================================================================
> ==873==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffc194f53e0 at pc 0x555c6b323292 bp 0x7ffc194f5340 sp 0x7ffc194f5338
> WRITE of size 1 at 0x7ffc194f53e0 thread T0
>    #0 0x555c6b323291 in dump_state_node_perms xen/tools/xenstore/xenstored_core.c:2468
>    #1 0x555c6b32746e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1257
>    #2 0x555c6b32a702 in dump_state_special_nodes xen/tools/xenstore/xenstored_domain.c:1273
>    #3 0x555c6b32ddb3 in lu_dump_state xen/tools/xenstore/xenstored_control.c:521
>    #4 0x555c6b32e380 in do_lu_start xen/tools/xenstore/xenstored_control.c:660
>    #5 0x555c6b31b461 in call_delayed xen/tools/xenstore/xenstored_core.c:278
>    #6 0x555c6b32275e in main xen/tools/xenstore/xenstored_core.c:2357
>    #7 0x7f95eecf3d09 in __libc_start_main ../csu/libc-start.c:308
>    #8 0x555c6b3197e9 in _start (/usr/local/sbin/xenstored+0xc7e9)
>
> Address 0x7ffc194f53e0 is located in stack of thread T0 at offset 80 in frame
>    #0 0x555c6b32713e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1232
>
>  This frame has 2 object(s):
>    [32, 40) 'head' (line 1233)
>    [64, 80) 'sn' (line 1234) <== Memory access at offset 80 overflows this variable
>
> This is happening because the callers are passing a pointer to a variable
> allocated on the stack. However, the field perms is a dynamic array, so
> Xenstored will end up reading outside of the variable.
>
> Rework the code so the permissions are written one by one to the fd.
>
> Fixes: ed6eebf17d2c ("tools/xenstore: dump the xenstore state for live update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
> tools/xenstore/xenstored_core.c   | 26 ++++++++++++++------------
> tools/xenstore/xenstored_core.h   |  3 +--
> tools/xenstore/xenstored_domain.c |  2 +-
> 3 files changed, 16 insertions(+), 15 deletions(-)
>
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index d54a6042a9f7..f68da12b5b23 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2447,34 +2447,36 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
> 	return NULL;
> }
>
> -const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
> -				  const struct xs_permissions *perms,
> +const char *dump_state_node_perms(FILE *fp, const struct xs_permissions *perms,
> 				  unsigned int n_perms)
> {
> 	unsigned int p;
>
> 	for (p = 0; p < n_perms; p++) {
> +		struct xs_state_node_perm sp;
> +
> 		switch ((int)perms[p].perms & ~XS_PERM_IGNORE) {
> 		case XS_PERM_READ:
> -			sn->perms[p].access = XS_STATE_NODE_PERM_READ;
> +			sp.access = XS_STATE_NODE_PERM_READ;
> 			break;
> 		case XS_PERM_WRITE:
> -			sn->perms[p].access = XS_STATE_NODE_PERM_WRITE;
> +			sp.access = XS_STATE_NODE_PERM_WRITE;
> 			break;
> 		case XS_PERM_READ | XS_PERM_WRITE:
> -			sn->perms[p].access = XS_STATE_NODE_PERM_BOTH;
> +			sp.access = XS_STATE_NODE_PERM_BOTH;
> 			break;
> 		default:
> -			sn->perms[p].access = XS_STATE_NODE_PERM_NONE;
> +			sp.access = XS_STATE_NODE_PERM_NONE;
> 			break;
> 		}
> -		sn->perms[p].flags = (perms[p].perms & XS_PERM_IGNORE)
> +		sp.flags = (perms[p].perms & XS_PERM_IGNORE)
> 				     ? XS_STATE_NODE_PERM_IGNORE : 0;
> -		sn->perms[p].domid = perms[p].id;
> -	}
> +		sp.domid = perms[p].id;
>
> -	if (fwrite(sn->perms, sizeof(*sn->perms), n_perms, fp) != n_perms)
> -		return "Dump node permissions error";
> +		if (fwrite(&sp, sizeof(sp), 1, fp) != 1)
> +			return "Dump node permission error";
> +
> +	}
>
> 	return NULL;
> }
> @@ -2519,7 +2521,7 @@ static const char *dump_state_node_tree(FILE *fp, char *path)
> 	if (fwrite(&sn, sizeof(sn), 1, fp) != 1)
> 		return "Dump node state error";
>
> -	ret = dump_state_node_perms(fp, &sn, hdr->perms, hdr->num_perms);
> +	ret = dump_state_node_perms(fp, hdr->perms, hdr->num_perms);
> 	if (ret)
> 		return ret;
>
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 1cdbc3dcb5f7..b50ea3f57d5a 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -271,8 +271,7 @@ const char *dump_state_buffered_data(FILE *fp, const struct connection *c,
> 				     const struct connection *conn,
> 				     struct xs_state_connection *sc);
> const char *dump_state_nodes(FILE *fp, const void *ctx);
> -const char *dump_state_node_perms(FILE *fp, struct xs_state_node *sn,
> -				  const struct xs_permissions *perms,
> +const char *dump_state_node_perms(FILE *fp, const struct xs_permissions *perms,
> 				  unsigned int n_perms);
>
> void read_state_global(const void *ctx, const void *state);
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 3d4d0649a243..580ed454a3f5 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -1254,7 +1254,7 @@ static const char *dump_state_special_node(FILE *fp, const char *name,
> 	if (fwrite(&sn, sizeof(sn), 1, fp) != 1)
> 		return "Dump special node error";
>
> -	ret = dump_state_node_perms(fp, &sn, perms->p, perms->num);
> +	ret = dump_state_node_perms(fp, perms->p, perms->num);
> 	if (ret)
> 		return ret;
>
> --
> 2.17.1
>
>

Tested on FVP and another arm board, basic testing (run Xen, dom0, run one/two guests) - everything fine.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

Cheers,
Luca



From xen-devel-bounces@lists.xenproject.org Fri May 07 15:48:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 15:48:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124093.234207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2i1-0007Lf-IA; Fri, 07 May 2021 15:48:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124093.234207; Fri, 07 May 2021 15:48:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf2i1-0007LY-FC; Fri, 07 May 2021 15:48:09 +0000
Received: by outflank-mailman (input) for mailman id 124093;
 Fri, 07 May 2021 15:48:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Qbhn=KC=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1lf2i0-0007LQ-4M
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 15:48:08 +0000
Received: from mail-qv1-xf29.google.com (unknown [2607:f8b0:4864:20::f29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5ed5bbf1-e3e3-447f-90f6-068cbd849a01;
 Fri, 07 May 2021 15:48:07 +0000 (UTC)
Received: by mail-qv1-xf29.google.com with SMTP id q6so5024383qvb.2
 for <xen-devel@lists.xenproject.org>; Fri, 07 May 2021 08:48:07 -0700 (PDT)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net.
 [72.81.132.2])
 by smtp.gmail.com with ESMTPSA id g18sm2911823qke.37.2021.05.07.08.48.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 May 2021 08:48:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5ed5bbf1-e3e3-447f-90f6-068cbd849a01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=YRcApzHOn+sfyfn81ofA9ANPZfjft3A4579yg/zuVik=;
        b=aQ3757cPunvy5tT+UNTdFdat6FjpIOE0/TyhqM58cJO/81MTqnrbFxPo3EkfAKOZgP
         7FHpgWHrXh5FwoniLKFBpKbKAvrBaOFVLEKvGRkXsAzW0m0tctdDeBs7HteGomxx5BPK
         vAIgUbxEHa0dixctDp5E07pFTqRjgoSft/ylEcSnb8HKAr5Tn0KB/XOkyPysiAfFfpiA
         iEKY0fHcd37qd13GKwAoTfeUlO3jW7N4RzQ9wHCudBNt4R3ps+v5RrTLYOX4mnzsPilu
         dsX+KmOnyunjNYFSKJaFWMiKhZHGkPJOvx1BAJeAgfrzjLuo2xv1Se9wy3Bq45qpT2r0
         w4RA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=YRcApzHOn+sfyfn81ofA9ANPZfjft3A4579yg/zuVik=;
        b=JWCX/Gh1N3oAvbOTeqgtByEDkhF6bVTE01UGG6xnLfcRjBRhv+AZl66MtCFU5TrSf1
         AjJDKKrlhxrt64H5yjPOJ47RyX9hDD0lj5AJc082A6AMWQQvbf1obIpdq/u9sLmgPsk+
         y3I3UY8SmEFK8/SLVEZHW+NruDC+p4GNhBJBueIYof98UWU6CQW5nf/f9WcLQmdv5iMZ
         V3oZwSYAXG8ghdQHAhyedn+AeSHfhrILW8wfXkbphRxfiDCmGcaINL5UcVjU1iInxLXG
         +NP01UczfZGHHc2nZrhOONJwzsLJZmvIr2wJXMvacQE2CcNUXS+4ILFwoxLvQ7RZUjxp
         CZoA==
X-Gm-Message-State: AOAM533v/EN3asL2bnB7wmUkObY62VB20OKYqlS1bt+mL7Qr1sUwYmVS
	FDHE8gNydzVLquHuYLgRNIM=
X-Google-Smtp-Source: ABdhPJzbdjXGy+jyNYqHRFwZvM8WOsi4JRjGGZiXTnzjkTuPhmCdQAxmAySqRIWucf+zddc+vYvdEg==
X-Received: by 2002:a0c:e847:: with SMTP id l7mr10547833qvo.10.1620402487289;
        Fri, 07 May 2021 08:48:07 -0700 (PDT)
Subject: Re: [PATCH v2 05/13] vtpmmgr: Move vtpmmgr_shutdown
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-6-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <3df41436-0932-f79d-99e7-28a5c6a5c1a4@gmail.com>
Date: Fri, 7 May 2021 11:48:05 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.9.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-6-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> Reposition vtpmmgr_shutdown so it can call flush_tpm2 without a forward
> declaration.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/init.c | 28 ++++++++++++++--------------
>  1 file changed, 14 insertions(+), 14 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 130e4f4bf6..decf8e8b4d 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -503,20 +503,6 @@ egress:
>     return status;
>  }
>  
> -void vtpmmgr_shutdown(void)
> -{
> -   /* Cleanup TPM resources */
> -   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
> -
> -   /* Close tpmback */
> -   shutdown_tpmback();
> -
> -   /* Close tpmfront/tpm_tis */
> -   close(vtpm_globals.tpm_fd);
> -
> -   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
> -}
> -
>  /* TPM 2.0 */
>  
>  static void tpm2_AuthArea_ctor(const char *authValue, UINT32 authLen,
> @@ -797,3 +783,17 @@ abort_egress:
>  egress:
>      return status;
>  }
> +
> +void vtpmmgr_shutdown(void)
> +{
> +   /* Cleanup TPM resources */
> +   TPM_TerminateHandle(vtpm_globals.oiap.AuthHandle);
> +
> +   /* Close tpmback */
> +   shutdown_tpmback();
> +
> +   /* Close tpmfront/tpm_tis */
> +   close(vtpm_globals.tpm_fd);
> +
> +   vtpmloginfo(VTPM_LOG_VTPM, "VTPM Manager stopped.\n");
> +}
> 



From xen-devel-bounces@lists.xenproject.org Fri May 07 17:47:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 17:47:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124113.234232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf4ZW-0001tD-O3; Fri, 07 May 2021 17:47:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124113.234232; Fri, 07 May 2021 17:47:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf4ZW-0001t6-L0; Fri, 07 May 2021 17:47:30 +0000
Received: by outflank-mailman (input) for mailman id 124113;
 Fri, 07 May 2021 17:47:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lf4ZV-0001sw-FK; Fri, 07 May 2021 17:47:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lf4ZV-0004fy-4g; Fri, 07 May 2021 17:47:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lf4ZU-0006w4-Q8; Fri, 07 May 2021 17:47:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lf4ZU-0001iw-Pc; Fri, 07 May 2021 17:47:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=u+gbxlvicWKyHuVl7XYuHQ/Xn56uB/meVqgOf6ssbW8=; b=RE904+mTnlpTyJlsmZgQCNkzk4
	FmZQ6dlq6C4qUcFgdJUlMpfye0MBGbcSLnwhUhPTdF3Ahbwe7iUr2rjv1pJTNUC3oZZ/7P9KbzUwn
	fr8TZBAfmpNEmHQ0IwTjVkn7M1dVchUBfhsbpe1nFuGwLGw5ubTTMx1Tm/6D9h/MEtjw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161826-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161826: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d90f154867ec0ec22fd719164b88716e8fd48672
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 17:47:28 +0000

flight 161826 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161826/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d90f154867ec0ec22fd719164b88716e8fd48672
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  260 days
Failing since        152659  2020-08-21 14:07:39 Z  259 days  472 attempts
Testing same since   161826  2021-05-07 02:33:20 Z    0 days    1 attempts

------------------------------------------------------------
487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148130 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 07 20:26:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 20:26:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124128.234276 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf73b-00087x-Ks; Fri, 07 May 2021 20:26:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124128.234276; Fri, 07 May 2021 20:26:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf73b-00087q-Hn; Fri, 07 May 2021 20:26:43 +0000
Received: by outflank-mailman (input) for mailman id 124128;
 Fri, 07 May 2021 20:26:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V6aH=KC=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lf73Z-00087k-N2
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 20:26:41 +0000
Received: from mail-pl1-x62f.google.com (unknown [2607:f8b0:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fdc60c9-78f8-47b2-9a41-cbb64a31350f;
 Fri, 07 May 2021 20:26:40 +0000 (UTC)
Received: by mail-pl1-x62f.google.com with SMTP id b21so5897405plz.0
 for <xen-devel@lists.xenproject.org>; Fri, 07 May 2021 13:26:40 -0700 (PDT)
Received: from ?IPv6:2601:1c2:4f80:d230::1? ([2601:1c2:4f80:d230::1])
 by smtp.gmail.com with ESMTPSA id o5sm5415079pgq.58.2021.05.07.13.26.37
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 07 May 2021 13:26:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fdc60c9-78f8-47b2-9a41-cbb64a31350f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=t3qc2EvHl2iWZFO6wdqraQk+x5uhiKnWa7rVn4z+toU=;
        b=DT5Y/jI/dmaJ5ZCtoWGTGV6zQvTZv4sryfDVpZaXxfVO6zGwgSQMnd6f6jJ/HYepTX
         IByef+IPw7vVDsRgJt4WXY1CpGoNsDU5AtM4G8GfJEBzgf/UasOvlSo7Ct7RPt3+uq2K
         oFzXGeczBlO610OdHZmAX0dApc+lb9Al1LEtNMhbzxmyftJqZcr3gpia2uvxxuG8GUu0
         4kmCe+7YpqBIx3AdCIXR+9ke4B/yPl85OA5KAgu9JshYt2u+h16CjR+JG4CC/h1aMnNa
         bPeWo3qM4+hYuFWiX+0fiocLFqDWYKwFQ713/HKllBXT0CrcXLQ+z9Po79VIZyWePOxo
         5tuA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=t3qc2EvHl2iWZFO6wdqraQk+x5uhiKnWa7rVn4z+toU=;
        b=brMhHUSzL4f2KgDb8ZMaRekXpuJ/wl20W+cJUGbqQWD5eiJ1C6Z/9Chs5J2Tt+1qM4
         6RZVy4r9OWQ2zqq4rydC4HxBJc9E17cKDX9+GhozMFZWXHx++1qy58TbDY4HSKdpyCQv
         ikWplkaqAD87vv3dN2mNIbuiQiet10jVAIrqaoqqnQ8rD4DmNTk/C+7rc4IXKKMifqqx
         s+MqAwiVK2EJkP+fk4SDdzZjsmzMiKIyNKmou8i+1A1gRjU0LZ+mk4V30rJAzcAZG6OJ
         kbLT/g9Liyz6RpzzEvePC12/hS213zs4HnH0A3XN94lMrtYqukYQgplyM9rYUg09PdXU
         JF0A==
X-Gm-Message-State: AOAM532gP8GoS0RkPK4ZTSs2QyR3EweW/Pd8/I5ABOrX0zU7KoJ3vFHl
	INc1KPmia3O3fpM6yDI8Cbc2n1d5eSV+HD0V
X-Google-Smtp-Source: ABdhPJzbVW31Ad/zoYaoV5MLgGRT4Cc0nAr0mgRyM/bxkIkRjn0/w85KjIBYtvVUsThlY5b7fXncLA==
X-Received: by 2002:a17:902:b68c:b029:e6:bb9f:7577 with SMTP id c12-20020a170902b68cb02900e6bb9f7577mr11794708pls.0.1620419199049;
        Fri, 07 May 2021 13:26:39 -0700 (PDT)
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
To: Jan Beulich <jbeulich@suse.com>
Cc: Daniel Kiper <daniel.kiper@oracle.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
Date: Fri, 7 May 2021 13:26:37 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Jan,

I mulled over your feedback and I think I can now see your reservations
with this series.

I'm wondering if the long-term goal of using the xen mb1/mb2 binary as the
basis for creating an EFI-loadable mb1/mb2 payload is actually the wrong
approach.

After all, I do not see a feasible way to maintain the comprehensive
sectioning, the proper reloc table, the proper debug directory, etc...
that are found in the current xen.efi using the approach in this series,
which would mean maintaining a third binary forever.

What is your intuition WRT the idea that instead of trying to add a PE/COFF hdr
in front of Xen's mb2 bin, we instead go the route of introducing valid mb2
entry points into xen.efi?

At the end of the day, our goal is just to have a binary that meets these
requirements:

* Is verifiable with shim (PE/COFF)
* May boot on BIOS platforms via grub2
* May boot on EFI platforms via grub2 or EFI loader

Thanks,

-- 
Bobby Eshleman
SE at Vates SAS


From xen-devel-bounces@lists.xenproject.org Fri May 07 21:00:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 21:00:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124133.234291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf7aX-0003aC-A2; Fri, 07 May 2021 21:00:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124133.234291; Fri, 07 May 2021 21:00:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf7aX-0003a5-6l; Fri, 07 May 2021 21:00:45 +0000
Received: by outflank-mailman (input) for mailman id 124133;
 Fri, 07 May 2021 21:00:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lf7aW-0003Zv-CF; Fri, 07 May 2021 21:00:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lf7aW-000818-79; Fri, 07 May 2021 21:00:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lf7aV-0007eK-Ps; Fri, 07 May 2021 21:00:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lf7aV-00052i-PO; Fri, 07 May 2021 21:00:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DR/XnZmCEj2WDOVukj4Aej02ja6VoyBusaQ/IG4Iaxk=; b=j7P6RE/1OXcETORPfJtsh2JWQb
	TOTfrvpMWmEzVErc+Scmc7O0LZ42vG6CSwnnjtz58BsAc8g344Hl4kPKRGeqUjFDxmnHyjuoJP3Jb
	PrAYI8FIpjHN5ImnScgEKiCFFeMzTDpCA6co06SaG/gLLnNDU2NEJZfBnWsWTsp2oiQk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161829-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161829: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e48661230cc35b3d0f4367eddfc19f86463ab917
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 21:00:43 +0000

flight 161829 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161829/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                e48661230cc35b3d0f4367eddfc19f86463ab917
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  280 days
Failing since        152366  2020-08-01 20:49:34 Z  279 days  465 attempts
Testing same since   161829  2021-05-07 05:12:46 Z    0 days    1 attempts

------------------------------------------------------------
5992 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1625261 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 07 22:53:48 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161841-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161841: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
X-Osstest-Versions-That:
    xen=7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 22:53:31 +0000

flight 161841 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161841/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7
baseline version:
 xen                  7a2b787880bddbb3bd68b18efe1d6fe339df6ff1

Last test of basis   161831  2021-05-07 09:01:42 Z    0 days
Testing same since   161841  2021-05-07 19:04:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7a2b787880..a7da84c457  a7da84c457b05479ab423a2e589c5f46c7da0ed7 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 07 23:21:33 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161832-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 161832: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b5dbcd05792a4bad2c9bb3c4658c854e72c444b7
X-Osstest-Versions-That:
    linux=370636ffbb8695e6af549011ad91a048c8cab267
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 07 May 2021 23:21:23 +0000

flight 161832 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161832/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161600
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161600
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161600
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161600
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161600
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161600
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161600
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161600
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161600
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161600
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161600
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b5dbcd05792a4bad2c9bb3c4658c854e72c444b7
baseline version:
 linux                370636ffbb8695e6af549011ad91a048c8cab267

Last test of basis   161600  2021-05-02 09:11:10 Z    5 days
Testing same since   161832  2021-05-07 09:10:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alan Stern <stern@rowland.harvard.edu>
  Alex Williamson <alex.williamson@redhat.com>
  Alexei Starovoitov <ast@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Chris Chiu <chris.chiu@canonical.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Borkmann <daniel@iogearbox.net>
  David S. Miller <davem@davemloft.net>
  David Switzer <david.switzer@intel.com>
  Florian Fainelli <f.fainelli@gmail.com>
  George Kennedy <george.kennedy@oracle.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Hulk Robot <hulkrobot@huawei.com>
  Jari Ruusu <jariruusu@protonmail.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jason Self <jason@bluehome.net>
  Jiri Kosina <jkosina@suse.cz>
  Jon Hunter <jonathanh@nvidia.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kalle Valo <kvalo@codeaurora.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luca Coelho <luciano.coelho@intel.com>
  Mark Pearson <markpearson@lenovo.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Namhyung Kim <namhyung@kernel.org>
  Nick Lowe <nick.lowe@gmail.com>
  Ondrej Mosnacek <omosnace@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phillip Potter <phil@philpotter.co.uk>
  Piotr Krysiuk <piotras@gmail.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Romain Naour <romain.naour@gmail.com>
  Sasha Levin <sashal@kernel.org>
  Sedat Dilek <sedat.dilek@gmail.com> # LLVM/Clang v12.0.0-rc3
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sudip Mukherjee <sudipm.mukherjee@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Richter <tmricht@linux.ibm.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Zhen Lei <thunder.leizhen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   370636ffbb86..b5dbcd05792a  b5dbcd05792a4bad2c9bb3c4658c854e72c444b7 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri May 07 23:32:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 07 May 2021 23:32:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124171.234360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf9xe-0002TH-9I; Fri, 07 May 2021 23:32:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124171.234360; Fri, 07 May 2021 23:32:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lf9xe-0002TA-5r; Fri, 07 May 2021 23:32:46 +0000
Received: by outflank-mailman (input) for mailman id 124171;
 Fri, 07 May 2021 23:32:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lf9xd-0002T4-IM
 for xen-devel@lists.xenproject.org; Fri, 07 May 2021 23:32:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lf9xY-00029m-JG; Fri, 07 May 2021 23:32:40 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lf9xY-0000jH-8c; Fri, 07 May 2021 23:32:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:Cc:From:References:To:Subject;
	bh=RRKPR/aWrJ7LwPgDINzUrTMu5xyvpg8DPEZFvEJ00bY=; b=tMgCoUBHv9LaUWdllsRF+TRoZy
	oid+05c/Ar0UzQlYqI9V/H490UIzJySws7JJ8FKk+yqjLQqoMVKzkEqwQkybFEelgdCb6kaEtLCtn
	Ev5VYOjYO6dwhkYB8voarwgJbpHK7FkE4Kyl9eFYxow6bf69DLDjCH38dUyVEjyufvqc=;
Subject: Regression when booting 5.15 as dom0 on arm64 (WAS: Re: [linux-linus
 test] 161829: regressions - FAIL)
To: f.fainelli@gmail.com, Stefano Stabellini <sstabellini@kernel.org>
References: <osstest-161829-mainreport@xen.org>
From: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org,
 osstest service owner <osstest-admin@xenproject.org>, hch@lst.de,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 iommu@lists.linux-foundation.org, linux-kernel@vger.kernel.org
Message-ID: <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org>
Date: Sat, 8 May 2021 00:32:37 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <osstest-161829-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi all,

On 07/05/2021 22:00, osstest service owner wrote:
> flight 161829 linux-linus real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/161829/
> 
> Regressions :-(
> 
> Tests which did not succeed and are blocking,
> including tests which could not be run:

[...]

>   test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
>   test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
>   test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
>   test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
>   test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
>   test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
>   test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
>   test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
>   test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
>   test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
>   test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
>   test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
>   test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
>   test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Osstest reported a dom0 boot failure on all the arm64 platforms we have. 
The stack trace is similar everywhere:

[   18.101077] Unable to handle kernel NULL pointer dereference at virtual address 0000000000000008
[   18.101441] Mem abort info:
[   18.101625]   ESR = 0x96000004
[   18.101839]   EC = 0x25: DABT (current EL), IL = 32 bits
[   18.102111]   SET = 0, FnV = 0
[   18.102327]   EA = 0, S1PTW = 0
[   18.102544] Data abort info:
[   18.102747]   ISV = 0, ISS = 0x00000004
[   18.102968]   CM = 0, WnR = 0
[   18.103183] [0000000000000008] user address but active_mm is swapper
[   18.103476] Internal error: Oops: 96000004 [#1] SMP
[   18.103689] Modules linked in:
[   18.103881] CPU: 0 PID: 1 Comm: swapper/0 Not tainted 5.12.0-rc3+ #126
[   18.104172] Hardware name: Foundation-v8A (DT)
[   18.104376] pstate: 60000005 (nZCv daif -PAN -UAO -TCO BTYPE=--)
[   18.104653] pc : xen_swiotlb_dma_supported+0x30/0xc8
[   18.104893] lr : dma_supported+0x38/0x68
[   18.105118] sp : ffff80001295bac0
[   18.105289] x29: ffff80001295bac0 x28: ffff8000114f44c0
[   18.105600] x27: 0000000000000007 x26: ffff8000113a1000
[   18.105906] x25: 0000000000000000 x24: ffff800011d2e910
[   18.106213] x23: ffff800011d4d000 x22: ffff000012fad810
[   18.106525] x21: ffffffffffffffff x20: ffffffffffffffff
[   18.106837] x19: ffff000012fad810 x18: 00000000ffffffeb
[   18.107146] x17: 0000000000000000 x16: 00000000493f1445
[   18.107450] x15: ffff80001132d000 x14: 000000001c131000
[   18.107759] x13: 00000000498d0616 x12: ffff8000129f7000
[   18.108068] x11: ffff000012c08710 x10: ffff800011a91000
[   18.108380] x9 : 0000000000003000 x8 : ffff00001ffff000
[   18.108722] x7 : ffff800011a90a88 x6 : ffff800010a7275c
[   18.109031] x5 : 0000000000000000 x4 : 0000000000000001
[   18.109331] x3 : 2cd8f9dc91b3df00 x2 : ffff8000109c7578
[   18.109640] x1 : 0000000000000000 x0 : 0000000000000000
[   18.109940] Call trace:
[   18.110079]  xen_swiotlb_dma_supported+0x30/0xc8
[   18.110319]  dma_supported+0x38/0x68
[   18.110543]  dma_set_mask+0x30/0x58
[   18.110765]  virtio_mmio_probe+0x1c8/0x238
[   18.110979]  platform_probe+0x6c/0x108
[   18.111188]  really_probe+0xfc/0x3c8
[   18.111413]  driver_probe_device+0x68/0xe8
[   18.111647]  device_driver_attach+0x74/0x98
[   18.111883]  __driver_attach+0x98/0xe0
[   18.112111]  bus_for_each_dev+0x84/0xd8
[   18.112334]  driver_attach+0x30/0x40
[   18.112557]  bus_add_driver+0x168/0x228
[   18.112784]  driver_register+0x64/0x110
[   18.113016]  __platform_driver_register+0x34/0x40
[   18.113257]  virtio_mmio_init+0x20/0x28
[   18.113480]  do_one_initcall+0x90/0x470
[   18.113694]  kernel_init_freeable+0x3ec/0x440
[   18.113950]  kernel_init+0x1c/0x138
[   18.114158]  ret_from_fork+0x10/0x18
[   18.114409] Code: f000f641 f000a200 f944e821 f9410400 (f9400434)
[   18.114718] ---[ end trace d39406719f25d228 ]---
[   18.115044] Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b
[   18.115339] SMP: stopping secondary CPUs
[   18.115584] Kernel Offset: disabled
[   18.115743] CPU features: 0x00240000,61802000
[   18.115954] Memory Limit: none
[   18.116173] ---[ end Kernel panic - not syncing: Attempted to kill init! exitcode=0x0000000b ]---

I have bisected manually and pinpointed the following commit:

commit 2726bf3ff2520dba61fafc90a055640f7ad54e05 (refs/bisect/bad)
Author: Florian Fainelli <f.fainelli@gmail.com>
Date:   Mon Mar 22 18:53:49 2021 -0700

     swiotlb: Make SWIOTLB_NO_FORCE perform no allocation

     When SWIOTLB_NO_FORCE is used, there should really be no allocations of
     default_nslabs to occur since we are not going to use those slabs. If a
     platform was somehow setting swiotlb_no_force and a later call to
     swiotlb_init() was to be made we would still be proceeding with
     allocating the default SWIOTLB size (64MB), whereas if swiotlb=noforce
     was set on the kernel command line we would have only allocated 2KB.

     This would be inconsistent and the point of initializing default_nslabs
     to 1, was intended to allocate the minimum amount of memory possible, so
     simply remove that minimal allocation period.

     Signed-off-by: Florian Fainelli <f.fainelli@gmail.com>
     Reviewed-by: Christoph Hellwig <hch@lst.de>
     Signed-off-by: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>

The dereferenced pointer seems to suggest that the swiotlb buffer hasn't 
been allocated. From what I can tell, this may be because even though 
swiotlb_force is set to SWIOTLB_NO_FORCE, we still enable the swiotlb 
when running on top of Xen.
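
To illustrate the failure mode, here is a minimal, self-contained C sketch 
(hypothetical struct and function names, only loosely modelled on the 
5.12-era swiotlb code, not the actual kernel sources): if SWIOTLB_NO_FORCE 
skips the allocation, the global descriptor stays NULL, and an unguarded 
capability check dereferences it -- consistent with the fault at offset 
0x8 in the trace above. A guarded variant would have to bail out instead:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified model of the swiotlb state: a single global
 * descriptor that swiotlb_init() would normally allocate. */
struct io_tlb_mem {
    uint64_t start;
    uint64_t end;   /* at offset 8 -- matches the faulting address 0x8 */
};

/* Stays NULL when SWIOTLB_NO_FORCE suppressed the allocation. */
static struct io_tlb_mem *io_tlb_default_mem;

/* Stand-in for an allocated buffer, used below to show the normal path. */
static struct io_tlb_mem fake_mem = { .start = 0x1000, .end = 0x5000 };

/* Unguarded check, mirroring the shape of xen_swiotlb_dma_supported():
 * reads ->end without confirming the buffer was ever set up, so it
 * faults at offset 8 of a NULL pointer when no allocation happened. */
static bool dma_supported_unguarded(uint64_t mask)
{
    return io_tlb_default_mem->end - 1 <= mask;
}

/* Guarded variant: report "not supported" when the swiotlb was never
 * allocated, instead of dereferencing NULL. */
static bool dma_supported_guarded(uint64_t mask)
{
    if (io_tlb_default_mem == NULL)
        return false;
    return io_tlb_default_mem->end - 1 <= mask;
}
```

Whether the right fix is such a NULL check, or making the Xen path force 
the allocation despite SWIOTLB_NO_FORCE, is exactly the open question.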

I am not entirely sure what would be the correct fix. Any opinions?

Cheers,

> 
> Regressions which are regarded as allowable (not blocking):
>   test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332
> 
> Tests which did not succeed, but are not blocking:
>   test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
>   test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
>   test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
>   test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
>   test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
>   test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
>   test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
>   test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
>   test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
>   test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
>   test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
>   test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
>   test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
>   test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
>   test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
>   test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
>   test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
>   test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
> 
> version targeted for testing:
>   linux                e48661230cc35b3d0f4367eddfc19f86463ab917
> baseline version:
>   linux                deacdb3e3979979016fcd0ffd518c320a62ad166
> 
> Last test of basis   152332  2020-07-31 19:41:23 Z  280 days
> Failing since        152366  2020-08-01 20:49:34 Z  279 days  465 attempts
> Testing same since   161829  2021-05-07 05:12:46 Z    0 days    1 attempts
> 
> ------------------------------------------------------------
> 5992 people touched revisions under test,
> not listing them all
> 
> jobs:
>   build-amd64-xsm                                              pass
>   build-arm64-xsm                                              pass
>   build-i386-xsm                                               pass
>   build-amd64                                                  pass
>   build-arm64                                                  pass
>   build-armhf                                                  pass
>   build-i386                                                   pass
>   build-amd64-libvirt                                          pass
>   build-arm64-libvirt                                          pass
>   build-armhf-libvirt                                          pass
>   build-i386-libvirt                                           pass
>   build-amd64-pvops                                            pass
>   build-arm64-pvops                                            pass
>   build-armhf-pvops                                            pass
>   build-i386-pvops                                             pass
>   test-amd64-amd64-xl                                          pass
>   test-amd64-coresched-amd64-xl                                pass
>   test-arm64-arm64-xl                                          fail
>   test-armhf-armhf-xl                                          pass
>   test-amd64-i386-xl                                           fail
>   test-amd64-coresched-i386-xl                                 fail
>   test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass
>   test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail
>   test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass
>   test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail
>   test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass
>   test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail
>   test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass
>   test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail
>   test-amd64-amd64-libvirt-xsm                                 pass
>   test-arm64-arm64-libvirt-xsm                                 fail
>   test-amd64-i386-libvirt-xsm                                  fail
>   test-amd64-amd64-xl-xsm                                      pass
>   test-arm64-arm64-xl-xsm                                      fail
>   test-amd64-i386-xl-xsm                                       fail
>   test-amd64-amd64-qemuu-nested-amd                            fail
>   test-amd64-amd64-xl-pvhv2-amd                                pass
>   test-amd64-i386-qemut-rhel6hvm-amd                           fail
>   test-amd64-i386-qemuu-rhel6hvm-amd                           fail
>   test-amd64-amd64-dom0pvh-xl-amd                              pass
>   test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass
>   test-amd64-i386-xl-qemut-debianhvm-amd64                     fail
>   test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass
>   test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail
>   test-amd64-i386-freebsd10-amd64                              fail
>   test-amd64-amd64-qemuu-freebsd11-amd64                       pass
>   test-amd64-amd64-qemuu-freebsd12-amd64                       pass
>   test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass
>   test-amd64-i386-xl-qemuu-ovmf-amd64                          fail
>   test-amd64-amd64-xl-qemut-win7-amd64                         fail
>   test-amd64-i386-xl-qemut-win7-amd64                          fail
>   test-amd64-amd64-xl-qemuu-win7-amd64                         fail
>   test-amd64-i386-xl-qemuu-win7-amd64                          fail
>   test-amd64-amd64-xl-qemut-ws16-amd64                         fail
>   test-amd64-i386-xl-qemut-ws16-amd64                          fail
>   test-amd64-amd64-xl-qemuu-ws16-amd64                         fail
>   test-amd64-i386-xl-qemuu-ws16-amd64                          fail
>   test-armhf-armhf-xl-arndale                                  pass
>   test-amd64-amd64-xl-credit1                                  pass
>   test-arm64-arm64-xl-credit1                                  fail
>   test-armhf-armhf-xl-credit1                                  pass
>   test-amd64-amd64-xl-credit2                                  pass
>   test-arm64-arm64-xl-credit2                                  fail
>   test-armhf-armhf-xl-credit2                                  pass
>   test-armhf-armhf-xl-cubietruck                               pass
>   test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass
>   test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail
>   test-amd64-amd64-examine                                     pass
>   test-arm64-arm64-examine                                     fail
>   test-armhf-armhf-examine                                     pass
>   test-amd64-i386-examine                                      fail
>   test-amd64-i386-freebsd10-i386                               fail
>   test-amd64-amd64-qemuu-nested-intel                          pass
>   test-amd64-amd64-xl-pvhv2-intel                              pass
>   test-amd64-i386-qemut-rhel6hvm-intel                         fail
>   test-amd64-i386-qemuu-rhel6hvm-intel                         fail
>   test-amd64-amd64-dom0pvh-xl-intel                            pass
>   test-amd64-amd64-libvirt                                     pass
>   test-armhf-armhf-libvirt                                     pass
>   test-amd64-i386-libvirt                                      fail
>   test-amd64-amd64-xl-multivcpu                                pass
>   test-armhf-armhf-xl-multivcpu                                pass
>   test-amd64-amd64-pair                                        pass
>   test-amd64-i386-pair                                         fail
>   test-amd64-amd64-libvirt-pair                                pass
>   test-amd64-i386-libvirt-pair                                 fail
>   test-amd64-amd64-amd64-pvgrub                                fail
>   test-amd64-amd64-i386-pvgrub                                 fail
>   test-amd64-amd64-xl-pvshim                                   pass
>   test-amd64-i386-xl-pvshim                                    fail
>   test-amd64-amd64-pygrub                                      pass
>   test-amd64-amd64-xl-qcow2                                    fail
>   test-armhf-armhf-libvirt-raw                                 pass
>   test-amd64-i386-xl-raw                                       fail
>   test-amd64-amd64-xl-rtds                                     fail
>   test-armhf-armhf-xl-rtds                                     pass
>   test-arm64-arm64-xl-seattle                                  fail
>   test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass
>   test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail
>   test-amd64-amd64-xl-shadow                                   pass
>   test-amd64-i386-xl-shadow                                    fail
>   test-arm64-arm64-xl-thunderx                                 fail
>   test-amd64-amd64-libvirt-vhd                                 fail
>   test-armhf-armhf-xl-vhd                                      fail
> 
> 
> ------------------------------------------------------------
> sg-report-flight on osstest.test-lab.xenproject.org
> logs: /home/logs/logs
> images: /home/logs/images
> 
> Logs, config files, etc. are available at
>      http://logs.test-lab.xenproject.org/osstest/logs
> 
> Explanation of these reports, and of osstest in general, is at
>      http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
>      http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master
> 
> Test harness code can be found at
>      http://xenbits.xen.org/gitweb?p=osstest.git;a=summary
> 
> 
> Not pushing.
> 
> (No revision log; it would be 1625261 lines long.)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 08 00:42:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 00:42:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124182.234378 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfB2c-0001He-H9; Sat, 08 May 2021 00:41:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124182.234378; Sat, 08 May 2021 00:41:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfB2c-0001HX-Dd; Sat, 08 May 2021 00:41:58 +0000
Received: by outflank-mailman (input) for mailman id 124182;
 Sat, 08 May 2021 00:41:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=j7fW=KD=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lfB2a-0001HR-Qe
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 00:41:57 +0000
Received: from out1-smtp.messagingengine.com (unknown [66.111.4.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d48dcfe6-3abd-48e4-93dc-30b2ad5e224f;
 Sat, 08 May 2021 00:41:55 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id 2A3295C00E5;
 Fri,  7 May 2021 20:41:55 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 07 May 2021 20:41:55 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Fri,  7 May 2021 20:41:53 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d48dcfe6-3abd-48e4-93dc-30b2ad5e224f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=n2i9lb
	1cDFHBhzulZlCy85ItUbUjzY2Rm3nibICrJL4=; b=ZV1zjl5dyQ4KgK80tuv4bQ
	EtTPysRfSZV1CFCyBHVdde2W53NrWG/RvNK4ZDBMoc+Y4fadZYGedvWbzUnyID4w
	K0d/SKLz/dfDGquCfTxQXNFAiCucDoq1cTTkvYWHUnV0d3gnQte38vw+/F+VVapD
	v14uO58mYnIt93XmaT3h56IHwlN06L3OasSh9+X2Qwfo51pfINqmpt2PnyY266uP
	vfzM2ECuskTi1z5DlqXadHQF91hjK0wvQE+rpGwpjV46iVoGnvnPewWgbOSADmJe
	DbTVRkgt8NxGblffxjjy6YlvIpVF0cl14DChipEeTTb6mfy7m41nh6922gfga/oQ
	==
X-ME-Sender: <xms:Ut6VYOIF9YIDBea2xmpUCYx1-S79R0ko18othBZAN0OSrsFYExnFmw>
    <xme:Ut6VYGKp6pVQKy6FL2W_wKu5gIOQV-k_5IK90d6_hvnK0vko4qgchXtuIlyzCjcRR
    G0KH0rTm62vIA>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdegfedgfeejucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeijedrjeelrdegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghi
    lhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtg
    homh
X-ME-Proxy: <xmx:Ut6VYOuC0MLd_wjegbgNyjG51mAad41yHZjtejvGz9kmCsrSMvy8jg>
    <xmx:Ut6VYDZM7gfRRYzAJ7_-lp3sPUfmCeQ4dxGLVyO6vrz_-0BbQpSawQ>
    <xmx:Ut6VYFa7-spspQxsXE3OVRkM4xxCyvzg7W3TS5F5eRLVJrsqZgBEfA>
    <xmx:U96VYJGXXs-hjuvZbv2sLSdlwzi3W37aBqMEujJrcnOUjyplAx0lrg>
Date: Sat, 8 May 2021 02:41:50 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
Subject: Re: [PATCH] xen/gntdev: fix gntdev_mmap() error exit path
Message-ID: <YJXeTy0lNWvSMnZH@mail-itl>
References: <20210423054038.26696-1-jgross@suse.com>
 <467B8109-C829-4755-8398-196E50090898@arm.com>
 <9cb9bd6c-8185-9741-31b9-8f6baf3848a3@suse.com>
 <92E0F915-499C-4471-B0C9-336494C5E31D@arm.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="pOlv8fsOYh8hH7v+"
Content-Disposition: inline
In-Reply-To: <92E0F915-499C-4471-B0C9-336494C5E31D@arm.com>


--pOlv8fsOYh8hH7v+
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Sat, 8 May 2021 02:41:50 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Luca Fancellu <luca.fancellu@arm.com>
Cc: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org
Subject: Re: [PATCH] xen/gntdev: fix gntdev_mmap() error exit path

On Fri, Apr 23, 2021 at 08:04:57AM +0100, Luca Fancellu wrote:
> > On 23 Apr 2021, at 08:00, Juergen Gross <jgross@suse.com> wrote:
> > On 23.04.21 08:55, Luca Fancellu wrote:
> >>> On 23 Apr 2021, at 06:40, Juergen Gross <jgross@suse.com> wrote:
> >>>
> >>> Commit d3eeb1d77c5d0af ("xen/gntdev: use mmu_interval_notifier_insert")
> >>> introduced an error in gntdev_mmap(): in case the call of
> >>> mmu_interval_notifier_insert_locked() fails the exit path should not
> >>> call mmu_interval_notifier_remove(), as this might result in NULL
> >>> dereferences.
> >>>
> >>> One reason for failure is e.g. a signal pending for the running
> >>> process.
> >>>
> >>> Fixes: d3eeb1d77c5d0af ("xen/gntdev: use mmu_interval_notifier_insert")
> >>> Cc: stable@vger.kernel.org
> >>> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> >>> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> >>> Signed-off-by: Juergen Gross <jgross@suse.com>
(...)

> Right, thanks, seems good to me.
>
> Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

Can somebody ack this fix please?

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--pOlv8fsOYh8hH7v+
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCV3k4ACgkQ24/THMrX
1yxJTwf+MztgRk40NrBaqMG3/vvS/xaYpsHaX6aDvKJn6O+2s6bkhNejJURDM7QW
B048xcn16B4UTKoIr/6BuToivQKKaQyuX3Da8XstMYDJc6H70Eisf6o2YX33EvZS
k0F+R6dnCKHb9DVWltZs4onpAAUlnKeaYIpu+dgsCISX9UoOo7ogmNANyLtZEYz+
BvzFd5vWrF5I6KOjVBxMSF0ENGi9y+YYelKfCYYe0E6DM1059WmxAuzRlc+PJv6j
9qGS46t0JkbVlIgV7O36Lw7/m25OvoxLBBb/ImH5yBo+uNMmCBUavR75U/9gInE3
RN3X65Zqr8gCKtldPPUG5Ro2bF1AcQ==
=FcB0
-----END PGP SIGNATURE-----

--pOlv8fsOYh8hH7v+--


From xen-devel-bounces@lists.xenproject.org Sat May 08 02:30:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 02:30:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124190.234390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfCjD-0008Iu-DE; Sat, 08 May 2021 02:30:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124190.234390; Sat, 08 May 2021 02:30:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfCjD-0008In-7m; Sat, 08 May 2021 02:30:03 +0000
Received: by outflank-mailman (input) for mailman id 124190;
 Sat, 08 May 2021 02:30:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfCjB-00086J-UR; Sat, 08 May 2021 02:30:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfCjB-0002zK-ML; Sat, 08 May 2021 02:30:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfCjB-0007DG-DO; Sat, 08 May 2021 02:30:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfCjB-0000OC-Cv; Sat, 08 May 2021 02:30:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=/svN7QoYPCjxMQ89DxxDdSseBPXjkghb84eFjZpu09w=; b=OnIfb7U1LqAY/1YeMUcj5kwdxu
	ccfxFOVnxSF42OG5BF3OtHJxNxLJHJVw0joZUW1f3NYwuJADgTI9ZK1TIp6FNkaGDdnnIYm1AqkbS
	NGlm3f7M0DJv1zquHC0BXYKGQhcy9K8aQ3iG96fkvlBHx6E3Sl2cz504OzhVLSxAdRx0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm
Message-Id: <E1lfCjB-0000OC-Cv@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 May 2021 02:30:01 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm
testid debian-hvm-install

Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8d17adf34f501ded65a106572740760f0a75577c
  Bug not present: e67d8e2928200e24ecb47c7be3ea8270077f2996
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161845/


  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install --summary-out=tmp/161845.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm debian-hvm-install
Searching for failure / basis pass:
 161826 fail [host=albana0] / 160125 [host=pinot1] 160119 [host=albana1] 160113 [host=albana1] 160104 [host=chardonnay0] 160097 [host=chardonnay1] 160091 [host=elbling1] 160082 [host=pinot0] 160079 ok.
Failure / basis pass flights: 161826 / 160079
(tree with no url: minios)
Tree: libvirt git://xenbits.xen.org/libvirt.git
Tree: libvirt_keycodemapdb https://gitlab.com/keycodemap/keycodemapdb.git
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
Basis pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f8a81fc296535f73c48cf9563862e088cc71c57 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/libvirt.git#2c846fa6bcc11929c9fb857a22430fb9945654ad-2c846fa6bcc11929c9fb857a22430fb9945654ad https://gitlab.com/keycodemap/keycodemapdb.git#27acf0ef828bf719b2053ba398b195829413dbdd-27acf0ef828bf719b2053ba398b195829413dbdd git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-f297b7f20010711e36e981fe45645302cc9d109d git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#6f8a81fc296535f73c48cf9563862e088cc71c57-d90f154867ec0ec22fd719164b88716e8fd48672 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee git://xenbits.xen.org/xen.git#14b95b3b8546db201e7efd0636ae0e215fae98f3-09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
Loaded 44464 nodes in revision graph
Searching for test results:
 160079 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f8a81fc296535f73c48cf9563862e088cc71c57 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 160082 [host=pinot0]
 160088 []
 160091 [host=elbling1]
 160097 [host=chardonnay1]
 160104 [host=chardonnay0]
 160113 [host=albana1]
 160119 [host=albana1]
 160125 [host=pinot1]
 160134 fail irrelevant
 160147 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161608 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f8a81fc296535f73c48cf9563862e088cc71c57 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161611 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161612 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161604 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161614 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161617 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161618 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161620 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161622 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5d1428d6c43942cfb40a909e4c30a5cbb81bda8f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161624 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161626 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161627 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161616 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161630 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2e8319d456724c3d8514d943dc4607e2f08e88a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161631 []
 161765 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f8a81fc296535f73c48cf9563862e088cc71c57 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161766 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161816 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f8a81fc296535f73c48cf9563862e088cc71c57 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161818 fail irrelevant
 161838 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161820 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ab5b8a3fb41d035ea320ce85593ba505ea5305bc b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161823 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f3bdfc41866edf7c256e689deb9d091a950c8fca 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e31b3a5c34c6e5be7ef60773e607f189eaa15f3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 161812 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161824 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161828 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161830 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b50101833987b47e0740f1621de48637c468c3d1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161833 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e67d8e2928200e24ecb47c7be3ea8270077f2996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161834 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161835 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161837 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e67d8e2928200e24ecb47c7be3ea8270077f2996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161826 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161840 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f8a81fc296535f73c48cf9563862e088cc71c57 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 161842 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161844 pass 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e67d8e2928200e24ecb47c7be3ea8270077f2996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161845 fail 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8d17adf34f501ded65a106572740760f0a75577c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
Searching for interesting versions
 Result found: flight 160079 (pass), for basis pass
 Result found: flight 161826 (fail), for basis failure
 Repro found: flight 161840 (pass), for basis pass
 Repro found: flight 161842 (fail), for basis failure
 0 revisions at 2c846fa6bcc11929c9fb857a22430fb9945654ad 27acf0ef828bf719b2053ba398b195829413dbdd c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e67d8e2928200e24ecb47c7be3ea8270077f2996 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 161833 (pass), for last pass
 Result found: flight 161835 (fail), for first failure
 Repro found: flight 161837 (pass), for last pass
 Repro found: flight 161838 (fail), for first failure
 Repro found: flight 161844 (pass), for last pass
 Repro found: flight 161845 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8d17adf34f501ded65a106572740760f0a75577c
  Bug not present: e67d8e2928200e24ecb47c7be3ea8270077f2996
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161845/


  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

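[Editorial note, not part of the original report: the bisected commit means QEMU no longer accepts the generic "file" block driver for host block or character devices, which is what the libvirt-driven debianhvm install job tripped over. A minimal sketch of the before/after configuration, assuming a hypothetical host device /dev/sdb; node names are illustrative:]

```
# Previously accepted; QEMU at/after 8d17adf34f50 rejects this for block/char devices:
-blockdev driver=file,filename=/dev/sdb,node-name=disk0

# Required instead for a host block device:
-blockdev driver=host_device,filename=/dev/sdb,node-name=disk0

# And for a host CD-ROM device:
-blockdev driver=host_cdrom,filename=/dev/cdrom,node-name=cd0
```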
neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.62232 to fit
pnmtopng: 232 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm.debian-hvm-install.{dot,ps,png,html,svg}.
----------------------------------------
161845: tolerable FAIL

flight 161845 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/161845/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail baseline untested


jobs:
 build-i386-libvirt                                           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sat May 08 06:35:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 06:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124206.234420 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfGYp-0005De-6N; Sat, 08 May 2021 06:35:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124206.234420; Sat, 08 May 2021 06:35:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfGYp-0005DX-2Y; Sat, 08 May 2021 06:35:35 +0000
Received: by outflank-mailman (input) for mailman id 124206;
 Sat, 08 May 2021 06:35:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfGYn-0005DN-8P; Sat, 08 May 2021 06:35:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfGYn-0007SG-00; Sat, 08 May 2021 06:35:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfGYm-0003nu-LF; Sat, 08 May 2021 06:35:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfGYm-0007c6-Km; Sat, 08 May 2021 06:35:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6tl9ocr2s5iYmzbIehtpsGPgjaH/xU+foYPrYBMOYvM=; b=Dz9wLIsCdvJvZLOYAmq8nKNjKu
	9hDE1xhbVsZ1iAo/wy9zRwDo++bzg2ufqWRIKsmJSDvVJE85SghIjMfQsOYYXzRAe0BC+GTdQ8h3r
	ajywOSdRW3KdogZZgV+mgLYzm+ZMpBH72u3+/M4wnAnbzLTSbVSc1rgFzgeL5KoppySk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161848-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161848: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d1873e03b461c6a8535e338aa6869ece757fded3
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 May 2021 06:35:32 +0000

flight 161848 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161848/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d1873e03b461c6a8535e338aa6869ece757fded3
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  302 days
Failing since        151818  2020-07-11 04:18:52 Z  301 days  294 attempts
Testing same since   161848  2021-05-08 04:23:52 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 56845 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 08 07:34:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 07:34:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124215.234442 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfHTa-0002Vq-PF; Sat, 08 May 2021 07:34:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124215.234442; Sat, 08 May 2021 07:34:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfHTa-0002Vj-Lw; Sat, 08 May 2021 07:34:14 +0000
Received: by outflank-mailman (input) for mailman id 124215;
 Sat, 08 May 2021 07:34:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfHTZ-0002VZ-EJ; Sat, 08 May 2021 07:34:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfHTZ-0008UX-7j; Sat, 08 May 2021 07:34:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfHTY-0006Dk-VF; Sat, 08 May 2021 07:34:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfHTY-0006H0-Ug; Sat, 08 May 2021 07:34:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fQDVOnE74Ud6paLIUDRhwvu8EZCU3BjNtfNhhT5S3/Q=; b=Ou3nuPTsksfJv1U9ZixeCFdm4U
	HrM2x4tqwEethMinPCZx9BySx+EJEMVamP3/OxExLhFh/SiDVJ1Q1ndT10QFHFi+lgVeXGXl+vAph
	lI2cMiy2YNGCDFxlUadR4h7QAV82lt6SV5FdvjuyHh7u+4KzCrndUIra7p2TRNDaIba0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161836-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161836: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
X-Osstest-Versions-That:
    xen=09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 May 2021 07:34:12 +0000

flight 161836 xen-unstable real [real]
flight 161849 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161836/
http://logs.test-lab.xenproject.org/osstest/logs/161849/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 161849-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161825
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161825
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161825
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161825
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161825
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161825
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161825
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161825
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161825
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161825
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161825
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
baseline version:
 xen                  09fc903c5ac042e2e1eb54e58ea7f207ed12ee16

Last test of basis   161825  2021-05-07 01:56:04 Z    1 days
Testing same since   161836  2021-05-07 14:06:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <julien@xen.org>
  Roger Pau Monné <roger.pau@citrix.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   09fc903c5a..7a2b787880  7a2b787880bddbb3bd68b18efe1d6fe339df6ff1 -> master


From xen-devel-bounces@lists.xenproject.org Sat May 08 08:01:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 08:01:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124188.234458 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfHu7-0006B9-A9; Sat, 08 May 2021 08:01:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124188.234458; Sat, 08 May 2021 08:01:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfHu7-0006B2-68; Sat, 08 May 2021 08:01:39 +0000
Received: by outflank-mailman (input) for mailman id 124188;
 Sat, 08 May 2021 02:19:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kEvs=KD=huawei.com=thunder.leizhen@srs-us1.protection.inumbo.net>)
 id 1lfCZB-0007Du-NA
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 02:19:41 +0000
Received: from szxga05-in.huawei.com (unknown [45.249.212.191])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d7b832cd-9636-45f9-a45f-b16edd17fdbf;
 Sat, 08 May 2021 02:19:40 +0000 (UTC)
Received: from DGGEMS411-HUB.china.huawei.com (unknown [172.30.72.60])
 by szxga05-in.huawei.com (SkyGuard) with ESMTP id 4FcWCw3fRwzkX8q;
 Sat,  8 May 2021 10:17:00 +0800 (CST)
Received: from thunder-town.china.huawei.com (10.174.177.72) by
 DGGEMS411-HUB.china.huawei.com (10.3.19.211) with Microsoft SMTP Server id
 14.3.498.0; Sat, 8 May 2021 10:19:27 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7b832cd-9636-45f9-a45f-b16edd17fdbf
From: Zhen Lei <thunder.leizhen@huawei.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
	<jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>, Andrew Morton
	<akpm@linux-foundation.org>, Dan Carpenter <dan.carpenter@oracle.com>, "Dan
 Williams" <dan.j.williams@intel.com>, xen-devel
	<xen-devel@lists.xenproject.org>, linux-kernel <linux-kernel@vger.kernel.org>
CC: Zhen Lei <thunder.leizhen@huawei.com>
Subject: [PATCH 1/1] xen/unpopulated-alloc: fix error return code in fill_list()
Date: Sat, 8 May 2021 10:19:13 +0800
Message-ID: <20210508021913.1727-1-thunder.leizhen@huawei.com>
X-Mailer: git-send-email 2.26.0.windows.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.174.177.72]
X-CFilter-Loop: Reflected

Return a negative error code from the error handling path instead
of 0, as is done elsewhere in this function.

Fixes: a4574f63edc6 ("mm/memremap_pages: convert to 'struct range'")
Reported-by: Hulk Robot <hulkci@huawei.com>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 drivers/xen/unpopulated-alloc.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/drivers/xen/unpopulated-alloc.c b/drivers/xen/unpopulated-alloc.c
index e64e6befc63b..87e6b7db892f 100644
--- a/drivers/xen/unpopulated-alloc.c
+++ b/drivers/xen/unpopulated-alloc.c
@@ -39,8 +39,10 @@ static int fill_list(unsigned int nr_pages)
 	}
 
 	pgmap = kzalloc(sizeof(*pgmap), GFP_KERNEL);
-	if (!pgmap)
+	if (!pgmap) {
+		ret = -ENOMEM;
 		goto err_pgmap;
+	}
 
 	pgmap->type = MEMORY_DEVICE_GENERIC;
 	pgmap->range = (struct range) {
-- 
2.25.1
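
[Editorial note: a minimal standalone sketch, not kernel code, of the error-path
pattern the patch restores. The helper name `fill_list_sketch` and the use of
plain `malloc()`/`free()` in place of `kzalloc()`/`kfree()` are illustrative
assumptions; the point is that every failing step must set `ret` before jumping
to the shared cleanup label, or the function returns a stale 0 and the caller
never sees the failure.]

```c
#include <errno.h>
#include <stdlib.h>

/* Sketch of the goto-cleanup idiom from fill_list(): one exit label
 * frees earlier resources, and each failure site must set ret first. */
static int fill_list_sketch(int fail_alloc)
{
	int ret = 0;
	void *res;    /* stands in for resources acquired before pgmap */
	void *pgmap;

	res = malloc(16);
	if (!res)
		return -ENOMEM;

	pgmap = fail_alloc ? NULL : malloc(32);   /* simulated kzalloc() */
	if (!pgmap) {
		ret = -ENOMEM;   /* the assignment the patch adds */
		goto err_pgmap;  /* without it, we return 0 below */
	}

	free(pgmap);
	free(res);
	return 0;

err_pgmap:
	free(res);
	return ret;
}
```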




From xen-devel-bounces@lists.xenproject.org Sat May 08 09:18:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 09:18:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124231.234476 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfJ5u-0004OD-1W; Sat, 08 May 2021 09:17:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124231.234476; Sat, 08 May 2021 09:17:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfJ5t-0004O6-Uc; Sat, 08 May 2021 09:17:53 +0000
Received: by outflank-mailman (input) for mailman id 124231;
 Sat, 08 May 2021 09:17:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfJ5t-0004Nw-0I; Sat, 08 May 2021 09:17:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfJ5s-0002Fu-Ou; Sat, 08 May 2021 09:17:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfJ5s-0003F4-9D; Sat, 08 May 2021 09:17:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfJ5s-0003AD-8m; Sat, 08 May 2021 09:17:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Riuo6zGRBV42V1tDk+qeVM/nQwlZA2YKT3mYHtY0/sw=; b=Zlvxxkxi2nIR+6mRiIh3ThQIRZ
	1JlWUroG2SHTEtnpsCbyZMM+99ECoJVE3X5KvzL4AtoiAXxih1QEGbD3s6entk7IvcTecCk/8je9i
	DVPea3WH/nqiC5a6EUQfne24Gt81piRzBdKp8ij3oNvRLvfwkkrkr+gUnHvJmqfrR5pc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161839-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161839: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d90f154867ec0ec22fd719164b88716e8fd48672
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 May 2021 09:17:52 +0000

flight 161839 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161839/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d90f154867ec0ec22fd719164b88716e8fd48672
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  261 days
Failing since        152659  2020-08-21 14:07:39 Z  259 days  473 attempts
Testing same since   161826  2021-05-07 02:33:20 Z    1 days    2 attempts

------------------------------------------------------------
487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148130 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 08 10:39:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 10:39:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124241.234500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfKM9-0003M1-48; Sat, 08 May 2021 10:38:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124241.234500; Sat, 08 May 2021 10:38:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfKM9-0003Lu-0q; Sat, 08 May 2021 10:38:45 +0000
Received: by outflank-mailman (input) for mailman id 124241;
 Sat, 08 May 2021 10:38:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfKM7-0003LU-FJ; Sat, 08 May 2021 10:38:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfKM7-0003d2-8y; Sat, 08 May 2021 10:38:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfKM6-0000DE-Tj; Sat, 08 May 2021 10:38:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfKM6-00032I-TA; Sat, 08 May 2021 10:38:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ULlDvS1oPjnaZeL98ADznCKB6oLVbaM06W0mTiJKXJE=; b=PPdrD4dNsagOIilZadY+kV8BBQ
	7tcTVHyTo3UoW6QssL79GYYnMEN1KJi9ftYVpvvag5ErIYFi3hxPZULFrsuMH3q/xRksnH6ZCO1KK
	gfuNKbukTjCfARRV3cu+mQyOSEkyqj9jhVf1DVCB9r+qBQBP7lUjy10+BO5zwarARhWk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161853-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161853: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:build-amd64:xen-build:fail:regression
    qemu-mainline:build-arm64:xen-build:fail:regression
    qemu-mainline:build-i386:xen-build:fail:regression
    qemu-mainline:build-i386-xsm:xen-build:fail:regression
    qemu-mainline:build-arm64-xsm:xen-build:fail:regression
    qemu-mainline:build-amd64-xsm:xen-build:fail:regression
    qemu-mainline:build-armhf:xen-build:fail:regression
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    qemu-mainline:build-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-armhf-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:build-i386-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    qemuu=d90f154867ec0ec22fd719164b88716e8fd48672
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 May 2021 10:38:42 +0000

flight 161853 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161853/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 152631
 build-arm64                   6 xen-build                fail REGR. vs. 152631
 build-i386                    6 xen-build                fail REGR. vs. 152631
 build-i386-xsm                6 xen-build                fail REGR. vs. 152631
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152631
 build-amd64-xsm               6 xen-build                fail REGR. vs. 152631
 build-armhf                   6 xen-build                fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-arndale   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit1   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-rtds      1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-vhd       1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)     blocked n/a
 build-armhf-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-i386-libvirt            1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-credit2   1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl-cubietruck  1 build-check(1)               blocked  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a

version targeted for testing:
 qemuu                d90f154867ec0ec22fd719164b88716e8fd48672
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  261 days
Failing since        152659  2020-08-21 14:07:39 Z  259 days  474 attempts
Testing same since   161826  2021-05-07 02:33:20 Z    1 days    3 attempts

------------------------------------------------------------
487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              fail    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  fail    
 build-i386                                                   fail    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          blocked 
 build-i386-libvirt                                           blocked 
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          blocked 
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 blocked 
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  blocked 
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  blocked 
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  blocked 
 test-armhf-armhf-xl-cubietruck                               blocked 
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                blocked 
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     blocked 
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148130 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 08 11:49:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 11:49:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124261.234518 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLS6-0001Qb-O5; Sat, 08 May 2021 11:48:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124261.234518; Sat, 08 May 2021 11:48:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLS6-0001QU-Jf; Sat, 08 May 2021 11:48:58 +0000
Received: by outflank-mailman (input) for mailman id 124261;
 Sat, 08 May 2021 11:48:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfLS5-0001QO-Bo
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 11:48:57 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6478de38-f967-4034-b392-90cd9e22357a;
 Sat, 08 May 2021 11:48:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id C5513201C3;
 Sat,  8 May 2021 13:43:04 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id dpMAKn6puGgo; Sat,  8 May 2021 13:43:04 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id A5586201AD;
 Sat,  8 May 2021 13:43:04 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLMN-00BMTv-V3; Sat, 08 May 2021 13:43:03 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:43:03 +0200
Resent-Message-ID: <20210508114303.kg3ljchsoia67iot@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 dgdegra@tycho.nsa.gov, quan.xu0@gmail.com
Received: from samy by begin with local (Exim 4.94.2)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lelk9-006BGX-U9; Thu, 06 May 2021 23:41:13 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6478de38-f967-4034-b392-90cd9e22357a
Date: Thu, 6 May 2021 23:41:13 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 10/13] vtpmmgr: Remove bogus cast from TPM2_GetRandom
Message-ID: <20210506214113.bhkhiif4utufxxwp@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-11-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210506135923.161427-11-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:20 -0400, wrote:
> The UINT32 <-> UINT16 casting in TPM2_GetRandom is incorrect.  Use a
> local UINT16 as needed for the TPM hardware command and assign the
> result.
> 
> Suggested-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/tpm2.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/tpm2.c b/stubdom/vtpmmgr/tpm2.c
> index 655e6d164c..ebd06eac74 100644
> --- a/stubdom/vtpmmgr/tpm2.c
> +++ b/stubdom/vtpmmgr/tpm2.c
> @@ -427,15 +427,22 @@ abort_egress:
>  
>  TPM_RC TPM2_GetRandom(UINT32 * bytesRequested, BYTE * randomBytes)
>  {
> +    UINT16 bytesReq;
>      TPM_BEGIN(TPM_ST_NO_SESSIONS, TPM_CC_GetRandom);
>  
> -    ptr = pack_UINT16(ptr, (UINT16)*bytesRequested);
> +    if (*bytesRequested > UINT16_MAX)
> +        bytesReq = UINT16_MAX;
> +    else
> +        bytesReq = *bytesRequested;
> +
> +    ptr = pack_UINT16(ptr, bytesReq);
>  
>      TPM_TRANSMIT();
>      TPM_UNPACK_VERIFY();
>  
> -    ptr = unpack_UINT16(ptr, (UINT16 *)bytesRequested);
> -    ptr = unpack_TPM_BUFFER(ptr, randomBytes, *bytesRequested);
> +    ptr = unpack_UINT16(ptr, &bytesReq);
> +    *bytesRequested = bytesReq;
> +    ptr = unpack_TPM_BUFFER(ptr, randomBytes, bytesReq);
>  
>  abort_egress:
>      return status;
> -- 
> 2.30.2
> 

-- 
Samuel
<N> (* If you have a precise idea of the intended use of the following code, please
<N>    write to Eduardo.Gimenez@inria.fr and ask for the prize :-)
<N>    -- Eduardo (11/8/97) *)
 -+- N on #ens-mim - and it was one of the developers -+-


From xen-devel-bounces@lists.xenproject.org Sat May 08 11:53:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 11:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124266.234532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLWw-0002oS-CG; Sat, 08 May 2021 11:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124266.234532; Sat, 08 May 2021 11:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLWw-0002oL-9L; Sat, 08 May 2021 11:53:58 +0000
Received: by outflank-mailman (input) for mailman id 124266;
 Sat, 08 May 2021 11:53:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfLWv-0002o9-3R
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 11:53:57 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 334e49a0-2862-49b2-917d-5fc65212a1be;
 Sat, 08 May 2021 11:53:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id BED562019C;
 Sat,  8 May 2021 13:43:55 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id tb5DW_x33ynA; Sat,  8 May 2021 13:43:55 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 9D6152018D;
 Sat,  8 May 2021 13:43:55 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLNC-00BMUU-W9; Sat, 08 May 2021 13:43:55 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:43:54 +0200
Resent-Message-ID: <20210508114354.vf72yybsevrdc2tq@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 iwj@xenproject.org, wl@xen.org
Received: from samy by begin with local (Exim 4.94.2)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lelvU-006Grj-1F; Thu, 06 May 2021 23:52:56 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 334e49a0-2862-49b2-917d-5fc65212a1be
Date: Thu, 6 May 2021 23:52:55 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 13/13] vtpm: Correct timeout units and command duration
Message-ID: <20210506215255.yftnedauoz4e3bga@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-14-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210506135923.161427-14-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:23 -0400, wrote:
> Add two patches:
> vtpm-microsecond-duration.patch fixes the units for timeouts and command
> durations.
> vtpm-command-duration.patch increases the timeout Linux uses to allow
> commands to succeed.
> 
> Linux works around low timeouts, but not low durations.  The second
> patch allows commands to complete that often timeout with the lower
> command durations.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/Makefile                        |  2 +
>  stubdom/vtpm-command-duration.patch     | 52 +++++++++++++++++++++++++
>  stubdom/vtpm-microsecond-duration.patch | 52 +++++++++++++++++++++++++
>  3 files changed, 106 insertions(+)
>  create mode 100644 stubdom/vtpm-command-duration.patch
>  create mode 100644 stubdom/vtpm-microsecond-duration.patch
> 
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index c6de5f68ae..06aa69d8bc 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -239,6 +239,8 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>  	patch -d $@ -p1 < vtpm-implicit-fallthrough.patch
>  	patch -d $@ -p1 < vtpm_TPM_ChangeAuthAsymFinish.patch
>  	patch -d $@ -p1 < vtpm_extern.patch
> +	patch -d $@ -p1 < vtpm-microsecond-duration.patch
> +	patch -d $@ -p1 < vtpm-command-duration.patch
>  	mkdir $@/build
>  	cd $@/build; CC=${CC} $(CMAKE) .. -DCMAKE_C_FLAGS:STRING="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>  	touch $@
> diff --git a/stubdom/vtpm-command-duration.patch b/stubdom/vtpm-command-duration.patch
> new file mode 100644
> index 0000000000..6fdf2fc9be
> --- /dev/null
> +++ b/stubdom/vtpm-command-duration.patch
> @@ -0,0 +1,52 @@
> +From e7c976b5864e7d2649292d90ea60d5aea091a990 Mon Sep 17 00:00:00 2001
> +From: Jason Andryuk <jandryuk@gmail.com>
> +Date: Sun, 14 Mar 2021 12:46:34 -0400
> +Subject: [PATCH 2/2] Increase command durations
> +
> +With Linux 5.4 xen-tpmfront and a Xen vtpm-stubdom, xen-tpmfront was
> +failing commands with -ETIME:
> +tpm tpm0: tpm_try_transmit: send(): error-62
> +
> +The vtpm was returning the data, but it was after the duration timeout
> +in vtpm_send.  Linux may have started being more stringent about timing?
> +
> +The vtpm-stubdom has a little delay since it writes its disk before
> +returning the response.
> +
> +Anyway, the durations are rather low.  When they were 1/10/1000 before
> +converting to microseconds, Linux showed all three durations rounded to
> +10000.  Update them with values from a physical TPM1.2.  These were
> +taken from a WEC which was software downgraded from a TPM2 to a TPM1.2.
> +They might be excessive, but I'd rather have a command succeed than
> +return -ETIME.
> +
> +An IFX physical TPM1.2 uses:
> +1000000
> +1500000
> +150000000
> +
> +Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> +---
> + tpm/tpm_data.c | 6 +++---
> + 1 file changed, 3 insertions(+), 3 deletions(-)
> +
> +diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
> +index bebaf10..844afca 100644
> +--- a/tpm/tpm_data.c
> ++++ b/tpm/tpm_data.c
> +@@ -71,9 +71,9 @@ static void init_timeouts(void)
> +   tpmData.permanent.data.tis_timeouts[1] = 2000000;
> +   tpmData.permanent.data.tis_timeouts[2] = 750000;
> +   tpmData.permanent.data.tis_timeouts[3] = 750000;
> +-  tpmData.permanent.data.cmd_durations[0] = 1000;
> +-  tpmData.permanent.data.cmd_durations[1] = 10000;
> +-  tpmData.permanent.data.cmd_durations[2] = 1000000;
> ++  tpmData.permanent.data.cmd_durations[0] = 3000000;
> ++  tpmData.permanent.data.cmd_durations[1] = 3000000;
> ++  tpmData.permanent.data.cmd_durations[2] = 600000000;
> + }
> + 
> + void tpm_init_data(void)
> +-- 
> +2.30.2
> +
> diff --git a/stubdom/vtpm-microsecond-duration.patch b/stubdom/vtpm-microsecond-duration.patch
> new file mode 100644
> index 0000000000..7a906e72c5
> --- /dev/null
> +++ b/stubdom/vtpm-microsecond-duration.patch
> @@ -0,0 +1,52 @@
> +From 5a510e0afd7c288e3f0fb3523ec749ba1366ad61 Mon Sep 17 00:00:00 2001
> +From: Jason Andryuk <jandryuk@gmail.com>
> +Date: Sun, 14 Mar 2021 12:42:10 -0400
> +Subject: [PATCH 1/2] Use microseconds for timeouts and durations
> +
> +The timeout and duration fields should be in microseconds according to
> +the spec.
> +
> +TPM_CAP_PROP_TIS_TIMEOUT:
> +A 4 element array of UINT32 values each denoting the timeout value in
> +microseconds for the following in this order:
> +
> +TPM_CAP_PROP_DURATION:
> +A 3 element array of UINT32 values each denoting the duration value in
> +microseconds of the duration of the three classes of commands:
> +
> +Linux will scale the timeouts up by 1000, but not the durations.  Change
> +the units for both sets as appropriate.
> +
> +Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> +---
> + tpm/tpm_data.c | 14 +++++++-------
> + 1 file changed, 7 insertions(+), 7 deletions(-)
> +
> +diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
> +index a3a79ef..bebaf10 100644
> +--- a/tpm/tpm_data.c
> ++++ b/tpm/tpm_data.c
> +@@ -67,13 +67,13 @@ static void init_nv_storage(void)
> + static void init_timeouts(void)
> + {
> +   /* for the timeouts we use the PC platform defaults */
> +-  tpmData.permanent.data.tis_timeouts[0] = 750;
> +-  tpmData.permanent.data.tis_timeouts[1] = 2000;
> +-  tpmData.permanent.data.tis_timeouts[2] = 750;
> +-  tpmData.permanent.data.tis_timeouts[3] = 750;
> +-  tpmData.permanent.data.cmd_durations[0] = 1;
> +-  tpmData.permanent.data.cmd_durations[1] = 10;
> +-  tpmData.permanent.data.cmd_durations[2] = 1000;
> ++  tpmData.permanent.data.tis_timeouts[0] = 750000;
> ++  tpmData.permanent.data.tis_timeouts[1] = 2000000;
> ++  tpmData.permanent.data.tis_timeouts[2] = 750000;
> ++  tpmData.permanent.data.tis_timeouts[3] = 750000;
> ++  tpmData.permanent.data.cmd_durations[0] = 1000;
> ++  tpmData.permanent.data.cmd_durations[1] = 10000;
> ++  tpmData.permanent.data.cmd_durations[2] = 1000000;
> + }
> + 
> + void tpm_init_data(void)
> +-- 
> +2.30.2
> +
> -- 
> 2.30.2
> 

-- 
Samuel
For a father, better to die than to do lots of calculations and not
take care of his son
 -+- y on #ens-mim - dark tales of zombies -+-


From xen-devel-bounces@lists.xenproject.org Sat May 08 11:53:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 11:53:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124267.234539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLWw-0002s1-Lj; Sat, 08 May 2021 11:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124267.234539; Sat, 08 May 2021 11:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLWw-0002qu-H6; Sat, 08 May 2021 11:53:58 +0000
Received: by outflank-mailman (input) for mailman id 124267;
 Sat, 08 May 2021 11:53:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfLWv-0002oB-7G
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 11:53:57 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8052b0f8-0e70-4258-873d-b6e9dd96d311;
 Sat, 08 May 2021 11:53:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id BF953201CA;
 Sat,  8 May 2021 13:43:45 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id a-eHrPlRN5Dd; Sat,  8 May 2021 13:43:45 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id 9F804201C9;
 Sat,  8 May 2021 13:43:45 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLN3-00BMUQ-3U; Sat, 08 May 2021 13:43:45 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:43:45 +0200
Resent-Message-ID: <20210508114345.gattpm6g7zulm7hx@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 dgdegra@tycho.nsa.gov, quan.xu0@gmail.com
Received: from samy by begin with local (Exim 4.94.2)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lelko-006BIO-MM; Thu, 06 May 2021 23:41:54 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8052b0f8-0e70-4258-873d-b6e9dd96d311
Date: Thu, 6 May 2021 23:41:54 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 11/13] vtpmmgr: Fix owner_auth & srk_auth parsing
Message-ID: <20210506214154.ssot4jwf64k22wth@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-12-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506135923.161427-12-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:21 -0400, wrote:
> Argument parsing only matches to before ':' and then the string with
> leading ':' is passed to parse_auth_string which fails to parse.  Extend
> the length to include the separator in the match.
> 
> While here, switch the separator to "=".  The man page documented "="
> and the other tpm.* arguments already use "=".  Since it didn't work
> before, we don't need to worry about backwards compatibility.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/init.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 4ae34a4fcb..62dc5994de 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -289,16 +289,16 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
>     memcpy(vtpm_globals.srk_auth, WELLKNOWN_AUTH, sizeof(TPM_AUTHDATA));
>  
>     for(i = 1; i < argc; ++i) {
> -      if(!strncmp(argv[i], "owner_auth:", 10)) {
> -         if((rc = parse_auth_string(argv[i] + 10, vtpm_globals.owner_auth)) < 0) {
> +      if(!strncmp(argv[i], "owner_auth=", 11)) {
> +         if((rc = parse_auth_string(argv[i] + 11, vtpm_globals.owner_auth)) < 0) {
>              goto err_invalid;
>           }
>           if(rc == 1) {
>              opts->gen_owner_auth = 1;
>           }
>        }
> -      else if(!strncmp(argv[i], "srk_auth:", 8)) {
> -         if((rc = parse_auth_string(argv[i] + 8, vtpm_globals.srk_auth)) != 0) {
> +      else if(!strncmp(argv[i], "srk_auth=", 9)) {
> +         if((rc = parse_auth_string(argv[i] + 9, vtpm_globals.srk_auth)) != 0) {
>              goto err_invalid;
>           }
>        }
> -- 
> 2.30.2
> 

-- 
Samuel
c> ah (can you find Fluide Glacial on the net, or do you have to go into the real world?)
s> in the real world
c> darn


From xen-devel-bounces@lists.xenproject.org Sat May 08 11:54:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 11:54:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124268.234557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLX1-0003QE-VJ; Sat, 08 May 2021 11:54:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124268.234557; Sat, 08 May 2021 11:54:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLX1-0003Q5-RI; Sat, 08 May 2021 11:54:03 +0000
Received: by outflank-mailman (input) for mailman id 124268;
 Sat, 08 May 2021 11:54:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfLX0-0002oB-6C
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 11:54:02 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a8aa0bd-16ed-4ea4-8dd9-bb86e3442cee;
 Sat, 08 May 2021 11:53:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id C5EB7201A2;
 Sat,  8 May 2021 13:34:33 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id EeqwRdYTS3Gp; Sat,  8 May 2021 13:34:33 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id A69F3201A1;
 Sat,  8 May 2021 13:34:33 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLE9-00BM8I-42; Sat, 08 May 2021 13:34:33 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:34:33 +0200
Resent-Message-ID: <20210508113433.xgnrtzopj2cpbsgi@begin>
Resent-To: xen-devel@lists.xenproject.org, dgdegra@tycho.nsa.gov,
 quan.xu0@gmail.com
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a8aa0bd-16ed-4ea4-8dd9-bb86e3442cee
Date: Thu, 6 May 2021 23:40:16 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 09/13] vtpmmgr: Support GetRandom passthrough on TPM
 2.0
Message-ID: <20210506214016.yt2df7y6xf5cbdzm@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-10-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506135923.161427-10-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:19 -0400, wrote:
> GetRandom passthrough currently fails when using vtpmmgr with a hardware
> TPM 2.0.
> vtpmmgr (8): INFO[VTPM]: Passthrough: TPM_GetRandom
> vtpm (12): vtpm_cmd.c:120: Error: TPM_GetRandom() failed with error code (30)
> 
> When running on TPM 2.0 hardware, vtpmmgr needs to convert the TPM 1.2
> TPM_ORD_GetRandom into a TPM2 TPM_CC_GetRandom command.  Besides the
> differing ordinal, the TPM 1.2 uses 32-bit sizes for the request and
> response (vs. 16-bit for TPM2).
> 
> Place the random output directly into the tpmcmd->resp and build the
> packet around it.  This avoids bouncing through an extra buffer, but the
> header has to be written after grabbing the random bytes so we have the
> number of bytes to include in the size.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
> v2:
> Add bounds and size checks
> Whitespace fixup
> ---
>  stubdom/vtpmmgr/marshal.h          | 15 ++++++++
>  stubdom/vtpmmgr/vtpm_cmd_handler.c | 61 +++++++++++++++++++++++++++++-
>  2 files changed, 75 insertions(+), 1 deletion(-)
> 
> diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
> index dce19c6439..f1037a7976 100644
> --- a/stubdom/vtpmmgr/marshal.h
> +++ b/stubdom/vtpmmgr/marshal.h
> @@ -890,6 +890,15 @@ inline int sizeof_TPM_AUTH_SESSION(const TPM_AUTH_SESSION* auth) {
>  	return rv;
>  }
>  
> +static
> +inline int sizeof_TPM_RQU_HEADER(BYTE* ptr) {
> +	int rv = 0;
> +	rv += sizeof_UINT16(ptr);
> +	rv += sizeof_UINT32(ptr);
> +	rv += sizeof_UINT32(ptr);
> +	return rv;
> +}
> +
>  static
>  inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
>  		TPM_TAG tag,
> @@ -920,8 +929,14 @@ inline int unpack3_TPM_RQU_HEADER(BYTE* ptr, UINT32* pos, UINT32 max,
>  		unpack3_UINT32(ptr, pos, max, ord);
>  }
>  
> +static
> +inline int sizeof_TPM_RQU_GetRandom(BYTE* ptr) {
> +	return sizeof_TPM_RQU_HEADER(ptr) + sizeof_UINT32(ptr);
> +}
> +
>  #define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
>  #define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
>  #define unpack3_TPM_RSP_HEADER(p, l, m, t, s, r) unpack3_TPM_RQU_HEADER(p, l, m, t, s, r)
> +#define sizeof_TPM_RSP_HEADER(p) sizeof_TPM_RQU_HEADER(p)
>  
>  #endif
> diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> index 2ac14fae77..c879b24c13 100644
> --- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
> +++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> @@ -47,6 +47,7 @@
>  #include "vtpm_disk.h"
>  #include "vtpmmgr.h"
>  #include "tpm.h"
> +#include "tpm2.h"
>  #include "tpmrsa.h"
>  #include "tcg.h"
>  #include "mgmt_authority.h"
> @@ -772,6 +773,64 @@ static int vtpmmgr_permcheck(struct tpm_opaque *opq)
>  	return 1;
>  }
>  
> +TPM_RESULT vtpmmgr_handle_getrandom(struct tpm_opaque *opaque,
> +				    tpmcmd_t* tpmcmd)
> +{
> +	TPM_RESULT status = TPM_SUCCESS;
> +	TPM_TAG tag;
> +	UINT32 size;
> +	const int max_rand_size = TCPA_MAX_BUFFER_LENGTH -
> +				  sizeof_TPM_RQU_GetRandom(tpmcmd->req);
> +	UINT32 rand_offset;
> +	UINT32 rand_size;
> +	TPM_COMMAND_CODE ord;
> +	BYTE *p;
> +
> +	if (tpmcmd->req_len != sizeof_TPM_RQU_GetRandom(tpmcmd->req)) {
> +		status = TPM_BAD_PARAMETER;
> +		tag = TPM_TAG_RQU_COMMAND;
> +		goto abort_egress;
> +	}
> +
> +	p = unpack_TPM_RQU_HEADER(tpmcmd->req, &tag, &size, &ord);
> +
> +	if (!hw_is_tpm2()) {
> +		size = TCPA_MAX_BUFFER_LENGTH;
> +		TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len,
> +					      tpmcmd->resp, &size));
> +		tpmcmd->resp_len = size;
> +
> +		return TPM_SUCCESS;
> +	}
> +
> +	/* TPM_GetRandom req: <header><uint32 num bytes> */
> +	unpack_UINT32(p, &rand_size);
> +
> +	/* Returning fewer bytes is acceptable per the spec. */
> +	if (rand_size > max_rand_size)
> +		rand_size = max_rand_size;
> +
> +	/* Call TPM2_GetRandom but return a TPM_GetRandom response. */
> +	/* TPM_GetRandom resp: <header><uint32 num bytes><num random bytes> */
> +	rand_offset = sizeof_TPM_RSP_HEADER(tpmcmd->resp) +
> +		      sizeof_UINT32(tpmcmd->resp);
> +
> +	TPMTRYRETURN(TPM2_GetRandom(&rand_size, tpmcmd->resp + rand_offset));
> +
> +	p = pack_TPM_RSP_HEADER(tpmcmd->resp, TPM_TAG_RSP_COMMAND,
> +				rand_offset + rand_size, status);
> +	p = pack_UINT32(p, rand_size);
> +	tpmcmd->resp_len = rand_offset + rand_size;
> +
> +	return status;
> +
> +abort_egress:
> +	tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
> +	pack_TPM_RSP_HEADER(tpmcmd->resp, tag + 3, tpmcmd->resp_len, status);
> +
> +	return status;
> +}
> +
>  TPM_RESULT vtpmmgr_handle_cmd(
>  		struct tpm_opaque *opaque,
>  		tpmcmd_t* tpmcmd)
> @@ -842,7 +901,7 @@ TPM_RESULT vtpmmgr_handle_cmd(
>  		switch(ord) {
>  		case TPM_ORD_GetRandom:
>  			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
> -			break;
> +			return vtpmmgr_handle_getrandom(opaque, tpmcmd);
>  		case TPM_ORD_PcrRead:
>  			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
>  			// Quotes also need to be restricted to hide PCR values
> -- 
> 2.30.2
> 

-- 
Samuel
<i8b4uUnderground> d-_-b
<BonyNoMore> how u make that inverted b?
<BonyNoMore> wait
<BonyNoMore> never mind


From xen-devel-bounces@lists.xenproject.org Sat May 08 12:18:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 12:18:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124290.234569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLuS-0006bx-9P; Sat, 08 May 2021 12:18:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124290.234569; Sat, 08 May 2021 12:18:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLuS-0006bq-3V; Sat, 08 May 2021 12:18:16 +0000
Received: by outflank-mailman (input) for mailman id 124290;
 Sat, 08 May 2021 12:18:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfLuR-0006bg-0D; Sat, 08 May 2021 12:18:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfLuQ-0005Fi-O4; Sat, 08 May 2021 12:18:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfLuQ-0006x2-Cx; Sat, 08 May 2021 12:18:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfLuQ-0003o3-CS; Sat, 08 May 2021 12:18:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LYIk4DxWWJI1ctWzYa9SZ2dc2Vh4GWOuQtcxF1JgOYQ=; b=08pI5yPGFeopPzAyXbtoJ6PeM7
	gaL8zWwZQlsQ1iHYmHK1HzZVnWPiMP8yNmKqNJHM3lXDxg5yIp8frcMsn8tGmLdVXVDRTnFHTy4z9
	eOiZXLOdqDTGhj+ANtHkXywa/mPf+PT4JVOhrsqmfwc4LFg/lhqpxpf0Uwm8LXqkEnDs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161843-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161843: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1ad77a05cfaed42cba301368350817333ac69b6a
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 May 2021 12:18:14 +0000

flight 161843 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161843/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                1ad77a05cfaed42cba301368350817333ac69b6a
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  280 days
Failing since        152366  2020-08-01 20:49:34 Z  279 days  466 attempts
Testing same since   161843  2021-05-07 21:11:09 Z    0 days    1 attempts

------------------------------------------------------------
6020 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1632238 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 08 12:24:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 12:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124297.234589 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLzy-00084b-74; Sat, 08 May 2021 12:23:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124297.234589; Sat, 08 May 2021 12:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLzy-00083z-2K; Sat, 08 May 2021 12:23:58 +0000
Received: by outflank-mailman (input) for mailman id 124297;
 Sat, 08 May 2021 12:23:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfLzx-00081O-1e
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 12:23:57 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2afc73b-170a-44b2-a7e1-defb3a0f617f;
 Sat, 08 May 2021 12:23:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 1018C201CB;
 Sat,  8 May 2021 13:44:15 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id nb9Bg_JjYJvu; Sat,  8 May 2021 13:44:14 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id E29BA201B1;
 Sat,  8 May 2021 13:44:14 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLNW-00BMUc-7w; Sat, 08 May 2021 13:44:14 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:44:14 +0200
Resent-Message-ID: <20210508114414.bdzve3wejocomm7l@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 dgdegra@tycho.nsa.gov, quan.xu0@gmail.com
Received: from samy by begin with local (Exim 4.94.2)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1leljE-006BG3-RM; Thu, 06 May 2021 23:40:16 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2afc73b-170a-44b2-a7e1-defb3a0f617f
Date: Thu, 6 May 2021 23:40:16 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 09/13] vtpmmgr: Support GetRandom passthrough on TPM
 2.0
Message-ID: <20210506214016.yt2df7y6xf5cbdzm@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-10-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506135923.161427-10-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:19 -0400, wrote:
> GetRandom passthrough currently fails when using vtpmmgr with a hardware
> TPM 2.0.
> vtpmmgr (8): INFO[VTPM]: Passthrough: TPM_GetRandom
> vtpm (12): vtpm_cmd.c:120: Error: TPM_GetRandom() failed with error code (30)
> 
> When running on TPM 2.0 hardware, vtpmmgr needs to convert the TPM 1.2
> TPM_ORD_GetRandom into a TPM2 TPM_CC_GetRandom command.  Besides the
> differing ordinal, TPM 1.2 uses 32-bit sizes for the request and
> response (vs. 16-bit for TPM2).
> 
> Place the random output directly into the tpmcmd->resp and build the
> packet around it.  This avoids bouncing through an extra buffer, but the
> header has to be written after grabbing the random bytes so we have the
> number of bytes to include in the size.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
> v2:
> Add bounds and size checks
> Whitespace fixup
> ---
>  stubdom/vtpmmgr/marshal.h          | 15 ++++++++
>  stubdom/vtpmmgr/vtpm_cmd_handler.c | 61 +++++++++++++++++++++++++++++-
>  2 files changed, 75 insertions(+), 1 deletion(-)
> 
> diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
> index dce19c6439..f1037a7976 100644
> --- a/stubdom/vtpmmgr/marshal.h
> +++ b/stubdom/vtpmmgr/marshal.h
> @@ -890,6 +890,15 @@ inline int sizeof_TPM_AUTH_SESSION(const TPM_AUTH_SESSION* auth) {
>  	return rv;
>  }
>  
> +static
> +inline int sizeof_TPM_RQU_HEADER(BYTE* ptr) {
> +	int rv = 0;
> +	rv += sizeof_UINT16(ptr);
> +	rv += sizeof_UINT32(ptr);
> +	rv += sizeof_UINT32(ptr);
> +	return rv;
> +}
> +
>  static
>  inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
>  		TPM_TAG tag,
> @@ -920,8 +929,14 @@ inline int unpack3_TPM_RQU_HEADER(BYTE* ptr, UINT32* pos, UINT32 max,
>  		unpack3_UINT32(ptr, pos, max, ord);
>  }
>  
> +static
> +inline int sizeof_TPM_RQU_GetRandom(BYTE* ptr) {
> +	return sizeof_TPM_RQU_HEADER(ptr) + sizeof_UINT32(ptr);
> +}
> +
>  #define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
>  #define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
>  #define unpack3_TPM_RSP_HEADER(p, l, m, t, s, r) unpack3_TPM_RQU_HEADER(p, l, m, t, s, r)
> +#define sizeof_TPM_RSP_HEADER(p) sizeof_TPM_RQU_HEADER(p)
>  
>  #endif
> diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> index 2ac14fae77..c879b24c13 100644
> --- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
> +++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> @@ -47,6 +47,7 @@
>  #include "vtpm_disk.h"
>  #include "vtpmmgr.h"
>  #include "tpm.h"
> +#include "tpm2.h"
>  #include "tpmrsa.h"
>  #include "tcg.h"
>  #include "mgmt_authority.h"
> @@ -772,6 +773,64 @@ static int vtpmmgr_permcheck(struct tpm_opaque *opq)
>  	return 1;
>  }
>  
> +TPM_RESULT vtpmmgr_handle_getrandom(struct tpm_opaque *opaque,
> +				    tpmcmd_t* tpmcmd)
> +{
> +	TPM_RESULT status = TPM_SUCCESS;
> +	TPM_TAG tag;
> +	UINT32 size;
> +	const int max_rand_size = TCPA_MAX_BUFFER_LENGTH -
> +				  sizeof_TPM_RQU_GetRandom(tpmcmd->req);
> +	UINT32 rand_offset;
> +	UINT32 rand_size;
> +	TPM_COMMAND_CODE ord;
> +	BYTE *p;
> +
> +	if (tpmcmd->req_len != sizeof_TPM_RQU_GetRandom(tpmcmd->req)) {
> +		status = TPM_BAD_PARAMETER;
> +		tag = TPM_TAG_RQU_COMMAND;
> +		goto abort_egress;
> +	}
> +
> +	p = unpack_TPM_RQU_HEADER(tpmcmd->req, &tag, &size, &ord);
> +
> +	if (!hw_is_tpm2()) {
> +		size = TCPA_MAX_BUFFER_LENGTH;
> +		TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len,
> +					      tpmcmd->resp, &size));
> +		tpmcmd->resp_len = size;
> +
> +		return TPM_SUCCESS;
> +	}
> +
> +	/* TPM_GetRandom req: <header><uint32 num bytes> */
> +	unpack_UINT32(p, &rand_size);
> +
> +	/* Returning fewer bytes is acceptable per the spec. */
> +	if (rand_size > max_rand_size)
> +		rand_size = max_rand_size;
> +
> +	/* Call TPM2_GetRandom but return a TPM_GetRandom response. */
> +	/* TPM_GetRandom resp: <header><uint32 num bytes><num random bytes> */
> +	rand_offset = sizeof_TPM_RSP_HEADER(tpmcmd->resp) +
> +		      sizeof_UINT32(tpmcmd->resp);
> +
> +	TPMTRYRETURN(TPM2_GetRandom(&rand_size, tpmcmd->resp + rand_offset));
> +
> +	p = pack_TPM_RSP_HEADER(tpmcmd->resp, TPM_TAG_RSP_COMMAND,
> +				rand_offset + rand_size, status);
> +	p = pack_UINT32(p, rand_size);
> +	tpmcmd->resp_len = rand_offset + rand_size;
> +
> +	return status;
> +
> +abort_egress:
> +	tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
> +	pack_TPM_RSP_HEADER(tpmcmd->resp, tag + 3, tpmcmd->resp_len, status);
> +
> +	return status;
> +}
> +
>  TPM_RESULT vtpmmgr_handle_cmd(
>  		struct tpm_opaque *opaque,
>  		tpmcmd_t* tpmcmd)
> @@ -842,7 +901,7 @@ TPM_RESULT vtpmmgr_handle_cmd(
>  		switch(ord) {
>  		case TPM_ORD_GetRandom:
>  			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
> -			break;
> +			return vtpmmgr_handle_getrandom(opaque, tpmcmd);
>  		case TPM_ORD_PcrRead:
>  			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
>  			// Quotes also need to be restricted to hide PCR values
> -- 
> 2.30.2
> 

-- 
Samuel
<i8b4uUnderground> d-_-b
<BonyNoMore> how u make that inverted b?
<BonyNoMore> wait
<BonyNoMore> never mind


From xen-devel-bounces@lists.xenproject.org Sat May 08 12:24:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 12:24:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124296.234584 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLzx-00081g-To; Sat, 08 May 2021 12:23:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124296.234584; Sat, 08 May 2021 12:23:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfLzx-00081Z-Qd; Sat, 08 May 2021 12:23:57 +0000
Received: by outflank-mailman (input) for mailman id 124296;
 Sat, 08 May 2021 12:23:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfLzx-00081N-1V
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 12:23:57 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ec5bebd-6b78-4268-a098-437666b28ae0;
 Sat, 08 May 2021 12:23:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id D2D8C201B0;
 Sat,  8 May 2021 13:44:05 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id Qla649ug3nDo; Sat,  8 May 2021 13:44:05 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id B2625201AE;
 Sat,  8 May 2021 13:44:05 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLNN-00BMUY-6T; Sat, 08 May 2021 13:44:05 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:44:05 +0200
Resent-Message-ID: <20210508114405.vo7inlbci2iwcjqo@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 dgdegra@tycho.nsa.gov, quan.xu0@gmail.com
Received: from samy by begin with local (Exim 4.94.2)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lellQ-006BIt-J4; Thu, 06 May 2021 23:42:32 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ec5bebd-6b78-4268-a098-437666b28ae0
Date: Thu, 6 May 2021 23:42:32 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 12/13] vtpmmgr: Check req_len before unpacking command
Message-ID: <20210506214232.qv6ft4giyfoujsie@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-13-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506135923.161427-13-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:22 -0400, wrote:
> vtpm_handle_cmd doesn't ensure there is enough space before unpacking
> the req buffer.  Add a minimum size check.  Called functions will have
> to do their own checking if they need more data from the request.
> 
> The error case is tricky since abort_egress wants to reply with a
> corresponding tag.  Just hardcode TPM_TAG_RQU_COMMAND since the vtpm is
> sending malformed commands in the first place.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/vtpm_cmd_handler.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> index c879b24c13..5586be6997 100644
> --- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
> +++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> @@ -840,6 +840,12 @@ TPM_RESULT vtpmmgr_handle_cmd(
>  	UINT32 size;
>  	TPM_COMMAND_CODE ord;
>  
> +	if (tpmcmd->req_len < sizeof_TPM_RQU_HEADER(tpmcmd->req)) {
> +		status = TPM_BAD_PARAMETER;
> +		tag = TPM_TAG_RQU_COMMAND;
> +		goto abort_egress;
> +	}
> +
>  	unpack_TPM_RQU_HEADER(tpmcmd->req,
>  			&tag, &size, &ord);
>  
> -- 
> 2.30.2
> 
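The pattern the patch adds (validate req_len against the wire-format header size before unpacking any fields) can be sketched in isolation. The header layout below (2-byte tag, 4-byte size, 4-byte ordinal, big-endian) follows the TPM 1.2 request header; the constant values and function names are illustrative assumptions, not the actual vtpmmgr API:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define TPM_RQU_HEADER_SIZE 10u  /* 2 (tag) + 4 (size) + 4 (ordinal) */
#define TPM_BAD_PARAMETER   3    /* illustrative; TPM 1.2 error code */

static uint16_t get_be16(const uint8_t *p)
{
    return (uint16_t)((p[0] << 8) | p[1]);
}

static uint32_t get_be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Returns 0 and fills the header fields, or TPM_BAD_PARAMETER when the
 * request is too short to even contain a header -- in which case the
 * caller replies with a hardcoded TPM_TAG_RQU_COMMAND tag, since the
 * request's own tag cannot be trusted. */
static int unpack_req_header(const uint8_t *req, size_t req_len,
                             uint16_t *tag, uint32_t *size, uint32_t *ord)
{
    if (req_len < TPM_RQU_HEADER_SIZE)
        return TPM_BAD_PARAMETER;
    *tag  = get_be16(req);
    *size = get_be32(req + 2);
    *ord  = get_be32(req + 6);
    return 0;
}
```

As in the patch, callers that need more than the header still have to check req_len against their own command-specific payload size.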

-- 
Samuel
I was cleaning my mouse and the shot just went off...
 -+- s on #ens-mim - and it's even true... -+-


From xen-devel-bounces@lists.xenproject.org Sat May 08 12:54:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 12:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124311.234608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfMT0-0003Li-KX; Sat, 08 May 2021 12:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124311.234608; Sat, 08 May 2021 12:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfMT0-0003Lb-HJ; Sat, 08 May 2021 12:53:58 +0000
Received: by outflank-mailman (input) for mailman id 124311;
 Sat, 08 May 2021 12:53:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfMSy-0003LU-Op
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 12:53:56 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f1a7492-48be-48d6-872b-3ac647eace0c;
 Sat, 08 May 2021 12:53:55 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 0512D201A4;
 Sat,  8 May 2021 13:34:42 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 1rwcWxDj7mb7; Sat,  8 May 2021 13:34:41 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id D84EC201A3;
 Sat,  8 May 2021 13:34:41 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLEH-00BM8M-Aa; Sat, 08 May 2021 13:34:41 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:34:41 +0200
Resent-Message-ID: <20210508113441.zapne7itvhhiu2vu@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 dgdegra@tycho.nsa.gov, quan.xu0@gmail.com
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f1a7492-48be-48d6-872b-3ac647eace0c
Date: Thu, 6 May 2021 23:41:13 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 10/13] vtpmmgr: Remove bogus cast from TPM2_GetRandom
Message-ID: <20210506214113.bhkhiif4utufxxwp@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-11-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210506135923.161427-11-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:20 -0400, wrote:
> The UINT32 <-> UINT16 casting in TPM2_GetRandom is incorrect.  Use a
> local UINT16 as needed for the TPM hardware command and assign the
> result.
> 
> Suggested-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/tpm2.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/tpm2.c b/stubdom/vtpmmgr/tpm2.c
> index 655e6d164c..ebd06eac74 100644
> --- a/stubdom/vtpmmgr/tpm2.c
> +++ b/stubdom/vtpmmgr/tpm2.c
> @@ -427,15 +427,22 @@ abort_egress:
>  
>  TPM_RC TPM2_GetRandom(UINT32 * bytesRequested, BYTE * randomBytes)
>  {
> +    UINT16 bytesReq;
>      TPM_BEGIN(TPM_ST_NO_SESSIONS, TPM_CC_GetRandom);
>  
> -    ptr = pack_UINT16(ptr, (UINT16)*bytesRequested);
> +    if (*bytesRequested > UINT16_MAX)
> +        bytesReq = UINT16_MAX;
> +    else
> +        bytesReq = *bytesRequested;
> +
> +    ptr = pack_UINT16(ptr, bytesReq);
>  
>      TPM_TRANSMIT();
>      TPM_UNPACK_VERIFY();
>  
> -    ptr = unpack_UINT16(ptr, (UINT16 *)bytesRequested);
> -    ptr = unpack_TPM_BUFFER(ptr, randomBytes, *bytesRequested);
> +    ptr = unpack_UINT16(ptr, &bytesReq);
> +    *bytesRequested = bytesReq;
> +    ptr = unpack_TPM_BUFFER(ptr, randomBytes, bytesReq);
>  
>  abort_egress:
>      return status;
> -- 
> 2.30.2
> 
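The bug the patch fixes is worth seeing outside the diff: the TPM2_GetRandom wire format carries a 16-bit byte count while the caller-facing API uses a 32-bit count, and casting a UINT32* to UINT16* reads/writes the wrong number of bytes (and is endian-dependent). A minimal sketch of the corrected shape, with hypothetical names standing in for the pack/transmit/unpack machinery:

```c
#include <assert.h>
#include <stdint.h>

/* Clamp the 32-bit request into the 16-bit field the command carries. */
static uint16_t clamp_to_u16(uint32_t requested)
{
    return requested > UINT16_MAX ? UINT16_MAX : (uint16_t)requested;
}

/* Illustrative stand-in for TPM2_GetRandom's flow: pack a local UINT16,
 * then assign the unpacked 16-bit reply back into the caller's UINT32,
 * widening by assignment instead of through a pointer cast. */
static uint32_t get_random_count(uint32_t *bytes_requested)
{
    uint16_t req = clamp_to_u16(*bytes_requested);
    /* ... pack req, transmit, unpack the count the TPM returned ... */
    uint16_t returned = req;      /* stand-in for the unpacked reply */
    *bytes_requested = returned;  /* safe write-back, no pointer cast */
    return returned;
}
```

With the old pointer cast, unpack_UINT16 would have written only two of the UINT32's four bytes, leaving the rest stale; the local variable avoids that entirely.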

-- 
Samuel
<N> (* If you have a precise idea of the intended use of the following code, please
<N>    write to Eduardo.Gimenez@inria.fr and ask for the prize :-)
<N>    -- Eduardo (11/8/97) *)
 -+- N on #ens-mim - and that was one of the developers -+-


From xen-devel-bounces@lists.xenproject.org Sat May 08 13:39:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 13:39:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124320.234632 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfNAd-0007jY-Cg; Sat, 08 May 2021 13:39:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124320.234632; Sat, 08 May 2021 13:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfNAd-0007jR-8e; Sat, 08 May 2021 13:39:03 +0000
Received: by outflank-mailman (input) for mailman id 124320;
 Sat, 08 May 2021 13:39:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfNAc-0007SV-Cm
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 13:39:02 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id acbeae16-8fd2-4386-ad98-355af3bcd79d;
 Sat, 08 May 2021 13:38:57 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id D290D201C6;
 Sat,  8 May 2021 13:43:17 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 8RLLTE32nPYN; Sat,  8 May 2021 13:43:17 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id A5E43201C5;
 Sat,  8 May 2021 13:43:17 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLMb-00BMU6-16; Sat, 08 May 2021 13:43:17 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:43:16 +0200
Resent-Message-ID: <20210508114316.3cqqtlc3mgm37v4r@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 iwj@xenproject.org, wl@xen.org, dgdegra@tycho.nsa.gov,
 quan.xu0@gmail.com
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLDg-00BM7y-Gf; Sat, 08 May 2021 13:34:04 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:34:04 +0200
Resent-Message-ID: <20210508113404.ymmnt2qrs6gijzm6@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 iwj@xenproject.org, wl@xen.org, QuanXu@begin
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acbeae16-8fd2-4386-ad98-355af3bcd79d
Date: Thu, 6 May 2021 23:35:44 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 04/13] vtpmmgr: Allow specifying srk_handle for TPM2
Message-ID: <20210506213544.6twiioapinyzajb4@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-5-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506135923.161427-5-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:14 -0400, wrote:
> Bypass taking ownership of the TPM2 if an srk_handle is specified.
> 
> This srk_handle must be usable with Null auth for the time being.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
> v2: Use "=" separator
> ---
>  docs/man/xen-vtpmmgr.7.pod |  7 +++++++
>  stubdom/vtpmmgr/init.c     | 11 ++++++++++-
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
> index 875dcce508..3286954568 100644
> --- a/docs/man/xen-vtpmmgr.7.pod
> +++ b/docs/man/xen-vtpmmgr.7.pod
> @@ -92,6 +92,13 @@ Valid arguments:
>  
>  =over 4
>  
> +=item srk_handle=<HANDLE>
> +
> +Specify a srk_handle for TPM 2.0.  TPM 2.0 uses a key hierarchy, and
> +this allows specifying the parent handle under which vtpmmgr creates
> +its own key.  Using this option bypasses vtpmmgr trying to take
> +ownership of the TPM.
> +
>  =item owner_auth=<AUTHSPEC>
>  
>  =item srk_auth=<AUTHSPEC>
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 1506735051..130e4f4bf6 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -302,6 +302,11 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
>              goto err_invalid;
>           }
>        }
> +      else if(!strncmp(argv[i], "srk_handle=", 11)) {
> +         if(sscanf(argv[i] + 11, "%x", &vtpm_globals.srk_handle) != 1) {
> +            goto err_invalid;
> +         }
> +      }
>        else if(!strncmp(argv[i], "tpmdriver=", 10)) {
>           if(!strcmp(argv[i] + 10, "tpm_tis")) {
>              opts->tpmdriver = TPMDRV_TPM_TIS;
> @@ -586,7 +591,11 @@ TPM_RESULT vtpmmgr2_create(void)
>  {
>      TPM_RESULT status = TPM_SUCCESS;
>  
> -    TPMTRYRETURN(tpm2_take_ownership());
> +    if ( vtpm_globals.srk_handle == 0 ) {
> +        TPMTRYRETURN(tpm2_take_ownership());
> +    } else {
> +        tpm2_AuthArea_ctor(NULL, 0, &vtpm_globals.srk_auth_area);
> +    }
>  
>     /* create SK */
>      TPM2_Create_Params_out out;
> -- 
> 2.30.2
> 
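The command-line handling the patch adds (match the literal "srk_handle=" prefix with strncmp, then parse the remainder as hex with sscanf "%x") can be sketched as a small standalone helper; parse_srk_handle is an illustrative wrapper, not the vtpmmgr API:

```c
#include <assert.h>
#include <stdio.h>
#include <string.h>

/* Returns 0 and stores the parsed handle on success, -1 when the
 * argument is not this option or the value is not valid hex --
 * mirroring the goto err_invalid path in parse_cmdline_opts. */
static int parse_srk_handle(const char *arg, unsigned int *handle)
{
    if (strncmp(arg, "srk_handle=", 11) != 0)
        return -1;                       /* not this option */
    if (sscanf(arg + 11, "%x", handle) != 1)
        return -1;                       /* invalid hex value */
    return 0;
}
```

When a nonzero handle is supplied, vtpmmgr2_create() skips tpm2_take_ownership() and instead constructs a null auth area, which is why the handle must be usable with Null auth for now.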

-- 
Samuel
"...[Linux's] capacity to talk via any medium except smoke signals."
(By Dr. Greg Wettstein, Roger Maris Cancer Center)


From xen-devel-bounces@lists.xenproject.org Sat May 08 15:14:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 15:14:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124346.234648 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfOeV-0000IG-N9; Sat, 08 May 2021 15:13:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124346.234648; Sat, 08 May 2021 15:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfOeV-0000HB-K4; Sat, 08 May 2021 15:13:59 +0000
Received: by outflank-mailman (input) for mailman id 124346;
 Sat, 08 May 2021 15:13:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfOeT-0000Ef-Mt
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 15:13:57 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5988f13d-6048-4f93-b5d2-06a1756efe13;
 Sat, 08 May 2021 15:13:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 14990201A8;
 Sat,  8 May 2021 13:35:00 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id 0Bi0iRRNeYzz; Sat,  8 May 2021 13:34:59 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id E8CDB201A7;
 Sat,  8 May 2021 13:34:59 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLEZ-00BM8V-8r; Sat, 08 May 2021 13:34:59 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:34:59 +0200
Resent-Message-ID: <20210508113459.idepe2p4eizd3fhj@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 dgdegra@tycho.nsa.gov, quan.xu0@gmail.com
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5988f13d-6048-4f93-b5d2-06a1756efe13
Date: Thu, 6 May 2021 23:42:32 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 12/13] vtpmmgr: Check req_len before unpacking command
Message-ID: <20210506214232.qv6ft4giyfoujsie@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-13-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506135923.161427-13-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:22 -0400, wrote:
> vtpm_handle_cmd doesn't ensure there is enough space before unpacking
> the req buffer.  Add a minimum size check.  Called functions will have
> to do their own checking if they need more data from the request.
> 
> The error case is tricky since abort_egress wants to reply with a
> corresponding tag.  Just hardcode TPM_TAG_RQU_COMMAND since the vtpm is
> sending in malformed commands in the first place.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/vtpm_cmd_handler.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> index c879b24c13..5586be6997 100644
> --- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
> +++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> @@ -840,6 +840,12 @@ TPM_RESULT vtpmmgr_handle_cmd(
>  	UINT32 size;
>  	TPM_COMMAND_CODE ord;
>  
> +	if (tpmcmd->req_len < sizeof_TPM_RQU_HEADER(tpmcmd->req)) {
> +		status = TPM_BAD_PARAMETER;
> +		tag = TPM_TAG_RQU_COMMAND;
> +		goto abort_egress;
> +	}
> +
>  	unpack_TPM_RQU_HEADER(tpmcmd->req,
>  			&tag, &size, &ord);
>  
> -- 
> 2.30.2
> 

-- 
Samuel
I was cleaning my mouse and the shot went off...
 -+- s on #ens-mim - and what's more, it's true... -+-


From xen-devel-bounces@lists.xenproject.org Sat May 08 15:14:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 15:14:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124347.234668 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfOeZ-0000mj-0t; Sat, 08 May 2021 15:14:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124347.234668; Sat, 08 May 2021 15:14:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfOeY-0000mc-U5; Sat, 08 May 2021 15:14:02 +0000
Received: by outflank-mailman (input) for mailman id 124347;
 Sat, 08 May 2021 15:14:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfOeY-0000Ee-Bk
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 15:14:02 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5773dd27-4c3e-4e59-87f2-31da74938602;
 Sat, 08 May 2021 15:13:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id 1E524201AA;
 Sat,  8 May 2021 13:35:08 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id m4VFAkahei4v; Sat,  8 May 2021 13:35:08 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id F0853201A9;
 Sat,  8 May 2021 13:35:07 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLEh-00BM8c-CP; Sat, 08 May 2021 13:35:07 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:35:07 +0200
Resent-Message-ID: <20210508113507.vama7kfpk2u33xcj@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 iwj@xenproject.org, wl@xen.org
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5773dd27-4c3e-4e59-87f2-31da74938602
Date: Thu, 6 May 2021 23:52:55 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 13/13] vtpm: Correct timeout units and command duration
Message-ID: <20210506215255.yftnedauoz4e3bga@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-14-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210506135923.161427-14-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:23 -0400, wrote:
> Add two patches:
> vtpm-microsecond-duration.patch fixes the units for timeouts and command
> durations.
> vtpm-command-duration.patch increases the timeout Linux uses to allow
> commands to succeed.
> 
> Linux works around low timeouts, but not low durations.  The second
> patch allows commands to complete that often timeout with the lower
> command durations.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/Makefile                        |  2 +
>  stubdom/vtpm-command-duration.patch     | 52 +++++++++++++++++++++++++
>  stubdom/vtpm-microsecond-duration.patch | 52 +++++++++++++++++++++++++
>  3 files changed, 106 insertions(+)
>  create mode 100644 stubdom/vtpm-command-duration.patch
>  create mode 100644 stubdom/vtpm-microsecond-duration.patch
> 
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index c6de5f68ae..06aa69d8bc 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -239,6 +239,8 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>  	patch -d $@ -p1 < vtpm-implicit-fallthrough.patch
>  	patch -d $@ -p1 < vtpm_TPM_ChangeAuthAsymFinish.patch
>  	patch -d $@ -p1 < vtpm_extern.patch
> +	patch -d $@ -p1 < vtpm-microsecond-duration.patch
> +	patch -d $@ -p1 < vtpm-command-duration.patch
>  	mkdir $@/build
>  	cd $@/build; CC=${CC} $(CMAKE) .. -DCMAKE_C_FLAGS:STRING="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>  	touch $@
> diff --git a/stubdom/vtpm-command-duration.patch b/stubdom/vtpm-command-duration.patch
> new file mode 100644
> index 0000000000..6fdf2fc9be
> --- /dev/null
> +++ b/stubdom/vtpm-command-duration.patch
> @@ -0,0 +1,52 @@
> +From e7c976b5864e7d2649292d90ea60d5aea091a990 Mon Sep 17 00:00:00 2001
> +From: Jason Andryuk <jandryuk@gmail.com>
> +Date: Sun, 14 Mar 2021 12:46:34 -0400
> +Subject: [PATCH 2/2] Increase command durations
> +
> +With Linux 5.4 xen-tpmfront and a Xen vtpm-stubdom, xen-tpmfront was
> +failing commands with -ETIME:
> +tpm tpm0: tpm_try_transmit: send(): error-62
> +
> +The vtpm was returning the data, but it was after the duration timeout
> +in vtpm_send.  Linux may have started being more stringent about timing?
> +
> +The vtpm-stubdom has a little delay since it writes its disk before
> +returning the response.
> +
> +Anyway, the durations are rather low.  When they were 1/10/1000 before
> +converting to microseconds, Linux showed all three durations rounded to
> +10000.  Update them with values from a physical TPM1.2.  These were
> +taken from a WEC which was software downgraded from a TPM2 to a TPM1.2.
> +They might be excessive, but I'd rather have a command succeed than
> +return -ETIME.
> +
> +An IFX physical TPM1.2 uses:
> +1000000
> +1500000
> +150000000
> +
> +Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> +---
> + tpm/tpm_data.c | 6 +++---
> + 1 file changed, 3 insertions(+), 3 deletions(-)
> +
> +diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
> +index bebaf10..844afca 100644
> +--- a/tpm/tpm_data.c
> ++++ b/tpm/tpm_data.c
> +@@ -71,9 +71,9 @@ static void init_timeouts(void)
> +   tpmData.permanent.data.tis_timeouts[1] = 2000000;
> +   tpmData.permanent.data.tis_timeouts[2] = 750000;
> +   tpmData.permanent.data.tis_timeouts[3] = 750000;
> +-  tpmData.permanent.data.cmd_durations[0] = 1000;
> +-  tpmData.permanent.data.cmd_durations[1] = 10000;
> +-  tpmData.permanent.data.cmd_durations[2] = 1000000;
> ++  tpmData.permanent.data.cmd_durations[0] = 3000000;
> ++  tpmData.permanent.data.cmd_durations[1] = 3000000;
> ++  tpmData.permanent.data.cmd_durations[2] = 600000000;
> + }
> + 
> + void tpm_init_data(void)
> +-- 
> +2.30.2
> +
> diff --git a/stubdom/vtpm-microsecond-duration.patch b/stubdom/vtpm-microsecond-duration.patch
> new file mode 100644
> index 0000000000..7a906e72c5
> --- /dev/null
> +++ b/stubdom/vtpm-microsecond-duration.patch
> @@ -0,0 +1,52 @@
> +From 5a510e0afd7c288e3f0fb3523ec749ba1366ad61 Mon Sep 17 00:00:00 2001
> +From: Jason Andryuk <jandryuk@gmail.com>
> +Date: Sun, 14 Mar 2021 12:42:10 -0400
> +Subject: [PATCH 1/2] Use microseconds for timeouts and durations
> +
> +The timeout and duration fields should be in microseconds according to
> +the spec.
> +
> +TPM_CAP_PROP_TIS_TIMEOUT:
> +A 4 element array of UINT32 values each denoting the timeout value in
> +microseconds for the following in this order:
> +
> +TPM_CAP_PROP_DURATION:
> +A 3 element array of UINT32 values each denoting the duration value in
> +microseconds of the duration of the three classes of commands:
> +
> +Linux will scale the timeouts up by 1000, but not the durations.  Change
> +the units for both sets as appropriate.
> +
> +Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> +---
> + tpm/tpm_data.c | 14 +++++++-------
> + 1 file changed, 7 insertions(+), 7 deletions(-)
> +
> +diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
> +index a3a79ef..bebaf10 100644
> +--- a/tpm/tpm_data.c
> ++++ b/tpm/tpm_data.c
> +@@ -67,13 +67,13 @@ static void init_nv_storage(void)
> + static void init_timeouts(void)
> + {
> +   /* for the timeouts we use the PC platform defaults */
> +-  tpmData.permanent.data.tis_timeouts[0] = 750;
> +-  tpmData.permanent.data.tis_timeouts[1] = 2000;
> +-  tpmData.permanent.data.tis_timeouts[2] = 750;
> +-  tpmData.permanent.data.tis_timeouts[3] = 750;
> +-  tpmData.permanent.data.cmd_durations[0] = 1;
> +-  tpmData.permanent.data.cmd_durations[1] = 10;
> +-  tpmData.permanent.data.cmd_durations[2] = 1000;
> ++  tpmData.permanent.data.tis_timeouts[0] = 750000;
> ++  tpmData.permanent.data.tis_timeouts[1] = 2000000;
> ++  tpmData.permanent.data.tis_timeouts[2] = 750000;
> ++  tpmData.permanent.data.tis_timeouts[3] = 750000;
> ++  tpmData.permanent.data.cmd_durations[0] = 1000;
> ++  tpmData.permanent.data.cmd_durations[1] = 10000;
> ++  tpmData.permanent.data.cmd_durations[2] = 1000000;
> + }
> + 
> + void tpm_init_data(void)
> +-- 
> +2.30.2
> +
> -- 
> 2.30.2
> 

-- 
Samuel
For a father, better to die than to do lots of calculations and not look
after his son
 -+- y on #ens-mim - dark tales of zombies -+-


From xen-devel-bounces@lists.xenproject.org Sat May 08 17:34:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 17:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124392.234680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfQpy-0005wi-6A; Sat, 08 May 2021 17:33:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124392.234680; Sat, 08 May 2021 17:33:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfQpy-0005wb-3C; Sat, 08 May 2021 17:33:58 +0000
Received: by outflank-mailman (input) for mailman id 124392;
 Sat, 08 May 2021 17:33:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfQpx-0005wP-1q
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 17:33:57 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c563adcd-48fa-48a0-9146-944e56fd711e;
 Sat, 08 May 2021 17:33:55 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id EC72B201A6;
 Sat,  8 May 2021 13:34:51 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id G7WusLwhDq2v; Sat,  8 May 2021 13:34:51 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id CD0EC201A5;
 Sat,  8 May 2021 13:34:51 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLER-00BM8R-90; Sat, 08 May 2021 13:34:51 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:34:51 +0200
Resent-Message-ID: <20210508113451.l7byctvejcxw3ab4@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 dgdegra@tycho.nsa.gov, quan.xu0@gmail.com
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c563adcd-48fa-48a0-9146-944e56fd711e
Date: Thu, 6 May 2021 23:41:54 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 11/13] vtpmmgr: Fix owner_auth & srk_auth parsing
Message-ID: <20210506214154.ssot4jwf64k22wth@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-12-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506135923.161427-12-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:21 -0400, wrote:
> Argument parsing only matches to before ':' and then the string with
> leading ':' is passed to parse_auth_string which fails to parse.  Extend
> the length to include the separator in the match.
> 
> While here, switch the separator to "=".  The man page documented "="
> and the other tpm.* arguments already use "=".  Since it didn't work
> before, we don't need to worry about backwards compatibility.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
>  stubdom/vtpmmgr/init.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 4ae34a4fcb..62dc5994de 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -289,16 +289,16 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
>     memcpy(vtpm_globals.srk_auth, WELLKNOWN_AUTH, sizeof(TPM_AUTHDATA));
>  
>     for(i = 1; i < argc; ++i) {
> -      if(!strncmp(argv[i], "owner_auth:", 10)) {
> -         if((rc = parse_auth_string(argv[i] + 10, vtpm_globals.owner_auth)) < 0) {
> +      if(!strncmp(argv[i], "owner_auth=", 11)) {
> +         if((rc = parse_auth_string(argv[i] + 11, vtpm_globals.owner_auth)) < 0) {
>              goto err_invalid;
>           }
>           if(rc == 1) {
>              opts->gen_owner_auth = 1;
>           }
>        }
> -      else if(!strncmp(argv[i], "srk_auth:", 8)) {
> -         if((rc = parse_auth_string(argv[i] + 8, vtpm_globals.srk_auth)) != 0) {
> +      else if(!strncmp(argv[i], "srk_auth=", 9)) {
> +         if((rc = parse_auth_string(argv[i] + 9, vtpm_globals.srk_auth)) != 0) {
>              goto err_invalid;
>           }
>        }
> -- 
> 2.30.2
> 

-- 
Samuel
c> ah (can Fluide Glacial be found on the net, or do you have to go out into the real world?)
s> in the real world
c> darn


From xen-devel-bounces@lists.xenproject.org Sat May 08 17:34:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 17:34:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124393.234692 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfQpz-0006Cx-FR; Sat, 08 May 2021 17:33:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124393.234692; Sat, 08 May 2021 17:33:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfQpz-0006Cq-CR; Sat, 08 May 2021 17:33:59 +0000
Received: by outflank-mailman (input) for mailman id 124393;
 Sat, 08 May 2021 17:33:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0wEz=KD=ens-lyon.org=samuel.thibault@bounce.ens-lyon.org>)
 id 1lfQpx-0005wV-F3
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 17:33:57 +0000
Received: from sonata.ens-lyon.org (unknown [140.77.166.138])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 755d190e-71a9-4527-a9e7-d4bf7266af85;
 Sat, 08 May 2021 17:33:56 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by sonata.ens-lyon.org (Postfix) with ESMTP id F3BDB201A0;
 Sat,  8 May 2021 13:34:22 +0200 (CEST)
Received: from sonata.ens-lyon.org ([127.0.0.1])
 by localhost (sonata.ens-lyon.org [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id OWo9_DJXw0bR; Sat,  8 May 2021 13:34:22 +0200 (CEST)
Received: from begin (lfbn-bor-1-56-204.w90-50.abo.wanadoo.fr [90.50.148.204])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256
 bits)) (No client certificate requested)
 by sonata.ens-lyon.org (Postfix) with ESMTPSA id C40792019F;
 Sat,  8 May 2021 13:34:22 +0200 (CEST)
Received: from samy by begin with local (Exim 4.94)
 (envelope-from <samuel.thibault@ens-lyon.org>)
 id 1lfLDy-00BM8A-1v; Sat, 08 May 2021 13:34:22 +0200
Resent-From: Samuel Thibault <samuel.thibault@ens-lyon.org>
Resent-Date: Sat, 8 May 2021 13:34:22 +0200
Resent-Message-ID: <20210508113422.b5k42dbsyapmqeze@begin>
Resent-To: jandryuk@gmail.com, xen-devel@lists.xenproject.org,
 iwj@xenproject.org, wl@xen.org, dgdegra@tycho.nsa.gov,
 quan.xu0@gmail.com
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 755d190e-71a9-4527-a9e7-d4bf7266af85
Date: Thu, 6 May 2021 23:35:44 +0200
From: Samuel Thibault <samuel.thibault@ens-lyon.org>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
Subject: Re: [PATCH v2 04/13] vtpmmgr: Allow specifying srk_handle for TPM2
Message-ID: <20210506213544.6twiioapinyzajb4@begin>
Mail-Followup-To: Samuel Thibault <samuel.thibault@ens-lyon.org>,
	Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>,
	Quan Xu <quan.xu0@gmail.com>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-5-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210506135923.161427-5-jandryuk@gmail.com>
Organization: I am not organized
User-Agent: NeoMutt/20170609 (1.8.3)

Jason Andryuk, on Thu, 06 May 2021 09:59:14 -0400, wrote:
> Bypass taking ownership of the TPM2 if an srk_handle is specified.
> 
> This srk_handle must be usable with Null auth for the time being.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>

> ---
> v2: Use "=" separator
> ---
>  docs/man/xen-vtpmmgr.7.pod |  7 +++++++
>  stubdom/vtpmmgr/init.c     | 11 ++++++++++-
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
> index 875dcce508..3286954568 100644
> --- a/docs/man/xen-vtpmmgr.7.pod
> +++ b/docs/man/xen-vtpmmgr.7.pod
> @@ -92,6 +92,13 @@ Valid arguments:
>  
>  =over 4
>  
> +=item srk_handle=<HANDLE>
> +
> +Specify a srk_handle for TPM 2.0.  TPM 2.0 uses a key hierarchy, and
> +this allows specifying the parent handle for vtpmmgr to create its own
> +key under.  Using this option bypasses vtpmmgr trying to take ownership
> +of the TPM.
> +
>  =item owner_auth=<AUTHSPEC>
>  
>  =item srk_auth=<AUTHSPEC>
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 1506735051..130e4f4bf6 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -302,6 +302,11 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
>              goto err_invalid;
>           }
>        }
> +      else if(!strncmp(argv[i], "srk_handle=", 11)) {
> +         if(sscanf(argv[i] + 11, "%x", &vtpm_globals.srk_handle) != 1) {
> +            goto err_invalid;
> +         }
> +      }
>        else if(!strncmp(argv[i], "tpmdriver=", 10)) {
>           if(!strcmp(argv[i] + 10, "tpm_tis")) {
>              opts->tpmdriver = TPMDRV_TPM_TIS;
> @@ -586,7 +591,11 @@ TPM_RESULT vtpmmgr2_create(void)
>  {
>      TPM_RESULT status = TPM_SUCCESS;
>  
> -    TPMTRYRETURN(tpm2_take_ownership());
> +    if ( vtpm_globals.srk_handle == 0 ) {
> +        TPMTRYRETURN(tpm2_take_ownership());
> +    } else {
> +        tpm2_AuthArea_ctor(NULL, 0, &vtpm_globals.srk_auth_area);
> +    }
>  
>     /* create SK */
>      TPM2_Create_Params_out out;
> -- 
> 2.30.2
> 

-- 
Samuel
"...[Linux's] capacity to talk via any medium except smoke signals."
(By Dr. Greg Wettstein, Roger Maris Cancer Center)


From xen-devel-bounces@lists.xenproject.org Sat May 08 18:33:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 18:33:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124407.234703 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfRlb-0003kO-Pc; Sat, 08 May 2021 18:33:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124407.234703; Sat, 08 May 2021 18:33:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfRlb-0003kH-Mc; Sat, 08 May 2021 18:33:31 +0000
Received: by outflank-mailman (input) for mailman id 124407;
 Sat, 08 May 2021 18:33:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfRla-0003k7-F6; Sat, 08 May 2021 18:33:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfRla-0003Wc-7A; Sat, 08 May 2021 18:33:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfRlZ-0007PM-RK; Sat, 08 May 2021 18:33:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfRlZ-0007DM-Qo; Sat, 08 May 2021 18:33:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0envFA1umEgVUPPQGzcy8DsvbIsEuf3LCrkeWKSL/kE=; b=1XeQTL6uERVibCuWpqVozAIt93
	8II+E7HKcjddAD8HTPytPAdobaK6Jg6O/+StG8zwURtaADkc6Sf1K/2DP8fy0u6M7mSON61hgAECw
	PCSlXMCJgxtyghkK1x59lX+0IDeaQP13xk76Ig/ydXsprUoolSMq0rRy7qhbNkKqgVwE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161851-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161851: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-xsa-286:fail:regression
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-xsa-286:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
X-Osstest-Versions-That:
    xen=7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 May 2021 18:33:29 +0000

flight 161851 xen-unstable real [real]
flight 161858 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161851/
http://logs.test-lab.xenproject.org/osstest/logs/161858/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-xtf-amd64-amd64-5       92 xtf/test-pv32pae-xsa-286 fail REGR. vs. 161836

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-2  92 xtf/test-pv32pae-xsa-286 fail pass in 161858-retest
 test-amd64-amd64-examine      4 memdisk-try-append  fail pass in 161858-retest
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail pass in 161858-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161836
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161836
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161836
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161836
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161836
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161836
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161836
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161836
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161836
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161836
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161836
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7
baseline version:
 xen                  7a2b787880bddbb3bd68b18efe1d6fe339df6ff1

Last test of basis   161836  2021-05-07 14:06:46 Z    1 days
Testing same since   161851  2021-05-08 07:36:24 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  fail    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit a7da84c457b05479ab423a2e589c5f46c7da0ed7
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu May 6 09:59:15 2021 -0400

    vtpmmgr: Move vtpmmgr_shutdown
    
    Reposition vtpmmgr_shutdown so it can call flush_tpm2 without a forward
    declaration.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

commit 244fdf0027b223cc7f402783c1c6084e179e7064
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu May 6 09:59:13 2021 -0400

    stubom: newlib: Enable C99 formats for %z
    (title as committed; "stubom" is presumably a typo for "stubdom")
    
    vtpmmgr was changed to print size_t with the %z modifier, but newlib
    isn't compiled with %z support.  So you get output like:
    
    root seal: zu; sector of 13: zu
    root: zu v=zu
    itree: 36; sector of 112: zu
    group: zu v=zu id=zu md=zu
    group seal: zu; 5 in parent: zu; sector of 13: zu
    vtpm: zu+zu; sector of 48: zu
    
    Enable the C99 formats in newlib so vtpmmgr prints the numeric values.
    
    Fixes: 9379af08ccc0 ("stubdom: vtpmmgr: Correctly format size_t with %z when printing.")
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

commit 15a59d6ef3acdd816578eecca7a9247fd38bdf99
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu May 6 09:59:12 2021 -0400

    vtpmmgr: Print error code to aid debugging
    
    tpm_get_error_name returns "Unknown Error Code" when an error string
    is not defined.  In that case, we should print the Error Code so it can
    be looked up offline.  tpm_get_error_name returns a const string, so
    just have the two callers always print the error code so it is always
    available.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
    Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

commit 93b2558fae83ab3a6a9b48c851d48ccf57be2298
Author: Jason Andryuk <jandryuk@gmail.com>
Date:   Thu May 6 09:59:11 2021 -0400

    docs: Warn about incomplete vtpmmgr TPM 2.0 support
    
    The vtpmmgr TPM 2.0 support is incomplete.  Add a warning about that to
    the documentation so others don't have to work through discovering it is
    broken.
    
    Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

commit 27a4986d4fcd6a1bfdac9cafbce1a2f7a58f796e
Author: Olaf Hering <olaf@aepfle.de>
Date:   Thu May 6 17:17:01 2021 +0200

    tools: fix incorrect suggestions for XENCONSOLED_TRACE on BSD
    
    --log does not take a file, it specifies what is supposed to be logged.
    
    Also separate the XENSTORED and XENCONSOLED variables by a newline.
    
    Signed-off-by: Olaf Hering <olaf@aepfle.de>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
(qemu changes not included)
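
For context on the --log fix above: xenconsoled's --log flag selects a logging category, not a log file; the destination is controlled separately. A sketch of the corrected usage (option and category names assumed from xenconsoled's in-tree help of that era, not verified here):

```shell
# XENCONSOLED_TRACE selects *what* xenconsoled logs (e.g. guest, hv,
# or all), not where the log goes.
XENCONSOLED_TRACE=all

# The resulting daemon invocation then looks like:
xenconsoled --log=all --log-dir=/var/log/xen/console
```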


From xen-devel-bounces@lists.xenproject.org Sat May 08 18:38:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 18:38:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124415.234719 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfRpv-0004Vn-KM; Sat, 08 May 2021 18:37:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124415.234719; Sat, 08 May 2021 18:37:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfRpv-0004Vg-HJ; Sat, 08 May 2021 18:37:59 +0000
Received: by outflank-mailman (input) for mailman id 124415;
 Sat, 08 May 2021 18:37:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sqKf=KD=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1lfRpt-0004VZ-Tu
 for xen-devel@lists.xenproject.org; Sat, 08 May 2021 18:37:57 +0000
Received: from mail-qk1-x72a.google.com (unknown [2607:f8b0:4864:20::72a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 79f3ee9b-e0eb-4831-a470-1a29debc4a2f;
 Sat, 08 May 2021 18:37:56 +0000 (UTC)
Received: by mail-qk1-x72a.google.com with SMTP id k127so11803211qkc.6
 for <xen-devel@lists.xenproject.org>; Sat, 08 May 2021 11:37:56 -0700 (PDT)
Received: from smtpclient.apple ([199.33.71.18])
 by smtp.gmail.com with ESMTPSA id 28sm2619888qkr.36.2021.05.08.11.37.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sat, 08 May 2021 11:37:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79f3ee9b-e0eb-4831-a470-1a29debc4a2f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:message-id:date
         :cc:to;
        bh=GYEZwNqgXbl+DgKBwxyaj9ItIrgeS+e3z48vLMFgvdU=;
        b=kHjRvNPn61k8yLJgxnkBOjZenTSr7RToDiCO3CJssOxRZMXG5Eec0W0oZFCoWiZrLW
         Il6M3rf3JlPs7sVlPvQ3SdtMs4nCGS7sXjvvPu8B8gmi+9NjI/HMz0c/up0p7vUHxuMF
         iY1p08hEPHJ3z2J07WM11ayvNBB1xTKX8KrvpW5WLLh+3py6f+O9NLjKurBsvdApzlm5
         m9ikqSksGRdhCnkFfH5TfWuQ5tejSQmTr4e0sOr5LxbI/a70vsGwrucL7cr0rN3YpwVQ
         mvn5fJ7Naa/NrGvLibvurqEIfKHNvoGfbYlDZHWksj5xvQkTaxBzJF8HyiTWRhxFrnnu
         ko9g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:message-id:date:cc:to;
        bh=GYEZwNqgXbl+DgKBwxyaj9ItIrgeS+e3z48vLMFgvdU=;
        b=UWk0+U7sOrxEDJkUtLhrskHg+lYetH22KKVEVsx/O+nZGLNnCgVVtSKzbYcxa7cSrx
         +c2x2S28qT+o88AM8f0EhcptVFN4tsFW1wRaR0URk+jZLmiW28LWF11i0Yr6WnQ1vPLw
         Yj6l6Bw+RHOVwXOqmd5/pFxUmOQTcnxyjbspu/cK5jwRf5qZsmqPGCbJx5I3By3uDdb5
         DtDrv5SplnI6UqHxLYM++5rrQndKZmmCal+zHSLITKvDsrH4rwPDkFOx7gNmDvF8dIG+
         JmiECQDwyoKAtO5hrzQjS+FNSM5ju6CcT6nZDHXWDsugm2W5PeNE7srCLUI6XWPRknYg
         fXcg==
X-Gm-Message-State: AOAM5311NflEkhC3UfP8C18UwDUDMUCWhwXa357WNSi3+FKC4CNzl25O
	mJx9UEaEtt7NDaklpWMSKXw=
X-Google-Smtp-Source: ABdhPJz70fBlJjDntjBTpu84IHM0i0r1Y47aQcVYf+xCRIVz8sH18nVZq8KdEajkgtGAuNOjZqffFw==
X-Received: by 2002:a37:5dc5:: with SMTP id r188mr16107436qkb.303.1620499076613;
        Sat, 08 May 2021 11:37:56 -0700 (PDT)
Content-Type: multipart/alternative; boundary=Apple-Mail-5BB23005-C8D5-4CD0-A64D-8522016ADBF6
Content-Transfer-Encoding: 7bit
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: Re: [PATCH v2 01/13] docs: Warn about incomplete vtpmmgr TPM 2.0 support
Message-Id: <D66E1606-7354-4B1E-9F20-DA9BB830FAFA@gmail.com>
Date: Sat, 8 May 2021 14:37:55 -0400
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <Andrew.Cooper3@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
X-Mailer: iPad Mail (18E212)


--Apple-Mail-5BB23005-C8D5-4CD0-A64D-8522016ADBF6
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: 8bit

On May 6, 2021, at 10:00, Jason Andryuk <jandryuk@gmail.com> wrote:
> The vtpmmgr TPM 2.0 support is incomplete.  Add a warning about that to
> the documentation so others don't have to work through discovering it is
> broken.
>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
> docs/man/xen-vtpmmgr.7.pod | 11 +++++++++++
> 1 file changed, 11 insertions(+)
>
> diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
> index af825a7ffe..875dcce508 100644
> --- a/docs/man/xen-vtpmmgr.7.pod
> +++ b/docs/man/xen-vtpmmgr.7.pod
> @@ -222,6 +222,17 @@ XSM label, not the kernel.
>
> =head1 Appendix B: vtpmmgr on TPM 2.0
>
> +=head2 WARNING: Incomplete - cannot persist data
> +
> +TPM 2.0 support for vTPM manager is incomplete.  There is no support for
> +persisting an encryption key, so vTPM manager regenerates primary and secondary
> +key handles each boot.
> +
> +Also, the vTPM manger group command implementation hardcodes TPM 1.2 commands.
> +This means running manage-vtpmmgr.pl fails when the TPM 2.0 hardware rejects
> +the TPM 1.2 commands.  vTPM manager with TPM 2.0 cannot create groups and
> +therefore cannot persist vTPM contents.
> +
> =head2 Manager disk image setup:
>
> The vTPM Manager requires a disk image to store its encrypted data. The image
> --
> 2.30.2

Should SUPPORT.md also be updated?

https://xenbits.xen.org/gitweb/?p=xen.git;a=blob;f=SUPPORT.md;hb=refs/heads/master

Rich


--Apple-Mail-5BB23005-C8D5-4CD0-A64D-8522016ADBF6--


From xen-devel-bounces@lists.xenproject.org Sat May 08 21:22:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 08 May 2021 21:22:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124447.234733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfUOz-0002Kz-Mi; Sat, 08 May 2021 21:22:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124447.234733; Sat, 08 May 2021 21:22:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfUOz-0002Ks-JQ; Sat, 08 May 2021 21:22:21 +0000
Received: by outflank-mailman (input) for mailman id 124447;
 Sat, 08 May 2021 21:22:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfUOx-0002Ki-Q9; Sat, 08 May 2021 21:22:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfUOx-0006JX-Kv; Sat, 08 May 2021 21:22:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfUOx-0007Wo-8H; Sat, 08 May 2021 21:22:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfUOx-0002WN-7l; Sat, 08 May 2021 21:22:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/HMNYRZqhu5EHU33lPrwN2FATWYd0sOf3xcTYPjHcCU=; b=dsmTfDWoeK0F4rQ6GF6S5wNHoa
	9F5S7Mp27TrGUUJ2Pklbq9jVIW1aHhAqVNXm6jxrTMYBQiGZ4z/OKZhsU7ZLyMqYzcUBJPiLbAx8+
	+Tck/DsPjdMYwWlDFOLa5OVmDQOVpZkpm1oiz9AXHksrSyHeJjW9b/Ir6kv/f2SzoLdw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161856-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161856: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d90f154867ec0ec22fd719164b88716e8fd48672
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 08 May 2021 21:22:19 +0000

flight 161856 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161856/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d90f154867ec0ec22fd719164b88716e8fd48672
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  261 days
Failing since        152659  2020-08-21 14:07:39 Z  260 days  475 attempts
Testing same since   161826  2021-05-07 02:33:20 Z    1 days    4 attempts

------------------------------------------------------------
487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148130 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 09 00:09:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 May 2021 00:09:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124466.234764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfX0d-0000Fm-Oi; Sun, 09 May 2021 00:09:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124466.234764; Sun, 09 May 2021 00:09:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfX0d-0000Ff-L0; Sun, 09 May 2021 00:09:23 +0000
Received: by outflank-mailman (input) for mailman id 124466;
 Sun, 09 May 2021 00:09:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfX0d-0000FV-44; Sun, 09 May 2021 00:09:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfX0c-0001LP-V1; Sun, 09 May 2021 00:09:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfX0c-0000c0-IS; Sun, 09 May 2021 00:09:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfX0c-0005hK-Hv; Sun, 09 May 2021 00:09:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NnoGlAshy4WLfdaFI5CIGnzAa7zRlnfZLjy7MJmF3DI=; b=Cv+mzqs0uz/ysDMINYYO7/nvbI
	wN1P/SOmVQe5Af/t9zYPhXcDMpbMQH/6fLbipy+KVg317sFmPkaaNXf9dteUqMN12nH9YuHFi21N7
	IULCFc6JUcaUkrRC2iyTzoF+xvBf1CjZ8j7sTtrblJgFgBDwm0vEm/TDjw4FS0rB30kE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161857-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161857: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=dd860052c99b1e088352bdd4fb7aef46f8d2ef47
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 00:09:22 +0000

flight 161857 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161857/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                dd860052c99b1e088352bdd4fb7aef46f8d2ef47
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  281 days
Failing since        152366  2020-08-01 20:49:34 Z  280 days  467 attempts
Testing same since   161857  2021-05-08 12:21:08 Z    0 days    1 attempts

------------------------------------------------------------
6021 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1632421 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 09 04:19:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 May 2021 04:19:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124486.234809 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfauo-0003pK-A8; Sun, 09 May 2021 04:19:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124486.234809; Sun, 09 May 2021 04:19:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfauo-0003os-2H; Sun, 09 May 2021 04:19:38 +0000
Received: by outflank-mailman (input) for mailman id 124486;
 Sun, 09 May 2021 04:19:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfaum-0003oi-Oy; Sun, 09 May 2021 04:19:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfaum-0002fU-Fq; Sun, 09 May 2021 04:19:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfaum-0005H8-4z; Sun, 09 May 2021 04:19:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfaum-0008V5-4X; Sun, 09 May 2021 04:19:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=T0kgpEyiCHqBMsiXqVty59dEDNJXuiITawAA2GvhyPU=; b=CCTT8hq4lU+BAgUkp+uaTcamRK
	FFs4z+YGcU21LRFCoUrjaWzMZ/szZt/AYMmCUEUs9K8RZ8l4HT9Qjx3/K4pBv8M6g+ldPyS4CA67i
	lBVezhnjQB1axkEbBEVC4kR47mf1m9RqSG1I43TKqjLQo7FvEdKp8ZTsLTqUDfSkEFjQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161859-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161859: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-xtf-amd64-amd64-5:xtf/test-pv32pae-xsa-286:fail:heisenbug
    xen-unstable:test-xtf-amd64-amd64-2:xtf/test-pv32pae-xsa-286:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-credit2:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-xtf-amd64-amd64-1:xtf/test-pv32pae-xsa-286:fail:heisenbug
    xen-unstable:test-amd64-i386-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
X-Osstest-Versions-That:
    xen=7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 04:19:36 +0000

flight 161859 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161859/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-5 92 xtf/test-pv32pae-xsa-286 fail in 161851 pass in 161859
 test-xtf-amd64-amd64-2 92 xtf/test-pv32pae-xsa-286 fail in 161851 pass in 161859
 test-amd64-amd64-examine    4 memdisk-try-append fail in 161851 pass in 161859
 test-armhf-armhf-xl-credit2 18 guest-start/debian.repeat fail in 161851 pass in 161859
 test-xtf-amd64-amd64-1       92 xtf/test-pv32pae-xsa-286   fail pass in 161851
 test-amd64-i386-libvirt-xsm  20 guest-start/debian.repeat  fail pass in 161851

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161836
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161836
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161836
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161836
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161836
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161836
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161836
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161836
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161836
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161836
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161836
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7
baseline version:
 xen                  7a2b787880bddbb3bd68b18efe1d6fe339df6ff1

Last test of basis   161836  2021-05-07 14:06:46 Z    1 days
Testing same since   161851  2021-05-08 07:36:24 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7a2b787880..a7da84c457  a7da84c457b05479ab423a2e589c5f46c7da0ed7 -> master


From xen-devel-bounces@lists.xenproject.org Sun May 09 06:38:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 May 2021 06:38:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124504.234853 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfd4d-0008N0-K3; Sun, 09 May 2021 06:37:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124504.234853; Sun, 09 May 2021 06:37:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfd4d-0008Mt-GY; Sun, 09 May 2021 06:37:55 +0000
Received: by outflank-mailman (input) for mailman id 124504;
 Sun, 09 May 2021 06:37:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfd4b-0008Mj-Tj; Sun, 09 May 2021 06:37:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfd4b-0005rJ-Ng; Sun, 09 May 2021 06:37:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfd4b-0004Z8-EW; Sun, 09 May 2021 06:37:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfd4b-0004Ib-Dz; Sun, 09 May 2021 06:37:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KzLJnz8ZoJ5cceagrKdTdFwYb9C6AzHCKcL+q9gCfME=; b=KuGbs7hXzypsluu4T+q/Cz0rgJ
	88yCDaw0LZ+ew2m31+lSRAFaeutkEWmRBmPrUSsf+Lg9m6mlhEC6BbfbQCVwj4hk+YT1//+SrIE86
	p+VFADo7DS8glP7s/4ULSrPxJ7f9hSocNf43ufbHs48HPGvCHiISqtxjNGc2sJHNy7ME=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161871-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161871: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d1873e03b461c6a8535e338aa6869ece757fded3
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 06:37:53 +0000

flight 161871 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161871/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d1873e03b461c6a8535e338aa6869ece757fded3
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  303 days
Failing since        151818  2020-07-11 04:18:52 Z  302 days  295 attempts
Testing same since   161848  2021-05-08 04:23:52 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 56845 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 09 09:06:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 May 2021 09:06:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124532.234879 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lffO6-0005KH-1B; Sun, 09 May 2021 09:06:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124532.234879; Sun, 09 May 2021 09:06:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lffO5-0005KA-US; Sun, 09 May 2021 09:06:09 +0000
Received: by outflank-mailman (input) for mailman id 124532;
 Sun, 09 May 2021 09:06:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lffO4-0005Jz-Ix; Sun, 09 May 2021 09:06:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lffO4-0000MY-BP; Sun, 09 May 2021 09:06:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lffO4-00044j-1w; Sun, 09 May 2021 09:06:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lffO4-0000vp-1V; Sun, 09 May 2021 09:06:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JtYvPONKSX0jRlQLW5Ip9WskFp6iq3lw++xiRwXhfqM=; b=lrM/vzJuGGa6eKnTuB6ZUYdu9R
	MozXBZoAP5L/nVPNHGOf4BPeENetonMq6pdP38b5dLUQRyysujaxsEQ5UdZ4kdlc84P2huhyNs+8r
	Zg3Ee7WkqVOfjc3c2Tc5n60GLCLzy2L3RXHS7yYSGuAnnkYx3RS+KTtZabWkDcTat7TE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161862-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161862: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d90f154867ec0ec22fd719164b88716e8fd48672
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 09:06:08 +0000

flight 161862 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161862/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d90f154867ec0ec22fd719164b88716e8fd48672
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  261 days
Failing since        152659  2020-08-21 14:07:39 Z  260 days  476 attempts
Testing same since   161826  2021-05-07 02:33:20 Z    2 days    5 attempts

------------------------------------------------------------
487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148130 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 09 09:48:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 May 2021 09:48:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124544.234895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfg2e-0000xn-Fh; Sun, 09 May 2021 09:48:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124544.234895; Sun, 09 May 2021 09:48:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfg2e-0000xg-Ci; Sun, 09 May 2021 09:48:04 +0000
Received: by outflank-mailman (input) for mailman id 124544;
 Sun, 09 May 2021 09:48:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfg2c-0000xU-PK; Sun, 09 May 2021 09:48:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfg2c-00011A-Ky; Sun, 09 May 2021 09:48:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfg2c-0006Y1-8m; Sun, 09 May 2021 09:48:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfg2c-0001PY-8H; Sun, 09 May 2021 09:48:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eP4kgsbfj8aswDhybzjVs0/RTcIDkoNqdgUIggWsK8g=; b=E+2In0W2tSUr63rH13KOGhrpb1
	yorIq/6NHJ/9sHPnMNCz7TrbBsqf7Nh07o1EzI2zd3NlpxtBjLEg98zr7Bp71EYzspQfffbuQ8Zv+
	ewxmYWc9D2oTAZD0KU5LkWgBlBjzEES3VVK12u1/nAdhRXK1wvKlMtPKYPWV6ZDBZ4Ak=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161877-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 161877: all pass - PUSHED
X-Osstest-Versions-This:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
X-Osstest-Versions-That:
    xen=8cccd6438e86112ab383e41b433b5a7e73be9621
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 09:48:02 +0000

flight 161877 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161877/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7
baseline version:
 xen                  8cccd6438e86112ab383e41b433b5a7e73be9621

Last test of basis   161786  2021-05-05 09:18:31 Z    4 days
Testing same since   161877  2021-05-09 09:18:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Julien Grall <julien@xen.org>
  Olaf Hering <olaf@aepfle.de>
  Roger Pau Monné <roger.pau@citrix.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   8cccd6438e..a7da84c457  a7da84c457b05479ab423a2e589c5f46c7da0ed7 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 09 12:06:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 May 2021 12:06:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124594.234922 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfiCm-0004zQ-J9; Sun, 09 May 2021 12:06:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124594.234922; Sun, 09 May 2021 12:06:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfiCm-0004zJ-FG; Sun, 09 May 2021 12:06:40 +0000
Received: by outflank-mailman (input) for mailman id 124594;
 Sun, 09 May 2021 12:06:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfiCk-0004z9-UO; Sun, 09 May 2021 12:06:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfiCk-0003Ji-HD; Sun, 09 May 2021 12:06:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfiCk-0005pY-6V; Sun, 09 May 2021 12:06:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfiCk-0000wM-61; Sun, 09 May 2021 12:06:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=V2xud7YLMi2Zz6sMnsR9YaLU0HhKMhNlV6vVj4qn9XQ=; b=ChMTuR59rg+nbnEidHh2uYvb+3
	5D83dT/c8FzKjvCfc/3qWekH85bqH2odxien5UqZRlELozRx9R7kqjn37z4x6NuNLS7nWpBh82B+u
	3YqWb0yvOGZAKXN4SxYtyCSbglF5SBgKuKr/tsxtL0sMA6GIW5eybZPTBQ8Kug3xAAxk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161865-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161865: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-saverestore.2:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b741596468b010af2846b75f5e75a842ce344a6e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 12:06:38 +0000

flight 161865 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161865/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 18 guest-saverestore.2      fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b741596468b010af2846b75f5e75a842ce344a6e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  281 days
Failing since        152366  2020-08-01 20:49:34 Z  280 days  468 attempts
Testing same since   161865  2021-05-09 00:41:13 Z    0 days    1 attempts

------------------------------------------------------------
6033 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1635715 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 09 16:17:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 May 2021 16:17:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124693.234949 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfm75-0001qu-H8; Sun, 09 May 2021 16:17:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124693.234949; Sun, 09 May 2021 16:17:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfm75-0001qn-DL; Sun, 09 May 2021 16:17:03 +0000
Received: by outflank-mailman (input) for mailman id 124693;
 Sun, 09 May 2021 16:17:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfm73-0001qd-JF; Sun, 09 May 2021 16:17:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfm73-0008BN-AN; Sun, 09 May 2021 16:17:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfm72-00044W-Vf; Sun, 09 May 2021 16:17:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfm72-0002CL-VC; Sun, 09 May 2021 16:17:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o5cllGyi70HTH4yrCZ8+W+PnwlBC4LbVrdMnNPo4nho=; b=xK6T6oCrL1nT+iPhDQ6umVTetj
	KUfRiv5oJWosc5XaSlOALyp1vTeTR0UfAyZdAXnkbBkYHd2RVD5/5N1vivsN2d6xEgZY/4/QhtnoX
	8JVpoAzh/8iuGLe8K2sZtj/C3HWQA0zGPwb/TtTuXJe2fju8Piz4EbNwDD4VZtfruJFE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161872-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161872: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
X-Osstest-Versions-That:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 16:17:00 +0000

flight 161872 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161872/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-examine      4 memdisk-try-append           fail  like 161851
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161859
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161859
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161859
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161859
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161859
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161859
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161859
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161859
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161859
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161859
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161859
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7
baseline version:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7

Last test of basis   161872  2021-05-09 04:22:50 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 09 21:32:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 09 May 2021 21:32:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124773.234981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfr2N-0004ER-4Q; Sun, 09 May 2021 21:32:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124773.234981; Sun, 09 May 2021 21:32:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lfr2N-0004EK-1J; Sun, 09 May 2021 21:32:31 +0000
Received: by outflank-mailman (input) for mailman id 124773;
 Sun, 09 May 2021 21:32:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfr2M-0004EA-3y; Sun, 09 May 2021 21:32:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfr2L-0005am-VF; Sun, 09 May 2021 21:32:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lfr2L-00015z-NA; Sun, 09 May 2021 21:32:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lfr2L-0005Kn-Mi; Sun, 09 May 2021 21:32:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=kccwpX06Xy9ldJC0zckLEO3vxUqyAPQLxUPpp8zLdzI=; b=MZBIlz/gjwwOck2BXYFb2ePB9T
	UYZ3Jja+6Aqj6gm7TbsAarH9HkXA4npOD45CoU0VHhC2N9L1/ZaK8DrZR2T04oUORqUMm/OQv7J1t
	vOR1rU5KElbIeBw00WE9fYhNRsAOUGwNflZHhv+q0YOM6FUf3wFeawX6CHYfKWEuuasc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-xl-qemuu-ws16-amd64
Message-Id: <E1lfr2L-0005Kn-Mi@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 21:32:29 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-xl-qemuu-ws16-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8a40754bca14df63c6d2ffe473b68a270dc50679
  Bug not present: dc04d25e2f3f7e26f7f97b860992076b5f04afdb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161881/


  (Revision log too long, omitted.)


*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1b507e55f8199eaad99744613823f6929e4d57c6
  Bug not present: 4083904bc9fe5da580f7ca397b1e828fbc322732
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161885/


  commit 1b507e55f8199eaad99744613823f6929e4d57c6
  Merge: 4083904bc9 8d17adf34f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Thu Mar 18 19:00:49 2021 +0000
  
      Merge remote-tracking branch 'remotes/berrange-gitlab/tags/dep-many-pull-request' into staging
      
      Remove many old deprecated features
      
      The following features have been deprecated for well over the two
      release cycles we promise
      
        ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
        ``-vnc acl`` (since 4.0.0)
        ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
        ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
        ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
        ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
        ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
        ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
        ``query-cpus`` (since 2.12.0)
        ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
        ``query-events`` (since 4.0)
        chardev client socket with ``wait`` option (since 4.0)
        ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
        ``ide-drive`` (since 4.2)
        ``scsi-disk`` (since 4.2)
      
      # gpg: Signature made Thu 18 Mar 2021 09:23:39 GMT
      # gpg:                using RSA key DAF3A6FDB26B62912D0E8E3FBE86EBB415104FDF
      # gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>" [full]
      # gpg:                 aka "Daniel P. Berrange <berrange@redhat.com>" [full]
      # Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF
      
      * remotes/berrange-gitlab/tags/dep-many-pull-request:
        block: remove support for using "file" driver with block/char devices
        block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
        block: remove dirty bitmaps 'status' field
        block: remove 'encryption_key_missing' flag from QAPI
        hw/scsi: remove 'scsi-disk' device
        hw/ide: remove 'ide-drive' device
        chardev: reject use of 'wait' flag for socket client chardevs
        machine: remove 'arch' field from 'query-cpus-fast' QMP command
        machine: remove 'query-cpus' QMP command
        migrate: remove QMP/HMP commands for speed, downtime and cache size
        monitor: remove 'query-events' QMP command
        monitor: raise error when 'pretty' option is used with HMP
        ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit e67d8e2928200e24ecb47c7be3ea8270077f2996
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:22:36 2021 +0000
  
      block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
      
      The same data is available in the 'BlockDeviceInfo' struct.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 81cbfd5088690c53541ffd0d74851c8ab055a829
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:19:54 2021 +0000
  
      block: remove dirty bitmaps 'status' field
      
      The same information is available via the 'recording' and 'busy' fields.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit ad1324e044240ae9fcf67e4c215481b7a35591b9
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:53:17 2021 +0000
  
      block: remove 'encryption_key_missing' flag from QAPI
      
      This has been hardcoded to "false" since 2.10.0, since secrets required
      to unlock block devices are now always provided up front instead of using
      interactive prompts.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 879be3af49132d232602e0ca783ec9b4112530fa
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/scsi: remove 'scsi-disk' device
      
      The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit b50101833987b47e0740f1621de48637c468c3d1
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/ide: remove 'ide-drive' device
      
      The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 24e13a4dc1eb1630eceffc7ab334145d902e763d
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:47:17 2021 +0000
  
      chardev: reject use of 'wait' flag for socket client chardevs
      
      This only makes sense conceptually when used with listener chardevs.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 445a5b4087567bf4d4ce76d394adf78d9d5c88a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:29:31 2021 +0000
  
      machine: remove 'arch' field from 'query-cpus-fast' QMP command
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:40:12 2021 +0000
  
      migrate: remove QMP/HMP commands for speed, downtime and cache size
      
      The generic 'migrate_set_parameters' command handles all types of parameter.
      
      Only the QMP commands were documented in the deprecations page, but the
      rationale for deprecating applies equally to HMP, and the replacements
      exist. Furthermore the HMP commands are just shims to the QMP commands,
      so removing the latter breaks the former unless they get re-implemented.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8becb36063fb14df1e3ae4916215667e2cb65fa2
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:35:15 2021 +0000
  
      monitor: remove 'query-events' QMP command
      
      The code comment suggests removing QAPIEvent_(str|lookup) symbols too,
      however, these are both auto-generated as standard for any enum in
      QAPI. As such, they'll exist whether we use them or not.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 283d845c9164f57f5dba020a4783bb290493802f
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:56:13 2021 +0000
  
      monitor: raise error when 'pretty' option is used with HMP
      
      This is only semantically useful for QMP.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 5994dcb8d8525ac044a31913c6bceeee788ec700
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:47:31 2021 +0000
  
      ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      The VNC ACL concept has been replaced by the pluggable "authz" framework
      which does not use monitor commands.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ws16-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ws16-amd64.guest-saverestore --summary-out=tmp/161885.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-xl-qemuu-ws16-amd64 guest-saverestore
Searching for failure / basis pass:
 161862 fail [host=fiano0] / 160125 ok.
Failure / basis pass flights: 161862 / 160125
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#030ba3097a6e7d08b99f8a9d19a322f61409c1f6-f297b7f20010711e36e981fe45645302cc9d109d git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#b12498fc575f2ad30f09fe78badc7fef526e2d76-d90f154867ec0ec22fd719164b88716e8fd48672 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee git://xenbits.xen.org/xen.git#21657ad4f01a634beac570c64c0691e51b9cf366-7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
Loaded 44367 nodes in revision graph
Searching for test results:
 160125 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 []
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161846 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 161847 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161850 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161852 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 17422da082ffcecb38bd1f2e2de6d56a61e8cd9c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161854 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161853 []
 161855 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0f418a207696b37f05d38f978c8873ee0a4f9815 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161861 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161868 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161870 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 87a80dc4f2f5e51894db143685a5e39c8ce6f651 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161873 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161874 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161875 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161878 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161879 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dc04d25e2f3f7e26f7f97b860992076b5f04afdb b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161881 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161882 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161883 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161884 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 161885 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
Searching for interesting versions
 Result found: flight 160125 (pass), for basis pass
 Result found: flight 161826 (fail), for basis failure (at ancestor ~2)
 Repro found: flight 161846 (pass), for basis pass
 Repro found: flight 161856 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
No revisions left to test, checking graph state.
 Result found: flight 161854 (pass), for last pass
 Result found: flight 161868 (fail), for first failure
 Repro found: flight 161875 (pass), for last pass
 Repro found: flight 161878 (fail), for first failure
 Repro found: flight 161879 (pass), for last pass
 Repro found: flight 161881 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8a40754bca14df63c6d2ffe473b68a270dc50679
  Bug not present: dc04d25e2f3f7e26f7f97b860992076b5f04afdb
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161881/


  (Revision log too long, omitted.)

 Result found: flight 161873 (pass), for last pass
 Result found: flight 161874 (fail), for first failure
 Repro found: flight 161882 (pass), for last pass
 Repro found: flight 161883 (fail), for first failure
 Repro found: flight 161884 (pass), for last pass
 Repro found: flight 161885 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1b507e55f8199eaad99744613823f6929e4d57c6
  Bug not present: 4083904bc9fe5da580f7ca397b1e828fbc322732
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/161885/


  commit 1b507e55f8199eaad99744613823f6929e4d57c6
  Merge: 4083904bc9 8d17adf34f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Thu Mar 18 19:00:49 2021 +0000
  
      Merge remote-tracking branch 'remotes/berrange-gitlab/tags/dep-many-pull-request' into staging
      
      Remove many old deprecated features
      
      The following features have been deprecated for well over the two
      release cycles we promise
      
        ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
        ``-vnc acl`` (since 4.0.0)
        ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
        ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
        ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
        ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
        ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
        ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
        ``query-cpus`` (since 2.12.0)
        ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
        ``query-events`` (since 4.0)
        chardev client socket with ``wait`` option (since 4.0)
        ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
        ``ide-drive`` (since 4.2)
        ``scsi-disk`` (since 4.2)
      
      # gpg: Signature made Thu 18 Mar 2021 09:23:39 GMT
      # gpg:                using RSA key DAF3A6FDB26B62912D0E8E3FBE86EBB415104FDF
      # gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>" [full]
      # gpg:                 aka "Daniel P. Berrange <berrange@redhat.com>" [full]
      # Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF
      
      * remotes/berrange-gitlab/tags/dep-many-pull-request:
        block: remove support for using "file" driver with block/char devices
        block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
        block: remove dirty bitmaps 'status' field
        block: remove 'encryption_key_missing' flag from QAPI
        hw/scsi: remove 'scsi-disk' device
        hw/ide: remove 'ide-drive' device
        chardev: reject use of 'wait' flag for socket client chardevs
        machine: remove 'arch' field from 'query-cpus-fast' QMP command
        machine: remove 'query-cpus' QMP command
        migrate: remove QMP/HMP commands for speed, downtime and cache size
        monitor: remove 'query-events' QMP command
        monitor: raise error when 'pretty' option is used with HMP
        ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit e67d8e2928200e24ecb47c7be3ea8270077f2996
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:22:36 2021 +0000
  
      block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
      
      The same data is available in the 'BlockDeviceInfo' struct.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 81cbfd5088690c53541ffd0d74851c8ab055a829
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:19:54 2021 +0000
  
      block: remove dirty bitmaps 'status' field
      
      The same information is available via the 'recording' and 'busy' fields.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit ad1324e044240ae9fcf67e4c215481b7a35591b9
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:53:17 2021 +0000
  
      block: remove 'encryption_key_missing' flag from QAPI
      
      This has been hardcoded to "false" since 2.10.0, since secrets required
      to unlock block devices are now always provided up front instead of using
      interactive prompts.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 879be3af49132d232602e0ca783ec9b4112530fa
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/scsi: remove 'scsi-disk' device
      
      The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit b50101833987b47e0740f1621de48637c468c3d1
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/ide: remove 'ide-drive' device
      
      The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 24e13a4dc1eb1630eceffc7ab334145d902e763d
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:47:17 2021 +0000
  
      chardev: reject use of 'wait' flag for socket client chardevs
      
      This only makes sense conceptually when used with listener chardevs.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 445a5b4087567bf4d4ce76d394adf78d9d5c88a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:29:31 2021 +0000
  
      machine: remove 'arch' field from 'query-cpus-fast' QMP command
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:40:12 2021 +0000
  
      migrate: remove QMP/HMP commands for speed, downtime and cache size
      
      The generic 'migrate_set_parameters' command handles all types of parameter.
      
      Only the QMP commands were documented in the deprecations page, but the
      rationale for deprecating applies equally to HMP, and the replacements
      exist. Furthermore the HMP commands are just shims to the QMP commands,
      so removing the latter breaks the former unless they get re-implemented.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8becb36063fb14df1e3ae4916215667e2cb65fa2
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:35:15 2021 +0000
  
      monitor: remove 'query-events' QMP command
      
      The code comment suggests removing QAPIEvent_(str|lookup) symbols too,
      however, these are both auto-generated as standard for any enum in
      QAPI. As such, they'll exist whether we use them or not.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 283d845c9164f57f5dba020a4783bb290493802f
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:56:13 2021 +0000
  
      monitor: raise error when 'pretty' option is used with HMP
      
      This is only semantically useful for QMP.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 5994dcb8d8525ac044a31913c6bceeee788ec700
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:47:31 2021 +0000
  
      ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      The VNC ACL concept has been replaced by the pluggable "authz" framework
      which does not use monitor commands.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.673996 to fit
pnmtopng: 212 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-xl-qemuu-ws16-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
161885: tolerable ALL FAIL

flight 161885 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/161885/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail baseline untested


jobs:
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Sun May 09 22:33:53 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161876-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161876: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 22:33:36 +0000

flight 161876 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161876/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d90f154867ec0ec22fd719164b88716e8fd48672
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  262 days
Failing since        152659  2020-08-21 14:07:39 Z  261 days  477 attempts
Testing same since   161826  2021-05-07 02:33:20 Z    2 days    6 attempts

------------------------------------------------------------
487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148130 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 09 23:25:43 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161880-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161880: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b741596468b010af2846b75f5e75a842ce344a6e
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 09 May 2021 23:25:33 +0000

flight 161880 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161880/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b741596468b010af2846b75f5e75a842ce344a6e
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  282 days
Failing since        152366  2020-08-01 20:49:34 Z  281 days  469 attempts
Testing same since   161865  2021-05-09 00:41:13 Z    0 days    2 attempts

------------------------------------------------------------
6033 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1635715 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 10 06:26:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 06:26:24 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161886-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161886: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d90f154867ec0ec22fd719164b88716e8fd48672
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 06:26:06 +0000

flight 161886 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161886/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                d90f154867ec0ec22fd719164b88716e8fd48672
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  262 days
Failing since        152659  2020-08-21 14:07:39 Z  261 days  478 attempts
Testing same since   161826  2021-05-07 02:33:20 Z    3 days    7 attempts

------------------------------------------------------------
487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148130 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 10 07:07:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 07:07:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124838.235042 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg00W-0003Nm-Ai; Mon, 10 May 2021 07:07:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124838.235042; Mon, 10 May 2021 07:07:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg00W-0003Nf-7T; Mon, 10 May 2021 07:07:12 +0000
Received: by outflank-mailman (input) for mailman id 124838;
 Mon, 10 May 2021 07:07:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg00U-0003NV-N5; Mon, 10 May 2021 07:07:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg00U-0005UC-Ep; Mon, 10 May 2021 07:07:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg00U-0001Wy-5G; Mon, 10 May 2021 07:07:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lg00U-0000dZ-4k; Mon, 10 May 2021 07:07:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=slBXBXoIP9ncsc6WoOKX3oGEhXql+CKpP7/SugEdXLw=; b=ybbFCLx+s1VbdHCGtP0jOLYwXP
	wcxao2cBSbGlT2x2eYBfWJebgtpLwFnbsbKDS+qObtWXq/VOVxdKDtlteQ1nMoLS/aYyq+vYA2UIo
	Pzr47eCi4Xd5dU2ACyZYrw7EhgDOHIHqsw7tViwekEu0oc98tXN+lcMb05fHx0L6R6JA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161889-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161889: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d1873e03b461c6a8535e338aa6869ece757fded3
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 07:07:10 +0000

flight 161889 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161889/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d1873e03b461c6a8535e338aa6869ece757fded3
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  304 days
Failing since        151818  2020-07-11 04:18:52 Z  303 days  296 attempts
Testing same since   161848  2021-05-08 04:23:52 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 56845 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 10 07:27:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 07:27:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124848.235057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0Jf-0005if-6E; Mon, 10 May 2021 07:26:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124848.235057; Mon, 10 May 2021 07:26:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0Jf-0005iY-27; Mon, 10 May 2021 07:26:59 +0000
Received: by outflank-mailman (input) for mailman id 124848;
 Mon, 10 May 2021 07:26:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg0Jd-0005iS-Nf
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 07:26:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71aed54e-1611-4428-8b53-49145054bb78;
 Mon, 10 May 2021 07:26:56 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 79E75AF83;
 Mon, 10 May 2021 07:26:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71aed54e-1611-4428-8b53-49145054bb78
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620631615; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=xYyFHuEJzA8bW84BDFLGHieoULclpUXmQ+5ryikikSc=;
	b=Vbtb+FaJMmaDpzduJR8BiknddjmAEJD2OKIczdh7vBmyMjkEQjzF1Yq3hXaUiQVQxDX22h
	BBGzM4wSKw1AWJ8KZK64QwBbAILC8uNbDFADDdBLRvTtl3ZoE14JQJ4D47fnSCvi6iBtdW
	WTEWG+HMNTyEbfQmHOS8ejIUbFdgBis=
Subject: Re: [PATCH 1/1] xen/unpopulated-alloc: fix error return code in
 fill_list()
To: Zhen Lei <thunder.leizhen@huawei.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Morton <akpm@linux-foundation.org>,
 Dan Carpenter <dan.carpenter@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 linux-kernel <linux-kernel@vger.kernel.org>
References: <20210508021913.1727-1-thunder.leizhen@huawei.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <9d117e16-869b-2780-cf46-a4eae591dcf9@suse.com>
Date: Mon, 10 May 2021 09:26:54 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210508021913.1727-1-thunder.leizhen@huawei.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="c8IW4CRW9p9hDoAkvLcUrsosdIjASDqIJ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--c8IW4CRW9p9hDoAkvLcUrsosdIjASDqIJ
Content-Type: multipart/mixed; boundary="fx7x3yKP6pikwEJllzBg9r3fltrlEKFRs";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Zhen Lei <thunder.leizhen@huawei.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Morton <akpm@linux-foundation.org>,
 Dan Carpenter <dan.carpenter@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 linux-kernel <linux-kernel@vger.kernel.org>
Message-ID: <9d117e16-869b-2780-cf46-a4eae591dcf9@suse.com>
Subject: Re: [PATCH 1/1] xen/unpopulated-alloc: fix error return code in
 fill_list()
References: <20210508021913.1727-1-thunder.leizhen@huawei.com>
In-Reply-To: <20210508021913.1727-1-thunder.leizhen@huawei.com>

--fx7x3yKP6pikwEJllzBg9r3fltrlEKFRs
Content-Type: multipart/mixed;
 boundary="------------B3B5B62A3A54C4C0604E2DDD"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------B3B5B62A3A54C4C0604E2DDD
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.05.21 04:19, Zhen Lei wrote:
> Fix to return a negative error code from the error handling case instead
> of 0, as done elsewhere in this function.
>
> Fixes: a4574f63edc6 ("mm/memremap_pages: convert to 'struct range'")
> Reported-by: Hulk Robot <hulkci@huawei.com>
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------B3B5B62A3A54C4C0604E2DDD
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------B3B5B62A3A54C4C0604E2DDD--

--fx7x3yKP6pikwEJllzBg9r3fltrlEKFRs--

--c8IW4CRW9p9hDoAkvLcUrsosdIjASDqIJ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCY4D4FAwAAAAAACgkQsN6d1ii/Ey8p
4ggAh3EiWzBp0LOasjOKlL9yLd//LN6Hh9qhQcEMHmMG/5oYRXXahNo8SZVcMK1l1hWVni80Dd5D
m1plMuOLPBHax+6R7MJVSz3EoiTrTjDAbs6hc2A0MLa8XIGcEu0H03zHXfLuemkj7TdIQ5tKSSlr
BIEE1rXh1AWeCl/JUexqy4/jZmuYR7cbWlIxI2hqsUb8yaCTdySv5tZaqQjOECSSSpuYGO/XGUIM
Y7Qwa1PTwXU7i4u+CypzOgN3BnrxeWRrJahzt7h/0My29pXWRKY0dNJXgkHdAE2t1XG6DrX7czY6
NDRQqSouaoULhpsSdFM6qPx8xbYKOIfVgdiHiYHdiw==
=20ku
-----END PGP SIGNATURE-----

--c8IW4CRW9p9hDoAkvLcUrsosdIjASDqIJ--


From xen-devel-bounces@lists.xenproject.org Mon May 10 07:31:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 07:31:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124853.235069 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0OH-000762-Ol; Mon, 10 May 2021 07:31:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124853.235069; Mon, 10 May 2021 07:31:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0OH-00075v-L6; Mon, 10 May 2021 07:31:45 +0000
Received: by outflank-mailman (input) for mailman id 124853;
 Mon, 10 May 2021 07:31:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg0OG-00075p-9N
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 07:31:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57493dd4-8167-4a74-9f7b-427baaf8dfed;
 Mon, 10 May 2021 07:31:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 73019AF83;
 Mon, 10 May 2021 07:31:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57493dd4-8167-4a74-9f7b-427baaf8dfed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620631902; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mY8k8SGs4hAq+6S/2g0vm+oUPS6TGS15HZ1yjDtmoFc=;
	b=kmfYBizb6VsJB0VvZtAaVlL9RqiHo3X9Ml1VKGe/TO+T7mNw0JKYgTBi3nJr2cjvoyHrtt
	8GNxjma96YwB++4NAs4moVnngfos3lwiPZkZryBOvDzM/Iq6MP1WpE4wCulIG6rdadm0m/
	WLgwceJVglALW8PZwv4jizU0Z5nWkYc=
Subject: Re: [PATCH v1] tools: add newlines to xenstored WRL_LOG
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210503154712.508-1-olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <17f8a84f-13c3-2d55-13b0-79abc7f83855@suse.com>
Date: Mon, 10 May 2021 09:31:41 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210503154712.508-1-olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="4SvcMA7KxV495cmCYUWLU0jSU3JcI1eR6"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--4SvcMA7KxV495cmCYUWLU0jSU3JcI1eR6
Content-Type: multipart/mixed; boundary="d6Lcf5BBCdxVrBTmRc1Ik5B2f2RUg2Vby";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <17f8a84f-13c3-2d55-13b0-79abc7f83855@suse.com>
Subject: Re: [PATCH v1] tools: add newlines to xenstored WRL_LOG
References: <20210503154712.508-1-olaf@aepfle.de>
In-Reply-To: <20210503154712.508-1-olaf@aepfle.de>

--d6Lcf5BBCdxVrBTmRc1Ik5B2f2RUg2Vby
Content-Type: multipart/mixed;
 boundary="------------5FA622E058BD5D971C1324D3"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5FA622E058BD5D971C1324D3
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 03.05.21 17:47, Olaf Hering wrote:
> According to syslog(3) the fmt string does not need a newline.
> The mini-os implementation of syslog requires the trailing newline.
> Other calls to syslog do include the newline already, add it also to WRL_LOG.

Mind doing the same for the two syslog() calls in xenstored_core.c
lacking the newline?


Juergen

--------------5FA622E058BD5D971C1324D3
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5FA622E058BD5D971C1324D3--

--d6Lcf5BBCdxVrBTmRc1Ik5B2f2RUg2Vby--

--4SvcMA7KxV495cmCYUWLU0jSU3JcI1eR6
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCY4V0FAwAAAAAACgkQsN6d1ii/Ey/3
HQf7B3mns0RKRfrgNgkab3iTg9LMHLCu7yFmJKKgohqfDn0rdUUeb7haC2K2hR276Bbny8J/SWPW
wQdSbhcWYfL4z6yCsBZOwV5ImAjGbA6GAfSK0Dwn3+kJuLB2RK9sjCgINLKOsMGwyHYcdeROSZFI
G3smgrMbpbt+fk4Sm21YThLd8czpqIuEs+r0MaC0OeJfoDqMXxLKlX8nDnNUFHYE/vtMmC9d7eks
+hrbAaImpyZpI5PahjqGj/zXI/s/PM5s7/iekNwxJOpxcogBFNseuksMqqTJKKyVYUnjbbvS8lG5
2bDUN2nL/D+erCJu42R5r1pVdY4uiShOk10XiudPZA==
=qt5k
-----END PGP SIGNATURE-----

--4SvcMA7KxV495cmCYUWLU0jSU3JcI1eR6--


From xen-devel-bounces@lists.xenproject.org Mon May 10 07:34:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 07:34:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124856.235081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0Qo-0007i5-5E; Mon, 10 May 2021 07:34:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124856.235081; Mon, 10 May 2021 07:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0Qo-0007hy-20; Mon, 10 May 2021 07:34:22 +0000
Received: by outflank-mailman (input) for mailman id 124856;
 Mon, 10 May 2021 07:34:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg0Qn-0007hs-20
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 07:34:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d6b3502-3b76-4e0f-9f03-26c4a0e24152;
 Mon, 10 May 2021 07:34:20 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5922DABF6;
 Mon, 10 May 2021 07:34:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d6b3502-3b76-4e0f-9f03-26c4a0e24152
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620632059; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=CkWB4DKMUPkS+E9mhycqBYJmL9heLB96Gn1awJMWClI=;
	b=ouI/x/H28CnyxQI11Mw/Yi9SkVORhheihn9uObD18i3vZv3lpOUkr3dupmFD5PxPk+vON/
	dug7BFR16iVkKG1FUjnYrpakCp+7U2UnECD0AjgrnHwdqLs3+QR2plTPmRUVRWiBzaHR5e
	DZuynZeCSUhTeQZzWze3MzItpOyJtls=
Subject: Re: [PATCH 0/3] xen: remove some checks for always present Xen
 features
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 x86@kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Peter Zijlstra <peterz@infradead.org>
References: <20210422151007.2205-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <3c89ca14-8790-2d0e-a115-16a0976f68e3@suse.com>
Date: Mon, 10 May 2021 09:34:18 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210422151007.2205-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="WlPnCKIBs2J0wdgbz8pEkHKm6b5mInLeY"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--WlPnCKIBs2J0wdgbz8pEkHKm6b5mInLeY
Content-Type: multipart/mixed; boundary="LJlwZftPSouPtiSeWToky9Zro5Kz88bVZ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 x86@kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Thomas Gleixner <tglx@linutronix.de>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, "H. Peter Anvin" <hpa@zytor.com>,
 Peter Zijlstra <peterz@infradead.org>
Message-ID: <3c89ca14-8790-2d0e-a115-16a0976f68e3@suse.com>
Subject: Re: [PATCH 0/3] xen: remove some checks for always present Xen
 features
References: <20210422151007.2205-1-jgross@suse.com>
In-Reply-To: <20210422151007.2205-1-jgross@suse.com>

--LJlwZftPSouPtiSeWToky9Zro5Kz88bVZ
Content-Type: multipart/mixed;
 boundary="------------3BCDB79306790CD4BF29FE09"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------3BCDB79306790CD4BF29FE09
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 22.04.21 17:10, Juergen Gross wrote:
> Some features of Xen can be assumed to be always present, so add a
> central check to verify this being true and remove the other checks.
>
> Juergen Gross (3):
>    xen: check required Xen features
>    xen: assume XENFEAT_mmu_pt_update_preserve_ad being set for pv guests
>    xen: assume XENFEAT_gnttab_map_avail_bits being set for pv guests
>
>   arch/x86/xen/enlighten_pv.c | 12 ++----------
>   arch/x86/xen/mmu_pv.c       |  4 ++--
>   drivers/xen/features.c      | 18 ++++++++++++++++++
>   drivers/xen/gntdev.c        | 36 ++----------------------------------
>   4 files changed, 24 insertions(+), 46 deletions(-)
>

Could I please get some feedback on this series?


Juergen

--------------3BCDB79306790CD4BF29FE09--

--LJlwZftPSouPtiSeWToky9Zro5Kz88bVZ--

--WlPnCKIBs2J0wdgbz8pEkHKm6b5mInLeY
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCY4foFAwAAAAAACgkQsN6d1ii/Ey/X
/Af/fMLLG/2nuiRCzRd0ht+vpkyyLE7M2ymDuJy3b3lWQ0v2TpBNH7xaqqeqH+xP+FPh6dnnsEEU
cBoYHqdg01WjtSdRNtTmkESKeA8Gz8/bPBGxZkqKKOxF+V5k2WH9kKsDLBReDq24dpPDVac4j/1J
JWnsGbgzdCJxkAzK5HA8NfVRZqmOopGGe1INipdGTRMjWbYGlU8GuALxOXcjXVb6GJQaQAFVFZro
RN2t6jaMIlA9rZZGikFi30pX9Z8qvuwb5ACXPO59LI2LysFOUy6jzr1IZVvEjc+B3SfmGU03PXx2
xTUiuhKlyD07tHIAV9i9XTnZuWUqqWed/4Y9gGjwQQ==
=3olk
-----END PGP SIGNATURE-----

--WlPnCKIBs2J0wdgbz8pEkHKm6b5mInLeY--


From xen-devel-bounces@lists.xenproject.org Mon May 10 07:49:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 07:49:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124863.235092 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0fh-0000ow-G9; Mon, 10 May 2021 07:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124863.235092; Mon, 10 May 2021 07:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0fh-0000op-DE; Mon, 10 May 2021 07:49:45 +0000
Received: by outflank-mailman (input) for mailman id 124863;
 Mon, 10 May 2021 07:49:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg0fg-0000oj-0d
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 07:49:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2fa04fdc-fffd-4123-8dbc-b69a3546c66e;
 Mon, 10 May 2021 07:49:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DF517AD2D;
 Mon, 10 May 2021 07:49:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2fa04fdc-fffd-4123-8dbc-b69a3546c66e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620632982; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=0j83j8RRCJnZ3MKwwsBj8yqBRz3BqNt5M6mxu5X8Wwo=;
	b=FotK3A+6GfJnf1/8xTrFFiptRKCEEIkHX7F8oZHIVuVpIDeFPFWTGy7Ee5zQIUJ8GOXOSU
	HNVKAU/9d0+0/k6TnykU3yAMxkAqVrzsSi7+wcKH9Ns+VjRydOCzCibs7iFAYj/MMtAEAq
	g+oRrvozg205lPiuUJQld4n8VbKOuUo=
Subject: Re: [PATCH] tools/xenstored: Prevent a buffer overflow in
 dump_state_node_perms()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210506161223.15984-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <f9542104-b645-4d94-5aab-0854e4b54ff0@suse.com>
Date: Mon, 10 May 2021 09:49:41 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210506161223.15984-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="M7Dqxg3PbDERPyj4OsEb9flCpBK13PLa8"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--M7Dqxg3PbDERPyj4OsEb9flCpBK13PLa8
Content-Type: multipart/mixed; boundary="P0dmCRNO8HREAsoC0YH5fxeDjhKJOgapR";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <f9542104-b645-4d94-5aab-0854e4b54ff0@suse.com>
Subject: Re: [PATCH] tools/xenstored: Prevent a buffer overflow in
 dump_state_node_perms()
References: <20210506161223.15984-1-julien@xen.org>
In-Reply-To: <20210506161223.15984-1-julien@xen.org>

--P0dmCRNO8HREAsoC0YH5fxeDjhKJOgapR
Content-Type: multipart/mixed;
 boundary="------------91173C0EA6CDDADA40C5A858"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------91173C0EA6CDDADA40C5A858
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 06.05.21 18:12, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> ASAN reported one issue when Live Updating Xenstored:
>
> =================================================================
> ==873==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffc194f53e0 at pc 0x555c6b323292 bp 0x7ffc194f5340 sp 0x7ffc194f5338
> WRITE of size 1 at 0x7ffc194f53e0 thread T0
>      #0 0x555c6b323291 in dump_state_node_perms xen/tools/xenstore/xenstored_core.c:2468
>      #1 0x555c6b32746e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1257
>      #2 0x555c6b32a702 in dump_state_special_nodes xen/tools/xenstore/xenstored_domain.c:1273
>      #3 0x555c6b32ddb3 in lu_dump_state xen/tools/xenstore/xenstored_control.c:521
>      #4 0x555c6b32e380 in do_lu_start xen/tools/xenstore/xenstored_control.c:660
>      #5 0x555c6b31b461 in call_delayed xen/tools/xenstore/xenstored_core.c:278
>      #6 0x555c6b32275e in main xen/tools/xenstore/xenstored_core.c:2357
>      #7 0x7f95eecf3d09 in __libc_start_main ../csu/libc-start.c:308
>      #8 0x555c6b3197e9 in _start (/usr/local/sbin/xenstored+0xc7e9)
>
> Address 0x7ffc194f53e0 is located in stack of thread T0 at offset 80 in frame
>      #0 0x555c6b32713e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1232
>
>    This frame has 2 object(s):
>      [32, 40) 'head' (line 1233)
>      [64, 80) 'sn' (line 1234) <== Memory access at offset 80 overflows this variable
>
> This is happening because the callers are passing a pointer to a variable
> allocated on the stack. However, the field perms is a dynamic array, so
> Xenstored will end up reading outside of the variable.
>
> Rework the code so the permissions are written one by one to the fd.
>
> Fixes: ed6eebf17d2c ("tools/xenstore: dump the xenstore state for live update")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------91173C0EA6CDDADA40C5A858--

--P0dmCRNO8HREAsoC0YH5fxeDjhKJOgapR--

--M7Dqxg3PbDERPyj4OsEb9flCpBK13PLa8
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCY5ZUFAwAAAAAACgkQsN6d1ii/Ey9t
6gf+PcEXkHUG1/XhitF2xSy++vtjjRFB/eCoD8oh5TdX8QP+XzjD+GqAO/DFS+ycifg0KI7iy3ck
PjfNqAKlmUkjXI3TP3KiV/3vBli12QKMynXdr+LYEmigDuPx2miecA0K217RxWBvND6OzioFPJOZ
BB2y2CAmDivirAuOK6adMF77429yfRaXsUSCnslAOdQUCcm6bWaoIwUOZLm6Q6klCwlI9P9mnNMO
2qY2Ey7nkAxpbDn8Kbt5pTf6qZFT3swI0hd0vssUhg5nKMLy6E0HmP/WF/dMcHNNqzDN1d4ALGKF
Go4JVvtxq4im2W18bsJ/hyNWZ/YNvPovhVxkUtc6BA==
=sxDG
-----END PGP SIGNATURE-----

--M7Dqxg3PbDERPyj4OsEb9flCpBK13PLa8--


From xen-devel-bounces@lists.xenproject.org Mon May 10 07:59:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 07:59:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124878.235104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0oq-0002N1-Gc; Mon, 10 May 2021 07:59:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124878.235104; Mon, 10 May 2021 07:59:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0oq-0002Mu-De; Mon, 10 May 2021 07:59:12 +0000
Received: by outflank-mailman (input) for mailman id 124878;
 Mon, 10 May 2021 07:59:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg0op-0002Ml-1O
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 07:59:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64d36221-775c-407c-a821-49d2b9e068bc;
 Mon, 10 May 2021 07:59:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DF29DAE57;
 Mon, 10 May 2021 07:59:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64d36221-775c-407c-a821-49d2b9e068bc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620633546; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TuZVZ/TZC+amwC+uSPzlrvCbYklxbsfl2Zo4uow0QtM=;
	b=MZBcmLBBHrZ1wfDmTie51ynVN+p9kalGIgPfxsMqHhSJmp4MYuDWgAgnR9/CHGq5YbBeOc
	XT7/1TA7xKwDsQpPQbTmP4jQzLcNh7/CMpnUY3deZWvw6rUNHWt74ljco5yKZ8copvcdd1
	aVCuEaZGmHXdJS5RbaR37aXxP5YEVY0=
Subject: Re: [PATCH 1/4] x86/xen/entry: Rename xenpv_exc_nmi to noist_exc_nmi
To: Lai Jiangshan <jiangshanlai@gmail.com>, linux-kernel@vger.kernel.org
Cc: Lai Jiangshan <laijs@linux.alibaba.com>,
 Thomas Gleixner <tglx@linutronix.de>, Paolo Bonzini <pbonzini@redhat.com>,
 Sean Christopherson <seanjc@google.com>, Steven Rostedt
 <rostedt@goodmis.org>, Andi Kleen <ak@linux.intel.com>,
 Andy Lutomirski <luto@kernel.org>, Vitaly Kuznetsov <vkuznets@redhat.com>,
 Wanpeng Li <wanpengli@tencent.com>, Jim Mattson <jmattson@google.com>,
 Joerg Roedel <joro@8bytes.org>, kvm@vger.kernel.org,
 Josh Poimboeuf <jpoimboe@redhat.com>, Uros Bizjak <ubizjak@gmail.com>,
 Maxim Levitsky <mlevitsk@redhat.com>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>,
 Alexandre Chartre <alexandre.chartre@oracle.com>,
 Joerg Roedel <jroedel@suse.de>, Jian Cai <caij2003@gmail.com>,
 xen-devel@lists.xenproject.org
References: <20210426230949.3561-1-jiangshanlai@gmail.com>
 <20210426230949.3561-2-jiangshanlai@gmail.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <76c9d530-6a55-927b-9727-7875bc8101bb@suse.com>
Date: Mon, 10 May 2021 09:59:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210426230949.3561-2-jiangshanlai@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="mQbbA6hZIMq2ZCFvc5HSGnGiqncPrGTud"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--mQbbA6hZIMq2ZCFvc5HSGnGiqncPrGTud
Content-Type: multipart/mixed; boundary="IGLvBXepGr91Ct8EfRs0yAnZoEVaWAbXG";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Lai Jiangshan <jiangshanlai@gmail.com>, linux-kernel@vger.kernel.org
Cc: Lai Jiangshan <laijs@linux.alibaba.com>,
 Thomas Gleixner <tglx@linutronix.de>, Paolo Bonzini <pbonzini@redhat.com>,
 Sean Christopherson <seanjc@google.com>, Steven Rostedt
 <rostedt@goodmis.org>, Andi Kleen <ak@linux.intel.com>,
 Andy Lutomirski <luto@kernel.org>, Vitaly Kuznetsov <vkuznets@redhat.com>,
 Wanpeng Li <wanpengli@tencent.com>, Jim Mattson <jmattson@google.com>,
 Joerg Roedel <joro@8bytes.org>, kvm@vger.kernel.org,
 Josh Poimboeuf <jpoimboe@redhat.com>, Uros Bizjak <ubizjak@gmail.com>,
 Maxim Levitsky <mlevitsk@redhat.com>, Ingo Molnar <mingo@redhat.com>,
 Borislav Petkov <bp@alien8.de>, x86@kernel.org,
 "H. Peter Anvin" <hpa@zytor.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>,
 Alexandre Chartre <alexandre.chartre@oracle.com>,
 Joerg Roedel <jroedel@suse.de>, Jian Cai <caij2003@gmail.com>,
 xen-devel@lists.xenproject.org
Message-ID: <76c9d530-6a55-927b-9727-7875bc8101bb@suse.com>
Subject: Re: [PATCH 1/4] x86/xen/entry: Rename xenpv_exc_nmi to noist_exc_nmi
References: <20210426230949.3561-1-jiangshanlai@gmail.com>
 <20210426230949.3561-2-jiangshanlai@gmail.com>
In-Reply-To: <20210426230949.3561-2-jiangshanlai@gmail.com>

--IGLvBXepGr91Ct8EfRs0yAnZoEVaWAbXG
Content-Type: multipart/mixed;
 boundary="------------B3541311547641A2A2D99FD5"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------B3541311547641A2A2D99FD5
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 27.04.21 01:09, Lai Jiangshan wrote:
> From: Lai Jiangshan <laijs@linux.alibaba.com>
>
> There is no functional change intended.  Just rename it and
> move it to arch/x86/kernel/nmi.c so that we can reuse it later in
> the next patch for early NMI and kvm.
>=20
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: Paolo Bonzini <pbonzini@redhat.com>
> Cc: Sean Christopherson <seanjc@google.com>
> Cc: Steven Rostedt <rostedt@goodmis.org>
> Cc: Andi Kleen <ak@linux.intel.com>
> Cc: Andy Lutomirski <luto@kernel.org>
> Cc: Vitaly Kuznetsov <vkuznets@redhat.com>
> Cc: Wanpeng Li <wanpengli@tencent.com>
> Cc: Jim Mattson <jmattson@google.com>
> Cc: Joerg Roedel <joro@8bytes.org>
> Cc: kvm@vger.kernel.org
> Cc: Josh Poimboeuf <jpoimboe@redhat.com>
> Cc: Uros Bizjak <ubizjak@gmail.com>
> Cc: Maxim Levitsky <mlevitsk@redhat.com>
> Signed-off-by: Lai Jiangshan <laijs@linux.alibaba.com>

Acked-by: Juergen Gross <jgross@suse.com>


Juergen

--------------B3541311547641A2A2D99FD5--

--IGLvBXepGr91Ct8EfRs0yAnZoEVaWAbXG--

--mQbbA6hZIMq2ZCFvc5HSGnGiqncPrGTud--


From xen-devel-bounces@lists.xenproject.org Mon May 10 08:08:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:08:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124887.235116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0xR-0004Q5-Hv; Mon, 10 May 2021 08:08:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124887.235116; Mon, 10 May 2021 08:08:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg0xR-0004Py-F1; Mon, 10 May 2021 08:08:05 +0000
Received: by outflank-mailman (input) for mailman id 124887;
 Mon, 10 May 2021 08:08:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg0xQ-0004Pm-94; Mon, 10 May 2021 08:08:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg0xP-0007Sj-R1; Mon, 10 May 2021 08:08:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg0xP-0005BY-HN; Mon, 10 May 2021 08:08:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lg0xP-0005PT-Gr; Mon, 10 May 2021 08:08:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CQDlFkl60fjvL2baQN9zD5jr6RkHKZS18G50ZbJo/Ws=; b=RnWuyAI9J2hJ6Mfrk/ir9ynRlA
	Zzx3qhqapxnzAXQK4vVqgptdGqYRx9k30U3UTJJxSd7cnDGYi7vBQjp1WmuNwV7ZsGFwrd3y053tK
	JR4+5apE3VPep/FgvjVe491DAnBu/kkn/+hoNOcAQYOxFfXH0RnDwNQdf2b9ChZoUOK4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161887-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161887: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6efb943b8616ec53a5e444193dccf1af9ad627b5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 08:08:03 +0000

flight 161887 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161887/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6efb943b8616ec53a5e444193dccf1af9ad627b5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  282 days
Failing since        152366  2020-08-01 20:49:34 Z  281 days  470 attempts
Testing same since   161887  2021-05-09 23:42:16 Z    0 days    1 attempts

------------------------------------------------------------
6035 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1637319 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 10 08:34:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:34:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124905.235132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Mw-00026g-R5; Mon, 10 May 2021 08:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124905.235132; Mon, 10 May 2021 08:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Mw-00026E-JA; Mon, 10 May 2021 08:34:26 +0000
Received: by outflank-mailman (input) for mailman id 124905;
 Mon, 10 May 2021 08:34:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=1j8I=KF=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lg1Mu-00025R-Kg
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:34:24 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49f4bf9f-9fc0-4f77-8d5d-3fe1335a3a2c;
 Mon, 10 May 2021 08:34:23 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.6 AUTH)
 with ESMTPSA id n023e2x4A8YIBIX
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Mon, 10 May 2021 10:34:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49f4bf9f-9fc0-4f77-8d5d-3fe1335a3a2c
ARC-Seal: i=1; a=rsa-sha256; t=1620635658; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=ezwGmzIK7uUptoq0zt7/ZAkICh3UjfePda8dlk24Mq2RmiO/DOTXOhr4Jnsyt1FTRd
    FKk1AR7lkItFZ7Cn1mhC1Im4nZlgpK9K7uPTO4jFqXj71OXuIQhOz7wIWJLDYIJDa43s
    //2zS/WJKhQ3H/CrNlbh3ZZbdDGG9NnSP3mAkfm57v0i7qKZVRMHHtQ2uPUBvBDZ1tAo
    rMMH/kMyn1G1Cj7tSskL2uAKYfk4xB4Ve8CEP2i74dza1XIQUZGEWwCVMgnxe+zUB7yb
    cces0Nnh2PaoKYNT0VhxHO6LIP2bTzWnB2ruxM4qXWxqXa6O9klD8ybAoV5gVS4xnxQ1
    fR3w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620635658;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=VuNIj8OZCR2Wp97RCIeA5syRszNX2zYH4073027fukA=;
    b=AqTON6vcVzprVD5/2QH4vuuThu99H4vYwVbTk/5T/l/QSQNGTfq1YMdhaopKrn3sfZ
    +Alw6PXDJ7lZa4310yxnaybPr0tceGOfQE+XDWRmvUbWHPngmiPoXOu7Fqw7PBldKyeh
    2rMSuWi2jJL0b153z7kRpCytRRp0BjiLhvhjlZX4JGSLlaOoAqdwbUNocasaBJyOqGRv
    DIz6nqEuOln3LrP31gWojBXt6Mx54jivodGJIhP7wOa1nKVyfsF+c+Vy77CogPRndUsa
    1H14qIoBMOE1KI/hka98khWHSuDFWViuLrGTEjD+t/R8U4UuYxpqC1jP3YBjzKg3hJFu
    2ElA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620635658;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=VuNIj8OZCR2Wp97RCIeA5syRszNX2zYH4073027fukA=;
    b=nPc2sSPV1gWc9MO8kDHfKAXyuP6y4398y6Eqp3Apxo/tPSfts2jqR8utfmWu3vSyNk
    TckATOIPn44lMx3obKazydu8+SDYWAkLI3rpDjAYrj3phkTl7cjl9MpSLqvzJmB9AQ0c
    /g+iaENTHeHPsdbAs1+cDJZHjATF/5cTn/iYIjssCoWXAMmPYvaaXVLbQmykDAyBoBZP
    eSXX6NB2QOwco8GCT2TGLj98mJSSM1luwGcg7/bWO3d3gAnFqFXGN+bkQygjb1RgDZIl
    MPwfjhixq8ZCUUgbgngTvVS9VSe2zSloFVbpxbPUxc8qXmxR9sH2LVjrWcvBfwjwbqVG
    C5vg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Mon, 10 May 2021 10:34:00 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v1] tools: add newlines to xenstored WRL_LOG
Message-ID: <20210510103400.2df2cc9b.olaf@aepfle.de>
In-Reply-To: <17f8a84f-13c3-2d55-13b0-79abc7f83855@suse.com>
References: <20210503154712.508-1-olaf@aepfle.de>
	<17f8a84f-13c3-2d55-13b0-79abc7f83855@suse.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/lZBIBSAVPf4818zyxdXm+bs";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/lZBIBSAVPf4818zyxdXm+bs
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Mon, 10 May 2021 09:31:41 +0200,
Juergen Gross <jgross@suse.com> wrote:

> Mind doing the same for the two syslog() calls in xenstored_core.c
> lacking the newline?

I will send a separate patch for them.

Olaf

--Sig_/lZBIBSAVPf4818zyxdXm+bs
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCY7/gACgkQ86SN7mm1
DoCJFBAAm/LR/z2bmOg2ej7KEe9peae0gWxwJh/BMvQIFnLkyJvBobRa9LkrQ4ay
IcqAoicCEXiE2muWHgA2DT0NlhngW0bbOeYbfCk7f8hMJQ+sClexHP4c5P/vjySC
4xBFEm47cdhkxuFmDfHPwQU/B9GUqm3DXoK0g2+toOZndZ9XSrKeHyp7cAg8webX
02a6iCs0gwc4b4CR/ED/xq9pHIbAi9+ycqWESBM0cs0UFxft0XaR1W6Rr7Dqcza6
YUTAJE18nDVg5VNoCY4xyjlFxUtZ6svNnNGpjMMdPYuTcnSaNmNH6A7WLFw8ZOwq
hgfTwE0YmF99/vP/VRWRQx3Ol9CMEIDKZ3bWvn8BJisPbpJZZPX7LFTOvcaPR4jJ
VmPjJ591CwWUKArynFjv5wsnK/DuSHCRw8DklCtVHTbWQbT/2qXi9sdR2znbOLIt
VjtQ29YMfymmznFTh74450Ly/l9cotresM7bh9n4BYzavBqEyr8UewxH1nZ2Uj+A
/xxDv+/jL4pFMPffBdL28dGaszdCksZUJtfILUFvdQDhTf++BxBywGyaxSKIv4qZ
hTzh+B0BWkA6tCOEe9qteeDZ1szHyDbsD0pG7EssplzRlL4P4P4pKtNmrsOh7aXZ
MgBf+IdSFzHe9OQt4IfL3tHhn5hNS6a/DX5pDyhP30Qpo4Jcguc=
=f0y4
-----END PGP SIGNATURE-----

--Sig_/lZBIBSAVPf4818zyxdXm+bs--


From xen-devel-bounces@lists.xenproject.org Mon May 10 08:35:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:35:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124908.235144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1O0-0002eU-1A; Mon, 10 May 2021 08:35:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124908.235144; Mon, 10 May 2021 08:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Nz-0002eN-Th; Mon, 10 May 2021 08:35:31 +0000
Received: by outflank-mailman (input) for mailman id 124908;
 Mon, 10 May 2021 08:35:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTYQ=KF=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lg1Ny-0002e9-Gw
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:35:30 +0000
Received: from mx.upb.ro (unknown [141.85.13.220])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 36638aeb-0cc4-4a30-b18f-65922d7ae5e8;
 Mon, 10 May 2021 08:35:28 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 5592AB56012A;
 Mon, 10 May 2021 11:35:26 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id CXOifzqejPkA; Mon, 10 May 2021 11:35:24 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 5A839B56010C;
 Mon, 10 May 2021 11:35:24 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id 0JM0fi1_JVPo; Mon, 10 May 2021 11:35:24 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id A01D6B56009C;
 Mon, 10 May 2021 11:35:23 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 36638aeb-0cc4-4a30-b18f-65922d7ae5e8
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Tim Deegan <tim@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 0/5] Fix redefinition errors for toolstack libs
Date: Mon, 10 May 2021 11:35:14 +0300
Message-Id: <cover.1620633386.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

For reproducing the issue I used gcc 10.3 on an Alpine system. In order to
reproduce the redefinition error for PAGE_SIZE, one should install the
'fortify-headers' package, which changes the chain of included headers by
indirectly including /usr/include/limits.h, where PAGE_SIZE and PATH_MAX are
defined.

Changes since v1:
- Use XC_PAGE_* macros instead of PAGE_* macros

Changes since v2:
- Define KDD_PAGE_* macros for changes in debugger/kdd/

Costin Lupu (5):
  tools/debugger: Fix PAGE_SIZE redefinition error
  tools/libfsimage: Fix PATH_MAX redefinition error
  tools/libs/foreignmemory: Fix PAGE_SIZE redefinition error
  tools/libs/gnttab: Fix PAGE_SIZE redefinition error
  tools/ocaml: Fix redefinition errors

 tools/debugger/kdd/kdd-xen.c                  | 15 ++++------
 tools/debugger/kdd/kdd.c                      | 19 ++++++-------
 tools/debugger/kdd/kdd.h                      |  7 +++++
 tools/libfsimage/ext2fs/fsys_ext2fs.c         |  2 ++
 tools/libfsimage/reiserfs/fsys_reiserfs.c     |  2 ++
 tools/libs/foreignmemory/core.c               |  2 +-
 tools/libs/foreignmemory/freebsd.c            | 10 +++----
 tools/libs/foreignmemory/linux.c              | 23 +++++++--------
 tools/libs/foreignmemory/minios.c             |  2 +-
 tools/libs/foreignmemory/netbsd.c             | 10 +++----
 tools/libs/foreignmemory/private.h            |  9 +-----
 tools/libs/gnttab/freebsd.c                   | 28 +++++++++----------
 tools/libs/gnttab/linux.c                     | 28 +++++++++----------
 tools/libs/gnttab/netbsd.c                    | 23 +++++++--------
 tools/ocaml/libs/xc/xenctrl_stubs.c           | 10 +++----
 .../ocaml/libs/xentoollog/xentoollog_stubs.c  |  4 +++
 tools/ocaml/libs/xl/xenlight_stubs.c          |  4 +++
 17 files changed, 98 insertions(+), 100 deletions(-)

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:35:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:35:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124909.235149 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1O0-0002i8-AB; Mon, 10 May 2021 08:35:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124909.235149; Mon, 10 May 2021 08:35:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1O0-0002h4-57; Mon, 10 May 2021 08:35:32 +0000
Received: by outflank-mailman (input) for mailman id 124909;
 Mon, 10 May 2021 08:35:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTYQ=KF=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lg1Ny-0002eE-Nx
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:35:30 +0000
Received: from mx.upb.ro (unknown [141.85.13.200])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 241b6232-d9e7-4af3-824a-d845635745e5;
 Mon, 10 May 2021 08:35:29 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id A5B8FB560125;
 Mon, 10 May 2021 11:35:28 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id il_eQ4xYdlLU; Mon, 10 May 2021 11:35:25 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 9983AB560122;
 Mon, 10 May 2021 11:35:25 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id CD1RpKBsSRw4; Mon, 10 May 2021 11:35:25 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 3BC0AB560126;
 Mon, 10 May 2021 11:35:25 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 241b6232-d9e7-4af3-824a-d845635745e5
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 3/5] tools/libs/foreignmemory: Fix PAGE_SIZE redefinition error
Date: Mon, 10 May 2021 11:35:17 +0300
Message-Id: <2040286fc39b7e1d46376a8e75ac59d8d3be5aff.1620633386.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1620633386.git.costin.lupu@cs.pub.ro>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PAGE_SIZE is already defined in the system (e.g. in the
/usr/include/limits.h header) then gcc will trigger a redefinition error
because of -Werror. This patch replaces usage of the PAGE_* macros with the
XC_PAGE_* macros in order to avoid confusion between control domain page
granularity (PAGE_* definitions) and guest domain page granularity (which is
what we are dealing with here).

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
---
 tools/libs/foreignmemory/core.c    |  2 +-
 tools/libs/foreignmemory/freebsd.c | 10 +++++-----
 tools/libs/foreignmemory/linux.c   | 23 ++++++++++++-----------
 tools/libs/foreignmemory/minios.c  |  2 +-
 tools/libs/foreignmemory/netbsd.c  | 10 +++++-----
 tools/libs/foreignmemory/private.h |  9 +--------
 6 files changed, 25 insertions(+), 31 deletions(-)

diff --git a/tools/libs/foreignmemory/core.c b/tools/libs/foreignmemory/core.c
index 28ec311af1..7edc6f0dbf 100644
--- a/tools/libs/foreignmemory/core.c
+++ b/tools/libs/foreignmemory/core.c
@@ -202,7 +202,7 @@ int xenforeignmemory_resource_size(
     if ( rc )
         return rc;
 
-    *size = fres.nr_frames << PAGE_SHIFT;
+    *size = fres.nr_frames << XC_PAGE_SHIFT;
     return 0;
 }
 
diff --git a/tools/libs/foreignmemory/freebsd.c b/tools/libs/foreignmemory/freebsd.c
index d94ea07862..2cf0fa1c38 100644
--- a/tools/libs/foreignmemory/freebsd.c
+++ b/tools/libs/foreignmemory/freebsd.c
@@ -63,7 +63,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     privcmd_mmapbatch_t ioctlx;
     int rc;
 
-    addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED, fd, 0);
+    addr = mmap(addr, num << XC_PAGE_SHIFT, prot, flags | MAP_SHARED, fd, 0);
     if ( addr == MAP_FAILED )
         return NULL;
 
@@ -78,7 +78,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     {
         int saved_errno = errno;
 
-        (void)munmap(addr, num << PAGE_SHIFT);
+        (void)munmap(addr, num << XC_PAGE_SHIFT);
         errno = saved_errno;
         return NULL;
     }
@@ -89,7 +89,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num << PAGE_SHIFT);
+    return munmap(addr, num << XC_PAGE_SHIFT);
 }
 
 int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
@@ -101,7 +101,7 @@ int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap_resource(xenforeignmemory_handle *fmem,
                                         xenforeignmemory_resource_handle *fres)
 {
-    return fres ? munmap(fres->addr, fres->nr_frames << PAGE_SHIFT) : 0;
+    return fres ? munmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT) : 0;
 }
 
 int osdep_xenforeignmemory_map_resource(xenforeignmemory_handle *fmem,
@@ -120,7 +120,7 @@ int osdep_xenforeignmemory_map_resource(xenforeignmemory_handle *fmem,
         /* Request for resource size.  Skip mmap(). */
         goto skip_mmap;
 
-    fres->addr = mmap(fres->addr, fres->nr_frames << PAGE_SHIFT,
+    fres->addr = mmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT,
                       fres->prot, fres->flags | MAP_SHARED, fmem->fd, 0);
     if ( fres->addr == MAP_FAILED )
         return -1;
diff --git a/tools/libs/foreignmemory/linux.c b/tools/libs/foreignmemory/linux.c
index c1f35e2db7..a5f6d62567 100644
--- a/tools/libs/foreignmemory/linux.c
+++ b/tools/libs/foreignmemory/linux.c
@@ -134,7 +134,7 @@ static int retry_paged(int fd, uint32_t dom, void *addr,
         /* At least one gfn is still in paging state */
         ioctlx.num = 1;
         ioctlx.dom = dom;
-        ioctlx.addr = (unsigned long)addr + (i<<PAGE_SHIFT);
+        ioctlx.addr = (unsigned long)addr + (i<<XC_PAGE_SHIFT);
         ioctlx.arr = arr + i;
         ioctlx.err = err + i;
 
@@ -168,7 +168,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     size_t i;
     int rc;
 
-    addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED,
+    addr = mmap(addr, num << XC_PAGE_SHIFT, prot, flags | MAP_SHARED,
                 fd, 0);
     if ( addr == MAP_FAILED )
         return NULL;
@@ -198,9 +198,10 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
          */
         privcmd_mmapbatch_t ioctlx;
         xen_pfn_t *pfn;
-        unsigned int pfn_arr_size = ROUNDUP((num * sizeof(*pfn)), PAGE_SHIFT);
+        unsigned int pfn_arr_size = ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT);
+        int os_page_size = getpagesize();
 
-        if ( pfn_arr_size <= PAGE_SIZE )
+        if ( pfn_arr_size <= os_page_size )
             pfn = alloca(num * sizeof(*pfn));
         else
         {
@@ -209,7 +210,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
             if ( pfn == MAP_FAILED )
             {
                 PERROR("mmap of pfn array failed");
-                (void)munmap(addr, num << PAGE_SHIFT);
+                (void)munmap(addr, num << XC_PAGE_SHIFT);
                 return NULL;
             }
         }
@@ -242,7 +243,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
                     continue;
                 }
                 rc = map_foreign_batch_single(fd, dom, pfn + i,
-                        (unsigned long)addr + (i<<PAGE_SHIFT));
+                        (unsigned long)addr + (i<<XC_PAGE_SHIFT));
                 if ( rc < 0 )
                 {
                     rc = -errno;
@@ -254,7 +255,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
             break;
         }
 
-        if ( pfn_arr_size > PAGE_SIZE )
+        if ( pfn_arr_size > os_page_size )
             munmap(pfn, pfn_arr_size);
 
         if ( rc == -ENOENT && i == num )
@@ -270,7 +271,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     {
         int saved_errno = errno;
 
-        (void)munmap(addr, num << PAGE_SHIFT);
+        (void)munmap(addr, num << XC_PAGE_SHIFT);
         errno = saved_errno;
         return NULL;
     }
@@ -281,7 +282,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num << PAGE_SHIFT);
+    return munmap(addr, num << XC_PAGE_SHIFT);
 }
 
 int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
@@ -293,7 +294,7 @@ int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap_resource(
     xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
 {
-    return fres ? munmap(fres->addr, fres->nr_frames << PAGE_SHIFT) : 0;
+    return fres ? munmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT) : 0;
 }
 
 int osdep_xenforeignmemory_map_resource(
@@ -312,7 +313,7 @@ int osdep_xenforeignmemory_map_resource(
         /* Request for resource size.  Skip mmap(). */
         goto skip_mmap;
 
-    fres->addr = mmap(fres->addr, fres->nr_frames << PAGE_SHIFT,
+    fres->addr = mmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT,
                       fres->prot, fres->flags | MAP_SHARED, fmem->fd, 0);
     if ( fres->addr == MAP_FAILED )
         return -1;
diff --git a/tools/libs/foreignmemory/minios.c b/tools/libs/foreignmemory/minios.c
index 43341ca301..c5453736d5 100644
--- a/tools/libs/foreignmemory/minios.c
+++ b/tools/libs/foreignmemory/minios.c
@@ -55,7 +55,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num << PAGE_SHIFT);
+    return munmap(addr, num << XC_PAGE_SHIFT);
 }
 
 /*
diff --git a/tools/libs/foreignmemory/netbsd.c b/tools/libs/foreignmemory/netbsd.c
index c0b1b8f79d..597db775d7 100644
--- a/tools/libs/foreignmemory/netbsd.c
+++ b/tools/libs/foreignmemory/netbsd.c
@@ -76,7 +76,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 {
     int fd = fmem->fd;
     privcmd_mmapbatch_v2_t ioctlx;
-    addr = mmap(addr, num * PAGE_SIZE, prot,
+    addr = mmap(addr, num * XC_PAGE_SIZE, prot,
                 flags | MAP_ANON | MAP_SHARED, -1, 0);
     if ( addr == MAP_FAILED ) {
         PERROR("osdep_xenforeignmemory_map: mmap failed");
@@ -93,7 +93,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
     {
         int saved_errno = errno;
         PERROR("osdep_xenforeignmemory_map: ioctl failed");
-        munmap(addr, num * PAGE_SIZE);
+        munmap(addr, num * XC_PAGE_SIZE);
         errno = saved_errno;
         return NULL;
     }
@@ -104,7 +104,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap(xenforeignmemory_handle *fmem,
                                  void *addr, size_t num)
 {
-    return munmap(addr, num * PAGE_SIZE);
+    return munmap(addr, num * XC_PAGE_SIZE);
 }
 
 int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
@@ -117,7 +117,7 @@ int osdep_xenforeignmemory_restrict(xenforeignmemory_handle *fmem,
 int osdep_xenforeignmemory_unmap_resource(
     xenforeignmemory_handle *fmem, xenforeignmemory_resource_handle *fres)
 {
-    return fres ? munmap(fres->addr, fres->nr_frames << PAGE_SHIFT) : 0;
+    return fres ? munmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT) : 0;
 }
 
 int osdep_xenforeignmemory_map_resource(
@@ -136,7 +136,7 @@ int osdep_xenforeignmemory_map_resource(
         /* Request for resource size.  Skip mmap(). */
         goto skip_mmap;
 
-    fres->addr = mmap(fres->addr, fres->nr_frames << PAGE_SHIFT,
+    fres->addr = mmap(fres->addr, fres->nr_frames << XC_PAGE_SHIFT,
                       fres->prot, fres->flags | MAP_ANON | MAP_SHARED, -1, 0);
     if ( fres->addr == MAP_FAILED )
         return -1;
diff --git a/tools/libs/foreignmemory/private.h b/tools/libs/foreignmemory/private.h
index 1ee3626dd2..65fe77aa5b 100644
--- a/tools/libs/foreignmemory/private.h
+++ b/tools/libs/foreignmemory/private.h
@@ -1,6 +1,7 @@
 #ifndef XENFOREIGNMEMORY_PRIVATE_H
 #define XENFOREIGNMEMORY_PRIVATE_H
 
+#include <xenctrl.h>
 #include <xentoollog.h>
 
 #include <xenforeignmemory.h>
@@ -10,14 +11,6 @@
 #include <xen/xen.h>
 #include <xen/sys/privcmd.h>
 
-#ifndef PAGE_SHIFT /* Mini-os, Yukk */
-#define PAGE_SHIFT           12
-#endif
-#ifndef __MINIOS__ /* Yukk */
-#define PAGE_SIZE            (1UL << PAGE_SHIFT)
-#define PAGE_MASK            (~(PAGE_SIZE-1))
-#endif
-
 struct xenforeignmemory_handle {
     xentoollog_logger *logger, *logger_tofree;
     unsigned flags;
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:35:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:35:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124910.235168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1O4-0003Ep-HI; Mon, 10 May 2021 08:35:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124910.235168; Mon, 10 May 2021 08:35:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1O4-0003Ee-E8; Mon, 10 May 2021 08:35:36 +0000
Received: by outflank-mailman (input) for mailman id 124910;
 Mon, 10 May 2021 08:35:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTYQ=KF=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lg1O3-0002e9-FW
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:35:35 +0000
Received: from mx.upb.ro (unknown [141.85.13.210])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4c4f404-e686-4e6f-b88d-a6fdc9cc32f0;
 Mon, 10 May 2021 08:35:28 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 7186EB56007B;
 Mon, 10 May 2021 11:35:27 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id s8ApT06jIE62; Mon, 10 May 2021 11:35:24 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id C15BEB560125;
 Mon, 10 May 2021 11:35:24 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id wUFcqcURlskq; Mon, 10 May 2021 11:35:24 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 5A31DB56007B;
 Mon, 10 May 2021 11:35:24 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4c4f404-e686-4e6f-b88d-a6fdc9cc32f0
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Tim Deegan <tim@xen.org>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/5] tools/debugger: Fix PAGE_SIZE redefinition error
Date: Mon, 10 May 2021 11:35:15 +0300
Message-Id: <88d4d2deeca3259450dc9af2b97f2fc1453a5d7d.1620633386.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1620633386.git.costin.lupu@cs.pub.ro>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PAGE_SIZE is already defined in the system (e.g. in the
/usr/include/limits.h header) then gcc will trigger a redefinition error
because of -Werror. This patch replaces usage of the PAGE_* macros with the
KDD_PAGE_* macros in order to avoid confusion between control domain page
granularity (PAGE_* definitions) and guest domain page granularity (which is
what we are dealing with here).

We chose to define the KDD_PAGE_* macros instead of using the XC_PAGE_* macros
because (1) the code in kdd.c should not include any Xen headers and (2) it
adds consistency for the code in both kdd.c and kdd-xen.c.

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
---
 tools/debugger/kdd/kdd-xen.c | 15 ++++++---------
 tools/debugger/kdd/kdd.c     | 19 ++++++++-----------
 tools/debugger/kdd/kdd.h     |  7 +++++++
 3 files changed, 21 insertions(+), 20 deletions(-)

diff --git a/tools/debugger/kdd/kdd-xen.c b/tools/debugger/kdd/kdd-xen.c
index f3f9529f9f..e78c9311c4 100644
--- a/tools/debugger/kdd/kdd-xen.c
+++ b/tools/debugger/kdd/kdd-xen.c
@@ -48,9 +48,6 @@
 
 #define MAPSIZE 4093 /* Prime */
 
-#define PAGE_SHIFT 12
-#define PAGE_SIZE (1U << PAGE_SHIFT)
-
 struct kdd_guest {
     struct xentoollog_logger xc_log; /* Must be first for xc log callbacks */
     xc_interface *xc_handle;
@@ -72,7 +69,7 @@ static void flush_maps(kdd_guest *g)
     int i;
     for (i = 0; i < MAPSIZE; i++) {
         if (g->maps[i] != NULL)
-            munmap(g->maps[i], PAGE_SIZE);
+            munmap(g->maps[i], KDD_PAGE_SIZE);
         g->maps[i] = NULL;
     }
 }
@@ -490,13 +487,13 @@ static uint32_t kdd_access_physical_page(kdd_guest *g, uint64_t addr,
     uint32_t map_pfn, map_offset;
     uint8_t *map;
 
-    map_pfn = (addr >> PAGE_SHIFT);
-    map_offset = addr & (PAGE_SIZE - 1);
+    map_pfn = (addr >> KDD_PAGE_SHIFT);
+    map_offset = addr & (KDD_PAGE_SIZE - 1);
 
     /* Evict any mapping of the wrong frame from our slot */
     if (g->pfns[map_pfn % MAPSIZE] != map_pfn
         && g->maps[map_pfn % MAPSIZE] != NULL) {
-        munmap(g->maps[map_pfn % MAPSIZE], PAGE_SIZE);
+        munmap(g->maps[map_pfn % MAPSIZE], KDD_PAGE_SIZE);
         g->maps[map_pfn % MAPSIZE] = NULL;
     }
     g->pfns[map_pfn % MAPSIZE] = map_pfn;
@@ -507,7 +504,7 @@ static uint32_t kdd_access_physical_page(kdd_guest *g, uint64_t addr,
     else {
         map = xc_map_foreign_range(g->xc_handle,
                                    g->domid,
-                                   PAGE_SIZE,
+                                   KDD_PAGE_SIZE,
                                    PROT_READ|PROT_WRITE,
                                    map_pfn);
 
@@ -533,7 +530,7 @@ uint32_t kdd_access_physical(kdd_guest *g, uint64_t addr,
 {
     uint32_t chunk, rv, done = 0;
     while (len > 0) {
-        chunk = PAGE_SIZE - (addr & (PAGE_SIZE - 1));
+        chunk = KDD_PAGE_SIZE - (addr & (KDD_PAGE_SIZE - 1));
         if (chunk > len)
             chunk = len;
         rv = kdd_access_physical_page(g, addr, chunk, buf, write);
diff --git a/tools/debugger/kdd/kdd.c b/tools/debugger/kdd/kdd.c
index 17513c2650..320c623eda 100644
--- a/tools/debugger/kdd/kdd.c
+++ b/tools/debugger/kdd/kdd.c
@@ -288,9 +288,6 @@ static void kdd_log_pkt(kdd_state *s, const char *name, kdd_pkt *p)
  *  Memory access: virtual addresses and syntactic sugar.
  */
 
-#define PAGE_SHIFT (12)
-#define PAGE_SIZE (1ULL << PAGE_SHIFT)
-
 static uint32_t kdd_read_physical(kdd_state *s, uint64_t addr,
                                   uint32_t len, void *buf)
 {
@@ -352,7 +349,7 @@ static uint64_t v2p(kdd_state *s, int cpuid, uint64_t va)
 
     /* Walk the appropriate number of levels */
     for (i = levels; i > 0; i--) {
-        shift = PAGE_SHIFT + bits * (i-1);
+        shift = KDD_PAGE_SHIFT + bits * (i-1);
         mask = ((1ULL << bits) - 1) << shift;
         offset = ((va & mask) >> shift) * width;
         KDD_DEBUG(s, "level %i: mask 0x%16.16"PRIx64" pa 0x%16.16"PRIx64
@@ -364,12 +361,12 @@ static uint64_t v2p(kdd_state *s, int cpuid, uint64_t va)
             return -1ULL; // Not present
         pa = entry & 0x000ffffffffff000ULL;
         if (pse && (i == 2) && (entry & 0x80)) { // Superpage
-            mask = ((1ULL << (PAGE_SHIFT + bits)) - 1);
+            mask = ((1ULL << (KDD_PAGE_SHIFT + bits)) - 1);
             return (pa & ~mask) + (va & mask);
         }
     }
 
-    return pa + (va & (PAGE_SIZE - 1));
+    return pa + (va & (KDD_PAGE_SIZE - 1));
 }
 
 static uint32_t kdd_access_virtual(kdd_state *s, int cpuid, uint64_t addr,
@@ -380,7 +377,7 @@ static uint32_t kdd_access_virtual(kdd_state *s, int cpuid, uint64_t addr,
 
     /* Process one page at a time */
     while (len > 0) {
-        chunk = PAGE_SIZE - (addr & (PAGE_SIZE - 1));
+        chunk = KDD_PAGE_SIZE - (addr & (KDD_PAGE_SIZE - 1));
         if (chunk > len)
             chunk = len;
         pa = v2p(s, cpuid, addr);
@@ -591,7 +588,7 @@ static void get_os_info_64(kdd_state *s)
     uint64_t dbgkd_addr;
     DBGKD_GET_VERSION64 dbgkd_get_version64;
     /* Maybe 1GB is too big for the limit to search? */
-    uint32_t search_limit = (1024 * 1024 * 1024) / PAGE_SIZE; /*1GB/PageSize*/
+    uint32_t search_limit = (1024 * 1024 * 1024) / KDD_PAGE_SIZE; /*1GB/PageSize*/
     uint64_t efer;
 
     /* if we are not in 64-bit mode, fail */
@@ -620,7 +617,7 @@ static void get_os_info_64(kdd_state *s)
      * in 1GB range above the current page base address
      */
 
-    base = idt0_addr & ~(PAGE_SIZE - 1);
+    base = idt0_addr & ~(KDD_PAGE_SIZE - 1);
 
     while (search_limit) {
         uint16_t val;
@@ -633,7 +630,7 @@ static void get_os_info_64(kdd_state *s)
         if (val == MZ_HEADER) // MZ
             break;
 
-        base -= PAGE_SIZE;
+        base -= KDD_PAGE_SIZE;
         search_limit -= 1;
     }
 
@@ -720,7 +717,7 @@ static void find_os(kdd_state *s)
         /* Try each page in the potential range of kernel load addresses */
         for (limit = s->os.base + s->os.range;
              s->os.base <= limit;
-             s->os.base += PAGE_SIZE)
+             s->os.base += KDD_PAGE_SIZE)
             if (check_os(s))
                 return;
     }
diff --git a/tools/debugger/kdd/kdd.h b/tools/debugger/kdd/kdd.h
index b9a17440df..b476a76d93 100644
--- a/tools/debugger/kdd/kdd.h
+++ b/tools/debugger/kdd/kdd.h
@@ -39,6 +39,13 @@
 
 #define PACKED __attribute__((packed))
 
+/* We define our page related constants here in order to specifically
+ * avoid using the Xen page macros (this is a restriction for the code
+ * in kdd.c which should not include any Xen headers) and to add
+ * consistency for code in both kdd.c and kdd-xen.c. */
+#define KDD_PAGE_SHIFT 12
+#define KDD_PAGE_SIZE (1U << KDD_PAGE_SHIFT)
+
 /*****************************************************************************
  * Serial line protocol: Sender sends a 16-byte header with an optional
  * payload following it.  Receiver responds to each packet with an
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:35:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:35:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124911.235180 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1O6-0003X3-0I; Mon, 10 May 2021 08:35:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124911.235180; Mon, 10 May 2021 08:35:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1O5-0003Ww-St; Mon, 10 May 2021 08:35:37 +0000
Received: by outflank-mailman (input) for mailman id 124911;
 Mon, 10 May 2021 08:35:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTYQ=KF=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lg1O3-0002eE-Mf
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:35:35 +0000
Received: from mx.upb.ro (unknown [141.85.13.200])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 735a8251-99c3-4a2c-a60a-16b009ac1d42;
 Mon, 10 May 2021 08:35:28 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 33A2AB560113;
 Mon, 10 May 2021 11:35:27 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id PQJTuWJ9td-r; Mon, 10 May 2021 11:35:25 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 3BA25B56007B;
 Mon, 10 May 2021 11:35:25 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id IskPJ_eR65xH; Mon, 10 May 2021 11:35:25 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id C0E1FB560122;
 Mon, 10 May 2021 11:35:24 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 735a8251-99c3-4a2c-a60a-16b009ac1d42
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 2/5] tools/libfsimage: Fix PATH_MAX redefinition error
Date: Mon, 10 May 2021 11:35:16 +0300
Message-Id: <25e3833a1335193dd98c7d2fbf80eebc17c3528e.1620633386.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1620633386.git.costin.lupu@cs.pub.ro>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PATH_MAX is already defined in the system (e.g. in the /usr/include/limits.h
header) then gcc will trigger a redefinition error because of -Werror.

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
 tools/libfsimage/ext2fs/fsys_ext2fs.c     | 2 ++
 tools/libfsimage/reiserfs/fsys_reiserfs.c | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/tools/libfsimage/ext2fs/fsys_ext2fs.c b/tools/libfsimage/ext=
2fs/fsys_ext2fs.c
index a4ed10419c..5ed8fce90e 100644
--- a/tools/libfsimage/ext2fs/fsys_ext2fs.c
+++ b/tools/libfsimage/ext2fs/fsys_ext2fs.c
@@ -278,7 +278,9 @@ struct ext4_extent_header {
 
 #define EXT2_SUPER_MAGIC      0xEF53	/* include/linux/ext2_fs.h */
 #define EXT2_ROOT_INO              2	/* include/linux/ext2_fs.h */
+#ifndef PATH_MAX
 #define PATH_MAX                1024	/* include/linux/limits.h */
+#endif
 #define MAX_LINK_COUNT             5	/* number of symbolic links to follow */
 
 /* made up, these are pointers into FSYS_BUF */
diff --git a/tools/libfsimage/reiserfs/fsys_reiserfs.c b/tools/libfsimage=
/reiserfs/fsys_reiserfs.c
index 916eb15292..10ca657476 100644
--- a/tools/libfsimage/reiserfs/fsys_reiserfs.c
+++ b/tools/libfsimage/reiserfs/fsys_reiserfs.c
@@ -284,7 +284,9 @@ struct reiserfs_de_head
 #define S_ISDIR(mode) (((mode) & 0170000) == 0040000)
 #define S_ISLNK(mode) (((mode) & 0170000) == 0120000)
 
+#ifndef PATH_MAX
 #define PATH_MAX       1024	/* include/linux/limits.h */
+#endif
 #define MAX_LINK_COUNT    5	/* number of symbolic links to follow */
 
 /* The size of the node cache */
--
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:35:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:35:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124912.235192 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1OA-0003uM-AZ; Mon, 10 May 2021 08:35:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124912.235192; Mon, 10 May 2021 08:35:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1OA-0003u9-6q; Mon, 10 May 2021 08:35:42 +0000
Received: by outflank-mailman (input) for mailman id 124912;
 Mon, 10 May 2021 08:35:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTYQ=KF=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lg1O8-0002eE-Mx
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:35:40 +0000
Received: from mx.upb.ro (unknown [141.85.13.210])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76893116-5f30-4af4-b45e-129555ef27c2;
 Mon, 10 May 2021 08:35:30 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 0A56EB560122;
 Mon, 10 May 2021 11:35:30 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id i0deq6r05gz0; Mon, 10 May 2021 11:35:26 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 032A8B56012B;
 Mon, 10 May 2021 11:35:26 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id VLVtO8iOZMin; Mon, 10 May 2021 11:35:25 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 9A686B56012A;
 Mon, 10 May 2021 11:35:25 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76893116-5f30-4af4-b45e-129555ef27c2
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 4/5] tools/libs/gnttab: Fix PAGE_SIZE redefinition error
Date: Mon, 10 May 2021 11:35:18 +0300
Message-Id: <b1e87eb24dfde3b1f11c5a14a4298531b4a308ad.1620633386.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1620633386.git.costin.lupu@cs.pub.ro>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PAGE_SIZE is already defined in the system (e.g. in the /usr/include/limits.h
header) then gcc will trigger a redefinition error because of -Werror. This
patch replaces usage of PAGE_* macros with XC_PAGE_* macros in order to avoid
confusion between control domain page granularity (PAGE_* definitions) and
guest domain page granularity (which is what we are dealing with here).

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
---
 tools/libs/gnttab/freebsd.c | 28 +++++++++++++---------------
 tools/libs/gnttab/linux.c   | 28 +++++++++++++---------------
 tools/libs/gnttab/netbsd.c  | 23 ++++++++++-------------
 3 files changed, 36 insertions(+), 43 deletions(-)

diff --git a/tools/libs/gnttab/freebsd.c b/tools/libs/gnttab/freebsd.c
index 768af701c6..e42ac3fbf3 100644
--- a/tools/libs/gnttab/freebsd.c
+++ b/tools/libs/gnttab/freebsd.c
@@ -30,14 +30,11 @@
 
 #include <xen/sys/gntdev.h>
 
+#include <xenctrl.h>
 #include <xen-tools/libs.h>
 
 #include "private.h"
 
-#define PAGE_SHIFT           12
-#define PAGE_SIZE            (1UL << PAGE_SHIFT)
-#define PAGE_MASK            (~(PAGE_SIZE-1))
-
 #define DEVXEN "/dev/xen/gntdev"
 
 int osdep_gnttab_open(xengnttab_handle *xgt)
@@ -77,10 +74,11 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     int domids_stride;
     unsigned int refs_size = ROUNDUP(count *
                                      sizeof(struct ioctl_gntdev_grant_ref),
-                                     PAGE_SHIFT);
+                                     XC_PAGE_SHIFT);
+    int os_page_size = getpagesize();
 
     domids_stride = (flags & XENGNTTAB_GRANT_MAP_SINGLE_DOMAIN) ? 0 : 1;
-    if ( refs_size <= PAGE_SIZE )
+    if ( refs_size <= os_page_size )
         map.refs = malloc(refs_size);
     else
     {
@@ -107,7 +105,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
         goto out;
     }
 
-    addr = mmap(NULL, PAGE_SIZE * count, prot, MAP_SHARED, fd,
+    addr = mmap(NULL, XC_PAGE_SIZE * count, prot, MAP_SHARED, fd,
                 map.index);
     if ( addr != MAP_FAILED )
     {
@@ -116,7 +114,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
 
         notify.index = map.index;
         notify.action = 0;
-        if ( notify_offset < PAGE_SIZE * count )
+        if ( notify_offset < XC_PAGE_SIZE * count )
         {
             notify.index += notify_offset;
             notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
@@ -131,7 +129,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
         if ( rv )
         {
             GTERROR(xgt->logger, "ioctl SET_UNMAP_NOTIFY failed");
-            munmap(addr, count * PAGE_SIZE);
+            munmap(addr, count * XC_PAGE_SIZE);
             addr = MAP_FAILED;
         }
     }
@@ -150,7 +148,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     }
 
  out:
-    if ( refs_size > PAGE_SIZE )
+    if ( refs_size > os_page_size )
         munmap(map.refs, refs_size);
     else
         free(map.refs);
@@ -189,7 +187,7 @@ int osdep_gnttab_unmap(xengnttab_handle *xgt,
     }
 
     /* Next, unmap the memory. */
-    if ( (rc = munmap(start_address, count * PAGE_SIZE)) )
+    if ( (rc = munmap(start_address, count * XC_PAGE_SIZE)) )
         return rc;
 
     /* Finally, unmap the driver slots used to store the grant information. */
@@ -256,7 +254,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
         goto out;
     }
 
-    area = mmap(NULL, count * PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
+    area = mmap(NULL, count * XC_PAGE_SIZE, PROT_READ | PROT_WRITE, MAP_SHARED,
                 fd, gref_info.index);
 
     if ( area == MAP_FAILED )
@@ -268,7 +266,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 
     notify.index = gref_info.index;
     notify.action = 0;
-    if ( notify_offset < PAGE_SIZE * count )
+    if ( notify_offset < XC_PAGE_SIZE * count )
     {
         notify.index += notify_offset;
         notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
@@ -283,7 +281,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
     if ( err )
     {
         GSERROR(xgs->logger, "ioctl SET_UNMAP_NOTIFY failed");
-        munmap(area, count * PAGE_SIZE);
+        munmap(area, count * XC_PAGE_SIZE);
         area = NULL;
     }
 
@@ -306,7 +304,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 int osdep_gntshr_unshare(xengntshr_handle *xgs,
                          void *start_address, uint32_t count)
 {
-    return munmap(start_address, count * PAGE_SIZE);
+    return munmap(start_address, count * XC_PAGE_SIZE);
 }
 
 /*
diff --git a/tools/libs/gnttab/linux.c b/tools/libs/gnttab/linux.c
index 74331a4c7b..9ce27bee6e 100644
--- a/tools/libs/gnttab/linux.c
+++ b/tools/libs/gnttab/linux.c
@@ -32,14 +32,11 @@
 #include <xen/sys/gntdev.h>
 #include <xen/sys/gntalloc.h>
 
+#include <xenctrl.h>
 #include <xen-tools/libs.h>
 
 #include "private.h"
 
-#define PAGE_SHIFT           12
-#define PAGE_SIZE            (1UL << PAGE_SHIFT)
-#define PAGE_MASK            (~(PAGE_SIZE-1))
-
 #define DEVXEN "/dev/xen/"
 
 #ifndef O_CLOEXEC
@@ -92,6 +89,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     int fd = xgt->fd;
     struct ioctl_gntdev_map_grant_ref *map;
     unsigned int map_size = sizeof(*map) + (count - 1) * sizeof(map->refs[0]);
+    int os_page_size = getpagesize();
     void *addr = NULL;
     int domids_stride = 1;
     int i;
@@ -99,11 +97,11 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     if (flags & XENGNTTAB_GRANT_MAP_SINGLE_DOMAIN)
         domids_stride = 0;
 
-    if ( map_size <= PAGE_SIZE )
+    if ( map_size <= os_page_size )
         map = alloca(map_size);
     else
     {
-        map_size = ROUNDUP(map_size, PAGE_SHIFT);
+        map_size = ROUNDUP(map_size, XC_PAGE_SHIFT);
         map = mmap(NULL, map_size, PROT_READ | PROT_WRITE,
                    MAP_PRIVATE | MAP_ANON | MAP_POPULATE, -1, 0);
         if ( map == MAP_FAILED )
@@ -127,7 +125,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     }
 
  retry:
-    addr = mmap(NULL, PAGE_SIZE * count, prot, MAP_SHARED, fd,
+    addr = mmap(NULL, XC_PAGE_SIZE * count, prot, MAP_SHARED, fd,
                 map->index);
 
     if (addr == MAP_FAILED && errno == EAGAIN)
@@ -152,7 +150,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
         struct ioctl_gntdev_unmap_notify notify;
         notify.index = map->index;
         notify.action = 0;
-        if (notify_offset < PAGE_SIZE * count) {
+        if (notify_offset < XC_PAGE_SIZE * count) {
             notify.index += notify_offset;
             notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
         }
@@ -164,7 +162,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
             rv = ioctl(fd, IOCTL_GNTDEV_SET_UNMAP_NOTIFY, &notify);
         if (rv) {
             GTERROR(xgt->logger, "ioctl SET_UNMAP_NOTIFY failed");
-            munmap(addr, count * PAGE_SIZE);
+            munmap(addr, count * XC_PAGE_SIZE);
             addr = MAP_FAILED;
         }
     }
@@ -184,7 +182,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     }
 
  out:
-    if ( map_size > PAGE_SIZE )
+    if ( map_size > os_page_size )
         munmap(map, map_size);
 
     return addr;
@@ -220,7 +218,7 @@ int osdep_gnttab_unmap(xengnttab_handle *xgt,
     }
 
     /* Next, unmap the memory. */
-    if ( (rc = munmap(start_address, count * PAGE_SIZE)) )
+    if ( (rc = munmap(start_address, count * XC_PAGE_SIZE)) )
         return rc;
 
     /* Finally, unmap the driver slots used to store the grant information. */
@@ -466,7 +464,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
         goto out;
     }
 
-    area = mmap(NULL, count * PAGE_SIZE, PROT_READ | PROT_WRITE,
+    area = mmap(NULL, count * XC_PAGE_SIZE, PROT_READ | PROT_WRITE,
         MAP_SHARED, fd, gref_info->index);
 
     if (area == MAP_FAILED) {
@@ -477,7 +475,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 
     notify.index = gref_info->index;
     notify.action = 0;
-    if (notify_offset < PAGE_SIZE * count) {
+    if (notify_offset < XC_PAGE_SIZE * count) {
         notify.index += notify_offset;
         notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
     }
@@ -489,7 +487,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
         err = ioctl(fd, IOCTL_GNTALLOC_SET_UNMAP_NOTIFY, &notify);
     if (err) {
         GSERROR(xgs->logger, "ioctl SET_UNMAP_NOTIFY failed");
-        munmap(area, count * PAGE_SIZE);
+        munmap(area, count * XC_PAGE_SIZE);
         area = NULL;
     }
 
@@ -510,7 +508,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 int osdep_gntshr_unshare(xengntshr_handle *xgs,
                          void *start_address, uint32_t count)
 {
-    return munmap(start_address, count * PAGE_SIZE);
+    return munmap(start_address, count * XC_PAGE_SIZE);
 }
 
 /*
diff --git a/tools/libs/gnttab/netbsd.c b/tools/libs/gnttab/netbsd.c
index f8d7c356eb..a4ad624b54 100644
--- a/tools/libs/gnttab/netbsd.c
+++ b/tools/libs/gnttab/netbsd.c
@@ -28,15 +28,12 @@
 #include <sys/ioctl.h>
 #include <sys/mman.h>
 
+#include <xenctrl.h>
 #include <xen/xen.h>
 #include <xen/xenio.h>
 
 #include "private.h"
 
-#define PAGE_SHIFT           12
-#define PAGE_SIZE            (1UL << PAGE_SHIFT)
-#define PAGE_MASK            (~(PAGE_SIZE-1))
-
 #define DEVXEN "/kern/xen/privcmd"
 
 int osdep_gnttab_open(xengnttab_handle *xgt)
@@ -87,19 +84,19 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     }
 
     map.count = count;
-    addr = mmap(NULL, count * PAGE_SIZE,
+    addr = mmap(NULL, count * XC_PAGE_SIZE,
                 prot, flags | MAP_ANON | MAP_SHARED, -1, 0);
     if ( map.va == MAP_FAILED )
     {
         GTERROR(xgt->logger, "osdep_gnttab_grant_map: mmap failed");
-        munmap((void *)map.va, count * PAGE_SIZE);
+        munmap((void *)map.va, count * XC_PAGE_SIZE);
         addr = MAP_FAILED;
     }
     map.va = addr;
 
     map.notify.offset = 0;
     map.notify.action = 0;
-    if ( notify_offset < PAGE_SIZE * count )
+    if ( notify_offset < XC_PAGE_SIZE * count )
     {
         map.notify.offset = notify_offset;
         map.notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
@@ -115,7 +112,7 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
     {
         GTERROR(xgt->logger,
             "ioctl IOCTL_GNTDEV_MMAP_GRANT_REF failed: %d", rv);
-        munmap(addr, count * PAGE_SIZE);
+        munmap(addr, count * XC_PAGE_SIZE);
         addr = MAP_FAILED;
     }
 
@@ -136,7 +133,7 @@ int osdep_gnttab_unmap(xengnttab_handle *xgt,
     }
 
     /* Next, unmap the memory. */
-    rc = munmap(start_address, count * PAGE_SIZE);
+    rc = munmap(start_address, count * XC_PAGE_SIZE);
 
     return rc;
 }
@@ -187,7 +184,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
     alloc.domid = domid;
     alloc.flags = writable ? GNTDEV_ALLOC_FLAG_WRITABLE : 0;
     alloc.count = count;
-    area = mmap(NULL, count * PAGE_SIZE,
+    area = mmap(NULL, count * XC_PAGE_SIZE,
                 PROT_READ | PROT_WRITE, MAP_ANON | MAP_SHARED, -1, 0);
 
     if ( area == MAP_FAILED )
@@ -200,7 +197,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 
     alloc.notify.offset = 0;
     alloc.notify.action = 0;
-    if ( notify_offset < PAGE_SIZE * count )
+    if ( notify_offset < XC_PAGE_SIZE * count )
     {
         alloc.notify.offset = notify_offset;
         alloc.notify.action |= UNMAP_NOTIFY_CLEAR_BYTE;
@@ -215,7 +212,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
     if ( err )
     {
         GSERROR(xgs->logger, "IOCTL_GNTDEV_ALLOC_GRANT_REF failed");
-        munmap(area, count * PAGE_SIZE);
+        munmap(area, count * XC_PAGE_SIZE);
         area = MAP_FAILED;
         goto out;
     }
@@ -230,7 +227,7 @@ void *osdep_gntshr_share_pages(xengntshr_handle *xgs,
 int osdep_gntshr_unshare(xengntshr_handle *xgs,
                          void *start_address, uint32_t count)
 {
-    return munmap(start_address, count * PAGE_SIZE);
+    return munmap(start_address, count * XC_PAGE_SIZE);
 }
 
 /*
--
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:35:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:35:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124913.235203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1OF-0004Ot-K5; Mon, 10 May 2021 08:35:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124913.235203; Mon, 10 May 2021 08:35:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1OF-0004Oe-Gd; Mon, 10 May 2021 08:35:47 +0000
Received: by outflank-mailman (input) for mailman id 124913;
 Mon, 10 May 2021 08:35:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=HTYQ=KF=cs.pub.ro=costin.lupu@srs-us1.protection.inumbo.net>)
 id 1lg1OD-0002eE-N4
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:35:45 +0000
Received: from mx.upb.ro (unknown [141.85.13.230])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f16acf60-e581-4f6d-ba49-8a6aa8bc49ef;
 Mon, 10 May 2021 08:35:31 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 81A88B560127;
 Mon, 10 May 2021 11:35:30 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10032)
 with ESMTP id QOR1lcOwAZK7; Mon, 10 May 2021 11:35:26 +0300 (EEST)
Received: from localhost (localhost [127.0.0.1])
 by mx.upb.ro (Postfix) with ESMTP id 9C781B56012E;
 Mon, 10 May 2021 11:35:26 +0300 (EEST)
Received: from mx.upb.ro ([127.0.0.1])
 by localhost (mx.upb.ro [127.0.0.1]) (amavisd-new, port 10026)
 with ESMTP id qMgwYnPDJBH2; Mon, 10 May 2021 11:35:26 +0300 (EEST)
Received: from localhost.localdomain (unknown [188.25.174.245])
 by mx.upb.ro (Postfix) with ESMTPSA id 02A1BB560113;
 Mon, 10 May 2021 11:35:25 +0300 (EEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f16acf60-e581-4f6d-ba49-8a6aa8bc49ef
X-Virus-Scanned: amavisd-new at upb.ro
From: Costin Lupu <costin.lupu@cs.pub.ro>
To: xen-devel@lists.xenproject.org
Cc: Christian Lindig <christian.lindig@citrix.com>,
	David Scott <dave@recoil.org>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 5/5] tools/ocaml: Fix redefinition errors
Date: Mon, 10 May 2021 11:35:19 +0300
Message-Id: <50763a92df0c58ed0d7749717b7ff5e2a039a4dd.1620633386.git.costin.lupu@cs.pub.ro>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <cover.1620633386.git.costin.lupu@cs.pub.ro>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Transfer-Encoding: quoted-printable

If PAGE_SIZE is already defined in the system (e.g. in the /usr/include/limits.h
header) then gcc will trigger a redefinition error because of -Werror. This
patch replaces usage of PAGE_* macros with XC_PAGE_* macros in order to avoid
confusion between control domain page granularity (PAGE_* definitions) and
guest domain page granularity (which is what we are dealing with here).

The same issue applies to redefinitions of the Val_none and Some_val macros,
which may already be defined in the OCaml system headers (e.g.
/usr/lib/ocaml/caml/mlvalues.h).

Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
---
 tools/ocaml/libs/xc/xenctrl_stubs.c            | 10 ++++------
 tools/ocaml/libs/xentoollog/xentoollog_stubs.c |  4 ++++
 tools/ocaml/libs/xl/xenlight_stubs.c           |  4 ++++
 3 files changed, 12 insertions(+), 6 deletions(-)

diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xe=
nctrl_stubs.c
index d05d7bb30e..f9e33e599a 100644
--- a/tools/ocaml/libs/xc/xenctrl_stubs.c
+++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
@@ -36,14 +36,12 @@
 
 #include "mmap_stubs.h"
 
-#define PAGE_SHIFT		12
-#define PAGE_SIZE               (1UL << PAGE_SHIFT)
-#define PAGE_MASK               (~(PAGE_SIZE-1))
-
 #define _H(__h) ((xc_interface *)(__h))
 #define _D(__d) ((uint32_t)Int_val(__d))
 
+#ifndef Val_none
 #define Val_none (Val_int(0))
+#endif
 
 #define string_of_option_array(array, index) \
 	((Field(array, index) == Val_none) ? NULL : String_val(Field(Field(array, index), 0)))
@@ -818,7 +816,7 @@ CAMLprim value stub_xc_domain_memory_increase_reservation(value xch,
 	CAMLparam3(xch, domid, mem_kb);
 	int retval;
 
-	unsigned long nr_extents = ((unsigned long)(Int64_val(mem_kb))) >> (PAGE_SHIFT - 10);
+	unsigned long nr_extents = ((unsigned long)(Int64_val(mem_kb))) >> (XC_PAGE_SHIFT - 10);
 
 	uint32_t c_domid = _D(domid);
 	caml_enter_blocking_section();
@@ -924,7 +922,7 @@ CAMLprim value stub_pages_to_kib(value pages)
 {
 	CAMLparam1(pages);
 
-	CAMLreturn(caml_copy_int64(Int64_val(pages) << (PAGE_SHIFT - 10)));
+	CAMLreturn(caml_copy_int64(Int64_val(pages) << (XC_PAGE_SHIFT - 10)));
 }
=20
=20
diff --git a/tools/ocaml/libs/xentoollog/xentoollog_stubs.c b/tools/ocaml=
/libs/xentoollog/xentoollog_stubs.c
index bf64b211c2..e4306a0c2f 100644
--- a/tools/ocaml/libs/xentoollog/xentoollog_stubs.c
+++ b/tools/ocaml/libs/xentoollog/xentoollog_stubs.c
@@ -53,8 +53,12 @@ static char * dup_String_val(value s)
 #include "_xtl_levels.inc"
=20
 /* Option type support as per http://www.linux-nantes.org/~fmonnier/ocaml/ocaml-wrapping-c.php */
+#ifndef Val_none
 #define Val_none Val_int(0)
+#endif
+#ifndef Some_val
 #define Some_val(v) Field(v,0)
+#endif
=20
 static value Val_some(value v)
 {
diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/x=
enlight_stubs.c
index 352a00134d..45b8af61c7 100644
--- a/tools/ocaml/libs/xl/xenlight_stubs.c
+++ b/tools/ocaml/libs/xl/xenlight_stubs.c
@@ -227,8 +227,12 @@ static value Val_string_list(libxl_string_list *c_val)
 }
=20
 /* Option type support as per http://www.linux-nantes.org/~fmonnier/ocaml/ocaml-wrapping-c.php */
+#ifndef Val_none
 #define Val_none Val_int(0)
+#endif
+#ifndef Some_val
 #define Some_val(v) Field(v,0)
+#endif
=20
 static value Val_some(value v)
 {
--
2.20.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:38:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:38:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124931.235216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1RK-0006Pv-7c; Mon, 10 May 2021 08:38:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124931.235216; Mon, 10 May 2021 08:38:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1RK-0006Po-3Q; Mon, 10 May 2021 08:38:58 +0000
Received: by outflank-mailman (input) for mailman id 124931;
 Mon, 10 May 2021 08:38:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg1RI-0006PM-J8
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:38:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f8303eb-9b27-4dd6-acd1-454b95a6c6b8;
 Mon, 10 May 2021 08:38:54 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C97C1AC87;
 Mon, 10 May 2021 08:38:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f8303eb-9b27-4dd6-acd1-454b95a6c6b8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620635933; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=74B7PBay7PHaB+G8v8dq+cjrRYRC3la1e1/ZQIvNvdo=;
	b=mMI+y3EQgDGyy8zFz8U0IC9aQysdKKrBDzq8C42h3o7xQUq5svUxAXRbU+5ZMVduuUhTh7
	hX2ik3dxsYf1aTurAN7GZXradRoriH7iYr3eRn0oZZ8n/KMMO1dlQgWSdjflpABiU1oskr
	49CGXshjpzswQWN17455OpUxfAXi4AE=
Subject: Re: [PATCH v1] tools: add newlines to xenstored WRL_LOG
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210503154712.508-1-olaf@aepfle.de>
 <17f8a84f-13c3-2d55-13b0-79abc7f83855@suse.com>
 <20210510103400.2df2cc9b.olaf@aepfle.de>
From: Juergen Gross <jgross@suse.com>
Message-ID: <c4ed9ebb-4b1a-c215-186b-cb6db3f80d41@suse.com>
Date: Mon, 10 May 2021 10:38:53 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210510103400.2df2cc9b.olaf@aepfle.de>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="pNjvJR5D2kuU3bKHmlEUBOIZcf0Q8nkvZ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--pNjvJR5D2kuU3bKHmlEUBOIZcf0Q8nkvZ
Content-Type: multipart/mixed; boundary="zGORfapA80Lnnh4wDrGjC6aJo8Q2UiIAt";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <c4ed9ebb-4b1a-c215-186b-cb6db3f80d41@suse.com>
Subject: Re: [PATCH v1] tools: add newlines to xenstored WRL_LOG
References: <20210503154712.508-1-olaf@aepfle.de>
 <17f8a84f-13c3-2d55-13b0-79abc7f83855@suse.com>
 <20210510103400.2df2cc9b.olaf@aepfle.de>
In-Reply-To: <20210510103400.2df2cc9b.olaf@aepfle.de>

--zGORfapA80Lnnh4wDrGjC6aJo8Q2UiIAt
Content-Type: multipart/mixed;
 boundary="------------EA0839D235DD51A03CEE1938"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------EA0839D235DD51A03CEE1938
Content-Type: text/plain; charset=windows-1252; format=flowed
Content-Transfer-Encoding: quoted-printable

On 10.05.21 10:34, Olaf Hering wrote:
> On Mon, 10 May 2021 09:31:41 +0200,
> Juergen Gross <jgross@suse.com> wrote:
> 
>> Mind doing the same for the two syslog() calls in xenstored_core.c
>> lacking the newline?
> 
> I will send a separate patch for them.

Okay, in this case you can add my

Reviewed-by: Juergen Gross <jgross@suse.com>

to the WRL_LOG patch.


Juergen


--------------EA0839D235DD51A03CEE1938
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------EA0839D235DD51A03CEE1938--

--zGORfapA80Lnnh4wDrGjC6aJo8Q2UiIAt--

--pNjvJR5D2kuU3bKHmlEUBOIZcf0Q8nkvZ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCY8R0FAwAAAAAACgkQsN6d1ii/Ey9L
+wf/eVMhhwiKolD4nJ51lWvBCcURLf45IcJO6hgn5qloSlipgUvud0eGuCY81+iCIQx+igQ8iWA0
wZ7LKHKHlZ3WLZpCBwsDrdpLhHB0MfCEqQbbMtegiFS5tSlK4zGA9jig+L3Rs5Ud3BfCOedl/GUM
tppSbMgdEUHYryry0gsclkLOZQ+3+sawL7yftat5vu4/4oX5p4WVjf7w3E0CwteOLr2M3PXVCugW
E4nSoqKjU9hhiwdjdnwPfsCCR5MvZKasfa8Y4n5DngA2R5bmR3x6rbtbq14BLcFBhc8BdleCIVrt
l8z3cvk0NslTiuNvkAFqafU0iZqqXUU01VUsWH3hJg==
=7eMk
-----END PGP SIGNATURE-----

--pNjvJR5D2kuU3bKHmlEUBOIZcf0Q8nkvZ--


From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124952.235246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1TK-0008LP-4X; Mon, 10 May 2021 08:41:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124952.235246; Mon, 10 May 2021 08:41:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1TK-0008LI-17; Mon, 10 May 2021 08:41:02 +0000
Received: by outflank-mailman (input) for mailman id 124952;
 Mon, 10 May 2021 08:41:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5bM6=KF=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lg1TI-0008L0-Nu
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:00 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e3cecc12-879e-4590-b63c-63de5ed460ab;
 Mon, 10 May 2021 08:40:59 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 4D61467373; Mon, 10 May 2021 10:40:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e3cecc12-879e-4590-b63c-63de5ed460ab
Date: Mon, 10 May 2021 10:40:57 +0200
From: Christoph Hellwig <hch@lst.de>
To: Julien Grall <julien@xen.org>
Cc: f.fainelli@gmail.com, Stefano Stabellini <sstabellini@kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	linux-kernel@vger.kernel.org,
	osstest service owner <osstest-admin@xenproject.org>, hch@lst.de,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	iommu@lists.linux-foundation.org
Subject: Re: Regression when booting 5.15 as dom0 on arm64 (WAS: Re:
 [linux-linus test] 161829: regressions - FAIL)
Message-ID: <20210510084057.GA933@lst.de>
References: <osstest-161829-mainreport@xen.org> <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> The pointer dereference seems to suggest that the swiotlb hasn't been
> allocated. From what I can tell, this may be because swiotlb_force is set
> to SWIOTLB_NO_FORCE, yet we will still enable the swiotlb when running on
> top of Xen.
>
> I am not entirely sure what would be the correct fix. Any opinions?

Can you try something like the patch below (not even compile tested, but
the intent should be obvious)?


diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 16a2b2b1c54d..7671bc153fb1 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -44,6 +44,8 @@
 #include <asm/tlb.h>
 #include <asm/alternative.h>
 
+#include <xen/arm/swiotlb-xen.h>
+
 /*
  * We need to be able to catch inadvertent references to memstart_addr
  * that occur (potentially in generic code) before arm64_memblock_init()
@@ -482,7 +484,7 @@ void __init mem_init(void)
 	if (swiotlb_force == SWIOTLB_FORCE ||
 	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
 		swiotlb_init(1);
-	else
+	else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
 		swiotlb_force = SWIOTLB_NO_FORCE;
 
 	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);


From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124960.235257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Ti-0000fN-D2; Mon, 10 May 2021 08:41:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124960.235257; Mon, 10 May 2021 08:41:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Ti-0000fG-9w; Mon, 10 May 2021 08:41:26 +0000
Received: by outflank-mailman (input) for mailman id 124960;
 Mon, 10 May 2021 08:41:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1Tg-0000ei-St
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:24 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 194a8c48-74f5-4635-8726-284aab90a316;
 Mon, 10 May 2021 08:41:16 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 93B83113E;
 Mon, 10 May 2021 01:41:16 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C6AB93F719;
 Mon, 10 May 2021 01:41:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 194a8c48-74f5-4635-8726-284aab90a316
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 1/9] docs: add doxygen configuration file
Date: Mon, 10 May 2021 09:40:57 +0100
Message-Id: <20210510084105.17108-2-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Add xen.doxyfile.in as a template for the doxygen
configuration file; it will be used to generate
the doxygen documentation.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 docs/xen.doxyfile.in | 2316 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 2316 insertions(+)
 create mode 100644 docs/xen.doxyfile.in

diff --git a/docs/xen.doxyfile.in b/docs/xen.doxyfile.in
new file mode 100644
index 0000000000..00969d9b78
--- /dev/null
+++ b/docs/xen.doxyfile.in
@@ -0,0 +1,2316 @@
+# Doxyfile 1.8.13
+
+# This file describes the settings to be used by the documentation system
+# doxygen (www.doxygen.org) for a project.
+#
+# All text after a double hash (##) is considered a comment and is placed in
+# front of the TAG it is preceding.
+#
+# All text after a single hash (#) is considered a comment and will be ignored.
+# The format is:
+# TAG = value [value, ...]
+# For lists, items can also be appended using:
+# TAG += value [value, ...]
+# Values that contain spaces should be placed between quotes (\" \").
+#
+# This file is based on doc/zephyr.doxyfile.in from Zephyr 2.3
+
+#---------------------------------------------------------------------------
+# Project related configuration options
+#---------------------------------------------------------------------------
+
+# This tag specifies the encoding used for all characters in the config file
+# that follow. The default is UTF-8 which is also the encoding used for all text
+# before the first occurrence of this tag. Doxygen uses libiconv (or the iconv
+# built into libc) for the transcoding. See
+# https://www.gnu.org/software/libiconv/ for the list of possible encodings.
+# The default value is: UTF-8.
+
+DOXYFILE_ENCODING      = UTF-8
+
+# The PROJECT_NAME tag is a single word (or a sequence of words surrounded by
+# double-quotes, unless you are using Doxywizard) that should identify the
+# project for which the documentation is generated. This name is used in the
+# title of most generated pages and in a few other places.
+# The default value is: My Project.
+
+PROJECT_NAME           = "Xen Project"
+
+# The PROJECT_NUMBER tag can be used to enter a project or revision number. This
+# could be handy for archiving the generated documentation or if some version
+# control system is used.
+
+PROJECT_NUMBER         =
+
+# Using the PROJECT_BRIEF tag one can provide an optional one line description
+# for a project that appears at the top of each page and should give viewer a
+# quick idea about the purpose of the project. Keep the description short.
+
+PROJECT_BRIEF          = "An Open Source Type 1 Hypervisor"
+
+# With the PROJECT_LOGO tag one can specify a logo or an icon that is included
+# in the documentation. The maximum height of the logo should not exceed 55
+# pixels and the maximum width should not exceed 200 pixels. Doxygen will copy
+# the logo to the output directory.
+
+PROJECT_LOGO           = "xen-doxygen/xen_project_logo_165x67.png"
+
+# The OUTPUT_DIRECTORY tag is used to specify the (relative or absolute) path
+# into which the generated documentation will be written. If a relative path is
+# entered, it will be relative to the location where doxygen was started. If
+# left blank the current directory will be used.
+
+OUTPUT_DIRECTORY       = @DOXY_OUT@
+
+# If the CREATE_SUBDIRS tag is set to YES then doxygen will create 4096 sub-
+# directories (in 2 levels) under the output directory of each output format and
+# will distribute the generated files over these directories. Enabling this
+# option can be useful when feeding doxygen a huge amount of source files, where
+# putting all generated files in the same directory would otherwise causes
+# performance problems for the file system.
+# The default value is: NO.
+
+CREATE_SUBDIRS         = NO
+
+# The OUTPUT_LANGUAGE tag is used to specify the language in which all
+# documentation generated by doxygen is written. Doxygen will use this
+# information to generate all constant output in the proper language.
+# Possible values are: Afrikaans, Arabic, Armenian, Brazilian, Catalan, Chinese,
+# Chinese-Traditional, Croatian, Czech, Danish, Dutch, English (United States),
+# Esperanto, Farsi (Persian), Finnish, French, German, Greek, Hungarian,
+# Indonesian, Italian, Japanese, Japanese-en (Japanese with English messages),
+# Korean, Korean-en (Korean with English messages), Latvian, Lithuanian,
+# Macedonian, Norwegian, Persian (Farsi), Polish, Portuguese, Romanian, Russian,
+# Serbian, Serbian-Cyrillic, Slovak, Slovene, Spanish, Swedish, Turkish,
+# Ukrainian and Vietnamese.
+# The default value is: English.
+
+OUTPUT_LANGUAGE        = English
+
+# If the BRIEF_MEMBER_DESC tag is set to YES, doxygen will include brief member
+# descriptions after the members that are listed in the file and class
+# documentation (similar to Javadoc). Set to NO to disable this.
+# The default value is: YES.
+
+BRIEF_MEMBER_DESC      = YES
+
+# If the REPEAT_BRIEF tag is set to YES, doxygen will prepend the brief
+# description of a member or function before the detailed description
+#
+# Note: If both HIDE_UNDOC_MEMBERS and BRIEF_MEMBER_DESC are set to NO, the
+# brief descriptions will be completely suppressed.
+# The default value is: YES.
+
+REPEAT_BRIEF           = YES
+
+# This tag implements a quasi-intelligent brief description abbreviator that is
+# used to form the text in various listings. Each string in this list, if found
+# as the leading text of the brief description, will be stripped from the text
+# and the result, after processing the whole list, is used as the annotated
+# text. Otherwise, the brief description is used as-is. If left blank, the
+# following values are used ($name is automatically replaced with the name of
+# the entity):The $name class, The $name widget, The $name file, is, provides,
+# specifies, contains, represents, a, an and the.
+
+ABBREVIATE_BRIEF       = YES
+
+# If the ALWAYS_DETAILED_SEC and REPEAT_BRIEF tags are both set to YES then
+# doxygen will generate a detailed section even if there is only a brief
+# description.
+# The default value is: NO.
+
+ALWAYS_DETAILED_SEC    = YES
+
+# If the INLINE_INHERITED_MEMB tag is set to YES, doxygen will show all
+# inherited members of a class in the documentation of that class as if those
+# members were ordinary class members. Constructors, destructors and assignment
+# operators of the base classes will not be shown.
+# The default value is: NO.
+
+INLINE_INHERITED_MEMB  = YES
+
+# If the FULL_PATH_NAMES tag is set to YES, doxygen will prepend the full path
+# before file names in the file list and in the header files. If set to NO the
+# shortest path that makes the file name unique will be used
+# The default value is: YES.
+
+FULL_PATH_NAMES        = YES
+
+# The STRIP_FROM_PATH tag can be used to strip a user-defined part of the path.
+# Stripping is only done if one of the specified strings matches the left-hand
+# part of the path. The tag can be used to show relative paths in the file list.
+# If left blank the directory from which doxygen is run is used as the path to
+# strip.
+#
+# Note that you can specify absolute paths here, but also relative paths, which
+# will be relative from the directory where doxygen is started.
+# This tag requires that the tag FULL_PATH_NAMES is set to YES.
+
+STRIP_FROM_PATH        = @XEN_BASE@
+
+# The STRIP_FROM_INC_PATH tag can be used to strip a user-defined part of the
+# path mentioned in the documentation of a class, which tells the reader which
+# header file to include in order to use a class. If left blank only the name of
+# the header file containing the class definition is used. Otherwise one should
+# specify the list of include paths that are normally passed to the compiler
+# using the -I flag.
+
+STRIP_FROM_INC_PATH    =
+
+# If the SHORT_NAMES tag is set to YES, doxygen will generate much shorter (but
+# less readable) file names. This can be useful if your file system doesn't
+# support long names like on DOS, Mac, or CD-ROM.
+# The default value is: NO.
+
+SHORT_NAMES            = NO
+
+# If the JAVADOC_AUTOBRIEF tag is set to YES then doxygen will interpret the
+# first line (until the first dot) of a Javadoc-style comment as the brief
+# description. If set to NO, the Javadoc-style will behave just like regular Qt-
+# style comments (thus requiring an explicit @brief command for a brief
+# description.)
+# The default value is: NO.
+
+JAVADOC_AUTOBRIEF      = NO
+
+# If the QT_AUTOBRIEF tag is set to YES then doxygen will interpret the first
+# line (until the first dot) of a Qt-style comment as the brief description. If
+# set to NO, the Qt-style will behave just like regular Qt-style comments (thus
+# requiring an explicit \brief command for a brief description.)
+# The default value is: NO.
+
+QT_AUTOBRIEF           = NO
+
+# The MULTILINE_CPP_IS_BRIEF tag can be set to YES to make doxygen treat a
+# multi-line C++ special comment block (i.e. a block of //! or /// comments) as
+# a brief description. This used to be the default behavior. The new default is
+# to treat a multi-line C++ comment block as a detailed description. Set this
+# tag to YES if you prefer the old behavior instead.
+#
+# Note that setting this tag to YES also means that Rational Rose comments are
+# not recognized any more.
+# The default value is: NO.
+
+MULTILINE_CPP_IS_BRIEF = NO
+
+# If the INHERIT_DOCS tag is set to YES then an undocumented member inherits the
+# documentation from any documented member that it re-implements.
+# The default value is: YES.
+
+INHERIT_DOCS           = YES
+
+# If the SEPARATE_MEMBER_PAGES tag is set to YES then doxygen will produce a new
+# page for each member. If set to NO, the documentation of a member will be part
+# of the file/class/namespace that contains it.
+# The default value is: NO.
+
+SEPARATE_MEMBER_PAGES  = YES
+
+# The TAB_SIZE tag can be used to set the number of spaces in a tab. Doxygen
+# uses this value to replace tabs by spaces in code fragments.
+# Minimum value: 1, maximum value: 16, default value: 4.
+
+TAB_SIZE               = 8
+
+# This tag can be used to specify a number of aliases that act as commands in
+# the documentation. An alias has the form:
+# name=value
+# For example adding
+# "sideeffect=@par Side Effects:\n"
+# will allow you to put the command \sideeffect (or @sideeffect) in the
+# documentation, which will result in a user-defined paragraph with heading
+# "Side Effects:". You can put \n's in the value part of an alias to insert
+# newlines.
+
+ALIASES                = "rst=\verbatim embed:rst:leading-asterisk" \
+                         "endrst=\endverbatim" \
+                         "keepindent=\code" \
+                         "endkeepindent=\endcode"
+
+ALIASES += req{1}="\ref XEN_\1 \"XEN-\1\" "
+ALIASES += satisfy{1}="\xrefitem satisfy \"Satisfies requirement\" \"Requirement Implementation\" \1"
+ALIASES += verify{1}="\xrefitem verify \"Verifies requirement\" \"Requirement Verification\" \1"
+
+# Set the OPTIMIZE_OUTPUT_FOR_C tag to YES if your project consists of C sources
+# only. Doxygen will then generate output that is more tailored for C. For
+# instance, some of the names that are used will be different. The list of all
+# members will be omitted, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_FOR_C  = YES
+
+# Set the OPTIMIZE_OUTPUT_JAVA tag to YES if your project consists of Java or
+# Python sources only. Doxygen will then generate output that is more tailored
+# for that language. For instance, namespaces will be presented as packages,
+# qualified scopes will look different, etc.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_JAVA   = NO
+
+# Set the OPTIMIZE_FOR_FORTRAN tag to YES if your project consists of Fortran
+# sources. Doxygen will then generate output that is tailored for Fortran.
+# The default value is: NO.
+
+OPTIMIZE_FOR_FORTRAN   = NO
+
+# Set the OPTIMIZE_OUTPUT_VHDL tag to YES if your project consists of VHDL
+# sources. Doxygen will then generate output that is tailored for VHDL.
+# The default value is: NO.
+
+OPTIMIZE_OUTPUT_VHDL   = NO
+
+# Doxygen selects the parser to use depending on the extension of the files it
+# parses. With this tag you can assign which parser to use for a given
+# extension. Doxygen has a built-in mapping, but you can override or extend it
+# using this tag. The format is ext=language, where ext is a file extension, and
+# language is one of the parsers supported by doxygen: IDL, Java, Javascript,
+# C#, C, C++, D, PHP, Objective-C, Python, Fortran (fixed format Fortran:
+# FortranFixed, free formatted Fortran: FortranFree, unknown formatted Fortran:
+# Fortran. In the latter case the parser tries to guess whether the code is fixed
+# or free formatted code, this is the default for Fortran type files), VHDL. For
+# instance to make doxygen treat .inc files as Fortran files (default is PHP),
+# and .f files as C (default is Fortran), use: inc=Fortran f=C.
+#
+# Note: For files without extension you can use no_extension as a placeholder.
+#
+# Note that for custom extensions you also need to set FILE_PATTERNS otherwise
+# the files are not read by doxygen.
+
+EXTENSION_MAPPING      =
+
+# If the MARKDOWN_SUPPORT tag is enabled then doxygen pre-processes all comments
+# according to the Markdown format, which allows for more readable
+# documentation. See http://daringfireball.net/projects/markdown/ for details.
+# The output of markdown processing is further processed by doxygen, so you can
+# mix doxygen, HTML, and XML commands with Markdown formatting. Disable only in
+# case of backward compatibility issues.
+# The default value is: YES.
+
+MARKDOWN_SUPPORT       = YES
+
+# When enabled doxygen tries to link words that correspond to documented
+# classes, or namespaces to their corresponding documentation. Such a link can
+# be prevented in individual cases by putting a % sign in front of the word or
+# globally by setting AUTOLINK_SUPPORT to NO.
+# The default value is: YES.
+
+AUTOLINK_SUPPORT       = YES
+
+# If you use STL classes (i.e. std::string, std::vector, etc.) but do not want
+# to include (a tag file for) the STL sources as input, then you should set this
+# tag to YES in order to let doxygen match functions declarations and
+# definitions whose arguments contain STL classes (e.g. func(std::string);
+# versus func(std::string) {}). This also makes the inheritance and collaboration
+# diagrams that involve STL classes more complete and accurate.
+# The default value is: NO.
+
+BUILTIN_STL_SUPPORT    = NO
+
+# If you use Microsoft's C++/CLI language, you should set this option to YES to
+# enable parsing support.
+# The default value is: NO.
+
+CPP_CLI_SUPPORT        = YES
+
+# Set the SIP_SUPPORT tag to YES if your project consists of sip (see:
+# https://www.riverbankcomputing.com/software/sip/intro) sources only. Doxygen
+# will parse them like normal C++ but will assume all classes use public instead
+# of private inheritance when no explicit protection keyword is present.
+# The default value is: NO.
+
+SIP_SUPPORT            = NO
+
+# For Microsoft's IDL there are propget and propput attributes to indicate
+# getter and setter methods for a property. Setting this option to YES will make
+# doxygen to replace the get and set methods by a property in the documentation.
+# This will only work if the methods are indeed getting or setting a simple
+# type. If this is not the case, or you want to show the methods anyway, you
+# should set this option to NO.
+# The default value is: YES.
+
+IDL_PROPERTY_SUPPORT   = YES
+
+# If member grouping is used in the documentation and the DISTRIBUTE_GROUP_DOC
+# tag is set to YES then doxygen will reuse the documentation of the first
+# member in the group (if any) for the other members of the group. By default
+# all members of a group must be documented explicitly.
+# The default value is: NO.
+
+DISTRIBUTE_GROUP_DOC   = NO
+
+# Set the SUBGROUPING tag to YES to allow class member groups of the same type
+# (for instance a group of public functions) to be put as a subgroup of that
+# type (e.g. under the Public Functions section). Set it to NO to prevent
+# subgrouping. Alternatively, this can be done per class using the
+# \nosubgrouping command.
+# The default value is: YES.
+
+SUBGROUPING            = YES
+
+# When the INLINE_GROUPED_CLASSES tag is set to YES, classes, structs and unions
+# are shown inside the group in which they are included (e.g. using \ingroup)
+# instead of on a separate page (for HTML and Man pages) or section (for LaTeX
+# and RTF).
+#
+# Note that this feature does not work in combination with
+# SEPARATE_MEMBER_PAGES.
+# The default value is: NO.
+
+INLINE_GROUPED_CLASSES = NO
+
+# When the INLINE_SIMPLE_STRUCTS tag is set to YES, structs, classes, and unions
+# with only public data fields or simple typedef fields will be shown inline in
+# the documentation of the scope in which they are defined (i.e. file,
+# namespace, or group documentation), provided this scope is documented. If set
+# to NO, structs, classes, and unions are shown on a separate page (for HTML and
+# Man pages) or section (for LaTeX and RTF).
+# The default value is: NO.
+
+INLINE_SIMPLE_STRUCTS  = YES
+
+# When TYPEDEF_HIDES_STRUCT tag is enabled, a typedef of a struct, union, or
+# enum is documented as struct, union, or enum with the name of the typedef. So
+# typedef struct TypeS {} TypeT, will appear in the documentation as a struct
+# with name TypeT. When disabled the typedef will appear as a member of a file,
+# namespace, or class. And the struct will be named TypeS. This can typically be
+# useful for C code in case the coding convention dictates that all compound
+# types are typedef'ed and only the typedef is referenced, never the tag name.
+# The default value is: NO.
+
+TYPEDEF_HIDES_STRUCT   = NO
+
+# The size of the symbol lookup cache can be set using LOOKUP_CACHE_SIZE. This
+# cache is used to resolve symbols given their name and scope. Since this can be
+# an expensive process and often the same symbol appears multiple times in the
+# code, doxygen keeps a cache of pre-resolved symbols. If the cache is too small
+# doxygen will become slower. If the cache is too large, memory is wasted. The
+# cache size is given by this formula: 2^(16+LOOKUP_CACHE_SIZE). The valid range
+# is 0..9, the default is 0, corresponding to a cache size of 2^16=65536
+# symbols. At the end of a run doxygen will report the cache usage and suggest
+# the optimal cache size from a speed point of view.
+# Minimum value: 0, maximum value: 9, default value: 0.
+
+LOOKUP_CACHE_SIZE      = 9
+
+#---------------------------------------------------------------------------
+# Build related configuration options
+#---------------------------------------------------------------------------
+
+# If the EXTRACT_ALL tag is set to YES, doxygen will assume all entities in
+# documentation are documented, even if no documentation was available. Private
+# class members and static file members will be hidden unless the
+# EXTRACT_PRIVATE respectively EXTRACT_STATIC tags are set to YES.
+# Note: This will also disable the warnings about undocumented members that are
+# normally produced when WARNINGS is set to YES.
+# The default value is: NO.
+
+EXTRACT_ALL            = YES
+
+# If the EXTRACT_PRIVATE tag is set to YES, all private members of a class will
+# be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PRIVATE        = NO
+
+# If the EXTRACT_PACKAGE tag is set to YES, all members with package or internal
+# scope will be included in the documentation.
+# The default value is: NO.
+
+EXTRACT_PACKAGE        = YES
+
+# If the EXTRACT_STATIC tag is set to YES, all static members of a file will be
+# included in the documentation.
+# The default value is: NO.
+
+EXTRACT_STATIC         = YES
+
+# If the EXTRACT_LOCAL_CLASSES tag is set to YES, classes (and structs) defined
+# locally in source files will be included in the documentation. If set to NO,
+# only classes defined in header files are included. Does not have any effect
+# for Java sources.
+# The default value is: YES.
+
+EXTRACT_LOCAL_CLASSES  = YES
+
+# This flag is only useful for Objective-C code. If set to YES, local methods,
+# which are defined in the implementation section but not in the interface, are
+# included in the documentation. If set to NO, only methods in the interface are
+# included.
+# The default value is: NO.
+
+EXTRACT_LOCAL_METHODS  = YES
+
+# If this flag is set to YES, the members of anonymous namespaces will be
+# extracted and appear in the documentation as a namespace called
+# 'anonymous_namespace{file}', where file will be replaced with the base name of
+# the file that contains the anonymous namespace. By default anonymous
+# namespaces are hidden.
+# The default value is: NO.
+
+EXTRACT_ANON_NSPACES   = NO
+
+# If the HIDE_UNDOC_MEMBERS tag is set to YES, doxygen will hide all
+# undocumented members inside documented classes or files. If set to NO these
+# members will be included in the various overviews, but no documentation
+# section is generated. This option has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_MEMBERS     = NO
+
+# If the HIDE_UNDOC_CLASSES tag is set to YES, doxygen will hide all
+# undocumented classes that are normally visible in the class hierarchy. If set
+# to NO, these classes will be included in the various overviews. This option
+# has no effect if EXTRACT_ALL is enabled.
+# The default value is: NO.
+
+HIDE_UNDOC_CLASSES     = NO
+
+# If the HIDE_FRIEND_COMPOUNDS tag is set to YES, doxygen will hide all friend
+# (class|struct|union) declarations. If set to NO, these declarations will be
+# included in the documentation.
+# The default value is: NO.
+
+HIDE_FRIEND_COMPOUNDS  = NO
+
+# If the HIDE_IN_BODY_DOCS tag is set to YES, doxygen will hide any
+# documentation blocks found inside the body of a function. If set to NO, these
+# blocks will be appended to the function's detailed documentation block.
+# The default value is: NO.
+
+HIDE_IN_BODY_DOCS      = NO
+
+# The INTERNAL_DOCS tag determines if documentation that is typed after a
+# \internal command is included. If the tag is set to NO then the documentation
+# will be excluded. Set it to YES to include the internal documentation.
+# The default value is: NO.
+
+INTERNAL_DOCS          = NO
+
+# If the CASE_SENSE_NAMES tag is set to NO then doxygen will only generate file
+# names in lower-case letters. If set to YES, upper-case letters are also
+# allowed. This is useful if you have classes or files whose names only differ
+# in case and if your file system supports case sensitive file names. Windows
+# and Mac users are advised to set this option to NO.
+# The default value is: system dependent.
+
+CASE_SENSE_NAMES       = YES
+
+# If the HIDE_SCOPE_NAMES tag is set to NO then doxygen will show members with
+# their full class and namespace scopes in the documentation. If set to YES, the
+# scope will be hidden.
+# The default value is: NO.
+
+HIDE_SCOPE_NAMES       = NO
+
+# If the SHOW_INCLUDE_FILES tag is set to YES then doxygen will put a list of
+# the files that are included by a file in the documentation of that file.
+# The default value is: YES.
+
+SHOW_INCLUDE_FILES     = YES
+
+# If the SHOW_GROUPED_MEMB_INC tag is set to YES then Doxygen will add for each
+# grouped member an include statement to the documentation, telling the reader
+# which file to include in order to use the member.
+# The default value is: NO.
+
+SHOW_GROUPED_MEMB_INC  = YES
+
+# If the FORCE_LOCAL_INCLUDES tag is set to YES then doxygen will list include
+# files with double quotes in the documentation rather than with sharp brackets.
+# The default value is: NO.
+
+FORCE_LOCAL_INCLUDES   = NO
+
+# If the INLINE_INFO tag is set to YES then a tag [inline] is inserted in the
+# documentation for inline members.
+# The default value is: YES.
+
+INLINE_INFO            = YES
+
+# If the SORT_MEMBER_DOCS tag is set to YES then doxygen will sort the
+# (detailed) documentation of file and class members alphabetically by member
+# name. If set to NO, the members will appear in declaration order.
+# The default value is: YES.
+
+SORT_MEMBER_DOCS       = YES
+
+# If the SORT_BRIEF_DOCS tag is set to YES then doxygen will sort the brief
+# descriptions of file, namespace and class members alphabetically by member
+# name. If set to NO, the members will appear in declaration order. Note that
+# this will also influence the order of the classes in the class list.
+# The default value is: NO.
+
+SORT_BRIEF_DOCS        = NO
+
+# If the SORT_MEMBERS_CTORS_1ST tag is set to YES then doxygen will sort the
+# (brief and detailed) documentation of class members so that constructors and
+# destructors are listed first. If set to NO the constructors will appear in the
+# respective orders defined by SORT_BRIEF_DOCS and SORT_MEMBER_DOCS.
+# Note: If SORT_BRIEF_DOCS is set to NO this option is ignored for sorting brief
+# member documentation.
+# Note: If SORT_MEMBER_DOCS is set to NO this option is ignored for sorting
+# detailed member documentation.
+# The default value is: NO.
+
+SORT_MEMBERS_CTORS_1ST = NO
+
+# If the SORT_GROUP_NAMES tag is set to YES then doxygen will sort the hierarchy
+# of group names into alphabetical order. If set to NO the group names will
+# appear in their defined order.
+# The default value is: NO.
+
+SORT_GROUP_NAMES       = YES
+
+# If the SORT_BY_SCOPE_NAME tag is set to YES, the class list will be sorted by
+# fully-qualified names, including namespaces. If set to NO, the class list will
+# be sorted only by class name, not including the namespace part.
+# Note: This option is not very useful if HIDE_SCOPE_NAMES is set to YES.
+# Note: This option applies only to the class list, not to the alphabetical
+# list.
+# The default value is: NO.
+
+SORT_BY_SCOPE_NAME     = YES
+
+# If the STRICT_PROTO_MATCHING option is enabled and doxygen fails to do proper
+# type resolution of all parameters of a function it will reject a match between
+# the prototype and the implementation of a member function even if there is
+# only one candidate or it is obvious which candidate to choose by doing a
+# simple string match. By disabling STRICT_PROTO_MATCHING doxygen will still
+# accept a match between prototype and implementation in such cases.
+# The default value is: NO.
+
+STRICT_PROTO_MATCHING  = YES
+
+# The GENERATE_TODOLIST tag can be used to enable (YES) or disable (NO) the todo
+# list. This list is created by putting \todo commands in the documentation.
+# The default value is: YES.
+
+GENERATE_TODOLIST      = NO
+
+# The GENERATE_TESTLIST tag can be used to enable (YES) or disable (NO) the test
+# list. This list is created by putting \test commands in the documentation.
+# The default value is: YES.
+
+GENERATE_TESTLIST      = NO
+
+# The GENERATE_BUGLIST tag can be used to enable (YES) or disable (NO) the bug
+# list. This list is created by putting \bug commands in the documentation.
+# The default value is: YES.
+
+GENERATE_BUGLIST       = NO
+
+# The GENERATE_DEPRECATEDLIST tag can be used to enable (YES) or disable (NO)
+# the deprecated list. This list is created by putting \deprecated commands in
+# the documentation.
+# The default value is: YES.
+
+GENERATE_DEPRECATEDLIST= YES
+
+# The ENABLED_SECTIONS tag can be used to enable conditional documentation
+# sections, marked by \if <section_label> ... \endif and \cond <section_label>
+# ... \endcond blocks.
+
+ENABLED_SECTIONS       = YES
+
+# The MAX_INITIALIZER_LINES tag determines the maximum number of lines that the
+# initial value of a variable or macro / define can have for it to appear in the
+# documentation. If the initializer consists of more lines than specified here
+# it will be hidden. Use a value of 0 to hide initializers completely. The
+# appearance of the value of individual variables and macros / defines can be
+# controlled using \showinitializer or \hideinitializer command in the
+# documentation regardless of this setting.
+# Minimum value: 0, maximum value: 10000, default value: 30.
+
+MAX_INITIALIZER_LINES  = 300
+
+# Set the SHOW_USED_FILES tag to NO to disable the list of files generated at
+# the bottom of the documentation of classes and structs. If set to YES, the
+# list will mention the files that were used to generate the documentation.
+# The default value is: YES.
+
+SHOW_USED_FILES        = YES
+
+# Set the SHOW_FILES tag to NO to disable the generation of the Files page. This
+# will remove the Files entry from the Quick Index and from the Folder Tree View
+# (if specified).
+# The default value is: YES.
+
+SHOW_FILES             = YES
+
+# Set the SHOW_NAMESPACES tag to NO to disable the generation of the Namespaces
+# page. This will remove the Namespaces entry from the Quick Index and from the
+# Folder Tree View (if specified).
+# The default value is: YES.
+
+SHOW_NAMESPACES        = YES
+
+# The FILE_VERSION_FILTER tag can be used to specify a program or script that
+# doxygen should invoke to get the current version for each file (typically from
+# the version control system). Doxygen will invoke the program by executing (via
+# popen()) the command <command> <input-file>, where <command> is the value of
+# the FILE_VERSION_FILTER tag, and <input-file> is the name of an input file
+# provided by doxygen. Whatever the program writes to standard output is used
+# as the file version. For an example see the documentation.
+
+FILE_VERSION_FILTER    =
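+# (Illustrative only, not part of this configuration: a git-based filter could
+# be set as FILE_VERSION_FILTER = "git log -1 --format=%h", making doxygen run
+# git log -1 --format=%h <input-file> and use the printed commit hash as the
+# file version.)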
+
+# The LAYOUT_FILE tag can be used to specify a layout file which will be parsed
+# by doxygen. The layout file controls the global structure of the generated
+# output files in an output format independent way. To create the layout file
+# that represents doxygen's defaults, run doxygen with the -l option. You can
+# optionally specify a file name after the option, if omitted DoxygenLayout.xml
+# will be used as the name of the layout file.
+#
+# Note that if you run doxygen from a directory containing a file called
+# DoxygenLayout.xml, doxygen will parse it automatically even if the LAYOUT_FILE
+# tag is left empty.
+
+LAYOUT_FILE            =
+
+# The CITE_BIB_FILES tag can be used to specify one or more bib files containing
+# the reference definitions. This must be a list of .bib files. The .bib
+# extension is automatically appended if omitted. This requires the bibtex tool
+# to be installed. See also https://en.wikipedia.org/wiki/BibTeX for more info.
+# For LaTeX the style of the bibliography can be controlled using
+# LATEX_BIB_STYLE. To use this feature you need bibtex and perl available in the
+# search path. See also \cite for info how to create references.
+
+CITE_BIB_FILES         =
+
+#---------------------------------------------------------------------------
+# Configuration options related to warning and progress messages
+#---------------------------------------------------------------------------
+
+# The QUIET tag can be used to turn on/off the messages that are generated to
+# standard output by doxygen. If QUIET is set to YES this implies that the
+# messages are off.
+# The default value is: NO.
+
+QUIET                  = YES
+
+# The WARNINGS tag can be used to turn on/off the warning messages that are
+# generated to standard error (stderr) by doxygen. If WARNINGS is set to YES
+# this implies that the warnings are on.
+#
+# Tip: Turn warnings on while writing the documentation.
+# The default value is: YES.
+
+WARNINGS               = YES
+
+# If the WARN_IF_UNDOCUMENTED tag is set to YES then doxygen will generate
+# warnings for undocumented members. If EXTRACT_ALL is set to YES then this flag
+# will automatically be disabled.
+# The default value is: YES.
+
+WARN_IF_UNDOCUMENTED   = YES
+
+# If the WARN_IF_DOC_ERROR tag is set to YES, doxygen will generate warnings for
+# potential errors in the documentation, such as not documenting some parameters
+# in a documented function, or documenting parameters that don't exist or using
+# markup commands wrongly.
+# The default value is: YES.
+
+WARN_IF_DOC_ERROR      = YES
+
+# This WARN_NO_PARAMDOC option can be enabled to get warnings for functions that
+# are documented, but have no documentation for their parameters or return
+# value. If set to NO, doxygen will only warn about wrong or incomplete
+# parameter documentation, but not about the absence of documentation.
+# The default value is: NO.
+
+WARN_NO_PARAMDOC       = NO
+
+# The WARN_FORMAT tag determines the format of the warning messages that doxygen
+# can produce. The string should contain the $file, $line, and $text tags, which
+# will be replaced by the file and line number from which the warning originated
+# and the warning text. Optionally the format may contain $version, which will
+# be replaced by the version of the file (if it could be obtained via
+# FILE_VERSION_FILTER)
+# The default value is: $file:$line: $text.
+
+WARN_FORMAT            = "$file:$line: $text"
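+# (Illustrative only: tools that expect MSVC-style diagnostics can instead use
+# e.g. WARN_FORMAT = "$file($line) : $text" so that warnings are clickable in
+# such IDEs.)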
+
+# The WARN_LOGFILE tag can be used to specify a file to which warning and error
+# messages should be written. If left blank the output is written to standard
+# error (stderr).
+
+WARN_LOGFILE           =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the input files
+#---------------------------------------------------------------------------
+
+# The INPUT tag is used to specify the files and/or directories that contain
+# documented source files. You may enter file names like myfile.cpp or
+# directories like /usr/src/myproject. Separate the files or directories with
+# spaces. See also FILE_PATTERNS and EXTENSION_MAPPING
+# Note: If this tag is empty the current directory is searched.
+
+INPUT                  = "@XEN_BASE@/docs/xen-doxygen/mainpage.md"
+
+# This tag can be used to specify the character encoding of the source files
+# that doxygen parses. Internally doxygen uses the UTF-8 encoding. Doxygen uses
+# libiconv (or the iconv built into libc) for the transcoding. See the libiconv
+# documentation (see: https://www.gnu.org/software/libiconv/) for the list of
+# possible encodings.
+# The default value is: UTF-8.
+
+INPUT_ENCODING         = UTF-8
+
+# If the value of the INPUT tag contains directories, you can use the
+# FILE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# read by doxygen.
+#
+# If left blank the following patterns are tested: *.c, *.cc, *.cxx, *.cpp,
+# *.c++, *.java, *.ii, *.ixx, *.ipp, *.i++, *.inl, *.idl, *.ddl, *.odl, *.h,
+# *.hh, *.hxx, *.hpp, *.h++, *.cs, *.d, *.php, *.php4, *.php5, *.phtml, *.inc,
+# *.m, *.markdown, *.md, *.mm, *.dox, *.py, *.pyw, *.f90, *.f95, *.f03, *.f08,
+# *.f, *.for, *.tcl, *.vhd, *.vhdl, *.ucf and *.qsf.
+
+# This MUST be kept in sync with DOXY_SOURCES in doc/CMakeLists.txt
+# for incremental (and faster) builds to work correctly.
+FILE_PATTERNS          = "*.c" \
+                         "*.h" \
+                         "*.S" \
+                         "*.md"
+
+# The RECURSIVE tag can be used to specify whether or not subdirectories should
+# be searched for input files as well.
+# The default value is: NO.
+
+RECURSIVE              = YES
+
+# The EXCLUDE tag can be used to specify files and/or directories that should be
+# excluded from the INPUT source files. This way you can easily exclude a
+# subdirectory from a directory tree whose root is specified with the INPUT tag.
+#
+# Note that relative paths are relative to the directory from which doxygen is
+# run.
+
+EXCLUDE                = @XEN_BASE@/include/nothing.h
+
+# The EXCLUDE_SYMLINKS tag can be used to select whether or not files or
+# directories that are symbolic links (a Unix file system feature) are excluded
+# from the input.
+# The default value is: NO.
+
+EXCLUDE_SYMLINKS       = NO
+
+# If the value of the INPUT tag contains directories, you can use the
+# EXCLUDE_PATTERNS tag to specify one or more wildcard patterns to exclude
+# certain files from those directories.
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories for example use the pattern */test/*
+
+EXCLUDE_PATTERNS       =
+
+# The EXCLUDE_SYMBOLS tag can be used to specify one or more symbol names
+# (namespaces, classes, functions, etc.) that should be excluded from the
+# output. The symbol name can be a fully qualified name, a word, or if the
+# wildcard * is used, a substring. Examples: ANamespace, AClass,
+# AClass::ANamespace, ANamespace::*Test
+#
+# Note that the wildcards are matched against the file with absolute path, so to
+# exclude all test directories use the pattern */test/*
+
+# Hide internal names (starting with an underscore) and doxygen-generated names
+# for nested unnamed unions, which don't generate meaningful sphinx output
+# anyway.
+EXCLUDE_SYMBOLS        =
+# _*  *.__unnamed__ z_* Z_*
+
+# The EXAMPLE_PATH tag can be used to specify one or more files or directories
+# that contain example code fragments that are included (see the \include
+# command).
+
+EXAMPLE_PATH           =
+
+# If the value of the EXAMPLE_PATH tag contains directories, you can use the
+# EXAMPLE_PATTERNS tag to specify one or more wildcard patterns (like *.cpp and
+# *.h) to filter out the source-files in the directories. If left blank all
+# files are included.
+
+EXAMPLE_PATTERNS       =
+
+# If the EXAMPLE_RECURSIVE tag is set to YES then subdirectories will be
+# searched for input files to be used with the \include or \dontinclude commands
+# irrespective of the value of the RECURSIVE tag.
+# The default value is: NO.
+
+EXAMPLE_RECURSIVE      = YES
+
+# The IMAGE_PATH tag can be used to specify one or more files or directories
+# that contain images that are to be included in the documentation (see the
+# \image command).
+
+IMAGE_PATH             =
+
+# The INPUT_FILTER tag can be used to specify a program that doxygen should
+# invoke to filter for each input file. Doxygen will invoke the filter program
+# by executing (via popen()) the command:
+#
+# <filter> <input-file>
+#
+# where <filter> is the value of the INPUT_FILTER tag, and <input-file> is the
+# name of an input file. Doxygen will then use the output that the filter
+# program writes to standard output. If FILTER_PATTERNS is specified, this tag
+# will be ignored.
+#
+# Note that the filter must not add or remove lines; it is applied before the
+# code is scanned, but not when the output code is generated. If lines are added
+# or removed, the anchors will not be placed correctly.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# properly processed by doxygen.
+
+INPUT_FILTER           =
+
+# The FILTER_PATTERNS tag can be used to specify filters on a per file pattern
+# basis. Doxygen will compare the file name with each pattern and apply the
+# filter if there is a match. The filters are a list of the form: pattern=filter
+# (like *.cpp=my_cpp_filter). See INPUT_FILTER for further information on how
+# filters are used. If the FILTER_PATTERNS tag is empty or if none of the
+# patterns match the file name, INPUT_FILTER is applied.
+#
+# Note that for custom extensions or not directly supported extensions you also
+# need to set EXTENSION_MAPPING for the extension otherwise the files are not
+# properly processed by doxygen.
+
+FILTER_PATTERNS     = *.h="\"@XEN_BASE@/docs/xen-doxygen/doxy-preprocessor.py\""
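+# (With the pattern above, each *.h input file is piped through the command
+# "@XEN_BASE@/docs/xen-doxygen/doxy-preprocessor.py" <input-file>, and doxygen
+# parses the script's standard output instead of the header file itself.)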
+
+# If the FILTER_SOURCE_FILES tag is set to YES, the input filter (if set using
+# INPUT_FILTER) will also be used to filter the input files that are used for
+# producing the source files to browse (i.e. when SOURCE_BROWSER is set to YES).
+# The default value is: NO.
+
+FILTER_SOURCE_FILES    = NO
+
+# The FILTER_SOURCE_PATTERNS tag can be used to specify source filters per file
+# pattern. A pattern will override the setting for FILTER_PATTERN (if any) and
+# it is also possible to disable source filtering for a specific pattern using
+# *.ext= (so without naming a filter).
+# This tag requires that the tag FILTER_SOURCE_FILES is set to YES.
+
+FILTER_SOURCE_PATTERNS =
+
+# If the USE_MDFILE_AS_MAINPAGE tag refers to the name of a markdown file that
+# is part of the input, its contents will be placed on the main page
+# (index.html). This can be useful if you have a project on, for instance,
+# and want to reuse the introduction page also for the doxygen output.
+
+USE_MDFILE_AS_MAINPAGE = "mainpage.md"
+
+#---------------------------------------------------------------------------
+# Configuration options related to source browsing
+#---------------------------------------------------------------------------
+
+# If the SOURCE_BROWSER tag is set to YES then a list of source files will be
+# generated. Documented entities will be cross-referenced with these sources.
+#
+# Note: To get rid of all source code in the generated output, make sure that
+# also VERBATIM_HEADERS is set to NO.
+# The default value is: NO.
+
+SOURCE_BROWSER         = NO
+
+# Setting the INLINE_SOURCES tag to YES will include the body of functions,
+# classes and enums directly into the documentation.
+# The default value is: NO.
+
+INLINE_SOURCES         = NO
+
+# Setting the STRIP_CODE_COMMENTS tag to YES will instruct doxygen to hide any
+# special comment blocks from generated source code fragments. Normal C, C++ and
+# Fortran comments will always remain visible.
+# The default value is: YES.
+
+STRIP_CODE_COMMENTS    = YES
+
+# If the REFERENCED_BY_RELATION tag is set to YES then for each documented
+# function all documented functions referencing it will be listed.
+# The default value is: NO.
+
+REFERENCED_BY_RELATION = NO
+
+# If the REFERENCES_RELATION tag is set to YES then for each documented function
+# all documented entities called/used by that function will be listed.
+# The default value is: NO.
+
+REFERENCES_RELATION    = NO
+
+# If the REFERENCES_LINK_SOURCE tag is set to YES and SOURCE_BROWSER tag is set
+# to YES then the hyperlinks from functions in REFERENCES_RELATION and
+# REFERENCED_BY_RELATION lists will link to the source code. Otherwise they will
+# link to the documentation.
+# The default value is: YES.
+
+REFERENCES_LINK_SOURCE = YES
+
+# If SOURCE_TOOLTIPS is enabled (the default) then hovering a hyperlink in the
+# source code will show a tooltip with additional information such as prototype,
+# brief description and links to the definition and documentation. Since this
+# will make the HTML file larger and loading of large files a bit slower, you
+# can opt to disable this feature.
+# The default value is: YES.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+SOURCE_TOOLTIPS        = YES
+
+# If the USE_HTAGS tag is set to YES then the references to source code will
+# point to the HTML generated by the htags(1) tool instead of doxygen's built-in
+# source browser. The htags tool is part of GNU's global source tagging system
+# (see https://www.gnu.org/software/global/global.html). You will need version
+# 4.8.6 or higher.
+#
+# To use it do the following:
+# - Install the latest version of global
+# - Enable SOURCE_BROWSER and USE_HTAGS in the config file
+# - Make sure the INPUT points to the root of the source tree
+# - Run doxygen as normal
+#
+# Doxygen will invoke htags (and that will in turn invoke gtags), so these
+# tools must be available from the command line (i.e. in the search path).
+#
+# The result: instead of the source browser generated by doxygen, the links to
+# source code will now point to the output of htags.
+# The default value is: NO.
+# This tag requires that the tag SOURCE_BROWSER is set to YES.
+
+USE_HTAGS              = NO
+
+# If the VERBATIM_HEADERS tag is set to YES then doxygen will generate a
+# verbatim copy of the header file for each class for which an include is
+# specified. Set to NO to disable this.
+# See also: Section \class.
+# The default value is: YES.
+
+VERBATIM_HEADERS       = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the alphabetical class index
+#---------------------------------------------------------------------------
+
+# If the ALPHABETICAL_INDEX tag is set to YES, an alphabetical index of all
+# compounds will be generated. Enable this if the project contains a lot of
+# classes, structs, unions or interfaces.
+# The default value is: YES.
+
+ALPHABETICAL_INDEX     = YES
+
+# The COLS_IN_ALPHA_INDEX tag can be used to specify the number of columns in
+# which the alphabetical index list will be split.
+# Minimum value: 1, maximum value: 20, default value: 5.
+# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
+
+COLS_IN_ALPHA_INDEX    = 2
+
+# In case all classes in a project start with a common prefix, all classes will
+# be put under the same header in the alphabetical index. The IGNORE_PREFIX tag
+# can be used to specify a prefix (or a list of prefixes) that should be ignored
+# while generating the index headers.
+# This tag requires that the tag ALPHABETICAL_INDEX is set to YES.
+
+IGNORE_PREFIX          =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the HTML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_HTML tag is set to YES, doxygen will generate HTML output
+# The default value is: YES.
+
+GENERATE_HTML          = YES
+
+# The HTML_OUTPUT tag is used to specify where the HTML docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: html.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_OUTPUT            = html
+
+# The HTML_FILE_EXTENSION tag can be used to specify the file extension for each
+# generated HTML page (for example: .htm, .php, .asp).
+# The default value is: .html.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FILE_EXTENSION    = .html
+
+# The HTML_HEADER tag can be used to specify a user-defined HTML header file for
+# each generated HTML page. If the tag is left blank doxygen will generate a
+# standard header.
+#
+# For the generated HTML to be valid, the header file must include any scripts
+# and style sheets that doxygen needs, which depend on the configuration
+# options used (e.g. the setting GENERATE_TREEVIEW). It is highly recommended
+# to start with a default header using
+# doxygen -w html new_header.html new_footer.html new_stylesheet.css
+# YourConfigFile
+# and then modify the file new_header.html. See also section "Doxygen usage"
+# for information on how to generate the default header that doxygen normally
+# uses.
+# Note: The header is subject to change so you typically have to regenerate the
+# default header when upgrading to a newer version of doxygen. For a description
+# of the possible markers and block names see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_HEADER            = xen-doxygen/header.html
+
+# The HTML_FOOTER tag can be used to specify a user-defined HTML footer for each
+# generated HTML page. If the tag is left blank doxygen will generate a standard
+# footer. See HTML_HEADER for more information on how to generate a default
+# footer and what special commands can be used inside the footer. See also
+# section "Doxygen usage" for information on how to generate the default footer
+# that doxygen normally uses.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_FOOTER            =
+
+# The HTML_STYLESHEET tag can be used to specify a user-defined cascading style
+# sheet that is used by each HTML page. It can be used to fine-tune the look of
+# the HTML output. If left blank doxygen will generate a default style sheet.
+# See also section "Doxygen usage" for information on how to generate the style
+# sheet that doxygen normally uses.
+# Note: It is recommended to use HTML_EXTRA_STYLESHEET instead of this tag, as
+# it is more robust and this tag (HTML_STYLESHEET) will in the future become
+# obsolete.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_STYLESHEET        =
+
+# The HTML_EXTRA_STYLESHEET tag can be used to specify additional user-defined
+# cascading style sheets that are included after the standard style sheets
+# created by doxygen. Using this option one can overrule certain style aspects.
+# This is preferred over using HTML_STYLESHEET since it does not replace the
+# standard style sheet and is therefore more robust against future updates.
+# Doxygen will copy the style sheet files to the output directory.
+# Note: The order of the extra style sheet files is of importance (e.g. the last
+# style sheet in the list overrules the setting of the previous ones in the
+# list). For an example see the documentation.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_EXTRA_STYLESHEET  = xen-doxygen/customdoxygen.css
+
+# The HTML_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the HTML output directory. Note
+# that these files will be copied to the base HTML output directory. Use the
+# $relpath^ marker in the HTML_HEADER and/or HTML_FOOTER files to load these
+# files. In the HTML_STYLESHEET file, use the file name only. Also note that the
+# files will be copied as-is; there are no commands or markers available.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_EXTRA_FILES       =
+
+# The HTML_COLORSTYLE_HUE tag controls the color of the HTML output. Doxygen
+# will adjust the colors in the style sheet and background images according to
+# this color. Hue is specified as an angle on a colorwheel, see
+# https://en.wikipedia.org/wiki/Hue for more information. For instance the value
+# 0 represents red, 60 is yellow, 120 is green, 180 is cyan, 240 is blue, 300
+# is purple, and 360 is red again.
+# Minimum value: 0, maximum value: 359, default value: 220.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_HUE    =
+
+# The HTML_COLORSTYLE_SAT tag controls the purity (or saturation) of the colors
+# in the HTML output. For a value of 0 the output will use grayscales only. A
+# value of 255 will produce the most vivid colors.
+# Minimum value: 0, maximum value: 255, default value: 100.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_SAT    =
+
+# The HTML_COLORSTYLE_GAMMA tag controls the gamma correction applied to the
+# luminance component of the colors in the HTML output. Values below 100
+# gradually make the output lighter, whereas values above 100 make the output
+# darker. The value divided by 100 is the actual gamma applied, so 80 represents
+# a gamma of 0.8. The value 220 represents a gamma of 2.2, and 100 does not
+# change the gamma.
+# Minimum value: 40, maximum value: 240, default value: 80.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_COLORSTYLE_GAMMA  =
+
+# If the HTML_TIMESTAMP tag is set to YES then the footer of each generated HTML
+# page will contain the date and time when the page was generated. Setting this
+# to YES can help to show when doxygen was last run and thus if the
+# documentation is up to date.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_TIMESTAMP         = YES
+
+# If the HTML_DYNAMIC_SECTIONS tag is set to YES then the generated HTML
+# documentation will contain sections that can be hidden and shown after the
+# page has loaded.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_DYNAMIC_SECTIONS  = YES
+
+# With HTML_INDEX_NUM_ENTRIES one can control the preferred number of entries
+# shown in the various tree structured indices initially; the user can expand
+# and collapse entries dynamically later on. Doxygen will expand the tree to
+# such a level that at most the specified number of entries are visible (unless
+# a fully collapsed tree already exceeds this amount). So setting the number of
+# entries to 1 will produce a fully collapsed tree by default. 0 is a special
+# value representing an infinite number of entries and will result in a fully
+# expanded tree by default.
+# Minimum value: 0, maximum value: 9999, default value: 100.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+HTML_INDEX_NUM_ENTRIES = 100
+
+# If the GENERATE_DOCSET tag is set to YES, additional index files will be
+# generated that can be used as input for Apple's Xcode 3 integrated development
+# environment (see: https://developer.apple.com/tools/xcode/), introduced with
+# OSX 10.5 (Leopard). To create a documentation set, doxygen will generate a
+# Makefile in the HTML output directory. Running make will produce the docset in
+# that directory and running make install will install the docset in
+# ~/Library/Developer/Shared/Documentation/DocSets so that Xcode will find it at
+# startup. See https://developer.apple.com/tools/creatingdocsetswithdoxygen.html
+# for more information.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_DOCSET        = YES
+
+# This tag determines the name of the docset feed. A documentation feed provides
+# an umbrella under which multiple documentation sets from a single provider
+# (such as a company or product suite) can be grouped.
+# The default value is: Doxygen generated docs.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_FEEDNAME        = "Doxygen generated docs"
+
+# This tag specifies a string that should uniquely identify the documentation
+# set bundle. This should be a reverse domain-name style string, e.g.
+# com.mycompany.MyDocSet. Doxygen will append .docset to the name.
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_BUNDLE_ID       = org.doxygen.Project
+
+# The DOCSET_PUBLISHER_ID tag specifies a string that should uniquely identify
+# the documentation publisher. This should be a reverse domain-name style
+# string, e.g. com.mycompany.MyDocSet.documentation.
+# The default value is: org.doxygen.Publisher.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_PUBLISHER_ID    = org.doxygen.Publisher
+
+# The DOCSET_PUBLISHER_NAME tag identifies the documentation publisher.
+# The default value is: Publisher.
+# This tag requires that the tag GENERATE_DOCSET is set to YES.
+
+DOCSET_PUBLISHER_NAME  = Publisher
+
+# If the GENERATE_HTMLHELP tag is set to YES then doxygen generates three
+# additional HTML index files: index.hhp, index.hhc, and index.hhk. The
+# index.hhp is a project file that can be read by Microsoft's HTML Help Workshop
+# (see: http://www.microsoft.com/en-us/download/details.aspx?id=21138) on
+# Windows.
+#
+# The HTML Help Workshop contains a compiler that can convert all HTML output
+# generated by doxygen into a single compiled HTML file (.chm). Compiled HTML
+# files are now used as the Windows 98 help format, and will replace the old
+# Windows help format (.hlp) on all Windows platforms in the future. Compressed
+# HTML files also contain an index and a table of contents, and support
+# searching for words in the documentation. The HTML Help Workshop also
+# contains a viewer for compressed HTML files.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_HTMLHELP      = NO
+
+# The CHM_FILE tag can be used to specify the file name of the resulting .chm
+# file. You can add a path in front of the file if the result should not be
+# written to the html output directory.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_FILE               =
+
+# The HHC_LOCATION tag can be used to specify the location (absolute path
+# including file name) of the HTML help compiler (hhc.exe). If non-empty,
+# doxygen will try to run the HTML help compiler on the generated index.hhp.
+# The file has to be specified with a full path.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+HHC_LOCATION           =
+
+# The GENERATE_CHI flag controls whether a separate .chi index file is generated
+# (YES) or the index is included in the master .chm file (NO).
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+GENERATE_CHI           = NO
+
+# The CHM_INDEX_ENCODING is used to encode HtmlHelp index (hhk), content (hhc)
+# and project file content.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+CHM_INDEX_ENCODING     =
+
+# The BINARY_TOC flag controls whether a binary table of contents is generated
+# (YES) or a normal table of contents (NO) in the .chm file. Furthermore it
+# enables the Previous and Next buttons.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+BINARY_TOC             = YES
+
+# The TOC_EXPAND flag can be set to YES to add extra items for group members to
+# the table of contents of the HTML help documentation and to the tree view.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTMLHELP is set to YES.
+
+TOC_EXPAND             = NO
+
+# If the GENERATE_QHP tag is set to YES and both QHP_NAMESPACE and
+# QHP_VIRTUAL_FOLDER are set, an additional index file will be generated that
+# can be used as input for Qt's qhelpgenerator to generate a Qt Compressed Help
+# (.qch) of the generated HTML documentation.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_QHP           = NO
+
+# If the QHG_LOCATION tag is specified, the QCH_FILE tag can be used to specify
+# the file name of the resulting .qch file. The path specified is relative to
+# the HTML output folder.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QCH_FILE               =
+
+# The QHP_NAMESPACE tag specifies the namespace to use when generating Qt Help
+# Project output. For more information please see Qt Help Project / Namespace
+# (see: http://doc.qt.io/qt-4.8/qthelpproject.html#namespace).
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_NAMESPACE          = org.doxygen.Project
+
+# The QHP_VIRTUAL_FOLDER tag specifies the namespace to use when generating Qt
+# Help Project output. For more information please see Qt Help Project / Virtual
+# Folders (see: http://doc.qt.io/qt-4.8/qthelpproject.html#virtual-folders).
+# The default value is: doc.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_VIRTUAL_FOLDER     = doc
+
+# If the QHP_CUST_FILTER_NAME tag is set, it specifies the name of a custom
+# filter to add. For more information please see Qt Help Project / Custom
+# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_CUST_FILTER_NAME   =
+
+# The QHP_CUST_FILTER_ATTRS tag specifies the list of the attributes of the
+# custom filter to add. For more information please see Qt Help Project / Custom
+# Filters (see: http://doc.qt.io/qt-4.8/qthelpproject.html#custom-filters).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_CUST_FILTER_ATTRS  =
+
+# The QHP_SECT_FILTER_ATTRS tag specifies the list of the attributes this
+# project's filter section matches. For more information please see Qt Help
+# Project / Filter Attributes (see:
+# http://doc.qt.io/qt-4.8/qthelpproject.html#filter-attributes).
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHP_SECT_FILTER_ATTRS  =
+
+# The QHG_LOCATION tag can be used to specify the location of Qt's
+# qhelpgenerator. If non-empty, doxygen will try to run qhelpgenerator on the
+# generated .qhp file.
+# This tag requires that the tag GENERATE_QHP is set to YES.
+
+QHG_LOCATION           =
+
+# If the GENERATE_ECLIPSEHELP tag is set to YES, additional index files will be
+# generated that, together with the HTML files, form an Eclipse help plugin. To
+# install this plugin and make it available under the help contents menu in
+# Eclipse, the contents of the directory containing the HTML and XML files need
+# to be copied into the plugins directory of Eclipse. The name of the directory
+# within the plugins directory should be the same as the ECLIPSE_DOC_ID value.
+# After copying, Eclipse needs to be restarted before the help appears.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_ECLIPSEHELP   = NO
+
+# A unique identifier for the Eclipse help plugin. When installing the plugin
+# the directory name containing the HTML and XML files should also have this
+# name. Each documentation set should have its own identifier.
+# The default value is: org.doxygen.Project.
+# This tag requires that the tag GENERATE_ECLIPSEHELP is set to YES.
+
+ECLIPSE_DOC_ID         = org.doxygen.Project
+
+# If you want full control over the layout of the generated HTML pages it might
+# be necessary to disable the index and replace it with your own. The
+# DISABLE_INDEX tag can be used to turn on/off the condensed index (tabs) at
+# the top of each HTML page. A value of NO enables the index and YES disables
+# it. Since the tabs in the index contain the same information as the navigation
+# tree, you can set this option to YES if you also set GENERATE_TREEVIEW to YES.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+DISABLE_INDEX          = NO
+
+# The GENERATE_TREEVIEW tag is used to specify whether a tree-like index
+# structure should be generated to display hierarchical information. If the tag
+# value is set to YES, a side panel will be generated containing a tree-like
+# index structure (just like the one that is generated for HTML Help). For this
+# to work a browser that supports JavaScript, DHTML, CSS and frames is required
+# (i.e. any modern browser). Windows users are probably better off using the
+# HTML help feature. Via custom style sheets (see HTML_EXTRA_STYLESHEET) one can
+# further fine-tune the look of the index. As an example, the default style
+# sheet generated by doxygen has an example that shows how to put an image at
+# the root of the tree instead of the PROJECT_NAME. Since the tree basically has
+# the same information as the tab index, you could consider setting
+# DISABLE_INDEX to YES when enabling this option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+GENERATE_TREEVIEW      = YES
+
+# The ENUM_VALUES_PER_LINE tag can be used to set the number of enum values that
+# doxygen will group on one line in the generated HTML documentation.
+#
+# Note that a value of 0 will completely suppress the enum values from appearing
+# in the overview section.
+# Minimum value: 0, maximum value: 20, default value: 4.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+ENUM_VALUES_PER_LINE   = 4
+
+# If the treeview is enabled (see GENERATE_TREEVIEW) then this tag can be used
+# to set the initial width (in pixels) of the frame in which the tree is shown.
+# Minimum value: 0, maximum value: 1500, default value: 250.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+TREEVIEW_WIDTH         = 250
+
+# If the EXT_LINKS_IN_WINDOW option is set to YES, doxygen will open links to
+# external symbols imported via tag files in a separate window.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+EXT_LINKS_IN_WINDOW    = NO
+
+# Use this tag to change the font size of LaTeX formulas included as images in
+# the HTML documentation. When you change the font size after a successful
+# doxygen run you need to manually remove any form_*.png images from the HTML
+# output directory to force them to be regenerated.
+# Minimum value: 8, maximum value: 50, default value: 10.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+FORMULA_FONTSIZE       = 10
+
+# Use the FORMULA_TRANSPARENT tag to determine whether or not the images
+# generated for formulas are transparent PNGs. Transparent PNGs are not
+# supported properly for IE 6.0, but are supported on all modern browsers.
+#
+# Note that when changing this option you need to delete any form_*.png files in
+# the HTML output directory before the changes take effect.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+FORMULA_TRANSPARENT    = YES
+
+# Enable the USE_MATHJAX option to render LaTeX formulas using MathJax (see
+# https://www.mathjax.org) which uses client-side JavaScript for the rendering
+# instead of using pre-rendered bitmaps. Use this if you do not have LaTeX
+# installed or if you want the formulas to look prettier in the HTML output. When
+# enabled you may also need to install MathJax separately and configure the path
+# to it using the MATHJAX_RELPATH option.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+USE_MATHJAX            = NO
+
+# When MathJax is enabled you can set the default output format to be used for
+# the MathJax output. See the MathJax site (see:
+# http://docs.mathjax.org/en/latest/output.html) for more details.
+# Possible values are: HTML-CSS (which is slower, but has the best
+# compatibility), NativeMML (i.e. MathML) and SVG.
+# The default value is: HTML-CSS.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_FORMAT         = HTML-CSS
+
+# When MathJax is enabled you need to specify the location relative to the HTML
+# output directory using the MATHJAX_RELPATH option. The destination directory
+# should contain the MathJax.js script. For instance, if the mathjax directory
+# is located at the same level as the HTML output directory, then
+# MATHJAX_RELPATH should be ../mathjax. The default value points to the MathJax
+# Content Delivery Network so you can quickly see the result without installing
+# MathJax. However, it is strongly recommended to install a local copy of
+# MathJax from https://www.mathjax.org before deployment.
+# The default value is: http://cdn.mathjax.org/mathjax/latest.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_RELPATH        = http://cdn.mathjax.org/mathjax/latest
+
+# The MATHJAX_EXTENSIONS tag can be used to specify one or more MathJax
+# extension names that should be enabled during MathJax rendering. For example
+# MATHJAX_EXTENSIONS = TeX/AMSmath TeX/AMSsymbols
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_EXTENSIONS     =
+
+# The MATHJAX_CODEFILE tag can be used to specify a file with javascript pieces
+# of code that will be used on startup of the MathJax code. See the MathJax site
+# (see: http://docs.mathjax.org/en/latest/output.html) for more details. For an
+# example see the documentation.
+# This tag requires that the tag USE_MATHJAX is set to YES.
+
+MATHJAX_CODEFILE       =
+
+# When the SEARCHENGINE tag is enabled doxygen will generate a search box for
+# the HTML output. The underlying search engine uses JavaScript and DHTML and
+# should work on any modern browser. Note that when using HTML help
+# (GENERATE_HTMLHELP), Qt help (GENERATE_QHP), or docsets (GENERATE_DOCSET)
+# there is already a search function so this one should typically be disabled.
+# For large projects the JavaScript-based search engine can be slow; in that
+# case, enabling SERVER_BASED_SEARCH may provide a better solution. It is possible to
+# search using the keyboard; to jump to the search box use <access key> + S
+# (what the <access key> is depends on the OS and browser, but it is typically
+# <CTRL>, <ALT>/<option>, or both). Inside the search box use the <cursor down
+# key> to jump into the search results window, the results can be navigated
+# using the <cursor keys>. Press <Enter> to select an item or <escape> to cancel
+# the search. The filter options can be selected when the cursor is inside the
+# search box by pressing <Shift>+<cursor down>. Also here use the <cursor keys>
+# to select a filter and <Enter> or <escape> to activate or cancel the filter
+# option.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_HTML is set to YES.
+
+SEARCHENGINE           = YES
+
+# When the SERVER_BASED_SEARCH tag is enabled the search engine will be
+# implemented using a web server instead of a web client using JavaScript. There
+# are two flavors of web server based searching depending on the EXTERNAL_SEARCH
+# setting. When disabled, doxygen will generate a PHP script for searching and
+# an index file used by the script. When EXTERNAL_SEARCH is enabled the indexing
+# and searching needs to be provided by external tools. See the section
+# "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SERVER_BASED_SEARCH    = NO
+
+# When EXTERNAL_SEARCH tag is enabled doxygen will no longer generate the PHP
+# script for searching. Instead the search results are written to an XML file
+# which needs to be processed by an external indexer. Doxygen will invoke an
+# external search engine pointed to by the SEARCHENGINE_URL option to obtain the
+# search results.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see: https://xapian.org/).
+#
+# See the section "External Indexing and Searching" for details.
+# The default value is: NO.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH        = NO
+
+# The SEARCHENGINE_URL should point to a search engine hosted by a web server
+# which will return the search results when EXTERNAL_SEARCH is enabled.
+#
+# Doxygen ships with an example indexer (doxyindexer) and search engine
+# (doxysearch.cgi) which are based on the open source search engine library
+# Xapian (see: https://xapian.org/). See the section "External Indexing and
+# Searching" for details.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHENGINE_URL       =
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the unindexed
+# search data is written to a file for indexing by an external tool. With the
+# SEARCHDATA_FILE tag the name of this file can be specified.
+# The default file is: searchdata.xml.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+SEARCHDATA_FILE        = searchdata.xml
+
+# When SERVER_BASED_SEARCH and EXTERNAL_SEARCH are both enabled the
+# EXTERNAL_SEARCH_ID tag can be used as an identifier for the project. This is
+# useful in combination with EXTRA_SEARCH_MAPPINGS to search through multiple
+# projects and redirect the results back to the right project.
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTERNAL_SEARCH_ID     =
+
+# The EXTRA_SEARCH_MAPPINGS tag can be used to enable searching through doxygen
+# projects other than the one defined by this configuration file, but that are
+# all added to the same external search index. Each project needs to have a
+# unique id set via EXTERNAL_SEARCH_ID. The search mapping then maps the id of
+# a project to a relative location where the documentation can be found. The
+# format is:
+# EXTRA_SEARCH_MAPPINGS = tagname1=loc1 tagname2=loc2 ...
+# This tag requires that the tag SEARCHENGINE is set to YES.
+
+EXTRA_SEARCH_MAPPINGS  =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the LaTeX output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_LATEX tag is set to YES, doxygen will generate LaTeX output.
+# The default value is: YES.
+
+GENERATE_LATEX         = NO
+
+# The LATEX_OUTPUT tag is used to specify where the LaTeX docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: latex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_OUTPUT           = latex
+
+# The LATEX_CMD_NAME tag can be used to specify the LaTeX command name to be
+# invoked.
+#
+# Note that when enabling USE_PDFLATEX this option is only used for generating
+# bitmaps for formulas in the HTML output, but not in the Makefile that is
+# written to the output directory.
+# The default file is: latex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_CMD_NAME         = latex
+
+# The MAKEINDEX_CMD_NAME tag can be used to specify the command name to generate
+# index for LaTeX.
+# The default file is: makeindex.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+MAKEINDEX_CMD_NAME     = makeindex
+
+# If the COMPACT_LATEX tag is set to YES, doxygen generates more compact LaTeX
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+COMPACT_LATEX          = NO
+
+# The PAPER_TYPE tag can be used to set the paper type that is used by the
+# printer.
+# Possible values are: a4 (210 x 297 mm), letter (8.5 x 11 inches), legal (8.5 x
+# 14 inches) and executive (7.25 x 10.5 inches).
+# The default value is: a4.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PAPER_TYPE             = a4
+
+# The EXTRA_PACKAGES tag can be used to specify one or more LaTeX package names
+# that should be included in the LaTeX output. The package can be specified just
+# by its name or with the correct syntax as to be used with the LaTeX
+# \usepackage command. To get the times font, for instance, you can specify:
+# EXTRA_PACKAGES=times or EXTRA_PACKAGES={times}
+# To use the option intlimits with the amsmath package you can specify:
+# EXTRA_PACKAGES=[intlimits]{amsmath}
+# If left blank no extra packages will be included.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+EXTRA_PACKAGES         =
+
+# The LATEX_HEADER tag can be used to specify a personal LaTeX header for the
+# generated LaTeX document. The header should contain everything until the first
+# chapter. If it is left blank doxygen will generate a standard header. See
+# section "Doxygen usage" for information on how to let doxygen write the
+# default header to a separate file.
+#
+# Note: Only use a user-defined header if you know what you are doing! The
+# following commands have a special meaning inside the header: $title,
+# $datetime, $date, $doxygenversion, $projectname, $projectnumber,
+# $projectbrief, $projectlogo. Doxygen will replace $title with the empty
+# string, for the replacement values of the other commands the user is referred
+# to HTML_HEADER.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HEADER           =
+
+# The LATEX_FOOTER tag can be used to specify a personal LaTeX footer for the
+# generated LaTeX document. The footer should contain everything after the last
+# chapter. If it is left blank doxygen will generate a standard footer. See
+# LATEX_HEADER for more information on how to generate a default footer and what
+# special commands can be used inside the footer.
+#
+# Note: Only use a user-defined footer if you know what you are doing!
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_FOOTER           =
+
+# The LATEX_EXTRA_FILES tag can be used to specify one or more extra images or
+# other source files which should be copied to the LATEX_OUTPUT output
+# directory. Note that the files will be copied as-is; there are no commands or
+# markers available.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_EXTRA_FILES      =
+
+# If the PDF_HYPERLINKS tag is set to YES, the LaTeX that is generated is
+# prepared for conversion to PDF (using ps2pdf or pdflatex). The PDF file will
+# contain links (just like the HTML output) instead of page references. This
+# makes the output suitable for online browsing using a PDF viewer.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+PDF_HYPERLINKS         = YES
+
+# If the USE_PDFLATEX tag is set to YES, doxygen will use pdflatex to generate
+# the PDF file directly from the LaTeX files. Set this option to YES to get
+# higher quality PDF documentation.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+USE_PDFLATEX           = YES
+
+# If the LATEX_BATCHMODE tag is set to YES, doxygen will add the \batchmode
+# command to the generated LaTeX files. This will instruct LaTeX to keep running
+# if errors occur, instead of asking the user for help. This option is also used
+# when generating formulas in HTML.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BATCHMODE        = NO
+
+# If the LATEX_HIDE_INDICES tag is set to YES then doxygen will not include the
+# index chapters (such as File Index, Compound Index, etc.) in the output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_HIDE_INDICES     = NO
+
+# If the LATEX_SOURCE_CODE tag is set to YES then doxygen will include source
+# code with syntax highlighting in the LaTeX output.
+#
+# Note that which sources are shown also depends on other settings such as
+# SOURCE_BROWSER.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_SOURCE_CODE      = NO
+
+# The LATEX_BIB_STYLE tag can be used to specify the style to use for the
+# bibliography, e.g. plainnat, or ieeetr. See
+# https://en.wikipedia.org/wiki/BibTeX and \cite for more info.
+# The default value is: plain.
+# This tag requires that the tag GENERATE_LATEX is set to YES.
+
+LATEX_BIB_STYLE        = plain
+
+#---------------------------------------------------------------------------
+# Configuration options related to the RTF output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_RTF tag is set to YES, doxygen will generate RTF output. The
+# RTF output is optimized for Word 97 and may not look too pretty with other RTF
+# readers/editors.
+# The default value is: NO.
+
+GENERATE_RTF           = NO
+
+# The RTF_OUTPUT tag is used to specify where the RTF docs will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: rtf.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_OUTPUT             = rtf
+
+# If the COMPACT_RTF tag is set to YES, doxygen generates more compact RTF
+# documents. This may be useful for small projects and may help to save some
+# trees in general.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+COMPACT_RTF            = NO
+
+# If the RTF_HYPERLINKS tag is set to YES, the RTF that is generated will
+# contain hyperlink fields. The RTF file will contain links (just like the HTML
+# output) instead of page references. This makes the output suitable for online
+# browsing using Word or some other Word compatible readers that support those
+# fields.
+#
+# Note: WordPad (write) and others do not support links.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_HYPERLINKS         = YES
+
+# Load stylesheet definitions from file. Syntax is similar to doxygen's config
+# file, i.e. a series of assignments. You only have to provide replacements;
+# missing definitions are set to their default value.
+#
+# See also section "Doxygen usage" for information on how to generate the
+# default style sheet that doxygen normally uses.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_STYLESHEET_FILE    =
+
+# Set optional variables used in the generation of an RTF document. Syntax is
+# similar to doxygen's config file. A template extensions file can be generated
+# using doxygen -e rtf extensionFile.
+# This tag requires that the tag GENERATE_RTF is set to YES.
+
+RTF_EXTENSIONS_FILE    =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the man page output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_MAN tag is set to YES, doxygen will generate man pages for
+# classes and files.
+# The default value is: NO.
+
+GENERATE_MAN           = NO
+
+# The MAN_OUTPUT tag is used to specify where the man pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it. A directory man3 will be created inside the directory specified by
+# MAN_OUTPUT.
+# The default directory is: man.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_OUTPUT             = man
+
+# The MAN_EXTENSION tag determines the extension that is added to the generated
+# man pages. In case the manual section does not start with a number, the number
+# 3 is prepended. The dot (.) at the beginning of the MAN_EXTENSION tag is
+# optional.
+# The default value is: .3.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_EXTENSION          = .3
+
+# If the MAN_LINKS tag is set to YES and doxygen generates man output, then it
+# will generate one additional man file for each entity documented in the real
+# man page(s). These additional files only source the real man page, but without
+# them the man command would be unable to find the correct page.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_MAN is set to YES.
+
+MAN_LINKS              = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the XML output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_XML tag is set to YES, doxygen will generate an XML file that
+# captures the structure of the code including all documentation.
+# The default value is: NO.
+
+GENERATE_XML           = YES
+
+# The XML_OUTPUT tag is used to specify where the XML pages will be put. If a
+# relative path is entered the value of OUTPUT_DIRECTORY will be put in front of
+# it.
+# The default directory is: xml.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_OUTPUT             = xml
+
+# If the XML_PROGRAMLISTING tag is set to YES, doxygen will dump the program
+# listings (including syntax highlighting and cross-referencing information) to
+# the XML output. Note that enabling this will significantly increase the size
+# of the XML output.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_XML is set to YES.
+
+XML_PROGRAMLISTING     = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the DOCBOOK output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_DOCBOOK tag is set to YES, doxygen will generate Docbook files
+# that can be used to generate PDF.
+# The default value is: NO.
+
+GENERATE_DOCBOOK       = NO
+
+# The DOCBOOK_OUTPUT tag is used to specify where the Docbook pages will be put.
+# If a relative path is entered the value of OUTPUT_DIRECTORY will be put in
+# front of it.
+# The default directory is: docbook.
+# This tag requires that the tag GENERATE_DOCBOOK is set to YES.
+
+DOCBOOK_OUTPUT         = docbook
+
+#---------------------------------------------------------------------------
+# Configuration options for the AutoGen Definitions output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_AUTOGEN_DEF tag is set to YES, doxygen will generate an
+# AutoGen Definitions (see http://autogen.sourceforge.net/) file that captures
+# the structure of the code including all documentation. Note that this feature
+# is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_AUTOGEN_DEF   = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to the Perl module output
+#---------------------------------------------------------------------------
+
+# If the GENERATE_PERLMOD tag is set to YES, doxygen will generate a Perl module
+# file that captures the structure of the code including all documentation.
+#
+# Note that this feature is still experimental and incomplete at the moment.
+# The default value is: NO.
+
+GENERATE_PERLMOD       = NO
+
+# If the PERLMOD_LATEX tag is set to YES, doxygen will generate the necessary
+# Makefile rules, Perl scripts and LaTeX code to be able to generate PDF and DVI
+# output from the Perl module output.
+# The default value is: NO.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_LATEX          = NO
+
+# If the PERLMOD_PRETTY tag is set to YES, the Perl module output will be nicely
+# formatted so it can be parsed by a human reader. This is useful if you want to
+# understand what is going on. On the other hand, if this tag is set to NO, the
+# size of the Perl module output will be much smaller and Perl will parse it
+# just the same.
+# The default value is: YES.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_PRETTY         = YES
+
+# The names of the make variables in the generated doxyrules.make file are
+# prefixed with the string contained in PERLMOD_MAKEVAR_PREFIX. This is useful
+# so different doxyrules.make files included by the same Makefile don't
+# overwrite each other's variables.
+# This tag requires that the tag GENERATE_PERLMOD is set to YES.
+
+PERLMOD_MAKEVAR_PREFIX =
+
+#---------------------------------------------------------------------------
+# Configuration options related to the preprocessor
+#---------------------------------------------------------------------------
+
+# If the ENABLE_PREPROCESSING tag is set to YES, doxygen will evaluate all
+# C-preprocessor directives found in the sources and include files.
+# The default value is: YES.
+
+ENABLE_PREPROCESSING   = YES
+
+# If the MACRO_EXPANSION tag is set to YES, doxygen will expand all macro names
+# in the source code. If set to NO, only conditional compilation will be
+# performed. Macro expansion can be done in a controlled way by setting
+# EXPAND_ONLY_PREDEF to YES.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+MACRO_EXPANSION        = YES
+
+# If the EXPAND_ONLY_PREDEF and MACRO_EXPANSION tags are both set to YES then
+# the macro expansion is limited to the macros specified with the PREDEFINED and
+# EXPAND_AS_DEFINED tags.
+# The default value is: NO.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_ONLY_PREDEF     = NO
+
+# If the SEARCH_INCLUDES tag is set to YES, the include files in the
+# INCLUDE_PATH will be searched if a #include is found.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SEARCH_INCLUDES        = YES
+
+# The INCLUDE_PATH tag can be used to specify one or more directories that
+# contain include files that are not input files but should be processed by the
+# preprocessor.
+# This tag requires that the tag SEARCH_INCLUDES is set to YES.
+
+INCLUDE_PATH           = "@XEN_BASE@/xen/include/generated" \
+                         "@XEN_BASE@/xen/include/"
+
+# You can use the INCLUDE_FILE_PATTERNS tag to specify one or more wildcard
+# patterns (like *.h and *.hpp) to filter out the header-files in the
+# directories. If left blank, the patterns specified with FILE_PATTERNS will be
+# used.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+INCLUDE_FILE_PATTERNS  =
+
+# The PREDEFINED tag can be used to specify one or more macro names that are
+# defined before the preprocessor is started (similar to the -D option of e.g.
+# gcc). The argument of the tag is a list of macros of the form: name or
+# name=definition (no spaces). If the definition and the "=" are omitted, "=1"
+# is assumed. To prevent a macro definition from being undefined via #undef or
+# recursively expanded use the := operator instead of the = operator.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+PREDEFINED             = __attribute__(x)= \
+                         DOXYGEN \
+                         __XEN__
+
+# If the MACRO_EXPANSION and EXPAND_ONLY_PREDEF tags are set to YES then this
+# tag can be used to specify a list of macro names that should be expanded. The
+# macro definition that is found in the sources will be used. Use the PREDEFINED
+# tag if you want to use a different macro definition that overrules the
+# definition found in the source code.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+EXPAND_AS_DEFINED      =
+
+# If the SKIP_FUNCTION_MACROS tag is set to YES then doxygen's preprocessor will
+# remove all references to function-like macros that are alone on a line, have
+# an all uppercase name, and do not end with a semicolon. Such function macros
+# are typically used for boiler-plate code, and will confuse the parser if not
+# removed.
+# The default value is: YES.
+# This tag requires that the tag ENABLE_PREPROCESSING is set to YES.
+
+SKIP_FUNCTION_MACROS   = NO
+
+#---------------------------------------------------------------------------
+# Configuration options related to external references
+#---------------------------------------------------------------------------
+
+# The TAGFILES tag can be used to specify one or more tag files. For each tag
+# file the location of the external documentation should be added. The format of
+# a tag file without this location is as follows:
+# TAGFILES = file1 file2 ...
+# Adding location for the tag files is done as follows:
+# TAGFILES = file1=loc1 "file2 = loc2" ...
+# where loc1 and loc2 can be relative or absolute paths or URLs. See the
+# section "Linking to external documentation" for more information about the use
+# of tag files.
+# Note: Each tag file must have a unique name (where the name does NOT include
+# the path). If a tag file is not located in the directory in which doxygen is
+# run, you must also specify the path to the tagfile here.
+
+TAGFILES               =
+
+# When a file name is specified after GENERATE_TAGFILE, doxygen will create a
+# tag file that is based on the input files it reads. See section "Linking to
+# external documentation" for more information about the usage of tag files.
+
+GENERATE_TAGFILE       =
+
+# If the ALLEXTERNALS tag is set to YES, all external class will be listed in
+# the class index. If set to NO, only the inherited external classes will be
+# listed.
+# The default value is: NO.
+
+ALLEXTERNALS           = NO
+
+# If the EXTERNAL_GROUPS tag is set to YES, all external groups will be listed
+# in the modules index. If set to NO, only the current project's groups will be
+# listed.
+# The default value is: YES.
+
+EXTERNAL_GROUPS        = YES
+
+# If the EXTERNAL_PAGES tag is set to YES, all external pages will be listed in
+# the related pages index. If set to NO, only the current project's pages will
+# be listed.
+# The default value is: YES.
+
+EXTERNAL_PAGES         = YES
+
+#---------------------------------------------------------------------------
+# Configuration options related to the dot tool
+#---------------------------------------------------------------------------
+
+# If the CLASS_DIAGRAMS tag is set to YES, doxygen will generate a class diagram
+# (in HTML and LaTeX) for classes with base or super classes. Setting the tag to
+# NO turns the diagrams off. Note that this option also works with HAVE_DOT
+# disabled, but it is recommended to install and use dot, since it yields more
+# powerful graphs.
+# The default value is: YES.
+
+CLASS_DIAGRAMS         = NO
+
+# You can include diagrams made with dia in doxygen documentation. Doxygen will
+# then run dia to produce the diagram and insert it in the documentation. The
+# DIA_PATH tag allows you to specify the directory where the dia binary resides.
+# If left empty dia is assumed to be found in the default search path.
+
+DIA_PATH               =
+
+# If set to YES the inheritance and collaboration graphs will hide inheritance
+# and usage relations if the target is undocumented or is not a class.
+# The default value is: YES.
+
+HIDE_UNDOC_RELATIONS   = YES
+
+# If you set the HAVE_DOT tag to YES then doxygen will assume the dot tool is
+# available from the path. This tool is part of Graphviz (see:
+# http://www.graphviz.org/), a graph visualization toolkit from AT&T and Lucent
+# Bell Labs. The other options in this section have no effect if this option is
+# set to NO.
+# The default value is: NO.
+
+HAVE_DOT               = NO
+
+# The DOT_NUM_THREADS specifies the number of dot invocations doxygen is allowed
+# to run in parallel. When set to 0 doxygen will base this on the number of
+# processors available in the system. You can set it explicitly to a value
+# larger than 0 to get control over the balance between CPU load and processing
+# speed.
+# Minimum value: 0, maximum value: 32, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_NUM_THREADS        = 0
+
+# When you want a different font in the dot files that doxygen generates, you
+# can specify the font name using DOT_FONTNAME. You need to make
+# sure dot is able to find the font, which can be done by putting it in a
+# standard location or by setting the DOTFONTPATH environment variable or by
+# setting DOT_FONTPATH to the directory containing the font.
+# The default value is: Helvetica.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTNAME           = Helvetica
+
+# The DOT_FONTSIZE tag can be used to set the size (in points) of the font of
+# dot graphs.
+# Minimum value: 4, maximum value: 24, default value: 10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTSIZE           = 10
+
+# By default doxygen will tell dot to use the default font as specified with
+# DOT_FONTNAME. If you specify a different font using DOT_FONTNAME you can set
+# the path where dot can find it using this tag.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_FONTPATH           =
+
+# If the CLASS_GRAPH tag is set to YES then doxygen will generate a graph for
+# each documented class showing the direct and indirect inheritance relations.
+# Setting this tag to YES will force the CLASS_DIAGRAMS tag to NO.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CLASS_GRAPH            = YES
+
+# If the COLLABORATION_GRAPH tag is set to YES then doxygen will generate a
+# graph for each documented class showing the direct and indirect implementation
+# dependencies (inheritance, containment, and class references variables) of the
+# class with other documented classes.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+COLLABORATION_GRAPH    = YES
+
+# If the GROUP_GRAPHS tag is set to YES then doxygen will generate a graph for
+# groups, showing the direct groups dependencies.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GROUP_GRAPHS           = YES
+
+# If the UML_LOOK tag is set to YES, doxygen will generate inheritance and
+# collaboration diagrams in a style similar to the OMG's Unified Modeling
+# Language.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+UML_LOOK               = NO
+
+# If the UML_LOOK tag is enabled, the fields and methods are shown inside the
+# class node. If there are many fields or methods and many nodes the graph may
+# become too big to be useful. The UML_LIMIT_NUM_FIELDS threshold limits the
+# number of items for each type to make the size more manageable. Set this to 0
+# for no limit. Note that the threshold may be exceeded by 50% before the limit
+# is enforced. So when you set the threshold to 10, up to 15 fields may appear,
+# but if the number exceeds 15, the total amount of fields shown is limited to
+# 10.
+# Minimum value: 0, maximum value: 100, default value: 10.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+UML_LIMIT_NUM_FIELDS   = 10
+
+# If the TEMPLATE_RELATIONS tag is set to YES then the inheritance and
+# collaboration graphs will show the relations between templates and their
+# instances.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+TEMPLATE_RELATIONS     = NO
+
+# If the INCLUDE_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are set to
+# YES then doxygen will generate a graph for each documented file showing the
+# direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDE_GRAPH          = YES
+
+# If the INCLUDED_BY_GRAPH, ENABLE_PREPROCESSING and SEARCH_INCLUDES tags are
+# set to YES then doxygen will generate a graph for each documented file showing
+# the direct and indirect include dependencies of the file with other documented
+# files.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INCLUDED_BY_GRAPH      = YES
+
+# If the CALL_GRAPH tag is set to YES then doxygen will generate a call
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable call graphs for selected
+# functions only using the \callgraph command. Disabling a call graph can be
+# accomplished by means of the command \hidecallgraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALL_GRAPH             = NO
+
+# If the CALLER_GRAPH tag is set to YES then doxygen will generate a caller
+# dependency graph for every global function or class method.
+#
+# Note that enabling this option will significantly increase the time of a run.
+# So in most cases it will be better to enable caller graphs for selected
+# functions only using the \callergraph command. Disabling a caller graph can be
+# accomplished by means of the command \hidecallergraph.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+CALLER_GRAPH           = NO
+
+# If the GRAPHICAL_HIERARCHY tag is set to YES then doxygen will show a
+# graphical hierarchy of all classes instead of a textual one.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GRAPHICAL_HIERARCHY    = YES
+
+# If the DIRECTORY_GRAPH tag is set to YES then doxygen will show the
+# dependencies a directory has on other directories in a graphical way. The
+# dependency relations are determined by the #include relations between the
+# files in the directories.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DIRECTORY_GRAPH        = YES
+
+# The DOT_IMAGE_FORMAT tag can be used to set the image format of the images
+# generated by dot. For an explanation of the image formats see the section
+# output formats in the documentation of the dot tool (Graphviz (see:
+# http://www.graphviz.org/)).
+# Note: If you choose svg you need to set HTML_FILE_EXTENSION to xhtml in order
+# to make the SVG files visible in IE 9+ (other browsers do not have this
+# requirement).
+# Possible values are: png, jpg, gif, svg, png:gd, png:gd:gd, png:cairo,
+# png:cairo:gd, png:cairo:cairo, png:cairo:gdiplus, png:gdiplus and
+# png:gdiplus:gdiplus.
+# The default value is: png.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_IMAGE_FORMAT       = png
+
+# If DOT_IMAGE_FORMAT is set to svg, then this option can be set to YES to
+# enable generation of interactive SVG images that allow zooming and panning.
+#
+# Note that this requires a modern browser other than Internet Explorer. Tested
+# and working are Firefox, Chrome, Safari, and Opera.
+# Note: For IE 9+ you need to set HTML_FILE_EXTENSION to xhtml in order to make
+# the SVG files visible. Older versions of IE do not have SVG support.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+INTERACTIVE_SVG        = NO
+
+# The DOT_PATH tag can be used to specify the path where the dot tool can be
+# found. If left blank, it is assumed the dot tool can be found in the path.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_PATH               =
+
+# The DOTFILE_DIRS tag can be used to specify one or more directories that
+# contain dot files that are included in the documentation (see the \dotfile
+# command).
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOTFILE_DIRS           =
+
+# The MSCFILE_DIRS tag can be used to specify one or more directories that
+# contain msc files that are included in the documentation (see the \mscfile
+# command).
+
+MSCFILE_DIRS           =
+
+# The DIAFILE_DIRS tag can be used to specify one or more directories that
+# contain dia files that are included in the documentation (see the \diafile
+# command).
+
+DIAFILE_DIRS           =
+
+# The DOT_GRAPH_MAX_NODES tag can be used to set the maximum number of nodes
+# that will be shown in the graph. If the number of nodes in a graph becomes
+# larger than this value, doxygen will truncate the graph, which is visualized
+# by representing a node as a red box. Note that if the number of direct
+# children of the root node in a graph is already larger than
+# DOT_GRAPH_MAX_NODES, the graph will not be shown at all. Also note that
+# the size of a graph can be further restricted by MAX_DOT_GRAPH_DEPTH.
+# Minimum value: 0, maximum value: 10000, default value: 50.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_GRAPH_MAX_NODES    = 50
+
+# The MAX_DOT_GRAPH_DEPTH tag can be used to set the maximum depth of the graphs
+# generated by dot. A depth value of 3 means that only nodes reachable from the
+# root by following a path via at most 3 edges will be shown. Nodes that lay
+# further from the root node will be omitted. Note that setting this option to 1
+# or 2 may greatly reduce the computation time needed for large code bases. Also
+# note that the size of a graph can be further restricted by
+# DOT_GRAPH_MAX_NODES. Using a depth of 0 means no depth restriction.
+# Minimum value: 0, maximum value: 1000, default value: 0.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+MAX_DOT_GRAPH_DEPTH    = 0
+
+# Set the DOT_TRANSPARENT tag to YES to generate images with a transparent
+# background. This is disabled by default, because dot on Windows does not seem
+# to support this out of the box.
+#
+# Warning: Depending on the platform used, enabling this option may lead to
+# badly anti-aliased labels on the edges of a graph (i.e. they become hard to
+# read).
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_TRANSPARENT        = NO
+
+# Set the DOT_MULTI_TARGETS tag to YES to allow dot to generate multiple output
+# files in one run (i.e. multiple -o and -T options on the command line). This
+# makes dot run faster, but since only newer versions of dot (>1.8.10) support
+# this, this feature is disabled by default.
+# The default value is: NO.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_MULTI_TARGETS      = NO
+
+# If the GENERATE_LEGEND tag is set to YES doxygen will generate a legend page
+# explaining the meaning of the various boxes and arrows in the dot generated
+# graphs.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+GENERATE_LEGEND        = YES
+
+# If the DOT_CLEANUP tag is set to YES, doxygen will remove the intermediate dot
+# files that are used to generate the various graphs.
+# The default value is: YES.
+# This tag requires that the tag HAVE_DOT is set to YES.
+
+DOT_CLEANUP            = YES
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124962.235269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Tm-00011R-Rs; Mon, 10 May 2021 08:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124962.235269; Mon, 10 May 2021 08:41:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Tm-00011E-O4; Mon, 10 May 2021 08:41:30 +0000
Received: by outflank-mailman (input) for mailman id 124962;
 Mon, 10 May 2021 08:41:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1Tl-0000ei-RX
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:29 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 72a2a2b6-1ef0-48e9-b93f-c0a4888d172f;
 Mon, 10 May 2021 08:41:20 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C7828106F;
 Mon, 10 May 2021 01:41:19 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6041E3F719;
 Mon, 10 May 2021 01:41:18 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72a2a2b6-1ef0-48e9-b93f-c0a4888d172f
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 3/9] docs: add doxygen templates
Date: Mon, 10 May 2021 09:40:59 +0100
Message-Id: <20210510084105.17108-4-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Add doxygen templates for the doxygen documentation.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 docs/xen-doxygen/customdoxygen.css | 36 +++++++++++++++++++
 docs/xen-doxygen/footer.html       | 21 +++++++++++
 docs/xen-doxygen/header.html       | 56 ++++++++++++++++++++++++++++++
 docs/xen-doxygen/mainpage.md       |  5 +++
 4 files changed, 118 insertions(+)
 create mode 100644 docs/xen-doxygen/customdoxygen.css
 create mode 100644 docs/xen-doxygen/footer.html
 create mode 100644 docs/xen-doxygen/header.html
 create mode 100644 docs/xen-doxygen/mainpage.md

diff --git a/docs/xen-doxygen/customdoxygen.css b/docs/xen-doxygen/customdoxygen.css
new file mode 100644
index 0000000000..4735e41cf5
--- /dev/null
+++ b/docs/xen-doxygen/customdoxygen.css
@@ -0,0 +1,36 @@
+/* Custom CSS for Doxygen-generated HTML
+ * Copyright (c) 2015 Intel Corporation
+ * SPDX-License-Identifier: Apache-2.0
+ */
+
+code {
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  background-color: #D8D8D8;
+  padding: 0 0.25em 0 0.25em;
+}
+
+pre.fragment {
+  display: block;
+  font-family: Monaco,Menlo,Consolas,"Courier New",monospace;
+  padding: 1rem;
+  word-break: break-all;
+  word-wrap: break-word;
+  white-space: pre;
+  background-color: #D8D8D8;
+}
+
+#projectlogo
+{
+  vertical-align: middle;
+}
+
+#projectname
+{
+  font: 200% Tahoma, Arial,sans-serif;
+  color: #3D578C;
+}
+
+#projectbrief
+{
+  color: #3D578C;
+}
diff --git a/docs/xen-doxygen/footer.html b/docs/xen-doxygen/footer.html
new file mode 100644
index 0000000000..a24bf2b9b4
--- /dev/null
+++ b/docs/xen-doxygen/footer.html
@@ -0,0 +1,21 @@
+<!-- HTML footer for doxygen 1.8.13-->
+<!-- start footer part -->
+<!--BEGIN GENERATE_TREEVIEW-->
+<div id="nav-path" class="navpath"><!-- id is needed for treeview function! -->
+  <ul>
+    $navpath
+    <li class="footer">$generatedby
+    <a href="http://www.doxygen.org/index.html">
+    <img class="footer" src="$relpath^doxygen.png" alt="doxygen"/></a> $doxygenversion </li>
+  </ul>
+</div>
+<!--END GENERATE_TREEVIEW-->
+<!--BEGIN !GENERATE_TREEVIEW-->
+<hr class="footer"/><address class="footer"><small>
+$generatedby &#160;<a href="http://www.doxygen.org/index.html">
+<img class="footer" src="$relpath^doxygen.png" alt="doxygen"/>
+</a> $doxygenversion
+</small></address>
+<!--END !GENERATE_TREEVIEW-->
+</body>
+</html>
diff --git a/docs/xen-doxygen/header.html b/docs/xen-doxygen/header.html
new file mode 100644
index 0000000000..83ac2f1835
--- /dev/null
+++ b/docs/xen-doxygen/header.html
@@ -0,0 +1,56 @@
+<!-- HTML header for doxygen 1.8.13-->
+<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
+<html xmlns="http://www.w3.org/1999/xhtml">
+<head>
+<meta http-equiv="Content-Type" content="text/xhtml;charset=UTF-8"/>
+<meta http-equiv="X-UA-Compatible" content="IE=9"/>
+<meta name="generator" content="Doxygen $doxygenversion"/>
+<meta name="viewport" content="width=device-width, initial-scale=1"/>
+<!--BEGIN PROJECT_NAME--><title>$projectname: $title</title><!--END PROJECT_NAME-->
+<!--BEGIN !PROJECT_NAME--><title>$title</title><!--END !PROJECT_NAME-->
+<link href="$relpath^tabs.css" rel="stylesheet" type="text/css"/>
+<script type="text/javascript" src="$relpath^jquery.js"></script>
+<script type="text/javascript" src="$relpath^dynsections.js"></script>
+$treeview
+$search
+$mathjax
+<link href="$relpath^$stylesheet" rel="stylesheet" type="text/css" />
+$extrastylesheet
+</head>
+<body>
+<div id="top"><!-- do not remove this div, it is closed by doxygen! -->
+
+<!--BEGIN TITLEAREA-->
+<div id="titlearea">
+<table cellspacing="0" cellpadding="0">
+ <tbody>
+ <tr style="height: 56px;">
+  <!--BEGIN PROJECT_LOGO-->
+  <td id="projectlogo"><img alt="Logo" src="$relpath^$projectlogo"/></td>
+  <!--END PROJECT_LOGO-->
+  <!--BEGIN PROJECT_NAME-->
+  <td id="projectalign" style="padding-left: 1em;">
+   <div id="projectname">$projectname
+   <!--BEGIN PROJECT_NUMBER-->&#160;<span id="projectnumber">$projectnumber</span><!--END PROJECT_NUMBER-->
+   </div>
+   <!--BEGIN PROJECT_BRIEF--><div id="projectbrief">$projectbrief</div><!--END PROJECT_BRIEF-->
+  </td>
+  <!--END PROJECT_NAME-->
+  <!--BEGIN !PROJECT_NAME-->
+   <!--BEGIN PROJECT_BRIEF-->
+    <td style="padding-left: 0.5em;">
+    <div id="projectbrief">$projectbrief</div>
+    </td>
+   <!--END PROJECT_BRIEF-->
+  <!--END !PROJECT_NAME-->
+  <!--BEGIN DISABLE_INDEX-->
+   <!--BEGIN SEARCHENGINE-->
+   <td>$searchbox</td>
+   <!--END SEARCHENGINE-->
+  <!--END DISABLE_INDEX-->
+ </tr>
+ </tbody>
+</table>
+</div>
+<!--END TITLEAREA-->
+<!-- end header part -->
diff --git a/docs/xen-doxygen/mainpage.md b/docs/xen-doxygen/mainpage.md
new file mode 100644
index 0000000000..ff548b87fc
--- /dev/null
+++ b/docs/xen-doxygen/mainpage.md
@@ -0,0 +1,5 @@
+# API Documentation   {#index}
+
+## Introduction
+
+## Licensing
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124963.235282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1To-0001Ka-5G; Mon, 10 May 2021 08:41:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124963.235282; Mon, 10 May 2021 08:41:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1To-0001KJ-1a; Mon, 10 May 2021 08:41:32 +0000
Received: by outflank-mailman (input) for mailman id 124963;
 Mon, 10 May 2021 08:41:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1Tm-0008L0-Ng
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:30 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 87a44038-5c7a-4c6f-b7c9-4df9ade5601f;
 Mon, 10 May 2021 08:41:15 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 93AD7106F;
 Mon, 10 May 2021 01:41:14 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 3C6F23F719;
 Mon, 10 May 2021 01:41:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87a44038-5c7a-4c6f-b7c9-4df9ade5601f
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 0/9] Use Doxygen and sphinx for html documentation
Date: Mon, 10 May 2021 09:40:56 +0100
Message-Id: <20210510084105.17108-1-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1

This series introduces doxygen into the sphinx html docs generation.
One benefit is keeping most of the documentation in the Xen source
files, where it is easier to maintain; on the other hand, doxygen has
some limitations that need to be addressed by modifying the current
codebase (for example, doxygen can't parse anonymous
structures/unions).

To reproduce the documentation, xen must be compiled first, because
most of the headers are generated at compile time by the
makefiles.

Here are the steps to generate the sphinx html docs; some
packages may be required on your machine, and everything needed is
suggested by the autoconf script.
Here I'm building the arm64 docs (the only ones introduced for now
by this series):

./configure
make -C xen XEN_TARGET_ARCH="arm64" CROSS_COMPILE="aarch64-linux-gnu-" menuconfig
make -C xen XEN_TARGET_ARCH="arm64" CROSS_COMPILE="aarch64-linux-gnu-"
make -C docs XEN_TARGET_ARCH="arm64" sphinx-html

Now docs/sphinx/html/ contains the generated docs, starting
from the index.html page.

Luca Fancellu (9):
  docs: add doxygen configuration file
  docs: add Xen png logo for the doxygen documentation
  docs: add doxygen templates
  m4/python: add function to docs_tool.m4 and new m4 module
  docs: add checks to configure for sphinx and doxygen
  docs: add doxygen preprocessor and related files
  docs: Change Makefile and sphinx configuration for doxygen
  docs: hypercalls sphinx skeleton for generated html
  docs/doxygen: doxygen documentation for grant_table.h

 .gitignore                                    |    7 +
 config/Docs.mk.in                             |    2 +
 docs/Makefile                                 |   46 +-
 docs/conf.py                                  |   48 +-
 docs/configure                                |  258 ++
 docs/configure.ac                             |   15 +
 docs/hypercall-interfaces/arm32.rst           |   32 +
 docs/hypercall-interfaces/arm64.rst           |   33 +
 .../common/grant_tables.rst                   |    9 +
 docs/hypercall-interfaces/index.rst.in        |    7 +
 docs/hypercall-interfaces/x86_64.rst          |   32 +
 docs/index.rst                                |    8 +
 docs/xen-doxygen/customdoxygen.css            |   36 +
 docs/xen-doxygen/doxy-preprocessor.py         |  110 +
 docs/xen-doxygen/doxy_input.list              |    1 +
 docs/xen-doxygen/doxygen_include.h.in         |   32 +
 docs/xen-doxygen/footer.html                  |   21 +
 docs/xen-doxygen/header.html                  |   56 +
 docs/xen-doxygen/mainpage.md                  |    5 +
 docs/xen-doxygen/xen_project_logo_165x67.png  |  Bin 0 -> 18223 bytes
 docs/xen.doxyfile.in                          | 2316 +++++++++++++++++
 m4/ax_python_module.m4                        |   56 +
 m4/docs_tool.m4                               |    9 +
 xen/include/public/grant_table.h              |  387 +--
 24 files changed, 3367 insertions(+), 159 deletions(-)
 create mode 100644 docs/hypercall-interfaces/arm32.rst
 create mode 100644 docs/hypercall-interfaces/arm64.rst
 create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst
 create mode 100644 docs/hypercall-interfaces/index.rst.in
 create mode 100644 docs/hypercall-interfaces/x86_64.rst
 create mode 100644 docs/xen-doxygen/customdoxygen.css
 create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
 create mode 100644 docs/xen-doxygen/doxy_input.list
 create mode 100644 docs/xen-doxygen/doxygen_include.h.in
 create mode 100644 docs/xen-doxygen/footer.html
 create mode 100644 docs/xen-doxygen/header.html
 create mode 100644 docs/xen-doxygen/mainpage.md
 create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png
 create mode 100644 docs/xen.doxyfile.in
 create mode 100644 m4/ax_python_module.m4

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124965.235294 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Tt-0001je-F4; Mon, 10 May 2021 08:41:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124965.235294; Mon, 10 May 2021 08:41:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Tt-0001jN-An; Mon, 10 May 2021 08:41:37 +0000
Received: by outflank-mailman (input) for mailman id 124965;
 Mon, 10 May 2021 08:41:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1Tr-0008L0-Ng
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:35 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 50ccbb59-1223-4e84-b65f-c9a77e98272d;
 Mon, 10 May 2021 08:41:21 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id D803B11D4;
 Mon, 10 May 2021 01:41:20 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 020793F719;
 Mon, 10 May 2021 01:41:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50ccbb59-1223-4e84-b65f-c9a77e98272d
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 4/9] m4/python: add function to docs_tool.m4 and new m4 module
Date: Mon, 10 May 2021 09:41:00 +0100
Message-Id: <20210510084105.17108-5-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Add ax_python_module.m4 to provide a way to check whether
a python module is installed on the system.

Add a function to docs_tool.m4 that throws an error if a
required docs tool is missing.
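As an illustration, a hypothetical configure.ac fragment (not part of
this patch; the tool and module names below are only examples) could
invoke the two macros like this:

```m4
dnl Hypothetical usage sketch for the macros added by this patch.

dnl AX_DOCS_TOOL_REQ_PROG makes configure fail with an error if the
dnl named tool is not found in PATH (here: doxygen, cached as DOXYGEN).
AX_DOCS_TOOL_REQ_PROG([DOXYGEN], [doxygen])

dnl AX_PYTHON_MODULE checks that the interpreter can "import sphinx";
dnl a non-empty second argument makes a missing module fatal, and the
dnl third argument selects the interpreter (defaults to python3).
AX_PYTHON_MODULE([sphinx], [fatal], [python3])
```

On success the Python check also sets HAVE_PYMOD_SPHINX=yes, which
later configure logic can test.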

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 m4/ax_python_module.m4 | 56 ++++++++++++++++++++++++++++++++++++++++++
 m4/docs_tool.m4        |  9 +++++++
 2 files changed, 65 insertions(+)
 create mode 100644 m4/ax_python_module.m4

diff --git a/m4/ax_python_module.m4 b/m4/ax_python_module.m4
new file mode 100644
index 0000000000..107d88264a
--- /dev/null
+++ b/m4/ax_python_module.m4
@@ -0,0 +1,56 @@
+# ===========================================================================
+#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PYTHON_MODULE(modname[, fatal, python])
+#
+# DESCRIPTION
+#
+#   Checks for Python module.
+#
+#   If fatal is non-empty then absence of a module will trigger an error.
+#   The third parameter can either be "python" for Python 2 or "python3" for
+#   Python 3; defaults to Python 3.
+#
+# LICENSE
+#
+#   Copyright (c) 2008 Andrew Collier
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 9
+
+AU_ALIAS([AC_PYTHON_MODULE], [AX_PYTHON_MODULE])
+AC_DEFUN([AX_PYTHON_MODULE],[
+    if test -z $PYTHON;
+    then
+        if test -z "$3";
+        then
+            PYTHON="python3"
+        else
+            PYTHON="$3"
+        fi
+    fi
+    PYTHON_NAME=`basename $PYTHON`
+    AC_MSG_CHECKING($PYTHON_NAME module: $1)
+    $PYTHON -c "import $1" 2>/dev/null
+    if test $? -eq 0;
+    then
+        AC_MSG_RESULT(yes)
+        eval AS_TR_CPP(HAVE_PYMOD_$1)=yes
+    else
+        AC_MSG_RESULT(no)
+        eval AS_TR_CPP(HAVE_PYMOD_$1)=no
+        #
+        if test -n "$2"
+        then
+            AC_MSG_ERROR(failed to find required module $1)
+            exit 1
+        fi
+    fi
+])
\ No newline at end of file
diff --git a/m4/docs_tool.m4 b/m4/docs_tool.m4
index 3e8814ac8d..39aa348026 100644
--- a/m4/docs_tool.m4
+++ b/m4/docs_tool.m4
@@ -15,3 +15,12 @@ dnl
         AC_MSG_WARN([$2 is not available so some documentation won't be built])
     ])
 ])
+
+AC_DEFUN([AX_DOCS_TOOL_REQ_PROG], [
+dnl
+    AC_ARG_VAR([$1], [Path to $2 tool])
+    AC_PATH_PROG([$1], [$2])
+    AS_IF([! test -x "$ac_cv_path_$1"], [
+        AC_MSG_ERROR([$2 is needed])
+    ])
+])
\ No newline at end of file
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124968.235306 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Tx-0002BC-Pb; Mon, 10 May 2021 08:41:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124968.235306; Mon, 10 May 2021 08:41:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Tx-0002B1-LB; Mon, 10 May 2021 08:41:41 +0000
Received: by outflank-mailman (input) for mailman id 124968;
 Mon, 10 May 2021 08:41:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1Tv-0000ei-Rq
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:39 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id c60b2f7a-142c-4ad6-96ae-d3994e3577b3;
 Mon, 10 May 2021 08:41:18 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2715511B3;
 Mon, 10 May 2021 01:41:18 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C62643F719;
 Mon, 10 May 2021 01:41:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c60b2f7a-142c-4ad6-96ae-d3994e3577b3
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 2/9] docs: add Xen png logo for the doxygen documentation
Date: Mon, 10 May 2021 09:40:58 +0100
Message-Id: <20210510084105.17108-3-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Add the xen-doxygen folder for the doxygen templates
and add the Xen png logo to it.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
Acked-by: Stefano Stabellini <sstabellini@kernel.org>
---
 docs/xen-doxygen/xen_project_logo_165x67.png | Bin 0 -> 18223 bytes
 1 file changed, 0 insertions(+), 0 deletions(-)
 create mode 100644 docs/xen-doxygen/xen_project_logo_165x67.png

diff --git a/docs/xen-doxygen/xen_project_logo_165x67.png b/docs/xen-doxygen/xen_project_logo_165x67.png
new file mode 100644
index 0000000000000000000000000000000000000000..7244959d59cdeb9f23c5202160ea45508dfc7265
GIT binary patch
literal 18223
zcmV+NKn=f%P)<h;3K|Lk000e1NJLTq005-`002V>1^@s6{Wir#00004XF*Lt006O$
zeEU(80000WV@Og>004&%004{+008|`004nN004b?008NW002DY000@xb3BE2000U(
zX+uL$P-t&-Z*ypGa3D!TLm+T+Z)Rz1WdHz3$DNjUR8-d%htIutdZEoQ0#b(FyTAa_
zdy`&8VVD_UC<6{NG_fI~0ue<-nj%P0#DLLIBvwSR5EN9f2P6n6F&ITuEN@2Ei>|D^
z_ww@l<E(G(v-i3C?7h!g7XXr{FPE1FO97C|6YzsPoaqsfQFQD8fB_z0fGGe>Rz|vC
zuzLs)$;-`!o*{AqUjza0dRV*yaMRE;fKCVhpQKsoe1Yhg01=zBIT<Vw7l=3|OOP(M
z&x)8Dmn>!&C1$=TK@rP|Ibo3vKKm@PqnO#LJhq6%Ij6Hz*<$V$@wQAMN5qJ)hzm2h
zoGcOF60t^#FqJFfH{#e-4l@G)6iI9sa9D{VHW4w29}?su;^hF~NC{tY+*d5%WDCTX
za!E_i;d2ub1#}&jF5T4HnnCyEWTkKf0>c0%E1Ah>(_PY1)0w;+02c53Su*0<(nUqK
zG_|(0G&D0Z{i;y^b@OjZ+}lNZ8Th$p5Uu}<?XUdO8USF-iE6X+i!H7SfX*!d$ld#5
z(>MTtq^NHl*T1?CO*}7&0ztZsv2j*bmJyf3G7=Z`5B*PvzoD<bXCyxEkMhu6Iq^(k
zihwSz8!Ig(O~|Kbq%&C@y5XOP_#X%Ubsh#moOlkO!xKe>iKdLpOAxi2$L0#SX*@cY
z_n(^h55xYX#km%V()bZjV~l{*bt*u9?FT3d5g^g~#a;iSZ@&02Abxq_DwB(I|L-^b
zXThc7C4-yrInE_0gw7K3GZ**7&k~>k0Z0NWkO#^@9q0f<U<Ry!EpP;Gz#I635D*Dg
z0~SaGseli%Kpxlx3PCa03HE?$PzM@8GiU|JK_@r`&Vx(f8n^*&gZp3<On_%#7Q6-v
z5CmZ%GDLyoAr(jy(ud3-24oMpLB3EB6bZ#b2@nqwLV3_;s2D1Ps-b$Q8TuYN37v<o
zK!ea-XbhT$euv({2uy;huoA2V8^a9P3HE_Q;8kz}yavvN3*a4aCENfXg*)K$@HO~0
zJPJR9=MaDp5gMY37$OYB1@T9ska&cTtVfEF3ZwyPMY@qb<R&tT%ph-37!(CXM;W4Q
zQJ$z!6brQmwH{T1szx0~b)b4tH&J7#S=2`~8Lf!cN86yi&=KeabQZc0U4d>wx1%qj
zZ=)yBuQ3=54Wo^*!gyjLF-e%Um=erBOdIALW)L%unZshS@>qSW9o8Sq#0s#5*edK%
z>{;v(b^`kbN5rY%%y90wC>#%$kE_5P!JWYk;U;klcqzOl-UjcFXXA75rT9jCH~u<)
z0>40zCTJ7v2qA<d!X`o`p_Oov@PP1=NF=Het%-p|E^#BVl6Z`GnK(v#OOhe!kz7d8
zBq3=B=@980=`QIdnM~FqJCdWw0`d-WGx-Af5&4Y-MZ!qJOM)%2L83;YLt;qcxg=gv
zQ_@LtwPdbjh2#mz>yk54cquI@7b&LHdZ`+zlTss6bJ7%PQ)z$cROu4wBhpu-r)01)
zS~6}jY?%U?gEALn#wiFzo#H}aQ8rT=DHkadR18&{>P1bW7E`~Y4p3)hWn`DhhRJ5j
z*2tcg9i<^OEt(fCg;q*CP8+7ZTcWhYX$fb^_9d-LhL+6BEtPYW<H!}swaML<dnZqq
zcau++-zDEE|4;#?pr;V1kfpF+;iAIKQtDFMrL3hzOOG$TrwA+RDF!L7RXnKJuQ;cq
ztmL7Tu2iLTL1{*rrtGMkq+G6iMtNF=qGGSYRVi0FtMZgCOLwBD&@1V^^jTF!RZmr+
zYQ5@!>VlfKTBusSTASKKb%HuWJzl+By+?gkLq)?+BTu76<DMp7lcAZYxmUAKb6!hZ
zD_m=<R;SjKww$(?cCL1d_5&TVj)Tq`od%s-x)@!CZnEw^-5Ywao`qhbUX9*$eOTX8
zpR2!5f6xGJU~RxNXfPNtBpEsxW*W8_jv3L6e2wyrI*pziYZylv?=tQ){%B%hl48<m
za^F<O)Y~-QwA=J|Gd(kwS&i8(bF#U+`3CbY^B2qXmvNTuUv|fWV&P}8)uPAZgQb-v
z-?G(m+DgMJ)~eQOgh6ElFiIGgt<l!b)*Gx(S--Whv=P`GxB1Q1&^Foji0#yJ?d6>1
zjmyXF)a;mc^>(B7bo*HQ1NNg1st!zt28YLv>W*y3CdWx9U8f|cqfXDAO`Q48?auQq
zHZJR2&bcD49<D{M18y>Ip>EY~kKEPV6Wm+eXFV)D)_R=tM0@&p?(!V*Qu1PXHG9o^
zTY0bZ?)4%01p8F`JoeS|<@<K~!G7L;yZs)l&|JY=(diHTz5I9kKMc?gSQGGLASN&%
zuqN<HkZDj}P+u@5I41Z=@aqugkkXL*p*o?$(4H{Ku;{Snu=#M;@UrmH2;+!#5!WIW
zBDs-WQP`-ksHUj7m2NBdtel9ph%SsCUZuS%d)1ZI3ae9ApN^4?VaA+@MaPE69*KR=
z^k+6O=i<ELYU5^EF08$*XKY7yIeVI8$0_4X#@of0#ZM*JCG1X^PIO4DNSxuiaI3j5
zl01{@lID~BlMf|-N(oPCOU0$erk>=<@RE7GY07EYX@lwd>4oW|Yi!o+Su@M`;WuSK
z8LKk71XR(_RKHM1xJ5XYX`fk>`6eqY>qNG6HZQwBM=xi4&Sb88?zd}EYguc1@>KIS
z<&CX#T35dwS|7K*XM_5Nf(;WJJvJWRMA($P>8E^?{IdL4o5MGE7bq2MEEwP7v8AO@
zqL5!WvekBL-8R%V?zVyL=G&{be=K4bT`e{#t|)$A!YaA?jp;X)-+bB;zhj`(vULAW
z%ue3U;av{94wp%n<(7@__S@Z2PA@Mif3+uO&y|X06?J<Fdxd*PD}5`wsx+#0R=uxI
ztiE02T+>#oSi8M;ejj_^(0<4Lt#wLu#dYrva1Y$6_o(k^&}yhSh&h;f@JVA>W8b%o
zZ=0JGnu?n~9O4}sJsfnnx7n(>`H13?(iXTy*fM=I`sj`CT)*pTHEgYKqqP+u1IL8N
zo_-(u{qS+0<2@%BCt82d{Gqm;(q7a7b>wu+b|!X?c13m#p7cK1({0<`{-e>4hfb-U
zsyQuty7Ua;Ou?B?XLHZaol8GAb3Wnxcu!2v{R<HnZuJKC4qWuPc=?k1r3-ydeP=J*
zT|RZi=E}*djH{j3EU$I+TlBa8Wbsq`faO5Pb*t-LH>_`T4=x`(GvqLI{-*2AOSimk
zUAw*F_TX^n@STz9k<mNsJ5zU4?!LH}d2iwV#s}yJMGvJORy<OC)bO+J&uycYqo>DQ
z$NC=!KfXWC8h`dn#xL(D3Z9UkR7|Q&Hcy#Notk!^zVUSB(}`#4&lYA1f0h2V_PNgU
zAAWQEt$#LRcH#y9#i!p(Udq2b^lI6wp1FXzN3T;~FU%Lck$-deE#qz9yYP3D3t8{6
z?<+s(e(3(_^YOu_)K8!O1p}D#{JO;G(*OVf32;bRa{vGf4*&oQ4*`<-1El}}02*{f
zSaefwW^{L9a%BKeVQFr3E>1;MAa*k@H7+qQF!XYv002BXNkl<ZcwX&&2Ur%z_P#e7
z6Qi+fqA~V@E%vU7y^D&10wPTTDJm);MHEC76-DVCL_kmw?7jDbz4s2N*c<Kq-*@37
z#Au5Dn|t;CJm2#^yWj4#-8pmSoS8GTMLrU$3Dg1V0S)qxwSgKyRp2vyrhkMg0%Wkd
zKntJ?&<p4b^aln21A#&LNB-{z@P1FA6VMc>1$+-w06x=a`rA|vr~**(wFP<rWHvJ1
z+fW~4G{)LwjHRuWrIk&K7A;2c+FM}=GH^GbB|rwP43q&r(`WiaDhp7WH3WVE3K+3O
zi4q#qr%!j?xN+0UGiS~ozkBEIovf@Z$;`}>#~E32=hh>6`k4PSh1Z`vdHU@7wd+^*
z?%lU7ARy4EXV0EvkdBI3DMdQ~?D{JKrGd}%nSMu<T$GGI14?&XzI^%No}LTlpE-Rt
z<;|PSY>>im&!4}Ld-qc1+`03zf8TytnYc<qg2QFa>h*H?@DaIi{(_{Yrpb#JFO=|E
zS&WyBYw34aC9jU}(WB>Bq)!HAH&01S-IQv=XXgA&3Y7=gowf(q#gZ8{6ILWFefi?m
zi=3QX0Yl2ehmXK)mt@z@J(7@+D3Oto;^X5hvuDqiY15{Ot*xy<lFGb!^CU1ZP-3EE
zWYwxwvTxr3xpe7@JbLs*NhdoyM{=@r1&n@V`0(LY$dAm~2WSQS2vBwS2KY?>M~Pi0
zyJ{LP>{f?_hJ^ZOJbd&pCtJWoSqd}Vx^-JFT(}^|jvWJ&?UVKE*GpVnoP>mg$btn6
z#MRYRoSd9w)~s2wXwf1G4GmT9uUofHcJACMM~)m(q$;H=r7RgUH%EZnoC60AZ12*g
zi>hm<?n*13#!yM%GyNYTc9XQIX-!i)4t92);db|K>g{Yuu}m=I^Jg!?kdGxJ;}N9f
zLon1mxpL)-!eC@hV)N$Bl9-q$HOYuPMn^}>#*G_g|Ni}Q>eMMoNlB3tCr-%Kt5+pG
zJzbtXd!}^hxw%q6Z$O(iZAz|MwdzQeh5BY=fDPs|WBwl@TD;W(4%JY19GaB4CNc9(
zj=X-IDNmm~lSdhk!IaMxqa_#IL*0-Jb?Ve<*}8SBczJor`0?XKPft$<4<0PNdi4@W
zJL%rNy9^jGKr}Tq#n8}Drc9Y4OO`B=-Me?o_3Jla{5+9YuU<(`4#ec!1SY+E_wLO;
zefn6SOl&C4fbW1(z-Rg&CQ3*e6|}6?t5m6?bNJBllxI(0%Hzk+APv*xe)@fMvCkDg
zfdG@+w{I)bjuJ6EJY2lJy~V}FMXar@MPFZEh7KJny?ghTK7IO1zkdBiU0q#9j2Iy%
zCMF6~y1BW@;>C-VxLdYtQKT&;<=e#WoV@zv@zZCMCQX`w@=={=18_9pGh_ab(zgI5
zq{5KhyY;lVaQ^C@jEtv}mYObCuUu2QXi7yg<;|Nnm9Co1xMIZ$S+;DM!dQO3{^IB`
zJ=Z|rI9o?&RF<xe?kgiBqvvL3X3xNg&kPI<UXB_yI!jN_AZv`VY4)sH9=V~RVM@22
zl$0bJHf&JzQb@&ob|f?AD2z&7Q&AsYXJ>~5hlZe>LjW=+M+QE3<^N;E3jG1-45({q
zYTI49c;i`m+5@?M?WUYPdrofL$m?Ej-MXbPV@ynp0vap{2?^rk?U!w3W&Og`)HD@F
z&FO^;7j6#-2uO&ChzO63ja{^S`SN+d0x)kdbj!G)prDN~dX6|dJ6{LGKD4#5e-#(E
zJcs%w^hZVSgps4D1srN(xBl|QOQ;ZU6f6Dps~lOgYQt)jmyF2)cMchSw#xs9h(-e?
z&Y*T}J6N1Je(uT58@J`mnR9aZ@L@TB{=6a?j~+cLU@Qp^4i+$*BHe-lgR;#`&F+B_
zkDwcl0mIEmPEOX}yLayZ#Os!kk<kgdXFCYIHbC2#FJE?q6#V(@*|WV49z3WC2AqI2
zef|CYx7yp=UqaXXaMh|+xog+16{I68SFThHB1&ka1tz@@!zx3bK79e*_N4^+M?|CC
zg8>>p94`7A_)MQT(Xin#OaH1>f6(8yWpC=GOIM*M9#@8Is4tQ!XpH#!`YQULpP!!u
z2ZiRKJ5B{7?E^zCTC--2{^`@FyMcLHg83Q(wSj5?8U8nvf4vsa0B8mY+#Y!hg>;+_
zD}4iW%?m&V8Ip{z=$o6j$b$zD6dm^Z_3M%f{WedMr-`$Zn=g{3Rn8e8>cw9HpXn1N
zcH7h=y92`m1D700NjWLowr&@6xys-+CDzKsmCE3^gM)+2m@z}_Y#m>Y9z8k*U2iCu
z)Er%MA9UgEAw8QQ9Zp4l`>%if>r2qTaQ>Hw3<@dQ77}#mx^?T^U@&b4)1M0q4a??O
zCk-M>=rd=|$e}}rWbfX6N@F>C<hX3#y8m^H=B*r&3`Yytz-atXY8D2|#Rm8d%2pJ|
z&-9TJ2ccU7LoCfFM{U@!4NSORJUl$a#>Pe_PoAs{?a+EAIK0!P%P%i*epy>vdn*i>
zXhTE85HP48ENu@0hRkepb92YirAt#CC=15?f*Ji40%MlUXU+~MPMk1>L{8|^rOR18
zJ-y6f!-mPAL4#!M*s;ni5od>-ojF4^UuHPFilxQGr~Uf%OGda59UUDn#F?)u6Jcy@
z>}G9kJsD~SJ(EiQod)xn{&Per$?3zs)vMPu3k!*UwPVLlRQ3#6jBfh;wQJyy52wiJ
z=mSW*Go(_fzsmrShV{I>yn3svt1m<so{^vA2h$}OX%~TA2M*jEpsqd?S^TZW@|pf~
zL@7yWHr(EB=CKXyH^Ycop)e(-ViYXvOV_W-g{xQO<oR=wa_XcUIe9{k0z7{o&R)8x
zFhWjFPOi4D?oy=Rv}n<y#hKxs5t$c63%XI)u3cwdzI^#H(>j0U@;mb)SRR&(;D46&
z+~rG3*)QF=ro=sa`68wNm69b(+9K^1#flaC_A^s{e8h^U_jm7>E!xfSobMx>Hg1uH
z3l}PL)n;a9^1}~5$lm??<=*}KayLC)uEW5%1nGJ4>Q%W6L*tK-@${*}nAdLKe$}v1
zqp8T}M=-%3>T&r@LY=R}gb5R(pFMl_hG|@d)z5t2m5-`CJTlzPyLXlHrote)hce&7
z|C1+A=4feYokO~AKriI;1MoHQx%>Xeh)VxYfaVw@t204?q2li!AfBF{io*mQCu-HI
zC2DGFas?Hhp7Btgym+qQ;giSm;PE3c-V+6no;@vq>KgB>Xz$*=cae_<Q2u?4kk61D
z0Pi0X5^^v<s-!)9pybUwRe7oMC|nMf>*>oE%6(o-y`QdF>6<of%4*oK;Sr=uhU|}g
zY6Fy#pADUlj5v-*ukmgT)tXGRpX!pEup&`mM7otxTEb9~jvYJ7_uqdn0|yR#miFMj
zJVNEa%zPu9m8P%6`#@rtOwJ4D)BO4KRr=*Og&C9QUwrWeyY`yZt5+X;@ZiB4`BCH<
znCx|SmXa2!aQcPw%go7^7jIt4Q#P9C&*Wi7hB9Qdc=6&_Ft#=!Z5yZ$oskzMWC`GN
zxBU?k4IDb_4$&B*y~AUUrvwHD%gmWG6{hs_^^+bwdMM5l9ou(G{pzc)b~-ydKL$zm
zyBsC{0%c`o<!ESVa9U6e4Duxlk<Xkg%TF+9Jnr4Qm)GZi0OjQ7%Inv!mD!(j=g!d)
zBp2!K0<57wwgtXJe#P_i{7fGqad5c_D$3AgjMcM{kZ_qc%~|nz<X9jbDY|v*CPRm6
z<o4^=FQ8AKJ~I$DCN?(q1>MT%74#u=^XAPLwQJWNkG!jbIf~)PvBNHoj*f}1UcLIg
z2gY9{&Wm%lhjZn-cI}ep&6~>)RjZ3ygGO7?0Qvw`07Zv{RHl?<PeJvC9!4Ca<p>19
zbE*)fAkm9`3=GV2;6VMK<t0?(tsHCRz5f}aCwcpynzr(?w3s9zAz?COhKtgD)8&j3
z5|*zF)6lwUV`DSU-rl}<<;s=E0Lgpy?8zbnd?;gX+qUgG^5&3?T8R=RhQ-Cj9nZh@
zLBu(6N^nkr(pR%$#fs9qcOMxzKwZ=aX<b4-oPMP4Ot1B>fO>}x9V`tD4CYLlG$|PR
z^J=Io>j-rBt4vKzeKa*Soe^#rK!;+E;gVs?fU)1JhmwIooJFGl0P~_#3-ja3&UX3N
z=$l#?%>g=4Q<b7x@vr<mC^R@T)e#tkUc}bW(9pf`IPkDx!)QG6*|-{j2J0t1?yAx!
z`}gWGaNFcbcCfa+6jw8L-|p`2ij?G#P{)p)=c1y!g@uLHXbnOfGf1KRmoHz=Wmoy3
zj5%k{oWu3%)mwDp#EE<Pm;N>}Z@SIhhoyZDh8P%3%9k&%lsiw-_mukenq)R<(j;ug
zj2WxfuU~)W%9ShksYc`{@u!rUn)+nt&Yc&1eSJ4}?%a7Yo}Ua>jm-Bp1K>ZIsj8)=
zr6bCdcJ=Djbmm9w-o5)W^M4x~Hf%V(Wy_XhXc!)dOU7rtv_YB8{QdnmpE`Bw_RE(q
zl@W+{5uQIA@9-A%^_Aaz^9>j9{gAJpe{xg;;5wI)n#1(&I63Ccim1A7yi`R>jvA$x
zHDJI1&Ev<9*FSpnXc@fpvUp&&cKgAD2VWpF(82dZ2=Q`J;ji-l{%s;dqO!;|oR`m~
zWJkFaAI-jf`zn&2&vEkP$^01q9)hlV50aE~?37>?@J<R0CY)1BwuMf<H$V7aB0dXx
zr|pQV3kf}Y>(;I3$ZYwy|1#aUaU-LD|Nc%$rw)XzqO*TWL`hk_SkYoeqxDR(X1cnI
zOMchQp&(W&nR0}d_7z=S-A>RUE8&@GF;mB)YZzBDdZ0^BfAr{)(tZ9XiTWe;TDs4z
zS+gb!68B8Sij{KMZMSOG3JuIzVc7Q(nSg1qK`E|q2uq2}=lH9Vf8)lD=c-n%s*U$h
z1@A=Z(s86OD(CEfbprj1q@$yA7}EIU-;v_)A{dG<)YR0>Q4Sh)pVSBgC1sszKh$tC
zG%yp-`FV@FIG0R)l3h2^s#UvS3k!?-cvj9uE8P`9>y_(vxa>6$UHkK=PoFB4{C6l9
zznntDMSn&N`})mm1$15GaL<AT3!bCP+K6(@uT`_w6EY+nHB|H_^NkXo$IPru1!Ov^
z9jY#$R{GhqX9enrB6Z2245Run-|U=hB`(wAxr)x8Kc8NuN)>LZP#N!psvr#{i`%zv
zE8Q*Qs=#=Kk(HgLluJeErr+6$ScbxRzD=t8ET8fpWe*7nIfHU^LYY1(FDY7bYCkn?
z9WYAQ2s-2(MUSKd1_#c`kQ@#wS+dkHbVZf%tX~2uX+XxL70cxkTF~JVH*MaOdHe1i
zx&QD%0Ukce_#GZ(Je2eY_Y2Nd;Z$*W?d|{f(o*Hto!fHl#&x;M>CTH6<jmPKTw=k&
z_=u7vOLF<G6%2s~D(T3w+_-g9iFfDTT_ugQ`{@NB^KII+DO*EBBdtuCGDi`Accn^|
zQlP4&K~2diTozTCQ`6JrZt6WHO+rP>;^NLD{x48#xD=yq%T}$t8D5o!3aV#PL6s)&
zJ>moajw~C?e)IM%B@gDOdLM<;QlW79_>4^NHl7y^_6^FSU#wU$TIinyQKVB+Hfhtk
z)AG@JW5A4a6^7*aB$<=?*zn<+)X}=c$H$kzyZ=o$0EYYmD2azw!(Wp+b?ffhzI%7h
zNl2ZPQ>WxOq}2&XpLgNd>C;LWLUqogDh`jTG*X}s9yoeb_8vSaJNE36&D*z2d_qEY
z^A;^mBHy{lUlT|5zWw^epc~G8imvz|!XHdYkt4^C7o?-YiTq<+Lc*JGOPA*O<$S;b
z5>6jSOTY(Ab>Y<c^95!7UD;5kLrDMdv14-M?p^-RZP~gt*9y`4Z9i)C=xu2aA1dKh
z&si9S(<F|bIH8o8&$xZpF4?$kn?%LN<|M9KmCI|#5J#1_DsEw@(p05&6y-Z};eyny
zTh|rFSTza}l<T7>)fio~4(+<G)gL`Z=Fjs~7?Shd)GK@T>?wwZMjUwNFa}-8e)nK_
z*rEW3NI0xv)4fNJbK7?A0`u&ZUHkUS?)?Ye3*Ik`x9{!PvrjqSzI%^s*s@htt=}L?
zYu3t4_t~$?SE#TV`Po4-(f6}W^%^zo($dmiK6vs>wjlp)yLQXYy?OC@WLiA#IdD)x
z%E{BxxJi>kh&vhO{~1sNIPKU7X>t)-`1(zovsn(tD_lm^k!fy4UfcPs_^&-`)EdO&
z4jzM?oSm<vXFO5DsN((+Ht*Oe>o#qcgjK6^hiVLa`0cmfayoP?u(y4Oj;VYeRaoZz
zHh7)oRAJA-LvrZoF=^De@nWRG=lUe{N-`LS44QZ9(0$9OQAP?=a$1qz<Qx*>Qdl#y
zaoGq%@1gfT6dss{PX)9{F2i-#H!@D!unjMH%QjiNdFuyZ&897~X5(h%xN5^DiBC+D
z*cFMQJ6iuC^5Bw(v5=6h(HJ;<GGK<=tSdC(lh<uj!YgSY{^|`#14mVwYd39`Qx~tu
z`YqdEB3>de0pPlx-T=oUx%<XwAoA#$bJ^=QZ&T8E8_Xkl{YK?{!?qo=X~!-Zp`$~o
zOC8(9CouTQ-Xkf>{ld>x7zNJ=EKgo7A(2rzU?Qpk{(u#pPZ#fW9L}R$X1PCSS>BaS
zVJK+=49hZ=D_?#N8d;rBg(*p#1!&x{efKSTU`mfUbCo5r1SKWM`NoYKmxC}Il;zOE
zd%FNUGz}jP7LNc{dk2NQiC?uwmL;wFV8pIWmS`~JQZS=mXhcrycI`O;&XzwMc~P>_
z-L8JK7A+R<ICM0Z`Nbq8DRFs>AtNIVRa*SNeDzv6ef0)6Ovr|D^YqfCOBt#$L=|xD
zA@AMFcp^*USG*UI%i<*h;CURFYRE9neSH3srAoOjO<0-ByqIU<XDSR;nON4y_~p`f
zz`$FGKO0Qc56{*Js0naT-CBRl*gJpaeX5=r`AcwAOeW$^MH<zCPon!KvGp`)+xq9V
zx;h3j8-@<o+xYwY(`iB(5*j<!gxxm>ifO^Ux3%F#@_zk_6)R3&vu4fRhgpIbogiVc
z@gI!vWy^6~E-tg@Wqx0!%6_Cj9g?GWer3^Fn1)(cQ_Jxe_oSSX;H9w=8WSfWOP49f
z!fC0l^E@Pa8PZ-Mu}N!W1te?I#;uYJ$-*O%uzr(7tw@rfs2K4JUm^h!OWz6qh$!(1
zi4ecAB}(2s`t;q2@));l(>8(WheRV!pl~?~gZEgjV3cQ3U`P(6eiY)9AvyI*Ba*5S
zhu(Db$C}<qgw%akIw~m7sH7P$K}%x9YO?JE#5F}49IpEKnUY3JzK&SCWs45Wb#?Nr
z@1UR{W$g>4W&i&Dl??<ChFub!C4M)x#)k!wIT}N<&b)f{YR2OnkuX$BU}TI0E{Xjh
z1Vw>?mL;IF<8n)U^9}VY2UHq|H~4~CVKlPnZ>j6-KIchf{7NN1e=y5C^ToA*$Y|y0
z8y+RzA>lxT_=2GrCm=trUsx0vF-kf235^8PMJi$52JbQcqQFq`3JAfqNNLr!{aTb~
z!cff-CpYXl@cy#omq)pebLM9?#qJ5>+Toe%0R^)}IC6e_l=W1{^yTX|y+1ty_xOwi
z%h;n&KN^BVkOrrcKYjv376mBXsx)fYJV0}}t~feQld!NbWw|U@3=SMPP(jtI)l3nN
zFNfWIfwaWK@|=v(aq{HJu`gb{$UJiXI$AB6DPBQn%<>OHSnLW}w_~4VWQ$C&wZDTr
z%m7Nr!WG7qkrRTJEXxgxS)s%WiGF{cED!JdM?`~}!o<@zP!{+FgQ>#sKU}<k1^&Sb
zc+P9@!s4JjFrKGRpv+zD4`z<eEmyt*RSP?7JIB;DJNBddq~v;cd3ZhV{j5;RGkExj
z%ZRIuXX03=>iOt2!bNXZUjAXPmnJ~({FCS7F=#2IFY>PTWA!-1?SOPBP(FSFLly%#
zrdIZcsx|C}Ym5>r%LxiIMny$Qe0;pJFpG=B3=PN7AgBhAi4|!HsYM2*XU(9<$jBo|
zOSbG!k?^=giNecMVQJ!;LiqK1-o3Z){}!>y>*e;7Ou6$kQ-*43(-*QGDv2WoN`;{e
zKb9?9&Lw8$x`Mp^DBb9lNHb}@jJ2Hbs!qN7x9T=fyHHQ9;aQ@7A-w&+@b$vsmGcH_
zJU`l`S&Nk@78|d#PgvBOgbmvY%JgpJzXM>&@+4{6yybet{RPj(*&u~*&PHGVr{*m~
zmaW==@~>6my*s^MpFINUc?E`1pLE4LYJhY;ovB9fQMts##LC5r7E>QOSSL$=w2?$c
zL@GO7aFYRUNX5N>X3UtGhTvQ-OE)g|*JRE*jT<+%IC}J`GJ>42Zkt5L!6?CtSq5f`
zNnEQi0MB_O<M0}fh5xI<ybUD`?yuUso5{<LLnpGDwQTK)eCy&JE3RZ<(3+LM``&|j
zM#j^y$xE+rK84d?zGjmoZrCAOx}#H(PB1V9PzK%eL07to0I$>Y#{MsW;W(W~3%*bF
z8a4gnR&A6xFp(<%cR^LY=!7*A616-Bmi9b6PZK;B->pJ8=jcdBjS)Hr_MSMe#8JJQ
zcY)8nEO`SA!_{JK>qvca9MYgO^LuT9kB(5+<sLnH#KGRyy?PG5J3>oWSq8xc|6J$I
zHQgE-8ZvdNJ%@!jRKn>q>L)FAbaWP_r>DQTl=?(MqtRucy9kR-1al<Ah)9%(xRuIz
z1Q?a)s{g#laCz5M_h@}4{|3Cqusl!Nv{PYBbe&JimMd?9{5hXn6ct$=aPkXZn&Tf4
z14CwcK^#?{zamNES8rA@+Sv3a(y&GvtpQr`1l`8C-OFHgb@jPinu})-`y(Js=!g*`
zd^Jaov_?Ey$vt}aAGB@Nmc74uhPR<A504?yE5vS^>tn>7h%(ShF3{U?&Yo<@1RMKX
zTMry7c&<NAGf7FapZd_#h^v7#J`H_RMX2X*ftnpUbZF7G^)I_dXz9r;H+O|8xj_{r
zrk0i#odl$BpZ;?Zg)VCY=FFM1iQX$~p*ML2P|u2h;vWU&Iy&zd3WFgW1_l1}TmjA(
z2S+O5RdEXL<%chfg1AbWOJ&9SZF2vW$aouvD=Y(04lkTTP!q;k+dQ4?9Uz{5AxhfJ
zM^&DJw1BYKm2&*@J#qC|_z3ZA0M0~FC#Hp49~kWB=C)C30e^wwAbFX-fyp@PZtO|a
z4NT0htlN1IMo`{+Q04s_;5y5<I5<icLN6UMeB@Qcr9N2+D6o>z;5g(p)_tMxOETM^
zrOEq?f+CP^gw(22kLm;s!%st>R1w~5DWI08r)T?mbs8<y($tr6<18R0mnj=naqk{l
z-*f|;GQ~E%RH;%ft5>hS$c_Je!eV9Sd>?UJ;3u;e1<IVoK{y5|$2s0Wq5r(ka18G<
zcM-5yT<3W!*BJIU@d_h9uIjjQ%RbqB;H2~$q(S$t_UP`Y=jt?Q)a2MScTbr<XQ9mS
zK>G8L{(Ntl>6uqP_l5rA?iHZqvwVG?)J3Dai0AS{eEnX%dO2{#<e@`{ZqemEC^}II
zTR3mVpS92r@JE`yO8J?Wwf*!bUw>VKT|XtI@gzr=CthGS-e>;r1>S`#%IW6mD-E0c
z#M!CtV4%YKB$=~jC8#I!eM4myo<sFa?*s2MUq?5OEW~w#KFJ>7)AOlB9sF?k@ZnCi
zYSq#lqOS8?M_W%B7UIrn1n0tk`Q;aB)~p#lSe|W8xg^u(ERv}+=ZT~H0_EIkwwD5)
zzx|&_`<e5_&SkFHx_HPG=h<R2eU8`?xaR1-uppc&eqpH6BAoHUHy%1Gi^F1b8a8RB
z@@1qh7uKR}hnLn)Zb;u9>CYD18BE_p0rR$V1(f_2NND^zd7-f8uf+e6Nn6BazHdRk
z3`;o70^B^s-fg~2MLJVZ_Q}Y{*lNnlij}{MLH^vdwMB_<zM1AQ%QM#=@8<0?{0_Kh
zH^T$rJaWG%TGSJIdQ&h^VSSQ@WUDqEx@>Us3KAQHC*GMB!Sr~X?jc%wV>s(&i8MIp
z{pskFDneaUc>MTrYdbr;kzG3V-mjx=Aof!olzk@IjdQxOXRlsz<>5nF9JN}k>}LUP
zGQn}SSWlZP6P)JAL}Hpp9`gSy_YlT%s+*YGx{A4ti<nHBCdT8ZN~312o^|fg@0GcY
ztBiMWD~O|tR~Q9p;>h&u+!o84gO|j@&h>fevgPRjVF1jmU%!4%UA4wyU@=9;T04sA
zWM>&?GadQPz%d_FW}qxCO8HfJsnS-Rk9Tla?pxZ;MA>IDAH;PLvnkWX#M((rCOV0}
zxlL}Hj$PBf_@XGMI}Opzvg>YNuVK?TzMpXjtDtb+Z@~;@F`eWr!*s_yMO@Cm*8mFZ
zlXRBp(`(T1OP2OCmHhua&FNx>=SJB$RmoW|cK@Fqfho%Zjgyj+hBs^0%3W7`Ozvn{
z;XXb-Vr69|4({%<>w2b4a`O>`@s46J!AXoK%@E_sGsI}pbmcf^vWs$5uIJw~oH$MN
z5q`9ly^J!q6J0YK89LfLr^=5tFCyEd;$MBWXYeS~tkG5uO4zs4D-6{=p7UrubFqZ3
zKPm%98q<oNYi@3Ng0oLGYu1#KrAkYiF1=)!frX41V=W_1CX0^g6wxsQ#@UGOI9t)P
zn5sO7k+pLHl=7l1dMqo#X(P@^lSxQpqG%e8ml4Jjr1ub=%=%4Q+$;8F@eRm#HZTko
z&n}YVr5)P-+<UJ!;%Ffb<NYxvDCxKPxfiESM=%YbK%d09;wU^o%kaU7vDSYv&51G!
z&sXKgS}PEj`sAm(qcNEB$IF*5_Z>HG+$31xm-_S`C@osH5zm;l;=S^)j2b^fG{@S=
z2(zi8ZSEl2;~Zq<IQzVFb4TS^@E`ZIOzo8L!^cb(4a12tM1Q<=9XRq$snTWXu)wK8
zx`;V^|6}bF!;B}1<`^)5Szi3#rdt@l-s5rNtN@wrzfyi`_sdPrJZWodi@m+QvYSVn
zHf`j~;$KOPdX43$wq2!5f32K;BgW<q(j6xQfkC?FGFZ=2hKwFBLks{M)kj$Z7Pyxe
zroXnCbm}woO`}$w9{*6Y?zQ4ym)MOmQ!=^$x~TB>0Ig+CchZ<>UZ+uup!!YQoUYTT
z`K{Uwo2C5`5aw=!W^K<^`0fX9#N_~Yfj&vXe=Zp7-l$c_O?4Z$yofk=-%jIg)NRz_
zR>Kw@jw7BK^li=zeUfW*6arIL;+~MuB_~w*{)gn6YOQiZc3cvB?|2z*YAYIGn&BpP
zq6KF76<|IbHNi{-|7)5~m0@7Up~hfFFq^vm1R11nB`rJmc~Z1!u>@o_4qY89vpyJ;
z^XYEQI`()l)MygYw=GC3FTM9e6ODuUIf5z8oEJ%@s?}xW$dMAYBuZJu$Tc}_+qRS1
zHS0;SqF;0Rj?-=XzA05YrF`Wo=YFi)=vFPYW~uo|%SURB`c3ZO{sk}!!_$+U`l1)W
zu}C`0_C?uS0i0RlrdV`B;dEzlJP+48@l!7RFaXsG?!r+RGvC5AfU{KHVUSSwE6^uN
z_|K_ZuE*zUPYyY;F}$6|+u*n<8$5@)*maZ9IpoKoE_!-#*o&hUAHSZ-LZUKTjvhVQ
zX_&G3`Gix?#d204m<)^v1{g_6W$6S=%LmU@$h*cc3`^!34#v~~W2!?6)oap<Q-nc)
z4)WpPFt=;rIv=jq8-V&;+GoVrOy)HLjP`4qh4WBBm49A-de*KoKW?wm7zPg=rc94=
zeAURvNQP^S5cPqYS)Dp{-mIaaF}G#QmS*35_Z=CA1LUkcGEU(NWu<em6&b(;l12+<
z>;rTJXoS=PI0Z<_NGjaba>k@O(jQo@S~aUGRjSwnj)h=fwrp7r%Ig7r!N8P^Edb88
zd=Kz_e35eMWDy0YPbq1FxJZw>W@Dfwzy|S0_z9q!8W*y$>sAX33yVofNx6FT=+SFS
zVq%UpP*dxSM)39N^XH44ICEOb@5qT`MGhZJDRTHkN|B={Pbk-U|J>z^Mb2Njtbpfd
zE}SoN^4uAvd<Ag;p7R>-F&>Y|pYMkajg5mtcLc{wf4BPccZ34c%BxzdPMxuSF)>%R
zTz(}J-Mqzc5_CD6nKH(9mU1+j;;J0q2Co}J*E4`#rVqVL4|=37bT$n`tL*Qq)!+aw
z$2SL|Ae@O&_U&Pz0IqH3dgd`zs@FQAHy#GWMCf+ZA>WZtVW`T&vQ2XLm#7`*(Cv;D
zOX$E*LBz;#tQZ?v$iM-^PFq`ByDnO^sE?_sDJzW5uG9f}PNpbaq5R(rpvxG`%1yY)
zgk%hs^}QPs;5w&1JD}n(K7aoFH8<mW8xJ2oe1ZEn%+1Z&K=~n5I+l<bNnhMbO)Wy)
zPD?9tJ3U>gRDgN15*dzbx&I6dM`x8{s8=q57_xHh+WRNYof9yN>^X2C1NR0);+IK#
z^r*-k<fG*G7I>ZaAH4-$yL<osJL9m-yq}&Q4*9U5)|oM5hU>zG3zvI)d&j!FyK|Sj
zf(3TJBiulP{TD5~1|2(gv^#U=Oxor%sp2><Ky2K+WU`B=SUcxgzZ0gxLghKYYbq$$
zc;CrG#yiax3s~CYU}+B@ZThBknQ}Xjwml?HSLDU+Ns$Ih<{+K}WpJMZ@~*%18Fb&u
z(H-gKS;KGV^}9R~j{m2itiJIZW%c1Jf;6OkyAEP#XeI{wmd{L0&B9l$S~cwD%a=c7
zWMq)J*dW;DlF8mx(Y#J3ATzLhf83(;2@FvFn{U1uee~F|XK5J^l#RF^a?>r~!IQ_5
zn(<KXrl&zm%A@%B_+yCHjuH!hsXo0InVFdiLqNTH=Y17;U&TBuH|V<#9Xd?8$#qJ%
zZz&rz%$qllyUh*6{j!fAKQ8j-&6{_oMHEi2Fwp4U8J@_=$tm*m=~E?t)N7dv6)Frz
zV^qFp!_CIR!^5{Be@gw|>$>?boBBpu6BCnARM@NS$FGWqUxaW>%*E4BrqA<{X&#Hk
z*~43$=Xxvu-v!=d7>+mE&+?T1L$#h4EA}Ow3#Q<ub3mBV=A9xN*I4;<>gbP?CF3SL
zX8u`zDk$NA8S{K)j&HaexR5Gq_nZ{9#y?5FzQeP%N9yi`O5$++`gQKP{=0qF|6PRg
zv!T_fSFheIyvM9JS((ZzOEmJ_Lx&FK9653%hhI<QO3xRs@)oqEr>AE%Y0|_F@i_R;
zvQc+z#B>18F0rel{>MdmR2NhYghGTXNV&@gzv@fpYlf=~CRFxDBID$jsalO1HOxni
z8nu7u(4iN#w6sn)Yu3z%@o=9#78#e!!Tgv8m%K1N4sU%|I6Z>QhN80pQp#}_jJ+HM
z3RfFYU*-4Rxh#e~FI9)OFgOFisLEPpeqyG3_wK5%cW|`;pCym-3(Ps}K2}rgcgL;U
zni~)mC*Gk;Wl>P1EbtGN`M$w2&nHOc;mGqp0>gML4wPA5zS6d17cMd2a0y-Tbtnb#
zaD36s?{~z92H4&30gda_Z;&!)ae&PA4SHu@Z$ni^mf1TrT0&x1icfg7RH#_#KDzPH
zy?gie!u$9U_{UcBq3o=Lmiqeo>-h4yZX69E7fg4%Ql&~qDp#(2pjNF~7kqqtUh*E7
zzCsG|V^&Fs*LBR8G2_>*TXz5w{V4QdR+{d#Y12ZFA3uH(|Ci%`9lS$Ua*fiZOZRtj
za*Eixb?aGfh5*LDla!QnOixdbIx*KsmCk>sEK@z`((ZwQfz125gM$NoQVqd0RAI;v
zbVY2Vt*z~nkdSbMJ8+yofBxQ<En7}Q#h8bCG>#PJef|2iQdjuTay0JRwW~#NaPU?%
zx@(s&UrvY4dSmI*rMrIm=_eXloUU!@>+8F2|Ni|2-2<LIH!d#jHu5@+uADA@+^Z>%
zP^jXKnl^1UXQB78$hZ{}xP*?ak>aiR2L{OkKl&KvVWIz@!z&<I+~&>~=%j+~g3Bl}
z&CJX~@glh_f*<wd+80{dd`PxPG9X(Dr38=l5SA@Y7YNZ9F*4&_`Me8-%ego(L;@nB
zWXbYGnYY-79|myRxpQYDypMm>AtAql`#^8Kz?aYEz5Dj<%fx$LhU;AaLwB}$sK|3%
z6VHtfC`k<r4Csny6c7+lu>U3)Edv#%Y~=wxlnya$;Cy$TEG#ViE?&Htt^@=X-YkI|
z7l28qE^&s7j6@x`y=sRJ=6L}6B(9BsK1l<!8XAmNVq)TP+(S~iiX>B|$qV!6&*$hv
zD+m^<G?b1N%atqFZ^42E8|gQjAMQ8hK#%am(9n?E&X_@6DA@Hx6^75x=R!Bm2Esbe
zBV<5}s6nO5l}%f=ZnL~`lcsyr8a6s!Urp@{eFO8*;9}kS^{>{cSMM&E?QyNzbzaq|
zRVx>ew}Eidu3bB2NQR$G=4X=Wi;0SPk(`{&Zk!u0DIccK2c0S2#t<+jEmNoB#fz^%
z{+IqJ&-VdwGEx4tZasQ#)*C%qFDfdE7V}@y3qO=l$&|vQQR&Lgu-xzq@8CYJ*&^@O
zxZf1#hLC2;=3U$X0Tq^`rl!WF4-R|w?77eDB(58xH035;+{O^yUj)J!LF(+{ebx4c
z+&qCgBAHvYT@EYW%F2pESPhXcrQpyF8#X9AbWnAG6;976EucZurcK8{!rdp6a5IK0
zSFZ5;NZCxAd61wx=5*=Og`0b+RE$B8eAn{t$(=iQlyY!)5`H>{arp&0dwcu!RjO2(
z&yU8kY%CYkWEhs29|eOXW@DqOkVhymsUnzw4g{2B#sG&(3i}mWRjXEQT8kDf=JxH|
zH?CK&UdI|YYI3`GuR+;^25QKlL4(D}$XFa59hJ2_bPC?FV}~3(cu-caUafq-jz@mp
zWc&8*a^%R-H@J5d*H@#^9G_#pDkfF;_xJZh7q)NBnl<T=968*e3X(+ePNFjiEoujQ
z2W1-;y3_G<DRe&W-o1x(@77Ct^yn{5o3?z4#&M!&&z`~k`}a41w5CN()!-jv%KWse
zHE7Vl?Z&NJS(FE?1S2CO8WbFQqLk~^xN&1IRA>gm$kprD`A(?j%tPgQU$}fZD-{*`
z01TFnisOzS=$_xKSh0fskPFSmjoV5_xPAAo!T`{dU!Xp2v~Jz{s*R1!<LjtoZkv4{
zOt5R$uKQ@z9M`N@v0}w3r_Y{!{qWHvg?Xn;nZl6}e)U%)A~NzU)4Y?K%6fYZ=DSp;
zOqs)|>$|t^+{w*Aqj(O9k9WNn;eSTM>gg8{ph$3*0}?wIb@8Ze+qPGJ{`uz{kkT)>
zZ8uaH8gO>it5<i$`@e+e&3TLlaTob`dV0Rn)YQBKwI!u)-MSo-qYj-%c#*U~X?b#5
zi(_E*0hLeTZ!mW3*lsZ6wa`*srca-~Y4~u>%T^Xs-#9zDiSsm9Wlawm@AT=@%F9ek
zOH*3;rAwEUk3aFdJ76^BJKpQpuU9^0$ImMh{1_;~uiWy;dt_vKC(-+z>xAfN%&<(0
zpI+X!ZJPuI1<O>ssWRTm8kj6r7Ph%)DEH7^ZvwMULqiz=iQOFKXE#RO>K~*0^qUMA
zI53f$C0xKeAX7#}MBGDnej+F+=n$U$0=lfst$2oHyoYso);|6E9Y#C{95?Mda)e*0
zke&PX$-2#(bARd9?KaX)#{C7QN|o|WTD$humYq9=m}Fx^M+@qr+&p0pkZ5af|717f
z?}0$redthD{RRy<*4h>6MDIO(SoWu+$N`{DyLMbZWH4mdu;pYfUfZ~Rdls1S1g=xU
z1_GO0-QAxaI&ne{9X~Gf7B0MoYsQ^AclNywX50+Hvu)3w+-^O3(AAC~Rd5G9P~Na6
zO`B5xr8^*dKHb`N>TG1*e7?Ph4oTBy&A5dOXRqiF+X;+L1NT=#B2<QH(b4)P(b9S+
zz;JaS-aA2=wSYuSw6wH5Gu~?IGvCD#;^Q5Ht|~+l6IaTmi<cAyUa>OKg;8XaL9boA
zu6#O=l9UXrVo-hvl#IylE)pt9O9p4Rz^;HyPO(o3!TY|xe&XilE^e-K08g>Ab<5Nq
zIqEW)Fg`judJN*Xg7VL)zrQu4^Pz<FsS0%&I$Uet=AC<GdGZ?BxO0zOyZb=s0(a)h
z4M|wDUZSA8hb)Pf!0;ux>KemXp{{_1p^53a9S06ed{W-h;QoWuInh81nFfO)(%pOF
z{1u5!TqWU4<FYD$_uV>NAA@|m0NnF(QqMkpFRom-QC6&8CvhuRW&c>C26eiT+Ipi?
z4jezFTt~i{rM~@^3}NLm+vDz`6K7=4(c^OA!iCJx(9qOn%a+}MFt`R|<<5cQr(eh8
zIpUMn%H*jIHxS2loaMv|2Tq)pWr@k+;pO!L*Mia0a&m!|d<W>A+z_HC^k@$7)6h{L
zrZqC<$mt7`xOSr~iCvyuu3Y&gxIP&7Dex!|OW^;96W(tP=m#yJ$ZA2F`MJ2b92sM5
zmNwJHD=R2qse}YAk?^o3vSssDxpDo5qU@6?sUK0vrtU%sLQv6VcS4<mM>3r%Jddh#
z>MhiFjzR)OL`2F0&xJC7-Xigw?<;d>`{$aOT0b*08+Qg0DHzgLGc`4}3Cc+&{y#=|
zv7GZBq(9c|O7dnjn$SpD96~2!7>z-pvM3-FhFA~`FMqL|Z2OXHc@UlqNwvjf!j!4^
zLSvT;R~}BZwO1+|P)E8P^3k<%bWTm)x>LAZ*W7wCrKAfa3<vo+E5>Kl9jT{(jmwaI
zLL<b*!}B?q#R1nXr#LvLEd?_NM#YNNBwNOH`l4u2huK~}&$&`GXz4NuiCM1P=Sogx
zHD@?SV8WC#^Voa9K$drr`$F&MT;CKPmmnG=w8^Me$e%qd$9`$?a|WrR>KylnTTgYo
z8<V_Nf{~A{({y^)+o0@>!(Lhu8o2iVN2qXqKd~{SS1(w#PUxC8TU%RSH5qI1c)|RT
zoUo8(5*iHaH!@lh(drK#JOriorov!k%s(PBEv-Lp$45rx=j_(6-yk6&q5K-IEb{Ub
z??re~USVQA!SNOJ+1t=}wn1{c9y@ldKcur7lKE|S@E;(&WLowHGv_aQ#Fd1;s8oAb
z_iP=5F)v0LnLO3eH-6lIutsV<wT9HS!h!LSDqRqd#*ew{yoHZLVpf15Vx{lE!Stx-
z8f^~Qa0R2D^Xz#_x``XN%VfvtG@d3Reho+@O6b}pOO<k5wQbKUzldn@$9o^5Ir0M2
z{klYn>GOSqUPUfX2IH*|^<kRy%(6iEnG5_w-ULT4m&oNwVq!J%*+@NuG;IUp+bHkt
zk$Q%A)HO6yhiU7kqwLqdt5VgkM2QmKi-N;5BjS@}DHwCaD1FtcOWGx>X+$;<G9;I%
zR6zO0dMx&P9u}7<kt>q9CW>)%aIZ4T|Nqa7ZxbrN9;8`2FwO|*iwhPnUcAS`!s5nw
zt0^zMy&`fWLYGTK*fLof6)(|C`I+S;=!ARa*s<es?dml}_hffW>y4Hex8CEYnqh^?
z+BNGSaU=QhQ3-&Q3Gk1QK>sKS^otgUsqSw^k2bhBb?VgpFowL){p%o|wkS^>;2#?t
ze;?(y^y<}X)SMl$JSm&2I74DrNT*-A(VcD;zz+>@JdR_C<4_meWPr079J8(WeU++i
zp)o76!sAy$0<DxzKX=`O>vZ^_qX!4HN7*{NJfJi?b0bZjym*nLuCC5ir5w%ZQm0NG
z8yHNtqgJez;AoCBM`o4!wlpQ}i0U<K`G6U6maST^FjD(YU6hSji+}aiG*7>v*UWSG
z@w3vi-++6#PD_|00nuQxHKwMfbVi8<_-#v_ufP7<ZJtlSYc89<55xSx$urOJEc6>-
zm)(P7q)@-Mlq^}2Lw|g>>XpCy-Wd%ob0r$n@yoXa<z}OtsHUdYwN|ZK%K9q&|MwD@
zfCJKvAk{iUQt890_FuSg;Snn<t29%SaWCxcre*mo3dxO*N|vQd5+!D70vL2T7&T5|
z)R<^8EEzXeqL##>dyWP(E(M|`EF=a@9S>%Vkr~tHW}BOhe`RE7lnx_?pK1<(@nir-
z?g-U@U&iIk$A5+JVmY_lvUArS>-;0)WUhCR%=Zq;tx>Z!H>Dbj+dTm~b0~Tiz{=oM
zCdUw)x9|LOvUl(jna6lO!MQbS*QHyWYQ%uUO#M6m(*5XcFMpZm6D%C`gN~jF<L?|S
z^V3jQo^!%uu6Ll!oVzHyO@~hGYR3Wto40I}5EQjSn7&s)L~fmW4OHIo=Hn*VJn~o+
zAiGj7P*uwD_xHc)<m7b1%ggH`1cs8Pp^3%mFN+t~f@I@Z;@IIMN8RyQ8~~Lh5~UFC
z<Czf|8F>PR=VgwOhD0ZbowM873Kc7Hw4rhBy7hhL`G(~BghmVJur6J?^Z`8v!ucXR
zF8Pk@yv1j}EnSAsND2GT5b~=5G=c8-%Yg$2j8?8(xiBOoWFsv03lk<xxNmGc=J`}x
zhio^O**WNbbC*UYNL+NX#Kk6qQS(5?jm6Q|I~=;ASMD@Nmu!o1Rxgc>jqigI&(EDZ
zcOxwEg<wYGbLY<e0-cn)Bi+RQtArOzMZNc6P2I~jGiJ*q$C+X>-tINJF3x7rb&Se9
zbuKa@r&(2wC}heW1BQ+`&v=ua+{9#p?JLA_MqK(pQs1K68i(xMwDgRhO>}S(+v&5#
z(cMd?&&!)$wRfE-Q=DgsrHxZ|{f13a5Y`Toj#GZ>y#@?94pFHrL$R`Re)IL$CESrd
zcY_;Pvv$3W7B)_~)($h3r4tUbJjHR=0%f_#B$Q>6(=5>%J?32T;$P82O8KI3`3jX{
z%qBVHaJdPWT;xsf`iS$K#WEFn*dxA?x%IWuWy(@lZeF!o^+jBk!t^FP&lX!(4<!$6
zLlauyT-*N7FJ=Ri@LSzY(em2EqU(niWCH0qFFZUvY5x5A$DEy=Z$kfkXkudW%+S#A
zrGbILYfB5uH(<Qi`uh50z-Q*><_~OaY;HhTJhouLf~2spFpq?U1QUL|{`T$L9UvLW
zjK5Az{g(+ZmQ!wh`VJd)edwsMZ-$IA$?VcwokJ*kxJOTzcj`|071FcM5S=TGH&o9I
z@dnc(r{gf)*i@q;oa0t6Rl00+o6bER4AC8%r8{nlj5fEE(UwzXu=beO?Ys85U#46I
zcE22op*7wN7^i7um2Ny~x)_XiklsUeXykDHcU>?h=d~@X)vUX}TYrt`T85Un+GDLn
z*L1RsFdm=Xd+^A|4V$&$Yzy7&=t)%$(o&fM{=Qm`9l!J)@(gL`j><2Gj@gu)K0|e$
zG-=hIT|RwITO*H##fp7t*12c@JHzzNb4HGxDB30ybMbzj)UMw+3}M<J-T&+gOE3;~
zye44KwpXrP>A7pyu9553t+ND!%|O?)AU;0cFE%z7{2IO#T|h*1bhLj=OpIq-T-<cr
zw_3Y)t<I4nM|z{HYK?R_O-P2P;{Q=m{)=q}&3-&2lphd)|C4~WxJL>3&fzFrrxa{f
z?91X-h~o=z+duknva(cj*914CrIQ0c88{o^lgd@9a;S3E>c`8JuSETmdLNf$=z$U1
z7cE+>ZS#&j9CXIdI5;gZDR-#RWEld~ZQ7ci=p2fnV+^-zr1J!wivvo2TV`9uDnA~r
zROQD*B}#tF;U=!`<e)$I>tq8_R0*U{W;X(SN|gF`3&J0(_<gmMZ@w+VRhZOc`P|(Q
zmKz_it~vuwNOM)$3Y8C_VKLkSKoe=vkW+R!`L7U|sV2|>XbR9<hlWXeFl@)WckliT
zowQSCW@dX_YXh_ZC=I!{dUb%3=%XdPSdQbei>3oGE0N$iuf2B}0`KLAp~JH(9?z9{
z=F=AzF?050ICl5+5aI+!J1`TyYseU!OCOD{o-SzZ0L7P}e5LOVM@^XadeWRA(Kfdi
zP17kdP}l6u*WZ-n_$1W{su9ea!#8SRc=~km0|L}T#{irc?}2-K1~Me)yeYLw^j5Er
z`;^9<s$_V^8;NVwohemS{-Hz_TqpCh%$z>f#D6{~9gr0(9oIfS0@GCm{-fiH|4zb-
zWra~!q*UY>oobwp760C058eab8_y#srie#CbP;#IC56M%BIovilrnTc=5h=&4%47+
zTcbgvpXN+)oiF2^+{I$5ix^qiX4bCXkZYoJ!4RBKASoV102zn*@;VuXpi?uik$D;B
zU$gzASO&&X%>t648BP_4@!6Odhs3a|GLw;6W;QDN(=u(<809}YsqvZqCau`8waD^y
zn~N-4y|GA4^7<mN$s3Bqt=WWYTZ$xX++HMc^UflPo3`hN+fpQM^(GZ#u(Da91exTE
z*i{>Nk5Z?4!zMr3>KU6{8m>Jmwa)<cTUD!7<FukBx=TtiMJ-sdev6WKemoVk6;AVC
z%#Zmpf0joThvnt{{BZA%ql877@jSc^u*^zX`Jd^0rvC%PN(XVAmU<i=Yq-{k4i6)6
mojw4Z{rN|I06vV06#0L0BQaiPgq42)0000<MNUMnLSTY`z}`*(

literal 0
HcmV?d00001

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124969.235312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Ty-0002IQ-FK; Mon, 10 May 2021 08:41:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124969.235312; Mon, 10 May 2021 08:41:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Ty-0002Gi-79; Mon, 10 May 2021 08:41:42 +0000
Received: by outflank-mailman (input) for mailman id 124969;
 Mon, 10 May 2021 08:41:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1Tw-0008L0-Nv
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:40 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 4d1b1b12-dd96-4a25-86fd-fedd5bbae6aa;
 Mon, 10 May 2021 08:41:22 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id E971B13A1;
 Mon, 10 May 2021 01:41:21 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 15A193F719;
 Mon, 10 May 2021 01:41:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4d1b1b12-dd96-4a25-86fd-fedd5bbae6aa
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 5/9] docs: add checks to configure for sphinx and doxygen
Date: Mon, 10 May 2021 09:41:01 +0100
Message-Id: <20210510084105.17108-6-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Add checks to the configure files to see whether the system
is capable of generating the sphinx html docs using the
doxygen and sphinx-breathe tools.
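
The module probe that configure performs (via AX_PYTHON_MODULE) amounts to asking the interpreter whether an import succeeds. A minimal Python sketch of the same idea — the helper name is illustrative and not part of this patch:

```python
import importlib.util

def have_module(name):
    # True when `name` is importable, mirroring what configure does
    # with: $PYTHON -c "import <module>" and testing the exit status.
    return importlib.util.find_spec(name) is not None

# configure warns (or errors, for required modules such as breathe)
# when the probe fails; here we just report the result.
for mod in ("breathe", "sphinx_rtd_theme"):
    print(mod, "yes" if have_module(mod) else "no")
```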

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 config/Docs.mk.in |   2 +
 docs/configure    | 258 ++++++++++++++++++++++++++++++++++++++++++++++
 docs/configure.ac |  15 +++
 3 files changed, 275 insertions(+)

diff --git a/config/Docs.mk.in b/config/Docs.mk.in
index e76e5cd5ff..dfd4a02838 100644
--- a/config/Docs.mk.in
+++ b/config/Docs.mk.in
@@ -7,3 +7,5 @@ POD2HTML            := @POD2HTML@
 POD2TEXT            := @POD2TEXT@
 PANDOC              := @PANDOC@
 PERL                := @PERL@
+SPHINXBUILD         := @SPHINXBUILD@
+DOXYGEN             := @DOXYGEN@
diff --git a/docs/configure b/docs/configure
index 569bd4c2ff..0ebf046a79 100755
--- a/docs/configure
+++ b/docs/configure
@@ -588,6 +588,8 @@ ac_unique_file="misc/xen-command-line.pandoc"
 ac_subst_vars='LTLIBOBJS
 LIBOBJS
 PERL
+DOXYGEN
+SPHINXBUILD
 PANDOC
 POD2TEXT
 POD2HTML
@@ -673,6 +675,7 @@ POD2MAN
 POD2HTML
 POD2TEXT
 PANDOC
+DOXYGEN
 PERL'
 
 
@@ -1318,6 +1321,7 @@ Some influential environment variables:
   POD2HTML    Path to pod2html tool
   POD2TEXT    Path to pod2text tool
   PANDOC      Path to pandoc tool
+  DOXYGEN     Path to doxygen tool
   PERL        Path to Perl parser
 
 Use these variables to override the choices made by `configure' or to help
@@ -1800,6 +1804,7 @@ ac_configure="$SHELL $ac_aux_dir/configure"  # Please don't use this var.
 
 
 
+
 case "$host_os" in
 *freebsd*) XENSTORED_KVA=/dev/xen/xenstored ;;
 *) XENSTORED_KVA=/proc/xen/xsd_kva ;;
@@ -1812,6 +1817,53 @@ case "$host_os" in
 esac
 
 
+# ===========================================================================
+#     https://www.gnu.org/software/autoconf-archive/ax_python_module.html
+# ===========================================================================
+#
+# SYNOPSIS
+#
+#   AX_PYTHON_MODULE(modname[, fatal, python])
+#
+# DESCRIPTION
+#
+#   Checks for Python module.
+#
+#   If fatal is non-empty then absence of a module will trigger an error.
+#   The third parameter can either be "python" for Python 2 or "python3" for
+#   Python 3; defaults to Python 3.
+#
+# LICENSE
+#
+#   Copyright (c) 2008 Andrew Collier
+#
+#   Copying and distribution of this file, with or without modification, are
+#   permitted in any medium without royalty provided the copyright notice
+#   and this notice are preserved. This file is offered as-is, without any
+#   warranty.
+
+#serial 9
+
+# This is what autoupdate's m4 run will expand.  It fires
+# the warning (with _au_warn_XXX), outputs it into the
+# updated configure.ac (with AC_DIAGNOSE), and then outputs
+# the replacement expansion.
+
+
+# This is an auxiliary macro that is also run when
+# autoupdate runs m4.  It simply calls m4_warning, but
+# we need a wrapper so that each warning is emitted only
+# once.  We break the quoting in m4_warning's argument in
+# order to expand this macro's arguments, not AU_DEFUN's.
+
+
+# Finally, this is the expansion that is picked up by
+# autoconf.  It tells the user to run autoupdate, and
+# then outputs the replacement expansion.  We do not care
+# about autoupdate's warning because that contains
+# information on what to do *after* running autoupdate.
+
+
 
 
 test "x$prefix" = "xNONE" && prefix=$ac_default_prefix
@@ -2232,6 +2284,212 @@ $as_echo "$as_me: WARNING: pandoc is not available so some documentation won't b
 fi
 
 
+# If sphinx is installed, also make sure that the dependencies needed to
+# build the Sphinx documentation are present.
+for ac_prog in sphinx-build
+do
+  # Extract the first word of "$ac_prog", so it can be a program name with args.
+set dummy $ac_prog; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_prog_SPHINXBUILD+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  if test -n "$SPHINXBUILD"; then
+  ac_cv_prog_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test.
+else
+as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_prog_SPHINXBUILD="$ac_prog"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+fi
+fi
+SPHINXBUILD=$ac_cv_prog_SPHINXBUILD
+if test -n "$SPHINXBUILD"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
+$as_echo "$SPHINXBUILD" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+  test -n "$SPHINXBUILD" && break
+done
+test -n "$SPHINXBUILD" || SPHINXBUILD="no"
+
+    if test "x$SPHINXBUILD" = xno; then :
+  { $as_echo "$as_me:${as_lineno-$LINENO}: WARNING: sphinx-build is not available so sphinx documentation \
+won't be built" >&5
+$as_echo "$as_me: WARNING: sphinx-build is not available so sphinx documentation \
+won't be built" >&2;}
+else
+
+            # Extract the first word of "sphinx-build", so it can be a program name with args.
+set dummy sphinx-build; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_SPHINXBUILD+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $SPHINXBUILD in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_SPHINXBUILD="$SPHINXBUILD" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_path_SPHINXBUILD="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  ;;
+esac
+fi
+SPHINXBUILD=$ac_cv_path_SPHINXBUILD
+if test -n "$SPHINXBUILD"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $SPHINXBUILD" >&5
+$as_echo "$SPHINXBUILD" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+
+
+    # Extract the first word of "doxygen", so it can be a program name with args.
+set dummy doxygen; ac_word=$2
+{ $as_echo "$as_me:${as_lineno-$LINENO}: checking for $ac_word" >&5
+$as_echo_n "checking for $ac_word... " >&6; }
+if ${ac_cv_path_DOXYGEN+:} false; then :
+  $as_echo_n "(cached) " >&6
+else
+  case $DOXYGEN in
+  [\\/]* | ?:[\\/]*)
+  ac_cv_path_DOXYGEN="$DOXYGEN" # Let the user override the test with a path.
+  ;;
+  *)
+  as_save_IFS=$IFS; IFS=$PATH_SEPARATOR
+for as_dir in $PATH
+do
+  IFS=$as_save_IFS
+  test -z "$as_dir" && as_dir=.
+    for ac_exec_ext in '' $ac_executable_extensions; do
+  if as_fn_executable_p "$as_dir/$ac_word$ac_exec_ext"; then
+    ac_cv_path_DOXYGEN="$as_dir/$ac_word$ac_exec_ext"
+    $as_echo "$as_me:${as_lineno-$LINENO}: found $as_dir/$ac_word$ac_exec_ext" >&5
+    break 2
+  fi
+done
+  done
+IFS=$as_save_IFS
+
+  ;;
+esac
+fi
+DOXYGEN=$ac_cv_path_DOXYGEN
+if test -n "$DOXYGEN"; then
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: $DOXYGEN" >&5
+$as_echo "$DOXYGEN" >&6; }
+else
+  { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+fi
+
+
+    if ! test -x "$ac_cv_path_DOXYGEN"; then :
+
+        as_fn_error $? "doxygen is needed" "$LINENO" 5
+
+fi
+
+
+    if test -z $PYTHON;
+    then
+        if test -z "";
+        then
+            PYTHON="python3"
+        else
+            PYTHON=""
+        fi
+    fi
+    PYTHON_NAME=`basename $PYTHON`
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: breathe" >&5
+$as_echo_n "checking $PYTHON_NAME module: breathe... " >&6; }
+    $PYTHON -c "import breathe" 2>/dev/null
+    if test $? -eq 0;
+    then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+        eval HAVE_PYMOD_BREATHE=yes
+    else
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+        eval HAVE_PYMOD_BREATHE=no
+        #
+        if test -n "yes"
+        then
+            as_fn_error $? "failed to find required module breathe" "$LINENO" 5
+            exit 1
+        fi
+    fi
+
+
+    if test -z $PYTHON;
+    then
+        if test -z "";
+        then
+            PYTHON="python3"
+        else
+            PYTHON=""
+        fi
+    fi
+    PYTHON_NAME=`basename $PYTHON`
+    { $as_echo "$as_me:${as_lineno-$LINENO}: checking $PYTHON_NAME module: sphinx_rtd_theme" >&5
+$as_echo_n "checking $PYTHON_NAME module: sphinx_rtd_theme... " >&6; }
+    $PYTHON -c "import sphinx_rtd_theme" 2>/dev/null
+    if test $? -eq 0;
+    then
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: yes" >&5
+$as_echo "yes" >&6; }
+        eval HAVE_PYMOD_SPHINX_RTD_THEME=yes
+    else
+        { $as_echo "$as_me:${as_lineno-$LINENO}: result: no" >&5
+$as_echo "no" >&6; }
+        eval HAVE_PYMOD_SPHINX_RTD_THEME=no
+        #
+        if test -n "yes"
+        then
+            as_fn_error $? "failed to find required module sphinx_rtd_theme" "$LINENO" 5
+            exit 1
+        fi
+    fi
+
+
+
+fi
+
 
 # Extract the first word of "perl", so it can be a program name with args.
 set dummy perl; ac_word=$2
diff --git a/docs/configure.ac b/docs/configure.ac
index c2e5edd3b3..a2ff55f30a 100644
--- a/docs/configure.ac
+++ b/docs/configure.ac
@@ -20,6 +20,7 @@ m4_include([../m4/docs_tool.m4])
 m4_include([../m4/path_or_fail.m4])
 m4_include([../m4/features.m4])
 m4_include([../m4/paths.m4])
+m4_include([../m4/ax_python_module.m4])
 
 AX_XEN_EXPAND_CONFIG()
 
@@ -29,6 +30,20 @@ AX_DOCS_TOOL_PROG([POD2HTML], [pod2html])
 AX_DOCS_TOOL_PROG([POD2TEXT], [pod2text])
 AX_DOCS_TOOL_PROG([PANDOC], [pandoc])
 
+# If sphinx is installed, also make sure that the dependencies needed to
+# build the Sphinx documentation are present.
+AC_CHECK_PROGS([SPHINXBUILD], [sphinx-build], [no])
+    AS_IF([test "x$SPHINXBUILD" = xno],
+        [AC_MSG_WARN(sphinx-build is not available so sphinx documentation \
+won't be built)],
+        [
+            AC_PATH_PROG([SPHINXBUILD], [sphinx-build])
+            AX_DOCS_TOOL_REQ_PROG([DOXYGEN], [doxygen])
+            AX_PYTHON_MODULE([breathe],[yes])
+            AX_PYTHON_MODULE([sphinx_rtd_theme], [yes])
+        ]
+    )
+
 AC_ARG_VAR([PERL], [Path to Perl parser])
 AX_PATH_PROG_OR_FAIL([PERL], [perl])
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124970.235330 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1U2-000384-QA; Mon, 10 May 2021 08:41:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124970.235330; Mon, 10 May 2021 08:41:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1U2-00037q-M9; Mon, 10 May 2021 08:41:46 +0000
Received: by outflank-mailman (input) for mailman id 124970;
 Mon, 10 May 2021 08:41:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1U0-0000ei-Ru
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:44 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id b9f5daf7-e91c-428e-b8f9-d4ac3c66e3e2;
 Mon, 10 May 2021 08:41:27 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C498411D4;
 Mon, 10 May 2021 01:41:26 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 6049E3F719;
 Mon, 10 May 2021 01:41:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9f5daf7-e91c-428e-b8f9-d4ac3c66e3e2
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 8/9] docs: hypercalls sphinx skeleton for generated html
Date: Mon, 10 May 2021 09:41:04 +0100
Message-Id: <20210510084105.17108-9-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Create a skeleton for the documentation about hypercalls
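
The per-architecture toctree entry is produced by substituting @XEN_TARGET_ARCH@ in index.rst.in (the sed rule added to docs/Makefile). A hedged Python sketch of that substitution step — the architecture value here is just an example:

```python
# Stand-in for the Makefile rule:
#   sed -e "s,@XEN_TARGET_ARCH@,$(XEN_TARGET_ARCH),g" \
#       hypercall-interfaces/index.rst.in > hypercall-interfaces/index.rst
template = (
    ".. SPDX-License-Identifier: CC-BY-4.0\n"
    "\n"
    "Hypercall Interfaces\n"
    "====================\n"
    "\n"
    ".. toctree::\n"
    "   @XEN_TARGET_ARCH@\n"
)
xen_target_arch = "arm64"  # normally supplied by the build system
index_rst = template.replace("@XEN_TARGET_ARCH@", xen_target_arch)
print(index_rst)
```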

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v6 changes:
- Now every platform has the same sections in .rst files
---
 .gitignore                             |  1 +
 docs/Makefile                          |  4 ++++
 docs/hypercall-interfaces/arm32.rst    | 32 ++++++++++++++++++++++++++
 docs/hypercall-interfaces/arm64.rst    | 32 ++++++++++++++++++++++++++
 docs/hypercall-interfaces/index.rst.in |  7 ++++++
 docs/hypercall-interfaces/x86_64.rst   | 32 ++++++++++++++++++++++++++
 docs/index.rst                         |  8 +++++++
 7 files changed, 116 insertions(+)
 create mode 100644 docs/hypercall-interfaces/arm32.rst
 create mode 100644 docs/hypercall-interfaces/arm64.rst
 create mode 100644 docs/hypercall-interfaces/index.rst.in
 create mode 100644 docs/hypercall-interfaces/x86_64.rst

diff --git a/.gitignore b/.gitignore
index d271e0ce6a..a9aab120ae 100644
--- a/.gitignore
+++ b/.gitignore
@@ -64,6 +64,7 @@ docs/xen.doxyfile
 docs/xen.doxyfile.tmp
 docs/xen-doxygen/doxygen_include.h
 docs/xen-doxygen/doxygen_include.h.tmp
+docs/hypercall-interfaces/index.rst
 extras/mini-os*
 install/*
 stubdom/*-minios-config.mk
diff --git a/docs/Makefile b/docs/Makefile
index 2f784c36ce..b02c3dfb79 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -61,6 +61,9 @@ build: html txt pdf man-pages figs
 sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
 ifneq ($(SPHINXBUILD),no)
 	$(DOXYGEN) xen.doxyfile
+	@echo "Generating hypercall-interfaces/index.rst"
+	@sed -e "s,@XEN_TARGET_ARCH@,$(XEN_TARGET_ARCH),g" \
+		hypercall-interfaces/index.rst.in > hypercall-interfaces/index.rst
 	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
 else
 	@echo "Sphinx is not installed; skipping sphinx-html documentation."
@@ -108,6 +111,7 @@ clean: clean-man-pages
 	rm -f xen.doxyfile.tmp
 	rm -f xen-doxygen/doxygen_include.h
 	rm -f xen-doxygen/doxygen_include.h.tmp
+	rm -f hypercall-interfaces/index.rst
 
 .PHONY: distclean
 distclean: clean
diff --git a/docs/hypercall-interfaces/arm32.rst b/docs/hypercall-interfaces/arm32.rst
new file mode 100644
index 0000000000..6762d9fc7c
--- /dev/null
+++ b/docs/hypercall-interfaces/arm32.rst
@@ -0,0 +1,32 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - arm32
+============================
+
+Starting points
+---------------
+.. toctree::
+   :maxdepth: 2
+
+
+
+Functions
+---------
+
+
+Structs
+-------
+
+
+Enums and sets of #defines
+--------------------------
+
+
+Typedefs
+--------
+
+
+Enum values and individual #defines
+-----------------------------------
+
+
diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
new file mode 100644
index 0000000000..5e701a2adc
--- /dev/null
+++ b/docs/hypercall-interfaces/arm64.rst
@@ -0,0 +1,32 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - arm64
+============================
+
+Starting points
+---------------
+.. toctree::
+   :maxdepth: 2
+
+
+
+Functions
+---------
+
+
+Structs
+-------
+
+
+Enums and sets of #defines
+--------------------------
+
+
+Typedefs
+--------
+
+
+Enum values and individual #defines
+-----------------------------------
+
+
diff --git a/docs/hypercall-interfaces/index.rst.in b/docs/hypercall-interfaces/index.rst.in
new file mode 100644
index 0000000000..e4dcc5db8d
--- /dev/null
+++ b/docs/hypercall-interfaces/index.rst.in
@@ -0,0 +1,7 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces
+====================
+
+.. toctree::
+   @XEN_TARGET_ARCH@
diff --git a/docs/hypercall-interfaces/x86_64.rst b/docs/hypercall-interfaces/x86_64.rst
new file mode 100644
index 0000000000..59e948900c
--- /dev/null
+++ b/docs/hypercall-interfaces/x86_64.rst
@@ -0,0 +1,32 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Hypercall Interfaces - x86_64
+=============================
+
+Starting points
+---------------
+.. toctree::
+   :maxdepth: 2
+
+
+
+Functions
+---------
+
+
+Structs
+-------
+
+
+Enums and sets of #defines
+--------------------------
+
+
+Typedefs
+--------
+
+
+Enum values and individual #defines
+-----------------------------------
+
+
diff --git a/docs/index.rst b/docs/index.rst
index b75487a05d..52226a42d8 100644
--- a/docs/index.rst
+++ b/docs/index.rst
@@ -53,6 +53,14 @@ kind of development environment.
    hypervisor-guide/index
 
 
+Hypercall Interfaces documentation
+----------------------------------
+
+.. toctree::
+   :maxdepth: 2
+
+   hypercall-interfaces/index
+
 Miscellanea
 -----------
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:41:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:41:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124971.235335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1U3-0003BY-8j; Mon, 10 May 2021 08:41:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124971.235335; Mon, 10 May 2021 08:41:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1U3-0003Ak-2g; Mon, 10 May 2021 08:41:47 +0000
Received: by outflank-mailman (input) for mailman id 124971;
 Mon, 10 May 2021 08:41:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1U1-0008L0-OB
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:45 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 120380b4-d6db-4857-8ba4-6d056083678d;
 Mon, 10 May 2021 08:41:23 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 8A29411FB;
 Mon, 10 May 2021 01:41:23 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 2A3F33F719;
 Mon, 10 May 2021 01:41:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 120380b4-d6db-4857-8ba4-6d056083678d
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 6/9] docs: add doxygen preprocessor and related files
Date: Mon, 10 May 2021 09:41:02 +0100
Message-Id: <20210510084105.17108-7-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Add a preprocessor that doxygen calls before parsing headers;
it includes in every header a doxygen_include.h file that
provides the missing defines and includes that are usually
passed by the compiler.

Add doxy_input.list, a text file containing the relative
paths of the source code files to be parsed by doxygen.
Each path should be relative to the xen folder, e.g.
xen/include/public/grant_table.h
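
The core of the preprocessor is the regex pass that gives anonymous structs/unions a name doxygen can handle. A self-contained sketch of that substitution, with the counter handling slightly simplified compared to the script:

```python
import re

counts = {"struct": 0, "union": 0}

def name_anonymous(match):
    # Label each anonymous aggregate with a unique per-kind counter.
    kind = match.group(1)
    label = "anonymous_%s_%d" % (kind, counts[kind])
    counts[kind] += 1
    return "%s %s {" % (kind, label)

header = "struct {\n    int a;\n};\nunion {\n    int b;\n};\n"
# The negative lookbehind skips `typedef struct { ... }`, which doxygen
# already copes with.
renamed = re.sub(r"(?<!typedef\s)(struct|union)\s+\{", name_anonymous, header)
print(renamed)
```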

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 docs/xen-doxygen/doxy-preprocessor.py | 110 ++++++++++++++++++++++++++
 docs/xen-doxygen/doxy_input.list      |   0
 docs/xen-doxygen/doxygen_include.h.in |  32 ++++++++
 3 files changed, 142 insertions(+)
 create mode 100755 docs/xen-doxygen/doxy-preprocessor.py
 create mode 100644 docs/xen-doxygen/doxy_input.list
 create mode 100644 docs/xen-doxygen/doxygen_include.h.in

diff --git a/docs/xen-doxygen/doxy-preprocessor.py b/docs/xen-doxygen/doxy-preprocessor.py
new file mode 100755
index 0000000000..496899d8e6
--- /dev/null
+++ b/docs/xen-doxygen/doxy-preprocessor.py
@@ -0,0 +1,110 @@
+#!/usr/bin/python3
+#
+# Copyright (c) 2021, Arm Limited.
+#
+# SPDX-License-Identifier: GPL-2.0
+#
+
+import os, sys, re
+
+
+# Variables that hold the preprocessed header text
+output_text = ""
+header_file_name = ""
+
+# Variables to enumerate the anonymous structs/unions
+anonymous_struct_count = 0
+anonymous_union_count = 0
+
+
+def error(text):
+    sys.stderr.write("{}\n".format(text))
+    sys.exit(1)
+
+
+def write_to_output(text):
+    sys.stdout.write(text)
+
+
+def insert_doxygen_header(text):
+    # Here the strategy is to insert the #include <doxygen_include.h> in the
+    # first line of the header
+    abspath = os.path.dirname(os.path.abspath(__file__))
+    text += "#include \"{}/doxygen_include.h\"\n".format(abspath)
+
+    return text
+
+
+def enumerate_anonymous(match):
+    global anonymous_struct_count
+    global anonymous_union_count
+
+    if "struct" in match.group(1):
+        label = "anonymous_struct_%d" % anonymous_struct_count
+        anonymous_struct_count += 1
+    else:
+        label = "anonymous_union_%d" % anonymous_union_count
+        anonymous_union_count += 1
+
+    return match.group(1) + " " + label + " {"
+
+
+def manage_anonymous_structs_unions(text):
+    # Match anonymous unions/structs with this pattern:
+    # struct/union {
+    #     [...]
+    #
+    # and substitute it in this way:
+    #
+    # struct anonymous_struct_# {
+    #     [...]
+    # or
+    # union anonymous_union_# {
+    #     [...]
+    # where # is a counter starting from zero, different between structs and
+    # unions
+    #
+    # We don't count anonymous union/struct that are part of a typedef because
+    # they don't create any issue for doxygen
+    text = re.sub(
+        "(?<!typedef\s)(struct|union)\s+?\{",
+        enumerate_anonymous,
+        text,
+        flags=re.S
+    )
+
+    return text
+
+
+def main(argv):
+    global output_text
+    global header_file_name
+
+    if len(argv) != 1:
+        error("Script requires exactly one argument: the header file path")
+
+    header_file_name = argv[0]
+
+    # Open the header file, closing it when done
+    with open(header_file_name, 'r') as input_header_file:
+        # Read all lines
+        lines = input_header_file.readlines()
+
+    # Inject config.h and some defines in the current header, during compilation
+    # this job is done by the -include argument passed to the compiler.
+    output_text = insert_doxygen_header(output_text)
+
+    # Load file content in a variable
+    for line in lines:
+        output_text += line
+
+    # Try to get rid of any anonymous union/struct
+    output_text = manage_anonymous_structs_unions(output_text)
+
+    # Final stage of the preprocessor, print the output to stdout
+    write_to_output(output_text)
+
+
+if __name__ == "__main__":
+    main(sys.argv[1:])
+    sys.exit(0)
diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/docs/xen-doxygen/doxygen_include.h.in b/docs/xen-doxygen/doxygen_include.h.in
new file mode 100644
index 0000000000..df284f3931
--- /dev/null
+++ b/docs/xen-doxygen/doxygen_include.h.in
@@ -0,0 +1,32 @@
+/*
+ * Doxygen include header
+ * It supplies the xen/include/xen/config.h that is included using the -include
+ * argument of the compiler in the Xen Makefile.
+ * Other macros are defined because they are usually provided by the compiler.
+ *
+ * Copyright (C) 2021 ARM Limited
+ *
+ * Author: Luca Fancellu <luca.fancellu@arm.com>
+ *
+ * SPDX-License-Identifier: GPL-2.0
+ */
+
+#include "@XEN_BASE@/xen/include/xen/config.h"
+
+#if defined(CONFIG_X86_64)
+
+#define __x86_64__ 1
+
+#elif defined(CONFIG_ARM_64)
+
+#define __aarch64__ 1
+
+#elif defined(CONFIG_ARM_32)
+
+#define __arm__ 1
+
+#else
+
+#error Architecture not supported/recognized.
+
+#endif
-- 
2.17.1
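For reference, the anonymous struct/union renaming that the preprocessor performs can be sketched standalone. The sample header text and the counter dictionary below are illustrative only (the real script uses module-level counters); the regex is the one from the patch:

```python
import re

def make_enumerator():
    # Per-call counters standing in for the script's module-level globals
    counts = {"struct": 0, "union": 0}
    def repl(match):
        kind = match.group(1)                      # "struct" or "union"
        label = "anonymous_%s_%d" % (kind, counts[kind])
        counts[kind] += 1
        return "%s %s {" % (kind, label)
    return repl

sample = """struct grant_entry_v2 {
    union {
        uint64_t frame;
    };
};
typedef struct {
    int x;
} named_t;
"""

# Same pattern as the preprocessor: the negative lookbehind skips
# typedef'd anonymous aggregates, which doxygen already handles fine.
out = re.sub(r"(?<!typedef\s)(struct|union)\s+?\{", make_enumerator(),
             sample, flags=re.S)
print(out)
```

Named aggregates (`struct grant_entry_v2`) and typedef'd anonymous ones are left untouched; only the bare `union {` gains a synthetic `anonymous_union_0` label.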



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:43:26 2021
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 9/9] docs/doxygen: doxygen documentation for grant_table.h
Date: Mon, 10 May 2021 09:41:05 +0100
Message-Id: <20210510084105.17108-10-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Modifications to include/public/grant_table.h:

1) Add doxygen tags to:
 - create the Grant Tables section
 - include variables in the generated documentation
 - use @keepindent/@endkeepindent to enclose comment
   sections that are indented using spaces, to keep
   the indentation
2) Add a .rst file for grant tables on Arm64

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
v6 changes:
- Fix misaligned comment
- Moved comments to make them display in the docs
- Included more documentation in the docs
  (see output here: https://luca.fancellu.gitlab.io/xen-docs/hypercall-interfaces/common/grant_tables.html)
v5 changes:
- Move GNTCOPY_* define next to the flags field
v4 changes:
- Used @keepindent/@endkeepindent doxygen commands
  to keep text with spaces indentation.
- drop changes to grant_entry_v1 comment, it will
  be changed and included in the docs in a future patch
- Move docs .rst to "common" folder
v3 changes:
- removed tags to skip anonymous union/struct
- moved back comment pointed out by Jan
- moved down defines related to struct gnttab_copy
  as pointed out by Jan
v2 changes:
- Revert back to anonymous union/struct
- add doxygen tags to skip anonymous union/struct
---
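(Editorial note: @keepindent/@endkeepindent are custom doxygen commands introduced earlier in this series, not built-ins. Their exact definition is not shown in this patch; a Doxyfile alias of roughly this shape — hypothetical, shown only to illustrate the mechanism — is how such commands are typically wired up:)

```
ALIASES += keepindent="\verbatim"
ALIASES += endkeepindent="\endverbatim"
```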
 docs/hypercall-interfaces/arm64.rst           |   1 +
 .../common/grant_tables.rst                   |   9 +
 docs/xen-doxygen/doxy_input.list              |   1 +
 xen/include/public/grant_table.h              | 387 +++++++++++-------
 4 files changed, 245 insertions(+), 153 deletions(-)
 create mode 100644 docs/hypercall-interfaces/common/grant_tables.rst
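(Editorial note: the type/subflag encoding documented below packs the grant type into the low two bits of ent->flags, with subflags above it. A quick standalone sanity check, with values copied from the header — an illustrative sketch, not part of the patch:)

```python
# Grant entry type values occupy the low two bits of ent->flags
# (values taken from xen/include/public/grant_table.h).
GTF_invalid         = 0 << 0
GTF_permit_access   = 1 << 0
GTF_accept_transfer = 2 << 0
GTF_transitive      = 3 << 0
GTF_type_mask       = 3 << 0

# Subflags for GTF_permit_access / GTF_transitive
GTF_readonly        = 1 << 2   # [GST] restrict @domid to read-only
GTF_reading         = 1 << 3   # [XEN] currently mapped for reading
GTF_writing         = 1 << 4   # [XEN] currently mapped for writing

def entry_type(flags):
    """Extract the grant type from a flags word."""
    return flags & GTF_type_mask

flags = GTF_permit_access | GTF_readonly
assert entry_type(flags) == GTF_permit_access
assert flags & GTF_readonly
assert not (flags & GTF_writing)
print("flags = 0x%x, type = %d" % (flags, entry_type(flags)))
```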

diff --git a/docs/hypercall-interfaces/arm64.rst b/docs/hypercall-interfaces/arm64.rst
index 5e701a2adc..cb4c0d13de 100644
--- a/docs/hypercall-interfaces/arm64.rst
+++ b/docs/hypercall-interfaces/arm64.rst
@@ -8,6 +8,7 @@ Starting points
 .. toctree::
    :maxdepth: 2
 
+   common/grant_tables
 
 
 Functions
diff --git a/docs/hypercall-interfaces/common/grant_tables.rst b/docs/hypercall-interfaces/common/grant_tables.rst
new file mode 100644
index 0000000000..b8a1ef8759
--- /dev/null
+++ b/docs/hypercall-interfaces/common/grant_tables.rst
@@ -0,0 +1,9 @@
+.. SPDX-License-Identifier: CC-BY-4.0
+
+Grant Tables
+============
+
+.. doxygengroup:: grant_table
+   :project: Xen
+   :members:
+   :undoc-members:
diff --git a/docs/xen-doxygen/doxy_input.list b/docs/xen-doxygen/doxy_input.list
index e69de29bb2..233d692fa7 100644
--- a/docs/xen-doxygen/doxy_input.list
+++ b/docs/xen-doxygen/doxy_input.list
@@ -0,0 +1 @@
+xen/include/public/grant_table.h
diff --git a/xen/include/public/grant_table.h b/xen/include/public/grant_table.h
index 84b1d26b36..dfa5155927 100644
--- a/xen/include/public/grant_table.h
+++ b/xen/include/public/grant_table.h
@@ -25,15 +25,19 @@
  * Copyright (c) 2004, K A Fraser
  */
 
+/**
+ * @file
+ * @brief Interface for granting foreign access to page frames, and receiving
+ * page-ownership transfers.
+ */
+
 #ifndef __XEN_PUBLIC_GRANT_TABLE_H__
 #define __XEN_PUBLIC_GRANT_TABLE_H__
 
 #include "xen.h"
 
-/*
- * `incontents 150 gnttab Grant Tables
- *
- * Xen's grant tables provide a generic mechanism to memory sharing
+/**
+ * @brief Xen's grant tables provide a generic mechanism for memory sharing
  * between domains. This shared memory interface underpins the split
  * device drivers for block and network IO.
  *
@@ -51,13 +55,13 @@
  * know the real machine address of a page it is sharing. This makes
  * it possible to share memory correctly with domains running in
  * fully virtualised memory.
- */
-
-/***********************************
+ *
  * GRANT TABLE REPRESENTATION
- */
-
-/* Some rough guidelines on accessing and updating grant-table entries
+ *
+ * A grant table comprises a packed array of grant entries in one or more
+ * page frames shared between Xen and a guest.
+ *
+ * Some rough guidelines on accessing and updating grant-table entries
  * in a concurrency-safe manner. For more information, Linux contains a
  * reference implementation for guest OSes (drivers/xen/grant_table.c, see
  * http://git.kernel.org/?p=linux/kernel/git/torvalds/linux.git;a=blob;f=drivers/xen/grant-table.c;hb=HEAD
@@ -66,6 +70,7 @@
  *     compiler barrier will still be required.
  *
  * Introducing a valid entry into the grant table:
+ * @keepindent
  *  1. Write ent->domid.
  *  2. Write ent->frame:
  *      GTF_permit_access:   Frame to which access is permitted.
@@ -73,20 +78,25 @@
  *                           frame, or zero if none.
  *  3. Write memory barrier (WMB).
  *  4. Write ent->flags, inc. valid type.
+ * @endkeepindent
  *
  * Invalidating an unused GTF_permit_access entry:
+ * @keepindent
  *  1. flags = ent->flags.
  *  2. Observe that !(flags & (GTF_reading|GTF_writing)).
  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
  *  NB. No need for WMB as reuse of entry is control-dependent on success of
  *      step 3, and all architectures guarantee ordering of ctrl-dep writes.
+ * @endkeepindent
  *
  * Invalidating an in-use GTF_permit_access entry:
+ *
  *  This cannot be done directly. Request assistance from the domain controller
  *  which can set a timeout on the use of a grant entry and take necessary
  *  action. (NB. This is not yet implemented!).
  *
  * Invalidating an unused GTF_accept_transfer entry:
+ * @keepindent
  *  1. flags = ent->flags.
  *  2. Observe that !(flags & GTF_transfer_committed). [*]
  *  3. Check result of SMP-safe CMPXCHG(&ent->flags, flags, 0).
@@ -97,29 +107,32 @@
  *      transferred frame is written. It is safe for the guest to spin waiting
  *      for this to occur (detect by observing GTF_transfer_completed in
  *      ent->flags).
+ * @endkeepindent
  *
  * Invalidating a committed GTF_accept_transfer entry:
  *  1. Wait for (ent->flags & GTF_transfer_completed).
  *
  * Changing a GTF_permit_access from writable to read-only:
+ *
  *  Use SMP-safe CMPXCHG to set GTF_readonly, while checking !GTF_writing.
  *
  * Changing a GTF_permit_access from read-only to writable:
+ *
  *  Use SMP-safe bit-setting instruction.
+ *
+ * Data structure fields or defines described below have the following tags:
+ * * [XEN]: This field is written by Xen and read by the sharing guest.
+ * * [GST]: This field is written by the guest and read by Xen.
+ *
+ * @addtogroup grant_table Grant Tables
+ * @{
  */
 
-/*
+/**
  * Reference to a grant entry in a specified domain's grant table.
  */
 typedef uint32_t grant_ref_t;
 
-/*
- * A grant table comprises a packed array of grant entries in one or more
- * page frames shared between Xen and a guest.
- * [XEN]: This field is written by Xen and read by the sharing guest.
- * [GST]: This field is written by the guest and read by Xen.
- */
-
 /*
  * Version 1 of the grant table entry structure is maintained purely
  * for backwards compatibility.  New guests should use version 2.
@@ -129,15 +142,17 @@ typedef uint32_t grant_ref_t;
 #define grant_entry_v1_t grant_entry_t
 #endif
 struct grant_entry_v1 {
-    /* GTF_xxx: various type and flag information.  [XEN,GST] */
+    /** GTF_xxx: various type and flag information.  [XEN,GST] */
     uint16_t flags;
-    /* The domain being granted foreign privileges. [GST] */
+    /** The domain being granted foreign privileges. [GST] */
     domid_t  domid;
-    /*
+    /**
+     * @keepindent
      * GTF_permit_access: GFN that @domid is allowed to map and access. [GST]
      * GTF_accept_transfer: GFN that @domid is allowed to transfer into. [GST]
      * GTF_transfer_completed: MFN whose ownership transferred by @domid
      *                         (non-translated guests only). [XEN]
+     * @endkeepindent
      */
     uint32_t frame;
 };
@@ -150,60 +165,99 @@ typedef struct grant_entry_v1 grant_entry_v1_t;
 #define GNTTAB_RESERVED_CONSOLE        0
 #define GNTTAB_RESERVED_XENSTORE       1
 
-/*
- * Type of grant entry.
- *  GTF_invalid: This grant entry grants no privileges.
- *  GTF_permit_access: Allow @domid to map/access @frame.
- *  GTF_accept_transfer: Allow @domid to transfer ownership of one page frame
- *                       to this guest. Xen writes the page number to @frame.
- *  GTF_transitive: Allow @domid to transitively access a subrange of
- *                  @trans_grant in @trans_domid.  No mappings are allowed.
- */
+/** This type of grant entry grants no privileges. */
 #define GTF_invalid         (0U<<0)
+
+/** This type of grant entry allows \@domid to map/access \@frame. */
 #define GTF_permit_access   (1U<<0)
+
+/**
+ * This type of grant entry allows \@domid to transfer ownership of one page
+ * frame to this guest. Xen writes the page number to \@frame.
+ */
 #define GTF_accept_transfer (2U<<0)
+
+/**
+ * This type of grant entry allows \@domid to transitively access a subrange of
+ * \@trans_grant in \@trans_domid.  No mappings are allowed.
+ */
 #define GTF_transitive      (3U<<0)
+
 #define GTF_type_mask       (3U<<0)
 
-/*
- * Subflags for GTF_permit_access and GTF_transitive.
- *  GTF_readonly: Restrict @domid to read-only mappings and accesses. [GST]
- *  GTF_reading: Grant entry is currently mapped for reading by @domid. [XEN]
- *  GTF_writing: Grant entry is currently mapped for writing by @domid. [XEN]
- * Further subflags for GTF_permit_access only.
- *  GTF_PAT, GTF_PWT, GTF_PCD: (x86) cache attribute flags to be used for
- *                             mappings of the grant [GST]
- *  GTF_sub_page: Grant access to only a subrange of the page.  @domid
- *                will only be allowed to copy from the grant, and not
- *                map it. [GST]
+/**
+ * @def GTF_readonly
+ * Subflag for GTF_permit_access and GTF_transitive: Restrict \@domid to
+ * read-only mappings and accesses. [GST]
  */
 #define _GTF_readonly       (2)
 #define GTF_readonly        (1U<<_GTF_readonly)
+
+/**
+ * @def GTF_reading
+ * Subflag for GTF_permit_access and GTF_transitive: Grant entry is currently
+ * mapped for reading by \@domid. [XEN]
+ */
 #define _GTF_reading        (3)
 #define GTF_reading         (1U<<_GTF_reading)
+
+/**
+ * @def GTF_writing
+ * Subflag for GTF_permit_access and GTF_transitive: Grant entry is currently
+ * mapped for writing by \@domid. [XEN]
+ */
 #define _GTF_writing        (4)
 #define GTF_writing         (1U<<_GTF_writing)
+
+/**
+ * @def GTF_PWT
+ * Subflag for GTF_permit_access only: (x86) cache attribute flags to be used
+ * for mappings of the grant [GST]
+ */
 #define _GTF_PWT            (5)
 #define GTF_PWT             (1U<<_GTF_PWT)
+
+/**
+ * @def GTF_PCD
+ * Subflag for GTF_permit_access only: (x86) cache attribute flags to be used
+ * for mappings of the grant [GST]
+ */
 #define _GTF_PCD            (6)
 #define GTF_PCD             (1U<<_GTF_PCD)
+
+/**
+ * @def GTF_PAT
+ * Subflag for GTF_permit_access only: (x86) cache attribute flags to be used
+ * for mappings of the grant [GST]
+ */
 #define _GTF_PAT            (7)
 #define GTF_PAT             (1U<<_GTF_PAT)
+
+/**
+ * @def GTF_sub_page
+ * Subflag for GTF_permit_access only: Grant access to only a subrange of the
+ * page. \@domid will only be allowed to copy from the grant, and not map it.
+ * [GST]
+ */
 #define _GTF_sub_page       (8)
 #define GTF_sub_page        (1U<<_GTF_sub_page)
 
-/*
- * Subflags for GTF_accept_transfer:
- *  GTF_transfer_committed: Xen sets this flag to indicate that it is committed
- *      to transferring ownership of a page frame. When a guest sees this flag
- *      it must /not/ modify the grant entry until GTF_transfer_completed is
- *      set by Xen.
- *  GTF_transfer_completed: It is safe for the guest to spin-wait on this flag
- *      after reading GTF_transfer_committed. Xen will always write the frame
- *      address, followed by ORing this flag, in a timely manner.
+/**
+ * @def GTF_transfer_committed
+ * Subflag for GTF_accept_transfer: Xen sets this flag to indicate that it is
+ * committed to transferring ownership of a page frame. When a guest sees this
+ * flag it must /not/ modify the grant entry until GTF_transfer_completed is
+ * set by Xen.
  */
 #define _GTF_transfer_committed (2)
 #define GTF_transfer_committed  (1U<<_GTF_transfer_committed)
+
+/**
+ * @def GTF_transfer_completed
+ * Subflag for GTF_accept_transfer: It is safe for the guest to spin-wait on
+ * this flag after reading GTF_transfer_committed. Xen will always write the
+ * frame address, followed by ORing this flag, in a timely manner.
+ */
 #define _GTF_transfer_completed (3)
 #define GTF_transfer_completed  (1U<<_GTF_transfer_completed)
 
@@ -228,17 +282,17 @@ struct grant_entry_header {
 };
 typedef struct grant_entry_header grant_entry_header_t;
 
-/*
+/**
  * Version 2 of the grant entry structure.
  */
 union grant_entry_v2 {
     grant_entry_header_t hdr;
 
-    /*
+    /**
      * This member is used for V1-style full page grants, where either:
      *
-     * -- hdr.type is GTF_accept_transfer, or
-     * -- hdr.type is GTF_permit_access and GTF_sub_page is not set.
+     * * hdr.type is GTF_accept_transfer, or
+     * * hdr.type is GTF_permit_access and GTF_sub_page is not set.
      *
      * In that case, the frame field has the same semantics as the
      * field of the same name in the V1 entry structure.
@@ -249,10 +303,10 @@ union grant_entry_v2 {
         uint64_t frame;
     } full_page;
 
-    /*
+    /**
      * If the grant type is GTF_grant_access and GTF_sub_page is set,
-     * @domid is allowed to access bytes [@page_off,@page_off+@length)
-     * in frame @frame.
+     * \@domid is allowed to access bytes [\@page_off,\@page_off+\@length)
+     * in frame \@frame.
      */
     struct {
         grant_entry_header_t hdr;
@@ -261,9 +315,9 @@ union grant_entry_v2 {
         uint64_t frame;
     } sub_page;
 
-    /*
-     * If the grant is GTF_transitive, @domid is allowed to use the
-     * grant @gref in domain @trans_domid, as if it was the local
+    /**
+     * If the grant is GTF_transitive, \@domid is allowed to use the
+     * grant \@gref in domain \@trans_domid, as if it was the local
      * domain.  Obviously, the transitive access must be compatible
      * with the original grant.
      *
@@ -277,7 +331,7 @@ union grant_entry_v2 {
         grant_ref_t gref;
     } transitive;
 
-    uint32_t __spacer[4]; /* Pad to a power of two */
+    uint32_t __spacer[4]; /**< Pad to a power of two */
 };
 typedef union grant_entry_v2 grant_entry_v2_t;
 
@@ -317,24 +371,25 @@ typedef uint16_t grant_status_t;
 #endif /* __XEN_INTERFACE_VERSION__ */
 /* ` } */
 
-/*
+/**
  * Handle to track a mapping created via a grant reference.
  */
 typedef uint32_t grant_handle_t;
 
-/*
- * GNTTABOP_map_grant_ref: Map the grant entry (<dom>,<ref>) for access
- * by devices and/or host CPUs. If successful, <handle> is a tracking number
- * that must be presented later to destroy the mapping(s). On error, <status>
+/**
+ * GNTTABOP_map_grant_ref: Map the grant entry (\@dom,\@ref) for access
+ * by devices and/or host CPUs. If successful, \@handle is a tracking number
+ * that must be presented later to destroy the mapping(s). On error, \@status
  * is a negative status code.
+ *
  * NOTES:
- *  1. If GNTMAP_device_map is specified then <dev_bus_addr> is the address
+ *  1. If GNTMAP_device_map is specified then \@dev_bus_addr is the address
  *     via which I/O devices may access the granted frame.
  *  2. If GNTMAP_host_map is specified then a mapping will be added at
  *     either a host virtual address in the current address space, or at
  *     a PTE at the specified machine address.  The type of mapping to
  *     perform is selected through the GNTMAP_contains_pte flag, and the
- *     address is specified in <host_addr>.
+ *     address is specified in \@host_addr.
  *  3. Mappings should only be destroyed via GNTTABOP_unmap_grant_ref. If a
  *     host mapping is destroyed by other means then it is *NOT* guaranteed
  *     to be accounted to the correct grant reference!
@@ -342,25 +397,26 @@ typedef uint32_t grant_handle_t;
 struct gnttab_map_grant_ref {
     /* IN parameters. */
     uint64_t host_addr;
-    uint32_t flags;               /* GNTMAP_* */
+    uint32_t flags;               /**< GNTMAP_* */
     grant_ref_t ref;
     domid_t  dom;
     /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    int16_t  status;              /**< GNTST_* status code */
     grant_handle_t handle;
     uint64_t dev_bus_addr;
 };
 typedef struct gnttab_map_grant_ref gnttab_map_grant_ref_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_map_grant_ref_t);
 
-/*
+/**
  * GNTTABOP_unmap_grant_ref: Destroy one or more grant-reference mappings
- * tracked by <handle>. If <host_addr> or <dev_bus_addr> is zero, that
+ * tracked by \@handle. If \@host_addr or \@dev_bus_addr is zero, that
  * field is ignored. If non-zero, they must refer to a device/host mapping
- * that is tracked by <handle>
+ * that is tracked by \@handle.
+ *
  * NOTES:
  *  1. The call may fail in an undefined manner if either mapping is not
- *     tracked by <handle>.
+ *     tracked by \@handle.
  *  3. After executing a batch of unmaps, it is guaranteed that no stale
  *     mappings will remain in the device or host TLBs.
  */
@@ -370,18 +426,19 @@ struct gnttab_unmap_grant_ref {
     uint64_t dev_bus_addr;
     grant_handle_t handle;
     /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    int16_t  status;              /**< GNTST_* status code */
 };
 typedef struct gnttab_unmap_grant_ref gnttab_unmap_grant_ref_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_unmap_grant_ref_t);
 
-/*
- * GNTTABOP_setup_table: Set up a grant table for <dom> comprising at least
- * <nr_frames> pages. The frame addresses are written to the <frame_list>.
- * Only <nr_frames> addresses are written, even if the table is larger.
+/**
+ * GNTTABOP_setup_table: Set up a grant table for \@dom comprising at least
+ * \@nr_frames pages. The frame addresses are written to the \@frame_list.
+ * Only \@nr_frames addresses are written, even if the table is larger.
+ *
  * NOTES:
- *  1. <dom> may be specified as DOMID_SELF.
- *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
+ *  1. \@dom may be specified as DOMID_SELF.
+ *  2. Only a sufficiently-privileged domain may specify \@dom != DOMID_SELF.
  *  3. Xen may not support more than a single grant-table page per domain.
  */
 struct gnttab_setup_table {
@@ -389,7 +446,7 @@ struct gnttab_setup_table {
     domid_t  dom;
     uint32_t nr_frames;
     /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    int16_t  status;              /**< GNTST_* status code */
 #if __XEN_INTERFACE_VERSION__ < 0x00040300
     XEN_GUEST_HANDLE(ulong) frame_list;
 #else
@@ -399,7 +456,7 @@ struct gnttab_setup_table {
 typedef struct gnttab_setup_table gnttab_setup_table_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_setup_table_t);
 
-/*
+/**
  * GNTTABOP_dump_table: Dump the contents of the grant table to the
  * xen console. Debugging use only.
  */
@@ -407,14 +464,14 @@ struct gnttab_dump_table {
     /* IN parameters. */
     domid_t dom;
     /* OUT parameters. */
-    int16_t status;               /* => enum grant_status */
+    int16_t status;               /**< GNTST_* status code */
 };
 typedef struct gnttab_dump_table gnttab_dump_table_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_dump_table_t);
 
-/*
- * GNTTABOP_transfer: Transfer <frame> to a foreign domain. The foreign domain
- * has previously registered its interest in the transfer via <domid, ref>.
+/**
+ * GNTTABOP_transfer: Transfer \@frame to a foreign domain. The foreign domain
+ * has previously registered its interest in the transfer via \@domid, \@ref.
  *
  * Note that, even if the transfer fails, the specified page no longer belongs
  * to the calling domain *unless* the error is GNTST_bad_page.
@@ -427,13 +484,13 @@ struct gnttab_transfer {
     domid_t       domid;
     grant_ref_t   ref;
     /* OUT parameters. */
-    int16_t       status;
+    int16_t       status;               /**< GNTST_* status code */
 };
 typedef struct gnttab_transfer gnttab_transfer_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
 
 
-/*
+/**
  * GNTTABOP_copy: Hypervisor based copy
  * source and destinations can be eithers MFNs or, for foreign domains,
  * grant references. the foreign domain has to grant read/write access
@@ -451,11 +508,6 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_transfer_t);
  * bytes to be copied.
  */
 
-#define _GNTCOPY_source_gref      (0)
-#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
-#define _GNTCOPY_dest_gref        (1)
-#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
-
 struct gnttab_copy {
     /* IN parameters. */
     struct gnttab_copy_ptr {
@@ -467,19 +519,24 @@ struct gnttab_copy {
         uint16_t offset;
     } source, dest;
     uint16_t      len;
-    uint16_t      flags;          /* GNTCOPY_* */
+    uint16_t      flags;          /**< GNTCOPY_* */
+#define _GNTCOPY_source_gref      (0)
+#define GNTCOPY_source_gref       (1<<_GNTCOPY_source_gref)
+#define _GNTCOPY_dest_gref        (1)
+#define GNTCOPY_dest_gref         (1<<_GNTCOPY_dest_gref)
     /* OUT parameters. */
     int16_t       status;
 };
 typedef struct gnttab_copy  gnttab_copy_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_copy_t);
 
-/*
+/**
  * GNTTABOP_query_size: Query the current and maximum sizes of the shared
  * grant table.
+ *
  * NOTES:
- *  1. <dom> may be specified as DOMID_SELF.
- *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
+ *  1. \@dom may be specified as DOMID_SELF.
+ *  2. Only a sufficiently-privileged domain may specify \@dom != DOMID_SELF.
  */
 struct gnttab_query_size {
     /* IN parameters. */
@@ -487,19 +544,20 @@ struct gnttab_query_size {
     /* OUT parameters. */
     uint32_t nr_frames;
     uint32_t max_nr_frames;
-    int16_t  status;              /* => enum grant_status */
+    int16_t  status;              /**< GNTST_* status code */
 };
 typedef struct gnttab_query_size gnttab_query_size_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_query_size_t);
 
-/*
+/**
  * GNTTABOP_unmap_and_replace: Destroy one or more grant-reference mappings
- * tracked by <handle> but atomically replace the page table entry with one
- * pointing to the machine address under <new_addr>.  <new_addr> will be
+ * tracked by \@handle but atomically replace the page table entry with one
+ * pointing to the machine address under \@new_addr. \@new_addr will be
  * redirected to the null entry.
+ *
  * NOTES:
  *  1. The call may fail in an undefined manner if either mapping is not
- *     tracked by <handle>.
+ *     tracked by \@handle.
  *  2. After executing a batch of unmaps, it is guaranteed that no stale
  *     mappings will remain in the device or host TLBs.
  */
@@ -509,13 +567,13 @@ struct gnttab_unmap_and_replace {
     uint64_t new_addr;
     grant_handle_t handle;
     /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    int16_t  status;              /**< GNTST_* status code */
 };
 typedef struct gnttab_unmap_and_replace gnttab_unmap_and_replace_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_unmap_and_replace_t);
 
 #if __XEN_INTERFACE_VERSION__ >= 0x0003020a
-/*
+/**
  * GNTTABOP_set_version: Request a particular version of the grant
  * table shared table structure.  This operation may be used to toggle
  * between different versions, but must be performed while no grants
@@ -529,32 +587,33 @@ typedef struct gnttab_set_version gnttab_set_version_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_set_version_t);
 
 
-/*
+/**
  * GNTTABOP_get_status_frames: Get the list of frames used to store grant
- * status for <dom>. In grant format version 2, the status is separated
+ * status for \@dom. In grant format version 2, the status is separated
  * from the other shared grant fields to allow more efficient synchronization
  * using barriers instead of atomic cmpexch operations.
- * <nr_frames> specify the size of vector <frame_list>.
- * The frame addresses are returned in the <frame_list>.
- * Only <nr_frames> addresses are returned, even if the table is larger.
+ * \@nr_frames specifies the size of vector \@frame_list.
+ * The frame addresses are returned in the \@frame_list.
+ * Only \@nr_frames addresses are returned, even if the table is larger.
+ *
  * NOTES:
- *  1. <dom> may be specified as DOMID_SELF.
- *  2. Only a sufficiently-privileged domain may specify <dom> != DOMID_SELF.
+ *  1. \@dom may be specified as DOMID_SELF.
+ *  2. Only a sufficiently-privileged domain may specify \@dom != DOMID_SELF.
  */
 struct gnttab_get_status_frames {
     /* IN parameters. */
     uint32_t nr_frames;
     domid_t  dom;
     /* OUT parameters. */
-    int16_t  status;              /* => enum grant_status */
+    int16_t  status;              /**< GNTST_* status code */
     XEN_GUEST_HANDLE(uint64_t) frame_list;
 };
 typedef struct gnttab_get_status_frames gnttab_get_status_frames_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_get_status_frames_t);
 
-/*
+/**
  * GNTTABOP_get_version: Get the grant table version which is in
- * effect for domain <dom>.
+ * effect for domain \@dom.
  */
 struct gnttab_get_version {
     /* IN parameters */
@@ -566,7 +625,7 @@ struct gnttab_get_version {
 typedef struct gnttab_get_version gnttab_get_version_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_get_version_t);
 
-/*
+/**
  * GNTTABOP_swap_grant_ref: Swap the contents of two grant entries.
  */
 struct gnttab_swap_grant_ref {
@@ -574,12 +633,12 @@ struct gnttab_swap_grant_ref {
     grant_ref_t ref_a;
     grant_ref_t ref_b;
     /* OUT parameters */
-    int16_t status;             /* => enum grant_status */
+    int16_t status;             /**< GNTST_* status code */
 };
 typedef struct gnttab_swap_grant_ref gnttab_swap_grant_ref_t;
 DEFINE_XEN_GUEST_HANDLE(gnttab_swap_grant_ref_t);
 
-/*
+/**
  * Issue one or more cache maintenance operations on a portion of a
  * page granted to the calling domain by a foreign domain.
  */
@@ -588,8 +647,8 @@ struct gnttab_cache_flush {
         uint64_t dev_bus_addr;
         grant_ref_t ref;
     } a;
-    uint16_t offset; /* offset from start of grant */
-    uint16_t length; /* size within the grant */
+    uint16_t offset; /**< offset from start of grant */
+    uint16_t length; /**< size within the grant */
 #define GNTTAB_CACHE_CLEAN          (1u<<0)
 #define GNTTAB_CACHE_INVAL          (1u<<1)
 #define GNTTAB_CACHE_SOURCE_GREF    (1u<<31)
@@ -600,40 +659,60 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
 
 #endif /* __XEN_INTERFACE_VERSION__ */
 
-/*
- * Bitfield values for gnttab_map_grant_ref.flags.
+/**
+ * @def GNTMAP_device_map
+ * Bitfield value for gnttab_map_grant_ref.flags: Map the grant entry for
+ * access by I/O devices.
  */
- /* Map the grant entry for access by I/O devices. */
 #define _GNTMAP_device_map      (0)
 #define GNTMAP_device_map       (1<<_GNTMAP_device_map)
- /* Map the grant entry for access by host CPUs. */
+
+/**
+ * @def GNTMAP_host_map
+ * Bitfield value for gnttab_map_grant_ref.flags: Map the grant entry for
+ * access by host CPUs.
+ */
 #define _GNTMAP_host_map        (1)
 #define GNTMAP_host_map         (1<<_GNTMAP_host_map)
- /* Accesses to the granted frame will be restricted to read-only access. */
+
+/**
+ * @def GNTMAP_readonly
+ * Bitfield value for gnttab_map_grant_ref.flags: Accesses to the granted frame
+ * will be restricted to read-only access.
+ */
 #define _GNTMAP_readonly        (2)
 #define GNTMAP_readonly         (1<<_GNTMAP_readonly)
- /*
-  * GNTMAP_host_map subflag:
-  *  0 => The host mapping is usable only by the guest OS.
-  *  1 => The host mapping is usable by guest OS + current application.
-  */
+
+/**
+ * @def GNTMAP_application_map
+ * Bitfield value for gnttab_map_grant_ref.flags.
+ *
+ * GNTMAP_host_map subflag:
+ * * 0 => The host mapping is usable only by the guest OS.
+ * * 1 => The host mapping is usable by guest OS + current application.
+ */
 #define _GNTMAP_application_map (3)
 #define GNTMAP_application_map  (1<<_GNTMAP_application_map)
 
- /*
-  * GNTMAP_contains_pte subflag:
-  *  0 => This map request contains a host virtual address.
-  *  1 => This map request contains the machine addess of the PTE to update.
-  */
+/**
+ * @def GNTMAP_contains_pte
+ * Bitfield value for gnttab_map_grant_ref.flags.
+ *
+ * GNTMAP_contains_pte subflag:
+ * * 0 => This map request contains a host virtual address.
+ * * 1 => This map request contains the machine address of the PTE to update.
+ */
 #define _GNTMAP_contains_pte    (4)
 #define GNTMAP_contains_pte     (1<<_GNTMAP_contains_pte)
 
 #define _GNTMAP_can_fail        (5)
 #define GNTMAP_can_fail         (1<<_GNTMAP_can_fail)
 
-/*
- * Bits to be placed in guest kernel available PTE bits (architecture
- * dependent; only supported when XENFEAT_gnttab_map_avail_bits is set).
+/**
+ * @def GNTMAP_guest_avail_mask
+ * Bitfield value for gnttab_map_grant_ref.flags: Bits to be placed in guest
+ * kernel available PTE bits (architecture dependent; only supported when
+ * XENFEAT_gnttab_map_avail_bits is set).
  */
 #define _GNTMAP_guest_avail0    (16)
 #define GNTMAP_guest_avail_mask ((uint32_t)~0 << _GNTMAP_guest_avail0)
@@ -641,21 +720,19 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
 /*
  * Values for error status returns. All errors are -ve.
  */
-/* ` enum grant_status { */
-#define GNTST_okay             (0)  /* Normal return.                        */
-#define GNTST_general_error    (-1) /* General undefined error.              */
-#define GNTST_bad_domain       (-2) /* Unrecognsed domain id.                */
-#define GNTST_bad_gntref       (-3) /* Unrecognised or inappropriate gntref. */
-#define GNTST_bad_handle       (-4) /* Unrecognised or inappropriate handle. */
-#define GNTST_bad_virt_addr    (-5) /* Inappropriate virtual address to map. */
-#define GNTST_bad_dev_addr     (-6) /* Inappropriate device address to unmap.*/
-#define GNTST_no_device_space  (-7) /* Out of space in I/O MMU.              */
-#define GNTST_permission_denied (-8) /* Not enough privilege for operation.  */
-#define GNTST_bad_page         (-9) /* Specified page was invalid for op.    */
-#define GNTST_bad_copy_arg    (-10) /* copy arguments cross page boundary.   */
-#define GNTST_address_too_big (-11) /* transfer page address too large.      */
-#define GNTST_eagain          (-12) /* Operation not done; try again.        */
-/* ` } */
+#define GNTST_okay             (0)  /**< Normal return.                        */
+#define GNTST_general_error    (-1) /**< General undefined error.              */
+#define GNTST_bad_domain       (-2) /**< Unrecognised domain id.               */
+#define GNTST_bad_gntref       (-3) /**< Unrecognised or inappropriate gntref. */
+#define GNTST_bad_handle       (-4) /**< Unrecognised or inappropriate handle. */
+#define GNTST_bad_virt_addr    (-5) /**< Inappropriate virtual address to map. */
+#define GNTST_bad_dev_addr     (-6) /**< Inappropriate device address to unmap.*/
+#define GNTST_no_device_space  (-7) /**< Out of space in I/O MMU.              */
+#define GNTST_permission_denied (-8) /**< Not enough privilege for operation.  */
+#define GNTST_bad_page         (-9) /**< Specified page was invalid for op.    */
+#define GNTST_bad_copy_arg    (-10) /**< copy arguments cross page boundary.   */
+#define GNTST_address_too_big (-11) /**< transfer page address too large.      */
+#define GNTST_eagain          (-12) /**< Operation not done; try again.        */
 
 #define GNTTABOP_error_msgs {                   \
     "okay",                                     \
@@ -673,6 +750,10 @@ DEFINE_XEN_GUEST_HANDLE(gnttab_cache_flush_t);
     "operation not done; try again"             \
 }
 
+/**
+ * @}
+ */
+
 #endif /* __XEN_PUBLIC_GRANT_TABLE_H__ */
 
 /*
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 08:43:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 08:43:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.124999.235358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Ve-0005yE-4f; Mon, 10 May 2021 08:43:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 124999.235358; Mon, 10 May 2021 08:43:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg1Ve-0005x2-0s; Mon, 10 May 2021 08:43:26 +0000
Received: by outflank-mailman (input) for mailman id 124999;
 Mon, 10 May 2021 08:43:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jR2S=KF=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lg1U6-0008L0-OP
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 08:41:50 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id e9f42ebd-eb64-47a7-a2ce-c4d9b87a9826;
 Mon, 10 May 2021 08:41:25 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2FB6C1424;
 Mon, 10 May 2021 01:41:25 -0700 (PDT)
Received: from e125770.cambridge.arm.com (e125770.cambridge.arm.com
 [10.1.197.16])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BE41A3F719;
 Mon, 10 May 2021 01:41:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e9f42ebd-eb64-47a7-a2ce-c4d9b87a9826
From: Luca Fancellu <luca.fancellu@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	wei.chen@arm.com,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v6 7/9] docs: Change Makefile and sphinx configuration for doxygen
Date: Mon, 10 May 2021 09:41:03 +0100
Message-Id: <20210510084105.17108-8-luca.fancellu@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210510084105.17108-1-luca.fancellu@arm.com>
References: <20210510084105.17108-1-luca.fancellu@arm.com>

Modify docs/Makefile to call doxygen and generate the sphinx
html documentation from the doxygen XML output.
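In isolation, the substitution done by the new xen.doxyfile rule can be
sketched in Python (the template lines and paths here are invented for
illustration, they are not the real xen.doxyfile.in contents):

```python
# Sketch of what the xen.doxyfile make rule does with sed: substitute the
# @XEN_BASE@/@DOXY_OUT@ placeholders, then append one INPUT line per header
# listed in xen-doxygen/doxy_input.list. Template text and paths below are
# illustrative only.
template = (
    "STRIP_FROM_PATH = @XEN_BASE@\n"
    "OUTPUT_DIRECTORY = @DOXY_OUT@\n"
)
doxy_input_list = ["xen/include/public/grant_table.h"]

doxyfile = template.replace("@XEN_BASE@", "/src/xen")
doxyfile = doxyfile.replace("@DOXY_OUT@", "doxygen-output")
for header in doxy_input_list:
    doxyfile += 'INPUT += "/src/xen/%s"\n' % header

print(doxyfile, end="")
```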

Modify the docs/conf.py sphinx configuration file to set up
the breathe extension, which acts as a bridge between
sphinx and doxygen.
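At build time conf.py reads the DOXYGEN_OUTPUT value back out of the
Makefile; reduced to a standalone sketch (with the Makefile text inlined
here rather than read from disk):

```python
# Standalone sketch of how conf.py recovers DOXYGEN_OUTPUT from
# docs/Makefile. The Makefile text is inlined for illustration; the real
# code iterates over open(u"Makefile").
makefile_text = """\
# Directory in which the doxygen documentation is created
DOXYGEN_OUTPUT = doxygen-output
"""

xen_doxygen_output = None
for line in makefile_text.splitlines():
    if line.startswith("DOXYGEN_OUTPUT"):
        xen_doxygen_output = line.split("=")[1].strip()
        break

print(xen_doxygen_output)  # doxygen-output
```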

Add entries to .gitignore for the files generated by
the doxygen build.

Signed-off-by: Luca Fancellu <luca.fancellu@arm.com>
---
 .gitignore    |  6 ++++++
 docs/Makefile | 42 +++++++++++++++++++++++++++++++++++++++---
 docs/conf.py  | 48 +++++++++++++++++++++++++++++++++++++++++++++---
 3 files changed, 90 insertions(+), 6 deletions(-)

diff --git a/.gitignore b/.gitignore
index 1c2fa1530b..d271e0ce6a 100644
--- a/.gitignore
+++ b/.gitignore
@@ -58,6 +58,12 @@ docs/man7/
 docs/man8/
 docs/pdf/
 docs/txt/
+docs/doxygen-output
+docs/sphinx
+docs/xen.doxyfile
+docs/xen.doxyfile.tmp
+docs/xen-doxygen/doxygen_include.h
+docs/xen-doxygen/doxygen_include.h.tmp
 extras/mini-os*
 install/*
 stubdom/*-minios-config.mk
diff --git a/docs/Makefile b/docs/Makefile
index 8de1efb6f5..2f784c36ce 100644
--- a/docs/Makefile
+++ b/docs/Makefile
@@ -17,6 +17,18 @@ TXTSRC-y := $(sort $(shell find misc -name '*.txt' -print))
 
 PANDOCSRC-y := $(sort $(shell find designs/ features/ misc/ process/ specs/ \( -name '*.pandoc' -o -name '*.md' \) -print))
 
+# Directory in which the doxygen documentation is created
+# This must be kept in sync with breathe_projects value in conf.py
+DOXYGEN_OUTPUT = doxygen-output
+
+# Doxygen input headers from xen-doxygen/doxy_input.list file
+DOXY_LIST_SOURCES != cat "xen-doxygen/doxy_input.list"
+DOXY_LIST_SOURCES := $(realpath $(addprefix $(XEN_ROOT)/,$(DOXY_LIST_SOURCES)))
+
+DOXY_DEPS := xen.doxyfile \
+			 xen-doxygen/mainpage.md \
+			 xen-doxygen/doxygen_include.h
+
 # Documentation targets
 $(foreach i,$(MAN_SECTIONS), \
   $(eval DOC_MAN$(i) := $(patsubst man/%.$(i),man$(i)/%.$(i), \
@@ -46,8 +58,28 @@ all: build
 build: html txt pdf man-pages figs
 
 .PHONY: sphinx-html
-sphinx-html:
-	sphinx-build -b html . sphinx/html
+sphinx-html: $(DOXY_DEPS) $(DOXY_LIST_SOURCES)
+ifneq ($(SPHINXBUILD),no)
+	$(DOXYGEN) xen.doxyfile
+	XEN_ROOT=$(realpath $(XEN_ROOT)) $(SPHINXBUILD) -b html . sphinx/html
+else
+	@echo "Sphinx is not installed; skipping sphinx-html documentation."
+endif
+
+xen.doxyfile: xen.doxyfile.in xen-doxygen/doxy_input.list
+	@echo "Generating $@"
+	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< \
+		| sed -e "s,@DOXY_OUT@,$(DOXYGEN_OUTPUT),g" > $@.tmp
+	@$(foreach inc,\
+		$(DOXY_LIST_SOURCES),\
+		echo "INPUT += \"$(inc)\"" >> $@.tmp; \
+	)
+	mv $@.tmp $@
+
+xen-doxygen/doxygen_include.h: xen-doxygen/doxygen_include.h.in
+	@echo "Generating $@"
+	@sed -e "s,@XEN_BASE@,$(realpath $(XEN_ROOT)),g" $< > $@.tmp
+	@mv $@.tmp $@
 
 .PHONY: html
 html: $(DOC_HTML) html/index.html
@@ -71,7 +103,11 @@ clean: clean-man-pages
 	$(MAKE) -C figs clean
 	rm -rf .word_count *.aux *.dvi *.bbl *.blg *.glo *.idx *~
 	rm -rf *.ilg *.log *.ind *.toc *.bak *.tmp core
-	rm -rf html txt pdf sphinx/html
+	rm -rf html txt pdf sphinx $(DOXYGEN_OUTPUT)
+	rm -f xen.doxyfile
+	rm -f xen.doxyfile.tmp
+	rm -f xen-doxygen/doxygen_include.h
+	rm -f xen-doxygen/doxygen_include.h.tmp
 
 .PHONY: distclean
 distclean: clean
diff --git a/docs/conf.py b/docs/conf.py
index 50e41501db..a48de42331 100644
--- a/docs/conf.py
+++ b/docs/conf.py
@@ -13,13 +13,17 @@
 # add these directories to sys.path here. If the directory is relative to the
 # documentation root, use os.path.abspath to make it absolute, like shown here.
 #
-# import os
-# import sys
+import os
+import sys
 # sys.path.insert(0, os.path.abspath('.'))
 
 
 # -- Project information -----------------------------------------------------
 
+if "XEN_ROOT" not in os.environ:
+    sys.exit("$XEN_ROOT environment variable undefined.")
+XEN_ROOT = os.path.abspath(os.environ["XEN_ROOT"])
+
 project = u'Xen'
 copyright = u'2019, The Xen development community'
 author = u'The Xen development community'
@@ -35,6 +39,7 @@ try:
             xen_subver = line.split(u"=")[1].strip()
         elif line.startswith(u"export XEN_EXTRAVERSION"):
             xen_extra = line.split(u"=")[1].split(u"$", 1)[0].strip()
+
 except:
     pass
 finally:
@@ -44,6 +49,15 @@ finally:
     else:
         version = release = u"unknown version"
 
+xen_doxygen_output = None
+
+for line in open(u"Makefile"):
+    if line.startswith(u"DOXYGEN_OUTPUT"):
+        xen_doxygen_output = line.split(u"=")[1].strip()
+        break
+if xen_doxygen_output is None:
+    sys.exit("DOXYGEN_OUTPUT variable undefined.")
+
 # -- General configuration ---------------------------------------------------
 
 # If your documentation needs a minimal Sphinx version, state it here.
@@ -53,7 +67,8 @@ needs_sphinx = '1.4'
 # Add any Sphinx extension module names here, as strings. They can be
 # extensions coming with Sphinx (named 'sphinx.ext.*') or your custom
 # ones.
-extensions = []
+# breathe -> extension that integrates doxygen xml output with sphinx
+extensions = ['breathe']
 
 # Add any paths that contain templates here, relative to this directory.
 templates_path = ['_templates']
@@ -175,6 +190,33 @@ texinfo_documents = [
      'Miscellaneous'),
 ]
 
+# -- Options for Breathe extension -------------------------------------------
+
+breathe_projects = {
+    "Xen": "{}/docs/{}/xml".format(XEN_ROOT, xen_doxygen_output)
+}
+breathe_default_project = "Xen"
+
+breathe_domain_by_extension = {
+    "h": "c",
+    "c": "c",
+}
+breathe_separate_member_pages = True
+breathe_show_enumvalue_initializer = True
+breathe_show_define_initializer = True
+
+# Qualifiers on a function cause Sphinx/Breathe to emit warnings such
+# as "Error when parsing function declaration" and more.  This is a
+# list of strings that the parser should additionally accept as
+# attributes.
+cpp_id_attributes = [
+    '__syscall', '__deprecated', '__may_alias',
+    '__used', '__unused', '__weak',
+    '__DEPRECATED_MACRO', 'FUNC_NORETURN',
+    '__subsystem',
+]
+c_id_attributes = cpp_id_attributes
+
 
 # -- Options for Epub output -------------------------------------------------
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:26:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:26:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125037.235378 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Aw-0002dK-H6; Mon, 10 May 2021 09:26:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125037.235378; Mon, 10 May 2021 09:26:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Aw-0002dD-EB; Mon, 10 May 2021 09:26:06 +0000
Received: by outflank-mailman (input) for mailman id 125037;
 Mon, 10 May 2021 09:26:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg2Av-0002d6-EB
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:26:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 295a8820-6bc8-41e7-b4a3-7e77cb251a7b;
 Mon, 10 May 2021 09:26:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 70503B020;
 Mon, 10 May 2021 09:26:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 295a8820-6bc8-41e7-b4a3-7e77cb251a7b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620638761; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=EuJP2b73NAg5XmnQx/n3IoYjeAXq4Twnue7NdfhH42o=;
	b=Z5/aT/FeRGwMYtlqS3TW9zJkqYCKt65GYPv57zoxKaGeR6zscrDlSRdkHdE60gNtVI8Ybg
	DulSqsn4wD3PCAEKVa4MAljfEA0mA714HoF4WFa8EuDM0tIN0zK+AMov4zyEZd6VOaqCm2
	ZL+gW7dtnRNx0aygrYba9BVh85pvnIw=
Subject: Re: [PATCH 1/1] xen/unpopulated-alloc: fix error return code in
 fill_list()
To: Zhen Lei <thunder.leizhen@huawei.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Morton <akpm@linux-foundation.org>,
 Dan Carpenter <dan.carpenter@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 linux-kernel <linux-kernel@vger.kernel.org>
References: <20210508021913.1727-1-thunder.leizhen@huawei.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <794bdb21-4531-7e18-e4ca-e30c450b526b@suse.com>
Date: Mon, 10 May 2021 11:26:00 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210508021913.1727-1-thunder.leizhen@huawei.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="8YWKunPbgPgWtF3uYtVYGpYUcNgD8R5wI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--8YWKunPbgPgWtF3uYtVYGpYUcNgD8R5wI
Content-Type: multipart/mixed; boundary="YvKUouaZMkG0wguPs2ZMybSSshQSIPCJa";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Zhen Lei <thunder.leizhen@huawei.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Morton <akpm@linux-foundation.org>,
 Dan Carpenter <dan.carpenter@oracle.com>,
 Dan Williams <dan.j.williams@intel.com>,
 xen-devel <xen-devel@lists.xenproject.org>,
 linux-kernel <linux-kernel@vger.kernel.org>
Message-ID: <794bdb21-4531-7e18-e4ca-e30c450b526b@suse.com>
Subject: Re: [PATCH 1/1] xen/unpopulated-alloc: fix error return code in
 fill_list()
References: <20210508021913.1727-1-thunder.leizhen@huawei.com>
In-Reply-To: <20210508021913.1727-1-thunder.leizhen@huawei.com>

--YvKUouaZMkG0wguPs2ZMybSSshQSIPCJa
Content-Type: multipart/mixed;
 boundary="------------754B54539108BCAB53E56AFA"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------754B54539108BCAB53E56AFA
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 08.05.21 04:19, Zhen Lei wrote:
> Fix to return a negative error code from the error handling case instead
> of 0, as done elsewhere in this function.
>
> Fixes: a4574f63edc6 ("mm/memremap_pages: convert to 'struct range'")
> Reported-by: Hulk Robot <hulkci@huawei.com>
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>

Pushed to xen/tip.git for-linus-5.13b


Juergen

--------------754B54539108BCAB53E56AFA
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------754B54539108BCAB53E56AFA--

--YvKUouaZMkG0wguPs2ZMybSSshQSIPCJa--

--8YWKunPbgPgWtF3uYtVYGpYUcNgD8R5wI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCY/CgFAwAAAAAACgkQsN6d1ii/Ey88
Hwf+MvWd8YLRJvVcb+/8u7R2x3Qyyfc4HZVJKYO8J8PqLNTnh0SQOiUKgYxzdAb5zx3bp42MAFqn
sxpFIxR3qXVohlX24m/WZkqGTrKl7MeCogWx1w4JkVGk6b60aTHVl1Pjhz7Uu6fRRoGOnZxC162J
rfaVbVZO3DNtae7hK8Ul0KyMMR2JIEZPrxHpT1i5a1d8Il3NCL5faBg2JVxtgqhR1T8xOy6yEbfj
dUVLw3tr538kgQXUiJeyJQOhEYFrre9Vx5u8VfS+GTc+ech+wD2Fn9tYThZjiTy806FNVmiOoBQt
UpOb0Wncz8HFkoJTqNxjhLV+P17MbF06dTkHzyLlSw==
=USm/
-----END PGP SIGNATURE-----

--8YWKunPbgPgWtF3uYtVYGpYUcNgD8R5wI--


From xen-devel-bounces@lists.xenproject.org Mon May 10 09:26:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:26:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125038.235390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2BO-000393-Ra; Mon, 10 May 2021 09:26:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125038.235390; Mon, 10 May 2021 09:26:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2BO-00038w-NF; Mon, 10 May 2021 09:26:34 +0000
Received: by outflank-mailman (input) for mailman id 125038;
 Mon, 10 May 2021 09:26:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg2BN-00038e-Nf
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:26:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3db41407-7a81-4d41-9890-9f14bb1dcb99;
 Mon, 10 May 2021 09:26:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 11A6FB034;
 Mon, 10 May 2021 09:26:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3db41407-7a81-4d41-9890-9f14bb1dcb99
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620638792; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=2+8TTrA68oj4V/scIVGFMFkKbN2KDcunqkf8eElhtxc=;
	b=ZtfgP8XhujztQdB70jt0YpRc+su9NvKQYJJq/36S1F6Z/+7jfpPkG+E4FrpSvULTyMMotG
	l7zJOD1b36mG9tL0FMWinzao70ji+M/VjvK9V92jz4MQR7IZLsy4T0NVNwiSMUvoX9CceP
	Fe+k+gGNadESL2ycF6UjYq6PllRhAL4=
Subject: Re: [PATCH] xen/gntdev: fix gntdev_mmap() error exit path
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
References: <20210423054038.26696-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <4b034ebb-4b41-ab00-3cf7-d8943e62474a@suse.com>
Date: Mon, 10 May 2021 11:26:31 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210423054038.26696-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="wVYMBkZzxegc9HeHGTpuRIibzEXEISYF4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--wVYMBkZzxegc9HeHGTpuRIibzEXEISYF4
Content-Type: multipart/mixed; boundary="gakxpOIby6mSXEggqm5llSC2S6sNcng8J";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, stable@vger.kernel.org,
 =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?= <marmarek@invisiblethingslab.com>
Message-ID: <4b034ebb-4b41-ab00-3cf7-d8943e62474a@suse.com>
Subject: Re: [PATCH] xen/gntdev: fix gntdev_mmap() error exit path
References: <20210423054038.26696-1-jgross@suse.com>
In-Reply-To: <20210423054038.26696-1-jgross@suse.com>

--gakxpOIby6mSXEggqm5llSC2S6sNcng8J
Content-Type: multipart/mixed;
 boundary="------------CCD07EF3ED352F535AE0D5EB"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------CCD07EF3ED352F535AE0D5EB
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 23.04.21 07:40, Juergen Gross wrote:
> Commit d3eeb1d77c5d0af ("xen/gntdev: use mmu_interval_notifier_insert")
> introduced an error in gntdev_mmap(): in case the call of
> mmu_interval_notifier_insert_locked() fails the exit path should not
> call mmu_interval_notifier_remove(), as this might result in NULL
> dereferences.
>
> One reason for failure is e.g. a signal pending for the running
> process.
>=20
> Fixes: d3eeb1d77c5d0af ("xen/gntdev: use mmu_interval_notifier_insert")
> Cc: stable@vger.kernel.org
> Reported-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Tested-by: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Signed-off-by: Juergen Gross <jgross@suse.com>

Pushed to xen/tip.git for-linus-5.13b


Juergen

--------------CCD07EF3ED352F535AE0D5EB--

--gakxpOIby6mSXEggqm5llSC2S6sNcng8J--

--wVYMBkZzxegc9HeHGTpuRIibzEXEISYF4
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCY/EcFAwAAAAAACgkQsN6d1ii/Ey9G
MwgAndiVu5uJVs3ZsCPTnVemia6fLTVaaS19A0bAxmfh7ROoSvPZ1Uec+VNlGdGfCwjXh1n/Sc55
euPrMIFY4jNZy4ZW1pXog40GeQu9TDWpyOXfKn7c9WE6MCkR30fOGbIiYkRz9LFwETckdmNgkiOm
xXLjCeF5Vryi8MC+RF+i270HRt5lxZqU0aWb8wrMKAHZmXJ8wnUv3V/ksLGuLPldzG+XdMzpj61X
sYekZAJ4JaEwqHRHJgbYeU5wupAyt8l96mT/P/7dEzNMQqRHrwy3SiyRfXZME/Qqv2yctxdqy2Rt
OUmJeihkiVfMOW505OkbkwerlLTrrqNei1NsjEHQbA==
=Ptr7
-----END PGP SIGNATURE-----

--wVYMBkZzxegc9HeHGTpuRIibzEXEISYF4--


From xen-devel-bounces@lists.xenproject.org Mon May 10 09:50:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:50:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125049.235414 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Yt-0006b9-7e; Mon, 10 May 2021 09:50:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125049.235414; Mon, 10 May 2021 09:50:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Yt-0006b2-4I; Mon, 10 May 2021 09:50:51 +0000
Received: by outflank-mailman (input) for mailman id 125049;
 Mon, 10 May 2021 09:50:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2Yr-0006aU-Rj
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:50:49 +0000
Received: from mail-pl1-x62f.google.com (unknown [2607:f8b0:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5288974c-9a9f-4a25-a2f7-6430acd2af01;
 Mon, 10 May 2021 09:50:49 +0000 (UTC)
Received: by mail-pl1-x62f.google.com with SMTP id s20so8846810plr.13
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:50:49 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id i62sm10957565pfc.162.2021.05.10.02.50.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:50:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5288974c-9a9f-4a25-a2f7-6430acd2af01
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=l57NLzuwUUMRZhssJX0lL8JJDXGk+1/UfIIsGGfy3jA=;
        b=huOTaemx/vQNaoB1cvrnA57ypE0vmqhe7ShZkgp3NAItXrk4k8kvOfBQCV83z3IngU
         LjO0os4LWJvRIc/zT8XVvQ/udpySO4cVFayr8AZ/h253vg4E9s3gUhllP1veAksDKlv1
         u/GQbBSuwBtUiADmsFqc0gMGCn19+8viSk9bQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=l57NLzuwUUMRZhssJX0lL8JJDXGk+1/UfIIsGGfy3jA=;
        b=XnZ1rbtGxLd0xhQsfkLI7uwqusW+5OS+txujJ9uVS1ijtB2Mu7vEcaUKcpgCLAgElF
         zXrJMYWqhuCV7MKpmDWaMJBwERYi0yZ3ELw/YkfYq/58yUiO2DIq442Vq+HsW22s0KAX
         zcLhxNHVgytfpY1W4p9hxm/xylDjyTGmDPbWXD2clIybqES+cjIT2tVlABf3ADS55o05
         l/zjgqW3PTKOv+Nn01KqYI/JN0PFmw1x1mAEx6bkLgUPtQcImctx0NiGXzZ9GhBL4Njj
         J6YNWyoKrr/kSpkQ8E0fn+NUr/CjNUu0Y4CGYmh5vrj7CiYqNeQJEAbC+5LRaDZNybZu
         KAgg==
X-Gm-Message-State: AOAM532cRct+3GYR7SHSivF1V1gooVTlMYb1qToGqocoXwpn3u1hE04C
	GXOQ/U1fQPxQK/RjAEkyZWsDVg==
X-Google-Smtp-Source: ABdhPJxWVuCYZwrOa/E4SzReuXkZoXpg2ApXBYG9GCU58CD3SXKffBuVhoPZLaoYkILB2nJk2guWbw==
X-Received: by 2002:a17:902:db09:b029:ee:ad5e:cd58 with SMTP id m9-20020a170902db09b02900eead5ecd58mr23870524plx.78.1620640248239;
        Mon, 10 May 2021 02:50:48 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 01/15] swiotlb: Refactor swiotlb init functions
Date: Mon, 10 May 2021 17:50:12 +0800
Message-Id: <20210510095026.3477496-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
initialization to make the code reusable.

Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..d3232fc19385 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,30 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+
+	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +207,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,7 +295,6 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -297,20 +309,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:50:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:50:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125048.235401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Yk-0006Ic-Qb; Mon, 10 May 2021 09:50:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125048.235401; Mon, 10 May 2021 09:50:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Yk-0006IV-Nd; Mon, 10 May 2021 09:50:42 +0000
Received: by outflank-mailman (input) for mailman id 125048;
 Mon, 10 May 2021 09:50:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2Yj-0006IP-MJ
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:50:41 +0000
Received: from mail-pf1-x431.google.com (unknown [2607:f8b0:4864:20::431])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5de35131-f720-45a4-beac-c0f62c6ab20c;
 Mon, 10 May 2021 09:50:40 +0000 (UTC)
Received: by mail-pf1-x431.google.com with SMTP id k19so13248835pfu.5
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:50:40 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id 3sm10134744pff.132.2021.05.10.02.50.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:50:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5de35131-f720-45a4-beac-c0f62c6ab20c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=suql6mxhF7VWacBQX+vVI90y4jo7pK7o7lRK42P1ziE=;
        b=lOHb/wGLPI35HYDpC1OybPS0wSHtEceWE9NWy4yGK8ilXXldfGkKtOa8K66bNr4LEO
         Pr9azSla0OuJEaDOigJpQisql/QT6Rzgq/OL94Z5dEW4Jo9qDvVnUfwuUeDl/NGvpzhn
         KxdIcdeNTo8YF1ZknPwF9akS+XlqzfXmq6cBU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=suql6mxhF7VWacBQX+vVI90y4jo7pK7o7lRK42P1ziE=;
        b=QmnjJ+Ku233DQNV7DGQGAIoM8c4t0gAkwqIqRKMFzzOaNAsCL73bpgSEN9j/1RSOOS
         6BCY+kixfn7K6ghMnh+P3WtL7JrYmcVqvo2ZuD0fGXAHE4KdyhNPui2G2dP244sRjYDz
         d/LhbnylOV6AUTu+94BOwG9lh6g81KSrb4CVxIHg/EA+fCMnxfn2fGvtU0T8G8So2+Wg
         j39BeTL9luOG5YwMZrYyXuKDbOonrSLakNNbnqiWQ6gYl7PJGU/TmkzckLVYE8rlR6S7
         sMEUrwqIQC/zw9jeui0PL1L3BRlSXHN/WlzYarvR5i/uTkdKODcS13DQNlRdDvMbfjbf
         Ol4g==
X-Gm-Message-State: AOAM532gpeB4DVvnDUeB3lmIXKAZMh8CJLUyJLTSzRlBZbDmx91F5W3u
	ukdxc3x+wSUKQW6uX2C8a5lgrA==
X-Google-Smtp-Source: ABdhPJxaRNqwofbH1Q1gT6hjDo7UCjX7KCvjB0rJLaDOY22mJQZwGH/owOzO7oGoIZrGlaECEV1xwQ==
X-Received: by 2002:a62:2a14:0:b029:263:20c5:6d8c with SMTP id q20-20020a622a140000b029026320c56d8cmr24491637pfq.23.1620640239613;
        Mon, 10 May 2021 02:50:39 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	Claire Chang <tientzu@google.com>
Subject: [PATCH v6 00/15] Restricted DMA
Date: Mon, 10 May 2021 17:50:11 +0800
Message-Id: <20210510095026.3477496-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Claire Chang <tientzu@google.com>

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
not behind an IOMMU. As PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (remote Wi-Fi exploits: [1a], [1b], which show a
full chain of exploits; [2], [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
The feature on its own provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
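As a rough illustration of how a platform might declare such a region (a
hypothetical sketch, not taken verbatim from this series; the
"restricted-dma-pool" compatible and the memory-region reference follow the
binding proposed in the dt-bindings patch, while the addresses, sizes and node
names are made up for the example):

```dts
/* Hypothetical fragment: a 4 MiB reserved region that a Wi-Fi
 * device is restricted to for all of its streaming DMA and
 * coherent allocations. Addresses and node names are invented. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	wifi_restricted_dma: restricted-dma@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;
	};
};

pcie_wifi: wifi@0,0 {
	/* Point the device at the restricted pool; the OF plumbing
	 * added later in the series hooks this up to swiotlb. */
	memory-region = <&wifi_restricted_dma>;
};
```

The kernel side then bounces the device's streaming DMA through that region,
while the firmware-level MPU (as in [4]) enforces that the device cannot touch
memory outside it.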

v6:
Address the comments in v5

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/

Claire Chang (15):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Add DMA_RESTRICTED_POOL
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Add a new get_io_tlb_mem getter
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Bounce data from/to restricted DMA pool if available
  swiotlb: Move alloc_size to find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  dma-direct: Add a new wrapper __dma_direct_free_pages()
  swiotlb: Add restricted DMA alloc/free support.
  dma-direct: Allocate memory from restricted DMA pool if available
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  27 ++
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  25 ++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   5 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  41 ++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  63 +++--
 kernel/dma/direct.h                           |   9 +-
 kernel/dma/swiotlb.c                          | 242 +++++++++++++-----
 15 files changed, 356 insertions(+), 97 deletions(-)

-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:50:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:50:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125051.235426 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Z1-000718-Gy; Mon, 10 May 2021 09:50:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125051.235426; Mon, 10 May 2021 09:50:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Z1-000711-DD; Mon, 10 May 2021 09:50:59 +0000
Received: by outflank-mailman (input) for mailman id 125051;
 Mon, 10 May 2021 09:50:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2Z0-0006yg-86
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:50:58 +0000
Received: from mail-pj1-x102f.google.com (unknown [2607:f8b0:4864:20::102f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c12bb93c-f2ba-419c-84de-ec0eb41aa6f3;
 Mon, 10 May 2021 09:50:57 +0000 (UTC)
Received: by mail-pj1-x102f.google.com with SMTP id
 lj11-20020a17090b344bb029015bc3073608so9988256pjb.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:50:57 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id 14sm10615255pfv.33.2021.05.10.02.50.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:50:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c12bb93c-f2ba-419c-84de-ec0eb41aa6f3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=xzceaTJbUUoQMRiby9lwQqwtKo4y5qZbVn1w31ybRrg=;
        b=bOYHl1lk+9c5fpkjvJLlW2V+ZW1HWQZ8+GqQWpRXOyRwXl5gJ0b79YDClCoBfBBm4i
         Lg3ZY42RA3b3XCB5qgKa30diM6DGWie5c/Bp8lBvVatzuKYQ+cRSVyrR3Vn43XQ38Fby
         6vdAkHilxAu/KAE1HFPNhr6sG764ExUW8d2uc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=xzceaTJbUUoQMRiby9lwQqwtKo4y5qZbVn1w31ybRrg=;
        b=Fk2TUqJxqirNlYbd3NLfg8yIFYOdTCgVsVgoqiSbaAB2/sKGA+1/EK+boKq5kU46J4
         qLtoAmwMAkKG+2h5UuD+YirSS3StLbsQ+R1z6kaRC4gk4F+QY6yuhhbgP6URhVNOxZ9m
         qxuJ15dFgmvHXjGbOpXukC4IG++bQ3WNsdTKnT+YXwftrO7epCUs+PZwxwmUA0A9q+PO
         zlKa+HBTXjIXnXUN/BnESig2AhJRb/P5V8lw9oJF9gJF1xoFKsGqWXcgnGN/iwwQk95B
         /1ChRyl8aSruuC2UC1W/T9fhbhg4gRPzFnEPwsxoLXMmKH38JEBCYVI6vlq0Mx79ihjL
         qIgA==
X-Gm-Message-State: AOAM533P8N+Y96HwxyKdWwpSVtzUG+TE5cf3P4R1Zgsc8qMFvAhcPiA0
	6LcWKReWZrg+STYFseUO3BJWXA==
X-Google-Smtp-Source: ABdhPJxwy0teVP8y0d4KrEbfmPne0jX/g2pgdK0Y1sxb/62SuqcTyQ5c51wqp1akhS8ac/pL7Mr0nw==
X-Received: by 2002:a17:90a:bf0c:: with SMTP id c12mr25754828pjs.206.1620640256876;
        Mon, 10 May 2021 02:50:56 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 02/15] swiotlb: Refactor swiotlb_create_debugfs
Date: Mon, 10 May 2021 17:50:13 +0800
Message-Id: <20210510095026.3477496-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools, e.g. restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 22 ++++++++++++++++------
 1 file changed, 16 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d3232fc19385..858475bd6923 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -64,6 +64,7 @@
 enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem *io_tlb_default_mem;
+static struct dentry *debugfs_dir;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -662,18 +663,27 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name,
+				   struct dentry *node)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
 	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
+		return;
+
+	mem->debugfs = debugfs_create_dir(name, node);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	swiotlb_create_debugfs(mem, "swiotlb", NULL);
+	debugfs_dir = mem->debugfs;
+
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:51:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:51:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125054.235438 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Z9-0007eG-TJ; Mon, 10 May 2021 09:51:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125054.235438; Mon, 10 May 2021 09:51:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Z9-0007e7-O8; Mon, 10 May 2021 09:51:07 +0000
Received: by outflank-mailman (input) for mailman id 125054;
 Mon, 10 May 2021 09:51:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2Z8-0006yg-Qp
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:51:06 +0000
Received: from mail-pl1-x62a.google.com (unknown [2607:f8b0:4864:20::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a732de66-c3c4-4664-ab4f-d6938473f051;
 Mon, 10 May 2021 09:51:06 +0000 (UTC)
Received: by mail-pl1-x62a.google.com with SMTP id b15so2684452plh.10
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:51:06 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id ga1sm11342699pjb.5.2021.05.10.02.50.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:51:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a732de66-c3c4-4664-ab4f-d6938473f051
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=UxwjATyfUtX3knbkz7Ng4a91AyuLGtHH/dkvosOSCGg=;
        b=KOr6914IYxgjLwHfgEdtt7yY0GH36g2NGsJo3fGwbbJaZNcrllky3/6cWFIPbrcxnF
         J6WIRuHd84U2i5BBiDKA/bsQe9y2Z9UkJ/RbWHnAxAdDWQ5FGpvFFVOBGDi0XQRFybWw
         dDc+W6hlTJe2OtGlIaJLuh2fe89FW6RCivPnU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=UxwjATyfUtX3knbkz7Ng4a91AyuLGtHH/dkvosOSCGg=;
        b=UaKwY+D+WqJ5npUvvk8D9suzezUX1dhxpILFAOY6gPP8ZxVkvamrKgnoonXRMTVTN4
         4BbQALl49A9rTf3yh8+qEnojDVZt6KzKD3yvB1OaWXkD1aCAy5RmTfTDDbT//8mvubak
         CZTLZhYp7BCuqe+j7fwTsi38pVyD32GTCGA3EY4WF5w5SeQPJb1s/0hdJ4bnwkuX/Dzz
         R1zhR6kDGpGVrg0pn61w140JwxzVmRC+D3zewQqIsltRlQeqhiZ5kmopwPkqy5JCXeBn
         iGqa/FGSH1N+0dQqHsM7Xj6S/WSzEFc6ctvLImtZu0Y5alCQJRZv9uwQnkFXbYliMc5t
         thwg==
X-Gm-Message-State: AOAM531WkKk3WVm/4575AFw7Ndp7jZ/qseIL3edlb0dXsEzSWXxnOYGX
	dmXMv4cKrPUXwXCZNWmmVhpCPg==
X-Google-Smtp-Source: ABdhPJxTaq6vhy1R6G0CsSRkldwCBABZULYIDbfdaYlGZp1FgQhncVgdgDLR34kcwkcv6yJulF+tnA==
X-Received: by 2002:a17:90a:cf10:: with SMTP id h16mr26442566pju.49.1620640265461;
        Mon, 10 May 2021 02:51:05 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 03/15] swiotlb: Add DMA_RESTRICTED_POOL
Date: Mon, 10 May 2021 17:50:14 +0800
Message-Id: <20210510095026.3477496-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new kconfig symbol, DMA_RESTRICTED_POOL, for the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/Kconfig | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
-- 
2.31.1.607.g51e8a6a459-goog
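[Editorial note, not part of the patch: the new symbol is user-visible but gated on OF reserved-memory support, and it selects SWIOTLB rather than depending on it. A kernel configuration that enables it would therefore contain fragments along these lines (illustrative only; the exact .config depends on the platform defconfig):]

```
# Prerequisites pulled in by "depends on OF && OF_RESERVED_MEM"
CONFIG_OF=y
CONFIG_OF_RESERVED_MEM=y
# The new option itself; CONFIG_SWIOTLB=y follows automatically via "select"
CONFIG_DMA_RESTRICTED_POOL=y
```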



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:51:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:51:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125057.235450 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2ZJ-0008Hn-61; Mon, 10 May 2021 09:51:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125057.235450; Mon, 10 May 2021 09:51:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2ZJ-0008Hg-25; Mon, 10 May 2021 09:51:17 +0000
Received: by outflank-mailman (input) for mailman id 125057;
 Mon, 10 May 2021 09:51:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2ZH-0006yg-J4
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:51:15 +0000
Received: from mail-pj1-x102e.google.com (unknown [2607:f8b0:4864:20::102e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9173233d-ec42-410c-b964-4a13e051f3ee;
 Mon, 10 May 2021 09:51:14 +0000 (UTC)
Received: by mail-pj1-x102e.google.com with SMTP id
 gc22-20020a17090b3116b02901558435aec1so9990818pjb.4
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:51:14 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id 85sm10570175pge.92.2021.05.10.02.51.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:51:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9173233d-ec42-410c-b964-4a13e051f3ee
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=N3P9GLK7vHKhYVQQtJseEAxm9BpujcOrU5/c0rsEVEk=;
        b=Y7aOu5/1TjGJ39Q6KlGygdgKwH9kbvC5j2KTJY6W4XTdOExKpGYjoeXyBJiXgd/KYk
         KzSpnP+ON6ZMnZKtG68AQfgatZrpSEM8rt3qhHBX67wSb4snRTMym5W+JAqIFz3C1rDu
         rEn+A4sHCgc1HrIPZtrVf2hBqqnA1o60c7EF4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=N3P9GLK7vHKhYVQQtJseEAxm9BpujcOrU5/c0rsEVEk=;
        b=GDvLkPggzBHPEIP4ZVeRi12Lpf1iAQlzTCpPOMwQ1OajrWXtbZuMB2T+EguMxJfjTk
         leMo5p8TelxINC2GjNGt/msdf/s1RmSclCEkAUz1thBF9HrQg9qvT+94ynDvDvIbC2iF
         rkonPF2dHAg5432+EnEBcqZD+A1l3rRu6IgKRvPT+yvsho5CADlQUrEqAAaqHgT4UKwR
         +ScKO/CZkhVG9Pcr1VpHgR8NeoBvyGv1UErZHmtfJixOVa9CUd6vszFFOk/XCm6lUDYW
         3bk9rc4u/0ZgMOQK06OrHkAKxTM3XVBQxwetW0B+c9L1EgRotYPYJwyrX4ebmJtdzaIN
         hVtg==
X-Gm-Message-State: AOAM532jd/GD6q52CKpPYmpL5XIqDHhm+xgMc+u4GiaA0QQV7lzq08iO
	cClvBv7I6MJ0FSYUQ2ZX1h0AVg==
X-Google-Smtp-Source: ABdhPJx/+mqX+1PPw1d6WrMopD8tcfP0qxrVaW6dh7H0TOwQsbO/ZVrG7imWQ0Lmzh0hiACiE2GhCg==
X-Received: by 2002:a17:902:f203:b029:ee:e32f:2a28 with SMTP id m3-20020a170902f203b02900eee32f2a28mr22959818plc.45.1620640274038;
        Mon, 10 May 2021 02:51:14 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 04/15] swiotlb: Add restricted DMA pool initialization
Date: Mon, 10 May 2021 17:50:15 +0800
Message-Id: <20210510095026.3477496-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/device.h  |  4 +++
 include/linux/swiotlb.h |  3 +-
 kernel/dma/swiotlb.c    | 79 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 85 insertions(+), 1 deletion(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 38a2071cf776..4987608ea4ff 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -521,6 +522,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..03ad6e3b4056 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 858475bd6923..4ea027b75013 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -687,3 +694,75 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	if (dev->dma_io_tlb_mem)
+		return 0;
+
+	/* Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+#ifdef CONFIG_ARM
+		if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+			kfree(mem);
+			return -EINVAL;
+		}
+#endif /* CONFIG_ARM */
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+
+		rmem->priv = mem;
+
+#ifdef CONFIG_DEBUG_FS
+		if (!debugfs_dir)
+			debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+
+		swiotlb_create_debugfs(mem, rmem->name, debugfs_dir);
+#endif /* CONFIG_DEBUG_FS */
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	if (dev)
+		dev->dma_io_tlb_mem = NULL;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created device swiotlb memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.31.1.607.g51e8a6a459-goog
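[Editorial note, not part of the patch: rmem_swiotlb_setup() is registered for the "restricted-dma-pool" compatible and rejects nodes marked "reusable", "no-map", "linux,cma-default", or "linux,dma-default". A reserved-memory node of the shape it accepts, together with a consumer device binding to it via the standard memory-region property, might look like the following. Node names, labels, the base address, and the size are illustrative, not taken from the patch:]

```
/ {
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Pool carved out for one device's bounce buffers.
		 * Must not carry "reusable" or "no-map", or the
		 * setup function above returns -EINVAL. */
		restricted_dma: restricted-dma-pool@50000000 {
			compatible = "restricted-dma-pool";
			reg = <0x0 0x50000000 0x0 0x400000>;
		};
	};

	/* Hypothetical consumer; attaching triggers
	 * rmem_swiotlb_device_init() for this device. */
	dma_capable_dev: device@60000000 {
		memory-region = <&restricted_dma>;
	};
};
```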



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:51:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125060.235462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2ZR-0000QA-J9; Mon, 10 May 2021 09:51:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125060.235462; Mon, 10 May 2021 09:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2ZR-0000Q1-El; Mon, 10 May 2021 09:51:25 +0000
Received: by outflank-mailman (input) for mailman id 125060;
 Mon, 10 May 2021 09:51:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2ZQ-0000Oo-J5
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:51:24 +0000
Received: from mail-pj1-x1035.google.com (unknown [2607:f8b0:4864:20::1035])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f223f673-d063-487e-adc7-57718bd48a59;
 Mon, 10 May 2021 09:51:23 +0000 (UTC)
Received: by mail-pj1-x1035.google.com with SMTP id lp4so9445836pjb.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:51:23 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id ml19sm46030318pjb.2.2021.05.10.02.51.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:51:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f223f673-d063-487e-adc7-57718bd48a59
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=H8s2Ssxedi+m7Lc7mdaT6IyIVFNL7UMFVGCjjIXUKBs=;
        b=aEzUGv6HyICG91lVRchMBsfGFoiKw5t3UKv5r9/XWkq1l03uTxk5gK7BReGArG4wM7
         eL8k61LHaWwrgnh+WJYeL7I+S/QoUwEkwv6x0tDx3HcpYoUp9liwFvhqHJX8OnvlaqU3
         oxtyLQT57sgPbzesRlkZ21sVrQ/LPb2+Dv9rE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=H8s2Ssxedi+m7Lc7mdaT6IyIVFNL7UMFVGCjjIXUKBs=;
        b=hjX/N7oS5rQGF6Tt9iqBgUiYh8RoSzZVS7T5GWrxWYdc1GmK0o5kY6bV6hwamANSeL
         Y/Mm/W4c46RoerMHBgxX2bv2jKZn1TJi95AL4DjpzWF/9DEJrzy3zmEDLB9gj2paSh1i
         d3+SuPtHdKEtjm1fpYBlbqkyiRRIW47I7cEbMLeRZjThB/s16Wxs3vgw4r/xtiyTMHxU
         bhs9Gxv2tOBiY5edqn5jzsut43RKVl/fcoVhiMZfjhCqj7vvDJhT2JGaVDLGUxNpilQ0
         zrP8B1q8rTLgkHUsRn0HRE6jXbg+3+AG/JH/Px0NJgGNF5mqdeeXWSozBpL/L8VrtlpF
         dmKw==
X-Gm-Message-State: AOAM531/N7ts+PWJTe5nWs2f4Nwg81Z0z6FZVCCD/stF00uC8L1SGcMa
	Nue7KMlBDxIUGjNgACjxsbSteg==
X-Google-Smtp-Source: ABdhPJzROW5FUCBZB2fXFHkP0ALBea1sVjbZahlfu04HnoSywATlCq/z9y3jgockvO/xDk1J1RoGzg==
X-Received: by 2002:a17:90b:1bcd:: with SMTP id oa13mr40100520pjb.22.1620640283118;
        Mon, 10 May 2021 02:51:23 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 05/15] swiotlb: Add a new get_io_tlb_mem getter
Date: Mon, 10 May 2021 17:50:16 +0800
Message-Id: <20210510095026.3477496-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new getter, get_io_tlb_mem, to help select the io_tlb_mem struct.
The restricted DMA pool is preferred if available.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 03ad6e3b4056..b469f04cca26 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -102,6 +103,16 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
+static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev && dev->dma_io_tlb_mem)
+		return dev->dma_io_tlb_mem;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
+	return io_tlb_default_mem;
+}
+
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:51:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:51:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125066.235474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Za-0000yx-Sm; Mon, 10 May 2021 09:51:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125066.235474; Mon, 10 May 2021 09:51:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Za-0000yg-OO; Mon, 10 May 2021 09:51:34 +0000
Received: by outflank-mailman (input) for mailman id 125066;
 Mon, 10 May 2021 09:51:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2ZZ-0000Oo-Aq
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:51:33 +0000
Received: from mail-pj1-x102b.google.com (unknown [2607:f8b0:4864:20::102b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a4f53698-8f8b-4df3-84ef-c1d804054074;
 Mon, 10 May 2021 09:51:32 +0000 (UTC)
Received: by mail-pj1-x102b.google.com with SMTP id
 gq14-20020a17090b104eb029015be008ab0fso9741937pjb.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:51:32 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id w123sm10461275pfw.151.2021.05.10.02.51.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:51:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4f53698-8f8b-4df3-84ef-c1d804054074
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=tZ3CGs8f3EQmgAGpvf1e73TqcXREEE3ZI5sdNnb0eLM=;
        b=lMyouNryYo5w5lJuE95kxHABr2UavthBxCrElnGryB1miCRsJl0qEAj6meErMJSozY
         Bf+wF4N8bQO1PcUVKGrINY6G+y/OtqfRp6rQIHL8mDnrXGD2FuJWgJ9PSVDL0De8/bNM
         idqD9o+yRklHjS47HZfFBHHPPAwrsxzQqtqBs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=tZ3CGs8f3EQmgAGpvf1e73TqcXREEE3ZI5sdNnb0eLM=;
        b=NASqcu3dDdb4rBvKCcqyKi1liNZVbrqFu57ln42kQJJxQ9pY+zNmtTR/72qxFo658I
         LV7K8Rn8TYk9cLw3JPSq9TUS31xnU/99ipRXlY5qAYeudJwmFgEyh5ecHLw7m8nBlQUz
         KbHTsSNax425MoDSBAlmehcxlFc9GQcEE0hoAc3v+NyHqCDJeBIn919OvY59MWz2+Vcs
         wcYzplhFJmoT8/ZA9uHXjVpVT/VTpMcqqmRVMktZ7MSdHUbNYAH6T9N36pQjBfnDxyQS
         Psn2GxgMXhKpQy4nj8ft4q8DXLAj+Rh7BV6E45LUFSCNUZNmBtBi2zcxm5K+IfLBEXOG
         1TEQ==
X-Gm-Message-State: AOAM5323yGFicKKp9fVedBUQbEaOWSlgwhJxKD7xv4CV3bzMjm9EGh4p
	KV+DN4/IkSRc1pFxYMuD8gh96g==
X-Google-Smtp-Source: ABdhPJy89/+hGNaFqoNDxXRLLHozjYcSxQ6nSmzavfsLOw7KffX2ycDLbnJ717I2O8/EFzmzVKyA2w==
X-Received: by 2002:a17:90b:88b:: with SMTP id bj11mr27142987pjb.224.1620640291726;
        Mon, 10 May 2021 02:51:31 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 06/15] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Mon, 10 May 2021 17:50:17 +0800
Message-Id: <20210510095026.3477496-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to take a struct device argument. This will be
useful later to allow for the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  6 +++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7bcdd1205535..a5df35bfd150 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -504,7 +504,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -575,7 +575,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -781,7 +781,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -794,7 +794,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -815,7 +815,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -832,7 +832,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..0c6ed09f8513 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index b469f04cca26..2a6cca07540b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -113,9 +113,9 @@ static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
 	return io_tlb_default_mem;
 }
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -127,7 +127,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:51:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:51:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125070.235486 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Zj-0001eK-6a; Mon, 10 May 2021 09:51:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125070.235486; Mon, 10 May 2021 09:51:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Zj-0001e8-2m; Mon, 10 May 2021 09:51:43 +0000
Received: by outflank-mailman (input) for mailman id 125070;
 Mon, 10 May 2021 09:51:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2Zh-0001bJ-Uo
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:51:41 +0000
Received: from mail-pf1-x42c.google.com (unknown [2607:f8b0:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6740ae25-12c4-410f-93bc-71fb9f834bd8;
 Mon, 10 May 2021 09:51:41 +0000 (UTC)
Received: by mail-pf1-x42c.google.com with SMTP id p4so13283421pfo.3
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:51:41 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id z16sm18516572pjq.42.2021.05.10.02.51.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:51:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6740ae25-12c4-410f-93bc-71fb9f834bd8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=+JmVmhSFGiDk3/OglOY5X9UAspXEPeRZrOOQ1jue7o0=;
        b=chdPcJe7gomL+7ILnJK41quCdTSfmmzVjcLxZvr96abVbyS2hxQz55JkSmjVpMuD2l
         7hrZDyiec0Eptx6DlOgctch3MP18AQl2Z0UBjvJYxcIDfchdSPR9HSnruN0+XHwsK91U
         nLMYdSbyGfntFvFyhHjVYHf29JB1JZpxW2+jI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=+JmVmhSFGiDk3/OglOY5X9UAspXEPeRZrOOQ1jue7o0=;
        b=WC1STTRg1JByTE0x6z5/kSHureXLSeCUduO2ZpqCUB7/AlyegA1r7ZMe7U67eQEDfJ
         OIff2lZHW7s7brarGOC++pYNUIY5Rkp6gmXwXt0k2FIIEtYIWqSYae94jV0bum36Yd1E
         QEDig+8PyDmfYxzesHxMq2c8KqdlU9O9kQkZfcrw6RNYBoEh1BH8l6mOiCZWCi6Gg3Ne
         npt0J9sAuhsRlrY9n689Ea0XK7pS7JZqVHkSfalNJxGr0umlVR7NfA02XTmUpglyrRhF
         B5I+kuVsh0JmHD3QTW2RgEpBcGBTNGSu6JQ0BeLnqQeaP8lXOWeK9O6ZF1zf38bYcwgB
         ypRQ==
X-Gm-Message-State: AOAM533t5hduGfJL2cF29dZIAiDAKPivLp1ioq4BH4I+Ki02BY0VXzla
	Fv3W7QEEAxFu5VyaPmO/C5Rqbg==
X-Google-Smtp-Source: ABdhPJwEuklfL8gjazzjb7wDgipUUvm7epPmX8yFSlIdzWj0uWm4MepZyYy8A9RBNCckvPkGtYRV9Q==
X-Received: by 2002:aa7:828f:0:b029:200:6e27:8c8f with SMTP id s15-20020aa7828f0000b02902006e278c8fmr24143255pfm.44.1620640300375;
        Mon, 10 May 2021 02:51:40 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 07/15] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Mon, 10 May 2021 17:50:18 +0800
Message-Id: <20210510095026.3477496-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to take a struct device argument. This will be
useful later to allow for the restricted DMA pool.
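For context, a minimal standalone sketch of the per-device lookup this patch builds on (the struct definitions here are hypothetical stand-ins; the kernel's struct device and struct io_tlb_mem carry many more fields):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-in types, trimmed to the fields this sketch needs. */
struct io_tlb_mem { unsigned long start, end; };
struct device { struct io_tlb_mem *dma_io_tlb_mem; };

static struct io_tlb_mem *io_tlb_default_mem;

/* Per-device lookup: prefer the device's restricted pool if present,
 * otherwise fall back to the global default pool. This mirrors the
 * get_io_tlb_mem() helper the series introduces. */
static struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
{
	if (dev && dev->dma_io_tlb_mem)
		return dev->dma_io_tlb_mem;
	return io_tlb_default_mem;
}

/* After this patch, "is SWIOTLB active?" is answered per device
 * rather than by testing the global pool pointer directly. */
static int is_swiotlb_active(struct device *dev)
{
	return get_io_tlb_mem(dev) != NULL;
}
```

Callers that have no device at hand (e.g. the i915 and nouveau sites in the diff) pass NULL and get the old global behavior.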

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ce6b664b10aa..7d48c433446b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(NULL)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index e8b506a6685b..2a2ae6d6cf6d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(NULL);
 #endif
 
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..6d548ce53ce7 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(NULL)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 2a6cca07540b..c530c976d18b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -123,7 +123,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -143,7 +143,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 4ea027b75013..81bed3d2c771 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -662,9 +662,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return get_io_tlb_mem(dev) != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:51:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:51:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125074.235498 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Zs-0002EW-Eu; Mon, 10 May 2021 09:51:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125074.235498; Mon, 10 May 2021 09:51:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Zs-0002EL-BP; Mon, 10 May 2021 09:51:52 +0000
Received: by outflank-mailman (input) for mailman id 125074;
 Mon, 10 May 2021 09:51:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2Zq-0002B1-J4
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:51:50 +0000
Received: from mail-pl1-x62d.google.com (unknown [2607:f8b0:4864:20::62d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b02f7a2-874f-4b06-87b7-47f2c8bbf3ec;
 Mon, 10 May 2021 09:51:49 +0000 (UTC)
Received: by mail-pl1-x62d.google.com with SMTP id z18so5273455plg.8
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:51:49 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id s64sm6702432pfs.3.2021.05.10.02.51.41
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:51:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b02f7a2-874f-4b06-87b7-47f2c8bbf3ec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=01kC5VD+dD2b0OOexB3kVYJ/sTlgIQPUcrnnVCwnvhY=;
        b=lgBDGiALZJWoTMGClwt7gAfCf14fepwM0lhs1vHuCCfVO5wyRcvJle6S59iBD1WHjw
         VcpIItr7I/JDG2KUPkhdMWHWpmui21OOXJ9ngFwe6+SQwXm1Lr/pV+cfahxnBOMneB9Q
         gdKX6Ey7xAgGLsyJX9bor8s5uFP/awnm9jDNw=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=01kC5VD+dD2b0OOexB3kVYJ/sTlgIQPUcrnnVCwnvhY=;
        b=i9x2Ic4sH2hGOGvMzo0yse9mrQbGm/Wknx1+Iu8hIAUk2aT4PqohdB+fVgpjPvn/dF
         rLPRfEwiHXVYQi6x7T2qJZkR8pd0gtxqb1PqfgzYapy4KdnE5e7A3Z9G9Z6nAvTkPKYG
         jTQqymPDjqG3u073A0Wx5v53cYcQ0C4OnR0tIjrDU5UX7TR8lrenxj2LBTjwdn7Fm9kg
         UqMt0BsiyH0XvMZTRrxFOXby0uRW8MKsXO3KQhfblWppAYzJU9JNwIhyfsxqzhLRcOKA
         CoZv4pNiI3Kq3WDTU/HA+lZSsp56zP2W16VSaGOgU0saEibVlfz4tvOF+GE0HHHNxYbr
         snvw==
X-Gm-Message-State: AOAM5329K0FfP+m4HU7tx3jb0LOGo0xWmrXZyUoLIcD0cSK1FR1ma21L
	dtcYXyh/9BVhBQpoAfMExr5GCQ==
X-Google-Smtp-Source: ABdhPJye9uoE8lM9qdQ85PwqQIVeAFlLuKjevIwDP1Nhkih9+NHBUgn/S+CAWbVadnjMg7MY4s3LVA==
X-Received: by 2002:a17:90b:3551:: with SMTP id lt17mr26991167pjb.92.1620640309019;
        Mon, 10 May 2021 02:51:49 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 08/15] swiotlb: Bounce data from/to restricted DMA pool if available
Date: Mon, 10 May 2021 17:50:19 +0800
Message-Id: <20210510095026.3477496-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Regardless of the swiotlb setting, the restricted DMA pool is preferred
if available.

The restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.

Note that is_dev_swiotlb_force doesn't check whether
swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior
with the default swiotlb would be changed by the following patch
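The bounce decision in the mapping path can be sketched as follows (hypothetical stand-in types and a simplified must_bounce() helper; the kernel folds this condition directly into dma_direct_map_page):

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for the kernel types and globals. */
struct io_tlb_mem { int unused; };
struct device { struct io_tlb_mem *dma_io_tlb_mem; };

enum swiotlb_force_mode { SWIOTLB_NORMAL, SWIOTLB_FORCE };
static enum swiotlb_force_mode swiotlb_force = SWIOTLB_NORMAL;

/* A device with its own restricted pool always bounces, regardless of
 * the global swiotlb_force setting (mirrors is_dev_swiotlb_force). */
static int is_dev_swiotlb_force(struct device *dev)
{
	return dev && dev->dma_io_tlb_mem != NULL;
}

/* The mapping path bounces when either the global force flag is set
 * or the device carries a restricted pool. */
static int must_bounce(struct device *dev)
{
	return swiotlb_force == SWIOTLB_FORCE || is_dev_swiotlb_force(dev);
}
```

This is why the patch can leave the default-swiotlb allocation behavior unchanged: devices without a restricted pool only bounce under the pre-existing global conditions.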

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 13 +++++++++++++
 kernel/dma/direct.c     |  3 ++-
 kernel/dma/direct.h     |  3 ++-
 kernel/dma/swiotlb.c    |  8 ++++----
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index c530c976d18b..0c5a18d9cf89 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -120,6 +120,15 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev->dma_io_tlb_mem)
+		return true;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+	return false;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -131,6 +140,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..078f7087e466 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
+	     is_dev_swiotlb_force(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..f94813674e23 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
+	    is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 81bed3d2c771..3f1ad080a4bc 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -347,7 +347,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -429,7 +429,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -506,7 +506,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -557,7 +557,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:52:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:52:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125077.235510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Zz-0002p3-Ue; Mon, 10 May 2021 09:51:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125077.235510; Mon, 10 May 2021 09:51:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2Zz-0002ou-Pn; Mon, 10 May 2021 09:51:59 +0000
Received: by outflank-mailman (input) for mailman id 125077;
 Mon, 10 May 2021 09:51:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2Zz-0002B1-9W
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:51:59 +0000
Received: from mail-pj1-x102f.google.com (unknown [2607:f8b0:4864:20::102f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7ead912-fb0f-4b56-ab79-c6d660e4883d;
 Mon, 10 May 2021 09:51:58 +0000 (UTC)
Received: by mail-pj1-x102f.google.com with SMTP id g24so9442873pji.4
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:51:58 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id l127sm10665077pfd.128.2021.05.10.02.51.50
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:51:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7ead912-fb0f-4b56-ab79-c6d660e4883d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=pTHCFxssIveqpuArSgW3Qj3cZNx2ZZwG3aJYd2ZEd8Y=;
        b=SU2CBBmY8DtSQd0TNZ7943oJOHoK8+TBS5VGCwhcrloaivVlt1fQYEyAF2Dc62+eQt
         UT9wdd13n+8GJAVW0U5UJTACdpxx/KKTBexY7stqKf1PPvqGXsqREXZG7XFlnC1R91bt
         2OsKvp8oZwX6m5c1ltX1+1ihIJktkKBoMRuaA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=pTHCFxssIveqpuArSgW3Qj3cZNx2ZZwG3aJYd2ZEd8Y=;
        b=UHk08k8hga5nmNJ7NvCSyxdwRIB4l/ur7WivvI3zK314GUv6DSyxGmY6xaECbAb0FI
         vLmNfvLcIDuI1UxC88WX3Z9dfIFq00bFoZoNx4UdPg6ZApBYQcLli36OprLL2I+aF5qH
         JgsQ7nTOhwhGmI+qWGYu/un58uJRyG2BSj9Z4dmpgelyU1r3isYasbDj23AGwfqmFdWh
         EBvibmlil5Hb96WI0oLgULfUSAt9rYqiO9t0szU5hnz3W1026YFjhzG6tKOgKhEWJATb
         /K47XK+Eq05kEJZG5VFQua1w7x8eXtjc65LAHwWa/4isixQb6xNMc6QwnvqPdBVSSbm6
         YpQw==
X-Gm-Message-State: AOAM533LRHhzgjrfpZBQ4janeAyzqtk0PATFT02pwSQ7GgA6ggFO6aNT
	jJh1QOe2oBni+Ywz50XuIhg93Q==
X-Google-Smtp-Source: ABdhPJyDFzChlNem0OEVDyj1b5MdL1myWQlCxWzTZ5O8kSwg2MCCOlZ4L1D63xzDiO05ddNXDCzncQ==
X-Received: by 2002:a17:90a:d78c:: with SMTP id z12mr39367495pju.106.1620640317770;
        Mon, 10 May 2021 02:51:57 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 09/15] swiotlb: Move alloc_size to find_slots
Date: Mon, 10 May 2021 17:50:20 +0800
Message-Id: <20210510095026.3477496-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the maintenance of alloc_size into find_slots so the bookkeeping
can be reused by later patches.
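The bookkeeping being moved can be sketched in isolation (fill_alloc_sizes is a hypothetical helper name; in the patch this loop lives inline in find_slots):

```c
/* Mirror of the per-slot bookkeeping in the patch: each slot i of a
 * found run [index, index + nslots) records the bytes remaining from
 * its own offset to the end of the allocation, i.e.
 * alloc_size - ((i - index) << IO_TLB_SHIFT).
 * IO_TLB_SHIFT is 11 in the kernel (2 KiB slots). */
#define IO_TLB_SHIFT 11

static void fill_alloc_sizes(unsigned long *slot_alloc_size,
			     int index, int nslots, unsigned long alloc_size)
{
	for (int i = index; i < index + nslots; i++)
		slot_alloc_size[i] = alloc_size - ((i - index) << IO_TLB_SHIFT);
}
```

The values are unchanged from the old loop in swiotlb_tbl_map_single; only the place where they are written moves.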

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 3f1ad080a4bc..ef04d8f7708f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -482,8 +482,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -538,11 +541,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:52:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:52:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125080.235522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2a9-0003R4-AO; Mon, 10 May 2021 09:52:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125080.235522; Mon, 10 May 2021 09:52:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2a9-0003Qu-5G; Mon, 10 May 2021 09:52:09 +0000
Received: by outflank-mailman (input) for mailman id 125080;
 Mon, 10 May 2021 09:52:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2a7-0003O2-Sk
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:52:07 +0000
Received: from mail-pj1-x1034.google.com (unknown [2607:f8b0:4864:20::1034])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88736481-0133-4310-a8dd-fa7765ece785;
 Mon, 10 May 2021 09:52:07 +0000 (UTC)
Received: by mail-pj1-x1034.google.com with SMTP id gj14so9442872pjb.5
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:52:07 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id q11sm11036253pjq.6.2021.05.10.02.51.59
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:52:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88736481-0133-4310-a8dd-fa7765ece785
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=4pOFRke/Q7tZST7mlvM0MmSbL4aoYUvXn+1GR7SUJ34=;
        b=A7U0jjyTG8tgDp385jacEJGLNC4oL7IUZlzoTWyLzKk6RkDKUQ5v+SD+9WKkocmvjG
         LUaefVuL0kRQGj7dkHAfhhjPa4I0Jfm8PqMLbRTvg4D7h7o8M7PlgtpcXiQ1iqPHQ0aU
         +IJlZ9Uvbxgm75hG8lTxaMBiVB4c7B5X4T9+c=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=4pOFRke/Q7tZST7mlvM0MmSbL4aoYUvXn+1GR7SUJ34=;
        b=kskkabmmiw8NU/38r00XbumD5NZ73T6Qw2DBOu73iHmp8QKw05hWk0yx8qzMm6bfWJ
         8XMnudKbNwxUXY310F7a0gVG1KHSUGzFk6V58HoJl1BMNM0v42cBc1i9dClFxoly3jkn
         4rmknm7aRPfK4OOgPa/2hu7F9viK9mMRLaVzlA+Ca1COxXmA7yH9WNHcrwPY1XqlFXly
         +3RC/g9kqvqoTQlO1O2LNNR/djLiiGfGjf1NqLW0SDVsQP7kg3gxMdKyhIQH4S24EeK5
         adxs2mDflr9BvHgYHcY0b0+s7eEZ1AzLlnPhTIEa0hNLMUYWqdSU/SMG719vYKQKG61t
         7vbg==
X-Gm-Message-State: AOAM530qFwZB2fjIHbViI7IbUnkHS+wycQEfWZwvxcjZY05A082nDmvK
	tBPIJA9RWAIRB3nj1PWk8WeQsg==
X-Google-Smtp-Source: ABdhPJyNhiNJqKO//YUtjfv5tp68csdaR9ELLg7UNG8ZjStHGvhbFTS+EdG5CKcr3bxZkctUA/Q/rQ==
X-Received: by 2002:a17:90a:246:: with SMTP id t6mr27380633pje.228.1620640326419;
        Mon, 10 May 2021 02:52:06 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 10/15] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Mon, 10 May 2021 17:50:21 +0800
Message-Id: <20210510095026.3477496-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, release_slots, to make the code reusable for supporting
different bounce buffer pools, e.g. the restricted DMA pool.
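The shape of the refactor can be shown with a minimal userspace model: the slot-release arithmetic is pulled out of the unmap path into a helper, so a later free path can return slots without the DMA sync step. The 2 KiB slot size mirrors swiotlb's IO_TLB_SHIFT; everything else (the linear-scan allocator, the fixed table size) is a simplified stand-in, not the kernel implementation.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define IO_TLB_SHIFT 11
#define IO_TLB_SIZE  (1 << IO_TLB_SHIFT)
#define NSLOTS       16

static bool slot_used[NSLOTS];
static const size_t mem_start = 0x100000;	/* base of the bounce pool */

static size_t nr_slots(size_t size)
{
	return (size + IO_TLB_SIZE - 1) >> IO_TLB_SHIFT;
}

/* The extracted helper: everything needed to return slots to the pool. */
static void release_slots(size_t tlb_addr, size_t alloc_size)
{
	size_t index = (tlb_addr - mem_start) >> IO_TLB_SHIFT;
	size_t i;

	for (i = 0; i < nr_slots(alloc_size); i++)
		slot_used[index + i] = false;
}

/* unmap = sync (elided here) + release, matching the refactored split. */
static void unmap_single(size_t tlb_addr, size_t alloc_size)
{
	/* swiotlb_bounce(..., DMA_FROM_DEVICE) would go here */
	release_slots(tlb_addr, alloc_size);
}

/* Naive first-fit allocator standing in for find_slots(). */
static size_t map_single(size_t alloc_size)
{
	size_t want = nr_slots(alloc_size);
	size_t i, j;

	for (i = 0; i + want <= NSLOTS; i++) {
		for (j = 0; j < want && !slot_used[i + j]; j++)
			;
		if (j == want) {
			for (j = 0; j < want; j++)
				slot_used[i + j] = true;
			return mem_start + (i << IO_TLB_SHIFT);
		}
	}
	return 0;
}
```

With the helper factored out, a free path (as added later in this series) can call release_slots() directly, skipping the sync that only bounce-buffer unmaps need.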

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index ef04d8f7708f..fa11787e4b95 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -550,27 +550,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -605,6 +593,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:52:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:52:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125085.235534 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2aH-00043a-IK; Mon, 10 May 2021 09:52:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125085.235534; Mon, 10 May 2021 09:52:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2aH-00043P-FE; Mon, 10 May 2021 09:52:17 +0000
Received: by outflank-mailman (input) for mailman id 125085;
 Mon, 10 May 2021 09:52:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2aG-00040y-CP
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:52:16 +0000
Received: from mail-pf1-x429.google.com (unknown [2607:f8b0:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dba3aafc-f0ea-43de-8bc7-4122807dadbf;
 Mon, 10 May 2021 09:52:15 +0000 (UTC)
Received: by mail-pf1-x429.google.com with SMTP id c17so13247880pfn.6
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:52:15 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id g2sm4121532pfj.218.2021.05.10.02.52.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:52:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dba3aafc-f0ea-43de-8bc7-4122807dadbf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=t6SScDqP0X0FEaGwYViR1WRhEa6WWe5PW8l4/cNEd6s=;
        b=Np+mNLfnE5UH9v+sj7M7sXVpWmpJwNO1DnmfTEXXKWEeG5PclaliBfYf0J+kN7jfVA
         EwzmyQstbxUWNj9zGZGjDfvTwocid9N4ile+tbERZqEa6UZsUV+1sXylz+EEx90dcYpA
         QDp+I0oGFi86MOxid3sXnhVs4C0s/bcSV7YyE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=t6SScDqP0X0FEaGwYViR1WRhEa6WWe5PW8l4/cNEd6s=;
        b=kxjQWgJoucIKqnKnJR+8kAg8VtsFPC0Y/DufhbuRe8QVe1iF7y9GkdI+ecLmnGlyel
         +2/PH2oKZm+QC0IkEJefZbN+KaXU6UvKAI9xvmEy6A+ohfGFXdOVziJsaqwuMi18ALrK
         kZ4abwJYihQB88TuaU5rgVw7GjhwyOlzAlMClXF83eUgPVxktCsvFZFf6j6+U2BKrYn1
         VztjTXQl5lNcVfXY5G9CWl5MhKMtvziuEFx2inVc9+9/V6JfQjVfMsEtUhkwxOv5GByj
         Pbwg8WVnmMUToGG/Y7IjR+5ZHXFA84YKMOc6k4vLluvtizs70vQZyKeQxVr/CbIJVmij
         xMCw==
X-Gm-Message-State: AOAM530jkSrmpHxrFs16Ang7dQ+J24IEMY69TNp9UvoXgqHtKSQU+5dB
	Bho+gRlyZ7QPKPfBBcGgOraAQQ==
X-Google-Smtp-Source: ABdhPJzTQch1UU+UQmd61gbcCZP+myc5F721Mj+utWhy5ilRUzzapDijoDqtK3NL+wBlztaE9YFArQ==
X-Received: by 2002:a63:fc11:: with SMTP id j17mr20874729pgi.355.1620640335011;
        Mon, 10 May 2021 02:52:15 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 11/15] dma-direct: Add a new wrapper __dma_direct_free_pages()
Date: Mon, 10 May 2021 17:50:22 +0800
Message-Id: <20210510095026.3477496-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new wrapper __dma_direct_free_pages() that will be useful later
for swiotlb_free().
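The value of such a trivial wrapper is that once every free site goes through it, a later patch can intercept the free in exactly one place (as patch 13 of this series does by trying swiotlb_free() first). A minimal sketch of that pattern, with illustrative names that are not kernel API:

```c
#include <assert.h>
#include <stdbool.h>

static int contig_frees;	/* falls through to the default path */
static int pool_frees;
static bool pool_active;	/* stands in for a restricted-pool check */

static void free_contiguous(void *page)
{
	(void)page;
	contig_frees++;
}

/* Returns true if the page belonged to the pool and was freed there. */
static bool pool_free(void *page)
{
	(void)page;
	if (!pool_active)
		return false;
	pool_frees++;
	return true;
}

/* The wrapper: the single interception point a later patch can extend. */
static void __free_pages_wrapper(void *page)
{
	if (pool_free(page))
		return;
	free_contiguous(page);
}
```

Before the pool exists the wrapper is behaviorally identical to calling free_contiguous() directly, which is why this patch is a pure no-op refactor.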

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 078f7087e466..eb4098323bbc 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,12 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +243,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +279,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +316,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +335,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:52:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:52:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125089.235546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2aP-0004dv-SE; Mon, 10 May 2021 09:52:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125089.235546; Mon, 10 May 2021 09:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2aP-0004dg-Ou; Mon, 10 May 2021 09:52:25 +0000
Received: by outflank-mailman (input) for mailman id 125089;
 Mon, 10 May 2021 09:52:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2aP-0004bT-3d
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:52:25 +0000
Received: from mail-pl1-x62b.google.com (unknown [2607:f8b0:4864:20::62b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1a4f09b-a12b-4939-8f69-9d6128ba80a4;
 Mon, 10 May 2021 09:52:24 +0000 (UTC)
Received: by mail-pl1-x62b.google.com with SMTP id n16so8860989plf.7
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:52:24 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id f6sm10227287pfe.74.2021.05.10.02.52.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:52:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1a4f09b-a12b-4939-8f69-9d6128ba80a4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=3YaoLVJ/BlS2jGrFuaf+HAqZru6rV2ab/VHFDnxpz5s=;
        b=I10YUhrVFYEnBb151sjznBCAywSiwwuXX1ra65r+6Dp7kLXlkJTjZj64MC39DrflfN
         L6B6coAmKhXT3uzYOIMy3hHP2AqrBakoEoGo6H/hyImCZ+Vs9dEj0AdH5ph0QtkKY+Bg
         SjBGhbtFT0tOcMKvv+2FyMSaO8PoMl7ClMI8s=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=3YaoLVJ/BlS2jGrFuaf+HAqZru6rV2ab/VHFDnxpz5s=;
        b=FoO2R8++42yVAfcdx2QNNhUzxNPEjSup10L064MRyPl794OWeI+uGWf3f4vikhPeiE
         tloW8gqVHLO2Y5UvsVOsrBbXQMLbvMo3s1U6uu9PXKYyGFQzZ+AepEamAaicfi9VaVwQ
         L3uKjCqAYeuyqZVvXBsJjjFtLlD65IhelcjJXR+vscIK0zIRdOXnSqgQPatX2au5wBaj
         tPwcJufFPZMSYnb+21ZnZx4RYhsyTiTLjbm5Eb89bnnvIqxHFxG/jD12mbh1i4EFKdUh
         L/hn7Rmj3Ct6jiVqWE3lNP4wdjGtnLe58QjdhDVVjFnM/kPDDBhl8lrl+54TJFd9EXqz
         Tzfg==
X-Gm-Message-State: AOAM530kah28iH2rdaYfQu9WijEt894fMyGQrfh47tz8YLssLGKhDjYC
	4xtKlcEyepzflozfAfouGhMFRQ==
X-Google-Smtp-Source: ABdhPJyvBg/0ILrSmUeqEzW7Fk7bklgK3teEOUZ3TKSkxe3vq+G4eTcWybVNarpL8KZA9dCjKF/+aQ==
X-Received: by 2002:a17:90a:246:: with SMTP id t6mr27381833pje.228.1620640343682;
        Mon, 10 May 2021 02:52:23 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 12/15] swiotlb: Add restricted DMA alloc/free support.
Date: Mon, 10 May 2021 17:50:23 +0800
Message-Id: <20210510095026.3477496-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} to support memory allocation from
the restricted DMA pool.
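The find_slots() hunk below relaxes the alignment check: the iotlb_align_mask comparison only applies when there is an orig_addr to mirror, because swiotlb_alloc() passes orig_addr == 0 (a plain allocation has no bounce source whose low address bits must be preserved). A minimal model of that predicate:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* True if the slot at slot_dma may back a mapping of orig_addr. */
static bool slot_alignment_ok(uint64_t slot_dma, uint64_t orig_addr,
			      uint64_t iotlb_align_mask)
{
	if (!orig_addr)		/* allocation: no address bits to preserve */
		return true;
	return (slot_dma & iotlb_align_mask) ==
	       (orig_addr & iotlb_align_mask);
}
```

Without the added `orig_addr &&` guard, an allocation (orig_addr == 0) would only ever match slots whose masked address bits happen to be zero.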

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h |  4 ++++
 kernel/dma/swiotlb.c    | 35 +++++++++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 0c5a18d9cf89..e8cf49bd90c5 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -134,6 +134,10 @@ unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index fa11787e4b95..d99d5f902259 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -457,8 +457,9 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -701,6 +702,36 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	release_slots(dev, tlb_addr);
+
+	return true;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:52:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:52:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125092.235558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2aZ-0005Hs-6g; Mon, 10 May 2021 09:52:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125092.235558; Mon, 10 May 2021 09:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2aZ-0005Hf-2d; Mon, 10 May 2021 09:52:35 +0000
Received: by outflank-mailman (input) for mailman id 125092;
 Mon, 10 May 2021 09:52:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2aX-0005Eb-UA
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:52:33 +0000
Received: from mail-pl1-x629.google.com (unknown [2607:f8b0:4864:20::629])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ac1168f-723a-4f39-b21b-0f436917dd91;
 Mon, 10 May 2021 09:52:32 +0000 (UTC)
Received: by mail-pl1-x629.google.com with SMTP id t21so8875742plo.2
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:52:32 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id 15sm10199759pjt.17.2021.05.10.02.52.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:52:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ac1168f-723a-4f39-b21b-0f436917dd91
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fO4/BDgOqSIweTkVlBXlAFU55A/tHyh8EFQ3OCJ0164=;
        b=a/WuPnT5SWKEIKaQCzsTYV8esrJXULCD+Pttku/Hjp9UGsEHrBcpr4K60CJSleWPaJ
         Cs+rhFZCjeyPwMUFq4cNS3B54eAb1lfhIoXu9ETsYSgcWLTxNsI5jc7/9Z5SKKogzIJd
         OHxCjg4x9FxeJXA4ZyVgnBWzyFvoWoATrwxIg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fO4/BDgOqSIweTkVlBXlAFU55A/tHyh8EFQ3OCJ0164=;
        b=e3o3oQYnoujER6hfGXhnhsZzO93RSmWgmD9MFa9q74s58bCctrLEODmDM7kRZn7EAE
         uF3EYgCk2Ej6Ra+asvxPgNcSon9kEur+vkW74X9cDNvJ8teHwdCwosKkV/Qz1jd4mZ6H
         cBSPH1vEcQ4QOXiH7VtuxVmKa4Rna6zw6j84p4KhetZlPKlggKwmFyOclMwcjkv6Ssgc
         e9UMbKfSNQ61R1Ubbdv1huhj6izK2t08KfnEVD19f6wGJ3k+5SACLnteyG8af4ZydsB4
         wGlSlHvHhEtYFX5VJSoF83ddosG6cAjK3omKIzwDZigXhoG1CL4qDnO7BVwg6tsCMVYF
         6muw==
X-Gm-Message-State: AOAM531JIbT2vDj56OJh07vPDRUwDgYNwLas1ZmgyRyATJHAlah0hqmo
	rDw9eAY3AiUQiEC7IvSLv/mbCg==
X-Google-Smtp-Source: ABdhPJwxBoZD0WeI9zQ0gddyfu3vgr7xM+940pwyPaN5j/qpuJotOQIMQPfObl9zFPGpUQ2nKcqGBw==
X-Received: by 2002:a17:90a:1d44:: with SMTP id u4mr40914071pju.46.1620640352294;
        Mon, 10 May 2021 02:52:32 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 13/15] dma-direct: Allocate memory from restricted DMA pool if available
Date: Mon, 10 May 2021 17:50:24 +0800
Message-Id: <20210510095026.3477496-14-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The restricted DMA pool is preferred if available.

The restricted DMA pool provides a basic level of protection against DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.

Note that since coherent allocation needs remapping, one must set up
another device coherent pool via shared-dma-pool and use
dma_alloc_from_dev_coherent() instead for atomic coherent allocation.
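The allocation order this patch introduces can be sketched as: try the device's restricted pool first, discard the result if it fails the dma_coherent_ok() check, then fall back to the normal contiguous allocator. The following is a hedged userspace model with hypothetical stand-in types, not kernel API:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct dev {
	bool has_pool;		/* device has a restricted DMA pool */
	bool pool_coherent;	/* pool memory passes dma_coherent_ok() */
};

static char pool_page, contig_page;	/* dummy backing "pages" */

static void *pool_alloc(struct dev *d, size_t size)
{
	(void)size;
	return d->has_pool ? &pool_page : NULL;
}

static void pool_free(struct dev *d, void *p)
{
	(void)d;
	(void)p;
}

static bool coherent_ok(struct dev *d, void *p)
{
	(void)p;
	return d->pool_coherent;
}

static void *contig_alloc(size_t size)
{
	(void)size;
	return &contig_page;
}

/* Mirrors the hunk in __dma_direct_alloc_pages(): restricted pool first,
 * contiguous allocator as the fallback. */
static void *alloc_pages_model(struct dev *d, size_t size)
{
	void *page = pool_alloc(d, size);

	if (page && !coherent_ok(d, page)) {
		pool_free(d, page);
		page = NULL;
	}
	if (!page)
		page = contig_alloc(size);
	return page;
}
```

The key design point is that a restricted-pool page that lands outside the device's coherent mask is released and the allocation retried from the generic path, rather than failing outright.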

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 38 +++++++++++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index eb4098323bbc..0d521f78c7b9 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 static void __dma_direct_free_pages(struct device *dev, struct page *page,
 				    size_t size)
 {
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (swiotlb_free(dev, page, size))
+		return;
+#endif
 	dma_free_contiguous(dev, page, size);
 }
 
@@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, size, gfp);
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	page = swiotlb_alloc(dev, size);
+	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+		__dma_direct_free_pages(dev, page, size);
+		page = NULL;
+	}
+#endif
+
+	if (!page)
+		page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -161,18 +175,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must
+	 * set up another device coherent pool via shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -253,15 +272,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -289,7 +308,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:52:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:52:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125095.235569 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2ah-0005tM-KR; Mon, 10 May 2021 09:52:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125095.235569; Mon, 10 May 2021 09:52:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2ah-0005tD-H0; Mon, 10 May 2021 09:52:43 +0000
Received: by outflank-mailman (input) for mailman id 125095;
 Mon, 10 May 2021 09:52:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2ag-0005qZ-FR
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:52:42 +0000
Received: from mail-pf1-x42d.google.com (unknown [2607:f8b0:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a018552-59ae-463f-bc29-60b1673a8e23;
 Mon, 10 May 2021 09:52:41 +0000 (UTC)
Received: by mail-pf1-x42d.google.com with SMTP id h127so13224588pfe.9
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:52:41 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id q7sm9111446pfq.172.2021.05.10.02.52.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:52:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a018552-59ae-463f-bc29-60b1673a8e23
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=TI//+rWs2AonD6qoX7ur8OGwRk/rsyDr2ocu0NzNq5I=;
        b=OeemdSgPmia6P5EsXKkEzN0OL4fVLwcXTd/CieenRMdBKHxFqUP4Df4joJbySWSSMc
         NQa62kYUP6NBQhMfe69O4Y1TVNyiYjME1kHrojL7NiW2Og6QvxSOkeGdpSahkdmKkkLI
         olHy6atIMBhauJm3Bf+RxBbMLkFko3Ac9JTU8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=TI//+rWs2AonD6qoX7ur8OGwRk/rsyDr2ocu0NzNq5I=;
        b=VPqEiaFCZ8Iod4Kt7G0A1u2pn69ahRvSOISRAscmaM5gLOipBL7LOcbHpf/9A21z70
         B54BUN6AK/DDsJ2Qg+dH0lvyaIrGxbJx/5N9mFDgoVYaeSnHzwmQJ1GMa9++BZH3vayK
         sLN0tMUUuqNnDFSfRXoMORm+i9WtILQr+FyHFXDqLdL/tYK6oPiBKbvJ5OJ/AONmfHUg
         hA8MSvtSSfXEaZEU3K0H4gEuE8MGn0On7aBeS2PmsN8nRw75iGRcvpz6z6lNt4llT9PO
         l3aH+0X4yRjSK56yEmQTNdg+6YRMPDQlFCx0NRZ8x+hRVUYElQnOMuFvsNe4ZdqYs6aI
         jp1Q==
X-Gm-Message-State: AOAM533qEDVjDpdbt6fsXab6PWe264jUjeL41hcbWzTeh7WPGTxcGUZ7
	JcH8lAI5WfrnoimWGOgqP/k0pw==
X-Google-Smtp-Source: ABdhPJx3NolYg0tYgQE7hrG22fSVZg4paXFqQrFAkVZpowUqVebMyfoHJdLoW9aBQ5hSJPVDYKavFA==
X-Received: by 2002:a63:5160:: with SMTP id r32mr23960989pgl.83.1620640360984;
        Mon, 10 May 2021 02:52:40 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 14/15] dt-bindings: of: Add restricted DMA pool
Date: Mon, 10 May 2021 17:50:25 +0800
Message-Id: <20210510095026.3477496-15-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified via a restricted-dma-pool child node of the reserved-memory
node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..284aea659015 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock down
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool via
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -120,6 +137,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_mem_reserved: restricted_dma_mem_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x400000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		memory-region = <&restricted_dma_mem_reserved>;
+		/* ... */
+	};
 };
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 09:52:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 09:52:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125099.235582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2ar-0006XU-VH; Mon, 10 May 2021 09:52:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125099.235582; Mon, 10 May 2021 09:52:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2ar-0006XH-RY; Mon, 10 May 2021 09:52:53 +0000
Received: by outflank-mailman (input) for mailman id 125099;
 Mon, 10 May 2021 09:52:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2aq-0006Ri-DW
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 09:52:52 +0000
Received: from mail-pg1-x52a.google.com (unknown [2607:f8b0:4864:20::52a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a01bc71d-4107-47c7-8ccd-60c6c51a265c;
 Mon, 10 May 2021 09:52:50 +0000 (UTC)
Received: by mail-pg1-x52a.google.com with SMTP id y32so12888901pga.11
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 02:52:50 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a524:abe8:94e3:5601])
 by smtp.gmail.com with UTF8SMTPSA id k9sm4190684pgq.27.2021.05.10.02.52.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 02:52:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a01bc71d-4107-47c7-8ccd-60c6c51a265c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=N9YtEc1AKRWzrvXmeOPdeOSLAm21S21YZtTJBnrwUb8=;
        b=K1N6EXk85Wvc+hEOva4wXm/pG0LlShcwRNR1nzYaMJRORgqDZTx8d0OHgKA7EZWdW9
         DKVvH5XQc7hrxaIcoiPBWcXqLtVTb0ShzjIeG4+8nxjarb1dd1TCqieDYri51cXF3WJ3
         rTKZnoW2Yosw5eCi0THzxANV5pwO1EvAdcAaY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=N9YtEc1AKRWzrvXmeOPdeOSLAm21S21YZtTJBnrwUb8=;
        b=UWew6G+VAUU+Fj8ZU8P+u1pjluqMmqRDEFabknrP/tzzXK30i1YrRJWE6El3SRyB3j
         cwBgEajHtQOe8E8ZtZYjFm6YLOoCdMcCH1imqNfB2gNOdrl4xSYSINw1DOt36tN24v1t
         xJKrl8WjicoWE5JtY6BZxic5isYEGmsVhQsz29XErvBycJPwmGY19bW6XKdbkPsst/U5
         fMpcDm6f6ITSw7Kq1HY5wX4aR5tO/6fSqqT5ReudoezBat4LWQOfKdhYJW5PzhdzJN/e
         oQdshOEL8fRDZBvbgPbfiq6VV6z7qMYNWLgW4aJOTr8d73udunvvospKX9APrdci1Urz
         a7Zw==
X-Gm-Message-State: AOAM532JiPZ3JXS7zzAl+TzZGXCN0e6ZVu4iMClDXWYNWGm+rUWi5zhy
	e583gISS7oWwG1LJmMPTORkTNw==
X-Google-Smtp-Source: ABdhPJx4HfAA4wpH8/Kt4ymLNpegRYw4Z9ZhxTLF9qKqtTWcfssB+nzDFXYh+57Un66kEi/PTJPfag==
X-Received: by 2002:a62:28b:0:b029:28e:e592:fe65 with SMTP id 133-20020a62028b0000b029028ee592fe65mr23864386pfc.75.1620640369611;
        Mon, 10 May 2021 02:52:49 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v6 15/15] of: Add plumbing for restricted DMA pool
Date: Mon, 10 May 2021 17:50:26 +0800
Message-Id: <20210510095026.3477496-16-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.607.g51e8a6a459-goog
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
References: <20210510095026.3477496-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, we look up the device node and set
up restricted DMA when a restricted-dma-pool region is present.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 25 +++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  5 +++++
 3 files changed, 33 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index aca94c348bd4..c562a9ff5f0b 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1112,6 +1113,30 @@ bool of_dma_is_coherent(struct device_node *np)
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
 
+int of_dma_set_restricted_buffer(struct device *dev)
+{
+	struct device_node *node;
+	int count, i;
+
+	if (!dev->of_node)
+		return 0;
+
+	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+						sizeof(phandle));
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		/* There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(
+				dev, dev->of_node, i);
+	}
+
+	return 0;
+}
+
 /**
  * of_mmio_is_nonposted - Check if device uses non-posted MMIO
  * @np:	device node
diff --git a/drivers/of/device.c b/drivers/of/device.c
index c5a9473a5fb1..d8d865223e51 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d717efbd637d..e9237f5eff48 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,12 +163,17 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.31.1.607.g51e8a6a459-goog



From xen-devel-bounces@lists.xenproject.org Mon May 10 10:01:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 10:01:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125141.235593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2jY-0000SM-Pl; Mon, 10 May 2021 10:01:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125141.235593; Mon, 10 May 2021 10:01:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg2jY-0000SF-Mp; Mon, 10 May 2021 10:01:52 +0000
Received: by outflank-mailman (input) for mailman id 125141;
 Mon, 10 May 2021 10:01:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NcLl=KF=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lg2jX-0000S9-TJ
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 10:01:51 +0000
Received: from mail-pj1-x102f.google.com (unknown [2607:f8b0:4864:20::102f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id de84b93b-fc1a-4620-91fe-2dc0e3e85d62;
 Mon, 10 May 2021 10:01:51 +0000 (UTC)
Received: by mail-pj1-x102f.google.com with SMTP id lp4so9460074pjb.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 03:01:51 -0700 (PDT)
Received: from mail-pf1-f171.google.com (mail-pf1-f171.google.com.
 [209.85.210.171])
 by smtp.gmail.com with ESMTPSA id ge4sm10944817pjb.4.2021.05.10.03.01.49
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 03:01:49 -0700 (PDT)
Received: by mail-pf1-f171.google.com with SMTP id 10so13396307pfl.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 03:01:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: de84b93b-fc1a-4620-91fe-2dc0e3e85d62
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=FsG0kiz0NRABZofCxSQspiIrXqavy4umxXhfkY+/Sxk=;
        b=W+HYriAR9Do/fBW2QstFckigQIl1nRoVqh8kV8zVcpd71XHdedwgr9ODHG4afQkOIy
         ynwXu3yhnIcmMABVDj/QNeRt0F7V/f3la3pyVX+S+CEgEQdWOUMw7w57whXmk3tKqZQ+
         S7fuzxNEGgGmhiunD9+8473p2BUhLcDS4p74Q=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=FsG0kiz0NRABZofCxSQspiIrXqavy4umxXhfkY+/Sxk=;
        b=UyoRQsKjY2gDGBXEHmGpJD7fNNe5LHLrNevr45ZMQYC5W3S6I76ze88k2VfkidRJZA
         TYfApj7xUae/wP/wZr9e5DIurQBPuiynDJC5eNpEeOySre62rhZthg61ELm8XPkpwfyd
         1aU1EXO+NW7m88JbkFcAflenJXgtxnokG4DJRBNPzzU4/FCfAdqP/8cvNYCUTmWOS+hX
         McJyGqoZGwF1+aIWtQnTgIvbpMmxakE/aubMtUPbwl2DmwWGH+8mStcHo44bl4wo5IVx
         MTeyGgARj5xfGC43hneBO8i6OtEJwmSVIQykUytVwPwofS3R27RKc5MDMo5zC+UrcCfA
         E81g==
X-Gm-Message-State: AOAM5332kqJ/x1XOKwQMeJkmNusJLBe0M7G4N3+qBmrwGYQ7MKlGe7h+
	CAKk8U8j3VhxsHftVX1/pkvCUaZgV4cn7A==
X-Google-Smtp-Source: ABdhPJzDBfEZ1SvnPnCnaP5armFLMJwRxiHEVqFZmtNXGp21NSRZKdUobwXS2WhR1dGfXjL8nvvZAA==
X-Received: by 2002:a17:902:ed93:b029:ed:5770:5e87 with SMTP id e19-20020a170902ed93b02900ed57705e87mr23544796plj.11.1620640909971;
        Mon, 10 May 2021 03:01:49 -0700 (PDT)
X-Received: by 2002:a92:6804:: with SMTP id d4mr20856971ilc.5.1620640447894;
 Mon, 10 May 2021 02:54:07 -0700 (PDT)
MIME-Version: 1.0
References: <20210422081508.3942748-1-tientzu@chromium.org>
In-Reply-To: <20210422081508.3942748-1-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Mon, 10 May 2021 17:53:57 +0800
X-Gmail-Original-Message-ID: <CALiNf2_h8r6jpd1JqTwNEmW22KK8aT9B4djLKkYP7Hhnju2EKw@mail.gmail.com>
Message-ID: <CALiNf2_h8r6jpd1JqTwNEmW22KK8aT9B4djLKkYP7Hhnju2EKw@mail.gmail.com>
Subject: Re: [PATCH v5 00/16] Restricted DMA
To: Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
	Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, nouveau@lists.freedesktop.org, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v6: https://lore.kernel.org/patchwork/cover/1423201/


From xen-devel-bounces@lists.xenproject.org Mon May 10 11:32:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 11:32:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125155.235606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg48x-0000XZ-Jj; Mon, 10 May 2021 11:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125155.235606; Mon, 10 May 2021 11:32:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg48x-0000XS-Gs; Mon, 10 May 2021 11:32:11 +0000
Received: by outflank-mailman (input) for mailman id 125155;
 Mon, 10 May 2021 11:32:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jUww=KF=infradead.org=peterz@srs-us1.protection.inumbo.net>)
 id 1lg48v-0000XM-Pf
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 11:32:10 +0000
Received: from desiato.infradead.org (unknown
 [2001:8b0:10b:1:d65d:64ff:fe57:4e05])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ce7f819e-ef6b-4c8f-b05a-ae2305847321;
 Mon, 10 May 2021 11:32:05 +0000 (UTC)
Received: from j217100.upc-j.chello.nl ([24.132.217.100]
 helo=noisy.programming.kicks-ass.net)
 by desiato.infradead.org with esmtpsa (Exim 4.94 #2 (Red Hat Linux))
 id 1lg48m-00E5um-VO; Mon, 10 May 2021 11:32:01 +0000
Received: from hirez.programming.kicks-ass.net
 (hirez.programming.kicks-ass.net [192.168.1.225])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (2048 bits))
 (Client did not present a certificate)
 by noisy.programming.kicks-ass.net (Postfix) with ESMTPS id 32B893002C4;
 Mon, 10 May 2021 13:32:00 +0200 (CEST)
Received: by hirez.programming.kicks-ass.net (Postfix, from userid 1000)
 id 10A7C2028F007; Mon, 10 May 2021 13:32:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce7f819e-ef6b-4c8f-b05a-ae2305847321
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=infradead.org; s=desiato.20200630; h=In-Reply-To:Content-Transfer-Encoding:
	Content-Type:MIME-Version:References:Message-ID:Subject:Cc:To:From:Date:
	Sender:Reply-To:Content-ID:Content-Description;
	bh=TVPjZAPijAHa4LLIWNuJ7wX5h0yLsHB88OKvBcyrgOo=; b=Gw1gDi9F8ySm9OmXtdCpTxBtJg
	Nq3qzu+lXQfCZfJwhi+774DevttDiMcVEwLMIXFt6kcxjVSmJKOBvR/oN1/wZk8GwzpL6opOQwnEr
	eWvtD1szk+xiE7MTg/c6Cd4ktFdvjdYp0YELxgAg1m2R5H4h/7/R1cLpqj6ylG3WTx1ljqIocEFuv
	521+Hbg4Nb+AkVWd7EReAGd6Uq+gwPWjom7BhVcI8qV+q447UnxQ7X/JPUmtBqEUtgd4XfGQZHyk6
	5XLw1M5g+Y7DFnFv6oBshdWuk/6MEJCRw5EeShxPlFUk6obt6nFHGmcIXXK5q2/Sfyp+oRlBTNwhT
	OnHZTXCQ==;
Date: Mon, 10 May 2021 13:31:59 +0200
From: Peter Zijlstra <peterz@infradead.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	x86@kernel.org, Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, Borislav Petkov <bp@alien8.de>,
	"H. Peter Anvin" <hpa@zytor.com>
Subject: Re: [PATCH 0/3] xen: remove some checks for always present Xen
 features
Message-ID: <YJkZr7yIVVW1Fw+o@hirez.programming.kicks-ass.net>
References: <20210422151007.2205-1-jgross@suse.com>
 <3c89ca14-8790-2d0e-a115-16a0976f68e3@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
In-Reply-To: <3c89ca14-8790-2d0e-a115-16a0976f68e3@suse.com>

On Mon, May 10, 2021 at 09:34:18AM +0200, Juergen Gross wrote:
> On 22.04.21 17:10, Juergen Gross wrote:
> > Some features of Xen can be assumed to be always present, so add a
> > central check to verify this being true and remove the other checks.
> >
> > Juergen Gross (3):
> >    xen: check required Xen features
> >    xen: assume XENFEAT_mmu_pt_update_preserve_ad being set for pv guests
> >    xen: assume XENFEAT_gnttab_map_avail_bits being set for pv guests
> >
> >   arch/x86/xen/enlighten_pv.c | 12 ++----------
> >   arch/x86/xen/mmu_pv.c       |  4 ++--
> >   drivers/xen/features.c      | 18 ++++++++++++++++++
> >   drivers/xen/gntdev.c        | 36 ++----------------------------------
> >   4 files changed, 24 insertions(+), 46 deletions(-)
> >
>
> Could I please get some feedback on this series?

I'm obviously in favour, but given I'm not an actual Xen user that might
not be worth much, still:

Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>


From xen-devel-bounces@lists.xenproject.org Mon May 10 11:56:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 11:56:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125163.235618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg4WP-000323-KC; Mon, 10 May 2021 11:56:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125163.235618; Mon, 10 May 2021 11:56:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg4WP-00031w-Gh; Mon, 10 May 2021 11:56:25 +0000
Received: by outflank-mailman (input) for mailman id 125163;
 Mon, 10 May 2021 11:56:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MfTK=KF=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1lg4WN-00031q-WB
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 11:56:24 +0000
Received: from mail-qv1-xf34.google.com (unknown [2607:f8b0:4864:20::f34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c72902f-f07f-4719-9f16-0f1c82fe520f;
 Mon, 10 May 2021 11:56:23 +0000 (UTC)
Received: by mail-qv1-xf34.google.com with SMTP id u7so8092347qvv.12
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 04:56:23 -0700 (PDT)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net.
 [72.81.132.2])
 by smtp.gmail.com with ESMTPSA id 67sm11525758qtf.54.2021.05.10.04.56.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 04:56:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c72902f-f07f-4719-9f16-0f1c82fe520f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=XCJVmYNyrM3q94U5z333NtN42F/p27BnwTSwO1Xl7Cg=;
        b=VTgmozNTHdgjFRff9DtYHbWdGehM00zvo/G+1WYdb1tJ8x4D2AqqOCbKgL1CbDbNYL
         ckbCCwYgmrSNJ9c/aKzAwdLs+4vb8cWcNLfLnAHeg4XdBsL70Vlk66lGK9z6Lzl7XJWd
         6yfnv49ikG7KuRLtAEBWJpeqj5Ke+23vgvj7VRFDJh9eLkKibI+vTt5fLpR5DLi7qLQN
         y4Yg/W8ud4LBM3j9XK2/yd1u/TlFncbP82vbRnvaPMUparsstTfNnTwwg1Evhm+A/IRY
         2VKchEJQLVUzU88DOhWq0hqHJHXBv+2EHtmE/BVn/D/iLQBZlRbmaA96vTx3yaJ/9nWZ
         qZ2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=XCJVmYNyrM3q94U5z333NtN42F/p27BnwTSwO1Xl7Cg=;
        b=o+eKDksTENLlifv+0wR3GXPwEFj8UrF7PgMBLBezjB/3DwdDqpbnkLSGZgbZmHTriy
         EKUhRMNP9Reb5SjARHhkSQ9IY8FYt5d9KKUJlBB3nLIo2QtC8AYTe4qtBV1K6GqE1U8F
         0Rd9c2zW60ZBW+UKgU5BHHZZUQAefK0OTuRYaYw8VDVZ88/jjVzgQP9DaNV8N4P5SBC+
         ob8lGddihbRZg85O0px3TiwOszo56K8WtExBb638frErL3awt3rswP4L+UtvVKb3+fAw
         rL6e6iaFnH9UXvPfrTPB2bNBL/syeyimtbEgYgqfCacyw7vwiKZ0u8IdP1iGB/weDF2X
         O3lQ==
X-Gm-Message-State: AOAM530IqwgT2rc9TmJzYfDjMNdFotryhPBZ67w1wzD05/ov64+6P61a
	o/3IT5l5Rx97I2guastaYZs=
X-Google-Smtp-Source: ABdhPJytltMqMo/fhsJ+XFRv+wIo5w4SlzRQ5N6/2CxX5I6ZFrmXNACqm0byydhRNeIcgU6k+NRVtw==
X-Received: by 2002:a05:6214:62a:: with SMTP id a10mr22842521qvx.5.1620647782834;
        Mon, 10 May 2021 04:56:22 -0700 (PDT)
Subject: Re: [PATCH v2 04/13] vtpmmgr: Allow specifying srk_handle for TPM2
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-5-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <cd1c7281-6a10-94dc-7b6a-3897c8d895d5@gmail.com>
Date: Mon, 10 May 2021 07:56:21 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-5-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> Bypass taking ownership of the TPM2 if an srk_handle is specified.
> 
> This srk_handle must be usable with Null auth for the time being.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

> v2: Use "=" separator
> ---
>  docs/man/xen-vtpmmgr.7.pod |  7 +++++++
>  stubdom/vtpmmgr/init.c     | 11 ++++++++++-
>  2 files changed, 17 insertions(+), 1 deletion(-)
> 
> diff --git a/docs/man/xen-vtpmmgr.7.pod b/docs/man/xen-vtpmmgr.7.pod
> index 875dcce508..3286954568 100644
> --- a/docs/man/xen-vtpmmgr.7.pod
> +++ b/docs/man/xen-vtpmmgr.7.pod
> @@ -92,6 +92,13 @@ Valid arguments:
>  
>  =over 4
>  
> +=item srk_handle=<HANDLE>
> +
> +Specify an srk_handle for TPM 2.0.  TPM 2.0 uses a key hierarchy, and
> +this allows specifying the parent handle under which vtpmmgr creates
> +its own key.  Using this option bypasses vtpmmgr trying to take
> +ownership of the TPM.
> +
>  =item owner_auth=<AUTHSPEC>
>  
>  =item srk_auth=<AUTHSPEC>
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 1506735051..130e4f4bf6 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -302,6 +302,11 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
>              goto err_invalid;
>           }
>        }
> +      else if(!strncmp(argv[i], "srk_handle=", 11)) {
> +         if(sscanf(argv[i] + 11, "%x", &vtpm_globals.srk_handle) != 1) {
> +            goto err_invalid;
> +         }
> +      }
>        else if(!strncmp(argv[i], "tpmdriver=", 10)) {
>           if(!strcmp(argv[i] + 10, "tpm_tis")) {
>              opts->tpmdriver = TPMDRV_TPM_TIS;
> @@ -586,7 +591,11 @@ TPM_RESULT vtpmmgr2_create(void)
>  {
>      TPM_RESULT status = TPM_SUCCESS;
>  
> -    TPMTRYRETURN(tpm2_take_ownership());
> +    if ( vtpm_globals.srk_handle == 0 ) {
> +        TPMTRYRETURN(tpm2_take_ownership());
> +    } else {
> +        tpm2_AuthArea_ctor(NULL, 0, &vtpm_globals.srk_auth_area);
> +    }
>  
>     /* create SK */
>      TPM2_Create_Params_out out;
> 



From xen-devel-bounces@lists.xenproject.org Mon May 10 12:12:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 12:12:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125174.235630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg4lq-0005NU-3Q; Mon, 10 May 2021 12:12:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125174.235630; Mon, 10 May 2021 12:12:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg4lp-0005NN-Vt; Mon, 10 May 2021 12:12:21 +0000
Received: by outflank-mailman (input) for mailman id 125174;
 Mon, 10 May 2021 12:12:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XNES=KF=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lg4lo-0005NH-SG
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 12:12:21 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7c0c8ae-5353-47ac-a2ab-575ec1e45c54;
 Mon, 10 May 2021 12:12:20 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14AC4omh072462;
 Mon, 10 May 2021 12:12:03 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by userp2130.oracle.com with ESMTP id 38e285a8en-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 10 May 2021 12:12:03 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14AC5kVv174029;
 Mon, 10 May 2021 12:12:02 GMT
Received: from nam04-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam08lp2171.outbound.protection.outlook.com [104.47.73.171])
 by aserp3030.oracle.com with ESMTP id 38e5pv281j-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 10 May 2021 12:12:02 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com (2603:10b6:a03:156::21)
 by BY5PR10MB3889.namprd10.prod.outlook.com (2603:10b6:a03:1b1::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25; Mon, 10 May
 2021 12:11:59 +0000
Received: from BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::819c:ca1f:448d:3024]) by BYAPR10MB3288.namprd10.prod.outlook.com
 ([fe80::819c:ca1f:448d:3024%7]) with mapi id 15.20.4108.031; Mon, 10 May 2021
 12:11:59 +0000
Received: from [192.168.1.195] (73.249.50.119) by
 SA9PR13CA0123.namprd13.prod.outlook.com (2603:10b6:806:27::8) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.11 via Frontend Transport; Mon, 10 May 2021 12:11:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7c0c8ae-5353-47ac-a2ab-575ec1e45c54
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=rIcuE5kOedayuhR3R9WmVjbSFPpiiS9ZXiljrSfR7Z4=;
 b=OQ3NDsGZD+UpM/nL6miAGHoE51IGwL4/qjv+CpAJbppaxYxZ0LnR5lUh0Tqx7VlAr99N
 Wx8UhHH4BsAooIPR/0rQO62bcQJeWN+eNVqVzb4WwBSqnHH7buPAkk6NExJirZyjgQyp
 IwU9M3QnY5miSWEVSzZS/xaN9i2Iyzn2GZnaw+E7E/qZX8NiyFZTSpkx03moV44ej9tP
 hNHwkn5jka1SsFP8w05u+1tQQ4wTgQ1cdlfZO7RstJGJM3cdg131aJzXjllseWUzBJ8i
 1zc0FTYbPaJaNQ6tx5qQ61cEOrltaS9vpZWsDtN7x/QLtZEQwRLNR56SYeH9x1JFo2WU Dg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Bq9vy0bcYQM0Rk/Yiq5Yw9SgFpFbKtviHuE7bNZG9yWng76ZbwogdtuelnsvOaSVlcsu5PDF2iqvaSmUgdKUwAtslpX7UfSFmCJpBwF+zlCs2OYLufYqm/PE1Jv3Juh4PB3IK6yGgkm38OTKwjTXMVNlTandXBHU3cGNfId1il5bmM4hW3VOCcretGno7EJr+0IRaat9jtlezDSGoz7gJGBhQ05PP5VbhBmuM76A3ypOOZ1npnYEW0jZX9MOwUoQn8Rrplad1EyqzpFBZjkstnP5rvGxq+2SimPY+uCZ5AkNlFKIyTSvqqciabZ5bWc2Gyndh8P+kuu/R1dcFvnH+A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rIcuE5kOedayuhR3R9WmVjbSFPpiiS9ZXiljrSfR7Z4=;
 b=KMvTFH3UkJ4RIFiSFFZsVtU771rlVg5q3+ds//5aJZ75n/cokvNEX7ss35rfMlT1JkbMU2FXIPfwhLdrWDdJJF7UBkCvsD14+u2q07SMV88tzO166azsdMGvQ1D/cVBbK1lD/75G0aIDxRZXR0a8ZTuZjW4fVNvz5wAeSnmh1WGaaxao52Y1w9LRaJtkWDdyBTZtNzRRaHk/tKb4JoivQTiOL0M+eKvZs55ptL2Wv62vYtNUtQpYyugdolpenmOLyaWUbDrBq/lZtTFLUzPtXLlqxRbM9ZB6faaj4+dyJvJMhIBxsbyk/lQ66b6KBNAKLXDI8Mh8RBhTv2F7+UZ3Hg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=rIcuE5kOedayuhR3R9WmVjbSFPpiiS9ZXiljrSfR7Z4=;
 b=sq1b2UC/rvFUSGHijWtVfFZZWnges9C2YBiZcepHCkQ8Vltjc4qgbdYlOAyPh/2JQ5kGGDg/InSrtGPMouKkrDVZ47jh7DgvWPiVfqUgBcaWh9WX+gWP/V4zHhR6C3zBNALVeIVmEGFRqQ+uG6FIYlYy5FOo2/U4/8hpkh/WFa0=
Authentication-Results: infradead.org; dkim=none (message not signed)
 header.d=none;infradead.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH 1/3] xen: check required Xen features
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
        linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
        Peter Zijlstra <peterz@infradead.org>
References: <20210422151007.2205-1-jgross@suse.com>
 <20210422151007.2205-2-jgross@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <4282bb6f-1d19-4d00-d468-f5d4c7fb0f90@oracle.com>
Date: Mon, 10 May 2021 08:11:56 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <20210422151007.2205-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [73.249.50.119]
X-ClientProxiedBy: SA9PR13CA0123.namprd13.prod.outlook.com
 (2603:10b6:806:27::8) To BYAPR10MB3288.namprd10.prod.outlook.com
 (2603:10b6:a03:156::21)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 17806bd3-6ce4-41f6-05d4-08d913acd002
X-MS-TrafficTypeDiagnostic: BY5PR10MB3889:
X-Microsoft-Antispam-PRVS: 
	<BY5PR10MB3889FC6E7DFDFD6745AC8A008A549@BY5PR10MB3889.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3173;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 17806bd3-6ce4-41f6-05d4-08d913acd002
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB3288.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 10 May 2021 12:11:59.6121
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vdtap1MUo01iyOxxbDU2jeY+Utlm9nLyZVtmj23zmvc+cdpVuyIx8LiuJ3ow4JZk3d51Fibq8ahaAdPyrNmAXEEj4kiITWqc/jD/50h7Vkg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR10MB3889
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9979 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 malwarescore=0 spamscore=0
 mlxscore=0 adultscore=0 mlxlogscore=999 bulkscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105100089
X-Proofpoint-GUID: AfmjJP8iHAb3vGWGmOLhqBTsqo9aLmpT
X-Proofpoint-ORIG-GUID: AfmjJP8iHAb3vGWGmOLhqBTsqo9aLmpT
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9979 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxlogscore=999
 mlxscore=0 bulkscore=0 lowpriorityscore=0 priorityscore=1501 spamscore=0
 clxscore=1011 impostorscore=0 phishscore=0 malwarescore=0 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105100089


On 4/22/21 11:10 AM, Juergen Gross wrote:
>  
> +/*
> + * Linux kernel expects at least Xen 4.0.
> + *
> + * Assume some features to be available for that reason (depending on guest
> + * mode, of course).
> + */
> +#define chk_feature(f) {						\
> +		if (!xen_feature(f))					\
> +			pr_err("Xen: feature %s not available!\n", #f);	\
> +	}


With your changes in the subsequent patches, are we still going to function properly without those features? (i.e. maybe we should just panic)


(Also, chk_required_features() perhaps?)


-boris


> +
>  u8 xen_features[XENFEAT_NR_SUBMAPS * 32] __read_mostly;
>  EXPORT_SYMBOL_GPL(xen_features);
>  
> @@ -31,4 +44,9 @@ void xen_setup_features(void)
>  		for (j = 0; j < 32; j++)
>  			xen_features[i * 32 + j] = !!(fi.submap & 1<<j);
>  	}
> +
> +	if (xen_pv_domain()) {
> +		chk_feature(XENFEAT_mmu_pt_update_preserve_ad);
> +		chk_feature(XENFEAT_gnttab_map_avail_bits);
> +	}
>  }


From xen-devel-bounces@lists.xenproject.org Mon May 10 12:12:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 12:12:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125175.235641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg4mH-0005tA-BC; Mon, 10 May 2021 12:12:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125175.235641; Mon, 10 May 2021 12:12:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg4mH-0005t1-84; Mon, 10 May 2021 12:12:49 +0000
Received: by outflank-mailman (input) for mailman id 125175;
 Mon, 10 May 2021 12:12:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Mbt=KF=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lg4mF-0005ow-Co
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 12:12:47 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 20eee47e-e995-4984-b29f-f0c588867edf;
 Mon, 10 May 2021 12:12:46 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 162064875843729.611131871728276;
 Mon, 10 May 2021 05:12:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20eee47e-e995-4984-b29f-f0c588867edf
ARC-Seal: i=1; a=rsa-sha256; t=1620648763; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=SBsdWi0leMcf0Qga1YOkNHaFwLzDkW7d9HKwGMsg437YT6NjczdH2i4osaj4Lvgb8t5aor/9ZV2gOfPQNgBR0suB4R101yJ8Wl5ZKYpI90ElqTywhh9bA97hguBbzfXz/rJNzIVu53TYA3roeW+isizi3vYrksQqkSrsShUJF/o=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1620648763; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=IUuO+4CPENSAnYQ5OZVeksgThItfvhkq3+AKtI7uo5g=; 
	b=DIqidl2VM1t7tsRhFewz5uNtfn5fT+fezdiSLIG8H2NwVJhhnK+q8EopDKoBmeNw5HwFJMcOvUR7KkERJ1kpbUWK1g88tqaWhdewgc4pHR+jLpisyTvubvQ4sWg4nT+PNooz85Eez9Icd3k0a/QuoEjW9hBDpIRUsTdv9gW30Go=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1620648763;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=IUuO+4CPENSAnYQ5OZVeksgThItfvhkq3+AKtI7uo5g=;
	b=ZecZkM8UeZUPpwWTKdddWLB/WnbwqBCokZW44inDGnBmQQbKemdfJLU9YRo8LKFs
	xT7GWki8LPMo5txa3Cklvy3T2gTT7W6NPIPOCU8W9GX+Ome8ibqDKRwlYn3+cz6lqXq
	xwmZyqCLzkElI94Nfy3AwGeYnDn8yCYgMeCc8K40=
Subject: Re: [PATCH v2 06/13] vtpmmgr: Flush transient keys on shutdown
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-7-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <0f5dfb40-90d2-1866-a570-2cb3eefcb6d3@apertussolutions.com>
Date: Mon, 10 May 2021 08:12:36 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-7-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> Remove our key so it isn't left in the TPM for someone to come along
> after vtpmmgr shuts down.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/init.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index decf8e8b4d..56b4be85b3 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -792,6 +792,14 @@ void vtpmmgr_shutdown(void)
>     /* Close tpmback */
>     shutdown_tpmback();
>  
> +    if (hw_is_tpm2()) {
> +        /* Blow away all stale handles left in the tpm */
> +        if (flush_tpm2() != TPM_SUCCESS) {
> +            vtpmlogerror(VTPM_LOG_TPM,
> +                         "TPM2_FlushResources failed, continuing shutdown..\n");
> +        }
> +    }
> +
>     /* Close tpmfront/tpm_tis */
>     close(vtpm_globals.tpm_fd);
>  
> 



From xen-devel-bounces@lists.xenproject.org Mon May 10 12:20:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 12:20:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125189.235653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg4tN-0007SY-38; Mon, 10 May 2021 12:20:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125189.235653; Mon, 10 May 2021 12:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg4tN-0007SR-0D; Mon, 10 May 2021 12:20:09 +0000
Received: by outflank-mailman (input) for mailman id 125189;
 Mon, 10 May 2021 12:20:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Mbt=KF=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lg4tL-0007SL-Lt
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 12:20:07 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id adf97eb2-566c-489a-8a76-eb3501f7d8dc;
 Mon, 10 May 2021 12:20:06 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1620649201249296.6079821401013;
 Mon, 10 May 2021 05:20:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: adf97eb2-566c-489a-8a76-eb3501f7d8dc
ARC-Seal: i=1; a=rsa-sha256; t=1620649203; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=houLbPScNXHc8ZZFQn6nuIWLU0m/G45nABZX3bZI81EZ4hyVPCq629AKhQJEooX9BwBvE+UWAaoz2uwOmTEpLu+uilV4pjsHr3O6ZVm71LOGhYF5Z39RnnxwdBJOkHhHSKhacyRyCjFEhGdWtTARKOktW3oquhibJI9Y5aeHurI=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1620649203; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=b5QIqmH34lMWwx9hU1HX9D4DAzXxa8mXB2e28YFf4PQ=; 
	b=Xtw7mNBaC1gZwZppKmd5ytTPtf5wHU6moHCvBTXYLhKEbG1exG3BVtfiKwOZ368Wd8ChbMvy8brjCCwisgDTK2djiDL3GNuKF6TSRtiQB1YZygdYV/L7CnHQMHAY+IFdEQb7mRqJafiLP0aJQvSBnGvaFfA12AC6UJOMRwfwdJM=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1620649203;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=b5QIqmH34lMWwx9hU1HX9D4DAzXxa8mXB2e28YFf4PQ=;
	b=J+dEvSCsIT2jRFNQPhWvse1HtTxTsbVlSsgVd1JWA/Jmrni87Bf+G1UnT1PnsjCE
	gjxmrxZPJobbTKsEeZNanDR2LjZlYtzagiVhYy6HU1zH3wzurj49fpMmwrEHFuUvcLd
	deeHT/OiZHsJlHhsA5WTDpZ8ag042hs454jjNTJk=
Subject: Re: [PATCH v2 07/13] vtpmmgr: Flush all transient keys
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-8-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <389c8617-1944-b570-2e68-57dbb45c94b0@apertussolutions.com>
Date: Mon, 10 May 2021 08:19:59 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-8-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> We're only flushing 2 transients, but there are 3 handles.  Use <= to also
> flush the third handle, since TRANSIENT_LAST is inclusive.
> 
> The number of transient handles/keys is hardware dependent, so this
> should query for the limit.  And assignment of handles is assumed to be
> sequential from the minimum.  That may not be guaranteed, but seems okay
> with my tpm2.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> ---
> v2 add "since TRANSIENT_LAST is inclusive" to commit message.
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/init.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 56b4be85b3..4ae34a4fcb 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -656,7 +656,7 @@ static TPM_RC flush_tpm2(void)
>  {
>      int i;
>  
> -    for (i = TRANSIENT_FIRST; i < TRANSIENT_LAST; i++)
> +    for (i = TRANSIENT_FIRST; i <= TRANSIENT_LAST; i++)
>           TPM2_FlushContext(i);
>  
>      return TPM_SUCCESS;
> 
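The off-by-one comes from TRANSIENT_LAST naming an inclusive upper bound. A standalone sketch of the corrected loop form (the handle values and the counting helper below are illustrative, not the real vtpmmgr constants or code):

```c
#include <assert.h>

/* Hypothetical inclusive handle range, mirroring TRANSIENT_FIRST/LAST.
 * Three handles total: 0x80000000, 0x80000001, 0x80000002. */
#define TRANSIENT_FIRST 0x80000000u
#define TRANSIENT_LAST  0x80000002u

static int count_flushed(void)
{
    int flushed = 0;
    unsigned int i;

    /* '<=' visits all three handles; '<' would skip TRANSIENT_LAST,
     * leaving its context resident in the TPM. */
    for (i = TRANSIENT_FIRST; i <= TRANSIENT_LAST; i++)
        flushed++;

    return flushed;
}
```

With the pre-patch `<` bound the same loop would count only two handles.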



From xen-devel-bounces@lists.xenproject.org Mon May 10 12:43:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 12:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125197.235666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5FS-0001Pc-Ve; Mon, 10 May 2021 12:42:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125197.235666; Mon, 10 May 2021 12:42:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5FS-0001PV-Sm; Mon, 10 May 2021 12:42:58 +0000
Received: by outflank-mailman (input) for mailman id 125197;
 Mon, 10 May 2021 12:42:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Mbt=KF=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lg5FS-0001PP-09
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 12:42:58 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 202f170b-23c6-4785-9eb0-68fe58095d52;
 Mon, 10 May 2021 12:42:57 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1620650572040854.0141297689187;
 Mon, 10 May 2021 05:42:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 202f170b-23c6-4785-9eb0-68fe58095d52
ARC-Seal: i=1; a=rsa-sha256; t=1620650574; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=CSNlFklawnHPee4lwxWi5eOyYhZNgpKheczdfGvyoP9x4cTB0bvxGgsw7GVu2sBBgNDHA59iTzU3Gv6cJVEEtb3ieYDIuy0s7seZqsYU7Yh96JDJDHSBhDnpe1bR/sEhhDnfE46p2BIKgHl3wfdWrN7x/usMedQl/yWjRxy5NI4=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1620650574; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=JrtRtYTziMVBkc25lrmbzpeFMiZIj4YEO43pv7WHuuQ=; 
	b=ZxXBJ/U9+OXhyfFm0aaRRFmWn+NKIAVg7Wh0NTPKMZDW/OQX1BAxFRO7FsRXH0SgMc5XcAYLRXrsCQgT1Esj2V1Et1dWlAGWVJAOgFn7tRjL9WaGN6NspUZo164Zd5jRu4t9UMnG9qwwMpd9SstRCf/ZZivmQO3RN68EI74zw/c=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1620650574;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=JrtRtYTziMVBkc25lrmbzpeFMiZIj4YEO43pv7WHuuQ=;
	b=EMma8h59ICjDj+NIvmY2AB4N7yOT4swZw2L+quVLcKIkA6imRVfh0GbqzTD3Dk8m
	IbTwTfXua4RNm1BZwV4PKYjIiXB4uZv0X84pqoj0PaLlDA4VMjvFGWOJFnuvF7zcv3w
	kyDaE+KggWEj89yejz4t1vyuzyhwKaEyYnbsDoY0=
Subject: Re: [PATCH v2 08/13] vtpmmgr: Shutdown more gracefully
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-9-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <bd57ee2f-2b7a-7ff8-69d6-a9275d959223@apertussolutions.com>
Date: Mon, 10 May 2021 08:42:50 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-9-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> vtpmmgr uses the default, weak app_shutdown, which immediately calls the
> shutdown hypercall.  This short-circuits the vtpmmgr cleanup logic.  We
> need to perform the cleanup to actually flush our key out of the TPM.
> 
> Setting do_shutdown is one step in that direction, but vtpmmgr will most
> likely be waiting in tpmback_req_any.  We need to call shutdown_tpmback
> to cancel the wait inside tpmback and perform the shutdown.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> Reviewed-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/vtpmmgr.c | 12 +++++++++++-
>  1 file changed, 11 insertions(+), 1 deletion(-)
> 
> diff --git a/stubdom/vtpmmgr/vtpmmgr.c b/stubdom/vtpmmgr/vtpmmgr.c
> index 9fddaa24f8..46ea018921 100644
> --- a/stubdom/vtpmmgr/vtpmmgr.c
> +++ b/stubdom/vtpmmgr/vtpmmgr.c
> @@ -67,11 +67,21 @@ int hw_is_tpm2(void)
>      return (hardware_version.hw_version == TPM2_HARDWARE) ? 1 : 0;
>  }
>  
> +static int do_shutdown;
> +
> +void app_shutdown(unsigned int reason)
> +{
> +    printk("Shutdown requested: %d\n", reason);
> +    do_shutdown = 1;
> +
> +    shutdown_tpmback();
> +}
> +
>  void main_loop(void) {
>     tpmcmd_t* tpmcmd;
>     uint8_t respbuf[TCPA_MAX_BUFFER_LENGTH];
>  
> -   while(1) {
> +   while (!do_shutdown) {
>        /* Wait for requests from a vtpm */
>        vtpmloginfo(VTPM_LOG_VTPM, "Waiting for commands from vTPM's:\n");
>        if((tpmcmd = tpmback_req_any()) == NULL) {
> 
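The shutdown pattern here (set a flag, then cancel the blocking wait so the loop can observe it) can be modeled in a self-contained way. In this sketch `req_any()` is a stand-in for `tpmback_req_any()`, and the real `shutdown_tpmback()` call is represented only by a comment:

```c
#include <assert.h>

static int do_shutdown;
static int pending_reqs = 2;

/* Stand-in for tpmback_req_any(): returns 1 while requests remain,
 * 0 once the queue is empty or shutdown has been requested. */
static int req_any(void)
{
    if (do_shutdown || pending_reqs == 0)
        return 0;
    pending_reqs--;
    return 1;
}

static void app_shutdown_model(void)
{
    do_shutdown = 1;  /* step 1: stop the loop from iterating again */
    /* step 2: here shutdown_tpmback() would cancel a wait currently
     * blocked inside req_any(), so the loop actually gets to re-test
     * the flag instead of sleeping forever. */
}

static int main_loop_model(void)
{
    int handled = 0;

    while (!do_shutdown) {
        if (!req_any())
            break;
        handled++;
    }
    return handled;
}
```

Both steps matter: the flag alone would not wake a loop parked in the blocking wait, and cancelling the wait without the flag would let the loop block again.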



From xen-devel-bounces@lists.xenproject.org Mon May 10 12:51:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 12:51:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125204.235678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5Nz-0002tc-Si; Mon, 10 May 2021 12:51:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125204.235678; Mon, 10 May 2021 12:51:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5Nz-0002tV-PT; Mon, 10 May 2021 12:51:47 +0000
Received: by outflank-mailman (input) for mailman id 125204;
 Mon, 10 May 2021 12:51:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Mbt=KF=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lg5Nx-0002tP-Le
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 12:51:45 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21936f57-bf11-4213-bae6-923d59036851;
 Mon, 10 May 2021 12:51:44 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1620651099282908.8077127391684;
 Mon, 10 May 2021 05:51:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21936f57-bf11-4213-bae6-923d59036851
ARC-Seal: i=1; a=rsa-sha256; t=1620651101; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=EmyTgMSss4I0LIy7xAxhF6XyANzhVWPKOOJG+YArKmIhWTyn6C3ojwgssdEBX3kEhfG/DTt/6XdlVNv3XpjNuK9Y+P+k2EDa1ESjZeQM3wbRTnwYctgK8zaVmUjIUvAG9tqCRR2zIGB+x6W5FfiYCaXXu8Q4zts7rQhqkKWxaIc=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1620651101; h=Content-Type:Content-Transfer-Encoding:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=pynKZeyQxVTifrojGR/tNbMPrrrFntd1doGv+bVQ9HQ=; 
	b=dcThfqocKl+1lARpjlTJ9vhYz913v8P+pBETRWVoUjIraZcml4NnN0DLHwH3QDgd55hqMhlIRd7gFpnl2zeNwtTZ+7EBJ23oZWoB3mDTyvon8eu3puQiFEwk1Vq8BgHFMiZfikB5B0GlVWL49f3n5p0FeqCbbSx6agmq5yt0ebM=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1620651101;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=pynKZeyQxVTifrojGR/tNbMPrrrFntd1doGv+bVQ9HQ=;
	b=PLquPeXE1+g+U6kyPoBlxTrJyz/i0I8vn2rQUOgM2CoOEpbSZNrKNHRzHcKoVJ9e
	5DWaYgoSEKLNY0mNEQqBtgWn1vGBTIWnMu4XLlRBKEZoEIKmrP9rZM3bwW/64DMPkEi
	e5Mxjr63ycw5pVZWyrnWTvi/O9PlK/9Rn03inmYY=
Subject: Re: [PATCH v2 09/13] vtpmmgr: Support GetRandom passthrough on TPM
 2.0
To: xen-devel@lists.xenproject.org
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-10-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <acbe24ac-0702-ebe3-1a6b-7f5899d97f79@apertussolutions.com>
Date: Mon, 10 May 2021 08:51:38 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-10-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> GetRandom passthrough currently fails when using vtpmmgr with a hardware
> TPM 2.0.
> vtpmmgr (8): INFO[VTPM]: Passthrough: TPM_GetRandom
> vtpm (12): vtpm_cmd.c:120: Error: TPM_GetRandom() failed with error code (30)
> 
> When running on TPM 2.0 hardware, vtpmmgr needs to convert the TPM 1.2
> TPM_ORD_GetRandom into a TPM2 TPM_CC_GetRandom command.  Besides the
> differing ordinal, TPM 1.2 uses 32-bit sizes for the request and
> response (vs. 16-bit for TPM2).
> 
> Place the random output directly into tpmcmd->resp and build the
> packet around it.  This avoids bouncing through an extra buffer, but the
> header has to be written after grabbing the random bytes, so that the
> number of bytes to include in the size is known.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> 
> ---
> v2:
> Add bounds and size checks
> Whitespace fixup
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/marshal.h          | 15 ++++++++
>  stubdom/vtpmmgr/vtpm_cmd_handler.c | 61 +++++++++++++++++++++++++++++-
>  2 files changed, 75 insertions(+), 1 deletion(-)
> 
> diff --git a/stubdom/vtpmmgr/marshal.h b/stubdom/vtpmmgr/marshal.h
> index dce19c6439..f1037a7976 100644
> --- a/stubdom/vtpmmgr/marshal.h
> +++ b/stubdom/vtpmmgr/marshal.h
> @@ -890,6 +890,15 @@ inline int sizeof_TPM_AUTH_SESSION(const TPM_AUTH_SESSION* auth) {
>  	return rv;
>  }
>  
> +static
> +inline int sizeof_TPM_RQU_HEADER(BYTE* ptr) {
> +	int rv = 0;
> +	rv += sizeof_UINT16(ptr);
> +	rv += sizeof_UINT32(ptr);
> +	rv += sizeof_UINT32(ptr);
> +	return rv;
> +}
> +
>  static
>  inline BYTE* pack_TPM_RQU_HEADER(BYTE* ptr,
>  		TPM_TAG tag,
> @@ -920,8 +929,14 @@ inline int unpack3_TPM_RQU_HEADER(BYTE* ptr, UINT32* pos, UINT32 max,
>  		unpack3_UINT32(ptr, pos, max, ord);
>  }
>  
> +static
> +inline int sizeof_TPM_RQU_GetRandom(BYTE* ptr) {
> +	return sizeof_TPM_RQU_HEADER(ptr) + sizeof_UINT32(ptr);
> +}
> +
>  #define pack_TPM_RSP_HEADER(p, t, s, r) pack_TPM_RQU_HEADER(p, t, s, r)
>  #define unpack_TPM_RSP_HEADER(p, t, s, r) unpack_TPM_RQU_HEADER(p, t, s, r)
>  #define unpack3_TPM_RSP_HEADER(p, l, m, t, s, r) unpack3_TPM_RQU_HEADER(p, l, m, t, s, r)
> +#define sizeof_TPM_RSP_HEADER(p) sizeof_TPM_RQU_HEADER(p)
>  
>  #endif
> diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> index 2ac14fae77..c879b24c13 100644
> --- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
> +++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> @@ -47,6 +47,7 @@
>  #include "vtpm_disk.h"
>  #include "vtpmmgr.h"
>  #include "tpm.h"
> +#include "tpm2.h"
>  #include "tpmrsa.h"
>  #include "tcg.h"
>  #include "mgmt_authority.h"
> @@ -772,6 +773,64 @@ static int vtpmmgr_permcheck(struct tpm_opaque *opq)
>  	return 1;
>  }
>  
> +TPM_RESULT vtpmmgr_handle_getrandom(struct tpm_opaque *opaque,
> +				    tpmcmd_t* tpmcmd)
> +{
> +	TPM_RESULT status = TPM_SUCCESS;
> +	TPM_TAG tag;
> +	UINT32 size;
> +	const int max_rand_size = TCPA_MAX_BUFFER_LENGTH -
> +				  sizeof_TPM_RQU_GetRandom(tpmcmd->req);
> +	UINT32 rand_offset;
> +	UINT32 rand_size;
> +	TPM_COMMAND_CODE ord;
> +	BYTE *p;
> +
> +	if (tpmcmd->req_len != sizeof_TPM_RQU_GetRandom(tpmcmd->req)) {
> +		status = TPM_BAD_PARAMETER;
> +		tag = TPM_TAG_RQU_COMMAND;
> +		goto abort_egress;
> +	}
> +
> +	p = unpack_TPM_RQU_HEADER(tpmcmd->req, &tag, &size, &ord);
> +
> +	if (!hw_is_tpm2()) {
> +		size = TCPA_MAX_BUFFER_LENGTH;
> +		TPMTRYRETURN(TPM_TransmitData(tpmcmd->req, tpmcmd->req_len,
> +					      tpmcmd->resp, &size));
> +		tpmcmd->resp_len = size;
> +
> +		return TPM_SUCCESS;
> +	}
> +
> +	/* TPM_GetRandom req: <header><uint32 num bytes> */
> +	unpack_UINT32(p, &rand_size);
> +
> +	/* Returning fewer bytes is acceptable per the spec. */
> +	if (rand_size > max_rand_size)
> +		rand_size = max_rand_size;
> +
> +	/* Call TPM2_GetRandom but return a TPM_GetRandom response. */
> +	/* TPM_GetRandom resp: <header><uint32 num bytes><num random bytes> */
> +	rand_offset = sizeof_TPM_RSP_HEADER(tpmcmd->resp) +
> +		      sizeof_UINT32(tpmcmd->resp);
> +
> +	TPMTRYRETURN(TPM2_GetRandom(&rand_size, tpmcmd->resp + rand_offset));
> +
> +	p = pack_TPM_RSP_HEADER(tpmcmd->resp, TPM_TAG_RSP_COMMAND,
> +				rand_offset + rand_size, status);
> +	p = pack_UINT32(p, rand_size);
> +	tpmcmd->resp_len = rand_offset + rand_size;
> +
> +	return status;
> +
> +abort_egress:
> +	tpmcmd->resp_len = VTPM_COMMAND_HEADER_SIZE;
> +	pack_TPM_RSP_HEADER(tpmcmd->resp, tag + 3, tpmcmd->resp_len, status);
> +
> +	return status;
> +}
> +
>  TPM_RESULT vtpmmgr_handle_cmd(
>  		struct tpm_opaque *opaque,
>  		tpmcmd_t* tpmcmd)
> @@ -842,7 +901,7 @@ TPM_RESULT vtpmmgr_handle_cmd(
>  		switch(ord) {
>  		case TPM_ORD_GetRandom:
>  			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_GetRandom\n");
> -			break;
> +			return vtpmmgr_handle_getrandom(opaque, tpmcmd);
>  		case TPM_ORD_PcrRead:
>  			vtpmloginfo(VTPM_LOG_VTPM, "Passthrough: TPM_PcrRead\n");
>  			// Quotes also need to be restricted to hide PCR values
> 
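The payload-first packing described in the commit message can be sketched independently. The 14-byte prefix (10-byte header plus 32-bit count) mirrors the patch's layout, but the helper and its offset constants are illustrative, not the actual marshal.h routines:

```c
#include <stdint.h>
#include <string.h>

/* Hypothetical response layout mirroring the patch:
 * <10-byte header><uint32 byte count><random bytes>. */
#define RSP_HDR_SIZE 10u

static uint32_t build_getrandom_resp(uint8_t *resp, const uint8_t *rnd,
                                     uint32_t rand_size)
{
    uint32_t rand_offset = RSP_HDR_SIZE + 4u;  /* header + count field */
    uint32_t total = rand_offset + rand_size;

    /* Payload first: with a real TPM the byte count is only known after
     * TPM2_GetRandom() returns, so the header cannot be packed up front. */
    memcpy(resp + rand_offset, rnd, rand_size);

    /* Header and count written last, once the total length is known.
     * Big-endian size at offset 2, TPM-style; tag/ordinal fields and the
     * high count bytes are zeroed for this small model. */
    memset(resp, 0, rand_offset);
    resp[2] = (uint8_t)(total >> 24);
    resp[3] = (uint8_t)(total >> 16);
    resp[4] = (uint8_t)(total >> 8);
    resp[5] = (uint8_t)total;
    resp[RSP_HDR_SIZE + 3] = (uint8_t)rand_size;  /* low byte of count */

    return total;
}
```

Writing the data at its final offset and back-filling the header is what lets the code skip the intermediate bounce buffer.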



From xen-devel-bounces@lists.xenproject.org Mon May 10 13:03:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 13:03:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125233.235708 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5ZR-0005YU-Gg; Mon, 10 May 2021 13:03:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125233.235708; Mon, 10 May 2021 13:03:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5ZR-0005YN-DY; Mon, 10 May 2021 13:03:37 +0000
Received: by outflank-mailman (input) for mailman id 125233;
 Mon, 10 May 2021 13:03:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MfTK=KF=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1lg5ZQ-0005YH-0Y
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 13:03:36 +0000
Received: from mail-qk1-x731.google.com (unknown [2607:f8b0:4864:20::731])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e2f0c14-2cbe-48d6-89bb-1eb43f4cb56a;
 Mon, 10 May 2021 13:03:35 +0000 (UTC)
Received: by mail-qk1-x731.google.com with SMTP id a2so15130443qkh.11
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 06:03:35 -0700 (PDT)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net.
 [72.81.132.2])
 by smtp.gmail.com with ESMTPSA id p9sm10582966qtl.78.2021.05.10.06.03.33
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 06:03:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e2f0c14-2cbe-48d6-89bb-1eb43f4cb56a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=u9wwsJw/AzoP7B7tVFN5THH5fhpqP7hdqSXkzA4O/2g=;
        b=bqe+t5lrAuy2TWOKWT3/zVrAy5lWGxnvC4GVT7sp0UpgYLFXkddpN69PTa/O9HpFGP
         eC+Lh8BG1el4i6xcpXkn1ZtFBWMcc6SvNHgnGbFucDympDkzm3Lw9d+A9v/4gilAmAWW
         UqeGA+I9Nus/RzN1Giaf7kil8HDMDtQoY5bvV3KjsjSFn06HTTF/6rzIsB8yifTf+T6D
         7vaCh5wDCdKeDP8WE/lbbnRJQL2tJQSih5GkfSf+SD+/hvzaGF+Tw7zz1+0aGXa6E2c1
         USqLPVPuvq6CaPgTZge12nDgmc7cvxAY74iIgfH/W6bOLV5dmuJ7enllBpqwfcsQxoFn
         yTVA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=u9wwsJw/AzoP7B7tVFN5THH5fhpqP7hdqSXkzA4O/2g=;
        b=FFVyuG2LoAp9BIFEcqMgItatvHCQFkmPyRXcO7Wn+OEeKzq+JtN0mJOte/vrAeX0o2
         Uh68djYUamUxfVCjgump7apN9D8VmtA3M12KOV3NgRT84uVTwAxDILbC13k4dGuuVe+g
         aBgKZVxrmj2OA/u34P2sjzn+KwyaDdte8782xWYoFY5wzff9Pd/AXVX94Bmf5ULqW28o
         +OvUvVy2XlcSK7vcuWBO/PazM/YX9E3tQiorsRp/X6u31H5TPKML5knXfrgtZKjJlDmR
         x5UpkHbdPowxUtLCpd6TGSqjRExEJpCZro5ossfEx/U0VxLcA4d4ZlD2+7jgKzF62xI+
         PtNw==
X-Gm-Message-State: AOAM533mu53PG5DDP5KfkmbnJRsGf/leFWYOfZDDStH7XVN++Tn9T4Wf
	RX9XG1mF8h0lJehHE+lviQc=
X-Google-Smtp-Source: ABdhPJxPr+gJ0TPO/psnXTggkLjCvkh7CSj8c/T/o7vEqjxLhLFylhEiWo13OfgWwHIxpeKJ7jfyAA==
X-Received: by 2002:a37:5b84:: with SMTP id p126mr22724046qkb.142.1620651815020;
        Mon, 10 May 2021 06:03:35 -0700 (PDT)
Subject: Re: [PATCH v2 10/13] vtpmmgr: Remove bogus cast from TPM2_GetRandom
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-11-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <c65ae1b0-84a8-ca3a-a1f6-b10a19e379b8@gmail.com>
Date: Mon, 10 May 2021 09:03:33 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-11-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> The UINT32 <-> UINT16 casting in TPM2_GetRandom is incorrect.  Use a
> local UINT16 as needed for the TPM hardware command and assign the
> result.
> 
> Suggested-by: Samuel Thibault <samuel.thibault@ens-lyon.org>
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/tpm2.c | 13 ++++++++++---
>  1 file changed, 10 insertions(+), 3 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/tpm2.c b/stubdom/vtpmmgr/tpm2.c
> index 655e6d164c..ebd06eac74 100644
> --- a/stubdom/vtpmmgr/tpm2.c
> +++ b/stubdom/vtpmmgr/tpm2.c
> @@ -427,15 +427,22 @@ abort_egress:
>  
>  TPM_RC TPM2_GetRandom(UINT32 * bytesRequested, BYTE * randomBytes)
>  {
> +    UINT16 bytesReq;
>      TPM_BEGIN(TPM_ST_NO_SESSIONS, TPM_CC_GetRandom);
>  
> -    ptr = pack_UINT16(ptr, (UINT16)*bytesRequested);
> +    if (*bytesRequested > UINT16_MAX)
> +        bytesReq = UINT16_MAX;
> +    else
> +        bytesReq = *bytesRequested;
> +
> +    ptr = pack_UINT16(ptr, bytesReq);
>  
>      TPM_TRANSMIT();
>      TPM_UNPACK_VERIFY();
>  
> -    ptr = unpack_UINT16(ptr, (UINT16 *)bytesRequested);
> -    ptr = unpack_TPM_BUFFER(ptr, randomBytes, *bytesRequested);
> +    ptr = unpack_UINT16(ptr, &bytesReq);
> +    *bytesRequested = bytesReq;
> +    ptr = unpack_TPM_BUFFER(ptr, randomBytes, bytesReq);
>  
>  abort_egress:
>      return status;
> 
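The clamp that replaces the lossy cast can be shown in isolation (`clamp_to_u16` is an illustrative helper, not code from the patch):

```c
#include <stdint.h>

/* The old code packed (UINT16)*bytesRequested, silently truncating any
 * request above 65535 bytes.  The fix clamps instead, so an oversized
 * request degrades to the largest representable count. */
static uint16_t clamp_to_u16(uint32_t requested)
{
    return requested > UINT16_MAX ? UINT16_MAX : (uint16_t)requested;
}
```

For example, a 70000-byte request becomes 65535 with the clamp, whereas the old cast would have asked the TPM for only 4464 bytes (70000 mod 65536).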



From xen-devel-bounces@lists.xenproject.org Mon May 10 13:05:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 13:05:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125237.235720 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5ay-0006Cr-T7; Mon, 10 May 2021 13:05:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125237.235720; Mon, 10 May 2021 13:05:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5ay-0006Ck-Ok; Mon, 10 May 2021 13:05:12 +0000
Received: by outflank-mailman (input) for mailman id 125237;
 Mon, 10 May 2021 13:05:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg5ax-0006Ca-8b; Mon, 10 May 2021 13:05:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg5ax-0005Og-18; Mon, 10 May 2021 13:05:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg5aw-0000qq-MF; Mon, 10 May 2021 13:05:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lg5aw-0004BJ-Lo; Mon, 10 May 2021 13:05:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bsi9Ou73dFunhQ0+y/Q8H2fIRKuWDGgtG7ly4kwBKXo=; b=lRIdlJhbNJ6h4qeYJ5tOecCvbU
	m8xvu93h+zibX7QNzunXKz3LkmDIOzn9Wp7wkv/P52v4KOC6l1wFMMU7p/3DWb8hBoR7+/oqWQNkC
	w6Pwmx/TCevUhRAz6wRGXBUyLtEnaq2Gr46oMI9pvsJDeWmNNckvgJvP+8I2HdUTGutg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161888-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161888: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-examine:reboot:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
X-Osstest-Versions-That:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 13:05:10 +0000

flight 161888 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161888/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-examine      8 reboot                     fail pass in 161872

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-examine      4 memdisk-try-append  fail in 161872 like 161851
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161872
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161872
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161872
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161872
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161872
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161872
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161872
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161872
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161872
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161872
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161872
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7
baseline version:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7

Last test of basis   161888  2021-05-10 01:51:44 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     fail    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 10 13:19:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 13:19:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125247.235735 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5oH-0007l8-4K; Mon, 10 May 2021 13:18:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125247.235735; Mon, 10 May 2021 13:18:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5oH-0007l1-0i; Mon, 10 May 2021 13:18:57 +0000
Received: by outflank-mailman (input) for mailman id 125247;
 Mon, 10 May 2021 13:18:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MfTK=KF=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1lg5oG-0007kv-7d
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 13:18:56 +0000
Received: from mail-qv1-xf29.google.com (unknown [2607:f8b0:4864:20::f29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ad5f4bb-dfb3-4335-a385-22e0044e08c5;
 Mon, 10 May 2021 13:18:55 +0000 (UTC)
Received: by mail-qv1-xf29.google.com with SMTP id g5so2604468qvk.1
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 06:18:55 -0700 (PDT)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net.
 [72.81.132.2])
 by smtp.gmail.com with ESMTPSA id w16sm9391647qts.70.2021.05.10.06.18.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 06:18:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ad5f4bb-dfb3-4335-a385-22e0044e08c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=M63A6KD4PRi1SPAftYhqReoGHmjMgNuLtMgU6jXVMg4=;
        b=rz2N2k/LBDI1yTmJp8BGGNMhFUfMs9gytmYkPRMxvNzRK914L8q7ABXwZKhnzBSRQQ
         hKRhvbqRc+lyqyxV/o4FieTJK/P42rbxQSd5cLdrjiAKW49NoEbh1RLfxXN6JLyDAq/o
         KpXJe11QQXKkcPTRpyrudT04Aez9kwRy2e+v/gJU17whAglvRa6fFb3A/vA0hM0O6CVz
         DF+M1tqhMncxT8T2bGi+limZewrCtrL0uIn2OsY9Not90i0OkNEOTyzAXVG4k6g46n8W
         QaeRpyVEFuwXnPCZeQC0+lrxMp3lZNshsHP5kdQnqHB2uvxaSrlL205m1Hg9k9C2Z+AF
         QuZQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=M63A6KD4PRi1SPAftYhqReoGHmjMgNuLtMgU6jXVMg4=;
        b=mOqUxqmtcgKvh/T715gUA9bwUvR3rgkU0REJEd8C/kzNC6TzbDEhfMRR7Kg9/aQO3H
         aVxWYUYs5x8dWG/BZRUI3JxvcX8xzwPphKKoCKuAbboKsuLxQQ1TewYmzg+9GO1U7sdQ
         9goWm0ulffL8p6b9JRfwVdE5rvH1ed6HwAwDBBtSv+fW7cUpx4Pl0+31qVdHT01eBjax
         WW/vbJ3Cbq+4UM3dG0X1GraeMr1+BCplznL0TxXj/gty2K/MxiPNVjYjvH9+2Te2n6Ru
         EYjTM+yj2j1E2YTr83vJaHG2QMl6cEMum/fH3lxMNIQmuJDWYzvWFjjMDvU4I9HEEqPp
         bqBg==
X-Gm-Message-State: AOAM532dSK8FH0C+XCAoOnOQ/yG07+7yjzOHAkxL9X+BYqPA7zgkCQA0
	+KLQeA89uDhn47WPiyKZ8jw=
X-Google-Smtp-Source: ABdhPJx6Yhd+ZwFmabh90PIz/oU2VHkdOLzalYp/VTM8sC2/K2Zu7riH9kVZt1GesloMB7fGpKBVZA==
X-Received: by 2002:a05:6214:76b:: with SMTP id f11mr23606369qvz.8.1620652735225;
        Mon, 10 May 2021 06:18:55 -0700 (PDT)
Subject: Re: [PATCH v2 11/13] vtpmmgr: Fix owner_auth & srk_auth parsing
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-12-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <894f022d-aa8c-2780-d8da-af919dafea28@gmail.com>
Date: Mon, 10 May 2021 09:18:53 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-12-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> Argument parsing only matches to before ':' and then the string with
> leading ':' is passed to parse_auth_string which fails to parse.  Extend
> the length to include the separator in the match.
> 
> While here, switch the separator to "=".  The man page documented "="
> and the other tpm.* arguments already use "=".  Since it didn't work
> before, we don't need to worry about backwards compatibility.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/init.c | 8 ++++----
>  1 file changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/stubdom/vtpmmgr/init.c b/stubdom/vtpmmgr/init.c
> index 4ae34a4fcb..62dc5994de 100644
> --- a/stubdom/vtpmmgr/init.c
> +++ b/stubdom/vtpmmgr/init.c
> @@ -289,16 +289,16 @@ int parse_cmdline_opts(int argc, char** argv, struct Opts* opts)
>     memcpy(vtpm_globals.srk_auth, WELLKNOWN_AUTH, sizeof(TPM_AUTHDATA));
>  
>     for(i = 1; i < argc; ++i) {
> -      if(!strncmp(argv[i], "owner_auth:", 10)) {
> -         if((rc = parse_auth_string(argv[i] + 10, vtpm_globals.owner_auth)) < 0) {
> +      if(!strncmp(argv[i], "owner_auth=", 11)) {
> +         if((rc = parse_auth_string(argv[i] + 11, vtpm_globals.owner_auth)) < 0) {
>              goto err_invalid;
>           }
>           if(rc == 1) {
>              opts->gen_owner_auth = 1;
>           }
>        }
> -      else if(!strncmp(argv[i], "srk_auth:", 8)) {
> -         if((rc = parse_auth_string(argv[i] + 8, vtpm_globals.srk_auth)) != 0) {
> +      else if(!strncmp(argv[i], "srk_auth=", 9)) {
> +         if((rc = parse_auth_string(argv[i] + 9, vtpm_globals.srk_auth)) != 0) {
>              goto err_invalid;
>           }
>        }
> 



From xen-devel-bounces@lists.xenproject.org Mon May 10 13:21:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 13:21:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125253.235746 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5r9-0000kX-N3; Mon, 10 May 2021 13:21:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125253.235746; Mon, 10 May 2021 13:21:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg5r9-0000kQ-K8; Mon, 10 May 2021 13:21:55 +0000
Received: by outflank-mailman (input) for mailman id 125253;
 Mon, 10 May 2021 13:21:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg5r8-0000kK-MY
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 13:21:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c3983e92-8b5a-4e9b-9fab-bea71e446900;
 Mon, 10 May 2021 13:21:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 98A17B0BE;
 Mon, 10 May 2021 13:21:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3983e92-8b5a-4e9b-9fab-bea71e446900
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620652912; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=76XvVApBB4h7nYMX/HL+xbx3l2IPhYwz78pPaYpWgyE=;
	b=C8nlywMgxfVWPNWbEBDWDlHZEKozXTkQlJe7DpH5o5yxrObUopaUDcW4dfdspP8psx9zqU
	+i1sWoejZSL9c8im6svLSdKAvsh1k4aGdA189j9+Pt0WEMi4ZzJs1d/S2wxiaaL/qsofqB
	Y07lur/mzuiVkCyGmKk6df3pWjGtanI=
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>
References: <20210422151007.2205-1-jgross@suse.com>
 <20210422151007.2205-2-jgross@suse.com>
 <4282bb6f-1d19-4d00-d468-f5d4c7fb0f90@oracle.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 1/3] xen: check required Xen features
Message-ID: <d53d6b83-2730-1ab6-7dba-236e86e247b3@suse.com>
Date: Mon, 10 May 2021 15:21:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <4282bb6f-1d19-4d00-d468-f5d4c7fb0f90@oracle.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="v0xJqlEx16PQDex2Ji9eivWzhreV2gsI3"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--v0xJqlEx16PQDex2Ji9eivWzhreV2gsI3
Content-Type: multipart/mixed; boundary="7atwZJqHbxCWQIe1gx0HLOWsAMaPzFtob";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Peter Zijlstra <peterz@infradead.org>
Message-ID: <d53d6b83-2730-1ab6-7dba-236e86e247b3@suse.com>
Subject: Re: [PATCH 1/3] xen: check required Xen features
References: <20210422151007.2205-1-jgross@suse.com>
 <20210422151007.2205-2-jgross@suse.com>
 <4282bb6f-1d19-4d00-d468-f5d4c7fb0f90@oracle.com>
In-Reply-To: <4282bb6f-1d19-4d00-d468-f5d4c7fb0f90@oracle.com>

--7atwZJqHbxCWQIe1gx0HLOWsAMaPzFtob
Content-Type: multipart/mixed;
 boundary="------------EB1D33D114EA20FD47EBA1F0"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------EB1D33D114EA20FD47EBA1F0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 10.05.21 14:11, Boris Ostrovsky wrote:
>
> On 4/22/21 11:10 AM, Juergen Gross wrote:
>>
>> +/*
>> + * Linux kernel expects at least Xen 4.0.
>> + *
>> + * Assume some features to be available for that reason (depending on guest
>> + * mode, of course).
>> + */
>> +#define chk_feature(f) {						\
>> +		if (!xen_feature(f))					\
>> +			pr_err("Xen: feature %s not available!\n", #f);	\
>> +	}
>
>
> With your changes in the subsequent patches, are we still going to function properly without those features? (i.e. maybe we should just panic)

Depends on the use case.

XENFEAT_gnttab_map_avail_bits is relevant only for driver domains using
user-space backends. If it is not available, "interesting" things might
happen.

XENFEAT_mmu_pt_update_preserve_ad not being present would result in
a subsequent mmu-update function using that feature returning -ENOSYS,
so that failure wouldn't go unnoticed.

So panic() might be a good idea in case the features are not available.

> (Also, chk_required_features() perhaps?)

Fine with me.


Juergen

--------------EB1D33D114EA20FD47EBA1F0
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------EB1D33D114EA20FD47EBA1F0--

--7atwZJqHbxCWQIe1gx0HLOWsAMaPzFtob--

--v0xJqlEx16PQDex2Ji9eivWzhreV2gsI3
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCZM28FAwAAAAAACgkQsN6d1ii/Ey9a
UAf9GTiuYSXEqCcah/973qkXKYOKNSnvm+avImO9swd3uYJZEAzIsV8KUReYqWnFefWwlJ9IMOZr
Y2yRjS/LQPuwNZl5giuqzrrPouYNbLhtfYFEaEx7zxNINQeQwB1tWCP6RI69jR1iowIAPuLhLUyN
+qj8+K8EhK+4Z1dfaBtOYAzBJMlWSyJvFIJ+RanRt3xQCn5TUoBegImkFWvTXH7rVKHpaBDY9rUq
u0C2ac9aIJQRffg9WN5HK54poy9Mydsj6Ny0/d7yPNFskUZrdPNVL9ptsKkMsWUyLbWgYFVTAex1
OjM52H2OtsHh73FzHNPvJUq4FYrHOEjIiIBKeIaZ2A==
=XdOU
-----END PGP SIGNATURE-----

--v0xJqlEx16PQDex2Ji9eivWzhreV2gsI3--


From xen-devel-bounces@lists.xenproject.org Mon May 10 13:33:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 13:33:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125260.235759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg61q-0002Fk-Q2; Mon, 10 May 2021 13:32:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125260.235759; Mon, 10 May 2021 13:32:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg61q-0002Fd-MQ; Mon, 10 May 2021 13:32:58 +0000
Received: by outflank-mailman (input) for mailman id 125260;
 Mon, 10 May 2021 13:32:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/Mbt=KF=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lg61o-0002FX-OO
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 13:32:56 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c234b0d-bd38-49f7-b015-c781d6497137;
 Mon, 10 May 2021 13:32:55 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1620653568355506.90985128944874;
 Mon, 10 May 2021 06:32:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c234b0d-bd38-49f7-b015-c781d6497137
ARC-Seal: i=1; a=rsa-sha256; t=1620653571; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=CoY3RmdDdwfHi5Ak7OI9JEemzwHn8NUQf5lXRCIKO/B4IXgqSutKyu0bRAKFfvNKQSgOMy/Rp/uHS8+ixCHcKaExQnxtXmeKu3mgmKkibR14JnSnknKxBwOotNuS3oN5FmjC51qRrEr7M84ll7eZjfHdvE/i/XbrCmzhxuPJfjk=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1620653571; h=Content-Type:Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=9xMWfM5tlvBr2xlDpLPODs0gNfrcWEF781IuUBw4OM0=; 
	b=m//DXoZqJKMC7uJ8oztbUM9l1Vo03wvNaOksXZ+7rBFgzw7VsTjuskU3IYnAB9bRMWY1ZzINLluBoiRnm4YRhcrcvnw/R9Xt+9nlhV3ddOKJIcZiUtZRIgslLgYvsReb/GFNKp7ODO77KWqcr/WWRoO9xHbPJwJxfyn8soyYyZ0=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1620653571;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type:Content-Transfer-Encoding;
	bh=9xMWfM5tlvBr2xlDpLPODs0gNfrcWEF781IuUBw4OM0=;
	b=gyJMjEAfM3p3Bqwl4tmKNlpQQnzcdhnTIOM13QlT0g1KkMAVO4RpOkB8Luvq5miW
	mJokH5Te+lZOJiELoRRBwkGuzRNqJTP3rOjkDvl172T0nFbW6khNllzyg0vsrsxO5Ad
	rE+eJ+GDOEO+hX3oOIqmH4VIBRfdQH4Ghi3EkQdo=
Subject: Re: [PATCH v2 12/13] vtpmmgr: Check req_len before unpacking command
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Daniel De Graaf <dgdegra@tycho.nsa.gov>, Quan Xu <quan.xu0@gmail.com>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-13-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <409ecf34-8f6b-4d12-4455-ef7fc1af4f75@apertussolutions.com>
Date: Mon, 10 May 2021 09:32:46 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-13-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-ZohoMailClient: External

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> vtpmmgr_handle_cmd doesn't ensure there is enough space before unpacking
> the req buffer.  Add a minimum size check.  Called functions will have
> to do their own checking if they need more data from the request.
> 
> The error case is tricky since abort_egress wants to reply with a
> corresponding tag.  Just hardcode TPM_TAG_RQU_COMMAND since the vtpm is
> sending malformed commands in the first place.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/vtpmmgr/vtpm_cmd_handler.c | 6 ++++++
>  1 file changed, 6 insertions(+)
> 
> diff --git a/stubdom/vtpmmgr/vtpm_cmd_handler.c b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> index c879b24c13..5586be6997 100644
> --- a/stubdom/vtpmmgr/vtpm_cmd_handler.c
> +++ b/stubdom/vtpmmgr/vtpm_cmd_handler.c
> @@ -840,6 +840,12 @@ TPM_RESULT vtpmmgr_handle_cmd(
>  	UINT32 size;
>  	TPM_COMMAND_CODE ord;
>  
> +	if (tpmcmd->req_len < sizeof_TPM_RQU_HEADER(tpmcmd->req)) {
> +		status = TPM_BAD_PARAMETER;
> +		tag = TPM_TAG_RQU_COMMAND;
> +		goto abort_egress;
> +	}
> +
>  	unpack_TPM_RQU_HEADER(tpmcmd->req,
>  			&tag, &size, &ord);
>  
> 
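The guard in the hunk above can be sketched in isolation. This is a minimal illustration only: the header layout (2-byte tag, 4-byte size, 4-byte ordinal, big-endian) matches the TPM 1.2 request header, but the names `RQU_HEADER_LEN`, `BAD_PARAMETER`, and `unpack_header` are stand-ins, not the vtpmmgr definitions.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical request header mirroring the TPM_RQU layout discussed
 * above: 2-byte tag, 4-byte size, 4-byte ordinal, all big-endian.
 * Names are illustrative, not the vtpmmgr ones. */
#define RQU_HEADER_LEN 10
#define BAD_PARAMETER  1

static uint32_t be32(const uint8_t *p)
{
    return ((uint32_t)p[0] << 24) | ((uint32_t)p[1] << 16) |
           ((uint32_t)p[2] << 8)  |  (uint32_t)p[3];
}

/* Returns 0 on success.  Rejects buffers too short to hold the header,
 * mirroring the req_len guard the patch adds before unpacking. */
static int unpack_header(const uint8_t *req, size_t req_len,
                         uint16_t *tag, uint32_t *size, uint32_t *ord)
{
    if (req_len < RQU_HEADER_LEN)
        return BAD_PARAMETER;   /* refuse to read past the buffer */
    *tag  = (uint16_t)(((uint16_t)req[0] << 8) | req[1]);
    *size = be32(req + 2);
    *ord  = be32(req + 6);
    return 0;
}
```

As in the patch, the caller only guarantees the fixed header; any command handler needing more than `RQU_HEADER_LEN` bytes still has to validate the remaining length itself.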



From xen-devel-bounces@lists.xenproject.org Mon May 10 13:40:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 13:40:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125266.235771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg698-0003jK-Jr; Mon, 10 May 2021 13:40:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125266.235771; Mon, 10 May 2021 13:40:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg698-0003jD-F4; Mon, 10 May 2021 13:40:30 +0000
Received: by outflank-mailman (input) for mailman id 125266;
 Mon, 10 May 2021 13:40:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MfTK=KF=gmail.com=dpsmith.dev@srs-us1.protection.inumbo.net>)
 id 1lg696-0003j7-8v
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 13:40:28 +0000
Received: from mail-qt1-x836.google.com (unknown [2607:f8b0:4864:20::836])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c1d1e76-19ed-4066-bcf8-96a2905e6204;
 Mon, 10 May 2021 13:40:26 +0000 (UTC)
Received: by mail-qt1-x836.google.com with SMTP id c10so2075590qtx.10
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 06:40:26 -0700 (PDT)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net.
 [72.81.132.2])
 by smtp.gmail.com with ESMTPSA id e17sm11330734qto.59.2021.05.10.06.40.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 06:40:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c1d1e76-19ed-4066-bcf8-96a2905e6204
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=+w/N0fe4Cjk6fuxLse5gEZAT5DIwD13ZOaNCIfvDMSg=;
        b=GE60DNx4pDQHy5yvaQSBo4gyVY4PAPF9YGb2BUzdWky7j7oTww96aUWvxv46R3nsQi
         V4GdoCcI5s5r+aew7vJ63xrH/f+UNdgcz5x32/lK/bUkL02kiNaLUwnTNw6bPihovaEP
         W9izkqXQANHR5GeuJ0i+AIy6+DYo2hfqbVAKfWiXUrzs+b/6wXcH8hU0OMWDtIByi1Y0
         LL+OgWvfyIN2x2w2McXW5fdh4OMklLYlccisk0UbwUbJw5mEGlkG2u79sP2a/2LKczA8
         wXCkinkOaULqQYg3wAhsjS7yzxwHUte27C2y+C9qbZEjGtARvofYL4yNpeiLspVzGlGp
         ZGEQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=+w/N0fe4Cjk6fuxLse5gEZAT5DIwD13ZOaNCIfvDMSg=;
        b=NkjEm9yUZERMgD3rAiZewx2wPVTehLSZWXuIAga9w+ehoJwArzGSWXBuRsj238A1iW
         XdQu3lHY7hE3128R/NyUOvYJZCRQSc3xP2pybuaZVb45IEorN3c/xWucQja+Hs+jT0O9
         vmn3WzGE240Z2ufSrHS24mOv0Vb3LvXnvD+1aAp/xQwcIY8sTZbYM0nkPLu9EJIJTVc0
         yGVtyTnaVY8BX1qnP5KOAttGFAOM3po4Y5BI+Tqdvq95iV8DEZc2jWj+zt/y6YbdqOBL
         THIoKIQ048W/J64U6iUORhlEBomUszgwF0E7qUUF1HuGFJOO/VSc6pEOU9T7ZK6duUQs
         iz7g==
X-Gm-Message-State: AOAM531L9IDoBsNu3yP8bWjA99Yi9lnhDZD2oxOCgd3ypcf2xFsbITRJ
	L7OwOPPIplNZCkTWlMRiRZk=
X-Google-Smtp-Source: ABdhPJx9W1tP149bkYJ77Gxvg/V6dP0u++FiwDmmsccLZ/7oRghMtTGeryIhkZns3NLa5z0McfbV/w==
X-Received: by 2002:a05:622a:11c8:: with SMTP id n8mr21253029qtk.279.1620654026378;
        Mon, 10 May 2021 06:40:26 -0700 (PDT)
Subject: Re: [PATCH v2 13/13] vtpm: Correct timeout units and command duration
To: Jason Andryuk <jandryuk@gmail.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Samuel Thibault <samuel.thibault@ens-lyon.org>
References: <20210506135923.161427-1-jandryuk@gmail.com>
 <20210506135923.161427-14-jandryuk@gmail.com>
From: "Daniel P. Smith" <dpsmith.dev@gmail.com>
Message-ID: <e0658f24-5646-c145-c232-dbccd86cb064@gmail.com>
Date: Mon, 10 May 2021 09:40:24 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210506135923.161427-14-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/6/21 9:59 AM, Jason Andryuk wrote:
> Add two patches:
> vtpm-microsecond-duration.patch fixes the units for timeouts and command
> durations.
> vtpm-command-duration.patch increases the timeout Linux uses to allow
> commands to succeed.
> 
> Linux works around low timeouts, but not low durations.  The second
> patch lets commands that often time out under the lower command
> durations complete.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---

Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

>  stubdom/Makefile                        |  2 +
>  stubdom/vtpm-command-duration.patch     | 52 +++++++++++++++++++++++++
>  stubdom/vtpm-microsecond-duration.patch | 52 +++++++++++++++++++++++++
>  3 files changed, 106 insertions(+)
>  create mode 100644 stubdom/vtpm-command-duration.patch
>  create mode 100644 stubdom/vtpm-microsecond-duration.patch
> 
> diff --git a/stubdom/Makefile b/stubdom/Makefile
> index c6de5f68ae..06aa69d8bc 100644
> --- a/stubdom/Makefile
> +++ b/stubdom/Makefile
> @@ -239,6 +239,8 @@ tpm_emulator-$(XEN_TARGET_ARCH): tpm_emulator-$(TPMEMU_VERSION).tar.gz
>  	patch -d $@ -p1 < vtpm-implicit-fallthrough.patch
>  	patch -d $@ -p1 < vtpm_TPM_ChangeAuthAsymFinish.patch
>  	patch -d $@ -p1 < vtpm_extern.patch
> +	patch -d $@ -p1 < vtpm-microsecond-duration.patch
> +	patch -d $@ -p1 < vtpm-command-duration.patch
>  	mkdir $@/build
>  	cd $@/build; CC=${CC} $(CMAKE) .. -DCMAKE_C_FLAGS:STRING="-std=c99 -DTPM_NO_EXTERN $(TARGET_CPPFLAGS) $(TARGET_CFLAGS) -Wno-declaration-after-statement"
>  	touch $@
> diff --git a/stubdom/vtpm-command-duration.patch b/stubdom/vtpm-command-duration.patch
> new file mode 100644
> index 0000000000..6fdf2fc9be
> --- /dev/null
> +++ b/stubdom/vtpm-command-duration.patch
> @@ -0,0 +1,52 @@
> +From e7c976b5864e7d2649292d90ea60d5aea091a990 Mon Sep 17 00:00:00 2001
> +From: Jason Andryuk <jandryuk@gmail.com>
> +Date: Sun, 14 Mar 2021 12:46:34 -0400
> +Subject: [PATCH 2/2] Increase command durations
> +
> +With Linux 5.4 xen-tpmfront and a Xen vtpm-stubdom, xen-tpmfront was
> +failing commands with -ETIME:
> +tpm tpm0: tpm_try_transmit: send(): error-62
> +
> +The vtpm was returning the data, but it was after the duration timeout
> +in vtpm_send.  Linux may have started being more stringent about timing?
> +
> +The vtpm-stubdom has a little delay since it writes its disk before
> +returning the response.
> +
> +Anyway, the durations are rather low.  When they were 1/10/1000 before
> +converting to microseconds, Linux showed all three durations rounded to
> +10000.  Update them with values from a physical TPM1.2.  These were
> +taken from a WEC which was software downgraded from a TPM2 to a TPM1.2.
> +They might be excessive, but I'd rather have a command succeed than
> +return -ETIME.
> +
> +An IFX physical TPM1.2 uses:
> +1000000
> +1500000
> +150000000
> +
> +Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> +---
> + tpm/tpm_data.c | 6 +++---
> + 1 file changed, 3 insertions(+), 3 deletions(-)
> +
> +diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
> +index bebaf10..844afca 100644
> +--- a/tpm/tpm_data.c
> ++++ b/tpm/tpm_data.c
> +@@ -71,9 +71,9 @@ static void init_timeouts(void)
> +   tpmData.permanent.data.tis_timeouts[1] = 2000000;
> +   tpmData.permanent.data.tis_timeouts[2] = 750000;
> +   tpmData.permanent.data.tis_timeouts[3] = 750000;
> +-  tpmData.permanent.data.cmd_durations[0] = 1000;
> +-  tpmData.permanent.data.cmd_durations[1] = 10000;
> +-  tpmData.permanent.data.cmd_durations[2] = 1000000;
> ++  tpmData.permanent.data.cmd_durations[0] = 3000000;
> ++  tpmData.permanent.data.cmd_durations[1] = 3000000;
> ++  tpmData.permanent.data.cmd_durations[2] = 600000000;
> + }
> + 
> + void tpm_init_data(void)
> +-- 
> +2.30.2
> +
> diff --git a/stubdom/vtpm-microsecond-duration.patch b/stubdom/vtpm-microsecond-duration.patch
> new file mode 100644
> index 0000000000..7a906e72c5
> --- /dev/null
> +++ b/stubdom/vtpm-microsecond-duration.patch
> @@ -0,0 +1,52 @@
> +From 5a510e0afd7c288e3f0fb3523ec749ba1366ad61 Mon Sep 17 00:00:00 2001
> +From: Jason Andryuk <jandryuk@gmail.com>
> +Date: Sun, 14 Mar 2021 12:42:10 -0400
> +Subject: [PATCH 1/2] Use microseconds for timeouts and durations
> +
> +The timeout and duration fields should be in microseconds according to
> +the spec.
> +
> +TPM_CAP_PROP_TIS_TIMEOUT:
> +A 4 element array of UINT32 values each denoting the timeout value in
> +microseconds for the following in this order:
> +
> +TPM_CAP_PROP_DURATION:
> +A 3 element array of UINT32 values each denoting the duration value in
> +microseconds of the duration of the three classes of commands:
> +
> +Linux will scale the timeouts up by 1000, but not the durations.  Change
> +the units for both sets as appropriate.
> +
> +Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> +---
> + tpm/tpm_data.c | 14 +++++++-------
> + 1 file changed, 7 insertions(+), 7 deletions(-)
> +
> +diff --git a/tpm/tpm_data.c b/tpm/tpm_data.c
> +index a3a79ef..bebaf10 100644
> +--- a/tpm/tpm_data.c
> ++++ b/tpm/tpm_data.c
> +@@ -67,13 +67,13 @@ static void init_nv_storage(void)
> + static void init_timeouts(void)
> + {
> +   /* for the timeouts we use the PC platform defaults */
> +-  tpmData.permanent.data.tis_timeouts[0] = 750;
> +-  tpmData.permanent.data.tis_timeouts[1] = 2000;
> +-  tpmData.permanent.data.tis_timeouts[2] = 750;
> +-  tpmData.permanent.data.tis_timeouts[3] = 750;
> +-  tpmData.permanent.data.cmd_durations[0] = 1;
> +-  tpmData.permanent.data.cmd_durations[1] = 10;
> +-  tpmData.permanent.data.cmd_durations[2] = 1000;
> ++  tpmData.permanent.data.tis_timeouts[0] = 750000;
> ++  tpmData.permanent.data.tis_timeouts[1] = 2000000;
> ++  tpmData.permanent.data.tis_timeouts[2] = 750000;
> ++  tpmData.permanent.data.tis_timeouts[3] = 750000;
> ++  tpmData.permanent.data.cmd_durations[0] = 1000;
> ++  tpmData.permanent.data.cmd_durations[1] = 10000;
> ++  tpmData.permanent.data.cmd_durations[2] = 1000000;
> + }
> + 
> + void tpm_init_data(void)
> +-- 
> +2.30.2
> +
> 
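The unit fix in the two patches above can be restated compactly. The numeric values come from the patch hunks themselves; the array and macro names here are illustrative, not the tpm_emulator ones.

```c
#include <assert.h>
#include <stdint.h>

/* The TPM 1.2 spec gives TPM_CAP_PROP_TIS_TIMEOUT and
 * TPM_CAP_PROP_DURATION in microseconds, so the old millisecond-scale
 * defaults are multiplied by 1000 (vtpm-microsecond-duration.patch). */
#define MS_TO_US(ms) ((uint32_t)(ms) * 1000u)

static const uint32_t tis_timeouts_us[4] = {
    MS_TO_US(750), MS_TO_US(2000), MS_TO_US(750), MS_TO_US(750),
};
static const uint32_t cmd_durations_us[3] = {
    MS_TO_US(1), MS_TO_US(10), MS_TO_US(1000),
};
```

The second patch (vtpm-command-duration.patch) then raises `cmd_durations_us` further (3000000, 3000000, 600000000) because Linux compensates for too-small timeouts but not too-small durations.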



From xen-devel-bounces@lists.xenproject.org Mon May 10 14:12:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 14:12:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125272.235782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg6dc-00073T-W1; Mon, 10 May 2021 14:12:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125272.235782; Mon, 10 May 2021 14:12:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg6dc-00073M-T6; Mon, 10 May 2021 14:12:00 +0000
Received: by outflank-mailman (input) for mailman id 125272;
 Mon, 10 May 2021 14:11:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3wdO=KF=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lg6db-00073G-S9
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 14:11:59 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2a278634-3e30-46e3-ad3b-900ca43f19bc;
 Mon, 10 May 2021 14:11:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2a278634-3e30-46e3-ad3b-900ca43f19bc
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620655918;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=5JQ+vXUtDbtQP8ztvtmnQ5r5Ya7aAPV1uZp31rhiqTE=;
  b=Z+F5qNRyr9wX/Av0OplO/lHw977eQUMHdUUCkXHSCU4XfKExLKL8jMv5
   oHDawSwo8OcM9bHUduDdDQzbrAhJG6m42l5Alhyh5r9JbeZm1ia/m30qz
   9v4LX9hbsGilwqaEGnblQ8U9IwIsAivBeUYRUqUBGUVaOevBq/T3n8ygV
   g=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: n7bSBdCyQwfHk/3PDV3oYgiLJyW8IXSq9kHfA3ZhtULPEr7h4erOPB3JU3iMAcmdltBXWJjZMV
 oDingFHCqkM5cEAXQKBS+AYi2addZi48M/QePn33E4snwTgzG3/Vp4xrz2xnFi5Eu28qiYv4Hh
 8SXr8T06kTjZ2AJcbcUb3fb6IOcB4RtihBYHDxGIFpoyLeVN+gZhJBzil9tXS4oShTLU5dGHkO
 j4QuldtMLMVEfJ0IvHVDlB5PBnQQVfvklYss7f0ji+iPCvat3xbfCuY5Jxn2dEpt1yGMJ8OxOw
 gV0=
X-SBRS: 5.1
X-MesageID: 43831601
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:JXJ6iKON47aMQ8BcT+j155DYdb4zR+YMi2TDiHoddfUFSKalfp
 6V98jzjSWE8wr4WBkb6LO90dq7MAnhHP9OkMMs1NKZMDUO11HYS72KgbGC/9SkIVyHygc/79
 YsT0EdMqyXMbESt6+Tj2eF+pQbsaC6GcuT9IXjJgJWPGVXgtZbnmJE42igcnFedU1jP94UBZ
 Cc7s1Iq36LYnIMdPm2AXEDQqzqu8DLvIiOW29IOzcXrC21yR+44r/zFBaVmj0EVSlU/Lsk+W
 /Z1yTk+6SYte2hwBO07R6c030Woqqh9jJwPr3OtiEnEESvtu9uXvUlZ1S2hkF0nAho0idvrD
 CDmWZmAy050QKtQoj8m2qQ5+Cn6kdj15aq8y7mvVLz5cP+Xz40EMxHmMZQdQbY8VMpuJVm3L
 tMxH/xjeseMfrsplWK2zHzbWAiqqN0mwtWrQcZtQ0VbWLfUs4nkWU7xjImLH4tJlOL1GkXKp
 gbMCiH3ocmTbqzVQGrgoBA+q3TYkgO
X-IronPort-AV: E=Sophos;i="5.82,287,1613451600"; 
   d="scan'208";a="43831601"
Date: Mon, 10 May 2021 15:11:51 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 1/8] libxl: Replace deprecated QMP command by
 "query-cpus-fast"
Message-ID: <YJk/J/iOC4122jzu@perard>
References: <20210423161558.224367-1-anthony.perard@citrix.com>
 <20210423161558.224367-2-anthony.perard@citrix.com>
 <CAKf6xpuDQuUbJ+Gn9OHNU6BXb2Rm+Bdv7hPNtMxcX1CA4d_aYA@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <CAKf6xpuDQuUbJ+Gn9OHNU6BXb2Rm+Bdv7hPNtMxcX1CA4d_aYA@mail.gmail.com>

On Wed, Apr 28, 2021 at 12:53:12PM -0400, Jason Andryuk wrote:
> On Fri, Apr 23, 2021 at 12:16 PM Anthony PERARD <anthony.perard@citrix.com> wrote:
> > +static int qmp_parse_query_cpus_fast(libxl__gc *gc,
> > +                                     libxl_domid domid,
> > +                                     const libxl__json_object *response,
> > +                                     libxl_bitmap *const map)
> > +{
> > +    int i;
> > +    const libxl__json_object *cpu;
> > +
> > +    libxl_bitmap_set_none(map);
> > +    /* Parse response to QMP command "query-cpus-fast":
> > +     * [ { 'cpu-index': 'int',...} ]
> > +     */
> > +    for (i = 0; (cpu = libxl__json_array_get(response, i)); i++) {
> > +        unsigned int cpu_index;
> > +        const libxl__json_object *o;
> > +
> > +        o = libxl__json_map_get("cpu-index", cpu, JSON_INTEGER);
> 
> Looks like qmp_parse_query_cpus_fast and qmp_parse_query_cpus just
> differ by the key string.  So you could pass it in as an argument -
> maybe with qmp_parse_query_cpus_fast and qmp_parse_query_cpus as
> wrappers around a common implementation?
> 
> But if you prefer this separate function, it's fine.

I think it's better to have two different functions because we are
parsing two different commands, even if they are very similar.

> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Thanks,

-- 
Anthony PERARD
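Jason's suggestion of one worker keyed by the JSON field name, with thin wrappers per QMP command, could look roughly like this. The types are simplified stand-ins, not libxl's real `libxl__json_object`/`libxl_bitmap` API, and the `"CPU"` key for the older query-cpus command is an assumption taken from the discussion, not from the quoted hunk.

```c
#include <assert.h>
#include <string.h>

/* Stand-in for one parsed QMP response entry: the CPU index appears
 * under "CPU" (query-cpus) or "cpu-index" (query-cpus-fast). */
struct cpu_obj {
    const char *key;
    int index;
};

/* Common worker: set a bit in `map` for each index found under `key`. */
static void parse_cpus(const struct cpu_obj *objs, int n,
                       const char *key, unsigned long *map)
{
    for (int i = 0; i < n; i++)
        if (strcmp(objs[i].key, key) == 0)
            *map |= 1ul << objs[i].index;
}

/* Thin wrappers keep the two QMP commands distinct at call sites. */
static void parse_query_cpus(const struct cpu_obj *o, int n,
                             unsigned long *m)
{
    parse_cpus(o, n, "CPU", m);
}

static void parse_query_cpus_fast(const struct cpu_obj *o, int n,
                                  unsigned long *m)
{
    parse_cpus(o, n, "cpu-index", m);
}
```

Anthony's counterpoint above also stands: the two commands are distinct, so two separate parse functions are a defensible choice even though they differ only in the key string.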


From xen-devel-bounces@lists.xenproject.org Mon May 10 14:17:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 14:17:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125277.235794 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg6ip-0007n5-K9; Mon, 10 May 2021 14:17:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125277.235794; Mon, 10 May 2021 14:17:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg6ip-0007my-HJ; Mon, 10 May 2021 14:17:23 +0000
Received: by outflank-mailman (input) for mailman id 125277;
 Mon, 10 May 2021 14:17:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3wdO=KF=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lg6in-0007ms-SM
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 14:17:21 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7ec6c5cd-b360-42e8-a366-1bf6a5877537;
 Mon, 10 May 2021 14:17:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ec6c5cd-b360-42e8-a366-1bf6a5877537
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620656240;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Ednk2M4FSrg/qGy5mYV394twVU1RJ0QEIra0t+rkiRU=;
  b=VKBUrmgnDeOFhk4M0LIy1f09S2HLoHLlc2KVpBbVNo4xBFbFWDgOO0Su
   1LK5kVds3bURUIRdOnzAbuFfzQw/5NpVwA/T/KD7upg/zu9cE6jrh/Tl2
   EK+HTTaA9y3RU4o6OQkZp/pcbVG2l1pRkBzLHOy9trLo3uXtLf/hdkKvb
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: A4+iveKZzaJ72gpg1MlWPiImrRg2Lb3PCPu8XSJtrSbxrr9AKZof439yPd9+PSuMtF0RAeGPuI
 UjMFi6rZCN+7aXlMJ9M4veSPrNKojYb2d5E+lV13yqw26eYdFFxMbJkHenrzk+z+VtswDkQ695
 Z7yR5YCd14mzeXXn1UmUuRNPaBhFDKTMrFr+Ht9vgzIEPUx2lfLq8nM7gxRzKs/scj6IykDVRR
 Zt95wQMkvyjuDpAzzDOZra6ojEI61uSMCsn2IvRtsbHiICOYC7AcX/SNn4kpVap/SGFDul9LEV
 vC8=
X-SBRS: 5.1
X-MesageID: 43555661
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:fVos+KtseloimCN3Xpez0VG07skCl4Mji2hC6mlwRA09TyXGra
 2TdaUgvyMc1gx7ZJh5o6H4BEGBKUm8yXcH2/hoAV7CZniuhILMFu0StLcKrAeQfhEWmtQz6U
 4kSdkZNDSSNykzsS+Z2njdLz9I+rDuzEnrv5a4854Hd2FXgtRbnmVE43GgYy5LrWd9a6bQTf
 Gnl496jgvlXU5SQtWwB3EDUeSGjcbMjojabRkPAANiwBWSjBuzgYSKWCSw71M7aXdi0L0i+W
 /Kn0jS/aO4qcy2zRfayiv684lWot380dFObfb8wPT9aw+cxzpAVr4RFIFqjwpF7t1HL2xa0e
 Ukli1Qc/ibLUmhPl1d7yGdmDUImwxekEMKgWXo+0cL5/aJBg7SQvAx+L5xY1/X7VEts8p717
 8O12WFt4BPBReFhyjl4cPUPisa4nZcjEBS49L7tUYvJLf2qYUh3LD393klZ6vo3BiKm7zPNd
 Meev00yMwmDm9yXkqpzlWHmubcIkjbNi32PHQqq4iQySYTn3x8wg8dzMkAlmwNnahND6Vs9q
 DBKLotl71LQ4sQYbxmAesdXMetY1a9Bi7kISaXO0qiF60CNjbXp5H27dwOlaeXRKA=
X-IronPort-AV: E=Sophos;i="5.82,287,1613451600"; 
   d="scan'208";a="43555661"
Date: Mon, 10 May 2021 15:17:15 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [XEN PATCH 0/8] Fix libxl with QEMU 6.0 + remove some more
 deprecated usages.
Message-ID: <YJlAa4fqcUCotlhM@perard>
References: <20210423161558.224367-1-anthony.perard@citrix.com>
 <CAKf6xpt_xkpnNwcq2-WS3SN+Qj8gcz33MaGdfCW=30HzfqrWng@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <CAKf6xpt_xkpnNwcq2-WS3SN+Qj8gcz33MaGdfCW=30HzfqrWng@mail.gmail.com>

On Mon, May 03, 2021 at 10:13:57AM -0400, Jason Andryuk wrote:
> On Fri, Apr 23, 2021 at 12:16 PM Anthony PERARD
> <anthony.perard@citrix.com> wrote:
> >
> > Patch series available in this git branch:
> > https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.deprecated-qemu-qmp-and-cmd-v1
> >
> > The Xen 4.15 release that went out just before QEMU 6.0 won't be compatible
> > with the latter. This patch series fixes libxl to replace use of QMP commands
> > that have been removed from QEMU and to fix usage of deprecated commands and
> > parameters that will be removed from QEMU in the future.
> >
> > All of the series should be backported to at least Xen 4.15, or it won't be
> > possible to migrate, hotplug CPUs, or change the cdrom on HVM guests when QEMU
> > 6.0 or newer is used. QEMU 6.0 is about to be released, within a week.
> >
> > Backport: 4.15
> >
> > Anthony PERARD (8):
> >   libxl: Replace deprecated QMP command by "query-cpus-fast"
> >   libxl: Replace QEMU's command line short-form boolean option
> >   libxl: Replace deprecated "cpu-add" QMP command by "device_add"
> >   libxl: Use -device for cd-rom drives
> >   libxl: Assert qmp_ev's state in qmp_ev_qemu_compare_version
> >   libxl: Export libxl__qmp_ev_qemu_compare_version
> >   libxl: Use `id` with the "eject" QMP command
> >   libxl: Replace QMP command "change" by "blockdev-change-media"
> 
> For the rest of the series besides
> libxl: Replace deprecated QMP command by "query-cpus-fast"
> and
> libxl: Replace deprecated "cpu-add" QMP command by "device_add"
> 
> Reviewed-by: Jason Andryuk <jandryuk@gmail.com>

Thanks for the review!

Cheers,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Mon May 10 14:58:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 14:58:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125298.235847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7Md-0004Af-6Q; Mon, 10 May 2021 14:58:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125298.235847; Mon, 10 May 2021 14:58:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7Md-0004AY-3F; Mon, 10 May 2021 14:58:31 +0000
Received: by outflank-mailman (input) for mailman id 125298;
 Mon, 10 May 2021 14:58:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5bM6=KF=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lg7Mb-0004AQ-An
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 14:58:29 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9ed29534-d824-44e2-9d55-8d77ae5a880b;
 Mon, 10 May 2021 14:58:27 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 7F6AC67373; Mon, 10 May 2021 16:58:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ed29534-d824-44e2-9d55-8d77ae5a880b
Date: Mon, 10 May 2021 16:58:23 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v6 01/15] swiotlb: Refactor swiotlb init functions
Message-ID: <20210510145823.GA28066@lst.de>
References: <20210510095026.3477496-1-tientzu@chromium.org> <20210510095026.3477496-2-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210510095026.3477496-2-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Mon May 10 14:59:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 14:59:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125300.235859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7NA-0004gu-G6; Mon, 10 May 2021 14:59:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125300.235859; Mon, 10 May 2021 14:59:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7NA-0004gn-C8; Mon, 10 May 2021 14:59:04 +0000
Received: by outflank-mailman (input) for mailman id 125300;
 Mon, 10 May 2021 14:59:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5bM6=KF=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lg7N9-0004ga-L1
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 14:59:03 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b5682814-642e-4a00-aee1-664551745c7d;
 Mon, 10 May 2021 14:59:02 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 621D667373; Mon, 10 May 2021 16:59:00 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5682814-642e-4a00-aee1-664551745c7d
Date: Mon, 10 May 2021 16:59:00 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v6 02/15] swiotlb: Refactor swiotlb_create_debugfs
Message-ID: <20210510145900.GB28066@lst.de>
References: <20210510095026.3477496-1-tientzu@chromium.org> <20210510095026.3477496-3-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210510095026.3477496-3-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Mon May 10 15:03:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:03:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125306.235871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7R2-0006Bk-0r; Mon, 10 May 2021 15:03:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125306.235871; Mon, 10 May 2021 15:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7R1-0006Bd-SL; Mon, 10 May 2021 15:03:03 +0000
Received: by outflank-mailman (input) for mailman id 125306;
 Mon, 10 May 2021 15:03:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5bM6=KF=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lg7R0-0006BW-Mi
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 15:03:02 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95d4930a-c556-44b4-a444-2f3188e178d8;
 Mon, 10 May 2021 15:03:01 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 2B33867373; Mon, 10 May 2021 17:02:57 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95d4930a-c556-44b4-a444-2f3188e178d8
Date: Mon, 10 May 2021 17:02:56 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v6 04/15] swiotlb: Add restricted DMA pool
 initialization
Message-ID: <20210510150256.GC28066@lst.de>
References: <20210510095026.3477496-1-tientzu@chromium.org> <20210510095026.3477496-5-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210510095026.3477496-5-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +#include <linux/io.h>
> +#include <linux/of.h>
> +#include <linux/of_fdt.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/slab.h>
> +#endif

I don't think any of this belongs in swiotlb.c.  Marking
swiotlb_init_io_tlb_mem non-static and keeping all of this code in a
separate file is probably a better idea.

> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
> +				    struct device *dev)
> +{
> +	struct io_tlb_mem *mem = rmem->priv;
> +	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
> +
> +	if (dev->dma_io_tlb_mem)
> +		return 0;
> +
> +	/* Since multiple devices can share the same pool, the private data,
> +	 * io_tlb_mem struct, will be initialized by the first device attached
> +	 * to it.
> +	 */

This is not the normal kernel comment style.
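For reference, the usual kernel multi-line comment style puts the opening
and closing markers on their own lines; a conforming version of the quoted
comment would look like this (illustration only, not part of the patch):

```c
/*
 * Since multiple devices can share the same pool, the private data,
 * the io_tlb_mem struct, will be initialized by the first device
 * attached to it.
 */
```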

> +#ifdef CONFIG_ARM
> +		if (!PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
> +			kfree(mem);
> +			return -EINVAL;
> +		}
> +#endif /* CONFIG_ARM */

And this is weird.  Why would ARM have such a restriction?  And if we have
such restrictions they absolutely belong in an arch helper.

> +		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
> +
> +		rmem->priv = mem;
> +
> +#ifdef CONFIG_DEBUG_FS
> +		if (!debugfs_dir)
> +			debugfs_dir = debugfs_create_dir("swiotlb", NULL);
> +
> +		swiotlb_create_debugfs(mem, rmem->name, debugfs_dir);

Doesn't the debugfs_create_dir belong in swiotlb_create_debugfs?  Also
please use IS_ENABLED() or a stub to avoid ifdefs like this.
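As a self-contained sketch of the IS_ENABLED() suggestion: the macro below is
a simplified stand-in for the kernel's real IS_ENABLED() (which expands to 1
or 0 from Kconfig), and swiotlb_create_debugfs_sketch is a hypothetical name,
not the patch's actual helper.

```c
#include <assert.h>

/*
 * Simplified stand-in for the kernel's IS_ENABLED(): in the kernel it
 * evaluates to 1 or 0 depending on whether a Kconfig option is set, so
 * disabled branches become dead code the compiler drops, instead of
 * being hidden behind #ifdef at every call site.
 */
#define CONFIG_DEBUG_FS 1
#define IS_ENABLED(option) (option)

static int debugfs_dirs_created;

/* Hypothetical sketch: fold the directory creation into the helper and
 * guard it with IS_ENABLED() so callers need no #ifdef CONFIG_DEBUG_FS. */
static void swiotlb_create_debugfs_sketch(const char *name)
{
	if (!IS_ENABLED(CONFIG_DEBUG_FS))
		return;	/* compiled out when debugfs is disabled */
	(void)name;
	debugfs_dirs_created++;
}
```

The point is that the compiler still type-checks the guarded branch in all
configurations, unlike an #ifdef block.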


From xen-devel-bounces@lists.xenproject.org Mon May 10 15:03:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:03:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125308.235883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7Rk-0006pw-8H; Mon, 10 May 2021 15:03:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125308.235883; Mon, 10 May 2021 15:03:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7Rk-0006po-5Q; Mon, 10 May 2021 15:03:48 +0000
Received: by outflank-mailman (input) for mailman id 125308;
 Mon, 10 May 2021 15:03:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5bM6=KF=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lg7Ri-0006pg-Ip
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 15:03:46 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8e56175-fbb3-4313-8721-e1b152f6682e;
 Mon, 10 May 2021 15:03:45 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 3764D68AFE; Mon, 10 May 2021 17:03:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8e56175-fbb3-4313-8721-e1b152f6682e
Date: Mon, 10 May 2021 17:03:42 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v6 05/15] swiotlb: Add a new get_io_tlb_mem getter
Message-ID: <20210510150342.GD28066@lst.de>
References: <20210510095026.3477496-1-tientzu@chromium.org> <20210510095026.3477496-6-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210510095026.3477496-6-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

> +static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
> +{
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	if (dev && dev->dma_io_tlb_mem)
> +		return dev->dma_io_tlb_mem;
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> +
> +	return io_tlb_default_mem;

Given that we're also looking into restricted pools that aren't tied to
addressing restrictions, I'd rather always assign the active pool to
dev->dma_io_tlb_mem and do away with this helper.
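The suggestion of always assigning the active pool to dev->dma_io_tlb_mem
could look roughly like this; the struct layouts and the swiotlb_attach_pool
name are illustrative stand-ins, not the real <linux/device.h> or
<linux/swiotlb.h> definitions.

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-ins for the kernel structures discussed above. */
struct io_tlb_mem { unsigned long nslabs; };
struct device { struct io_tlb_mem *dma_io_tlb_mem; };

static struct io_tlb_mem io_tlb_default_mem = { .nslabs = 1024 };

/*
 * Sketch: at setup time, point every device at the pool it should use
 * (its restricted pool if one was attached, else the default pool), so
 * users read dev->dma_io_tlb_mem directly and no get_io_tlb_mem()
 * helper or #ifdef is needed on the fast path.
 */
static void swiotlb_attach_pool(struct device *dev,
				struct io_tlb_mem *restricted)
{
	dev->dma_io_tlb_mem = restricted ? restricted : &io_tlb_default_mem;
}
```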


From xen-devel-bounces@lists.xenproject.org Mon May 10 15:05:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:05:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125314.235895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7TF-0007Vi-Ji; Mon, 10 May 2021 15:05:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125314.235895; Mon, 10 May 2021 15:05:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7TF-0007Vb-Gh; Mon, 10 May 2021 15:05:21 +0000
Received: by outflank-mailman (input) for mailman id 125314;
 Mon, 10 May 2021 15:05:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=5bM6=KF=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lg7TD-0007VS-Tx
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 15:05:19 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9964c90b-7e54-44ce-b96f-8a492216a985;
 Mon, 10 May 2021 15:05:19 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 2DAB567373; Mon, 10 May 2021 17:05:17 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9964c90b-7e54-44ce-b96f-8a492216a985
Date: Mon, 10 May 2021 17:05:16 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, nouveau@lists.freedesktop.org,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v6 08/15] swiotlb: Bounce data from/to restricted DMA
 pool if available
Message-ID: <20210510150516.GE28066@lst.de>
References: <20210510095026.3477496-1-tientzu@chromium.org> <20210510095026.3477496-9-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210510095026.3477496-9-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

> +static inline bool is_dev_swiotlb_force(struct device *dev)
> +{
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	if (dev->dma_io_tlb_mem)
> +		return true;
> +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> +	return false;
> +}
> +

>  	/* If SWIOTLB is active, use its maximum mapping size */
>  	if (is_swiotlb_active(dev) &&
> -	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> +	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
> +	     is_dev_swiotlb_force(dev)))

This is a mess.  I think the right way is to have an always_bounce flag
in the io_tlb_mem structure instead.  Then the global swiotlb_force can
go away and be replaced with this, and having no io_tlb_mem structure at
all would mean no bounce buffering at all (after a little refactoring).
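A rough sketch of the per-pool always_bounce idea; the types and the
swiotlb_always_bounce name are illustrative only, not the real kernel
structures or API.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Illustrative stand-ins: a per-pool flag replacing the global
 * swiotlb_force, as suggested above. */
struct io_tlb_mem { bool always_bounce; };
struct device { struct io_tlb_mem *dma_io_tlb_mem; };

/*
 * With the flag kept in io_tlb_mem, the per-device check becomes a
 * single lookup, and a device with no io_tlb_mem at all simply never
 * bounces; no global state and no #ifdef are needed at the call site.
 */
static bool swiotlb_always_bounce(const struct device *dev)
{
	const struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

	return mem && mem->always_bounce;
}
```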


From xen-devel-bounces@lists.xenproject.org Mon May 10 15:07:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:07:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125321.235907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7VQ-00089A-0V; Mon, 10 May 2021 15:07:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125321.235907; Mon, 10 May 2021 15:07:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7VP-000893-TM; Mon, 10 May 2021 15:07:35 +0000
Received: by outflank-mailman (input) for mailman id 125321;
 Mon, 10 May 2021 15:07:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lg7VO-00088x-DG
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 15:07:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg7VN-0007Zf-59; Mon, 10 May 2021 15:07:33 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg7VM-0000KU-S6; Mon, 10 May 2021 15:07:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=GjXwukIvD8kbHiXk9G8DBtnyyL6OQFNZ/+Pgap0hRLY=; b=R8whYJgva8vwlgzUNJSnA09g6P
	+wvvrNmPM8K3Ns7z6V8iYypJclaFPIhJd241kxe4kQ/rwOnWB1PNFkgFsoYnzvX5Zx+vCybBlSt+f
	S6qjyV0dzVw4fSv+s1Hm9twIsR5trnI9y5eAA60urvtlPlpQtjEyQEAt38QGI7Fg8t8Y=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] tools/xenstore: Fix indentation in the header of xenstored_control.c
Date: Mon, 10 May 2021 16:07:28 +0100
Message-Id: <20210510150728.6263-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Commit e867af081d94 "tools/xenstore: save new binary for live update"
seemed to have spuriously changed the indentation of the first line of
the copyright header.

The previous indentation is re-instated so all the lines are indented
the same.

Reported-by: Bjoern Doebel <doebel@amazon.com>
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 8e470f2b2056..52d4817679fe 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -1,5 +1,5 @@
 /*
-Interactive commands for Xen Store Daemon.
+    Interactive commands for Xen Store Daemon.
     Copyright (C) 2017 Juergen Gross, SUSE Linux GmbH
 
     This program is free software; you can redistribute it and/or modify
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 15:13:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:13:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125332.235919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7ad-00019n-Jc; Mon, 10 May 2021 15:12:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125332.235919; Mon, 10 May 2021 15:12:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg7ad-00019g-Gd; Mon, 10 May 2021 15:12:59 +0000
Received: by outflank-mailman (input) for mailman id 125332;
 Mon, 10 May 2021 15:12:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=EdaL=KF=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lg7ac-00019a-O5
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 15:12:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1a9ec5d1-9afb-4aa2-8dd7-713f1401c464;
 Mon, 10 May 2021 15:12:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A5976B151;
 Mon, 10 May 2021 15:12:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a9ec5d1-9afb-4aa2-8dd7-713f1401c464
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620659576; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=BcNtx2gcnf7x/Le/H4uC9RhArURcAXZPcpUU0lgSHqA=;
	b=kDaAxfgPkVzH2E5Pva9mC8VdU+WYTHv0oQoeH82vDMVO9cstQOBPfaLszS6bkwvGLZousF
	j0Oh11KZ7pO9XY4y7asLiQ9f+fN9lB1JWdrma3LnAXeZdQOvU+vsP0TWD7lAd979EcggOL
	mOlps507dAr0pJ/Gpexz6m5StPLBnHs=
Subject: Re: [PATCH] tools/xenstore: Fix indentation in the header of
 xenstored_control.c
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210510150728.6263-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <37c4fadc-c177-e07f-1026-748a1caa943d@suse.com>
Date: Mon, 10 May 2021 17:12:55 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210510150728.6263-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="LqSVLGe3BH6Uv84uqaWA1IQ4sYUtbRakS"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--LqSVLGe3BH6Uv84uqaWA1IQ4sYUtbRakS
Content-Type: multipart/mixed; boundary="XNy9W6Xx0CKCwGbQqNjZCQbtYolwBvURW";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
Message-ID: <37c4fadc-c177-e07f-1026-748a1caa943d@suse.com>
Subject: Re: [PATCH] tools/xenstore: Fix indentation in the header of
 xenstored_control.c
References: <20210510150728.6263-1-julien@xen.org>
In-Reply-To: <20210510150728.6263-1-julien@xen.org>

--XNy9W6Xx0CKCwGbQqNjZCQbtYolwBvURW
Content-Type: multipart/mixed;
 boundary="------------3DAED068CCEE4D3EDD0940A5"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------3DAED068CCEE4D3EDD0940A5
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 10.05.21 17:07, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Commit e867af081d94 "tools/xenstore: save new binary for live update"
> seemed to have spuriously changed the indentation of the first line of
> the copyright header.
>
> The previous indentation is re-instated so all the lines are indented
> the same.
>
> Reported-by: Bjoern Doebel <doebel@amazon.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------3DAED068CCEE4D3EDD0940A5--

--XNy9W6Xx0CKCwGbQqNjZCQbtYolwBvURW--


--LqSVLGe3BH6Uv84uqaWA1IQ4sYUtbRakS--


From xen-devel-bounces@lists.xenproject.org Mon May 10 15:47:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:47:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125342.235931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg88C-0004iR-Fo; Mon, 10 May 2021 15:47:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125342.235931; Mon, 10 May 2021 15:47:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg88C-0004iK-Ci; Mon, 10 May 2021 15:47:40 +0000
Received: by outflank-mailman (input) for mailman id 125342;
 Mon, 10 May 2021 15:47:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JfCf=KF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lg88B-0004iE-6X
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 15:47:39 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 0a22bc61-3b23-4cde-bcba-7184bf5cb9e5;
 Mon, 10 May 2021 15:47:38 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id B6FFF168F;
 Mon, 10 May 2021 08:47:37 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 0FC853F73B;
 Mon, 10 May 2021 08:47:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a22bc61-3b23-4cde-bcba-7184bf5cb9e5
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH v5 0/2] xen/pci: Make PCI passthrough code non-x86 specific
Date: Mon, 10 May 2021 16:47:25 +0100
Message-Id: <cover.1620661205.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1

This patch series is preparatory work for implementing PCI passthrough support
on the ARM architecture.

Rahul Singh (2):
  xen/pci: Refactor PCI MSI intercept related code
  xen/pci: Refactor MSI code that implements MSI functionality within
    XEN

 xen/arch/x86/Kconfig                    |  1 +
 xen/drivers/passthrough/Makefile        |  1 +
 xen/drivers/passthrough/msi-intercept.c | 96 +++++++++++++++++++++++++
 xen/drivers/passthrough/pci.c           | 54 ++++----------
 xen/drivers/pci/Kconfig                 |  4 ++
 xen/drivers/vpci/Makefile               |  3 +-
 xen/drivers/vpci/header.c               | 19 ++---
 xen/drivers/vpci/msix.c                 | 54 ++++++++++++++
 xen/drivers/vpci/vpci.c                 |  3 +-
 xen/include/xen/msi-intercept.h         | 57 +++++++++++++++
 xen/include/xen/pci.h                   | 11 +--
 xen/include/xen/vpci.h                  | 43 +++++------
 xen/xsm/flask/hooks.c                   |  8 +--
 13 files changed, 262 insertions(+), 92 deletions(-)
 create mode 100644 xen/drivers/passthrough/msi-intercept.c
 create mode 100644 xen/include/xen/msi-intercept.h

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 15:47:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:47:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125343.235943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg88Q-00054D-Na; Mon, 10 May 2021 15:47:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125343.235943; Mon, 10 May 2021 15:47:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg88Q-000546-KU; Mon, 10 May 2021 15:47:54 +0000
Received: by outflank-mailman (input) for mailman id 125343;
 Mon, 10 May 2021 15:47:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JfCf=KF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lg88P-00052m-Id
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 15:47:53 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 20e8c1f1-87c8-40b4-9f98-068192085dfe;
 Mon, 10 May 2021 15:47:51 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 7B145168F;
 Mon, 10 May 2021 08:47:46 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id C5BCE3F73B;
 Mon, 10 May 2021 08:47:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 20e8c1f1-87c8-40b4-9f98-068192085dfe
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Wei Liu <wl@xen.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v5 1/2] xen/pci: Refactor PCI MSI intercept related code
Date: Mon, 10 May 2021 16:47:26 +0100
Message-Id: <73305c5cb5b805618ec24405c1848d240a786a89.1620661205.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1620661205.git.rahul.singh@arm.com>
References: <cover.1620661205.git.rahul.singh@arm.com>

The MSI intercept code is not useful on ARM, where MSI interrupts are
injected via the GICv3 ITS.

Therefore, introduce a new flag, CONFIG_HAS_PCI_MSI_INTERCEPT, to gate
the MSI code in common code, and implement stub versions of the unused
functions so that no compilation errors occur when HAS_PCI is enabled
for ARM.

No functional change intended.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/arch/x86/Kconfig                    |  1 +
 xen/drivers/passthrough/Makefile        |  1 +
 xen/drivers/passthrough/msi-intercept.c | 55 +++++++++++++++++++++++++
 xen/drivers/passthrough/pci.c           | 20 ++++-----
 xen/drivers/pci/Kconfig                 |  4 ++
 xen/drivers/vpci/Makefile               |  3 +-
 xen/drivers/vpci/header.c               | 19 ++-------
 xen/drivers/vpci/msix.c                 | 54 ++++++++++++++++++++++++
 xen/drivers/vpci/vpci.c                 |  3 +-
 xen/include/xen/msi-intercept.h         | 48 +++++++++++++++++++++
 xen/include/xen/vpci.h                  | 43 ++++++++-----------
 11 files changed, 195 insertions(+), 56 deletions(-)
 create mode 100644 xen/drivers/passthrough/msi-intercept.c
 create mode 100644 xen/include/xen/msi-intercept.h
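[Note for reviewers: the stub-gating pattern this patch relies on can be
sketched as below. This is a minimal, self-contained illustration only; the
type and function names (pci_dev_stub, pdev_msix_assign_stub) are hypothetical
and deliberately do not match Xen's real API.]

```c
/* Toggle to emulate "select HAS_PCI_MSI_INTERCEPT" being (un)set in Kconfig. */
#define CONFIG_HAS_PCI_MSI_INTERCEPT 1

struct pci_dev_stub {
    int msix_present;     /* stand-in for pdev->msix being non-NULL */
    int msix_reset_done;
};

#if CONFIG_HAS_PCI_MSI_INTERCEPT

/* Real implementation: only compiled when the feature is selected. */
static int pdev_msix_assign_stub(struct pci_dev_stub *pdev)
{
    if ( pdev->msix_present )
        pdev->msix_reset_done = 1;  /* models pci_reset_msix_state() */
    return 0;
}

#else /* !CONFIG_HAS_PCI_MSI_INTERCEPT */

/*
 * Stub: lets common code call the function unconditionally and still
 * build on architectures without MSI intercept support.
 */
static inline int pdev_msix_assign_stub(struct pci_dev_stub *pdev)
{
    (void)pdev;
    return 0;
}

#endif
```

Common callers never need an #ifdef at the call site; flipping the config
only swaps which definition is compiled in.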

diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index e55e029b79..6630724cfe 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -19,6 +19,7 @@ config X86
 	select HAS_NS16550
 	select HAS_PASSTHROUGH
 	select HAS_PCI
+	select HAS_PCI_MSI_INTERCEPT
 	select HAS_PDX
 	select HAS_SCHED_GRANULARITY
 	select HAS_UBSAN
diff --git a/xen/drivers/passthrough/Makefile b/xen/drivers/passthrough/Makefile
index 445690e3e5..eb27d10f5a 100644
--- a/xen/drivers/passthrough/Makefile
+++ b/xen/drivers/passthrough/Makefile
@@ -7,3 +7,4 @@ obj-y += iommu.o
 obj-$(CONFIG_HAS_PCI) += pci.o
 obj-$(CONFIG_HAS_DEVICE_TREE) += device_tree.o
 obj-$(CONFIG_HAS_PCI) += ats.o
+obj-$(CONFIG_HAS_PCI_MSI_INTERCEPT) += msi-intercept.o
diff --git a/xen/drivers/passthrough/msi-intercept.c b/xen/drivers/passthrough/msi-intercept.c
new file mode 100644
index 0000000000..757f3fc236
--- /dev/null
+++ b/xen/drivers/passthrough/msi-intercept.c
@@ -0,0 +1,55 @@
+/*
+ * Copyright (C) 2008,  Netronome Systems, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/init.h>
+#include <xen/pci.h>
+#include <asm/msi.h>
+#include <asm/hvm/io.h>
+
+int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
+{
+    int rc;
+
+    if ( pdev->msix )
+    {
+        rc = pci_reset_msix_state(pdev);
+        if ( rc )
+            return rc;
+        msixtbl_init(d);
+    }
+
+    return 0;
+}
+
+void pdev_dump_msi(const struct pci_dev *pdev)
+{
+    const struct msi_desc *msi;
+
+    printk("- MSIs < ");
+    list_for_each_entry ( msi, &pdev->msi_list, list )
+        printk("%d ", msi->irq);
+    printk(">");
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 199ce08612..237461b4ab 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -32,8 +32,8 @@
 #include <xen/softirq.h>
 #include <xen/tasklet.h>
 #include <xen/vpci.h>
+#include <xen/msi-intercept.h>
 #include <xsm/xsm.h>
-#include <asm/msi.h>
 #include "ats.h"
 
 struct pci_seg {
@@ -1271,18 +1271,16 @@ bool_t pcie_aer_get_firmware_first(const struct pci_dev *pdev)
 static int _dump_pci_devices(struct pci_seg *pseg, void *arg)
 {
     struct pci_dev *pdev;
-    struct msi_desc *msi;
 
     printk("==== segment %04x ====\n", pseg->nr);
 
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
     {
-        printk("%pp - %pd - node %-3d - MSIs < ",
+        printk("%pp - %pd - node %-3d ",
                &pdev->sbdf, pdev->domain,
                (pdev->node != NUMA_NO_NODE) ? pdev->node : -1);
-        list_for_each_entry ( msi, &pdev->msi_list, list )
-               printk("%d ", msi->irq);
-        printk(">\n");
+        pdev_dump_msi(pdev);
+        printk("\n");
     }
 
     return 0;
@@ -1422,13 +1420,9 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     ASSERT(pdev && (pdev->domain == hardware_domain ||
                     pdev->domain == dom_io));
 
-    if ( pdev->msix )
-    {
-        rc = pci_reset_msix_state(pdev);
-        if ( rc )
-            goto done;
-        msixtbl_init(d);
-    }
+    rc = pdev_msix_assign(d, pdev);
+    if ( rc )
+        goto done;
 
     pdev->fault.count = 0;
 
diff --git a/xen/drivers/pci/Kconfig b/xen/drivers/pci/Kconfig
index 7da03fa13b..295731a3f4 100644
--- a/xen/drivers/pci/Kconfig
+++ b/xen/drivers/pci/Kconfig
@@ -1,3 +1,7 @@
 
 config HAS_PCI
 	bool
+
+config HAS_PCI_MSI_INTERCEPT
+	bool
+	depends on HAS_PCI
diff --git a/xen/drivers/vpci/Makefile b/xen/drivers/vpci/Makefile
index 55d1bdfda0..a95e6c2ca0 100644
--- a/xen/drivers/vpci/Makefile
+++ b/xen/drivers/vpci/Makefile
@@ -1 +1,2 @@
-obj-y += vpci.o header.o msi.o msix.o
+obj-y += vpci.o header.o
+obj-$(CONFIG_HAS_PCI_MSI_INTERCEPT) += msi.o msix.o
diff --git a/xen/drivers/vpci/header.c b/xen/drivers/vpci/header.c
index ba9a036202..81d3d2d17f 100644
--- a/xen/drivers/vpci/header.c
+++ b/xen/drivers/vpci/header.c
@@ -206,7 +206,6 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
     struct vpci_header *header = &pdev->vpci->header;
     struct rangeset *mem = rangeset_new(NULL, NULL, 0);
     struct pci_dev *tmp, *dev = NULL;
-    const struct vpci_msix *msix = pdev->vpci->msix;
     unsigned int i;
     int rc;
 
@@ -244,21 +243,11 @@ static int modify_bars(const struct pci_dev *pdev, uint16_t cmd, bool rom_only)
     }
 
     /* Remove any MSIX regions if present. */
-    for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
+    rc = vpci_remove_msix_regions(pdev->vpci, mem);
+    if ( rc )
     {
-        unsigned long start = PFN_DOWN(vmsix_table_addr(pdev->vpci, i));
-        unsigned long end = PFN_DOWN(vmsix_table_addr(pdev->vpci, i) +
-                                     vmsix_table_size(pdev->vpci, i) - 1);
-
-        rc = rangeset_remove_range(mem, start, end);
-        if ( rc )
-        {
-            printk(XENLOG_G_WARNING
-                   "Failed to remove MSIX table [%lx, %lx]: %d\n",
-                   start, end, rc);
-            rangeset_destroy(mem);
-            return rc;
-        }
+        rangeset_destroy(mem);
+        return rc;
     }
 
     /*
diff --git a/xen/drivers/vpci/msix.c b/xen/drivers/vpci/msix.c
index 846f1b8d70..fe4ec08816 100644
--- a/xen/drivers/vpci/msix.c
+++ b/xen/drivers/vpci/msix.c
@@ -27,6 +27,35 @@
     ((addr) >= vmsix_table_addr(vpci, nr) &&                              \
      (addr) < vmsix_table_addr(vpci, nr) + vmsix_table_size(vpci, nr))
 
+/*
+ * Helper functions to fetch MSIX related data. They are used by both the
+ * emulated MSIX code and the BAR handlers.
+ */
+static inline paddr_t vmsix_table_base(const struct vpci *vpci,
+                                       unsigned int nr)
+{
+    return vpci->header.bars[vpci->msix->tables[nr] & PCI_MSIX_BIRMASK].addr;
+}
+
+static inline paddr_t vmsix_table_addr(const struct vpci *vpci,
+                                       unsigned int nr)
+{
+    return vmsix_table_base(vpci, nr) +
+           (vpci->msix->tables[nr] & ~PCI_MSIX_BIRMASK);
+}
+
+/*
+ * Note regarding the size calculation of the PBA: the spec mentions "The last
+ * QWORD will not necessarily be fully populated", so it implies that the PBA
+ * size is 64-bit aligned.
+ */
+static inline size_t vmsix_table_size(const struct vpci *vpci, unsigned int nr)
+{
+    return (nr == VPCI_MSIX_TABLE)
+           ? vpci->msix->max_entries * PCI_MSIX_ENTRY_SIZE
+           : ROUNDUP(DIV_ROUND_UP(vpci->msix->max_entries, 8), 8);
+}
+
 static uint32_t control_read(const struct pci_dev *pdev, unsigned int reg,
                              void *data)
 {
@@ -428,6 +457,31 @@ int vpci_make_msix_hole(const struct pci_dev *pdev)
     return 0;
 }
 
+int vpci_remove_msix_regions(const struct vpci *vpci, struct rangeset *mem)
+{
+    const struct vpci_msix *msix = vpci->msix;
+    unsigned int i;
+    int rc;
+
+    for ( i = 0; msix && i < ARRAY_SIZE(msix->tables); i++ )
+    {
+        unsigned long start = PFN_DOWN(vmsix_table_addr(vpci, i));
+        unsigned long end = PFN_DOWN(vmsix_table_addr(vpci, i) +
+                vmsix_table_size(vpci, i) - 1);
+
+        rc = rangeset_remove_range(mem, start, end);
+        if ( rc )
+        {
+            printk(XENLOG_G_WARNING
+                   "Failed to remove MSIX table [%lx, %lx]: %d\n",
+                   start, end, rc);
+            return rc;
+        }
+    }
+
+    return 0;
+}
+
 static int init_msix(struct pci_dev *pdev)
 {
     struct domain *d = pdev->domain;
diff --git a/xen/drivers/vpci/vpci.c b/xen/drivers/vpci/vpci.c
index cbd1bac7fc..85084dd924 100644
--- a/xen/drivers/vpci/vpci.c
+++ b/xen/drivers/vpci/vpci.c
@@ -48,8 +48,7 @@ void vpci_remove_device(struct pci_dev *pdev)
         xfree(r);
     }
     spin_unlock(&pdev->vpci->lock);
-    xfree(pdev->vpci->msix);
-    xfree(pdev->vpci->msi);
+    vpci_msi_free(pdev->vpci);
     xfree(pdev->vpci);
     pdev->vpci = NULL;
 }
diff --git a/xen/include/xen/msi-intercept.h b/xen/include/xen/msi-intercept.h
new file mode 100644
index 0000000000..5cfc21662c
--- /dev/null
+++ b/xen/include/xen/msi-intercept.h
@@ -0,0 +1,48 @@
+/*
+ * Copyright (C) 2008,  Netronome Systems, Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __XEN_MSI_INTERCEPT_H_
+#define __XEN_MSI_INTERCEPT_H_
+
+#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
+
+#include <asm/msi.h>
+
+int pdev_msix_assign(struct domain *d, struct pci_dev *pdev);
+void pdev_dump_msi(const struct pci_dev *pdev);
+
+#else /* !CONFIG_HAS_PCI_MSI_INTERCEPT */
+
+static inline int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
+{
+    return 0;
+}
+
+static inline void pdev_dump_msi(const struct pci_dev *pdev) {}
+
+#endif /* CONFIG_HAS_PCI_MSI_INTERCEPT */
+
+#endif /* __XEN_MSI_INTERCEPT_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/xen/vpci.h b/xen/include/xen/vpci.h
index 9f5b5d52e1..6b828c9bda 100644
--- a/xen/include/xen/vpci.h
+++ b/xen/include/xen/vpci.h
@@ -91,6 +91,7 @@ struct vpci {
         /* FIXME: currently there's no support for SR-IOV. */
     } header;
 
+#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
     /* MSI data. */
     struct vpci_msi {
       /* Address. */
@@ -136,6 +137,7 @@ struct vpci {
             struct vpci_arch_msix_entry arch;
         } entries[];
     } *msix;
+#endif /* CONFIG_HAS_PCI_MSI_INTERCEPT */
 #endif
 };
 
@@ -147,7 +149,7 @@ struct vpci_vcpu {
     bool rom_only : 1;
 };
 
-#ifdef __XEN__
+#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
 void vpci_dump_msi(void);
 
 /* Make sure there's a hole in the p2m for the MSIX mmio areas. */
@@ -174,41 +176,32 @@ int __must_check vpci_msix_arch_disable_entry(struct vpci_msix_entry *entry,
                                               const struct pci_dev *pdev);
 void vpci_msix_arch_init_entry(struct vpci_msix_entry *entry);
 int vpci_msix_arch_print(const struct vpci_msix *msix);
+int vpci_remove_msix_regions(const struct vpci *vpci, struct rangeset *mem);
 
-/*
- * Helper functions to fetch MSIX related data. They are used by both the
- * emulated MSIX code and the BAR handlers.
- */
-static inline paddr_t vmsix_table_base(const struct vpci *vpci, unsigned int nr)
+static inline unsigned int vmsix_entry_nr(const struct vpci_msix *msix,
+                                          const struct vpci_msix_entry *entry)
 {
-    return vpci->header.bars[vpci->msix->tables[nr] & PCI_MSIX_BIRMASK].addr;
+    return entry - msix->entries;
 }
 
-static inline paddr_t vmsix_table_addr(const struct vpci *vpci, unsigned int nr)
+static inline void vpci_msi_free(struct vpci *vpci)
 {
-    return vmsix_table_base(vpci, nr) +
-           (vpci->msix->tables[nr] & ~PCI_MSIX_BIRMASK);
+    XFREE(vpci->msix);
+    XFREE(vpci->msi);
 }
-
-/*
- * Note regarding the size calculation of the PBA: the spec mentions "The last
- * QWORD will not necessarily be fully populated", so it implies that the PBA
- * size is 64-bit aligned.
- */
-static inline size_t vmsix_table_size(const struct vpci *vpci, unsigned int nr)
+#else /* !CONFIG_HAS_PCI_MSI_INTERCEPT */
+static inline int vpci_make_msix_hole(const struct pci_dev *pdev)
 {
-    return
-        (nr == VPCI_MSIX_TABLE) ? vpci->msix->max_entries * PCI_MSIX_ENTRY_SIZE
-                                : ROUNDUP(DIV_ROUND_UP(vpci->msix->max_entries,
-                                                       8), 8);
+    return 0;
 }
 
-static inline unsigned int vmsix_entry_nr(const struct vpci_msix *msix,
-                                          const struct vpci_msix_entry *entry)
+static inline int vpci_remove_msix_regions(const struct vpci *vpci,
+                                           struct rangeset *mem)
 {
-    return entry - msix->entries;
+    return 0;
 }
-#endif /* __XEN__ */
+static inline void vpci_msi_free(struct vpci *vpci) {}
+#endif /* CONFIG_HAS_PCI_MSI_INTERCEPT */
 
 #else /* !CONFIG_HAS_VPCI */
 struct vpci_vcpu {};
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 15:48:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:48:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125348.235954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg88p-0005oI-0t; Mon, 10 May 2021 15:48:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125348.235954; Mon, 10 May 2021 15:48:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg88o-0005oB-U1; Mon, 10 May 2021 15:48:18 +0000
Received: by outflank-mailman (input) for mailman id 125348;
 Mon, 10 May 2021 15:48:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JfCf=KF=arm.com=rahul.singh@srs-us1.protection.inumbo.net>)
 id 1lg88n-0005kO-5S
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 15:48:17 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 03ff9d8b-7816-49a7-b3dc-18f0d05f1b0d;
 Mon, 10 May 2021 15:48:15 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 6CF0F168F;
 Mon, 10 May 2021 08:48:15 -0700 (PDT)
Received: from e109506.cambridge.arm.com (e109506.cambridge.arm.com
 [10.1.199.1])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id DB5433F73B;
 Mon, 10 May 2021 08:48:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 03ff9d8b-7816-49a7-b3dc-18f0d05f1b0d
From: Rahul Singh <rahul.singh@arm.com>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com,
	rahul.singh@arm.com,
	Jan Beulich <jbeulich@suse.com>,
	Paul Durrant <paul@xen.org>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Daniel De Graaf <dgdegra@tycho.nsa.gov>
Subject: [PATCH v5 2/2] xen/pci: Refactor MSI code that implements MSI functionality within XEN
Date: Mon, 10 May 2021 16:47:27 +0100
Message-Id: <2843d1c399e8ed81d807e680147cac5cd601d28a.1620661205.git.rahul.singh@arm.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1620661205.git.rahul.singh@arm.com>
References: <cover.1620661205.git.rahul.singh@arm.com>

The MSI code that implements MSI functionality within Xen is not usable
on ARM. Move the code under the CONFIG_HAS_PCI_MSI_INTERCEPT flag to
gate it for ARM.

Currently we have no idea how MSI functionality will be supported on
other architectures, so we have decided to move the code under
CONFIG_HAS_PCI_MSI_INTERCEPT. We know this is not the right flag to gate
the code, but we use it to avoid introducing an extra flag.

No functional change intended.

Signed-off-by: Rahul Singh <rahul.singh@arm.com>
---
 xen/drivers/passthrough/msi-intercept.c | 41 +++++++++++++++++++++++++
 xen/drivers/passthrough/pci.c           | 34 ++++----------------
 xen/include/xen/msi-intercept.h         | 11 ++++++-
 xen/include/xen/pci.h                   | 11 ++++---
 xen/xsm/flask/hooks.c                   |  8 ++---
 5 files changed, 68 insertions(+), 37 deletions(-)
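[Note for reviewers: the shape of this refactoring, moving the capability
probing out of device allocation into a gated init/deinit helper pair that
reports failure instead of freeing the device itself, can be sketched as
below. This is a hypothetical, self-contained model; the names (dev_stub,
dev_msi_init_stub, ENOMEM_STUB) are illustrative only and not Xen's API.]

```c
#include <stdlib.h>

#define ENOMEM_STUB 12   /* stand-in for -ENOMEM */

struct msix_state {
    int nr_entries;
};

struct dev_stub {
    struct msix_state *msix;
};

/*
 * Models pdev_msi_init(): probes for (here, is told about) MSI-X support,
 * allocates the tracking state, and returns an error code on failure.
 * The caller, like alloc_pdev() after this patch, frees the device itself.
 */
static int dev_msi_init_stub(struct dev_stub *dev, int has_msix, int fail_alloc)
{
    if ( has_msix )
    {
        struct msix_state *msix = fail_alloc ? NULL : calloc(1, sizeof(*msix));

        if ( !msix )
            return -ENOMEM_STUB;

        msix->nr_entries = 4;   /* models msix_table_size(ctrl) */
        dev->msix = msix;
    }
    return 0;
}

/* Models pdev_msi_deinit(): releases state and clears the pointer. */
static void dev_msi_deinit_stub(struct dev_stub *dev)
{
    free(dev->msix);
    dev->msix = NULL;
}
```

With the stub variants compiled on architectures without MSI intercept,
init returns 0 and deinit is a no-op, so callers stay #ifdef-free.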

diff --git a/xen/drivers/passthrough/msi-intercept.c b/xen/drivers/passthrough/msi-intercept.c
index 757f3fc236..3f9d4f480b 100644
--- a/xen/drivers/passthrough/msi-intercept.c
+++ b/xen/drivers/passthrough/msi-intercept.c
@@ -44,6 +44,47 @@ void pdev_dump_msi(const struct pci_dev *pdev)
     printk(">");
 }
 
+int pdev_msi_init(struct pci_dev *pdev)
+{
+    unsigned int pos;
+
+    INIT_LIST_HEAD(&pdev->msi_list);
+
+    pos = pci_find_cap_offset(pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
+                              PCI_FUNC(pdev->devfn), PCI_CAP_ID_MSI);
+    if ( pos )
+    {
+        uint16_t ctrl = pci_conf_read16(pdev->sbdf, msi_control_reg(pos));
+
+        pdev->msi_maxvec = multi_msi_capable(ctrl);
+    }
+
+    pos = pci_find_cap_offset(pdev->seg, pdev->bus, PCI_SLOT(pdev->devfn),
+                              PCI_FUNC(pdev->devfn), PCI_CAP_ID_MSIX);
+    if ( pos )
+    {
+        struct arch_msix *msix = xzalloc(struct arch_msix);
+        uint16_t ctrl;
+
+        if ( !msix )
+            return -ENOMEM;
+
+        spin_lock_init(&msix->table_lock);
+
+        ctrl = pci_conf_read16(pdev->sbdf, msix_control_reg(pos));
+        msix->nr_entries = msix_table_size(ctrl);
+
+        pdev->msix = msix;
+    }
+
+    return 0;
+}
+
+void pdev_msi_deinit(struct pci_dev *pdev)
+{
+    XFREE(pdev->msix);
+}
+
 /*
  * Local variables:
  * mode: C
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 237461b4ab..a3ec85c293 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -314,6 +314,7 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
 {
     struct pci_dev *pdev;
     unsigned int pos;
+    int rc;
 
     list_for_each_entry ( pdev, &pseg->alldevs_list, alldevs_list )
         if ( pdev->bus == bus && pdev->devfn == devfn )
@@ -327,35 +328,12 @@ static struct pci_dev *alloc_pdev(struct pci_seg *pseg, u8 bus, u8 devfn)
     *((u8*) &pdev->bus) = bus;
     *((u8*) &pdev->devfn) = devfn;
     pdev->domain = NULL;
-    INIT_LIST_HEAD(&pdev->msi_list);
-
-    pos = pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                              PCI_CAP_ID_MSI);
-    if ( pos )
-    {
-        uint16_t ctrl = pci_conf_read16(pdev->sbdf, msi_control_reg(pos));
 
-        pdev->msi_maxvec = multi_msi_capable(ctrl);
-    }
-
-    pos = pci_find_cap_offset(pseg->nr, bus, PCI_SLOT(devfn), PCI_FUNC(devfn),
-                              PCI_CAP_ID_MSIX);
-    if ( pos )
+    rc = pdev_msi_init(pdev);
+    if ( rc )
     {
-        struct arch_msix *msix = xzalloc(struct arch_msix);
-        uint16_t ctrl;
-
-        if ( !msix )
-        {
-            xfree(pdev);
-            return NULL;
-        }
-        spin_lock_init(&msix->table_lock);
-
-        ctrl = pci_conf_read16(pdev->sbdf, msix_control_reg(pos));
-        msix->nr_entries = msix_table_size(ctrl);
-
-        pdev->msix = msix;
+        xfree(pdev);
+        return NULL;
     }
 
     list_add(&pdev->alldevs_list, &pseg->alldevs_list);
@@ -449,7 +427,7 @@ static void free_pdev(struct pci_seg *pseg, struct pci_dev *pdev)
     }
 
     list_del(&pdev->alldevs_list);
-    xfree(pdev->msix);
+    pdev_msi_deinit(pdev);
     xfree(pdev);
 }
 
diff --git a/xen/include/xen/msi-intercept.h b/xen/include/xen/msi-intercept.h
index 5cfc21662c..08e467ff79 100644
--- a/xen/include/xen/msi-intercept.h
+++ b/xen/include/xen/msi-intercept.h
@@ -23,9 +23,10 @@
 
 int pdev_msix_assign(struct domain *d, struct pci_dev *pdev);
 void pdev_dump_msi(const struct pci_dev *pdev);
+int pdev_msi_init(struct pci_dev *pdev);
+void pdev_msi_deinit(struct pci_dev *pdev);
 
 #else /* !CONFIG_HAS_PCI_MSI_INTERCEPT */
-
 static inline int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
 {
     return 0;
@@ -33,6 +34,14 @@ static inline int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
 
 static inline void pdev_dump_msi(const struct pci_dev *pdev) {}
 
+static inline int pdev_msi_init(struct pci_dev *pdev)
+{
+    return 0;
+}
+
+static inline void pdev_msi_deinit(struct pci_dev *pdev) {}
+static inline void pci_cleanup_msi(struct pci_dev *pdev) {}
+
 #endif /* CONFIG_HAS_PCI_MSI_INTERCEPT */
 
 #endif /* __XEN_MSI_INTERCEPT_H */
diff --git a/xen/include/xen/pci.h b/xen/include/xen/pci.h
index 8e3d4d9454..01a92ce9e6 100644
--- a/xen/include/xen/pci.h
+++ b/xen/include/xen/pci.h
@@ -79,10 +79,6 @@ struct pci_dev {
     struct list_head alldevs_list;
     struct list_head domain_list;
 
-    struct list_head msi_list;
-
-    struct arch_msix *msix;
-
     struct domain *domain;
 
     const union {
@@ -94,7 +90,14 @@ struct pci_dev {
         pci_sbdf_t sbdf;
     };
 
+#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
+    struct list_head msi_list;
+
+    struct arch_msix *msix;
+
     uint8_t msi_maxvec;
+#endif
+
     uint8_t phantom_stride;
 
     nodeid_t node; /* NUMA node */
diff --git a/xen/xsm/flask/hooks.c b/xen/xsm/flask/hooks.c
index f1a1217c98..024c368fb8 100644
--- a/xen/xsm/flask/hooks.c
+++ b/xen/xsm/flask/hooks.c
@@ -21,7 +21,7 @@
 #include <xen/guest_access.h>
 #include <xen/xenoprof.h>
 #include <xen/iommu.h>
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
 #include <asm/msi.h>
 #endif
 #include <public/xen.h>
@@ -114,7 +114,7 @@ static int get_irq_sid(int irq, u32 *sid, struct avc_audit_data *ad)
         }
         return security_irq_sid(irq, sid);
     }
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
     {
         struct irq_desc *desc = irq_to_desc(irq);
         if ( desc->msi_desc && desc->msi_desc->dev ) {
@@ -874,7 +874,7 @@ static int flask_map_domain_pirq (struct domain *d)
 static int flask_map_domain_msi (struct domain *d, int irq, const void *data,
                                  u32 *sid, struct avc_audit_data *ad)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
     const struct msi_info *msi = data;
     u32 machine_bdf = (msi->seg << 16) | (msi->bus << 8) | msi->devfn;
 
@@ -940,7 +940,7 @@ static int flask_unmap_domain_pirq (struct domain *d)
 static int flask_unmap_domain_msi (struct domain *d, int irq, const void *data,
                                    u32 *sid, struct avc_audit_data *ad)
 {
-#ifdef CONFIG_HAS_PCI
+#ifdef CONFIG_HAS_PCI_MSI_INTERCEPT
     const struct pci_dev *pdev = data;
     u32 machine_bdf = (pdev->seg << 16) | (pdev->bus << 8) | pdev->devfn;
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 10 15:52:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 15:52:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125361.235967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg8Ce-0007IQ-Hs; Mon, 10 May 2021 15:52:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125361.235967; Mon, 10 May 2021 15:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg8Ce-0007IJ-EL; Mon, 10 May 2021 15:52:16 +0000
Received: by outflank-mailman (input) for mailman id 125361;
 Mon, 10 May 2021 15:52:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg8Cd-0007I9-Sx; Mon, 10 May 2021 15:52:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg8Cd-0008Kd-Ig; Mon, 10 May 2021 15:52:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg8Cd-0007oR-8h; Mon, 10 May 2021 15:52:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lg8Cd-0004Te-8E; Mon, 10 May 2021 15:52:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Pr4JMzgl0EMi5/60vbQE0rgcWKnnMSCU+TIDmnj3G/Q=; b=XhddLvVjVD3JpHyo7CyeIRDxeB
	QDiBr326hDDY4NkzrKzCTmrWbIPpl23xIcDkST0UHjpX+Z6QnXcLZoH3FJDx4YRk9tG9g38Atzxa2
	rlhgkPLU1Hw3bnmT1zxH/tLjMHDcGjSm9icK57vlhFgRccbRsUw3TG/9q+Mt71KDJ3q0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161891-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161891: regressions - trouble: blocked/fail/pass/starved
X-Osstest-Failures:
    linux-linus:build-amd64:xen-build:fail:regression
    linux-linus:build-arm64:xen-build:fail:regression
    linux-linus:build-arm64-xsm:xen-build:fail:regression
    linux-linus:build-i386-xsm:xen-build:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-raw:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-coresched-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-rtds:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ovmf-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:build-check(1):blocked:nonblocking
    linux-linus:build-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qcow2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-amd64-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-dom0pvh-xl-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-i386-pvgrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-pvhv2-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-multivcpu:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-pygrub:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-shadow:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-freebsd10-i386:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-pair:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-pvshim:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:hosts-allocate:starved:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    linux=6efb943b8616ec53a5e444193dccf1af9ad627b5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 15:52:15 +0000

flight 161891 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161891/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-amd64                   6 xen-build                fail REGR. vs. 152332
 build-arm64                   6 xen-build                fail REGR. vs. 152332
 build-arm64-xsm               6 xen-build                fail REGR. vs. 152332
 build-i386-xsm                6 xen-build                fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemut-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemut-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1)  blocked n/a
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-i386-xl-qemuu-ovmf-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-win7-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-qemuu-ws16-amd64  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-raw        1 build-check(1)               blocked  n/a
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-amd64-i386-examine       1 build-check(1)               blocked  n/a
 test-amd64-coresched-i386-xl  1 build-check(1)               blocked  n/a
 test-amd64-coresched-amd64-xl  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-shadow    1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-rtds      1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-ws16-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-ovmf-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 1 build-check(1) blocked n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow  1 build-check(1) blocked n/a
 build-amd64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qemut-ws16-amd64  1 build-check(1)             blocked n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64  1 build-check(1)             blocked n/a
 test-amd64-amd64-xl-qemut-debianhvm-amd64  1 build-check(1)        blocked n/a
 test-amd64-amd64-xl-qcow2     1 build-check(1)               blocked  n/a
 test-amd64-amd64-amd64-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvshim    1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-dom0pvh-xl-intel  1 build-check(1)               blocked  n/a
 test-amd64-amd64-examine      1 build-check(1)               blocked  n/a
 test-amd64-amd64-i386-pvgrub  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-pvhv2-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-multivcpu  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit2   1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-pair         1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-credit1   1 build-check(1)               blocked  n/a
 test-amd64-amd64-pygrub       1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd11-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-freebsd12-amd64  1 build-check(1)           blocked n/a
 test-amd64-amd64-qemuu-nested-amd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-intel  1 build-check(1)              blocked n/a
 test-amd64-i386-xl-xsm        1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-shadow     1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-amd64  1 build-check(1)               blocked  n/a
 test-amd64-i386-freebsd10-i386  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-amd64-i386-pair          1 build-check(1)               blocked  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemut-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  1 build-check(1)               blocked n/a
 test-amd64-i386-qemuu-rhel6hvm-intel  1 build-check(1)             blocked n/a
 test-amd64-i386-xl            1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-pvshim     1 build-check(1)               blocked  n/a
 test-amd64-i386-xl-qemut-debianhvm-amd64  1 build-check(1)         blocked n/a
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm  1 build-check(1)      blocked n/a
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm  3 hosts-allocate     starved n/a
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm 3 hosts-allocate starved n/a

version targeted for testing:
 linux                6efb943b8616ec53a5e444193dccf1af9ad627b5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  282 days
Failing since        152366  2020-08-01 20:49:34 Z  281 days  471 attempts
Testing same since   161887  2021-05-09 23:42:16 Z    0 days    2 attempts

------------------------------------------------------------
6035 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              fail    
 build-i386-xsm                                               fail    
 build-amd64                                                  fail    
 build-arm64                                                  fail    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          blocked 
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          blocked 
 test-amd64-coresched-amd64-xl                                blocked 
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           blocked 
 test-amd64-coresched-i386-xl                                 blocked 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        starved 
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         blocked 
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 starved 
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       blocked 
 test-amd64-amd64-qemuu-nested-amd                            blocked 
 test-amd64-amd64-xl-pvhv2-amd                                blocked 
 test-amd64-i386-qemut-rhel6hvm-amd                           blocked 
 test-amd64-i386-qemuu-rhel6hvm-amd                           blocked 
 test-amd64-amd64-dom0pvh-xl-amd                              blocked 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemut-debianhvm-amd64                     blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     blocked 
 test-amd64-i386-freebsd10-amd64                              blocked 
 test-amd64-amd64-qemuu-freebsd11-amd64                       blocked 
 test-amd64-amd64-qemuu-freebsd12-amd64                       blocked 
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ovmf-amd64                          blocked 
 test-amd64-amd64-xl-qemut-win7-amd64                         blocked 
 test-amd64-i386-xl-qemut-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-win7-amd64                         blocked 
 test-amd64-i386-xl-qemuu-win7-amd64                          blocked 
 test-amd64-amd64-xl-qemut-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemut-ws16-amd64                          blocked 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         blocked 
 test-amd64-i386-xl-qemuu-ws16-amd64                          blocked 
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  blocked 
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  blocked 
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        blocked 
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         blocked 
 test-amd64-amd64-examine                                     blocked 
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      blocked 
 test-amd64-i386-freebsd10-i386                               blocked 
 test-amd64-amd64-qemuu-nested-intel                          blocked 
 test-amd64-amd64-xl-pvhv2-intel                              blocked 
 test-amd64-i386-qemut-rhel6hvm-intel                         blocked 
 test-amd64-i386-qemuu-rhel6hvm-intel                         blocked 
 test-amd64-amd64-dom0pvh-xl-intel                            blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-xl-multivcpu                                blocked 
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        blocked 
 test-amd64-i386-pair                                         blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-amd64-amd64-amd64-pvgrub                                blocked 
 test-amd64-amd64-i386-pvgrub                                 blocked 
 test-amd64-amd64-xl-pvshim                                   blocked 
 test-amd64-i386-xl-pvshim                                    blocked 
 test-amd64-amd64-pygrub                                      blocked 
 test-amd64-amd64-xl-qcow2                                    blocked 
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       blocked 
 test-amd64-amd64-xl-rtds                                     blocked 
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             blocked 
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              blocked 
 test-amd64-amd64-xl-shadow                                   blocked 
 test-amd64-i386-xl-shadow                                    blocked 
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1637319 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 10 16:51:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 16:51:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125371.235982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg97N-0005XN-2m; Mon, 10 May 2021 16:50:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125371.235982; Mon, 10 May 2021 16:50:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg97M-0005XG-Vt; Mon, 10 May 2021 16:50:52 +0000
Received: by outflank-mailman (input) for mailman id 125371;
 Mon, 10 May 2021 16:50:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg97M-0005X6-6P; Mon, 10 May 2021 16:50:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg97M-0001RG-05; Mon, 10 May 2021 16:50:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lg97L-0001ib-Mk; Mon, 10 May 2021 16:50:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lg97L-0005FF-MD; Mon, 10 May 2021 16:50:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kLkHhPFvn3S441ugXhczdrOcglo+xeDtmrJLbHwOuE0=; b=beGT4DDkADq5P2DhPV8ZixtRGK
	Ci8dJjQnLn6Bx2gG+Z0h5TbwqAsfFe4+ATejG7cTHw+Rh2EUpTzfMkSzM7Pm0gAeA2b3blalVS9Xb
	waU0wXs0AUnyqUaXIF3qKn9FFuJqLkvFUTo0TozT0k+lpLcpljRjSZzNGZ/QUBX1dG4Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161890-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161890: regressions - trouble: fail/pass/starved
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:hosts-allocate:starved:nonblocking
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:hosts-allocate:starved:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    qemuu=d90f154867ec0ec22fd719164b88716e8fd48672
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 16:50:51 +0000

flight 161890 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161890/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-xl-qemuu-debianhvm-amd64  3 hosts-allocate        starved n/a
 test-amd64-i386-xl-qemuu-win7-amd64  3 hosts-allocate              starved n/a
 test-amd64-i386-xl-pvshim     3 hosts-allocate               starved  n/a

version targeted for testing:
 qemuu                d90f154867ec0ec22fd719164b88716e8fd48672
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  263 days
Failing since        152659  2020-08-21 14:07:39 Z  262 days  479 attempts
Testing same since   161826  2021-05-07 02:33:20 Z    3 days    8 attempts

------------------------------------------------------------
487 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    starved 
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          starved 
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    starved 
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148130 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 10 16:58:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 16:58:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125377.235996 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9Es-0006Iz-Su; Mon, 10 May 2021 16:58:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125377.235996; Mon, 10 May 2021 16:58:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9Es-0006Is-Q6; Mon, 10 May 2021 16:58:38 +0000
Received: by outflank-mailman (input) for mailman id 125377;
 Mon, 10 May 2021 16:58:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lg9Eq-0006Im-VX
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 16:58:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9Eq-0001Yo-MM; Mon, 10 May 2021 16:58:36 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9Eq-0005rF-Ge; Mon, 10 May 2021 16:58:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=NyujeM3T0IpczxTYLEBttv+KFwnO26q5AkcjabW6pbU=; b=OREN/ysARrPiM9eWqtkZsc+N4P
	glMgsmR6O2shmX4eLGAb3JrxmGl++yjkgjw8/qGOXYd6ZMuKd+gJdnCbxwEoH+FeEOQLSYziqrmut
	ZqUTzdEVsyCAMsZy50v1y2+vLBxPGbPkHzYIklqq90bcCfNrJ2fz23Nbnabf/T8UOFRs=;
Subject: Re: Discussion of Xenheap problems on AArch64
To: Henry Wang <Henry.Wang@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <PA4PR08MB6253F49C13ED56811BA5B64E92479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <cdde98ca-4183-c92b-adca-801330992fc5@xen.org>
 <PA4PR08MB62538BBA256E66A0415F0C7192479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <f14aa1d6-35d2-a9a3-0672-7f0d3ae3ec89@xen.org>
 <PA4PR08MB62534C4130B59CAA9A8A8BF792419@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <PA4PR08MB6253FBC7F5E690DB74F2E11F92409@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
 <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ba649865-410b-e1be-39a3-c4cac802f464@xen.org>
Date: Mon, 10 May 2021 17:58:34 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Henry,

On 07/05/2021 05:06, Henry Wang wrote:
>> From: Julien Grall <julien@xen.org>
>> On 28/04/2021 10:28, Henry Wang wrote:
>>> Hi Julien,
>>
>> Hi Henry,
>>
>>>
>>> I've done some test about the patch series in
>>> https://xenbits.xen.org/gitweb/?p=people/julieng/xen-
>> unstable.git;a=shortlog;h=refs/heads/pt/rfc-v2
>>>
>>
>> Thank you for the testing. Some questions below.
>>
>> I am a bit confused with the output with and without my patches. Both of
>> them are showing a data abort in clear_page().
>>
>> Above, you suggested that there is a big gap between the two memory
>> banks. Do the banks still point to actual RAM?
> 
> Sorry again for the very late reply; we had a 5-day public holiday in
> China, and it also took me some time to figure out how to configure the
> FVP (it turned out I had to set -C bp.secure_memory=false to access
> some parts of memory higher than 4G).

No worries. I never tried to tweak the memory layout on the FVP before. 
It is good to know it can be done to properly test memory issues :).

[...]

> When I continued booting Xen, I got the following error log:
> 
> (XEN) CPU:    0
> (XEN) PC:     00000000002b5a5c alloc_boot_pages+0x94/0x98
> (XEN) LR:     00000000002ca3bc
> (XEN) SP:     00000000002ffde0
> (XEN) CPSR:   600003c9 MODE:64-bit EL2h (Hypervisor, handler)
> (XEN)
> (XEN)   VTCR_EL2: 80000000
> (XEN)  VTTBR_EL2: 0000000000000000
> (XEN)
> (XEN)  SCTLR_EL2: 30cd183d
> (XEN)    HCR_EL2: 0000000000000038
> (XEN)  TTBR0_EL2: 000000008413c000
> (XEN)
> (XEN)    ESR_EL2: f2000001
> (XEN)  HPFAR_EL2: 0000000000000000
> (XEN)    FAR_EL2: 0000000000000000
> (XEN)
> (XEN) Xen call trace:
> (XEN)    [<00000000002b5a5c>] alloc_boot_pages+0x94/0x98 (PC)
> (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108 (LR)
> (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
> (XEN)    [<00000000002cb988>] start_xen+0x344/0xbcc
> (XEN)    [<00000000002001c0>] arm64/head.o#primary_switched+0x10/0x30
> (XEN)
> (XEN) ****************************************
> (XEN) Panic on CPU 0:
> (XEN) Xen BUG at page_alloc.c:432
> (XEN) ****************************************

This is happening without my patch series applied, right? If so, what 
happens if you apply it?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 17:02:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 17:02:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125381.236009 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9IO-0007gU-DE; Mon, 10 May 2021 17:02:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125381.236009; Mon, 10 May 2021 17:02:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9IO-0007gN-9u; Mon, 10 May 2021 17:02:16 +0000
Received: by outflank-mailman (input) for mailman id 125381;
 Mon, 10 May 2021 17:02:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lg9IM-0007gH-HU
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 17:02:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9IL-0001fK-BG; Mon, 10 May 2021 17:02:13 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9IL-0006TC-5A; Mon, 10 May 2021 17:02:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=5pPQci01U6lj1DP+RAW/PtTaJRGE5cGiknh2ABOcd5U=; b=jvPAeqIPrOps6YsG6XdDXWYBI+
	0PEYxOw4yjS7iDTdY61wyblaweHJ+N/XkFHCpxTGT7Ed8bp1OSaG/4nO+fLCKbS51FyKELxySVINE
	PGlNuE/NPwFG766FGC7zG61Eb2YobxWq0lWe0ijue0TAjekgEiSWk8TsJAX4wSKEmWYc=;
Subject: Re: [PATCH v3 02/10] arm/domain: Get rid of READ/WRITE_SYSREG32
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-3-michal.orzel@arm.com>
 <fd49021f-c437-fd0c-b3a8-e3a237e633be@xen.org>
 <795c63a5-76fc-0de4-d3be-ac3b9d90fa58@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <da280b09-b089-6acb-8d7b-aee7b4fa2b86@xen.org>
Date: Mon, 10 May 2021 18:02:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <795c63a5-76fc-0de4-d3be-ac3b9d90fa58@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 06/05/2021 07:13, Michal Orzel wrote:
> Hi Julien,

Hi Michal,

> On 05.05.2021 20:03, Julien Grall wrote:
>> Hi Michal,
>>
>> On 05/05/2021 08:43, Michal Orzel wrote:
>>> AArch64 registers are 64bit whereas AArch32 registers
>>> are 32bit or 64bit. MSR/MRS expect 64bit values, thus
>>> we should get rid of the helpers READ/WRITE_SYSREG32
>>> in favour of using READ/WRITE_SYSREG.
>>> We should also use the register_t type when reading sysregs,
>>> which can correspond to uint64_t or uint32_t.
>>> Even though many AArch64 registers have the upper 32bit reserved,
>>> it does not mean that they can't be widened in the future.
>>>
>>> Modify the type of the register cntkctl to register_t.
>>>
>>> Modify accesses to thumbee registers to use READ/WRITE_SYSREG.
>>> Thumbee registers are only usable by a 32bit domain and in fact
>>> should only be accessed on ARMv7 as they were retrospectively dropped
>>> on ARMv8.
>>
>> Sorry for not replying on v2. How about:
>>
>> "
>> Thumbee registers are only usable by a 32-bit domain and therefore we can just store the bottom 32-bit (IOW there is no type change). In fact, this could technically be restricted to Armv7 HW (the support was dropped retrospectively in Armv8) but leave it as-is for now.
>> "
>>
>> If you are happy with it, I will do it on commit.
>>
> I am happy with it. Please ack and change it on commit.

Thanks.

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 17:19:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 17:19:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125389.236020 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9ZB-0000wu-Td; Mon, 10 May 2021 17:19:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125389.236020; Mon, 10 May 2021 17:19:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9ZB-0000wn-Qb; Mon, 10 May 2021 17:19:37 +0000
Received: by outflank-mailman (input) for mailman id 125389;
 Mon, 10 May 2021 17:19:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lg9ZA-0000wh-HU
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 17:19:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9Z1-0001w4-42; Mon, 10 May 2021 17:19:27 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9Z0-0007qf-T7; Mon, 10 May 2021 17:19:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=m5qxCaZp1ejqtUS7kBwp2KDNFCV084AjTlJ/YcrB2p0=; b=r4tLqNK1IqCAE+pl89EULElMn4
	VrZ7xxoMBRRLQJvHBjtWnIQq9xPBzkXBHWF+jOnrgEQ2DwqQ+LTwZQmzpdoThHWhH8OEB5NOXJKBV
	F9QZPIrwpaEenW7oiNzeZeLQej70+iVoNO849kKuKJ84o444XlKAr7ezGdibQvKHEJto=;
Subject: Re: [PATCH v3 00/10] arm64: Get rid of READ/WRITE_SYSREG32
To: Michal Orzel <michal.orzel@arm.com>, xen-devel@lists.xenproject.org
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com
References: <20210505074308.11016-1-michal.orzel@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <9eefdce8-a002-bdef-5de7-4c598b537940@xen.org>
Date: Mon, 10 May 2021 18:19:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210505074308.11016-1-michal.orzel@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Michal,

On 05/05/2021 08:42, Michal Orzel wrote:
> The purpose of this patch series is to remove the usage of the 32-bit
> helper macros READ/WRITE_SYSREG32 on arm64, as they no longer match
> the latest Armv8 specification: the mrs/msr instructions expect
> 64-bit values.
> According to ARM DDI 0487G.a, all the AArch64 system registers are
> 64-bit wide, even though many of them have the upper 32 bits reserved.
> This does not mean that some of these sysregs will not be widened in
> newer versions of Armv8 or in the next architecture.
> Also, when dealing with system registers, we should use the
> register_t type.
> 
> This patch series removes the use of READ/WRITE_SYSREG32
> and replaces these calls with READ/WRITE_SYSREG. The change was
> split into several small patches to make the review process easier.
> 
> This patch series focuses on removing the callers of
> READ/WRITE_SYSREG32.
> The next thing to do is to also get rid of vreg_emulate_sysreg32
> and other parts related to it, like the TVM_REG macro.
> The final part will be to completely remove the READ/WRITE_SYSREG32
> macros themselves.
> 
> Michal Orzel (10):
>    arm64/vfp: Get rid of READ/WRITE_SYSREG32
>    arm/domain: Get rid of READ/WRITE_SYSREG32
>    arm: Modify type of actlr to register_t
>    arm/gic: Remove member hcr of structure gic_v3
>    arm/gic: Get rid of READ/WRITE_SYSREG32
>    arm/p2m: Get rid of READ/WRITE_SYSREG32
>    xen/arm: Always access SCTLR_EL2 using READ/WRITE_SYSREG()
>    arm/page: Get rid of READ/WRITE_SYSREG32
>    arm/time,vtimer: Get rid of READ/WRITE_SYSREG32

I have merged the first 9 patches. It looks like...

>    arm64: Change type of hsr, cpsr, spsr_el1 to uint64_t

... this one needs an answer from you at least.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 17:27:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 17:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125407.236068 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9gg-0002vD-20; Mon, 10 May 2021 17:27:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125407.236068; Mon, 10 May 2021 17:27:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9gf-0002v6-VA; Mon, 10 May 2021 17:27:21 +0000
Received: by outflank-mailman (input) for mailman id 125407;
 Mon, 10 May 2021 17:27:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lg9ge-0002uz-F6
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 17:27:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9gd-00027F-5t; Mon, 10 May 2021 17:27:19 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9gc-0000JB-Vw; Mon, 10 May 2021 17:27:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=kBChSxyk3jihcic0rYJXT6s+MBwS7r746fvzB2xyVvY=; b=YfpoD8VgPad51haxAFsAwLIXkP
	/Ck24h8bzrm7DOrB1otx8dlyYosKNTr2Tn3236gjpuknCKMHASunfravA9T1GIdtGuMQvFk4BbJnr
	Ctve4x2BEy7CW7cCMeESOzPsOJmLWpkyDqsVK1PVcN4bI03pbnTMpu+PZq+mPRDQZ+d8=;
Subject: Re: [PATCH] tools/xenstored: Prevent a buffer overflow in
 dump_state_node_perms()
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210506161223.15984-1-julien@xen.org>
 <f9542104-b645-4d94-5aab-0854e4b54ff0@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ec396bd6-c720-82c9-eceb-5f7ec466610f@xen.org>
Date: Mon, 10 May 2021 18:27:17 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <f9542104-b645-4d94-5aab-0854e4b54ff0@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 10/05/2021 08:49, Juergen Gross wrote:
> On 06.05.21 18:12, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> ASAN reported one issue when Live Updating Xenstored:
>>
>> =================================================================
>> ==873==ERROR: AddressSanitizer: stack-buffer-overflow on address 
>> 0x7ffc194f53e0 at pc 0x555c6b323292 bp 0x7ffc194f5340 sp 0x7ffc194f5338
>> WRITE of size 1 at 0x7ffc194f53e0 thread T0
>>      #0 0x555c6b323291 in dump_state_node_perms 
>> xen/tools/xenstore/xenstored_core.c:2468
>>      #1 0x555c6b32746e in dump_state_special_node 
>> xen/tools/xenstore/xenstored_domain.c:1257
>>      #2 0x555c6b32a702 in dump_state_special_nodes 
>> xen/tools/xenstore/xenstored_domain.c:1273
>>      #3 0x555c6b32ddb3 in lu_dump_state 
>> xen/tools/xenstore/xenstored_control.c:521
>>      #4 0x555c6b32e380 in do_lu_start 
>> xen/tools/xenstore/xenstored_control.c:660
>>      #5 0x555c6b31b461 in call_delayed 
>> xen/tools/xenstore/xenstored_core.c:278
>>      #6 0x555c6b32275e in main xen/tools/xenstore/xenstored_core.c:2357
>>      #7 0x7f95eecf3d09 in __libc_start_main ../csu/libc-start.c:308
>>      #8 0x555c6b3197e9 in _start (/usr/local/sbin/xenstored+0xc7e9)
>>
>> Address 0x7ffc194f53e0 is located in stack of thread T0 at offset 80 
>> in frame
>>      #0 0x555c6b32713e in dump_state_special_node 
>> xen/tools/xenstore/xenstored_domain.c:1232
>>
>>    This frame has 2 object(s):
>>      [32, 40) 'head' (line 1233)
>>      [64, 80) 'sn' (line 1234) <== Memory access at offset 80 
>> overflows this variable
>>
>> This is happening because the callers are passing a pointer to a variable
>> allocated on the stack. However, the field perms is a dynamic array, so
>> Xenstored will end up reading outside of the variable.
>>
>> Rework the code so the permissions are written one by one to the fd.
>>
>> Fixes: ed6eebf17d2c ("tools/xenstore: dump the xenstore state for live 
>> update")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 17:30:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 17:30:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125410.236081 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9jO-0004GC-Fk; Mon, 10 May 2021 17:30:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125410.236081; Mon, 10 May 2021 17:30:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9jO-0004G5-CZ; Mon, 10 May 2021 17:30:10 +0000
Received: by outflank-mailman (input) for mailman id 125410;
 Mon, 10 May 2021 17:30:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lg9jM-0004Fj-C9
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 17:30:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9jK-0002AM-Ev; Mon, 10 May 2021 17:30:06 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9jK-0000Ve-96; Mon, 10 May 2021 17:30:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=yr+ZS+izYvu7Pfvv95PU5vmf3YJm9+MekIOLXwfIbag=; b=I8BsQQmfMKq+DeP+o6LMh5xIWU
	piw2UAPHCUeMF/Vce9HfF45Pu6xqmUrlgTl232K7x3XgGmYdrJWLfQUhzsagmppjJWzJE3wagwTxf
	N+oWLvJX5Z02fPdOtkD9BaYxUiQrBGPF+zB0AoSfDFwQUdHk6RTGVvhPPImYq4zz5H+c=;
Subject: Re: [PATCH] tools/xenstore: Fix indentation in the header of
 xenstored_control.c
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210510150728.6263-1-julien@xen.org>
 <37c4fadc-c177-e07f-1026-748a1caa943d@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <73e2732f-166e-fa9d-5f32-905fde92cc47@xen.org>
Date: Mon, 10 May 2021 18:30:04 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <37c4fadc-c177-e07f-1026-748a1caa943d@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 10/05/2021 16:12, Juergen Gross wrote:
> On 10.05.21 17:07, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Commit e867af081d94 "tools/xenstore: save new binary for live update"
>> seems to have spuriously changed the indentation of the first line of
>> the copyright header.
>>
>> The previous indentation is reinstated so that all the lines are
>> indented the same.
>>
>> Reported-by: Bjoern Doebel <doebel@amazon.com>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 17:36:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 17:36:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125419.236100 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9pW-00058P-9Z; Mon, 10 May 2021 17:36:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125419.236100; Mon, 10 May 2021 17:36:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9pW-00058I-6V; Mon, 10 May 2021 17:36:30 +0000
Received: by outflank-mailman (input) for mailman id 125419;
 Mon, 10 May 2021 17:36:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lg9pV-00058C-GG
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 17:36:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9pU-0002IZ-Tp; Mon, 10 May 2021 17:36:28 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9pU-0001GU-O9; Mon, 10 May 2021 17:36:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=sn6q8LNvcDLHLxZYDenaUjvQR2fRW69gHC7bj3GpUYc=; b=TY4Nj3Ttfri1kGXtsgVZDf8HJF
	VsPcXg/lcu4ofvTOToI16ZXbUUpACYeNEHqwo7GSLrOvE2gldXdTsBF0W1/7LQkm3AB5HUsyG0Sx1
	08h0Cbf45nxCaWft/+ZPs51FSnp4TmqJ+0KPqciUw+HnkAvaBIylBpcMC+ND/Wd5lEYw=;
Subject: Re: [PATCH] optee: enable OPTEE_SMC_SEC_CAP_MEMREF_NULL capability
To: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: "tee-dev@lists.linaro.org" <tee-dev@lists.linaro.org>
References: <20210507013938.676142-1-volodymyr_babchuk@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7d74e29f-08ec-a4fc-db39-4dbb14a8d89b@xen.org>
Date: Mon, 10 May 2021 18:36:27 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210507013938.676142-1-volodymyr_babchuk@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Volodymyr,

On 07/05/2021 02:39, Volodymyr Babchuk wrote:
> The OP-TEE mediator already has support for NULL memory references. It
> was added in patch 0dbed3ad336 ("optee: allow plain TMEM buffers with
> NULL address"). But it does not propagate the
> OPTEE_SMC_SEC_CAP_MEMREF_NULL capability flag to a guest, so a
> well-behaving guest can't use this feature.
> 
> Note: the Linux OP-TEE driver honors this capability flag when handling
> buffers from userspace clients, but ignores it when working with
> internal calls. For instance, the __optee_enumerate_devices() function
> uses a NULL argument to get a buffer size hint from OP-TEE. This was
> the reason why "optee: allow plain TMEM buffers with NULL address" was
> introduced in the first place.
> 
> This patch adds the mentioned capability to the list of known
> capabilities. From the Linux point of view, it means that userspace
> clients can use this feature, which is confirmed by the OP-TEE test
> suite:
> 
> * regression_1025 Test memref NULL and/or 0 bytes size
> o regression_1025.1 Invalid NULL buffer memref registration
>    regression_1025.1 OK
> o regression_1025.2 Input/Output MEMREF Buffer NULL - Size 0 bytes
>    regression_1025.2 OK
> o regression_1025.3 Input MEMREF Buffer NULL - Size non 0 bytes
>    regression_1025.3 OK
> o regression_1025.4 Input MEMREF Buffer NULL over PTA invocation
>    regression_1025.4 OK
>    regression_1025 OK
> 
> Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>

Acked-by: Julien Grall <jgrall@amazon.com>

And committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 17:43:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 17:43:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125424.236112 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9w0-0006aU-0R; Mon, 10 May 2021 17:43:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125424.236112; Mon, 10 May 2021 17:43:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lg9vz-0006aN-Tj; Mon, 10 May 2021 17:43:11 +0000
Received: by outflank-mailman (input) for mailman id 125424;
 Mon, 10 May 2021 17:43:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lg9vy-0006aH-Ua
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 17:43:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9vx-0002Ok-OJ; Mon, 10 May 2021 17:43:09 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lg9vx-0001w3-HG; Mon, 10 May 2021 17:43:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=DpsR2dSQkKRVnlUx9iRVUmS8p5eDXvOcK08F/AJ/4Ug=; b=EELlmyci7XtaBoDDg2UchqtO07
	2HunXuh+3QZ0F1KOME2Q+Eqhm46xFJd5kHatGOKEdUlec4F5B8escXXRQZtVr6cfJFTF5u6zdcRoo
	EHKgZW9tu/qBkSVkmQtiDGwBx0mCmT9jCQcThiDrU2RltIhRy21Mjd5SMDtw4uJ5JqNQ=;
Subject: Re: Ping: [PATCH] build: centralize / unify asm-offsets generation
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <fa340caa-4ee1-a302-fbf1-1df400493d6b@suse.com>
 <YH7zXpPz03+kLzEr@Air-de-Roger>
 <e9de883b-604b-1193-b748-5a59795a9f31@suse.com>
 <YH7/SvkrB2yGgRij@Air-de-Roger>
 <5170aa51-8e34-3a45-5bf6-c0a187b1c427@suse.com>
 <YIfTyT4rvD9yEqiM@Air-de-Roger>
 <5018479d-b566-a32b-9a01-5ccf01fe0880@suse.com>
 <YIgSNRq2w7KSSaiD@Air-de-Roger>
 <e9a7b3c5-db38-76d8-64ec-2cfd9f46f1fd@suse.com>
 <YIgb/Tz/ic6uVXWo@Air-de-Roger>
 <8c34d016-47fb-eb6a-2be5-9497094effa7@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <48b9a32b-acd3-d086-6ec9-18ee9ae3686d@xen.org>
Date: Mon, 10 May 2021 18:43:07 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <8c34d016-47fb-eb6a-2be5-9497094effa7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

Sorry for the late answer.

On 29/04/2021 10:18, Jan Beulich wrote:
> On 27.04.2021 16:13, Roger Pau Monné wrote:
>> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> 
> Thanks Roger.
> 
> Julien, Stefano, may I ask for an Arm side ack (or otherwise) here
> as well?

Acked-by: Julien Grall <jgrall@amazon.com>

I will let you commit the patch because I wasn't sure whether some
tweaks were required based on the conversation between Roger and you.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 17:47:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 17:47:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125430.236129 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgA03-0007IF-IC; Mon, 10 May 2021 17:47:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125430.236129; Mon, 10 May 2021 17:47:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgA03-0007I8-F1; Mon, 10 May 2021 17:47:23 +0000
Received: by outflank-mailman (input) for mailman id 125430;
 Mon, 10 May 2021 17:47:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgA01-0007I2-VJ
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 17:47:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgA00-0002WM-Sf; Mon, 10 May 2021 17:47:20 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgA00-0002PX-Mq; Mon, 10 May 2021 17:47:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=7E/rxIi6XybEPSN9wMCI4KK/XpYE2gkOkR1B+tNb2dg=; b=mOFJaiRomKV4RBjz/qd2nYVDdc
	5tqQ5OCfnlBtb9BwogszgaEr0IWtQMxPmjp7FjbE7ZGlAZtcCMxSJmT1/XgN8jrYR5FlBIpd5HfYf
	B6KPKiyS9wNKy2TW3HeHdBJjqwIhxB4aEoRAoA4DoP5zDdN7XPWG306VgDaDGHrOWjzU=;
Subject: PING Re: [PATCH v2] xen/arm: kernel: Propagate the error if we fail
 to decompress the kernel
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: bertrand.marquis@arm.com, Julien Grall <jgrall@amazon.com>,
 Michal Orzel <michal.orzel@arm.com>
References: <20210406191554.12012-1-julien@xen.org>
 <591603c5-ebcb-e9d6-74a0-bed921458a69@arm.com>
 <6c8133a5-dc78-083b-0cec-69860e46daf7@xen.org>
Message-ID: <91ab70b6-3a02-22ab-0a92-3ac3943c828c@xen.org>
Date: Mon, 10 May 2021 18:47:19 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <6c8133a5-dc78-083b-0cec-69860e46daf7@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 18/04/2021 19:26, Julien Grall wrote:
> On 12/04/2021 07:45, Michal Orzel wrote:
>> On 06.04.2021 21:15, Julien Grall wrote:
>>> From: Julien Grall <jgrall@amazon.com>
>>>
>>> Currently, we are ignoring any error from perform_gunzip() and replacing
>>> the compressed kernel with the "uncompressed" kernel.
>>>
>>> If there is a gzip failure, then it means that the output buffer may
>>> contain garbage. This can result in various sorts of behavior that may
>>> be difficult to root cause.
>>>
>>> In case of failure, free the output buffer and propagate the error.
>>> We also need to adjust the return check for kernel_compress() as
>>> perform_gunzip() may return a positive value.
>>>
>>> Take the opportunity to adjust the code style for the check.
>>>
>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> ---
>>>      Changes in v2:
>>>          - Fix build
>>> ---
>>
>> Reviewed-by: Michal Orzel <michal.orzel@arm.com>
> 
> Thanks! @Stefano, can I get your acked-by?

Ping? I intend to commit it on Wednesday unless I hear otherwise.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 17:49:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 17:49:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125433.236141 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgA1o-0007vS-St; Mon, 10 May 2021 17:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125433.236141; Mon, 10 May 2021 17:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgA1o-0007vL-Pu; Mon, 10 May 2021 17:49:12 +0000
Received: by outflank-mailman (input) for mailman id 125433;
 Mon, 10 May 2021 17:49:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgA1m-0007vD-WA
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 17:49:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgA1f-0002Xr-T4; Mon, 10 May 2021 17:49:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgA1f-0002Vw-MJ; Mon, 10 May 2021 17:49:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=E2udCdJwYYDlQ9mifrWd88GpfICPpwhmU2H5E9V1TkA=; b=YpBZg09oDlqn+2b+xjeHWn3GFd
	D38oEx/qje9TsZDhxowC/BND0wZCrIe6Zo0ZNTDs6sbT4rhX2Lww+NO0EJV+LOoWdqEm5jque/wF+
	akN7bdCzFqtW1uucx2c6aGpMeWzZ2980v+gAWg7E2lDQg9K9R7Hj4Lyv/yd3HT8XF8oQ=;
Subject: PING Re: [PATCH 00/14] Use const whether we point to literal strings
 (take 1)
To: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Dario Faggioli <dfaggioli@suse.com>, Tim Deegan <tim@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210405155713.29754-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <05eaa910-7630-d1e4-4b70-3008d672fe5c@xen.org>
Date: Mon, 10 May 2021 18:49:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210405155713.29754-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

Ian, Wei, Anthony, can I get some feedback on the tools side?

Cheers,

On 05/04/2021 16:56, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> By default, both Clang and GCC will happily compile C code where a
> non-const char * points to a literal string. This means the following
> code will be accepted:
> 
>      char *str = "test";
> 
>      str[0] = 'a';
> 
> Literal strings reside in rodata, so they are not modifiable.
> This will result in a permission fault at runtime if the permissions
> are enforced in the page-tables (this is the case in Xen).
> 
> I am not aware of code trying to modify literal strings in Xen.
> However, there is a frequent use of non-const char * to point to
> literal strings. Given the size of the codebase, there is a risk
> of involuntarily introducing code that will modify literal strings.
> 
> Therefore it would be better to enforce using const when pointing
> to such strings. Both GCC and Clang provide an option to warn in
> such cases (see -Wwrite-strings), which could therefore be used
> by Xen.
> 
> This series doesn't yet make use of -Wwrite-strings because
> the tree is not fully converted. Instead, it contains some easy
> and likely non-controversial uses of const in the code.
> 
> The major blockers to enabling -Wwrite-strings are the following:
>      - xen/common/efi: union string is used in both const and
>      non-const situations. It doesn't feel right to make one member
>      const and the other non-const.
>      - libxl: the main blocker is the flexarray framework, as we would
>      use it with strings (now const char *). I thought it would be
>      possible to make the interface const, but it looks like there are
>      a couple of places where we need to modify the content (such as
>      in libxl_json.c).
> 
> Ideally, I would like to have -Wwrite-strings used unconditionally
> tree-wide. However, some of the areas may require heavy refactoring.
> 
> One solution would be to enable it tree-wide but turn it off at a
> directory/file level.
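One way this could look (a hypothetical build fragment, not actual Xen build code; with both GCC and Clang, the last matching -W/-Wno- option on the command line wins):

```make
# Tree-wide default, e.g. in the top-level flags:
CFLAGS += -Wwrite-strings

# In a not-yet-converted directory's Makefile, opt back out
# until that directory has been cleaned up:
CFLAGS += -Wno-write-strings
```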
> 
> Any opinions?
> 
> Cheers,
> 
> Julien Grall (14):
>    xen: Constify the second parameter of rangeset_new()
>    xen/sched: Constify name and opt_name in struct scheduler
>    xen/x86: shadow: The return type of sh_audit_flags() should be const
>    xen/char: console: Use const whenever we point to literal strings
>    tools/libs: guest: Use const whenever we point to literal strings
>    tools/libs: stat: Use const whenever we point to literal strings
>    tools/xl: Use const whenever we point to literal strings
>    tools/firmware: hvmloader: Use const in __bug() and __assert_failed()
>    tools/console: Use const whenever we point to literal strings
>    tools/kdd: Use const whenever we point to literal strings
>    tools/misc: Use const whenever we point to literal strings
>    tools/top: The string parameter in set_prompt() and set_delay() should
>      be const
>    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
>    tools/xentrace: Use const whenever we point to literal strings
> 
>   tools/console/client/main.c         |  4 +-
>   tools/console/daemon/io.c           | 10 ++--
>   tools/debugger/kdd/kdd.c            | 10 ++--
>   tools/firmware/hvmloader/util.c     |  4 +-
>   tools/firmware/hvmloader/util.h     |  4 +-
>   tools/include/xenguest.h            | 10 ++--
>   tools/libs/guest/xg_dom_core.c      |  8 ++--
>   tools/libs/guest/xg_dom_elfloader.c |  4 +-
>   tools/libs/guest/xg_dom_hvmloader.c |  2 +-
>   tools/libs/guest/xg_dom_x86.c       |  9 ++--
>   tools/libs/guest/xg_private.h       |  2 +-
>   tools/libs/stat/xenstat_linux.c     |  4 +-
>   tools/libs/stat/xenstat_qmp.c       | 12 ++---
>   tools/misc/xen-detect.c             |  2 +-
>   tools/misc/xenhypfs.c               |  6 +--
>   tools/xenmon/xenbaked.c             |  2 +-
>   tools/xentop/xentop.c               | 12 ++---
>   tools/xentrace/xenalyze.c           | 71 +++++++++++++++--------------
>   tools/xentrace/xenctx.c             |  4 +-
>   tools/xl/xl.h                       |  8 ++--
>   tools/xl/xl_console.c               |  2 +-
>   tools/xl/xl_utils.c                 |  4 +-
>   tools/xl/xl_utils.h                 |  4 +-
>   xen/arch/x86/mm/shadow/multi.c      | 12 ++---
>   xen/common/rangeset.c               |  2 +-
>   xen/common/sched/private.h          |  4 +-
>   xen/drivers/char/console.c          |  4 +-
>   xen/include/xen/rangeset.h          |  2 +-
>   28 files changed, 113 insertions(+), 109 deletions(-)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 18:15:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 18:15:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125461.236171 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgARA-0003uo-En; Mon, 10 May 2021 18:15:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125461.236171; Mon, 10 May 2021 18:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgARA-0003uh-Aa; Mon, 10 May 2021 18:15:24 +0000
Received: by outflank-mailman (input) for mailman id 125461;
 Mon, 10 May 2021 18:15:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgAR9-0003ub-Dg
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 18:15:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgAR5-000343-IE; Mon, 10 May 2021 18:15:19 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgAR5-00056p-BN; Mon, 10 May 2021 18:15:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=U5oB8mfFZLEuy5Yg/E0zgF4iSW40BPLFycfVFKKLR18=; b=bOJV5SmQ0O2K2VSbIDK/2BxZ2Q
	hA+OjDCuSuM4X73YxfSLnzK4PkvUYT13yWt3F9Kp4L/luL4z544C+SikxjoGEzsAhlfyw5WbInzbj
	BoLURVvENjwaKuMRzgEPeOb3k9kixOubpH6Ut322sIfARM84GOZ8pWvwuCQcPyNHUdZw=;
Subject: Re: Regression when booting 5.15 as dom0 on arm64 (WAS: Re:
 [linux-linus test] 161829: regressions - FAIL)
To: Christoph Hellwig <hch@lst.de>
Cc: f.fainelli@gmail.com, Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org,
 osstest service owner <osstest-admin@xenproject.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 iommu@lists.linux-foundation.org
References: <osstest-161829-mainreport@xen.org>
 <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org> <20210510084057.GA933@lst.de>
From: Julien Grall <julien@xen.org>
Message-ID: <8b851596-acf7-9d3b-b08a-848cae5adada@xen.org>
Date: Mon, 10 May 2021 19:15:17 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210510084057.GA933@lst.de>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Christoph,

On 10/05/2021 09:40, Christoph Hellwig wrote:
> On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
>> The dereferenced pointer seems to suggest that the swiotlb hasn't been
>> allocated. From what I can tell, this may be because swiotlb_force is set
>> to SWIOTLB_NO_FORCE even though we will still enable the swiotlb when
>> running on top of Xen.
>>
>> I am not entirely sure what would be the correct fix. Any opinions?
> 
> Can you try something like the patch below (not even compile tested, but
> the intent should be obvious)?
> 
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 16a2b2b1c54d..7671bc153fb1 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -44,6 +44,8 @@
>   #include <asm/tlb.h>
>   #include <asm/alternative.h>
>   
> +#include <xen/arm/swiotlb-xen.h>
> +
>   /*
>    * We need to be able to catch inadvertent references to memstart_addr
>    * that occur (potentially in generic code) before arm64_memblock_init()
> @@ -482,7 +484,7 @@ void __init mem_init(void)
>   	if (swiotlb_force == SWIOTLB_FORCE ||
>   	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
>   		swiotlb_init(1);
> -	else
> +	else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
>   		swiotlb_force = SWIOTLB_NO_FORCE;
>   
>   	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);

I have applied the patch on top of 5.13-rc1 and can confirm I am able to 
boot dom0. Are you going to submit the patch?

Thank you for your help!

Best regards,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 10 18:32:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 18:32:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125475.236183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgAhM-0006I4-0U; Mon, 10 May 2021 18:32:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125475.236183; Mon, 10 May 2021 18:32:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgAhL-0006Hx-Tg; Mon, 10 May 2021 18:32:07 +0000
Received: by outflank-mailman (input) for mailman id 125475;
 Mon, 10 May 2021 18:32:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ldtt=KF=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lgAhJ-0006Hq-V6
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 18:32:06 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 193d5bb8-7c7a-4c68-8849-f209ec7f1817;
 Mon, 10 May 2021 18:32:04 +0000 (UTC)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id A4CF95C0059;
 Mon, 10 May 2021 14:32:04 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Mon, 10 May 2021 14:32:04 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Mon, 10 May 2021 14:32:03 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 193d5bb8-7c7a-4c68-8849-f209ec7f1817
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=aOuFH0
	WYG7OLoDZYT2WsE8sb6UDPB+uWwy6WV5ejnUY=; b=r9of+koFcZeIECKV/m5Az/
	20yNDdgPap1GQd4u/RysICY2u90EGJ6mone6051he+xtDt8+aqmE3+Rs1KgBiXn7
	GZXdSJp5hhJc4O949gzcgzRmOr5Chjn8LrQPzWnILSjMniD5LcP7jZ2edR3XtrPa
	9A2LC7AwNgaBJMOe73XyjmMFgBxg/uaexaCx6xlXoKUsslm1qCZbhjfKUN6aIZQe
	/I3yamW35qCGPukto1NS9oDAcjHYjAKCGpfF4NlFrmHK/OU9oMQ5xOZu17+UR2P+
	XTMpShP3TVQWmxLwYH5KIuO8MYONqoxNtX29GE+Ifs6J1LQD9v8DzG6ofkbcI4Kg
	==
X-ME-Sender: <xms:JHyZYOF9oGa0R_3xdkj2MVUZYMluNIX-veUCo0jp3RM-SfFTR2vp7Q>
    <xme:JHyZYPXaz1S1RFQLr_PtGrLZco_VN7aO-I4HudOKv6c5ZsciXWgoVhAl3oBL9vhGa
    0D5issB3WI1dQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdegkedguddviecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteev
    ffeigffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppe
    eluddrieejrdejledrgeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
    ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:JHyZYIKwdCLclEikCJ7gw3h59PVxkuUlQtcWrInR2jytX0M2O-jItg>
    <xmx:JHyZYIFf_zqu7J_e-F3dhpsuRZpZY88nIsiDNspK1vZeYwt7ic_IYw>
    <xmx:JHyZYEVUubPhukPrWMNharCLpRK8XRZe2F4kVLfOvLZAMWHOLXQdAA>
    <xmx:JHyZYBjF_gsDAC1jUet5rzoBhFBHjqhXjwfsbakPs-dXPgAvnjKFxQ>
Date: Mon, 10 May 2021 20:32:00 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Michael Brown <mbrown@fensystems.co.uk>
Cc: paul@xen.org, xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	wei.liu@kernel.org, pdurrant@amazon.com
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YJl8IC7EbXKpARWL@mail-itl>
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="4nndLI8LmWn0yaOQ"
Content-Disposition: inline
In-Reply-To: <20210413152512.903750-1-mbrown@fensystems.co.uk>


--4nndLI8LmWn0yaOQ
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 10 May 2021 20:32:00 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Michael Brown <mbrown@fensystems.co.uk>
Cc: paul@xen.org, xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	wei.liu@kernel.org, pdurrant@amazon.com
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching

On Tue, Apr 13, 2021 at 04:25:12PM +0100, Michael Brown wrote:
> The logic in connect() is currently written with the assumption that
> xenbus_watch_pathfmt() will return an error for a node that does not
> exist.  This assumption is incorrect: xenstore does allow a watch to
> be registered for a nonexistent node (and will send notifications
> should the node be subsequently created).
>
> As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it
> has served its purpose"), this leads to a failure when a domU
> transitions into XenbusStateConnected more than once.  On the first
> domU transition into Connected state, the "hotplug-status" node will
> be deleted by the hotplug_status_changed() callback in dom0.  On the
> second or subsequent domU transition into Connected state, the
> hotplug_status_changed() callback will therefore never be invoked, and
> so the backend will remain stuck in InitWait.
>
> This failure prevents scenarios such as reloading the xen-netfront
> module within a domU, or booting a domU via iPXE.  There is
> unfortunately no way for the domU to work around this dom0 bug.
>
> Fix by explicitly checking for existence of the "hotplug-status" node,
> thereby creating the behaviour that was previously assumed to exist.

This change is wrong. The 'hotplug-status' node is created _only_ by the
hotplug script, and only when it is executed. When the kernel waits for
the hotplug script to be executed, it waits for the node to _appear_, not
to _change_. So this change basically makes the kernel not wait for the
hotplug script at all.

Furthermore, there is an additional side effect: in the case of a driver
domain, 'xl devd' may be started after the backend node is created (this
may happen if you start the frontend domain in parallel with the
backend). In that case, 'xl devd' will see the vif backend already in the
XenbusStateConnected state and will not execute the hotplug script at
all.

I think the proper fix is to re-register the watch when necessary,
instead of not registering it at all.

> Signed-off-by: Michael Brown <mbrown@fensystems.co.uk>
> ---
>  drivers/net/xen-netback/xenbus.c | 12 ++++++++----
>  1 file changed, 8 insertions(+), 4 deletions(-)
>
> diff --git a/drivers/net/xen-netback/xenbus.c b/drivers/net/xen-netback/xenbus.c
> index a5439c130130..d24b7a7993aa 100644
> --- a/drivers/net/xen-netback/xenbus.c
> +++ b/drivers/net/xen-netback/xenbus.c
> @@ -824,11 +824,15 @@ static void connect(struct backend_info *be)
>  	xenvif_carrier_on(be->vif);
> 
>  	unregister_hotplug_status_watch(be);
> -	err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch, NULL,
> -				   hotplug_status_changed,
> -				   "%s/%s", dev->nodename, "hotplug-status");
> -	if (!err)
> +	if (xenbus_exists(XBT_NIL, dev->nodename, "hotplug-status")) {
> +		err = xenbus_watch_pathfmt(dev, &be->hotplug_status_watch,
> +					   NULL, hotplug_status_changed,
> +					   "%s/%s", dev->nodename,
> +					   "hotplug-status");
> +		if (err)
> +			goto err;
>  		be->have_hotplug_status_watch = 1;
> +	}
> 
>  	netif_tx_wake_all_queues(be->vif->dev);
> 

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--4nndLI8LmWn0yaOQ
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCZfCAACgkQ24/THMrX
1yxSwQf/YTAGMutixak92lIstM43bfVn4H1cSkIJ6TWKGxS87n4SOfpSP8Az4BDY
TX0DYJt2Ud8zbNJtxo0NKTV7ITjyvXtmMRwqrEPo77N1PMD+Uz4/v9GWgcIJEWkI
FBypIxY/YIY/PswzoKeRDFvTOLg0hgdrtmWkrHI1qTofG7OERWYvLSqj2/5emVni
GuYgL8TrzyFrWZ30F0HR8scAYxONQAnBokhzqBN6Zf7wQukNOcHsx19PCUf7v2f5
jVE4GwOKda/+pAbpHlLZBNCiA4LK8fo74vwzqG1Xd1hFsRcEno/zKmPEKipIC9SC
wEY/Jy1iM5z/l80fEdGO4zic9Jl/Ew==
=Vf0i
-----END PGP SIGNATURE-----

--4nndLI8LmWn0yaOQ--


From xen-devel-bounces@lists.xenproject.org Mon May 10 18:47:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 18:47:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125480.236194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgAvr-0007yf-B1; Mon, 10 May 2021 18:47:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125480.236194; Mon, 10 May 2021 18:47:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgAvr-0007yY-83; Mon, 10 May 2021 18:47:07 +0000
Received: by outflank-mailman (input) for mailman id 125480;
 Mon, 10 May 2021 18:47:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fIyG=KF=fensystems.co.uk=mbrown@srs-us1.protection.inumbo.net>)
 id 1lgAvq-0007yS-4R
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 18:47:06 +0000
Received: from blyat.fensystems.co.uk (unknown [54.246.183.96])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d65bd89e-cb0d-4530-afa7-2193a1ccdd30;
 Mon, 10 May 2021 18:47:04 +0000 (UTC)
Received: from dolphin.home (unknown
 [IPv6:2a00:23c6:5495:5e00:72b3:d5ff:feb1:e101])
 by blyat.fensystems.co.uk (Postfix) with ESMTPSA id 28319442C3;
 Mon, 10 May 2021 18:47:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d65bd89e-cb0d-4530-afa7-2193a1ccdd30
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: paul@xen.org, xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 wei.liu@kernel.org, pdurrant@amazon.com
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk> <YJl8IC7EbXKpARWL@mail-itl>
From: Michael Brown <mbrown@fensystems.co.uk>
Message-ID: <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
Date: Mon, 10 May 2021 19:47:01 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YJl8IC7EbXKpARWL@mail-itl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham autolearn_force=no version=3.4.2
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on
	blyat.fensystems.co.uk

On 10/05/2021 19:32, Marek Marczykowski-Górecki wrote:
> On Tue, Apr 13, 2021 at 04:25:12PM +0100, Michael Brown wrote:
>> The logic in connect() is currently written with the assumption that
>> xenbus_watch_pathfmt() will return an error for a node that does not
>> exist.  This assumption is incorrect: xenstore does allow a watch to
>> be registered for a nonexistent node (and will send notifications
>> should the node be subsequently created).
>>
>> As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it
>> has served its purpose"), this leads to a failure when a domU
>> transitions into XenbusStateConnected more than once.  On the first
>> domU transition into Connected state, the "hotplug-status" node will
>> be deleted by the hotplug_status_changed() callback in dom0.  On the
>> second or subsequent domU transition into Connected state, the
>> hotplug_status_changed() callback will therefore never be invoked, and
>> so the backend will remain stuck in InitWait.
>>
>> This failure prevents scenarios such as reloading the xen-netfront
>> module within a domU, or booting a domU via iPXE.  There is
>> unfortunately no way for the domU to work around this dom0 bug.
>>
>> Fix by explicitly checking for existence of the "hotplug-status" node,
>> thereby creating the behaviour that was previously assumed to exist.
> 
> This change is wrong. The 'hotplug-status' node is created _only_ by the
> hotplug script, and only when it is executed. When the kernel waits for
> the hotplug script to be executed, it waits for the node to _appear_, not
> to _change_. So this change basically makes the kernel not wait for the
> hotplug script at all.

That doesn't sound plausible to me.  In the setup you describe, how is 
the kernel expected to differentiate between "the hotplug script has not 
yet created the node" and "the hotplug script does not exist and will 
therefore never create any node"?

Michael


From xen-devel-bounces@lists.xenproject.org Mon May 10 18:50:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 18:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125483.236206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgAz7-0000uP-QB; Mon, 10 May 2021 18:50:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125483.236206; Mon, 10 May 2021 18:50:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgAz7-0000uI-NQ; Mon, 10 May 2021 18:50:29 +0000
Received: by outflank-mailman (input) for mailman id 125483;
 Mon, 10 May 2021 18:50:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgAz6-0000u8-Mn; Mon, 10 May 2021 18:50:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgAz6-0003d0-JP; Mon, 10 May 2021 18:50:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgAz6-0000AP-AR; Mon, 10 May 2021 18:50:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgAz6-0006m3-9x; Mon, 10 May 2021 18:50:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=BoVZeAziLdk3Aa37C4721Xla2E0CTCrcpSob7o9UWqA=; b=asUl0a1WRxNZSZhpI4MjgAU+n3
	PfmzxgbeY9F5RtMt9Nq6mV35yeEtj8B3+iCbVkQsV+LvtJ/rsmZJwgR4ETpIM+TwEm9EL7jiaKCVr
	AUPlfcz6UAQ6Pc2nJxPLrBedZUOXEy6eBN/URkWzqRg63rxi5E7Px7dFsYqBqAVLA+QY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161893-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161893: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
X-Osstest-Versions-That:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 18:50:28 +0000

flight 161893 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161893/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
baseline version:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7

Last test of basis   161841  2021-05-07 19:04:15 Z    2 days
Testing same since   161893  2021-05-10 15:02:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a7da84c457..982c89ed52  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 10 18:53:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 18:53:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125490.236222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgB1i-0001ZO-8E; Mon, 10 May 2021 18:53:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125490.236222; Mon, 10 May 2021 18:53:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgB1i-0001ZH-42; Mon, 10 May 2021 18:53:10 +0000
Received: by outflank-mailman (input) for mailman id 125490;
 Mon, 10 May 2021 18:53:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ldtt=KF=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lgB1g-0001ZA-Jv
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 18:53:08 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 68633a19-c997-416b-8f02-c447f3e8397e;
 Mon, 10 May 2021 18:53:07 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailout.nyi.internal (Postfix) with ESMTP id B4F9C5C0120;
 Mon, 10 May 2021 14:53:07 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute3.internal (MEProxy); Mon, 10 May 2021 14:53:07 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Mon, 10 May 2021 14:53:05 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68633a19-c997-416b-8f02-c447f3e8397e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=r0qj5R
	lCwePmFyFchT4bxLEmTY2iUQspFuobg+IjXRc=; b=kHo59t6qMXxJly41WkoKt+
	OuUUo314DEbsAH00qyCNCIBpSsv3virlTkCnwMBrRoz6YEI7N3Vv52oP8mtYLUME
	+D8u9wVFYxw34Jm029nYXFaNrZ1zL/Oz0Fmz41eS8R7bieNW53yOPHM5qIJJk9Ln
	L+HaHTc7uVurlgtMq52cr797qNwI7CbrtmmjsL6GwhomfewlzGlAZO2+f8lwZjyD
	368E7WrTy1JXsP+KgWxWKJACF4aeTsxVVAwadjajMLqmtakMNUcvvHpmuRLcAXF+
	rqUVDndmITsBy2dJwDYtgpeHFekU9FSuW8bdD4ivQqMXtOeIGE9IgJ4WTY1S3qcA
	==
X-ME-Sender: <xms:E4GZYF2IeAYOcb0yPnaTtXwJuSqnxVKL00H5uqXw5BCKF7MYh3oWbA>
    <xme:E4GZYMG5h6_VbR5DrgpqYFM2DR4ev4XaE60AsyCQhRK6vBc8OU_FST61P2Jv_LMAF
    60eii3f0a4yiQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdegkedgudefudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteev
    ffeigffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppe
    eluddrieejrdejledrgeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
    ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:E4GZYF7V-UCSSX1yP42_p-x8l0skhQ5C8VCQjAIpCXINix7qA1pPOA>
    <xmx:E4GZYC25nZurqaMgZ5CqjXh0Z_w4d0kGClKYVm7WmbkRna_lj3EcZA>
    <xmx:E4GZYIEKRu5nGOuEcCeN1MsQ70GC4dD1ioPhhh3Ds6uGap84Tlyq9w>
    <xmx:E4GZYNSG_ZdjzD__G0cgrCn154c6-c7nEznN_s1iYmaJsCulQwBJwg>
Date: Mon, 10 May 2021 20:53:02 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Michael Brown <mbrown@fensystems.co.uk>
Cc: paul@xen.org, xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	wei.liu@kernel.org, pdurrant@amazon.com
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YJmBDpqQ12ZBGf58@mail-itl>
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk>
 <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="+ZJZ8cOzaoxgT+l8"
Content-Disposition: inline
In-Reply-To: <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>


--+ZJZ8cOzaoxgT+l8
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 10 May 2021 20:53:02 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Michael Brown <mbrown@fensystems.co.uk>
Cc: paul@xen.org, xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	wei.liu@kernel.org, pdurrant@amazon.com
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching

On Mon, May 10, 2021 at 07:47:01PM +0100, Michael Brown wrote:
> On 10/05/2021 19:32, Marek Marczykowski-Górecki wrote:
> > On Tue, Apr 13, 2021 at 04:25:12PM +0100, Michael Brown wrote:
> > > The logic in connect() is currently written with the assumption that
> > > xenbus_watch_pathfmt() will return an error for a node that does not
> > > exist.  This assumption is incorrect: xenstore does allow a watch to
> > > be registered for a nonexistent node (and will send notifications
> > > should the node be subsequently created).
> > >
> > > As of commit 1f2565780 ("xen-netback: remove 'hotplug-status' once it
> > > has served its purpose"), this leads to a failure when a domU
> > > transitions into XenbusStateConnected more than once.  On the first
> > > domU transition into Connected state, the "hotplug-status" node will
> > > be deleted by the hotplug_status_changed() callback in dom0.  On the
> > > second or subsequent domU transition into Connected state, the
> > > hotplug_status_changed() callback will therefore never be invoked, and
> > > so the backend will remain stuck in InitWait.
> > >
> > > This failure prevents scenarios such as reloading the xen-netfront
> > > module within a domU, or booting a domU via iPXE.  There is
> > > unfortunately no way for the domU to work around this dom0 bug.
> > >
> > > Fix by explicitly checking for existence of the "hotplug-status" node,
> > > thereby creating the behaviour that was previously assumed to exist.
> >
> > This change is wrong. The 'hotplug-status' node is created _only_ by a
> > hotplug script, and only when that script is executed. When the kernel
> > waits for the hotplug script to run, it waits for the node to _appear_,
> > not to _change_. So this change basically made the kernel not wait for
> > the hotplug script at all.
>
> That doesn't sound plausible to me.  In the setup as you describe, how is
> the kernel expected to differentiate between "hotplug script has not yet
> created the node" and "hotplug script does not exist and will therefore
> never create any node"?

Is the latter valid at all? From what I can see in the toolstack, it
always sets some hotplug script (if not specified explicitly, then
"vif-bridge").
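
For reference, the watch semantics in question can be sketched with a toy
in-memory model (Python; purely illustrative, not the real xenstore/xenbus
API - the point is that registering a watch on a nonexistent node succeeds
and the watch only fires once the node appears):

```python
class ToyXenstore:
    """Toy model of xenstore watch behaviour; paths are illustrative."""

    def __init__(self):
        self.nodes = {}    # path -> value
        self.watches = {}  # path -> list of callbacks

    def watch(self, path, callback):
        # Unlike what connect() assumed, this succeeds even if
        # 'path' does not exist yet.
        self.watches.setdefault(path, []).append(callback)

    def write(self, path, value):
        # Creating (or updating) the node fires any registered watches.
        self.nodes[path] = value
        for cb in self.watches.get(path, []):
            cb(path, value)


events = []
store = ToyXenstore()
store.watch("backend/vif/1/0/hotplug-status", lambda p, v: events.append(v))
assert events == []            # node absent: watch registered, nothing fires
store.write("backend/vif/1/0/hotplug-status", "connected")
assert events == ["connected"] # node created: the watch fires
```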

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--+ZJZ8cOzaoxgT+l8
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCZgQ4ACgkQ24/THMrX
1yy/ggf/YP/1zFEZRHr/CVh4dFf2WG3tcdu6+PZfVx2t2vgYGy8PTvocREeIBpUE
8Cxa/uvAOF89qo1yLAJq+wlHvvUm+CQCxksO4dGjmz0OLNowekQz93RSyI/W9FYx
I9Mxa5b0ga4X0kFU4Nk7jwZ+KkLfUV248xh7LrpCZ8CyWvWyvH4OIwlfY6nWIVLc
VhcYD4bzibrjyAr7waDSmtvomUHvWc69Xmvj0drHFnrC6dj7p9PRJdgqCCO6u2uf
GKVqEFUWTl4WAGTjb69IZLnRdT4DIlILe5gQOQ+sOsfJktBWsQtQQ3iL/WOVx0Mk
rANHLc/h7UaLAcc88+YCKyGByRiPLA==
=6AQI
-----END PGP SIGNATURE-----

--+ZJZ8cOzaoxgT+l8--


From xen-devel-bounces@lists.xenproject.org Mon May 10 19:04:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 19:04:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125496.236234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgBCJ-0003D5-8T; Mon, 10 May 2021 19:04:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125496.236234; Mon, 10 May 2021 19:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgBCJ-0003Cy-4p; Mon, 10 May 2021 19:04:07 +0000
Received: by outflank-mailman (input) for mailman id 125496;
 Mon, 10 May 2021 19:04:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nDnD=KF=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1lgBCH-0003BL-BX
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 19:04:05 +0000
Received: from mail-pf1-x42f.google.com (unknown [2607:f8b0:4864:20::42f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3ad01582-94e5-46c3-bad9-720e96ba5b3c;
 Mon, 10 May 2021 19:04:04 +0000 (UTC)
Received: by mail-pf1-x42f.google.com with SMTP id v191so14258452pfc.8
 for <xen-devel@lists.xenproject.org>; Mon, 10 May 2021 12:04:04 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id p19sm11552318pff.206.2021.05.10.12.04.02
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 10 May 2021 12:04:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3ad01582-94e5-46c3-bad9-720e96ba5b3c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=WQ3UlPMi8iO1JweMxxShHAfYCCuhz0sN8zAS6TXgeZ8=;
        b=vd4BMadKXX2VYgCwHP2rz/WhOyv8UV+3Nk1yGWi1MWwBuhF4Mb4GRq5sxXn4AEuYdL
         dLlUqxkd1lWXDByZ2We6cegOkUUHrhOH5WFhT/RnRyS9O9Ma2v674hwJ2hbrWTiO0Mjw
         ny/SeUMfYOLdb3OSL9sDDLLyfsgNEU2inDnK2LmM6HmtSLKhrxmST9EVM8PtIP57ze7e
         0gFip1GIGoXL9gpDsnDbeyh7x8uqVeW4m8YXy3Pjh79KTVGWJRDitG6EjQXkm0j3vOGN
         UUGKo2g7dnwrzaPC8gzO4YFIBXKvsREJrkxf0gSESM3qCHywcPiSFYG34qYlPYQbZ+T3
         Gb2g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=WQ3UlPMi8iO1JweMxxShHAfYCCuhz0sN8zAS6TXgeZ8=;
        b=D659gopH5+SwcUQZSE/36L8zgrYdCKSC+3EZ/7V0mywbOXIlXTfrd3g7xnfwdhbB1a
         3N8TuCB13C9MYKWXxaVaAX495wpTzd/BPvQ/jkeNKZpYZoUoFRbx9d0PIXZiIqYKdWLN
         EyFUNxUohrFi6IdYhDFJzn3mz2iShnQpbKVBSJ3BZR1cTb5NxTG//7OGqv2LyON2xv89
         ZEB1hDFsCvc2XM98cfQbUg9xE9dwatT2lDY6PKtKIhS71VcH4iUDyETUlwF8cyO/JqGo
         f6SwLOyQFLbUS6pV/4aT5u5SorpAJ2JXK9zBnJ9oqGkn1B3L8gOXST9Exhj4BKJnaXHB
         Sbuw==
X-Gm-Message-State: AOAM533W8ob0ZPemHjD0m8L3Y2BXBlxPyKhPIIhVLWPOfeTB7ElSeUqy
	UaKGxDgQDxAgUpV0q5sxqnI=
X-Google-Smtp-Source: ABdhPJyfcWJULySA3pD+719piBeBcWvrLBFc8WnyM7QHhcOIc5yXuHy57rzrSbcSkDQFpBH4yBwBvw==
X-Received: by 2002:a63:e30d:: with SMTP id f13mr22059947pgh.201.1620673443541;
        Mon, 10 May 2021 12:04:03 -0700 (PDT)
Subject: Re: Regression when booting 5.15 as dom0 on arm64 (WAS: Re:
 [linux-linus test] 161829: regressions - FAIL)
To: Julien Grall <julien@xen.org>, Christoph Hellwig <hch@lst.de>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 linux-kernel@vger.kernel.org,
 osstest service owner <osstest-admin@xenproject.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 iommu@lists.linux-foundation.org
References: <osstest-161829-mainreport@xen.org>
 <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org> <20210510084057.GA933@lst.de>
 <8b851596-acf7-9d3b-b08a-848cae5adada@xen.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <2c19af0b-e4c1-4f57-19cd-a86b4dc18c35@gmail.com>
Date: Mon, 10 May 2021 12:04:01 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <8b851596-acf7-9d3b-b08a-848cae5adada@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit



On 5/10/2021 11:15 AM, Julien Grall wrote:
> Hi Christoph,
> 
> On 10/05/2021 09:40, Christoph Hellwig wrote:
>> On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
>>> The pointer dereferenced seems to suggest that the swiotlb hasn't been
>>> allocated. From what I can tell, this may be because, although
>>> swiotlb_force is set to SWIOTLB_NO_FORCE, we will still enable the
>>> swiotlb when running on top of Xen.
>>>
>>> I am not entirely sure what would be the correct fix. Any opinions?
>>
>> Can you try something like the patch below (not even compile tested, but
>> the intent should be obvious)?
>>
>>
>> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
>> index 16a2b2b1c54d..7671bc153fb1 100644
>> --- a/arch/arm64/mm/init.c
>> +++ b/arch/arm64/mm/init.c
>> @@ -44,6 +44,8 @@
>>   #include <asm/tlb.h>
>>   #include <asm/alternative.h>
>>
>> +#include <xen/arm/swiotlb-xen.h>
>> +
>>   /*
>>    * We need to be able to catch inadvertent references to memstart_addr
>>    * that occur (potentially in generic code) before arm64_memblock_init()
>> @@ -482,7 +484,7 @@ void __init mem_init(void)
>>       if (swiotlb_force == SWIOTLB_FORCE ||
>>           max_pfn > PFN_DOWN(arm64_dma_phys_limit))
>>           swiotlb_init(1);
>> -    else
>> +    else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
>>           swiotlb_force = SWIOTLB_NO_FORCE;
>>         set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
> 
> I have applied the patch on top of 5.13-rc1 and can confirm I am able to
> boot dom0. Are you going to submit the patch?

Sorry about that Julien and thanks Christoph!
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Mon May 10 19:07:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 19:07:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125501.236246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgBF7-0003tJ-P5; Mon, 10 May 2021 19:07:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125501.236246; Mon, 10 May 2021 19:07:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgBF7-0003tC-Ly; Mon, 10 May 2021 19:07:01 +0000
Received: by outflank-mailman (input) for mailman id 125501;
 Mon, 10 May 2021 19:06:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fIyG=KF=fensystems.co.uk=mbrown@srs-us1.protection.inumbo.net>)
 id 1lgBF5-0003t6-JD
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 19:06:59 +0000
Received: from blyat.fensystems.co.uk (unknown [54.246.183.96])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 516b5dd3-a205-40d1-b485-3b4065e6ffe0;
 Mon, 10 May 2021 19:06:58 +0000 (UTC)
Received: from dolphin.home (unknown
 [IPv6:2a00:23c6:5495:5e00:72b3:d5ff:feb1:e101])
 by blyat.fensystems.co.uk (Postfix) with ESMTPSA id CBAA7442C4;
 Mon, 10 May 2021 19:06:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 516b5dd3-a205-40d1-b485-3b4065e6ffe0
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>
Cc: paul@xen.org, xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 wei.liu@kernel.org, pdurrant@amazon.com
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk> <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
From: Michael Brown <mbrown@fensystems.co.uk>
Message-ID: <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
Date: Mon, 10 May 2021 20:06:55 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YJmBDpqQ12ZBGf58@mail-itl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham autolearn_force=no version=3.4.2
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on
	blyat.fensystems.co.uk

On 10/05/2021 19:53, Marek Marczykowski-Górecki wrote:
> On Mon, May 10, 2021 at 07:47:01PM +0100, Michael Brown wrote:
>> That doesn't sound plausible to me.  In the setup as you describe, how is
>> the kernel expected to differentiate between "hotplug script has not yet
>> created the node" and "hotplug script does not exist and will therefore
>> never create any node"?
> 
> Is the latter valid at all? From what I can see in the toolstack, it
> always sets some hotplug script (if not specified explicitly, then
> "vif-bridge").

I don't see any definitive documentation around that, so I can't answer
for sure. It's probably best to let one of the Xen guys answer that.

If you have a suggested patch, I'm happy to test that it doesn't 
reintroduce the regression bug that was fixed by this commit.

Michael


From xen-devel-bounces@lists.xenproject.org Mon May 10 19:43:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 19:43:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125510.236257 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgBo0-0007z7-JK; Mon, 10 May 2021 19:43:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125510.236257; Mon, 10 May 2021 19:43:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgBo0-0007z0-GO; Mon, 10 May 2021 19:43:04 +0000
Received: by outflank-mailman (input) for mailman id 125510;
 Mon, 10 May 2021 19:43:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ldtt=KF=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lgBnz-0007yu-9M
 for xen-devel@lists.xenproject.org; Mon, 10 May 2021 19:43:03 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e286287e-1c5f-49be-b5ae-bb735ebef34c;
 Mon, 10 May 2021 19:43:02 +0000 (UTC)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 202785C0061;
 Mon, 10 May 2021 15:43:02 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Mon, 10 May 2021 15:43:02 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Mon, 10 May 2021 15:43:00 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e286287e-1c5f-49be-b5ae-bb735ebef34c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=L03Uio
	jr12PpS6PKDJd2ELGBLdxQ6yiTOAhVTtDPRVU=; b=qfcGIo9f2E70b4wQLBr81V
	H9gWJRAnLiX+QCjr1R0ama2p7WlDsZtgyzUBCgL+xu6rUvrenj2y9CS7fddl38+L
	7MdCMa9FIuPbe0ZjCIty02hG8FptzpMCUAqEgZuvpkfGmAN1QKY2wIkzOHHrGhgi
	UcPnAtjLhjaWBftbfd0vZL97eh2f6NwdeSQr9fpm4KS21eaTUh1pDet/vdWe7l16
	4f4XXoE6gczjH93uPSWC/fb40u5AhlrcxEYzgtsvOeasthR16g2Sx7uV+IMT1XwV
	66u5GcvP6UPMW/CyP1SIoY2O5f2zlMYfwfkTAn47a9kb+MQN5GLU+HeswkKwRDjA
	==
X-ME-Sender: <xms:xYyZYOEVSKewBEcx05jVh4-yi7bp7K0O9ya5DDSWhZSCGVM8jsHOxw>
    <xme:xYyZYPWSu47BIDeJMRSCxOt1BtyHwnlOBcyPjJBhhqIu5Rk41LmkNV3YIopOy0M0X
    NZu8sMMAltWHQ>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdegkedgudegudcutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepteev
    ffeigffhkefhgfegfeffhfegveeikeettdfhheevieehieeitddugeefteffnecukfhppe
    eluddrieejrdejledrgeenucevlhhushhtvghrufhiiigvpedtnecurfgrrhgrmhepmhgr
    ihhlfhhrohhmpehmrghrmhgrrhgvkhesihhnvhhishhisghlvghthhhinhhgshhlrggsrd
    gtohhm
X-ME-Proxy: <xmx:xYyZYIIIIjNDWmuokcUvX4epAwKQT-cm4OgjwyfS0oF-l6ye39n4Rg>
    <xmx:xYyZYIHX3TE7s-6ouCe7ZgYhaeqjZjyTZ3xqwB1JPmB9H9pWEUZtzw>
    <xmx:xYyZYEVsQlDXb9OdR8Hmq15V6gYjs7YwBq2UEHGwDGhmgdlD5GkHJQ>
    <xmx:xoyZYLS8Sm1eMuBM6c9l6eCLA2OysCycUhrPSgCkJjS-iTOY0D8g_w>
Date: Mon, 10 May 2021 21:42:52 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Michael Brown <mbrown@fensystems.co.uk>, paul@xen.org
Cc: paul@xen.org, xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	wei.liu@kernel.org, pdurrant@amazon.com
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YJmMvTkp2Y1hlLLm@mail-itl>
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk>
 <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="N7lT0JGnZD6+el7C"
Content-Disposition: inline
In-Reply-To: <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>


--N7lT0JGnZD6+el7C
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 10 May 2021 21:42:52 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Michael Brown <mbrown@fensystems.co.uk>, paul@xen.org
Cc: paul@xen.org, xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
	wei.liu@kernel.org, pdurrant@amazon.com
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching

On Mon, May 10, 2021 at 08:06:55PM +0100, Michael Brown wrote:
> If you have a suggested patch, I'm happy to test that it doesn't
> reintroduce the regression bug that was fixed by this commit.

Actually, I've just tested by simply reloading the xen-netfront module.
It seems that in this case the hotplug script is not re-executed. In
fact, I think it should not be re-executed at all, since the vif
interface remains in place (it just gets the NO-CARRIER flag).

This raises the question: why remove 'hotplug-status' in the first
place? After all, the interface remains correctly configured by the
hotplug script. From the commit message:

    xen-netback: remove 'hotplug-status' once it has served its purpose

    Removing the 'hotplug-status' node in netback_remove() is wrong; the
    script may not have completed. Only remove the node once the watch has
    fired and has been unregistered.

I think the intention was to remove the 'hotplug-status' node _later_,
in case of quickly adding and removing the interface. Is that right,
Paul? In that case, letting hotplug_status_changed() remove the entry
won't work, because the watch was unregistered a few lines earlier in
netback_remove(). And keeping the watch is not an option, because the
whole backend_info struct is going to be freed already.

If my guess about the original reason for the change is right, I think
it should be fixed at the hotplug script level: the script should check
whether the device is still there before writing the 'hotplug-status'
node. I'm not sure if doing this race-free is possible from a shell
script (I think it requires doing the xenstore read _and_ write in a
single transaction). But in the worst case, the aftermath of losing the
race is a stray 'hotplug-status' xenstore node - not ideal, but also
less harmful than failing to bring up an interface. The toolstack could
then clean it up later, perhaps while setting up that interface again
(if it gets re-connected)?
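
A minimal sketch of that script-level check (Python, with a toy dict
standing in for xenstore; in the real script the check and the write would
have to happen inside a single xenstore transaction, which the single
method call below only stands in for):

```python
class ToyStore:
    """Toy stand-in for xenstore; not the real API."""

    def __init__(self):
        self.nodes = {}  # path -> value

    def write_status_if_present(self, backend, status):
        # In a real script, this check-then-write would be one xenstore
        # transaction so the toolstack cannot remove the device in between.
        if backend not in self.nodes:
            return False  # device already removed: skip the stray write
        self.nodes[backend + "/hotplug-status"] = status
        return True


store = ToyStore()
store.nodes["backend/vif/1/0"] = "present"
assert store.write_status_if_present("backend/vif/1/0", "connected")

del store.nodes["backend/vif/1/0"]  # toolstack removed the device
assert not store.write_status_if_present("backend/vif/1/0", "connected")
```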

Anyway, perhaps the best thing to do now is to revert both commits and
think of an alternative solution for the original issue? That of course
assumes I guessed correctly why it was done in the first place...

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--N7lT0JGnZD6+el7C
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCZjLwACgkQ24/THMrX
1yxjlAgAhdtYB3Reqgw6Zn/a1iAFa+d1JPbv8huIZu3RSB3YYD2LOYNP4h/wBQbs
YPZ28kcqZmlbnUIlh7kFYO6EuyK+2CjiwxhsQB0QOiZWwv2ZDZhfW0n9TSVK3Q+t
aT67q6/J3EJDL0eEiiER32LBpWGxwMJoQDr4QeqpO46Ha6gdN5rdBKEqSbyP30Gu
Uccdhx1pW8agqDdmrdBiZV0cupQygsAN7/z987FnlAnqrdawiOwlMqBMhZVQnjeW
ZjPJ4gNPMsurbt6YD/BtSGphuW9Rj7RJ5vD+Hal8Cb/efDCPSkpl2uIPSQp7KTzk
tWeHUPh+6izwA+etf9f0mKzfMUWwmw==
=Dncm
-----END PGP SIGNATURE-----

--N7lT0JGnZD6+el7C--


From xen-devel-bounces@lists.xenproject.org Mon May 10 21:38:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 21:38:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125552.236305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgDbl-0004IY-U8; Mon, 10 May 2021 21:38:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125552.236305; Mon, 10 May 2021 21:38:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgDbl-0004IR-Qz; Mon, 10 May 2021 21:38:33 +0000
Received: by outflank-mailman (input) for mailman id 125552;
 Mon, 10 May 2021 21:38:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgDbk-0004IG-Qg; Mon, 10 May 2021 21:38:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgDbk-0006b4-MC; Mon, 10 May 2021 21:38:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgDbk-0007rM-BZ; Mon, 10 May 2021 21:38:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgDbk-0005MB-B4; Mon, 10 May 2021 21:38:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=bVO9kqv0Epb0jyv3ivyeAPG6SSiYaJwxrBA4n4RU6v0=; b=byzgdFw4iUwCgBN1tZXdfTqJPr
	5ssTtjJWDmebcNtQULZyQ6e+MMW/PUAL5L21NtyZPDVvKM5LT2zn0kOTdRTDJKHoGW+4Ql/FkROrj
	/TaAbz77fQDhNYJfAzpmuU5adOKFOZMz309ynDiV5XdQ+vMysuezwin/Gmvr+hXcVcio=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161895-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161895: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=375f2d8e684dce2ab6f375382f35e546c7ab62ee
X-Osstest-Versions-That:
    ovmf=f297b7f20010711e36e981fe45645302cc9d109d
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 21:38:32 +0000

flight 161895 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161895/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 375f2d8e684dce2ab6f375382f35e546c7ab62ee
baseline version:
 ovmf                 f297b7f20010711e36e981fe45645302cc9d109d

Last test of basis   161726  2021-05-04 06:28:31 Z    6 days
Testing same since   161895  2021-05-10 16:10:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Nicola Mazzucato <nicola.mazzucato@arm.com>
  Pierre Gondois <Pierre.Gondois@arm.com>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   f297b7f200..375f2d8e68  375f2d8e684dce2ab6f375382f35e546c7ab62ee -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 10 22:11:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 10 May 2021 22:11:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125558.236320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgE7Y-0008Uf-IC; Mon, 10 May 2021 22:11:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125558.236320; Mon, 10 May 2021 22:11:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgE7Y-0008UY-FB; Mon, 10 May 2021 22:11:24 +0000
Received: by outflank-mailman (input) for mailman id 125558;
 Mon, 10 May 2021 22:11:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgE7W-0008UO-VF; Mon, 10 May 2021 22:11:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgE7W-00079M-Qp; Mon, 10 May 2021 22:11:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgE7W-0000a9-E2; Mon, 10 May 2021 22:11:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgE7W-0000oR-DU; Mon, 10 May 2021 22:11:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gv/B7S+2fJI/TiOH/FXYj2Cy5HIzB9BTMAfKkrCmvGg=; b=2dduEoAxELIjWFP1zLeUWJtK2K
	RDjJ+qDVnwarJOdPyJHdGj310uMdx+8mqBAQqp3ZPkwg2MB4h3IK16hYng07cuZsTVlS326+kqYlV
	SY8gxws+74snvDLoywd1izfXTmd5WMi/ltB/JcDqT20UAV15IAkzcaS4L1AX8r6P2FfI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161897-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161897: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
X-Osstest-Versions-That:
    xen=982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 10 May 2021 22:11:22 +0000

flight 161897 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161897/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
baseline version:
 xen                  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06

Last test of basis   161893  2021-05-10 15:02:47 Z    0 days
Testing same since   161897  2021-05-10 19:02:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   982c89ed52..d4fb5f166c  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 11 01:11:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 01:11:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125564.236335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgGvt-0006zL-BT; Tue, 11 May 2021 01:11:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125564.236335; Tue, 11 May 2021 01:11:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgGvt-0006zE-8Q; Tue, 11 May 2021 01:11:33 +0000
Received: by outflank-mailman (input) for mailman id 125564;
 Tue, 11 May 2021 01:11:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=03QD=KG=arm.com=henry.wang@srs-us1.protection.inumbo.net>)
 id 1lgGvr-0006z8-RA
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 01:11:32 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.70]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e1659dd-cab2-4c8b-95f5-d69fddf37212;
 Tue, 11 May 2021 01:11:28 +0000 (UTC)
Received: from AM6P194CA0009.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:90::22)
 by AM6PR08MB3623.eurprd08.prod.outlook.com (2603:10a6:20b:48::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25; Tue, 11 May
 2021 01:11:25 +0000
Received: from AM5EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:90:cafe::38) by AM6P194CA0009.outlook.office365.com
 (2603:10a6:209:90::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 11 May 2021 01:11:25 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT061.mail.protection.outlook.com (10.152.16.247) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Tue, 11 May 2021 01:11:25 +0000
Received: ("Tessian outbound 6c4b4bc1cefb:v91");
 Tue, 11 May 2021 01:11:24 +0000
Received: from 330094bf693b.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C8932092-45C2-4C6B-A759-9ECF0221EE1D.1; 
 Tue, 11 May 2021 01:11:14 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 330094bf693b.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 11 May 2021 01:11:14 +0000
Received: from PA4PR08MB6253.eurprd08.prod.outlook.com (20.182.209.8) by
 PA4PR08MB6175.eurprd08.prod.outlook.com (20.182.210.210) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.29; Tue, 11 May 2021 01:11:11 +0000
Received: from PA4PR08MB6253.eurprd08.prod.outlook.com
 ([fe80::19f9:d346:b9af:5cad]) by PA4PR08MB6253.eurprd08.prod.outlook.com
 ([fe80::19f9:d346:b9af:5cad%3]) with mapi id 15.20.4108.031; Tue, 11 May 2021
 01:11:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e1659dd-cab2-4c8b-95f5-d69fddf37212
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uDeA7xY/TEkGUE5p3O+Bl2PSZJbLEmyyQJcjtzDFJzQ=;
 b=hxw+/Vc00stMGv/7vbuTX0dDBuA60Wbt+X8a8k6A0UaOOaevewHrvk4aOXLrXZnoaBoCu0cCQ7U66hGzdyrpErm4QVAkn+/vFKZvAmocaeR9XYnHSsntvwnaR/fmoWHxY3nbLGcOxUDsh3X2ILqE/QYtvBZnvFqdyTrZt/CcVQI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j/hUFG5pfIrbiROaUfsMjH6wBgq9k9CMpKFRqaxpMRIr3qfxk+pRvSA1ddmMehLs1XKD8dWppPffrhhUBCIzTUsWZayeqId/DeEt/Y/e1bFbLmArpFptj/n5Itnn6Z97UAOS4ElIHszUvxSj7VsFql96NH1YoHUP6yapbPB4427piNMQ1M0ysOCyXcZo+Z/wcAvG3CkaIb014tEPtamRFQdQ9TxZ7dWvr+0iXICcNNKrVA7KafcmQZxhduJ2GVgTsCNsFQr0AcE74dJgsI/RyKAChFtcRsfclrZmCtJt9Sq7m5J8OAeaSC5A79zSbfN6vrpAXG4bDUnsY0vmXgtseA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uDeA7xY/TEkGUE5p3O+Bl2PSZJbLEmyyQJcjtzDFJzQ=;
 b=Jm7Zxv1XtwdqxOPx3CF6+Labh/sZ9YesyyaYO1ge6wkQneyWBOnrqoeLKe+4VxmNyUx4of5Z3IBEFL9CQ8SIJsE4p/UqNupgBCBV+niDcH0KS2J4fuO7UI4Ekd/P0Pl5hm5TYaJjpwhdbjIjSBj1Mp/jwKbB9mGj3zptciGwa6EvmngRr+EiavcPTGJjW4UHb2hwWGWpXH6NvOkSCz1YX1hQdZs1cbIgSJ91ZMJuIhD4ejCZdy3KMofTmnqmmyqcc1BhZDcBOPNcJNrdGgecVssVZWaYhgRBCwrUiKkDD2OHwOaYNMAw+sAZIQjFx/zPfHJjTOOkb8HJ5yQLYp+9Qw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uDeA7xY/TEkGUE5p3O+Bl2PSZJbLEmyyQJcjtzDFJzQ=;
 b=hxw+/Vc00stMGv/7vbuTX0dDBuA60Wbt+X8a8k6A0UaOOaevewHrvk4aOXLrXZnoaBoCu0cCQ7U66hGzdyrpErm4QVAkn+/vFKZvAmocaeR9XYnHSsntvwnaR/fmoWHxY3nbLGcOxUDsh3X2ILqE/QYtvBZnvFqdyTrZt/CcVQI=
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>
Subject: RE: Discussion of Xenheap problems on AArch64
Thread-Topic: Discussion of Xenheap problems on AArch64
Thread-Index:
 Adc2dyA8lkZGRqbyRiSglHolanVkwQAFhaqAAACgy/AA4CfqgABHcHyAADhcqlAABznSAAGrycWAALiGZgAAEDKF4A==
Date: Tue, 11 May 2021 01:11:10 +0000
Message-ID:
 <PA4PR08MB6253F85E184CA51BDB99786992539@PA4PR08MB6253.eurprd08.prod.outlook.com>
References:
 <PA4PR08MB6253F49C13ED56811BA5B64E92479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <cdde98ca-4183-c92b-adca-801330992fc5@xen.org>
 <PA4PR08MB62538BBA256E66A0415F0C7192479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <f14aa1d6-35d2-a9a3-0672-7f0d3ae3ec89@xen.org>
 <PA4PR08MB62534C4130B59CAA9A8A8BF792419@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <PA4PR08MB6253FBC7F5E690DB74F2E11F92409@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
 <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba649865-410b-e1be-39a3-c4cac802f464@xen.org>
In-Reply-To: <ba649865-410b-e1be-39a3-c4cac802f464@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: EB4512013D00F047AF3A17069222309E.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.112]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: fff23869-810d-4394-e9c5-08d91419b27d
x-ms-traffictypediagnostic: PA4PR08MB6175:|AM6PR08MB3623:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB36235D433B17A304326F340192539@AM6PR08MB3623.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2582;OLM:2582;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 VMLLJEK4d0RCkijhnMwaxbk4i6sVIO7qPzpPpmO7Lj+69NlHPKujVMyx7k2ZcNQf+KL16+kgFrXq5bEkFkPXrmh6Xtz73++dOWehQ6i+LWgAXMhQ7OONXaYPasb7I9AJoCJq7R9KEwrrtC1Qi9L4dX7JxpVbwviLuZWeXE8a8FUCckcM3HT8a1Dp5CQ7Wf0b+orcsPMlazhYjWrav07k9Nlp2TY1XBKGaCzd1fV/Lawao9jaS9agIXeMV3a7WJN+ZT7VgkMfV5NgfNFBY/9t33ffRUyU2qA5If9yByjZDPwIuJ+7c6ANl+LzXYX5uRurKPrNjkQ61FKBKDOSXcqsOekCoN4SdiXqMwgtMMyI+2/H+mO9oWq/LZyuSRh/ocbZb1Em5NcpxIZ7ilAoAJvHVeWOwX3VEdtRUtLEKvIIvJeo90Rpg7gRDVKv83rGgVsoQ4rJgKGUq4PRvwatBRCxgqoZ4COQ5FtHOBghRRE6Qy/T/YOdueF/6idltd3Bv0rIaerwtxTMsFJgZGrZEjLkwubGKTorBRusUpajbTHIDlJVgnEFgAcXFgzqlp8/Lrsr/YQ9AnBH3IWEm/BBgZXpZvG1JBUQG23kSCvs7NFU85w=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PA4PR08MB6253.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(136003)(346002)(396003)(376002)(39850400004)(7696005)(2906002)(53546011)(66446008)(71200400001)(26005)(6506007)(54906003)(186003)(33656002)(76116006)(5660300002)(316002)(55016002)(4326008)(9686003)(110136005)(64756008)(66556008)(66946007)(52536014)(478600001)(8676002)(86362001)(38100700002)(122000001)(8936002)(66476007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?blJLd2Iyc2EvcWk3ZVJPQzR4QzdkZUlVZnY2T2o1QTNSQ3NRUWlGWUhydW5Q?=
 =?utf-8?B?YmRHYzc4eTlCNUwwYW9USDlScW5NaWEwM1pnUDUzOU91TkNGcTVBYkxyd2lF?=
 =?utf-8?B?ZVNpYUpvdzRpL0R3K2FSWjcyWDY2MloxdVFCdWI0eE5Cb1VOcEwrcG5JUFV3?=
 =?utf-8?B?bnJwdnVlOEwzZFBqbFZBR1RHdDIyMm95K0ZqQzNVWkk2ZnVZVWwzMjQrUDlI?=
 =?utf-8?B?ODJBT2IzTTR0WmdBbjAzWlF1UU94eTQvb1dhN2EvWnJKZFFCMkZJKytUZmJW?=
 =?utf-8?B?akEwQk5wT0tISVc5TTdiTjE1Z09tZ2kxOUhwc0lKQjlUQkFTN2YvcFVHUFpQ?=
 =?utf-8?B?bG0xSjF0QWprT3lpRG1qNEpISXNadGxSY0o1Rkl5ZFV5amQzckt0cUZiTEl5?=
 =?utf-8?B?Ynd0a1NCN3RLdjNTU3dJZUx5K1BBVTJEdEdOWTZkRWFMTGdSYklFS1pUNWZC?=
 =?utf-8?B?VUFwWlBOS2RrYkZvV3VBMFcraDhFSDBySlNwWjkvb1BWY0kwTFBaUWZSMFNW?=
 =?utf-8?B?STlTUFBFN3hlcHBKVEREWDFFRmFEY1I4K2EwbTFFV1oveVhxbzdCZzhQeGh5?=
 =?utf-8?B?U2NBcHJObVV2akFVRFRlUVdtL085OHFDTzJxb29rZXNDcnNXa09hYm5BMHFa?=
 =?utf-8?B?V0tWUk9YbmFzNFAwbFNXQjhIZnJ2Sm5HTHZHaWc4WGJhT3lHNU55K0l4aXQy?=
 =?utf-8?B?eUdTQUZFOGtiblVCNUFHcDNST2Y4RDhxeEF3U2J1Z0Y4WFBBV0xQcFM1Y0ht?=
 =?utf-8?B?cjBpelFXRVdZd2dqQ25pK2NZMHpkVkFDNmlaQTZTWXpicnRva2VWS2Z1eDlt?=
 =?utf-8?B?YUJ4YVVMcVlrMWNuSkdjUTN1dXUydlJpVWFaWkM3bW5oREVzNEswY1RlNGdU?=
 =?utf-8?B?eGIvQ01CdFBFRXlWOFdvaEdqQ3grQ2UvRXR0cVNVMnJVYy9YSG5yTVFQQi9P?=
 =?utf-8?B?TkJnN1IwOHJUOVV1dG9BUno2aDY2TW5UN1cyRVVYcDhmSnJGcDBKek96cW5p?=
 =?utf-8?B?eW52M3dTcFR1MHpGTGNEVDY0aU5EczB5aHFyYXVlZE5QUytqRmpQaXExTUdm?=
 =?utf-8?B?aEdOZTZnWm1OVkNuYkJodzllSHUwV0pqS0srVjdKMHpSSktHRjcvdWdmd285?=
 =?utf-8?B?dHRGMTBKRzR4QzFzQUtUazFyWHppZUlhbGNEbndoeU1KSzNRbzh0bGxqMVY3?=
 =?utf-8?B?TlRCa3hpU2xrL2hVYjlCVHBQSWVjOVl0a2Z3ZGVSUW4vaFJaK3ZKTmZoN2ZL?=
 =?utf-8?B?eTNxank5TVNEZUNlOXpBMWtpVUgra3A4Tm5UdEdKSHRYdXNVRjBnVnk1dEUr?=
 =?utf-8?B?NEhtdUNWWUFoaXJzRnQ1V1UxRDUzTklVdU0yQ1VZelJDeTlpT0xERm5OVkhq?=
 =?utf-8?B?SXFtRllUQ1dZUnk1TWNwUUV2YWcyTURmOXF2WjN6SnM3cXRwYVNtYmlZK0VE?=
 =?utf-8?B?SHE0alZPbGVHYVV3VHlHZWlxRjhpQnAzc01LTEV3Zk9qdTZUOW0wRmZGaFd5?=
 =?utf-8?B?UDdLMldzUFM1d3lYK21EU2M0dHFjYzYyK092UzdPSHlsUkJucWdaaXRpV3g0?=
 =?utf-8?B?dDQ5eDFyN3pHREpqd0Z0RzIxZ2x1R0tzazByWWVJUmt5aEllcjVWdUtwVXhW?=
 =?utf-8?B?OFh4K0wvMEQ2dklNc3NQOGJvb1JHZ21tY2phRnVPZGhzSzRMbkJET1N5bDkr?=
 =?utf-8?B?NksveDE5eFp1bFFuVkxIcjJ1aEtSa3MwZ000ZmJPVGtZb2dKNWd2RlFDMXRD?=
 =?utf-8?Q?WfT+E6M5JPfKBTH106h1t19v0mPhZMwlNmbSp1r?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6175
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a511d525-5b2f-491d-e872-08d91419aa0d
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	SS4E1sJDZxtx8yFi1dNU6823c7nqIVZfNfaJxzUmkOgDSm/9RsWALw8n7uyL7APIQFxLYtxyxTa8EzYknoaI89KDoJCJSMCJsXu9tNZsTTNhzgs9XCt4U7O//GtJeoBJAXJusqKM/KK+rBBCVuO1k7QMGTGXKxZ8ZHcevdFwvchUiFwMbe8FHmnThfJtXKkdBKf8tpaNLIjPu23dHMOjY8bTs8SrzmIjBDQiobmB0Gz7tWj7BWF9luOXK5/FbosgCZgY3MYNthbtTk13we3/Wq1qkLIIPE0PZ3Iq3edVrSZ2usnkcnoapNkYp+mOIFo1uBvGNJ4ZPWSIwsIy6jsyBHXJABtA/HEMy5eXNie4iPbGq0ZURWo8duYp1N8v8s/1UI5IOW8km2xrr3VvRlV4WeXUHt9969hHgM8O+p6/l6PyLA3hJioN6TMQVM7+HkKvj5oSA9lGYLsH0+rOGFeBZ2x7YiWzSXsoqqdzGKD3xjQXhAAoAWEXXYM0vEpR435XOy49u5FALPL4O3xuW/zUeHvicDn+8yAfTNWm69e2vTcQpxJ6uiLO9xobd48ETBs/XTUXMMudZC7g1aQ9mfWmYpSqB+lvOcV5c6oVUowRyXcdXnynyuHKO5jlL+5ouW/jADozzq9N3PXnqxQJmHU8VyDOZHj+/AEUTgBPYpJYFAA=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(376002)(136003)(39850400004)(46966006)(36840700001)(356005)(26005)(7696005)(82740400003)(5660300002)(33656002)(4326008)(316002)(6506007)(70206006)(53546011)(8676002)(70586007)(8936002)(2906002)(336012)(52536014)(110136005)(186003)(36860700001)(54906003)(55016002)(86362001)(9686003)(478600001)(47076005)(81166007)(82310400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2021 01:11:25.0350
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fff23869-810d-4394-e9c5-08d91419b27d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3623

Hi Julien,

> From: Julien Grall <julien@xen.org>
> Hi Henry,
> 
> On 07/05/2021 05:06, Henry Wang wrote:
> >> From: Julien Grall <julien@xen.org>
> >> On 28/04/2021 10:28, Henry Wang wrote:
> [...]
> 
> > when I continue booting Xen, I got following error log:
> >
> > (XEN) CPU:    0
> > (XEN) PC:     00000000002b5a5c alloc_boot_pages+0x94/0x98
> > (XEN) LR:     00000000002ca3bc
> > (XEN) SP:     00000000002ffde0
> > (XEN) CPSR:   600003c9 MODE:64-bit EL2h (Hypervisor, handler)
> > (XEN)
> > (XEN)   VTCR_EL2: 80000000
> > (XEN)  VTTBR_EL2: 0000000000000000
> > (XEN)
> > (XEN)  SCTLR_EL2: 30cd183d
> > (XEN)    HCR_EL2: 0000000000000038
> > (XEN)  TTBR0_EL2: 000000008413c000
> > (XEN)
> > (XEN)    ESR_EL2: f2000001
> > (XEN)  HPFAR_EL2: 0000000000000000
> > (XEN)    FAR_EL2: 0000000000000000
> > (XEN)
> > (XEN) Xen call trace:
> > (XEN)    [<00000000002b5a5c>] alloc_boot_pages+0x94/0x98 (PC)
> > (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
> (LR)
> > (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
> > (XEN)    [<00000000002cb988>] start_xen+0x344/0xbcc
> > (XEN)    [<00000000002001c0>]
> arm64/head.o#primary_switched+0x10/0x30
> > (XEN)
> > (XEN) ****************************************
> > (XEN) Panic on CPU 0:
> > (XEN) Xen BUG at page_alloc.c:432
> > (XEN) ****************************************
> 
> This is happening without my patch series applied, right? If so, what
> happen if you apply it?

No, I am afraid this is with your patch series applied, and that is why I
am a little bit confused about the error log...

Thanks for your patience.

Kind regards,

Henry

> 
> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 11 01:44:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 01:44:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125572.236348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgHRd-00027o-46; Tue, 11 May 2021 01:44:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125572.236348; Tue, 11 May 2021 01:44:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgHRd-00027h-0W; Tue, 11 May 2021 01:44:21 +0000
Received: by outflank-mailman (input) for mailman id 125572;
 Tue, 11 May 2021 01:44:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgHRb-00027X-Et; Tue, 11 May 2021 01:44:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgHRb-00088g-8C; Tue, 11 May 2021 01:44:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgHRa-0002J2-SU; Tue, 11 May 2021 01:44:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgHRa-0007HS-Qs; Tue, 11 May 2021 01:44:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=i7WQuqtAnK8m+ewCYCO9v1OX5W3ORGCY0/EglQgNHPM=; b=NLUIefB/P1vNGBQmC7bsLqNMeY
	/SkyO0FZ6V3bJXKH3O/2swdR7UPe0iwYnOUxXYpxuBqExi61JAKpEh2G2BvyqZuaN8G9YPZBFVrvB
	UZG798PpmHy/y24OLtPRs/7/M/5byTh9lyTd3pg6NC0UgChG2Pj8u5YIa1w2vuuip210=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161894-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161894: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6efb943b8616ec53a5e444193dccf1af9ad627b5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 01:44:18 +0000

flight 161894 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161894/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-xsm 20 guest-start/debian.repeat  fail pass in 161887

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                6efb943b8616ec53a5e444193dccf1af9ad627b5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  283 days
Failing since        152366  2020-08-01 20:49:34 Z  282 days  472 attempts
Testing same since   161887  2021-05-09 23:42:16 Z    1 days    3 attempts

------------------------------------------------------------
6035 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1637319 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 11 01:46:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 01:46:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125576.236364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgHTq-0002ka-JY; Tue, 11 May 2021 01:46:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125576.236364; Tue, 11 May 2021 01:46:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgHTq-0002kT-EC; Tue, 11 May 2021 01:46:38 +0000
Received: by outflank-mailman (input) for mailman id 125576;
 Tue, 11 May 2021 01:46:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgHTo-0002kN-PC
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 01:46:36 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b8c8eaf0-c322-4265-8e46-7e1ecd33da81;
 Tue, 11 May 2021 01:46:36 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 416A161629;
 Tue, 11 May 2021 01:46:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b8c8eaf0-c322-4265-8e46-7e1ecd33da81
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620697595;
	bh=zRFH7hcgqBzuVJGwApqdrtTvpvj+GmXyjHWNsw6DGg8=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=SrbUmQ3XfUDBVScU7RGYGh+COsif34UcsKqoTgGV9vf8XRs8R1lfXDv2Rg8m5+EF9
	 CJPluc1/MQWvbLjpb0BVOfrQVGN/CpceyyrZMmsRnCPL3kqtV6OlO3OEzsslK/qv7R
	 zfl2Tf8Vw+0CIzNstymmxgXrqQa0cXQFMxMCpH3x+EScqVCBhtS+W0OWy9oXkU1Diw
	 G6RzBgQ9DfY0kzP1vEIMFq5Dn15X6EwBB3uZAB8JhvRKGL4JvgSEGm4RjP4loHchpQ
	 dnVwpyniLe2sjqdRGm4CgKX7WruGCQeIvlzp1r0kmQMvORLOQOTUtZolCtGmlfMEQH
	 t7e1j5rQsPpEw==
Date: Mon, 10 May 2021 18:46:34 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@lst.de>
cc: Julien Grall <julien@xen.org>, f.fainelli@gmail.com, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    linux-kernel@vger.kernel.org, 
    osstest service owner <osstest-admin@xenproject.org>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    iommu@lists.linux-foundation.org
Subject: Re: Regression when booting 5.13 as dom0 on arm64 (WAS: Re: [linux-linus
 test] 161829: regressions - FAIL)
In-Reply-To: <20210510084057.GA933@lst.de>
Message-ID: <alpine.DEB.2.21.2105101818260.5018@sstabellini-ThinkPad-T480s>
References: <osstest-161829-mainreport@xen.org> <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org> <20210510084057.GA933@lst.de>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Mon, 10 May 2021, Christoph Hellwig wrote:
> On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> > The pointer dereferenced seems to suggest that the swiotlb hasn't been 
> > allocated. From what I can tell, this may be because swiotlb_force is set
> > to SWIOTLB_NO_FORCE, while we still need to enable the swiotlb when
> > running on top of Xen.
> >
> > I am not entirely sure what would be the correct fix. Any opinions?
> 
> Can you try something like the patch below (not even compile tested, but
> the intent should be obvious)?
> 
> 
> diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> index 16a2b2b1c54d..7671bc153fb1 100644
> --- a/arch/arm64/mm/init.c
> +++ b/arch/arm64/mm/init.c
> @@ -44,6 +44,8 @@
>  #include <asm/tlb.h>
>  #include <asm/alternative.h>
>  
> +#include <xen/arm/swiotlb-xen.h>
> +
>  /*
>   * We need to be able to catch inadvertent references to memstart_addr
>   * that occur (potentially in generic code) before arm64_memblock_init()
> @@ -482,7 +484,7 @@ void __init mem_init(void)
>  	if (swiotlb_force == SWIOTLB_FORCE ||
>  	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
>  		swiotlb_init(1);
> -	else
> +	else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
>  		swiotlb_force = SWIOTLB_NO_FORCE;
>  
>  	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);

The "IS_ENABLED(CONFIG_XEN)" is not needed as the check is already part
of xen_swiotlb_detect().
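
To make that point concrete, here is a minimal stand-alone model of the
reasoning (the `_model` names and parameters are invented for illustration;
the real function lives in include/xen/arm/swiotlb-xen.h). When the kernel
is not running under Xen -- which covers !CONFIG_XEN builds, where
xen_domain() is a constant false -- the detect function already returns 0,
so an extra IS_ENABLED(CONFIG_XEN) guard at the call site adds nothing:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the xen_swiotlb_detect() logic; not kernel code.
 * xen_domain stands in for xen_domain(), direct_mapped for the
 * XENFEAT_direct_mapped case (e.g. dom0 on arm64). */
int xen_swiotlb_detect_model(bool xen_domain, bool direct_mapped)
{
    if (!xen_domain)        /* also covers !CONFIG_XEN builds */
        return 0;
    if (direct_mapped)      /* direct-mapped domain needs swiotlb-xen */
        return 1;
    return 0;
}
```

So `!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect()` collapses to just
`!xen_swiotlb_detect()`.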


But let me ask another question first. Do you think it makes sense to have:

	if (swiotlb_force == SWIOTLB_NO_FORCE)
		return 0;

at the beginning of swiotlb_late_init_with_tbl? I am asking because
swiotlb_late_init_with_tbl is meant for special late initializations,
right? The presence or absence of SWIOTLB_NO_FORCE shouldn't really
matter with regard to swiotlb_late_init_with_tbl. Also the
commit message for "swiotlb: Make SWIOTLB_NO_FORCE perform no
allocation" says that "If a platform was somehow setting
swiotlb_no_force and a later call to swiotlb_init() was to be made we
would still be proceeding with allocating the default SWIOTLB size
(64MB)." Our case here is very similar, right? So the allocation should
proceed?


Which brings me to a separate unrelated issue, still affecting the path
xen_swiotlb_init -> swiotlb_late_init_with_tbl. If swiotlb_init(1) is
called by mem_init then swiotlb_late_init_with_tbl will fail due to the
check:

    /* protect against double initialization */
    if (WARN_ON_ONCE(io_tlb_default_mem))
        return -ENOMEM;

xen_swiotlb_init is meant to ask Xen to make a bunch of pages physically
contiguous. Then, it initializes the swiotlb buffer based on those
pages. So it is a problem that swiotlb_late_init_with_tbl refuses to
continue. However, in practice it is not a problem today because on ARM
we don't actually make any special requests to Xen to make the pages
physically contiguous (yet). See the empty implementation of
arch/arm/xen/mm.c:xen_create_contiguous_region. I don't know about x86.
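
The ordering problem can be sketched as a tiny stand-alone model (the
`_model` names are invented, and a bool stands in for io_tlb_default_mem;
this is not kernel code): once an early swiotlb_init() has populated the
default buffer, the late path trips the double-initialization guard, and
only an intervening swiotlb_exit() lets it proceed:

```c
#include <assert.h>
#include <stdbool.h>

/* Hypothetical model of the swiotlb init ordering discussed above. */
bool io_tlb_default_mem_model = false;   /* stands in for io_tlb_default_mem */

void swiotlb_init_model(void)  { io_tlb_default_mem_model = true;  }
void swiotlb_exit_model(void)  { io_tlb_default_mem_model = false; }

/* Mirrors the double-initialization guard quoted above: if the default
 * buffer already exists, bail out with -ENOMEM (-12). */
int swiotlb_late_init_model(void)
{
    if (io_tlb_default_mem_model)
        return -12;                      /* WARN_ON_ONCE path, -ENOMEM */
    io_tlb_default_mem_model = true;
    return 0;
}
```

With mem_init() having already called swiotlb_init(1), the late Xen path
fails the guard; calling swiotlb_exit() first, as in the patch below,
clears the buffer so the late init can succeed.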

So maybe we should instead do something like the appended?


diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f8f07469d259..f5a3638d1dee 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -152,6 +152,7 @@ static int __init xen_mm_init(void)
 	struct gnttab_cache_flush cflush;
 	if (!xen_swiotlb_detect())
 		return 0;
+	swiotlb_exit();
 	xen_swiotlb_init();
 
 	cflush.op = 0;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..f17be37298a7 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -285,9 +285,6 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
 
-	if (swiotlb_force == SWIOTLB_NO_FORCE)
-		return 0;
-
 	/* protect against double initialization */
 	if (WARN_ON_ONCE(io_tlb_default_mem))
 		return -ENOMEM;


From xen-devel-bounces@lists.xenproject.org Tue May 11 04:54:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 04:54:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125604.236392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgKPQ-0005iu-J3; Tue, 11 May 2021 04:54:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125604.236392; Tue, 11 May 2021 04:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgKPQ-0005in-G0; Tue, 11 May 2021 04:54:16 +0000
Received: by outflank-mailman (input) for mailman id 125604;
 Tue, 11 May 2021 04:54:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgKPP-0005id-FK; Tue, 11 May 2021 04:54:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgKPP-0003Ua-9V; Tue, 11 May 2021 04:54:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgKPO-0004Ke-VZ; Tue, 11 May 2021 04:54:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgKPO-00061K-UZ; Tue, 11 May 2021 04:54:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wMofPHtPs1jSVkYMv4w+PhEx7V/ifqTKoo/SzItTw74=; b=IU//t/xYwsZX5BE4lvfQZwkK/d
	PNIxKPe24KB2pnIGcxaavjJS/Cetx9/Xex1ix84yJyWnjG1ZT0lyLu5w/LNUJj+MX84tXgAvu9F/5
	hA+fBSMZ9pn72n9kXixiX8prpsIZ6tqxral/MLJ7iR5jTm0cNUHKpEraQGZzDYemJJkc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161896-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161896: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=74e31681ba05ed1876818df30c581bc530554fb3
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 04:54:14 +0000

flight 161896 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161896/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                74e31681ba05ed1876818df30c581bc530554fb3
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  263 days
Failing since        152659  2020-08-21 14:07:39 Z  262 days  480 attempts
Testing same since   161896  2021-05-10 17:09:26 Z    0 days    1 attempts

------------------------------------------------------------
490 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 148512 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 11 05:58:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 05:58:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125613.236408 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgLPe-0003ox-GC; Tue, 11 May 2021 05:58:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125613.236408; Tue, 11 May 2021 05:58:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgLPe-0003oq-DD; Tue, 11 May 2021 05:58:34 +0000
Received: by outflank-mailman (input) for mailman id 125613;
 Tue, 11 May 2021 05:58:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgLPc-0003og-Q4; Tue, 11 May 2021 05:58:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgLPc-0004of-JE; Tue, 11 May 2021 05:58:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgLPc-0007sL-AI; Tue, 11 May 2021 05:58:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgLPc-000404-9r; Tue, 11 May 2021 05:58:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=B0QX823C7GPisWShQB7JhFUmmZ9NsuSnTEtT1IBA+Jw=; b=pGOUdLeMpXu4+fKT2zBCiCn7Ai
	aTgfX6ez40jJl08kitLOoEuiw4CZXkOcGWstSRYYgnwdt0xWm1F0/w8fvdbgYRZ8MQ6LriMe6++Wf
	wcm517mlbTfvuegYDvSipqQeJbX6oWk4rP7HMvUz8knzFz4qIhgQyeZzqYVV7AFMOfrE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161899-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161899: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=ef3840c1ff320698523dd6b94ba7c86354392784
X-Osstest-Versions-That:
    ovmf=375f2d8e684dce2ab6f375382f35e546c7ab62ee
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 05:58:32 +0000

flight 161899 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161899/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 ef3840c1ff320698523dd6b94ba7c86354392784
baseline version:
 ovmf                 375f2d8e684dce2ab6f375382f35e546c7ab62ee

Last test of basis   161895  2021-05-10 16:10:12 Z    0 days
Testing same since   161899  2021-05-10 23:42:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Michael D Kinney <michael.d.kinney@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   375f2d8e68..ef3840c1ff  ef3840c1ff320698523dd6b94ba7c86354392784 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 11 06:36:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 06:36:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125619.236423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgLzs-0008Cl-5f; Tue, 11 May 2021 06:36:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125619.236423; Tue, 11 May 2021 06:36:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgLzs-0008Ce-2S; Tue, 11 May 2021 06:36:00 +0000
Received: by outflank-mailman (input) for mailman id 125619;
 Tue, 11 May 2021 06:35:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=nFSD=KG=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1lgLzp-0008CY-Si
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 06:35:57 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 315daa80-185c-4cb4-9f77-b69b0745ace1;
 Tue, 11 May 2021 06:35:56 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.94.2 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1lgLzk-0007pF-Gz; Tue, 11 May 2021 06:35:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 315daa80-185c-4cb4-9f77-b69b0745ace1
Date: Tue, 11 May 2021 07:35:52 +0100
From: Tim Deegan <tim@xen.org>
To: Costin Lupu <costin.lupu@cs.pub.ro>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: Re: [PATCH v3 1/5] tools/debugger: Fix PAGE_SIZE redefinition error
Message-ID: <YJolyMVdxPIXVCQo@deinos.phlegethon.org>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
 <88d4d2deeca3259450dc9af2b97f2fc1453a5d7d.1620633386.git.costin.lupu@cs.pub.ro>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <88d4d2deeca3259450dc9af2b97f2fc1453a5d7d.1620633386.git.costin.lupu@cs.pub.ro>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 11:35 +0300 on 10 May (1620646515), Costin Lupu wrote:
> If PAGE_SIZE is already defined in the system (e.g. in the /usr/include/limits.h
> header) then gcc will trigger a redefinition error because of -Werror. This
> patch replaces usage of PAGE_* macros with KDD_PAGE_* macros in order to avoid
> confusion between control domain page granularity (PAGE_* definitions) and
> guest domain page granularity (which is what we are dealing with here).
> 
> We chose to define the KDD_PAGE_* macros instead of using XC_PAGE_* macros
> because (1) the code in kdd.c should not include any Xen headers and (2) to add
> consistency for code in both kdd.c and kdd-xen.c.
> 
> Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>

Reviewed-by: Tim Deegan <tim@xen.org>

Thanks!

Tim.


From xen-devel-bounces@lists.xenproject.org Tue May 11 06:36:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 06:36:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125620.236435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgLzv-0008TQ-EH; Tue, 11 May 2021 06:36:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125620.236435; Tue, 11 May 2021 06:36:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgLzv-0008TJ-AU; Tue, 11 May 2021 06:36:03 +0000
Received: by outflank-mailman (input) for mailman id 125620;
 Tue, 11 May 2021 06:36:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OZd+=KG=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lgLzu-0008Sz-OE
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 06:36:02 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2187cd16-62f6-4edc-9932-880b66c39ad2;
 Tue, 11 May 2021 06:36:00 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 8C47567373; Tue, 11 May 2021 08:35:58 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2187cd16-62f6-4edc-9932-880b66c39ad2
Date: Tue, 11 May 2021 08:35:58 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Julien Grall <julien@xen.org>,
	f.fainelli@gmail.com,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	linux-kernel@vger.kernel.org,
	osstest service owner <osstest-admin@xenproject.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	iommu@lists.linux-foundation.org
Subject: Re: Regression when booting 5.15 as dom0 on arm64 (WAS: Re:
 [linux-linus test] 161829: regressions - FAIL)
Message-ID: <20210511063558.GA7605@lst.de>
References: <osstest-161829-mainreport@xen.org> <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org> <20210510084057.GA933@lst.de> <alpine.DEB.2.21.2105101818260.5018@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2105101818260.5018@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, May 10, 2021 at 06:46:34PM -0700, Stefano Stabellini wrote:
> On Mon, 10 May 2021, Christoph Hellwig wrote:
> > On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> > > The pointer dereferenced seems to suggest that the swiotlb hasn't been 
> > > allocated. From what I can tell, this may be because, even though 
> > > swiotlb_force is set to SWIOTLB_NO_FORCE, we still enable the swiotlb 
> > > when running on top of Xen.
> > >
> > > I am not entirely sure what would be the correct fix. Any opinions?
> > 
> > Can you try something like the patch below (not even compile tested, but
> > the intent should be obvious)?
> > 
> > 
> > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > index 16a2b2b1c54d..7671bc153fb1 100644
> > --- a/arch/arm64/mm/init.c
> > +++ b/arch/arm64/mm/init.c
> > @@ -44,6 +44,8 @@
> >  #include <asm/tlb.h>
> >  #include <asm/alternative.h>
> >  
> > +#include <xen/arm/swiotlb-xen.h>
> > +
> >  /*
> >   * We need to be able to catch inadvertent references to memstart_addr
> >   * that occur (potentially in generic code) before arm64_memblock_init()
> > @@ -482,7 +484,7 @@ void __init mem_init(void)
> >  	if (swiotlb_force == SWIOTLB_FORCE ||
> >  	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
> >  		swiotlb_init(1);
> > -	else
> > +	else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
> >  		swiotlb_force = SWIOTLB_NO_FORCE;
> >  
> >  	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
> 
> The "IS_ENABLED(CONFIG_XEN)" is not needed as the check is already part
> of xen_swiotlb_detect().

As far as I can tell the x86 version of xen_swiotlb_detect has a
!CONFIG_XEN stub.  The arm/arm64 version is unconditionally declared, but
the implementation is only compiled when Xen support is enabled.

> 
> 
> But let me ask another question first. Do you think it makes sense to have:
> 
> 	if (swiotlb_force == SWIOTLB_NO_FORCE)
> 		return 0;
> 
> at the beginning of swiotlb_late_init_with_tbl? I am asking because
> swiotlb_late_init_with_tbl is meant for special late initializations,
> right? The presence or absence of SWIOTLB_NO_FORCE shouldn't really
> matter to swiotlb_late_init_with_tbl. Also the
> commit message for "swiotlb: Make SWIOTLB_NO_FORCE perform no
> allocation" says that "If a platform was somehow setting
> swiotlb_no_force and a later call to swiotlb_init() was to be made we
> would still be proceeding with allocating the default SWIOTLB size
> (64MB)." Our case here is very similar, right? So the allocation should
> proceed?

Well, right now SWIOTLB_NO_FORCE is checked in dma_direct_map_page.
We need to clean all this up a bit, especially with the work to support
multiple swiotlb buffers, but I think for now this is the best we can
do.

> Which brings me to a separate unrelated issue, still affecting the path
> xen_swiotlb_init -> swiotlb_late_init_with_tbl. If swiotlb_init(1) is
> called by mem_init then swiotlb_late_init_with_tbl will fail due to the
> check:
> 
>     /* protect against double initialization */
>     if (WARN_ON_ONCE(io_tlb_default_mem))
>         return -ENOMEM;
> 
> xen_swiotlb_init is meant to ask Xen to make a bunch of pages physically
> contiguous. Then, it initializes the swiotlb buffer based on those
> pages. So it is a problem that swiotlb_late_init_with_tbl refuses to
> continue. However, in practice it is not a problem today because on ARM
> we don't actually make any special requests to Xen to make the pages
> physically contiguous (yet). See the empty implementation of
> arch/arm/xen/mm.c:xen_create_contiguous_region. I don't know about x86.
> 
> So maybe we should instead do something like the appended?

So I'd like to change the core swiotlb initialization to just use
a callback into the arch/xen code to make the pages contiguous and
kill all that code duplication.  Together with the multiple swiotlb
buffer work I'd rather avoid churn that goes in a different direction
if possible.


From xen-devel-bounces@lists.xenproject.org Tue May 11 06:37:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 06:37:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125624.236447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgM1L-0000wu-PQ; Tue, 11 May 2021 06:37:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125624.236447; Tue, 11 May 2021 06:37:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgM1L-0000wn-M0; Tue, 11 May 2021 06:37:31 +0000
Received: by outflank-mailman (input) for mailman id 125624;
 Tue, 11 May 2021 06:37:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1y2t=KG=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1lgM1K-0000wb-CA
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 06:37:30 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id 90517df5-5487-40cc-981c-df11108319f4;
 Tue, 11 May 2021 06:37:28 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id 2CFC9168F;
 Mon, 10 May 2021 23:37:28 -0700 (PDT)
Received: from [10.57.3.172] (unknown [10.57.3.172])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id BDCF63F718;
 Mon, 10 May 2021 23:37:25 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90517df5-5487-40cc-981c-df11108319f4
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall
 <julien@xen.org>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com, xen-devel@lists.xenproject.org
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
From: Michal Orzel <michal.orzel@arm.com>
Message-ID: <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
Date: Tue, 11 May 2021 08:37:20 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Jan,

On 05.05.2021 10:00, Jan Beulich wrote:
> On 05.05.2021 09:43, Michal Orzel wrote:
>> --- a/xen/include/public/arch-arm.h
>> +++ b/xen/include/public/arch-arm.h
>> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>>  
>>      /* Return address and mode */
>>      __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
>> -    uint32_t cpsr;                              /* SPSR_EL2 */
>> +    uint64_t cpsr;                              /* SPSR_EL2 */
>>  
>>      union {
>> -        uint32_t spsr_el1;       /* AArch64 */
>> +        uint64_t spsr_el1;       /* AArch64 */
>>          uint32_t spsr_svc;       /* AArch32 */
>>      };
> 
> This change affects, besides domctl, also default_initialise_vcpu(),
> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
> only allows two unrelated VCPUOP_* to pass, but then I don't
> understand why arch_initialise_vcpu() doesn't simply return e.g.
> -EOPNOTSUPP. Hence I suspect I'm missing something.
> 
I agree that do_arm_vcpu_op() only allows two VCPUOP_* commands to pass, and
that calling arch_initialise_vcpu() for VCPUOP_initialise makes no sense, as
VCPUOP_initialise is not supported on Arm. Returning -EOPNOTSUPP there would
be the right thing to do. However, do_arm_vcpu_op() rejects VCPUOP_initialise
with -EINVAL, so arch_initialise_vcpu() is never reached on Arm.
Do you think changing this behaviour so that arch_initialise_vcpu() returns
-EOPNOTSUPP should be part of this patch?
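
The dispatch behaviour described above can be sketched in plain C. This is a
reduced userspace model, not Xen source: the VCPUOP_* values match
xen/include/public/vcpu.h, but the error constants and function bodies are
stand-ins for the real hypervisor code.

```c
#include <assert.h>

/* Stand-ins for the error codes discussed (not Xen's definitions). */
#define EINVAL      22
#define EOPNOTSUPP  95

/* Command numbers as in xen/include/public/vcpu.h. */
#define VCPUOP_initialise                    0
#define VCPUOP_register_runstate_memory_area 5
#define VCPUOP_register_vcpu_info            10

/* Sketch of the change under discussion: return -EOPNOTSUPP instead of
 * falling through to default_initialise_vcpu(). */
static long arch_initialise_vcpu(void)
{
    return -EOPNOTSUPP;
}

/* Sketch of do_arm_vcpu_op(): only two VCPUOP_* commands are let through,
 * so VCPUOP_initialise is rejected with -EINVAL before any arch code runs. */
static long do_arm_vcpu_op(int cmd)
{
    switch (cmd) {
    case VCPUOP_register_vcpu_info:
    case VCPUOP_register_runstate_memory_area:
        return 0; /* forwarded to the common do_vcpu_op() in reality */
    default:
        return -EINVAL;
    }
}
```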

>> --- a/xen/include/public/domctl.h
>> +++ b/xen/include/public/domctl.h
>> @@ -38,7 +38,7 @@
>>  #include "hvm/save.h"
>>  #include "memory.h"
>>  
>> -#define XEN_DOMCTL_INTERFACE_VERSION 0x00000013
>> +#define XEN_DOMCTL_INTERFACE_VERSION 0x00000014
> 
> So this is to cover for the struct vcpu_guest_core_regs change.
> 
>> --- a/xen/include/public/vm_event.h
>> +++ b/xen/include/public/vm_event.h
>> @@ -266,8 +266,7 @@ struct vm_event_regs_arm {
>>      uint64_t ttbr1;
>>      uint64_t ttbcr;
>>      uint64_t pc;
>> -    uint32_t cpsr;
>> -    uint32_t _pad;
>> +    uint64_t cpsr;
>>  };
> 
> Then I wonder why this isn't accompanied by a similar bump of
> VM_EVENT_INTERFACE_VERSION. I don't see you drop any checking /
> filling of the _pad field, so existing callers may pass garbage
> there, and new callers need to be prevented from looking at the
> upper half when running on an older hypervisor.
> 
> Jan
> 
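Jan's concern can be illustrated with a reduced sketch (only the tail fields
from the hunk above, with hypothetical struct names; the real public header has
more leading fields): widening cpsr does not change the struct's size, because
the old explicit _pad becomes the upper half of cpsr, which is exactly why an
old caller's garbage in _pad would be read by new consumers.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Reduced sketch of the tail of struct vm_event_regs_arm before the change
 * (leading fields omitted). Old callers may leave garbage in _pad. */
struct regs_tail_old {
    uint64_t pc;
    uint32_t cpsr;
    uint32_t _pad;
};

/* Same tail after the change: the upper 32 bits of cpsr occupy the bytes
 * that used to be _pad, so size and offsets are unchanged. */
struct regs_tail_new {
    uint64_t pc;
    uint64_t cpsr;
};
```

Since size and layout are identical, the break is purely semantic, which is
why the question of bumping VM_EVENT_INTERFACE_VERSION arises.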
Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Tue May 11 07:07:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 07:07:29 +0000
Subject: RE: [PATCH] xen-netback: Check for hotplug-status existence before watching
Thread-Topic: [PATCH] xen-netback: Check for hotplug-status existence before watching
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: =?utf-8?B?TWFyZWsgTWFyY3p5a293c2tpLUfDs3JlY2tp?=
	<marmarek@invisiblethingslab.com>, Michael Brown <mbrown@fensystems.co.uk>,
	"paul@xen.org" <paul@xen.org>
CC: "paul@xen.org" <paul@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>, "wei.liu@kernel.org" <wei.liu@kernel.org>
Date: Tue, 11 May 2021 07:06:55 +0000
Message-ID: <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk> <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
In-Reply-To: <YJmMvTkp2Y1hlLLm@mail-itl>
Accept-Language: en-GB, en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Sent: 10 May 2021 20:43
> To: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org
> Cc: paul@xen.org; xen-devel@lists.xenproject.org; netdev@vger.kernel.org;
> wei.liu@kernel.org; Durrant, Paul <pdurrant@amazon.co.uk>
> Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status
> existence before watching
> 
> On Mon, May 10, 2021 at 08:06:55PM +0100, Michael Brown wrote:
> > If you have a suggested patch, I'm happy to test that it doesn't reintroduce
> > the regression bug that was fixed by this commit.
> 
> Actually, I've just tested with a simple reloading xen-netfront module. It
> seems in this case, the hotplug script is not re-executed. In fact, I
> think it should not be re-executed at all, since the vif interface
> remains in place (it just gets NO-CARRIER flag).
> 
> This brings a question, why removing hotplug-status in the first place?
> The interface remains correctly configured by the hotplug script after
> all. From the commit message:
> 
>     xen-netback: remove 'hotplug-status' once it has served its purpose
> 
>     Removing the 'hotplug-status' node in netback_remove() is wrong; the script
>     may not have completed. Only remove the node once the watch has fired and
>     has been unregistered.
> 
> I think the intention was to remove 'hotplug-status' node _later_ in
> case of quickly adding and removing the interface. Is that right, Paul?

The removal was done to allow unbind/bind to function correctly. IIRC before
the original patch doing a bind would stall forever waiting for the hotplug
status to change, which would never happen.

> In that case, letting hotplug_status_changed() remove the entry wont
> work, because the watch was unregistered few lines earlier in
> netback_remove(). And keeping the watch is not an option, because the
> whole backend_info struct is going to be free-ed already.
> 
> If my guess about the original reason for the change is right, I think
> it should be fixed at the hotplug script level - it should check if the
> device is still there before writing 'hotplug-status' node.
> I'm not sure if doing it race-free is possible from a shell script (I think it
> requires doing xenstore read _and_ write in a single transaction). But
> in the worst case, the aftermath of loosing the race is leaving stray
> 'hotplug-status' xenstore node - not ideal, but also less harmful than
> failing to bring up an interface. At this point, the toolstack could cleanup
> it later, perhaps while setting up that interface again (if it gets
> re-connected)?
> 
> Anyway, perhaps the best thing to do now, is to revert both commits, and
> think of an alternative solution for the original issue? That of course
> assumes I guessed correctly why it was done in the first place...
> 

Simply reverting everything would likely break the ability to do unbind and
bind (which is useful e.g to allow update the netback module whilst guests
are still running) so I don't think that's an option.

  Paul


From xen-devel-bounces@lists.xenproject.org Tue May 11 07:43:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 07:43:49 +0000
From: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
To: Thomas Bogendoerfer <tsbogend@alpha.franken.de>,
	Alexander Lobakin <alobakin@pm.me>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Johannes Berg <johannes.berg@intel.com>,
	Joerg Roedel <jroedel@suse.de>,
	Wei Liu <wei.liu@kernel.org>,
	Mike Rapoport <rppt@kernel.org>,
	Florian Fainelli <f.fainelli@gmail.com>,
	Corey Minyard <cminyard@mvista.com>,
	Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>,
	Michael Kelley <mikelley@microsoft.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Scott Branden <scott.branden@broadcom.com>,
	Olof Johansson <olof@lixom.net>,
	Mihai Carabas <mihai.carabas@oracle.com>,
	Marek Czerski <ma.czerski@gmail.com>,
	Bjorn Andersson <bjorn.andersson@linaro.org>,
	Mathieu Poirier <mathieu.poirier@linaro.org>,
	Rishabh Bhatnagar <rishabhb@codeaurora.org>,
	Vineeth Vijayan <vneethv@linux.ibm.com>,
	Peter Oberparleiter <oberpar@linux.ibm.com>,
	Alexander Egorenkov <egorenar@linux.ibm.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
	"Paul E. McKenney" <paulmck@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	"Steven Rostedt (VMware)" <rostedt@goodmis.org>,
	linux-alpha@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-arm-kernel@lists.infradead.org,
	linux-mips@vger.kernel.org,
	linux-parisc@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-s390@vger.kernel.org,
	sparclinux@vger.kernel.org,
	linux-um@lists.infradead.org,
	linux-hyperv@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	linux-xtensa@linux-xtensa.org,
	openipmi-developer@lists.sourceforge.net,
	linux-clk@vger.kernel.org,
	linux-edac@vger.kernel.org,
	coresight@lists.linaro.org,
	linux-leds@vger.kernel.org,
	bcm-kernel-feedback-list@broadcom.com,
	netdev@vger.kernel.org,
	linux-pm@vger.kernel.org,
	linux-remoteproc@vger.kernel.org,
	linux-staging@lists.linux.dev,
	dri-devel@lists.freedesktop.org,
	linux-fbdev@vger.kernel.org,
	linux-arch@vger.kernel.org,
	kexec@lists.infradead.org,
	rcu@vger.kernel.org,
	linux-fsdevel@vger.kernel.org
Cc: Richard Henderson <rth@twiddle.net>,
	Ivan Kokshaysky <ink@jurassic.park.msu.ru>,
	Matt Turner <mattst88@gmail.com>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	"James E.J. Bottomley" <James.Bottomley@HansenPartnership.com>,
	Helge Deller <deller@gmx.de>,
	Michael Ellerman <mpe@ellerman.id.au>,
	Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	Paul Mackerras <paulus@samba.org>,
	Heiko Carstens <hca@linux.ibm.com>,
	Vasily Gorbik <gor@linux.ibm.com>,
	Christian Borntraeger <borntraeger@de.ibm.com>,
	"David S. Miller" <davem@davemloft.net>,
	Jeff Dike <jdike@addtoit.com>,
	Richard Weinberger <richard@nod.at>,
	Anton Ivanov <anton.ivanov@cambridgegreys.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>,
	Borislav Petkov <bp@alien8.de>,
	x86@kernel.org,
	"H. Peter Anvin" <hpa@zytor.com>,
	"K. Y. Srinivasan" <kys@microsoft.com>,
	Haiyang Zhang <haiyangz@microsoft.com>,
	Stephen Hemminger <sthemmin@microsoft.com>,
	Dexuan Cui <decui@microsoft.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Chris Zankel <chris@zankel.net>,
	Max Filippov <jcmvbkbc@gmail.com>,
	Corey Minyard <minyard@acm.org>,
	Paul Walmsley <paul.walmsley@sifive.com>,
	Michael Turquette <mturquette@baylibre.com>,
	Stephen Boyd <sboyd@kernel.org>,
	Dinh Nguyen <dinguyen@kernel.org>,
	Mauro Carvalho Chehab <mchehab@kernel.org>,
	Tony Luck <tony.luck@intel.com>,
	James Morse <james.morse@arm.com>,
	Robert Richter <rric@kernel.org>,
	Suzuki K Poulose <suzuki.poulose@arm.com>,
	Mike Leach <mike.leach@linaro.org>,
	Leo Yan <leo.yan@linaro.org>,
	Alexander Shishkin <alexander.shishkin@linux.intel.com>,
	Pavel Machek <pavel@ucw.cz>,
	Arnd Bergmann <arnd@arndb.de>,
	Alex Elder <elder@kernel.org>,
	Jakub Kicinski <kuba@kernel.org>,
	Sebastian Reichel <sre@kernel.org>,
	Ohad Ben-Cohen <ohad@wizery.com>,
	Jens Frederich <jfrederich@gmail.com>,
	Daniel Drake <dsd@laptop.org>,
	Jon Nettleton <jon.nettleton@gmail.com>,
	Eric Biederman <ebiederm@xmission.com>,
	Josh Triplett <josh@joshtriplett.org>,
	Mathieu Desnoyers <mathieu.desnoyers@efficios.com>,
	Lai Jiangshan <jiangshanlai@gmail.com>,
	Joel Fernandes <joel@joelfernandes.org>,
	Luis Chamberlain <mcgrof@kernel.org>,
	Kees Cook <keescook@chromium.org>,
	Iurii Zaikin <yzaikin@google.com>,
	Mike Rapoport <rppt@linux.ibm.com>,
	Christian Brauner <christian.brauner@ubuntu.com>,
	Rasmus Villemoes <linux@rasmusvillemoes.dk>
Subject: [PATCH v3 1/1] kernel.h: Split out panic and oops helpers
Date: Tue, 11 May 2021 10:41:37 +0300
Message-Id: <20210511074137.33666-1-andriy.shevchenko@linux.intel.com>
X-Mailer: git-send-email 2.30.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

kernel.h has been used as a dumping ground for all kinds of stuff for a long
time. Here is an attempt to start cleaning it up by splitting out the panic
and oops helpers.

This serves several purposes:
- dropping a dependency in bug.h
- breaking an include loop by moving panic_notifier.h out
- unloading kernel.h of something which has its own domain

At the same time, convert users tree-wide to use the new headers, although
for the time being the new header is included back into kernel.h to avoid
twisted indirect includes for existing users.

Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
Acked-by: Mike Rapoport <rppt@linux.ibm.com>
Acked-by: Corey Minyard <cminyard@mvista.com>
Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
Acked-by: Arnd Bergmann <arnd@arndb.de>
Acked-by: Kees Cook <keescook@chromium.org>
Acked-by: Wei Liu <wei.liu@kernel.org>
Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
Co-developed-by: Andrew Morton <akpm@linux-foundation.org>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Sebastian Reichel <sre@kernel.org>
Acked-by: Luis Chamberlain <mcgrof@kernel.org>
Acked-by: Stephen Boyd <sboyd@kernel.org>
Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
Acked-by: Helge Deller <deller@gmx.de> # parisc
---
v3: rebased on top of v5.13-rc1, collected a few more tags

Note WRT Andrew's SoB tag above: I have added it since I took part of the
changes from him. Andrew, feel free to amend it or tell me how you would
like this handled.
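
The pattern shared by the tree-wide users below is registering a callback on
panic_notifier_list, which this patch moves out of kernel.h into the new
<linux/panic_notifier.h>. A minimal userspace stand-in of that pattern (not
kernel code: the chain head and the register/call helpers are simplified
mocks of the atomic-notifier API from <linux/notifier.h>):

```c
#include <assert.h>
#include <stddef.h>

#define NOTIFY_DONE 0

/* Simplified mock of struct notifier_block from <linux/notifier.h>. */
struct notifier_block {
    int (*notifier_call)(struct notifier_block *, unsigned long, void *);
    struct notifier_block *next;
};

/* Mock of the chain head that panic_notifier.h declares. */
static struct notifier_block *panic_notifier_list;

/* Mock of atomic_notifier_chain_register(): push onto the chain. */
static void mock_notifier_chain_register(struct notifier_block **list,
                                         struct notifier_block *nb)
{
    nb->next = *list;
    *list = nb;
}

/* Mock of atomic_notifier_call_chain(): invoke every callback in order. */
static void mock_notifier_call_chain(struct notifier_block *list,
                                     unsigned long event, void *data)
{
    for (; list; list = list->next)
        list->notifier_call(list, event, data);
}

/* What a driver's panic handler typically does: record/react to the event. */
static int panic_count;

static int example_panic_event(struct notifier_block *nb,
                               unsigned long event, void *ptr)
{
    (void)nb; (void)event; (void)ptr;
    panic_count++;
    return NOTIFY_DONE;
}

static struct notifier_block example_panic_block = {
    .notifier_call = example_panic_event,
};
```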

 arch/alpha/kernel/setup.c                     |  2 +-
 arch/arm64/kernel/setup.c                     |  1 +
 arch/mips/kernel/relocate.c                   |  1 +
 arch/mips/sgi-ip22/ip22-reset.c               |  1 +
 arch/mips/sgi-ip32/ip32-reset.c               |  1 +
 arch/parisc/kernel/pdc_chassis.c              |  1 +
 arch/powerpc/kernel/setup-common.c            |  1 +
 arch/s390/kernel/ipl.c                        |  1 +
 arch/sparc/kernel/sstate.c                    |  1 +
 arch/um/drivers/mconsole_kern.c               |  1 +
 arch/um/kernel/um_arch.c                      |  1 +
 arch/x86/include/asm/desc.h                   |  1 +
 arch/x86/kernel/cpu/mshyperv.c                |  1 +
 arch/x86/kernel/setup.c                       |  1 +
 arch/x86/purgatory/purgatory.c                |  2 +
 arch/x86/xen/enlighten.c                      |  1 +
 arch/xtensa/platforms/iss/setup.c             |  1 +
 drivers/bus/brcmstb_gisb.c                    |  1 +
 drivers/char/ipmi/ipmi_msghandler.c           |  1 +
 drivers/clk/analogbits/wrpll-cln28hpc.c       |  4 +
 drivers/edac/altera_edac.c                    |  1 +
 drivers/firmware/google/gsmi.c                |  1 +
 drivers/hv/vmbus_drv.c                        |  1 +
 .../hwtracing/coresight/coresight-cpu-debug.c |  1 +
 drivers/leds/trigger/ledtrig-activity.c       |  1 +
 drivers/leds/trigger/ledtrig-heartbeat.c      |  1 +
 drivers/leds/trigger/ledtrig-panic.c          |  1 +
 drivers/misc/bcm-vk/bcm_vk_dev.c              |  1 +
 drivers/misc/ibmasm/heartbeat.c               |  1 +
 drivers/misc/pvpanic/pvpanic.c                |  1 +
 drivers/net/ipa/ipa_smp2p.c                   |  1 +
 drivers/parisc/power.c                        |  1 +
 drivers/power/reset/ltc2952-poweroff.c        |  1 +
 drivers/remoteproc/remoteproc_core.c          |  1 +
 drivers/s390/char/con3215.c                   |  1 +
 drivers/s390/char/con3270.c                   |  1 +
 drivers/s390/char/sclp.c                      |  1 +
 drivers/s390/char/sclp_con.c                  |  1 +
 drivers/s390/char/sclp_vt220.c                |  1 +
 drivers/s390/char/zcore.c                     |  1 +
 drivers/soc/bcm/brcmstb/pm/pm-arm.c           |  1 +
 drivers/staging/olpc_dcon/olpc_dcon.c         |  1 +
 drivers/video/fbdev/hyperv_fb.c               |  1 +
 include/asm-generic/bug.h                     |  3 +-
 include/linux/kernel.h                        | 84 +---------------
 include/linux/panic.h                         | 98 +++++++++++++++++++
 include/linux/panic_notifier.h                | 12 +++
 kernel/hung_task.c                            |  1 +
 kernel/kexec_core.c                           |  1 +
 kernel/panic.c                                |  1 +
 kernel/rcu/tree.c                             |  2 +
 kernel/sysctl.c                               |  1 +
 kernel/trace/trace.c                          |  1 +
 53 files changed, 167 insertions(+), 85 deletions(-)
 create mode 100644 include/linux/panic.h
 create mode 100644 include/linux/panic_notifier.h

diff --git a/arch/alpha/kernel/setup.c b/arch/alpha/kernel/setup.c
index 03dda3beb3bd..5d1296534682 100644
--- a/arch/alpha/kernel/setup.c
+++ b/arch/alpha/kernel/setup.c
@@ -28,6 +28,7 @@
 #include <linux/init.h>
 #include <linux/string.h>
 #include <linux/ioport.h>
+#include <linux/panic_notifier.h>
 #include <linux/platform_device.h>
 #include <linux/memblock.h>
 #include <linux/pci.h>
@@ -46,7 +47,6 @@
 #include <linux/log2.h>
 #include <linux/export.h>
 
-extern struct atomic_notifier_head panic_notifier_list;
 static int alpha_panic_event(struct notifier_block *, unsigned long, void *);
 static struct notifier_block alpha_panic_block = {
 	alpha_panic_event,
diff --git a/arch/arm64/kernel/setup.c b/arch/arm64/kernel/setup.c
index 61845c0821d9..787bc0f601b3 100644
--- a/arch/arm64/kernel/setup.c
+++ b/arch/arm64/kernel/setup.c
@@ -23,6 +23,7 @@
 #include <linux/interrupt.h>
 #include <linux/smp.h>
 #include <linux/fs.h>
+#include <linux/panic_notifier.h>
 #include <linux/proc_fs.h>
 #include <linux/memblock.h>
 #include <linux/of_fdt.h>
diff --git a/arch/mips/kernel/relocate.c b/arch/mips/kernel/relocate.c
index 499a5357c09f..56b51de2dc51 100644
--- a/arch/mips/kernel/relocate.c
+++ b/arch/mips/kernel/relocate.c
@@ -18,6 +18,7 @@
 #include <linux/kernel.h>
 #include <linux/libfdt.h>
 #include <linux/of_fdt.h>
+#include <linux/panic_notifier.h>
 #include <linux/sched/task.h>
 #include <linux/start_kernel.h>
 #include <linux/string.h>
diff --git a/arch/mips/sgi-ip22/ip22-reset.c b/arch/mips/sgi-ip22/ip22-reset.c
index c374f3ceec38..9028dbbb45dd 100644
--- a/arch/mips/sgi-ip22/ip22-reset.c
+++ b/arch/mips/sgi-ip22/ip22-reset.c
@@ -12,6 +12,7 @@
 #include <linux/kernel.h>
 #include <linux/sched/signal.h>
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include <linux/pm.h>
 #include <linux/timer.h>
 
diff --git a/arch/mips/sgi-ip32/ip32-reset.c b/arch/mips/sgi-ip32/ip32-reset.c
index 20d8637340be..18d1c115cd53 100644
--- a/arch/mips/sgi-ip32/ip32-reset.c
+++ b/arch/mips/sgi-ip32/ip32-reset.c
@@ -12,6 +12,7 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/module.h>
+#include <linux/panic_notifier.h>
 #include <linux/sched.h>
 #include <linux/sched/signal.h>
 #include <linux/notifier.h>
diff --git a/arch/parisc/kernel/pdc_chassis.c b/arch/parisc/kernel/pdc_chassis.c
index 75ae88d13909..da154406d368 100644
--- a/arch/parisc/kernel/pdc_chassis.c
+++ b/arch/parisc/kernel/pdc_chassis.c
@@ -20,6 +20,7 @@
 #include <linux/init.h>
 #include <linux/module.h>
 #include <linux/kernel.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 #include <linux/notifier.h>
 #include <linux/cache.h>
diff --git a/arch/powerpc/kernel/setup-common.c b/arch/powerpc/kernel/setup-common.c
index 74a98fff2c2f..046fe21b5c3b 100644
--- a/arch/powerpc/kernel/setup-common.c
+++ b/arch/powerpc/kernel/setup-common.c
@@ -9,6 +9,7 @@
 #undef DEBUG
 
 #include <linux/export.h>
+#include <linux/panic_notifier.h>
 #include <linux/string.h>
 #include <linux/sched.h>
 #include <linux/init.h>
diff --git a/arch/s390/kernel/ipl.c b/arch/s390/kernel/ipl.c
index dba04fbc37a2..36f870dc944f 100644
--- a/arch/s390/kernel/ipl.c
+++ b/arch/s390/kernel/ipl.c
@@ -13,6 +13,7 @@
 #include <linux/init.h>
 #include <linux/device.h>
 #include <linux/delay.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 #include <linux/ctype.h>
 #include <linux/fs.h>
diff --git a/arch/sparc/kernel/sstate.c b/arch/sparc/kernel/sstate.c
index ac8677c3841e..3bcc4ddc6911 100644
--- a/arch/sparc/kernel/sstate.c
+++ b/arch/sparc/kernel/sstate.c
@@ -6,6 +6,7 @@
 
 #include <linux/kernel.h>
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 #include <linux/init.h>
 
diff --git a/arch/um/drivers/mconsole_kern.c b/arch/um/drivers/mconsole_kern.c
index 6d00af25ec6b..328b16f99b30 100644
--- a/arch/um/drivers/mconsole_kern.c
+++ b/arch/um/drivers/mconsole_kern.c
@@ -12,6 +12,7 @@
 #include <linux/mm.h>
 #include <linux/module.h>
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 #include <linux/sched/debug.h>
 #include <linux/proc_fs.h>
diff --git a/arch/um/kernel/um_arch.c b/arch/um/kernel/um_arch.c
index 74e07e748a9b..9512253947d5 100644
--- a/arch/um/kernel/um_arch.c
+++ b/arch/um/kernel/um_arch.c
@@ -7,6 +7,7 @@
 #include <linux/init.h>
 #include <linux/mm.h>
 #include <linux/module.h>
+#include <linux/panic_notifier.h>
 #include <linux/seq_file.h>
 #include <linux/string.h>
 #include <linux/utsname.h>
diff --git a/arch/x86/include/asm/desc.h b/arch/x86/include/asm/desc.h
index 476082a83d1c..ceb12683b6d1 100644
--- a/arch/x86/include/asm/desc.h
+++ b/arch/x86/include/asm/desc.h
@@ -9,6 +9,7 @@
 #include <asm/irq_vectors.h>
 #include <asm/cpu_entry_area.h>
 
+#include <linux/debug_locks.h>
 #include <linux/smp.h>
 #include <linux/percpu.h>
 
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index 22f13343b5da..9e5c6f2b044d 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -17,6 +17,7 @@
 #include <linux/irq.h>
 #include <linux/kexec.h>
 #include <linux/i8253.h>
+#include <linux/panic_notifier.h>
 #include <linux/random.h>
 #include <asm/processor.h>
 #include <asm/hypervisor.h>
diff --git a/arch/x86/kernel/setup.c b/arch/x86/kernel/setup.c
index 72920af0b3c0..bdcdd29efea6 100644
--- a/arch/x86/kernel/setup.c
+++ b/arch/x86/kernel/setup.c
@@ -14,6 +14,7 @@
 #include <linux/initrd.h>
 #include <linux/iscsi_ibft.h>
 #include <linux/memblock.h>
+#include <linux/panic_notifier.h>
 #include <linux/pci.h>
 #include <linux/root_dev.h>
 #include <linux/hugetlb.h>
diff --git a/arch/x86/purgatory/purgatory.c b/arch/x86/purgatory/purgatory.c
index f03b64d9cb51..7558139920f8 100644
--- a/arch/x86/purgatory/purgatory.c
+++ b/arch/x86/purgatory/purgatory.c
@@ -9,6 +9,8 @@
  */
 
 #include <linux/bug.h>
+#include <linux/kernel.h>
+#include <linux/types.h>
 #include <crypto/sha2.h>
 #include <asm/purgatory.h>
 
diff --git a/arch/x86/xen/enlighten.c b/arch/x86/xen/enlighten.c
index aa9f50fccc5d..c79bd0af2e8c 100644
--- a/arch/x86/xen/enlighten.c
+++ b/arch/x86/xen/enlighten.c
@@ -6,6 +6,7 @@
 #include <linux/cpu.h>
 #include <linux/kexec.h>
 #include <linux/slab.h>
+#include <linux/panic_notifier.h>
 
 #include <xen/xen.h>
 #include <xen/features.h>
diff --git a/arch/xtensa/platforms/iss/setup.c b/arch/xtensa/platforms/iss/setup.c
index ed519aee0ec8..d3433e1bb94e 100644
--- a/arch/xtensa/platforms/iss/setup.c
+++ b/arch/xtensa/platforms/iss/setup.c
@@ -14,6 +14,7 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include <linux/printk.h>
 #include <linux/string.h>
 
diff --git a/drivers/bus/brcmstb_gisb.c b/drivers/bus/brcmstb_gisb.c
index 7355fa2cb439..6551286a60cc 100644
--- a/drivers/bus/brcmstb_gisb.c
+++ b/drivers/bus/brcmstb_gisb.c
@@ -6,6 +6,7 @@
 #include <linux/init.h>
 #include <linux/types.h>
 #include <linux/module.h>
+#include <linux/panic_notifier.h>
 #include <linux/platform_device.h>
 #include <linux/interrupt.h>
 #include <linux/sysfs.h>
diff --git a/drivers/char/ipmi/ipmi_msghandler.c b/drivers/char/ipmi/ipmi_msghandler.c
index 8a0e97b33cae..e96cb5c4f97a 100644
--- a/drivers/char/ipmi/ipmi_msghandler.c
+++ b/drivers/char/ipmi/ipmi_msghandler.c
@@ -16,6 +16,7 @@
 
 #include <linux/module.h>
 #include <linux/errno.h>
+#include <linux/panic_notifier.h>
 #include <linux/poll.h>
 #include <linux/sched.h>
 #include <linux/seq_file.h>
diff --git a/drivers/clk/analogbits/wrpll-cln28hpc.c b/drivers/clk/analogbits/wrpll-cln28hpc.c
index 776ead319ae9..7c64ea52a8d5 100644
--- a/drivers/clk/analogbits/wrpll-cln28hpc.c
+++ b/drivers/clk/analogbits/wrpll-cln28hpc.c
@@ -23,8 +23,12 @@
 
 #include <linux/bug.h>
 #include <linux/err.h>
+#include <linux/limits.h>
 #include <linux/log2.h>
 #include <linux/math64.h>
+#include <linux/math.h>
+#include <linux/minmax.h>
+
 #include <linux/clk/analogbits-wrpll-cln28hpc.h>
 
 /* MIN_INPUT_FREQ: minimum input clock frequency, in Hz (Fref_min) */
diff --git a/drivers/edac/altera_edac.c b/drivers/edac/altera_edac.c
index 5f7fd79ec82f..61c21bd880a4 100644
--- a/drivers/edac/altera_edac.c
+++ b/drivers/edac/altera_edac.c
@@ -20,6 +20,7 @@
 #include <linux/of_address.h>
 #include <linux/of_irq.h>
 #include <linux/of_platform.h>
+#include <linux/panic_notifier.h>
 #include <linux/platform_device.h>
 #include <linux/regmap.h>
 #include <linux/types.h>
diff --git a/drivers/firmware/google/gsmi.c b/drivers/firmware/google/gsmi.c
index bb6e77ee3898..adaa492c3d2d 100644
--- a/drivers/firmware/google/gsmi.c
+++ b/drivers/firmware/google/gsmi.c
@@ -19,6 +19,7 @@
 #include <linux/dma-mapping.h>
 #include <linux/fs.h>
 #include <linux/slab.h>
+#include <linux/panic_notifier.h>
 #include <linux/ioctl.h>
 #include <linux/acpi.h>
 #include <linux/io.h>
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 92cb3f7d21d9..57bbbaa4e8f7 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -25,6 +25,7 @@
 
 #include <linux/delay.h>
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include <linux/ptrace.h>
 #include <linux/screen_info.h>
 #include <linux/kdebug.h>
diff --git a/drivers/hwtracing/coresight/coresight-cpu-debug.c b/drivers/hwtracing/coresight/coresight-cpu-debug.c
index 2dcf13de751f..9731d3a96073 100644
--- a/drivers/hwtracing/coresight/coresight-cpu-debug.c
+++ b/drivers/hwtracing/coresight/coresight-cpu-debug.c
@@ -17,6 +17,7 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/moduleparam.h>
+#include <linux/panic_notifier.h>
 #include <linux/pm_qos.h>
 #include <linux/slab.h>
 #include <linux/smp.h>
diff --git a/drivers/leds/trigger/ledtrig-activity.c b/drivers/leds/trigger/ledtrig-activity.c
index 14ba7faaed9e..30bc9df03636 100644
--- a/drivers/leds/trigger/ledtrig-activity.c
+++ b/drivers/leds/trigger/ledtrig-activity.c
@@ -11,6 +11,7 @@
 #include <linux/kernel_stat.h>
 #include <linux/leds.h>
 #include <linux/module.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 #include <linux/sched.h>
 #include <linux/slab.h>
diff --git a/drivers/leds/trigger/ledtrig-heartbeat.c b/drivers/leds/trigger/ledtrig-heartbeat.c
index 36b6709afe9f..7fe0a05574d2 100644
--- a/drivers/leds/trigger/ledtrig-heartbeat.c
+++ b/drivers/leds/trigger/ledtrig-heartbeat.c
@@ -11,6 +11,7 @@
 #include <linux/module.h>
 #include <linux/kernel.h>
 #include <linux/init.h>
+#include <linux/panic_notifier.h>
 #include <linux/slab.h>
 #include <linux/timer.h>
 #include <linux/sched.h>
diff --git a/drivers/leds/trigger/ledtrig-panic.c b/drivers/leds/trigger/ledtrig-panic.c
index 5751cd032f9d..64abf2e91608 100644
--- a/drivers/leds/trigger/ledtrig-panic.c
+++ b/drivers/leds/trigger/ledtrig-panic.c
@@ -8,6 +8,7 @@
 #include <linux/kernel.h>
 #include <linux/init.h>
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include <linux/leds.h>
 #include "../leds.h"
 
diff --git a/drivers/misc/bcm-vk/bcm_vk_dev.c b/drivers/misc/bcm-vk/bcm_vk_dev.c
index 6bfea3210389..ad639ee85b2a 100644
--- a/drivers/misc/bcm-vk/bcm_vk_dev.c
+++ b/drivers/misc/bcm-vk/bcm_vk_dev.c
@@ -9,6 +9,7 @@
 #include <linux/fs.h>
 #include <linux/idr.h>
 #include <linux/interrupt.h>
+#include <linux/panic_notifier.h>
 #include <linux/kref.h>
 #include <linux/module.h>
 #include <linux/mutex.h>
diff --git a/drivers/misc/ibmasm/heartbeat.c b/drivers/misc/ibmasm/heartbeat.c
index 4f5f3bdc814d..59c9a0d95659 100644
--- a/drivers/misc/ibmasm/heartbeat.c
+++ b/drivers/misc/ibmasm/heartbeat.c
@@ -9,6 +9,7 @@
  */
 
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include "ibmasm.h"
 #include "dot_command.h"
 #include "lowlevel.h"
diff --git a/drivers/misc/pvpanic/pvpanic.c b/drivers/misc/pvpanic/pvpanic.c
index 65f70a4da8c0..793ea0c01193 100644
--- a/drivers/misc/pvpanic/pvpanic.c
+++ b/drivers/misc/pvpanic/pvpanic.c
@@ -13,6 +13,7 @@
 #include <linux/mod_devicetable.h>
 #include <linux/module.h>
 #include <linux/platform_device.h>
+#include <linux/panic_notifier.h>
 #include <linux/types.h>
 #include <linux/cdev.h>
 #include <linux/list.h>
diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c
index a5f7a79a1923..34b68dc43886 100644
--- a/drivers/net/ipa/ipa_smp2p.c
+++ b/drivers/net/ipa/ipa_smp2p.c
@@ -8,6 +8,7 @@
 #include <linux/device.h>
 #include <linux/interrupt.h>
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include <linux/soc/qcom/smem.h>
 #include <linux/soc/qcom/smem_state.h>
 
diff --git a/drivers/parisc/power.c b/drivers/parisc/power.c
index ebaf6867b457..456776bd8ee6 100644
--- a/drivers/parisc/power.c
+++ b/drivers/parisc/power.c
@@ -38,6 +38,7 @@
 #include <linux/init.h>
 #include <linux/kernel.h>
 #include <linux/notifier.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 #include <linux/sched/signal.h>
 #include <linux/kthread.h>
diff --git a/drivers/power/reset/ltc2952-poweroff.c b/drivers/power/reset/ltc2952-poweroff.c
index d1495af30081..8688c8ba8894 100644
--- a/drivers/power/reset/ltc2952-poweroff.c
+++ b/drivers/power/reset/ltc2952-poweroff.c
@@ -52,6 +52,7 @@
 #include <linux/slab.h>
 #include <linux/kmod.h>
 #include <linux/module.h>
+#include <linux/panic_notifier.h>
 #include <linux/mod_devicetable.h>
 #include <linux/gpio/consumer.h>
 #include <linux/reboot.h>
diff --git a/drivers/remoteproc/remoteproc_core.c b/drivers/remoteproc/remoteproc_core.c
index 626a6b90fba2..76dd8e2b1e7e 100644
--- a/drivers/remoteproc/remoteproc_core.c
+++ b/drivers/remoteproc/remoteproc_core.c
@@ -20,6 +20,7 @@
 #include <linux/kernel.h>
 #include <linux/module.h>
 #include <linux/device.h>
+#include <linux/panic_notifier.h>
 #include <linux/slab.h>
 #include <linux/mutex.h>
 #include <linux/dma-map-ops.h>
diff --git a/drivers/s390/char/con3215.c b/drivers/s390/char/con3215.c
index 1fd5bca9fa20..02523f4e29f4 100644
--- a/drivers/s390/char/con3215.c
+++ b/drivers/s390/char/con3215.c
@@ -19,6 +19,7 @@
 #include <linux/console.h>
 #include <linux/interrupt.h>
 #include <linux/err.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 #include <linux/serial.h> /* ASYNC_* flags */
 #include <linux/slab.h>
diff --git a/drivers/s390/char/con3270.c b/drivers/s390/char/con3270.c
index e21962c0fd94..87cdbace1453 100644
--- a/drivers/s390/char/con3270.c
+++ b/drivers/s390/char/con3270.c
@@ -13,6 +13,7 @@
 #include <linux/init.h>
 #include <linux/interrupt.h>
 #include <linux/list.h>
+#include <linux/panic_notifier.h>
 #include <linux/types.h>
 #include <linux/slab.h>
 #include <linux/err.h>
diff --git a/drivers/s390/char/sclp.c b/drivers/s390/char/sclp.c
index 986bbbc23d0a..6627820a5eb9 100644
--- a/drivers/s390/char/sclp.c
+++ b/drivers/s390/char/sclp.c
@@ -11,6 +11,7 @@
 #include <linux/kernel_stat.h>
 #include <linux/module.h>
 #include <linux/err.h>
+#include <linux/panic_notifier.h>
 #include <linux/spinlock.h>
 #include <linux/interrupt.h>
 #include <linux/timer.h>
diff --git a/drivers/s390/char/sclp_con.c b/drivers/s390/char/sclp_con.c
index 9b852a47ccc1..cc01a7b8595d 100644
--- a/drivers/s390/char/sclp_con.c
+++ b/drivers/s390/char/sclp_con.c
@@ -10,6 +10,7 @@
 #include <linux/kmod.h>
 #include <linux/console.h>
 #include <linux/init.h>
+#include <linux/panic_notifier.h>
 #include <linux/timer.h>
 #include <linux/jiffies.h>
 #include <linux/termios.h>
diff --git a/drivers/s390/char/sclp_vt220.c b/drivers/s390/char/sclp_vt220.c
index 7f4445b0f819..5b8a7b090a97 100644
--- a/drivers/s390/char/sclp_vt220.c
+++ b/drivers/s390/char/sclp_vt220.c
@@ -9,6 +9,7 @@
 
 #include <linux/module.h>
 #include <linux/spinlock.h>
+#include <linux/panic_notifier.h>
 #include <linux/list.h>
 #include <linux/wait.h>
 #include <linux/timer.h>
diff --git a/drivers/s390/char/zcore.c b/drivers/s390/char/zcore.c
index bd3c724bf695..b5b0848da93b 100644
--- a/drivers/s390/char/zcore.c
+++ b/drivers/s390/char/zcore.c
@@ -15,6 +15,7 @@
 #include <linux/init.h>
 #include <linux/slab.h>
 #include <linux/debugfs.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 
 #include <asm/asm-offsets.h>
diff --git a/drivers/soc/bcm/brcmstb/pm/pm-arm.c b/drivers/soc/bcm/brcmstb/pm/pm-arm.c
index a673fdffe216..3cbb165d6e30 100644
--- a/drivers/soc/bcm/brcmstb/pm/pm-arm.c
+++ b/drivers/soc/bcm/brcmstb/pm/pm-arm.c
@@ -28,6 +28,7 @@
 #include <linux/notifier.h>
 #include <linux/of.h>
 #include <linux/of_address.h>
+#include <linux/panic_notifier.h>
 #include <linux/platform_device.h>
 #include <linux/pm.h>
 #include <linux/printk.h>
diff --git a/drivers/staging/olpc_dcon/olpc_dcon.c b/drivers/staging/olpc_dcon/olpc_dcon.c
index 6d8e9a481786..7284cb4ac395 100644
--- a/drivers/staging/olpc_dcon/olpc_dcon.c
+++ b/drivers/staging/olpc_dcon/olpc_dcon.c
@@ -22,6 +22,7 @@
 #include <linux/device.h>
 #include <linux/uaccess.h>
 #include <linux/ctype.h>
+#include <linux/panic_notifier.h>
 #include <linux/reboot.h>
 #include <linux/olpc-ec.h>
 #include <asm/tsc.h>
diff --git a/drivers/video/fbdev/hyperv_fb.c b/drivers/video/fbdev/hyperv_fb.c
index a7e6eea2c4a1..23999df52739 100644
--- a/drivers/video/fbdev/hyperv_fb.c
+++ b/drivers/video/fbdev/hyperv_fb.c
@@ -52,6 +52,7 @@
 #include <linux/completion.h>
 #include <linux/fb.h>
 #include <linux/pci.h>
+#include <linux/panic_notifier.h>
 #include <linux/efi.h>
 #include <linux/console.h>
 
diff --git a/include/asm-generic/bug.h b/include/asm-generic/bug.h
index b402494883b6..f152b9bb916f 100644
--- a/include/asm-generic/bug.h
+++ b/include/asm-generic/bug.h
@@ -17,7 +17,8 @@
 #endif
 
 #ifndef __ASSEMBLY__
-#include <linux/kernel.h>
+#include <linux/panic.h>
+#include <linux/printk.h>
 
 #ifdef CONFIG_BUG
 
diff --git a/include/linux/kernel.h b/include/linux/kernel.h
index 15d8bad3d2f2..50f4a57cf50c 100644
--- a/include/linux/kernel.h
+++ b/include/linux/kernel.h
@@ -14,6 +14,7 @@
 #include <linux/math.h>
 #include <linux/minmax.h>
 #include <linux/typecheck.h>
+#include <linux/panic.h>
 #include <linux/printk.h>
 #include <linux/build_bug.h>
 #include <linux/static_call_types.h>
@@ -72,7 +73,6 @@
 #define lower_32_bits(n) ((u32)((n) & 0xffffffff))
 
 struct completion;
-struct pt_regs;
 struct user;
 
 #ifdef CONFIG_PREEMPT_VOLUNTARY
@@ -177,14 +177,6 @@ void __might_fault(const char *file, int line);
 static inline void might_fault(void) { }
 #endif
 
-extern struct atomic_notifier_head panic_notifier_list;
-extern long (*panic_blink)(int state);
-__printf(1, 2)
-void panic(const char *fmt, ...) __noreturn __cold;
-void nmi_panic(struct pt_regs *regs, const char *msg);
-extern void oops_enter(void);
-extern void oops_exit(void);
-extern bool oops_may_print(void);
 void do_exit(long error_code) __noreturn;
 void complete_and_exit(struct completion *, long) __noreturn;
 
@@ -370,52 +362,8 @@ extern int __kernel_text_address(unsigned long addr);
 extern int kernel_text_address(unsigned long addr);
 extern int func_ptr_is_kernel_text(void *ptr);
 
-#ifdef CONFIG_SMP
-extern unsigned int sysctl_oops_all_cpu_backtrace;
-#else
-#define sysctl_oops_all_cpu_backtrace 0
-#endif /* CONFIG_SMP */
-
 extern void bust_spinlocks(int yes);
-extern int panic_timeout;
-extern unsigned long panic_print;
-extern int panic_on_oops;
-extern int panic_on_unrecovered_nmi;
-extern int panic_on_io_nmi;
-extern int panic_on_warn;
-extern unsigned long panic_on_taint;
-extern bool panic_on_taint_nousertaint;
-extern int sysctl_panic_on_rcu_stall;
-extern int sysctl_max_rcu_stall_to_panic;
-extern int sysctl_panic_on_stackoverflow;
-
-extern bool crash_kexec_post_notifiers;
 
-/*
- * panic_cpu is used for synchronizing panic() and crash_kexec() execution. It
- * holds a CPU number which is executing panic() currently. A value of
- * PANIC_CPU_INVALID means no CPU has entered panic() or crash_kexec().
- */
-extern atomic_t panic_cpu;
-#define PANIC_CPU_INVALID	-1
-
-/*
- * Only to be used by arch init code. If the user over-wrote the default
- * CONFIG_PANIC_TIMEOUT, honor it.
- */
-static inline void set_arch_panic_timeout(int timeout, int arch_default_timeout)
-{
-	if (panic_timeout == arch_default_timeout)
-		panic_timeout = timeout;
-}
-extern const char *print_tainted(void);
-enum lockdep_ok {
-	LOCKDEP_STILL_OK,
-	LOCKDEP_NOW_UNRELIABLE
-};
-extern void add_taint(unsigned flag, enum lockdep_ok);
-extern int test_taint(unsigned flag);
-extern unsigned long get_taint(void);
 extern int root_mountflags;
 
 extern bool early_boot_irqs_disabled;
@@ -434,36 +382,6 @@ extern enum system_states {
 	SYSTEM_SUSPEND,
 } system_state;
 
-/* This cannot be an enum because some may be used in assembly source. */
-#define TAINT_PROPRIETARY_MODULE	0
-#define TAINT_FORCED_MODULE		1
-#define TAINT_CPU_OUT_OF_SPEC		2
-#define TAINT_FORCED_RMMOD		3
-#define TAINT_MACHINE_CHECK		4
-#define TAINT_BAD_PAGE			5
-#define TAINT_USER			6
-#define TAINT_DIE			7
-#define TAINT_OVERRIDDEN_ACPI_TABLE	8
-#define TAINT_WARN			9
-#define TAINT_CRAP			10
-#define TAINT_FIRMWARE_WORKAROUND	11
-#define TAINT_OOT_MODULE		12
-#define TAINT_UNSIGNED_MODULE		13
-#define TAINT_SOFTLOCKUP		14
-#define TAINT_LIVEPATCH			15
-#define TAINT_AUX			16
-#define TAINT_RANDSTRUCT		17
-#define TAINT_FLAGS_COUNT		18
-#define TAINT_FLAGS_MAX			((1UL << TAINT_FLAGS_COUNT) - 1)
-
-struct taint_flag {
-	char c_true;	/* character printed when tainted */
-	char c_false;	/* character printed when not tainted */
-	bool module;	/* also show as a per-module taint flag */
-};
-
-extern const struct taint_flag taint_flags[TAINT_FLAGS_COUNT];
-
 extern const char hex_asc[];
 #define hex_asc_lo(x)	hex_asc[((x) & 0x0f)]
 #define hex_asc_hi(x)	hex_asc[((x) & 0xf0) >> 4]
diff --git a/include/linux/panic.h b/include/linux/panic.h
new file mode 100644
index 000000000000..f5844908a089
--- /dev/null
+++ b/include/linux/panic.h
@@ -0,0 +1,98 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_PANIC_H
+#define _LINUX_PANIC_H
+
+#include <linux/compiler_attributes.h>
+#include <linux/types.h>
+
+struct pt_regs;
+
+extern long (*panic_blink)(int state);
+__printf(1, 2)
+void panic(const char *fmt, ...) __noreturn __cold;
+void nmi_panic(struct pt_regs *regs, const char *msg);
+extern void oops_enter(void);
+extern void oops_exit(void);
+extern bool oops_may_print(void);
+
+#ifdef CONFIG_SMP
+extern unsigned int sysctl_oops_all_cpu_backtrace;
+#else
+#define sysctl_oops_all_cpu_backtrace 0
+#endif /* CONFIG_SMP */
+
+extern int panic_timeout;
+extern unsigned long panic_print;
+extern int panic_on_oops;
+extern int panic_on_unrecovered_nmi;
+extern int panic_on_io_nmi;
+extern int panic_on_warn;
+
+extern unsigned long panic_on_taint;
+extern bool panic_on_taint_nousertaint;
+
+extern int sysctl_panic_on_rcu_stall;
+extern int sysctl_max_rcu_stall_to_panic;
+extern int sysctl_panic_on_stackoverflow;
+
+extern bool crash_kexec_post_notifiers;
+
+/*
+ * panic_cpu is used for synchronizing panic() and crash_kexec() execution. It
+ * holds a CPU number which is executing panic() currently. A value of
+ * PANIC_CPU_INVALID means no CPU has entered panic() or crash_kexec().
+ */
+extern atomic_t panic_cpu;
+#define PANIC_CPU_INVALID	-1
+
+/*
+ * Only to be used by arch init code. If the user over-wrote the default
+ * CONFIG_PANIC_TIMEOUT, honor it.
+ */
+static inline void set_arch_panic_timeout(int timeout, int arch_default_timeout)
+{
+	if (panic_timeout == arch_default_timeout)
+		panic_timeout = timeout;
+}
+
+/* This cannot be an enum because some may be used in assembly source. */
+#define TAINT_PROPRIETARY_MODULE	0
+#define TAINT_FORCED_MODULE		1
+#define TAINT_CPU_OUT_OF_SPEC		2
+#define TAINT_FORCED_RMMOD		3
+#define TAINT_MACHINE_CHECK		4
+#define TAINT_BAD_PAGE			5
+#define TAINT_USER			6
+#define TAINT_DIE			7
+#define TAINT_OVERRIDDEN_ACPI_TABLE	8
+#define TAINT_WARN			9
+#define TAINT_CRAP			10
+#define TAINT_FIRMWARE_WORKAROUND	11
+#define TAINT_OOT_MODULE		12
+#define TAINT_UNSIGNED_MODULE		13
+#define TAINT_SOFTLOCKUP		14
+#define TAINT_LIVEPATCH			15
+#define TAINT_AUX			16
+#define TAINT_RANDSTRUCT		17
+#define TAINT_FLAGS_COUNT		18
+#define TAINT_FLAGS_MAX			((1UL << TAINT_FLAGS_COUNT) - 1)
+
+struct taint_flag {
+	char c_true;	/* character printed when tainted */
+	char c_false;	/* character printed when not tainted */
+	bool module;	/* also show as a per-module taint flag */
+};
+
+extern const struct taint_flag taint_flags[TAINT_FLAGS_COUNT];
+
+enum lockdep_ok {
+	LOCKDEP_STILL_OK,
+	LOCKDEP_NOW_UNRELIABLE,
+};
+
+extern const char *print_tainted(void);
+extern void add_taint(unsigned flag, enum lockdep_ok);
+extern int test_taint(unsigned flag);
+extern unsigned long get_taint(void);
+
+#endif	/* _LINUX_PANIC_H */
diff --git a/include/linux/panic_notifier.h b/include/linux/panic_notifier.h
new file mode 100644
index 000000000000..41e32483d7a7
--- /dev/null
+++ b/include/linux/panic_notifier.h
@@ -0,0 +1,12 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef _LINUX_PANIC_NOTIFIERS_H
+#define _LINUX_PANIC_NOTIFIERS_H
+
+#include <linux/notifier.h>
+#include <linux/types.h>
+
+extern struct atomic_notifier_head panic_notifier_list;
+
+extern bool crash_kexec_post_notifiers;
+
+#endif	/* _LINUX_PANIC_NOTIFIERS_H */
diff --git a/kernel/hung_task.c b/kernel/hung_task.c
index bb2e3e15c84c..2871076e4d29 100644
--- a/kernel/hung_task.c
+++ b/kernel/hung_task.c
@@ -15,6 +15,7 @@
 #include <linux/kthread.h>
 #include <linux/lockdep.h>
 #include <linux/export.h>
+#include <linux/panic_notifier.h>
 #include <linux/sysctl.h>
 #include <linux/suspend.h>
 #include <linux/utsname.h>
diff --git a/kernel/kexec_core.c b/kernel/kexec_core.c
index f099baee3578..4b34a9aa32bc 100644
--- a/kernel/kexec_core.c
+++ b/kernel/kexec_core.c
@@ -26,6 +26,7 @@
 #include <linux/suspend.h>
 #include <linux/device.h>
 #include <linux/freezer.h>
+#include <linux/panic_notifier.h>
 #include <linux/pm.h>
 #include <linux/cpu.h>
 #include <linux/uaccess.h>
diff --git a/kernel/panic.c b/kernel/panic.c
index 332736a72a58..edad89660a2b 100644
--- a/kernel/panic.c
+++ b/kernel/panic.c
@@ -23,6 +23,7 @@
 #include <linux/reboot.h>
 #include <linux/delay.h>
 #include <linux/kexec.h>
+#include <linux/panic_notifier.h>
 #include <linux/sched.h>
 #include <linux/sysrq.h>
 #include <linux/init.h>
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 3a5fef9fc934..faa847ce28cd 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -32,6 +32,8 @@
 #include <linux/export.h>
 #include <linux/completion.h>
 #include <linux/moduleparam.h>
+#include <linux/panic.h>
+#include <linux/panic_notifier.h>
 #include <linux/percpu.h>
 #include <linux/notifier.h>
 #include <linux/cpu.h>
diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 6e0b77f1117c..304be14fa09b 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -27,6 +27,7 @@
 #include <linux/sysctl.h>
 #include <linux/bitmap.h>
 #include <linux/signal.h>
+#include <linux/panic.h>
 #include <linux/printk.h>
 #include <linux/proc_fs.h>
 #include <linux/security.h>
diff --git a/kernel/trace/trace.c b/kernel/trace/trace.c
index 560e4c8d3825..1c4e702133e8 100644
--- a/kernel/trace/trace.c
+++ b/kernel/trace/trace.c
@@ -39,6 +39,7 @@
 #include <linux/slab.h>
 #include <linux/ctype.h>
 #include <linux/init.h>
+#include <linux/panic_notifier.h>
 #include <linux/poll.h>
 #include <linux/nmi.h>
 #include <linux/fs.h>
-- 
2.30.2



From xen-devel-bounces@lists.xenproject.org Tue May 11 08:52:26 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161901-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161901: regressions - trouble: blocked/broken/fail/pass
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 08:51:55 +0000

flight 161901 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161901/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 151777
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 151777
 build-arm64                   4 host-install(4)        broken REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              39954c76a6eb2a1e8a2cd1183579380982cf2a6f
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  305 days
Failing since        151818  2020-07-11 04:18:52 Z  304 days  297 attempts
Testing same since   161901  2021-05-11 04:20:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)

Not pushing.

(No revision log; it would be 57035 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125660.236504 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgc-0003GJ-NT; Tue, 11 May 2021 09:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125660.236504; Tue, 11 May 2021 09:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgc-0003Es-FY; Tue, 11 May 2021 09:28:18 +0000
Received: by outflank-mailman (input) for mailman id 125660;
 Tue, 11 May 2021 09:28:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOga-0003CR-Rx
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:16 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8be17c6-3df0-41f8-b820-c3ba412f30a1;
 Tue, 11 May 2021 09:28:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8be17c6-3df0-41f8-b820-c3ba412f30a1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725294;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=Ig18it7r326gV2XAUVuedZDlWR+ZhbMI7srkYdlqQWk=;
  b=C0tU7UnRHJDhUeMDGv5DSymZIkN69qjXx4+jOUopufA3H/gcx2SgsiAh
   e6wOTE7w+Ph5/3oTDbpN5IjBVaiwMEertn6NnH8rX6lSdg3cxpnb4LDqM
   yLMQqhMLLb+y4n4tTZ/+c1bJlKRwaOoR5fc8qqeoiMGPjjZU58lOE66Iu
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: FKZMmZTfuOHzCSOphczn+T+cD8ZzpmOGjfvJtrlD4OdnQSgVio+HHTdMIIP1FeN+yCwNLiiECJ
 Fyab+38y9gvxkZ6PAu44w/1heJM9dTTRcFKqUy8IEJpiqf40qTgkmd+jVAqSILpwogCyGxsYQQ
 6LbUNo4JgOvOAAQiKXBs87T0ilfVKot1xygXGD3N3pvGKuwU51FRcHvQi9TcoOfU+JWXDi25lG
 u7sXGgWrQKDRW3TQ3j0/Ecm2HogfT9bR6dY9HitdrEnb3dFx6i2t2bI/OcY+Lb/PJCcHZtKTvw
 L/Q=
X-SBRS: 5.1
X-MesageID: 43902497
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:FxMQt68r8nyF5+MpT1Zuk+DUI+orL9Y04lQ7vn2YSXRuHPBw8P
 re+MjztCWE7gr5N0tBpTntAsW9qBDnhPtICOsqTNSftWDd0QPCRuxfBOPZslrd8kbFl9K1u5
 0OT0EHMqyTMWRH
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="43902497"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 0/8] Fix libxl with QEMU 6.0 + remove some more deprecated usages.
Date: Tue, 11 May 2021 10:28:02 +0100
Message-ID: <20210511092810.13759-1-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Patch series available in this git branch:
https://xenbits.xen.org/git-http/people/aperard/xen-unstable.git br.deprecated-qemu-qmp-and-cmd-v2

v2:
- fix coding style in patch 3
- all reviewed

The Xen 4.15 release went out just before QEMU 6.0 and won't be compatible
with the latter. This patch series fixes libxl to replace use of QMP commands
that have been removed from QEMU, and to fix usage of deprecated commands and
parameters that will be removed from QEMU in the future.

All of the series should be backported to at least Xen 4.15, or it won't be
possible to migrate, hotplug CPUs, or change the CD-ROM of an HVM guest when
QEMU 6.0 or newer is used. QEMU 6.0 is about to be released, within a week.

Backport: 4.15

Anthony PERARD (8):
  libxl: Replace deprecated QMP command by "query-cpus-fast"
  libxl: Replace QEMU's command line short-form boolean option
  libxl: Replace deprecated "cpu-add" QMP command by "device_add"
  libxl: Use -device for cd-rom drives
  libxl: Assert qmp_ev's state in qmp_ev_qemu_compare_version
  libxl: Export libxl__qmp_ev_qemu_compare_version
  libxl: Use `id` with the "eject" QMP command
  libxl: Replace QMP command "change" by "blockdev-change-media"

 tools/libs/light/libxl_disk.c     |  67 +++++++++--
 tools/libs/light/libxl_dm.c       |  30 +++--
 tools/libs/light/libxl_domain.c   | 192 ++++++++++++++++++++++++++++--
 tools/libs/light/libxl_internal.h |   8 ++
 tools/libs/light/libxl_qmp.c      |   6 +-
 5 files changed, 272 insertions(+), 31 deletions(-)

-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125661.236497 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgc-0003Ck-BT; Tue, 11 May 2021 09:28:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125661.236497; Tue, 11 May 2021 09:28:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgc-0003Cd-8V; Tue, 11 May 2021 09:28:18 +0000
Received: by outflank-mailman (input) for mailman id 125661;
 Tue, 11 May 2021 09:28:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOga-0003CS-TG
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:16 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6181c3ad-3937-48ce-b657-1aedeeacc2a5;
 Tue, 11 May 2021 09:28:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6181c3ad-3937-48ce-b657-1aedeeacc2a5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725295;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=Lj/dnGcpZd9IQ9qo88EitYJiLnP3UnXfOqQmMLE15wg=;
  b=Snjy7GUECs3uD06LLyLADsC46UuYOG0CAtEVTSRHyiOgTneJEWmMnSrL
   daOwNNnX/NmmDJ7n9cduQ44OX0caJ32vHzRKjEh+yd4VZ0dQj6bND5f6g
   ZI72RE7BsnLENuNOXe1OpMdm0BiRyaKg5viBSPDcOlvr8L5AloHm5/4Qw
   s=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: U5WgJknRmmH6B6HptDW+4J1OkgsyHDtbD1e0NQvPWdXliV17Y3OwHm7agBTjuSDvYwmyRbj/RG
 gj8LSTFz4RAKEIhcQO3rIaA58IHF16DREYcBMHqCev9bXv5OJASG/imqCJZzbUR9v+tlRn954M
 +I0JOAjgI/hYpW8JfZ8W1r5AW9gV83XH4gDctJdLxodfFfQwlyDr2hOSzwEUA0AAkMXGC3u9Im
 zgXmqNPcM0n2PcNVvqDIaAJdD2Ir9kgJMELAtIvXlfprOtNcKfrZvo8UGDS0k0sitEnYoFhBY5
 YK0=
X-SBRS: 5.1
X-MesageID: 43528821
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:WrQI+KHdR6LotNM/pLqE0MeALOsnbusQ8zAXP0AYc3Jom6uj5q
 aTdZUgpGfJYVkqOE3I9ertBEDEewK4yXcX2/h3AV7BZniEhILAFugLhuGO/9SjIVybygc079
 YYT0EUMrzN5DZB4voSmDPIceod/A==
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="43528821"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jason Andryuk
	<jandryuk@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 1/8] libxl: Replace deprecated QMP command by "query-cpus-fast"
Date: Tue, 11 May 2021 10:28:03 +0100
Message-ID: <20210511092810.13759-2-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

We use the deprecated QMP command "query-cpus", which is removed in the
QEMU 6.0 release. Its replacement is "query-cpus-fast", which has been
available since QEMU 2.12 (April 2018).

This patch tries the new command first and, when the command isn't
available, falls back to the deprecated one so libxl still works
with older QEMU versions.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---

Notes:
    This is v2 of '[XEN PATCH for-4.15] libxl: Replace deprecated QMP
    command by "query-cpus-fast"' as the patch never made it into the
    release.
    
    changes:
    - introduce a fallback for when the new command isn't available.

 tools/libs/light/libxl_domain.c | 103 ++++++++++++++++++++++++++++++--
 1 file changed, 98 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 5d4ec9071160..8c003aa7cb04 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -1740,6 +1740,35 @@ static int libxl__set_vcpuonline_xenstore(libxl__gc *gc, uint32_t domid,
     return rc;
 }
 
+static int qmp_parse_query_cpus_fast(libxl__gc *gc,
+                                     libxl_domid domid,
+                                     const libxl__json_object *response,
+                                     libxl_bitmap *const map)
+{
+    int i;
+    const libxl__json_object *cpu;
+
+    libxl_bitmap_set_none(map);
+    /* Parse response to QMP command "query-cpus-fast":
+     * [ { 'cpu-index': 'int',...} ]
+     */
+    for (i = 0; (cpu = libxl__json_array_get(response, i)); i++) {
+        unsigned int cpu_index;
+        const libxl__json_object *o;
+
+        o = libxl__json_map_get("cpu-index", cpu, JSON_INTEGER);
+        if (!o) {
+            LOGD(ERROR, domid, "Failed to retrieve CPU index.");
+            return ERROR_QEMU_API;
+        }
+
+        cpu_index = libxl__json_object_get_integer(o);
+        libxl_bitmap_set(map, cpu_index);
+    }
+
+    return 0;
+}
+
 static int qmp_parse_query_cpus(libxl__gc *gc,
                                 libxl_domid domid,
                                 const libxl__json_object *response,
@@ -1778,8 +1807,13 @@ typedef struct set_vcpuonline_state {
     int index; /* for loop on final_map */
 } set_vcpuonline_state;
 
+static void set_vcpuonline_qmp_cpus_fast_queried(libxl__egc *,
+    libxl__ev_qmp *, const libxl__json_object *, int rc);
 static void set_vcpuonline_qmp_cpus_queried(libxl__egc *,
     libxl__ev_qmp *, const libxl__json_object *, int rc);
+static void set_vcpuonline_qmp_query_cpus_parse(libxl__egc *,
+    libxl__ev_qmp *qmp, const libxl__json_object *,
+    bool query_cpus_fast, int rc);
 static void set_vcpuonline_qmp_add_cpu(libxl__egc *,
     libxl__ev_qmp *, const libxl__json_object *response, int rc);
 static void set_vcpuonline_timeout(libxl__egc *egc,
@@ -1840,8 +1874,8 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid,
                                              set_vcpuonline_timeout,
                                              LIBXL_QMP_CMD_TIMEOUT * 1000);
             if (rc) goto out;
-            qmp->callback = set_vcpuonline_qmp_cpus_queried;
-            rc = libxl__ev_qmp_send(egc, qmp, "query-cpus", NULL);
+            qmp->callback = set_vcpuonline_qmp_cpus_fast_queried;
+            rc = libxl__ev_qmp_send(egc, qmp, "query-cpus-fast", NULL);
             if (rc) goto out;
             return AO_INPROGRESS;
         default:
@@ -1860,11 +1894,39 @@ int libxl_set_vcpuonline(libxl_ctx *ctx, uint32_t domid,
     return AO_INPROGRESS;
 }
 
+static void set_vcpuonline_qmp_cpus_fast_queried(libxl__egc *egc,
+    libxl__ev_qmp *qmp, const libxl__json_object *response, int rc)
+{
+    EGC_GC;
+    set_vcpuonline_state *svos = CONTAINER_OF(qmp, *svos, qmp);
+
+    if (rc == ERROR_QMP_COMMAND_NOT_FOUND) {
+        /* Try again, we are probably talking to a QEMU older than 2.12 */
+        qmp->callback = set_vcpuonline_qmp_cpus_queried;
+        rc = libxl__ev_qmp_send(egc, qmp, "query-cpus", NULL);
+        if (rc) goto out;
+        return;
+    }
+
+out:
+    set_vcpuonline_qmp_query_cpus_parse(egc, qmp, response, true, rc);
+}
+
 static void set_vcpuonline_qmp_cpus_queried(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc)
 {
     EGC_GC;
     set_vcpuonline_state *svos = CONTAINER_OF(qmp, *svos, qmp);
+
+    set_vcpuonline_qmp_query_cpus_parse(egc, qmp, response, false, rc);
+}
+
+static void set_vcpuonline_qmp_query_cpus_parse(libxl__egc *egc,
+    libxl__ev_qmp *qmp, const libxl__json_object *response,
+    bool query_cpus_fast, int rc)
+{
+    EGC_GC;
+    set_vcpuonline_state *svos = CONTAINER_OF(qmp, *svos, qmp);
     int i;
     libxl_bitmap current_map;
 
@@ -1876,7 +1938,11 @@ static void set_vcpuonline_qmp_cpus_queried(libxl__egc *egc,
     if (rc) goto out;
 
     libxl_bitmap_alloc(CTX, &current_map, svos->info.vcpu_max_id + 1);
-    rc = qmp_parse_query_cpus(gc, qmp->domid, response, &current_map);
+    if (query_cpus_fast) {
+        rc = qmp_parse_query_cpus_fast(gc, qmp->domid, response, &current_map);
+    } else {
+        rc = qmp_parse_query_cpus(gc, qmp->domid, response, &current_map);
+    }
     if (rc) goto out;
 
     libxl_bitmap_copy_alloc(CTX, final_map, svos->cpumap);
@@ -2121,6 +2187,9 @@ typedef struct {
 
 static void retrieve_domain_configuration_lock_acquired(
     libxl__egc *egc, libxl__ev_slowlock *, int rc);
+static void retrieve_domain_configuration_cpu_fast_queried(
+    libxl__egc *egc, libxl__ev_qmp *qmp,
+    const libxl__json_object *response, int rc);
 static void retrieve_domain_configuration_cpu_queried(
     libxl__egc *egc, libxl__ev_qmp *qmp,
     const libxl__json_object *response, int rc);
@@ -2198,8 +2267,8 @@ static void retrieve_domain_configuration_lock_acquired(
         if (rc) goto out;
         libxl_bitmap_alloc(CTX, &rdcs->qemuu_cpus,
                            d_config->b_info.max_vcpus);
-        rdcs->qmp.callback = retrieve_domain_configuration_cpu_queried;
-        rc = libxl__ev_qmp_send(egc, &rdcs->qmp, "query-cpus", NULL);
+        rdcs->qmp.callback = retrieve_domain_configuration_cpu_fast_queried;
+        rc = libxl__ev_qmp_send(egc, &rdcs->qmp, "query-cpus-fast", NULL);
         if (rc) goto out;
         has_callback = true;
     }
@@ -2210,6 +2279,30 @@ static void retrieve_domain_configuration_lock_acquired(
         retrieve_domain_configuration_end(egc, rdcs, rc);
 }
 
+static void retrieve_domain_configuration_cpu_fast_queried(
+    libxl__egc *egc, libxl__ev_qmp *qmp,
+    const libxl__json_object *response, int rc)
+{
+    EGC_GC;
+    retrieve_domain_configuration_state *rdcs =
+        CONTAINER_OF(qmp, *rdcs, qmp);
+
+    if (rc == ERROR_QMP_COMMAND_NOT_FOUND) {
+        /* Try again, we are probably talking to a QEMU older than 2.12 */
+        rdcs->qmp.callback = retrieve_domain_configuration_cpu_queried;
+        rc = libxl__ev_qmp_send(egc, &rdcs->qmp, "query-cpus", NULL);
+        if (rc) goto out;
+        return;
+    }
+
+    if (rc) goto out;
+
+    rc = qmp_parse_query_cpus_fast(gc, qmp->domid, response, &rdcs->qemuu_cpus);
+
+out:
+    retrieve_domain_configuration_end(egc, rdcs, rc);
+}
+
 static void retrieve_domain_configuration_cpu_queried(
     libxl__egc *egc, libxl__ev_qmp *qmp,
     const libxl__json_object *response, int rc)
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125662.236522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgh-0003mL-SS; Tue, 11 May 2021 09:28:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125662.236522; Tue, 11 May 2021 09:28:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgh-0003mB-OE; Tue, 11 May 2021 09:28:23 +0000
Received: by outflank-mailman (input) for mailman id 125662;
 Tue, 11 May 2021 09:28:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOgf-0003CS-Pm
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:21 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85060c84-865a-4bc3-9c6f-515465efa1e0;
 Tue, 11 May 2021 09:28:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85060c84-865a-4bc3-9c6f-515465efa1e0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725296;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=jiAw2u4kXozSgOauXMnxK9qZm/ervdgw/ztBsqd2eLI=;
  b=OiVYsSK8TKUSFlP5C3uSqoIRcxek28mdL9b65ta7a8gp3UgoChkTmUIL
   hqsab+U4WXR0Vc3nUXIHjolsm895tl4e51Ugyy+UOJQjomwdc78ZSEfMg
   LISyDf48T9dDpNWsHViSyGntkp6OfrZSWNCWNyZZR+9ewggpDILDVD3ZJ
   Q=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: cbBs2QPmEOmmoXSsPEzkdgREFMXI17afDX3VWPunzRw5ia2Pv/t5ZZ+n53QuxVXafus7Yc2xsE
 elalCSepbXc9ocXLb9UUsJHPe/ZOhHJtYNGXJQHQtVuU+P4qy1I7ieXHdh4nWbBm3xETgQ7vxQ
 U1Hxlvh3Ge8Av5cBnmOWxQPFOVlyn4uFnu9M9SsPzCrLfpz8gkEoM3aBGazP1cAqVYjb6NeGcN
 86E5+XQx5wrHVnfOjX7x706fqTzQWHJWOxtIityel7zpsSHkEkgXFHw0ZudskN77SpWvVHUjG4
 rKk=
X-SBRS: 5.1
X-MesageID: 45044864
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:3oOxZahw3hufCEIkGsqYbVu0PnBQX+x13DAbv31ZSRFFG/FwyP
 rBoB1L73DJYWgqNE3I+OrwdJVoLkmsk6KdjbNhXotKGTOWw1dAT7sSorcKoQeQYhEWn9Q1vc
 wLHskfNDSzNykDsS+T2nj+Lz9K+qjjzEncv5a4854bd3APV0gP1XYaNu+zKDwKeCB2Qb4CUL
 aM7MtOoDStPV4NaN6gO3UDV+/f4/XWiZPPe3c9dlAawTjLqQntxK/xEhCe0BtbeShI260e/W
 /MlBG8zrm/ssu81gTX2wbontVrcZrau5t+7f63+4oowwbX+0OVjbFaKv6/VGpcmpDS1L9lqq
 iJn/5qBbUI15qYRBDLnfKq4Xin7N9m0Q6d9bfM6UGT0fDRVXY0DdFMipledQac4008vMtk2K
 YOxG6BsYFLZCmw1hgVyuK4Hy2CrHDE6kbKUNRj+EC2eeMlGcxsRaV2xjIlLH7BJlOy1GkDKp
 giMCjx3ociTbqqVQGugoA0+q3fYp0aJGbzfnQ/
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="45044864"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jason Andryuk
	<jandryuk@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 2/8] libxl: Replace QEMU's command line short-form boolean option
Date: Tue, 11 May 2021 10:28:04 +0100
Message-ID: <20210511092810.13759-3-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

Short-form boolean options are deprecated in QEMU 6.0. The upstream commit
that deprecates them is ccd3b3b8112b ("qemu-option: warn for short-form
boolean options").

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_dm.c | 14 +++++++-------
 1 file changed, 7 insertions(+), 7 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 3599a82ef01b..0a0c1ef7c62e 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -977,14 +977,14 @@ static char *dm_spice_options(libxl__gc *gc,
     if (spice->host)
         opt = GCSPRINTF("%s,addr=%s", opt, spice->host);
     if (libxl_defbool_val(spice->disable_ticketing))
-        opt = GCSPRINTF("%s,disable-ticketing", opt);
+        opt = GCSPRINTF("%s,disable-ticketing=on", opt);
     else
         opt = GCSPRINTF("%s,password=%s", opt, spice->passwd);
     opt = GCSPRINTF("%s,agent-mouse=%s", opt,
                     libxl_defbool_val(spice->agent_mouse) ? "on" : "off");
 
     if (!libxl_defbool_val(spice->clipboard_sharing))
-        opt = GCSPRINTF("%s,disable-copy-paste", opt);
+        opt = GCSPRINTF("%s,disable-copy-paste=on", opt);
 
     if (spice->image_compression)
         opt = GCSPRINTF("%s,image-compression=%s", opt,
@@ -1224,7 +1224,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         flexarray_append(dm_args, "-chardev");
         if (state->dm_monitor_fd >= 0) {
             flexarray_append(dm_args,
-                GCSPRINTF("socket,id=libxl-cmd,fd=%d,server,nowait",
+                GCSPRINTF("socket,id=libxl-cmd,fd=%d,server=on,wait=off",
                           state->dm_monitor_fd));
 
             /*
@@ -1237,7 +1237,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         } else {
             flexarray_append(dm_args,
                              GCSPRINTF("socket,id=libxl-cmd,"
-                                       "path=%s,server,nowait",
+                                       "path=%s,server=on,wait=off",
                                        libxl__qemu_qmp_path(gc, guest_domid)));
         }
 
@@ -1247,7 +1247,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         flexarray_append(dm_args, "-chardev");
         flexarray_append(dm_args,
                          GCSPRINTF("socket,id=libxenstat-cmd,"
-                                        "path=%s/qmp-libxenstat-%d,server,nowait",
+                                        "path=%s/qmp-libxenstat-%d,server=on,wait=off",
                                         libxl__run_dir_path(), guest_domid));
 
         flexarray_append(dm_args, "-mon");
@@ -1264,7 +1264,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             case LIBXL_CHANNEL_CONNECTION_SOCKET:
                 path = guest_config->channels[i].u.socket.path;
                 chardev = GCSPRINTF("socket,id=libxl-channel%d,path=%s,"
-                                    "server,nowait", devid, path);
+                                    "server=on,wait=off", devid, path);
                 break;
             default:
                 /* We've forgotten to add the clause */
@@ -1577,7 +1577,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
         nics[i].colo_##sock_port) {                                         \
         flexarray_append(dm_args, "-chardev");                              \
         flexarray_append(dm_args,                                           \
-            GCSPRINTF("socket,id=%s,host=%s,port=%s,server,nowait",         \
+            GCSPRINTF("socket,id=%s,host=%s,port=%s,server=on,wait=off",    \
                       nics[i].colo_##sock_id,                               \
                       nics[i].colo_##sock_ip,                               \
                       nics[i].colo_##sock_port));                           \
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125663.236526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgi-0003pI-5f; Tue, 11 May 2021 09:28:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125663.236526; Tue, 11 May 2021 09:28:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgi-0003op-0W; Tue, 11 May 2021 09:28:24 +0000
Received: by outflank-mailman (input) for mailman id 125663;
 Tue, 11 May 2021 09:28:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOgg-0003ke-62
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:22 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22332a16-40d8-4b02-b66d-8147fb086ada;
 Tue, 11 May 2021 09:28:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22332a16-40d8-4b02-b66d-8147fb086ada
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725299;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=/Z5o0fCLmaSsMSrYh8CFK3qvyQu5rxrgLPAepK/jeWc=;
  b=Hs29rFC6RGkzL/odGi12nQ7e0RyIoQY7lHk0icUpKrXEzuXA6Kf5MfwS
   ve8DWH1yHlS0INWYq7ETlue7d0gcP8Olp9qfL7XThoRFWwSX/3CxQk+rB
   xC5GpahehBnXojlt5H0uNOxGzskcDa7IDEgXN7Z6dMV9kzAEux+UR6Syc
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: WpfxmiJT8DXx6D3rvG677HDsf5LlmX52Dzeyb5yWzo4I7DLG2p0QEyKmkLvnd4d+8AJlyGK2FU
 F+lkIQfHO5nyxotHw93JLYLl3qXuTENC8ItMhlRqQsqxq2zbugnLQYGNT4FjNpB5qlBuisIpTL
 MGSKtiXF2KEA4/Szk4RTeoJS1PVOUeQCBALWSgJOd9vNsklVVrUiczg6lxUFSTsLh57v3Eet7s
 tyVXzlvv9eH4IpXHYzSUzX/r6lssaD9mwuXQOT68vqNUB5FsHLDRYUKwNLlMdIQBHDM6EQXqsL
 zqg=
X-SBRS: 5.1
X-MesageID: 43508703
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:8UtQFKkMC4Z+dnqtAc2Yumge+7jpDfLo3DAbv31ZSRFFG/Fw9/
 rCoB17726QtN91YhsdcL+7V5VoLUmzyXcX2/hyAV7BZmnbUQKTRekP0WKL+Vbd8kbFh41gPM
 lbEpSXCLfLfCJHZcSR2njELz73quP3jJxBho3lvghQpRkBUdAF0+/gYDzranGfQmN9dP0EPa
 vZ3OVrjRy6d08aa8yqb0N1JNQq97Xw5fTbiQdtPW9f1DWz
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="43508703"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jason Andryuk
	<jandryuk@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 4/8] libxl: Use -device for cd-rom drives
Date: Tue, 11 May 2021 10:28:06 +0100
Message-ID: <20210511092810.13759-5-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

This allows setting an `id` on the device instead of only on the drive. We
are going to need the `id` with the "eject" and
"blockdev-change-media" QMP commands, as using the `device` parameter
with those is deprecated. (`device` is the `id` of the `-drive` on the
command line.)

We set the same `id` on both -device and -drive, as QEMU doesn't
complain, and we can then do either "eject id=$id" or "eject
device=$id".

Using "-drive + -device" instead of only "-drive" has been
available since at least QEMU 0.15, and seems to be the preferred way, as it
separates the host part (-drive, which describes the disk image location
and format) from the guest part (-device, which describes the emulated
device). More information in qemu.git/docs/qdev-device-use.txt .

Changing the command line for the cdrom across migration seems fine.
Also, the documentation about migration in QEMU explains that the device
state ID is formed from a bus name and a device address, so the
second IDE bus with the first device address on the bus stays the same
whether written as "-drive if=ide,index=2" or "-device
ide-cd,bus=ide.1,unit=0".
See qemu.git/docs/devel/migration.rst .

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_dm.c | 16 +++++++++++++---
 1 file changed, 13 insertions(+), 3 deletions(-)

diff --git a/tools/libs/light/libxl_dm.c b/tools/libs/light/libxl_dm.c
index 0a0c1ef7c62e..5b01cf284163 100644
--- a/tools/libs/light/libxl_dm.c
+++ b/tools/libs/light/libxl_dm.c
@@ -1913,6 +1913,7 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
             }
 
             if (disks[i].is_cdrom) {
+                const char *drive_id;
                 if (disk > 4) {
                     LOGD(WARN, guest_domid, "Emulated CDROM can be only one of the first 4 disks.\n"
                          "Disk %s will be available via PV drivers but not as an "
@@ -1920,13 +1921,22 @@ static int libxl__build_device_model_args_new(libxl__gc *gc,
                          disks[i].vdev);
                     continue;
                 }
-                drive = libxl__sprintf(gc,
-                         "if=ide,index=%d,readonly=on,media=cdrom,id=ide-%i",
-                         disk, dev_number);
+
+                drive_id = GCSPRINTF("ide-%i", dev_number);
+                drive = GCSPRINTF("if=none,readonly=on,id=%s", drive_id);
 
                 if (target_path)
                     drive = libxl__sprintf(gc, "%s,file=%s,format=%s",
                                            drive, target_path, format);
+
+                flexarray_vappend(dm_args,
+                    "-drive", drive,
+                    "-device",
+                    GCSPRINTF("ide-cd,id=%s,drive=%s,bus=ide.%u,unit=%u",
+                              drive_id, drive_id,
+                              disk / 2, disk % 2),
+                    NULL);
+                continue;
             } else {
                 /*
                  * Explicit sd disks are passed through as is.
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125664.236546 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgm-0004Sr-Nj; Tue, 11 May 2021 09:28:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125664.236546; Tue, 11 May 2021 09:28:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgm-0004Sb-Ja; Tue, 11 May 2021 09:28:28 +0000
Received: by outflank-mailman (input) for mailman id 125664;
 Tue, 11 May 2021 09:28:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOgk-0003CS-Px
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:26 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 82d3ce8f-aaa5-4fa3-9509-4f83f14f2936;
 Tue, 11 May 2021 09:28:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 82d3ce8f-aaa5-4fa3-9509-4f83f14f2936
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725297;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=K8gPTzU82ZOUTi54+69xvIkJJNm7Kwz13GZQD0G5vXM=;
  b=Uk/lZAlilwApqdLm6/Tk6abu6cQ1ESLp3dx82f4AnFizxUxeWtrUhH1p
   SH/LPWc2cg3NKigMeEltZLGcqFw4Qw+aWKMUUB0CxH8r2KMWYj7Nm0K6l
   5OryhWGec5s5oMR70fzQjaGmfx//g69GjqH4Wb77+PBKBMGNJ8k95GqLp
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Sp4KX4JisA/iWq0YvBzhRNcLG4kFU0wshITCFgIWluPdyDNEqYUIzIIlvjEatra8IOdhV+Kv6V
 N+u9jyfGD0QZ/mi+Ozd27aIWBs1jCRlzBA00Csj7zkqEx7zLhzfepNY0ECBFQmpGqgowjz20DO
 4gVQcGZPfRd3BS200wToEw1ssZXqYGVgNzbQvdzhumGbhmPZ1Agwj4ZK9bAHqkvGJ3y2SE/+RU
 vIch9P+FaYg6m+o8m7LrwbssnMMBa+oZzudRvE03uzyIEwAX20vM4pRT9ETy1I2jzXfrqPlCL/
 Pa4=
X-SBRS: 5.1
X-MesageID: 43313589
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:DMcLoq1yECnHCjlGXA8tlwqjBIokLtp133Aq2lEZdPRUGvb3qy
 nIpoV86faUskdoZJhOo7C90cW7LU80sKQFhLX5Xo3SOzUO2lHYT72KhLGKq1aLdhEWtNQtsZ
 uIG5IOceEYZmIasS+V2maF+q4bsbu6zJw=
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="43313589"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jason Andryuk
	<jandryuk@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 3/8] libxl: Replace deprecated "cpu-add" QMP command by "device_add"
Date: Tue, 11 May 2021 10:28:05 +0100
Message-ID: <20210511092810.13759-4-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The command "cpu-add" for CPU hotplug is deprecated and has been
removed from QEMU 6.0 (April 2021). We now need to add CPUs with the
"device_add" command.

In order to find out which parameters to pass to "device_add", we first
make a call to "query-hotpluggable-cpus", which lists the CPU drivers
and properties.

The algorithm that figures out which CPU to add, and by extension whether
any CPU needs to be hotplugged at all, is in the function that adds the
CPUs. Because of that, the command "query-hotpluggable-cpus" is always
called, even when not needed.

In case we are using a version of QEMU older than 2.7 (Sept 2016),
which doesn't have "query-hotpluggable-cpus", we fall back to using
"cpu-add".

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---

Notes:
    v2:
    - fix coding style

 tools/libs/light/libxl_domain.c | 89 ++++++++++++++++++++++++++++++++-
 1 file changed, 87 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_domain.c b/tools/libs/light/libxl_domain.c
index 8c003aa7cb04..c00c36c92879 100644
--- a/tools/libs/light/libxl_domain.c
+++ b/tools/libs/light/libxl_domain.c
@@ -1805,6 +1805,7 @@ typedef struct set_vcpuonline_state {
     libxl_dominfo info;
     libxl_bitmap final_map;
     int index; /* for loop on final_map */
+    const char *cpu_driver;
 } set_vcpuonline_state;
 
 static void set_vcpuonline_qmp_cpus_fast_queried(libxl__egc *,
@@ -1814,6 +1815,10 @@ static void set_vcpuonline_qmp_cpus_queried(libxl__egc *,
 static void set_vcpuonline_qmp_query_cpus_parse(libxl__egc *,
     libxl__ev_qmp *qmp, const libxl__json_object *,
     bool query_cpus_fast, int rc);
+static void set_vcpuonline_qmp_query_hotpluggable_cpus(libxl__egc *egc,
+    libxl__ev_qmp *qmp, const libxl__json_object *response, int rc);
+static void set_vcpuonline_qmp_device_add_cpu(libxl__egc *,
+    libxl__ev_qmp *, const libxl__json_object *response, int rc);
 static void set_vcpuonline_qmp_add_cpu(libxl__egc *,
     libxl__ev_qmp *, const libxl__json_object *response, int rc);
 static void set_vcpuonline_timeout(libxl__egc *egc,
@@ -1951,13 +1956,54 @@ static void set_vcpuonline_qmp_query_cpus_parse(libxl__egc *egc,
         libxl_bitmap_reset(final_map, i);
     }
 
+    qmp->callback = set_vcpuonline_qmp_query_hotpluggable_cpus;
+    rc = libxl__ev_qmp_send(egc, qmp, "query-hotpluggable-cpus", NULL);
+
 out:
     libxl_bitmap_dispose(&current_map);
+    if (rc)
+        set_vcpuonline_done(egc, svos, rc); /* must be last */
+}
+
+static void set_vcpuonline_qmp_query_hotpluggable_cpus(libxl__egc *egc,
+    libxl__ev_qmp *qmp, const libxl__json_object *response, int rc)
+{
+    set_vcpuonline_state *svos = CONTAINER_OF(qmp, *svos, qmp);
+    const libxl__json_object *cpu;
+    const libxl__json_object *cpu_driver;
+
+    if (rc == ERROR_QMP_COMMAND_NOT_FOUND) {
+        /* We are probably connected to a version of QEMU older than 2.7,
+         * let's fallback to using "cpu-add" command. */
+        svos->index = -1;
+        set_vcpuonline_qmp_add_cpu(egc, qmp, NULL, 0); /* must be last */
+        return;
+    }
+
+    if (rc) goto out;
+
+    /* Parse response to QMP command "query-hotpluggable-cpus"
+     * [ { 'type': 'str', ... ]
+     *
+     * We are looking for the driver name of the CPU to be hotplugged.
+     * We'll assume that the cpu properties are core-id=0, thread-id=0 and
+     * socket-id=$cpu_index, as we start qemu with "-smp %d,maxcpus=%d", so
+     * we don't parse the properties listed for each hotpluggable cpu.
+     */
+
+    cpu = libxl__json_array_get(response, 0);
+    cpu_driver = libxl__json_map_get("type", cpu, JSON_STRING);
+    svos->cpu_driver = libxl__json_object_get_string(cpu_driver);
+
+    if (!svos->cpu_driver)
+        rc = ERROR_QEMU_API;
+
+out:
     svos->index = -1;
-    set_vcpuonline_qmp_add_cpu(egc, qmp, NULL, rc); /* must be last */
+    set_vcpuonline_qmp_device_add_cpu(egc, qmp, NULL, rc); /* must be last */
 }
 
-static void set_vcpuonline_qmp_add_cpu(libxl__egc *egc,
+static void set_vcpuonline_qmp_device_add_cpu(libxl__egc *egc,
     libxl__ev_qmp *qmp, const libxl__json_object *response, int rc)
 {
     STATE_AO_GC(qmp->ao);
@@ -1969,6 +2015,45 @@ static void set_vcpuonline_qmp_add_cpu(libxl__egc *egc,
 
     if (rc) goto out;
 
+    while (libxl_bitmap_cpu_valid(map, ++svos->index)) {
+        if (libxl_bitmap_test(map, svos->index)) {
+            qmp->callback = set_vcpuonline_qmp_device_add_cpu;
+            libxl__qmp_param_add_string(gc, &args, "id", GCSPRINTF("cpu-%d", svos->index));
+            libxl__qmp_param_add_string(gc, &args, "driver", svos->cpu_driver);
+            /* We'll assume that we start QEMU with -smp %d,maxcpus=%d, so
+             * that "core-id" and "thread-id" are always 0 and
+             * "socket-id" corresponds to the cpu index.
+             * Those properties are otherwise listed by
+             * "query-hotpluggable-cpus". */
+            libxl__qmp_param_add_integer(gc, &args, "socket-id", svos->index);
+            libxl__qmp_param_add_integer(gc, &args, "core-id", 0);
+            libxl__qmp_param_add_integer(gc, &args, "thread-id", 0);
+            rc = libxl__ev_qmp_send(egc, qmp, "device_add", args);
+            if (rc) goto out;
+            return;
+        }
+    }
+
+out:
+    set_vcpuonline_done(egc, svos, rc);
+}
+
+/* Fallback function for QEMU older than 2.7, when
+ * 'query-hotpluggable-cpus' wasn't available and vcpu object couldn't be
+ * added with 'device_add'. */
+static void set_vcpuonline_qmp_add_cpu(libxl__egc *egc, libxl__ev_qmp *qmp,
+                                       const libxl__json_object *response,
+                                       int rc)
+{
+    STATE_AO_GC(qmp->ao);
+    set_vcpuonline_state *svos = CONTAINER_OF(qmp, *svos, qmp);
+    libxl__json_object *args = NULL;
+
+    /* Convenience aliases */
+    libxl_bitmap *map = &svos->final_map;
+
+    if (rc) goto out;
+
     while (libxl_bitmap_cpu_valid(map, ++svos->index)) {
         if (libxl_bitmap_test(map, svos->index)) {
             qmp->callback = set_vcpuonline_qmp_add_cpu;
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125665.236558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgs-0004xt-3n; Tue, 11 May 2021 09:28:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125665.236558; Tue, 11 May 2021 09:28:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgr-0004wM-Tf; Tue, 11 May 2021 09:28:33 +0000
Received: by outflank-mailman (input) for mailman id 125665;
 Tue, 11 May 2021 09:28:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOgp-0003CS-QC
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:31 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 627edbc2-2b6a-4689-8328-0edac7c00d06;
 Tue, 11 May 2021 09:28:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 627edbc2-2b6a-4689-8328-0edac7c00d06
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725300;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=ZLCUuRpfeTD/8ZQ8pXbTB6NvDUzollCddxfo249yUko=;
  b=er9QO5AdnoO5JqY3T+AUdZZmw1WQjaFle+Y10oMeVuS/8Sj8h0Hnc2zR
   HylGI8MDUhZZXJwgrsISEDqIVwjxjyG3YOz/vtovZ7hSw/QNF3SkR4e9U
   0GjfU4yX4ZlzfV67Elu5tk//IZs2/qbhWeQ2HNH+ImKQuyMzzeTmPiged
   s=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: sUJkmyKG96ytKhXu42Cn4ZJVXWTJ2rqAnIg9NjRaqokqlG/UoqralpEXCl3kFSUi0tVPvdLmtI
 Ro1NeCtspzBCEj73eOzoJ+eWn3zknUDRIdYtt/+AiQCcKM+DMeSjz8eH6BhNezTEFo4IXEUZhX
 dCfJ5QILn8yq4BjfrQu5SkaAPtiVC5e9hpsMyT+JX+c6DAkuPj/GWk33SmrSvxrYUYOEe0GOOm
 FxK6mZSAHBtTBZmaxSi05zVqJL3fT/NEnOL9SR70yGoOKXblfvIA0sY+9m42SCArtEhB6cQXt1
 Xo8=
X-SBRS: 5.1
X-MesageID: 43624001
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:VToBzqzH7LkEfRr1sUt0KrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="43624001"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jason Andryuk
	<jandryuk@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 5/8] libxl: Assert qmp_ev's state in qmp_ev_qemu_compare_version
Date: Tue, 11 May 2021 10:28:07 +0100
Message-ID: <20210511092810.13759-6-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

We are supposed to read the version information only when qmp_ev is in
the "Connected" state (which corresponds to state == qmp_state_connected),
so assert it so that the function isn't used too early.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_qmp.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/tools/libs/light/libxl_qmp.c b/tools/libs/light/libxl_qmp.c
index 9b638e6f5442..d0967c9f029f 100644
--- a/tools/libs/light/libxl_qmp.c
+++ b/tools/libs/light/libxl_qmp.c
@@ -292,6 +292,8 @@ static int qmp_handle_response(libxl__gc *gc, libxl__qmp_handler *qmp,
 static int qmp_ev_qemu_compare_version(libxl__ev_qmp *ev, int major,
                                        int minor, int micro)
 {
+    assert(ev->state == qmp_state_connected);
+
 #define CHECK_VERSION(level) do { \
     if (ev->qemu_version.level > (level)) return +1; \
     if (ev->qemu_version.level < (level)) return -1; \
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125666.236570 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgw-0005cZ-CW; Tue, 11 May 2021 09:28:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125666.236570; Tue, 11 May 2021 09:28:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOgw-0005cJ-8c; Tue, 11 May 2021 09:28:38 +0000
Received: by outflank-mailman (input) for mailman id 125666;
 Tue, 11 May 2021 09:28:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOgu-0003CS-QI
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:36 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4fd96fab-119e-4fd6-83c6-13902c1a4a75;
 Tue, 11 May 2021 09:28:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4fd96fab-119e-4fd6-83c6-13902c1a4a75
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725301;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=5TXGPQaxHF3c+AxwdM2mzDWq6ZbvGPMF7pL7COPhMao=;
  b=F32S+JuxxduiyLWGxIjSgs3qZSInsnhqdubCEFjQMSLvzqzxQTtE/VsN
   +Qr7ZiK6h0g4PcPhudIBPtLBSm8WFiLo4RLn0pNp6LZ24e1EN5x0nvxlM
   yYhFFPqfcZLTCb42Vf4oiS+KGBgBFEfbTPnQzJiOhRbqnyRVfxxCFphnu
   g=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: HBhO1fWqAGziFPHWeyyUFF6jO7hf3HQe7A7T3q1IOe5SWxgUYYsyfYQx8zTSe7sywaO2gPQLDe
 YbU1yPtxsAaSA6T1D8AQKQE0jors2PYumZf4MrgWGppL+DKe6h1TdEquiqKuuKAwqwqMyHCjcn
 8ng/0YIQsU7gXOkY6y+YmfTFKkj3TYbZmyPubVGEQUQxWKZPvyJLKp9WDIWAWri09uxlswYpIp
 UiNneZeJ/9tFCLeFiL0AKR6FGEaaNW137sR7GnnX3ZR+wNVnUAnhjTMBEv5kwL/u3wR41KuMf7
 tfI=
X-SBRS: 5.1
X-MesageID: 43313593
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:Jknn4q8tSUR0/Rfqx8duk+DgI+orL9Y04lQ7vn2YSXRuHPBw8P
 re+sjztCWE8Ar5N0tBpTntAsW9qDbnhPtICOoqTNCftWvdyQiVxehZhOOIqVDd8m/Fh4pgPM
 9bAtFD4bbLbGSS4/yU3ODBKadD/OW6
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="43313593"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jason Andryuk
	<jandryuk@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 6/8] libxl: Export libxl__qmp_ev_qemu_compare_version
Date: Tue, 11 May 2021 10:28:08 +0100
Message-ID: <20210511092810.13759-7-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

We are going to want to check QEMU's version in other places where we
can use libxl__ev_qmp_send.

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_internal.h | 8 ++++++++
 tools/libs/light/libxl_qmp.c      | 4 ++--
 2 files changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_internal.h b/tools/libs/light/libxl_internal.h
index 44a2f3c8fe3b..0b4671318c82 100644
--- a/tools/libs/light/libxl_internal.h
+++ b/tools/libs/light/libxl_internal.h
@@ -492,6 +492,14 @@ _hidden int libxl__ev_qmp_send(libxl__egc *egc, libxl__ev_qmp *ev,
                                const char *cmd, libxl__json_object *args);
 _hidden void libxl__ev_qmp_dispose(libxl__gc *gc, libxl__ev_qmp *ev);
 
+/* return values:
+ *   < 0  if qemu's version <  asked version
+ *   = 0  if qemu's version == asked version
+ *   > 0  if qemu's version >  asked version
+ */
+_hidden int libxl__qmp_ev_qemu_compare_version(libxl__ev_qmp *ev, int major,
+                                               int minor, int micro);
+
 typedef enum {
     /* initial state */
     qmp_state_disconnected = 1,
diff --git a/tools/libs/light/libxl_qmp.c b/tools/libs/light/libxl_qmp.c
index d0967c9f029f..fb146a54cb9c 100644
--- a/tools/libs/light/libxl_qmp.c
+++ b/tools/libs/light/libxl_qmp.c
@@ -289,7 +289,7 @@ static int qmp_handle_response(libxl__gc *gc, libxl__qmp_handler *qmp,
  *   = 0  if qemu's version == asked version
  *   > 0  if qemu's version >  asked version
  */
-static int qmp_ev_qemu_compare_version(libxl__ev_qmp *ev, int major,
+int libxl__qmp_ev_qemu_compare_version(libxl__ev_qmp *ev, int major,
                                        int minor, int micro)
 {
     assert(ev->state == qmp_state_connected);
@@ -1073,7 +1073,7 @@ static void dm_state_save_to_fdset(libxl__egc *egc, libxl__ev_qmp *ev, int fdset
     /* The `live` parameter was added to QEMU 2.11. It signals QEMU that
      * the save operation is for a live migration rather than for taking a
      * snapshot. */
-    if (qmp_ev_qemu_compare_version(ev, 2, 11, 0) >= 0)
+    if (libxl__qmp_ev_qemu_compare_version(ev, 2, 11, 0) >= 0)
         libxl__qmp_param_add_bool(gc, &args, "live", dsps->live);
     QMP_PARAMETERS_SPRINTF(&args, "filename", "/dev/fdset/%d", fdset);
     rc = libxl__ev_qmp_send(egc, ev, "xen-save-devices-state", args);
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125669.236582 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOh1-0006MO-RO; Tue, 11 May 2021 09:28:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125669.236582; Tue, 11 May 2021 09:28:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOh1-0006Kn-L9; Tue, 11 May 2021 09:28:43 +0000
Received: by outflank-mailman (input) for mailman id 125669;
 Tue, 11 May 2021 09:28:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOgz-0003CS-QV
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:41 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 46a06a73-9bbc-4a92-90a9-b86aac1d7837;
 Tue, 11 May 2021 09:28:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 46a06a73-9bbc-4a92-90a9-b86aac1d7837
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725303;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=oQ0ssEsENtdgMjUiXBgiKma4Z4zbQ49R9PO2WXFh8mc=;
  b=KvTd8sjoVescoSgfhEW4g93S/Y+zTUMyIS2LHb5dbElW8lCADJVwKlg2
   r8Ors8JO8CoMMjKp8kIcVrZzfcVVx7Ubokmrw9mpEZ86tO7i79wegJjXa
   IlDqWyLge5kzesOfRVARzRG1Yh05tHaFXApkdYMLjk/775E9DJJqq3BCF
   A=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: s1FEg6hnKqsfVKtX/Nvna1UfJOpMpc3lWMl3vRtD4XDJlvvBV9glarTxH3Om61vRuEmMTjoiZf
 ilAawEAZOEwQWVgF0cABftxT1/8Z2ws7WylXVLiGV+gQMYoeBhdzOGSOAPN1FUPJ6vxUPmHfNY
 f4jsbjqG7O72uNdlxmjCJFkXT81CMuN0PzXFMDHfO3rwGKsVrctcnECu/Fby0r3SOV59zvpOAk
 +xJAX8VK1Go1YGHwKHYE2frTNnDi/8rVf1irNM2JahytSpmWEaZFl14IvBwgUNrmLgtYK9Jq+y
 rDk=
X-SBRS: 5.1
X-MesageID: 43624005
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:FNSAY64UP9lUT8BKLQPXwAzXdLJyesId70hD6qkQc3Fom62j5q
 WTdZEgvyMc5wx/ZJhNo7690cq7MBHhHPxOgbX5VI3KNGXbUQOTR72KhrGSoAEIdReeygZcv5
 0QCZSXCrfLfCVHZRCR2njFLz4iquP3j5xBnY3lvhNQpZkBUdAZ0+9+YDzrdXFedU19KrcSMo
 GT3cZDryrIQwVtUizqbkN1OdQqvrfw5evbXSI=
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="43624005"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jason Andryuk
	<jandryuk@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 8/8] libxl: Replace QMP command "change" by "blockdev-change-media"
Date: Tue, 11 May 2021 10:28:10 +0100
Message-ID: <20210511092810.13759-9-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The "change" command has been removed in QEMU 6.0. We can use
"blockdev-change-medium" instead.

Using `id` with "blockdev-change-medium" requires a change to the QEMU
command line, introduced by:
    "libxl: Use -device for cd-rom drives"

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_disk.c | 24 +++++++++++++++++++-----
 1 file changed, 19 insertions(+), 5 deletions(-)

diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index faabdea7a4c3..93936d0dd0f8 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -962,12 +962,26 @@ static void cdrom_insert_addfd_cb(libxl__egc *egc,
     fdset = libxl__json_object_get_integer(o);
 
     devid = libxl__device_disk_dev_number(disk->vdev, NULL, NULL);
-    QMP_PARAMETERS_SPRINTF(&args, "device", "ide-%i", devid);
-    QMP_PARAMETERS_SPRINTF(&args, "target", "/dev/fdset/%d", fdset);
-    libxl__qmp_param_add_string(gc, &args, "arg",
-        libxl__qemu_disk_format_string(disk->format));
     qmp->callback = cdrom_insert_inserted;
-    rc = libxl__ev_qmp_send(egc, qmp, "change", args);
+
+    /* "change" is deprecated since QEMU 2.5 and the `device` parameter
+     * for "blockdev-change-medium" is deprecated since QEMU 2.8.
+     * But `id` is only available since 2.8, so we only start using the
+     * new command with `id` from QEMU 2.8 onwards.
+     */
+    if (libxl__qmp_ev_qemu_compare_version(qmp, 2, 8, 0) >= 0) {
+        QMP_PARAMETERS_SPRINTF(&args, "id", "ide-%i", devid);
+        QMP_PARAMETERS_SPRINTF(&args, "filename", "/dev/fdset/%d", fdset);
+        libxl__qmp_param_add_string(gc, &args, "format",
+            libxl__qemu_disk_format_string(disk->format));
+        rc = libxl__ev_qmp_send(egc, qmp, "blockdev-change-medium", args);
+    } else {
+        QMP_PARAMETERS_SPRINTF(&args, "device", "ide-%i", devid);
+        QMP_PARAMETERS_SPRINTF(&args, "target", "/dev/fdset/%d", fdset);
+        libxl__qmp_param_add_string(gc, &args, "arg",
+            libxl__qemu_disk_format_string(disk->format));
+        rc = libxl__ev_qmp_send(egc, qmp, "change", args);
+    }
 out:
     if (rc)
         cdrom_insert_done(egc, cis, rc); /* must be last */
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:28:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:28:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125680.236594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOhE-0007RF-8x; Tue, 11 May 2021 09:28:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125680.236594; Tue, 11 May 2021 09:28:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgOhE-0007Qv-5C; Tue, 11 May 2021 09:28:56 +0000
Received: by outflank-mailman (input) for mailman id 125680;
 Tue, 11 May 2021 09:28:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgOhD-00064e-KL
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 09:28:55 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5f774ed2-8a08-4d8d-b5c1-4a94b2073dcf;
 Tue, 11 May 2021 09:28:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5f774ed2-8a08-4d8d-b5c1-4a94b2073dcf
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620725325;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=K74/i7Z8FufP+fc6tmHyA0vTMqCvL54/sWOINqL7csI=;
  b=KIufayX2it1DXubWr4oODHqXSInoQiKk8toRbdum7g/waO8K8UhBCdF1
   9n6pEdi1AXeFTaHb4Ir1gPXuJ422DS/Gvzazla2WWgR91/T0jqTvcHWj3
   NsuJYXlRxKfnvCk2ZOYJCRVnHRh1ORmdTxCtkFWaMU72Xz34Emsm4A2IU
   w=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: C2udhTb7XNw4I3BAQVNa47ZlG+25lLUTOmOlJtmRb+k/aGKnfDYl1dsCxbPQ9fg2O20EWNSfcw
 CBJGYl8HjuUvglOzjMqPANovYMmgt4KgqMHyCJh3McnPZGOtDfDQCYDzrUYqiisQL88FDELnYS
 l5yrQ9gXDJ0ED4VcGYwmtx9hmNGBJQgln0yL/reo/8fbEYn9xbnuzL0+284S5AABTv8K9qJ6Mc
 3w1c64oe2EaRW7mA8SwZVkqQnAYRYcpbx5dbtjJ3FlrA6PIRXH1mNVOoMLGhtBG+7rDGdoK4ht
 ex0=
X-SBRS: 5.1
X-MesageID: 43528826
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:XbYuM6+tY2xxHSW8z+tuk+DeI+orL9Y04lQ7vn2YSXRuHfBw8P
 re+8jztCWE8Qr5N0tApTntAsS9qDbnhPxICOoqTNOftWvd2FdARbsKheCJ/9SjIVyaygc079
 YHT0EUMrPN5DZB4foSmDPIcOod/A==
X-IronPort-AV: E=Sophos;i="5.82,290,1613451600"; 
   d="scan'208";a="43528826"
From: Anthony PERARD <anthony.perard@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Anthony PERARD <anthony.perard@citrix.com>, Jason Andryuk
	<jandryuk@gmail.com>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [XEN PATCH v2 7/8] libxl: Use `id` with the "eject" QMP command
Date: Tue, 11 May 2021 10:28:09 +0100
Message-ID: <20210511092810.13759-8-anthony.perard@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210511092810.13759-1-anthony.perard@citrix.com>
References: <20210511092810.13759-1-anthony.perard@citrix.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The `device` parameter of the "eject" QMP command has been deprecated
since QEMU 2.8; use `id` instead.

This depends on the command line changes introduced by:
    "libxl: Use -device for cd-rom drives"

Signed-off-by: Anthony PERARD <anthony.perard@citrix.com>
Reviewed-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/libs/light/libxl_disk.c | 43 +++++++++++++++++++++++++++++------
 1 file changed, 36 insertions(+), 7 deletions(-)

diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index 411ffeaca6ce..faabdea7a4c3 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -656,6 +656,8 @@ typedef struct {
 
 static void cdrom_insert_lock_acquired(libxl__egc *, libxl__ev_slowlock *,
                                        int rc);
+static void cdrom_insert_qmp_connected(libxl__egc *, libxl__ev_qmp *,
+                                       const libxl__json_object *, int rc);
 static void cdrom_insert_ejected(libxl__egc *egc, libxl__ev_qmp *,
                                  const libxl__json_object *, int rc);
 static void cdrom_insert_addfd_cb(libxl__egc *egc, libxl__ev_qmp *,
@@ -770,13 +772,12 @@ static void cdrom_insert_lock_acquired(libxl__egc *egc,
      */
 
     if (cis->dm_ver == LIBXL_DEVICE_MODEL_VERSION_QEMU_XEN) {
-        libxl__json_object *args = NULL;
-        int devid = libxl__device_disk_dev_number(cis->disk->vdev,
-                                                  NULL, NULL);
-
-        QMP_PARAMETERS_SPRINTF(&args, "device", "ide-%i", devid);
-        cis->qmp.callback = cdrom_insert_ejected;
-        rc = libxl__ev_qmp_send(egc, &cis->qmp, "eject", args);
+        /* Before running the "eject" command, we need to know QEMU's
+         * version in order to pick the right parameter name.
+         * cis->qmp isn't in the Connected state yet, so run a dummy
+         * command to make QEMU's version available. */
+        cis->qmp.callback = cdrom_insert_qmp_connected;
+        rc = libxl__ev_qmp_send(egc, &cis->qmp, "query-version", NULL);
         if (rc) goto out;
     } else {
         cdrom_insert_ejected(egc, &cis->qmp, NULL, 0); /* must be last */
@@ -787,6 +788,34 @@ static void cdrom_insert_lock_acquired(libxl__egc *egc,
     cdrom_insert_done(egc, cis, rc); /* must be last */
 }
 
+static void cdrom_insert_qmp_connected(libxl__egc *egc, libxl__ev_qmp *qmp,
+                                       const libxl__json_object *response,
+                                       int rc)
+{
+    libxl__cdrom_insert_state *cis = CONTAINER_OF(qmp, *cis, qmp);
+    STATE_AO_GC(cis->ao);
+    libxl__json_object *args = NULL;
+    int devid = libxl__device_disk_dev_number(cis->disk->vdev,
+                                              NULL, NULL);
+
+    if (rc) goto out;
+
+    /* The `device` parameter has been deprecated since QEMU 2.8; we
+     * should use `id` instead. The two have different meanings, but we
+     * set the same `id` on both -drive and -device on the command line.
+     */
+    if (libxl__qmp_ev_qemu_compare_version(qmp, 2, 8, 0) >= 0)
+        QMP_PARAMETERS_SPRINTF(&args, "id", "ide-%i", devid);
+    else
+        QMP_PARAMETERS_SPRINTF(&args, "device", "ide-%i", devid);
+    qmp->callback = cdrom_insert_ejected;
+    rc = libxl__ev_qmp_send(egc, qmp, "eject", args);
+    if (rc) goto out;
+    return;
+out:
+    cdrom_insert_done(egc, cis, rc); /* must be last */
+}
+
 static void cdrom_insert_ejected(libxl__egc *egc,
                                  libxl__ev_qmp *qmp,
                                  const libxl__json_object *response,
-- 
Anthony PERARD



From xen-devel-bounces@lists.xenproject.org Tue May 11 09:58:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 09:58:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125724.236658 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgP9q-0003vI-AD; Tue, 11 May 2021 09:58:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125724.236658; Tue, 11 May 2021 09:58:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgP9q-0003vB-5o; Tue, 11 May 2021 09:58:30 +0000
Received: by outflank-mailman (input) for mailman id 125724;
 Tue, 11 May 2021 09:58:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgP9p-0003v1-0B; Tue, 11 May 2021 09:58:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgP9o-00016V-OG; Tue, 11 May 2021 09:58:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgP9o-0001Gw-E9; Tue, 11 May 2021 09:58:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgP9o-0004ax-Dg; Tue, 11 May 2021 09:58:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KNdKbdUdl9bASTrwf+JqGX6TteruRvk0B57A25PosfM=; b=LDsaTEZZDsc1ewnnQES5Dv8nYW
	c9x4XPFHCOk7fv3g3qxLhbujUThp5lvL0v+Kn8fvnTsNEuW4FpKf+uDMnRQbTlgXoAb3HYBO8NyO7
	E/W36ci/6L/WS1OhV3QHmlVeBCAFui07ShcA3AST4dOd2mMJbDLK9P8f/xfg13+hOCHE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161898-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161898: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-i386-xl-raw:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
X-Osstest-Versions-That:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 09:58:28 +0000

flight 161898 xen-unstable real [real]
flight 161903 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161898/
http://logs.test-lab.xenproject.org/osstest/logs/161903/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl-raw    19 guest-localmigrate/x10 fail pass in 161903-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 161888

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161888
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161888
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161888
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161888
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161888
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161888
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161888
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161888
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161888
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161888
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161888
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
baseline version:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7

Last test of basis   161888  2021-05-10 01:51:44 Z    1 days
Testing same since   161898  2021-05-10 19:06:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>
  Olaf Hering <olaf@aepfle.de>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a7da84c457..982c89ed52  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06 -> master


From xen-devel-bounces@lists.xenproject.org Tue May 11 10:41:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 10:41:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125730.236673 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgPp3-0000gp-CS; Tue, 11 May 2021 10:41:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125730.236673; Tue, 11 May 2021 10:41:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgPp3-0000gi-8l; Tue, 11 May 2021 10:41:05 +0000
Received: by outflank-mailman (input) for mailman id 125730;
 Tue, 11 May 2021 10:41:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y01E=KG=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lgPp1-0000gc-LE
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 10:41:03 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f3d9383-1c7c-4461-85bc-aabbd7f5c2c7;
 Tue, 11 May 2021 10:41:02 +0000 (UTC)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id A27A35C0153;
 Tue, 11 May 2021 06:41:02 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute4.internal (MEProxy); Tue, 11 May 2021 06:41:02 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Tue, 11 May 2021 06:41:00 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f3d9383-1c7c-4461-85bc-aabbd7f5c2c7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=eEzsag
	JXE9j7VxBo+AWmXBF3pphXok7LzgFqbTTuun0=; b=ixCgNh0lNwMsRtRxLx2hRB
	UzT5a7tnSj8A3K3aN2hRP6BTzJNeXdmsHc9b4a/zoQzdZ9yyypxDPRg7MfbYlZAa
	kkuU3764t850EhzWsVzrSe0OgRcTwWv79BnG1yRyBi2OXLdk6/MDCzoMBeuD+wUD
	ZICsyFSPbpssORFFWD4JbFBgr/o1lc8KIq0btOcSKVwT40PXyb/QSd3yuLvG5873
	Ur9mvx3OGFcQ7WzBmaXQMXz7fR4Y5LIKFp+wgxG+8K7tOh/FeOje8awSuez58NhD
	p26x0b6dT6DOk7vjTTTswN0A1oABoyJqkXdN/XW3GiEYJ1cBS3M/Xu3Li6Kau3AA
	==
X-ME-Sender: <xms:PV-aYMeErmFrTBjHgQ5JxQtJKvc_eaDgw9HkGyed0gH-bR8-CzcJOQ>
    <xme:PV-aYONuA6TLf10pYFLHGhBuySeoxdACKfz8qhzewyYInckw5ROBHj5NT4BvsbXBh
    zVag659bGWRsg>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdehtddgfedvucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeijedrjeelrdegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghi
    lhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtg
    homh
X-ME-Proxy: <xmx:PV-aYNiAuvUsPNlIR7j3RB1blVNYHa87XY2YlyNHOAElDsUfvauzXQ>
    <xmx:PV-aYB9YLoz1AuB59PCjt3gdqJJtM64QMCWqxLub_dvzQ0OW4AhumA>
    <xmx:PV-aYIsjv63Vn8a5Yb_VNFRElSxwMv8uHftTF4tTsvFzNBCiWnILyQ>
    <xmx:Pl-aYB73mFHwoWWil0pPOzUd5xu8VehXbIdpUGqWjUywjQQlf1bqrA>
Date: Tue, 11 May 2021 12:40:54 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Michael Brown <mbrown@fensystems.co.uk>, "paul@xen.org" <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YJpfORXIgEaWlQ7E@mail-itl>
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk>
 <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
 <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="p94ypmT+e7ybeaDo"
Content-Disposition: inline
In-Reply-To: <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>


--p94ypmT+e7ybeaDo
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 11 May 2021 12:40:54 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Michael Brown <mbrown@fensystems.co.uk>, "paul@xen.org" <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching

On Tue, May 11, 2021 at 07:06:55AM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Marek Marczykowski-G=C3=B3recki <marmarek@invisiblethingslab.com>
> > Sent: 10 May 2021 20:43
> > To: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org
> > Cc: paul@xen.org; xen-devel@lists.xenproject.org; netdev@vger.kernel.or=
g; wei.liu@kernel.org; Durrant,
> > Paul <pdurrant@amazon.co.uk>
> > Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status e=
xistence before watching
> >=20
> > On Mon, May 10, 2021 at 08:06:55PM +0100, Michael Brown wrote:
> > > If you have a suggested patch, I'm happy to test that it doesn't rein=
troduce
> > > the regression bug that was fixed by this commit.
> >=20
> > Actually, I've just tested with a simple reloading xen-netfront module.=
 It
> > seems in this case, the hotplug script is not re-executed. In fact, I
> > think it should not be re-executed at all, since the vif interface
> > remains in place (it just gets NO-CARRIER flag).
> >=20
> > This brings a question, why removing hotplug-status in the first place?
> > The interface remains correctly configured by the hotplug script after
> > all. From the commit message:
> >=20
> >     xen-netback: remove 'hotplug-status' once it has served its purpose
> >=20
> >     Removing the 'hotplug-status' node in netback_remove() is wrong; th=
e script
> >     may not have completed. Only remove the node once the watch has fir=
ed and
> >     has been unregistered.
> >=20
> > I think the intention was to remove 'hotplug-status' node _later_ in
> > case of quickly adding and removing the interface. Is that right, Paul?
>=20
> The removal was done to allow unbind/bind to function correctly. IIRC bef=
ore the original patch doing a bind would stall forever waiting for the hot=
plug status to change, which would never happen.

Hmm, in that case maybe don't remove it at all in the backend, and let
it be cleaned up by the toolstack, when it removes other backend-related
nodes?

--=20
Best Regards,
Marek Marczykowski-G=C3=B3recki
Invisible Things Lab

--p94ypmT+e7ybeaDo
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCaXzkACgkQ24/THMrX
1yzyiQf+IwD73XUVWyRPGJyFqKFetP/eCbzHBvyKLyaXxN9I3l0kWWExbMpoXd4C
8Hsj4VvjTaA8vf9GHItovqZR0xE0Qy6YcMheLS+/Jjj5RVlcqgWa5GXwh2S+LfsA
nCkTCClugpZpmeHIuJF6GhhW9fRfFJNVL7HN+24BCvL26GNjLn5BCOCaoxmr3GnW
w0QvxdnTJ1BsJg+Dxi3/7fsdAll/p52fF2DluejJC7wvAQzRUIvIQi+r0K1f71Ue
NlFgnI+dhhCVltg/uygZQZr5ykGWA8CgKVN8/SlOtByhiRoY8hn6ciOYF1nvx419
oz3KPLCdp0x90ccPg1lk7RTHDkf3cw==
=muMR
-----END PGP SIGNATURE-----

--p94ypmT+e7ybeaDo--


From xen-devel-bounces@lists.xenproject.org Tue May 11 10:45:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 10:45:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125738.236685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgPt7-0001XP-1L; Tue, 11 May 2021 10:45:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125738.236685; Tue, 11 May 2021 10:45:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgPt6-0001XI-Uc; Tue, 11 May 2021 10:45:16 +0000
Received: by outflank-mailman (input) for mailman id 125738;
 Tue, 11 May 2021 10:45:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=y01E=KG=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lgPt5-0001XC-GC
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 10:45:15 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 74307ffb-c76e-4cac-b82c-d996bdb560c9;
 Tue, 11 May 2021 10:45:15 +0000 (UTC)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id CB1725C0125;
 Tue, 11 May 2021 06:45:14 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 11 May 2021 06:45:14 -0400
Received: from mail-itl (unknown [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Tue, 11 May 2021 06:45:13 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74307ffb-c76e-4cac-b82c-d996bdb560c9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=ckrL5E
	dwv505m1v7gPS9q3iDh+sypyFDIgzQz5gG5Nw=; b=qsibRaSX73r2WWdVUtdrzi
	VabW/wk+ieXgAJiwNUK9FCoBjTXT6Jjr6JGb6C5ncnDdyeXxeVZoqpZOSvx0XyrH
	TdyPqxSIOfJMF/U/wZExm07HY5Q5N3rnyrnSdwThR5Oga0TFIRnHT9nO3Dnvl8O3
	iHjEKAICJeMjG6FqdMTXSbErsU6e7u0x1ERSVPGt3cvS5SSOVZGbTY4vk1Ad5/dd
	LrB5X8+Tb0L7mxhLFY31Nrdc3uwQX/mYQ/d3IzyLKp4un0/RLy09CIBlhR5JG0Tu
	A0toTkleRatVyL4vyEr5g8E6txs9Xb7seCbuciTYEsExIANgUElLeQAezYfQndmA
	==
X-ME-Sender: <xms:OmCaYMzUxWdXOsZfGE_XSXgurvbAcaVDotyYtZalvIWXfSom06tnyA>
    <xme:OmCaYAT_cEhcyLD_w2OkcVqv1knuma4fOQiRXykmRrzh-3NGnV7433_VokX8Vc19j
    eVcAy57QVAg3g>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdehtddgfeefucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeijedrjeelrdegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghi
    lhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtg
    homh
X-ME-Proxy: <xmx:OmCaYOWESuTcmUo4muF5cmd2CimZHmEHkDMlHlK4MnfleruhITBk1g>
    <xmx:OmCaYKi_H4Xm39WAK3lB9bQ72fxXjqg8SQrFQ0TTaUODwrM4oSYzog>
    <xmx:OmCaYOBO2c-6Wrd5KimPznLmjWHh_wNX-i1hO8ltDdkEAMQPaplSmQ>
    <xmx:OmCaYBNYKwZfZdLpIxgf6zkQsbkd_pPlXMkXoC3NEIK5cu37b4xsmA>
Date: Tue, 11 May 2021 12:45:10 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Michael Brown <mbrown@fensystems.co.uk>, "paul@xen.org" <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YJpgNvOmDL9SuRye@mail-itl>
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk>
 <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
 <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
 <YJpfORXIgEaWlQ7E@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="bYEBe4gQw1iCtcBR"
Content-Disposition: inline
In-Reply-To: <YJpfORXIgEaWlQ7E@mail-itl>


--bYEBe4gQw1iCtcBR
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 11 May 2021 12:45:10 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Michael Brown <mbrown@fensystems.co.uk>, "paul@xen.org" <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching

On Tue, May 11, 2021 at 12:40:54PM +0200, Marek Marczykowski-G=C3=B3recki w=
rote:
> On Tue, May 11, 2021 at 07:06:55AM +0000, Durrant, Paul wrote:
> > > -----Original Message-----
> > > From: Marek Marczykowski-G=C3=B3recki <marmarek@invisiblethingslab.co=
m>
> > > Sent: 10 May 2021 20:43
> > > To: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org
> > > Cc: paul@xen.org; xen-devel@lists.xenproject.org; netdev@vger.kernel.=
org; wei.liu@kernel.org; Durrant,
> > > Paul <pdurrant@amazon.co.uk>
> > > Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status=
 existence before watching
> > >=20
> > > On Mon, May 10, 2021 at 08:06:55PM +0100, Michael Brown wrote:
> > > > If you have a suggested patch, I'm happy to test that it doesn't re=
introduce
> > > > the regression bug that was fixed by this commit.
> > >=20
> > > Actually, I've just tested with a simple reloading xen-netfront modul=
e. It
> > > seems in this case, the hotplug script is not re-executed. In fact, I
> > > think it should not be re-executed at all, since the vif interface
> > > remains in place (it just gets NO-CARRIER flag).
> > >=20
> > > This brings a question, why removing hotplug-status in the first plac=
e?
> > > The interface remains correctly configured by the hotplug script after
> > > all. From the commit message:
> > >=20
> > >     xen-netback: remove 'hotplug-status' once it has served its purpo=
se
> > >=20
> > >     Removing the 'hotplug-status' node in netback_remove() is wrong; =
the script
> > >     may not have completed. Only remove the node once the watch has f=
ired and
> > >     has been unregistered.
> > >=20
> > > I think the intention was to remove 'hotplug-status' node _later_ in
> > > case of quickly adding and removing the interface. Is that right, Pau=
l?
> >=20
> > The removal was done to allow unbind/bind to function correctly. IIRC b=
efore the original patch doing a bind would stall forever waiting for the h=
otplug status to change, which would never happen.
>=20
> Hmm, in that case maybe don't remove it at all in the backend, and let
> it be cleaned up by the toolstack, when it removes other backend-related
> nodes?

No, unbind/bind _does_ require hotplug script to be called again.

When exactly vif interface appears in the system (starts to be available
for the hotplug script)? Maybe remove 'hotplug-status' just before that
point?


--=20
Best Regards,
Marek Marczykowski-G=C3=B3recki
Invisible Things Lab

--bYEBe4gQw1iCtcBR
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCaYDYACgkQ24/THMrX
1yyAAQf/U2PU/dLfm2NOJ8xQosmw1dIAScvQCAoxIbAXx0JIi/Ets+qgCRvp1thE
/pnLR4gvm2mGYAE/nhxTsTvQfv1rLzvAJqVKfJ1BMISwygiYNyAgErmM0SrdjHvQ
IIfTWBeVy2QzyHG0XcNzoe2ZqSfuTrsaGyTMU6B+LHiq+QR9nnPz9wQiwkuiQZ7f
jLD3XWkUYyqnuC+hefLWByIQPnyzTv4yG2WWWCxDJn62La6nTfUXZINe3OlhLQQW
u0MMs5aPpLU2jLmexUEkoeDeUMXVYdQgYyD37gs7uRAv5B3LNcw3hM/VlDftuX5u
0FKx7A3kpwMxyB3lqPJ8arCG8Ftm8Q==
=iGK7
-----END PGP SIGNATURE-----

--bYEBe4gQw1iCtcBR--


From xen-devel-bounces@lists.xenproject.org Tue May 11 12:16:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 12:16:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125751.236697 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgRJX-0002RN-5u; Tue, 11 May 2021 12:16:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125751.236697; Tue, 11 May 2021 12:16:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgRJX-0002RG-2b; Tue, 11 May 2021 12:16:39 +0000
Received: by outflank-mailman (input) for mailman id 125751;
 Tue, 11 May 2021 12:16:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgRJV-0002R6-T3; Tue, 11 May 2021 12:16:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgRJV-0003VP-Mo; Tue, 11 May 2021 12:16:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgRJV-00085Q-CG; Tue, 11 May 2021 12:16:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgRJV-0005Nm-BF; Tue, 11 May 2021 12:16:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NVYAdh2ZndDFIv8Gpqm1KeKbsh/SujE4Zhig1Uu/Un4=; b=xDDc4csgHGWnMtWwOCSmpXegR1
	Y3YudyXZffqFk0z1xz3Yx4Q0O25oaQn4XPhkCatPoitkhsZCYbkf2TXIFhX42YxOZu0oe87dwlmbq
	8yk8BrU1xgSz2XPsYp4PBKxj9H5XMfoZXxApvXHYc4tJp2vu29Yd19CqpNMNIAGJXJ3Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161900-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161900: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1140ab592e2ebf8153d2b322604031a8868ce7a5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 12:16:37 +0000

flight 161900 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161900/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                1140ab592e2ebf8153d2b322604031a8868ce7a5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  283 days
Failing since        152366  2020-08-01 20:49:34 Z  282 days  473 attempts
Testing same since   161900  2021-05-11 01:55:00 Z    0 days    1 attempts

------------------------------------------------------------
6042 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1639451 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 11 12:46:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 12:46:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125757.236711 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgRml-0005w2-HF; Tue, 11 May 2021 12:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125757.236711; Tue, 11 May 2021 12:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgRml-0005vv-EL; Tue, 11 May 2021 12:46:51 +0000
Received: by outflank-mailman (input) for mailman id 125757;
 Tue, 11 May 2021 12:46:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/xBg=KG=amazon.co.uk=prvs=758e73cee=pdurrant@srs-us1.protection.inumbo.net>)
 id 1lgRmk-0005vp-Gd
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 12:46:50 +0000
Received: from smtp-fw-9102.amazon.com (unknown [207.171.184.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 971a21c8-0e00-4345-8fce-efb28f18d447;
 Tue, 11 May 2021 12:46:49 +0000 (UTC)
Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO
 email-inbound-relay-2a-53356bf6.us-west-2.amazon.com) ([10.25.36.210])
 by smtp-border-fw-9102.sea19.amazon.com with ESMTP; 11 May 2021 12:46:41 +0000
Received: from EX13D32EUC003.ant.amazon.com
 (pdx1-ws-svc-p6-lb9-vlan2.pdx.amazon.com [10.236.137.194])
 by email-inbound-relay-2a-53356bf6.us-west-2.amazon.com (Postfix) with ESMTPS
 id 146DAA1865; Tue, 11 May 2021 12:46:40 +0000 (UTC)
Received: from EX13D32EUC003.ant.amazon.com (10.43.164.24) by
 EX13D32EUC003.ant.amazon.com (10.43.164.24) with Microsoft SMTP Server (TLS)
 id 15.0.1497.2; Tue, 11 May 2021 12:46:38 +0000
Received: from EX13D32EUC003.ant.amazon.com ([10.43.164.24]) by
 EX13D32EUC003.ant.amazon.com ([10.43.164.24]) with mapi id 15.00.1497.015;
 Tue, 11 May 2021 12:46:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 971a21c8-0e00-4345-8fce-efb28f18d447
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.co.uk; i=@amazon.co.uk; q=dns/txt;
  s=amazon201209; t=1620737210; x=1652273210;
  h=from:to:cc:date:message-id:references:in-reply-to:
   content-transfer-encoding:mime-version:subject;
  bh=9mYQdexU/j0t5uVV5OxSrAwNJQ/AzFnPxwg3Sa0K8j8=;
  b=SyxpRNur4pfBWr1E93VXSdiYOr/d1El7iIepzUnfl9fzwSmtMygfQxSb
   BKcN47kR7s+/DtwBgbxrBhVSuvjFzHWzCIPoYWOwZr4+PWZS3pbynM+hi
   IaR3SQ5vjYCShWPcj1QTaHQJ31lLr9JPwx8JlIbIfJX/vWOVN0Wouz45x
   4=;
X-IronPort-AV: E=Sophos;i="5.82,290,1613433600"; 
   d="scan'208";a="134361449"
Subject: RE: [PATCH] xen-netback: Check for hotplug-status existence before watching
Thread-Topic: [PATCH] xen-netback: Check for hotplug-status existence before watching
From: "Durrant, Paul" <pdurrant@amazon.co.uk>
To: Marek Marczykowski-Górecki
	<marmarek@invisiblethingslab.com>
CC: Michael Brown <mbrown@fensystems.co.uk>, "paul@xen.org" <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>, "wei.liu@kernel.org"
	<wei.liu@kernel.org>
Thread-Index: AQHXMHlgrSEnNO4f/0CXVVR3ElA0XKrdNJQAgAAEMoCAAAGvAIAAA+GAgAAKCwCAAL1BsIAAPacAgAABMQCAACDP4A==
Date: Tue, 11 May 2021 12:46:38 +0000
Message-ID: <9edd6873034f474baafd70b1df693001@EX13D32EUC003.ant.amazon.com>
References: <54659eec-e315-5dc5-1578-d91633a80077@xen.org>
 <20210413152512.903750-1-mbrown@fensystems.co.uk> <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
 <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
 <YJpfORXIgEaWlQ7E@mail-itl> <YJpgNvOmDL9SuRye@mail-itl>
In-Reply-To: <YJpgNvOmDL9SuRye@mail-itl>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.43.166.209]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Precedence: Bulk

> -----Original Message-----
> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> Sent: 11 May 2021 11:45
> To: Durrant, Paul <pdurrant@amazon.co.uk>
> Cc: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org; xen-devel@lists.xenproject.org;
> netdev@vger.kernel.org; wei.liu@kernel.org
> Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status existence before watching
>
> On Tue, May 11, 2021 at 12:40:54PM +0200, Marek Marczykowski-Górecki wrote:
> > On Tue, May 11, 2021 at 07:06:55AM +0000, Durrant, Paul wrote:
> > > > -----Original Message-----
> > > > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > > > Sent: 10 May 2021 20:43
> > > > To: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org
> > > > Cc: paul@xen.org; xen-devel@lists.xenproject.org; netdev@vger.kernel.org; wei.liu@kernel.org;
> Durrant,
> > > > Paul <pdurrant@amazon.co.uk>
> > > > Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status existence before watching
> > > >
> > > > On Mon, May 10, 2021 at 08:06:55PM +0100, Michael Brown wrote:
> > > > > If you have a suggested patch, I'm happy to test that it doesn't reintroduce
> > > > > the regression bug that was fixed by this commit.
> > > >
> > > > Actually, I've just tested with a simple reloading xen-netfront module. It
> > > > seems in this case, the hotplug script is not re-executed. In fact, I
> > > > think it should not be re-executed at all, since the vif interface
> > > > remains in place (it just gets NO-CARRIER flag).
> > > >
> > > > This brings a question, why removing hotplug-status in the first place?
> > > > The interface remains correctly configured by the hotplug script after
> > > > all. From the commit message:
> > > >
> > > >     xen-netback: remove 'hotplug-status' once it has served its purpose
> > > >
> > > >     Removing the 'hotplug-status' node in netback_remove() is wrong; the script
> > > >     may not have completed. Only remove the node once the watch has fired and
> > > >     has been unregistered.
> > > >
> > > > I think the intention was to remove 'hotplug-status' node _later_ in
> > > > case of quickly adding and removing the interface. Is that right, Paul?
> > >
> > > The removal was done to allow unbind/bind to function correctly. IIRC before the original patch
> doing a bind would stall forever waiting for the hotplug status to change, which would never happen.
> >
> > Hmm, in that case maybe don't remove it at all in the backend, and let
> > it be cleaned up by the toolstack, when it removes other backend-related
> > nodes?
>
> No, unbind/bind _does_ require hotplug script to be called again.
>

Yes, sorry I was misremembering. My memory is hazy but there was definitely a problem at the time with leaving the node in place.

> When exactly vif interface appears in the system (starts to be available
> for the hotplug script)? Maybe remove 'hotplug-status' just before that
> point?
>

I really can't remember any detail. Perhaps try reverting both patches then and check that the unbind/rmmod/modprobe/bind sequence still works (and the backend actually makes it into connected state).

  Paul


From xen-devel-bounces@lists.xenproject.org Tue May 11 14:37:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 14:37:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125768.236724 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgTVN-0000FP-Uq; Tue, 11 May 2021 14:37:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125768.236724; Tue, 11 May 2021 14:37:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgTVN-0000FI-Rw; Tue, 11 May 2021 14:37:01 +0000
Received: by outflank-mailman (input) for mailman id 125768;
 Tue, 11 May 2021 14:35:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=9+NX=KG=ieee.org=elder@srs-us1.protection.inumbo.net>)
 id 1lgTUN-0000EF-9O
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 14:35:59 +0000
Received: from mail-oi1-x236.google.com (unknown [2607:f8b0:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id abef1768-cc19-49ce-99fe-3e5d7e882f8e;
 Tue, 11 May 2021 14:35:57 +0000 (UTC)
Received: by mail-oi1-x236.google.com with SMTP id u16so19215946oiu.7
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 07:35:57 -0700 (PDT)
Received: from [172.22.22.4] (c-73-185-129-58.hsd1.mn.comcast.net.
 [73.185.129.58])
 by smtp.googlemail.com with ESMTPSA id 2sm3341540ota.67.2021.05.11.07.35.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 May 2021 07:35:56 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: abef1768-cc19-49ce-99fe-3e5d7e882f8e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ieee.org; s=google;
        h=subject:to:references:from:cc:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=ETXAFKWQdH8/eG9YgJ2MyXQqUI4w0G+a8+7kidOetNw=;
        b=D+8UcMkB2GUwjj2F27jdDoxAVAL9rdC9MdMoJFlDBNeHzfe67XWP/sw3xSweHhjvu0
         L7zFyh43n9/fy4cUZSlUsyNvNqDkT2TqDuEFXFN77WQ7UwdcaxgaJydbt2h0Tqt9neqT
         2ljICDLxSDdscBg19eDiJRfFGl65YBejwUIvU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:references:from:cc:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=ETXAFKWQdH8/eG9YgJ2MyXQqUI4w0G+a8+7kidOetNw=;
        b=Wn5WeQk0UpbpxuCI4FeB5/Pigw+ZoiMSUuyDy7RQt37Yb1S5cRhhmoH59wo9jJvHGk
         yfijFUG3KGLrDLMhskjETgKdPFgiKW5qdfYkbQzfpo1w5gKLo1lEU8aWrvgZKDlwZm+W
         eLR9oqAwTDi6Xjl8QiTAB5ipq7t8xjYecJMJ7q1w4wpeN2CZbNpDlJktrOhz/W4aHSP9
         GAJf6zrpUyDqgsWn2E/L+2oCy83nKhY6LNto/dI2UURxVPHsgbqpI9WAh1/uO36IAxT6
         UHXk/A/FkrhqCdyyLx0SP6r3vJY3XERdP83LTlwlXRFrx2Q3A6W55sPXXcqBKGM0uimi
         2nhw==
X-Gm-Message-State: AOAM532NwSTatw/ibhP4sYPQfpQ2nqXAT5V2kHwqeHBlpI+Z+WUgiSdE
	87u0A4E6kXkzc03ekBHlIuxoKnOVpx28Aj4F
X-Google-Smtp-Source: ABdhPJyvp7T0CdewBKkqMXtDA5ThSmqkKCGgPnZJ05uagz+Vq5z9FBuvh4nrA0ZkdxuooO6UIklVPg==
X-Received: by 2002:aca:53d8:: with SMTP id h207mr3883260oib.177.1620743757147;
        Tue, 11 May 2021 07:35:57 -0700 (PDT)
Subject: Re: [PATCH v3 1/1] kernel.h: Split out panic and oops helpers
To: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
References: <20210511074137.33666-1-andriy.shevchenko@linux.intel.com>
From: Alex Elder <elder@ieee.org>
Cc: linux-xtensa@linux-xtensa.org, openipmi-developer@lists.sourceforge.net,
 linux-clk@vger.kernel.org, linux-edac@vger.kernel.org,
 linux-alpha@vger.kernel.org, linux-kernel@vger.kernel.org,
 linux-arm-kernel@lists.infradead.org, linux-mips@vger.kernel.org,
 linux-parisc@vger.kernel.org, linuxppc-dev@lists.ozlabs.org,
 linux-s390@vger.kernel.org, sparclinux@vger.kernel.org,
 linux-um@lists.infradead.org, linux-hyperv@vger.kernel.org,
 coresight@lists.linaro.org, linux-leds@vger.kernel.org,
 bcm-kernel-feedback-list@broadcom.com, netdev@vger.kernel.org,
 linux-pm@vger.kernel.org, linux-remoteproc@vger.kernel.org,
 linux-staging@lists.linux.dev, dri-devel@lists.freedesktop.org,
 linux-fbdev@vger.kernel.org, linux-arch@vger.kernel.org,
 kexec@lists.infradead.org, rcu@vger.kernel.org,
 linux-fsdevel@vger.kernel.org, xen-devel@lists.xenproject.org
Message-ID: <c6fa5d2c-84e2-2046-19f0-66cf5dd72077@ieee.org>
Date: Tue, 11 May 2021 09:35:54 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.1
MIME-Version: 1.0
In-Reply-To: <20210511074137.33666-1-andriy.shevchenko@linux.intel.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/11/21 2:41 AM, Andy Shevchenko wrote:
> kernel.h has been used as a dumping ground for all kinds of stuff for a
> long time. Here is an attempt to start cleaning it up by splitting out
> the panic and oops helpers.
> 
> There are several purposes of doing this:
> - dropping dependency in bug.h
> - dropping a loop by moving out panic_notifier.h
> - unload kernel.h from something which has its own domain
> 
> At the same time, convert users tree-wide to use the new headers. For the
> time being, the new header is included back into kernel.h to avoid twisted
> indirect includes for existing users.
> 
> Signed-off-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> Reviewed-by: Bjorn Andersson <bjorn.andersson@linaro.org>
> Acked-by: Mike Rapoport <rppt@linux.ibm.com>
> Acked-by: Corey Minyard <cminyard@mvista.com>
> Acked-by: Christian Brauner <christian.brauner@ubuntu.com>
> Acked-by: Arnd Bergmann <arnd@arndb.de>
> Acked-by: Kees Cook <keescook@chromium.org>
> Acked-by: Wei Liu <wei.liu@kernel.org>
> Acked-by: Rasmus Villemoes <linux@rasmusvillemoes.dk>
> Co-developed-by: Andrew Morton <akpm@linux-foundation.org>
> Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> Acked-by: Sebastian Reichel <sre@kernel.org>
> Acked-by: Luis Chamberlain <mcgrof@kernel.org>
> Acked-by: Stephen Boyd <sboyd@kernel.org>
> Acked-by: Thomas Bogendoerfer <tsbogend@alpha.franken.de>
> Acked-by: Helge Deller <deller@gmx.de> # parisc
> ---
> v3: rebased on top of v5.13-rc1, collected a few more tags
> 
> Note WRT Andrew's SoB tag above: I have added it since I took part of the
> changes from him. Andrew, feel free to amend it or tell me how you would
> like this handled.
> 

Acked-by: Alex Elder <elder@kernel.org>

. . .

> diff --git a/drivers/net/ipa/ipa_smp2p.c b/drivers/net/ipa/ipa_smp2p.c
> index a5f7a79a1923..34b68dc43886 100644
> --- a/drivers/net/ipa/ipa_smp2p.c
> +++ b/drivers/net/ipa/ipa_smp2p.c
> @@ -8,6 +8,7 @@
>   #include <linux/device.h>
>   #include <linux/interrupt.h>
>   #include <linux/notifier.h>
> +#include <linux/panic_notifier.h>
>   #include <linux/soc/qcom/smem.h>
>   #include <linux/soc/qcom/smem_state.h>
>   

. . .


From xen-devel-bounces@lists.xenproject.org Tue May 11 14:59:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 14:59:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125774.236736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgTqf-0002mb-PJ; Tue, 11 May 2021 14:59:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125774.236736; Tue, 11 May 2021 14:59:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgTqf-0002mU-LX; Tue, 11 May 2021 14:59:01 +0000
Received: by outflank-mailman (input) for mailman id 125774;
 Tue, 11 May 2021 14:58:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgTqd-0002mO-M8
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 14:58:59 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e2ad1fb-5db9-4390-a90e-2d9f69239b0c;
 Tue, 11 May 2021 14:58:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e2ad1fb-5db9-4390-a90e-2d9f69239b0c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620745138;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=7lkkXU0a3GZrcNgZ/uu5jEOrp6kp8vqX57+wxJFGPME=;
  b=ACfC0JZoMNM/XCXrI5UJ+9uEaXobJFvPcTYuqii8noafqTyONHaOtVaO
   4zA1bbf8GPc09A1mYwL7Z7rOxVkjYvxUQPTg9WIDgY4KZPEcwjYThmOIt
   z6LC4/Iyp6HYGHlhAak/j4tUXxFli7qM0gVr/FpDbX56C87OdCGkLoXG4
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: yjN3speGgGhHmE6x3mNoRBzKCJuVBWWgRhEmEBfNgdMjYdrmYr+mmjPe8870KyDN4q1keMFjy0
 mQPytvfjCJCKKAY/nRbLOdQJN74RqmbPdqBUiZMK7ivxDu8DiCFUEYoe+jLeSz9YND4kWqxggo
 3QvWF4K+uTmNcKzlWnz0t3HKA9MhxiVWYdK422FumHo9X4f+TwZl3c6P2OM6D2z3LFdWMCSCfp
 egkF61aNgKQIXQUcw1tnTCjr7peB1SD0wQ3aTxUZUQOtNh1Sef5qquMWtKQc4v6aWbDMSpS0SI
 zfQ=
X-SBRS: 5.1
X-MesageID: 45076739
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:fq4U7altiTet/eAGhrxlP6ZM+ZTpDfIr3DAbv31ZSRFFG/GwvM
 ql9c5rsSMc6Qx8ZJhEo7u90ca7Lk80maQa3WByB9eftXjd2VdARbsKheGO/9SKIVycygcy79
 YET4FOTPH2EFhmnYLbzWCDYrEdKQC8gcKVuds=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="45076739"
Date: Tue, 11 May 2021 15:58:54 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 05/14] tools/libs: guest: Use const whenever we point to
 literal strings
Message-ID: <YJqbrs482KY23QQE@perard>
References: <20210405155713.29754-1-julien@xen.org>
 <20210405155713.29754-6-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20210405155713.29754-6-julien@xen.org>

On Mon, Apr 05, 2021 at 04:57:04PM +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Literal strings are not meant to be modified, so we should use const
> char * rather than char * when we want to store a pointer to them.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
> diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c
> index 2953aeb90b35..e379b07f9945 100644
> --- a/tools/libs/guest/xg_dom_x86.c
> +++ b/tools/libs/guest/xg_dom_x86.c
> @@ -1148,11 +1148,12 @@ static int vcpu_hvm(struct xc_dom_image *dom)
>  
>  /* ------------------------------------------------------------------------ */
>  
> -static int x86_compat(xc_interface *xch, uint32_t domid, char *guest_type)
> +static int x86_compat(xc_interface *xch, uint32_t domid,
> +                      const char *guest_type)
>  {
>      static const struct {
> -        char           *guest;
> -        uint32_t        size;
> +        const char      *guest;
> +        uint32_t       size;

It seems that one space has been removed by mistake just before "size".

The rest looks good:
Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 11 15:03:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 15:03:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125780.236747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgTv0-0004IN-BS; Tue, 11 May 2021 15:03:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125780.236747; Tue, 11 May 2021 15:03:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgTv0-0004IG-8Z; Tue, 11 May 2021 15:03:30 +0000
Received: by outflank-mailman (input) for mailman id 125780;
 Tue, 11 May 2021 15:03:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgTuz-0004IA-Hf
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 15:03:29 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f05fb18-9082-4348-bdd5-515f3b03448f;
 Tue, 11 May 2021 15:03:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f05fb18-9082-4348-bdd5-515f3b03448f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620745408;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=Ive6uwLR7sMAV0tERTAkBNbCIpG99cEIedUrrl5rArQ=;
  b=QjTksKL5+FxTir4c72NcDvRo2hB6/CotnQV36RXza8TF8D+JDI1g6i6x
   jKKkBLkD/IF+lkUS0h8s2mCiMnqF9Tv1BpokdYR0f1W83uLUTxS0fLxoJ
   DKypeImCI6t8mxkhD/ULEivZH8DMR43dq5DHlPgG8G4HjE2lKaAIrBu4J
   E=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: u+OdYE2d9v5fvHCQ/rLOBtmfpBb+Z1zmb0fNJ4datVPHJIMG7FnqzqqHd3FDZSb69orUOjrM/p
 yALvlKIVpT4zAwdyq3WlwPdWsLDlEY0Xz+awpOKn+tteFe01+YACThZcvi4DcbQwrWo737+ZZ0
 cCqhnJ/cPEvJAkxcuD4jP6p0dyrFA2qCeoojy5Xu55reIS3/DLt7cG2wWIFph08g/VwpWjqoic
 9xghSp9CdyvXbLm9olSvTpx+Nwe7dhQ/ZMlVtn1xOFTcpV/818rK+VCr8X8Kr5EfRtKeXNNCUM
 WyY=
X-SBRS: 5.1
X-MesageID: 43539642
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:wiBOm6v0tX0XhxJ9zhYdxSId7skDjNV00zEX/kB9WHVpm6yj+v
 xGUs566faUskd0ZJhEo7q90ca7Lk80maQa3WBzB8bGYOCFghrKEGgK1+KLrwEIcxeUygc379
 YDT0ERMrzN5VgRt7eG3OG7eexQvOVuJsqT9JjjJ3QGd3AVV0l5hT0JbTpyiidNNXJ77ZxSLu
 v72uN34wCOVF4wdcqBCnwMT4H41qf2fMKPW29+O/Y/gjP+9Q+V1A==
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43539642"
Date: Tue, 11 May 2021 16:03:24 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 06/14] tools/libs: stat: Use const whenever we point to
 literal strings
Message-ID: <YJqcvEeRLfwbZpEV@perard>
References: <20210405155713.29754-1-julien@xen.org>
 <20210405155713.29754-7-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20210405155713.29754-7-julien@xen.org>

On Mon, Apr 05, 2021 at 04:57:05PM +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> literal strings are not meant to be modified. So we should use const
> char * rather than char * when we want to store a pointer to them.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 11 15:19:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 15:19:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125785.236760 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUA0-0005uw-Pc; Tue, 11 May 2021 15:19:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125785.236760; Tue, 11 May 2021 15:19:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUA0-0005up-MQ; Tue, 11 May 2021 15:19:00 +0000
Received: by outflank-mailman (input) for mailman id 125785;
 Tue, 11 May 2021 15:18:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgU9z-0005uj-NT
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 15:18:59 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 010bd790-90e9-4f7b-93d1-6ecf39dd8287;
 Tue, 11 May 2021 15:18:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 010bd790-90e9-4f7b-93d1-6ecf39dd8287
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620746338;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=jze14czM65N3ND9Hzys2dewBaD36ReStQXcZaSi3C0w=;
  b=MCmBw3eXbwpvoMGbV1xIrO5hTh4PaGJBn5w9PtbWSUOzzdD804+XAnOp
   J6uaaFysPw9yT8J04BSjPXKck/QKcgpV1p1rb5ozX2+A+IOuxukvAGlLo
   wpZNBTFnnVtBdAq5YWKwePQrf/Ki1KgmRVM0v7pMUGDua0Uxsa3vUQWHP
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: S2mVILq5LMBUghPiJ8hIufZTUZkpE2kBGKqhBCdC45bvBgXGCsenxB7CLNhu8PhJgg6UR5m3fj
 5UzMFmG68p4JXtzY364+g84f65UeW3VBszPImKXV0uyZbImbgLj8344JfsKoRm538ywycJ010S
 wtsa4nJCjXfcmEfHbiqbfNjEIWkZ8IagWx3nySYRJLk3DufSBIniK5TMdTM5PpHwcQmctwWgu6
 +0u+IIWw8/sLSiq8hPUuC5h6RXUBoU096Fly351wmqXMJx9wPvzbYo0HqXwDXcGAUzSYbsUiPy
 gys=
X-SBRS: 5.1
X-MesageID: 43935543
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:kq9hF6ryfWJjgxxiF5W9TNkaV5oheYIsimQD101hICG9Ffbo8v
 xG/c5rtyMc5wx8ZJhNo7+90cq7MBDhHPxOgLX5VI3KNGOKhILBFvAH0WKI+V3d8kPFmNK0is
 xbGJSWoueAamRHsQ==
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43935543"
Date: Tue, 11 May 2021 16:18:55 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 09/14] tools/console: Use const whenever we point to
 literal strings
Message-ID: <YJqgXz1s8N3T4+Fo@perard>
References: <20210405155713.29754-1-julien@xen.org>
 <20210405155713.29754-10-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20210405155713.29754-10-julien@xen.org>

On Mon, Apr 05, 2021 at 04:57:08PM +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> literal strings are not meant to be modified. So we should use const
> char * rather than char * when we want to store a pointer to them.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
> diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
> index 4af27ffc5d02..6a8a94e31b65 100644
> --- a/tools/console/daemon/io.c
> +++ b/tools/console/daemon/io.c
> @@ -109,9 +109,9 @@ struct console {
>  };
>  
>  struct console_type {
> -	char *xsname;
> -	char *ttyname;
> -	char *log_suffix;
> +	const char *xsname;

I think the const of `xsname` is cast away in console_init() in the same
file.
We have:

    static int console_init(.. )
    {
        struct console_type **con_type = (struct console_type **)data;
        char *xsname, *xspath;
        xsname = (char *)(*con_type)->xsname;
    }

So constifying "xsname" in console_init() should be part of the patch, I
think.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 11 15:38:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 15:38:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125791.236776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUSV-00005I-Fo; Tue, 11 May 2021 15:38:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125791.236776; Tue, 11 May 2021 15:38:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUSV-00005B-CK; Tue, 11 May 2021 15:38:07 +0000
Received: by outflank-mailman (input) for mailman id 125791;
 Tue, 11 May 2021 15:38:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgUST-000052-FO
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 15:38:05 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26308c16-cf12-4f24-9b09-357f6dc5e6f7;
 Tue, 11 May 2021 15:38:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26308c16-cf12-4f24-9b09-357f6dc5e6f7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620747484;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=/mVvZJb6a3J6JyU/mCMymSi34pknKkya6IqAQC+qieg=;
  b=gT/PJaj/EBrkT7Wtm0nf4tvzXybN017Juoiv7ykVOUo+CoNvX1MNSbfR
   4HGGnUA7+IaXpJm44NNIGQ6mNcKkmmBJ146TNkLvSJEvPApOYyNLJ0grl
   92b6FD8RJxwrTjZGrQCBHQQflRPOIjkdz4dU7UpLUYoQYIS1p7Wrj2oEU
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: MCG4+Il14Rkz9XYzlt51lc8Fa6/pxPOM7T7EvJXr2OsGd4RBVnBy17ACDF4M5D7K3DzAfeXVmO
 QPaF+1OgA/84p+RzY2MP+2SebSZZfI28MI8RWyz1byWeWOMUzuToApjZMZNM887FztxAvV+j2o
 MAk+YIMtpe+HB/QJAuFPSaYFul3pkWbuHxz81j1iZCHY2RGicS2SHOldrVIq8IjNWS/H9UQIFC
 vUk0N22nKlmTFd4ji1ILvPcay5Pem1dg+2mBkBeGPTBImZvOHnFWhFui8Wboil/+62+yGNwIFR
 anE=
X-SBRS: 5.1
X-MesageID: 43659972
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:LauO+6CWGkYeJZPlHelc55DYdb4zR+YMi2TDt3oddfU1SL3+qy
 nKpp4mPHDP5wr5NEtPpTnEAtjifZq+z+8Q3WByB9eftWDd0QPFEGgh1/qB/9SJIUbDH4VmpM
 JdmsZFaeEZDTJB/LrHCAvTKade/DFQmprY+9s3zB1WPHBXg7kL1XYeNu4CeHcGPjWvA/ACZe
 Ohz/sCnRWMU1INYP+2A3EUNtKz2uEixPrdEGY77wdM0nj0sQ+V
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43659972"
Date: Tue, 11 May 2021 16:37:59 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 11/14] tools/misc: Use const whenever we point to literal
 strings
Message-ID: <YJqk152zKuEmv3mY@perard>
References: <20210405155713.29754-1-julien@xen.org>
 <20210405155713.29754-12-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20210405155713.29754-12-julien@xen.org>

On Mon, Apr 05, 2021 at 04:57:10PM +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> literal strings are not meant to be modified. So we should use const
> char * rather than char * when we want to store a pointer to them.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 11 15:43:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 15:43:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125796.236788 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUXn-0001al-2h; Tue, 11 May 2021 15:43:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125796.236788; Tue, 11 May 2021 15:43:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUXm-0001ae-Vp; Tue, 11 May 2021 15:43:34 +0000
Received: by outflank-mailman (input) for mailman id 125796;
 Tue, 11 May 2021 15:43:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgUXk-0001aT-Vl; Tue, 11 May 2021 15:43:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgUXk-0007B8-Op; Tue, 11 May 2021 15:43:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgUXk-0002hZ-DR; Tue, 11 May 2021 15:43:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgUXk-0004E9-Cy; Tue, 11 May 2021 15:43:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qX8dHdycUh3M6U2bXbJUX5qNNZXrOZg6io/zAcIQe2w=; b=pPGDnKfI3LEQI5H05eER4nC4cd
	Lu9MGui7PRw512nS81482tMJscWzLW7eTOj56ulfBUX9paSZupstSMLztUE+ytGjx7e6c3xc8THTV
	OksnCiB+CJlm7y/n9IarD4ex7u2DCTUL2NxXJB5LrnWSV+JOioNwEoOOuD8dmMM//MsM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161902-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161902: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    qemu-mainline:build-arm64:<job status>:broken:regression
    qemu-mainline:build-arm64-pvops:<job status>:broken:regression
    qemu-mainline:build-arm64-xsm:<job status>:broken:regression
    qemu-mainline:build-arm64:host-install(4):broken:regression
    qemu-mainline:build-arm64-xsm:host-install(4):broken:regression
    qemu-mainline:build-arm64-pvops:host-install(4):broken:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:build-arm64-libvirt:build-check(1):blocked:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=e58c7a3bba3076890592f02d2b0e596bf191b5c2
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 15:43:32 +0000

flight 161902 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161902/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 152631
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152631
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                e58c7a3bba3076890592f02d2b0e596bf191b5c2
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  264 days
Failing since        152659  2020-08-21 14:07:39 Z  263 days  481 attempts
Testing same since   161902  2021-05-11 04:56:24 Z    0 days    1 attempts

------------------------------------------------------------
490 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64-pvops host-install(4)

Not pushing.

(No revision log; it would be 149039 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 11 15:46:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 15:46:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125803.236803 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUaI-0002Iz-PA; Tue, 11 May 2021 15:46:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125803.236803; Tue, 11 May 2021 15:46:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUaI-0002Is-KW; Tue, 11 May 2021 15:46:10 +0000
Received: by outflank-mailman (input) for mailman id 125803;
 Tue, 11 May 2021 15:46:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgUaH-0002Im-F6
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 15:46:09 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e4a8ec25-81c0-4a8e-a370-cf1568eb2f9e;
 Tue, 11 May 2021 15:46:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e4a8ec25-81c0-4a8e-a370-cf1568eb2f9e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620747968;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=V8fn6TqoQExzpifWiHwkN0EtjFDdty7np472LGpGsck=;
  b=U6WS516doaWXDMoK2shPXTFuG5MV4AEFVMiaGmtXF5XYWHrPYO4UOhR5
   YLXsm3Qz/GeQwyVjDGap4wAQ4NJMOWuDpoVnwe7MTI7QnE0/A1nVD/nDe
   2/cuzw1lRwZnyA1f4D5rqJXv9clETCqMSMnd8RyGoiY80aZ2XXvciO6PU
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 0Vm+3RgCcX1GTvoz669DkBeiZmbA6rbzmSJgLHRgkUcN4hCFVpqQm4XB/BTQWhXzGSIQhHQqBF
 gajVUHTBRzZdWzOFRMrWImSXyROvjyt0x52oEBkCnZaZWmFSZO5yOf/J8c33EvANMWeVqv6EB1
 hdtFH6534k6xd2igpJCvREPBB+nYrwvPdFtnVBZ3K+7eaNNibEoSJIV0jsT7jRPuQgn0iz4pUE
 uTziQaqNjtoededmqZ0rM6F8/aI/TGXGHLVXkespZX4EVBskDAzD9vq1zsA2Mup2cVd2KeMzjf
 Sko=
X-SBRS: 5.1
X-MesageID: 43350723
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:95EH3KrANCyAMtaAs6bRfuEaV5oreYIsimQD101hICG8cqSj+f
 xGuM5rsSMc6QxhPU3I9ursBEDtex/hHNtOkO4s1NSZLWvbUQmTTL2KhLGKq1aLJ8S9zJ8/6U
 4JSdkZNDSaNzlHZKjBjzWFLw==
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43350723"
Date: Tue, 11 May 2021 16:46:04 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>, Ian
 Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 12/14] tools/top: The string parameter in set_prompt()
 and set_delay() should be const
Message-ID: <YJqmvN2sIhd+benU@perard>
References: <20210405155713.29754-1-julien@xen.org>
 <20210405155713.29754-13-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20210405155713.29754-13-julien@xen.org>

On Mon, Apr 05, 2021 at 04:57:11PM +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Neither string parameter in set_prompt() nor set_delay() is meant to
> be modified. In particular, new_prompt can point to a literal string.
> 
> So mark the two parameters as const and propagate it.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 11 16:08:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 16:08:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125810.236815 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUvk-0005QM-Ha; Tue, 11 May 2021 16:08:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125810.236815; Tue, 11 May 2021 16:08:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgUvk-0005QF-Da; Tue, 11 May 2021 16:08:20 +0000
Received: by outflank-mailman (input) for mailman id 125810;
 Tue, 11 May 2021 16:08:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tiF3=KG=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lgUvj-0005Q9-1e
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 16:08:19 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f8b0f73-7636-49fd-b1ee-168d3004a6ff;
 Tue, 11 May 2021 16:08:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f8b0f73-7636-49fd-b1ee-168d3004a6ff
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620749297;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=J+jei1tDOwEjJHXoH3QKMABLnE7yv5zL+AmXQXrxrjs=;
  b=ZGLjIc2Uv2QyWGkKhZ61iCL1VoKTd7R/00mIqDMrVwUrvYrsk/24gZ/i
   H2Ndqufz/0WYL2rvTYAe5mwygSGTz8g5GCxYbAK6D0yltgR8KYON8fQrf
   sM6ad9SOoB8oHNIw+sk+/Z/MQ1bQgjj8LR91Ugv1C3bV1/HpZ+wOCPMFr
   A=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: kJGEdoc+Vyi45IScvXdSAb2d8Ykjr36AhAkSfOqwp0z86vfTqj3MBXFoWYds+W5WhEmPL3sPM9
 R8iypIiGWFNNV2bQBHvMfc9F2nuoKfP28LeOHGJf70XnZG/VsBx4Bk86/D1Qz7a7WFGZkrdDC8
 K6NQ9Si0nyI4v36eW9X2S+3o6H9nT7ItcQftwZdcapW0L2BukRpOqOkUNt2rCMZGS2C/wOcmsp
 l0t9G33BVyKbA/26o8QD0HKvs1PkuwDfr1M6H4m4fwi0y1k6wHB5z+IdxnyfsSObQ2Y3eb4zsu
 vtM=
X-SBRS: 5.1
X-MesageID: 43942879
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:PYFpc636sGwh1k3nOoPMiwqjBNEkLtp133Aq2lEZdPU1SK2lfq
 +V98jzuSWftN9zYh8dcLK7VJVoKEm0naKdibNwAV7IZmbbUQWTQb1f0Q==
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43942879"
Date: Tue, 11 May 2021 17:08:13 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 13/14] tools/xenmon: xenbaked: Mark const the field text
 in stat_map_t
Message-ID: <YJqr7dCNUQhEiI/B@perard>
References: <20210405155713.29754-1-julien@xen.org>
 <20210405155713.29754-14-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20210405155713.29754-14-julien@xen.org>

On Mon, Apr 05, 2021 at 04:57:12PM +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The field text in stat_map_t will point to string literals. So mark it
> as const to allow the compiler to catch any modification of the string.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 11 16:42:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 16:42:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125817.236827 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVSU-0001Ba-9t; Tue, 11 May 2021 16:42:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125817.236827; Tue, 11 May 2021 16:42:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVSU-0001BT-6g; Tue, 11 May 2021 16:42:10 +0000
Received: by outflank-mailman (input) for mailman id 125817;
 Tue, 11 May 2021 16:42:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Tq61=KG=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lgVSS-0001BN-5l
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 16:42:08 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06c0b46c-34c5-4b8b-bfda-6575d2e01023;
 Tue, 11 May 2021 16:42:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06c0b46c-34c5-4b8b-bfda-6575d2e01023
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620751327;
  h=from:to:cc:references:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=J34mvslvyE2BkQ5cxj3q/tvzkRJ4tyApqnGhdsNAF4Q=;
  b=cflONfX42DAdlJPdCYmiSzLw2UL7zDNuNcdKQE3hFcNGcD7dOOUnby+b
   0fEdO25akGseyCPSWJ9vkCaDJBsN3b1zqvAspAYgKNAKVpPTrOOXj8W53
   YhskXznInLWD77ay3yGvzmLCLUfOirTkX1ER56epJIw19QTSNA7vakndu
   E=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: LsL6Xre9yOcBUpJlBxEOY4eXtl+UUyTpxDAyLpN8uRdntYUqWlHKLe8nn6F+CN2eBiN9rSr42a
 ERDqUtwX9AB1xaAxYFARXOSK4c4JFXmH/dnwWV2iTnyz4katlLAkyCLo3yoA9K6L9ArT+Rd1RT
 1Bu9+kzu88eeIVYZxgAcqKrFilYpPAvOUhpziLocRPQCygOWf6HChMskR463KNj468sFrMP3cg
 5yA8gSuZG7qM70lmF+l5+4A+L65BGyqxFqr+cy/OEn01964nZWEGmVmXddFTqg0khV27EBPeFo
 STQ=
X-SBRS: 5.1
X-MesageID: 43667177
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:uC2XXqh1J0EeZJod35JZP/AJ0nBQXh4ji2hC6mlwRA09TyX5ra
 2TdZUgpHrJYVMqMk3I9uruBEDtex3hHP1OkOss1NWZPDUO0VHARO1fBOPZqAEIcBeOldK1u5
 0AT0B/YueAd2STj6zBkXSF+wBL+qj6zEiq792usEuEVWtRGsVdB58SMHfiLqVxLjM2YqYRJd
 6nyedsgSGvQngTZtTTPAh/YwCSz+e78q4PeHQ9dmca1DU=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43667177"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PNlbpEYNq+rAjKMCq+smjMd8B29YnWxAgqbQU3wKlMrJHAdBvkY1U+p61edCY50C8FSubAS6CwY96xyeQirdX46R9q9oSdOYPqDB7KrPODzsxLhxDmzbsMBWLMjWTgVLJRrzOH0489FlWvtSF93EoJ/UWCQJ0ZIjzI3dLQ5x3NHWDWX0K0MEQG3IuFxNiQbBNCUIg9Kk270ALARyY1HG4HFslnX1kzVxbeLanZefjOBSvzaljotChbXfYmciuIbGlt8hDSctC9aHYCC9mM7Qb0dMb84aHKKXoIcjI1kffLMI6dCn9yUpU8Ds9XvUh7RNC27NyfnCbVSAS2FVGloOqw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=J34mvslvyE2BkQ5cxj3q/tvzkRJ4tyApqnGhdsNAF4Q=;
 b=fu3M7EJguc1+X7hCpi0s/nPF4wINxDM87NNeigH74whNBZ5/OhuG0azDF8o6ibl+9DOs6m8vxkdF/FqoViabnaKRxn5mlJY4ab8laZDSJkZQA56/V7FdKsVzuVG4qIERR24eEOoH+2NkXjcbDy1ps1IXsP9NrN+0xGu1LObG3yvtBh4Eyshv0gTxsf3dlXsFpxyWPUyb14U4O6ChlM1EbgZCILyTwh8YnQaBWmKA6jIhM8uN5AdgxVjB3MJYbuhwmTeCVNyfQ1bWaDtOHZyfAB0arcbK4fzA29EbbuU8yJCy/X22whopBr0uE+eCrDSW3BckVpIRnPkGKR7OaBY+zg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=J34mvslvyE2BkQ5cxj3q/tvzkRJ4tyApqnGhdsNAF4Q=;
 b=hBlAbWBPktfba4aOKtdqTgg6O3ybrD7QUOFw8GYiaEWBcgmXQP+rsLjxxek2NcRMCJ5OeLEg5ms36+waI0uVbfaz3TENLikW4PX3h4gKSt7qn2xnCUtczNFcVbPHgwbOXYtWJqELaN2cQjdMUd/pLl60WFJqpWi3pA0+4IwIs2s=
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
	=?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <8ba8f016-0aed-277b-bbea-80022d057791@suse.com>
 <5a954be8-e213-36d8-27da-4c51243dc280@citrix.com>
 <f515fdfb-d1a6-56d8-5db3-ebddeed23806@suse.com>
Subject: Re: [PATCH v3 03/22] x86/xstate: re-size save area when CPUID policy
 changes
Message-ID: <f16afc8a-ccd4-7e5e-e08d-d96597c6e8ab@citrix.com>
Date: Tue, 11 May 2021 17:41:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <f515fdfb-d1a6-56d8-5db3-ebddeed23806@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0396.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:f::24) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 640a0976-1a9b-452f-1601-08d9149bb3f1
X-MS-TrafficTypeDiagnostic: BYAPR03MB3429:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB34294EADB389D9D45BA7475ABA539@BYAPR03MB3429.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: Gbwht9Do6uZsRm1pN5qwG2yjoXtgCv6qizCE7jhNtkv+RRmCcKB1uQFcG5IYna9fSyUj5V4I2q9UdqFtKt3doSHL7xX+E/k+5KVd2ejuD+hFIrFNc6EFiEzd4gRQBhTRld/pncKdgwHd0D74UDrOupf2gt6GDpb0/YtSgNXKVH/Ba5fv99iJ8bE+Q9+6a7IO1rXL3NmEfIYrqsoJwjJQAWo01oBJNDULgoyaqzK3X3tCrqgdLHf4iOGnTsbG5ubp3/DDwTRHhHUICnteR7X4t+GK5xA2/FkPP4huy1BiBq4cX2/uCWAKNlnPpcKQ5cEDltfpx9lWt3jfDcp+tbbaZR6+We/HhKyyGhRVvxWWO/Td3AEMgnF3O+s1GOH45SN1wyo6aZhNYaBCpbnGTRLlMYFpyfkZYGsxXD0iM9XYC6XsgMRfyYq4RnEXz7fxx0MNOpthZgv2MN8iwqbReBbicDIWcGaYjKCbOsW39sm0eJW1Nf5UoMyzaRhmK7+10lJY4Lnzhopts3i36Dqq94Zl03o7jT/ZkvRdEFX+tq5nZYtwbatPbxvLlXZZ9lGYVBGccFxJNe8EiIwP021lwXhr62uQ1QP33jq60LNTkNn4icD6UxT52bT8c5pe/myT3UbNjUegVe0vH75aZIEo9M874cWEGgJqW6+HYFO+cGtj9jIAvScWVU8Vxvpm7rvYc0c3
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(396003)(376002)(39860400002)(366004)(136003)(316002)(16576012)(186003)(36756003)(6666004)(83380400001)(4326008)(8676002)(2616005)(54906003)(8936002)(5660300002)(16526019)(2906002)(31686004)(956004)(478600001)(26005)(66946007)(53546011)(66476007)(38100700002)(66556008)(86362001)(31696002)(6486002)(6916009)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?N0tFeldZZ2N5cVZOYnFXdkJobW9vRHhjTVFFUjVvR2NFTVd0NE9GNXBKVFBD?=
 =?utf-8?B?aGphV0p2bjV0UEYzTnZOamdpM0szMDBoK2RQbFJCZEp1YnlLbGdFQ2dTZGk3?=
 =?utf-8?B?NElheHRJZ3J5QmUvNTcrT2FzMldYblhRazcxQldCUWUrUmxxQ2p4R0kzcVAr?=
 =?utf-8?B?UmNFdnhZUytncXAvUldEcjI4MEtyOGkwNTRic0NFbkZNdXdOUFAwZEkzbFN5?=
 =?utf-8?B?TDE0MjY5WEFRaTV5NEdSaWNZZnZVbGpscWptSk0rOEVvUVlQcFNSU0RoKzRP?=
 =?utf-8?B?bzZkL2lNZklkWEE1SGVZMWVZMnlocUliTVd1UVo3SWV6Ni9KVnNjejVPQk54?=
 =?utf-8?B?UGFLUlhyWFpBN3lDS2JweXprdldVdGxQMlhiSVlNOXRNM2RBQVFzVUpLRk51?=
 =?utf-8?B?VnFpUkhoUm5VdDJiVWRka1BCWlhRdElxY21CK2YrUzU1alMyOW9BeWpiTTFk?=
 =?utf-8?B?YWRQK1JFdStPcnB3c0V1ZDVPZkpST1Z0UXdld2pYWmc0Q20weWF1b1Z6c0Jv?=
 =?utf-8?B?K2ZNdklTOFBpNGpMYUIxU05YeGQ2WmJ5TzBrUEx1VWZXRXhEZUNsTVNDNEsz?=
 =?utf-8?B?VmU4ZytITzRDYXlrQnYyUUNHVjhBRDBqMEc0ZmEvYk1wMlZDRk1uWlZRS0VX?=
 =?utf-8?B?UlM2VndaUjUxSndzSk04OXppL2ZJVGZHVmJjVjU5Yk9WMUFGMWtORmlYWk54?=
 =?utf-8?B?c0FZUXZ5K0ZwVFNoTWlSdzExWEtWSmJZZ1NpTWdjOCthc3MwMG1kVE9FaGlB?=
 =?utf-8?B?NzY1Y3FMWE9xRHY5cGswZjhBSHpCNEFTZ1c0akk3clN4TkJlNXVDckd6M1E1?=
 =?utf-8?B?ajFtTW1JR094WFgrYjE0ejIxbzNoKzNwVk1hR3pJNGwxcmZvSjN3RzFRT1pX?=
 =?utf-8?B?ZGRGS3VDcVptbmpwTkcyS1FIaitSMkNGVWFVbVVwMzdSWjl3Z1NaR3BPdkQ3?=
 =?utf-8?B?N3pxa2VBcTVHdDBrcFN5dThEdWFZZDNibzBVS0t0SnRUSlBHbjJTNGE1SGFT?=
 =?utf-8?B?ZGRINFN2bko4UXBWVVZicWNBZWJBdjI3UUl4RW9VazFnOVJ6dk85UEpEeEth?=
 =?utf-8?B?b3FOTzMzZ3FBT2ZVVWh5WEVId0F1L3U5OEc4UWJUYjYwM3Z3d1hNUWk3Z0JP?=
 =?utf-8?B?SWRCUSt5ZFFEQk1XZkE0eGU5N2pDTEwxUVZEeUJ3VHpnUzUzYjV1MkJzYkFE?=
 =?utf-8?B?VGhUMmozK3g2eE1tRU9wWFJWeG1LUEd2bVdKVVVZTmdhbE16ZU8rOEVXVkw1?=
 =?utf-8?B?WmJXWkx6MGxEeUViQTBwTERYU1Y5VnB4Szk3bVlsalk3TGlBTnI2RU44aFBz?=
 =?utf-8?B?bmsvK2FGTXQ0dGpzcEk1QlBGa0xDMFUwU1JScldFdWN1ZHVuMXBRWk9vNmgy?=
 =?utf-8?B?V0VKQlRRczhkcmZhVlFPVTJac0JyenhMVGNQbDQ2Y2NTNlIwWCtBcW1kbDYr?=
 =?utf-8?B?dHZwYWlCdXNXeEhmTGhNY2UvUC83bTNOQWM4bWl1NGl4MkhYcHVWblQxUHFS?=
 =?utf-8?B?bHdqVTBGVlhOV3JzUlI4L2FJbWhmVEwrdlBJc3ZWc3YzL0ErNWdjNHFJVFpj?=
 =?utf-8?B?R0hidlBuN1g0MGh4U1FJam50aEl6V1F3eFZHVm9VaXR0VE1ET3JzNlYxcUx5?=
 =?utf-8?B?MnVyckw3RlM2ZEFXMnUxMzhtME5LdDhwZTRxTHVZclZaK0MzUFBVb21HQi9Z?=
 =?utf-8?B?S2hTTGVwaDJZY1ZUM3YwR3Fmc0JCUUhMQW1LV2FWbWtaYnB1c2ptdTU4VjBK?=
 =?utf-8?Q?qbnY5+lTdyonQjsB17geFzsff1ZnH7G4PVlZn4B?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 640a0976-1a9b-452f-1601-08d9149bb3f1
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2021 16:42:02.3442
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: s+t4vTlRn1J3TswnQviXoioibxoaID57WhNtVQBh8iLN5rb//KpAFwofinYtohIoFJoxsPR97UjJWJmds/RketuSOwlWEXqep/dcVqvO89c=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3429
X-OriginatorOrg: citrix.com

On 03/05/2021 15:22, Jan Beulich wrote:
>> Another consequence is that we need to rethink our hypercall behaviour.
>> There is no such thing as supervisor states in an uncompressed XSAVE
>> image, which means we can't continue with that being the ABI.
> I don't think the hypercall input / output blob needs to follow any
> specific hardware layout.

Currently, the blob is { xcr0, xcr0_accum, uncompressed image }.

As we haven't supported any compressed states yet, we are at liberty to
create a forward compatible change by logically s/xcr0/xstate/ and
permitting an uncompressed image.

Irritatingly, we have xcr0=0 as a permitted state and out in the field,
for "no xsave state". This contributes a substantial quantity of
complexity in our xstate logic, and invalidates the easy fix I had for
not letting the HVM initpath explode.

The first task is to untangle the non-architectural xcr0=0 case, and to
support compressed images. Size parsing needs to be split into two, as
for compressed images, we need to consume XSTATE_BV and XCOMP_BV to
cross-check the size.

I think we also want a rule that Xen will always send compressed if it
is using XSAVES (/XSAVEC in the interim?) We do not want to be working
with uncompressed images at all, now that MPX is a reasonable sized hole
in the middle.

Cleaning this up will then unblock v2 of the existing xstate cleanup
series I posted.

>> In terms of actual context switching, we want to be using XSAVES/XRSTORS
>> whenever it is available, even if we're not using supervisor states.
>> XSAVES has both the inuse and modified optimisations, without the broken
>> consequence of XSAVEOPT (which is firmly in the "don't ever use this"
>> bucket now).
> The XSAVEOPT anomaly is affecting user mode only, isn't it? Or are
> you talking of something I have forgot about?

It's not safe to use at all in L1 xen, because the tracking leaks
between non-root contexts. I can't remember if there are further
problems for an L0 xen, but I have a nagging feeling that there is.

~Andrew



From xen-devel-bounces@lists.xenproject.org Tue May 11 16:42:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 16:42:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125820.236839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVT7-0001jO-Kb; Tue, 11 May 2021 16:42:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125820.236839; Tue, 11 May 2021 16:42:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVT7-0001jH-HB; Tue, 11 May 2021 16:42:49 +0000
Received: by outflank-mailman (input) for mailman id 125820;
 Tue, 11 May 2021 16:42:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zMp2=KG=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lgVT6-0001hY-G9
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 16:42:48 +0000
Received: from mail-il1-x132.google.com (unknown [2607:f8b0:4864:20::132])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3d915ebd-c94f-43c5-ac1c-c15f2298bbad;
 Tue, 11 May 2021 16:42:44 +0000 (UTC)
Received: by mail-il1-x132.google.com with SMTP id e14so17682783ils.12
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 09:42:44 -0700 (PDT)
Received: from mail-il1-f175.google.com (mail-il1-f175.google.com.
 [209.85.166.175])
 by smtp.gmail.com with ESMTPSA id e12sm9510029ilu.75.2021.05.11.09.42.43
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 May 2021 09:42:43 -0700 (PDT)
Received: by mail-il1-f175.google.com with SMTP id e14so17682746ils.12
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 09:42:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3d915ebd-c94f-43c5-ac1c-c15f2298bbad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=k6jrKLJf+SUK032/zz9tWYcN5Rhay3mFZvhqIWgxNBs=;
        b=L/Y31OZjVAvXas4gAOFGDpY1O+WckxTleHtLR2a6rScOLannRZ4axVg75JGljNN3EA
         40xB287W01zuvyDHYZFS2WLSs1CG9A82KNV8OVT7o92X2FrRRw8J292qbL27/DexCkE7
         qg4XvM9gaQyzYYNGyObKTqbnr0WqygwnwiYxc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=k6jrKLJf+SUK032/zz9tWYcN5Rhay3mFZvhqIWgxNBs=;
        b=qi9HAk2mZKOK7azzeL1ApiPC6ZUiyLlDxDxHB2Z4yfVxR/d5iza+XLLkuAfxkl+vMA
         oYG7geV1Kwe8/cHIDGb3sp6HGI/gOEAftBj0E0Ebb/zBg8wKx60eNSRufSdDoGBYOig1
         9QXnUYrRTeOAuYuAcUzDDIidGmC7f5Kx72RIz+bEue8XaZ/nzZUaKRICAhpdgbd8eOSf
         Bo2YkolWYF8ziW35VJqdzrFaNUD6RES+RggMMWuSAk+bYWL2xxzfTsh5A3RTEEgDMUAC
         Hu51CdwirMqiMBq6exBGa2vvrnLeXO6iTvzFjue++6cVainroyWKH95cJjTGNe7Rs2ze
         yfbw==
X-Gm-Message-State: AOAM532U6YZp36/6zcvIQ43dDm5uO4duXt6yGFKVZnIkkZ9ld83qgBch
	IeGuUbxur0ySE+Q+xKM6X/7V9ES3cqdVpg==
X-Google-Smtp-Source: ABdhPJyA7xOvwHKtF7Olq5cAg7RoArKuUB7s4jupLnPyLDy7B8jFQJMzAuWn8oWUqWKGVlxHH5+q0A==
X-Received: by 2002:a05:6e02:10c5:: with SMTP id s5mr28665397ilj.88.1620751363980;
        Tue, 11 May 2021 09:42:43 -0700 (PDT)
X-Received: by 2002:a6b:7b08:: with SMTP id l8mr22174004iop.50.1620751352978;
 Tue, 11 May 2021 09:42:32 -0700 (PDT)
MIME-Version: 1.0
References: <20210510095026.3477496-1-tientzu@chromium.org>
 <20210510095026.3477496-6-tientzu@chromium.org> <20210510150342.GD28066@lst.de>
In-Reply-To: <20210510150342.GD28066@lst.de>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 12 May 2021 00:42:22 +0800
X-Gmail-Original-Message-ID: <CALiNf2_7mHuMG5DTQD0GsriN=vuX0ytyUn4rxEmsK2iP3PKV+w@mail.gmail.com>
Message-ID: <CALiNf2_7mHuMG5DTQD0GsriN=vuX0ytyUn4rxEmsK2iP3PKV+w@mail.gmail.com>
Subject: Re: [PATCH v6 05/15] swiotlb: Add a new get_io_tlb_mem getter
To: Christoph Hellwig <hch@lst.de>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, nouveau@lists.freedesktop.org, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Mon, May 10, 2021 at 11:03 PM Christoph Hellwig <hch@lst.de> wrote:
>
> > +static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
> > +{
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +     if (dev && dev->dma_io_tlb_mem)
> > +             return dev->dma_io_tlb_mem;
> > +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> > +
> > +     return io_tlb_default_mem;
>
> Given that we're also looking into a not addressing restricted pool
> I'd rather always assign the active pool to dev->dma_io_tlb_mem and
> do away with this helper.

Where do you think is the proper place to do the assignment? First
time calling swiotlb_map? or in of_dma_configure_id?


From xen-devel-bounces@lists.xenproject.org Tue May 11 16:42:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 16:42:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125821.236850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVTC-00022C-SA; Tue, 11 May 2021 16:42:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125821.236850; Tue, 11 May 2021 16:42:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVTC-000225-P4; Tue, 11 May 2021 16:42:54 +0000
Received: by outflank-mailman (input) for mailman id 125821;
 Tue, 11 May 2021 16:42:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zMp2=KG=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lgVTB-0001hY-GC
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 16:42:53 +0000
Received: from mail-il1-x12e.google.com (unknown [2607:f8b0:4864:20::12e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 050f473c-5752-4b4d-a2bb-9a2dbe2cde7b;
 Tue, 11 May 2021 16:42:52 +0000 (UTC)
Received: by mail-il1-x12e.google.com with SMTP id w13so5561926ilv.11
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 09:42:52 -0700 (PDT)
Received: from mail-io1-f44.google.com (mail-io1-f44.google.com.
 [209.85.166.44])
 by smtp.gmail.com with ESMTPSA id h14sm9744499ils.13.2021.05.11.09.42.50
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 May 2021 09:42:51 -0700 (PDT)
Received: by mail-io1-f44.google.com with SMTP id z24so18806291ioj.7
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 09:42:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 050f473c-5752-4b4d-a2bb-9a2dbe2cde7b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Ll12iwWOH+g7rF+Qcvmc/ZPVWK50RY5A+TnwOnViek8=;
        b=jF9yywweDau5byF0sr7SVkFA9hoYDaWzF8iy+wyA9wHf9oOaCvA+67rLL4jVjSY9gn
         QoY25mdgO7IV1eEXpAEk5s63+vTygeFS39HRXfLrmhNXD74fIDm9qeLodCWWHbiJG3ic
         G2KrYNbeLupAhGANRLweLDvdOuGV9VrSQUJi8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Ll12iwWOH+g7rF+Qcvmc/ZPVWK50RY5A+TnwOnViek8=;
        b=sKPwvuxVX4eKie7Ocggs0RJvxLIz2PXwrJWaMU5KGxJS4Y45Flh4gqgAEX8wahj+u4
         Bi9JCxVOWHBynAbLtCGRhnoERXSjhzzr5kDsygcltGoj0j1BXGExCA14HkesIr2WK4A0
         Kkbknk7IuoYrlBG3Bm7olKRiG0JpVEuGoQx2KV2gxsCgVjgBOzQ0DgrHoKcVUklsgi/B
         APxy4MxD1H3dyfv4k0d3GYxRlsDRnxITvwBTcORZa43ktOVpZbkFZ6TrQNZsyWkjQUK8
         ul0VUTQSdXDozz4kUfP8zyHWvZ/wkfyHQ+jZNhkNdgayLcmrGO2/mGKI1B+MlT/ri3Us
         Jsjw==
X-Gm-Message-State: AOAM533LZHjmtfxtaHSz/NN8KVatpRqfaRC8/dDiP2bMJWAjQPONikfj
	s2K5iRYg3x5ZRFAO+BKRU2PnCHxRhK6klQ==
X-Google-Smtp-Source: ABdhPJyX5PM2ls3gC0Yqw37UDScBGrMqLOzONvLWeu4F5s1UFJw36+Tem/TNHfJ5s/PH6W2P0pfRAA==
X-Received: by 2002:a92:650d:: with SMTP id z13mr12888687ilb.193.1620751371621;
        Tue, 11 May 2021 09:42:51 -0700 (PDT)
X-Received: by 2002:a05:6e02:e82:: with SMTP id t2mr17831684ilj.18.1620751359226;
 Tue, 11 May 2021 09:42:39 -0700 (PDT)
MIME-Version: 1.0
References: <20210510095026.3477496-1-tientzu@chromium.org>
 <20210510095026.3477496-5-tientzu@chromium.org> <20210510150256.GC28066@lst.de>
In-Reply-To: <20210510150256.GC28066@lst.de>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 12 May 2021 00:42:28 +0800
X-Gmail-Original-Message-ID: <CALiNf28jgAU7zN4pwgPKgaecM-KXRHHqwHj4sPXVf_3M0-goMQ@mail.gmail.com>
Message-ID: <CALiNf28jgAU7zN4pwgPKgaecM-KXRHHqwHj4sPXVf_3M0-goMQ@mail.gmail.com>
Subject: Re: [PATCH v6 04/15] swiotlb: Add restricted DMA pool initialization
To: Christoph Hellwig <hch@lst.de>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, nouveau@lists.freedesktop.org, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Mon, May 10, 2021 at 11:03 PM Christoph Hellwig <hch@lst.de> wrote:
>
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +#include <linux/io.h>
> > +#include <linux/of.h>
> > +#include <linux/of_fdt.h>
> > +#include <linux/of_reserved_mem.h>
> > +#include <linux/slab.h>
> > +#endif
>
> I don't think any of this belongs in swiotlb.c.  Marking
> swiotlb_init_io_tlb_mem non-static and having all this code in a separate
> file is probably a better idea.

Will do in the next version.

>
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
> > +                                 struct device *dev)
> > +{
> > +     struct io_tlb_mem *mem = rmem->priv;
> > +     unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
> > +
> > +     if (dev->dma_io_tlb_mem)
> > +             return 0;
> > +
> > +     /* Since multiple devices can share the same pool, the private data,
> > +      * io_tlb_mem struct, will be initialized by the first device attached
> > +      * to it.
> > +      */
>
> This is not the normal kernel comment style.

Will fix this in the next version.
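For reference, the quoted comment uses the networking-style opening (text on the same line as `/*`); the usual kernel style puts the opening `/*` on a line of its own. A minimal compilable sketch with the same comment reflowed is below; `pool_already_initialized` is an invented stand-in function, added only so the snippet builds and runs:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/*
 * Since multiple devices can share the same pool, the private data,
 * the io_tlb_mem struct, will be initialized by the first device
 * attached to it.
 */
static bool pool_already_initialized(const void *priv)
{
	return priv != NULL;
}
```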

>
> > +#ifdef CONFIG_ARM
> > +             if (!PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
> > +                     kfree(mem);
> > +                     return -EINVAL;
> > +             }
> > +#endif /* CONFIG_ARM */
>
> And this is weird.  Why would ARM have such a restriction?  And if we have
> such restrictions, they absolutely belong in an arch helper.

Now I think the CONFIG_ARM guard can just be removed?
The goal here is to make sure we're using the linear map and can safely
use phys_to_dma/dma_to_phys.
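The intent of that check can be modeled in plain userspace C (this is not kernel code): accept a restricted DMA pool only if its base lies inside the kernel linear map, i.e. is not highmem, so that phys_to_dma()/dma_to_phys() are valid for it. `LINEAR_MAP_END` and both function names are invented for this sketch:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical stand-in for the arch's lowmem (linear-map) boundary. */
#define LINEAR_MAP_END 0x40000000ULL	/* pretend 1 GiB */

static bool base_is_linear_mapped(uint64_t base)
{
	return base < LINEAR_MAP_END;
}

/* Models rmem_swiotlb_device_init() rejecting an unusable pool. */
static int rmem_pool_check(uint64_t base)
{
	/* -1 stands in for -EINVAL in the kernel. */
	return base_is_linear_mapped(base) ? 0 : -1;
}
```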

>
> > +             swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
> > +
> > +             rmem->priv = mem;
> > +
> > +#ifdef CONFIG_DEBUG_FS
> > +             if (!debugfs_dir)
> > +                     debugfs_dir = debugfs_create_dir("swiotlb", NULL);
> > +
> > +             swiotlb_create_debugfs(mem, rmem->name, debugfs_dir);
>
> Doesn't the debugfs_create_dir belong in swiotlb_create_debugfs?  Also
> please use IS_ENABLED or a stub to avoid ifdefs like this.

Will move it into swiotlb_create_debugfs and use IS_ENABLED in the next version.
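A plain-C model of what Christoph is suggesting, with the parent directory created lazily inside the helper behind a compile-time flag instead of an `#ifdef` block; the flag, counter, and function name are all invented for this sketch, not the real debugfs API:

```c
#include <assert.h>
#include <stddef.h>

/* Stands in for IS_ENABLED(CONFIG_DEBUG_FS). */
#define DEBUG_FS_ENABLED 1

static int swiotlb_dirs_created;

static void swiotlb_create_debugfs_model(const char *name)
{
	if (!DEBUG_FS_ENABLED)
		return;
	/* Create the parent "swiotlb" directory on first use only. */
	if (!swiotlb_dirs_created)
		swiotlb_dirs_created = 1;
	(void)name;	/* per-pool entries would be created here */
}
```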


From xen-devel-bounces@lists.xenproject.org Tue May 11 16:47:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 16:47:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125835.236862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVXl-0003HQ-Je; Tue, 11 May 2021 16:47:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125835.236862; Tue, 11 May 2021 16:47:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVXl-0003HJ-Gg; Tue, 11 May 2021 16:47:37 +0000
Received: by outflank-mailman (input) for mailman id 125835;
 Tue, 11 May 2021 16:47:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgVXk-0003HA-DJ
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 16:47:36 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c18b3f20-4cf1-4a9e-b466-2ae63ea33481;
 Tue, 11 May 2021 16:47:35 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 2D8BB613CF;
 Tue, 11 May 2021 16:47:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c18b3f20-4cf1-4a9e-b466-2ae63ea33481
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620751654;
	bh=hU+yLFeBxlaW2RjWGMkuL1C1oCUtCvrztFZPdNLLPBs=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=dQ7Nr6QvpTPOXyPELC3zxv3QGr57GJrhhYd3bGSTX0Iei81QLLsBw4ch3MIBZsVMj
	 5dhtEb+zrDfe8NTwEI0pwv6ZPuUNSRued0gn56rDquIg1cLKdu+jfBwsBL1m3PKI9j
	 vD28w5PlnJi1O7OV7Tnu+QUBbbV3bVW5u8CTn2jFK8rNp8Yfpb6m5H8S//XvS9hM9Z
	 wmUYWjtqadwsU6qbSoUcenWgfVa5CQncuRi5yNGPkOVpnwGDweg5OoEwI6DPhdI/QD
	 VLF64GgmnZfE/ldwBKK4pEPd+pf2uH2oaott7rrYZ2/WxAxxhp7MThGl3/yOAfvqXu
	 liiCAYj1dXyvA==
Date: Tue, 11 May 2021 09:47:33 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@lst.de>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    f.fainelli@gmail.com, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    linux-kernel@vger.kernel.org, 
    osstest service owner <osstest-admin@xenproject.org>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    iommu@lists.linux-foundation.org
Subject: Re: Regression when booting 5.13 as dom0 on arm64 (WAS: Re: [linux-linus
 test] 161829: regressions - FAIL)]
In-Reply-To: <20210511063558.GA7605@lst.de>
Message-ID: <alpine.DEB.2.21.2105110925430.5018@sstabellini-ThinkPad-T480s>
References: <osstest-161829-mainreport@xen.org> <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org> <20210510084057.GA933@lst.de> <alpine.DEB.2.21.2105101818260.5018@sstabellini-ThinkPad-T480s> <20210511063558.GA7605@lst.de>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 11 May 2021, Christoph Hellwig wrote:
> On Mon, May 10, 2021 at 06:46:34PM -0700, Stefano Stabellini wrote:
> > On Mon, 10 May 2021, Christoph Hellwig wrote:
> > > On Sat, May 08, 2021 at 12:32:37AM +0100, Julien Grall wrote:
> > > > The pointer dereference seems to suggest that the swiotlb hasn't been
> > > > allocated. From what I can tell, this may be because swiotlb_force is set
> > > > to SWIOTLB_NO_FORCE, yet we still need to enable the swiotlb when running
> > > > on top of Xen.
> > > >
> > > > I am not entirely sure what would be the correct fix. Any opinions?
> > > 
> > > Can you try something like the patch below (not even compile tested, but
> > > the intent should be obvious)?
> > > 
> > > 
> > > diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
> > > index 16a2b2b1c54d..7671bc153fb1 100644
> > > --- a/arch/arm64/mm/init.c
> > > +++ b/arch/arm64/mm/init.c
> > > @@ -44,6 +44,8 @@
> > >  #include <asm/tlb.h>
> > >  #include <asm/alternative.h>
> > >  
> > > +#include <xen/arm/swiotlb-xen.h>
> > > +
> > >  /*
> > >   * We need to be able to catch inadvertent references to memstart_addr
> > >   * that occur (potentially in generic code) before arm64_memblock_init()
> > > @@ -482,7 +484,7 @@ void __init mem_init(void)
> > >  	if (swiotlb_force == SWIOTLB_FORCE ||
> > >  	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
> > >  		swiotlb_init(1);
> > > -	else
> > > +	else if (!IS_ENABLED(CONFIG_XEN) || !xen_swiotlb_detect())
> > >  		swiotlb_force = SWIOTLB_NO_FORCE;
> > >  
> > >  	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
> > 
> > The "IS_ENABLED(CONFIG_XEN)" is not needed as the check is already part
> > of xen_swiotlb_detect().
> 
> As far as I can tell the x86 version of xen_swiotlb_detect has a
> !CONFIG_XEN stub.  The arm/arm64 version is unconditionally declared, but
> the implementation is only compiled when Xen support is enabled.

The implementation of xen_swiotlb_detect should work fine if
!CONFIG_XEN, but the issue is that it is implemented in
arch/arm/xen/mm.c, so it is not available when CONFIG_XEN is disabled.

I think it would be good to turn it into a static inline so that we can
call it from arch/arm64/mm/init.c and other similar places with or
without CONFIG_XEN, see appended patch below. It compiles without
CONFIG_XEN.


> > But let me ask another question first. Do you think it makes sense to have:
> > 
> > 	if (swiotlb_force == SWIOTLB_NO_FORCE)
> > 		return 0;
> > 
> > at the beginning of swiotlb_late_init_with_tbl? I am asking because
> > swiotlb_late_init_with_tbl is meant for special late initializations,
> > right? The presence or absence of SWIOTLB_NO_FORCE shouldn't really
> > matter to swiotlb_late_init_with_tbl. Also the
> > commit message for "swiotlb: Make SWIOTLB_NO_FORCE perform no
> > allocation" says that "If a platform was somehow setting
> > swiotlb_no_force and a later call to swiotlb_init() was to be made we
> > would still be proceeding with allocating the default SWIOTLB size
> > (64MB)." Our case here is very similar, right? So the allocation should
> > proceed?
> 
> Well, right now SWIOTLB_NO_FORCE is checked in dma_direct_map_page.
> We need to clean all this up a bit, especially with the work to support
> multiple swiotlb buffers, but I think for now this is the best we can
> do.

OK


> > Which brings me to a separate unrelated issue, still affecting the path
> > xen_swiotlb_init -> swiotlb_late_init_with_tbl. If swiotlb_init(1) is
> > called by mem_init then swiotlb_late_init_with_tbl will fail due to the
> > check:
> > 
> >     /* protect against double initialization */
> >     if (WARN_ON_ONCE(io_tlb_default_mem))
> >         return -ENOMEM;
> > 
> > xen_swiotlb_init is meant to ask Xen to make a bunch of pages physically
> > contiguous. Then, it initializes the swiotlb buffer based on those
> > pages. So it is a problem that swiotlb_late_init_with_tbl refuses to
> > continue. However, in practice it is not a problem today because on ARM
> > we don't actually make any special requests to Xen to make the pages
> > physically contiguous (yet). See the empty implementation of
> > arch/arm/xen/mm.c:xen_create_contiguous_region. I don't know about x86.
> > 
> > So maybe we should instead do something like the appended?
> 
> So I'd like to change the core swiotlb initialization to just use
> a callback into the arch/xen code to make the pages contiguous and
> kill all that code duplication.  Together with the multiple swiotlb
> buffer work I'd rather avoid churn that goes into a different direction
> if possible.

That's a much better plan. It is also not super urgent, so maybe for now
we could add an explicit check for io_tlb_default_mem != NULL at the
beginning of xen_swiotlb_init? So that at least we can fail explicitly
or ignore it explicitly rather than by accident.
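The proposed guard is easy to model in userspace C (this is a toy, not the kernel function): xen_swiotlb_init() bails out explicitly if the default swiotlb was already set up, instead of failing deep inside swiotlb_late_init_with_tbl(). Here `io_tlb_default_mem` is a stand-in for the kernel global and -1 stands in for an errno such as -EEXIST:

```c
#include <assert.h>
#include <stddef.h>

static void *io_tlb_default_mem;
static int dummy_pool;

static int xen_swiotlb_init_model(void)
{
	if (io_tlb_default_mem)
		return -1;	/* already initialized: fail explicitly */
	io_tlb_default_mem = &dummy_pool;	/* pretend allocation */
	return 0;
}
```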


---


diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f8f07469d259..223b1151fd7d 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -135,18 +135,6 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 	return;
 }
 
-int xen_swiotlb_detect(void)
-{
-	if (!xen_domain())
-		return 0;
-	if (xen_feature(XENFEAT_direct_mapped))
-		return 1;
-	/* legacy case */
-	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
-		return 1;
-	return 0;
-}
-
 static int __init xen_mm_init(void)
 {
 	struct gnttab_cache_flush cflush;
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 16a2b2b1c54d..e55409caaee3 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -43,6 +43,7 @@
 #include <linux/sizes.h>
 #include <asm/tlb.h>
 #include <asm/alternative.h>
+#include <asm/xen/swiotlb-xen.h>
 
 /*
  * We need to be able to catch inadvertent references to memstart_addr
@@ -482,7 +483,7 @@ void __init mem_init(void)
 	if (swiotlb_force == SWIOTLB_FORCE ||
 	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
 		swiotlb_init(1);
-	else
+	else if (!xen_swiotlb_detect())
 		swiotlb_force = SWIOTLB_NO_FORCE;
 
 	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
diff --git a/include/xen/arm/swiotlb-xen.h b/include/xen/arm/swiotlb-xen.h
index 2994fe6031a0..33336ab58afc 100644
--- a/include/xen/arm/swiotlb-xen.h
+++ b/include/xen/arm/swiotlb-xen.h
@@ -2,6 +2,19 @@
 #ifndef _ASM_ARM_SWIOTLB_XEN_H
 #define _ASM_ARM_SWIOTLB_XEN_H
 
-extern int xen_swiotlb_detect(void);
+#include <xen/features.h>
+#include <xen/xen.h>
+
+static inline int xen_swiotlb_detect(void)
+{
+	if (!xen_domain())
+		return 0;
+	if (xen_feature(XENFEAT_direct_mapped))
+		return 1;
+	/* legacy case */
+	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
+		return 1;
+	return 0;
+}
 
 #endif /* _ASM_ARM_SWIOTLB_XEN_H */


From xen-devel-bounces@lists.xenproject.org Tue May 11 16:49:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 16:49:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125838.236875 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVZf-0003uT-0V; Tue, 11 May 2021 16:49:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125838.236875; Tue, 11 May 2021 16:49:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVZe-0003uM-TR; Tue, 11 May 2021 16:49:34 +0000
Received: by outflank-mailman (input) for mailman id 125838;
 Tue, 11 May 2021 16:49:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zMp2=KG=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lgVZc-0003uC-Pz
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 16:49:32 +0000
Received: from mail-pl1-x629.google.com (unknown [2607:f8b0:4864:20::629])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0ad1915d-8cff-49f4-a766-d6be041b6bad;
 Tue, 11 May 2021 16:49:32 +0000 (UTC)
Received: by mail-pl1-x629.google.com with SMTP id t4so11116120plc.6
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 09:49:32 -0700 (PDT)
Received: from mail-pf1-f178.google.com (mail-pf1-f178.google.com.
 [209.85.210.178])
 by smtp.gmail.com with ESMTPSA id x79sm14157839pfc.57.2021.05.11.09.49.30
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 11 May 2021 09:49:30 -0700 (PDT)
Received: by mail-pf1-f178.google.com with SMTP id h127so16492772pfe.9
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 09:49:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ad1915d-8cff-49f4-a766-d6be041b6bad
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=I574vs+oxpmw2RaQkdeqT3TmVa8C0vE6HCQzgLmVaEk=;
        b=EO4zcKQD13cnMpJi5k5sqZg36m/uhjFrv+x7tUEK/JnHkDgZTOULzljGCah93D1FN5
         SwmaT+lt9As201TL9V65C0BiJShCnGyDe8UgjgY5Jm1YBpkuQxrN8+e9OJjS8RpeP4ih
         4tasAkIoShr42ndvrMVih2QtviIwqYbb0WQHQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=I574vs+oxpmw2RaQkdeqT3TmVa8C0vE6HCQzgLmVaEk=;
        b=iii7wuCT3ZKQpkHdp4/6kEwB/wtf6+jyIune2L3kUAzHi3VwGE+H/z6eIs9KG63JyI
         nQGKwUKfWWH0E21nOUEbYu/76X59hUhP9zTo8B/sgbN3xEjm4ZePz3EUdqaH+yGKjDc4
         +hEror5q+ZC8gKI2cpMr9uv43cjJKOE8OOpYumhAoKbOn8J46QSVzlhknpiFIIviHHs0
         1Qaoaz8hS7GlMgRe6HjWJmM+qLpegmgDKA7hsqxuwAfnQfZjrBQoKiSfsSBtyug+mfSQ
         n+VykFN+oS81iackGuQfELn4w3S41EPPDpivWoDDYM/wxiy1jUb7ex02X6GZmywQvxE9
         GjIw==
X-Gm-Message-State: AOAM532wXhY9uSMFSeIcFr2Zb7e+D436d4SNilKbZ6xNV8ct1uEhRKMR
	LKVZLCUZWjlWDHCrzWT5z3NQIvQxwiVhZg==
X-Google-Smtp-Source: ABdhPJxrq5k00V/RrBKmsXNm082mXlCv+Nhe0rWGHJ6nPslNifdFpP8hZewMIx2MUkFHrWSPY5KPdQ==
X-Received: by 2002:a17:902:9f88:b029:ee:b4e5:64d4 with SMTP id g8-20020a1709029f88b02900eeb4e564d4mr30389356plq.41.1620751771058;
        Tue, 11 May 2021 09:49:31 -0700 (PDT)
X-Received: by 2002:a92:6804:: with SMTP id d4mr27241366ilc.5.1620751346868;
 Tue, 11 May 2021 09:42:26 -0700 (PDT)
MIME-Version: 1.0
References: <20210510095026.3477496-1-tientzu@chromium.org>
 <20210510095026.3477496-9-tientzu@chromium.org> <20210510150516.GE28066@lst.de>
In-Reply-To: <20210510150516.GE28066@lst.de>
From: Claire Chang <tientzu@chromium.org>
Date: Wed, 12 May 2021 00:42:15 +0800
X-Gmail-Original-Message-ID: <CALiNf2-x8Gw0TPLdeRnfPmUTeuK9dsLbDXN4hPnc08y21uuUXQ@mail.gmail.com>
Message-ID: <CALiNf2-x8Gw0TPLdeRnfPmUTeuK9dsLbDXN4hPnc08y21uuUXQ@mail.gmail.com>
Subject: Re: [PATCH v6 08/15] swiotlb: Bounce data from/to restricted DMA pool
 if available
To: Christoph Hellwig <hch@lst.de>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, nouveau@lists.freedesktop.org, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Mon, May 10, 2021 at 11:05 PM Christoph Hellwig <hch@lst.de> wrote:
>
> > +static inline bool is_dev_swiotlb_force(struct device *dev)
> > +{
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +     if (dev->dma_io_tlb_mem)
> > +             return true;
> > +#endif /* CONFIG_DMA_RESTRICTED_POOL */
> > +     return false;
> > +}
> > +
>
> >       /* If SWIOTLB is active, use its maximum mapping size */
> >       if (is_swiotlb_active(dev) &&
> > -         (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
> > +         (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
> > +          is_dev_swiotlb_force(dev)))
>
> This is a mess.  I think the right way is to have an always_bounce flag
> in the io_tlb_mem structure instead.  Then the global swiotlb_force can
> go away and be replaced with this and the fact that having no
> io_tlb_mem structure at all means no buffering is forced (after a little
> refactoring).

Will do in the next version.
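The refactoring Christoph describes can be modeled in plain C as follows (a sketch, not the kernel structures): a per-pool flag inside io_tlb_mem replaces the global swiotlb_force, and a device with no io_tlb_mem attached means no bouncing at all. The field and helper names here are hypothetical:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct io_tlb_mem {
	bool force_bounce;	/* the suggested always-bounce flag */
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* True only when the device has a pool that forces bouncing. */
static bool is_swiotlb_force_bounce(const struct device *dev)
{
	const struct io_tlb_mem *mem = dev->dma_io_tlb_mem;

	return mem && mem->force_bounce;
}
```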


From xen-devel-bounces@lists.xenproject.org Tue May 11 16:49:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 16:49:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125839.236887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVZj-0004Cp-9E; Tue, 11 May 2021 16:49:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125839.236887; Tue, 11 May 2021 16:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVZj-0004Ci-4z; Tue, 11 May 2021 16:49:39 +0000
Received: by outflank-mailman (input) for mailman id 125839;
 Tue, 11 May 2021 16:49:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OZd+=KG=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lgVZi-0004CO-Le
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 16:49:38 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2adc501c-ef24-4767-b570-4847a31b8f91;
 Tue, 11 May 2021 16:49:36 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id E267367373; Tue, 11 May 2021 18:49:33 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2adc501c-ef24-4767-b570-4847a31b8f91
Date: Tue, 11 May 2021 18:49:33 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Julien Grall <julien@xen.org>,
	f.fainelli@gmail.com,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	linux-kernel@vger.kernel.org,
	osstest service owner <osstest-admin@xenproject.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	iommu@lists.linux-foundation.org
Subject: Re: Regression when booting 5.13 as dom0 on arm64 (WAS: Re:
 [linux-linus test] 161829: regressions - FAIL)]
Message-ID: <20210511164933.GA19775@lst.de>
References: <osstest-161829-mainreport@xen.org> <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org> <20210510084057.GA933@lst.de> <alpine.DEB.2.21.2105101818260.5018@sstabellini-ThinkPad-T480s> <20210511063558.GA7605@lst.de> <alpine.DEB.2.21.2105110925430.5018@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2105110925430.5018@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, May 11, 2021 at 09:47:33AM -0700, Stefano Stabellini wrote:
> That's a much better plan. It is also not super urgent, so maybe for now
> we could add an explicit check for io_tlb_default_mem != NULL at the
> beginning of xen_swiotlb_init? So that at least we can fail explicitly
> or ignore it explicitly rather than by accident.

Fine with me.  Do you want to take over from here and test and submit
your version?


From xen-devel-bounces@lists.xenproject.org Tue May 11 16:51:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 16:51:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125846.236899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVbQ-0005pB-Lm; Tue, 11 May 2021 16:51:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125846.236899; Tue, 11 May 2021 16:51:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVbQ-0005p4-HB; Tue, 11 May 2021 16:51:24 +0000
Received: by outflank-mailman (input) for mailman id 125846;
 Tue, 11 May 2021 16:51:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgVbP-0005or-3K
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 16:51:23 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcd0cabd-6e70-4295-84d0-b14fd29f103c;
 Tue, 11 May 2021 16:51:22 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 091D2613C6;
 Tue, 11 May 2021 16:51:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcd0cabd-6e70-4295-84d0-b14fd29f103c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620751881;
	bh=36FOm/vubAEI0n2qxF3dc7ltKmZVubRWdCsf6LaDk8s=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=gO6+6kWfZzUSTxw2kbO6cTxuKglMDkF/V6jBMXaPjqGwC1vBJlt8oRXf2y22D/D/f
	 h3ec4bZ36ZUQLYx/+rueldWcwYZZpjBrrmxaP1NLduZWCtgnykKt6u3Tq8Gm1NXu6Q
	 IwjAhx5NWHNz/FkO71dMPx57gAvY1vNnR9hCwcTuaKoclxxteV1TOIbesBucfqhIMl
	 1z6IZRpACDIot0Pg/Rth55+IITCQvd3xDbZdxucxGL2+3QSYBhPzAbTfiSYoRb5nTE
	 1BwXhe9J4XZND5YgTRauuZ4YronUvWPzodXuoWixRNBKwHhMpiAhANdOcQhps6tn8Z
	 Qa6gavxx/4WYA==
Date: Tue, 11 May 2021 09:51:20 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@lst.de>
cc: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>, 
    f.fainelli@gmail.com, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    linux-kernel@vger.kernel.org, 
    osstest service owner <osstest-admin@xenproject.org>, 
    Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
    Boris Ostrovsky <boris.ostrovsky@oracle.com>, 
    iommu@lists.linux-foundation.org
Subject: Re: Regression when booting 5.13 as dom0 on arm64 (WAS: Re: [linux-linus
 test] 161829: regressions - FAIL)]
In-Reply-To: <20210511164933.GA19775@lst.de>
Message-ID: <alpine.DEB.2.21.2105110950580.5018@sstabellini-ThinkPad-T480s>
References: <osstest-161829-mainreport@xen.org> <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org> <20210510084057.GA933@lst.de> <alpine.DEB.2.21.2105101818260.5018@sstabellini-ThinkPad-T480s> <20210511063558.GA7605@lst.de> <alpine.DEB.2.21.2105110925430.5018@sstabellini-ThinkPad-T480s>
 <20210511164933.GA19775@lst.de>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 11 May 2021, Christoph Hellwig wrote:
> On Tue, May 11, 2021 at 09:47:33AM -0700, Stefano Stabellini wrote:
> > That's a much better plan. It is also not super urgent, so maybe for now
> > we could add an explicit check for io_tlb_default_mem != NULL at the
> > beginning of xen_swiotlb_init? So that at least we can fail explicitly
> > or ignore it explicitly rather than by accident.
> 
> Fine with me.  Do you want to take over from here and test and submit
> your version?

I can do that. Can I add your signed-off-by for your original fix?


From xen-devel-bounces@lists.xenproject.org Tue May 11 17:01:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 17:01:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125854.236910 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVkp-0007Ru-H4; Tue, 11 May 2021 17:01:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125854.236910; Tue, 11 May 2021 17:01:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgVkp-0007Rn-ED; Tue, 11 May 2021 17:01:07 +0000
Received: by outflank-mailman (input) for mailman id 125854;
 Tue, 11 May 2021 17:01:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=OZd+=KG=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lgVko-0007Rh-CC
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 17:01:06 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c3928804-92dc-4be8-abb0-787b92627f3e;
 Tue, 11 May 2021 17:01:04 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 14EDB67373; Tue, 11 May 2021 19:01:02 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c3928804-92dc-4be8-abb0-787b92627f3e
Date: Tue, 11 May 2021 19:01:01 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Christoph Hellwig <hch@lst.de>, Julien Grall <julien@xen.org>,
	f.fainelli@gmail.com,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	linux-kernel@vger.kernel.org,
	osstest service owner <osstest-admin@xenproject.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	iommu@lists.linux-foundation.org
Subject: Re: Regression when booting 5.15 as dom0 on arm64 (WAS: Re:
 [linux-linus test] 161829: regressions - FAIL)]
Message-ID: <20210511170101.GA20936@lst.de>
References: <osstest-161829-mainreport@xen.org> <4ea1e89f-a7a0-7664-470c-b3cf773a1031@xen.org> <20210510084057.GA933@lst.de> <alpine.DEB.2.21.2105101818260.5018@sstabellini-ThinkPad-T480s> <20210511063558.GA7605@lst.de> <alpine.DEB.2.21.2105110925430.5018@sstabellini-ThinkPad-T480s> <20210511164933.GA19775@lst.de> <alpine.DEB.2.21.2105110950580.5018@sstabellini-ThinkPad-T480s>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <alpine.DEB.2.21.2105110950580.5018@sstabellini-ThinkPad-T480s>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, May 11, 2021 at 09:51:20AM -0700, Stefano Stabellini wrote:
> On Tue, 11 May 2021, Christoph Hellwig wrote:
> > On Tue, May 11, 2021 at 09:47:33AM -0700, Stefano Stabellini wrote:
> > > That's a much better plan. It is also not super urgent, so maybe for now
> > > we could add an explicit check for io_tlb_default_mem != NULL at the
> > > beginning of xen_swiotlb_init? So that at least we can fail explicitly
> > > or ignore it explicitly rather than by accident.
> > 
> > Fine with me.  Do you want to take over from here and test and submit
> > your version?
> 
> I can do that. Can I add your signed-off-by for your original fix?

Sure:

Signed-off-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Tue May 11 17:41:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 17:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125861.236923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWOC-0003p7-KP; Tue, 11 May 2021 17:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125861.236923; Tue, 11 May 2021 17:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWOC-0003p0-HD; Tue, 11 May 2021 17:41:48 +0000
Received: by outflank-mailman (input) for mailman id 125861;
 Tue, 11 May 2021 17:41:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgWOB-0003oo-E3
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 17:41:47 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8891cb61-4688-4f84-a586-c228ec11c22d;
 Tue, 11 May 2021 17:41:46 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id F355F611C9;
 Tue, 11 May 2021 17:41:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8891cb61-4688-4f84-a586-c228ec11c22d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620754905;
	bh=p5pl8VglhLEAnmwX/+IRaXV2EP7T6wfDyOXKKS7PxLw=;
	h=From:To:Cc:Subject:Date:From;
	b=mhCoF0FWPfPN/PDmOQMAitbT9aqlzHlYF/CtyR12btuCdKbmhihNy25BWkvz8BxGC
	 6KF9qc8MTe7+Lj+KmpO4FiedIpwSdO+7cGDubiXTrRNiBvkHNAjoXH+c+OZWYxHgMx
	 zQ6Sa3KprdTvbEZ9cnC1Kd7x5J/UsZAesiExc1I3mMl8N38992qENw+lLIerKXlsc8
	 WUpCBHf4OSTqThh+Iqd1iOapZrAOZatdglTIFNNjebFhOa21pjHlInh6WqQVbYMcT+
	 ZKUvjiDXcBrFeMTAMietHC2IqN4pYaxwxClDfIGjMxlZBYJ231GhwkxrL1qt3b/UDL
	 qmDo1+MQvlh8w==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	hch@lst.de,
	linux-kernel@vger.kernel.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	catalin.marinas@arm.com,
	will@kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: [PATCH 1/2] xen/arm64: do not set SWIOTLB_NO_FORCE when swiotlb is required
Date: Tue, 11 May 2021 10:41:41 -0700
Message-Id: <20210511174142.12742-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Although SWIOTLB_NO_FORCE is meant to allow later calls to swiotlb_init,
today dma_direct_map_page returns an error if SWIOTLB_NO_FORCE is set.

For now, without a larger overhaul of SWIOTLB_NO_FORCE, the best we can
do is to avoid setting SWIOTLB_NO_FORCE in mem_init when we know that it
is going to be required later (e.g. Xen requires it).

To make xen_swiotlb_detect available to !CONFIG_XEN builds, move it to a
static inline function.

CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
CC: catalin.marinas@arm.com
CC: will@kernel.org
CC: linux-arm-kernel@lists.infradead.org
Fixes: 2726bf3ff252 ("swiotlb: Make SWIOTLB_NO_FORCE perform no allocation")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 arch/arm/xen/mm.c             | 12 ------------
 arch/arm64/mm/init.c          |  3 ++-
 include/xen/arm/swiotlb-xen.h | 15 ++++++++++++++-
 3 files changed, 16 insertions(+), 14 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f8f07469d259..223b1151fd7d 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -135,18 +135,6 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 	return;
 }
 
-int xen_swiotlb_detect(void)
-{
-	if (!xen_domain())
-		return 0;
-	if (xen_feature(XENFEAT_direct_mapped))
-		return 1;
-	/* legacy case */
-	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
-		return 1;
-	return 0;
-}
-
 static int __init xen_mm_init(void)
 {
 	struct gnttab_cache_flush cflush;
diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 16a2b2b1c54d..e55409caaee3 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -43,6 +43,7 @@
 #include <linux/sizes.h>
 #include <asm/tlb.h>
 #include <asm/alternative.h>
+#include <asm/xen/swiotlb-xen.h>
 
 /*
  * We need to be able to catch inadvertent references to memstart_addr
@@ -482,7 +483,7 @@ void __init mem_init(void)
 	if (swiotlb_force == SWIOTLB_FORCE ||
 	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
 		swiotlb_init(1);
-	else
+	else if (!xen_swiotlb_detect())
 		swiotlb_force = SWIOTLB_NO_FORCE;
 
 	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
diff --git a/include/xen/arm/swiotlb-xen.h b/include/xen/arm/swiotlb-xen.h
index 2994fe6031a0..33336ab58afc 100644
--- a/include/xen/arm/swiotlb-xen.h
+++ b/include/xen/arm/swiotlb-xen.h
@@ -2,6 +2,19 @@
 #ifndef _ASM_ARM_SWIOTLB_XEN_H
 #define _ASM_ARM_SWIOTLB_XEN_H
 
-extern int xen_swiotlb_detect(void);
+#include <xen/features.h>
+#include <xen/xen.h>
+
+static inline int xen_swiotlb_detect(void)
+{
+	if (!xen_domain())
+		return 0;
+	if (xen_feature(XENFEAT_direct_mapped))
+		return 1;
+	/* legacy case */
+	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
+		return 1;
+	return 0;
+}
 
 #endif /* _ASM_ARM_SWIOTLB_XEN_H */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 17:41:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 17:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125862.236926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWOC-0003s9-SR; Tue, 11 May 2021 17:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125862.236926; Tue, 11 May 2021 17:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWOC-0003rU-OL; Tue, 11 May 2021 17:41:48 +0000
Received: by outflank-mailman (input) for mailman id 125862;
 Tue, 11 May 2021 17:41:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgWOB-0003op-IT
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 17:41:47 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 677c5b21-4605-4b96-b1f7-2cc429289251;
 Tue, 11 May 2021 17:41:47 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id CB623613C3;
 Tue, 11 May 2021 17:41:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 677c5b21-4605-4b96-b1f7-2cc429289251
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620754906;
	bh=/y24nuaxa3ahuPNyDsX+pUzlA7FV7W7+VmIh+phhWv4=;
	h=From:To:Cc:Subject:Date:From;
	b=aLZ+AS1gA5cMZ8RVBKoIqDIZcfszquu/ab7Ff6SE7rkRnIPa1yGD2Wv1/kGeFyZ2s
	 KTKehYlE2EUwCF9G4rnoyYlJ7fZKBBdXEwh0VBqARwd8gn0iA0AUJcR8NWIxsiD36S
	 hnENri5ucimFD/OOKF3SQJ3xLPKR7GMFxJr0mH5IYSAhPVEgIJlYKyxepqOeINvD6B
	 hbzLGO4L4D/Vzw2gjBofuadHnoqU0rF5Vn+NF0QYlcYB2fM0CVcCtM2y9eg5oROn8G
	 X2w6XtAJQagQb7QGiiHjRpxbG3HmR9xROVxxsx5qW0OHbSO3sp4gcYF5EfVvEemxZA
	 cpgazArg1UgDA==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	hch@lst.de,
	linux-kernel@vger.kernel.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com
Subject: [PATCH 2/2] xen/swiotlb: check if the swiotlb has already been initialized
Date: Tue, 11 May 2021 10:41:42 -0700
Message-Id: <20210511174142.12742-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

xen_swiotlb_init calls swiotlb_late_init_with_tbl, which fails with
-ENOMEM if the swiotlb has already been initialized.

Add an explicit check for io_tlb_default_mem != NULL at the beginning of
xen_swiotlb_init. If the swiotlb is already initialized, print a warning
and return -EEXIST.

On x86, the error propagates.

On ARM, we don't actually need a special swiotlb buffer (yet); any
buffer would do. So ignore the error and continue.

CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
---
 arch/arm/xen/mm.c         | 8 +++++++-
 drivers/xen/swiotlb-xen.c | 5 +++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 223b1151fd7d..a7e54a087b80 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -138,9 +138,15 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 static int __init xen_mm_init(void)
 {
 	struct gnttab_cache_flush cflush;
+	int rc;
+
 	if (!xen_swiotlb_detect())
 		return 0;
-	xen_swiotlb_init();
+
+	rc = xen_swiotlb_init();
+	/* we can work with the default swiotlb */
+	if (rc < 0 && rc != -EEXIST)
+		return rc;
 
 	cflush.op = 0;
 	cflush.a.dev_bus_addr = 0;
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..6412d59ce7f8 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -164,6 +164,11 @@ int __ref xen_swiotlb_init(void)
 	int rc = -ENOMEM;
 	char *start;
 
+	if (io_tlb_default_mem != NULL) {
+		printk(KERN_WARNING "Xen-SWIOTLB: swiotlb buffer already initialized\n");
+		return -EEXIST;
+	}
+
 retry:
 	m_ret = XEN_SWIOTLB_ENOMEM;
 	order = get_order(bytes);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:06:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:06:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125892.236964 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWlz-00009X-HA; Tue, 11 May 2021 18:06:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125892.236964; Tue, 11 May 2021 18:06:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWlz-00009Q-E9; Tue, 11 May 2021 18:06:23 +0000
Received: by outflank-mailman (input) for mailman id 125892;
 Tue, 11 May 2021 18:06:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWlx-00009J-Vf
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:06:22 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 55f34bf5-a458-496e-ad07-354cfd91063e;
 Tue, 11 May 2021 18:06:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 55f34bf5-a458-496e-ad07-354cfd91063e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756378;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=uJd4x9Vaptn0QmvchrvWDGpAcSY3yfuBcsn4eM0qSh0=;
  b=VHhQvcoTcD6iglWFLz2BS39dd4T5/avM/FL6hyowanSDJN2BhrajmPsq
   t1vv0wV7Q36r7X7/CyWeiTXjbVuSGvyv9L+HIeZSi5dhim9PsPeyPXZOo
   S77F56o9yoO77eHe0b6RYWYPRZ5k50zaqqgB9GM3KuJNKGBD67vdi4Ktd
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: YFUYZh+m3L1S8X5zYi6aGBQh6NuXo18mJkuRCCTbNIFUFSX9vSRZb8pcDSDX2HouicW8NkQOKO
 MQespezWxp2kTMV0Q+MuwAbbZOqsZp771vzREGT6X89HzxW6Z3uVsNfaRbllqDYHhLIbJ+P9MP
 QG6KjgvggHCx4BdRTiipVmNncqcgIcJQPMYob88+RMScXpho45U4POPa1GP8YqULD0LyGjDLBq
 6tdVCd4lA64Dcq3xZJBGvwGek5dyHGlpsHuO6XlSGVPsDXKdb74U9JiQC/44U11g49LjQ9Wdjh
 +fs=
X-SBRS: 5.1
X-MesageID: 43579190
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:QgHJ0aPseGlvT8BcTs2jsMiBIKoaSvp037Eqv3oedfUzSL3+qy
 nOpoV+6faaslYssR0b9exoW5PwJE80l6QFgrX5VI3KNGKN1VdARLsSi7cKqAeAJ8SRzIFgPN
 9bAspDNOE=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43579190"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, "Julien Grall" <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Christian Lindig
	<christian.lindig@citrix.com>, David Scott <dave@recoil.org>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH v2 00/17] live update and gnttab patches
Date: Tue, 11 May 2021 19:05:13 +0100
Message-ID: <cover.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

These patches have been posted previously.
The gnttab patches (tools/ocaml/libs/mmap) were not applied at the time
to avoid conflicts with an in-progress XSA.
The binary format live-update and fuzzing patches were not applied
because it was too close to the next Xen release freeze.

The patches depend on each other: live-update only works correctly when the gnttab
patches are taken too (the MFN is not part of the binary live-update stream),
so they are included here as a single series.
The gnttab patches replace one use of libxenctrl with stable interfaces, leaving one unstable
libxenctrl interface used by oxenstored.

The 'vendor external dependencies' patch may be optional: it is useful as part
of a patchqueue in a specfile so that you can build everything without external dependencies,
but it might as well be committed so everyone has it easily available, not just XenServer.

Note that the live-update fuzz test doesn't yet pass: it is still able to find bugs.
However, the reduced version with a fixed seed used as a unit test does pass,
so it is useful to have it committed, and further improvements can be made later
as more bugs are discovered and fixed.

Edwin Török (17):
  docs/designs/xenstore-migration.md: clarify that deletes are recursive
  tools/ocaml: add unit test skeleton with Dune build system
  tools/ocaml: vendor external dependencies for convenience
  tools/ocaml/xenstored: implement the live migration binary format
  tools/ocaml/xenstored: add binary dump format support
  tools/ocaml/xenstored: add support for binary format
  tools/ocaml/xenstored: validate config file before live update
  Add structured fuzzing unit test
  tools/ocaml: use common macros for manipulating mmap_interface
  tools/ocaml/libs/mmap: allocate correct number of bytes
  tools/ocaml/libs/mmap: Expose stub_mmap_alloc
  tools/ocaml/libs/mmap: mark mmap/munmap as blocking
  tools/ocaml/libs/xb: import gnttab stubs from mirage
  tools/ocaml: safer Xenmmap interface
  tools/ocaml/xenstored: use gnttab instead of xenctrl's
    foreign_map_range
  tools/ocaml/xenstored: don't store domU's mfn of ring page
  tools/ocaml/libs/mmap: Clean up unused read/write

 docs/designs/xenstore-migration.md            |    3 +-
 tools/ocaml/.gitignore                        |    2 +
 tools/ocaml/Makefile                          |   53 +
 tools/ocaml/dune-project                      |    5 +
 tools/ocaml/duniverse/cmdliner/.gitignore     |   10 +
 tools/ocaml/duniverse/cmdliner/.ocp-indent    |    1 +
 tools/ocaml/duniverse/cmdliner/B0.ml          |    9 +
 tools/ocaml/duniverse/cmdliner/CHANGES.md     |  255 +++
 tools/ocaml/duniverse/cmdliner/LICENSE.md     |   13 +
 tools/ocaml/duniverse/cmdliner/Makefile       |   77 +
 tools/ocaml/duniverse/cmdliner/README.md      |   51 +
 tools/ocaml/duniverse/cmdliner/_tags          |    3 +
 tools/ocaml/duniverse/cmdliner/build.ml       |  155 ++
 tools/ocaml/duniverse/cmdliner/cmdliner.opam  |   32 +
 tools/ocaml/duniverse/cmdliner/doc/api.odocl  |    1 +
 tools/ocaml/duniverse/cmdliner/dune-project   |    2 +
 tools/ocaml/duniverse/cmdliner/pkg/META       |    7 +
 tools/ocaml/duniverse/cmdliner/pkg/pkg.ml     |   33 +
 .../ocaml/duniverse/cmdliner/src/cmdliner.ml  |  309 ++++
 .../ocaml/duniverse/cmdliner/src/cmdliner.mli | 1624 +++++++++++++++++
 .../duniverse/cmdliner/src/cmdliner.mllib     |   11 +
 .../duniverse/cmdliner/src/cmdliner_arg.ml    |  356 ++++
 .../duniverse/cmdliner/src/cmdliner_arg.mli   |  111 ++
 .../duniverse/cmdliner/src/cmdliner_base.ml   |  302 +++
 .../duniverse/cmdliner/src/cmdliner_base.mli  |   68 +
 .../duniverse/cmdliner/src/cmdliner_cline.ml  |  199 ++
 .../duniverse/cmdliner/src/cmdliner_cline.mli |   34 +
 .../duniverse/cmdliner/src/cmdliner_docgen.ml |  352 ++++
 .../cmdliner/src/cmdliner_docgen.mli          |   30 +
 .../duniverse/cmdliner/src/cmdliner_info.ml   |  233 +++
 .../duniverse/cmdliner/src/cmdliner_info.mli  |  140 ++
 .../cmdliner/src/cmdliner_manpage.ml          |  502 +++++
 .../cmdliner/src/cmdliner_manpage.mli         |  100 +
 .../duniverse/cmdliner/src/cmdliner_msg.ml    |  116 ++
 .../duniverse/cmdliner/src/cmdliner_msg.mli   |   56 +
 .../cmdliner/src/cmdliner_suggest.ml          |   54 +
 .../cmdliner/src/cmdliner_suggest.mli         |   25 +
 .../duniverse/cmdliner/src/cmdliner_term.ml   |   41 +
 .../duniverse/cmdliner/src/cmdliner_term.mli  |   40 +
 .../duniverse/cmdliner/src/cmdliner_trie.ml   |   97 +
 .../duniverse/cmdliner/src/cmdliner_trie.mli  |   35 +
 tools/ocaml/duniverse/cmdliner/src/dune       |    4 +
 tools/ocaml/duniverse/cmdliner/test/chorus.ml |   31 +
 tools/ocaml/duniverse/cmdliner/test/cp_ex.ml  |   54 +
 .../ocaml/duniverse/cmdliner/test/darcs_ex.ml |  149 ++
 tools/ocaml/duniverse/cmdliner/test/dune      |   12 +
 tools/ocaml/duniverse/cmdliner/test/revolt.ml |    9 +
 tools/ocaml/duniverse/cmdliner/test/rm_ex.ml  |   53 +
 .../ocaml/duniverse/cmdliner/test/tail_ex.ml  |   73 +
 .../ocaml/duniverse/cmdliner/test/test_man.ml |  100 +
 .../duniverse/cmdliner/test/test_man_utf8.ml  |   11 +
 .../duniverse/cmdliner/test/test_opt_req.ml   |   13 +
 .../ocaml/duniverse/cmdliner/test/test_pos.ml |   13 +
 .../duniverse/cmdliner/test/test_pos_all.ml   |   11 +
 .../duniverse/cmdliner/test/test_pos_left.ml  |   11 +
 .../duniverse/cmdliner/test/test_pos_req.ml   |   15 +
 .../duniverse/cmdliner/test/test_pos_rev.ml   |   14 +
 .../duniverse/cmdliner/test/test_term_dups.ml |   19 +
 .../cmdliner/test/test_with_used_args.ml      |   18 +
 tools/ocaml/duniverse/cppo/.gitignore         |    5 +
 tools/ocaml/duniverse/cppo/.ocp-indent        |   22 +
 tools/ocaml/duniverse/cppo/.travis.yml        |   16 +
 tools/ocaml/duniverse/cppo/CODEOWNERS         |    8 +
 tools/ocaml/duniverse/cppo/Changes            |   85 +
 tools/ocaml/duniverse/cppo/INSTALL.md         |   17 +
 tools/ocaml/duniverse/cppo/LICENSE.md         |   24 +
 tools/ocaml/duniverse/cppo/Makefile           |   18 +
 tools/ocaml/duniverse/cppo/README.md          |  521 ++++++
 tools/ocaml/duniverse/cppo/VERSION            |    1 +
 tools/ocaml/duniverse/cppo/appveyor.yml       |   14 +
 tools/ocaml/duniverse/cppo/cppo.opam          |   31 +
 .../ocaml/duniverse/cppo/cppo_ocamlbuild.opam |   27 +
 tools/ocaml/duniverse/cppo/dune-project       |    3 +
 tools/ocaml/duniverse/cppo/examples/Makefile  |    8 +
 tools/ocaml/duniverse/cppo/examples/debug.ml  |    7 +
 tools/ocaml/duniverse/cppo/examples/dune      |   32 +
 tools/ocaml/duniverse/cppo/examples/french.ml |   34 +
 tools/ocaml/duniverse/cppo/examples/lexer.mll |    9 +
 .../duniverse/cppo/ocamlbuild_plugin/_tags    |    1 +
 .../duniverse/cppo/ocamlbuild_plugin/dune     |    6 +
 .../cppo/ocamlbuild_plugin/ocamlbuild_cppo.ml |   35 +
 .../ocamlbuild_plugin/ocamlbuild_cppo.mli     |    9 +
 tools/ocaml/duniverse/cppo/src/compat.ml      |    7 +
 .../ocaml/duniverse/cppo/src/cppo_command.ml  |   63 +
 .../ocaml/duniverse/cppo/src/cppo_command.mli |   11 +
 tools/ocaml/duniverse/cppo/src/cppo_eval.ml   |  697 +++++++
 tools/ocaml/duniverse/cppo/src/cppo_eval.mli  |   29 +
 tools/ocaml/duniverse/cppo/src/cppo_lexer.mll |  721 ++++++++
 tools/ocaml/duniverse/cppo/src/cppo_main.ml   |  230 +++
 .../ocaml/duniverse/cppo/src/cppo_parser.mly  |  266 +++
 tools/ocaml/duniverse/cppo/src/cppo_types.ml  |   98 +
 tools/ocaml/duniverse/cppo/src/cppo_types.mli |   70 +
 .../ocaml/duniverse/cppo/src/cppo_version.mli |    1 +
 tools/ocaml/duniverse/cppo/src/dune           |   21 +
 tools/ocaml/duniverse/cppo/test/capital.cppo  |    6 +
 tools/ocaml/duniverse/cppo/test/capital.ref   |    6 +
 tools/ocaml/duniverse/cppo/test/comments.cppo |    7 +
 tools/ocaml/duniverse/cppo/test/comments.ref  |    8 +
 tools/ocaml/duniverse/cppo/test/cond.cppo     |   47 +
 tools/ocaml/duniverse/cppo/test/cond.ref      |   17 +
 tools/ocaml/duniverse/cppo/test/dune          |  130 ++
 tools/ocaml/duniverse/cppo/test/ext.cppo      |   10 +
 tools/ocaml/duniverse/cppo/test/ext.ref       |   28 +
 tools/ocaml/duniverse/cppo/test/incl.cppo     |    3 +
 tools/ocaml/duniverse/cppo/test/incl2.cppo    |    1 +
 tools/ocaml/duniverse/cppo/test/loc.cppo      |    8 +
 tools/ocaml/duniverse/cppo/test/loc.ref       |   21 +
 .../ocaml/duniverse/cppo/test/paren_arg.cppo  |    3 +
 tools/ocaml/duniverse/cppo/test/paren_arg.ref |    4 +
 tools/ocaml/duniverse/cppo/test/source.sh     |   13 +
 tools/ocaml/duniverse/cppo/test/test.cppo     |  144 ++
 tools/ocaml/duniverse/cppo/test/tuple.cppo    |   38 +
 tools/ocaml/duniverse/cppo/test/tuple.ref     |   20 +
 .../ocaml/duniverse/cppo/test/unmatched.cppo  |   14 +
 tools/ocaml/duniverse/cppo/test/unmatched.ref |   15 +
 tools/ocaml/duniverse/cppo/test/version.cppo  |   30 +
 tools/ocaml/duniverse/crowbar/.gitignore      |    5 +
 tools/ocaml/duniverse/crowbar/CHANGES.md      |    9 +
 tools/ocaml/duniverse/crowbar/LICENSE.md      |    8 +
 tools/ocaml/duniverse/crowbar/README.md       |   82 +
 tools/ocaml/duniverse/crowbar/crowbar.opam    |   33 +
 tools/ocaml/duniverse/crowbar/dune            |    1 +
 tools/ocaml/duniverse/crowbar/dune-project    |    2 +
 .../duniverse/crowbar/examples/.gitignore     |    1 +
 .../duniverse/crowbar/examples/calendar/dune  |    3 +
 .../examples/calendar/test_calendar.ml        |   29 +
 .../duniverse/crowbar/examples/fpath/dune     |    4 +
 .../crowbar/examples/fpath/test_fpath.ml      |   18 +
 .../duniverse/crowbar/examples/input/testcase |    1 +
 .../ocaml/duniverse/crowbar/examples/map/dune |    3 +
 .../crowbar/examples/map/test_map.ml          |   47 +
 .../duniverse/crowbar/examples/pprint/dune    |    3 +
 .../crowbar/examples/pprint/test_pprint.ml    |   39 +
 .../crowbar/examples/serializer/dune          |    3 +
 .../crowbar/examples/serializer/serializer.ml |   34 +
 .../examples/serializer/test_serializer.ml    |   47 +
 .../duniverse/crowbar/examples/uunf/dune      |    3 +
 .../crowbar/examples/uunf/test_uunf.ml        |   75 +
 .../duniverse/crowbar/examples/xmldiff/dune   |    3 +
 .../crowbar/examples/xmldiff/test_xmldiff.ml  |   42 +
 tools/ocaml/duniverse/crowbar/src/crowbar.ml  |  582 ++++++
 tools/ocaml/duniverse/crowbar/src/crowbar.mli |  251 +++
 tools/ocaml/duniverse/crowbar/src/dune        |    3 +
 tools/ocaml/duniverse/crowbar/src/todo        |   16 +
 tools/ocaml/duniverse/csexp/CHANGES.md        |   45 +
 tools/ocaml/duniverse/csexp/LICENSE.md        |   21 +
 tools/ocaml/duniverse/csexp/Makefile          |   23 +
 tools/ocaml/duniverse/csexp/README.md         |   33 +
 .../duniverse/csexp/bench/csexp_bench.ml      |   22 +
 tools/ocaml/duniverse/csexp/bench/dune        |   11 +
 tools/ocaml/duniverse/csexp/bench/main.ml     |    1 +
 tools/ocaml/duniverse/csexp/bench/runner.sh   |    4 +
 tools/ocaml/duniverse/csexp/csexp.opam        |   51 +
 .../ocaml/duniverse/csexp/csexp.opam.template |   14 +
 tools/ocaml/duniverse/csexp/dune-project      |   42 +
 .../ocaml/duniverse/csexp/dune-workspace.dev  |    6 +
 tools/ocaml/duniverse/csexp/src/csexp.ml      |  333 ++++
 tools/ocaml/duniverse/csexp/src/csexp.mli     |  369 ++++
 tools/ocaml/duniverse/csexp/src/dune          |    3 +
 tools/ocaml/duniverse/csexp/test/dune         |    6 +
 tools/ocaml/duniverse/csexp/test/test.ml      |  142 ++
 tools/ocaml/duniverse/dune                    |    4 +
 tools/ocaml/duniverse/fmt/.gitignore          |    8 +
 tools/ocaml/duniverse/fmt/.ocp-indent         |    1 +
 tools/ocaml/duniverse/fmt/CHANGES.md          |   98 +
 tools/ocaml/duniverse/fmt/LICENSE.md          |   13 +
 tools/ocaml/duniverse/fmt/README.md           |   35 +
 tools/ocaml/duniverse/fmt/_tags               |    7 +
 tools/ocaml/duniverse/fmt/doc/api.odocl       |    3 +
 tools/ocaml/duniverse/fmt/doc/index.mld       |   11 +
 tools/ocaml/duniverse/fmt/dune-project        |    2 +
 tools/ocaml/duniverse/fmt/fmt.opam            |   35 +
 tools/ocaml/duniverse/fmt/pkg/META            |   40 +
 tools/ocaml/duniverse/fmt/pkg/pkg.ml          |   18 +
 tools/ocaml/duniverse/fmt/src/dune            |   30 +
 tools/ocaml/duniverse/fmt/src/fmt.ml          |  787 ++++++++
 tools/ocaml/duniverse/fmt/src/fmt.mli         |  689 +++++++
 tools/ocaml/duniverse/fmt/src/fmt.mllib       |    1 +
 tools/ocaml/duniverse/fmt/src/fmt_cli.ml      |   32 +
 tools/ocaml/duniverse/fmt/src/fmt_cli.mli     |   45 +
 tools/ocaml/duniverse/fmt/src/fmt_cli.mllib   |    1 +
 tools/ocaml/duniverse/fmt/src/fmt_top.ml      |   23 +
 tools/ocaml/duniverse/fmt/src/fmt_top.mllib   |    1 +
 tools/ocaml/duniverse/fmt/src/fmt_tty.ml      |   78 +
 tools/ocaml/duniverse/fmt/src/fmt_tty.mli     |   50 +
 tools/ocaml/duniverse/fmt/src/fmt_tty.mllib   |    1 +
 .../duniverse/fmt/src/fmt_tty_top_init.ml     |   23 +
 tools/ocaml/duniverse/fmt/test/test.ml        |  322 ++++
 .../duniverse/ocaml-afl-persistent/.gitignore |    2 +
 .../duniverse/ocaml-afl-persistent/CHANGES.md |   22 +
 .../duniverse/ocaml-afl-persistent/LICENSE.md |    8 +
 .../duniverse/ocaml-afl-persistent/README.md  |   17 +
 .../ocaml-afl-persistent/afl-persistent.opam  |   49 +
 .../afl-persistent.opam.template              |   16 +
 .../aflPersistent.available.ml                |   21 +
 .../ocaml-afl-persistent/aflPersistent.mli    |    1 +
 .../aflPersistent.stub.ml                     |    1 +
 .../duniverse/ocaml-afl-persistent/detect.sh  |   43 +
 .../ocaml/duniverse/ocaml-afl-persistent/dune |   20 +
 .../ocaml-afl-persistent/dune-project         |   23 +
 .../duniverse/ocaml-afl-persistent/test.ml    |    3 +
 .../ocaml-afl-persistent/test/harness.ml      |   22 +
 .../ocaml-afl-persistent/test/test.ml         |   73 +
 .../ocaml-afl-persistent/test/test.sh         |   33 +
 .../ocaml/duniverse/ocplib-endian/.gitignore  |    3 +
 .../ocaml/duniverse/ocplib-endian/.travis.yml |   19 +
 .../ocaml/duniverse/ocplib-endian/CHANGES.md  |   55 +
 .../ocaml/duniverse/ocplib-endian/COPYING.txt |  521 ++++++
 tools/ocaml/duniverse/ocplib-endian/Makefile  |   13 +
 tools/ocaml/duniverse/ocplib-endian/README.md |   16 +
 .../duniverse/ocplib-endian/dune-project      |    2 +
 .../ocplib-endian/ocplib-endian.opam          |   30 +
 .../ocplib-endian/src/be_ocaml_401.ml         |   32 +
 .../duniverse/ocplib-endian/src/common.ml     |   24 +
 .../ocplib-endian/src/common_401.cppo.ml      |  100 +
 .../ocplib-endian/src/common_float.ml         |    5 +
 tools/ocaml/duniverse/ocplib-endian/src/dune  |   75 +
 .../ocplib-endian/src/endianBigstring.cppo.ml |  112 ++
 .../src/endianBigstring.cppo.mli              |  128 ++
 .../ocplib-endian/src/endianBytes.cppo.ml     |  130 ++
 .../ocplib-endian/src/endianBytes.cppo.mli    |  124 ++
 .../ocplib-endian/src/endianString.cppo.ml    |  118 ++
 .../ocplib-endian/src/endianString.cppo.mli   |  121 ++
 .../ocplib-endian/src/le_ocaml_401.ml         |   32 +
 .../ocplib-endian/src/ne_ocaml_401.ml         |   20 +
 .../duniverse/ocplib-endian/tests/bench.ml    |  436 +++++
 .../ocaml/duniverse/ocplib-endian/tests/dune  |   35 +
 .../duniverse/ocplib-endian/tests/test.ml     |   39 +
 .../tests/test_bigstring.cppo.ml              |  191 ++
 .../ocplib-endian/tests/test_bytes.cppo.ml    |  185 ++
 .../ocplib-endian/tests/test_string.cppo.ml   |  185 ++
 tools/ocaml/duniverse/result/CHANGES.md       |   15 +
 tools/ocaml/duniverse/result/LICENSE.md       |   24 +
 tools/ocaml/duniverse/result/Makefile         |   17 +
 tools/ocaml/duniverse/result/README.md        |    5 +
 tools/ocaml/duniverse/result/dune             |   12 +
 tools/ocaml/duniverse/result/dune-project     |    3 +
 .../duniverse/result/result-as-alias-4.08.ml  |    2 +
 .../ocaml/duniverse/result/result-as-alias.ml |    2 +
 .../duniverse/result/result-as-newtype.ml     |    2 +
 tools/ocaml/duniverse/result/result.opam      |   18 +
 tools/ocaml/duniverse/result/which_result.ml  |   14 +
 tools/ocaml/duniverse/stdlib-shims/CHANGES.md |    5 +
 tools/ocaml/duniverse/stdlib-shims/LICENSE    |  203 +++
 tools/ocaml/duniverse/stdlib-shims/README.md  |    2 +
 .../ocaml/duniverse/stdlib-shims/dune-project |    1 +
 .../duniverse/stdlib-shims/dune-workspace.dev |   14 +
 tools/ocaml/duniverse/stdlib-shims/src/dune   |   97 +
 .../duniverse/stdlib-shims/stdlib-shims.opam  |   24 +
 tools/ocaml/duniverse/stdlib-shims/test/dune  |    3 +
 .../ocaml/duniverse/stdlib-shims/test/test.ml |    2 +
 tools/ocaml/libs/eventchn/dune                |    8 +
 tools/ocaml/libs/mmap/Makefile                |   19 +-
 tools/ocaml/libs/mmap/dune                    |   18 +
 tools/ocaml/libs/mmap/gnt.ml                  |   62 +
 tools/ocaml/libs/mmap/gnt.mli                 |   87 +
 tools/ocaml/libs/mmap/gnttab_stubs.c          |  106 ++
 tools/ocaml/libs/mmap/mmap_stubs.h            |   11 +-
 tools/ocaml/libs/mmap/xenmmap.ml              |   17 +-
 tools/ocaml/libs/mmap/xenmmap.mli             |   13 +-
 tools/ocaml/libs/mmap/xenmmap_stubs.c         |   86 +-
 tools/ocaml/libs/xb/dune                      |    7 +
 tools/ocaml/libs/xb/xb.ml                     |   10 +-
 tools/ocaml/libs/xb/xb.mli                    |    4 +-
 tools/ocaml/libs/xb/xs_ring_stubs.c           |   14 +-
 tools/ocaml/libs/xc/dune                      |    9 +
 tools/ocaml/libs/xc/xenctrl.ml                |    6 +-
 tools/ocaml/libs/xc/xenctrl.mli               |    5 +-
 tools/ocaml/libs/xs/dune                      |    4 +
 tools/ocaml/xen.opam                          |    1 +
 tools/ocaml/xen.opam.locked                   |  119 ++
 tools/ocaml/xenstore.opam                     |    1 +
 tools/ocaml/xenstored.opam                    |   21 +
 tools/ocaml/xenstored/Makefile                |    4 +-
 tools/ocaml/xenstored/connection.ml           |   63 +-
 tools/ocaml/xenstored/disk.ml                 |  319 ++++
 tools/ocaml/xenstored/domain.ml               |    9 +-
 tools/ocaml/xenstored/domains.ml              |   13 +-
 tools/ocaml/xenstored/dune                    |   19 +
 tools/ocaml/xenstored/parse_arg.ml            |    6 +-
 tools/ocaml/xenstored/perms.ml                |    2 +
 tools/ocaml/xenstored/poll.ml                 |    3 +-
 tools/ocaml/xenstored/process.ml              |   30 +-
 tools/ocaml/xenstored/store.ml                |    1 +
 tools/ocaml/xenstored/test/dune               |   23 +
 tools/ocaml/xenstored/test/generator.ml       |  189 ++
 tools/ocaml/xenstored/test/gnt.ml             |   52 +
 tools/ocaml/xenstored/test/model.ml           |  253 +++
 tools/ocaml/xenstored/test/old/arbitrary.ml   |  261 +++
 tools/ocaml/xenstored/test/old/gen_paths.ml   |   66 +
 .../xenstored/test/old/xenstored_test.ml      |  527 ++++++
 tools/ocaml/xenstored/test/pathtree.ml        |   40 +
 tools/ocaml/xenstored/test/testable.ml        |  380 ++++
 tools/ocaml/xenstored/test/types.ml           |  437 +++++
 .../xenmmap.mli => xenstored/test/xenctrl.ml} |   40 +-
 tools/ocaml/xenstored/test/xeneventchn.ml     |   50 +
 tools/ocaml/xenstored/test/xenstored_test.ml  |  147 ++
 tools/ocaml/xenstored/test/xs_protocol.ml     |  733 ++++++++
 tools/ocaml/xenstored/transaction.ml          |  119 +-
 tools/ocaml/xenstored/xenstored.ml            |  224 ++-
 tools/ocaml/xenstored/xenstored_main.ml       |    1 +
 301 files changed, 22724 insertions(+), 193 deletions(-)
 create mode 100644 tools/ocaml/.gitignore
 create mode 100644 tools/ocaml/dune-project
 create mode 100644 tools/ocaml/duniverse/cmdliner/.gitignore
 create mode 100644 tools/ocaml/duniverse/cmdliner/.ocp-indent
 create mode 100644 tools/ocaml/duniverse/cmdliner/B0.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/cmdliner/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/cmdliner/Makefile
 create mode 100644 tools/ocaml/duniverse/cmdliner/README.md
 create mode 100644 tools/ocaml/duniverse/cmdliner/_tags
 create mode 100755 tools/ocaml/duniverse/cmdliner/build.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/cmdliner.opam
 create mode 100644 tools/ocaml/duniverse/cmdliner/doc/api.odocl
 create mode 100644 tools/ocaml/duniverse/cmdliner/dune-project
 create mode 100644 tools/ocaml/duniverse/cmdliner/pkg/META
 create mode 100755 tools/ocaml/duniverse/cmdliner/pkg/pkg.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner.mllib
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_base.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_base.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_info.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_info.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_term.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_term.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/dune
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/chorus.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/cp_ex.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/darcs_ex.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/dune
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/revolt.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/rm_ex.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/tail_ex.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_man.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_man_utf8.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_opt_req.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos_all.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos_left.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos_req.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos_rev.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_term_dups.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_with_used_args.ml
 create mode 100644 tools/ocaml/duniverse/cppo/.gitignore
 create mode 100644 tools/ocaml/duniverse/cppo/.ocp-indent
 create mode 100644 tools/ocaml/duniverse/cppo/.travis.yml
 create mode 100644 tools/ocaml/duniverse/cppo/CODEOWNERS
 create mode 100644 tools/ocaml/duniverse/cppo/Changes
 create mode 100644 tools/ocaml/duniverse/cppo/INSTALL.md
 create mode 100644 tools/ocaml/duniverse/cppo/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/cppo/Makefile
 create mode 100644 tools/ocaml/duniverse/cppo/README.md
 create mode 100644 tools/ocaml/duniverse/cppo/VERSION
 create mode 100644 tools/ocaml/duniverse/cppo/appveyor.yml
 create mode 100644 tools/ocaml/duniverse/cppo/cppo.opam
 create mode 100644 tools/ocaml/duniverse/cppo/cppo_ocamlbuild.opam
 create mode 100644 tools/ocaml/duniverse/cppo/dune-project
 create mode 100644 tools/ocaml/duniverse/cppo/examples/Makefile
 create mode 100644 tools/ocaml/duniverse/cppo/examples/debug.ml
 create mode 100644 tools/ocaml/duniverse/cppo/examples/dune
 create mode 100644 tools/ocaml/duniverse/cppo/examples/french.ml
 create mode 100644 tools/ocaml/duniverse/cppo/examples/lexer.mll
 create mode 100644 tools/ocaml/duniverse/cppo/ocamlbuild_plugin/_tags
 create mode 100644 tools/ocaml/duniverse/cppo/ocamlbuild_plugin/dune
 create mode 100644 tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.ml
 create mode 100644 tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/compat.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_command.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_command.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_eval.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_eval.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_lexer.mll
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_main.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_parser.mly
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_types.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_types.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_version.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/dune
 create mode 100644 tools/ocaml/duniverse/cppo/test/capital.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/capital.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/comments.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/comments.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/cond.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/cond.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/dune
 create mode 100644 tools/ocaml/duniverse/cppo/test/ext.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/ext.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/incl.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/incl2.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/loc.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/loc.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/paren_arg.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/paren_arg.ref
 create mode 100755 tools/ocaml/duniverse/cppo/test/source.sh
 create mode 100644 tools/ocaml/duniverse/cppo/test/test.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/tuple.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/tuple.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/unmatched.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/unmatched.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/version.cppo
 create mode 100644 tools/ocaml/duniverse/crowbar/.gitignore
 create mode 100644 tools/ocaml/duniverse/crowbar/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/crowbar/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/crowbar/README.md
 create mode 100644 tools/ocaml/duniverse/crowbar/crowbar.opam
 create mode 100644 tools/ocaml/duniverse/crowbar/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/dune-project
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/.gitignore
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/calendar/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/calendar/test_calendar.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/fpath/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/fpath/test_fpath.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/input/testcase
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/map/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/map/test_map.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/pprint/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/pprint/test_pprint.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/serializer/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/serializer/serializer.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/serializer/test_serializer.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/uunf/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/uunf/test_uunf.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/xmldiff/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/xmldiff/test_xmldiff.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/src/crowbar.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/src/crowbar.mli
 create mode 100644 tools/ocaml/duniverse/crowbar/src/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/src/todo
 create mode 100644 tools/ocaml/duniverse/csexp/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/csexp/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/csexp/Makefile
 create mode 100644 tools/ocaml/duniverse/csexp/README.md
 create mode 100644 tools/ocaml/duniverse/csexp/bench/csexp_bench.ml
 create mode 100644 tools/ocaml/duniverse/csexp/bench/dune
 create mode 100644 tools/ocaml/duniverse/csexp/bench/main.ml
 create mode 100755 tools/ocaml/duniverse/csexp/bench/runner.sh
 create mode 100644 tools/ocaml/duniverse/csexp/csexp.opam
 create mode 100644 tools/ocaml/duniverse/csexp/csexp.opam.template
 create mode 100644 tools/ocaml/duniverse/csexp/dune-project
 create mode 100644 tools/ocaml/duniverse/csexp/dune-workspace.dev
 create mode 100644 tools/ocaml/duniverse/csexp/src/csexp.ml
 create mode 100644 tools/ocaml/duniverse/csexp/src/csexp.mli
 create mode 100644 tools/ocaml/duniverse/csexp/src/dune
 create mode 100644 tools/ocaml/duniverse/csexp/test/dune
 create mode 100644 tools/ocaml/duniverse/csexp/test/test.ml
 create mode 100644 tools/ocaml/duniverse/dune
 create mode 100644 tools/ocaml/duniverse/fmt/.gitignore
 create mode 100644 tools/ocaml/duniverse/fmt/.ocp-indent
 create mode 100644 tools/ocaml/duniverse/fmt/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/fmt/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/fmt/README.md
 create mode 100644 tools/ocaml/duniverse/fmt/_tags
 create mode 100644 tools/ocaml/duniverse/fmt/doc/api.odocl
 create mode 100644 tools/ocaml/duniverse/fmt/doc/index.mld
 create mode 100644 tools/ocaml/duniverse/fmt/dune-project
 create mode 100644 tools/ocaml/duniverse/fmt/fmt.opam
 create mode 100644 tools/ocaml/duniverse/fmt/pkg/META
 create mode 100755 tools/ocaml/duniverse/fmt/pkg/pkg.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/dune
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt.mli
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt.mllib
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_cli.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_cli.mli
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_cli.mllib
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_top.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_top.mllib
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_tty.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_tty.mli
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_tty.mllib
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_tty_top_init.ml
 create mode 100644 tools/ocaml/duniverse/fmt/test/test.ml
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/.gitignore
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/README.md
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam.template
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.available.ml
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.mli
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.stub.ml
 create mode 100755 tools/ocaml/duniverse/ocaml-afl-persistent/detect.sh
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/dune
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/dune-project
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/test.ml
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/test/harness.ml
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/test/test.ml
 create mode 100755 tools/ocaml/duniverse/ocaml-afl-persistent/test/test.sh
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/.gitignore
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/.travis.yml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/COPYING.txt
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/Makefile
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/README.md
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/dune-project
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/ocplib-endian.opam
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/be_ocaml_401.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/common.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/common_401.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/common_float.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/dune
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.mli
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.mli
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.mli
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/le_ocaml_401.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/ne_ocaml_401.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/bench.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/dune
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/test.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/test_bigstring.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/test_bytes.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/test_string.cppo.ml
 create mode 100755 tools/ocaml/duniverse/result/CHANGES.md
 create mode 100755 tools/ocaml/duniverse/result/LICENSE.md
 create mode 100755 tools/ocaml/duniverse/result/Makefile
 create mode 100755 tools/ocaml/duniverse/result/README.md
 create mode 100755 tools/ocaml/duniverse/result/dune
 create mode 100755 tools/ocaml/duniverse/result/dune-project
 create mode 100755 tools/ocaml/duniverse/result/result-as-alias-4.08.ml
 create mode 100755 tools/ocaml/duniverse/result/result-as-alias.ml
 create mode 100755 tools/ocaml/duniverse/result/result-as-newtype.ml
 create mode 100755 tools/ocaml/duniverse/result/result.opam
 create mode 100755 tools/ocaml/duniverse/result/which_result.ml
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/LICENSE
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/README.md
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/dune-project
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/dune-workspace.dev
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/src/dune
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/stdlib-shims.opam
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/test/dune
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/test/test.ml
 create mode 100644 tools/ocaml/libs/eventchn/dune
 create mode 100644 tools/ocaml/libs/mmap/dune
 create mode 100644 tools/ocaml/libs/mmap/gnt.ml
 create mode 100644 tools/ocaml/libs/mmap/gnt.mli
 create mode 100644 tools/ocaml/libs/mmap/gnttab_stubs.c
 create mode 100644 tools/ocaml/libs/xb/dune
 create mode 100644 tools/ocaml/libs/xc/dune
 create mode 100644 tools/ocaml/libs/xs/dune
 create mode 100644 tools/ocaml/xen.opam
 create mode 100644 tools/ocaml/xen.opam.locked
 create mode 100644 tools/ocaml/xenstore.opam
 create mode 100644 tools/ocaml/xenstored.opam
 create mode 100644 tools/ocaml/xenstored/dune
 create mode 100644 tools/ocaml/xenstored/test/dune
 create mode 100644 tools/ocaml/xenstored/test/generator.ml
 create mode 100644 tools/ocaml/xenstored/test/gnt.ml
 create mode 100644 tools/ocaml/xenstored/test/model.ml
 create mode 100644 tools/ocaml/xenstored/test/old/arbitrary.ml
 create mode 100644 tools/ocaml/xenstored/test/old/gen_paths.ml
 create mode 100644 tools/ocaml/xenstored/test/old/xenstored_test.ml
 create mode 100644 tools/ocaml/xenstored/test/pathtree.ml
 create mode 100644 tools/ocaml/xenstored/test/testable.ml
 create mode 100644 tools/ocaml/xenstored/test/types.ml
 copy tools/ocaml/{libs/mmap/xenmmap.mli => xenstored/test/xenctrl.ml} (54%)
 create mode 100644 tools/ocaml/xenstored/test/xeneventchn.ml
 create mode 100644 tools/ocaml/xenstored/test/xenstored_test.ml
 create mode 100644 tools/ocaml/xenstored/test/xs_protocol.ml
 create mode 100644 tools/ocaml/xenstored/xenstored_main.ml

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:06:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:06:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125898.236977 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmU-0000hz-Vc; Tue, 11 May 2021 18:06:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125898.236977; Tue, 11 May 2021 18:06:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmU-0000hs-Sm; Tue, 11 May 2021 18:06:54 +0000
Received: by outflank-mailman (input) for mailman id 125898;
 Tue, 11 May 2021 18:06:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWmT-0000hb-Qp
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:06:53 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 030b699d-ba57-4ff2-92aa-1b1c49c25836;
 Tue, 11 May 2021 18:06:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 030b699d-ba57-4ff2-92aa-1b1c49c25836
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756412;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=LJybmG+tBz4IHj5Tv2R6AGzuVV3gHz/bSFDawWJReUw=;
  b=Hnnvl/WIu49Q1HX5O/aKBIEfWdKtoAt7mEYDZvtniN6G3BnePUKKK1PY
   9rvhEQQdDQBfleGYU5VP4407exUB0fSLugsxUN3j+NYE40PQvbPLVUphE
   EDq4BiXLw2XEfNud2dQTY9jJXPg4KvR7ZXNOruBNG0o8u7C7GEgO0G7FO
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 2XBnpXDrfeAUsHExPzOU4DUtdbP3NkWpMxhDt2/N9KZXxZrAf8DE0E5VECDQoJdKOe+eVTcUNF
 Wnmairsgd8GscxkAymYmA3YauKCpr1MTQ2uil240NVafuLPpsq9xQngdpRpUsBhX7Ohp+OMTXA
 PXGdzw8acPYvfu9axqxRLUENFfoSAVc5Pz+jYARskbiOkPiC/jlNwiX6jUB2tomNkcjxCk8hOc
 xZEN8fdlRy8TbvcSYyGEdqIOjuIuflsOuaJR/gDe/FuEkYMYl1gDEGsi23CBbnmVQaX6d6nnvd
 m34=
X-SBRS: 5.1
X-MesageID: 43675314
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:S4LKvq39hDQLwVEJvbUAtwqjBIokLtp133Aq2lEZdPRUGvb3qy
 nIpoVj6faUskd2ZJhOo7C90cW7LU80sKQFhLX5Xo3SOzUO2lHYT72KhLGKq1aLdhEWtNQtsZ
 uIG5IOcOEYZmIasS+V2maF+q4bsbu6zJw=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43675314"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 05/17] tools/ocaml/xenstored: add binary dump format support
Date: Tue, 11 May 2021 19:05:18 +0100
Message-ID: <f454c3c6c5d33049ba72ac7ca75c8bbe684676bc.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Do not dump -1, as it will trigger an assertion; use 0xFF.. instead.
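
A minimal sketch (plain OCaml; the writer name is illustrative, not the actual
LiveRecord API) of why the sentinel must be pre-masked: a 32-bit field writer
that asserts its argument lies in [0, 0xFFFF_FFFF] rejects -1, while the
masked all-ones value encodes the same four 0xFF bytes:

```ocaml
(* Sketch: a little-endian 32-bit writer that, like the one in disk.ml,
   asserts its argument is a valid unsigned 32-bit value. *)
let w32 buf v =
  assert (v >= 0 && v <= 0xFFFF_FFFF);  (* passing -1 would trip this *)
  for i = 0 to 3 do
    Buffer.add_char buf (Char.chr ((v lsr (8 * i)) land 0xFF))
  done

let () =
  let buf = Buffer.create 4 in
  (* Write the invalid-domid sentinel as the masked all-ones value;
     on a 64-bit OCaml int, -1 itself is not equal to 0xFFFF_FFFF. *)
  w32 buf 0xFFFF_FFFF;
  assert (Buffer.length buf = 4);
  assert ((-1) land 0xFFFF_FFFF = 0xFFFF_FFFF)
```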

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/xenstored/connection.ml | 63 +++++++++++++++++++++--------
 tools/ocaml/xenstored/disk.ml       |  3 +-
 2 files changed, 49 insertions(+), 17 deletions(-)

diff --git a/tools/ocaml/xenstored/connection.ml b/tools/ocaml/xenstored/connection.ml
index 65f99ea6f2..7a894a2eb1 100644
--- a/tools/ocaml/xenstored/connection.ml
+++ b/tools/ocaml/xenstored/connection.ml
@@ -17,6 +17,7 @@
 exception End_of_file
 
 open Stdext
+module LR = Disk.LiveRecord
 
 let xenstore_payload_max = 4096 (* xen/include/public/io/xs_wire.h *)
 
@@ -77,6 +78,10 @@ let number_of_transactions con =
 
 let get_domain con = con.dom
 
+let get_id con = match con.dom with
+| None -> 2*LR.domid_invalid + con.anonid
+| Some dom -> 1 + Domain.get_id dom
+
 let anon_id_next = ref 1
 
 let get_domstr con =
@@ -278,6 +283,9 @@ let end_transaction con tid commit =
 let get_transaction con tid =
 	Hashtbl.find con.transactions tid
 
+let iter_transactions con f =
+	Hashtbl.iter f con.transactions
+
 let do_input con = Xenbus.Xb.input con.xb
 let has_input con = Xenbus.Xb.has_in_packet con.xb
 let has_partial_input con = match con.xb.Xenbus.Xb.partial_in with
@@ -336,22 +344,45 @@ let incr_ops con = con.stat_nb_ops <- con.stat_nb_ops + 1
 let stats con =
 	Hashtbl.length con.watches, con.stat_nb_ops
 
-let dump con chan =
-	let id = match con.dom with
-	| Some dom ->
-		let domid = Domain.get_id dom in
-		(* dump domain *)
-		Domain.dump dom chan;
-		domid
-	| None ->
-		let fd = con |> get_fd |> Utils.FD.to_int in
-		Printf.fprintf chan "socket,%d\n" fd;
-		-fd
-	in
-	(* dump watches *)
-	List.iter (fun (path, token) ->
-		Printf.fprintf chan "watch,%d,%s,%s\n" id (Utils.hexify path) (Utils.hexify token)
-		) (list_watches con)
+let serialize_pkt_in buf xb =
+	let open Xenbus.Xb in
+	Queue.iter (fun p -> Buffer.add_string buf (Packet.to_string p)) xb.pkt_in;
+	match xb.partial_in with
+	| NoHdr (to_read, hdrb) ->
+		(* see Xb.input *)
+		let used = Xenbus.Partial.header_size () - to_read in
+		Buffer.add_subbytes buf hdrb 0 used
+	| HaveHdr p ->
+		p |> Packet.of_partialpkt |> Packet.to_string |> Buffer.add_string buf
+
+let serialize_pkt_out buf xb =
+	let open Xenbus.Xb in
+	Buffer.add_string buf xb.partial_out;
+	Queue.iter (fun p -> Buffer.add_string buf (Packet.to_string p)) xb.pkt_out
+
+let dump con store chan =
+	let conid = get_id con in
+	let conn = match con.dom with
+	| None -> LR.Socket (get_fd con)
+	| Some dom -> LR.Domain {
+		id = Domain.get_id dom;
+		target = LR.domid_invalid;  (* TODO: we do not store this info *)
+		remote_port = Domain.get_remote_port dom
+	} in
+	let pkt_in = Buffer.create 4096 in
+	let pkt_out = Buffer.create 4096 in
+	serialize_pkt_in pkt_in con.xb;
+	serialize_pkt_out pkt_out con.xb;
+	LR.write_connection_data chan ~conid ~conn  pkt_in con.xb.partial_out pkt_out;
+
+	con |> list_watches
+	|> List.rev (* preserve order in dump/reload *)
+	|> List.iter (fun (wpath, token) ->
+		LR.write_watch_data chan ~conid ~wpath ~token
+	);
+	let conpath = get_path con in
+	iter_transactions con (fun _ txn ->
+		 Transaction.dump store conpath ~conid txn chan)
 
 let debug con =
 	let domid = get_domstr con in
diff --git a/tools/ocaml/xenstored/disk.ml b/tools/ocaml/xenstored/disk.ml
index 595fdab54a..59794324e1 100644
--- a/tools/ocaml/xenstored/disk.ml
+++ b/tools/ocaml/xenstored/disk.ml
@@ -292,7 +292,8 @@ module LiveRecord = struct
 	let write_global_data t ~rw_sock =
 		write_record t Type.global_data 8 @@ fun b ->
 		O.w32 b (FD.to_int rw_sock);
-		O.w32 b (-1)
+                (* TODO: this needs a unit test/live update test too! *)
+		O.w32 b 0xFFFF_FFFF
 
 	let read_global_data t ~len f =
 		read_expect t "global_data" 8 len;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:07:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:07:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125899.236988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWma-00013q-8a; Tue, 11 May 2021 18:07:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125899.236988; Tue, 11 May 2021 18:07:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWma-00013j-5B; Tue, 11 May 2021 18:07:00 +0000
Received: by outflank-mailman (input) for mailman id 125899;
 Tue, 11 May 2021 18:06:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWmY-0000hb-Pl
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:06:58 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86efdff2-a848-4969-b83b-8343218ced3c;
 Tue, 11 May 2021 18:06:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86efdff2-a848-4969-b83b-8343218ced3c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756411;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=kIHSglxPAwnvzoL5IzOBULzsMXsyyLvM7qOksEaxYCM=;
  b=YMNC1G1Ysy5r2XsAc+Nh97kZq5avg42cvFSB2o+uqSu2ctcMO1WAHhEd
   /v+llB8ozIWL0M2BdOrhxfD5CFA4sfeQYjdObe0lxOAr/xpiGo1xXGXL+
   eyXuHy7BnHCE92yAvUV59MPqCFl1JwFNdUvlZxdwZWdtfMhZg3vAtl9hi
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: cl8K5cFZfpjF/NnbS6sFm8sXJGgjZmf4SlKT6+vvWFjC+wuWBU94DAFj/WtLDoNfEZ/Z+joHpX
 n7BMXmNtI+c3UwGTuwnL4ezcOlnhqqrRLoonqp5cj/SKOwoUv+kOZqmRVEm0cAWODUulqKo+K8
 JlyABlGXRh1B0WrP6UA5J/i2aizOZFdzMzeurbL6bpWaGlGHYu4DVjg8vKRRk+g30sCgPLqfth
 8mzZx4cNy4oppVEFtRUl0WywzJkzANjpVpB5PB3utkNVzXtne7WELk7z9ShoDWD/NjcV7/AMiK
 Qys=
X-SBRS: 5.1
X-MesageID: 43579247
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:9/6l269oOnbNBBwYLGJuk+DgI+orL9Y04lQ7vn2YSXRuHPBw8P
 re5cjztCWE7gr5N0tBpTntAsW9qDbnhPtICOoqTNCftWvdyQiVxehZhOOIqVDd8m/Fh4pgPM
 9bAtBD4bbLbGSS4/yU3ODBKadD/OW6
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43579247"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 04/17] tools/ocaml/xenstored: implement the live migration binary format
Date: Tue, 11 May 2021 19:05:17 +0100
Message-ID: <1203d68f34f55b675e64df228c7d45405e1304a8.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is the live update dump format implemented by C xenstored.
oxenstored already has its own (text-based) dump format, but for
compatibility implement the one used by C xenstored as well.

This will also be useful in the future for non-cooperative guest live migration.

The format is documented in docs/designs/xenstore-migration.md.

For now this always dumps integers in big-endian order, because even old
versions of OCaml support that. The binary format allows both
little- and big-endian order, so this should be compatible.

Dumping in little-endian or native-endian order would require OCaml 4.08+.
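A minimal sketch of the endianness trade-off described above (this is not
the patch's code; the real helpers live in BinaryOut in disk.ml):
`output_binary_int` always emits 4 bytes most-significant-first, so it works
as a big-endian w32 on any OCaml version, while a little-endian w32 needs
`Bytes.set_int32_le`, which only exists from OCaml 4.08 onwards.

```ocaml
(* Sketch: why big-endian output works on old OCaml versions. *)
let w32_be ch v = output_binary_int ch v   (* 4 bytes, big-endian, always *)

let w32_le ch v =                          (* requires OCaml >= 4.08 *)
  let b = Bytes.create 4 in
  Bytes.set_int32_le b 0 (Int32.of_int v);
  output_bytes ch b

let () =
  let file = Filename.temp_file "endian" ".bin" in
  let ch = open_out_bin file in
  w32_be ch 0x78656e73;                    (* the "xens" header magic *)
  close_out ch;
  let ic = open_in_bin file in
  assert (really_input_string ic 4 = "xens");
  close_in ic;
  Sys.remove file
```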

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/xenstored/disk.ml | 318 ++++++++++++++++++++++++++++++++++
 1 file changed, 318 insertions(+)

diff --git a/tools/ocaml/xenstored/disk.ml b/tools/ocaml/xenstored/disk.ml
index 4739967b61..595fdab54a 100644
--- a/tools/ocaml/xenstored/disk.ml
+++ b/tools/ocaml/xenstored/disk.ml
@@ -155,3 +155,321 @@ let write store =
 		Unix.rename tfile xs_daemon_database
 	with exc ->
 		error "caught exn %s" (Printexc.to_string exc)
+
+	module BinaryOut = struct
+		let version = 0x1
+		let endian = 1
+		let padding = String.make 7 '\x00'
+
+		let write_header ch =
+			(* for testing endian order *)
+			output_binary_int ch 0x78656e73;
+			output_binary_int ch 0x746f7265;
+			output_binary_int ch version;
+			output_binary_int ch endian;
+			ch
+
+		let w8 = output_char
+		let w16 ch i =
+			assert (i >= 0 && i lsr 16 = 0);
+			output_byte ch (i lsr 8);
+			output_byte ch i
+
+		let w32 ch v =
+			assert (v >= 0 && v <= 0xFFFF_FFFF);
+			output_binary_int ch v
+
+		let pos = pos_out
+		let wpad ch =
+			let padto = 8 in
+			let padby = (padto - pos ch mod padto) mod padto in
+			if padby > 0 then
+				output_substring ch padding 0 padby
+
+		let wstring = output_string
+	end
+
+	module BinaryIn = struct
+		type t = in_channel
+
+		let read_header t =
+			let h = Bytes.make 8 '\x00' in
+			really_input t h 0 (Bytes.length h);
+			let ver = input_binary_int t in
+			let endian = input_binary_int t in
+			if Bytes.to_string h <> "xenstore" then
+				failwith "Header doesn't begin with 'xenstore'";
+			if ver <> BinaryOut.version then
+				failwith "Incompatible version";
+			if endian <> BinaryOut.endian then
+				failwith "Incompatible endianness"
+
+		let r8 = input_char
+
+		let r16 t =
+			let r0 = input_byte t in
+			let r1 = input_byte t  in
+			(r0 lsl 8) lor r1
+
+		let r32 t =
+			(* read unsigned 32-bit int *)
+			let r = input_binary_int t land 0xFFFF_FFFF in
+			assert (r >= 0);
+			r
+
+		let rstring = really_input_string
+
+		let rpad t =
+			let padto = 8 in
+			let padby = (padto - pos_in t mod padto) mod padto in
+			if padby > 0 then
+				ignore (really_input_string t padby)
+	end
+
+module FD : sig
+     type t = Unix.file_descr
+     val of_int: int -> t
+     val to_int : t -> int
+end = struct
+    type t = Unix.file_descr
+    (* This is like Obj.magic but just for these types,
+       and relies on Unix.file_descr = int *)
+    external to_int : t -> int = "%identity"
+    external of_int : int -> t = "%identity"
+end
+
+module LiveRecord = struct
+	(* See docs/designs/xenstore-migration.md for binary format *)
+	module Type : sig
+		type t = private int
+		val end_ : t
+		val global_data : t
+		val connection_data : t
+		val watch_data : t
+		val transaction_data : t
+		val node_data: t
+	end = struct
+		type t = int
+		let end_ = 0x0
+		let global_data = 0x01
+		let connection_data = 0x02
+		let watch_data = 0x03
+		let transaction_data = 0x04
+		let node_data = 0x05
+	end
+
+	module I = BinaryIn
+	module O = BinaryOut
+
+	let write_expect msg expected actual =
+		if expected <> actual then
+			let m = Printf.sprintf "expected %d <> %d: %s" expected actual msg in
+			invalid_arg m
+
+	let write_record t (typ: Type.t) len f =
+		assert (O.pos t mod 8 = 0);
+		O.w32 t (typ :> int);
+		O.w32 t len;
+		let p0 = O.pos t in
+		f t;
+		let p1 = O.pos t in
+		write_expect "position and length" len (p1-p0);
+		O.wpad t
+
+	let write_end t =
+		write_record t Type.end_ 0 ignore
+
+	let read_expect t msg expected actual =
+		if expected <> actual then
+			let pos = pos_in t in
+			let m = Printf.sprintf "expected %d <> %d at ~%d: %s" expected actual pos msg in
+			invalid_arg m
+
+	let read_end t ~len f =
+		read_expect t "end" 0 len;
+		f ()
+
+	let write_global_data t ~rw_sock =
+		write_record t Type.global_data 8 @@ fun b ->
+		O.w32 b (FD.to_int rw_sock);
+		O.w32 b (-1)
+
+	let read_global_data t ~len f =
+		read_expect t "global_data" 8 len;
+		let rw_sock = FD.of_int (I.r32 t) in
+		let _ = FD.of_int (I.r32 t) in
+		f ~rw_sock
+
+	let conn_shared_ring = 0x0
+	let conn_socket = 0x1
+	let domid_invalid = 0x7FF4
+
+	(* oxenstored doesn't support readonly sockets yet *)
+	let flags_connection_readonly = 0x1l
+
+	type dom = { id: int; target: int; remote_port: int }
+	type conn = Socket of Unix.file_descr | Domain of dom
+
+	let write_connection_data t ~conid ~conn xb_pktin xb_partialout xb_pktout =
+		let in_data_len = Buffer.length xb_pktin in
+		let out_resp_len = String.length xb_partialout in
+		let out_data_len = Buffer.length xb_pktout in
+		let data_len = in_data_len + out_data_len in
+
+		write_record t Type.connection_data (32 + data_len) @@ fun b ->
+		assert (conid > 0);
+		O.w32 b conid;
+		O.w32 b (match conn with
+		| Socket _ -> conn_socket
+		| Domain _ -> conn_shared_ring
+		);
+		let flags = 0x0 in
+		O.w32 b flags;
+
+		(match conn with
+		| Socket fd ->
+			O.w32 b (FD.to_int fd);
+			O.w32 b 0 (* pad *)
+		| Domain dom ->
+			O.w16 b dom.id;
+			O.w16 b dom.target;
+			O.w32 b dom.remote_port
+			);
+
+		O.w32 b in_data_len;
+		O.w32 b out_resp_len;
+		O.w32 b out_data_len;
+		Buffer.output_buffer b xb_pktin;
+		O.wstring b xb_partialout;
+		Buffer.output_buffer b xb_pktout
+
+	let read_connection_data t ~len f =
+		let conid = I.r32 t in
+		assert (conid > 0);
+		let kind = I.r32 t in
+		let flags = I.r32 t in
+		read_expect t "flags" 0 flags;
+		let conn = (match kind with
+		| x when x = conn_socket ->
+			let fd = FD.of_int (I.r32 t) in
+			I.r32 t |> ignore;
+			Socket fd
+		| x when x = conn_shared_ring ->
+			let id = I.r16 t in
+			let target = I.r16 t in
+			let remote_port = I.r32 t in
+			Domain {id; target; remote_port }
+		| x ->
+			invalid_arg (Printf.sprintf "Unknown connection kind %x" x)
+		) in
+		let in_data_len = I.r32 t in
+		let out_resp_len = I.r32 t in
+		let out_data_len = I.r32 t in
+		let in_data = really_input_string t in_data_len in
+		let out_data = really_input_string t out_data_len in
+		f ~conid ~conn ~in_data ~out_data ~out_resp_len
+
+
+	let write_watch_data t ~conid ~wpath ~token =
+		let wpath_len = String.length wpath in
+		let token_len = String.length token in
+
+		write_record t Type.watch_data (12+wpath_len+token_len) @@ fun b ->
+		O.w32 b conid;
+		O.w32 b (String.length wpath);
+		O.w32 b (String.length token);
+		O.wstring b wpath;
+		O.wstring b token
+
+	let read_watch_data t ~len f =
+		let conid = I.r32 t in
+		let wpathlen = I.r32 t in
+		let tokenlen = I.r32 t in
+		let wpath = I.rstring t wpathlen in
+		let token = I.rstring t tokenlen in
+		f ~conid ~wpath ~token
+
+	let write_transaction_data t ~conid ~txid =
+		write_record t Type.transaction_data 8 @@ fun b ->
+		O.w32 b conid;
+		O.w32 b txid
+
+	let read_transaction_data t ~len f =
+		read_expect t "transaction" 8 len;
+		let conid = I.r32 t in
+		let txid = I.r32 t in
+		f ~conid ~txid
+
+	type access = R | W | RW | Del
+
+	let write_node_data t ~txidaccess ~path ~value ~perms =
+		let path_len = String.length path in
+		let value_len = String.length value in
+		let perms = Perms.Node.acls perms in
+		let len = 24 + (List.length perms)*4 + path_len + value_len in
+
+		write_record t Type.node_data len @@ fun b ->
+		O.w32 b (match txidaccess with None -> 0 | Some (conid, _, _) -> conid);
+		O.w32 b (match txidaccess with None -> 0 | Some (_, txid, _) -> txid);
+		O.w32 b path_len;
+		O.w32 b value_len;
+		O.w32 b (match txidaccess with
+		| None -> 0x0
+		| Some (_, _, Del) -> 0x0
+		| Some (_, _, R) -> 0x1
+		| Some (_, _, W) -> 0x2
+		| Some (_, _, RW) -> 0x3
+		);
+		O.w32 b (List.length perms);
+		List.iter (fun (domid, permty) ->
+			O.w8 b (Perms.char_of_permty permty);
+			O.w8 b '\x00';
+			O.w16 b domid;
+		) perms;
+		O.wstring b path;
+		O.wstring b value
+
+	let read_node_data t ~len f =
+		let conid = I.r32 t in
+		let txid = I.r32 t in
+		let path_len = I.r32 t in
+		let value_len = I.r32 t in
+		let txaccess = match conid, I.r32 t with
+		| 0, _ -> None
+		| _, 0 -> Some (conid, txid, Del)
+		| _, 1 -> Some (conid, txid, R)
+		| _, 2 -> Some (conid, txid, W)
+		| _, 3 -> Some (conid, txid, RW)
+		| _ -> invalid_arg "invalid access flag"
+		in
+		let a = Array.init (I.r32 t) (fun _ ->
+					let perm = Perms.permty_of_char (I.r8 t) in
+					I.r8 t |> ignore;
+					let domid = I.r16 t in
+					domid, perm
+		) in
+		let perms = match Array.to_list a with
+		| [] -> invalid_arg "Permission list cannot be empty"
+		| (owner, other) :: acls ->
+			Perms.Node.create owner other acls
+		in
+		let path = I.rstring t path_len in
+		let value = I.rstring t value_len in
+		f ~txaccess ~perms ~path ~value
+
+	let read_record t ~on_end ~on_global_data ~on_connection_data ~on_watch_data ~on_transaction_data ~on_node_data =
+		I.rpad t; (* if we fail to process a record (e.g. callback raises, ensure we resume at right place *)
+		let typ = I.r32 t in
+		let len = I.r32 t in
+		let p0 = pos_in t in
+		(match typ with
+		| x when x = (Type.end_ :> int) -> read_end t ~len on_end
+		| x when x = (Type.global_data :> int) -> read_global_data t ~len on_global_data
+		| x when x = (Type.connection_data :> int) -> read_connection_data t ~len on_connection_data
+		| x when x = (Type.watch_data :> int) -> read_watch_data t ~len on_watch_data
+		| x when x = (Type.transaction_data :> int) -> read_transaction_data t ~len on_transaction_data
+		| x when x = (Type.node_data :> int) -> read_node_data t ~len on_node_data
+		| x -> failwith (Printf.sprintf "Unknown record type: %x" x));
+		let p1 = pos_in t in
+		read_expect t "record length" len (p1-p0)
+end
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:07:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:07:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125900.237001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmf-0001Ps-H5; Tue, 11 May 2021 18:07:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125900.237001; Tue, 11 May 2021 18:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmf-0001Pl-Dm; Tue, 11 May 2021 18:07:05 +0000
Received: by outflank-mailman (input) for mailman id 125900;
 Tue, 11 May 2021 18:07:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWmd-0000hb-Pq
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:07:03 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 14ac971e-46fa-49c1-92ac-8e32a0151c57;
 Tue, 11 May 2021 18:06:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 14ac971e-46fa-49c1-92ac-8e32a0151c57
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756414;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=s2aBGweGMww0BGayyrFXSOFGfJlyB1IK+/pB0SJODd8=;
  b=KPwqoeBw7FcAG6ALJDV6OGEnwixHJZgTswzPxdpIdycbXu02xiMee1Ad
   WtuAQgqLu0AXTqdCreRLrV4n4JvaVPGzpSxuaRkhv62VcK3vW82G+ONCU
   py55mgrGoFVmH1szTMeWs+t1H8fdy6/b8SpkgGcApTSnNHgcuJHgSGtDt
   Q=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CC1zsBr/y4R9C1vbycjoeM34x5WOX++nyZQxUXs1mWepQr6wdkd9KquzfTu26GaKMb9rIV28Bf
 VkqG/NiLqAJN4s5eRzIwoLoat2Sg28bNpgLsomHzX5L1LW3VnVxzeo+x52Z60B8CoOKfP8aJj9
 9hbUSg/SUl7SFIi8EErjV5KuS3+leimLLtWmCd2dZkffUyHFDvjlohuiFu6b7tgDFpvZuyqKgL
 2Ko6WRTnV6OFSiT5M0C8ZoELGnm7T/eL9SSGF1gNtLxrOwrvvEgdSWd/mM804sYTPXD3jBbmm/
 Osw=
X-SBRS: 5.1
X-MesageID: 43579255
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:AyA/h6DE+dEyxXvlHelW55DYdb4zR+YMi2TDt3oddfWaSKylfq
 GV7ZAmPHrP4gr5N0tOpTntAse9qBDnhPtICOsqTNSftWDd0QPFEGgL1+DfKlbbak/DH4BmtJ
 uJc8JFeaDN5VoRt7eH3OFveexQv+Vu88qT9JnjJ28Gd3AMV0n5hT0JcTpyFCdNNW97LKt8Lr
 WwzOxdqQGtfHwGB/7LfEXsD4D41qT2fIuNW29/OyIa
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43579255"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Jan Beulich
	<jbeulich@suse.com>, "Julien Grall" <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 01/17] docs/designs/xenstore-migration.md: clarify that deletes are recursive
Date: Tue, 11 May 2021 19:05:14 +0100
Message-ID: <3d46415392bd8f90266b624a2ea9c220b3164d18.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 docs/designs/xenstore-migration.md | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/docs/designs/xenstore-migration.md b/docs/designs/xenstore-migration.md
index 5f1155273e..87ef540918 100644
--- a/docs/designs/xenstore-migration.md
+++ b/docs/designs/xenstore-migration.md
@@ -364,7 +364,8 @@ record previously present).
 |              | 0x0001: read                                   |
 |              | 0x0002: written                                |
 |              |                                                |
-|              | The value will be zero for a deleted node      |
+|              | The value will be zero for a recursively       |
+|              | deleted node                                   |
 |              |                                                |
 | `perm-count` | The number (N) of node permission specifiers   |
 |              | (which will be 0 for a node deleted in a       |
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:07:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:07:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125901.237013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmk-0001nz-Qg; Tue, 11 May 2021 18:07:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125901.237013; Tue, 11 May 2021 18:07:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmk-0001no-Li; Tue, 11 May 2021 18:07:10 +0000
Received: by outflank-mailman (input) for mailman id 125901;
 Tue, 11 May 2021 18:07:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWmi-0000hb-Pw
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:07:08 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 09254c1f-cd3b-4beb-961a-f31b7ee5b6c1;
 Tue, 11 May 2021 18:06:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 09254c1f-cd3b-4beb-961a-f31b7ee5b6c1
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756418;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=aZQnD/ue/pcqfGJ1B3U3Ds8P1LU7rkGdfiBQesXOyvs=;
  b=Bsh+joSp9ubehxyeFsp0OBKM3X1ildEv2941iLvMNt+MTErlGwDwYW/p
   g/XiY0bojUGXd0Al2vPQ4s9/ZzJ7VbFWfTgOZ/7p0VbrZ1utHoplHJMlc
   BQruwQC6LAGVwoRa1eWheZELF3ZQv+VS6dToa09FyvoksrpFV9Ho5kdJN
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: zddHIAFWfMIgeURa1CfnJgKd/RtNaAjlYYDng/ZSmL96M1P9iL7XVVnUwfVNXKI67vgaI+LOVI
 +kEyw35KXTccct8VYNlMCmVHBiAeY27lj7enqNJnrnpljdPiDwu6Nj6m46bSJw8bdfLnpgAPAo
 7Ddgcf+wRafZruBAr3ZLFVtFQrYXhskO2O/eYJfDoUEMBuun9opdi5wEBn5YveW6LwDeELYP3e
 RpNQKMu7T/0ynbzt2qTbhDt6VBqWyTCXJcg/ShFStmQxeZyXgM6CmXg0QYXqHS+EL0tQeKFpov
 OFY=
X-SBRS: 5.1
X-MesageID: 43954237
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:7s5/Ra98dAr91pxwh6puk+Hxdb1zdoMgy1knxilNoENuH/Bwxv
 rFoB1E73TJYW4qKQodcdDpAtjifZquz+8O3WBxB8bpYOCCggeVxe5ZnOzfKlHbehEWs9QtrZ
 uIEJIOReEYb2IK6/oSiTPQe7lP/DDEytHQuQ609QYOcegeUdAF0+4PMHf/LqQZfml7LKt8MK
 DZyttMpjKmd3hSRN+8HGM5U+/KoMCOvI76YDYdbiRXpzWmvHeN0vrXAhKY1hARX3dk2rE561
 XIlAT/++GKr+y78BnBzGXehq4m1ucJi+EzRfBkuPJlaQkEuTzYJriJnIfy+QzdldvfqGrCVu
 O85yvIcf4DrE85NVvF3CcFkzOQrArGrUWShWNwyEGT3vDRVXY0DdFMipledQac4008vMtk2K
 YOxG6BsYFLZCmw1RgVyuK4IC2CrHDE10bKUNRj/UC3WrFuIIO5bbZviH+9Na1wVx4SxLpXYN
 WGPfuskcq+K2nqHkwxllMfs+BEcE5DYCu7fg==
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43954237"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 02/17] tools/ocaml: add unit test skeleton with Dune build system
Date: Tue, 11 May 2021 19:05:15 +0100
Message-ID: <f51e272a5255fe130ca511f081302f51203ccebf.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Based on initial work by Christian Lindig

Doing oxenstored development, especially fuzzing and unit testing,
requires a fast, incremental build system.

Dune is the preferred upstream build system for OCaml, and has been in
use by the XAPI project for years.
It is incremental and also generates editor integration files (.merlin).

Usage:
./xs-reconfigure.sh
cd tools/ocaml
make clean
make check

There are some other convenience targets as well:
make dune-clean
make dune-syntax-check
make dune-build-oxenstored

Some files are generated by Make; these are created by a 'dune-pre'
target, as they are too closely tied to Make and cannot yet be
generated by Dune itself.

The various Makefile targets are used as entrypoints into Dune
that set the needed env vars (for C include files and libraries)
and ensure that the generated files are available.

The unit tests do not require Xen to be available, so add mock
eventchn and xenctrl libraries for the unit test to use,
and copy the non-system specific modules from xenstored/ to
xenstored/test/.

Xenstored had to be split into Xenstored and Xenstored_main
so that we can use the functions defined in Xenstored without
actually starting up the daemon in a unit test.
Similarly, argument parsing had to be delayed until after daemon startup.

setrlimit also had to be disabled in poll.ml when running as non-root.
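The mocking approach can be sketched as follows (module and function names
here are illustrative, not the actual test/xenctrl.ml API): the test
directory provides pure-OCaml stand-ins with the same signatures as the C
bindings, so the daemon's modules link against them unchanged.

```ocaml
(* Illustrative mock of a binding module: same shape as a real binding,
   but with no hypervisor calls, so unit tests run on any machine. *)
module Mock_xenctrl : sig
  type handle
  val interface_open : unit -> handle
  val domain_exists : handle -> int -> bool
end = struct
  type handle = unit
  let interface_open () = ()
  (* Pretend only dom0 exists. *)
  let domain_exists () domid = domid = 0
end

let () =
  let h = Mock_xenctrl.interface_open () in
  assert (Mock_xenctrl.domain_exists h 0);
  assert (not (Mock_xenctrl.domain_exists h 7))
```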

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/.gitignore                       |  2 +
 tools/ocaml/Makefile                         | 33 +++++++++++++
 tools/ocaml/dune-project                     |  5 ++
 tools/ocaml/libs/eventchn/dune               |  8 ++++
 tools/ocaml/libs/mmap/dune                   |  8 ++++
 tools/ocaml/libs/xb/dune                     |  7 +++
 tools/ocaml/libs/xc/dune                     |  9 ++++
 tools/ocaml/libs/xs/dune                     |  4 ++
 tools/ocaml/xen.opam                         |  1 +
 tools/ocaml/xenstore.opam                    |  1 +
 tools/ocaml/xenstored.opam                   | 21 ++++++++
 tools/ocaml/xenstored/Makefile               |  3 +-
 tools/ocaml/xenstored/dune                   | 19 ++++++++
 tools/ocaml/xenstored/parse_arg.ml           |  2 +-
 tools/ocaml/xenstored/poll.ml                |  3 +-
 tools/ocaml/xenstored/test/dune              | 11 +++++
 tools/ocaml/xenstored/test/xenctrl.ml        | 48 +++++++++++++++++++
 tools/ocaml/xenstored/test/xeneventchn.ml    | 50 ++++++++++++++++++++
 tools/ocaml/xenstored/test/xenstored_test.ml |  2 +
 tools/ocaml/xenstored/xenstored.ml           |  4 +-
 tools/ocaml/xenstored/xenstored_main.ml      |  1 +
 21 files changed, 237 insertions(+), 5 deletions(-)
 create mode 100644 tools/ocaml/.gitignore
 create mode 100644 tools/ocaml/dune-project
 create mode 100644 tools/ocaml/libs/eventchn/dune
 create mode 100644 tools/ocaml/libs/mmap/dune
 create mode 100644 tools/ocaml/libs/xb/dune
 create mode 100644 tools/ocaml/libs/xc/dune
 create mode 100644 tools/ocaml/libs/xs/dune
 create mode 100644 tools/ocaml/xen.opam
 create mode 100644 tools/ocaml/xenstore.opam
 create mode 100644 tools/ocaml/xenstored.opam
 create mode 100644 tools/ocaml/xenstored/dune
 create mode 100644 tools/ocaml/xenstored/test/dune
 create mode 100644 tools/ocaml/xenstored/test/xenctrl.ml
 create mode 100644 tools/ocaml/xenstored/test/xeneventchn.ml
 create mode 100644 tools/ocaml/xenstored/test/xenstored_test.ml
 create mode 100644 tools/ocaml/xenstored/xenstored_main.ml

diff --git a/tools/ocaml/.gitignore b/tools/ocaml/.gitignore
new file mode 100644
index 0000000000..655e32b07c
--- /dev/null
+++ b/tools/ocaml/.gitignore
@@ -0,0 +1,2 @@
+_build
+.merlin
diff --git a/tools/ocaml/Makefile b/tools/ocaml/Makefile
index a7c04b6546..53dd0a0f0d 100644
--- a/tools/ocaml/Makefile
+++ b/tools/ocaml/Makefile
@@ -34,3 +34,36 @@ build-tools-oxenstored:
 	$(MAKE) -s -C libs/xb
 	$(MAKE) -s -C libs/xc
 	$(MAKE) -C xenstored
+
+LIBRARY_PATH=$(XEN_libxenctrl):$(XEN_libxenguest):$(XEN_libxentoollog):$(XEN_libxencall):$(XEN_libxenevtchn):$(XEN_libxenforeignmemory):$(XEN_libxengnttab):$(XEN_libxendevicemodel):$(XEN_libxentoolcore)
+C_INCLUDE_PATH=$(XEN_libxenctrl)/include:$(XEN_libxengnttab)/include:$(XEN_libxenevtchn)/include:$(XEN_libxentoollog)/include:$(XEN_INCLUDE)
+
+# Files generated by the Makefile
+# These cannot be generated from dune, because dune cannot refer to files
+# in the parent directory (so it couldn't copy/use Config.mk)
+.PHONY: dune-pre
+dune-pre:
+	$(MAKE) -s -C ../../ build-tools-public-headers
+	$(MAKE) -s -C libs/xs paths.ml
+	$(MAKE) -s -C libs/xc xenctrl_abi_check.h
+	$(MAKE) -s -C xenstored paths.ml _paths.h
+
+.PHONY: check
+check: dune-pre
+	# --force isn't necessary here if the test is deterministic
+	OCAMLRUNPARAM=b C_INCLUDE_PATH=$(C_INCLUDE_PATH) dune runtest --profile=release --no-buffer --force
+
+# Convenience targets for development
+
+.PHONY: dune-clean
+dune-clean:
+	$(MAKE) clean
+	dune clean
+
+.PHONY: dune-syntax-check
+dune-syntax-check: dune-pre
+	LIBRARY_PATH=$(LIBRARY_PATH) C_INCLUDE_PATH=$(C_INCLUDE_PATH) dune build --profile=release @check
+
+.PHONY: dune-build-oxenstored
+dune-build-oxenstored: dune-pre
+	LD_LIBRARY_PATH=$(LIBRARY_PATH) LIBRARY_PATH=$(LIBRARY_PATH) C_INCLUDE_PATH=$(C_INCLUDE_PATH) dune build --profile=release @all
diff --git a/tools/ocaml/dune-project b/tools/ocaml/dune-project
new file mode 100644
index 0000000000..b41cfae68b
--- /dev/null
+++ b/tools/ocaml/dune-project
@@ -0,0 +1,5 @@
+(lang dune 2.0)
+
+(name xen)
+
+(formatting disabled)
diff --git a/tools/ocaml/libs/eventchn/dune b/tools/ocaml/libs/eventchn/dune
new file mode 100644
index 0000000000..e08bc76fdf
--- /dev/null
+++ b/tools/ocaml/libs/eventchn/dune
@@ -0,0 +1,8 @@
+(library
+ (foreign_stubs
+  (language c)
+  (names xeneventchn_stubs))
+ (name xeneventchn)
+ (public_name xen.eventchn)
+ (libraries unix)
+ (c_library_flags -lxenevtchn))
diff --git a/tools/ocaml/libs/mmap/dune b/tools/ocaml/libs/mmap/dune
new file mode 100644
index 0000000000..a47de44e47
--- /dev/null
+++ b/tools/ocaml/libs/mmap/dune
@@ -0,0 +1,8 @@
+(library
+ (foreign_stubs
+  (language c)
+  (names xenmmap_stubs))
+ (name xenmmap)
+ (public_name xen.mmap)
+ (libraries unix)
+ (install_c_headers mmap_stubs))
diff --git a/tools/ocaml/libs/xb/dune b/tools/ocaml/libs/xb/dune
new file mode 100644
index 0000000000..feb30adc01
--- /dev/null
+++ b/tools/ocaml/libs/xb/dune
@@ -0,0 +1,7 @@
+(library
+ (foreign_stubs
+  (language c)
+  (names xenbus_stubs xs_ring_stubs))
+ (name xenbus)
+ (public_name xen.bus)
+ (libraries unix xenmmap))
diff --git a/tools/ocaml/libs/xc/dune b/tools/ocaml/libs/xc/dune
new file mode 100644
index 0000000000..fb75ee8ff7
--- /dev/null
+++ b/tools/ocaml/libs/xc/dune
@@ -0,0 +1,9 @@
+(library
+ (foreign_stubs
+  (language c)
+  (names xenctrl_stubs))
+ (name xenctrl)
+ (public_name xen.ctrl)
+ (libraries unix xenmmap)
+ (c_library_flags -lxenctrl -lxenguest -lxencall -lxenforeignmemory
+   -lxengnttab))
diff --git a/tools/ocaml/libs/xs/dune b/tools/ocaml/libs/xs/dune
new file mode 100644
index 0000000000..c79ea75775
--- /dev/null
+++ b/tools/ocaml/libs/xs/dune
@@ -0,0 +1,4 @@
+(library
+ (name xenstore)
+ (public_name xen.store)
+ (libraries unix xenbus))
diff --git a/tools/ocaml/xen.opam b/tools/ocaml/xen.opam
new file mode 100644
index 0000000000..013b84db61
--- /dev/null
+++ b/tools/ocaml/xen.opam
@@ -0,0 +1 @@
+opam-version: "2.0"
diff --git a/tools/ocaml/xenstore.opam b/tools/ocaml/xenstore.opam
new file mode 100644
index 0000000000..013b84db61
--- /dev/null
+++ b/tools/ocaml/xenstore.opam
@@ -0,0 +1 @@
+opam-version: "2.0"
diff --git a/tools/ocaml/xenstored.opam b/tools/ocaml/xenstored.opam
new file mode 100644
index 0000000000..a226328e43
--- /dev/null
+++ b/tools/ocaml/xenstored.opam
@@ -0,0 +1,21 @@
+opam-version: "2.0"
+synopsis: "In-memory key-value store for the Xen hypervisor"
+maintainer: "lindig@gmail.com"
+authors: "lindig@gmail.com"
+license: "LGPL"
+homepage: "https://github.com/lindig/xen-ocaml-tools"
+bug-reports: "https://github.com/lindig/xen-ocaml-tools/issues"
+depends: [
+  "ocaml"
+  "dune" {build}
+  "base-unix"
+
+  "crowbar" {with-test}
+  "fmt" {with-test}
+
+  "crowbar"
+  "fmt"
+]
+build: ["dune" "build" "-p" name "-j" jobs]
+depexts: ["m4" "libxen-dev" "libsystemd-dev"] {os-distribution = "debian"}
+dev-repo: "git+https://github.com/lindig/xen-ocaml-tools.git"
diff --git a/tools/ocaml/xenstored/Makefile b/tools/ocaml/xenstored/Makefile
index 89ec3ec76a..9d2da206d8 100644
--- a/tools/ocaml/xenstored/Makefile
+++ b/tools/ocaml/xenstored/Makefile
@@ -56,7 +56,8 @@ OBJS = paths \
 	history \
 	parse_arg \
 	process \
-	xenstored
+	xenstored \
+	xenstored_main
 
 INTF = symbol.cmi trie.cmi syslog.cmi systemd.cmi poll.cmi
 
diff --git a/tools/ocaml/xenstored/dune b/tools/ocaml/xenstored/dune
new file mode 100644
index 0000000000..714a2ae07e
--- /dev/null
+++ b/tools/ocaml/xenstored/dune
@@ -0,0 +1,19 @@
+(executable
+ (modes byte exe)
+ (name xenstored_main)
+ (modules (:standard \ syslog systemd))
+ (public_name oxenstored)
+ (package xenstored)
+ (flags (:standard -w -52))
+ (libraries unix xen.bus xen.mmap xen.ctrl xen.eventchn xenstubs))
+
+(library
+ (foreign_stubs
+  (language c)
+  (names syslog_stubs systemd_stubs select_stubs)
+  (flags (-DHAVE_SYSTEMD)))
+ (modules syslog systemd)
+ (name xenstubs)
+ (wrapped false)
+ (libraries unix)
+ (c_library_flags -lsystemd))
diff --git a/tools/ocaml/xenstored/parse_arg.ml b/tools/ocaml/xenstored/parse_arg.ml
index 7c0478e76a..965cb9ebeb 100644
--- a/tools/ocaml/xenstored/parse_arg.ml
+++ b/tools/ocaml/xenstored/parse_arg.ml
@@ -28,7 +28,7 @@ type config =
 	disable_socket: bool;
 }
 
-let do_argv =
+let do_argv () =
 	let pidfile = ref "" and tracefile = ref "" (* old xenstored compatibility *)
 	and domain_init = ref true
 	and activate_access_log = ref true
diff --git a/tools/ocaml/xenstored/poll.ml b/tools/ocaml/xenstored/poll.ml
index 26f8620dfc..92e0717ed2 100644
--- a/tools/ocaml/xenstored/poll.ml
+++ b/tools/ocaml/xenstored/poll.ml
@@ -64,4 +64,5 @@ let poll_select in_fds out_fds exc_fds timeout =
 			a r
 
 let () =
-        set_fd_limit (get_sys_fs_nr_open ())
+        if Unix.geteuid () = 0 then
+          set_fd_limit (get_sys_fs_nr_open ())
diff --git a/tools/ocaml/xenstored/test/dune b/tools/ocaml/xenstored/test/dune
new file mode 100644
index 0000000000..2a3eb2b7df
--- /dev/null
+++ b/tools/ocaml/xenstored/test/dune
@@ -0,0 +1,11 @@
+(copy_files# ../*.ml{,i})
+
+(test
+ (modes native)
+ (ocamlopt_flags -afl-instrument)
+ (name xenstored_test)
+ (modules (:standard \ syslog systemd))
+ (package xenstored)
+ (flags (:standard -w -52))
+ ;;(action (run %{test} -v --seed 364172147))
+ (libraries unix xen.bus xen.mmap xenstubs crowbar xen.store fmt fmt.tty))
diff --git a/tools/ocaml/xenstored/test/xenctrl.ml b/tools/ocaml/xenstored/test/xenctrl.ml
new file mode 100644
index 0000000000..37d6da0a47
--- /dev/null
+++ b/tools/ocaml/xenstored/test/xenctrl.ml
@@ -0,0 +1,48 @@
+(*
+ * Copyright (C) 2006-2007 XenSource Ltd.
+ * Copyright (C) 2008      Citrix Ltd.
+ * Author Vincent Hanquez <vincent.hanquez@eu.citrix.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ *)
+
+(** *)
+type domid = int
+
+(* ** xenctrl.h ** *)
+
+
+type domaininfo =
+{
+	domid             : domid;
+	dying             : bool;
+	shutdown          : bool;
+	shutdown_code     : int;
+}
+
+exception Error of string
+
+type handle = unit
+
+let interface_open () = ()
+let interface_close () = ()
+
+let domain_getinfo () domid = {
+  domid = domid;
+  dying = false;
+  shutdown = false;
+  shutdown_code = 0;
+}
+
+let devzero = Unix.openfile "/dev/zero" [] 0
+let nullmap () = Xenmmap.mmap devzero Xenmmap.RDWR Xenmmap.PRIVATE 4096 0
+
+let map_foreign_range _ _ _ _ = nullmap ()
diff --git a/tools/ocaml/xenstored/test/xeneventchn.ml b/tools/ocaml/xenstored/test/xeneventchn.ml
new file mode 100644
index 0000000000..6612722dc2
--- /dev/null
+++ b/tools/ocaml/xenstored/test/xeneventchn.ml
@@ -0,0 +1,50 @@
+(*
+ * Copyright (C) 2006-2007 XenSource Ltd.
+ * Copyright (C) 2008      Citrix Ltd.
+ * Author Vincent Hanquez <vincent.hanquez@eu.citrix.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ *)
+
+type handle = Unix.file_descr * int ref
+
+let devnull = Unix.openfile "/dev/null" [] 0
+let init () = devnull, ref 0
+let fd (h, _) = h
+
+type t = int
+
+type virq_t =
+  | Timer        (* #define VIRQ_TIMER      0 *)
+  | Debug        (* #define VIRQ_DEBUG      1 *)
+  | Console      (* #define VIRQ_CONSOLE    2 *)
+  | Dom_exc      (* #define VIRQ_DOM_EXC    3 *)
+  | Tbuf         (* #define VIRQ_TBUF       4 *)
+  | Reserved_5   (* Do not use this value as it's not defined *)
+  | Debugger     (* #define VIRQ_DEBUGGER   6 *)
+  | Xenoprof     (* #define VIRQ_XENOPROF   7 *)
+  | Con_ring     (* #define VIRQ_CON_RING   8 *)
+  | Pcpu_state   (* #define VIRQ_PCPU_STATE 9 *)
+  | Mem_event    (* #define VIRQ_MEM_EVENT  10 *)
+  | Xc_reserved  (* #define VIRQ_XC_RESERVED 11 *)
+  | Enomem       (* #define VIRQ_ENOMEM     12 *)
+  | Xenpmu       (* #define VIRQ_XENPMU     13 *)
+
+let notify _h _ = ()
+let bind_interdomain (_h, port) _domid _remote_port = incr port; !port
+let bind_virq (_h, port) _ = incr port; !port
+let bind_dom_exc_virq handle = bind_virq handle Dom_exc
+let unbind _ _ = ()
+let pending (_h, port) = !port
+let unmask _ _ = ()
+
+let to_int x = x
+let of_int x = x
diff --git a/tools/ocaml/xenstored/test/xenstored_test.ml b/tools/ocaml/xenstored/test/xenstored_test.ml
new file mode 100644
index 0000000000..e86b68e867
--- /dev/null
+++ b/tools/ocaml/xenstored/test/xenstored_test.ml
@@ -0,0 +1,2 @@
+open Xenstored
+let () = ()
diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index d44ae673c4..ae2eab498a 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -265,8 +265,8 @@ let to_file store cons fds file =
 	        (fun () -> close_out channel)
 end
 
-let _ =
-	let cf = do_argv in
+let main () =
+	let cf = do_argv () in
 	let pidfile =
 		if Sys.file_exists (config_filename cf) then
 			parse_config (config_filename cf)
diff --git a/tools/ocaml/xenstored/xenstored_main.ml b/tools/ocaml/xenstored/xenstored_main.ml
new file mode 100644
index 0000000000..929dd62fb4
--- /dev/null
+++ b/tools/ocaml/xenstored/xenstored_main.ml
@@ -0,0 +1 @@
+let () = Xenstored.main ()
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:07:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:07:12 +0000
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 08/17] Add structured fuzzing unit test
Date: Tue, 11 May 2021 19:05:21 +0100
Message-ID: <a9f057131b75e1bd2dcb49c795630ab5875b7f76.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Based on ideas from qcstm, implemented using Crowbar.

Quickcheck-style property tests that use AFL to quickly
explore values that trigger bugs in the code.

This is structured/guided fuzzing: we read an arbitrary random number
and use it to generate valid-looking xenstore trees and commands.
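
As a rough illustration (not the patch's actual generator; the type and
path/operation names here are invented for the example), mapping random
numbers onto a closed vocabulary with Crowbar looks like this:

```ocaml
(* Sketch: guided generation with Crowbar. Random integers are mapped
   onto a small fixed set of paths and operations, so every generated
   input is a syntactically valid command. *)
open Crowbar

type cmd = Write of string * string | Read of string | Mkdir of string

let path =
  (* pick from a closed set of valid-looking xenstore paths *)
  map [range 3] (fun i -> List.nth ["/local"; "/tool"; "/vm"] i)

let cmd =
  choose
    [ map [path; bytes] (fun p v -> Write (p, v))
    ; map [path] (fun p -> Read p)
    ; map [path] (fun p -> Mkdir p) ]
```

Because generation is driven by the bytes AFL mutates, coverage feedback
steers the fuzzer toward interesting command sequences rather than
rejecting malformed input up front.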

There are two instances of xenstored: one that runs the live-update
command, and one that ignores it.
Live update should be a no-op with respect to xenstored state: this is our
quickcheck property.
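
The shape of that property can be sketched as follows (a hedged sketch
only: `cmds`, `run` and `pp_state` stand in for the test suite's real
generator, interpreter and printer):

```ocaml
(* Sketch: run the same command trace through two xenstored instances,
   trigger live update in only one of them, and require the resulting
   observable state to be identical. *)
let () =
  Crowbar.add_test ~name:"live update is a no-op" [cmds] @@ fun trace ->
  let a = run ~live_update:true trace in   (* performs live update *)
  let b = run ~live_update:false trace in  (* ignores the command   *)
  Crowbar.check_eq ~pp:pp_state a b
```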

When any mismatch is identified, it prints the input
(tree + xenstore commands) and a diff of the output:
the internal xenstore tree state plus quotas.

afl-cmin can be used to further minimize the testcase.
Crowbar (a Quickcheck-style library with AFL persistent-mode integration) is
used for speed: it very easily gets us a multi-core, parallelizable test.

Currently the Transaction tests fail, which is why live updates with
active transactions are rejected. These tests are commented out.

There is also some incomplete code here that attempts to find functional
bugs in xenstored by interpreting xenstore commands in a simpler way and
comparing states.

This will build the fuzzer and run it single core for sanity test:
make container-fuzz-sanity-test

This will run it multicore (requires all dependencies installed on the host,
including ocaml-bun, the multi-core AFL runner):
make dune-oxenstored-fuzz

'make check' will also run the fuzzer, but with input supplied by OCaml's
random number generator and for a small number of iterations
(a few thousand). This doesn't require any external tools (no AFL, no bun).

On failure it prints a base64 encoding of the fuzzer state that can be
used to reproduce the failure instantly, which is very useful for
debugging: one can iterate on the failed fuzzer state until the bug is
fixed, and then run the fuzzer again to find the next failure.
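
Replaying such a failure amounts to decoding the printed string back into
the byte stream the fuzzer consumed (illustrative only: `Base64.decode_exn`
assumes the ocaml-base64 library, and `run_test_on_input` is a hypothetical
entry point, not a function defined by this patch):

```ocaml
(* Sketch: turn the printed base64 state back into fuzzer input. *)
let replay encoded =
  let input = Base64.decode_exn encoded in  (* from ocaml-base64 *)
  run_test_on_input input                   (* hypothetical replay hook *)
```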

The unit tests here require OCaml 4.06, but the rest of the codebase
doesn't (yet).

See https://lore.kernel.org/xen-devel/cbb2742191e9c1303fdfd95feef4d829ecf33a0d.camel@citrix.com/
for previous discussion of OCaml version.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/Makefile                          |  19 +
 tools/ocaml/xenstored/process.ml              |  12 +-
 tools/ocaml/xenstored/store.ml                |   1 +
 tools/ocaml/xenstored/test/dune               |  12 +
 tools/ocaml/xenstored/test/generator.ml       | 189 +++++
 tools/ocaml/xenstored/test/model.ml           | 253 ++++++
 tools/ocaml/xenstored/test/old/arbitrary.ml   | 261 +++++++
 tools/ocaml/xenstored/test/old/gen_paths.ml   |  66 ++
 .../xenstored/test/old/xenstored_test.ml      | 527 +++++++++++++
 tools/ocaml/xenstored/test/pathtree.ml        |  40 +
 tools/ocaml/xenstored/test/testable.ml        | 379 +++++++++
 tools/ocaml/xenstored/test/types.ml           | 437 +++++++++++
 tools/ocaml/xenstored/test/xenstored_test.ml  | 149 +++-
 tools/ocaml/xenstored/test/xs_protocol.ml     | 733 ++++++++++++++++++
 tools/ocaml/xenstored/transaction.ml          | 119 ++-
 15 files changed, 3188 insertions(+), 9 deletions(-)
 create mode 100644 tools/ocaml/xenstored/test/generator.ml
 create mode 100644 tools/ocaml/xenstored/test/model.ml
 create mode 100644 tools/ocaml/xenstored/test/old/arbitrary.ml
 create mode 100644 tools/ocaml/xenstored/test/old/gen_paths.ml
 create mode 100644 tools/ocaml/xenstored/test/old/xenstored_test.ml
 create mode 100644 tools/ocaml/xenstored/test/pathtree.ml
 create mode 100644 tools/ocaml/xenstored/test/testable.ml
 create mode 100644 tools/ocaml/xenstored/test/types.ml
 create mode 100644 tools/ocaml/xenstored/test/xs_protocol.ml

diff --git a/tools/ocaml/Makefile b/tools/ocaml/Makefile
index 53dd0a0f0d..de375820a3 100644
--- a/tools/ocaml/Makefile
+++ b/tools/ocaml/Makefile
@@ -67,3 +67,22 @@ dune-syntax-check: dune-pre
 .PHONY: build-oxenstored-dune
 dune-build-oxenstored: dune-pre
 	LD_LIBRARY_PATH=$(LIBRARY_PATH) LIBRARY_PATH=$(LIBRARY_PATH) C_INCLUDE_PATH=$(C_INCLUDE_PATH) dune build --profile=release @all
+
+.PHONY: oxenstored-fuzz1 oxenstored-fuzz
+dune-oxenstored-fuzz: dune-pre
+	# --force is needed, otherwise it would cache a successful run
+	sh -c '. /etc/profile && C_INCLUDE_PATH=$(C_INCLUDE_PATH) dune build --profile=release --no-buffer --force @fuzz'
+
+dune-oxenstored-fuzz1: dune-pre
+	sh -c '. /etc/profile && C_INCLUDE_PATH=$(C_INCLUDE_PATH) dune build --profile=release --no-buffer --force @fuzz1'
+
+.PHONY: container-fuzz
+container-fuzz-sanity-test:
+	dune clean
+	podman build -t oxenstored-fuzz .
+	# if UID is 0 then we get EPERM on setrlimit from inside the container
+	# use containerize script which ensures that uid is not 0
+	# (podman/docker run would get us a uid of 0)
+	# Only do a sanity test with 1 core, actually doing fuzzing inside a container is a bad idea
+	# due to FUSE overlayfs overhead
+	CONTAINER=oxenstored-fuzz CONTAINER_NO_PULL=1 DOCKER_CMD=podman ../../automation/scripts/containerize make -C tools/ocaml dune-oxenstored-fuzz1
diff --git a/tools/ocaml/xenstored/process.ml b/tools/ocaml/xenstored/process.ml
index d573c88685..13b7153536 100644
--- a/tools/ocaml/xenstored/process.ml
+++ b/tools/ocaml/xenstored/process.ml
@@ -169,7 +169,7 @@ let parse_live_update args =
 				]
 			(fun x -> raise (Arg.Bad x))
 			"live-update -s" ;
-			debug "Live update process queued" ;
+			(* debug "Live update process queued" ; *)
 				{!state with deadline = Unix.gettimeofday () +. float !timeout
 				; force= !force; pending= true})
 		| _ ->
@@ -449,6 +449,8 @@ let transaction_replay c t doms cons =
 		(fun () ->
 			try
 				Logging.start_transaction ~con ~tid;
+				if t.must_fail then
+					raise Transaction_again;
 				List.iter (perform_exn ~wlog:true replay_t) (Transaction.get_operations t); (* May throw EAGAIN *)
 
 				Logging.end_transaction ~con ~tid;
@@ -550,7 +552,7 @@ let do_introduce con t domains cons data =
 		| _                         -> raise Invalid_Cmd_Args;
 		in
 	let dom =
-		if Domains.exist domains domid then
+		if Domains.exist domains domid then begin
 			let edom = Domains.find domains domid in
 			if (Domain.get_mfn edom) = mfn && (Connections.find_domain cons domid) != con then begin
 				(* Use XS_INTRODUCE for recreating the xenbus event-channel. *)
@@ -558,12 +560,16 @@ let do_introduce con t domains cons data =
 				Domain.bind_interdomain edom;
 			end;
 			edom
+		end
 		else try
 			let ndom = Domains.create domains domid mfn port in
 			Connections.add_domain cons ndom;
 			Connections.fire_spec_watches (Transaction.get_root t) cons Store.Path.introduce_domain;
 			ndom
-		with _ -> raise Invalid_Cmd_Args
+		with e ->
+			let bt = Printexc.get_backtrace () in
+			 Logging.debug "process" "do_introduce: %s (%s)" (Printexc.to_string e) bt;
+			 raise Invalid_Cmd_Args
 	in
 	if (Domain.get_remote_port dom) <> port || (Domain.get_mfn dom) <> mfn then
 		raise Domain_not_match
diff --git a/tools/ocaml/xenstored/store.ml b/tools/ocaml/xenstored/store.ml
index 20e67b1427..85ca3daae9 100644
--- a/tools/ocaml/xenstored/store.ml
+++ b/tools/ocaml/xenstored/store.ml
@@ -133,6 +133,7 @@ let of_path_and_name path name =
 	| _ -> path @ [name]
 
 let create path connection_path =
+	Logging.debug "store" "Path.create %S %S" path connection_path;
 	of_string (Utils.path_validate path connection_path)
 
 let to_string t =
diff --git a/tools/ocaml/xenstored/test/dune b/tools/ocaml/xenstored/test/dune
index 2a3eb2b7df..cd62926be9 100644
--- a/tools/ocaml/xenstored/test/dune
+++ b/tools/ocaml/xenstored/test/dune
@@ -9,3 +9,15 @@
  (flags (:standard -w -52))
  ;;(action (run %{test} -v --seed 364172147))
  (libraries unix xen.bus xen.mmap xenstubs crowbar xen.store fmt fmt.tty))
+
+(rule
+(alias fuzz1)
+(deps xenstored_test.exe)
+(action (run bun -svv ./xenstored_test.exe))
+)
+
+(rule
+(alias fuzz)
+(deps xenstored_test.exe)
+(action (run bun --no-kill ./xenstored_test.exe))
+)
diff --git a/tools/ocaml/xenstored/test/generator.ml b/tools/ocaml/xenstored/test/generator.ml
new file mode 100644
index 0000000000..6f7dc374f8
--- /dev/null
+++ b/tools/ocaml/xenstored/test/generator.ml
@@ -0,0 +1,189 @@
+module type S = sig
+  type cmd
+
+  type state
+
+  val init_state : state
+
+  val next_state : cmd -> state -> state
+
+  val precond : cmd -> state -> bool
+end
+
+module IntSet = Set.Make (Int)
+module IntMap = Map.Make (Int)
+
+module Pickable (K : sig
+  include Map.OrderedType
+
+  include Hashtbl.HashedType with type t := t
+end) =
+struct
+  (* allow picking a random value from a changing map keys.
+     Store a random value (hash of key) as first element of key,
+     then use find_first to pick an item related to the random element if any.
+     This should be more efficient than converting to a list and using List.nth to pick
+  *)
+  module Key = struct
+    type t = int * K.t
+
+    let of_key k = (K.hash k, k)
+
+    let compare (h, k) (h', k') =
+      match Int.compare h h' with 0 -> K.compare k k' | r -> r
+  end
+
+  module M = Map.Make (Key)
+
+  type 'a t = 'a M.t
+
+  let empty = M.empty
+
+  let singleton k v = M.singleton (Key.of_key k) v
+
+  let add k v m = M.add (Key.of_key k) v m
+
+  let find_opt k m = M.find_opt (Key.of_key k) m
+
+  let mem k m = M.mem (Key.of_key k) m
+
+  let remove k m = M.remove (Key.of_key k) m
+
+  let merge f m m' = M.merge f m m'
+
+  let is_empty = M.is_empty
+
+  let update k f m = M.update (Key.of_key k) f m
+
+  let choose rnd m =
+    (* function needs to be monotonic, so the hash has to be part of the key *)
+    let gte (keyhash, _) = Int.compare keyhash rnd >= 0 in
+    match M.find_first_opt gte m with
+    | Some ((_, k), _) ->
+        k
+    | None ->
+        snd @@ fst @@ M.min_binding m
+end
+
+module PickablePath = Pickable (struct
+  type t = string
+
+  let hash = Hashtbl.hash
+
+  let compare = String.compare
+
+  let equal = String.equal
+end)
+
+module PickableInt = Pickable (struct
+  include Int
+
+  let hash = Hashtbl.hash
+end)
+
+module PathObserver = struct
+  type state =
+    { seen: unit PickablePath.t
+    ; dom_txs: unit PickableInt.t PickableInt.t
+    ; next_tid: int }
+
+  let choose_path t rnd = PickablePath.choose rnd t.seen
+
+  let choose_domid t rnd = PickableInt.choose rnd t.dom_txs
+
+  let choose_txid_opt t domid rnd =
+    match PickableInt.find_opt domid t.dom_txs with
+    | None ->
+        0
+    | Some txs ->
+        if PickableInt.is_empty txs then 0 else PickableInt.choose rnd txs
+
+  let new_domid domid = PickableInt.singleton domid PickableInt.empty
+
+  let both _ _ _ = Some ()
+
+  let merge_txs _ s s' =
+    let s = Option.value ~default:PickableInt.empty s in
+    let s' = Option.value ~default:PickableInt.empty s' in
+    Some (PickableInt.merge both s s')
+
+  let init_state =
+    {seen= PickablePath.singleton "/" (); dom_txs= new_domid 0; next_tid= 1}
+
+  let with_path path t = {t with seen= PickablePath.add path () t.seen}
+
+  let split0 str =
+    match Process.split (Some 2) '\000' str with
+    | [x; y] ->
+        (x, y)
+    | _ ->
+        invalid_arg str
+
+  let next_state (domid, cmd) t =
+    let open Xenbus.Xb in
+    match cmd with
+    | {Xenbus.Packet.ty= Transaction_start; _} ->
+        let update = function
+          | None ->
+              None
+          | Some txs ->
+              Some (PickableInt.add t.next_tid () txs)
+        in
+        { t with
+          dom_txs= PickableInt.update domid update t.dom_txs
+        ; next_tid= t.next_tid + 1 }
+    | { Xenbus.Packet.ty=
+          Op.(
+            ( Rm
+            | Read
+            | Directory
+            | Getperms
+            | Setperms
+            | Unwatch
+            | Reset_watches
+            | Getdomainpath
+            | Isintroduced
+            | Set_target
+            | Debug ))
+      ; _ } ->
+        t
+    | {Xenbus.Packet.ty= Op.(Watchevent | Error | Resume | Invalid); _} ->
+        assert false
+    | {Xenbus.Packet.ty= Op.Transaction_end; tid; _} ->
+        let update = function
+          | None ->
+              None
+          | Some txs ->
+              Some (PickableInt.remove tid txs)
+        in
+        {t with dom_txs= PickableInt.update domid update t.dom_txs}
+    | {Xenbus.Packet.ty= Op.(Write | Mkdir | Watch); data} ->
+        let path, _ = split0 data in
+        with_path path t
+    | {Xenbus.Packet.ty= Introduce; data} ->
+        let domidstr, _ = split0 data in
+        let domid' = int_of_string domidstr in
+        if domid = 0 then
+          { t with
+            dom_txs= PickableInt.merge merge_txs t.dom_txs (new_domid domid') }
+        else t
+    | {Xenbus.Packet.ty= Release; data} ->
+        let domidstr, _ = split0 data in
+        let domid = int_of_string domidstr in
+        {t with dom_txs= PickableInt.remove domid t.dom_txs}
+
+  let precond (domid, cmd) t =
+    ( match PickableInt.find_opt domid t.dom_txs with
+    | None ->
+        false
+    | Some txs ->
+        let tid = cmd.Xenbus.Packet.tid in
+        tid = 0 || PickableInt.mem tid txs )
+    && Testable.Command.precond cmd t
+
+  let pp =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "domid" fst Fmt.int
+      ; Dump.field "cmd" snd Testable.Command.pp_dump ]
+end
diff --git a/tools/ocaml/xenstored/test/model.ml b/tools/ocaml/xenstored/test/model.ml
new file mode 100644
index 0000000000..4b5ae462fb
--- /dev/null
+++ b/tools/ocaml/xenstored/test/model.ml
@@ -0,0 +1,253 @@
+open Xs_protocol
+
+(* Conventions:
+Aim for correctness; use the simplest data structure that unambiguously represents state.
+
+E.g.:
+* a list when duplicates are allowed, order matters, and the empty list is a valid value
+* a set when elements appearing multiple times have the same semantic meaning as appearing once,
+and the order is unspecified or sorted
+* a map when a single key is mapped to a single value, and order is unspecified or sorted
+
+When we must retain the original order for queries, but semantically it doesn't matter,
+then store both a canonical representation and the original order.
+
+*)
+
+let rec string_for_all_from s f pos =
+  pos = String.length s || (f s.[pos] && (string_for_all_from s f @@ (pos + 1)))
+
+type error = [`Msg of string]
+
+module Path : sig
+  (** a valid xenstore path *)
+  type t
+
+  val root : t
+
+  val of_string : string -> t option
+  (** [of_string path] parses [path].
+      @return [None] if the path is syntactically not valid *)
+
+  val to_string : t -> string
+  (** [to_string path] converts path to string. *)
+
+  (** [is_child parent child] returns true if [child] is a child of [parent].
+      A path is considered to be a child of itself *)
+  val is_child : t -> t -> bool
+end = struct
+  type t = string list
+
+  let is_valid_char = function
+    | '0' .. '9' | 'a' .. 'z' | 'A' .. 'Z' | '-' | '/' | '_' | '@' ->
+        true
+    | _ ->
+        false
+
+  let root = [""]
+
+  let nonempty s = String.length s > 0
+
+  let of_string s =
+    let n = String.length s in
+    if
+      n > 0 (* empty path is not permitted *)
+      && n < 1024
+      (* paths cannot exceed 1024 chars, FIXME: relative vs absolute *)
+      && string_for_all_from s is_valid_char 0
+    then
+      match String.split_on_char '/' s with
+      | [] ->
+          assert false
+      | [""; ""] ->
+          Some root
+      | _ :: tl as path ->
+          if List.for_all nonempty tl then Some path else None
+    else None
+
+  let to_string = String.concat "/"
+
+  let rec is_child p c =
+    match (p, c) with
+    | [], [] ->
+        true (* a path is a child of itself *)
+    | [], _ ->
+        true
+    | phd :: ptl, chd :: ctl when phd = chd ->
+        is_child ptl ctl
+    | _ ->
+        false
+end
+
+module PathMap = Map.Make (String)
+module DomidSet = Set.Make (Int)
+module DomidMap = Map.Make (Int)
+
+let preserve_order = ref true
+
+module CanonicalACL = struct
+  module RW = struct
+    type t = {read: bool; write: bool}
+
+    let of_perm = function
+      | ACL.NONE ->
+          {read= false; write= false}
+      | ACL.WRITE ->
+          {read= false; write= true}
+      | ACL.READ ->
+          {read= true; write= false}
+      | ACL.RDWR ->
+          {read= true; write= true}
+
+    let to_perm = function
+      | {read= false; write= false} ->
+          ACL.NONE
+      | {read= false; write= true} ->
+          ACL.WRITE
+      | {read= true; write= false} ->
+          ACL.READ
+      | {read= true; write= true} ->
+          ACL.RDWR
+
+    let full = {read= true; write= true}
+  end
+
+  module RWMap = struct
+    type t = {fallback: RW.t; map: RW.t DomidMap.t}
+
+    let lookup t domid =
+      (* other=RDWR can be overridden by explicitly revoking
+         permissions for a domain, so a read=false,write=false
+         in the DomidMap is not necessarily redundant
+      *)
+      DomidMap.find_opt domid t.map |> Option.value ~default:t.fallback
+
+    let create fallback owner =
+      (* owner always has full permissions, and cannot be overridden *)
+      {fallback; map= DomidMap.singleton owner RW.full}
+
+    let override t (domid, perm) =
+      let rw = RW.of_perm perm in
+      (* first entry wins, see perms.ml; entries that are the same as the fallback are
+         not necessarily redundant: (b1,b2,r2) means that domid 2 has rdwr,
+         but if we removed the seemingly redundant `b2` entry then the override would make it
+         read-only, which would be wrong. *)
+      if DomidMap.mem domid t.map then t
+      else {t with map= DomidMap.add domid rw t.map}
+  end
+
+  type t = {original: ACL.t; owner: ACL.domid; acl: RWMap.t}
+
+  let can_read t domid = (RWMap.lookup t.acl domid).read
+
+  let can_write t domid = (RWMap.lookup t.acl domid).write
+
+  let owner t = t.owner
+
+  let of_acl original =
+    let fallback = RW.of_perm original.ACL.other in
+    let owner = original.ACL.owner in
+    let acl =
+      let init = RWMap.create fallback owner in
+      List.fold_left RWMap.override init original.ACL.acl
+    in
+    {original; owner; acl}
+
+  let to_acl t =
+    if !preserve_order then t.original
+    else
+      ACL.
+        { owner= t.owner
+        ; other= RW.to_perm t.acl.fallback
+        ; acl= t.acl.map |> DomidMap.map RW.to_perm |> DomidMap.bindings }
+end
+
+module Store = struct
+  type node = {value: string; children: string list; acl: CanonicalACL.t}
+
+  type t = {paths: node PathMap.t}
+
+  let create () = {paths= PathMap.empty}
+
+  let parent _ = failwith "TODO"
+
+  let add t key value =
+    let paths = PathMap.add key value t.paths in
+    {paths}
+
+  let remove t key =
+    let paths = PathMap.remove key t.paths in
+    {paths}
+end
+
+type t = Store.t
+
+let create () = Store.create ()
+
+let reply_enoent = Response.Error "ENOENT"
+
+let reply_eexist = Response.Error "EEXIST"
+
+let with_node_read t path f =
+  ( t
+  , match PathMap.find_opt path t.paths with
+    | None ->
+        reply_enoent
+    | Some n ->
+        f n )
+
+(* TODO: perm check *)
+let perform_path t domid path = function
+  | Request.Read ->
+      with_node_read t path @@ fun n -> Response.Read n.value
+  | Request.Directory ->
+      with_node_read t path @@ fun n -> Response.Directory n.children
+  | Request.Directory_part _ ->
+      (t, Response.Error "ENOTSUP")
+  | Request.Getperms ->
+      with_node_read t path @@ fun n -> Response.Getperms n.acl
+  | Request.Write value -> (
+    (* TODO: implicit mkdir *)
+    match PathMap.find_opt path t.paths with
+    | Some _ ->
+        (t, reply_eexist)
+    | None ->
+        let acl = ACL.{owner= domid; other= NONE; acl= []} in
+        let n = {value; children= []; acl} in
+        ({t with paths= PathMap.add path n t.paths}, Response.Write) )
+  | Request.Setperms acl -> (
+    match PathMap.find_opt path t.paths with
+    | None ->
+        (t, reply_enoent)
+    | Some _ ->
+        let update_node = function
+          | None ->
+              None
+          | Some n ->
+              Some {n with acl}
+        in
+        ( {t with paths= PathMap.update path update_node t.paths}
+        , Response.Setperms ) )
+  | Request.Mkdir -> (
+    (* TODO: implicit mkdir *)
+    match PathMap.find_opt path t.paths with
+    | Some _ ->
+        (t, reply_eexist)
+    | None ->
+        let acl = ACL.{owner= domid; other= NONE; acl= []} in
+        let n = {value= ""; children= []; acl} in
+        ({t with paths= PathMap.add path n t.paths}, Response.Mkdir) )
+  | Request.Rm -> (
+    match PathMap.find_opt path t.paths with
+    | None ->
+        (t, reply_enoent)
+    | Some _ ->
+        ({t with paths= PathMap.remove path t.paths}, Response.Rm) )
+
+let perform t domid = function
+  | Request.PathOp (path, op) ->
+      perform_path t domid path op
+  | Request.Getdomainpath domid ->
+      (t, Response.Getdomainpath (Printf.sprintf "/local/domain/%d" domid))
+  | _ ->
+      failwith "TODO"
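The model above keeps the store as a flat map from paths to nodes. A minimal standalone sketch of that idea (plain Stdlib, no Xen types; the `response` variant and helper names here are illustrative, not the patch's API):

```ocaml
(* Tiny sketch of the flat-map store model: paths map to values,
   Write creates, Read looks up, and a missing path replies ENOENT. *)
module PathMap = Map.Make (struct
  type t = string list

  let compare = compare
end)

type response = Read of string | Written | Error of string

let read store path =
  match PathMap.find_opt path store with
  | None -> Error "ENOENT"
  | Some v -> Read v

let write store path value =
  match PathMap.find_opt path store with
  | Some _ -> (store, Error "EEXIST") (* like the model, reject overwrites *)
  | None -> (PathMap.add path value store, Written)

let () =
  let store = PathMap.empty in
  let store, r1 = write store ["local"; "domain"; "1"] "v" in
  assert (r1 = Written) ;
  assert (read store ["local"; "domain"; "1"] = Read "v") ;
  assert (read store ["missing"] = Error "ENOENT") ;
  print_endline "ok"
```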
diff --git a/tools/ocaml/xenstored/test/old/arbitrary.ml b/tools/ocaml/xenstored/test/old/arbitrary.ml
new file mode 100644
index 0000000000..6b0bf9864a
--- /dev/null
+++ b/tools/ocaml/xenstored/test/old/arbitrary.ml
@@ -0,0 +1,261 @@
+open QCheck
+
+(* See https://github.com/gasche/random-generator/blob/51351c16b587a1c4216d158e847dcfa6db15f009/random_generator.mli#L275-L325
+    for background on fueled generators for recursive data structures.
+   The difference here is that we build an N-ary tree, not a binary tree as in the example.
+   So we need to spread the fuel among elements of a list of random size.
+*)
+
+(** [spread fuel] creates an array of random size and spreads [fuel] among its elements.
+  Each array slot additionally uses up 1 fuel itself.
+  For example, the full list of possible arrays for [4] fuel is:
+  {[ [[|3|]; [|0; 2|]; [|1; 1|]; [|2; 0|]; [|0; 0; 0; 0|]] ]}
+*)
+let spread = function
+  | 0 ->
+      Gen.return [||]
+  | n when n < 0 ->
+      invalid_arg "negative fuel"
+  | n ->
+      Gen.(
+        1 -- n
+        >>= fun per_element ->
+        (* We have [n] fuel to divide up, such that most elements get [per_element] fuel.
+           Round the number of elements up. *)
+        let m = (n + per_element - 1) / per_element in
+        (* each element implicitly uses up one fuel; subtract it before spreading *)
+        let a = Array.make m (per_element - 1) in
+        (* put the remainder in the first slot *)
+        a.(0) <- n - (per_element * (m - 1)) - 1 ;
+        assert (Array.fold_left ( + ) m a = n) ;
+        (* ensure that remainder is in a random position *)
+        Gen.shuffle_a a >|= fun () -> a)
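The fuel arithmetic above can be checked without QCheck. This is a deterministic sketch (the `split_fuel` helper is hypothetical, mirroring the rounding in `spread` before the shuffle):

```ocaml
(* Deterministic core of [spread]: divide [n] fuel into an array where most
   slots carry [per_element - 1] fuel; each slot also consumes 1 implicitly. *)
let split_fuel n per_element =
  (* number of slots, rounded up *)
  let m = (n + per_element - 1) / per_element in
  let a = Array.make m (per_element - 1) in
  (* the remainder goes into one slot *)
  a.(0) <- n - (per_element * (m - 1)) - 1 ;
  a

let () =
  (* slots (1 implicit fuel each) plus explicit fuel must add up to [n] *)
  for n = 1 to 100 do
    for per_element = 1 to n do
      let a = split_fuel n per_element in
      assert (Array.for_all (fun x -> x >= 0) a) ;
      assert (Array.length a + Array.fold_left ( + ) 0 a = n)
    done
  done ;
  print_endline "ok"
```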
+
+(** [spread_l fuel sized_element] spreads [fuel] among list elements,
+    where each list element is created using [sized_element].
+    [sized_element] must create an element of exactly the requested size
+    (which may be a recursive element that calls [spread_l] in turn).
+    Each list element implicitly consumes 1 fuel, and [sized_element] is called with the decreased fuel.
+ *)
+let spread_l fuel (sized_elem : 'a Gen.sized) =
+  Gen.(
+    spread fuel
+    >>= fun a ->
+    a |> Array.map sized_elem |> Gen.flatten_a |> Gen.map Array.to_list)
+
+module Tree = struct
+  (* For better shrinking put the (recursive) list first *)
+  type 'a t = Nodes of ('a t * 'a) list
+
+  (** [empty] the empty tree (of size 1) *)
+  let empty = Nodes []
+
+  (** [nodes children] tree constructor *)
+  let nodes children = Nodes children
+
+  (** [tree elem_gen] generates a random tree, with elements generated by [elem_gen] *)
+  let tree elem =
+    Gen.sized @@ Gen.fix
+    @@ fun self fuel ->
+    (* self is the generator for a subtree *)
+    let node fuel = Gen.(pair (self fuel) elem) in
+    (* using spread_l ensures that fuel decreases by at least 1, thus ensuring termination *)
+    Gen.map nodes @@ spread_l fuel node
+
+  (** [zero _] is a default implementation for [small] *)
+  let zero _ = 0
+
+  (** [small elem_size tree] returns the count of nodes in the tree and the sum of element sizes
+      as determined by [elem_size] *)
+  let rec small ?(elem_size = zero) (Nodes tree) =
+    List.fold_left
+      (fun acc (subtree, elem) ->
+        acc + elem_size elem + small ~elem_size subtree)
+      1 tree
+
+  (** [shrink ?elem tree] returns a list of potentially smaller trees based on [tree].
+   *)
+  let shrink ?(elem = Shrink.nil) =
+    (* Shrinking needs to generate smaller trees (as determined by [small]),
+       QCheck will keep iterating until it finds a smaller tree that still reproduces the bug.
+       It will then invoke the shrinker again on the smaller tree to attempt to shrink it further.
+       Once the tree shape cannot be shrunk further individual node elements will be shrunk.
+    *)
+    let rec tree (Nodes t) =
+      (* first try to shrink the subtree to a leaf,
+         and if that doesn't work then recursively shrink the subtree
+      *)
+      Iter.append (Iter.return empty)
+      @@ Iter.map nodes
+      @@ Shrink.list ~shrink:(Shrink.pair tree elem) t
+    in
+    tree
+
+  (** [make arb] creates a tree generator with elements generated by [arb].
+      The tree has a shrinker and size defined.
+   *)
+  let make arb =
+    let gen = tree @@ gen arb in
+    QCheck.make
+      ~small:(small ?elem_size:arb.small)
+      ~shrink:(shrink ?elem:arb.shrink) gen
+
+  (** [paths_of_tree ~join tree] returns all paths through the tree,
+      with path elements joined using [join] *)
+  let paths_of_tree ~join t =
+    let rec paths_of_subtree (paths, path) (Nodes nodes) =
+      ListLabels.fold_left nodes ~init:paths ~f:(fun paths (tree, elem) ->
+          let path = elem :: path in
+          paths_of_subtree (join (List.rev path) :: paths, path) tree)
+    in
+    paths_of_subtree ([], []) t
+
+  let paths join arb =
+    make arb
+    (* we need to retain the tree, so that the shrinking is done on the tree,
+       and not on the paths *)
+    |> map_keep_input (paths_of_tree ~join)
+end
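The path enumeration in `paths_of_tree` can be sketched standalone, with a local copy of the N-ary tree type and `String.concat "/"` standing in for the store's join function:

```ocaml
(* N-ary tree with the recursive list first, as in the Tree module *)
type 'a t = Nodes of ('a t * 'a) list

(* enumerate every path from the root; [join] combines path elements *)
let paths_of_tree ~join t =
  let rec go (paths, path) (Nodes nodes) =
    List.fold_left
      (fun paths (subtree, elem) ->
        let path = elem :: path in
        go (join (List.rev path) :: paths, path) subtree)
      paths nodes
  in
  go ([], []) t

let () =
  (* "a" with child "b", plus a sibling leaf "c" *)
  let t = Nodes [(Nodes [(Nodes [], "b")], "a"); (Nodes [], "c")] in
  paths_of_tree ~join:(String.concat "/") t
  |> List.sort compare
  |> List.iter print_endline
```

Every prefix of a path is itself emitted (here: `a`, `a/b`, `c`), which matches how a hierarchical store names intermediate nodes.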
+
+module Case = struct
+  type ('a, 'b) t =
+    { case_tag: string
+    ; orig: 'a QCheck.arbitrary
+    ; map: 'a -> 'b
+    ; shrink: 'a -> 'b Iter.t
+    ; print: 'a Print.t
+    ; small: 'a -> int }
+
+  (** [make tag arb f] defines a new variant case named [tag], with constructor
+      arguments generated by [arb] and constructor function [f]. *)
+  let make case_tag orig map =
+    let shrink a =
+      match orig.QCheck.shrink with
+      | None ->
+          Iter.empty
+      | Some s ->
+          Iter.map map @@ s a
+    in
+    let small a = match orig.QCheck.small with None -> 0 | Some s -> s a in
+    let print a = match orig.QCheck.print with None -> "_" | Some p -> p a in
+    {case_tag; orig; map; shrink; small; print}
+
+  type 'a call =
+    { tag: string
+    ; shrink_lazy: 'a Iter.t Lazy.t
+    ; small_lazy: int Lazy.t
+    ; print: string Lazy.t }
+
+  (** [call case args] is used by the implementation of [rev] to build a shrinker/printer/small of the appropriate type *)
+  let call t a =
+    { tag= t.case_tag
+    ; shrink_lazy= lazy (t.shrink a)
+    ; small_lazy= lazy (t.small a)
+    ; print= lazy (t.print a) }
+
+  (** [to_sum case] converts all variant cases to the same type so they can be put into a list *)
+  let to_sum t = Gen.map t.map @@ QCheck.gen t.orig
+end
+
+(** [sum ~rev cases] defines an arbitrary for a sum type consisting of [cases]
+  variant case generators. [rev] matches on the sum type and should invoke
+  [Case.call <variant-case-def> <args>]; the printer, shrinker, size and collect are all derived from it.
+
+  E.g.
+  {|
+  type t = A of int | B of float
+
+  let case_a = Case.make "A" int (fun i -> A i)
+
+  let case_b = Case.make "B" float (fun f -> B f)
+
+  let rev t =
+    match t with A i -> Case.call case_a i | B g -> Case.call case_b g
+
+  let x = sum ~rev [Case.to_sum case_a; Case.to_sum case_b]
+  |}
+ *)
+let sum ~rev lst =
+  let shrink b = Lazy.force (rev b).Case.shrink_lazy in
+  let small b = Lazy.force (rev b).Case.small_lazy in
+  let collect b = (rev b).Case.tag in
+  let print b = let r = rev b in r.Case.tag ^ " " ^ Lazy.force r.print in
+  QCheck.make ~shrink ~small ~collect ~print (Gen.oneof lst)
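The `Case`/`sum` pattern (deriving the sum type's metadata from a `rev` function that maps a value back to its case) can be sketched without QCheck. Field and function names here are simplified stand-ins, not the patch's API:

```ocaml
(* a sum type we want derived metadata for *)
type t = A of int | B of string

(* per-case metadata: a tag and a printer for the constructor argument *)
type 'a case = { c_tag : string; c_print : 'a -> string }

(* what [rev] returns: case metadata applied to the actual argument *)
type call = { tag : string; printed : string }

let call c x = { tag = c.c_tag; printed = c.c_print x }

let case_a = { c_tag = "A"; c_print = string_of_int }

let case_b = { c_tag = "B"; c_print = (fun s -> s) }

(* [rev] pattern-matches the sum type back to its case *)
let rev = function A i -> call case_a i | B s -> call case_b s

(* a printer for the whole sum type, derived purely from [rev] *)
let print v =
  let r = rev v in
  r.tag ^ " " ^ r.printed

let () =
  print_endline (print (A 42)) ;
  print_endline (print (B "hello"))
```

The same trick extends to shrinkers and size functions: each is a per-case field, and the derived function just dispatches through `rev`, so a new variant only needs one `Case.make` and one match arm.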
+
+(*
+let mk_packet op to_string arb =
+  Case.make arb (fun x -> Xenbus.Packet.create 0 0 op (to_string x))
+
+let read_packet =
+  mk_packet Xenbus.Xb.Op.Read Store.Path.to_string (list path_element)
+
+let write_packet =
+  mk_packet Xenbus.Xb.Op.Write
+    (fun (x, y) -> Store.Path.to_string x ^ "\x00" ^ y)
+    (pair (list path_element) binary)
+
+let packet =
+  sum ~print:Xenbus.Packet.to_string
+    [Case.to_sum read_packet; Case.to_sum write_packet]
+*)
+
+(** [binary] is a generator of strings that may contain '\x00' characters. *)
+let binary =
+  (* increase frequency of '\x00' to 10%, otherwise it'd be ~1/256 *)
+  string_gen (Gen.frequency [(10, Gen.return '\x00'); (90, Gen.char)])
+  |> set_print String.escaped
+
+(** [path_chars] generates valid path characters according to the Xenstore protocol. *)
+let path_chars =
+  List.init 256 Char.chr
+  |> List.filter Store.Path.char_is_valid
+  |> Array.of_list |> Gen.oneofa
+
+(** [path_element] generates a valid path element *)
+let path_element = string_gen_of_size Gen.small_int path_chars
+
+type tree = string Tree.t
+
+let paths = Tree.paths Store.Path.to_string path_element
+
+let with_validate p =
+  map_same_type
+  @@ fun v ->
+  (* reject it in a way known to QCheck: precondition failed,
+     instead of testcase failed *)
+  assume @@ p v ;
+  v
+
+(** [non_nul string_arb] rejects strings generated by [string_arb] that contain '\x00'. *)
+let non_nul = with_validate @@ fun s -> not (String.contains s '\x00')
+
+(** [plus arb] generates a list of 1 or more elements generated by [arb] *)
+let plus arb = list_of_size Gen.(map succ small_int) arb
+
+(** [star arb] generates a list of 0 or more elements generated by [arb] *)
+let star arb = list_of_size Gen.small_int arb
+
+let reserved =
+  string_of_size Gen.(frequency [(90, Gen.return 0); (10, Gen.small_int)])
+
+(** The Xenstore protocol allows domids up to 65535, but an actual domid
+    should stay below the first reserved domid, 0x7FF0 *)
+let domid_first_reserved = 0x7FF0
+
+(** [new_domid] generates DomU domids *)
+let new_domid = 1 -- domid_first_reserved
+
+let permty =
+  let open Perms in
+  oneofl [READ; WRITE; RDWR; NONE]
+
+let perms domid =
+  map
+    (fun (domid, other, acls) -> Perms.Node.create domid other acls)
+    ~rev:(fun n ->
+      (Perms.Node.get_owner n, Perms.Node.get_other n, Perms.Node.get_acl n))
+  @@ triple domid permty (small_list (pair domid permty))
diff --git a/tools/ocaml/xenstored/test/old/gen_paths.ml b/tools/ocaml/xenstored/test/old/gen_paths.ml
new file mode 100644
index 0000000000..b50c5b7cad
--- /dev/null
+++ b/tools/ocaml/xenstored/test/old/gen_paths.ml
@@ -0,0 +1,66 @@
+open QCheck
+open Store
+
+type tree = Leaf | Nodes of (string * tree) list
+
+let nodes children = Nodes children
+let gen_tree = QCheck.Gen.(sized @@ fix
+  (fun self n ->
+    let children = frequency [1, pure 0; 2, int_bound n] >>= fun m ->
+    match m with
+    | 0 -> pure []
+    | _ -> list_repeat m (pair string (self (n/m)))
+    in
+    frequency
+     [ 1, pure Leaf
+     ; 2, map nodes children
+    ]
+    ))
+
+let rec paths_of_tree (acc, path) = function
+| Leaf -> acc
+| Nodes l ->
+  List.fold_left (fun acc (k, children) ->
+    let path = k :: path in
+    paths_of_tree (Store.Path.to_string (List.rev path) :: acc, path) children
+  ) acc l
+
+let gen_paths_choices =
+  Gen.map (fun tree ->
+  tree |> paths_of_tree ([], []) |> Array.of_list
+  ) gen_tree
+
+(*let arb_name = Gen.small_string
+
+let arb_permty = let open Perms in oneofl [ READ; WRITE; RDWR; NONE ]
+
+let arb_domid = oneofl [ 0; 1; 0x7FEF]
+
+let arb_perms =
+   map (fun (domid, other, acls) -> Perms.Node.create domid other acls)
+   ~rev:(fun n -> Perms.Node.get_owner n, Perms.Node.get_other n, Perms.Node.get_acl n)
+   @@ triple arb_domid arb_permty (small_list (pair arb_domid arb_permty))*)
+
+let arb_name = Gen.small_string
+let arb_value = Gen.small_string
+
+let node_of name value children =
+  List.fold_left (fun acc child -> Node.add_child acc child)
+    (Node.create name Perms.Node.default0 value) children
+
+let g = QCheck.Gen.(sized @@ fix
+  (fun self n ->
+      frequency [1, pure 0; 2, int_bound n] >>= fun m ->
+      let children = match m with
+      | 0 -> pure []
+      | _ -> list_repeat m (self (n/m))
+      in
+      map3 node_of arb_name arb_value children
+    ))
+
+let paths_of_tree t =
+  let paths = ref [] in
+  Store.traversal t (fun path node ->
+    paths := (Store.Path.of_path_and_name path (Node.get_name node) |> Store.Path.to_string) :: !paths
+  );
+  !paths
diff --git a/tools/ocaml/xenstored/test/old/xenstored_test.ml b/tools/ocaml/xenstored/test/old/xenstored_test.ml
new file mode 100644
index 0000000000..84cfc45d4f
--- /dev/null
+++ b/tools/ocaml/xenstored/test/old/xenstored_test.ml
@@ -0,0 +1,527 @@
+open Stdext
+open QCheck
+open Arbitrary
+
+let () =
+  (* Logging.access_log_nb_files := 1 ;
+     Logging.access_log_transaction_ops := true ;
+     Logging.access_log_special_ops := true ;
+     Logging.access_log_destination := File "/tmp/log" ;
+     Logging.init_access_log ignore ;
+     Logging.set_xenstored_log_destination "/dev/stderr";
+     Logging.init_xenstored_log (); *)
+  Domains.xenstored_port := "xenstored-port" ;
+  let f = open_out !Domains.xenstored_port in
+  Printf.fprintf f "%d" 1 ;
+  close_out f ;
+  Domains.xenstored_kva := "/dev/zero"
+
+module Command = struct
+  type value = string
+
+  let value = binary
+
+  type token = string
+
+  type txid = int
+
+  type domid = Xenctrl.domid
+
+  type t =
+    | Read of Store.Path.t
+    | Write of Store.Path.t * value
+    | Mkdir of Store.Path.t
+    | Rm of Store.Path.t
+    | Directory of Store.Path.t
+    (* | Directory_part not implemented *)
+    | Get_perms of Store.Path.t
+    | Set_perms of Store.Path.t * Perms.Node.t
+    | Watch of Store.Path.t * token
+    | Unwatch of Store.Path.t * token
+    | Reset_watches
+    | Transaction_start
+    | Transaction_end of bool
+    | Introduce of domid * nativeint * int
+    | Release of int
+    | Get_domain_path of domid
+    | Is_domain_introduced of domid
+    | Set_target of domid * domid
+    | LiveUpdate
+
+  type state =
+    { store: Store.t
+    ; doms: Domains.domains
+    ; cons: Connections.t
+    ; domids: int array }
+
+  let path = list path_element
+
+  let token = printable_string
+
+  let domid state = oneofa ~print:Print.int state.domids
+
+  let cmd state =
+    let domid = domid state in
+    let cmd_read = Case.make "READ" path (fun path -> Read path) in
+    let cmd_write =
+      Case.make "WRITE" (pair path value) (fun (path, value) ->
+          Write (path, value))
+    in
+    let cmd_mkdir = Case.make "MKDIR" path (fun path -> Mkdir path) in
+    let cmd_rm = Case.make "RM" path (fun path -> Rm path) in
+    let cmd_directory =
+      Case.make "DIRECTORY" path (fun path -> Directory path)
+    in
+    let cmd_get_perms =
+      Case.make "GET_PERMS" path (fun path -> Get_perms path)
+    in
+    let cmd_set_perms =
+      Case.make "SET_PERMS"
+        (pair path (perms domid))
+        (fun (path, perms) -> Set_perms (path, perms))
+    in
+    let cmd_watch =
+      Case.make "WATCH" (pair path token) (fun (path, token) ->
+          Watch (path, token))
+    in
+    let cmd_unwatch =
+      Case.make "UNWATCH" (pair path token) (fun (path, token) ->
+          Unwatch (path, token))
+    in
+    let cmd_reset_watches =
+      Case.make "RESET_WATCHES" unit (fun () -> Reset_watches)
+    in
+    let cmd_tx_start =
+      Case.make "TRANSACTION_START" unit (fun () -> Transaction_start)
+    in
+    let cmd_tx_end =
+      Case.make "TRANSACTION_END" bool (fun commit -> Transaction_end commit)
+    in
+    let cmd_introduce =
+      Case.make "INTRODUCE" (triple domid int int) (fun (domid, gfn, port) ->
+          Introduce (domid, Nativeint.of_int gfn, port))
+    in
+    let cmd_release = Case.make "RELEASE" domid (fun domid -> Release domid) in
+    let cmd_get_domain_path =
+      Case.make "GET_DOMAIN_PATH" domid (fun domid -> Get_domain_path domid)
+    in
+    let cmd_is_domain_introduced =
+      Case.make "IS_DOMAIN_INTRODUCED" domid (fun domid ->
+          Is_domain_introduced domid)
+    in
+    let cmd_set_target =
+      Case.make "SET_TARGET" (pair domid domid) (fun (domid, tdomid) ->
+          Set_target (domid, tdomid))
+    in
+    let cmd_live_update =
+      Case.make "CONTROL live-update" unit (fun () -> LiveUpdate)
+    in
+    let rev = function
+      | Read a ->
+          Case.call cmd_read a
+      | Write (p, v) ->
+          Case.call cmd_write (p, v)
+      | Mkdir a ->
+          Case.call cmd_mkdir a
+      | Rm a ->
+          Case.call cmd_rm a
+      | Directory a ->
+          Case.call cmd_directory a
+      | Get_perms a ->
+          Case.call cmd_get_perms a
+      | Set_perms (p, v) ->
+          Case.call cmd_set_perms (p, v)
+      | Watch (p, t) ->
+          Case.call cmd_watch (p, t)
+      | Unwatch (p, t) ->
+          Case.call cmd_unwatch (p, t)
+      | Reset_watches ->
+          Case.call cmd_reset_watches ()
+      | Transaction_start ->
+          Case.call cmd_tx_start ()
+      | Transaction_end a ->
+          Case.call cmd_tx_end a
+      | Introduce (d, g, p) ->
+          Case.call cmd_introduce (d, Nativeint.to_int g, p)
+      | Release a ->
+          Case.call cmd_release a
+      | Get_domain_path a ->
+          Case.call cmd_get_domain_path a
+      | Is_domain_introduced a ->
+          Case.call cmd_is_domain_introduced a
+      | Set_target (d, t) ->
+          Case.call cmd_set_target (d, t)
+      | LiveUpdate ->
+          Case.call cmd_live_update ()
+    in
+    let open Case in
+    sum ~rev
+      [ to_sum cmd_read
+      ; to_sum cmd_write
+      ; to_sum cmd_mkdir
+      ; to_sum cmd_rm
+      ; to_sum cmd_directory
+      ; to_sum cmd_get_perms
+      ; to_sum cmd_set_perms
+      ; to_sum cmd_watch
+      ; to_sum cmd_unwatch
+      ; to_sum cmd_reset_watches
+      ; to_sum cmd_tx_start
+      ; to_sum cmd_tx_end
+      ; to_sum cmd_introduce
+      ; to_sum cmd_release
+      ; to_sum cmd_get_domain_path
+      ; to_sum cmd_is_domain_introduced
+      ; to_sum cmd_set_target
+      ; to_sum cmd_live_update ]
+
+  let run tid =
+    let open Xenstore.Queueop in
+    function
+    | Read p ->
+        read tid Store.Path.(to_string p)
+    | Write (p, v) ->
+        write tid Store.Path.(to_string p) v
+    | Mkdir p ->
+        mkdir tid Store.Path.(to_string p)
+    | Rm p ->
+        rm tid Store.Path.(to_string p)
+    | Directory p ->
+        directory tid Store.Path.(to_string p)
+    | Get_perms p ->
+        getperms tid Store.Path.(to_string p)
+    | Set_perms (p, v) ->
+        setperms tid Store.Path.(to_string p) Perms.Node.(to_string v)
+    | Watch (p, t) ->
+        watch Store.Path.(to_string p) t
+    | Unwatch (p, t) ->
+        unwatch Store.Path.(to_string p) t
+    | Reset_watches ->
+        let open Xenbus in
+        fun con -> Xb.queue con (Xb.Packet.create 0 0 Xb.Op.Reset_watches "")
+    | Transaction_start ->
+        transaction_start
+    | Transaction_end c ->
+        transaction_end tid c
+    | Release d ->
+        release d
+    | Get_domain_path d ->
+        getdomainpath d
+    | Is_domain_introduced d ->
+        let open Xenbus in
+        fun con ->
+          Xb.queue con
+            (Xb.Packet.create 0 0 Xb.Op.Isintroduced (string_of_int d))
+    | Set_target (d, t) ->
+        let open Xenbus in
+        fun con ->
+          Xb.queue con
+            (Xb.Packet.create 0 0 Xb.Op.Set_target
+               (String.concat "\x00" [string_of_int d; string_of_int t]))
+    | LiveUpdate ->
+        debug ["live-update"; "-s"]
+    | Introduce (d, g, p) ->
+        introduce d g p
+end
+
+module Spec = struct
+  type cmd = New | Cmd of Command.domid * int option * Command.t
+
+  type state =
+    { xb: Xenbus.Xb.t
+    ; cnt: int
+    ; cmdstate: Command.state ref option
+    ; failure: (exn * string) option }
+
+  type sut = state ref
+
+  let doms = Domains.init (Event.init ()) ignore
+
+  let dom0 = Domains.create0 doms
+
+  let new_state () =
+    let cons = Connections.create () in
+    Connections.add_domain cons dom0 ;
+    let store = Store.create () in
+    let con = Perms.Connection.create 0 in
+    Store.mkdir store con ["tool"] ;
+    {Command.store; doms; cons; domids= [|0|]}
+
+  let print = function
+    | New ->
+        "NEW"
+    | Cmd (d, t, c) ->
+        let s = new_state () in
+        let cmd = Command.cmd s in
+        (Option.get (triple (Command.domid s) (option int) cmd).print) (d, t, c)
+
+  let shrink = function
+    | New ->
+        Iter.empty
+    | Cmd (d, t, c) ->
+        let s = new_state () in
+        let cmd = Command.cmd s in
+        Iter.map (fun (d, t, c) -> Cmd (d, t, c))
+        @@ (Option.get (triple (Command.domid s) (option int) cmd).shrink)
+             (d, t, c)
+
+  let arb_cmd state =
+    ( match state.cmdstate with
+    | None ->
+        always New
+    | Some s ->
+        let cmd = Command.cmd !s in
+        QCheck.map
+          (fun (d, t, c) -> Cmd (d, t, c))
+          ~rev:(fun (Cmd (d, t, c)) -> (d, t, c))
+        @@ triple (Command.domid !s) (option int) cmd )
+    |> set_print print |> set_shrink shrink
+
+  (*    |> set_collect (fun (_, _, c) -> (Option.get cmd.QCheck.collect) c)*)
+
+  let init_state =
+    {cnt= 0; xb= Xenbus.Xb.open_fd Unix.stdout; cmdstate= None; failure= None}
+
+  let precond cmd s =
+    match (cmd, s.cmdstate) with
+    | New, None ->
+        true
+    | New, _ ->
+        false
+    | Cmd _, None ->
+        false
+    | Cmd (_, _, Command.Release 0), _ ->
+        false
+    | _ ->
+        true
+
+  let next_state cmd state =
+    { ( try
+          assume (precond cmd state) ;
+          match cmd with
+          | New ->
+              {state with cmdstate= Some (ref @@ new_state ())}
+          | Cmd (domid, tid, cmd) ->
+              let tid = match tid with None -> 0 | Some id -> 1 + id in
+              Command.run tid cmd state.xb ;
+              let s = !(Option.get state.cmdstate) in
+              let con = Connections.find_domain s.Command.cons domid in
+              Queue.clear con.xb.pkt_out ;
+              let run_packet packet =
+                let tid, rid, ty, data = Xenbus.Xb.Packet.unpack packet in
+                let req = {Packet.tid; Packet.rid; Packet.ty; Packet.data} in
+                Process.process_packet ~store:s.Command.store
+                  ~cons:s.Command.cons ~doms:s.Command.doms ~con ~req ;
+                Process.write_access_log ~ty ~tid
+                  ~con:(Connection.get_domstr con)
+                  ~data ;
+                let packet = Connection.peek_output con in
+                let tid, _rid, ty, data = Xenbus.Xb.Packet.unpack packet in
+                Process.write_answer_log ~ty ~tid
+                  ~con:(Connection.get_domstr con)
+                  ~data
+              in
+              Queue.iter run_packet state.xb.pkt_out ;
+              Queue.clear state.xb.pkt_out ;
+              state
+        with e ->
+          let bt = Printexc.get_backtrace () in
+          {state with failure= Some (e, bt)} )
+      with
+      cnt= state.cnt + 1 }
+
+  let init_sut () = ref init_state
+
+  let cleanup _ = ()
+
+  module P = struct
+    type t = string list
+
+    let compare = compare
+  end
+
+  module PathMap = Map.Make (P)
+
+  module DomidMap = Map.Make (struct
+    type t = Xenctrl.domid
+
+    let compare = compare
+  end)
+
+  module IntMap = Map.Make (struct
+    type t = int
+
+    let compare = compare
+  end)
+
+  module FDMap = Map.Make (struct
+    type t = Unix.file_descr
+
+    let compare = compare
+  end)
+
+  let map_of_store s =
+    let m = ref PathMap.empty in
+    Store.dump_fct s (fun path node -> m := PathMap.add path node !m) ;
+    !m
+
+  let node_equiv n n' =
+    Perms.equiv (Store.Node.get_perms n) (Store.Node.get_perms n')
+    && Store.Node.get_name n = Store.Node.get_name n'
+    && Store.Node.get_value n = Store.Node.get_value n'
+
+  let store_root_equiv s s' =
+    if not (PathMap.equal node_equiv (map_of_store s) (map_of_store s')) then
+      let b = Store.dump_store_buf s.root in
+      let b' = Store.dump_store_buf s'.root in
+      Test.fail_reportf "Store trees are not equivalent:\n %s\n <>\n %s"
+        (Buffer.contents b) (Buffer.contents b')
+    else true
+
+  let map_of_domid_table tbl = Hashtbl.fold DomidMap.add tbl DomidMap.empty
+
+  let map_of_quota q = map_of_domid_table q.Quota.cur
+
+  let store_quota_equiv root root' q q' =
+    let _ =
+      DomidMap.merge
+        (fun domid q q' ->
+          let q = Option.value ~default:(-1) q in
+          let q' = Option.value ~default:(-1) q' in
+          if q <> q' then
+            let b = Store.dump_store_buf root in
+            let b' = Store.dump_store_buf root' in
+            Test.fail_reportf "quota mismatch on %d: %d <> %d\n%s\n%s\n" domid q
+              q' (Buffer.contents b) (Buffer.contents b')
+          else Some q)
+        (map_of_quota q) (map_of_quota q')
+    in
+    true
+
+  let store_equiv s s' =
+    store_root_equiv s s'
+    && store_quota_equiv s.root s'.root (Store.get_quota s) (Store.get_quota s')
+
+  let map_of_domains d = map_of_domid_table d.Domains.table
+
+  let domain_equiv d d' =
+    Domain.get_id d = Domain.get_id d'
+    && Domain.get_remote_port d = Domain.get_remote_port d'
+
+  let domains_equiv d d' =
+    DomidMap.equal domain_equiv (map_of_domains d) (map_of_domains d')
+
+  let map_of_fd_table tbl = Hashtbl.fold FDMap.add tbl FDMap.empty
+
+  let map_of_int_table tbl = Hashtbl.fold IntMap.add tbl IntMap.empty
+
+  let list_of_queue q = Queue.fold (fun acc e -> e :: acc) [] q
+
+  let connection_equiv c c' =
+    let l = list_of_queue c.Connection.xb.pkt_out in
+    let l' = list_of_queue c'.Connection.xb.pkt_out in
+    if List.length l <> List.length l' || List.exists2 ( <> ) l l' then (
+      let print_packets l =
+        l
+        |> List.rev_map (fun p ->
+               let tid, rid, ty, data = Xenbus.Packet.unpack p in
+               let tystr = Xenbus.Xb.Op.to_string ty in
+               Printf.sprintf "tid=%d, rid=%d, ty=%s, data=%s" tid rid tystr
+                 (String.escaped data))
+        |> String.concat "\n"
+      in
+      let r = print_packets l in
+      let r' = print_packets l' in
+      Test.fail_reportf "Replies not equal:\n%s\n <>\n %s" r r' )
+    else
+      let n = Connection.number_of_transactions c in
+      let n' = Connection.number_of_transactions c' in
+      if n <> n' then Test.fail_reportf "TX count mismatch: %d <> %d" n n'
+      else true
+
+  let connections_equiv c c' =
+    FDMap.equal connection_equiv
+      (map_of_fd_table c.Connections.anonymous)
+      (map_of_fd_table c'.Connections.anonymous)
+    && IntMap.equal connection_equiv
+         (map_of_int_table c.Connections.domains)
+         (map_of_int_table c'.Connections.domains)
+
+  let dump_load s =
+    let tmp = Filename.temp_file "xenstored" "qcheck.dump" in
+    finally
+      (fun () ->
+        let fds = {Xenstored.DB.rw_sock= None; ro_sock= None} in
+        Xenstored.DB.to_file fds !s.Command.store !s.Command.cons tmp ;
+        s := new_state () ;
+        let _fds', errors =
+          Xenstored.DB.from_file ~live:true !s.Command.store !s.Command.doms
+            !s.Command.cons tmp
+        in
+        if errors > 0 then
+          Test.fail_reportf "Errors during live update: %d" errors)
+      (fun () -> Sys.remove tmp)
+
+  let run_cmd cmd state sut =
+    ( match state.failure with
+    | None ->
+        true
+    | Some (e, bt) ->
+        Test.fail_reportf "Exception %s, backtrace: %s" (Printexc.to_string e)
+          bt )
+    &&
+    match cmd with
+    | New ->
+        sut := next_state cmd !sut ;
+        true
+    | Cmd (0, _, Command.LiveUpdate) ->
+        let s = !sut.cmdstate in
+        let store1 = Store.copy !(Option.get s).store in
+        let doms1 = !(Option.get s).doms in
+        dump_load (Option.get s) ;
+        (* the reply is not expected to be equivalent: after a live update the
+           reply queue is empty, so don't compare connections
+        *)
+        store_equiv store1 !(Option.get s).store
+        && domains_equiv doms1 !(Option.get s).doms
+    | Cmd (_, _, cmd') -> (
+        (* TODO: also got same reply, and check for equivalence on the actual Live Update *)
+        sut := next_state cmd !sut ;
+        let ids = Hashtbl.create 47 in
+        Connections.iter !(Option.get state.cmdstate).cons (fun con ->
+            Hashtbl.add ids (Connection.get_id con) con.next_tid) ;
+        let state = next_state cmd state in
+        match (!sut.failure, state.cmdstate, !sut.cmdstate) with
+        | None, Some s, Some s' ->
+            let r = cmd' = Command.Transaction_start (* txid can change *)
+                    || connections_equiv !s.cons !s'.cons in
+            Connections.iter !(Option.get state.cmdstate).cons (fun con ->
+                let tid = Hashtbl.find ids (Connection.get_id con) in
+                if con.next_tid <> tid then (
+                  let (_ : bool) = Connection.end_transaction con tid None in
+                  con.next_tid <- tid )) ;
+            r
+        | None, None, None ->
+            true
+        | None, None, _ ->
+            Test.fail_report "state uninit"
+        | None, _, None ->
+            Test.fail_report "sut uninit"
+        | Some (e, bt), _, _ ->
+            Test.fail_reportf "Exception %s, backtrace: %s"
+              (Printexc.to_string e) bt )
+end
+
+module States = QCSTM.Make (Spec)
+
+(* && watches_equiv c c' *)
+
+(* let test = States.agree_test ~count:100 ~name:"live-update" *)
+
+let test =
+  Test.make ~name:"live-update" ~count:100
+    (States.arb_cmds Spec.init_state)
+    States.agree_prop
+
+let () = QCheck_base_runner.run_tests_main [test]
diff --git a/tools/ocaml/xenstored/test/pathtree.ml b/tools/ocaml/xenstored/test/pathtree.ml
new file mode 100644
index 0000000000..50cbb0302d
--- /dev/null
+++ b/tools/ocaml/xenstored/test/pathtree.ml
@@ -0,0 +1,40 @@
+module M = Map.Make(String)
+type 'a t = { data: 'a; children: 'a t M.t }
+
+type 'a tree = 'a t
+let of_data data = { data; children = M.empty }
+
+let update key f t = { t with children = M.update key f t.children }
+let set t data = { t with data }
+
+module Cursor = struct
+  type 'a t = { tree: 'a tree; up: ('a t * M.key) option }
+
+  let of_tree tree = { tree; up = None }
+
+  let create parent key tree = { tree; up = Some (parent, key) }
+
+  let down cur k =
+    M.find_opt k cur.tree.children |> Option.map @@ create cur k
+
+  let down_implicit_create ~implicit cur k =
+    match down cur k with
+    | Some r -> r
+    | None -> cur.tree.data |> implicit |> of_data |> create cur k
+
+  let rec to_tree t = match t.up with
+    | None -> t.tree
+    | Some (parent, key) ->
+        to_tree { parent with tree = update key (fun _ -> Some t.tree) parent.tree }
+
+  let set cur data = { cur with tree = set cur.tree data }
+  let get cur = cur.tree.data
+
+  let rm_child cur key = { cur with tree = update key (fun _ -> None) cur.tree}
+
+  (* TODO: down with implicit create *)
+end
+
+
+
+let rec map f t = { data = f t.data; children = M.map (map f) t.children }
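The `Cursor` above is a zipper: walking down records the path taken, and `to_tree` rebuilds the tree on the way back up. A minimal standalone sketch of the idea (local redeclarations of the types; record-update syntax replaces the module's `set`/`update` helpers):

```ocaml
(* Minimal zipper sketch: descend into a child, edit it, rebuild the tree. *)
module M = Map.Make (String)

type 'a tree = { data : 'a; children : 'a tree M.t }

(* a cursor carries the focused subtree plus the path back to the root *)
type 'a cursor = { tree : 'a tree; up : ('a cursor * M.key) option }

let of_data data = { data; children = M.empty }

(* move the focus down into child [k], remembering how to get back *)
let down cur k =
  M.find_opt k cur.tree.children
  |> Option.map (fun t -> { tree = t; up = Some (cur, k) })

(* rebuild the whole tree by re-inserting the focused subtree at each level *)
let rec to_tree c =
  match c.up with
  | None -> c.tree
  | Some (parent, key) ->
      to_tree
        { parent with
          tree =
            { parent.tree with
              children = M.add key c.tree parent.tree.children } }

let () =
  let root = { data = "root"; children = M.singleton "a" (of_data "old") } in
  let cur = { tree = root; up = None } in
  match down cur "a" with
  | None -> assert false
  | Some c ->
      (* edit at the focus, then zip back up *)
      let c = { c with tree = { c.tree with data = "new" } } in
      let root' = to_tree c in
      print_endline (M.find "a" root'.children).data
```

Because each step is a pure record update, the original tree stays valid after the edit, which is what makes this representation convenient for modelling transactions.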
diff --git a/tools/ocaml/xenstored/test/testable.ml b/tools/ocaml/xenstored/test/testable.ml
new file mode 100644
index 0000000000..2fa749fbb3
--- /dev/null
+++ b/tools/ocaml/xenstored/test/testable.ml
@@ -0,0 +1,379 @@
+let is_output_devnull = Unix.stat "/dev/null" = Unix.fstat Unix.stdout
+
+let () =
+  if not is_output_devnull then (
+    Printexc.record_backtrace true ;
+    Fmt_tty.setup_std_outputs () ;
+    try
+      let cols =
+        let ch = Unix.open_process_in "tput cols" in
+        Stdext.finally
+          (fun () -> input_line ch |> int_of_string)
+          (fun () -> Unix.close_process_in ch)
+      in
+      Format.set_margin cols
+    with _ -> () )
+
+let devnull () = Unix.openfile "/dev/null" [] 0
+
+let xb = Xenbus.Xb.open_fd (devnull ())
+
+module Command = struct
+  type path = Store.Path.t
+
+  type value = string
+
+  type token = string
+
+  type domid = int
+
+  type t = Xenbus.Packet.t
+
+  open Xenstore.Queueop
+
+  let cmd f =
+    Queue.clear xb.pkt_out ;
+    let () = f xb in
+    let p = Xenbus.Xb.peek_output xb in
+    Queue.clear xb.pkt_out ; p
+
+  let pathcmd f pathgen tid state = cmd @@ f tid @@ pathgen state
+
+  let cmd_read gen tid state = pathcmd read gen tid state
+
+  let cmd_write pathgen v tid state = cmd @@ write tid (pathgen state) v
+
+  let cmd_mkdir g t s = pathcmd mkdir g t s
+
+  let cmd_rm g t s = pathcmd rm g t s
+
+  let cmd_directory g t s = pathcmd directory g t s
+
+  let cmd_getperms g t s = pathcmd getperms g t s
+
+  let cmd_setperms pathgen vgen tid state =
+    cmd @@ setperms tid (pathgen state) (Perms.Node.to_string @@ vgen state)
+
+  let cmd_watch pathgen token _ state = cmd @@ watch (pathgen state) token
+
+  let cmd_unwatch pathgen token _ state = cmd @@ unwatch (pathgen state) token
+
+  let cmd_reset_watches _tid _state =
+    cmd
+    @@ fun con ->
+    Xenbus.Xb.queue con
+      (Xenbus.Xb.Packet.create 0 0 Xenbus.Xb.Op.Reset_watches "")
+
+  let cmd_transaction_start _ _ = cmd @@ transaction_start
+
+  let cmd_transaction_end commit tid _ = cmd @@ transaction_end tid commit
+
+  let domcmd f idgen _ state = cmd @@ f @@ idgen state
+
+  let cmd_release idgen state = domcmd release idgen state
+
+  let cmd_getdomainpath i s = domcmd getdomainpath i s
+
+  let cmd_isintroduced i t s =
+    domcmd
+      (fun d con ->
+        Xenbus.Xb.queue con
+          (Xenbus.Xb.Packet.create 0 0 Xenbus.Xb.Op.Isintroduced
+             (string_of_int d)))
+      i t s
+
+  let cmd_set_target idgen1 idgen2 _ state =
+    let d = idgen1 state in
+    let t = idgen2 state in
+    cmd
+    @@ fun con ->
+    Xenbus.Xb.queue con
+      (Xenbus.Xb.Packet.create 0 0 Xenbus.Xb.Op.Set_target
+         (String.concat "\x00" [string_of_int d; string_of_int t]))
+
+  let cmd_liveupdate _ _ = cmd @@ debug ["live-update"; "-s"]
+
+  let cmd_introduce id port _ _state = cmd @@ introduce id 0n port
+
+  let pp_dump = Types.pp_dump_packet
+
+  let precond cmd _state =
+    match cmd with
+    | {Xenbus.Packet.ty= Xenbus.Xb.Op.Release; data= "0\000"} ->
+        (* can't release Dom0 in the tests, or we crash due to the shared dom0 backend *)
+        false
+    | {ty= Xenbus.Xb.Op.Rm; data= ""} ->
+        (* this is expected to cause inconsistencies on pre-created paths like /local *)
+        false
+    | _ ->
+        true
+end
+
+let with_logger ~on_exn f =
+  if is_output_devnull then f ()
+  else
+    let old = (!Logging.xenstored_logger, !Logging.access_logger) in
+    let logs = ref [] in
+    let write ?(level = Logging.Debug) s =
+      let msg = Printf.sprintf "%s %s" (Logging.string_of_level level) s in
+      logs := msg :: !logs
+    in
+    let logger =
+      Some {Logging.stop= ignore; restart= ignore; rotate= ignore; write}
+    in
+    Logging.xenstored_logger := logger ;
+    Logging.access_logger := logger ;
+    Stdext.finally
+      (fun () ->
+        try f ()
+        with e ->
+          let bt = Printexc.get_raw_backtrace () in
+          on_exn e bt (List.rev !logs))
+      (fun () ->
+        Logging.xenstored_logger := fst old ;
+        Logging.access_logger := snd old)
+
+type t =
+  { store: Store.t
+  ; cons: Connections.t
+  ; doms: Domains.domains
+  ; mutable anon: Unix.file_descr option
+  ; live_update: bool
+  ; txidtbl: (int, int) Hashtbl.t }
+
+let () =
+  Logging.xenstored_log_level := Logging.Debug ;
+  Logging.access_log_special_ops := true ;
+  Logging.access_log_transaction_ops := true ;
+  let name, f = Filename.open_temp_file "xenstored" "port" in
+  Domains.xenstored_port := name ;
+  Stdext.finally (fun () -> Printf.fprintf f "%d" 1) (fun () -> close_out f) ;
+  Domains.xenstored_kva := "/dev/zero" ;
+  (* entries from a typical oxenstored.conf *)
+  Transaction.do_coalesce := true ;
+  Perms.activate := true ;
+  Quota.activate := true ;
+  Quota.maxent := 8192 ;
+  Quota.maxsize := 2048 ;
+  Define.maxwatch := 512 ;
+  Define.maxtransaction := 10 ;
+  Define.maxrequests := 1024
+
+(* We MUST NOT release dom0, or we crash.
+   It is shared between multiple tests because it keeps an FD open,
+   and we want to avoid EMFILE. *)
+
+let create ?(live_update = false) () =
+  let store = Store.create () in
+  let cons = Connections.create () in
+  let doms = Domains.init (Event.init ()) ignore in
+  let dom0 = Domains.create0 doms in
+  let txidtbl = Hashtbl.create 47 in
+  Connections.add_domain cons dom0 ;
+  {store; cons; doms; anon= None; live_update; txidtbl}
+
+let cleanup t = Connections.iter t.cons Connection.close
+
+let init t =
+  let local = Store.Path.of_string "/local" in
+  let con = Perms.Connection.create 0 in
+  Store.mkdir t.store con local ;
+  (* Store.mkdir t.store con (Store.Path.of_string "/tool") ; *)
+  let fd = devnull () in
+  t.anon <- Some fd ;
+  Connections.add_anonymous t.cons fd
+
+let dump_load s =
+  let tmp = Filename.temp_file "xenstored" "qcheck.dump" in
+  Stdext.finally
+    (fun () ->
+      Xenstored.DB.to_file None s.store s.cons tmp ;
+      let s' = create () in
+      (* preserve FD *)
+      s'.anon <- s.anon ;
+      s.anon <- None ;
+      let _fds', errors =
+        Xenstored.DB.from_file ~live:true s'.store s'.doms s'.cons tmp
+      in
+      if errors > 0 then
+        failwith (Printf.sprintf "Errors during live update: %d" errors) ;
+      s')
+    (fun () -> Sys.remove tmp)
+
+let is_live_update = function
+  | {Xenbus.Packet.ty= Xenbus.Xb.Op.Debug; data= "live-update\000-s\000"} ->
+      true
+  | _ ->
+      false
+
+let is_tx_start p = p.Xenbus.Packet.ty = Xenbus.Xb.Op.Transaction_start
+
+let with_tmpfile prefix write f =
+  let name, ch = Filename.open_temp_file prefix ".txt" in
+  Stdext.finally
+    (fun () ->
+      Stdext.finally (fun () -> write ch) (fun () -> close_out ch) ;
+      f name)
+    (fun () -> Sys.remove name)
+
+let with_pp_to_file prefix pp x f =
+  let write ch =
+    let ppf = Format.formatter_of_out_channel ch in
+    Format.pp_set_margin ppf @@ Format.get_margin () ;
+    pp ppf x ;
+    Fmt.flush ppf ()
+  in
+  with_tmpfile prefix write f
+
+let run_cmd_get_output ?(ok_codes = [0]) cmd =
+  let cmd = Array.of_list cmd in
+  let ch = Unix.open_process_args_in cmd.(0) cmd in
+  Stdext.finally
+    (fun () ->
+      let lines = ref [] in
+      try
+        while true do
+          lines := input_line ch :: !lines
+        done ;
+        assert false
+      with End_of_file -> List.rev !lines |> String.concat "\n")
+    (fun () ->
+      match Unix.close_process_in ch with
+      | Unix.WEXITED code when List.mem code ok_codes ->
+          ()
+      | status ->
+          Crowbar.failf "%a %a" (Fmt.array Fmt.string) cmd
+            Types.pp_process_status status)
+
+let call_diff x y =
+  let ok_codes = [0; 1] in
+  run_cmd_get_output ~ok_codes
+    [ "/usr/bin/git"
+    ; "diff"
+    ; "-U10000" (* we want to see the entire state, where possible *)
+    ; "--no-index"
+    ; ( "--word-diff="
+      ^ if Fmt.style_renderer Fmt.stdout = `Ansi_tty then "color" else "plain"
+      )
+    ; "--color-moved=dimmed-zebra"
+    ; x
+    ; y ]
+
+let check_eq_exn prefix ~pp ~eq x y =
+  if not @@ eq x y then
+    if is_output_devnull then failwith "different"
+    else
+      with_pp_to_file "expected" pp x
+      @@ fun xfile ->
+      with_pp_to_file "actual" pp y
+      @@ fun yfile ->
+      failwith
+      @@ Printf.sprintf "%s disagreement: %s" prefix (call_diff xfile yfile)
+
+let run next_tid t (domid, cmd) =
+  let con =
+    match domid with
+    | 0 ->
+        Connections.find !t.cons (Option.get !t.anon)
+    | _ ->
+        Connections.find_domain !t.cons domid
+  in
+  (* clear out any watch events; TODO: don't *)
+  Connections.iter !t.cons (fun con -> Queue.clear con.xb.pkt_out) ;
+  (* TODO: use the global live update state that processing the command sets, but remember to reset it *)
+  if is_live_update cmd then
+    if !t.live_update then (
+      let t0 = !t in
+      let t' = dump_load t0 in
+      Connections.iter t'.cons (fun con ->
+          Connection.iter_transactions con
+          @@ fun _ tx ->
+          (* TODO: only mark transactions with tx.Transaction.operations <> []
+             once snapshot state is dumped correctly *)
+          Transaction.mark_failed tx) ;
+      Logging.info "store" "store after live-update: %s" (Fmt.to_to_string Types.pp_dump_store t'.store);
+      Logging.info "store" "store before live-update: %s" (Fmt.to_to_string Types.pp_dump_store t0.store);
+      check_eq_exn "store" ~pp:Types.pp_dump_store ~eq:Types.equal_store
+        t0.store t'.store ;
+      (* TODO: now we have a disagreement here... so we can't test this until TX state is restored *)
+      (*check_eq_exn "connections" ~pp:Types.pp_dump_connections
+        ~eq:Types.equal_connections t0.cons t'.cons ;*)
+      check_eq_exn "domains" ~pp:Types.pp_dump_domains ~eq:Types.equal_domains
+        t0.doms t'.doms ;
+      (* avoid double close on anonymous conn *)
+      Connections.iter_domains t0.cons Connection.close ;
+      t := {t' with txidtbl= !t.txidtbl} )
+    else begin
+      Logging.debug "testable" "BEFORE TXMARK" ;
+      Connections.iter !t.cons (fun con ->
+          Connection.iter_transactions con
+          @@ fun txid tx ->
+          Logging.debug "testable" "marking to fail %d" txid ;
+          (* if tx.Transaction.operations <> [] then see above TODO *)
+          Transaction.mark_failed tx)
+    end;
+  let run_packet packet =
+    let tid, rid, ty, data = Xenbus.Xb.Packet.unpack packet in
+    Logging.debug "testable" "tid: %d" tid ;
+    let tid = if tid <> 0 then Hashtbl.find !t.txidtbl tid else tid in
+    let req : Packet.request =
+      {Packet.tid; Packet.rid; Packet.ty; Packet.data}
+    in
+    Process.process_packet ~store:!t.store ~cons:!t.cons ~doms:!t.doms ~con ~req ;
+    Process.write_access_log ~ty ~tid ~con:(Connection.get_domstr con) ~data ;
+    let packet = Connection.peek_output con in
+    if ty = Xenbus.Xb.Op.Transaction_start then (
+      Logging.debug "testable" "Adding mapping for tid %d" next_tid ;
+      Hashtbl.add !t.txidtbl next_tid (con.Connection.next_tid - 1) ) ;
+    let tid, _rid, ty, data = Xenbus.Xb.Packet.unpack packet in
+    Process.write_answer_log ~ty ~tid ~con:(Connection.get_domstr con) ~data
+  in
+  (* TODO: also a Nodes command with multiple packets *)
+  (* TODO: act on and clear watches? *)
+  run_packet cmd ;
+  con
+
+let is_tx_marked_fail con p =
+  let tid = p.Xenbus.Packet.tid in
+  if tid = 0 then false
+  else begin
+    let r =
+      try (Connection.get_transaction con tid).must_fail
+      with Not_found -> false
+    in
+    Logging.info "testable" "TXI %d: %b" tid r;
+    r
+  end
+
+let run2 next_tid t t' (domid, cmd) =
+  let con = run next_tid t (domid, cmd) in
+  let con' = run next_tid t' (domid, cmd) in
+  (* TODO: ignore txid mismatches on transactions *)
+  if not @@ (is_tx_start cmd || is_tx_marked_fail con cmd) then
+    (* TODO: ignore disagreements when transactions are marked as failed *)
+    check_eq_exn "reply packets" ~pp:Types.pp_dump_xb ~eq:Types.equal_xb_pkts
+      con.xb con'.xb ;
+  Queue.clear con'.xb.pkt_out ;
+  Queue.clear con.xb.pkt_out
+
+module type S = sig
+  type cmd
+
+  type state
+
+  type sut
+
+  val init_state : state
+
+  val next_state : cmd -> state -> state
+
+  val init_sut : unit -> sut
+
+  val cleanup : sut -> unit
+
+  val run_cmd : cmd -> state -> sut -> bool
+
+  val precond : cmd -> state -> bool
+
+  val pp : cmd Fmt.t
+end
diff --git a/tools/ocaml/xenstored/test/types.ml b/tools/ocaml/xenstored/test/types.ml
new file mode 100644
index 0000000000..f46d20b245
--- /dev/null
+++ b/tools/ocaml/xenstored/test/types.ml
@@ -0,0 +1,437 @@
+(*
+ * Copyright (C) Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ *)
+
+let domid_first_reserved = 0x7FF0
+
+type 'a eq = 'a -> 'a -> bool
+
+let hashtable_equal (eq : 'a eq) h h' =
+  Hashtbl.length h = Hashtbl.length h'
+  && Hashtbl.fold
+       (fun k v acc ->
+         acc
+         && match Hashtbl.find_opt h' k with Some x -> eq v x | None -> false)
+       h true
+
+let list_equal (eq : 'a eq) l l' =
+  try List.for_all2 eq l l' with Invalid_argument _ -> false
+
+let queue_equal eq q q' =
+  Queue.length q = Queue.length q'
+  &&
+  let list_of_queue q = Queue.fold (fun acc e -> e :: acc) [] q in
+  list_equal eq (list_of_queue q) (list_of_queue q')
+
+let pp_process_status ppf = function
+  | Unix.WEXITED code ->
+      Fmt.pf ppf "exited with code %d" code
+  | Unix.WSIGNALED osig ->
+      Fmt.pf ppf "killed by signal %a" Fmt.Dump.signal osig
+  | Unix.WSTOPPED osig ->
+      Fmt.pf ppf "stopped by signal %a" Fmt.Dump.signal osig
+
+let pp_dump_ref dump =
+  Fmt.using ( ! ) Fmt.(dump |> Fmt.braces |> prefix (const string "ref"))
+
+let pp_file_descr = Fmt.using Disk.FD.to_int Fmt.int
+
+module Quota = struct
+  open Quota
+
+  let pp_dump =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "maxent" (fun q -> q.maxent) int
+      ; Dump.field "maxsize" (fun q -> q.maxsize) int
+      ; Dump.field "cur" (fun q -> q.cur) @@ Dump.hashtbl int int ]
+
+  let drop_dom0 h =
+    (* Quota is ignored for Dom0 and will be wrong in some situations:
+       - when domains die, any nodes they own are inherited by Dom0
+       - the root node is owned by Dom0; if its ownership is changed,
+         Dom0's quota will be off by one
+       Since Dom0's quota is not actually used, just drop it when comparing. *)
+    let h' = Hashtbl.copy h in
+    Hashtbl.remove h' 0;
+    h'
+
+  let equal q q' =
+    q.maxent = q'.maxent && q.maxsize = q'.maxsize
+    && hashtable_equal Int.equal (drop_dom0 q.cur) (drop_dom0 q'.cur)
+end
+let pp_dump_quota = Quota.pp_dump
+let equal_quota = Quota.equal
+
+module Store = struct
+  open Store
+
+  module Node = struct
+    open Node
+
+    let pp_dump ppf t =
+      let buf = dump_store_buf t in
+      Fmt.lines ppf (Buffer.contents buf)
+
+    let rec equal n n' =
+      Symbol.equal n.name n'.name
+      && Perms.equiv n.perms n'.perms
+      && String.equal n.value n'.value
+      && SymbolMap.equal equal n.children n'.children
+  end
+
+  module Path = struct
+    open Path
+
+    let pp_dump = Fmt.using to_string Fmt.string
+
+    let equal p p' = list_equal String.equal p p'
+
+    let hash (p : t) = Hashtbl.hash p
+
+    let compare (p : t) (p' : t) = compare p p'
+  end
+
+  let pp_dump =
+    let open Fmt in
+    (* only print relevant fields, expected to stay the same during live-update. *)
+    Dump.record
+      [ Dump.field "stat_transaction_coalesce"
+          (fun t -> t.stat_transaction_coalesce)
+          int
+      ; Dump.field "stat_transaction_abort"
+          (fun t -> t.stat_transaction_abort)
+          int
+      ; Dump.field "store" (fun t -> t.root) Node.pp_dump
+      ; Dump.field "quota" (fun t -> t.quota) Quota.pp_dump ]
+
+  let equal s s' =
+    (* ignore stats *)
+    Node.equal s.root s'.root && Quota.equal s.quota s'.quota
+end
+
+let pp_dump_store = Store.pp_dump
+let equal_store = Store.equal
+
+module Xb = struct
+  open Xenbus.Xb
+
+  module Op = struct
+    open Xenbus.Op
+
+    let pp_dump = Fmt.of_to_string to_string
+
+    let equal (op : t) (op' : t) = op = op'
+  end
+
+  module Packet = struct
+    open Xenbus.Packet
+
+    let pp_dump =
+      let open Fmt in
+      Dump.record
+        [ Dump.field "tid" get_tid int
+        ; Dump.field "rid" get_rid int
+        ; Dump.field "ty" get_ty Op.pp_dump
+        ; Dump.field "data" get_data Dump.string ]
+
+    let equal (p : t) (p' : t) =
+      (* ignore TXID, it can be different after a live-update *)
+      p.rid = p'.rid && p.ty = p'.ty && p.data = p'.data
+  end
+
+  module Partial = struct
+    open Xenbus.Partial
+
+    let pp_dump =
+      let open Fmt in
+      Dump.record
+        [ Dump.field "tid" (fun p -> p.tid) int
+        ; Dump.field "rid" (fun p -> p.rid) int
+        ; Dump.field "ty" (fun p -> p.ty) Op.pp_dump
+        ; Dump.field "len" (fun p -> p.len) int
+        ; Dump.field "buf" (fun p -> p.buf) Fmt.buffer ]
+
+    let equal p p' =
+      p.tid = p'.tid && p.rid = p'.rid && p.ty = p'.ty
+      && Buffer.contents p.buf = Buffer.contents p'.buf
+  end
+
+  let pp_dump_partial_buf ppf = function
+    | HaveHdr pkt ->
+        Fmt.pf ppf "HaveHdr %a" Partial.pp_dump pkt
+    | NoHdr (i, b) ->
+        Fmt.pf ppf "NoHdr(%d, %S)" i (Bytes.to_string b)
+
+  let equal_partial_buf buf buf' =
+    match (buf, buf') with
+    | HaveHdr pkt, HaveHdr pkt' ->
+        Partial.equal pkt pkt'
+    | NoHdr (i, b), NoHdr (i', b') ->
+        i = i' && b = b'
+    | HaveHdr _, NoHdr _ | NoHdr _, HaveHdr _ ->
+        false
+
+  let pp_backend ppf = function
+    | Fd {fd} ->
+        Fmt.prefix (Fmt.const Fmt.string "Fd ") pp_file_descr ppf fd
+    | Xenmmap _ ->
+        Fmt.const Fmt.string "Xenmmap _" ppf ()
+
+  let equal_backend b b' =
+    match (b, b') with
+    | Fd fd, Fd fd' ->
+        fd = fd'
+    | Xenmmap _, Xenmmap _ ->
+        true (* can't extract the FD to compare *)
+    | Fd _, Xenmmap _ | Xenmmap _, Fd _ ->
+        false
+
+  let pp_dump =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "backend" (fun x -> x.backend) pp_backend
+      ; Dump.field "pkt_in" (fun x -> x.pkt_in) @@ Dump.queue Packet.pp_dump
+      ; Dump.field "pkt_out" (fun x -> x.pkt_out) @@ Dump.queue Packet.pp_dump
+      ; Dump.field "partial_in" (fun x -> x.partial_in) pp_dump_partial_buf
+      ; Dump.field "partial_out" (fun x -> x.partial_out) Dump.string ]
+
+  let equal_pkts xb xb' =
+    let queue_eq = queue_equal Packet.equal in
+    queue_eq xb.pkt_in xb'.pkt_in
+    && queue_eq xb.pkt_out xb'.pkt_out
+    && xb.partial_in = xb'.partial_in
+    && xb.partial_out = xb'.partial_out
+
+  let equal xb xb' = equal_backend xb.backend xb'.backend && equal_pkts xb xb'
+end
+
+let pp_dump_packet = Xb.Packet.pp_dump
+let pp_dump_xb = Xb.pp_dump
+let equal_xb = Xb.equal
+let equal_xb_pkts = Xb.equal_pkts
+
+module Packet = struct
+  open Packet
+
+  let pp_dump_request =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "tid" (fun t -> t.tid) int
+      ; Dump.field "rid" (fun t -> t.rid) int
+      ; Dump.field "ty" (fun t -> t.ty) Xb.Op.pp_dump
+      ; Dump.field "data" (fun t -> t.data) Dump.string ]
+
+  let equal_req r r' =
+    r.tid = r'.tid && r.rid = r'.rid && r.ty = r'.ty && r.data = r'.data
+
+  let pp_dump_response ppf = function
+    | Reply str ->
+        Fmt.pf ppf "Reply %S" str
+    | Error str ->
+        Fmt.pf ppf "Error %S" str
+    | Ack _ ->
+        Fmt.string ppf "Ack"
+
+  let equal_response = response_equal
+end
+
+module Transaction = struct
+  open Transaction
+
+  let pp_dump_ty ppf = function
+    | Transaction.No ->
+        Fmt.string ppf "No"
+    | Full (id, orig, canonical) ->
+        Fmt.pf ppf "Full @[(%d, %a, %a)@]" id Store.pp_dump orig Store.pp_dump
+          canonical
+
+  let equal_ty t t' =
+    match (t, t') with
+    | Transaction.No, Transaction.No ->
+        true
+    | Transaction.Full _, Transaction.Full _ ->
+        (* We expect the trees not to be identical, so we ignore any differences here.
+           The reply comparison tests will find any mismatches in observable transaction state
+        *)
+        true
+    | Transaction.No, Transaction.Full _ | Transaction.Full _, Transaction.No ->
+        false
+
+  let equal_pathop (op, path) (op', path') =
+    op = op' && Store.Path.equal path path'
+
+  let pp_dump_op = Fmt.pair Packet.pp_dump_request Packet.pp_dump_response
+
+  let equal_op (req, reply) (req', reply') =
+    Packet.equal_req req req' && Packet.equal_response reply reply'
+
+  let pp_dump =
+    let open Fmt in
+    let open Transaction in
+    Dump.record
+      [ Dump.field "ty" (fun t -> t.ty) pp_dump_ty
+      ; Dump.field "start_count" (fun t -> t.start_count) int64
+      ; Dump.field "store" (fun t -> t.store) Store.pp_dump
+      ; Dump.field "quota" (fun t -> t.quota) Quota.pp_dump
+      ; Dump.field "must_fail" (fun t -> t.must_fail) Fmt.bool
+      ; Dump.field "paths" (fun t -> t.paths)
+        @@ Dump.list (pair Xb.Op.pp_dump Store.Path.pp_dump)
+      ; Dump.field "operations" (fun t -> t.operations)
+        @@ list (pair Packet.pp_dump_request Packet.pp_dump_response)
+      ; Dump.field "read_lowpath" (fun t -> t.read_lowpath)
+        @@ option Store.Path.pp_dump
+      ; Dump.field "write_lowpath" (fun t -> t.write_lowpath)
+        @@ option Store.Path.pp_dump ]
+
+  let equal t t' =
+    equal_ty t.ty t'.ty
+    (* ignored: quota at start of transaction, not relevant
+       && Quota.equal t.quota t'.quota *)
+    (*&& list_equal equal_pathop t.paths t'.paths *)
+    (*&& list_equal equal_op t.operations t'.operations*)
+    && t.must_fail = t'.must_fail
+    (* ignore lowpath, impossible to recreate from limited migration info *)
+    (*&& Option.equal Store.Path.equal t.read_lowpath t'.read_lowpath
+    && Option.equal Store.Path.equal t.write_lowpath t'.write_lowpath *)
+end
+
+module Connection = struct
+  open Connection
+
+  let pp_dump_watch =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "token" (fun w -> w.token) Dump.string
+      ; Dump.field "path" (fun w -> w.path) Dump.string
+      ; Dump.field "base" (fun w -> w.base) Dump.string
+      ; Dump.field "is_relative" (fun w -> w.is_relative) Fmt.bool ]
+
+  let pp_dump =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "xb" (fun c -> c.xb) Xb.pp_dump
+      ; Dump.field "transactions" (fun c -> c.transactions)
+        @@ Dump.hashtbl int Transaction.pp_dump
+      ; Dump.field "next_tid" (fun t -> t.next_tid) int
+      ; Dump.field "nb_watches" (fun c -> c.nb_watches) int
+      ; Dump.field "anonid" (fun c -> c.anonid) int
+      ; Dump.field "watches" (fun c -> c.watches)
+        @@ Dump.hashtbl Dump.string (Dump.list pp_dump_watch)
+      ; Dump.field "perm" (fun c -> c.perm)
+        @@ Fmt.using Perms.Connection.to_string Fmt.string ]
+
+  let equal c c' =
+    let watch_equal w w' =
+      (* avoid recursion, these must be physically equal *)
+      w.con == c && w'.con == c' && w.token = w'.token && w.path = w'.path
+      && w.base = w'.base
+      && w.is_relative = w'.is_relative
+    in
+    Xb.equal c.xb c'.xb
+    && hashtable_equal Transaction.equal c.transactions c'.transactions
+    (* next_tid ignored, not preserved *)
+    && hashtable_equal (list_equal watch_equal) c.watches c'.watches
+    && c.nb_watches = c'.nb_watches
+    (* anonid ignored, not preserved *)
+    (* && c.anonid = c'.anonid *) && c.perm = c'.perm
+
+  let equal_watch w w' =
+    equal w.con w'.con && w.token = w'.token && w.path = w'.path
+    && w.base = w'.base
+    && w.is_relative = w'.is_relative
+end
+
+module Trie = struct
+  open Trie
+
+  let pp_dump dump_elt =
+    Fmt.Dump.iter_bindings Trie.iter (Fmt.any "trie") Fmt.string
+      Fmt.(option dump_elt)
+
+  let plus1 _ _ acc = acc + 1
+
+  let length t = fold plus1 t 0
+
+  (* Trie.iter doesn't give full path so we can't compare the paths/values exactly.
+     They will be compared as part of the individual connections
+  *)
+  let equal _eq t t' = length t = length t'
+end
+
+module Connections = struct
+  open Connections
+
+  let pp_dump =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "anonymous" (fun t -> t.anonymous)
+        @@ Dump.hashtbl Fmt.(any "") Connection.pp_dump
+      ; Dump.field "domains" (fun t -> t.domains)
+        @@ Dump.hashtbl Fmt.int Connection.pp_dump
+      ; Dump.field "ports" (fun t -> t.ports)
+        @@ Dump.hashtbl
+             (Fmt.using Xeneventchn.to_int Fmt.int)
+             Connection.pp_dump
+      ; Dump.field "watches" (fun t -> t.watches)
+        @@ Trie.pp_dump (Dump.list Connection.pp_dump_watch) ]
+
+  let equal c c' =
+    hashtable_equal Connection.equal c.anonymous c'.anonymous
+    && hashtable_equal Connection.equal c.domains c'.domains
+    (* TODO: local port changes for now *)
+    (*&& hashtable_equal Connection.equal c.ports c'.ports *)
+    && Trie.equal (list_equal Connection.equal_watch) c.watches c'.watches
+end
+
+let pp_dump_connections = Connections.pp_dump
+let equal_connections = Connections.equal
+
+module Domain = struct
+  open Domain
+
+  let pp_dump =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "id" Domain.get_id int
+      ; Dump.field "remote_port" Domain.get_remote_port int
+      ; Dump.field "bad_client" Domain.is_bad_domain bool
+      ; Dump.field "io_credit" Domain.get_io_credit int
+      ; Dump.field "conflict_credit" (fun t -> t.conflict_credit) float
+      ; Dump.field "caused_conflicts" (fun t -> t.caused_conflicts) int64 ]
+
+  (* ignore stats fields *)
+  let equal t t' = t.id = t'.id && t.remote_port = t'.remote_port
+end
+
+module Domains = struct
+  open Domains
+
+  let pp_dump =
+    let open Fmt in
+    Dump.record
+      [ Dump.field "table" (fun t -> t.table)
+        @@ Dump.hashtbl Fmt.int Domain.pp_dump
+      ; Dump.field "doms_conflict_paused" (fun t -> t.doms_conflict_paused)
+        @@ Dump.queue (pp_dump_ref @@ Dump.option Domain.pp_dump)
+      ; Dump.field "doms_with_conflict_penalty" (fun t ->
+            t.doms_with_conflict_penalty)
+        @@ Dump.queue (pp_dump_ref @@ Dump.option Domain.pp_dump)
+      ; Dump.field "n_paused" (fun t -> t.n_paused) int
+      ; Dump.field "n_penalised" (fun t -> t.n_penalised) int ]
+
+  (* ignore statistic fields *)
+  let equal t t' = hashtable_equal Domain.equal t.table t'.table
+end
+let pp_dump_domains = Domains.pp_dump
+let equal_domains = Domains.equal
diff --git a/tools/ocaml/xenstored/test/xenstored_test.ml b/tools/ocaml/xenstored/test/xenstored_test.ml
index e86b68e867..acf3209087 100644
--- a/tools/ocaml/xenstored/test/xenstored_test.ml
+++ b/tools/ocaml/xenstored/test/xenstored_test.ml
@@ -1,2 +1,147 @@
-open Xenstored
-let () = ()
+open Testable
+open Generator
+module Cb = Crowbar
+
+let random_path = Cb.list Cb.bytes
+
+let value = Cb.bytes
+
+let token = Cb.bytes
+
+let permty =
+  [Perms.READ; Perms.WRITE; Perms.RDWR; Perms.NONE]
+  |> List.map Cb.const |> Cb.choose
+
+let new_domid = Cb.range ~min:1 Types.domid_first_reserved
+
+let port = Cb.range 0xFFFF_FFFF (*uint32_t*)
+
+let arb_cmd =
+  let open Command in
+  let path =
+    Cb.choose
+      [ Cb.map [Cb.int] (fun rnd state -> PathObserver.choose_path state rnd)
+      ; Cb.map [random_path] (fun x _ -> Store.Path.to_string x) ]
+  in
+  let domid =
+    Cb.map [Cb.int] (fun rnd state -> PathObserver.choose_domid state rnd)
+  in
+  let perms =
+    Cb.map [domid; permty; Cb.pair domid permty |> Cb.list]
+    @@ fun idgen owner other state ->
+    let other = List.map (fun (idgen, ty) -> (idgen state, ty)) other in
+    Perms.Node.create (idgen state) owner other
+  in
+  let guard' ~f gen state =
+    let v = gen state in
+    Cb.guard (f v) ;
+    v
+  in
+  let cmd =
+    let open Testable.Command in
+    Cb.choose
+      [ Cb.map [path] cmd_read
+      ; Cb.map [path; value] cmd_write
+      ; Cb.map [path] cmd_mkdir
+      ; Cb.map [path] (fun pgen -> cmd_rm @@ guard' ~f:(fun p -> p <> "/") pgen)
+      ; Cb.map [path] cmd_directory
+      ; Cb.map [path] cmd_getperms
+      ; Cb.map [path; perms] cmd_setperms
+      ; Cb.map [path; token] cmd_watch
+      ; Cb.map [path; token] cmd_unwatch
+      ; Cb.const cmd_reset_watches
+      ; Cb.const cmd_transaction_start
+      ; Cb.map [Cb.bool] cmd_transaction_end
+      ; Cb.map [new_domid; port] cmd_introduce
+      ; Cb.map [domid] (fun idgen ->
+            cmd_release @@ guard' ~f:(fun id -> id <> 0) idgen)
+      ; Cb.map [domid] cmd_getdomainpath
+      ; Cb.map [domid] cmd_isintroduced
+      ; Cb.map [domid; domid] cmd_set_target
+      ; Cb.const cmd_liveupdate ]
+  in
+  Cb.map [domid; Cb.int; cmd] (fun this rnd cmd state ->
+      let this = this state in
+      let txid = PathObserver.choose_txid_opt state this rnd in
+      let cmd = cmd txid state in
+      (this, cmd))
+
+(* based on QCSTM *)
+module Make (Spec : sig
+  include Testable.S
+
+  val arb_cmd : (state -> cmd) Crowbar.gen
+end) =
+struct
+  let arb_cmds =
+    Crowbar.with_printer (Fmt.Dump.list Spec.pp)
+    @@ Crowbar.map [Crowbar.list1 Spec.arb_cmd] (fun cmdgens ->
+           let cmds, _ =
+             List.fold_left
+               (fun (cmds, s) f ->
+                 let cmd = f s in
+                 Crowbar.check (Spec.precond cmd s) ;
+                 (cmd :: cmds, Spec.next_state cmd s))
+               ([], Spec.init_state) cmdgens
+           in
+           List.rev cmds)
+
+  let interp_agree sut cs =
+    List.fold_left
+      (fun s cmd ->
+        Crowbar.check
+          ( try Spec.run_cmd cmd s sut
+            with Failure msg -> Crowbar.failf "%a" Fmt.lines msg ) ;
+        Spec.next_state cmd s)
+      Spec.init_state cs
+
+  let agree_prop cs =
+    let on_exn e bt logs =
+      List.iter prerr_endline logs ;
+      Printexc.raise_with_backtrace e bt
+    in
+    Testable.with_logger ~on_exn (fun () ->
+        let sut = Spec.init_sut () in
+        Stdext.finally
+          (fun () -> let (_ : Spec.state) = interp_agree sut cs in ())
+          (fun () -> Spec.cleanup sut))
+
+  let agree_test ~name = Crowbar.add_test ~name [arb_cmds] agree_prop
+end
+
+module LU = Make (struct
+  include PathObserver
+
+  type cmd = int * Testable.Command.t
+
+  type sut = Testable.t ref * Testable.t ref
+
+  let arb_cmd = arb_cmd
+
+  let init_sut () =
+    let sut1 = Testable.create () in
+    Testable.init sut1 ;
+    let sut2 = Testable.create ~live_update:true () in
+    Testable.init sut2 ;
+    let sut1 = ref sut1 in
+    let sut2 = ref sut2 in
+    (sut1, sut2)
+
+  let cleanup (sut1, sut2) =
+    Testable.cleanup !sut1 ; Testable.cleanup !sut2
+
+  let run_cmd cmd state (sut1, sut2) =
+    Testable.run2 state.next_tid sut1 sut2 cmd ;
+    true
+end)
+
+let () =
+  (* Crowbar runs from at_exit, after bisect's coverage dumper, so an at_exit
+     registered here would run *before* Crowbar starts. Hence the nested
+     at_exit, which puts the bisect dumper in the right place to dump
+     coverage *after* Crowbar has finished. *)
+  (* at_exit (fun () -> at_exit Bisect.Runtime.write_coverage_data) ; *)
+  print_endline "" ;
+  LU.agree_test ~name:"live-update-agree"
diff --git a/tools/ocaml/xenstored/test/xs_protocol.ml b/tools/ocaml/xenstored/test/xs_protocol.ml
new file mode 100644
index 0000000000..b5da2aff34
--- /dev/null
+++ b/tools/ocaml/xenstored/test/xs_protocol.ml
@@ -0,0 +1,733 @@
+(*
+ * Copyright (C) Citrix Systems Inc.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU Lesser General Public License as published
+ * by the Free Software Foundation; version 2.1 only. with the special
+ * exception on linking described in file LICENSE.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU Lesser General Public License for more details.
+ *)
+
+let ( |> ) f g = g f
+let ( ++ ) f g x = f (g x)
+
+module Op = struct
+  type t =
+    | Debug | Directory | Read | Getperms
+    | Watch | Unwatch | Transaction_start
+    | Transaction_end | Introduce | Release
+    | Getdomainpath | Write | Mkdir | Rm
+    | Setperms | Watchevent | Error | Isintroduced
+    | Resume | Set_target
+    | Reset_watches | Directory_part
+
+  let to_int32 = function
+    | Debug -> 0l
+    | Directory -> 1l
+    | Read -> 2l
+    | Getperms -> 3l
+    | Watch -> 4l
+    | Unwatch -> 5l
+    | Transaction_start -> 6l
+    | Transaction_end -> 7l
+    | Introduce -> 8l
+    | Release -> 9l
+    | Getdomainpath -> 10l
+    | Write -> 11l
+    | Mkdir -> 12l
+    | Rm -> 13l
+    | Setperms -> 14l
+    | Watchevent -> 15l
+    | Error -> 16l
+    | Isintroduced -> 17l
+    | Resume -> 18l
+    | Set_target -> 19l
+    | Reset_watches -> 21l (* 20 is reserved *)
+    | Directory_part -> 22l
+
+  (* The index of the value in the array is the integer representation used
+     by the wire protocol. Every element of t exists exactly once in the array. *)
+  let on_the_wire =
+    let a = Array.make 23 None in
+    ListLabels.iter
+      ~f:(fun v -> a.(v |> to_int32 |> Int32.to_int) <- Some v)
+      [ Debug; Directory; Read; Getperms; Watch; Unwatch; Transaction_start
+      ; Transaction_end; Introduce; Release; Getdomainpath; Write; Mkdir; Rm
+      ; Setperms; Watchevent; Error; Isintroduced; Resume; Set_target
+      ; Reset_watches; Directory_part ] ;
+    a
+
+  let of_int32 i =
+    let i = Int32.to_int i in
+    if i >= 0 && i < Array.length on_the_wire then on_the_wire.(i) else None
+
+  let to_string = function
+    | Debug             -> "debug"
+    | Directory         -> "directory"
+    | Read              -> "read"
+    | Getperms          -> "getperms"
+    | Watch             -> "watch"
+    | Unwatch           -> "unwatch"
+    | Transaction_start -> "transaction_start"
+    | Transaction_end   -> "transaction_end"
+    | Introduce         -> "introduce"
+    | Release           -> "release"
+    | Getdomainpath     -> "getdomainpath"
+    | Write             -> "write"
+    | Mkdir             -> "mkdir"
+    | Rm                -> "rm"
+    | Setperms          -> "setperms"
+    | Watchevent        -> "watchevent"
+    | Error             -> "error"
+    | Isintroduced      -> "isintroduced"
+    | Resume            -> "resume"
+    | Set_target        -> "set_target"
+    | Reset_watches     -> "reset_watches"
+    | Directory_part    -> "directory_part"
+end
+
+let split_string ?limit:(limit=max_int) c s =
+  let len = String.length s in
+  let next_c from =
+    try
+      Some (String.index_from s from c)
+    with
+    | Not_found -> None
+  in
+  let decr n = max 0 (n-1) in
+  let rec loop n from acc =
+    match decr n, next_c from with
+    | 0, _
+    | _, None ->
+      (* No further instances of c, or we've reached limit *)
+      String.sub s from (len - from) :: acc
+    | n', Some idx ->
+      let a = String.sub s from (idx - from) in
+      (loop[@tailcall]) n' (idx + 1) (a :: acc)
+  in loop limit 0 [] |> List.rev
+
+
+module ACL = struct
+  type perm =
+    | NONE
+    | READ
+    | WRITE
+    | RDWR
+
+  let char_of_perm = function
+    | READ -> 'r'
+    | WRITE -> 'w'
+    | RDWR -> 'b'
+    | NONE -> 'n'
+
+  let perm_of_char = function
+    | 'r' -> Some READ
+    | 'w' -> Some WRITE
+    | 'b' -> Some RDWR
+    | 'n' -> Some NONE
+    | _ -> None
+
+  type domid = int
+
+  type t = {
+    owner: domid;             (** domain which "owns", has full access *)
+    other: perm;              (** default permissions for all others... *)
+    acl: (domid * perm) list; (** ... unless overridden in the ACL *)
+  }
+
+  let to_string perms =
+    let string_of_perm (id, perm) = Printf.sprintf "%c%u" (char_of_perm perm) id in
+    String.concat "\000" (List.map string_of_perm ((perms.owner,perms.other) :: perms.acl))
+
+  let of_string s =
+    (* A perm is stored as '<c>domid' *)
+    let perm_of_char_exn x = match (perm_of_char x) with Some y -> y | None -> raise Not_found in
+    try
+      let perm_of_string s =
+        if String.length s < 2
+        then invalid_arg (Printf.sprintf "Permission string too short: '%s'" s);
+        int_of_string (String.sub s 1 (String.length s - 1)), perm_of_char_exn s.[0] in
+      let l = List.map perm_of_string (split_string '\000' s) in
+      match l with
+      | (owner, other) :: l -> Some { owner = owner; other = other; acl = l }
+      | [] -> Some { owner = 0; other = NONE; acl = [] }
+    with _ ->
+      None
+end
+
+type t = {
+  tid: int32;
+  rid: int32;
+  ty: Op.t;
+  len: int;
+  data: Buffer.t;
+}
+
+let sizeof_header = 16
+let get_header_ty v = Cstruct.LE.get_uint32 v 0
+let set_header_ty v x = Cstruct.LE.set_uint32 v 0 x
+let get_header_rid v = Cstruct.LE.get_uint32 v 4
+let set_header_rid v x = Cstruct.LE.set_uint32 v 4 x
+let get_header_tid v = Cstruct.LE.get_uint32 v 8
+let set_header_tid v x = Cstruct.LE.set_uint32 v 8 x
+let get_header_len v = Cstruct.LE.get_uint32 v 12
+let set_header_len v x = Cstruct.LE.set_uint32 v 12 x
+
+let to_bytes pkt =
+  let header = Cstruct.create sizeof_header in
+  let len = Int32.of_int (Buffer.length pkt.data) in
+  let ty = Op.to_int32 pkt.ty in
+  set_header_ty header ty;
+  set_header_rid header pkt.rid;
+  set_header_tid header pkt.tid;
+  set_header_len header len;
+  let result = Buffer.create 64 in
+  Buffer.add_bytes result (Cstruct.to_bytes header);
+  Buffer.add_buffer result pkt.data;
+  Buffer.to_bytes result
+
+let get_tid pkt = pkt.tid
+let get_ty pkt = pkt.ty
+let get_data pkt =
+  if pkt.len > 0 && Buffer.nth pkt.data (pkt.len - 1) = '\000' then
+    Buffer.sub pkt.data 0 (pkt.len - 1)
+  else
+    Buffer.contents pkt.data
+let get_rid pkt = pkt.rid
+
+module Parser = struct
+  (** Incrementally parse packets *)
+
+  let header_size = 16
+
+  let xenstore_payload_max = 4096 (* xen/include/public/io/xs_wire.h *)
+
+  let allow_oversize_packets = ref true
+
+  type state =
+    | Unknown_operation of int32
+    | Parser_failed of string
+    | Need_more_data of int
+    | Packet of t
+
+  type parse =
+    | ReadingHeader of int * bytes
+    | ReadingBody of t
+    | Finished of state
+
+  let start () = ReadingHeader (0, Bytes.make header_size '\000')
+
+  let state = function
+    | ReadingHeader(got_already, _) -> Need_more_data (header_size - got_already)
+    | ReadingBody pkt -> Need_more_data (pkt.len - (Buffer.length pkt.data))
+    | Finished r -> r
+
+  let parse_header str =
+    let header = Cstruct.create sizeof_header in
+    Cstruct.blit_from_string str 0 header 0 sizeof_header;
+    let ty = get_header_ty header in
+    let rid = get_header_rid header in
+    let tid = get_header_tid header in
+    let len = get_header_len header in
+
+    let len = Int32.to_int len in
+    (* A packet which is bigger than xenstore_payload_max is illegal.
+       This will leave the guest connection in a bad state and will
+       be hard to recover from without restarting the connection
+       (i.e. rebooting the guest) *)
+    let len = if !allow_oversize_packets then len else max 0 (min xenstore_payload_max len) in
+
+    begin match Op.of_int32 ty with
+      | Some ty ->
+        let t = {
+          tid = tid;
+          rid = rid;
+          ty = ty;
+          len = len;
+          data = Buffer.create len;
+        } in
+        if len = 0
+        then Finished (Packet t)
+        else ReadingBody t
+      | None -> Finished (Unknown_operation ty)
+    end
+
+  let input state (bytes : string) =
+    match state with
+    | ReadingHeader(got_already, (str : bytes)) ->
+      Bytes.blit_string bytes 0 str got_already (String.length bytes);
+      let got_already = got_already + (String.length bytes) in
+      if got_already < header_size
+      then ReadingHeader(got_already, str)
+      else parse_header (Bytes.to_string str)
+    | ReadingBody x ->
+      Buffer.add_string x.data bytes;
+      let needed = x.len - (Buffer.length x.data) in
+      if needed > 0
+      then ReadingBody x
+      else Finished (Packet x)
+    | Finished f -> Finished f
+end
+
+(* Should we switch to an explicit stream abstraction here? *)
+module type IO = sig
+  type 'a t
+  val return: 'a -> 'a t
+  val ( >>= ): 'a t -> ('a -> 'b t) -> 'b t
+
+  type channel
+  val read: channel -> bytes -> int -> int -> int t
+  val write: channel -> bytes -> int -> int -> unit t
+end
+
+exception Unknown_xenstore_operation of int32
+exception Response_parser_failed of string
+exception EOF
+
+type ('a, 'b) result =
+  | Ok of 'a
+  | Exception of 'b
+
+module PacketStream = functor(IO: IO) -> struct
+  let ( >>= ) = IO.( >>= )
+  let return = IO.return
+
+  type stream = {
+    channel: IO.channel;
+    mutable incoming_pkt: Parser.parse; (* incrementally parses the next packet *)
+  }
+
+  let make t = {
+    channel = t;
+    incoming_pkt = Parser.start ();
+  }
+
+  (* [recv client] returns a single Packet, or fails *)
+  let rec recv t =
+    let open Parser in match Parser.state t.incoming_pkt with
+    | Packet pkt ->
+      t.incoming_pkt <- start ();
+      return (Ok pkt)
+    | Need_more_data x ->
+      let buf = Bytes.make x '\000' in
+      IO.read t.channel buf 0 x
+      >>= (function
+          | 0 -> return (Exception EOF)
+          | n ->
+            let fragment = Bytes.sub_string buf 0 n in
+            t.incoming_pkt <- input t.incoming_pkt fragment;
+            recv t)
+    | Unknown_operation x -> return (Exception (Unknown_xenstore_operation x))
+    | Parser_failed x -> return (Exception (Response_parser_failed x))
+
+  (* [send client pkt] sends [pkt] and returns (), or fails *)
+  let send t request =
+    let req = to_bytes request in
+    IO.write t.channel req 0 (Bytes.length req)
+end
+
+module Token = struct
+  type t = string
+
+  (** [to_user_string x] returns the user-supplied part of the watch token *)
+  let to_user_string x = Scanf.sscanf x "%d:%s" (fun _ x -> x)
+
+  let to_debug_string x = x
+
+  let of_string x = x
+  let to_string x = x
+end
+
+let data_concat ls = (String.concat "\000" ls) ^ "\000"
+
+let create tid rid ty data =
+  let len = String.length data in
+  let b = Buffer.create len in
+  Buffer.add_string b data;
+  {
+    tid = tid;
+    rid = rid;
+    ty = ty;
+    len = len;
+    data = b;
+  }
+
+module Response = struct
+
+  type payload =
+    | Read of string
+    | Directory of string list
+    | Getperms of ACL.t
+    | Getdomainpath of string
+    | Transaction_start of int32
+    | Write
+    | Mkdir
+    | Rm
+    | Setperms
+    | Watch
+    | Unwatch
+    | Transaction_end
+    | Debug of string list
+    | Introduce
+    | Resume
+    | Release
+    | Set_target
+    | Reset_watches
+    | Directory_part of int * string list
+    | Isintroduced of bool
+    | Error of string
+    | Watchevent of string * string
+
+  let prettyprint_payload =
+    let open Printf in function
+      | Read x -> sprintf "Read %s" x
+      | Directory xs -> sprintf "Directory [ %s ]" (String.concat "; " xs)
+      | Getperms acl -> sprintf "Getperms %s" (ACL.to_string acl)
+      | Getdomainpath p -> sprintf "Getdomainpath %s" p
+      | Transaction_start x -> sprintf "Transaction_start %ld" x
+      | Write -> "Write"
+      | Mkdir -> "Mkdir"
+      | Rm -> "Rm"
+      | Setperms -> "Setperms"
+      | Watch -> "Watch"
+      | Unwatch -> "Unwatch"
+      | Transaction_end -> "Transaction_end"
+      | Debug xs -> sprintf "Debug [ %s ]" (String.concat "; " xs)
+      | Introduce -> "Introduce"
+      | Resume -> "Resume"
+      | Release -> "Release"
+      | Set_target -> "Set_target"
+      | Reset_watches -> "Reset_watches"
+      | Directory_part (gencnt, xs) ->
+          sprintf "Directory_part #%d [ %s ]" gencnt (String.concat "; " xs)
+      | Isintroduced x -> sprintf "Isintroduced %b" x
+      | Error x -> sprintf "Error %s" x
+      | Watchevent (x, y) -> sprintf "Watchevent %s %s" x y
+
+  let ty_of_payload = function
+    | Read _ -> Op.Read
+    | Directory _ -> Op.Directory
+    | Getperms _ -> Op.Getperms
+    | Getdomainpath _ -> Op.Getdomainpath
+    | Transaction_start _ -> Op.Transaction_start
+    | Debug _ -> Op.Debug
+    | Isintroduced _ -> Op.Isintroduced
+    | Watchevent (_, _) -> Op.Watchevent
+    | Error _ -> Op.Error
+    | Write -> Op.Write
+    | Mkdir -> Op.Mkdir
+    | Rm -> Op.Rm
+    | Setperms -> Op.Setperms
+    | Watch -> Op.Watch
+    | Unwatch -> Op.Unwatch
+    | Transaction_end -> Op.Transaction_end
+    | Introduce -> Op.Introduce
+    | Resume -> Op.Resume
+    | Release -> Op.Release
+    | Set_target -> Op.Set_target
+    | Reset_watches -> Op.Reset_watches
+    | Directory_part _ -> Op.Directory_part
+
+  let ok = "OK\000"
+
+  let data_of_payload = function
+    | Read x                   -> x
+    | Directory ls             -> if ls = [] then "" else data_concat ls
+    | Getperms perms           -> data_concat [ ACL.to_string perms ]
+    | Getdomainpath x          -> data_concat [ x ]
+    | Transaction_start tid    -> data_concat [ Int32.to_string tid ]
+    | Debug items              -> data_concat items
+    | Isintroduced b           -> data_concat [ if b then "T" else "F" ]
+    | Watchevent (path, token) -> data_concat [ path; token ]
+    | Error x                  -> data_concat [ x ]
+    | _                        -> ok
+
+  let print x tid rid =
+    create tid rid (ty_of_payload x) (data_of_payload x)
+end
+
+module Request = struct
+
+  type path_op =
+    | Read
+    | Directory
+    | Directory_part of int
+    | Getperms
+    | Write of string
+    | Mkdir
+    | Rm
+    | Setperms of ACL.t
+
+  type payload =
+    | PathOp of string * path_op
+    | Getdomainpath of int
+    | Transaction_start
+    | Watch of string * string
+    | Unwatch of string * string
+    | Transaction_end of bool
+    | Debug of string list
+    | Introduce of int * Nativeint.t * int
+    | Resume of int
+    | Release of int
+    | Set_target of int * int
+    | Reset_watches
+    | Isintroduced of int
+    | Error of string
+    | Watchevent of string
+
+  open Printf
+
+  let prettyprint_pathop x = function
+    | Read -> sprintf "Read %s" x
+    | Directory -> sprintf "Directory %s" x
+    | Directory_part off -> sprintf "Directory_part %s @%d" x off
+    | Getperms -> sprintf "Getperms %s" x
+    | Write v -> sprintf "Write %s %s" x v
+    | Mkdir -> sprintf "Mkdir %s" x
+    | Rm -> sprintf "Rm %s" x
+    | Setperms acl -> sprintf "Setperms %s %s" x (ACL.to_string acl)
+
+  let prettyprint_payload = function
+    | PathOp (path, op) -> prettyprint_pathop path op
+    | Getdomainpath x -> sprintf "Getdomainpath %d" x
+    | Transaction_start -> "Transaction_start"
+    | Watch (x, y) -> sprintf "Watch %s %s" x y
+    | Unwatch (x, y) -> sprintf "Unwatch %s %s" x y
+    | Transaction_end x -> sprintf "Transaction_end %b" x
+    | Debug xs -> sprintf "Debug [ %s ]" (String.concat "; " xs)
+    | Introduce (x, n, y) -> sprintf "Introduce %d %nu %d" x n y
+    | Resume x -> sprintf "Resume %d" x
+    | Release x -> sprintf "Release %d" x
+    | Set_target (x, y) -> sprintf "Set_target %d %d" x y
+    | Reset_watches -> "Reset_watches"
+    | Isintroduced x -> sprintf "Isintroduced %d" x
+    | Error x -> sprintf "Error %s" x
+    | Watchevent x -> sprintf "Watchevent %s" x
+
+  exception Parse_failure
+
+  let strings data = split_string '\000' data
+
+  let one_string data =
+    let args = split_string ~limit:2 '\000' data in
+    match args with
+    | x :: [] -> x
+    | _       ->
+      raise Parse_failure
+
+  let two_strings data =
+    let args = split_string ~limit:2 '\000' data in
+    match args with
+    | a :: b :: [] -> a, b
+    | a :: [] -> a, "" (* terminating NULL removed by get_data *)
+    | _            ->
+      raise Parse_failure
+
+  let acl x = match ACL.of_string x with
+    | Some x -> x
+    | None ->
+      raise Parse_failure
+
+  let domid s =
+    let v = ref 0 in
+    let is_digit c = c >= '0' && c <= '9' in
+    let len = String.length s in
+    let i = ref 0 in
+    while !i < len && not (is_digit s.[!i]) do incr i done;
+    while !i < len && is_digit s.[!i]
+    do
+      let x = (Char.code s.[!i]) - (Char.code '0') in
+      v := !v * 10 + x;
+      incr i
+    done;
+    !v
+
+  let bool = function
+    | "F" -> false
+    | "T" -> true
+    | _ ->
+      raise Parse_failure
+
+  let parse_exn request =
+    let data = get_data request in
+    match get_ty request with
+    | Op.Read -> PathOp (data |> one_string, Read)
+    | Op.Directory -> PathOp (data |> one_string, Directory)
+    | Op.Getperms -> PathOp (data |> one_string, Getperms)
+    | Op.Getdomainpath -> Getdomainpath (data |> one_string |> domid)
+    | Op.Transaction_start -> Transaction_start
+    | Op.Write ->
+      let path, value = two_strings data in
+      PathOp (path, Write value)
+    | Op.Mkdir -> PathOp (data |> one_string, Mkdir)
+    | Op.Rm -> PathOp (data |> one_string, Rm)
+    | Op.Setperms ->
+      let path, perms = two_strings data in
+      let perms = acl perms in
+      PathOp(path, Setperms perms)
+    | Op.Watch ->
+      let path, token = two_strings data in
+      Watch(path, token)
+    | Op.Unwatch ->
+      let path, token = two_strings data in
+      Unwatch(path, token)
+    | Op.Transaction_end -> Transaction_end(data |> one_string |> bool)
+    | Op.Debug -> Debug (strings data)
+    | Op.Introduce ->
+      begin match strings data with
+        | d :: mfn :: port :: _ ->
+          let d = domid d in
+          let mfn = Nativeint.of_string mfn in
+          let port = int_of_string port in
+          Introduce (d, mfn, port)
+        | _ ->
+          raise Parse_failure
+      end
+    | Op.Resume -> Resume (data |> one_string |> domid)
+    | Op.Release -> Release (data |> one_string |> domid)
+    | Op.Set_target ->
+      let mine, yours = two_strings data in
+      let mine = domid mine and yours = domid yours in
+      Set_target(mine, yours)
+    | Op.Reset_watches -> Reset_watches
+    | Op.Directory_part ->
+        let path, offstr = two_strings data in
+        PathOp (path, Directory_part (int_of_string offstr))
+    | Op.Isintroduced -> Isintroduced (data |> one_string |> domid)
+    | Op.Error -> Error(data |> one_string)
+    | Op.Watchevent -> Watchevent(data |> one_string)
+
+  let parse request =
+    try
+      Some (parse_exn request)
+    with _ -> None
+
+  let prettyprint request =
+    Printf.sprintf "tid = %ld; rid = %ld; payload = %s"
+      (get_tid request) (get_rid request)
+      (match parse request with
+       | None -> "None"
+       | Some x -> "Some " ^ (prettyprint_payload x))
+
+  let ty_of_payload = function
+    | PathOp(_, Directory) -> Op.Directory
+    | PathOp(_, Read) -> Op.Read
+    | PathOp(_, Getperms) -> Op.Getperms
+    | Debug _ -> Op.Debug
+    | Watch (_, _) -> Op.Watch
+    | Unwatch (_, _) -> Op.Unwatch
+    | Transaction_start -> Op.Transaction_start
+    | Transaction_end _ -> Op.Transaction_end
+    | Introduce(_, _, _) -> Op.Introduce
+    | Release _ -> Op.Release
+    | Resume _ -> Op.Resume
+    | Getdomainpath _ -> Op.Getdomainpath
+    | PathOp(_, Write _) -> Op.Write
+    | PathOp(_, Mkdir) -> Op.Mkdir
+    | PathOp(_, Rm) -> Op.Rm
+    | PathOp(_, Setperms _) -> Op.Setperms
+    | Set_target (_, _) -> Op.Set_target
+    | Reset_watches -> Op.Reset_watches
+    | PathOp(_, Directory_part _) -> Op.Directory_part
+    | Isintroduced _ -> Op.Isintroduced
+    | Error _ -> Op.Error
+    | Watchevent _ -> Op.Watchevent
+
+  let transactional_of_payload = function
+    | PathOp(_, _)
+    | Transaction_end _ -> true
+    | _ -> false
+
+  let data_of_payload = function
+    | PathOp(path, Write value) ->
+      path ^ "\000" ^ value (* no NULL at the end *)
+    | PathOp(path, Setperms perms) ->
+      data_concat [ path; ACL.to_string perms ]
+    | PathOp(path, _) -> data_concat [ path ]
+    | Debug commands -> data_concat commands
+    | Watch (path, token)
+    | Unwatch (path, token) -> data_concat [ path; token ]
+    | Transaction_start -> data_concat []
+    | Transaction_end commit -> data_concat [ if commit then "T" else "F" ]
+    | Introduce(domid, mfn, port) ->
+      data_concat [
+        Printf.sprintf "%u" domid;
+        Printf.sprintf "%nu" mfn;
+        string_of_int port;
+      ]
+    | Release domid
+    | Resume domid
+    | Getdomainpath domid
+    | Isintroduced domid ->
+      data_concat [ Printf.sprintf "%u" domid; ]
+    | Reset_watches -> data_concat []
+    | Set_target (mine, yours) ->
+      data_concat [ Printf.sprintf "%u" mine; Printf.sprintf "%u" yours; ]
+    | Error _ ->
+      failwith "Unimplemented: data_of_payload (Error)"
+    | Watchevent _ ->
+      failwith "Unimplemented: data_of_payload (Watchevent)"
+
+  let print x tid rid =
+    create
+      (if transactional_of_payload x then tid else 0l)
+      rid
+      (ty_of_payload x)
+      (data_of_payload x)
+end
+
+module Unmarshal = struct
+  let some x = Some x
+  let int_of_string_opt x = try Some(int_of_string x) with _ -> None
+  let int32_of_string_opt x = try Some(Int32.of_string x) with _ -> None
+  let unit_of_string_opt x = if x = "" then Some () else None
+  let ok x = if x = "OK" then Some () else None
+
+  let string = some ++ get_data
+  let list = some ++ split_string '\000' ++ get_data
+  let acl = ACL.of_string ++ get_data
+  let int = int_of_string_opt ++ get_data
+  let int32 = int32_of_string_opt ++ get_data
+  let unit = unit_of_string_opt ++ get_data
+  let ok = ok ++ get_data
+end
+
+exception Enoent of string
+exception Eagain
+exception Eexist
+exception Invalid
+exception Error of string
+
+let response hint sent received f = match get_ty sent, get_ty received with
+  | _, Op.Error ->
+    begin match get_data received with
+      | "ENOENT" -> raise (Enoent hint)
+      | "EAGAIN" -> raise Eagain
+      | "EINVAL" -> raise Invalid
+      | "EEXIST" -> raise Eexist
+      | s -> raise (Error s)
+    end
+  | x, y when x = y ->
+    begin match f received with
+      | None -> raise (Error (Printf.sprintf "failed to parse response (hint:%s) (payload:%s)" hint (get_data received)))
+      | Some z -> z
+    end
+  | x, y ->
+    raise (Error (Printf.sprintf "unexpected packet: expected %s; got %s" (Op.to_string x) (Op.to_string y)))
+
+type address =
+  | Unix of string
+  | Domain of int
+
+let string_of_address = function
+  | Unix x -> x
+  | Domain x -> string_of_int x
+
+let domain_of_address = function
+  | Unix _ -> 0
+  | Domain x -> x
+
diff --git a/tools/ocaml/xenstored/transaction.ml b/tools/ocaml/xenstored/transaction.ml
index 17b1bdf2ea..0466b04ae3 100644
--- a/tools/ocaml/xenstored/transaction.ml
+++ b/tools/ocaml/xenstored/transaction.ml
@@ -82,6 +82,7 @@ type t = {
 	start_count: int64;
 	store: Store.t; (* This is the store that we change in write operations. *)
 	quota: Quota.t;
+	mutable must_fail: bool;
 	oldroot: Store.Node.t;
 	mutable paths: (Xenbus.Xb.Op.operation * Store.Path.t) list;
 	mutable operations: (Packet.request * Packet.response) list;
@@ -89,7 +90,7 @@ type t = {
 	mutable write_lowpath: Store.Path.t option;
 }
 let get_id t = match t.ty with No -> none | Full (id, _, _) -> id
-
+let mark_failed t = t.must_fail <- true
 let counter = ref 0L
 let failed_commits = ref 0L
 let failed_commits_no_culprit = ref 0L
@@ -117,6 +118,8 @@ let trim_short_running_transactions txn =
 		keep
 		!short_running_txns
 
+let invalid_op = Xenbus.Xb.Op.Invalid, []
+
 let make ?(internal=false) id store =
 	let ty = if id = none then No else Full(id, Store.copy store, store) in
 	let txn = {
@@ -129,6 +132,7 @@ let make ?(internal=false) id store =
 		operations = [];
 		read_lowpath = None;
 		write_lowpath = None;
+		must_fail = false;
 	} in
 	if id <> none && not internal then (
 		let now = Unix.gettimeofday () in
@@ -139,10 +143,11 @@ let make ?(internal=false) id store =
 let get_store t = t.store
 let get_paths t = t.paths
 
+let is_read_only t = t.paths = [] && not t.must_fail
 let get_root t = Store.get_root t.store
 
-let is_read_only t = t.paths = []
 let add_wop t ty path = t.paths <- (ty, path) :: t.paths
+let clear_wops t = t.paths <- []
 let add_operation ~perm t request response =
 	if !Define.maxrequests >= 0
 		&& not (Perms.Connection.is_dom0 perm)
@@ -151,7 +156,9 @@ let add_operation ~perm t request response =
 	t.operations <- (request, response) :: t.operations
 let get_operations t = List.rev t.operations
 let set_read_lowpath t path = t.read_lowpath <- get_lowest path t.read_lowpath
-let set_write_lowpath t path = t.write_lowpath <- get_lowest path t.write_lowpath
+let set_write_lowpath t path =
+	Logging.debug "transaction" "set_write_lowpath (%d) %s" (get_id t) (Store.Path.to_string path);
+	t.write_lowpath <- get_lowest path t.write_lowpath
 
 let path_exists t path = Store.path_exists t.store path
 
@@ -200,7 +207,7 @@ let commit ~con t =
 	let has_commited =
 	match t.ty with
 	| No                         -> true
-	| Full (_id, oldstore, cstore) ->       (* "cstore" meaning current canonical store *)
+	| Full (id, oldstore, cstore) ->       (* "cstore" meaning current canonical store *)
 		let commit_partial oldroot cstore store =
 			(* get the lowest path of the query and verify that it hasn't
 			   been modified by others transactions. *)
@@ -240,11 +247,16 @@ let commit ~con t =
 				(* we try a partial commit if possible *)
 				commit_partial oldroot cstore store
 			in
+		if t.must_fail then begin
+			Logging.info "transaction" "Transaction %d was marked to fail (by live-update)" id;
+			false
+		end else
 		if !test_eagain && Random.int 3 = 0 then
 			false
 		else
 			try_commit (Store.get_root oldstore) cstore t.store
 		in
+	Logging.info "transaction" "has_commited: %b" has_commited;
 	if has_commited && has_write_ops then
 		Disk.write t.store;
 	if not has_commited
@@ -252,3 +264,102 @@ let commit ~con t =
 	else if not !has_coalesced
 	then Logging.commit ~tid:(get_id t) ~con;
 	has_commited
+
+module LR = Disk.LiveRecord
+
+(* This lives here instead of Store.ml to avoid a dependency cycle *)
+let write_node ch txidaccess path node =
+	let value = Store.Node.get_value node in
+	let perms = Store.Node.get_perms node in
+	let path = Store.Path.of_path_and_name path (Symbol.to_string node.Store.Node.name) |> Store.Path.to_string in
+	LR.write_node_data ch ~txidaccess ~path ~value ~perms
+
+let split limit c s =
+	let limit = match limit with None -> 8 | Some x -> x in
+	String.split ~limit c s
+
+exception Invalid_Cmd_Args
+let split_one_path data conpath =
+	let args = split (Some 2) '\000' data in
+	match args with
+	| path :: "" :: [] -> Store.Path.create path conpath
+	| _                -> raise Invalid_Cmd_Args
+
+let dump base conpath ~conid txn ch =
+	(* TODO: implicit paths need to be converted to explicit *)
+	let txid = get_id txn in
+	LR.write_transaction_data ch ~conid ~txid;
+	let store = get_store txn in
+	let write_node_mkdir path =
+		let perms, value = match Store.get_node store path with
+		| None -> Perms.Node.default0, "" (* need to dump mkdir anyway even if later deleted due to implicit path creation *)
+		| Some node -> Store.Node.get_perms node, Store.Node.get_value node (* not always "", e.g. on EEXIST *) in
+		LR.write_node_data ch ~txidaccess:(Some (conid, txid, LR.W)) ~path:(Store.Path.to_string path) ~value ~perms
+	in
+	maybe (fun path ->
+		(* if there were any reads make sure the tree matches, remove all contents and write out subtree *)
+		match Store.get_node store path with
+		| None -> (* we've only read nodes that we ended up deleting, nothing to do *) ()
+		| Some node ->
+			write_node ch (Some (conid, txid, LR.Del)) (Store.Path.get_parent path) node;
+			let path = Store.Path.get_parent path in
+			Store.traversal node @@ fun path' node ->
+			write_node ch (Some (conid, txid, LR.R)) (List.append path path') node
+	) txn.read_lowpath;
+	(* we could do something similar for write_lowpath, but that would become
+	   complicated to handle correctly wrt permissions and quotas if there are
+	   nodes owned by other domains in the subtree.
+	*)
+	let ops = get_operations txn in
+	if ops <> [] then
+		(* mark that we had some operations; these could be failures, etc.
+		   we want to fail the transaction after a live-update,
+		   unless it is completely a no-op.
+		*)
+		let perms = Store.getperms store Perms.Connection.full_rights [] in
+		let value = Store.get_root store |> Store.Node.get_value in
+		LR.write_node_data ch ~txidaccess:(Some (conid, txid, LR.R)) ~path:"/" ~value ~perms;
+	ListLabels.iter (fun (req, reply) ->
+		Logging.debug "transaction" "dumpop %s" (Xenbus.Xb.Op.to_string req.Packet.ty);
+		let data = req.Packet.data in
+		let open Xenbus.Xb.Op in
+		match reply with
+		| Packet.Error _ -> ()
+		| _ ->
+			try match req.Packet.ty with
+			| Debug
+			| Watch
+			| Unwatch
+			| Transaction_start
+			| Transaction_end
+			| Introduce
+			| Release
+			| Watchevent
+			| Getdomainpath
+			| Error
+			| Isintroduced
+			| Resume
+			| Set_target
+			| Reset_watches
+			| Invalid
+			| Directory
+			| (Read|Getperms) -> ()
+			| (Write|Setperms) ->
+				(match split (Some 2) '\000' data with
+				| path :: _ :: _ ->
+					let path = Store.Path.create path conpath in
+					if req.Packet.ty = Write then
+						write_node_mkdir (Store.Path.get_parent path); (* implicit mkdir *)
+					(match Store.get_node store path with
+					| None -> ()
+					| Some node ->
+						write_node ch (Some (conid, txid, LR.W)) (Store.Path.get_parent path) node)
+				| _ -> raise Invalid_Cmd_Args)
+			| Mkdir ->
+				let path = split_one_path data conpath in
+				write_node_mkdir path
+			| Rm ->
+				let path = split_one_path data conpath |> Store.Path.to_string in
+				LR.write_node_data ch ~txidaccess:(Some (conid, txid, LR.Del)) ~path ~value:"" ~perms:Perms.Node.default0
+			with Invalid_Cmd_Args | Define.Invalid_path | Not_found -> ()
+	) ops
-- 
2.25.1
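[Editorial note: the header accessors in xs_protocol.ml above fix the xenstore wire header at 16 bytes: four little-endian uint32 fields at offsets 0, 4, 8 and 12 (operation type, request id, transaction id, payload length). As a language-neutral illustration only (a Python sketch, not part of the patch), the same layout can be packed and unpacked with `struct`; op 2 is Read per Op.to_int32 above:]

```python
import struct

# 16-byte xenstore wire header: ty, rid, tid, len (all little-endian u32),
# matching get_header_ty/rid/tid/len at offsets 0/4/8/12 in xs_protocol.ml.
HEADER = struct.Struct("<4I")

def pack_packet(ty: int, rid: int, tid: int, payload: bytes) -> bytes:
    # to_bytes in the patch does the same: header first, then the payload.
    return HEADER.pack(ty, rid, tid, len(payload)) + payload

def unpack_header(header: bytes):
    # parse_header in the patch reads the same four fields.
    return HEADER.unpack(header[:16])

# A Read request (op 2) for path "/a", with the trailing NUL the protocol uses:
pkt = pack_packet(2, 1, 0, b"/a\x00")
assert unpack_header(pkt) == (2, 1, 0, 3)
```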


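[Editorial note: ACL.to_string/of_string above serialise permissions as NUL-joined `<perm-char><domid>` tokens, with the (owner, other-perm) pair encoded as the first token. A hypothetical Python round-trip of the same string format (function names are illustrative, not from the patch); perm chars as in ACL.char_of_perm: n=NONE, r=READ, w=WRITE, b=RDWR:]

```python
def acl_to_string(owner: int, other: str, acl) -> str:
    # First token is "<other-perm><owner-domid>", then one token per ACL entry,
    # each formatted perm-char first then domid, joined with NUL bytes.
    return "\x00".join(f"{perm}{domid}" for domid, perm in [(owner, other)] + acl)

def acl_of_string(s: str):
    # Inverse: split on NUL, take perm char then domid from each token.
    entries = [(int(t[1:]), t[0]) for t in s.split("\x00")]
    (owner, other), acl = entries[0], entries[1:]
    return owner, other, acl

s = acl_to_string(0, "n", [(1, "r"), (2, "b")])
assert s == "n0\x00r1\x00b2"
assert acl_of_string(s) == (0, "n", [(1, "r"), (2, "b")])
```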

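[Editorial note: the Parser module above is a byte-count state machine: accumulate 16 header bytes (ReadingHeader), then the `len` body bytes the header announces (ReadingBody), then yield a packet; a zero-length payload finishes immediately. A minimal Python sketch of the same incremental feed, for illustration only (class and method names are hypothetical, not from the patch):]

```python
import struct

class IncrementalParser:
    """Accumulate a 16-byte header, then the payload length it announces."""
    def __init__(self):
        self.buf = b""
        self.need = 16          # header first, like ReadingHeader
        self.header = None

    def feed(self, data: bytes):
        """Feed a fragment; return (ty, rid, tid, payload) when complete, else None."""
        self.buf += data
        if self.header is None and len(self.buf) >= 16:
            ty, rid, tid, length = struct.unpack("<4I", self.buf[:16])
            self.header = (ty, rid, tid)
            self.buf = self.buf[16:]
            self.need = length  # switch to ReadingBody
        if self.header is not None and len(self.buf) >= self.need:
            ty, rid, tid = self.header
            payload, self.buf = self.buf[:self.need], self.buf[self.need:]
            self.header, self.need = None, 16  # reset for the next packet
            return ty, rid, tid, payload
        return None  # Need_more_data

# A Write request (op 11) with payload "path\0value", fed in three fragments:
p = IncrementalParser()
msg = struct.pack("<4I", 11, 7, 0, 4) + b"/a\x00v"
assert p.feed(msg[:5]) is None           # partial header
assert p.feed(msg[5:18]) is None         # header done, partial body
assert p.feed(msg[18:]) == (11, 7, 0, b"/a\x00v")
```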
From xen-devel-bounces@lists.xenproject.org Tue May 11 18:07:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:07:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125903.237037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmo-0002Sp-OW; Tue, 11 May 2021 18:07:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125903.237037; Tue, 11 May 2021 18:07:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmo-0002Sg-KK; Tue, 11 May 2021 18:07:14 +0000
Received: by outflank-mailman (input) for mailman id 125903;
 Tue, 11 May 2021 18:07:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWmn-0000hb-QA
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:07:13 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26ebaf9d-3570-4e7d-92e8-e3ca2c9b182f;
 Tue, 11 May 2021 18:07:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26ebaf9d-3570-4e7d-92e8-e3ca2c9b182f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756419;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=leCiJnwy6SsPHaP/NXfEhVjwwDD8mhaCyeX0SGCr7Vg=;
  b=bgXd0rvqL6y/C8IoIylLZc1P2NhbaAJ5Rz8DU9ETNX5V8i4v4kx2Ruic
   +Cm4f+r70GetafWRv8eMX4gua91Or3hKHir8TgmrbsGnN6kmQIO+khstR
   aSykWASmueZtVx1Rpbndc2B1FNL2w/+JdH2KclLYDJH1ZvNGPtaW/O/H8
   E=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ZzyOP/FQWeaVFS2+bnyM/30umDnoi1SJO3xpblwGzLyuHQW+Ojbj6/KwFP8dNHU7TuxYME0yzK
 9CYxYQnXlRCoGfvf4S31CzNXZKQp0nDcLJ/0WHEuCYnSgzOY1KJRJ2Rg/pgJHrr4Z7IMLUMnS2
 OKH3xeUB2tULtoTr0ry1j+fHJJ4x4vNDmZaPIa4UuMuwKxf3rqIYyHRi9hkHdJv8edy2NNbkG8
 OKlwsV8Hr56BGTTv3fLLzl7G41pW03FlyLsmxoVWCLhzRFSqJaglOy0i9DU9JYnIjscvduaoS7
 +jY=
X-SBRS: 5.1
X-MesageID: 43954245
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:Y69OrKo8o7U7lY6P+EqWbLcaV5oReYIsimQD101hICG8cqSj9v
 xG+85rrSMc6QxhIU3I9urwW5VoLUmyyXcx2/h0AV7AZniBhILLFvAB0WKK+VSJcEeSmtK1l5
 0QFJSWYOeAdmSS5vyb3ODXKbgdKaG8gcWVuds=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43954245"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 06/17] tools/ocaml/xenstored: add support for binary format
Date: Tue, 11 May 2021 19:05:19 +0100
Message-ID: <b79ae21f3b906ebc9a3f94e1a6c5cacb6ed2ee93.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

oxenstored already supported loading a partial dump from a text format.
Add support for the binary format too.
We no longer dump the text format, but loading it remains supported for
backwards compatibility.
(A version of oxenstored supporting live update with the old text format has
already been released as part of the security series.)

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/xenstored/perms.ml     |   2 +
 tools/ocaml/xenstored/xenstored.ml | 202 ++++++++++++++++++++++++-----
 2 files changed, 174 insertions(+), 30 deletions(-)
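[The restore loop added below is deliberately fault-tolerant: a faulty record is
logged and counted rather than aborting the whole restore. A minimal standalone
sketch of that pattern, where parse_record and the record list are hypothetical
stand-ins for LR.read_record and the dump channel:]

```ocaml
(* Sketch of the fault-tolerant restore loop: each record is handled
   independently; a faulty one is counted and skipped so that a single
   corrupt entry cannot abort the whole live-update restore.
   [parse_record] is a hypothetical stand-in for LR.read_record. *)
let parse_record r =
  if String.length r = 0 then failwith "empty record" else r

let load_records records =
  let errors = ref 0 in
  let loaded = ref [] in
  List.iter (fun r ->
      try loaded := parse_record r :: !loaded
      with exn ->
        incr errors;
        Printf.eprintf "restoring: ignoring faulty record (%s)\n"
          (Printexc.to_string exn))
    records;
  (List.rev !loaded, !errors)

let () =
  let ok, errs = load_records ["a"; ""; "b"] in
  Printf.printf "loaded=%d errors=%d\n" (List.length ok) errs
```

[The caller can then decide, as from_channel_bin does with its error count,
whether the partially restored state is good enough to continue with.]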

diff --git a/tools/ocaml/xenstored/perms.ml b/tools/ocaml/xenstored/perms.ml
index e8a16221f8..61c1c60083 100644
--- a/tools/ocaml/xenstored/perms.ml
+++ b/tools/ocaml/xenstored/perms.ml
@@ -69,6 +69,8 @@ let remove_domid ~domid perm =
 
 let default0 = create 0 NONE []
 
+let acls t = (t.owner, t.other) :: t.acl
+
 let perm_of_string s =
 	let ty = permty_of_char s.[0]
 	and id = int_of_string (String.sub s 1 (String.length s - 1)) in
diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index ae2eab498a..2aa0dbc0e1 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -141,7 +141,8 @@ exception Bad_format of string
 
 let dump_format_header = "$xenstored-dump-format"
 
-let from_channel_f chan global_f socket_f domain_f watch_f store_f =
+(* for backwards compatibility with already released live-update *)
+let from_channel_f_compat chan global_f socket_f domain_f watch_f store_f =
 	let unhexify s = Utils.unhexify s in
 	let getpath s =
 		let u = Utils.unhexify s in
@@ -186,7 +187,7 @@ let from_channel_f chan global_f socket_f domain_f watch_f store_f =
 	done;
 	info "Completed loading xenstore dump"
 
-let from_channel store cons doms chan =
+let from_channel_compat ~live store cons doms chan =
 	(* don't let the permission get on our way, full perm ! *)
 	let op = Store.get_ops store Perms.Connection.full_rights in
 	let rwro = ref (None) in
@@ -226,43 +227,183 @@ let from_channel store cons doms chan =
 		op.Store.write path value;
 		op.Store.setperms path perms
 		in
-	from_channel_f chan global_f socket_f domain_f watch_f store_f;
+	from_channel_f_compat chan global_f socket_f domain_f watch_f store_f;
 	!rwro
 
-let from_file store cons doms file =
-	info "Loading xenstore dump from %s" file;
-	let channel = open_in file in
-	finally (fun () -> from_channel store doms cons channel)
+module LR = Disk.LiveRecord
+
+let from_channel_f_bin chan on_global_data on_connection_data on_watch_data on_transaction_data on_node_data =
+	Disk.BinaryIn.read_header chan;
+	let quit = ref false in
+	let on_end () = quit := true in
+	let errors = ref 0 in
+	while not !quit
+	do
+		try
+			LR.read_record chan ~on_end ~on_global_data ~on_connection_data ~on_watch_data ~on_transaction_data ~on_node_data
+		with exn ->
+			let bt = Printexc.get_backtrace () in
+			incr errors;
+			Logging.warn "xenstored" "restoring: ignoring faulty record (exception: %s): %s" (Printexc.to_string exn) bt
+	done;
+	info "Completed loading xenstore dump";
+	!errors
+
+
+let from_channel_bin ~live store cons doms chan =
+	(* don't let the permission get on our way, full perm ! *)
+	let maintx = Transaction.make ~internal:true Transaction.none store in
+	let fullperm = Perms.Connection.full_rights in
+	let fds = ref None in
+	let allcons = Hashtbl.create 1021 in
+	let contxid_to_op = Hashtbl.create 1021 in
+	let global_f ~rw_sock =
+		(* file descriptors are only valid on a live-reload, a cold restart won't have them *)
+		if live then
+			fds := Some rw_sock
+	in
+	let domain_f ~conid ~conn ~in_data ~out_data ~out_resp_len =
+		let con = match conn with
+		| LR.Domain { LR.id = 0; _ } ->
+			(* Dom0 is precreated *)
+			Connections.find_domain cons 0
+		| LR.Domain d ->
+			debug "Recreating domain %d, port %d" d.id d.remote_port; 
+			(* FIXME: gnttab *)
+			Domains.create doms d.id 0n d.remote_port
+			|> Connections.add_domain cons;
+			Connections.find_domain cons d.id
+		| LR.Socket fd ->
+			debug "Recreating open socket";
+			(* TODO: rw/ro flag *)
+			Connections.add_anonymous cons fd;
+			Connections.find cons fd
+		in
+		Hashtbl.add allcons conid con
+	in
+	let watch_f ~conid ~wpath ~token =
+		let con = Hashtbl.find allcons conid in
+		ignore (Connections.add_watch cons con wpath token);
+		()
+		in
+	let transaction_f ~conid ~txid =
+		let con = Hashtbl.find allcons conid in
+		con.Connection.next_tid <- txid;
+		let id = Connection.start_transaction con store in
+		assert (id = txid);
+		let txn = Connection.get_transaction con txid in
+		Hashtbl.add contxid_to_op (conid, txid) txn
+	in
+	let store_f ~txaccess ~perms ~path ~value =
+		let txn, op = match txaccess with
+		| None -> maintx, LR.W
+		| Some (conid, txid, op) ->
+			 let (txn, _) as r = Hashtbl.find contxid_to_op (conid, txid), op in
+     	 (* make sure this doesn't commit, even as RO *)
+			 Transaction.mark_failed txn;
+			 r
+        in
+	let get_con id =
+		if id < 0 then Connections.find cons (Utils.FD.of_int (-id))
+		else Connections.find_domain cons id
+	in
+	let watch_f id path token =
+		ignore (Connections.add_watch cons (get_con id) path token)
+		in
+		let path = Store.Path.of_string path in
+		try match op with
+		| LR.R ->
+			 Logging.debug "xenstored" "TR %s %S" (Store.Path.to_string path) value;
+			(* these are values read by the tx, potentially
+				 no write access here. Make the tree match. *)
+			Transaction.write txn fullperm path value; 
+			Transaction.setperms txn fullperm path perms;
+		| LR.W | LR.RW ->
+			 Logging.debug "xenstored" "TW %d %s %S" (Transaction.get_id txn) (Store.Path.to_string path) value;
+			 (* We started with empty tree, create parents.
+			    All the implicit mkdirs from the original tx should be explicit already for quota purposes.
+			 *)
+			 Process.create_implicit_path txn fullperm path;
+ 			 Transaction.write txn fullperm path value; 
+			 Transaction.setperms txn fullperm path perms;
+			 Logging.debug "xenstored" "TWdone %s %S" (Store.Path.to_string path) value;
+		| LR.Del ->
+			 Logging.debug "xenstored" "TDel %s " (Store.Path.to_string path);
+			Transaction.rm txn fullperm path
+		with Not_found|Define.Doesnt_exist|Define.Lookup_Doesnt_exist _ -> ()
+		in
+	(* make sure we got a quota entry for Dom0, so that setperms on / doesn't cause quota to be off-by-one *)
+	Transaction.mkdir maintx fullperm (Store.Path.of_string "/local");
+	let errors = from_channel_f_bin chan global_f domain_f watch_f transaction_f store_f in
+	(* do not fire any watches, but this makes a tx RO *)
+(*	Transaction.clear_wops maintx; *)
+	let errors = if not @@ Transaction.commit ~con:"live-update" maintx then begin
+		Logging.warn "xenstored" "live-update: failed to commit main transaction";
+		errors + 1
+	end else errors
+	in
+	!fds, errors
+
+let from_channel = from_channel_bin (* TODO: detect and accept text format *)
+
+let from_file ~live store cons doms file =
+	let channel = open_in_bin file in
+	finally (fun () -> from_channel_bin ~live store doms cons channel)
 	        (fun () -> close_in channel)
 
-let to_channel store cons rw chan =
-	let hexify s = Utils.hexify s in
+let to_channel rw_sock store cons chan =
+	let t = Disk.BinaryOut.write_header chan in
 
-	fprintf chan "%s\n" dump_format_header;
-	let fdopt = function None -> -1 | Some fd ->
-		(* systemd and utils.ml sets it close on exec *)
-		Unix.clear_close_on_exec fd;
-		Utils.FD.to_int fd in
-	fprintf chan "global,%d\n" (fdopt rw);
-
-	(* dump connections related to domains: domid, mfn, eventchn port/ sockets, and watches *)
-	Connections.iter cons (fun con -> Connection.dump con chan);
+	(match rw_sock with
+	| Some rw_sock ->
+		LR.write_global_data t ~rw_sock
+	| _ -> ());
 
 	(* dump the store *)
 	Store.dump_fct store (fun path node ->
-		let name, perms, value = Store.Node.unpack node in
-		let fullpath = Store.Path.to_string (Store.Path.of_path_and_name path name) in
-		let permstr = Perms.Node.to_string perms in
-		fprintf chan "store,%s,%s,%s\n" (hexify fullpath) (hexify permstr) (hexify value)
+		Transaction.write_node t None path node
 	);
+
+	(* dump connections related to domains and sockets; domid, mfn, eventchn port, watches *)
+	Connections.iter cons (fun con -> Connection.dump con store t);
+
+	LR.write_end t;
 	flush chan;
 	()
 
+let validate_f ch =
+	let conids = Hashtbl.create 1021 in
+	let txids = Hashtbl.create 1021 in
+	let global_f ~rw_sock = () in
+	let domain_f ~conid ~conn ~in_data ~out_data ~out_resp_len =
+		Hashtbl.add conids conid ()
+	in
+	let watch_f ~conid ~wpath ~token =
+		Hashtbl.find conids conid
+	in
+	let transaction_f ~conid ~txid =
+		Hashtbl.find conids conid;
+		Hashtbl.add txids (conid, txid) ()
+	in 
+	let store_f ~txaccess ~perms ~path ~value =
+		match txaccess with
+		| None -> ()
+		| Some (conid, txid, _) ->
+			Hashtbl.find conids conid;
+			Hashtbl.find txids (conid, txid)
+	in
+	let errors = from_channel_f_bin ch global_f domain_f watch_f transaction_f store_f in
+	if errors > 0 then
+		failwith (Printf.sprintf "Failed to re-read dump: %d errors" errors)
 
-let to_file store cons fds file =
-	let channel = open_out_gen [ Open_wronly; Open_creat; Open_trunc; ] 0o600 file in
-	finally (fun () -> to_channel store cons fds channel)
-	        (fun () -> close_out channel)
+let to_file fds store cons file =
+	let channel = open_out_gen [ Open_wronly; Open_creat; Open_trunc; Open_binary ] 0o600 file in
+	finally (fun () -> to_channel fds store cons channel)
+					(fun () -> close_out channel);
+	let channel = open_in_bin file in
+	finally (fun () -> validate_f channel)
+	        (fun () -> close_in channel)
+	
 end
 
 let main () =
@@ -329,8 +470,9 @@ let main () =
 
 	let rw_sock =
 	if cf.restart && Sys.file_exists Disk.xs_daemon_database then (
-		let rwro = DB.from_file store domains cons Disk.xs_daemon_database in
-		info "Live reload: database loaded";
+		Connections.add_domain cons (Domains.create0 domains);
+		let rwro, errors = DB.from_file ~live:cf.live_reload store domains cons Disk.xs_daemon_database in
+		info "Live reload: database loaded (%d errors)" errors;
 		Event.bind_dom_exc_virq eventchn;
 		Process.LiveUpdate.completed ();
 		rwro
@@ -360,7 +502,7 @@ let main () =
 	Sys.set_signal Sys.sigpipe Sys.Signal_ignore;
 
 	if cf.activate_access_log then begin
-		let post_rotate () = DB.to_file store cons (None) Disk.xs_daemon_database in
+		let post_rotate () = DB.to_file None store cons Disk.xs_daemon_database in
 		Logging.init_access_log post_rotate
 	end;
 
@@ -521,7 +663,7 @@ let main () =
 			live_update := Process.LiveUpdate.should_run cons;
 			if !live_update || !quit then begin
 				(* don't initiate live update if saving state fails *)
-				DB.to_file store cons (rw_sock) Disk.xs_daemon_database;
+				DB.to_file rw_sock store cons Disk.xs_daemon_database;
 				quit := true;
 			end
 		with exc ->
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:07:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:07:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125907.237049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmu-0002xd-FC; Tue, 11 May 2021 18:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125907.237049; Tue, 11 May 2021 18:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmu-0002xU-AL; Tue, 11 May 2021 18:07:20 +0000
Received: by outflank-mailman (input) for mailman id 125907;
 Tue, 11 May 2021 18:07:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWms-0000hb-QD
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:07:18 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c9be45a-b6e0-44f1-897a-16a30e5de11b;
 Tue, 11 May 2021 18:07:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c9be45a-b6e0-44f1-897a-16a30e5de11b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756421;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=9mD+y4+KHJLl7F5bXUBrPg32mwOSoUG5fSCQwye7W70=;
  b=XTTUhpB1cJv7Afv+gaIhlJhCj5vOgOyT4wOrjCjerORKIOPUCcbruXg/
   /YfGBqJF1/r0D9qrmQFHQnvgIMeVTXB6henp2uwHLmU5DpzeoqDWoT0kX
   ACp/sa4DOQXRBhGs3RR1mxaJnMrq4YYwDCLGR/GMml0U0tFguu6zT2Bvk
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: UQeXyntwYzSztu4nyK19C5cw8sAUrc9eVV6jQWCg2hA7PhP5mCdaPh01FsCzxfVp3vD5RgECo5
 1ehNvxozd4YgFyBhI5FbpgUdkld+oA6+Wl94ln5TbMxjzn3j2UdBgWzzHHkDXZ6VEP+L0/EpgC
 vmRhSN2U6FLqOqkpIcbtaHpwZLNL8u2INqdLPTrjXgF2IFVBNbIx234NbSYcRMWoLyEoLEcBN3
 oMLty26irLS1Stusg5Xvd5QffwEyKRqOAmMwairLpbomk2feZbgqDgbRPy8rUwsJDsMLL7/AmS
 Sy0=
X-SBRS: 5.1
X-MesageID: 43954257
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:gs89JaHeYqgY8YuOpLqE0seALOsnbusQ8zAXP0AYc31om6uj5r
 iTdZUgpGbJYVkqKRIdcLy7V5VoBEmskaKdgrNhW4tKPjOW2ldARbsKheCJrlHd8m/Fh4lgPM
 9bAtND4bbLbWSS4/yV3ODBKadE/OW6
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43954257"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 09/17] tools/ocaml: use common macros for manipulating mmap_interface
Date: Tue, 11 May 2021 19:05:22 +0100
Message-ID: <744b98946062028be059435fbe2b9ccc2009e1e8.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Also expose these macros in a header file that can be reused by
the upcoming grant table code.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/libs/mmap/mmap_stubs.h    |  7 +++++++
 tools/ocaml/libs/mmap/xenmmap_stubs.c |  2 --
 tools/ocaml/libs/xb/xs_ring_stubs.c   | 14 +++++---------
 3 files changed, 12 insertions(+), 11 deletions(-)

diff --git a/tools/ocaml/libs/mmap/mmap_stubs.h b/tools/ocaml/libs/mmap/mmap_stubs.h
index 65e4239890..816ba6a724 100644
--- a/tools/ocaml/libs/mmap/mmap_stubs.h
+++ b/tools/ocaml/libs/mmap/mmap_stubs.h
@@ -30,4 +30,11 @@ struct mmap_interface
 	int len;
 };
 
+#ifndef Data_abstract_val
+#define Data_abstract_val(v) ((void*) Op_val(v))
+#endif
+
+#define Intf_val(a) ((struct mmap_interface *) Data_abstract_val(a))
+#define Intf_data_val(a) (Intf_val(a)->addr)
+
 #endif
diff --git a/tools/ocaml/libs/mmap/xenmmap_stubs.c b/tools/ocaml/libs/mmap/xenmmap_stubs.c
index e2ce088e25..b811990a89 100644
--- a/tools/ocaml/libs/mmap/xenmmap_stubs.c
+++ b/tools/ocaml/libs/mmap/xenmmap_stubs.c
@@ -28,8 +28,6 @@
 #include <caml/fail.h>
 #include <caml/callback.h>
 
-#define Intf_val(a) ((struct mmap_interface *) a)
-
 static int mmap_interface_init(struct mmap_interface *intf,
                                int fd, int pflag, int mflag,
                                int len, int offset)
diff --git a/tools/ocaml/libs/xb/xs_ring_stubs.c b/tools/ocaml/libs/xb/xs_ring_stubs.c
index 7a91fdee75..614c6e371d 100644
--- a/tools/ocaml/libs/xb/xs_ring_stubs.c
+++ b/tools/ocaml/libs/xb/xs_ring_stubs.c
@@ -35,8 +35,6 @@
 #include <sys/mman.h>
 #include "mmap_stubs.h"
 
-#define GET_C_STRUCT(a) ((struct mmap_interface *) a)
-
 /*
  * Bytes_val has been introduced by Ocaml 4.06.1. So define our own version
  * if needed.
@@ -52,12 +50,11 @@ CAMLprim value ml_interface_read(value ml_interface,
 	CAMLparam3(ml_interface, ml_buffer, ml_len);
 	CAMLlocal1(ml_result);
 
-	struct mmap_interface *interface = GET_C_STRUCT(ml_interface);
 	unsigned char *buffer = Bytes_val(ml_buffer);
 	int len = Int_val(ml_len);
 	int result;
 
-	struct xenstore_domain_interface *intf = interface->addr;
+	struct xenstore_domain_interface *intf = Intf_data_val(ml_interface);
 	XENSTORE_RING_IDX cons, prod; /* offsets only */
 	int total_data, data;
 	uint32_t connection;
@@ -111,12 +108,11 @@ CAMLprim value ml_interface_write(value ml_interface,
 	CAMLparam3(ml_interface, ml_buffer, ml_len);
 	CAMLlocal1(ml_result);
 
-	struct mmap_interface *interface = GET_C_STRUCT(ml_interface);
 	const unsigned char *buffer = Bytes_val(ml_buffer);
 	int len = Int_val(ml_len);
 	int result;
 
-	struct xenstore_domain_interface *intf = interface->addr;
+	struct xenstore_domain_interface *intf = Intf_data_val(ml_interface);
 	XENSTORE_RING_IDX cons, prod;
 	int total_space, space;
 	uint32_t connection;
@@ -166,7 +162,7 @@ exit:
 CAMLprim value ml_interface_set_server_features(value interface, value v)
 {
 	CAMLparam2(interface, v);
-	struct xenstore_domain_interface *intf = GET_C_STRUCT(interface)->addr;
+	struct xenstore_domain_interface *intf = Intf_data_val(interface);
 	if (intf == (void*)MAP_FAILED)
 		caml_failwith("Interface closed");
 
@@ -178,7 +174,7 @@ CAMLprim value ml_interface_set_server_features(value interface, value v)
 CAMLprim value ml_interface_get_server_features(value interface)
 {
 	CAMLparam1(interface);
-	struct xenstore_domain_interface *intf = GET_C_STRUCT(interface)->addr;
+	struct xenstore_domain_interface *intf = Intf_data_val(interface);
 
 	CAMLreturn(Val_int (intf->server_features));
 }
@@ -186,7 +182,7 @@ CAMLprim value ml_interface_get_server_features(value interface)
 CAMLprim value ml_interface_close(value interface)
 {
 	CAMLparam1(interface);
-	struct xenstore_domain_interface *intf = GET_C_STRUCT(interface)->addr;
+	struct xenstore_domain_interface *intf = Intf_data_val(interface);
 	int i;
 
 	intf->req_cons = intf->req_prod = intf->rsp_cons = intf->rsp_prod = 0;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:07:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:07:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125909.237061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmv-0003I9-Od; Tue, 11 May 2021 18:07:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125909.237061; Tue, 11 May 2021 18:07:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWmv-0003Ge-Kz; Tue, 11 May 2021 18:07:21 +0000
Received: by outflank-mailman (input) for mailman id 125909;
 Tue, 11 May 2021 18:07:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWmu-0001nY-DX
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:07:20 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6f8c4a83-831a-40ed-8f89-f3d2965861df;
 Tue, 11 May 2021 18:07:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f8c4a83-831a-40ed-8f89-f3d2965861df
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756430;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=p0W19jtbXsU+HwGFf/dzsvYLrPw4HfB2Wm1ke3VJSTU=;
  b=dQMd3S3Sh1uwjZz11L+p0WZJChMX7GaGF7J6k85HWSKjfLzhEFGL5iRL
   Yo2pTQXXwISgXE6xH9onF5WBFRWYXD/O05R2T/mMB08JTlo4PKp92qirt
   kR5CB9AWLI4Mm1J6taOI84LMctdfsxRnGVQuOSk2h0nTTgEepL4D5dMTO
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: kFob7Ev9G9Y0/Z94owD+Jbmq1srgk7MQRmvTxVjkheEbgfYfIYGrsES29esGjpru6VOo6Ms2Fj
 hnpvqhiBExUTY88zyv6g5xoAf2Ub94Y3+Olbbof+howte6Bx1X63++ZmL48iyZq5Eo8ATVkWIT
 +HrnHtJsz+KmkrHR+F/ShE9BNc5cP1J/6P9zyYSSt+w3QgHlhR0SNjy7MKIKQGdoCS0obasASc
 5M1yS0iVLBq+bCSgZr4ujmoj5A7RpVUqrdLvyGRCixyUsy+XEDekU8jYHQ2C4c9FoOeZWXeeor
 b7Y=
X-SBRS: 5.1
X-MesageID: 45101013
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:jrcCBa6OcVqvAPVt+QPXwPDXdLJyesId70hD6qhwISY6TiX+rb
 HWoB17726TtN9/YhEdcLy7VJVoBEmskKKdgrNhWotKPjOW21dARbsKheCJrgEIWReOktK1vZ
 0QC5SWY+eQMbEVt6nHCXGDYrQd/OU=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="45101013"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 07/17] tools/ocaml/xenstored: validate config file before live update
Date: Tue, 11 May 2021 19:05:20 +0100
Message-ID: <a53934dfa8ef984bffa858cc573cc7a6445bbdc0.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The configuration file can contain typos or other errors that could prevent
live update from succeeding (e.g. a flag only valid in a different version).
Unknown entries in the config file are normally ignored on startup, so add a
strict --config-test mode that live update can use to check that the config
file is valid *for the new binary*.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/xenstored/parse_arg.ml | 4 ++++
 tools/ocaml/xenstored/process.ml   | 2 +-
 tools/ocaml/xenstored/xenstored.ml | 9 +++++++--
 3 files changed, 12 insertions(+), 3 deletions(-)
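[The strict mode amounts to turning the existing "unknown key" warning path
into a failure. A minimal sketch of that lenient-vs-strict split, with a
hypothetical known-key list standing in for the real Config.read machinery:]

```ocaml
(* Sketch of lenient vs strict config validation: unknown keys are
   reported either way, but only strict mode (--config-test) turns them
   into a hard failure, letting the new binary veto a live update.
   [known_keys] and the entry list are hypothetical stand-ins for the
   real Config module and config file. *)
exception Config_error of (string * string) list

let known_keys = ["merge-activate"; "pid-file"; "test-eagain"]

let parse_config ?(strict = false) entries =
  let errs =
    List.filter_map (fun (k, _v) ->
        if List.mem k known_keys then None else Some (k, "unknown key"))
      entries
  in
  List.iter (fun (k, e) -> Printf.eprintf "config: %s %s\n" e k) errs;
  if strict && errs <> [] then raise (Config_error errs)

let () =
  let entries = [("pid-file", "/run/xenstored.pid"); ("typo-key", "1")] in
  parse_config entries; (* lenient: warns only, as on normal startup *)
  match parse_config ~strict:true entries with
  | () -> print_endline "valid"
  | exception Config_error _ -> print_endline "invalid"
```

[validate_exn then only needs to run the candidate binary with --config-test
and check its exit status before committing to the live update.]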

diff --git a/tools/ocaml/xenstored/parse_arg.ml b/tools/ocaml/xenstored/parse_arg.ml
index 965cb9ebeb..588970825f 100644
--- a/tools/ocaml/xenstored/parse_arg.ml
+++ b/tools/ocaml/xenstored/parse_arg.ml
@@ -26,6 +26,7 @@ type config =
 	restart: bool;
 	live_reload: bool;
 	disable_socket: bool;
+	config_test: bool;
 }
 
 let do_argv () =
@@ -38,6 +39,7 @@ let do_argv () =
 	and restart = ref false
 	and live_reload = ref false
 	and disable_socket = ref false
+	and config_test = ref false
 	in
 
 	let speclist =
@@ -55,6 +57,7 @@ let do_argv () =
 		  ("-T", Arg.Set_string tracefile, ""); (* for compatibility *)
 		  ("--restart", Arg.Set restart, "Read database on starting");
 		  ("--live", Arg.Set live_reload, "Read live dump on startup");
+		  ("--config-test", Arg.Set config_test, "Test validity of config file");
 		  ("--disable-socket", Arg.Unit (fun () -> disable_socket := true), "Disable socket");
 		] in
 	let usage_msg = "usage : xenstored [--config-file <filename>] [--no-domain-init] [--help] [--no-fork] [--reraise-top-level] [--restart] [--disable-socket]" in
@@ -70,4 +73,5 @@ let do_argv () =
 		restart = !restart;
 		live_reload = !live_reload;
 		disable_socket = !disable_socket;
+		config_test = !config_test;
 	}
diff --git a/tools/ocaml/xenstored/process.ml b/tools/ocaml/xenstored/process.ml
index 27790d4a5c..d573c88685 100644
--- a/tools/ocaml/xenstored/process.ml
+++ b/tools/ocaml/xenstored/process.ml
@@ -121,7 +121,7 @@ let launch_exn t =
 
 let validate_exn t =
 	(* --help must be last to check validity of earlier arguments *)
-	let t' = {t with cmdline= t.cmdline @ ["--help"]} in
+	let t' = {t with cmdline= t.cmdline @ ["--config-test"]} in
 	let cmd = string_of_t t' in
 	debug "Executing %s" cmd ;
 	match Unix.fork () with
diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index 2aa0dbc0e1..34e706910e 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -88,7 +88,7 @@ let default_pidfile = Paths.xen_run_dir ^ "/xenstored.pid"
 
 let ring_scan_interval = ref 20
 
-let parse_config filename =
+let parse_config ?(strict=false) filename =
 	let pidfile = ref default_pidfile in
 	let options = [
 		("merge-activate", Config.Set_bool Transaction.do_coalesce);
@@ -126,11 +126,12 @@ let parse_config filename =
 		("xenstored-port", Config.Set_string Domains.xenstored_port); ] in
 	begin try Config.read filename options (fun _ _ -> raise Not_found)
 	with
-	| Config.Error err -> List.iter (fun (k, e) ->
+	| Config.Error err as e -> List.iter (fun (k, e) ->
 		match e with
 		| "unknown key" -> eprintf "config: unknown key %s\n" k
 		| _             -> eprintf "config: %s: %s\n" k e
 		) err;
+		if strict then raise e
 	| Sys_error m -> eprintf "error: config: %s\n" m;
 	end;
 	!pidfile
@@ -408,6 +409,10 @@ end
 
 let main () =
 	let cf = do_argv () in
+	if cf.config_test then begin
+	  let _pidfile:string = parse_config ~strict:true (config_filename cf) in
+	  exit 0
+	end;
 	let pidfile =
 		if Sys.file_exists (config_filename cf) then
 			parse_config (config_filename cf)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:15:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:15:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125951.237072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWuj-0006uM-KG; Tue, 11 May 2021 18:15:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125951.237072; Tue, 11 May 2021 18:15:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWuj-0006uF-Gh; Tue, 11 May 2021 18:15:25 +0000
Received: by outflank-mailman (input) for mailman id 125951;
 Tue, 11 May 2021 18:15:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWui-0006u9-Ag
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:15:24 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1f25cf45-5a9f-4a00-ae2a-4f56859ce7c2;
 Tue, 11 May 2021 18:15:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Edwin Torok <edvin.torok@citrix.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "jbeulich@suse.com" <jbeulich@suse.com>, "julien@xen.org"
	<julien@xen.org>, "jgross@suse.com" <jgross@suse.com>, "wl@xen.org"
	<wl@xen.org>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>, "dave@recoil.org"
	<dave@recoil.org>, George Dunlap <George.Dunlap@citrix.com>, Andrew Cooper
	<Andrew.Cooper3@citrix.com>, Christian Lindig <christian.lindig@citrix.com>
Subject: Re: [PATCH v2 00/17] live update and gnttab patches
Thread-Topic: [PATCH v2 00/17] live update and gnttab patches
Thread-Index: AQHXRpBYbSP9+kXyy06RGaCma1VeS6relSkA
Date: Tue, 11 May 2021 18:12:04 +0000
Message-ID: <6cfb5b540c0a34930aea4bfd1ed682537cc420dd.camel@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
user-agent: Evolution 3.36.4-0ubuntu1 
Content-Type: text/plain; charset="utf-8"
Content-ID: <C7038DF3D577B044A276D756A36BA843@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

On Tue, 2021-05-11 at 19:05 +0100, Edwin Török wrote:
> These patches have been posted previously.

A git tree is available here for easier reviewing:
https://github.com/edwintorok/xen/pull/2


> The gnttab patches (tools/ocaml/libs/mmap) were not applied at the
> time
> to avoid conflicts with an in-progress XSA.
> The binary format live-update and fuzzing patches were not applied
> because it was too close to the next Xen release freeze.
> 
> The patches depend on each-other: live-update only works correctly
> when the gnttab
> patches are taken too (MFN is not part of the binary live-update
> stream),
> so they are included here as a single series.
> The gnttab patches replaces one use of libxenctrl with stable
> interfaces, leaving one unstable
> libxenctrl interface used by oxenstored.
> 
> The 'vendor external dependencies' may be optional, it is useful to
> be part
> of a patchqueue in a specfile so that you can build everything
> without external dependencies,
> but might as well commit it so everyone has it easily available not
> just XenServer.
> 
> Note that the live-update fuzz test doesn't yet pass, it is still
> able to find bugs.
> However the reduced version with a fixed seed used as a unit test
> does pass,
> so it is useful to have it committed, and further improvements can be
> made later
> as more bugs are discovered and fixed.
> 
> Edwin Török (17):
>   docs/designs/xenstore-migration.md: clarify that deletes are
> recursive
>   tools/ocaml: add unit test skeleton with Dune build system
>   tools/ocaml: vendor external dependencies for convenience
>   tools/ocaml/xenstored: implement the live migration binary format
>   tools/ocaml/xenstored: add binary dump format support
>   tools/ocaml/xenstored: add support for binary format
>   tools/ocaml/xenstored: validate config file before live update
>   Add structured fuzzing unit test
>   tools/ocaml: use common macros for manipulating mmap_interface
>   tools/ocaml/libs/mmap: allocate correct number of bytes
>   tools/ocaml/libs/mmap: Expose stub_mmap_alloc
>   tools/ocaml/libs/mmap: mark mmap/munmap as blocking
>   tools/ocaml/libs/xb: import gnttab stubs from mirage
>   tools/ocaml: safer Xenmmap interface
>   tools/ocaml/xenstored: use gnttab instead of xenctrl's
>     foreign_map_range
>   tools/ocaml/xenstored: don't store domU's mfn of ring page
>   tools/ocaml/libs/mmap: Clean up unused read/write
> 
>  docs/designs/xenstore-migration.md            |    3 +-
>  tools/ocaml/.gitignore                        |    2 +
>  tools/ocaml/Makefile                          |   53 +
>  tools/ocaml/dune-project                      |    5 +
>  tools/ocaml/duniverse/cmdliner/.gitignore     |   10 +
>  tools/ocaml/duniverse/cmdliner/.ocp-indent    |    1 +
>  tools/ocaml/duniverse/cmdliner/B0.ml          |    9 +
>  tools/ocaml/duniverse/cmdliner/CHANGES.md     |  255 +++
>  tools/ocaml/duniverse/cmdliner/LICENSE.md     |   13 +
>  tools/ocaml/duniverse/cmdliner/Makefile       |   77 +
>  tools/ocaml/duniverse/cmdliner/README.md      |   51 +
>  tools/ocaml/duniverse/cmdliner/_tags          |    3 +
>  tools/ocaml/duniverse/cmdliner/build.ml       |  155 ++
>  tools/ocaml/duniverse/cmdliner/cmdliner.opam  |   32 +
>  tools/ocaml/duniverse/cmdliner/doc/api.odocl  |    1 +
>  tools/ocaml/duniverse/cmdliner/dune-project   |    2 +
>  tools/ocaml/duniverse/cmdliner/pkg/META       |    7 +
>  tools/ocaml/duniverse/cmdliner/pkg/pkg.ml     |   33 +
>  .../ocaml/duniverse/cmdliner/src/cmdliner.ml  |  309 ++++
>  .../ocaml/duniverse/cmdliner/src/cmdliner.mli | 1624
> +++++++++++++++++
>  .../duniverse/cmdliner/src/cmdliner.mllib     |   11 +
>  .../duniverse/cmdliner/src/cmdliner_arg.ml    |  356 ++++
>  .../duniverse/cmdliner/src/cmdliner_arg.mli   |  111 ++
>  .../duniverse/cmdliner/src/cmdliner_base.ml   |  302 +++
>  .../duniverse/cmdliner/src/cmdliner_base.mli  |   68 +
>  .../duniverse/cmdliner/src/cmdliner_cline.ml  |  199 ++
>  .../duniverse/cmdliner/src/cmdliner_cline.mli |   34 +
>  .../duniverse/cmdliner/src/cmdliner_docgen.ml |  352 ++++
>  .../cmdliner/src/cmdliner_docgen.mli          |   30 +
>  .../duniverse/cmdliner/src/cmdliner_info.ml   |  233 +++
>  .../duniverse/cmdliner/src/cmdliner_info.mli  |  140 ++
>  .../cmdliner/src/cmdliner_manpage.ml          |  502 +++++
>  .../cmdliner/src/cmdliner_manpage.mli         |  100 +
>  .../duniverse/cmdliner/src/cmdliner_msg.ml    |  116 ++
>  .../duniverse/cmdliner/src/cmdliner_msg.mli   |   56 +
>  .../cmdliner/src/cmdliner_suggest.ml          |   54 +
>  .../cmdliner/src/cmdliner_suggest.mli         |   25 +
>  .../duniverse/cmdliner/src/cmdliner_term.ml   |   41 +
>  .../duniverse/cmdliner/src/cmdliner_term.mli  |   40 +
>  .../duniverse/cmdliner/src/cmdliner_trie.ml   |   97 +
>  .../duniverse/cmdliner/src/cmdliner_trie.mli  |   35 +
>  tools/ocaml/duniverse/cmdliner/src/dune       |    4 +
>  tools/ocaml/duniverse/cmdliner/test/chorus.ml |   31 +
>  tools/ocaml/duniverse/cmdliner/test/cp_ex.ml  |   54 +
>  .../ocaml/duniverse/cmdliner/test/darcs_ex.ml |  149 ++
>  tools/ocaml/duniverse/cmdliner/test/dune      |   12 +
>  tools/ocaml/duniverse/cmdliner/test/revolt.ml |    9 +
>  tools/ocaml/duniverse/cmdliner/test/rm_ex.ml  |   53 +
>  .../ocaml/duniverse/cmdliner/test/tail_ex.ml  |   73 +
>  .../ocaml/duniverse/cmdliner/test/test_man.ml |  100 +
>  .../duniverse/cmdliner/test/test_man_utf8.ml  |   11 +
>  .../duniverse/cmdliner/test/test_opt_req.ml   |   13 +
>  .../ocaml/duniverse/cmdliner/test/test_pos.ml |   13 +
>  .../duniverse/cmdliner/test/test_pos_all.ml   |   11 +
>  .../duniverse/cmdliner/test/test_pos_left.ml  |   11 +
>  .../duniverse/cmdliner/test/test_pos_req.ml   |   15 +
>  .../duniverse/cmdliner/test/test_pos_rev.ml   |   14 +
>  .../duniverse/cmdliner/test/test_term_dups.ml |   19 +
>  .../cmdliner/test/test_with_used_args.ml      |   18 +
>  tools/ocaml/duniverse/cppo/.gitignore         |    5 +
>  tools/ocaml/duniverse/cppo/.ocp-indent        |   22 +
>  tools/ocaml/duniverse/cppo/.travis.yml        |   16 +
>  tools/ocaml/duniverse/cppo/CODEOWNERS         |    8 +
>  tools/ocaml/duniverse/cppo/Changes            |   85 +
>  tools/ocaml/duniverse/cppo/INSTALL.md         |   17 +
>  tools/ocaml/duniverse/cppo/LICENSE.md         |   24 +
>  tools/ocaml/duniverse/cppo/Makefile           |   18 +
>  tools/ocaml/duniverse/cppo/README.md          |  521 ++++++
>  tools/ocaml/duniverse/cppo/VERSION            |    1 +
>  tools/ocaml/duniverse/cppo/appveyor.yml       |   14 +
>  tools/ocaml/duniverse/cppo/cppo.opam          |   31 +
>  .../ocaml/duniverse/cppo/cppo_ocamlbuild.opam |   27 +
>  tools/ocaml/duniverse/cppo/dune-project       |    3 +
>  tools/ocaml/duniverse/cppo/examples/Makefile  |    8 +
>  tools/ocaml/duniverse/cppo/examples/debug.ml  |    7 +
>  tools/ocaml/duniverse/cppo/examples/dune      |   32 +
>  tools/ocaml/duniverse/cppo/examples/french.ml |   34 +
>  tools/ocaml/duniverse/cppo/examples/lexer.mll |    9 +
>  .../duniverse/cppo/ocamlbuild_plugin/_tags    |    1 +
>  .../duniverse/cppo/ocamlbuild_plugin/dune     |    6 +
>  .../cppo/ocamlbuild_plugin/ocamlbuild_cppo.ml |   35 +
>  .../ocamlbuild_plugin/ocamlbuild_cppo.mli     |    9 +
>  tools/ocaml/duniverse/cppo/src/compat.ml      |    7 +
>  .../ocaml/duniverse/cppo/src/cppo_command.ml  |   63 +
>  .../ocaml/duniverse/cppo/src/cppo_command.mli |   11 +
>  tools/ocaml/duniverse/cppo/src/cppo_eval.ml   |  697 +++++++
>  tools/ocaml/duniverse/cppo/src/cppo_eval.mli  |   29 +
>  tools/ocaml/duniverse/cppo/src/cppo_lexer.mll |  721 ++++++++
>  tools/ocaml/duniverse/cppo/src/cppo_main.ml   |  230 +++
>  .../ocaml/duniverse/cppo/src/cppo_parser.mly  |  266 +++
>  tools/ocaml/duniverse/cppo/src/cppo_types.ml  |   98 +
>  tools/ocaml/duniverse/cppo/src/cppo_types.mli |   70 +
>  .../ocaml/duniverse/cppo/src/cppo_version.mli |    1 +
>  tools/ocaml/duniverse/cppo/src/dune           |   21 +
>  tools/ocaml/duniverse/cppo/test/capital.cppo  |    6 +
>  tools/ocaml/duniverse/cppo/test/capital.ref   |    6 +
>  tools/ocaml/duniverse/cppo/test/comments.cppo |    7 +
>  tools/ocaml/duniverse/cppo/test/comments.ref  |    8 +
>  tools/ocaml/duniverse/cppo/test/cond.cppo     |   47 +
>  tools/ocaml/duniverse/cppo/test/cond.ref      |   17 +
>  tools/ocaml/duniverse/cppo/test/dune          |  130 ++
>  tools/ocaml/duniverse/cppo/test/ext.cppo      |   10 +
>  tools/ocaml/duniverse/cppo/test/ext.ref       |   28 +
>  tools/ocaml/duniverse/cppo/test/incl.cppo     |    3 +
>  tools/ocaml/duniverse/cppo/test/incl2.cppo    |    1 +
>  tools/ocaml/duniverse/cppo/test/loc.cppo      |    8 +
>  tools/ocaml/duniverse/cppo/test/loc.ref       |   21 +
>  .../ocaml/duniverse/cppo/test/paren_arg.cppo  |    3 +
>  tools/ocaml/duniverse/cppo/test/paren_arg.ref |    4 +
>  tools/ocaml/duniverse/cppo/test/source.sh     |   13 +
>  tools/ocaml/duniverse/cppo/test/test.cppo     |  144 ++
>  tools/ocaml/duniverse/cppo/test/tuple.cppo    |   38 +
>  tools/ocaml/duniverse/cppo/test/tuple.ref     |   20 +
>  .../ocaml/duniverse/cppo/test/unmatched.cppo  |   14 +
>  tools/ocaml/duniverse/cppo/test/unmatched.ref |   15 +
>  tools/ocaml/duniverse/cppo/test/version.cppo  |   30 +
>  tools/ocaml/duniverse/crowbar/.gitignore      |    5 +
>  tools/ocaml/duniverse/crowbar/CHANGES.md      |    9 +
>  tools/ocaml/duniverse/crowbar/LICENSE.md      |    8 +
>  tools/ocaml/duniverse/crowbar/README.md       |   82 +
>  tools/ocaml/duniverse/crowbar/crowbar.opam    |   33 +
>  tools/ocaml/duniverse/crowbar/dune            |    1 +
>  tools/ocaml/duniverse/crowbar/dune-project    |    2 +
>  .../duniverse/crowbar/examples/.gitignore     |    1 +
>  .../duniverse/crowbar/examples/calendar/dune  |    3 +
>  .../examples/calendar/test_calendar.ml        |   29 +
>  .../duniverse/crowbar/examples/fpath/dune     |    4 +
>  .../crowbar/examples/fpath/test_fpath.ml      |   18 +
>  .../duniverse/crowbar/examples/input/testcase |    1 +
>  .../ocaml/duniverse/crowbar/examples/map/dune |    3 +
>  .../crowbar/examples/map/test_map.ml          |   47 +
>  .../duniverse/crowbar/examples/pprint/dune    |    3 +
>  .../crowbar/examples/pprint/test_pprint.ml    |   39 +
>  .../crowbar/examples/serializer/dune          |    3 +
>  .../crowbar/examples/serializer/serializer.ml |   34 +
>  .../examples/serializer/test_serializer.ml    |   47 +
>  .../duniverse/crowbar/examples/uunf/dune      |    3 +
>  .../crowbar/examples/uunf/test_uunf.ml        |   75 +
>  .../duniverse/crowbar/examples/xmldiff/dune   |    3 +
>  .../crowbar/examples/xmldiff/test_xmldiff.ml  |   42 +
>  tools/ocaml/duniverse/crowbar/src/crowbar.ml  |  582 ++++++
>  tools/ocaml/duniverse/crowbar/src/crowbar.mli |  251 +++
>  tools/ocaml/duniverse/crowbar/src/dune        |    3 +
>  tools/ocaml/duniverse/crowbar/src/todo        |   16 +
>  tools/ocaml/duniverse/csexp/CHANGES.md        |   45 +
>  tools/ocaml/duniverse/csexp/LICENSE.md        |   21 +
>  tools/ocaml/duniverse/csexp/Makefile          |   23 +
>  tools/ocaml/duniverse/csexp/README.md         |   33 +
>  .../duniverse/csexp/bench/csexp_bench.ml      |   22 +
>  tools/ocaml/duniverse/csexp/bench/dune        |   11 +
>  tools/ocaml/duniverse/csexp/bench/main.ml     |    1 +
>  tools/ocaml/duniverse/csexp/bench/runner.sh   |    4 +
>  tools/ocaml/duniverse/csexp/csexp.opam        |   51 +
>  .../ocaml/duniverse/csexp/csexp.opam.template |   14 +
>  tools/ocaml/duniverse/csexp/dune-project      |   42 +
>  .../ocaml/duniverse/csexp/dune-workspace.dev  |    6 +
>  tools/ocaml/duniverse/csexp/src/csexp.ml      |  333 ++++
>  tools/ocaml/duniverse/csexp/src/csexp.mli     |  369 ++++
>  tools/ocaml/duniverse/csexp/src/dune          |    3 +
>  tools/ocaml/duniverse/csexp/test/dune         |    6 +
>  tools/ocaml/duniverse/csexp/test/test.ml      |  142 ++
>  tools/ocaml/duniverse/dune                    |    4 +
>  tools/ocaml/duniverse/fmt/.gitignore          |    8 +
>  tools/ocaml/duniverse/fmt/.ocp-indent         |    1 +
>  tools/ocaml/duniverse/fmt/CHANGES.md          |   98 +
>  tools/ocaml/duniverse/fmt/LICENSE.md          |   13 +
>  tools/ocaml/duniverse/fmt/README.md           |   35 +
>  tools/ocaml/duniverse/fmt/_tags               |    7 +
>  tools/ocaml/duniverse/fmt/doc/api.odocl       |    3 +
>  tools/ocaml/duniverse/fmt/doc/index.mld       |   11 +
>  tools/ocaml/duniverse/fmt/dune-project        |    2 +
>  tools/ocaml/duniverse/fmt/fmt.opam            |   35 +
>  tools/ocaml/duniverse/fmt/pkg/META            |   40 +
>  tools/ocaml/duniverse/fmt/pkg/pkg.ml          |   18 +
>  tools/ocaml/duniverse/fmt/src/dune            |   30 +
>  tools/ocaml/duniverse/fmt/src/fmt.ml          |  787 ++++++++
>  tools/ocaml/duniverse/fmt/src/fmt.mli         |  689 +++++++
>  tools/ocaml/duniverse/fmt/src/fmt.mllib       |    1 +
>  tools/ocaml/duniverse/fmt/src/fmt_cli.ml      |   32 +
>  tools/ocaml/duniverse/fmt/src/fmt_cli.mli     |   45 +
>  tools/ocaml/duniverse/fmt/src/fmt_cli.mllib   |    1 +
>  tools/ocaml/duniverse/fmt/src/fmt_top.ml      |   23 +
>  tools/ocaml/duniverse/fmt/src/fmt_top.mllib   |    1 +
>  tools/ocaml/duniverse/fmt/src/fmt_tty.ml      |   78 +
>  tools/ocaml/duniverse/fmt/src/fmt_tty.mli     |   50 +
>  tools/ocaml/duniverse/fmt/src/fmt_tty.mllib   |    1 +
>  .../duniverse/fmt/src/fmt_tty_top_init.ml     |   23 +
>  tools/ocaml/duniverse/fmt/test/test.ml        |  322 ++++
>  .../duniverse/ocaml-afl-persistent/.gitignore |    2 +
>  .../duniverse/ocaml-afl-persistent/CHANGES.md |   22 +
>  .../duniverse/ocaml-afl-persistent/LICENSE.md |    8 +
>  .../duniverse/ocaml-afl-persistent/README.md  |   17 +
>  .../ocaml-afl-persistent/afl-persistent.opam  |   49 +
>  .../afl-persistent.opam.template              |   16 +
>  .../aflPersistent.available.ml                |   21 +
>  .../ocaml-afl-persistent/aflPersistent.mli    |    1 +
>  .../aflPersistent.stub.ml                     |    1 +
>  .../duniverse/ocaml-afl-persistent/detect.sh  |   43 +
>  .../ocaml/duniverse/ocaml-afl-persistent/dune |   20 +
>  .../ocaml-afl-persistent/dune-project         |   23 +
>  .../duniverse/ocaml-afl-persistent/test.ml    |    3 +
>  .../ocaml-afl-persistent/test/harness.ml      |   22 +
>  .../ocaml-afl-persistent/test/test.ml         |   73 +
>  .../ocaml-afl-persistent/test/test.sh         |   33 +
>  .../ocaml/duniverse/ocplib-endian/.gitignore  |    3 +
>  .../ocaml/duniverse/ocplib-endian/.travis.yml |   19 +
>  .../ocaml/duniverse/ocplib-endian/CHANGES.md  |   55 +
>  .../ocaml/duniverse/ocplib-endian/COPYING.txt |  521 ++++++
>  tools/ocaml/duniverse/ocplib-endian/Makefile  |   13 +
>  tools/ocaml/duniverse/ocplib-endian/README.md |   16 +
>  .../duniverse/ocplib-endian/dune-project      |    2 +
>  .../ocplib-endian/ocplib-endian.opam          |   30 +
>  .../ocplib-endian/src/be_ocaml_401.ml         |   32 +
>  .../duniverse/ocplib-endian/src/common.ml     |   24 +
>  .../ocplib-endian/src/common_401.cppo.ml      |  100 +
>  .../ocplib-endian/src/common_float.ml         |    5 +
>  tools/ocaml/duniverse/ocplib-endian/src/dune  |   75 +
>  .../ocplib-endian/src/endianBigstring.cppo.ml |  112 ++
>  .../src/endianBigstring.cppo.mli              |  128 ++
>  .../ocplib-endian/src/endianBytes.cppo.ml     |  130 ++
>  .../ocplib-endian/src/endianBytes.cppo.mli    |  124 ++
>  .../ocplib-endian/src/endianString.cppo.ml    |  118 ++
>  .../ocplib-endian/src/endianString.cppo.mli   |  121 ++
>  .../ocplib-endian/src/le_ocaml_401.ml         |   32 +
>  .../ocplib-endian/src/ne_ocaml_401.ml         |   20 +
>  .../duniverse/ocplib-endian/tests/bench.ml    |  436 +++++
>  .../ocaml/duniverse/ocplib-endian/tests/dune  |   35 +
>  .../duniverse/ocplib-endian/tests/test.ml     |   39 +
>  .../tests/test_bigstring.cppo.ml              |  191 ++
>  .../ocplib-endian/tests/test_bytes.cppo.ml    |  185 ++
>  .../ocplib-endian/tests/test_string.cppo.ml   |  185 ++
>  tools/ocaml/duniverse/result/CHANGES.md       |   15 +
>  tools/ocaml/duniverse/result/LICENSE.md       |   24 +
>  tools/ocaml/duniverse/result/Makefile         |   17 +
>  tools/ocaml/duniverse/result/README.md        |    5 +
>  tools/ocaml/duniverse/result/dune             |   12 +
>  tools/ocaml/duniverse/result/dune-project     |    3 +
>  .../duniverse/result/result-as-alias-4.08.ml  |    2 +
>  .../ocaml/duniverse/result/result-as-alias.ml |    2 +
>  .../duniverse/result/result-as-newtype.ml     |    2 +
>  tools/ocaml/duniverse/result/result.opam      |   18 +
>  tools/ocaml/duniverse/result/which_result.ml  |   14 +
>  tools/ocaml/duniverse/stdlib-shims/CHANGES.md |    5 +
>  tools/ocaml/duniverse/stdlib-shims/LICENSE    |  203 +++
>  tools/ocaml/duniverse/stdlib-shims/README.md  |    2 +
>  .../ocaml/duniverse/stdlib-shims/dune-project |    1 +
>  .../duniverse/stdlib-shims/dune-workspace.dev |   14 +
>  tools/ocaml/duniverse/stdlib-shims/src/dune   |   97 +
>  .../duniverse/stdlib-shims/stdlib-shims.opam  |   24 +
>  tools/ocaml/duniverse/stdlib-shims/test/dune  |    3 +
>  .../ocaml/duniverse/stdlib-shims/test/test.ml |    2 +
>  tools/ocaml/libs/eventchn/dune                |    8 +
>  tools/ocaml/libs/mmap/Makefile                |   19 +-
>  tools/ocaml/libs/mmap/dune                    |   18 +
>  tools/ocaml/libs/mmap/gnt.ml                  |   62 +
>  tools/ocaml/libs/mmap/gnt.mli                 |   87 +
>  tools/ocaml/libs/mmap/gnttab_stubs.c          |  106 ++
>  tools/ocaml/libs/mmap/mmap_stubs.h            |   11 +-
>  tools/ocaml/libs/mmap/xenmmap.ml              |   17 +-
>  tools/ocaml/libs/mmap/xenmmap.mli             |   13 +-
>  tools/ocaml/libs/mmap/xenmmap_stubs.c         |   86 +-
>  tools/ocaml/libs/xb/dune                      |    7 +
>  tools/ocaml/libs/xb/xb.ml                     |   10 +-
>  tools/ocaml/libs/xb/xb.mli                    |    4 +-
>  tools/ocaml/libs/xb/xs_ring_stubs.c           |   14 +-
>  tools/ocaml/libs/xc/dune                      |    9 +
>  tools/ocaml/libs/xc/xenctrl.ml                |    6 +-
>  tools/ocaml/libs/xc/xenctrl.mli               |    5 +-
>  tools/ocaml/libs/xs/dune                      |    4 +
>  tools/ocaml/xen.opam                          |    1 +
>  tools/ocaml/xen.opam.locked                   |  119 ++
>  tools/ocaml/xenstore.opam                     |    1 +
>  tools/ocaml/xenstored.opam                    |   21 +
>  tools/ocaml/xenstored/Makefile                |    4 +-
>  tools/ocaml/xenstored/connection.ml           |   63 +-
>  tools/ocaml/xenstored/disk.ml                 |  319 ++++
>  tools/ocaml/xenstored/domain.ml               |    9 +-
>  tools/ocaml/xenstored/domains.ml              |   13 +-
>  tools/ocaml/xenstored/dune                    |   19 +
>  tools/ocaml/xenstored/parse_arg.ml            |    6 +-
>  tools/ocaml/xenstored/perms.ml                |    2 +
>  tools/ocaml/xenstored/poll.ml                 |    3 +-
>  tools/ocaml/xenstored/process.ml              |   30 +-
>  tools/ocaml/xenstored/store.ml                |    1 +
>  tools/ocaml/xenstored/test/dune               |   23 +
>  tools/ocaml/xenstored/test/generator.ml       |  189 ++
>  tools/ocaml/xenstored/test/gnt.ml             |   52 +
>  tools/ocaml/xenstored/test/model.ml           |  253 +++
>  tools/ocaml/xenstored/test/old/arbitrary.ml   |  261 +++
>  tools/ocaml/xenstored/test/old/gen_paths.m
bCAgIHwgICA2NiArDQo+ICAuLi4veGVuc3RvcmVkL3Rlc3Qvb2xkL3hlbnN0b3JlZF90ZXN0Lm1s
ICAgICAgfCAgNTI3ICsrKysrKw0KPiAgdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Rlc3QvcGF0aHRy
ZWUubWwgICAgICAgIHwgICA0MCArDQo+ICB0b29scy9vY2FtbC94ZW5zdG9yZWQvdGVzdC90ZXN0
YWJsZS5tbCAgICAgICAgfCAgMzgwICsrKysNCj4gIHRvb2xzL29jYW1sL3hlbnN0b3JlZC90ZXN0
L3R5cGVzLm1sICAgICAgICAgICB8ICA0MzcgKysrKysNCj4gIC4uLi94ZW5tbWFwLm1saSA9PiB4
ZW5zdG9yZWQvdGVzdC94ZW5jdHJsLm1sfSB8ICAgNDAgKy0NCj4gIHRvb2xzL29jYW1sL3hlbnN0
b3JlZC90ZXN0L3hlbmV2ZW50Y2huLm1sICAgICB8ICAgNTAgKw0KPiAgdG9vbHMvb2NhbWwveGVu
c3RvcmVkL3Rlc3QveGVuc3RvcmVkX3Rlc3QubWwgIHwgIDE0NyArKw0KPiAgdG9vbHMvb2NhbWwv
eGVuc3RvcmVkL3Rlc3QveHNfcHJvdG9jb2wubWwgICAgIHwgIDczMyArKysrKysrKw0KPiAgdG9v
bHMvb2NhbWwveGVuc3RvcmVkL3RyYW5zYWN0aW9uLm1sICAgICAgICAgIHwgIDExOSArLQ0KPiAg
dG9vbHMvb2NhbWwveGVuc3RvcmVkL3hlbnN0b3JlZC5tbCAgICAgICAgICAgIHwgIDIyNCArKy0N
Cj4gIHRvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5zdG9yZWRfbWFpbi5tbCAgICAgICB8ICAgIDEg
Kw0KPiAgMzAxIGZpbGVzIGNoYW5nZWQsIDIyNzI0IGluc2VydGlvbnMoKyksIDE5MyBkZWxldGlv
bnMoLSkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC8uZ2l0aWdub3JlDQo+ICBj
cmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuZS1wcm9qZWN0DQo+ICBjcmVhdGUgbW9k
ZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NtZGxpbmVyLy5naXRpZ25vcmUNCj4gIGNy
ZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvLm9jcC1pbmRl
bnQNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIv
QjAubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGlu
ZXIvQ0hBTkdFUy5tZA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJz
ZS9jbWRsaW5lci9MSUNFTlNFLm1kDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwv
ZHVuaXZlcnNlL2NtZGxpbmVyL01ha2VmaWxlDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMv
b2NhbWwvZHVuaXZlcnNlL2NtZGxpbmVyL1JFQURNRS5tZA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0
IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci9fdGFncw0KPiAgY3JlYXRlIG1vZGUgMTAw
NzU1IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci9idWlsZC5tbA0KPiAgY3JlYXRlIG1v
ZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci9jbWRsaW5lci5vcGFtDQo+
ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NtZGxpbmVyL2RvYy9h
cGkub2RvY2wNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21k
bGluZXIvZHVuZS1wcm9qZWN0DQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVu
aXZlcnNlL2NtZGxpbmVyL3BrZy9NRVRBDQo+ICBjcmVhdGUgbW9kZSAxMDA3NTUgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2NtZGxpbmVyL3BrZy9wa2cubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0
b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvc3JjL2NtZGxpbmVyLm1sDQo+ICBjcmVhdGUg
bW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NtZGxpbmVyL3NyYy9jbWRsaW5lci5t
bGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIv
c3JjL2NtZGxpbmVyLm1sbGliDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwv
ZHVuaXZlcnNlL2NtZGxpbmVyL3NyYy9jbWRsaW5lcl9hcmcubWwNCj4gIGNyZWF0ZSBtb2RlIDEw
MDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvc3JjL2NtZGxpbmVyX2FyZy5t
bGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGlu
ZXIvc3JjL2NtZGxpbmVyX2Jhc2UubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9v
Y2FtbC9kdW5pdmVyc2UvY21kbGluZXIvc3JjL2NtZGxpbmVyX2Jhc2UubWxpDQo+ICBjcmVhdGUg
bW9kZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NtZGxpbmVyL3NyYy9jbWRsaW5l
cl9jbGluZS5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJz
ZS9jbWRsaW5lci9zcmMvY21kbGluZXJfY2xpbmUubWxpDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQN
Cj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NtZGxpbmVyL3NyYy9jbWRsaW5lcl9kb2NnZW4ubWwN
Cj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIv
c3JjL2NtZGxpbmVyX2RvY2dlbi5tbGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9v
Y2FtbC9kdW5pdmVyc2UvY21kbGluZXIvc3JjL2NtZGxpbmVyX2luZm8ubWwNCj4gIGNyZWF0ZSBt
b2RlIDEwMDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvc3JjL2NtZGxpbmVy
X2luZm8ubWxpDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNl
L2NtZGxpbmVyL3NyYy9jbWRsaW5lcl9tYW5wYWdlLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQN
Cj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NtZGxpbmVyL3NyYy9jbWRsaW5lcl9tYW5wYWdlLm1s
aQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5l
ci9zcmMvY21kbGluZXJfbXNnLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2NtZGxpbmVyL3NyYy9jbWRsaW5lcl9tc2cubWxpDQo+ICBjcmVhdGUgbW9k
ZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NtZGxpbmVyL3NyYy9jbWRsaW5lcl9z
dWdnZXN0Lm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNl
L2NtZGxpbmVyL3NyYy9jbWRsaW5lcl9zdWdnZXN0Lm1saQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0
DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci9zcmMvY21kbGluZXJfdGVybS5tbA0K
PiAgY3JlYXRlIG1vZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci9z
cmMvY21kbGluZXJfdGVybS5tbGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2Ft
bC9kdW5pdmVyc2UvY21kbGluZXIvc3JjL2NtZGxpbmVyX3RyaWUubWwNCj4gIGNyZWF0ZSBtb2Rl
IDEwMDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvc3JjL2NtZGxpbmVyX3Ry
aWUubWxpDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NtZGxp
bmVyL3NyYy9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNl
L2NtZGxpbmVyL3Rlc3QvY2hvcnVzLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2NtZGxpbmVyL3Rlc3QvY3BfZXgubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0
NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvdGVzdC9kYXJjc19leC5tbA0KPiAgY3Jl
YXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci90ZXN0L2R1bmUN
Cj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvdGVz
dC9yZXZvbHQubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uv
Y21kbGluZXIvdGVzdC9ybV9leC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1s
L2R1bml2ZXJzZS9jbWRsaW5lci90ZXN0L3RhaWxfZXgubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0
NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvdGVzdC90ZXN0X21hbi5tbA0KPiAgY3Jl
YXRlIG1vZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci90ZXN0L3Rl
c3RfbWFuX3V0ZjgubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2FtbC9kdW5p
dmVyc2UvY21kbGluZXIvdGVzdC90ZXN0X29wdF9yZXEubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0
NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvdGVzdC90ZXN0X3Bvcy5tbA0KPiAgY3Jl
YXRlIG1vZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci90ZXN0L3Rl
c3RfcG9zX2FsbC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2
ZXJzZS9jbWRsaW5lci90ZXN0L3Rlc3RfcG9zX2xlZnQubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0
NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIvdGVzdC90ZXN0X3Bvc19yZXEubWwN
Cj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY21kbGluZXIv
dGVzdC90ZXN0X3Bvc19yZXYubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2Ft
bC9kdW5pdmVyc2UvY21kbGluZXIvdGVzdC90ZXN0X3Rlcm1fZHVwcy5tbA0KPiAgY3JlYXRlIG1v
ZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jbWRsaW5lci90ZXN0L3Rlc3Rfd2l0
aF91c2VkX2FyZ3MubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVy
c2UvY3Bwby8uZ2l0aWdub3JlDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVu
aXZlcnNlL2NwcG8vLm9jcC1pbmRlbnQNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2Ft
bC9kdW5pdmVyc2UvY3Bwby8udHJhdmlzLnltbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xz
L29jYW1sL2R1bml2ZXJzZS9jcHBvL0NPREVPV05FUlMNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0
b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9DaGFuZ2VzDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQg
dG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vSU5TVEFMTC5tZA0KPiAgY3JlYXRlIG1vZGUgMTAw
NjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL0xJQ0VOU0UubWQNCj4gIGNyZWF0ZSBtb2Rl
IDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9NYWtlZmlsZQ0KPiAgY3JlYXRlIG1v
ZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL1JFQURNRS5tZA0KPiAgY3JlYXRl
IG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL1ZFUlNJT04NCj4gIGNyZWF0
ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9hcHB2ZXlvci55bWwNCj4g
IGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9jcHBvLm9wYW0N
Cj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9jcHBvX29j
YW1sYnVpbGQub3BhbQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJz
ZS9jcHBvL2R1bmUtcHJvamVjdA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1
bml2ZXJzZS9jcHBvL2V4YW1wbGVzL01ha2VmaWxlDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9v
bHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vZXhhbXBsZXMvZGVidWcubWwNCj4gIGNyZWF0ZSBtb2Rl
IDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9leGFtcGxlcy9kdW5lDQo+ICBjcmVh
dGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vZXhhbXBsZXMvZnJlbmNo
Lm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vZXhh
bXBsZXMvbGV4ZXIubWxsDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwvZHVu
aXZlcnNlL2NwcG8vb2NhbWxidWlsZF9wbHVnaW4vX3RhZ3MNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0
NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9vY2FtbGJ1aWxkX3BsdWdpbi9kdW5lDQo+ICBj
cmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vb2NhbWxidWls
ZF9wbHVnaW4vb2NhbWxidWlsZF9jcHBvLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9v
bHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vb2NhbWxidWlsZF9wbHVnaW4vb2NhbWxidWlsZF9jcHBv
Lm1saQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL3Ny
Yy9jb21wYXQubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uv
Y3Bwby9zcmMvY3Bwb19jb21tYW5kLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2NwcG8vc3JjL2NwcG9fY29tbWFuZC5tbGkNCj4gIGNyZWF0ZSBtb2RlIDEw
MDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9zcmMvY3Bwb19ldmFsLm1sDQo+ICBjcmVh
dGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vc3JjL2NwcG9fZXZhbC5t
bGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby9zcmMv
Y3Bwb19sZXhlci5tbGwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVy
c2UvY3Bwby9zcmMvY3Bwb19tYWluLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2NwcG8vc3JjL2NwcG9fcGFyc2VyLm1seQ0KPiAgY3JlYXRlIG1vZGUgMTAw
NjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL3NyYy9jcHBvX3R5cGVzLm1sDQo+ICBjcmVh
dGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vc3JjL2NwcG9fdHlwZXMu
bWxpDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vc3Jj
L2NwcG9fdmVyc2lvbi5tbGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5p
dmVyc2UvY3Bwby9zcmMvZHVuZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1
bml2ZXJzZS9jcHBvL3Rlc3QvY2FwaXRhbC5jcHBvDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9v
bHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vdGVzdC9jYXBpdGFsLnJlZg0KPiAgY3JlYXRlIG1vZGUg
MTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL3Rlc3QvY29tbWVudHMuY3Bwbw0KPiAg
Y3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL3Rlc3QvY29tbWVu
dHMucmVmDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8v
dGVzdC9jb25kLmNwcG8NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVy
c2UvY3Bwby90ZXN0L2NvbmQucmVmDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwv
ZHVuaXZlcnNlL2NwcG8vdGVzdC9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2NwcG8vdGVzdC9leHQuY3Bwbw0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRv
b2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL3Rlc3QvZXh0LnJlZg0KPiAgY3JlYXRlIG1vZGUgMTAw
NjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL3Rlc3QvaW5jbC5jcHBvDQo+ICBjcmVhdGUg
bW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vdGVzdC9pbmNsMi5jcHBvDQo+
ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vdGVzdC9sb2Mu
Y3Bwbw0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL3Rl
c3QvbG9jLnJlZg0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9j
cHBvL3Rlc3QvcGFyZW5fYXJnLmNwcG8NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2Ft
bC9kdW5pdmVyc2UvY3Bwby90ZXN0L3BhcmVuX2FyZy5yZWYNCj4gIGNyZWF0ZSBtb2RlIDEwMDc1
NSB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Bwby90ZXN0L3NvdXJjZS5zaA0KPiAgY3JlYXRlIG1v
ZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcHBvL3Rlc3QvdGVzdC5jcHBvDQo+ICBj
cmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vdGVzdC90dXBsZS5j
cHBvDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vdGVz
dC90dXBsZS5yZWYNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uv
Y3Bwby90ZXN0L3VubWF0Y2hlZC5jcHBvDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2NwcG8vdGVzdC91bm1hdGNoZWQucmVmDQo+ICBjcmVhdGUgbW9kZSAxMDA2
NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NwcG8vdGVzdC92ZXJzaW9uLmNwcG8NCj4gIGNyZWF0
ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Jvd2Jhci8uZ2l0aWdub3JlDQo+
ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2Nyb3diYXIvQ0hBTkdF
Uy5tZA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcm93YmFy
L0xJQ0VOU0UubWQNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uv
Y3Jvd2Jhci9SRUFETUUubWQNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5p
dmVyc2UvY3Jvd2Jhci9jcm93YmFyLm9wYW0NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9v
Y2FtbC9kdW5pdmVyc2UvY3Jvd2Jhci9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMv
b2NhbWwvZHVuaXZlcnNlL2Nyb3diYXIvZHVuZS1wcm9qZWN0DQo+ICBjcmVhdGUgbW9kZSAxMDA2
NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2Nyb3diYXIvZXhhbXBsZXMvLmdpdGlnbm9yZQ0KPiAg
Y3JlYXRlIG1vZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcm93YmFyL2V4YW1w
bGVzL2NhbGVuZGFyL2R1bmUNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2FtbC9k
dW5pdmVyc2UvY3Jvd2Jhci9leGFtcGxlcy9jYWxlbmRhci90ZXN0X2NhbGVuZGFyLm1sDQo+ICBj
cmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2Nyb3diYXIvZXhhbXBsZXMv
ZnBhdGgvZHVuZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0DQo+IHRvb2xzL29jYW1sL2R1bml2ZXJz
ZS9jcm93YmFyL2V4YW1wbGVzL2ZwYXRoL3Rlc3RfZnBhdGgubWwNCj4gIGNyZWF0ZSBtb2RlIDEw
MDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Jvd2Jhci9leGFtcGxlcy9pbnB1dC90ZXN0
Y2FzZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcm93YmFy
L2V4YW1wbGVzL21hcC9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwv
ZHVuaXZlcnNlL2Nyb3diYXIvZXhhbXBsZXMvbWFwL3Rlc3RfbWFwLm1sDQo+ICBjcmVhdGUgbW9k
ZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2Nyb3diYXIvZXhhbXBsZXMvcHByaW50
L2R1bmUNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Jv
d2Jhci9leGFtcGxlcy9wcHJpbnQvdGVzdF9wcHJpbnQubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0
NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Jvd2Jhci9leGFtcGxlcy9zZXJpYWxpemVyL2R1
bmUNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Jvd2Jh
ci9leGFtcGxlcy9zZXJpYWxpemVyL3NlcmlhbGl6ZXIubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0
NA0KPiB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Jvd2Jhci9leGFtcGxlcy9zZXJpYWxpemVyL3Rl
c3Rfc2VyaWFsaXplci5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2
ZXJzZS9jcm93YmFyL2V4YW1wbGVzL3V1bmYvZHVuZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0DQo+
IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcm93YmFyL2V4YW1wbGVzL3V1bmYvdGVzdF91dW5mLm1s
DQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2Nyb3diYXIv
ZXhhbXBsZXMveG1sZGlmZi9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQNCj4gdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2Nyb3diYXIvZXhhbXBsZXMveG1sZGlmZi90ZXN0X3htbGRpZmYubWwNCj4g
IGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3Jvd2Jhci9zcmMvY3Jv
d2Jhci5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jcm93
YmFyL3NyYy9jcm93YmFyLm1saQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1
bml2ZXJzZS9jcm93YmFyL3NyYy9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2Nyb3diYXIvc3JjL3RvZG8NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29s
cy9vY2FtbC9kdW5pdmVyc2UvY3NleHAvQ0hBTkdFUy5tZA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0
IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jc2V4cC9MSUNFTlNFLm1kDQo+ICBjcmVhdGUgbW9kZSAx
MDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NzZXhwL01ha2VmaWxlDQo+ICBjcmVhdGUgbW9k
ZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2NzZXhwL1JFQURNRS5tZA0KPiAgY3JlYXRl
IG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jc2V4cC9iZW5jaC9jc2V4cF9iZW5j
aC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jc2V4cC9i
ZW5jaC9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2Nz
ZXhwL2JlbmNoL21haW4ubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDc1NSB0b29scy9vY2FtbC9kdW5p
dmVyc2UvY3NleHAvYmVuY2gvcnVubmVyLnNoDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMv
b2NhbWwvZHVuaXZlcnNlL2NzZXhwL2NzZXhwLm9wYW0NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0
b29scy9vY2FtbC9kdW5pdmVyc2UvY3NleHAvY3NleHAub3BhbS50ZW1wbGF0ZQ0KPiAgY3JlYXRl
IG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9jc2V4cC9kdW5lLXByb2plY3QNCj4g
IGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvY3NleHAvZHVuZS13b3Jr
c3BhY2UuZGV2DQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2Nz
ZXhwL3NyYy9jc2V4cC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2
ZXJzZS9jc2V4cC9zcmMvY3NleHAubWxpDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL2NzZXhwL3NyYy9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMv
b2NhbWwvZHVuaXZlcnNlL2NzZXhwL3Rlc3QvZHVuZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRv
b2xzL29jYW1sL2R1bml2ZXJzZS9jc2V4cC90ZXN0L3Rlc3QubWwNCj4gIGNyZWF0ZSBtb2RlIDEw
MDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZHVuZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRv
b2xzL29jYW1sL2R1bml2ZXJzZS9mbXQvLmdpdGlnbm9yZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0
IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9mbXQvLm9jcC1pbmRlbnQNCj4gIGNyZWF0ZSBtb2RlIDEw
MDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10L0NIQU5HRVMubWQNCj4gIGNyZWF0ZSBtb2Rl
IDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10L0xJQ0VOU0UubWQNCj4gIGNyZWF0ZSBt
b2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10L1JFQURNRS5tZA0KPiAgY3JlYXRl
IG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9mbXQvX3RhZ3MNCj4gIGNyZWF0ZSBt
b2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10L2RvYy9hcGkub2RvY2wNCj4gIGNy
ZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10L2RvYy9pbmRleC5tbGQN
Cj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10L2R1bmUtcHJv
amVjdA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9mbXQvZm10
Lm9wYW0NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10L3Br
Zy9NRVRBDQo+ICBjcmVhdGUgbW9kZSAxMDA3NTUgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2ZtdC9w
a2cvcGtnLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2Zt
dC9zcmMvZHVuZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9m
bXQvc3JjL2ZtdC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJz
ZS9mbXQvc3JjL2ZtdC5tbGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5p
dmVyc2UvZm10L3NyYy9mbXQubWxsaWINCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2Ft
bC9kdW5pdmVyc2UvZm10L3NyYy9mbXRfY2xpLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9v
bHMvb2NhbWwvZHVuaXZlcnNlL2ZtdC9zcmMvZm10X2NsaS5tbGkNCj4gIGNyZWF0ZSBtb2RlIDEw
MDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10L3NyYy9mbXRfY2xpLm1sbGliDQo+ICBjcmVh
dGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL2ZtdC9zcmMvZm10X3RvcC5tbA0K
PiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9mbXQvc3JjL2ZtdF90
b3AubWxsaWINCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2UvZm10
L3NyYy9mbXRfdHR5Lm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZl
cnNlL2ZtdC9zcmMvZm10X3R0eS5tbGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2Ft
bC9kdW5pdmVyc2UvZm10L3NyYy9mbXRfdHR5Lm1sbGliDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQg
dG9vbHMvb2NhbWwvZHVuaXZlcnNlL2ZtdC9zcmMvZm10X3R0eV90b3BfaW5pdC5tbA0KPiAgY3Jl
YXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9mbXQvdGVzdC90ZXN0Lm1sDQo+
ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jYW1sLWFmbC0NCj4g
cGVyc2lzdGVudC8uZ2l0aWdub3JlDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwv
ZHVuaXZlcnNlL29jYW1sLWFmbC0NCj4gcGVyc2lzdGVudC9DSEFOR0VTLm1kDQo+ICBjcmVhdGUg
bW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jYW1sLWFmbC0NCj4gcGVyc2lzdGVu
dC9MSUNFTlNFLm1kDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNl
L29jYW1sLWFmbC0NCj4gcGVyc2lzdGVudC9SRUFETUUubWQNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0
NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvb2NhbWwtYWZsLXBlcnNpc3RlbnQvYWZsLQ0KPiBwZXJz
aXN0ZW50Lm9wYW0NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uv
b2NhbWwtYWZsLXBlcnNpc3RlbnQvYWZsLQ0KPiBwZXJzaXN0ZW50Lm9wYW0udGVtcGxhdGUNCj4g
IGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvb2NhbWwtYWZsLQ0KPiBw
ZXJzaXN0ZW50L2FmbFBlcnNpc3RlbnQuYXZhaWxhYmxlLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2
NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jYW1sLWFmbC0NCj4gcGVyc2lzdGVudC9hZmxQZXJz
aXN0ZW50Lm1saQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9v
Y2FtbC1hZmwtDQo+IHBlcnNpc3RlbnQvYWZsUGVyc2lzdGVudC5zdHViLm1sDQo+ICBjcmVhdGUg
bW9kZSAxMDA3NTUgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jYW1sLWFmbC0NCj4gcGVyc2lzdGVu
dC9kZXRlY3Quc2gNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uv
b2NhbWwtYWZsLXBlcnNpc3RlbnQvZHVuZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29j
YW1sL2R1bml2ZXJzZS9vY2FtbC1hZmwtcGVyc2lzdGVudC9kdW5lLQ0KPiBwcm9qZWN0DQo+ICBj
cmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jYW1sLWFmbC0NCj4gcGVy
c2lzdGVudC90ZXN0Lm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZl
cnNlL29jYW1sLWFmbC0NCj4gcGVyc2lzdGVudC90ZXN0L2hhcm5lc3MubWwNCj4gIGNyZWF0ZSBt
b2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvb2NhbWwtYWZsLQ0KPiBwZXJzaXN0ZW50
L3Rlc3QvdGVzdC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNzU1IHRvb2xzL29jYW1sL2R1bml2ZXJz
ZS9vY2FtbC1hZmwtDQo+IHBlcnNpc3RlbnQvdGVzdC90ZXN0LnNoDQo+ICBjcmVhdGUgbW9kZSAx
MDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jcGxpYi1lbmRpYW4vLmdpdGlnbm9yZQ0KPiAg
Y3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9vY3BsaWItZW5kaWFuLy50
cmF2aXMueW1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29j
cGxpYi1lbmRpYW4vQ0hBTkdFUy5tZA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1s
L2R1bml2ZXJzZS9vY3BsaWItZW5kaWFuL0NPUFlJTkcudHh0DQo+ICBjcmVhdGUgbW9kZSAxMDA2
NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jcGxpYi1lbmRpYW4vTWFrZWZpbGUNCj4gIGNyZWF0
ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvb2NwbGliLWVuZGlhbi9SRUFETUUu
bWQNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvb2NwbGliLWVu
ZGlhbi9kdW5lLXByb2plY3QNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5p
dmVyc2Uvb2NwbGliLWVuZGlhbi9vY3BsaWItDQo+IGVuZGlhbi5vcGFtDQo+ICBjcmVhdGUgbW9k
ZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jcGxpYi0NCj4gZW5kaWFuL3NyYy9iZV9v
Y2FtbF80MDEubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uv
b2NwbGliLWVuZGlhbi9zcmMvY29tbW9uLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMv
b2NhbWwvZHVuaXZlcnNlL29jcGxpYi0NCj4gZW5kaWFuL3NyYy9jb21tb25fNDAxLmNwcG8ubWwN
Cj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvb2NwbGliLQ0KPiBl
bmRpYW4vc3JjL2NvbW1vbl9mbG9hdC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29j
YW1sL2R1bml2ZXJzZS9vY3BsaWItZW5kaWFuL3NyYy9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2
NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jcGxpYi0NCj4gZW5kaWFuL3NyYy9lbmRpYW5CaWdz
dHJpbmcuY3Bwby5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJz
ZS9vY3BsaWItDQo+IGVuZGlhbi9zcmMvZW5kaWFuQmlnc3RyaW5nLmNwcG8ubWxpDQo+ICBjcmVh
dGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL29jcGxpYi0NCj4gZW5kaWFuL3Ny
Yy9lbmRpYW5CeXRlcy5jcHBvLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwv
ZHVuaXZlcnNlL29jcGxpYi0NCj4gZW5kaWFuL3NyYy9lbmRpYW5CeXRlcy5jcHBvLm1saQ0KPiAg
Y3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9vY3BsaWItDQo+IGVuZGlh
bi9zcmMvZW5kaWFuU3RyaW5nLmNwcG8ubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9v
Y2FtbC9kdW5pdmVyc2Uvb2NwbGliLQ0KPiBlbmRpYW4vc3JjL2VuZGlhblN0cmluZy5jcHBvLm1s
aQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9vY3BsaWItDQo+
IGVuZGlhbi9zcmMvbGVfb2NhbWxfNDAxLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMv
b2NhbWwvZHVuaXZlcnNlL29jcGxpYi0NCj4gZW5kaWFuL3NyYy9uZV9vY2FtbF80MDEubWwNCj4g
IGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvb2NwbGliLQ0KPiBlbmRp
YW4vdGVzdHMvYmVuY2gubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5p
dmVyc2Uvb2NwbGliLWVuZGlhbi90ZXN0cy9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9v
bHMvb2NhbWwvZHVuaXZlcnNlL29jcGxpYi1lbmRpYW4vdGVzdHMvdGVzdC5tbA0KPiAgY3JlYXRl
IG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9vY3BsaWItDQo+IGVuZGlhbi90ZXN0
cy90ZXN0X2JpZ3N0cmluZy5jcHBvLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2Nh
bWwvZHVuaXZlcnNlL29jcGxpYi0NCj4gZW5kaWFuL3Rlc3RzL3Rlc3RfYnl0ZXMuY3Bwby5tbA0K
PiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9vY3BsaWItDQo+IGVu
ZGlhbi90ZXN0cy90ZXN0X3N0cmluZy5jcHBvLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA3NTUgdG9v
bHMvb2NhbWwvZHVuaXZlcnNlL3Jlc3VsdC9DSEFOR0VTLm1kDQo+ICBjcmVhdGUgbW9kZSAxMDA3
NTUgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL3Jlc3VsdC9MSUNFTlNFLm1kDQo+ICBjcmVhdGUgbW9k
ZSAxMDA3NTUgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL3Jlc3VsdC9NYWtlZmlsZQ0KPiAgY3JlYXRl
IG1vZGUgMTAwNzU1IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9yZXN1bHQvUkVBRE1FLm1kDQo+ICBj
cmVhdGUgbW9kZSAxMDA3NTUgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL3Jlc3VsdC9kdW5lDQo+ICBj
cmVhdGUgbW9kZSAxMDA3NTUgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL3Jlc3VsdC9kdW5lLXByb2pl
Y3QNCj4gIGNyZWF0ZSBtb2RlIDEwMDc1NSB0b29scy9vY2FtbC9kdW5pdmVyc2UvcmVzdWx0L3Jl
c3VsdC1hcy1hbGlhcy0NCj4gNC4wOC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNzU1IHRvb2xzL29j
YW1sL2R1bml2ZXJzZS9yZXN1bHQvcmVzdWx0LWFzLWFsaWFzLm1sDQo+ICBjcmVhdGUgbW9kZSAx
MDA3NTUgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL3Jlc3VsdC9yZXN1bHQtYXMtbmV3dHlwZS5tbA0K
PiAgY3JlYXRlIG1vZGUgMTAwNzU1IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9yZXN1bHQvcmVzdWx0
Lm9wYW0NCj4gIGNyZWF0ZSBtb2RlIDEwMDc1NSB0b29scy9vY2FtbC9kdW5pdmVyc2UvcmVzdWx0
L3doaWNoX3Jlc3VsdC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2
ZXJzZS9zdGRsaWItc2hpbXMvQ0hBTkdFUy5tZA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xz
L29jYW1sL2R1bml2ZXJzZS9zdGRsaWItc2hpbXMvTElDRU5TRQ0KPiAgY3JlYXRlIG1vZGUgMTAw
NjQ0IHRvb2xzL29jYW1sL2R1bml2ZXJzZS9zdGRsaWItc2hpbXMvUkVBRE1FLm1kDQo+ICBjcmVh
dGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL3N0ZGxpYi1zaGltcy9kdW5lLXBy
b2plY3QNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvc3RkbGli
LXNoaW1zL2R1bmUtDQo+IHdvcmtzcGFjZS5kZXYNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29s
cy9vY2FtbC9kdW5pdmVyc2Uvc3RkbGliLXNoaW1zL3NyYy9kdW5lDQo+ICBjcmVhdGUgbW9kZSAx
MDA2NDQgdG9vbHMvb2NhbWwvZHVuaXZlcnNlL3N0ZGxpYi1zaGltcy9zdGRsaWItDQo+IHNoaW1z
Lm9wYW0NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9kdW5pdmVyc2Uvc3RkbGli
LXNoaW1zL3Rlc3QvZHVuZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL2R1bml2
ZXJzZS9zdGRsaWItc2hpbXMvdGVzdC90ZXN0Lm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9v
bHMvb2NhbWwvbGlicy9ldmVudGNobi9kdW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMv
b2NhbWwvbGlicy9tbWFwL2R1bmUNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9s
aWJzL21tYXAvZ250Lm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwvbGlicy9t
bWFwL2dudC5tbGkNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9saWJzL21tYXAv
Z250dGFiX3N0dWJzLmMNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9saWJzL3hi
L2R1bmUNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9saWJzL3hjL2R1bmUNCj4g
IGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC9saWJzL3hzL2R1bmUNCj4gIGNyZWF0ZSBt
b2RlIDEwMDY0NCB0b29scy9vY2FtbC94ZW4ub3BhbQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRv
b2xzL29jYW1sL3hlbi5vcGFtLmxvY2tlZA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29j
YW1sL3hlbnN0b3JlLm9wYW0NCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29scy9vY2FtbC94ZW5z
dG9yZWQub3BhbQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL3hlbnN0b3JlZC9k
dW5lDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Rlc3QvZHVu
ZQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL3hlbnN0b3JlZC90ZXN0L2dlbmVy
YXRvci5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL3hlbnN0b3JlZC90ZXN0
L2dudC5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL3hlbnN0b3JlZC90ZXN0
L21vZGVsLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Rl
c3Qvb2xkL2FyYml0cmFyeS5tbA0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL3hl
bnN0b3JlZC90ZXN0L29sZC9nZW5fcGF0aHMubWwNCj4gIGNyZWF0ZSBtb2RlIDEwMDY0NCB0b29s
cy9vY2FtbC94ZW5zdG9yZWQvdGVzdC9vbGQveGVuc3RvcmVkX3Rlc3QubWwNCj4gIGNyZWF0ZSBt
b2RlIDEwMDY0NCB0b29scy9vY2FtbC94ZW5zdG9yZWQvdGVzdC9wYXRodHJlZS5tbA0KPiAgY3Jl
YXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL3hlbnN0b3JlZC90ZXN0L3Rlc3RhYmxlLm1sDQo+
ICBjcmVhdGUgbW9kZSAxMDA2NDQgdG9vbHMvb2NhbWwveGVuc3RvcmVkL3Rlc3QvdHlwZXMubWwN
Cj4gIGNvcHkgdG9vbHMvb2NhbWwve2xpYnMvbW1hcC94ZW5tbWFwLm1saSA9Pg0KPiB4ZW5zdG9y
ZWQvdGVzdC94ZW5jdHJsLm1sfSAoNTQlKQ0KPiAgY3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29j
YW1sL3hlbnN0b3JlZC90ZXN0L3hlbmV2ZW50Y2huLm1sDQo+ICBjcmVhdGUgbW9kZSAxMDA2NDQg
dG9vbHMvb2NhbWwveGVuc3RvcmVkL3Rlc3QveGVuc3RvcmVkX3Rlc3QubWwNCj4gIGNyZWF0ZSBt
b2RlIDEwMDY0NCB0b29scy9vY2FtbC94ZW5zdG9yZWQvdGVzdC94c19wcm90b2NvbC5tbA0KPiAg
Y3JlYXRlIG1vZGUgMTAwNjQ0IHRvb2xzL29jYW1sL3hlbnN0b3JlZC94ZW5zdG9yZWRfbWFpbi5t
bA0KPiANCg==


From xen-devel-bounces@lists.xenproject.org Tue May 11 18:19:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:19:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125959.237084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWyy-0007gJ-B3; Tue, 11 May 2021 18:19:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125959.237084; Tue, 11 May 2021 18:19:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWyy-0007gC-8A; Tue, 11 May 2021 18:19:48 +0000
Received: by outflank-mailman (input) for mailman id 125959;
 Tue, 11 May 2021 18:19:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWyw-0007fz-6U
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:19:46 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a4fe988-ddc3-46cd-ab49-5b2f1a4c3dcd;
 Tue, 11 May 2021 18:19:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a4fe988-ddc3-46cd-ab49-5b2f1a4c3dcd
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620757184;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=mfyFDJgIkLfaf/q9LEu49lHIlJRkjQUjve9JlgsoDR8=;
  b=fGYhQP2ai9DgOHZnAjk2WWVyFQjmJ6XaPuIHTgU3ZfgbcVcfuYn4MEkd
   g920QAUan/R0iZZ9faOMaskSd5Fe+sfa4OdqkDf68PHUA/o47nJU18q5U
   vawgbDNLs494HbRi9Oas8M4EE/TfpC/3l5ybNJCvSVPqD0V2eIeQ0q02M
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: iyilpD8SzCuioWNXOA9zFQdv0QcR3XhXht5LopOS6cVHbkq7tv7tK6I/yHXjb/ouc6mgLZvkvm
 Vp0V5STd83/IReHf6355A4PJeDdXxclLhlEbM6eX/TORACxQJbqoPVWC9GLOntlt4cYetwS/LX
 5hgPDAVyhtIlp+jxOznnPz2jNGB13Ux4NtM+q2MYYtdTri0/mpbF1m4h2uEIqq/PFsmjTFsSRq
 yHomvODWg6ssASAGw9fxtJDw9HE5ZP70w5jMVtJtozSajL4Wzge+1rvn4zwV8d8Tju8Pem+hTU
 70s=
X-SBRS: 5.1
X-MesageID: 43676913
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:msXP7qz674nE3o0i9+mmKrPw6L1zdoMgy1knxilNoHxuH/Bw9v
 re+cjzsCWftN9/Yh4dcLy7VpVoIkmsl6Kdg7NwAV7KZmCP1FdARLsI0WKI+UyCJ8SRzI9gPa
 cLSdkFNDXzZ2IK8PoTNmODYqodKNrsytHWuQ/HpU0dKT2D88tbnn9E4gDwKDwQeCB2QaAXOb
 C7/cR9qz+paR0sH7+G7ilsZZmkmzXT/qiWGCI7Ow==
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43676913"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 17/17] tools/ocaml/libs/mmap: Clean up unused read/write
Date: Tue, 11 May 2021 19:05:30 +0100
Message-ID: <9bfd0989994953f08142d26cbe5a22651a1faa2a.1620755943.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Xenmmap is only modified through the ring functions; the read/write
functions are unused, so drop them.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/libs/mmap/xenmmap.ml      |  5 ----
 tools/ocaml/libs/mmap/xenmmap.mli     |  4 ---
 tools/ocaml/libs/mmap/xenmmap_stubs.c | 41 ---------------------------
 3 files changed, 50 deletions(-)

diff --git a/tools/ocaml/libs/mmap/xenmmap.ml b/tools/ocaml/libs/mmap/xenmmap.ml
index af258942a0..e17a62e607 100644
--- a/tools/ocaml/libs/mmap/xenmmap.ml
+++ b/tools/ocaml/libs/mmap/xenmmap.ml
@@ -24,11 +24,6 @@ type mmap_map_flag = SHARED | PRIVATE
 (* mmap: fd -> prot_flag -> map_flag -> length -> offset -> interface *)
 external mmap': Unix.file_descr -> mmap_prot_flag -> mmap_map_flag
 		-> int -> int -> mmap_interface = "stub_mmap_init"
-(* read: interface -> start -> length -> data *)
-external read: mmap_interface -> int -> int -> string = "stub_mmap_read"
-(* write: interface -> data -> start -> length -> unit *)
-external write: mmap_interface -> string -> int -> int -> unit = "stub_mmap_write"
-(* getpagesize: unit -> size of page *)
 external unmap': mmap_interface -> unit = "stub_mmap_final"
 (* getpagesize: unit -> size of page *)
 let make ?(unmap=unmap') interface = interface, unmap
diff --git a/tools/ocaml/libs/mmap/xenmmap.mli b/tools/ocaml/libs/mmap/xenmmap.mli
index 075b24eab4..abf2a50131 100644
--- a/tools/ocaml/libs/mmap/xenmmap.mli
+++ b/tools/ocaml/libs/mmap/xenmmap.mli
@@ -19,10 +19,6 @@ type mmap_interface
 type mmap_prot_flag = RDONLY | WRONLY | RDWR
 type mmap_map_flag = SHARED | PRIVATE
 
-external read : mmap_interface -> int -> int -> string = "stub_mmap_read"
-external write : mmap_interface -> string -> int -> int -> unit
-               = "stub_mmap_write"
-
 val mmap : Unix.file_descr -> mmap_prot_flag -> mmap_map_flag -> int -> int -> t
 val unmap : t -> unit
 
diff --git a/tools/ocaml/libs/mmap/xenmmap_stubs.c b/tools/ocaml/libs/mmap/xenmmap_stubs.c
index e8d2d6add5..633e1fa916 100644
--- a/tools/ocaml/libs/mmap/xenmmap_stubs.c
+++ b/tools/ocaml/libs/mmap/xenmmap_stubs.c
@@ -96,47 +96,6 @@ CAMLprim value stub_mmap_final(value intf)
 	CAMLreturn(Val_unit);
 }
 
-CAMLprim value stub_mmap_read(value intf, value start, value len)
-{
-	CAMLparam3(intf, start, len);
-	CAMLlocal1(data);
-	int c_start;
-	int c_len;
-
-	c_start = Int_val(start);
-	c_len = Int_val(len);
-
-	if (c_start > Intf_val(intf)->len)
-		caml_invalid_argument("start invalid");
-	if (c_start + c_len > Intf_val(intf)->len)
-		caml_invalid_argument("len invalid");
-
-	data = caml_alloc_string(c_len);
-	memcpy((char *) data, Intf_val(intf)->addr + c_start, c_len);
-
-	CAMLreturn(data);
-}
-
-CAMLprim value stub_mmap_write(value intf, value data,
-                               value start, value len)
-{
-	CAMLparam4(intf, data, start, len);
-	int c_start;
-	int c_len;
-
-	c_start = Int_val(start);
-	c_len = Int_val(len);
-
-	if (c_start > Intf_val(intf)->len)
-		caml_invalid_argument("start invalid");
-	if (c_start + c_len > Intf_val(intf)->len)
-		caml_invalid_argument("len invalid");
-
-	memcpy(Intf_val(intf)->addr + c_start, (char *) data, c_len);
-
-	CAMLreturn(Val_unit);
-}
-
 CAMLprim value stub_mmap_getpagesize(value unit)
 {
 	CAMLparam1(unit);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:19:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:19:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125960.237090 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWyy-0007jj-NK; Tue, 11 May 2021 18:19:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125960.237090; Tue, 11 May 2021 18:19:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWyy-0007iS-GO; Tue, 11 May 2021 18:19:48 +0000
Received: by outflank-mailman (input) for mailman id 125960;
 Tue, 11 May 2021 18:19:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWyx-0007g6-HP
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:19:47 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b63d9b5-d94b-41d2-920f-8f6312662347;
 Tue, 11 May 2021 18:19:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b63d9b5-d94b-41d2-920f-8f6312662347
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620757186;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=SU7cgAQhk8vsTkLv/y2uGvNvL2Atz9xPDkxcsiAL/Ik=;
  b=DF2iR63RcPxEEgp5BdlmahraON9RvtLSzlhmj8WPmXh0+MnIZRjIrdP+
   kvuXgjybqV6Rle9uoS7KhyILgYDjXzvrkLTS/S1rNKf5fdLt1HRyP4U0R
   YeXUsr2WToNHrUp00vq5gxznRf03UfQjiuGWtso8h194dTKBdCBPR9i2m
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: Js8J1Ottk9+P26tbK96orEpViIabgTRWSOozgof7eDuwY7XKgwVCvGH0wBVhvtSCELEnFH6OJ0
 fBp22c5JAdwR7wDAGrXyMhkeEsMspYxwUf93ZrkIutqa4FMP7EGaPEZJAL3D8gqS9yq/c+jxDZ
 Il+RjeVW592GORgriTnu20gCMwlO175x/506AASueEQwnhscllpB1n1TqbbMz94191Y6RGxQjH
 YTxsahTYWhchKTcXO5fsX5eaVAaRvofo9uNw5FDs6chwZVMmWlqb9+y7Dd4odwuOeQGR4oRAZf
 j3E=
X-SBRS: 5.1
X-MesageID: 43955845
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:7A/JIa6E+2oilrlqOQPXwPDXdLJyesId70hD6qhwISY6TiX+rb
 HWoB17726TtN9/YhEdcLy7VJVoBEmskKKdgrNhWotKPjOW21dARbsKheCJrgEIWReOktK1vZ
 0QC5SWY+eQMbEVt6nHCXGDYrQd/OU=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43955845"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Juergen Gross
	<jgross@suse.com>
Subject: [PATCH v2 16/17] tools/ocaml/xenstored: don't store domU's mfn of ring page
Date: Tue, 11 May 2021 19:05:29 +0100
Message-ID: <49200c1e5de78257fc43e26f545651484dbe4ff0.1620755943.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is a port of the following C xenstored commit:
122b52230aa5b79d65e18b8b77094027faa2f8e2 tools/xenstore: don't store domU's mfn of ring page in xenstored

Backwards compat: accept a domain dump both with and without MFN.
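
The backwards-compat rule can be sketched in C (a hypothetical, self-contained helper for illustration only, not the actual OCaml parser): try the old `dom,<id>,<mfn>,<remote_port>` layout first and discard the MFN, then fall back to the new `dom,<id>,<remote_port>` layout.

```c
#include <stdio.h>

/* Hypothetical sketch of the compat rule: accept a domain dump line
 * both with and without the MFN field.  When the MFN is present it is
 * parsed and simply thrown away. */
static int parse_dom_line(const char *line, int *id, int *port)
{
	long mfn;

	if (sscanf(line, "dom,%d,%ld,%d", id, &mfn, port) == 3)
		return 0;	/* old layout: MFN discarded */
	if (sscanf(line, "dom,%d,%d", id, port) == 2)
		return 0;	/* new layout: no MFN */
	return -1;
}
```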

CC: Juergen Gross <jgross@suse.com>
Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/xenstored/domain.ml  |  7 ++-----
 tools/ocaml/xenstored/domains.ml |  6 +++---
 tools/ocaml/xenstored/process.ml | 16 +++++-----------
 3 files changed, 10 insertions(+), 19 deletions(-)

diff --git a/tools/ocaml/xenstored/domain.ml b/tools/ocaml/xenstored/domain.ml
index 82d7b1a7ef..960ebef218 100644
--- a/tools/ocaml/xenstored/domain.ml
+++ b/tools/ocaml/xenstored/domain.ml
@@ -22,7 +22,6 @@ let warn  fmt = Logging.warn  "domain" fmt
 type t =
 {
 	id: Xenctrl.domid;
-	mfn: nativeint;
 	interface: Xenmmap.t;
 	eventchn: Event.t;
 	mutable remote_port: int;
@@ -40,7 +39,6 @@ type t =
 let is_dom0 d = d.id = 0
 let get_id domain = domain.id
 let get_interface d = d.interface
-let get_mfn d = d.mfn
 let get_remote_port d = d.remote_port
 let get_port d = d.port
 
@@ -61,7 +59,7 @@ let string_of_port = function
 | Some x -> string_of_int (Xeneventchn.to_int x)
 
 let dump d chan =
-	fprintf chan "dom,%d,%nd,%d\n" d.id d.mfn d.remote_port
+	fprintf chan "dom,%d,%d\n" d.id d.remote_port
 
 let notify dom = match dom.port with
 | None ->
@@ -87,9 +85,8 @@ let close dom =
 	Xenmmap.unmap dom.interface;
 	()
 
-let make id mfn remote_port interface eventchn = {
+let make id remote_port interface eventchn = {
 	id = id;
-	mfn = mfn;
 	remote_port = remote_port;
 	interface = interface;
 	eventchn = eventchn;
diff --git a/tools/ocaml/xenstored/domains.ml b/tools/ocaml/xenstored/domains.ml
index d9cb693751..0dfeed193a 100644
--- a/tools/ocaml/xenstored/domains.ml
+++ b/tools/ocaml/xenstored/domains.ml
@@ -124,10 +124,10 @@ let cleanup doms =
 let resume _doms _domid =
 	()
 
-let create doms domid mfn port =
+let create doms domid port =
 	let mapping = Gnt.(Gnttab.map_exn doms.gnttab { domid; ref = xenstore} true) in
 	let interface = Gnt.Gnttab.Local_mapping.to_pages doms.gnttab mapping in
-	let dom = Domain.make domid mfn port interface doms.eventchn in
+	let dom = Domain.make domid port interface doms.eventchn in
 	Hashtbl.add doms.table domid dom;
 	Domain.bind_interdomain dom;
 	dom
@@ -147,7 +147,7 @@ let create0 doms =
 			port, interface
 		)
 		in
-	let dom = Domain.make 0 Nativeint.zero port interface doms.eventchn in
+	let dom = Domain.make 0 port interface doms.eventchn in
 	Hashtbl.add doms.table 0 dom;
 	Domain.bind_interdomain dom;
 	Domain.notify dom;
diff --git a/tools/ocaml/xenstored/process.ml b/tools/ocaml/xenstored/process.ml
index 13b7153536..890970b8c5 100644
--- a/tools/ocaml/xenstored/process.ml
+++ b/tools/ocaml/xenstored/process.ml
@@ -235,10 +235,6 @@ let do_debug con t _domains cons data =
 	| "watches" :: _ ->
 		let watches = Connections.debug cons in
 		Some (watches ^ "\000")
-	| "mfn" :: domid :: _ ->
-		let domid = int_of_string domid in
-		let con = Connections.find_domain cons domid in
-		may (fun dom -> Printf.sprintf "%nd\000" (Domain.get_mfn dom)) (Connection.get_domain con)
 	| _ -> None
 	with _ -> None
 
@@ -554,15 +550,13 @@ let do_introduce con t domains cons data =
 	let dom =
 		if Domains.exist domains domid then begin
 			let edom = Domains.find domains domid in
-			if (Domain.get_mfn edom) = mfn && (Connections.find_domain cons domid) != con then begin
-				(* Use XS_INTRODUCE for recreating the xenbus event-channel. *)
-				edom.remote_port <- port;
-				Domain.bind_interdomain edom;
-			end;
+			(* Use XS_INTRODUCE for recreating the xenbus event-channel. *)
+			edom.remote_port <- port;
+			Domain.bind_interdomain edom;
 			edom
 		end
 		else try
-			let ndom = Domains.create domains domid mfn port in
+			let ndom = Domains.create domains domid port in
 			Connections.add_domain cons ndom;
 			Connections.fire_spec_watches (Transaction.get_root t) cons Store.Path.introduce_domain;
 			ndom
@@ -571,7 +565,7 @@ let do_introduce con t domains cons data =
 			 Logging.debug "process" "do_introduce: %s (%s)" (Printexc.to_string e) bt;
 			 raise Invalid_Cmd_Args
 	in
-	if (Domain.get_remote_port dom) <> port || (Domain.get_mfn dom) <> mfn then
+	if (Domain.get_remote_port dom) <> port then
 		raise Domain_not_match
 
 let do_release con t domains cons data =
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:19:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:19:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125962.237109 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWz3-0008Hn-TC; Tue, 11 May 2021 18:19:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125962.237109; Tue, 11 May 2021 18:19:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWz3-0008Hc-Pj; Tue, 11 May 2021 18:19:53 +0000
Received: by outflank-mailman (input) for mailman id 125962;
 Tue, 11 May 2021 18:19:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWz2-0007g6-GI
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:19:52 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 19b3bfa7-5419-49b1-8051-bd5514dfe581;
 Tue, 11 May 2021 18:19:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 19b3bfa7-5419-49b1-8051-bd5514dfe581
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620757187;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=EoaUpYs9b5JtY9mZ1Jy5aNZ4/ohnkMdircO88bxkg44=;
  b=CHcTnBQ+gbXHApTpscebxkhpzBSOUA1WUEEhTnbrMVn8fA7Px6OwgTy6
   CJXlvR4R0vmzePd3hrxiwsEeAN+FVR9TMtNpODlef+6CwGiTvmjdqOpXQ
   0rdtuwvkbZi5XXNHCab4tjr7TPLbNw3eJ6v/5TiOJeZBvhWbQg3qO31J8
   0=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: jXvHeAJvDN1mXXdaAWfqlPHHazwViQ0FtFAp5WTIfOpuEqy+a0KF3Ohxx4UIqAk8EUleoTu5tl
 NaRTf4EMlBhS6a9UrEyQLomS4Wry7YRe26ICnZ1n3MP+FaDaqd00tixNEsozcWIN57bePj/hax
 edxjqei4oFFQWlggOIaVjGXhk65Cv4ADXQpot/PjgaArEdGdKc/BdQ0ZoLWxXOTwYyB+AmSWNU
 bA7dWsMaQJtOWZuXtTDd3+JHwpqYGEwrX6TOKyQYRB70rd9+tD6X9LR5T8hwevckaH4hv/jYQa
 a7Y=
X-SBRS: 5.1
X-MesageID: 43562364
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:afEZh6mNz+/LDj4Xj8abPvWYjILpDfIW3DAbv31ZSRFFG/Fxl6
 iV/cjzsiWE8Ar5OUtQ4OxoV5PwIk80maQb3WBVB8bHYOCEghrPEGgB1/qB/9SIIUSXnYQxuZ
 uIMZIOb+EYZWIK9voSizPZLz9P+re6GdiT9ILj80s=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43562364"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 12/17] tools/ocaml/libs/mmap: mark mmap/munmap as blocking
Date: Tue, 11 May 2021 19:05:25 +0100
Message-ID: <294a60be29027d33b0a1d154b7d576237c7dd420.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

These functions can potentially take some time, so release the OCaml
runtime lock around them to allow other OCaml code (if any) to proceed
meanwhile.
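
The companion change to stub_mmap_final (copy the interface, then mark it freed before unmapping) can be illustrated with a minimal, self-contained C sketch. Here FREED and fake_munmap are stand-ins for MAP_FAILED and munmap, and the real stub additionally wraps munmap in caml_enter/leave_blocking_section:

```c
#include <stddef.h>

#define FREED ((void *)-1)	/* stands in for MAP_FAILED */

struct mmap_interface {
	void *addr;
	size_t len;
};

static int unmap_calls;		/* counts (fake) munmap invocations */

static void fake_munmap(void *addr, size_t len)
{
	(void)addr; (void)len;
	unmap_calls++;
}

/* Mirrors the reworked finaliser: copy the interface, mark it freed
 * *before* unmapping, then unmap the copy.  Even if the finaliser runs
 * again (or munmap fails), the region is never unmapped twice. */
static void intf_final(struct mmap_interface *intf)
{
	struct mmap_interface copy = *intf;

	intf->addr = FREED;
	if (copy.addr != FREED)
		fake_munmap(copy.addr, copy.len);
}
```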

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/libs/mmap/xenmmap_stubs.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/tools/ocaml/libs/mmap/xenmmap_stubs.c b/tools/ocaml/libs/mmap/xenmmap_stubs.c
index d7a97c76f5..e8d2d6add5 100644
--- a/tools/ocaml/libs/mmap/xenmmap_stubs.c
+++ b/tools/ocaml/libs/mmap/xenmmap_stubs.c
@@ -28,6 +28,7 @@
 #include <caml/fail.h>
 #include <caml/callback.h>
 #include <caml/unixsupport.h>
+#include <caml/signals.h>
 
 #define Wsize_bsize_round(n) (Wsize_bsize( (n) + sizeof(value) - 1 ))
 
@@ -69,7 +70,9 @@ CAMLprim value stub_mmap_init(value fd, value pflag, value mflag,
 		caml_invalid_argument("negative offset");
 	length = Int_val(len);
 
+	caml_enter_blocking_section();
 	addr = mmap(NULL, length, c_pflag, c_mflag, Int_val(fd), Int_val(offset));
+	caml_leave_blocking_section();
 	if (MAP_FAILED == addr)
 		uerror("mmap", Nothing);
 
@@ -80,10 +83,15 @@ CAMLprim value stub_mmap_init(value fd, value pflag, value mflag,
 CAMLprim value stub_mmap_final(value intf)
 {
 	CAMLparam1(intf);
+	struct mmap_interface interface = *Intf_val(intf);
 
-	if (Intf_val(intf)->addr != MAP_FAILED)
-		munmap(Intf_val(intf)->addr, Intf_val(intf)->len);
+	/* mark it as freed, in case munmap below fails, so we don't retry it */
 	Intf_val(intf)->addr = MAP_FAILED;
+	if (interface.addr != MAP_FAILED) {
+		caml_enter_blocking_section();
+		munmap(interface.addr, interface.len);
+		caml_leave_blocking_section();
+	}
 
 	CAMLreturn(Val_unit);
 }
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:19:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:19:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125963.237120 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWz7-0000Bd-5n; Tue, 11 May 2021 18:19:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125963.237120; Tue, 11 May 2021 18:19:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWz7-0000BV-2T; Tue, 11 May 2021 18:19:57 +0000
Received: by outflank-mailman (input) for mailman id 125963;
 Tue, 11 May 2021 18:19:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWz6-0007fz-5J
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:19:56 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 004134c6-8940-498b-b7f6-53c71747813c;
 Tue, 11 May 2021 18:19:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 004134c6-8940-498b-b7f6-53c71747813c
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620757192;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=/QT5ezwDnkjXc3nVgBVAB8DaWEl/4FRFCADP++oYh28=;
  b=Jdu6ymxTEyir3Wpq1Vlje0oTSI8uvRDp3EyQ5XHU8GofqKnyO/4yfL/T
   tadaeJnDhPWPHVau6fdtmwh0bZEN92ikFEct1VuRWE+8T+4Bto5mRkjmo
   XhPMYX+n/3LKVu7pP4SrfkG0+/zf7153MPrjkTArPJkUgO2F6N1x6W1iT
   o=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: CKYOs+i1d3uxTBukfgQWpA1/0Q6N5xF4YW3jDwQw4XsDP1Yu/ygVFpSXvRJ3wFHVWtIMu3qyOX
 F9MHR0nLkK4Jyspb534EhqoXML/Esnp857rRv0rWlQ5smYpGBZDZx0sGM3p6785xDL2X7wBbEX
 Qb5pYN3pIQllJJ3iigpKUx5qV+7pSHODIad8Vvb/r5S/HNEiwr5l2rtW5DFqtLRrgjc6ygT9t2
 zkO4fJLvzYevOalpREIDVvSWqSfkDcLy1i2K+LfbAv9VRyHxi/RD7guWQCnoUkR43o9Ah/vROA
 G+8=
X-SBRS: 5.1
X-MesageID: 43676922
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:jUJ+BaDEC34bMKflHelo55DYdb4zR+YMi2TDt3oddfU1SL38qy
 nKpp4mPHDP5wr5NEtPpTniAtjjfZq/z/5ICOAqVN/PYOCPggCVxepZnOjfKlPbehEX9oRmpN
 1dm6oVMqyMMbCt5/yKnDVRELwbsaa6GLjDv5a785/0JzsaE52J6W1Ce2GmO3wzfiZqL7wjGq
 GR48JWzgDQAkj+PqyAdx84t/GonayzqK7b
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43676922"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>
Subject: [PATCH v2 15/17] tools/ocaml/xenstored: use gnttab instead of xenctrl's foreign_map_range
Date: Tue, 11 May 2021 19:05:28 +0100
Message-ID: <2e703b8a3e75370ed0208b2c1da9a3562df82a14.1620755943.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This is an oxenstored port of the following C xenstored commit:
38eeb3864de40aa568c48f9f26271c141c62b50b tools/xenstored: Drop mapping of the ring via foreign map

Now only Xenctrl.domain_getinfo remains as the last use of the unstable
xenctrl interface in oxenstored.

Depends on: tools/ocaml: safer Xenmmap interface
(without it the code would build but the wrong unmap function would get
 called on domain destruction)

CC: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/xenstored/domains.ml   | 7 +++++--
 tools/ocaml/xenstored/xenstored.ml | 3 ++-
 2 files changed, 7 insertions(+), 3 deletions(-)

diff --git a/tools/ocaml/xenstored/domains.ml b/tools/ocaml/xenstored/domains.ml
index 17fe2fa257..d9cb693751 100644
--- a/tools/ocaml/xenstored/domains.ml
+++ b/tools/ocaml/xenstored/domains.ml
@@ -22,6 +22,7 @@ let xc = Xenctrl.interface_open ()
 
 type domains = {
 	eventchn: Event.t;
+	gnttab: Gnt.Gnttab.interface;
 	table: (Xenctrl.domid, Domain.t) Hashtbl.t;
 
 	(* N.B. the Queue module is not thread-safe but oxenstored is single-threaded. *)
@@ -42,8 +43,9 @@ type domains = {
 	mutable n_penalised: int; (* Number of domains with less than maximum credit *)
 }
 
-let init eventchn on_first_conflict_pause = {
+let init eventchn gnttab on_first_conflict_pause = {
 	eventchn = eventchn;
+	gnttab;
 	table = Hashtbl.create 10;
 	doms_conflict_paused = Queue.create ();
 	doms_with_conflict_penalty = Queue.create ();
@@ -123,7 +125,8 @@ let resume _doms _domid =
 	()
 
 let create doms domid mfn port =
-	let interface = Xenctrl.map_foreign_range xc domid (Xenmmap.getpagesize()) mfn in
+	let mapping = Gnt.(Gnttab.map_exn doms.gnttab { domid; ref = xenstore} true) in
+	let interface = Gnt.Gnttab.Local_mapping.to_pages doms.gnttab mapping in
 	let dom = Domain.make domid mfn port interface doms.eventchn in
 	Hashtbl.add doms.table domid dom;
 	Domain.bind_interdomain dom;
diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index a6b86b167c..75c35107d5 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -446,6 +446,7 @@ let main () =
 
 	let store = Store.create () in
 	let eventchn = Event.init () in
+	let gnttab = Gnt.Gnttab.interface_open () in
 	let next_frequent_ops = ref 0. in
 	let advance_next_frequent_ops () =
 		next_frequent_ops := (Unix.gettimeofday () +. !Define.conflict_max_history_seconds)
@@ -453,7 +454,7 @@ let main () =
 	let delay_next_frequent_ops_by duration =
 		next_frequent_ops := !next_frequent_ops +. duration
 	in
-	let domains = Domains.init eventchn advance_next_frequent_ops in
+	let domains = Domains.init eventchn gnttab advance_next_frequent_ops in
 
 	(* For things that need to be done periodically but more often
 	 * than the periodic_ops function *)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:19:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:19:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125964.237132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWz9-0000XE-HI; Tue, 11 May 2021 18:19:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125964.237132; Tue, 11 May 2021 18:19:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgWz9-0000Wx-C7; Tue, 11 May 2021 18:19:59 +0000
Received: by outflank-mailman (input) for mailman id 125964;
 Tue, 11 May 2021 18:19:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWz7-0007g6-Gk
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:19:57 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7fc6e699-2fa5-47e5-895a-4b2c4772b8ec;
 Tue, 11 May 2021 18:19:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fc6e699-2fa5-47e5-895a-4b2c4772b8ec
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620757191;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=44hyZ/ZaZ9beZ9T4JdjVPbP63rTAyQBALalI4rEKkk4=;
  b=dm97E3F89Yp8Q820UOGMinnPdbQPQ4MellCAizbdPabh79ARGeBBctsK
   bQTAApzBdUwUvTRdIJ5ghiglkIhiz1f28QFm+iVLduldEIwoiMy1grXfJ
   SaAtyiB3Ny7Sitl72mBNt1NI7b/90oAJNKaAfMp0qyBG0klPrTiGidMCh
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: fbrcHgMwOJSde12PXOvnKypdUGN+tM9wSfZye82uYj9Hza/31CRLKi/YN/2NpBIrEBKFYsIn9X
 2CQYrY7cNjFS4lAvIpJOpEg9wND4gvg6Ltj9++Kp9wTO3pzidE5QnmoIqM6sNg2CVLQTKYp/Sa
 Tpg2ZMHRVAMNwEtsG8xfefZZPpotufj1qBRaYXlyamq3Wkc5QUaRNRcAMNR5xffFC6AD0HdbTP
 fw7yWehjbKCKtOyTAo7FLZuLfoMkL/XyoUThkKGvziwssD4qVugM62lhSbLpucryoMkzwIFMvX
 bPk=
X-SBRS: 5.1
X-MesageID: 43580607
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:+jZYZq+keJzTGnCyxNxuk+DiI+orL9Y04lQ7vn2YSXRuE/Bw8P
 re5MjztCWE8Qr5N0tQ+uxoVJPufZqYz+8Q3WBzB8bFYOCFghrLEGgK1+KLqFeMdxEWtNQtsp
 uIG5IOc+EYZmIbsS+V2meF+q4bsby6zJw=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43580607"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 13/17] tools/ocaml/libs/xb: import gnttab stubs from mirage
Date: Tue, 11 May 2021 19:05:26 +0100
Message-ID: <d298ca3e3f9f57075d9100645e0ff986127689d7.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Upstream URL: https://github.com/mirage/ocaml-gnt
Mirage is part of the Xen Project and its license is compatible;
copyright headers are retained.

Changes from upstream:
* cut down dependencies: dropped Lwt, replaced Io_page with Xenmmap
* only import Gnttab and not Gntshr

This is for xenstored's use only: xenstored needs a way to grant-map
the xenstore ring without using xenctrl.

The gnt code is added into libs/mmap because it uses mmap_stubs.h.
This also makes it possible to mock out gnttab in the unit tests by
replacing it with code that simply mmaps /dev/zero.
For the mocking to work, gnt.ml needs to live in a directory other than
xenstored/.
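
The mock boils down to backing the "ring" with an ordinary zero-filled mapping instead of a real grant reference. A hypothetical C sketch of the idea (mock_grant_map is made up for illustration; the actual mock is the OCaml code in tools/ocaml/xenstored/test/gnt.ml):

```c
#define _DEFAULT_SOURCE
#include <stddef.h>
#include <sys/mman.h>
#include <unistd.h>

/* Hypothetical mock of a grant map: instead of mapping a grant
 * reference from another domain, hand back a private zero-filled
 * mapping of the same size, which is all a unit test needs. */
static void *mock_grant_map(size_t pages)
{
	size_t len = pages * (size_t)sysconf(_SC_PAGESIZE);
	void *p = mmap(NULL, len, PROT_READ | PROT_WRITE,
		       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	return p == MAP_FAILED ? NULL : p;
}
```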

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/Makefile                   |   1 +
 tools/ocaml/libs/mmap/Makefile         |  19 +++--
 tools/ocaml/libs/mmap/dune             |  10 +++
 tools/ocaml/libs/mmap/gnt.ml           |  60 ++++++++++++++
 tools/ocaml/libs/mmap/gnt.mli          |  86 ++++++++++++++++++++
 tools/ocaml/libs/mmap/gnttab_stubs.c   | 106 +++++++++++++++++++++++++
 tools/ocaml/xenstored/Makefile         |   1 +
 tools/ocaml/xenstored/dune             |   6 +-
 tools/ocaml/xenstored/test/gnt.ml      |  52 ++++++++++++
 tools/ocaml/xenstored/test/testable.ml |   3 +-
 tools/ocaml/xenstored/xenstored.ml     |  10 +--
 11 files changed, 339 insertions(+), 15 deletions(-)
 create mode 100644 tools/ocaml/libs/mmap/gnt.ml
 create mode 100644 tools/ocaml/libs/mmap/gnt.mli
 create mode 100644 tools/ocaml/libs/mmap/gnttab_stubs.c
 create mode 100644 tools/ocaml/xenstored/test/gnt.ml

diff --git a/tools/ocaml/Makefile b/tools/ocaml/Makefile
index de375820a3..1236b0e584 100644
--- a/tools/ocaml/Makefile
+++ b/tools/ocaml/Makefile
@@ -43,6 +43,7 @@ C_INCLUDE_PATH=$(XEN_libxenctrl)/include:$(XEN_libxengnttab)/include:$(XEN_libxe
 # in the parent directory (so it couldn't copy/use Config.mk)
 .PHONY: dune-pre
 dune-pre:
+	$(MAKE) clean
 	$(MAKE) -s -C ../../ build-tools-public-headers
 	$(MAKE) -s -C libs/xs paths.ml
 	$(MAKE) -s -C libs/xc xenctrl_abi_check.h
diff --git a/tools/ocaml/libs/mmap/Makefile b/tools/ocaml/libs/mmap/Makefile
index df45819df5..ed4903b48a 100644
--- a/tools/ocaml/libs/mmap/Makefile
+++ b/tools/ocaml/libs/mmap/Makefile
@@ -2,9 +2,7 @@ TOPLEVEL=$(CURDIR)/../..
 XEN_ROOT=$(TOPLEVEL)/../..
 include $(TOPLEVEL)/common.make
 
-OBJS = xenmmap
 INTF = $(foreach obj, $(OBJS),$(obj).cmi)
-LIBS = xenmmap.cma xenmmap.cmxa
 
 all: $(INTF) $(LIBS) $(PROGRAMS)
 
@@ -12,15 +10,26 @@ bins: $(PROGRAMS)
 
 libs: $(LIBS)
 
-xenmmap_OBJS = $(OBJS)
+# gnt is an internal library, not installed
+gnt_OBJS = gnt
+gnt_C_OBJS = gnttab_stubs
+gnt_LIBS = gnt.cma gnt.cmxa
+LIBS_gnt = $(LDLIBS_libxengnttab)
+CFLAGS += $(CFLAGS_libxengnttab)
+
+xenmmap_OBJS = xenmmap
 xenmmap_C_OBJS = xenmmap_stubs
-OCAML_LIBRARY = xenmmap
+xenmmap_LIBS = xenmmap.cma xenmmap.cmxa
+
+OCAML_LIBRARY = xenmmap gnt
+OBJS = $(xenmmap_OBJS) $(gnt_OBJS)
+LIBS = $(xenmmap_LIBS) $(gnt_LIBS)
 
 .PHONY: install
 install: $(LIBS) META
 	mkdir -p $(OCAMLDESTDIR)
 	$(OCAMLFIND) remove -destdir $(OCAMLDESTDIR) xenmmap
-	$(OCAMLFIND) install -destdir $(OCAMLDESTDIR) -ldconf ignore xenmmap META $(INTF) $(LIBS) *.a *.so *.cmx
+	$(OCAMLFIND) install -destdir $(OCAMLDESTDIR) -ldconf ignore xenmmap META *xenmmap*.cmi $(xenmmap_LIBS) *xenmmap*.a *xenmmap*.so *xenmmap*.cmx
 
 .PHONY: uninstall
 uninstall:
diff --git a/tools/ocaml/libs/mmap/dune b/tools/ocaml/libs/mmap/dune
index a47de44e47..f4c98153c4 100644
--- a/tools/ocaml/libs/mmap/dune
+++ b/tools/ocaml/libs/mmap/dune
@@ -3,6 +3,16 @@
   (language c)
   (names xenmmap_stubs))
  (name xenmmap)
+ (modules xenmmap)
  (public_name xen.mmap)
  (libraries unix)
  (install_c_headers mmap_stubs))
+
+(library
+ (foreign_stubs
+  (language c)
+  (names gnttab_stubs))
+ (name xengnt)
+ (modules gnt)
+ (wrapped false) 
+ (libraries unix xen.mmap))
diff --git a/tools/ocaml/libs/mmap/gnt.ml b/tools/ocaml/libs/mmap/gnt.ml
new file mode 100644
index 0000000000..65f0334b7c
--- /dev/null
+++ b/tools/ocaml/libs/mmap/gnt.ml
@@ -0,0 +1,60 @@
+(*
+ * Copyright (c) 2010 Anil Madhavapeddy <anil@recoil.org>
+ * Copyright (C) 2012-2014 Citrix Inc
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *)
+
+type gntref = int
+type domid = int
+
+let console = 0 (* public/grant_table.h:GNTTAB_RESERVED_CONSOLE *)
+let xenstore = 1 (* public/grant_table.h:GNTTAB_RESERVED_XENSTORE *)
+
+type grant_handle (* handle to a mapped grant *)
+
+module Gnttab = struct
+  type interface
+
+  external interface_open': unit -> interface = "stub_gnttab_interface_open"
+
+  let interface_open () =
+    try
+      interface_open' ()
+    with e ->
+      Printf.fprintf stderr "Failed to open grant table device: ENOENT\n";
+      Printf.fprintf stderr "Does this system have Xen userspace grant table support?\n";
+      Printf.fprintf stderr "On linux try:\n";
+      Printf.fprintf stderr "  sudo modprobe xen-gntdev\n%!";
+      raise e
+
+  external interface_close: interface -> unit = "stub_gnttab_interface_close"
+
+  type grant = {
+    domid: domid;
+    ref: gntref;
+  }
+
+  module Local_mapping = struct
+    type t = Xenmmap.mmap_interface
+
+    let to_pages t = t
+  end
+
+  external unmap_exn : interface -> Local_mapping.t -> unit = "stub_gnttab_unmap"
+
+  external map_fresh_exn: interface -> gntref -> domid -> bool -> Local_mapping.t = "stub_gnttab_map_fresh"
+
+  let map_exn interface grant writable =
+      map_fresh_exn interface grant.ref grant.domid writable
+end
diff --git a/tools/ocaml/libs/mmap/gnt.mli b/tools/ocaml/libs/mmap/gnt.mli
new file mode 100644
index 0000000000..302e13b05d
--- /dev/null
+++ b/tools/ocaml/libs/mmap/gnt.mli
@@ -0,0 +1,86 @@
+(*
+ * Copyright (c) 2010 Anil Madhavapeddy <anil@recoil.org>
+ * Copyright (C) 2012-2014 Citrix Inc
+ * 
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *)
+
+(** Allow a local xen domain to read/write memory exported ("granted")
+    from foreign domains. Safe memory sharing is a building block of all
+    xen inter-domain communication protocols such as those for virtual
+    network and disk devices.
+
+    Foreign domains will explicitly "grant" us access to certain memory
+    regions such as disk buffers. These regions are uniquely identified
+    by the pair of (foreign domain id, integer reference) which is
+    passed to us over some existing channel (typically via xenstore keys
+    or via structures in previously-shared memory region).
+*)
+
+(** {2 Common interface} *)
+
+type gntref = int
+(** Type of a grant table index, called a grant reference in
+    Xen's terminology. *)
+
+(** {2 Receiving foreign pages} *)
+
+module Gnttab : sig
+  type interface
+  (** A connection to the grant device, needed for mapping/unmapping *)
+
+  val interface_open: unit -> interface
+  (** Open a connection to the grant device. This must be done before any
+      calls to map or unmap. *)
+
+  val interface_close: interface -> unit
+  (** Close a connection to the grant device. Any future calls to map or
+      unmap will fail. *)
+
+  type grant = {
+    domid: int;
+    (** foreign domain who is exporting memory *)
+    ref: gntref;
+    (** id which identifies the specific export in the foreign domain *)
+  }
+  (** A foreign domain must explicitly "grant" us memory and send us the
+      "reference". The pair of (foreign domain id, reference) uniquely
+      identifies the block of memory. This pair ("grant") is transmitted
+      to us out-of-band, usually either via xenstore during device setup or
+      via a shared memory ring structure. *)
+
+  module Local_mapping : sig
+    type t
+    (** Abstract type representing a locally-mapped shared memory page *)
+
+    val to_pages: t -> Xenmmap.mmap_interface
+  end
+
+  val map_exn : interface -> grant -> bool -> Local_mapping.t
+  (** [map_exn if grant writable] creates a single mapping from
+      [grant] that will be writable if [writable] is [true]. *)
+
+  val unmap_exn: interface -> Local_mapping.t -> unit
+  (** Unmap a single mapping (which may involve multiple grants). Throws a
+      Failure if unsuccessful. *)
+end
+
+val console: gntref
+(** In xen-4.2 and later, the domain builder will allocate one of the
+    reserved grant table entries and use it to pre-authorise the console
+    backend domain. *)
+
+val xenstore: gntref
+(** In xen-4.2 and later, the domain builder will allocate one of the
+    reserved grant table entries and use it to pre-authorise the xenstore
+    backend domain. *)
diff --git a/tools/ocaml/libs/mmap/gnttab_stubs.c b/tools/ocaml/libs/mmap/gnttab_stubs.c
new file mode 100644
index 0000000000..f0b4ab237f
--- /dev/null
+++ b/tools/ocaml/libs/mmap/gnttab_stubs.c
@@ -0,0 +1,106 @@
+/*
+ * Copyright (C) 2012-2013 Citrix Inc
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ */
+
+#include <stdlib.h>
+#include <stdint.h>
+#include <string.h>
+#include <errno.h>
+
+/* For PROT_READ | PROT_WRITE */
+#include <sys/mman.h>
+
+#define CAML_NAME_SPACE
+#include <caml/alloc.h>
+#include <caml/memory.h>
+#include <caml/signals.h>
+#include <caml/fail.h>
+#include <caml/callback.h>
+#include <caml/bigarray.h>
+
+#include "xengnttab.h"
+#include "mmap_stubs.h"
+
+#ifndef Data_abstract_val
+#define Data_abstract_val(v) ((void*) Op_val(v))
+#endif
+
+#define _G(__g) (*((xengnttab_handle**)Data_abstract_val(__g)))
+
+CAMLprim value stub_gnttab_interface_open(void)
+{
+	CAMLparam0();
+	CAMLlocal1(result);
+	xengnttab_handle *xgh;
+
+	xgh = xengnttab_open(NULL, 0);
+	if (xgh == NULL)
+		caml_failwith("Failed to open interface");
+	result = caml_alloc(1, Abstract_tag);
+	_G(result) = xgh;
+
+	CAMLreturn(result);
+}
+
+CAMLprim value stub_gnttab_interface_close(value xgh)
+{
+	CAMLparam1(xgh);
+
+	xengnttab_close(_G(xgh));
+
+	CAMLreturn(Val_unit);
+}
+
+#define _M(__m) ((struct mmap_interface*)Data_abstract_val(__m))
+#define XEN_PAGE_SHIFT 12
+
+CAMLprim value stub_gnttab_unmap(value xgh, value array)
+{
+	CAMLparam2(xgh, array);
+	int result;
+
+	caml_enter_blocking_section();
+	result = xengnttab_unmap(_G(xgh), _M(array)->addr, _M(array)->len >> XEN_PAGE_SHIFT);
+	caml_leave_blocking_section();
+
+	if(result!=0) {
+		caml_failwith("Failed to unmap grant");
+	}
+
+	CAMLreturn(Val_unit);
+}
+
+CAMLprim value stub_gnttab_map_fresh(
+	value xgh,
+	value reference,
+	value domid,
+	value writable
+	)
+{
+	CAMLparam4(xgh, reference, domid, writable);
+	CAMLlocal1(contents);
+	void *map;
+
+	caml_enter_blocking_section();
+	map = xengnttab_map_grant_ref(_G(xgh), Int_val(domid), Int_val(reference),
+		Bool_val(writable)?PROT_READ | PROT_WRITE:PROT_READ);
+	caml_leave_blocking_section();
+
+	if(map==NULL) {
+		caml_failwith("Failed to map grant ref");
+	}
+	contents = stub_mmap_alloc(map, 1 << XEN_PAGE_SHIFT);
+	CAMLreturn(contents);
+}
diff --git a/tools/ocaml/xenstored/Makefile b/tools/ocaml/xenstored/Makefile
index 9d2da206d8..689a8fb07d 100644
--- a/tools/ocaml/xenstored/Makefile
+++ b/tools/ocaml/xenstored/Makefile
@@ -67,6 +67,7 @@ XENSTOREDLIBS = \
 	-ccopt -L -ccopt . systemd.cmxa \
 	-ccopt -L -ccopt . poll.cmxa \
 	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/mmap $(OCAML_TOPLEVEL)/libs/mmap/xenmmap.cmxa \
+	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/mmap $(OCAML_TOPLEVEL)/libs/mmap/gnt.cmxa \
 	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/eventchn $(OCAML_TOPLEVEL)/libs/eventchn/xeneventchn.cmxa \
 	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/xc $(OCAML_TOPLEVEL)/libs/xc/xenctrl.cmxa \
 	-ccopt -L -ccopt $(OCAML_TOPLEVEL)/libs/xb $(OCAML_TOPLEVEL)/libs/xb/xenbus.cmxa \
diff --git a/tools/ocaml/xenstored/dune b/tools/ocaml/xenstored/dune
index 714a2ae07e..81a6bf7a4a 100644
--- a/tools/ocaml/xenstored/dune
+++ b/tools/ocaml/xenstored/dune
@@ -1,17 +1,17 @@
 (executable
- (modes byte exe)
+ (modes exe)
  (name xenstored_main)
  (modules (:standard \ syslog systemd))
  (public_name oxenstored)
  (package xenstored)
  (flags (:standard -w -52))
- (libraries unix xen.bus xen.mmap xen.ctrl xen.eventchn xenstubs))
+ (libraries unix xen.bus xen.mmap xen.ctrl xen.eventchn xenstubs xengnt))
 
 (library
  (foreign_stubs
   (language c)
   (names syslog_stubs systemd_stubs select_stubs)
-  (flags (-DHAVE_SYSTEMD)))
+  (flags (-DHAVE_SYSTEMD -I../libs/mmap/)))
  (modules syslog systemd)
  (name xenstubs)
  (wrapped false)
diff --git a/tools/ocaml/xenstored/test/gnt.ml b/tools/ocaml/xenstored/test/gnt.ml
new file mode 100644
index 0000000000..ae71e2aaef
--- /dev/null
+++ b/tools/ocaml/xenstored/test/gnt.ml
@@ -0,0 +1,52 @@
+(*
+ * Copyright (c) 2010 Anil Madhavapeddy <anil@recoil.org>
+ * Copyright (C) 2012-2014 Citrix Inc
+ *
+ * Permission to use, copy, modify, and distribute this software for any
+ * purpose with or without fee is hereby granted, provided that the above
+ * copyright notice and this permission notice appear in all copies.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+ * WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+ * MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ * ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+ * WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ * ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+ * OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+ *)
+
+type gntref = int
+type domid = int
+
+let console = 0 (* public/grant_table.h:GNTTAB_RESERVED_CONSOLE *)
+let xenstore = 1 (* public/grant_table.h:GNTTAB_RESERVED_XENSTORE *)
+
+type grant_handle (* handle to a mapped grant *)
+
+module Gnttab = struct
+  type interface = unit
+
+  let interface_open () = ()
+  let interface_close () = ()
+
+  type grant = {
+    domid: domid;
+    ref: gntref;
+  }
+
+  let unmap_exn () _ = () (* FIXME: leak *)
+  let devzero = Unix.openfile "/dev/zero" [] 0
+  let  nullmap () = Xenmmap.mmap devzero Xenmmap.RDWR Xenmmap.PRIVATE 4096 0
+  let map_fresh_exn () _ _ _ = Xenmmap.to_interface (nullmap())
+
+  module Local_mapping = struct
+    type t = Xenmmap.mmap_interface
+
+    let to_pages interface t =
+      Xenmmap.make t ~unmap:(unmap_exn interface)
+  end
+
+  let map_exn interface grant writable : Local_mapping.t =
+    map_fresh_exn interface grant.ref grant.domid writable
+
+end
diff --git a/tools/ocaml/xenstored/test/testable.ml b/tools/ocaml/xenstored/test/testable.ml
index 2fa749fbb3..37042356b8 100644
--- a/tools/ocaml/xenstored/test/testable.ml
+++ b/tools/ocaml/xenstored/test/testable.ml
@@ -169,7 +169,8 @@ let () =
 let create ?(live_update = false) () =
   let store = Store.create () in
   let cons = Connections.create () in
-  let doms = Domains.init (Event.init ()) ignore in
+  let gnt = Gnt.Gnttab.interface_open () in (* dummy *)
+  let doms = Domains.init (Event.init ()) gnt ignore in
   let dom0 = Domains.create0 doms in
   let txidtbl = Hashtbl.create 47 in
   Connections.add_domain cons dom0 ;
diff --git a/tools/ocaml/xenstored/xenstored.ml b/tools/ocaml/xenstored/xenstored.ml
index 34e706910e..a6b86b167c 100644
--- a/tools/ocaml/xenstored/xenstored.ml
+++ b/tools/ocaml/xenstored/xenstored.ml
@@ -166,9 +166,8 @@ let from_channel_f_compat chan global_f socket_f domain_f watch_f store_f =
 					global_f ~rw
 				| "socket" :: fd :: [] ->
 					socket_f ~fd:(int_of_string fd)
-				| "dom" :: domid :: mfn :: port :: []->
+				| "dom" :: domid :: _mfn :: port :: []->
 					domain_f (int_of_string domid)
-					         (Nativeint.of_string mfn)
 					         (int_of_string port)
 				| "watch" :: domid :: path :: token :: [] ->
 					watch_f (int_of_string domid)
@@ -208,10 +207,10 @@ let from_channel_compat ~live store cons doms chan =
 		else
 			warn "Ignoring invalid socket FD %d" fd
 	in
-	let domain_f domid mfn port =
+	let domain_f domid port =
 		let ndom =
 			if domid > 0 then
-				Domains.create doms domid mfn port
+				Domains.create doms domid port
 			else
 				Domains.create0 doms
 			in
@@ -270,8 +269,7 @@ let from_channel_bin ~live store cons doms chan =
 			Connections.find_domain cons 0
 		| LR.Domain d ->
 			debug "Recreating domain %d, port %d" d.id d.remote_port; 
-			(* FIXME: gnttab *)
-			Domains.create doms d.id 0n d.remote_port
+			Domains.create doms d.id d.remote_port
 			|> Connections.add_domain cons;
 			Connections.find_domain cons d.id
 		| LR.Socket fd ->
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:20:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:20:02 +0000
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 10/17] tools/ocaml/libs/mmap: allocate correct number of bytes
Date: Tue, 11 May 2021 19:05:23 +0100
Message-ID: <f235418a0632d7aa6e3fb9d611faf31325a8336e.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

OCaml memory allocation functions use words as units,
unless explicitly documented otherwise.
Thus we were allocating more memory than necessary:
caml_alloc should have been called with the parameter '2',
but was called with a much larger value.
To account for future changes in the struct, keep using sizeof,
but round up and convert to a number of words.

For OCaml, 1 word = sizeof(value).

The Wsize_bsize macro converts bytes to words.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/libs/mmap/xenmmap_stubs.c | 4 +++-
 1 file changed, 3 insertions(+), 1 deletion(-)

diff --git a/tools/ocaml/libs/mmap/xenmmap_stubs.c b/tools/ocaml/libs/mmap/xenmmap_stubs.c
index b811990a89..4d09c5a6e6 100644
--- a/tools/ocaml/libs/mmap/xenmmap_stubs.c
+++ b/tools/ocaml/libs/mmap/xenmmap_stubs.c
@@ -28,6 +28,8 @@
 #include <caml/fail.h>
 #include <caml/callback.h>
 
+#define Wsize_bsize_round(n) (Wsize_bsize( (n) + sizeof(value) - 1 ))
+
 static int mmap_interface_init(struct mmap_interface *intf,
                                int fd, int pflag, int mflag,
                                int len, int offset)
@@ -57,7 +59,7 @@ CAMLprim value stub_mmap_init(value fd, value pflag, value mflag,
 	default: caml_invalid_argument("maptype");
 	}
 
-	result = caml_alloc(sizeof(struct mmap_interface), Abstract_tag);
+	result = caml_alloc(Wsize_bsize_round(sizeof(struct mmap_interface)), Abstract_tag);
 
 	if (mmap_interface_init(Intf_val(result), Int_val(fd),
 	                        c_pflag, c_mflag,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:20:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:20:04 +0000
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 11/17] tools/ocaml/libs/mmap: Expose stub_mmap_alloc
Date: Tue, 11 May 2021 19:05:24 +0100
Message-ID: <ef983bbf2cf5cea6fa45da82ace330a50ab26a8e.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This also handles mmap errors better by using the `uerror` helper
to raise a proper exception using `errno`.

Changed type of `len` from `int` to `size_t`: at construction time we
ensure the length is >= 0, so we can reflect this by using an unsigned
type. The type is unsigned at the C API level, and a negative integer
would just get translated to a very large unsigned number otherwise.

mmap also takes off_t and size_t, so using int64 would be more generic
here; however, we only ever use this interface to map rings, so keeping
the `int` sizes is fine.
OCaml itself only uses `int` for mapping bigarrays, and int64 just for
the offset.

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/libs/mmap/mmap_stubs.h    |  4 +++-
 tools/ocaml/libs/mmap/xenmmap_stubs.c | 31 +++++++++++++++++----------
 2 files changed, 23 insertions(+), 12 deletions(-)

diff --git a/tools/ocaml/libs/mmap/mmap_stubs.h b/tools/ocaml/libs/mmap/mmap_stubs.h
index 816ba6a724..3352594e38 100644
--- a/tools/ocaml/libs/mmap/mmap_stubs.h
+++ b/tools/ocaml/libs/mmap/mmap_stubs.h
@@ -27,7 +27,7 @@
 struct mmap_interface
 {
 	void *addr;
-	int len;
+	size_t len;
 };
 
 #ifndef Data_abstract_val
@@ -37,4 +37,6 @@ struct mmap_interface
 #define Intf_val(a) ((struct mmap_interface *) Data_abstract_val(a))
 #define Intf_data_val(a) (Intf_val(a)->addr)
 
+value stub_mmap_alloc(void *addr, size_t len);
+
 #endif
diff --git a/tools/ocaml/libs/mmap/xenmmap_stubs.c b/tools/ocaml/libs/mmap/xenmmap_stubs.c
index 4d09c5a6e6..d7a97c76f5 100644
--- a/tools/ocaml/libs/mmap/xenmmap_stubs.c
+++ b/tools/ocaml/libs/mmap/xenmmap_stubs.c
@@ -27,16 +27,18 @@
 #include <caml/custom.h>
 #include <caml/fail.h>
 #include <caml/callback.h>
+#include <caml/unixsupport.h>
 
 #define Wsize_bsize_round(n) (Wsize_bsize( (n) + sizeof(value) - 1 ))
 
-static int mmap_interface_init(struct mmap_interface *intf,
-                               int fd, int pflag, int mflag,
-                               int len, int offset)
+value stub_mmap_alloc(void *addr, size_t len)
 {
-	intf->len = len;
-	intf->addr = mmap(NULL, len, pflag, mflag, fd, offset);
-	return (intf->addr == MAP_FAILED) ? errno : 0;
+	CAMLparam0();
+	CAMLlocal1(result);
+	result = caml_alloc(Wsize_bsize_round(sizeof(struct mmap_interface)), Abstract_tag);
+	Intf_val(result)->addr = addr;
+	Intf_val(result)->len = len;
+	CAMLreturn(result);
 }
 
 CAMLprim value stub_mmap_init(value fd, value pflag, value mflag,
@@ -45,6 +47,8 @@ CAMLprim value stub_mmap_init(value fd, value pflag, value mflag,
 	CAMLparam5(fd, pflag, mflag, len, offset);
 	CAMLlocal1(result);
 	int c_pflag, c_mflag;
+	void* addr;
+	size_t length;
 
 	switch (Int_val(pflag)) {
 	case 0: c_pflag = PROT_READ; break;
@@ -59,12 +63,17 @@ CAMLprim value stub_mmap_init(value fd, value pflag, value mflag,
 	default: caml_invalid_argument("maptype");
 	}
 
-	result = caml_alloc(Wsize_bsize_round(sizeof(struct mmap_interface)), Abstract_tag);
+	if (Int_val(len) < 0)
+		caml_invalid_argument("negative size");
+	if (Int_val(offset) < 0)
+		caml_invalid_argument("negative offset");
+	length = Int_val(len);
 
-	if (mmap_interface_init(Intf_val(result), Int_val(fd),
-	                        c_pflag, c_mflag,
-	                        Int_val(len), Int_val(offset)))
-		caml_failwith("mmap");
+	addr = mmap(NULL, length, c_pflag, c_mflag, Int_val(fd), Int_val(offset));
+	if (MAP_FAILED == addr)
+		uerror("mmap", Nothing);
+
+	result = stub_mmap_alloc(addr, length);
 	CAMLreturn(result);
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:20:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:20:09 +0000
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 14/17] tools/ocaml: safer Xenmmap interface
Date: Tue, 11 May 2021 19:05:27 +0100
Message-ID: <3e5e2d75c78646d31f4d50625cd0c05c70bae331.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Xenmmap.mmap_interface is created from multiple places, and each origin
requires a different cleanup function:
* via mmap(), which needs to be unmap()-ed
* via xc_map_foreign_range
* via xengnttab_map_grant_ref

Introduce an abstract Xenmmap.t that pairs the raw mapping with the
unmap function matching its origin, so a mapping can only be released
by the correct function. Callers that need the raw interface go through
Xenmmap.to_interface.
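
The change boils down to one pattern: instead of handing callers a bare
mmap_interface whose correct destructor depends on where it came from,
pair every mapping with its own unmap function behind an abstract type.
A minimal, self-contained sketch of that pattern (an int ref stands in
for the C-side handle; the names here are illustrative, not the real
Xenmmap stubs):

```ocaml
(* Sketch of the "pair the handle with its destructor" pattern.
   [raw] stands in for the C-side mmap_interface; in the real code the
   default unmap corresponds to stub_mmap_final. *)
module Mapping : sig
  type raw = int ref                  (* stand-in for the C handle *)
  type t                              (* abstract: handle + destructor *)
  val make : ?unmap:(raw -> unit) -> raw -> t
  val to_raw : t -> raw
  val unmap : t -> unit
end = struct
  type raw = int ref
  type t = raw * (raw -> unit)
  let default_unmap r = r := 0        (* pretend release *)
  let make ?(unmap = default_unmap) raw = (raw, unmap)
  let to_raw (raw, _) = raw
  let unmap (raw, do_unmap) = do_unmap raw
end

let () =
  let m = Mapping.make (ref 42) in    (* records the default destructor *)
  assert (!(Mapping.to_raw m) = 42);
  Mapping.unmap m;                    (* runs the paired destructor *)
  assert (!(Mapping.to_raw m) = 0)
```

Because the type is abstract, a caller can no longer apply the wrong
stub to a mapping; unmap dispatches to whichever destructor make
recorded when the mapping was created.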

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/libs/mmap/gnt.ml      | 14 ++++++++------
 tools/ocaml/libs/mmap/gnt.mli     |  3 ++-
 tools/ocaml/libs/mmap/xenmmap.ml  | 14 ++++++++++++--
 tools/ocaml/libs/mmap/xenmmap.mli | 11 ++++++++---
 tools/ocaml/libs/xb/xb.ml         | 10 +++++-----
 tools/ocaml/libs/xb/xb.mli        |  4 ++--
 tools/ocaml/libs/xc/xenctrl.ml    |  6 ++++--
 tools/ocaml/libs/xc/xenctrl.mli   |  5 ++---
 tools/ocaml/xenstored/domain.ml   |  2 +-
 9 files changed, 44 insertions(+), 25 deletions(-)

diff --git a/tools/ocaml/libs/mmap/gnt.ml b/tools/ocaml/libs/mmap/gnt.ml
index 65f0334b7c..bef2d3e850 100644
--- a/tools/ocaml/libs/mmap/gnt.ml
+++ b/tools/ocaml/libs/mmap/gnt.ml
@@ -45,16 +45,18 @@ module Gnttab = struct
     ref: gntref;
   }
 
+  external unmap_exn : interface -> Xenmmap.mmap_interface -> unit = "stub_gnttab_unmap"
+
+  external map_fresh_exn: interface -> gntref -> domid -> bool -> Xenmmap.mmap_interface = "stub_gnttab_map_fresh"
+
   module Local_mapping = struct
     type t = Xenmmap.mmap_interface
 
-    let to_pages t = t
+    let to_pages interface t =
+      Xenmmap.make t ~unmap:(unmap_exn interface)
   end
 
-  external unmap_exn : interface -> Local_mapping.t -> unit = "stub_gnttab_unmap"
-
-  external map_fresh_exn: interface -> gntref -> domid -> bool -> Local_mapping.t = "stub_gnttab_map_fresh"
-
   let map_exn interface grant writable =
-      map_fresh_exn interface grant.ref grant.domid writable
+    map_fresh_exn interface grant.ref grant.domid writable
+
 end
diff --git a/tools/ocaml/libs/mmap/gnt.mli b/tools/ocaml/libs/mmap/gnt.mli
index 302e13b05d..13ab4c7ead 100644
--- a/tools/ocaml/libs/mmap/gnt.mli
+++ b/tools/ocaml/libs/mmap/gnt.mli
@@ -53,6 +53,7 @@ module Gnttab : sig
     ref: gntref;
     (** id which identifies the specific export in the foreign domain *)
   }
+
   (** A foreign domain must explicitly "grant" us memory and send us the
       "reference". The pair of (foreign domain id, reference) uniquely
       identifies the block of memory. This pair ("grant") is transmitted
@@ -63,7 +64,7 @@ module Gnttab : sig
     type t
     (** Abstract type representing a locally-mapped shared memory page *)
 
-    val to_pages: t -> Xenmmap.mmap_interface
+    val to_pages: interface -> t -> Xenmmap.t
   end
 
   val map_exn : interface -> grant -> bool -> Local_mapping.t
diff --git a/tools/ocaml/libs/mmap/xenmmap.ml b/tools/ocaml/libs/mmap/xenmmap.ml
index 44b67c89d2..af258942a0 100644
--- a/tools/ocaml/libs/mmap/xenmmap.ml
+++ b/tools/ocaml/libs/mmap/xenmmap.ml
@@ -15,17 +15,27 @@
  *)
 
 type mmap_interface
+type t = mmap_interface * (mmap_interface -> unit)
+
 
 type mmap_prot_flag = RDONLY | WRONLY | RDWR
 type mmap_map_flag = SHARED | PRIVATE
 
 (* mmap: fd -> prot_flag -> map_flag -> length -> offset -> interface *)
-external mmap: Unix.file_descr -> mmap_prot_flag -> mmap_map_flag
+external mmap': Unix.file_descr -> mmap_prot_flag -> mmap_map_flag
 		-> int -> int -> mmap_interface = "stub_mmap_init"
-external unmap: mmap_interface -> unit = "stub_mmap_final"
 (* read: interface -> start -> length -> data *)
 external read: mmap_interface -> int -> int -> string = "stub_mmap_read"
 (* write: interface -> data -> start -> length -> unit *)
 external write: mmap_interface -> string -> int -> int -> unit = "stub_mmap_write"
 (* getpagesize: unit -> size of page *)
+external unmap': mmap_interface -> unit = "stub_mmap_final"
+(* getpagesize: unit -> size of page *)
+let make ?(unmap=unmap') interface = interface, unmap
 external getpagesize: unit -> int = "stub_mmap_getpagesize"
+
+let to_interface (intf, _) = intf
+let mmap fd prot_flag map_flag length offset =
+	let map = mmap' fd prot_flag map_flag length offset in
+	make map ~unmap:unmap'
+let unmap (map, do_unmap) = do_unmap map
diff --git a/tools/ocaml/libs/mmap/xenmmap.mli b/tools/ocaml/libs/mmap/xenmmap.mli
index 8f92ed6310..075b24eab4 100644
--- a/tools/ocaml/libs/mmap/xenmmap.mli
+++ b/tools/ocaml/libs/mmap/xenmmap.mli
@@ -14,15 +14,20 @@
  * GNU Lesser General Public License for more details.
  *)
 
+type t
 type mmap_interface
 type mmap_prot_flag = RDONLY | WRONLY | RDWR
 type mmap_map_flag = SHARED | PRIVATE
 
-external mmap : Unix.file_descr -> mmap_prot_flag -> mmap_map_flag -> int -> int
-             -> mmap_interface = "stub_mmap_init"
-external unmap : mmap_interface -> unit = "stub_mmap_final"
 external read : mmap_interface -> int -> int -> string = "stub_mmap_read"
 external write : mmap_interface -> string -> int -> int -> unit
                = "stub_mmap_write"
 
+val mmap : Unix.file_descr -> mmap_prot_flag -> mmap_map_flag -> int -> int -> t
+val unmap : t -> unit
+
+val make: ?unmap:(mmap_interface -> unit) -> mmap_interface -> t 
+
+val to_interface: t -> mmap_interface
+
 external getpagesize : unit -> int = "stub_mmap_getpagesize"
diff --git a/tools/ocaml/libs/xb/xb.ml b/tools/ocaml/libs/xb/xb.ml
index 104d319d77..4ddf741420 100644
--- a/tools/ocaml/libs/xb/xb.ml
+++ b/tools/ocaml/libs/xb/xb.ml
@@ -28,7 +28,7 @@ let _ =
 
 type backend_mmap =
 {
-	mmap: Xenmmap.mmap_interface;     (* mmaped interface = xs_ring *)
+	mmap: Xenmmap.t;     (* mmaped interface = xs_ring *)
 	eventchn_notify: unit -> unit; (* function to notify through eventchn *)
 	mutable work_again: bool;
 }
@@ -59,7 +59,7 @@ let reconnect t = match t.backend with
 		(* should never happen, so close the connection *)
 		raise End_of_file
 	| Xenmmap backend ->
-		Xs_ring.close backend.mmap;
+		Xs_ring.close Xenmmap.(to_interface backend.mmap);
 		backend.eventchn_notify ();
 		(* Clear our old connection state *)
 		Queue.clear t.pkt_in;
@@ -77,7 +77,7 @@ let read_fd back _con b len =
 
 let read_mmap back _con b len =
 	let s = Bytes.make len '\000' in
-	let rd = Xs_ring.read back.mmap s len in
+	let rd = Xs_ring.read Xenmmap.(to_interface back.mmap) s len in
 	Bytes.blit s 0 b 0 rd;
 	back.work_again <- (rd > 0);
 	if rd > 0 then
@@ -93,7 +93,7 @@ let write_fd back _con b len =
 	Unix.write_substring back.fd b 0 len
 
 let write_mmap back _con s len =
-	let ws = Xs_ring.write_substring back.mmap s len in
+	let ws = Xs_ring.write_substring Xenmmap.(to_interface back.mmap) s len in
 	if ws > 0 then
 		back.eventchn_notify ();
 	ws
@@ -167,7 +167,7 @@ let open_fd fd = newcon (Fd { fd = fd; })
 
 let open_mmap mmap notifyfct =
 	(* Advertise XENSTORE_SERVER_FEATURE_RECONNECTION *)
-	Xs_ring.set_server_features mmap (Xs_ring.Server_features.singleton Xs_ring.Server_feature.Reconnection);
+	Xs_ring.set_server_features (Xenmmap.to_interface mmap) (Xs_ring.Server_features.singleton Xs_ring.Server_feature.Reconnection);
 	newcon (Xenmmap {
 		mmap = mmap;
 		eventchn_notify = notifyfct;
diff --git a/tools/ocaml/libs/xb/xb.mli b/tools/ocaml/libs/xb/xb.mli
index 3a00da6cdd..0184d77ffc 100644
--- a/tools/ocaml/libs/xb/xb.mli
+++ b/tools/ocaml/libs/xb/xb.mli
@@ -59,7 +59,7 @@ exception Noent
 exception Invalid
 exception Reconnect
 type backend_mmap = {
-  mmap : Xenmmap.mmap_interface;
+  mmap : Xenmmap.t;
   eventchn_notify : unit -> unit;
   mutable work_again : bool;
 }
@@ -86,7 +86,7 @@ val output : t -> bool
 val input : t -> bool
 val newcon : backend -> t
 val open_fd : Unix.file_descr -> t
-val open_mmap : Xenmmap.mmap_interface -> (unit -> unit) -> t
+val open_mmap : Xenmmap.t -> (unit -> unit) -> t
 val close : t -> unit
 val is_fd : t -> bool
 val is_mmap : t -> bool
diff --git a/tools/ocaml/libs/xc/xenctrl.ml b/tools/ocaml/libs/xc/xenctrl.ml
index a5588c643f..49950c368a 100644
--- a/tools/ocaml/libs/xc/xenctrl.ml
+++ b/tools/ocaml/libs/xc/xenctrl.ml
@@ -265,9 +265,11 @@ external domain_set_memmap_limit: handle -> domid -> int64 -> unit
 external domain_memory_increase_reservation: handle -> domid -> int64 -> unit
        = "stub_xc_domain_memory_increase_reservation"
 
-external map_foreign_range: handle -> domid -> int
+external map_foreign_range': handle -> domid -> int
                          -> nativeint -> Xenmmap.mmap_interface
-       = "stub_map_foreign_range"
+			 = "stub_map_foreign_range"
+let map_foreign_range handle domid port mfn =
+	Xenmmap.make (map_foreign_range' handle domid port mfn)
 
 external domain_assign_device: handle -> domid -> (int * int * int * int) -> unit
        = "stub_xc_domain_assign_device"
diff --git a/tools/ocaml/libs/xc/xenctrl.mli b/tools/ocaml/libs/xc/xenctrl.mli
index 6e94940a8a..ad9d07e7a0 100644
--- a/tools/ocaml/libs/xc/xenctrl.mli
+++ b/tools/ocaml/libs/xc/xenctrl.mli
@@ -202,9 +202,8 @@ external domain_set_memmap_limit : handle -> domid -> int64 -> unit
 external domain_memory_increase_reservation :
   handle -> domid -> int64 -> unit
   = "stub_xc_domain_memory_increase_reservation"
-external map_foreign_range :
-  handle -> domid -> int -> nativeint -> Xenmmap.mmap_interface
-  = "stub_map_foreign_range"
+val map_foreign_range :
+  handle -> domid -> int -> nativeint -> Xenmmap.t
 
 external domain_assign_device: handle -> domid -> (int * int * int * int) -> unit
        = "stub_xc_domain_assign_device"
diff --git a/tools/ocaml/xenstored/domain.ml b/tools/ocaml/xenstored/domain.ml
index 81cb59b8f1..82d7b1a7ef 100644
--- a/tools/ocaml/xenstored/domain.ml
+++ b/tools/ocaml/xenstored/domain.ml
@@ -23,7 +23,7 @@ type t =
 {
 	id: Xenctrl.domid;
 	mfn: nativeint;
-	interface: Xenmmap.mmap_interface;
+	interface: Xenmmap.t;
 	eventchn: Event.t;
 	mutable remote_port: int;
 	mutable port: Xeneventchn.t option;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 18:31:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 18:31:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.125904.237181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgXA4-0005yX-4v; Tue, 11 May 2021 18:31:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 125904.237181; Tue, 11 May 2021 18:31:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgXA3-0005yQ-Vf; Tue, 11 May 2021 18:31:16 +0000
Received: by outflank-mailman (input) for mailman id 125904;
 Tue, 11 May 2021 18:07:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=iFnS=KG=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgWmp-0001nY-D7
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 18:07:15 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c0f3be8a-b68c-4083-bba1-8f900dcae6d6;
 Tue, 11 May 2021 18:07:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0f3be8a-b68c-4083-bba1-8f900dcae6d6
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620756424;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=mqm2Gc2+qWUMHLmZhD/y0DMFCobF8xyc73SR1mUL2Zk=;
  b=YPHNBCdk6xvq6UP4s3Py/VzpSWT23KQmDPeAl0niTBcS6LDCIVCbJpJq
   yRvfyTwZGTK6c5+I9mBjUFjA8WEq6e6m0cWkaXl7745V1Q5sabmMhhl5n
   YLJJp5z2lUcKWeSMjGrbSuO1cBLcVyMmzAxf3SYZTZCqIPpvQ6O/TDIbo
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 1JaAF9mkjSkZ3gvBYIVoB/P1FtA7/TLd4oBXFG/ieOf2o0oudjxMxYNDpJojsNo1ccs/iwtWsn
 TSKi6K0fEdQ5Xu/CYfBiEvGk6o6VSV+dnarfi4CSsnBR+Xt9n7Do+IeiMIU88E/Hj9RAj1MEw3
 gDVpHGQuX8n7BrGyGZmJyfvtsOxpNLzWp99LMc8sfgkhBJw/4cJsIohzwRLW4GaMOS3zTs4rJw
 sPY65tApe+LT0J3ukaNwXbvXVwAdWm+4OwBht36iaImCck6IsShXW64To4wq/hC2tQfGq0NAmP
 iYo=
X-SBRS: 5.1
X-MesageID: 43675373
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:R4IWCqOp+jXA4MBcTjujsMiBIKoaSvp037BK7S1MoNJuEvBw9v
 re+MjzsCWftN9/Yh4dcLy7VpVoIkmskKKdg7NhXotKNTOO0AeVxelZhrcKqAeQeREWmNQ96U
 9hGZIOdeEZDzJB/LrHCN/TKade/DGFmprY+9s31x1WPGZXgzkL1XYDNu6ceHcGIjVuNN4CO7
 e3wNFInDakcWR/VLXAOpFUN9Kz3uEijfjdEGY7OyI=
X-IronPort-AV: E=Sophos;i="5.82,291,1613451600"; 
   d="scan'208";a="43675373"
From: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: =?UTF-8?q?Edwin=20T=C3=B6r=C3=B6k?= <edvin.torok@citrix.com>, "Christian
 Lindig" <christian.lindig@citrix.com>, David Scott <dave@recoil.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: [PATCH v2 03/17] tools/ocaml: vendor external dependencies for convenience
Date: Tue, 11 May 2021 19:05:16 +0100
Message-ID: <878863919ef8eea9fc715d5b86f6f0a9eb75b0be.1620755942.git.edvin.torok@citrix.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

To run the unit tests these dependencies need to be available.
Developers can either install them themselves using opam,
or we can vendor them as subdirectories here.

Dune automatically picks each library up from the system, or builds it
from the vendored subdirectory as needed; no changes to the dune files
are required.

The duniverse/ subdirectory was generated with the 'opam monorepo'
plugin:
https://github.com/ocamllabs/opam-monorepo

This wrote a lockfile (xen.opam.locked) containing tarball sources and
their hashes, and 'opam monorepo pull' then downloaded the sources.
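
For reference, the workflow above can be sketched as the following
command sequence (illustrative only; it assumes the opam-monorepo
plugin is installed, and exact flags may differ between plugin
versions):

```shell
# Hypothetical invocation from the directory holding xen.opam:
opam monorepo lock   # resolve dependencies, write xen.opam.locked
                     # (tarball URLs and their hashes)
opam monorepo pull   # download the locked sources into duniverse/
```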

Signed-off-by: Edwin Török <edvin.torok@citrix.com>
---
 tools/ocaml/duniverse/cmdliner/.gitignore     |   10 +
 tools/ocaml/duniverse/cmdliner/.ocp-indent    |    1 +
 tools/ocaml/duniverse/cmdliner/B0.ml          |    9 +
 tools/ocaml/duniverse/cmdliner/CHANGES.md     |  255 +++
 tools/ocaml/duniverse/cmdliner/LICENSE.md     |   13 +
 tools/ocaml/duniverse/cmdliner/Makefile       |   77 +
 tools/ocaml/duniverse/cmdliner/README.md      |   51 +
 tools/ocaml/duniverse/cmdliner/_tags          |    3 +
 tools/ocaml/duniverse/cmdliner/build.ml       |  155 ++
 tools/ocaml/duniverse/cmdliner/cmdliner.opam  |   32 +
 tools/ocaml/duniverse/cmdliner/doc/api.odocl  |    1 +
 tools/ocaml/duniverse/cmdliner/dune-project   |    2 +
 tools/ocaml/duniverse/cmdliner/pkg/META       |    7 +
 tools/ocaml/duniverse/cmdliner/pkg/pkg.ml     |   33 +
 .../ocaml/duniverse/cmdliner/src/cmdliner.ml  |  309 ++++
 .../ocaml/duniverse/cmdliner/src/cmdliner.mli | 1624 +++++++++++++++++
 .../duniverse/cmdliner/src/cmdliner.mllib     |   11 +
 .../duniverse/cmdliner/src/cmdliner_arg.ml    |  356 ++++
 .../duniverse/cmdliner/src/cmdliner_arg.mli   |  111 ++
 .../duniverse/cmdliner/src/cmdliner_base.ml   |  302 +++
 .../duniverse/cmdliner/src/cmdliner_base.mli  |   68 +
 .../duniverse/cmdliner/src/cmdliner_cline.ml  |  199 ++
 .../duniverse/cmdliner/src/cmdliner_cline.mli |   34 +
 .../duniverse/cmdliner/src/cmdliner_docgen.ml |  352 ++++
 .../cmdliner/src/cmdliner_docgen.mli          |   30 +
 .../duniverse/cmdliner/src/cmdliner_info.ml   |  233 +++
 .../duniverse/cmdliner/src/cmdliner_info.mli  |  140 ++
 .../cmdliner/src/cmdliner_manpage.ml          |  502 +++++
 .../cmdliner/src/cmdliner_manpage.mli         |  100 +
 .../duniverse/cmdliner/src/cmdliner_msg.ml    |  116 ++
 .../duniverse/cmdliner/src/cmdliner_msg.mli   |   56 +
 .../cmdliner/src/cmdliner_suggest.ml          |   54 +
 .../cmdliner/src/cmdliner_suggest.mli         |   25 +
 .../duniverse/cmdliner/src/cmdliner_term.ml   |   41 +
 .../duniverse/cmdliner/src/cmdliner_term.mli  |   40 +
 .../duniverse/cmdliner/src/cmdliner_trie.ml   |   97 +
 .../duniverse/cmdliner/src/cmdliner_trie.mli  |   35 +
 tools/ocaml/duniverse/cmdliner/src/dune       |    4 +
 tools/ocaml/duniverse/cmdliner/test/chorus.ml |   31 +
 tools/ocaml/duniverse/cmdliner/test/cp_ex.ml  |   54 +
 .../ocaml/duniverse/cmdliner/test/darcs_ex.ml |  149 ++
 tools/ocaml/duniverse/cmdliner/test/dune      |   12 +
 tools/ocaml/duniverse/cmdliner/test/revolt.ml |    9 +
 tools/ocaml/duniverse/cmdliner/test/rm_ex.ml  |   53 +
 .../ocaml/duniverse/cmdliner/test/tail_ex.ml  |   73 +
 .../ocaml/duniverse/cmdliner/test/test_man.ml |  100 +
 .../duniverse/cmdliner/test/test_man_utf8.ml  |   11 +
 .../duniverse/cmdliner/test/test_opt_req.ml   |   13 +
 .../ocaml/duniverse/cmdliner/test/test_pos.ml |   13 +
 .../duniverse/cmdliner/test/test_pos_all.ml   |   11 +
 .../duniverse/cmdliner/test/test_pos_left.ml  |   11 +
 .../duniverse/cmdliner/test/test_pos_req.ml   |   15 +
 .../duniverse/cmdliner/test/test_pos_rev.ml   |   14 +
 .../duniverse/cmdliner/test/test_term_dups.ml |   19 +
 .../cmdliner/test/test_with_used_args.ml      |   18 +
 tools/ocaml/duniverse/cppo/.gitignore         |    5 +
 tools/ocaml/duniverse/cppo/.ocp-indent        |   22 +
 tools/ocaml/duniverse/cppo/.travis.yml        |   16 +
 tools/ocaml/duniverse/cppo/CODEOWNERS         |    8 +
 tools/ocaml/duniverse/cppo/Changes            |   85 +
 tools/ocaml/duniverse/cppo/INSTALL.md         |   17 +
 tools/ocaml/duniverse/cppo/LICENSE.md         |   24 +
 tools/ocaml/duniverse/cppo/Makefile           |   18 +
 tools/ocaml/duniverse/cppo/README.md          |  521 ++++++
 tools/ocaml/duniverse/cppo/VERSION            |    1 +
 tools/ocaml/duniverse/cppo/appveyor.yml       |   14 +
 tools/ocaml/duniverse/cppo/cppo.opam          |   31 +
 .../ocaml/duniverse/cppo/cppo_ocamlbuild.opam |   27 +
 tools/ocaml/duniverse/cppo/dune-project       |    3 +
 tools/ocaml/duniverse/cppo/examples/Makefile  |    8 +
 tools/ocaml/duniverse/cppo/examples/debug.ml  |    7 +
 tools/ocaml/duniverse/cppo/examples/dune      |   32 +
 tools/ocaml/duniverse/cppo/examples/french.ml |   34 +
 tools/ocaml/duniverse/cppo/examples/lexer.mll |    9 +
 .../duniverse/cppo/ocamlbuild_plugin/_tags    |    1 +
 .../duniverse/cppo/ocamlbuild_plugin/dune     |    6 +
 .../cppo/ocamlbuild_plugin/ocamlbuild_cppo.ml |   35 +
 .../ocamlbuild_plugin/ocamlbuild_cppo.mli     |    9 +
 tools/ocaml/duniverse/cppo/src/compat.ml      |    7 +
 .../ocaml/duniverse/cppo/src/cppo_command.ml  |   63 +
 .../ocaml/duniverse/cppo/src/cppo_command.mli |   11 +
 tools/ocaml/duniverse/cppo/src/cppo_eval.ml   |  697 +++++++
 tools/ocaml/duniverse/cppo/src/cppo_eval.mli  |   29 +
 tools/ocaml/duniverse/cppo/src/cppo_lexer.mll |  721 ++++++++
 tools/ocaml/duniverse/cppo/src/cppo_main.ml   |  230 +++
 .../ocaml/duniverse/cppo/src/cppo_parser.mly  |  266 +++
 tools/ocaml/duniverse/cppo/src/cppo_types.ml  |   98 +
 tools/ocaml/duniverse/cppo/src/cppo_types.mli |   70 +
 .../ocaml/duniverse/cppo/src/cppo_version.mli |    1 +
 tools/ocaml/duniverse/cppo/src/dune           |   21 +
 tools/ocaml/duniverse/cppo/test/capital.cppo  |    6 +
 tools/ocaml/duniverse/cppo/test/capital.ref   |    6 +
 tools/ocaml/duniverse/cppo/test/comments.cppo |    7 +
 tools/ocaml/duniverse/cppo/test/comments.ref  |    8 +
 tools/ocaml/duniverse/cppo/test/cond.cppo     |   47 +
 tools/ocaml/duniverse/cppo/test/cond.ref      |   17 +
 tools/ocaml/duniverse/cppo/test/dune          |  130 ++
 tools/ocaml/duniverse/cppo/test/ext.cppo      |   10 +
 tools/ocaml/duniverse/cppo/test/ext.ref       |   28 +
 tools/ocaml/duniverse/cppo/test/incl.cppo     |    3 +
 tools/ocaml/duniverse/cppo/test/incl2.cppo    |    1 +
 tools/ocaml/duniverse/cppo/test/loc.cppo      |    8 +
 tools/ocaml/duniverse/cppo/test/loc.ref       |   21 +
 .../ocaml/duniverse/cppo/test/paren_arg.cppo  |    3 +
 tools/ocaml/duniverse/cppo/test/paren_arg.ref |    4 +
 tools/ocaml/duniverse/cppo/test/source.sh     |   13 +
 tools/ocaml/duniverse/cppo/test/test.cppo     |  144 ++
 tools/ocaml/duniverse/cppo/test/tuple.cppo    |   38 +
 tools/ocaml/duniverse/cppo/test/tuple.ref     |   20 +
 .../ocaml/duniverse/cppo/test/unmatched.cppo  |   14 +
 tools/ocaml/duniverse/cppo/test/unmatched.ref |   15 +
 tools/ocaml/duniverse/cppo/test/version.cppo  |   30 +
 tools/ocaml/duniverse/crowbar/.gitignore      |    5 +
 tools/ocaml/duniverse/crowbar/CHANGES.md      |    9 +
 tools/ocaml/duniverse/crowbar/LICENSE.md      |    8 +
 tools/ocaml/duniverse/crowbar/README.md       |   82 +
 tools/ocaml/duniverse/crowbar/crowbar.opam    |   33 +
 tools/ocaml/duniverse/crowbar/dune            |    1 +
 tools/ocaml/duniverse/crowbar/dune-project    |    2 +
 .../duniverse/crowbar/examples/.gitignore     |    1 +
 .../duniverse/crowbar/examples/calendar/dune  |    3 +
 .../examples/calendar/test_calendar.ml        |   29 +
 .../duniverse/crowbar/examples/fpath/dune     |    4 +
 .../crowbar/examples/fpath/test_fpath.ml      |   18 +
 .../duniverse/crowbar/examples/input/testcase |    1 +
 .../ocaml/duniverse/crowbar/examples/map/dune |    3 +
 .../crowbar/examples/map/test_map.ml          |   47 +
 .../duniverse/crowbar/examples/pprint/dune    |    3 +
 .../crowbar/examples/pprint/test_pprint.ml    |   39 +
 .../crowbar/examples/serializer/dune          |    3 +
 .../crowbar/examples/serializer/serializer.ml |   34 +
 .../examples/serializer/test_serializer.ml    |   47 +
 .../duniverse/crowbar/examples/uunf/dune      |    3 +
 .../crowbar/examples/uunf/test_uunf.ml        |   75 +
 .../duniverse/crowbar/examples/xmldiff/dune   |    3 +
 .../crowbar/examples/xmldiff/test_xmldiff.ml  |   42 +
 tools/ocaml/duniverse/crowbar/src/crowbar.ml  |  582 ++++++
 tools/ocaml/duniverse/crowbar/src/crowbar.mli |  251 +++
 tools/ocaml/duniverse/crowbar/src/dune        |    3 +
 tools/ocaml/duniverse/crowbar/src/todo        |   16 +
 tools/ocaml/duniverse/csexp/CHANGES.md        |   45 +
 tools/ocaml/duniverse/csexp/LICENSE.md        |   21 +
 tools/ocaml/duniverse/csexp/Makefile          |   23 +
 tools/ocaml/duniverse/csexp/README.md         |   33 +
 .../duniverse/csexp/bench/csexp_bench.ml      |   22 +
 tools/ocaml/duniverse/csexp/bench/dune        |   11 +
 tools/ocaml/duniverse/csexp/bench/main.ml     |    1 +
 tools/ocaml/duniverse/csexp/bench/runner.sh   |    4 +
 tools/ocaml/duniverse/csexp/csexp.opam        |   51 +
 .../ocaml/duniverse/csexp/csexp.opam.template |   14 +
 tools/ocaml/duniverse/csexp/dune-project      |   42 +
 .../ocaml/duniverse/csexp/dune-workspace.dev  |    6 +
 tools/ocaml/duniverse/csexp/src/csexp.ml      |  333 ++++
 tools/ocaml/duniverse/csexp/src/csexp.mli     |  369 ++++
 tools/ocaml/duniverse/csexp/src/dune          |    3 +
 tools/ocaml/duniverse/csexp/test/dune         |    6 +
 tools/ocaml/duniverse/csexp/test/test.ml      |  142 ++
 tools/ocaml/duniverse/dune                    |    4 +
 tools/ocaml/duniverse/fmt/.gitignore          |    8 +
 tools/ocaml/duniverse/fmt/.ocp-indent         |    1 +
 tools/ocaml/duniverse/fmt/CHANGES.md          |   98 +
 tools/ocaml/duniverse/fmt/LICENSE.md          |   13 +
 tools/ocaml/duniverse/fmt/README.md           |   35 +
 tools/ocaml/duniverse/fmt/_tags               |    7 +
 tools/ocaml/duniverse/fmt/doc/api.odocl       |    3 +
 tools/ocaml/duniverse/fmt/doc/index.mld       |   11 +
 tools/ocaml/duniverse/fmt/dune-project        |    2 +
 tools/ocaml/duniverse/fmt/fmt.opam            |   35 +
 tools/ocaml/duniverse/fmt/pkg/META            |   40 +
 tools/ocaml/duniverse/fmt/pkg/pkg.ml          |   18 +
 tools/ocaml/duniverse/fmt/src/dune            |   30 +
 tools/ocaml/duniverse/fmt/src/fmt.ml          |  787 ++++++++
 tools/ocaml/duniverse/fmt/src/fmt.mli         |  689 +++++++
 tools/ocaml/duniverse/fmt/src/fmt.mllib       |    1 +
 tools/ocaml/duniverse/fmt/src/fmt_cli.ml      |   32 +
 tools/ocaml/duniverse/fmt/src/fmt_cli.mli     |   45 +
 tools/ocaml/duniverse/fmt/src/fmt_cli.mllib   |    1 +
 tools/ocaml/duniverse/fmt/src/fmt_top.ml      |   23 +
 tools/ocaml/duniverse/fmt/src/fmt_top.mllib   |    1 +
 tools/ocaml/duniverse/fmt/src/fmt_tty.ml      |   78 +
 tools/ocaml/duniverse/fmt/src/fmt_tty.mli     |   50 +
 tools/ocaml/duniverse/fmt/src/fmt_tty.mllib   |    1 +
 .../duniverse/fmt/src/fmt_tty_top_init.ml     |   23 +
 tools/ocaml/duniverse/fmt/test/test.ml        |  322 ++++
 .../duniverse/ocaml-afl-persistent/.gitignore |    2 +
 .../duniverse/ocaml-afl-persistent/CHANGES.md |   22 +
 .../duniverse/ocaml-afl-persistent/LICENSE.md |    8 +
 .../duniverse/ocaml-afl-persistent/README.md  |   17 +
 .../ocaml-afl-persistent/afl-persistent.opam  |   49 +
 .../afl-persistent.opam.template              |   16 +
 .../aflPersistent.available.ml                |   21 +
 .../ocaml-afl-persistent/aflPersistent.mli    |    1 +
 .../aflPersistent.stub.ml                     |    1 +
 .../duniverse/ocaml-afl-persistent/detect.sh  |   43 +
 .../ocaml/duniverse/ocaml-afl-persistent/dune |   20 +
 .../ocaml-afl-persistent/dune-project         |   23 +
 .../duniverse/ocaml-afl-persistent/test.ml    |    3 +
 .../ocaml-afl-persistent/test/harness.ml      |   22 +
 .../ocaml-afl-persistent/test/test.ml         |   73 +
 .../ocaml-afl-persistent/test/test.sh         |   33 +
 .../ocaml/duniverse/ocplib-endian/.gitignore  |    3 +
 .../ocaml/duniverse/ocplib-endian/.travis.yml |   19 +
 .../ocaml/duniverse/ocplib-endian/CHANGES.md  |   55 +
 .../ocaml/duniverse/ocplib-endian/COPYING.txt |  521 ++++++
 tools/ocaml/duniverse/ocplib-endian/Makefile  |   13 +
 tools/ocaml/duniverse/ocplib-endian/README.md |   16 +
 .../duniverse/ocplib-endian/dune-project      |    2 +
 .../ocplib-endian/ocplib-endian.opam          |   30 +
 .../ocplib-endian/src/be_ocaml_401.ml         |   32 +
 .../duniverse/ocplib-endian/src/common.ml     |   24 +
 .../ocplib-endian/src/common_401.cppo.ml      |  100 +
 .../ocplib-endian/src/common_float.ml         |    5 +
 tools/ocaml/duniverse/ocplib-endian/src/dune  |   75 +
 .../ocplib-endian/src/endianBigstring.cppo.ml |  112 ++
 .../src/endianBigstring.cppo.mli              |  128 ++
 .../ocplib-endian/src/endianBytes.cppo.ml     |  130 ++
 .../ocplib-endian/src/endianBytes.cppo.mli    |  124 ++
 .../ocplib-endian/src/endianString.cppo.ml    |  118 ++
 .../ocplib-endian/src/endianString.cppo.mli   |  121 ++
 .../ocplib-endian/src/le_ocaml_401.ml         |   32 +
 .../ocplib-endian/src/ne_ocaml_401.ml         |   20 +
 .../duniverse/ocplib-endian/tests/bench.ml    |  436 +++++
 .../ocaml/duniverse/ocplib-endian/tests/dune  |   35 +
 .../duniverse/ocplib-endian/tests/test.ml     |   39 +
 .../tests/test_bigstring.cppo.ml              |  191 ++
 .../ocplib-endian/tests/test_bytes.cppo.ml    |  185 ++
 .../ocplib-endian/tests/test_string.cppo.ml   |  185 ++
 tools/ocaml/duniverse/result/CHANGES.md       |   15 +
 tools/ocaml/duniverse/result/LICENSE.md       |   24 +
 tools/ocaml/duniverse/result/Makefile         |   17 +
 tools/ocaml/duniverse/result/README.md        |    5 +
 tools/ocaml/duniverse/result/dune             |   12 +
 tools/ocaml/duniverse/result/dune-project     |    3 +
 .../duniverse/result/result-as-alias-4.08.ml  |    2 +
 .../ocaml/duniverse/result/result-as-alias.ml |    2 +
 .../duniverse/result/result-as-newtype.ml     |    2 +
 tools/ocaml/duniverse/result/result.opam      |   18 +
 tools/ocaml/duniverse/result/which_result.ml  |   14 +
 tools/ocaml/duniverse/stdlib-shims/CHANGES.md |    5 +
 tools/ocaml/duniverse/stdlib-shims/LICENSE    |  203 +++
 tools/ocaml/duniverse/stdlib-shims/README.md  |    2 +
 .../ocaml/duniverse/stdlib-shims/dune-project |    1 +
 .../duniverse/stdlib-shims/dune-workspace.dev |   14 +
 tools/ocaml/duniverse/stdlib-shims/src/dune   |   97 +
 .../duniverse/stdlib-shims/stdlib-shims.opam  |   24 +
 tools/ocaml/duniverse/stdlib-shims/test/dune  |    3 +
 .../ocaml/duniverse/stdlib-shims/test/test.ml |    2 +
 tools/ocaml/xen.opam.locked                   |  119 ++
 248 files changed, 18334 insertions(+)
 create mode 100644 tools/ocaml/duniverse/cmdliner/.gitignore
 create mode 100644 tools/ocaml/duniverse/cmdliner/.ocp-indent
 create mode 100644 tools/ocaml/duniverse/cmdliner/B0.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/cmdliner/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/cmdliner/Makefile
 create mode 100644 tools/ocaml/duniverse/cmdliner/README.md
 create mode 100644 tools/ocaml/duniverse/cmdliner/_tags
 create mode 100755 tools/ocaml/duniverse/cmdliner/build.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/cmdliner.opam
 create mode 100644 tools/ocaml/duniverse/cmdliner/doc/api.odocl
 create mode 100644 tools/ocaml/duniverse/cmdliner/dune-project
 create mode 100644 tools/ocaml/duniverse/cmdliner/pkg/META
 create mode 100755 tools/ocaml/duniverse/cmdliner/pkg/pkg.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner.mllib
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_base.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_base.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_info.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_info.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_term.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_term.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.mli
 create mode 100644 tools/ocaml/duniverse/cmdliner/src/dune
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/chorus.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/cp_ex.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/darcs_ex.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/dune
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/revolt.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/rm_ex.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/tail_ex.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_man.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_man_utf8.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_opt_req.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos_all.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos_left.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos_req.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_pos_rev.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_term_dups.ml
 create mode 100644 tools/ocaml/duniverse/cmdliner/test/test_with_used_args.ml
 create mode 100644 tools/ocaml/duniverse/cppo/.gitignore
 create mode 100644 tools/ocaml/duniverse/cppo/.ocp-indent
 create mode 100644 tools/ocaml/duniverse/cppo/.travis.yml
 create mode 100644 tools/ocaml/duniverse/cppo/CODEOWNERS
 create mode 100644 tools/ocaml/duniverse/cppo/Changes
 create mode 100644 tools/ocaml/duniverse/cppo/INSTALL.md
 create mode 100644 tools/ocaml/duniverse/cppo/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/cppo/Makefile
 create mode 100644 tools/ocaml/duniverse/cppo/README.md
 create mode 100644 tools/ocaml/duniverse/cppo/VERSION
 create mode 100644 tools/ocaml/duniverse/cppo/appveyor.yml
 create mode 100644 tools/ocaml/duniverse/cppo/cppo.opam
 create mode 100644 tools/ocaml/duniverse/cppo/cppo_ocamlbuild.opam
 create mode 100644 tools/ocaml/duniverse/cppo/dune-project
 create mode 100644 tools/ocaml/duniverse/cppo/examples/Makefile
 create mode 100644 tools/ocaml/duniverse/cppo/examples/debug.ml
 create mode 100644 tools/ocaml/duniverse/cppo/examples/dune
 create mode 100644 tools/ocaml/duniverse/cppo/examples/french.ml
 create mode 100644 tools/ocaml/duniverse/cppo/examples/lexer.mll
 create mode 100644 tools/ocaml/duniverse/cppo/ocamlbuild_plugin/_tags
 create mode 100644 tools/ocaml/duniverse/cppo/ocamlbuild_plugin/dune
 create mode 100644 tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.ml
 create mode 100644 tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/compat.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_command.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_command.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_eval.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_eval.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_lexer.mll
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_main.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_parser.mly
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_types.ml
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_types.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/cppo_version.mli
 create mode 100644 tools/ocaml/duniverse/cppo/src/dune
 create mode 100644 tools/ocaml/duniverse/cppo/test/capital.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/capital.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/comments.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/comments.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/cond.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/cond.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/dune
 create mode 100644 tools/ocaml/duniverse/cppo/test/ext.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/ext.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/incl.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/incl2.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/loc.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/loc.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/paren_arg.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/paren_arg.ref
 create mode 100755 tools/ocaml/duniverse/cppo/test/source.sh
 create mode 100644 tools/ocaml/duniverse/cppo/test/test.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/tuple.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/tuple.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/unmatched.cppo
 create mode 100644 tools/ocaml/duniverse/cppo/test/unmatched.ref
 create mode 100644 tools/ocaml/duniverse/cppo/test/version.cppo
 create mode 100644 tools/ocaml/duniverse/crowbar/.gitignore
 create mode 100644 tools/ocaml/duniverse/crowbar/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/crowbar/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/crowbar/README.md
 create mode 100644 tools/ocaml/duniverse/crowbar/crowbar.opam
 create mode 100644 tools/ocaml/duniverse/crowbar/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/dune-project
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/.gitignore
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/calendar/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/calendar/test_calendar.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/fpath/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/fpath/test_fpath.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/input/testcase
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/map/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/map/test_map.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/pprint/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/pprint/test_pprint.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/serializer/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/serializer/serializer.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/serializer/test_serializer.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/uunf/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/uunf/test_uunf.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/xmldiff/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/examples/xmldiff/test_xmldiff.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/src/crowbar.ml
 create mode 100644 tools/ocaml/duniverse/crowbar/src/crowbar.mli
 create mode 100644 tools/ocaml/duniverse/crowbar/src/dune
 create mode 100644 tools/ocaml/duniverse/crowbar/src/todo
 create mode 100644 tools/ocaml/duniverse/csexp/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/csexp/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/csexp/Makefile
 create mode 100644 tools/ocaml/duniverse/csexp/README.md
 create mode 100644 tools/ocaml/duniverse/csexp/bench/csexp_bench.ml
 create mode 100644 tools/ocaml/duniverse/csexp/bench/dune
 create mode 100644 tools/ocaml/duniverse/csexp/bench/main.ml
 create mode 100755 tools/ocaml/duniverse/csexp/bench/runner.sh
 create mode 100644 tools/ocaml/duniverse/csexp/csexp.opam
 create mode 100644 tools/ocaml/duniverse/csexp/csexp.opam.template
 create mode 100644 tools/ocaml/duniverse/csexp/dune-project
 create mode 100644 tools/ocaml/duniverse/csexp/dune-workspace.dev
 create mode 100644 tools/ocaml/duniverse/csexp/src/csexp.ml
 create mode 100644 tools/ocaml/duniverse/csexp/src/csexp.mli
 create mode 100644 tools/ocaml/duniverse/csexp/src/dune
 create mode 100644 tools/ocaml/duniverse/csexp/test/dune
 create mode 100644 tools/ocaml/duniverse/csexp/test/test.ml
 create mode 100644 tools/ocaml/duniverse/dune
 create mode 100644 tools/ocaml/duniverse/fmt/.gitignore
 create mode 100644 tools/ocaml/duniverse/fmt/.ocp-indent
 create mode 100644 tools/ocaml/duniverse/fmt/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/fmt/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/fmt/README.md
 create mode 100644 tools/ocaml/duniverse/fmt/_tags
 create mode 100644 tools/ocaml/duniverse/fmt/doc/api.odocl
 create mode 100644 tools/ocaml/duniverse/fmt/doc/index.mld
 create mode 100644 tools/ocaml/duniverse/fmt/dune-project
 create mode 100644 tools/ocaml/duniverse/fmt/fmt.opam
 create mode 100644 tools/ocaml/duniverse/fmt/pkg/META
 create mode 100755 tools/ocaml/duniverse/fmt/pkg/pkg.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/dune
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt.mli
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt.mllib
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_cli.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_cli.mli
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_cli.mllib
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_top.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_top.mllib
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_tty.ml
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_tty.mli
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_tty.mllib
 create mode 100644 tools/ocaml/duniverse/fmt/src/fmt_tty_top_init.ml
 create mode 100644 tools/ocaml/duniverse/fmt/test/test.ml
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/.gitignore
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/LICENSE.md
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/README.md
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam.template
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.available.ml
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.mli
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.stub.ml
 create mode 100755 tools/ocaml/duniverse/ocaml-afl-persistent/detect.sh
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/dune
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/dune-project
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/test.ml
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/test/harness.ml
 create mode 100644 tools/ocaml/duniverse/ocaml-afl-persistent/test/test.ml
 create mode 100755 tools/ocaml/duniverse/ocaml-afl-persistent/test/test.sh
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/.gitignore
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/.travis.yml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/COPYING.txt
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/Makefile
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/README.md
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/dune-project
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/ocplib-endian.opam
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/be_ocaml_401.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/common.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/common_401.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/common_float.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/dune
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.mli
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.mli
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.mli
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/le_ocaml_401.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/src/ne_ocaml_401.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/bench.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/dune
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/test.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/test_bigstring.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/test_bytes.cppo.ml
 create mode 100644 tools/ocaml/duniverse/ocplib-endian/tests/test_string.cppo.ml
 create mode 100755 tools/ocaml/duniverse/result/CHANGES.md
 create mode 100755 tools/ocaml/duniverse/result/LICENSE.md
 create mode 100755 tools/ocaml/duniverse/result/Makefile
 create mode 100755 tools/ocaml/duniverse/result/README.md
 create mode 100755 tools/ocaml/duniverse/result/dune
 create mode 100755 tools/ocaml/duniverse/result/dune-project
 create mode 100755 tools/ocaml/duniverse/result/result-as-alias-4.08.ml
 create mode 100755 tools/ocaml/duniverse/result/result-as-alias.ml
 create mode 100755 tools/ocaml/duniverse/result/result-as-newtype.ml
 create mode 100755 tools/ocaml/duniverse/result/result.opam
 create mode 100755 tools/ocaml/duniverse/result/which_result.ml
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/CHANGES.md
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/LICENSE
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/README.md
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/dune-project
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/dune-workspace.dev
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/src/dune
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/stdlib-shims.opam
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/test/dune
 create mode 100644 tools/ocaml/duniverse/stdlib-shims/test/test.ml
 create mode 100644 tools/ocaml/xen.opam.locked

diff --git a/tools/ocaml/duniverse/cmdliner/.gitignore b/tools/ocaml/duniverse/cmdliner/.gitignore
new file mode 100644
index 0000000000..2b7712335b
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/.gitignore
@@ -0,0 +1,10 @@
+_build
+_b0
+tmp
+*~
+\.\#*
+\#*#
+*.byte
+*.native
+cmdliner.install
+src/.merlin
diff --git a/tools/ocaml/duniverse/cmdliner/.ocp-indent b/tools/ocaml/duniverse/cmdliner/.ocp-indent
new file mode 100644
index 0000000000..ad2fbcbfa5
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/.ocp-indent
@@ -0,0 +1 @@
+strict_with=always,match_clause=4,strict_else=never
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/cmdliner/B0.ml b/tools/ocaml/duniverse/cmdliner/B0.ml
new file mode 100644
index 0000000000..ddeb802719
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/B0.ml
@@ -0,0 +1,9 @@
+open B0
+
+let cmdliner = "cmdliner"
+let doc = "Declarative definition of command line interfaces for OCaml"
+
+let pkg = Pkg.create cmdliner ~doc
+let lib =
+  let srcs = (`Src_dirs [Fpath.v "src"]) in
+  B0_ocaml.Unit.lib ~pkg cmdliner srcs ~doc
diff --git a/tools/ocaml/duniverse/cmdliner/CHANGES.md b/tools/ocaml/duniverse/cmdliner/CHANGES.md
new file mode 100644
index 0000000000..ec68c51a08
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/CHANGES.md
@@ -0,0 +1,255 @@
+v1.0.4 2019-06-14 Zagreb
+------------------------
+
+- Change the way `Error (_, e)` term evaluation results 
+  are formatted. Instead of treating `e` as text, treat
+  it as formatted lines.
+- Fix 4.08 `Pervasives` deprecation.
+- Fix 4.03 String deprecations.
+- Fix bootstrap build in absence of dynlink.
+- Make the `Makefile` bootstrap build reproducible.
+  Thanks to Thomas Leonard for the patch.
+
+v1.0.3 2018-11-26 Zagreb
+------------------------
+
+- Add `Term.with_used_args`. Thanks to Jeremie Dimino for
+  the patch.
+- Use `Makefile` bootstrap build in opam file.
+- Drop ocamlbuild requirement for `Makefile` bootstrap build.
+- Drop support for ocaml < 4.03.0
+- Dune build support.
+
+v1.0.2 2017-08-07 Zagreb
+------------------------
+
+- Don't remove the `Makefile` from the distribution.
+
+v1.0.1 2017-08-03 Zagreb
+------------------------
+
+- Add a `Makefile` to build and install cmdliner without `topkg` and
+  opam `.install` files. Helps bootstrapping opam in OS package
+  managers. Thanks to Hendrik Tews for the patches.
+
+v1.0.0 2017-03-02 La Forclaz (VS)
+---------------------------------
+
+**IMPORTANT** The `Arg.converter` type is deprecated in favor of the
+`Arg.conv` type. For this release both types are equal but the next
+major release will drop the former and make the latter abstract. All
+users are kindly requested to migrate to use the new type and **only**
+via the new `Arg.[p]conv` and `Arg.conv_{parser,printer}` functions.
+
+- Allow terms to be used more than once in terms without tripping out
+  documentation generation (#77). Thanks to François Bobot and Gabriel
+  Radanne.
+- Disallow defining the same option (resp. command) name twice via two
+  different arguments (resp. terms). Raises `Invalid_argument`; this used
+  to be undefined behaviour (in practice, an arbitrary one would be
+  ignored).
+- Improve converter API (see important message above).
+- Add `Term.exit[_status]` and `Term.exit_status_of[_status]_result`.
+  These improve composition with `Pervasives.exit`.
+- Add `Term.term_result` and `Term.cli_parse_result`. These improve
+  composition with terms evaluating to `result` types.
+- Add `Arg.parser_of_kind_of_string`.
+- Change semantics of `Arg.pos_left` (see #76 for details).
+- Deprecate `Term.man_format` in favor of `Arg.man_format`.
+- Reserve the `--cmdliner` option for library use. This is unused for now
+  but will be in the future.
+- Relicense from BSD3 to ISC.
+- Safe-string support.
+- Build depend on topkg.
+
+### End-user visible changes
+
+The following changes affect the end-user behaviour of all binaries using
+cmdliner.
+
+- Required positional arguments. All missing required positional
+  arguments are now reported to the end-user, in the correct
+  order (#39). Thanks to Dmitrii Kashin for the report.
+- Optional arguments. All unknown and ambiguous optional arguments
+  are now reported to the end-user (instead of only the first
+  one).
+- Change default behaviour of `--help[=FMT]` option. `FMT` no longer
+  defaults to `pager` if unspecified.  It defaults to the new value
+  `auto` which prints the help as `pager` or `plain` whenever the
+  `TERM` environment variable is `dumb` or undefined (#43). At the API
+  level this changes the signature of the type `Term.ret` and values
+  `Term.ret`, `Term.man_format` (deprecated) and `Manpage.print` to add the
+  new `` `Auto`` case to manual formats. These are now represented by the
+  `Manpage.format` type rather than inlined polyvars.
+
+### Doc specification improvements and fixes
+
+- Add `?envs` optional argument to `Term.info`. Documents environment
+  variables that influence a term's evaluation and automatically
+  integrates them in the manual.
+- Add `?exits` optional argument to `Term.info`. Documents exit statuses of
+  the program. Use `Term.default_exits` if you are using the new `Term.exit`
+  functions.
+- Add `?man_xrefs` optional argument to `Term.info`. Documents
+  references to other manpages. Automatically formats a `SEE ALSO` section
+  in the manual.
+- Add `Manpage.escape` to escape a string from the documentation markup
+  language.
+- Add `Manpage.s_*` constants for standard man page section names.
+- Add a `` `Blocks`` case to `Manpage.blocks` to allow block splicing
+  (#69).  This avoids having to concatenate block lists at the
+  toplevel of your program.
+- `Arg.env_var`, change default environment variable section to the
+   standard `ENVIRONMENT` manual section rather than `ENVIRONMENT
+   VARIABLES`.  If you previously manually positioned that section in
+   your man page you will have to change the name. See also next point.
+- Fix automatic placement of default environment variable section (#44)
+  whenever unspecified in the man page.
+- Better automatic insertions of man page sections (#73). See the API
+  docs about manual specification. As a side effect the `NAME` section
+  can now also be overridden manually.
+- Fix repeated environment variable printing for flags (#64). Thanks to
+  Thomas Gazagnaire for the report.
+- Fix rendering of env vars in man pages, bold is standard (#71).
+- Fix plain help formatting for commands with empty
+  description. Thanks to Maciek Starzyk for the patch.
+- Fix (implement really) groff man page escaping (#48).
+- Request `an` macros directly in the man page via `.mso`; this
+  makes man pages self-describing and avoids having to call `groff` with
+  the `-man` option.
+- Document required optional arguments as such (#82). Thanks to Isaac Hodes
+  for the report.
+
+### Doc language sanitization
+
+This release tries to bring sanity to the doc language. This may break
+the rendering of some of your man pages. Thanks to Gabriel Scherer,
+Ivan Gotovchits and Nicolás Ojeda Bär for the feedback.
+
+- It is only allowed to use the variables `$(var)` that are mentioned in
+  the docs (`$(docv)`, `$(opt)`, etc.) and the markup directives
+  `$({i,b},text)`. Any other unknown `$(var)` will generate errors
+  on standard error during documentation generation.
+- Markup directives `$({i,b},text)` treat `text` as is, modulo escapes;
+  see next point.
+- Characters `$`, `(`, `)` and `\` can respectively be escaped by `\$`,
+  `\(`, `\)` and `\\`. Escaping `$` and `\` is mandatory everywhere.
+  Escaping `)` is mandatory only in markup directives. Escaping `(`
+  is only here for your symmetric pleasure. Any other sequence of
+  characters starting with a `\` is an illegal sequence.
+- Variables `$(mname)` and `$(tname)` are now marked up with bold when
+  substituted. If you used to write `$(b,$(tname))` this will generate
+  an error on standard error, since `$` is not escaped in the markup
+  directive. Simply replace these by `$(tname)`.
+
+v0.9.8 2015-10-11 Cambridge (UK)
+--------------------------------
+
+- Bring back support for OCaml 3.12.0
+- Support for pre-formatted paragraphs in man pages. This adds a
+  ```Pre`` case to the `Manpage.block` type which can break existing
+  programs. Thanks to Guillaume Bury for the suggestion and help.
+- Support for environment variables. If an argument is absent from the
+  command line, its value can be read and parsed from an environment
+  variable. This adds an `env` optional argument to the `Arg.info`
+  function which can break existing programs.
+- Support for new variables in option documentation strings. `$(opt)`
+  can be used to refer to the name of the option being documented and
+  `$(env)` for the name of the option's environment variable.
+- Deprecate `Term.pure` in favor of `Term.const`.
+- Man page generation. Keep undefined variables untouched. Previously
+  a `$(undef)` would be turned into `undef`.
+- Turn a few mysterious and spurious `Not_found` exceptions into
+  `Invalid_arg`. These can be triggered by client programming errors
+  (e.g. an unclosed variable in a documentation string).
+- Positional arguments. Invoke the printer on the default (absent)
+  value only if needed. See Optional arguments in the release notes of
+  v0.9.6.
+
+v0.9.7 2015-02-06 La Forclaz (VS)
+---------------------------------
+
+- Build system, don't depend on `ocamlfind`. The package no longer
+  depends on ocamlfind. Thanks to Louis Gesbert for the patch. 
+
+v0.9.6 2014-11-18 La Forclaz (VS)
+---------------------------------
+
+- Optional arguments. Invoke the printer on the default (absent) value
+  only if needed, i.e. if help is shown. Strictly speaking this is an
+  interface-breaking change: for example, if the absent value was lazy
+  it would be forced on each run. This is no longer the case.
+- Parsed command line syntax: allow short flags to be specified
+  together under a single dash, possibly ending with a short option.
+  This allows specifying e.g. `tar -xvzf archive.tgz` or `tar
+  -xvzfarchive.tgz`. Previously this resulted in an error, all the
+  short flags had to be specified separately. Backward compatible in
+  the sense that only more command lines are parsed. Thanks to Hugo
+  Heuzard for the patch.
+- End user error message improvements using heuristics and edit
+  distance search in the optional argument and sub command name
+  spaces. Thanks to Hugo Heuzard for the patch.
+- Adds `Arg.doc_{quote,alts,alts_enum}`, documentation string
+  helpers.
+- Adds the `Term.eval_peek_opts` function for advanced usage scenarios.
+- The function `Arg.enum` now raises `Invalid_argument` if the
+  enumeration is empty.
+- Improves help paging behaviour on Windows. Thanks to Romain Bardou
+  for the help.
+
+
+v0.9.5 2014-07-04 Cambridge (UK)
+--------------------------------
+
+- Add variance annotation to Term.t. Thanks to Peter Zotov for suggesting.
+- Fix section name formatting in plain text output. Thanks to Mikhail
+  Sobolev for reporting.
+
+
+v0.9.4 2014-02-09 La Forclaz (VS)
+---------------------------------
+
+- Remove temporary files created for paged help. Thanks to Kaustuv Chaudhuri
+  for the suggestion.
+- Avoid linking against `Oo` (was used to get program uuid).
+- Check the environment for `$MANPAGER` as well. Thanks to Raphaël Proust
+  for the patch.
+- OPAM friendly workflow and drop OASIS support.
+
+
+v0.9.3 2013-01-04 La Forclaz (VS)
+---------------------------------
+
+- Allow user specified `SYNOPSIS` sections.
+
+
+v0.9.2 2012-08-05 Lausanne
+--------------------------
+
+- OASIS 0.3.0 support.
+
+
+v0.9.1 2012-03-17 La Forclaz (VS)
+---------------------------------
+
+- OASIS support.
+- Fixed broken `Arg.pos_right`.
+- Variables `$(tname)` and `$(mname)` can be used in a term's man
+  page to respectively refer to the term's name and the main term
+  name.
+- Support for custom variable substitution in `Manpage.print`.
+- Adds `Term.man_format`, to facilitate the definition of help commands.
+- Rewrote the examples with a better and consistent style.
+
+Incompatible API changes:
+
+- The signature of `Term.eval` and `Term.eval_choice` changed to make
+  it more regular: the given term and its info must be tupled together
+  even for the main term and the tuple order was swapped to make it
+  consistent with the one used for arguments.
+
+
+v0.9.0 2011-05-27 Lausanne
+--------------------------
+
+- First release.
diff --git a/tools/ocaml/duniverse/cmdliner/LICENSE.md b/tools/ocaml/duniverse/cmdliner/LICENSE.md
new file mode 100644
index 0000000000..90fca24d71
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/LICENSE.md
@@ -0,0 +1,13 @@
+Copyright (c) 2011 Daniel C. Bünzli
+
+Permission to use, copy, modify, and/or distribute this software for any
+purpose with or without fee is hereby granted, provided that the above
+copyright notice and this permission notice appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
diff --git a/tools/ocaml/duniverse/cmdliner/Makefile b/tools/ocaml/duniverse/cmdliner/Makefile
new file mode 100644
index 0000000000..1d2ffd40b7
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/Makefile
@@ -0,0 +1,77 @@
+# To be used by system package managers to bootstrap opam. topkg
+# cannot be used as it needs opam-installer which is provided by opam
+# itself.
+
+# Typical usage:
+#
+# make all
+# make install PREFIX=/usr/local
+# make install-doc PREFIX=/usr/local
+
+# Adjust the following variables on the CLI invocation to configure the build.
+
+-include $(shell ocamlc -where)/Makefile.config
+
+PREFIX=/usr
+LIBDIR=$(DESTDIR)$(PREFIX)/lib/ocaml/cmdliner
+DOCDIR=$(DESTDIR)$(PREFIX)/share/doc/cmdliner
+NATIVE=$(shell ocamlopt -version > /dev/null 2>&1 && echo true)
+# EXT_LIB     by default value of OCaml's Makefile.config
+# NATDYNLINK  by default value of OCaml's Makefile.config
+
+INSTALL=install
+B=_build
+BASE=$(B)/cmdliner
+
+ifeq ($(NATIVE),true)
+	BUILD-TARGETS=build-byte build-native
+	INSTALL-TARGETS=install-common install-byte install-native
+	ifeq ($(NATDYNLINK),true)
+	  BUILD-TARGETS += build-native-dynlink
+	  INSTALL-TARGETS += install-native-dynlink
+	endif
+else
+	BUILD-TARGETS=build-byte
+	INSTALL-TARGETS=install-common install-byte
+endif
+
+all: $(BUILD-TARGETS)
+
+install: $(INSTALL-TARGETS)
+
+install-doc:
+	$(INSTALL) -d $(DOCDIR)
+	$(INSTALL) CHANGES.md LICENSE.md README.md $(DOCDIR)
+
+clean:
+	ocaml build.ml clean
+
+build-byte:
+	ocaml build.ml cma
+
+build-native:
+	ocaml build.ml cmxa
+
+build-native-dynlink:
+	ocaml build.ml cmxs
+
+create-libdir:
+	$(INSTALL) -d $(LIBDIR)
+
+install-common: create-libdir
+	$(INSTALL) pkg/META $(BASE).mli $(BASE).cmi $(BASE).cmti $(LIBDIR)
+	$(INSTALL) cmdliner.opam $(LIBDIR)/opam
+
+install-byte: create-libdir
+	$(INSTALL) $(BASE).cma $(LIBDIR)
+
+install-native: create-libdir
+	$(INSTALL) $(BASE).cmxa $(BASE)$(EXT_LIB) $(wildcard $(B)/cmdliner*.cmx) \
+  $(LIBDIR)
+
+install-native-dynlink: create-libdir
+	$(INSTALL) $(BASE).cmxs $(LIBDIR)
+
+.PHONY: all install install-doc clean build-byte build-native \
+	build-native-dynlink create-libdir install-common install-byte \
+  install-native install-native-dynlink
diff --git a/tools/ocaml/duniverse/cmdliner/README.md b/tools/ocaml/duniverse/cmdliner/README.md
new file mode 100644
index 0000000000..408e80f76c
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/README.md
@@ -0,0 +1,51 @@
+Cmdliner — Declarative definition of command line interfaces for OCaml
+-------------------------------------------------------------------------------
+%%VERSION%%
+
+Cmdliner allows the declarative definition of command line interfaces
+for OCaml.
+
+It provides a simple and compositional mechanism to convert command
+line arguments to OCaml values and pass them to your functions. The
+module automatically handles syntax errors, help messages and UNIX man
+page generation. It supports programs with single or multiple commands
+and respects most of the [POSIX][1] and [GNU][2] conventions.
+
+Cmdliner has no dependencies and is distributed under the ISC license.
+
+[1]: http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap12.html
+[2]: http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html
+
+Home page: http://erratique.ch/software/cmdliner  
+Contact: Daniel Bünzli `<daniel.buenzl i@erratique.ch>`
+
+
+## Installation
+
+Cmdliner can be installed with `opam`:
+
+    opam install cmdliner
+
+If you don't use `opam` consult the [`opam`](opam) file for build
+instructions.
+
+
+## Documentation
+
+The documentation and API reference is automatically generated from
+the source interfaces. It can be consulted [online][doc] or via
+`odig doc cmdliner`.
+
+[doc]: http://erratique.ch/software/cmdliner/doc/Cmdliner
+
+
+## Sample programs
+
+If you installed Cmdliner with `opam`, sample programs are located in
+the directory `opam config var cmdliner:doc`. These programs define
+the command line of some classic programs.
+
+Sample programs are also available in the `test` directory of the
+distribution. They can be built and run with:
+
+    topkg build --tests true && topkg test
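As an illustrative aside (not part of the vendored files), the smallest of the distribution's sample programs is essentially the classic one-liner from the Cmdliner documentation. A hedged sketch, assuming the cmdliner library is installed and linked:

```ocaml
(* Minimal cmdliner program, in the style of the distribution's
   test/revolt.ml sample. Build e.g. with:
   ocamlfind ocamlopt -package cmdliner -linkpkg revolt.ml -o revolt *)
let revolt () = print_endline "Revolt!"

(* Lift the unit-returning function into a term. *)
let revolt_t = Cmdliner.Term.(const revolt $ const ())

(* Evaluate the term and convert its result into an exit status. *)
let () = Cmdliner.Term.(exit @@ eval (revolt_t, info "revolt"))
```

Running the resulting binary prints `Revolt!`; invoking it with `--help` shows the man page Cmdliner generates from the term information.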
diff --git a/tools/ocaml/duniverse/cmdliner/_tags b/tools/ocaml/duniverse/cmdliner/_tags
new file mode 100644
index 0000000000..71bfd61d91
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/_tags
@@ -0,0 +1,3 @@
+true : bin_annot, safe_string
+<src> : include
+<test> : include
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/cmdliner/build.ml b/tools/ocaml/duniverse/cmdliner/build.ml
new file mode 100755
index 0000000000..3228af3205
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/build.ml
@@ -0,0 +1,155 @@
+#!/usr/bin/env ocaml
+
+(* Usage: ocaml build.ml [cma|cmxa|cmxs|clean] *)
+
+let root_dir = Sys.getcwd ()
+let build_dir = "_build"
+let src_dir = "src"
+
+let base_ocaml_opts =
+  [ "-g"; "-bin-annot";
+    "-safe-string"; (* Remove once we require >= 4.06 *) ]
+
+(* Logging *)
+
+let strf = Printf.sprintf
+let err fmt = Printf.kfprintf (fun oc -> flush oc; exit 1) stderr fmt
+let log fmt = Printf.kfprintf (fun oc -> flush oc) stdout fmt
+
+(* The running joke *)
+
+let rev_cut ~sep s = match String.rindex s sep with
+| exception Not_found -> None
+| i -> String.(Some (sub s 0 i, sub s (i + 1) (length s - (i + 1))))
+
+let cuts ~sep s =
+  let rec loop acc = function
+  | "" -> acc
+  | s ->
+      match rev_cut ~sep s with
+      | None -> s :: acc
+      | Some (l, r) -> loop (r :: acc) l
+  in
+  loop [] s
+
+(* Read, write and collect files *)
+
+let fpath ~dir f = String.concat "" [dir; "/"; f]
+
+let string_of_file f =
+  let ic = open_in_bin f in
+  let len = in_channel_length ic in
+  let buf = Bytes.create len in
+  really_input ic buf 0 len;
+  close_in ic;
+  Bytes.unsafe_to_string buf
+
+let string_to_file f s =
+  let oc = open_out_bin f in
+  output_string oc s;
+  close_out oc
+
+let cp src dst = string_to_file dst (string_of_file src)
+
+let ml_srcs dir =
+  let add_file dir acc f = match rev_cut ~sep:'.' f with
+  | Some (m, e) when e = "ml" || e = "mli" -> f :: acc
+  | Some _ | None -> acc
+  in
+  Array.fold_left (add_file dir) [] (Sys.readdir dir)
+
+(* Finding and running commands *)
+
+let find_cmd cmds =
+  let test, null = match Sys.win32 with
+  | true -> "where", " NUL"
+  | false -> "type", "/dev/null"
+  in
+  let cmd c = Sys.command (strf "%s %s 1>%s 2>%s" test c null null) = 0 in
+  try Some (List.find cmd cmds) with Not_found -> None
+
+let err_cmd exit cmd = err "exited with %d: %s\n" exit cmd
+let quote_cmd = match Sys.win32 with
+| false -> fun cmd -> cmd
+| true -> fun cmd -> strf "\"%s\"" cmd
+
+let run_cmd args =
+  let cmd = String.concat " " (List.map Filename.quote args) in
+(*  log "[EXEC] %s\n" cmd; *)
+  let exit = Sys.command (quote_cmd cmd) in
+  if exit = 0 then () else err_cmd exit cmd
+
+let read_cmd args =
+  let stdout = Filename.temp_file (Filename.basename Sys.argv.(0)) "b00t" in
+  at_exit (fun () -> try ignore (Sys.remove stdout) with _ -> ());
+  let cmd = String.concat " " (List.map Filename.quote args) in
+  let cmd = quote_cmd @@ strf "%s 1>%s" cmd (Filename.quote stdout) in
+  let exit = Sys.command cmd in
+  if exit = 0 then string_of_file stdout else err_cmd exit cmd
+
+(* Create and delete directories *)
+
+let mkdir dir =
+  try match Sys.file_exists dir with
+  | true -> ()
+  | false -> run_cmd ["mkdir"; dir]
+  with
+  | Sys_error e -> err "%s: %s" dir e
+
+let rmdir dir =
+  try match Sys.file_exists dir with
+  | false -> ()
+  | true ->
+      let rm f = Sys.remove (fpath ~dir f) in
+      Array.iter rm (Sys.readdir dir);
+      run_cmd ["rmdir"; dir]
+  with
+  | Sys_error e -> err "%s: %s" dir e
+
+(* Lookup OCaml compilers and ocamldep *)
+
+let really_find_cmd alts = match find_cmd alts with
+| Some cmd -> cmd
+| None -> err "No %s found in PATH\n" (List.hd @@ List.rev alts)
+
+let ocamlc () = really_find_cmd ["ocamlc.opt"; "ocamlc"]
+let ocamlopt () = really_find_cmd ["ocamlopt.opt"; "ocamlopt"]
+let ocamldep () = really_find_cmd ["ocamldep.opt"; "ocamldep"]
+
+(* Build *)
+
+let sort_srcs srcs =
+  let srcs = List.sort String.compare srcs in
+  read_cmd (ocamldep () :: "-slash" :: "-sort" :: srcs)
+  |> String.trim |> cuts ~sep:' '
+
+let common srcs = base_ocaml_opts @ sort_srcs srcs
+
+let build_cma srcs =
+  run_cmd ([ocamlc ()] @ common srcs @ ["-a"; "-o"; "cmdliner.cma"])
+
+let build_cmxa srcs =
+  run_cmd ([ocamlopt ()] @ common srcs @ ["-a"; "-o"; "cmdliner.cmxa"])
+
+let build_cmxs srcs =
+  run_cmd ([ocamlopt ()] @ common srcs @ ["-shared"; "-o"; "cmdliner.cmxs"])
+
+let clean () = rmdir build_dir
+
+let in_build_dir f =
+  let srcs = ml_srcs src_dir in
+  let cp src = cp (fpath ~dir:src_dir src) (fpath ~dir:build_dir src) in
+  mkdir build_dir;
+  List.iter cp srcs;
+  Sys.chdir build_dir; f srcs; Sys.chdir root_dir
+
+let main () = match Array.to_list Sys.argv with
+| _ :: [ "cma" ] -> in_build_dir build_cma
+| _ :: [ "cmxa" ] -> in_build_dir build_cmxa
+| _ :: [ "cmxs" ] -> in_build_dir build_cmxs
+| _ :: [ "clean" ] -> clean ()
+| [] | [_] -> err "Missing argument: cma, cmxa, cmxs or clean\n"
+| cmd :: args ->
+    err "%s: Unknown argument(s): %s\n" cmd @@ String.concat " " args
+
+let () = main ()
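Stepping outside the patch for a moment: build.ml's `rev_cut`/`cuts` pair (the "running joke" above) is a right-to-left string splitter built only on the stdlib. A standalone copy with a quick sanity check:

```ocaml
(* Standalone copy of build.ml's splitting helpers: rev_cut splits on
   the rightmost occurrence of sep, cuts folds that into a full split. *)
let rev_cut ~sep s = match String.rindex s sep with
| exception Not_found -> None
| i -> String.(Some (sub s 0 i, sub s (i + 1) (length s - (i + 1))))

let cuts ~sep s =
  let rec loop acc = function
  | "" -> acc
  | s ->
      match rev_cut ~sep s with
      | None -> s :: acc
      | Some (l, r) -> loop (r :: acc) l
  in
  loop [] s

let () =
  assert (rev_cut ~sep:'.' "cmdliner.ml" = Some ("cmdliner", "ml"));
  assert (cuts ~sep:' ' "a b c" = ["a"; "b"; "c"]);
  print_endline "ok"
```

Because the scan runs from the right, `loop` accumulates the rightmost fragments first and the final list comes out in left-to-right order.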
diff --git a/tools/ocaml/duniverse/cmdliner/cmdliner.opam b/tools/ocaml/duniverse/cmdliner/cmdliner.opam
new file mode 100644
index 0000000000..cb958e70d2
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/cmdliner.opam
@@ -0,0 +1,32 @@
+opam-version: "2.0"
+maintainer: "Daniel Bünzli <daniel.buenzl i@erratique.ch>"
+authors: ["Daniel Bünzli <daniel.buenzl i@erratique.ch>"]
+homepage: "http://erratique.ch/software/cmdliner"
+doc: "http://erratique.ch/software/cmdliner/doc/Cmdliner"
+dev-repo: "git+https://github.com/dune-universe/cmdliner.git"
+bug-reports: "https://github.com/dbuenzli/cmdliner/issues"
+tags: [ "cli" "system" "declarative" "org:erratique" ]
+license: "ISC"
+depends: [
+  "dune" "ocaml" {>= "4.03.0"} ]
+synopsis: """Declarative definition of command line interfaces for OCaml"""
+description: """\
+
+Cmdliner allows the declarative definition of command line interfaces
+for OCaml.
+
+It provides a simple and compositional mechanism to convert command
+line arguments to OCaml values and pass them to your functions. The
+module automatically handles syntax errors, help messages and UNIX man
+page generation. It supports programs with single or multiple commands
+and respects most of the [POSIX][1] and [GNU][2] conventions.
+
+Cmdliner has no dependencies and is distributed under the ISC license.
+
+[1]: http://pubs.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap12.html
+[2]: http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html
+"""
+build: [[ "dune" "build" "-p" name ]]
+url {
+  src: "git://github.com/dune-universe/cmdliner.git#duniverse-v1.0.4"
+}
diff --git a/tools/ocaml/duniverse/cmdliner/doc/api.odocl b/tools/ocaml/duniverse/cmdliner/doc/api.odocl
new file mode 100644
index 0000000000..58711c53d9
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/doc/api.odocl
@@ -0,0 +1 @@
+Cmdliner
diff --git a/tools/ocaml/duniverse/cmdliner/dune-project b/tools/ocaml/duniverse/cmdliner/dune-project
new file mode 100644
index 0000000000..f4beddd4f7
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/dune-project
@@ -0,0 +1,2 @@
+(lang dune 1.4)
+(name cmdliner)
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/cmdliner/pkg/META b/tools/ocaml/duniverse/cmdliner/pkg/META
new file mode 100644
index 0000000000..81671c5328
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/pkg/META
@@ -0,0 +1,7 @@
+version = "%%VERSION%%"
+description = "Declarative definition of command line interfaces"
+requires = ""
+archive(byte) = "cmdliner.cma"
+archive(native) = "cmdliner.cmxa"
+plugin(byte) = "cmdliner.cma"
+plugin(native) = "cmdliner.cmxs"
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/cmdliner/pkg/pkg.ml b/tools/ocaml/duniverse/cmdliner/pkg/pkg.ml
new file mode 100755
index 0000000000..7d3982ac9e
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/pkg/pkg.ml
@@ -0,0 +1,33 @@
+#!/usr/bin/env ocaml
+#use "topfind"
+#require "topkg"
+open Topkg
+
+let test t = Pkg.flatten [ Pkg.test ~run:false t; Pkg.doc (t ^ ".ml")]
+
+let distrib =
+  let exclude_paths () = Ok [".git";".gitignore";".gitattributes";"_build"] in
+  Pkg.distrib ~exclude_paths ()
+
+let opams =
+  [Pkg.opam_file "cmdliner.opam"]
+
+let () =
+  Pkg.describe ~distrib "cmdliner" ~opams @@ fun c ->
+  Ok [ Pkg.mllib ~api:["Cmdliner"] "src/cmdliner.mllib";
+       test "test/chorus";
+       test "test/cp_ex";
+       test "test/darcs_ex";
+       test "test/revolt";
+       test "test/rm_ex";
+       test "test/tail_ex";
+       Pkg.test ~run:false "test/test_man";
+       Pkg.test ~run:false "test/test_man_utf8";
+       Pkg.test ~run:false "test/test_pos";
+       Pkg.test ~run:false "test/test_pos_rev";
+       Pkg.test ~run:false "test/test_pos_all";
+       Pkg.test ~run:false "test/test_pos_left";
+       Pkg.test ~run:false "test/test_pos_req";
+       Pkg.test ~run:false "test/test_opt_req";
+       Pkg.test ~run:false "test/test_term_dups";
+       Pkg.test ~run:false "test/test_with_used_args"; ]
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner.ml
new file mode 100644
index 0000000000..40afd525cd
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner.ml
@@ -0,0 +1,309 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+module Manpage = Cmdliner_manpage
+module Arg = Cmdliner_arg
+module Term = struct
+  type ('a, 'b) stdlib_result = ('a, 'b) result
+
+  include Cmdliner_term
+
+  (* Deprecated *)
+
+  let man_format = Cmdliner_arg.man_format
+  let pure = const
+
+  (* Terms *)
+
+  let ( $ ) = app
+
+  type 'a ret = [ `Ok of 'a | term_escape ]
+
+  let ret (al, v) =
+    al, fun ei cl -> match v ei cl with
+    | Ok (`Ok v) -> Ok v
+    | Ok (`Error _ as err) -> Error err
+    | Ok (`Help _ as help) -> Error help
+    | Error _ as e -> e
+
+  let term_result ?(usage = false) (al, v) =
+    al, fun ei cl -> match v ei cl with
+    | Ok (Ok _ as ok) -> ok
+    | Ok (Error (`Msg e)) -> Error (`Error (usage, e))
+    | Error _ as e -> e
+
+  let cli_parse_result (al, v) =
+    al, fun ei cl -> match v ei cl with
+    | Ok (Ok _ as ok) -> ok
+    | Ok (Error (`Msg e)) -> Error (`Parse e)
+    | Error _ as e -> e
+
+  let main_name =
+    Cmdliner_info.Args.empty,
+    (fun ei _ -> Ok (Cmdliner_info.(term_name @@ eval_main ei)))
+
+  let choice_names =
+    let choice_name t = Cmdliner_info.term_name t in
+    Cmdliner_info.Args.empty,
+    (fun ei _ -> Ok (List.rev_map choice_name (Cmdliner_info.eval_choices ei)))
+
+  let with_used_args (al, v) : (_ * string list) t =
+    al, fun ei cl ->
+      match v ei cl with
+      | Ok x ->
+          let actual_args arg_info acc =
+            let args = Cmdliner_cline.actual_args cl arg_info in
+            List.rev_append args acc
+          in
+          let used = List.rev (Cmdliner_info.Args.fold actual_args al []) in
+          Ok (x, used)
+      | Error _ as e -> e
+
+  (* Term information *)
+
+  type exit_info = Cmdliner_info.exit
+  let exit_info = Cmdliner_info.exit
+
+  let exit_status_success = 0
+  let exit_status_cli_error = 124
+  let exit_status_internal_error = 125
+  let default_error_exits =
+    [ exit_info exit_status_cli_error ~doc:"on command line parsing errors.";
+      exit_info exit_status_internal_error
+        ~doc:"on unexpected internal errors (bugs)."; ]
+
+  let default_exits =
+    (exit_info exit_status_success ~doc:"on success.") :: default_error_exits
+
+  type env_info = Cmdliner_info.env
+  let env_info = Cmdliner_info.env
+
+  type info = Cmdliner_info.term
+  let info = Cmdliner_info.term ~args:Cmdliner_info.Args.empty
+  let name ti = Cmdliner_info.term_name ti
+
+  (* Evaluation *)
+
+  let err_help s = "Term error, help requested for unknown command " ^ s
+  let err_argv = "argv array must have at least one element"
+  let err_multi_cmd_def name (a, _) (a', _) =
+    Cmdliner_base.err_multi_def ~kind:"command" name Cmdliner_info.term_doc a a'
+
+  type 'a result =
+    [ `Ok of 'a | `Error of [`Parse | `Term | `Exn ] | `Version | `Help ]
+
+  let add_stdopts ei =
+    let docs = Cmdliner_info.(term_stdopts_docs @@ eval_term ei) in
+    let vargs, vers = match Cmdliner_info.(term_version @@ eval_main ei) with
+    | None -> Cmdliner_info.Args.empty, None
+    | Some _ ->
+        let args, _ as vers = Cmdliner_arg.stdopt_version ~docs in
+        args, Some vers
+    in
+    let help = Cmdliner_arg.stdopt_help ~docs in
+    let args = Cmdliner_info.Args.union vargs (fst help) in
+    let term = Cmdliner_info.(term_add_args (eval_term ei) args) in
+    help, vers, Cmdliner_info.eval_with_term ei term
+
+  type 'a eval_result =
+    ('a, [ term_escape
+         | `Exn of exn * Printexc.raw_backtrace
+         | `Parse of string
+         | `Std_help of Manpage.format | `Std_version ]) stdlib_result
+
+  let run ~catch ei cl f = try (f ei cl :> 'a eval_result) with
+  | exn when catch ->
+      let bt = Printexc.get_raw_backtrace () in
+      Error (`Exn (exn, bt))
+
+  let try_eval_stdopts ~catch ei cl help version =
+    match run ~catch ei cl (snd help) with
+    | Ok (Some fmt) -> Some (Error (`Std_help fmt))
+    | Error _ as err -> Some err
+    | Ok None ->
+        match version with
+        | None -> None
+        | Some version ->
+            match run ~catch ei cl (snd version) with
+            | Ok false -> None
+            | Ok true -> Some (Error (`Std_version))
+            | Error _ as err -> Some err
+
+  let term_eval ~catch ei f args =
+    let help, version, ei = add_stdopts ei in
+    let term_args = Cmdliner_info.(term_args @@ eval_term ei) in
+    let res = match Cmdliner_cline.create term_args args with
+    | Error (e, cl) ->
+        begin match try_eval_stdopts ~catch ei cl help version with
+        | Some e -> e
+        | None -> Error (`Error (true, e))
+        end
+    | Ok cl ->
+        match try_eval_stdopts ~catch ei cl help version with
+        | Some e -> e
+        | None -> run ~catch ei cl f
+    in
+    ei, res
+
+  let term_eval_peek_opts ei f args =
+    let help, version, ei = add_stdopts ei in
+    let term_args = Cmdliner_info.(term_args @@ eval_term ei) in
+    let v, ret = match Cmdliner_cline.create ~peek_opts:true term_args args with
+    | Error (e, cl) ->
+        begin match try_eval_stdopts ~catch:true ei cl help version with
+        | Some e -> None, e
+        | None -> None, Error (`Error (true, e))
+        end
+    | Ok cl ->
+        let ret = run ~catch:true ei cl f in
+        let v = match ret with Ok v -> Some v | Error _ -> None in
+        match try_eval_stdopts ~catch:true ei cl help version with
+        | Some e -> v, e
+        | None -> v, ret
+    in
+    let ret = match ret with
+    | Ok v -> `Ok v
+    | Error `Std_help _ -> `Help
+    | Error `Std_version -> `Version
+    | Error `Parse _ -> `Error `Parse
+    | Error `Help _ -> `Help
+    | Error `Exn _ -> `Error `Exn
+    | Error `Error _ -> `Error `Term
+    in
+    v, ret
+
+  let do_help help_ppf err_ppf ei fmt cmd =
+    let ei = match cmd with
+    | None -> Cmdliner_info.(eval_with_term ei @@ eval_main ei)
+    | Some cmd ->
+        try
+          let is_cmd t = Cmdliner_info.term_name t = cmd in
+          let cmd = List.find is_cmd (Cmdliner_info.eval_choices ei) in
+          Cmdliner_info.eval_with_term ei cmd
+        with Not_found -> invalid_arg (err_help cmd)
+    in
+    let _, _, ei = add_stdopts ei (* may not be the originally eval'd term *) in
+    Cmdliner_docgen.pp_man ~errs:err_ppf fmt help_ppf ei
+
+  let do_result help_ppf err_ppf ei = function
+  | Ok v -> `Ok v
+  | Error res ->
+      match res with
+      | `Std_help fmt -> Cmdliner_docgen.pp_man err_ppf fmt help_ppf ei; `Help
+      | `Std_version -> Cmdliner_msg.pp_version help_ppf ei; `Version
+      | `Parse err ->
+          Cmdliner_msg.pp_err_usage err_ppf ei ~err_lines:false ~err;
+          `Error `Parse
+      | `Help (fmt, cmd) -> do_help help_ppf err_ppf ei fmt cmd; `Help
+      | `Exn (e, bt) -> Cmdliner_msg.pp_backtrace err_ppf ei e bt; `Error `Exn
+      | `Error (usage, err) ->
+          (if usage
+           then Cmdliner_msg.pp_err_usage err_ppf ei ~err_lines:true ~err
+           else Cmdliner_msg.pp_err err_ppf ei ~err);
+          `Error `Term
+
+  (* API *)
+
+  let env_default v = try Some (Sys.getenv v) with Not_found -> None
+  let remove_exec argv =
+    try List.tl (Array.to_list argv) with Failure _ -> invalid_arg err_argv
+
+  let eval
+      ?help:(help_ppf = Format.std_formatter)
+      ?err:(err_ppf = Format.err_formatter)
+      ?(catch = true) ?(env = env_default) ?(argv = Sys.argv) ((al, f), ti) =
+    let term = Cmdliner_info.term_add_args ti al in
+    let ei = Cmdliner_info.eval ~term ~main:term ~choices:[] ~env in
+    let args = remove_exec argv in
+    let ei, res = term_eval ~catch ei f args in
+    do_result help_ppf err_ppf ei res
+
+  let choose_term main choices = function
+  | [] -> Ok (main, [])
+  | maybe :: args' as args ->
+      if String.length maybe > 1 && maybe.[0] = '-' then Ok (main, args) else
+      let index =
+        let add acc (choice, _ as c) =
+          let name = Cmdliner_info.term_name choice in
+          match Cmdliner_trie.add acc name c with
+          | `New t -> t
+          | `Replaced (c', _) -> invalid_arg (err_multi_cmd_def name c c')
+        in
+        List.fold_left add Cmdliner_trie.empty choices
+      in
+      match Cmdliner_trie.find index maybe with
+      | `Ok choice -> Ok (choice, args')
+      | `Not_found ->
+          let all = Cmdliner_trie.ambiguities index "" in
+          let hints = Cmdliner_suggest.value maybe all in
+          Error (Cmdliner_base.err_unknown ~kind:"command" maybe ~hints)
+      | `Ambiguous ->
+          let ambs = Cmdliner_trie.ambiguities index maybe in
+          let ambs = List.sort compare ambs in
+          Error (Cmdliner_base.err_ambiguous ~kind:"command" maybe ~ambs)
+
+  let eval_choice
+      ?help:(help_ppf = Format.std_formatter)
+      ?err:(err_ppf = Format.err_formatter)
+      ?(catch = true) ?(env = env_default) ?(argv = Sys.argv)
+      main choices =
+    let to_term_f ((al, f), ti) = Cmdliner_info.term_add_args ti al, f in
+    let choices_f = List.rev_map to_term_f choices in
+    let main_f = to_term_f main in
+    let choices = List.rev_map fst choices_f in
+    let main = fst main_f in
+    match choose_term main_f choices_f (remove_exec argv) with
+    | Error err ->
+        let ei = Cmdliner_info.eval ~term:main ~main ~choices ~env in
+        Cmdliner_msg.pp_err_usage err_ppf ei ~err_lines:false ~err;
+        `Error `Parse
+    | Ok ((chosen, f), args) ->
+        let ei = Cmdliner_info.eval ~term:chosen ~main ~choices ~env in
+        let ei, res = term_eval ~catch ei f args in
+        do_result help_ppf err_ppf ei res
+
+  let eval_peek_opts
+      ?(version_opt = false) ?(env = env_default) ?(argv = Sys.argv)
+      ((args, f) : 'a t) =
+    let version = if version_opt then Some "dummy" else None in
+    let term = Cmdliner_info.term ~args ?version "dummy" in
+    let ei = Cmdliner_info.eval ~term ~main:term ~choices:[] ~env  in
+    (term_eval_peek_opts ei f (remove_exec argv) :> 'a option * 'a result)
+
+  (* Exits *)
+
+  let exit_status_of_result ?(term_err = 1) = function
+  | `Ok _ | `Help | `Version -> exit_status_success
+  | `Error `Term -> term_err
+  | `Error `Exn -> exit_status_internal_error
+  | `Error `Parse -> exit_status_cli_error
+
+  let exit_status_of_status_result ?term_err = function
+  | `Ok n -> n
+  | r -> exit_status_of_result ?term_err r
+
+  let stdlib_exit = exit
+  let exit ?term_err r = stdlib_exit (exit_status_of_result ?term_err r)
+  let exit_status ?term_err r =
+    stdlib_exit (exit_status_of_status_result ?term_err r)
+
+end
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner.mli
new file mode 100644
index 0000000000..a993e83c4e
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner.mli
@@ -0,0 +1,1624 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** Declarative definition of command line interfaces.
+
+    [Cmdliner] provides a simple and compositional mechanism
+    to convert command line arguments to OCaml values and pass them to
+    your functions. The module automatically handles syntax errors,
+    help messages and UNIX man page generation. It supports programs
+    with single or multiple commands
+    (like [darcs] or [git]) and respects most of the
+    {{:http://www.opengroup.org/onlinepubs/009695399/basedefs/xbd_chap12.html}
+    POSIX} and
+    {{:http://www.gnu.org/software/libc/manual/html_node/Argument-Syntax.html}
+    GNU} conventions.
+
+    Consult the {{!basics}basics}, details about the supported
+    {{!cmdline}command line syntax} and {{!examples} examples} of
+    use. Open the module to use it, it defines only three modules in
+    your scope.
+
+    {e %%VERSION%% — {{:%%PKG_HOMEPAGE%% }homepage}} *)
+
+(** {1:top Interface} *)
+
+(** Man page specification.
+
+    Man page generation is automatically handled by [Cmdliner],
+    consult the {{!manual}details}.
+
+    The {!block} type is used to define a man page's content. It's a
+    good idea to follow the {{!standard_sections}standard} manual page
+    structure.
+
+   {b References.}
+   {ul
+   {- [man-pages(7)], {{:http://man7.org/linux/man-pages/man7/man-pages.7.html}
+      {e Conventions for writing Linux man pages}}.}} *)
+module Manpage : sig
+
+  (** {1:man Man pages} *)
+
+  type block =
+    [ `S of string | `P of string | `Pre of string | `I of string * string
+    | `Noblank | `Blocks of block list ]
+  (** The type for a block of man page text.
+
+      {ul
+      {- [`S s] introduces a new section [s], see the
+         {{!standard_sections}standard section names}.}
+      {- [`P t] is a new paragraph with text [t].}
+      {- [`Pre t] is a new preformatted paragraph with text [t].}
+      {- [`I (l,t)] is an indented paragraph with label
+         [l] and text [t].}
+      {- [`Noblank] suppresses the blank line introduced between two blocks.}
+      {- [`Blocks bs] splices the blocks [bs].}}
+
+      Except in [`Pre], whitespace and newlines are not significant
+      and are all collapsed to a single space. All block strings
+      support the {{!doclang}documentation markup language}.*)
+
+  val escape : string -> string
+  (** [escape s] escapes [s] so that it doesn't get interpreted by the
+      {{!doclang}documentation markup language}. *)
+
+  type title = string * int * string * string * string
+  (** The type for man page titles. Describes the man page
+      [title], [section], [center_footer], [left_footer], [center_header]. *)
+
+  type t = title * block list
+  (** The type for a man page. A title and the page text as a list of blocks. *)
+
+  type xref =
+    [ `Main | `Cmd of string | `Tool of string | `Page of string * int ]
+  (** The type for man page cross-references.
+      {ul
+      {- [`Main] refers to the man page of the program itself.}
+      {- [`Cmd cmd] refers to the man page of the program's [cmd]
+         command (which must exist).}
+      {- [`Tool bin] refers to the command line tool named [bin].}
+      {- [`Page (name, sec)] refers to the man page [name(sec)].}} *)
+
+  (** {1:standard_sections Standard section names and content}
+
+      The following are standard man page section names, roughly ordered
+      in the order they conventionally appear. See also
+      {{:http://man7.org/linux/man-pages/man7/man-pages.7.html}[man man-pages]}
+      for more elaborations about what sections should contain. *)
+
+  val s_name : string
+  (** The [NAME] section. This section is automatically created by
+      [Cmdliner] for you. *)
+
+  val s_synopsis : string
+  (** The [SYNOPSIS] section. By default this section is automatically
+      created by [Cmdliner] for you, unless the first section of your
+      term's man page is a [SYNOPSIS] section, in which case yours is
+      used instead. *)
+
+  val s_description : string
+  (** The [DESCRIPTION] section. This should be a description of what
+      the tool does and provide a little bit of usage and
+      documentation guidance. *)
+
+  val s_commands : string
+  (** The [COMMANDS] section. By default subcommands get listed here. *)
+
+  val s_arguments : string
+  (** The [ARGUMENTS] section. By default positional arguments get
+      listed here. *)
+
+  val s_options : string
+  (** The [OPTIONS] section. By default options and flag arguments get
+      listed here. *)
+
+  val s_common_options : string
+  (** The [COMMON OPTIONS] section. For programs with multiple commands
+      a section that can be used to gather options common to all commands. *)
+
+  val s_exit_status : string
+  (** The [EXIT STATUS] section. By default term status exit codes
+      get listed here. *)
+
+  val s_environment : string
+  (** The [ENVIRONMENT] section. By default environment variables get
+      listed here. *)
+
+  val s_environment_intro : block
+  (** [s_environment_intro] is the introduction content used by cmdliner
+      when it creates the {!s_environment} section. *)
+
+  val s_files : string
+  (** The [FILES] section. *)
+
+  val s_bugs : string
+  (** The [BUGS] section. *)
+
+  val s_examples : string
+  (** The [EXAMPLES] section. *)
+
+  val s_authors : string
+  (** The [AUTHORS] section. *)
+
+  val s_see_also : string
+  (** The [SEE ALSO] section. *)
+
+  (** {1:output Output}
+
+    The {!print} function can be useful if the client wants to define
+    other man pages (e.g. to implement a help command). *)
+
+  type format = [ `Auto | `Pager | `Plain | `Groff ]
+  (** The type for man page output specification.
+      {ul
+      {- [`Auto], formats like [`Plain] whenever the [TERM]
+         environment variable is [dumb] or unset, and like [`Pager]
+         otherwise.}
+      {- [`Pager], tries to write to a discovered pager, if that fails
+         uses the [`Plain] format.}
+      {- [`Plain], formats to plain text.}
+      {- [`Groff], formats to groff commands.}} *)
+
+  val print :
+    ?errs:Format.formatter ->
+    ?subst:(string -> string option) -> format -> Format.formatter -> t -> unit
+  (** [print ~errs ~subst fmt ppf page] prints [page] on [ppf] in the
+      format [fmt]. [subst] can be used to perform variable
+      substitution (defaults to the identity). [errs] is used to print
+      formatting errors, it defaults to {!Format.err_formatter}. *)
+end
+
+(** Terms.
+
+    A term is evaluated by a program to produce a {{!result}result},
+    which can be turned into an {{!exits}exit status}. A term made of terms
+    referring to {{!Arg}command line arguments} implicitly defines a
+    command line syntax. *)
+module Term : sig
+
+  (** {1:terms Terms} *)
+
+  type +'a t
+  (** The type for terms evaluating to values of type ['a]. *)
+
+  val const : 'a -> 'a t
+  (** [const v] is a term that evaluates to [v]. *)
+
+  (**/**)
+  val pure : 'a -> 'a t
+  (** @deprecated use {!const} instead. *)
+
+  val man_format : Manpage.format t
+  (** @deprecated Use {!Arg.man_format} instead. *)
+  (**/**)
+
+  val ( $ ) : ('a -> 'b) t -> 'a t -> 'b t
+  (** [f $ v] is a term that evaluates to the result of applying
+      the evaluation of [v] to the one of [f]. *)
+
+  val app : ('a -> 'b) t -> 'a t -> 'b t
+  (** [app] is {!($)}. *)
+
+  (** {1 Interacting with Cmdliner's evaluation} *)
+
+  type 'a ret =
+    [ `Help of Manpage.format * string option
+    | `Error of (bool * string)
+    | `Ok of 'a ]
+  (** The type for command return values. See {!ret}. *)
+
+  val ret : 'a ret t -> 'a t
+  (** [ret v] is a term whose evaluation depends on the case
+      to which [v] evaluates. With:
+      {ul
+      {- [`Ok v], it evaluates to [v].}
+      {- [`Error (usage, e)], the evaluation fails and [Cmdliner] prints
+         the error [e] and the term's usage if [usage] is [true].}
+      {- [`Help (format, name)], the evaluation fails and [Cmdliner] prints the
+         term's man page in the given [format] (or the man page for a
+         specific [name] term in case of multiple term evaluation).}}   *)
+
+  val term_result : ?usage:bool -> ('a, [`Msg of string]) result t -> 'a t
+  (** [term_result ~usage t] evaluates to
+      {ul
+      {- [`Ok v] if [t] evaluates to [Ok v]}
+      {- [`Error `Term] with the error message [e] and usage shown according
+         to [usage] (defaults to [false]), if [t] evaluates to
+         [Error (`Msg e)].}} *)
+
+  val cli_parse_result : ('a, [`Msg of string]) result t -> 'a t
+  (** [cli_parse_result t] is a term that evaluates to:
+      {ul
+      {- [`Ok v] if [t] evaluates to [Ok v].}
+      {- [`Error `Parse] with the error message [e]
+         if [t] evaluates to [Error (`Msg e)].}} *)
+
+  val main_name : string t
+  (** [main_name] is a term that evaluates to the "main" term's name. *)
+
+  val choice_names : string list t
+  (** [choice_names] is a term that evaluates to the names of the terms
+      to choose from. *)
+
+  val with_used_args : 'a t -> ('a * string list) t
+  (** [with_used_args t] is a term that evaluates to [t] tupled
+      with the arguments from the command line that were used to
+      evaluate [t]. *)
+
+  (** {1:tinfo Term information}
+
+      Term information defines the name and man page of a term.
+      For simple evaluation this is the name of the program and its
+      man page. For multiple term evaluation, this is
+      the name of a command and its man page. *)
+
+  type exit_info
+  (** The type for exit status information. *)
+
+  val exit_info : ?docs:string -> ?doc:string -> ?max:int -> int -> exit_info
+  (** [exit_info ~docs ~doc ~max min] describes the range of exit
+      statuses from [min] to [max] (defaults to [min]). [doc] is the
+      man page information for the statuses, defaults to ["undocumented"].
+      [docs] is the title of the man page section in which the statuses
+      will be listed, it defaults to {!Manpage.s_exit_status}.
+
+      In [doc] the {{!doclang}documentation markup language} can be
+      used with the following variables:
+      {ul
+      {- [$(status)], the value of [min].}
+      {- [$(status_max)], the value of [max].}
+      {- The variables mentioned in {!info}}} *)
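+
+  (** For example (a sketch for a hypothetical matching tool), an extra
+      exit status can be documented and combined with {!default_exits}:
+      {[
+        let exits =
+          Term.exit_info 1 ~doc:"if no entry matches the request." ::
+          Term.default_exits
+      ]}
+      The resulting list is given to the {{!info}[~exits]} argument of
+      the term's information. *)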
+
+  val default_exits : exit_info list
+  (** [default_exits] is information for exit status {!exit_status_success}
+      added to {!default_error_exits}. *)
+
+  val default_error_exits : exit_info list
+  (** [default_error_exits] is information for exit statuses
+      {!exit_status_cli_error} and {!exit_status_internal_error}. *)
+
+  type env_info
+  (** The type for environment variable information. *)
+
+  val env_info : ?docs:string -> ?doc:string -> string -> env_info
+  (** [env_info ~docs ~doc var] describes an environment variable
+      [var]. [doc] is the man page information of the environment
+      variable, defaults to ["undocumented"]. [docs] is the title of
+      the man page section in which the environment variable will be
+      listed, it defaults to {!Manpage.s_environment}.
+
+      In [doc] the {{!doclang}documentation markup language} can be
+      used with the following variables:
+      {ul
+      {- [$(env)], the value of [var].}
+      {- The variables mentioned in {!info}}} *)
+
+  type info
+  (** The type for term information. *)
+
+  val info :
+    ?man_xrefs:Manpage.xref list -> ?man:Manpage.block list ->
+    ?envs:env_info list -> ?exits:exit_info list -> ?sdocs:string ->
+    ?docs:string -> ?doc:string -> ?version:string -> string -> info
+  (** [info sdocs man docs doc version name] is term information
+      such that:
+      {ul
+      {- [name] is the name of the program or the command.}
+      {- [version] is the version string of the program, ignored
+         for commands.}
+      {- [doc] is a one line description of the program or command used
+         for the [NAME] section of the term's man page. For commands this
+         description is also used in the list of commands of the main
+         term's man page.}
+      {- [docs], only for commands, the title of the section of the main
+         term's man page where it should be listed (defaults to
+         {!Manpage.s_commands}).}
+      {- [sdocs] defines the title of the section in which the
+         standard [--help] and [--version] arguments are listed
+         (defaults to {!Manpage.s_options}).}
+      {- [exits] is a list of exit statuses that the term evaluation
+         may produce.}
+      {- [envs] is a list of environment variables that influence
+         the term's evaluation.}
+      {- [man] is the text of the man page for the term.}
+      {- [man_xrefs] are cross-references to other manual pages. These
+         are used to generate a {!Manpage.s_see_also} section.}}
+      [doc], [man], [envs] support the {{!doclang}documentation markup
+      language} in which the following variables are recognized:
+      {ul
+      {- [$(tname)] the term's name.}
+      {- [$(mname)] the main term's name.}} *)
+
+  val name : info -> string
+  (** [name ti] is the name of the term information. *)
+
+  (** {1:evaluation Evaluation} *)
+
+  type 'a result =
+    [ `Ok of 'a | `Error of [`Parse | `Term | `Exn ] | `Version | `Help ]
+  (** The type for evaluation results.
+      {ul
+      {- [`Ok v], the term evaluated successfully and [v] is the result.}
+      {- [`Version], the version string of the main term was printed
+       on the help formatter.}
+      {- [`Help], man page about the term was printed on the help formatter.}
+      {- [`Error `Parse], a command line parse error occurred and was
+         reported on the error formatter.}
+      {- [`Error `Term], a term evaluation error occurred and was reported
+         on the error formatter (see {!Term.ret}).}
+      {- [`Error `Exn], an exception [e] was caught and reported
+         on the error formatter (see the [~catch] parameter of {!eval}).}} *)
+
+  val eval :
+    ?help:Format.formatter -> ?err:Format.formatter -> ?catch:bool ->
+    ?env:(string -> string option) -> ?argv:string array -> ('a t * info) ->
+    'a result
+  (** [eval help err catch argv (t,i)] is the evaluation result
+      of [t] with command line arguments [argv] (defaults to {!Sys.argv}).
+
+      If [catch] is [true] (the default), uncaught exceptions
+      are intercepted and their stack trace is written to the [err]
+      formatter.
+
+      [help] is the formatter used to print help or version messages
+      (defaults to {!Format.std_formatter}). [err] is the formatter
+      used to print error messages (defaults to {!Format.err_formatter}).
+
+      [env] is used for environment variable lookup, the default
+      uses {!Sys.getenv}. *)
+
+  val eval_choice :
+    ?help:Format.formatter -> ?err:Format.formatter -> ?catch:bool ->
+    ?env:(string -> string option) -> ?argv:string array ->
+    'a t * info -> ('a t * info) list -> 'a result
+  (** [eval_choice help err catch argv (t,i) choices] is like {!eval}
+      except that if the first argument on the command line is not an option
+      name it will look in [choices] for a term whose information has this
+      name and evaluate it.
+
+      If the command name is unknown an error is reported. If the name
+      is unspecified the "main" term [t] is evaluated. [i] defines the
+      name and man page of the program. *)
+
+  val eval_peek_opts :
+    ?version_opt:bool -> ?env:(string -> string option) ->
+    ?argv:string array -> 'a t -> 'a option * 'a result
+  (** [eval_peek_opts version_opt argv t] evaluates [t], a term made
+      of optional arguments only, with the command line [argv]
+      (defaults to {!Sys.argv}). In this evaluation, unknown optional
+      arguments and positional arguments are ignored.
+
+      The evaluation returns a pair. The first component is
+      the result of parsing the command line [argv] stripped from
+      any help and version option if [version_opt] is [true] (defaults
+      to [false]). It results in:
+      {ul
+      {- [Some _] if the command line would be parsed correctly given the
+         {e partial} knowledge in [t].}
+      {- [None] if a parse error would occur on the options of [t]}}
+
+      The second component is the result of parsing the command line
+      [argv] without stripping the help and version options. It
+      indicates what the evaluation would result in on [argv] given
+      the partial knowledge in [t] (for example it would return
+      [`Help] if there's a help option in [argv]). However, in
+      contrast to {!eval} and {!eval_choice}, no side effects like
+      error reporting or help output occur.
+
+      {b Note.} Positional arguments can't be peeked without the full
+      specification of the command line: we can't tell apart a
+      positional argument from the value of an unknown optional
+      argument.  *)
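+
+  (** For example (a sketch; the [--config] option is illustrative),
+      [eval_peek_opts] can look up an option before the program's terms
+      are fully assembled:
+      {[
+        let config =
+          Arg.(value & opt (some string) None & info ["config"] ~docv:"FILE")
+
+        (* [conf] is [Some (Some file)] if [--config FILE] parses,
+           ignoring any other argument on the command line. *)
+        let conf, _ = Term.eval_peek_opts config
+      ]} *)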
+
+  (** {1:exits Turning evaluation results into exit codes}
+
+      {b Note.} If you are using the following functions to handle
+      the evaluation result of a term you should add {!default_exits} to
+      the term's information {{!info}[~exits]} argument.
+
+      {b WARNING.} You should avoid status codes strictly greater than 125
+      as those may be used by
+      {{:https://www.gnu.org/software/bash/manual/html_node/Exit-Status.html}
+       some} shells. *)
+
+  val exit_status_success : int
+  (** [exit_status_success] is 0, the exit status for success. *)
+
+  val exit_status_cli_error : int
+  (** [exit_status_cli_error] is 124, an exit status for command line
+      parsing errors. *)
+
+  val exit_status_internal_error : int
+  (** [exit_status_internal_error] is 125, an exit status for unexpected
+      internal errors. *)
+
+  val exit_status_of_result : ?term_err:int -> 'a result -> int
+  (** [exit_status_of_result ~term_err r] is an [exit(3)] status
+      code determined from [r] as follows:
+      {ul
+      {- {!exit_status_success} if [r] is one of [`Ok _], [`Version], [`Help]}
+      {- [term_err] if [r] is [`Error `Term], [term_err] defaults to [1].}
+      {- {!exit_status_cli_error} if [r] is [`Error `Parse]}
+      {- {!exit_status_internal_error} if [r] is [`Error `Exn]}} *)
+
+  val exit_status_of_status_result : ?term_err:int -> int result -> int
+  (** [exit_status_of_status_result] is like {!exit_status_of_result}
+      except for [`Ok n] where [n] is used as the status exit code. *)
+
+  val exit : ?term_err:int -> 'a result -> unit
+  (** [exit ~term_err r] is
+      [Stdlib.exit @@ exit_status_of_result ~term_err r] *)
+
+  val exit_status : ?term_err:int -> int result -> unit
+  (** [exit_status ~term_err r] is
+      [Stdlib.exit @@ exit_status_of_status_result ~term_err r] *)
+end
+
+(** Terms for command line arguments.
+
+    This module provides functions to define terms that evaluate
+    to the arguments provided on the command line.
+
+    Basic constraints, like the argument type or repeatability, are
+    specified by defining a value of type {!t}. Further constraints can
+    be specified during the {{!argterms}conversion} to a term. *)
+module Arg : sig
+
+(** {1:argconv Argument converters}
+
+    An argument converter transforms a string argument of the command
+    line to an OCaml value. {{!converters}Predefined converters}
+    are provided for many types of the standard library. *)
+
+  type 'a parser = string -> [ `Ok of 'a | `Error of string ]
+  (** The type for argument parsers.
+
+      @deprecated Use a parser with [('a, [ `Msg of string]) result] results
+      and {!conv}. *)
+
+  type 'a printer = Format.formatter -> 'a -> unit
+  (** The type for converted argument printers. *)
+
+  type 'a conv = 'a parser * 'a printer
+  (** The type for argument converters.
+
+      {b WARNING.} This type will become abstract in the next
+      major version of cmdliner, use {!val:conv} or {!pconv}
+      to construct values of this type. *)
+
+  type 'a converter = 'a conv
+  (** @deprecated Use the {!type:conv} type via the {!val:conv} and {!pconv}
+      functions. *)
+
+  val conv :
+    ?docv:string -> (string -> ('a, [`Msg of string]) result) * 'a printer ->
+    'a conv
+  (** [conv ~docv (parse, print)] is an argument converter
+      parsing values with [parse] and printing them with
+      [print]. [docv] is a documentation meta-variable used in the
+      documentation to stand for the argument value, defaults to
+      ["VALUE"]. *)
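+
+  (** For example (a sketch; the [WxH] syntax is illustrative), a
+      converter for dimension arguments such as [800x600]:
+      {[
+        let dim : (int * int) conv =
+          let parse s =
+            try Scanf.sscanf s "%dx%d" (fun w h -> Ok (w, h)) with
+            | Scanf.Scan_failure _ | End_of_file | Failure _ ->
+                Error (`Msg (Printf.sprintf "invalid dimension %S" s))
+          in
+          let print ppf (w, h) = Format.fprintf ppf "%dx%d" w h in
+          Arg.conv ~docv:"DIM" (parse, print)
+      ]} *)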
+
+  val pconv :
+    ?docv:string -> 'a parser * 'a printer -> 'a conv
+  (** [pconv] is like {!val:conv}, but uses a deprecated {!parser}
+      signature. *)
+
+  val conv_parser : 'a conv -> (string -> ('a, [`Msg of string]) result)
+  (** [conv_parser c] is [c]'s parser. *)
+
+  val conv_printer : 'a conv -> 'a printer
+  (** [conv_printer c] is [c]'s printer. *)
+
+  val conv_docv : 'a conv -> string
+  (** [conv_docv c] is [c]'s documentation meta-variable.
+
+      {b WARNING.} Currently this always returns ["VALUE"]; in the
+      future it will return the value given to {!conv} or {!pconv}. *)
+
+  val parser_of_kind_of_string :
+    kind:string -> (string -> 'a option) ->
+    (string -> ('a, [`Msg of string]) result)
+  (** [parser_of_kind_of_string ~kind kind_of_string] is an argument
+      parser using the [kind_of_string] function for parsing and [kind]
+      to report errors (e.g. could be ["an integer"] for an [int] parser.). *)
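+
+  (** A sketch (the bound and the [kind] text are illustrative):
+      {[
+        let port_parser =
+          Arg.parser_of_kind_of_string ~kind:"a port number (0-65535)"
+            (fun s ->
+               match int_of_string_opt s with
+               | Some p when 0 <= p && p <= 65535 -> Some p
+               | _ -> None)
+      ]} *)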
+
+  val some : ?none:string -> 'a conv -> 'a option conv
+  (** [some none c] is like the converter [c] except it wraps the
+      converted value in [Some]. It is used for command line arguments
+      that default to [None] when absent. [none] is what to print to
+      document the absence (defaults to [""]). *)
+
+(** {1:arginfo Arguments and their information}
+
+    Argument information defines the man page information of an
+    argument and, for optional arguments, its names. An environment
+    variable can also be specified to read the argument value from
+    if the argument is absent from the command line and the variable
+    is defined. *)
+
+  type env = Term.env_info
+  (** The type for environment variables and their documentation. *)
+
+  val env_var : ?docs:string -> ?doc:string -> string -> env
+  (** [env_var docs doc var] describes an environment variable
+      [var]. [doc] is the man page information of the environment
+      variable; the {{!doclang}documentation markup language} with the
+      variables mentioned in {!info} can be used. It defaults to
+      ["See option $(opt)."]. [docs] is the title of the man page
+      section in which the environment variable will be listed, it
+      defaults to {!Manpage.s_environment}. *)
+
+  type 'a t
+  (** The type for arguments holding data of type ['a]. *)
+
+  type info
+  (** The type for information about command line arguments. *)
+
+  val info :
+    ?docs:string -> ?docv:string -> ?doc:string -> ?env:env -> string list ->
+    info
+  (** [info docs docv doc env names] defines information for
+      an argument.
+      {ul
+      {- [names] defines the names under which an optional argument
+         can be referred to. Strings of length [1] (["c"]) define
+         short option names (["-c"]), longer strings (["count"])
+         define long option names (["--count"]). [names] must be empty
+         for positional arguments.}
+      {- [env] defines the name of an environment variable which is
+         looked up for defining the argument if it is absent from the
+         command line. See {{!envlookup}environment variables} for
+         details.}
+      {- [doc] is the man page information of the argument.
+         The {{!doclang}documentation language} can be used and
+         the following variables are recognized:
+         {ul
+         {- ["$(docv)"] the value of [docv] (see below).}
+         {- ["$(opt)"], one of the options of [names], preference
+            is given to a long one.}
+         {- ["$(env)"], the environment var specified by [env] (if any).}}
+         {{!doc_helpers}These functions} can help with formatting argument
+         values.}
+      {- [docv] is for positional and non-flag optional arguments.
+         It is a variable name used in the man page to stand for their value.}
+      {- [docs] is the title of the man page section in which the argument
+         will be listed. For optional arguments this defaults
+         to {!Manpage.s_options}. For positional arguments this defaults
+         to {!Manpage.s_arguments}. However a positional argument is only
+         listed if it has both a [doc] and [docv] specified.}} *)
+
+  val ( & ) : ('a -> 'b) -> 'a -> 'b
+  (** [f & v] is [f v], a right associative composition operator for
+      specifying argument terms. *)
+
+(** {1:optargs Optional arguments}
+
+    The information of an optional argument must have at least
+    one name or [Invalid_argument] is raised. *)
+
+  val flag : info -> bool t
+  (** [flag i] is a [bool] argument defined by an optional flag
+      that may appear {e at most} once on the command line under one of
+      the names specified by [i]. The argument holds [true] if the
+      flag is present on the command line and [false] otherwise. *)
+
+  val flag_all : info -> bool list t
+  (** [flag_all] is like {!flag} except the flag may appear more than
+      once. The argument holds a list that contains one [true] value per
+      occurrence of the flag. It holds the empty list if the flag
+      is absent from the command line. *)
+
+  val vflag : 'a -> ('a * info) list -> 'a t
+  (** [vflag v \[v]{_0}[,i]{_0}[;...\]] is an ['a] argument defined
+      by an optional flag that may appear {e at most} once on
+      the command line under one of the names specified in the [i]{_k}
+      values. The argument holds [v] if the flag is absent from the
+      command line and the value [v]{_k} if the name under which it appears
+      is in [i]{_k}.
+
+      {b Note.} Environment variable lookup is unsupported for
+      these arguments. *)
+
+  val vflag_all : 'a list -> ('a * info) list -> 'a list t
+  (** [vflag_all v l] is like {!vflag} except the flag may appear more
+      than once. The argument holds the list [v] if the flag is absent
+      from the command line. Otherwise it holds a list that contains one
+      corresponding value per occurrence of the flag, in the order found on
+      the command line.
+
+      {b Note.} Environment variable lookup is unsupported for
+      these arguments. *)
+
+  val opt : ?vopt:'a -> 'a conv -> 'a -> info -> 'a t
+  (** [opt vopt c v i] is an ['a] argument defined by the value of
+      an optional argument that may appear {e at most} once on the command
+      line under one of the names specified by [i]. The argument holds
+      [v] if the option is absent from the command line. Otherwise
+      it has the value of the option as converted by [c].
+
+      If [vopt] is provided the value of the optional argument is itself
+      optional, taking the value [vopt] if unspecified on the command line. *)
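+
+  (** For example (a sketch; the option name is illustrative), a
+      [--debug] option whose level is itself optional:
+      {[
+        (* absent: 0, "--debug": 1, "--debug=2": 2 *)
+        let debug =
+          Arg.(value & opt ~vopt:1 int 0 & info ["debug"] ~docv:"LEVEL")
+      ]} *)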
+
+  val opt_all : ?vopt:'a -> 'a conv -> 'a list -> info -> 'a list t
+  (** [opt_all vopt c v i] is like {!opt} except the optional argument may
+      appear more than once. The argument holds a list that contains one value
+      per occurrence of the flag in the order found on the command line.
+      It holds the list [v] if the flag is absent from the command line. *)
+
+  (** {1:posargs Positional arguments}
+
+      The information of a positional argument must have no name
+      or [Invalid_argument] is raised. Positional arguments indexing
+      is zero-based.
+
+      {b Warning.} The following combinators make it possible to specify and
+      extract a given positional argument with more than one term.
+      This should not be done as it will likely confuse end users and
+      documentation generation. These over-specifications may be
+      prevented by raising [Invalid_argument] in the future. But for now
+      it is the client's duty to make sure this doesn't happen. *)
+
+  val pos : ?rev:bool -> int -> 'a conv -> 'a -> info -> 'a t
+  (** [pos rev n c v i] is an ['a] argument defined by the [n]th
+      positional argument of the command line as converted by [c].
+      If the positional argument is absent from the command line
+      the argument is [v].
+
+      If [rev] is [true] (defaults to [false]), the computed
+      position is [max-n] where [max] is the position of
+      the last positional argument present on the command line. *)
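+
+  (** For example (a sketch), with [~rev:true] the {e last} positional
+      argument can be extracted, as the destination of a [cp]-like
+      command would be:
+      {[
+        let dest =
+          Arg.(required & pos ~rev:true 0 (some string) None &
+               info [] ~docv:"DEST")
+      ]} *)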
+
+  val pos_all : 'a conv -> 'a list -> info -> 'a list t
+  (** [pos_all c v i] is an ['a list] argument that holds
+      all the positional arguments of the command line as converted
+      by [c] or [v] if there are none. *)
+
+  val pos_left :
+    ?rev:bool -> int -> 'a conv -> 'a list -> info -> 'a list t
+  (** [pos_left rev n c v i] is an ['a list] argument that holds
+      all the positional arguments as converted by [c] found on the left
+      of the [n]th positional argument or [v] if there are none.
+
+      If [rev] is [true] (defaults to [false]), the computed
+      position is [max-n] where [max] is the position of
+      the last positional argument present on the command line. *)
+
+  val pos_right :
+    ?rev:bool -> int -> 'a conv -> 'a list -> info -> 'a list t
+  (** [pos_right] is like {!pos_left} except it holds all the positional
+      arguments found on the right of the specified positional argument. *)
+
+  (** {1:argterms Arguments as terms} *)
+
+  val value : 'a t -> 'a Term.t
+  (** [value a] is a term that evaluates to [a]'s value. *)
+
+  val required : 'a option t -> 'a Term.t
+  (** [required a] is a term that fails if [a]'s value is [None] and
+      evaluates to the value of [Some] otherwise. Use this for required
+      positional arguments (it can also be used for defining required
+      optional arguments, but from a user interface perspective this
+      shouldn't be done, it is a contradiction in terms). *)
+
+  val non_empty : 'a list t -> 'a list Term.t
+  (** [non_empty a] is a term that fails if [a]'s list is empty and
+      evaluates to [a]'s list otherwise. Use this for non empty lists
+      of positional arguments. *)
+
+  val last : 'a list t -> 'a Term.t
+  (** [last a] is a term that fails if [a]'s list is empty and evaluates
+      to the value of the last element of the list otherwise. Use this
+      for lists of flags or options where the last occurrence takes precedence
+      over the others. *)
+
+  (** {1:predef Predefined arguments} *)
+
+  val man_format : Manpage.format Term.t
+  (** [man_format] is a term that defines a [--man-format] option and
+      evaluates to a value that can be used with {!Manpage.print}. *)
+
+  (** {1:converters Predefined converters} *)
+
+  val bool : bool conv
+  (** [bool] converts values with {!bool_of_string}. *)
+
+  val char : char conv
+  (** [char] converts values by ensuring the argument has a single char. *)
+
+  val int : int conv
+  (** [int] converts values with {!int_of_string}. *)
+
+  val nativeint : nativeint conv
+  (** [nativeint] converts values with {!Nativeint.of_string}. *)
+
+  val int32 : int32 conv
+  (** [int32] converts values with {!Int32.of_string}. *)
+
+  val int64 : int64 conv
+  (** [int64] converts values with {!Int64.of_string}. *)
+
+  val float : float conv
+  (** [float] converts values with {!float_of_string}. *)
+
+  val string : string conv
+  (** [string] converts values with the identity function. *)
+
+  val enum : (string * 'a) list -> 'a conv
+  (** [enum l] converts values such that unambiguous prefixes of string names
+      in [l] map to the corresponding value of type ['a].
+
+      {b Warning.} The type ['a] must be comparable with {!Pervasives.compare}.
+
+      @raise Invalid_argument if [l] is empty. *)
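+
+  (** For example (a sketch; the [level] type is illustrative):
+      {[
+        type level = Quiet | Normal | Verbose
+        let level : level conv =
+          Arg.enum ["quiet", Quiet; "normal", Normal; "verbose", Verbose]
+      ]}
+      Here the prefix [v] on the command line would unambiguously
+      select [Verbose]. *)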
+
+  val file : string conv
+  (** [file] converts a value with the identity function and
+      checks with {!Sys.file_exists} that a file with that name exists. *)
+
+  val dir : string conv
+  (** [dir] converts a value with the identity function and checks
+      with {!Sys.file_exists} and {!Sys.is_directory}
+      that a directory with that name exists. *)
+
+  val non_dir_file : string conv
+  (** [non_dir_file] converts a value with the identity function and checks
+      with {!Sys.file_exists} and {!Sys.is_directory}
+      that a non directory file with that name exists. *)
+
+  val list : ?sep:char -> 'a conv -> 'a list conv
+  (** [list sep c] splits the argument at each [sep] (defaults to [','])
+      character and converts each substring with [c]. *)
+
+  val array : ?sep:char -> 'a conv -> 'a array conv
+  (** [array sep c] splits the argument at each [sep] (defaults to [','])
+      character and converts each substring with [c]. *)
+
+  val pair : ?sep:char -> 'a conv -> 'b conv -> ('a * 'b) conv
+  (** [pair sep c0 c1] splits the argument at the {e first} [sep] character
+      (defaults to [',']) and respectively converts the substrings with
+      [c0] and [c1]. *)
+
+  val t2 : ?sep:char -> 'a conv -> 'b conv -> ('a * 'b) conv
+  (** {!t2} is {!pair}. *)
+
+  val t3 : ?sep:char -> 'a conv ->'b conv -> 'c conv -> ('a * 'b * 'c) conv
+  (** [t3 sep c0 c1 c2] splits the argument at the {e first} two [sep]
+      characters (defaults to [',']) and respectively converts the
+      substrings with [c0], [c1] and [c2]. *)
+
+  val t4 :
+    ?sep:char -> 'a conv -> 'b conv -> 'c conv -> 'd conv ->
+    ('a * 'b * 'c * 'd) conv
+  (** [t4 sep c0 c1 c2 c3] splits the argument at the {e first} three [sep]
+      characters (defaults to [',']) and respectively converts the substrings
+      with [c0], [c1], [c2] and [c3]. *)
+
+  (** {1:doc_helpers Documentation formatting helpers} *)
+
+  val doc_quote : string -> string
+  (** [doc_quote s] quotes the string [s]. *)
+
+  val doc_alts : ?quoted:bool -> string list -> string
+  (** [doc_alts alts] documents the alternative tokens [alts] according
+      the number of alternatives. If [quoted] is [true] (default)
+      the tokens are quoted. The resulting string can be used in
+      sentences of the form ["$(docv) must be %s"].
+
+      @raise Invalid_argument if [alts] is the empty list.  *)
+
+  val doc_alts_enum : ?quoted:bool -> (string * 'a) list -> string
+  (** [doc_alts_enum quoted alts] is [doc_alts quoted (List.map fst alts)]. *)
+end
+
+(** {1:basics Basics}
+
+ With [Cmdliner] your program evaluates a term. A {e term} is a value
+ of type {!Term.t}. The type parameter indicates the type of the
+ result of the evaluation.
+
+One way to create terms is by lifting regular OCaml values with
+{!Term.const}. Terms can be applied to terms evaluating to functional
+values with {!Term.( $ )}. For example for the function:
+
+{[
+let revolt () = print_endline "Revolt!"
+]}
+
+the term:
+
+{[
+open Cmdliner
+
+let revolt_t = Term.(const revolt $ const ())
+]}
+
+is a term that evaluates to the result (and effect) of the [revolt]
+function. Terms are evaluated with {!Term.eval}:
+
+{[
+let () = Term.exit @@ Term.eval (revolt_t, Term.info "revolt")
+]}
+
+This defines a command line program named ["revolt"], without command
+line arguments, that just prints ["Revolt!"] on [stdout].
+
+{[
+> ./revolt
+Revolt!
+]}
+
+The combinators in the {!Arg} module make it possible to extract command line
+argument data as terms. These terms can then be applied to lifted
+OCaml functions to be evaluated by the program.
+
+Terms corresponding to command line argument data that are part of a
+term evaluation implicitly define a command line syntax. We show this
+with a concrete example.
+
+Consider the [chorus] function that repeatedly prints a given message:
+
+{[
+let chorus count msg =
+  for i = 1 to count do print_endline msg done
+]}
+
+We want to make it available from the command line with the synopsis:
+
+{[
+chorus [-c COUNT | --count=COUNT] [MSG]
+]}
+
+where [COUNT] defaults to [10] and [MSG] defaults to ["Revolt!"]. We
+first define a term corresponding to the [--count] option:
+
+{[
+let count =
+  let doc = "Repeat the message $(docv) times." in
+  Arg.(value & opt int 10 & info ["c"; "count"] ~docv:"COUNT" ~doc)
+]}
+
+This says that [count] is a term that evaluates to the value of an
+optional argument of type [int] that defaults to [10] if unspecified
+and whose option name is either [-c] or [--count]. The arguments [doc]
+and [docv] are used to generate the option's man page information.
+
+The term for the positional argument [MSG] is:
+
+{[
+let msg =
+  let doc = "Overrides the default message to print." in
+  let env = Arg.env_var "CHORUS_MSG" ~doc in
+  let doc = "The message to print." in
+  Arg.(value & pos 0 string "Revolt!" & info [] ~env ~docv:"MSG" ~doc)
+]}
+
+which says that [msg] is a term whose value is the positional argument
+at index [0] of type [string] and defaults to ["Revolt!"] or the
+value of the environment variable [CHORUS_MSG] if the argument is
+unspecified on the command line. Here again [doc] and [docv] are used
+for the man page information.
+
+The term for executing [chorus] with these command line arguments is:
+
+{[
+let chorus_t = Term.(const chorus $ count $ msg)
+]}
+
+and we are now ready to define our program:
+
+{[
+let info =
+  let doc = "print a customizable message repeatedly" in
+  let man = [
+    `S Manpage.s_bugs;
+    `P "Email bug reports to <hehey at example.org>." ]
+  in
+  Term.info "chorus" ~version:"%‌%VERSION%%" ~doc ~exits:Term.default_exits ~man
+
+let () = Term.exit @@ Term.eval (chorus_t, info)
+]}
+
+The [info] value created with {!Term.info} gives more information
+about the term we execute and is used to generate the program's man
+page. Since we provided a [~version] string, the program will
+automatically respond to the [--version] option by printing this
+string.
+
+A program using {!Term.eval} always responds to the [--help] option by
+showing the man page about the program generated using the information
+you provided with {!Term.info} and {!Arg.info}.  Here is the output
+generated by our example:
+
+{v
+> ./chorus --help
+NAME
+       chorus - print a customizable message repeatedly
+
+SYNOPSIS
+       chorus [OPTION]... [MSG]
+
+ARGUMENTS
+       MSG (absent=Revolt! or CHORUS_MSG env)
+           The message to print.
+
+OPTIONS
+       -c COUNT, --count=COUNT (absent=10)
+           Repeat the message COUNT times.
+
+       --help[=FMT] (default=auto)
+           Show this help in format FMT. The value FMT must be one of `auto',
+           `pager', `groff' or `plain'. With `auto', the format is `pager' or
+           `plain' whenever the TERM env var is `dumb' or undefined.
+
+       --version
+           Show version information.
+
+EXIT STATUS
+       chorus exits with the following status:
+
+       0   on success.
+
+       124 on command line parsing errors.
+
+       125 on unexpected internal errors (bugs).
+
+ENVIRONMENT
+       These environment variables affect the execution of chorus:
+
+       CHORUS_MSG
+           Overrides the default message to print.
+
+BUGS
+       Email bug reports to <hehey at example.org>.
+v}
+
+If a pager is available, this output is written to a pager. This help
+is also available in plain text or in the
+{{:http://www.gnu.org/software/groff/groff.html}groff} man page format
+by invoking the program with the option [--help=plain] or
+[--help=groff].
+
+For examples of more complex command line definitions look and run
+the {{!examples}examples}.
+
+{2:multiterms Multiple terms}
+
+[Cmdliner] also provides support for programs like [darcs] or [git]
+that have multiple commands each with their own syntax:
+
+{[prog COMMAND [OPTION]... ARG...]}
+
+A command is defined by coupling a term with {{!Term.tinfo}term
+information}. The term information defines the command name and its
+man page. Given a list of commands the function {!Term.eval_choice}
+will execute the term corresponding to the [COMMAND] argument or a
+specific "main" term if there is no [COMMAND] argument.
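+
+A minimal sketch of such a program (the command names and actions are
+illustrative):
+
+{[
+let init () = print_endline "initialized"
+let fetch () = print_endline "fetched"
+
+let init_cmd =
+  Term.(const init $ const ()), Term.info "init" ~doc:"Initialize the store."
+let fetch_cmd =
+  Term.(const fetch $ const ()), Term.info "fetch" ~doc:"Fetch entries."
+
+(* The "main" term shows the help when no COMMAND is given. *)
+let default_cmd =
+  Term.(ret (const (`Help (`Pager, None)))), Term.info "prog"
+
+let () = Term.(exit @@ eval_choice default_cmd [init_cmd; fetch_cmd])
+]}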
+
+{2:doclang Documentation markup language}
+
+Manpage {{!Manpage.block}blocks} and doc strings support the following
+markup language.
+
+{ul
+{- Markup directives [$(i,text)] and [$(b,text)], where [text] is raw
+   text respectively rendered in italics and bold.}
+{- Outside markup directives, context dependent variables of the form
+   [$(var)] are substituted by marked up data. For example in a term's
+   man page [$(tname)] is substituted by the term name in bold.}
+{- Characters $, (, ) and \ can respectively be escaped by \$, \(, \)
+   and \\ (in OCaml strings this will be ["\\$"], ["\\("], ["\\)"],
+   ["\\\\"]). Escaping $ and \ is mandatory everywhere. Escaping ) is
+   mandatory only in markup directives. Escaping ( is only here for
+   your symmetric pleasure. Any other sequence of characters starting
+   with a \ is an illegal character sequence.}
+{- Referring to unknown markup directives or variables will generate
+   errors on standard error during documentation generation.}}
+
+{2:manual Manual}
+
+Man page sections for a term are printed in the order specified by the
+term manual as given to {!Term.info}. Unless specified explicitly in
+the term's manual the following sections are automatically created and
+populated for you:
+
+{ul
+{- {{!Manpage.s_name}[NAME]} section.}
+{- {{!Manpage.s_synopsis}[SYNOPSIS]} section.}}
+
+The various [doc] documentation strings specified by the term's
+subterms and additional metadata get inserted at the end of the
+documentation section name [docs] they respectively mention, in the
+following order:
+
+{ol
+{- Commands, see {!Term.info}.}
+{- Positional arguments, see {!Arg.info}. Those are listed iff
+   both the [docv] and [doc] strings are specified by {!Arg.info}.}
+{- Optional arguments, see {!Arg.info}.}
+{- Exit statuses, see {!Term.exit_info}.}
+{- Environment variables, see {!Arg.env_var} and {!Term.env_info}.}}
+
+If a [docs] section name is mentioned and does not exist in the term's
+manual, an empty section is created for it, after which the [doc] strings
+are inserted, possibly prefixed by boilerplate text (e.g. for
+{!Manpage.s_environment} and {!Manpage.s_exit_status}).
+
+If the created section is:
+{ul
+{- {{!Manpage.standard_sections}standard}, it is inserted at the
+    right place in the order specified
+    {{!Manpage.standard_sections}here}, but after any non-standard
+    section explicitly specified by the term, since the latter gets
+    the order number of the last previously specified standard section,
+    or the order of {!Manpage.s_synopsis} if there is no such section.}
+{-  non-standard, it is inserted before the {!Manpage.s_commands}
+    section, or before the first subsequent existing standard section
+    if {!Manpage.s_commands} doesn't exist. Relying on this behaviour
+    is discouraged: you should declare your non-standard sections
+    manually in the term's manual.}}
+
+Ideally all manual strings should be UTF-8 encoded. However at the
+moment macOS (until at least 10.12) is stuck with [groff 1.19.2],
+which doesn't support [preconv(1)]. Regarding UTF-8 output, generating
+the man page with [-Tutf8] maps the hyphen-minus [U+002D] to the minus
+sign [U+2212], which makes it difficult to search in the pager, so
+[-Tascii] is used for now. The conclusion is that it is better to
+stick to the ASCII set for now. Please contact the author if something
+seems wrong in this reasoning or if you know a workaround for this.
+
+{2:misc Miscellaneous}
+
+{ul
+{- The option name [--cmdliner] is reserved by the library.}
+{- The option names [--help] (and [--version] if you specify a version
+   string) are reserved by the library. Using them as term or option
+   names may result in undefined behaviour.}
+{- Defining the same option or command name via two different
+   arguments or terms is illegal and raises [Invalid_argument].}}
+
+{1:cmdline Command line syntax}
+
+For programs evaluating a single term the most general form of invocation is:
+
+{[
+prog [OPTION]... [ARG]...
+]}
+
+The program automatically responds to the [--help] option by printing
+the help. If a version string is provided in the {{!Term.tinfo}term
+information}, it also automatically responds to the [--version] option
+by printing this string.
+
+Command line arguments are either {{!optargs}{e optional}} or
+{{!posargs}{e positional}}. Both can be freely interleaved, but since
+[Cmdliner] accepts many optional argument forms this may result in
+ambiguities. The special {{!posargs} token [--]} can be used to
+resolve them.
+
+Programs evaluating multiple terms also add this form of invocation:
+
+{[
+prog COMMAND [OPTION]... [ARG]...
+]}
+
+Commands automatically respond to the [--help] option by printing
+their help. The [COMMAND] string must be the first string following
+the program name and may be specified by a prefix as long as it is not
+ambiguous.
+
+{2:optargs Optional arguments}
+
+An optional argument is specified on the command line by a {e name}
+possibly followed by a {e value}.
+
+The name of an option can be short or long.
+
+{ul
+{- A {e short} name is a dash followed by a single alphanumeric
+   character: ["-h"], ["-q"], ["-I"].}
+{- A {e long} name is two dashes followed by alphanumeric
+   characters and dashes: ["--help"], ["--silent"], ["--ignore-case"].}}
+
+More than one name may refer to the same optional argument. For
+example in a given program the names ["-q"], ["--quiet"] and
+["--silent"] may all stand for the same boolean argument telling the
+program to be quiet. Long names can be specified by any unambiguous
+prefix.
+
+The value of an option can be specified in three different ways.
+
+{ul
+{- As the next token on the command line: ["-o a.out"], ["--output a.out"].}
+{- Glued to a short name: ["-oa.out"].}
+{- Glued to a long name after an equal character: ["--output=a.out"].}}
+
+Glued forms are especially useful if the value itself starts with a
+dash as is the case for negative numbers, ["--min=-10"].
+
+An optional argument without a value is either a {e flag} (see
+{!Arg.flag}, {!Arg.vflag}) or an optional argument with an optional
+value (see the [~vopt] argument of {!Arg.opt}).
+
+Short flags can be grouped together to share a single dash and the
+group can end with a short option. For example assuming ["-v"] and
+["-x"] are flags and ["-f"] is a short option:
+
+{ul
+{- ["-vx"] will be parsed as ["-v -x"].}
+{- ["-vxfopt"] will be parsed as ["-v -x -fopt"].}
+{- ["-vxf opt"] will be parsed as ["-v -x -fopt"].}
+{- ["-fvx"] will be parsed as ["-f=vx"].}}
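
These grouping rules can be modeled by a small expansion function (an illustrative sketch, not the library's parser):

```ocaml
(* Expand a grouped short-flag token like "-vxfopt" into separate
   tokens, given the set of known flag characters. A character that
   is not a known flag starts a short option whose value is the rest
   of the token. *)
let expand_group ~flags token =
  let rec go i acc =
    if i >= String.length token then List.rev acc
    else if List.mem token.[i] flags then
      go (i + 1) (Printf.sprintf "-%c" token.[i] :: acc)
    else
      (* Short option: the remainder of the token is its value. *)
      List.rev
        (Printf.sprintf "-%c%s" token.[i]
           (String.sub token (i + 1) (String.length token - i - 1))
         :: acc)
  in
  go 1 [] (* skip the leading dash *)

let () =
  assert (expand_group ~flags:['v'; 'x'] "-vx" = ["-v"; "-x"]);
  assert (expand_group ~flags:['v'; 'x'] "-vxfopt" = ["-v"; "-x"; "-fopt"]);
  assert (expand_group ~flags:['v'; 'x'] "-fvx" = ["-fvx"])
```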
+
+{2:posargs Positional arguments}
+
+Positional arguments are tokens on the command line that are not
+option names and are not the value of an optional argument. They are
+numbered from left to right starting with zero.
+
+Since positional arguments may be mistaken for the optional value of
+an optional argument, or may need to look like option names, anything
+that follows the special token ["--"] on the command line is
+considered to be a positional argument.
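
The effect of ["--"] can be sketched by splitting the command line at the first occurrence of the token (illustration only):

```ocaml
(* Split a command line at the first "--" token: everything after it
   is treated as positional, whatever it looks like. *)
let split_at_dashdash args =
  let rec go before = function
    | [] -> (List.rev before, [])
    | "--" :: rest -> (List.rev before, rest)
    | a :: rest -> go (a :: before) rest
  in
  go [] args

let () =
  let opts, pos = split_at_dashdash ["-v"; "--"; "-not-an-option"; "file"] in
  assert (opts = ["-v"]);
  assert (pos = ["-not-an-option"; "file"])
```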
+
+{2:envlookup Environment variables}
+
+Non-required command line arguments can be backed up by an environment
+variable. If the argument is absent from the command line and the
+environment variable is defined, its value is parsed using the
+argument converter and defines the value of the argument.
+
+For {!Arg.flag} and {!Arg.flag_all}, which do not have an argument
+converter, a boolean is parsed from the lowercased variable value as
+follows:
+
+{ul
+{- [""], ["false"], ["no"], ["n"] or ["0"] is [false].}
+{- ["true"], ["yes"], ["y"] or ["1"] is [true].}
+{- Any other string is an error.}}
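
These rules correspond to a parser along the following lines (a sketch of the documented behaviour, not the library's code):

```ocaml
(* Parse a boolean from an environment variable value, following the
   rules documented above (the value is lowercased first). *)
let env_bool s =
  match String.lowercase_ascii s with
  | "" | "false" | "no" | "n" | "0" -> Ok false
  | "true" | "yes" | "y" | "1" -> Ok true
  | s -> Error (Printf.sprintf "invalid boolean value %S" s)

let () =
  assert (env_bool "YES" = Ok true);
  assert (env_bool "" = Ok false);
  assert (env_bool "maybe" = Error "invalid boolean value \"maybe\"")
```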
+
+Note that environment variables are not supported for {!Arg.vflag} and
+{!Arg.vflag_all}.
+
+{1:examples Examples}
+
+These examples are in the [test] directory of the distribution.
+
+{2:exrm A [rm] command}
+
+We define the command line interface of a [rm] command with the synopsis:
+
+{[
+rm [OPTION]... FILE...
+]}
+
+The [-f], [-i] and [-I] flags define the prompt behaviour of [rm],
+represented in our program by the [prompt] type. If more than one of
+these flags is present on the command line the last one takes
+precedence.
+
+To implement this behaviour we map the presence of these flags to
+values of the [prompt] type by using {!Arg.vflag_all}.  This argument
+will contain all occurrences of the flag on the command line and we
+just take the {!Arg.last} one to define our term value (if there's no
+occurrence the last value of the default list [[Always]] is taken,
+i.e. the default is [Always]).
+
+{[
+(* Implementation of the command, we just print the args. *)
+
+type prompt = Always | Once | Never
+let prompt_str = function
+| Always -> "always" | Once -> "once" | Never -> "never"
+
+let rm prompt recurse files =
+  Printf.printf "prompt = %s\nrecurse = %B\nfiles = %s\n"
+    (prompt_str prompt) recurse (String.concat ", " files)
+
+(* Command line interface *)
+
+open Cmdliner
+
+let files = Arg.(non_empty & pos_all file [] & info [] ~docv:"FILE")
+let prompt =
+  let doc = "Prompt before every removal." in
+  let always = Always, Arg.info ["i"] ~doc in
+  let doc = "Ignore nonexistent files and never prompt." in
+  let never = Never, Arg.info ["f"; "force"] ~doc in
+  let doc = "Prompt once before removing more than three files, or when
+             removing recursively. Less intrusive than $(b,-i), while
+             still giving protection against most mistakes."
+  in
+  let once = Once, Arg.info ["I"] ~doc in
+  Arg.(last & vflag_all [Always] [always; never; once])
+
+let recursive =
+  let doc = "Remove directories and their contents recursively." in
+  Arg.(value & flag & info ["r"; "R"; "recursive"] ~doc)
+
+let cmd =
+  let doc = "remove files or directories" in
+  let man = [
+    `S Manpage.s_description;
+    `P "$(tname) removes each specified $(i,FILE). By default it does not
+        remove directories; to also remove them and their contents, use the
+        option $(b,--recursive) ($(b,-r) or $(b,-R)).";
+    `P "To remove a file whose name starts with a `-', for example
+        `-foo', use one of these commands:";
+    `P "rm -- -foo"; `Noblank;
+    `P "rm ./-foo";
+    `P "$(tname) removes symbolic links, not the files referenced by the
+        links.";
+    `S Manpage.s_bugs; `P "Report bugs to <hehey at example.org>.";
+    `S Manpage.s_see_also; `P "$(b,rmdir)(1), $(b,unlink)(2)" ]
+  in
+  Term.(const rm $ prompt $ recursive $ files),
+  Term.info "rm" ~version:"%%VERSION%%" ~doc ~exits:Term.default_exits ~man
+
+let () = Term.(exit @@ eval cmd)
+]}
+
+{2:excp A [cp] command}
+
+We define the command line interface of a [cp] command with the synopsis:
+{[
+cp [OPTION]... SOURCE... DEST
+]}
+
+The [DEST] argument must be a directory if there is more than one
+[SOURCE]. This constraint is too complex to be expressed by the
+combinators of {!Arg}. Hence we just give it the {!Arg.string} type
+and verify the constraint at the beginning of the [cp]
+implementation. If it is unsatisfied we return an [`Error] and, by
+using {!Term.ret} on the lifted result [cp_t] of [cp], [Cmdliner]
+handles the error reporting.
+
+{[
+(* Implementation, we check the dest argument and print the args *)
+
+let cp verbose recurse force srcs dest =
+  if List.length srcs > 1 &&
+  (not (Sys.file_exists dest) || not (Sys.is_directory dest))
+  then
+    `Error (false, dest ^ " is not a directory")
+  else
+    `Ok (Printf.printf
+     "verbose = %B\nrecurse = %B\nforce = %B\nsrcs = %s\ndest = %s\n"
+      verbose recurse force (String.concat ", " srcs) dest)
+
+(* Command line interface *)
+
+open Cmdliner
+
+let verbose =
+  let doc = "Print file names as they are copied." in
+  Arg.(value & flag & info ["v"; "verbose"] ~doc)
+
+let recurse =
+  let doc = "Copy directories recursively." in
+  Arg.(value & flag & info ["r"; "R"; "recursive"] ~doc)
+
+let force =
+  let doc = "If a destination file cannot be opened, remove it and try again." in
+  Arg.(value & flag & info ["f"; "force"] ~doc)
+
+let srcs =
+  let doc = "Source file(s) to copy." in
+  Arg.(non_empty & pos_left ~rev:true 0 file [] & info [] ~docv:"SOURCE" ~doc)
+
+let dest =
+  let doc = "Destination of the copy. Must be a directory if there is more
+             than one $(i,SOURCE)." in
+  Arg.(required & pos ~rev:true 0 (some string) None & info [] ~docv:"DEST"
+         ~doc)
+
+let cmd =
+  let doc = "copy files" in
+  let man_xrefs =
+    [ `Tool "mv"; `Tool "scp"; `Page (2, "umask"); `Page (7, "symlink") ]
+  in
+  let exits = Term.default_exits in
+  let man =
+    [ `S Manpage.s_bugs;
+      `P "Email them to <hehey at example.org>."; ]
+  in
+  Term.(ret (const cp $ verbose $ recurse $ force $ srcs $ dest)),
+  Term.info "cp" ~version:"%%VERSION%%" ~doc ~exits ~man ~man_xrefs
+
+let () = Term.(exit @@ eval cmd)
+]}
+
+{2:extail A [tail] command}
+
+We define the command line interface of a [tail] command with the
+synopsis:
+
+{[
+tail [OPTION]... [FILE]...
+]}
+
+The [--lines] option, whose value specifies the number of last lines
+to print, has a special syntax: a [+] prefix indicates that printing
+should start from that line number. In the program this is represented
+by the [loc] type. We define a custom [loc] {{!Arg.argconv}argument
+converter} for this option.
+
+The [--follow] option has an optional enumerated value. The argument
+converter [follow], created with {!Arg.enum}, parses the option value
+into the enumeration. By using {!Arg.some} and the [~vopt] argument of
+{!Arg.opt}, the term corresponding to the option [--follow] evaluates
+to [None] if [--follow] is absent from the command line, to [Some
+Descriptor] if present but without a value and to [Some v] if present
+with a value [v] specified.
+
+{[
+(* Implementation of the command, we just print the args. *)
+
+type loc = bool * int
+type verb = Verbose | Quiet
+type follow = Name | Descriptor
+
+let str = Printf.sprintf
+let opt_str sv = function None -> "None" | Some v -> str "Some(%s)" (sv v)
+let loc_str (rev, k) = if rev then str "%d" k else str "+%d" k
+let follow_str = function Name -> "name" | Descriptor -> "descriptor"
+let verb_str = function Verbose -> "verbose" | Quiet -> "quiet"
+
+let tail lines follow verb pid files =
+  Printf.printf "lines = %s\nfollow = %s\nverb = %s\npid = %s\nfiles = %s\n"
+    (loc_str lines) (opt_str follow_str follow) (verb_str verb)
+    (opt_str string_of_int pid) (String.concat ", " files)
+
+(* Command line interface *)
+
+open Cmdliner
+
+let lines =
+  let loc =
+    let parse s =
+      try
+        if s = "" || s.[0] <> '+' then Ok (true, int_of_string s) else
+        Ok (false, int_of_string (String.sub s 1 (String.length s - 1)))
+      with Failure _ -> Error (`Msg "unable to parse integer")
+    in
+    let print ppf p = Format.fprintf ppf "%s" (loc_str p) in
+    Arg.conv ~docv:"N" (parse, print)
+  in
+  Arg.(value & opt loc (true, 10) & info ["n"; "lines"] ~docv:"N"
+         ~doc:"Output the last $(docv) lines or use $(i,+)$(docv) to start
+               output after the $(i,N)-1th line.")
+
+let follow =
+  let doc = "Output appended data as the file grows. $(docv) specifies how the
+             file should be tracked, by its `name' or by its `descriptor'." in
+  let follow = Arg.enum ["name", Name; "descriptor", Descriptor] in
+  Arg.(value & opt (some follow) ~vopt:(Some Descriptor) None &
+       info ["f"; "follow"] ~docv:"ID" ~doc)
+
+let verb =
+  let doc = "Never output headers giving file names." in
+  let quiet = Quiet, Arg.info ["q"; "quiet"; "silent"] ~doc in
+  let doc = "Always output headers giving file names." in
+  let verbose = Verbose, Arg.info ["v"; "verbose"] ~doc in
+  Arg.(last & vflag_all [Quiet] [quiet; verbose])
+
+let pid =
+  let doc = "With -f, terminate after process $(docv) dies." in
+  Arg.(value & opt (some int) None & info ["pid"] ~docv:"PID" ~doc)
+
+let files = Arg.(value & (pos_all non_dir_file []) & info [] ~docv:"FILE")
+
+let cmd =
+  let doc = "display the last part of a file" in
+  let man = [
+    `S Manpage.s_description;
+    `P "$(tname) prints the last lines of each $(i,FILE) to standard output. If
+        no file is specified it reads standard input. The number of printed
+        lines can be specified with the $(b,-n) option.";
+    `S Manpage.s_bugs;
+    `P "Report them to <hehey at example.org>.";
+    `S Manpage.s_see_also;
+    `P "$(b,cat)(1), $(b,head)(1)" ]
+  in
+  Term.(const tail $ lines $ follow $ verb $ pid $ files),
+  Term.info "tail" ~version:"%%VERSION%%" ~doc ~exits:Term.default_exits ~man
+
+let () = Term.(exit @@ eval cmd)
+]}
+
+{2:exdarcs A [darcs] command}
+
+We define the command line interface of a [darcs] command with the
+synopsis:
+
+{[
+darcs [COMMAND] ...
+]}
+
+The [--debug], [-q], [-v] and [--prehook] options are available in
+each command. To avoid having to pass them individually to each
+command we gather them in a record of type [copts]. By lifting the
+record constructor [copts] into the term [copts_t] we get a term
+that we can pass to the commands to stand for an argument of type
+[copts]. These options are documented in a section called [COMMON
+OPTIONS]. Since we also want to put [--help] and [--version] in this
+section, the term information of commands makes judicious use of the
+[sdocs] parameter of {!Term.info}.
+
+The [help] command shows help about commands or other topics. The help
+shown for commands is generated by [Cmdliner] by making an appropriate
+use of {!Term.ret} on the lifted [help] function.
+
+If the program is invoked without a command we just want to show the
+help of the program as printed by [Cmdliner] with [--help]. This is
+done by the [default_cmd] term.
+
+{[
+(* Implementations, just print the args. *)
+
+type verb = Normal | Quiet | Verbose
+type copts = { debug : bool; verb : verb; prehook : string option }
+
+let str = Printf.sprintf
+let opt_str sv = function None -> "None" | Some v -> str "Some(%s)" (sv v)
+let opt_str_str = opt_str (fun s -> s)
+let verb_str = function
+  | Normal -> "normal" | Quiet -> "quiet" | Verbose -> "verbose"
+
+let pr_copts oc copts = Printf.fprintf oc
+    "debug = %B\nverbosity = %s\nprehook = %s\n"
+    copts.debug (verb_str copts.verb) (opt_str_str copts.prehook)
+
+let initialize copts repodir = Printf.printf
+    "%arepodir = %s\n" pr_copts copts repodir
+
+let record copts name email all ask_deps files = Printf.printf
+    "%aname = %s\nemail = %s\nall = %B\nask-deps = %B\nfiles = %s\n"
+    pr_copts copts (opt_str_str name) (opt_str_str email) all ask_deps
+    (String.concat ", " files)
+
+let help copts man_format cmds topic = match topic with
+| None -> `Help (`Pager, None) (* help about the program. *)
+| Some topic ->
+    let topics = "topics" :: "patterns" :: "environment" :: cmds in
+    let conv, _ = Cmdliner.Arg.enum (List.rev_map (fun s -> (s, s)) topics) in
+    match conv topic with
+    | `Error e -> `Error (false, e)
+    | `Ok t when t = "topics" -> List.iter print_endline topics; `Ok ()
+    | `Ok t when List.mem t cmds -> `Help (man_format, Some t)
+    | `Ok t ->
+        let page = (topic, 7, "", "", ""), [`S topic; `P "Say something";] in
+        `Ok (Cmdliner.Manpage.print man_format Format.std_formatter page)
+
+open Cmdliner
+
+(* Help sections common to all commands *)
+
+let help_secs = [
+ `S Manpage.s_common_options;
+ `P "These options are common to all commands.";
+ `S "MORE HELP";
+ `P "Use `$(mname) $(i,COMMAND) --help' for help on a single command.";`Noblank;
+ `P "Use `$(mname) help patterns' for help on patch matching."; `Noblank;
+ `P "Use `$(mname) help environment' for help on environment variables.";
+ `S Manpage.s_bugs; `P "Check bug reports at http://bugs.example.org.";]
+
+(* Options common to all commands *)
+
+let copts debug verb prehook = { debug; verb; prehook }
+let copts_t =
+  let docs = Manpage.s_common_options in
+  let debug =
+    let doc = "Give only debug output." in
+    Arg.(value & flag & info ["debug"] ~docs ~doc)
+  in
+  let verb =
+    let doc = "Suppress informational output." in
+    let quiet = Quiet, Arg.info ["q"; "quiet"] ~docs ~doc in
+    let doc = "Give verbose output." in
+    let verbose = Verbose, Arg.info ["v"; "verbose"] ~docs ~doc in
+    Arg.(last & vflag_all [Normal] [quiet; verbose])
+  in
+  let prehook =
+    let doc = "Specify command to run before this $(mname) command." in
+    Arg.(value & opt (some string) None & info ["prehook"] ~docs ~doc)
+  in
+  Term.(const copts $ debug $ verb $ prehook)
+
+(* Commands *)
+
+let initialize_cmd =
+  let repodir =
+    let doc = "Run the program in repository directory $(docv)." in
+    Arg.(value & opt file Filename.current_dir_name & info ["repodir"]
+           ~docv:"DIR" ~doc)
+  in
+  let doc = "make the current directory a repository" in
+  let exits = Term.default_exits in
+  let man = [
+    `S Manpage.s_description;
+    `P "Turns the current directory into a Darcs repository. Any
+       existing files and subdirectories become ...";
+    `Blocks help_secs; ]
+  in
+  Term.(const initialize $ copts_t $ repodir),
+  Term.info "initialize" ~doc ~sdocs:Manpage.s_common_options ~exits ~man
+
+let record_cmd =
+  let pname =
+    let doc = "Name of the patch." in
+    Arg.(value & opt (some string) None & info ["m"; "patch-name"] ~docv:"NAME"
+           ~doc)
+  in
+  let author =
+    let doc = "Specifies the author's identity." in
+    Arg.(value & opt (some string) None & info ["A"; "author"] ~docv:"EMAIL"
+           ~doc)
+  in
+  let all =
+    let doc = "Answer yes to all patches." in
+    Arg.(value & flag & info ["a"; "all"] ~doc)
+  in
+  let ask_deps =
+    let doc = "Ask for extra dependencies." in
+    Arg.(value & flag & info ["ask-deps"] ~doc)
+  in
+  let files = Arg.(value & (pos_all file) [] & info [] ~docv:"FILE or DIR") in
+  let doc = "create a patch from unrecorded changes" in
+  let exits = Term.default_exits in
+  let man =
+    [`S Manpage.s_description;
+     `P "Creates a patch from changes in the working tree. If you specify
+         a set of files ...";
+     `Blocks help_secs; ]
+  in
+  Term.(const record $ copts_t $ pname $ author $ all $ ask_deps $ files),
+  Term.info "record" ~doc ~sdocs:Manpage.s_common_options ~exits ~man
+
+let help_cmd =
+  let topic =
+    let doc = "The topic to get help on. `topics' lists the topics." in
+    Arg.(value & pos 0 (some string) None & info [] ~docv:"TOPIC" ~doc)
+  in
+  let doc = "display help about darcs and darcs commands" in
+  let man =
+    [`S Manpage.s_description;
+     `P "Prints help about darcs commands and other subjects...";
+     `Blocks help_secs; ]
+  in
+  Term.(ret
+          (const help $ copts_t $ Arg.man_format $ Term.choice_names $ topic)),
+  Term.info "help" ~doc ~exits:Term.default_exits ~man
+
+let default_cmd =
+  let doc = "a revision control system" in
+  let sdocs = Manpage.s_common_options in
+  let exits = Term.default_exits in
+  let man = help_secs in
+  Term.(ret (const (fun _ -> `Help (`Pager, None)) $ copts_t)),
+  Term.info "darcs" ~version:"%%VERSION%%" ~doc ~sdocs ~exits ~man
+
+let cmds = [initialize_cmd; record_cmd; help_cmd]
+
+let () = Term.(exit @@ eval_choice default_cmd cmds)
+]}
+*)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner.mllib b/tools/ocaml/duniverse/cmdliner/src/cmdliner.mllib
new file mode 100644
index 0000000000..f1ec5a3ad4
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner.mllib
@@ -0,0 +1,11 @@
+Cmdliner_suggest
+Cmdliner_trie
+Cmdliner_base
+Cmdliner_manpage
+Cmdliner_info
+Cmdliner_docgen
+Cmdliner_msg
+Cmdliner_cline
+Cmdliner_arg
+Cmdliner_term
+Cmdliner
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.ml
new file mode 100644
index 0000000000..589f2eb4ad
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.ml
@@ -0,0 +1,356 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let rev_compare n0 n1 = compare n1 n0
+
+(* Invalid_argument strings *)
+
+let err_not_opt = "Option argument without name"
+let err_not_pos = "Positional argument with a name"
+
+(* Documentation formatting helpers *)
+
+let strf = Printf.sprintf
+let doc_quote = Cmdliner_base.quote
+let doc_alts = Cmdliner_base.alts_str
+let doc_alts_enum ?quoted enum = doc_alts ?quoted (List.map fst enum)
+
+let str_of_pp pp v = pp Format.str_formatter v; Format.flush_str_formatter ()
+
+(* Argument converters *)
+
+type 'a parser = string -> [ `Ok of 'a | `Error of string ]
+type 'a printer = Format.formatter -> 'a -> unit
+
+type 'a conv = 'a parser * 'a printer
+type 'a converter = 'a conv
+
+let default_docv = "VALUE"
+let conv ?docv (parse, print) =
+  let parse s = match parse s with Ok v -> `Ok v | Error (`Msg e) -> `Error e in
+  parse, print
+
+let pconv ?docv conv = conv
+
+let conv_parser (parse, _) =
+  fun s -> match parse s with `Ok v -> Ok v | `Error e -> Error (`Msg e)
+
+let conv_printer (_, print) = print
+let conv_docv _ = default_docv
+
+let err_invalid s kind = `Msg (strf "invalid value '%s', expected %s" s kind)
+let parser_of_kind_of_string ~kind k_of_string =
+  fun s -> match k_of_string s with
+  | None -> Error (err_invalid s kind)
+  | Some v -> Ok v
+
+let some = Cmdliner_base.some
+
+(* Argument information *)
+
+type env = Cmdliner_info.env
+let env_var = Cmdliner_info.env
+
+type 'a t = 'a Cmdliner_term.t
+type info = Cmdliner_info.arg
+let info = Cmdliner_info.arg
+
+(* Arguments *)
+
+let ( & ) f x = f x
+
+let err e = Error (`Parse e)
+
+let parse_to_list parser s = match parser s with
+| `Ok v -> `Ok [v]
+| `Error _ as e -> e
+
+let try_env ei a parse ~absent = match Cmdliner_info.arg_env a with
+| None -> Ok absent
+| Some env ->
+    let var = Cmdliner_info.env_var env in
+    match Cmdliner_info.(eval_env_var ei var) with
+    | None -> Ok absent
+    | Some v ->
+        match parse v with
+        | `Ok v -> Ok v
+        | `Error e -> err (Cmdliner_msg.err_env_parse env ~err:e)
+
+let arg_to_args = Cmdliner_info.Args.singleton
+let list_to_args f l =
+  let add acc v = Cmdliner_info.Args.add (f v) acc in
+  List.fold_left add Cmdliner_info.Args.empty l
+
+let flag a =
+  if Cmdliner_info.arg_is_pos a then invalid_arg err_not_opt else
+  let convert ei cl = match Cmdliner_cline.opt_arg cl a with
+  | [] -> try_env ei a Cmdliner_base.env_bool_parse ~absent:false
+  | [_, _, None] -> Ok true
+  | [_, f, Some v] -> err (Cmdliner_msg.err_flag_value f v)
+  | (_, f, _) :: (_ ,g, _) :: _  -> err (Cmdliner_msg.err_opt_repeated f g)
+  in
+  arg_to_args a, convert
+
+let flag_all a =
+  if Cmdliner_info.arg_is_pos a then invalid_arg err_not_opt else
+  let a = Cmdliner_info.arg_make_all_opts a in
+  let convert ei cl = match Cmdliner_cline.opt_arg cl a with
+  | [] ->
+      try_env ei a (parse_to_list Cmdliner_base.env_bool_parse) ~absent:[]
+  | l ->
+      try
+        let truth (_, f, v) = match v with
+        | None -> true
+        | Some v -> failwith (Cmdliner_msg.err_flag_value f v)
+        in
+        Ok (List.rev_map truth l)
+      with Failure e -> err e
+  in
+  arg_to_args a, convert
+
+let vflag v l =
+  let convert _ cl =
+    let rec aux fv = function
+    | (v, a) :: rest ->
+        begin match Cmdliner_cline.opt_arg cl a with
+        | [] -> aux fv rest
+        | [_, f, None] ->
+            begin match fv with
+            | None -> aux (Some (f, v)) rest
+            | Some (g, _) -> failwith (Cmdliner_msg.err_opt_repeated g f)
+            end
+        | [_, f, Some v] -> failwith (Cmdliner_msg.err_flag_value f v)
+        | (_, f, _) :: (_, g, _) :: _ ->
+            failwith (Cmdliner_msg.err_opt_repeated g f)
+        end
+    | [] -> match fv with None -> v | Some (_, v) -> v
+    in
+    try Ok (aux None l) with Failure e -> err e
+  in
+  let flag (_, a) =
+    if Cmdliner_info.arg_is_pos a then invalid_arg err_not_opt else a
+  in
+  list_to_args flag l, convert
+
+let vflag_all v l =
+  let convert _ cl =
+    let rec aux acc = function
+    | (fv, a) :: rest ->
+        begin match Cmdliner_cline.opt_arg cl a with
+        | [] -> aux acc rest
+        | l ->
+            let fval (k, f, v) = match v with
+            | None -> (k, fv)
+            | Some v -> failwith (Cmdliner_msg.err_flag_value f v)
+            in
+            aux (List.rev_append (List.rev_map fval l) acc) rest
+        end
+    | [] ->
+        if acc = [] then v else List.rev_map snd (List.sort rev_compare acc)
+    in
+    try Ok (aux [] l) with Failure e -> err e
+  in
+  let flag (_, a) =
+    if Cmdliner_info.arg_is_pos a then invalid_arg err_not_opt else
+    Cmdliner_info.arg_make_all_opts a
+  in
+  list_to_args flag l, convert
+
+let parse_opt_value parse f v = match parse v with
+| `Ok v -> v
+| `Error e -> failwith (Cmdliner_msg.err_opt_parse f e)
+
+let opt ?vopt (parse, print) v a =
+  if Cmdliner_info.arg_is_pos a then invalid_arg err_not_opt else
+  let absent = Cmdliner_info.Val (lazy (str_of_pp print v)) in
+  let kind = match vopt with
+  | None -> Cmdliner_info.Opt
+  | Some dv -> Cmdliner_info.Opt_vopt (str_of_pp print dv)
+  in
+  let a = Cmdliner_info.arg_make_opt ~absent ~kind a in
+  let convert ei cl = match Cmdliner_cline.opt_arg cl a with
+  | [] -> try_env ei a parse ~absent:v
+  | [_, f, Some v] ->
+      (try Ok (parse_opt_value parse f v) with Failure e -> err e)
+  | [_, f, None] ->
+      begin match vopt with
+      | None -> err (Cmdliner_msg.err_opt_value_missing f)
+      | Some optv -> Ok optv
+      end
+  | (_, f, _) :: (_, g, _) :: _ -> err (Cmdliner_msg.err_opt_repeated g f)
+  in
+  arg_to_args a, convert
+
+let opt_all ?vopt (parse, print) v a =
+  if Cmdliner_info.arg_is_pos a then invalid_arg err_not_opt else
+  let absent = Cmdliner_info.Val (lazy "") in
+  let kind = match vopt with
+  | None -> Cmdliner_info.Opt
+  | Some dv -> Cmdliner_info.Opt_vopt (str_of_pp print dv)
+  in
+  let a = Cmdliner_info.arg_make_opt_all ~absent ~kind a in
+  let convert ei cl = match Cmdliner_cline.opt_arg cl a with
+  | [] -> try_env ei a (parse_to_list parse) ~absent:v
+  | l ->
+      let parse (k, f, v) = match v with
+      | Some v -> (k, parse_opt_value parse f v)
+      | None -> match vopt with
+      | None -> failwith (Cmdliner_msg.err_opt_value_missing f)
+      | Some dv -> (k, dv)
+      in
+      try Ok (List.rev_map snd
+                (List.sort rev_compare (List.rev_map parse l))) with
+      | Failure e -> err e
+  in
+  arg_to_args a, convert
+
+(* Positional arguments *)
+
+let parse_pos_value parse a v = match parse v with
+| `Ok v -> v
+| `Error e -> failwith (Cmdliner_msg.err_pos_parse a e)
+
+let pos ?(rev = false) k (parse, print) v a =
+  if Cmdliner_info.arg_is_opt a then invalid_arg err_not_pos else
+  let absent = Cmdliner_info.Val (lazy (str_of_pp print v)) in
+  let pos = Cmdliner_info.pos ~rev ~start:k ~len:(Some 1) in
+  let a = Cmdliner_info.arg_make_pos_abs ~absent ~pos a in
+  let convert ei cl = match Cmdliner_cline.pos_arg cl a with
+  | [] -> try_env ei a parse ~absent:v
+  | [v] ->
+      (try Ok (parse_pos_value parse a v) with Failure e -> err e)
+  | _ -> assert false
+  in
+  arg_to_args a, convert
+
+let pos_list pos (parse, _) v a =
+  if Cmdliner_info.arg_is_opt a then invalid_arg err_not_pos else
+  let a = Cmdliner_info.arg_make_pos pos a in
+  let convert ei cl = match Cmdliner_cline.pos_arg cl a with
+  | [] -> try_env ei a (parse_to_list parse) ~absent:v
+  | l ->
+      try Ok (List.rev (List.rev_map (parse_pos_value parse a) l)) with
+      | Failure e -> err e
+  in
+  arg_to_args a, convert
+
+let all = Cmdliner_info.pos ~rev:false ~start:0 ~len:None
+let pos_all c v a = pos_list all c v a
+
+let pos_left ?(rev = false) k =
+  let start = if rev then k + 1 else 0 in
+  let len = if rev then None else Some k in
+  pos_list (Cmdliner_info.pos ~rev ~start ~len)
+
+let pos_right ?(rev = false) k =
+  let start = if rev then 0 else k + 1 in
+  let len = if rev then Some k else None in
+  pos_list (Cmdliner_info.pos ~rev ~start ~len)
+
+(* Arguments as terms *)
+
+let absent_error args =
+  let make_req a acc =
+    let req_a = Cmdliner_info.arg_make_req a in
+    Cmdliner_info.Args.add req_a acc
+  in
+  Cmdliner_info.Args.fold make_req args Cmdliner_info.Args.empty
+
+let value a = a
+
+let err_arg_missing args =
+  err @@ Cmdliner_msg.err_arg_missing (Cmdliner_info.Args.choose args)
+
+let required (args, convert) =
+  let args = absent_error args in
+  let convert ei cl = match convert ei cl with
+  | Ok (Some v) -> Ok v
+  | Ok None -> err_arg_missing args
+  | Error _ as e -> e
+  in
+  args, convert
+
+let non_empty (al, convert) =
+  let args = absent_error al in
+  let convert ei cl = match convert ei cl with
+  | Ok [] -> err_arg_missing args
+  | Ok l -> Ok l
+  | Error _ as e -> e
+  in
+  args, convert
+
+let last (args, convert) =
+  let convert ei cl = match convert ei cl with
+  | Ok [] -> err_arg_missing args
+  | Ok l -> Ok (List.hd (List.rev l))
+  | Error _ as e -> e
+  in
+  args, convert
+
+(* Predefined arguments *)
+
+let man_fmts =
+  ["auto", `Auto; "pager", `Pager; "groff", `Groff; "plain", `Plain]
+
+let man_fmt_docv = "FMT"
+let man_fmts_enum = Cmdliner_base.enum man_fmts
+let man_fmts_alts = doc_alts_enum man_fmts
+let man_fmts_doc kind =
+  strf "Show %s in format $(docv). The value $(docv) must be %s. With `auto',
+        the format is `pager' or `plain' whenever the $(b,TERM) env var is
+        `dumb' or undefined."
+    kind man_fmts_alts
+
+let man_format =
+  let doc = man_fmts_doc "output" in
+  let docv = man_fmt_docv in
+  value & opt man_fmts_enum `Pager & info ["man-format"] ~docv ~doc
+
+let stdopt_version ~docs =
+  value & flag & info ["version"] ~docs ~doc:"Show version information."
+
+let stdopt_help ~docs =
+  let doc = man_fmts_doc "this help" in
+  let docv = man_fmt_docv in
+  value & opt ~vopt:(Some `Auto) (some man_fmts_enum) None &
+  info ["help"] ~docv ~docs ~doc
+
+(* Predefined converters. *)
+
+let bool = Cmdliner_base.bool
+let char = Cmdliner_base.char
+let int = Cmdliner_base.int
+let nativeint = Cmdliner_base.nativeint
+let int32 = Cmdliner_base.int32
+let int64 = Cmdliner_base.int64
+let float = Cmdliner_base.float
+let string = Cmdliner_base.string
+let enum = Cmdliner_base.enum
+let file = Cmdliner_base.file
+let dir = Cmdliner_base.dir
+let non_dir_file = Cmdliner_base.non_dir_file
+let list = Cmdliner_base.list
+let array = Cmdliner_base.array
+let pair = Cmdliner_base.pair
+let t2 = Cmdliner_base.t2
+let t3 = Cmdliner_base.t3
+let t4 = Cmdliner_base.t4
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.mli
new file mode 100644
index 0000000000..725f923b8e
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_arg.mli
@@ -0,0 +1,111 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** Command line arguments as terms. *)
+
+type 'a parser = string -> [ `Ok of 'a | `Error of string ]
+type 'a printer = Format.formatter -> 'a -> unit
+type 'a conv = 'a parser * 'a printer
+type 'a converter = 'a conv
+
+val conv :
+  ?docv:string -> (string -> ('a, [`Msg of string]) result) * 'a printer ->
+  'a conv
+
+val pconv : ?docv:string -> 'a parser * 'a printer -> 'a conv
+val conv_parser : 'a conv -> (string -> ('a, [`Msg of string]) result)
+val conv_printer : 'a conv -> 'a printer
+val conv_docv : 'a conv -> string
+
+val parser_of_kind_of_string :
+  kind:string -> (string -> 'a option) ->
+  (string -> ('a, [`Msg of string]) result)
+
+val some : ?none:string -> 'a converter -> 'a option converter
+
+type env = Cmdliner_info.env
+val env_var : ?docs:string -> ?doc:string -> string -> env
+
+type 'a t = 'a Cmdliner_term.t
+
+type info
+val info :
+  ?docs:string -> ?docv:string -> ?doc:string -> ?env:env -> string list -> info
+
+val ( & ) : ('a -> 'b) -> 'a -> 'b
+
+val flag : info -> bool t
+val flag_all : info -> bool list t
+val vflag : 'a -> ('a * info) list -> 'a t
+val vflag_all : 'a list -> ('a * info) list -> 'a list t
+val opt : ?vopt:'a -> 'a converter -> 'a -> info -> 'a t
+val opt_all : ?vopt:'a -> 'a converter -> 'a list -> info -> 'a list t
+
+val pos : ?rev:bool -> int -> 'a converter -> 'a -> info -> 'a t
+val pos_all : 'a converter -> 'a list -> info -> 'a list t
+val pos_left : ?rev:bool -> int -> 'a converter -> 'a list -> info -> 'a list t
+val pos_right : ?rev:bool -> int -> 'a converter -> 'a list -> info -> 'a list t
+
+(** {1 As terms} *)
+
+val value : 'a t -> 'a Cmdliner_term.t
+val required : 'a option t -> 'a Cmdliner_term.t
+val non_empty : 'a list t -> 'a list Cmdliner_term.t
+val last : 'a list t -> 'a Cmdliner_term.t
+
+(** {1 Predefined arguments} *)
+
+val man_format : Cmdliner_manpage.format Cmdliner_term.t
+val stdopt_version : docs:string -> bool Cmdliner_term.t
+val stdopt_help : docs:string -> Cmdliner_manpage.format option Cmdliner_term.t
+
+(** {1 Converters} *)
+
+val bool : bool converter
+val char : char converter
+val int : int converter
+val nativeint : nativeint converter
+val int32 : int32 converter
+val int64 : int64 converter
+val float : float converter
+val string : string converter
+val enum : (string * 'a) list -> 'a converter
+val file : string converter
+val dir : string converter
+val non_dir_file : string converter
+val list : ?sep:char -> 'a converter -> 'a list converter
+val array : ?sep:char -> 'a converter -> 'a array converter
+val pair : ?sep:char -> 'a converter -> 'b converter -> ('a * 'b) converter
+val t2 : ?sep:char -> 'a converter -> 'b converter -> ('a * 'b) converter
+
+val t3 :
+  ?sep:char -> 'a converter -> 'b converter -> 'c converter ->
+  ('a * 'b * 'c) converter
+
+val t4 :
+  ?sep:char -> 'a converter -> 'b converter -> 'c converter -> 'd converter ->
+  ('a * 'b * 'c * 'd) converter
+
+val doc_quote : string -> string
+val doc_alts : ?quoted:bool -> string list -> string
+val doc_alts_enum : ?quoted:bool -> (string * 'a) list -> string
+
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_base.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_base.ml
new file mode 100644
index 0000000000..24ad20c65f
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_base.ml
@@ -0,0 +1,302 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(* Invalid argument strings *)
+
+let err_empty_list = "empty list"
+let err_incomplete_enum = "Incomplete enumeration for the type"
+
+(* Formatting tools *)
+
+let strf = Printf.sprintf
+let pp = Format.fprintf
+let pp_sp = Format.pp_print_space
+let pp_str = Format.pp_print_string
+let pp_char = Format.pp_print_char
+let pp_text = Format.pp_print_text
+let pp_lines ppf s =
+  let rec stop_at sat ~start ~max s =
+    if start > max then start else
+    if sat s.[start] then start else
+    stop_at sat ~start:(start + 1) ~max s
+  in
+  let sub s start stop ~max =
+    if start = stop then "" else
+    if start = 0 && stop > max then s else
+    String.sub s start (stop - start)
+  in
+  let is_nl c = c = '\n' in
+  let max = String.length s - 1 in
+  let rec loop start s = match stop_at is_nl ~start ~max s with
+  | stop when stop > max -> Format.pp_print_string ppf (sub s start stop ~max)
+  | stop ->
+      Format.pp_print_string ppf (sub s start stop ~max);
+      Format.pp_force_newline ppf ();
+      loop (stop + 1) s
+  in
+  loop 0 s
+
+let pp_tokens ~spaces ppf s = (* collapse whitespace; break hints if [spaces] *)
+  let is_space = function ' ' | '\n' | '\r' | '\t' -> true | _ -> false in
+  let i_max = String.length s - 1 in
+  let flush start stop = pp_str ppf (String.sub s start (stop - start + 1)) in
+  let rec skip_white i =
+    if i > i_max then i else
+    if is_space s.[i] then skip_white (i + 1) else i
+  in
+  let rec loop start i =
+    if i > i_max then flush start i_max else
+    if not (is_space s.[i]) then loop start (i + 1) else
+    let next_start = skip_white i in
+    (flush start (i - 1); if spaces then pp_sp ppf () else pp_char ppf ' ';
+     if next_start > i_max then () else loop next_start next_start)
+  in
+  loop 0 0
+
+(* Converter (end-user) error messages *)
+
+let quote s = strf "`%s'" s
+let alts_str ?(quoted = true) alts =
+  let quote = if quoted then quote else (fun s -> s) in
+  match alts with
+  | [] -> invalid_arg err_empty_list
+  | [a] -> (quote a)
+  | [a; b] -> strf "either %s or %s" (quote a) (quote b)
+  | alts ->
+      let rev_alts = List.rev alts in
+      strf "one of %s or %s"
+        (String.concat ", " (List.rev_map quote (List.tl rev_alts)))
+        (quote (List.hd rev_alts))
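The phrasing produced by `alts_str` can be seen in a standalone sketch (logic copied from the function above, with the quoting helper inlined so the snippet runs on its own):

```ocaml
(* Self-contained copy of [alts_str] for illustration. *)
let quote s = Printf.sprintf "`%s'" s

let alts_str ?(quoted = true) alts =
  let quote = if quoted then quote else fun s -> s in
  match alts with
  | [] -> invalid_arg "empty list"
  | [a] -> quote a
  | [a; b] -> Printf.sprintf "either %s or %s" (quote a) (quote b)
  | alts ->
      let rev_alts = List.rev alts in
      Printf.sprintf "one of %s or %s"
        (String.concat ", " (List.rev_map quote (List.tl rev_alts)))
        (quote (List.hd rev_alts))

let () =
  assert (alts_str ["true"; "false"] = "either `true' or `false'");
  assert (alts_str ["a"; "b"; "c"] = "one of `a', `b' or `c'")
```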
+
+let err_multi_def ~kind name doc v v' =
+  strf "%s %s defined twice (doc strings are '%s' and '%s')"
+    kind name (doc v) (doc v')
+
+let err_ambiguous ~kind s ~ambs =
+  strf "%s %s ambiguous and could be %s" kind (quote s) (alts_str ambs)
+
+let err_unknown ?(hints = []) ~kind v =
+  let did_you_mean s = strf ", did you mean %s ?" s in
+  let hints = match hints with [] -> "." | hs -> did_you_mean (alts_str hs) in
+  strf "unknown %s %s%s" kind (quote v) hints
+
+let err_no kind s = strf "no %s %s" (quote s) kind
+let err_not_dir s = strf "%s is not a directory" (quote s)
+let err_is_dir s = strf "%s is a directory" (quote s)
+let err_element kind s exp =
+  strf "invalid element in %s (`%s'): %s" kind s exp
+
+let err_invalid kind s exp = strf "invalid %s %s, %s" kind (quote s) exp
+let err_invalid_val = err_invalid "value"
+let err_sep_miss sep s =
+  err_invalid_val s (strf "missing a `%c' separator" sep)
+
+(* Converters *)
+
+type 'a parser = string -> [ `Ok of 'a | `Error of string ]
+type 'a printer = Format.formatter -> 'a -> unit
+type 'a conv = 'a parser * 'a printer
+
+let some ?(none = "") (parse, print) =
+  let parse s = match parse s with
+  | `Ok v -> `Ok (Some v)
+  | `Error _ as e -> e
+  in
+  let print ppf v = match v with
+  | None -> Format.pp_print_string ppf none
+  | Some v -> print ppf v
+  in
+  parse, print
+
+let bool =
+  let parse s = try `Ok (bool_of_string s) with
+  | Invalid_argument _ ->
+      `Error (err_invalid_val s (alts_str ["true"; "false"]))
+  in
+  parse, Format.pp_print_bool
+
+let char =
+  let parse s = match String.length s = 1 with
+  | true -> `Ok s.[0]
+  | false -> `Error (err_invalid_val s "expected a character")
+  in
+  parse, pp_char
+
+let parse_with t_of_str exp s =
+  try `Ok (t_of_str s) with Failure _ -> `Error (err_invalid_val s exp)
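How `parse_with` turns a `Failure`-raising conversion into a polymorphic-variant result can be sketched standalone (the error message here is simplified; the real code formats it through `err_invalid_val`):

```ocaml
(* Simplified copy of [parse_with]: wrap a [Failure]-raising parser. *)
let parse_with t_of_str exp s =
  try `Ok (t_of_str s) with Failure _ -> `Error ("invalid value: " ^ exp)

let () =
  assert (parse_with int_of_string "expected an integer" "42" = `Ok 42);
  assert (parse_with int_of_string "expected an integer" "4x"
          = `Error "invalid value: expected an integer")
```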
+
+let int =
+  parse_with int_of_string "expected an integer", Format.pp_print_int
+
+let int32 =
+  parse_with Int32.of_string "expected a 32-bit integer",
+  (fun ppf -> pp ppf "%ld")
+
+let int64 =
+  parse_with Int64.of_string "expected a 64-bit integer",
+  (fun ppf -> pp ppf "%Ld")
+
+let nativeint =
+  parse_with Nativeint.of_string "expected a processor-native integer",
+  (fun ppf -> pp ppf "%nd")
+
+let float =
+  parse_with float_of_string "expected a floating point number",
+  Format.pp_print_float
+
+let string = (fun s -> `Ok s), pp_str
+let enum sl =
+  if sl = [] then invalid_arg err_empty_list else
+  let t = Cmdliner_trie.of_list sl in
+  let parse s = match Cmdliner_trie.find t s with
+  | `Ok _ as r -> r
+  | `Ambiguous ->
+      let ambs = List.sort compare (Cmdliner_trie.ambiguities t s) in
+      `Error (err_ambiguous "enum value" s ambs)
+  | `Not_found ->
+      let alts = List.rev (List.rev_map (fun (s, _) -> s) sl) in
+      `Error (err_invalid_val s ("expected " ^ (alts_str alts)))
+  in
+  let print ppf v =
+    let sl_inv = List.rev_map (fun (s,v) -> (v,s)) sl in
+    try pp_str ppf (List.assoc v sl_inv)
+    with Not_found -> invalid_arg err_incomplete_enum
+  in
+  parse, print
+
+let file =
+  let parse s = match Sys.file_exists s with
+  | true -> `Ok s
+  | false -> `Error (err_no "file or directory" s)
+  in
+  parse, pp_str
+
+let dir =
+  let parse s = match Sys.file_exists s with
+  | true -> if Sys.is_directory s then `Ok s else `Error (err_not_dir s)
+  | false -> `Error (err_no "directory" s)
+  in
+  parse, pp_str
+
+let non_dir_file =
+  let parse s = match Sys.file_exists s with
+  | true -> if not (Sys.is_directory s) then `Ok s else `Error (err_is_dir s)
+  | false -> `Error (err_no "file" s)
+  in
+  parse, pp_str
+
+let split_and_parse sep parse s = (* raises [Failure] *)
+  let parse sub = match parse sub with
+  | `Error e -> failwith e | `Ok v -> v
+  in
+  let rec split accum j =
+    let i = try String.rindex_from s j sep with Not_found -> -1 in
+    if (i = -1) then
+      let p = String.sub s 0 (j + 1) in
+      if p <> "" then parse p :: accum else accum
+    else
+    let p = String.sub s (i + 1) (j - i) in
+    let accum' = if p <> "" then parse p :: accum else accum in
+    split accum' (i - 1)
+  in
+  split [] (String.length s - 1)
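The right-to-left splitting loop above, stripped of the per-element parsing, behaves like the following standalone sketch (empty fields between separators are dropped, as in the original):

```ocaml
(* Standalone sketch of the split in [split_and_parse], parsing omitted. *)
let split_on sep s =
  let rec split accum j =
    let i = try String.rindex_from s j sep with Not_found -> -1 in
    if i = -1 then
      let p = String.sub s 0 (j + 1) in
      if p <> "" then p :: accum else accum
    else
      let p = String.sub s (i + 1) (j - i) in
      let accum' = if p <> "" then p :: accum else accum in
      split accum' (i - 1)
  in
  split [] (String.length s - 1)

let () =
  assert (split_on ',' "a,b,c" = ["a"; "b"; "c"]);
  assert (split_on ',' "a,,c" = ["a"; "c"])
```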
+
+let list ?(sep = ',') (parse, pp_e) =
+  let parse s = try `Ok (split_and_parse sep parse s) with
+  | Failure e -> `Error (err_element "list" s e)
+  in
+  let rec print ppf = function
+  | v :: l -> pp_e ppf v; if (l <> []) then (pp_char ppf sep; print ppf l)
+  | [] -> ()
+  in
+  parse, print
+
+let array ?(sep = ',') (parse, pp_e) =
+  let parse s = try `Ok (Array.of_list (split_and_parse sep parse s)) with
+  | Failure e -> `Error (err_element "array" s e)
+  in
+  let print ppf v =
+    let max = Array.length v - 1 in
+    for i = 0 to max do pp_e ppf v.(i); if i <> max then pp_char ppf sep done
+  in
+  parse, print
+
+let split_left sep s =
+  try
+    let i = String.index s sep in
+    let len = String.length s in
+    Some ((String.sub s 0 i), (String.sub s (i + 1) (len - i - 1)))
+  with Not_found -> None
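`split_left` cuts at the first separator only, which is what lets `pair`, `t3` and `t4` below peel fields off one at a time; a self-contained check:

```ocaml
(* Self-contained copy of [split_left]: cut at the first separator. *)
let split_left sep s =
  try
    let i = String.index s sep in
    let len = String.length s in
    Some (String.sub s 0 i, String.sub s (i + 1) (len - i - 1))
  with Not_found -> None

let () =
  assert (split_left ',' "a,b,c" = Some ("a", "b,c"));
  assert (split_left ',' "abc" = None)
```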
+
+let pair ?(sep = ',') (pa0, pr0) (pa1, pr1) =
+  let parser s = match split_left sep s with
+  | None -> `Error (err_sep_miss sep s)
+  | Some (v0, v1) ->
+      match pa0 v0, pa1 v1 with
+      | `Ok v0, `Ok v1 -> `Ok (v0, v1)
+      | `Error e, _ | _, `Error e -> `Error (err_element "pair" s e)
+  in
+  let printer ppf (v0, v1) = pp ppf "%a%c%a" pr0 v0 sep pr1 v1 in
+  parser, printer
+
+let t2 = pair
+let t3 ?(sep = ',') (pa0, pr0) (pa1, pr1) (pa2, pr2) =
+  let parse s = match split_left sep s with
+  | None -> `Error (err_sep_miss sep s)
+  | Some (v0, s) ->
+      match split_left sep s with
+      | None -> `Error (err_sep_miss sep s)
+      | Some (v1, v2) ->
+          match pa0 v0, pa1 v1, pa2 v2 with
+          | `Ok v0, `Ok v1, `Ok v2 -> `Ok (v0, v1, v2)
+          | `Error e, _, _ | _, `Error e, _ | _, _, `Error e ->
+              `Error (err_element "triple" s e)
+  in
+  let print ppf (v0, v1, v2) =
+    pp ppf "%a%c%a%c%a" pr0 v0 sep pr1 v1 sep pr2 v2
+  in
+  parse, print
+
+let t4 ?(sep = ',') (pa0, pr0) (pa1, pr1) (pa2, pr2) (pa3, pr3) =
+  let parse s = match split_left sep s with
+  | None -> `Error (err_sep_miss sep s)
+  | Some (v0, s) ->
+      match split_left sep s with
+      | None -> `Error (err_sep_miss sep s)
+      | Some (v1, s) ->
+          match split_left sep s with
+          | None -> `Error (err_sep_miss sep s)
+          | Some (v2, v3) ->
+              match pa0 v0, pa1 v1, pa2 v2, pa3 v3 with
+              | `Ok v0, `Ok v1, `Ok v2, `Ok v3 -> `Ok (v0, v1, v2, v3)
+              | `Error e, _, _, _ | _, `Error e, _, _ | _, _, `Error e, _
+              | _, _, _, `Error e -> `Error (err_element "quadruple" s e)
+  in
+  let print ppf (v0, v1, v2, v3) =
+    pp ppf "%a%c%a%c%a%c%a" pr0 v0 sep pr1 v1 sep pr2 v2 sep pr3 v3
+  in
+  parse, print
+
+let env_bool_parse s = match String.lowercase_ascii s with
+| "" | "false" | "no" | "n" | "0" -> `Ok false
+| "true" | "yes" | "y" | "1" -> `Ok true
+| s -> `Error (err_invalid_val s (alts_str ["true"; "yes"; "false"; "no"]))
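The case-insensitive boolean parsing above can be exercised standalone (the error branch is simplified here; the real code goes through `err_invalid_val`):

```ocaml
(* Simplified copy of [env_bool_parse] for illustration. *)
let env_bool_parse s = match String.lowercase_ascii s with
| "" | "false" | "no" | "n" | "0" -> `Ok false
| "true" | "yes" | "y" | "1" -> `Ok true
| s -> `Error ("invalid boolean: " ^ s)

let () =
  assert (env_bool_parse "Yes" = `Ok true);
  assert (env_bool_parse "0" = `Ok false);
  assert (env_bool_parse "" = `Ok false)
```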
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_base.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_base.mli
new file mode 100644
index 0000000000..5c54ee01f3
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_base.mli
@@ -0,0 +1,68 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** A few helpful base definitions. *)
+
+(** {1:fmt Formatting helpers} *)
+
+val pp_text : Format.formatter -> string -> unit
+val pp_lines : Format.formatter -> string -> unit
+val pp_tokens : spaces:bool -> Format.formatter -> string -> unit
+
+(** {1:err Error message helpers} *)
+
+val quote : string -> string
+val alts_str : ?quoted:bool -> string list -> string
+val err_ambiguous : kind:string -> string -> ambs:string list -> string
+val err_unknown : ?hints:string list -> kind:string -> string -> string
+val err_multi_def :
+  kind:string -> string -> ('b -> string) -> 'b -> 'b -> string
+
+(** {1:conv Textual OCaml value converters} *)
+
+type 'a parser = string -> [ `Ok of 'a | `Error of string ]
+type 'a printer = Format.formatter -> 'a -> unit
+type 'a conv = 'a parser * 'a printer
+
+val some : ?none:string -> 'a conv -> 'a option conv
+val bool : bool conv
+val char : char conv
+val int : int conv
+val nativeint : nativeint conv
+val int32 : int32 conv
+val int64 : int64 conv
+val float : float conv
+val string : string conv
+val enum : (string * 'a) list -> 'a conv
+val file : string conv
+val dir : string conv
+val non_dir_file : string conv
+val list : ?sep:char -> 'a conv -> 'a list conv
+val array : ?sep:char -> 'a conv -> 'a array conv
+val pair : ?sep:char -> 'a conv -> 'b conv -> ('a * 'b) conv
+val t2 : ?sep:char -> 'a conv -> 'b conv -> ('a * 'b) conv
+val t3 : ?sep:char -> 'a conv -> 'b conv -> 'c conv -> ('a * 'b * 'c) conv
+val t4 :
+  ?sep:char -> 'a conv -> 'b conv -> 'c conv -> 'd conv ->
+  ('a * 'b * 'c * 'd) conv
+
+val env_bool_parse : bool parser
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.ml
new file mode 100644
index 0000000000..e305d398c2
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.ml
@@ -0,0 +1,199 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(* A command line stores pre-parsed information about the command
+   line's arguments in a more structured way. Given the
+   Cmdliner_info.arg values mentioned in a term and Sys.argv
+   (without exec name) we parse the command line into a map of
+   Cmdliner_info.arg values to [arg] values (see below). This map is used by
+   the term's closures to retrieve and convert command line arguments
+   (see the Cmdliner_arg module). *)
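The design described in this comment (one pass over argv filling a map from argument info to raw values, positionals kept aside) can be sketched with a toy version using plain string keys instead of `Cmdliner_info.arg` (hypothetical simplified types, not the library's API):

```ocaml
(* Toy sketch of the map-building pass described above. *)
module M = Map.Make (String)

let parse argv =
  let is_opt s = String.length s > 1 && s.[0] = '-' in
  let rec loop cl pos = function
  | [] -> cl, List.rev pos
  | s :: rest when is_opt s ->
      (match rest with
       | v :: rest' when not (is_opt v) -> loop (M.add s v cl) pos rest'
       | _ -> loop (M.add s "" cl) pos rest)
  | s :: rest -> loop cl (s :: pos) rest
  in
  loop M.empty [] argv

let () =
  let opts, pos = parse ["-o"; "out.txt"; "input"] in
  assert (M.find "-o" opts = "out.txt");
  assert (pos = ["input"])
```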
+
+let err_multi_opt_name_def name a a' =
+  Cmdliner_base.err_multi_def
+    ~kind:"option name" name Cmdliner_info.arg_doc a a'
+
+module Amap = Map.Make (Cmdliner_info.Arg)
+
+type arg =      (* unconverted argument data as found on the command line. *)
+| O of (int * string * (string option)) list (* (pos, name, value) of opt. *)
+| P of string list
+
+type t = arg Amap.t  (* command line, maps arg_infos to arg value. *)
+
+let get_arg cl a = try Amap.find a cl with Not_found -> assert false
+let opt_arg cl a = match get_arg cl a with O l -> l | _ -> assert false
+let pos_arg cl a = match get_arg cl a with P l -> l | _ -> assert false
+let actual_args cl a = match get_arg cl a with
+| P args -> args
+| O l ->
+    let extract_args (_pos, name, value) =
+      name :: (match value with None -> [] | Some v -> [v])
+    in
+    List.concat (List.map extract_args l)
+
+let arg_info_indexes args =
+  (* from [args] returns a trie mapping the names of optional arguments to
+     their arg_info, a list with all arg_info for positional arguments and
+     a cmdline mapping each arg_info to an empty [arg]. *)
+  let rec loop optidx posidx cl = function
+  | [] -> optidx, posidx, cl
+  | a :: l ->
+      match Cmdliner_info.arg_is_pos a with
+      | true -> loop optidx (a :: posidx) (Amap.add a (P []) cl) l
+      | false ->
+          let add t name = match Cmdliner_trie.add t name a with
+          | `New t -> t
+          | `Replaced (a', _) -> invalid_arg (err_multi_opt_name_def name a a')
+          in
+          let names = Cmdliner_info.arg_opt_names a in
+          let optidx = List.fold_left add optidx names in
+          loop optidx posidx (Amap.add a (O []) cl) l
+  in
+  loop Cmdliner_trie.empty [] Amap.empty (Cmdliner_info.Args.elements args)
+
+(* Optional argument parsing *)
+
+let is_opt s = String.length s > 1 && s.[0] = '-'
+let is_short_opt s = String.length s = 2 && s.[0] = '-'
+
+let parse_opt_arg s = (* (name, value) of opt arg, assert len > 1. *)
+  let l = String.length s in
+  if s.[1] <> '-' then (* short opt *)
+    if l = 2 then s, None else
+    String.sub s 0 2, Some (String.sub s 2 (l - 2)) (* with glued opt arg *)
+  else try (* long opt *)
+    let i = String.index s '=' in
+    String.sub s 0 i, Some (String.sub s (i + 1) (l - i - 1))
+  with Not_found -> s, None
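The token splitting rules above (glued short-option arguments, `--name=value` long options) can be checked with a standalone copy; callers guarantee the token has length at least 2:

```ocaml
(* Self-contained copy of [parse_opt_arg]; inputs must have length > 1. *)
let parse_opt_arg s =
  let l = String.length s in
  if s.[1] <> '-' then (* short opt *)
    if l = 2 then s, None
    else String.sub s 0 2, Some (String.sub s 2 (l - 2))
  else try (* long opt *)
    let i = String.index s '=' in
    String.sub s 0 i, Some (String.sub s (i + 1) (l - i - 1))
  with Not_found -> s, None

let () =
  assert (parse_opt_arg "-v" = ("-v", None));
  assert (parse_opt_arg "-xVAL" = ("-x", Some "VAL"));
  assert (parse_opt_arg "--foo=bar" = ("--foo", Some "bar"));
  assert (parse_opt_arg "--foo" = ("--foo", None))
```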
+
+let hint_matching_opt optidx s =
+  (* hint options in [optidx] that could match [s]: derive both a short and
+     a long option candidate from [s] and suggest spelling-close names. *)
+  if String.length s <= 2 then [] else
+  let short_opt, long_opt =
+    if s.[1] <> '-'
+    then s, Printf.sprintf "-%s" s
+    else String.sub s 1 (String.length s - 1), s
+  in
+  let short_opt, _ = parse_opt_arg short_opt in
+  let long_opt, _ = parse_opt_arg long_opt in
+  let all = Cmdliner_trie.ambiguities optidx "-" in
+  match List.mem short_opt all, Cmdliner_suggest.value long_opt all with
+  | false, [] -> []
+  | false, l -> l
+  | true, [] -> [short_opt]
+  | true, l -> if List.mem short_opt l then l else short_opt :: l
+
+let parse_opt_args ~peek_opts optidx cl args =
+  (* returns an updated [cl] cmdline according to the options found in [args]
+     with the trie index [optidx]. Positional arguments are returned in order
+     in a list. *)
+  let rec loop errs k cl pargs = function
+  | [] -> List.rev errs, cl, List.rev pargs
+  | "--" :: args -> List.rev errs, cl, (List.rev_append pargs args)
+  | s :: args ->
+      if not (is_opt s) then loop errs (k + 1) cl (s :: pargs) args else
+      let name, value = parse_opt_arg s in
+      match Cmdliner_trie.find optidx name with
+      | `Ok a ->
+          let value, args = match value, Cmdliner_info.arg_opt_kind a with
+          | Some v, Cmdliner_info.Flag when is_short_opt name ->
+              None, ("-" ^ v) :: args
+          | Some _, _ -> value, args
+          | None, Cmdliner_info.Flag -> value, args
+          | None, _ ->
+              match args with
+              | [] -> None, args
+              | v :: rest -> if is_opt v then None, args else Some v, rest
+          in
+          let arg = O ((k, name, value) :: opt_arg cl a) in
+          loop errs (k + 1) (Amap.add a arg cl) pargs args
+      | `Not_found when peek_opts -> loop errs (k + 1) cl pargs args
+      | `Not_found ->
+          let hints = hint_matching_opt optidx s in
+          let err = Cmdliner_base.err_unknown ~kind:"option" ~hints name in
+          loop (err :: errs) (k + 1) cl pargs args
+      | `Ambiguous ->
+          let ambs = Cmdliner_trie.ambiguities optidx name in
+          let ambs = List.sort compare ambs in
+          let err = Cmdliner_base.err_ambiguous "option" name ambs in
+          loop (err :: errs) (k + 1) cl pargs args
+  in
+  let errs, cl, pargs = loop [] 0 cl [] args in
+  if errs = [] then Ok (cl, pargs) else
+  let err = String.concat "\n" errs in
+  Error (err, cl, pargs)
+
+let take_range start stop l =
+  let rec loop i acc = function
+  | [] -> List.rev acc
+  | v :: vs ->
+      if i < start then loop (i + 1) acc vs else
+      if i <= stop then loop (i + 1) (v :: acc) vs else
+      List.rev acc
+  in
+  loop 0 [] l
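`take_range` selects the inclusive slice `[start, stop]` of a list, which is how the positional-window arithmetic below is applied to the actual argument list; a standalone check:

```ocaml
(* Self-contained copy of [take_range]: inclusive slice of a list. *)
let take_range start stop l =
  let rec loop i acc = function
  | [] -> List.rev acc
  | v :: vs ->
      if i < start then loop (i + 1) acc vs else
      if i <= stop then loop (i + 1) (v :: acc) vs else
      List.rev acc
  in
  loop 0 [] l

let () =
  assert (take_range 1 3 ["a"; "b"; "c"; "d"; "e"] = ["b"; "c"; "d"]);
  assert (take_range 2 0 ["a"; "b"] = [])
```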
+
+let process_pos_args posidx cl pargs =
+  (* returns an updated [cl] cmdline in which each positional arg mentioned
+     in the list index posidx, is given a value according the list
+     of positional arguments values [pargs]. *)
+  if pargs = [] then
+    let misses = List.filter Cmdliner_info.arg_is_req posidx in
+    if misses = [] then Ok cl else
+    Error (Cmdliner_msg.err_pos_misses misses, cl)
+  else
+  let last = List.length pargs - 1 in
+  let pos rev k = if rev then last - k else k in
+  let rec loop misses cl max_spec = function
+  | [] -> misses, cl, max_spec
+  | a :: al ->
+      let apos = Cmdliner_info.arg_pos a in
+      let rev = Cmdliner_info.pos_rev apos in
+      let start = pos rev (Cmdliner_info.pos_start apos) in
+      let stop = match Cmdliner_info.pos_len apos with
+      | None -> pos rev last
+      | Some n -> pos rev (Cmdliner_info.pos_start apos + n - 1)
+      in
+      let start, stop = if rev then stop, start else start, stop in
+      let args = take_range start stop pargs in
+      let max_spec = max stop max_spec in
+      let cl = Amap.add a (P args) cl in
+      let misses = match Cmdliner_info.arg_is_req a && args = [] with
+      | true -> a :: misses
+      | false -> misses
+      in
+      loop misses cl max_spec al
+  in
+  let misses, cl, max_spec = loop [] cl (-1) posidx in
+  if misses <> [] then Error (Cmdliner_msg.err_pos_misses misses, cl) else
+  if last <= max_spec then Ok cl else
+  let excess = take_range (max_spec + 1) last pargs in
+  Error (Cmdliner_msg.err_pos_excess excess, cl)
+
+let create ?(peek_opts = false) al args =
+  let optidx, posidx, cl = arg_info_indexes al in
+  match parse_opt_args ~peek_opts optidx cl args with
+  | Ok (cl, _) when peek_opts -> Ok cl
+  | Ok (cl, pargs) -> process_pos_args posidx cl pargs
+  | Error (errs, cl, _) -> Error (errs, cl)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.mli
new file mode 100644
index 0000000000..63dad28acc
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_cline.mli
@@ -0,0 +1,34 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** Command lines. *)
+
+type t
+
+val create :
+  ?peek_opts:bool -> Cmdliner_info.args -> string list ->
+  (t, string * t) result
+
+val opt_arg : t -> Cmdliner_info.arg -> (int * string * (string option)) list
+val pos_arg : t -> Cmdliner_info.arg -> string list
+val actual_args : t -> Cmdliner_info.arg -> string list
+(** Actual raw tokens as found on the command line for the argument. *)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.ml
new file mode 100644
index 0000000000..054164f6d4
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.ml
@@ -0,0 +1,352 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let rev_compare n0 n1 = compare n1 n0
+let strf = Printf.sprintf
+
+let esc = Cmdliner_manpage.escape
+let term_name t = esc @@ Cmdliner_info.term_name t
+
+let sorted_items_to_blocks ~boilerplate:b items =
+  (* Items are sorted by section and then reverse-sorted by appearance.
+     We gather them per section, in the correct order, into a `Blocks and
+     prefix each section with optional boilerplate. *)
+  let boilerplate = match b with None -> (fun _ -> None) | Some b -> b in
+  let mk_block sec acc = match boilerplate sec with
+  | None -> (sec, `Blocks acc)
+  | Some b -> (sec, `Blocks (b :: acc))
+  in
+  let rec loop secs sec acc = function
+  | (sec', it) :: its when sec' = sec -> loop secs sec (it :: acc) its
+  | (sec', it) :: its -> loop (mk_block sec acc :: secs) sec' [it] its
+  | [] -> (mk_block sec acc) :: secs
+  in
+  match items with
+  | [] -> []
+  | (sec, it) :: its -> loop [] sec [it] its
+
+(* Doc string variables substitutions. *)
+
+let env_info_subst ~subst e = function
+| "env" -> Some (strf "$(b,%s)" @@ esc (Cmdliner_info.env_var e))
+| id -> subst id
+
+let exit_info_subst ~subst e = function
+| "status" -> Some (strf "%d" (fst @@ Cmdliner_info.exit_statuses e))
+| "status_max" -> Some (strf "%d" (snd @@ Cmdliner_info.exit_statuses e))
+| id -> subst id
+
+let arg_info_subst ~subst a = function
+| "docv" ->
+    Some (strf "$(i,%s)" @@ esc (Cmdliner_info.arg_docv a))
+| "opt" when Cmdliner_info.arg_is_opt a ->
+    Some (strf "$(b,%s)" @@ esc (Cmdliner_info.arg_opt_name_sample a))
+| "env" as id ->
+    begin match Cmdliner_info.arg_env a with
+    | Some e -> env_info_subst ~subst e id
+    | None -> subst id
+    end
+| id -> subst id
+
+let term_info_subst ei = function
+| "tname" -> Some (strf "$(b,%s)" @@ term_name (Cmdliner_info.eval_term ei))
+| "mname" -> Some (strf "$(b,%s)" @@ term_name (Cmdliner_info.eval_main ei))
+| _ -> None
+
+(* Command docs *)
+
+let invocation ?(sep = ' ') ei = match Cmdliner_info.eval_kind ei with
+| `Simple | `Multiple_main -> term_name (Cmdliner_info.eval_main ei)
+| `Multiple_sub ->
+    strf "%s%c%s"
+      Cmdliner_info.(term_name @@ eval_main ei) sep
+      Cmdliner_info.(term_name @@ eval_term ei)
+
+let plain_invocation ei = invocation ei
+let invocation ?sep ei = esc @@ invocation ?sep ei
+
+let synopsis_pos_arg a =
+  let v = match Cmdliner_info.arg_docv a with "" -> "ARG" | v -> v in
+  let v = strf "$(i,%s)" (esc v) in
+  let v = (if Cmdliner_info.arg_is_req a then strf "%s" else strf "[%s]") v in
+  match Cmdliner_info.(pos_len @@ arg_pos a) with
+  | None -> v ^ "..."
+  | Some 1 -> v
+  | Some n ->
+      let rec loop n acc = if n <= 0 then acc else loop (n - 1) (v :: acc) in
+      String.concat " " (loop n [])
+
+let synopsis ei = match Cmdliner_info.eval_kind ei with
+| `Multiple_main -> strf "$(b,%s) $(i,COMMAND) ..." @@ invocation ei
+| `Simple | `Multiple_sub ->
+    let rev_cli_order (a0, _) (a1, _) =
+      Cmdliner_info.rev_arg_pos_cli_order a0 a1
+    in
+    let add_pos a acc = match Cmdliner_info.arg_is_opt a with
+    | true -> acc
+    | false -> (a, synopsis_pos_arg a) :: acc
+    in
+    let args = Cmdliner_info.(term_args @@ eval_term ei) in
+    let pargs = Cmdliner_info.Args.fold add_pos args [] in
+    let pargs = List.sort rev_cli_order pargs in
+    let pargs = String.concat " " (List.rev_map snd pargs) in
+    strf "$(b,%s) [$(i,OPTION)]... %s" (invocation ei) pargs
+
+let cmd_docs ei = match Cmdliner_info.eval_kind ei with
+| `Simple | `Multiple_sub -> []
+| `Multiple_main ->
+    let add_cmd acc t =
+      let cmd = strf "$(b,%s)" @@ term_name t in
+      (Cmdliner_info.term_docs t, `I (cmd, Cmdliner_info.term_doc t)) :: acc
+    in
+    let by_sec_by_rev_name (s0, `I (c0, _)) (s1, `I (c1, _)) =
+      let c = compare s0 s1 in
+      if c <> 0 then c else compare c1 c0 (* N.B. reverse *)
+    in
+    let cmds = List.fold_left add_cmd [] (Cmdliner_info.eval_choices ei) in
+    let cmds = List.sort by_sec_by_rev_name cmds in
+    let cmds = (cmds :> (string * Cmdliner_manpage.block) list) in
+    sorted_items_to_blocks ~boilerplate:None cmds
+
+(* Argument docs *)
+
+let arg_man_item_label a =
+  if Cmdliner_info.arg_is_pos a
+  then strf "$(i,%s)" (esc @@ Cmdliner_info.arg_docv a) else
+  let fmt_name var = match Cmdliner_info.arg_opt_kind a with
+  | Cmdliner_info.Flag -> fun n -> strf "$(b,%s)" (esc n)
+  | Cmdliner_info.Opt ->
+      fun n ->
+        if String.length n > 2
+        then strf "$(b,%s)=$(i,%s)" (esc n) (esc var)
+        else strf "$(b,%s) $(i,%s)" (esc n) (esc var)
+  | Cmdliner_info.Opt_vopt _ ->
+      fun n ->
+        if String.length n > 2
+        then strf "$(b,%s)[=$(i,%s)]" (esc n) (esc var)
+        else strf "$(b,%s) [$(i,%s)]" (esc n) (esc var)
+  in
+  let var = match Cmdliner_info.arg_docv a with "" -> "VAL" | v -> v in
+  let names = List.sort compare (Cmdliner_info.arg_opt_names a) in
+  let s = String.concat ", " (List.rev_map (fmt_name var) names) in
+  s
+
+let arg_to_man_item ~errs ~subst ~buf a =
+  let or_env ~value a = match Cmdliner_info.arg_env a with
+  | None -> ""
+  | Some e ->
+      let value = if value then " or" else "absent " in
+      strf "%s $(b,%s) env" value (esc @@ Cmdliner_info.env_var e)
+  in
+  let absent = match Cmdliner_info.arg_absent a with
+  | Cmdliner_info.Err -> "required"
+  | Cmdliner_info.Val v ->
+      match Lazy.force v with
+      | "" -> strf "%s" (or_env ~value:false a)
+      | v -> strf "absent=%s%s" v (or_env ~value:true a)
+  in
+  let optvopt = match Cmdliner_info.arg_opt_kind a with
+  | Cmdliner_info.Opt_vopt v -> strf "default=%s" v
+  | _ -> ""
+  in
+  let argvdoc = match optvopt, absent with
+  | "", "" -> ""
+  | s, "" | "", s -> strf " (%s)" s
+  | s, s' -> strf " (%s) (%s)" s s'
+  in
+  let subst = arg_info_subst ~subst a in
+  let doc = Cmdliner_info.arg_doc a in
+  let doc = Cmdliner_manpage.subst_vars ~errs ~subst buf doc in
+  (Cmdliner_info.arg_docs a, `I (arg_man_item_label a ^ argvdoc, doc))
+
+let arg_docs ~errs ~subst ~buf ei =
+  let by_sec_by_arg a0 a1 =
+    let c = compare (Cmdliner_info.arg_docs a0) (Cmdliner_info.arg_docs a1) in
+    if c <> 0 then c else
+    match Cmdliner_info.arg_is_opt a0, Cmdliner_info.arg_is_opt a1 with
+    | true, true -> (* optional by name *)
+        let key names =
+          let k = List.hd (List.sort rev_compare names) in
+          let k = String.lowercase_ascii k in
+          if k.[1] = '-' then String.sub k 1 (String.length k - 1) else k
+        in
+        compare
+          (key @@ Cmdliner_info.arg_opt_names a0)
+          (key @@ Cmdliner_info.arg_opt_names a1)
+    | false, false -> (* positional by variable *)
+        compare
+          (String.lowercase_ascii @@ Cmdliner_info.arg_docv a0)
+          (String.lowercase_ascii @@ Cmdliner_info.arg_docv a1)
+    | true, false -> -1 (* positional first *)
+    | false, true -> 1  (* optional after *)
+  in
+  let keep_arg a acc =
+    if not Cmdliner_info.(arg_is_pos a && (arg_docv a = "" || arg_doc a = ""))
+    then (a :: acc) else acc
+  in
+  let args = Cmdliner_info.(term_args @@ eval_term ei) in
+  let args = Cmdliner_info.Args.fold keep_arg args [] in
+  let args = List.sort by_sec_by_arg args in
+  let args = List.rev_map (arg_to_man_item ~errs ~subst ~buf) args in
+  sorted_items_to_blocks ~boilerplate:None args
+
+(* Exit statuses doc *)
+
+let exit_boilerplate sec = match sec = Cmdliner_manpage.s_exit_status with
+| false -> None
+| true -> Some (Cmdliner_manpage.s_exit_status_intro)
+
+let exit_docs ~errs ~subst ~buf ~has_sexit ei =
+  let by_sec (s0, _) (s1, _) = compare s0 s1 in
+  let add_exit_item acc e =
+    let subst = exit_info_subst ~subst e in
+    let min, max = Cmdliner_info.exit_statuses e in
+    let doc = Cmdliner_info.exit_doc e in
+    let label = if min = max then strf "%d" min else strf "%d-%d" min max in
+    let item = `I (label, Cmdliner_manpage.subst_vars ~errs ~subst buf doc) in
+    Cmdliner_info.(exit_docs e, item) :: acc
+  in
+  let exits = Cmdliner_info.(term_exits @@ eval_term ei) in
+  let exits = List.sort Cmdliner_info.exit_order exits in
+  let exits = List.fold_left add_exit_item [] exits in
+  let exits = List.stable_sort by_sec (* sort by section *) exits in
+  let boilerplate = if has_sexit then None else Some exit_boilerplate in
+  sorted_items_to_blocks ~boilerplate exits
+
+(* Environment doc *)
+
+let env_boilerplate sec = match sec = Cmdliner_manpage.s_environment with
+| false -> None
+| true -> Some (Cmdliner_manpage.s_environment_intro)
+
+let env_docs ~errs ~subst ~buf ~has_senv ei =
+  let add_env_item ~subst (seen, envs as acc) e =
+    if Cmdliner_info.Envs.mem e seen then acc else
+    let seen = Cmdliner_info.Envs.add e seen in
+    let var = strf "$(b,%s)" @@ esc (Cmdliner_info.env_var e) in
+    let doc = Cmdliner_info.env_doc e in
+    let doc = Cmdliner_manpage.subst_vars ~errs ~subst buf doc in
+    let envs = (Cmdliner_info.env_docs e, `I (var, doc)) :: envs in
+    seen, envs
+  in
+  let add_arg_env a acc = match Cmdliner_info.arg_env a with
+  | None -> acc
+  | Some e -> add_env_item ~subst:(arg_info_subst ~subst a) acc e
+  in
+  let add_env acc e = add_env_item ~subst:(env_info_subst ~subst e) acc e in
+  let by_sec_by_rev_name (s0, `I (v0, _)) (s1, `I (v1, _)) =
+    let c = compare s0 s1 in
+    if c <> 0 then c else compare v1 v0 (* N.B. reverse *)
+  in
+  (* Processing arg envs before term envs matters here: if the same env
+     var is mentioned both in an arg and in a term, the arg's substs win. *)
+  let args = Cmdliner_info.(term_args @@ eval_term ei) in
+  let tenvs = Cmdliner_info.(term_envs @@ eval_term ei) in
+  let init = Cmdliner_info.Envs.empty, [] in
+  let acc = Cmdliner_info.Args.fold add_arg_env args init in
+  let _, envs = List.fold_left add_env acc tenvs in
+  let envs = List.sort by_sec_by_rev_name envs in
+  let envs = (envs :> (string * Cmdliner_manpage.block) list) in
+  let boilerplate = if has_senv then None else Some env_boilerplate in
+  sorted_items_to_blocks ~boilerplate envs
+
+(* xref doc *)
+
+let xref_docs ~errs ei =
+  let main = Cmdliner_info.(term_name @@ eval_main ei) in
+  let to_xref = function
+  | `Main -> main, 1
+  | `Tool tool -> tool, 1
+  | `Page (name, sec) -> name, sec
+  | `Cmd c ->
+      if Cmdliner_info.eval_has_choice ei c then strf "%s-%s" main c, 1 else
+      (Format.fprintf errs "xref %s: no such term name@." c; "doc-err", 0)
+  in
+  let xref_str (name, sec) = strf "%s(%d)" (esc name) sec in
+  let xrefs = Cmdliner_info.(term_man_xrefs @@ eval_term ei) in
+  let xrefs = List.fold_left (fun acc x -> to_xref x :: acc) [] xrefs in
+  let xrefs = List.(rev_map xref_str (sort rev_compare xrefs)) in
+  if xrefs = [] then [] else
+  [Cmdliner_manpage.s_see_also, `P (String.concat ", " xrefs)]
+
+(* Man page construction *)
+
+let ensure_s_name ei sm =
+  if Cmdliner_manpage.(smap_has_section sm s_name) then sm else
+  let tname = invocation ~sep:'-' ei in
+  let tdoc = Cmdliner_info.(term_doc @@ eval_term ei) in
+  let tagline = if tdoc = "" then "" else strf " - %s" tdoc in
+  let tagline = `P (strf "%s%s" tname tagline) in
+  Cmdliner_manpage.(smap_append_block sm ~sec:s_name tagline)
+
+let ensure_s_synopsis ei sm =
+  if Cmdliner_manpage.(smap_has_section sm ~sec:s_synopsis) then sm else
+  let synopsis = `P (synopsis ei) in
+  Cmdliner_manpage.(smap_append_block sm ~sec:s_synopsis synopsis)
+
+let insert_term_man_docs ~errs ei sm =
+  let buf = Buffer.create 200 in
+  let subst = term_info_subst ei in
+  let ins sm (s, b) = Cmdliner_manpage.smap_append_block sm s b in
+  let has_senv = Cmdliner_manpage.(smap_has_section sm s_environment) in
+  let has_sexit = Cmdliner_manpage.(smap_has_section sm s_exit_status) in
+  let sm = List.fold_left ins sm (cmd_docs ei) in
+  let sm = List.fold_left ins sm (arg_docs ~errs ~subst ~buf ei) in
+  let sm = List.fold_left ins sm (exit_docs ~errs ~subst ~buf ~has_sexit ei) in
+  let sm = List.fold_left ins sm (env_docs ~errs ~subst ~buf ~has_senv ei) in
+  let sm = List.fold_left ins sm (xref_docs ~errs ei) in
+  sm
+
+let text ~errs ei =
+  let man = Cmdliner_info.(term_man @@ eval_term ei) in
+  let sm = Cmdliner_manpage.smap_of_blocks man in
+  let sm = ensure_s_name ei sm in
+  let sm = ensure_s_synopsis ei sm in
+  let sm = insert_term_man_docs ei ~errs sm in
+  Cmdliner_manpage.smap_to_blocks sm
+
+let title ei =
+  let main = Cmdliner_info.eval_main ei in
+  let exec = String.capitalize_ascii (Cmdliner_info.term_name main) in
+  let name = String.uppercase_ascii (invocation ~sep:'-' ei) in
+  let center_header = esc @@ strf "%s Manual" exec in
+  let left_footer =
+    let version = match Cmdliner_info.term_version main with
+    | None -> "" | Some v -> " " ^ v
+    in
+    esc @@ strf "%s%s" exec version
+  in
+  name, 1, "", left_footer, center_header
+
+let man ~errs ei = title ei, text ~errs ei
+
+let pp_man ~errs fmt ppf ei =
+  Cmdliner_manpage.print
+    ~errs ~subst:(term_info_subst ei) fmt ppf (man ~errs ei)
+
+(* Plain synopsis for usage *)
+
+let pp_plain_synopsis ~errs ppf ei =
+  let buf = Buffer.create 100 in
+  let subst = term_info_subst ei in
+  let syn = Cmdliner_manpage.doc_to_plain ~errs ~subst buf (synopsis ei) in
+  Format.fprintf ppf "@[%s@]" syn
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.mli
new file mode 100644
index 0000000000..05fb6a9187
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_docgen.mli
@@ -0,0 +1,30 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+val plain_invocation : Cmdliner_info.eval -> string
+
+val pp_man :
+  errs:Format.formatter -> Cmdliner_manpage.format -> Format.formatter ->
+  Cmdliner_info.eval -> unit
+
+val pp_plain_synopsis :
+  errs:Format.formatter -> Format.formatter -> Cmdliner_info.eval -> unit
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_info.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_info.ml
new file mode 100644
index 0000000000..418dd4d972
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_info.ml
@@ -0,0 +1,233 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+
+let new_id =       (* thread-safe UIDs, Oo.id (object end) was used before. *)
+  let c = ref 0 in
+  fun () ->
+    let id = !c in
+    incr c; if id > !c then assert false (* too many ids *) else id
+
+(* Environments *)
+
+type env =                     (* information about an environment variable. *)
+  { env_id : int;                              (* unique id for the env var. *)
+    env_var : string;                                       (* the variable. *)
+    env_doc : string;                                               (* help. *)
+    env_docs : string; }              (* title of help section where listed. *)
+
+let env
+    ?docs:(env_docs = Cmdliner_manpage.s_environment)
+    ?doc:(env_doc = "See option $(opt).") env_var =
+  { env_id = new_id (); env_var; env_doc; env_docs }
+
+let env_var e = e.env_var
+let env_doc e = e.env_doc
+let env_docs e = e.env_docs
+
+
+module Env = struct
+  type t = env
+  let compare a0 a1 = (compare : int -> int -> int) a0.env_id a1.env_id
+end
+
+module Envs = Set.Make (Env)
+type envs = Envs.t
+
+(* Arguments *)
+
+type arg_absence = Err | Val of string Lazy.t
+type opt_kind = Flag | Opt | Opt_vopt of string
+
+type pos_kind =                  (* information about a positional argument. *)
+  { pos_rev : bool;         (* if [true] positions are counted from the end. *)
+    pos_start : int;                           (* start positional argument. *)
+    pos_len : int option }    (* number of arguments or [None] if unbounded. *)
+
+let pos ~rev:pos_rev ~start:pos_start ~len:pos_len =
+  { pos_rev; pos_start; pos_len}
+
+let pos_rev p = p.pos_rev
+let pos_start p = p.pos_start
+let pos_len p = p.pos_len
+
+type arg =                     (* information about a command line argument. *)
+  { id : int;                                 (* unique id for the argument. *)
+    absent : arg_absence;                            (* behaviour if absent. *)
+    env : env option;                               (* environment variable. *)
+    doc : string;                                                   (* help. *)
+    docv : string;                (* variable name for the argument in help. *)
+    docs : string;                    (* title of help section where listed. *)
+    pos : pos_kind;                                  (* positional arg kind. *)
+    opt_kind : opt_kind;                               (* optional arg kind. *)
+    opt_names : string list;                        (* names (for opt args). *)
+    opt_all : bool; }                          (* repeatable (for opt args). *)
+
+let dumb_pos = pos ~rev:false ~start:(-1) ~len:None
+
+let arg ?docs ?(docv = "") ?(doc = "") ?env names =
+  let dash n = if String.length n = 1 then "-" ^ n else "--" ^ n in
+  let opt_names = List.map dash names in
+  let docs = match docs with
+  | Some s -> s
+  | None ->
+      match names with
+      | [] -> Cmdliner_manpage.s_arguments
+      | _ -> Cmdliner_manpage.s_options
+  in
+  { id = new_id (); absent = Val (lazy ""); env; doc; docv; docs;
+    pos = dumb_pos; opt_kind = Flag; opt_names; opt_all = false; }
+
+let arg_id a = a.id
+let arg_absent a = a.absent
+let arg_env a = a.env
+let arg_doc a = a.doc
+let arg_docv a = a.docv
+let arg_docs a = a.docs
+let arg_pos a = a.pos
+let arg_opt_kind a = a.opt_kind
+let arg_opt_names a = a.opt_names
+let arg_opt_all a = a.opt_all
+let arg_opt_name_sample a =
+  (* First long or short name (in that order) in the list; this
+     allows the client to control which name is shown *)
+  let rec find = function
+  | [] -> List.hd a.opt_names
+  | n :: ns -> if (String.length n) > 2 then n else find ns
+  in
+  find a.opt_names
+
+let arg_make_req a = { a with absent = Err }
+let arg_make_all_opts a = { a with opt_all = true }
+let arg_make_opt ~absent ~kind:opt_kind a = { a with absent; opt_kind }
+let arg_make_opt_all ~absent ~kind:opt_kind a =
+  { a with absent; opt_kind; opt_all = true }
+
+let arg_make_pos ~pos a = { a with pos }
+let arg_make_pos_abs ~absent ~pos a = { a with absent; pos }
+
+let arg_is_opt a = a.opt_names <> []
+let arg_is_pos a = a.opt_names = []
+let arg_is_req a = a.absent = Err
+
+let arg_pos_cli_order a0 a1 =              (* best-effort order on the cli. *)
+  let c = compare (a0.pos.pos_rev) (a1.pos.pos_rev) in
+  if c <> 0 then c else
+  if a0.pos.pos_rev
+  then compare a1.pos.pos_start a0.pos.pos_start
+  else compare a0.pos.pos_start a1.pos.pos_start
+
+let rev_arg_pos_cli_order a0 a1 = arg_pos_cli_order a1 a0
+
+module Arg = struct
+  type t = arg
+  let compare a0 a1 = (compare : int -> int -> int) a0.id a1.id
+end
+
+module Args = Set.Make (Arg)
+type args = Args.t
+
+(* Exit info *)
+
+type exit =
+  { exit_statuses : int * int;
+    exit_doc : string;
+    exit_docs : string; }
+
+let exit
+    ?docs:(exit_docs = Cmdliner_manpage.s_exit_status)
+    ?doc:(exit_doc = "undocumented") ?max min =
+  let max = match max with None -> min | Some max -> max in
+  { exit_statuses = (min, max); exit_doc; exit_docs }
+
+let exit_statuses e = e.exit_statuses
+let exit_doc e = e.exit_doc
+let exit_docs e = e.exit_docs
+let exit_order e0 e1 = compare e0.exit_statuses e1.exit_statuses
+
+(* Term info *)
+
+type term_info =
+  { term_name : string;                                 (* name of the term. *)
+    term_version : string option;                (* version (for --version). *)
+    term_doc : string;                      (* one line description of term. *)
+    term_docs : string;     (* title of man section where listed (commands). *)
+    term_sdocs : string; (* standard options, title of section where listed. *)
+    term_exits : exit list;                      (* exit codes for the term. *)
+    term_envs : env list;               (* env vars that influence the term. *)
+    term_man : Cmdliner_manpage.block list;                (* man page text. *)
+    term_man_xrefs : Cmdliner_manpage.xref list; }        (* man cross-refs. *)
+
+type term =
+  { term_info : term_info;
+    term_args : args; }
+
+let term
+    ?args:(term_args = Args.empty) ?man_xrefs:(term_man_xrefs = [])
+    ?man:(term_man = []) ?envs:(term_envs = []) ?exits:(term_exits = [])
+    ?sdocs:(term_sdocs = Cmdliner_manpage.s_options)
+    ?docs:(term_docs = "COMMANDS") ?doc:(term_doc = "") ?version:term_version
+    term_name =
+  let term_info =
+    { term_name; term_version; term_doc; term_docs; term_sdocs; term_exits;
+      term_envs; term_man; term_man_xrefs }
+  in
+  { term_info; term_args }
+
+let term_name t = t.term_info.term_name
+let term_version t = t.term_info.term_version
+let term_doc t = t.term_info.term_doc
+let term_docs t = t.term_info.term_docs
+let term_stdopts_docs t = t.term_info.term_sdocs
+let term_exits t = t.term_info.term_exits
+let term_envs t = t.term_info.term_envs
+let term_man t = t.term_info.term_man
+let term_man_xrefs t = t.term_info.term_man_xrefs
+let term_args t = t.term_args
+
+let term_add_args t args =
+  { t with term_args = Args.union args t.term_args }
+
+(* Eval info *)
+
+type eval =                     (* information about the evaluation context. *)
+  { term : term;                                    (* term being evaluated. *)
+    main : term;                                               (* main term. *)
+    choices : term list;                                (* all term choices. *)
+    env : string -> string option }          (* environment variable lookup. *)
+
+let eval ~term ~main ~choices ~env = { term; main; choices; env }
+let eval_term e = e.term
+let eval_main e = e.main
+let eval_choices e = e.choices
+let eval_env_var e v = e.env v
+
+let eval_kind ei =
+  if ei.choices = [] then `Simple else
+  if (ei.term.term_info.term_name == ei.main.term_info.term_name)
+  then `Multiple_main else `Multiple_sub
+
+let eval_with_term ei term = { ei with term }
+
+let eval_has_choice e cmd =
+  let is_cmd t = t.term_info.term_name = cmd in
+  List.exists is_cmd e.choices
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_info.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_info.mli
new file mode 100644
index 0000000000..7fa60cbca0
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_info.mli
@@ -0,0 +1,140 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** Terms, arguments and env vars information.
+
+    The following types keep untyped information about arguments and
+    terms. This data is used to parse the command line, report errors
+    and format man pages. *)
+
+(** {1:env Environment variables} *)
+
+type env
+val env : ?docs:string -> ?doc:string -> string -> env
+val env_var : env -> string
+val env_doc : env -> string
+val env_docs : env -> string
+
+module Env : Set.OrderedType with type t = env
+module Envs : Set.S with type elt = env
+type envs = Envs.t
+
+(** {1:arg Arguments} *)
+
+type arg_absence =
+| Err  (** an error is reported. *)
+| Val of string Lazy.t (** if <> "", takes the given default value. *)
+(** The type for what happens if the argument is absent from the cli. *)
+
+type opt_kind =
+| Flag (** without value, just a flag. *)
+| Opt  (** with required value. *)
+| Opt_vopt of string (** with optional value, takes given default. *)
+(** The type for optional argument kinds. *)
+
+type pos_kind
+val pos : rev:bool -> start:int -> len:int option -> pos_kind
+val pos_rev : pos_kind -> bool
+val pos_start : pos_kind -> int
+val pos_len : pos_kind -> int option
+
+type arg
+val arg :
+  ?docs:string -> ?docv:string -> ?doc:string -> ?env:env ->
+  string list -> arg
+
+val arg_id : arg -> int
+val arg_absent : arg -> arg_absence
+val arg_env : arg -> env option
+val arg_doc : arg -> string
+val arg_docv : arg -> string
+val arg_docs : arg -> string
+val arg_opt_names : arg -> string list (* has dashes *)
+val arg_opt_name_sample : arg -> string (* warning: must be an opt arg *)
+val arg_opt_kind : arg -> opt_kind
+val arg_pos : arg -> pos_kind
+
+val arg_make_req : arg -> arg
+val arg_make_all_opts : arg -> arg
+val arg_make_opt : absent:arg_absence -> kind:opt_kind -> arg -> arg
+val arg_make_opt_all : absent:arg_absence -> kind:opt_kind -> arg -> arg
+val arg_make_pos : pos:pos_kind -> arg -> arg
+val arg_make_pos_abs : absent:arg_absence -> pos:pos_kind -> arg -> arg
+
+val arg_is_opt : arg -> bool
+val arg_is_pos : arg -> bool
+val arg_is_req : arg -> bool
+
+val arg_pos_cli_order : arg -> arg -> int
+val rev_arg_pos_cli_order : arg -> arg -> int
+
+module Arg : Set.OrderedType with type t = arg
+module Args : Set.S with type elt = arg
+type args = Args.t
+
+(** {1:exit Exit status} *)
+
+type exit
+val exit : ?docs:string -> ?doc:string -> ?max:int -> int -> exit
+val exit_statuses : exit -> int * int
+val exit_doc : exit -> string
+val exit_docs : exit -> string
+val exit_order : exit -> exit -> int
+
+(** {1:term Term information} *)
+
+type term
+
+val term :
+  ?args:args -> ?man_xrefs:Cmdliner_manpage.xref list ->
+  ?man:Cmdliner_manpage.block list -> ?envs:env list -> ?exits:exit list ->
+  ?sdocs:string -> ?docs:string -> ?doc:string -> ?version:string ->
+  string -> term
+
+val term_name : term -> string
+val term_version : term -> string option
+val term_doc : term -> string
+val term_docs : term -> string
+val term_stdopts_docs : term -> string
+val term_exits : term -> exit list
+val term_envs : term -> env list
+val term_man : term -> Cmdliner_manpage.block list
+val term_man_xrefs : term -> Cmdliner_manpage.xref list
+val term_args : term -> args
+
+val term_add_args : term -> args -> term
+
+(** {1:eval Evaluation information} *)
+
+type eval
+
+val eval :
+  term:term -> main:term -> choices:term list ->
+  env:(string -> string option) -> eval
+
+val eval_term : eval -> term
+val eval_main : eval -> term
+val eval_choices : eval -> term list
+val eval_env_var : eval -> string -> string option
+val eval_kind : eval -> [> `Multiple_main | `Multiple_sub | `Simple ]
+val eval_with_term : eval -> term -> eval
+val eval_has_choice : eval -> string -> bool
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.ml
new file mode 100644
index 0000000000..2dbd1f6dd4
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.ml
@@ -0,0 +1,502 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(* Manpages *)
+
+type block =
+  [ `S of string | `P of string | `Pre of string | `I of string * string
+  | `Noblank | `Blocks of block list ]
+
+type title = string * int * string * string * string
+
+type t = title * block list
+
+type xref =
+  [ `Main | `Cmd of string | `Tool of string | `Page of string * int ]
+
+(* Standard sections *)
+
+let s_name = "NAME"
+let s_synopsis = "SYNOPSIS"
+let s_description = "DESCRIPTION"
+let s_commands = "COMMANDS"
+let s_arguments = "ARGUMENTS"
+let s_options = "OPTIONS"
+let s_common_options = "COMMON OPTIONS"
+let s_exit_status = "EXIT STATUS"
+let s_exit_status_intro =
+  `P "$(tname) exits with the following status:"
+
+let s_environment = "ENVIRONMENT"
+let s_environment_intro =
+  `P "These environment variables affect the execution of $(tname):"
+
+let s_files = "FILES"
+let s_examples = "EXAMPLES"
+let s_bugs = "BUGS"
+let s_authors = "AUTHORS"
+let s_see_also = "SEE ALSO"
+
+(* Section order *)
+
+let s_created = ""
+let order =
+  [| s_name; s_synopsis; s_description; s_created; s_commands;
+     s_arguments; s_options; s_common_options; s_exit_status;
+     s_environment; s_files; s_examples; s_bugs; s_authors; s_see_also; |]
+
+let order_synopsis = 1
+let order_created = 3
+
+let section_of_order i = order.(i)
+let section_to_order ~on_unknown s =
+  let max = Array.length order - 1 in
+  let rec loop i = match i > max with
+  | true -> on_unknown
+  | false -> if order.(i) = s then i else loop (i + 1)
+  in
+  loop 0
+
+(* Section maps
+
+   A section map maps section names to their section order and their
+   content blocks in reverse order (content inside `Blocks values is not
+   reversed). The sections themselves are listed in reverse order.
+   Unknown sections get the order of the last known section. *)
+
+type smap = (string * (int * block list)) list
+
+let smap_of_blocks bs = (* N.B. this flattens `Blocks, not t.r. *)
+  let rec loop s s_o rbs smap = function
+  | [] -> s, s_o, rbs, smap
+  | `S new_sec :: bs ->
+      let new_o = section_to_order ~on_unknown:s_o new_sec in
+      loop new_sec new_o [] ((s, (s_o, rbs)):: smap) bs
+  | `Blocks blist :: bs ->
+      let s, s_o, rbs, rmap = loop s s_o rbs smap blist (* not t.r. *) in
+      loop s s_o rbs rmap bs
+  | (`P _ | `Pre _ | `I _ | `Noblank as c) :: bs ->
+      loop s s_o (c :: rbs) smap bs
+  in
+  let first, (bs : block list) = match bs with
+  | `S s :: bs -> s, bs
+  | `Blocks (`S s :: blist) :: bs -> s, (`Blocks blist) :: bs
+  | _ -> "", bs
+  in
+  let first_o = section_to_order ~on_unknown:order_synopsis first in
+  let s, s_o, rc, smap = loop first first_o [] [] bs in
+  (s, (s_o, rc)) :: smap
+
+let smap_to_blocks smap = (* N.B. this leaves `Blocks content untouched. *)
+  let rec loop acc smap s = function
+  | b :: rbs -> loop (b :: acc) smap s rbs
+  | [] ->
+      let acc =  if s = "" then acc else `S s :: acc in
+      match smap with
+      | (s, (_, rbs)) :: smap -> loop acc smap s rbs
+      | [] -> acc
+  in
+  match smap with
+  | [] -> []
+  | (s, (_, rbs)) :: smap -> loop [] smap s rbs
+
+let smap_has_section smap ~sec = List.exists (fun (s, _) -> sec = s) smap
+let smap_append_block smap ~sec b =
+  let o = section_to_order ~on_unknown:order_created sec in
+  let try_insert =
+    let rec loop max_lt_o left = function
+    | (s', (o, rbs)) :: right when s' = sec ->
+        Ok (List.rev_append ((sec, (o, b :: rbs)) :: left) right)
+    | (_, (o', _) as s) :: right ->
+        let max_lt_o = if o' < o then max o' max_lt_o else max_lt_o in
+        loop max_lt_o (s :: left) right
+    | [] ->
+        if max_lt_o <> -1 then Error max_lt_o else
+        Ok (List.rev ((sec, (o, [b])) :: left))
+    in
+    loop (-1) [] smap
+  in
+  match try_insert with
+  | Ok smap -> smap
+  | Error insert_before ->
+      let rec loop left = function
+      | (s', (o', _)) :: _ as right when o' = insert_before ->
+          List.rev_append ((sec, (o, [b])) :: left) right
+      | s :: ss -> loop (s :: left) ss
+      | [] -> assert false
+      in
+      loop [] smap
+
+(* Formatting tools *)
+
+let strf = Printf.sprintf
+let pf = Format.fprintf
+let pp_str = Format.pp_print_string
+let pp_char = Format.pp_print_char
+let pp_indent ppf c = for i = 1 to c do pp_char ppf ' ' done
+let pp_lines = Cmdliner_base.pp_lines
+let pp_tokens = Cmdliner_base.pp_tokens
+
+(* Cmdliner markup handling *)
+
+let err e fmt = pf e ("cmdliner error: " ^^ fmt ^^ "@.")
+let err_unescaped ~errs c s = err errs "unescaped %C in %S" c s
+let err_malformed ~errs s = err errs "Malformed $(...) in %S" s
+let err_unclosed ~errs s = err errs "Unclosed $(...) in %S" s
+let err_undef ~errs id s = err errs "Undefined variable $(%s) in %S" id s
+let err_illegal_esc ~errs c s = err errs "Illegal escape char %C in %S" c s
+let err_markup ~errs dir s =
+  err errs "Unknown cmdliner markup $(%c,...) in %S" dir s
+
+let is_markup_dir = function 'i' | 'b' -> true | _ -> false
+let is_markup_esc = function '$' | '\\' | '(' | ')' -> true | _ -> false
+let markup_need_esc = function '\\' | '$' -> true | _ -> false
+let markup_text_need_esc = function '\\' | '$' | ')' -> true | _ -> false
+
+let escape s = (* escapes [s] from doc language. *)
+  let max_i = String.length s - 1 in
+  let rec escaped_len i l =
+    if i > max_i then l else
+    if markup_text_need_esc s.[i] then escaped_len (i + 1) (l + 2) else
+    escaped_len (i + 1) (l + 1)
+  in
+  let escaped_len = escaped_len 0 0 in
+  if escaped_len = String.length s then s else
+  let b = Bytes.create escaped_len in
+  let rec loop i k =
+    if i > max_i then Bytes.unsafe_to_string b else
+    let c = String.unsafe_get s i in
+    if not (markup_text_need_esc c)
+    then (Bytes.unsafe_set b k c; loop (i + 1) (k + 1))
+    else (Bytes.unsafe_set b k '\\'; Bytes.unsafe_set b (k + 1) c;
+          loop (i + 1) (k + 2))
+  in
+  loop 0 0
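The escaper above can be illustrated with a minimal standalone sketch (an illustrative re-implementation, not the module's exported API): it backslash-escapes the three characters that are significant in the doc language, `\`, `$` and `)`.

```ocaml
(* Minimal standalone sketch of the doc-language escaper: backslash-escape
   the characters that are significant in cmdliner's doc markup. *)
let needs_esc = function '\\' | '$' | ')' -> true | _ -> false

let escape s =
  let b = Buffer.create (String.length s) in
  String.iter
    (fun c ->
       if needs_esc c then Buffer.add_char b '\\';
       Buffer.add_char b c)
    s;
  Buffer.contents b

let () =
  assert (escape "plain" = "plain");
  assert (escape "a$(b,x)" = "a\\$(b,x\\)")
```

The production version above additionally avoids allocation in the common case: it first computes the escaped length and returns `s` unchanged when nothing needs escaping.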
+
+let subst_vars ~errs ~subst b s =
+  let max_i = String.length s - 1 in
+  let flush start stop = match start > max_i with
+  | true -> ()
+  | false -> Buffer.add_substring b s start (stop - start + 1)
+  in
+  let skip_escape k start i =
+    if i > max_i then err_unescaped ~errs '\\' s else k start (i + 1)
+  in
+  let rec skip_markup k start i =
+    if i > max_i then (err_unclosed ~errs s; k start i) else
+    match s.[i] with
+    | '\\' -> skip_escape (skip_markup k) start (i + 1)
+    | ')' -> k start (i + 1)
+    | c -> skip_markup k start (i + 1)
+  in
+  let rec add_subst start i =
+    if i > max_i then (err_unclosed ~errs s; loop start i) else
+    if s.[i] <> ')' then add_subst start (i + 1) else
+    let id = String.sub s start (i - start) in
+    let next = i + 1 in
+    begin match subst id with
+    | None -> err_undef ~errs id s; Buffer.add_string b "undefined";
+    | Some v -> Buffer.add_string b v
+    end;
+    loop next next
+  and loop start i =
+    if i > max_i then flush start max_i else
+    let next = i + 1 in
+    match s.[i] with
+    | '\\' -> skip_escape loop start next
+    | '$' ->
+        if next > max_i then err_unescaped ~errs '$' s else
+        begin match s.[next] with
+        | '(' ->
+            let min = next + 2 in
+            if min > max_i then (err_unclosed ~errs s; loop start next) else
+            begin match s.[min] with
+            | ',' -> skip_markup loop start (min + 1)
+            | _ ->
+                let start_id = next + 1 in
+                flush start (i - 1); add_subst start_id start_id
+            end
+        | _ -> err_unescaped ~errs '$' s; loop start next
+        end;
+    | c -> loop start next
+  in
+  (Buffer.clear b; loop 0 0; Buffer.contents b)
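The core behaviour of the substitution pass can be sketched standalone. This toy version handles only plain `$(id)` variables; it deliberately skips the escape handling and the `$(markup,...)` pass-through of the real parser, but mirrors its "undefined" fallback for unknown variables.

```ocaml
(* Simplified sketch of $(id) substitution: replace each $(id) by its
   [subst] definition, or by "undefined" when [subst] has no binding. *)
let subst_vars subst s =
  let b = Buffer.create (String.length s) in
  let n = String.length s in
  let rec go i =
    if i >= n then () else
    if i + 1 < n && s.[i] = '$' && s.[i + 1] = '(' then begin
      match String.index_from_opt s (i + 2) ')' with
      | None -> Buffer.add_substring b s i (n - i) (* unclosed, copy rest *)
      | Some j ->
          let id = String.sub s (i + 2) (j - i - 2) in
          (match subst id with
           | Some v -> Buffer.add_string b v
           | None -> Buffer.add_string b "undefined");
          go (j + 1)
    end
    else (Buffer.add_char b s.[i]; go (i + 1))
  in
  go 0;
  Buffer.contents b

let () =
  let subst = function "tname" -> Some "mytool" | _ -> None in
  assert (subst_vars subst "$(tname)!" = "mytool!");
  assert (subst_vars subst "$(nope)" = "undefined")
```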
+
+let add_markup_esc ~errs k b s start next target_need_escape target_escape =
+  let max_i = String.length s - 1 in
+  if next > max_i then err_unescaped ~errs '\\' s else
+  match s.[next] with
+  | c when not (is_markup_esc c) ->
+      err_illegal_esc ~errs c s;
+      k (next + 1) (next + 1)
+  | c ->
+      (if target_need_escape c then target_escape b c else Buffer.add_char b c);
+      k (next + 1) (next + 1)
+
+let add_markup_text ~errs k b s start target_need_escape target_escape =
+  let max_i = String.length s - 1 in
+  let flush start stop = match start > max_i with
+  | true -> ()
+  | false -> Buffer.add_substring b s start (stop - start + 1)
+  in
+  let rec loop start i =
+    if i > max_i then (err_unclosed ~errs s; flush start max_i) else
+    let next = i + 1 in
+    match s.[i] with
+    | '\\' -> (* unescape *)
+        flush start (i - 1);
+        add_markup_esc ~errs loop b s start next
+          target_need_escape target_escape
+    | ')' -> flush start (i - 1); k next next
+    | c when markup_text_need_esc c ->
+        err_unescaped ~errs c s; flush start (i - 1); loop next next
+    | c when target_need_escape c ->
+        flush start (i - 1); target_escape b c; loop next next
+    | c -> loop start next
+  in
+  loop start start
+
+(* Plain text output *)
+
+let markup_to_plain ~errs b s =
+  let max_i = String.length s - 1 in
+  let flush start stop = match start > max_i with
+  | true -> ()
+  | false -> Buffer.add_substring b s start (stop - start + 1)
+  in
+  let need_escape _ = false in
+  let escape _ _ = assert false in
+  let rec loop start i =
+    if i > max_i then flush start max_i else
+    let next = i + 1 in
+    match s.[i] with
+    | '\\' ->
+        flush start (i - 1);
+        add_markup_esc ~errs loop b s start next need_escape escape
+    | '$' ->
+        if next > max_i then err_unescaped ~errs '$' s else
+        begin match s.[next] with
+        | '(' ->
+            let min = next + 2 in
+            if min > max_i then (err_unclosed ~errs s; loop start next) else
+            begin match s.[min] with
+            | ',' ->
+                let markup = s.[min - 1] in
+                if not (is_markup_dir markup)
+                then (err_markup ~errs markup s; loop start next) else
+                let start_data = min + 1 in
+                (flush start (i - 1);
+                 add_markup_text ~errs loop b s start_data need_escape escape)
+            | _ ->
+                err_malformed ~errs s; loop start next
+            end
+        | _ -> err_unescaped ~errs '$' s; loop start next
+        end
+    | c when markup_need_esc c ->
+        err_unescaped ~errs c s; flush start (i - 1); loop next next
+    | c -> loop start next
+  in
+  (Buffer.clear b; loop 0 0; Buffer.contents b)
+
+let doc_to_plain ~errs ~subst b s =
+  markup_to_plain ~errs b (subst_vars ~errs ~subst b s)
+
+let p_indent = 7                                  (* paragraph indentation. *)
+let l_indent = 4                                      (* label indentation. *)
+
+let pp_plain_blocks ~errs subst ppf ts =
+  let b = Buffer.create 1024 in
+  let markup t = doc_to_plain ~errs b ~subst t in
+  let pp_tokens ppf t = pp_tokens ~spaces:true ppf t in
+  let rec loop = function
+  | [] -> ()
+  | t :: ts ->
+      begin match t with
+      | `Noblank -> ()
+      | `Blocks bs -> loop bs (* not T.R. *)
+      | `P s -> pf ppf "%a@[%a@]@," pp_indent p_indent pp_tokens (markup s)
+      | `S s -> pf ppf "@[%a@]" pp_tokens (markup s)
+      | `Pre s -> pf ppf "%a@[%a@]@," pp_indent p_indent pp_lines (markup s)
+      | `I (label, s) ->
+          let label = markup label in
+          let s = markup s in
+          pf ppf "@[%a@[%a@]" pp_indent p_indent pp_tokens label;
+          if s = "" then pf ppf "@]@," else
+          let ll = String.length label in
+          begin match ll < l_indent with
+          | true ->
+              pf ppf "%a@[%a@]@]" pp_indent (l_indent - ll) pp_tokens s
+          | false ->
+              pf ppf "@\n%a@[%a@]@]"
+                pp_indent (p_indent + l_indent) pp_tokens s
+          end;
+          match ts with `I _ :: _ -> pf ppf "@," | _ -> ()
+      end;
+      begin match ts with
+      | `Noblank :: ts -> loop ts
+      | ts -> Format.pp_print_cut ppf (); loop ts
+      end
+  in
+  loop ts
+
+let pp_plain_page ~errs subst ppf (_, text) =
+  pf ppf "@[<v>%a@]" (pp_plain_blocks ~errs subst) text
+
+(* Groff output *)
+
+let markup_to_groff ~errs b s =
+  let max_i = String.length s - 1 in
+  let flush start stop = match start > max_i with
+  | true -> ()
+  | false -> Buffer.add_substring b s start (stop - start + 1)
+  in
+  let need_escape = function '.' | '\'' | '-' | '\\' -> true | _ -> false in
+  let escape b c = Printf.bprintf b "\\N'%d'" (Char.code c) in
+  let rec end_text start i = Buffer.add_string b "\\fR"; loop start i
+  and loop start i =
+    if i > max_i then flush start max_i else
+    let next = i + 1 in
+    match s.[i] with
+    | '\\' ->
+        flush start (i - 1);
+        add_markup_esc ~errs loop b s start next need_escape escape
+    | '$' ->
+        if next > max_i then err_unescaped ~errs '$' s else
+        begin match s.[next] with
+        | '(' ->
+            let min = next + 2 in
+            if min > max_i then (err_unclosed ~errs s; loop start next) else
+            begin match s.[min] with
+            | ','  ->
+                let start_data = min + 1 in
+                flush start (i - 1);
+                begin match s.[min - 1] with
+                | 'i' -> Buffer.add_string b "\\fI"
+                | 'b' -> Buffer.add_string b "\\fB"
+                | markup -> err_markup ~errs markup s
+                end;
+                add_markup_text ~errs end_text b s start_data need_escape escape
+            | _ -> err_malformed ~errs s; loop start next
+            end
+        | _ -> err_unescaped ~errs '$' s; flush start (i - 1); loop next next
+        end
+    | c when markup_need_esc c ->
+        err_unescaped ~errs c s; flush start (i - 1); loop next next
+    | c when need_escape c ->
+        flush start (i - 1); escape b c; loop next next
+    | c -> loop start next
+  in
+  (Buffer.clear b; loop 0 0; Buffer.contents b)
+
+let doc_to_groff ~errs ~subst b s =
+  markup_to_groff ~errs b (subst_vars ~errs ~subst b s)
+
+let pp_groff_blocks ~errs subst ppf text =
+  let buf = Buffer.create 1024 in
+  let markup t = doc_to_groff ~errs ~subst buf t in
+  let pp_tokens ppf t = pp_tokens ~spaces:false ppf t in
+  let rec pp_block = function
+  | `Blocks bs -> List.iter pp_block bs (* not T.R. *)
+  | `P s -> pf ppf "@\n.P@\n%a" pp_tokens (markup s)
+  | `Pre s -> pf ppf "@\n.P@\n.nf@\n%a@\n.fi" pp_lines (markup s)
+  | `S s -> pf ppf "@\n.SH %a" pp_tokens (markup s)
+  | `Noblank -> pf ppf "@\n.sp -1"
+  | `I (l, s) ->
+      pf ppf "@\n.TP 4@\n%a@\n%a" pp_tokens (markup l) pp_tokens (markup s)
+  in
+  List.iter pp_block text
+
+let pp_groff_page ~errs subst ppf ((n, s, a1, a2, a3), t) =
+  pf ppf ".\\\" Pipe this output to groff -Tutf8 | less@\n\
+          .\\\"@\n\
+          .mso an.tmac@\n\
+          .TH \"%s\" %d \"%s\" \"%s\" \"%s\"@\n\
+          .\\\" Disable hyphenation and ragged-right@\n\
+          .nh@\n\
+          .ad l\
+          %a@?"
+    n s a1 a2 a3 (pp_groff_blocks ~errs subst) t
+
+(* Printing to a pager *)
+
+let pp_to_temp_file pp_v v =
+  try
+    let exec = Filename.basename Sys.argv.(0) in
+    let file, oc = Filename.open_temp_file exec "out" in
+    let ppf = Format.formatter_of_out_channel oc in
+    pp_v ppf v; Format.pp_print_flush ppf (); close_out oc;
+    at_exit (fun () -> try Sys.remove file with Sys_error _ -> ());
+    Some file
+  with Sys_error _ -> None
+
+let find_cmd cmds =
+  let test, null = match Sys.os_type with
+  | "Win32" -> "where", " NUL"
+  | _ -> "type", "/dev/null"
+  in
+  let cmd c = Sys.command (strf "%s %s 1>%s 2>%s" test c null null) = 0 in
+  try Some (List.find cmd cmds) with Not_found -> None
+
+let pp_to_pager print ppf v =
+  let pager =
+    let cmds = ["less"; "more"] in
+    let cmds = try (Sys.getenv "PAGER") :: cmds with Not_found -> cmds in
+    let cmds = try (Sys.getenv "MANPAGER") :: cmds with Not_found -> cmds in
+    find_cmd cmds
+  in
+  match pager with
+  | None -> print `Plain ppf v
+  | Some pager ->
+      let cmd = match (find_cmd ["groff"; "nroff"]) with
+      | None ->
+          begin match pp_to_temp_file (print `Plain) v with
+          | None -> None
+          | Some f -> Some (strf "%s < %s" pager f)
+          end
+      | Some c ->
+          begin match pp_to_temp_file (print `Groff) v with
+          | None -> None
+          | Some f ->
+              (* TODO use -Tutf8, but annoyingly maps U+002D to U+2212. *)
+              let xroff = if c = "groff" then c ^ " -Tascii -P-c" else c in
+              Some (strf "%s < %s | %s" xroff f pager)
+          end
+      in
+      match cmd with
+      | None -> print `Plain ppf v
+      | Some cmd -> if (Sys.command cmd) <> 0 then print `Plain ppf v
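The command-discovery step used by the pager logic above can be shown on its own. This is a simplified restatement of `find_cmd`: probe each candidate with the shell's `type` builtin (`where` on Windows), discarding all output, and keep the first candidate whose probe exits with status 0.

```ocaml
(* Sketch of external-command discovery: return the first command in
   [cmds] that the shell can resolve, or None if there is none. *)
let find_cmd cmds =
  let test, null = match Sys.os_type with
  | "Win32" -> "where", " NUL"
  | _ -> "type", "/dev/null"
  in
  let ok c = Sys.command (Printf.sprintf "%s %s 1>%s 2>%s" test c null null) = 0 in
  try Some (List.find ok cmds) with Not_found -> None

let () =
  (* On a POSIX system "sh" should resolve; the bogus name should not. *)
  match find_cmd ["surely-not-a-command-xyz"; "sh"] with
  | Some c -> print_endline ("found: " ^ c)
  | None -> print_endline "none found"
```

Probing via the shell keeps the code portable at the cost of spawning a subprocess per candidate, which is acceptable here since the list is at most a few entries long.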
+
+(* Output *)
+
+type format = [ `Auto | `Pager | `Plain | `Groff ]
+
+let rec print
+    ?(errs = Format.err_formatter)
+    ?(subst = fun x -> None) fmt ppf page =
+  match fmt with
+  | `Pager -> pp_to_pager (print ~errs ~subst) ppf page
+  | `Plain -> pp_plain_page ~errs subst ppf page
+  | `Groff -> pp_groff_page ~errs subst ppf page
+  | `Auto ->
+      match try (Some (Sys.getenv "TERM")) with Not_found -> None with
+      | None | Some "dumb" -> print ~errs ~subst `Plain ppf page
+      | Some _ -> print ~errs ~subst `Pager ppf page
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.mli
new file mode 100644
index 0000000000..3bbbb53c68
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_manpage.mli
@@ -0,0 +1,100 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** Manpages.
+
+    See {!Cmdliner.Manpage}. *)
+
+type block =
+  [ `S of string | `P of string | `Pre of string | `I of string * string
+  | `Noblank | `Blocks of block list ]
+
+val escape : string -> string
+(** [escape s] escapes [s] from the doc language. *)
+
+type title = string * int * string * string * string
+
+type t = title * block list
+
+type xref =
+  [ `Main | `Cmd of string | `Tool of string | `Page of string * int ]
+
+(** {1 Standard section names} *)
+
+val s_name : string
+val s_synopsis : string
+val s_description : string
+val s_commands : string
+val s_arguments : string
+val s_options : string
+val s_common_options : string
+val s_exit_status : string
+val s_environment : string
+val s_files : string
+val s_bugs : string
+val s_examples : string
+val s_authors : string
+val s_see_also : string
+
+(** {1 Section maps}
+
+    Used for handling the merging of metadata doc strings. *)
+
+type smap
+val smap_of_blocks : block list -> smap
+val smap_to_blocks : smap -> block list
+val smap_has_section : smap -> sec:string -> bool
+val smap_append_block : smap -> sec:string -> block -> smap
+(** [smap_append_block smap sec b] appends [b] at the end of section
+    [sec] creating it at the right place if needed. *)
+
+(** {1 Content boilerplate} *)
+
+val s_exit_status_intro : block
+val s_environment_intro : block
+
+(** {1 Output} *)
+
+type format = [ `Auto | `Pager | `Plain | `Groff ]
+val print :
+  ?errs:Format.formatter -> ?subst:(string -> string option) -> format ->
+  Format.formatter -> t -> unit
+
+(** {1 Printers and escapes used by Cmdliner module} *)
+
+val subst_vars :
+  errs:Format.formatter -> subst:(string -> string option) -> Buffer.t ->
+  string -> string
+(** [subst_vars ~errs ~subst b s] substitutes in [s], using buffer [b],
+    variables of the form $(id) by their [subst] definition. Escapes and
+    markup directives $(markup,...) are left intact. Syntax errors are
+    reported on [errs]. *)
+
+val doc_to_plain :
+  errs:Format.formatter -> subst:(string -> string option) -> Buffer.t ->
+  string -> string
+(** [doc_to_plain ~errs ~subst b s] substitutes in [s], using buffer [b],
+    variables by their [subst] definition and renders cmdliner markup
+    directives to plain text. Syntax errors are reported on [errs]. *)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.ml
new file mode 100644
index 0000000000..e26599b8a1
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.ml
@@ -0,0 +1,116 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let strf = Printf.sprintf
+let quote = Cmdliner_base.quote
+
+let pp = Format.fprintf
+let pp_text = Cmdliner_base.pp_text
+let pp_lines = Cmdliner_base.pp_lines
+
+(* Environment variable errors *)
+
+let err_env_parse env ~err =
+  let var = Cmdliner_info.env_var env in
+  strf "environment variable %s: %s" (quote var) err
+
+(* Positional argument errors *)
+
+let err_pos_excess excess =
+  strf "too many arguments, don't know what to do with %s"
+    (String.concat ", " (List.map quote excess))
+
+let err_pos_miss a = match Cmdliner_info.arg_docv a with
+| "" -> "a required argument is missing"
+| v -> strf "required argument %s is missing" v
+
+let err_pos_misses = function
+| [] -> assert false
+| [a] -> err_pos_miss a
+| args ->
+    let add_arg acc a = match Cmdliner_info.arg_docv a with
+    | "" -> "ARG" :: acc
+    | argv -> argv :: acc
+    in
+    let rev_args = List.sort Cmdliner_info.rev_arg_pos_cli_order args in
+    let args = List.fold_left add_arg [] rev_args in
+    let args = String.concat ", " args in
+    strf "required arguments %s are missing" args
+
+let err_pos_parse a ~err = match Cmdliner_info.arg_docv a with
+| "" -> err
+| argv ->
+    match Cmdliner_info.(pos_len @@ arg_pos a) with
+    | Some 1 -> strf "%s argument: %s" argv err
+    | None | Some _ -> strf "%s... arguments: %s" argv err
+
+(* Optional argument errors *)
+
+let err_flag_value flag v =
+  strf "option %s is a flag, it cannot take the argument %s"
+    (quote flag) (quote v)
+
+let err_opt_value_missing f = strf "option %s needs an argument" (quote f)
+let err_opt_parse f ~err = strf "option %s: %s" (quote f) err
+let err_opt_repeated f f' =
+  if f = f' then strf "option %s cannot be repeated" (quote f) else
+  strf "options %s and %s cannot be present at the same time"
+    (quote f) (quote f')
+
+(* Argument errors *)
+
+let err_arg_missing a =
+  if Cmdliner_info.arg_is_pos a then err_pos_miss a else
+  strf "required option %s is missing" (Cmdliner_info.arg_opt_name_sample a)
+
+(* Other messages *)
+
+let exec_name ei = Cmdliner_info.(term_name @@ eval_main ei)
+
+let pp_version ppf ei = match Cmdliner_info.(term_version @@ eval_main ei) with
+| None -> assert false
+| Some v -> pp ppf "@[%a@]@." Cmdliner_base.pp_text v
+
+let pp_try_help ppf ei = match Cmdliner_info.eval_kind ei with
+| `Simple | `Multiple_main ->
+    pp ppf "@[<2>Try `%s --help' for more information.@]" (exec_name ei)
+| `Multiple_sub ->
+    let exec_cmd = Cmdliner_docgen.plain_invocation ei in
+    pp ppf "@[<2>Try `%s --help' or `%s --help' for more information.@]"
+      exec_cmd (exec_name ei)
+
+let pp_err ppf ei ~err = pp ppf "%s: @[%a@]@." (exec_name ei) pp_lines err
+
+let pp_err_usage ppf ei ~err_lines ~err =
+  let pp_err = if err_lines then pp_lines else pp_text in
+  pp ppf "@[<v>%s: @[%a@]@,@[Usage: @[%a@]@]@,%a@]@."
+    (exec_name ei) pp_err err (Cmdliner_docgen.pp_plain_synopsis ~errs:ppf) ei
+    pp_try_help ei
+
+let pp_backtrace ppf ei e bt =
+  let bt = Printexc.raw_backtrace_to_string bt in
+  let bt =
+    let len = String.length bt in
+    if len > 0 then String.sub bt 0 (len - 1) (* remove final '\n' *) else bt
+  in
+  pp ppf "%s: @[internal error, uncaught exception:@\n%a@]@."
+    (exec_name ei) pp_lines (strf "%s\n%s" (Printexc.to_string e) bt)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.mli
new file mode 100644
index 0000000000..9c69d50ec1
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_msg.mli
@@ -0,0 +1,56 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** Messages for the end-user. *)
+
+(** {1:env_err Environment variable errors} *)
+
+val err_env_parse : Cmdliner_info.env -> err:string -> string
+
+(** {1:pos_err Positional argument errors} *)
+
+val err_pos_excess : string list -> string
+val err_pos_misses : Cmdliner_info.arg list -> string
+val err_pos_parse : Cmdliner_info.arg -> err:string -> string
+
+(** {1:opt_err Optional argument errors} *)
+
+val err_flag_value : string -> string -> string
+val err_opt_value_missing : string -> string
+val err_opt_parse : string -> err:string -> string
+val err_opt_repeated : string -> string -> string
+
+(** {1:arg_err Argument errors} *)
+
+val err_arg_missing : Cmdliner_info.arg -> string
+
+(** {1:msgs Other messages} *)
+
+val pp_version : Format.formatter -> Cmdliner_info.eval -> unit
+val pp_try_help : Format.formatter -> Cmdliner_info.eval -> unit
+val pp_err : Format.formatter -> Cmdliner_info.eval -> err:string -> unit
+val pp_err_usage :
+  Format.formatter -> Cmdliner_info.eval -> err_lines:bool -> err:string -> unit
+
+val pp_backtrace :
+  Format.formatter ->
+  Cmdliner_info.eval -> exn -> Printexc.raw_backtrace -> unit
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.ml
new file mode 100644
index 0000000000..d333604eae
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.ml
@@ -0,0 +1,54 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let levenshtein_distance s t =
+  (* As found here http://rosettacode.org/wiki/Levenshtein_distance#OCaml *)
+  let minimum a b c = min a (min b c) in
+  let m = String.length s in
+  let n = String.length t in
+  (* for all i and j, d.(i).(j) will hold the Levenshtein distance between
+     the first i characters of s and the first j characters of t *)
+  let d = Array.make_matrix (m+1) (n+1) 0 in
+  for i = 0 to m do d.(i).(0) <- i done;
+  for j = 0 to n do d.(0).(j) <- j done;
+  for j = 1 to n do
+    for i = 1 to m do
+      if s.[i-1] = t.[j-1] then
+        d.(i).(j) <- d.(i-1).(j-1)  (* no operation required *)
+      else
+        d.(i).(j) <- minimum
+            (d.(i-1).(j) + 1)   (* a deletion *)
+            (d.(i).(j-1) + 1)   (* an insertion *)
+            (d.(i-1).(j-1) + 1) (* a substitution *)
+    done;
+  done;
+  d.(m).(n)
+
+let value s candidates =
+  let add (min, acc) name =
+    let d = levenshtein_distance s name in
+    if d = min then min, (name :: acc) else
+    if d < min then d, [name] else
+    min, acc
+  in
+  let dist, suggs = List.fold_left add (max_int, []) candidates in
+  if dist < 3 (* suggest only if not too far *) then suggs else []
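Putting the two functions together, the suggestion logic can be sketched as a standalone program (an illustrative re-implementation; unlike the fold above, this version returns the tied candidates in their original order).

```ocaml
(* Sketch of "did you mean" suggestions: keep the candidates at minimal
   Levenshtein distance from the input, but only when that distance < 3. *)
let levenshtein s t =
  let m = String.length s and n = String.length t in
  let d = Array.make_matrix (m + 1) (n + 1) 0 in
  for i = 0 to m do d.(i).(0) <- i done;
  for j = 0 to n do d.(0).(j) <- j done;
  for j = 1 to n do
    for i = 1 to m do
      d.(i).(j) <-
        if s.[i - 1] = t.[j - 1] then d.(i - 1).(j - 1)
        else 1 + min d.(i - 1).(j) (min d.(i).(j - 1) d.(i - 1).(j - 1))
    done
  done;
  d.(m).(n)

let suggest near candidates =
  let add (best, acc) c =
    let d = levenshtein near c in
    if d < best then d, [c]
    else if d = best then best, c :: acc
    else best, acc
  in
  let best, s = List.fold_left add (max_int, []) candidates in
  if best < 3 then List.rev s else []

let () =
  assert (suggest "instal" ["install"; "remove"; "list"] = ["install"]);
  assert (suggest "zzz" ["install"; "remove"] = [])
```

The fixed cutoff of 3 keeps suggestions relevant: one or two edits usually indicates a typo, while anything farther away is more likely a different word entirely.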
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
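[Reviewer note, not part of the patch: the suggestion machinery above is easiest to check against a minimal standalone sketch of the same dynamic-programming recurrence. Names here are illustrative only.]

```ocaml
(* Standalone sketch of the Levenshtein recurrence used by
   Cmdliner_suggest above: d.(i).(j) is the edit distance between the
   first i chars of s and the first j chars of t. *)
let levenshtein s t =
  let m = String.length s and n = String.length t in
  let d = Array.make_matrix (m + 1) (n + 1) 0 in
  for i = 0 to m do d.(i).(0) <- i done;
  for j = 0 to n do d.(0).(j) <- j done;
  for j = 1 to n do
    for i = 1 to m do
      let cost = if s.[i-1] = t.[j-1] then 0 else 1 in
      d.(i).(j) <- min (min (d.(i-1).(j) + 1)   (* deletion *)
                            (d.(i).(j-1) + 1))  (* insertion *)
                       (d.(i-1).(j-1) + cost)   (* substitution *)
    done
  done;
  d.(m).(n)

let () =
  (* "hlep" is two substitutions away from "help", so with the
     distance-< 3 cutoff above it would be suggested. *)
  assert (levenshtein "hlep" "help" = 2)
```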
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.mli
new file mode 100644
index 0000000000..70fa815661
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_suggest.mli
@@ -0,0 +1,25 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+val value : string -> string list -> string list
+(** [value near candidates] suggests values from [candidates]
+    that are not too far from [near]. *)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_term.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_term.ml
new file mode 100644
index 0000000000..2e405bacf5
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_term.ml
@@ -0,0 +1,41 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+type term_escape =
+  [ `Error of bool * string
+  | `Help of Cmdliner_manpage.format * string option ]
+
+type 'a parser =
+  Cmdliner_info.eval -> Cmdliner_cline.t ->
+  ('a, [ `Parse of string | term_escape ]) result
+
+type 'a t = Cmdliner_info.args * 'a parser
+
+let const v = Cmdliner_info.Args.empty, (fun _ _ -> Ok v)
+let app (args_f, f) (args_v, v) =
+  Cmdliner_info.Args.union args_f args_v,
+  fun ei cl -> match (f ei cl) with
+  | Error _ as e -> e
+  | Ok f ->
+      match v ei cl with
+      | Error _ as e -> e
+      | Ok v -> Ok (f v)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
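[Reviewer note, not part of the patch: [const] and [app] above make terms an applicative functor, with parse errors short-circuiting composition. A simplified model of that pattern, with the argument-set metadata dropped and errors re-wrapped explicitly, looks like this:]

```ocaml
(* Simplified model of Cmdliner_term's applicative composition; the
   real type also carries a Cmdliner_info.args component. *)
type 'a term = unit -> ('a, string) result

let const v : 'a term = fun () -> Ok v

let app (f : ('a -> 'b) term) (v : 'a term) : 'b term = fun () ->
  match f () with
  | Error e -> Error e            (* first failure wins *)
  | Ok f ->
      match v () with
      | Error e -> Error e
      | Ok v -> Ok (f v)

let () =
  (* const/app lift a pure function over two "parsed" values. *)
  let sum = app (app (const ( + )) (const 1)) (const 2) in
  match sum () with
  | Ok n -> assert (n = 3)
  | Error _ -> assert false
```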
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_term.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_term.mli
new file mode 100644
index 0000000000..e9472f75aa
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_term.mli
@@ -0,0 +1,40 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** Terms *)
+
+type term_escape =
+  [ `Error of bool * string
+  | `Help of Cmdliner_manpage.format * string option ]
+
+type 'a parser =
+  Cmdliner_info.eval -> Cmdliner_cline.t ->
+  ('a, [ `Parse of string | term_escape ]) result
+(** The type for command line parsers. Given static information about
+    the command line and a command line to parse, it returns an OCaml value. *)
+
+type 'a t = Cmdliner_info.args * 'a parser
+(** The type for terms. The list of arguments it can parse and the parsing
+    function that does so. *)
+
+val const : 'a -> 'a t
+val app : ('a -> 'b) t -> 'a t -> 'b t
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.ml b/tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.ml
new file mode 100644
index 0000000000..0aaf53f38b
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.ml
@@ -0,0 +1,97 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+module Cmap = Map.Make (Char)                           (* character maps. *)
+
+type 'a value =                         (* type for holding a bound value. *)
+| Pre of 'a                    (* value is bound by the prefix of a key. *)
+| Key of 'a                          (* value is bound by an entire key. *)
+| Amb                     (* no value bound because of ambiguous prefix. *)
+| Nil                            (* not bound (only for the empty trie). *)
+
+type 'a t = { v : 'a value; succs : 'a t Cmap.t }
+let empty = { v = Nil; succs = Cmap.empty }
+let is_empty t = t = empty
+
+(* N.B. If we replace a non-ambiguous key, it becomes ambiguous but it's
+   not important for our use. Also the following is not tail recursive but
+   the stack is bounded by key length. *)
+let add t k d =
+  let rec loop t k len i d pre_d = match i = len with
+  | true ->
+      let t' = { v = Key d; succs = t.succs } in
+      begin match t.v with
+      | Key old -> `Replaced (old, t')
+      | _ -> `New t'
+      end
+  | false ->
+      let v = match t.v with
+      | Amb | Pre _ -> Amb | Key _ as v -> v | Nil -> pre_d
+      in
+      let t' = try Cmap.find k.[i] t.succs with Not_found -> empty in
+      match loop t' k len (i + 1) d pre_d with
+      | `New n -> `New { v; succs = Cmap.add k.[i] n t.succs }
+      | `Replaced (o, n) ->
+          `Replaced (o, { v; succs = Cmap.add k.[i] n t.succs })
+  in
+  loop t k (String.length k) 0 d (Pre d (* allocate less *))
+
+let find_node t k =
+  let rec aux t k len i =
+    if i = len then t else
+    aux (Cmap.find k.[i] t.succs) k len (i + 1)
+  in
+  aux t k (String.length k) 0
+
+let find t k =
+  try match (find_node t k).v with
+  | Key v | Pre v -> `Ok v | Amb -> `Ambiguous | Nil -> `Not_found
+  with Not_found -> `Not_found
+
+let ambiguities t p =                        (* ambiguities of [p] in [t]. *)
+  try
+    let t = find_node t p in
+    match t.v with
+    | Key _ | Pre _ | Nil -> []
+    | Amb ->
+        let add_char s c = s ^ (String.make 1 c) in
+        let rem_char s = String.sub s 0 ((String.length s) - 1) in
+        let to_list m = Cmap.fold (fun k t acc -> (k,t) :: acc) m [] in
+        let rec aux acc p = function
+        | ((c, t) :: succs) :: rest ->
+            let p' = add_char p c in
+            let acc' = match t.v with
+            | Pre _ | Amb -> acc
+            | Key _ -> (p' :: acc)
+            | Nil -> assert false
+            in
+            aux acc' p' ((to_list t.succs) :: succs :: rest)
+        | [] :: [] -> acc
+        | [] :: rest -> aux acc (rem_char p) rest
+        | [] -> assert false
+        in
+        aux [] p (to_list t.succs :: [])
+  with Not_found -> []
+
+let of_list l =
+  let add t (s, v) = match add t s v with `New t -> t | `Replaced (_, t) -> t in
+  List.fold_left add empty l
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.mli b/tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.mli
new file mode 100644
index 0000000000..01d4029177
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/cmdliner_trie.mli
@@ -0,0 +1,35 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** Tries.
+
+    This implementation also maps any unambiguous prefix of a
+    key to its value. *)
+
+type 'a t
+
+val empty : 'a t
+val is_empty : 'a t -> bool
+val add : 'a t -> string -> 'a -> [ `New of 'a t | `Replaced of 'a * 'a t ]
+val find : 'a t -> string -> [ `Ok of 'a | `Ambiguous | `Not_found ]
+val ambiguities : 'a t -> string -> string list
+val of_list : (string * 'a) list -> 'a t
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2011 Daniel C. Bünzli
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
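[Reviewer note, not part of the patch: the trie interface above is what lets cmdliner resolve unambiguous abbreviations of command and option names. A usage sketch against the declared interface, assuming the module is built and linked as Cmdliner_trie:]

```ocaml
(* Prefix lookup per the cmdliner_trie.mli signature above:
   a unique prefix resolves, a shared prefix is ambiguous. *)
let () =
  let t = Cmdliner_trie.of_list ["record", 1; "revert", 2; "help", 3] in
  assert (Cmdliner_trie.find t "h" = `Ok 3);        (* unique prefix *)
  assert (Cmdliner_trie.find t "re" = `Ambiguous);  (* record/revert *)
  assert (Cmdliner_trie.find t "record" = `Ok 1);   (* whole key *)
  assert (Cmdliner_trie.find t "x" = `Not_found)
```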
diff --git a/tools/ocaml/duniverse/cmdliner/src/dune b/tools/ocaml/duniverse/cmdliner/src/dune
new file mode 100644
index 0000000000..b9ef5c9a15
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/src/dune
@@ -0,0 +1,4 @@
+(library
+ (public_name cmdliner)
+ (flags :standard -w -3-6-27-32-35)
+ (wrapped false))
diff --git a/tools/ocaml/duniverse/cmdliner/test/chorus.ml b/tools/ocaml/duniverse/cmdliner/test/chorus.ml
new file mode 100644
index 0000000000..5cf4c20ca9
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/chorus.ml
@@ -0,0 +1,31 @@
+(* Example from the documentation, this code is in public domain. *)
+
+(* Implementation of the command *)
+
+let chorus count msg = for i = 1 to count do print_endline msg done
+
+(* Command line interface *)
+
+open Cmdliner
+
+let count =
+  let doc = "Repeat the message $(docv) times." in
+  Arg.(value & opt int 10 & info ["c"; "count"] ~docv:"COUNT" ~doc)
+
+let msg =
+  let doc = "Overrides the default message to print." in
+  let env = Arg.env_var "CHORUS_MSG" ~doc in
+  let doc = "The message to print." in
+  Arg.(value & pos 0 string "Revolt!" & info [] ~env ~docv:"MSG" ~doc)
+
+let chorus_t = Term.(const chorus $ count $ msg)
+
+let info =
+  let doc = "print a customizable message repeatedly" in
+  let man = [
+    `S Manpage.s_bugs;
+    `P "Email bug reports to <hehey at example.org>." ]
+  in
+  Term.info "chorus" ~version:"%%VERSION%%" ~doc ~exits:Term.default_exits ~man
+
+let () = Term.exit @@ Term.eval (chorus_t, info)
diff --git a/tools/ocaml/duniverse/cmdliner/test/cp_ex.ml b/tools/ocaml/duniverse/cmdliner/test/cp_ex.ml
new file mode 100644
index 0000000000..381509fba4
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/cp_ex.ml
@@ -0,0 +1,54 @@
+(* Example from the documentation, this code is in public domain. *)
+
+(* Implementation, we check the dest argument and print the args *)
+
+let cp verbose recurse force srcs dest =
+  if List.length srcs > 1 &&
+  (not (Sys.file_exists dest) || not (Sys.is_directory dest))
+  then
+    `Error (false, dest ^ " is not a directory")
+  else
+  `Ok (Printf.printf
+           "verbose = %B\nrecurse = %B\nforce = %B\nsrcs = %s\ndest = %s\n"
+           verbose recurse force (String.concat ", " srcs) dest)
+
+(* Command line interface *)
+
+open Cmdliner
+
+let verbose =
+  let doc = "Print file names as they are copied." in
+  Arg.(value & flag & info ["v"; "verbose"] ~doc)
+
+let recurse =
+  let doc = "Copy directories recursively." in
+  Arg.(value & flag & info ["r"; "R"; "recursive"] ~doc)
+
+let force =
+  let doc = "If a destination file cannot be opened, remove it and try again."in
+  Arg.(value & flag & info ["f"; "force"] ~doc)
+
+let srcs =
+  let doc = "Source file(s) to copy." in
+  Arg.(non_empty & pos_left ~rev:true 0 file [] & info [] ~docv:"SOURCE" ~doc)
+
+let dest =
+  let doc = "Destination of the copy. Must be a directory if there is more
+           than one $(i,SOURCE)." in
+  let docv = "DEST" in
+  Arg.(required & pos ~rev:true 0 (some string) None & info [] ~docv ~doc)
+
+let cmd =
+  let doc = "copy files" in
+  let man_xrefs =
+    [ `Tool "mv"; `Tool "scp"; `Page ("umask", 2); `Page ("symlink", 7) ]
+  in
+  let exits = Term.default_exits in
+  let man =
+    [ `S Manpage.s_bugs;
+      `P "Email them to <hehey at example.org>."; ]
+  in
+  Term.(ret (const cp $ verbose $ recurse $ force $ srcs $ dest)),
+  Term.info "cp" ~version:"%%VERSION%%" ~doc ~exits ~man ~man_xrefs
+
+let () = Term.(exit @@ eval cmd)
diff --git a/tools/ocaml/duniverse/cmdliner/test/darcs_ex.ml b/tools/ocaml/duniverse/cmdliner/test/darcs_ex.ml
new file mode 100644
index 0000000000..a7fd5196d7
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/darcs_ex.ml
@@ -0,0 +1,149 @@
+(* Example from the documentation, this code is in public domain. *)
+
+(* Implementations, just print the args. *)
+
+type verb = Normal | Quiet | Verbose
+type copts = { debug : bool; verb : verb; prehook : string option }
+
+let str = Printf.sprintf
+let opt_str sv = function None -> "None" | Some v -> str "Some(%s)" (sv v)
+let opt_str_str = opt_str (fun s -> s)
+let verb_str = function
+  | Normal -> "normal" | Quiet -> "quiet" | Verbose -> "verbose"
+
+let pr_copts oc copts = Printf.fprintf oc
+    "debug = %B\nverbosity = %s\nprehook = %s\n"
+    copts.debug (verb_str copts.verb) (opt_str_str copts.prehook)
+
+let initialize copts repodir = Printf.printf
+    "%arepodir = %s\n" pr_copts copts repodir
+
+let record copts name email all ask_deps files = Printf.printf
+    "%aname = %s\nemail = %s\nall = %B\nask-deps = %B\nfiles = %s\n"
+    pr_copts copts (opt_str_str name) (opt_str_str email) all ask_deps
+    (String.concat ", " files)
+
+let help copts man_format cmds topic = match topic with
+| None -> `Help (`Pager, None) (* help about the program. *)
+| Some topic ->
+    let topics = "topics" :: "patterns" :: "environment" :: cmds in
+    let conv, _ = Cmdliner.Arg.enum (List.rev_map (fun s -> (s, s)) topics) in
+    match conv topic with
+    | `Error e -> `Error (false, e)
+    | `Ok t when t = "topics" -> List.iter print_endline topics; `Ok ()
+    | `Ok t when List.mem t cmds -> `Help (man_format, Some t)
+    | `Ok t ->
+        let page = (topic, 7, "", "", ""), [`S topic; `P "Say something";] in
+        `Ok (Cmdliner.Manpage.print man_format Format.std_formatter page)
+
+open Cmdliner
+
+(* Help sections common to all commands *)
+
+let help_secs = [
+ `S Manpage.s_common_options;
+ `P "These options are common to all commands.";
+ `S "MORE HELP";
+ `P "Use `$(mname) $(i,COMMAND) --help' for help on a single command.";`Noblank;
+ `P "Use `$(mname) help patterns' for help on patch matching."; `Noblank;
+ `P "Use `$(mname) help environment' for help on environment variables.";
+ `S Manpage.s_bugs; `P "Check bug reports at http://bugs.example.org.";]
+
+(* Options common to all commands *)
+
+let copts debug verb prehook = { debug; verb; prehook }
+let copts_t =
+  let docs = Manpage.s_common_options in
+  let debug =
+    let doc = "Give only debug output." in
+    Arg.(value & flag & info ["debug"] ~docs ~doc)
+  in
+  let verb =
+    let doc = "Suppress informational output." in
+    let quiet = Quiet, Arg.info ["q"; "quiet"] ~docs ~doc in
+    let doc = "Give verbose output." in
+    let verbose = Verbose, Arg.info ["v"; "verbose"] ~docs ~doc in
+    Arg.(last & vflag_all [Normal] [quiet; verbose])
+  in
+  let prehook =
+    let doc = "Specify command to run before this $(mname) command." in
+    Arg.(value & opt (some string) None & info ["prehook"] ~docs ~doc)
+  in
+  Term.(const copts $ debug $ verb $ prehook)
+
+(* Commands *)
+
+let initialize_cmd =
+  let repodir =
+    let doc = "Run the program in repository directory $(docv)." in
+    Arg.(value & opt file Filename.current_dir_name & info ["repodir"]
+           ~docv:"DIR" ~doc)
+  in
+  let doc = "make the current directory a repository" in
+  let exits = Term.default_exits in
+  let man = [
+    `S Manpage.s_description;
+    `P "Turns the current directory into a Darcs repository. Any
+       existing files and subdirectories become ...";
+    `Blocks help_secs; ]
+  in
+  Term.(const initialize $ copts_t $ repodir),
+  Term.info "initialize" ~doc ~sdocs:Manpage.s_common_options ~exits ~man
+
+let record_cmd =
+  let pname =
+    let doc = "Name of the patch." in
+    Arg.(value & opt (some string) None & info ["m"; "patch-name"] ~docv:"NAME"
+           ~doc)
+  in
+  let author =
+    let doc = "Specifies the author's identity." in
+    Arg.(value & opt (some string) None & info ["A"; "author"] ~docv:"EMAIL"
+           ~doc)
+  in
+  let all =
+    let doc = "Answer yes to all patches." in
+    Arg.(value & flag & info ["a"; "all"] ~doc)
+  in
+  let ask_deps =
+    let doc = "Ask for extra dependencies." in
+    Arg.(value & flag & info ["ask-deps"] ~doc)
+  in
+  let files = Arg.(value & (pos_all file) [] & info [] ~docv:"FILE or DIR") in
+  let doc = "create a patch from unrecorded changes" in
+  let exits = Term.default_exits in
+  let man =
+    [`S Manpage.s_description;
+     `P "Creates a patch from changes in the working tree. If you specify
+         a set of files ...";
+     `Blocks help_secs; ]
+  in
+  Term.(const record $ copts_t $ pname $ author $ all $ ask_deps $ files),
+  Term.info "record" ~doc ~sdocs:Manpage.s_common_options ~exits ~man
+
+let help_cmd =
+  let topic =
+    let doc = "The topic to get help on. `topics' lists the topics." in
+    Arg.(value & pos 0 (some string) None & info [] ~docv:"TOPIC" ~doc)
+  in
+  let doc = "display help about darcs and darcs commands" in
+  let exits = Term.default_exits in
+  let man =
+    [`S Manpage.s_description;
+     `P "Prints help about darcs commands and other subjects...";
+     `Blocks help_secs; ]
+  in
+  Term.(ret (const help $ copts_t $ Arg.man_format $ Term.choice_names $topic)),
+  Term.info "help" ~doc ~exits ~man
+
+let default_cmd =
+  let doc = "a revision control system" in
+  let sdocs = Manpage.s_common_options in
+  let exits = Term.default_exits in
+  let man = help_secs in
+  Term.(ret (const (fun _ -> `Help (`Pager, None)) $ copts_t)),
+  Term.info "darcs" ~version:"%%VERSION%%" ~doc ~sdocs ~exits ~man
+
+let cmds = [initialize_cmd; record_cmd; help_cmd]
+
+let () = Term.(exit @@ eval_choice default_cmd cmds)
diff --git a/tools/ocaml/duniverse/cmdliner/test/dune b/tools/ocaml/duniverse/cmdliner/test/dune
new file mode 100644
index 0000000000..012c36aebf
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/dune
@@ -0,0 +1,12 @@
+(executables
+ (names test_man
+        test_man_utf8
+        test_pos
+        test_pos_rev
+        test_pos_all
+        test_pos_left
+        test_pos_req
+        test_opt_req
+        test_term_dups
+        test_with_used_args)
+ (libraries cmdliner))
diff --git a/tools/ocaml/duniverse/cmdliner/test/revolt.ml b/tools/ocaml/duniverse/cmdliner/test/revolt.ml
new file mode 100644
index 0000000000..f372e1d497
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/revolt.ml
@@ -0,0 +1,9 @@
+(* Example from the documentation, this code is in public domain. *)
+
+let revolt () = print_endline "Revolt!"
+
+open Cmdliner
+
+let revolt_t = Term.(const revolt $ const ())
+
+let () = Term.(exit @@ eval (revolt_t, Term.info "revolt"))
diff --git a/tools/ocaml/duniverse/cmdliner/test/rm_ex.ml b/tools/ocaml/duniverse/cmdliner/test/rm_ex.ml
new file mode 100644
index 0000000000..ba6344a886
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/rm_ex.ml
@@ -0,0 +1,53 @@
+(* Example from the documentation, this code is in public domain. *)
+
+(* Implementation of the command, we just print the args. *)
+
+type prompt = Always | Once | Never
+let prompt_str = function
+| Always -> "always" | Once -> "once" | Never -> "never"
+
+let rm prompt recurse files =
+  Printf.printf "prompt = %s\nrecurse = %B\nfiles = %s\n"
+    (prompt_str prompt) recurse (String.concat ", " files)
+
+(* Command line interface *)
+
+open Cmdliner
+
+let files = Arg.(non_empty & pos_all file [] & info [] ~docv:"FILE")
+let prompt =
+  let doc = "Prompt before every removal." in
+  let always = Always, Arg.info ["i"] ~doc in
+  let doc = "Ignore nonexistent files and never prompt." in
+  let never = Never, Arg.info ["f"; "force"] ~doc in
+  let doc = "Prompt once before removing more than three files, or when
+             removing recursively. Less intrusive than $(b,-i), while
+             still giving protection against most mistakes."
+  in
+  let once = Once, Arg.info ["I"] ~doc in
+  Arg.(last & vflag_all [Always] [always; never; once])
+
+let recursive =
+  let doc = "Remove directories and their contents recursively." in
+  Arg.(value & flag & info ["r"; "R"; "recursive"] ~doc)
+
+let cmd =
+  let doc = "remove files or directories" in
+  let man = [
+    `S Manpage.s_description;
+    `P "$(tname) removes each specified $(i,FILE). By default it does not
+        remove directories, to also remove them and their contents, use the
+        option $(b,--recursive) ($(b,-r) or $(b,-R)).";
+    `P "To remove a file whose name starts with a `-', for example
+        `-foo', use one of these commands:";
+    `Pre "$(mname) -- -foo\n\
+          $(mname) ./-foo";
+    `P "$(tname) removes symbolic links, not the files referenced by the
+        links.";
+    `S Manpage.s_bugs; `P "Report bugs to <hehey at example.org>.";
+    `S Manpage.s_see_also; `P "$(b,rmdir)(1), $(b,unlink)(2)" ]
+  in
+  Term.(const rm $ prompt $ recursive $ files),
+  Term.info "rm" ~version:"%%VERSION%%" ~doc ~exits:Term.default_exits ~man
+
+let () = Term.(exit @@ eval cmd)
diff --git a/tools/ocaml/duniverse/cmdliner/test/tail_ex.ml b/tools/ocaml/duniverse/cmdliner/test/tail_ex.ml
new file mode 100644
index 0000000000..3786ee2750
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/tail_ex.ml
@@ -0,0 +1,73 @@
+(* Example from the documentation, this code is in public domain. *)
+
+(* Implementation of the command, we just print the args. *)
+
+type loc = bool * int
+type verb = Verbose | Quiet
+type follow = Name | Descriptor
+
+let str = Printf.sprintf
+let opt_str sv = function None -> "None" | Some v -> str "Some(%s)" (sv v)
+let loc_str (rev, k) = if rev then str "%d" k else str "+%d" k
+let follow_str = function Name -> "name" | Descriptor -> "descriptor"
+let verb_str = function Verbose -> "verbose" | Quiet -> "quiet"
+
+let tail lines follow verb pid files =
+  Printf.printf "lines = %s\nfollow = %s\nverb = %s\npid = %s\nfiles = %s\n"
+    (loc_str lines) (opt_str follow_str follow) (verb_str verb)
+    (opt_str string_of_int pid) (String.concat ", " files)
+
+(* Command line interface *)
+
+open Cmdliner
+
+let lines =
+  let loc =
+    let parse s =
+      try
+        if s <> "" && s.[0] <> '+' then Ok (true, int_of_string s) else
+        Ok (false, int_of_string (String.sub s 1 (String.length s - 1)))
+      with Failure _ -> Error (`Msg "unable to parse integer")
+    in
+    let print ppf p = Format.fprintf ppf "%s" (loc_str p) in
+    Arg.conv ~docv:"N" (parse, print)
+  in
+  Arg.(value & opt loc (true, 10) & info ["n"; "lines"] ~docv:"N"
+         ~doc:"Output the last $(docv) lines or use $(i,+)$(docv) to start
+               output after the $(i,N)-1th line.")
+let follow =
+  let doc = "Output appended data as the file grows. $(docv) specifies how the
+             file should be tracked, by its `name' or by its `descriptor'." in
+  let follow = Arg.enum ["name", Name; "descriptor", Descriptor] in
+  Arg.(value & opt (some follow) ~vopt:(Some Descriptor) None &
+       info ["f"; "follow"] ~docv:"ID" ~doc)
+
+let verb =
+  let doc = "Never output headers giving file names." in
+  let quiet = Quiet, Arg.info ["q"; "quiet"; "silent"] ~doc in
+  let doc = "Always output headers giving file names." in
+  let verbose = Verbose, Arg.info ["v"; "verbose"] ~doc in
+  Arg.(last & vflag_all [Quiet] [quiet; verbose])
+
+let pid =
+  let doc = "With -f, terminate after process $(docv) dies." in
+  Arg.(value & opt (some int) None & info ["pid"] ~docv:"PID" ~doc)
+
+let files = Arg.(value & (pos_all non_dir_file []) & info [] ~docv:"FILE")
+
+let cmd =
+  let doc = "display the last part of a file" in
+  let man = [
+    `S Manpage.s_description;
+    `P "$(tname) prints the last lines of each $(i,FILE) to standard output. If
+        no file is specified, it reads standard input. The number of printed
+        lines can be specified with the $(b,-n) option.";
+    `S Manpage.s_bugs;
+    `P "Report them to <hehey at example.org>.";
+    `S Manpage.s_see_also;
+    `P "$(b,cat)(1), $(b,head)(1)" ]
+  in
+  Term.(const tail $ lines $ follow $ verb $ pid $ files),
+  Term.info "tail" ~version:"%%VERSION%%" ~doc ~exits:Term.default_exits ~man
+
+let () = Term.(exit @@ eval cmd)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_man.ml b/tools/ocaml/duniverse/cmdliner/test/test_man.ml
new file mode 100644
index 0000000000..46822c8275
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_man.ml
@@ -0,0 +1,100 @@
+
+open Cmdliner
+
+let hey =
+  let doc = "Equivalent to set $(opt)." in
+  let env = Arg.env_var "TEST_ENV" ~doc in
+  let doc = "Set hey." in
+  Arg.(value & flag & info ["hey"; "y"] ~env ~doc)
+
+let repodir =
+  let doc = "See option $(opt)." in
+  let env = Arg.env_var "TEST_REPODDIR" ~doc in
+  let doc = "Run the program in repository directory $(docv)." in
+  Arg.(value & opt file Filename.current_dir_name & info ["repodir"] ~env
+         ~docv:"DIR" ~doc)
+
+let id =
+  let doc = "See option $(opt)." in
+  let env = Arg.env_var "TEST_ID" ~doc in
+  let doc = "Whatever $(docv) bla $(env) and $(opt)." in
+  Arg.(value & opt int ~vopt:10 0 & info ["id"; "i"] ~env ~docv:"ID)" ~doc)
+
+let miaouw =
+  let doc = "See option $(opt). These are term names $(mname) $(tname)" in
+  let docs = "MIAOUW SECTION (non-standard unpositioned do not do this)" in
+  let env = Arg.env_var "TEST_MIAOUW" ~doc ~docs in
+  let doc = "Whatever this is the doc var $(docv) this is the env var $(env) \
+             this is the opt $(opt) and this is $(i,italic) and this is
+             $(b,bold) and this $(b,\\$(opt\\)) is \\$(opt) in bold and this
+             \\$ is a dollar. $(mname) is the main term name, $(tname) is the
+             term name."
+  in
+  Arg.(value & opt string "miaouw" & info ["m";] ~env ~docv:"MIAOUW" ~doc)
+
+let test hey repodir id miaouw =
+  Format.printf "hey: %B@.repodir: %s@.id: %d@.miaouw: %s@."
+    hey repodir id miaouw
+
+let man_test_t = Term.(const test $ hey $ repodir $ id $ miaouw)
+
+let info =
+  let doc = "print a customizable message repeatedly" in
+  let envs = [ Term.env_info "TEST_IT" ~doc:"This is $(env) for $(tname)" ] in
+  let exits = [ Term.exit_info ~doc:"This is a $(status) for $(tname)" 1;
+                Term.exit_info ~doc:"Ranges from $(status) to $(status_max)"
+                  ~max:10 2; ] @ Term.default_exits
+  in
+  let man = [
+    `S "THIS IS A SECTION FOR $(mname)";
+    `P "$(mname) subst at begin and end $(mname)";
+    `P "$(i,italic) and $(b,bold)";
+    `P "\\$ escaped \\$\\$ escaped \\$";
+    `P "This does not fail \\$(a)";
+    `P ". this is a paragraph starting with a dot.";
+    `P "' this is a paragraph starting with a quote.";
+    `P "This: \\\\(rs is a backslash for groff and you should not see a \\\\";
+    `P "This: \\\\N'46' is a quote for groff and you should not see a '";
+    `P "This: \\\\\"  is a groff comment and it should not be one.";
+    `P "This is a non preformatted paragraph, filling will occur. This will
+        be properly layout on 80 columns.";
+    `Pre "This is a preformatted paragraph for $(mname) no filling will \
+          occur do the $(i,ASCII) art $(b,here) this will overflow on 80 \
+          columns \n\
+          01234556789\
+          01234556789\
+          01234556789\
+          01234556789\
+          01234556789\
+          01234556789\
+          01234556789\
+          01234556789\n\n\
+          ... Should not break\n\
+          a... Should not break\n\
+          +---+\n\
+          |  /|\n\
+          | / | ----> Let's swim to the moon.\n\
+          |/  |\n\
+          +---+";
+    `P "These are escapes escaped \\$ \\( \\) \\\\";
+    `P "() does not need to be escaped outside directives.";
+    `Blocks [
+      `P "The following two paragraphs are spliced in.";
+      `P "This dollar needs escape \\$(var) this one as well $(b,\\$(bla\\))";
+      `P "This is another paragraph \\$(bla) $(i,\\$(bla\\)) $(b,\\$\\(bla\\))";
+    ];
+    `Noblank;
+    `Pre "This is another preformatted paragraph.\n\
+          There should be no blanks before and after it.";
+    `Noblank;
+    `P "Hey ho";
+    `I ("label", "item label");
+    `I ("lebal", "item lebal");
+    `P "The last paragraph";
+    `S Manpage.s_bugs;
+    `P "Email bug reports to <hehey at example.org>.";]
+  in
+  let man_xrefs = [`Page ("ascii", 7); `Main; `Tool "grep";] in
+  Term.info "man_test" ~version:"%%VERSION%%" ~doc ~envs ~exits ~man ~man_xrefs
+
+let () = Term.exit @@ Term.eval (man_test_t, info)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_man_utf8.ml b/tools/ocaml/duniverse/cmdliner/test/test_man_utf8.ml
new file mode 100644
index 0000000000..e4112a9caf
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_man_utf8.ml
@@ -0,0 +1,11 @@
+open Cmdliner
+
+let nop () = print_endline "It's the manual that is of interest."
+
+
+let test_pos =
+  Term.(const nop $ const ()),
+  Term.info "test_pos"
+    ~doc:"UTF-8 test: íöüóőúűéáăîâșț ÍÜÓŐÚŰÉÁĂÎÂȘȚ 雙峰駱駝"
+
+let () = Term.(exit @@ eval test_pos)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_opt_req.ml b/tools/ocaml/duniverse/cmdliner/test/test_opt_req.ml
new file mode 100644
index 0000000000..4cb525deba
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_opt_req.ml
@@ -0,0 +1,13 @@
+open Cmdliner
+
+let opt o = print_endline o
+
+let test_opt =
+  let req =
+    Arg.(required & opt (some string) None & info ["r"; "req"] ~docv:"ARG")
+  in
+  Term.(const opt $ req),
+  Term.info "test_opt_req"
+    ~doc:"Test optional required arguments (don't do this)"
+
+let () = Term.(exit @@ eval test_opt)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_pos.ml b/tools/ocaml/duniverse/cmdliner/test/test_pos.ml
new file mode 100644
index 0000000000..fd6e101f48
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_pos.ml
@@ -0,0 +1,13 @@
+open Cmdliner
+
+let pos l t r =
+  print_endline (String.concat "\n" (l @ ["--"; t; "--"] @ r))
+
+let test_pos =
+  let l = Arg.(value & pos_left 2 string [] & info [] ~docv:"LEFT") in
+  let t = Arg.(value & pos 2 string "undefined" & info [] ~docv:"TWO") in
+  let r = Arg.(value & pos_right 2 string [] & info [] ~docv:"RIGHT") in
+  Term.(const pos $ l $ t $ r),
+  Term.info "test_pos" ~doc:"Test pos arguments"
+
+let () = Term.(exit @@ eval test_pos)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_pos_all.ml b/tools/ocaml/duniverse/cmdliner/test/test_pos_all.ml
new file mode 100644
index 0000000000..b5dc7082a8
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_pos_all.ml
@@ -0,0 +1,11 @@
+open Cmdliner
+
+let pos_all all = print_endline (String.concat "\n" all)
+
+let test_pos_all =
+  let docv = "THEARG" in
+  let all = Arg.(value & pos_all string [] & info [] ~docv) in
+  Term.(const pos_all $ all),
+  Term.info "test_pos_all" ~doc:"Test pos all"
+
+let () = Term.(exit @@ eval test_pos_all)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_pos_left.ml b/tools/ocaml/duniverse/cmdliner/test/test_pos_left.ml
new file mode 100644
index 0000000000..90e4fbe13c
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_pos_left.ml
@@ -0,0 +1,11 @@
+open Cmdliner
+
+let pos l =
+  print_endline (String.concat "\n" l)
+
+let test_pos_left =
+  let l = Arg.(value & pos_left 2 string [] & info [] ~docv:"LEFT") in
+  Term.(const pos $ l),
+  Term.info "test_pos" ~doc:"Test pos left"
+
+let () = Term.(exit @@ eval test_pos_left)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_pos_req.ml b/tools/ocaml/duniverse/cmdliner/test/test_pos_req.ml
new file mode 100644
index 0000000000..282a77a877
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_pos_req.ml
@@ -0,0 +1,15 @@
+open Cmdliner
+
+let pos r a1 a0 a2  =
+  print_endline (String.concat "\n" ([a0; a1; a2; "--"] @ r))
+
+let test_pos =
+  let req p =
+    let docv = Printf.sprintf "ARG%d" p in
+    Arg.(required & pos p (some string) None & info [] ~docv)
+  in
+  let right = Arg.(non_empty & pos_right 2 string [] & info [] ~docv:"RIGHT") in
+  Term.(const pos $ right $ req 1 $ req 0 $ req 2),
+  Term.info "test_pos_req" ~doc:"Test pos req arguments"
+
+let () = Term.(exit @@ eval test_pos)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_pos_rev.ml b/tools/ocaml/duniverse/cmdliner/test/test_pos_rev.ml
new file mode 100644
index 0000000000..d8321aa861
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_pos_rev.ml
@@ -0,0 +1,14 @@
+open Cmdliner
+
+let pos l t r =
+  print_endline (String.concat "\n" (l @ ["--"; t; "--"] @ r))
+
+let test_pos =
+  let rev = true in
+  let l = Arg.(value & pos_left ~rev 2 string [] & info [] ~docv:"LEFT") in
+  let t = Arg.(value & pos ~rev 2 string "undefined" & info [] ~docv:"TWO") in
+  let r = Arg.(value & pos_right ~rev 2 string [] & info [] ~docv:"RIGHT") in
+  Term.(const pos $ l $ t $ r),
+  Term.info "test_pos" ~doc:"Test pos rev arguments"
+
+let () = Term.(exit @@ eval test_pos)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_term_dups.ml b/tools/ocaml/duniverse/cmdliner/test/test_term_dups.ml
new file mode 100644
index 0000000000..c6462761c7
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_term_dups.ml
@@ -0,0 +1,19 @@
+open Cmdliner
+
+let dups p p_dup o o_dup =
+  let b = string_of_bool in
+  print_endline (String.concat "\n" [p; p_dup; b o; b o_dup;])
+
+let test_pos =
+  let p =
+    let doc = "First pos argument should show up only once in the docs" in
+    Arg.(value & pos 0 string "undefined" & info [] ~doc ~docv:"POS")
+  in
+  let o =
+    let doc = "This should show up only once in the docs" in
+    Arg.(value & flag & info ["f"; "flag"] ~doc)
+  in
+  Term.(const dups $ p $ p $ o $ o),
+  Term.info "test_term_dups" ~doc:"Test multiple term usage"
+
+let () = Term.(exit @@ eval test_pos)
diff --git a/tools/ocaml/duniverse/cmdliner/test/test_with_used_args.ml b/tools/ocaml/duniverse/cmdliner/test/test_with_used_args.ml
new file mode 100644
index 0000000000..0a45d07775
--- /dev/null
+++ b/tools/ocaml/duniverse/cmdliner/test/test_with_used_args.ml
@@ -0,0 +1,18 @@
+open Cmdliner
+
+let print_args ((), args) _other =
+  print_endline (String.concat " " args)
+
+let test_pos_left =
+  let a = Arg.(value & flag & info ["a"; "aaa"]) in
+  let b = Arg.(value & opt (some string) None & info ["b"; "bbb"]) in
+  let c = Arg.(value & pos_all string [] & info []) in
+  let main =
+    let ignore_values _a _b _c = () in
+    Term.(with_used_args (const ignore_values $ a $ b $ c))
+  in
+  let other = Arg.(value & flag & info ["other"]) in
+  Term.(const print_args $ main $ other),
+  Term.info "test_capture" ~doc:"Test pos left"
+
+let () = Term.(exit @@ eval test_pos_left)
diff --git a/tools/ocaml/duniverse/cppo/.gitignore b/tools/ocaml/duniverse/cppo/.gitignore
new file mode 100644
index 0000000000..1d0dd35c65
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/.gitignore
@@ -0,0 +1,5 @@
+*~
+_build
+.merlin
+*.install
+.*.swp
diff --git a/tools/ocaml/duniverse/cppo/.ocp-indent b/tools/ocaml/duniverse/cppo/.ocp-indent
new file mode 100644
index 0000000000..fb580a5b4b
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/.ocp-indent
@@ -0,0 +1,22 @@
+# See https://github.com/OCamlPro/ocp-indent/blob/master/.ocp-indent for more
+
+# Indent for clauses inside a pattern-match (after the arrow):
+#    match foo with
+#    | _ ->
+#    ^^^^bar
+# the default is 2, which aligns the pattern and the expression
+match_clause = 4
+
+# When nesting expressions on the same line, their indentation is in
+# some cases stacked, so that it remains correct if you close them one
+# per line. This may lead to large indents in complex code though, so
+# this parameter can be used to set a maximum value. Note that it only
+# affects indentation after function arrows and opening parens at end
+# of line.
+#
+# for example (left: `none`; right: `4`)
+#    let f = g (h (i (fun x ->     #    let f = g (h (i (fun x ->
+#          x)                      #        x)
+#        )                         #      )
+#      )                           #    )
+max_indent = 2
diff --git a/tools/ocaml/duniverse/cppo/.travis.yml b/tools/ocaml/duniverse/cppo/.travis.yml
new file mode 100644
index 0000000000..1f17d1158d
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/.travis.yml
@@ -0,0 +1,16 @@
+language: c
+sudo: required
+install: wget https://raw.githubusercontent.com/ocaml/ocaml-ci-scripts/master/.travis-opam.sh
+script: bash -ex .travis-opam.sh
+env:
+  global:
+  - PACKAGE=cppo
+  matrix:
+  - OCAML_VERSION=4.03
+  - OCAML_VERSION=4.04
+  - OCAML_VERSION=4.05
+  - OCAML_VERSION=4.06
+  - OCAML_VERSION=4.07
+os:
+  - linux
+  - osx
diff --git a/tools/ocaml/duniverse/cppo/CODEOWNERS b/tools/ocaml/duniverse/cppo/CODEOWNERS
new file mode 100644
index 0000000000..2a7c825096
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/CODEOWNERS
@@ -0,0 +1,8 @@
+# We're looking for one or more volunteers to take the lead of cppo,
+# with the help of ocaml-community.
+#
+# Call for volunteers: https://github.com/ocaml-community/meta/issues/27
+# About ocaml-community: https://github.com/ocaml-community/meta
+#
+# Interim maintainers who won't be very responsive :-(
+* @mjambon @pmetzger
diff --git a/tools/ocaml/duniverse/cppo/Changes b/tools/ocaml/duniverse/cppo/Changes
new file mode 100644
index 0000000000..48581b7c1c
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/Changes
@@ -0,0 +1,85 @@
+## v1.6.7 (2020-12-21)
+- [compat] Treat ~ and - the same in semver in order to parse
+           OCaml 4.12.0 pre-release versions.
+- [compat] Restore 4.02.3 compatibility.
+
+## v1.6.6 (2019-05-27)
+- [pkg] port build system to dune from jbuilder.
+- [pkg] upgrade opam metadata to 2.0 format.
+- [pkg] remove topkg and use dune-release.
+- [compat] Use `String.capitalize_ascii` to remove warning.
+
+## v1.6.5 (2018-09-12)
+- [bug] Fix 'asr' operator (#61)
+
+## v1.6.4 (2018-02-26)
+- [compat] Tests should now work with older versions of jbuilder.
+
+## v1.6.3 (2018-02-21)
+- [compat] Fix tests.
+
+## v1.6.1 (2018-01-25)
+- [compat] Emit line directives always containing the file name,
+           as mandated starting with ocaml 4.07.
+
+## v1.6.0 (2017-08-07)
+- [pkg] BREAKING: cppo and cppo_ocamlbuild are now two distinct opam
+        packages.
+
+## v1.5.0 (2017-04-24)
+- [+ui] Added the `CAPITALIZE()` function.
+
+## v1.4.0 (2016-08-19)
+- [compat] Cppo is now safe-string ready.
+
+## v1.3.2 (2016-04-20)
+- [pkg] Cppo can now be built on MSVC.
+
+## v1.3.1 (2015-09-20)
+- [bug] Possible to have #endif between two matching parentheses.
+
+## v1.3.0 (2015-09-13)
+- [+ui] Removed the need for escaping commas and parentheses in macros.
+- [+ui] Blanks are now allowed in argument lists in macro definitions.
+- [+ui] A #directive with wrong arguments now gives a proper error.
+- [bug] Fixed expansion of __FILE__ and __LINE__.
+
+## v1.1.2 (2014-11-10)
+- [+ui] Ocamlbuild_cppo: added the ocamlbuild flag `cppo_V(NAME:VERSION)`,
+        equivalent to `-V NAME:VERSION` (for _tags file).
+
+## v1.1.1 (2014-11-10)
+- [+ui] Ocamlbuild_cppo: added the ocamlbuild flag `cppo_V_OCAML`,
+        equivalent to `-V OCAML:VERSION` (for _tags file).
+
+## v1.1.0 (2014-11-04)
+- [+ui] Added the `-V NAME:VERSION` option.
+- [+ui] Support for tuples in comparisons: tuples can be constructed
+        and compared, e.g. `#if (2 + 2, 5) < (4, 5)`.
+
+## v1.0.1 (2014-10-20)
+- [+ui] `#elif` and `#else` can now be used in the same #if-#else statement.
+- [bug] Fixed the Ocamlbuild flag `cppo_n`.
+
+## v1.0.0 (2014-09-06)
+- [bug] OCaml comments are now better parsed. For example, (* '"' *) works.
+
+## v0.9.4 (2014-06-10)
+- [+ui] Added the ocamlbuild_cppo plugin for Ocamlbuild. To use it:
+        `-plugin(cppo_ocamlbuild)`.
+
+## v0.9.3 (2012-02-03)
+- [pkg] New way of building the tar.gz archive.
+
+## v0.9.2 (2011-08-12)
+- [+ui] Added two predefined macros STRINGIFY and CONCAT for making
+        string literals and for building identifiers respectively.
+
+## v0.9.1 (2011-07-20)
+- [+ui] Added support for processing sections of files using external programs
+        (#ext/#endext, -x option)
+- [doc] Moved and extended documentation into the README file.
+
+## v0.9.0 (2009-11-17)
+
+- initial public release
diff --git a/tools/ocaml/duniverse/cppo/INSTALL.md b/tools/ocaml/duniverse/cppo/INSTALL.md
new file mode 100644
index 0000000000..ce1da139a0
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/INSTALL.md
@@ -0,0 +1,17 @@
+Installation instructions for cppo
+==================================
+
+Building cppo requires GNU Make and a standard OCaml
+installation. It can be installed with opam or manually as follows:
+
+Build:
+
+```
+make
+```
+
+Install:
+
+```
+make DESTDIR=/some/path install
+```
diff --git a/tools/ocaml/duniverse/cppo/LICENSE.md b/tools/ocaml/duniverse/cppo/LICENSE.md
new file mode 100644
index 0000000000..f1725ba4ef
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/LICENSE.md
@@ -0,0 +1,24 @@
+Copyright (c) 2009-2011 Martin Jambon
+All rights reserved.
+
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions
+are met:
+1. Redistributions of source code must retain the above copyright
+   notice, this list of conditions and the following disclaimer.
+2. Redistributions in binary form must reproduce the above copyright
+   notice, this list of conditions and the following disclaimer in the
+   documentation and/or other materials provided with the distribution.
+3. The name of the author may not be used to endorse or promote products
+   derived from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE AUTHOR ``AS IS'' AND ANY EXPRESS OR
+IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES
+OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED.
+IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR ANY DIRECT, INDIRECT,
+INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT
+NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
+DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
+THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF
+THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/tools/ocaml/duniverse/cppo/Makefile b/tools/ocaml/duniverse/cppo/Makefile
new file mode 100644
index 0000000000..c69d27e470
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/Makefile
@@ -0,0 +1,18 @@
+all:
+	@dune build
+
+test:
+	@dune runtest
+
+install:
+	@dune install
+
+uninstall:
+	@dune uninstall
+
+check: test
+
+.PHONY: clean all check test install uninstall
+
+clean:
+	dune clean
diff --git a/tools/ocaml/duniverse/cppo/README.md b/tools/ocaml/duniverse/cppo/README.md
new file mode 100644
index 0000000000..8d5093af17
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/README.md
@@ -0,0 +1,521 @@
+[![Build status](https://ci.appveyor.com/api/projects/status/ft3167hf8yr2n5d3?svg=true)](https://ci.appveyor.com/project/Chris00/cppo-pnjtx)
+
+Cppo: cpp for OCaml
+===================
+
+Cppo is an equivalent of the C preprocessor for OCaml programs.
+It allows the definition of simple macros and file inclusion.
+
+Cppo is:
+
+* more OCaml-friendly than cpp
+* easy to learn without consulting a manual
+* reasonably fast
+* simple to install and to maintain
+
+User guide
+----------
+
+Cppo is a preprocessor for programming languages that follow lexical rules
+compatible with OCaml, including OCaml-style comments `(* ... *)`. These
+include ocamllex, ocamlyacc, Menhir, and extensions of OCaml based on
+Camlp4, Camlp5, or ppx. Cppo should work with BuckleScript as well. It
+won't work so well with Reason code, because Reason uses C-style comment
+delimiters `/*` and `*/`.
+
+Cppo supports a number of directives. A directive is a `#` sign placed
+at the beginning of a line, possibly preceded by some whitespace, and followed
+by a valid directive name or by a number:
+
+```ocaml
+BLANK* "#" BLANK* ("define"|"undef"
+                  |"if"|"ifdef"|"ifndef"|"else"|"elif"|"endif"
+                  |"include"
+                  |"warning"|"error"
+                  |"ext"|"endext") ...
+```
+
+Directives can be split into multiple lines by placing a backslash `\` at
+the end of the line to be continued. In general, any special character
+can be used as a normal character by preceding it with a backslash.
+
+
+File inclusion
+--------------
+
+```ocaml
+#include "hello.ml"
+```
+
+This is how a source file `hello.ml` can be included.
+Relative paths are searched first in the directory of the current file
+and then in the search paths added on the command line using `-I`, if any.
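+
+For example, assuming the included files live in a sibling `lib/`
+directory (a hypothetical layout), the search path can be extended on
+the command line:
+
+```bash
+cppo -I lib main.cppo.ml -o main.ml
+```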
+
+
+Macros
+------
+
+This is a simple macro that doesn't take an argument ("object-like
+macro" in the cpp jargon):
+
+```ocaml
+#define Ms Mississippi
+
+match state with
+    Ms -> true
+  | _ -> false
+```
+
+After preprocessing by cppo, the code above becomes:
+
+```ocaml
+match state with
+    Mississippi -> true
+  | _ -> false
+```
+
+If needed, defined macros can be undefined. This is required prior to
+redefining a macro:
+
+```ocaml
+#undef X
+```
+
+An important distinction with cpp is that only previously-defined
+macros are accessible. Defining, undefining or redefining a macro has
+no effect on how previous macros will expand.
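+
+For instance, in this sketch the first use keeps the first definition,
+since each expansion only sees the definitions made before it:
+
+```ocaml
+#define GREETING "hello"
+let a = GREETING        (* expands to "hello" *)
+#undef GREETING
+#define GREETING "bonjour"
+let b = GREETING        (* expands to "bonjour"; [a] is unaffected *)
+```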
+
+Macros can take arguments ("function-like macro" in the cpp
+jargon). Both in the definition (`#define`) and in macro application the
+opening parenthesis must stick to the macro's identifier:
+
+```ocaml
+#define debug(args) if !debugging then Printf.eprintf args else ()
+
+debug("Testing %i" (1 + 1))
+```
+
+is expanded into:
+
+```ocaml
+if !debugging then Printf.eprintf "Testing %i" (1 + 1) else ()
+```
+
+Here is a multiline macro definition. Newlines occurring between
+tokens must be protected by a backslash:
+
+```ocaml
+#define repeat_until(action,condition) \
+  action; \
+  while not (condition) do \
+    action \
+  done
+```
+
+All user-definable macros are constant. There are however two
+predefined variable macros: `__FILE__` and `__LINE__` which take the value
+of the position in the source file where the macro is being expanded.
+
+```ocaml
+#define loc (Printf.sprintf "File %S, line %i" __FILE__ __LINE__)
+```
+
+Macros can be defined on the command line as follows:
+
+```ocaml
+# preprocessing only
+cppo -D 'VERSION 1.0' example.ml
+
+# preprocessing and compiling
+ocamlopt -c -pp "cppo -D 'VERSION 1.0'" example.ml
+```
+
+Conditionals
+------------
+
+Here is a quick reference on conditionals available in cppo. If you
+are not familiar with `#ifdef`, `#ifndef`, `#if`, `#else` and `#elif`, please
+refer to the corresponding section in the cpp manual.
+
+```ocaml
+#ifndef VERSION
+#warning "VERSION is undefined"
+#define VERSION "n/a"
+#endif
+#ifndef VERSION
+#error "VERSION is undefined"
+#endif
+#if OCAML_MAJOR >= 3 && OCAML_MINOR >= 10
+...
+#endif
+#ifdef X
+...
+#elif defined Y
+...
+#else
+...
+#endif
+```
+
+The boolean expressions following `#if` and `#elif` may perform arithmetic
+operations and tests over 64-bit ints.
+
+Boolean expressions:
+
+* `defined` ...  followed by an identifier, returns true if such a macro exists
+* `true`
+* `false`
+* `(` ... `)`
+* ... `&&` ...
+* ... `||` ...
+* `not` ...
+
+Arithmetic comparisons used in boolean expressions:
+
+* ... `=` ...
+* ... `<` ...
+* ... `>` ...
+* ... `<>` ...
+* ... `<=` ...
+* ... `>=` ...
+
+Arithmetic operators over signed 64-bit ints:
+
+* `(` ... `)`
+* ... `+` ...
+* ... `-` ...
+* ... `*` ...
+* ... `/` ...
+* ... `mod` ...
+* ... `lsl` ...
+* ... `lsr` ...
+* ... `asr` ...
+* ... `land` ...
+* ... `lor` ...
+* ... `lxor` ...
+* `lnot` ...
+
+Macro identifiers can be used in place of ints as long as they expand
+to an int literal or a tuple of int literals, e.g.:
+
+```ocaml
+#define one 1
+
+#if one + one <> 2
+#error "Something's wrong."
+#endif
+
+#define VERSION (1, 0, 5)
+#if VERSION <= (1, 0, 2)
+#error "Version 1.0.2 or greater is required."
+#endif
+```
+
+Version strings (http://semver.org/) can also be passed to cppo on the
+command line. This results in multiple variables being defined, all
+sharing the same prefix. See the output of `cppo -help` (copied at the
+bottom of this page).
+
+```
+$ cppo -V OCAML:`ocamlc -version`
+#if OCAML_VERSION >= (4, 0, 0)
+(* All is well. *)
+#else
+  #error "This version of OCaml is not supported."
+#endif
+```
+
+Output:
+```
+# 2 "<stdin>"
+(* All is well. *)
+```
+
+Source file location
+--------------------
+
+Location directives are the same as in OCaml and are echoed in the
+output. They consist of a line number optionally followed by a file name:
+
+```ocaml
+# 123
+# 456 "source"
+```
+
+Messages
+--------
+
+Warnings and error messages can be produced by the preprocessor:
+
+```ocaml
+#ifndef X
+  #warning "Assuming default value for X"
+  #define X 1
+#elif X = 0
+  #error "X may not be null"
+#endif
+```
+
+Calling an external processor
+-----------------------------
+
+Cppo provides a mechanism for converting sections of a file using
+an external program. Such a section must be placed between `#ext` and
+`#endext` directives.
+
+```bash
+$ cat foo
+ABC
+#ext lowercase
+DEF
+#endext
+GHI
+#ext lowercase
+KLM
+NOP
+#endext
+QRS
+
+$ cppo -x lowercase:'tr "[A-Z]" "[a-z]"' foo
+# 1 "foo"
+ABC
+def
+# 5 "foo"
+GHI
+klm
+nop
+# 10 "foo"
+QRS
+```
+
+In the example above, `lowercase` is the name given on the
+command-line to external command `'tr "[A-Z]" "[a-z]"'` that reads
+input from stdin and writes its output to stdout.
+
+
+Escaping
+--------
+
+The following characters can be escaped by a backslash when needed:
+
+```ocaml
+(
+)
+,
+#
+```
+
+In OCaml, `#` is used for method calls. This is usually not a problem,
+because in order to be interpreted as a preprocessor directive, a `#`
+must be the first non-blank character of a line and be followed by a
+known directive name. If an object has a `define` method and you want
+`#` to appear first on a line, you have to use `\#` instead:
+
+```ocaml
+obj
+  \#define
+```
+
+Line directives in the usual format supported by OCaml are correctly
+interpreted by cppo.
+
+Comments and string literals constitute single tokens even when they
+span across multiple lines. Therefore newlines within string literals
+and comments should remain as-is (no preceding backslash) even in a
+macro body:
+
+```ocaml
+#define welcome \
+"**********
+*Welcome!*
+**********
+"
+```
+
+Concatenation
+-------------
+
+`CONCAT()` is a predefined macro that takes two arguments, removes any
+whitespace between and around them and fuses them into a single identifier.
+The result of the concatenation must be a valid identifier of the
+form `[A-Za-z_][A-Za-z0-9_]+` or `[A-Za-z]`, or empty.
+
+For example,
+
+```ocaml
+#define x 123
+CONCAT(z, x)
+```
+
+expands into:
+
+```ocaml
+z123
+```
+
+However the following is illegal:
+
+```ocaml
+#define x 123
+CONCAT(x, z)
+```
+
+because 123z does not form a valid identifier.
+
+`CONCAT(a,b)` is roughly equivalent to `a##b` in cpp syntax.
+
+CAPITALIZE
+---------------
+
+`CAPITALIZE()` is a predefined macro that takes one argument,
+removes any leading and trailing whitespace, reduces each internal
+whitespace sequence to a single space character, and produces
+a valid OCaml identifier with the first character capitalized.
+
+For example,
+```ocaml
+#define EVENT(n,ty) external CONCAT(on,CAPITALIZE(n)) : ty = STRINGIFY(n) [@@bs.val] 
+EVENT(exit, unit -> unit)
+```
+is expanded into:
+
+```ocaml
+external  onExit  :  unit -> unit = "exit" [@@bs.val]
+```
+
+Stringification
+---------------
+
+`STRINGIFY()` is a predefined macro that takes one argument,
+removes any leading and trailing whitespace, reduces each internal
+whitespace sequence to a single space character and produces
+a valid OCaml string literal.
+
+For example,
+
+```ocaml
+#define TRACE(f) Printf.printf ">>> %s\n" STRINGIFY(f); f
+TRACE(print_endline) "Hello"
+```
+
+is expanded into:
+
+```ocaml
+Printf.printf ">>> %s\n" "print_endline"; print_endline "Hello"
+```
+
+`STRINGIFY(x)` is the equivalent of `#x` in cpp syntax.
+
+
+Ocamlbuild plugin
+------------------
+
+An ocamlbuild plugin is available. To use it, you can call ocamlbuild
+with the argument `-plugin-tag package(cppo_ocamlbuild)` (only since
+OCaml 4.01 and cppo >= 0.9.4).
+
+Starting from **cppo >= 1.6.0**, the `cppo_ocamlbuild` plugin is in a
+separate OPAM package (`opam install cppo_ocamlbuild`).
+
+With Oasis:
+```
+OCamlVersion: >= 4.01
+AlphaFeatures: ocamlbuild_more_args
+XOCamlbuildPluginTags: package(cppo_ocamlbuild)
+```
+
+After that, you need to add the following to your `myocamlbuild.ml`:
+```ocaml
+let () =
+  Ocamlbuild_plugin.dispatch
+    (fun hook ->
+      Ocamlbuild_cppo.dispatcher hook ;
+    )
+```
+
+By default, the plugin applies cppo to all files ending in `.cppo.ml`,
+`.cppo.mli`, and `.cppo.mlpack`, in order to produce `.ml`, `.mli`,
+and `.mlpack` files.  The following tags are available:
+* `cppo_D(X)` ≡ `-D X`
+* `cppo_U(X)` ≡ `-U X`
+* `cppo_q` ≡ `-q`
+* `cppo_s` ≡ `-s`
+* `cppo_n` ≡ `-n`
+* `cppo_x(NAME:CMD_TEMPLATE)` ≡ `-x NAME:CMD_TEMPLATE`
+* The tag `cppo_I(foo)` can behave in two ways:
+  * If `foo` is a directory, it's equivalent to `-I foo`.
+  * If `foo` is a file, it adds `foo` as a dependency and applies
+    `-I parent(foo)`.
+* `cppo_V(NAME:VERSION)` ≡ `-V NAME:VERSION`
+* `cppo_V_OCAML` ≡ `-V OCAML:VERSION`, where `VERSION`
+   is the version of OCaml that ocamlbuild uses.
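+
+As an illustration, a hypothetical `_tags` file combining some of these
+flags (the paths and macro names are made up) could look like:
+
+```
+<src/*.cppo.ml>: cppo_D(DEBUG), cppo_I(include), cppo_V_OCAML
+```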
+
+Detailed command-line usage and options
+---------------------------------------
+
+```
+Usage: ./cppo [OPTIONS] [FILE1 [FILE2 ...]]
+Options:
+  -D DEF
+          Equivalent of interpreting '#define DEF' before processing the
+          input
+  -U IDENT
+          Equivalent of interpreting '#undef IDENT' before processing the
+          input
+  -I DIR
+          Add directory DIR to the search path for included files
+  -V VAR:MAJOR.MINOR.PATCH-OPTPRERELEASE+OPTBUILD
+          Define the following variables extracted from a version string
+          (following the Semantic Versioning syntax http://semver.org/):
+
+            VAR_MAJOR           must be a non-negative int
+            VAR_MINOR           must be a non-negative int
+            VAR_PATCH           must be a non-negative int
+            VAR_PRERELEASE      if the OPTPRERELEASE part exists
+            VAR_BUILD           if the OPTBUILD part exists
+            VAR_VERSION         is the tuple (MAJOR, MINOR, PATCH)
+            VAR_VERSION_STRING  is the string MAJOR.MINOR.PATCH
+            VAR_VERSION_FULL    is the original string
+
+          Example: cppo -V OCAML:4.02.1
+
+  -o FILE
+          Output file
+  -q
+          Identify and preserve camlp4 quotations
+  -s
+          Output line directives pointing to the exact source location of
+          each token, including those coming from the body of macro
+          definitions.  This behavior is off by default.
+  -n
+          Do not output any line directive other than those found in the
+          input (overrides -s).
+  -version
+          Print the version of the program and exit.
+  -x NAME:CMD_TEMPLATE
+          Define a custom preprocessor target section starting with:
+            #ext "NAME"
+          and ending with:
+            #endext
+
+          NAME must be a lowercase identifier of the form [a-z][A-Za-z0-9_]*
+
+          CMD_TEMPLATE is a command template supporting the following
+          special sequences:
+            %F  file name (unescaped; beware of potential scripting attacks)
+            %B  number of the first line
+            %E  number of the last line
+            %%  a single percent sign
+
+          Filename, first line number and last line number are also
+          available from the following environment variables:
+          CPPO_FILE, CPPO_FIRST_LINE, CPPO_LAST_LINE.
+
+          The command produced is expected to read the data lines from stdin
+          and to write its output to stdout.
+  -help  Display this list of options
+  --help  Display this list of options
+```
+
+
+Contributing
+------------
+
+See our contribution guidelines at
+https://github.com/mjambon/documents/blob/master/how-to-contribute.md
diff --git a/tools/ocaml/duniverse/cppo/VERSION b/tools/ocaml/duniverse/cppo/VERSION
new file mode 100644
index 0000000000..ec70f75560
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/VERSION
@@ -0,0 +1 @@
+1.6.6
diff --git a/tools/ocaml/duniverse/cppo/appveyor.yml b/tools/ocaml/duniverse/cppo/appveyor.yml
new file mode 100644
index 0000000000..456a4cc206
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/appveyor.yml
@@ -0,0 +1,14 @@
+
+environment:
+  matrix:
+    - OCAML_BRANCH: 4.05
+    - OCAML_BRANCH: 4.06
+
+install:
+  - appveyor DownloadFile "https://raw.githubusercontent.com/Chris00/ocaml-appveyor/master/install_ocaml.cmd" -FileName "C:\install_ocaml.cmd"
+  - C:\install_ocaml.cmd
+
+build_script:
+  - cd "%APPVEYOR_BUILD_FOLDER%"
+  - dune subst
+  - dune build -p cppo
diff --git a/tools/ocaml/duniverse/cppo/cppo.opam b/tools/ocaml/duniverse/cppo/cppo.opam
new file mode 100644
index 0000000000..33258eb4c9
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/cppo.opam
@@ -0,0 +1,31 @@
+version: "1.6.7"
+opam-version: "2.0"
+maintainer: "martin@mjambon.com"
+authors: "Martin Jambon"
+license: "BSD-3-Clause"
+homepage: "https://github.com/ocaml-community/cppo"
+doc: "https://ocaml-community.github.io/cppo/"
+bug-reports: "https://github.com/ocaml-community/cppo/issues"
+depends: [
+  "ocaml" {>= "4.02.3"}
+  "dune" {>= "1.0"}
+  "base-unix"
+]
+build: [
+  ["dune" "subst"] {pinned}
+  ["dune" "build" "-p" name "-j" jobs]
+  ["dune" "runtest" "-p" name "-j" jobs] {with-test}
+]
+dev-repo: "git+https://github.com/ocaml-community/cppo.git"
+synopsis: "Code preprocessor like cpp for OCaml"
+description: """
+Cppo is an equivalent of the C preprocessor for OCaml programs.
+It allows the definition of simple macros and file inclusion.
+
+Cppo is:
+
+* more OCaml-friendly than cpp
+* easy to learn without consulting a manual
+* reasonably fast
+* simple to install and to maintain
+"""
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/cppo/cppo_ocamlbuild.opam b/tools/ocaml/duniverse/cppo/cppo_ocamlbuild.opam
new file mode 100644
index 0000000000..22fa8ce630
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/cppo_ocamlbuild.opam
@@ -0,0 +1,27 @@
+version: "1.6.7"
+opam-version: "2.0"
+maintainer: "martin@mjambon.com"
+authors: "Martin Jambon"
+license: "BSD-3-Clause"
+homepage: "https://github.com/ocaml-community/cppo"
+doc: "https://ocaml-community.github.io/cppo/"
+bug-reports: "https://github.com/ocaml-community/cppo/issues"
+depends: [
+  "ocaml"
+  "dune" {>= "1.0"}
+  "ocamlbuild"
+  "ocamlfind"
+]
+build: [
+  ["dune" "subst"] {pinned}
+  ["dune" "build" "-p" name "-j" jobs]
+  ["dune" "runtest" "-p" name "-j" jobs] {with-test}
+]
+dev-repo: "git+https://github.com/ocaml-community/cppo.git"
+synopsis: "Plugin to use cppo with ocamlbuild"
+description: """
+This ocamlbuild plugin lets you use cppo in ocamlbuild projects.
+
+To use it, you can call ocamlbuild with the argument `-plugin-tag
+package(cppo_ocamlbuild)` (only since OCaml 4.01 and cppo >= 0.9.4).
+"""
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/cppo/dune-project b/tools/ocaml/duniverse/cppo/dune-project
new file mode 100644
index 0000000000..3f91b7266a
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/dune-project
@@ -0,0 +1,3 @@
+(lang dune 1.0)
+(name cppo)
+(version v1.6.7)
diff --git a/tools/ocaml/duniverse/cppo/examples/Makefile b/tools/ocaml/duniverse/cppo/examples/Makefile
new file mode 100644
index 0000000000..f9dd33fbfd
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/examples/Makefile
@@ -0,0 +1,8 @@
+.PHONY: all clean
+all:
+	../cppo debug.ml > debug.out
+	../cppo french.ml > french.out
+	ocamllex lexer.mll
+	../cppo lexer.ml > lexer.out
+clean:
+	rm -f *.out lexer.ml
diff --git a/tools/ocaml/duniverse/cppo/examples/debug.ml b/tools/ocaml/duniverse/cppo/examples/debug.ml
new file mode 100644
index 0000000000..d47b512224
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/examples/debug.ml
@@ -0,0 +1,7 @@
+#ifdef DEBUG
+#define debug(s) Printf.eprintf "[%S %i] %s\n%!" __FILE__ __LINE__ s
+#else
+#define debug(s) ()
+#endif
+
+debug("test")
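[Editorial note for the example above: when the file is preprocessed with `DEBUG` defined (e.g. `cppo -D DEBUG debug.ml`), the `debug("test")` call expands roughly as sketched below. `__FILE__` and `__LINE__` are cppo builtins substituted at preprocessing time; the concrete values shown are illustrative.]

```ocaml
(* Approximate cppo output for debug("test") with DEBUG defined.
   The file name and line number are illustrative placeholders. *)
let () = Printf.eprintf "[%S %i] %s\n%!" "debug.ml" 7 "test"
```

Without `DEBUG`, the same call expands to `()`, so the logging disappears entirely from the compiled program.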
diff --git a/tools/ocaml/duniverse/cppo/examples/dune b/tools/ocaml/duniverse/cppo/examples/dune
new file mode 100644
index 0000000000..f4d9de7c6f
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/examples/dune
@@ -0,0 +1,32 @@
+(ocamllex lexer)
+
+(rule
+ (deps
+  (:< debug.ml))
+ (targets debug.out)
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(rule
+ (deps
+  (:< french.ml))
+ (targets french.out)
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(rule
+ (deps
+  (:< lexer.ml))
+ (targets lexer.out)
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(alias
+ (name DEFAULT)
+ (deps debug.out french.out lexer.out))
diff --git a/tools/ocaml/duniverse/cppo/examples/french.ml b/tools/ocaml/duniverse/cppo/examples/french.ml
new file mode 100644
index 0000000000..e173a1fe1c
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/examples/french.ml
@@ -0,0 +1,34 @@
+#define soit let
+#define fonction function
+#define fon fun
+#define dans in
+#define si if
+#define alors then
+#define sinon else
+
+#define Liste List
+#define Affichef Printf
+#define affichef printf
+
+#define separation split
+#define tri sort
+
+soit rec separation x = fonction
+    y :: l ->
+      soit l1, l2 = separation x l dans
+      si y < x alors (y :: l1), l2
+      sinon l1, (y :: l2)
+  | [] ->
+      [], []
+
+soit rec tri = fonction
+    x :: l ->
+      soit l1, l2 = separation x l dans
+      tri l1 @ [x] @ tri l2
+  | [] ->
+      []
+
+soit () =
+  soit l = tri [ 5; 3; 7; 1; 7; 4; 99; 22 ] dans
+  Liste.iter (fon i -> Affichef.affichef "%i " i) l;
+  Affichef.affichef "\n"
diff --git a/tools/ocaml/duniverse/cppo/examples/lexer.mll b/tools/ocaml/duniverse/cppo/examples/lexer.mll
new file mode 100644
index 0000000000..446e8eef28
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/examples/lexer.mll
@@ -0,0 +1,9 @@
+(* Warning: ocamllex doesn't accept cppo directives
+            within the rules section. *)
+rule token = parse
+    ['a'-'z']+  { `String (Lexing.lexeme lexbuf) }
+{
+#ifndef NOFOO
+  let foo () = ()
+#endif
+}
diff --git a/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/_tags b/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/_tags
new file mode 100644
index 0000000000..dc946a1c24
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/_tags
@@ -0,0 +1 @@
+true: package(ocamlbuild)
diff --git a/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/dune b/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/dune
new file mode 100644
index 0000000000..b512a12f29
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/dune
@@ -0,0 +1,6 @@
+(library
+ (name cppo_ocamlbuild)
+ (public_name cppo_ocamlbuild)
+ (wrapped false)
+ (synopsis "Cppo ocamlbuild plugin")
+ (libraries ocamlbuild))
diff --git a/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.ml b/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.ml
new file mode 100644
index 0000000000..f301c36240
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.ml
@@ -0,0 +1,35 @@
+
+open Ocamlbuild_plugin
+
+let cppo_rules ext =
+  let dep   = "%(name).cppo"-.-ext
+  and prod1 = "%(name: <*> and not <*.cppo>)"-.-ext
+  and prod2 = "%(name: <**/*> and not <**/*.cppo>)"-.-ext in
+  let cppo_rule prod env _build =
+    let dep = env dep in
+    let prod = env prod in
+    let tags = tags_of_pathname prod ++ "cppo" in
+    Cmd (S[A "cppo"; T tags; S [A "-o"; P prod]; P dep ])
+  in
+  rule ("cppo: *.cppo."-.-ext^" -> *."-.-ext)  ~dep ~prod:prod1 (cppo_rule prod1);
+  rule ("cppo: **/*.cppo."-.-ext^" -> **/*."-.-ext)  ~dep ~prod:prod2 (cppo_rule prod2)
+
+let dispatcher = function
+  | After_rules -> begin
+      List.iter cppo_rules ["ml"; "mli"; "mlpack"];
+      pflag ["cppo"] "cppo_D" (fun s -> S [A "-D"; A s]) ;
+      pflag ["cppo"] "cppo_U" (fun s -> S [A "-U"; A s]) ;
+      pflag ["cppo"] "cppo_I" (fun s ->
+        if Pathname.is_directory s then S [A "-I"; P s]
+        else S [A "-I"; P (Pathname.dirname s)]
+      ) ;
+      pdep ["cppo"] "cppo_I" (fun s ->
+        if Pathname.is_directory s then [] else [s]) ;
+      flag ["cppo"; "cppo_q"] (A "-q") ;
+      flag ["cppo"; "cppo_s"] (A "-s") ;
+      flag ["cppo"; "cppo_n"] (A "-n") ;
+      pflag ["cppo"] "cppo_x" (fun s -> S [A "-x"; A s]);
+      pflag ["cppo"] "cppo_V" (fun s -> S [A "-V"; A s]);
+      flag ["cppo"; "cppo_V_OCAML"] & S [A "-V"; A ("OCAML:" ^ Sys.ocaml_version)]
+    end
+  | _ -> ()
diff --git a/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.mli b/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.mli
new file mode 100644
index 0000000000..212435857f
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/ocamlbuild_plugin/ocamlbuild_cppo.mli
@@ -0,0 +1,9 @@
+
+(** [cppo_rules extension] will add rules to Ocamlbuild so that
+    cppo is applied to files ending in "cppo.[extension]".
+
+    By default rules are inserted for files ending with "ml", "mli" and
+    "mlpack". *)
+val cppo_rules : string -> unit
+
+val dispatcher : Ocamlbuild_plugin.hook -> unit
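[Editorial note: a project enables the rules declared in this interface from its `myocamlbuild.ml`. A minimal sketch, using the standard ocamlbuild `dispatch` entry point:]

```ocaml
(* myocamlbuild.ml: register the cppo rules and flags in an
   ocamlbuild project. The project must be built with
   -plugin-tag "package(cppo_ocamlbuild)" for this module to load. *)
let () = Ocamlbuild_plugin.dispatch Ocamlbuild_cppo.dispatcher
```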
diff --git a/tools/ocaml/duniverse/cppo/src/compat.ml b/tools/ocaml/duniverse/cppo/src/compat.ml
new file mode 100644
index 0000000000..5cd4a1b58d
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/compat.ml
@@ -0,0 +1,7 @@
+if Filename.check_suffix Sys.argv.(1) ".ml" &&
+   Scanf.sscanf Sys.ocaml_version "%d.%d" (fun a b -> (a, b)) < (4, 03) then
+  print_endline "\
+module String = struct
+  include String
+  let capitalize_ascii = capitalize
+end"
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_command.ml b/tools/ocaml/duniverse/cppo/src/cppo_command.ml
new file mode 100644
index 0000000000..5c61028c9a
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_command.ml
@@ -0,0 +1,63 @@
+open Printf
+
+type command_token =
+    [ `Text of string
+    | `Loc_file
+    | `Loc_first_line
+    | `Loc_last_line ]
+
+type command_template = command_token list
+
+let parse s : command_template =
+  let rec loop acc buf s len i =
+    if i >= len then
+      let s = Buffer.contents buf in
+      if s = "" then acc
+      else `Text s :: acc
+    else if i = len - 1 then (
+      Buffer.add_char buf s.[i];
+      `Text (Buffer.contents buf) :: acc
+    )
+    else
+      let c = s.[i] in
+      if c = '%' then
+        let acc =
+          let s = Buffer.contents buf in
+          Buffer.clear buf;
+          if s = "" then acc
+          else
+            `Text s :: acc
+        in
+        let x =
+          match s.[i+1] with
+              'F' -> `Loc_file
+            | 'B' -> `Loc_first_line
+            | 'E' -> `Loc_last_line
+            | '%' -> `Text "%"
+            | _ ->
+                failwith (
+                  sprintf "Invalid escape sequence in command template %S. \
+                             Use %%%% for a %% sign." s
+                )
+        in
+        loop (x :: acc) buf s len (i + 2)
+      else (
+        Buffer.add_char buf c;
+        loop acc buf s len (i + 1)
+      )
+  in
+  let len = String.length s in
+  List.rev (loop [] (Buffer.create len) s len 0)
+
+
+let subst (cmd : command_template) file first last =
+  let l =
+    List.map (
+      function
+          `Text s -> s
+        | `Loc_file -> file
+        | `Loc_first_line -> string_of_int first
+        | `Loc_last_line -> string_of_int last
+    ) cmd
+  in
+  String.concat "" l
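[Editorial note: as a usage sketch of the template syntax implemented above, `parse` turns a command string into tokens and `subst` fills in `%F` (file name), `%B` (first line) and `%E` (last line). The command name and values below are illustrative, and the snippet assumes it is linked against the `Cppo_command` module defined in this patch.]

```ocaml
(* Hypothetical pipeline command; %F/%B/%E are the escapes
   recognised by Cppo_command.parse above. *)
let () =
  let tpl = Cppo_command.parse "mycmd --file %F --from %B --to %E" in
  let cmd = Cppo_command.subst tpl "main.ml" 10 12 in
  (* cmd is "mycmd --file main.ml --from 10 --to 12" *)
  print_endline cmd
```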
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_command.mli b/tools/ocaml/duniverse/cppo/src/cppo_command.mli
new file mode 100644
index 0000000000..af57d8cbfc
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_command.mli
@@ -0,0 +1,11 @@
+type command_token =
+  [ `Text of string
+  | `Loc_file
+  | `Loc_first_line
+  | `Loc_last_line ]
+
+type command_template = command_token list
+
+val subst : command_template -> string -> int -> int -> string
+
+val parse : string -> command_template
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_eval.ml b/tools/ocaml/duniverse/cppo/src/cppo_eval.ml
new file mode 100644
index 0000000000..fb3f9de1e0
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_eval.ml
@@ -0,0 +1,697 @@
+open Printf
+
+open Cppo_types
+
+module S = Set.Make (String)
+module M = Map.Make (String)
+
+let builtins = [
+  "__FILE__", (fun _env -> `Special);
+  "__LINE__", (fun _env -> `Special);
+  "STRINGIFY", (fun env ->
+                  `Defun (dummy_loc, "STRINGIFY",
+                          ["x"],
+                          [`Stringify (`Ident (dummy_loc, "x", None))],
+                          env)
+               );
+  "CONCAT", (fun env ->
+               `Defun (dummy_loc, "CONCAT",
+                       ["x";"y"],
+                       [`Concat (`Ident (dummy_loc, "x", None),
+                                 `Ident (dummy_loc, "y", None))],
+                       env)
+            );
+  "CAPITALIZE", (fun env ->
+    `Defun (dummy_loc, "CAPITALIZE",
+            ["x"],
+            [`Capitalize (`Ident (dummy_loc, "x", None))],
+            env)
+  );
+
+]
+
+let is_reserved s =
+  List.exists (fun (s', _) -> s = s') builtins
+
+let builtin_env =
+  List.fold_left (fun env (s, f) -> M.add s (f env) env) M.empty builtins
+
+let line_directive buf pos =
+  let len = Buffer.length buf in
+  if len > 0 && Buffer.nth buf (len - 1) <> '\n' then
+    Buffer.add_char buf '\n';
+  bprintf buf "# %i %S\n"
+    pos.Lexing.pos_lnum
+    pos.Lexing.pos_fname;
+  bprintf buf "%s" (String.make (pos.Lexing.pos_cnum - pos.Lexing.pos_bol) ' ')
+
+let rec add_sep sep last = function
+    [] -> [ last ]
+  | [x] -> [ x; last ]
+  | x :: l -> x :: sep :: add_sep sep last l
+
+
+let remove_space l =
+  List.filter (function `Text (_, true, _) -> false | _ -> true) l
+
+let trim_and_compact buf s =
+  let started = ref false in
+  let need_space = ref false in
+  for i = 0 to String.length s - 1 do
+    match s.[i] with
+        ' ' | '\t' | '\n' | '\r' ->
+          if !started then
+            need_space := true
+      | c ->
+          if !need_space then
+            Buffer.add_char buf ' ';
+          (match c with
+               '\"' -> Buffer.add_string buf "\\\""
+             | '\\' -> Buffer.add_string buf "\\\\"
+             | c -> Buffer.add_char buf c);
+          started := true;
+          need_space := false
+  done
+
+let stringify buf s =
+  Buffer.add_char buf '\"';
+  trim_and_compact buf s;
+  Buffer.add_char buf '\"'
+
+let trim_and_compact_string s =
+  let buf = Buffer.create (String.length s) in
+  trim_and_compact buf s;
+  Buffer.contents buf
+
+let trim_compact_and_capitalize_string s =
+  let buf = Buffer.create (String.length s) in
+  trim_and_compact buf s;
+  String.capitalize_ascii (Buffer.contents buf)
+
+let is_ident s =
+  let len = String.length s in
+  len > 0
+  &&
+    (match s.[0] with
+         'A'..'Z' | 'a'..'z' -> true
+       | '_' when len > 1 -> true
+       | _ -> false)
+  &&
+    (try
+       for i = 1 to len - 1 do
+         match s.[i] with
+             'A'..'Z' | 'a'..'z' | '_' | '0'..'9' -> ()
+           | _ -> raise Exit
+       done;
+       true
+     with Exit ->
+       false)
+
+let concat loc x y =
+  let s = trim_and_compact_string x ^ trim_and_compact_string y in
+  if not (s = "" || is_ident s) then
+    error loc
+      (sprintf "CONCAT() does not expand into a valid identifier nor \
+                into whitespace:\n%S" s)
+  else
+    if s = "" then " "
+    else " " ^ s ^ " "
+
+(*
+   Expand the contents of a variable used in a boolean expression.
+
+   Ideally, we should first completely expand the contents bound
+   to the variable, and then parse the result as an int or an int tuple.
+   This is a bit complicated to do well, and we don't want to implement
+   a full programming language here either.
+
+   Instead we only accept int literals, int tuple literals, and variables that
+   themselves expand into one of those.
+
+   In particular:
+   - We do not support arithmetic operations
+   - We do not support tuples containing variables such as (x, y)
+
+   Example of contents that we support:
+   - 123
+   - (1, 2, 3)
+   - x, where x expands into 123.
+*)
+let rec eval_ident env loc name =
+  let l =
+    try
+      match M.find name env with
+      | `Def (_, _, l, _) -> l
+      | `Defun _ ->
+          error loc (sprintf "%S expects arguments" name)
+      | `Special -> assert false
+    with Not_found -> error loc (sprintf "Undefined identifier %S" name)
+  in
+  let expansion_error () =
+    error loc
+      (sprintf "\
+Variable %s found in cppo boolean expression must expand
+into an int literal, into a tuple of int literals,
+or into a variable with the same properties."
+         name)
+  in
+  (try
+     match remove_space l with
+       [ `Ident (loc, name, None) ] ->
+         (* single identifier that we expand recursively *)
+         eval_ident env loc name
+     | _ ->
+         (* int literal or int tuple literal; variables not allowed *)
+         let text =
+           List.map (
+             function
+               `Text (_, _is_space, s) -> s
+             | _ ->
+                 expansion_error ()
+           ) (Cppo_types.flatten_nodes l)
+         in
+         let s = String.concat "" text in
+         (match Cppo_lexer.int_tuple_of_string s with
+            Some [i] -> `Int i
+          | Some l -> `Tuple (loc, List.map (fun i -> `Int i) l)
+          | None ->
+              expansion_error ()
+         )
+   with Cppo_error _ ->
+     expansion_error ()
+  )
+
+let rec replace_idents env (x : arith_expr) : arith_expr =
+  match x with
+    | `Ident (loc, name) -> eval_ident env loc name
+
+    | `Int x -> `Int x
+    | `Neg x -> `Neg (replace_idents env x)
+    | `Add (a, b) -> `Add (replace_idents env a, replace_idents env b)
+    | `Sub (a, b) -> `Sub (replace_idents env a, replace_idents env b)
+    | `Mul (a, b) -> `Mul (replace_idents env a, replace_idents env b)
+    | `Div (loc, a, b) -> `Div (loc, replace_idents env a, replace_idents env b)
+    | `Mod (loc, a, b) -> `Mod (loc, replace_idents env a, replace_idents env b)
+    | `Lnot a -> `Lnot (replace_idents env a)
+    | `Lsl (a, b) -> `Lsl (replace_idents env a, replace_idents env b)
+    | `Lsr (a, b) -> `Lsr (replace_idents env a, replace_idents env b)
+    | `Asr (a, b) -> `Asr (replace_idents env a, replace_idents env b)
+    | `Land (a, b) -> `Land (replace_idents env a, replace_idents env b)
+    | `Lor (a, b) -> `Lor (replace_idents env a, replace_idents env b)
+    | `Lxor (a, b) -> `Lxor (replace_idents env a, replace_idents env b)
+    | `Tuple (loc, l) -> `Tuple (loc, List.map (replace_idents env) l)
+
+let rec eval_int env (x : arith_expr) : int64 =
+  match x with
+    | `Ident (loc, name) -> eval_int env (eval_ident env loc name)
+
+    | `Int x -> x
+    | `Neg x -> Int64.neg (eval_int env x)
+    | `Add (a, b) -> Int64.add (eval_int env a) (eval_int env b)
+    | `Sub (a, b) -> Int64.sub (eval_int env a) (eval_int env b)
+    | `Mul (a, b) -> Int64.mul (eval_int env a) (eval_int env b)
+    | `Div (loc, a, b) ->
+        (try Int64.div (eval_int env a) (eval_int env b)
+         with Division_by_zero ->
+           error loc "Division by zero")
+
+    | `Mod (loc, a, b) ->
+        (try Int64.rem (eval_int env a) (eval_int env b)
+         with Division_by_zero ->
+           error loc "Division by zero")
+
+    | `Lnot a -> Int64.lognot (eval_int env a)
+
+    | `Lsl (a, b) ->
+        let n = eval_int env a in
+        let shift = eval_int env b in
+        let shift =
+          if shift >= 64L then 64L
+          else if shift <= -64L then -64L
+          else shift
+        in
+        Int64.shift_left n (Int64.to_int shift)
+
+    | `Lsr (a, b) ->
+        let n = eval_int env a in
+        let shift = eval_int env b in
+        let shift =
+          if shift >= 64L then 64L
+          else if shift <= -64L then -64L
+          else shift
+        in
+        Int64.shift_right_logical n (Int64.to_int shift)
+
+    | `Asr (a, b) ->
+        let n = eval_int env a in
+        let shift = eval_int env b in
+        let shift =
+          if shift >= 64L then 64L
+          else if shift <= -64L then -64L
+          else shift
+        in
+        Int64.shift_right n (Int64.to_int shift)
+
+    | `Land (a, b) -> Int64.logand (eval_int env a) (eval_int env b)
+    | `Lor (a, b) -> Int64.logor (eval_int env a) (eval_int env b)
+    | `Lxor (a, b) -> Int64.logxor (eval_int env a) (eval_int env b)
+    | `Tuple (loc, l) ->
+        assert (List.length l <> 1);
+        error loc "Operation not supported on tuples"
+
+let rec compare_lists al bl =
+  match al, bl with
+  | a :: al, b :: bl ->
+      let c = Int64.compare a b in
+      if c <> 0 then c
+      else compare_lists al bl
+  | [], [] -> 0
+  | [], _ -> -1
+  | _, [] -> 1
+
+let compare_tuples env (a : arith_expr) (b : arith_expr) =
+  (* We replace the identifiers first to get a better error message
+     on such input:
+
+       #define x (1, 2)
+       #if x >= (1, 2)
+
+     since variables must represent a single int, not a tuple.
+  *)
+  let a = replace_idents env a in
+  let b = replace_idents env b in
+  match a, b with
+  | `Tuple (_, al), `Tuple (_, bl) when List.length al = List.length bl ->
+      let eval_list l = List.map (eval_int env) l in
+      compare_lists (eval_list al) (eval_list bl)
+
+  | `Tuple (_loc1, al), `Tuple (loc2, bl) ->
+      error loc2
+        (sprintf "Tuple of length %i cannot be compared to a tuple of length %i"
+           (List.length bl) (List.length al)
+        )
+
+  | `Tuple (loc, _), _
+  | _, `Tuple (loc, _) ->
+      error loc "Tuple cannot be compared to an int"
+
+  | a, b ->
+      Int64.compare (eval_int env a) (eval_int env b)
+
+let rec eval_bool env (x : bool_expr) =
+  match x with
+      `True -> true
+    | `False -> false
+    | `Defined s -> M.mem s env
+    | `Not x -> not (eval_bool env x)
+    | `And (a, b) -> eval_bool env a && eval_bool env b
+    | `Or (a, b) -> eval_bool env a || eval_bool env b
+    | `Eq (a, b) -> compare_tuples env a b = 0
+    | `Lt (a, b) -> compare_tuples env a b < 0
+    | `Gt (a, b) -> compare_tuples env a b > 0
+
+
+type globals = {
+  call_loc : Cppo_types.loc;
+    (* location used to set the value of
+       __FILE__ and __LINE__ global variables *)
+
+  mutable buf : Buffer.t;
+    (* buffer where the output is written *)
+
+  included : S.t;
+    (* set of already-included files *)
+
+  require_location : bool ref;
+    (* whether a line directive should be printed before outputting the next
+       token *)
+
+  show_exact_locations : bool;
+    (* whether line directives should be printed even for expanded macro
+       bodies *)
+
+  enable_loc : bool ref;
+    (* whether line directives should be printed *)
+
+  g_preserve_quotations : bool;
+    (* identify and preserve camlp4 quotations *)
+
+  incdirs : string list;
+    (* directories for finding included files *)
+
+  current_directory : string;
+    (* directory containing the current file *)
+
+  extensions : (string, Cppo_command.command_template) Hashtbl.t;
+    (* mapping from extension ID to pipeline command *)
+}
+
+
+
+let parse ~preserve_quotations file lexbuf =
+  let lexer_env = Cppo_lexer.init ~preserve_quotations file lexbuf in
+  try
+    Cppo_parser.main (Cppo_lexer.line lexer_env) lexbuf
+  with
+      Parsing.Parse_error ->
+        error (Cppo_lexer.loc lexbuf) "syntax error"
+    | Cppo_types.Cppo_error _ as e ->
+        raise e
+    | e ->
+        error (Cppo_lexer.loc lexbuf) (Printexc.to_string e)
+
+let plural n =
+  if abs n <= 1 then ""
+  else "s"
+
+
+let maybe_print_location g pos =
+  if !(g.enable_loc) then
+    if !(g.require_location) then (
+      line_directive g.buf pos
+    )
+
+let expand_ext g loc id data =
+  let cmd_tpl =
+    try Hashtbl.find g.extensions id
+    with Not_found ->
+      error loc (sprintf "Undefined extension %s" id)
+  in
+  let p1, p2 = loc in
+  let file = p1.Lexing.pos_fname in
+  let first = p1.Lexing.pos_lnum in
+  let last = p2.Lexing.pos_lnum in
+  let cmd = Cppo_command.subst cmd_tpl file first last in
+  Unix.putenv "CPPO_FILE" file;
+  Unix.putenv "CPPO_FIRST_LINE" (string_of_int first);
+  Unix.putenv "CPPO_LAST_LINE" (string_of_int last);
+  let (ic, oc) as p = Unix.open_process cmd in
+  output_string oc data;
+  close_out oc;
+  (try
+     while true do
+       bprintf g.buf "%s\n" (input_line ic)
+     done
+   with End_of_file -> ()
+  );
+  match Unix.close_process p with
+      Unix.WEXITED 0 -> ()
+    | Unix.WEXITED n ->
+        failwith (sprintf "Command %S exited with status %i" cmd n)
+    | _ ->
+        failwith (sprintf "Command %S failed" cmd)
+
+let rec include_file g loc rel_file env =
+  let file =
+    if not (Filename.is_relative rel_file) then
+      if Sys.file_exists rel_file then
+        rel_file
+      else
+        error loc (sprintf "Included file %S does not exist" rel_file)
+    else
+      try
+        let dir =
+          List.find (
+            fun dir ->
+              let file = Filename.concat dir rel_file in
+              Sys.file_exists file
+          ) (g.current_directory :: g.incdirs)
+        in
+        if dir = Filename.current_dir_name then
+          rel_file
+        else
+          Filename.concat dir rel_file
+      with Not_found ->
+        error loc (sprintf "Cannot find included file %S" rel_file)
+  in
+  if S.mem file g.included then
+    failwith (sprintf "Cyclic inclusion of file %S" file)
+  else
+    let ic = open_in file in
+    let lexbuf = Lexing.from_channel ic in
+    let l = parse ~preserve_quotations:g.g_preserve_quotations file lexbuf in
+    close_in ic;
+    expand_list { g with
+                    included = S.add file g.included;
+                    current_directory = Filename.dirname file
+                } env l
+
+and expand_list ?(top = false) g env l =
+  List.fold_left (expand_node ~top g) env l
+
+and expand_node ?(top = false) g env0 (x : node) =
+  match x with
+      `Ident (loc, name, opt_args) ->
+
+        let def =
+          try Some (M.find name env0)
+          with Not_found -> None
+        in
+        let g =
+          if top && def <> None || g.call_loc == dummy_loc then
+            { g with call_loc = loc }
+          else g
+        in
+
+        let enable_loc0 = !(g.enable_loc) in
+
+        if def <> None then (
+          g.require_location := true;
+
+          if not g.show_exact_locations then (
+            (* error reports will point more or less to the point
+               where the code is included rather than the source location
+               of the macro definition *)
+            maybe_print_location g (fst loc);
+            g.enable_loc := false
+          )
+        );
+
+        let env =
+          match def, opt_args with
+              None, None ->
+                expand_node g env0 (`Text (loc, false, name))
+            | None, Some args ->
+                let with_sep =
+                  add_sep
+                    [`Text (loc, false, ",")]
+                    [`Text (loc, false, ")")]
+                    args in
+                let l =
+                  `Text (loc, false, name ^ "(") :: List.flatten with_sep in
+                expand_list g env0 l
+
+            | Some (`Defun (_, _, arg_names, _, _)), None ->
+                error loc
+                  (sprintf "%S expects %i arguments but is applied to none."
+                     name (List.length arg_names))
+
+            | Some (`Def _), Some _ ->
+                error loc
+                  (sprintf "%S expects no arguments" name)
+
+            | Some (`Def (_, _, l, env)), None ->
+                ignore (expand_list g env l);
+                env0
+
+            | Some (`Defun (_, _, arg_names, l, env)), Some args ->
+                let argc = List.length arg_names in
+                let n = List.length args in
+                let args =
+                  (* it's ok to pass an empty arg if one arg
+                     is expected *)
+                  if n = 0 && argc = 1 then [[]]
+                  else args
+                in
+                if argc <> n then
+                  error loc
+                    (sprintf "%S expects %i argument%s but is applied to \
+                              %i argument%s."
+                       name argc (plural argc) n (plural n))
+                else
+                  let app_env =
+                    List.fold_left2 (
+                      fun env name l ->
+                        M.add name (`Def (loc, name, l, env0)) env
+                    ) env arg_names args
+                  in
+                  ignore (expand_list g app_env l);
+                  env0
+
+            | Some `Special, _ -> assert false
+        in
+
+        if def = None then
+          g.require_location := false
+        else
+          g.require_location := true;
+
+        (* restore initial setting *)
+        g.enable_loc := enable_loc0;
+
+        env
+
+
+    | `Def (loc, name, body)->
+        g.require_location := true;
+        if M.mem name env0 then
+          error loc (sprintf "%S is already defined" name)
+        else
+          M.add name (`Def (loc, name, body, env0)) env0
+
+    | `Defun (loc, name, arg_names, body) ->
+        g.require_location := true;
+        if M.mem name env0 then
+          error loc (sprintf "%S is already defined" name)
+        else
+          M.add name (`Defun (loc, name, arg_names, body, env0)) env0
+
+    | `Undef (loc, name) ->
+        g.require_location := true;
+        if is_reserved name then
+          error loc
+            (sprintf "%S is a built-in variable that cannot be undefined" name)
+        else
+          M.remove name env0
+
+    | `Include (loc, file) ->
+        g.require_location := true;
+        let env = include_file g loc file env0 in
+        g.require_location := true;
+        env
+
+    | `Ext (loc, id, data) ->
+        g.require_location := true;
+        expand_ext g loc id data;
+        g.require_location := true;
+        env0
+
+    | `Cond (_loc, test, if_true, if_false) ->
+        let l =
+          if eval_bool env0 test then if_true
+          else if_false
+        in
+        g.require_location := true;
+        let env = expand_list g env0 l in
+        g.require_location := true;
+        env
+
+    | `Error (loc, msg) ->
+        error loc msg
+
+    | `Warning (loc, msg) ->
+        warning loc msg;
+        env0
+
+    | `Text (loc, is_space, s) ->
+        if not is_space then (
+          maybe_print_location g (fst loc);
+          g.require_location := false
+        );
+        Buffer.add_string g.buf s;
+        env0
+
+    | `Seq l ->
+        expand_list g env0 l
+
+    | `Stringify x ->
+        let enable_loc0 = !(g.enable_loc) in
+        g.enable_loc := false;
+        let buf0 = g.buf in
+        let local_buf = Buffer.create 100 in
+        g.buf <- local_buf;
+        ignore (expand_node g env0 x);
+        stringify buf0 (Buffer.contents local_buf);
+        g.buf <- buf0;
+        g.enable_loc := enable_loc0;
+        env0
+
+    | `Capitalize (x : node) ->
+        let enable_loc0 = !(g.enable_loc) in
+        g.enable_loc := false;
+        let buf0 = g.buf in
+        let local_buf = Buffer.create 100 in
+        g.buf <- local_buf;
+        ignore (expand_node g env0 x);
+        let xs = Buffer.contents local_buf in
+        let s = trim_compact_and_capitalize_string xs in
+          (* stringify buf0 (Buffer.contents local_buf); *)
+        Buffer.add_string buf0 s ;
+        g.buf <- buf0;
+        g.enable_loc := enable_loc0;
+        env0
+    | `Concat (x, y) ->
+        let enable_loc0 = !(g.enable_loc) in
+        g.enable_loc := false;
+        let buf0 = g.buf in
+        let local_buf = Buffer.create 100 in
+        g.buf <- local_buf;
+        ignore (expand_node g env0 x);
+        let xs = Buffer.contents local_buf in
+        Buffer.clear local_buf;
+        ignore (expand_node g env0 y);
+        let ys = Buffer.contents local_buf in
+        let s = concat g.call_loc xs ys in
+        Buffer.add_string buf0 s;
+        g.buf <- buf0;
+        g.enable_loc := enable_loc0;
+        env0
+
+    | `Line (loc, opt_file, n) ->
+        (* printing a line directive is not strictly needed *)
+        (match opt_file with
+             None ->
+               maybe_print_location g (fst loc);
+               bprintf g.buf "\n# %i\n" n
+           | Some file ->
+               bprintf g.buf "\n# %i %S\n" n file
+        );
+        (* printing the location next time is needed because it just changed *)
+        g.require_location := true;
+        env0
+
+    | `Current_line loc ->
+        maybe_print_location g (fst loc);
+        g.require_location := true;
+        let pos, _ = g.call_loc in
+        bprintf g.buf " %i " pos.Lexing.pos_lnum;
+        env0
+
+    | `Current_file loc ->
+        maybe_print_location g (fst loc);
+        g.require_location := true;
+        let pos, _ = g.call_loc in
+        bprintf g.buf " %S " pos.Lexing.pos_fname;
+        env0
+
+
+
+
+let include_inputs
+    ~extensions
+    ~preserve_quotations
+    ~incdirs
+    ~show_exact_locations
+    ~show_no_locations
+    buf env l =
+
+  let enable_loc = not show_no_locations in
+  List.fold_left (
+    fun env (dir, file, open_, close) ->
+      let l = parse ~preserve_quotations file (open_ ()) in
+      close ();
+      let g = {
+        call_loc = dummy_loc;
+        buf = buf;
+        included = S.empty;
+        require_location = ref true;
+        show_exact_locations = show_exact_locations;
+        enable_loc = ref enable_loc;
+        g_preserve_quotations = preserve_quotations;
+        incdirs = incdirs;
+        current_directory = dir;
+        extensions = extensions;
+      }
+      in
+      expand_list ~top:true { g with included = S.add file g.included } env l
+  ) env l
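[Editorial note: the `builtins` table at the top of this file corresponds to source-level macro usage like the following cppo input. This is an illustrative sketch of the expansion behaviour, not output verified against this exact revision.]

```ocaml
(* cppo input exercising the STRINGIFY and CONCAT builtins.
   PREFIX is a hypothetical macro introduced for the example. *)
#define PREFIX my
let name = STRINGIFY(hello   world)  (* expands to "hello world" *)
let CONCAT(PREFIX, _value) = 42      (* defines my_value *)
```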
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_eval.mli b/tools/ocaml/duniverse/cppo/src/cppo_eval.mli
new file mode 100644
index 0000000000..d4302f02d5
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_eval.mli
@@ -0,0 +1,29 @@
+(** The type signatures in this module are not yet for public consumption.
+
+    Please don't rely on them in any way. *)
+
+module S : Set.S with type elt = string
+module M : Map.S with type key = string
+
+val builtin_env
+  : [> `Defun of
+         Cppo_types.loc * string * string list *
+         [> `Capitalize of Cppo_types.node
+         | `Concat of (Cppo_types.node * Cppo_types.node)
+         | `Stringify of Cppo_types.node ] list * 'a
+    | `Special ] M.t as 'a
+
+val include_inputs
+  : extensions:(string, Cppo_command.command_template) Hashtbl.t
+  -> preserve_quotations:bool
+  -> incdirs:string list
+  -> show_exact_locations:bool
+  -> show_no_locations:bool
+  -> Buffer.t
+  -> (([< `Def of Cppo_types.loc * string * Cppo_types.node list * 'a
+       | `Defun of Cppo_types.loc * string * string list * Cppo_types.node list * 'a
+       | `Special
+            > `Def `Defun ]
+       as 'b)
+        M.t as 'a)
+  -> (string * string * (unit -> Lexing.lexbuf) * (unit -> unit)) list -> 'a
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_lexer.mll b/tools/ocaml/duniverse/cppo/src/cppo_lexer.mll
new file mode 100644
index 0000000000..93ae9013d6
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_lexer.mll
@@ -0,0 +1,721 @@
+{
+open Printf
+open Lexing
+
+open Cppo_types
+open Cppo_parser
+
+let pos1 lexbuf = lexbuf.lex_start_p
+let pos2 lexbuf = lexbuf.lex_curr_p
+let loc lexbuf = (pos1 lexbuf, pos2 lexbuf)
+
+let lexer_error lexbuf descr =
+  error (loc lexbuf) descr
+
+let new_file lb name =
+  lb.lex_curr_p <- { lb.lex_curr_p with pos_fname = name }
+
+let lex_new_lines lb =
+  let n = ref 0 in
+  let s = lb.lex_buffer in
+  for i = lb.lex_start_pos to lb.lex_curr_pos - 1 do
+    if Bytes.get s i = '\n' then
+      incr n
+  done;
+  let p = lb.lex_curr_p in
+  lb.lex_curr_p <-
+    { p with
+        pos_lnum = p.pos_lnum + !n;
+        pos_bol = p.pos_cnum
+    }
+
+let count_new_lines lb n =
+  let p = lb.lex_curr_p in
+  lb.lex_curr_p <-
+    { p with
+        pos_lnum = p.pos_lnum + n;
+        pos_bol = p.pos_cnum
+    }
+
+(* must start a new line *)
+let update_pos lb p added_chars added_breaks =
+  let cnum = p.pos_cnum + added_chars in
+  lb.lex_curr_p <-
+    { pos_fname = p.pos_fname;
+      pos_lnum = p.pos_lnum + added_breaks;
+      pos_bol = cnum;
+      pos_cnum = cnum }
+
+let set_lnum lb opt_file lnum =
+  let p = lb.lex_curr_p in
+  let cnum = p.pos_cnum in
+  let fname =
+    match opt_file with
+        None -> p.pos_fname
+      | Some file -> file
+  in
+  lb.lex_curr_p <-
+    { pos_fname = fname;
+      pos_bol = cnum;
+      pos_cnum = cnum;
+      pos_lnum = lnum }
+
+let shift lb n =
+  let p = lb.lex_curr_p in
+  lb.lex_curr_p <- { p with pos_cnum = p.pos_cnum + n }
+
+let read_hexdigit c =
+  match c with
+      '0'..'9' -> Char.code c - 48
+    | 'A'..'F' -> Char.code c - 55
+    | 'a'..'f' -> Char.code c - 87
+    | _ -> invalid_arg "read_hexdigit"
+
+let read_hex2 c1 c2 =
+  Char.chr (read_hexdigit c1 * 16 + read_hexdigit c2)
+
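The two helpers above decode an escape such as `\x41` into a single byte. A stand-alone copy (mirroring the definitions in this file, usable outside the lexer) behaves as follows:

```ocaml
(* Stand-alone sketch of read_hexdigit/read_hex2 above: two hex digits
   decode to one character.  The 'a'..'f' arm subtracts 87 because
   Char.code 'a' = 97 and 'a' must map to 10. *)
let read_hexdigit c =
  match c with
  | '0'..'9' -> Char.code c - 48
  | 'A'..'F' -> Char.code c - 55
  | 'a'..'f' -> Char.code c - 87
  | _ -> invalid_arg "read_hexdigit"

let read_hex2 c1 c2 =
  Char.chr (read_hexdigit c1 * 16 + read_hexdigit c2)

let () =
  assert (read_hex2 '4' '1' = 'A');   (* "\x41" is 'A' *)
  assert (read_hex2 '0' 'a' = '\n')   (* "\x0a" is newline *)
```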
+type env = {
+  preserve_quotations : bool;
+  mutable lexer : [ `Ocaml | `Test ];
+  mutable line_start : bool;
+  mutable in_directive : bool; (* true while processing a directive, until the
+                                  final newline *)
+  buf : Buffer.t;
+  mutable token_start : Lexing.position;
+  lexbuf : Lexing.lexbuf;
+}
+
+let new_line env =
+  env.line_start <- true;
+  count_new_lines env.lexbuf 1
+
+let clear env = Buffer.clear env.buf
+
+let add env s =
+  env.line_start <- false;
+  Buffer.add_string env.buf s
+
+let add_char env c =
+  env.line_start <- false;
+  Buffer.add_char env.buf c
+
+let get env = Buffer.contents env.buf
+
+let long_loc e = (e.token_start, pos2 e.lexbuf)
+
+let cppo_directives = [
+  "define";
+  "elif";
+  "else";
+  "endif";
+  "error";
+  "if";
+  "ifdef";
+  "ifndef";
+  "include";
+  "undef";
+  "warning";
+]
+
+let is_reserved_directive =
+  let tbl = Hashtbl.create 20 in
+  List.iter (fun s -> Hashtbl.add tbl s ()) cppo_directives;
+  fun s -> Hashtbl.mem tbl s
+
+}
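The position bookkeeping above (`count_new_lines`, `new_line`) keeps `Lexing` locations accurate across consumed newlines. This is a minimal stand-alone sketch of that update, assuming only the standard `Lexing` module, not the lexer state itself:

```ocaml
(* Stand-alone sketch of count_new_lines above: after consuming n
   newlines, the current line number advances by n and the
   beginning-of-line offset moves to the current character offset. *)
let count_new_lines (lb : Lexing.lexbuf) n =
  let p = lb.Lexing.lex_curr_p in
  lb.Lexing.lex_curr_p <-
    { p with
        Lexing.pos_lnum = p.Lexing.pos_lnum + n;
        Lexing.pos_bol = p.Lexing.pos_cnum }

let () =
  let lb = Lexing.from_string "a\nb" in
  let before = lb.Lexing.lex_curr_p.Lexing.pos_lnum in
  count_new_lines lb 1;
  assert (lb.Lexing.lex_curr_p.Lexing.pos_lnum = before + 1)
```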
+
+(* standard character classes used for macro identifiers *)
+let upper = ['A'-'Z']
+let lower = ['a'-'z']
+let digit = ['0'-'9']
+
+let identchar = upper | lower | digit | [ '_' '\'' ]
+
+
+(* iso-8859-1 upper and lower characters used for ocaml identifiers *)
+let oc_upper = ['A'-'Z' '\192'-'\214' '\216'-'\222']
+let oc_lower = ['a'-'z' '\223'-'\246' '\248'-'\255']
+let oc_identchar = oc_upper | oc_lower | digit | ['_' '\'']
+
+(*
+  Identifiers: ident is used for macro names and is a subset of oc_ident
+*)
+let ident = (lower | '_' identchar | upper) identchar*
+let oc_ident = (oc_lower | '_' oc_identchar | oc_upper) oc_identchar*
+
+
+
+let hex = ['0'-'9' 'a'-'f' 'A'-'F']
+let oct = ['0'-'7']
+let bin = ['0'-'1']
+
+let operator_char =
+  [ '!' '$' '%' '&' '*' '+' '-' '.' '/' ':' '<' '=' '>' '?' '@' '^' '|' '~']
+let infix_symbol =
+  ['=' '<' '>' '@' '^' '|' '&' '+' '-' '*' '/' '$' '%'] operator_char*
+let prefix_symbol = ['!' '?' '~'] operator_char*
+
+let blank = [ ' ' '\t' ]
+let space = [ ' ' '\t' '\r' '\n' ]
+
+let line = ( [^'\n'] | '\\' ('\r'? '\n') )* ('\n' | eof)
+
+let dblank0 = (blank | '\\' '\r'? '\n')*
+let dblank1 = blank (blank | '\\' '\r'? '\n')*
+
+rule token e = parse
+    ""
+      {
+        (*
+          We use two different lexers for boolean expressions in #if directives
+          and for regular OCaml tokens.
+        *)
+        match e.lexer with
+            `Ocaml -> ocaml_token e lexbuf
+          | `Test -> test_token e lexbuf
+      }
+
+and line e = parse
+    blank* "#" as s
+        {
+          match e.lexer with
+              `Test -> lexer_error lexbuf "Syntax error in boolean expression"
+            | `Ocaml ->
+                if e.line_start then (
+                  e.in_directive <- true;
+                  clear e;
+                  add e s;
+                  e.token_start <- pos1 lexbuf;
+                  e.line_start <- false;
+                  directive e lexbuf
+                )
+                else (
+                  e.line_start <- false;
+                  clear e;
+                  TEXT (loc lexbuf, false, s)
+                )
+        }
+
+  | ""  { clear e;
+          token e lexbuf }
+
+and directive e = parse
+    blank* "define" dblank1 (ident as id) "("
+      { DEFUN (long_loc e, id) }
+
+  | blank* "define" dblank1 (ident as id)
+      { assert e.in_directive;
+        DEF (long_loc e, id) }
+
+  | blank* "undef" dblank1 (ident as id)
+      { blank_until_eol e lexbuf;
+        UNDEF (long_loc e, id) }
+
+  | blank* "if" dblank1    { e.lexer <- `Test;
+                             IF (long_loc e) }
+  | blank* "elif" dblank1  { e.lexer <- `Test;
+                             ELIF (long_loc e) }
+
+  | blank* "ifdef" dblank1 (ident as id)
+      { blank_until_eol e lexbuf;
+        IFDEF (long_loc e, `Defined id) }
+
+  | blank* "ifndef" dblank1 (ident as id)
+      { blank_until_eol e lexbuf;
+        IFDEF (long_loc e, `Not (`Defined id)) }
+
+  | blank* "ext" dblank1 (ident as id)
+      { blank_until_eol e lexbuf;
+        clear e;
+        let s = read_ext e lexbuf in
+        EXT (long_loc e, id, s) }
+
+  | blank* "define" dblank1 oc_ident
+  | blank* "undef" dblank1 oc_ident
+  | blank* "ifdef" dblank1 oc_ident
+  | blank* "ifndef" dblank1 oc_ident
+  | blank* "ext" dblank1 oc_ident
+      { error (loc lexbuf)
+          "Identifiers containing non-ASCII characters \
+           may not be used as macro identifiers" }
+
+  | blank* "else"
+      { blank_until_eol e lexbuf;
+        ELSE (long_loc e) }
+
+  | blank* "endif"
+      { blank_until_eol e lexbuf;
+        ENDIF (long_loc e) }
+
+  | blank* "include" dblank0 '"'
+      { clear e;
+        eval_string e lexbuf;
+        blank_until_eol e lexbuf;
+        INCLUDE (long_loc e, get e) }
+
+  | blank* "error" dblank0 '"'
+      { clear e;
+        eval_string e lexbuf;
+        blank_until_eol e lexbuf;
+        ERROR (long_loc e, get e) }
+
+  | blank* "warning" dblank0 '"'
+      { clear e;
+        eval_string e lexbuf;
+        blank_until_eol e lexbuf;
+        WARNING (long_loc e, get e) }
+
+  | blank* (['0'-'9']+ as lnum) dblank0 '\r'? '\n'
+      { e.in_directive <- false;
+        new_line e;
+        let here = long_loc e in
+        let fname = None in
+        let lnum = int_of_string lnum in
+        (* Apply line directive regardless of possible #if condition. *)
+        set_lnum lexbuf fname lnum;
+        LINE (here, None, lnum) }
+
+  | blank* (['0'-'9']+ as lnum) dblank0 '"'
+      { clear e;
+        eval_string e lexbuf;
+        blank_until_eol e lexbuf;
+        let here = long_loc e in
+        let fname = Some (get e) in
+        let lnum = int_of_string lnum in
+        (* Apply line directive regardless of possible #if condition. *)
+        set_lnum lexbuf fname lnum;
+        LINE (here, fname, lnum) }
+
+  | blank*
+      { e.in_directive <- false;
+        add e (lexeme lexbuf);
+        TEXT (long_loc e, true, get e) }
+
+  | blank* (['a'-'z']+ as s)
+      { if is_reserved_directive s then
+          error (loc lexbuf) "cppo directive with missing or wrong arguments";
+        e.in_directive <- false;
+        add e (lexeme lexbuf);
+        TEXT (long_loc e, false, get e) }
+
+
+and blank_until_eol e = parse
+    blank* eof
+  | blank* '\r'? '\n' { new_line e;
+                        e.in_directive <- false }
+  | ""                { lexer_error lexbuf "syntax error in directive" }
+
+and read_ext e = parse
+    blank* "#" blank* "endext" blank* ('\r'? '\n' | eof)
+      { let s = get e in
+        clear e;
+        new_line e;
+        e.in_directive <- false;
+        s }
+
+  | (blank* as a) "\\" ("#" blank* "endext" blank* '\r'? '\n' as b)
+      { add e a;
+        add e b;
+        new_line e;
+        read_ext e lexbuf }
+
+  | [^'\n']* '\n' as x
+      { add e x;
+        new_line e;
+        read_ext e lexbuf }
+
+  | eof
+      { lexer_error lexbuf "End of file within #ext ... #endext" }
+
+and ocaml_token e = parse
+    "__LINE__"
+      { e.line_start <- false;
+        CURRENT_LINE (loc lexbuf) }
+
+  | "__FILE__"
+      { e.line_start <- false;
+        CURRENT_FILE (loc lexbuf) }
+
+  | ident as s
+      { e.line_start <- false;
+        IDENT (loc lexbuf, s) }
+
+  | oc_ident as s
+      { e.line_start <- false;
+        TEXT (loc lexbuf, false, s) }
+
+  | ident as s "("
+      { e.line_start <- false;
+        FUNIDENT (loc lexbuf, s) }
+
+  | "'\n'"
+  | "'\r\n'"
+      { new_line e;
+        TEXT (loc lexbuf, false, lexeme lexbuf) }
+
+  | "("       { e.line_start <- false; OP_PAREN (loc lexbuf) }
+  | ")"       { e.line_start <- false; CL_PAREN (loc lexbuf) }
+  | ","       { e.line_start <- false; COMMA (loc lexbuf) }
+
+  | "\\)"     { e.line_start <- false; TEXT (loc lexbuf, false, " )") }
+  | "\\,"     { e.line_start <- false; TEXT (loc lexbuf, false, " ,") }
+  | "\\("     { e.line_start <- false; TEXT (loc lexbuf, false, " (") }
+  | "\\#"     { e.line_start <- false; TEXT (loc lexbuf, false, " #") }
+
+  | '`'
+  | "!=" | "#" | "&" | "&&" | "(" |  "*" | "+" | "-"
+  | "-." | "->" | "." | ".. :" | "::" | ":=" | ":>" | ";" | ";;" | "<"
+  | "<-" | "=" | ">" | ">]" | ">}" | "?" | "??" | "[" | "[<" | "[>" | "[|"
+  | "]" | "_" | "`" | "{" | "{<" | "|" | "|]" | "}" | "~"
+  | ">>"
+  | prefix_symbol
+  | infix_symbol
+  | "'" ([^ '\'' '\\']
+         | '\\' (_ | digit digit digit | 'x' hex hex)) "'"
+
+      { e.line_start <- false;
+        TEXT (loc lexbuf, false, lexeme lexbuf) }
+
+  | blank+
+      { TEXT (loc lexbuf, true, lexeme lexbuf) }
+
+  | '\\' ('\r'? '\n' as nl)
+
+      {
+        new_line e;
+        if e.in_directive then
+          TEXT (loc lexbuf, true, nl)
+        else
+          TEXT (loc lexbuf, false, lexeme lexbuf)
+      }
+
+  | '\r'? '\n'
+      {
+        new_line e;
+        if e.in_directive then (
+          e.in_directive <- false;
+          ENDEF (loc lexbuf)
+        )
+        else
+          TEXT (loc lexbuf, true, lexeme lexbuf)
+      }
+
+  | "(*"
+      { clear e;
+        add e "(*";
+        e.token_start <- pos1 lexbuf;
+        comment (loc lexbuf) e 1 lexbuf }
+
+  | '"'
+      { clear e;
+        add e "\"";
+        e.token_start <- pos1 lexbuf;
+        string e lexbuf;
+        e.line_start <- false;
+        TEXT (long_loc e, false, get e) }
+
+  | "<:"
+  | "<<"
+      { if e.preserve_quotations then (
+          clear e;
+          add e (lexeme lexbuf);
+          e.token_start <- pos1 lexbuf;
+          quotation e lexbuf;
+          e.line_start <- false;
+          TEXT (long_loc e, false, get e)
+        )
+        else (
+          e.line_start <- false;
+          TEXT (loc lexbuf, false, lexeme lexbuf)
+        )
+      }
+
+
+  | '-'? ( digit (digit | '_')*
+         | ("0x"| "0X") hex (hex | '_')*
+         | ("0o"| "0O") oct (oct | '_')*
+         | ("0b"| "0B") bin (bin | '_')* )
+
+  | '-'? digit (digit | '_')* ('.' (digit | '_')* )?
+      (['e' 'E'] ['+' '-']? digit (digit | '_')* )?
+      { e.line_start <- false;
+        TEXT (loc lexbuf, false, lexeme lexbuf) }
+
+  | _
+      { e.line_start <- false;
+        TEXT (loc lexbuf, false, lexeme lexbuf) }
+
+  | eof
+      { EOF }
+
+
+and comment startloc e depth = parse
+    "(*"
+      { add e "(*";
+        comment startloc e (depth + 1) lexbuf }
+
+  | "*)"
+      { let depth = depth - 1 in
+        add e "*)";
+        if depth > 0 then
+          comment startloc e depth lexbuf
+        else (
+          e.line_start <- false;
+          TEXT (long_loc e, false, get e)
+        )
+      }
+  | '"'
+      { add_char e '"';
+        string e lexbuf;
+        comment startloc e depth lexbuf }
+
+  | "'\n'"
+  | "'\r\n'"
+      { new_line e;
+        add e (lexeme lexbuf);
+        comment startloc e depth lexbuf }
+
+  | "'" ([^ '\'' '\\']
+         | '\\' (_ | digit digit digit | 'x' hex hex)) "'"
+      { add e (lexeme lexbuf);
+        comment startloc e depth lexbuf }
+
+  | '\r'? '\n'
+      {
+        new_line e;
+        add e (lexeme lexbuf);
+        comment startloc e depth lexbuf
+      }
+
+  | [^'(' '*' '"' '\'' '\r' '\n']+
+      {
+        add e (lexeme lexbuf);
+        comment startloc e depth lexbuf
+      }
+
+  | _
+      { add e (lexeme lexbuf);
+        comment startloc e depth lexbuf }
+
+  | eof
+      { error startloc "Unterminated comment reaching the end of file" }
+
+
+and string e = parse
+    '"'
+      { add_char e '"' }
+
+  | "\\\\"
+  | '\\' '"'
+      { add e (lexeme lexbuf);
+        string e lexbuf }
+
+  | '\\' '\r'? '\n'
+      {
+        add e (lexeme lexbuf);
+        new_line e;
+        string e lexbuf
+      }
+
+  | '\r'? '\n'
+      {
+        if e.in_directive then
+          lexer_error lexbuf "Unterminated string literal"
+        else (
+          add e (lexeme lexbuf);
+          new_line e;
+          string e lexbuf
+        )
+      }
+
+  | _ as c
+      { add_char e c;
+        string e lexbuf }
+
+  | eof
+      { }
+
+
+and eval_string e = parse
+    '"'
+      {  }
+
+  | '\\' (['\'' '\"' '\\'] as c)
+      { add_char e c;
+        eval_string e lexbuf }
+
+  | '\\' '\r'? '\n'
+      { assert e.in_directive;
+        eval_string e lexbuf }
+
+  | '\r'? '\n'
+      { assert e.in_directive;
+        lexer_error lexbuf "Unterminated string literal" }
+
+  | '\\' (digit digit digit as s)
+      { add_char e (Char.chr (int_of_string s));
+        eval_string e lexbuf }
+
+  | '\\' 'x' (hex as c1) (hex as c2)
+      { add_char e (read_hex2 c1 c2);
+        eval_string e lexbuf }
+
+  | '\\' 'b'
+      { add_char e '\b';
+        eval_string e lexbuf }
+
+  | '\\' 'n'
+      { add_char e '\n';
+        eval_string e lexbuf }
+
+  | '\\' 'r'
+      { add_char e '\r';
+        eval_string e lexbuf }
+
+  | '\\' 't'
+      { add_char e '\t';
+        eval_string e lexbuf }
+
+  | [^ '\"' '\\']+
+      { add e (lexeme lexbuf);
+        eval_string e lexbuf }
+
+  | eof
+      { lexer_error lexbuf "Unterminated string literal" }
+
+
+and quotation e = parse
+    ">>"
+      { add e ">>" }
+
+  | "\\>>"
+      { add e "\\>>";
+        quotation e lexbuf }
+
+  | '\\' '\r'? '\n'
+      {
+        if e.in_directive then (
+          new_line e;
+          quotation e lexbuf
+        )
+        else (
+          add e (lexeme lexbuf);
+          new_line e;
+          quotation e lexbuf
+        )
+      }
+
+  | '\r'? '\n'
+      {
+        if e.in_directive then
+          lexer_error lexbuf "Unterminated quotation"
+        else (
+          add e (lexeme lexbuf);
+          new_line e;
+          quotation e lexbuf
+        )
+      }
+
+  | [^'>' '\\' '\r' '\n']+
+      { add e (lexeme lexbuf);
+        quotation e lexbuf }
+
+  | eof
+      { lexer_error lexbuf "Unterminated quotation" }
+
+and test_token e = parse
+    "true"    { TRUE }
+  | "false"   { FALSE }
+  | "defined" { DEFINED }
+  | "("       { OP_PAREN (loc lexbuf) }
+  | ")"       { CL_PAREN (loc lexbuf) }
+  | "&&"      { AND }
+  | "||"      { OR }
+  | "not"     { NOT }
+  | "="       { EQ }
+  | "<"       { LT }
+  | ">"       { GT }
+  | "<>"      { NE }
+  | "<="      { LE }
+  | ">="      { GE }
+
+  | '-'? ( digit (digit | '_')*
+         | ("0x"| "0X") hex (hex | '_')*
+         | ("0o"| "0O") oct (oct | '_')*
+         | ("0b"| "0B") bin (bin | '_')* )
+      { let s = Lexing.lexeme lexbuf in
+        try INT (Int64.of_string s)
+        with _ ->
+          error (loc lexbuf)
+            (sprintf "Integer constant %s is outside the valid range for int64" s)
+      }
+
+  | "+"       { PLUS }
+  | "-"       { MINUS }
+  | "*"       { STAR }
+  | "/"       { SLASH (loc lexbuf) }
+  | "mod"     { MOD (loc lexbuf) }
+  | "lsl"     { LSL }
+  | "lsr"     { LSR }
+  | "asr"     { ASR }
+  | "land"    { LAND }
+  | "lor"     { LOR }
+  | "lxor"    { LXOR }
+  | "lnot"    { LNOT }
+
+  | ","       { COMMA (loc lexbuf) }
+
+  | ident
+      { IDENT (loc lexbuf, lexeme lexbuf) }
+
+  | blank+                   { test_token e lexbuf }
+  | '\\' '\r'? '\n'          { new_line e;
+                               test_token e lexbuf }
+  | '\r'? '\n'
+  | eof        { assert e.in_directive;
+                 e.in_directive <- false;
+                 new_line e;
+                 e.lexer <- `Ocaml;
+                 ENDTEST (loc lexbuf) }
+  | _          { error (loc lexbuf)
+                   (sprintf "Invalid token %s" (Lexing.lexeme lexbuf)) }
+
+
+(* Parse just an int or a tuple of ints *)
+and int_tuple = parse
+  | space* (([^'(']#space)+ as s) space* eof
+                      { [Int64.of_string s] }
+
+  | space* "("        { int_tuple_content lexbuf }
+
+  | eof | _           { failwith "Neither an int nor a tuple" }
+
+and int_tuple_content = parse
+  | space* (([^',' ')']#space)+ as s) space* ","
+                      { let x = Int64.of_string s in
+                        x :: int_tuple_content lexbuf }
+
+  | space* (([^',' ')']#space)+ as s) space* ")" space* eof
+                      { [Int64.of_string s] }
+
+
+{
+  let init ~preserve_quotations file lexbuf =
+    new_file lexbuf file;
+    {
+      preserve_quotations = preserve_quotations;
+      lexer = `Ocaml;
+      line_start = true;
+      in_directive = false;
+      buf = Buffer.create 200;
+      token_start = Lexing.dummy_pos;
+      lexbuf = lexbuf;
+    }
+
+  let int_tuple_of_string s =
+    try Some (int_tuple (Lexing.from_string s))
+    with _ -> None
+}
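The `int_tuple` / `int_tuple_content` entry points above accept either a bare integer (`"42"`) or a parenthesised tuple (`"(4, 2, 1)"`). A simplified re-implementation (an assumption: this mimics the observable behavior of `int_tuple_of_string`, not the ocamllex rules themselves) makes the accepted shapes concrete:

```ocaml
(* Simplified sketch of int_tuple_of_string: "123" yields a singleton
   list, "(a, b, c)" yields the list of fields, anything else is None. *)
let int_tuple_of_string s =
  let parse_fields body =
    List.map (fun f -> Int64.of_string (String.trim f))
      (String.split_on_char ',' body)
  in
  let s = String.trim s in
  let n = String.length s in
  try
    if n >= 2 && s.[0] = '(' && s.[n - 1] = ')' then
      Some (parse_fields (String.sub s 1 (n - 2)))
    else
      Some [Int64.of_string s]
  with _ -> None

let () =
  assert (int_tuple_of_string "42" = Some [42L]);
  assert (int_tuple_of_string "(4, 2, 1)" = Some [4L; 2L; 1L]);
  assert (int_tuple_of_string "foo" = None)
```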
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_main.ml b/tools/ocaml/duniverse/cppo/src/cppo_main.ml
new file mode 100644
index 0000000000..93dd6477e4
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_main.ml
@@ -0,0 +1,230 @@
+open Printf
+
+let add_extension tbl s =
+  let i =
+    try String.index s ':'
+    with Not_found ->
+      failwith "Invalid -x argument"
+  in
+  let id = String.sub s 0 i in
+  let raw_tpl = String.sub s (i+1) (String.length s - i - 1) in
+  let cmd_tpl = Cppo_command.parse raw_tpl in
+  if Hashtbl.mem tbl id then
+    failwith ("Multiple definitions for extension " ^ id)
+  else
+    Hashtbl.add tbl id cmd_tpl
+
+let semver_re = Str.regexp "\
+\\([0-9]+\\)\
+\\.\\([0-9]+\\)\
+\\.\\([0-9]+\\)\
+\\([~-]\\([^+]*\\)\\)?\
+\\(\\+\\(.*\\)\\)?\
+\r?$"
+
+let parse_semver s =
+  if not (Str.string_match semver_re s 0) then
+    None
+  else
+    let major = Str.matched_group 1 s in
+    let minor = Str.matched_group 2 s in
+    let patch = Str.matched_group 3 s in
+    let prerelease = try Some (Str.matched_group 5 s) with Not_found -> None in
+    let build = try Some (Str.matched_group 7 s) with Not_found -> None in
+    Some (major, minor, patch, prerelease, build)
+
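`parse_semver` above splits a version string with a `Str` regex. The core MAJOR.MINOR.PATCH extraction can be sketched without `Str` (an assumption: this `Scanf` version ignores the prerelease/build groups that the regex captures):

```ocaml
(* Sketch of the numeric part of parse_semver above: pull the three
   dotted integers and ignore any trailing prerelease/build suffix. *)
let parse_semver_core s =
  try
    Scanf.sscanf s "%d.%d.%d"
      (fun major minor patch -> Some (major, minor, patch))
  with Scanf.Scan_failure _ | Failure _ | End_of_file -> None

let () =
  assert (parse_semver_core "4.02.1" = Some (4, 2, 1));
  assert (parse_semver_core "4.12.0~alpha1" = Some (4, 12, 0));
  assert (parse_semver_core "nope" = None)
```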
+let define var s =
+  [sprintf "#define %s %s\n" var s]
+
+let opt_define var o =
+  match o with
+  | None -> []
+  | Some s -> define var s
+
+let parse_version_spec s =
+  let error () =
+    failwith (sprintf "Invalid version specification: %S" s)
+  in
+  let prefix, version_full =
+    try
+      let len = String.index s ':' in
+      String.sub s 0 len, String.sub s (len+1) (String.length s - (len+1))
+    with Not_found ->
+      error ()
+  in
+  match parse_semver version_full with
+  | None ->
+      error ()
+  | Some (major, minor, patch, opt_prerelease, opt_build) ->
+      let version = sprintf "(%s, %s, %s)" major minor patch in
+      let version_string = sprintf "%s.%s.%s" major minor patch in
+      List.flatten [
+        define (prefix ^ "_MAJOR") major;
+        define (prefix ^ "_MINOR") minor;
+        define (prefix ^ "_PATCH") patch;
+        opt_define (prefix ^ "_PRERELEASE") opt_prerelease;
+        opt_define (prefix ^ "_BUILD") opt_build;
+        define (prefix ^ "_VERSION") version;
+        define (prefix ^ "_VERSION_STRING") version_string;
+        define (prefix ^ "_VERSION_FULL") s;
+      ]
+
+let main () =
+  let extensions = Hashtbl.create 10 in
+  let files = ref [] in
+  let header = ref [] in
+  let incdirs = ref [] in
+  let out_file = ref None in
+  let preserve_quotations = ref false in
+  let show_exact_locations = ref false in
+  let show_no_locations = ref false in
+  let options = [
+    "-D", Arg.String (fun s -> header := ("#define " ^ s ^ "\n") :: !header),
+    "DEF
+          Equivalent of interpreting '#define DEF' before processing the
+          input, e.g. `cppo -D 'VERSION \"1.2.3\"'` (no equal sign)";
+
+    "-U", Arg.String (fun s -> header := ("#undef " ^ s ^ "\n") :: !header),
+    "IDENT
+          Equivalent of interpreting '#undef IDENT' before processing the
+          input";
+
+    "-I", Arg.String (fun s -> incdirs := s :: !incdirs),
+    "DIR
+          Add directory DIR to the search path for included files";
+
+    "-V", Arg.String (fun s -> header := parse_version_spec s @ !header),
+    "VAR:MAJOR.MINOR.PATCH-OPTPRERELEASE+OPTBUILD
+          Define the following variables extracted from a version string
+          (following the Semantic Versioning syntax http://semver.org/):
+
+            VAR_MAJOR           must be a non-negative int
+            VAR_MINOR           must be a non-negative int
+            VAR_PATCH           must be a non-negative int
+            VAR_PRERELEASE      if the OPTPRERELEASE part exists
+            VAR_BUILD           if the OPTBUILD part exists
+            VAR_VERSION         is the tuple (MAJOR, MINOR, PATCH)
+            VAR_VERSION_STRING  is the string MAJOR.MINOR.PATCH
+            VAR_VERSION_FULL    is the original string
+
+          Example: cppo -V OCAML:4.02.1
+
+          Note that cppo recognises both '-' and '~' as the pre-release
+          separator, while '+' introduces build metadata: -V OCAML:4.11.0+alpha1
+          sets OCAML_BUILD to alpha1, whereas -V OCAML:4.12.0~alpha1 sets
+          OCAML_PRERELEASE to alpha1.
+";
+
+    "-o", Arg.String (fun s -> out_file := Some s),
+    "FILE
+          Output file";
+
+    "-q", Arg.Set preserve_quotations,
+    "
+          Identify and preserve camlp4 quotations";
+
+    "-s", Arg.Set show_exact_locations,
+    "
+          Output line directives pointing to the exact source location of
+          each token, including those coming from the body of macro
+          definitions.  This behavior is off by default.";
+
+    "-n", Arg.Set show_no_locations,
+    "
+          Do not output any line directive other than those found in the
+          input (overrides -s).";
+
+    "-version", Arg.Unit (fun () ->
+                            print_endline Cppo_version.cppo_version;
+                            exit 0),
+    "
+          Print the version of the program and exit.";
+
+    "-x", Arg.String (fun s -> add_extension extensions s),
+    "NAME:CMD_TEMPLATE
+          Define a custom preprocessor target section starting with:
+            #ext \"NAME\"
+          and ending with:
+            #endext
+
+          NAME must be a lowercase identifier of the form [a-z][A-Za-z0-9_]*
+
+          CMD_TEMPLATE is a command template supporting the following
+          special sequences:
+            %F  file name (unescaped; beware of potential scripting attacks)
+            %B  number of the first line
+            %E  number of the last line
+            %%  a single percent sign
+
+          Filename, first line number and last line number are also
+          available from the following environment variables:
+          CPPO_FILE, CPPO_FIRST_LINE, CPPO_LAST_LINE.
+
+          The command produced is expected to read the data lines from stdin
+          and to write its output to stdout."
+  ]
+  in
+  let msg = sprintf "\
+Usage: %s [OPTIONS] [FILE1 [FILE2 ...]]
+Options:" Sys.argv.(0) in
+  let add_file s = files := s :: !files in
+  Arg.parse options add_file msg;
+
+  let inputs =
+    let preliminaries =
+      match List.rev !header with
+          [] -> []
+        | l ->
+            let s = String.concat "" l in
+            [ Sys.getcwd (),
+              "<command line>",
+              (fun () -> Lexing.from_string s),
+              (fun () -> ()) ]
+    in
+    let main =
+      match List.rev !files with
+          [] -> [ Sys.getcwd (),
+                  "<stdin>",
+                  (fun () -> Lexing.from_channel stdin),
+                  (fun () -> ()) ]
+        | l ->
+            List.map (
+              fun file ->
+                let ic = lazy (open_in file) in
+                Filename.dirname file,
+                file,
+                (fun () -> Lexing.from_channel (Lazy.force ic)),
+                (fun () -> close_in (Lazy.force ic))
+            ) l
+    in
+    preliminaries @ main
+  in
+
+  let env = Cppo_eval.builtin_env in
+  let buf = Buffer.create 10_000 in
+  let _env =
+    Cppo_eval.include_inputs
+      ~extensions
+      ~preserve_quotations: !preserve_quotations
+      ~incdirs: (List.rev !incdirs)
+      ~show_exact_locations: !show_exact_locations
+      ~show_no_locations: !show_no_locations
+      buf env inputs
+  in
+  match !out_file with
+      None ->
+        print_string (Buffer.contents buf);
+        flush stdout
+    | Some file ->
+        let oc = open_out file in
+        output_string oc (Buffer.contents buf);
+        close_out oc
+
+let () =
+  if not !Sys.interactive then
+    try
+      main ()
+    with
+    | Cppo_types.Cppo_error msg
+    | Failure msg ->
+        eprintf "Error: %s\n%!" msg;
+        exit 1
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_parser.mly b/tools/ocaml/duniverse/cppo/src/cppo_parser.mly
new file mode 100644
index 0000000000..21d2cddb30
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_parser.mly
@@ -0,0 +1,266 @@
+%{
+  open Cppo_types
+%}
+
+/* Directives */
+%token < Cppo_types.loc * string > DEF DEFUN UNDEF INCLUDE WARNING ERROR
+%token < Cppo_types.loc * string option * int > LINE
+%token < Cppo_types.loc * Cppo_types.bool_expr > IFDEF
+%token < Cppo_types.loc * string * string > EXT
+%token < Cppo_types.loc > ENDEF IF ELIF ELSE ENDIF ENDTEST
+
+/* Boolean expressions in #if/#elif directives */
+%token TRUE FALSE DEFINED NOT AND OR EQ LT GT NE LE GE
+       PLUS MINUS STAR LNOT LSL LSR ASR LAND LOR LXOR
+%token < Cppo_types.loc > OP_PAREN SLASH MOD
+%token < int64 > INT
+
+
+/* Regular program and shared terminals */
+%token < Cppo_types.loc > CL_PAREN COMMA CURRENT_LINE CURRENT_FILE
+%token < Cppo_types.loc * string > IDENT FUNIDENT
+%token < Cppo_types.loc * bool * string > TEXT /* bool means "is space" */
+%token EOF
+
+/* Priorities for boolean expressions */
+%left OR
+%left AND
+
+/* Priorities for arithmetics */
+%left PLUS MINUS
+%left STAR SLASH
+%left MOD LSL LSR ASR LAND LOR LXOR
+%nonassoc NOT
+%nonassoc LNOT
+%nonassoc UMINUS
+
+%start main
+%type < Cppo_types.node list > main
+%%
+
+main:
+| unode main { $1 :: $2 }
+| EOF        { [] }
+;
+
+unode_list0:
+| unode unode_list0  { $1 :: $2 }
+|                    { [] }
+;
+
+pnode_list0:
+| pnode pnode_list0  { $1 :: $2 }
+|                    { [] }
+;
+
+/* node in which opening and closing parentheses don't need to match */
+unode:
+| node          { $1 }
+| OP_PAREN      { `Text ($1, false, "(") }
+| CL_PAREN      { `Text ($1, false, ")") }
+| COMMA         { `Text ($1, false, ",") }
+;
+
+/* node in which parentheses must be closed */
+pnode:
+| node          { $1 }
+| OP_PAREN pnode_or_comma_list0 CL_PAREN
+                { `Seq [`Text ($1, false, "(");
+                        `Seq $2;
+                        `Text ($3, false, ")")] }
+;
+
+/* node without parentheses handling (need to use unode or pnode) */
+node:
+| TEXT          { `Text $1 }
+
+| IDENT         { let loc, name = $1 in
+                  `Ident (loc, name, None) }
+
+| FUNIDENT args1 CL_PAREN
+                {
+                (* macro application that receives at least one argument,
+                   possibly empty.  We cannot distinguish syntactically between
+                   zero arguments and one empty argument.
+                *)
+                  let (pos1, _), name = $1 in
+                  let _, pos2 = $3 in
+                  `Ident ((pos1, pos2), name, Some $2) }
+| FUNIDENT error
+                { error (fst $1) "Invalid macro application" }
+
+| CURRENT_LINE  { `Current_line $1 }
+| CURRENT_FILE  { `Current_file $1 }
+
+| DEF unode_list0 ENDEF
+                { let (pos1, _), name = $1 in
+
+                  (* Additional spacing is needed for cases like '+foo+'
+                     expanding into '++' instead of '+ +'. *)
+                  let safe_space = `Text ($3, true, " ") in
+
+                  let body = $2 @ [safe_space] in
+                  let _, pos2 = $3 in
+                  `Def ((pos1, pos2), name, body) }
+
+| DEFUN def_args1 CL_PAREN unode_list0 ENDEF
+                { let (pos1, _), name = $1 in
+                  let args = $2 in
+
+                  (* Additional spacing is needed for cases like 'foo()bar'
+                     where 'foo()' expands into 'abc', giving 'abcbar'
+                     instead of 'abc bar';
+                     Also needed for '+foo()+' expanding into '++' instead
+                     of '+ +'. *)
+                  let safe_space = `Text ($5, true, " ") in
+
+                  let body = $4 @ [safe_space] in
+                  let _, pos2 = $5 in
+                  `Defun ((pos1, pos2), name, args, body) }
+
+| DEFUN CL_PAREN
+                { error (fst (fst $1), snd $2)
+                    "At least one argument is required" }
+
+| UNDEF
+                { `Undef $1 }
+| WARNING
+                { `Warning $1 }
+| ERROR
+                { `Error $1 }
+
+| INCLUDE
+                { `Include $1 }
+
+| EXT
+                { `Ext $1 }
+
+| IF test unode_list0 elif_list ENDIF
+                { let pos1, _ = $1 in
+                  let _, pos2 = $5 in
+                  let loc = (pos1, pos2) in
+                  let test = $2 in
+                  let if_true = $3 in
+                  let if_false =
+                    List.fold_right (
+                      fun (loc, test, if_true) if_false ->
+                        [`Cond (loc, test, if_true, if_false) ]
+                    ) $4 []
+                  in
+                  `Cond (loc, test, if_true, if_false)
+                }
+
+| IF test unode_list0 elif_list error
+                { (* BUG? ocamlyacc fails to reduce this rule, while menhir does *)
+                  error $1 "missing #endif" }
+
+| IFDEF unode_list0 elif_list ENDIF
+                { let (pos1, _), test = $1 in
+                  let _, pos2 = $4 in
+                  let loc = (pos1, pos2) in
+                  let if_true = $2 in
+                  let if_false =
+                    List.fold_right (
+                      fun (loc, test, if_true) if_false ->
+                        [`Cond (loc, test, if_true, if_false) ]
+                    ) $3 []
+                  in
+                  `Cond (loc, test, if_true, if_false)
+                }
+
+| IFDEF unode_list0 elif_list error
+                { error (fst $1) "missing #endif" }
+
+| LINE          { `Line $1 }
+;
+
+
+elif_list:
+  ELIF test unode_list0 elif_list
+                   { let pos1, _ = $1 in
+                     let pos2 = Parsing.rhs_end_pos 4 in
+                     ((pos1, pos2), $2, $3) :: $4 }
+| ELSE unode_list0
+                   { let pos1, _ = $1 in
+                     let pos2 = Parsing.rhs_end_pos 2 in
+                     [ ((pos1, pos2), `True, $2) ] }
+|                  { [] }
+;
+
+args1:
+  pnode_list0 COMMA args1   { $1 :: $3  }
+| pnode_list0               { [ $1 ] }
+;
+
+pnode_or_comma_list0:
+| pnode pnode_or_comma_list0   { $1 :: $2 }
+| COMMA pnode_or_comma_list0   { `Text ($1, false, ",") :: $2 }
+|                              { [] }
+;
+
+def_args1:
+| arg_blank IDENT COMMA def_args1
+                               { (snd $2) :: $4 }
+| arg_blank IDENT              { [ snd $2 ] }
+;
+
+arg_blank:
+| TEXT arg_blank         { let loc, is_space, _s = $1 in
+                           if not is_space then
+                             error loc "Invalid argument list"
+                         }
+|                        { () }
+;
+
+test:
+  bexpr ENDTEST { $1 }
+;
+
+/* Boolean expressions after #if or #elif */
+bexpr:
+  | TRUE                            { `True }
+  | FALSE                           { `False }
+  | DEFINED IDENT                   { `Defined (snd $2) }
+  | OP_PAREN bexpr CL_PAREN         { $2 }
+  | NOT bexpr                       { `Not $2 }
+  | bexpr AND bexpr                 { `And ($1, $3) }
+  | bexpr OR bexpr                  { `Or ($1, $3) }
+  | aexpr EQ aexpr                  { `Eq ($1, $3) }
+  | aexpr LT aexpr                  { `Lt ($1, $3) }
+  | aexpr GT aexpr                  { `Gt ($1, $3) }
+  | aexpr NE aexpr                  { `Not (`Eq ($1, $3)) }
+  | aexpr LE aexpr                  { `Not (`Gt ($1, $3)) }
+  | aexpr GE aexpr                  { `Not (`Lt ($1, $3)) }
+;
+
+/* Arithmetic expressions within boolean expressions */
+aexpr:
+  | INT                      { `Int $1 }
+  | IDENT                    { `Ident $1 }
+  | OP_PAREN aexpr_list CL_PAREN
+                             { match $2 with
+                               | [x] -> x
+                               | l ->
+                                 let pos1, _ = $1 in
+                                 let _, pos2 = $3 in
+                                 `Tuple ((pos1, pos2), l)
+                             }
+  | aexpr PLUS aexpr         { `Add ($1, $3) }
+  | aexpr MINUS aexpr        { `Sub ($1, $3) }
+  | aexpr STAR aexpr         { `Mul ($1, $3) }
+  | aexpr SLASH aexpr        { `Div ($2, $1, $3) }
+  | aexpr MOD aexpr          { `Mod ($2, $1, $3) }
+  | aexpr LSL aexpr          { `Lsl ($1, $3) }
+  | aexpr LSR aexpr          { `Lsr ($1, $3) }
+  | aexpr ASR aexpr          { `Asr ($1, $3) }
+  | aexpr LAND aexpr         { `Land ($1, $3) }
+  | aexpr LOR aexpr          { `Lor ($1, $3) }
+  | aexpr LXOR aexpr         { `Lxor ($1, $3) }
+  | LNOT aexpr               { `Lnot $2 }
+  | MINUS aexpr %prec UMINUS { `Neg $2 }
+;
+
+aexpr_list:
+  | aexpr COMMA aexpr_list   { $1 :: $3 }
+  | aexpr                    { [$1] }
+;
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_types.ml b/tools/ocaml/duniverse/cppo/src/cppo_types.ml
new file mode 100644
index 0000000000..d6428d8101
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_types.ml
@@ -0,0 +1,98 @@
+open Printf
+open Lexing
+
+module String_set = Set.Make (String)
+module String_map = Map.Make (String)
+
+type loc = position * position
+
+type bool_expr =
+    [ `True
+    | `False
+    | `Defined of string
+    | `Not of bool_expr (* not *)
+    | `And of (bool_expr * bool_expr) (* && *)
+    | `Or of (bool_expr * bool_expr) (* || *)
+    | `Eq of (arith_expr * arith_expr) (* = *)
+    | `Lt of (arith_expr * arith_expr) (* < *)
+    | `Gt of (arith_expr * arith_expr) (* > *)
+        (* syntax for additional operators: <>, <=, >= *)
+    ]
+
+and arith_expr = (* signed int64 *)
+    [ `Int of int64
+    | `Ident of (loc * string)
+        (* must be bound to a valid int literal.
+           Expansion of macro functions is not supported. *)
+
+    | `Tuple of (loc * arith_expr list)
+        (* tuple of 2 or more elements guaranteed by the syntax *)
+
+    | `Neg of arith_expr (* - *)
+    | `Add of (arith_expr * arith_expr) (* + *)
+    | `Sub of (arith_expr * arith_expr) (* - *)
+    | `Mul of (arith_expr * arith_expr) (* * *)
+    | `Div of (loc * arith_expr * arith_expr) (* / *)
+    | `Mod of (loc * arith_expr * arith_expr) (* mod *)
+
+    (* Bitwise operations on 64 bits *)
+    | `Lnot of arith_expr (* lnot *)
+    | `Lsl of (arith_expr * arith_expr) (* lsl *)
+    | `Lsr of (arith_expr * arith_expr) (* lsr *)
+    | `Asr of (arith_expr * arith_expr) (* asr *)
+    | `Land of (arith_expr * arith_expr) (* land *)
+    | `Lor of (arith_expr * arith_expr) (* lor *)
+    | `Lxor of (arith_expr * arith_expr) (* lxor *)
+    ]
+
+and node =
+    [ `Ident of (loc * string * node list list option)
+    | `Def of (loc * string * node list)
+    | `Defun of (loc * string * string list * node list)
+    | `Undef of (loc * string)
+    | `Include of (loc * string)
+    | `Ext of (loc * string * string)
+    | `Cond of (loc * bool_expr * node list * node list)
+    | `Error of (loc * string)
+    | `Warning of (loc * string)
+    | `Text of (loc * bool * string) (* bool is true for space tokens *)
+    | `Seq of node list
+    | `Stringify of node
+    | `Capitalize of node
+    | `Concat of (node * node)
+    | `Line of (loc * string option * int)
+    | `Current_line of loc
+    | `Current_file of loc ]
+
+
+
+let string_of_loc (pos1, pos2) =
+  let line1 = pos1.pos_lnum
+  and start1 = pos1.pos_bol in
+  Printf.sprintf "File %S, line %i, characters %i-%i"
+    pos1.pos_fname line1
+    (pos1.pos_cnum - start1)
+    (pos2.pos_cnum - start1)
+
+
+exception Cppo_error of string
+
+let error loc s =
+  let msg =
+    sprintf "%s\nError: %s" (string_of_loc loc) s in
+  raise (Cppo_error msg)
+
+let warning loc s =
+  let msg =
+    sprintf "%s\nWarning: %s" (string_of_loc loc) s in
+  eprintf "%s\n%!" msg
+
+let dummy_loc = (Lexing.dummy_pos, Lexing.dummy_pos)
+
+let rec flatten_nodes (l: node list): node list =
+  List.flatten (List.map flatten_node l)
+
+and flatten_node (node: node): node list =
+  match node with
+  | `Seq l -> flatten_nodes l
+  | x -> [x]
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_types.mli b/tools/ocaml/duniverse/cppo/src/cppo_types.mli
new file mode 100644
index 0000000000..f3b54235e9
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_types.mli
@@ -0,0 +1,70 @@
+type loc = Lexing.position * Lexing.position
+
+exception Cppo_error of string
+
+type bool_expr =
+    [ `True
+    | `False
+    | `Defined of string
+    | `Not of bool_expr (* not *)
+    | `And of (bool_expr * bool_expr) (* && *)
+    | `Or of (bool_expr * bool_expr) (* || *)
+    | `Eq of (arith_expr * arith_expr) (* = *)
+    | `Lt of (arith_expr * arith_expr) (* < *)
+    | `Gt of (arith_expr * arith_expr) (* > *)
+        (* syntax for additional operators: <>, <=, >= *)
+    ]
+
+and arith_expr = (* signed int64 *)
+    [ `Int of int64
+    | `Ident of (loc * string)
+        (* must be bound to a valid int literal.
+           Expansion of macro functions is not supported. *)
+
+    | `Tuple of (loc * arith_expr list)
+        (* tuple of 2 or more elements guaranteed by the syntax *)
+
+    | `Neg of arith_expr (* - *)
+    | `Add of (arith_expr * arith_expr) (* + *)
+    | `Sub of (arith_expr * arith_expr) (* - *)
+    | `Mul of (arith_expr * arith_expr) (* * *)
+    | `Div of (loc * arith_expr * arith_expr) (* / *)
+    | `Mod of (loc * arith_expr * arith_expr) (* mod *)
+
+    (* Bitwise operations on 64 bits *)
+    | `Lnot of arith_expr (* lnot *)
+    | `Lsl of (arith_expr * arith_expr) (* lsl *)
+    | `Lsr of (arith_expr * arith_expr) (* lsr *)
+    | `Asr of (arith_expr * arith_expr) (* asr *)
+    | `Land of (arith_expr * arith_expr) (* land *)
+    | `Lor of (arith_expr * arith_expr) (* lor *)
+    | `Lxor of (arith_expr * arith_expr) (* lxor *)
+    ]
+
+and node =
+    [ `Ident of (loc * string * node list list option)
+    | `Def of (loc * string * node list)
+    | `Defun of (loc * string * string list * node list)
+    | `Undef of (loc * string)
+    | `Include of (loc * string)
+    | `Ext of (loc * string * string)
+    | `Cond of (loc * bool_expr * node list * node list)
+    | `Error of (loc * string)
+    | `Warning of (loc * string)
+    | `Text of (loc * bool * string) (* bool is true for space tokens *)
+    | `Seq of node list
+    | `Stringify of node
+    | `Capitalize of node
+    | `Concat of (node * node)
+    | `Line of (loc * string option * int)
+    | `Current_line of loc
+    | `Current_file of loc ]
+
+val dummy_loc : loc
+
+val error : loc -> string -> _
+
+val warning : loc -> string -> unit
+
+val flatten_nodes : node list -> node list
+
diff --git a/tools/ocaml/duniverse/cppo/src/cppo_version.mli b/tools/ocaml/duniverse/cppo/src/cppo_version.mli
new file mode 100644
index 0000000000..7d20f68da6
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/cppo_version.mli
@@ -0,0 +1 @@
+val cppo_version : string
diff --git a/tools/ocaml/duniverse/cppo/src/dune b/tools/ocaml/duniverse/cppo/src/dune
new file mode 100644
index 0000000000..8cf871b460
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/src/dune
@@ -0,0 +1,21 @@
+(ocamllex cppo_lexer)
+
+(ocamlyacc cppo_parser)
+
+(rule
+ (targets cppo_version.ml)
+ (action
+  (with-stdout-to
+   %{targets}
+   (echo "let cppo_version = \"%{version:cppo}\""))))
+
+(executable
+ (name cppo_main)
+ (package cppo)
+ (public_name cppo)
+ (modules :standard \ compat)
+ (preprocess (per_module
+  ((action (progn
+    (run ocaml %{dep:compat.ml} %{input-file})
+    (cat %{input-file}))) cppo_eval)))
+ (libraries unix str))
diff --git a/tools/ocaml/duniverse/cppo/test/capital.cppo b/tools/ocaml/duniverse/cppo/test/capital.cppo
new file mode 100644
index 0000000000..fa85caae4e
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/capital.cppo
@@ -0,0 +1,6 @@
+
+
+#define EVENT(n,ty) external CONCAT(on,CAPITALIZE(n)) : ty = STRINGIFY(n) [@@bs.val] 
+
+
+EVENT(exit, unit -> unit)
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/cppo/test/capital.ref b/tools/ocaml/duniverse/cppo/test/capital.ref
new file mode 100644
index 0000000000..adcc26e251
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/capital.ref
@@ -0,0 +1,6 @@
+
+
+
+
+# 6 "capital.cppo"
+ external  onExit  :  unit -> unit = "exit" [@@bs.val]  
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/cppo/test/comments.cppo b/tools/ocaml/duniverse/cppo/test/comments.cppo
new file mode 100644
index 0000000000..5e335f1c3b
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/comments.cppo
@@ -0,0 +1,7 @@
+(* '"' *)
+
+#define BE_GONE
+
+(* "*)"
+#define DONT_TOUCH_THIS
+*)
diff --git a/tools/ocaml/duniverse/cppo/test/comments.ref b/tools/ocaml/duniverse/cppo/test/comments.ref
new file mode 100644
index 0000000000..1d0dd1db64
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/comments.ref
@@ -0,0 +1,8 @@
+# 1 "comments.cppo"
+(* '"' *)
+
+
+# 5 "comments.cppo"
+(* "*)"
+#define DONT_TOUCH_THIS
+*)
diff --git a/tools/ocaml/duniverse/cppo/test/cond.cppo b/tools/ocaml/duniverse/cppo/test/cond.cppo
new file mode 100644
index 0000000000..b5f0c49ac4
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/cond.cppo
@@ -0,0 +1,47 @@
+#if 1 = 1
+#else
+#error "ignored #else (?)"
+#endif
+
+#if true
+  banana
+#elif false
+  apple
+  #error "ignored #elif (?)"
+#endif
+
+#if false
+  earthworm
+  #error ""
+#elif true
+  apricot
+#endif
+
+#if false
+  cuckoo
+  #error ""
+#else
+  #if false
+    egg
+    #error ""
+  #else
+    nest
+  #endif
+#endif
+
+#define X 3
+
+#if false
+  helicopter
+  #error ""
+#elif false
+  ocean
+  #error ""
+#else
+  #if X = 12
+    sand
+    #error ""
+  #elif 4 * X = 12
+    sea urchin
+  #endif
+#endif
diff --git a/tools/ocaml/duniverse/cppo/test/cond.ref b/tools/ocaml/duniverse/cppo/test/cond.ref
new file mode 100644
index 0000000000..a21ea217bf
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/cond.ref
@@ -0,0 +1,17 @@
+
+  
+# 7 "cond.cppo"
+  banana
+
+  
+# 17 "cond.cppo"
+  apricot
+
+    
+# 28 "cond.cppo"
+    nest
+
+
+    
+# 45 "cond.cppo"
+    sea urchin
diff --git a/tools/ocaml/duniverse/cppo/test/dune b/tools/ocaml/duniverse/cppo/test/dune
new file mode 100644
index 0000000000..a7fab7bbfb
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/dune
@@ -0,0 +1,130 @@
+(rule
+ (targets ext.out)
+ (deps
+  (:< ext.cppo)
+  source.sh)
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} -x "rot13:tr '[a-z]' '[n-za-m]'" -x
+     "source:sh source.sh '%F' %B %E" %{<}))))
+
+(rule
+ (targets comments.out)
+ (deps
+  (:< comments.cppo))
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(rule
+ (targets cond.out)
+ (deps
+  (:< cond.cppo))
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(rule
+ (targets tuple.out)
+ (deps
+  (:< tuple.cppo))
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(rule
+ (targets loc.out)
+ (deps
+  (:< loc.cppo))
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(rule
+ (targets paren_arg.out)
+ (deps
+  (:< paren_arg.cppo))
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(rule
+ (targets unmatched.out)
+ (deps
+  (:< unmatched.cppo))
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} %{<}))))
+
+(rule
+ (targets version.out)
+ (deps
+  (:< version.cppo))
+ (action
+  (with-stdout-to
+   %{targets}
+   (run %{bin:cppo} -V X:123.05.2-alpha.1+foo-2.1 %{<}))))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (action
+  (diff ext.ref ext.out)))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (action
+  (diff comments.ref comments.out)))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (action
+  (diff cond.ref cond.out)))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (action
+  (diff tuple.ref tuple.out)))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (action
+  (diff loc.ref loc.out)))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (action
+  (diff paren_arg.ref paren_arg.out)))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (deps version.out))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (action
+  (diff unmatched.ref unmatched.out)))
+
+(alias
+ (name runtest)
+ (package cppo)
+ (deps
+  (:< test.cppo)
+  incl.cppo
+  incl2.cppo)
+ (action
+  (ignore-stdout (run %{bin:cppo} %{<}))))
diff --git a/tools/ocaml/duniverse/cppo/test/ext.cppo b/tools/ocaml/duniverse/cppo/test/ext.cppo
new file mode 100644
index 0000000000..cb32573f67
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/ext.cppo
@@ -0,0 +1,10 @@
+hello
+#ext rot13
+abc
+\#endext
+def
+#endext
+goodbye
+
+#ext source
+#endext
diff --git a/tools/ocaml/duniverse/cppo/test/ext.ref b/tools/ocaml/duniverse/cppo/test/ext.ref
new file mode 100644
index 0000000000..4626b21481
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/ext.ref
@@ -0,0 +1,28 @@
+# 1 "ext.cppo"
+hello
+nop
+#raqrkg
+qrs
+# 7 "ext.cppo"
+goodbye
+
+# 9
+(*
+hello
+#ext rot13
+abc
+\#endext
+def
+#endext
+goodbye
+
+#ext source
+#endext
+*)
+(*
+   Environment variables:
+     CPPO_FILE=ext.cppo
+     CPPO_FIRST_LINE=9
+     CPPO_LAST_LINE=11
+*)
+# 11
diff --git a/tools/ocaml/duniverse/cppo/test/incl.cppo b/tools/ocaml/duniverse/cppo/test/incl.cppo
new file mode 100644
index 0000000000..a2ce8dbb36
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/incl.cppo
@@ -0,0 +1,3 @@
+included
+
+#include "incl2.cppo"
diff --git a/tools/ocaml/duniverse/cppo/test/incl2.cppo b/tools/ocaml/duniverse/cppo/test/incl2.cppo
new file mode 100644
index 0000000000..9766475a41
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/incl2.cppo
@@ -0,0 +1 @@
+ok
diff --git a/tools/ocaml/duniverse/cppo/test/loc.cppo b/tools/ocaml/duniverse/cppo/test/loc.cppo
new file mode 100644
index 0000000000..d7c2c521f3
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/loc.cppo
@@ -0,0 +1,8 @@
+#define loc __FILE__ __LINE__
+loc
+X(loc)
+X(loc)
+X(Y(loc))
+
+#define F(x) loc
+F()
diff --git a/tools/ocaml/duniverse/cppo/test/loc.ref b/tools/ocaml/duniverse/cppo/test/loc.ref
new file mode 100644
index 0000000000..78bbfb72bd
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/loc.ref
@@ -0,0 +1,21 @@
+# 2 "loc.cppo"
+  "loc.cppo"   2  
+# 3 "loc.cppo"
+X(
+# 3 "loc.cppo"
+    "loc.cppo"   3  
+# 3 "loc.cppo"
+)
+X(
+# 4 "loc.cppo"
+    "loc.cppo"   4  
+# 4 "loc.cppo"
+)
+X(Y(
+# 5 "loc.cppo"
+      "loc.cppo"   5  
+# 5 "loc.cppo"
+  ))
+
+# 8 "loc.cppo"
+   "loc.cppo"   8   
diff --git a/tools/ocaml/duniverse/cppo/test/paren_arg.cppo b/tools/ocaml/duniverse/cppo/test/paren_arg.cppo
new file mode 100644
index 0000000000..f4c4803eb4
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/paren_arg.cppo
@@ -0,0 +1,3 @@
+#define F(x, y) <x> <y>
+F((1, (2)), 34)
+F((1\,\(2\)), 34)
diff --git a/tools/ocaml/duniverse/cppo/test/paren_arg.ref b/tools/ocaml/duniverse/cppo/test/paren_arg.ref
new file mode 100644
index 0000000000..6555ca0569
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/paren_arg.ref
@@ -0,0 +1,4 @@
+# 2 "paren_arg.cppo"
+ <(1, (2))> < 34> 
+# 3 "paren_arg.cppo"
+ <(1 , (2 ))> < 34> 
diff --git a/tools/ocaml/duniverse/cppo/test/source.sh b/tools/ocaml/duniverse/cppo/test/source.sh
new file mode 100755
index 0000000000..660d161ab2
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/source.sh
@@ -0,0 +1,13 @@
+#! /bin/sh -e
+
+echo "# $2"
+echo "(*"
+cat "$1"
+echo "*)"
+echo "(*"
+echo "   Environment variables:"
+echo "     CPPO_FILE=$CPPO_FILE"
+echo "     CPPO_FIRST_LINE=$CPPO_FIRST_LINE"
+echo "     CPPO_LAST_LINE=$CPPO_LAST_LINE"
+echo "*)"
+echo "# $3"
diff --git a/tools/ocaml/duniverse/cppo/test/test.cppo b/tools/ocaml/duniverse/cppo/test/test.cppo
new file mode 100644
index 0000000000..89756f7ca0
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/test.cppo
@@ -0,0 +1,144 @@
+(* comment *)
+
+#define pi 3.14
+f(1)
+#define f(x) x+pi
+f(2)
+#undef pi
+f(3)
+
+#ifdef g
+"g" is defined
+#else
+"g" is not defined
+#endif
+
+#define a(x) b()
+#define b(x) a()
+a()
+
+debug("a")
+debug("b")
+
+#define z 123
+#define y z
+#define x y
+
+#if x lsl 1 = 2*123
+
+#if 1 = 2
+#error "test"
+#endif
+
+success
+#else
+failure
+#endif
+
+#define test_multiline \
+"abc\
+ def" \
+(* 123 \
+   456 *)
+test_multiline
+
+#define test_args(x, y) x y
+test_args("a","b")
+
+#define test_argc(x) x y
+test_argc(aa\,bb)
+
+#define test_esc(x) x
+test_esc(\,\)\()
+
+blah #define xyz
+#ifdef xyz
+#error "xyz should not have been defined"
+#endif
+
+#define sticky1(x) _
+#define sticky2(x) sticky1()_ (* the 2 underscores should be space-separated *)
+sticky2()
+
+#define empty1
+#define empty2 +empty1+ (* there should be some space between the pluses *)
+empty2
+
+(* (* nested comment with single single quote: ' *) "*)" *)
+
+#define arg
+obj
+  \# define arg
+
+'  (* lone single quote *)
+
+#define one 1
+one is not 1
+
+#undef x
+#define x #
+x is #
+
+#undef one
+#define one 1
+#if (one+one = 100 + \
+               64 lsr 3 / 4 - lnot lnot 100) && \
+    1 + 3 * 5 = 16 && \
+    22 mod 7 = 1 && \
+    lnot 0 = 0xffffffffffffffff && \
+    -1 asr 100 = -1 && \
+    -1 land (1 lsl 1 lsr 1) = 1 && \
+    -1 lor 1 = -1 && \
+    -2 lxor 1 = -1 && \
+    lnot -1 = 0 && \
+    true && not false && defined one && \
+    (true || true && false)
+good maths
+#else
+#error "math error"
+#endif
+
+
+#undef f
+#undef g
+#undef x
+#undef y
+
+#define trace(f) \
+let f x = \
+  printf "call %s\n%!" STRINGIFY(f); \
+  let y = f x in \
+  printf "return %s\n%!" STRINGIFY(f); \
+  y \
+;;
+
+trace(g)
+
+#define field(name,type) \
+  val mutable name : type option \
+  method CONCAT(get_, name) = name \
+  method CONCAT(set_, name) x = name <- Some x
+
+class foo () =
+object
+  field(field_1, int)
+  field(field_2, string)
+end
+
+#define DEBUG(x) \
+  (if !debug then \
+    eprintf "[debug] %s %i: " __FILE__ __LINE__; \
+    eprintf x; \
+    eprintf "\n")
+DEBUG("test1 %i %i" x y)
+DEBUG("test2 %i" x)
+
+#include "incl.cppo"
+# 123456
+
+#789 "test"
+#include "incl.cppo"
+
+#define debug(s) Printf.eprintf "%S %i: %s\n%!" __FILE__ __LINE__ s
+
+end
diff --git a/tools/ocaml/duniverse/cppo/test/tuple.cppo b/tools/ocaml/duniverse/cppo/test/tuple.cppo
new file mode 100644
index 0000000000..57423b89a9
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/tuple.cppo
@@ -0,0 +1,38 @@
+#if (2 + 2, 5) < (4, 5)
+  mountain
+  #error ""
+#else
+  pistachios
+#endif
+
+#if (3 * 3) = 10 - 1
+  trees
+#else
+  rocks
+  #error ""
+#endif
+
+#if (1) = (1)
+  waves
+#else
+  sharks
+  #error ""
+#endif
+
+
+#define x 11
+#if (x, 2) <> (x, 4/2)
+  honey
+  #error ""
+#else
+  bees
+#endif
+
+#define tuple (0, -5, 3)
+#define tuple2 tuple
+#if (0, -5, x) > tuple2
+  steamboat
+#else
+  koalas
+  #error ""
+#endif
diff --git a/tools/ocaml/duniverse/cppo/test/tuple.ref b/tools/ocaml/duniverse/cppo/test/tuple.ref
new file mode 100644
index 0000000000..58df976ef5
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/tuple.ref
@@ -0,0 +1,20 @@
+  
+# 5 "tuple.cppo"
+  pistachios
+
+  
+# 9 "tuple.cppo"
+  trees
+
+  
+# 16 "tuple.cppo"
+  waves
+
+
+  
+# 28 "tuple.cppo"
+  bees
+
+  
+# 34 "tuple.cppo"
+  steamboat
diff --git a/tools/ocaml/duniverse/cppo/test/unmatched.cppo b/tools/ocaml/duniverse/cppo/test/unmatched.cppo
new file mode 100644
index 0000000000..470cbd44be
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/unmatched.cppo
@@ -0,0 +1,14 @@
+#ifdef whatever
+  (
+#else
+  let a = 1 in
+  let b = 2 in
+  (a ||
+#endif
+
+  b)
+
+#define F(x, y) (x + y)
+F(1,(2+3))
+)
+(
diff --git a/tools/ocaml/duniverse/cppo/test/unmatched.ref b/tools/ocaml/duniverse/cppo/test/unmatched.ref
new file mode 100644
index 0000000000..ff2356a57d
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/unmatched.ref
@@ -0,0 +1,15 @@
+  
+# 4 "unmatched.cppo"
+  let a = 1 in
+  let b = 2 in
+  (a ||
+
+  
+# 9 "unmatched.cppo"
+  b)
+
+# 12 "unmatched.cppo"
+ (1 + (2+3)) 
+# 13 "unmatched.cppo"
+)
+(
diff --git a/tools/ocaml/duniverse/cppo/test/version.cppo b/tools/ocaml/duniverse/cppo/test/version.cppo
new file mode 100644
index 0000000000..ee4e429b6a
--- /dev/null
+++ b/tools/ocaml/duniverse/cppo/test/version.cppo
@@ -0,0 +1,30 @@
+#if X_VERSION < (123, 0, 0)
+  alligators
+  #error ""
+#else
+  Cape buffalos
+#endif
+
+#define v X_VERSION
+#if v = (X_MAJOR, X_MINOR, X_PATCH)
+  onion rings
+#else
+  gazpacho
+  #error ""
+#endif
+
+major: X_MAJOR
+minor: X_MINOR
+patch: X_PATCH
+
+#ifdef X_PRERELEASE
+  prerelease: X_PRERELEASE
+#else
+  #error ""
+#endif
+
+#ifdef X_BUILD
+  build: X_BUILD
+#else
+  #error ""
+#endif
diff --git a/tools/ocaml/duniverse/crowbar/.gitignore b/tools/ocaml/duniverse/crowbar/.gitignore
new file mode 100644
index 0000000000..ffee22e516
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/.gitignore
@@ -0,0 +1,5 @@
+_build
+crowbar.install
+*.byte
+*.native
+.merlin
diff --git a/tools/ocaml/duniverse/crowbar/CHANGES.md b/tools/ocaml/duniverse/crowbar/CHANGES.md
new file mode 100644
index 0000000000..df24660c5e
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/CHANGES.md
@@ -0,0 +1,9 @@
+v0.2 (04 May 2020)
+---------------------
+
+New generators, printers and port to dune.
+
+v0.1 (01 February 2018)
+---------------------
+
+Initial release
diff --git a/tools/ocaml/duniverse/crowbar/LICENSE.md b/tools/ocaml/duniverse/crowbar/LICENSE.md
new file mode 100644
index 0000000000..848fd3e059
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/LICENSE.md
@@ -0,0 +1,8 @@
+Copyright (c) 2017 Stephen Dolan
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
diff --git a/tools/ocaml/duniverse/crowbar/README.md b/tools/ocaml/duniverse/crowbar/README.md
new file mode 100644
index 0000000000..6938adaf6a
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/README.md
@@ -0,0 +1,82 @@
+# Crowbar
+
+**Crowbar** is a library for testing code, combining QuickCheck-style
+  property-based testing and the magical bug-finding powers of
+  [afl-fuzz](http://lcamtuf.coredump.cx/afl/).
+
+## TL;DR
+
+There are [some examples](./examples).
+
+Some brief hints:
+
+1. Use an opam switch with AFL instrumentation enabled (e.g. `opam sw 4.04.0+afl`).
+2. Run in AFL mode with `afl-fuzz -i in -o out -- ./_build/myprog.exe @@`.
+3. If you run your executable without arguments, crowbar will perform some simple (non-AFL) testing instead.
+4. Test binaries have a small amount of documentation, available with `--help`.
+
+## writing tests
+
+To test your software, come up with a property you'd like to test, then decide on the input you'd like for Crowbar to vary.  A Crowbar test is some invocation of `Crowbar.check_eq` or `Crowbar.check`:
+
+```ocaml
+let identity x =
+  Crowbar.check_eq x x
+```
+
+and instructions for running the test with generated items with `Crowbar.add_test`:
+
+```ocaml
+let () =
+  Crowbar.(add_test ~name:"identity function" [int] (fun i -> identity i))
+```
+
+There are [more examples available](./examples), with varying levels of complexity.
+
+## building tests
+
+Include `crowbar` in your list of dependencies via your favorite build system.  The resulting executable is a Crowbar test.  (Be sure to build a native-code executable, not bytecode.)
+
+To build tests that run under AFL, you'll need to build your tests with a compiler that has AFL instrumentation enabled.  (You can also enable it specifically for your build, although this is not recommended if your code has any dependencies, including the OCaml standard library).  OCaml compiler variants with AFL enabled by default are available in `opam` with the `+afl` tag.  All versions published starting with 4.05.0 are available, along with a backported 4.04.0.
+
+```shell
+$ opam switch 4.06.0+afl
+$ eval `opam config env`
+$ ./build_my_rad_test.sh # or your relevant build runes
+```
+
+## running tests
+
+Crowbar tests have two modes:
+
+* a simple quickcheck-like mode for testing propositions against totally random input
+* a mode using [afl-persistent](https://github.com/stedolan/ocaml-afl-persistent) to get good performance from `afl-fuzz` with OCaml's instrumentation enabled
+
+Crowbar tests can be directly invoked with `--help` for more documentation at runtime.
+
+### fully random test mode
+
+If you wish to use the quickcheck-like, fully random mode to run all tests distributed here, build the tests as above and then run the binary with no arguments.
+
+```
+$ ./my_rad_test.exe | head -5
+the first test: PASS
+
+the second test: PASS
+```
+
+### AFL mode requirements
+
+To run the tests in AFL mode, you'll need to install American Fuzzy Lop ([latest source tarball](http://lcamtuf.coredump.cx/afl/releases/afl-latest.tgz), although your distribution may also have a package available).
+
+Once `afl-fuzz` is available on your system, create an `input` directory with a non-empty file in it (or use `test/input`, conveniently provided in this repository), and an `output` directory for `afl-fuzz` to store its findings.  Then, invoke your test binary:
+
+```
+afl-fuzz -i test/input -o output ./my_rad_test.exe @@
+```
+
+This will launch AFL, which will generate new test cases and track the exploration of the state space.  When inputs are discovered which cause a property not to hold, they will be reported as crashes (along with actual crashes, although in the OCaml standard library these are rare).  See the [afl-fuzz documentation](https://lcamtuf.coredump.cx/afl/status_screen.txt) for more on AFL's excellent interface.
+
+# What bugs have you found?
+
+[An open issue](https://github.com/stedolan/crowbar/issues/2) has a list of issues discovered by testing with Crowbar.  If you use Crowbar to improve your software, please let us know!
diff --git a/tools/ocaml/duniverse/crowbar/crowbar.opam b/tools/ocaml/duniverse/crowbar/crowbar.opam
new file mode 100644
index 0000000000..bff15f564a
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/crowbar.opam
@@ -0,0 +1,33 @@
+opam-version: "2.0"
+maintainer: "stephen.dolan@cl.cam.ac.uk"
+authors: ["Stephen Dolan"]
+homepage: "https://github.com/stedolan/crowbar"
+bug-reports: "https://github.com/stedolan/crowbar/issues"
+dev-repo: "git+https://github.com/stedolan/crowbar.git"
+license: "MIT"
+build: [
+  [ "dune" "build" "-p" name "-j" jobs ]
+]
+run-test: [
+  [ "dune" "runtest" "-p" name "-j" jobs ]
+]
+depends: [
+  "dune" {build & >= "1.1"}
+  "ocaml" {>= "4.02.0"}
+  "ocplib-endian"
+  "cmdliner"
+  "afl-persistent" {>= "1.1"}
+  "calendar" {with-test}
+  "xmldiff" {with-test}
+  "fpath" {with-test}
+  "pprint" {with-test & < "20180528"}
+  "uucp" {with-test}
+  "uunf" {with-test}
+  "uutf" {with-test}
+]
+synopsis: "Write tests, let a fuzzer find failing cases"
+description: """
+Crowbar is a library for testing code, combining QuickCheck-style
+property-based testing and the magical bug-finding powers of
+[afl-fuzz](http://lcamtuf.coredump.cx/afl/).
+"""
diff --git a/tools/ocaml/duniverse/crowbar/dune b/tools/ocaml/duniverse/crowbar/dune
new file mode 100644
index 0000000000..8daec202b7
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/dune
@@ -0,0 +1 @@
+(env (dev (flags (:standard -warn-error -A))))
diff --git a/tools/ocaml/duniverse/crowbar/dune-project b/tools/ocaml/duniverse/crowbar/dune-project
new file mode 100644
index 0000000000..12d75f30c8
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/dune-project
@@ -0,0 +1,2 @@
+(lang dune 1.1)
+(name crowbar)
diff --git a/tools/ocaml/duniverse/crowbar/examples/.gitignore b/tools/ocaml/duniverse/crowbar/examples/.gitignore
new file mode 100644
index 0000000000..53752db253
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/.gitignore
@@ -0,0 +1 @@
+output
diff --git a/tools/ocaml/duniverse/crowbar/examples/calendar/dune b/tools/ocaml/duniverse/crowbar/examples/calendar/dune
new file mode 100644
index 0000000000..e5fdd8f1df
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/calendar/dune
@@ -0,0 +1,3 @@
+(test
+ (name test_calendar)
+ (libraries crowbar calendar))
diff --git a/tools/ocaml/duniverse/crowbar/examples/calendar/test_calendar.ml b/tools/ocaml/duniverse/crowbar/examples/calendar/test_calendar.ml
new file mode 100644
index 0000000000..788e96a7dc
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/calendar/test_calendar.ml
@@ -0,0 +1,29 @@
+open Crowbar
+
+module C = CalendarLib.Calendar.Precise
+
+let time =
+  map [int64] (fun a ->
+    try
+      C.from_mjd (Int64.to_float a /. 100_000_000_000_000.)
+    with
+      CalendarLib.Date.Out_of_bounds -> bad_test ())
+
+let pp_time ppf t =
+  pp ppf "%04d-%02d-%02d %02d:%02d:%02d"
+     (C.year t)
+     (C.month t |> C.Date.int_of_month)
+     (C.day_of_month t)
+     (C.hour t)
+     (C.minute t)
+     (C.second t)
+let time = with_printer pp_time time
+
+let period =
+  map [const 0;const 0;int8;int8;int8;int8] C.Period.make
+
+
+let () =
+  add_test ~name:"calendar" [time; time] @@ fun t1 t2 ->
+    guard (C.compare t1 t2 < 0);
+    check_eq ~pp:pp_time ~eq:C.equal (C.add t1 (C.precise_sub t2 t1)) t2
diff --git a/tools/ocaml/duniverse/crowbar/examples/fpath/dune b/tools/ocaml/duniverse/crowbar/examples/fpath/dune
new file mode 100644
index 0000000000..b6050be6f7
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/fpath/dune
@@ -0,0 +1,4 @@
+(test
+ (name test_fpath)
+ (modules test_fpath)
+ (libraries crowbar fpath))
diff --git a/tools/ocaml/duniverse/crowbar/examples/fpath/test_fpath.ml b/tools/ocaml/duniverse/crowbar/examples/fpath/test_fpath.ml
new file mode 100644
index 0000000000..242ffc399d
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/fpath/test_fpath.ml
@@ -0,0 +1,18 @@
+open Crowbar
+open Astring
+open Fpath
+let fpath =
+  map [bytes] (fun s ->
+    try
+      v s
+    with
+      Invalid_argument _ -> bad_test ())
+
+
+let () =
+  add_test ~name:"segs" [fpath] @@ fun p ->
+    let np = normalize p in
+    assert (is_dir_path p = is_dir_path np);
+    assert (is_file_path p = is_file_path np);
+    assert (filename p = filename np);
+    check_eq ~eq:equal p (v @@ (fst @@ split_volume p) ^ (String.concat ~sep:dir_sep (segs p)))
diff --git a/tools/ocaml/duniverse/crowbar/examples/input/testcase b/tools/ocaml/duniverse/crowbar/examples/input/testcase
new file mode 100644
index 0000000000..8bd6648ed1
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/input/testcase
@@ -0,0 +1 @@
+asdf
diff --git a/tools/ocaml/duniverse/crowbar/examples/map/dune b/tools/ocaml/duniverse/crowbar/examples/map/dune
new file mode 100644
index 0000000000..04fa46113b
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/map/dune
@@ -0,0 +1,3 @@
+(test
+ (name test_map)
+ (libraries crowbar))
diff --git a/tools/ocaml/duniverse/crowbar/examples/map/test_map.ml b/tools/ocaml/duniverse/crowbar/examples/map/test_map.ml
new file mode 100644
index 0000000000..b60603614a
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/map/test_map.ml
@@ -0,0 +1,47 @@
+open Crowbar
+
+module Map = Map.Make (struct
+  type t = int
+  let compare (i : int) (j : int) = compare i j
+end)
+
+type t = ((int * int) list * int Map.t)
+
+let check_map ((list, map) : t) =
+  let rec dedup k = function
+    | [] -> []
+    | (k', v') :: rest when k = k' -> dedup k rest
+    | (k', v') :: rest ->
+       (k', v') :: dedup k' rest in
+  let list = match List.stable_sort (fun a b -> compare (fst a) (fst b)) list with
+    | [] -> []
+    | (k, v) :: rest -> (k, v) :: dedup k rest in
+  List.for_all (fun (k, v) -> Map.find k map = v) list &&
+    list = Map.bindings map
+
+let map_gen : t gen = fix (fun map_gen -> choose [
+  const ([], Map.empty);
+  map [uint8; uint8; map_gen] (fun k v (l, m) ->
+    (k, v) :: l, Map.add k v m);
+  map [uint8; uint8] (fun k v ->
+    [k, v], Map.singleton k v);
+  map [uint8; map_gen] (fun k (l, m) ->
+    let rec rem_all k l =
+      let l' = List.remove_assoc k l in
+      if l = l' then l else rem_all k l' in
+    rem_all k l, Map.remove k m);
+  (* merge? *)
+  map [map_gen; map_gen] (fun (l, m) (l', m') ->
+    l @ l', Map.union (fun k a b -> Some a) m m');
+  map [uint8; map_gen] (fun k (list, map) ->
+    let (l, v, r) = Map.split k map in
+    let (l', vr') = List.partition (fun (kx,vx) -> kx < k) list in
+    let r' = List.filter (fun (kx, vx) -> kx <> k) vr' in
+    let v' = match List.assoc k vr' with n -> Some n | exception Not_found -> None in
+    assert (v = v');
+    (l' @ List.map (fun (k,v) -> k,v+42) r',
+     Map.union (fun k a b -> assert false) l (Map.map (fun v -> v + 42) r)))])
+
+let () =
+  add_test ~name:"map" [map_gen] @@ fun m ->
+    check (check_map m)
diff --git a/tools/ocaml/duniverse/crowbar/examples/pprint/dune b/tools/ocaml/duniverse/crowbar/examples/pprint/dune
new file mode 100644
index 0000000000..7c4d9127c0
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/pprint/dune
@@ -0,0 +1,3 @@
+(test
+ (name test_pprint)
+ (libraries crowbar pprint))
diff --git a/tools/ocaml/duniverse/crowbar/examples/pprint/test_pprint.ml b/tools/ocaml/duniverse/crowbar/examples/pprint/test_pprint.ml
new file mode 100644
index 0000000000..77789ef00f
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/pprint/test_pprint.ml
@@ -0,0 +1,39 @@
+open Crowbar
+open PPrint
+type t = (string * PPrint.document)
+let doc = fix (fun doc -> choose [
+  const ("", empty);
+  const ("a", char 'a');
+  const ("123", string "123");
+  const ("Hello", string "Hello");
+  const ("awordwhichisalittlebittoolong",
+         string "awordwhichisalittlebittoolong");
+  const ("", hardline);
+  map [range 10] (fun n -> ("", break n));
+  map [range 10] (fun n -> ("", break n));
+  map [doc; doc]
+    (fun (sa,da) (sb,db) -> (sa ^ sb, da ^^ db));
+  map [range 10; doc] (fun n (s,d) -> (s, nest n d));
+  map [doc] (fun (s, d) -> (s, group d));
+  map [doc] (fun (s, d) -> (s, align d))
+])
+
+let check_doc (s, d) =
+  let b = Buffer.create 100 in
+  let w = 40 in
+  ToBuffer.pretty 1.0 w b d;
+  let text = Bytes.to_string (Buffer.to_bytes b) in
+  let ws = Str.regexp "[ \t\n\r]*" in
+  (* Printf.printf "doc2{\n%s\n}%!" text; *)
+  let del_ws = Str.global_replace ws "" in
+  (*  Printf.printf "[%s] = [%s]\n%!" (del_ws s) (del_ws text);*)
+  Str.split (Str.regexp "\n") text |> List.iter (fun s ->
+    let mspace = Str.regexp "[^ ] " in
+    if String.length s > w then
+      match Str.search_forward mspace s w with
+      | _ -> assert false
+      | exception Not_found -> ());
+  check_eq (del_ws s) (del_ws text)
+
+let () =
+  add_test ~name:"pprint" [doc] check_doc
diff --git a/tools/ocaml/duniverse/crowbar/examples/serializer/dune b/tools/ocaml/duniverse/crowbar/examples/serializer/dune
new file mode 100644
index 0000000000..f1f0c6b64b
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/serializer/dune
@@ -0,0 +1,3 @@
+(test
+ (name test_serializer)
+ (libraries crowbar))
diff --git a/tools/ocaml/duniverse/crowbar/examples/serializer/serializer.ml b/tools/ocaml/duniverse/crowbar/examples/serializer/serializer.ml
new file mode 100644
index 0000000000..bb1ee2b4c2
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/serializer/serializer.ml
@@ -0,0 +1,34 @@
+type data =
+  | Datum of string
+  | Block of header * data list
+and header = string
+
+type _ ty =
+  | Int : int ty
+  | Bool : bool ty
+  | Prod : 'a ty * 'b ty -> ('a * 'b) ty
+  | List : 'a ty -> 'a list ty
+
+let rec pp_ty : type a . _ -> a ty -> unit = fun ppf ->
+  let printf fmt = Format.fprintf ppf fmt in
+  function
+  | Int -> printf "Int"
+  | Bool -> printf "Bool"
+  | Prod(ta, tb) -> printf "Prod(%a,%a)" pp_ty ta pp_ty tb
+  | List t -> printf "List(%a)" pp_ty t
+
+let rec serialize : type a . a ty -> a -> data = function
+  | Int -> fun n -> Datum (string_of_int n)
+  | Bool -> fun b -> Datum (string_of_bool b)
+  | Prod (ta, tb) -> fun (va, vb) ->
+    Block("pair", [serialize ta va; serialize tb vb])
+  | List t -> fun vs ->
+    Block("list", List.map (serialize t) vs)
+
+let rec deserialize : type a . a ty -> data -> a = function[@warning "-8"]
+  | Int -> fun (Datum s) -> int_of_string s
+  | Bool -> fun (Datum s) -> bool_of_string s
+  | Prod (ta, tb) -> fun (Block("pair", [sa; sb])) ->
+    (deserialize ta sa, deserialize tb sb)
+  | List t -> fun (Block("list", ss)) ->
+    List.map (deserialize t) ss
diff --git a/tools/ocaml/duniverse/crowbar/examples/serializer/test_serializer.ml b/tools/ocaml/duniverse/crowbar/examples/serializer/test_serializer.ml
new file mode 100644
index 0000000000..f650484a88
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/serializer/test_serializer.ml
@@ -0,0 +1,47 @@
+open Crowbar
+
+module S = Serializer
+
+type any_ty = Any : 'a S.ty -> any_ty
+
+let ty_gen =
+  with_printer (fun ppf (Any t)-> S.pp_ty ppf t) @@
+  fix (fun ty_gen -> choose [
+    const (Any S.Int);
+    const (Any S.Bool);
+    map [ty_gen; ty_gen] (fun (Any ta) (Any tb) ->
+      Any (S.Prod (ta, tb)));
+    map [ty_gen] (fun (Any t) -> Any (List t));
+  ])
+
+let prod_gen ga gb = map [ga; gb] (fun va vb -> (va, vb))
+
+let rec gen_of_ty : type a . a S.ty -> a gen = function
+  | S.Int -> int
+  | S.Bool -> bool
+  | S.Prod (ta, tb) -> prod_gen (gen_of_ty ta) (gen_of_ty tb)
+  | S.List t -> list (gen_of_ty t)
+
+type pair = Pair : 'a S.ty * 'a -> pair
+
+(* The generator for the final value, [gen_of_ty t], depends on the
+   generated type representation, [t]. This dynamic dependency cannot
+   be expressed with [map], it requires [dynamic_bind]. *)
+let pair_gen : pair gen =
+  dynamic_bind ty_gen @@ fun (Any t) ->
+  map [gen_of_ty t] (fun v -> Pair (t, v))
+
+let rec printer_of_ty : type a . a S.ty -> a printer = function
+  | S.Int -> pp_int
+  | S.Bool -> pp_bool
+  | S.Prod (ta, tb) -> (fun ppf (a, b) ->
+      pp ppf "(%a, %a)" (printer_of_ty ta) a (printer_of_ty tb) b)
+  | S.List t -> pp_list (printer_of_ty t)
+
+let check_pair (Pair (t, v)) =
+  let data = S.serialize t v in
+  match S.deserialize t data with
+  | exception _ -> fail "incorrect deserialization"
+  | v' -> check_eq ~pp:(printer_of_ty t) v v'
+
+let () = add_test ~name:"pairs" [pair_gen] check_pair
diff --git a/tools/ocaml/duniverse/crowbar/examples/uunf/dune b/tools/ocaml/duniverse/crowbar/examples/uunf/dune
new file mode 100644
index 0000000000..35850fa034
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/uunf/dune
@@ -0,0 +1,3 @@
+(test
+ (name test_uunf)
+ (libraries uunf uutf uucp crowbar))
diff --git a/tools/ocaml/duniverse/crowbar/examples/uunf/test_uunf.ml b/tools/ocaml/duniverse/crowbar/examples/uunf/test_uunf.ml
new file mode 100644
index 0000000000..a43cd8bc13
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/uunf/test_uunf.ml
@@ -0,0 +1,75 @@
+open Crowbar
+
+let uchar =
+  map [int32] (fun n ->
+    let n = (Int32.to_int n land 0xFFFFFFF) mod 0x10FFFF in
+    try Uchar.of_int n
+    with Invalid_argument _ -> bad_test ())
+
+let unicode = list1 uchar
+
+let norm form str =
+  let n = Uunf.create form in
+  let rec add acc v = match Uunf.add n v with
+    | `Uchar u -> add (u :: acc) `Await
+    | `Await | `End -> acc in
+  let rec go acc = function
+    | [] -> List.rev (add acc `End)
+    | (v :: vs) -> go (add acc (`Uchar v)) vs in
+  go [] str
+
+let unicode_to_string s =
+  let b = Buffer.create 10 in
+  List.iter (Uutf.Buffer.add_utf_8 b) s;
+  Buffer.contents b
+
+
+let pp_unicode ppf s =
+  Format.fprintf ppf "@[<v 2>";
+  Format.fprintf ppf "@[\"%s\"@]@ " (unicode_to_string s);
+  s |> List.iter (fun u ->
+    Format.fprintf ppf "@[U+%04x %s (%a)@]@ " (Uchar.to_int u) (Uucp.Name.name u) Uucp.Block.pp (Uucp.Block.block u));
+  Format.fprintf ppf "@]\n"
+
+
+let unicode = with_printer pp_unicode unicode
+
+let () =
+  add_test ~name:"uunf" [unicode] @@ fun s ->
+    let nfc = norm `NFC s in
+    let nfd = norm `NFD s in
+    let nfkc = norm `NFKC s in
+    let nfkd = norm `NFKD s in
+(*    [s; nfc; nfd; nfkc; nfkd] |> List.iter (fun s ->
+    Printf.printf "[%s]\n" (unicode_to_string s));
+    Printf.printf "\n%!";*)
+
+    let tests =
+      [
+        nfc, [
+          norm `NFC nfc;
+          norm `NFC nfd];
+        
+        nfd, [
+            norm `NFD nfc;
+            norm `NFD nfd];
+        
+        nfkc, [
+            norm `NFC nfkc;
+            norm `NFC nfkd;
+            norm `NFKC nfc;
+            norm `NFKC nfd;
+            norm `NFKC nfkc;
+            norm `NFKC nfkd];
+        
+        nfkd, [
+            norm `NFD nfkc;
+            norm `NFD nfkd;
+            norm `NFKD nfc;
+            norm `NFKD nfd;
+            norm `NFKD nfkc;
+            norm `NFKD nfkd]
+      ] in
+    tests |> List.iter (fun (s, eqs) ->
+       List.iter (fun s' -> check_eq ~pp:pp_unicode s s') eqs)
+
diff --git a/tools/ocaml/duniverse/crowbar/examples/xmldiff/dune b/tools/ocaml/duniverse/crowbar/examples/xmldiff/dune
new file mode 100644
index 0000000000..46d7ceef19
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/xmldiff/dune
@@ -0,0 +1,3 @@
+(test
+ (name test_xmldiff)
+ (libraries xmldiff crowbar))
diff --git a/tools/ocaml/duniverse/crowbar/examples/xmldiff/test_xmldiff.ml b/tools/ocaml/duniverse/crowbar/examples/xmldiff/test_xmldiff.ml
new file mode 100644
index 0000000000..8fdbd9aa88
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/examples/xmldiff/test_xmldiff.ml
@@ -0,0 +1,42 @@
+open Crowbar
+
+let ident = choose [const "a"; const "b"; const "c"]
+let elem_name = map [ident] (fun s -> ("", s))
+
+
+let attrs =
+  choose [
+    const Xmldiff.Nmap.empty;
+    map [elem_name; ident] Xmldiff.Nmap.singleton
+  ]
+
+let rec xml = lazy (
+  choose [
+    const (`D "a");
+    map [ident] (fun s -> `D s);
+    map [elem_name; attrs; list (unlazy xml)] (fun s attrs elems ->
+      let rec normalise = function
+        | ([] | [_]) as x -> x
+        | `E _ as el :: xs ->
+           el :: normalise xs
+        | `D s :: xs ->
+           match normalise xs with
+           | `D s' :: xs' ->
+              `D (s ^ s') :: xs'
+           | xs' -> `D s :: xs' in
+      `E (s, attrs, normalise elems))
+  ])
+
+let lazy xml = xml
+
+let xml = map [xml] (fun d -> `E (("", "a"), Xmldiff.Nmap.empty, [d]))
+
+let pp_xml ppf xml =
+  pp ppf "%s" (Xmldiff.string_of_xml xml)
+let xml = with_printer pp_xml xml
+
+
+let () =
+  add_test ~name:"xmldiff" [xml; xml] @@ fun xml1 xml2 ->
+    let (patch, xml3) = Xmldiff.diff_with_final_tree xml1 xml2 in
+    check_eq ~pp:pp_xml xml2 xml3
diff --git a/tools/ocaml/duniverse/crowbar/src/crowbar.ml b/tools/ocaml/duniverse/crowbar/src/crowbar.ml
new file mode 100644
index 0000000000..579a1a8715
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/src/crowbar.ml
@@ -0,0 +1,582 @@
+type src = Random of Random.State.t | Fd of Unix.file_descr
+type state =
+  {
+    chan : src;
+    buf : Bytes.t;
+    mutable offset : int;
+    mutable len : int
+  }
+
+type 'a printer = Format.formatter -> 'a -> unit
+
+type 'a gen =
+  | Choose of 'a gen list
+  | Map : ('f, 'a) gens * 'f -> 'a gen
+  | Bind : 'a gen * ('a -> 'b gen) -> 'b gen
+  | Option : 'a gen -> 'a option gen
+  | List : 'a gen -> 'a list gen
+  | List1 : 'a gen -> 'a list gen
+  | Unlazy of 'a gen Lazy.t
+  | Primitive of (state -> 'a)
+  | Print of 'a printer * 'a gen
+and ('k, 'res) gens =
+  | [] : ('res, 'res) gens
+  | (::) : 'a gen * ('k, 'res) gens -> ('a -> 'k, 'res) gens
+
+type nonrec +'a list = 'a list = [] | (::) of 'a * 'a list
+
+let unlazy f = Unlazy f
+
+let fix f =
+  let rec lazygen = lazy (f (unlazy lazygen)) in
+  unlazy lazygen
+
+let map gens f = Map (gens, f)
+
+let dynamic_bind m f = Bind(m, f)
+
+let const x = map [] x
+let choose gens = Choose gens
+let option gen = Option gen
+let list gen = List gen
+let list1 gen = List1 gen
+
+let pair gena genb =
+  map (gena :: genb :: []) (fun a b -> (a, b))
+
+let concat_gen_list sep l =
+  match l with
+  | h::t -> List.fold_left (fun acc e ->
+      map [acc; sep; e] (fun acc sep e -> acc ^ sep ^ e)
+    ) h t
+  | [] -> const ""
+
+let with_printer pp gen = Print (pp, gen)
+
+let result gena genb =
+  Choose [
+    Map([gena], fun va -> Ok va);
+    Map([genb], fun vb -> Error vb);
+  ]
+
+
+let pp = Format.fprintf
+let pp_int ppf n = pp ppf "%d" n
+let pp_int32 ppf n = pp ppf "%s" (Int32.to_string n)
+let pp_int64 ppf n = pp ppf "%s" (Int64.to_string n)
+let pp_float ppf f = pp ppf "%f" f
+let pp_bool ppf b = pp ppf "%b" b
+let pp_char ppf c = pp ppf "%c" c
+let pp_uchar ppf c = pp ppf "U+%04x" (Uchar.to_int c)
+let pp_string ppf s = pp ppf "\"%s\"" (String.escaped s)
+let pp_list pv ppf l =
+  pp ppf "@[<hv 1>[%a]@]"
+     (Format.pp_print_list ~pp_sep:(fun ppf () -> pp ppf ";@ ") pv) l
+let pp_option pv ppf = function
+  | None ->
+      Format.fprintf ppf "None"
+  | Some x ->
+      Format.fprintf ppf "(Some %a)" pv x
+
+exception BadTest of string
+exception FailedTest of unit printer
+let guard = function
+  | true -> ()
+  | false -> raise (BadTest "guard failed")
+let bad_test () = raise (BadTest "bad test")
+let nonetheless = function
+  | None -> bad_test ()
+  | Some a -> a
+
+let get_data chan buf off len =
+  match chan with
+  | Random rand ->
+     for i = off to off + len - 1 do
+       Bytes.set buf i (Char.chr (Random.State.bits rand land 0xff))
+     done;
+     len - off
+  | Fd ch ->
+     Unix.read ch buf off len
+
+let refill src =
+  assert (src.offset <= src.len);
+  let remaining = src.len - src.offset in
+  (* move remaining data to start of buffer *)
+  Bytes.blit src.buf src.offset src.buf 0 remaining;
+  src.len <- remaining;
+  src.offset <- 0;
+  let read = get_data src.chan src.buf remaining (Bytes.length src.buf - remaining) in
+  if read = 0 then
+    raise (BadTest "premature end of file")
+  else
+    src.len <- remaining + read
+
+let rec getbytes src n =
+  assert (src.offset <= src.len);
+  if n > Bytes.length src.buf then failwith "request too big";
+  if src.len - src.offset >= n then
+    let off = src.offset in
+    (src.offset <- src.offset + n; off)
+  else
+    (refill src; getbytes src n)
+
+let read_char src =
+  let off = getbytes src 1 in
+  Bytes.get src.buf off
+
+let read_byte src =
+  Char.code (read_char src)
+
+let read_bool src =
+  let n = read_byte src in
+  n land 1 = 1
+
+let bool = Print(pp_bool, Primitive read_bool)
+
+let uint8 = Print(pp_int, Primitive read_byte)
+let int8 = Print(pp_int, Map ([uint8], fun n -> n - 128))
+
+let read_uint16 src =
+  let off = getbytes src 2 in
+  EndianBytes.LittleEndian.get_uint16 src.buf off
+
+let read_int16 src =
+  let off = getbytes src 2 in
+  EndianBytes.LittleEndian.get_int16 src.buf off
+
+let uint16 = Print(pp_int, Primitive read_uint16)
+let int16 = Print(pp_int, Primitive read_int16)
+
+let read_int32 src =
+  let off = getbytes src 4 in
+  EndianBytes.LittleEndian.get_int32 src.buf off
+
+let read_int64 src =
+  let off = getbytes src 8 in
+  EndianBytes.LittleEndian.get_int64 src.buf off
+
+let int32 = Print (pp_int32, Primitive read_int32)
+let int64 = Print (pp_int64, Primitive read_int64)
+
+let int =
+  Print (pp_int,
+    if Sys.word_size <= 32 then
+      Map([int32], Int32.to_int)
+    else
+      Map([int64], Int64.to_int))
+
+let float = Print (pp_float, Primitive (fun src ->
+  let off = getbytes src 8 in
+  EndianBytes.LittleEndian.get_double src.buf off))
+
+let char = Print (pp_char, Primitive read_char)
+
+(* maybe print as a hexdump? *)
+let bytes = Print (pp_string, Primitive (fun src ->
+  (* null-terminated, with '\001' as an escape code *)
+  let buf = Bytes.make 64 '\255' in
+  let rec read_bytes p =
+    if p >= Bytes.length buf then p else
+    match read_char src with
+    | '\000' -> p
+    | '\001' ->
+       Bytes.set buf p (read_char src);
+       read_bytes (p + 1)
+    | c ->
+       Bytes.set buf p c;
+       read_bytes (p + 1) in
+  let count = read_bytes 0 in
+  Bytes.sub_string buf 0 count))
+
+let bytes_fixed n = Print (pp_string, Primitive (fun src ->
+  let off = getbytes src n in
+  Bytes.sub_string src.buf off n))
+
+let choose_int n state =
+  assert (n > 0);
+  if n = 1 then
+    0
+  else if (n <= 0x100) then
+    read_byte state mod n
+  else if (n < 0x1000000) then
+    Int32.(to_int (abs (rem (read_int32 state) (of_int n))))
+  else
+    Int64.(to_int (abs (rem (read_int64 state) (of_int n))))
+
+let range ?(min=0) n =
+  if n <= 0 then
+    raise (Invalid_argument "Crowbar.range: argument n must be positive");
+  if min < 0 then
+    raise (Invalid_argument "Crowbar.range: argument min must be positive or null");
+  Print (pp_int, Primitive (fun s -> min + choose_int n s))
+
+let uchar : Uchar.t gen =
+  map [range 0x110000] (fun x ->
+    guard (Uchar.is_valid x); Uchar.of_int x)
+let uchar = Print(pp_uchar, uchar)
+
+let rec sequence = function
+  g::gs -> map [g; sequence gs] (fun x xs -> x::xs)
+| [] -> const []
+
+let shuffle_arr arr =
+  let n = Array.length arr in
+  let gs = List.init n (fun i -> range ~min:i (n - i)) in
+  map [sequence gs] @@ fun js ->
+    js |> List.iteri (fun i j ->
+      let t = arr.(i) in arr.(i) <- arr.(j); arr.(j) <- t);
+    arr
+
+let shuffle l = map [shuffle_arr (Array.of_list l)] Array.to_list
+
+exception GenFailed of exn * Printexc.raw_backtrace * unit printer
+
+let minimize_depth : type a . a gen list -> a gen list = fun gens ->
+  let only p = List.filter p gens in
+  let without p = List.filter (fun v -> not (p v)) gens in
+  let branchless = function | _ -> false in
+  let branchy = function | Map _ | Bind _ | Choose _ -> true | _ -> false in
+  let complex = function | Map _ | Bind _ -> true | _ -> false in
+  match only branchless, without branchy, without complex with
+  | x::xs, _    , _     -> x :: xs
+  | [],    x::xs, _     -> x :: xs
+  | [],    [],    x::xs -> x :: xs
+  | [],    [],    []    -> gens
+
+let rec generate : type a . int -> state -> a gen -> a * unit printer =
+  fun size input gen -> match gen with
+  | Choose xs ->
+     (* FIXME: better distribution? *)
+     (* FIXME: choices of size > 255? *)
+     let gens = if size <= 1 then minimize_depth xs else xs in
+     let n = choose_int (List.length gens) input in
+     let v, pv = generate size input (List.nth gens n) in
+     v, fun ppf () -> pp ppf "#%d %a" n pv ()
+  | Map ([], k) ->
+     k, fun ppf () -> pp ppf "?"
+  | Map (gens, f) ->
+     let rec len : type k res . int -> (k, res) gens -> int =
+       fun acc xs -> match xs with
+       | [] -> acc
+       | _ :: xs -> len (1 + acc) xs in
+     let n = len 0 gens in
+     (* the size parameter is (apparently?) meant to ensure that generation
+        eventually terminates, by limiting the set of options from which the
+        generator might choose once we've gotten deep into a tree.  make sure we
+        always mark our passing, even when we've mapped one value into another,
+        so we don't blow the stack. *)
+     let size = (size - 1) / n in
+     let v, pvs = gen_apply size input gens f in
+     begin match v with
+       | Ok v -> v, pvs
+       | Error (e, bt) -> raise (GenFailed (e, bt, pvs))
+     end
+  | Bind (m, f) ->
+     let index, pv_index = generate (size - 1) input m in
+     let a, pv = generate (size - 1) input (f index) in
+     a, (fun ppf () -> pp ppf "(%a) => %a" pv_index () pv ())
+  | Option gen ->
+     if size < 1 then
+       None, fun ppf () -> pp ppf "None"
+     else if read_bool input then
+       let v, pv = generate size input gen in
+       Some v, fun ppf () -> pp ppf "Some (%a)" pv ()
+     else
+       None, fun ppf () -> pp ppf "None"
+  | List gen ->
+     let elems = generate_list size input gen in
+     List.map fst elems,
+       fun ppf () -> pp_list (fun ppf (_, pv) -> pv ppf ()) ppf elems
+  | List1 gen ->
+     let elems = generate_list1 size input gen in
+     List.map fst elems,
+       fun ppf () -> pp_list (fun ppf (_, pv) -> pv ppf ()) ppf elems
+  | Primitive gen ->
+     gen input, fun ppf () -> pp ppf "?"
+  | Unlazy gen ->
+     generate size input (Lazy.force gen)
+  | Print (ppv, gen) ->
+     let v, _ = generate size input gen in
+     v, fun ppf () -> ppv ppf v
+
+and generate_list : type a . int -> state -> a gen -> (a * unit printer) list =
+  fun size input gen ->
+  if size <= 1 then []
+  else if read_bool input then
+    generate_list1 size input gen
+  else
+    []
+
+and generate_list1 : type a . int -> state -> a gen -> (a * unit printer) list =
+  fun size input gen ->
+  let ans = generate (size/2) input gen in
+  ans :: generate_list (size/2) input gen
+
+and gen_apply :
+    type k res . int -> state ->
+       (k, res) gens -> k ->
+       (res, exn * Printexc.raw_backtrace) result * unit printer =
+  fun size state gens f ->
+  let rec go :
+    type k res . int -> state ->
+       (k, res) gens -> k ->
+       (res, exn * Printexc.raw_backtrace) result * unit printer list =
+      fun size input gens -> match gens with
+      | [] -> fun x -> Ok x, []
+      | g :: gs -> fun f ->
+        let v, pv = generate size input g in
+        let res, pvs =
+          match f v with
+          | exception (BadTest _ as e) -> raise e
+          | exception e ->
+             Error (e, Printexc.get_raw_backtrace ()) , []
+          | fv -> go size input gs fv in
+        res, pv :: pvs in
+  let v, pvs = go size state gens f in
+  let pvs = fun ppf () ->
+    match pvs with
+    | [pv] ->
+       pv ppf ()
+    | pvs ->
+       pp_list (fun ppf pv -> pv ppf ()) ppf pvs in
+  v, pvs
+
+
+let fail s = raise (FailedTest (fun ppf () -> pp ppf "%s" s))
+
+let failf format =
+  Format.kasprintf fail format
+
+let check = function
+  | true -> ()
+  | false -> raise (FailedTest (fun ppf () -> pp ppf "check false"))
+
+let check_eq ?pp:pv ?cmp ?eq a b =
+  let pass = match eq, cmp with
+    | Some eq, _ -> eq a b
+    | None, Some cmp -> cmp a b = 0
+    | None, None ->
+       Stdlib.compare a b = 0 in
+  if pass then
+    ()
+  else
+    raise (FailedTest (fun ppf () ->
+      match pv with
+      | None -> pp ppf "different"
+      | Some pv -> pp ppf "@[<hv>%a@ !=@ %a@]" pv a pv b))
+
+let () = Printexc.record_backtrace true
+
+type test = Test : string * ('f, unit) gens * 'f -> test
+
+type test_status =
+  | TestPass of unit printer
+  | BadInput of string
+  | GenFail of exn * Printexc.raw_backtrace * unit printer
+  | TestExn of exn * Printexc.raw_backtrace * unit printer
+  | TestFail of unit printer * unit printer
+
+let run_once (gens : (_, unit) gens) f state =
+  match gen_apply 100 state gens f with
+  | Ok (), pvs -> TestPass pvs
+  | Error (FailedTest p, _), pvs -> TestFail (p, pvs)
+  | Error (e, bt), pvs -> TestExn (e, bt, pvs)
+  | exception (BadTest s) -> BadInput s
+  | exception (GenFailed (e, bt, pvs)) -> GenFail (e, bt, pvs)
+
+let classify_status = function
+  | TestPass _ -> `Pass
+  | BadInput _ -> `Bad
+  | GenFail _ -> `Fail (* slightly dubious... *)
+  | TestExn _ | TestFail _ -> `Fail
+
+let print_status ppf status =
+  let print_ex ppf (e, bt) =
+    pp ppf "%s" (Printexc.to_string e);
+    bt
+    |> Printexc.raw_backtrace_to_string
+    |> Str.split (Str.regexp "\n")
+    |> List.iter (pp ppf "@,%s") in
+  match status with
+  | TestPass pvs ->
+     pp ppf "When given the input:@.@[<v 4>@,%a@,@]@.the test passed."
+        pvs ()
+  | BadInput s ->
+     pp ppf "The testcase was invalid:@.%s" s
+  | GenFail (e, bt, pvs) ->
+     pp ppf "When given the input:@.@[<4>%a@]@.the testcase generator threw an exception:@.@[<v 4>@,%a@,@]"
+        pvs ()
+        print_ex (e, bt)
+  | TestExn (e, bt, pvs) ->
+     pp ppf "When given the input:@.@[<v 4>@,%a@,@]@.the test threw an exception:@.@[<v 4>@,%a@,@]"
+        pvs ()
+        print_ex (e, bt)
+  | TestFail (err, pvs) ->
+     pp ppf "When given the input:@.@[<v 4>@,%a@,@]@.the test failed:@.@[<v 4>@,%a@,@]"
+        pvs ()
+        err ()
+
+let src_of_seed seed =
+  (* try to make this independent of word size *)
+  let seed = Int64.( [|
+       to_int (logand (of_int 0xffff) seed);
+       to_int (logand (of_int 0xffff) (shift_right seed 16));
+       to_int (logand (of_int 0xffff) (shift_right seed 32));
+       to_int (logand (of_int 0xffff) (shift_right seed 48)) |]) in
+  Random (Random.State.make seed)
+
+let run_test ~mode ~silent ?(verbose=false) (Test (name, gens, f)) =
+  let show_status_line ?(clear=false) stat =
+    Printf.printf "%s: %s\n" name stat;
+    if clear then print_newline ();
+    flush stdout in
+  let ppf = Format.std_formatter in
+  if not silent && Unix.isatty Unix.stdout then
+    show_status_line ~clear:false "....";
+  let status = match mode with
+  | `Once state ->
+     run_once gens f state
+  | `Repeat iters ->
+     let worst_status = ref (TestPass (fun _ () -> ())) in
+     let npass = ref 0 in
+     let nbad = ref 0 in
+     while !npass < iters && classify_status !worst_status = `Pass do
+       let seed = Random.int64 Int64.max_int in
+       let state = { chan = src_of_seed seed;
+                     buf = Bytes.make 256 '0';
+                     offset = 0; len = 0 } in
+       let status = run_once gens f state in
+       begin match classify_status status with
+       | `Pass -> incr npass
+       | `Bad -> incr nbad
+       | `Fail ->
+          (* if not silent then pp ppf "failed with seed %016LX" seed; *)
+          worst_status := status
+       end;
+     done;
+     let status = !worst_status in
+     status in
+  if silent && verbose && classify_status status = `Fail then begin
+         show_status_line
+           ~clear:true "FAIL";
+         pp ppf "%a@." print_status status;
+  end;
+  if not silent then begin
+      match classify_status status with
+      | `Pass ->
+         show_status_line
+           ~clear:true "PASS";
+         if verbose then pp ppf "%a@." print_status status
+      | `Fail ->
+         show_status_line
+           ~clear:true "FAIL";
+         pp ppf "%a@." print_status status;
+      | `Bad ->
+         show_status_line
+           ~clear:true "BAD";
+         pp ppf "%a@." print_status status;
+    end;
+  status
+
+exception TestFailure
+let run_all_tests file verbosity infinity tests =
+  match file, infinity with
+  | None, false ->
+    (* limited-run QuickCheck mode *)
+    let failures = ref 0 in
+    let () = tests |> List.iter (fun t ->
+        match (run_test ~mode:(`Repeat 5000) ~silent:false t |> classify_status) with
+        | `Fail -> failures := !failures + 1
+        | _ -> ()
+      )
+    in
+    !failures
+  | None, true ->
+    (* infinite QuickCheck mode *)
+     let rec go ntests alltests tests = match tests with
+       | [] ->
+          go ntests alltests alltests
+       | t :: rest ->
+          if ntests mod 10000 = 0 then Printf.eprintf "\r%d%!" ntests;
+          match classify_status (run_test ~mode:(`Once { chan = src_of_seed (Random.int64 (Int64.max_int));
+                     buf = Bytes.make 256 '0';
+                     offset = 0; len = 0 })  ~silent:true ~verbose:true t) with
+          | `Fail -> Printf.printf "%d tests passed before first failure\n%!" ntests
+          | _ -> go (ntests + 1) alltests rest in
+     let () = go 0 tests tests in
+     1
+  | Some file, _ ->
+    (* AFL mode *)
+    let verbose = List.length verbosity > 0 in
+    let () = AflPersistent.run (fun () ->
+         let fd = Unix.openfile file [Unix.O_RDONLY] 0o000 in
+         let state = { chan = Fd fd; buf = Bytes.make 256 '0';
+                       offset = 0; len = 0 } in
+         let status =
+           try run_test ~mode:(`Once state) ~silent:false ~verbose @@
+             List.nth tests (choose_int (List.length tests) state)
+           with
+           BadTest s -> BadInput s
+         in
+         Unix.close fd;
+         match classify_status status with
+         | `Pass | `Bad -> ()
+         | `Fail ->
+            Printexc.record_backtrace false;
+            raise TestFailure)
+    in
+    0 (* failures come via the exception mechanism above *)
+
+let last_generated_name = ref 0
+let generate_name () =
+  incr last_generated_name;
+  "test" ^ string_of_int !last_generated_name
+
+let registered_tests = ref []
+
+let add_test ?name gens f =
+  let name = match name with
+    | None -> generate_name ()
+    | Some name -> name in
+  registered_tests := Test (name, gens, f) :: !registered_tests
+
+(* cmdliner stuff *)
+
+let randomness_file =
+  let doc = "A file containing some bytes, consulted in constructing test cases.  \
+    When `afl-fuzz` is calling the test binary, use `@@` to indicate that \
+    `afl-fuzz` should put its test case here \
+    (e.g. `afl-fuzz -i input -o output ./my_crowbar_test @@`).  Re-run a test by \
+    supplying the test file here \
+    (e.g. `./my_crowbar_test output/crashes/id:000000`).  If no file is \
+    specified, the test will use OCaml's Random module as a source of \
+    randomness for a predefined number of rounds." in
+  Cmdliner.Arg.(value & pos 0 (some file) None & info [] ~doc ~docv:"FILE")
+
+let verbosity =
+  let doc = "Print information on each test as it's conducted." in
+  Cmdliner.Arg.(value & flag_all & info ["v"; "verbose"] ~doc ~docv:"VERBOSE")
+
+let infinity =
+  let doc = "In non-AFL (quickcheck) mode, continue running until a test failure is \
+             discovered.  No attempt is made to track which tests have already been run, \
+             so some tests may be repeated, and if there are no failures reachable, the \
+             test will never terminate without outside intervention." in
+  Cmdliner.Arg.(value & flag & info ["i"] ~doc ~docv:"INFINITE")
+
+let crowbar_info = Cmdliner.Term.info @@ Filename.basename Sys.argv.(0)
+
+let () =
+  at_exit (fun () ->
+      let t = !registered_tests in
+      registered_tests := [];
+      match t with
+      | [] -> ()
+      | t ->
+        let cmd = Cmdliner.Term.(const run_all_tests $ randomness_file $ verbosity $
+                                 infinity $ const (List.rev t)) in
+        match Cmdliner.Term.eval ~catch:false (cmd, crowbar_info) with
+        | `Ok 0 -> exit 0
+        | `Ok _ -> exit 1
+        | n -> Cmdliner.Term.exit n
+    )
diff --git a/tools/ocaml/duniverse/crowbar/src/crowbar.mli b/tools/ocaml/duniverse/crowbar/src/crowbar.mli
new file mode 100644
index 0000000000..9758dd626d
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/src/crowbar.mli
@@ -0,0 +1,251 @@
+(** {1:top Types } *)
+
+type 'a gen
+(** ['a gen] knows how to generate ['a] for use in Crowbar tests. *)
+
+type ('k, 'res) gens =
+  | [] : ('res, 'res) gens
+  | (::) : 'a gen * ('k, 'res) gens -> ('a -> 'k, 'res) gens
+(** Multiple generators are passed to functions using a list-like syntax.
+    For example, [map [int; int] (fun a b -> a + b)]. *)
+
+type 'a printer = Format.formatter -> 'a -> unit
+(** pretty-printers for items generated by Crowbar; useful for the user in
+    translating test failures into bugfixes. *)
+
+(**/**)
+(* re-export stdlib's list
+   We only want to override [] syntax in the argument to Map *)
+type nonrec +'a list = 'a list = [] | (::) of 'a * 'a list
+(**/**)
+
+(** {1:generators Generators } *)
+
+(** {2:simple_generators Simple Generators } *)
+
+val int : int gen
+(** [int] generates an integer ranging from min_int to max_int, inclusive.
+    If you need integers from a smaller domain, consider using {!range}. *)
+
+val uint8 : int gen
+(** [uint8] generates an unsigned byte, ranging from 0 to 255 inclusive. *)
+
+val int8 : int gen
+(** [int8] generates a signed byte, ranging from -128 to 127 inclusive. *)
+
+val uint16 : int gen
+(** [uint16] generates an unsigned 16-bit integer,
+    ranging from 0 to 65535 inclusive. *)
+
+val int16 : int gen
+(** [int16] generates a signed 16-bit integer,
+    ranging from -32768 to 32767 inclusive. *)
+
+val int32 : Int32.t gen
+(** [int32] generates a 32-bit signed integer. *)
+
+val int64 : Int64.t gen
+(** [int64] generates a 64-bit signed integer. *)
+
+val float : float gen
+(** [float] generates a double-precision floating-point number. *)
+
+val char : char gen
+(** [char] generates a char. *)
+
+val uchar : Uchar.t gen
+(** [uchar] generates a Unicode scalar value. *)
+
+val bytes : string gen
+(** [bytes] generates a string of arbitrary length (including zero-length strings).  *)
+
+val bytes_fixed : int -> string gen
+(** [bytes_fixed length] generates a string of the specified length.  *)
+
+val bool : bool gen
+(** [bool] generates a yes or no answer. *)
+
+val range : ?min:int -> int -> int gen
+(** [range ?min n] is a generator for integers between [min] (inclusive)
+    and [min + n] (exclusive). Default [min] value is 0.
+    [range ?min n] will raise [Invalid_argument] for [n <= 0].
+*)
+
+(** {2:generator_functions Functions on Generators } *)
+
+val map : ('f, 'a) gens -> 'f -> 'a gen
+(** [map gens map_fn] provides a means for creating generators using other
+    generators' output.  For example, one might generate a Char.t from a
+    {!uint8}:
+    {[
+      open Crowbar
+      let char_gen : Char.t gen = map [uint8] Char.chr
+    ]}
+*)
+
+val unlazy : 'a gen Lazy.t -> 'a gen
+(** [unlazy gen] forces the generator [gen].  It is useful when defining
+    generators for recursive data types:
+
+    {[
+      open Crowbar
+      type a = A of int | Self of a
+      let rec a_gen = lazy (
+        choose [
+          map [int] (fun i -> A i);
+          map [(unlazy a_gen)] (fun s -> Self s);
+        ])
+      let lazy a_gen = a_gen
+    ]}
+*)
+
+val fix : ('a gen -> 'a gen) -> 'a gen
+(** [fix fn] is the fixed point of [fn].  It is useful when defining generators
+    for recursive data types:
+
+    {[
+      open Crowbar
+      type a = A of int | Self of a
+      let rec a_gen = fix (fun a_gen ->
+          choose [
+          map [int] (fun i -> A i);
+          map [a_gen] (fun s -> Self s);
+        ])
+    ]}
+    *)
+
+val const : 'a -> 'a gen
+(** [const a] always generates [a]. *)
+
+val choose : 'a gen list -> 'a gen
+(** [choose gens] chooses a generator arbitrarily from [gens]. *)
+
+val option : 'a gen -> 'a option gen
+(** [option gen] generates either [None] or [Some x], where [x] is the item
+    generated by [gen]. *)
+
+val pair : 'a gen -> 'b gen -> ('a * 'b) gen
+(** [pair gena genb] generates [(a, b)],
+    where [a] is generated by [gena] and [b] by [genb]. *)
+
+val result : 'a gen -> 'b gen -> ('a, 'b) result gen
+(** [result gena genb] generates either [Ok va] or [Error vb],
+    where [va], [vb] are generated by [gena], [genb] respectively. *)
+
+val list : 'a gen -> 'a list gen
+(** [list gen] makes a generator for lists using [gen].  Lists may be empty; for
+    non-empty lists, use {!list1}. *)
+
+val list1 : 'a gen -> 'a list gen
+(** [list1 gen] makes non-empty list generators. For potentially empty lists,
+    use {!list}.*)
+
+val shuffle : 'a list -> 'a list gen
+(** [shuffle l] generates random permutations of [l]. *)
+
+val concat_gen_list : string gen -> string gen list -> string gen
+(** [concat_gen_list sep l] concatenates the string generators in [l],
+    inserting the separator [sep] between each element. *)
+
+val with_printer : 'a printer -> 'a gen -> 'a gen
+(** [with_printer printer gen] generates the same values as [gen].  If [gen]
+    is used to create a failing test case and the test was reached by
+    calling [check_eq] without [pp] set, [printer] will be used to print the
+    failing test case. *)
+
+val dynamic_bind : 'a gen -> ('a -> 'b gen) -> 'b gen
+(** [dynamic_bind gen f] is a monadic bind: it expresses the
+   generation of a value whose generator itself depends on
+   a previously generated value. This is in contrast with [map gen f],
+   where no further generation happens in [f] after [gen] has
+   generated an element.
+
+   A typical example where this sort of dependency is required is
+   a serialization library exporting combinators letting you build
+   values of the form ['a serializer]. You may want to test this
+   library by first generating a pair of a serializer and generator
+   ['a serializer * 'a gen] for arbitrary ['a], and then generating
+   values of type ['a] depending on the (generated) generator to test
+   the serializer. There is such an example in the
+   [examples/serializer/] directory of the Crowbar implementation.
+
+   Because the structure of a generator built with [dynamic_bind] is
+   opaque/dynamic (it depends on generated values), the Crowbar
+   library cannot analyze it statically
+   (without generating anything): the generator is opaque to the
+   library, hidden in a function. In particular, many optimizations
+   and fuzzing techniques based on generator analysis are
+   impossible. As a client of the library, you should avoid
+   [dynamic_bind] whenever it is not strictly required to express
+   a given generator, so that you can take advantage of these features
+   (present or future ones). Use the least powerful/complex
+   combinators that suffice for your needs.
+*)
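Since this dependence is the heart of the combinator, a small sketch may help; it uses only functions declared in this interface, but the property and the generator name are illustrative, not part of upstream Crowbar:

```ocaml
(* Sketch (illustrative names): generate a length first, then a string
   of exactly that length.  [map] alone cannot express this, because
   the second generator depends on the first generated value. *)
open Crowbar

let sized_string : string gen =
  dynamic_bind (range 64) (fun n -> bytes_fixed n)

let () =
  add_test ~name:"sized_string stays short" [ sized_string ]
    (fun s -> check (String.length s < 64))
```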
+
+(** {1:printing Printing } *)
+
+(* Format.fprintf, renamed *)
+val pp : Format.formatter -> ('a, Format.formatter, unit) format -> 'a
+val pp_int : int printer
+val pp_int32 : Int32.t printer
+val pp_int64 : Int64.t printer
+val pp_float : float printer
+val pp_bool : bool printer
+val pp_string : string printer
+val pp_list : 'a printer -> 'a list printer
+val pp_option : 'a printer -> 'a option printer
+
+(** {1:testing Testing} *)
+
+val add_test :
+  ?name:string -> ('f, unit) gens -> 'f -> unit
+(** [add_test name generators test_fn] adds [test_fn] to the list of eligible
+    tests to be run when the program is invoked.  At runtime, random data will
+    be sent to [generators] to create the input necessary to run [test_fn].  Any
+    failures will be printed annotated with [name]. *)
+
+(** {2:aborting Aborting Tests} *)
+
+val guard : bool -> unit
+(** [guard b] aborts a test if [b] is false.  The test will not be recorded
+    or reported as a failure. *)
+
+val bad_test : unit -> 'a
+(** [bad_test ()] aborts a test.  The test will not be recorded or reported
+    as a failure. *)
+
+val nonetheless : 'a option -> 'a
+(** [nonetheless o] aborts a test if [o] is None.  The test will not be recorded
+    or reported as a failure. *)
+
+(** {2:failing Failing} *)
+
+val fail : string -> 'a
+(** [fail message] generates a test failure and prints [message]. *)
+
+val failf : ('a, Format.formatter, unit, _) format4 -> 'a
+(** [failf format ...] generates a test failure and prints the message
+    specified by the format string [format] and the following arguments.
+    It is set up so that [%a] calls for an ['a printer] and an ['a] value. *)
+
+(** {2:asserting Asserting Properties} *)
+
+val check : bool -> unit
+(** [check b] generates a test failure if [b] is false.  No useful information
+    will be printed in this case. *)
+
+val check_eq : ?pp:('a printer) -> ?cmp:('a -> 'a -> int) -> ?eq:('a -> 'a -> bool) ->
+  'a -> 'a -> unit
+(** [check_eq pp cmp eq x y] evaluates whether [x] and [y] are equal, and if they
+    are not, raises a failure and prints an error message.
+    Equality is evaluated as follows:
+
+    {ol
+    {- use a provided [eq]}
+    {- if no [eq] is provided, use a provided [cmp]}
+    {- if neither [eq] nor [cmp] is provided, use Stdlib.compare}}
+
+    If [pp] is provided, use this to print [x] and [y] if they are not equal.
+    If [pp] is not provided, a best-effort printer will be generated from the
+    printers for primitive generators and any printers registered with
+    [with_printer] and used. *)
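Pulling the testing API together, a minimal test module might look like the following sketch (the round-trip property is our example, not something shipped with the library):

```ocaml
(* Sketch: a round-trip property checked with [check_eq].  [pp] is
   supplied so that a counterexample, if found, is printed readably. *)
open Crowbar

let () =
  add_test ~name:"rev_rev" [ list int ] (fun l ->
      check_eq ~pp:(pp_list pp_int) (List.rev (List.rev l)) l)
```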
diff --git a/tools/ocaml/duniverse/crowbar/src/dune b/tools/ocaml/duniverse/crowbar/src/dune
new file mode 100644
index 0000000000..ed5173287b
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/src/dune
@@ -0,0 +1,3 @@
+(library
+ (public_name crowbar)
+ (libraries cmdliner ocplib-endian afl-persistent str))
diff --git a/tools/ocaml/duniverse/crowbar/src/todo b/tools/ocaml/duniverse/crowbar/src/todo
new file mode 100644
index 0000000000..087141682b
--- /dev/null
+++ b/tools/ocaml/duniverse/crowbar/src/todo
@@ -0,0 +1,16 @@
+join/bind (v2?)
+
+command line interface:
+  - afl-fuzz mode
+  - quickcheck mode
+  - random fuzzing mode (for my own testing, really)
+  - file / file list mode
+  - reproduction mode (seed / file)
+  - select which tests to run
+
+output:
+  - seeds for failed tests
+  - maybe use notty to figure out pretty-printing width
+
+api:
+  - manual testsuite interface?
diff --git a/tools/ocaml/duniverse/csexp/CHANGES.md b/tools/ocaml/duniverse/csexp/CHANGES.md
new file mode 100644
index 0000000000..841826dcc7
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/CHANGES.md
@@ -0,0 +1,45 @@
+# 1.3.2
+
+- The project now builds with dune 1.11.0 and onward (#12, @voodoos)
+
+# 1.3.1
+
+- Fix compatibility with 4.02.3
+
+# 1.3.0
+
+- Add a "feed" API for parsing. This new API lets the user feed
+  characters one by one to the parser. It gives more control to the
+  user and the handling of IO errors is simpler and more
+  explicit. Finally, it allocates less (#9, @jeremiedimino)
+
+- Fixes `input_opt`; it could never return `None` (#9, fixes #7,
+  @jeremiedimino)
+
+- Fixes `parse_many`; it was returning s-expressions in the wrong
+  order (#10, @rgrinberg)
+
+# 1.2.3
+
+- Fix `parse_string_many`; it used to fail on all inputs (#6, @rgrinberg)
+
+# 1.2.2
+
+- Fix compatibility with 4.02.3
+
+# 1.2.1
+
+- Remove inclusion of the `Result` module, which was accidentally
+  added in a previous PR. (#3, @rgrinberg)
+
+# 1.2.0
+
+- Expose low level, monad agnostic parser. (#2, @mefyl)
+
+# 1.1.0
+
+- Add compatibility up-to OCaml 4.02.3 (with disabled tests). (#1, @voodoos)
+
+# 1.0.0
+
+- Initial release
diff --git a/tools/ocaml/duniverse/csexp/LICENSE.md b/tools/ocaml/duniverse/csexp/LICENSE.md
new file mode 100644
index 0000000000..06829595d1
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/LICENSE.md
@@ -0,0 +1,21 @@
+The MIT License
+
+Copyright (c) 2016 Jane Street Group, LLC <opensource@janestreet.com>
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/tools/ocaml/duniverse/csexp/Makefile b/tools/ocaml/duniverse/csexp/Makefile
new file mode 100644
index 0000000000..a0d0ec8317
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/Makefile
@@ -0,0 +1,23 @@
+INSTALL_ARGS := $(if $(PREFIX),--prefix $(PREFIX),)
+
+default:
+	dune runtest
+
+test:
+	dune runtest
+
+install:
+	dune install $(INSTALL_ARGS)
+
+uninstall:
+	dune uninstall $(INSTALL_ARGS)
+
+reinstall: uninstall install
+
+clean:
+	dune clean
+
+all-supported-ocaml-versions:
+	dune build @install @runtest --workspace dune-workspace.dev
+
+.PHONY: default install uninstall reinstall clean test
diff --git a/tools/ocaml/duniverse/csexp/README.md b/tools/ocaml/duniverse/csexp/README.md
new file mode 100644
index 0000000000..b6c13631b7
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/README.md
@@ -0,0 +1,33 @@
+Csexp - Canonical S-expressions
+===============================
+
+This project provides minimal support for parsing and printing
+[S-expressions in canonical form][wikipedia], which is a very simple
+and canonical binary encoding of S-expressions.
+
+[wikipedia]: https://en.wikipedia.org/wiki/Canonical_S-expressions
+
+Example
+-------
+
+```ocaml
+# #require "csexp";;
+# module Sexp = struct type t = Atom of string | List of t list end;;
+module Sexp : sig type t = Atom of string | List of t list end
+# module Csexp = Csexp.Make(Sexp);;
+module Csexp :
+  sig
+    val parse_string : string -> (Sexp.t, int * string) result
+    val parse_string_many : string -> (Sexp.t list, int * string) result
+    val input : in_channel -> (Sexp.t, string) result
+    val input_opt : in_channel -> (Sexp.t option, string) result
+    val input_many : in_channel -> (Sexp.t list, string) result
+    val serialised_length : Sexp.t -> int
+    val to_string : Sexp.t -> string
+    val to_buffer : Buffer.t -> Sexp.t -> unit
+    val to_channel : out_channel -> Sexp.t -> unit
+  end
+# Csexp.to_string (List [ Atom "Hello"; Atom "world!" ]);;
+- : string = "(5:Hello6:world!)"
+```
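Parsing is the inverse operation; assuming the signature printed above, a round-trip in the toplevel would look roughly like this (session reconstructed from the interface, not copied from upstream):

```ocaml
# Csexp.parse_string "(5:Hello6:world!)";;
- : (Sexp.t, int * string) result =
Ok (Sexp.List [Sexp.Atom "Hello"; Sexp.Atom "world!"])
```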
+
diff --git a/tools/ocaml/duniverse/csexp/bench/csexp_bench.ml b/tools/ocaml/duniverse/csexp/bench/csexp_bench.ml
new file mode 100644
index 0000000000..8f5c5b6ace
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/bench/csexp_bench.ml
@@ -0,0 +1,22 @@
+open StdLabels
+
+module Sexp = struct
+  type t =
+    | Atom of string
+    | List of t list
+end
+
+module Csexp = Csexp.Make (Sexp)
+
+let atom = Sexp.Atom (String.make 128 'x')
+
+let rec gen_sexp depth =
+  if depth = 0 then
+    atom
+  else
+    let x = gen_sexp (depth - 1) in
+    List [ x; x ]
+
+let s = Sys.opaque_identity (Csexp.to_string (gen_sexp 16))
+
+let%bench "of_string" = ignore (Csexp.parse_string s : _ result)
diff --git a/tools/ocaml/duniverse/csexp/bench/dune b/tools/ocaml/duniverse/csexp/bench/dune
new file mode 100644
index 0000000000..42b95aed93
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/bench/dune
@@ -0,0 +1,11 @@
+(library
+ (name csexp_bench)
+ (libraries csexp)
+ (library_flags -linkall)
+ (preprocess (pps ppx_bench))
+ (modules csexp_bench))
+
+(executable
+ (name main)
+ (modules main)
+ (libraries core_bench.inline_benchmarks csexp_bench))
diff --git a/tools/ocaml/duniverse/csexp/bench/main.ml b/tools/ocaml/duniverse/csexp/bench/main.ml
new file mode 100644
index 0000000000..88e7fe25cd
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/bench/main.ml
@@ -0,0 +1 @@
+let () = Inline_benchmarks_public.Runner.main ~libname:"csexp_bench"
diff --git a/tools/ocaml/duniverse/csexp/bench/runner.sh b/tools/ocaml/duniverse/csexp/bench/runner.sh
new file mode 100755
index 0000000000..0089cf7f7a
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/bench/runner.sh
@@ -0,0 +1,4 @@
+#!/usr/bin/env sh
+export BENCHMARKS_RUNNER=TRUE
+export BENCH_LIB=csexp_bench
+exec dune exec -- ./main.exe -fork -run-without-cross-library-inlining "$@"
diff --git a/tools/ocaml/duniverse/csexp/csexp.opam b/tools/ocaml/duniverse/csexp/csexp.opam
new file mode 100644
index 0000000000..44e653919f
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/csexp.opam
@@ -0,0 +1,51 @@
+version: "1.3.2"
+# This file is generated by dune, edit dune-project instead
+opam-version: "2.0"
+synopsis: "Parsing and printing of S-expressions in Canonical form"
+description: """
+
+This library provides minimal support for Canonical S-expressions
+[1]. Canonical S-expressions are a binary encoding of S-expressions
+that is super simple and well suited for communication between
+programs.
+
+This library only provides a few helpers for simple applications. If
+you need more advanced support, such as parsing from more fancy input
+sources, you should consider copying the code of this library given
+how simple parsing S-expressions in canonical form is.
+
+To avoid a dependency on a particular S-expression library, the only
+module of this library is parameterised by the type of S-expressions.
+
+[1] https://en.wikipedia.org/wiki/Canonical_S-expressions
+"""
+maintainer: ["Jeremie Dimino <jeremie@dimino.org>"]
+authors: [
+  "Quentin Hocquet <mefyl@gruntech.org>"
+  "Jane Street Group, LLC <opensource@janestreet.com>"
+  "Jeremie Dimino <jeremie@dimino.org>"
+]
+license: "MIT"
+homepage: "https://github.com/ocaml-dune/csexp"
+doc: "https://ocaml-dune.github.io/csexp/"
+bug-reports: "https://github.com/ocaml-dune/csexp/issues"
+depends: [
+  "dune" {>= "1.11"}
+  "ocaml" {>= "4.02.3"}
+  "result" {>= "1.5"}
+]
+dev-repo: "git+https://github.com/ocaml-dune/csexp.git"
+build: [
+  ["dune" "subst"] {pinned}
+  [
+    "dune"
+    "build"
+    "-p"
+    name
+    "-j"
+    jobs
+    "@install"
+#   "@runtest" {with-test & ocaml:version >= "4.04"}
+    "@doc" {with-doc}
+  ]
+]
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/csexp/csexp.opam.template b/tools/ocaml/duniverse/csexp/csexp.opam.template
new file mode 100644
index 0000000000..d7691b5d65
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/csexp.opam.template
@@ -0,0 +1,14 @@
+build: [
+  ["dune" "subst"] {pinned}
+  [
+    "dune"
+    "build"
+    "-p"
+    name
+    "-j"
+    jobs
+    "@install"
+#   "@runtest" {with-test & ocaml:version >= "4.04"}
+    "@doc" {with-doc}
+  ]
+]
diff --git a/tools/ocaml/duniverse/csexp/dune-project b/tools/ocaml/duniverse/csexp/dune-project
new file mode 100644
index 0000000000..fb7bad4454
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/dune-project
@@ -0,0 +1,42 @@
+(lang dune 1.11)
+(name csexp)
+(version 1.3.2)
+
+(allow_approximate_merlin)
+
+(license MIT)
+(maintainers "Jeremie Dimino <jeremie@dimino.org>")
+(authors
+  "Quentin Hocquet <mefyl@gruntech.org>"
+  "Jane Street Group, LLC <opensource@janestreet.com>"
+  "Jeremie Dimino <jeremie@dimino.org>")
+(source (github ocaml-dune/csexp))
+(documentation "https://ocaml-dune.github.io/csexp/")
+
+(generate_opam_files true)
+
+(package
+ (name csexp)
+ (depends
+   (ocaml (>= 4.02.3))
+;  (ppx_expect :with-test)
+; Disabled because of a dependency cycle 
+; (see https://github.com/ocaml-opam/opam-depext/issues/121)
+   (result (>= 1.5)))
+ (synopsis "Parsing and printing of S-expressions in Canonical form")
+ (description "
+This library provides minimal support for Canonical S-expressions
+[1]. Canonical S-expressions are a binary encoding of S-expressions
+that is super simple and well suited for communication between
+programs.
+
+This library only provides a few helpers for simple applications. If
+you need more advanced support, such as parsing from more fancy input
+sources, you should consider copying the code of this library given
+how simple parsing S-expressions in canonical form is.
+
+To avoid a dependency on a particular S-expression library, the only
+module of this library is parameterised by the type of S-expressions.
+
+[1] https://en.wikipedia.org/wiki/Canonical_S-expressions
+"))
diff --git a/tools/ocaml/duniverse/csexp/dune-workspace.dev b/tools/ocaml/duniverse/csexp/dune-workspace.dev
new file mode 100644
index 0000000000..b8b97992ba
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/dune-workspace.dev
@@ -0,0 +1,6 @@
+(lang dune 1.0)
+
+;; This file is used by `make all-supported-ocaml-versions`
+(context (opam (switch 4.02.3)))
+(context (opam (switch 4.04.2)))
+(context (opam (switch 4.08.1)))
diff --git a/tools/ocaml/duniverse/csexp/src/csexp.ml b/tools/ocaml/duniverse/csexp/src/csexp.ml
new file mode 100644
index 0000000000..178658a86f
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/src/csexp.ml
@@ -0,0 +1,333 @@
+module type Sexp = sig
+  type t =
+    | Atom of string
+    | List of t list
+end
+
+module type Monad = sig
+  type 'a t
+
+  val return : 'a -> 'a t
+
+  val bind : 'a t -> ('a -> 'b t) -> 'b t
+end
+
+module Make (Sexp : Sexp) = struct
+  open Sexp
+
+  (* This is to keep compatibility with 4.02 without writing [Result.]
+     everywhere *)
+  type ('a, 'b) result = ('a, 'b) Result.result =
+    | Ok of 'a
+    | Error of 'b
+
+  module Parser = struct
+    exception Parse_error of string
+
+    let parse_error msg = raise (Parse_error msg)
+
+    let parse_errorf f = Format.ksprintf parse_error f
+
+    let premature_end_of_input = "premature end of input"
+
+    module Lexer = struct
+      type state =
+        | Init
+        | Parsing_length
+
+      type t =
+        { mutable state : state
+        ; mutable n : int
+        }
+
+      let create () = { state = Init; n = 0 }
+
+      let int_of_digit c = Char.code c - Char.code '0'
+
+      type _ token =
+        | Await : [> `other ] token
+        | Lparen : [> `other ] token
+        | Rparen : [> `other ] token
+        | Atom : int -> [> `atom ] token
+
+      let feed t c =
+        match (t.state, c) with
+        | Init, '(' -> Lparen
+        | Init, ')' -> Rparen
+        | Init, '0' .. '9' ->
+          t.state <- Parsing_length;
+          t.n <- int_of_digit c;
+          Await
+        | Init, _ ->
+          parse_errorf "invalid character %C, expected '(', ')' or '0'..'9'" c
+        | Parsing_length, '0' .. '9' ->
+          let len = (t.n * 10) + int_of_digit c in
+          if len > Sys.max_string_length then
+            parse_error "atom too big to represent"
+          else (
+            t.n <- len;
+            Await
+          )
+        | Parsing_length, ':' ->
+          t.state <- Init;
+          Atom t.n
+        | Parsing_length, _ ->
+          parse_errorf
+            "invalid character %C while parsing atom length, expected '0'..'9' \
+             or ':'"
+            c
+
+      let feed_eoi t =
+        match t.state with
+        | Init -> ()
+        | Parsing_length -> parse_error premature_end_of_input
+    end
+
+    module L = Lexer
+
+    module Stack = struct
+      type t =
+        | Empty
+        | Open of t
+        | Sexp of Sexp.t * t
+
+      let open_paren stack = Open stack
+
+      let close_paren =
+        let rec loop acc = function
+          | Empty ->
+            parse_error "right parenthesis without matching left parenthesis"
+          | Sexp (sexp, t) -> loop (sexp :: acc) t
+          | Open t -> Sexp (List acc, t)
+        in
+        fun t -> loop [] t
+
+      let to_list =
+        let rec loop acc = function
+          | Empty -> acc
+          | Sexp (sexp, t) -> loop (sexp :: acc) t
+          | Open _ -> parse_error premature_end_of_input
+        in
+        fun t -> loop [] t
+
+      let add_atom s stack = Sexp (Atom s, stack)
+
+      let add_token (x : [ `other ] Lexer.token) stack =
+        match x with
+        | L.Await -> stack
+        | L.Lparen -> open_paren stack
+        | L.Rparen -> close_paren stack
+    end
+  end
+
+  open Parser
+
+  let feed_eoi_single lexer stack =
+    match
+      Lexer.feed_eoi lexer;
+      Stack.to_list stack
+    with
+    | exception Parse_error msg -> Error msg
+    | [ x ] -> Ok x
+    | [] -> Error premature_end_of_input
+    | _ :: _ :: _ -> assert false
+
+  let feed_eoi_many lexer stack =
+    match
+      Lexer.feed_eoi lexer;
+      Stack.to_list stack
+    with
+    | exception Parse_error msg -> Error msg
+    | l -> Ok l
+
+  let one_token s pos len lexer stack k =
+    match Lexer.feed lexer (String.unsafe_get s pos) with
+    | exception Parse_error msg -> Error (pos, msg)
+    | L.Atom atom_len -> (
+      match String.sub s (pos + 1) atom_len with
+      | exception _ -> Error (len, premature_end_of_input)
+      | atom ->
+        let pos = pos + 1 + atom_len in
+        k s pos len lexer (Stack.add_atom atom stack) )
+    | (L.Await | L.Lparen | L.Rparen) as x -> (
+      match Stack.add_token x stack with
+      | exception Parse_error msg -> Error (pos, msg)
+      | stack -> k s (pos + 1) len lexer stack )
+    [@@inlined always]
+
+  let parse_string =
+    let rec loop s pos len lexer stack =
+      if pos = len then
+        match feed_eoi_single lexer stack with
+        | Error msg -> Error (pos, msg)
+        | Ok _ as ok -> ok
+      else
+        one_token s pos len lexer stack cont
+    and cont s pos len lexer stack =
+      match stack with
+      | Stack.Sexp (sexp, Empty) ->
+        if pos = len then
+          Ok sexp
+        else
+          Error (pos, "data after canonical S-expression")
+      | stack -> loop s pos len lexer stack
+    in
+    fun s -> loop s 0 (String.length s) (Lexer.create ()) Empty
+
+  let parse_string_many =
+    let rec loop s pos len lexer stack =
+      if pos = len then
+        match feed_eoi_many lexer stack with
+        | Error msg -> Error (pos, msg)
+        | Ok _ as ok -> ok
+      else
+        one_token s pos len lexer stack loop
+    in
+    fun s -> loop s 0 (String.length s) (Lexer.create ()) Empty
+
+  let one_token ic c lexer stack =
+    match Lexer.feed lexer c with
+    | L.Atom n -> (
+      match really_input_string ic n with
+      | exception End_of_file -> raise (Parse_error premature_end_of_input)
+      | s -> Stack.add_atom s stack )
+    | (L.Await | L.Lparen | L.Rparen) as x -> Stack.add_token x stack
+
+  let input_opt =
+    let rec loop ic lexer stack =
+      let c = input_char ic in
+      match one_token ic c lexer stack with
+      | Sexp (sexp, Empty) -> Ok (Some sexp)
+      | stack -> loop ic lexer stack
+    in
+    fun ic ->
+      let lexer = Lexer.create () in
+      match input_char ic with
+      | exception End_of_file -> Ok None
+      | c -> (
+        try
+          match Lexer.feed lexer c with
+          | L.Atom _ -> assert false
+          | (L.Await | L.Lparen | L.Rparen) as x ->
+            loop ic lexer (Stack.add_token x Empty)
+        with
+        | Parse_error msg -> Error msg
+        | End_of_file -> Error premature_end_of_input )
+
+  let input ic =
+    match input_opt ic with
+    | Ok None -> Error premature_end_of_input
+    | Ok (Some x) -> Ok x
+    | Error msg -> Error msg
+
+  let input_many =
+    let rec loop ic lexer stack =
+      match input_char ic with
+      | exception End_of_file ->
+        Lexer.feed_eoi lexer;
+        Ok (Stack.to_list stack)
+      | c -> loop ic lexer (one_token ic c lexer stack)
+    in
+    fun ic ->
+      try loop ic (Lexer.create ()) Empty with Parse_error msg -> Error msg
+
+  let serialised_length =
+    let rec loop acc t =
+      match t with
+      | Atom s ->
+        let len = String.length s in
+        let x = ref len in
+        let len_len = ref 1 in
+        while !x > 9 do
+          x := !x / 10;
+          incr len_len
+        done;
+        acc + !len_len + 1 + len
+      | List l -> List.fold_left loop acc l
+    in
+    fun t -> loop 0 t
+
+  let to_buffer buf sexp =
+    let rec loop = function
+      | Atom str ->
+        Buffer.add_string buf (string_of_int (String.length str));
+        Buffer.add_string buf ":";
+        Buffer.add_string buf str
+      | List e ->
+        Buffer.add_char buf '(';
+        List.iter loop e;
+        Buffer.add_char buf ')'
+    in
+    loop sexp
+
+  let to_string sexp =
+    let buf = Buffer.create (serialised_length sexp) in
+    to_buffer buf sexp;
+    Buffer.contents buf
+
+  let to_channel oc sexp =
+    let rec loop = function
+      | Atom str ->
+        output_string oc (string_of_int (String.length str));
+        output_char oc ':';
+        output_string oc str
+      | List l ->
+        output_char oc '(';
+        List.iter loop l;
+        output_char oc ')'
+    in
+    loop sexp
+
+  module type Input = sig
+    type t
+
+    module Monad : Monad
+
+    val read_string : t -> int -> (string, string) Result.t Monad.t
+
+    val read_char : t -> (char, string) Result.t Monad.t
+  end
+
+  module Make_parser (Input : Input) = struct
+    open Input.Monad
+
+    let ( >>= ) = bind
+
+    let ( >>=* ) m f =
+      m >>= function
+      | Error _ as err -> return err
+      | Ok x -> f x
+
+    let one_token input c lexer stack =
+      match Lexer.feed lexer c with
+      | exception Parse_error msg -> return (Error msg)
+      | L.Atom n ->
+        Input.read_string input n >>=* fun s ->
+        return (Ok (Stack.add_atom s stack))
+      | (L.Await | L.Lparen | L.Rparen) as x ->
+        return
+          ( match Stack.add_token x stack with
+          | exception Parse_error msg -> Error msg
+          | stack -> Ok stack )
+
+    let parse =
+      let rec loop input lexer stack =
+        Input.read_char input >>= function
+        | Error _ -> return (feed_eoi_single lexer stack)
+        | Ok c -> (
+          one_token input c lexer stack >>=* function
+          | Sexp (sexp, Empty) -> return (Ok sexp)
+          | stack -> loop input lexer stack )
+      in
+      fun input -> loop input (Lexer.create ()) Empty
+
+    let parse_many =
+      let rec loop input lexer stack =
+        Input.read_char input >>= function
+        | Error _ -> return (feed_eoi_many lexer stack)
+        | Ok c ->
+          one_token input c lexer stack >>=* fun stack -> loop input lexer stack
+      in
+      fun input -> loop input (Lexer.create ()) Empty
+  end
+end
diff --git a/tools/ocaml/duniverse/csexp/src/csexp.mli b/tools/ocaml/duniverse/csexp/src/csexp.mli
new file mode 100644
index 0000000000..f1e8683f63
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/src/csexp.mli
@@ -0,0 +1,369 @@
+(** Canonical S-expressions *)
+
+(** This module provides minimal support for reading and writing S-expressions
+    in canonical form.
+
+    https://en.wikipedia.org/wiki/Canonical_S-expressions
+
+    Note that because the canonical representation of S-expressions is so
+    simple, this module doesn't go out of its way to provide a fully generic
+    parser and printer and instead just provides a few simple functions. If you
+    are using fancy input sources, simply copy the parser and adapt it. The
+    format is so simple that it's pretty difficult to get it wrong by accident.
+
+    To avoid a dependency on a particular S-expression library, the only module
+    of this library is parameterised by the type of S-expressions.
+
+    {[
+      let rec print = function
+        | Atom str -> Printf.printf "%d:%s" (String.length str) str
+        | List l -> print_char '('; List.iter print l; print_char ')'
+    ]} *)
+
+module type Sexp = sig
+  type t =
+    | Atom of string
+    | List of t list
+end
+
+module Make (Sexp : Sexp) : sig
+  (** {2 Parsing} *)
+
+  (** [parse_string s] parses a single S-expression encoded in canonical form in
+      [s]. It is an error for [s] to contain an S-expression followed by more
+      data. In case of error, the offset of the error as well as an error
+      message is returned. *)
+  val parse_string : string -> (Sexp.t, int * string) Result.t
+
+  (** [parse_string_many s] parses a sequence of S-expressions encoded in
+      canonical form in [s]. *)
+  val parse_string_many : string -> (Sexp.t list, int * string) Result.t
+
+  (** Read exactly one canonical S-expression from the given channel. Note that
+      this function never raises [End_of_file]. Instead, it returns [Error]. *)
+  val input : in_channel -> (Sexp.t, string) Result.t
+
+  (** Same as [input] but returns [Ok None] if the end of file has already been
+      reached. If some more characters are available but the end of file is
+      reached before reading a complete S-expression, this function returns
+      [Error]. *)
+  val input_opt : in_channel -> (Sexp.t option, string) Result.t
+
+  (** Read many S-expressions until the end of input is reached. *)
+  val input_many : in_channel -> (Sexp.t list, string) Result.t
+
+  (** {2 Serialising} *)
+
+  (** The length of the serialised representation of an S-expression *)
+  val serialised_length : Sexp.t -> int
+
+  (** [to_string sexp] converts S-expression [sexp] to a string in canonical
+      form. *)
+  val to_string : Sexp.t -> string
+
+  (** [to_buffer buf sexp] outputs the S-expression [sexp] converted to its
+      canonical form to buffer [buf]. *)
+  val to_buffer : Buffer.t -> Sexp.t -> unit
+
+  (** [to_channel oc sexp] outputs the S-expression [sexp] converted to its
+      canonical form to channel [oc]. *)
+  val to_channel : out_channel -> Sexp.t -> unit
+
+  (** {3 Low level parser}
+
+      For efficient parsing from sources other than strings or input channels,
+      for instance in Lwt or Async programs. *)
+
+  module Parser : sig
+    (** The [Parser] module offers an API that balances sharing the common
+        logic of parsing canonical S-expressions with letting users write
+        parsers that are as efficient as possible, both in terms of speed and
+        allocations. A carefully written parser using this API will:
+
+        - be fast
+        - perform minimal allocations
+        - perform zero [caml_modify] calls (a slow function of the OCaml
+          runtime that is emitted when mutating a constructed value)
+
+        {2 Lexers}
+
+        To parse using this API, you must first create a lexer via
+        {!Lexer.create}. The lexer is responsible for scanning the input and
+        forming tokens. The user must feed characters read from the input one by
+        one to the lexer until it yields a token. For instance:
+
+        {[
+          # let lexer = Lexer.create ();;
+          val lexer : Lexer.t = <abstract>
+          # Lexer.feed lexer '(';;
+          - : [ `atom | `other ] Lexer.token = Lparen
+          # Lexer.feed lexer ')';;
+          - : [ `atom | `other ] Lexer.token = Rparen
+        ]}
+
+        When the lexer doesn't have enough input to return a token, it simply
+        returns
+        the special token {!Lexer.Await}:
+
+        {[
+          # Lexer.feed lexer '1';;
+          - : [ `atom | `other ] Lexer.token = Await
+        ]}
+
+        Note that since atoms of canonical S-expressions do not need quoting,
+        they are always represented as a contiguous sequence of characters that
+        don't need further processing. To achieve maximum efficiency, the lexer
+        only returns the length of the atom and it is the responsibility of the
+        caller to extract the atom from the input source:
+
+        {[
+          # Lexer.feed lexer '2';;
+          - : [ `atom | `other ] Lexer.token = Await
+          # Lexer.feed lexer ':';;
+          - : [ `atom | `other ] Lexer.token = Atom 2
+        ]}
+
+        When getting [Atom n], the caller should then proceed to read the next
+        [n] characters of the input as a string. For instance, if the input is
+        an [in_channel] the caller should proceed with
+        [really_input_string ic n].
+
+        Finally, when the end of input is reached the user should call
+        {!Lexer.feed_eoi} to make sure the lexer is not awaiting more input. If
+        that is the case, {!Lexer.feed_eoi} will raise:
+
+        {[
+          # Lexer.feed lexer '1';;
+          - : [ `atom | `other ] Lexer.token = Await
+          # Lexer.feed_eoi lexer;;
+          Exception: Parse_error "premature end of input".
+        ]}
+
+        {2 Parsing stacks}
+
+        The lexer doesn't keep track of the structure of the S-expressions. In
+        order to construct a whole structured S-expression, the caller must
+        maintain a parsing stack via the {!Stack} module. A {!Stack.t} value
+        simply represents a parsed prefix in reverse order.
+
+        For instance, the prefix "1:x((1:y1:z)" will be represented as:
+
+        {[ Sexp (List [ Atom "y"; Atom "z" ], Open (Sexp (Atom "x", Empty))) ]}
+
+        The {!Stack} module offers various primitives to open or close
+        parentheses or insert an atom. And for convenience it provides a
+        function {!Stack.add_token} that takes the output of {!Lexer.feed}
+        directly:
+
+        {[
+          # Stack.add_token Lparen Empty;;
+          - : Stack.t = Open Empty
+          # Stack.add_token Rparen (Open Empty);;
+          - : Stack.t = Sexp (List [], Empty)
+        ]}
+
+        Note that {!Stack.add_token} doesn't accept [Atom _]. This is enforced
+        at the type level by a GADT. The reason for this is that in order to
+        insert an atom, the user must have fetched the contents of the atom
+        themselves. In order to insert an atom into a stack, you can use the
+        function {!Stack.add_atom}:
+
+        {[
+          # Stack.add_atom "foo" (Open Empty);;
+          - : Stack.t = Sexp (Atom "foo", Open Empty)
+        ]}
+
+        When parsing is finished, one may call the function {!Stack.to_list} in
+        order to extract all the toplevel S-expressions from the stack:
+
+        {[
+          # Stack.to_list (Sexp (Atom "x", Sexp (List [Atom "y"], Empty)));;
+          - : Sexp.t list = [List [Atom "y"]; Atom "x"]
+        ]}
+
+        If instead you want to stop parsing as soon as a single full
+        S-expression
+        has been discovered, you can match on the structure of the stack. If the
+        stack is of the form [Sexp (_, Empty)], then you know that exactly one
+        S-expression has been parsed and you can stop there.
+
+        {2 Parsing errors}
+
+        In order to reduce allocations to a minimum, parsing errors are reported
+        via the exception {!Parse_error}. It is the responsibility of the caller
+        to catch this exception and return it as an [Error _] value. Functions
+        that may raise [Parse_error] are documented as such.
+
+        When extracting an atom, if the input doesn't have enough characters
+        left, the user may raise [Parse_error premature_end_of_input]. This will
+        produce an error message similar to what the various high-level
+        functions of this library produce.
+
+        {2 Building a parsing function}
+
+        Parsing functions should always follow this pattern:
+
+        + create a lexer and start with an empty parsing stack
+        + iterate over the input, feeding the lexer characters one by one. When
+          the lexer returns [Atom n], fetch the next [n] characters from the
+          input to form an atom
+        + update the stack via [Stack.add_atom] or [Stack.add_token]
+        + if parsing the whole input, call [Lexer.feed_eoi] when the end of
+          input is reached, otherwise stop as soon as the stack is of the form
+          [Sexp (_, Empty)]
+
+        For instance, to parse a string as a list of S-expressions:
+
+        {[
+          module Sexp = struct
+            type t =
+              | Atom of string
+              | List of t list
+          end
+
+          module Csexp = Csexp.Make (Sexp)
+
+          let extract_atom s pos len =
+            match String.sub s pos len with
+            | exception _ ->
+              (* Turn out-of-bounds errors into [Parse_error] *)
+              raise (Parse_error premature_end_of_input)
+            | s -> s
+
+          let parse_string =
+            let open Csexp.Parser in
+            let rec loop s pos len lexer stack =
+              if pos = len then (
+                Lexer.feed_eoi lexer;
+                Stack.to_list stack
+              ) else
+                match Lexer.feed lexer (String.unsafe_get s pos) with
+                | Atom atom_len ->
+                  let atom = extract_atom s (pos + 1) atom_len in
+                  loop s (pos + 1 + atom_len) len lexer (Stack.add_atom atom stack)
+                | (Await | Lparen | Rparen) as x ->
+                  loop s (pos + 1) len lexer (Stack.add_token x stack)
+            in
+            fun s ->
+              match loop s 0 (String.length s) (Lexer.create ()) Empty with
+              | v -> Ok v
+              | exception Parse_error msg -> Error msg
+        ]} *)
+
+    exception Parse_error of string
+
+    (** Error message signaling the end of input was reached prematurely. You
+        can use this when extracting an atom from the input and the input
+        doesn't have enough characters. *)
+    val premature_end_of_input : string
+
+    module Lexer : sig
+      (** Lexical analyser *)
+
+      type t
+
+      val create : unit -> t
+
+      type _ token =
+        | Await : [> `other ] token
+        | Lparen : [> `other ] token
+        | Rparen : [> `other ] token
+        | Atom : int -> [> `atom ] token
+
+      (** Feed a character to the parser.
+
+          @raise Parse_error *)
+      val feed : t -> char -> [ `other | `atom ] token
+
+      (** Feed the end of input to the parser.
+
+          You should call this function when the end of input has been reached
+          in order to ensure that the lexer is not awaiting more input, which
+          would be an error.
+
+          @raise Parse_error if the lexer is awaiting more input *)
+      val feed_eoi : t -> unit
+    end
+
+    module Stack : sig
+      (** Parsing stack *)
+
+      type t =
+        | Empty
+        | Open of t
+        | Sexp of Sexp.t * t
+
+      (** Extract the list of full S-expressions contained in a stack.
+
+          For instance:
+
+          {[
+            # to_list (Sexp (Atom "y", Sexp (Atom "x", Empty)));;
+            - : Sexp.t list = [Atom "x"; Atom "y"]
+          ]}
+          @raise Parse_error if the stack contains open parentheses that have
+          not been closed. *)
+      val to_list : t -> Sexp.t list
+
+      (** Add a left parenthesis. *)
+      val open_paren : t -> t
+
+      (** Add a right parenthesis. Raise [Parse_error] if the stack contains no
+          opened parentheses.
+
+          For instance:
+
+          {[
+            # close_paren (Sexp (Atom "y", Sexp (Atom "x", Open Empty)));;
+            - : Stack.t = Sexp (List [Atom "x"; Atom "y"], Empty)
+          ]}
+          @raise Parse_error if the stack contains no open parenthesis. *)
+      val close_paren : t -> t
+
+      (** Insert an atom in the parsing stack:
+
+          {[
+            # add_atom "foo" Empty;;
+            - : Stack.t = Sexp (Atom "foo", Empty)
+          ]} *)
+      val add_atom : string -> t -> t
+
+      (** Add a token as returned by the lexer.
+
+          @raise Parse_error *)
+      val add_token : [ `other ] Lexer.token -> t -> t
+    end
+  end
+
+  (** {3 Deprecated low-level parser} *)
+
+  (** The modules below are deprecated: the {!Input} signature does not allow
+      distinguishing between IO errors and end-of-input conditions.
+      Additionally, the use of monads tends to produce parsers that allocate a
+      lot.
+
+      It is recommended to use the {!Parser} module instead. *)
+
+  module type Input = sig
+    type t
+
+    module Monad : sig
+      type 'a t
+
+      val return : 'a -> 'a t
+
+      val bind : 'a t -> ('a -> 'b t) -> 'b t
+    end
+
+    val read_string : t -> int -> (string, string) Result.t Monad.t
+
+    val read_char : t -> (char, string) Result.t Monad.t
+  end
+  [@@deprecated "Use Parser module instead"]
+
+  [@@@warning "-3"]
+
+  module Make_parser (Input : Input) : sig
+    val parse : Input.t -> (Sexp.t, string) Result.t Input.Monad.t
+
+    val parse_many : Input.t -> (Sexp.t list, string) Result.t Input.Monad.t
+  end
+  [@@deprecated "Use Parser module instead"]
+end
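
For orientation, the high-level half of this interface can be exercised with a round-trip like the following sketch; `Sexp` is the caller-supplied module the functor is applied to, exactly as in the test suite added later in this patch:

```ocaml
(* Sketch: round-trip a value through the canonical encoding. *)
module Sexp = struct
  type t =
    | Atom of string
    | List of t list
end

module Csexp = Csexp.Make (Sexp)

let () =
  let sexp = Sexp.List [ Sexp.Atom "Hello"; Sexp.Atom "World!" ] in
  let str = Csexp.to_string sexp in
  (* Canonical form: "(5:Hello6:World!)" *)
  assert (Csexp.parse_string str = Ok sexp)
```

The expected canonical form matches the `(5:Hello6:World!)` expectation in `test/test.ml` below.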
diff --git a/tools/ocaml/duniverse/csexp/src/dune b/tools/ocaml/duniverse/csexp/src/dune
new file mode 100644
index 0000000000..bd4b3b7ea6
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/src/dune
@@ -0,0 +1,3 @@
+(library
+ (public_name csexp)
+ (libraries result))
diff --git a/tools/ocaml/duniverse/csexp/test/dune b/tools/ocaml/duniverse/csexp/test/dune
new file mode 100644
index 0000000000..3284f4b3ca
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/test/dune
@@ -0,0 +1,6 @@
+(library
+ (name csexp_tests)
+ (libraries csexp)
+ (inline_tests)
+ (preprocess
+  (pps ppx_expect)))
diff --git a/tools/ocaml/duniverse/csexp/test/test.ml b/tools/ocaml/duniverse/csexp/test/test.ml
new file mode 100644
index 0000000000..9ea426f8bc
--- /dev/null
+++ b/tools/ocaml/duniverse/csexp/test/test.ml
@@ -0,0 +1,142 @@
+module Sexp = struct
+  type t =
+    | Atom of string
+    | List of t list
+end
+
+module Csexp = Csexp.Make (Sexp)
+open Csexp
+
+let roundtrip x =
+  let str = to_string x in
+  match parse_string str with
+  | Result.Error (_, msg) -> failwith msg
+  | Result.Ok exp ->
+    assert (exp = x);
+    print_string str
+
+let%expect_test _ =
+  roundtrip (Sexp.Atom "foo");
+  [%expect {|3:foo|}]
+
+let%expect_test _ =
+  roundtrip (Sexp.List []);
+  [%expect {|()|}]
+
+let%expect_test _ =
+  roundtrip (Sexp.List [ Sexp.Atom "Hello"; Sexp.Atom "World!" ]);
+  [%expect {|(5:Hello6:World!)|}]
+
+let%expect_test _ =
+  roundtrip
+    (Sexp.List
+       [ Sexp.List
+           [ Sexp.Atom "metadata"
+           ; Sexp.List [ Sexp.Atom "foo"; Sexp.Atom "bar" ]
+           ]
+       ; Sexp.List
+           [ Sexp.Atom "produced-files"
+           ; Sexp.List
+               [ Sexp.List
+                   [ Sexp.Atom "/tmp/coin"
+                   ; Sexp.Atom
+                       "/tmp/dune-memory/v2/files/b2/b295e63b0b8e8fae971d9c493be0d261.1"
+                   ]
+               ]
+           ]
+       ]);
+  [%expect
+    {|((8:metadata(3:foo3:bar))(14:produced-files((9:/tmp/coin63:/tmp/dune-memory/v2/files/b2/b295e63b0b8e8fae971d9c493be0d261.1))))|}]
+
+let print_parsed r =
+  match r with
+  | Error msg -> Printf.printf "Error %S" msg
+  | Ok sexp -> Printf.printf "Ok %S" (Csexp.to_string sexp)
+
+let parse s =
+  match parse_string s with
+  | Ok x -> print_parsed (Ok x)
+  | Error (_, msg) -> print_parsed (Error msg)
+
+let%expect_test _ =
+  parse "(3:foo)";
+  [%expect {|
+    Ok "(3:foo)" |}]
+
+let%expect_test _ =
+  parse "";
+  [%expect {| Error "premature end of input" |}]
+
+let%expect_test _ =
+  parse "(";
+  [%expect {| Error "premature end of input" |}]
+
+let%expect_test _ =
+  parse "(a)";
+  [%expect {| Error "invalid character 'a', expected '(', ')' or '0'..'9'" |}]
+
+let%expect_test _ =
+  parse "(:)";
+  [%expect {| Error "invalid character ':', expected '(', ')' or '0'..'9'" |}]
+
+let%expect_test _ =
+  parse "(4:foo)";
+  [%expect {| Error "premature end of input" |}]
+
+let%expect_test _ =
+  parse "(5:foo)";
+  [%expect {| Error "premature end of input" |}]
+
+let%expect_test _ =
+  parse "(3:foo)";
+  [%expect {| Ok "(3:foo)" |}]
+
+let sexp_then_stuff s =
+  let fn, oc = Filename.open_temp_file "csexp-test" "" ~mode:[ Open_binary ] in
+  let delete = lazy (Sys.remove fn) in
+  at_exit (fun () -> Lazy.force delete);
+  output_string oc s;
+  close_out oc;
+  let ic = open_in_bin fn in
+  Csexp.input ic |> print_parsed;
+  print_newline ();
+  print_char (input_char ic);
+  close_in ic;
+  Lazy.force delete
+
+let%expect_test _ =
+  sexp_then_stuff "(3:foo)(3:foo)";
+  [%expect {|
+    Ok "(3:foo)"
+    ( |}]
+
+let%expect_test _ =
+  sexp_then_stuff "(3:foo)Additional_stuff";
+  [%expect {|
+    Ok "(3:foo)"
+    A |}]
+
+let%expect_test _ =
+  parse "(3:foo)(3:foo)";
+  [%expect {| Error "data after canonical S-expression" |}]
+
+let%expect_test _ =
+  parse "(3:foo)additional_stuff";
+  [%expect {| Error "data after canonical S-expression" |}]
+
+let parse_many s =
+  match parse_string_many s with
+  | Error (_, msg) -> print_parsed (Error msg)
+  | Ok xs -> xs |> List.iter (fun x -> print_parsed (Ok x))
+
+let%expect_test "parse_string_many - parse empty string" =
+  parse_many "";
+  [%expect {| |}]
+
+let%expect_test "parse_string_many - parse a single csexp" =
+  parse_many "(3:foo)";
+  [%expect {| Ok "(3:foo)" |}]
+
+let%expect_test "parse_string_many - parse many csexp" =
+  parse_many "(3:foo)(3:bar)";
+  [%expect {| Ok "(3:foo)"Ok "(3:bar)" |}]
diff --git a/tools/ocaml/duniverse/dune b/tools/ocaml/duniverse/dune
new file mode 100644
index 0000000000..ad2ec9467e
--- /dev/null
+++ b/tools/ocaml/duniverse/dune
@@ -0,0 +1,4 @@
+; This file is generated by duniverse.
+; Be aware that it is likely to be overwritten by your next duniverse pull invocation.
+
+(vendored_dirs *)
diff --git a/tools/ocaml/duniverse/fmt/.gitignore b/tools/ocaml/duniverse/fmt/.gitignore
new file mode 100644
index 0000000000..10916e3a45
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/.gitignore
@@ -0,0 +1,8 @@
+BRZO
+_b0
+_build
+tmp
+*~
+\.\#*
+\#*#
+*.install
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/fmt/.ocp-indent b/tools/ocaml/duniverse/fmt/.ocp-indent
new file mode 100644
index 0000000000..ad2fbcbfa5
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/.ocp-indent
@@ -0,0 +1 @@
+strict_with=always,match_clause=4,strict_else=never
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/fmt/CHANGES.md b/tools/ocaml/duniverse/fmt/CHANGES.md
new file mode 100644
index 0000000000..6ddddbcc26
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/CHANGES.md
@@ -0,0 +1,98 @@
+v0.8.8 2019-08-01 Zagreb
+------------------------
+
+Fix build on 32-bit platforms.
+
+v0.8.7 2019-07-21 Zagreb
+------------------------
+
+* Require OCaml 4.05.
+* Add `Fmt.hex` and friends. Support for hex dumping.
+  Thanks to David Kaloper Meršinjak for the design and implementation.
+* Add `Fmt.si_size` to format integer magnitudes using SI prefixes.
+* Add `Fmt.uint64_ns_span` to format time spans.
+* Add `Fmt.truncated` to truncate your long strings.
+* Add `Fmt.flush`, has the effect of `Format.pp_print_flush`.
+* Add `Fmt.[Dump.]{field,record}` for records (#9).
+* Add `Fmt.concat` to apply a list of formatters to a value.
+* Add `Fmt.{semi,sps}`, separators.
+* Add `Fmt.{error,error_msg}` to format `result` values.
+* Add `Fmt.failwith_notrace`.
+* Add `Fmt.( ++ )`, alias for `Fmt.append`.
+* Add `Fmt.Dump.string`.
+* Add more ANSI tty formatting styles and make them composable.
+* Change `Fmt.{const,comma,cut,sp}`, generalize signature.
+* Change `Fmt.append`, incompatible signature. Use `Fmt.(pair ~sep:nop)` if 
+  you were using it (backward compatible with earlier versions of `Fmt`).
+* Deprecate `Fmt.{strf,kstrf,strf_like}` in favor of `Fmt.{str,kstr,str_like}`.
+* Deprecate `Fmt.{always,unit}` in favor of `Fmt.any`.
+* Deprecate `Fmt.{prefix,suffix}` (specializes Fmt.( ++ )).
+* Deprecate `Fmt.styled_unit`.
+* No longer subvert the `Format` tag system to do dirty things.
+  Thanks to David Kaloper Meršinjak for the work.
+
+v0.8.6 2019-04-01 La Forclaz (VS)
+---------------------------------
+
+* Add `Fmt.{seq,Dump.seq}` to format `'a Seq.t` values. Thanks to
+  Hezekiah M. Carty for the patch.
+* Handle `Pervasives`'s deprecation via dependency on `stdlib-shims`.
+* `Fmt.Dump.signal` format signals added in 4.03.
+* Fix toplevel initialization for omod (#33).
+* Require at least OCaml 4.03 (drops dependency on `result` and `uchar`
+  compatibility packages).
+
+v0.8.5 2017-12-27 La Forclaz (VS)
+---------------------------------
+
+* Fix `Fmt.{kstrf,strf_like}` when they are partially applied
+  and repeatedly called. Thanks to Thomas Gazagnaire for the report.
+* Add `Fmt.comma`.
+* Relax the `Fmt.(invalid_arg, failwith)` type signature. Thanks to
+  Hezekiah M. Carty for the patch.
+
+v0.8.4 2017-07-08 Zagreb
+------------------------
+
+* Add `Fmt.{invalid_arg,failwith}`. Thanks to Hezekiah M. Carty for the patch.
+
+v0.8.3 2017-04-13 La Forclaz (VS)
+---------------------------------
+
+* Fix `Fmt.exn_backtrace`. Thanks to Thomas Leonard for the report.
+
+v0.8.2 2017-03-20 La Forclaz (VS)
+---------------------------------
+
+* Fix `META` file.
+
+v0.8.1 2017-03-15 La Forclaz (VS)
+---------------------------------
+
+* `Fmt_tty.setup`, treat empty `TERM` env var as dumb.
+* Add `Fmt.Dump.uchar` formatter for inspecting `Uchar.t` values.
+
+v0.8.0 2016-05-23 La Forclaz (VS)
+---------------------------------
+
+* Build depend on topkg.
+* Relicense from BSD3 to ISC.
+* Tweak `Fmt.Dump.option` to indent like in sources.
+* Add `Fmt.Dump.signal` formatter for `Sys` signal numbers.
+* Add `Fmt[.Dump].result`, formatter for `result` values.
+* Add `Fmt.{words,paragraphs}` formatters on US-ASCII strings.
+* Add `Fmt.exn[_backtrace]`. Thanks to Edwin Török for suggesting.
+* Add `Fmt.quote`.
+* Rename `Fmt.text_range` to `Fmt.text_loc` and simplify output
+  when range is a position.
+
+v0.7.1 2015-12-03 Cambridge (UK)
+--------------------------------
+
+* Add optional cmdliner support. See the `Fmt_cli` module provided
+  by the package `fmt.cli`.
+
+v0.7.0 2015-09-17 Cambridge (UK)
+--------------------------------
+
+First Release.
diff --git a/tools/ocaml/duniverse/fmt/LICENSE.md b/tools/ocaml/duniverse/fmt/LICENSE.md
new file mode 100644
index 0000000000..52fe16df4b
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/LICENSE.md
@@ -0,0 +1,13 @@
+Copyright (c) 2016 The fmt programmers
+
+Permission to use, copy, modify, and/or distribute this software for any
+purpose with or without fee is hereby granted, provided that the above
+copyright notice and this permission notice appear in all copies.
+
+THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
diff --git a/tools/ocaml/duniverse/fmt/README.md b/tools/ocaml/duniverse/fmt/README.md
new file mode 100644
index 0000000000..4809210590
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/README.md
@@ -0,0 +1,35 @@
+Fmt — OCaml Format pretty-printer combinators
+-------------------------------------------------------------------------------
+%%VERSION%%
+
+Fmt exposes combinators to devise `Format` pretty-printing functions.
+
+Fmt depends only on the OCaml standard library. The optional `Fmt_tty`
+library, which allows setting up formatters for terminal color output,
+depends on the Unix library. The optional `Fmt_cli` library that
+provides command line support for Fmt depends on [`Cmdliner`][cmdliner].
+
+Fmt is distributed under the ISC license.
+
+[cmdliner]: http://erratique.ch/software/cmdliner
+
+Home page: http://erratique.ch/software/fmt  
+
+## Installation
+
+Fmt can be installed with `opam`:
+
+    opam install fmt
+    opam install base-unix cmdliner fmt # Install all optional libraries
+
+If you don't use `opam` consult the [`opam`](opam) file for build
+instructions.
+
+## Documentation
+
+The documentation and API reference are automatically generated by
+`ocamldoc` from the interfaces. It can be consulted [online][doc]
+and there is a generated version in the `doc` directory of the
+distribution.
+
+[doc]: http://erratique.ch/software/fmt/doc/
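
## Example

A small sketch of the combinator style (using `Fmt.pr` and the `list`/`comma`/`int` combinators from the `Fmt` API):

```ocaml
(* Print the list "1, 2, 3" followed by a newline on stdout. *)
let () = Fmt.pr "%a@." Fmt.(list ~sep:comma int) [ 1; 2; 3 ]
```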
diff --git a/tools/ocaml/duniverse/fmt/_tags b/tools/ocaml/duniverse/fmt/_tags
new file mode 100644
index 0000000000..729441abb9
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/_tags
@@ -0,0 +1,7 @@
+true : bin_annot, safe_string, package(seq), package(stdlib-shims)
+<_b0> : -traverse
+<src> : include
+<src/fmt_tty*> : package(unix)
+<src/fmt_cli*> : package(cmdliner)
+<src/fmt_top*> : package(compiler-libs.toplevel)
+<test> : include
diff --git a/tools/ocaml/duniverse/fmt/doc/api.odocl b/tools/ocaml/duniverse/fmt/doc/api.odocl
new file mode 100644
index 0000000000..f6608c354b
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/doc/api.odocl
@@ -0,0 +1,3 @@
+Fmt
+Fmt_tty
+Fmt_cli
diff --git a/tools/ocaml/duniverse/fmt/doc/index.mld b/tools/ocaml/duniverse/fmt/doc/index.mld
new file mode 100644
index 0000000000..eb2c91cbbd
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/doc/index.mld
@@ -0,0 +1,11 @@
+{0 Fmt {%html: <span class="version">%%VERSION%%</span>%}}
+
+Fmt exposes combinators to devise {!Format} pretty-printing functions.
+
+{1:api API}
+
+{!modules:
+Fmt
+Fmt_tty
+Fmt_cli
+}
diff --git a/tools/ocaml/duniverse/fmt/dune-project b/tools/ocaml/duniverse/fmt/dune-project
new file mode 100644
index 0000000000..c9c5bebf07
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/dune-project
@@ -0,0 +1,2 @@
+(lang dune 1.0)
+(name fmt)
diff --git a/tools/ocaml/duniverse/fmt/fmt.opam b/tools/ocaml/duniverse/fmt/fmt.opam
new file mode 100644
index 0000000000..2cce5b7136
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/fmt.opam
@@ -0,0 +1,35 @@
+opam-version: "2.0"
+maintainer: "Daniel Bünzli <daniel.buenzl i@erratique.ch>"
+authors: [ "The fmt programmers" ]
+homepage: "https://erratique.ch/software/fmt"
+doc: "https://erratique.ch/software/fmt"
+dev-repo: "git+https://github.com/dune-universe/fmt.git"
+bug-reports: "https://github.com/dbuenzli/fmt/issues"
+tags: [ "string" "format" "pretty-print" "org:erratique" ]
+license: "ISC"
+build: [
+  [ "dune" "build" "-p" name "-j" jobs ]
+]
+run-test: [
+  [ "dune" "runtest" "-p" name "-j" jobs ]
+]
+depends: [
+  "dune"
+  "ocaml" {>= "4.07.0"}
+  "stdlib-shims"
+]
+depopts: [ "base-unix" "cmdliner" ]
+conflicts: [ "cmdliner" {< "0.9.8"} ]
+synopsis: "OCaml Format pretty-printer combinators"
+description: """
+Fmt exposes combinators to devise `Format` pretty-printing functions.
+Fmt depends only on the OCaml standard library. The optional `Fmt_tty`
+library, which allows setting up formatters for terminal color output,
+depends on the Unix library. The optional `Fmt_cli` library that
+provides command line support for Fmt depends on [`Cmdliner`][cmdliner].
+Fmt is distributed under the ISC license.
+[cmdliner]: http://erratique.ch/software/cmdliner
+"""
+url {
+  src: "git+https://github.com/dune-universe/fmt#duniverse-v0.8.8"
+}
diff --git a/tools/ocaml/duniverse/fmt/pkg/META b/tools/ocaml/duniverse/fmt/pkg/META
new file mode 100644
index 0000000000..e379f4ed88
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/pkg/META
@@ -0,0 +1,40 @@
+description = "OCaml Format pretty-printer combinators"
+version = "%%VERSION_NUM%%"
+requires = "seq stdlib-shims"
+archive(byte) = "fmt.cma"
+archive(native) = "fmt.cmxa"
+plugin(byte) = "fmt.cma"
+plugin(native) = "fmt.cmxs"
+
+package "tty" (
+  description = "Fmt TTY setup"
+  version = "%%VERSION_NUM%%"
+  requires = "unix fmt"
+  archive(byte) = "fmt_tty.cma"
+  archive(native) = "fmt_tty.cmxa"
+  plugin(byte) = "fmt_tty.cma"
+  plugin(native) = "fmt_tty.cmxs"
+  exists_if = "fmt_tty.cma"
+)
+
+package "cli" (
+  description = "Cmdliner support for Fmt"
+  version = "%%VERSION_NUM%%"
+  requires = "cmdliner fmt"
+  archive(byte) = "fmt_cli.cma"
+  archive(native) = "fmt_cli.cmxa"
+  plugin(byte) = "fmt_cli.cma"
+  plugin(native) = "fmt_cli.cmxs"
+  exists_if = "fmt_cli.cma"
+)
+
+package "top" (
+  description = "Fmt toplevel support"
+  version = "%%VERSION_NUM%%"
+  requires = "fmt fmt.tty"
+  archive(byte) = "fmt_top.cma"
+  archive(native) = "fmt_top.cmxa"
+  plugin(byte) = "fmt_top.cma"
+  plugin(native) = "fmt_top.cmxs"
+  exists_if = "fmt_top.cma"
+)
diff --git a/tools/ocaml/duniverse/fmt/pkg/pkg.ml b/tools/ocaml/duniverse/fmt/pkg/pkg.ml
new file mode 100755
index 0000000000..959b02242c
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/pkg/pkg.ml
@@ -0,0 +1,18 @@
+#!/usr/bin/env ocaml
+#use "topfind"
+#require "topkg"
+open Topkg
+
+let unix = Conf.with_pkg "base-unix"
+let cmdliner = Conf.with_pkg "cmdliner"
+
+let () =
+  Pkg.describe "fmt" @@ fun c ->
+  let unix = Conf.value c unix in
+  let cmdliner = Conf.value c cmdliner in
+  Ok [ Pkg.mllib "src/fmt.mllib";
+       Pkg.mllib ~cond:unix "src/fmt_tty.mllib";
+       Pkg.mllib ~cond:cmdliner "src/fmt_cli.mllib";
+       Pkg.mllib ~api:[] "src/fmt_top.mllib";
+       Pkg.lib "src/fmt_tty_top_init.ml";
+       Pkg.test "test/test"; ]
diff --git a/tools/ocaml/duniverse/fmt/src/dune b/tools/ocaml/duniverse/fmt/src/dune
new file mode 100644
index 0000000000..ece3f9958b
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/dune
@@ -0,0 +1,30 @@
+(library
+ (name fmt)
+ (public_name fmt)
+ (libraries result)
+ (modules fmt)
+ (flags :standard -w -3-6-27-34-50)
+ (wrapped false))
+
+(library
+ (name fmt_tty)
+ (public_name fmt.tty)
+ (libraries unix fmt)
+ (modules fmt_tty)
+ (flags :standard -w -3-6-27)
+ (wrapped false))
+
+(library
+ (name fmt_cli)
+ (public_name fmt.cli)
+ (libraries fmt cmdliner)
+ (modules fmt_cli)
+ (flags :standard -w -3-6-27)
+ (wrapped false))
+
+(library
+ (name fmt_top)
+ (public_name fmt.top)
+ (libraries compiler-libs.toplevel fmt)
+ (modules fmt_top)
+ (wrapped false))
diff --git a/tools/ocaml/duniverse/fmt/src/fmt.ml b/tools/ocaml/duniverse/fmt/src/fmt.ml
new file mode 100644
index 0000000000..29f42d6bf2
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt.ml
@@ -0,0 +1,787 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2014 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let invalid_arg' = invalid_arg
+
+(* Errors *)
+
+let err_str_formatter = "Format.str_formatter can't be set."
+
+(* Standard outputs *)
+
+let stdout = Format.std_formatter
+let stderr = Format.err_formatter
+
+(* Formatting *)
+
+let pf = Format.fprintf
+let pr = Format.printf
+let epr = Format.eprintf
+let str = Format.asprintf
+let kpf = Format.kfprintf
+let kstr = Format.kasprintf
+let failwith fmt = kstr failwith fmt
+let failwith_notrace fmt = kstr (fun s -> raise_notrace (Failure s)) fmt
+let invalid_arg fmt = kstr invalid_arg fmt
+let error fmt = kstr (fun s -> Error s) fmt
+let error_msg fmt = kstr (fun s -> Error (`Msg s)) fmt
+
+(* Formatters *)
+
+type 'a t = Format.formatter -> 'a -> unit
+
+let flush ppf _ = Format.pp_print_flush ppf ()
+let nop fmt ppf = ()
+let any fmt ppf _ = pf ppf fmt
+let using f pp ppf v = pp ppf (f v)
+let const pp_v v ppf _ = pp_v ppf v
+let fmt fmt ppf = pf ppf fmt
+
+(* Separators *)
+
+let cut ppf _ = Format.pp_print_cut ppf ()
+let sp ppf _ = Format.pp_print_space ppf ()
+let sps n ppf _ = Format.pp_print_break ppf n 0
+let comma ppf _ = Format.pp_print_string ppf ","; sp ppf ()
+let semi ppf _ = Format.pp_print_string ppf ";"; sp ppf ()
+
+(* Sequencing *)
+
+let iter ?sep:(pp_sep = cut) iter pp_elt ppf v =
+  let is_first = ref true in
+  let pp_elt v =
+    if !is_first then (is_first := false) else pp_sep ppf ();
+    pp_elt ppf v
+  in
+  iter pp_elt v
+
+let iter_bindings ?sep:(pp_sep = cut) iter pp_binding ppf v =
+  let is_first = ref true in
+  let pp_binding k v =
+    if !is_first then (is_first := false) else pp_sep ppf ();
+    pp_binding ppf (k, v)
+  in
+  iter pp_binding v
+
+let append pp_v0 pp_v1 ppf v = pp_v0 ppf v; pp_v1 ppf v
+let ( ++ ) = append
+let concat ?sep pps ppf v = iter ?sep List.iter (fun ppf pp -> pp ppf v) ppf pps
+
+(* Boxes *)
+
+let box ?(indent = 0) pp_v ppf v =
+  Format.(pp_open_box ppf indent; pp_v ppf v; pp_close_box ppf ())
+
+let hbox pp_v ppf v =
+  Format.(pp_open_hbox ppf (); pp_v ppf v; pp_close_box ppf ())
+
+let vbox ?(indent = 0) pp_v ppf v =
+  Format.(pp_open_vbox ppf indent; pp_v ppf v; pp_close_box ppf ())
+
+let hvbox ?(indent = 0) pp_v ppf v =
+  Format.(pp_open_hvbox ppf indent; pp_v ppf v; pp_close_box ppf ())
+
+let hovbox ?(indent = 0) pp_v ppf v =
+  Format.(pp_open_hovbox ppf indent; pp_v ppf v; pp_close_box ppf ())
+
+(* Brackets *)
+
+let surround s1 s2 pp_v ppf v =
+  Format.(pp_print_string ppf s1; pp_v ppf v; pp_print_string ppf s2)
+
+let parens pp_v = box ~indent:1 (surround "(" ")" pp_v)
+let brackets pp_v = box ~indent:1 (surround "[" "]" pp_v)
+let oxford_brackets pp_v = box ~indent:2 (surround "[|" "|]" pp_v)
+let braces pp_v = box ~indent:1 (surround "{" "}" pp_v)
+let quote ?(mark = "\"") pp_v =
+  let pp_mark ppf _ = Format.pp_print_as ppf 1 mark in
+  box ~indent:1 (pp_mark ++ pp_v ++ pp_mark)
+
+(* Stdlib types formatters *)
+
+let bool = Format.pp_print_bool
+let int = Format.pp_print_int
+let nativeint ppf v = pf ppf "%nd" v
+let int32 ppf v = pf ppf "%ld" v
+let int64 ppf v = pf ppf "%Ld" v
+let uint ppf v = pf ppf "%u" v
+let uint32 ppf v = pf ppf "%lu" v
+let uint64 ppf v = pf ppf "%Lu" v
+let unativeint ppf v = pf ppf "%nu" v
+let char = Format.pp_print_char
+let string = Format.pp_print_string
+let buffer ppf b = string ppf (Buffer.contents b)
+let exn ppf e = string ppf (Printexc.to_string e)
+let exn_backtrace ppf (e, bt) =
+  let pp_backtrace_str ppf s =
+    let stop = String.length s - 1 (* there's a newline at the end *) in
+    let rec loop left right =
+      if right = stop then string ppf (String.sub s left (right - left)) else
+      if s.[right] <> '\n' then loop left (right + 1) else
+      begin
+        string ppf (String.sub s left (right - left));
+        cut ppf ();
+        loop (right + 1) (right + 1)
+      end
+    in
+    if s = "" then (string ppf "No backtrace available.") else
+    loop 0 0
+  in
+  pf ppf "@[<v>Exception: %a@,%a@]"
+    exn e pp_backtrace_str (Printexc.raw_backtrace_to_string bt)
+
+let float ppf v = pf ppf "%g" v
+let round x = floor (x +. 0.5)
+let round_dfrac d x =
+  if x -. (round x) = 0. then x else                   (* x is an integer. *)
+  let m = 10. ** (float_of_int d) in                (* m moves 10^-d to 1. *)
+  (floor ((x *. m) +. 0.5)) /. m
+
+let round_dsig d x =
+  if x = 0. then 0. else
+  let m = 10. ** (floor (log10 (abs_float x))) in       (* to normalize x. *)
+  (round_dfrac d (x /. m)) *. m
+
+let float_dfrac d ppf f = pf ppf "%g" (round_dfrac d f)
+let float_dsig d ppf f = pf ppf "%g" (round_dsig d f)
+
+let pair ?sep:(pp_sep = cut) pp_fst pp_snd ppf (fst, snd) =
+  pp_fst ppf fst; pp_sep ppf (); pp_snd ppf snd
+
+let option ?none:(pp_none = nop) pp_v ppf = function
+| None -> pp_none ppf ()
+| Some v -> pp_v ppf v
+
+let result ~ok ~error ppf = function
+| Ok v -> ok ppf v
+| Error e -> error ppf e
+
+let list ?sep pp_elt = iter ?sep List.iter pp_elt
+let array ?sep pp_elt = iter ?sep Array.iter pp_elt
+let seq ?sep pp_elt = iter ?sep Seq.iter pp_elt
+let hashtbl ?sep pp_binding = iter_bindings ?sep Hashtbl.iter pp_binding
+let queue ?sep pp_elt = iter ?sep Queue.iter pp_elt
+let stack ?sep pp_elt = iter ?sep Stack.iter pp_elt
+
+(* Stdlib type dumpers *)
+
+module Dump = struct
+
+  (* Sequencing *)
+
+  let iter iter_f pp_name pp_elt =
+    let pp_v = iter ~sep:sp iter_f (box pp_elt) in
+    parens (pp_name ++ sp ++ pp_v)
+
+  let iter_bindings iter_f pp_name pp_k pp_v =
+    let pp_v = iter_bindings ~sep:sp iter_f (pair pp_k pp_v) in
+    parens (pp_name ++ sp ++ pp_v)
+
+  (* Stdlib types *)
+
+  let sig_names =
+    Sys.[ sigabrt, "SIGABRT"; sigalrm, "SIGALRM"; sigfpe, "SIGFPE";
+          sighup, "SIGHUP"; sigill, "SIGILL"; sigint, "SIGINT";
+          sigkill, "SIGKILL"; sigpipe, "SIGPIPE"; sigquit, "SIGQUIT";
+          sigsegv, "SIGSEGV"; sigterm, "SIGTERM"; sigusr1, "SIGUSR1";
+          sigusr2, "SIGUSR2"; sigchld, "SIGCHLD"; sigcont, "SIGCONT";
+          sigstop, "SIGSTOP"; sigtstp, "SIGTSTP"; sigttin, "SIGTTIN";
+          sigttou, "SIGTTOU"; sigvtalrm, "SIGVTALRM"; sigprof, "SIGPROF";
+          sigbus, "SIGBUS"; sigpoll, "SIGPOLL"; sigsys, "SIGSYS";
+          sigtrap, "SIGTRAP"; sigurg, "SIGURG"; sigxcpu, "SIGXCPU";
+          sigxfsz, "SIGXFSZ"; ]
+
+  let signal ppf s = match List.assq_opt s sig_names with
+  | Some name -> string ppf name
+  | None -> pf ppf "SIG(%d)" s
+
+  let uchar ppf u = pf ppf "U+%04X" (Uchar.to_int u)
+  let string ppf s = pf ppf "%S" s
+  let pair pp_fst pp_snd =
+    parens (using fst (box pp_fst) ++ comma ++ using snd (box pp_snd))
+
+  let option pp_v ppf = function
+  | None -> pf ppf "None"
+  | Some v -> pf ppf "@[<2>Some@ @[%a@]@]" pp_v v
+
+  let result ~ok ~error ppf = function
+  | Ok v -> pf ppf "@[<2>Ok@ @[%a@]@]" ok v
+  | Error e -> pf ppf "@[<2>Error@ @[%a@]@]" error e
+
+  let list pp_elt = brackets (list ~sep:semi (box pp_elt))
+  let array pp_elt = oxford_brackets (array ~sep:semi (box pp_elt))
+  let seq pp_elt = brackets (seq ~sep:semi (box pp_elt))
+
+  let hashtbl pp_k pp_v =
+    iter_bindings Hashtbl.iter (any "hashtbl") pp_k pp_v
+
+  let stack pp_elt = iter Stack.iter (any "stack") pp_elt
+  let queue pp_elt = iter Queue.iter (any "queue") pp_elt
+
+  (* Records *)
+
+  let field ?(label = string) l prj pp_v ppf v =
+    pf ppf "@[<1>%a =@ %a@]" label l pp_v (prj v)
+
+  let record pps =
+    box ~indent:2 (surround "{ " " }" @@ vbox (concat ~sep:(any ";@,") pps))
+end
+
+(* Magnitudes *)
+
+let ilog10 x =
+  let rec loop p x = if x = 0 then p else loop (p + 1) (x / 10) in
+  loop (-1) x
+
+let ipow10 n =
+  let rec loop acc n = if n = 0 then acc else loop (acc * 10) (n - 1) in
+  loop 1 n
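The two integer helpers above drive the SI-magnitude code that follows: [ilog10] is a floor-log10 for non-negative ints (returning -1 for 0), and [ipow10] computes powers of ten by repeated multiplication. A standalone sketch of their behaviour:

```ocaml
(* Standalone copies of the two helpers above, for illustration only. *)
let ilog10 x =
  let rec loop p x = if x = 0 then p else loop (p + 1) (x / 10) in
  loop (-1) x

let ipow10 n =
  let rec loop acc n = if n = 0 then acc else loop (acc * 10) (n - 1) in
  loop 1 n

let () =
  assert (ilog10 0 = -1);       (* by convention in this code *)
  assert (ilog10 999 = 2);
  assert (ilog10 1_000 = 3);
  assert (ipow10 0 = 1);
  assert (ipow10 6 = 1_000_000)
```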
+
+let si_symb_max = 16
+let si_symb =
+  [| "y"; "z"; "a"; "f"; "p"; "n"; "u"; "m"; ""; "k"; "M"; "G"; "T"; "P";
+     "E"; "Z"; "Y"|]
+
+let rec pp_at_factor ~scale u symb factor ppf s =
+  let m = s / factor in
+  let n = s mod factor in
+  match m with
+  | m when m >= 100 -> (* No fractional digit *)
+      let m_up = if n > 0 then m + 1 else m in
+      if m_up >= 1000 then si_size ~scale u ppf (m_up * factor) else
+      pf ppf "%d%s%s" m_up symb u
+  | m when m >= 10 -> (* One fractional digit w.o. trailing 0 *)
+      let f_factor = factor / 10 in
+      let f_m = n / f_factor in
+      let f_n = n mod f_factor in
+      let f_m_up = if f_n > 0 then f_m + 1 else f_m in
+      begin match f_m_up with
+      | 0 -> pf ppf "%d%s%s" m symb u
+      | f when f >= 10 -> si_size ~scale u ppf (m * factor + f * f_factor)
+      | f -> pf ppf "%d.%d%s%s" m f symb u
+      end
+  | m -> (* Two or zero fractional digits w.o. trailing 0 *)
+      let f_factor = factor / 100 in
+      let f_m = n / f_factor in
+      let f_n = n mod f_factor in
+      let f_m_up = if f_n > 0 then f_m + 1 else f_m in
+      match f_m_up with
+      | 0 -> pf ppf "%d%s%s" m symb u
+      | f when f >= 100 -> si_size ~scale u ppf (m * factor + f * f_factor)
+      | f when f mod 10 = 0 -> pf ppf "%d.%d%s%s" m (f / 10) symb u
+      | f -> pf ppf "%d.%02d%s%s" m f symb u
+
+and si_size ~scale u ppf s = match scale < -8 || scale > 8 with
+| true -> invalid_arg "~scale is %d, must be in [-8;8]" scale
+| false ->
+    let pow_div_3 = if s = 0 then 0 else (ilog10 s / 3) in
+    let symb = (scale + 8) + pow_div_3 in
+    let symb, factor = match symb > si_symb_max with
+    | true -> si_symb_max, ipow10 ((8 - scale) * 3)
+    | false -> symb, ipow10 (pow_div_3 * 3)
+    in
+    if factor = 1
+    then pf ppf "%d%s%s" s si_symb.(symb) u
+    else pp_at_factor ~scale u si_symb.(symb) factor ppf s
+
+let byte_size ppf s = si_size ~scale:0 "B" ppf s
+
+let bi_byte_size ppf s =
+  (* XXX we should get rid of this. *)
+  let _pp_byte_size k i ppf s =
+    let pp_frac = float_dfrac 1 in
+    let div_round_up m n = (m + n - 1) / n in
+    let float = float_of_int in
+    if s < k then pf ppf "%dB" s else
+    let m = k * k in
+    if s < m then begin
+      let kstr = if i = "" then "k" (* SI *) else "K" (* IEC *) in
+      let sk = s / k in
+      if sk < 10
+      then pf ppf "%a%s%sB" pp_frac (float s /. float k) kstr i
+      else pf ppf "%d%s%sB" (div_round_up s k) kstr i
+    end else
+    let g = k * m in
+    if s < g then begin
+      let sm = s / m in
+      if sm < 10
+      then pf ppf "%aM%sB" pp_frac (float s /. float m) i
+      else pf ppf "%dM%sB" (div_round_up s m) i
+    end else
+    let t = k * g in
+    if s < t then begin
+      let sg = s / g in
+      if sg < 10
+      then pf ppf "%aG%sB" pp_frac (float s /. float g) i
+      else pf ppf "%dG%sB" (div_round_up s g) i
+    end else
+    let p = k * t in
+    if s < p then begin
+      let st = s / t in
+      if st < 10
+      then pf ppf "%aT%sB" pp_frac (float s /. float t) i
+      else pf ppf "%dT%sB" (div_round_up s t) i
+    end else begin
+      let sp = s / p in
+      if sp < 10
+      then pf ppf "%aP%sB" pp_frac (float s /. float p) i
+      else pf ppf "%dP%sB" (div_round_up s p) i
+    end
+  in
+  _pp_byte_size 1024 "i" ppf s
+
+(* XXX From 4.08 on use Int64.unsigned_*
+
+   See Hacker's Delight for the implementation of these unsigned_* funs *)
+
+let unsigned_compare x0 x1 = Int64.(compare (sub x0 min_int) (sub x1 min_int))
+let unsigned_div n d = match d < Int64.zero with
+| true -> if unsigned_compare n d < 0 then Int64.zero else Int64.one
+| false ->
+    let q = Int64.(shift_left (div (shift_right_logical n 1) d) 1) in
+    let r = Int64.(sub n (mul q d)) in
+    if unsigned_compare r d >= 0 then Int64.succ q else q
+
+let unsigned_rem n d = Int64.(sub n (mul (unsigned_div n d) d))
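The bias-by-[min_int] trick above turns OCaml's signed [Int64] comparison into an unsigned one, and [unsigned_div] emulates unsigned 64-bit division on releases predating 4.08's [Int64.unsigned_div]. A standalone sketch; note that [(-1L)] reads as 2{^64} - 1, the largest unsigned 64-bit value:

```ocaml
(* Standalone copies of the helpers above, exercised on (-1L) = 2^64 - 1. *)
let unsigned_compare x0 x1 = Int64.(compare (sub x0 min_int) (sub x1 min_int))

let unsigned_div n d = match d < Int64.zero with
| true -> if unsigned_compare n d < 0 then Int64.zero else Int64.one
| false ->
    let q = Int64.(shift_left (div (shift_right_logical n 1) d) 1) in
    let r = Int64.(sub n (mul q d)) in
    if unsigned_compare r d >= 0 then Int64.succ q else q

let unsigned_rem n d = Int64.(sub n (mul (unsigned_div n d) d))

let () =
  (* Signed order says -1 < 1; unsigned order says 2^64 - 1 > 1. *)
  assert (Int64.compare (-1L) 1L < 0);
  assert (unsigned_compare (-1L) 1L > 0);
  (* (2^64 - 1) / 10 and its remainder. *)
  assert (unsigned_div (-1L) 10L = 1_844_674_407_370_955_161L);
  assert (unsigned_rem (-1L) 10L = 5L)
```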
+
+let us_span   =                  1_000L
+let ms_span   =              1_000_000L
+let sec_span  =          1_000_000_000L
+let min_span  =         60_000_000_000L
+let hour_span =       3600_000_000_000L
+let day_span  =     86_400_000_000_000L
+let year_span = 31_557_600_000_000_000L
+
+let rec pp_si_span unit_str si_unit si_higher_unit ppf span =
+  let geq x y = unsigned_compare x y >= 0 in
+  let m = unsigned_div span si_unit in
+  let n = unsigned_rem span si_unit in
+  match m with
+  | m when geq m 100L -> (* No fractional digit *)
+      let m_up = if Int64.equal n 0L then m else Int64.succ m in
+      let span' = Int64.mul m_up si_unit in
+      if geq span' si_higher_unit then uint64_ns_span ppf span' else
+      pf ppf "%Ld%s" m_up unit_str
+  | m when geq m 10L -> (* One fractional digit w.o. trailing zero *)
+      let f_factor = unsigned_div si_unit 10L in
+      let f_m = unsigned_div n f_factor in
+      let f_n = unsigned_rem n f_factor in
+      let f_m_up = if Int64.equal f_n 0L then f_m else Int64.succ f_m in
+      begin match f_m_up with
+      | 0L -> pf ppf "%Ld%s" m unit_str
+      | f when geq f 10L ->
+          uint64_ns_span ppf Int64.(add (mul m si_unit) (mul f f_factor))
+      | f -> pf ppf "%Ld.%Ld%s" m f unit_str
+      end
+  | m -> (* Two or zero fractional digits w.o. trailing zero *)
+      let f_factor = unsigned_div si_unit 100L in
+      let f_m = unsigned_div n f_factor in
+      let f_n = unsigned_rem n f_factor in
+      let f_m_up = if Int64.equal f_n 0L then f_m else Int64.succ f_m in
+      match f_m_up with
+      | 0L -> pf ppf "%Ld%s" m unit_str
+      | f when geq f 100L ->
+          uint64_ns_span ppf Int64.(add (mul m si_unit) (mul f f_factor))
+      | f when Int64.equal (Int64.rem f 10L) 0L ->
+          pf ppf "%Ld.%Ld%s" m (Int64.div f 10L) unit_str
+      | f ->
+          pf ppf "%Ld.%02Ld%s" m f unit_str
+
+and pp_non_si unit_str unit unit_lo_str unit_lo unit_lo_size ppf span =
+  let geq x y = unsigned_compare x y >= 0 in
+  let m = unsigned_div span unit in
+  let n = unsigned_rem span unit in
+  if Int64.equal n 0L then pf ppf "%Ld%s" m unit_str else
+  let f_m = unsigned_div n unit_lo in
+  let f_n = unsigned_rem n unit_lo in
+  let f_m_up = if Int64.equal f_n 0L then f_m else Int64.succ f_m in
+  match f_m_up with
+  | f when geq f unit_lo_size ->
+      uint64_ns_span ppf Int64.(add (mul m unit) (mul f unit_lo))
+  | f ->
+      pf ppf "%Ld%s%Ld%s" m unit_str f unit_lo_str
+
+and uint64_ns_span ppf span =
+  let geq x y = unsigned_compare x y >= 0 in
+  let lt x y = unsigned_compare x y = -1 in
+  match span with
+  | s when lt s us_span -> pf ppf "%Ldns" s
+  | s when lt s ms_span -> pp_si_span "us" us_span ms_span ppf s
+  | s when lt s sec_span -> pp_si_span "ms" ms_span sec_span ppf s
+  | s when lt s min_span -> pp_si_span "s" sec_span min_span ppf s
+  | s when lt s hour_span -> pp_non_si "min" min_span "s" sec_span 60L ppf s
+  | s when lt s day_span -> pp_non_si "h" hour_span "min" min_span 60L ppf s
+  | s when lt s year_span -> pp_non_si "d" day_span "h" hour_span 24L ppf s
+  | s ->
+      let m = unsigned_div s year_span in
+      let n = unsigned_rem s year_span in
+      if Int64.equal n 0L then pf ppf "%Lda" m else
+      let f_m = unsigned_div n day_span in
+      let f_n = unsigned_rem n day_span in
+      let f_m_up = if Int64.equal f_n 0L then f_m else Int64.succ f_m in
+      match f_m_up with
+      | f when geq f 366L -> pf ppf "%Lda" (Int64.succ m)
+      | f -> pf ppf "%Lda%Ldd" m f
+
+(* Binary formatting *)
+
+type 'a vec = int * (int -> 'a)
+
+let iter_vec f (n, get) = for i = 0 to n - 1 do f i (get i) done
+let vec ?sep = iter_bindings ?sep iter_vec
+
+let on_string = using String.(fun s -> length s, get s)
+let on_bytes = using Bytes.(fun b -> length b, get b)
+
+let sub_vecs w (n, get) =
+  (n - 1) / w + 1,
+  fun j ->
+    let off = w * j in
+    min w (n - off), fun i -> get (i + off)
+
+let prefix0x = [
+  0xf       , fmt "%01x";
+  0xff      , fmt "%02x";
+  0xfff     , fmt "%03x";
+  0xffff    , fmt "%04x";
+  0xfffff   , fmt "%05x";
+  0xffffff  , fmt "%06x";
+  0xfffffff , fmt "%07x"; ]
+
+let padded0x ~max = match List.find_opt (fun (x, _) -> max <= x) prefix0x with
+| Some (_, pp) -> pp
+| None -> fmt "%08x"
+
+let ascii ?(w = 0) ?(subst = const char '.') () ppf (n, _ as v) =
+  let pp_char ppf (_, c) =
+    if '\x20' <= c && c < '\x7f' then char ppf c else subst ppf ()
+  in
+  vec pp_char ppf v;
+  if n < w then sps (w - n) ppf ()
+
+let octets ?(w = 0) ?(sep = sp) () ppf (n, _ as v) =
+  let pp_sep ppf i = if i > 0 && i mod 2 = 0 then sep ppf () in
+  let pp_char ppf (i, c) = pp_sep ppf i; pf ppf "%02x" (Char.code c) in
+  vec ~sep:nop pp_char ppf v;
+  for i = n to w - 1 do pp_sep ppf i; sps 2 ppf () done
+
+let addresses ?addr ?(w = 16) pp_vec ppf (n, _ as v) =
+  let addr = match addr with
+  | Some pp -> pp
+  | _ -> padded0x ~max:(((n - 1) / w) * w) ++ const string ": "
+  in
+  let pp_sub ppf (i, sub) = addr ppf (i * w); box pp_vec ppf sub in
+  vbox (vec pp_sub) ppf (sub_vecs w v)
+
+let hex ?(w = 16) () =
+  addresses ~w ((octets ~w () |> box) ++ sps 2 ++ (ascii ~w () |> box))
+
+(* Text and lines *)
+
+let is_nl c = c = '\n'
+let is_nl_or_sp c = is_nl c || c = ' '
+let is_white = function ' ' | '\t' .. '\r'  -> true | _ -> false
+let not_white c = not (is_white c)
+let not_white_or_nl c = is_nl c || not_white c
+
+let rec stop_at sat ~start ~max s =
+  if start > max then start else
+  if sat s.[start] then start else
+  stop_at sat ~start:(start + 1) ~max s
+
+let sub s start stop ~max =
+  if start = stop then "" else
+  if start = 0 && stop > max then s else
+  String.sub s start (stop - start)
+
+let words ppf s =
+  let max = String.length s - 1 in
+  let rec loop start s = match stop_at is_white ~start ~max s with
+  | stop when stop > max -> Format.pp_print_string ppf (sub s start stop ~max)
+  | stop ->
+      Format.pp_print_string ppf (sub s start stop ~max);
+      match stop_at not_white ~start:stop ~max s with
+      | stop when stop > max -> ()
+      | stop -> Format.pp_print_space ppf (); loop stop s
+  in
+  let start = stop_at not_white ~start:0 ~max s in
+  if start > max then () else loop start s
+
+let paragraphs ppf s =
+  let max = String.length s - 1 in
+  let rec loop start s = match stop_at is_white ~start ~max s with
+  | stop when stop > max -> Format.pp_print_string ppf (sub s start stop ~max)
+  | stop ->
+      Format.pp_print_string ppf (sub s start stop ~max);
+      match stop_at not_white_or_nl ~start:stop ~max s with
+      | stop when stop > max -> ()
+      | stop ->
+          if s.[stop] <> '\n'
+          then (Format.pp_print_space ppf (); loop stop s) else
+          match stop_at not_white_or_nl ~start:(stop + 1) ~max s with
+          | stop when stop > max -> ()
+          | stop ->
+              if s.[stop] <> '\n'
+              then (Format.pp_print_space ppf (); loop stop s) else
+              match stop_at not_white ~start:(stop + 1) ~max s with
+              | stop when stop > max -> ()
+              | stop ->
+                  Format.pp_force_newline ppf ();
+                  Format.pp_force_newline ppf ();
+                  loop stop s
+  in
+  let start = stop_at not_white ~start:0 ~max s in
+  if start > max then () else loop start s
+
+let text ppf s =
+  let max = String.length s - 1 in
+  let rec loop start s = match stop_at is_nl_or_sp ~start ~max s with
+  | stop when stop > max -> Format.pp_print_string ppf (sub s start stop ~max)
+  | stop ->
+      Format.pp_print_string ppf (sub s start stop ~max);
+      begin match s.[stop] with
+      | ' ' -> Format.pp_print_space ppf ()
+      | '\n' -> Format.pp_force_newline ppf ()
+      | _ -> assert false
+      end;
+      loop (stop + 1) s
+  in
+  loop 0 s
+
+let lines ppf s =
+  let max = String.length s - 1 in
+  let rec loop start s = match stop_at is_nl ~start ~max s with
+  | stop when stop > max -> Format.pp_print_string ppf (sub s start stop ~max)
+  | stop ->
+      Format.pp_print_string ppf (sub s start stop ~max);
+      Format.pp_force_newline ppf ();
+      loop (stop + 1) s
+  in
+  loop 0 s
+
+let truncated ~max ppf s = match String.length s <= max with
+| true -> Format.pp_print_string ppf s
+| false ->
+    for i = 0 to max - 4 do Format.pp_print_char ppf s.[i] done;
+    Format.pp_print_string ppf "..."
+
+let text_loc ppf ((l0, c0), (l1, c1)) =
+  if (l0 : int) == (l1 : int) && (c0 : int) == (c1 : int)
+  then pf ppf "%d.%d" l0 c0
+  else pf ppf "%d.%d-%d.%d" l0 c0 l1 c1
+
+(* HCI fragments *)
+
+let one_of ?(empty = nop) pp_v ppf = function
+| [] -> empty ppf ()
+| [v] -> pp_v ppf v
+| [v0; v1] -> pf ppf "@[either %a or@ %a@]" pp_v v0 pp_v v1
+| _ :: _ as vs ->
+    let rec loop ppf = function
+    | [v] -> pf ppf "or@ %a" pp_v v
+    | v :: vs -> pf ppf "%a,@ " pp_v v; loop ppf vs
+    | [] -> assert false
+    in
+    pf ppf "@[one@ of@ %a@]" loop vs
+
+let did_you_mean
+    ?(pre = any "Unknown") ?(post = nop) ~kind pp_v ppf (v, hints)
+  =
+  match hints with
+  | [] -> pf ppf "@[%a %s %a%a.@]" pre () kind pp_v v post ()
+  | hints ->
+      pf ppf "@[%a %s %a%a.@ Did you mean %a ?@]"
+        pre () kind pp_v v post () (one_of pp_v) hints
+
+(* Conditional UTF-8 and styled formatting. *)
+
+type any = ..
+type 'a attr = int * ('a -> any) * (any -> 'a)
+
+let id = ref 0
+let attr (type a) () =
+  incr id;
+  let module M = struct type any += K of a end in
+  !id, (fun x -> M.K x), (function M.K x -> x | _ -> assert false)
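Each call to [attr] above mints a fresh extensible-variant constructor inside a local first-class module, yielding a typed injection/projection pair into the single [any] type. A standalone sketch of the same trick (returning [option] from the projection instead of asserting, purely for illustration):

```ocaml
(* Each call to [attr] creates a new constructor [M.K], so values injected
   by one attribute can never be projected by another. *)
type any = ..

let next_id = ref 0
let attr (type a) () : int * (a -> any) * (any -> a option) =
  incr next_id;
  let module M = struct type any += K of a end in
  !next_id, (fun x -> M.K x), (function M.K x -> Some x | _ -> None)

let () =
  let _, inj_int, prj_int = attr () in
  let _, inj_str, prj_str = attr () in
  assert (prj_int (inj_int 42) = Some 42);
  assert (prj_str (inj_str "hi") = Some "hi");
  (* Cross-projection fails: a different constructor was used. *)
  assert (prj_int (inj_str "hi") = None)
```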
+
+module Int = struct type t = int let compare a b = compare (a: int) b end
+module Imap = Map.Make (Int)
+
+let attrs = ref []
+let store ppf =
+  let open Ephemeron.K1 in
+  let rec go ppf top = function
+  | [] ->
+      let e = create () and v = ref Imap.empty in
+      attrs := e :: List.rev top; set_key e ppf; set_data e v; v
+  | e::es ->
+      match get_key e with
+      | None -> go ppf top es
+      | Some k when not (k == ppf) -> go ppf (e::top) es
+      | Some k ->
+          let v = match get_data e with Some v -> v | _ -> assert false in
+          if not (top == []) then attrs := e :: List.rev_append top es;
+          ignore (Sys.opaque_identity k); v
+  in
+  go ppf [] !attrs
+
+let get (k, _, prj) ppf =
+  match Imap.find_opt k !(store ppf) with Some x -> Some (prj x) | _ -> None
+
+let set (k, inj, _) v ppf =
+  if ppf == Format.str_formatter then invalid_arg' err_str_formatter else
+  let s = store ppf in
+  s := Imap.add k (inj v) !s
+
+let def x = function Some y -> y | _ -> x
+
+let utf_8_attr = attr ()
+let utf_8 ppf = get utf_8_attr ppf |> def true
+let set_utf_8 ppf x = set utf_8_attr x ppf
+
+type style_renderer = [ `Ansi_tty | `None ]
+let style_renderer_attr = attr ()
+let style_renderer ppf = get style_renderer_attr ppf |> def `None
+let set_style_renderer ppf x = set style_renderer_attr x ppf
+
+let with_buffer ?like buf =
+  let ppf = Format.formatter_of_buffer buf in
+  (match like with Some like -> store ppf := !(store like) | _ -> ());
+  ppf
+
+let str_like ppf fmt =
+  let buf = Buffer.create 64 in
+  let bppf = with_buffer ~like:ppf buf in
+  let flush ppf =
+    Format.pp_print_flush ppf ();
+    let s = Buffer.contents buf in
+    Buffer.reset buf; s
+  in
+  Format.kfprintf flush bppf fmt
+
+(* Conditional UTF-8 formatting *)
+
+let if_utf_8 pp_u pp = fun ppf v -> (if utf_8 ppf then pp_u else pp) ppf v
+
+(* Styled formatting *)
+
+type color =
+  [ `Black | `Blue | `Cyan | `Green | `Magenta | `Red | `White | `Yellow ]
+
+type style =
+  [ `None |  `Bold | `Faint | `Italic | `Underline | `Reverse
+  | `Fg of [ color | `Hi of color ]
+  | `Bg of [ color | `Hi of color ]
+  | color (** deprecated *) ]
+
+let ansi_style_code = function
+| `Bold -> "1"
+| `Faint -> "2"
+| `Italic -> "3"
+| `Underline -> "4"
+| `Reverse -> "7"
+| `Fg `Black -> "30"
+| `Fg `Red -> "31"
+| `Fg `Green -> "32"
+| `Fg `Yellow -> "33"
+| `Fg `Blue -> "34"
+| `Fg `Magenta -> "35"
+| `Fg `Cyan -> "36"
+| `Fg `White -> "37"
+| `Bg `Black -> "40"
+| `Bg `Red -> "41"
+| `Bg `Green -> "42"
+| `Bg `Yellow -> "43"
+| `Bg `Blue -> "44"
+| `Bg `Magenta -> "45"
+| `Bg `Cyan -> "46"
+| `Bg `White -> "47"
+| `Fg (`Hi `Black) -> "90"
+| `Fg (`Hi `Red) -> "91"
+| `Fg (`Hi `Green) -> "92"
+| `Fg (`Hi `Yellow) -> "93"
+| `Fg (`Hi `Blue) -> "94"
+| `Fg (`Hi `Magenta) -> "95"
+| `Fg (`Hi `Cyan) -> "96"
+| `Fg (`Hi `White) -> "97"
+| `Bg (`Hi `Black) -> "100"
+| `Bg (`Hi `Red) -> "101"
+| `Bg (`Hi `Green) -> "102"
+| `Bg (`Hi `Yellow) -> "103"
+| `Bg (`Hi `Blue) -> "104"
+| `Bg (`Hi `Magenta) -> "105"
+| `Bg (`Hi `Cyan) -> "106"
+| `Bg (`Hi `White) -> "107"
+| `None -> "0"
+(* deprecated *)
+| `Black -> "30"
+| `Red -> "31"
+| `Green -> "32"
+| `Yellow -> "33"
+| `Blue -> "34"
+| `Magenta -> "35"
+| `Cyan -> "36"
+| `White -> "37"
+
+let pp_sgr ppf style =
+  Format.pp_print_as ppf 0 "\027[";
+  Format.pp_print_as ppf 0 style;
+  Format.pp_print_as ppf 0 "m"
+
+let curr_style = attr ()
+
+let styled style pp_v ppf v = match style_renderer ppf with
+| `None -> pp_v ppf v
+| `Ansi_tty ->
+    let curr = match get curr_style ppf with
+    | None -> let s = ref "0" in set curr_style s ppf; s
+    | Some s -> s
+    in
+    let prev = !curr and here = ansi_style_code style in
+    curr := (match style with `None -> here | _ -> prev ^ ";" ^ here);
+    try pp_sgr ppf here; pp_v ppf v; pp_sgr ppf prev; curr := prev with
+    | e -> curr := prev; raise e
+
+(* Records *)
+
+external id : 'a -> 'a = "%identity"
+let label = styled (`Fg `Yellow) string
+let field ?(label = label) ?(sep = any ":@ ") l prj pp_v ppf v =
+  pf ppf "@[<1>%a%a%a@]" label l sep () pp_v (prj v)
+
+let record ?(sep = cut) pps = vbox (concat ~sep pps)
+
+(* Converting with string converters. *)
+
+let of_to_string f ppf v = string ppf (f v)
+let to_to_string pp_v v = str "%a" pp_v v
+
+(* Deprecated *)
+
+let strf = str
+let kstrf = kstr
+let strf_like = str_like
+let always = any
+let unit = any
+let prefix pp_p pp_v ppf v = pp_p ppf (); pp_v ppf v
+let suffix pp_s pp_v ppf v = pp_v ppf v; pp_s ppf ()
+let styled_unit style fmt = styled style (any fmt)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2014 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
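The combinators in fmt.ml compose small printers into larger ones. A self-contained sketch written against Stdlib.Format only (so it runs without the library), mirroring what [brackets (list ~sep:semi int)] does:

```ocaml
(* Minimal re-implementations of three combinators from fmt.ml. *)
type 'a t = Format.formatter -> 'a -> unit

let cut ppf () = Format.pp_print_cut ppf ()
let semi ppf () = Format.pp_print_string ppf ";"; Format.pp_print_space ppf ()

let list ?(sep = cut) pp_elt ppf l =
  let is_first = ref true in
  let pp_elt v =
    if !is_first then is_first := false else sep ppf ();
    pp_elt ppf v
  in
  List.iter pp_elt l

let brackets pp_v ppf v =
  Format.(pp_open_box ppf 1;
          pp_print_string ppf "[";
          pp_v ppf v;
          pp_print_string ppf "]";
          pp_close_box ppf ())

let () =
  Format.printf "%a@." (brackets (list ~sep:semi Format.pp_print_int))
    [1; 2; 3]
  (* prints [1; 2; 3] *)
```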
diff --git a/tools/ocaml/duniverse/fmt/src/fmt.mli b/tools/ocaml/duniverse/fmt/src/fmt.mli
new file mode 100644
index 0000000000..6fe965b910
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt.mli
@@ -0,0 +1,689 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2014 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** {!Format} pretty-printer combinators.
+
+    Consult {{!nameconv}naming conventions} for your pretty-printers.
+
+    {b References}
+    {ul
+    {- The {!Format} module documentation.}
+    {- The required reading {!Format} module
+       {{:https://ocaml.org/learn/tutorials/format.html}tutorial}.}} *)
+
+(** {1:stdos Standard outputs} *)
+
+val stdout : Format.formatter
+(** [stdout] is the standard output formatter. *)
+
+val stderr : Format.formatter
+(** [stderr] is the standard error formatter. *)
+
+(** {1:formatting Formatting} *)
+
+val pf : Format.formatter -> ('a, Format.formatter, unit) Stdlib.format -> 'a
+(** [pf] is {!Format.fprintf}. *)
+
+val pr : ('a, Format.formatter, unit) format -> 'a
+(** [pr] is [pf stdout]. *)
+
+val epr : ('a, Format.formatter, unit) format -> 'a
+(** [epr] is [pf stderr]. *)
+
+val str : ('a, Format.formatter, unit, string) format4 -> 'a
+(** [str] is {!Format.asprintf}.
+
+    {b Note.} When using [str], {!utf_8} and {!style_renderer} are
+    always respectively set to [true] and [`None]. See also
+    {!str_like}. *)
+
+val kpf : (Format.formatter -> 'a) -> Format.formatter ->
+  ('b, Format.formatter, unit, 'a) Stdlib.format4 -> 'b
+(** [kpf] is {!Format.kfprintf}. *)
+
+val kstr :
+  (string -> 'a) -> ('b, Format.formatter, unit, 'a) format4 -> 'b
+(** [kstr] is like {!str} but continuation based. *)
+
+val str_like :
+  Format.formatter -> ('a, Format.formatter, unit, string) format4 -> 'a
+(** [str_like ppf] is like {!str} except its {!utf_8} and {!style_renderer}
+    settings are those of [ppf]. *)
+
+val with_buffer : ?like:Format.formatter -> Buffer.t -> Format.formatter
+(** [with_buffer ~like b] is a formatter whose {!utf_8} and {!style_renderer}
+    settings are copied from those of [like] (if provided). *)
+
+val failwith : ('a, Format.formatter, unit, 'b) format4 -> 'a
+(** [failwith] is [kstr failwith], raises {!Stdlib.Failure} with
+    a pretty-printed string argument. *)
+
+val failwith_notrace : ('a, Format.formatter, unit, 'b) format4 -> 'a
+(** [failwith_notrace] is like {!failwith} but raises with {!raise_notrace}. *)
+
+val invalid_arg : ('a, Format.formatter, unit, 'b) format4 -> 'a
+(** [invalid_arg] is [kstr invalid_arg], raises
+    {!Stdlib.Invalid_argument} with a pretty-printed string argument. *)
+
+val error : ('b, Format.formatter , unit, ('a, string) result) format4 -> 'b
+(** [error fmt ...] is [kstr (fun s -> Error s) fmt ...] *)
+
+val error_msg :
+  ('b, Format.formatter , unit, ('a, [`Msg of string]) result) format4 -> 'b
+(** [error_msg fmt ...] is [kstr (fun s -> Error (`Msg s)) fmt ...] *)
+
+(** {1 Formatters} *)
+
+type 'a t = Format.formatter -> 'a -> unit
+(** The type for formatters of values of type ['a]. *)
+
+val flush : 'a t
+(** [flush] has the effect of {!Format.pp_print_flush}. *)
+
+val nop : 'a t
+(** [nop] formats nothing. *)
+
+val any : (unit, Format.formatter, unit) Stdlib.format -> 'a t
+(** [any fmt ppf v] formats any value with the constant format [fmt]. *)
+
+val using : ('a -> 'b) -> 'b t -> 'a t
+(** [using f pp ppf v] is [pp ppf (f v)]. *)
+
+val const : 'a t -> 'a -> 'b t
+(** [const pp_v v] always formats [v] using [pp_v]. *)
+
+val fmt : ('a, Format.formatter, unit) Stdlib.format -> Format.formatter -> 'a
+(** [fmt fmt ppf] is [pf ppf fmt]. If [fmt] is used with a single
+    non-constant formatting directive, generates a value of type
+    {!t}. *)
+
+(** {1:seps Separators} *)
+
+val cut : 'a t
+(** [cut] has the effect of {!Format.pp_print_cut}. *)
+
+val sp : 'a t
+(** [sp] has the effect of {!Format.pp_print_space}. *)
+
+val sps : int -> 'a t
+(** [sps n] has the effect of {!Format.pp_print_break}[ n 0]. *)
+
+val comma : 'a t
+(** [comma] is {!Fmt.any}[ ",@ "]. *)
+
+val semi : 'a t
+(** [semi] is {!Fmt.any}[ ";@ "]. *)
+
+(** {1:seq Sequencing} *)
+
+val append : 'a t -> 'a t -> 'a t
+(** [append pp_v0 pp_v1 ppf v] is [pp_v0 ppf v; pp_v1 ppf v]. *)
+
+val ( ++ ) : 'a t -> 'a t -> 'a t
+(** [( ++ )] is {!append}. *)
+
+val concat : ?sep:unit t -> 'a t list -> 'a t
+(** [concat ~sep pps] formats a value using the formatters [pps],
+    separating each with [sep] (defaults to {!cut}). *)
+
+val iter : ?sep:unit t -> (('a -> unit) -> 'b -> unit) -> 'a t -> 'b t
+(** [iter ~sep iter pp_elt] formats the iterations of [iter] over a
+    value using [pp_elt]. Iterations are separated by [sep] (defaults to
+    {!cut}). *)
+
+val iter_bindings : ?sep:unit t -> (('a -> 'b -> unit) -> 'c -> unit) ->
+  ('a * 'b) t -> 'c t
+(** [iter_bindings ~sep iter pp_binding] formats the iterations of
+    [iter] over a value using [pp_binding]. Iterations are separated
+    by [sep] (defaults to {!cut}). *)
+
+(** {1:boxes Boxes} *)
+
+val box : ?indent:int -> 'a t -> 'a t
+(** [box ~indent pp ppf] wraps [pp] in a pretty-printing box. The box tries to
+    print as much as possible on every line, while emphasizing the box structure
+    (see {!Format.pp_open_box}). Break hints that lead to a new line add
+    [indent] to the current indentation (defaults to [0]). *)
+
+val hbox : 'a t -> 'a t
+(** [hbox] is like {!box} but is a horizontal box: the line is not split
+    in this box (but may be in sub-boxes). See {!Format.pp_open_hbox}. *)
+
+val vbox : ?indent:int -> 'a t -> 'a t
+(** [vbox] is like {!box} but is a vertical box: every break hint leads
+    to a new line which adds [indent] to the current indentation
+    (defaults to [0]). See {!Format.pp_open_vbox}. *)
+
+val hvbox : ?indent:int -> 'a t -> 'a t
+(** [hvbox] is like {!hbox} if it fits on a single line, or like {!vbox}
+    otherwise. See {!Format.pp_open_hvbox}. *)
+
+val hovbox : ?indent:int -> 'a t -> 'a t
+(** [hovbox] is a condensed {!box}. See {!Format.pp_open_hovbox}. *)
+
+(** {1:bracks Brackets} *)
+
+val parens : 'a t -> 'a t
+(** [parens pp_v ppf] is [pf "@[<1>(%a)@]" pp_v]. *)
+
+val brackets : 'a t -> 'a t
+(** [brackets pp_v ppf] is [pf "@[<1>[%a]@]" pp_v]. *)
+
+val braces : 'a t -> 'a t
+(** [braces pp_v ppf] is [pf "@[<1>{%a}@]" pp_v]. *)
+
+val quote : ?mark:string -> 'a t -> 'a t
+(** [quote ~mark pp_v ppf] is [pf "@[<1>@<1>%s%a@<1>%s@]" mark pp_v mark],
+    [mark] defaults to ["\""], it is always counted as spanning as single
+    column (this allows for UTF-8 encoded marks). *)
+
+(** {1:records Records} *)
+
+val id : 'a -> 'a
+(** [id] is {!Fun.id}. *)
+
+val field :
+  ?label:string t -> ?sep:unit t -> string -> ('b -> 'a) -> 'a t -> 'b t
+(** [field ~label ~sep l prj pp_v] pretty prints a labelled field value as
+    [pf "@[<1>%a%a%a@]" label l sep () (using prj pp_v)]. [label] defaults
+    to [styled `Yellow string] and [sep] to [any ":@ "]. *)
+
+val record : ?sep:unit t -> 'a t list -> 'a t
+(** [record ~sep fields] pretty-prints a value using the concatenation of
+    [fields], separated by [sep] (defaults to [cut]) and framed in a vertical
+    box. *)
+
+(** {1:stdlib Stdlib types}
+
+    Formatters for structures give full control to the client over the
+    formatting process and do not wrap the formatted structures with
+    boxes. Use the {!Dump} module to quickly format values for
+    inspection.  *)
+
+val bool : bool t
+(** [bool] is {!Format.pp_print_bool}. *)
+
+val int : int t
+(** [int] is [pf ppf "%d"]. *)
+
+val nativeint : nativeint t
+(** [nativeint ppf] is [pf ppf "%nd"]. *)
+
+val int32 : int32 t
+(** [int32 ppf] is [pf ppf "%ld"]. *)
+
+val int64 : int64 t
+(** [int64 ppf] is [pf ppf "%Ld"]. *)
+
+val uint : int t
+(** [uint ppf] is [pf ppf "%u"]. *)
+
+val unativeint : nativeint t
+(** [unativeint ppf] is [pf ppf "%nu"]. *)
+
+val uint32 : int32 t
+(** [uint32 ppf] is [pf ppf "%lu"]. *)
+
+val uint64 : int64 t
+(** [uint64 ppf] is [pf ppf "%Lu"]. *)
+
+val float : float t
+(** [float ppf] is [pf ppf "%g".] *)
+
+val float_dfrac : int -> float t
+(** [float_dfrac d] rounds the float to the [d]th {e decimal}
+    fractional digit and formats the result with ["%g"]. Ties are
+    rounded towards positive infinity. The result is only defined
+    for [0 <= d <= 16]. *)
+
+val float_dsig : int -> float t
+(** [float_dsig d] rounds the normalized {e decimal} significand
+    of the float to the [d]th decimal fractional digit and formats
+    the result with ["%g"]. Ties are rounded towards positive
+    infinity. The result is NaN on infinities and only defined for
+    [0 <= d <= 16].
+
+    {b Warning.} The current implementation overflows on large [d]
+    and floats. *)
+
+val char : char t
+(** [char] is {!Format.pp_print_char}. *)
+
+val string : string t
+(** [string] is {!Format.pp_print_string}. *)
+
+val buffer : Buffer.t t
+(** [buffer] formats a {!Buffer.t} value's current contents. *)
+
+val exn : exn t
+(** [exn] formats an exception. *)
+
+val exn_backtrace : (exn * Printexc.raw_backtrace) t
+(** [exn_backtrace] formats an exception backtrace. *)
+
+val pair : ?sep:unit t -> 'a t -> 'b t -> ('a * 'b) t
+(** [pair ~sep pp_fst pp_snd] formats a pair. The first and second
+    projection are formatted using [pp_fst] and [pp_snd] and are
+    separated by [sep] (defaults to {!cut}). *)
+
+val option : ?none:unit t -> 'a t -> 'a option t
+(** [option ~none pp_v] formats an optional value. The [Some] case
+    uses [pp_v] and [None] uses [none] (defaults to {!nop}). *)
+
+val result : ok:'a t -> error:'b t -> ('a, 'b) result t
+(** [result ~ok ~error] formats a result value using [ok] for the [Ok]
+    case and [error] for the [Error] case. *)
+
+val list : ?sep:unit t -> 'a t -> 'a list t
+(** [list sep pp_v] formats list elements. Each element of the list is
+    formatted in order with [pp_v]. Elements are separated by [sep]
+    (defaults to {!cut}). If the list is empty, this is {!nop}. *)
+
+val array : ?sep:unit t -> 'a t -> 'a array t
+(** [array sep pp_v] formats array elements. Each element of the array
+    is formatted in order with [pp_v]. Elements are separated by [sep]
+    (defaults to {!cut}). If the array is empty, this is {!nop}. *)
+
+val seq : ?sep:unit t -> 'a t -> 'a Seq.t t
+(** [seq sep pp_v] formats sequence elements. Each element of the sequence
+    is formatted in order with [pp_v]. Elements are separated by [sep]
+    (defaults to {!cut}). If the sequence is empty, this is {!nop}. *)
+
+val hashtbl : ?sep:unit t -> ('a * 'b) t -> ('a, 'b) Hashtbl.t t
+(** [hashtbl ~sep pp_binding] formats the bindings of a hash
+    table. Each binding is formatted with [pp_binding] and bindings
+    are separated by [sep] (defaults to {!cut}). If the hash table has
+    multiple bindings for a given key, all bindings are formatted,
+    with the most recent binding first. If the hash table is empty,
+    this is {!nop}. *)
+
+val queue : ?sep:unit t -> 'a t -> 'a Queue.t t
+(** [queue ~sep pp_v] formats queue elements. Each element of the
+    queue is formatted in least recently added order with
+    [pp_v]. Elements are separated by [sep] (defaults to {!cut}). If
+    the queue is empty, this is {!nop}. *)
+
+val stack : ?sep:unit t -> 'a t -> 'a Stack.t t
+(** [stack ~sep pp_v] formats stack elements. Each element of the
+    stack is formatted from top to bottom order with [pp_v].  Elements
+    are separated by [sep] (defaults to {!cut}). If the stack is
+    empty, this is {!nop}. *)
+
+(** Formatters for inspecting OCaml values.
+
+    Formatters of this module dump OCaml value with little control
+    over the representation but with good default box structures and,
+    whenever possible, using OCaml syntax. *)
+module Dump : sig
+
+  (** {1:stdlib Stdlib types} *)
+
+  val signal : int t
+  (** [signal] formats an OCaml {{!Sys.sigabrt}signal number} as a C
+      POSIX
+      {{:http://pubs.opengroup.org/onlinepubs/9699919799/basedefs/signal.h.html}
+      constant} or ["SIG(%d)"] the signal number is unknown. *)
+
+  val uchar : Uchar.t t
+  (** [uchar] formats an OCaml {!Uchar.t} value using only US-ASCII
+      encoded characters according to the Unicode
+      {{:http://www.unicode.org/versions/latest/appA.pdf}notational
+      convention} for code points. *)
+
+  val string : string t
+  (** [string] is [pf ppf "%S"]. *)
+
+  val pair : 'a t -> 'b t -> ('a * 'b) t
+  (** [pair pp_fst pp_snd] formats an OCaml pair using [pp_fst] and [pp_snd]
+      for the first and second projection. *)
+
+  val option : 'a t -> 'a option t
+  (** [option pp_v] formats an OCaml option using [pp_v] for the [Some]
+      case. No parentheses are added. *)
+
+  val result : ok:'a t -> error:'b t -> ('a, 'b) result t
+  (** [result ~ok ~error] formats an OCaml result using [ok] for the [Ok]
+      case value and [error] for the [Error] case value. No parentheses
+      are added. *)
+
+  val list : 'a t -> 'a list t
+  (** [list pp_v] formats an OCaml list using [pp_v] for the list
+      elements. *)
+
+  val array : 'a t -> 'a array t
+  (** [array pp_v] formats an OCaml array using [pp_v] for the array
+      elements. *)
+
+  val seq : 'a t -> 'a Seq.t t
+  (** [seq pp_v] formats an OCaml sequence using [pp_v] for the sequence
+      elements. *)
+
+  val hashtbl : 'a t -> 'b t -> ('a, 'b) Hashtbl.t t
+  (** [hashtbl pp_k pp_v] formats an unspecified representation of the
+      bindings of a hash table using [pp_k] for the keys and [pp_v]
+      for the values. If the hash table has multiple bindings for a
+      given key, all bindings are formatted, with the most recent
+      binding first. *)
+
+  val queue : 'a t -> 'a Queue.t t
+  (** [queue pp_v] formats an unspecified representation of an OCaml
+      queue using [pp_v] to format its elements, in least recently added
+      order. *)
+
+  val stack : 'a t -> 'a Stack.t t
+  (** [stack pp_v] formats an unspecified representation of an OCaml
+      stack using [pp_v] to format its elements in top to bottom order. *)
+
+  (** {1:record Records} *)
+
+  val field : ?label:string t -> string -> ('b -> 'a) -> 'a t -> 'b t
+  (** [field ~label l prj pp_v] pretty prints a named field using [label]
+      (defaults to [styled `Yellow string]) for the label, and [using prj pp_v]
+      for the field value. *)
+
+  val record : 'a t list -> 'a t
+  (** [record fields] pretty-prints a value using the concatenation of
+      [fields], separated by [";@,"], framed in a vertical
+      box and surrounded by {!braces}. *)
+
+  (** {1:seq Sequencing}
+
+      These are akin to {!iter} and {!iter_bindings} but
+      delimit the sequences with {!parens}. *)
+
+  val iter : (('a -> unit) -> 'b -> unit) -> 'b t -> 'a t -> 'b t
+  (** [iter iter pp_name pp_elt] formats an unspecified representation
+      of the iterations of [iter] over a value using [pp_elt]. The
+      iteration is named by [pp_name]. *)
+
+  val iter_bindings : (('a -> 'b -> unit) -> 'c -> unit) -> 'c t -> 'a t
+    -> 'b t -> 'c t
+  (** [iter_bindings ~sep iter pp_name pp_k pp_v] formats an
+      unspecified representation of the iterations of [iter] over a
+      value using [pp_k] and [pp_v]. The iteration is named by
+      [pp_name]. *)
+end
+
+(** {1:mgs Magnitudes} *)
+
+val si_size : scale:int -> string -> int t
+(** [si_size ~scale unit] formats a non negative integer
+    representing unit [unit] at scale 10{^scale * 3}, depending on
+    its magnitude, using power of 3
+    {{:https://www.bipm.org/en/publications/si-brochure/chapter3.html}
+    SI prefixes} (i.e. all of them except deca, hector, deci and
+    centi). Only US-ASCII characters are used, [µ] (10{^-6}) is
+    written using [u].
+
+    [scale] indicates the scale 10{^scale * 3} an integer
+    represents, for example [-1] for m[unit] (10{^-3}), [0] for
+    [unit] (10{^0}), [1] for [kunit] (10{^3}); it must be in the
+    range \[[-8];[8]\] or [Invalid_argument] is raised.
+
+    Except at the maximal yotta scale always tries to show three
+    digits of data with trailing fractional zeros omited. Rounds
+    towards positive infinity (over approximates).  *)
+
+val byte_size : int t
+(** [byte_size] is [si_size ~scale:0 "B"]. *)
+
+val bi_byte_size : int t
+(** [bi_byte_size] formats a byte size according to its magnitude
+    using {{:https://en.wikipedia.org/wiki/Binary_prefix}binary prefixes}
+    up to pebi bytes (2{^15}). *)
+
+val uint64_ns_span : int64 t
+(** [uint64_ns_span] formats an {e unsigned} nanosecond time span
+    according to its magnitude using
+    {{:http://www.bipm.org/en/publications/si-brochure/chapter3.html}SI
+    prefixes} on seconds and
+    {{:http://www.bipm.org/en/publications/si-brochure/table6.html}accepted
+    non-SI units}. Years are counted in Julian years (365.25 SI-accepted days)
+    as {{:http://www.iau.org/publications/proceedings_rules/units/}defined}
+    by the International Astronomical Union (IAU). Only US-ASCII characters
+    are used ([us] is used for [µs]). *)
+
+(** {1:binary Binary data} *)
+
+type 'a vec = int * (int -> 'a)
+(** The type for random addressable, sized sequences. Each [(n, f)]
+    represents the sequence [f 0, ..., f (n - 1)]. *)
+
+val on_bytes : char vec t -> bytes t
+(** [on_bytes pp] is [pp] adapted to format (entire) [bytes]. *)
+
+val on_string : char vec t -> string t
+(** [on_string pp] is [pp] adapted to format (entire) [string]s. *)
+
+val ascii : ?w:int -> ?subst:unit t -> unit -> char vec t
+(** [ascii ~w ~subst ()] formats character sequences by printing
+    characters in the {e printable US-ASCII range} ([[0x20];[0x7E]])
+    as is, and replacing the rest with [subst] (defaults to [fmt "."]).
+    [w] causes the output to be right padded to the size of formatting
+    at least [w] sequence elements (defaults to [0]). *)
+
+val octets : ?w:int -> ?sep:unit t -> unit -> char vec t
+(** [octets ~w ~sep ()] formats character sequences as hexadecimal
+    digits.  It prints groups of successive characters of unspecified
+    length together, separated by [sep] (defaults to {!sp}). [w]
+    causes the output to be right padded to the size of formatting at
+    least [w] sequence elements (defaults to [0]). *)
+
+val addresses : ?addr:int t -> ?w:int -> 'a vec t -> 'a vec t
+(** [addresses pp] formats sequences by applying [pp] to consecutive
+    subsequences of length [w] (defaults to 16). [addr] formats
+    subsequence offsets (defaults to an unspecified hexadecimal
+    format).  *)
+
+val hex : ?w:int -> unit -> char vec t
+(** [hex ~w ()] formats character sequences as traditional hex dumps,
+    matching the output of {e xxd} and forcing line breaks after every
+    [w] characters (defaults to 16). *)
+
+(** {1:text Words, paragraphs, text and lines}
+
+    {b Note.} These functions only work on US-ASCII strings and/or
+    with newlines (['\n']). If you are dealing with UTF-8 strings or
+    different kinds of line endings you should use the pretty-printers
+    from {!Uuseg_string}.
+
+    {b White space.} White space is one of the following US-ASCII
+    characters: space [' '] ([0x20]), tab ['\t'] ([0x09]), newline
+    ['\n'] ([0x0A]), vertical tab ([0x0B]), form feed ([0x0C]),
+    carriage return ['\r'] ([0x0D]). *)
+
+val words : string t
+(** [words] formats words by suppressing initial and trailing
+    white space and replacing consecutive white space with
+    a single {!Format.pp_print_space}. *)
+
+val paragraphs : string t
+(** [paragraphs] formats paragraphs by suppressing initial and trailing
+    spaces and newlines, replacing blank lines (a line made only
+    of white space) by a two {!Format.pp_force_newline} and remaining
+    consecutive white space with a single {!Format.pp_print_space}. *)
+
+val text : string t
+(** [text] formats text by respectively replacing spaces and newlines in
+    the string with {!Format.pp_print_space} and {!Format.pp_force_newline}. *)
+
+val lines : string t
+(** [lines] formats lines by replacing newlines (['\n']) in the string
+    with calls to {!Format.pp_force_newline}. *)
+
+val truncated : max:int -> string t
+(** [truncated ~max] formats a string using at most [max]
+    characters. If the string doesn't fit, it is truncated and ended
+    with three consecutive dots which do count towards [max]. *)
+
+val text_loc : ((int * int) * (int * int)) t
+(** [text_loc] formats a line-column text range according to
+    {{:http://www.gnu.org/prep/standards/standards.html#Errors}
+    GNU conventions}. *)
+
+(** {1:hci HCI fragments} *)
+
+val one_of : ?empty:unit t -> 'a t -> 'a list t
+(** [one_of ~empty pp_v ppf l] formats according to the length of [l]
+    {ul
+    {- [0], formats {!empty} (defaults to {!nop}).}
+    {- [1], formats the element with [pp_v].}
+    {- [2], formats ["either %a or %a"] with the list elements}
+    {- [n], formats ["one of %a, ... or %a"] with the list elements}} *)
+
+val did_you_mean :
+  ?pre:unit t -> ?post:unit t -> kind:string -> 'a t -> ('a * 'a list) t
+(** [did_you_mean ~pre kind ~post pp_v] formats a faulty value [v] of
+    kind [kind] and a list of [hints] that [v] could have been
+    mistaken for.
+
+    [pre] defaults to [unit "Unknown"], [post] to {!nop} they surround
+    the faulty value before the "did you mean" part as follows ["%a %s
+    %a%a." pre () kind pp_v v post ()]. If [hints] is empty no "did
+    you mean" part is printed. *)
+
+(** {1:utf8_cond Conditional UTF-8 formatting}
+
+    {b Note.} Since {!Format} is not UTF-8 aware using UTF-8 output
+    may derail the pretty printing process. Use the pretty-printers
+    from {!Uuseg_string} if you are serious about UTF-8 formatting. *)
+
+val if_utf_8 : 'a t -> 'a t -> 'a t
+(** [if_utf_8 pp_u pp ppf v] is:
+    {ul
+    {- [pp_u ppf v] if [utf_8 ppf] is [true].}
+    {- [pp ppf v] otherwise.}} *)
+
+val utf_8 : Format.formatter -> bool
+(** [utf_8 ppf] is [true] if UTF-8 output is enabled on [ppf]. If
+    {!set_utf_8} hasn't been called on [ppf] this is [true]. *)
+
+val set_utf_8 : Format.formatter -> bool -> unit
+(** [set_utf_8 ppf b] enables or disables conditional UTF-8 formatting
+    on [ppf].
+
+    @raise Invalid_argument if [ppf] is {!Format.str_formatter}: it is
+    is always UTF-8 enabled. *)
+
+(** {1:styled Styled formatting} *)
+
+type color =
+  [ `Black | `Blue | `Cyan | `Green | `Magenta | `Red | `White | `Yellow ]
+(** The type for colors. *)
+
+type style =
+  [ `None |  `Bold | `Faint | `Italic | `Underline | `Reverse
+  | `Fg of [ color | `Hi of color ]
+  | `Bg of [ color | `Hi of color ]
+  | color (** deprecated *) ]
+(** The type for styles:
+    {ul
+    {- [`None] resets the styling.}
+    {- [`Bold], [`Faint], [`Italic], [`Underline] and [`Reverse] are
+       display attributes.}
+    {- [`Fg _] is the foreground color or high-intensity color on [`Hi _].}
+    {- [`Bg _] is the foreground color or high-intensity color on [`Hi _].}
+    {- [#color] is the foreground colour, {b deprecated} use [`Fg
+       #color] instead.}} *)
+
+val styled : style -> 'a t -> 'a t
+(** [styled s pp] formats like [pp] but styled with [s]. *)
+
+(** {2 Style rendering control} *)
+
+type style_renderer = [ `Ansi_tty | `None ]
+(** The type for style renderers.
+    {ul
+    {- [`Ansi_tty], renders styles using
+       {{:http://www.ecma-international.org/publications/standards/Ecma-048.htm}
+       ANSI escape sequences}.}
+    {- [`None], styled rendering has no effect.}} *)
+
+val style_renderer : Format.formatter  -> style_renderer
+(** [style_renderer ppf] is the style renderer used by [ppf].  If
+    {!set_style_renderer} has never been called on [ppf] this is
+    [`None]. *)
+
+val set_style_renderer : Format.formatter -> style_renderer -> unit
+(** [set_style_renderer ppf r] sets the style renderer of [ppf] to [r].
+
+    @raise Invalid_argument if [ppf] is {!Format.str_formatter}: its
+    renderer is always [`None]. *)
+
+(** {1:stringconverters Converting with string value converters} *)
+
+val of_to_string : ('a -> string) -> 'a t
+(** [of_to_string f ppf v] is [string ppf (f v)]. *)
+
+val to_to_string : 'a t -> 'a -> string
+(** [to_to_string pp_v v] is [strf "%a" pp_v v]. *)
+
+(** {1:deprecated Deprecated} *)
+
+val strf : ('a, Format.formatter, unit, string) format4 -> 'a
+(** @deprecated use {!str} instead. *)
+
+val kstrf : (string -> 'a) -> ('b, Format.formatter, unit, 'a) format4 -> 'b
+(** @deprecated use {!kstr} instead. *)
+
+val strf_like :
+  Format.formatter -> ('a, Format.formatter, unit, string) format4 -> 'a
+(** @deprecated use {!str_like} instead. *)
+
+val always : (unit, Format.formatter, unit) Stdlib.format -> 'a t
+(** @deprecated use {!any} instead. *)
+
+val unit : (unit, Format.formatter, unit) Stdlib.format -> unit t
+(** @deprecated use {!any}. *)
+
+val prefix : unit t -> 'a t -> 'a t
+(** @deprecated use {!( ++ )}. *)
+
+val suffix : unit t -> 'a t -> 'a t
+(** @deprecated use {!( ++ )}. *)
+
+val styled_unit :
+  style -> (unit, Format.formatter, unit) Stdlib.format -> unit t
+(** @deprecated, use [styled s (any fmt)] instead *)
+
+(** {1:nameconv Naming conventions}
+
+    Given a type [ty] use:
+
+    {ul
+    {- [pp_ty] for a pretty printer that provides full control to the
+       client and does not wrap the formatted value in an enclosing
+       box. See {{!stdlib}these examples}.}
+    {- [pp_dump_ty] for a pretty printer that provides little control
+       over the pretty-printing process, wraps the rendering in an
+       enclosing box and tries as much as possible to respect the
+       OCaml syntax. These pretty-printers should make it easy to
+       inspect and understand values of the given type, they are
+       mainly used for quick printf debugging and/or toplevel interaction.
+       See {{!Dump.stdlib} these examples}.}}
+
+    If you are in a situation where making a difference between [dump_ty]
+    and [pp_ty] doesn't make sense then use [pp_ty].
+
+    For a type [ty] that is the main type of the module (the "[M.t]"
+    convention) drop the suffix, that is simply use [M.pp] and
+    [M.pp_dump]. *)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2014 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/fmt/src/fmt.mllib b/tools/ocaml/duniverse/fmt/src/fmt.mllib
new file mode 100644
index 0000000000..977dbb9876
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt.mllib
@@ -0,0 +1 @@
+Fmt
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_cli.ml b/tools/ocaml/duniverse/fmt/src/fmt_cli.ml
new file mode 100644
index 0000000000..0376806759
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_cli.ml
@@ -0,0 +1,32 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let strf = Format.asprintf
+
+open Cmdliner
+
+let style_renderer ?env ?docs () =
+  let enum = ["auto", None; "always", Some `Ansi_tty; "never", Some `None] in
+  let color = Arg.enum enum in
+  let enum_alts = Arg.doc_alts_enum enum in
+  let doc = strf "Colorize the output. $(docv) must be %s." enum_alts in
+  Arg.(value & opt color None & info ["color"] ?env ~doc ~docv:"WHEN" ?docs)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_cli.mli b/tools/ocaml/duniverse/fmt/src/fmt_cli.mli
new file mode 100644
index 0000000000..dcdd5d86aa
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_cli.mli
@@ -0,0 +1,45 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** {!Cmdliner} support for [Fmt]. *)
+
+(** {1 Option for setting the style renderer} *)
+
+val style_renderer : ?env:Cmdliner.Arg.env -> ?docs:string -> unit ->
+  Fmt.style_renderer option Cmdliner.Term.t
+(** [style_renderer ?env ?docs ()] is a {!Cmdliner} option [--color] that can
+    be directly used with the optional arguments of
+    {{!Fmt_tty.tty_setup}TTY setup} or to control
+    {{!Fmt.set_style_renderer}style rendering}.  The option is
+    documented under [docs] (defaults to the default in
+    {!Cmdliner.Arg.info}).
+
+    The option is a tri-state enumerated value that when used with
+    {{!Fmt_tty.tty_setup}TTY setup} takes over the automatic setup:
+    {ul
+    {- [--color=never], the value is [Some `None], forces no styling.}
+    {- [--color=always], the value is [Some `Ansi], forces ANSI styling.}
+    {- [--color=auto] or absent, the value is [None], automatic setup
+       takes place.}}
+
+    If [env] is provided, the option default value ([None]) can be
+    overridden by the corresponding environment variable. *)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_cli.mllib b/tools/ocaml/duniverse/fmt/src/fmt_cli.mllib
new file mode 100644
index 0000000000..6a0743e652
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_cli.mllib
@@ -0,0 +1 @@
+Fmt_cli
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_top.ml b/tools/ocaml/duniverse/fmt/src/fmt_top.ml
new file mode 100644
index 0000000000..7bcf4b2062
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_top.ml
@@ -0,0 +1,23 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let () = ignore (Toploop.use_file Format.err_formatter "fmt_tty_top_init.ml")
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_top.mllib b/tools/ocaml/duniverse/fmt/src/fmt_top.mllib
new file mode 100644
index 0000000000..49c6b94c54
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_top.mllib
@@ -0,0 +1 @@
+Fmt_top
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_tty.ml b/tools/ocaml/duniverse/fmt/src/fmt_tty.ml
new file mode 100644
index 0000000000..eb28007131
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_tty.ml
@@ -0,0 +1,78 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let is_infix ~affix s =
+  (* Damned, already missing astring, from which this is c&p *)
+  let len_a = String.length affix in
+  let len_s = String.length s in
+  if len_a > len_s then false else
+  let max_idx_a = len_a - 1 in
+  let max_idx_s = len_s - len_a in
+  let rec loop i k =
+    if i > max_idx_s then false else
+    if k > max_idx_a then true else
+    if k > 0 then
+      if String.get affix k = String.get s (i + k) then loop i (k + 1) else
+      loop (i + 1) 0
+    else if String.get affix 0 = String.get s i then loop i 1 else
+    loop (i + 1) 0
+  in
+  loop 0 0
+
+let setup ?style_renderer ?utf_8 oc =
+  let ppf =
+    if oc == Stdlib.stdout then Fmt.stdout else
+    if oc == Stdlib.stderr then Fmt.stderr else
+    Format.formatter_of_out_channel oc
+  in
+  let style_renderer = match style_renderer with
+  | Some r -> r
+  | None ->
+      let dumb =
+        try match Sys.getenv "TERM" with
+        | "dumb" | "" -> true
+        | _ -> false
+        with
+        Not_found -> true
+      in
+      let isatty = try Unix.(isatty (descr_of_out_channel oc)) with
+      | Unix.Unix_error _ -> false
+      in
+      if not dumb && isatty then `Ansi_tty else `None
+  in
+  let utf_8 = match utf_8 with
+  | Some b -> b
+  | None ->
+      let has_utf_8 var =
+        try is_infix "UTF-8" (String.uppercase_ascii (Sys.getenv var))
+        with Not_found -> false
+      in
+      has_utf_8 "LANG" || has_utf_8 "LC_ALL" || has_utf_8 "LC_CTYPE"
+  in
+  Fmt.set_style_renderer ppf style_renderer;
+  Fmt.set_utf_8 ppf utf_8;
+  ppf
+
+let setup_std_outputs ?style_renderer ?utf_8 () =
+  ignore (setup ?style_renderer ?utf_8 stdout);
+  ignore (setup ?style_renderer ?utf_8 stderr);
+  ()
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_tty.mli b/tools/ocaml/duniverse/fmt/src/fmt_tty.mli
new file mode 100644
index 0000000000..f894325e1d
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_tty.mli
@@ -0,0 +1,50 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(** [Fmt] TTY setup.
+
+    [Fmt_tty] provides simple automatic setup on channel formatters for:
+    {ul
+    {- {!Fmt.set_style_renderer}. [`Ansi_tty] is used if the channel
+       {{!Unix.isatty}is a tty} and the environment variable
+       [TERM] is defined and its value is not ["dumb"]. [`None] is
+       used otherwise.}
+    {- {!Fmt.set_utf_8}. [true] is used if one of the following
+       environment variables has ["UTF-8"] as a case insensitive
+       substring: [LANG], [LC_ALL], [LC_CTYPE].}} *)
+
+(** {1:tty_setup TTY setup} *)
+
+val setup : ?style_renderer:Fmt.style_renderer -> ?utf_8:bool ->
+  out_channel -> Format.formatter
+(** [setup ?style_renderer ?utf_8 outc] is a formatter for [outc] with
+    {!Fmt.set_style_renderer} and {!Fmt.set_utf_8} correctly setup. If
+    [style_renderer] or [utf_8] are specified they override the automatic
+    setup.
+
+    If [outc] is {!stdout}, {!Fmt.stdout} is returned. If [outc] is
+    {!stderr}, {!Fmt.stderr} is returned. *)
+
+val setup_std_outputs : ?style_renderer:Fmt.style_renderer -> ?utf_8:bool ->
+  unit -> unit
+(** [setup_std_outputs ?style_renderer ?utf_8 ()] applies {!setup}
+    on {!stdout} and {!stderr}. *)
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
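For reviewers unfamiliar with fmt, a minimal usage sketch of the interface above (not part of the patch; assumes the `fmt` and `fmt.tty` libraries are installed and linked):

```ocaml
let () =
  (* Auto-detects tty capability and a UTF-8 locale for stdout/stderr. *)
  Fmt_tty.setup_std_outputs ();
  (* Styling is a no-op when the renderer was detected as `None. *)
  Fmt.pr "%a: everything is fine@." Fmt.(styled `Green string) "status"
```

On a terminal this prints "status" in green; when output is redirected to a file, the ANSI escapes are automatically suppressed.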
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_tty.mllib b/tools/ocaml/duniverse/fmt/src/fmt_tty.mllib
new file mode 100644
index 0000000000..4e15d82115
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_tty.mllib
@@ -0,0 +1 @@
+Fmt_tty
diff --git a/tools/ocaml/duniverse/fmt/src/fmt_tty_top_init.ml b/tools/ocaml/duniverse/fmt/src/fmt_tty_top_init.ml
new file mode 100644
index 0000000000..3309166c5e
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/src/fmt_tty_top_init.ml
@@ -0,0 +1,23 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+let () = Fmt_tty.setup_std_outputs ()
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/fmt/test/test.ml b/tools/ocaml/duniverse/fmt/test/test.ml
new file mode 100644
index 0000000000..48476dffb7
--- /dev/null
+++ b/tools/ocaml/duniverse/fmt/test/test.ml
@@ -0,0 +1,322 @@
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers. All rights reserved.
+   Distributed under the ISC license, see terms at the end of the file.
+   %%NAME%% %%VERSION%%
+  ---------------------------------------------------------------------------*)
+
+(*
+let test_exn_backtrace () = (* Don't move this test in the file. *)
+  try failwith "Test" with
+  | ex ->
+      let bt = Printexc.get_raw_backtrace () in
+      let fmt = Fmt.strf "%a" Fmt.exn_backtrace (ex,bt) in
+      assert begin match Printexc.backtrace_status () with
+      | false -> fmt = "Exception: Failure(\"Test\")\nNo backtrace available."
+      | true ->
+          fmt = "Exception: Failure(\"Test\")\n\
+                 Raised at file \"pervasives.ml\", line 32, characters 22-33\n\
+                 Called from file \"test/test.ml\", line 8, characters 6-21"
+      end
+*)
+
+let test_dump_uchar () =
+ let str u = Format.asprintf "%a" Fmt.Dump.uchar u in
+ assert (str Uchar.min = "U+0000");
+ assert (str Uchar.(succ min) = "U+0001");
+ assert (str Uchar.(of_int 0xFFFF) = "U+FFFF");
+ assert (str Uchar.(succ (of_int 0xFFFF)) = "U+10000");
+ assert (str Uchar.(pred max) = "U+10FFFE");
+ assert (str Uchar.max = "U+10FFFF");
+ ()
+
+let test_utf_8 () =
+  let ppf = Format.formatter_of_buffer (Buffer.create 23) in
+  assert (Fmt.utf_8 ppf = true);
+  Fmt.set_utf_8 ppf false;
+  assert (Fmt.utf_8 ppf = false);
+  Fmt.set_utf_8 ppf true;
+  assert (Fmt.utf_8 ppf = true);
+  ()
+
+let test_style_renderer () =
+  let ppf = Format.formatter_of_buffer (Buffer.create 23) in
+  assert (Fmt.style_renderer ppf = `None);
+  Fmt.set_style_renderer ppf `Ansi_tty;
+  assert (Fmt.style_renderer ppf = `Ansi_tty);
+  Fmt.set_style_renderer ppf `None;
+  assert (Fmt.style_renderer ppf = `None);
+  ()
+
+let test_exn_typechecks () =
+  let (_ : bool) = true || Fmt.failwith "%s" "" in
+  let (_ : bool) = true || Fmt.invalid_arg "%s" "" in
+  ()
+
+let test_kstr_str_like_partial_app () =
+  let assertf f = assert (f "X" = f "X") in
+  let test_kstrf fmt = Fmt.kstr (fun x -> x) fmt in
+  let test_strf_like fmt = Fmt.str_like Fmt.stderr fmt in
+  assertf (test_strf_like "%s");
+  assertf (test_kstrf "%s");
+  ()
+
+
+let test_byte_size () =
+  let size s = Fmt.str "%a" Fmt.byte_size s in
+  assert (size 0 = "0B");
+  assert (size 999 = "999B");
+  assert (size 1000 = "1kB");
+  assert (size 1001 = "1.01kB");
+  assert (size 1010 = "1.01kB");
+  assert (size 1011 = "1.02kB");
+  assert (size 1020 = "1.02kB");
+  assert (size 1100 = "1.1kB");
+  assert (size 1101 = "1.11kB");
+  assert (size 1109 = "1.11kB");
+  assert (size 1111 = "1.12kB");
+  assert (size 1119 = "1.12kB");
+  assert (size 1120 = "1.12kB");
+  assert (size 1121 = "1.13kB");
+  assert (size 9990 = "9.99kB");
+  assert (size 9991 = "10kB");
+  assert (size 9999 = "10kB");
+  assert (size 10_000 = "10kB");
+  assert (size 10_001 = "10.1kB");
+  assert (size 10_002 = "10.1kB");
+  assert (size 10_099 = "10.1kB");
+  assert (size 10_100 = "10.1kB");
+  assert (size 10_100 = "10.1kB");
+  assert (size 10_101 = "10.2kB");
+  assert (size 10_199 = "10.2kB");
+  assert (size 10_199 = "10.2kB");
+  assert (size 10_200 = "10.2kB");
+  assert (size 10_201 = "10.3kB");
+  assert (size 99_901 = "100kB");
+  assert (size 99_999 = "100kB");
+  assert (size 100_000 = "100kB");
+  assert (size 100_001 = "101kB");
+  assert (size 100_999 = "101kB");
+  assert (size 101_000 = "101kB");
+  assert (size 101_001 = "102kB");
+  assert (size 999_000 = "999kB");
+  assert (size 999_001 = "1MB");
+  assert (size 999_999 = "1MB");
+  assert (size 1_000_000 = "1MB");
+  assert (size 1_000_001 = "1.01MB");
+  assert (size 1_009_999 = "1.01MB");
+  assert (size 1_010_000 = "1.01MB");
+  assert (size 1_010_001 = "1.02MB");
+  assert (size 1_019_999 = "1.02MB");
+  assert (size 1_020_000 = "1.02MB");
+  assert (size 1_020_001 = "1.03MB");
+  assert (size 1_990_000 = "1.99MB");
+  assert (size 1_990_001 = "2MB");
+  assert (size 1_999_999 = "2MB");
+  assert (size 2_000_000 = "2MB");
+  assert (size 9_990_000 = "9.99MB");
+  assert (size 9_990_001 = "10MB");
+  assert (size 9_990_999 = "10MB");
+  assert (size 10_000_000 = "10MB");
+  assert (size 10_000_001 = "10.1MB");
+  assert (size 10_099_999 = "10.1MB");
+  assert (size 10_100_000 = "10.1MB");
+  assert (size 10_900_001 = "11MB");
+  assert (size 10_999_999 = "11MB");
+  assert (size 11_000_000 = "11MB");
+  assert (size 11_000_001 = "11.1MB");
+  assert (size 99_900_000 = "99.9MB");
+  assert (size 99_900_001 = "100MB");
+  assert (size 99_999_999 = "100MB");
+  assert (size 100_000_000 = "100MB");
+  assert (size 100_000_001 = "101MB");
+  assert (size 100_999_999 = "101MB");
+  assert (size 101_000_000 = "101MB");
+  assert (size 101_000_000 = "101MB");
+  assert (size 999_000_000 = "999MB");
+  assert (size 999_000_001 = "1GB");
+  assert (size 999_999_999 = "1GB");
+  assert (size 1_000_000_000 = "1GB");
+  assert (size 1_000_000_001 = "1.01GB");
+  assert (size 1_000_000_001 = "1.01GB");
+  ()
+
+let test_uint64_ns_span () =
+  let span s = Fmt.str "%a" Fmt.uint64_ns_span (Int64.of_string s) in
+  assert (span "0u0" = "0ns");
+  assert (span "0u999" = "999ns");
+  assert (span "0u1_000" = "1us");
+  assert (span "0u1_001" = "1.01us");
+  assert (span "0u1_009" = "1.01us");
+  assert (span "0u1_010" = "1.01us");
+  assert (span "0u1_011" = "1.02us");
+  assert (span "0u1_090" = "1.09us");
+  assert (span "0u1_091" = "1.1us");
+  assert (span "0u1_100" = "1.1us");
+  assert (span "0u1_101" = "1.11us");
+  assert (span "0u1_109" = "1.11us");
+  assert (span "0u1_110" = "1.11us");
+  assert (span "0u1_111" = "1.12us");
+  assert (span "0u1_990" = "1.99us");
+  assert (span "0u1_991" = "2us");
+  assert (span "0u1_999" = "2us");
+  assert (span "0u2_000" = "2us");
+  assert (span "0u2_001" = "2.01us");
+  assert (span "0u9_990" = "9.99us");
+  assert (span "0u9_991" = "10us");
+  assert (span "0u9_999" = "10us");
+  assert (span "0u10_000" = "10us");
+  assert (span "0u10_001" = "10.1us");
+  assert (span "0u10_099" = "10.1us");
+  assert (span "0u10_100" = "10.1us");
+  assert (span "0u10_101" = "10.2us");
+  assert (span "0u10_900" = "10.9us");
+  assert (span "0u10_901" = "11us");
+  assert (span "0u10_999" = "11us");
+  assert (span "0u11_000" = "11us");
+  assert (span "0u11_001" = "11.1us");
+  assert (span "0u11_099" = "11.1us");
+  assert (span "0u11_100" = "11.1us");
+  assert (span "0u11_101" = "11.2us");
+  assert (span "0u99_900" = "99.9us");
+  assert (span "0u99_901" = "100us");
+  assert (span "0u99_999" = "100us");
+  assert (span "0u100_000" = "100us");
+  assert (span "0u100_001" = "101us");
+  assert (span "0u100_999" = "101us");
+  assert (span "0u101_000" = "101us");
+  assert (span "0u101_001" = "102us");
+  assert (span "0u101_999" = "102us");
+  assert (span "0u102_000" = "102us");
+  assert (span "0u999_000" = "999us");
+  assert (span "0u999_001" = "1ms");
+  assert (span "0u999_001" = "1ms");
+  assert (span "0u999_999" = "1ms");
+  assert (span "0u1_000_000" = "1ms");
+  assert (span "0u1_000_001" = "1.01ms");
+  assert (span "0u1_009_999" = "1.01ms");
+  assert (span "0u1_010_000" = "1.01ms");
+  assert (span "0u1_010_001" = "1.02ms");
+  assert (span "0u9_990_000" = "9.99ms");
+  assert (span "0u9_990_001" = "10ms");
+  assert (span "0u9_999_999" = "10ms");
+  assert (span "0u10_000_000" = "10ms");
+  assert (span "0u10_000_001" = "10.1ms");
+  assert (span "0u10_000_001" = "10.1ms");
+  assert (span "0u10_099_999" = "10.1ms");
+  assert (span "0u10_100_000" = "10.1ms");
+  assert (span "0u10_100_001" = "10.2ms");
+  assert (span "0u99_900_000" = "99.9ms");
+  assert (span "0u99_900_001" = "100ms");
+  assert (span "0u99_999_999" = "100ms");
+  assert (span "0u100_000_000" = "100ms");
+  assert (span "0u100_000_001" = "101ms");
+  assert (span "0u100_999_999" = "101ms");
+  assert (span "0u101_000_000" = "101ms");
+  assert (span "0u101_000_001" = "102ms");
+  assert (span "0u999_000_000" = "999ms");
+  assert (span "0u999_000_001" = "1s");
+  assert (span "0u999_999_999" = "1s");
+  assert (span "0u1_000_000_000" = "1s");
+  assert (span "0u1_000_000_001" = "1.01s");
+  assert (span "0u1_009_999_999" = "1.01s");
+  assert (span "0u1_010_000_000" = "1.01s");
+  assert (span "0u1_010_000_001" = "1.02s");
+  assert (span "0u1_990_000_000" = "1.99s");
+  assert (span "0u1_990_000_001" = "2s");
+  assert (span "0u1_999_999_999" = "2s");
+  assert (span "0u2_000_000_000" = "2s");
+  assert (span "0u2_000_000_001" = "2.01s");
+  assert (span "0u9_990_000_000" = "9.99s");
+  assert (span "0u9_999_999_999" = "10s");
+  assert (span "0u10_000_000_000" = "10s");
+  assert (span "0u10_000_000_001" = "10.1s");
+  assert (span "0u10_099_999_999" = "10.1s");
+  assert (span "0u10_100_000_000" = "10.1s");
+  assert (span "0u10_100_000_001" = "10.2s");
+  assert (span "0u59_900_000_000" = "59.9s");
+  assert (span "0u59_900_000_001" = "1min");
+  assert (span "0u59_999_999_999" = "1min");
+  assert (span "0u60_000_000_000" = "1min");
+  assert (span "0u60_000_000_001" = "1min1s");
+  assert (span "0u60_999_999_999" = "1min1s");
+  assert (span "0u61_000_000_000" = "1min1s");
+  assert (span "0u61_000_000_001" = "1min2s");
+  assert (span "0u119_000_000_000" = "1min59s");
+  assert (span "0u119_000_000_001" = "2min");
+  assert (span "0u119_999_999_999" = "2min");
+  assert (span "0u120_000_000_000" = "2min");
+  assert (span "0u120_000_000_001" = "2min1s");
+  assert (span "0u3599_000_000_000" = "59min59s");
+  assert (span "0u3599_000_000_001" = "1h");
+  assert (span "0u3599_999_999_999" = "1h");
+  assert (span "0u3600_000_000_000" = "1h");
+  assert (span "0u3600_000_000_001" = "1h1min");
+  assert (span "0u3659_000_000_000" = "1h1min");
+  assert (span "0u3659_000_000_001" = "1h1min");
+  assert (span "0u3659_999_999_999" = "1h1min");
+  assert (span "0u3660_000_000_000" = "1h1min");
+  assert (span "0u3660_000_000_001" = "1h2min");
+  assert (span "0u3660_000_000_001" = "1h2min");
+  assert (span "0u3660_000_000_001" = "1h2min");
+  assert (span "0u3720_000_000_000" = "1h2min");
+  assert (span "0u3720_000_000_001" = "1h3min");
+  assert (span "0u7140_000_000_000" = "1h59min");
+  assert (span "0u7140_000_000_001" = "2h");
+  assert (span "0u7199_999_999_999" = "2h");
+  assert (span "0u7200_000_000_000" = "2h");
+  assert (span "0u7200_000_000_001" = "2h1min");
+  assert (span "0u86340_000_000_000" = "23h59min");
+  assert (span "0u86340_000_000_001" = "1d");
+  assert (span "0u86400_000_000_000" = "1d");
+  assert (span "0u86400_000_000_001" = "1d1h");
+  assert (span "0u89999_999_999_999" = "1d1h");
+  assert (span "0u90000_000_000_000" = "1d1h");
+  assert (span "0u90000_000_000_001" = "1d2h");
+  assert (span "0u169200_000_000_000" = "1d23h");
+  assert (span "0u169200_000_000_001" = "2d");
+  assert (span "0u169200_000_000_001" = "2d");
+  assert (span "0u172799_999_999_999" = "2d");
+  assert (span "0u172800_000_000_000" = "2d");
+  assert (span "0u172800_000_000_001" = "2d1h");
+  assert (span "0u31536000_000_000_000" = "365d");
+  assert (span "0u31554000_000_000_000" = "365d5h");
+  assert (
+    (* Technically this should round to a year, but it gets rendered as shown.
+       I don't think it matters; it's not inaccurate per se. *)
+    span "0u31554000_000_000_001" = "365d6h");
+  assert (span "0u31557600_000_000_000" = "1a");
+  assert (span "0u31557600_000_000_001" = "1a1d");
+  assert (span "0u63028800_000_000_000" = "1a365d");
+  assert (span "0u63093600_000_000_000" = "1a365d");
+  assert (span "0u63093600_000_000_001" = "2a");
+  assert (span "0u63115200_000_000_000" = "2a");
+  assert (span "0u63115200_000_000_001" = "2a1d");
+  ()
+
+let tests () =
+  test_dump_uchar ();
+  test_utf_8 ();
+  test_style_renderer ();
+  test_kstr_str_like_partial_app ();
+  test_byte_size ();
+  test_uint64_ns_span ();
+  Printf.printf "Done.\n";
+  ()
+
+let () = tests ()
+
+(*---------------------------------------------------------------------------
+   Copyright (c) 2015 The fmt programmers
+
+   Permission to use, copy, modify, and/or distribute this software for any
+   purpose with or without fee is hereby granted, provided that the above
+   copyright notice and this permission notice appear in all copies.
+
+   THE SOFTWARE IS PROVIDED "AS IS" AND THE AUTHOR DISCLAIMS ALL WARRANTIES
+   WITH REGARD TO THIS SOFTWARE INCLUDING ALL IMPLIED WARRANTIES OF
+   MERCHANTABILITY AND FITNESS. IN NO EVENT SHALL THE AUTHOR BE LIABLE FOR
+   ANY SPECIAL, DIRECT, INDIRECT, OR CONSEQUENTIAL DAMAGES OR ANY DAMAGES
+   WHATSOEVER RESULTING FROM LOSS OF USE, DATA OR PROFITS, WHETHER IN AN
+   ACTION OF CONTRACT, NEGLIGENCE OR OTHER TORTIOUS ACTION, ARISING OUT OF
+   OR IN CONNECTION WITH THE USE OR PERFORMANCE OF THIS SOFTWARE.
+  ---------------------------------------------------------------------------*)
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/.gitignore b/tools/ocaml/duniverse/ocaml-afl-persistent/.gitignore
new file mode 100644
index 0000000000..655e32b07c
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/.gitignore
@@ -0,0 +1,2 @@
+_build
+.merlin
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/CHANGES.md b/tools/ocaml/duniverse/ocaml-afl-persistent/CHANGES.md
new file mode 100644
index 0000000000..da38d286bc
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/CHANGES.md
@@ -0,0 +1,22 @@
+v1.3 (13th Nov 2018)
+---------------------
+
+Uses /bin/sh instead of /bin/bash to fix install problems
+
+v1.2 (22nd May 2017)
+---------------------
+
+Allow installation on non-AFL switches.
+(Doesn't do much, but lets you use Crowbar in quickcheck mode)
+
+
+v1.1 (12th January 2017)
+---------------------
+
+Improved stability of instrumentation output
+
+
+v1.0 (6th December 2016)
+---------------------
+
+Initial release
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/LICENSE.md b/tools/ocaml/duniverse/ocaml-afl-persistent/LICENSE.md
new file mode 100644
index 0000000000..89cbf71481
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/LICENSE.md
@@ -0,0 +1,8 @@
+Copyright (c) 2016 Stephen Dolan
+
+Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/README.md b/tools/ocaml/duniverse/ocaml-afl-persistent/README.md
new file mode 100644
index 0000000000..2ed9916c9f
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/README.md
@@ -0,0 +1,17 @@
+# afl-persistent - persistent-mode afl-fuzz for ocaml
+
+by using `AflPersistent.run`, you can fuzz things really fast:
+
+```ocaml
+let f () =
+  let s = read_line () in
+  match Array.to_list (Array.init (String.length s) (String.get s)) with
+    ['s'; 'e'; 'c'; 'r'; 'e'; 't'; ' '; 'c'; 'o'; 'd'; 'e'] -> failwith "uh oh"
+  | _ -> ()
+
+let _ = AflPersistent.run f
+```
+
+compile with a version of ocaml that supports afl (that means trunk
+for now, but the next release, 4.05, will work too), and pass the
+`-afl-instrument` option to ocamlopt.
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam b/tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam
new file mode 100644
index 0000000000..12aedccab5
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam
@@ -0,0 +1,49 @@
+# This file is generated by dune, edit dune-project instead
+version: "1.3"
+synopsis: "Use afl-fuzz in persistent mode"
+description: """
+afl-fuzz normally works by repeatedly fork()ing the program being
+tested. using this package, you can run afl-fuzz in 'persistent mode',
+which avoids repeated forking and is much faster.
+"""
+maintainer: ["stephen.dolan@cl.cam.ac.uk"]
+authors: ["Stephen Dolan"]
+license: "MIT"
+homepage: "https://github.com/stedolan/ocaml-afl-persistent"
+bug-reports: "https://github.com/stedolan/ocaml-afl-persistent/issues"
+depends: [
+  "dune" {>= "2.0"}
+  "ocaml" {>= "4.00"}
+  "base-unix"
+]
+build: [
+  ["dune" "subst"] {pinned}
+  [
+    "dune"
+    "build"
+    "-p"
+    name
+    "-j"
+    jobs
+    "@install"
+    "@runtest" {with-test}
+    "@doc" {with-doc}
+  ]
+]
+dev-repo: "git+https://github.com/stedolan/ocaml-afl-persistent.git"
+opam-version: "2.0"
+post-messages: [
+"afl-persistent is installed, but since AFL instrumentation is not available
+with this OCaml compiler, instrumented fuzzing with afl-fuzz won't work.
+
+To use instrumented fuzzing, switch to an OCaml version supporting AFL, such
+as 4.07.1+afl." {success & !afl-available}
+
+"afl-persistent is installed, but since the current OCaml compiler does
+not enable AFL instrumentation by default, most packages will not be
+instrumented and fuzzing with afl-fuzz may not be effective.
+
+To globally enable AFL instrumentation, use an OCaml switch such as
+4.07.1+afl." {success & afl-available & !afl-always}
+]
+
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam.template b/tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam.template
new file mode 100644
index 0000000000..a9787ebc62
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/afl-persistent.opam.template
@@ -0,0 +1,16 @@
+opam-version: "2.0"
+post-messages: [
+"afl-persistent is installed, but since AFL instrumentation is not available
+with this OCaml compiler, instrumented fuzzing with afl-fuzz won't work.
+
+To use instrumented fuzzing, switch to an OCaml version supporting AFL, such
+as 4.07.1+afl." {success & !afl-available}
+
+"afl-persistent is installed, but since the current OCaml compiler does
+not enable AFL instrumentation by default, most packages will not be
+instrumented and fuzzing with afl-fuzz may not be effective.
+
+To globally enable AFL instrumentation, use an OCaml switch such as
+4.07.1+afl." {success & afl-available & !afl-always}
+]
+
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.available.ml b/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.available.ml
new file mode 100644
index 0000000000..351e16c2a6
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.available.ml
@@ -0,0 +1,21 @@
+external reset_instrumentation : bool -> unit = "caml_reset_afl_instrumentation"
+external sys_exit : int -> 'a = "caml_sys_exit"
+
+let run f =
+  let _ = try ignore (Sys.getenv "##SIG_AFL_PERSISTENT##") with Not_found -> () in
+  let persist = match Sys.getenv "__AFL_PERSISTENT" with
+    | _ -> true
+    | exception Not_found -> false in
+  let pid = Unix.getpid () in
+  if persist then begin
+    reset_instrumentation true;
+    for _ = 1 to 1000 do
+      f ();
+      Unix.kill pid Sys.sigstop;
+      reset_instrumentation false
+    done;
+    f ();
+    sys_exit 0;
+  end else
+    f ()
+
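A reviewer's sketch of how a fuzz target drives the loop above (hypothetical target, not part of the patch; assumes the `afl-persistent` library is linked):

```ocaml
let () =
  AflPersistent.run (fun () ->
    (* afl-fuzz feeds each iteration a fresh input on stdin;
       the loop above stops the process with SIGSTOP between
       iterations so afl-fuzz can resume it without re-forking. *)
    let input = try read_line () with End_of_file -> "" in
    (* An uncaught exception is what afl-fuzz reports as a crash. *)
    if String.length input > 5 && input.[0] = 'b' then failwith "bug")
```

When the `__AFL_PERSISTENT` environment variable is absent (e.g. a plain test run), `run` degrades to calling the function exactly once.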
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.mli b/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.mli
new file mode 100644
index 0000000000..f446ddd605
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.mli
@@ -0,0 +1 @@
+val run : (unit -> unit) -> unit
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.stub.ml b/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.stub.ml
new file mode 100644
index 0000000000..2fd679dc1f
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/aflPersistent.stub.ml
@@ -0,0 +1 @@
+let run f = f ()
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/detect.sh b/tools/ocaml/duniverse/ocaml-afl-persistent/detect.sh
new file mode 100755
index 0000000000..45a427bda2
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/detect.sh
@@ -0,0 +1,43 @@
+#!/bin/sh
+
+set -e
+set -x
+
+ocamlc='ocamlc -g -bin-annot'
+ocamlopt='ocamlopt -g -bin-annot'
+
+echo 'print_string "hello"' > afl_check.ml
+
+if ocamlopt -dcmm -c afl_check.ml 2>&1 | grep -q caml_afl; then
+    afl_always=true
+else
+    afl_always=false
+fi
+
+if [ "$(ocamlopt -afl-instrument afl_check.ml -o test 2>/dev/null && ./test)" = "hello" ]; then
+    ocamlopt="$ocamlopt -afl-inst-ratio 0"
+    afl_available=true
+elif [ "$(ocamlopt -version)" = "4.04.0+afl" ]; then
+    # hack for the backported 4.04+afl branch
+    export AFL_INST_RATIO=0
+    afl_available=true
+else
+    afl_available=false
+fi
+
+cat > afl-persistent.config <<EOF
+opam-version: "1.2"
+afl-available: $afl_available
+afl-always: $afl_always
+EOF
+
+if [ $afl_available = true ]; then
+    cp "$1" aflPersistent.ml
+else
+    cp "$2" aflPersistent.ml
+fi
+exit 0
+# test
+cp ../test.ml .
+ocamlc unix.cma afl-persistent.cma test.ml -o test && ./test
+ocamlopt unix.cmxa afl-persistent.cmxa test.ml -o test && ./test
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/dune b/tools/ocaml/duniverse/ocaml-afl-persistent/dune
new file mode 100644
index 0000000000..ce3be53ec1
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/dune
@@ -0,0 +1,20 @@
+(rule
+ (targets afl-persistent.config aflPersistent.ml)
+ (deps
+  detect.sh
+  (:aflyes aflPersistent.available.ml)
+  (:aflno aflPersistent.stub.ml))
+ (action
+  (run sh ./detect.sh %{aflyes} %{aflno})))
+
+(library
+ (wrapped false)
+ (public_name afl-persistent)
+ (name afl_persistent)
+ (modules aflPersistent)
+ (libraries unix))
+
+(test
+ (name test)
+ (modules test)
+ (libraries afl_persistent))
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/dune-project b/tools/ocaml/duniverse/ocaml-afl-persistent/dune-project
new file mode 100644
index 0000000000..2d79209635
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/dune-project
@@ -0,0 +1,23 @@
+(lang dune 2.0)
+(name afl-persistent)
+; version field is optional
+(version 1.3)
+
+(generate_opam_files true)
+
+(maintainers "stephen.dolan@cl.cam.ac.uk")
+(source (github stedolan/ocaml-afl-persistent))
+(license MIT)
+(authors "Stephen Dolan")
+
+(package
+ (name afl-persistent)
+ (synopsis "Use afl-fuzz in persistent mode")
+ (description
+"\| afl-fuzz normally works by repeatedly fork()ing the program being
+"\| tested. using this package, you can run afl-fuzz in 'persistent mode',
+"\| which avoids repeated forking and is much faster.
+)
+ (depends
+  (ocaml (>= 4.00))
+  base-unix))
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/test.ml b/tools/ocaml/duniverse/ocaml-afl-persistent/test.ml
new file mode 100644
index 0000000000..1d12af1877
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/test.ml
@@ -0,0 +1,3 @@
+let () =
+  AflPersistent.run (fun () -> exit 0);
+  failwith "AflPersistent.run failed"
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/test/harness.ml b/tools/ocaml/duniverse/ocaml-afl-persistent/test/harness.ml
new file mode 100644
index 0000000000..dbcbebf0b1
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/test/harness.ml
@@ -0,0 +1,22 @@
+external reset_instrumentation : bool -> unit = "caml_reset_afl_instrumentation"
+external sys_exit : int -> 'a = "caml_sys_exit"
+
+let name n =
+  fst (Test.tests.(int_of_string n - 1))
+let run n =
+  snd (Test.tests.(int_of_string n - 1)) ()
+
+let orig_random = Random.get_state ()
+
+let () =
+  (* Random.set_state orig_random; *)
+  reset_instrumentation true;
+  begin
+    match Sys.argv with
+    | [| _; "len" |] -> print_int (Array.length Test.tests); print_newline (); flush stdout
+    | [| _; "name"; n |] -> print_string (name n); flush stdout
+    | [| _; "1"; n |] -> run n
+    | [| _; "2"; n |] -> run n; (* Random.set_state orig_random;  *)reset_instrumentation false; run n
+    | _ -> failwith "error"
+  end;
+  sys_exit 0
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/test/test.ml b/tools/ocaml/duniverse/ocaml-afl-persistent/test/test.ml
new file mode 100644
index 0000000000..83c1fc00fe
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/test/test.ml
@@ -0,0 +1,73 @@
+let opaque = Sys.opaque_identity
+
+let lists n =
+  let l = opaque [n; n; n] in
+  match List.rev l with
+  | [a; b; c] when a = n && b = n && c = n -> ()
+  | _ -> assert false
+
+let fresh_exception x =
+  opaque @@
+    let module M = struct
+        exception E of int
+        let throw () = raise (E x)
+      end in
+    try
+      M.throw ()
+    with
+      M.E n -> assert (n = x)
+
+let obj_with_closure x =
+  opaque (object method foo = x end)
+
+let r = ref 42
+let state () =
+  incr r;
+  if !r > 43 then print_string "woo" else ()
+
+let classes (x : int) =
+  opaque @@
+    let module M = struct
+        class a = object
+          method foo = x
+        end
+        class c = object
+          inherit a
+        end
+      end in
+    let o = new M.c in
+    assert (o#foo = x)
+
+
+class c_global = object
+  method foo = 42
+end
+let obj_ordering () = opaque @@
+  (* Object IDs change, but should be in the same relative order *)
+  let a = new c_global in
+  let b = new c_global in
+  if a < b then print_string "a" else print_string "b"
+
+let random () = opaque @@
+  (* as long as there's no self_init, this should be deterministic *)
+  if Random.int 100 < 50 then print_string "a" else print_string "b";
+  if Random.int 100 < 50 then print_string "a" else print_string "b";
+  if Random.int 100 < 50 then print_string "a" else print_string "b";
+  if Random.int 100 < 50 then print_string "a" else print_string "b";
+  if Random.int 100 < 50 then print_string "a" else print_string "b";
+  if Random.int 100 < 50 then print_string "a" else print_string "b";
+  if Random.int 100 < 50 then print_string "a" else print_string "b";
+  if Random.int 100 < 50 then print_string "a" else print_string "b";
+  if Random.int 100 < 50 then print_string "a" else print_string "b"
+
+let tests =
+  [| ("lists", fun () -> lists 42);
+     ("manylists", fun () -> for _ = 1 to 10 do lists 42 done);
+     ("exceptions", fun () -> fresh_exception 100);
+     ("objects", fun () -> ignore (obj_with_closure 42));
+     (* ("state", state); *) (* this one should fail *)
+     ("classes", fun () -> classes 42);
+     ("obj_ordering", obj_ordering);
+     (* ("random", random); *)
+  |]
+  
diff --git a/tools/ocaml/duniverse/ocaml-afl-persistent/test/test.sh b/tools/ocaml/duniverse/ocaml-afl-persistent/test/test.sh
new file mode 100755
index 0000000000..32044dffa7
--- /dev/null
+++ b/tools/ocaml/duniverse/ocaml-afl-persistent/test/test.sh
@@ -0,0 +1,33 @@
+#!/bin/bash
+
+set -e
+
+ocamlopt -c -afl-instrument test.ml
+ocamlopt -afl-inst-ratio 0 test.cmx harness.ml -o test
+
+NTESTS=`./test len`
+failures=''
+echo "running $NTESTS tests..."
+for t in `seq 1 $NTESTS`; do
+  printf "%14s: " `./test name $t`
+  # when run twice, the instrumentation output should double
+  afl-showmap -q -o output-1 -- ./test 1 $t
+  afl-showmap -q -o output-2 -- ./test 2 $t
+  # see afl-showmap.c for what the numbers mean
+  cat output-1 | sed '
+    s/:6/:7/; s/:5/:6/;
+    s/:4/:5/; s/:3/:4/;
+    s/:2/:4/; s/:1/:2/;
+  ' > output-2-predicted
+  if cmp -s output-2-predicted output-2; then
+    echo "passed."
+  else
+    echo "failed:"
+    paste output-2 output-1
+    failures=1
+  fi
+done
+
+if [ -z "$failures" ]; then echo "all tests passed"; fi
+
+rm -f {test,harness}.{cmi,cmx,o} test output-{1,2,2-predicted}
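For reference, the sed table above can be derived independently from AFL's hit-count bucketing. The sketch below (not part of the patch) assumes afl-showmap's human-readable bucket indices, where raw counts 1, 2, 3, 4-7, 8-15, 16-31, 32-127 and 128+ map to buckets 1-8 (see the count classification tables in afl-showmap.c):

```shell
#!/bin/sh
# Bucket a raw hit count into afl-showmap's human-readable index
# (assumed from afl-showmap.c's count classification).
bucket() {
  if   [ "$1" -le 3 ];   then echo "$1"
  elif [ "$1" -le 7 ];   then echo 4
  elif [ "$1" -le 15 ];  then echo 5
  elif [ "$1" -le 31 ];  then echo 6
  elif [ "$1" -le 127 ]; then echo 7
  else                        echo 8
  fi
}

# Doubling any count moves bucket 1->2, 2->4, 3->4, 4->5, 5->6, 6->7,
# which is exactly the substitution table used by test.sh above.
for n in 1 2 3 4 8 16; do
  echo "count $n: bucket $(bucket "$n") -> doubled: bucket $(bucket $((n * 2)))"
done
```

Note that counts of 64-127 (bucket 7) double into bucket 8, which the sed table does not predict; presumably the tests never hit an edge that often.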
diff --git a/tools/ocaml/duniverse/ocplib-endian/.gitignore b/tools/ocaml/duniverse/ocplib-endian/.gitignore
new file mode 100644
index 0000000000..f06221ceba
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/.gitignore
@@ -0,0 +1,3 @@
+_build/
+.merlin
+*.install
diff --git a/tools/ocaml/duniverse/ocplib-endian/.travis.yml b/tools/ocaml/duniverse/ocplib-endian/.travis.yml
new file mode 100644
index 0000000000..0e2ae0b572
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/.travis.yml
@@ -0,0 +1,20 @@
+language: c
+sudo: required
+install: wget https://raw.githubusercontent.com/ocaml/ocaml-ci-scripts/master/.travis-opam.sh
+script: bash -ex .travis-opam.sh
+env:
+  global:
+    - PACKAGE=ocplib-endian
+    - TESTS=true
+  matrix:
+    - OCAML_VERSION=4.09
+    - OCAML_VERSION=4.08
+    - OCAML_VERSION=4.07
+    - OCAML_VERSION=4.06
+    - OCAML_VERSION=4.05
+    - OCAML_VERSION=4.04
+    - OCAML_VERSION=4.03
+    - OCAML_VERSION=4.02
+os:
+  - linux
+  - osx
diff --git a/tools/ocaml/duniverse/ocplib-endian/CHANGES.md b/tools/ocaml/duniverse/ocplib-endian/CHANGES.md
new file mode 100644
index 0000000000..bedaa36b83
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/CHANGES.md
@@ -0,0 +1,55 @@
+1.1
+---
+
+* Add OPAM support for building the documentation
+* Use the correct bytes_set primitive for OCaml >= 4.07.0
+  (issue #21 fixed in #22 @hhugo)
+* Fix tests on big endian architectures
+  (issue #20 reported by @TC01 and @olafhering)
+* Fix documentation typo (@bobot)
+* Change cppo to a build dependency (@TheLortex)
+* Port to Dune from jbuilder (@avsm)
+* Upgrade opam metadata to 2.0 format (@avsm)
+* Remove code for OCaml <4.01 support, as the minimum
+  supported version is now OCaml 4.02+ (@avsm)
+* Build with jbuilder (unreleased, superseded by dune)
+
+1.0
+---------------
+
+* Install generated .mli files
+* Build documentation
+* Fix README links
+
+0.8
+---------------
+
+* Replace optcomp with cppo, removing hard dependency on camlp4.
+
+0.7
+---------------
+
+* Fix dependencies.
+
+0.6
+---------------
+
+* Port to OCaml 4.02 -safe-string: Add an EndianBytes module.
+* Add unoptimized get_float, get_double, set_float and set_double to every module.
+* Add a native endian version of interfaces.
+
+0.5
+---------------
+
+* Fix to avoid problems with integers outside of the range [0; 255] with set_int8.
+* Add travis CI files.
+
+0.4
+---------------
+
+* Fix ocamlfind dependency on optcomp
+
+0.3
+---------------
+
+First release.
diff --git a/tools/ocaml/duniverse/ocplib-endian/COPYING.txt b/tools/ocaml/duniverse/ocplib-endian/COPYING.txt
new file mode 100644
index 0000000000..55831aa883
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/COPYING.txt
@@ -0,0 +1,521 @@
+
+As a special exception to the GNU Library General Public License, you may link,
+statically or dynamically, a "work that uses the Library" with a publicly
+distributed version of the Library to produce an executable file containing
+portions of the Library, and distribute that executable file under terms of
+your choice, without any of the additional requirements listed in clause 6 of
+the GNU Library General Public License.  By "a publicly distributed version of
+the Library", we mean either the unmodified Library as distributed by upstream
+author, or a modified version of the Library that is distributed under the
+conditions defined in clause 3 of the GNU Library General Public License.  This
+exception does not however invalidate any other reasons why the executable file
+might be covered by the GNU Library General Public License.
+
+-----------------------------------------------------------------------
+                  GNU LESSER GENERAL PUBLIC LICENSE
+                       Version 2.1, February 1999
+
+ Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+     59 Temple Place, Suite 330, Boston, MA  02111-1307  USA
+ Everyone is permitted to copy and distribute verbatim copies
+ of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL.  It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+                            Preamble
+
+  The licenses for most software are designed to take away your
+freedom to share and change it.  By contrast, the GNU General Public
+Licenses are intended to guarantee your freedom to share and change
+free software--to make sure the software is free for all its users.
+
+  This license, the Lesser General Public License, applies to some
+specially designated software packages--typically libraries--of the
+Free Software Foundation and other authors who decide to use it.  You
+can use it too, but we suggest you first think carefully about whether
+this license or the ordinary General Public License is the better
+strategy to use in any particular case, based on the explanations
+below.
+
+  When we speak of free software, we are referring to freedom of use,
+not price.  Our General Public Licenses are designed to make sure that
+you have the freedom to distribute copies of free software (and charge
+for this service if you wish); that you receive source code or can get
+it if you want it; that you can change the software and use pieces of
+it in new free programs; and that you are informed that you can do
+these things.
+
+  To protect your rights, we need to make restrictions that forbid
+distributors to deny you these rights or to ask you to surrender these
+rights.  These restrictions translate to certain responsibilities for
+you if you distribute copies of the library or if you modify it.
+
+  For example, if you distribute copies of the library, whether gratis
+or for a fee, you must give the recipients all the rights that we gave
+you.  You must make sure that they, too, receive or can get the source
+code.  If you link other code with the library, you must provide
+complete object files to the recipients, so that they can relink them
+with the library after making changes to the library and recompiling
+it.  And you must show them these terms so they know their rights.
+
+  We protect your rights with a two-step method: (1) we copyright the
+library, and (2) we offer you this license, which gives you legal
+permission to copy, distribute and/or modify the library.
+
+  To protect each distributor, we want to make it very clear that
+there is no warranty for the free library.  Also, if the library is
+modified by someone else and passed on, the recipients should know
+that what they have is not the original version, so that the original
+author's reputation will not be affected by problems that might be
+introduced by others.
+^L
+  Finally, software patents pose a constant threat to the existence of
+any free program.  We wish to make sure that a company cannot
+effectively restrict the users of a free program by obtaining a
+restrictive license from a patent holder.  Therefore, we insist that
+any patent license obtained for a version of the library must be
+consistent with the full freedom of use specified in this license.
+
+  Most GNU software, including some libraries, is covered by the
+ordinary GNU General Public License.  This license, the GNU Lesser
+General Public License, applies to certain designated libraries, and
+is quite different from the ordinary General Public License.  We use
+this license for certain libraries in order to permit linking those
+libraries into non-free programs.
+
+  When a program is linked with a library, whether statically or using
+a shared library, the combination of the two is legally speaking a
+combined work, a derivative of the original library.  The ordinary
+General Public License therefore permits such linking only if the
+entire combination fits its criteria of freedom.  The Lesser General
+Public License permits more lax criteria for linking other code with
+the library.
+
+  We call this license the "Lesser" General Public License because it
+does Less to protect the user's freedom than the ordinary General
+Public License.  It also provides other free software developers Less
+of an advantage over competing non-free programs.  These disadvantages
+are the reason we use the ordinary General Public License for many
+libraries.  However, the Lesser license provides advantages in certain
+special circumstances.
+
+  For example, on rare occasions, there may be a special need to
+encourage the widest possible use of a certain library, so that it
+becomes a de-facto standard.  To achieve this, non-free programs must
+be allowed to use the library.  A more frequent case is that a free
+library does the same job as widely used non-free libraries.  In this
+case, there is little to gain by limiting the free library to free
+software only, so we use the Lesser General Public License.
+
+  In other cases, permission to use a particular library in non-free
+programs enables a greater number of people to use a large body of
+free software.  For example, permission to use the GNU C Library in
+non-free programs enables many more people to use the whole GNU
+operating system, as well as its variant, the GNU/Linux operating
+system.
+
+  Although the Lesser General Public License is Less protective of the
+users' freedom, it does ensure that the user of a program that is
+linked with the Library has the freedom and the wherewithal to run
+that program using a modified version of the Library.
+
+  The precise terms and conditions for copying, distribution and
+modification follow.  Pay close attention to the difference between a
+"work based on the library" and a "work that uses the library".  The
+former contains code derived from the library, whereas the latter must
+be combined with the library in order to run.
+^L
+                  GNU LESSER GENERAL PUBLIC LICENSE
+   TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+  0. This License Agreement applies to any software library or other
+program which contains a notice placed by the copyright holder or
+other authorized party saying it may be distributed under the terms of
+this Lesser General Public License (also called "this License").
+Each licensee is addressed as "you".
+
+  A "library" means a collection of software functions and/or data
+prepared so as to be conveniently linked with application programs
+(which use some of those functions and data) to form executables.
+
+  The "Library", below, refers to any such software library or work
+which has been distributed under these terms.  A "work based on the
+Library" means either the Library or any derivative work under
+copyright law: that is to say, a work containing the Library or a
+portion of it, either verbatim or with modifications and/or translated
+straightforwardly into another language.  (Hereinafter, translation is
+included without limitation in the term "modification".)
+
+  "Source code" for a work means the preferred form of the work for
+making modifications to it.  For a library, complete source code means
+all the source code for all modules it contains, plus any associated
+interface definition files, plus the scripts used to control
+compilation and installation of the library.
+
+  Activities other than copying, distribution and modification are not
+covered by this License; they are outside its scope.  The act of
+running a program using the Library is not restricted, and output from
+such a program is covered only if its contents constitute a work based
+on the Library (independent of the use of the Library in a tool for
+writing it).  Whether that is true depends on what the Library does
+and what the program that uses the Library does.
+
+  1. You may copy and distribute verbatim copies of the Library's
+complete source code as you receive it, in any medium, provided that
+you conspicuously and appropriately publish on each copy an
+appropriate copyright notice and disclaimer of warranty; keep intact
+all the notices that refer to this License and to the absence of any
+warranty; and distribute a copy of this License along with the
+Library.
+
+  You may charge a fee for the physical act of transferring a copy,
+and you may at your option offer warranty protection in exchange for a
+fee.
+
+  2. You may modify your copy or copies of the Library or any portion
+of it, thus forming a work based on the Library, and copy and
+distribute such modifications or work under the terms of Section 1
+above, provided that you also meet all of these conditions:
+
+    a) The modified work must itself be a software library.
+
+    b) You must cause the files modified to carry prominent notices
+    stating that you changed the files and the date of any change.
+
+    c) You must cause the whole of the work to be licensed at no
+    charge to all third parties under the terms of this License.
+
+    d) If a facility in the modified Library refers to a function or a
+    table of data to be supplied by an application program that uses
+    the facility, other than as an argument passed when the facility
+    is invoked, then you must make a good faith effort to ensure that,
+    in the event an application does not supply such function or
+    table, the facility still operates, and performs whatever part of
+    its purpose remains meaningful.
+
+    (For example, a function in a library to compute square roots has
+    a purpose that is entirely well-defined independent of the
+    application.  Therefore, Subsection 2d requires that any
+    application-supplied function or table used by this function must
+    be optional: if the application does not supply it, the square
+    root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole.  If
+identifiable sections of that work are not derived from the Library,
+and can be reasonably considered independent and separate works in
+themselves, then this License, and its terms, do not apply to those
+sections when you distribute them as separate works.  But when you
+distribute the same sections as part of a whole which is a work based
+on the Library, the distribution of the whole must be on the terms of
+this License, whose permissions for other licensees extend to the
+entire whole, and thus to each and every part regardless of who wrote
+it.
+
+Thus, it is not the intent of this section to claim rights or contest
+your rights to work written entirely by you; rather, the intent is to
+exercise the right to control the distribution of derivative or
+collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library
+with the Library (or with a work based on the Library) on a volume of
+a storage or distribution medium does not bring the other work under
+the scope of this License.
+
+  3. You may opt to apply the terms of the ordinary GNU General Public
+License instead of this License to a given copy of the Library.  To do
+this, you must alter all the notices that refer to this License, so
+that they refer to the ordinary GNU General Public License, version 2,
+instead of to this License.  (If a newer version than version 2 of the
+ordinary GNU General Public License has appeared, then you can specify
+that version instead if you wish.)  Do not make any other change in
+these notices.
+^L
+  Once this change is made in a given copy, it is irreversible for
+that copy, so the ordinary GNU General Public License applies to all
+subsequent copies and derivative works made from that copy.
+
+  This option is useful when you wish to copy part of the code of
+the Library into a program that is not a library.
+
+  4. You may copy and distribute the Library (or a portion or
+derivative of it, under Section 2) in object code or executable form
+under the terms of Sections 1 and 2 above provided that you accompany
+it with the complete corresponding machine-readable source code, which
+must be distributed under the terms of Sections 1 and 2 above on a
+medium customarily used for software interchange.
+
+  If distribution of object code is made by offering access to copy
+from a designated place, then offering equivalent access to copy the
+source code from the same place satisfies the requirement to
+distribute the source code, even though third parties are not
+compelled to copy the source along with the object code.
+
+  5. A program that contains no derivative of any portion of the
+Library, but is designed to work with the Library by being compiled or
+linked with it, is called a "work that uses the Library".  Such a
+work, in isolation, is not a derivative work of the Library, and
+therefore falls outside the scope of this License.
+
+  However, linking a "work that uses the Library" with the Library
+creates an executable that is a derivative of the Library (because it
+contains portions of the Library), rather than a "work that uses the
+library".  The executable is therefore covered by this License.
+Section 6 states terms for distribution of such executables.
+
+  When a "work that uses the Library" uses material from a header file
+that is part of the Library, the object code for the work may be a
+derivative work of the Library even though the source code is not.
+Whether this is true is especially significant if the work can be
+linked without the Library, or if the work is itself a library.  The
+threshold for this to be true is not precisely defined by law.
+
+  If such an object file uses only numerical parameters, data
+structure layouts and accessors, and small macros and small inline
+functions (ten lines or less in length), then the use of the object
+file is unrestricted, regardless of whether it is legally a derivative
+work.  (Executables containing this object code plus portions of the
+Library will still fall under Section 6.)
+
+  Otherwise, if the work is a derivative of the Library, you may
+distribute the object code for the work under the terms of Section 6.
+Any executables containing that work also fall under Section 6,
+whether or not they are linked directly with the Library itself.
+^L
+  6. As an exception to the Sections above, you may also combine or
+link a "work that uses the Library" with the Library to produce a
+work containing portions of the Library, and distribute that work
+under terms of your choice, provided that the terms permit
+modification of the work for the customer's own use and reverse
+engineering for debugging such modifications.
+
+  You must give prominent notice with each copy of the work that the
+Library is used in it and that the Library and its use are covered by
+this License.  You must supply a copy of this License.  If the work
+during execution displays copyright notices, you must include the
+copyright notice for the Library among them, as well as a reference
+directing the user to the copy of this License.  Also, you must do one
+of these things:
+
+    a) Accompany the work with the complete corresponding
+    machine-readable source code for the Library including whatever
+    changes were used in the work (which must be distributed under
+    Sections 1 and 2 above); and, if the work is an executable linked
+    with the Library, with the complete machine-readable "work that
+    uses the Library", as object code and/or source code, so that the
+    user can modify the Library and then relink to produce a modified
+    executable containing the modified Library.  (It is understood
+    that the user who changes the contents of definitions files in the
+    Library will not necessarily be able to recompile the application
+    to use the modified definitions.)
+
+    b) Use a suitable shared library mechanism for linking with the
+    Library.  A suitable mechanism is one that (1) uses at run time a
+    copy of the library already present on the user's computer system,
+    rather than copying library functions into the executable, and (2)
+    will operate properly with a modified version of the library, if
+    the user installs one, as long as the modified version is
+    interface-compatible with the version that the work was made with.
+
+    c) Accompany the work with a written offer, valid for at least
+    three years, to give the same user the materials specified in
+    Subsection 6a, above, for a charge no more than the cost of
+    performing this distribution.
+
+    d) If distribution of the work is made by offering access to copy
+    from a designated place, offer equivalent access to copy the above
+    specified materials from the same place.
+
+    e) Verify that the user has already received a copy of these
+    materials or that you have already sent this user a copy.
+
+  For an executable, the required form of the "work that uses the
+Library" must include any data and utility programs needed for
+reproducing the executable from it.  However, as a special exception,
+the materials to be distributed need not include anything that is
+normally distributed (in either source or binary form) with the major
+components (compiler, kernel, and so on) of the operating system on
+which the executable runs, unless that component itself accompanies
+the executable.
+
+  It may happen that this requirement contradicts the license
+restrictions of other proprietary libraries that do not normally
+accompany the operating system.  Such a contradiction means you cannot
+use both them and the Library together in an executable that you
+distribute.
+^L
+  7. You may place library facilities that are a work based on the
+Library side-by-side in a single library together with other library
+facilities not covered by this License, and distribute such a combined
+library, provided that the separate distribution of the work based on
+the Library and of the other library facilities is otherwise
+permitted, and provided that you do these two things:
+
+    a) Accompany the combined library with a copy of the same work
+    based on the Library, uncombined with any other library
+    facilities.  This must be distributed under the terms of the
+    Sections above.
+
+    b) Give prominent notice with the combined library of the fact
+    that part of it is a work based on the Library, and explaining
+    where to find the accompanying uncombined form of the same work.
+
+  8. You may not copy, modify, sublicense, link with, or distribute
+the Library except as expressly provided under this License.  Any
+attempt otherwise to copy, modify, sublicense, link with, or
+distribute the Library is void, and will automatically terminate your
+rights under this License.  However, parties who have received copies,
+or rights, from you under this License will not have their licenses
+terminated so long as such parties remain in full compliance.
+
+  9. You are not required to accept this License, since you have not
+signed it.  However, nothing else grants you permission to modify or
+distribute the Library or its derivative works.  These actions are
+prohibited by law if you do not accept this License.  Therefore, by
+modifying or distributing the Library (or any work based on the
+Library), you indicate your acceptance of this License to do so, and
+all its terms and conditions for copying, distributing or modifying
+the Library or works based on it.
+
+  10. Each time you redistribute the Library (or any work based on the
+Library), the recipient automatically receives a license from the
+original licensor to copy, distribute, link with or modify the Library
+subject to these terms and conditions.  You may not impose any further
+restrictions on the recipients' exercise of the rights granted herein.
+You are not responsible for enforcing compliance by third parties with
+this License.
+^L
+  11. If, as a consequence of a court judgment or allegation of patent
+infringement or for any other reason (not limited to patent issues),
+conditions are imposed on you (whether by court order, agreement or
+otherwise) that contradict the conditions of this License, they do not
+excuse you from the conditions of this License.  If you cannot
+distribute so as to satisfy simultaneously your obligations under this
+License and any other pertinent obligations, then as a consequence you
+may not distribute the Library at all.  For example, if a patent
+license would not permit royalty-free redistribution of the Library by
+all those who receive copies directly or indirectly through you, then
+the only way you could satisfy both it and this License would be to
+refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under
+any particular circumstance, the balance of the section is intended to
+apply, and the section as a whole is intended to apply in other
+circumstances.
+
+It is not the purpose of this section to induce you to infringe any
+patents or other property right claims or to contest validity of any
+such claims; this section has the sole purpose of protecting the
+integrity of the free software distribution system which is
+implemented by public license practices.  Many people have made
+generous contributions to the wide range of software distributed
+through that system in reliance on consistent application of that
+system; it is up to the author/donor to decide if he or she is willing
+to distribute software through any other system and a licensee cannot
+impose that choice.
+
+This section is intended to make thoroughly clear what is believed to
+be a consequence of the rest of this License.
+
+  12. If the distribution and/or use of the Library is restricted in
+certain countries either by patents or by copyrighted interfaces, the
+original copyright holder who places the Library under this License
+may add an explicit geographical distribution limitation excluding those
+countries, so that distribution is permitted only in or among
+countries not thus excluded.  In such case, this License incorporates
+the limitation as if written in the body of this License.
+
+  13. The Free Software Foundation may publish revised and/or new
+versions of the Lesser General Public License from time to time.
+Such new versions will be similar in spirit to the present version,
+but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number.  If the Library
+specifies a version number of this License which applies to it and
+"any later version", you have the option of following the terms and
+conditions either of that version or of any later version published by
+the Free Software Foundation.  If the Library does not specify a
+license version number, you may choose any version ever published by
+the Free Software Foundation.
+^L
+  14. If you wish to incorporate parts of the Library into other free
+programs whose distribution conditions are incompatible with these,
+write to the author to ask for permission.  For software which is
+copyrighted by the Free Software Foundation, write to the Free
+Software Foundation; we sometimes make exceptions for this.  Our
+decision will be guided by the two goals of preserving the free status
+of all derivatives of our free software and of promoting the sharing
+and reuse of software generally.
+
+                            NO WARRANTY
+
+  15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO
+WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW.
+EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR
+OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY
+KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE
+IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
+PURPOSE.  THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE
+LIBRARY IS WITH YOU.  SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME
+THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+  16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN
+WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY
+AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU
+FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR
+CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE
+LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING
+RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A
+FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF
+SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH
+DAMAGES.
+
+                     END OF TERMS AND CONDITIONS
+^L
+           How to Apply These Terms to Your New Libraries
+
+  If you develop a new library, and you want it to be of the greatest
+possible use to the public, we recommend making it free software that
+everyone can redistribute and change.  You can do so by permitting
+redistribution under these terms (or, alternatively, under the terms
+of the ordinary General Public License).
+
+  To apply these terms, attach the following notices to the library.
+It is safest to attach them to the start of each source file to most
+effectively convey the exclusion of warranty; and each file should
+have at least the "copyright" line and a pointer to where the full
+notice is found.
+
+
+    <one line to give the library's name and a brief idea of what it does.>
+    Copyright (C) <year>  <name of author>
+
+    This library is free software; you can redistribute it and/or
+    modify it under the terms of the GNU Lesser General Public
+    License as published by the Free Software Foundation; either
+    version 2.1 of the License, or (at your option) any later version.
+
+    This library is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+    Lesser General Public License for more details.
+
+    You should have received a copy of the GNU Lesser General Public
+    License along with this library; if not, write to the Free Software
+    Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307  USA
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or
+your school, if any, to sign a "copyright disclaimer" for the library,
+if necessary.  Here is a sample; alter the names:
+
+  Yoyodyne, Inc., hereby disclaims all copyright interest in the
+  library `Frob' (a library for tweaking knobs) written by James
+  Random Hacker.
+
+  <signature of Ty Coon>, 1 April 1990
+  Ty Coon, President of Vice
+
+That's all there is to it!
diff --git a/tools/ocaml/duniverse/ocplib-endian/Makefile b/tools/ocaml/duniverse/ocplib-endian/Makefile
new file mode 100644
index 0000000000..63fb0da7f0
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/Makefile
@@ -0,0 +1,13 @@
+.PHONY: all clean test doc
+
+all:
+	dune build
+
+clean:
+	dune clean
+
+test:
+	dune runtest --profile=release
+
+doc:
+	dune build @doc
diff --git a/tools/ocaml/duniverse/ocplib-endian/README.md b/tools/ocaml/duniverse/ocplib-endian/README.md
new file mode 100644
index 0000000000..095959be94
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/README.md
@@ -0,0 +1,16 @@
+ocplib-endian
+=============
+
+Optimised functions to read and write int16/32/64 from strings, bytes
+and bigarrays, based on primitives added in OCaml 4.01.
+
+The library implements three modules:
+- [EndianString](src/endianString.cppo.mli) works directly on strings, and provides submodules BigEndian and LittleEndian, with their unsafe counterparts;
+- [EndianBytes](src/endianBytes.cppo.mli) works directly on bytes, and provides submodules BigEndian and LittleEndian, with their unsafe counterparts;
+- [EndianBigstring](src/endianBigstring.cppo.mli) works on bigstrings (Bigarrays of chars), and provides submodules BigEndian and LittleEndian, with their unsafe counterparts.
+
+
+## Hacking
+
+The tests only pass under dune's release profile: the debug profile
+prevents cross-module inlining, which in turn prevents unboxing in the tests.
\ No newline at end of file
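The README above can be made concrete with a short usage sketch. This is a hedged example, not part of the patch: it assumes the ocplib-endian package built by this diff is installed and linked.

```ocaml
(* Usage sketch for the modules described in the README above.
   Assumes ocplib-endian is installed and linked. *)
let () =
  let s = "\x01\x02\x03\x04" in
  (* Big-endian read of the first two bytes: 0x0102 = 258 *)
  assert (EndianString.BigEndian.get_uint16 s 0 = 0x0102);
  (* The same two bytes read little-endian: 0x0201 = 513 *)
  assert (EndianString.LittleEndian.get_uint16 s 0 = 0x0201);
  (* Writes go through EndianBytes, since strings are immutable *)
  let b = Bytes.create 4 in
  EndianBytes.BigEndian.set_int32 b 0 0xDEADBEEFl;
  assert (EndianBytes.BigEndian.get_int32 b 0 = 0xDEADBEEFl)
```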
diff --git a/tools/ocaml/duniverse/ocplib-endian/dune-project b/tools/ocaml/duniverse/ocplib-endian/dune-project
new file mode 100644
index 0000000000..ae33d72195
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/dune-project
@@ -0,0 +1,2 @@
+(lang dune 1.0)
+(name ocplib-endian)
diff --git a/tools/ocaml/duniverse/ocplib-endian/ocplib-endian.opam b/tools/ocaml/duniverse/ocplib-endian/ocplib-endian.opam
new file mode 100644
index 0000000000..642829c51b
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/ocplib-endian.opam
@@ -0,0 +1,30 @@
+opam-version: "2.0"
+name: "ocplib-endian"
+synopsis: "Optimised functions to read and write int16/32/64 from strings and bigarrays"
+description: """
+The library implements three modules:
+* [EndianString](https://github.com/OCamlPro/ocplib-endian/blob/master/src/endianString.mli) works directly on strings, and provides submodules BigEndian and LittleEndian, with their unsafe counterparts;
+* [EndianBytes](https://github.com/OCamlPro/ocplib-endian/blob/master/src/endianBytes.mli) works directly on bytes, and provides submodules BigEndian and LittleEndian, with their unsafe counterparts;
+* [EndianBigstring](https://github.com/OCamlPro/ocplib-endian/blob/master/src/endianBigstring.mli) works on bigstrings (Bigarrays of chars), and provides submodules BigEndian and LittleEndian, with their unsafe counterparts.
+"""
+maintainer: "pierre.chambart@ocamlpro.com"
+authors: "Pierre Chambart"
+homepage: "https://github.com/OCamlPro/ocplib-endian"
+bug-reports: "https://github.com/OCamlPro/ocplib-endian/issues"
+doc: "https://ocamlpro.github.io/ocplib-endian/ocplib-endian/"
+depends: [
+  "base-bytes"
+  "ocaml" {>= "4.02.3"}
+  "cppo" {>= "1.1.0" & build}
+  "dune" {build & >= "1.0"}
+]
+build: [
+  ["dune" "build" "-p" name "-j" jobs
+   "@install"
+   "@runtest" {with-test}
+   "@doc" {with-doc}]
+]
+dev-repo: "git+https://github.com/OCamlPro/ocplib-endian.git"
+url {
+  src: "https://github.com/OCamlPro/ocplib-endian/archive/1.1.tar.gz"
+}
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/be_ocaml_401.ml b/tools/ocaml/duniverse/ocplib-endian/src/be_ocaml_401.ml
new file mode 100644
index 0000000000..38de28c1ca
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/be_ocaml_401.ml
@@ -0,0 +1,32 @@
+  let get_uint16 s off =
+    if not Sys.big_endian
+    then swap16 (get_16 s off)
+    else get_16 s off
+
+  let get_int16 s off =
+   ((get_uint16 s off) lsl ( Sys.word_size - 17 )) asr ( Sys.word_size - 17 )
+
+  let get_int32 s off =
+    if not Sys.big_endian
+    then swap32 (get_32 s off)
+    else get_32 s off
+
+  let get_int64 s off =
+    if not Sys.big_endian
+    then swap64 (get_64 s off)
+    else get_64 s off
+
+  let set_int16 s off v =
+    if not Sys.big_endian
+    then (set_16 s off (swap16 v))
+    else set_16 s off v
+
+  let set_int32 s off v =
+    if not Sys.big_endian
+    then set_32 s off (swap32 v)
+    else set_32 s off v
+
+  let set_int64 s off v =
+    if not Sys.big_endian
+    then set_64 s off (swap64 v)
+    else set_64 s off v
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/common.ml b/tools/ocaml/duniverse/ocplib-endian/src/common.ml
new file mode 100644
index 0000000000..54df23effa
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/common.ml
@@ -0,0 +1,24 @@
+[@@@warning "-32"]
+
+let sign8 v =
+  (v lsl ( Sys.word_size - 9 )) asr ( Sys.word_size - 9 )
+
+let sign16 v =
+  (v lsl ( Sys.word_size - 17 )) asr ( Sys.word_size - 17 )
+
+let get_uint8 s off =
+  Char.code (get_char s off)
+let get_int8 s off =
+  ((get_uint8 s off) lsl ( Sys.word_size - 9 )) asr ( Sys.word_size - 9 )
+let set_int8 s off v =
+  (* It is ok to cast using unsafe_chr because both String.set
+     and Bigarray.Array1.set (on bigstrings) use the 'store unsigned int8'
+     primitives that effectively extract the bits before writing *)
+  set_char s off (Char.unsafe_chr v)
+
+let unsafe_get_uint8 s off =
+  Char.code (unsafe_get_char s off)
+let unsafe_get_int8 s off =
+  ((unsafe_get_uint8 s off) lsl ( Sys.word_size - 9 )) asr ( Sys.word_size - 9 )
+let unsafe_set_int8 s off v =
+  unsafe_set_char s off (Char.unsafe_chr v)
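The `sign8`/`sign16` helpers in common.ml above rely on a classic shift trick: left-shifting the value so its sign bit lands in the top bit of OCaml's native int, then arithmetic-shifting back, replicates the sign bit downwards. A standalone copy of the 16-bit case:

```ocaml
(* Standalone copy of the sign-extension trick used by sign16 above:
   lsl moves bit 15 into the native int's sign position, asr shifts
   back while replicating it, yielding a value in [-32768, 32767]. *)
let sign16 v = (v lsl (Sys.word_size - 17)) asr (Sys.word_size - 17)

let () =
  assert (sign16 0x0001 = 1);
  assert (sign16 0x7FFF = 32767);
  assert (sign16 0x8000 = -32768);
  assert (sign16 0xFFFF = -1)
```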
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/common_401.cppo.ml b/tools/ocaml/duniverse/ocplib-endian/src/common_401.cppo.ml
new file mode 100644
index 0000000000..eba9509bd4
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/common_401.cppo.ml
@@ -0,0 +1,100 @@
+external swap16 : int -> int = "%bswap16"
+external swap32 : int32 -> int32 = "%bswap_int32"
+external swap64 : int64 -> int64 = "%bswap_int64"
+external swapnative : nativeint -> nativeint = "%bswap_native"
+
+module BigEndian = struct
+
+  let get_char = get_char
+  let get_uint8 = get_uint8
+  let get_int8 = get_int8
+  let set_char = set_char
+  let set_int8 = set_int8
+
+#include "be_ocaml_401.ml"
+#include "common_float.ml"
+
+end
+
+module BigEndian_unsafe = struct
+
+  let get_char = unsafe_get_char
+  let get_uint8 = unsafe_get_uint8
+  let get_int8 = unsafe_get_int8
+  let set_char = unsafe_set_char
+  let set_int8 = unsafe_set_int8
+  let get_16 = unsafe_get_16
+  let get_32 = unsafe_get_32
+  let get_64 = unsafe_get_64
+  let set_16 = unsafe_set_16
+  let set_32 = unsafe_set_32
+  let set_64 = unsafe_set_64
+
+#include "be_ocaml_401.ml"
+#include "common_float.ml"
+
+end
+
+module LittleEndian = struct
+
+  let get_char = get_char
+  let get_uint8 = get_uint8
+  let get_int8 = get_int8
+  let set_char = set_char
+  let set_int8 = set_int8
+
+#include "le_ocaml_401.ml"
+#include "common_float.ml"
+
+end
+
+module LittleEndian_unsafe = struct
+
+  let get_char = unsafe_get_char
+  let get_uint8 = unsafe_get_uint8
+  let get_int8 = unsafe_get_int8
+  let set_char = unsafe_set_char
+  let set_int8 = unsafe_set_int8
+  let get_16 = unsafe_get_16
+  let get_32 = unsafe_get_32
+  let get_64 = unsafe_get_64
+  let set_16 = unsafe_set_16
+  let set_32 = unsafe_set_32
+  let set_64 = unsafe_set_64
+
+#include "le_ocaml_401.ml"
+#include "common_float.ml"
+
+end
+
+module NativeEndian = struct
+
+  let get_char = get_char
+  let get_uint8 = get_uint8
+  let get_int8 = get_int8
+  let set_char = set_char
+  let set_int8 = set_int8
+
+#include "ne_ocaml_401.ml"
+#include "common_float.ml"
+
+end
+
+module NativeEndian_unsafe = struct
+
+  let get_char = unsafe_get_char
+  let get_uint8 = unsafe_get_uint8
+  let get_int8 = unsafe_get_int8
+  let set_char = unsafe_set_char
+  let set_int8 = unsafe_set_int8
+  let get_16 = unsafe_get_16
+  let get_32 = unsafe_get_32
+  let get_64 = unsafe_get_64
+  let set_16 = unsafe_set_16
+  let set_32 = unsafe_set_32
+  let set_64 = unsafe_set_64
+
+#include "ne_ocaml_401.ml"
+#include "common_float.ml"
+
+end
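The `%bswap16` primitive bound at the top of this file is what the BigEndian/LittleEndian modules use when the host byte order differs from the requested one; it swaps the two low-order bytes of an int. A minimal standalone binding:

```ocaml
(* The %bswap16 compiler primitive (available since OCaml 4.01)
   swaps the two low-order bytes of an int. *)
external swap16 : int -> int = "%bswap16"

let () =
  assert (swap16 0x1234 = 0x3412);
  assert (swap16 0x00FF = 0xFF00)
```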
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/common_float.ml b/tools/ocaml/duniverse/ocplib-endian/src/common_float.ml
new file mode 100644
index 0000000000..3d28d2da1f
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/common_float.ml
@@ -0,0 +1,5 @@
+
+let get_float buff i = Int32.float_of_bits (get_int32 buff i)
+let get_double buff i = Int64.float_of_bits (get_int64 buff i)
+let set_float buff i v = set_int32 buff i (Int32.bits_of_float v)
+let set_double buff i v = set_int64 buff i (Int64.bits_of_float v)
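The float accessors above are thin wrappers over the integer ones: a 32-bit float is stored as its IEEE-754 bit pattern in an int32 (and a double in an int64). The round trip can be sketched with the stdlib alone:

```ocaml
(* get_float/set_float above just reinterpret IEEE-754 bit patterns;
   round-trip check using only the stdlib. *)
let () =
  let bits = Int32.bits_of_float 1.5 in
  assert (bits = 0x3FC00000l);             (* float32 encoding of 1.5 *)
  assert (Int32.float_of_bits bits = 1.5)
```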
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/dune b/tools/ocaml/duniverse/ocplib-endian/src/dune
new file mode 100644
index 0000000000..a5b90d1107
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/dune
@@ -0,0 +1,75 @@
+(rule
+ (targets endianString.mli)
+ (deps (:< endianString.cppo.mli))
+ (action
+  (run %{bin:cppo} %{<} -o %{targets})))
+
+(rule
+ (targets endianString.ml)
+ (deps
+  (:< endianString.cppo.ml)
+  common.ml
+  common_401.ml)
+ (action
+  (run %{bin:cppo} -V OCAML:%{ocaml_version} %{<} -o %{targets})))
+
+(rule
+ (targets endianBytes.mli)
+ (deps
+  (:< endianBytes.cppo.mli))
+ (action
+  (run %{bin:cppo} %{<} -o %{targets})))
+
+(rule
+ (targets endianBytes.ml)
+ (deps
+  (:< endianBytes.cppo.ml)
+  common.ml
+  common_401.ml)
+ (action
+  (run %{bin:cppo} -V OCAML:%{ocaml_version} %{<} -o %{targets})))
+
+(rule
+ (targets endianBigstring.mli)
+ (deps
+  (:< endianBigstring.cppo.mli))
+ (action
+  (run %{bin:cppo} %{<} -o %{targets})))
+
+(rule
+ (targets endianBigstring.ml)
+ (deps
+  (:< endianBigstring.cppo.ml)
+  common.ml
+  common_401.ml)
+ (action
+  (run %{bin:cppo} %{<} -o %{targets})))
+
+(rule
+ (targets common_401.ml)
+ (deps
+  (:< common_401.cppo.ml)
+  be_ocaml_401.ml
+  le_ocaml_401.ml
+  ne_ocaml_401.ml
+  common_float.ml)
+ (action
+  (run %{bin:cppo} %{<} -o %{targets})))
+
+(library
+ (name ocplib_endian)
+ (public_name ocplib-endian)
+ (synopsis "Optimised functions to read and write int16/32/64 from strings and bytes")
+ (wrapped false)
+ (ocamlopt_flags (:standard -inline 1000))
+ (modules endianString endianBytes)
+ (libraries bytes))
+
+(library
+ (name ocplib_endian_bigstring)
+ (public_name ocplib-endian.bigstring)
+ (synopsis "Optimised functions to read and write int16/32/64 from bigarrays")
+ (wrapped false)
+ (modules endianBigstring)
+ (ocamlopt_flags (:standard -inline 1000))
+ (libraries ocplib_endian bigarray bytes))
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.ml b/tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.ml
new file mode 100644
index 0000000000..b7a6abae79
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.ml
@@ -0,0 +1,112 @@
+(************************************************************************)
+(*  ocplib-endian                                                       *)
+(*                                                                      *)
+(*    Copyright 2012 OCamlPro                                           *)
+(*                                                                      *)
+(*  This file is distributed under the terms of the GNU Lesser General  *)
+(*  Public License as published by the Free Software Foundation; either *)
+(*  version 2.1 of the License, or (at your option) any later version,  *)
+(*  with the OCaml static compilation exception.                        *)
+(*                                                                      *)
+(*  ocplib-endian is distributed in the hope that it will be useful,    *)
+(*  but WITHOUT ANY WARRANTY; without even the implied warranty of      *)
+(*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *)
+(*  GNU General Public License for more details.                        *)
+(*                                                                      *)
+(************************************************************************)
+
+open Bigarray
+
+type bigstring = (char, int8_unsigned_elt, c_layout) Array1.t
+
+module type EndianBigstringSig = sig
+  (** Functions reading according to Big Endian byte order *)
+
+  val get_char : bigstring -> int -> char
+  (** [get_char buff i] reads 1 byte at offset i as a char *)
+
+  val get_uint8 : bigstring -> int -> int
+  (** [get_uint8 buff i] reads 1 byte at offset i as an unsigned int of 8
+  bits. i.e. It returns a value between 0 and 2^8-1 *)
+
+  val get_int8 : bigstring -> int -> int
+  (** [get_int8 buff i] reads 1 byte at offset i as a signed int of 8
+  bits. i.e. It returns a value between -2^7 and 2^7-1 *)
+
+  val get_uint16 : bigstring -> int -> int
+  (** [get_uint16 buff i] reads 2 bytes at offset i as an unsigned int
+  of 16 bits. i.e. It returns a value between 0 and 2^16-1 *)
+
+  val get_int16 : bigstring -> int -> int
+  (** [get_int16 buff i] reads 2 bytes at offset i as a signed int of
+  16 bits. i.e. It returns a value between -2^15 and 2^15-1 *)
+
+  val get_int32 : bigstring -> int -> int32
+  (** [get_int32 buff i] reads 4 bytes at offset i as an int32. *)
+
+  val get_int64 : bigstring -> int -> int64
+  (** [get_int64 buff i] reads 8 bytes at offset i as an int64. *)
+
+  val get_float : bigstring -> int -> float
+  (** [get_float buff i] is equivalent to
+      [Int32.float_of_bits (get_int32 buff i)] *)
+
+  val get_double : bigstring -> int -> float
+  (** [get_double buff i] is equivalent to
+      [Int64.float_of_bits (get_int64 buff i)] *)
+
+  val set_char : bigstring -> int -> char -> unit
+  (** [set_char buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_int8 : bigstring -> int -> int -> unit
+  (** [set_int8 buff i v] writes the least significant 8 bits of [v]
+  to [buff] at offset [i] *)
+
+  val set_int16 : bigstring -> int -> int -> unit
+  (** [set_int16 buff i v] writes the least significant 16 bits of [v]
+  to [buff] at offset [i] *)
+
+  val set_int32 : bigstring -> int -> int32 -> unit
+  (** [set_int32 buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_int64 : bigstring -> int -> int64 -> unit
+  (** [set_int64 buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_float : bigstring -> int -> float -> unit
+  (** [set_float buff i v] is equivalent to
+      [set_int32 buff i (Int32.bits_of_float v)] *)
+
+  val set_double : bigstring -> int -> float -> unit
+  (** [set_double buff i v] is equivalent to
+      [set_int64 buff i (Int64.bits_of_float v)] *)
+
+end
+
+let get_char (s:bigstring) off =
+  Array1.get s off
+let set_char (s:bigstring) off v =
+  Array1.set s off v
+let unsafe_get_char (s:bigstring) off =
+  Array1.unsafe_get s off
+let unsafe_set_char (s:bigstring) off v =
+  Array1.unsafe_set s off v
+
+#include "common.ml"
+
+external unsafe_get_16 : bigstring -> int -> int = "%caml_bigstring_get16u"
+external unsafe_get_32 : bigstring -> int -> int32 = "%caml_bigstring_get32u"
+external unsafe_get_64 : bigstring -> int -> int64 = "%caml_bigstring_get64u"
+
+external unsafe_set_16 : bigstring -> int -> int -> unit = "%caml_bigstring_set16u"
+external unsafe_set_32 : bigstring -> int -> int32 -> unit = "%caml_bigstring_set32u"
+external unsafe_set_64 : bigstring -> int -> int64 -> unit = "%caml_bigstring_set64u"
+
+external get_16 : bigstring -> int -> int = "%caml_bigstring_get16"
+external get_32 : bigstring -> int -> int32 = "%caml_bigstring_get32"
+external get_64 : bigstring -> int -> int64 = "%caml_bigstring_get64"
+
+external set_16 : bigstring -> int -> int -> unit = "%caml_bigstring_set16"
+external set_32 : bigstring -> int -> int32 -> unit = "%caml_bigstring_set32"
+external set_64 : bigstring -> int -> int64 -> unit = "%caml_bigstring_set64"
+
+#include "common_401.ml"
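The `bigstring` type defined above is simply a one-dimensional char Bigarray in C layout. A sketch of allocating one and exercising the char-level accessors; it needs only the Bigarray module (part of the stdlib since OCaml 4.07, previously the `bigarray` library):

```ocaml
(* A bigstring, as defined above, is a 1-D char Bigarray in C layout. *)
open Bigarray

type bigstring = (char, int8_unsigned_elt, c_layout) Array1.t

let () =
  let (b : bigstring) = Array1.create char c_layout 2 in
  Array1.set b 0 'H';
  Array1.set b 1 'i';
  assert (Array1.get b 0 = 'H' && Array1.get b 1 = 'i')
```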
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.mli b/tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.mli
new file mode 100644
index 0000000000..73f51abfe3
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/endianBigstring.cppo.mli
@@ -0,0 +1,128 @@
+(************************************************************************)
+(*  ocplib-endian                                                       *)
+(*                                                                      *)
+(*    Copyright 2012 OCamlPro                                           *)
+(*                                                                      *)
+(*  This file is distributed under the terms of the GNU Lesser General  *)
+(*  Public License as published by the Free Software Foundation; either *)
+(*  version 2.1 of the License, or (at your option) any later version,  *)
+(*  with the OCaml static compilation exception.                        *)
+(*                                                                      *)
+(*  ocplib-endian is distributed in the hope that it will be useful,    *)
+(*  but WITHOUT ANY WARRANTY; without even the implied warranty of      *)
+(*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *)
+(*  GNU General Public License for more details.                        *)
+(*                                                                      *)
+(************************************************************************)
+
+open Bigarray
+type bigstring = (char, int8_unsigned_elt, c_layout) Array1.t
+
+module type EndianBigstringSig = sig
+  (** Functions reading according to Big Endian byte order *)
+
+  val get_char : bigstring -> int -> char
+  (** [get_char buff i] reads 1 byte at offset i as a char *)
+
+  val get_uint8 : bigstring -> int -> int
+  (** [get_uint8 buff i] reads 1 byte at offset i as an unsigned int of 8
+  bits. i.e. It returns a value between 0 and 2^8-1 *)
+
+  val get_int8 : bigstring -> int -> int
+  (** [get_int8 buff i] reads 1 byte at offset i as a signed int of 8
+  bits. i.e. It returns a value between -2^7 and 2^7-1 *)
+
+  val get_uint16 : bigstring -> int -> int
+  (** [get_uint16 buff i] reads 2 bytes at offset i as an unsigned int
+  of 16 bits. i.e. It returns a value between 0 and 2^16-1 *)
+
+  val get_int16 : bigstring -> int -> int
+  (** [get_int16 buff i] reads 2 bytes at offset i as a signed int of
+  16 bits. i.e. It returns a value between -2^15 and 2^15-1 *)
+
+  val get_int32 : bigstring -> int -> int32
+  (** [get_int32 buff i] reads 4 bytes at offset i as an int32. *)
+
+  val get_int64 : bigstring -> int -> int64
+  (** [get_int64 buff i] reads 8 bytes at offset i as an int64. *)
+
+  val get_float : bigstring -> int -> float
+  (** [get_float buff i] is equivalent to
+      [Int32.float_of_bits (get_int32 buff i)] *)
+
+  val get_double : bigstring -> int -> float
+  (** [get_double buff i] is equivalent to
+      [Int64.float_of_bits (get_int64 buff i)] *)
+
+  val set_char : bigstring -> int -> char -> unit
+  (** [set_char buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_int8 : bigstring -> int -> int -> unit
+  (** [set_int8 buff i v] writes the least significant 8 bits of [v]
+  to [buff] at offset [i] *)
+
+  val set_int16 : bigstring -> int -> int -> unit
+  (** [set_int16 buff i v] writes the least significant 16 bits of [v]
+  to [buff] at offset [i] *)
+
+  val set_int32 : bigstring -> int -> int32 -> unit
+  (** [set_int32 buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_int64 : bigstring -> int -> int64 -> unit
+  (** [set_int64 buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_float : bigstring -> int -> float -> unit
+  (** [set_float buff i v] is equivalent to
+      [set_int32 buff i (Int32.bits_of_float v)] *)
+
+  val set_double : bigstring -> int -> float -> unit
+  (** [set_double buff i v] is equivalent to
+      [set_int64 buff i (Int64.bits_of_float v)] *)
+
+end
+
+module BigEndian : sig
+  (** Functions reading according to Big Endian byte order *)
+
+  include EndianBigstringSig
+
+end
+
+module BigEndian_unsafe : sig
+  (** Functions reading according to Big Endian byte order without
+  checking for overflow *)
+
+  include EndianBigstringSig
+
+end
+
+module LittleEndian : sig
+  (** Functions reading according to Little Endian byte order *)
+
+  include EndianBigstringSig
+
+end
+
+module LittleEndian_unsafe : sig
+  (** Functions reading according to Little Endian byte order without
+  checking for overflow *)
+
+  include EndianBigstringSig
+
+end
+
+module NativeEndian : sig
+  (** Functions reading according to machine endianness *)
+
+  include EndianBigstringSig
+
+end
+
+module NativeEndian_unsafe : sig
+  (** Functions reading according to machine endianness without
+  checking for overflow *)
+
+  include EndianBigstringSig
+
+end
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.ml b/tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.ml
new file mode 100644
index 0000000000..419f06316a
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.ml
@@ -0,0 +1,130 @@
+(************************************************************************)
+(*  ocplib-endian                                                       *)
+(*                                                                      *)
+(*    Copyright 2014 OCamlPro                                           *)
+(*                                                                      *)
+(*  This file is distributed under the terms of the GNU Lesser General  *)
+(*  Public License as published by the Free Software Foundation; either *)
+(*  version 2.1 of the License, or (at your option) any later version,  *)
+(*  with the OCaml static compilation exception.                        *)
+(*                                                                      *)
+(*  ocplib-endian is distributed in the hope that it will be useful,    *)
+(*  but WITHOUT ANY WARRANTY; without even the implied warranty of      *)
+(*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *)
+(*  GNU General Public License for more details.                        *)
+(*                                                                      *)
+(************************************************************************)
+
+module type EndianBytesSig = sig
+  (** Functions reading according to Big Endian byte order *)
+
+  val get_char : Bytes.t -> int -> char
+  (** [get_char buff i] reads 1 byte at offset i as a char *)
+
+  val get_uint8 : Bytes.t -> int -> int
+  (** [get_uint8 buff i] reads 1 byte at offset i as an unsigned int of 8
+  bits. i.e. It returns a value between 0 and 2^8-1 *)
+
+  val get_int8 : Bytes.t -> int -> int
+  (** [get_int8 buff i] reads 1 byte at offset i as a signed int of 8
+  bits. i.e. It returns a value between -2^7 and 2^7-1 *)
+
+  val get_uint16 : Bytes.t -> int -> int
+  (** [get_uint16 buff i] reads 2 bytes at offset i as an unsigned int
+  of 16 bits. i.e. It returns a value between 0 and 2^16-1 *)
+
+  val get_int16 : Bytes.t -> int -> int
+  (** [get_int16 buff i] reads 2 bytes at offset i as a signed int of
+  16 bits. i.e. It returns a value between -2^15 and 2^15-1 *)
+
+  val get_int32 : Bytes.t -> int -> int32
+  (** [get_int32 buff i] reads 4 bytes at offset i as an int32. *)
+
+  val get_int64 : Bytes.t -> int -> int64
+  (** [get_int64 buff i] reads 8 bytes at offset i as an int64. *)
+
+  val get_float : Bytes.t -> int -> float
+  (** [get_float buff i] is equivalent to
+      [Int32.float_of_bits (get_int32 buff i)] *)
+
+  val get_double : Bytes.t -> int -> float
+  (** [get_double buff i] is equivalent to
+      [Int64.float_of_bits (get_int64 buff i)] *)
+
+  val set_char : Bytes.t -> int -> char -> unit
+  (** [set_char buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_int8 : Bytes.t -> int -> int -> unit
+  (** [set_int8 buff i v] writes the least significant 8 bits of [v]
+  to [buff] at offset [i] *)
+
+  val set_int16 : Bytes.t -> int -> int -> unit
+  (** [set_int16 buff i v] writes the least significant 16 bits of [v]
+  to [buff] at offset [i] *)
+
+  val set_int32 : Bytes.t -> int -> int32 -> unit
+  (** [set_int32 buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_int64 : Bytes.t -> int -> int64 -> unit
+  (** [set_int64 buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_float : Bytes.t -> int -> float -> unit
+  (** [set_float buff i v] is equivalent to
+      [set_int32 buff i (Int32.bits_of_float v)] *)
+
+  val set_double : Bytes.t -> int -> float -> unit
+  (** [set_double buff i v] is equivalent to
+      [set_int64 buff i (Int64.bits_of_float v)] *)
+
+end
+
+let get_char (s:Bytes.t) off =
+  Bytes.get s off
+let set_char (s:Bytes.t) off v =
+  Bytes.set s off v
+let unsafe_get_char (s:Bytes.t) off =
+  Bytes.unsafe_get s off
+let unsafe_set_char (s:Bytes.t) off v =
+  Bytes.unsafe_set s off v
+
+#include "common.ml"
+
+#if OCAML_VERSION < (4, 07, 0)
+
+external unsafe_get_16 : Bytes.t -> int -> int = "%caml_string_get16u"
+external unsafe_get_32 : Bytes.t -> int -> int32 = "%caml_string_get32u"
+external unsafe_get_64 : Bytes.t -> int -> int64 = "%caml_string_get64u"
+
+external unsafe_set_16 : Bytes.t -> int -> int -> unit = "%caml_string_set16u"
+external unsafe_set_32 : Bytes.t -> int -> int32 -> unit = "%caml_string_set32u"
+external unsafe_set_64 : Bytes.t -> int -> int64 -> unit = "%caml_string_set64u"
+
+external get_16 : Bytes.t -> int -> int = "%caml_string_get16"
+external get_32 : Bytes.t -> int -> int32 = "%caml_string_get32"
+external get_64 : Bytes.t -> int -> int64 = "%caml_string_get64"
+
+external set_16 : Bytes.t -> int -> int -> unit = "%caml_string_set16"
+external set_32 : Bytes.t -> int -> int32 -> unit = "%caml_string_set32"
+external set_64 : Bytes.t -> int -> int64 -> unit = "%caml_string_set64"
+
+#else
+
+external unsafe_get_16 : Bytes.t -> int -> int = "%caml_bytes_get16u"
+external unsafe_get_32 : Bytes.t -> int -> int32 = "%caml_bytes_get32u"
+external unsafe_get_64 : Bytes.t -> int -> int64 = "%caml_bytes_get64u"
+
+external unsafe_set_16 : Bytes.t -> int -> int -> unit = "%caml_bytes_set16u"
+external unsafe_set_32 : Bytes.t -> int -> int32 -> unit = "%caml_bytes_set32u"
+external unsafe_set_64 : Bytes.t -> int -> int64 -> unit = "%caml_bytes_set64u"
+
+external get_16 : Bytes.t -> int -> int = "%caml_bytes_get16"
+external get_32 : Bytes.t -> int -> int32 = "%caml_bytes_get32"
+external get_64 : Bytes.t -> int -> int64 = "%caml_bytes_get64"
+
+external set_16 : Bytes.t -> int -> int -> unit = "%caml_bytes_set16"
+external set_32 : Bytes.t -> int -> int32 -> unit = "%caml_bytes_set32"
+external set_64 : Bytes.t -> int -> int64 -> unit = "%caml_bytes_set64"
+
+#endif
+
+#include "common_401.ml"
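The `#if OCAML_VERSION` switch above exists because the unaligned-access primitives for `Bytes.t` were renamed from `%caml_string_*` to `%caml_bytes_*` in OCaml 4.07. A minimal sketch of binding the 16-bit native-order primitive directly, as the `#else` branch does (assumes OCaml >= 4.07; the result depends on host endianness, so both orders are accepted below):

```ocaml
(* Direct binding of the 16-bit native-order primitive used above
   (OCaml >= 4.07). The value read depends on host byte order. *)
external get_16 : Bytes.t -> int -> int = "%caml_bytes_get16"

let () =
  let b = Bytes.of_string "\x10\x20" in
  let v = get_16 b 0 in
  assert (v = 0x2010 || v = 0x1020)
```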
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.mli b/tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.mli
new file mode 100644
index 0000000000..25abbb1961
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/endianBytes.cppo.mli
@@ -0,0 +1,124 @@
+(************************************************************************)
+(*  ocplib-endian                                                       *)
+(*                                                                      *)
+(*    Copyright 2014 OCamlPro                                           *)
+(*                                                                      *)
+(*  This file is distributed under the terms of the GNU Lesser General  *)
+(*  Public License as published by the Free Software Foundation; either *)
+(*  version 2.1 of the License, or (at your option) any later version,  *)
+(*  with the OCaml static compilation exception.                        *)
+(*                                                                      *)
+(*  ocplib-endian is distributed in the hope that it will be useful,    *)
+(*  but WITHOUT ANY WARRANTY; without even the implied warranty of      *)
+(*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *)
+(*  GNU General Public License for more details.                        *)
+(*                                                                      *)
+(************************************************************************)
+
+module type EndianBytesSig = sig
+  (** Functions reading according to Big Endian byte order *)
+
+  val get_char : Bytes.t -> int -> char
+  (** [get_char buff i] reads 1 byte at offset i as a char *)
+
+  val get_uint8 : Bytes.t -> int -> int
+  (** [get_uint8 buff i] reads 1 byte at offset i as an unsigned int of 8
+  bits. i.e. It returns a value between 0 and 2^8-1 *)
+
+  val get_int8 : Bytes.t -> int -> int
+  (** [get_int8 buff i] reads 1 byte at offset i as a signed int of 8
+  bits. i.e. It returns a value between -2^7 and 2^7-1 *)
+
+  val get_uint16 : Bytes.t -> int -> int
+  (** [get_uint16 buff i] reads 2 bytes at offset i as an unsigned int
+  of 16 bits. i.e. It returns a value between 0 and 2^16-1 *)
+
+  val get_int16 : Bytes.t -> int -> int
+  (** [get_int16 buff i] reads 2 bytes at offset i as a signed int of
+  16 bits. i.e. It returns a value between -2^15 and 2^15-1 *)
+
+  val get_int32 : Bytes.t -> int -> int32
+  (** [get_int32 buff i] reads 4 bytes at offset i as an int32. *)
+
+  val get_int64 : Bytes.t -> int -> int64
+  (** [get_int64 buff i] reads 8 bytes at offset i as an int64. *)
+
+  val get_float : Bytes.t -> int -> float
+  (** [get_float buff i] is equivalent to
+      [Int32.float_of_bits (get_int32 buff i)] *)
+
+  val get_double : Bytes.t -> int -> float
+  (** [get_double buff i] is equivalent to
+      [Int64.float_of_bits (get_int64 buff i)] *)
+
+  val set_char : Bytes.t -> int -> char -> unit
+  (** [set_char buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_int8 : Bytes.t -> int -> int -> unit
+  (** [set_int8 buff i v] writes the least significant 8 bits of [v]
+  to [buff] at offset [i] *)
+
+  val set_int16 : Bytes.t -> int -> int -> unit
+  (** [set_int16 buff i v] writes the least significant 16 bits of [v]
+  to [buff] at offset [i] *)
+
+  val set_int32 : Bytes.t -> int -> int32 -> unit
+  (** [set_int32 buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_int64 : Bytes.t -> int -> int64 -> unit
+  (** [set_int64 buff i v] writes [v] to [buff] at offset [i] *)
+
+  val set_float : Bytes.t -> int -> float -> unit
+  (** [set_float buff i v] is equivalent to
+      [set_int32 buff i (Int32.bits_of_float v)] *)
+
+  val set_double : Bytes.t -> int -> float -> unit
+  (** [set_double buff i v] is equivalent to
+      [set_int64 buff i (Int64.bits_of_float v)] *)
+
+end
+
+module BigEndian : sig
+  (** Functions reading according to Big Endian byte order *)
+
+  include EndianBytesSig
+
+end
+
+module BigEndian_unsafe : sig
+  (** Functions reading according to Big Endian byte order without
+  checking for overflow *)
+
+  include EndianBytesSig
+
+end
+
+module LittleEndian : sig
+  (** Functions reading according to Little Endian byte order *)
+
+  include EndianBytesSig
+
+end
+
+module LittleEndian_unsafe : sig
+  (** Functions reading according to Little Endian byte order without
+  checking for overflow *)
+
+  include EndianBytesSig
+
+end
+
+module NativeEndian : sig
+  (** Functions reading according to machine endianness *)
+
+  include EndianBytesSig
+
+end
+
+module NativeEndian_unsafe : sig
+  (** Functions reading according to machine endianness without
+  checking for overflow *)
+
+  include EndianBytesSig
+
+end
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.ml b/tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.ml
new file mode 100644
index 0000000000..df8ccd4072
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.ml
@@ -0,0 +1,118 @@
+(************************************************************************)
+(*  ocplib-endian                                                       *)
+(*                                                                      *)
+(*    Copyright 2012 OCamlPro                                           *)
+(*                                                                      *)
+(*  This file is distributed under the terms of the GNU Lesser General  *)
+(*  Public License as published by the Free Software Foundation; either *)
+(*  version 2.1 of the License, or (at your option) any later version,  *)
+(*  with the OCaml static compilation exception.                        *)
+(*                                                                      *)
+(*  ocplib-endian is distributed in the hope that it will be useful,    *)
+(*  but WITHOUT ANY WARRANTY; without even the implied warranty of      *)
+(*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *)
+(*  GNU General Public License for more details.                        *)
+(*                                                                      *)
+(************************************************************************)
+
+module type EndianStringSig = sig
+  (** Functions reading according to Big Endian byte order *)
+
+  val get_char : string -> int -> char
+  (** [get_char buff i] reads 1 byte at offset i as a char *)
+
+  val get_uint8 : string -> int -> int
+  (** [get_uint8 buff i] reads 1 byte at offset i as an unsigned int of 8
+  bits. i.e. It returns a value between 0 and 2^8-1 *)
+
+  val get_int8 : string -> int -> int
+  (** [get_int8 buff i] reads 1 byte at offset i as a signed int of 8
+  bits. i.e. It returns a value between -2^7 and 2^7-1 *)
+
+  val get_uint16 : string -> int -> int
+  (** [get_uint16 buff i] reads 2 bytes at offset i as an unsigned int
+  of 16 bits. i.e. It returns a value between 0 and 2^16-1 *)
+
+  val get_int16 : string -> int -> int
+  (** [get_int16 buff i] reads 2 bytes at offset i as a signed int of
+  16 bits. i.e. It returns a value between -2^15 and 2^15-1 *)
+
+  val get_int32 : string -> int -> int32
+  (** [get_int32 buff i] reads 4 bytes at offset i as an int32. *)
+
+  val get_int64 : string -> int -> int64
+  (** [get_int64 buff i] reads 8 bytes at offset i as an int64. *)
+
+  val get_float : string -> int -> float
+  (** [get_float buff i] is equivalent to
+      [Int32.float_of_bits (get_int32 buff i)] *)
+
+  val get_double : string -> int -> float
+  (** [get_double buff i] is equivalent to
+      [Int64.float_of_bits (get_int64 buff i)] *)
+
+  val set_char : Bytes.t -> int -> char -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_char}. *)
+
+  val set_int8 : Bytes.t -> int -> int -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_int8}. *)
+
+  val set_int16 : Bytes.t -> int -> int -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_int16}. *)
+
+  val set_int32 : Bytes.t -> int -> int32 -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_int32}. *)
+
+  val set_int64 : Bytes.t -> int -> int64 -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_int64}. *)
+
+  val set_float : Bytes.t -> int -> float -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_float}. *)
+
+  val set_double : Bytes.t -> int -> float -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_double}. *)
+
+end
+
+let get_char (s:string) off =
+  String.get s off
+let set_char (s:Bytes.t) off v =
+  Bytes.set s off v
+let unsafe_get_char (s:string) off =
+  String.unsafe_get s off
+let unsafe_set_char (s:Bytes.t) off v =
+  Bytes.unsafe_set s off v
+
+#include "common.ml"
+
+external unsafe_get_16 : string -> int -> int = "%caml_string_get16u"
+external unsafe_get_32 : string -> int -> int32 = "%caml_string_get32u"
+external unsafe_get_64 : string -> int -> int64 = "%caml_string_get64u"
+
+external get_16 : string -> int -> int = "%caml_string_get16"
+external get_32 : string -> int -> int32 = "%caml_string_get32"
+external get_64 : string -> int -> int64 = "%caml_string_get64"
+
+#if OCAML_VERSION < (4, 07, 0)
+
+external unsafe_set_16 : Bytes.t -> int -> int -> unit = "%caml_string_set16u"
+external unsafe_set_32 : Bytes.t -> int -> int32 -> unit = "%caml_string_set32u"
+external unsafe_set_64 : Bytes.t -> int -> int64 -> unit = "%caml_string_set64u"
+
+external set_16 : Bytes.t -> int -> int -> unit = "%caml_string_set16"
+external set_32 : Bytes.t -> int -> int32 -> unit = "%caml_string_set32"
+external set_64 : Bytes.t -> int -> int64 -> unit = "%caml_string_set64"
+
+#else
+
+external unsafe_set_16 : Bytes.t -> int -> int -> unit = "%caml_bytes_set16u"
+external unsafe_set_32 : Bytes.t -> int -> int32 -> unit = "%caml_bytes_set32u"
+external unsafe_set_64 : Bytes.t -> int -> int64 -> unit = "%caml_bytes_set64u"
+
+external set_16 : Bytes.t -> int -> int -> unit = "%caml_bytes_set16"
+external set_32 : Bytes.t -> int -> int32 -> unit = "%caml_bytes_set32"
+external set_64 : Bytes.t -> int -> int64 -> unit = "%caml_bytes_set64"
+
+#endif
+
+#include "common_401.ml"
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.mli b/tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.mli
new file mode 100644
index 0000000000..2b703d6d6b
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/endianString.cppo.mli
@@ -0,0 +1,121 @@
+(************************************************************************)
+(*  ocplib-endian                                                       *)
+(*                                                                      *)
+(*    Copyright 2012 OCamlPro                                           *)
+(*                                                                      *)
+(*  This file is distributed under the terms of the GNU Lesser General  *)
+(*  Public License as published by the Free Software Foundation; either *)
+(*  version 2.1 of the License, or (at your option) any later version,  *)
+(*  with the OCaml static compilation exception.                        *)
+(*                                                                      *)
+(*  ocplib-endian is distributed in the hope that it will be useful,    *)
+(*  but WITHOUT ANY WARRANTY; without even the implied warranty of      *)
+(*  MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the       *)
+(*  GNU General Public License for more details.                        *)
+(*                                                                      *)
+(************************************************************************)
+
+module type EndianStringSig = sig
+  (** Functions reading according to Big Endian byte order *)
+
+  val get_char : string -> int -> char
+  (** [get_char buff i] reads 1 byte at offset i as a char *)
+
+  val get_uint8 : string -> int -> int
+  (** [get_uint8 buff i] reads 1 byte at offset i as an unsigned int of 8
+  bits. i.e. It returns a value between 0 and 2^8-1 *)
+
+  val get_int8 : string -> int -> int
+  (** [get_int8 buff i] reads 1 byte at offset i as a signed int of 8
+  bits. i.e. It returns a value between -2^7 and 2^7-1 *)
+
+  val get_uint16 : string -> int -> int
+  (** [get_uint16 buff i] reads 2 bytes at offset i as an unsigned int
+  of 16 bits. i.e. It returns a value between 0 and 2^16-1 *)
+
+  val get_int16 : string -> int -> int
+  (** [get_int16 buff i] reads 2 bytes at offset i as a signed int of
+  16 bits. i.e. It returns a value between -2^15 and 2^15-1 *)
+
+  val get_int32 : string -> int -> int32
+  (** [get_int32 buff i] reads 4 bytes at offset i as an int32. *)
+
+  val get_int64 : string -> int -> int64
+  (** [get_int64 buff i] reads 8 bytes at offset i as an int64. *)
+
+  val get_float : string -> int -> float
+  (** [get_float buff i] is equivalent to
+      [Int32.float_of_bits (get_int32 buff i)] *)
+
+  val get_double : string -> int -> float
+  (** [get_double buff i] is equivalent to
+      [Int64.float_of_bits (get_int64 buff i)] *)
+
+  val set_char : Bytes.t -> int -> char -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_char}. *)
+
+  val set_int8 : Bytes.t -> int -> int -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_int8}. *)
+
+  val set_int16 : Bytes.t -> int -> int -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_int16}. *)
+
+  val set_int32 : Bytes.t -> int -> int32 -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_int32}. *)
+
+  val set_int64 : Bytes.t -> int -> int64 -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_int64}. *)
+
+  val set_float : Bytes.t -> int -> float -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_float}. *)
+
+  val set_double : Bytes.t -> int -> float -> unit
+  (** @deprecated This is a deprecated alias of {!endianBytes.set_double}. *)
+
+end
+
+module BigEndian : sig
+  (** Functions reading according to Big Endian byte order *)
+
+  include EndianStringSig
+
+end
+
+module BigEndian_unsafe : sig
+  (** Functions reading according to Big Endian byte order without
+  checking for overflow *)
+
+  include EndianStringSig
+
+end
+
+module LittleEndian : sig
+  (** Functions reading according to Little Endian byte order *)
+
+  include EndianStringSig
+
+end
+
+module LittleEndian_unsafe : sig
+  (** Functions reading according to Little Endian byte order without
+  checking for overflow *)
+
+  include EndianStringSig
+
+end
+
+module NativeEndian : sig
+  (** Functions reading according to machine endianness *)
+
+  include EndianStringSig
+
+end
+
+module NativeEndian_unsafe : sig
+  (** Functions reading according to machine endianness without
+  checking for overflow *)
+
+  include EndianStringSig
+
+end
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/le_ocaml_401.ml b/tools/ocaml/duniverse/ocplib-endian/src/le_ocaml_401.ml
new file mode 100644
index 0000000000..b65184ca8d
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/le_ocaml_401.ml
@@ -0,0 +1,32 @@
+  let get_uint16 s off =
+    if Sys.big_endian
+    then swap16 (get_16 s off)
+    else get_16 s off
+
+  let get_int16 s off =
+   ((get_uint16 s off) lsl ( Sys.word_size - 17 )) asr ( Sys.word_size - 17 )
+
+  let get_int32 s off =
+    if Sys.big_endian
+    then swap32 (get_32 s off)
+    else get_32 s off
+
+  let get_int64 s off =
+    if Sys.big_endian
+    then swap64 (get_64 s off)
+    else get_64 s off
+
+  let set_int16 s off v =
+    if Sys.big_endian
+    then (set_16 s off (swap16 v))
+    else set_16 s off v
+
+  let set_int32 s off v =
+    if Sys.big_endian
+    then set_32 s off (swap32 v)
+    else set_32 s off v
+
+  let set_int64 s off v =
+    if Sys.big_endian
+    then set_64 s off (swap64 v)
+    else set_64 s off v
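[Editor's note, outside the patch: the `get_int16` definition above sign-extends a 16-bit value with a shift pair: `lsl (Sys.word_size - 17)` moves bit 15 of the value onto the sign bit of OCaml's native (word_size - 1 bit) int, and the matching `asr` shifts back arithmetically, replicating that bit. A standalone sketch of the same trick; the helper name `sign_extend_16` is hypothetical:]

```ocaml
(* Sign-extend a 16-bit quantity held in a native OCaml int:
   park bit 15 on the int's sign bit, then shift back with asr
   so the sign bit is replicated into the high bits. *)
let sign_extend_16 x =
  (x lsl (Sys.word_size - 17)) asr (Sys.word_size - 17)

let () =
  assert (sign_extend_16 0x1234 = 0x1234);   (* bit 15 clear: unchanged *)
  assert (sign_extend_16 0xFFFF = -1);       (* all 16 bits set: -1 *)
  assert (sign_extend_16 0x8000 = -32768)    (* smallest int16 *)
```

This works identically on 32-bit and 64-bit builds because the shift amount is derived from `Sys.word_size` rather than hard-coded.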
diff --git a/tools/ocaml/duniverse/ocplib-endian/src/ne_ocaml_401.ml b/tools/ocaml/duniverse/ocplib-endian/src/ne_ocaml_401.ml
new file mode 100644
index 0000000000..2348135809
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/src/ne_ocaml_401.ml
@@ -0,0 +1,20 @@
+  let get_uint16 s off =
+    get_16 s off
+
+  let get_int16 s off =
+   ((get_uint16 s off) lsl ( Sys.word_size - 17 )) asr ( Sys.word_size - 17 )
+
+  let get_int32 s off =
+    get_32 s off
+
+  let get_int64 s off =
+    get_64 s off
+
+  let set_int16 s off v =
+    set_16 s off v
+
+  let set_int32 s off v =
+    set_32 s off v
+
+  let set_int64 s off v =
+    set_64 s off v
diff --git a/tools/ocaml/duniverse/ocplib-endian/tests/bench.ml b/tools/ocaml/duniverse/ocplib-endian/tests/bench.ml
new file mode 100644
index 0000000000..8b0c88d192
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/tests/bench.ml
@@ -0,0 +1,436 @@
+
+let buffer_size = 10000
+let loops = 10000
+
+let allocdiff =
+  let stat1 = Gc.quick_stat () in
+  let stat2 = Gc.quick_stat () in
+  (stat2.Gc.minor_words -. stat1.Gc.minor_words)
+
+let test_fun s f =
+  let t1 = Unix.gettimeofday () in
+  let stat1 = Gc.quick_stat () in
+  f ();
+  let stat2 = Gc.quick_stat () in
+  let t2 = Unix.gettimeofday () in
+  Printf.printf "%s: time %f alloc: %f\n%!" s (t2 -. t1)
+    (stat2.Gc.minor_words -. stat1.Gc.minor_words -. allocdiff)
+
+module Bytes_test = struct
+  open EndianBytes
+  module BE = BigEndian
+  module LE = LittleEndian
+
+  let buffer = Bytes.create buffer_size
+
+  let loop_read_uint16_be () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(BE.get_uint16 buffer i)
+    done
+
+  let loop_read_uint16_le () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(LE.get_uint16 buffer i)
+    done
+
+  let loop_read_int16_be () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(BE.get_int16 buffer i)
+    done
+
+  let loop_read_int16_le () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(LE.get_int16 buffer i)
+    done
+
+  let loop_read_int32_be () =
+    for i = 0 to Bytes.length buffer - 4 do
+      ignore(Int32.to_int (BE.get_int32 buffer i))
+    done
+
+  let loop_read_int32_le () =
+    for i = 0 to Bytes.length buffer - 4 do
+      ignore(Int32.to_int (LE.get_int32 buffer i))
+    done
+
+  let loop_read_int64_be () =
+    for i = 0 to Bytes.length buffer - 8 do
+      ignore(Int64.to_int (BE.get_int64 buffer i))
+    done
+
+  let loop_read_int64_le () =
+    for i = 0 to Bytes.length buffer - 8 do
+      ignore(Int64.to_int (LE.get_int64 buffer i))
+    done
+
+  let loop_write_int16_be () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(BE.set_int16 buffer i 10)
+    done
+
+  let loop_write_int16_le () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(LE.set_int16 buffer i 10)
+    done
+
+  let loop_write_int32_be () =
+    for i = 0 to Bytes.length buffer - 4 do
+      ignore((BE.set_int32 buffer i) 10l)
+    done
+
+  let loop_write_int32_le () =
+    for i = 0 to Bytes.length buffer - 4 do
+      ignore((LE.set_int32 buffer i) 10l)
+    done
+
+  let loop_write_int64_be () =
+    for i = 0 to Bytes.length buffer - 8 do
+      ignore((BE.set_int64 buffer i) 10L)
+    done
+
+  let loop_write_int64_le () =
+    for i = 0 to Bytes.length buffer - 8 do
+      ignore((LE.set_int64 buffer i) 10L)
+    done
+
+  let do_loop f () =
+    for i = 0 to loops - 1 do
+      f ()
+    done
+
+  let run s f = test_fun s (do_loop f)
+
+  let run_test () =
+    run "loop_read_uint16_be" loop_read_uint16_be;
+    run "loop_read_uint16_le" loop_read_uint16_le;
+    run "loop_read_int16_be" loop_read_int16_be;
+    run "loop_read_int16_le" loop_read_int16_le;
+    run "loop_read_int32_be" loop_read_int32_be;
+    run "loop_read_int32_le" loop_read_int32_le;
+    run "loop_read_int64_be" loop_read_int64_be;
+    run "loop_read_int64_le" loop_read_int64_le;
+    run "loop_write_int16_be" loop_write_int16_be;
+    run "loop_write_int16_le" loop_write_int16_le;
+    run "loop_write_int32_be" loop_write_int32_be;
+    run "loop_write_int32_le" loop_write_int32_le;
+    run "loop_write_int64_be" loop_write_int64_be;
+    run "loop_write_int64_le" loop_write_int64_le
+
+end
+
+module Bytes_unsafe_test = struct
+  open EndianBytes
+  module BE = BigEndian_unsafe
+  module LE = LittleEndian_unsafe
+
+  let buffer = Bytes.create buffer_size
+
+  let loop_read_uint16_be () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(BE.get_uint16 buffer i)
+    done
+
+  let loop_read_uint16_le () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(LE.get_uint16 buffer i)
+    done
+
+  let loop_read_int16_be () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(BE.get_int16 buffer i)
+    done
+
+  let loop_read_int16_le () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(LE.get_int16 buffer i)
+    done
+
+  let loop_read_int32_be () =
+    for i = 0 to Bytes.length buffer - 4 do
+      ignore(Int32.to_int (BE.get_int32 buffer i))
+    done
+
+  let loop_read_int32_le () =
+    for i = 0 to Bytes.length buffer - 4 do
+      ignore(Int32.to_int (LE.get_int32 buffer i))
+    done
+
+  let loop_read_int64_be () =
+    for i = 0 to Bytes.length buffer - 8 do
+      ignore(Int64.to_int (BE.get_int64 buffer i))
+    done
+
+  let loop_read_int64_le () =
+    for i = 0 to Bytes.length buffer - 8 do
+      ignore(Int64.to_int (LE.get_int64 buffer i))
+    done
+
+  let loop_write_int16_be () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(BE.set_int16 buffer i 10)
+    done
+
+  let loop_write_int16_le () =
+    for i = 0 to Bytes.length buffer - 2 do
+      ignore(LE.set_int16 buffer i 10)
+    done
+
+  let loop_write_int32_be () =
+    for i = 0 to Bytes.length buffer - 4 do
+      ignore((BE.set_int32 buffer i) 10l)
+    done
+
+  let loop_write_int32_le () =
+    for i = 0 to Bytes.length buffer - 4 do
+      ignore((LE.set_int32 buffer i) 10l)
+    done
+
+  let loop_write_int64_be () =
+    for i = 0 to Bytes.length buffer - 8 do
+      ignore((BE.set_int64 buffer i) 10L)
+    done
+
+  let loop_write_int64_le () =
+    for i = 0 to Bytes.length buffer - 8 do
+      ignore((LE.set_int64 buffer i) 10L)
+    done
+
+  let do_loop f () =
+    for i = 0 to loops - 1 do
+      f ()
+    done
+
+  let run s f = test_fun s (do_loop f)
+
+  let run_test () =
+    run "loop_read_uint16_be" loop_read_uint16_be;
+    run "loop_read_uint16_le" loop_read_uint16_le;
+    run "loop_read_int16_be" loop_read_int16_be;
+    run "loop_read_int16_le" loop_read_int16_le;
+    run "loop_read_int32_be" loop_read_int32_be;
+    run "loop_read_int32_le" loop_read_int32_le;
+    run "loop_read_int64_be" loop_read_int64_be;
+    run "loop_read_int64_le" loop_read_int64_le;
+    run "loop_write_int16_be" loop_write_int16_be;
+    run "loop_write_int16_le" loop_write_int16_le;
+    run "loop_write_int32_be" loop_write_int32_be;
+    run "loop_write_int32_le" loop_write_int32_le;
+    run "loop_write_int64_be" loop_write_int64_be;
+    run "loop_write_int64_le" loop_write_int64_le
+
+end
+
+module Bigstring_test = struct
+  open EndianBigstring
+  module BE = BigEndian
+  module LE = LittleEndian
+  open Bigarray
+  let buffer = Array1.create char c_layout buffer_size
+
+  let loop_read_uint16_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(BE.get_uint16 buffer i)
+    done
+
+  let loop_read_uint16_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(LE.get_uint16 buffer i)
+    done
+
+  let loop_read_int16_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(BE.get_int16 buffer i)
+    done
+
+  let loop_read_int16_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(LE.get_int16 buffer i)
+    done
+
+  let loop_read_int32_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 4 do
+      ignore(Int32.to_int (BE.get_int32 buffer i))
+    done
+
+  let loop_read_int32_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 4 do
+      ignore(Int32.to_int (LE.get_int32 buffer i))
+    done
+
+  let loop_read_int64_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 8 do
+      ignore(Int64.to_int (BE.get_int64 buffer i))
+    done
+
+  let loop_read_int64_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 8 do
+      ignore(Int64.to_int (LE.get_int64 buffer i))
+    done
+
+  let loop_write_int16_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(BE.set_int16 buffer i 10)
+    done
+
+  let loop_write_int16_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(LE.set_int16 buffer i 10)
+    done
+
+  let loop_write_int32_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 4 do
+      ignore((BE.set_int32 buffer i) 10l)
+    done
+
+  let loop_write_int32_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 4 do
+      ignore((LE.set_int32 buffer i) 10l)
+    done
+
+  let loop_write_int64_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 8 do
+      ignore((BE.set_int64 buffer i) 10L)
+    done
+
+  let loop_write_int64_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 8 do
+      ignore((LE.set_int64 buffer i) 10L)
+    done
+
+  let do_loop f () =
+    for i = 0 to loops - 1 do
+      f ()
+    done
+
+  let run s f = test_fun s (do_loop f)
+
+  let run_test () =
+    run "loop_read_uint16_be" loop_read_uint16_be;
+    run "loop_read_uint16_le" loop_read_uint16_le;
+    run "loop_read_int16_be" loop_read_int16_be;
+    run "loop_read_int16_le" loop_read_int16_le;
+    run "loop_read_int32_be" loop_read_int32_be;
+    run "loop_read_int32_le" loop_read_int32_le;
+    run "loop_read_int64_be" loop_read_int64_be;
+    run "loop_read_int64_le" loop_read_int64_le;
+    run "loop_write_int16_be" loop_write_int16_be;
+    run "loop_write_int16_le" loop_write_int16_le;
+    run "loop_write_int32_be" loop_write_int32_be;
+    run "loop_write_int32_le" loop_write_int32_le;
+    run "loop_write_int64_be" loop_write_int64_be;
+    run "loop_write_int64_le" loop_write_int64_le
+
+end
+
+module Bigstring_unsafe_test = struct
+  open EndianBigstring
+  module BE = BigEndian_unsafe
+  module LE = LittleEndian_unsafe
+  open Bigarray
+  let buffer = Array1.create char c_layout buffer_size
+
+  let loop_read_uint16_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(BE.get_uint16 buffer i)
+    done
+
+  let loop_read_uint16_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(LE.get_uint16 buffer i)
+    done
+
+  let loop_read_int16_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(BE.get_int16 buffer i)
+    done
+
+  let loop_read_int16_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(LE.get_int16 buffer i)
+    done
+
+  let loop_read_int32_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 4 do
+      ignore(Int32.to_int (BE.get_int32 buffer i))
+    done
+
+  let loop_read_int32_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 4 do
+      ignore(Int32.to_int (LE.get_int32 buffer i))
+    done
+
+  let loop_read_int64_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 8 do
+      ignore(Int64.to_int (BE.get_int64 buffer i))
+    done
+
+  let loop_read_int64_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 8 do
+      ignore(Int64.to_int (LE.get_int64 buffer i))
+    done
+
+  let loop_write_int16_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(BE.set_int16 buffer i 10)
+    done
+
+  let loop_write_int16_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 2 do
+      ignore(LE.set_int16 buffer i 10)
+    done
+
+  let loop_write_int32_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 4 do
+      ignore((BE.set_int32 buffer i) 10l)
+    done
+
+  let loop_write_int32_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 4 do
+      ignore((LE.set_int32 buffer i) 10l)
+    done
+
+  let loop_write_int64_be () =
+    for i = 0 to Bigarray.Array1.dim buffer - 8 do
+      ignore((BE.set_int64 buffer i) 10L)
+    done
+
+  let loop_write_int64_le () =
+    for i = 0 to Bigarray.Array1.dim buffer - 8 do
+      ignore((LE.set_int64 buffer i) 10L)
+    done
+
+  let do_loop f () =
+    for i = 0 to loops - 1 do
+      f ()
+    done
+
+  let run s f = test_fun s (do_loop f)
+
+  let run_test () =
+    run "loop_read_uint16_be" loop_read_uint16_be;
+    run "loop_read_uint16_le" loop_read_uint16_le;
+    run "loop_read_int16_be" loop_read_int16_be;
+    run "loop_read_int16_le" loop_read_int16_le;
+    run "loop_read_int32_be" loop_read_int32_be;
+    run "loop_read_int32_le" loop_read_int32_le;
+    run "loop_read_int64_be" loop_read_int64_be;
+    run "loop_read_int64_le" loop_read_int64_le;
+    run "loop_write_int16_be" loop_write_int16_be;
+    run "loop_write_int16_le" loop_write_int16_le;
+    run "loop_write_int32_be" loop_write_int32_be;
+    run "loop_write_int32_le" loop_write_int32_le;
+    run "loop_write_int64_be" loop_write_int64_be;
+    run "loop_write_int64_le" loop_write_int64_le
+
+end
+
+let () =
+  Printf.printf "safe bytes:\n%!";
+  Bytes_test.run_test ();
+  Printf.printf "unsafe bytes:\n%!";
+  Bytes_unsafe_test.run_test ();
+  Printf.printf "safe bigstring:\n%!";
+  Bigstring_test.run_test ();
+  Printf.printf "unsafe bigstring:\n%!";
+  Bigstring_unsafe_test.run_test ()
diff --git a/tools/ocaml/duniverse/ocplib-endian/tests/dune b/tools/ocaml/duniverse/ocplib-endian/tests/dune
new file mode 100644
index 0000000000..e3e0f17940
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/tests/dune
@@ -0,0 +1,35 @@
+(rule
+ (targets test_string.ml)
+ (deps (:< test_string.cppo.ml))
+ (action (run %{bin:cppo} %{<} -o %{targets})))
+
+(rule
+ (targets test_bytes.ml)
+ (deps (:< test_bytes.cppo.ml))
+ (action (run %{bin:cppo} %{<} -o %{targets})))
+
+(rule
+ (targets test_bigstring.ml)
+ (deps (:< test_bigstring.cppo.ml))
+ (action (run %{bin:cppo} %{<} -o %{targets})))
+
+(library
+ (name tests)
+ (wrapped false)
+ (modules test_string test_bytes test_bigstring)
+ (libraries ocplib-endian ocplib-endian.bigstring bigarray bytes))
+
+(executables
+ (names test)
+ (modules test)
+ (libraries ocplib-endian tests))
+
+(executables
+ (names bench)
+ (modules bench)
+ (libraries ocplib-endian ocplib-endian.bigstring))
+
+(alias
+ (name runtest)
+ (deps (:< test.exe))
+ (action (run %{<})))
diff --git a/tools/ocaml/duniverse/ocplib-endian/tests/test.ml b/tools/ocaml/duniverse/ocplib-endian/tests/test.ml
new file mode 100644
index 0000000000..387fcc16b3
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/tests/test.ml
@@ -0,0 +1,39 @@
+
+let allocdiff =
+  let stat1 = Gc.quick_stat () in
+  let stat2 = Gc.quick_stat () in
+  (stat2.Gc.minor_words -. stat1.Gc.minor_words)
+
+let () =
+  Test_bigstring.test1 ();
+  let stat1 = Gc.quick_stat () in
+  Test_bigstring.test2 ();
+  if Sys.word_size = 64 then Test_bigstring.test_64 ();
+  let stat2 = Gc.quick_stat () in
+  (* with a 32 bit system, int64 must be heap allocated *)
+  if Sys.word_size = 32 then Test_bigstring.test_64 ();
+  let alloc1 = stat2.Gc.minor_words -. stat1.Gc.minor_words -. allocdiff in
+  Printf.printf "bigstring: allocated words %f\n%!" alloc1;
+
+  Test_string.test1 ();
+  let stat1 = Gc.quick_stat () in
+  Test_string.test2 ();
+  if Sys.word_size = 64 then Test_string.test_64 ();
+  let stat2 = Gc.quick_stat () in
+  if Sys.word_size = 32 then Test_string.test_64 ();
+  let alloc2 = stat2.Gc.minor_words -. stat1.Gc.minor_words -. allocdiff in
+  Printf.printf "string: allocated words %f\n%!" alloc2;
+
+  Test_bytes.test1 ();
+  let stat1 = Gc.quick_stat () in
+  Test_bytes.test2 ();
+  if Sys.word_size = 64 then Test_bytes.test_64 ();
+  let stat2 = Gc.quick_stat () in
+  if Sys.word_size = 32 then Test_bytes.test_64 ();
+  let alloc3 = stat2.Gc.minor_words -. stat1.Gc.minor_words -. allocdiff in
+  Printf.printf "bytes: allocated words %f\n%!" alloc3;
+  (* we can only ensure that there are no allocations with the
+     primitives added in OCaml 4.01.0 *)
+  if (alloc1 <> 0. || alloc2 <> 0. || alloc3 <> 0.)
+  then exit 1
+  else exit 0
diff --git a/tools/ocaml/duniverse/ocplib-endian/tests/test_bigstring.cppo.ml b/tools/ocaml/duniverse/ocplib-endian/tests/test_bigstring.cppo.ml
new file mode 100644
index 0000000000..35d926b52c
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/tests/test_bigstring.cppo.ml
@@ -0,0 +1,191 @@
+open Bigarray
+open EndianBigstring
+
+[@@@warning "-52-53"]
+
+module BE = BigEndian
+module LE = LittleEndian
+module NE = NativeEndian
+
+let big_endian = Sys.big_endian
+
+let bigstring_of_string s =
+  let a = Array1.create char c_layout (String.length s) in
+  for i = 0 to String.length s - 1 do
+    a.{i} <- s.[i]
+  done;
+  a
+
+let s = bigstring_of_string (String.make 10 '\x00')
+
+let assert_bound_check2 f v1 v2 =
+  try
+    ignore(f v1 v2);
+    assert false
+  with
+     | Invalid_argument("index out of bounds") -> ()
+
+let assert_bound_check3 f v1 v2 v3 =
+  try
+    ignore(f v1 v2 v3);
+    assert false
+  with
+     | Invalid_argument("index out of bounds") -> ()
+
+let test1 () =
+  assert_bound_check2 BE.get_int8 s (-1);
+  assert_bound_check2 BE.get_int8 s 10;
+  assert_bound_check2 BE.get_uint16 s (-1);
+  assert_bound_check2 BE.get_uint16 s 9;
+  assert_bound_check2 BE.get_int32 s (-1);
+  assert_bound_check2 BE.get_int32 s 7;
+  assert_bound_check2 BE.get_int64 s (-1);
+  assert_bound_check2 BE.get_int64 s 3;
+
+  assert_bound_check3 BE.set_int8 s (-1) 0;
+  assert_bound_check3 BE.set_int8 s 10 0;
+  assert_bound_check3 BE.set_int16 s (-1) 0;
+  assert_bound_check3 BE.set_int16 s 9 0;
+  assert_bound_check3 BE.set_int32 s (-1) 0l;
+  assert_bound_check3 BE.set_int32 s 7 0l;
+  assert_bound_check3 BE.set_int64 s (-1) 0L;
+  assert_bound_check3 BE.set_int64 s 3 0L
+
+let test2 () =
+  BE.set_int8 s 0 63; (* in [0; 127] *)
+  assert( BE.get_uint8 s 0 = 63 );
+  assert( BE.get_int8 s 0 = 63 );
+
+  BE.set_int8 s 0 155; (* in [128; 255] *)
+  assert( BE.get_uint8 s 0 = 155 );
+
+  BE.set_int8 s 0 (-103); (* in [-128; -1] *)
+  assert( BE.get_int8 s 0 = (-103) );
+
+  BE.set_int8 s 0 0x1234; (* outside of the [-127;255] range *)
+  assert( BE.get_uint8 s 0 = 0x34 );
+  assert( BE.get_int8 s 0 = 0x34 );
+
+  BE.set_int8 s 0 0xAACD; (* outside of the [-127;255] range, -0x33 land 0xFF = 0xCD *)
+  assert( BE.get_uint8 s 0 = 0xCD );
+  assert( BE.get_int8 s 0 = (-0x33) );
+
+  BE.set_int16 s 0 0x1234;
+  assert( BE.get_uint16 s 0 = 0x1234 );
+  assert( BE.get_uint16 s 1 = 0x3400 );
+  assert( BE.get_uint16 s 2 = 0 );
+
+  assert( LE.get_uint16 s 0 = 0x3412 );
+  assert( LE.get_uint16 s 1 = 0x0034 );
+  assert( LE.get_uint16 s 2 = 0 );
+
+  if big_endian then begin
+    assert( BE.get_uint16 s 0 = NE.get_uint16 s 0 );
+    assert( BE.get_uint16 s 1 = NE.get_uint16 s 1 );
+    assert( BE.get_uint16 s 2 = NE.get_uint16 s 2 );
+  end
+  else begin
+    assert( LE.get_uint16 s 0 = NE.get_uint16 s 0 );
+    assert( LE.get_uint16 s 1 = NE.get_uint16 s 1 );
+    assert( LE.get_uint16 s 2 = NE.get_uint16 s 2 );
+  end;
+
+  assert( BE.get_int16 s 0 = 0x1234 );
+  assert( BE.get_int16 s 1 = 0x3400 );
+  assert( BE.get_int16 s 2 = 0 );
+
+  BE.set_int16 s 0 0xFEDC;
+  assert( BE.get_uint16 s 0 = 0xFEDC );
+  assert( BE.get_uint16 s 1 = 0xDC00 );
+  assert( BE.get_uint16 s 2 = 0 );
+
+  assert( LE.get_uint16 s 0 = 0xDCFE );
+  assert( LE.get_uint16 s 1 = 0x00DC );
+  assert( LE.get_uint16 s 2 = 0 );
+
+  if big_endian then begin
+    assert( BE.get_uint16 s 0 = NE.get_uint16 s 0 );
+    assert( BE.get_uint16 s 1 = NE.get_uint16 s 1 );
+    assert( BE.get_uint16 s 2 = NE.get_uint16 s 2 );
+  end
+  else begin
+    assert( LE.get_uint16 s 0 = NE.get_uint16 s 0 );
+    assert( LE.get_uint16 s 1 = NE.get_uint16 s 1 );
+    assert( LE.get_uint16 s 2 = NE.get_uint16 s 2 );
+  end;
+
+  assert( BE.get_int16 s 0 = -292 );
+  assert( BE.get_int16 s 1 = -9216 );
+  assert( BE.get_int16 s 2 = 0 );
+
+  if big_endian
+  then begin
+    NE.set_int16 s 0 0x1234;
+    assert( BE.get_uint16 s 0 = 0x1234 );
+    assert( BE.get_uint16 s 1 = 0x3400 );
+    assert( BE.get_uint16 s 2 = 0 )
+  end;
+
+  LE.set_int16 s 0 0x1234;
+  assert( BE.get_uint16 s 0 = 0x3412 );
+  assert( BE.get_uint16 s 1 = 0x1200 );
+  assert( BE.get_uint16 s 2 = 0 );
+
+  if not big_endian
+  then begin
+    NE.set_int16 s 0 0x1234;
+    assert( BE.get_uint16 s 0 = 0x3412 );
+    assert( BE.get_uint16 s 1 = 0x1200 );
+    assert( BE.get_uint16 s 2 = 0 )
+  end;
+
+  LE.set_int16 s 0 0xFEDC;
+  assert( LE.get_uint16 s 0 = 0xFEDC );
+  assert( LE.get_uint16 s 1 = 0x00FE );
+  assert( LE.get_uint16 s 2 = 0 );
+
+  BE.set_int32 s 0 0x12345678l;
+  assert( BE.get_int32 s 0 = 0x12345678l );
+  assert( LE.get_int32 s 0 = 0x78563412l );
+  if big_endian
+  then assert( BE.get_int32 s 0 = NE.get_int32 s 0 )
+  else assert( LE.get_int32 s 0 = NE.get_int32 s 0 );
+
+  LE.set_int32 s 0 0x12345678l;
+  assert( LE.get_int32 s 0 = 0x12345678l );
+  assert( BE.get_int32 s 0 = 0x78563412l );
+
+  if big_endian
+  then assert( BE.get_int32 s 0 = NE.get_int32 s 0 )
+  else assert( LE.get_int32 s 0 = NE.get_int32 s 0 );
+
+  NE.set_int32 s 0 0x12345678l;
+  if big_endian
+  then assert( BE.get_int32 s 0 = 0x12345678l )
+  else assert( LE.get_int32 s 0 = 0x12345678l );
+
+  ()
+
+let test_64 () =
+  BE.set_int64 s 0 0x1234567890ABCDEFL;
+  assert( BE.get_int64 s 0 = 0x1234567890ABCDEFL );
+  assert( LE.get_int64 s 0 = 0xEFCDAB9078563412L );
+
+  if big_endian
+  then assert( BE.get_int64 s 0 = NE.get_int64 s 0 )
+  else assert( LE.get_int64 s 0 = NE.get_int64 s 0 );
+
+  LE.set_int64 s 0 0x1234567890ABCDEFL;
+  assert( LE.get_int64 s 0 = 0x1234567890ABCDEFL );
+  assert( BE.get_int64 s 0 = 0xEFCDAB9078563412L );
+
+  if big_endian
+  then assert( BE.get_int64 s 0 = NE.get_int64 s 0 )
+  else assert( LE.get_int64 s 0 = NE.get_int64 s 0 );
+
+  NE.set_int64 s 0 0x1234567890ABCDEFL;
+  if big_endian
+  then assert( BE.get_int64 s 0 = 0x1234567890ABCDEFL )
+  else assert( LE.get_int64 s 0 = 0x1234567890ABCDEFL );
+
+  ()
diff --git a/tools/ocaml/duniverse/ocplib-endian/tests/test_bytes.cppo.ml b/tools/ocaml/duniverse/ocplib-endian/tests/test_bytes.cppo.ml
new file mode 100644
index 0000000000..f51b9523d2
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/tests/test_bytes.cppo.ml
@@ -0,0 +1,185 @@
+open EndianBytes
+[@@@warning "-52"]
+
+let to_t x = x
+(* do not allocate to avoid breaking tests *)
+
+module BE = BigEndian
+module LE = LittleEndian
+module NE = NativeEndian
+
+let big_endian = Sys.big_endian
+
+let s = Bytes.make 10 '\x00'
+
+let assert_bound_check2 f v1 v2 =
+  try
+    ignore(f v1 v2);
+    assert false
+  with
+     | Invalid_argument("index out of bounds") -> ()
+
+let assert_bound_check3 f v1 v2 v3 =
+  try
+    ignore(f v1 v2 v3);
+    assert false
+  with
+     | Invalid_argument("index out of bounds") -> ()
+
+let test1 () =
+  assert_bound_check2 BE.get_int8 (to_t s) (-1);
+  assert_bound_check2 BE.get_int8 (to_t s) 10;
+  assert_bound_check2 BE.get_uint16 (to_t s) (-1);
+  assert_bound_check2 BE.get_uint16 (to_t s) 9;
+  assert_bound_check2 BE.get_int32 (to_t s) (-1);
+  assert_bound_check2 BE.get_int32 (to_t s) 7;
+  assert_bound_check2 BE.get_int64 (to_t s) (-1);
+  assert_bound_check2 BE.get_int64 (to_t s) 3;
+
+  assert_bound_check3 BE.set_int8 s (-1) 0;
+  assert_bound_check3 BE.set_int8 s 10 0;
+  assert_bound_check3 BE.set_int16 s (-1) 0;
+  assert_bound_check3 BE.set_int16 s 9 0;
+  assert_bound_check3 BE.set_int32 s (-1) 0l;
+  assert_bound_check3 BE.set_int32 s 7 0l;
+  assert_bound_check3 BE.set_int64 s (-1) 0L;
+  assert_bound_check3 BE.set_int64 s 3 0L
+
+let test2 () =
+  BE.set_int8 s 0 63; (* in [0; 127] *)
+  assert( BE.get_uint8 (to_t s) 0 = 63 );
+  assert( BE.get_int8 (to_t s) 0 = 63 );
+
+  BE.set_int8 s 0 155; (* in [128; 255] *)
+  assert( BE.get_uint8 (to_t s) 0 = 155 );
+
+  BE.set_int8 s 0 (-103); (* in [-128; -1] *)
+  assert( BE.get_int8 (to_t s) 0 = (-103) );
+
+  BE.set_int8 s 0 0x1234; (* outside of the [-128;255] range *)
+  assert( BE.get_uint8 (to_t s) 0 = 0x34 );
+  assert( BE.get_int8 (to_t s) 0 = 0x34 );
+
+  BE.set_int8 s 0 0xAACD; (* outside of the [-128;255] range, -0x33 land 0xFF = 0xCD *)
+  assert( BE.get_uint8 (to_t s) 0 = 0xCD );
+  assert( BE.get_int8 (to_t s) 0 = (-0x33) );
+
+  BE.set_int16 s 0 0x1234;
+  assert( BE.get_uint16 (to_t s) 0 = 0x1234 );
+  assert( BE.get_uint16 (to_t s) 1 = 0x3400 );
+  assert( BE.get_uint16 (to_t s) 2 = 0 );
+
+  assert( LE.get_uint16 (to_t s) 0 = 0x3412 );
+  assert( LE.get_uint16 (to_t s) 1 = 0x0034 );
+  assert( LE.get_uint16 (to_t s) 2 = 0 );
+
+  if big_endian then begin
+    assert( BE.get_uint16 (to_t s) 0 = NE.get_uint16 (to_t s) 0 );
+    assert( BE.get_uint16 (to_t s) 1 = NE.get_uint16 (to_t s) 1 );
+    assert( BE.get_uint16 (to_t s) 2 = NE.get_uint16 (to_t s) 2 );
+  end
+  else begin
+    assert( LE.get_uint16 (to_t s) 0 = NE.get_uint16 (to_t s) 0 );
+    assert( LE.get_uint16 (to_t s) 1 = NE.get_uint16 (to_t s) 1 );
+    assert( LE.get_uint16 (to_t s) 2 = NE.get_uint16 (to_t s) 2 );
+  end;
+
+  assert( BE.get_int16 (to_t s) 0 = 0x1234 );
+  assert( BE.get_int16 (to_t s) 1 = 0x3400 );
+  assert( BE.get_int16 (to_t s) 2 = 0 );
+
+  BE.set_int16 s 0 0xFEDC;
+  assert( BE.get_uint16 (to_t s) 0 = 0xFEDC );
+  assert( BE.get_uint16 (to_t s) 1 = 0xDC00 );
+  assert( BE.get_uint16 (to_t s) 2 = 0 );
+
+  assert( LE.get_uint16 (to_t s) 0 = 0xDCFE );
+  assert( LE.get_uint16 (to_t s) 1 = 0x00DC );
+  assert( LE.get_uint16 (to_t s) 2 = 0 );
+
+  if big_endian then begin
+    assert( BE.get_uint16 (to_t s) 0 = NE.get_uint16 (to_t s) 0 );
+    assert( BE.get_uint16 (to_t s) 1 = NE.get_uint16 (to_t s) 1 );
+    assert( BE.get_uint16 (to_t s) 2 = NE.get_uint16 (to_t s) 2 );
+  end
+  else begin
+    assert( LE.get_uint16 (to_t s) 0 = NE.get_uint16 (to_t s) 0 );
+    assert( LE.get_uint16 (to_t s) 1 = NE.get_uint16 (to_t s) 1 );
+    assert( LE.get_uint16 (to_t s) 2 = NE.get_uint16 (to_t s) 2 );
+  end;
+
+  assert( BE.get_int16 (to_t s) 0 = -292 );
+  assert( BE.get_int16 (to_t s) 1 = -9216 );
+  assert( BE.get_int16 (to_t s) 2 = 0 );
+
+  if big_endian
+  then begin
+    NE.set_int16 s 0 0x1234;
+    assert( BE.get_uint16 (to_t s) 0 = 0x1234 );
+    assert( BE.get_uint16 (to_t s) 1 = 0x3400 );
+    assert( BE.get_uint16 (to_t s) 2 = 0 )
+  end;
+
+  LE.set_int16 s 0 0x1234;
+  assert( BE.get_uint16 (to_t s) 0 = 0x3412 );
+  assert( BE.get_uint16 (to_t s) 1 = 0x1200 );
+  assert( BE.get_uint16 (to_t s) 2 = 0 );
+
+  if not big_endian
+  then begin
+    NE.set_int16 s 0 0x1234;
+    assert( BE.get_uint16 (to_t s) 0 = 0x3412 );
+    assert( BE.get_uint16 (to_t s) 1 = 0x1200 );
+    assert( BE.get_uint16 (to_t s) 2 = 0 )
+  end;
+
+  LE.set_int16 s 0 0xFEDC;
+  assert( LE.get_uint16 (to_t s) 0 = 0xFEDC );
+  assert( LE.get_uint16 (to_t s) 1 = 0x00FE );
+  assert( LE.get_uint16 (to_t s) 2 = 0 );
+
+  BE.set_int32 s 0 0x12345678l;
+  assert( BE.get_int32 (to_t s) 0 = 0x12345678l );
+  assert( LE.get_int32 (to_t s) 0 = 0x78563412l );
+  if big_endian
+  then assert( BE.get_int32 (to_t s) 0 = NE.get_int32 (to_t s) 0 )
+  else assert( LE.get_int32 (to_t s) 0 = NE.get_int32 (to_t s) 0 );
+
+  LE.set_int32 s 0 0x12345678l;
+  assert( LE.get_int32 (to_t s) 0 = 0x12345678l );
+  assert( BE.get_int32 (to_t s) 0 = 0x78563412l );
+
+  if big_endian
+  then assert( BE.get_int32 (to_t s) 0 = NE.get_int32 (to_t s) 0 )
+  else assert( LE.get_int32 (to_t s) 0 = NE.get_int32 (to_t s) 0 );
+
+  NE.set_int32 s 0 0x12345678l;
+  if big_endian
+  then assert( BE.get_int32 (to_t s) 0 = 0x12345678l )
+  else assert( LE.get_int32 (to_t s) 0 = 0x12345678l );
+
+  ()
+
+let test_64 () =
+  BE.set_int64 s 0 0x1234567890ABCDEFL;
+  assert( BE.get_int64 (to_t s) 0 = 0x1234567890ABCDEFL );
+  assert( LE.get_int64 (to_t s) 0 = 0xEFCDAB9078563412L );
+
+  if big_endian
+  then assert( BE.get_int64 (to_t s) 0 = NE.get_int64 (to_t s) 0 )
+  else assert( LE.get_int64 (to_t s) 0 = NE.get_int64 (to_t s) 0 );
+
+  LE.set_int64 s 0 0x1234567890ABCDEFL;
+  assert( LE.get_int64 (to_t s) 0 = 0x1234567890ABCDEFL );
+  assert( BE.get_int64 (to_t s) 0 = 0xEFCDAB9078563412L );
+
+  if big_endian
+  then assert( BE.get_int64 (to_t s) 0 = NE.get_int64 (to_t s) 0 )
+  else assert( LE.get_int64 (to_t s) 0 = NE.get_int64 (to_t s) 0 );
+
+  NE.set_int64 s 0 0x1234567890ABCDEFL;
+  if big_endian
+  then assert( BE.get_int64 (to_t s) 0 = 0x1234567890ABCDEFL )
+  else assert( LE.get_int64 (to_t s) 0 = 0x1234567890ABCDEFL );
+
+  ()
diff --git a/tools/ocaml/duniverse/ocplib-endian/tests/test_string.cppo.ml b/tools/ocaml/duniverse/ocplib-endian/tests/test_string.cppo.ml
new file mode 100644
index 0000000000..dec25216cc
--- /dev/null
+++ b/tools/ocaml/duniverse/ocplib-endian/tests/test_string.cppo.ml
@@ -0,0 +1,185 @@
+open EndianString
+[@@@warning "-52"]
+
+let to_t = Bytes.unsafe_to_string
+(* do not allocate to avoid breaking tests *)
+
+module BE = BigEndian
+module LE = LittleEndian
+module NE = NativeEndian
+
+let big_endian = Sys.big_endian
+
+let s = Bytes.make 10 '\x00'
+
+let assert_bound_check2 f v1 v2 =
+  try
+    ignore(f v1 v2);
+    assert false
+  with
+     | Invalid_argument("index out of bounds") -> ()
+
+let assert_bound_check3 f v1 v2 v3 =
+  try
+    ignore(f v1 v2 v3);
+    assert false
+  with
+     | Invalid_argument("index out of bounds") -> ()
+
+let test1 () =
+  assert_bound_check2 BE.get_int8 (to_t s) (-1);
+  assert_bound_check2 BE.get_int8 (to_t s) 10;
+  assert_bound_check2 BE.get_uint16 (to_t s) (-1);
+  assert_bound_check2 BE.get_uint16 (to_t s) 9;
+  assert_bound_check2 BE.get_int32 (to_t s) (-1);
+  assert_bound_check2 BE.get_int32 (to_t s) 7;
+  assert_bound_check2 BE.get_int64 (to_t s) (-1);
+  assert_bound_check2 BE.get_int64 (to_t s) 3;
+
+  assert_bound_check3 BE.set_int8 s (-1) 0;
+  assert_bound_check3 BE.set_int8 s 10 0;
+  assert_bound_check3 BE.set_int16 s (-1) 0;
+  assert_bound_check3 BE.set_int16 s 9 0;
+  assert_bound_check3 BE.set_int32 s (-1) 0l;
+  assert_bound_check3 BE.set_int32 s 7 0l;
+  assert_bound_check3 BE.set_int64 s (-1) 0L;
+  assert_bound_check3 BE.set_int64 s 3 0L
+
+let test2 () =
+  BE.set_int8 s 0 63; (* in [0; 127] *)
+  assert( BE.get_uint8 (to_t s) 0 = 63 );
+  assert( BE.get_int8 (to_t s) 0 = 63 );
+
+  BE.set_int8 s 0 155; (* in [128; 255] *)
+  assert( BE.get_uint8 (to_t s) 0 = 155 );
+
+  BE.set_int8 s 0 (-103); (* in [-128; -1] *)
+  assert( BE.get_int8 (to_t s) 0 = (-103) );
+
+  BE.set_int8 s 0 0x1234; (* outside of the [-128;255] range *)
+  assert( BE.get_uint8 (to_t s) 0 = 0x34 );
+  assert( BE.get_int8 (to_t s) 0 = 0x34 );
+
+  BE.set_int8 s 0 0xAACD; (* outside of the [-128;255] range, -0x33 land 0xFF = 0xCD *)
+  assert( BE.get_uint8 (to_t s) 0 = 0xCD );
+  assert( BE.get_int8 (to_t s) 0 = (-0x33) );
+
+  BE.set_int16 s 0 0x1234;
+  assert( BE.get_uint16 (to_t s) 0 = 0x1234 );
+  assert( BE.get_uint16 (to_t s) 1 = 0x3400 );
+  assert( BE.get_uint16 (to_t s) 2 = 0 );
+
+  assert( LE.get_uint16 (to_t s) 0 = 0x3412 );
+  assert( LE.get_uint16 (to_t s) 1 = 0x0034 );
+  assert( LE.get_uint16 (to_t s) 2 = 0 );
+
+  if big_endian then begin
+    assert( BE.get_uint16 (to_t s) 0 = NE.get_uint16 (to_t s) 0 );
+    assert( BE.get_uint16 (to_t s) 1 = NE.get_uint16 (to_t s) 1 );
+    assert( BE.get_uint16 (to_t s) 2 = NE.get_uint16 (to_t s) 2 );
+  end
+  else begin
+    assert( LE.get_uint16 (to_t s) 0 = NE.get_uint16 (to_t s) 0 );
+    assert( LE.get_uint16 (to_t s) 1 = NE.get_uint16 (to_t s) 1 );
+    assert( LE.get_uint16 (to_t s) 2 = NE.get_uint16 (to_t s) 2 );
+  end;
+
+  assert( BE.get_int16 (to_t s) 0 = 0x1234 );
+  assert( BE.get_int16 (to_t s) 1 = 0x3400 );
+  assert( BE.get_int16 (to_t s) 2 = 0 );
+
+  BE.set_int16 s 0 0xFEDC;
+  assert( BE.get_uint16 (to_t s) 0 = 0xFEDC );
+  assert( BE.get_uint16 (to_t s) 1 = 0xDC00 );
+  assert( BE.get_uint16 (to_t s) 2 = 0 );
+
+  assert( LE.get_uint16 (to_t s) 0 = 0xDCFE );
+  assert( LE.get_uint16 (to_t s) 1 = 0x00DC );
+  assert( LE.get_uint16 (to_t s) 2 = 0 );
+
+  if big_endian then begin
+    assert( BE.get_uint16 (to_t s) 0 = NE.get_uint16 (to_t s) 0 );
+    assert( BE.get_uint16 (to_t s) 1 = NE.get_uint16 (to_t s) 1 );
+    assert( BE.get_uint16 (to_t s) 2 = NE.get_uint16 (to_t s) 2 );
+  end
+  else begin
+    assert( LE.get_uint16 (to_t s) 0 = NE.get_uint16 (to_t s) 0 );
+    assert( LE.get_uint16 (to_t s) 1 = NE.get_uint16 (to_t s) 1 );
+    assert( LE.get_uint16 (to_t s) 2 = NE.get_uint16 (to_t s) 2 );
+  end;
+
+  assert( BE.get_int16 (to_t s) 0 = -292 );
+  assert( BE.get_int16 (to_t s) 1 = -9216 );
+  assert( BE.get_int16 (to_t s) 2 = 0 );
+
+  if big_endian
+  then begin
+    NE.set_int16 s 0 0x1234;
+    assert( BE.get_uint16 (to_t s) 0 = 0x1234 );
+    assert( BE.get_uint16 (to_t s) 1 = 0x3400 );
+    assert( BE.get_uint16 (to_t s) 2 = 0 )
+  end;
+
+  LE.set_int16 s 0 0x1234;
+  assert( BE.get_uint16 (to_t s) 0 = 0x3412 );
+  assert( BE.get_uint16 (to_t s) 1 = 0x1200 );
+  assert( BE.get_uint16 (to_t s) 2 = 0 );
+
+  if not big_endian
+  then begin
+    NE.set_int16 s 0 0x1234;
+    assert( BE.get_uint16 (to_t s) 0 = 0x3412 );
+    assert( BE.get_uint16 (to_t s) 1 = 0x1200 );
+    assert( BE.get_uint16 (to_t s) 2 = 0 )
+  end;
+
+  LE.set_int16 s 0 0xFEDC;
+  assert( LE.get_uint16 (to_t s) 0 = 0xFEDC );
+  assert( LE.get_uint16 (to_t s) 1 = 0x00FE );
+  assert( LE.get_uint16 (to_t s) 2 = 0 );
+
+  BE.set_int32 s 0 0x12345678l;
+  assert( BE.get_int32 (to_t s) 0 = 0x12345678l );
+  assert( LE.get_int32 (to_t s) 0 = 0x78563412l );
+  if big_endian
+  then assert( BE.get_int32 (to_t s) 0 = NE.get_int32 (to_t s) 0 )
+  else assert( LE.get_int32 (to_t s) 0 = NE.get_int32 (to_t s) 0 );
+
+  LE.set_int32 s 0 0x12345678l;
+  assert( LE.get_int32 (to_t s) 0 = 0x12345678l );
+  assert( BE.get_int32 (to_t s) 0 = 0x78563412l );
+
+  if big_endian
+  then assert( BE.get_int32 (to_t s) 0 = NE.get_int32 (to_t s) 0 )
+  else assert( LE.get_int32 (to_t s) 0 = NE.get_int32 (to_t s) 0 );
+
+  NE.set_int32 s 0 0x12345678l;
+  if big_endian
+  then assert( BE.get_int32 (to_t s) 0 = 0x12345678l )
+  else assert( LE.get_int32 (to_t s) 0 = 0x12345678l );
+
+  ()
+
+let test_64 () =
+  BE.set_int64 s 0 0x1234567890ABCDEFL;
+  assert( BE.get_int64 (to_t s) 0 = 0x1234567890ABCDEFL );
+  assert( LE.get_int64 (to_t s) 0 = 0xEFCDAB9078563412L );
+
+  if big_endian
+  then assert( BE.get_int64 (to_t s) 0 = NE.get_int64 (to_t s) 0 )
+  else assert( LE.get_int64 (to_t s) 0 = NE.get_int64 (to_t s) 0 );
+
+  LE.set_int64 s 0 0x1234567890ABCDEFL;
+  assert( LE.get_int64 (to_t s) 0 = 0x1234567890ABCDEFL );
+  assert( BE.get_int64 (to_t s) 0 = 0xEFCDAB9078563412L );
+
+  if big_endian
+  then assert( BE.get_int64 (to_t s) 0 = NE.get_int64 (to_t s) 0 )
+  else assert( LE.get_int64 (to_t s) 0 = NE.get_int64 (to_t s) 0 );
+
+  NE.set_int64 s 0 0x1234567890ABCDEFL;
+  if big_endian
+  then assert( BE.get_int64 (to_t s) 0 = 0x1234567890ABCDEFL )
+  else assert( LE.get_int64 (to_t s) 0 = 0x1234567890ABCDEFL );
+
+  ()
diff --git a/tools/ocaml/duniverse/result/CHANGES.md b/tools/ocaml/duniverse/result/CHANGES.md
new file mode 100755
index 0000000000..fc04a554f5
--- /dev/null
+++ b/tools/ocaml/duniverse/result/CHANGES.md
@@ -0,0 +1,15 @@
+1.5 (17/02/2020)
+----------------
+
+- Make Result an alias of Stdlib.Result on OCaml >= 4.08.
+
+1.4 (27/03/2019)
+----------------
+
+- Switch to Dune.
+- Do not refer to Pervasives; it is deprecated.
+
+1.3 (05/02/2018)
+----------------
+
+- Switch to jbuilder.
diff --git a/tools/ocaml/duniverse/result/LICENSE.md b/tools/ocaml/duniverse/result/LICENSE.md
new file mode 100755
index 0000000000..42d16a9902
--- /dev/null
+++ b/tools/ocaml/duniverse/result/LICENSE.md
@@ -0,0 +1,24 @@
+Copyright (c) 2015, Jane Street Group, LLC <opensource@janestreet.com>
+All rights reserved.
+Redistribution and use in source and binary forms, with or without
+modification, are permitted provided that the following conditions are met:
+
+    * Redistributions of source code must retain the above copyright
+      notice, this list of conditions and the following disclaimer.
+    * Redistributions in binary form must reproduce the above copyright
+      notice, this list of conditions and the following disclaimer in the
+      documentation and/or other materials provided with the distribution.
+    * Neither the name of Jane Street Group nor the names of its
+      contributors may be used to endorse or promote products derived
+      from this software without specific prior written permission.
+
+THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND ANY
+EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
+WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
+DISCLAIMED. IN NO EVENT SHALL THE AUTHOR AND CONTRIBUTORS BE LIABLE FOR ANY
+DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
+(INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
+LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
+ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
+(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
+SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
diff --git a/tools/ocaml/duniverse/result/Makefile b/tools/ocaml/duniverse/result/Makefile
new file mode 100755
index 0000000000..f5dd44305e
--- /dev/null
+++ b/tools/ocaml/duniverse/result/Makefile
@@ -0,0 +1,17 @@
+INSTALL_ARGS := $(if $(PREFIX),--prefix $(PREFIX),)
+
+default:
+	dune build @install
+
+install:
+	dune install $(INSTALL_ARGS)
+
+uninstall:
+	dune uninstall $(INSTALL_ARGS)
+
+reinstall: uninstall install
+
+clean:
+	dune clean
+
+.PHONY: default install uninstall reinstall clean
diff --git a/tools/ocaml/duniverse/result/README.md b/tools/ocaml/duniverse/result/README.md
new file mode 100755
index 0000000000..ade131944c
--- /dev/null
+++ b/tools/ocaml/duniverse/result/README.md
@@ -0,0 +1,5 @@
+Compatibility Result module.
+
+Projects that want to use the new result type defined in OCaml >= 4.03
+while staying compatible with older versions of OCaml should use the
+`Result` module defined in this library.
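A minimal usage sketch (not part of this patch, and `safe_div` is a hypothetical example function): code written against the `result` type compiles unchanged on OCaml >= 4.03 with the stdlib definition, and on older compilers once this library's `Result` module is opened in its place.

```ocaml
(* Hypothetical example: a function returning a result value.
   On OCaml >= 4.03 the (int, string) result type and the Ok/Error
   constructors are built in; on older compilers, opening this
   library's Result module provides the same definitions. *)
let safe_div x y : (int, string) result =
  if y = 0 then Error "division by zero" else Ok (x / y)

let () =
  (* Pattern-match on the constructors as usual. *)
  match safe_div 10 2 with
  | Ok n -> assert (n = 5)
  | Error _ -> assert false
```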
diff --git a/tools/ocaml/duniverse/result/dune b/tools/ocaml/duniverse/result/dune
new file mode 100755
index 0000000000..8b7ef9b882
--- /dev/null
+++ b/tools/ocaml/duniverse/result/dune
@@ -0,0 +1,12 @@
+(library
+ (name result)
+ (public_name result)
+ (modules result))
+
+(rule
+ (with-stdout-to
+  selected
+  (run %{ocaml} %{dep:which_result.ml} %{ocaml_version})))
+
+(rule
+ (copy# %{read:selected} result.ml))
diff --git a/tools/ocaml/duniverse/result/dune-project b/tools/ocaml/duniverse/result/dune-project
new file mode 100755
index 0000000000..377a3f80cb
--- /dev/null
+++ b/tools/ocaml/duniverse/result/dune-project
@@ -0,0 +1,3 @@
+(lang dune 1.0)
+(name result)
+(version 1.5)
diff --git a/tools/ocaml/duniverse/result/result-as-alias-4.08.ml b/tools/ocaml/duniverse/result/result-as-alias-4.08.ml
new file mode 100755
index 0000000000..a654b7438a
--- /dev/null
+++ b/tools/ocaml/duniverse/result/result-as-alias-4.08.ml
@@ -0,0 +1,2 @@
+include Stdlib.Result
+type ('a, 'b) result = ('a, 'b) Stdlib.Result.t = Ok of 'a | Error of 'b
diff --git a/tools/ocaml/duniverse/result/result-as-alias.ml b/tools/ocaml/duniverse/result/result-as-alias.ml
new file mode 100755
index 0000000000..5d695816c2
--- /dev/null
+++ b/tools/ocaml/duniverse/result/result-as-alias.ml
@@ -0,0 +1,2 @@
+type nonrec ('a, 'b) result = ('a, 'b) result = Ok of 'a | Error of 'b
+type ('a, 'b) t = ('a, 'b) result
diff --git a/tools/ocaml/duniverse/result/result-as-newtype.ml b/tools/ocaml/duniverse/result/result-as-newtype.ml
new file mode 100755
index 0000000000..275c663883
--- /dev/null
+++ b/tools/ocaml/duniverse/result/result-as-newtype.ml
@@ -0,0 +1,2 @@
+type ('a, 'b) result = Ok of 'a | Error of 'b
+type ('a, 'b) t = ('a, 'b) result
diff --git a/tools/ocaml/duniverse/result/result.opam b/tools/ocaml/duniverse/result/result.opam
new file mode 100755
index 0000000000..11e41c9468
--- /dev/null
+++ b/tools/ocaml/duniverse/result/result.opam
@@ -0,0 +1,18 @@
+version: "1.5"
+opam-version: "2.0"
+maintainer: "opensource@janestreet.com"
+authors: ["Jane Street Group, LLC <opensource@janestreet.com>"]
+homepage: "https://github.com/janestreet/result"
+dev-repo: "git+https://github.com/janestreet/result.git"
+bug-reports: "https://github.com/janestreet/result/issues"
+license: "BSD-3-Clause"
+build: [["dune" "build" "-p" name "-j" jobs]]
+depends: [
+  "ocaml"
+  "dune" {>= "1.0"}
+]
+synopsis: "Compatibility Result module"
+description: """
+Projects that want to use the new result type defined in OCaml >= 4.03
+while staying compatible with older versions of OCaml should use the
+Result module defined in this library."""
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/result/which_result.ml b/tools/ocaml/duniverse/result/which_result.ml
new file mode 100755
index 0000000000..7c40c21d2e
--- /dev/null
+++ b/tools/ocaml/duniverse/result/which_result.ml
@@ -0,0 +1,14 @@
+let () =
+  let version =
+    Scanf.sscanf Sys.argv.(1) "%d.%d" (fun major minor -> (major, minor))
+  in
+  let file =
+    if version < (4, 03) then
+      "result-as-newtype.ml"
+    else
+      if version < (4, 08) then
+        "result-as-alias.ml"
+      else
+        "result-as-alias-4.08.ml"
+  in
+  print_string file
diff --git a/tools/ocaml/duniverse/stdlib-shims/CHANGES.md b/tools/ocaml/duniverse/stdlib-shims/CHANGES.md
new file mode 100644
index 0000000000..de768873cc
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/CHANGES.md
@@ -0,0 +1,5 @@
+0.1.0 2019-02-19 London
+-----------------------
+
+First release. In this release, only the `Stdlib` module is backported
+to older versions of OCaml.
diff --git a/tools/ocaml/duniverse/stdlib-shims/LICENSE b/tools/ocaml/duniverse/stdlib-shims/LICENSE
new file mode 100644
index 0000000000..3666ebe155
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/LICENSE
@@ -0,0 +1,203 @@
+In the following, "the OCaml Core System" refers to all files marked
+"Copyright INRIA" in this distribution.
+
+The OCaml Core System is distributed under the terms of the
+GNU Lesser General Public License (LGPL) version 2.1 (included below).
+
+As a special exception to the GNU Lesser General Public License, you
+may link, statically or dynamically, a "work that uses the OCaml Core
+System" with a publicly distributed version of the OCaml Core System
+to produce an executable file containing portions of the OCaml Core
+System, and distribute that executable file under terms of your
+choice, without any of the additional requirements listed in clause 6
+of the GNU Lesser General Public License.  By "a publicly distributed
+version of the OCaml Core System", we mean either the unmodified OCaml
+Core System as distributed by INRIA, or a modified version of the
+OCaml Core System that is distributed under the conditions defined in
+clause 2 of the GNU Lesser General Public License.  This exception
+does not however invalidate any other reasons why the executable file
+might be covered by the GNU Lesser General Public License.
+
+----------------------------------------------------------------------
+
+GNU LESSER GENERAL PUBLIC LICENSE
+
+Version 2.1, February 1999
+
+Copyright (C) 1991, 1999 Free Software Foundation, Inc.
+51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+Everyone is permitted to copy and distribute verbatim copies
+of this license document, but changing it is not allowed.
+
+[This is the first released version of the Lesser GPL.  It also counts
+ as the successor of the GNU Library Public License, version 2, hence
+ the version number 2.1.]
+
+Preamble
+
+The licenses for most software are designed to take away your freedom to share and change it. By contrast, the GNU General Public Licenses are intended to guarantee your freedom to share and change free software--to make sure the software is free for all its users.
+
+This license, the Lesser General Public License, applies to some specially designated software packages--typically libraries--of the Free Software Foundation and other authors who decide to use it. You can use it too, but we suggest you first think carefully about whether this license or the ordinary General Public License is the better strategy to use in any particular case, based on the explanations below.
+
+When we speak of free software, we are referring to freedom of use, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for this service if you wish); that you receive source code or can get it if you want it; that you can change the software and use pieces of it in new free programs; and that you are informed that you can do these things.
+
+To protect your rights, we need to make restrictions that forbid distributors to deny you these rights or to ask you to surrender these rights. These restrictions translate to certain responsibilities for you if you distribute copies of the library or if you modify it.
+
+For example, if you distribute copies of the library, whether gratis or for a fee, you must give the recipients all the rights that we gave you. You must make sure that they, too, receive or can get the source code. If you link other code with the library, you must provide complete object files to the recipients, so that they can relink them with the library after making changes to the library and recompiling it. And you must show them these terms so they know their rights.
+
+We protect your rights with a two-step method: (1) we copyright the library, and (2) we offer you this license, which gives you legal permission to copy, distribute and/or modify the library.
+
+To protect each distributor, we want to make it very clear that there is no warranty for the free library. Also, if the library is modified by someone else and passed on, the recipients should know that what they have is not the original version, so that the original author's reputation will not be affected by problems that might be introduced by others.
+
+Finally, software patents pose a constant threat to the existence of any free program. We wish to make sure that a company cannot effectively restrict the users of a free program by obtaining a restrictive license from a patent holder. Therefore, we insist that any patent license obtained for a version of the library must be consistent with the full freedom of use specified in this license.
+
+Most GNU software, including some libraries, is covered by the ordinary GNU General Public License. This license, the GNU Lesser General Public License, applies to certain designated libraries, and is quite different from the ordinary General Public License. We use this license for certain libraries in order to permit linking those libraries into non-free programs.
+
+When a program is linked with a library, whether statically or using a shared library, the combination of the two is legally speaking a combined work, a derivative of the original library. The ordinary General Public License therefore permits such linking only if the entire combination fits its criteria of freedom. The Lesser General Public License permits more lax criteria for linking other code with the library.
+
+We call this license the "Lesser" General Public License because it does Less to protect the user's freedom than the ordinary General Public License. It also provides other free software developers Less of an advantage over competing non-free programs. These disadvantages are the reason we use the ordinary General Public License for many libraries. However, the Lesser license provides advantages in certain special circumstances.
+
+For example, on rare occasions, there may be a special need to encourage the widest possible use of a certain library, so that it becomes a de-facto standard. To achieve this, non-free programs must be allowed to use the library. A more frequent case is that a free library does the same job as widely used non-free libraries. In this case, there is little to gain by limiting the free library to free software only, so we use the Lesser General Public License.
+
+In other cases, permission to use a particular library in non-free programs enables a greater number of people to use a large body of free software. For example, permission to use the GNU C Library in non-free programs enables many more people to use the whole GNU operating system, as well as its variant, the GNU/Linux operating system.
+
+Although the Lesser General Public License is Less protective of the users' freedom, it does ensure that the user of a program that is linked with the Library has the freedom and the wherewithal to run that program using a modified version of the Library.
+
+The precise terms and conditions for copying, distribution and modification follow. Pay close attention to the difference between a "work based on the library" and a "work that uses the library". The former contains code derived from the library, whereas the latter must be combined with the library in order to run.
+
+TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
+
+0. This License Agreement applies to any software library or other program which contains a notice placed by the copyright holder or other authorized party saying it may be distributed under the terms of this Lesser General Public License (also called "this License"). Each licensee is addressed as "you".
+
+A "library" means a collection of software functions and/or data prepared so as to be conveniently linked with application programs (which use some of those functions and data) to form executables.
+
+The "Library", below, refers to any such software library or work which has been distributed under these terms. A "work based on the Library" means either the Library or any derivative work under copyright law: that is to say, a work containing the Library or a portion of it, either verbatim or with modifications and/or translated straightforwardly into another language. (Hereinafter, translation is included without limitation in the term "modification".)
+
+"Source code" for a work means the preferred form of the work for making modifications to it. For a library, complete source code means all the source code for all modules it contains, plus any associated interface definition files, plus the scripts used to control compilation and installation of the library.
+
+Activities other than copying, distribution and modification are not covered by this License; they are outside its scope. The act of running a program using the Library is not restricted, and output from such a program is covered only if its contents constitute a work based on the Library (independent of the use of the Library in a tool for writing it). Whether that is true depends on what the Library does and what the program that uses the Library does.
+
+1. You may copy and distribute verbatim copies of the Library's complete source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice and disclaimer of warranty; keep intact all the notices that refer to this License and to the absence of any warranty; and distribute a copy of this License along with the Library.
+
+You may charge a fee for the physical act of transferring a copy, and you may at your option offer warranty protection in exchange for a fee.
+
+2. You may modify your copy or copies of the Library or any portion of it, thus forming a work based on the Library, and copy and distribute such modifications or work under the terms of Section 1 above, provided that you also meet all of these conditions:
+
+    a) The modified work must itself be a software library.
+    b) You must cause the files modified to carry prominent notices stating that you changed the files and the date of any change.
+    c) You must cause the whole of the work to be licensed at no charge to all third parties under the terms of this License.
+    d) If a facility in the modified Library refers to a function or a table of data to be supplied by an application program that uses the facility, other than as an argument passed when the facility is invoked, then you must make a good faith effort to ensure that, in the event an application does not supply such function or table, the facility still operates, and performs whatever part of its purpose remains meaningful.
+
+    (For example, a function in a library to compute square roots has a purpose that is entirely well-defined independent of the application. Therefore, Subsection 2d requires that any application-supplied function or table used by this function must be optional: if the application does not supply it, the square root function must still compute square roots.)
+
+These requirements apply to the modified work as a whole. If identifiable sections of that work are not derived from the Library, and can be reasonably considered independent and separate works in themselves, then this License, and its terms, do not apply to those sections when you distribute them as separate works. But when you distribute the same sections as part of a whole which is a work based on the Library, the distribution of the whole must be on the terms of this License, whose permissions for other licensees extend to the entire whole, and thus to each and every part regardless of who wrote it.
+
+Thus, it is not the intent of this section to claim rights or contest your rights to work written entirely by you; rather, the intent is to exercise the right to control the distribution of derivative or collective works based on the Library.
+
+In addition, mere aggregation of another work not based on the Library with the Library (or with a work based on the Library) on a volume of a storage or distribution medium does not bring the other work under the scope of this License.
+
+3. You may opt to apply the terms of the ordinary GNU General Public License instead of this License to a given copy of the Library. To do this, you must alter all the notices that refer to this License, so that they refer to the ordinary GNU General Public License, version 2, instead of to this License. (If a newer version than version 2 of the ordinary GNU General Public License has appeared, then you can specify that version instead if you wish.) Do not make any other change in these notices.
+
+Once this change is made in a given copy, it is irreversible for that copy, so the ordinary GNU General Public License applies to all subsequent copies and derivative works made from that copy.
+
+This option is useful when you wish to copy part of the code of the Library into a program that is not a library.
+
+4. You may copy and distribute the Library (or a portion or derivative of it, under Section 2) in object code or executable form under the terms of Sections 1 and 2 above provided that you accompany it with the complete corresponding machine-readable source code, which must be distributed under the terms of Sections 1 and 2 above on a medium customarily used for software interchange.
+
+If distribution of object code is made by offering access to copy from a designated place, then offering equivalent access to copy the source code from the same place satisfies the requirement to distribute the source code, even though third parties are not compelled to copy the source along with the object code.
+
+5. A program that contains no derivative of any portion of the Library, but is designed to work with the Library by being compiled or linked with it, is called a "work that uses the Library". Such a work, in isolation, is not a derivative work of the Library, and therefore falls outside the scope of this License.
+
+However, linking a "work that uses the Library" with the Library creates an executable that is a derivative of the Library (because it contains portions of the Library), rather than a "work that uses the library". The executable is therefore covered by this License. Section 6 states terms for distribution of such executables.
+
+When a "work that uses the Library" uses material from a header file that is part of the Library, the object code for the work may be a derivative work of the Library even though the source code is not. Whether this is true is especially significant if the work can be linked without the Library, or if the work is itself a library. The threshold for this to be true is not precisely defined by law.
+
+If such an object file uses only numerical parameters, data structure layouts and accessors, and small macros and small inline functions (ten lines or less in length), then the use of the object file is unrestricted, regardless of whether it is legally a derivative work. (Executables containing this object code plus portions of the Library will still fall under Section 6.)
+
+Otherwise, if the work is a derivative of the Library, you may distribute the object code for the work under the terms of Section 6. Any executables containing that work also fall under Section 6, whether or not they are linked directly with the Library itself.
+
+6. As an exception to the Sections above, you may also combine or link a "work that uses the Library" with the Library to produce a work containing portions of the Library, and distribute that work under terms of your choice, provided that the terms permit modification of the work for the customer's own use and reverse engineering for debugging such modifications.
+
+You must give prominent notice with each copy of the work that the Library is used in it and that the Library and its use are covered by this License. You must supply a copy of this License. If the work during execution displays copyright notices, you must include the copyright notice for the Library among them, as well as a reference directing the user to the copy of this License. Also, you must do one of these things:
+
+    a) Accompany the work with the complete corresponding machine-readable source code for the Library including whatever changes were used in the work (which must be distributed under Sections 1 and 2 above); and, if the work is an executable linked with the Library, with the complete machine-readable "work that uses the Library", as object code and/or source code, so that the user can modify the Library and then relink to produce a modified executable containing the modified Library. (It is understood that the user who changes the contents of definitions files in the Library will not necessarily be able to recompile the application to use the modified definitions.)
+    b) Use a suitable shared library mechanism for linking with the Library. A suitable mechanism is one that (1) uses at run time a copy of the library already present on the user's computer system, rather than copying library functions into the executable, and (2) will operate properly with a modified version of the library, if the user installs one, as long as the modified version is interface-compatible with the version that the work was made with.
+    c) Accompany the work with a written offer, valid for at least three years, to give the same user the materials specified in Subsection 6a, above, for a charge no more than the cost of performing this distribution.
+    d) If distribution of the work is made by offering access to copy from a designated place, offer equivalent access to copy the above specified materials from the same place.
+    e) Verify that the user has already received a copy of these materials or that you have already sent this user a copy.
+
+For an executable, the required form of the "work that uses the Library" must include any data and utility programs needed for reproducing the executable from it. However, as a special exception, the materials to be distributed need not include anything that is normally distributed (in either source or binary form) with the major components (compiler, kernel, and so on) of the operating system on which the executable runs, unless that component itself accompanies the executable.
+
+It may happen that this requirement contradicts the license restrictions of other proprietary libraries that do not normally accompany the operating system. Such a contradiction means you cannot use both them and the Library together in an executable that you distribute.
+
+7. You may place library facilities that are a work based on the Library side-by-side in a single library together with other library facilities not covered by this License, and distribute such a combined library, provided that the separate distribution of the work based on the Library and of the other library facilities is otherwise permitted, and provided that you do these two things:
+
+    a) Accompany the combined library with a copy of the same work based on the Library, uncombined with any other library facilities. This must be distributed under the terms of the Sections above.
+    b) Give prominent notice with the combined library of the fact that part of it is a work based on the Library, and explaining where to find the accompanying uncombined form of the same work.
+
+8. You may not copy, modify, sublicense, link with, or distribute the Library except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, link with, or distribute the Library is void, and will automatically terminate your rights under this License. However, parties who have received copies, or rights, from you under this License will not have their licenses terminated so long as such parties remain in full compliance.
+
+9. You are not required to accept this License, since you have not signed it. However, nothing else grants you permission to modify or distribute the Library or its derivative works. These actions are prohibited by law if you do not accept this License. Therefore, by modifying or distributing the Library (or any work based on the Library), you indicate your acceptance of this License to do so, and all its terms and conditions for copying, distributing or modifying the Library or works based on it.
+
+10. Each time you redistribute the Library (or any work based on the Library), the recipient automatically receives a license from the original licensor to copy, distribute, link with or modify the Library subject to these terms and conditions. You may not impose any further restrictions on the recipients' exercise of the rights granted herein. You are not responsible for enforcing compliance by third parties with this License.
+
+11. If, as a consequence of a court judgment or allegation of patent infringement or for any other reason (not limited to patent issues), conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot distribute so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not distribute the Library at all. For example, if a patent license would not permit royalty-free redistribution of the Library by all those who receive copies directly or indirectly through you, then the only way you could satisfy both it and this License would be to refrain entirely from distribution of the Library.
+
+If any portion of this section is held invalid or unenforceable under any particular circumstance, the balance of the section is intended to apply, and the section as a whole is intended to apply in other circumstances.
+
+It is not the purpose of this section to induce you to infringe any patents or other property right claims or to contest validity of any such claims; this section has the sole purpose of protecting the integrity of the free software distribution system which is implemented by public license practices. Many people have made generous contributions to the wide range of software distributed through that system in reliance on consistent application of that system; it is up to the author/donor to decide if he or she is willing to distribute software through any other system and a licensee cannot impose that choice.
+
+This section is intended to make thoroughly clear what is believed to be a consequence of the rest of this License.
+
+12. If the distribution and/or use of the Library is restricted in certain countries either by patents or by copyrighted interfaces, the original copyright holder who places the Library under this License may add an explicit geographical distribution limitation excluding those countries, so that distribution is permitted only in or among countries not thus excluded. In such case, this License incorporates the limitation as if written in the body of this License.
+
+13. The Free Software Foundation may publish revised and/or new versions of the Lesser General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns.
+
+Each version is given a distinguishing version number. If the Library specifies a version number of this License which applies to it and "any later version", you have the option of following the terms and conditions either of that version or of any later version published by the Free Software Foundation. If the Library does not specify a license version number, you may choose any version ever published by the Free Software Foundation.
+
+14. If you wish to incorporate parts of the Library into other free programs whose distribution conditions are incompatible with these, write to the author to ask for permission. For software which is copyrighted by the Free Software Foundation, write to the Free Software Foundation; we sometimes make exceptions for this. Our decision will be guided by the two goals of preserving the free status of all derivatives of our free software and of promoting the sharing and reuse of software generally.
+
+NO WARRANTY
+
+15. BECAUSE THE LIBRARY IS LICENSED FREE OF CHARGE, THERE IS NO WARRANTY FOR THE LIBRARY, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE LIBRARY "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE LIBRARY IS WITH YOU. SHOULD THE LIBRARY PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION.
+
+16. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MAY MODIFY AND/OR REDISTRIBUTE THE LIBRARY AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE LIBRARY (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE LIBRARY TO OPERATE WITH ANY OTHER SOFTWARE), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.
+END OF TERMS AND CONDITIONS
+
+How to Apply These Terms to Your New Libraries
+
+If you develop a new library, and you want it to be of the greatest possible use to the public, we recommend making it free software that everyone can redistribute and change. You can do so by permitting redistribution under these terms (or, alternatively, under the terms of the ordinary General Public License).
+
+To apply these terms, attach the following notices to the library. It is safest to attach them to the start of each source file to most effectively convey the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.
+
+one line to give the library's name and an idea of what it does.
+Copyright (C) year  name of author
+
+This library is free software; you can redistribute it and/or
+modify it under the terms of the GNU Lesser General Public
+License as published by the Free Software Foundation; either
+version 2.1 of the License, or (at your option) any later version.
+
+This library is distributed in the hope that it will be useful,
+but WITHOUT ANY WARRANTY; without even the implied warranty of
+MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+Lesser General Public License for more details.
+
+You should have received a copy of the GNU Lesser General Public
+License along with this library; if not, write to the Free Software
+Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA  02110-1301  USA
+
+Also add information on how to contact you by electronic and paper mail.
+
+You should also get your employer (if you work as a programmer) or your school, if any, to sign a "copyright disclaimer" for the library, if necessary. Here is a sample; alter the names:
+
+Yoyodyne, Inc., hereby disclaims all copyright interest in
+the library `Frob' (a library for tweaking knobs) written
+by James Random Hacker.
+
+signature of Ty Coon, 1 April 1990
+Ty Coon, President of Vice
+
+That's all there is to it!
+
+--------------------------------------------------
diff --git a/tools/ocaml/duniverse/stdlib-shims/README.md b/tools/ocaml/duniverse/stdlib-shims/README.md
new file mode 100644
index 0000000000..79890c2d8f
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/README.md
@@ -0,0 +1,2 @@
+# stdlib-shims
+Shims for forward-compatibility between versions of the OCaml standard library
diff --git a/tools/ocaml/duniverse/stdlib-shims/dune-project b/tools/ocaml/duniverse/stdlib-shims/dune-project
new file mode 100644
index 0000000000..de4fc20920
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/dune-project
@@ -0,0 +1 @@
+(lang dune 1.0)
diff --git a/tools/ocaml/duniverse/stdlib-shims/dune-workspace.dev b/tools/ocaml/duniverse/stdlib-shims/dune-workspace.dev
new file mode 100644
index 0000000000..c793fa1927
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/dune-workspace.dev
@@ -0,0 +1,14 @@
+(lang dune 1.0)
+
+;; Run the following command to test against all supported versions of
+;; OCaml:
+;;
+;; $ dune runtest --workspace dune-workspace.dev
+
+(context (opam (switch 4.02.3)))
+(context (opam (switch 4.03.0)))
+(context (opam (switch 4.04.2)))
+(context (opam (switch 4.05.0)))
+(context (opam (switch 4.06.1)))
+(context (opam (switch 4.07.0)))
+(context (opam (switch 4.08.0+trunk)))
diff --git a/tools/ocaml/duniverse/stdlib-shims/src/dune b/tools/ocaml/duniverse/stdlib-shims/src/dune
new file mode 100644
index 0000000000..7981a8a11f
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/src/dune
@@ -0,0 +1,97 @@
+(* -*- tuareg -*- *)
+
+open StdLabels
+open Jbuild_plugin.V1
+
+let version = Scanf.sscanf ocaml_version "%u.%u" (fun a b -> (a, b))
+
+let modules_in_4_02 =
+  [ "Arg"
+  ; "Array"
+  ; "ArrayLabels"
+  ; "Buffer"
+  ; "Bytes"
+  ; "BytesLabels"
+  ; "Callback"
+  ; "Char"
+  ; "Complex"
+  ; "Digest"
+  ; "Filename"
+  ; "Format"
+  ; "Gc"
+  ; "Genlex"
+  ; "Hashtbl"
+  ; "Int32"
+  ; "Int64"
+  ; "Lazy"
+  ; "Lexing"
+  ; "List"
+  ; "ListLabels"
+  ; "Map"
+  ; "Marshal"
+  ; "MoreLabels"
+  ; "Nativeint"
+  ; "Obj"
+  ; "Oo"
+  ; "Parsing"
+  ; "Pervasives"
+  ; "Printexc"
+  ; "Printf"
+  ; "Queue"
+  ; "Random"
+  ; "Scanf"
+  ; "Set"
+  ; "Stack"
+  ; "StdLabels"
+  ; "Stream"
+  ; "String"
+  ; "StringLabels"
+  ; "Sys"
+  ; "Weak"
+  ]
+
+let modules_post_4_02 =
+  [ "Float", (4, 07)
+  ; "Seq", (4, 07)
+  ; "Stdlib", (4, 07)
+  ; "Uchar", (4, 03)
+  ]
+
+let available_modules =
+  modules_in_4_02 @
+  (List.filter modules_post_4_02 ~f:(fun (m, v) ->
+       version >= v)
+   |> List.map ~f:fst)
+
+let all_modules_except_stdlib =
+  available_modules
+  |> List.filter ~f:((<>) "Stdlib")
+  |> List.sort ~cmp:String.compare
+
+let longest_module_name =
+  List.fold_left all_modules_except_stdlib ~init:0
+    ~f:(fun acc m -> max acc (String.length m))
+
+let stdlib_rule =
+  Printf.sprintf {|
+(rule
+ (with-stdout-to stdlib.ml
+  (echo "\
+%s
+
+include Pervasives
+")))
+|}
+    (List.map all_modules_except_stdlib
+       ~f:(fun m -> Printf.sprintf "module %-*s = %s" longest_module_name m m)
+     |> String.concat ~sep:"\n")
+
+let () =
+  Printf.ksprintf send {|
+(library
+ (wrapped false)
+ (name stdlib_shims)
+ (public_name stdlib-shims))
+%s
+|}
+    (if version >= (4, 07) then "" else stdlib_rule)
diff --git a/tools/ocaml/duniverse/stdlib-shims/stdlib-shims.opam b/tools/ocaml/duniverse/stdlib-shims/stdlib-shims.opam
new file mode 100644
index 0000000000..c4e1fb9d86
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/stdlib-shims.opam
@@ -0,0 +1,24 @@
+version: "0.1.0"
+opam-version: "2.0"
+maintainer: "The stdlib-shims programmers"
+authors: "The stdlib-shims programmers"
+homepage: "https://github.com/ocaml/stdlib-shims"
+doc: "https://ocaml.github.io/stdlib-shims/"
+dev-repo: "git+https://github.com/ocaml/stdlib-shims.git"
+bug-reports: "https://github.com/ocaml/stdlib-shims/issues"
+tags: ["stdlib" "compatibility" "org:ocaml"]
+license: ["typeof OCaml system"]
+available: [  ]
+depends: [
+  "dune"
+  "ocaml" {>= "4.02.3"}
+]
+build: [ "dune" "build" "-p" name "-j" jobs ]
+synopsis: "Backport some of the new stdlib features to older compiler"
+description: """
+Backport some of the new stdlib features to older compiler,
+such as the Stdlib module.
+
+This allows projects that require compatibility with older compiler to
+use these new features in their code.
+"""
\ No newline at end of file
diff --git a/tools/ocaml/duniverse/stdlib-shims/test/dune b/tools/ocaml/duniverse/stdlib-shims/test/dune
new file mode 100644
index 0000000000..f31738ad94
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/test/dune
@@ -0,0 +1,3 @@
+(test
+ (name test)
+ (libraries stdlib_shims))
diff --git a/tools/ocaml/duniverse/stdlib-shims/test/test.ml b/tools/ocaml/duniverse/stdlib-shims/test/test.ml
new file mode 100644
index 0000000000..1f0e4941b6
--- /dev/null
+++ b/tools/ocaml/duniverse/stdlib-shims/test/test.ml
@@ -0,0 +1,2 @@
+let _ = Stdlib.(+)
+let _ = Stdlib.List.map
diff --git a/tools/ocaml/xen.opam.locked b/tools/ocaml/xen.opam.locked
new file mode 100644
index 0000000000..fde42bc495
--- /dev/null
+++ b/tools/ocaml/xen.opam.locked
@@ -0,0 +1,119 @@
+opam-version: "2.0"
+synopsis: "opam-monorepo generated lockfile"
+maintainer: "opam-monorepo"
+depends: [
+  "afl-persistent" {= "1.3"}
+  "base-bigarray" {= "base"}
+  "base-bytes" {= "base"}
+  "base-threads" {= "base"}
+  "base-unix" {= "base"}
+  "cmdliner" {= "1.0.4+dune"}
+  "cppo" {= "1.6.7"}
+  "crowbar" {= "0.2"}
+  "csexp" {= "1.3.2"}
+  "fmt" {= "0.8.8+dune"}
+  "ocaml" {= "4.12.0"}
+  "ocaml-base-compiler" {= "4.12.0~beta1"}
+  "ocaml-config" {= "2"}
+  "ocaml-options-vanilla" {= "1"}
+  "ocplib-endian" {= "1.1"}
+  "result" {= "1.5"}
+  "stdlib-shims" {= "0.1.0"}
+]
+depexts: ["libsystemd-dev" "libxen-dev" "m4"] {os-distribution = "debian"}
+pin-depends: [
+  [
+    "afl-persistent.1.3"
+    "git+file:///home/edwin-work/afl-persistent#09539920681aafb7f792d5280c76d4020848b3c0"
+  ]
+  [
+    "cmdliner.1.0.4+dune"
+    "https://github.com/dune-universe/cmdliner/archive/v1.0.4+dune.tar.gz"
+  ]
+  [
+    "cppo.1.6.7"
+    "https://github.com/ocaml-community/cppo/releases/download/v1.6.7/cppo-v1.6.7.tbz"
+  ]
+  ["crowbar.0.2" "https://github.com/stedolan/crowbar/archive/v0.2.tar.gz"]
+  [
+    "csexp.1.3.2"
+    "https://github.com/ocaml-dune/csexp/releases/download/1.3.2/csexp-1.3.2.tbz"
+  ]
+  [
+    "fmt.0.8.8+dune"
+    "https://github.com/dune-universe/fmt/archive/v0.8.8+dune.tar.gz"
+  ]
+  [
+    "ocplib-endian.1.1"
+    "https://github.com/OCamlPro/ocplib-endian/archive/1.1.tar.gz"
+  ]
+  [
+    "result.1.5"
+    "https://github.com/janestreet/result/releases/download/1.5/result-1.5.tbz"
+  ]
+  [
+    "stdlib-shims.0.1.0"
+    "https://github.com/ocaml/stdlib-shims/releases/download/0.1.0/stdlib-shims-0.1.0.tbz"
+  ]
+]
+x-opam-monorepo-duniverse-dirs: [
+  [
+    "git+file:///home/edwin-work/afl-persistent#09539920681aafb7f792d5280c76d4020848b3c0"
+    "ocaml-afl-persistent"
+  ]
+  [
+    "https://github.com/OCamlPro/ocplib-endian/archive/1.1.tar.gz"
+    "ocplib-endian"
+    [
+      "md5=dedf4d69c1b87b3c6c7234f632399285"
+      "sha512=39351c666d1394770696fa89ac62f7c137ad1697d99888bfba2cc8de2c61df05dd8b3aa327c117bf38f3e29e081026d2c575c5ad0022bde92b3d43aba577d3f9"
+    ]
+  ]
+  [
+    "https://github.com/dune-universe/cmdliner/archive/v1.0.4+dune.tar.gz"
+    "cmdliner"
+    [
+      "sha256=ffc09f07a9e394d6be4dbecea7add601ff00519a91dff4c95b9cd0a4aa60eceb"
+    ]
+  ]
+  [
+    "https://github.com/dune-universe/fmt/archive/v0.8.8+dune.tar.gz"
+    "fmt"
+    [
+      "sha256=da16172528cc5ebde062fcb25e46085962ddd5fd32d2dc00eb07697384f0eb2d"
+    ]
+  ]
+  [
+    "https://github.com/janestreet/result/releases/download/1.5/result-1.5.tbz"
+    "result"
+    ["md5=1b82dec78849680b49ae9a8a365b831b"]
+  ]
+  [
+    "https://github.com/ocaml-community/cppo/releases/download/v1.6.7/cppo-v1.6.7.tbz"
+    "cppo"
+    [
+      "sha256=db553e3e6c206df09b1858c3aef5e21e56564d593642a3c78bcedb6af36f529d"
+      "sha512=9722b50fd23aaccf86816313333a3bf8fc7c6b4ef06b153e5e1e1aaf14670cf51a4aac52fb1b4a0e5531699c4047a1eff6c24c969f7e5063e78096c2195b5819"
+    ]
+  ]
+  [
+    "https://github.com/ocaml-dune/csexp/releases/download/1.3.2/csexp-1.3.2.tbz"
+    "csexp"
+    [
+      "sha256=f21f427b277f07e8bfd050e00c640a5893c1bf4b689147640fa383255dcf1c4a"
+      "sha512=ff1bd6a7c6bb3a73ca9ab0506c9ec1f357657deaa9ecc7eb32955817d9b0f266d976af3e2b8fc34c621cb0caf1fde55f9a609dd184e2054f500bf09afeb83026"
+    ]
+  ]
+  [
+    "https://github.com/ocaml/stdlib-shims/releases/download/0.1.0/stdlib-shims-0.1.0.tbz"
+    "stdlib-shims"
+    ["md5=12b5704eed70c6bff5ac39a16db1425d"]
+  ]
+  [
+    "https://github.com/stedolan/crowbar/archive/v0.2.tar.gz"
+    "crowbar"
+    ["md5=55e85b9fcc3a777bc7c70ec57b136e7c"]
+  ]
+]
+x-opam-monorepo-root-packages: ["xen" "xenstore" "xenstored"]
+x-opam-monorepo-version: "0.2"
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 11 19:46:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 19:46:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126010.237193 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgYKH-0005tI-4s; Tue, 11 May 2021 19:45:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126010.237193; Tue, 11 May 2021 19:45:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgYKH-0005tB-14; Tue, 11 May 2021 19:45:53 +0000
Received: by outflank-mailman (input) for mailman id 126010;
 Tue, 11 May 2021 19:45:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgYKF-0005sz-1E; Tue, 11 May 2021 19:45:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgYKE-0003ev-QU; Tue, 11 May 2021 19:45:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgYKE-0007We-Fh; Tue, 11 May 2021 19:45:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgYKE-0000Ju-Es; Tue, 11 May 2021 19:45:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=usLGunb9ZCpuE4KlaJK+bTRqkDGbgqPGWRmT44tUqGs=; b=hXmjwTR6aMQCr2GnfzI5dQVjAX
	Lb6zpA1HMYaFVMeN+1PQem9NtZfXneIYiS8CPpSlrc864FLbSvQAjjhso28bxvepyjdoxpzLQ4LjH
	Bfhq6XjDkQRY8IlenQsl11BwcBM0R8k+8Jt/giRaZzsyUg6OO/TJAbc5jVwX/KgAL9eY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161904-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161904: trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
X-Osstest-Versions-That:
    xen=982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 19:45:50 +0000

flight 161904 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161904/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 161898
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 161898
 build-arm64                   4 host-install(4)        broken REGR. vs. 161898

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161898
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161898
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161898
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161898
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161898
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161898
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 161898
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161898
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161898
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161898
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161898
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161898
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
baseline version:
 xen                  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06

Last test of basis   161898  2021-05-10 19:06:50 Z    1 days
Testing same since   161904  2021-05-11 10:00:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)
broken-step build-arm64 host-install(4)

Not pushing.

------------------------------------------------------------
commit d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
Author: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date:   Fri May 7 01:39:47 2021 +0000

    optee: enable OPTEE_SMC_SEC_CAP_MEMREF_NULL capability
    
    The OP-TEE mediator already has support for NULL memory references. It
    was added in patch 0dbed3ad336 ("optee: allow plain TMEM buffers with
    NULL address"). However, it does not propagate the
    OPTEE_SMC_SEC_CAP_MEMREF_NULL capability flag to a guest, so a
    well-behaving guest can't use this feature.
    
    Note: the Linux OP-TEE driver honors this capability flag when handling
    buffers from userspace clients, but ignores it when working with
    internal calls. For instance, the __optee_enumerate_devices() function
    uses a NULL argument to get a buffer size hint from OP-TEE. This is the
    reason why "optee: allow plain TMEM buffers with NULL address" was
    introduced in the first place.
    
    This patch adds the mentioned capability to the list of known
    capabilities. From the Linux point of view this means that userspace
    clients can use this feature, which is confirmed by the OP-TEE test
    suite:
    
    * regression_1025 Test memref NULL and/or 0 bytes size
    o regression_1025.1 Invalid NULL buffer memref registration
      regression_1025.1 OK
    o regression_1025.2 Input/Output MEMREF Buffer NULL - Size 0 bytes
      regression_1025.2 OK
    o regression_1025.3 Input MEMREF Buffer NULL - Size non 0 bytes
      regression_1025.3 OK
    o regression_1025.4 Input MEMREF Buffer NULL over PTA invocation
      regression_1025.4 OK
      regression_1025 OK
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 30f34457b20c78b2862b2b16cb26cb4f10a667ad
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon May 10 18:28:16 2021 +0100

    tools/xenstore: Fix indentation in the header of xenstored_control.c
    
    Commit e867af081d94 "tools/xenstore: save new binary for live update"
    appears to have spuriously changed the indentation of the first line of
    the copyright header.
    
    The previous indentation is reinstated so that all the lines are
    indented the same.
    
    Reported-by: Bjoern Doebel <doebel@amazon.com>
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>

commit 7e71b1e0affa83c0976c832f254276eeb6e6575c
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu May 6 17:12:23 2021 +0100

    tools/xenstored: Prevent a buffer overflow in dump_state_node_perms()
    
    ASAN reported one issue when Live Updating Xenstored:
    
    =================================================================
    ==873==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffc194f53e0 at pc 0x555c6b323292 bp 0x7ffc194f5340 sp 0x7ffc194f5338
    WRITE of size 1 at 0x7ffc194f53e0 thread T0
        #0 0x555c6b323291 in dump_state_node_perms xen/tools/xenstore/xenstored_core.c:2468
        #1 0x555c6b32746e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1257
        #2 0x555c6b32a702 in dump_state_special_nodes xen/tools/xenstore/xenstored_domain.c:1273
        #3 0x555c6b32ddb3 in lu_dump_state xen/tools/xenstore/xenstored_control.c:521
        #4 0x555c6b32e380 in do_lu_start xen/tools/xenstore/xenstored_control.c:660
        #5 0x555c6b31b461 in call_delayed xen/tools/xenstore/xenstored_core.c:278
        #6 0x555c6b32275e in main xen/tools/xenstore/xenstored_core.c:2357
        #7 0x7f95eecf3d09 in __libc_start_main ../csu/libc-start.c:308
        #8 0x555c6b3197e9 in _start (/usr/local/sbin/xenstored+0xc7e9)
    
    Address 0x7ffc194f53e0 is located in stack of thread T0 at offset 80 in frame
        #0 0x555c6b32713e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1232
    
      This frame has 2 object(s):
        [32, 40) 'head' (line 1233)
        [64, 80) 'sn' (line 1234) <== Memory access at offset 80 overflows this variable
    
    This is happening because the callers are passing a pointer to a
    variable allocated on the stack. However, the perms field is a dynamic
    array, so Xenstored ends up reading outside of the variable.
    
    Rework the code so the permissions are written to the fd one by one.
    
    Fixes: ed6eebf17d2c ("tools/xenstore: dump the xenstore state for live update")
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

commit 3f568354a95ee2f0c9c553efb94c734fa6848af0
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:07 2021 +0200

    arm/time,vtimer: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We should
    also use the register_t type when reading sysregs, which can correspond
    to uint64_t or uint32_t. Even though many AArch64 registers have their
    upper 32 bits reserved, it does not mean that they can't be widened in
    the future.
    
    Modify the type of the vtimer structure's ctl member to register_t.
    
    Add a macro CNTFRQ_MASK containing the mask for the timer clock
    frequency field of the CNTFRQ_EL0 register.
    
    Modify the CNTx_CTL_* macros to return unsigned long instead of
    unsigned int, as ctl is now of type register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 86faae561cd8eee819e0f42ba7a18dd180aa49d1
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:06 2021 +0200

    arm/page: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We should
    also use the register_t type when reading sysregs, which can correspond
    to uint64_t or uint32_t. Even though many AArch64 registers have their
    upper 32 bits reserved, it does not mean that they can't be widened in
    the future.
    
    Modify accesses to CTR_EL0 to use READ/WRITE_SYSREG.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 25e5d0c412e0d7420f2aa7fdd71cc39d8ed6c528
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:05 2021 +0200

    xen/arm: Always access SCTLR_EL2 using READ/WRITE_SYSREG()
    
    The Armv8 specification describes the system register as a 64-bit value
    on AArch64 and a 32-bit value on AArch32 (same as ARMv7).
    
    Unfortunately, Xen is accessing the system registers using
    READ/WRITE_SYSREG32() which means the top 32-bit are clobbered.
    
    This is only a latent bug so far, because Xen does not yet use the top
    32 bits.
    
    There is also no change in behavior because arch/arm/arm64/head.S will
    initialize SCTLR_EL2 to a sane value with the top 32-bit zeroed.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 8eb7cc0465fa228064e807aad51eb7428d6d3199
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:04 2021 +0200

    arm/p2m: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We should
    also use the register_t type when reading sysregs, which can correspond
    to uint64_t or uint32_t. Even though many AArch64 registers have their
    upper 32 bits reserved, it does not mean that they can't be widened in
    the future.
    
    Modify type of vtcr to register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 78e67c99eb3f90c22c8c6ee282ec3a43d2ddccb5
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:03 2021 +0200

    arm/gic: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We should
    also use the register_t type when reading sysregs, which can correspond
    to uint64_t or uint32_t. Even though many AArch64 registers have their
    upper 32 bits reserved, it does not mean that they can't be widened in
    the future.
    
    Modify types of following members of struct gic_v3 to register_t:
    -vmcr
    -sre_el1
    -apr0
    -apr1
    
    Add a new macro GICC_IAR_INTID_MASK containing the mask for the INTID
    field of the ICC_IAR0/1_EL1 register, as only the first 23 bits of IAR
    contain the interrupt number. The rest are RES0. Therefore, take the
    opportunity to mask out bits [23:31], as they should not be used for an
    IRQ number (we don't know how the top bits will be used).
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit d55afb1acaffc6047af3cabc3ef4442f313bee2c
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:02 2021 +0200

    arm/gic: Remove member hcr of structure gic_v3
    
    ... as it is never used even in the patch introducing it.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit b80470c84553808fef3a6803000ceee8a100e63c
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:01 2021 +0200

    arm: Modify type of actlr to register_t
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We should
    also use the register_t type when reading sysregs, which can correspond
    to uint64_t or uint32_t. Even though many AArch64 registers have their
    upper 32 bits reserved, it does not mean that they can't be widened in
    the future.
    
    The ACTLR_EL1 system register bits are implementation defined, which
    means this is possibly a latent bug on current HW, as the CPU
    implementer may already have decided to use the top 32 bits.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 3fd8336bc599788e5a52a7e63e833b6f03d79fd5
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:00 2021 +0200

    arm/domain: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We should
    also use the register_t type when reading sysregs, which can correspond
    to uint64_t or uint32_t. Even though many AArch64 registers have their
    upper 32 bits reserved, it does not mean that they can't be widened in
    the future.
    
    Modify type of register cntkctl to register_t.
    
    ThumbEE registers are only usable by a 32-bit domain, and therefore
    we can just store the bottom 32 bits (IOW there is no type change).
    In fact, this could technically be restricted to Armv7 HW (the
    support was dropped retrospectively in Armv8) but leave it as-is
    for now.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 8990f0eaca139364091109389416455f4f78cd65
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:42:59 2021 +0200

    arm64/vfp: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We should
    also use the register_t type when reading sysregs, which can correspond
    to uint64_t or uint32_t. Even though many AArch64 registers have their
    upper 32 bits reserved, it does not mean that they can't be widened in
    the future.
    
    Modify type of FPCR, FPSR, FPEXC32_EL2 to register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 11 20:06:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 20:06:02 +0000
Subject: Re: [PATCH v2 00/17] live update and gnttab patches
To: =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
	<xen-devel@lists.xenproject.org>
CC: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>, Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Juergen Gross <jgross@suse.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <c744d834-659a-e361-df97-128032402950@citrix.com>
Date: Tue, 11 May 2021 21:05:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <cover.1620755942.git.edvin.torok@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
MIME-Version: 1.0

On 11/05/2021 19:05, Edwin Török wrote:
> These patches have been posted previously.
> The gnttab patches (tools/ocaml/libs/mmap) were not applied at the time
> to avoid conflicts with an in-progress XSA.
> The binary format live-update and fuzzing patches were not applied
> because it was too close to the next Xen release freeze.
>
> The patches depend on each other: live-update only works correctly when the gnttab
> patches are taken too (MFN is not part of the binary live-update stream),
> so they are included here as a single series.
> The gnttab patches replace one use of libxenctrl with stable interfaces, leaving one unstable
> libxenctrl interface used by oxenstored.
>
> The 'vendor external dependencies' patch may be considered optional: it is useful as
> part of a patchqueue in a specfile, so that everything can be built without external
> dependencies, but committing it makes the vendored code easily available to everyone,
> not just XenServer.
>
> Note that the live-update fuzz test doesn't yet pass: it is still able to find bugs.
> However, the reduced version with a fixed seed, used as a unit test, does pass,
> so it is useful to have it committed; further improvements can be made later
> as more bugs are discovered and fixed.
>
> Edwin Török (17):
>   docs/designs/xenstore-migration.md: clarify that deletes are recursive
>   tools/ocaml: add unit test skeleton with Dune build system
>   tools/ocaml: vendor external dependencies for convenience
>   tools/ocaml/xenstored: implement the live migration binary format
>   tools/ocaml/xenstored: add binary dump format support
>   tools/ocaml/xenstored: add support for binary format
>   tools/ocaml/xenstored: validate config file before live update
>   Add structured fuzzing unit test
>   tools/ocaml: use common macros for manipulating mmap_interface
>   tools/ocaml/libs/mmap: allocate correct number of bytes
>   tools/ocaml/libs/mmap: Expose stub_mmap_alloc
>   tools/ocaml/libs/mmap: mark mmap/munmap as blocking
>   tools/ocaml/libs/xb: import gnttab stubs from mirage
>   tools/ocaml: safer Xenmmap interface
>   tools/ocaml/xenstored: use gnttab instead of xenctrl's
>     foreign_map_range
>   tools/ocaml/xenstored: don't store domU's mfn of ring page
>   tools/ocaml/libs/mmap: Clean up unused read/write

Gitlab CI reports failures across the board in the Debian Stretch 32-bit
builds.  Full logs are at
https://gitlab.com/xen-project/patchew/xen/-/pipelines/301146112 but the
tl;dr seems to be:

File "disk.ml", line 179, characters 26-37:
Error: Integer literal exceeds the range of representable integers of
type int

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 11 21:59:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 21:59:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126027.237220 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgaOr-0004NN-52; Tue, 11 May 2021 21:58:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126027.237220; Tue, 11 May 2021 21:58:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgaOr-0004NG-1N; Tue, 11 May 2021 21:58:45 +0000
Received: by outflank-mailman (input) for mailman id 126027;
 Tue, 11 May 2021 21:58:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgaOp-0004N9-KE
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 21:58:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fd51c80-b02f-4ce6-b4d0-a8b47202542a;
 Tue, 11 May 2021 21:58:42 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 21C646162A;
 Tue, 11 May 2021 21:58:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fd51c80-b02f-4ce6-b4d0-a8b47202542a
Date: Tue, 11 May 2021 14:58:39 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Jan Beulich <jbeulich@suse.com>
cc: Luca Fancellu <luca.fancellu@arm.com>, xen-devel@lists.xenproject.org, 
    Bertrand Marquis <bertrand.marquis@arm.com>, wei.chen@arm.com, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
    Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v5 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
In-Reply-To: <8fada713-9ae5-ddd3-585b-0f090748fc49@suse.com>
Message-ID: <alpine.DEB.2.21.2105111457480.5018@sstabellini-ThinkPad-T480s>
References: <20210504133145.767-1-luca.fancellu@arm.com> <20210504133145.767-4-luca.fancellu@arm.com> <alpine.DEB.2.21.2105041514260.5018@sstabellini-ThinkPad-T480s> <9E7D7B58-0ABA-4800-B2D3-9EE3E29CF599@arm.com>
 <8fada713-9ae5-ddd3-585b-0f090748fc49@suse.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 6 May 2021, Jan Beulich wrote:
> As an alternative to correcting the (as it seems) v2-related issues, it
> may be worth considering extracting only the v1 documentation in this
> initial phase.

FWIW I agree with Jan that documenting only "grant table v1" in this initial phase is a good idea.


From xen-devel-bounces@lists.xenproject.org Tue May 11 22:10:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 22:10:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126032.237231 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgaZr-0006p8-5N; Tue, 11 May 2021 22:10:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126032.237231; Tue, 11 May 2021 22:10:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgaZr-0006p1-2M; Tue, 11 May 2021 22:10:07 +0000
Received: by outflank-mailman (input) for mailman id 126032;
 Tue, 11 May 2021 22:10:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgaZp-0006ks-KZ; Tue, 11 May 2021 22:10:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgaZp-0006A6-BU; Tue, 11 May 2021 22:10:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgaZp-00076g-0a; Tue, 11 May 2021 22:10:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgaZp-0002TY-03; Tue, 11 May 2021 22:10:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161905-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 161905: trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-5.4:build-arm64:<job status>:broken:regression
    linux-5.4:build-arm64-pvops:<job status>:broken:regression
    linux-5.4:build-arm64-xsm:<job status>:broken:regression
    linux-5.4:build-arm64:host-install(4):broken:regression
    linux-5.4:build-arm64-pvops:host-install(4):broken:regression
    linux-5.4:build-arm64-xsm:host-install(4):broken:regression
    linux-5.4:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=16022114de9869743d6304290815cdb8a8c7deaa
X-Osstest-Versions-That:
    linux=b5dbcd05792a4bad2c9bb3c4658c854e72c444b7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 11 May 2021 22:10:05 +0000

flight 161905 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161905/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64                   4 host-install(4)        broken REGR. vs. 161832
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 161832
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 161832

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161832
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161832
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161832
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161832
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161832
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161832
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161832
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161832
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161832
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161832
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161832
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                16022114de9869743d6304290815cdb8a8c7deaa
baseline version:
 linux                b5dbcd05792a4bad2c9bb3c4658c854e72c444b7

Last test of basis   161832  2021-05-07 09:10:55 Z    4 days
Testing same since   161905  2021-05-11 12:11:22 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
  Anirudh Rayabharam <mail@anirudhrb.com>
  Anson Jacob <Anson.Jacob@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Aric Cyr <aric.cyr@amd.com>
  Arnd Bergmann <arnd@arndb.de>
  Artur Petrosyan <Arthur.Petrosyan@synopsys.com>
  Arun Easi <aeasi@marvell.com>
  Avri Altman <avri.altman@wdc.com>
  Bart Van Assche <bvanassche@acm.org>
  Benjamin Block <bblock@linux.ibm.com>
  Bill Wendling <morbo@google.com>
  Bindu Ramamurthy <bindu.r@amd.com>
  Bixuan Cui <cuibixuan@huawei.com>
  Borislav Petkov <bp@suse.de>
  Brendan Peter <bpeter@lytx.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Chanwoo Choi <cw00.choi@samsung.com>
  Chao Yu <yuchao0@huawei.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Jun <chenjun102@huawei.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Colin Ian King <colin.king@canonical.com>
  Daniel Niv <danielniv3@gmail.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Wheeler <daniel.wheeler@amd.com>
  dann frazier <dann.frazier@canonical.com>
  David Bauer <mail@david-bauer.net>
  David E. Box <david.e.box@linux.intel.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Davide Caratti <dcaratti@redhat.com>
  Dean Anderson <dean@sensoray.com>
  Dick Kennedy <dick.kennedy@broadcom.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dmitry Vyukov <dvyukov@google.com>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Don Brace <don.brace@microchip.com>
  dongjian <dongjian@yulong.com>
  DooHyun Hwang <dh0421.hwang@samsung.com>
  Eckhart Mohr <e.mohr@tuxedocomputers.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Biggers <ebiggers@google.com>
  Eryk Brol <eryk.brol@amd.com>
  Ewan D. Milne <emilne@redhat.com>
  Felipe Balbi <balbi@kernel.org>
  Fengnan Chang <changfengnan@vivo.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Gao Xiang <hsiangkao@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Guchun Chen <guchun.chen@amd.com>
  Guenter Roeck <linux@roeck-us.net>
  Guochun Mao <guochun.mao@mediatek.com>
  Hanjun Guo <guohanjun@huawei.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Hansem Ro <hansemro@outlook.com>
  Harald Freudenberger <freude@linux.ibm.com>
  He Ying <heying24@huawei.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heinz Mauelshagen <heinzm@redhat.com>
  Hemant Kumar <hemantk@codeaurora.org>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hillf Danton <hdanton@sina.com>
  Hui Tang <tanghui20@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  James Morris <jamorris@linux.microsoft.com>
  James Smart <jsmart2021@gmail.com>
  Jan Kara <jack@suse.cz>
  Jared Baldridge <jrb@expunge.us>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason Self <jason@bluehome.net>
  Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jerome Forissier <jerome@forissier.org>
  Jessica Yu <jeyu@kernel.org>
  Joe Thornber <ejt@redhat.com>
  John Millikin <john@john-millikin.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Kim <jonathan.kim@amd.com>
  Josef Bacik <josef@toxicpanda.com>
  Julian Braha <julianbraha@gmail.com>
  Justin Tee <justin.tee@broadcom.com>
  Kai Stuhlemmer (ebee Engineering) <kai.stuhlemmer@ebee.de>
  Kalle Valo <kvalo@codeaurora.org>
  karthik alapati <mail@karthek.com>
  Kevin Barnett <kevin.barnett@microchip.com>
  Konstantin Kharlamov <hi-angel@yandex.ru>
  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Lee Jones <lee.jones@linaro.org>
  Lingutla Chandrasekhar <clingutla@codeaurora.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  lizhe <lizhe67@huawei.com>
  Luis Henriques <lhenriques@suse.de>
  Luke D Jones <luke@ljones.dev>
  Luo Jiaxing <luojiaxing@huawei.com>
  Lv Yunlong <lyl2019@mail.ustc.edu.cn>
  Lyude Paul <lyude@redhat.com>
  Mahesh Salgaonkar <mahesh@linux.ibm.com>
  Marc Zyngier <maz@kernel.org>
  Marek Behún <kabel@kernel.org>
  Marek Vasut <marex@denx.de>
  Marijn Suijten <marijn.suijten@somainline.org>
  Mark Brown <broonie@kernel.org>
  Mark Langsdorf <mlangsdo@redhat.com>
  Mark Rutland <mark.rutland@arm.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthias Brugger <matthias.bgg@gmail.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Maximilian Luz <luzmaximilian@gmail.com>
  Melissa Wen <melissa.srw@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Mike Snitzer <snitzer@redhat.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Ming-Hung Tsai <mtsai@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Muhammad Usama Anjum <musamaanjum@gmail.com>
  Murthy Bhat <Murthy.Bhat@microchip.com>
  Nathan Chancellor <nathan@kernel.org>
  Nick Desaulniers <ndesaulniers@google.com>
  Nilesh Javali <njavali@marvell.com>
  Paul Aurich <paul@darkrain42.org>
  Paul Clements <paul.clements@us.sios.com>
  Pavel Machek <pavel@denx.de>
  Pavel Machek <pavel@ucw.cz>
  Pavel Skripkin <paskripkin@gmail.com>
  Pawel Laszczak <pawell@cadence.com>
  Peilin Ye <yepeilin.cs@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phil Calvin <phil@philcalvin.com>
  Phillip Potter <phil@philpotter.co.uk>
  Pradeep P V K <pragalla@codeaurora.org>
  Qu Huang <jinsdb@126.com>
  Quinn Tran <qutran@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ricardo Ribalda <ribalda@chromium.org>
  Richard Weinberger <richard@nod.at>
  Rob Clark <robdclark@chromium.org>
  Robin Murphy <robin.murphy@arm.com>
  Ruslan Bilovol <ruslan.bilovol@gmail.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sami Loone <sami@loone.fi>
  Sasha Levin <sashal@kernel.org>
  Saurav Kashyap <skashyap@marvell.com>
  Sean Christopherson <seanjc@google.com>
  Sean Young <sean@mess.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sedat Dilek <sedat.dilek@gmail.com>
  Seunghui Lee <sh043.lee@samsung.com>
  shaoyunl <shaoyun.liu@amd.com>
  Shixin Liu <liushixin2@huawei.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Solomon Chiu <solomon.chiu@amd.com>
  Song Liu <song@kernel.org>
  Sreekanth Reddy <sreekanth.reddy@broadcom.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tian Tao <tiantao6@hisilicon.com>
  Timo Gurr <timo.gurr@gmail.com>
  Todd Brandt <todd.e.brandt@linux.intel.com>
  Tony Ambardar <Tony.Ambardar@gmail.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Tudor Ambarus <tudor.ambarus@microchip.com>
  Tyler Hicks <code@tyhicks.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Valentin Schneider <valentin.schneider@arm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vinod Koul <vkoul@kernel.org>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Wang Li <wangli74@huawei.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Werner Sembach <wse@tuxedocomputers.com>
  Wesley Cheng <wcheng@codeaurora.org>
  Will Deacon <will@kernel.org>
  Xingui Yang <yangxingui@huawei.com>
  Yang Yang <yang.yang29@zte.com.cn>
  Yang Yingliang <yangyingliang@huawei.com>
  Zhang Yi <yi.zhang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64 host-install(4)
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 5433 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 11 22:12:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 22:12:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126038.237247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgacb-0007ZZ-RC; Tue, 11 May 2021 22:12:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126038.237247; Tue, 11 May 2021 22:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgacb-0007ZS-Ne; Tue, 11 May 2021 22:12:57 +0000
Received: by outflank-mailman (input) for mailman id 126038;
 Tue, 11 May 2021 22:12:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgaca-0007ZL-I7
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 22:12:56 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c5cfb7fc-2ccf-4d47-8a6c-3520e99159b7;
 Tue, 11 May 2021 22:12:56 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id AAA1A6191C;
 Tue, 11 May 2021 22:12:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c5cfb7fc-2ccf-4d47-8a6c-3520e99159b7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620771175;
	bh=K/ymIO9zITRDKC7q9fFUInV/MqQVKMrOp6uXkIg6jxY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=axR8R7EJR2WzizcTkkSMkF5oLGM66At43btJcrX0zmsZcYxbWKWjDDlLe6biICNO3
	 5bsdBiR/q73kaRsflgByacS342hQJIVHz2gRyqrvBShPY/hd3QGSK3MOQHEKp1RjzR
	 UGqOPMS2oKYI0ZP6OlC0ksrFBa6yloRhwqtvrserJTf5Gn4/FUwo8J2yOIT4qmxJgh
	 rqg26bdsp5+HF85HzTEo3RVq+AqjYfwb6gMI8kO//RMP8Tca640KRA1jMW3LJE5JPu
	 X8Oy/WGV24Q46Esbq5Np6g4zhY4LCbIJmogk0TvUE1PSpo2BI3S/YbFb9PUNZ8XDs4
	 2LdGWhOQ/3QMA==
Date: Tue, 11 May 2021 15:12:53 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 01/15] xen/arm: lpae: Rename LPAE_ENTRIES_MASK_GS
 to LPAE_ENTRY_MASK_GS
In-Reply-To: <20210425201318.15447-2-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105111512410.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-2-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Commit 05031fa87357 "xen/arm: guest_walk: Only generate necessary
> offsets/masks" introduced LPAE_ENTRIES_MASK_GS. In a follow-up patch,
> we will use it to define LPAE_ENTRY_MASK.
> 
> This will lead to inconsistent naming. As LPAE_ENTRY_MASK is used in
> many places, it is better to rename LPAE_ENTRIES_MASK_GS and avoid
> some churn.
> 
> So rename LPAE_ENTRIES_MASK_GS to LPAE_ENTRY_MASK_GS.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - New patch
> ---
>  xen/include/asm-arm/lpae.h | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> index e94de2e7d8e8..4fb9a40a4ca9 100644
> --- a/xen/include/asm-arm/lpae.h
> +++ b/xen/include/asm-arm/lpae.h
> @@ -180,7 +180,7 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>   */
>  #define LPAE_SHIFT_GS(gs)         ((gs) - 3)
>  #define LPAE_ENTRIES_GS(gs)       (_AC(1, U) << LPAE_SHIFT_GS(gs))
> -#define LPAE_ENTRIES_MASK_GS(gs)  (LPAE_ENTRIES_GS(gs) - 1)
> +#define LPAE_ENTRY_MASK_GS(gs)  (LPAE_ENTRIES_GS(gs) - 1)
>  
>  #define LEVEL_ORDER_GS(gs, lvl)   ((3 - (lvl)) * LPAE_SHIFT_GS(gs))
>  #define LEVEL_SHIFT_GS(gs, lvl)   (LEVEL_ORDER_GS(gs, lvl) + (gs))
> @@ -188,7 +188,7 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>  
>  /* Offset in the table at level 'lvl' */
>  #define LPAE_TABLE_INDEX_GS(gs, lvl, addr)   \
> -    (((addr) >> LEVEL_SHIFT_GS(gs, lvl)) & LPAE_ENTRIES_MASK_GS(gs))
> +    (((addr) >> LEVEL_SHIFT_GS(gs, lvl)) & LPAE_ENTRY_MASK_GS(gs))
>  
>  /* Generate an array @var containing the offset for each level from @addr */
>  #define DECLARE_OFFSETS(var, addr)          \
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 11 22:26:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 22:26:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126046.237258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgapx-00017j-1Q; Tue, 11 May 2021 22:26:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126046.237258; Tue, 11 May 2021 22:26:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgapw-00017c-Up; Tue, 11 May 2021 22:26:44 +0000
Received: by outflank-mailman (input) for mailman id 126046;
 Tue, 11 May 2021 22:26:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgapv-00017W-CT
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 22:26:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f46aece-6035-43f0-b3ea-2f6b25d617e1;
 Tue, 11 May 2021 22:26:42 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 74F97611AB;
 Tue, 11 May 2021 22:26:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f46aece-6035-43f0-b3ea-2f6b25d617e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620772001;
	bh=7gCKkvLecHub5W2cbC7m6UCDs4wvrXXyHNbfAMDR5W4=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=N58uzrNcfr84ay4VOLhXs93eMnvQ+cCPcgJ3A/r3pGTXWyDyoH2xc7AUCMbNf5w3K
	 bHehMKcCsxJhsSxCj3O0gEBpaTRp//puFBg7f/XDbuFqCh9BgJbnfZju+laqwvguHY
	 BNJBpn/t9ofp7rgJ1Uqm+cC9eXZTmvs6C/hGqSIr1nfdfmCgGkAZJF+vpbytmx5nxB
	 7NBJmLsUVRSfU/epMvVMw2VZlfT+uzsdiDsLyhgzg8rIwRSaAAI+7tLZ7Ba4bn6Uh5
	 SWyomHe8gsNJWtvaKOGJTfgBm5a4SVG8vvRQSuVTY/xeBJrmWN/SgvtElSYWBYj4WW
	 YiUyLTAGeR1bw==
Date: Tue, 11 May 2021 15:26:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 02/15] xen/arm: lpae: Use the generic helpers to
 define the Xen PT helpers
In-Reply-To: <20210425201318.15447-3-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105111515470.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-3-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, Xen PT helpers are only working with 4KB page granularity
> and open-code the generic helpers. To allow more flexibility, we can
> re-use the generic helpers and pass Xen's page granularity
> (PAGE_SHIFT).
> 
> As Xen PT helpers are used in both C and assembly, we need to move
> the generic helpers definition outside of the !__ASSEMBLY__ section.
> 
> Note the aliases for each level are still kept for the time being so we
> can avoid a massive patch to change all the callers.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

The patch is OK as is. I have a couple of suggestions for improvement
below. If you feel like making them, good, otherwise I am also OK if you
don't want to change anything.


> ---
>     Changes in v2:
>         - New patch
> ---
>  xen/include/asm-arm/lpae.h | 71 +++++++++++++++++++++-----------------
>  1 file changed, 40 insertions(+), 31 deletions(-)
> 
> diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> index 4fb9a40a4ca9..310f5225e056 100644
> --- a/xen/include/asm-arm/lpae.h
> +++ b/xen/include/asm-arm/lpae.h
> @@ -159,6 +159,17 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>  #define lpae_get_mfn(pte)    (_mfn((pte).walk.base))
>  #define lpae_set_mfn(pte, mfn)  ((pte).walk.base = mfn_x(mfn))
>  
> +/* Generate an array @var containing the offset for each level from @addr */
> +#define DECLARE_OFFSETS(var, addr)          \
> +    const unsigned int var[4] = {           \
> +        zeroeth_table_offset(addr),         \
> +        first_table_offset(addr),           \
> +        second_table_offset(addr),          \
> +        third_table_offset(addr)            \
> +    }
> +
> +#endif /* __ASSEMBLY__ */
> +
>  /*
>   * AArch64 supports pages with different sizes (4K, 16K, and 64K).
>   * Provide a set of generic helpers that will compute various
> @@ -190,17 +201,6 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>  #define LPAE_TABLE_INDEX_GS(gs, lvl, addr)   \
>      (((addr) >> LEVEL_SHIFT_GS(gs, lvl)) & LPAE_ENTRY_MASK_GS(gs))
>  
> -/* Generate an array @var containing the offset for each level from @addr */
> -#define DECLARE_OFFSETS(var, addr)          \
> -    const unsigned int var[4] = {           \
> -        zeroeth_table_offset(addr),         \
> -        first_table_offset(addr),           \
> -        second_table_offset(addr),          \
> -        third_table_offset(addr)            \
> -    }
> -
> -#endif /* __ASSEMBLY__ */
> -
>  /*
>   * These numbers add up to a 48-bit input address space.
>   *
> @@ -211,26 +211,35 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>   * therefore 39-bits are sufficient.
>   */
>  
> -#define LPAE_SHIFT      9
> -#define LPAE_ENTRIES    (_AC(1,U) << LPAE_SHIFT)
> -#define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
> -
> -#define THIRD_SHIFT    (PAGE_SHIFT)
> -#define THIRD_ORDER    (THIRD_SHIFT - PAGE_SHIFT)
> -#define THIRD_SIZE     (_AT(paddr_t, 1) << THIRD_SHIFT)
> -#define THIRD_MASK     (~(THIRD_SIZE - 1))
> -#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
> -#define SECOND_ORDER   (SECOND_SHIFT - PAGE_SHIFT)
> -#define SECOND_SIZE    (_AT(paddr_t, 1) << SECOND_SHIFT)
> -#define SECOND_MASK    (~(SECOND_SIZE - 1))
> -#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
> -#define FIRST_ORDER    (FIRST_SHIFT - PAGE_SHIFT)
> -#define FIRST_SIZE     (_AT(paddr_t, 1) << FIRST_SHIFT)
> -#define FIRST_MASK     (~(FIRST_SIZE - 1))
> -#define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
> -#define ZEROETH_ORDER  (ZEROETH_SHIFT - PAGE_SHIFT)
> -#define ZEROETH_SIZE   (_AT(paddr_t, 1) << ZEROETH_SHIFT)
> -#define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))

Should we add a one-line in-code comment saying that the definitions
below are for 4KB pages? It is not immediately obvious any longer.


> +#define LPAE_SHIFT          LPAE_SHIFT_GS(PAGE_SHIFT)
> +#define LPAE_ENTRIES        LPAE_ENTRIES_GS(PAGE_SHIFT)
> +#define LPAE_ENTRY_MASK     LPAE_ENTRY_MASK_GS(PAGE_SHIFT)
>
> +#define LEVEL_SHIFT(lvl)    LEVEL_SHIFT_GS(PAGE_SHIFT, lvl)
> +#define LEVEL_ORDER(lvl)    LEVEL_ORDER_GS(PAGE_SHIFT, lvl)
> +#define LEVEL_SIZE(lvl)     LEVEL_SIZE_GS(PAGE_SHIFT, lvl)
> +#define LEVEL_MASK(lvl)     (~(LEVEL_SIZE(lvl) - 1))

I would avoid adding these 4 macros. It would be OK if they were just
used within this file, but lpae.h is a header: they could end up being
used anywhere in the xen/ code, and they have a very generic name. My
suggestion would be to skip them and just do:

#define THIRD_SHIFT         LEVEL_SHIFT_GS(PAGE_SHIFT, 3)

etc.


> +/* Convenience aliases */
> +#define THIRD_SHIFT         LEVEL_SHIFT(3)
> +#define THIRD_ORDER         LEVEL_ORDER(3)
> +#define THIRD_SIZE          LEVEL_SIZE(3)
> +#define THIRD_MASK          LEVEL_MASK(3)
> +
> +#define SECOND_SHIFT        LEVEL_SHIFT(2)
> +#define SECOND_ORDER        LEVEL_ORDER(2)
> +#define SECOND_SIZE         LEVEL_SIZE(2)
> +#define SECOND_MASK         LEVEL_MASK(2)
> +
> +#define FIRST_SHIFT         LEVEL_SHIFT(1)
> +#define FIRST_ORDER         LEVEL_ORDER(1)
> +#define FIRST_SIZE          LEVEL_SIZE(1)
> +#define FIRST_MASK          LEVEL_MASK(1)
> +
> +#define ZEROETH_SHIFT       LEVEL_SHIFT(0)
> +#define ZEROETH_ORDER       LEVEL_ORDER(0)
> +#define ZEROETH_SIZE        LEVEL_SIZE(0)
> +#define ZEROETH_MASK        LEVEL_MASK(0)
>  
>  /* Calculate the offsets into the pagetables for a given VA */
>  #define zeroeth_linear_offset(va) ((va) >> ZEROETH_SHIFT)
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 11 22:33:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 11 May 2021 22:33:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126051.237271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgaw7-0002dh-Mi; Tue, 11 May 2021 22:33:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126051.237271; Tue, 11 May 2021 22:33:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgaw7-0002dH-Jd; Tue, 11 May 2021 22:33:07 +0000
Received: by outflank-mailman (input) for mailman id 126051;
 Tue, 11 May 2021 22:33:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nI6L=KG=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgaw6-0002YY-Cr
 for xen-devel@lists.xenproject.org; Tue, 11 May 2021 22:33:06 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fe7239c4-5821-42fb-9f9d-8b608b8fd5a8;
 Tue, 11 May 2021 22:33:05 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 72AD16191C;
 Tue, 11 May 2021 22:33:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe7239c4-5821-42fb-9f9d-8b608b8fd5a8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620772384;
	bh=3DpZvIgBp1ulrbcu0FkPePDrABNMuMRAR+2mI6RbEpY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=dcThtVxaE12GIaRfqzBi1QUhgrsFjSQMpbNiGQz9wURSgtURoW1m9K0ITCAFoHEnS
	 hXj5UH0xp4R59o0R3kVp9HWfwLzKEauNf8roE9z2RRm5C1VvEz5c+nmYgNkrpHPBT7
	 cxpYGGNWcWL/05a7+M8MZ2BR7SkJc/Cu33QzIEDQ7xyHj9HlOZvM32SL0OetTVs5Ui
	 OEO+4AcM8QorZByhlEFeVvvNgxAr+tx/ezGbd8ExOVNSP+dUky/Hul9bX9ZMvANCNm
	 ThuoDFmbLxe6gg7uTpmtaD80DKd7hFbB91LXkCAGKzKP9SLkTzQDvnCD0pLtYzYIK0
	 nAC1goDWFoI+A==
Date: Tue, 11 May 2021 15:33:03 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 03/15] xen/arm: p2m: Replace level_{orders, masks}
 arrays with LEVEL_{ORDER, MASK}
In-Reply-To: <20210425201318.15447-4-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105111528180.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-4-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The array level_orders and level_masks can be replaced with the
> recently introduced macros LEVEL_ORDER and LEVEL_MASK.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

So you actually planned to use LEVEL_ORDER and LEVEL_MASK in the xen/
code. I take back the previous comment :-)

Is the 4KB size "hiding" (for lack of a better word) done on purpose?

Let me rephrase. Are you trying to consolidate info about pages being
4KB in xen/include/asm-arm/lpae.h?

In any case:

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/p2m.c | 16 +++++-----------
>  1 file changed, 5 insertions(+), 11 deletions(-)
> 
> diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
> index ac5031262061..1b04c3534439 100644
> --- a/xen/arch/arm/p2m.c
> +++ b/xen/arch/arm/p2m.c
> @@ -36,12 +36,6 @@ static unsigned int __read_mostly max_vmid = MAX_VMID_8_BIT;
>   */
>  unsigned int __read_mostly p2m_ipa_bits = 64;
>  
> -/* Helpers to lookup the properties of each level */
> -static const paddr_t level_masks[] =
> -    { ZEROETH_MASK, FIRST_MASK, SECOND_MASK, THIRD_MASK };
> -static const uint8_t level_orders[] =
> -    { ZEROETH_ORDER, FIRST_ORDER, SECOND_ORDER, THIRD_ORDER };
> -
>  static mfn_t __read_mostly empty_root_mfn;
>  
>  static uint64_t generate_vttbr(uint16_t vmid, mfn_t root_mfn)
> @@ -232,7 +226,7 @@ static lpae_t *p2m_get_root_pointer(struct p2m_domain *p2m,
>       * we can't use (P2M_ROOT_LEVEL - 1) because the root level might be
>       * 0. Yet we still want to check if all the unused bits are zeroed.
>       */
> -    root_table = gfn_x(gfn) >> (level_orders[P2M_ROOT_LEVEL] + LPAE_SHIFT);
> +    root_table = gfn_x(gfn) >> (LEVEL_ORDER(P2M_ROOT_LEVEL) + LPAE_SHIFT);
>      if ( root_table >= P2M_ROOT_PAGES )
>          return NULL;
>  
> @@ -378,7 +372,7 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
>      if ( gfn_x(gfn) > gfn_x(p2m->max_mapped_gfn) )
>      {
>          for ( level = P2M_ROOT_LEVEL; level < 3; level++ )
> -            if ( (gfn_x(gfn) & (level_masks[level] >> PAGE_SHIFT)) >
> +            if ( (gfn_x(gfn) & (LEVEL_MASK(level) >> PAGE_SHIFT)) >
>                   gfn_x(p2m->max_mapped_gfn) )
>                  break;
>  
> @@ -421,7 +415,7 @@ mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
>           * The entry may point to a superpage. Find the MFN associated
>           * to the GFN.
>           */
> -        mfn = mfn_add(mfn, gfn_x(gfn) & ((1UL << level_orders[level]) - 1));
> +        mfn = mfn_add(mfn, gfn_x(gfn) & ((1UL << LEVEL_ORDER(level)) - 1));
>  
>          if ( valid )
>              *valid = lpae_is_valid(entry);
> @@ -432,7 +426,7 @@ out_unmap:
>  
>  out:
>      if ( page_order )
> -        *page_order = level_orders[level];
> +        *page_order = LEVEL_ORDER(level);
>  
>      return mfn;
>  }
> @@ -806,7 +800,7 @@ static bool p2m_split_superpage(struct p2m_domain *p2m, lpae_t *entry,
>      /* Convenience aliases */
>      mfn_t mfn = lpae_get_mfn(*entry);
>      unsigned int next_level = level + 1;
> -    unsigned int level_order = level_orders[next_level];
> +    unsigned int level_order = LEVEL_ORDER(next_level);
>  
>      /*
>       * This should only be called with target != level and the entry is
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 11 22:42:50 2021
Date: Tue, 11 May 2021 15:42:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFCv2 04/15] xen/arm: mm: Allow other mapping size in
 xen_pt_update_entry()
In-Reply-To: <20210425201318.15447-5-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105111542290.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-5-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> At the moment, xen_pt_update_entry() only supports mappings at level 3
> (i.e. 4KB mappings). While this is fine for most of the runtime helpers,
> the boot code will require the use of superpage mappings.
> 
> We don't want to allow superpage mappings by default as some of the
> callers may expect small mappings (e.g. populate_pt_range()) or even
> expect to unmap only a part of a superpage.
> 
> To keep the code simple, a new flag _PAGE_BLOCK is introduced to
> allow the caller to enable superpage mappings.
> 
> As the code doesn't support all the combinations, xen_pt_check_entry()
> is extended to take into account the cases we don't support when
> using block mappings:
>     - Replacing a table with a mapping. This may happen if a region was
>     first mapped with 4KB mappings and later replaced with a 2MB
>     (or 1GB) mapping.
>     - Removing/modifying a table. This may happen if a caller tries to
>     remove a region with _PAGE_BLOCK set when it was created without it.
> 
> Note that the current restrictions mean that the caller must ensure that
> _PAGE_BLOCK is consistently set/cleared across all the updates on a
> given virtual region. This ought to be fine with the expected use-cases.
> 
> More rework will be necessary if we want to remove these restrictions.
> 
> Note that nr_mfns is now marked const as it is used for flushing the
> TLBs and we don't want it to be modified.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
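
For readers skimming the series, the heart of the change is the level selection in xen_pt_update(): the virtual frame number, the MFN (ignored when removing, i.e. INVALID_MFN) and the number of remaining frames are OR'ed into a mask, and a level 1 or level 2 block is only used when that mask is aligned to the corresponding superpage size. A minimal standalone sketch of that logic (the order values assume a 4KB granule and are illustrative, not copied from the Xen headers):

```c
#include <assert.h>

/*
 * Illustrative orders for a 4KB granule: a level 2 block covers
 * 2^9 pages (2MB) and a level 1 block covers 2^18 pages (1GB).
 * These constants are assumptions for the sketch only.
 */
#define SECOND_ORDER 9
#define FIRST_ORDER  18

/*
 * Pick the mapping level the way xen_pt_update() does: use the
 * largest block that vfn, mfn and the remaining count are all
 * aligned to, but only when the caller allowed block mappings.
 * Pass mfn = 0 when removing (the real code skips INVALID_MFN).
 */
static unsigned int pick_level(unsigned long vfn, unsigned long mfn,
                               unsigned long left, int page_block)
{
    unsigned long mask = vfn | mfn | left;

    if (!page_block)
        return 3;
    if (!(mask & ((1UL << FIRST_ORDER) - 1)))
        return 1;
    if (!(mask & ((1UL << SECOND_ORDER) - 1)))
        return 2;
    return 3;
}
```

So a 1GB-aligned request covering 1GB worth of pages maps at level 1, while any misalignment in the address, the MFN or the size falls back to a smaller block, which is why the commit message insists that _PAGE_BLOCK callers keep mfn, vfn and nr_mfns superpage aligned for block mappings to kick in.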


> ---
> 
>     Changes in v2:
>         - Pass the target level rather than the order to
>         xen_pt_update_entry()
>         - Update some comments
>         - Open-code paddr_to_pfn()
>         - Add my AWS signed-off-by
> ---
>  xen/arch/arm/mm.c          | 93 ++++++++++++++++++++++++++++++--------
>  xen/include/asm-arm/page.h |  4 ++
>  2 files changed, 79 insertions(+), 18 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 59f8a3f15fd1..8ebb36899314 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1060,9 +1060,10 @@ static int xen_pt_next_level(bool read_only, unsigned int level,
>  }
>  
>  /* Sanity check of the entry */
> -static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
> +static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
> +                               unsigned int flags)
>  {
> -    /* Sanity check when modifying a page. */
> +    /* Sanity check when modifying an entry. */
>      if ( (flags & _PAGE_PRESENT) && mfn_eq(mfn, INVALID_MFN) )
>      {
>          /* We don't allow modifying an invalid entry. */
> @@ -1072,6 +1073,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>              return false;
>          }
>  
> +        /* We don't allow modifying a table entry */
> +        if ( !lpae_is_mapping(entry, level) )
> +        {
> +            mm_printk("Modifying a table entry is not allowed.\n");
> +            return false;
> +        }
> +
>          /* We don't allow changing memory attributes. */
>          if ( entry.pt.ai != PAGE_AI_MASK(flags) )
>          {
> @@ -1087,7 +1095,7 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>              return false;
>          }
>      }
> -    /* Sanity check when inserting a page */
> +    /* Sanity check when inserting a mapping */
>      else if ( flags & _PAGE_PRESENT )
>      {
>          /* We should be here with a valid MFN. */
> @@ -1096,18 +1104,28 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>          /* We don't allow replacing any valid entry. */
>          if ( lpae_is_valid(entry) )
>          {
> -            mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
> -                      mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
> +            if ( lpae_is_mapping(entry, level) )
> +                mm_printk("Changing MFN for a valid entry is not allowed (%#"PRI_mfn" -> %#"PRI_mfn").\n",
> +                          mfn_x(lpae_get_mfn(entry)), mfn_x(mfn));
> +            else
> +                mm_printk("Trying to replace a table with a mapping.\n");
>              return false;
>          }
>      }
> -    /* Sanity check when removing a page. */
> +    /* Sanity check when removing a mapping. */
>      else if ( (flags & (_PAGE_PRESENT|_PAGE_POPULATE)) == 0 )
>      {
>          /* We should be here with an invalid MFN. */
>          ASSERT(mfn_eq(mfn, INVALID_MFN));
>  
> -        /* We don't allow removing page with contiguous bit set. */
> +        /* We don't allow removing a table */
> +        if ( lpae_is_table(entry, level) )
> +        {
> +            mm_printk("Removing a table is not allowed.\n");
> +            return false;
> +        }
> +
> +        /* We don't allow removing a mapping with contiguous bit set. */
>          if ( entry.pt.contig )
>          {
>              mm_printk("Removing entry with contiguous bit set is not allowed.\n");
> @@ -1125,13 +1143,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int flags)
>      return true;
>  }
>  
> +/* Update an entry at the level @target. */
>  static int xen_pt_update_entry(mfn_t root, unsigned long virt,
> -                               mfn_t mfn, unsigned int flags)
> +                               mfn_t mfn, unsigned int target,
> +                               unsigned int flags)
>  {
>      int rc;
>      unsigned int level;
> -    /* We only support 4KB mapping (i.e level 3) for now */
> -    unsigned int target = 3;
>      lpae_t *table;
>      /*
>       * The intermediate page tables are read-only when the MFN is not valid
> @@ -1186,7 +1204,7 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>      entry = table + offsets[level];
>  
>      rc = -EINVAL;
> -    if ( !xen_pt_check_entry(*entry, mfn, flags) )
> +    if ( !xen_pt_check_entry(*entry, mfn, level, flags) )
>          goto out;
>  
>      /* If we are only populating page-table, then we are done. */
> @@ -1204,8 +1222,11 @@ static int xen_pt_update_entry(mfn_t root, unsigned long virt,
>          {
>              pte = mfn_to_xen_entry(mfn, PAGE_AI_MASK(flags));
>  
> -            /* Third level entries set pte.pt.table = 1 */
> -            pte.pt.table = 1;
> +            /*
> +             * First and second level pages set pte.pt.table = 0, but
> +             * third level entries set pte.pt.table = 1.
> +             */
> +            pte.pt.table = (level == 3);
>          }
>          else /* We are updating the permission => Copy the current pte. */
>              pte = *entry;
> @@ -1229,11 +1250,12 @@ static DEFINE_SPINLOCK(xen_pt_lock);
>  
>  static int xen_pt_update(unsigned long virt,
>                           mfn_t mfn,
> -                         unsigned long nr_mfns,
> +                         const unsigned long nr_mfns,
>                           unsigned int flags)
>  {
>      int rc = 0;
> -    unsigned long addr = virt, addr_end = addr + nr_mfns * PAGE_SIZE;
> +    unsigned long vfn = virt >> PAGE_SHIFT;
> +    unsigned long left = nr_mfns;
>  
>      /*
>       * For arm32, page-tables are different on each CPUs. Yet, they share
> @@ -1265,14 +1287,49 @@ static int xen_pt_update(unsigned long virt,
>  
>      spin_lock(&xen_pt_lock);
>  
> -    for ( ; addr < addr_end; addr += PAGE_SIZE )
> +    while ( left )
>      {
> -        rc = xen_pt_update_entry(root, addr, mfn, flags);
> +        unsigned int order, level;
> +        unsigned long mask;
> +
> +        /*
> +         * Don't take into account the MFN when removing a mapping (i.e.
> +         * MFN_INVALID) to calculate the correct target order.
> +         *
> +         * This loop relies on mfn, vfn, and nr_mfns all being superpage
> +         * aligned (mfn and vfn have to be architecturally), and it uses
> +         * `mask` to check for that.
> +         *
> +         * XXX: Support superpage mappings if nr is not aligned to a
> +         * superpage size.
> +         */
> +        mask = !mfn_eq(mfn, INVALID_MFN) ? mfn_x(mfn) : 0;
> +        mask |= vfn | left;
> +
> +        /*
> +         * Always use a level 3 mapping unless the caller requests block
> +         * mappings.
> +         */
> +        if ( likely(!(flags & _PAGE_BLOCK)) )
> +            level = 3;
> +        else if ( !(mask & (BIT(FIRST_ORDER, UL) - 1)) )
> +            level = 1;
> +        else if ( !(mask & (BIT(SECOND_ORDER, UL) - 1)) )
> +            level = 2;
> +        else
> +            level = 3;
> +
> +        order = LEVEL_ORDER(level);
> +
> +        rc = xen_pt_update_entry(root, pfn_to_paddr(vfn), mfn, level, flags);
>          if ( rc )
>              break;
>  
> +        vfn += 1U << order;
>          if ( !mfn_eq(mfn, INVALID_MFN) )
> -            mfn = mfn_add(mfn, 1);
> +            mfn = mfn_add(mfn, 1U << order);
> +
> +        left -= (1U << order);
>      }
>  
>      /*
> diff --git a/xen/include/asm-arm/page.h b/xen/include/asm-arm/page.h
> index 131507a51712..7052a87ec0fe 100644
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -69,6 +69,7 @@
>   * [3:4] Permission flags
>   * [5]   Page present
>   * [6]   Only populate page tables
> + * [7]   Superpage mappings are allowed
>   */
>  #define PAGE_AI_MASK(x) ((x) & 0x7U)
>  
> @@ -82,6 +83,9 @@
>  #define _PAGE_PRESENT    (1U << 5)
>  #define _PAGE_POPULATE   (1U << 6)
>  
> +#define _PAGE_BLOCK_BIT     7
> +#define _PAGE_BLOCK         (1U << _PAGE_BLOCK_BIT)
> +
>  /*
>   * _PAGE_DEVICE and _PAGE_NORMAL are convenience defines. They are not
>   * meant to be used outside of this header.
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 11 22:51:25 2021
Date: Tue, 11 May 2021 15:51:17 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 05/15] xen/arm: mm: Avoid flushing the TLBs when
 mapping are inserted
In-Reply-To: <20210425201318.15447-6-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105111551060.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-6-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Currently, the function xen_pt_update() will flush the TLBs even when
> mappings are inserted. This is a bit wasteful because we don't
> allow mapping replacement. Even if we did, the flush would need to
> happen earlier because mapping replacement should use Break-Before-Make
> when updating the entry.
> 
> A single call to xen_pt_update() can perform a single action. IOW, it
> is not possible to, for instance, mix inserting and removing mappings.
> Therefore, we can use `flags` to determine what action is performed.
> 
> This change will be particularly helpful to limit the impact of
> switching the boot-time mappings to use xen_pt_update().
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Nice improvement!

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
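
The resulting flush policy is simple to state: only a plain insertion (present flag set and a valid MFN) may skip the TLB flush, precisely because valid entries can never be replaced; removals, permission updates and partially failed updates still flush. A small sketch of the final condition, reusing the _PAGE_PRESENT value from the page.h hunk earlier in the series:

```c
#include <assert.h>

#define _PAGE_PRESENT (1U << 5) /* as defined in xen/include/asm-arm/page.h */

/*
 * Mirror of the check at the end of xen_pt_update(): flush unless a
 * new mapping was inserted (present flag and a valid MFN). Insertions
 * are safe to skip because mapping replacement is disallowed, so no
 * stale TLB entry can exist for the range.
 */
static int needs_tlb_flush(unsigned int flags, int mfn_is_valid)
{
    return !(flags & _PAGE_PRESENT) || !mfn_is_valid;
}
```

Note that permission changes are requested with _PAGE_PRESENT but INVALID_MFN, so they keep flushing, as do removals.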


> ---
> 
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/mm.c | 17 ++++++++++++++---
>  1 file changed, 14 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 8ebb36899314..1fe52b3af722 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1101,7 +1101,13 @@ static bool xen_pt_check_entry(lpae_t entry, mfn_t mfn, unsigned int level,
>          /* We should be here with a valid MFN. */
>          ASSERT(!mfn_eq(mfn, INVALID_MFN));
>  
> -        /* We don't allow replacing any valid entry. */
> +        /*
> +         * We don't allow replacing any valid entry.
> +         *
> +         * Note that the function xen_pt_update() relies on this
> +         * assumption and will skip the TLB flush. The function will need
> +         * to be updated if the check is relaxed.
> +         */
>          if ( lpae_is_valid(entry) )
>          {
>              if ( lpae_is_mapping(entry, level) )
> @@ -1333,11 +1339,16 @@ static int xen_pt_update(unsigned long virt,
>      }
>  
>      /*
> -     * Flush the TLBs even in case of failure because we may have
> +     * The TLB flush can be safely skipped when a mapping is inserted
> +     * as we don't allow mapping replacement (see xen_pt_check_entry()).
> +     *
> +     * For all the other cases, the TLBs will be flushed unconditionally
> +     * even if the update failed. This is because we may have
>       * partially modified the PT. This will prevent any unexpected
>       * behavior afterwards.
>       */
> -    flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
> +    if ( !(flags & _PAGE_PRESENT) || mfn_eq(mfn, INVALID_MFN) )
> +        flush_xen_tlb_range_va(virt, PAGE_SIZE * nr_mfns);
>  
>      spin_unlock(&xen_pt_lock);
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 11 22:58:15 2021
Date: Tue, 11 May 2021 15:58:06 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFCv2 06/15] xen/arm: mm: Don't open-code Xen PT update
 in remove_early_mappings()
In-Reply-To: <20210425201318.15447-7-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105111557510.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-7-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that xen_pt_update_entry() is able to deal with different mapping
> sizes, we can replace the open-coded page-table update with a call
> to modify_xen_mappings().
> 
> As the function is not meant to fail, a BUG_ON() is added to check the
> return value.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - Stay consistent with how function name are used in the commit
>         message
>         - Add my AWS signed-off-by
> ---
>  xen/arch/arm/mm.c | 10 +++++-----
>  1 file changed, 5 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 1fe52b3af722..2cbfbe25240e 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -598,11 +598,11 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>  
>  void __init remove_early_mappings(void)
>  {
> -    lpae_t pte = {0};
> -    write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START), pte);
> -    write_pte(xen_second + second_table_offset(BOOT_FDT_VIRT_START + SZ_2M),
> -              pte);
> -    flush_xen_tlb_range_va(BOOT_FDT_VIRT_START, BOOT_FDT_SLOT_SIZE);
> +    int rc;
> +
> +    rc = modify_xen_mappings(BOOT_FDT_VIRT_START, BOOT_FDT_VIRT_END,
> +                             _PAGE_BLOCK);
> +    BUG_ON(rc);
>  }
>  
>  /*
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Tue May 11 23:38:12 2021
Subject: Re: [PATCH 2/2] xen/swiotlb: check if the swiotlb has already been
 initialized
To: Stefano Stabellini <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org
Cc: hch@lst.de, linux-kernel@vger.kernel.org,
        Stefano Stabellini <stefano.stabellini@xilinx.com>, jgross@suse.com
References: <20210511174142.12742-2-sstabellini@kernel.org>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <2e5a684b-3c74-5efc-2946-8ca002894ab4@oracle.com>
Date: Tue, 11 May 2021 19:37:47 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <20210511174142.12742-2-sstabellini@kernel.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
MIME-Version: 1.0
 =?utf-8?B?ZkZKcFdReFp6dW93NmJ5azV1UWVyMUYwUHdGOTNXTkJsd1hoVXZDTnpGSEJm?=
 =?utf-8?B?VHFQeEZwN0t4SlFoS0llRHJLelFUY09KTnBCbEoreHJhQXdkczNSM21VaEZ0?=
 =?utf-8?Q?MiaLvFGcerii1s6W4q5WdUz/5GRIcAjm5Ac/KZs?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: cec55a77-a621-4bf3-8fe9-08d914d5caac
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 11 May 2021 23:37:51.3170
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: f/jZDIUll8I0QBETjgtHdFjKDxvEW51dCqPQrGrmWSbXqYohrzzBYfN02cA5zYqziDa1lL8CPNXmTa1yUzGQLdr4rMowPqy9zhQJ/RsonbU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR10MB5203
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9981 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 spamscore=0 mlxlogscore=999
 adultscore=0 phishscore=0 mlxscore=0 suspectscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105110163
X-Proofpoint-ORIG-GUID: KWfdH2ZqUXOLnCS__Dkg-fvavUPeYhp-
X-Proofpoint-GUID: KWfdH2ZqUXOLnCS__Dkg-fvavUPeYhp-
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9981 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 phishscore=0
 adultscore=0 mlxscore=0 impostorscore=0 mlxlogscore=999 lowpriorityscore=0
 malwarescore=0 priorityscore=1501 clxscore=1011 bulkscore=0 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105110162


On 5/11/21 1:41 PM, Stefano Stabellini wrote:
> --- a/drivers/xen/swiotlb-xen.c
> +++ b/drivers/xen/swiotlb-xen.c
> @@ -164,6 +164,11 @@ int __ref xen_swiotlb_init(void)
>  	int rc = -ENOMEM;
>  	char *start;
>  
> +	if (io_tlb_default_mem != NULL) {
> +		printk(KERN_WARNING "Xen-SWIOTLB: swiotlb buffer already initialized\n");


pr_warn().


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>




From xen-devel-bounces@lists.xenproject.org Wed May 12 00:13:24 2021
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>
Subject: [PATCH] drivers/char: Add USB3 debug capability driver
Date: Tue, 11 May 2021 18:12:40 -0600
Message-Id: <9a6a15ebc538105c83be88883ab3a7125ed52d37.1620776791.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add support for the xHCI debug capability (DbC). The DbC provides
a SuperSpeed serial link between a debug target running Xen and a
debug host. To use it you will need a USB3 A/A debug cable plugged into
a root port on the target machine. Recent kernels report the existence
of the DbC capability at

  /sys/kernel/debug/usb/xhci/<seg>:<bus>:<slot>.<func>/reg-ext-dbc:00

The host machine must have the usb_debug.ko module available (the
cable can be plugged into any port on the host side). Once usb_debug
on the host binds to the enumerated DbC, it creates a /dev/ttyUSB<n>
device that can be used with tools like minicom.
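As a quick sanity check on the target side, the debugfs path quoted above can be derived from the controller's SBDF. A minimal sketch (the `0000:00:14.0` SBDF is a hypothetical example; substitute your controller's address):

```shell
# Sketch: build the debugfs path where recent kernels report the DbC
# extended capability for a given controller (SBDF below is hypothetical).
dbc_debugfs_path() {
    printf '/sys/kernel/debug/usb/xhci/%s/reg-ext-dbc:00\n' "$1"
}

# On the target, one might then check for the capability with:
#   [ -e "$(dbc_debugfs_path 0000:00:14.0)" ] && echo "DbC present"
dbc_debugfs_path 0000:00:14.0
```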

To use the DbC as a console, pass `console=dbgp dbgp=xhci` on the
Xen command line. This selects the first host controller found that
implements the DbC. Other variants of 'dbgp=' are accepted; see
xen-command-line.pandoc for details. Remote GDB is also supported
with `gdb=dbgp dbgp=xhci`. Note that to see output and/or provide
input after dom0 starts, DMA remapping of the host controller must
be disabled.
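For illustration, a boot entry wiring these options together might look like the following sketch (file paths, the dom0 kernel arguments, and the use of dom0-iommu=passthrough to keep the controller's DMA unremapped are assumptions for the example, not part of this patch):

```
menuentry 'Xen with xHCI DbC console' {
    multiboot2 /boot/xen.gz console=dbgp dbgp=xhci dom0-iommu=passthrough
    module2 /boot/vmlinuz-dom0 root=/dev/sda1
    module2 /boot/initrd-dom0
}
```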

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 MAINTAINERS                       |    6 +
 docs/misc/xen-command-line.pandoc |   19 +-
 xen/arch/x86/Kconfig              |    1 -
 xen/arch/x86/setup.c              |    1 +
 xen/drivers/char/Kconfig          |   15 +
 xen/drivers/char/Makefile         |    1 +
 xen/drivers/char/xhci-dbc.c       | 1122 +++++++++++++++++++++++++++++
 xen/drivers/char/xhci-dbc.h       |  621 ++++++++++++++++
 xen/include/asm-x86/fixmap.h      |    4 +
 xen/include/xen/serial.h          |   15 +
 10 files changed, 1803 insertions(+), 2 deletions(-)
 create mode 100644 xen/drivers/char/xhci-dbc.c
 create mode 100644 xen/drivers/char/xhci-dbc.h

diff --git a/MAINTAINERS b/MAINTAINERS
index d46b08a0d2..3f38c022a0 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -620,6 +620,12 @@ F:	tools/xentrace/
 F:	xen/common/trace.c
 F:	xen/include/xen/trace.h
 
+XHCI DBC DRIVER
+M:	Connor Davis <connojdavis@gmail.com>
+S:	Maintained
+F:	xen/drivers/char/xhci-dbc.c
+F:	xen/drivers/char/xhci-dbc.h
+
 XSM/FLASK
 M:	Daniel De Graaf <dgdegra@tycho.nsa.gov>
 S:	Supported
diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index c32a397a12..1b63432806 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -714,9 +714,26 @@ Available alternatives, with their meaning, are:
 ### dbgp
 > `= ehci[ <integer> | @pci<bus>:<slot>.<func> ]`
 
-Specify the USB controller to use, either by instance number (when going
+Specify the EHCI USB controller to use, either by instance number (when going
 over the PCI busses sequentially) or by PCI device (must be on segment 0).
 
+If you have a system with an xHCI USB controller that supports the Debug
+Capability (DbC), you can use
+
+> `= xhci[ <integer> | @pci<bus>:<slot>.<func> ]`
+
+To use this, you need a USB3 A/A debugging cable plugged into a SuperSpeed
+root port on the target machine. Recent kernels expose the existence of the
+DbC at /sys/kernel/debug/usb/xhci/<seg>:<bus>:<slot>.<func>/reg-ext-dbc:00.
+Note that to see output and process input after dom0 is started, you need to
+ensure that the host controller's DMA is not remapped (e.g. with
+dom0-iommu=passthrough).
+
+Finally, make sure usb_debug.ko is available on the debug host. The DbC
+emulates a USB debug device that binds to usb_debug, which in turn
+creates a /dev/ttyUSB<n> device that can be used with tools such as
+minicom and gdb.
+
 ### debug_stack_lines
 > `= <integer>`
 
diff --git a/xen/arch/x86/Kconfig b/xen/arch/x86/Kconfig
index e55e029b79..469227f66b 100644
--- a/xen/arch/x86/Kconfig
+++ b/xen/arch/x86/Kconfig
@@ -11,7 +11,6 @@ config X86
 	select HAS_ALTERNATIVE
 	select HAS_COMPAT
 	select HAS_CPUFREQ
-	select HAS_EHCI
 	select HAS_EX_TABLE
 	select HAS_FAST_MULTIPLY
 	select HAS_IOPORTS
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index 8105dc36bb..4612f3e8b8 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -924,6 +924,7 @@ void __init noreturn __start_xen(unsigned long mbi_p)
     ns16550.irq     = 3;
     ns16550_init(1, &ns16550);
     ehci_dbgp_init();
+    xhci_dbc_init();
     console_init_preirq();
 
     if ( pvh_boot )
diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..a48ca69a87 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -63,6 +63,21 @@ config HAS_SCIF
 config HAS_EHCI
 	bool
 	depends on X86
+	default y if !HAS_XHCI_DBC
 	help
 	  This selects the USB based EHCI debug port to be used as a UART. If
 	  you have an x86 based system with USB, say Y.
+
+config HAS_XHCI_DBC
+	bool "xHCI Debug Capability driver"
+	depends on X86 && HAS_PCI
+	help
+	  This selects the xHCI Debug Capability to be used as a UART.
+
+config XHCI_FIXMAP_PAGES
+	int "Number of fixmap entries to allocate for the xHC"
+	depends on HAS_XHCI_DBC
+	default 16
+	help
+	  This should equal the size (in 4K pages) of the first 64-bit
+	  BAR of the host controller in which the DbC is being used.
diff --git a/xen/drivers/char/Makefile b/xen/drivers/char/Makefile
index 7c646d771c..c2cb8c09b7 100644
--- a/xen/drivers/char/Makefile
+++ b/xen/drivers/char/Makefile
@@ -8,6 +8,7 @@ obj-$(CONFIG_HAS_MVEBU) += mvebu-uart.o
 obj-$(CONFIG_HAS_OMAP) += omap-uart.o
 obj-$(CONFIG_HAS_SCIF) += scif-uart.o
 obj-$(CONFIG_HAS_EHCI) += ehci-dbgp.o
+obj-$(CONFIG_HAS_XHCI_DBC) += xhci-dbc.o
 obj-$(CONFIG_ARM) += arm-uart.o
 obj-y += serial.o
 obj-$(CONFIG_XEN_GUEST) += xen_pv_console.o
diff --git a/xen/drivers/char/xhci-dbc.c b/xen/drivers/char/xhci-dbc.c
new file mode 100644
index 0000000000..5e22f4a601
--- /dev/null
+++ b/xen/drivers/char/xhci-dbc.c
@@ -0,0 +1,1122 @@
+/*
+ * xen/drivers/char/xhci-dbc.c
+ *
+ * Driver for USB xHCI Debug Capability
+ * ported from https://github.com/connojd/xue.git
+ *
+ * Copyright (C) 2019 Assured Information Security, Inc.
+ * Copyright (C) 2021 Connor Davis <connojdavis@gmail.com>
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include "xhci-dbc.h"
+
+#include <xen/param.h>
+#include <xen/pci.h>
+#include <xen/serial.h>
+#include <xen/timer.h>
+
+#include <asm/fixmap.h>
+
+/*
+ * To use the DbC as a serial console, pass in the following string
+ * param (in addition to console=dbgp):
+ *
+ *   dbgp=xhci | xhci[0-9] | xhci@pci<bus>:<dev>.<func>
+ *
+ * The first variant uses the first compatible xhci controller found.
+ * The second looks for the n'th xhci with a DbC. Both use breadth-first
+ * search of the PCI bus. The last uses the pci string to specify the
+ * BDF of the USB3 host controller you want to use.
+ * PCI segments > 0 are not supported.
+ */
+static char __initdata opt_dbgp[30];
+string_param("dbgp", opt_dbgp);
+
+static struct xhci_dbc xhci_dbc;
+
+/* Buffers for raw data referenced by struct data_ring */
+static char data_out[DATA_RING_SIZE] __aligned(PAGE_SIZE);
+static char data_in[DATA_RING_SIZE] __aligned(PAGE_SIZE);
+
+/* Ring segments must not straddle a 64K boundary */
+static struct trb trb_evt[TRB_RING_SIZE] __aligned(TRB_RING_ALIGN);
+static struct trb trb_out[TRB_RING_SIZE] __aligned(TRB_RING_ALIGN);
+static struct trb trb_in[TRB_RING_SIZE] __aligned(TRB_RING_ALIGN);
+
+/* Port variable for timer handling */
+static DEFINE_PER_CPU(struct serial_port *, poll_port);
+
+static void init_strings(struct xhci_dbc *dbc)
+{
+    struct dbc_info_ctx *info = &dbc->ctx.info;
+    struct dbc_str_desc *dbc_str = &dbc->strings;
+    struct usb_str_desc *usb_str = (struct usb_str_desc *)&dbc_str->string0;
+
+    const char manufacturer[] = "Xen Project";
+    const char product[] = "Xen DbC Device";
+    const char serial[] = "0001";
+
+    uint32_t length = 0;
+
+    /*
+     * Each string is converted to UTF16, plus one byte for bLength and
+     * one byte for bDescriptorType.
+     */
+    BUILD_BUG_ON(sizeof(manufacturer) * 2 + 2 >= DBC_MAX_STR_SIZE);
+    BUILD_BUG_ON(sizeof(product) * 2 + 2 >= DBC_MAX_STR_SIZE);
+    BUILD_BUG_ON(sizeof(serial) * 2 + 2 >= DBC_MAX_STR_SIZE);
+
+    /* string0 specifies the LANGIDs supported... */
+    usb_str->bLength = 4;
+    usb_str->bDescriptorType = USB_DT_STRING;
+    usb_str->wData[0] = cpu_to_le16(USB_LANGID_ENGLISH);
+    length |= usb_str->bLength;
+
+    /* ...the others are UTF16LE-encoded strings */
+    usb_str = (struct usb_str_desc *)&dbc_str->manufacturer;
+    usb_str->bLength = 2 + (2 * strlen(manufacturer));
+    usb_str->bDescriptorType = USB_DT_STRING;
+    length |= ((uint32_t)usb_str->bLength << 8);
+    copy_utf16le(usb_str->wData, manufacturer);
+
+    usb_str = (struct usb_str_desc *)&dbc_str->product;
+    usb_str->bLength = 2 + (2 * strlen(product));
+    usb_str->bDescriptorType = USB_DT_STRING;
+    length |= ((uint32_t)usb_str->bLength << 16);
+    copy_utf16le(usb_str->wData, product);
+
+    usb_str = (struct usb_str_desc *)&dbc_str->serial;
+    usb_str->bLength = 2 + (2 * strlen(serial));
+    usb_str->bDescriptorType = USB_DT_STRING;
+    length |= ((uint32_t)usb_str->bLength << 24);
+    copy_utf16le(usb_str->wData, serial);
+
+    memset(info, 0, sizeof(*info));
+
+    info->string0 = cpu_to_le64(virt_to_maddr(dbc_str->string0));
+    info->manufacturer = cpu_to_le64(virt_to_maddr(dbc_str->manufacturer));
+    info->product = cpu_to_le64(virt_to_maddr(dbc_str->product));
+    info->serial = cpu_to_le64(virt_to_maddr(dbc_str->serial));
+    info->length = cpu_to_le32(length);
+}
+
+/*
+ * Initialize the endpoint as specified in sections 7.6.3.2
+ * and 7.6.9.2. Each DbC endpoint has Bulk type, so MaxPStreams,
+ * LSA, HID, CErr, FE, Interval, Mult, and Max ESIT Payload
+ * fields are all 0.
+ */
+static inline void init_endpoint(struct xhci_dbc *dbc,
+                                 uint32_t type,
+                                 uint64_t trdp)
+{
+    struct dbc_ep_ctx *ep;
+    const uint32_t max_pkt = EP_MAX_PKT_SIZE << 16;
+    const uint32_t max_burst = dbc_max_burst(dbc) << 8;
+
+    ep = (type == EP_TYPE_BULK_OUT) ? &dbc->ctx.ep_out
+                                    : &dbc->ctx.ep_in;
+    memset(ep, 0, sizeof(*ep));
+
+    ep->data1 = cpu_to_le32(max_pkt | max_burst | (type << 3));
+    ep->trdp = cpu_to_le64(trdp | EP_INITIAL_DCS);
+    ep->data2 = cpu_to_le32(EP_AVG_TRB_LENGTH);
+}
+
+static inline void init_xfer_ring(struct trb_ring *ring,
+                                  struct trb *trb,
+                                  int size)
+{
+    struct trb *last_trb;
+
+    memset(trb, 0, size * sizeof(*trb));
+
+    ring->trb = trb;
+    ring->enq = 0;
+    ring->deq = 0;
+    ring->cycle = 1;
+
+    /*
+     * Transfer rings must have a link TRB that points to the next segment.
+     * Currently only one-segment rings are supported, so set up a link TRB
+     * pointing back to the beginning.
+     */
+
+    last_trb = &trb[size - 1];
+
+    trb_set_type(last_trb, TRB_TYPE_LINK);
+    trb_link_set_toggle_cycle(last_trb);
+    trb_link_set_rsp(last_trb, virt_to_maddr(trb));
+}
+
+static inline void init_event_ring(struct trb_ring *ring,
+                                   struct trb *trb,
+                                   int size)
+{
+    memset(trb, 0, size * sizeof(*trb));
+
+    ring->trb = trb;
+    ring->enq = 0;
+    ring->deq = 0;
+    ring->cycle = 1;
+}
+
+static inline void init_data_ring(struct data_ring *ring,
+                                  char *data,
+                                  int size)
+{
+    memset(data, 0, size);
+
+    ring->buf = data;
+    ring->enq = 0;
+    ring->deq = 0;
+    ring->dma_addr = virt_to_maddr(data);
+}
+
+static inline void init_erst(struct xhci_dbc *dbc, uint64_t erdp, int size)
+{
+    /* Only one-segment rings are supported */
+    BUILD_BUG_ON(DBC_ERSTSZ != 1);
+
+    memset(dbc->erst, 0, sizeof(dbc->erst));
+
+    dbc->erst[0].base = cpu_to_le64(erdp);
+    dbc->erst[0].size = cpu_to_le16(size);
+}
+
+/*
+ * Map in the registers. Section 5.2.1 of the xHCI spec
+ * mandates that the first two BARs must be implemented as a
+ * 64-bit MMIO region that includes all of the xHC's registers.
+ */
+static int xhci_map_regs(struct xhci_dbc *dbc)
+{
+    uint64_t phys, size, pages;
+    int fix = FIX_XHCI_END;
+
+    BUILD_BUG_ON(CONFIG_XHCI_FIXMAP_PAGES <= 0);
+
+    if ( pci_size_mem_bar(dbc->sbdf, PCI_BASE_ADDRESS_0, &phys, &size, 0)
+         != 2 )
+        return -EINVAL;
+
+    if ( size < PAGE_SIZE )
+        return -EINVAL;
+
+    if ( (phys & ~PAGE_MASK) || (size & ~PAGE_MASK) )
+        return -EINVAL;
+
+    pages = size / PAGE_SIZE;
+
+    if ( pages > CONFIG_XHCI_FIXMAP_PAGES )
+        return -EINVAL;
+
+    dbc->mmio_paddr = phys; /* record the BAR base before the loop advances phys */
+
+    for ( uint64_t p = 0; p < pages; p++ )
+    {
+        set_fixmap_nocache(fix--, phys);
+        phys += PAGE_SIZE;
+    }
+    dbc->mmio_pages = pages;
+    dbc->xhc_regs = (uint8_t __iomem *)fix_to_virt(FIX_XHCI_END);
+
+    return 0;
+}
+
+static void xhci_unmap_regs(struct xhci_dbc *dbc)
+{
+    int fix = FIX_XHCI_END;
+
+    for ( uint64_t p = 0; p < dbc->mmio_pages; p++ )
+        clear_fixmap(fix--);
+
+    dbc->xhc_regs = NULL;
+}
+
+static inline void xhci_enable_mmio_and_dma(const struct xhci_dbc *dbc)
+{
+    uint8_t cmd = pci_conf_read8(dbc->sbdf, PCI_COMMAND);
+
+    cmd |= PCI_COMMAND_MEMORY | PCI_COMMAND_MASTER;
+    pci_conf_write8(dbc->sbdf, PCI_COMMAND, cmd);
+}
+
+/*
+ * Search for an xHCI extended capability. If start is 0, then begin the search
+ * at the beginning of the list. Otherwise, start is the offset from the
+ * beginning of the xHC's registers. Note this function will return the _next_
+ * capability with id == @capid, so if start points to a cap with @capid X on
+ * entry, this will return the offset of the next cap with @capid X (or 0
+ * if there is none).
+ */
+static uint32_t xhci_find_next_cap(const struct xhci_dbc *dbc,
+                                   uint32_t start,
+                                   uint32_t capid)
+{
+    const uint8_t __iomem *cap;
+    uint32_t val, id, next, offset;
+
+    offset = start;
+    if ( !offset )
+    {
+        const uint32_t xecp = readl(&dbc->xhc_regs[XHCI_HCCPARAMS1]) >> 16;
+        if ( !xecp )
+            return 0;
+
+        offset = xecp << 2;
+    }
+
+    do {
+        cap = dbc->xhc_regs + offset;
+        val = readl(cap);
+        id = val & 0xFF;
+
+        if ( offset != start && (id == capid) )
+            return offset;
+
+        next = (val & 0xFF00) >> 8;
+        offset += next << 2;
+    } while ( next );
+
+    return 0;
+}
+
+static int dbc_find_regs(struct xhci_dbc *dbc)
+{
+    uint32_t offset = xhci_find_next_cap(dbc, 0, XHCI_CAPID_DBC);
+
+    if ( !offset )
+        return -ENODEV;
+
+    dbc->dbc_regs = (struct dbc_regs __iomem *)(dbc->xhc_regs + offset);
+    return 0;
+}
+
+static void dbc_ring_doorbell(struct xhci_dbc *dbc, int doorbell)
+{
+    uint32_t db;
+    struct dbc_regs __iomem *regs = dbc->dbc_regs;
+    uint32_t ctrl = readl(&regs->ctrl);
+
+    if ( ctrl & CTRL_DRC )
+    {
+        /* Re-arm the doorbell */
+        writel(ctrl | CTRL_DRC, &regs->ctrl);
+    }
+
+    if ( !(ctrl & CTRL_DCR) )
+        return;
+
+    if ( doorbell == DOORBELL_OUT && (ctrl & CTRL_HOT) )
+        return;
+
+    if ( doorbell == DOORBELL_IN && (ctrl & CTRL_HIT) )
+        return;
+
+    db = (readl(&regs->db) & 0xFFFF00FF) | (doorbell << 8);
+    writel(db, &regs->db);
+}
+
+static inline void dbc_stop(const struct xhci_dbc *dbc)
+{
+    uint32_t ctrl = readl(&dbc->dbc_regs->ctrl);
+
+    writel(ctrl & ~CTRL_DCE, &dbc->dbc_regs->ctrl);
+    readl_poll(&dbc->dbc_regs->ctrl, dbc_off, POLL_LIMIT);
+}
+
+static void dbc_init(struct xhci_dbc *dbc, uint64_t erdp)
+{
+    uint64_t ordp = virt_to_maddr(trb_out);
+    uint64_t irdp = virt_to_maddr(trb_in);
+
+    init_xfer_ring(&dbc->trb_out_ring, trb_out, TRB_RING_SIZE);
+    init_xfer_ring(&dbc->trb_in_ring, trb_in, TRB_RING_SIZE);
+
+    init_data_ring(&dbc->data_out_ring, data_out, DATA_RING_SIZE);
+    init_data_ring(&dbc->data_in_ring, data_in, DATA_RING_SIZE);
+
+    init_event_ring(&dbc->trb_evt_ring, trb_evt, TRB_RING_SIZE);
+    init_erst(dbc, erdp, TRB_RING_SIZE);
+    init_strings(dbc);
+
+    init_endpoint(dbc, EP_TYPE_BULK_OUT, ordp);
+    init_endpoint(dbc, EP_TYPE_BULK_IN, irdp);
+}
+
+static void dbc_reset(struct xhci_dbc *dbc)
+{
+    struct dbc_regs __iomem *regs;
+    uint64_t erdp, cp, erstba;
+    uint32_t erstsz, ddi;
+
+    BUILD_BUG_ON(DATA_RING_SIZE > TRB_MAX_XFER);
+
+    /* Make sure we start from the DbC-Off state */
+    dbc_stop(dbc);
+
+    erdp = virt_to_maddr(trb_evt);
+    dbc_init(dbc, erdp);
+
+    regs = dbc->dbc_regs;
+
+    erstsz = (readl(&regs->erstsz) & 0xFFFF0000) | DBC_ERSTSZ;
+    writel(erstsz, &regs->erstsz);
+
+    erstba = (lo_hi_readq(&regs->erstba) & 0xF) | virt_to_maddr(&dbc->erst);
+    lo_hi_writeq(erstba, &regs->erstba);
+
+    erdp |= lo_hi_readq(&regs->erdp) & 0x8;
+    lo_hi_writeq(erdp, &regs->erdp);
+
+    cp = (lo_hi_readq(&regs->cp) & 0xF) | virt_to_maddr(&dbc->ctx);
+    lo_hi_writeq(cp, &regs->cp);
+
+    ddi = (DBC_VENDOR << 16) | DBC_PROTOCOL;
+    writel(ddi, &regs->ddi1);
+
+    ddi = (DBC_REVISION << 16) | DBC_PRODUCT;
+    writel(ddi, &regs->ddi2);
+}
+
+static int dbc_start(const struct xhci_dbc *dbc)
+{
+    uint32_t ctrl, portsc;
+    struct dbc_regs __iomem *regs = dbc->dbc_regs;
+    const int max_try = 2;
+
+    for ( int try = 0; try < max_try; try++ )
+    {
+        ctrl = readl(&regs->ctrl);
+        writel(ctrl | CTRL_DCE | CTRL_LSE, &regs->ctrl);
+
+        ctrl = readl_poll(&regs->ctrl, dbc_on, POLL_LIMIT);
+        if ( !dbc_on(ctrl) )
+            return -ENODEV;
+
+        /* Wait for transition to DbC-Enabled state */
+        portsc = readl_poll(&regs->portsc, dbc_enabled, POLL_LIMIT);
+        if ( !dbc_enabled(portsc) )
+            return -ENODEV;
+
+        /*
+         * Now we're DbC-Enabled, wait for the host to enumerate the
+         * DbC capability and bring us to the DbC-Configured state.
+         */
+        ctrl = readl_poll(&regs->ctrl, dbc_configured, POLL_LIMIT);
+        if ( !dbc_configured(ctrl) )
+        {
+            /*
+             * Clear port status change bits. If a tPortConfigurationTimeout
+             * is the reason we failed to configure, then we must clear
+             * PORTSC.CEC manually. If we failed for another reason
+             * (e.g. an LTSSM substate polling timeout or warm reset), then
+             * the relevant status bits in PORTSC will be cleared upon
+             * clearing CTRL_DCE.
+             */
+            portsc = readl(&regs->portsc);
+            writel(portsc | PORTSC_CEC, &regs->portsc);
+            dbc_stop(dbc);
+
+            continue;
+        }
+
+        return 0;
+    }
+
+    return -ENODEV;
+}
+
+static inline int dbc_state(const struct xhci_dbc *dbc)
+{
+    uint32_t ctrl = readl(&dbc->dbc_regs->ctrl);
+    uint32_t portsc = readl(&dbc->dbc_regs->portsc);
+    uint32_t pls = (portsc & PORTSC_PLS) >> 5;
+
+    if ( !(ctrl & CTRL_DCE) )
+        return DBC_OFF;
+
+    if ( !(portsc & PORTSC_CCS) )
+        return DBC_DISCONNECTED;
+
+    if ( ctrl & CTRL_DCR )
+        return DBC_CONFIGURED;
+
+    if ( portsc & PORTSC_PR )
+        return DBC_RESETTING;
+
+    if ( pls == DBC_PLS_INACTIVE )
+        return DBC_ERROR;
+
+    if ( pls == DBC_PLS_DISABLED )
+        return DBC_DISABLED;
+
+    return DBC_ENABLED;
+}
+
+static inline void dbc_port_disable(const struct xhci_dbc *dbc)
+{
+    uint32_t portsc = readl(&dbc->dbc_regs->portsc);
+
+    writel(portsc & ~PORTSC_PED, &dbc->dbc_regs->portsc);
+    readl_poll(&dbc->dbc_regs->portsc, port_disabled, POLL_LIMIT);
+}
+
+static inline void dbc_port_enable(const struct xhci_dbc *dbc)
+{
+    uint32_t portsc = readl(&dbc->dbc_regs->portsc);
+
+    writel(portsc | PORTSC_PED, &dbc->dbc_regs->portsc);
+    readl_poll(&dbc->dbc_regs->portsc, port_enabled, POLL_LIMIT);
+}
+
+static inline void dbc_handle_psc_event(const struct xhci_dbc *dbc)
+{
+    uint32_t portsc = readl(&dbc->dbc_regs->portsc);
+
+    portsc |= PORTSC_PSC_MASK;
+    writel(portsc, &dbc->dbc_regs->portsc);
+}
+
+static void dbc_handle_out_xfer_event(struct xhci_dbc *dbc,
+                                      const struct trb *event)
+{
+    struct trb_ring *trb_ring = &dbc->trb_out_ring;
+
+    if ( trb_xfer_event_cc(event) == TRB_CC_TRANSACTION_ERROR )
+    {
+        uint32_t ctrl = readl(&dbc->dbc_regs->ctrl);
+
+        if ( (ctrl & CTRL_DRC) &&
+             ep_state(&dbc->ctx.ep_out) == EP_STATE_DISABLED )
+        {
+            uint32_t bytes_left = trb_xfer_event_len(event);
+
+            if ( bytes_left )
+            {
+                /*
+                 * In this case we left the DBC_CONFIGURED state after
+                 * partially transferring a TRB. Recover by re-queueing
+                 * the left over bytes. See section 7.6.4.4 for more
+                 * details on this behavior.
+                 */
+                uint64_t idx = (trb_xfer_event_ptr(event) & TRB_RING_MASK) /
+                                sizeof(struct trb);
+
+                struct trb *orig_trb = &trb_ring->trb[idx];
+
+                uint64_t dma = trb_norm_buf(orig_trb) +
+                               trb_norm_len(orig_trb) - bytes_left;
+
+                trb_norm_set_buf(orig_trb, dma);
+                trb_norm_set_len(orig_trb, bytes_left);
+
+                /*
+                 * Make sure the xHC schedules this TRB when the doorbell
+                 * is rung later.
+                 */
+                dbc->ctx.ep_out.trdp = cpu_to_le64(virt_to_maddr(orig_trb));
+            }
+        }
+    }
+
+    trb_ring->deq = (trb_xfer_event_ptr(event) & TRB_RING_MASK) /
+                    sizeof(struct trb);
+}
+
+static void dbc_handle_in_xfer_event(struct xhci_dbc *dbc,
+                                     const struct trb *event)
+{
+    uint32_t idx = (trb_xfer_event_ptr(event) & TRB_RING_MASK) /
+                   sizeof(struct trb);
+    struct trb *orig_trb = &dbc->trb_in_ring.trb[idx];
+    uint32_t bytes_rcvd = trb_norm_len(orig_trb) - trb_xfer_event_len(event);
+    struct data_ring *in_ring = &dbc->data_in_ring;
+    struct trb_ring *trb_ring = &dbc->trb_in_ring;
+
+    in_ring->enq = (in_ring->enq + bytes_rcvd) & DATA_RING_MASK;
+
+    trb_ring->deq = ((trb_xfer_event_ptr(event) & TRB_RING_MASK) /
+                     sizeof(struct trb)) + 1;
+    trb_ring->deq &= TRB_RING_SIZE - 1;
+}
+
+static void dbc_handle_xfer_event(struct xhci_dbc *dbc,
+                                  const struct trb *event)
+{
+    int epid = trb_xfer_event_epid(event);
+
+    if ( epid == XDBC_EPID_OUT || epid == XDBC_EPID_OUT_INTEL )
+        dbc_handle_out_xfer_event(dbc, event);
+    else if ( epid == XDBC_EPID_IN || epid == XDBC_EPID_IN_INTEL )
+        dbc_handle_in_xfer_event(dbc, event);
+}
+
+static void dbc_process_events(struct xhci_dbc *dbc)
+{
+    struct trb_ring *evt_ring = &dbc->trb_evt_ring;
+    struct trb *evt = &evt_ring->trb[evt_ring->deq];
+    bool update_erdp = false;
+
+    while ( trb_cycle(evt) == evt_ring->cycle )
+    {
+        /*
+         * The xHC performs a release with the cycle bit to publish
+         * the TRB's contents. Add the corresponding acquire here.
+         */
+        rmb();
+
+        switch ( trb_type(evt) )
+        {
+        case TRB_TYPE_PSC_EVENT:
+            dbc_handle_psc_event(dbc);
+            break;
+        case TRB_TYPE_XFER_EVENT:
+            dbc_handle_xfer_event(dbc, evt);
+            break;
+        default:
+            break;
+        }
+
+        if ( evt_ring->deq == TRB_RING_SIZE - 1 )
+            evt_ring->cycle ^= 1;
+
+        evt_ring->deq = (evt_ring->deq + 1) & (TRB_RING_SIZE - 1);
+        evt = &evt_ring->trb[evt_ring->deq];
+        update_erdp = true;
+    }
+
+    if ( update_erdp )
+    {
+        uint64_t erdp = lo_hi_readq(&dbc->dbc_regs->erdp);
+
+        erdp &= ~TRB_RING_MASK;
+        erdp |= evt_ring->deq * sizeof(struct trb);
+
+        lo_hi_writeq(erdp, &dbc->dbc_regs->erdp);
+    }
+}
+
+static int dbc_handle_state(struct xhci_dbc *dbc)
+{
+    switch ( dbc_state(dbc) )
+    {
+    case DBC_OFF:
+        dbc_reset(dbc);
+        return dbc_start(dbc);
+    case DBC_DISABLED:
+        dbc_port_enable(dbc);
+        break;
+    case DBC_ERROR:
+        /*
+         * Try to toggle the port (without releasing it) to re-enumerate,
+         * lest we get stuck if the host never issues a hot or warm reset.
+         */
+        dbc_port_disable(dbc);
+        dbc_port_enable(dbc);
+        break;
+    default:
+        /*
+         * There are no other states in which we can actually do
+         * anything to cause a transition to DBC_CONFIGURED; we just
+         * have to wait on the hardware to transition.
+         */
+        break;
+    }
+
+    return 0;
+}
+
+static void dbc_queue_xfer(struct trb_ring *ring,
+                           uint64_t dma,
+                           uint32_t len)
+{
+    struct trb *trb;
+
+    if ( ring->enq == TRB_RING_SIZE - 1 )
+    {
+        /*
+         * We have to make sure the xHC processes the link TRB in order
+         * for wrap-around to work properly. We do this by marking the
+         * xHC as owner of the link TRB by setting the TRB's cycle bit
+         * (just like with normal TRBs).
+         */
+        struct trb *link = &ring->trb[ring->enq];
+
+        /* Release TRB fields */
+        wmb();
+        trb_set_cycle(link, ring->cycle);
+
+        ring->enq = 0;
+        ring->cycle ^= 1;
+    }
+
+    trb = &ring->trb[ring->enq];
+    trb_set_type(trb, TRB_TYPE_NORMAL);
+
+    trb_norm_set_buf(trb, dma);
+    trb_norm_set_len(trb, len);
+    trb_norm_set_ioc(trb);
+
+    /* Release TRB fields */
+    wmb();
+    trb_set_cycle(trb, ring->cycle);
+
+    ring->enq++;
+}
+
+static void dbc_queue_tx_data(struct xhci_dbc *dbc)
+{
+    paddr_t dma;
+    uint64_t len;
+
+    struct data_ring *out_ring = &dbc->data_out_ring;
+    struct trb_ring *trb_ring = &dbc->trb_out_ring;
+
+    if ( out_ring->enq == out_ring->deq )
+        return;
+
+    dma = out_ring->dma_addr + out_ring->deq;
+
+    if ( out_ring->enq > out_ring->deq )
+    {
+        len = out_ring->enq - out_ring->deq;
+        dbc_queue_xfer(trb_ring, dma, len);
+    }
+    else
+    {
+        len = DATA_RING_SIZE - out_ring->deq;
+        dbc_queue_xfer(trb_ring, dma, len);
+
+        if ( out_ring->enq )
+            dbc_queue_xfer(trb_ring, out_ring->dma_addr, out_ring->enq);
+    }
+
+    out_ring->deq = out_ring->enq;
+}
+
+static void dbc_queue_rx_data(struct xhci_dbc *dbc)
+{
+    struct data_ring *in_ring = &dbc->data_in_ring;
+    struct trb_ring *trb_ring = &dbc->trb_in_ring;
+
+    if ( in_ring->enq + MAX_RX_SIZE <= DATA_RING_SIZE )
+        dbc_queue_xfer(trb_ring, in_ring->dma_addr + in_ring->enq, MAX_RX_SIZE);
+    else
+    {
+        uint64_t first_len = DATA_RING_SIZE - in_ring->enq;
+
+        dbc_queue_xfer(trb_ring, in_ring->dma_addr + in_ring->enq, first_len);
+        dbc_queue_xfer(trb_ring, in_ring->dma_addr, MAX_RX_SIZE - first_len);
+    }
+}
+
+static void dbc_flush_locked(struct xhci_dbc *dbc)
+{
+    if ( dbc->unsafe )
+        return;
+
+    xhci_enable_mmio_and_dma(dbc);
+
+    if ( dbc_handle_state(dbc) )
+        return;
+
+    dbc_process_events(dbc);
+    dbc_queue_tx_data(dbc);
+    dbc_ring_doorbell(dbc, DOORBELL_OUT);
+}
+
+static void dbc_flush(struct serial_port *port)
+{
+    unsigned long flags;
+    struct xhci_dbc *dbc = port->uart;
+
+    spin_lock_irqsave(&dbc->lock, flags);
+    dbc_flush_locked(dbc);
+    spin_unlock_irqrestore(&dbc->lock, flags);
+}
+
+static int dbc_reset_prepare(struct xhci_dbc *dbc)
+{
+    unsigned long flags;
+
+    spin_lock_irqsave(&dbc->lock, flags);
+
+    if ( dbc->unsafe )
+        goto out;
+
+    /* Drain pending transfers and events */
+    dbc_flush_locked(dbc);
+    dbc_stop(dbc);
+
+    /*
+     * We aren't allowed to touch any MMIO regs until the host
+     * controller is done resetting as indicated by dbc_reset_done.
+     */
+    dbc->unsafe = true;
+
+out:
+    spin_unlock_irqrestore(&dbc->lock, flags);
+    return 0;
+}
+
+static int dbc_reset_done(struct xhci_dbc *dbc)
+{
+    int rc = 0;
+    unsigned long flags;
+
+    spin_lock_irqsave(&dbc->lock, flags);
+
+    if ( !dbc->unsafe )
+        goto out;
+
+    dbc->unsafe = false;
+    dbc_reset(dbc);
+
+    rc = dbc_start(dbc);
+
+out:
+    spin_unlock_irqrestore(&dbc->lock, flags);
+    return rc;
+}
+
+int dbgp_op(const struct physdev_dbgp_op *op)
+{
+    struct xhci_dbc *dbc = &xhci_dbc;
+
+    if ( !dbc->dbc_regs )
+        return 0;
+
+    switch ( op->bus )
+    {
+    case PHYSDEVOP_DBGP_BUS_UNKNOWN:
+        /* Only PCI is supported ATM */
+        return -EINVAL;
+    case PHYSDEVOP_DBGP_BUS_PCI:
+        if ( PCI_SEG(dbc->sbdf.sbdf) != op->u.pci.seg ||
+             PCI_BUS(dbc->sbdf.sbdf) != op->u.pci.bus ||
+             PCI_DEVFN2(dbc->sbdf.sbdf) != op->u.pci.devfn )
+            return -ENODEV;
+        break;
+    default:
+        return -EINVAL;
+    }
+
+    switch ( op->op )
+    {
+    case PHYSDEVOP_DBGP_RESET_PREPARE:
+        return dbc_reset_prepare(dbc);
+    case PHYSDEVOP_DBGP_RESET_DONE:
+        return dbc_reset_done(dbc);
+    default:
+        return -EINVAL;
+    }
+}
+
+static void dbc_putc(struct serial_port *port, char c)
+{
+    struct xhci_dbc *dbc = port->uart;
+    struct data_ring *ring = &dbc->data_out_ring;
+
+    ring->buf[ring->enq] = c;
+    ring->enq = (ring->enq + 1) & (DATA_RING_SIZE - 1);
+}
+
+static int dbc_getc(struct serial_port *port, char *c)
+{
+    int rc;
+    unsigned long flags;
+
+    struct xhci_dbc *dbc = port->uart;
+    struct data_ring *ring = &dbc->data_in_ring;
+
+    rc = 0;
+
+    spin_lock_irqsave(&port->tx_lock, flags);
+    spin_lock(&dbc->lock);
+
+    /*
+     * If the port is synchronous we need to ring the doorbell
+     * and process events manually here instead of relying on
+     * the timer callback. Hence taking tx_lock prior to checking
+     * port->sync.
+     */
+    if ( port->sync )
+    {
+        if ( dbc->unsafe )
+            goto out;
+
+        if ( dbc->trb_in_ring.enq == dbc->trb_in_ring.deq )
+        {
+            dbc_queue_rx_data(dbc);
+            dbc_ring_doorbell(dbc, DOORBELL_IN);
+        }
+
+        dbc_process_events(dbc);
+    }
+
+    if ( ring->enq != ring->deq )
+    {
+        *c = ring->buf[ring->deq];
+        ring->deq = (ring->deq + 1) & (DATA_RING_SIZE - 1);
+        rc = 1;
+    }
+
+out:
+    spin_unlock(&dbc->lock);
+    spin_unlock_irqrestore(&port->tx_lock, flags);
+
+    return rc;
+}
+
+static void dbc_rx_poll(struct cpu_user_regs *regs)
+{
+    unsigned long flags;
+
+    struct serial_port *port = this_cpu(poll_port);
+    struct xhci_dbc *dbc = port->uart;
+
+    spin_lock_irqsave(&dbc->lock, flags);
+
+    if ( dbc->unsafe )
+        goto out;
+
+    dbc_process_events(dbc);
+
+    if ( dbc->trb_in_ring.enq == dbc->trb_in_ring.deq )
+    {
+        dbc_queue_rx_data(dbc);
+        dbc_ring_doorbell(dbc, DOORBELL_IN);
+    }
+
+out:
+    spin_unlock_irqrestore(&dbc->lock, flags);
+
+    while ( dbc->data_in_ring.enq != dbc->data_in_ring.deq )
+        serial_rx_interrupt(port, regs);
+
+    set_timer(&dbc->timer, NOW() + MILLISECS(5));
+}
+
+static void dbc_handle_timer(void *data)
+{
+    this_cpu(poll_port) = data;
+
+#ifdef run_in_exception_handler
+    run_in_exception_handler(dbc_rx_poll);
+#else
+    dbc_rx_poll(guest_cpu_user_regs());
+#endif
+}
+
+static void __init dbc_init_postirq(struct serial_port *port)
+{
+    struct xhci_dbc *dbc = port->uart;
+
+    if ( !dbc->dbc_regs )
+        return;
+
+    init_timer(&dbc->timer, dbc_handle_timer, port, 0);
+    set_timer(&dbc->timer, NOW() + MILLISECS(5));
+}
+
+static struct uart_driver __read_mostly dbc_driver = {
+    .init_postirq = dbc_init_postirq,
+    .putc         = dbc_putc,
+    .flush        = dbc_flush,
+    .getc         = dbc_getc
+};
+
+static bool __init xhci_detect(uint32_t bus,
+                               uint32_t dev,
+                               uint32_t func)
+{
+    uint32_t hdr, class;
+    pci_sbdf_t sbdf = PCI_SBDF(0, bus, dev, func);
+
+    hdr = pci_conf_read8(sbdf, PCI_HEADER_TYPE);
+    if ( (hdr != PCI_HEADER_TYPE_NORMAL) &&
+         (hdr != (PCI_HEADER_TYPE_NORMAL | 0x80)) )
+        return false;
+
+    class = pci_conf_read32(sbdf, PCI_CLASS_REVISION) >> 8;
+    if ( class != XHCI_PCI_CLASS )
+        return false;
+
+    return true;
+}
+
+/*
+ * This implements the handoff algorithm described in section 4.22.1.
+ * It is adapted to xen from linux:
+ *     drivers/usb/early/xhci-dbc.c:xdbc_bios_handoff
+ *     Copyright (C) 2016 Intel Corporation
+ * and
+ *     drivers/usb/host/pci-quirks.c:quirk_usb_handoff_xhci
+ *     Copyright (C) 1999 Martin Mares <mj@ucw.cz>
+ */
+static void __init dbc_bios_handoff(const struct xhci_dbc *dbc)
+{
+    uint32_t __iomem *legacy;
+    uint32_t val;
+    uint32_t offset = xhci_find_next_cap(dbc, 0, XHCI_CAPID_LEGACY);
+
+    if ( !offset )
+        return;
+
+    legacy = (uint32_t __iomem *)(dbc->xhc_regs + offset);
+    val = readl(legacy);
+
+    writel(val | XHCI_OS_OWNED, legacy);
+
+    /*
+     * We are supposed to wait at most one second for the BIOS to
+     * clear its ownership bit. Approximate that delay with cpu_relax,
+     * since early_time_init hasn't been called yet (and therefore
+     * udelay isn't available).
+     */
+    val = readl_poll(legacy, xhci_os_owned, POLL_LIMIT);
+    if ( !xhci_os_owned(val) )
+        /* BIOS lost their chance, all your usb are belong to us */
+        writel(val & ~XHCI_BIOS_OWNED, legacy);
+
+    /* Now disable SMIs in the USBLEGCTLSTS register */
+    val = readl(legacy + 4);
+    val &= XHCI_LEGACY_DISABLE_SMI;
+    val |= XHCI_LEGACY_SMI_EVENTS;
+
+    writel(val, legacy + 4);
+}
+
+int __init xhci_find(unsigned long num, uint32_t *b, uint32_t *d, uint32_t *f)
+{
+    uint32_t bus, dev, func;
+    unsigned long count = 0;
+
+    for ( bus = 0; bus < 256; bus++ )
+    {
+        for ( dev = 0; dev < 32; dev++ )
+        {
+            for ( func = 0; func < 8; func++ )
+            {
+                if ( !pci_device_detect(0, bus, dev, func) )
+                {
+                    if ( !func )
+                        break;
+                    continue;
+                }
+
+                if ( !xhci_detect(bus, dev, func) || count++ != num )
+                {
+                    pci_sbdf_t sbdf = PCI_SBDF(0, bus, dev, func);
+                    uint8_t hdr = pci_conf_read8(sbdf, PCI_HEADER_TYPE);
+
+                    /* Go to next dev if this one isn't multi-function */
+                    if ( !func && !(hdr & 0x80) )
+                        break;
+                    continue;
+                }
+                else
+                {
+                    *b = bus;
+                    *d = dev;
+                    *f = func;
+
+                    return 0;
+                }
+            }
+        }
+    }
+
+    return -ENODEV;
+}
+
+static int __init xhci_parse_param(uint32_t *bus, uint32_t *dev, uint32_t *func)
+{
+    if ( strncmp(opt_dbgp, "xhci", 4) )
+        return -ENODEV;
+
+    if ( isdigit(opt_dbgp[4]) || !opt_dbgp[4] )
+    {
+        unsigned long num = 0;
+
+        if ( opt_dbgp[4] )
+            num = simple_strtoul(opt_dbgp + 4, NULL, 10);
+
+        if ( xhci_find(num, bus, dev, func) )
+            return -ENODEV;
+    }
+    else if ( !strncmp(opt_dbgp, "xhci@pci", 8) )
+    {
+        if ( !parse_pci(opt_dbgp + 8, NULL, bus, dev, func) )
+            return -ENODEV;
+
+        if ( !pci_device_detect(0, *bus, *dev, *func) )
+            return -ENODEV;
+
+        if ( !xhci_detect(*bus, *dev, *func) )
+            return -ENODEV;
+    }
+    else
+        return -ENODEV;
+
+    return 0;
+}
+
+void __init xhci_dbc_init(void)
+{
+    uint32_t bus, dev, func;
+    struct xhci_dbc *dbc = &xhci_dbc;
+
+    if ( xhci_parse_param(&bus, &dev, &func) )
+        return;
+
+    dbc->sbdf = PCI_SBDF(0, bus, dev, func);
+    xhci_enable_mmio_and_dma(dbc);
+
+    if ( xhci_map_regs(dbc) )
+        return;
+
+    if ( dbc_find_regs(dbc) )
+        goto unmap;
+
+    spin_lock_init(&dbc->lock);
+
+    dbc_bios_handoff(dbc);
+    dbc_reset(dbc);
+
+    if ( dbc_start(dbc) )
+        goto stop;
+
+    serial_register_uart(SERHND_DBGP, &dbc_driver, dbc);
+    return;
+
+stop:
+    dbc_stop(dbc);
+
+unmap:
+    xhci_unmap_regs(dbc);
+}
diff --git a/xen/drivers/char/xhci-dbc.h b/xen/drivers/char/xhci-dbc.h
new file mode 100644
index 0000000000..34f51933a9
--- /dev/null
+++ b/xen/drivers/char/xhci-dbc.h
@@ -0,0 +1,621 @@
+/*
+ * xen/drivers/char/xhci-dbc.h
+ *
+ * Driver for USB xHCI Debug Capability
+ * ported from https://github.com/connojd/xue.git
+ *
+ * Copyright (C) 2019 Assured Information Security, Inc.
+ * Copyright (C) 2021 Connor Davis <connojdavis@gmail.com>
+ *
+ * Permission is hereby granted, free of charge, to any person
+ * obtaining a copy of this software and associated documentation
+ * files (the "Software"), to deal in the Software without restriction,
+ * including without limitation the rights to use, copy, modify, merge,
+ * publish, distribute, sublicense, and/or sell copies of the Software,
+ * and to permit persons to whom the Software is furnished to do so,
+ * subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be
+ * included in all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
+ * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
+ * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT.
+ * IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY
+ * CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT,
+ * TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE
+ * SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
+ */
+
+#include <xen/lib.h>
+#include <xen/page-size.h>
+#include <xen/pci.h>
+#include <xen/spinlock.h>
+
+#include <asm/byteorder.h>
+#include <asm/io.h>
+
+/* USB string descriptor values */
+#define USB_DT_STRING      3
+#define USB_LANGID_ENGLISH 0x0409
+#define DBC_MAX_STR_SIZE   64
+
+/* Number of DbC event ring segments */
+#define DBC_ERSTSZ  1
+
+/* DbC control register */
+#define CTRL_DCR    (1U << 0)
+#define CTRL_LSE    (1U << 1)
+#define CTRL_HOT    (1U << 2)
+#define CTRL_HIT    (1U << 3)
+#define CTRL_DRC    (1U << 4)
+#define CTRL_DCE    (1U << 31)
+
+/* DbC port status change register */
+#define PORTSC_CCS  (1U << 0)
+#define PORTSC_PED  (1U << 1)
+#define PORTSC_PR   (1U << 4)
+#define PORTSC_PLS  (0xFU << 5)
+#define PORTSC_CSC  (1U << 17)
+#define PORTSC_PRC  (1U << 21)
+#define PORTSC_PLC  (1U << 22)
+#define PORTSC_CEC  (1U << 23)
+
+/* Port status change bits must be cleared for further event generation */
+#define PORTSC_PSC_MASK (PORTSC_CSC | PORTSC_PRC | PORTSC_PLC | PORTSC_CEC)
+
+/* TRB types */
+#define TRB_TYPE_NORMAL      1
+#define TRB_TYPE_LINK        6
+#define TRB_TYPE_XFER_EVENT  32
+#define TRB_TYPE_PSC_EVENT   34
+
+/* TRB completion codes */
+#define TRB_CC_TRANSACTION_ERROR  4
+
+#define TRB_MAX_XFER       (PAGE_SIZE * 16)
+#define TRB_PER_PAGE       (PAGE_SIZE / sizeof(struct trb))
+#define TRB_RING_ORDER     2
+#define TRB_RING_PAGES     (1U << TRB_RING_ORDER)
+#define TRB_RING_SIZE      (TRB_PER_PAGE * TRB_RING_PAGES)
+#define TRB_RING_MASK      ((TRB_RING_SIZE * sizeof(struct trb)) - 1)
+#define TRB_RING_ALIGN     (PAGE_SIZE * 16)
+
+/* DbC endpoint values */
+#define EP_STATE_DISABLED  0
+#define EP_STATE_RUNNING   1
+#define EP_STATE_HALTED    2
+#define EP_STATE_STOPPED   3
+#define EP_STATE_ERROR     4
+#define EP_TYPE_BULK_OUT   2
+#define EP_TYPE_BULK_IN    6
+#define EP_MAX_PKT_SIZE    1024
+#define EP_AVG_TRB_LENGTH  (3 * 1024)
+#define EP_INITIAL_DCS     1
+
+#define TRB_TYPE_MASK               0xFC00
+#define TRB_TYPE_SHIFT              10
+#define TRB_CYCLE_MASK              0x1
+#define TRB_EVENT_EPID_MASK         0x1F0000
+#define TRB_EVENT_EPID_SHIFT        16
+#define TRB_EVENT_CC_SHIFT          24
+#define TRB_EVENT_LEN_MASK          0x1FFFF
+#define TRB_NORM_LEN_MASK           0x1FFFF
+#define TRB_NORM_IOC_MASK           0x20
+#define TRB_LINK_TOGGLE_CYCLE_MASK  0x2
+#define EP_STATE_MASK               0x7
+#define MAX_BURST_MASK              0xFF0000
+#define MAX_BURST_SHIFT             16
+
+/*
+ * The following comment and XDBC_EPID_* values are taken from
+ * linux/drivers/usb/early/xhci-dbc.h:
+ *
+ * These are the "Endpoint ID" (also known as "Context Index") values for the
+ * OUT Transfer Ring and the IN Transfer Ring of a Debug Capability Context data
+ * structure.
+ * According to the "eXtensible Host Controller Interface for Universal Serial
+ * Bus (xHCI)" specification, section "7.6.3.2 Endpoint Contexts and Transfer
+ * Rings", these should be 0 and 1, and those are the values AMD machines give
+ * you; but Intel machines seem to use the formula from section "4.5.1 Device
+ * Context Index", which is supposed to be used for the Device Context only.
+ * Luckily the values from Intel don't overlap with those from AMD, so we can
+ * just test for both.
+ */
+#define XDBC_EPID_OUT		0
+#define XDBC_EPID_IN		1
+#define XDBC_EPID_OUT_INTEL	2
+#define XDBC_EPID_IN_INTEL	3
+
+/*
+ * The doorbell target is written to the doorbell register to
+ * notify hardware that the respective transfer ring's enqueue
+ * pointer has been updated (i.e., data is ready to be transferred).
+ */
+#define DOORBELL_OUT     0
+#define DOORBELL_IN      1
+
+#define DATA_RING_ORDER  2
+#define DATA_RING_PAGES  (1UL << DATA_RING_ORDER)
+#define DATA_RING_SIZE   (PAGE_SIZE * DATA_RING_PAGES)
+#define DATA_RING_MASK   (DATA_RING_SIZE - 1)
+
+#define MAX_RX_SIZE      EP_MAX_PKT_SIZE
+
+#define DBC_PROTOCOL     0x0001 /* Remote GDB */
+#define DBC_VENDOR       0x1D6B /* Linux Foundation */
+#define DBC_PRODUCT      0x0010
+#define DBC_REVISION     0x0000
+
+/*
+ * xHCI PCI class code
+ *
+ * Bits 7:0   - 0x30: USB3 host controller compliant with xHCI
+ * Bits 15:8  - 0x03: USB host controller
+ * Bits 23:16 - 0x0C: Serial bus controller
+ */
+#define XHCI_PCI_CLASS   0x0C0330
+
+/*
+ * Byte offset from the base of the xHC's MMIO
+ * region to the HCCPARAMS1 capability register
+ */
+#define XHCI_HCCPARAMS1  0x10
+
+/* Extended xHCI capabilities we care about */
+#define XHCI_CAPID_LEGACY  1
+#define XHCI_CAPID_DBC     10
+
+/* Legacy ownership bits */
+#define XHCI_BIOS_OWNED  (1 << 16)
+#define XHCI_OS_OWNED    (1 << 24)
+
+/* The set bits here are RsvdP, the RW SMI bits are clear */
+#define XHCI_LEGACY_DISABLE_SMI ((0x7 << 17) | (0xff << 5) | (0x7 << 1))
+
+/* SMI events, RW1C */
+#define XHCI_LEGACY_SMI_EVENTS  (0x7 << 29)
+
+/* DbC states */
+enum {
+    DBC_OFF,
+    DBC_DISCONNECTED,
+    DBC_DISABLED,
+    DBC_ENABLED,
+    DBC_ERROR,
+    DBC_RESETTING,
+    DBC_CONFIGURED
+};
+
+/* Port link states */
+enum {
+    DBC_PLS_U0,
+    DBC_PLS_U1,
+    DBC_PLS_U2,
+    DBC_PLS_U3,
+    DBC_PLS_DISABLED,
+    DBC_PLS_RXDETECT,
+    DBC_PLS_INACTIVE,
+    DBC_PLS_POLLING,
+    DBC_PLS_RECOVERY,
+    DBC_PLS_HOT_RESET
+};
+
+/*
+ * struct trb - xHC transfer request block
+ *
+ * TRBs come in several flavors, but the DbC mainly deals with
+ * event and normal TRB types. The fields' meanings vary depending
+ * on the type (which is always defined in bits @ctrl[15:10]).
+ *
+ * Normal TRBs are placed onto transfer rings and point to data
+ * that the xHC should read or write. In turn, the xHC produces
+ * event TRBs on the event ring, which are processed by the driver.
+ */
+struct trb {
+    __le64 params;
+    __le32 status;
+    __le32 ctrl;
+};
+
+/*
+ * struct erst_segment - Event ring segment
+ *
+ * Specifies a physically contiguous region of event TRBs. The region must
+ * not straddle a 64K boundary.
+ *
+ * @base the 64-byte aligned physical address of the segment
+ * @size the number of TRBs in the segment. Must be in [16, 4096].
+ */
+struct erst_segment {
+    __le64 base;
+    __le16 size;
+    __le16 __rsvdz[3];
+};
+
+/*
+ * struct dbc_info_ctx - Physical addresses for the DbC's USB
+ * String Descriptors.
+ *
+ * @string0 - address for languages supported
+ * @manufacturer - address for manufacturer
+ * @product - address for product
+ * @serial - address for serial number
+ * @length - each byte contains the length of the corresponding string
+ *           in bytes, e.g., byte 0 of @length is the number of
+ *           bytes of @string0.
+ */
+struct dbc_info_ctx {
+    __le64 string0;
+    __le64 manufacturer;
+    __le64 product;
+    __le64 serial;
+    __le32 length;
+    __le32 __rsvdz[7];
+};
+
+/*
+ * struct dbc_ep_ctx - DbC endpoint context
+ *
+ * Describes a single bulk transfer ring. The DbC has two bulk endpoints:
+ * one for IN transfers, one for OUT. The field values required for the DbC
+ * are specified in sections 7.6.3.2 and 7.6.9.2.
+ */
+struct dbc_ep_ctx {
+   __le32 data0;
+   __le32 data1;
+   __le64 trdp;
+   __le32 data2;
+   __le32 __rsvdo[11];
+};
+
+/*
+ * struct dbc_ctx - DbC context
+ *
+ * Container for the USB string and endpoint contexts.
+ */
+struct dbc_ctx {
+    struct dbc_info_ctx info;
+    struct dbc_ep_ctx ep_out;
+    struct dbc_ep_ctx ep_in;
+};
+
+/*
+ * struct dbc_regs
+ *
+ * The registers are a subset of the xHC's MMIO registers
+ * and are found by traversing the xHC's capability list.
+ *
+ * @id - extended capability ID
+ * @db - doorbell register, written by software whenever new TRBs
+ *       are ready for consumption.
+ * @erstsz - number of entries in the event ring segment table (ERST).
+ * @erstba - phys address of the ERST, value must be 64-byte aligned.
+ * @erdp - event ring dequeue pointer, updated as software consumes events.
+ * @ctrl - control register
+ * @st - status register
+ * @portsc - port status and control register
+ * @cp - phys address of the DbC context, value must be 16-byte aligned.
+ * @ddi1 - DbC protocol and vendor ID
+ * @ddi2 - device revision and product ID
+ */
+struct dbc_regs {
+    __le32 id;
+    __le32 db;
+    __le32 erstsz;
+    __le32 __rsvdz;
+    __le64 erstba;
+    __le64 erdp;
+    __le32 ctrl;
+    __le32 st;
+    __le32 portsc;
+    __le32 __rsvdp;
+    __le64 cp;
+    __le32 ddi1;
+    __le32 ddi2;
+};
+
+/*
+ * struct trb_ring
+ *
+ * The DbC uses three TRB rings: one event ring and two bulk transfer rings
+ * (one for each direction). The hardware produces Event TRBs onto the event
+ * ring and the driver code produces Normal TRBs onto the transfer rings.
+ *
+ * @trb array of trbs
+ * @enq offset into @trb at which the next trb is produced
+ * @deq offset of the most recently consumed trb
+ * @cycle the bit that determines the ownership of any given
+ *        trb in the ring. Both the driver and hardware have internal
+ *        copies that toggle as the ring is traversed.
+ */
+struct trb_ring {
+    struct trb *trb;
+    unsigned int enq;
+    unsigned int deq;
+    unsigned int cycle;
+};
+
+/*
+ * struct data_ring
+ *
+ * The raw bytes sent from Xen and received from the debug host are buffered
+ * into a data_ring. When the uart is flush()'ed, the bytes are queued into
+ * a TRB on the appropriate transfer ring.
+ *
+ * @buf the buffer for the data
+ * @enq end of buffered data
+ * @deq beginning of buffered data
+ * @dma_addr physical address of @buf
+ */
+struct data_ring {
+    char *buf;
+    unsigned int enq;
+    unsigned int deq;
+    paddr_t dma_addr;
+};
+
+/* USB string descriptor */
+struct usb_str_desc {
+    __u8 bLength;
+    __u8 bDescriptorType;
+    __le16 wData[1];
+};
+
+/*
+ * DbC device strings. string0 is special in that it encodes
+ * the supported LANGIDs of the DbC device. The others are
+ * UTF16LE-encoded character strings.
+ */
+struct dbc_str_desc {
+    char string0[DBC_MAX_STR_SIZE];
+    char manufacturer[DBC_MAX_STR_SIZE];
+    char product[DBC_MAX_STR_SIZE];
+    char serial[DBC_MAX_STR_SIZE];
+};
+
+/*
+ * struct xhci_dbc
+ *
+ * @sbdf - location of the HC on the PCI bus
+ * @lock - used for synchronizing safety checks
+ * @timer - used to inject RX interrupts
+ * @unsafe - when true, MMIO accesses are not permitted. This covers
+ *           the window between the PHYSDEVOP_DBGP_RESET_PREPARE and
+ *           PHYSDEVOP_DBGP_RESET_DONE hypercalls, i.e. from when dom0
+ *           resets the HC until the HC clears the Controller Not
+ *           Ready (CNR) bit.
+ * @mmio_paddr - base address of the HC's MMIO region
+ * @mmio_pages - number of 4K pages of the HC's MMIO region
+ * @xhc_regs - fixmap-mapped virtual address of mmio_paddr
+ * @dbc_regs - base address of the DbC registers
+ * @trb_evt_ring - event ring. The driver consumes TRBs off of
+ *                 this ring.
+ * @trb_out_ring - OUT transfer ring. Data written by Xen is referenced
+ *                 by TRBs placed onto this ring.
+ * @trb_in_ring - IN transfer ring. When the port is !sync, a timer
+ *                interrupt places TRBs onto this ring, otherwise the TRB
+ *                is queued directly by dbc_getc.
+ * @data_out_ring - circular buffer for data written by Xen
+ * @data_in_ring - circular buffer for data received from debug host
+ * @strings - USB string data
+ * @ctx - string and endpoint context
+ * @erst - event ring segment table. All event and transfer rings use
+ *         only one segment.
+ */
+struct xhci_dbc {
+    pci_sbdf_t sbdf;
+    spinlock_t lock;
+    struct timer timer;
+    bool unsafe;
+    uint64_t mmio_paddr;
+    uint64_t mmio_pages;
+    uint8_t __iomem *xhc_regs;
+    struct dbc_regs __iomem *dbc_regs;
+    struct trb_ring trb_evt_ring;
+    struct trb_ring trb_out_ring;
+    struct trb_ring trb_in_ring;
+    struct data_ring data_out_ring;
+    struct data_ring data_in_ring;
+    struct dbc_str_desc strings __aligned(64);
+    struct dbc_ctx ctx __aligned(64);
+    struct erst_segment erst[DBC_ERSTSZ] __aligned(64);
+};
+
+/*
+ * Type used for polling predicates over 32-bit MMIO register values.
+ * Return true when polling should stop, false otherwise.
+ */
+typedef bool (*poll_pred_t)(uint32_t val);
+
+#define POLL_LIMIT 10000000
+
+static inline uint32_t readl_poll(const uint32_t __iomem *ptr,
+                                  poll_pred_t done,
+                                  unsigned int limit)
+{
+    unsigned int i = 0;
+    uint32_t val = readl(ptr);
+
+    while ( !done(val) && i++ < limit )
+    {
+        cpu_relax();
+        val = readl(ptr);
+    }
+
+    return val;
+}
+
+static inline bool xhci_os_owned(uint32_t legacy)
+{
+    return (legacy & (XHCI_BIOS_OWNED | XHCI_OS_OWNED)) == XHCI_OS_OWNED;
+}
+
+static inline bool dbc_enabled(uint32_t portsc)
+{
+    return (portsc & (PORTSC_CCS | PORTSC_PED)) == (PORTSC_CCS | PORTSC_PED);
+}
+
+static inline bool dbc_configured(uint32_t ctrl)
+{
+    return (ctrl & CTRL_DCR) != 0;
+}
+
+static inline bool dbc_on(uint32_t ctrl)
+{
+    return (ctrl & CTRL_DCE) != 0;
+}
+
+static inline bool dbc_off(uint32_t ctrl)
+{
+    return (ctrl & CTRL_DCE) == 0;
+}
+
+/* Imported from include/linux/io-64-nonatomic-lo-hi.h */
+static inline __u64 lo_hi_readq(const volatile void __iomem *addr)
+{
+	const volatile u32 __iomem *p = addr;
+	u32 low, high;
+
+	low = readl(p);
+	high = readl(p + 1);
+
+	return low + ((u64)high << 32);
+}
+
+/* Imported from include/linux/io-64-nonatomic-lo-hi.h */
+static inline void lo_hi_writeq(__u64 val, volatile void __iomem *addr)
+{
+	volatile u8 __iomem *p = addr;
+
+	writel(val, p);
+	writel(val >> 32, p + 4);
+}
+
+static inline void copy_utf16le(u16 *wstr, const char *s)
+{
+    unsigned int len = strlen(s);
+
+    for ( unsigned int i = 0; i < len; i++ )
+        wstr[i] = cpu_to_le16(s[i]);
+}
+
+static inline void trb_set_type(struct trb *trb, uint32_t type)
+{
+    uint32_t ctrl = le32_to_cpu(trb->ctrl);
+
+    ctrl &= ~TRB_TYPE_MASK;
+    ctrl |= type << TRB_TYPE_SHIFT;
+
+    trb->ctrl = cpu_to_le32(ctrl);
+}
+
+static inline int trb_type(const struct trb *trb)
+{
+    return (le32_to_cpu(trb->ctrl) & TRB_TYPE_MASK) >> TRB_TYPE_SHIFT;
+}
+
+static inline void trb_set_cycle(struct trb *trb, uint32_t cyc)
+{
+    uint32_t ctrl = le32_to_cpu(trb->ctrl);
+
+    ctrl &= ~TRB_CYCLE_MASK;
+    ctrl |= cyc;
+
+    trb->ctrl = cpu_to_le32(ctrl);
+}
+
+static inline int trb_cycle(const struct trb *trb)
+{
+    return le32_to_cpu(trb->ctrl) & TRB_CYCLE_MASK;
+}
+
+static inline int trb_xfer_event_epid(const struct trb *trb)
+{
+    return (le32_to_cpu(trb->ctrl) & TRB_EVENT_EPID_MASK)
+           >> TRB_EVENT_EPID_SHIFT;
+}
+
+static inline int trb_xfer_event_cc(const struct trb *trb)
+{
+    return le32_to_cpu(trb->status) >> TRB_EVENT_CC_SHIFT;
+}
+
+static inline uint64_t trb_xfer_event_ptr(const struct trb *trb)
+{
+    return le64_to_cpu(trb->params);
+}
+
+static inline uint32_t trb_xfer_event_len(const struct trb *trb)
+{
+    return le32_to_cpu(trb->status) & TRB_EVENT_LEN_MASK;
+}
+
+static inline void trb_norm_set_buf(struct trb *trb, uint64_t dma)
+{
+    trb->params = cpu_to_le64(dma);
+}
+
+static inline void trb_norm_set_len(struct trb *trb, uint32_t len)
+{
+    uint32_t status = le32_to_cpu(trb->status);
+
+    status &= ~TRB_EVENT_LEN_MASK;
+    status |= len;
+
+    trb->status = cpu_to_le32(status);
+}
+
+static inline uint64_t trb_norm_buf(const struct trb *trb)
+{
+    return le64_to_cpu(trb->params);
+}
+
+static inline uint32_t trb_norm_len(const struct trb *trb)
+{
+    return le32_to_cpu(trb->status) & TRB_NORM_LEN_MASK;
+}
+
+static inline void trb_norm_set_ioc(struct trb *trb)
+{
+    uint32_t ctrl = le32_to_cpu(trb->ctrl);
+
+    ctrl |= TRB_NORM_IOC_MASK;
+    trb->ctrl = cpu_to_le32(ctrl);
+}
+
+static inline void trb_link_set_toggle_cycle(struct trb *trb)
+{
+    uint32_t ctrl = le32_to_cpu(trb->ctrl);
+
+    ctrl |= TRB_LINK_TOGGLE_CYCLE_MASK;
+    trb->ctrl = cpu_to_le32(ctrl);
+}
+
+static inline void trb_link_set_rsp(struct trb *trb, uint64_t rsp)
+{
+    trb->params = cpu_to_le64(rsp);
+}
+
+static inline bool port_disabled(uint32_t portsc)
+{
+    return (portsc & PORTSC_PED) == 0;
+}
+
+static inline bool port_enabled(uint32_t portsc)
+{
+    return (portsc & PORTSC_PED) != 0;
+}
+
+static inline uint32_t ep_state(const struct dbc_ep_ctx *ctx)
+{
+    return le32_to_cpu(ctx->data0) & EP_STATE_MASK;
+}
+
+static inline uint32_t dbc_max_burst(const struct xhci_dbc *dbc)
+{
+    return (readl(&dbc->dbc_regs->ctrl) & MAX_BURST_MASK)
+           >> MAX_BURST_SHIFT;
+}
diff --git a/xen/include/asm-x86/fixmap.h b/xen/include/asm-x86/fixmap.h
index 0db314baeb..35f9b38d97 100644
--- a/xen/include/asm-x86/fixmap.h
+++ b/xen/include/asm-x86/fixmap.h
@@ -43,6 +43,10 @@ enum fixed_addresses {
     FIX_COM_BEGIN,
     FIX_COM_END,
     FIX_EHCI_DBGP,
+#ifdef CONFIG_HAS_XHCI_DBC
+    FIX_XHCI_BASE,
+    FIX_XHCI_END = FIX_XHCI_BASE + CONFIG_XHCI_FIXMAP_PAGES - 1,
+#endif
 #ifdef CONFIG_XEN_GUEST
     FIX_PV_CONSOLE,
     FIX_XEN_SHARED_INFO,
diff --git a/xen/include/xen/serial.h b/xen/include/xen/serial.h
index 6548f0b0a9..85098de590 100644
--- a/xen/include/xen/serial.h
+++ b/xen/include/xen/serial.h
@@ -170,7 +170,22 @@ struct ns16550_defaults {
     unsigned long io_base; /* default io_base address */
 };
 void ns16550_init(int index, struct ns16550_defaults *defaults);
+
+#ifdef CONFIG_HAS_EHCI
 void ehci_dbgp_init(void);
+#else
+static inline void ehci_dbgp_init(void)
+{
+}
+#endif
+
+#ifdef CONFIG_HAS_XHCI_DBC
+void xhci_dbc_init(void);
+#else
+static inline void xhci_dbc_init(void)
+{
+}
+#endif
 
 void arm_uart_init(void);
 

base-commit: d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 00:18:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 00:18:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126087.237342 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgca0-0000kq-Va; Wed, 12 May 2021 00:18:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126087.237342; Wed, 12 May 2021 00:18:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgca0-0000kj-SX; Wed, 12 May 2021 00:18:24 +0000
Received: by outflank-mailman (input) for mailman id 126087;
 Wed, 12 May 2021 00:18:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+k7y=KH=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lgcZz-0000kd-6p
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 00:18:23 +0000
Received: from mail-io1-xd35.google.com (unknown [2607:f8b0:4864:20::d35])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 028b1b94-275e-4a28-bc0b-a451201d550c;
 Wed, 12 May 2021 00:18:22 +0000 (UTC)
Received: by mail-io1-xd35.google.com with SMTP id o21so19562862iow.13
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 17:18:22 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id v4sm8241490iol.3.2021.05.11.17.18.21
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 May 2021 17:18:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 028b1b94-275e-4a28-bc0b-a451201d550c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=mXSeR9jPw54X42/hzVCKnW9V7a7Fp/RB7jF7rnjCJUw=;
        b=cPTkppKtPOrkAZ6nQARylT++pk44Lspab5bF4Y1dwmuQBYjN0Y3UuyDtFNRJx0iBpg
         9EHv6UTLsMbWxIJN3z9MPof9z9kmZE5mn3oRoU7l/paKObQqQY0DTK3GBr8Ys1CwM52V
         Ee1MxtvuDQLEH9HCoGgOCOueH024l3lX693zvuJtJeynCSOleAkV6uSYs/nnzUjCTvvU
         UVxynsT469vDP4YHIsKNmIOYKFSX2A4ZjZDC6OLL49vg2Tz+mQ7gH9GUfIuT5tM+7HqS
         4FN/Ba9X3JFWYRvBn6fdHSsH/GGtZjvqJcp/2tekEeP2dLEU4N1oubuqc+iLj+f8Huu6
         TCyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=mXSeR9jPw54X42/hzVCKnW9V7a7Fp/RB7jF7rnjCJUw=;
        b=B0Yppvy1ltekFEI+taFdWImDiHBDkzk76+Cxvk+QqB3uyGGnn1dxynCt+pwTffeXzn
         S7xGRr0mrqFmzJNoYodCazlCx7VzHjpjsrB2ZuMpxU2CmQTUoWpHAWj9PbveTFq7mEvy
         MknbFJlrEWekbmt1lgOaPEyzBsJCkcwL+29E19KaQwqOaJlbADeNnpQj/pghp/oTK+bL
         eDxyu8/mb/yIvK2kWsGhWMk3fge7FLfi56IAn92lVsDDSJ/C/WPgaG/ZxvRq3py5F+G7
         F/e+IzxOylac6wPnqap6XWIPDMUZCvlcb8ibwJ9leMjARzSZxReTzKND4g1FDpKtNJ3A
         mTWg==
X-Gm-Message-State: AOAM532j2pKh+Oa2kcMpVTiC01dIG5wNf4LaGxXB02/8RwrcvPa1mNRH
	OhfXFQ/mdhLpzM1hdlTcNGA=
X-Google-Smtp-Source: ABdhPJyGzEtP2429F0wetqYLv9fqk0nL/tFbD9fe/i3vDjKMOECPsuDdTpSelI1bEV3hT45X4JMc9A==
X-Received: by 2002:a02:1c81:: with SMTP id c123mr30707808jac.42.1620778702074;
        Tue, 11 May 2021 17:18:22 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: linux-kernel@vger.kernel.org
Cc: Connor Davis <connojdavis@gmail.com>,
	linux-usb@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Lee Jones <lee.jones@linaro.org>,
	Jann Horn <jannh@google.com>,
	Chunfeng Yun <chunfeng.yun@mediatek.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Mathias Nyman <mathias.nyman@intel.com>
Subject: [PATCH 0/3] Support xen-driven USB3 debug capability
Date: Tue, 11 May 2021 18:18:18 -0600
Message-Id: <cover.1620776161.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

The goal of this series is to allow the USB3 debug capability (DbC) to be
used safely by Xen while Linux runs as dom0. The first patch prevents
the early DbC driver from taking over an already-running DbC. The second
exports the xen_dbgp_reset_prep and xen_dbgp_external_startup functions when
CONFIG_XEN_DOM0 is enabled so they can be used by the xHCI driver.
The last uses those functions to notify Xen of unsafe periods (e.g. reset
and the D3hot transition).

Thanks,
Connor

--
Connor Davis (3):
  usb: early: Avoid using DbC if already enabled
  xen: Export dbgp functions when CONFIG_XEN_DOM0 is enabled
  usb: xhci: Notify xen when DbC is unsafe to use

 drivers/usb/early/xhci-dbc.c   | 10 ++++++
 drivers/usb/host/xhci-dbgcap.h |  6 ++++
 drivers/usb/host/xhci.c        | 57 ++++++++++++++++++++++++++++++++++
 drivers/usb/host/xhci.h        |  1 +
 drivers/xen/dbgp.c             |  2 +-
 5 files changed, 75 insertions(+), 1 deletion(-)


base-commit: 88b06399c9c766c283e070b022b5ceafa4f63f19
--
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 00:18:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 00:18:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126088.237355 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgca6-00013G-7c; Wed, 12 May 2021 00:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126088.237355; Wed, 12 May 2021 00:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgca6-000139-42; Wed, 12 May 2021 00:18:30 +0000
Received: by outflank-mailman (input) for mailman id 126088;
 Wed, 12 May 2021 00:18:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+k7y=KH=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lgca4-0000kd-C8
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 00:18:28 +0000
Received: from mail-il1-x133.google.com (unknown [2607:f8b0:4864:20::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e7ffde7-960e-4ccf-ba48-8aeda0aaa6ca;
 Wed, 12 May 2021 00:18:27 +0000 (UTC)
Received: by mail-il1-x133.google.com with SMTP id z1so10881633ils.0
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 17:18:27 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id v4sm8241490iol.3.2021.05.11.17.18.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 May 2021 17:18:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e7ffde7-960e-4ccf-ba48-8aeda0aaa6ca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=puac9P/1pH2gTjwWoFDjpCGPhH1RSMCtsgJsRORZsps=;
        b=KVnhpxSKSgjxFKZXFSiHszto+jk1TG6dHF6KfPOJDxdnW5ABx+HpY3LPhsV0oNEYdU
         oOY2KZnnEdfwyeoTn19rO/DWpWf0O3blIvpZ0JPdzTmItMYHWDFp4hP9TX8JFPXxs8t6
         OEOQhUyyUCvwrEwL+kXIgkSEVJBz+mIgiLVe1X9ca0uYQWF5H9EbzFZ77PHOPnKfdS2Z
         V76FL/tX9xIwhDf+ym/LTIwmGVj2VeuWwJyy+uEOBU9ZlksbvAQo4MmOawNclyzLEnpa
         IjYwCl2dc5uthpQGF5HR1mj1jfG5xvg1/BdoGsFLLYXQzg9LYFdfSxy7fb2uZpVBkOG4
         jyRw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=puac9P/1pH2gTjwWoFDjpCGPhH1RSMCtsgJsRORZsps=;
        b=CTxA60GRD+3ZVIgO6cAc809/V0mnUqiqUfacE1BYLVqxD36DeKF3tSAZQ2A0aRSfks
         j60H0A69IkC6k2Zyeo1jM7rmyHpB//spmOfV9moIiTYc8KHQdMYRYShX/6rMJZncr6rN
         bW7csdExThQ7UzBsD0XtnUrfkLbnmZHOR7p7P0VJCVHtM7Ur/su1jyPAb06N5/9CulTf
         /osvFTQBWH+hlBDGfR3CW3v82+URdc2Iihuq8J4wz7LlNhTP/vokqH9G0ee7I6Ixp9VQ
         QJby1vd6iJK9V9XzNT0sJ5KNjmMknYAhHqy3LMxZaYrouqCFHAa8decDJ8S4WttIuMcG
         IhnA==
X-Gm-Message-State: AOAM532XxP7sFdpLyDyop+YvgYSXZ0qydbNWFi0fN1qtARW4KR2qPE5T
	DA8NkrU9SWu4zh9z6AehSmM=
X-Google-Smtp-Source: ABdhPJxwBLia0GULiakrQ/ChbOUlE5Y7VdEixYIKRKbmd5BUsUIAgrVrusZYIWrKZfjHhsy2cqSD6Q==
X-Received: by 2002:a05:6e02:20c5:: with SMTP id 5mr28877967ilq.14.1620778707477;
        Tue, 11 May 2021 17:18:27 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Connor Davis <connojdavis@gmail.com>,
	Lee Jones <lee.jones@linaro.org>,
	Jann Horn <jannh@google.com>,
	Chunfeng Yun <chunfeng.yun@mediatek.com>,
	xen-devel@lists.xenproject.org,
	linux-usb@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 1/3] usb: early: Avoid using DbC if already enabled
Date: Tue, 11 May 2021 18:18:19 -0600
Message-Id: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620776161.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620776161.git.connojdavis@gmail.com>
References: <cover.1620776161.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Check whether the debug capability is enabled in early_xdbc_parse_parameter,
and if it is, return an error. This avoids colliding with whatever
enabled the DbC before Linux started.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 drivers/usb/early/xhci-dbc.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
index be4ecbabdd58..ca67fddc2d36 100644
--- a/drivers/usb/early/xhci-dbc.c
+++ b/drivers/usb/early/xhci-dbc.c
@@ -642,6 +642,16 @@ int __init early_xdbc_parse_parameter(char *s)
 	}
 	xdbc.xdbc_reg = (struct xdbc_regs __iomem *)(xdbc.xhci_base + offset);

+	if (readl(&xdbc.xdbc_reg->control) & CTRL_DBC_ENABLE) {
+		pr_notice("xhci debug capability already in use\n");
+		early_iounmap(xdbc.xhci_base, xdbc.xhci_length);
+		xdbc.xdbc_reg = NULL;
+		xdbc.xhci_base = NULL;
+		xdbc.xhci_length = 0;
+
+		return -ENODEV;
+	}
+
 	return 0;
 }

--
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 00:18:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 00:18:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126089.237367 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgca9-0001Na-HB; Wed, 12 May 2021 00:18:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126089.237367; Wed, 12 May 2021 00:18:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgca9-0001NN-Dg; Wed, 12 May 2021 00:18:33 +0000
Received: by outflank-mailman (input) for mailman id 126089;
 Wed, 12 May 2021 00:18:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+k7y=KH=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lgca8-0000kd-1H
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 00:18:32 +0000
Received: from mail-il1-x12e.google.com (unknown [2607:f8b0:4864:20::12e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae5463af-e05a-42f4-9cba-c489d57f25a0;
 Wed, 12 May 2021 00:18:31 +0000 (UTC)
Received: by mail-il1-x12e.google.com with SMTP id h6so18736306ila.7
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 17:18:31 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id v4sm8241490iol.3.2021.05.11.17.18.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 May 2021 17:18:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae5463af-e05a-42f4-9cba-c489d57f25a0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=ZVffP2AoY7DIGJlRpw6y32EVHjpd/fv+1cj2j/7FT4M=;
        b=HKnovley9S67BXN2eY+XSVLSSfFtLlQ2kmN/4O2HSj64YCc9GppYzEoOrRHHfVAP1J
         op9/qFDgSnmNQF7nldySt19Ck/ztvAFJ6asAk7LlNvKUjYpm3cxnJwCx7KhTqMgBAhmR
         1miUzwF4xB96xw+Vz0cMh4OX2JIlMJ+ufYnYZPt0bs/0GIqb4Co9/FvPEZHjq0lL+uyH
         7U3ZRyoftT1gtjpnIhcVjYD3xlF3jJVWxk1LSuTxfQORCUjyi3/7t9WLCssbc2fObi3C
         Ypz+I4182NPMJ9Awk/vb3fDmosz2pjnGIuXCdvirAz8HKUTvzI/iYCdH93CH5EPbwBT9
         nIkw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ZVffP2AoY7DIGJlRpw6y32EVHjpd/fv+1cj2j/7FT4M=;
        b=h727M7gj41KqS4mDMPIwOojFsdp5Z3S2Py8SFUMdE6fx4qDdLRjs8kO8IFVRST9NL3
         Kae8ZaQa4lmbMtcW/craJOekvHF3UphFjEHl9swt55eTt4VDdBZQ17cyzatAw6lqsZaU
         /sSd0vKpJTlf1mSFMhy9NlclFLt9JhTcfue3UsBIkkCIU0YxBMLqH59/2wJLfntqCK5m
         QOOx+UNRbMDjbC01b2hoU6xE5SGWOCIcTIfJI4YukzCoMC0tv41Ouq0j/cBQKkl95y8/
         cdN6CsP77Pj9rDEJVrRyGdkVBtksZipHSNVLaIcCj5lJvCRddTV/uYOPVMLx5yYE3ZMA
         eFtw==
X-Gm-Message-State: AOAM530zvw34vsZtMQRvIwLHEi0e7JQSbGqzW5gtmnDTmniN51GtHNbd
	EWsDC+875qoNeIQgLm4tFM/Oi272b3EVKQ==
X-Google-Smtp-Source: ABdhPJyGiX9C7qlPKoElf5vX50Gg7NyKG09/3UfVmVSHmJj770UiKDP6hzvjmA6DUZd/5xzM6KdSFg==
X-Received: by 2002:a05:6e02:1b06:: with SMTP id i6mr14387567ilv.139.1620778709944;
        Tue, 11 May 2021 17:18:29 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Connor Davis <connojdavis@gmail.com>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 2/3] xen: Export dbgp functions when CONFIG_XEN_DOM0 is enabled
Date: Tue, 11 May 2021 18:18:20 -0600
Message-Id: <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620776161.git.connojdavis@gmail.com>
References: <cover.1620776161.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Export xen_dbgp_reset_prep and xen_dbgp_external_startup
when CONFIG_XEN_DOM0 is defined. This allows use of these symbols
even if CONFIG_EARLY_PRINTK_DBGP is defined.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 drivers/xen/dbgp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/dbgp.c b/drivers/xen/dbgp.c
index cfb5de31d860..fef32dd1a5dc 100644
--- a/drivers/xen/dbgp.c
+++ b/drivers/xen/dbgp.c
@@ -44,7 +44,7 @@ int xen_dbgp_external_startup(struct usb_hcd *hcd)
 	return xen_dbgp_op(hcd, PHYSDEVOP_DBGP_RESET_DONE);
 }

-#ifndef CONFIG_EARLY_PRINTK_DBGP
+#if defined(CONFIG_XEN_DOM0) || !defined(CONFIG_EARLY_PRINTK_DBGP)
 #include <linux/export.h>
 EXPORT_SYMBOL_GPL(xen_dbgp_reset_prep);
 EXPORT_SYMBOL_GPL(xen_dbgp_external_startup);
--
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 00:18:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 00:18:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126090.237379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgcaA-0001fi-Rk; Wed, 12 May 2021 00:18:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126090.237379; Wed, 12 May 2021 00:18:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgcaA-0001fa-OB; Wed, 12 May 2021 00:18:34 +0000
Received: by outflank-mailman (input) for mailman id 126090;
 Wed, 12 May 2021 00:18:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+k7y=KH=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lgca9-0000kd-CV
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 00:18:33 +0000
Received: from mail-il1-x12c.google.com (unknown [2607:f8b0:4864:20::12c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bf0ad1d5-2bdd-4ddb-b008-1f072f66397b;
 Wed, 12 May 2021 00:18:32 +0000 (UTC)
Received: by mail-il1-x12c.google.com with SMTP id e14so18705257ils.12
 for <xen-devel@lists.xenproject.org>; Tue, 11 May 2021 17:18:32 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id v4sm8241490iol.3.2021.05.11.17.18.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 11 May 2021 17:18:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf0ad1d5-2bdd-4ddb-b008-1f072f66397b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=ITi3r5Sk3xjHS5bejnw6zCgTDqxpb5q4mGkKmUxd1DM=;
        b=vWbP78NMwE3T8JmfFp4s36sK/+i7v7xLPMGWtj+2kA8AvoYHmt83INZDx0up4Xd4QC
         B3aSpbLVmMk34VolopZeFugY28THv0nD4MH4zjjkay7gn4t8WQozRW3ZDElU8059RaeZ
         oX28+d32WYp/Chaada+JSWidliqK190G+TceqHHRVg81DujDoVtt5oKaTcBcX59GUtxz
         L9KflBGAPxBqEhm2G4h+hQEVK53HBzB+baoOD5LhmtgcPvvtnCcEYq6bd4m6ThV0fb/K
         jofnYt2/6Uf/dZ7NEAREsLc75dUYxip4E1tSaWAuusXmya/hiVvLE3kom4tFcAvWk+Kd
         blUg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=ITi3r5Sk3xjHS5bejnw6zCgTDqxpb5q4mGkKmUxd1DM=;
        b=WSiUx1oNZYJfFybrihMdfYUBGGRhHsAfv8Cjifen4GIEbros+OnGas/QQuNw7wyvuy
         s4AUTSZypjVAkrB+4iTUQOfqcM6YIYfJYqNTclduRpyYft3vtIeUGThiM1LVhpMNWDdT
         M8CknEjO1BnFuQpfLUkx9VIUJ4xv6boljtRnORQdPnvRoN2Xx5Yx8dW1t7f+rgNRwv+g
         PlDjEecRVSpuAMqKnadoBxFBc1oOZXcLMlRqz0c9zhfvsT/5H6G2UijQxg0zUypiMMdL
         jLyxatKiyH15uGYZHoTwIhx2shkisl6iMpVyS9WN1+7xoV9ki+XZGZasbjz71we1/rip
         EoGg==
X-Gm-Message-State: AOAM533IaQQCGfJD69bRXFnbNSZutDguksEBYozxsTDX4nemQ/o/hVIB
	xDAJbkltYtCiA+Ug6kIzaHM=
X-Google-Smtp-Source: ABdhPJzAnRBOne/bDfCqTIq6WlGLh8r+N1lZieC4W5kBCqqf/2oSXz+QMjlGKV7hfOzzVbYvZm7tQg==
X-Received: by 2002:a92:c5c1:: with SMTP id s1mr29521516ilt.295.1620778711693;
        Tue, 11 May 2021 17:18:31 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Connor Davis <connojdavis@gmail.com>,
	Mathias Nyman <mathias.nyman@intel.com>,
	xen-devel@lists.xenproject.org,
	linux-usb@vger.kernel.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH 3/3] usb: xhci: Notify xen when DbC is unsafe to use
Date: Tue, 11 May 2021 18:18:21 -0600
Message-Id: <2af7e7b8d569e94ab9c48039040ca69a8d52c89d.1620776161.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620776161.git.connojdavis@gmail.com>
References: <cover.1620776161.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

When running as a dom0 guest on Xen, check if the USB3 debug
capability is enabled before xHCI reset, suspend, and resume. If it
is, call xen_dbgp_reset_prep() to notify Xen that it is unsafe to touch
MMIO registers until the next xen_dbgp_external_startup().

This notification allows Xen to avoid undefined behavior resulting
from MMIO access when the host controller's CNR bit is set or when
the device transitions to D3hot.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 drivers/usb/host/xhci-dbgcap.h |  6 ++++
 drivers/usb/host/xhci.c        | 57 ++++++++++++++++++++++++++++++++++
 drivers/usb/host/xhci.h        |  1 +
 3 files changed, 64 insertions(+)

diff --git a/drivers/usb/host/xhci-dbgcap.h b/drivers/usb/host/xhci-dbgcap.h
index c70b78d504eb..24784b82a840 100644
--- a/drivers/usb/host/xhci-dbgcap.h
+++ b/drivers/usb/host/xhci-dbgcap.h
@@ -227,4 +227,10 @@ static inline int xhci_dbc_resume(struct xhci_hcd *xhci)
 	return 0;
 }
 #endif /* CONFIG_USB_XHCI_DBGCAP */
+
+#ifdef CONFIG_XEN_DOM0
+int xen_dbgp_reset_prep(struct usb_hcd *hcd);
+int xen_dbgp_external_startup(struct usb_hcd *hcd);
+#endif /* CONFIG_XEN_DOM0 */
+
 #endif /* __LINUX_XHCI_DBGCAP_H */
diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
index ca9385d22f68..afe44169183f 100644
--- a/drivers/usb/host/xhci.c
+++ b/drivers/usb/host/xhci.c
@@ -37,6 +37,57 @@ static unsigned long long quirks;
 module_param(quirks, ullong, S_IRUGO);
 MODULE_PARM_DESC(quirks, "Bit flags for quirks to be enabled as default");

+#ifdef CONFIG_XEN_DOM0
+#include <xen/xen.h>
+
+static void xhci_dbc_external_reset_prep(struct xhci_hcd *xhci)
+{
+	struct dbc_regs __iomem *regs;
+	void __iomem		*base;
+	int			dbc_cap;
+
+	if (!xen_initial_domain())
+		return;
+
+	base = &xhci->cap_regs->hc_capbase;
+	dbc_cap = xhci_find_next_ext_cap(base, 0, XHCI_EXT_CAPS_DEBUG);
+
+	if (!dbc_cap)
+		return;
+
+	xhci->external_dbc = 0;
+	regs = base + dbc_cap;
+
+	if (readl(&regs->control) & DBC_CTRL_DBC_ENABLE) {
+		if (xen_dbgp_reset_prep(xhci_to_hcd(xhci)))
+			xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+					"// Failed to reset external DBC");
+		else {
+			xhci->external_dbc = 1;
+			xhci_dbg_trace(xhci, trace_xhci_dbg_init,
+					"// Completed reset of external DBC");
+		}
+	}
+}
+
+static void xhci_dbc_external_reset_done(struct xhci_hcd *xhci)
+{
+	if (!xen_initial_domain() || !xhci->external_dbc)
+		return;
+
+	if (xen_dbgp_external_startup(xhci_to_hcd(xhci)))
+		xhci->external_dbc = 0;
+}
+#else
+static void xhci_dbc_external_reset_prep(struct xhci_hcd *xhci)
+{
+}
+
+static void xhci_dbc_external_reset_done(struct xhci_hcd *xhci)
+{
+}
+#endif
+
 static bool td_on_ring(struct xhci_td *td, struct xhci_ring *ring)
 {
 	struct xhci_segment *seg = ring->first_seg;
@@ -180,6 +231,8 @@ int xhci_reset(struct xhci_hcd *xhci)
 		return 0;
 	}

+	xhci_dbc_external_reset_prep(xhci);
+
 	xhci_dbg_trace(xhci, trace_xhci_dbg_init, "// Reset the HC");
 	command = readl(&xhci->op_regs->command);
 	command |= CMD_RESET;
@@ -211,6 +264,8 @@ int xhci_reset(struct xhci_hcd *xhci)
 	 */
 	ret = xhci_handshake(&xhci->op_regs->status,
 			STS_CNR, 0, 10 * 1000 * 1000);
+	if (!ret)
+		xhci_dbc_external_reset_done(xhci);

 	xhci->usb2_rhub.bus_state.port_c_suspend = 0;
 	xhci->usb2_rhub.bus_state.suspended_ports = 0;
@@ -991,6 +1046,7 @@ int xhci_suspend(struct xhci_hcd *xhci, bool do_wakeup)
 		return 0;

 	xhci_dbc_suspend(xhci);
+	xhci_dbc_external_reset_prep(xhci);

 	/* Don't poll the roothubs on bus suspend. */
 	xhci_dbg(xhci, "%s: stopping port polling.\n", __func__);
@@ -1225,6 +1281,7 @@ int xhci_resume(struct xhci_hcd *xhci, bool hibernated)
 	spin_unlock_irq(&xhci->lock);

 	xhci_dbc_resume(xhci);
+	xhci_dbc_external_reset_done(xhci);

  done:
 	if (retval == 0) {
diff --git a/drivers/usb/host/xhci.h b/drivers/usb/host/xhci.h
index 2595a8f057c4..61d8efc9eef2 100644
--- a/drivers/usb/host/xhci.h
+++ b/drivers/usb/host/xhci.h
@@ -1920,6 +1920,7 @@ struct xhci_hcd {
 	struct list_head	regset_list;

 	void			*dbc;
+	int			external_dbc;
 	/* platform-specific data -- must come last */
 	unsigned long		priv[] __aligned(sizeof(s64));
 };
--
2.31.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 01:47:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 01:47:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126109.237391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgdyM-0000qC-Lg; Wed, 12 May 2021 01:47:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126109.237391; Wed, 12 May 2021 01:47:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgdyM-0000pz-Fm; Wed, 12 May 2021 01:47:38 +0000
Received: by outflank-mailman (input) for mailman id 126109;
 Wed, 12 May 2021 01:47:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgdyK-0000pp-PZ; Wed, 12 May 2021 01:47:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgdyK-0007E3-HI; Wed, 12 May 2021 01:47:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgdyK-0007OP-3W; Wed, 12 May 2021 01:47:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgdyK-0006Su-2J; Wed, 12 May 2021 01:47:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Lqro9eK85L5AFDBaTrYqyEFkh6w5GeokVOuDd8JITYw=; b=p07gpu4NrFyS3deVKnTU06r3ej
	4cT3hzdjeiwwGAhhOGmEm1DNiDZFua5gxIAwX1w/2/Sy6iTVmaLOVAKFpNVwOhox7FBH7mc5r4aCp
	gQ3fckl424Yu+7PauCl+tIPEtFsk65V6FKx/a4x3SxV9hjx9mCFll/gQQ2wgsCWW7CS8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161906-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161906: regressions - trouble: blocked/broken/fail/pass
X-Osstest-Failures:
    linux-linus:build-arm64:<job status>:broken:regression
    linux-linus:build-arm64-pvops:<job status>:broken:regression
    linux-linus:build-arm64-xsm:<job status>:broken:regression
    linux-linus:build-arm64-pvops:host-install(4):broken:regression
    linux-linus:build-arm64:host-install(4):broken:regression
    linux-linus:build-arm64-xsm:host-install(4):broken:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-linus:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-linus:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1140ab592e2ebf8153d2b322604031a8868ce7a5
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 01:47:36 +0000

flight 161906 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161906/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken
 build-arm64-pvops               <job status>                 broken
 build-arm64-xsm                 <job status>                 broken
 build-arm64-pvops             4 host-install(4)        broken REGR. vs. 152332
 build-arm64                   4 host-install(4)        broken REGR. vs. 152332
 build-arm64-xsm               4 host-install(4)        broken REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-examine      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl           1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)               blocked  n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 build-arm64-libvirt           1 build-check(1)               blocked  n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                1140ab592e2ebf8153d2b322604031a8868ce7a5
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  284 days
Failing since        152366  2020-08-01 20:49:34 Z  283 days  474 attempts
Testing same since   161900  2021-05-11 01:55:00 Z    0 days    2 attempts

------------------------------------------------------------
6042 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              broken  
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  broken  
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          blocked 
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            broken  
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          blocked 
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  blocked 
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  blocked 
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     blocked 
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-pvops broken
broken-job build-arm64-xsm broken
broken-step build-arm64-pvops host-install(4)
broken-step build-arm64 host-install(4)
broken-step build-arm64-xsm host-install(4)

Not pushing.

(No revision log; it would be 1639451 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 12 02:01:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 02:01:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126117.237406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgeBX-0003cD-3Z; Wed, 12 May 2021 02:01:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126117.237406; Wed, 12 May 2021 02:01:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgeBW-0003c6-W8; Wed, 12 May 2021 02:01:14 +0000
Received: by outflank-mailman (input) for mailman id 126117;
 Wed, 12 May 2021 02:01:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgeBW-0003bw-G4; Wed, 12 May 2021 02:01:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgeBW-0007uW-9o; Wed, 12 May 2021 02:01:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgeBV-00005N-V6; Wed, 12 May 2021 02:01:14 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgeBV-0004K1-UZ; Wed, 12 May 2021 02:01:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cAqwp3mPCfgRr9RMJsdbP7o/+5L290Z34a642Dcbv+Q=; b=kOa9dE/DwbEgI1ILQtSajPUl5U
	ZGAiUK0SF+ps2/PR6Gm1eUaqgpApmQU3vP0FBWlEmRIeQnn0M+WxzeVbrRfCcJ7nXZbkiCSbTi05W
	18mtQNeCoYyxKoXPmBhhQckb4jEAHTUCNvDYnFIcwY/HTzzHRJEtlQoNCtCJyHBj/vdU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161908-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161908: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=4e5ecdbac8bdf235b2072baa0c5e170cd9f57463
X-Osstest-Versions-That:
    ovmf=ef3840c1ff320698523dd6b94ba7c86354392784
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 02:01:13 +0000

flight 161908 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161908/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 4e5ecdbac8bdf235b2072baa0c5e170cd9f57463
baseline version:
 ovmf                 ef3840c1ff320698523dd6b94ba7c86354392784

Last test of basis   161899  2021-05-10 23:42:19 Z    1 days
Testing same since   161908  2021-05-11 16:40:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jiewen Yao <Jiewen.yao@intel.com>
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   ef3840c1ff..4e5ecdbac8  4e5ecdbac8bdf235b2072baa0c5e170cd9f57463 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 12 05:41:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 05:41:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126130.237424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lghcb-0001el-3g; Wed, 12 May 2021 05:41:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126130.237424; Wed, 12 May 2021 05:41:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lghcb-0001ee-0d; Wed, 12 May 2021 05:41:25 +0000
Received: by outflank-mailman (input) for mailman id 126130;
 Wed, 12 May 2021 05:41:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F0FV=KH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lghcZ-0001eY-Qb
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 05:41:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f24d065d-903f-44dd-b46c-3cfd25c02633;
 Wed, 12 May 2021 05:41:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DFA36B040;
 Wed, 12 May 2021 05:41:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f24d065d-903f-44dd-b46c-3cfd25c02633
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620798082; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=nunIAK1qZldxW8KQN4CHUeIzykGc9fLufiGoZtzSH5g=;
	b=e9X/zmCCcoqcARCIiWb0QkmDpYeFAJY3rAYrp0MmJX31GiRO2MeHAsFZrzgM9WRqUNtHLp
	Lw4PKiCXUrLznaSbCAm5T1ou7QDjtn+gt/VFxzJ9xNgAnZRt5XGqZ5RUjFNG0VGV4uuwDS
	fCWxUz5HKrtW6dT/qD7OQpesVdrX7cE=
Subject: Re: [PATCH 2/3] xen: Export dbgp functions when CONFIG_XEN_DOM0 is
 enabled
To: Connor Davis <connojdavis@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <cover.1620776161.git.connojdavis@gmail.com>
 <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <0ef85b32-4069-4e94-0a2f-2325cd21510f@suse.com>
Date: Wed, 12 May 2021 07:41:20 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="yjnCyS82arY4JTCKc7LpsFQxUTMmVfZxm"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--yjnCyS82arY4JTCKc7LpsFQxUTMmVfZxm
Content-Type: multipart/mixed; boundary="as2zie4q5kw0uL7UrINfpGoZuvAPrDvfb";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Connor Davis <connojdavis@gmail.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Message-ID: <0ef85b32-4069-4e94-0a2f-2325cd21510f@suse.com>
Subject: Re: [PATCH 2/3] xen: Export dbgp functions when CONFIG_XEN_DOM0 is
 enabled
References: <cover.1620776161.git.connojdavis@gmail.com>
 <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
In-Reply-To: <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>

--as2zie4q5kw0uL7UrINfpGoZuvAPrDvfb
Content-Type: multipart/mixed;
 boundary="------------627AC06D42A69DCAD0D9CAD7"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------627AC06D42A69DCAD0D9CAD7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.05.21 02:18, Connor Davis wrote:
> Export xen_dbgp_reset_prep and xen_dbgp_external_startup
> when CONFIG_XEN_DOM0 is defined. This allows use of these symbols
> even if CONFIG_EARLY_PRINTK_DBGP is defined.
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>

Acked-by: Juergen Gross <jgross@suse.com>


Juergen


--------------627AC06D42A69DCAD0D9CAD7
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBycWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8Of8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDAQIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyThpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCCQoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7DrWf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJCAcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+lotu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1EvmV2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88NEaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLkpEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARAQAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylWsvi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDXzXs
ZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------627AC06D42A69DCAD0D9CAD7--

--as2zie4q5kw0uL7UrINfpGoZuvAPrDvfb--

--yjnCyS82arY4JTCKc7LpsFQxUTMmVfZxm
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCbaoAFAwAAAAAACgkQsN6d1ii/Ey/z
Fgf/fxgMNrRbC1Tf45689nwZZ+GL38mM9J2LFDCQUOVvwiyTXgp07JwRkwIO6CvjeZy7ET5dbqGD
sH7ULMtz5vk2yCc6xIZwAfomyDKEBztfdlBLoVBZRn4vFhy1FOK4jUJfroo8MtTWoTFh0REWHYv6
01cz2imUfD9ux1g64wxbZQLHJMJ8Rav7ozNttA3Pm46jIEawfvB4XR6fIjDiMnQo4jxZneyTxSjK
yCuexvUzs7dMdcHE3cjzeZRkQAHwltPOGuRSO2v3TKlIBsYK3uUJXoPSJTUGFCI3/tYZXLdl4bC4
2ep8bFxMFlNFi5ryByOWJqHmjchWxruUppwm3zN4Mw==
=y7Na
-----END PGP SIGNATURE-----

--yjnCyS82arY4JTCKc7LpsFQxUTMmVfZxm--


From xen-devel-bounces@lists.xenproject.org Wed May 12 06:58:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 06:58:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126136.237440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgipA-00017N-V6; Wed, 12 May 2021 06:58:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126136.237440; Wed, 12 May 2021 06:58:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgipA-00017G-S6; Wed, 12 May 2021 06:58:28 +0000
Received: by outflank-mailman (input) for mailman id 126136;
 Wed, 12 May 2021 06:58:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F0FV=KH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lgipA-00017A-9c
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 06:58:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cb3971da-7174-4eba-97b6-d5ec8996eaad;
 Wed, 12 May 2021 06:58:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 63251AEEF;
 Wed, 12 May 2021 06:58:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cb3971da-7174-4eba-97b6-d5ec8996eaad
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620802706; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Qr1pBb5ME1m/hDGzrHCzb7zbDgOx3pj3UFLX3yboP0M=;
	b=s3KPU2cvc5BHhbsIYMGIKhzA0Tstkm6DNoNNPsJLWrHuNRJi9Zb2YFmMX+pRdPH5mj2JiD
	Wh1l79PgsRO47o9L+1wd+7BH6LIuNGoWv8D1ghzAL9nQcGDdFgQSuXRfvu7JyRJflO5RCm
	L/2127BDOHtlWmc9f4oh9YiOXgzdDI8=
Subject: Re: [PATCH v2 0/6] tools/libs: add missing support of linear
 p2m_list, cleanup
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <20210412152236.1975-1-jgross@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <b79c60e4-7c41-be9a-b0df-e9f9cf71eafa@suse.com>
Date: Wed, 12 May 2021 08:58:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210412152236.1975-1-jgross@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="78G7TuzXokz2mlyJCRm3gsmTvqRpPQr6R"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--78G7TuzXokz2mlyJCRm3gsmTvqRpPQr6R
Content-Type: multipart/mixed; boundary="LO8pSzdI3l6E16X0Dnt1yfck8mjpNMchy";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Message-ID: <b79c60e4-7c41-be9a-b0df-e9f9cf71eafa@suse.com>
Subject: Re: [PATCH v2 0/6] tools/libs: add missing support of linear
 p2m_list, cleanup
References: <20210412152236.1975-1-jgross@suse.com>
In-Reply-To: <20210412152236.1975-1-jgross@suse.com>

--LO8pSzdI3l6E16X0Dnt1yfck8mjpNMchy
Content-Type: multipart/mixed;
 boundary="------------1F206A47E54C17FAC7DB509B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1F206A47E54C17FAC7DB509B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

Ping?

On 12.04.21 17:22, Juergen Gross wrote:
> There are some corners left which don't support the not so very new
> linear p2m list of pv guests, which has been introduced in Linux kernel
> 3.19 and which is mandatory for non-legacy versions of Xen since kernel
> 4.14.
>
> This series adds support for the linear p2m list where it is missing
> (colo support and "xl dump-core").
>
> In theory it should be possible to merge the p2m list mapping code
> from migration handling and core dump handling, but this needs quite
> some cleanup before this is possible.
>
> The first three patches of this series are fixing real problems, so
> I've put them at the start of this series, especially in order to make
> backports easier.
>
> The other three patches are only the first steps of cleanup. The main
> work done here is to concentrate all p2m mapping in libxenguest instead
> of having one implementation in each of libxenguest and libxenctrl.
>
> Merging the two implementations should be rather easy, but this will
> require to touch many lines of code, as the migration handling variant
> seems to be more mature, but it is using the migration stream specific
> structures heavily. So I'd like to have some confirmation that my way
> to clean this up is the right one.
>
> My idea would be to add the data needed for p2m mapping to struct
> domain_info_context and replace the related fields in struct
> xc_sr_context with a struct domain_info_context. Modifying the
> interface of xc_core_arch_map_p2m() to take most current parameters
> via struct domain_info_context would then enable migration coding to
> use xc_core_arch_map_p2m() for mapping the p2m. xc_core_arch_map_p2m()
> should look basically like the current migration p2m mapping code
> afterwards.
>
> Any comments to that plan?
>
> Changes in V2:
> - added missing #include in ocaml stub
>
> Juergen Gross (6):
>    tools/libs/guest: fix max_pfn setting in map_p2m()
>    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m
>      table
>    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
>    tools/libs: move xc_resume.c to libxenguest
>    tools/libs: move xc_core* from libxenctrl to libxenguest
>    tools/libs/guest: make some definitions private to libxenguest
>
>   tools/include/xenctrl.h                       |  63 ---
>   tools/include/xenguest.h                      |  63 +++
>   tools/libs/ctrl/Makefile                      |   4 -
>   tools/libs/ctrl/xc_core_x86.c                 | 223 ----------
>   tools/libs/ctrl/xc_domain.c                   |   2 -
>   tools/libs/ctrl/xc_private.h                  |  43 +-
>   tools/libs/guest/Makefile                     |   4 +
>   .../libs/{ctrl/xc_core.c => guest/xg_core.c}  |   7 +-
>   .../libs/{ctrl/xc_core.h => guest/xg_core.h}  |  15 +-
>   .../xc_core_arm.c => guest/xg_core_arm.c}     |  31 +-
>   .../xc_core_arm.h => guest/xg_core_arm.h}     |   0
>   tools/libs/guest/xg_core_x86.c                | 399 ++++++++++++++++++
>   .../xc_core_x86.h => guest/xg_core_x86.h}     |   0
>   tools/libs/guest/xg_dom_boot.c                |   2 +-
>   tools/libs/guest/xg_domain.c                  |  19 +-
>   tools/libs/guest/xg_offline_page.c            |   2 +-
>   tools/libs/guest/xg_private.h                 |  16 +-
>   .../{ctrl/xc_resume.c => guest/xg_resume.c}   |  69 +--
>   tools/libs/guest/xg_sr_save_x86_pv.c          |   2 +-
>   tools/ocaml/libs/xc/xenctrl_stubs.c           |   1 +
>   20 files changed, 545 insertions(+), 420 deletions(-)
>   delete mode 100644 tools/libs/ctrl/xc_core_x86.c
>   rename tools/libs/{ctrl/xc_core.c => guest/xg_core.c} (99%)
>   rename tools/libs/{ctrl/xc_core.h => guest/xg_core.h} (92%)
>   rename tools/libs/{ctrl/xc_core_arm.c => guest/xg_core_arm.c} (72%)
>   rename tools/libs/{ctrl/xc_core_arm.h => guest/xg_core_arm.h} (100%)
>   create mode 100644 tools/libs/guest/xg_core_x86.c
>   rename tools/libs/{ctrl/xc_core_x86.h => guest/xg_core_x86.h} (100%)
>   rename tools/libs/{ctrl/xc_resume.c => guest/xg_resume.c} (80%)
>


--------------1F206A47E54C17FAC7DB509B
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBycWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8Of8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xqG7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDAQIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyThpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbvoPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCCQoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7DrWf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0LhITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLmXBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJCAcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJnFOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+lotu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrAhsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1EvmV2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88NEaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpWnHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZRwgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNVbVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLkpEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARAQAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEwTbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylWsvi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDXzXs
ZDn8R38=
=2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------1F206A47E54C17FAC7DB509B--

--LO8pSzdI3l6E16X0Dnt1yfck8mjpNMchy--

--78G7TuzXokz2mlyJCRm3gsmTvqRpPQr6R
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCbfJAFAwAAAAAACgkQsN6d1ii/Ey+L
7Af+MngKfuLlpfBSjsHl7qCpAKGLPNn1mJj9OdMRrlzOLghk3LlS70brs36S+3z63g/MRo7grWuq
RCgit9/W20mAqggN+yE8TT2dFCnM6Qx6r1wgClRNG0XHnwy4ycW0zhhKY6LJnoKPbxKpvCEDj79h
eLF+5bGYjIU42iyXDP03w1R/Mh7YAN4uhDxP5mky981Pa2zMoGyO0WHvSwV0YUJfyUFd1B2FuAGS
ci+JcSDB30y0ia16mzYiH8/XN831jmFzOTnYgC1jMAIkJd2jMYsm9T2zUg7IO4A6JSLjsvJHc2Ar
O081OqPddJoUuOLgljn1KEM7vrG1y5phISpFkUgdjw==
=cAlH
-----END PGP SIGNATURE-----

--78G7TuzXokz2mlyJCRm3gsmTvqRpPQr6R--


From xen-devel-bounces@lists.xenproject.org Wed May 12 07:03:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 07:03:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126139.237452 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgitr-0002gs-I7; Wed, 12 May 2021 07:03:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126139.237452; Wed, 12 May 2021 07:03:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgitr-0002gl-F5; Wed, 12 May 2021 07:03:19 +0000
Received: by outflank-mailman (input) for mailman id 126139;
 Wed, 12 May 2021 07:03:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Bb7k=KH=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1lgitq-0002gf-Cm
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 07:03:18 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 80e97148-56f2-4fad-8e2d-4e7539d8a64a;
 Wed, 12 May 2021 07:03:17 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 456C161289;
 Wed, 12 May 2021 07:03:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 80e97148-56f2-4fad-8e2d-4e7539d8a64a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1620802996;
	bh=iUBD/U+nLN+h/hvAp8X9UtIAIcPumBbY0yRDv4Ky7FI=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=FIE1ZlhZWjvF0yx+OJ/kzFm0svM7pv9OhNyTWJ0XmSt3U89fhRj4UcSrJpPQphqfJ
	 KcEDfj5SfoIYnudPMJTlMYo6wyH3xWLxiE9lYv52LFwIfth3aUeg59qheZp5SkrHup
	 +sYDMRC1tPqmLgr7JIR96sjVcgG0fVhLFhK0GaZg=
Date: Wed, 12 May 2021 09:03:14 +0200
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: Connor Davis <connojdavis@gmail.com>
Cc: Mathias Nyman <mathias.nyman@intel.com>, xen-devel@lists.xenproject.org,
	linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/3] usb: xhci: Notify xen when DbC is unsafe to use
Message-ID: <YJt9su1k67KEFh6K@kroah.com>
References: <cover.1620776161.git.connojdavis@gmail.com>
 <2af7e7b8d569e94ab9c48039040ca69a8d52c89d.1620776161.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <2af7e7b8d569e94ab9c48039040ca69a8d52c89d.1620776161.git.connojdavis@gmail.com>

On Tue, May 11, 2021 at 06:18:21PM -0600, Connor Davis wrote:
> When running as a dom0 guest on Xen, check if the USB3 debug
> capability is enabled before xHCI reset, suspend, and resume. If it
> is, call xen_dbgp_reset_prep() to notify Xen that it is unsafe to touch
> MMIO registers until the next xen_dbgp_external_startup().
> 
> This notification allows Xen to avoid undefined behavior resulting
> from MMIO access when the host controller's CNR bit is set or when
> the device transitions to D3hot.
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  drivers/usb/host/xhci-dbgcap.h |  6 ++++
>  drivers/usb/host/xhci.c        | 57 ++++++++++++++++++++++++++++++++++
>  drivers/usb/host/xhci.h        |  1 +
>  3 files changed, 64 insertions(+)
> 
> diff --git a/drivers/usb/host/xhci-dbgcap.h b/drivers/usb/host/xhci-dbgcap.h
> index c70b78d504eb..24784b82a840 100644
> --- a/drivers/usb/host/xhci-dbgcap.h
> +++ b/drivers/usb/host/xhci-dbgcap.h
> @@ -227,4 +227,10 @@ static inline int xhci_dbc_resume(struct xhci_hcd *xhci)
>  	return 0;
>  }
>  #endif /* CONFIG_USB_XHCI_DBGCAP */
> +
> +#ifdef CONFIG_XEN_DOM0
> +int xen_dbgp_reset_prep(struct usb_hcd *hcd);
> +int xen_dbgp_external_startup(struct usb_hcd *hcd);
> +#endif /* CONFIG_XEN_DOM0 */
> +
>  #endif /* __LINUX_XHCI_DBGCAP_H */
> diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
> index ca9385d22f68..afe44169183f 100644
> --- a/drivers/usb/host/xhci.c
> +++ b/drivers/usb/host/xhci.c
> @@ -37,6 +37,57 @@ static unsigned long long quirks;
>  module_param(quirks, ullong, S_IRUGO);
>  MODULE_PARM_DESC(quirks, "Bit flags for quirks to be enabled as default");
> 
> +#ifdef CONFIG_XEN_DOM0
> +#include <xen/xen.h>

<snip>

Can't this #ifdef stuff go into a .h file?

thanks,

greg k-h


From xen-devel-bounces@lists.xenproject.org Wed May 12 07:09:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 07:09:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126144.237464 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgizo-0003OB-75; Wed, 12 May 2021 07:09:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126144.237464; Wed, 12 May 2021 07:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgizo-0003O4-40; Wed, 12 May 2021 07:09:28 +0000
Received: by outflank-mailman (input) for mailman id 126144;
 Wed, 12 May 2021 07:09:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgizm-0003Nu-NY; Wed, 12 May 2021 07:09:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgizm-0004yy-HM; Wed, 12 May 2021 07:09:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgizm-0000Xx-3T; Wed, 12 May 2021 07:09:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgizm-0006Wz-31; Wed, 12 May 2021 07:09:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nrQZ9oxFCvl3CUM/gHlS5atU2vE0g3JuPG0z0UdzrZY=; b=6LLa258BxkfGpPszl/1u7rpi2p
	URB2U9OiVBHlLKqX5d+l4xY6uR0SGcTT5es6PsFdZaft0spJzlNtlyWVMjrRSYa4UwIXy3rfFMfx+
	fhDDfGelBfv6o7/LmTwm34w01S2oax5jO4tRztAxmYgEwARjNpKJV95gvGtghvNQPy0k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161907-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161907: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:guest-start.2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f9a576a818044133f8564e0d243ebd97df0b3280
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 07:09:26 +0000

flight 161907 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161907/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 15 guest-start.2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                f9a576a818044133f8564e0d243ebd97df0b3280
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  264 days
Failing since        152659  2020-08-21 14:07:39 Z  263 days  482 attempts
Testing same since   161907  2021-05-11 16:09:28 Z    0 days    1 attempts

------------------------------------------------------------
491 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 149396 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 12 07:26:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 07:26:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126153.237483 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgjGb-0005zd-Vj; Wed, 12 May 2021 07:26:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126153.237483; Wed, 12 May 2021 07:26:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgjGb-0005zW-Ri; Wed, 12 May 2021 07:26:49 +0000
Received: by outflank-mailman (input) for mailman id 126153;
 Wed, 12 May 2021 07:26:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=6NHd=KH=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lgjGb-0005zQ-CT
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 07:26:49 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 618a88ed-57e8-4ba7-a9af-ca60e4609b20;
 Wed, 12 May 2021 07:26:48 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 90F4A67373; Wed, 12 May 2021 09:26:45 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 618a88ed-57e8-4ba7-a9af-ca60e4609b20
Date: Wed, 12 May 2021 09:26:45 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, hch@lst.de,
	linux-kernel@vger.kernel.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	catalin.marinas@arm.com, will@kernel.org,
	linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 1/2] xen/arm64: do not set SWIOTLB_NO_FORCE when
 swiotlb is required
Message-ID: <20210512072645.GA22396@lst.de>
References: <20210511174142.12742-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210511174142.12742-1-sstabellini@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

> -int xen_swiotlb_detect(void)
> -{
> -	if (!xen_domain())
> -		return 0;
> -	if (xen_feature(XENFEAT_direct_mapped))
> -		return 1;
> -	/* legacy case */
> -	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
> -		return 1;
> -	return 0;
> -}

I think this move should be a separate prep patch.


From xen-devel-bounces@lists.xenproject.org Wed May 12 07:35:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 07:35:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126157.237495 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgjOl-0007Zk-Qx; Wed, 12 May 2021 07:35:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126157.237495; Wed, 12 May 2021 07:35:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgjOl-0007Zd-Mm; Wed, 12 May 2021 07:35:15 +0000
Received: by outflank-mailman (input) for mailman id 126157;
 Wed, 12 May 2021 07:35:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UZ9D=KH=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lgjOj-0007ZX-Vp
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 07:35:14 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.59]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e024aa6-13e9-4917-bcf8-0409cfa3db1d;
 Wed, 12 May 2021 07:35:10 +0000 (UTC)
Received: from DB6PR07CA0001.eurprd07.prod.outlook.com (2603:10a6:6:2d::11) by
 PAXPR08MB6861.eurprd08.prod.outlook.com (2603:10a6:102:138::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 12 May
 2021 07:35:08 +0000
Received: from DB5EUR03FT036.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2d:cafe::1e) by DB6PR07CA0001.outlook.office365.com
 (2603:10a6:6:2d::11) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.11 via Frontend
 Transport; Wed, 12 May 2021 07:35:08 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT036.mail.protection.outlook.com (10.152.20.185) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 12 May 2021 07:35:08 +0000
Received: ("Tessian outbound 6c4b4bc1cefb:v91");
 Wed, 12 May 2021 07:35:08 +0000
Received: from 01f475932d98.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F228B203-1819-4770-AE74-C785A9DD94E0.1; 
 Wed, 12 May 2021 07:34:58 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 01f475932d98.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 12 May 2021 07:34:58 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR08MB3869.eurprd08.prod.outlook.com (2603:10a6:803:bf::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.24; Wed, 12 May
 2021 07:34:55 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a%5]) with mapi id 15.20.4129.026; Wed, 12 May 2021
 07:34:55 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO2P265CA0108.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:c::24) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4087.50 via Frontend Transport; Wed, 12 May 2021 07:34:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e024aa6-13e9-4917-bcf8-0409cfa3db1d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=KsWcDodGaJWAwWUe+9CWIGOcPFpVaK7BKSmRFCGzcSQ=;
 b=b/2+RTvoRI2ky8LOi5ciyXwZqmvOB/3gQlUGzn0W4bD6lJ7C0HO9AQwoIrMsqUAEGy62XZGAcXRklDXOr9jz29UDoMZZu3rkgME6gLSvxyMa4basjVL3QzI4Uq8L8v9R2WTeAiMLjVaQ81H66jQWmn/X7K3RNSOG+RVVPj6Xlx4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 2cee01e5f4644e7a
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH v5 3/3] docs/doxygen: doxygen documentation for
 grant_table.h
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <alpine.DEB.2.21.2105111457480.5018@sstabellini-ThinkPad-T480s>
Date: Wed, 12 May 2021 08:34:48 +0100
Cc: Jan Beulich <jbeulich@suse.com>,
 xen-devel@lists.xenproject.org,
 Bertrand Marquis <bertrand.marquis@arm.com>,
 wei.chen@arm.com,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>,
 Wei Liu <wl@xen.org>
Content-Transfer-Encoding: quoted-printable
Message-Id: <A54E7AE7-46EB-451F-B521-14F327DCF484@arm.com>
References: <20210504133145.767-1-luca.fancellu@arm.com>
 <20210504133145.767-4-luca.fancellu@arm.com>
 <alpine.DEB.2.21.2105041514260.5018@sstabellini-ThinkPad-T480s>
 <9E7D7B58-0ABA-4800-B2D3-9EE3E29CF599@arm.com>
 <8fada713-9ae5-ddd3-585b-0f090748fc49@suse.com>
 <alpine.DEB.2.21.2105111457480.5018@sstabellini-ThinkPad-T480s>
To: Stefano Stabellini <sstabellini@kernel.org>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO2P265CA0108.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:c::24) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f2c33efa-2f56-40ad-e18d-08d9151877b5
X-MS-TrafficTypeDiagnostic: VI1PR08MB3869:|PAXPR08MB6861:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS:
	<PAXPR08MB6861CE0AB677D44E923A2942E4529@PAXPR08MB6861.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR08MB3629.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(396003)(376002)(346002)(39860400002)(366004)(6666004)(38350700002)(16526019)(186003)(38100700002)(36756003)(4744005)(66476007)(6512007)(478600001)(5660300002)(86362001)(2906002)(52116002)(66556008)(316002)(966005)(66946007)(4326008)(54906003)(2616005)(956004)(6486002)(33656002)(26005)(6506007)(8676002)(53546011)(44832011)(8936002)(6916009)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3869
Original-Authentication-Results: kernel.org; dkim=none (message not signed)
 header.d=none;kernel.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c6e20ae7-43fc-48be-6860-08d91518701a
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(376002)(136003)(396003)(346002)(36840700001)(46966006)(2906002)(6862004)(4744005)(186003)(86362001)(4326008)(54906003)(5660300002)(16526019)(44832011)(316002)(36756003)(6666004)(2616005)(956004)(336012)(36860700001)(6512007)(356005)(8936002)(6486002)(478600001)(70586007)(70206006)(82740400003)(47076005)(82310400003)(53546011)(6506007)(26005)(966005)(8676002)(81166007)(33656002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2021 07:35:08.1242
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f2c33efa-2f56-40ad-e18d-08d9151877b5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT036.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6861



> On 11 May 2021, at 22:58, Stefano Stabellini <sstabellini@kernel.org> wrote:
> 
> On Thu, 6 May 2021, Jan Beulich wrote:
>> An alternative to correcting the (as it seems) v2 related issues, it
>> may be worth considering to extract only v1 documentation in this
>> initial phase.
> 
> FWIW I agree with Jan that "grant table v1" documentation only is a good idea.

Ok, I already pushed the v6: https://patchwork.kernel.org/project/xen-devel/cover/20210510084105.17108-1-luca.fancellu@arm.com/



From xen-devel-bounces@lists.xenproject.org Wed May 12 09:49:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 09:49:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126195.237524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lglUC-0005vr-9H; Wed, 12 May 2021 09:49:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126195.237524; Wed, 12 May 2021 09:49:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lglUC-0005vk-6J; Wed, 12 May 2021 09:49:00 +0000
Received: by outflank-mailman (input) for mailman id 126195;
 Wed, 12 May 2021 09:48:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lglUA-0005va-FX; Wed, 12 May 2021 09:48:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lglUA-00084i-5V; Wed, 12 May 2021 09:48:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lglU9-0001BI-T9; Wed, 12 May 2021 09:48:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lglU9-0001GW-Sf; Wed, 12 May 2021 09:48:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ja2GOtKWCe3sTDltDx4Mp+g6rj5ownE4nPzSBjgeizM=; b=1aGCIAGJ4XSYxnHn68By/tfl1D
	fwcvXdYqQaKjGv3j7IaA1VR+ggekLr0xxYb9YSUS2o6PNpKvMVbmNgnGNVZa5trmg7O5oHiUrnlaA
	3ML3+QMQ3vcXMviw0iqHH2xl7R67o3UuBiM/50zoARWI3a2oJ344Pizgk/fpMISYMNGE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161916-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 161916: all pass - PUSHED
X-Osstest-Versions-This:
    xen=d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
X-Osstest-Versions-That:
    xen=a7da84c457b05479ab423a2e589c5f46c7da0ed7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 09:48:57 +0000

flight 161916 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161916/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
baseline version:
 xen                  a7da84c457b05479ab423a2e589c5f46c7da0ed7

Last test of basis   161877  2021-05-09 09:18:27 Z    3 days
Testing same since   161916  2021-05-12 09:19:39 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   a7da84c457..d4fb5f166c  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed May 12 10:10:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 10:10:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126201.237539 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lglpD-0000u5-VO; Wed, 12 May 2021 10:10:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126201.237539; Wed, 12 May 2021 10:10:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lglpD-0000ty-SR; Wed, 12 May 2021 10:10:43 +0000
Received: by outflank-mailman (input) for mailman id 126201;
 Wed, 12 May 2021 10:10:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oIkv=KH=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lglpB-0000ts-RR
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 10:10:42 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ae80655b-2853-4542-925c-33bfeeabe3fa;
 Wed, 12 May 2021 10:10:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ae80655b-2853-4542-925c-33bfeeabe3fa
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620814240;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=49NqqAUb/f9b6xqL4ZGzsx4JsxgsR90+zv1EeNS7iPY=;
  b=BGJ9gSNwPKZYwnL7NxSr0CTl+z7GXnLg+6HjT5uqtkw8vLUkHjLeEX6d
   Jo4BKVsxW//hdVaMSAPFFSZ6SljVRCs3ZaO8TUk/AYWoRzNMbStJIG3sC
   08XwEvdZZ6ZXe74A7w1YXOiwBjaIqUNhN59aDEq5Lm+jSs47KjXGzIhIj
   U=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43726071
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,293,1613451600"; 
   d="scan'208";a="43726071"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cHThLjmNj+xxCTjWhtrtIO7bKKS+QQkfnv4lACqjyIa3632imPkyhEKDSnBEVpl34ACdU3ohqXrIoc2p0t800UFTSdM1ZcfGuhFEUuIDMC1dngq+K6TUMxceODJBsw5Du/VtB1gMwm2RtYQtaVp9x6RFs6r+SbOScpUG2qVX3yY3Yypuz50SL0luMQvAMfEGbXW3wF8q+bTeFRfw0HOhSKVorn7zxZH5wLgLhZSbk+xF1QDj2LMJscGhyfetncuUZa2A4R9R3dXNa2Sl8zskNhLOE/kVQS0vsNYFEFx6PxguBlsHRMzXixnxPx2MgfhaOwl63uPkZfFERgCRlCOXUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=49NqqAUb/f9b6xqL4ZGzsx4JsxgsR90+zv1EeNS7iPY=;
 b=TZvbzof4XqULQvsJQ/APICaHRVQ+9Wl0P+79ZHWaysCb+FaroPpDN7buYltL/Acyz7oj7W4MCSiSgo/B/LPDEjUhLzw4wKFt1bmtigE6sNjj3IZ2Fqbh91eMUnY7wWOeVq7Xpr29zMnwwlm+DXbQDKXP+6SjSgXhhysUTOC0MJC5vKiKbP0OmGnD1mZKejb7l501DjN3UGEc1o7+v0PWTjuyfmMtjOwvTGbsXpgnArVX++G3wwns0aiXv7IiDI1oPxfSzfMKgaKJ54Ma8+l5GWeljpc4hJMiXdSBkNYVSodg3L4f11n8zOhPLmRQfFNwVrzgaTiH2YMXx9J1GxVt+w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=49NqqAUb/f9b6xqL4ZGzsx4JsxgsR90+zv1EeNS7iPY=;
 b=Ei6+nHp61u6fJO7356NTkbGu6PEHIa3lVfxOqOBQ1H5hjhcPPIjVg+Koh4zzPUxSosgB2fC6HRxLe68U+9GocJA4YrLkaMJDb1mk1iNvGkdz3uWVlDaz9HdyIp27esL6E89iSU1XNXDhEzFCbW6WFRQY0dCyOoQcqN06oN1JBF4=
From: Edwin Torok <edvin.torok@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "julien@xen.org" <julien@xen.org>, "jbeulich@suse.com"
	<jbeulich@suse.com>, "jgross@suse.com" <jgross@suse.com>, "wl@xen.org"
	<wl@xen.org>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>, "dave@recoil.org"
	<dave@recoil.org>, George Dunlap <George.Dunlap@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>
Subject: Re: [PATCH v2 00/17] live update and gnttab patches
Thread-Topic: [PATCH v2 00/17] live update and gnttab patches
Thread-Index: AQHXRpBYbSP9+kXyy06RGaCma1VeS6retOQAgADsBAA=
Date: Wed, 12 May 2021 10:10:23 +0000
Message-ID: <7c1a9a8b317fcbc778acaa218ee96e01d15b98d5.camel@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
	 <c744d834-659a-e361-df97-128032402950@citrix.com>
In-Reply-To: <c744d834-659a-e361-df97-128032402950@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.36.4-0ubuntu1 
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 903b8e73-3c64-4d99-87c5-08d9152e27fa
x-ms-traffictypediagnostic: BY5PR03MB5013:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <BY5PR03MB50136802AE119AF812CE98EB9B529@BY5PR03MB5013.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
Content-Type: text/plain; charset="utf-8"
Content-ID: <2CFAABE32F1C91459E0EB6590694864F@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 903b8e73-3c64-4d99-87c5-08d9152e27fa
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 May 2021 10:10:23.1481
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: cVcypFykuW1E2tG997DybKXm06oOtI0VZX3DuR4WNeU0WYRGWMacM39cr+DO0wMzZR1HXBlURoc/C4hEhujXPQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5013
X-OriginatorOrg: citrix.com

On Tue, 2021-05-11 at 21:05 +0100, Andrew Cooper wrote:
> On 11/05/2021 19:05, Edwin Török wrote:
> > These patches have been posted previously.
> > The gnttab patches (tools/ocaml/libs/mmap) were not applied at the
> > time
> > to avoid conflicts with an in-progress XSA.
> > The binary format live-update and fuzzing patches were not applied
> > because it was too close to the next Xen release freeze.
> > 
> > The patches depend on each-other: live-update only works correctly
> > when the gnttab
> > patches are taken too (MFN is not part of the binary live-update
> > stream),
> > so they are included here as a single series.
> > The gnttab patches replaces one use of libxenctrl with stable
> > interfaces, leaving one unstable
> > libxenctrl interface used by oxenstored.
> > 
> > The 'vendor external dependencies' may be optional, it is useful to
> > be part
> > of a patchqueue in a specfile so that you can build everything
> > without external dependencies,
> > but might as well commit it so everyone has it easily available not
> > just XenServer.
> > 
> > Note that the live-update fuzz test doesn't yet pass, it is still
> > able to find bugs.
> > However the reduced version with a fixed seed used as a unit test
> > does pass,
> > so it is useful to have it committed, and further improvements can
> > be made later
> > as more bugs are discovered and fixed.
> > 
> > Edwin Török (17):
> >   docs/designs/xenstore-migration.md: clarify that deletes are
> > recursive
> >   tools/ocaml: add unit test skeleton with Dune build system
> >   tools/ocaml: vendor external dependencies for convenience
> >   tools/ocaml/xenstored: implement the live migration binary format
> >   tools/ocaml/xenstored: add binary dump format support
> >   tools/ocaml/xenstored: add support for binary format
> >   tools/ocaml/xenstored: validate config file before live update
> >   Add structured fuzzing unit test
> >   tools/ocaml: use common macros for manipulating mmap_interface
> >   tools/ocaml/libs/mmap: allocate correct number of bytes
> >   tools/ocaml/libs/mmap: Expose stub_mmap_alloc
> >   tools/ocaml/libs/mmap: mark mmap/munmap as blocking
> >   tools/ocaml/libs/xb: import gnttab stubs from mirage
> >   tools/ocaml: safer Xenmmap interface
> >   tools/ocaml/xenstored: use gnttab instead of xenctrl's
> >     foreign_map_range
> >   tools/ocaml/xenstored: don't store domU's mfn of ring page
> >   tools/ocaml/libs/mmap: Clean up unused read/write
> 
> Gitlab CI reports failures across the board in Debian Stretch 32-bit
> builds.  All logs
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/301146112 but
> the
> tl;dr seems to be:
> 
> File "disk.ml", line 179, characters 26-37:
> Error: Integer literal exceeds the range of representable integers of
> type int

Thanks, this should fix it, I refreshed my git tree (there is also a
fix there for the older version of Make):
https://gitlab.com/xen-project/patchew/xen/-/pipelines/301146112

Not sure whether it is worth continuing to support 32-bit i686 builds,
any modern Intel/AMD CPU would be 64-bit capable, but perhaps 32-bit is
still popular in the ARM world and keeping 32-bit Intel supported is
the easiest way to build-test it?

diff --git a/tools/ocaml/xenstored/disk.ml
b/tools/ocaml/xenstored/disk.ml
index 59794324e1..b7678af87f 100644
--- a/tools/ocaml/xenstored/disk.ml
+++ b/tools/ocaml/xenstored/disk.ml
@@ -176,7 +176,7 @@ let write store =
           output_byte ch i
 
       let w32 ch v =
-           assert (v >= 0 && v <= 0xFFFF_FFFF);
+           assert (v >= 0 && Int64.of_int v <= 0xFFFF_FFFFL);
           output_binary_int ch v
 
        let pos = pos_out
@@ -213,7 +213,7 @@ let write store =
 
        let r32 t =
           (* read unsigned 32-bit int *)
-           let r = input_binary_int t land 0xFFFF_FFFF in
+           let r = Int64.logand (Int64.of_int (input_binary_int t))
0xFFFF_FFFFL |> Int64.to_int in
           assert (r >= 0);
           r
 
@@ -293,7 +293,7 @@ module LiveRecord = struct
       write_record t Type.global_data 8 @@ fun b ->
       O.w32 b (FD.to_int rw_sock);
                 (* TODO: this needs a unit test/live update test too!
*)
-       O.w32 b 0xFFFF_FFFF
+       O.w32 b 0x3FFF_FFFF
 
   let read_global_data t ~len f =
       read_expect t "global_data" 8 len;
> 
> ~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 12 10:39:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 10:39:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126208.237552 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgmGr-0003pT-DE; Wed, 12 May 2021 10:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126208.237552; Wed, 12 May 2021 10:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgmGr-0003pM-8x; Wed, 12 May 2021 10:39:17 +0000
Received: by outflank-mailman (input) for mailman id 126208;
 Wed, 12 May 2021 10:39:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgmGq-0003pC-9e; Wed, 12 May 2021 10:39:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgmGp-0000dD-Uz; Wed, 12 May 2021 10:39:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgmGp-00045W-If; Wed, 12 May 2021 10:39:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgmGp-00084C-IA; Wed, 12 May 2021 10:39:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=iaw7gYmkVqFv+w8EFMxp4wBms0bQRKuM0nCZenn96MY=; b=FxWLBImp+5e+hnFBmJwmtXU7Nt
	5NZYuQMcarjvubEZUSviCCNij1rHWrCgZLGRNfhcXvTcx5pKsjfM/V1EV4qBZR9UmvmANzNiXWsKn
	Q0nlIDvKy8clBu0O087uOVUANt3PvhRqio7npqe5sIu6SRj5gSy+8t4KBsbyhM42KOBU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161909-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161909: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:build-arm64-pvops:<job status>:broken:regression
    xen-unstable:build-arm64:<job status>:broken:regression
    xen-unstable:build-arm64-xsm:<job status>:broken:regression
    xen-unstable:build-arm64-pvops:host-install(4):broken:regression
    xen-unstable:build-arm64:host-install(4):broken:regression
    xen-unstable:build-arm64-xsm:host-install(4):broken:regression
    xen-unstable:test-arm64-arm64-examine:reboot:fail:regression
    xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    xen-unstable:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    xen-unstable:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    xen-unstable:test-arm64-arm64-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-xsa-286:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    xen-unstable:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    xen-unstable:build-arm64-libvirt:build-check(1):blocked:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
X-Osstest-Versions-That:
    xen=982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 10:39:15 +0000

flight 161909 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161909/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-pvops               <job status>                 broken  in 161904
 build-arm64                     <job status>                 broken  in 161904
 build-arm64-xsm                 <job status>                 broken  in 161904
 build-arm64-pvops          4 host-install(4) broken in 161904 REGR. vs. 161898
 build-arm64                4 host-install(4) broken in 161904 REGR. vs. 161898
 build-arm64-xsm            4 host-install(4) broken in 161904 REGR. vs. 161898
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 161898
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 161898
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 161898
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 161898
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 161898

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-3       92 xtf/test-pv32pae-xsa-286   fail pass in 161904

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 161904 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 161904 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 161904 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 161904 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 161904 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 161904 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 161904 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 161904 n/a
 build-arm64-libvirt           1 build-check(1)           blocked in 161904 n/a
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161898
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161898
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161898
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161898
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161898
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161898
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 161898
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161898
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161898
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161898
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161898
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161898
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
baseline version:
 xen                  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06

Last test of basis   161898  2021-05-10 19:06:50 Z    1 days
Testing same since   161904  2021-05-11 10:00:22 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64-pvops broken
broken-job build-arm64 broken
broken-job build-arm64-xsm broken

Not pushing.

------------------------------------------------------------
commit d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
Author: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date:   Fri May 7 01:39:47 2021 +0000

    optee: enable OPTEE_SMC_SEC_CAP_MEMREF_NULL capability
    
    The OP-TEE mediator already has support for NULL memory references. It
    was added in patch 0dbed3ad336 ("optee: allow plain TMEM buffers with
    NULL address"). However, it does not propagate the
    OPTEE_SMC_SEC_CAP_MEMREF_NULL capability flag to a guest, so a
    well-behaved guest can't use this feature.
    
    Note: the Linux OP-TEE driver honors this capability flag when
    handling buffers from userspace clients, but ignores it when working
    with internal calls. For instance, the __optee_enumerate_devices()
    function uses a NULL argument to get a buffer size hint from OP-TEE.
    This was the reason why "optee: allow plain TMEM buffers with NULL
    address" was introduced in the first place.
    
    This patch adds the mentioned capability to the list of known
    capabilities. From the Linux point of view this means that userspace
    clients can use the feature, which is confirmed by the OP-TEE test
    suite:
    
    * regression_1025 Test memref NULL and/or 0 bytes size
    o regression_1025.1 Invalid NULL buffer memref registration
      regression_1025.1 OK
    o regression_1025.2 Input/Output MEMREF Buffer NULL - Size 0 bytes
      regression_1025.2 OK
    o regression_1025.3 Input MEMREF Buffer NULL - Size non 0 bytes
      regression_1025.3 OK
    o regression_1025.4 Input MEMREF Buffer NULL over PTA invocation
      regression_1025.4 OK
      regression_1025 OK
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 30f34457b20c78b2862b2b16cb26cb4f10a667ad
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon May 10 18:28:16 2021 +0100

    tools/xenstore: Fix indentation in the header of xenstored_control.c
    
    Commit e867af081d94 "tools/xenstore: save new binary for live update"
    seemed to have spuriously changed the indentation of the first line of
    the copyright header.
    
    The previous indentation is re-instated so all the lines are indented
    the same.
    
    Reported-by: Bjoern Doebel <doebel@amazon.com>
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>

commit 7e71b1e0affa83c0976c832f254276eeb6e6575c
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu May 6 17:12:23 2021 +0100

    tools/xenstored: Prevent a buffer overflow in dump_state_node_perms()
    
    ASAN reported one issue when Live Updating Xenstored:
    
    =================================================================
    ==873==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffc194f53e0 at pc 0x555c6b323292 bp 0x7ffc194f5340 sp 0x7ffc194f5338
    WRITE of size 1 at 0x7ffc194f53e0 thread T0
        #0 0x555c6b323291 in dump_state_node_perms xen/tools/xenstore/xenstored_core.c:2468
        #1 0x555c6b32746e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1257
        #2 0x555c6b32a702 in dump_state_special_nodes xen/tools/xenstore/xenstored_domain.c:1273
        #3 0x555c6b32ddb3 in lu_dump_state xen/tools/xenstore/xenstored_control.c:521
        #4 0x555c6b32e380 in do_lu_start xen/tools/xenstore/xenstored_control.c:660
        #5 0x555c6b31b461 in call_delayed xen/tools/xenstore/xenstored_core.c:278
        #6 0x555c6b32275e in main xen/tools/xenstore/xenstored_core.c:2357
        #7 0x7f95eecf3d09 in __libc_start_main ../csu/libc-start.c:308
        #8 0x555c6b3197e9 in _start (/usr/local/sbin/xenstored+0xc7e9)
    
    Address 0x7ffc194f53e0 is located in stack of thread T0 at offset 80 in frame
        #0 0x555c6b32713e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1232
    
      This frame has 2 object(s):
        [32, 40) 'head' (line 1233)
        [64, 80) 'sn' (line 1234) <== Memory access at offset 80 overflows this variable
    
    This is happening because the callers are passing a pointer to a
    variable allocated on the stack. However, the perms field is a dynamic
    array, so Xenstored will end up reading outside of the variable.
    
    Rework the code so the permissions are written one by one to the fd.
    
    Fixes: ed6eebf17d2c ("tools/xenstore: dump the xenstore state for live update")
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

commit 3f568354a95ee2f0c9c553efb94c734fa6848af0
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:07 2021 +0200

    arm/time,vtimer: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
    We should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t.
    Even though many AArch64 registers have the upper 32 bits reserved,
    it does not mean that they can't be widened in the future.
    
    Modify the type of the vtimer structure's member ctl to register_t.
    
    Add a macro CNTFRQ_MASK containing the mask for the timer clock
    frequency field of the CNTFRQ_EL0 register.
    
    Modify the CNTx_CTL_* macros to return unsigned long instead of
    unsigned int, as ctl is now of type register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 86faae561cd8eee819e0f42ba7a18dd180aa49d1
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:06 2021 +0200

    arm/page: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
    We should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t.
    Even though many AArch64 registers have the upper 32 bits reserved,
    it does not mean that they can't be widened in the future.
    
    Modify accesses to CTR_EL0 to use READ/WRITE_SYSREG.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 25e5d0c412e0d7420f2aa7fdd71cc39d8ed6c528
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:05 2021 +0200

    xen/arm: Always access SCTLR_EL2 using READ/WRITE_SYSREG()
    
    The Armv8 specification describes the system register as a 64-bit
    value on AArch64 and a 32-bit value on AArch32 (same as Armv7).
    
    Unfortunately, Xen is accessing the system register using
    READ/WRITE_SYSREG32(), which means the top 32 bits are clobbered.
    
    This is only a latent bug so far because Xen does not yet use the top
    32 bits.
    
    There is also no change in behavior because arch/arm/arm64/head.S will
    initialize SCTLR_EL2 to a sane value with the top 32 bits zeroed.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 8eb7cc0465fa228064e807aad51eb7428d6d3199
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:04 2021 +0200

    arm/p2m: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
    We should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t.
    Even though many AArch64 registers have the upper 32 bits reserved,
    it does not mean that they can't be widened in the future.
    
    Modify the type of vtcr to register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 78e67c99eb3f90c22c8c6ee282ec3a43d2ddccb5
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:03 2021 +0200

    arm/gic: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
    We should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t.
    Even though many AArch64 registers have the upper 32 bits reserved,
    it does not mean that they can't be widened in the future.
    
    Modify the types of the following members of struct gic_v3 to register_t:
    -vmcr
    -sre_el1
    -apr0
    -apr1
    
    Add a new macro GICC_IAR_INTID_MASK containing the mask for the INTID
    field of the ICC_IAR0/1_EL1 register, as only the first 23 bits of IAR
    contain the interrupt number. The rest are RES0. Therefore, take the
    opportunity to mask out bits [23:31], as they should not be used for
    an IRQ number (we don't know how the top bits will be used).
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit d55afb1acaffc6047af3cabc3ef4442f313bee2c
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:02 2021 +0200

    arm/gic: Remove member hcr of structure gic_v3
    
    ... as it is never used even in the patch introducing it.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit b80470c84553808fef3a6803000ceee8a100e63c
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:01 2021 +0200

    arm: Modify type of actlr to register_t
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
    We should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t.
    Even though many AArch64 registers have the upper 32 bits reserved,
    it does not mean that they can't be widened in the future.
    
    The ACTLR_EL1 system register bits are implementation defined, which
    means this is possibly a latent bug on current HW, as the CPU
    implementer may already have decided to use the top 32 bits.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 3fd8336bc599788e5a52a7e63e833b6f03d79fd5
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:00 2021 +0200

    arm/domain: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
    We should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t.
    Even though many AArch64 registers have the upper 32 bits reserved,
    it does not mean that they can't be widened in the future.
    
    Modify the type of the register cntkctl to register_t.
    
    The ThumbEE registers are only usable by a 32-bit domain, and
    therefore we can just store the bottom 32 bits (IOW there is no type
    change). In fact, this could technically be restricted to Armv7 HW
    (the support was dropped retrospectively in Armv8), but leave it
    as-is for now.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 8990f0eaca139364091109389416455f4f78cd65
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:42:59 2021 +0200

    arm64/vfp: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit or
    64-bit. MSR/MRS expect 64-bit values, thus we should get rid of the
    READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG.
    We should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t.
    Even though many AArch64 registers have the upper 32 bits reserved,
    it does not mean that they can't be widened in the future.
    
    Modify the types of FPCR, FPSR and FPEXC32_EL2 to register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 12 10:52:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 10:52:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126216.237567 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgmTc-0006DH-Or; Wed, 12 May 2021 10:52:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126216.237567; Wed, 12 May 2021 10:52:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgmTc-0006DA-La; Wed, 12 May 2021 10:52:28 +0000
Received: by outflank-mailman (input) for mailman id 126216;
 Wed, 12 May 2021 10:52:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgmTb-0006D0-N7; Wed, 12 May 2021 10:52:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgmTb-0000rT-I9; Wed, 12 May 2021 10:52:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgmTb-0004gp-7Z; Wed, 12 May 2021 10:52:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgmTb-0008DR-78; Wed, 12 May 2021 10:52:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oRE3PKa3sZaaRX5hxdu05vPotpCv7+CU4+GWtFwg0Xw=; b=1zz/i9HyRvNpTowXC2PSy4VwXg
	eOvhZ1T7KKt3v6u1zGdL5Y2Kw+/YUhOgjyVKjsxO3vx/DBqmHiKtLL6dYYqtj6iS3OkDLAUdcZ3iM
	DTyn2Byp+iEJ7uu+OgQVSxE4ktbrpFyWz3988wS0FnFXX17gaIzHxbleo3X/oAevthek=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161913-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161913: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=3976dc598ac8fa0689ab6bfc9e2de9b46d480055
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 10:52:27 +0000

flight 161913 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161913/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              3976dc598ac8fa0689ab6bfc9e2de9b46d480055
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  306 days
Failing since        151818  2020-07-11 04:18:52 Z  305 days  298 attempts
Testing same since   161913  2021-05-12 04:19:56 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57173 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 12 12:28:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 12:28:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126229.237581 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgnxw-00085R-8h; Wed, 12 May 2021 12:27:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126229.237581; Wed, 12 May 2021 12:27:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgnxw-00085K-5c; Wed, 12 May 2021 12:27:52 +0000
Received: by outflank-mailman (input) for mailman id 126229;
 Wed, 12 May 2021 12:27:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgnxu-00085A-MX; Wed, 12 May 2021 12:27:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgnxu-0002P2-En; Wed, 12 May 2021 12:27:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgnxu-0008PB-2w; Wed, 12 May 2021 12:27:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgnxu-0007B5-2Q; Wed, 12 May 2021 12:27:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VA+++ccWV7VELsptN84IdBsKvV4RzErweBu35vVTPM4=; b=LAnXQbyWVGcdu9TJKTTQWEGtdm
	8d73sKfL9OU1ZWBtBZyJMbHnWC0ue/WLGmp8FP2vL8uRg9+RC1oAjFZrhYofWXDWffTHKx5g+xn4v
	QeocwKjHXeBJ7bVPa3+IVG0n1RuxX2726TTHFPvK6SW+VUkS2KCTAK/6E/phR3MM5hWY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161910-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 161910: FAIL
X-Osstest-Failures:
    linux-5.4:build-arm64:<job status>:broken:regression
    linux-5.4:build-arm64-xsm:<job status>:broken:regression
    linux-5.4:build-arm64-pvops:<job status>:broken:regression
    linux-5.4:build-arm64:host-install(4):broken:regression
    linux-5.4:build-arm64-xsm:host-install(4):broken:regression
    linux-5.4:build-arm64-pvops:host-install(4):broken:regression
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-5.4:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-localmigrate/x10:fail:heisenbug
    linux-5.4:build-arm64-libvirt:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-examine:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:build-check(1):blocked:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=16022114de9869743d6304290815cdb8a8c7deaa
X-Osstest-Versions-That:
    linux=b5dbcd05792a4bad2c9bb3c4658c854e72c444b7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 12:27:50 +0000

flight 161910 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161910/

Failures and problems with tests :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64                     <job status>                 broken  in 161905
 build-arm64-xsm                 <job status>                 broken  in 161905
 build-arm64-pvops               <job status>                 broken  in 161905
 build-arm64                4 host-install(4) broken in 161905 REGR. vs. 161832
 build-arm64-xsm            4 host-install(4) broken in 161905 REGR. vs. 161832
 build-arm64-pvops          4 host-install(4) broken in 161905 REGR. vs. 161832

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 161905
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 18 guest-localmigrate/x10 fail pass in 161905

Tests which did not succeed, but are not blocking:
 build-arm64-libvirt           1 build-check(1)           blocked in 161905 n/a
 test-arm64-arm64-xl-xsm       1 build-check(1)           blocked in 161905 n/a
 test-arm64-arm64-xl-credit1   1 build-check(1)           blocked in 161905 n/a
 test-arm64-arm64-xl-thunderx  1 build-check(1)           blocked in 161905 n/a
 test-arm64-arm64-xl-credit2   1 build-check(1)           blocked in 161905 n/a
 test-arm64-arm64-examine      1 build-check(1)           blocked in 161905 n/a
 test-arm64-arm64-xl           1 build-check(1)           blocked in 161905 n/a
 test-arm64-arm64-xl-seattle   1 build-check(1)           blocked in 161905 n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)           blocked in 161905 n/a
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161832
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161832
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161832
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161832
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161832
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161832
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161832
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161832
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161832
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161832
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161832
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                16022114de9869743d6304290815cdb8a8c7deaa
baseline version:
 linux                b5dbcd05792a4bad2c9bb3c4658c854e72c444b7

Last test of basis   161832  2021-05-07 09:10:55 Z    5 days
Testing same since   161905  2021-05-11 12:11:22 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
  Anirudh Rayabharam <mail@anirudhrb.com>
  Anson Jacob <Anson.Jacob@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Aric Cyr <aric.cyr@amd.com>
  Arnd Bergmann <arnd@arndb.de>
  Artur Petrosyan <Arthur.Petrosyan@synopsys.com>
  Arun Easi <aeasi@marvell.com>
  Avri Altman <avri.altman@wdc.com>
  Bart Van Assche <bvanassche@acm.org>
  Benjamin Block <bblock@linux.ibm.com>
  Bill Wendling <morbo@google.com>
  Bindu Ramamurthy <bindu.r@amd.com>
  Bixuan Cui <cuibixuan@huawei.com>
  Borislav Petkov <bp@suse.de>
  Brendan Peter <bpeter@lytx.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Chanwoo Choi <cw00.choi@samsung.com>
  Chao Yu <yuchao0@huawei.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Jun <chenjun102@huawei.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Colin Ian King <colin.king@canonical.com>
  Daniel Niv <danielniv3@gmail.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Wheeler <daniel.wheeler@amd.com>
  dann frazier <dann.frazier@canonical.com>
  David Bauer <mail@david-bauer.net>
  David E. Box <david.e.box@linux.intel.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Davide Caratti <dcaratti@redhat.com>
  Dean Anderson <dean@sensoray.com>
  Dick Kennedy <dick.kennedy@broadcom.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dmitry Vyukov <dvyukov@google.com>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Don Brace <don.brace@microchip.com>
  dongjian <dongjian@yulong.com>
  DooHyun Hwang <dh0421.hwang@samsung.com>
  Eckhart Mohr <e.mohr@tuxedocomputers.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Biggers <ebiggers@google.com>
  Eryk Brol <eryk.brol@amd.com>
  Ewan D. Milne <emilne@redhat.com>
  Felipe Balbi <balbi@kernel.org>
  Fengnan Chang <changfengnan@vivo.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Gao Xiang <hsiangkao@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Guchun Chen <guchun.chen@amd.com>
  Guenter Roeck <linux@roeck-us.net>
  Guochun Mao <guochun.mao@mediatek.com>
  Hanjun Guo <guohanjun@huawei.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Hansem Ro <hansemro@outlook.com>
  Harald Freudenberger <freude@linux.ibm.com>
  He Ying <heying24@huawei.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heinz Mauelshagen <heinzm@redhat.com>
  Hemant Kumar <hemantk@codeaurora.org>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hillf Danton <hdanton@sina.com>
  Hui Tang <tanghui20@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  James Morris <jamorris@linux.microsoft.com>
  James Smart <jsmart2021@gmail.com>
  Jan Kara <jack@suse.cz>
  Jared Baldridge <jrb@expunge.us>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason Self <jason@bluehome.net>
  Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jerome Forissier <jerome@forissier.org>
  Jessica Yu <jeyu@kernel.org>
  Joe Thornber <ejt@redhat.com>
  John Millikin <john@john-millikin.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Kim <jonathan.kim@amd.com>
  Josef Bacik <josef@toxicpanda.com>
  Julian Braha <julianbraha@gmail.com>
  Justin Tee <justin.tee@broadcom.com>
  Kai Stuhlemmer (ebee Engineering) <kai.stuhlemmer@ebee.de>
  Kalle Valo <kvalo@codeaurora.org>
  karthik alapati <mail@karthek.com>
  Kevin Barnett <kevin.barnett@microchip.com>
  Konstantin Kharlamov <hi-angel@yandex.ru>
  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Lee Jones <lee.jones@linaro.org>
  Lingutla Chandrasekhar <clingutla@codeaurora.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  lizhe <lizhe67@huawei.com>
  Luis Henriques <lhenriques@suse.de>
  Luke D Jones <luke@ljones.dev>
  Luo Jiaxing <luojiaxing@huawei.com>
  Lv Yunlong <lyl2019@mail.ustc.edu.cn>
  Lyude Paul <lyude@redhat.com>
  Mahesh Salgaonkar <mahesh@linux.ibm.com>
  Marc Zyngier <maz@kernel.org>
  Marek Behún <kabel@kernel.org>
  Marek Vasut <marex@denx.de>
  Marijn Suijten <marijn.suijten@somainline.org>
  Mark Brown <broonie@kernel.org>
  Mark Langsdorf <mlangsdo@redhat.com>
  Mark Rutland <mark.rutland@arm.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthias Brugger <matthias.bgg@gmail.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Maximilian Luz <luzmaximilian@gmail.com>
  Melissa Wen <melissa.srw@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Mike Snitzer <snitzer@redhat.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Ming-Hung Tsai <mtsai@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Muhammad Usama Anjum <musamaanjum@gmail.com>
  Murthy Bhat <Murthy.Bhat@microchip.com>
  Nathan Chancellor <nathan@kernel.org>
  Nick Desaulniers <ndesaulniers@google.com>
  Nilesh Javali <njavali@marvell.com>
  Paul Aurich <paul@darkrain42.org>
  Paul Clements <paul.clements@us.sios.com>
  Pavel Machek <pavel@denx.de>
  Pavel Machek <pavel@ucw.cz>
  Pavel Skripkin <paskripkin@gmail.com>
  Pawel Laszczak <pawell@cadence.com>
  Peilin Ye <yepeilin.cs@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phil Calvin <phil@philcalvin.com>
  Phillip Potter <phil@philpotter.co.uk>
  Pradeep P V K <pragalla@codeaurora.org>
  Qu Huang <jinsdb@126.com>
  Quinn Tran <qutran@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ricardo Ribalda <ribalda@chromium.org>
  Richard Weinberger <richard@nod.at>
  Rob Clark <robdclark@chromium.org>
  Robin Murphy <robin.murphy@arm.com>
  Ruslan Bilovol <ruslan.bilovol@gmail.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sami Loone <sami@loone.fi>
  Sasha Levin <sashal@kernel.org>
  Saurav Kashyap <skashyap@marvell.com>
  Sean Christopherson <seanjc@google.com>
  Sean Young <sean@mess.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sedat Dilek <sedat.dilek@gmail.com>
  Seunghui Lee <sh043.lee@samsung.com>
  shaoyunl <shaoyun.liu@amd.com>
  Shixin Liu <liushixin2@huawei.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Solomon Chiu <solomon.chiu@amd.com>
  Song Liu <song@kernel.org>
  Sreekanth Reddy <sreekanth.reddy@broadcom.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tian Tao <tiantao6@hisilicon.com>
  Timo Gurr <timo.gurr@gmail.com>
  Todd Brandt <todd.e.brandt@linux.intel.com>
  Tony Ambardar <Tony.Ambardar@gmail.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Tudor Ambarus <tudor.ambarus@microchip.com>
  Tyler Hicks <code@tyhicks.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Valentin Schneider <valentin.schneider@arm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vinod Koul <vkoul@kernel.org>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Wang Li <wangli74@huawei.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Werner Sembach <wse@tuxedocomputers.com>
  Wesley Cheng <wcheng@codeaurora.org>
  Will Deacon <will@kernel.org>
  Xingui Yang <yangxingui@huawei.com>
  Yang Yang <yang.yang29@zte.com.cn>
  Yang Yingliang <yangyingliang@huawei.com>
  Zhang Yi <yi.zhang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job build-arm64 broken
broken-job build-arm64-xsm broken
broken-job build-arm64-pvops broken

Not pushing.

(No revision log; it would be 5433 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 12 12:51:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 12:51:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126235.237597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgoKq-00031o-84; Wed, 12 May 2021 12:51:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126235.237597; Wed, 12 May 2021 12:51:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgoKq-00031h-4n; Wed, 12 May 2021 12:51:32 +0000
Received: by outflank-mailman (input) for mailman id 126235;
 Wed, 12 May 2021 12:51:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XikZ=KH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lgoKn-00031b-UG
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 12:51:30 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 75f38085-516e-462b-ab77-33d7bee06155;
 Wed, 12 May 2021 12:51:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75f38085-516e-462b-ab77-33d7bee06155
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620823889;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=GmIYeagrbCa9LzpYAjfr5Enj0SQ41za5jf/Y4KY2MjI=;
  b=Llr6XrrH/qt16s2QOKnEF4MbPFo58C1F7I5SWXU9gNF7X0DxN1Pn6v58
   Vd30m8vk7OYfIlvXLlnakZ2AtoRl7OROvqJ6quGrmZqeJl3+qzfsinf7B
   YYbGWu7GBdMormAAU/CoMivIZRbSpbXZmdzS+mvLcX3O5Bxqn1t3qFDkd
   I=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44014494
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,293,1613451600"; 
   d="scan'208";a="44014494"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EH89SEomS4TbD6R0oQ1+ZKeTWOhQzUs3FxijGObGiyWorCHGq+NOlyas93Nt+WUTEsg/3u76AI68noQvdsKUvEv6mxrbHm3Ii90e+0k1+OHXSemMlIGqBZUnupMprr9VVnD0iiAA0ItC4cc9/SccHEUTkt9k/ojQ5t/2vehev0E+epH023QhHfWEdDziZHDq0z2C7SnGKKb3Y5KDmuTlNdGGsniEMtl+17YCzx20JzeCYhwShQx235Gle2LQC2MRscDkE/tpnS56jD+q8fuDnnO66Jd+XMfHMPspDdoth94Qbxk18nF3sO4O/mxukcQGpF9SL/ulvsPti8xJ9cZYsQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U9XRHFJhm1Osb5CRcWolVihP5HVv2ZIiPvyfR+Wyu9M=;
 b=hVRMgEX8fGr0IE56E+/wirb/VjGeVqbCC6zUht+j10MHNyqPmFyY0KvwwNACghHzhH1Sg6hDmu2uHV1D/3ty5Cllz/QlfX5C70ODuO35bEUVi9si8yxvEX+EfpxKTVwBBS275JyeKgMfI6mrYEsZ3a+7kUw7ba27Ox4wOryRNy4Lp1AqCHSL2ON+bizAJjm8u1TwD4mVYgrplVClMBOK1GPFqZMLHTX3AVWh+UVY3qpcbIjjYSve+jIEOUv9YuurcedlGTFJ2+kX2bOer6nn2eCy3rC6vJ38we39R7tHpZcaGh0XHbCnxRqjwxrSWifzdrh1QfyxIVxVjQfvipGe1Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=U9XRHFJhm1Osb5CRcWolVihP5HVv2ZIiPvyfR+Wyu9M=;
 b=SLQdOEx65rw+JKW0DLV42B2XdojSJatiM7kp7dZeFzZPz+WRgjOfzX/HXzner0ROtipFOIYCVd2SelRx5kRhtVO4dNvaiyDE2XPoi8YV1Oz8nqTYvpQf0+eDHbgncacp9YQyGJYEbJYWCoWYqHnNlG9lm0DRGL2FEaFsOqQdJao=
To: Edwin Torok <edvin.torok@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: "julien@xen.org" <julien@xen.org>, "jbeulich@suse.com"
	<jbeulich@suse.com>, "jgross@suse.com" <jgross@suse.com>, "wl@xen.org"
	<wl@xen.org>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>, "dave@recoil.org"
	<dave@recoil.org>, George Dunlap <George.Dunlap@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
 <c744d834-659a-e361-df97-128032402950@citrix.com>
 <7c1a9a8b317fcbc778acaa218ee96e01d15b98d5.camel@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 00/17] live update and gnttab patches
Message-ID: <bbd8ccf8-6bb4-7cc0-515d-1f14cd4404b7@citrix.com>
Date: Wed, 12 May 2021 13:51:18 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <7c1a9a8b317fcbc778acaa218ee96e01d15b98d5.camel@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0240.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a7::11) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 92d6c1bd-4791-4320-288d-08d91544a6bc
X-MS-TrafficTypeDiagnostic: BYAPR03MB4872:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB487294BE9765F73A9E093A4FBA529@BYAPR03MB4872.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 92d6c1bd-4791-4320-288d-08d91544a6bc
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2021 12:51:25.1023
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: +7xCzu7POMF222C0JuhvvjbFA4oXMZ4PhkexWexG/JbTIYD+J/x925HcMIpSH0tLSrsM31MkVqmHbTUijvB1ew8JjnGr15l+UsoZr36jmpI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4872
X-OriginatorOrg: citrix.com

On 12/05/2021 11:10, Edwin Torok wrote:
> On Tue, 2021-05-11 at 21:05 +0100, Andrew Cooper wrote:
>> On 11/05/2021 19:05, Edwin Török wrote:
>>> These patches have been posted previously.
>>> The gnttab patches (tools/ocaml/libs/mmap) were not applied at the
>>> time
>>> to avoid conflicts with an in-progress XSA.
>>> The binary format live-update and fuzzing patches were not applied
>>> because it was too close to the next Xen release freeze.
>>>
>>> The patches depend on each other: live-update only works correctly
>>> when the gnttab patches are taken too (the MFN is not part of the
>>> binary live-update stream), so they are included here as a single
>>> series.
>>> The gnttab patches replace one use of libxenctrl with stable
>>> interfaces, leaving one unstable libxenctrl interface used by
>>> oxenstored.
>>>
>>> The 'vendor external dependencies' patch may be optional: it is
>>> useful as part of a patchqueue in a specfile so that you can build
>>> everything without external dependencies, but it might as well be
>>> committed so that everyone has it easily available, not just
>>> XenServer.
>>>
>>> Note that the live-update fuzz test doesn't yet pass; it is still
>>> able to find bugs.
>>> However, the reduced version with a fixed seed, used as a unit test,
>>> does pass, so it is useful to have it committed; further improvements
>>> can be made later as more bugs are discovered and fixed.
>>>
>>> Edwin Török (17):
>>>   docs/designs/xenstore-migration.md: clarify that deletes are
>>> recursive
>>>   tools/ocaml: add unit test skeleton with Dune build system
>>>   tools/ocaml: vendor external dependencies for convenience
>>>   tools/ocaml/xenstored: implement the live migration binary format
>>>   tools/ocaml/xenstored: add binary dump format support
>>>   tools/ocaml/xenstored: add support for binary format
>>>   tools/ocaml/xenstored: validate config file before live update
>>>   Add structured fuzzing unit test
>>>   tools/ocaml: use common macros for manipulating mmap_interface
>>>   tools/ocaml/libs/mmap: allocate correct number of bytes
>>>   tools/ocaml/libs/mmap: Expose stub_mmap_alloc
>>>   tools/ocaml/libs/mmap: mark mmap/munmap as blocking
>>>   tools/ocaml/libs/xb: import gnttab stubs from mirage
>>>   tools/ocaml: safer Xenmmap interface
>>>   tools/ocaml/xenstored: use gnttab instead of xenctrl's
>>>     foreign_map_range
>>>   tools/ocaml/xenstored: don't store domU's mfn of ring page
>>>   tools/ocaml/libs/mmap: Clean up unused read/write
>> Gitlab CI reports failures across the board in Debian Stretch 32-bit
>> builds.  Full logs are at
>> https://gitlab.com/xen-project/patchew/xen/-/pipelines/301146112 but
>> the tl;dr seems to be:
>>
>> File "disk.ml", line 179, characters 26-37:
>> Error: Integer literal exceeds the range of representable integers of
>> type int
> Thanks, this should fix it; I refreshed my git tree (there is also a
> fix there for the older version of Make):
> https://gitlab.com/xen-project/patchew/xen/-/pipelines/301146112
>
> Not sure whether it is worth continuing to support 32-bit i686 builds;
> any modern Intel/AMD CPU is 64-bit capable, but perhaps 32-bit is
> still popular in the ARM world, and keeping 32-bit Intel supported is
> the easiest way to build-test it?

Yes - arm32 is very much a thing, and currently 32-bit userspace on x86
is a supported configuration.

>
> diff --git a/tools/ocaml/xenstored/disk.ml
> b/tools/ocaml/xenstored/disk.ml
> index 59794324e1..b7678af87f 100644
> --- a/tools/ocaml/xenstored/disk.ml
> +++ b/tools/ocaml/xenstored/disk.ml
> @@ -176,7 +176,7 @@ let write store =
>             output_byte ch i
> 
>         let w32 ch v =
> -           assert (v >= 0 && v <= 0xFFFF_FFFF);
> +           assert (v >= 0 && Int64.of_int v <= 0xFFFF_FFFFL);

In the case that v is 32 bits wide, it will underflow and fail the
v >= 0 check, before the upcast to Int64.
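
One way to sidestep that ordering problem (a hypothetical sketch, not
the fix actually committed) is to lift the value to Int64 before either
comparison, so the whole range check runs at 64-bit width; the
`check_u32` name is illustrative only:

```ocaml
(* Sketch only: do the entire range check at Int64 width, so neither
   bound is evaluated as a native int first.  Note this still cannot
   recover a value that has already wrapped negative while being
   assembled from bytes on a platform with 31-bit ints. *)
let check_u32 (v : int) : unit =
  let v64 = Int64.of_int v in
  assert (v64 >= 0L && v64 <= 0xFFFF_FFFFL)
```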

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 12 13:06:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 13:06:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126242.237609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgoZQ-0004yO-MV; Wed, 12 May 2021 13:06:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126242.237609; Wed, 12 May 2021 13:06:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgoZQ-0004yH-JF; Wed, 12 May 2021 13:06:36 +0000
Received: by outflank-mailman (input) for mailman id 126242;
 Wed, 12 May 2021 13:06:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XikZ=KH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lgoZO-0004yB-Lm
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 13:06:34 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27dbfd81-e478-43be-a742-14ac8fb0c16f;
 Wed, 12 May 2021 13:06:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27dbfd81-e478-43be-a742-14ac8fb0c16f
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620824793;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=rLl/dedN9rk965YCYeMWOo1RYr+zFI8RdAReZm+ColU=;
  b=FynzWnCsTWB4F5Mi0eeeOWd8rLwE74ZvkjWqzqHpuEacNyST3wSpQnzF
   OQN9rr2Gth0JQFpVbZw0QsLj3kvrwiPUxObrgMDvxDHRm8rBrv+BNUaUH
   EIoPNy74DsOXR+ajlffxWnugsgzRzQo3rWD2ovzhVCqaj6uVFeMVSjel3
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43424328
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,293,1613451600"; 
   d="scan'208";a="43424328"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fIjrsGlmcr8FeowPECAKadvy2ImM3wgtpWdkEjauM23Jfg1iF/ZK4Dc9zgYOps+Ka0B3c3zp6XJJyrMg41ehZ+kLwxU0ofKAN0Okh4chbGZCK+hbOdqQd/hF2d1yaIRG42PyYInAx/+UkwXTRKacYDT84Mp42kzKJJkDHyoE0n5yYoVXDJ3jR1Qb7ToeLyjICF+iJbyOIoHfuCzte7Te5Bz7Qv9c3IUPsxnUKSPlBFgS5CwQxxegSfohSSlTc24hdeugD4LwdbDcRED5GzrUZsf/259iZCinaOaS33QTo2cfyzUexpX+m2lzKINmUqL1ZSSeOIFzP0ijawHjahqFKg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5QAeRcH2yITfeyIVLS3SF+CzQZKrd9W57cayGjn8adI=;
 b=buWJLVNq9TdtgD4+eOqft8qRY3mOCHwm/ICESp0Rrzf9JtkuuoL6zJ1P+2lD3gx6SnlKBlDZYIlxEive3Jt6XSfUVzxTLQCH25uHrvzr7iG1H6v92Jjnzc0HjWi4p12vDttbSrdGmjFiWPect2Pq245KTH9NAfujJmSooB4nBjSPZF1suUY2gsy617fsnRlEiegg8MEYf9cc396pHnMGbgvd5Nh9x7khPZ0jNCm8Ot0dn5iZ6YW27QNIjCfmc7dLafbgDHFymvDd6zdSEl/KY4LRqrbsyM7W8BfpJ0/vBhsmleXoXPf1accytujA2FakvjeZMO95PsrXtxV6PD3UNw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5QAeRcH2yITfeyIVLS3SF+CzQZKrd9W57cayGjn8adI=;
 b=O7YSl3YxaYghTXzPAeMgzmN/BCWlmAXR6jdtZrGcf/IoYC/+6dR3bJBSNtc6P7FVRVS0urEUXYMx8YzWE0ANmc4anfRzQm0lWuwpefXPChgQhV0Xtr7boFvZJuHCdb2DDpmncnY9DL5NT35Dyl7OUEp5qz+ODyEo+sf1/casK4c=
Subject: Re: [PATCH v2 17/17] tools/ocaml/libs/mmap: Clean up unused
 read/write
To: =?UTF-8?B?RWR3aW4gVMO2csO2aw==?= <edvin.torok@citrix.com>,
	<xen-devel@lists.xenproject.org>
CC: Christian Lindig <christian.lindig@citrix.com>, David Scott
	<dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <cover.1620755942.git.edvin.torok@citrix.com>
 <9bfd0989994953f08142d26cbe5a22651a1faa2a.1620755943.git.edvin.torok@citrix.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <155869cb-a4cc-b3ee-93f7-937045630c7a@citrix.com>
Date: Wed, 12 May 2021 14:06:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <9bfd0989994953f08142d26cbe5a22651a1faa2a.1620755943.git.edvin.torok@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0231.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a6::20) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 35301e0c-1f92-4806-b2a7-08d91546c1a5
X-MS-TrafficTypeDiagnostic: BYAPR03MB3798:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB379893FAC1F2314EC08E75F6BA529@BYAPR03MB3798.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 35301e0c-1f92-4806-b2a7-08d91546c1a5
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2021 13:06:29.1980
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jHqXmmKQ57e3Y1HZWmOOHEJIuiw2GGNHMEpFicjpQFWd9XR0B2WXKTzytfSQWsgjEN59kyE8lDGjOls3XOItk1cIy0LMuge9zLmhY/WwDEQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3798
X-OriginatorOrg: citrix.com

On 11/05/2021 19:05, Edwin Török wrote:
> diff --git a/tools/ocaml/libs/mmap/xenmmap.ml b/tools/ocaml/libs/mmap/xenmmap.ml
> index af258942a0..e17a62e607 100644
> --- a/tools/ocaml/libs/mmap/xenmmap.ml
> +++ b/tools/ocaml/libs/mmap/xenmmap.ml
> @@ -24,11 +24,6 @@ type mmap_map_flag = SHARED | PRIVATE
>  (* mmap: fd -> prot_flag -> map_flag -> length -> offset -> interface *)
>  external mmap': Unix.file_descr -> mmap_prot_flag -> mmap_map_flag
>  		-> int -> int -> mmap_interface = "stub_mmap_init"
> -(* read: interface -> start -> length -> data *)
> -external read: mmap_interface -> int -> int -> string = "stub_mmap_read"
> -(* write: interface -> data -> start -> length -> unit *)
> -external write: mmap_interface -> string -> int -> int -> unit = "stub_mmap_write"
> -(* getpagesize: unit -> size of page *)
>  external unmap': mmap_interface -> unit = "stub_mmap_final"
>  (* getpagesize: unit -> size of page *)
>  let make ?(unmap=unmap') interface = interface, unmap

Are comments supposed to be above or below the declaration?  The
duplicated getpagesize comment and the missing unmap comment look like a
past copy&paste mistake.

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 12 13:55:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 13:55:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126248.237626 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgpKD-0002CK-GZ; Wed, 12 May 2021 13:54:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126248.237626; Wed, 12 May 2021 13:54:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgpKD-0002CD-Dg; Wed, 12 May 2021 13:54:57 +0000
Received: by outflank-mailman (input) for mailman id 126248;
 Wed, 12 May 2021 13:54:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kryu=KH=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lgpKC-0002C5-BR
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 13:54:56 +0000
Received: from mx0b-00069f02.pphosted.com (unknown [205.220.177.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 31b87290-cf37-47d5-91b0-b6e51fcffc17;
 Wed, 12 May 2021 13:54:54 +0000 (UTC)
Received: from pps.filterd (m0246632.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14CDp5YT010088; Wed, 12 May 2021 13:54:52 GMT
Received: from oracle.com (aserp3020.oracle.com [141.146.126.70])
 by mx0b-00069f02.pphosted.com with ESMTP id 38eyurrsas-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 12 May 2021 13:54:52 +0000
Received: from aserp3020.oracle.com (aserp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14CDqSTf160107;
 Wed, 12 May 2021 13:54:51 GMT
Received: from nam12-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam12lp2041.outbound.protection.outlook.com [104.47.66.41])
 by aserp3020.oracle.com with ESMTP id 38djfbpfjd-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 12 May 2021 13:54:51 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 12 May
 2021 13:54:49 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4129.025; Wed, 12 May 2021
 13:54:48 +0000
Received: from [10.74.103.76] (138.3.201.12) by
 SN4PR0501CA0119.namprd05.prod.outlook.com (2603:10b6:803:42::36) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.11 via Frontend
 Transport; Wed, 12 May 2021 13:54:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31b87290-cf37-47d5-91b0-b6e51fcffc17
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=X5Tb+8RVA97vmVIqrMOmb+uN0+k0mdeX4LIkQCcacCA=;
 b=GellMyptH6C6/iqCNq/WutsbekyCURXpEMZzBWMz/yDJ3Vnj0ZmaV8kV9Qicnd/kr/md
 TwI7fb41ozbLWpDfETJH9rAEYV4I124HJtDomo7bQVDmXJLFTw9pQjaxa+Cayy8FpW49
 y5rESAJvfxmQznddFy/FUPUfIk98x5SQs5g8G7leJ2X5b8XmjNQSDRiemrdVypkB4pnU
 6jnI3s63C7xJUO1KQ77isX6ImJHYz8Ucy3nycwxmI0rZ+aTZlt8ZeXW7QGgwPcHYjLSr
 GKfTa6lIWomvQyrV414COsG09X8LkpOAq8qe2cyhMUIbijSRk3kG58J5HwyehN0ppzD+ WQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TFgUKhNgU2tGTcQaok3a2E4P8yAFOXSbtnl6tQ4+EwIXPr/tJ/EJ/8tl+4DOO6PXwXvwmqeJc9xzBnTu64JDjxnVoV43AXHC71Tw2yuQDuuTpBDERg8yEod24u4FlX3ccQHncoTNcgK5HoXu6jEirEEI1QTK0t/lJGdBRAjhpy35UjtMU1/LMv7OW/YWJDZZkNEouc2s9InjddkH+HE5SujxHYmxc3IZuN0nPvOepyaykWGMnXdjIKJpZQjFyYGm06ERkHWdkoq8q+9sFGIXXPVQ56mttwXh7DqBVh7gx+3Pq8AFJ+/81U2YxOoV6K5mPAuSl+wsu0gUGJVvvqPXYg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X5Tb+8RVA97vmVIqrMOmb+uN0+k0mdeX4LIkQCcacCA=;
 b=Wkje7zsWNBGqqSJ9/VmSr0ndlxGKJfXglS69CFwcyRbE/65rTGn2LzqZNB2TQ+fTRZXqwR2yoOVWzW0LlNSGL3qQa4voykc3CAlCY5LFlHjTeZ/i41qJ3FaR+qMreiNVRvzHLWe7MAWQEjegryENxW/+fbQXabXISKVbv/CMlB/MAvHmQeVTntC1qp0p9vYOYw9rlGUUHSHuvjvpLuyM17wBvC2XfvXCNaLcGwvh8xKY5x7H74U20Ys2MTrSScZYgR7PB4+sro0RXbQ07giHtMr54EppZy6Hiofmk5StuVhdF8yPyEGY7rA7qiit4snKxUwxkyGoxsTgO2f7JixlEA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X5Tb+8RVA97vmVIqrMOmb+uN0+k0mdeX4LIkQCcacCA=;
 b=PWeJmtbccPm7agWwRxMqbfeLIp8aIzT9QI0sx4AkHM7hpgQr4PMauUkXR2ffrtiWAMxOOFlmHjItBAHsG9x44Te3rIH42RUvKhhtJKQM15tbJPoBWOi5G9cPbX54enpcTRo4TD6ouSEXGXt58XgZExIqV3si/vfnDK9BTmgTsaM=
Authentication-Results: vger.kernel.org; dkim=none (message not signed)
 header.d=none;vger.kernel.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH 2/3] xen: Export dbgp functions when CONFIG_XEN_DOM0 is
 enabled
To: Connor Davis <connojdavis@gmail.com>, Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <cover.1620776161.git.connojdavis@gmail.com>
 <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <0c1d6722-138f-62e7-03b3-a644e36d20a5@oracle.com>
Date: Wed, 12 May 2021 09:54:44 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.201.12]
X-ClientProxiedBy: SN4PR0501CA0119.namprd05.prod.outlook.com
 (2603:10b6:803:42::36) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b5ee5d8d-5e94-46a7-6a4f-08d9154d81c4
X-MS-TrafficTypeDiagnostic: BLAPR10MB5009:
X-Microsoft-Antispam-PRVS: 
	<BLAPR10MB500915CDC371102BE9990CFF8A529@BLAPR10MB5009.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:2276;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: b5ee5d8d-5e94-46a7-6a4f-08d9154d81c4
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2021 13:54:48.6649
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: FOQrCiUZes4uJZ742snxyAMwBvJtjgc/sd50pVu0M/XCE/olCNoMIk9cSADx2ZhtYJnX6+VofdUk9g1KMaN+3QCg664ywmUil0wUMh0mk+k=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR10MB5009
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9981 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 spamscore=0 mlxlogscore=999
 adultscore=0 phishscore=0 mlxscore=0 suspectscore=0 malwarescore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105120094
X-Proofpoint-GUID: tWm9lrRmdyGiY17lHLzqwyK97goLU-5b
X-Proofpoint-ORIG-GUID: tWm9lrRmdyGiY17lHLzqwyK97goLU-5b


On 5/11/21 8:18 PM, Connor Davis wrote:
> Export xen_dbgp_reset_prep and xen_dbgp_external_startup
> when CONFIG_XEN_DOM0 is defined. This allows use of these symbols
> even if CONFIG_EARLY_PRINTK_DBGP is not defined.
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  drivers/xen/dbgp.c | 2 +-


Unrelated to your patch, but since you are fixing things around that file: should we return -EPERM in xen_dbgp_op() when !xen_initial_domain()?
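For illustration, a minimal sketch of the guard being suggested. The stubbed xen_initial_domain() below is a self-contained stand-in for the real kernel helper, and xen_dbgp_op_sketch() is a hypothetical simplification, not the actual dbgp.c code:

```c
#include <assert.h>
#include <errno.h>
#include <stdbool.h>

/* Stand-in for the kernel's xen_initial_domain(): true only when
 * running as the initial domain (dom0).  Hard-wired here so the
 * sketch compiles on its own. */
static bool xen_initial_domain(void)
{
    return false;
}

/* Sketch of the suggested guard for xen_dbgp_op(): refuse the
 * operation with -EPERM unless we are dom0, before touching any
 * hypercall machinery. */
static int xen_dbgp_op_sketch(int op)
{
    if (!xen_initial_domain())
        return -EPERM;
    /* ... populate the physdev op and issue the hypercall ... */
    (void)op;
    return 0;
}
```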



-boris



From xen-devel-bounces@lists.xenproject.org Wed May 12 14:26:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 14:26:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126252.237639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgpoc-00066h-15; Wed, 12 May 2021 14:26:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126252.237639; Wed, 12 May 2021 14:26:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgpob-00066a-UK; Wed, 12 May 2021 14:26:21 +0000
Received: by outflank-mailman (input) for mailman id 126252;
 Wed, 12 May 2021 14:26:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgpoa-00066U-8K
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 14:26:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgpoY-0004Ol-Is; Wed, 12 May 2021 14:26:18 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgpoY-0006nn-C3; Wed, 12 May 2021 14:26:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=EWolIiDZsYMmzk/mpfOsJwMTylTgulIFa9VcefDcyNY=; b=OCvzKkJO9n0VgvOdag8qQqVmg9
	cEEe2KtoDF5cET5e216qzGvyacm9DDawrYF3jo/vjmmgJq1Q0cVwp5HonyXXWUMMloQ7Qgxi49Zra
	tj7ca9gEhOQvNGaf7wHYbWA+7tcUGufX0CrUqTUiD4lcrtemAwguww/VXJ4iM6bueHio=;
Subject: Re: [PATCH RFCv2 02/15] xen/arm: lpae: Use the generic helpers to
 define the Xen PT helpers
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-3-julien@xen.org>
 <alpine.DEB.2.21.2105111515470.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <94e364a7-de40-93ab-6cde-a2f493540439@xen.org>
Date: Wed, 12 May 2021 15:26:15 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105111515470.5018@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 11/05/2021 23:26, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Currently, Xen PT helpers are only working with 4KB page granularity
>> and open-code the generic helpers. To allow more flexibility, we can
>> re-use the generic helpers and pass Xen's page granularity
>> (PAGE_SHIFT).
>>
>> As Xen PT helpers are used in both C and assembly, we need to move
>> the generic helpers definition outside of the !__ASSEMBLY__ section.
>>
>> Note the aliases for each level are still kept for the time being so we
>> can avoid a massive patch to change all the callers.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> The patch is OK as is. I have a couple of suggestions for improvement
> below. If you feel like making them, good, otherwise I am also OK if you
> don't want to change anything.
> 
> 
>> ---
>>      Changes in v2:
>>          - New patch
>> ---
>>   xen/include/asm-arm/lpae.h | 71 +++++++++++++++++++++-----------------
>>   1 file changed, 40 insertions(+), 31 deletions(-)
>>
>> diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
>> index 4fb9a40a4ca9..310f5225e056 100644
>> --- a/xen/include/asm-arm/lpae.h
>> +++ b/xen/include/asm-arm/lpae.h
>> @@ -159,6 +159,17 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>>   #define lpae_get_mfn(pte)    (_mfn((pte).walk.base))
>>   #define lpae_set_mfn(pte, mfn)  ((pte).walk.base = mfn_x(mfn))
>>   
>> +/* Generate an array @var containing the offset for each level from @addr */
>> +#define DECLARE_OFFSETS(var, addr)          \
>> +    const unsigned int var[4] = {           \
>> +        zeroeth_table_offset(addr),         \
>> +        first_table_offset(addr),           \
>> +        second_table_offset(addr),          \
>> +        third_table_offset(addr)            \
>> +    }
>> +
>> +#endif /* __ASSEMBLY__ */
>> +
>>   /*
>>    * AArch64 supports pages with different sizes (4K, 16K, and 64K).
>>    * Provide a set of generic helpers that will compute various
>> @@ -190,17 +201,6 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>>   #define LPAE_TABLE_INDEX_GS(gs, lvl, addr)   \
>>       (((addr) >> LEVEL_SHIFT_GS(gs, lvl)) & LPAE_ENTRY_MASK_GS(gs))
>>   
>> -/* Generate an array @var containing the offset for each level from @addr */
>> -#define DECLARE_OFFSETS(var, addr)          \
>> -    const unsigned int var[4] = {           \
>> -        zeroeth_table_offset(addr),         \
>> -        first_table_offset(addr),           \
>> -        second_table_offset(addr),          \
>> -        third_table_offset(addr)            \
>> -    }
>> -
>> -#endif /* __ASSEMBLY__ */
>> -
>>   /*
>>    * These numbers add up to a 48-bit input address space.
>>    *
>> @@ -211,26 +211,35 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>>    * therefore 39-bits are sufficient.
>>    */
>>   
>> -#define LPAE_SHIFT      9
>> -#define LPAE_ENTRIES    (_AC(1,U) << LPAE_SHIFT)
>> -#define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
>> -
>> -#define THIRD_SHIFT    (PAGE_SHIFT)
>> -#define THIRD_ORDER    (THIRD_SHIFT - PAGE_SHIFT)
>> -#define THIRD_SIZE     (_AT(paddr_t, 1) << THIRD_SHIFT)
>> -#define THIRD_MASK     (~(THIRD_SIZE - 1))
>> -#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
>> -#define SECOND_ORDER   (SECOND_SHIFT - PAGE_SHIFT)
>> -#define SECOND_SIZE    (_AT(paddr_t, 1) << SECOND_SHIFT)
>> -#define SECOND_MASK    (~(SECOND_SIZE - 1))
>> -#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
>> -#define FIRST_ORDER    (FIRST_SHIFT - PAGE_SHIFT)
>> -#define FIRST_SIZE     (_AT(paddr_t, 1) << FIRST_SHIFT)
>> -#define FIRST_MASK     (~(FIRST_SIZE - 1))
>> -#define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
>> -#define ZEROETH_ORDER  (ZEROETH_SHIFT - PAGE_SHIFT)
>> -#define ZEROETH_SIZE   (_AT(paddr_t, 1) << ZEROETH_SHIFT)
>> -#define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))
> 
> Should we add a one-line in-code comment saying that the definitions
> below are for 4KB pages? It is not immediately obvious any longer.

They are not meant to be for 4KB pages; they are meant to be for the 
Xen page size.

Today, it is always 4KB but I would like the Xen code to not rely on that.

I can clarify it in an in-code comment.

>> +#define LPAE_SHIFT          LPAE_SHIFT_GS(PAGE_SHIFT)
>> +#define LPAE_ENTRIES        LPAE_ENTRIES_GS(PAGE_SHIFT)
>> +#define LPAE_ENTRY_MASK     LPAE_ENTRY_MASK_GS(PAGE_SHIFT)
>>
>> +#define LEVEL_SHIFT(lvl)    LEVEL_SHIFT_GS(PAGE_SHIFT, lvl)
>> +#define LEVEL_ORDER(lvl)    LEVEL_ORDER_GS(PAGE_SHIFT, lvl)
>> +#define LEVEL_SIZE(lvl)     LEVEL_SIZE_GS(PAGE_SHIFT, lvl)
>> +#define LEVEL_MASK(lvl)     (~(LEVEL_SIZE(lvl) - 1))
> 
> I would avoid adding these 4 macros. It would be OK if they were just
> used within this file but lpae.h is a header: they could end up be used
> anywhere in the xen/ code and they have a very generic name. My
> suggestion would be to skip them and just do:

Those macros will be used in follow-up patches. They are pretty useful 
for avoiding static arrays holding the per-level information for each 
level.

Would prefix them with XEN_ be better?
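For concreteness, here is how the generic helpers reduce for Xen's current 4KB page size (PAGE_SHIFT == 12). The macro bodies below are a sketch inferred from the values they must yield, not a verbatim copy of the patch:

```c
#include <assert.h>

/* Generic LPAE helpers parameterised on the granularity shift (gs).
 * A table is one page of 8-byte entries, so it holds 2^(gs-3) of
 * them; each level therefore resolves (gs - 3) more address bits. */
#define LPAE_SHIFT_GS(gs)        ((gs) - 3)
#define LEVEL_SHIFT_GS(gs, lvl)  ((gs) + (3 - (lvl)) * LPAE_SHIFT_GS(gs))
#define LEVEL_ORDER_GS(gs, lvl)  (LEVEL_SHIFT_GS(gs, lvl) - (gs))

/* With gs == 12 (4KB granularity) the level shifts come out as
 * 39/30/21/12, matching the removed ZEROETH/FIRST/SECOND/THIRD_SHIFT
 * constants. */
```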

> #define THIRD_SHIFT         LEVEL_SHIFT_GS(PAGE_SHIFT, 3)
> 
> etc.
> 
> 
>> +/* Convenience aliases */
>> +#define THIRD_SHIFT         LEVEL_SHIFT(3)
>> +#define THIRD_ORDER         LEVEL_ORDER(3)
>> +#define THIRD_SIZE          LEVEL_SIZE(3)
>> +#define THIRD_MASK          LEVEL_MASK(3)
>> +
>> +#define SECOND_SHIFT        LEVEL_SHIFT(2)
>> +#define SECOND_ORDER        LEVEL_ORDER(2)
>> +#define SECOND_SIZE         LEVEL_SIZE(2)
>> +#define SECOND_MASK         LEVEL_MASK(2)
>> +
>> +#define FIRST_SHIFT         LEVEL_SHIFT(1)
>> +#define FIRST_ORDER         LEVEL_ORDER(1)
>> +#define FIRST_SIZE          LEVEL_SIZE(1)
>> +#define FIRST_MASK          LEVEL_MASK(1)
>> +
>> +#define ZEROETH_SHIFT       LEVEL_SHIFT(0)
>> +#define ZEROETH_ORDER       LEVEL_ORDER(0)
>> +#define ZEROETH_SIZE        LEVEL_SIZE(0)
>> +#define ZEROETH_MASK        LEVEL_MASK(0)
>>   
>>   /* Calculate the offsets into the pagetables for a given VA */
>>   #define zeroeth_linear_offset(va) ((va) >> ZEROETH_SHIFT)
>> -- 
>> 2.17.1
>>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 12 14:37:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 14:37:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126259.237650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgpzT-0007qQ-1K; Wed, 12 May 2021 14:37:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126259.237650; Wed, 12 May 2021 14:37:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgpzS-0007qJ-Uc; Wed, 12 May 2021 14:37:34 +0000
Received: by outflank-mailman (input) for mailman id 126259;
 Wed, 12 May 2021 14:37:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgpzR-0007q9-OU; Wed, 12 May 2021 14:37:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgpzR-0004Zb-Hn; Wed, 12 May 2021 14:37:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgpzR-0005je-7W; Wed, 12 May 2021 14:37:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgpzR-00026i-73; Wed, 12 May 2021 14:37:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JK5dmsCdwsjr+ZN4dM3i9Jvsga84MN7AKir5s9qMjrI=; b=qHok3Djkj27k8QAxbs4sT0tHEy
	Dl+gxhff4C3Hl1v14mXxTxbK+d5pxlfhL1hbVB03UUdEdksLzthrSqZDO2n91tZPfeVbR5Rjvuh8G
	TzqAz3837GeLbX+4NPAmgQ69f6XN7EVPUSVuAKCrrg2YkBljIoRqf4u8quMTKfT9FcrI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161912-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161912: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=5531fd48ded1271b8775725355ab83994e4bc77c
X-Osstest-Versions-That:
    ovmf=4e5ecdbac8bdf235b2072baa0c5e170cd9f57463
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 14:37:33 +0000

flight 161912 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161912/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 5531fd48ded1271b8775725355ab83994e4bc77c
baseline version:
 ovmf                 4e5ecdbac8bdf235b2072baa0c5e170cd9f57463

Last test of basis   161908  2021-05-11 16:40:04 Z    0 days
Testing same since   161912  2021-05-12 02:02:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Lendacky, Thomas <thomas.lendacky@amd.com>
  Sughosh Ganu <sughosh.ganu@linaro.org>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   4e5ecdbac8..5531fd48de  5531fd48ded1271b8775725355ab83994e4bc77c -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 12 14:39:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 14:39:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126262.237666 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgq10-0008U3-Ex; Wed, 12 May 2021 14:39:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126262.237666; Wed, 12 May 2021 14:39:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgq10-0008Tw-Bm; Wed, 12 May 2021 14:39:10 +0000
Received: by outflank-mailman (input) for mailman id 126262;
 Wed, 12 May 2021 14:39:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgq0z-0008To-Ko
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 14:39:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgq0x-0004bI-Vp; Wed, 12 May 2021 14:39:07 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgq0x-0007qX-P5; Wed, 12 May 2021 14:39:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JKodAiAsgJ7pY9dJZjJMqrEaMA+o07oCOY7DUlzndHc=; b=4KsPTKiuCAPGIMfrPho0oBbhuH
	F4zDKsNz2R4TN2IVP+R6t1/prk7hI0WY0GrjRvSQvG7amB9eUW/rvh2xR4Yygioe0TsXtgLg5Txne
	PFqfGybU7IVNfrmKfX1Z2YJoInQEtXm/HNsujgPSwnX7wfJtSmQHT7BRe9FSRHd/7WfU=;
Subject: Re: [PATCH RFCv2 03/15] xen/arm: p2m: Replace level_{orders, masks}
 arrays with LEVEL_{ORDER, MASK}
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-4-julien@xen.org>
 <alpine.DEB.2.21.2105111528180.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <42315452-0fd3-9cc7-3fb7-89a840f94792@xen.org>
Date: Wed, 12 May 2021 15:39:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105111528180.5018@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 11/05/2021 23:33, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> The array level_orders and level_masks can be replaced with the
>> recently introduced macros LEVEL_ORDER and LEVEL_MASK.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> So you actually planned to use LEVEL_ORDER and LEVEL_MASK in the xen/
> code. I take back the previous comment :-)
> 
> Is the 4KB size "hiding" (for the lack of a better word) done on purpose?
> 
> Let me rephrase. Are you trying to consolidate info about pages being
> 4KB in xen/include/asm-arm/lpae.h ?

THIRD_ORDER, SECOND_ORDER... are already not very 4KB-specific :). In 
this case, what I am trying to do is to remove the static arrays 
completely so they don't need to be global (or duplicated) when adding 
superpage support for the Xen PT (see a follow-up patch).

This also has the added benefit of replacing a couple of loads with a 
few instructions working on immediates.

> 
> In any case:
> 
> Acked-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 12 14:48:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 14:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126268.237678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqAA-0001j0-AW; Wed, 12 May 2021 14:48:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126268.237678; Wed, 12 May 2021 14:48:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqAA-0001it-76; Wed, 12 May 2021 14:48:38 +0000
Received: by outflank-mailman (input) for mailman id 126268;
 Wed, 12 May 2021 14:48:37 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F0FV=KH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lgqA9-0001im-K7
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 14:48:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9ecda99-f3ad-42f7-87e8-23407bc00fe5;
 Wed, 12 May 2021 14:48:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D399B016;
 Wed, 12 May 2021 14:48:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9ecda99-f3ad-42f7-87e8-23407bc00fe5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620830914; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=73AxySUK/bHt+9oCp3HGltTtNvpfXQnqomKvsiEwxP0=;
	b=UyJXeOKKVEw4FYmbHuKLD0TCoOoioqwb7Rny43NcyQR+5Lh8iT4Cj4B89d54Gkw7VcYr/q
	We6rghKtFB1SwGKW5WqnM9lZvJc17w873ES9PNmW/o6PwljsEUwRnNReMj1jVyKjT9YXvp
	5T8kXKtvR7mvfOgyJFvNeIO0hCX9gjU=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4] tools/libs/store: cleanup libxenstore interface
Date: Wed, 12 May 2021 16:48:32 +0200
Message-Id: <20210512144832.19026-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

There are some internals in the libxenstore interface which should be
removed.

Move those functions into xs_lib.c and the related definitions into
xs_lib.h. Remove the functions from the mapfile. Add xs_lib.o to
xenstore_client as some of the internal functions are needed there.

Bump the libxenstore version to 4.0 as the change is incompatible.
Note that removing these functions should not cause any problems, as
they ought to be used only by xenstored and xenstore_client.

Avoid using an enum as part of a structure, as the size of an enum is
implementation-defined and thus compiler dependent.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2: minimal variant (Ian Jackson)
V3: replace enum with unsigned int (Andrew Cooper)
V4: full variant again, this time with version bump (Ian Jackson)
---
 tools/include/xenstore_lib.h       |  54 ++------------
 tools/libs/store/Makefile          |   4 +-
 tools/libs/store/libxenstore.map   |  10 +--
 tools/libs/store/xs.c              | 112 +---------------------------
 tools/xenstore/Makefile            |   4 +-
 tools/xenstore/utils.h             |  11 +++
 tools/xenstore/xenstore_client.c   |   2 +
 tools/xenstore/xenstored_control.c |   1 +
 tools/xenstore/xenstored_core.c    |  19 +++--
 tools/xenstore/xenstored_core.h    |   6 +-
 tools/xenstore/xenstored_watch.c   |   2 +-
 tools/xenstore/xs_lib.c            | 114 ++++++++++++++++++++++++++++-
 tools/xenstore/xs_lib.h            |  50 +++++++++++++
 tools/xenstore/xs_tdb_dump.c       |   2 +-
 14 files changed, 204 insertions(+), 187 deletions(-)
 create mode 100644 tools/xenstore/xs_lib.h

diff --git a/tools/include/xenstore_lib.h b/tools/include/xenstore_lib.h
index 4c9b6d1685..2266009ec1 100644
--- a/tools/include/xenstore_lib.h
+++ b/tools/include/xenstore_lib.h
@@ -26,42 +26,26 @@
 #include <stdint.h>
 #include <xen/io/xs_wire.h>
 
-/* Bitmask of permissions. */
-enum xs_perm_type {
-	XS_PERM_NONE = 0,
-	XS_PERM_READ = 1,
-	XS_PERM_WRITE = 2,
-	/* Internal use. */
-	XS_PERM_ENOENT_OK = 4,
-	XS_PERM_OWNER = 8,
-	XS_PERM_IGNORE = 16,
-};
-
 struct xs_permissions
 {
 	unsigned int id;
-	enum xs_perm_type perms;
-};
-
-/* Header of the node record in tdb. */
-struct xs_tdb_record_hdr {
-	uint64_t generation;
-	uint32_t num_perms;
-	uint32_t datalen;
-	uint32_t childlen;
-	struct xs_permissions perms[0];
+	unsigned int perms;	/* Bitmask of permissions. */
+#define XS_PERM_NONE		0x00
+#define XS_PERM_READ		0x01
+#define XS_PERM_WRITE		0x02
+	/* Internal use. */
+#define XS_PERM_ENOENT_OK	0x04
+#define XS_PERM_OWNER		0x08
+#define XS_PERM_IGNORE		0x10
 };
 
 /* Each 10 bits takes ~ 3 digits, plus one, plus one for nul terminator. */
 #define MAX_STRLEN(x) ((sizeof(x) * CHAR_BIT + CHAR_BIT-1) / 10 * 3 + 2)
 
 /* Path for various daemon things: env vars can override. */
-const char *xs_daemon_rootdir(void);
 const char *xs_daemon_rundir(void);
 const char *xs_daemon_socket(void);
 const char *xs_daemon_socket_ro(void);
-const char *xs_domain_dev(void);
-const char *xs_daemon_tdb(void);
 
 /* Simple write function: loops for you. */
 bool xs_write_all(int fd, const void *data, unsigned int len);
@@ -70,26 +54,4 @@ bool xs_write_all(int fd, const void *data, unsigned int len);
 bool xs_strings_to_perms(struct xs_permissions *perms, unsigned int num,
 			 const char *strings);
 
-/* Convert permissions to a string (up to len MAX_STRLEN(unsigned int)+1). */
-bool xs_perm_to_string(const struct xs_permissions *perm,
-                       char *buffer, size_t buf_len);
-
-/* Given a string and a length, count how many strings (nul terms). */
-unsigned int xs_count_strings(const char *strings, unsigned int len);
-
-/* Sanitising (quoting) possibly-binary strings. */
-struct expanding_buffer {
-	char *buf;
-	int avail;
-};
-
-/* Ensure that given expanding buffer has at least min_avail characters. */
-char *expanding_buffer_ensure(struct expanding_buffer *, int min_avail);
-
-/* sanitise_value() may return NULL if malloc fails. */
-char *sanitise_value(struct expanding_buffer *, const char *val, unsigned len);
-
-/* *out_len_r on entry is ignored; out must be at least strlen(in)+1 bytes. */
-void unsanitise_value(char *out, unsigned *out_len_r, const char *in);
-
 #endif /* XENSTORE_LIB_H */
diff --git a/tools/libs/store/Makefile b/tools/libs/store/Makefile
index bee57b5629..43b018aa8c 100644
--- a/tools/libs/store/Makefile
+++ b/tools/libs/store/Makefile
@@ -1,8 +1,8 @@
 XEN_ROOT=$(CURDIR)/../../..
 include $(XEN_ROOT)/tools/Rules.mk
 
-MAJOR = 3.0
-MINOR = 3
+MAJOR = 4
+MINOR = 0
 
 ifeq ($(CONFIG_Linux),y)
 APPEND_LDFLAGS += -ldl
diff --git a/tools/libs/store/libxenstore.map b/tools/libs/store/libxenstore.map
index 9854305a2c..7e6c7bdd30 100644
--- a/tools/libs/store/libxenstore.map
+++ b/tools/libs/store/libxenstore.map
@@ -1,4 +1,4 @@
-VERS_3.0.3 {
+VERS_4.0 {
 	global:
 		xs_open;
 		xs_close;
@@ -32,18 +32,10 @@ VERS_3.0.3 {
 		xs_control_command;
 		xs_debug_command;
 		xs_suspend_evtchn_port;
-		xs_daemon_rootdir;
 		xs_daemon_rundir;
 		xs_daemon_socket;
 		xs_daemon_socket_ro;
-		xs_domain_dev;
-		xs_daemon_tdb;
 		xs_write_all;
 		xs_strings_to_perms;
-		xs_perm_to_string;
-		xs_count_strings;
-		expanding_buffer_ensure;
-		sanitise_value;
-		unsanitise_value;
 	local: *; /* Do not expose anything by default */
 };
diff --git a/tools/libs/store/xs.c b/tools/libs/store/xs.c
index c91377c27f..7a9a8b1656 100644
--- a/tools/libs/store/xs.c
+++ b/tools/libs/store/xs.c
@@ -34,6 +34,7 @@
 #include <stdint.h>
 #include <errno.h>
 #include "xenstore.h"
+#include "xs_lib.h"
 #include "list.h"
 #include "utils.h"
 
@@ -1358,117 +1359,6 @@ static void *read_thread(void *arg)
 }
 #endif
 
-char *expanding_buffer_ensure(struct expanding_buffer *ebuf, int min_avail)
-{
-	int want;
-	char *got;
-
-	if (ebuf->avail >= min_avail)
-		return ebuf->buf;
-
-	if (min_avail >= INT_MAX/3)
-		return 0;
-
-	want = ebuf->avail + min_avail + 10;
-	got = realloc(ebuf->buf, want);
-	if (!got)
-		return 0;
-
-	ebuf->buf = got;
-	ebuf->avail = want;
-	return ebuf->buf;
-}
-
-char *sanitise_value(struct expanding_buffer *ebuf,
-		     const char *val, unsigned len)
-{
-	int used, remain, c;
-	unsigned char *ip;
-
-#define ADD(c) (ebuf->buf[used++] = (c))
-#define ADDF(f,c) (used += sprintf(ebuf->buf+used, (f), (c)))
-
-	assert(len < INT_MAX/5);
-
-	ip = (unsigned char *)val;
-	used = 0;
-	remain = len;
-
-	if (!expanding_buffer_ensure(ebuf, remain + 1))
-		return NULL;
-
-	while (remain-- > 0) {
-		c= *ip++;
-
-		if (c >= ' ' && c <= '~' && c != '\\') {
-			ADD(c);
-			continue;
-		}
-
-		if (!expanding_buffer_ensure(ebuf, used + remain + 5))
-			/* for "<used>\\nnn<remain>\0" */
-			return 0;
-
-		ADD('\\');
-		switch (c) {
-		case '\t':  ADD('t');   break;
-		case '\n':  ADD('n');   break;
-		case '\r':  ADD('r');   break;
-		case '\\':  ADD('\\');  break;
-		default:
-			if (c < 010) ADDF("%03o", c);
-			else         ADDF("x%02x", c);
-		}
-	}
-
-	ADD(0);
-	assert(used <= ebuf->avail);
-	return ebuf->buf;
-
-#undef ADD
-#undef ADDF
-}
-
-void unsanitise_value(char *out, unsigned *out_len_r, const char *in)
-{
-	const char *ip;
-	char *op;
-	unsigned c;
-	int n;
-
-	for (ip = in, op = out; (c = *ip++); *op++ = c) {
-		if (c == '\\') {
-			c = *ip++;
-
-#define GETF(f) do {					\
-			n = 0;				\
-			sscanf(ip, f "%n", &c, &n);	\
-			ip += n;			\
-		} while (0)
-
-			switch (c) {
-			case 't':              c= '\t';            break;
-			case 'n':              c= '\n';            break;
-			case 'r':              c= '\r';            break;
-			case '\\':             c= '\\';            break;
-			case 'x':                    GETF("%2x");  break;
-			case '0': case '4':
-			case '1': case '5':
-			case '2': case '6':
-			case '3': case '7':    --ip; GETF("%3o");  break;
-			case 0:                --ip;               break;
-			default:;
-			}
-#undef GETF
-		}
-	}
-
-	*op = 0;
-
-	if (out_len_r)
-		*out_len_r = op - out;
-}
-
 /*
  * Local variables:
  *  mode: C
diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
index ab89e22d3a..01c9ccc70f 100644
--- a/tools/xenstore/Makefile
+++ b/tools/xenstore/Makefile
@@ -78,8 +78,8 @@ xenstored.a: $(XENSTORED_OBJS)
 $(CLIENTS): xenstore
 	ln -f xenstore $@
 
-xenstore: xenstore_client.o
-	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
+xenstore: xenstore_client.o xs_lib.o
+	$(CC) $^ $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
 
 xenstore-control: xenstore_control.o
 	$(CC) $< $(LDFLAGS) $(LDLIBS_libxenstore) $(LDLIBS_libxenctrl) $(LDLIBS_libxenguest) $(LDLIBS_libxentoolcore) $(SOCKET_LIBS) -o $@ $(APPEND_LDFLAGS)
diff --git a/tools/xenstore/utils.h b/tools/xenstore/utils.h
index 87713a8e5d..9d012b97c1 100644
--- a/tools/xenstore/utils.h
+++ b/tools/xenstore/utils.h
@@ -7,6 +7,17 @@
 
 #include <xen-tools/libs.h>
 
+#include "xenstore_lib.h"
+
+/* Header of the node record in tdb. */
+struct xs_tdb_record_hdr {
+	uint64_t generation;
+	uint32_t num_perms;
+	uint32_t datalen;
+	uint32_t childlen;
+	struct xs_permissions perms[0];
+};
+
 /* Is A == B ? */
 #define streq(a,b) (strcmp((a),(b)) == 0)
 
diff --git a/tools/xenstore/xenstore_client.c b/tools/xenstore/xenstore_client.c
index ddbafc5175..0628ba275e 100644
--- a/tools/xenstore/xenstore_client.c
+++ b/tools/xenstore/xenstore_client.c
@@ -22,6 +22,8 @@
 
 #include <sys/ioctl.h>
 
+#include "xs_lib.h"
+
 #define PATH_SEP '/'
 #define MAX_PATH_LEN 256
 
diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 52d4817679..995f671f35 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -34,6 +34,7 @@
 
 #include "utils.h"
 #include "talloc.h"
+#include "xs_lib.h"
 #include "xenstored_core.h"
 #include "xenstored_control.h"
 #include "xenstored_domain.h"
diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 02ae390e25..4b7b71cfb3 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -46,7 +46,7 @@
 #include "utils.h"
 #include "list.h"
 #include "talloc.h"
-#include "xenstore_lib.h"
+#include "xs_lib.h"
 #include "xenstored_core.h"
 #include "xenstored_watch.h"
 #include "xenstored_transaction.h"
@@ -542,11 +542,11 @@ static int write_node(struct connection *conn, struct node *node,
 	return write_node_raw(conn, &key, node, no_quota_check);
 }
 
-enum xs_perm_type perm_for_conn(struct connection *conn,
-				const struct node_perms *perms)
+unsigned int perm_for_conn(struct connection *conn,
+			   const struct node_perms *perms)
 {
 	unsigned int i;
-	enum xs_perm_type mask = XS_PERM_READ|XS_PERM_WRITE|XS_PERM_OWNER;
+	unsigned int mask = XS_PERM_READ|XS_PERM_WRITE|XS_PERM_OWNER;
 
 	/* Owners and tools get it all... */
 	if (!domain_is_unprivileged(conn) || perms->p[0].id == conn->id
@@ -584,7 +584,7 @@ char *get_parent(const void *ctx, const char *node)
  * Temporary memory allocations are done with ctx.
  */
 static int ask_parents(struct connection *conn, const void *ctx,
-		       const char *name, enum xs_perm_type *perm)
+		       const char *name, unsigned int *perm)
 {
 	struct node *node;
 
@@ -618,10 +618,9 @@ static int ask_parents(struct connection *conn, const void *ctx,
  * Temporary memory allocations are done with ctx.
  */
 static int errno_from_parents(struct connection *conn, const void *ctx,
-			      const char *node, int errnum,
-			      enum xs_perm_type perm)
+			      const char *node, int errnum, unsigned int perm)
 {
-	enum xs_perm_type parent_perm = XS_PERM_NONE;
+	unsigned int parent_perm = XS_PERM_NONE;
 
 	/* We always tell them about memory failures. */
 	if (errnum == ENOMEM)
@@ -641,7 +640,7 @@ static int errno_from_parents(struct connection *conn, const void *ctx,
 static struct node *get_node(struct connection *conn,
 			     const void *ctx,
 			     const char *name,
-			     enum xs_perm_type perm)
+			     unsigned int perm)
 {
 	struct node *node;
 
@@ -873,7 +872,7 @@ static struct node *get_node_canonicalized(struct connection *conn,
 					   const void *ctx,
 					   const char *name,
 					   char **canonical_name,
-					   enum xs_perm_type perm)
+					   unsigned int perm)
 {
 	char *tmp_name;
 
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index b50ea3f57d..6a6d0448e8 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -185,8 +185,8 @@ void send_ack(struct connection *conn, enum xsd_sockmsg_type type);
 char *canonicalize(struct connection *conn, const void *ctx, const char *node);
 
 /* Get access permissions. */
-enum xs_perm_type perm_for_conn(struct connection *conn,
-				const struct node_perms *perms);
+unsigned int perm_for_conn(struct connection *conn,
+			   const struct node_perms *perms);
 
 /* Write a node to the tdb data base. */
 int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
@@ -200,8 +200,6 @@ struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
 struct connection *get_connection_by_id(unsigned int conn_id);
 void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
-enum xs_perm_type perm_for_conn(struct connection *conn,
-				const struct node_perms *perms);
 
 /* Is this a valid node name? */
 bool is_valid_nodename(const char *node);
diff --git a/tools/xenstore/xenstored_watch.c b/tools/xenstore/xenstored_watch.c
index db89e0141f..aca0a71bad 100644
--- a/tools/xenstore/xenstored_watch.c
+++ b/tools/xenstore/xenstored_watch.c
@@ -124,7 +124,7 @@ static bool watch_permitted(struct connection *conn, const void *ctx,
 			    const char *name, struct node *node,
 			    struct node_perms *perms)
 {
-	enum xs_perm_type perm;
+	unsigned int perm;
 	struct node *parent;
 	char *parent_name;
 
diff --git a/tools/xenstore/xs_lib.c b/tools/xenstore/xs_lib.c
index 80c03acbea..10fa4c3ad0 100644
--- a/tools/xenstore/xs_lib.c
+++ b/tools/xenstore/xs_lib.c
@@ -16,12 +16,13 @@
     License along with this library; If not, see <http://www.gnu.org/licenses/>.
 */
 
+#include <assert.h>
 #include <unistd.h>
 #include <stdio.h>
 #include <string.h>
 #include <stdlib.h>
 #include <errno.h>
-#include "xenstore_lib.h"
+#include "xs_lib.h"
 
 /* Common routines for the Xen store daemon and client library. */
 
@@ -179,3 +180,114 @@ unsigned int xs_count_strings(const char *strings, unsigned int len)
 
 	return num;
 }
+
+char *expanding_buffer_ensure(struct expanding_buffer *ebuf, int min_avail)
+{
+	int want;
+	char *got;
+
+	if (ebuf->avail >= min_avail)
+		return ebuf->buf;
+
+	if (min_avail >= INT_MAX/3)
+		return 0;
+
+	want = ebuf->avail + min_avail + 10;
+	got = realloc(ebuf->buf, want);
+	if (!got)
+		return 0;
+
+	ebuf->buf = got;
+	ebuf->avail = want;
+	return ebuf->buf;
+}
+
+char *sanitise_value(struct expanding_buffer *ebuf,
+		     const char *val, unsigned len)
+{
+	int used, remain, c;
+	unsigned char *ip;
+
+#define ADD(c) (ebuf->buf[used++] = (c))
+#define ADDF(f,c) (used += sprintf(ebuf->buf+used, (f), (c)))
+
+	assert(len < INT_MAX/5);
+
+	ip = (unsigned char *)val;
+	used = 0;
+	remain = len;
+
+	if (!expanding_buffer_ensure(ebuf, remain + 1))
+		return NULL;
+
+	while (remain-- > 0) {
+		c= *ip++;
+
+		if (c >= ' ' && c <= '~' && c != '\\') {
+			ADD(c);
+			continue;
+		}
+
+		if (!expanding_buffer_ensure(ebuf, used + remain + 5))
+			/* for "<used>\\nnn<remain>\0" */
+			return 0;
+
+		ADD('\\');
+		switch (c) {
+		case '\t':  ADD('t');   break;
+		case '\n':  ADD('n');   break;
+		case '\r':  ADD('r');   break;
+		case '\\':  ADD('\\');  break;
+		default:
+			if (c < 010) ADDF("%03o", c);
+			else         ADDF("x%02x", c);
+		}
+	}
+
+	ADD(0);
+	assert(used <= ebuf->avail);
+	return ebuf->buf;
+
+#undef ADD
+#undef ADDF
+}
+
+void unsanitise_value(char *out, unsigned *out_len_r, const char *in)
+{
+	const char *ip;
+	char *op;
+	unsigned c;
+	int n;
+
+	for (ip = in, op = out; (c = *ip++); *op++ = c) {
+		if (c == '\\') {
+			c = *ip++;
+
+#define GETF(f) do {					\
+			n = 0;				\
+			sscanf(ip, f "%n", &c, &n);	\
+			ip += n;			\
+		} while (0)
+
+			switch (c) {
+			case 't':		c= '\t';		break;
+			case 'n':		c= '\n';		break;
+			case 'r':		c= '\r';		break;
+			case '\\':		c= '\\';		break;
+			case 'x':		GETF("%2x");		break;
+			case '0': case '4':
+			case '1': case '5':
+			case '2': case '6':
+			case '3': case '7':	--ip; GETF("%3o");	break;
+			case 0:			--ip;			break;
+			default:;
+			}
+#undef GETF
+		}
+	}
+
+	*op = 0;
+
+	if (out_len_r)
+		*out_len_r = op - out;
+}
diff --git a/tools/xenstore/xs_lib.h b/tools/xenstore/xs_lib.h
new file mode 100644
index 0000000000..efa05997d6
--- /dev/null
+++ b/tools/xenstore/xs_lib.h
@@ -0,0 +1,50 @@
+/*
+    Common routines between Xen store user library and daemon.
+    Copyright (C) 2005 Rusty Russell IBM Corporation
+
+    This library is free software; you can redistribute it and/or
+    modify it under the terms of the GNU Lesser General Public
+    License as published by the Free Software Foundation; either
+    version 2.1 of the License, or (at your option) any later version.
+
+    This library is distributed in the hope that it will be useful,
+    but WITHOUT ANY WARRANTY; without even the implied warranty of
+    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+    Lesser General Public License for more details.
+
+    You should have received a copy of the GNU Lesser General Public
+    License along with this library; If not, see <http://www.gnu.org/licenses/>.
+*/
+
+#ifndef XS_LIB_H
+#define XS_LIB_H
+
+#include "xenstore_lib.h"
+
+const char *xs_daemon_rootdir(void);
+const char *xs_domain_dev(void);
+const char *xs_daemon_tdb(void);
+
+/* Convert permissions to a string (up to len MAX_STRLEN(unsigned int)+1). */
+bool xs_perm_to_string(const struct xs_permissions *perm,
+		       char *buffer, size_t buf_len);
+
+/* Given a string and a length, count how many strings (nul terms). */
+unsigned int xs_count_strings(const char *strings, unsigned int len);
+
+/* Sanitising (quoting) possibly-binary strings. */
+struct expanding_buffer {
+	char *buf;
+	int avail;
+};
+
+/* Ensure that given expanding buffer has at least min_avail characters. */
+char *expanding_buffer_ensure(struct expanding_buffer *, int min_avail);
+
+/* sanitise_value() may return NULL if malloc fails. */
+char *sanitise_value(struct expanding_buffer *, const char *val, unsigned len);
+
+/* *out_len_r on entry is ignored; out must be at least strlen(in)+1 bytes. */
+void unsanitise_value(char *out, unsigned *out_len_r, const char *in);
+
+#endif /* XS_LIB_H */
diff --git a/tools/xenstore/xs_tdb_dump.c b/tools/xenstore/xs_tdb_dump.c
index f74676cf1c..5d2db392b4 100644
--- a/tools/xenstore/xs_tdb_dump.c
+++ b/tools/xenstore/xs_tdb_dump.c
@@ -17,7 +17,7 @@ static uint32_t total_size(struct xs_tdb_record_hdr *hdr)
 		+ hdr->datalen + hdr->childlen;
 }
 
-static char perm_to_char(enum xs_perm_type perm)
+static char perm_to_char(unsigned int perm)
 {
 	return perm == XS_PERM_READ ? 'r' :
 		perm == XS_PERM_WRITE ? 'w' :
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed May 12 14:52:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 14:52:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126271.237689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqDt-00036z-RI; Wed, 12 May 2021 14:52:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126271.237689; Wed, 12 May 2021 14:52:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqDt-00036s-OE; Wed, 12 May 2021 14:52:29 +0000
Received: by outflank-mailman (input) for mailman id 126271;
 Wed, 12 May 2021 14:52:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XikZ=KH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lgqDr-00036l-Vt
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 14:52:28 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f8404d95-0236-4e77-ab24-f67484f37af0;
 Wed, 12 May 2021 14:52:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f8404d95-0236-4e77-ab24-f67484f37af0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620831146;
  h=subject:from:to:cc:references:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=tOUc4EMZYhTBpBf+owUFWUgGkCYuXMNlStJ2eudHJUE=;
  b=J69yRGdoUxPEfz+6yoRZzKeZBUOlHm7aifaZAHCi1XGCamPCs8LcpCnc
   cpPRO3iW4fIzF1wZwklAsS39OK6/XCEM9S7avSt2yGGbweIb0zz05QVdn
   S/I0Ztd6mBg9yLYWC7JLOopIv52HVE/yGuRqoZgZdJscwJdyy0d7wQFUH
   0=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: EoVt/jgZab7rgEc7f7JAIbMXa+2HuGcuqPidxvxcT1rWuzhSEqtRNOf/t4EGVLguBewnKXAPsN
 r13h8oD8Jq1gcjHwPPT3icKwgHF0YwAhEOET7AwQc6niXtkzY7MpjyUm+CZkvA1/8CVhVQONZm
 k2Gax+2fhKEBLV7/chw8qlAvgHnStAwO6qcE5VDFC4oaAHvZ4VH19D/c7UlGUoxmrZ24wz1oDt
 Kr6frPLxgETp+53X4ITBVzjyJ5Jx1+1qLgvmUfUZjjUOikj/APBpQyFKwnEfkpr4vdTA7gwmZF
 e6U=
X-SBRS: 5.1
X-MesageID: 43435481
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:y2Nh6ao+ZH9qoFnUN8Z93gkaV5rveYIsimQD101hICG9Evb0qy
 nOpoV/6faQslwssR4b9uxoVJPvfZq+z+8W3WByB9eftWDd0QPFEGgL1+DfKlbbak7DH4BmtJ
 uJc8JFeafN5VoRt7eG3OFveexQvOVu88qT9JjjJ28Gd3APV0n5hT0JcjpyFCdNNW57LKt8Lr
 WwzOxdqQGtfHwGB/7LfUXsD4D41rv2fIuNW29+OyIa
X-IronPort-AV: E=Sophos;i="5.82,293,1613451600"; 
   d="scan'208";a="43435481"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=d/QopO/vDUzOr9L5Uuq7Ro1DEeKkcWQyN4S5327s9FIO6bWQs4Qe9e8zTLrPWnh2Uw5R6CTU/eIwEAHiGdKSEqc8nbGH6HDzsMLwsgfD0VPXTN0LPzWlaGHtC045ENf+8NpQidNeQ4KZUuB+3y2pFPiaSfRMACkzH+jd2GSv9OFB7qeTBgk63zt391MgkAio6wTp7G2hU7w+rTl45RXCnwXX/S5U0ltrM4sU7nmbtmbReGqah5nrE/bwpRb1KlhtoDA/sbP7TTkXdG7RnBjFfzKfA3kf0LhlTPXWIRr+tz34zQR0rRzSSLWxNeO3mPNoy6U0kiNUNNu6DOU3Zo4Pmg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tOUc4EMZYhTBpBf+owUFWUgGkCYuXMNlStJ2eudHJUE=;
 b=VBU2jhp00e5xfN5WCIiuOhCgV/svfgtP3VQoGqJR2kBC7KhcNpeMbHh8js+EmCvJ0Qc1q93UFPAkuTAm2qdDiIvkMuMMmCV9yfeyqnGJ0nzkn5N59dUeasIuw/YEzx7CFeCdHgmuc0v0894LOJc9xaBrC2RLu1M+RqDcUQfyM5+7N3zGp9J+t1jQ1dxjPg3ugsn3ofwD4ntpWO5F3Sx9KAuPD2AKDuZxg2KAkcL74muuYLhofrE7YMNOQ4fGf8eTXvGMASuoaj7TiRUzZENQPD684qNMNL8a1EgRmHjYLT4TF5uopWmXVab3gf7Nbz1MR6/Nctb5uLRTicyDtTiuYg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tOUc4EMZYhTBpBf+owUFWUgGkCYuXMNlStJ2eudHJUE=;
 b=eDU42Isb9QiubYye0olmJyS6iAjP8JzBvBuDVLz/XXFk+KzD300QgCE6sc0lUpshoB8IqDTk1upT+L14VJkQXX3dfSEpFoByAZGNKR4zNA1G/FO2BIOxHBImFjkSImSNwj5Wtrf6nzBeF5OHLFq+2Sw/1G/b8qEuLUKEGoERuOY=
Subject: Re: [PATCH v2] tools: remove unused sysconfig variable
 XENSTORED_ROOTDIR
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210506151631.1227-1-olaf@aepfle.de>
 <a236f079-1771-7808-bb16-97b9dc5ed733@citrix.com>
Message-ID: <fe7ccfe8-967e-ed12-5804-590fd9663608@citrix.com>
Date: Wed, 12 May 2021 15:52:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <a236f079-1771-7808-bb16-97b9dc5ed733@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0075.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:190::8) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b43b7c87-728d-4657-d3fd-08d915558cbc
X-MS-TrafficTypeDiagnostic: BYAPR03MB3431:
X-Microsoft-Antispam-PRVS: <BYAPR03MB343107C154917854BFC00FADBA529@BYAPR03MB3431.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1443;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: b43b7c87-728d-4657-d3fd-08d915558cbc
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2021 14:52:23.1372
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: hNBHSg5XqceSxcHWAqMFrm6XTitQHPKeUDqZW4aEXNn38UfflITS06Y1Q0Bt8AId2cNFhToJ6atKdoW1yUGo87ONOdYjVS+FxJWP9J0hU3k=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB3431
X-OriginatorOrg: citrix.com

On 06/05/2021 17:49, Andrew Cooper wrote:
> On 06/05/2021 16:16, Olaf Hering wrote:
>> The sysconfig variable XENSTORED_ROOTDIR is not used anymore.
>> It used to point to a directory with tdb files, which is now a tmpfs.
>>
>> In case the database is not in tmpfs, like on sysv and BSD systems,
>> xenstored will truncate existing database files during start.
>>
>> Fixes commit 2ef6ace428dec4795b8b0a67fff6949e817013de
>>
>> Signed-off-by: Olaf Hering <olaf@aepfle.de>
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>, although as we're
> trying to keep on top of the changelog this time around, we probably
> want the following hunk:
>
> diff --git a/CHANGELOG.md b/CHANGELOG.md
> index 0106fccec1..6896d70757 100644
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -6,6 +6,10 @@ The format is based on [Keep a
> Changelog](https://keepachangelog.com/en/1.0.0/)
>  
>  ## [unstable
> UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging)
> - TBD
>  
> +### Removed
> + - XENSTORED_ROOTDIR environment variable from configuration files and
> +   initscripts, due to being unused.
> +
>  ## [4.15.0
> UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.15.0)
> - TBD
>  
>  ### Added / support upgraded
>
> ~Andrew

Olaf: View on the above?

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 12 14:58:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 14:58:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126279.237702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqJn-00045s-N2; Wed, 12 May 2021 14:58:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126279.237702; Wed, 12 May 2021 14:58:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqJn-00045l-In; Wed, 12 May 2021 14:58:35 +0000
Received: by outflank-mailman (input) for mailman id 126279;
 Wed, 12 May 2021 14:58:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+k7y=KH=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lgqJm-00045f-Av
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 14:58:34 +0000
Received: from mail-ot1-x335.google.com (unknown [2607:f8b0:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id db43408e-1a35-4c5d-985f-bf5bff516cc5;
 Wed, 12 May 2021 14:58:33 +0000 (UTC)
Received: by mail-ot1-x335.google.com with SMTP id
 u25-20020a0568302319b02902ac3d54c25eso20831972ote.1
 for <xen-devel@lists.xenproject.org>; Wed, 12 May 2021 07:58:33 -0700 (PDT)
Received: from ceres ([2603:300b:7b5:c800:1cf6:4c9f:4e7:d116])
 by smtp.gmail.com with ESMTPSA id w25sm153167otq.40.2021.05.12.07.58.32
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 May 2021 07:58:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: db43408e-1a35-4c5d-985f-bf5bff516cc5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=iykaG1QuO5qEZs1ETSCytVqlMltBctwV3CQakGdsnxQ=;
        b=cNIQOnhGYgXqV/It6z0ALitchnraAAKPlsvAVjx5dDPpiUrnTH6sOwlVipGF5i4mo8
         lvLBghLlGAzTAvaGt/TIjH3vXJaIuDVjqdazab7KWfxvLdcbfuBlxWC7vFecEgGvN9fW
         fCT75y1/IQR7i0C/iUl7UZKFrCybCZQR0wdlZXkKTqILRQgurYrItOZx1W77HvgwyHRo
         DHj33LxMInsbH+/LUui8Mib60JifOPPmoBAdkr3o4xqOFXCpuAMUfPIbs4UffwSuP2OL
         fOkGiH1J4wWKbmSK0qmzj0pOp2iI8pJPUD10BSNwGqsLRebR+MuJGs2bLcscvyjbN7WH
         ihBw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=iykaG1QuO5qEZs1ETSCytVqlMltBctwV3CQakGdsnxQ=;
        b=XShxjweBn+E7dESiiRqTKRxw4Z7eIsr7FkYXbVLnxHw8hJqfyeKfjou7hV2auMeSpE
         uoFePEHUrjH5Lp0lYqSoWDYsKCaraPjorCW0h/MAI/452Q0Q2SY+X71geKOK5x3lxk3P
         J9pQRedPCOEgL5ttw/aOku5dn76UcgnSMmGdsXi86ZvO6/5EOSm+xVPrY/tuvbyHHLDp
         +/v/nfOxJGaHlZHznTdTJAlkGhBqd6mT/w1KI0TAECkLsgPYvTSNQYr65oY9ESBB+5HH
         iLn6FnPl3fqa3z4bAGkq+ZEKdoF/DHbSimGREZBXNrHGhQTRyI5wqg2qQrjVPWQ0LKSp
         trXA==
X-Gm-Message-State: AOAM530OeYsKoOBZ9//EMlWCBNcLCAnm/RPiMROhViv3KGiVc0PR2T+c
	81tGUT6lb7yo2YRZZZ8F1J4=
X-Google-Smtp-Source: ABdhPJzZN5zhpCNe0QJ+Wt1+7aBQEqm+/EdV3TLqdFy7QVNXxftTOD53gkuuLeTIynk2FFtFgfhZpA==
X-Received: by 2002:a9d:7a56:: with SMTP id z22mr28260086otm.47.1620831513268;
        Wed, 12 May 2021 07:58:33 -0700 (PDT)
Date: Wed, 12 May 2021 08:58:31 -0600
From: Connor Davis <connojdavis@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/3] xen: Export dbgp functions when CONFIG_XEN_DOM0 is
 enabled
Message-ID: <20210512145831.gxmmlimkmnnb6zyc@ceres>
References: <cover.1620776161.git.connojdavis@gmail.com>
 <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
 <0c1d6722-138f-62e7-03b3-a644e36d20a5@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <0c1d6722-138f-62e7-03b3-a644e36d20a5@oracle.com>

On May 12, 21, Boris Ostrovsky wrote:
>
> On 5/11/21 8:18 PM, Connor Davis wrote:
> > Export xen_dbgp_reset_prep and xen_dbgp_external_startup
> > when CONFIG_XEN_DOM0 is defined. This allows use of these symbols
> > even if CONFIG_EARLY_PRINTK_DBGP is defined.
> >
> > Signed-off-by: Connor Davis <connojdavis@gmail.com>
> > ---
> >  drivers/xen/dbgp.c | 2 +-
>
>
> Unrelated to your patch but since you are fixing things around that file --- should we return -EPERM in xen_dbgp_op() when !xen_initial_domain()?

Yeah, it looks like it. That would make patch 3 simpler too.
Do you want me to add a patch that fixes that up?

>
> -boris
>

Thanks,
Connor


From xen-devel-bounces@lists.xenproject.org Wed May 12 15:00:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 15:00:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126281.237714 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqLF-0005H5-14; Wed, 12 May 2021 15:00:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126281.237714; Wed, 12 May 2021 15:00:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqLE-0005GP-Ty; Wed, 12 May 2021 15:00:04 +0000
Received: by outflank-mailman (input) for mailman id 126281;
 Wed, 12 May 2021 15:00:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+k7y=KH=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lgqLD-0004yO-6w
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 15:00:03 +0000
Received: from mail-oi1-x229.google.com (unknown [2607:f8b0:4864:20::229])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7a65c375-0828-4721-9293-a2fc480fd12a;
 Wed, 12 May 2021 15:00:02 +0000 (UTC)
Received: by mail-oi1-x229.google.com with SMTP id x15so8749837oic.13
 for <xen-devel@lists.xenproject.org>; Wed, 12 May 2021 08:00:02 -0700 (PDT)
Received: from ceres ([2603:300b:7b5:c800:1cf6:4c9f:4e7:d116])
 by smtp.gmail.com with ESMTPSA id j16sm1975523otn.55.2021.05.12.08.00.01
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 May 2021 08:00:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a65c375-0828-4721-9293-a2fc480fd12a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=wAw2wQB4eQHn6TDwpGJK9D3eOJ6qhU0f1KoS9QvzkWM=;
        b=loF0pyMBQRqwDEvT7rYJtv4geglNOLVoskIvxa2FNrVpO6yh141PcdytI9szrSfwey
         NCPI0JXldCm1xdKJkvs+EMOqPOkxAKkkDLid9HXujR3z/uZXl+ytWem68OKayxMK44Wj
         r49klWLflnYXQbGE9YTknEBgQ8tGmEnmG4nu8p+onCvSzORtAu9xuX0vGBEDR4XV+J6V
         ycseUzZzYh4E4rFs59HMcd6G0Q4HVWH65WeFT1aN0bG4y1qoEFRwAhAE0H3EWmQxPLp7
         ANgiAje+NgLqt74hFxWMNSSR2jkx8nmW99vIxAYuPad+vUSTuGfLfNNXC7HWyjj4n/W4
         FV6w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=wAw2wQB4eQHn6TDwpGJK9D3eOJ6qhU0f1KoS9QvzkWM=;
        b=Lm0BkTfaYMvbF9FAclRtpviYRa1C87pLqDNE1/gbTB4ihgkGaHgfMnGSC0lXmnivAK
         cyekaUFnnPS1aBxrmXi5uKj7+FSBgk/EQ3+rY9WpESF+EbtixUp6B08bljsltQSAV+Lq
         Ws6A+UofVkBfbdsyJRkoNkZnXMVJN2mf//QNr1j/GKo2+Xr8OzbFW8OGNrl0FbokPD04
         yrSqyjrNrk1aWEYpQN1urEkNpqxilObWLHU220t8sEp3NDof1xcEnSlreHsplMa6Jdb/
         roF7GqGWyB2JHf8CRY/QUc/taQq4nyxM05NZqUqXY3dQS8M5wqqHNBijsBWVrA0L6PE9
         KF7A==
X-Gm-Message-State: AOAM530ptUo9JP8Zyi6lPAKJCRSp3x32csFuFGj1V/KHfHSo7zh2dVYy
	l2U2Iqcx+pHJpfr5J5846n8=
X-Google-Smtp-Source: ABdhPJz2ZIDSm8cD7EYMyfJiRk9PCaJLERgRQLj7veh6dDLN/mEn8xKYLB8FEo5gpWsqLM+ZGuN6Mw==
X-Received: by 2002:aca:db41:: with SMTP id s62mr7719836oig.167.1620831601894;
        Wed, 12 May 2021 08:00:01 -0700 (PDT)
Date: Wed, 12 May 2021 08:59:59 -0600
From: Connor Davis <connojdavis@gmail.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Mathias Nyman <mathias.nyman@intel.com>, xen-devel@lists.xenproject.org,
	linux-usb@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 3/3] usb: xhci: Notify xen when DbC is unsafe to use
Message-ID: <20210512145959.h6boyhrh2bvgx5iz@ceres>
References: <cover.1620776161.git.connojdavis@gmail.com>
 <2af7e7b8d569e94ab9c48039040ca69a8d52c89d.1620776161.git.connojdavis@gmail.com>
 <YJt9su1k67KEFh6K@kroah.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YJt9su1k67KEFh6K@kroah.com>

On May 12, 21, Greg Kroah-Hartman wrote:
> On Tue, May 11, 2021 at 06:18:21PM -0600, Connor Davis wrote:
> > When running as a dom0 guest on Xen, check if the USB3 debug
> > capability is enabled before xHCI reset, suspend, and resume. If it
> > is, call xen_dbgp_reset_prep() to notify Xen that it is unsafe to touch
> > MMIO registers until the next xen_dbgp_external_startup().
> >
> > This notification allows Xen to avoid undefined behavior resulting
> > from MMIO access when the host controller's CNR bit is set or when
> > the device transitions to D3hot.
> >
> > Signed-off-by: Connor Davis <connojdavis@gmail.com>
> > ---
> >  drivers/usb/host/xhci-dbgcap.h |  6 ++++
> >  drivers/usb/host/xhci.c        | 57 ++++++++++++++++++++++++++++++++++
> >  drivers/usb/host/xhci.h        |  1 +
> >  3 files changed, 64 insertions(+)
> >
> > diff --git a/drivers/usb/host/xhci-dbgcap.h b/drivers/usb/host/xhci-dbgcap.h
> > index c70b78d504eb..24784b82a840 100644
> > --- a/drivers/usb/host/xhci-dbgcap.h
> > +++ b/drivers/usb/host/xhci-dbgcap.h
> > @@ -227,4 +227,10 @@ static inline int xhci_dbc_resume(struct xhci_hcd *xhci)
> >  	return 0;
> >  }
> >  #endif /* CONFIG_USB_XHCI_DBGCAP */
> > +
> > +#ifdef CONFIG_XEN_DOM0
> > +int xen_dbgp_reset_prep(struct usb_hcd *hcd);
> > +int xen_dbgp_external_startup(struct usb_hcd *hcd);
> > +#endif /* CONFIG_XEN_DOM0 */
> > +
> >  #endif /* __LINUX_XHCI_DBGCAP_H */
> > diff --git a/drivers/usb/host/xhci.c b/drivers/usb/host/xhci.c
> > index ca9385d22f68..afe44169183f 100644
> > --- a/drivers/usb/host/xhci.c
> > +++ b/drivers/usb/host/xhci.c
> > @@ -37,6 +37,57 @@ static unsigned long long quirks;
> >  module_param(quirks, ullong, S_IRUGO);
> >  MODULE_PARM_DESC(quirks, "Bit flags for quirks to be enabled as default");
> >
> > +#ifdef CONFIG_XEN_DOM0
> > +#include <xen/xen.h>
>
> <snip>
>
> Can't this #ifdef stuff go into a .h file?
>

Yep, will clean that up in v2.

> thanks,
>
> greg k-h

Thanks,
Connor


From xen-devel-bounces@lists.xenproject.org Wed May 12 15:01:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 15:01:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126283.237726 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqMz-00063N-F2; Wed, 12 May 2021 15:01:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126283.237726; Wed, 12 May 2021 15:01:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqMz-00063G-B7; Wed, 12 May 2021 15:01:53 +0000
Received: by outflank-mailman (input) for mailman id 126283;
 Wed, 12 May 2021 15:01:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+k7y=KH=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lgqMy-000638-Qe
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 15:01:52 +0000
Received: from mail-oo1-xc2d.google.com (unknown [2607:f8b0:4864:20::c2d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c53055f-25e7-47e2-99be-8a84119394f6;
 Wed, 12 May 2021 15:01:52 +0000 (UTC)
Received: by mail-oo1-xc2d.google.com with SMTP id
 e7-20020a4ad2470000b02902088d0512ceso2175781oos.8
 for <xen-devel@lists.xenproject.org>; Wed, 12 May 2021 08:01:52 -0700 (PDT)
Received: from ceres ([2603:300b:7b5:c800:1cf6:4c9f:4e7:d116])
 by smtp.gmail.com with ESMTPSA id h184sm34435oia.1.2021.05.12.08.01.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Wed, 12 May 2021 08:01:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c53055f-25e7-47e2-99be-8a84119394f6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:in-reply-to;
        bh=UDD7PJOLLYReV+ki1w4vWVhaqVdnK70fMH/aRuBsi+U=;
        b=j3EdvE/qBJ2JgHC4pSAw8ptUM/hHYSlfqrJ2bFDGn2AgF1DuvwJnsQErvNFeapmoOd
         WjwGhq3dj825FTLXsKh1Y9BvNJuxB88Y9b89ERwOhntkFG2VFxwix0XuqvpuMkAUFqt4
         ZeskQ1YsAtuToGOXwdrcTTAJe1mpr9sVsXp+emzb8tXoNI2EDxzP9yTiqce+6RneGFO0
         VNRovDzDjhR4MOq45H8TRVrwsITF4/08VWL3lnmNzeXHLCoB6GGezkgw52AAnEi7iMYU
         OVhqRAqlOA2T2q78oVkxMizPE6DhFLwRDwU8C3pgADouxAKG/fP07vGY3pmNGJBNJEM+
         cDPg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=UDD7PJOLLYReV+ki1w4vWVhaqVdnK70fMH/aRuBsi+U=;
        b=kvsqIbQJ9GlJL9bZZ9st8nWczskjXvUQGNFlYpFYRBkHcmYlMyuQrZa8Igy1fmDvpT
         h+Y46LWl/O+8bqop0uH5ezZYvAV1KkdbOYvC9y84Q4eG1wSGq0NfkdsiAQm0+CR/KNx1
         0n5+20jjA2hbQ6DEdVfaVPndcTbfkZXKyv9Qse9IxeugSCpiQPrw4eLVlkFZMM0QWzVj
         y4DK5wFuf9YcU8H/t1dYbZwkPgCba5XmmozVblRCy5uODM2/w8I7uHkJL3y0fmyPzsb6
         k4h/DGAkz3CIf5b3ilALm9FYJpOFcgdCvYgYDmRoBs1QBugLRGDGRgvXn48Qr3dQ8UMk
         IZkQ==
X-Gm-Message-State: AOAM532UQBxodRYBkiX7r+Ydo+JzicDJZVn+W2RWsMfH9JNEnAyQtcif
	brf/1EXVbOIPFWcHTEqVkak=
X-Google-Smtp-Source: ABdhPJwIUfudclyVSUS4OIM+Wq5qc8wnqq6eG4UzeB9J9g6dbK3zWJSX32nnnjigVzYpJ7b4h3l69A==
X-Received: by 2002:a4a:98a4:: with SMTP id a33mr28096899ooj.21.1620831711875;
        Wed, 12 May 2021 08:01:51 -0700 (PDT)
Date: Wed, 12 May 2021 09:01:49 -0600
From: Connor Davis <connojdavis@gmail.com>
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 2/3] xen: Export dbgp functions when CONFIG_XEN_DOM0 is
 enabled
Message-ID: <20210512150149.nfzsgh3hnx7o7caf@ceres>
References: <cover.1620776161.git.connojdavis@gmail.com>
 <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
 <0ef85b32-4069-4e94-0a2f-2325cd21510f@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <0ef85b32-4069-4e94-0a2f-2325cd21510f@suse.com>

On May 12, 21, Juergen Gross wrote:
> On 12.05.21 02:18, Connor Davis wrote:
> > Export xen_dbgp_reset_prep and xen_dbgp_external_startup
> > when CONFIG_XEN_DOM0 is defined. This allows use of these symbols
> > even if CONFIG_EARLY_PRINTK_DBGP is defined.
> >
> > Signed-off-by: Connor Davis <connojdavis@gmail.com>
>
> Acked-by: Juergen Gross <jgross@suse.com>

Thank you.

- Connor



From xen-devel-bounces@lists.xenproject.org Wed May 12 15:04:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 15:04:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126289.237741 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqPc-0006xo-TG; Wed, 12 May 2021 15:04:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126289.237741; Wed, 12 May 2021 15:04:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqPc-0006xh-QA; Wed, 12 May 2021 15:04:36 +0000
Received: by outflank-mailman (input) for mailman id 126289;
 Wed, 12 May 2021 15:04:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oIkv=KH=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1lgqPb-0006xY-4q
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 15:04:35 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 44798e2b-f173-4139-8fc8-ff77527bbcd2;
 Wed, 12 May 2021 15:04:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44798e2b-f173-4139-8fc8-ff77527bbcd2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620831874;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=XfrM3QD/84DYsI1vmmrpIh9pwqCM+vWCmFicTrB9Mwk=;
  b=V8byvXzg2W31zjBX2LTAQc+7WwjzAcL+qI9GoQHxW8Pb5uEz09XjmaON
   xTkN5LPfanihK0E/6OJOsdk0WTHnBau6XGdIpHaRNDHj8OVYtF/4kxL3b
   qL84497wTuL4wgNMnHt8GhMIz2EBFWj74PRKzpTqZq2tjzZFUdlARy5kK
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44029392
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,293,1613451600"; 
   d="scan'208";a="44029392"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=MXdHCXaETK534/OomAp0UdL2Fay5N55ZAoKd2rSAQtBqiuC4WrTuezwEsMzK3ZF/DNZo5RtbZDUC/G4rToTGPvZtbHRSnDiMWeDx/WfxEy1eX4dgtbwKeCgZWFLvR+KyZt0VEResEJ4byYZywnWigg6nqzH/mEacJ2xqbe9SnyL00VEwRQiPuNzkTDq/kK/U+0ieROOTFdgOvnmQKsV9QrPz3vzxaHD/Ott0h9QX6X1BGapBIHLoLyYEeoxnHSD/muYpY8NIhY1Ox5t+g9dZfEQW9bcWAN8cp7yZwLos6G/f6kie3btBwTmaFE9j8i0s+v1txFeDonIC27vGp6RRCA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XfrM3QD/84DYsI1vmmrpIh9pwqCM+vWCmFicTrB9Mwk=;
 b=kOkit8Gii5bCESK2UrzBaykQ+s7hxRvYdXtSrQeDWEPFyl/lU19QTzzHrrpwl+rZvEuF/Ojb1+grf8mtb1qNuMVN2VbjSpEP273wW5MuLqwaDJlEaxiz82TFTjMwj/58NunN6iT0o5czip+gDISAvNP9iKpJa3KjRxDEl6Y9P0Ct+riTO9+Qc1ZGluPWBOF5JzltIhrR0dQVuCgnQJL5AUpGIy9ADM5N4xyKT0n8QVIyIfYQR5AeG2kOqFKpbOvXt5WXv08jXzwUZS4xF2UKlYf3nBq6ZMlDUzqpXsDxB9QFUZQJANGvwQCPANhO1EFgKyKkvgW7JdjfTwzjBRSBKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XfrM3QD/84DYsI1vmmrpIh9pwqCM+vWCmFicTrB9Mwk=;
 b=TN33zANNyETdtrN0TUNxkLeBlO7aVGW9F2HwoJIXP+wRTP6l9VcJOLY9hGPIBO8EDGIrpK9+nnACOx/wz0CG8OWyqGoqb0r93us0tTsgI4Obm7L+0pTWTQ9WB/JFERmeKF7tRXvh4ZsKomqJ+RZR9Z9+ywDD9mqADaHa5vaSbmk=
From: Edwin Torok <edvin.torok@citrix.com>
To: Andrew Cooper <Andrew.Cooper3@citrix.com>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: "jbeulich@suse.com" <jbeulich@suse.com>, "julien@xen.org"
	<julien@xen.org>, "jgross@suse.com" <jgross@suse.com>, "wl@xen.org"
	<wl@xen.org>, "iwj@xenproject.org" <iwj@xenproject.org>,
	"sstabellini@kernel.org" <sstabellini@kernel.org>, "dave@recoil.org"
	<dave@recoil.org>, George Dunlap <George.Dunlap@citrix.com>, Christian Lindig
	<christian.lindig@citrix.com>
Subject: Re: [PATCH v2 00/17] live update and gnttab patches
Thread-Topic: [PATCH v2 00/17] live update and gnttab patches
Thread-Index: AQHXRpBYbSP9+kXyy06RGaCma1VeS6retOQAgADsBACAACz3AIAAJTAA
Date: Wed, 12 May 2021 15:04:25 +0000
Message-ID: <a61829312384fa5cf3cd170dc97a12a55eed4598.camel@citrix.com>
References: <cover.1620755942.git.edvin.torok@citrix.com>
	 <c744d834-659a-e361-df97-128032402950@citrix.com>
	 <7c1a9a8b317fcbc778acaa218ee96e01d15b98d5.camel@citrix.com>
	 <bbd8ccf8-6bb4-7cc0-515d-1f14cd4404b7@citrix.com>
In-Reply-To: <bbd8ccf8-6bb4-7cc0-515d-1f14cd4404b7@citrix.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.36.4-0ubuntu1 
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 4995a421-bab7-4d19-f04e-08d915573b88
x-ms-traffictypediagnostic: SJ0PR03MB5902:
x-ms-exchange-transport-forked: True
x-microsoft-antispam-prvs: <SJ0PR03MB5902B87B188BED8A84370B689B529@SJ0PR03MB5902.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-ID: <853665380688AB439F8409152FAFF17C@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4995a421-bab7-4d19-f04e-08d915573b88
X-MS-Exchange-CrossTenant-originalarrivaltime: 12 May 2021 15:04:25.2215
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: dcXOIWB7ffVn2FeZz5kc/fNmC/TSKbq9JEtf17mqAvkQzobxLPZ85VejSW7/745IGywBBXic+DdLBgWBJFewcA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5902
X-OriginatorOrg: citrix.com

On Wed, 2021-05-12 at 13:51 +0100, Andrew Cooper wrote:
> On 12/05/2021 11:10, Edwin Torok wrote:
> > On Tue, 2021-05-11 at 21:05 +0100, Andrew Cooper wrote:
> > >
> > diff --git a/tools/ocaml/xenstored/disk.ml
> > b/tools/ocaml/xenstored/disk.ml
> > index 59794324e1..b7678af87f 100644
> > --- a/tools/ocaml/xenstored/disk.ml
> > +++ b/tools/ocaml/xenstored/disk.ml
> > @@ -176,7 +176,7 @@ let write store =
> >             output_byte ch i
> >
> >         let w32 ch v =
> > -           assert (v >= 0 && v <= 0xFFFF_FFFF);
> > +           assert (v >= 0 && Int64.of_int v <= 0xFFFF_FFFFL);
>
> In the case that v is 32 bits wide, it will underflow and fail the
> v >= 0 check, before the upcast to Int64.

I'll have to review the callers of this. I think my intention was to
forbid dumping negative values because it is ambiguous what they mean.
In case you are running on 64-bit, that is most likely a bug, because I
think most 32-bit values were defined as unsigned in the migration spec
or in the original xen public headers (I'll have to double check).

However, in the case of a 32-bit system we can have negative values
where an otherwise unsigned 32-bit quantity in xen is represented as an
ocaml int, or even silently truncated (if the xen value actually uses
all 32 bits, because OCaml ints are only 31 bits on 32-bit systems; one
would have to use the int32 type to get true 32-bit quantities in
ocaml, but that comes with additional boxing and a more complicated
syntax, so in most places in Xen I see the difference just being
ignored).

Perhaps this should forbid negative values only on 64-bit systems
(where that would be a bug), and allow them on 32-bit systems (where a
negative value might be legitimate or a bug, we can't tell).
Checking Sys.word_size should tell us what system we are on.

Best regards,
--Edwin


From xen-devel-bounces@lists.xenproject.org Wed May 12 15:08:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 15:08:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126295.237753 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqTF-0007d3-E3; Wed, 12 May 2021 15:08:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126295.237753; Wed, 12 May 2021 15:08:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqTF-0007cw-9y; Wed, 12 May 2021 15:08:21 +0000
Received: by outflank-mailman (input) for mailman id 126295;
 Wed, 12 May 2021 15:08:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TBJP=KH=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lgqTD-0007cp-Va
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 15:08:20 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd1d8d1c-b138-4ef0-9700-d3ab6f333129;
 Wed, 12 May 2021 15:08:18 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.8 AUTH)
 with ESMTPSA id N048d9x4CF8C216
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 12 May 2021 17:08:12 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd1d8d1c-b138-4ef0-9700-d3ab6f333129
ARC-Seal: i=1; a=rsa-sha256; t=1620832093; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=rJSs8nGgoaowMvCgVdo82zxzhgZl0oYnpI83lNahm981v31dZPoyjFVZjTg4DxeNza
    ZtwnmBF/5zdDFfWLQLMZ3cEUq2luXR03VI+RN34GbaT3DZZpmubJ3gJXtVodvduiZdFH
    Uz7wFqFjJDFiIHp/tuRg7Q0KecGpTxncNTeX06o9+7Tkh7azD6TgTKBcQ+dRmeAdcV8n
    +89F6BZzWOql8/0ODFSZmLMJl7hJxKsglvFA6ef/5BJk6Vqp7fbDlpJczeSdRE8uCvJ/
    2zYlu5UJfoCj1dQKeBVN088mpa0b+iTAmoUOzip5KEqFhnDZSxoXuBntA+YqbWZUzsgH
    4ATQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620832093;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=cfY2t/Q352tHbkiQrn0BpR6TvhyLbPsyWfDitKNkVHQ=;
    b=M6iuIuWvGpNwGdiueBYazHjU+x799Z5B1A4aol/bHQLhz9YM2/iN/l1kwjyvfCZBjr
    l+KyOku/ydf/+Pn8/NiSiNHamUPqESeNAPrayvevGd7mnW3vcDWOabBSltpHDmcCSgvI
    gpsP3IQzlRyL2Zs68LEX1XpADlDc6Nu+mLj2cJF5/pp3qf+kg1ppOK0UqSlbLBhYfjgB
    EBctF/CVkrRShTZ9Zd4sXcs8SydxRsnhUupuIbfUbQpMg+2n8lqdBg1fihfxuYOyyi3c
    jL8AnYDfN1FJU3dxfB3u8sQeQMta0JE0GTuhz35I8ts/4onvvgXOOcB2XD8SaZ8DEcil
    GEAg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620832093;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=cfY2t/Q352tHbkiQrn0BpR6TvhyLbPsyWfDitKNkVHQ=;
    b=B3yQsgbabn1EYEFUxDX6yvdKLlnwXP13W0xB0bTOGuVf7HgB8FvJWq5NQ0VwuAxs3b
    o9OHrL1ltGYD1WYg2x5nEGIEUPN+BSm12PfaGtrb1v54Se0jefOaZVP+ccReF4FczHA9
    MdT0JG+XIw3vmczzOz0O3gcUfZ8w7ACZpjDdJDdR6MY7AyaOW13hgncmZFBAmHisdTZa
    IZ7BHxr2YkoFe6SOEsgbPeWz96m/TvN9T3KbF4QvFzThcA5mfaWktjFuJhJUMsGL7DOt
    8Ol3iid8q6PeL2pu9njxS4ZPtmYHcl6my83VqFbUZqsDKilx4SEKWnBUWZJ1C31dAsRK
    xABQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Wed, 12 May 2021 17:07:59 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>
Subject: Re: [PATCH v2] tools: remove unused sysconfig variable
 XENSTORED_ROOTDIR
Message-ID: <20210512170759.15c7a3cf.olaf@aepfle.de>
In-Reply-To: <fe7ccfe8-967e-ed12-5804-590fd9663608@citrix.com>
References: <20210506151631.1227-1-olaf@aepfle.de>
	<a236f079-1771-7808-bb16-97b9dc5ed733@citrix.com>
	<fe7ccfe8-967e-ed12-5804-590fd9663608@citrix.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/CeSp98GfloKO1uqBbFogruP";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/CeSp98GfloKO1uqBbFogruP
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Wed, 12 May 2021 15:52:16 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> Olaf: View on the above?

I'm fine with the additional CHANGELOG.md change.
I thought the suggested addition is obvious.

Olaf

--Sig_/CeSp98GfloKO1uqBbFogruP
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCb708ACgkQ86SN7mm1
DoBUnw/+LHeLr8d2lFl3nGOISVN5hU0GDYBtuAm5LN0ALoKkqiL8K2Rv2ce5HOHV
MrEHFnycPUlZk4tcgXKjiTsN8xK647tKyBuASa4+nFRdFIofzih6CJpZRgxXQ6qh
6Js3dyGrwUDO9pHfWoX5ITiidlAy+2IWvWhhTW2LqRZIjB5s31FT//ifPDdfmlGG
x3r+hBe34dw/l70aVnjjZrtMkQcM2Y7W6m9GSRlWu1cYGyrhRm6ArK6S77u9uvU0
/LQV8CdITbSpAovPGpzcTj7FYUQXEoFYHuJ4K+HXM9gF6mQH05+/qh1TXcxXd5YK
rSW7MoMSDH62OsqcocJSlAx7ajunEyfp3R2SAMYX14CJnqUsk85buuJQC+O7cklL
P2AuhZroHRqPcJvgnCQnERxnL+e6/aIk6gv3huWpV9hzdXREHmAJgJNvnVSbRuPI
f/cuZw8JOZtGXTi9tC1IcEvzwPzPzpBNLXFtEu3nfPW+PyjLFXdGUhgcfUsPzdSu
i+zcEI3WSEusPemPgcqq1oeh6vbqSbOgSVoZFOjeGYlOhjKqoY5rV7TZHuEZoZn5
Zsx1IXXRWcU6TMmQ3ko4aefvebR92wnGvZgkdg8erj1ZAYqdQZSWRJp6kXhyYftt
yXJlcE3cAmBds+zNskIzsKcSQGqFOx0xq8Y1J0Lj/UGF8nxqii0=
=piAE
-----END PGP SIGNATURE-----

--Sig_/CeSp98GfloKO1uqBbFogruP--


From xen-devel-bounces@lists.xenproject.org Wed May 12 15:18:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 15:18:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126304.237765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqdS-0000wr-H1; Wed, 12 May 2021 15:18:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126304.237765; Wed, 12 May 2021 15:18:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgqdS-0000wk-Du; Wed, 12 May 2021 15:18:54 +0000
Received: by outflank-mailman (input) for mailman id 126304;
 Wed, 12 May 2021 15:18:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kryu=KH=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lgqdQ-0000wd-Ml
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 15:18:52 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a13559d2-1dcc-4c39-9e02-8759909ada77;
 Wed, 12 May 2021 15:18:50 +0000 (UTC)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14CF7Pol031093; Wed, 12 May 2021 15:18:48 GMT
Received: from oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 38f5a60rat-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 12 May 2021 15:18:48 +0000
Received: from userp3020.oracle.com (userp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14CFIlGB021347;
 Wed, 12 May 2021 15:18:47 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2100.outbound.protection.outlook.com [104.47.70.100])
 by userp3020.oracle.com with ESMTP id 38fh3ybtfp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 12 May 2021 15:18:46 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BL0PR10MB3443.namprd10.prod.outlook.com (2603:10b6:208:74::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 12 May
 2021 15:18:45 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4129.025; Wed, 12 May 2021
 15:18:45 +0000
Received: from [10.197.176.85] (138.3.201.21) by
 SN4PR0501CA0072.namprd05.prod.outlook.com (2603:10b6:803:41::49) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.11 via Frontend
 Transport; Wed, 12 May 2021 15:18:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a13559d2-1dcc-4c39-9e02-8759909ada77
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=A5QowtbcR1EFbr4jpvs/s8X6mu2U4K9MMKXEXbSnIfY=;
 b=CV5HzRJtadXrBrhIh2neR4k0LbEbza6IDJ3q3zAZA+h5wrwY/6IgUPMPB6Np2xNk6/NX
 bMIZZa/UOoap6Jk6S4s/NJ5fOc5g5CaF1lwlaVFzbDHq7SymbGurWxg/AI699C2DXjpj
 0PdD/ZI8Yz6r3uCr5q3t+/HFgCYsWoR/4y5I/lxORITiR0enDxxjC9yTZDRBInZP3hhr
 3J3tUj8pLIZJCrxpena80mHCGFi4+fJd2U0dBbomDwlU8HMQoL0xbOtTStB4oTcijJbQ
 0kg8PYXuF78+XnUtZurt5aHDLimvSot5qgyfIo6KpgXTdNnHrNhard+mEVep9Z5VqFLS nA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QaiSQmqY1GmkLz0tRlKXhx4kXJ5+pkXvwzRAnQnYuoRihVIJeCQW+qduh7u2Z3WaoZGEQZ/qOEPhxQK4COS5ykmANT24fOQohAmyKnDTif9oQ3Iy9XpDL8nXem6CWuULVNP7RH0VM/J4hWkqD989+XTgsC1yOF6pLl46TwWsUAHT0bdJDALf9N+RYH3+lUGx3Jz/2SRnOmyIW+vob2Zpvls14CE+C7cAvSj2ZXyXTy9Muu7/qxsebmUU2wBMEXvYDIswmypm8xdBq7x0hO2FZD7lhzfOwxGnddmIRmxYSSl7EGNKASU6X07q0LupzRc+1izyHnIi3UmEunMAgXnDOg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A5QowtbcR1EFbr4jpvs/s8X6mu2U4K9MMKXEXbSnIfY=;
 b=EaH/wr08aKwsC7AKEaIbUSUF5ehfiq7qfTYT9A/MpUI231OS/SsHGhUArbgROMg5FXSR+rAa/T8HVttDQNewGiFHypJkV9nBopq1fqJ5IQJQ33uV/d41guX/IxKj57FEOI3sD4tkOf9oI2YLTV+UI2a7L6faQC672fCGcIQUusN0PsgB45AJw0XEnurXt6CeV4/SejZM0MNlcZqWeZQcI26UdJLS5w3mozy7PyV9RpSCwKGhRKXDlzKIQqX1soNuofIAQLzDeZturOQbOqE0Cj4kHxvYK+oOEl+I2GjPBRG1aixpPa2nqZco4gDFTPZjoQpdo+GfQspVGAE4oA/3BQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=A5QowtbcR1EFbr4jpvs/s8X6mu2U4K9MMKXEXbSnIfY=;
 b=gzCx3bgUPUnIwe5uspjctbwN2upCnhAL8NpP501GhqfirTuPVGOMqXMrNHXKizysX7NkBeBundtZseOvk0ue+Uk1CuMElLIIsgz/JQRVtFHlcowZ3c0lmiLryv+Y56YLXuSLlmUbBpEE1NGc/pH0cYwO6Le7CsPBcaTbiHEZbK8=
Authentication-Results: vger.kernel.org; dkim=none (message not signed)
 header.d=none;vger.kernel.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH 2/3] xen: Export dbgp functions when CONFIG_XEN_DOM0 is
 enabled
To: Connor Davis <connojdavis@gmail.com>
Cc: Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <cover.1620776161.git.connojdavis@gmail.com>
 <291659390aff63df7c071367ad4932bf41e11aef.1620776161.git.connojdavis@gmail.com>
 <0c1d6722-138f-62e7-03b3-a644e36d20a5@oracle.com>
 <20210512145831.gxmmlimkmnnb6zyc@ceres>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <6800c23b-7576-70c4-4862-6f84f23eaed5@oracle.com>
Date: Wed, 12 May 2021 11:18:41 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <20210512145831.gxmmlimkmnnb6zyc@ceres>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.201.21]
X-ClientProxiedBy: SN4PR0501CA0072.namprd05.prod.outlook.com
 (2603:10b6:803:41::49) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7926897e-03b0-466d-e901-08d915593bc5
X-MS-TrafficTypeDiagnostic: BL0PR10MB3443:
X-Microsoft-Antispam-PRVS: 
	<BL0PR10MB344341BDC6389721CF69D7F48A529@BL0PR10MB3443.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:590;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 7926897e-03b0-466d-e901-08d915593bc5
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2021 15:18:44.9779
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: mqM1/F3eS98XM6GstrYsAtEJDpF8+xMFZo7bLnPq35pPQEaY5GV1gnixcJ+btr7+fxpjErkvmB8bACmLP4xno0wKqoer0njcWtXvp9hZjlQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR10MB3443
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9982 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 spamscore=0 phishscore=0 suspectscore=0
 malwarescore=0 adultscore=0 bulkscore=0 mlxlogscore=999 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105120098
X-Proofpoint-ORIG-GUID: QpEeMP2p3zBWbcC64QGY_KcmXi-z-wwp
X-Proofpoint-GUID: QpEeMP2p3zBWbcC64QGY_KcmXi-z-wwp


On 5/12/21 10:58 AM, Connor Davis wrote:
> On May 12, 21, Boris Ostrovsky wrote:
>> Unrelated to your patch but since you are fixing things around that file --- should we return -EPERM in xen_dbgp_op() when !xen_initial_domain()?
> Yeah looks like it. That would make patch 3 simpler too.
> Do you want me to add a patch that fixes that up?


Yes, please.


-boris



From xen-devel-bounces@lists.xenproject.org Wed May 12 15:51:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 15:51:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126309.237781 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgr9I-0005NB-71; Wed, 12 May 2021 15:51:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126309.237781; Wed, 12 May 2021 15:51:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgr9I-0005N4-3t; Wed, 12 May 2021 15:51:48 +0000
Received: by outflank-mailman (input) for mailman id 126309;
 Wed, 12 May 2021 15:51:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XikZ=KH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lgr9G-0005My-OG
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 15:51:46 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1d972e5f-f83f-4e9b-8223-ac5c5129d1d8;
 Wed, 12 May 2021 15:51:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d972e5f-f83f-4e9b-8223-ac5c5129d1d8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620834705;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=CbionX3ENd9syjU5+XDv3gDrWj42CrUz0zGawsll0mw=;
  b=G0u+cPVjdcf4m/PLASX1/1pk8K4GUSX94AL+fH3zRD7JeA6ww5R3uYQk
   sYZ9Wi7K6XOlS3g7bo2gGbXSXfKHz7/r8DFMMeRZRAap6BlFOkdOeNSj/
   bhxC33HyC/0MdhfyQsWIathoSxtY5nKXbTh84t/Emgm/7e3/1VOLHwRQC
   k=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 43755503
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,293,1613451600"; 
   d="scan'208";a="43755503"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=NaGFIUzeB7EhopnIoPcjx3KV5+NMgyfUvAUVnXr2Zy+C0HbUIdTZBAHk3MELm8TcwWifSgxYl1/OBGmfw1rZ+qwR7GR9nvWJijvhEbh/23ptuw7PrrXK4Qb+2RX8oLp7CuB7b4lswYp7c/eyfPGA2+D2wUBL0IdojqHB0r0jdula27Tl0WH19kQxoG9/hPZTo/FXg8wULQnDO72bEAncN7ngxabu599IqNDQZKy1LBz9GwOba8SZ3kfxUcXidnE7t8Y5h7kYQ/FQudRdlDhNobNolSwuozeasPg5YYCPsjGw8rGzs0ew/y56HXQGqe802/1Ipb864lKd04TeFEsvxA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CbionX3ENd9syjU5+XDv3gDrWj42CrUz0zGawsll0mw=;
 b=gdXbHoMiUOT5Mg9gnNTAqwTFui1NYIIWMakvgq77LHkRtg9JwSNJciuYgdgYoPhshoUwWsimtG08YUZZyrtjSOz5fIuWT24VpZqs+pEQAhYRBh9KqPoDzBkbC5AL4Ak1Z5xzhAFoRAaC74Kp6011g7v730lXZse3Cz0WYIRpaQe6uWElkqFdMjPzFu1dE6/tVh+PY2sav3h5cYszSftPGmwGHO7IRPr6Vf2axG3Jy6GO71dTnohf2eRPEQQ0KIg9JUtglnRxrOykZFjTrBZjKyYpO40oZERoMLXWd2kqCwkTlw8Pd1WPatKofYP8JV+tnwuwve5I3xrxq9B5opRlJg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CbionX3ENd9syjU5+XDv3gDrWj42CrUz0zGawsll0mw=;
 b=vaulAe5/T4ZodAUAVtXciecJMbvlpXFQbsiBZz4fmt2lniDB45j6H7gWQai+18d3k9ZUNN60gT2XEyfs7CCxqbGHDBiN/MIk07UQawnaA4N139EVb1BPaq4ljbx7pDzfeMkrnPt/Y7rOauxytgQwAe0ORcP0fiF31R7bENVXUxw=
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, "Wei
 Liu" <wl@xen.org>
References: <20210506151631.1227-1-olaf@aepfle.de>
 <a236f079-1771-7808-bb16-97b9dc5ed733@citrix.com>
 <fe7ccfe8-967e-ed12-5804-590fd9663608@citrix.com>
 <20210512170759.15c7a3cf.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2] tools: remove unused sysconfig variable
 XENSTORED_ROOTDIR
Message-ID: <fd9a2bbb-09e4-468b-f718-47e149acf011@citrix.com>
Date: Wed, 12 May 2021 16:24:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210512170759.15c7a3cf.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0470.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:1a8::7) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d9aeba5f-02ef-4640-a3c9-08d9155a1b7b
X-MS-TrafficTypeDiagnostic: BYAPR03MB4422:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4422153ADD7471388ECA84C6BA529@BYAPR03MB4422.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: d9aeba5f-02ef-4640-a3c9-08d9155a1b7b
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2021 15:25:00.3262
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5uuIJDLjZwBmU1g1LzsZ88zM+c6ezzOrm0LU3BA/guW/f5HRJ8VkbGo8eWULE9yRDMlt7+fvcaB785uoca0i/pI7UHRTIMWipfhyTK38Clg=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4422
X-OriginatorOrg: citrix.com

On 12/05/2021 16:07, Olaf Hering wrote:
> Am Wed, 12 May 2021 15:52:16 +0100
> schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
>
>> Olaf: View on the above?
> I'm fine with the additional CHANGELOG.md change.
> I thought the suggested addition was obvious.

Thanks, but as I'm folding it into your patch, I shouldn't do it
unilaterally without someone else saying ok.

As it happens, Wei offered his A-by on IRC for the change, so I'll go
ahead as suggested.

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 12 16:48:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 16:48:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126314.237792 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgs1l-0003NG-Aa; Wed, 12 May 2021 16:48:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126314.237792; Wed, 12 May 2021 16:48:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgs1l-0003N9-7R; Wed, 12 May 2021 16:48:05 +0000
Received: by outflank-mailman (input) for mailman id 126314;
 Wed, 12 May 2021 16:48:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=F0FV=KH=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lgs1k-0003N3-2y
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 16:48:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 50807ee1-380d-46ec-8392-41edfec7e43a;
 Wed, 12 May 2021 16:48:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 01685B148;
 Wed, 12 May 2021 16:48:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 50807ee1-380d-46ec-8392-41edfec7e43a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620838082; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=20PFQ0WahlPjBf4V0QLtvAJuN8ggnfjR+YmNs21FlUs=;
	b=KSaojBZAv6QL6z8bbdHkOBHaT7gzzBz8CKCmmAMgbasuQUvm6HTaufvR+ynqSWVNY6xl7H
	erUk4Zo7tv8Z/wUUmyd9P/wf8wHKLIOESLxmNQBah+ROXdJsSBrrfAC+egdtadr3gn9wFl
	30mdm0/15TwjV7aDuTu/yYOAJBmSw9Q=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>
Subject: [PATCH] include/public: add RING_RESPONSE_PROD_OVERFLOW macro
Date: Wed, 12 May 2021 18:48:00 +0200
Message-Id: <20210512164800.26236-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new RING_RESPONSE_PROD_OVERFLOW() macro to allow detection of an
ill-behaved backend tampering with the response producer index.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 xen/include/public/io/ring.h | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/xen/include/public/io/ring.h b/xen/include/public/io/ring.h
index 0b08b2697e..c486c457e0 100644
--- a/xen/include/public/io/ring.h
+++ b/xen/include/public/io/ring.h
@@ -259,6 +259,10 @@ typedef struct __name##_back_ring __name##_back_ring_t
 #define RING_REQUEST_PROD_OVERFLOW(_r, _prod)                           \
     (((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r))
 
+/* Ill-behaved backend determination: Can there be this many responses? */
+#define RING_RESPONSE_PROD_OVERFLOW(_r, _prod)                          \
+    (((_prod) - (_r)->rsp_cons) > RING_SIZE(_r))
+
 #define RING_PUSH_REQUESTS(_r) do {                                     \
     xen_wmb(); /* back sees requests /before/ updated producer index */ \
     (_r)->sring->req_prod = (_r)->req_prod_pvt;                         \
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Wed May 12 16:58:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 16:58:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126318.237805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgsBn-0004yz-AA; Wed, 12 May 2021 16:58:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126318.237805; Wed, 12 May 2021 16:58:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgsBn-0004ys-6o; Wed, 12 May 2021 16:58:27 +0000
Received: by outflank-mailman (input) for mailman id 126318;
 Wed, 12 May 2021 16:58:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgsBl-0004yi-Px; Wed, 12 May 2021 16:58:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgsBl-0007Tj-Ef; Wed, 12 May 2021 16:58:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgsBl-0003YK-0i; Wed, 12 May 2021 16:58:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgsBl-0007ss-09; Wed, 12 May 2021 16:58:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8nanqNXXevkCHY0qtICF+ZvgIoYehDmlsAXOJEzLdgY=; b=sPGG+wl4ampZPuLEy15CIjDnJU
	OY32SZwzpZS7tpjTOU6dQ983GKmCqRMzGvDVAlVGyaXlpKHCHt++nc9pN7gWBDLHhR84OTfnHwAYj
	dD0++symWIp4ATrHj456yUHSQBHaIGvHpmbfUVwGIWQZjDtFhaFv6YRXNlpmYatBVTPs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161911-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161911: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=88b06399c9c766c283e070b022b5ceafa4f63f19
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 16:58:25 +0000

flight 161911 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161911/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                88b06399c9c766c283e070b022b5ceafa4f63f19
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  284 days
Failing since        152366  2020-08-01 20:49:34 Z  283 days  475 attempts
Testing same since   161911  2021-05-12 01:57:04 Z    0 days    1 attempts

------------------------------------------------------------
6042 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1639498 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 12 17:36:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 17:36:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126329.237820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgsm2-0001Hr-EO; Wed, 12 May 2021 17:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126329.237820; Wed, 12 May 2021 17:35:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgsm2-0001Hk-Bb; Wed, 12 May 2021 17:35:54 +0000
Received: by outflank-mailman (input) for mailman id 126329;
 Wed, 12 May 2021 17:35:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgsm1-0001He-Mo
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 17:35:53 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgsm1-00087v-5g; Wed, 12 May 2021 17:35:53 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgsm0-0000Tq-Se; Wed, 12 May 2021 17:35:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=OA06v4DNmwymHkROXKnzWmQ6sFacjCJ8GkVTi9Nj6H8=; b=aqbMCoMQnkevTLAg16qekpF+av
	6R6i2Fc4crbn/MYpl/RqhmM/8DaIzyrQLo9c4Vuz1MizPmZQ9r7Ch77AneJIDgncEW5p3giJEX2o6
	SAYA+w8A45Uav1g04ucSY3e4Ye7IJ2XblJzkro5ya3e8egvTz2s1RxJHlrPvMVjkVXCs=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: michal.orzel@arm.com,
	Julien Grall <jgrall@amazon.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: [PATCH] xen/arm: gic-v3: Add missing breaks in gicv3_read_apr()
Date: Wed, 12 May 2021 18:35:48 +0100
Message-Id: <20210512173548.27244-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Commit 78e67c99eb3f "arm/gic: Get rid of READ/WRITE_SYSREG32"
mistakenly converted all the cases in gicv3_read_apr() to fall-through.

Rather than re-instating a return per case, add the missing breaks and
keep a single return at the end of the function.

Fixes: 78e67c99eb3f ("arm/gic: Get rid of READ/WRITE_SYSREG32")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 xen/arch/arm/gic-v3.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
index b86f04058947..9a3a175ad7d2 100644
--- a/xen/arch/arm/gic-v3.c
+++ b/xen/arch/arm/gic-v3.c
@@ -1167,12 +1167,15 @@ static unsigned int gicv3_read_apr(int apr_reg)
     case 0:
         ASSERT(gicv3.nr_priorities > 4 && gicv3.nr_priorities < 8);
         apr = READ_SYSREG(ICH_AP1R0_EL2);
+        break;
     case 1:
         ASSERT(gicv3.nr_priorities > 5 && gicv3.nr_priorities < 8);
         apr = READ_SYSREG(ICH_AP1R1_EL2);
+        break;
     case 2:
         ASSERT(gicv3.nr_priorities > 6 && gicv3.nr_priorities < 8);
         apr = READ_SYSREG(ICH_AP1R2_EL2);
+        break;
     default:
         BUG();
     }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 18:00:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 18:00:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126336.237832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgt9M-0003yL-Eb; Wed, 12 May 2021 18:00:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126336.237832; Wed, 12 May 2021 18:00:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgt9M-0003yE-BA; Wed, 12 May 2021 18:00:00 +0000
Received: by outflank-mailman (input) for mailman id 126336;
 Wed, 12 May 2021 17:59:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgt9L-0003y8-Eq
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 17:59:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgt9J-0008V9-DG; Wed, 12 May 2021 17:59:57 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgt9J-0002bn-6p; Wed, 12 May 2021 17:59:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=KT58rG2761s/1fbXHR2Y0JpNluKb0wXvhFcj2WeMN2Q=; b=Y9mBI3NLI4LvJkJzbW1a7CIrju
	Qu4zqDDbmNqWWZYyx+tE2IknS3sT9hebLW5c3XBxw487AjLnK53MWySY/Q3hIB1IKlFL6TfSmPQXx
	xaHk/tgOrf44AlcZ3AmsY5UALXjt7h2T5CxEook70RK+wDhBYrqCmEyHE6k1JGaxDxDY=;
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Michal Orzel <michal.orzel@arm.com>, Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com, xen-devel@lists.xenproject.org
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
 <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
Date: Wed, 12 May 2021 18:59:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi,

On 11/05/2021 07:37, Michal Orzel wrote:
> On 05.05.2021 10:00, Jan Beulich wrote:
>> On 05.05.2021 09:43, Michal Orzel wrote:
>>> --- a/xen/include/public/arch-arm.h
>>> +++ b/xen/include/public/arch-arm.h
>>> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>>>   
>>>       /* Return address and mode */
>>>       __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
>>> -    uint32_t cpsr;                              /* SPSR_EL2 */
>>> +    uint64_t cpsr;                              /* SPSR_EL2 */
>>>   
>>>       union {
>>> -        uint32_t spsr_el1;       /* AArch64 */
>>> +        uint64_t spsr_el1;       /* AArch64 */
>>>           uint32_t spsr_svc;       /* AArch32 */
>>>       };
>>
>> This change affects, besides domctl, also default_initialise_vcpu(),
>> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
>> only allows two unrelated VCPUOP_* to pass, but then I don't
>> understand why arch_initialise_vcpu() doesn't simply return e.g.
>> -EOPNOTSUPP. Hence I suspect I'm missing something.

I think it was simply overlooked when reviewing the following commit:

commit 192df6f9122ddebc21d0a632c10da3453aeee1c2
Author: Roger Pau Monné <roger.pau@citrix.com>
Date:   Tue Dec 15 14:12:32 2015 +0100

     x86: allow HVM guests to use hypercalls to bring up vCPUs

     Allow the usage of the VCPUOP_initialise, VCPUOP_up, VCPUOP_down,
     VCPUOP_is_up, VCPUOP_get_physid and VCPUOP_send_nmi hypercalls from HVM
     guests.

     This patch introduces a new structure (vcpu_hvm_context) that should
     be used in conjunction with the VCPUOP_initialise hypercall in order
     to initialize vCPUs for HVM guests.

     Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
     Reviewed-by: Jan Beulich <jbeulich@suse.com>
     Acked-by: Ian Campbell <ian.campbell@citrix.com>

On Arm, the structure vcpu_guest_context is not exposed outside of Xen
and the tools. Interestingly, vcpu_guest_core_regs is, but it should only
be used within vcpu_guest_context.

So, as this is not part of the stable ABI, it is fine to break it.

>>
> I agree that do_arm_vcpu_op only allows two VCPUOP* to pass, and that
> calling arch_initialise_vcpu in the case of VCPUOP_initialise makes no
> sense, as VCPUOP_initialise is not supported on Arm. It would make sense
> for it to return -EOPNOTSUPP.
> However, do_arm_vcpu_op will not accept VCPUOP_initialise and will return
> -EINVAL, so arch_initialise_vcpu for Arm will never be called.
> Do you think that changing this behaviour so that arch_initialise_vcpu
> returns -EOPNOTSUPP should be part of this patch?

I think this change is unrelated, so it should be handled in a follow-up
patch.

If you are taking care of this, would you mind also looking at moving
struct vcpu_guest_core_regs within the #if defined(__XEN__) ||
defined(__XEN_TOOLS__) block?

I will attempt to do a proper review of this patch by the end of the week.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 12 18:15:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 18:15:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126343.237844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgtO3-0006i9-Q6; Wed, 12 May 2021 18:15:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126343.237844; Wed, 12 May 2021 18:15:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgtO3-0006i2-Mn; Wed, 12 May 2021 18:15:11 +0000
Received: by outflank-mailman (input) for mailman id 126343;
 Wed, 12 May 2021 18:15:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XikZ=KH=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lgtO1-0006hw-Sm
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 18:15:10 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dc95ab92-7a81-4758-a4d2-37b11ddf6708;
 Wed, 12 May 2021 18:15:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc95ab92-7a81-4758-a4d2-37b11ddf6708
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620843308;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=amc0m8q2dyYx5RpIF9dBjoPSkDmKLQmozHAEGQbqqiI=;
  b=fA0CCT1g72s/02bTgSexVt59+DCFJ9qFIMugPwEckxKku+tYqLizch6C
   G0liBELZm9xTlK835JYs4Ei+4Z88YAmk9qC+n4CVRKi9cKk8+WJFVPRca
   t2W+PKl89wKT4uuXl8N0P1oZZV4m5OHYligpIRF8CobfG/N3wrOsIAYgt
   c=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: J7NEQ0U3qCLA/sDNthOJizJTLrUQgYDBuVl3kpT019BADEa850BdP6Xe3sl28twq0Ob9pNf5Yp
 jt5/N9hvk0mh/SAKwmVdE7LKwmlmo21wuX8bRLPX1U6iUTAUyYIIFT07VTJLMtBLLs6/ykLeFe
 bZia3wYeKMq/R5+Qf28/SEFe0Vw/Ko38+m3v/+iIrgzJzNtaIyW4eHS2Af21Y/LecaIstiTyq0
 tO/Sh7l6swIt8Eb8MvZ2VXxojCo4PIrYpsDKsbXU4C/OjG8OqXDDLV/L+YpdvghMjwnD+wDzSt
 +eA=
X-SBRS: 5.1
X-MesageID: 43655362
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:OyGE3KPbx5qsZsBcTgWjsMiBIKoaSvp037BK7S1MoH1uA6mlfq
 WV9sjzuiWatN98Yh8dcLO7Scu9qBHnlaKdiLN5VduftWHd01dAR7sSjrcKrQeAJ8X/nNQtr5
 uJccJFeaDN5Y4Rt7eH3OG6eexQv+Vu6MqT9IPjJ+8Gd3ATV0lnhT0JbTqzIwlNayRtI4E2L5
 aY7tovnUvaRZxGBv7LYEXsRoL41qT2qK4=
X-IronPort-AV: E=Sophos;i="5.82,293,1613451600"; 
   d="scan'208";a="43655362"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=clv4At08S1nQo9WRiSA98ub0RLEvyiMSVRDvAKJl5r2/jMSc3H+ykJg/hUl+X74In5bKct1rOWAkvduJSBHFbD1P4G/Js2dLcd6hUaei2jyljLcFZHzRiX+aW4YrophNtxdf77hUEsA3kV/kFUXVcdmcwIH0nvjpq4UKLTElm2jlu/77c5bgQfFD6Xn26KL7sbUBM6bU/tzLtx7UXzIcdcS6lygUFoSKOlKaYhGEjOKE01P+Z1E/Er3p4QMsDn1NYRGCQ+GU7Dd+kEIjgTN/aCUr/CpCf8bFboAOJebGIKml0xmaxy5n39i/Ng5VhJlivWcJ10hwKeqV7IK/PBC6hA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=amc0m8q2dyYx5RpIF9dBjoPSkDmKLQmozHAEGQbqqiI=;
 b=oWMJuYVBHanFFpJ9xAe2UNt00Hpg6VWPZxnuxO8Mp6edYS+cXfqaByAQk4v9t2JGekhmGygSDnpLAUKIxS6HXMDlOJhTPuzC9KmEyw0jG4+5xQYRHKAYkmpyuRr88wPq7snxGsVYh0t2qoFMPXzuxo3lAm7jKNIzBVq4wCa5wA14NzS0fuY6ahUAhfZWT5xxW/faibRXzLoPKagMNb6cAspwgoo1Gdbmj0zK2LVyUSmN75nqP05HLME4I5SFo6SPsdHKoh0C9jQGVXzNqKtMFKBVcYeiWC2y9rWcFSl6wJrr5XXpuIMiihinLxIj4kqTotA3IbXkQVziVODGl4vDMQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=amc0m8q2dyYx5RpIF9dBjoPSkDmKLQmozHAEGQbqqiI=;
 b=AK0pmRSrbd6t+8pj5c7MeD/wwTpu8/9UFftqjoFSLMyFVwtjYCbfK8uRNMrMGCoXi6LyuHW+n+5ese5Ih58uqtASsNEr6WusgrWJKj/7pNVYNED1lm4mb2ZALq3qGB2lBEu6ZR7VP5Ibaab23USTtgIC1U35nhLFtx/TkKMoL2o=
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@arm.com>, "Jan
 Beulich" <jbeulich@suse.com>
CC: Stefano Stabellini <sstabellini@kernel.org>, Volodymyr Babchuk
	<Volodymyr_Babchuk@epam.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Tamas K Lengyel
	<tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, "Petre
 Pircalabu" <ppircalabu@bitdefender.com>, <bertrand.marquis@arm.com>,
	<wei.chen@arm.com>, <xen-devel@lists.xenproject.org>
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
 <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
 <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <5b23d612-57a8-f0d8-97d1-2b90161b5539@citrix.com>
Date: Wed, 12 May 2021 19:14:54 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0048.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:61::36) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 56704d1b-cadd-49e3-39bd-08d91571dc03
X-MS-TrafficTypeDiagnostic: BY5PR03MB5079:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB5079A10B21E32A6621C1B9B1BA529@BY5PR03MB5079.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: dmVgF6+BtLpYdx8pF45FahIdtVo0hddkUdwxTzu2VhDtmm2R5lQm/gKty8pDwqdCXQhwivWtGAyLMAzIS44PAQZIhUgfB1v2/SgmaRN+rDq056uOeKKMWz6mG/75XBjhsliGnaDRxSsIgq5J2RTxXNPU62xOwcjBoYfWFOw6qVPfrKDImUboLbiB7Sp9EMCZ97kaycW9EDkoHfwpEsZDNxzTUzSuqsK12pGbVPRvSslofaOQ63+t/OOT9knm/offb7LTLrj4y9DU6E06ib5RLoIpiSL2PUGuC2rZFBQ9NBphnG003ZPVqjbhgLj6lf24exJJ1nABkWBWGg3PmIZ5FCTqsItrbg6iFp6PDJiVAqShjanVg2lIz+iNr6Ub64qnQmh6fLUdj8tSwmZxlhDwVRFwcljjoLCxdUVKNrwORIQUInTffAsR9ykgBP4fWqAmar29n6JAdiQQYtP/mJTY6wHFPq48DjlOxS9hF7jfhm+QJe5iPN7I9VEu4icXmIbfWj7QdN7uViRaVZ7nQNxmXzIRZJGovXEc+0lRG1V+t8lV3RvIEuxyz9hkv/hcrR3lvlVLYgWHto4mH5j7UDmI6zKKz7xAYGk/ZE/HlVLkNFvpBNq0xllST5aBiYAYBz3yLVRfzqQ7kFTpsFnuia9oNuMPJPXx9yhttb7SmEQWQ/JL3Yr6lgixvTB345oCoo7G
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(376002)(136003)(346002)(396003)(478600001)(66476007)(36756003)(6666004)(8676002)(83380400001)(54906003)(86362001)(4326008)(38100700002)(16576012)(956004)(66556008)(316002)(7416002)(53546011)(66946007)(31696002)(16526019)(5660300002)(26005)(6486002)(186003)(110136005)(2906002)(31686004)(2616005)(8936002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?QWJYTmE3bkVLRVBlZG1KNlMxVXk2RHZQNDEyZ2RtNnJyK2dqZ1BROHdsWnls?=
 =?utf-8?B?RVBrUUZVS2d6UzBiZjJwQ3ZITlYrclQ3YklRTXdHSkJxZkdSV0xoU0t0QXB5?=
 =?utf-8?B?Mk1FSVJqQWZNcko4UHJBazlDOXgzMnJlejU5ZndQYXZQV01WNVNTN1JQR1Zv?=
 =?utf-8?B?TVA1V0hZM0Q1NVlETGxKbjZ0QlFCWEs2YysvS3A5REJoalh5NU80WGZDMW9a?=
 =?utf-8?B?Z0FuZzZaN3UxRkhaaFVBMjFxZTdJZkFXM0VaS0Y0c3FseVN3WkdjMk13b3ZH?=
 =?utf-8?B?emswMmRtTHJNeVN0MHZ0QTRIR0h6NDBESmxIektudnFUN3QrWDJjeXlNM1Ay?=
 =?utf-8?B?eHlIQlNXcHFpOG9oTVhzbXVHM1h6ZWpLUGJnYmdhY0dBNU9EWnpkRW9WWkZK?=
 =?utf-8?B?RUdsZ0JWcExOUld4L3FPMjFjUStTdEZsSzcraTArWjNIRFU0S1NyQVhTTFFu?=
 =?utf-8?B?bmJvQjFrdC9nZzFTc2dXencxWFFnSmZTL2ZQZHNnV29kT1o1Y2tpMElpN0Y0?=
 =?utf-8?B?UkVXcGpjQzNCMzFhalVvYmJFWDJreTl4cXdMLy9ZWDgwa0k2VndVY2RaS3dj?=
 =?utf-8?B?cVAxend0Z3VIbkFRYzl4bkdDUmIxSWtuYlFWOWlEVFlrdUpEL0lxcUxyOTJv?=
 =?utf-8?B?U1RWblVZZFhBaVlMcHU1SkNOMy9vTnVFVDBDNzU5aG5haGtRMGNuNXI2bmk5?=
 =?utf-8?B?Y21ONlE4akFWRTczUmVnclViam5yMG56NTNzREJDL2F4elB1dFJoSTNLcFB5?=
 =?utf-8?B?NlpuTXl1d1NSU2cwZXMyQkNJejZIMno1ZW8zREVuSlVIYWRSQlprVk9xWFhw?=
 =?utf-8?B?TnM3d0NXeXFMVytHUVJRMHBRQ1VmUmhCeWhIeFp1M0w2NjVna1lXbWVEUzZk?=
 =?utf-8?B?S2hjTHNTZi9YMlc0WjBQOGF6OGkyMVh5KzZKcThid2w0cVBCLzdJZ1dFTHVD?=
 =?utf-8?B?SHh3RCt0WFd0bmozWk5ldjY2YjB3WVVBcVdCRkNuVkxqaHVuS2V1d3N5Q05C?=
 =?utf-8?B?NEJXR0ZtYkdJL2k3bTlEcVhreXJua3B3WmdLemFXWGt2T1I4SDdsWDloSXJI?=
 =?utf-8?B?b3FnVWY1cTRXOUltaUMwRDRGemJnWFAzcm1rOFNWMGZEV0NObVZRN2ZUcWRj?=
 =?utf-8?B?OXRKdFVZUnhwQys0NDg3UGJhWERlK0JPdVdTQkRGeHJFSVByREQyYis1VmRH?=
 =?utf-8?B?WWh5clp2SithNUdBRmRwSGZCQjdHb0hNa2h4QjV1bnFWVTNUV0JnTXh6NlZk?=
 =?utf-8?B?NmlzQVQxUFZaSk9CK3Nac0g1VVVBVnNkUnhkVWxnME1STVo5YUV1UTNsRjNz?=
 =?utf-8?B?elBIYXY0N0tNYkNrblJwTFM2MkpKL1l3a2JMbWF2aVU1SjMwRERiYlk4YVJo?=
 =?utf-8?B?ZG1GQnQ0T3NaMEFzMWdtMHJwc0N1b2JWakZHSFZFWk96bXluU1ZQNUExS2JC?=
 =?utf-8?B?dmFZNDdRUVpEK1lFMzcxTldPMlRMZm1HZDZQQUVPMXpWa1cwdmgxQy82a0w5?=
 =?utf-8?B?N1ltV29TYkRnWko4dlJBR0pDNTJPWFVXT2VuZUJGd2xkUlEwUVdKRUswdDZD?=
 =?utf-8?B?REJ5SlVGeHhLZk1TZUJ2TkJHc0pXRU5XaVk5RE5IUXhZaXBWWW4vTElzcS9q?=
 =?utf-8?B?ejdOZ1lIY0JGQzRqalVxNVpCYnVScDBQVnZRRUxCZk9yUURYeU1UUFVmSW9L?=
 =?utf-8?B?aEtQY205ZXdrdHRQM3pseklsVkptRmxMKzZTQi80ZTJSbkNpTWkySE5yemgw?=
 =?utf-8?Q?RcONYbXqHNVmj0CyU6cAaO8c+dK/qZHKsYAa2Yz?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 56704d1b-cadd-49e3-39bd-08d91571dc03
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 12 May 2021 18:15:01.9290
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: VOeBJUgFI+wf7pR8gXshdInJqA42n5a+32AHOtArsjYM+M607T0P6CpcH9BlEHfqg97YFiBaF2NV/axcdzcx38Qu5IKjLcxJxM1DugKvgRQ=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5079
X-OriginatorOrg: citrix.com

On 12/05/2021 18:59, Julien Grall wrote:
> Hi,
>
> On 11/05/2021 07:37, Michal Orzel wrote:
>> On 05.05.2021 10:00, Jan Beulich wrote:
>>> On 05.05.2021 09:43, Michal Orzel wrote:
>>>> --- a/xen/include/public/arch-arm.h
>>>> +++ b/xen/include/public/arch-arm.h
>>>> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>>>>         /* Return address and mode */
>>>>       __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
>>>> -    uint32_t cpsr;                              /* SPSR_EL2 */
>>>> +    uint64_t cpsr;                              /* SPSR_EL2 */
>>>>         union {
>>>> -        uint32_t spsr_el1;       /* AArch64 */
>>>> +        uint64_t spsr_el1;       /* AArch64 */
>>>>           uint32_t spsr_svc;       /* AArch32 */
>>>>       };
>>>
>>> This change affects, besides domctl, also default_initialise_vcpu(),
>>> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
>>> only allows two unrelated VCPUOP_* to pass, but then I don't
>>> understand why arch_initialise_vcpu() doesn't simply return e.g.
>>> -EOPNOTSUPP. Hence I suspect I'm missing something.
>
> I think it was simply overlooked when reviewing the following commit:
>
> commit 192df6f9122ddebc21d0a632c10da3453aeee1c2
> Author: Roger Pau Monné <roger.pau@citrix.com>
> Date:   Tue Dec 15 14:12:32 2015 +0100
>
>     x86: allow HVM guests to use hypercalls to bring up vCPUs
>
>     Allow the usage of the VCPUOP_initialise, VCPUOP_up, VCPUOP_down,
>     VCPUOP_is_up, VCPUOP_get_physid and VCPUOP_send_nmi hypercalls
> from HVM
>     guests.
>
>     This patch introduces a new structure (vcpu_hvm_context) that should
>     be used in conjunction with the VCPUOP_initialise hypercall in order
>     to initialize vCPUs for HVM guests.
>
>     Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>     Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>     Reviewed-by: Jan Beulich <jbeulich@suse.com>
>     Acked-by: Ian Campbell <ian.campbell@citrix.com>
>
> On Arm, the structure vcpu_guest_context is not exposed outside of Xen
> and the tools. Interestingly, vcpu_guest_core_regs is, but it should
> only be used within vcpu_guest_context.
>
> So, as this is not part of the stable ABI, it is fine to break it.
>
>>>
>> I agree that do_arm_vcpu_op only allows two VCPUOP* to pass, and that
>> calling arch_initialise_vcpu in the case of VCPUOP_initialise makes no
>> sense, as VCPUOP_initialise is not supported on Arm. It would make sense
>> for it to return -EOPNOTSUPP.
>> However, do_arm_vcpu_op will not accept VCPUOP_initialise and will return
>> -EINVAL, so arch_initialise_vcpu for Arm will never be called.
>> Do you think that changing this behaviour so that arch_initialise_vcpu
>> returns -EOPNOTSUPP should be part of this patch?
>
> I think this change is unrelated, so it should be handled in a
> follow-up patch.
>
> If you are taking care of this, would you mind also looking at moving
> struct vcpu_guest_core_regs within the #if defined(__XEN__) ||
> defined(__XEN_TOOLS__) block?

+1.  Fairly sure this is the conclusion of a discussion a year or so
back where I noted the same peculiarity and tried to untangle the mess
that is the common vs arch-specific code (which is still outstanding,
and I don't immediately recall why).

~Andrew


From xen-devel-bounces@lists.xenproject.org Wed May 12 19:21:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 19:21:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126350.237855 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lguQ1-0005jA-MC; Wed, 12 May 2021 19:21:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126350.237855; Wed, 12 May 2021 19:21:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lguQ1-0005j3-J8; Wed, 12 May 2021 19:21:17 +0000
Received: by outflank-mailman (input) for mailman id 126350;
 Wed, 12 May 2021 19:21:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lguQ0-0005it-Fn; Wed, 12 May 2021 19:21:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lguPz-0001bi-Lp; Wed, 12 May 2021 19:21:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lguPz-00037t-B7; Wed, 12 May 2021 19:21:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lguPz-0000fa-Ad; Wed, 12 May 2021 19:21:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cdIwQYhjYIh8dBmvNNI0PQJsPFbuy+YM4EPjqgtY/fQ=; b=o/Hr9Z7CjTt1TsKwY9hsvvT6N3
	/8hAnh75KXxYyb70nSqdhWWWfiRPnCKgjA6V+Rth6ZJSiT3NN7gkNHBPAWyGLpUR3XtXfHljE5Sgc
	RAk23sVcQQ05F+cbBzJ/bPcf+XofB3sTLTTSUL8JsXtM0ISqGX08OYW3DieCGrZoIS/Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161921-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161921: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=52b91dad6f43afb0c77325e6d54115c280958e57
X-Osstest-Versions-That:
    xen=d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 19:21:15 +0000

flight 161921 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161921/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  52b91dad6f43afb0c77325e6d54115c280958e57
baseline version:
 xen                  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9

Last test of basis   161897  2021-05-10 19:02:36 Z    2 days
Testing same since   161921  2021-05-12 16:01:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   d4fb5f166c..52b91dad6f  52b91dad6f43afb0c77325e6d54115c280958e57 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 12 19:55:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 19:55:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126357.237871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lguxN-0001FU-E8; Wed, 12 May 2021 19:55:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126357.237871; Wed, 12 May 2021 19:55:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lguxN-0001FN-AH; Wed, 12 May 2021 19:55:45 +0000
Received: by outflank-mailman (input) for mailman id 126357;
 Wed, 12 May 2021 19:55:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lguxL-0001Ey-L7
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 19:55:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a3e0163c-c2ef-4546-9907-11d01f8e06c9;
 Wed, 12 May 2021 19:55:43 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id AB53C613F7;
 Wed, 12 May 2021 19:55:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3e0163c-c2ef-4546-9907-11d01f8e06c9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620849342;
	bh=262+PIF/9A36VsCFqlkhq//BsN04UF0+U0v6HG31+Wo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=WEB3NSzIGZ1hY7qqH6Z9As3Y5wCiVwfxNKRRS/2CC0NbdHbt5Qgc02KzG15EMIxHx
	 GRryYhagx0GqTVe35P4CWN/aFNbhJOcLv5Yxgs2TB4g/MMgelXLUXiw+5EwqeavSN6
	 C1MpfPrTlXnYi4xDQtTxE+kO+JPXyDOjavVr6Sg8yNi5a8lr3RJSaOFMbgZLr1uxD2
	 G3ZV5addJoy64YD+onajVSGgt0Xgg+EFe5W32nhaFq/UBHEuA3pK5v5HIMfnLvPbVR
	 /a6LBUXd2mPFZKaLUDxuzPCWaR4pV3xtWBWh3N7EiyErgBxFKvaliGdztPXF4OazjN
	 eMMkMG3PUlnMw==
Date: Wed, 12 May 2021 12:55:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Christoph Hellwig <hch@lst.de>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, 
    boris.ostrovsky@oracle.com, jgross@suse.com, catalin.marinas@arm.com, 
    will@kernel.org, linux-arm-kernel@lists.infradead.org
Subject: Re: [PATCH 1/2] xen/arm64: do not set SWIOTLB_NO_FORCE when swiotlb
 is required
In-Reply-To: <20210512072645.GA22396@lst.de>
Message-ID: <alpine.DEB.2.21.2105121255290.5018@sstabellini-ThinkPad-T480s>
References: <20210511174142.12742-1-sstabellini@kernel.org> <20210512072645.GA22396@lst.de>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 12 May 2021, Christoph Hellwig wrote:
> > -int xen_swiotlb_detect(void)
> > -{
> > -	if (!xen_domain())
> > -		return 0;
> > -	if (xen_feature(XENFEAT_direct_mapped))
> > -		return 1;
> > -	/* legacy case */
> > -	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
> > -		return 1;
> > -	return 0;
> > -}
> 
> I think this move should be a separate prep patch.

Sure, I can do that.


From xen-devel-bounces@lists.xenproject.org Wed May 12 20:10:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 20:10:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126365.237883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvBt-0003mk-PL; Wed, 12 May 2021 20:10:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126365.237883; Wed, 12 May 2021 20:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvBt-0003md-MJ; Wed, 12 May 2021 20:10:45 +0000
Received: by outflank-mailman (input) for mailman id 126365;
 Wed, 12 May 2021 20:10:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgvBs-0003mX-Lc
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 20:10:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5c367de5-ca19-4a09-8571-a7541e6c4968;
 Wed, 12 May 2021 20:10:44 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E584D61408;
 Wed, 12 May 2021 20:10:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5c367de5-ca19-4a09-8571-a7541e6c4968
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620850243;
	bh=KiSht4eykHbuRx4encDVma3UC0WdUp6sYB51Ups/XBQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=oe+sv3ILc/RC+aVoS4W+Yefy9KsbBFO5a0a/INlcBlqGDSktVFnmkAeXr/w639z41
	 FF6tet/bKQ9itBDU7qZ50bOtG4N5MDKvqG9wXs9XE5I7LA6RUYdRx25BxbmF5tLeM8
	 vC7jRmufvNf0Cfxf5++3NwAC67oEPQXe/Acw1pHjw6tfLBhINnc8MFktxvcakjlL3t
	 7tHb+8HFZ7QQHeyqwmf1dF6OY5SnoGvdYQ04BKAkHWVg+Q0awZtJSeCSnfthbMy5YS
	 pp9vmID2lox89aKRN67CxlsLOK/QV8v48IrBEG+cQCZbXsRdFaQTNxlDH3SPOPnQnj
	 ShdB+57YHYeuw==
Date: Wed, 12 May 2021 13:10:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, hch@lst.de, linux-kernel@vger.kernel.org, 
    Stefano Stabellini <stefano.stabellini@xilinx.com>, jgross@suse.com
Subject: Re: [PATCH 2/2] xen/swiotlb: check if the swiotlb has already been
 initialized
In-Reply-To: <2e5a684b-3c74-5efc-2946-8ca002894ab4@oracle.com>
Message-ID: <alpine.DEB.2.21.2105121310210.5018@sstabellini-ThinkPad-T480s>
References: <20210511174142.12742-2-sstabellini@kernel.org> <2e5a684b-3c74-5efc-2946-8ca002894ab4@oracle.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Tue, 11 May 2021, Boris Ostrovsky wrote:
> On 5/11/21 1:41 PM, Stefano Stabellini wrote:
> > --- a/drivers/xen/swiotlb-xen.c
> > +++ b/drivers/xen/swiotlb-xen.c
> > @@ -164,6 +164,11 @@ int __ref xen_swiotlb_init(void)
> >  	int rc = -ENOMEM;
> >  	char *start;
> >  
> > +	if (io_tlb_default_mem != NULL) {
> > +		printk(KERN_WARNING "Xen-SWIOTLB: swiotlb buffer already initialized\n");
> 
> 
> pr_warn().
> 
> 
> Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>

Thank you! I'll send a v2 shortly with the change to pr_warn and your
reviewed-by.


From xen-devel-bounces@lists.xenproject.org Wed May 12 20:18:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 20:18:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126370.237895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvJ5-0004fC-KC; Wed, 12 May 2021 20:18:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126370.237895; Wed, 12 May 2021 20:18:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvJ5-0004f5-Ge; Wed, 12 May 2021 20:18:11 +0000
Received: by outflank-mailman (input) for mailman id 126370;
 Wed, 12 May 2021 20:18:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgvJ4-0004ez-Hh
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 20:18:10 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 521fd89b-72bd-4620-adb4-c21c4fb473f0;
 Wed, 12 May 2021 20:18:09 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A30C461420;
 Wed, 12 May 2021 20:18:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 521fd89b-72bd-4620-adb4-c21c4fb473f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620850689;
	bh=fP9Wxr+4cvV2QEABqz6APIe0wf2Nt2QzccyCNr1aPQE=;
	h=Date:From:To:cc:Subject:From;
	b=Iub85i9N4PogdaaRJ1b5DUmjrHqhoIUgjXookWh7V1QaUHlarkcZ9Et/kzUg3IpU/
	 9hYSmO3DpmPCJvQXNAqKhpsWTpHpBo2+9ajqKeAbpXi6xR2jCGoL7liHb5fI/u2aNJ
	 4BJ0yPDlK7/RkwpR1aIgaiACiPp2VsJqMzmzO9bnKI+mnjwqX/xqFihgFpARJxmK2O
	 ZyoTdyARq/XnZSTIbt5WmFCh5YtfN6t4AH4NI4Yq8HeuEPZPyDXeH/BhAw/qab0EuD
	 THrMu77hwEn+1+TqJaRixF3VsYxfrHs4Qust+JkcFtQxWCsJVw+O/5OXlNl0EVeW0x
	 E6VR7zXWWGbAQ==
Date: Wed, 12 May 2021 13:18:07 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: xen-devel@lists.xenproject.org
cc: sstabellini@kernel.org, boris.ostrovsky@oracle.com, jgross@suse.com, 
    hch@lst.de
Subject: [PATCH v2 0/3] swiotlb-xen init fixes
Message-ID: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

Hi all,

This short patch series consists of a preparation patch and two unrelated
fixes to swiotlb-xen initialization.


Christoph Hellwig (1):
      arm64: do not set SWIOTLB_NO_FORCE when swiotlb is required

Stefano Stabellini (2):
      xen/arm: move xen_swiotlb_detect to arm/swiotlb-xen.h
      xen/swiotlb: check if the swiotlb has already been initialized

 arch/arm/xen/mm.c             | 20 +++++++-------------
 arch/arm64/mm/init.c          |  3 ++-
 drivers/xen/swiotlb-xen.c     |  5 +++++
 include/xen/arm/swiotlb-xen.h | 15 ++++++++++++++-
 4 files changed, 28 insertions(+), 15 deletions(-)


From xen-devel-bounces@lists.xenproject.org Wed May 12 20:18:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 20:18:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126371.237907 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvJO-00054U-SL; Wed, 12 May 2021 20:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126371.237907; Wed, 12 May 2021 20:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvJO-00054N-PL; Wed, 12 May 2021 20:18:30 +0000
Received: by outflank-mailman (input) for mailman id 126371;
 Wed, 12 May 2021 20:18:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgvJM-00053i-VT
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 20:18:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd05b009-a7cc-4e1e-bda1-ffbca09f8373;
 Wed, 12 May 2021 20:18:28 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 734D361408;
 Wed, 12 May 2021 20:18:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd05b009-a7cc-4e1e-bda1-ffbca09f8373
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620850707;
	bh=5GSHqd6UoXwElcgEbYeM8UQG4GJkKSSlVE4wPB43XOk=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=vHHpKNOcSv3F8S4EPFQSzTJKfGTM1mlZ8082xqqy+v1IZ8uxlYgnmk6Fw+JHyWk8P
	 lEpvWBAv3fhk7v3LMfPuWzhRFkhPnyuPcey9qCWJOHG6kQ4KAF97xxCO/Vvan8EsLM
	 G8W3vBQSLbZYMTckWaS7Nct+WIRm+WSuwvfYKcE6jW8okm51FrL764aLnv2fwVKbTX
	 fFgm+kluXKRIjYpViGlO+USpxdzUm5zYLq4ZQJQ26qvLAOfIt3DAh0knAAas8Um5n8
	 tEpW9w3pkYeq6RkfKD0FPsNfFx5Yd7JEr9Cgzmd9SoQien1ww1lkTepk1KEtpz0xot
	 YKaOZF832bj9Q==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	hch@lst.de,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 1/3] xen/arm: move xen_swiotlb_detect to arm/swiotlb-xen.h
Date: Wed, 12 May 2021 13:18:21 -0700
Message-Id: <20210512201823.1963-1-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

Move xen_swiotlb_detect to a static inline function to make it available
to !CONFIG_XEN builds.

CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

---
Changes in v2:
- patch split
---
 arch/arm/xen/mm.c             | 12 ------------
 include/xen/arm/swiotlb-xen.h | 15 ++++++++++++++-
 2 files changed, 14 insertions(+), 13 deletions(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index f8f07469d259..223b1151fd7d 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -135,18 +135,6 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 	return;
 }
 
-int xen_swiotlb_detect(void)
-{
-	if (!xen_domain())
-		return 0;
-	if (xen_feature(XENFEAT_direct_mapped))
-		return 1;
-	/* legacy case */
-	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
-		return 1;
-	return 0;
-}
-
 static int __init xen_mm_init(void)
 {
 	struct gnttab_cache_flush cflush;
diff --git a/include/xen/arm/swiotlb-xen.h b/include/xen/arm/swiotlb-xen.h
index 2994fe6031a0..33336ab58afc 100644
--- a/include/xen/arm/swiotlb-xen.h
+++ b/include/xen/arm/swiotlb-xen.h
@@ -2,6 +2,19 @@
 #ifndef _ASM_ARM_SWIOTLB_XEN_H
 #define _ASM_ARM_SWIOTLB_XEN_H
 
-extern int xen_swiotlb_detect(void);
+#include <xen/features.h>
+#include <xen/xen.h>
+
+static inline int xen_swiotlb_detect(void)
+{
+	if (!xen_domain())
+		return 0;
+	if (xen_feature(XENFEAT_direct_mapped))
+		return 1;
+	/* legacy case */
+	if (!xen_feature(XENFEAT_not_direct_mapped) && xen_initial_domain())
+		return 1;
+	return 0;
+}
 
 #endif /* _ASM_ARM_SWIOTLB_XEN_H */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 20:18:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 20:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126373.237919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvJQ-0005La-4n; Wed, 12 May 2021 20:18:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126373.237919; Wed, 12 May 2021 20:18:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvJQ-0005LR-1a; Wed, 12 May 2021 20:18:32 +0000
Received: by outflank-mailman (input) for mailman id 126373;
 Wed, 12 May 2021 20:18:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgvJN-00053u-UJ
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 20:18:29 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9ec0c904-6615-4f9e-9af4-e77e4ba4086b;
 Wed, 12 May 2021 20:18:29 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id E6BFC61421;
 Wed, 12 May 2021 20:18:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9ec0c904-6615-4f9e-9af4-e77e4ba4086b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620850708;
	bh=ikG741femM1u5L3PYlYRG1oNEZycqnQin0LDEK63UdI=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=RShGZVhkT1+Cv9URRAR9hvRdm2lnVpZ/vDKhwwndQErexmq58V6abEelgnec4DX+x
	 Iq/powaDhH+V5WHSLB3ryZogB1b2sgnkPU7EiP0/MRRG/nwzIzV/Q5Srxs0gqbB+cH
	 14MZp+GtgTMFCs0Ih8T5Uvn9iByb8FozTPHOAcZHiN5n9RknvNAxd+0RQUcHqLXTeM
	 jofPjb49sy0g5sdVuYrjnmuG6NNqjf0n9H6kCmi+4WL9vnyH3UOIud8DubTi7dXVH/
	 FrXkN9yBf13D6H+TzEtt8Qy8V5lcj65NtVqeobNFzPY2KnlgJ0bWtvyv2fiu58p57T
	 I20jFea37G0/w==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	hch@lst.de,
	catalin.marinas@arm.com,
	will@kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 2/3] arm64: do not set SWIOTLB_NO_FORCE when swiotlb is required
Date: Wed, 12 May 2021 13:18:22 -0700
Message-Id: <20210512201823.1963-2-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>

From: Christoph Hellwig <hch@lst.de>

Although SWIOTLB_NO_FORCE is meant to allow later calls to swiotlb_init,
today dma_direct_map_page returns an error if SWIOTLB_NO_FORCE is set.

For now, without a larger overhaul of SWIOTLB_NO_FORCE, the best we can
do is to avoid setting SWIOTLB_NO_FORCE in mem_init when we know that it
is going to be required later (e.g. Xen requires it).

CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
CC: catalin.marinas@arm.com
CC: will@kernel.org
CC: linux-arm-kernel@lists.infradead.org
Fixes: 2726bf3ff252 ("swiotlb: Make SWIOTLB_NO_FORCE perform no allocation")
Signed-off-by: Christoph Hellwig <hch@lst.de>
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

---
Changes in v2:
- patch split
---
 arch/arm64/mm/init.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/mm/init.c b/arch/arm64/mm/init.c
index 16a2b2b1c54d..e55409caaee3 100644
--- a/arch/arm64/mm/init.c
+++ b/arch/arm64/mm/init.c
@@ -43,6 +43,7 @@
 #include <linux/sizes.h>
 #include <asm/tlb.h>
 #include <asm/alternative.h>
+#include <asm/xen/swiotlb-xen.h>
 
 /*
  * We need to be able to catch inadvertent references to memstart_addr
@@ -482,7 +483,7 @@ void __init mem_init(void)
 	if (swiotlb_force == SWIOTLB_FORCE ||
 	    max_pfn > PFN_DOWN(arm64_dma_phys_limit))
 		swiotlb_init(1);
-	else
+	else if (!xen_swiotlb_detect())
 		swiotlb_force = SWIOTLB_NO_FORCE;
 
 	set_max_mapnr(max_pfn - PHYS_PFN_OFFSET);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 20:18:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 20:18:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126374.237931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvJT-0005fz-Ex; Wed, 12 May 2021 20:18:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126374.237931; Wed, 12 May 2021 20:18:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvJT-0005fk-Ab; Wed, 12 May 2021 20:18:35 +0000
Received: by outflank-mailman (input) for mailman id 126374;
 Wed, 12 May 2021 20:18:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgvJR-00053i-UB
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 20:18:33 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5a7d3754-4079-4430-a3bb-5eac1d4e94f1;
 Wed, 12 May 2021 20:18:29 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 8B2C361422;
 Wed, 12 May 2021 20:18:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a7d3754-4079-4430-a3bb-5eac1d4e94f1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620850708;
	bh=trsGadIQLNKKCXatZO0q0DocnFfhaO1AxO8nVb8w9MQ=;
	h=From:To:Cc:Subject:Date:In-Reply-To:References:From;
	b=aO6VInjY6ndrYTvyGcoofX87u0DhSLAIbZ4pk+S+22qXkOzEWJiYPbKKyB66ULAbr
	 Ez1XQgUO11lRcmTm0tegcEbr0o1MpI77EsK4U0uleRHQUDVoW7S3JNRK/7fsRJ55zv
	 eRBbop6Cmt4odtv8kfuIc/IlsU35LtGBJDuT+N3lFmbhnOovGGOXA0ostEPTbZnrxR
	 Iv19y4rS+AG0b1VuEnJWzsdwMOhAxjWZV2mC0b5eWFAKHTiTtVto+4AHKn2/JeOw9T
	 9jf1XMRwgsnuKo9jffD/Faxrw6Wd8fvAniEbLr6DDoHoWpwCvUEe0LnObLHMeKztzC
	 SEcUF38QTS2gw==
From: Stefano Stabellini <sstabellini@kernel.org>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	hch@lst.de,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: [PATCH v2 3/3] xen/swiotlb: check if the swiotlb has already been initialized
Date: Wed, 12 May 2021 13:18:23 -0700
Message-Id: <20210512201823.1963-3-sstabellini@kernel.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>

From: Stefano Stabellini <stefano.stabellini@xilinx.com>

xen_swiotlb_init calls swiotlb_late_init_with_tbl, which fails with
-ENOMEM if the swiotlb has already been initialized.

Add an explicit check io_tlb_default_mem != NULL at the beginning of
xen_swiotlb_init. If the swiotlb is already initialized print a warning
and return -EEXIST.

On x86, the error propagates.

On ARM, we don't actually need a special swiotlb buffer (yet); any
buffer would do. So ignore the error and continue.

CC: boris.ostrovsky@oracle.com
CC: jgross@suse.com
Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>
Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>
---
Changes in v2:
- use pr_warn
- add reviewed-by
---
 arch/arm/xen/mm.c         | 8 +++++++-
 drivers/xen/swiotlb-xen.c | 5 +++++
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm/xen/mm.c b/arch/arm/xen/mm.c
index 223b1151fd7d..a7e54a087b80 100644
--- a/arch/arm/xen/mm.c
+++ b/arch/arm/xen/mm.c
@@ -138,9 +138,15 @@ void xen_destroy_contiguous_region(phys_addr_t pstart, unsigned int order)
 static int __init xen_mm_init(void)
 {
 	struct gnttab_cache_flush cflush;
+	int rc;
+
 	if (!xen_swiotlb_detect())
 		return 0;
-	xen_swiotlb_init();
+
+	rc = xen_swiotlb_init();
+	/* we can work with the default swiotlb */
+	if (rc < 0 && rc != -EEXIST)
+		return rc;
 
 	cflush.op = 0;
 	cflush.a.dev_bus_addr = 0;
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..24d11861ac7d 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -164,6 +164,11 @@ int __ref xen_swiotlb_init(void)
 	int rc = -ENOMEM;
 	char *start;
 
+	if (io_tlb_default_mem != NULL) {
+		pr_warn("swiotlb buffer already initialized\n");
+		return -EEXIST;
+	}
+
 retry:
 	m_ret = XEN_SWIOTLB_ENOMEM;
 	order = get_order(bytes);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 12 20:27:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 20:27:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126391.237942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvRX-00087P-A7; Wed, 12 May 2021 20:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126391.237942; Wed, 12 May 2021 20:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgvRX-00087I-70; Wed, 12 May 2021 20:26:55 +0000
Received: by outflank-mailman (input) for mailman id 126391;
 Wed, 12 May 2021 20:26:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgvRV-000879-U2
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 20:26:53 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1162508b-f1cf-4e75-9f1a-9517c003e146;
 Wed, 12 May 2021 20:26:53 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 244F3613B6;
 Wed, 12 May 2021 20:26:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1162508b-f1cf-4e75-9f1a-9517c003e146
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620851212;
	bh=IgmMSFZrOHgFCTc0Y6ImNI9GrSZkyuzsXekB3bIoVTo=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Lah3w7DugpOAaDxV9Z8VTCFQrNMUKjNXbXOkRWkuoC3ThsrXBgKrREWJCEUwi4EYX
	 /xiOY8D38yQmD4dxYunyT1YAMSc0Rkdzzy2TX8gJxmH4WQJrjTOXhCo6xHxYyjqaKz
	 AzIRAHPrYmx1hVBXE8iq9zOKwG10x6Ykw1KTjwa1lpTcT4q4jh4uhVYIB1VD8j68lX
	 +VXoOHmmt8QRRadO13vX7ftf5OLsv2qVAywNeENBypFMLJ73ywIov4gTJ85Ce4+dPr
	 oLsDIcMaxRPFWiUQNtOvS2G5Ik5u7MxIte/K6BpQPS7D4fcTHPVfBZkDIhU/IyECGm
	 YXkeTvXydll9A==
Date: Wed, 12 May 2021 13:26:51 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, michal.orzel@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH] xen/arm: gic-v3: Add missing breaks in gicv3_read_apr()
In-Reply-To: <20210512173548.27244-1-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105121326400.5018@sstabellini-ThinkPad-T480s>
References: <20210512173548.27244-1-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 12 May 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Commit 78e67c99eb3f "arm/gic: Get rid of READ/WRITE_SYSREG32"
> mistakenly converted all the cases in gicv3_read_apr() to fall-through.
> 
> Rather than re-instating a return per case, add the missing break and
> keep a single return at the end of the function.
> 
> Fixes: 78e67c99eb3f ("arm/gic: Get rid of READ/WRITE_SYSREG32")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed and committed.


> ---
>  xen/arch/arm/gic-v3.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/xen/arch/arm/gic-v3.c b/xen/arch/arm/gic-v3.c
> index b86f04058947..9a3a175ad7d2 100644
> --- a/xen/arch/arm/gic-v3.c
> +++ b/xen/arch/arm/gic-v3.c
> @@ -1167,12 +1167,15 @@ static unsigned int gicv3_read_apr(int apr_reg)
>      case 0:
>          ASSERT(gicv3.nr_priorities > 4 && gicv3.nr_priorities < 8);
>          apr = READ_SYSREG(ICH_AP1R0_EL2);
> +        break;
>      case 1:
>          ASSERT(gicv3.nr_priorities > 5 && gicv3.nr_priorities < 8);
>          apr = READ_SYSREG(ICH_AP1R1_EL2);
> +        break;
>      case 2:
>          ASSERT(gicv3.nr_priorities > 6 && gicv3.nr_priorities < 8);
>          apr = READ_SYSREG(ICH_AP1R2_EL2);
> +        break;
>      default:
>          BUG();
>      }
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 12 21:30:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 21:30:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126399.237959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgwQt-0007Gg-4Z; Wed, 12 May 2021 21:30:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126399.237959; Wed, 12 May 2021 21:30:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgwQt-0007GZ-1U; Wed, 12 May 2021 21:30:19 +0000
Received: by outflank-mailman (input) for mailman id 126399;
 Wed, 12 May 2021 21:30:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgwQr-0007GT-7i
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 21:30:17 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a6518a3d-b686-4252-a12a-5d89cf54ddfa;
 Wed, 12 May 2021 21:30:16 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0EAB1613E6;
 Wed, 12 May 2021 21:30:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6518a3d-b686-4252-a12a-5d89cf54ddfa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620855015;
	bh=4GO29E4P5CadwLaZmmzGQrRInrNcFgNOKyQmX0Qeh6M=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=H7kB4nMlt4dT5hTPNVurw8KhB9GFLNAnp08qTFUxB60QGtv7ioFSlocbE9UWB1+rg
	 wP4MixwiGLQlwkZyWNTQYY8pl5tb1WbMMdhF72YwtQIms2G5w5hnYG3SkJMcXnZOLY
	 sO/EJxCUjgpbmB1k6KfpL3Ro0Hbx082+c8VvVfARJfpXf0p1f5oTyQFohT1WhfDHtp
	 KO3m5A5gSRaDbpEyltrfw/9a0oCBnHmyaxfEbjpadBdZGhlZy5wVdPUZNKfd6eqSqU
	 QUk9l1Dj+tNPshaw4cy7hr6R62cj9y9Msfatlbn5Ul9NDkT2gkIdGDDV64sCBOUdw4
	 dcJHArdEt2PRA==
Date: Wed, 12 May 2021 14:30:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 02/15] xen/arm: lpae: Use the generic helpers to
 defined the Xen PT helpers
In-Reply-To: <94e364a7-de40-93ab-6cde-a2f493540439@xen.org>
Message-ID: <alpine.DEB.2.21.2105121425500.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-3-julien@xen.org> <alpine.DEB.2.21.2105111515470.5018@sstabellini-ThinkPad-T480s> <94e364a7-de40-93ab-6cde-a2f493540439@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 12 May 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 11/05/2021 23:26, Stefano Stabellini wrote:
> > On Sun, 25 Apr 2021, Julien Grall wrote:
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > Currently, Xen PT helpers are only working with 4KB page granularity
> > > and open-code the generic helpers. To allow more flexibility, we can
> > > re-use the generic helpers and pass Xen's page granularity
> > > (PAGE_SHIFT).
> > > 
> > > As Xen PT helpers are used in both C and assembly, we need to move
> > > the generic helpers definition outside of the !__ASSEMBLY__ section.
> > > 
> > > Note the aliases for each level are still kept for the time being so we
> > > can avoid a massive patch to change all the callers.
> > > 
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > 
> > The patch is OK as is. I have a couple of suggestions for improvement
> > below. If you feel like making them, good, otherwise I am also OK if you
> > don't want to change anything.
> > 
> > 
> > > ---
> > >      Changes in v2:
> > >          - New patch
> > > ---
> > >   xen/include/asm-arm/lpae.h | 71 +++++++++++++++++++++-----------------
> > >   1 file changed, 40 insertions(+), 31 deletions(-)
> > > 
> > > diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> > > index 4fb9a40a4ca9..310f5225e056 100644
> > > --- a/xen/include/asm-arm/lpae.h
> > > +++ b/xen/include/asm-arm/lpae.h
> > > @@ -159,6 +159,17 @@ static inline bool lpae_is_superpage(lpae_t pte,
> > > unsigned int level)
> > >   #define lpae_get_mfn(pte)    (_mfn((pte).walk.base))
> > >   #define lpae_set_mfn(pte, mfn)  ((pte).walk.base = mfn_x(mfn))
> > >   +/* Generate an array @var containing the offset for each level from
> > > @addr */
> > > +#define DECLARE_OFFSETS(var, addr)          \
> > > +    const unsigned int var[4] = {           \
> > > +        zeroeth_table_offset(addr),         \
> > > +        first_table_offset(addr),           \
> > > +        second_table_offset(addr),          \
> > > +        third_table_offset(addr)            \
> > > +    }
> > > +
> > > +#endif /* __ASSEMBLY__ */
> > > +
> > >   /*
> > >    * AArch64 supports pages with different sizes (4K, 16K, and 64K).
> > >    * Provide a set of generic helpers that will compute various
> > > @@ -190,17 +201,6 @@ static inline bool lpae_is_superpage(lpae_t pte,
> > > unsigned int level)
> > >   #define LPAE_TABLE_INDEX_GS(gs, lvl, addr)   \
> > >       (((addr) >> LEVEL_SHIFT_GS(gs, lvl)) & LPAE_ENTRY_MASK_GS(gs))
> > >   -/* Generate an array @var containing the offset for each level from
> > > @addr */
> > > -#define DECLARE_OFFSETS(var, addr)          \
> > > -    const unsigned int var[4] = {           \
> > > -        zeroeth_table_offset(addr),         \
> > > -        first_table_offset(addr),           \
> > > -        second_table_offset(addr),          \
> > > -        third_table_offset(addr)            \
> > > -    }
> > > -
> > > -#endif /* __ASSEMBLY__ */
> > > -
> > >   /*
> > >    * These numbers add up to a 48-bit input address space.
> > >    *
> > > @@ -211,26 +211,35 @@ static inline bool lpae_is_superpage(lpae_t pte,
> > > unsigned int level)
> > >    * therefore 39-bits are sufficient.
> > >    */
> > >   -#define LPAE_SHIFT      9
> > > -#define LPAE_ENTRIES    (_AC(1,U) << LPAE_SHIFT)
> > > -#define LPAE_ENTRY_MASK (LPAE_ENTRIES - 1)
> > > -
> > > -#define THIRD_SHIFT    (PAGE_SHIFT)
> > > -#define THIRD_ORDER    (THIRD_SHIFT - PAGE_SHIFT)
> > > -#define THIRD_SIZE     (_AT(paddr_t, 1) << THIRD_SHIFT)
> > > -#define THIRD_MASK     (~(THIRD_SIZE - 1))
> > > -#define SECOND_SHIFT   (THIRD_SHIFT + LPAE_SHIFT)
> > > -#define SECOND_ORDER   (SECOND_SHIFT - PAGE_SHIFT)
> > > -#define SECOND_SIZE    (_AT(paddr_t, 1) << SECOND_SHIFT)
> > > -#define SECOND_MASK    (~(SECOND_SIZE - 1))
> > > -#define FIRST_SHIFT    (SECOND_SHIFT + LPAE_SHIFT)
> > > -#define FIRST_ORDER    (FIRST_SHIFT - PAGE_SHIFT)
> > > -#define FIRST_SIZE     (_AT(paddr_t, 1) << FIRST_SHIFT)
> > > -#define FIRST_MASK     (~(FIRST_SIZE - 1))
> > > -#define ZEROETH_SHIFT  (FIRST_SHIFT + LPAE_SHIFT)
> > > -#define ZEROETH_ORDER  (ZEROETH_SHIFT - PAGE_SHIFT)
> > > -#define ZEROETH_SIZE   (_AT(paddr_t, 1) << ZEROETH_SHIFT)
> > > -#define ZEROETH_MASK   (~(ZEROETH_SIZE - 1))
> > 
> > Should we add a one-line in-code comment saying that the definitions
> > below are for 4KB pages? It is not immediately obvious any longer.
> 
> Because they are not meant to be for 4KB pages. They are meant to be for Xen
> page size.
> 
> Today, it is always 4KB but I would like the Xen code to not rely on that.
> 
> I can clarify it in an in-code comment.

That would help, I think.


> > > +#define LPAE_SHIFT          LPAE_SHIFT_GS(PAGE_SHIFT)
> > > +#define LPAE_ENTRIES        LPAE_ENTRIES_GS(PAGE_SHIFT)
> > > +#define LPAE_ENTRY_MASK     LPAE_ENTRY_MASK_GS(PAGE_SHIFT)
> > > 
> > > +#define LEVEL_SHIFT(lvl)    LEVEL_SHIFT_GS(PAGE_SHIFT, lvl)
> > > +#define LEVEL_ORDER(lvl)    LEVEL_ORDER_GS(PAGE_SHIFT, lvl)
> > > +#define LEVEL_SIZE(lvl)     LEVEL_SIZE_GS(PAGE_SHIFT, lvl)
> > > +#define LEVEL_MASK(lvl)     (~(LEVEL_SIZE(lvl) - 1))
> > 
> > I would avoid adding these 4 macros. It would be OK if they were just
> > used within this file but lpae.h is a header: they could end up being used
> > anywhere in the xen/ code and they have a very generic name. My
> > suggestion would be to skip them and just do:
> 
> Those macros will be used in follow-up patches. They are pretty useful to
> avoid introducing a static array with the different information for each level.
> 
> Would prefix them with XEN_ be better?

Maybe. The concern I have is that there are multiple page granularities
(4kb, 16kb, etc) and multiple page sizes (4kb, 2mb, etc). If I just see
LEVEL_ORDER it is not immediately obvious what granularity and what size
we are talking about.

I think using a name that makes it clear that they are referring to Xen
pages, currently 4kb, would make sense. Or maybe an in-code comment
would be sufficient.

I don't have a great suggestion here so I'll leave it to you. I am also
OK to keep them as is.


> > #define THIRD_SHIFT         LEVEL_SHIFT_GS(PAGE_SHIFT, 3)
> > 
> > etc.
> > 
> > 
> > > +/* Convenience aliases */
> > > +#define THIRD_SHIFT         LEVEL_SHIFT(3)
> > > +#define THIRD_ORDER         LEVEL_ORDER(3)
> > > +#define THIRD_SIZE          LEVEL_SIZE(3)
> > > +#define THIRD_MASK          LEVEL_MASK(3)
> > > +
> > > +#define SECOND_SHIFT        LEVEL_SHIFT(2)
> > > +#define SECOND_ORDER        LEVEL_ORDER(2)
> > > +#define SECOND_SIZE         LEVEL_SIZE(2)
> > > +#define SECOND_MASK         LEVEL_MASK(2)
> > > +
> > > +#define FIRST_SHIFT         LEVEL_SHIFT(1)
> > > +#define FIRST_ORDER         LEVEL_ORDER(1)
> > > +#define FIRST_SIZE          LEVEL_SIZE(1)
> > > +#define FIRST_MASK          LEVEL_MASK(1)
> > > +
> > > +#define ZEROETH_SHIFT       LEVEL_SHIFT(0)
> > > +#define ZEROETH_ORDER       LEVEL_ORDER(0)
> > > +#define ZEROETH_SIZE        LEVEL_SIZE(0)
> > > +#define ZEROETH_MASK        LEVEL_MASK(0)
> > >     /* Calculate the offsets into the pagetables for a given VA */
> > >   #define zeroeth_linear_offset(va) ((va) >> ZEROETH_SHIFT)
> > > -- 
> > > 2.17.1
> > > 
> 
> Cheers,
> 
> -- 
> Julien Grall
> 


From xen-devel-bounces@lists.xenproject.org Wed May 12 21:41:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 21:41:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126406.237971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgwbw-0000XP-9L; Wed, 12 May 2021 21:41:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126406.237971; Wed, 12 May 2021 21:41:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgwbw-0000XI-61; Wed, 12 May 2021 21:41:44 +0000
Received: by outflank-mailman (input) for mailman id 126406;
 Wed, 12 May 2021 21:41:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgwbv-0000XC-B6
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 21:41:43 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6c411ff8-8018-4bdb-8405-01f5c808961d;
 Wed, 12 May 2021 21:41:42 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 30AA161352;
 Wed, 12 May 2021 21:41:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6c411ff8-8018-4bdb-8405-01f5c808961d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620855701;
	bh=Asqt57QtsFlKxqfIvSBVd659UPg481sw4PcabZfvJkM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=ECDd2Z3/uSrQ9clQvdlHt+tDsl2I0A3ri60a40SAhzOZTMRfOxJZuRUBnu0UyomTg
	 btBhFibLAFeleOtREuS16+0E3DfG0uwfvO4hGfqFfww8DDll8CS6RYcFx9r7LouaMl
	 611Oj85HYyFz2ykZxT+h9GbwX6YtTtFp4qeqC0W8aVyUt3fvmematGFljXyioxY3m1
	 qZRVgXKcPrBHWeh5GPCddKeayRrVL6rXEf1XAQbqx4WGiwF/DUmuQ9KLKiD/CiQsZx
	 d061mGZN4FBnkE6ct3Ym9ICXnv4+3wXm9h/wDJK0eEaSPfjQrtuHFXk9LAFDxCQKdm
	 ZJ+FYZV2smtCQ==
Date: Wed, 12 May 2021 14:41:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFCv2 07/15] xen/arm: mm: Re-implement early_fdt_map()
 using map_pages_to_xen()
In-Reply-To: <20210425201318.15447-8-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105121437501.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-8-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that map_pages_to_xen() has been extended to support 2MB mappings,
> we can replace the create_mappings() calls by map_pages_to_xen() calls.
> 
> The mapping can also be marked read-only as Xen has no business
> modifying the host Device Tree.

I think that's good. Just FYI there is some work at Xilinx to make
changes to the device tree at runtime but we'll cross that bridge when
we come to it.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - Add my AWS signed-off-by
>         - Fix typo in the commit message
> ---
>  xen/arch/arm/mm.c | 18 +++++++++++++-----
>  1 file changed, 13 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 2cbfbe25240e..8fac24d80086 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -558,6 +558,7 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>      paddr_t offset;
>      void *fdt_virt;
>      uint32_t size;
> +    int rc;
>  
>      /*
>       * Check whether the physical FDT address is set and meets the minimum
> @@ -573,8 +574,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>      /* The FDT is mapped using 2MB superpage */
>      BUILD_BUG_ON(BOOT_FDT_VIRT_START % SZ_2M);
>  
> -    create_mappings(xen_second, BOOT_FDT_VIRT_START, paddr_to_pfn(base_paddr),
> -                    SZ_2M >> PAGE_SHIFT, SZ_2M);
> +    rc = map_pages_to_xen(BOOT_FDT_VIRT_START, maddr_to_mfn(base_paddr),
> +                          SZ_2M >> PAGE_SHIFT,
> +                          PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
> +    if ( rc )
> +        panic("Unable to map the device-tree.\n");
> +
>  
>      offset = fdt_paddr % SECOND_SIZE;
>      fdt_virt = (void *)BOOT_FDT_VIRT_START + offset;
> @@ -588,9 +593,12 @@ void * __init early_fdt_map(paddr_t fdt_paddr)
>  
>      if ( (offset + size) > SZ_2M )
>      {
> -        create_mappings(xen_second, BOOT_FDT_VIRT_START + SZ_2M,
> -                        paddr_to_pfn(base_paddr + SZ_2M),
> -                        SZ_2M >> PAGE_SHIFT, SZ_2M);
> +        rc = map_pages_to_xen(BOOT_FDT_VIRT_START + SZ_2M,
> +                              maddr_to_mfn(base_paddr + SZ_2M),
> +                              SZ_2M >> PAGE_SHIFT,
> +                              PAGE_HYPERVISOR_RO | _PAGE_BLOCK);
> +        if ( rc )
> +            panic("Unable to map the device-tree\n");
>      }
>  
>      return fdt_virt;
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 12 22:00:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 22:00:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126430.238001 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgwuE-0004LM-9C; Wed, 12 May 2021 22:00:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126430.238001; Wed, 12 May 2021 22:00:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgwuE-0004LF-5o; Wed, 12 May 2021 22:00:38 +0000
Received: by outflank-mailman (input) for mailman id 126430;
 Wed, 12 May 2021 22:00:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgwuC-0004L9-Ro
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 22:00:36 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0dcb13c4-d187-4310-af71-02e1ea9606a3;
 Wed, 12 May 2021 22:00:36 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id AA374613BD;
 Wed, 12 May 2021 22:00:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0dcb13c4-d187-4310-af71-02e1ea9606a3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620856835;
	bh=uZq6wy4yE4KAlGFRiyHroM4bLsXCaJgs6J2sO/Z2u7I=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=KxcS5TBxEfCtc2xNmT7JxQF3oDdwUigV4axoiZpuy1phOYJpnG09y7His5vGGCFWb
	 IxMSlbLNyQyzT1Zj3w0CAAup08JsI8anU+ec3zkaCN0JGHzPeGeORFHoMQ7zLLiJO7
	 JzRu5+UDDolVPM21JexGYC/PgOU57SgXM818rGXPpgsy55clelRzbYAZz1kkpXQoKR
	 tb9anNKNZF3t5ig4tfN3G5gvT5pK/GQNfev4t5QKq7yN0KxjjyePrwKrIFpYV/OmHh
	 wtBuWXsXGjaMvgXvtbJ05UXOBsOnGRsY2PvmtmHOH7oxahnKJ1gJzdENVgmjE6t/aw
	 D+HLb5Rp3Ka/Q==
Date: Wed, 12 May 2021 15:00:33 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 08/15] xen/arm32: mm: Check if the virtual address
 is shared before updating it
In-Reply-To: <20210425201318.15447-9-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105121448090.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-9-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Only the first 2GB of the virtual address space is shared between all
> the page-tables on Arm32.
> 
> There is a long outstanding TODO in xen_pt_update() stating that the
> function is can only work with shared mapping. Nobody has ever called
           ^ remove

> the function with a private mapping; however, as we add more callers
> there is a risk of messing things up.
> 
> Introduce a new define to mark the ened of the shared mappings and use
                                     ^end

> it in xen_pt_update() to verify if the address is correct.
> 
> Note that on Arm64, all the mappings are shared. Some compilers may
> complain about an always-true check, so the new define is not introduced
> for arm64 and the code is protected with an #ifdef.
 
On arm64 we could maybe define SHARED_VIRT_END to an arbitrarily large
value, such as:

#define SHARED_VIRT_END (1UL<<48)

or:

#define SHARED_VIRT_END DIRECTMAP_VIRT_END

?


> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/mm.c            | 11 +++++++++--
>  xen/include/asm-arm/config.h |  4 ++++
>  2 files changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 8fac24d80086..5c17cafff847 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1275,11 +1275,18 @@ static int xen_pt_update(unsigned long virt,
>       * For arm32, page-tables are different on each CPUs. Yet, they share
>       * some common mappings. It is assumed that only common mappings
>       * will be modified with this function.
> -     *
> -     * XXX: Add a check.
>       */
>      const mfn_t root = virt_to_mfn(THIS_CPU_PGTABLE);
>  
> +#ifdef SHARED_VIRT_END
> +    if ( virt > SHARED_VIRT_END ||
> +         (SHARED_VIRT_END - virt) < nr_mfns )

The following would be sufficient, right?

    if ( virt + nr_mfns > SHARED_VIRT_END )


> +    {
> +        mm_printk("Trying to map outside of the shared area.\n");
> +        return -EINVAL;
> +    }
> +#endif
> +
>      /*
>       * The hardware was configured to forbid mapping both writeable and
>       * executable.
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index c7b77912013e..85d4a510ce8a 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -137,6 +137,10 @@
>  
>  #define XENHEAP_VIRT_START     _AT(vaddr_t,0x40000000)
>  #define XENHEAP_VIRT_END       _AT(vaddr_t,0x7fffffff)
> +
> +/* The first 2GB is always shared between all the page-tables. */
> +#define SHARED_VIRT_END        _AT(vaddr_t, 0x7fffffff)
> +
>  #define DOMHEAP_VIRT_START     _AT(vaddr_t,0x80000000)
>  #define DOMHEAP_VIRT_END       _AT(vaddr_t,0xffffffff)
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 12 22:07:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 22:07:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126434.238013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgx0T-0005FF-Vr; Wed, 12 May 2021 22:07:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126434.238013; Wed, 12 May 2021 22:07:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgx0T-0005F8-Sx; Wed, 12 May 2021 22:07:05 +0000
Received: by outflank-mailman (input) for mailman id 126434;
 Wed, 12 May 2021 22:07:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgx0R-0005F2-SL
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 22:07:03 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6267f5b-bd3a-4848-b317-64ab73a25fd7;
 Wed, 12 May 2021 22:07:03 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1C87C613AA;
 Wed, 12 May 2021 22:07:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6267f5b-bd3a-4848-b317-64ab73a25fd7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620857222;
	bh=hANkI4E4tNm24CJOT0/c7D0tOgMRUGt021MU/plpymE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=exiaxCv9qX58Cxft4/eqVcnAfA8WMg/sv43qX1+9o892F2kQuJt4w2NoOXrYmwPIa
	 jMkHHE7YvYny92pYoNXITjs03OJ5SHx8WEVHtxP7exeDkN9/aVPwugENJUKLj08j/+
	 WsfCJmbU8Be4tFAJqTQwnnuAA7vHIFsfNS2zDPSkj74F7FdYtHoDC4v0IoRKGNzrvS
	 G00yNWD+ztnhyNjVy1A/YHzIe7s2ey681xU4NblfIKfr70+Q7Tt6TlLEMgOSlVRJgc
	 V/kg0ooZZAMUSUsadkT6umE6quRLAkpOEt0RudsinWz8+mr/TbY125anudW+yqjaup
	 gkBvunzWoTUXw==
Date: Wed, 12 May 2021 15:07:01 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 09/15] xen/arm32: mm: Re-implement setup_xenheap_mappings()
 using map_pages_to_xen()
In-Reply-To: <20210425201318.15447-10-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105121506300.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-10-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; boundary="8323329-1423833101-1620857222=:5018"

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-1423833101-1620857222=:5018
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8BIT

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Now that map_pages_to_xen() has been extended to support 2MB mappings,
> we can replace the create_mappings() call by map_pages_to_xen() call.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - New patch
> 
>     TODOs:
>         - add support for contiguous mapping
> ---
>  xen/arch/arm/mm.c | 7 ++++++-
>  1 file changed, 6 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 5c17cafff847..19ecf73542ce 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -806,7 +806,12 @@ void mmu_init_secondary_cpu(void)
>  void __init setup_xenheap_mappings(unsigned long base_mfn,
>                                     unsigned long nr_mfns)
>  {
> -    create_mappings(xen_second, XENHEAP_VIRT_START, base_mfn, nr_mfns, MB(32));
> +    int rc;
> +
> +    rc = map_pages_to_xen(XENHEAP_VIRT_START, base_mfn, nr_mfns,
> +                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
> +    if ( rc )
> +        panic("Unable to setup the xenheap mappings.\n");
>  
>      /* Record where the xenheap is, for translation routines. */
>      xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;

I get the following build error:

mm.c: In function ‘setup_xenheap_mappings’:
mm.c:811:47: error: incompatible type for argument 2 of ‘map_pages_to_xen’
     rc = map_pages_to_xen(XENHEAP_VIRT_START, base_mfn, nr_mfns,
                                               ^~~~~~~~
In file included from mm.c:24:0:
/local/repos/xen-upstream/xen/include/xen/mm.h:89:5: note: expected ‘mfn_t {aka struct <anonymous>}’ but argument is of type ‘long unsigned int’
 int map_pages_to_xen(
     ^~~~~~~~~~~~~~~~

I think base_mfn needs to be converted to mfn_t
--8323329-1423833101-1620857222=:5018--


From xen-devel-bounces@lists.xenproject.org Wed May 12 22:16:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 22:16:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126439.238025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgx9H-0006tz-R5; Wed, 12 May 2021 22:16:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126439.238025; Wed, 12 May 2021 22:16:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgx9H-0006ts-Nz; Wed, 12 May 2021 22:16:11 +0000
Received: by outflank-mailman (input) for mailman id 126439;
 Wed, 12 May 2021 22:16:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgx9F-0006tm-JH
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 22:16:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgx9D-0004dq-KC; Wed, 12 May 2021 22:16:07 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgx9D-0006E2-E6; Wed, 12 May 2021 22:16:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=/Rnz7krNnHyU5Ry0uAiKBQ2Yb01yUAQ3dItD4+uNPXQ=; b=2EinrVlU+CvbOMnyLYV04tc4WX
	ziGDwPXsGYIvCcTf9oAOwM0gFJeJ4luuk3lARk2HJLf7rYy9aCUaX5+5vOCpeMc/KK7jHRkWVzdPo
	TkJfljBp4YrGC4DO6Wr2IZkre4/IcsuynzQ7zC6HYqq4uLjooOJWyPE7rNgqf5nCy0wA=;
Subject: Re: [PATCH RFCv2 02/15] xen/arm: lpae: Use the generic helpers to
 defined the Xen PT helpers
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-3-julien@xen.org>
 <alpine.DEB.2.21.2105111515470.5018@sstabellini-ThinkPad-T480s>
 <94e364a7-de40-93ab-6cde-a2f493540439@xen.org>
 <alpine.DEB.2.21.2105121425500.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <e834b447-46c2-14fe-a39c-209d4d6ca5fe@xen.org>
Date: Wed, 12 May 2021 23:16:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105121425500.5018@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 12/05/2021 22:30, Stefano Stabellini wrote:
> On Wed, 12 May 2021, Julien Grall wrote:
>>>> +#define LPAE_SHIFT          LPAE_SHIFT_GS(PAGE_SHIFT)
>>>> +#define LPAE_ENTRIES        LPAE_ENTRIES_GS(PAGE_SHIFT)
>>>> +#define LPAE_ENTRY_MASK     LPAE_ENTRY_MASK_GS(PAGE_SHIFT)
>>>>
>>>> +#define LEVEL_SHIFT(lvl)    LEVEL_SHIFT_GS(PAGE_SHIFT, lvl)
>>>> +#define LEVEL_ORDER(lvl)    LEVEL_ORDER_GS(PAGE_SHIFT, lvl)
>>>> +#define LEVEL_SIZE(lvl)     LEVEL_SIZE_GS(PAGE_SHIFT, lvl)
>>>> +#define LEVEL_MASK(lvl)     (~(LEVEL_SIZE(lvl) - 1))
>>>
>>> I would avoid adding these 4 macros. It would be OK if they were just
> >> used within this file but lpae.h is a header: they could end up being used
>>> anywhere in the xen/ code and they have a very generic name. My
>>> suggestion would be to skip them and just do:
>>
>> Those macros will be used in follow-up patches. They are pretty useful to
> >> avoid introducing a static array with the different information for each level.
>>
>> Would prefix them with XEN_ be better?
> 
> Maybe. The concern I have is that there are multiple page granularities
> (4kb, 16kb, etc) and multiple page sizes (4kb, 2mb, etc). If I just see
> LEVEL_ORDER it is not immediately obvious what granularity and what size
> we are talking about.

I am a bit puzzled by your answer. AFAIU, you are happy with the 
existing macros (THIRD_*, SECOND_*) but not with the new macros.

In reality, there is no difference because THIRD_* doesn't tell you the 
exact size but only "this is a level 3 mapping".

So can you clarify what you are after? IOW, are you asking for a rework 
of the current naming scheme?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 12 22:18:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 22:18:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126445.238037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgxBj-0007W9-9J; Wed, 12 May 2021 22:18:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126445.238037; Wed, 12 May 2021 22:18:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgxBj-0007W2-5f; Wed, 12 May 2021 22:18:43 +0000
Received: by outflank-mailman (input) for mailman id 126445;
 Wed, 12 May 2021 22:18:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgxBi-0007Vw-LU
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 22:18:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgxBg-0004fq-Ui; Wed, 12 May 2021 22:18:40 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgxBg-0006K6-Ov; Wed, 12 May 2021 22:18:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=fG92JyapnWWZhAVzqQe7939190PRNkzzWbXxx+gTR/c=; b=kACCT2zp03nvUVWCVdEz+IKPLN
	IcKbuU8q9wdkLevAtctsIHedQVXNYlMNw2NUh9VJiuo9F2EEjJ66YTi3s2Swww+PLiVcuTMa9nmRa
	kKQcZ/Ts7sOz5ttSVaLaGttbDVxUcg4dvmJvc1CoPJ46JJICUb2vuMODVCJHYUZ7CFHk=;
Subject: Re: [PATCH RFCv2 07/15] xen/arm: mm: Re-implement early_fdt_map()
 using map_pages_to_xen()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-8-julien@xen.org>
 <alpine.DEB.2.21.2105121437501.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <68422df9-014e-7acb-f10f-f605a7233f40@xen.org>
Date: Wed, 12 May 2021 23:18:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105121437501.5018@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 12/05/2021 22:41, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> Now that map_pages_to_xen() has been extended to support 2MB mappings,
>> we can replace the create_mappings() calls by map_pages_to_xen() calls.
>>
>> The mapping can also be marked read-only as Xen has no business
>> modifying the host Device Tree.
> 
> I think that's good. Just FYI there is some work at Xilinx to make
> changes to the device tree at runtime but we'll cross that bridge when
> we come to it.

This particular mapping is only used during early boot. After the DT has 
been unflattened, this region is unmapped and the physical memory released.

So if the DT needs to be modified at runtime, then you would most likely 
want to modify the unflattened version.

> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 12 22:23:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 22:23:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126452.238049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgxGJ-0000bv-Rc; Wed, 12 May 2021 22:23:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126452.238049; Wed, 12 May 2021 22:23:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgxGJ-0000bo-O9; Wed, 12 May 2021 22:23:27 +0000
Received: by outflank-mailman (input) for mailman id 126452;
 Wed, 12 May 2021 22:23:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lgxGI-0000bi-To
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 22:23:26 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgxGH-0004l2-K6; Wed, 12 May 2021 22:23:25 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lgxGH-0006mh-EF; Wed, 12 May 2021 22:23:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=YgBQnV6iBVWQl1Jac0X4gn8+2SozyVSztoo5d0oWYGY=; b=gGmZAku2Z5rMgcH4WZGsorB3fr
	Ej7wMoaF9xdx07kzZ9v9dYkCyNujdifinwxvtSn8Ot35lofHLGn81T+Q4vlV1PAAjkYSP5e5FPjlQ
	lu8Hxsvyha3npIz+1SpTR4zzsRr4JVh0GrEJXg+PLYmkKZTpYXj2vqHM1uRn1gWuDkV8=;
Subject: Re: [PATCH RFCv2 08/15] xen/arm32: mm: Check if the virtual address
 is shared before updating it
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-9-julien@xen.org>
 <alpine.DEB.2.21.2105121448090.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <caec9741-8c0e-b80a-1020-c985beb1e100@xen.org>
Date: Wed, 12 May 2021 23:23:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105121448090.5018@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 12/05/2021 23:00, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Only the first 2GB of the virtual address space is shared between all
>> the page-tables on Arm32.
>>
>> There is a long outstanding TODO in xen_pt_update() stating that the
>> function is can only work with shared mapping. Nobody has ever called
>             ^ remove
> 
>> the function with private mapping, however as we add more callers
>> there is a risk to mess things up.
>>
>> Introduce a new define to mark the ened of the shared mappings and use
>                                       ^end
> 
>> it in xen_pt_update() to verify if the address is correct.
>>
>> Note that on Arm64, all the mappings are shared. Some compiler may
>> complain about an always true check, so the new define is not introduced
>> for arm64 and the code is protected with an #ifdef.
>   
> On arm64 we could maybe define SHARED_VIRT_END to an arbitrarely large
> value, such as:
> 
> #define SHARED_VIRT_END (1UL<<48)
> 
> or:
> 
> #define SHARED_VIRT_END DIRECTMAP_VIRT_END
> 
> ?

I thought about it, but I didn't want to define it to an arbitrary 
value... I felt it was better not to define it.

> 
> 
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>      Changes in v2:
>>          - New patch
>> ---
>>   xen/arch/arm/mm.c            | 11 +++++++++--
>>   xen/include/asm-arm/config.h |  4 ++++
>>   2 files changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 8fac24d80086..5c17cafff847 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -1275,11 +1275,18 @@ static int xen_pt_update(unsigned long virt,
>>        * For arm32, page-tables are different on each CPUs. Yet, they share
>>        * some common mappings. It is assumed that only common mappings
>>        * will be modified with this function.
>> -     *
>> -     * XXX: Add a check.
>>        */
>>       const mfn_t root = virt_to_mfn(THIS_CPU_PGTABLE);
>>   
>> +#ifdef SHARED_VIRT_END
>> +    if ( virt > SHARED_VIRT_END ||
>> +         (SHARED_VIRT_END - virt) < nr_mfns )
> 
> The following would be sufficient, right?
> 
>      if ( virt + nr_mfns > SHARED_VIRT_END )

This would not protect against an overflow. So I think it is best if we 
keep my version.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 12 22:44:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 22:44:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126457.238061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgxav-0003QM-J4; Wed, 12 May 2021 22:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126457.238061; Wed, 12 May 2021 22:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgxav-0003QF-G7; Wed, 12 May 2021 22:44:45 +0000
Received: by outflank-mailman (input) for mailman id 126457;
 Wed, 12 May 2021 22:44:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=codm=KH=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lgxau-0003Q9-2J
 for xen-devel@lists.xenproject.org; Wed, 12 May 2021 22:44:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ed9dbc1-406f-4611-88b3-7717b297a739;
 Wed, 12 May 2021 22:44:43 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 1160C611CC;
 Wed, 12 May 2021 22:44:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ed9dbc1-406f-4611-88b3-7717b297a739
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620859482;
	bh=N/UdgbK9ys6InuUioovZfBQiICN8cTV6Zh6vhok72SE=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=kPF2fc1wU5K6NsTLLVO2yXTG1qFoYY78Hb58m8h+shfe24NhusuWEk0YduR1tO7N+
	 Ufc5R8P6/qG3Qa1hLXn4AtuRa0KfN2rPqFC3PcV8gvatkPKIxavIQZ8p1Q2yebOyaZ
	 cNlN6Mr5huVwLFtcqN/IWDgJ3FNOJ/UwQjkjdmPfF3fqEPEqlKALMcUGwivURakWGf
	 iaLozOvZdi+EYcxLVRVnVqTSnDICOiSYkN4BRjxxIgtvFcjBjzvMTpwtFNLaQFfzNX
	 bET+DtyZlTCydaDL68VpEXGqrR10KBcdZKSvGZZbU9/FFyXxGTKNAu8pTIl+kNi4Oo
	 bPrtluepGdlmg==
Date: Wed, 12 May 2021 15:44:41 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 10/15] xen/arm: mm: Allocate xen page tables in
 domheap rather than xenheap
In-Reply-To: <20210425201318.15447-11-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105121529180.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-11-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> xen_{un,}map_table() already uses the helper to map/unmap pages
> on-demand (note this is currently a NOP on arm64). So switching to
> domheap doesn't have any disadvantage.
> 
> But it has the benefits:
>     - keeping the page tables unmapped if an arch decides to do so
>     - reducing xenheap use on arm32, where the xenheap can be pretty small
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Thanks for the patch. It looks OK but let me ask a couple of questions
to clarify my doubts.

This change should have no impact to arm64, right?

For arm32, I wonder why we were using map_domain_page before in
xen_map_table: it wasn't necessary, was it? In fact, one could even say
that it was wrong?


> ---
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/mm.c | 36 +++++++++++++++++++++---------------
>  1 file changed, 21 insertions(+), 15 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 19ecf73542ce..ae5a07ea956b 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -969,21 +969,6 @@ void *ioremap(paddr_t pa, size_t len)
>      return ioremap_attr(pa, len, PAGE_HYPERVISOR_NOCACHE);
>  }
>  
> -static int create_xen_table(lpae_t *entry)
> -{
> -    void *p;
> -    lpae_t pte;
> -
> -    p = alloc_xenheap_page();
> -    if ( p == NULL )
> -        return -ENOMEM;
> -    clear_page(p);
> -    pte = mfn_to_xen_entry(virt_to_mfn(p), MT_NORMAL);
> -    pte.pt.table = 1;
> -    write_pte(entry, pte);
> -    return 0;
> -}
> -
>  static lpae_t *xen_map_table(mfn_t mfn)
>  {
>      /*
> @@ -1024,6 +1009,27 @@ static void xen_unmap_table(const lpae_t *table)
>      unmap_domain_page(table);
>  }
>  
> +static int create_xen_table(lpae_t *entry)
> +{
> +    struct page_info *pg;
> +    void *p;
> +    lpae_t pte;
> +
> +    pg = alloc_domheap_page(NULL, 0);
> +    if ( pg == NULL )
> +        return -ENOMEM;
> +
> +    p = xen_map_table(page_to_mfn(pg));
> +    clear_page(p);
> +    xen_unmap_table(p);
> +
> +    pte = mfn_to_xen_entry(page_to_mfn(pg), MT_NORMAL);
> +    pte.pt.table = 1;
> +    write_pte(entry, pte);
> +
> +    return 0;
> +}
> +
>  #define XEN_TABLE_MAP_FAILED 0
>  #define XEN_TABLE_SUPER_PAGE 1
>  #define XEN_TABLE_NORMAL_PAGE 2
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Wed May 12 23:05:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 12 May 2021 23:05:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126465.238073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgxvJ-0006EZ-FY; Wed, 12 May 2021 23:05:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126465.238073; Wed, 12 May 2021 23:05:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgxvJ-0006ES-CO; Wed, 12 May 2021 23:05:49 +0000
Received: by outflank-mailman (input) for mailman id 126465;
 Wed, 12 May 2021 23:05:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgxvI-0006EH-4w; Wed, 12 May 2021 23:05:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgxvH-0005c5-Vs; Wed, 12 May 2021 23:05:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgxvH-0006WG-JI; Wed, 12 May 2021 23:05:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgxvH-0005M9-Ip; Wed, 12 May 2021 23:05:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8TSrwcnYPvMYFfemuWz7dmKyOMgoTtqx7Yx0Vt3wrkM=; b=L9+OnbQ9m0oi6sn4LOg+409AZG
	KIZMYe/ML94bXH54rkFnejBDe24IbQFmoqUrmMRnTy7d6hbL8FBTt5vlK4B64rfTslZ2Q2p1Cda65
	Vhxm34lih33IPYT+iib3qgGID+SImb/+PzywBXy72OMWc2EQ+bCrpxNO1iLuXDtaArPA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161915-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161915: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-pair:guest-start/debian:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-libvirt:guest-start:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f9a576a818044133f8564e0d243ebd97df0b3280
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 12 May 2021 23:05:47 +0000

flight 161915 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161915/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-libvirt      14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-libvirt-xsm  14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-libvirt-pair 25 guest-start/debian      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-libvirt-pair 25 guest-start/debian       fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-armhf-armhf-libvirt     14 guest-start              fail REGR. vs. 152631
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 12 debian-hvm-install fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                f9a576a818044133f8564e0d243ebd97df0b3280
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  265 days
Failing since        152659  2020-08-21 14:07:39 Z  264 days  483 attempts
Testing same since   161907  2021-05-11 16:09:28 Z    1 days    2 attempts

------------------------------------------------------------
491 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                fail    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 149396 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 13 00:37:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 00:37:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126471.238088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgzLP-0008Kt-Uw; Thu, 13 May 2021 00:36:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126471.238088; Thu, 13 May 2021 00:36:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lgzLP-0008Km-RD; Thu, 13 May 2021 00:36:51 +0000
Received: by outflank-mailman (input) for mailman id 126471;
 Thu, 13 May 2021 00:36:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgzLO-0008Kc-Jt; Thu, 13 May 2021 00:36:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgzLO-0007fP-EC; Thu, 13 May 2021 00:36:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lgzLO-0002t9-3C; Thu, 13 May 2021 00:36:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lgzLO-0007Bv-2k; Thu, 13 May 2021 00:36:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CSabRSzM90ObsS4fyngB7cLvBq6NYWafgCbGalePHYk=; b=xSqm1J8U/vpjavn1pVS7ZjAUcf
	07PcuySGDZvO1Ac+vHoH87UiEBq0utnN/6gqo//xXm/CU1LK4CBWuIAQGYE8eVgHBKs4zJVj4NAle
	k7knXaoQiubKxQCvycu/qHPOfXfKGA0yGUmKx60FBJdnjzdZ06bj0+gKuyGKzZ4JlTcM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161923-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161923: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
X-Osstest-Versions-That:
    xen=52b91dad6f43afb0c77325e6d54115c280958e57
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 00:36:50 +0000

flight 161923 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161923/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
baseline version:
 xen                  52b91dad6f43afb0c77325e6d54115c280958e57

Last test of basis   161921  2021-05-12 16:01:36 Z    0 days
Testing same since   161923  2021-05-12 21:01:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   52b91dad6f..43d4cc7d36  43d4cc7d36503bcc3aa2aa6ebea2b7912808f254 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 13 03:56:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 03:56:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126477.238103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh2Sd-0002MJ-AA; Thu, 13 May 2021 03:56:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126477.238103; Thu, 13 May 2021 03:56:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh2Sd-0002MB-2k; Thu, 13 May 2021 03:56:31 +0000
Received: by outflank-mailman (input) for mailman id 126477;
 Thu, 13 May 2021 03:56:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh2Sc-0002M0-9w; Thu, 13 May 2021 03:56:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh2Sc-0008SQ-4G; Thu, 13 May 2021 03:56:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh2Sb-0003uc-Pt; Thu, 13 May 2021 03:56:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lh2Sb-0006Um-P4; Thu, 13 May 2021 03:56:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=66bajcDQwXL60mPypLYIgCPQ2R3Ky3Z/IKEp0Kffy8I=; b=zkLCMEYhGkxhHqjpvKnoOxIkAS
	Bb7G/RCq3BOdTN+Cjvort0CATTfh0oPtubf5cQCxBb1Z1IEyRUs0C7BgBwpzyrPAF8H9a91Vt/YZN
	C8LjrhcactOEtLMtNTGma4xWZV3fddZ1Vz/RSFq+BM9ZcEX0sax2+UuuwrwPHjzZn7yQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161917-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161917: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-examine:reboot:fail:regression
    xen-unstable:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    xen-unstable:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    xen-unstable:test-arm64-arm64-xl:xen-boot:fail:regression
    xen-unstable:test-xtf-amd64-amd64-3:xtf/test-pv32pae-xsa-286:fail:heisenbug
    xen-unstable:test-arm64-arm64-libvirt-xsm:xen-boot:fail:heisenbug
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
X-Osstest-Versions-That:
    xen=982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 03:56:29 +0000

flight 161917 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161917/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 161898
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 161898
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 161898
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 161898
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 161898

Tests which are failing intermittently (not blocking):
 test-xtf-amd64-amd64-3 92 xtf/test-pv32pae-xsa-286 fail in 161909 pass in 161917
 test-arm64-arm64-libvirt-xsm  8 xen-boot                   fail pass in 161909

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 161909 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 161909 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161898
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161898
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161898
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161898
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161898
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 161898
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161898
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161898
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161898
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161898
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161898
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161898
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
baseline version:
 xen                  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06

Last test of basis   161898  2021-05-10 19:06:50 Z    2 days
Testing same since   161904  2021-05-11 10:00:22 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
Author: Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Date:   Fri May 7 01:39:47 2021 +0000

    optee: enable OPTEE_SMC_SEC_CAP_MEMREF_NULL capability
    
    The OP-TEE mediator already has support for NULL memory
    references. It was added in patch 0dbed3ad336 ("optee: allow plain
    TMEM buffers with NULL address"). But it does not propagate the
    OPTEE_SMC_SEC_CAP_MEMREF_NULL capability flag to guests, so a
    well-behaved guest can't use this feature.
    
    Note: the Linux OP-TEE driver honors this capability flag when
    handling buffers from userspace clients, but ignores it when
    working with internal calls. For instance, the
    __optee_enumerate_devices() function uses a NULL argument to get a
    buffer size hint from OP-TEE. This was the reason why "optee:
    allow plain TMEM buffers with NULL address" was introduced in the
    first place.
    
    This patch adds the mentioned capability to the list of known
    capabilities. From the Linux point of view this means that
    userspace clients can use this feature, which is confirmed by the
    OP-TEE test suite:
    
    * regression_1025 Test memref NULL and/or 0 bytes size
    o regression_1025.1 Invalid NULL buffer memref registration
      regression_1025.1 OK
    o regression_1025.2 Input/Output MEMREF Buffer NULL - Size 0 bytes
      regression_1025.2 OK
    o regression_1025.3 Input MEMREF Buffer NULL - Size non 0 bytes
      regression_1025.3 OK
    o regression_1025.4 Input MEMREF Buffer NULL over PTA invocation
      regression_1025.4 OK
      regression_1025 OK
    
    Signed-off-by: Volodymyr Babchuk <volodymyr_babchuk@epam.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 30f34457b20c78b2862b2b16cb26cb4f10a667ad
Author: Julien Grall <jgrall@amazon.com>
Date:   Mon May 10 18:28:16 2021 +0100

    tools/xenstore: Fix indentation in the header of xenstored_control.c
    
    Commit e867af081d94 "tools/xenstore: save new binary for live
    update" appears to have spuriously changed the indentation of the
    first line of the copyright header.
    
    The previous indentation is reinstated so all the lines are
    indented the same.
    
    Reported-by: Bjoern Doebel <doebel@amazon.com>
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>

commit 7e71b1e0affa83c0976c832f254276eeb6e6575c
Author: Julien Grall <jgrall@amazon.com>
Date:   Thu May 6 17:12:23 2021 +0100

    tools/xenstored: Prevent a buffer overflow in dump_state_node_perms()
    
    ASAN reported one issue when Live Updating Xenstored:
    
    =================================================================
    ==873==ERROR: AddressSanitizer: stack-buffer-overflow on address 0x7ffc194f53e0 at pc 0x555c6b323292 bp 0x7ffc194f5340 sp 0x7ffc194f5338
    WRITE of size 1 at 0x7ffc194f53e0 thread T0
        #0 0x555c6b323291 in dump_state_node_perms xen/tools/xenstore/xenstored_core.c:2468
        #1 0x555c6b32746e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1257
        #2 0x555c6b32a702 in dump_state_special_nodes xen/tools/xenstore/xenstored_domain.c:1273
        #3 0x555c6b32ddb3 in lu_dump_state xen/tools/xenstore/xenstored_control.c:521
        #4 0x555c6b32e380 in do_lu_start xen/tools/xenstore/xenstored_control.c:660
        #5 0x555c6b31b461 in call_delayed xen/tools/xenstore/xenstored_core.c:278
        #6 0x555c6b32275e in main xen/tools/xenstore/xenstored_core.c:2357
        #7 0x7f95eecf3d09 in __libc_start_main ../csu/libc-start.c:308
        #8 0x555c6b3197e9 in _start (/usr/local/sbin/xenstored+0xc7e9)
    
    Address 0x7ffc194f53e0 is located in stack of thread T0 at offset 80 in frame
        #0 0x555c6b32713e in dump_state_special_node xen/tools/xenstore/xenstored_domain.c:1232
    
      This frame has 2 object(s):
        [32, 40) 'head' (line 1233)
        [64, 80) 'sn' (line 1234) <== Memory access at offset 80 overflows this variable
    
    This is happening because the callers are passing a pointer to a
    variable allocated on the stack. However, the perms field is a
    dynamic array, so Xenstored will end up reading outside of the
    variable.
    
    Rework the code so the permissions are written one by one to the
    fd.
    
    Fixes: ed6eebf17d2c ("tools/xenstore: dump the xenstore state for live update")
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Juergen Gross <jgross@suse.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

commit 3f568354a95ee2f0c9c553efb94c734fa6848af0
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:07 2021 +0200

    arm/time,vtimer: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit
    or 64-bit. MSR/MRS expect 64-bit values, so we should get rid of
    the READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We
    should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t. Even though many AArch64
    registers have their upper 32 bits reserved, that does not mean
    they can't be widened in the future.
    
    Modify the type of the vtimer structure's ctl member to register_t.
    
    Add a CNTFRQ_MASK macro containing the mask for the timer clock
    frequency field of the CNTFRQ_EL0 register.
    
    Modify the CNTx_CTL_* macros to return unsigned long instead of
    unsigned int, as ctl is now of type register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 86faae561cd8eee819e0f42ba7a18dd180aa49d1
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:06 2021 +0200

    arm/page: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit
    or 64-bit. MSR/MRS expect 64-bit values, so we should get rid of
    the READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We
    should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t. Even though many AArch64
    registers have their upper 32 bits reserved, that does not mean
    they can't be widened in the future.
    
    Modify accesses to CTR_EL0 to use READ/WRITE_SYSREG.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 25e5d0c412e0d7420f2aa7fdd71cc39d8ed6c528
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:05 2021 +0200

    xen/arm: Always access SCTLR_EL2 using READ/WRITE_SYSREG()
    
    The Armv8 specification describes this system register as a 64-bit
    value on AArch64 and a 32-bit value on AArch32 (same as ARMv7).
    
    Unfortunately, Xen is accessing the system register using
    READ/WRITE_SYSREG32(), which means the top 32 bits are clobbered.
    
    This is only a latent bug so far because Xen does not yet use the
    top 32 bits.
    
    There is also no change in behavior because arch/arm/arm64/head.S
    initializes SCTLR_EL2 to a sane value with the top 32 bits zeroed.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit 8eb7cc0465fa228064e807aad51eb7428d6d3199
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:04 2021 +0200

    arm/p2m: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit
    or 64-bit. MSR/MRS expect 64-bit values, so we should get rid of
    the READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We
    should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t. Even though many AArch64
    registers have their upper 32 bits reserved, that does not mean
    they can't be widened in the future.
    
    Modify type of vtcr to register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 78e67c99eb3f90c22c8c6ee282ec3a43d2ddccb5
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:03 2021 +0200

    arm/gic: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit
    or 64-bit. MSR/MRS expect 64-bit values, so we should get rid of
    the READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We
    should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t. Even though many AArch64
    registers have their upper 32 bits reserved, that does not mean
    they can't be widened in the future.
    
    Modify the types of the following members of struct gic_v3 to
    register_t:
    - vmcr
    - sre_el1
    - apr0
    - apr1
    
    Add a new macro, GICC_IAR_INTID_MASK, containing the mask for the
    INTID field of the ICC_IAR0/1_EL1 register, as only the first
    23 bits of the IAR contain the interrupt number; the rest are
    RES0. Therefore, take the opportunity to mask bits [23:31], as
    they should not be used for an IRQ number (we don't know how the
    top bits will be used).
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit d55afb1acaffc6047af3cabc3ef4442f313bee2c
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:02 2021 +0200

    arm/gic: Remove member hcr of structure gic_v3
    
    ... as it is never used, even by the patch that introduced it.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Acked-by: Julien Grall <jgrall@amazon.com>

commit b80470c84553808fef3a6803000ceee8a100e63c
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:01 2021 +0200

    arm: Modify type of actlr to register_t
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit
    or 64-bit. MSR/MRS expect 64-bit values, so we should get rid of
    the READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We
    should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t. Even though many AArch64
    registers have their upper 32 bits reserved, that does not mean
    they can't be widened in the future.
    
    The bits of the ACTLR_EL1 system register are implementation
    defined, which means this is possibly a latent bug on current HW,
    as the CPU implementer may already have decided to use the top
    32 bits.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 3fd8336bc599788e5a52a7e63e833b6f03d79fd5
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:43:00 2021 +0200

    arm/domain: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit
    or 64-bit. MSR/MRS expect 64-bit values, so we should get rid of
    the READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We
    should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t. Even though many AArch64
    registers have their upper 32 bits reserved, that does not mean
    they can't be widened in the future.
    
    Modify type of register cntkctl to register_t.
    
    The ThumbEE registers are only usable by a 32-bit domain, and
    therefore we can just store the bottom 32 bits (IOW there is no
    type change). In fact, this could technically be restricted to
    Armv7 HW (the support was dropped retrospectively in Armv8), but
    leave it as-is for now.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>

commit 8990f0eaca139364091109389416455f4f78cd65
Author: Michal Orzel <michal.orzel@arm.com>
Date:   Wed May 5 09:42:59 2021 +0200

    arm64/vfp: Get rid of READ/WRITE_SYSREG32
    
    AArch64 registers are 64-bit whereas AArch32 registers are 32-bit
    or 64-bit. MSR/MRS expect 64-bit values, so we should get rid of
    the READ/WRITE_SYSREG32 helpers in favour of READ/WRITE_SYSREG. We
    should also use the register_t type when reading sysregs, which
    can correspond to uint64_t or uint32_t. Even though many AArch64
    registers have their upper 32 bits reserved, that does not mean
    they can't be widened in the future.
    
    Modify the types of FPCR, FPSR and FPEXC32_EL2 to register_t.
    
    Signed-off-by: Michal Orzel <michal.orzel@arm.com>
    Reviewed-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Thu May 13 06:11:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 06:11:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126489.238127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh4Yf-0001WA-WF; Thu, 13 May 2021 06:10:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126489.238127; Thu, 13 May 2021 06:10:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh4Yf-0001W3-Sy; Thu, 13 May 2021 06:10:53 +0000
Received: by outflank-mailman (input) for mailman id 126489;
 Thu, 13 May 2021 06:10:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Mxh=KI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lh4Ye-0001Vx-Jd
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 06:10:52 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 780289eb-2339-4154-b73a-d4fbc703e19e;
 Thu, 13 May 2021 06:10:50 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 90E7A67373; Thu, 13 May 2021 08:10:48 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 780289eb-2339-4154-b73a-d4fbc703e19e
Date: Thu, 13 May 2021 08:10:48 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	jgross@suse.com, hch@lst.de,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2 1/3] xen/arm: move xen_swiotlb_detect to
 arm/swiotlb-xen.h
Message-ID: <20210513061048.GA26335@lst.de>
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s> <20210512201823.1963-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210512201823.1963-1-sstabellini@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Thu May 13 06:11:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 06:11:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126491.238139 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh4Yy-0001ur-8H; Thu, 13 May 2021 06:11:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126491.238139; Thu, 13 May 2021 06:11:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh4Yy-0001uk-4v; Thu, 13 May 2021 06:11:12 +0000
Received: by outflank-mailman (input) for mailman id 126491;
 Thu, 13 May 2021 06:11:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=4Mxh=KI=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lh4Yx-0001tD-Lz
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 06:11:11 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4ecd145f-da3d-4098-ba6e-29d43bf18a90;
 Thu, 13 May 2021 06:11:10 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id BED7D67373; Thu, 13 May 2021 08:11:08 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ecd145f-da3d-4098-ba6e-29d43bf18a90
Date: Thu, 13 May 2021 08:11:08 +0200
From: Christoph Hellwig <hch@lst.de>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	jgross@suse.com, hch@lst.de,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2 3/3] xen/swiotlb: check if the swiotlb has already
 been initialized
Message-ID: <20210513061108.GB26335@lst.de>
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s> <20210512201823.1963-3-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210512201823.1963-3-sstabellini@kernel.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

Looks good,

Reviewed-by: Christoph Hellwig <hch@lst.de>


From xen-devel-bounces@lists.xenproject.org Thu May 13 08:02:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 08:02:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126744.238156 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6Im-0005PN-Ni; Thu, 13 May 2021 08:02:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126744.238156; Thu, 13 May 2021 08:02:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6Im-0005PG-Kl; Thu, 13 May 2021 08:02:36 +0000
Received: by outflank-mailman (input) for mailman id 126744;
 Thu, 13 May 2021 08:02:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh6Il-0005P3-Gg; Thu, 13 May 2021 08:02:35 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh6Il-0005Io-8Y; Thu, 13 May 2021 08:02:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh6Ik-0006RT-VH; Thu, 13 May 2021 08:02:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lh6Ik-0006vk-Uk; Thu, 13 May 2021 08:02:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7Kl0cXbHiXetWyUNWIZwxumXU5qoOObFVkTviJehvh4=; b=wNDB/0+fIKRoU0NbU8XbLWla3j
	DJZ0j2/AqzyzqLVKcmaLRMqzZtmU7DTH3pDfTbTwYKsYburcQ2lJpMGPZtPgt3mxbr3RVx1sJ9lvW
	mPiDYX8/M5It7lgDGtmvUzsuAzjxJl0DbD8NA8QkT2IdnIhT1Bz9khpa2oxGf+/EK2tc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161927-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161927: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=156315cff4ddee560121328a530b308e1786d73b
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 08:02:34 +0000

flight 161927 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161927/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              156315cff4ddee560121328a530b308e1786d73b
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  307 days
Failing since        151818  2020-07-11 04:18:52 Z  306 days  299 attempts
Testing same since   161927  2021-05-13 04:19:57 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57302 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 13 08:03:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 08:03:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126748.238172 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6K3-000605-3b; Thu, 13 May 2021 08:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126748.238172; Thu, 13 May 2021 08:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6K3-0005zy-0I; Thu, 13 May 2021 08:03:55 +0000
Received: by outflank-mailman (input) for mailman id 126748;
 Thu, 13 May 2021 08:03:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=hyFE=KI=epam.com=prvs=57677d64fc=anastasiia_lukianenko@srs-us1.protection.inumbo.net>)
 id 1lh6K1-0005zo-NM
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 08:03:53 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7504ab8b-3664-4574-a2ce-6ac0f036d1f9;
 Thu, 13 May 2021 08:03:52 +0000 (UTC)
Received: from pps.filterd (m0174678.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14D80gYu027760
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 08:03:51 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2174.outbound.protection.outlook.com [104.47.17.174])
 by mx0a-0039f301.pphosted.com with ESMTP id 38gyq1g37q-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT)
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 08:03:51 +0000
Received: from AM4PR0301MB2273.eurprd03.prod.outlook.com (2603:10a6:200:4d::6)
 by AM0PR03MB4627.eurprd03.prod.outlook.com (2603:10a6:208:bf::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.26; Thu, 13 May
 2021 08:03:48 +0000
Received: from AM4PR0301MB2273.eurprd03.prod.outlook.com
 ([fe80::e190:2560:abbf:5e7d]) by AM4PR0301MB2273.eurprd03.prod.outlook.com
 ([fe80::e190:2560:abbf:5e7d%8]) with mapi id 15.20.4108.035; Thu, 13 May 2021
 08:03:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7504ab8b-3664-4574-a2ce-6ac0f036d1f9
From: Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Andrii Chepurnyi <Andrii_Chepurnyi@epam.com>,
        Oleksandr Andrushchenko
	<Oleksandr_Andrushchenko@epam.com>
Subject: Hand over of the Xen shared info page
Thread-Topic: Hand over of the Xen shared info page
Thread-Index: AQHXR86B8dxtD5lx60CgXee9ciHSoQ==
Date: Thu, 13 May 2021 08:03:48 +0000
Message-ID: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=epam.com;
x-originating-ip: [176.36.213.80]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: ed98ad37-8038-4d13-cea5-08d915e5a39b
x-ms-traffictypediagnostic: AM0PR03MB4627:
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <F03E6A57D6D5874CBE20BEF995ACB0C7@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM4PR0301MB2273.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: ed98ad37-8038-4d13-cea5-08d915e5a39b
X-MS-Exchange-CrossTenant-originalarrivaltime: 13 May 2021 08:03:48.5257
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB4627

Hi all,

The problem described below concerns cases where a shared info page
needs to be handed over from one entity in the system to another, for
example when a bootloader or any other code runs before the guest OS
kernel.

Normally, to map the shared info page, guests allocate a memory page
from their RAM and map the shared info page on top of it; specifically,
they use the XENMAPSPACE_shared_info memory space in the
XENMEM_add_to_physmap hypercall. As the shared info page exists for the
whole lifetime of the guest, this does not hurt the guest by itself.
The mapping becomes a problem only when the page drops out of
accounting, e.g. when the bootloader jumps to Linux without handing the
page over.

Consider the case of the U-Boot bootloader, which already has Xen
support. U-Boot's Xen guest implementation allocates a shared info page
between Xen and the guest domain, using an address in the domain's RAM
address space to create and map the page via the XENMEM_add_to_physmap
hypercall [1].

After U-Boot transfers control to the operating system (Linux, Android,
etc.), the shared info page is still mapped in the domain's address
space, i.e. in its RAM. So, after we leave U-Boot, this page looks like
an ordinary memory page from Linux's point of view while it is still
the shared info page from Xen's point of view. This can lead to
undefined behavior, errors, and so on: Xen may write to the shared info
page, and when Linux later tries to use that page as normal memory,
data corruption can occur.

This happens because the Xen API has no unmap operation that removes an
existing shared info page mapping. The only available hypercall,
XENMEM_remove_from_physmap, creates a hole in the domain's RAM address
space, which may itself crash the guest when it later accesses that
memory.

We noticed this problem, and a workaround was implemented using the
special GUEST_MAGIC memory region [2].

Now we want to implement a proper solution based on GUEST_MAGIC_BASE,
which does not belong to the guest's RAM address space [3]. Following
the example of how the offsets for the console and xenstore pages are
implemented, we can add a new shared_info offset, increase the number
of magic pages [4], and implement the related functionality, so that
there is an API to query the location of that magic page similar to the
existing one for the console PFN and others. This approach would allow
the use of the XENMEM_remove_from_physmap hypercall without creating
gaps in the RAM address space of the Xen guest OS [5].

[1] https://github.com/u-boot/u-boot/blob/master/drivers/xen/hypervisor.c#L141
[2] https://github.com/xen-troops/u-boot/commit/f759b151116af204a5ab02ace82c09300cd6233a
[3] https://github.com/xen-project/xen/blob/master/xen/include/public/arch-arm.h#L432
[4] https://github.com/xen-project/xen/blob/25849c8b16f2a5b7fcd0a823e80a5f1b590291f9/tools/libs/guest/xg_dom_arm.c#L29
[5] https://github.com/xen-troops/u-boot/blob/android-master/drivers/xen/hypervisor.c#L162

Regards,
Anastasiia Lukianenko
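
[Editor's note: the mapping step this message discusses can be sketched
roughly as below. This is a minimal illustration, not the actual U-Boot
code; it assumes the Xen public headers (xen/xen.h, xen/memory.h), a
HYPERVISOR_memory_op() hypercall wrapper supplied by the guest
environment, a PAGE_SHIFT constant, and an identity-mapped gpfn.]

```c
/* Sketch: how a guest maps the Xen shared info page over one of its
 * own RAM frames via XENMEM_add_to_physmap. After the call, the RAM
 * frame at 'gpfn' is backed by the hypervisor's shared info page; the
 * original frame drops out of the guest's accounting, which is the
 * root of the hand-over problem described above. */
#include <xen/xen.h>
#include <xen/memory.h>

static struct shared_info *map_shared_info(unsigned long gpfn)
{
    struct xen_add_to_physmap xatp = {
        .domid = DOMID_SELF,
        .space = XENMAPSPACE_shared_info,
        .idx   = 0,        /* there is only one shared info page */
        .gpfn  = gpfn,     /* guest RAM frame to map it over */
    };

    if (HYPERVISOR_memory_op(XENMEM_add_to_physmap, &xatp) != 0)
        return NULL;

    /* Assumes a flat (identity) mapping, as is typical in a
     * bootloader environment such as U-Boot. */
    return (struct shared_info *)(gpfn << PAGE_SHIFT);
}
```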


From xen-devel-bounces@lists.xenproject.org Thu May 13 08:23:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 08:23:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126758.238184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6ce-0008NS-Sz; Thu, 13 May 2021 08:23:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126758.238184; Thu, 13 May 2021 08:23:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6ce-0008NL-Pr; Thu, 13 May 2021 08:23:08 +0000
Received: by outflank-mailman (input) for mailman id 126758;
 Thu, 13 May 2021 08:23:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh6cd-0008N9-I8; Thu, 13 May 2021 08:23:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh6cd-0005dQ-Bt; Thu, 13 May 2021 08:23:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lh6cd-00079g-0n; Thu, 13 May 2021 08:23:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lh6cd-00066z-0F; Thu, 13 May 2021 08:23:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161918-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 161918: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=16022114de9869743d6304290815cdb8a8c7deaa
X-Osstest-Versions-That:
    linux=b5dbcd05792a4bad2c9bb3c4658c854e72c444b7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 08:23:07 +0000

flight 161918 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161918/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161832
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161832
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161832
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161832
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161832
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161832
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161832
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161832
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161832
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161832
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161832
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                16022114de9869743d6304290815cdb8a8c7deaa
baseline version:
 linux                b5dbcd05792a4bad2c9bb3c4658c854e72c444b7

Last test of basis   161832  2021-05-07 09:10:55 Z    5 days
Testing same since   161905  2021-05-11 12:11:22 Z    1 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adrian Hunter <adrian.hunter@intel.com>
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexander Shishkin <alexander.shishkin@linux.intel.com>
  Aneesh Kumar K.V <aneesh.kumar@linux.ibm.com>
  Anirudh Rayabharam <mail@anirudhrb.com>
  Anson Jacob <Anson.Jacob@amd.com>
  Ard Biesheuvel <ardb@kernel.org>
  Aric Cyr <aric.cyr@amd.com>
  Arnd Bergmann <arnd@arndb.de>
  Artur Petrosyan <Arthur.Petrosyan@synopsys.com>
  Arun Easi <aeasi@marvell.com>
  Avri Altman <avri.altman@wdc.com>
  Bart Van Assche <bvanassche@acm.org>
  Benjamin Block <bblock@linux.ibm.com>
  Bill Wendling <morbo@google.com>
  Bindu Ramamurthy <bindu.r@amd.com>
  Bixuan Cui <cuibixuan@huawei.com>
  Borislav Petkov <bp@suse.de>
  Brendan Peter <bpeter@lytx.com>
  Catalin Marinas <catalin.marinas@arm.com>
  Chaitanya Kulkarni <chaitanya.kulkarni@wdc.com>
  Chanwoo Choi <cw00.choi@samsung.com>
  Chao Yu <yuchao0@huawei.com>
  Charles Keepax <ckeepax@opensource.cirrus.com>
  Chen Jun <chenjun102@huawei.com>
  Christian Brauner <christian.brauner@ubuntu.com>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Colin Ian King <colin.king@canonical.com>
  Daniel Niv <danielniv3@gmail.com>
  Daniel Vetter <daniel.vetter@ffwll.ch>
  Daniel Wheeler <daniel.wheeler@amd.com>
  dann frazier <dann.frazier@canonical.com>
  David Bauer <mail@david-bauer.net>
  David E. Box <david.e.box@linux.intel.com>
  David S. Miller <davem@davemloft.net>
  David Sterba <dsterba@suse.com>
  Davide Caratti <dcaratti@redhat.com>
  Dean Anderson <dean@sensoray.com>
  Dick Kennedy <dick.kennedy@broadcom.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dinh Nguyen <dinguyen@kernel.org>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Dmitry Vyukov <dvyukov@google.com>
  Dmytro Laktyushkin <Dmytro.Laktyushkin@amd.com>
  Don Brace <don.brace@microchip.com>
  dongjian <dongjian@yulong.com>
  DooHyun Hwang <dh0421.hwang@samsung.com>
  Eckhart Mohr <e.mohr@tuxedocomputers.com>
  Eelco Chaudron <echaudro@redhat.com>
  Eric Biggers <ebiggers@google.com>
  Eryk Brol <eryk.brol@amd.com>
  Ewan D. Milne <emilne@redhat.com>
  Felipe Balbi <balbi@kernel.org>
  Fengnan Chang <changfengnan@vivo.com>
  Filipe Manana <fdmanana@suse.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Gao Xiang <hsiangkao@redhat.com>
  Gerd Hoffmann <kraxel@redhat.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Guchun Chen <guchun.chen@amd.com>
  Guenter Roeck <linux@roeck-us.net>
  Guochun Mao <guochun.mao@mediatek.com>
  Hanjun Guo <guohanjun@huawei.com>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Hansem Ro <hansemro@outlook.com>
  Harald Freudenberger <freude@linux.ibm.com>
  He Ying <heying24@huawei.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heinz Mauelshagen <heinzm@redhat.com>
  Hemant Kumar <hemantk@codeaurora.org>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hillf Danton <hdanton@sina.com>
  Hui Tang <tanghui20@huawei.com>
  Ido Schimmel <idosch@nvidia.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  James Morris <jamorris@linux.microsoft.com>
  James Smart <jsmart2021@gmail.com>
  Jan Kara <jack@suse.cz>
  Jared Baldridge <jrb@expunge.us>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jason Self <jason@bluehome.net>
  Jeffrey Mitchell <jeffrey.mitchell@starlab.io>
  Jens Axboe <axboe@kernel.dk>
  Jens Wiklander <jens.wiklander@linaro.org>
  Jerome Forissier <jerome@forissier.org>
  Jessica Yu <jeyu@kernel.org>
  Joe Thornber <ejt@redhat.com>
  John Millikin <john@john-millikin.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Kim <jonathan.kim@amd.com>
  Josef Bacik <josef@toxicpanda.com>
  Julian Braha <julianbraha@gmail.com>
  Justin Tee <justin.tee@broadcom.com>
  Kai Stuhlemmer (ebee Engineering) <kai.stuhlemmer@ebee.de>
  Kalle Valo <kvalo@codeaurora.org>
  karthik alapati <mail@karthek.com>
  Kevin Barnett <kevin.barnett@microchip.com>
  Konstantin Kharlamov <hi-angel@yandex.ru>
  Laurent Pinchart <laurent.pinchart@ideasonboard.com>
  Lee Jones <lee.jones@linaro.org>
  Lingutla Chandrasekhar <clingutla@codeaurora.org>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  lizhe <lizhe67@huawei.com>
  Luis Henriques <lhenriques@suse.de>
  Luke D Jones <luke@ljones.dev>
  Luo Jiaxing <luojiaxing@huawei.com>
  Lv Yunlong <lyl2019@mail.ustc.edu.cn>
  Lyude Paul <lyude@redhat.com>
  Mahesh Salgaonkar <mahesh@linux.ibm.com>
  Marc Zyngier <maz@kernel.org>
  Marek Behún <kabel@kernel.org>
  Marek Vasut <marex@denx.de>
  Marijn Suijten <marijn.suijten@somainline.org>
  Mark Brown <broonie@kernel.org>
  Mark Langsdorf <mlangsdo@redhat.com>
  Mark Rutland <mark.rutland@arm.com>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Wilck <mwilck@suse.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matthias Brugger <matthias.bgg@gmail.com>
  Matthias Schiffer <matthias.schiffer@ew.tq-group.com>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Maximilian Luz <luzmaximilian@gmail.com>
  Melissa Wen <melissa.srw@gmail.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Mike Snitzer <snitzer@redhat.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Ming-Hung Tsai <mtsai@redhat.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Muhammad Usama Anjum <musamaanjum@gmail.com>
  Murthy Bhat <Murthy.Bhat@microchip.com>
  Nathan Chancellor <nathan@kernel.org>
  Nick Desaulniers <ndesaulniers@google.com>
  Nilesh Javali <njavali@marvell.com>
  Paul Aurich <paul@darkrain42.org>
  Paul Clements <paul.clements@us.sios.com>
  Pavel Machek <pavel@denx.de>
  Pavel Machek <pavel@ucw.cz>
  Pavel Skripkin <paskripkin@gmail.com>
  Pawel Laszczak <pawell@cadence.com>
  Peilin Ye <yepeilin.cs@gmail.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phil Calvin <phil@philcalvin.com>
  Phillip Potter <phil@philpotter.co.uk>
  Pradeep P V K <pragalla@codeaurora.org>
  Qu Huang <jinsdb@126.com>
  Quinn Tran <qutran@marvell.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ricardo Ribalda <ribalda@chromium.org>
  Richard Weinberger <richard@nod.at>
  Rob Clark <robdclark@chromium.org>
  Robin Murphy <robin.murphy@arm.com>
  Ruslan Bilovol <ruslan.bilovol@gmail.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sakari Ailus <sakari.ailus@linux.intel.com>
  Sami Loone <sami@loone.fi>
  Sasha Levin <sashal@kernel.org>
  Saurav Kashyap <skashyap@marvell.com>
  Sean Christopherson <seanjc@google.com>
  Sean Young <sean@mess.org>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sedat Dilek <sedat.dilek@gmail.com>
  Seunghui Lee <sh043.lee@samsung.com>
  shaoyunl <shaoyun.liu@amd.com>
  Shixin Liu <liushixin2@huawei.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Solomon Chiu <solomon.chiu@amd.com>
  Song Liu <song@kernel.org>
  Sreekanth Reddy <sreekanth.reddy@broadcom.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stephen Boyd <sboyd@kernel.org>
  Steve French <stfrench@microsoft.com>
  Steven Rostedt (VMware) <rostedt@goodmis.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Takashi Iwai <tiwai@suse.de>
  Theodore Ts'o <tytso@mit.edu>
  Thinh Nguyen <Thinh.Nguyen@synopsys.com>
  Thomas Gleixner <tglx@linutronix.de>
  Thomas Zimmermann <tzimmermann@suse.de>
  Tian Tao <tiantao6@hisilicon.com>
  Timo Gurr <timo.gurr@gmail.com>
  Todd Brandt <todd.e.brandt@linux.intel.com>
  Tony Ambardar <Tony.Ambardar@gmail.com>
  Tony Lindgren <tony@atomide.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Tudor Ambarus <tudor.ambarus@microchip.com>
  Tyler Hicks <code@tyhicks.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Valentin Schneider <valentin.schneider@arm.com>
  Vasily Gorbik <gor@linux.ibm.com>
  Vinod Koul <vkoul@kernel.org>
  Vitaly Kuznetsov <vkuznets@redhat.com>
  Vivek Goyal <vgoyal@redhat.com>
  Wang Li <wangli74@huawei.com>
  Wei Yongjun <weiyongjun1@huawei.com>
  Werner Sembach <wse@tuxedocomputers.com>
  Wesley Cheng <wcheng@codeaurora.org>
  Will Deacon <will@kernel.org>
  Xingui Yang <yangxingui@huawei.com>
  Yang Yang <yang.yang29@zte.com.cn>
  Yang Yingliang <yangyingliang@huawei.com>
  Zhang Yi <yi.zhang@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   b5dbcd05792a..16022114de98  16022114de9869743d6304290815cdb8a8c7deaa -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu May 13 08:28:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 08:28:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126765.238198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6hz-0000eI-Kz; Thu, 13 May 2021 08:28:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126765.238198; Thu, 13 May 2021 08:28:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6hz-0000eB-Hu; Thu, 13 May 2021 08:28:39 +0000
Received: by outflank-mailman (input) for mailman id 126765;
 Thu, 13 May 2021 08:28:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yjCE=KI=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lh6hy-0000dz-7G
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 08:28:38 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::6])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6fe4971d-7143-4139-bd2c-a3f70568cea7;
 Thu, 13 May 2021 08:28:36 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.8 AUTH)
 with ESMTPSA id N048d9x4D8SY3gF
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 13 May 2021 10:28:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6fe4971d-7143-4139-bd2c-a3f70568cea7
ARC-Seal: i=1; a=rsa-sha256; t=1620894514; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=gr3TYjjsSq6/cW5QestgIVt9YDooC1dvJxDM0iBp/XS8bvcoDFgcOJUqmjEHOBJvVu
    Utk8Y2EK1CLP3XAFFQBTXWnkMPnN/AwvXbxlNpNFr8w0KBPXEqnlMIiQfw/LsrDvrkss
    Z5AX26VYZo52agAgm/BQdwjl87FU0r+EYUtrdjyZKphcNJ6ouoMVkAnBd/c/ZhyhCKGZ
    U774eLTlQErn3S0oCko5nBa6D8k3M+FG2BvnzLJoYOZc+Q9v695vFIBhABn/tI8gRUZv
    mRXyBUXV+SfT9LPy1yt4cGssGpuClSdkGMVS8reZlFL9CB5IFfWXeGxH6iiRsSv8uIh0
    yy3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620894514;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Rf+0QxJR3FddhhWri78kmDStz4nY5a6KJIiaNH0d6TM=;
    b=hfPBcNZZfjsbE+dn1BfMa7AtPBAnFatRYB274wUIdVyyDdUR7uZkoE5oiWe9xIemtD
    Y/8i36Um3QiM04io9aU+YptTW/hJNYuOKQwC5s68POh9nnYUGWeN8W4/cnSqStaemp5I
    jVYmq2bEy6v56bheIcmgloWD/eT+vf82cjIPSudqZMS6yxG+5VYrL9yqeW62hpDkMrlG
    d7bqOZdJxFeRR3VsxQjb7nl+r9CeAJh1p0oalLlWQSisnzIpfkrBi+tjP7w5ihxdT+3V
    rabbMM29WzPaT6BrUcKrX2fEDbW2TojWbgyhGxdpGFIS2/O+Xr3QThRyOnBaRjbqR7DZ
    yWnA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620894514;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Rf+0QxJR3FddhhWri78kmDStz4nY5a6KJIiaNH0d6TM=;
    b=YDkWSQKOLrsbAxAth8zwKLW/sAvvYVFRfuXbHBpZ8N/sphTiRxDxBm6GVj9E+HupQX
    10jqDxJTvBfyJLmLdpq6EZgmFWoVoEmNgNEQtXf6zlIoWoo9u+bj89sUtR71fcuR1niq
    UyArTpkogglmE0Z9f18xaQdDkJpY2GACtpPnDkFQ1aSh46QK+HAZZBqqW+iFS8fWUWto
    2IfPc1VGxtJEZiBaAaJouAFC93FhF94OaZ0Q2+y2C1v9HUy/u1HWB/p4bJTKcmAncAt0
    DNICvkz6WfEfaSvXYQRJSyNtQv52byM1gGBhw4o+P3QZPnGFyWNW+/H7QgaW9I+rbKVi
    bxBg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Thu, 13 May 2021 10:28:27 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrii Chepurnyi <Andrii_Chepurnyi@epam.com>, Oleksandr Andrushchenko
 <Oleksandr_Andrushchenko@epam.com>
Subject: Re: Hand over of the Xen shared info page
Message-ID: <20210513102827.2204d5d8.olaf@aepfle.de>
In-Reply-To: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
References: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/XICQkDJW9le_hmiaMj+R0yS";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/XICQkDJW9le_hmiaMj+R0yS
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 13 May 2021 08:03:48 +0000
schrieb Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>:

> shared info page needs to be handed over from one entity in the system to another

The same issue exists with kexec. Not sure why it was not addressed as you proposed, "soft_reset" was implemented instead.

Good luck.


Olaf

--Sig_/XICQkDJW9le_hmiaMj+R0yS
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCc4ysACgkQ86SN7mm1
DoARXQ/+KTSFDC2lbtxTzMzNJECgftXg1LCtq8TuXUlP3PSGvykK2pBcEBKvhRuo
Ut/ywXgGvhjBPJZVsVfJq+TgPCvn7cxApIDCWqStqlUvfulooBrhIOYbtLaUfv0H
UNU3ukk1nP8naUX3skCftX4bCeyAk0IXmhV3+xOw1x9VZOENh8p+Qvzd0vPKiYy6
NWqL/elTdAxwP617AZMoxarf9fP4QFYzzr6aErv+5QVNGFoFXh7ywBfqhw7Cbv6p
c1OkkRLUdPXsBxYEzbXkg0t2iTddlQB0fykykEWO8MHEklLGQ8Stl1YRjIy2fFUr
jjpNH1BF8jNoRJGzDlegSRx5mUapGE/CUp8gOeQUCmibWFsiWdHRNMHgVhWzIoVo
oF6N6YiU/yJ2R8tz5G/g4B1CURWQnkngkB/Opjpvtywl2JLMFpxtv5zM1zW5sekv
m+LR6ypWyumIM5EW4dyNYaei/o8tFEQu8tRJh0FPt9bbERZ5+dVjHI1hLwUYqh9g
XxnQ2JpzftU0l/3SQXQpwW3P6EsYsXXEN8Q8yRd5wS58eVXPG4gmcmd2sE+4apZ/
rnNCSWCmm+QAgdcT/jkBRLy/wJee3eaj+j5sgWoOvlkJjur3s0Lw8g6ts2lo4eA6
yiil/h3j/GnkANDCCF3bjSa0Fhypx/DQlog4pExwc8KYC5aqyYQ=
=8p4u
-----END PGP SIGNATURE-----

--Sig_/XICQkDJW9le_hmiaMj+R0yS--


From xen-devel-bounces@lists.xenproject.org Thu May 13 08:37:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 08:37:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126775.238215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6qk-0002Du-N6; Thu, 13 May 2021 08:37:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126775.238215; Thu, 13 May 2021 08:37:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh6qk-0002Dn-Jo; Thu, 13 May 2021 08:37:42 +0000
Received: by outflank-mailman (input) for mailman id 126775;
 Thu, 13 May 2021 08:37:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lh6qj-0002Dh-Ai
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 08:37:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lh6qj-0005ru-3w; Thu, 13 May 2021 08:37:41 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lh6qi-0007yw-UX; Thu, 13 May 2021 08:37:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=O2gvYYb/3C0IdyLNznjXgp8Pvq10Z0J69WVg2v08H54=; b=YVqQbb7vjtLG6wUHlg0Mb/0wWV
	tz6MDPV6daVqeRDoU1O2LQ0ol5+IOqt6nAunSd090lGfRvgXy8iiEfz+Gu3yR9GXfuX7cKOT82l/8
	bSM+aXV3FShfU/ckEwQNTf/c8uq48K6xTYlnBSjeov46dBO0y8MmAma40XPVYzyaX4tE=;
Subject: Re: Hand over of the Xen shared info page
To: Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrii Chepurnyi <Andrii_Chepurnyi@epam.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
References: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8ff05bdf-a6c4-6b14-b39c-7d9b3bb9d279@xen.org>
Date: Thu, 13 May 2021 09:37:39 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 13/05/2021 09:03, Anastasiia Lukianenko wrote:
> Hi all,

Hi,

> The problem described below concerns cases when a shared info page
> needs to be handed over from one entity in the system to another, for
> example, there is a bootloader or any other code that may run before
> the guest OS' kernel.
> Normally, to map the shared info page guests allocate a memory page
> from their RAM and map the shared info on top of it. Specifically we
> use the XENMAPSPACE_shared_info memory space in the XENMEM_add_to_physmap
> hypercall. As the info page exists throughout the guest's existence this
> doesn't hurt the guest, but when the page gets out of accounting, e.g.
> after bootloader jumps to Linux and the page is not handed over to it,
> the mapped page becomes a problem.
> Consider the case of the U-boot bootloader, which already has Xen support.
> U-boot’s Xen guest implementation allocates a shared info page
> between Xen and the guest domain and U-boot uses domain's RAM address
> space to create and map the shared info page by using
> XENMEM_add_to_physmap hypercall [1].
> 
> After U-boot transfers control to the operating system (Linux, Android,
> etc), the shared info page is still mapped in domain’s address space,
> e.g. its RAM. So, after we leave U-boot, this page becomes just an
> ordinary memory page from Linux POV while it is still a shared info
> page from Xen's POV. This can lead to undefined behavior, errors, etc., as
> Xen can write something to the shared info page, and when Linux tries to
> use it, data corruption may happen.
> This happens because there is no unmap function in Xen API to remove an
> existing shared info page mapping. We could use only hypercall
> XENMEM_remove_from_physmap which eventually will create a hole in the
> domain's RAM address space which may also lead to guest’s crash while
> accessing that memory.

The hypercall XENMEM_remove_from_physmap is the correct hypercall here 
and works as intended. It is not Xen's business to keep track of what the 
original page was (it may have been RAM, a device...).

The problem here is that the hypercall XENMEM_add_to_physmap is misused in 
U-boot. When you give an address for the mapping, you are telling Xen 
"Here is a free region to map the shared page". IOW, Xen will throw away 
whatever was there before, because that is what you asked for.

If you want to map in place of the RAM page, then the correct approach 
is to:
   1) Request Xen to remove the RAM page from the P2M
   2) Map the shared page
   /* Use it */
   3) Unmap the shared page
   4) Allocate the memory

You can avoid 1) and 4) by finding a free region in the address space.
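For illustration, the four steps above could be sketched roughly as follows. This is a pseudocode-style sketch, not code from U-boot or Xen: the command names and struct layouts follow Xen's public memory interface (xen/include/public/memory.h), but error handling is omitted and the choice of gpfn is assumed to come from the caller.

```c
/* Sketch only: real guest code must check each return value. */
#include <xen/memory.h>   /* XENMEM_* commands and their argument structs */

/* 1) Request Xen to remove the RAM page currently at gpfn from the P2M. */
struct xen_remove_from_physmap rm = {
    .domid = DOMID_SELF,
    .gpfn  = gpfn,
};
HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &rm);

/* 2) Map the shared info page in its place. */
struct xen_add_to_physmap add = {
    .domid = DOMID_SELF,
    .space = XENMAPSPACE_shared_info,
    .idx   = 0,
    .gpfn  = gpfn,
};
HYPERVISOR_memory_op(XENMEM_add_to_physmap, &add);

/* ... use the shared info page ... */

/* 3) Unmap the shared page again before handing over. */
HYPERVISOR_memory_op(XENMEM_remove_from_physmap, &rm);

/* 4) Re-populate the hole with an ordinary RAM page. */
xen_pfn_t extent = gpfn;
struct xen_memory_reservation res = {
    .nr_extents   = 1,
    .extent_order = 0,
    .domid        = DOMID_SELF,
};
set_xen_guest_handle(res.extent_start, &extent);
HYPERVISOR_memory_op(XENMEM_populate_physmap, &res);
```

If the mapping is placed in a genuinely free region instead, steps 1) and 4) disappear and only the add/remove pair remains.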

> 
> We noticed this problem and the workaround was implemented using the
> special GUEST_MAGIC memory region [2].
> 
> Now we want to make a proper solution based on GUEST_MAGIC_BASE, which
> does not belong to the guest’s RAM address space [3]. Using the example
> of how offsets for the console and xenstore are implemented, we can add
> a new shared_info offset and increase the number of magic pages [4] and
> implement related functionality, so there is a similar API to query
> that magic page location as it is done for console PFN and others.

They are not the same type. The console PFN points to memory already 
populated in the guest address space.

For the domain shared page, this is memory belonging to Xen that you 
will map in your address space. A domain can map it anywhere it wants.

> This approach would allow the use of the XENMEM_remove_from_physmap
> hypercall without creating gaps in the RAM address space for Xen guest
> OS [5].

See above for how to prevent the gap. I appreciate this means a superpage 
may get shattered.

The alternative is for U-boot to go through the DT and infer which 
regions are free (IOW, any region not described).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 13 09:22:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 09:22:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126782.238233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh7XS-0006xM-3f; Thu, 13 May 2021 09:21:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126782.238233; Thu, 13 May 2021 09:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh7XS-0006xF-0A; Thu, 13 May 2021 09:21:50 +0000
Received: by outflank-mailman (input) for mailman id 126782;
 Thu, 13 May 2021 09:21:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
Date: Thu, 13 May 2021 11:21:36 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: regression in Xen 4.15, unable to boot xenlinux dom0
Message-ID: <20210513112136.6dcc3aaf.olaf@aepfle.de>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)

I have access to a ProLiant BL465c G5, which can boot a xenlinux-based dom0 kernel, such as the one included in SLE11SP4. But it fails to do so with staging-4.15 and staging:

...
[    0.197199] node 0 link 0: io port [1000, 3fff]
[    0.197199] node 0 link 2: io port [4000, ffff]
(XEN) emul-priv-op.c:1015:d0v0 RDMSR 0xc001001a unimplemented
[    0.197199] general protection fault: 0000 [#1] SMP
[    0.197199] CPU 0
[    0.197199] Modules linked in:
[    0.197199] Supported: Yes
[    0.197199]
[    0.197199] Pid: 1, comm: swapper Not tainted 3.0.101-63-xen #1 HP ProLiant BL465c G5
[    0.197199] RIP: e030:[<ffffffff807874e4>]  [<ffffffff807874e4>] early_fill_mp_bus_info+0x344/0x7f9
....

I have attached the full logs from staging-4.14 and staging-4.15.


Olaf

[attachment: xen_414.xenlinux.txt.gz, full boot log from Xen staging-4.14 (gzipped; base64 content omitted)]

[attachment: xen_415.xenlinux.txt.gz, full boot log from Xen staging-4.15 (gzipped; base64 content omitted)]

--MP_/VDjKIPPV56Aa_4P.eC1Tu+/--

--Sig_/.7RY=KJQqqvUJgjnPlKh5.g
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCc76AACgkQ86SN7mm1
DoCpIhAAkKIB3hSVLB/UsSewZDRHvR10Ey5aZlyj/xOGRUIF2mfxZXsMxbZYI4NA
HfwhpGxMIS4bLekc5aNmdv+KPIClbngMkPFMd/Svk/9iLic4ToI9IsgqhHOfHKKl
w2Vb6ebm0cqMeWOaMgiBtbOdh37SO4UH2iv96v6yJc6PoT/BTd1y9aJH9lei9+Nk
Kq4mYoyGPgsUdxUntaBPFtfwx2i8cGTpFWjVq7vpasxztX82GUHc2lx6wwkAyjvF
8oP2eYGzyGfzhhPndQwy+JOPE+VAgAogx9UTEG7mW0viBFb/rAKj0J78++NqXdy5
YTOu0TPTv2b+xSnRhHzWf1R/TE4NQPiOeeSoBi0Wz+g+hvUwsuxmLUVXB3eDH7C/
agQdnQfZ8YB/2ZE1XeWpf7o9iSksxgBteN5rwnIo3Wb0mSVvX9xgJZ6A/bRQjyNZ
mT4GOhHhN/62Y6VJcaLOw1iDmCwomRyJw/LZQ4HZJHlJDkb5JVh0JZFB02aoO7e7
Cf974FCQUyvORgc04zcgbNqFy9X+wFXnOxg7+jlV0vDV4JbKIsKmS0/JuIDeB+39
eJT7TpnSM/0/hwq4CqOvUMZ2/aP4oEivWHPrRzU2nJ2gjOFJ1b6PGHJe8Ar3wY26
gyjVjEg/Ayzx6dZxhtLg2pznonctTb2IvviqdwzJCv+lfTPDwQU=
=E9XD
-----END PGP SIGNATURE-----

--Sig_/.7RY=KJQqqvUJgjnPlKh5.g--


From xen-devel-bounces@lists.xenproject.org Thu May 13 09:50:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 09:50:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126794.238249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh7yt-0001lr-Ie; Thu, 13 May 2021 09:50:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126794.238249; Thu, 13 May 2021 09:50:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh7yt-0001lk-FS; Thu, 13 May 2021 09:50:11 +0000
Received: by outflank-mailman (input) for mailman id 126794;
 Thu, 13 May 2021 09:50:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P1X2=KI=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lh7yr-0001le-SR
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 09:50:10 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 008b7613-ce50-4de2-a1f0-09928eb2f586;
 Thu, 13 May 2021 09:50:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 008b7613-ce50-4de2-a1f0-09928eb2f586
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620899408;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=yHLKetI3NyzR3rzD1w253ufqwhgEkkui2QRIuUW1rss=;
  b=FJw9Sd1YfbPG+HypMwfJ5FaCWTEieQ67cGXcH4klr9DLOHLczlmW5RXQ
   IdWezX0Nu4KSWYhj8X62OGGLN0lsmNAhYYWhE/Rj+tsTzJm4vgEGIC3N9
   wIAhmbGBklqRgP3Ofry5tVuFCvAnemWuHqRF4w14GuBkwOIruq1FuOsgn
   I=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: s9QPmd7KPYMxlrR1I/4XMks04S32kJb00wL/imkLcmPslXfOwX0HZrmB+3uUXU/+sybHSMUlAZ
 eVt3qwG3yg+GfbbNfp9b7Lf1CTtqT/hlEjIagERR29nltW8uV+XKiHXyDp0Ll47Svrs8tGOr5a
 Z543i3rFu0C434nm1J1f+J021hkfEiWnIeLdp27zkIV70UDFE6AEMrFxAADD1aIERa/MziuKrT
 d423zwx8/5X+QANBnvcu/iO99plgjXg3btSDBnRdCNAvtuSRPPnhl0OEhUuUg5eZEeBWWddIRL
 5Bw=
X-SBRS: 5.1
X-MesageID: 45247315
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:zEoY36zJU8W1aUMW2jDRKrPxJ+skLtp133Aq2lEZdPULSKKlfp
 GV88jziyWZtN9wYhEdcdDpAtjnfZr5z+8J3WBxB8bZYOCCggqVxe5ZnO7fKlHbaknDH6tmpN
 tdmstFea3N5DpB/L7HCWCDer5KqrT3k9HL9JXjJjVWPHpXgslbnnZE422gYzRLrWd9dP0E/M
 323Ls4m9PsQwVbUizVbUN1E9TrlpnurtbLcBQGDxko5E2lljWz8oP3FBCew1M3Ty5P6a1Kyx
 mFryXJooGY992rwB7V0GHeq75MnsH699dFDMuQzuAINzTXjBqybogJYczNgNkMmpDt1L8Wqq
 iPn/95VP4Drk85P1vF7icF4jOQkArHsBTZuBulaRKJm72LeNo4Y/Axzb6xPCGprHbIh+sMpJ
 6j6Vjp/qa/PSmw6RgV2OK4IC2CtnDE6kbKwtRjxUC2b+MlGclsRNskjTxo+dE7bWTH1Lw=
X-IronPort-AV: E=Sophos;i="5.82,296,1613451600"; 
   d="scan'208";a="45247315"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LZSXaqE324CoAmtxH7tdJp3NWHa0Hb2RBX4HtfBg3zOmUeHfoYxti28pN15qSEA/JHueTqIKiaIVoUG/1acQ68C0QtCB+KXarLrkgaK8iY6iCpVFUuo/oofX4HmEOuB2zIzCZjBr2wBnrb6oTZSHTIvmHbQMHQKXwu3oEX2bFKM1By6GFWMETR60g+lRKha/O0NYpzwnAQ0UZGtJKPAGvkZm/9QpsWI5rEgqy4ss+bIm7ol/pzs6J/0aefiNNoLVxE1BK1SgLWn8Utk/Dr06MxABiabfS/H9wFZri7fNNIlYISlUWflpQF2TNuJXVSQ93KOzhmtxsXijJIN9vV21tg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bMXwlt7TVK1hz2VTJ+mF80Mj3FSuGadpLNeph+EWHgs=;
 b=djxudOyGvM6oFPjCX4mqGyIlv4qiluwPxA+Ph3kwi6jRWmcxZsWeXRiORFbiy9B1KZCTLYB8lIYIr0KtVu8t52hmypoFmfMuaTY90GfOdse/70DxksS3cXA9OfYmYsH5R6nnU3WDw0YKVQxBLvdQsrhAu7eAQ+IbiwBGvk1OeLEniGO6cUCJNsBTz6MGJaZFBMNyrCCRM3TTtGOqV7iAp6jvEZZLx3J9EWn8efu3PZHQK33q4us4Z1PCgan8W0jkku14iOhj16zxhbAQHEALXSb0xqq9n7RaRVqh/HBtThny3uWYaIQxtLIPdO05KuJietBf7qGlmjvyF/S3OX3a9Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bMXwlt7TVK1hz2VTJ+mF80Mj3FSuGadpLNeph+EWHgs=;
 b=kGMkYa6btl5Mj1kfgPjHtP2JaaoPqEExZtrrSf7pWOFO51BlIvETV5rwzYkpEanzqC0dT65KBe6o/hUxJs2uL99hl+MA+DV20ms3gZbCltlwWO7s+UBKhkvldG4euncjfhb6L3DZebGn5UBpcQbNNSuMRyocCYIecVSfaejrmJ8=
Date: Thu, 13 May 2021 11:49:58 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>
Subject: Re: regression in Xen 4.15, unable to boot xenlinux dom0
Message-ID: <YJz2RpJ/c8dTXG0w@Air-de-Roger>
References: <20210513112136.6dcc3aaf.olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210513112136.6dcc3aaf.olaf@aepfle.de>
X-ClientProxiedBy: MR2P264CA0149.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:1::12) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ff93146d-a3cf-4d60-c139-08d915f47bf9
X-MS-TrafficTypeDiagnostic: DM4PR03MB5967:
X-Microsoft-Antispam-PRVS: <DM4PR03MB5967D172F10880E5BB43D9FC8F519@DM4PR03MB5967.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: v6Gizg+R7O3uo7qotiFTnK+pqn0ZJyVqDtH2KIPjaYGcwnOC1HO+GqtsD1IkJUWrSezLB+/8LQiT2k1Hb3vsH38ksql3kcJTGAVmh1nrwQPQrlxV6zkC1CZ6XVPpLOexqdAfNou8FIMRSxbrDHjpjZi6Nlc0BzpSXKaqHvZrE9gHqZkjHGkCSyvc0xtuwGmRcI/GSpqqnsrl87jwDGf7e6Cx0jW3PwgV42/Z/PgTXqag/ZfEteMsPXWvqj7Xd1XNPpfN0HiooSJ7aZq59oYswz6rBvHiYariiraNE6wObCQOOFHXpp2u1dGzzhxFpH+BadAytXO6H4YKO26MI3oxEVkznlns1LnBHt8vbW5/GWrtXdNx/vt95MWM6CBkRTAhgM/tdspfL5/0P+dP8zX6I5zJA7GDf0Tlg75E++WTZ+jaYBCqLKAWmsOB3f35/sxbvRdbsnoYo8LiX1se1sD3utr88vmnAyQJCBl/4vPs+5heeP+flluZsToKQSEHN7cuoS5vc2eT7Bz7B39Y3AX33b5lwoIb1UvccrAZYHqIUGbbFp+oRrp9KS2WMLE1dUDkiQkn5ulMmznsiIjnlCjDP6dp+HXyI8y/mpglwu54e8Q=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(366004)(346002)(39850400004)(136003)(376002)(45080400002)(16526019)(186003)(2906002)(5660300002)(956004)(26005)(4326008)(8676002)(85182001)(478600001)(66556008)(66476007)(66946007)(9686003)(33716001)(6486002)(8936002)(4744005)(86362001)(83380400001)(316002)(6666004)(6496006)(6916009)(38100700002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?cCt3MG1yYTZFMGVQRXZqR29aR3AreHJWNy9HRmF6S1FwazFqYytYN3VERFBS?=
 =?utf-8?B?SmlUbmtObFNQYTBmUjJ6SFQ2YStkN1BFY3BDWjJ6alpocXhZRFN3WXhOYmho?=
 =?utf-8?B?ZnhqdGhrcjJkQlRQZURHdXNwZHdBOXU4NTI4TXJoT0JCNE9La3dneVVCY0ZW?=
 =?utf-8?B?YlgzemlOZXNPS1h3U0VzQzM2alZJbzgweWlldmMyRlgwMWtVaXBRQVV4OUF1?=
 =?utf-8?B?d25ydG90SWNqMFd1Y3Ivd2pvekQvMmpyczZEQlN5cTY5eHNVeGtKODdWL2o3?=
 =?utf-8?B?UkdBRGY4K01wM215cWFZNFphSit4QTc5S3FxZUNjR01QR1ZWdzJySDZiVDIx?=
 =?utf-8?B?R1dWN3hKeDIva042ZXVzcTVRbEl3SXJPZTNVR0hmd0I3cjhRbXF6dTFnWkxZ?=
 =?utf-8?B?MUN2NjBxOTlybzBjaTR4aW1DWU9JZVJXUGxDS2F2V0FvR2x5MjNTVVR0VURN?=
 =?utf-8?B?UlpndnZXSGpPVUJlV2lLMUhsZlJVa2VMQTJ4d0t0T2lzTXA0UWNPQW1EQU81?=
 =?utf-8?B?K0ZsMUVYcXdTUXMwcHhXUUdiSUIrOTlKc2EydVBZMC9mKzRUdFkzb0liVnh1?=
 =?utf-8?B?K081TTR0bVdhR3h0MTM2VjBHbWFCd2E2Y3ZaZTlPaWdDL2U5aE5ac3dVYTBK?=
 =?utf-8?B?d1c1dTV6SWw0MnpSUjFrVzRRUDY2dUE1cGlKdWliMU1JdlErMFZadWJhbzJY?=
 =?utf-8?B?a3V5SXJnMUhBT3E1K1NtTUVCalB6NkVjQUZ0V3N4QkN6a3NMczFrdEZnaUZw?=
 =?utf-8?B?cXZnU2FIUVdBZFFTMFJPK0k4elRFeGkwN01vdlZWU2VVWWpBVi9wMVFqaXFY?=
 =?utf-8?B?VllZN0lma1VSK3l3VGtrQ08zaVZCWDRmK0lXY2tKbkc1UmhRYzd6T0tBb2x2?=
 =?utf-8?B?REU5SE9TVnF1WUZibEI0UWlZVUVzNlFzek9WTDlYcm80OHRPeTYzNE1CSHZr?=
 =?utf-8?B?UHplQjBuQURzMk5jbXpkdENVNGdqWk9KQmVGNngwZHA3blVmME8xVHVZZHFM?=
 =?utf-8?B?WXlyK1VKMUVyOElUM3d6L1hxcjBwTk4vU2dUSGpJclpvSnF0Nzk5QWhBMUhO?=
 =?utf-8?B?V29SbzBkOFBPRmIyM1RjeTdlbkE3dTBBeDhrNVFrNlJhYjhzenExeUpnTTdz?=
 =?utf-8?B?bkZFaURsczNxNmZDZXZOWXB5eSszanFWMGp3Yk5QUGNwb01waDhwZzhQT1A0?=
 =?utf-8?B?NzJJTzh3Z2dpd1k4R1AzUEtRWFhLVXU5ZzVpV3hhNVYwYm5OOG5lc2QzYWZX?=
 =?utf-8?B?NkV4WW96Szk3N1dnc3hPSmdWTDkvYm9POE9rQVJrbmdYNlhJY3dZMlNOcHF4?=
 =?utf-8?B?NWdRMmtrZGVzNWxuckd5aGZaZ1pKN3I5dXFtbmJpMHRmYU1FSUVRQlROaUNE?=
 =?utf-8?B?anBiMHNXb3M2eGorNHRHMnV2TkUwMk9JQTBRSUQ1SXRjS2ZrNm8wdjQvNFJz?=
 =?utf-8?B?K3paVVNBRmxEMEFxNGxwczc5ZVFyREcrYVladHRBdFMyZzNuTEhyeHlFdEIz?=
 =?utf-8?B?VzM1NjZmSklkUFBoVXJMVjNNVjluQmdHZEU4eWVhWmkyU1lYRkVyb2RtRUhO?=
 =?utf-8?B?NGtmU3VXelN0TG5oQjZLNGRjQnpQWVRoc2F5TTVDcEx4TCtjWFcxelpjL29p?=
 =?utf-8?B?QzdUdHR5TWJndElOTTJDNmxCL1d5U1drMnVSUEthNlZNcWo4OGZVdFloa05C?=
 =?utf-8?B?M1BadW5GV2xDUlBNejdLYnpTRkZDSTdoQ1hZWDZlTzhyRFJ4UzV1ZEZDeDUw?=
 =?utf-8?Q?cGE3bCtxeii7WhPYI6MFRjyvXmvckhc6Y5ECtmN?=
X-MS-Exchange-CrossTenant-Network-Message-Id: ff93146d-a3cf-4d60-c139-08d915f47bf9
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 May 2021 09:50:04.6999
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 11W3qjMrSM8UhpxKyplI35O2+aDBoDTdH5VfbfKwNKc5KI5gUku175vzt8PheI9uKrIy+fX4lrxBCIgvvZNmeA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB5967
X-OriginatorOrg: citrix.com

On Thu, May 13, 2021 at 11:21:36AM +0200, Olaf Hering wrote:
> I have access to a ProLiant BL465c G5, which can boot a xenlinux based dom0 kernel, like the one included in SLE11SP4. But it fails to do that with staging-4.15 and staging:
> 
> ...
> [    0.197199] node 0 link 0: io port [1000, 3fff]
> [    0.197199] node 0 link 2: io port [4000, ffff]
> (XEN) emul-priv-op.c:1015:d0v0 RDMSR 0xc001001a unimplemented
> [    0.197199] general protection fault: 0000 [#1] SMP 
> [    0.197199] CPU 0 
> [    0.197199] Modules linked in:
> [    0.197199] Supported: Yes
> [    0.197199] 
> [    0.197199] Pid: 1, comm: swapper Not tainted 3.0.101-63-xen #1 HP ProLiant BL465c G5  
> [    0.197199] RIP: e030:[<ffffffff807874e4>]  [<ffffffff807874e4>] early_fill_mp_bus_info+0x344/0x7f9
> ....
> 
> I have attached the full logs from staging-4.14 and staging-4.15.

Can you boot with dom0=msr-relaxed on the Xen command line?
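
For reference, a hedged sketch of where that option would go in a GRUB-based setup (the file and variable names below are distribution-dependent assumptions, not part of Xen itself):

```shell
# /etc/default/grub -- illustrative only; the exact file and variable
# depend on the distribution's GRUB/Xen integration.
# Append the relaxed-MSR option to the hypervisor's command line:
GRUB_CMDLINE_XEN_DEFAULT="dom0=msr-relaxed"
# Then regenerate the bootloader config, e.g.:
#   grub2-mkconfig -o /boot/grub2/grub.cfg
```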

Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 13 09:52:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 09:52:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126797.238261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh81B-0002ND-34; Thu, 13 May 2021 09:52:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126797.238261; Thu, 13 May 2021 09:52:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh81A-0002N6-UZ; Thu, 13 May 2021 09:52:32 +0000
Received: by outflank-mailman (input) for mailman id 126797;
 Thu, 13 May 2021 09:52:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yjCE=KI=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lh819-0002N0-K5
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 09:52:31 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aefe7773-63df-4f89-ac6f-3d9d53501116;
 Thu, 13 May 2021 09:52:30 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.8 AUTH)
 with ESMTPSA id N048d9x4D9qS3pq
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 13 May 2021 11:52:28 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aefe7773-63df-4f89-ac6f-3d9d53501116
ARC-Seal: i=1; a=rsa-sha256; t=1620899548; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=hEfWgoDs39gKRlWOGytO1Q4VtzO00eKDV/ffvOXboszJefdeDw7Do+n+5biHqUSGe3
    G0Rf4WBMxfAaNVUdBVy4im4vgjNl7FEDnjzqehtZx37ME5wWMzPKus1mWESh0rkmAceR
    PWiAaiQM6IUfYSWulsOc0IYncxbMvdrW9IYyo/C1puQLC+J/ald5L8gH/CDzqEomzhCH
    l2VddD2itRIAgGUQ6qPK5h8zBmZT77wastRc2RRhaNjaRH/q+/X/gDKgJ7MAktH0m2QW
    yIdC+YaxB2QBRJyS5/uKNGq0bpoNtJTwqhE6n4b9xUG+Lv3Txe8LKur1/IhmGQmWxYM8
    kMjw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620899548;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=IFPBgtnQ4yZX6e7A5Exc73MNWqTnOip6+lgCl1NwiIY=;
    b=YJ5/KO3VQ7Kcwzznr+0n5u/CiH/VuHPFmjzp+Fi64w9jIyU7RxjXQAHOm1FuxOKJbr
    1uIf20mY1yFOY8m7SFjJWcArJwDQ8O8Plry0yX5vDzOBtvjhYSj8L4iFEHj6LkE1KqoM
    +oL9rCagLKvcuoIrhBl1Khv51DJ8LrYcESL/vKu7VQ1rl+4oHhZDImVHcOUn7OSV8wlI
    1Kb+lKLpyEzs6GfvdL0/oItc8alCoiPEN2yohJtd6ndxBhAtY4rocnjXWlJkjTiyxR+p
    HWlmmeeJgA9EEU85RPWWGXmtuJE6VvArtIFbjtIbz73VE+wZ5tx9bIgGzCPf1t3zwlDZ
    dEQQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620899548;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=IFPBgtnQ4yZX6e7A5Exc73MNWqTnOip6+lgCl1NwiIY=;
    b=l3nE4bRRUawgPSxQkIHJJKNMu8WniwWA5hWJ/yZgekuy6EohdyLzPDXi5LSWINkamn
    0erbmu9lGEzU9C/vQ8CFouNAAOeEPj6Q4kF2GpJjgov6ADwtQFhOSXD4ec5bNGluo4pa
    npCq0ONrM6ENk7/CGfwrL4vG5ISW3ZHnoXy7WXBTdlc32wOLmq30opUCEihmWgHF4noP
    p8qh2sFxJjFiM2zgm6uH78T53iPmq21PVev6sZvT0n6okyr7Q5AVMsUTHuubtEsNhKnx
    TmwCfm5cKHlVfHnAqcYjRmqe+fXFR9Nmq5v9Sf8+h1zd4vT5JMIMp/HX8Sb7HUxzsE8U
    +V3Q==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Thu, 13 May 2021 11:52:21 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Roger Pau =?UTF-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: regression in Xen 4.15, unable to boot xenlinux dom0
Message-ID: <20210513115221.121aa3db.olaf@aepfle.de>
In-Reply-To: <YJz2RpJ/c8dTXG0w@Air-de-Roger>
References: <20210513112136.6dcc3aaf.olaf@aepfle.de>
	<YJz2RpJ/c8dTXG0w@Air-de-Roger>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/MpTPGB1HD2zO3oIp6xK2bfL";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/MpTPGB1HD2zO3oIp6xK2bfL
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Am Thu, 13 May 2021 11:49:58 +0200
schrieb Roger Pau Monn=C3=A9 <roger.pau@citrix.com>:

> Can you boot with dom0=3Dmsr-relaxed on the Xen command line?

Yes, that helps - I just discovered this cmdline option.


Olaf

--Sig_/MpTPGB1HD2zO3oIp6xK2bfL
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCc9tUACgkQ86SN7mm1
DoCVVg/9EcXBNIltPts0EONad2gy2xPDuWX7keBc38rBFomu1NtU7vXIB1B0a4rE
TgOP13DGNJ/UH0JgQK7OAAj5wUJqR5sLVTuH/RgKrQK0N/liw5dBLY1LBQU4/RFf
r1QeEP3f0gVYUprfqbZb38wdWKGU8aJiLNDaZhc48o0LkSSfBZuJF4cuNmT22QIu
HaUFLHxsryjfc2/FOl0U1c4+8N0398abcxvT3mlXF3wCjYhiTpwi+HwGwLt9HQpf
s4APX8Tu5Yff+jJsqjbevQOM9IRFR0FAEY3nWoA5bJiIOFCa/RE5fhcRm+DnP6bt
hKR38vrvoXYjzCPdRc6YDItO7POl/pdlXZatvOFn7NtckqLxGIbANJ2XuhhzX3ev
s4BxM4hwr87VYYG8DfAtgbLCd9nggxWcYI6ckz+kZJcUSRuIdJznG1351Zdiq15+
9+VTIc1Qmx0pq6mUBvmNq4RVsUr+/WPwPDMQmhVi5rWoNUXnCijQ37rlfLk19wV3
CzTK2IyD1e2rIbBwCIzb1AyIoRUiL+u1FksXW5+rO2aIyMBnCX3WMOcTA3hWS7RK
45FHZHzNjJCuTyj6N811upKFj7LaSFzo9AuqT/thLMViakP2ifgiV6sMG6UhQNZJ
M/fubDy5HVZwAyW8PBvjlLdOsutDvAuoNTIvdSck+z31vstVj2M=
=0xNu
-----END PGP SIGNATURE-----

--Sig_/MpTPGB1HD2zO3oIp6xK2bfL--


From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126805.238272 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8BR-0003w9-0h; Thu, 13 May 2021 10:03:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126805.238272; Thu, 13 May 2021 10:03:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8BQ-0003w1-U7; Thu, 13 May 2021 10:03:08 +0000
Received: by outflank-mailman (input) for mailman id 126805;
 Thu, 13 May 2021 10:03:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8BP-0003vg-Ic
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:03:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00e1ba1c-2504-4cff-8bd5-617ae886048c;
 Thu, 13 May 2021 10:03:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 85808AFE5;
 Thu, 13 May 2021 10:03:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00e1ba1c-2504-4cff-8bd5-617ae886048c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620900185; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=mSXvMzPiVbs1p8U5teTrqe++A9Kur1ffFfsRtolTEq4=;
	b=Bt3HWyrBQzhFA/VHRnLOrbIQmpMXluR0avJegIDgYP/25lKg9FIZAPmZkKNNj4Tn46vHsx
	DqH+iMtGCUU89BF3407TPvLq70y+9QiQwuomJ/KD69JlH6Mpq8EAgwwGKMJeCtvkebsFoG
	WMLd9rAM4L+C5WZX30UyH56rbG89gqM=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org,
	netdev@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jiri Slaby <jirislaby@kernel.org>
Subject: [PATCH 0/8] xen: harden frontends against malicious backends
Date: Thu, 13 May 2021 12:02:54 +0200
Message-Id: <20210513100302.22027-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen backends of para-virtualized devices can live in dom0 kernel, dom0
user land, or in a driver domain. This means that a backend might
reside in a less trusted environment than the Xen core components, so
a backend should not be able to do harm to a Xen guest (it can still
mess up I/O data, but it shouldn't be able to e.g. crash a guest by
other means or cause a privilege escalation in the guest).

Unfortunately, many frontends in the Linux kernel fully trust their
respective backends. This series starts to fix the most important
frontends: console, disk and network.

Handling this as a security issue was considered, but the topic had
already been discussed in public, so it isn't a real secret.

Juergen Gross (8):
  xen: sync include/xen/interface/io/ring.h with Xen's newest version
  xen/blkfront: read response from backend only once
  xen/blkfront: don't take local copy of a request from the ring page
  xen/blkfront: don't trust the backend response data blindly
  xen/netfront: read response from backend only once
  xen/netfront: don't read data from request on the ring page
  xen/netfront: don't trust the backend response data blindly
  xen/hvc: replace BUG_ON() with negative return value

 drivers/block/xen-blkfront.c    | 118 +++++++++-----
 drivers/net/xen-netfront.c      | 184 ++++++++++++++-------
 drivers/tty/hvc/hvc_xen.c       |  15 +-
 include/xen/interface/io/ring.h | 278 ++++++++++++++++++--------------
 4 files changed, 369 insertions(+), 226 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126806.238285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8BS-0004CR-A3; Thu, 13 May 2021 10:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126806.238285; Thu, 13 May 2021 10:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8BS-0004CK-5n; Thu, 13 May 2021 10:03:10 +0000
Received: by outflank-mailman (input) for mailman id 126806;
 Thu, 13 May 2021 10:03:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8BQ-0003vw-Nm
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:03:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5e7d7ef8-c3c2-48a1-a0e8-26c163d772ed;
 Thu, 13 May 2021 10:03:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9FC13B05D;
 Thu, 13 May 2021 10:03:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e7d7ef8-c3c2-48a1-a0e8-26c163d772ed
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620900186; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=g+1fyoENkOuy7Zh/t/9f7BoPHxXHhIGX5kJfVKmIRpA=;
	b=g/KYhXDBmr+jGf2DUNwYpTYmyFb5iID0n+JwQIYx5JoW7VIHybKPbSnGj1UPKNEw9rGj7B
	MiaQabJO4iSCO/G/rIVuX8Zf026zsfTmGZsYXeuDjGQ5Idoe7g910FX9BfA6fL9/j32Q2m
	vig7EKf4L+OCtklY3VcDg8L1QLco2XY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Subject: [PATCH 6/8] xen/netfront: don't read data from request on the ring page
Date: Thu, 13 May 2021 12:03:00 +0200
Message-Id: <20210513100302.22027-7-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

In order to avoid a malicious backend being able to influence the local
processing of a request, build the request locally first and then copy
it to the ring page. Any reading from the request needs to be done on
the local instance.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/net/xen-netfront.c | 75 ++++++++++++++++++--------------------
 1 file changed, 36 insertions(+), 39 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index f91e41ece554..261c35be0147 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -435,7 +435,8 @@ struct xennet_gnttab_make_txreq {
 	struct netfront_queue *queue;
 	struct sk_buff *skb;
 	struct page *page;
-	struct xen_netif_tx_request *tx; /* Last request */
+	struct xen_netif_tx_request *tx;      /* Last request on ring page */
+	struct xen_netif_tx_request tx_local; /* Last request local copy*/
 	unsigned int size;
 };
 
@@ -463,30 +464,27 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
 	queue->grant_tx_page[id] = page;
 	queue->grant_tx_ref[id] = ref;
 
-	tx->id = id;
-	tx->gref = ref;
-	tx->offset = offset;
-	tx->size = len;
-	tx->flags = 0;
+	info->tx_local.id = id;
+	info->tx_local.gref = ref;
+	info->tx_local.offset = offset;
+	info->tx_local.size = len;
+	info->tx_local.flags = 0;
+
+	*tx = info->tx_local;
 
 	info->tx = tx;
-	info->size += tx->size;
+	info->size += info->tx_local.size;
 }
 
 static struct xen_netif_tx_request *xennet_make_first_txreq(
-	struct netfront_queue *queue, struct sk_buff *skb,
-	struct page *page, unsigned int offset, unsigned int len)
+	struct xennet_gnttab_make_txreq *info,
+	unsigned int offset, unsigned int len)
 {
-	struct xennet_gnttab_make_txreq info = {
-		.queue = queue,
-		.skb = skb,
-		.page = page,
-		.size = 0,
-	};
+	info->size = 0;
 
-	gnttab_for_one_grant(page, offset, len, xennet_tx_setup_grant, &info);
+	gnttab_for_one_grant(info->page, offset, len, xennet_tx_setup_grant, info);
 
-	return info.tx;
+	return info->tx;
 }
 
 static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
@@ -499,35 +497,27 @@ static void xennet_make_one_txreq(unsigned long gfn, unsigned int offset,
 	xennet_tx_setup_grant(gfn, offset, len, data);
 }
 
-static struct xen_netif_tx_request *xennet_make_txreqs(
-	struct netfront_queue *queue, struct xen_netif_tx_request *tx,
-	struct sk_buff *skb, struct page *page,
+static void xennet_make_txreqs(
+	struct xennet_gnttab_make_txreq *info,
+	struct page *page,
 	unsigned int offset, unsigned int len)
 {
-	struct xennet_gnttab_make_txreq info = {
-		.queue = queue,
-		.skb = skb,
-		.tx = tx,
-	};
-
 	/* Skip unused frames from start of page */
 	page += offset >> PAGE_SHIFT;
 	offset &= ~PAGE_MASK;
 
 	while (len) {
-		info.page = page;
-		info.size = 0;
+		info->page = page;
+		info->size = 0;
 
 		gnttab_foreach_grant_in_range(page, offset, len,
 					      xennet_make_one_txreq,
-					      &info);
+					      info);
 
 		page++;
 		offset = 0;
-		len -= info.size;
+		len -= info->size;
 	}
-
-	return info.tx;
 }
 
 /*
@@ -580,10 +570,14 @@ static int xennet_xdp_xmit_one(struct net_device *dev,
 {
 	struct netfront_info *np = netdev_priv(dev);
 	struct netfront_stats *tx_stats = this_cpu_ptr(np->tx_stats);
+	struct xennet_gnttab_make_txreq info = {
+		.queue = queue,
+		.skb = NULL,
+		.page = virt_to_page(xdpf->data),
+	};
 	int notify;
 
-	xennet_make_first_txreq(queue, NULL,
-				virt_to_page(xdpf->data),
+	xennet_make_first_txreq(&info,
 				offset_in_page(xdpf->data),
 				xdpf->len);
 
@@ -647,6 +641,7 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
 	unsigned int len;
 	unsigned long flags;
 	struct netfront_queue *queue = NULL;
+	struct xennet_gnttab_make_txreq info = { };
 	unsigned int num_queues = dev->real_num_tx_queues;
 	u16 queue_index;
 	struct sk_buff *nskb;
@@ -704,14 +699,16 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
 	}
 
 	/* First request for the linear area. */
-	first_tx = tx = xennet_make_first_txreq(queue, skb,
-						page, offset, len);
+	info.queue = queue;
+	info.skb = skb;
+	info.page = page;
+	first_tx = tx = xennet_make_first_txreq(&info, offset, len);
 	offset += tx->size;
 	if (offset == PAGE_SIZE) {
 		page++;
 		offset = 0;
 	}
-	len -= tx->size;
+	len -= info.tx_local.size;
 
 	if (skb->ip_summed == CHECKSUM_PARTIAL)
 		/* local packet? */
@@ -741,12 +738,12 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
 	}
 
 	/* Requests for the rest of the linear area. */
-	tx = xennet_make_txreqs(queue, tx, skb, page, offset, len);
+	xennet_make_txreqs(&info, page, offset, len);
 
 	/* Requests for all the frags. */
 	for (i = 0; i < skb_shinfo(skb)->nr_frags; i++) {
 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
-		tx = xennet_make_txreqs(queue, tx, skb, skb_frag_page(frag),
+		xennet_make_txreqs(&info, skb_frag_page(frag),
 					skb_frag_off(frag),
 					skb_frag_size(frag));
 	}
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126807.238297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8BV-0004WH-JW; Thu, 13 May 2021 10:03:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126807.238297; Thu, 13 May 2021 10:03:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8BV-0004W2-Ex; Thu, 13 May 2021 10:03:13 +0000
Received: by outflank-mailman (input) for mailman id 126807;
 Thu, 13 May 2021 10:03:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8BU-0003vg-2p
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:03:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 29589f22-b6ae-440a-94dd-2a81fbe2126e;
 Thu, 13 May 2021 10:03:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B132FAFF5;
 Thu, 13 May 2021 10:03:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 29589f22-b6ae-440a-94dd-2a81fbe2126e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620900185; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lutTQX8ZA5u3khaAXqUgHLKbfap899noTeivh9Sx9NQ=;
	b=X9zTq81SYVIejJbZ3hqVnPgnP7pcUDx2BE/zHP4wRq7hBon4FcfafyN+fXITzioqYy+QoR
	pBz0uhcESpMAUzIdg50l7VW/TsBGw3s6HDsmczQ3YLqYI5+JC1vvrfRDwWsaYqSOxIH89y
	FBMrmVY8k7hSXXCMQGggw+D5zhxq8r4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 2/8] xen/blkfront: read response from backend only once
Date: Thu, 13 May 2021 12:02:56 +0200
Message-Id: <20210513100302.22027-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To avoid problems in case the backend modifies a response on the ring
page while the frontend is still processing it, read the response into
a local buffer in one go and then operate only on that buffer.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/block/xen-blkfront.c | 35 ++++++++++++++++++-----------------
 1 file changed, 18 insertions(+), 17 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index 10df39a8b18d..a8b56c153330 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -1557,7 +1557,7 @@ static bool blkif_completion(unsigned long *id,
 static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 {
 	struct request *req;
-	struct blkif_response *bret;
+	struct blkif_response bret;
 	RING_IDX i, rp;
 	unsigned long flags;
 	struct blkfront_ring_info *rinfo = (struct blkfront_ring_info *)dev_id;
@@ -1574,8 +1574,9 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	for (i = rinfo->ring.rsp_cons; i != rp; i++) {
 		unsigned long id;
 
-		bret = RING_GET_RESPONSE(&rinfo->ring, i);
-		id   = bret->id;
+		RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
+		id = bret.id;
+
 		/*
 		 * The backend has messed up and given us an id that we would
 		 * never have given to it (we stamp it up to BLK_RING_SIZE -
@@ -1583,39 +1584,39 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 		 */
 		if (id >= BLK_RING_SIZE(info)) {
 			WARN(1, "%s: response to %s has incorrect id (%ld)\n",
-			     info->gd->disk_name, op_name(bret->operation), id);
+			     info->gd->disk_name, op_name(bret.operation), id);
 			/* We can't safely get the 'struct request' as
 			 * the id is busted. */
 			continue;
 		}
 		req  = rinfo->shadow[id].request;
 
-		if (bret->operation != BLKIF_OP_DISCARD) {
+		if (bret.operation != BLKIF_OP_DISCARD) {
 			/*
 			 * We may need to wait for an extra response if the
 			 * I/O request is split in 2
 			 */
-			if (!blkif_completion(&id, rinfo, bret))
+			if (!blkif_completion(&id, rinfo, &bret))
 				continue;
 		}
 
 		if (add_id_to_freelist(rinfo, id)) {
 			WARN(1, "%s: response to %s (id %ld) couldn't be recycled!\n",
-			     info->gd->disk_name, op_name(bret->operation), id);
+			     info->gd->disk_name, op_name(bret.operation), id);
 			continue;
 		}
 
-		if (bret->status == BLKIF_RSP_OKAY)
+		if (bret.status == BLKIF_RSP_OKAY)
 			blkif_req(req)->error = BLK_STS_OK;
 		else
 			blkif_req(req)->error = BLK_STS_IOERR;
 
-		switch (bret->operation) {
+		switch (bret.operation) {
 		case BLKIF_OP_DISCARD:
-			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
+			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
 				struct request_queue *rq = info->rq;
 				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
-					   info->gd->disk_name, op_name(bret->operation));
+					   info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 				info->feature_discard = 0;
 				info->feature_secdiscard = 0;
@@ -1625,15 +1626,15 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 			break;
 		case BLKIF_OP_FLUSH_DISKCACHE:
 		case BLKIF_OP_WRITE_BARRIER:
-			if (unlikely(bret->status == BLKIF_RSP_EOPNOTSUPP)) {
+			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
 				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
-				       info->gd->disk_name, op_name(bret->operation));
+				       info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 			}
-			if (unlikely(bret->status == BLKIF_RSP_ERROR &&
+			if (unlikely(bret.status == BLKIF_RSP_ERROR &&
 				     rinfo->shadow[id].req.u.rw.nr_segments == 0)) {
 				printk(KERN_WARNING "blkfront: %s: empty %s op failed\n",
-				       info->gd->disk_name, op_name(bret->operation));
+				       info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 			}
 			if (unlikely(blkif_req(req)->error)) {
@@ -1646,9 +1647,9 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 			fallthrough;
 		case BLKIF_OP_READ:
 		case BLKIF_OP_WRITE:
-			if (unlikely(bret->status != BLKIF_RSP_OKAY))
+			if (unlikely(bret.status != BLKIF_RSP_OKAY))
 				dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
-					"request: %x\n", bret->status);
+					"request: %x\n", bret.status);
 
 			break;
 		default:
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:15 2021
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH 1/8] xen: sync include/xen/interface/io/ring.h with Xen's newest version
Date: Thu, 13 May 2021 12:02:55 +0200
Message-Id: <20210513100302.22027-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Sync include/xen/interface/io/ring.h with Xen's newest version in
order to get the RING_COPY_RESPONSE() and RING_RESPONSE_PROD_OVERFLOW()
macros.

Note that this also corrects the wrong license information by adding
the missing original copyright notice.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 include/xen/interface/io/ring.h | 278 ++++++++++++++++++--------------
 1 file changed, 156 insertions(+), 122 deletions(-)

diff --git a/include/xen/interface/io/ring.h b/include/xen/interface/io/ring.h
index 2af7a1cd6658..b39cdbc522ec 100644
--- a/include/xen/interface/io/ring.h
+++ b/include/xen/interface/io/ring.h
@@ -1,21 +1,53 @@
-/* SPDX-License-Identifier: GPL-2.0 */
 /******************************************************************************
  * ring.h
  *
  * Shared producer-consumer ring macros.
  *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
  * Tim Deegan and Andrew Warfield November 2004.
  */
 
 #ifndef __XEN_PUBLIC_IO_RING_H__
 #define __XEN_PUBLIC_IO_RING_H__
 
+/*
+ * When #include'ing this header, you need to provide the following
+ * declaration upfront:
+ * - standard integers types (uint8_t, uint16_t, etc)
+ * They are provided by stdint.h of the standard headers.
+ *
+ * In addition, if you intend to use the FLEX macros, you also need to
+ * provide the following, before invoking the FLEX macros:
+ * - size_t
+ * - memcpy
+ * - grant_ref_t
+ * These declarations are provided by string.h of the standard headers,
+ * and grant_table.h from the Xen public headers.
+ */
+
 #include <xen/interface/grant_table.h>
 
 typedef unsigned int RING_IDX;
 
 /* Round a 32-bit unsigned constant down to the nearest power of two. */
-#define __RD2(_x)  (((_x) & 0x00000002) ? 0x2		       : ((_x) & 0x1))
+#define __RD2(_x)  (((_x) & 0x00000002) ? 0x2                  : ((_x) & 0x1))
 #define __RD4(_x)  (((_x) & 0x0000000c) ? __RD2((_x)>>2)<<2    : __RD2(_x))
 #define __RD8(_x)  (((_x) & 0x000000f0) ? __RD4((_x)>>4)<<4    : __RD4(_x))
 #define __RD16(_x) (((_x) & 0x0000ff00) ? __RD8((_x)>>8)<<8    : __RD8(_x))
@@ -27,82 +59,79 @@ typedef unsigned int RING_IDX;
  * A ring contains as many entries as will fit, rounded down to the nearest
  * power of two (so we can mask with (size-1) to loop around).
  */
-#define __CONST_RING_SIZE(_s, _sz)				\
-	(__RD32(((_sz) - offsetof(struct _s##_sring, ring)) /	\
-		sizeof(((struct _s##_sring *)0)->ring[0])))
-
+#define __CONST_RING_SIZE(_s, _sz) \
+    (__RD32(((_sz) - offsetof(struct _s##_sring, ring)) / \
+	    sizeof(((struct _s##_sring *)0)->ring[0])))
 /*
  * The same for passing in an actual pointer instead of a name tag.
  */
-#define __RING_SIZE(_s, _sz)						\
-	(__RD32(((_sz) - (long)&(_s)->ring + (long)(_s)) / sizeof((_s)->ring[0])))
+#define __RING_SIZE(_s, _sz) \
+    (__RD32(((_sz) - (long)(_s)->ring + (long)(_s)) / sizeof((_s)->ring[0])))
 
 /*
  * Macros to make the correct C datatypes for a new kind of ring.
  *
  * To make a new ring datatype, you need to have two message structures,
- * let's say struct request, and struct response already defined.
+ * let's say request_t, and response_t already defined.
  *
  * In a header where you want the ring datatype declared, you then do:
  *
- *     DEFINE_RING_TYPES(mytag, struct request, struct response);
+ *     DEFINE_RING_TYPES(mytag, request_t, response_t);
  *
  * These expand out to give you a set of types, as you can see below.
  * The most important of these are:
  *
- *     struct mytag_sring      - The shared ring.
- *     struct mytag_front_ring - The 'front' half of the ring.
- *     struct mytag_back_ring  - The 'back' half of the ring.
+ *     mytag_sring_t      - The shared ring.
+ *     mytag_front_ring_t - The 'front' half of the ring.
+ *     mytag_back_ring_t  - The 'back' half of the ring.
  *
  * To initialize a ring in your code you need to know the location and size
  * of the shared memory area (PAGE_SIZE, for instance). To initialise
  * the front half:
  *
- *     struct mytag_front_ring front_ring;
- *     SHARED_RING_INIT((struct mytag_sring *)shared_page);
- *     FRONT_RING_INIT(&front_ring, (struct mytag_sring *)shared_page,
- *		       PAGE_SIZE);
+ *     mytag_front_ring_t front_ring;
+ *     SHARED_RING_INIT((mytag_sring_t *)shared_page);
+ *     FRONT_RING_INIT(&front_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
  *
  * Initializing the back follows similarly (note that only the front
  * initializes the shared ring):
  *
- *     struct mytag_back_ring back_ring;
- *     BACK_RING_INIT(&back_ring, (struct mytag_sring *)shared_page,
- *		      PAGE_SIZE);
+ *     mytag_back_ring_t back_ring;
+ *     BACK_RING_INIT(&back_ring, (mytag_sring_t *)shared_page, PAGE_SIZE);
  */
 
-#define DEFINE_RING_TYPES(__name, __req_t, __rsp_t)			\
-									\
-/* Shared ring entry */							\
-union __name##_sring_entry {						\
-    __req_t req;							\
-    __rsp_t rsp;							\
-};									\
-									\
-/* Shared ring page */							\
-struct __name##_sring {							\
-    RING_IDX req_prod, req_event;					\
-    RING_IDX rsp_prod, rsp_event;					\
-    uint8_t  pad[48];							\
-    union __name##_sring_entry ring[1]; /* variable-length */		\
-};									\
-									\
-/* "Front" end's private variables */					\
-struct __name##_front_ring {						\
-    RING_IDX req_prod_pvt;						\
-    RING_IDX rsp_cons;							\
-    unsigned int nr_ents;						\
-    struct __name##_sring *sring;					\
-};									\
-									\
-/* "Back" end's private variables */					\
-struct __name##_back_ring {						\
-    RING_IDX rsp_prod_pvt;						\
-    RING_IDX req_cons;							\
-    unsigned int nr_ents;						\
-    struct __name##_sring *sring;					\
-};
-
+#define DEFINE_RING_TYPES(__name, __req_t, __rsp_t)                     \
+                                                                        \
+/* Shared ring entry */                                                 \
+union __name##_sring_entry {                                            \
+    __req_t req;                                                        \
+    __rsp_t rsp;                                                        \
+};                                                                      \
+                                                                        \
+/* Shared ring page */                                                  \
+struct __name##_sring {                                                 \
+    RING_IDX req_prod, req_event;                                       \
+    RING_IDX rsp_prod, rsp_event;                                       \
+    uint8_t __pad[48];                                                  \
+    union __name##_sring_entry ring[1]; /* variable-length */           \
+};                                                                      \
+                                                                        \
+/* "Front" end's private variables */                                   \
+struct __name##_front_ring {                                            \
+    RING_IDX req_prod_pvt;                                              \
+    RING_IDX rsp_cons;                                                  \
+    unsigned int nr_ents;                                               \
+    struct __name##_sring *sring;                                       \
+};                                                                      \
+                                                                        \
+/* "Back" end's private variables */                                    \
+struct __name##_back_ring {                                             \
+    RING_IDX rsp_prod_pvt;                                              \
+    RING_IDX req_cons;                                                  \
+    unsigned int nr_ents;                                               \
+    struct __name##_sring *sring;                                       \
+};                                                                      \
+                                                                        \
 /*
  * Macros for manipulating rings.
  *
@@ -119,94 +148,99 @@ struct __name##_back_ring {						\
  */
 
 /* Initialising empty rings */
-#define SHARED_RING_INIT(_s) do {					\
-    (_s)->req_prod  = (_s)->rsp_prod  = 0;				\
-    (_s)->req_event = (_s)->rsp_event = 1;				\
-    memset((_s)->pad, 0, sizeof((_s)->pad));				\
+#define SHARED_RING_INIT(_s) do {                                       \
+    (_s)->req_prod  = (_s)->rsp_prod  = 0;                              \
+    (_s)->req_event = (_s)->rsp_event = 1;                              \
+    (void)memset((_s)->__pad, 0, sizeof((_s)->__pad));                  \
 } while(0)
 
-#define FRONT_RING_ATTACH(_r, _s, _i, __size) do {			\
-    (_r)->req_prod_pvt = (_i);						\
-    (_r)->rsp_cons = (_i);						\
-    (_r)->nr_ents = __RING_SIZE(_s, __size);				\
-    (_r)->sring = (_s);							\
+#define FRONT_RING_ATTACH(_r, _s, _i, __size) do {                      \
+    (_r)->req_prod_pvt = (_i);                                          \
+    (_r)->rsp_cons = (_i);                                              \
+    (_r)->nr_ents = __RING_SIZE(_s, __size);                            \
+    (_r)->sring = (_s);                                                 \
 } while (0)
 
 #define FRONT_RING_INIT(_r, _s, __size) FRONT_RING_ATTACH(_r, _s, 0, __size)
 
-#define BACK_RING_ATTACH(_r, _s, _i, __size) do {			\
-    (_r)->rsp_prod_pvt = (_i);						\
-    (_r)->req_cons = (_i);						\
-    (_r)->nr_ents = __RING_SIZE(_s, __size);				\
-    (_r)->sring = (_s);							\
+#define BACK_RING_ATTACH(_r, _s, _i, __size) do {                       \
+    (_r)->rsp_prod_pvt = (_i);                                          \
+    (_r)->req_cons = (_i);                                              \
+    (_r)->nr_ents = __RING_SIZE(_s, __size);                            \
+    (_r)->sring = (_s);                                                 \
 } while (0)
 
 #define BACK_RING_INIT(_r, _s, __size) BACK_RING_ATTACH(_r, _s, 0, __size)
 
 /* How big is this ring? */
-#define RING_SIZE(_r)							\
+#define RING_SIZE(_r)                                                   \
     ((_r)->nr_ents)
 
 /* Number of free requests (for use on front side only). */
-#define RING_FREE_REQUESTS(_r)						\
+#define RING_FREE_REQUESTS(_r)                                          \
     (RING_SIZE(_r) - ((_r)->req_prod_pvt - (_r)->rsp_cons))
 
 /* Test if there is an empty slot available on the front ring.
  * (This is only meaningful from the front. )
  */
-#define RING_FULL(_r)							\
+#define RING_FULL(_r)                                                   \
     (RING_FREE_REQUESTS(_r) == 0)
 
 /* Test if there are outstanding messages to be processed on a ring. */
-#define RING_HAS_UNCONSUMED_RESPONSES(_r)				\
+#define RING_HAS_UNCONSUMED_RESPONSES(_r)                               \
     ((_r)->sring->rsp_prod - (_r)->rsp_cons)
 
-#define RING_HAS_UNCONSUMED_REQUESTS(_r)				\
-    ({									\
-	unsigned int req = (_r)->sring->req_prod - (_r)->req_cons;	\
-	unsigned int rsp = RING_SIZE(_r) -				\
-			   ((_r)->req_cons - (_r)->rsp_prod_pvt);	\
-	req < rsp ? req : rsp;						\
-    })
+#define RING_HAS_UNCONSUMED_REQUESTS(_r) ({                             \
+    unsigned int req = (_r)->sring->req_prod - (_r)->req_cons;          \
+    unsigned int rsp = RING_SIZE(_r) -                                  \
+        ((_r)->req_cons - (_r)->rsp_prod_pvt);                          \
+    req < rsp ? req : rsp;                                              \
+})
 
 /* Direct access to individual ring elements, by index. */
-#define RING_GET_REQUEST(_r, _idx)					\
+#define RING_GET_REQUEST(_r, _idx)                                      \
     (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].req))
 
+#define RING_GET_RESPONSE(_r, _idx)                                     \
+    (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
+
 /*
- * Get a local copy of a request.
+ * Get a local copy of a request/response.
  *
- * Use this in preference to RING_GET_REQUEST() so all processing is
+ * Use this in preference to RING_GET_{REQUEST,RESPONSE}() so all processing is
  * done on a local copy that cannot be modified by the other end.
  *
  * Note that https://gcc.gnu.org/bugzilla/show_bug.cgi?id=58145 may cause this
- * to be ineffective where _req is a struct which consists of only bitfields.
+ * to be ineffective where dest is a struct which consists of only bitfields.
  */
-#define RING_COPY_REQUEST(_r, _idx, _req) do {				\
-	/* Use volatile to force the copy into _req. */			\
-	*(_req) = *(volatile typeof(_req))RING_GET_REQUEST(_r, _idx);	\
+#define RING_COPY_(type, r, idx, dest) do {				\
+	/* Use volatile to force the copy into dest. */			\
+	*(dest) = *(volatile typeof(dest))RING_GET_##type(r, idx);	\
 } while (0)
 
-#define RING_GET_RESPONSE(_r, _idx)					\
-    (&((_r)->sring->ring[((_idx) & (RING_SIZE(_r) - 1))].rsp))
+#define RING_COPY_REQUEST(r, idx, req)  RING_COPY_(REQUEST, r, idx, req)
+#define RING_COPY_RESPONSE(r, idx, rsp) RING_COPY_(RESPONSE, r, idx, rsp)
 
 /* Loop termination condition: Would the specified index overflow the ring? */
-#define RING_REQUEST_CONS_OVERFLOW(_r, _cons)				\
+#define RING_REQUEST_CONS_OVERFLOW(_r, _cons)                           \
     (((_cons) - (_r)->rsp_prod_pvt) >= RING_SIZE(_r))
 
 /* Ill-behaved frontend determination: Can there be this many requests? */
-#define RING_REQUEST_PROD_OVERFLOW(_r, _prod)               \
+#define RING_REQUEST_PROD_OVERFLOW(_r, _prod)                           \
     (((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r))
 
+/* Ill-behaved backend determination: Can there be this many responses? */
+#define RING_RESPONSE_PROD_OVERFLOW(_r, _prod)                          \
+    (((_prod) - (_r)->rsp_cons) > RING_SIZE(_r))
 
-#define RING_PUSH_REQUESTS(_r) do {					\
-    virt_wmb(); /* back sees requests /before/ updated producer index */	\
-    (_r)->sring->req_prod = (_r)->req_prod_pvt;				\
+#define RING_PUSH_REQUESTS(_r) do {                                     \
+    virt_wmb(); /* back sees requests /before/ updated producer index */\
+    (_r)->sring->req_prod = (_r)->req_prod_pvt;                         \
 } while (0)
 
-#define RING_PUSH_RESPONSES(_r) do {					\
-    virt_wmb(); /* front sees responses /before/ updated producer index */	\
-    (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;				\
+#define RING_PUSH_RESPONSES(_r) do {                                    \
+    virt_wmb(); /* front sees resps /before/ updated producer index */  \
+    (_r)->sring->rsp_prod = (_r)->rsp_prod_pvt;                         \
 } while (0)
 
 /*
@@ -239,40 +273,40 @@ struct __name##_back_ring {						\
  *  field appropriately.
  */
 
-#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {		\
-    RING_IDX __old = (_r)->sring->req_prod;				\
-    RING_IDX __new = (_r)->req_prod_pvt;				\
-    virt_wmb(); /* back sees requests /before/ updated producer index */	\
-    (_r)->sring->req_prod = __new;					\
-    virt_mb(); /* back sees new requests /before/ we check req_event */	\
-    (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <		\
-		 (RING_IDX)(__new - __old));				\
+#define RING_PUSH_REQUESTS_AND_CHECK_NOTIFY(_r, _notify) do {           \
+    RING_IDX __old = (_r)->sring->req_prod;                             \
+    RING_IDX __new = (_r)->req_prod_pvt;                                \
+    virt_wmb(); /* back sees requests /before/ updated producer index */\
+    (_r)->sring->req_prod = __new;                                      \
+    virt_mb(); /* back sees new requests /before/ we check req_event */ \
+    (_notify) = ((RING_IDX)(__new - (_r)->sring->req_event) <           \
+                 (RING_IDX)(__new - __old));                            \
 } while (0)
 
-#define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {		\
-    RING_IDX __old = (_r)->sring->rsp_prod;				\
-    RING_IDX __new = (_r)->rsp_prod_pvt;				\
-    virt_wmb(); /* front sees responses /before/ updated producer index */	\
-    (_r)->sring->rsp_prod = __new;					\
-    virt_mb(); /* front sees new responses /before/ we check rsp_event */	\
-    (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <		\
-		 (RING_IDX)(__new - __old));				\
+#define RING_PUSH_RESPONSES_AND_CHECK_NOTIFY(_r, _notify) do {          \
+    RING_IDX __old = (_r)->sring->rsp_prod;                             \
+    RING_IDX __new = (_r)->rsp_prod_pvt;                                \
+    virt_wmb(); /* front sees resps /before/ updated producer index */  \
+    (_r)->sring->rsp_prod = __new;                                      \
+    virt_mb(); /* front sees new resps /before/ we check rsp_event */   \
+    (_notify) = ((RING_IDX)(__new - (_r)->sring->rsp_event) <           \
+                 (RING_IDX)(__new - __old));                            \
 } while (0)
 
-#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do {		\
-    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
-    if (_work_to_do) break;						\
-    (_r)->sring->req_event = (_r)->req_cons + 1;			\
-    virt_mb();								\
-    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);			\
+#define RING_FINAL_CHECK_FOR_REQUESTS(_r, _work_to_do) do {             \
+    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);                   \
+    if (_work_to_do) break;                                             \
+    (_r)->sring->req_event = (_r)->req_cons + 1;                        \
+    virt_mb();                                                          \
+    (_work_to_do) = RING_HAS_UNCONSUMED_REQUESTS(_r);                   \
 } while (0)
 
-#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do {		\
-    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
-    if (_work_to_do) break;						\
-    (_r)->sring->rsp_event = (_r)->rsp_cons + 1;			\
-    virt_mb();								\
-    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);			\
+#define RING_FINAL_CHECK_FOR_RESPONSES(_r, _work_to_do) do {            \
+    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);                  \
+    if (_work_to_do) break;                                             \
+    (_r)->sring->rsp_event = (_r)->rsp_cons + 1;                        \
+    virt_mb();                                                          \
+    (_work_to_do) = RING_HAS_UNCONSUMED_RESPONSES(_r);                  \
 } while (0)
 
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:03:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126809.238320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Ba-0005B6-DM; Thu, 13 May 2021 10:03:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126809.238320; Thu, 13 May 2021 10:03:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Ba-0005Ar-9m; Thu, 13 May 2021 10:03:18 +0000
Received: by outflank-mailman (input) for mailman id 126809;
 Thu, 13 May 2021 10:03:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8BZ-0003vg-34
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:03:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c7d2393-cef0-489a-a074-21fe2b426a38;
 Thu, 13 May 2021 10:03:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EB135AFF6;
 Thu, 13 May 2021 10:03:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c7d2393-cef0-489a-a074-21fe2b426a38
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620900186; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0Q6ThoS9c4shqaPBcM7YVpxtTsCqScDLVsepBzmlgCI=;
	b=KB4VO2b29rr8hcdZZAjHAppb7tKug2TKs7hwZV1cULAeBgNVDz9y3rAzxIUyt8uz4zhN1m
	wbkrBEc19J6BDwt9BDdkNjqE7EkKbsb19lj8El+kyBKqof1o5TWtGOxWrVw6bpy14oUpOL
	hA7SOkSFBinvHJbDqRzyWiVv2QpDJHA=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 3/8] xen/blkfront: don't take local copy of a request from the ring page
Date: Thu, 13 May 2021 12:02:57 +0200
Message-Id: <20210513100302.22027-4-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To avoid a malicious backend being able to influence the local copy of
a request, build the request locally first and then copy it to the ring
page, instead of doing it the other way round as today.
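
The "build locally, copy once" pattern described above can be sketched as
follows. This is a minimal illustration, not the real driver code: all names
(demo_req, demo_queue_req, shadow, ring_slot) are hypothetical stand-ins for
struct blkif_request, rinfo->shadow[] and the shared-ring slot.

```c
/* Hypothetical, simplified request layout -- not the real blkif ABI. */
struct demo_req {
    unsigned int op;
    unsigned long id;
    unsigned long nr_sectors;
};

/* Shadow copy owned by the frontend; the backend cannot modify it. */
static struct demo_req shadow[8];

/* Slot in the shared ring page; the backend can read (or corrupt) it. */
static struct demo_req ring_slot;

unsigned long demo_queue_req(unsigned long id, unsigned int op,
                             unsigned long nr_sectors)
{
    struct demo_req *req = &shadow[id];

    /* Build the request entirely in frontend-private memory first. */
    req->id = id;
    req->op = op;
    req->nr_sectors = nr_sectors;

    /* Publish with a single copy.  Any later re-use of the request
     * (e.g. reissuing after migration) reads the shadow copy, never
     * the backend-visible ring slot. */
    ring_slot = *req;
    return id;
}
```

The point of the single struct assignment at the end is that the backend only
ever sees a fully built request, and the frontend never reads request data
back from memory the backend could have modified.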

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/block/xen-blkfront.c | 25 +++++++++++++++----------
 1 file changed, 15 insertions(+), 10 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index a8b56c153330..c6a05de4f15f 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -546,7 +546,7 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
 	rinfo->shadow[id].status = REQ_WAITING;
 	rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
 
-	(*ring_req)->u.rw.id = id;
+	rinfo->shadow[id].req.u.rw.id = id;
 
 	return id;
 }
@@ -554,11 +554,12 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
 static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_info *rinfo)
 {
 	struct blkfront_info *info = rinfo->dev_info;
-	struct blkif_request *ring_req;
+	struct blkif_request *ring_req, *final_ring_req;
 	unsigned long id;
 
 	/* Fill out a communications ring structure. */
-	id = blkif_ring_get_request(rinfo, req, &ring_req);
+	id = blkif_ring_get_request(rinfo, req, &final_ring_req);
+	ring_req = &rinfo->shadow[id].req;
 
 	ring_req->operation = BLKIF_OP_DISCARD;
 	ring_req->u.discard.nr_sectors = blk_rq_sectors(req);
@@ -569,8 +570,8 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
 	else
 		ring_req->u.discard.flag = 0;
 
-	/* Keep a private copy so we can reissue requests when recovering. */
-	rinfo->shadow[id].req = *ring_req;
+	/* Copy the request to the ring page. */
+	*final_ring_req = *ring_req;
 
 	return 0;
 }
@@ -703,6 +704,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 {
 	struct blkfront_info *info = rinfo->dev_info;
 	struct blkif_request *ring_req, *extra_ring_req = NULL;
+	struct blkif_request *final_ring_req, *final_extra_ring_req;
 	unsigned long id, extra_id = NO_ASSOCIATED_ID;
 	bool require_extra_req = false;
 	int i;
@@ -747,7 +749,8 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	}
 
 	/* Fill out a communications ring structure. */
-	id = blkif_ring_get_request(rinfo, req, &ring_req);
+	id = blkif_ring_get_request(rinfo, req, &final_ring_req);
+	ring_req = &rinfo->shadow[id].req;
 
 	num_sg = blk_rq_map_sg(req->q, req, rinfo->shadow[id].sg);
 	num_grant = 0;
@@ -798,7 +801,9 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 		ring_req->u.rw.nr_segments = num_grant;
 		if (unlikely(require_extra_req)) {
 			extra_id = blkif_ring_get_request(rinfo, req,
-							  &extra_ring_req);
+							  &final_extra_ring_req);
+			extra_ring_req = &rinfo->shadow[extra_id].req;
+
 			/*
 			 * Only the first request contains the scatter-gather
 			 * list.
@@ -840,10 +845,10 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 	if (setup.segments)
 		kunmap_atomic(setup.segments);
 
-	/* Keep a private copy so we can reissue requests when recovering. */
-	rinfo->shadow[id].req = *ring_req;
+	/* Copy request(s) to the ring page. */
+	*final_ring_req = *ring_req;
 	if (unlikely(require_extra_req))
-		rinfo->shadow[extra_id].req = *extra_ring_req;
+		*final_extra_ring_req = *extra_ring_req;
 
 	if (new_persistent_gnts)
 		gnttab_free_grant_references(setup.gref_head);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:03:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126810.238333 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Bb-0005Sa-Po; Thu, 13 May 2021 10:03:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126810.238333; Thu, 13 May 2021 10:03:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Bb-0005SL-JS; Thu, 13 May 2021 10:03:19 +0000
Received: by outflank-mailman (input) for mailman id 126810;
 Thu, 13 May 2021 10:03:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8Ba-0003vw-Lj
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:03:18 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43127006-13b6-4fd0-b5f8-b9b88ad5c80f;
 Thu, 13 May 2021 10:03:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D6B09B156;
 Thu, 13 May 2021 10:03:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43127006-13b6-4fd0-b5f8-b9b88ad5c80f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620900187; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IvMcIqdk9ustdCpVTzueaYDVUeZhxZ/EAhd4OmpuXOI=;
	b=ZtkMcNuE8kcXYMHWB1eAm7u1BAxzlLMEVdB2/nA1DPxetMUzGjtcv/vVkiGbyoq1n6El5i
	PAGibUSBe7Lu5nOiM0Qi+5bXkfUI8pECgwiYR2WcylRx5+Zaxpept3Jo1M/ifP+MvuluDv
	yvkasKkirJJf3j0dKyowWy7EMykFZjc=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Subject: [PATCH 7/8] xen/netfront: don't trust the backend response data blindly
Date: Thu, 13 May 2021 12:03:01 +0200
Message-Id: <20210513100302.22027-8-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today netfront trusts the backend to send only sane response data. To
avoid privilege escalation or crashes caused by a malicious backend,
verify that the data is within the expected limits. In particular, make
sure that a response always references an outstanding request.

Note that only the tx queue needs special id handling, as for the rx
queue the id is equal to the index in the ring page.

Introduce a new indicator marking whether the device is broken, and stop
the device from working once it is set. Set this indicator whenever the
backend sends any malformed data.
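
The validation added for the tx queue can be sketched as below. This is a
simplified model, not the driver itself: demo_check_tx_response, tx_pending,
dev_broken and DEMO_RING_SIZE are hypothetical stand-ins for the patch's
per-queue tx_pending[] array, the info->broken flag and NET_TX_RING_SIZE.

```c
#include <stdbool.h>

#define DEMO_RING_SIZE 16   /* stand-in for NET_TX_RING_SIZE */

static bool tx_pending[DEMO_RING_SIZE];  /* slot has a request in flight */
static bool dev_broken;                  /* device disabled permanently */

bool demo_check_tx_response(unsigned int id)
{
    if (dev_broken)
        return false;

    /* The id must index a ring slot, and that slot must hold an
     * outstanding request; anything else is backend misbehaviour. */
    if (id >= DEMO_RING_SIZE || !tx_pending[id]) {
        dev_broken = true;   /* "disabled for further use" */
        return false;
    }

    tx_pending[id] = false;  /* each response is consumed exactly once */
    return true;
}
```

Clearing the pending flag before processing means a duplicated response id
trips the broken indicator instead of freeing the same skb twice.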

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/net/xen-netfront.c | 71 +++++++++++++++++++++++++++++++++++---
 1 file changed, 67 insertions(+), 4 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 261c35be0147..ccd6d1389b0a 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -154,6 +154,8 @@ struct netfront_queue {
 
 	struct page_pool *page_pool;
 	struct xdp_rxq_info xdp_rxq;
+
+	bool tx_pending[NET_TX_RING_SIZE];
 };
 
 struct netfront_info {
@@ -173,6 +175,9 @@ struct netfront_info {
 	bool netback_has_xdp_headroom;
 	bool netfront_xdp_enabled;
 
+	/* Is device behaving sane? */
+	bool broken;
+
 	atomic_t rx_gso_checksum_fixup;
 };
 
@@ -363,7 +368,7 @@ static int xennet_open(struct net_device *dev)
 	unsigned int i = 0;
 	struct netfront_queue *queue = NULL;
 
-	if (!np->queues)
+	if (!np->queues || np->broken)
 		return -ENODEV;
 
 	for (i = 0; i < num_queues; ++i) {
@@ -391,11 +396,17 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 	unsigned short id;
 	struct sk_buff *skb;
 	bool more_to_do;
+	const struct device *dev = &queue->info->netdev->dev;
 
 	BUG_ON(!netif_carrier_ok(queue->info->netdev));
 
 	do {
 		prod = queue->tx.sring->rsp_prod;
+		if (RING_RESPONSE_PROD_OVERFLOW(&queue->tx, prod)) {
+			dev_alert(dev, "Illegal number of responses %u\n",
+				  prod - queue->tx.rsp_cons);
+			goto err;
+		}
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
 		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
@@ -406,12 +417,25 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 				continue;
 
 			id  = txrsp.id;
+			if (id >= RING_SIZE(&queue->tx)) {
+				dev_alert(dev,
+					  "Response has incorrect id (%u)\n",
+					  id);
+				goto err;
+			}
+			if (!queue->tx_pending[id]) {
+				dev_alert(dev,
+					  "Response for inactive request\n");
+				goto err;
+			}
+
+			queue->tx_pending[id] = false;
 			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
 				queue->grant_tx_ref[id]) != 0)) {
-				pr_alert("%s: warning -- grant still in use by backend domain\n",
-					 __func__);
-				BUG();
+				dev_alert(dev,
+					  "Grant still in use by backend domain\n");
+				goto err;
 			}
 			gnttab_end_foreign_access_ref(
 				queue->grant_tx_ref[id], GNTMAP_readonly);
@@ -429,6 +453,12 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 	} while (more_to_do);
 
 	xennet_maybe_wake_tx(queue);
+
+	return;
+
+ err:
+	queue->info->broken = true;
+	dev_alert(dev, "Disabled for further use\n");
 }
 
 struct xennet_gnttab_make_txreq {
@@ -472,6 +502,13 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
 
 	*tx = info->tx_local;
 
+	/*
+	 * The request is not in its final form, as size and flags might be
+	 * modified later, but even if a malicious backend will send a response
+	 * now, nothing bad regarding security could happen.
+	 */
+	queue->tx_pending[id] = true;
+
 	info->tx = tx;
 	info->size += info->tx_local.size;
 }
@@ -605,6 +642,8 @@ static int xennet_xdp_xmit(struct net_device *dev, int n,
 	int nxmit = 0;
 	int i;
 
+	if (unlikely(np->broken))
+		return -ENODEV;
 	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
 		return -EINVAL;
 
@@ -649,6 +688,8 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
 	/* Drop the packet if no queues are set up */
 	if (num_queues < 1)
 		goto drop;
+	if (unlikely(np->broken))
+		goto drop;
 	/* Determine which queue to transmit this SKB on */
 	queue_index = skb_get_queue_mapping(skb);
 	queue = &np->queues[queue_index];
@@ -1153,6 +1194,13 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	skb_queue_head_init(&tmpq);
 
 	rp = queue->rx.sring->rsp_prod;
+	if (RING_RESPONSE_PROD_OVERFLOW(&queue->rx, rp)) {
+		dev_alert(&dev->dev, "Illegal number of responses %u\n",
+			  rp - queue->rx.rsp_cons);
+		queue->info->broken = true;
+		spin_unlock(&queue->rx_lock);
+		return 0;
+	}
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
 	i = queue->rx.rsp_cons;
@@ -1373,6 +1421,9 @@ static irqreturn_t xennet_tx_interrupt(int irq, void *dev_id)
 	struct netfront_queue *queue = dev_id;
 	unsigned long flags;
 
+	if (queue->info->broken)
+		return IRQ_HANDLED;
+
 	spin_lock_irqsave(&queue->tx_lock, flags);
 	xennet_tx_buf_gc(queue);
 	spin_unlock_irqrestore(&queue->tx_lock, flags);
@@ -1385,6 +1436,9 @@ static irqreturn_t xennet_rx_interrupt(int irq, void *dev_id)
 	struct netfront_queue *queue = dev_id;
 	struct net_device *dev = queue->info->netdev;
 
+	if (queue->info->broken)
+		return IRQ_HANDLED;
+
 	if (likely(netif_carrier_ok(dev) &&
 		   RING_HAS_UNCONSUMED_RESPONSES(&queue->rx)))
 		napi_schedule(&queue->napi);
@@ -1406,6 +1460,10 @@ static void xennet_poll_controller(struct net_device *dev)
 	struct netfront_info *info = netdev_priv(dev);
 	unsigned int num_queues = dev->real_num_tx_queues;
 	unsigned int i;
+
+	if (info->broken)
+		return;
+
 	for (i = 0; i < num_queues; ++i)
 		xennet_interrupt(0, &info->queues[i]);
 }
@@ -1477,6 +1535,11 @@ static int xennet_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 
 static int xennet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 {
+	struct netfront_info *np = netdev_priv(dev);
+
+	if (np->broken)
+		return -ENODEV;
+
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		return xennet_xdp_set(dev, xdp->prog, xdp->extack);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:03:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126811.238345 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Bg-0005zi-Dh; Thu, 13 May 2021 10:03:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126811.238345; Thu, 13 May 2021 10:03:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Bg-0005zL-8S; Thu, 13 May 2021 10:03:24 +0000
Received: by outflank-mailman (input) for mailman id 126811;
 Thu, 13 May 2021 10:03:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8Be-0003vg-3U
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:03:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a743bf04-aef4-409d-a481-7e626ae6bc5f;
 Thu, 13 May 2021 10:03:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30CD0B021;
 Thu, 13 May 2021 10:03:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a743bf04-aef4-409d-a481-7e626ae6bc5f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620900186; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+55VMtjzppNIHp9ZFTWUzvbF11WwY57Fyil6Rw9VV3g=;
	b=WIbclcgSxN14F4SQTwhdICOoy/oKAZ9BoqgICnTIhIpgk6Erd4l4kEsUzugP1UaejvjNbV
	SF+VCrj+ofIatBefBM2F1ruGfq4tOuGWV2bH5lzn6RI0YeWS3mPXlGIxGVYjtjjZvVYhSc
	QH0d4mMri26CSMZx7ZAGaRASXWH8GmQ=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linux-block@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	=?UTF-8?q?Roger=20Pau=20Monn=C3=A9?= <roger.pau@citrix.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Jens Axboe <axboe@kernel.dk>
Subject: [PATCH 4/8] xen/blkfront: don't trust the backend response data blindly
Date: Thu, 13 May 2021 12:02:58 +0200
Message-Id: <20210513100302.22027-5-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Today blkfront trusts the backend to send only sane response data. To
avoid privilege escalation or crashes caused by a malicious backend,
verify that the data is within the expected limits. In particular, make
sure that a response always references an outstanding request.

Introduce a new ring state, BLKIF_STATE_ERROR, which is entered when an
inconsistency is detected.
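
The request-state checking introduced by the patch (REQ_PROCESSING while a
request is being built or completed, REQ_WAITING only while it is actually on
the ring) can be modelled as below. All names here are hypothetical
simplifications of the shadow-status lifecycle; S_ERROR-style handling stands
in for BLKIF_STATE_ERROR.

```c
enum demo_status { S_PROCESSING, S_WAITING, S_DONE };

static enum demo_status status[8];  /* per-slot shadow state */
static int ring_error;              /* analogue of BLKIF_STATE_ERROR */

int demo_complete(unsigned long id)
{
    /* Only a slot in WAITING state may be completed.  Anything else
     * means the backend referenced a request that was never put on
     * the ring, or completed it twice, so the whole ring is put into
     * an error state. */
    if (id >= 8 || status[id] != S_WAITING) {
        ring_error = 1;
        return -1;
    }
    status[id] = S_DONE;
    return 0;
}
```

Because slots default to S_PROCESSING, a response arriving before the request
was actually published to the ring is rejected as well, which matches the
patch setting REQ_WAITING only after the copy to the ring page.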

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/block/xen-blkfront.c | 62 +++++++++++++++++++++++++++---------
 1 file changed, 47 insertions(+), 15 deletions(-)

diff --git a/drivers/block/xen-blkfront.c b/drivers/block/xen-blkfront.c
index c6a05de4f15f..aa0f159829b4 100644
--- a/drivers/block/xen-blkfront.c
+++ b/drivers/block/xen-blkfront.c
@@ -80,6 +80,7 @@ enum blkif_state {
 	BLKIF_STATE_DISCONNECTED,
 	BLKIF_STATE_CONNECTED,
 	BLKIF_STATE_SUSPENDED,
+	BLKIF_STATE_ERROR,
 };
 
 struct grant {
@@ -89,6 +90,7 @@ struct grant {
 };
 
 enum blk_req_status {
+	REQ_PROCESSING,
 	REQ_WAITING,
 	REQ_DONE,
 	REQ_ERROR,
@@ -543,7 +545,7 @@ static unsigned long blkif_ring_get_request(struct blkfront_ring_info *rinfo,
 
 	id = get_id_from_freelist(rinfo);
 	rinfo->shadow[id].request = req;
-	rinfo->shadow[id].status = REQ_WAITING;
+	rinfo->shadow[id].status = REQ_PROCESSING;
 	rinfo->shadow[id].associated_id = NO_ASSOCIATED_ID;
 
 	rinfo->shadow[id].req.u.rw.id = id;
@@ -572,6 +574,7 @@ static int blkif_queue_discard_req(struct request *req, struct blkfront_ring_inf
 
 	/* Copy the request to the ring page. */
 	*final_ring_req = *ring_req;
+	rinfo->shadow[id].status = REQ_WAITING;
 
 	return 0;
 }
@@ -847,8 +850,11 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
 
 	/* Copy request(s) to the ring page. */
 	*final_ring_req = *ring_req;
-	if (unlikely(require_extra_req))
+	rinfo->shadow[id].status = REQ_WAITING;
+	if (unlikely(require_extra_req)) {
 		*final_extra_ring_req = *extra_ring_req;
+		rinfo->shadow[extra_id].status = REQ_WAITING;
+	}
 
 	if (new_persistent_gnts)
 		gnttab_free_grant_references(setup.gref_head);
@@ -1420,8 +1426,8 @@ static enum blk_req_status blkif_rsp_to_req_status(int rsp)
 static int blkif_get_final_status(enum blk_req_status s1,
 				  enum blk_req_status s2)
 {
-	BUG_ON(s1 == REQ_WAITING);
-	BUG_ON(s2 == REQ_WAITING);
+	BUG_ON(s1 < REQ_DONE);
+	BUG_ON(s2 < REQ_DONE);
 
 	if (s1 == REQ_ERROR || s2 == REQ_ERROR)
 		return BLKIF_RSP_ERROR;
@@ -1454,7 +1460,7 @@ static bool blkif_completion(unsigned long *id,
 		s->status = blkif_rsp_to_req_status(bret->status);
 
 		/* Wait the second response if not yet here. */
-		if (s2->status == REQ_WAITING)
+		if (s2->status < REQ_DONE)
 			return false;
 
 		bret->status = blkif_get_final_status(s->status,
@@ -1574,10 +1580,16 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	spin_lock_irqsave(&rinfo->ring_lock, flags);
  again:
 	rp = rinfo->ring.sring->rsp_prod;
+	if (RING_RESPONSE_PROD_OVERFLOW(&rinfo->ring, rp)) {
+		pr_alert("%s: illegal number of responses %u\n",
+			 info->gd->disk_name, rp - rinfo->ring.rsp_cons);
+		goto err;
+	}
 	rmb(); /* Ensure we see queued responses up to 'rp'. */
 
 	for (i = rinfo->ring.rsp_cons; i != rp; i++) {
 		unsigned long id;
+		unsigned int op;
 
 		RING_COPY_RESPONSE(&rinfo->ring, i, &bret);
 		id = bret.id;
@@ -1588,14 +1600,28 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 		 * look in get_id_from_freelist.
 		 */
 		if (id >= BLK_RING_SIZE(info)) {
-			WARN(1, "%s: response to %s has incorrect id (%ld)\n",
-			     info->gd->disk_name, op_name(bret.operation), id);
-			/* We can't safely get the 'struct request' as
-			 * the id is busted. */
-			continue;
+			pr_alert("%s: response has incorrect id (%ld)\n",
+				 info->gd->disk_name, id);
+			goto err;
 		}
+		if (rinfo->shadow[id].status != REQ_WAITING) {
+			pr_alert("%s: response references no pending request\n",
+				 info->gd->disk_name);
+			goto err;
+		}
+
+		rinfo->shadow[id].status = REQ_PROCESSING;
 		req  = rinfo->shadow[id].request;
 
+		op = rinfo->shadow[id].req.operation;
+		if (op == BLKIF_OP_INDIRECT)
+			op = rinfo->shadow[id].req.u.indirect.indirect_op;
+		if (bret.operation != op) {
+			pr_alert("%s: response has wrong operation (%u instead of %u)\n",
+				 info->gd->disk_name, bret.operation, op);
+			goto err;
+		}
+
 		if (bret.operation != BLKIF_OP_DISCARD) {
 			/*
 			 * We may need to wait for an extra response if the
@@ -1620,7 +1646,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 		case BLKIF_OP_DISCARD:
 			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
 				struct request_queue *rq = info->rq;
-				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
+
+				pr_warn_ratelimited("blkfront: %s: %s op failed\n",
 					   info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 				info->feature_discard = 0;
@@ -1632,13 +1659,13 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 		case BLKIF_OP_FLUSH_DISKCACHE:
 		case BLKIF_OP_WRITE_BARRIER:
 			if (unlikely(bret.status == BLKIF_RSP_EOPNOTSUPP)) {
-				printk(KERN_WARNING "blkfront: %s: %s op failed\n",
+				pr_warn_ratelimited("blkfront: %s: %s op failed\n",
 				       info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 			}
 			if (unlikely(bret.status == BLKIF_RSP_ERROR &&
 				     rinfo->shadow[id].req.u.rw.nr_segments == 0)) {
-				printk(KERN_WARNING "blkfront: %s: empty %s op failed\n",
+				pr_warn_ratelimited("blkfront: %s: empty %s op failed\n",
 				       info->gd->disk_name, op_name(bret.operation));
 				blkif_req(req)->error = BLK_STS_NOTSUPP;
 			}
@@ -1653,8 +1680,8 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 		case BLKIF_OP_READ:
 		case BLKIF_OP_WRITE:
 			if (unlikely(bret.status != BLKIF_RSP_OKAY))
-				dev_dbg(&info->xbdev->dev, "Bad return from blkdev data "
-					"request: %x\n", bret.status);
+				dev_dbg_ratelimited(&info->xbdev->dev,
+					"Bad return from blkdev data request: %x\n", bret.status);
 
 			break;
 		default:
@@ -1680,6 +1707,11 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
 	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
 
 	return IRQ_HANDLED;
+
+ err:
+	info->connected = BLKIF_STATE_ERROR;
+	pr_alert("%s disabled for further use\n", info->gd->disk_name);
+	return IRQ_HANDLED;
 }
 
 
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:03:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126814.238357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Bk-0006XQ-Rf; Thu, 13 May 2021 10:03:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126814.238357; Thu, 13 May 2021 10:03:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Bk-0006X1-LP; Thu, 13 May 2021 10:03:28 +0000
Received: by outflank-mailman (input) for mailman id 126814;
 Thu, 13 May 2021 10:03:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8Bj-0003vg-3n
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:03:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e15c491b-d62f-4b73-aa93-74a38cbc588f;
 Thu, 13 May 2021 10:03:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 68AE3B062;
 Thu, 13 May 2021 10:03:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e15c491b-d62f-4b73-aa93-74a38cbc588f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620900186; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ygDKbDhfDMlj04vAZT1nQUuD2Bsz2E2EX2ert/+Sldc=;
	b=iGbXPTsViBTDLv+bdZJGVl+oV4HRm0TVz4r6wDci4iCRCW2Hm5WIIRHkHZxUc0LEd1sO1H
	vwqUv4y3NiJHJBX0idWsuTAMxSTOznt4q/IRhurE8VkFp4pjlCxdnjec6RwkyKSaTT5CBc
	PnbZFESlVi1Q5KeT0sJ2wcfmvqVNRzk=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	netdev@vger.kernel.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>
Subject: [PATCH 5/8] xen/netfront: read response from backend only once
Date: Thu, 13 May 2021 12:02:59 +0200
Message-Id: <20210513100302.22027-6-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

To avoid problems in case the backend modifies a response on the ring
page after the frontend has already seen it, read the response into a
local buffer in one go and then operate on that buffer only.
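
The switch from RING_GET_RESPONSE (a pointer into the shared page) to
RING_COPY_RESPONSE (a snapshot into private memory) can be sketched as
follows. demo_rsp, ring_rsp and demo_copy_response are hypothetical
simplifications, not the real netif structures or ring macros.

```c
struct demo_rsp { unsigned int id; int status; };

/* Slot in the shared ring page; a malicious backend may rewrite it
 * between any two reads by the frontend. */
static volatile struct demo_rsp ring_rsp;

/* Analogue of RING_COPY_RESPONSE: snapshot the shared slot into a
 * private buffer once, so every later field access sees one
 * consistent value no matter what the backend does afterwards. */
void demo_copy_response(struct demo_rsp *local)
{
    local->id = ring_rsp.id;
    local->status = ring_rsp.status;
}
```

With a pointer into the ring, each field access is a separate load from
backend-writable memory, so checks and uses of the same field can see
different values; the local copy removes that time-of-check/time-of-use
window.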

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/net/xen-netfront.c | 38 +++++++++++++++++++-------------------
 1 file changed, 19 insertions(+), 19 deletions(-)

diff --git a/drivers/net/xen-netfront.c b/drivers/net/xen-netfront.c
index 44275908d61a..f91e41ece554 100644
--- a/drivers/net/xen-netfront.c
+++ b/drivers/net/xen-netfront.c
@@ -399,13 +399,13 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
 		rmb(); /* Ensure we see responses up to 'rp'. */
 
 		for (cons = queue->tx.rsp_cons; cons != prod; cons++) {
-			struct xen_netif_tx_response *txrsp;
+			struct xen_netif_tx_response txrsp;
 
-			txrsp = RING_GET_RESPONSE(&queue->tx, cons);
-			if (txrsp->status == XEN_NETIF_RSP_NULL)
+			RING_COPY_RESPONSE(&queue->tx, cons, &txrsp);
+			if (txrsp.status == XEN_NETIF_RSP_NULL)
 				continue;
 
-			id  = txrsp->id;
+			id  = txrsp.id;
 			skb = queue->tx_skbs[id].skb;
 			if (unlikely(gnttab_query_foreign_access(
 				queue->grant_tx_ref[id]) != 0)) {
@@ -814,7 +814,7 @@ static int xennet_get_extras(struct netfront_queue *queue,
 			     RING_IDX rp)
 
 {
-	struct xen_netif_extra_info *extra;
+	struct xen_netif_extra_info extra;
 	struct device *dev = &queue->info->netdev->dev;
 	RING_IDX cons = queue->rx.rsp_cons;
 	int err = 0;
@@ -830,24 +830,22 @@ static int xennet_get_extras(struct netfront_queue *queue,
 			break;
 		}
 
-		extra = (struct xen_netif_extra_info *)
-			RING_GET_RESPONSE(&queue->rx, ++cons);
+		RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
 
-		if (unlikely(!extra->type ||
-			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
+		if (unlikely(!extra.type ||
+			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
 			if (net_ratelimit())
 				dev_warn(dev, "Invalid extra type: %d\n",
-					extra->type);
+					extra.type);
 			err = -EINVAL;
 		} else {
-			memcpy(&extras[extra->type - 1], extra,
-			       sizeof(*extra));
+			memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
 		}
 
 		skb = xennet_get_rx_skb(queue, cons);
 		ref = xennet_get_rx_ref(queue, cons);
 		xennet_move_rx_slot(queue, skb, ref);
-	} while (extra->flags & XEN_NETIF_EXTRA_FLAG_MORE);
+	} while (extra.flags & XEN_NETIF_EXTRA_FLAG_MORE);
 
 	queue->rx.rsp_cons = cons;
 	return err;
@@ -905,7 +903,7 @@ static int xennet_get_responses(struct netfront_queue *queue,
 				struct sk_buff_head *list,
 				bool *need_xdp_flush)
 {
-	struct xen_netif_rx_response *rx = &rinfo->rx;
+	struct xen_netif_rx_response *rx = &rinfo->rx, rx_local;
 	int max = XEN_NETIF_NR_SLOTS_MIN + (rx->status <= RX_COPY_THRESHOLD);
 	RING_IDX cons = queue->rx.rsp_cons;
 	struct sk_buff *skb = xennet_get_rx_skb(queue, cons);
@@ -989,7 +987,8 @@ static int xennet_get_responses(struct netfront_queue *queue,
 			break;
 		}
 
-		rx = RING_GET_RESPONSE(&queue->rx, cons + slots);
+		RING_COPY_RESPONSE(&queue->rx, cons + slots, &rx_local);
+		rx = &rx_local;
 		skb = xennet_get_rx_skb(queue, cons + slots);
 		ref = xennet_get_rx_ref(queue, cons + slots);
 		slots++;
@@ -1044,10 +1043,11 @@ static int xennet_fill_frags(struct netfront_queue *queue,
 	struct sk_buff *nskb;
 
 	while ((nskb = __skb_dequeue(list))) {
-		struct xen_netif_rx_response *rx =
-			RING_GET_RESPONSE(&queue->rx, ++cons);
+		struct xen_netif_rx_response rx;
 		skb_frag_t *nfrag = &skb_shinfo(nskb)->frags[0];
 
+		RING_COPY_RESPONSE(&queue->rx, ++cons, &rx);
+
 		if (skb_shinfo(skb)->nr_frags == MAX_SKB_FRAGS) {
 			unsigned int pull_to = NETFRONT_SKB_CB(skb)->pull_to;
 
@@ -1062,7 +1062,7 @@ static int xennet_fill_frags(struct netfront_queue *queue,
 
 		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
 				skb_frag_page(nfrag),
-				rx->offset, rx->status, PAGE_SIZE);
+				rx.offset, rx.status, PAGE_SIZE);
 
 		skb_shinfo(nskb)->nr_frags = 0;
 		kfree_skb(nskb);
@@ -1161,7 +1161,7 @@ static int xennet_poll(struct napi_struct *napi, int budget)
 	i = queue->rx.rsp_cons;
 	work_done = 0;
 	while ((i != rp) && (work_done < budget)) {
-		memcpy(rx, RING_GET_RESPONSE(&queue->rx, i), sizeof(*rx));
+		RING_COPY_RESPONSE(&queue->rx, i, rx);
 		memset(extras, 0, sizeof(rinfo.extras));
 
 		err = xennet_get_responses(queue, &rinfo, rp, &tmpq,
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:03:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:03:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126817.238369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Bq-0007OF-5m; Thu, 13 May 2021 10:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126817.238369; Thu, 13 May 2021 10:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8Bq-0007O2-10; Thu, 13 May 2021 10:03:34 +0000
Received: by outflank-mailman (input) for mailman id 126817;
 Thu, 13 May 2021 10:03:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8Bo-0003vg-3g
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:03:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 70819697-d6d0-4f9d-a70f-64adb0fb8e2a;
 Thu, 13 May 2021 10:03:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0DC46B15E;
 Thu, 13 May 2021 10:03:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70819697-d6d0-4f9d-a70f-64adb0fb8e2a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620900187; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=j+l5tq5M/f1kECXbl74yHxqhOUkbjQkUE4NZC677IzI=;
	b=muXNug23i8D8WshIzxtjKc3x9GnLaAOKXKGlzsXTDl/23evlPFNCHKHd4a4SVRdSANqXEp
	icn4g/Bx1ghMjx9o1JXTxQFB4zJLnEvn3Te0fnTR9PVC/fhKkMdndGrcacL/hxHOGrnzad
	pwoTCoUMnAz3BRbea9XDOTFJf6ozUQ4=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org,
	linuxppc-dev@lists.ozlabs.org,
	linux-kernel@vger.kernel.org
Cc: Juergen Gross <jgross@suse.com>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jiri Slaby <jirislaby@kernel.org>
Subject: [PATCH 8/8] xen/hvc: replace BUG_ON() with negative return value
Date: Thu, 13 May 2021 12:03:02 +0200
Message-Id: <20210513100302.22027-9-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xen frontends shouldn't BUG() in case of illegal data received from
their backends. So replace the BUG_ON()s when reading illegal data from
the ring page with negative return values.
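
The warn-and-return pattern used here can be modelled in userspace as
follows. This is a sketch only: warn_once(), check_ring_indices() and the
literal -22 (EINVAL) are illustrative substitutes for the kernel's
WARN_ONCE() and the __write_console()/domU_read_console() checks.

```c
#include <stdio.h>

/* Userspace model of WARN_ONCE(): report the first violation and return
 * the condition so callers can bail out instead of crashing the kernel. */
static int warn_once(int cond, const char *msg)
{
	static int warned;

	if (cond && !warned) {
		warned = 1;
		fprintf(stderr, "WARNING: %s\n", msg);
	}
	return cond;
}

/* Mirrors the ring index sanity check: with unsigned arithmetic,
 * prod - cons is the number of unconsumed bytes and must never exceed
 * the ring size, even after the indices have wrapped around. */
static int check_ring_indices(unsigned int prod, unsigned int cons,
			      unsigned int ring_size)
{
	if (warn_once((prod - cons) > ring_size, "Illegal ring page indices"))
		return -22; /* -EINVAL */
	return 0;
}
```

A backend that corrupts the indices now causes the console operation to
fail with -EINVAL instead of taking down the whole domain via BUG_ON().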

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 drivers/tty/hvc/hvc_xen.c | 15 +++++++++++++--
 1 file changed, 13 insertions(+), 2 deletions(-)

diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
index 92c9a476defc..30d7ffb1e04c 100644
--- a/drivers/tty/hvc/hvc_xen.c
+++ b/drivers/tty/hvc/hvc_xen.c
@@ -86,6 +86,11 @@ static int __write_console(struct xencons_info *xencons,
 	cons = intf->out_cons;
 	prod = intf->out_prod;
 	mb();			/* update queue values before going on */
+
+	if (WARN_ONCE((prod - cons) > sizeof(intf->out),
+		      "Illegal ring page indices"))
+		return -EINVAL;
+
 	BUG_ON((prod - cons) > sizeof(intf->out));
 
 	while ((sent < len) && ((prod - cons) < sizeof(intf->out)))
@@ -114,7 +119,10 @@ static int domU_write_console(uint32_t vtermno, const char *data, int len)
 	 */
 	while (len) {
 		int sent = __write_console(cons, data, len);
-		
+
+		if (sent < 0)
+			return sent;
+
 		data += sent;
 		len -= sent;
 
@@ -138,7 +146,10 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
 	cons = intf->in_cons;
 	prod = intf->in_prod;
 	mb();			/* get pointers before reading ring */
-	BUG_ON((prod - cons) > sizeof(intf->in));
+
+	if (WARN_ONCE((prod - cons) > sizeof(intf->in),
+		      "Illegal ring page indices"))
+		return -EINVAL;
 
 	while (cons != prod && recv < len)
 		buf[recv++] = intf->in[MASK_XENCONS_IDX(cons++, intf->in)];
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Thu May 13 10:20:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:20:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126860.238387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8SM-0002nz-Qu; Thu, 13 May 2021 10:20:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126860.238387; Thu, 13 May 2021 10:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8SM-0002ns-M5; Thu, 13 May 2021 10:20:38 +0000
Received: by outflank-mailman (input) for mailman id 126860;
 Thu, 13 May 2021 10:20:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8SM-0002nm-BX
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:20:38 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 67efbfcb-d6e6-4bf8-a71d-be31905b90b0;
 Thu, 13 May 2021 10:20:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47268B15E;
 Thu, 13 May 2021 10:20:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 67efbfcb-d6e6-4bf8-a71d-be31905b90b0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620901236; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Gv3oECQ46RpkqjsSmVdu+DBDmUv5de8Z0G7uM0jmSok=;
	b=jUliB6QB+yR8m7zRmYNUYKDKJ3/1vxpyky3PJGw3F93zhwBoHSkhylkgwQb1Vaq7l1dlm1
	2KogYYZd78ZTwfzoJSBPcNNs5osgvCANhvLt7vleAdSAVPhIeb3n8SETjdMtUSokPWGDd5
	Ownp3SVZ9wWKrhXoIPaI4X6u/QbH9/0=
Subject: Re: [PATCH 8/8] xen/hvc: replace BUG_ON() with negative return value
To: Christophe Leroy <christophe.leroy@csgroup.eu>,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jiri Slaby <jirislaby@kernel.org>
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-9-jgross@suse.com>
 <6da4cc91-ccde-fce8-707c-e7544783c2fa@csgroup.eu>
From: Juergen Gross <jgross@suse.com>
Message-ID: <e62b12e7-6dbe-3d26-2196-9cbc2c0d4160@suse.com>
Date: Thu, 13 May 2021 12:20:35 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <6da4cc91-ccde-fce8-707c-e7544783c2fa@csgroup.eu>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="PNhWaVfjD0otdP9RzGGQdOMphOo0bj07j"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--PNhWaVfjD0otdP9RzGGQdOMphOo0bj07j
Content-Type: multipart/mixed; boundary="yyRuUPzqcs5EAOdYBXLJTxPGRA5Ebq4bt";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Christophe Leroy <christophe.leroy@csgroup.eu>,
 xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jiri Slaby <jirislaby@kernel.org>
Message-ID: <e62b12e7-6dbe-3d26-2196-9cbc2c0d4160@suse.com>
Subject: Re: [PATCH 8/8] xen/hvc: replace BUG_ON() with negative return value
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-9-jgross@suse.com>
 <6da4cc91-ccde-fce8-707c-e7544783c2fa@csgroup.eu>
In-Reply-To: <6da4cc91-ccde-fce8-707c-e7544783c2fa@csgroup.eu>

--yyRuUPzqcs5EAOdYBXLJTxPGRA5Ebq4bt
Content-Type: multipart/mixed;
 boundary="------------8C09C99EF5386C5A52D248BD"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8C09C99EF5386C5A52D248BD
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 13.05.21 12:16, Christophe Leroy wrote:
> 
> 
> On 13/05/2021 at 12:03, Juergen Gross wrote:
>> Xen frontends shouldn't BUG() in case of illegal data received from
>> their backends. So replace the BUG_ON()s when reading illegal data from
>> the ring page with negative return values.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   drivers/tty/hvc/hvc_xen.c | 15 +++++++++++++--
>>   1 file changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
>> index 92c9a476defc..30d7ffb1e04c 100644
>> --- a/drivers/tty/hvc/hvc_xen.c
>> +++ b/drivers/tty/hvc/hvc_xen.c
>> @@ -86,6 +86,11 @@ static int __write_console(struct xencons_info *xencons,
>>      cons = intf->out_cons;
>>      prod = intf->out_prod;
>>      mb();            /* update queue values before going on */
>> +
>> +    if (WARN_ONCE((prod - cons) > sizeof(intf->out),
>> +              "Illegal ring page indices"))
>> +        return -EINVAL;
>> +
>>      BUG_ON((prod - cons) > sizeof(intf->out));
> 
> Why keep the BUG_ON() ?

Oh, failed to delete it. Thanks for noticing.


Juergen

--------------8C09C99EF5386C5A52D248BD--

--yyRuUPzqcs5EAOdYBXLJTxPGRA5Ebq4bt--

--PNhWaVfjD0otdP9RzGGQdOMphOo0bj07j--


From xen-devel-bounces@lists.xenproject.org Thu May 13 10:25:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:25:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126866.238398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8X0-0003Ug-C4; Thu, 13 May 2021 10:25:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126866.238398; Thu, 13 May 2021 10:25:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8X0-0003UZ-8M; Thu, 13 May 2021 10:25:26 +0000
Received: by outflank-mailman (input) for mailman id 126866;
 Thu, 13 May 2021 10:25:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yjCE=KI=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lh8Wz-0003UT-7t
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:25:25 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::3])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6f90336b-5d1f-4ac9-8dd5-38858d5d064c;
 Thu, 13 May 2021 10:25:23 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.8 AUTH)
 with ESMTPSA id N048d9x4DAPM3tp
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate)
 for <xen-devel@lists.xenproject.org>;
 Thu, 13 May 2021 12:25:22 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6f90336b-5d1f-4ac9-8dd5-38858d5d064c
ARC-Seal: i=1; a=rsa-sha256; t=1620901522; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=h8Q+594ETXpHD2RuMlcjjI4tElDnD7dsuaSIH5nDBkhW2k/t1s1gM8VsKUYpelgQmJ
    DsySzTya7dA9ZXxuPA+MWBMlxW7D7rGAtLas4/rL9MV3G6a/5kCuT8S+PcDbild5Chgp
    xs1izGPiiGMh93qDt0AZ50ohbw8CRlsZKY5YkgwrI8DmVC846+zBTWMBW3oNiEfkyBt9
    1TcgYwE4Hcot3S7pToJCY+ZeR5NyL0wDI3n/K+p45Oa0hVyJ8KN8Myn5jSzIYf4bVA+T
    MvaxsH9CxI8kPs18POR6q3LOb/jJkJ2DjwFppw5B+XQRxjSWDpFuHqrlOyM+kFyItFP6
    IO1g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620901522;
    s=strato-dkim-0002; d=strato.com;
    h=Message-ID:Subject:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=vdZRBaOASTW0gueMtvSxWM+YkHqauO38kr8TeB7UK1c=;
    b=gShGFoExLmaLOHLfBzl+xVh1yryz81r6SXUwstBxyj2b7jW8NidSmHcxx1GUVcXAmt
    j8Se+ZoPpqznpHHy/hrUO0HTWSfEY2Bs6hBLFPIJWKVO39jWgqT3ZumdynOcsXj+16FJ
    SxyByIwjvVAkGYg3wb1Md/QUnQlDUNockUZ1ca9G93IQPq9AYeyZUxKxzgmRqK2LG0W6
    XLUZb2kU+Mm/PVRfC1An+/lgW/2lLKW7c/bMgItSs3H9U/pQ97SU++Tpv84V+D0Yi7w3
    DjYOpIl3NAS3CIH5K0WqKxKAsM48kSmMvUVKFA7A3bXQAMZBUbiNSuhvoex0uIVe0FSa
    RGpQ==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620901522;
    s=strato-dkim-0002; d=aepfle.de;
    h=Message-ID:Subject:To:From:Date:Cc:Date:From:Subject:Sender;
    bh=vdZRBaOASTW0gueMtvSxWM+YkHqauO38kr8TeB7UK1c=;
    b=keGO2Z8qxYsSBS239MP4ShXpW5XQx6V5kCnMwuGi0vR1sW+XYcfzgRiMhmPtdi78rX
    Nhf0Z3O47x/jKq1z8HcEMdqMWUgWEoWEeu+DftxNuU1HYAd9UVHuhIRFJQs8nhD3tAyy
    RYGMHerVopjA1zOL+ehyZ2jUCdYvfzgaqR6Ve/pM6TXMDAeHU0tAifPUs8bH3D9JWJAD
    9dtjhhgT4q4RgxFV8JFRX9WvqjR9YN/Vcr1gByMBe4BJGv7BZiZPdXLF5rUpkcWO/Edb
    oaMlMKheBSltMRWAJq1SKzxy14QTs6NkuRsSTnjd9J7YUJXh0fvactYUf12x3sEkZsl1
    f8fg==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Thu, 13 May 2021 12:24:57 +0200
From: Olaf Hering <olaf@aepfle.de>
To: xen-devel@lists.xenproject.org
Subject: regression in recent pvops kernels, dom0 crashes early
Message-ID: <20210513122457.4182eb7f.olaf@aepfle.de>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/dO5gs0drS3OyAvj+8f4m0KJ";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/dO5gs0drS3OyAvj+8f4m0KJ
Content-Type: multipart/mixed; boundary="MP_/fncddok1AucPyRN.am=TRBX"

--MP_/fncddok1AucPyRN.am=TRBX
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: 8bit
Content-Disposition: inline

Recent pvops dom0 kernels fail to boot on this particular ProLiant BL465c G5 box.
It happens to work with every Xen and a 4.4 based sle12sp3 kernel, but fails with every Xen and a 4.12 based sle12sp4 (and every newer) kernel.

Any idea what is going on?

....
(XEN) Freed 256kB init memory.
(XEN) mm.c:1758:d0 Bad L1 flags 800000
(XEN) traps.c:458:d0 Unhandled invalid opcode fault/trap [#6] on VCPU 0 [ec=0000]
(XEN) domain_crash_sync called from entry.S: fault at ffff82d08022a2a0 create_bounce_frame+0x133/0x143
(XEN) Domain 0 (vcpu#0) crashed on cpu#0:
(XEN) ----[ Xen-4.4.20170405T152638.6bf0560e12-9.xen44  x86_64  debug=y  Not tainted ]----
....

....
(XEN) Freed 656kB init memory
(XEN) mm.c:2165:d0v0 Bad L1 flags 800000
(XEN) d0v0 Unhandled invalid opcode fault/trap [#6, ec=ffffffff]
(XEN) domain_crash_sync called from entry.S: fault at ffff82d04031a016 x86_64/entry.S#create_bounce_frame+0x15d/0x177
(XEN) Domain 0 (vcpu#0) crashed on cpu#5:
(XEN) ----[ Xen-4.15.20210504T145803.280d472f4f-6.xen415  x86_64  debug=y  Not tainted ]----
....

I can probably cycle through all kernels between 4.4 and 4.12 to see where it broke.


Olaf

--MP_/fncddok1AucPyRN.am=TRBX
Content-Type: application/gzip
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=xen_404.sle12sp4.txt.gz


--MP_/fncddok1AucPyRN.am=TRBX
Content-Type: application/gzip
Content-Transfer-Encoding: base64
Content-Disposition: attachment; filename=xen_415.tw.txt.gz


--MP_/fncddok1AucPyRN.am=TRBX--

--Sig_/dO5gs0drS3OyAvj+8f4m0KJ
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP


--Sig_/dO5gs0drS3OyAvj+8f4m0KJ--


From xen-devel-bounces@lists.xenproject.org Thu May 13 10:25:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:25:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126870.238410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8XO-00040L-OF; Thu, 13 May 2021 10:25:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126870.238410; Thu, 13 May 2021 10:25:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8XO-00040E-LP; Thu, 13 May 2021 10:25:50 +0000
Received: by outflank-mailman (input) for mailman id 126870;
 Thu, 13 May 2021 10:25:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IFe4=KI=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1lh8XN-0003zk-2T
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:25:49 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f1541a77-476a-4da5-b731-7a97c2ac679f;
 Thu, 13 May 2021 10:25:48 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id CA2A661104;
 Thu, 13 May 2021 10:25:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1541a77-476a-4da5-b731-7a97c2ac679f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1620901547;
	bh=tGSbnJbV9mpjOtoS4fZRDhnpTdRQz/IXHDW4kTYpYAs=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=1I/jANeD3wLrWb04k8nKmsR8WskdmDEfQnDlB/qG+eYesSbsCu01am3HAH+SUqveR
	 7+y3F7VupADfNGsh3gkbc+FnaFq7W5XgtRmTEB2t2fychkG4zIpPchE/u1K4NCOF8I
	 5m1vFoYGXFebcrvbSvQbM6suE/okEvakNcJNX74Q=
Date: Thu, 13 May 2021 12:25:44 +0200
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
	linux-kernel@vger.kernel.org, Jiri Slaby <jirislaby@kernel.org>
Subject: Re: [PATCH 8/8] xen/hvc: replace BUG_ON() with negative return value
Message-ID: <YJz+qK8snI64/TKh@kroah.com>
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-9-jgross@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210513100302.22027-9-jgross@suse.com>

On Thu, May 13, 2021 at 12:03:02PM +0200, Juergen Gross wrote:
> Xen frontends shouldn't BUG() in case of illegal data received from
> their backends. So replace the BUG_ON()s when reading illegal data from
> the ring page with negative return values.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>  drivers/tty/hvc/hvc_xen.c | 15 +++++++++++++--
>  1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> index 92c9a476defc..30d7ffb1e04c 100644
> --- a/drivers/tty/hvc/hvc_xen.c
> +++ b/drivers/tty/hvc/hvc_xen.c
> @@ -86,6 +86,11 @@ static int __write_console(struct xencons_info *xencons,
>  	cons = intf->out_cons;
>  	prod = intf->out_prod;
>  	mb();			/* update queue values before going on */
> +
> +	if (WARN_ONCE((prod - cons) > sizeof(intf->out),
> +		      "Illegal ring page indices"))
> +		return -EINVAL;

How nice, you just rebooted on panic-on-warn systems :(

> +
>  	BUG_ON((prod - cons) > sizeof(intf->out));

Why keep this line?

Please just fix this up properly, if userspace can trigger this, then
both the WARN_ON() and BUG_ON() are not correct and need to be correctly
handled.


>  
>  	while ((sent < len) && ((prod - cons) < sizeof(intf->out)))
> @@ -114,7 +119,10 @@ static int domU_write_console(uint32_t vtermno, const char *data, int len)
>  	 */
>  	while (len) {
>  		int sent = __write_console(cons, data, len);
> -		
> +
> +		if (sent < 0)
> +			return sent;
> +
>  		data += sent;
>  		len -= sent;
>  
> @@ -138,7 +146,10 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
>  	cons = intf->in_cons;
>  	prod = intf->in_prod;
>  	mb();			/* get pointers before reading ring */
> -	BUG_ON((prod - cons) > sizeof(intf->in));
> +
> +	if (WARN_ONCE((prod - cons) > sizeof(intf->in),
> +		      "Illegal ring page indices"))
> +		return -EINVAL;

Same here, you still just panicked a machine :(

thanks,

greg k-h


From xen-devel-bounces@lists.xenproject.org Thu May 13 10:35:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:35:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126881.238422 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8gd-0005dB-Ly; Thu, 13 May 2021 10:35:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126881.238422; Thu, 13 May 2021 10:35:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8gd-0005d4-J3; Thu, 13 May 2021 10:35:23 +0000
Received: by outflank-mailman (input) for mailman id 126881;
 Thu, 13 May 2021 10:35:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=KipV=KI=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lh8gc-0005cy-1e
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:35:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5330c63-27de-4c1e-9c91-8c9303bb0692;
 Thu, 13 May 2021 10:35:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 30ECCB177;
 Thu, 13 May 2021 10:35:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5330c63-27de-4c1e-9c91-8c9303bb0692
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620902120; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=l+1H/9jM69aEPW1WCPM7kc1bXwCVtwgd7W9dCCgN4U4=;
	b=SG/F+9szzAizgSdu4WxkNLmSs6ockCS8mv2MQB+XEtNJOFMYKlgJDeep7SJBvh9x4iNDZB
	e1cFuHuG0T2Vh3led5v+BxXmD8oc7FLAD283eyJPcN6NpNlz0ZA5Q5YYziOaY4aBlzLMPO
	1T8X9xFV9GlN35X6EA7FikB2iUKV6Ow=
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 linux-kernel@vger.kernel.org, Jiri Slaby <jirislaby@kernel.org>
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-9-jgross@suse.com> <YJz+qK8snI64/TKh@kroah.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 8/8] xen/hvc: replace BUG_ON() with negative return value
Message-ID: <cb1c403e-8919-024e-4a3d-1d17d36c85a4@suse.com>
Date: Thu, 13 May 2021 12:35:19 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <YJz+qK8snI64/TKh@kroah.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="p8KsZYxN25YcQ1oOBHYXw2yKXKcq6rvnq"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--p8KsZYxN25YcQ1oOBHYXw2yKXKcq6rvnq
Content-Type: multipart/mixed; boundary="FZBttakujcGFgjGf6InMpgFXVJMrsK0RM";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: xen-devel@lists.xenproject.org, linuxppc-dev@lists.ozlabs.org,
 linux-kernel@vger.kernel.org, Jiri Slaby <jirislaby@kernel.org>
Message-ID: <cb1c403e-8919-024e-4a3d-1d17d36c85a4@suse.com>
Subject: Re: [PATCH 8/8] xen/hvc: replace BUG_ON() with negative return value
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-9-jgross@suse.com> <YJz+qK8snI64/TKh@kroah.com>
In-Reply-To: <YJz+qK8snI64/TKh@kroah.com>

--FZBttakujcGFgjGf6InMpgFXVJMrsK0RM
Content-Type: multipart/mixed;
 boundary="------------FC559BA52F5CD7817E6BBA02"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------FC559BA52F5CD7817E6BBA02
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 13.05.21 12:25, Greg Kroah-Hartman wrote:
> On Thu, May 13, 2021 at 12:03:02PM +0200, Juergen Gross wrote:
>> Xen frontends shouldn't BUG() in case of illegal data received from
>> their backends. So replace the BUG_ON()s when reading illegal data from
>> the ring page with negative return values.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   drivers/tty/hvc/hvc_xen.c | 15 +++++++++++++--
>>   1 file changed, 13 insertions(+), 2 deletions(-)
>>
>> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
>> index 92c9a476defc..30d7ffb1e04c 100644
>> --- a/drivers/tty/hvc/hvc_xen.c
>> +++ b/drivers/tty/hvc/hvc_xen.c
>> @@ -86,6 +86,11 @@ static int __write_console(struct xencons_info *xencons,
>>   	cons = intf->out_cons;
>>   	prod = intf->out_prod;
>>   	mb();			/* update queue values before going on */
>> +
>> +	if (WARN_ONCE((prod - cons) > sizeof(intf->out),
>> +		      "Illegal ring page indices"))
>> +		return -EINVAL;
>
> How nice, you just rebooted on panic-on-warn systems :(
>
>> +
>>   	BUG_ON((prod - cons) > sizeof(intf->out));
>
> Why keep this line?

Failed to delete it, sorry.

>
> Please just fix this up properly, if userspace can trigger this, then
> both the WARN_ON() and BUG_ON() are not correct and need to be correctly
> handled.

It can be triggered by the console backend, but I agree a WARN isn't the
way to go here.


Juergen

--------------FC559BA52F5CD7817E6BBA02
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"


--------------FC559BA52F5CD7817E6BBA02--

--FZBttakujcGFgjGf6InMpgFXVJMrsK0RM--

--p8KsZYxN25YcQ1oOBHYXw2yKXKcq6rvnq
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"


--p8KsZYxN25YcQ1oOBHYXw2yKXKcq6rvnq--


From xen-devel-bounces@lists.xenproject.org Thu May 13 10:50:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 10:50:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126886.238435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8v9-0007pX-Vq; Thu, 13 May 2021 10:50:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126886.238435; Thu, 13 May 2021 10:50:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lh8v9-0007pQ-Sa; Thu, 13 May 2021 10:50:23 +0000
Received: by outflank-mailman (input) for mailman id 126886;
 Thu, 13 May 2021 10:50:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MQpK=KI=csgroup.eu=christophe.leroy@srs-us1.protection.inumbo.net>)
 id 1lh8v7-0007pK-Q3
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 10:50:21 +0000
Received: from pegase2.c-s.fr (unknown [93.17.235.10])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e0d02e4-5ff4-4e74-8db9-53e5717950ee;
 Thu, 13 May 2021 10:50:19 +0000 (UTC)
Received: from localhost (mailhub3.si.c-s.fr [172.26.127.67])
 by localhost (Postfix) with ESMTP id 4Fgncz44FZz9sch;
 Thu, 13 May 2021 12:16:35 +0200 (CEST)
Received: from pegase2.c-s.fr ([172.26.127.65])
 by localhost (pegase2.c-s.fr [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id elzAeS9dJuuJ; Thu, 13 May 2021 12:16:35 +0200 (CEST)
Received: from messagerie.si.c-s.fr (messagerie.si.c-s.fr [192.168.25.192])
 by pegase2.c-s.fr (Postfix) with ESMTP id 4Fgncz368bz9scg;
 Thu, 13 May 2021 12:16:35 +0200 (CEST)
Received: from localhost (localhost [127.0.0.1])
 by messagerie.si.c-s.fr (Postfix) with ESMTP id 313FE8B7F3;
 Thu, 13 May 2021 12:16:35 +0200 (CEST)
Received: from messagerie.si.c-s.fr ([127.0.0.1])
 by localhost (messagerie.si.c-s.fr [127.0.0.1]) (amavisd-new, port 10023)
 with ESMTP id R8jvyF-796Mk; Thu, 13 May 2021 12:16:35 +0200 (CEST)
Received: from [192.168.4.90] (unknown [192.168.4.90])
 by messagerie.si.c-s.fr (Postfix) with ESMTP id B59358B76C;
 Thu, 13 May 2021 12:16:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e0d02e4-5ff4-4e74-8db9-53e5717950ee
X-Virus-Scanned: amavisd-new at c-s.fr
X-Virus-Scanned: amavisd-new at c-s.fr
Subject: Re: [PATCH 8/8] xen/hvc: replace BUG_ON() with negative return value
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org,
 linuxppc-dev@lists.ozlabs.org, linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
 Jiri Slaby <jirislaby@kernel.org>
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-9-jgross@suse.com>
From: Christophe Leroy <christophe.leroy@csgroup.eu>
Message-ID: <6da4cc91-ccde-fce8-707c-e7544783c2fa@csgroup.eu>
Date: Thu, 13 May 2021 12:16:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 6.1; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210513100302.22027-9-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: fr
Content-Transfer-Encoding: 8bit



On 13/05/2021 at 12:03, Juergen Gross wrote:
> Xen frontends shouldn't BUG() in case of illegal data received from
> their backends. So replace the BUG_ON()s when reading illegal data from
> the ring page with negative return values.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   drivers/tty/hvc/hvc_xen.c | 15 +++++++++++++--
>   1 file changed, 13 insertions(+), 2 deletions(-)
> 
> diff --git a/drivers/tty/hvc/hvc_xen.c b/drivers/tty/hvc/hvc_xen.c
> index 92c9a476defc..30d7ffb1e04c 100644
> --- a/drivers/tty/hvc/hvc_xen.c
> +++ b/drivers/tty/hvc/hvc_xen.c
> @@ -86,6 +86,11 @@ static int __write_console(struct xencons_info *xencons,
>   	cons = intf->out_cons;
>   	prod = intf->out_prod;
>   	mb();			/* update queue values before going on */
> +
> +	if (WARN_ONCE((prod - cons) > sizeof(intf->out),
> +		      "Illegal ring page indices"))
> +		return -EINVAL;
> +
>   	BUG_ON((prod - cons) > sizeof(intf->out));

Why keep the BUG_ON() ?


>   
>   	while ((sent < len) && ((prod - cons) < sizeof(intf->out)))
> @@ -114,7 +119,10 @@ static int domU_write_console(uint32_t vtermno, const char *data, int len)
>   	 */
>   	while (len) {
>   		int sent = __write_console(cons, data, len);
> -		
> +
> +		if (sent < 0)
> +			return sent;
> +
>   		data += sent;
>   		len -= sent;
>   
> @@ -138,7 +146,10 @@ static int domU_read_console(uint32_t vtermno, char *buf, int len)
>   	cons = intf->in_cons;
>   	prod = intf->in_prod;
>   	mb();			/* get pointers before reading ring */
> -	BUG_ON((prod - cons) > sizeof(intf->in));
> +
> +	if (WARN_ONCE((prod - cons) > sizeof(intf->in),
> +		      "Illegal ring page indices"))
> +		return -EINVAL;
>   
>   	while (cons != prod && recv < len)
>   		buf[recv++] = intf->in[MASK_XENCONS_IDX(cons++, intf->in)];
> 


From xen-devel-bounces@lists.xenproject.org Thu May 13 12:11:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 12:11:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126904.238459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhABY-0007EG-Bt; Thu, 13 May 2021 12:11:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126904.238459; Thu, 13 May 2021 12:11:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhABY-0007E9-8j; Thu, 13 May 2021 12:11:24 +0000
Received: by outflank-mailman (input) for mailman id 126904;
 Thu, 13 May 2021 12:11:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U61U=KI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lhABW-0007E3-D2
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 12:11:22 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e0a965e2-7846-4cc1-a527-d7475b3a58d8;
 Thu, 13 May 2021 12:11:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0a965e2-7846-4cc1-a527-d7475b3a58d8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620907881;
  h=to:references:from:subject:message-id:date:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=w+AN/9riY62R/emu2iqBOPQg2ZLQNl9ooM5syoQh4+Q=;
  b=MJTYkV4jlDJ97nkrJBb8stFn3vhmP4Cy6DqAw9tuAj8hC4dDJLszUIFt
   1ESynjf+YDNOP+Zwz9rCk0m+Shyy69xu91cmAa0HkmZOke1F3zE3H747G
   /4Am+8zRyg41Bt2bH4gqwSzlNQfLp98cne0lfTa6sDynNmg3Qr9T6zWJh
   g=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 5XB6+Qj8Wo2ggOX71Lds5FX4VXWo8DxK4hblM+lHxTxBCDkZpvpLbFo3YfnQQlgjvYUTL4O2/e
 /AegzDZMuNHmkocly/Hkeizz2eMF6JslNpAw4l6S2T/EOD1wAjv9IPoDOJbyVxUDqjkUvwNHtO
 JbsV4MrM5VQoo03Rtiu/ey5MIuMsn3U3u3cAEZ20VKoh1fu4CQqPjuWzqC6ZTDUL0Hkn8VQDZq
 ucpkJMFf8g5dUT1BS0udYh4Jzy5IMffueGxUx6l7NEVa9GrNVKuXjaehKPqlyuyBVuyGJaJ2lU
 xNY=
X-SBRS: 5.1
X-MesageID: 43508827
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:Q6nzdajmPMBC78ZCoZoyR55MFnBQX9p13DAbv31ZSRFFG/FwyP
 rBoB1L73DJYWgqNE3I+erhBEDyewKiyXcT2/hsAV7CZniahILMFuBfBOTZskXd8kHFh4lgPO
 JbAtJD4b7LfCtHZKTBkXCF+r8bqbHtms3Y5pa9vgJQpENRGsVdBm9Ce3am+yZNNW977PQCZf
 +hD4Z81kGdkSN9VLXLOpBJZZmOmzWl/6iWLiIuNloC0k2jnDmo4Ln1H1yzxREFSQ5Cxr8k7C
 zsjxH5zr/LiYD69jbsk0voq7hGktrozdVOQOaWjNIOFznqggG0IKx8Rry5uiwvqu3H0idqrD
 D1mWZjAy1P0QKVQonsyiGdnzUIkQxepUMK8GXowkcK+qfCNXUH46Mrv/MqTvPbg3BQ9+2Unp
 g7mV5xjKAnei8oqh6Nr+QgZysa4nZcnkBS59L7r0YvG7f2O4Uh4LD2wituYd499XXBmf4a+a
 9VfZjh2Mo=
X-IronPort-AV: E=Sophos;i="5.82,296,1613451600"; 
   d="scan'208";a="43508827"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=PKK7D4D+1jS88pghRjHc0ktEsiKeqH6oFFKt3KxGZC9tsIEyARA/zoC+/euPZSDfNIjjlmxx8LhfVGkNu7wMSWLMdIoqt1QHxhwAYD3yJ84ZviwWZZk2lVSu4s88dzw6n8A3ej2AnvnYxLTw7pheEakLXytQfz6idusrLq2nAb6FI0P/V/vjOyNO4gm99YBEej8q9NnSVAPDddWj2IcoLq0AfZlQTM57qbKUO5qCL+tGs7uy9g3hitTKpIUPNNbQGq0ceEEjP9BMAuMzfSU3+gm9iuXjVrmS0nVl7qNVa27a7L5a7gaq7X/cyY8rtxAo6YO4fX4OeeTF87qrGITBVg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bn4/+ic4aBLmKvcX13nj1p5Y+R1+ooz3HzIYW3wGZEs=;
 b=E+K5I9w/b7fZmO7GdAc2EtX5I15/niyr6zp86Fly/iWS8kA07aO1DLm2thP06kRIqwglzAR8tm9JhnRZRfaehYifUR73nacinf5qfDmE2ARmVMM2oLLTAHvCOy9goemrNQ54QbeFO+6kKpvxK2W5ZrYhe7VB/0ZfIUDb5B/oU0XVBl/mqxvYOqDyfBIwZ/JMKV2CZRfjjdVo72mwDG841ilIP482jDWzb0xJeI5mSxJMJX1jwFNTtUG3ffWY1l5C6Y0/nBOhV3x41SrEbVvsQvm/HhKawrEyY+7tIerCnzJOL8NpPxEM4wOKef6xGNGaz332Y+hNQUurUaxKCVPVFA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=bn4/+ic4aBLmKvcX13nj1p5Y+R1+ooz3HzIYW3wGZEs=;
 b=QYIh9oe4bP0SnPF5SwUbxpnYLnGJ4bgk0ANQLCD9dPU13CEmLVoY5QArS6FmQ9FN3XrOo81lUy49HhkO/r3aJSCnZnu74UVdyCltNvT6hI7j260Q3e11oRw6QM/0JEQHrMrZPlK0nBqL3KfMMDQBsaXG8lShKFWdfTL60T4c4P4=
To: Olaf Hering <olaf@aepfle.de>, <xen-devel@lists.xenproject.org>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
Date: Thu, 13 May 2021 13:11:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210513122457.4182eb7f.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0408.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:189::17) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 02d688b1-7a3e-4976-f2fe-08d916083537
X-MS-TrafficTypeDiagnostic: BYAPR03MB4806:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4806CB7EC49B4A3777664057BA519@BYAPR03MB4806.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: px9Y1W/4wfPVlJBfo02Dwvh/EqTtLfcFly1bczYBzdHWkp4EpqMHEOYDTGLOJxxLDst2Wu0WIm1rxbVafVmoJNfTjzl425s7OtNHEx80lJwdtiPhuvJEGCXchSMrmWWtixj89FqfCNtfgTvSLr+I14jRJ1hmBV9NGrOe/33l1eAZMPADWhGArWjfWfQ96keQFKM7Edwf//AsjscOp07LzKwsuK5cBSXv4Hrgjpx/D7NEIuL4QLBj0uBMBp2BwWOeGPkTbNTKmmE/BFVsSthYmkeuuwN3NpAAYeLJ++emcQy3PtvsGraJUyuNausOKG8AWfeD555bl8yT1w2pkAHdixBZxIs+Qdra5+heKgg6koC81pI+JCpbf1oB4+Vo8wd0saBULuBGLWDWGmGw2BkmwiFbBzQDVS8gXai+xJpCuoZpi4U7yqR4wnwYdUjyW25OeyU1R7P1hY7GsPNBB4fBqG97vj9xJws2CeycOSpQ2ZIkYc6lpRpH1M92Ht1diWLWbElYzdw+N8ohlZS8RLb6dustHug7ZcDNpineYHXyZfBIbGPyeQVsQR6xXlz7WVphkxtmHcm6kgCyTlfCA76jwEDHawUnrI1d9/RLzxS7yP6oxzNdKosccne8pJdNtKUElR901aZZvDBBW3bvbdgurwZgGmNpQtGKuYbg65C5sHAQN+u8sBfVTJcmIinPbvd0
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(376002)(39850400004)(396003)(346002)(136003)(83380400001)(186003)(6666004)(31686004)(956004)(53546011)(5660300002)(316002)(38100700002)(8936002)(16526019)(26005)(36756003)(31696002)(6486002)(66476007)(66946007)(86362001)(66556008)(16576012)(8676002)(478600001)(2616005)(2906002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?Windows-1252?Q?LsZzbDvmMiR1PjsnSQxdTdH5kxZZY7SNqQdFwYbAY3J31WVEkn/07FSz?=
 =?Windows-1252?Q?gZ4QphY18GpAonqtTCLF1Gx3mTMWN54Uzy4EmdxZ/955QBgp/eyOFJmt?=
 =?Windows-1252?Q?Vs8tFCOP3+zzkxuxZ0ff8NGt8uyjjiir5SDJIP/e8oA+lSRcvsd0qcUP?=
 =?Windows-1252?Q?xaq/b6g+WtXTflaOpaVchYjSSjh2FDECGanOFaez7+WHGtK3jJ5OfSIh?=
 =?Windows-1252?Q?2rZ9nEWHxJi0by8OUbhyH40uFACA234FbXXhEFYHEgG0ChIFIIJL2j2r?=
 =?Windows-1252?Q?WeUYcNmIyy+H9x548jyE09U4QaKVC5QV3EMxcIdehiLNR3VmIyA/pBs3?=
 =?Windows-1252?Q?GF2ZFQJPrZXYBIc2/19SHvmyFZwiQYJiYFe1T0op6Mxo6YMZVrxum6HD?=
 =?Windows-1252?Q?M98vmFv+Rt6lg6lCISLOQLuK3VXn3pg752zZ5wuAnB42/uLo1SDcHZCr?=
 =?Windows-1252?Q?vf4Jdk6m35Nn4paT+6qz/PXggFMB5dngDT3KslCqEQdS/Ogbs7J4MWYQ?=
 =?Windows-1252?Q?Qj1rpmfTM8TL1VsPsci86Vp/Abo+apPZ1ubxyyFgvzkf2UUzTi6BxkLz?=
 =?Windows-1252?Q?/hmO9BTGr5T5UvpD7/06zDxrYpqYfzbigooY4zpT3bPm475291pz5tdA?=
 =?Windows-1252?Q?9haUrclz93ceITbj42OHPDg3Q+5TLzCeAv6qDUhwnCgg0UcwFHu4yi61?=
 =?Windows-1252?Q?Pt+TmmekcaF+GELQOi7NTzXA2Nqipod44SxM4AOReu3sakjSJODRHjxA?=
 =?Windows-1252?Q?Bc76oc6tKWZo31KOUNy+Bghcv2tD+5WX/s8J5PLjOHPG6Irds22R2Mpw?=
 =?Windows-1252?Q?yu4N+ZIWOSgD058C6Q0+rg2hbbS0ZnaIEJk0gd1zKn2JOJicA5z6i1ks?=
 =?Windows-1252?Q?UHB3dDS0EkCS5j7AZxJXZBHVrjDV3quiogOaYavNGBoT8wZMhyrnnGFD?=
 =?Windows-1252?Q?PH9g6ZuubXn6sB5tjcIRqziA+l3CUdIYGgRDhYlncdUZuaS94HqgmJ1u?=
 =?Windows-1252?Q?6h8Y2EDyWpR3I16zsbVKp6Vu1B4Ecx4n/qJBRvt1lbOYxyU6VrVbKddh?=
 =?Windows-1252?Q?baS0Nv6LGo/3FO7rrztlFnKsjNAHHMuLYzfm3HIrkrji9hW/CUB83M1+?=
 =?Windows-1252?Q?osIC9K4ARrlYFuDXvCllFY9g02WdKpJD/RCxXDWvzxry/oDVi1tNFY3J?=
 =?Windows-1252?Q?nmR5zBBDsiMQ5hxbyUZs6XSs3DZXuQhDunVuVG5zoiMh5tSC+CPi+ai7?=
 =?Windows-1252?Q?ThydAcEi77rIu4T4aJ7Miu7dyyZWB11FJ5wt05tMYwjFaMBxc2W5ZLnQ?=
 =?Windows-1252?Q?kz7qZ5N8PkmtpPlRObjp4Imb0RYpAu+36bxqcMVsIDjwM62E0Xz1usiX?=
 =?Windows-1252?Q?rz1b5FTwNhEukZWhpdUTptdRrwT2xHMce1fTCF+ZsSJBJIo2NEvj1MVp?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 02d688b1-7a3e-4976-f2fe-08d916083537
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 May 2021 12:11:16.0962
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: EHNGXi6BTpgOq9jj6gqCTDLvV9zPZoKBRBSc4vegU0fGV7eUcg3KjuAFFwPb9pMyWvhLcB/krUBPoveJ8LID305ICkjWcnZ0rWdeH8s/x9k=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4806
X-OriginatorOrg: citrix.com

On 13/05/2021 11:24, Olaf Hering wrote:
> Recent pvops dom0 kernels fail to boot on this particular ProLiant BL465c G5 box.
> It happens to work with every Xen and a 4.4 based sle12sp3 kernel, but fails with every Xen and a 4.12 based sle12sp4 (and every newer) kernel.
>
> Any idea what is going on?
>
> ....
> (XEN) Freed 256kB init memory.
> (XEN) mm.c:1758:d0 Bad L1 flags 800000
> (XEN) traps.c:458:d0 Unhandled invalid opcode fault/trap [#6] on VCPU 0 [ec=0000]
> (XEN) domain_crash_sync called from entry.S: fault at ffff82d08022a2a0 create_bounce_frame+0x133/0x143
> (XEN) Domain 0 (vcpu#0) crashed on cpu#0:
> (XEN) ----[ Xen-4.4.20170405T152638.6bf0560e12-9.xen44  x86_64  debug=y  Not tainted ]----
> ....
>
> ....
> (XEN) Freed 656kB init memory
> (XEN) mm.c:2165:d0v0 Bad L1 flags 800000
> (XEN) d0v0 Unhandled invalid opcode fault/trap [#6, ec=ffffffff]
> (XEN) domain_crash_sync called from entry.S: fault at ffff82d04031a016 x86_64/entry.S#create_bounce_frame+0x15d/0x177
> (XEN) Domain 0 (vcpu#0) crashed on cpu#5:
> (XEN) ----[ Xen-4.15.20210504T145803.280d472f4f-6.xen415  x86_64  debug=y  Not tainted ]----
> ....
>
> I can probably cycle through all kernels between 4.4 and 4.12 to see where it broke.

"Unhandled invalid opcode fault/trap" is "Xen tried to raise #UD with
the guest, and it hasn't set up a handler yet".  The Bad L1 flags
message earlier means there was an attempted edit to a pagetable which
was rejected by Xen.

These two things aren't obviously related by a single action in Xen, so
I expect the pagetable modification failed, and the guest fell into a
bad error path.


If I'm counting bits correctly, that is Xen rejecting the use of the NX
bit, which is suspicious.  Do you have the full Xen boot log on this
box?  I wonder if we've some problem clobbering the XD-disable bit.

~Andrew



From xen-devel-bounces@lists.xenproject.org Thu May 13 12:22:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 12:22:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126911.238470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhAMT-0000Fd-Fp; Thu, 13 May 2021 12:22:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126911.238470; Thu, 13 May 2021 12:22:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhAMT-0000FW-Cp; Thu, 13 May 2021 12:22:41 +0000
Received: by outflank-mailman (input) for mailman id 126911;
 Thu, 13 May 2021 12:22:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yjCE=KI=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lhAMR-0000FQ-7G
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 12:22:39 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e21b312b-a8d1-4d8d-a59f-d531191e5421;
 Thu, 13 May 2021 12:22:37 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.8 AUTH)
 with ESMTPSA id N048d9x4DCMa45M
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 13 May 2021 14:22:36 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e21b312b-a8d1-4d8d-a59f-d531191e5421
ARC-Seal: i=1; a=rsa-sha256; t=1620908556; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=f+l7xPCu37TJeKYeNLWct8ynD/hapIVBZbfI5d3gKiP4Iu/7qMJhBBYz0fML0Qn23h
    CKkcu/C8/fvliPsEEPzS+FxeSjHsfQvfiiyA092ZndYvHu/a8RL4DHa5MDx9PBiEoF2W
    1bibq3SsopLdtQz/QJhWdRlIOBNUPWVtvkO496wvPUFVENGAMaGt3SCszv7hqAGBHubV
    FmLtsVKH3nLHWurzIcwyAlV1iRgBiu0Ozuzx0YMdX9HZ6NOdtFlvIe+3U59C4fQlnSlj
    hH0gZIJaJYwIcN0h15cHiVgiRoPkkhce/9zyafg+FK86Qwn6GRsD3VzbMfJn0wvc7e81
    3GgA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620908556;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=9qNuiqEYNLcfEWn6xseeoqOS2no7vS3ZOqeEnUlAs2c=;
    b=VxUVBxb4w9Zdp5TXreYbXMeiwzl0cfQzc1GDJxApRV/+U0vVmcQ+pWqrOtfVnKt+j1
    qEibf9gaa+f6xjON5SYdpFUmJHYTwbpP5RBxKWOcpA/MwI6kAA84N802c7eyIcxhhC96
    Pu880W1oTMxHBa+x709cBywmWcBxsV5oxym8xYdP4pnywnvAB3rOhuDCLzRIOTeiC3ZA
    7K6a84gR4IF6KYHYy84Q62Xq4K4YIAgVjfy5H55Fl1i1FOTmJqcjljrKCdrtSgQGYhJu
    2gQDIEltUoVApg+2ERFznwR6jXZfWMhtVlyDiWtVM8AY/PR+M/C7eV4848sfTsm0JkcG
    7kXA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620908556;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=9qNuiqEYNLcfEWn6xseeoqOS2no7vS3ZOqeEnUlAs2c=;
    b=eGCniW/nbt3hQsGMgZ5jwNPYrgS5XvPGsj4lOYJKX6/KEtnH026jq36Qqcjqjal9H3
    mk+i/q9mhMRGt0olAZysNYgk4vZ3Ntswp1//4Wr38gM+b7tmUDwrT0s077SeD0bO+ohJ
    3RcxBM4szJMfIUS/sHwzBMW9K4vIdh+6ZXZ/D8CCtAYjNRdIOev10Ei/TZYbf3W+ZUWO
    xNs8AxRbh9d5oysz7Ljl02Czpmr/yWW86xuLpkciWnp2LtzknRgt3Gf7cjlS4ck6vwxn
    20Byx9NQ6jju5h4kGaP7RjjBYJ18UrB8Vw7TMq5DomjbQD2TgVv7vydXitpoN1PqBDwK
    cAzw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Thu, 13 May 2021 14:22:29 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <20210513142229.2d2aa0b4.olaf@aepfle.de>
In-Reply-To: <378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
	<378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/Aa2AgIoIJ.P1nye/=J4is5T";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/Aa2AgIoIJ.P1nye/=J4is5T
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

Am Thu, 13 May 2021 13:11:10 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> If I'm counting bits correctly, that is Xen rejecting the use of the NX
> bit, which is suspicious.  Do you have the full Xen boot log on this
> box?  I wonder if we've some problem clobbering the XD-disable bit.


Yes, it was attached.
Is there any other Xen cmdline knob to enable more debug?

Olaf

--Sig_/Aa2AgIoIJ.P1nye/=J4is5T
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCdGgUACgkQ86SN7mm1
DoAJYA//datBsFjpGt66gi/I/bsoC0m7vkSCnsn29+QCuNsF0c540tX0oQCKziqN
orzpBi5AFQI7hw3HhW0vWCzvqpUhN7yZUgZ9uSjJBy6Lw/iiYY5kZrmwDHCiM6HF
GtC4RrYaekKIz1V7OTzJJNUcdUHhqIhgHd3ZnLKYM9pFeK8jx4zd58srtxFv4o+v
45tpEDxLFt82uAYxgFrElNRWrX3uL90Wv0LIJnjBn71dL21P9ZnoWK5MeXe2p778
Y29zBRQUBtEcu7VfI29zjPdUh1F4CYCg8hCurCSBvRd4cPvT6tg4RZ/M6Nr4tRqv
txFYGut6Zv5gVb2mSbJNS1pT0XgJ8VnfGWjGx4rReC+i7S/jrQkMqAUZDc4cZam3
RjIZqcgZRikUn8+1aJKOjrW88aExl3Td0GWP8rCZvTL4xdneRUfXZvEeCADIwuEW
M+SaLwHbaeGqOSo+8Hr7qxxBbZhwAof1A5pUzbEjF8n9FuF3dFSp/G1Ctoie88my
6bnZo0l0r8yXaJZtz3hpyU7NA3lEgDuT6e55NL/uY9f0gS0zjonZfvXsYHpqBwf5
+kDPZCW9TWuIvqMMmY4715js5IjuZgNhxAxVBWCy0on9JbHUFlDjpa6d6xESQuFG
8pp0E8/7iDjluWP/gCOkORiWgVKmZI/VMRWfEC/KSHPezCR6wws=
=oCZP
-----END PGP SIGNATURE-----

--Sig_/Aa2AgIoIJ.P1nye/=J4is5T--


From xen-devel-bounces@lists.xenproject.org Thu May 13 12:29:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 12:29:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126921.238501 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhATI-0001H4-JG; Thu, 13 May 2021 12:29:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126921.238501; Thu, 13 May 2021 12:29:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhATI-0001Gx-FX; Thu, 13 May 2021 12:29:44 +0000
Received: by outflank-mailman (input) for mailman id 126921;
 Thu, 13 May 2021 12:29:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yjCE=KI=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lhATG-00010V-Le
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 12:29:43 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22519350-4ffb-4e1d-bf8a-d3ba812bffe0;
 Thu, 13 May 2021 12:29:41 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.8 AUTH)
 with ESMTPSA id N048d9x4DCTe45t
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 13 May 2021 14:29:40 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22519350-4ffb-4e1d-bf8a-d3ba812bffe0
ARC-Seal: i=1; a=rsa-sha256; t=1620908980; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=bsVgK/K9AA+0p8mCjxaMXK+44UI15wLJqakXB46s33eAblVaz/sgNS8B582w9/rg1j
    Nt7zd2tLmCGSKT0/PEz2ow0gnYHuKC0oChb/vPiGa6W7n87+zn4fQslNbR7+yfG5GCe0
    GTYwBSt2EHA534DXMkD/sj8kZDm8CoR9To47sa7rmU/7JzzU8xIx7LdZGPpP+ngwUQbC
    8rmo+G5zDRwMBWn/QB5qfzSBwzqOLxS1XAhVztGO40B8J7nRWHV1nfmNGYvCoS/tCIwS
    vPOUiIPY8rfECiPIfDeo61J8FBeIf7fCGlXrYp1XUAQ7EwnMNAXaFZGZH7ILnHnyxIoZ
    y0tQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620908980;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=E+lnzI9vLJIrlIXAvC/adxVUcFTOqme0cw6DHBKZA6w=;
    b=q/F9IM+2/43x79eAY8UVC3DVdkrNj4lEAdqeO9dXU6M0z8+9QIwCMU437NUpERLUk0
    +auce2u5sGQTaYSNuqzSKffdD3c/6TXYoqKL+LPiC2fr0+A2HD2L2/wVHaquEFaxNy3H
    ndjyRFu8mYsCXD6KV6cEy0gtuHI76MRqIyDywnSEDKm4bu/UnNlvj7rKlVVlNnNaQuff
    aKwmNA2BNdXBeCa1WprL1QrjHOSxib4rZr+l1HWgNoeF0tEFtvCXD2dnoFCLkKIaPqtE
    k0IUsiB365hEd+o97MUkFn97SanndY7QRCm+u+AXfaz0nauWVba2yGFakeYlUrEHLqa8
    kspg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620908980;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=E+lnzI9vLJIrlIXAvC/adxVUcFTOqme0cw6DHBKZA6w=;
    b=C1KUIFM56wrKwfBzPggxsWCFHKgcdiN33cTa1TKDDC3ybTPTmehXfSvf7P59I+wEy7
    cKg69ozHTyk/K/jsAqHb/VTkWu/w67Mr3KElEtiO1A2l/oIfcHEtG9DM3NzO2ax52uJw
    Zl8dzsb9vMIe2uSD/MIVvSoKsKS2W2SZw+ilXgrGLDJOMt2UpKoBC8ZDzHrDGy0+h8g4
    xovbz7yPg7ELppKL4YKUjpUxydIZWvWsoHaPBfRW5Rfz2HqMrOiQE874Pd657DaIv+VS
    NPRFWoZJ/5sr0K/dbxo5QNk6dfLfQTZbFyLT5uZBorcjTeuMyE7mF+761jfdWLEuOZIC
    oYOA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Thu, 13 May 2021 14:29:38 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <20210513142938.72320118.olaf@aepfle.de>
In-Reply-To: <378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
	<378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/SeYi41kNcYzA0URP9hVw2Qw";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/SeYi41kNcYzA0URP9hVw2Qw
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 13 May 2021 13:11:10 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> If I'm counting bits correctly, that is Xen rejecting the use of the NX
> bit, which is suspicious.

I tried 'dom0=pvh,debug':

...
(XEN) mcheck_poll: Machine check polling timer started.
(XEN) Running stub recovery selftests...
(XEN) Fixup #UD[0000]: ffff82d07fffe040 [ffff82d07fffe040] -> ffff82d040394a17
(XEN) Fixup #GP[0000]: ffff82d07fffe041 [ffff82d07fffe041] -> ffff82d040394a17
(XEN) Fixup #SS[0000]: ffff82d07fffe040 [ffff82d07fffe040] -> ffff82d040394a17
(XEN) Fixup #BP[0000]: ffff82d07fffe041 [ffff82d07fffe041] -> ffff82d040394a17
(XEN) HPET: 0 timers usable for broadcast (3 total)
(XEN) Warning: NX (Execute Disable) protection not active
(XEN) Dom0 has maximum 864 PIRQs
(XEN)
(XEN) ****************************************
(XEN) Panic on CPU 0:
(XEN) Presently, iommu must be enabled for PVH hardware domain
(XEN) ****************************************
...

The other logs have:
(XEN) Warning: NX (Execute Disable) protection not active

Olaf

--Sig_/SeYi41kNcYzA0URP9hVw2Qw
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCdG7IACgkQ86SN7mm1
DoCWWBAAo4ovEFagVQ97/JHEDV+y1FHMrMtDVqptnTaIbb0JpWCen2nt8JfKhZ2G
9rH1bdjxNZSClMzEzkUR67nhcBotKNROumkadqZXlKv8/+BkH/YTZoI3aTyI0uKf
s525Pbi7swif6BvNt/O7ToPhaerP6KMA+s/Ii4MFaO67kFE3eA+wjmnLw8DS72gz
gt3cERjzeBqBlH7RwBxKoZGelIXhk1HpBsxW1Cf3ceozR6CEyQUAgGXMMA9JFJm3
oI7t+vCcoeoKBz8KNU/VH6rWbvMJbsYhqDHaPjg1SzinQOXoPzAeHiGMbmm87kRW
4O+13lwJzpUTW8ExXhb9x5EJfdzrGluJrYid1llMD7+2eyTKXjUbYBkR7yEjTVBN
CT+C6rmRb/+GvRGXrsnI5CmykwmGkCzikxMNwKNHSyvsPmJQXdM4ujorYPjLTtlg
471S8dV1o1J/3C6tNrca/A67mq8vt6X3tHN1k8Xazt1OXKGHXRspXButRCI94Z12
IT5UpLq/0Yg8okj/E2rYw5LUWGnf81PVnNAIzKu4AwZtr/qYpR5vE27qNz0m5xNU
C+BPTcfLLqE5xztZ41ropYbIeZNnDYl/9FhTk6MsNvXcQhegnQ2Dbw/h6vu6u0WQ
z/Pa3w8aOdsRki5ZsSJ2GlPHuG2OsM2X+XTp0TVaY1tBJuhY3nU=
=w4nh
-----END PGP SIGNATURE-----

--Sig_/SeYi41kNcYzA0URP9hVw2Qw--


From xen-devel-bounces@lists.xenproject.org Thu May 13 12:29:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 12:29:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126920.238489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhATH-00010i-9I; Thu, 13 May 2021 12:29:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126920.238489; Thu, 13 May 2021 12:29:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhATH-00010b-6C; Thu, 13 May 2021 12:29:43 +0000
Received: by outflank-mailman (input) for mailman id 126920;
 Thu, 13 May 2021 12:29:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U61U=KI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lhATG-00010U-Ma
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 12:29:42 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fbf6bbc8-4547-459d-b6b3-812cb1cb09c8;
 Thu, 13 May 2021 12:29:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fbf6bbc8-4547-459d-b6b3-812cb1cb09c8
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620908981;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=ohf7UPG1S3BPgYS665xHfBXuDMNUeU7vCuoEhzzTtkI=;
  b=YSfwg29bvtaH38U6KrmzRLZszaD/faFal/U9leGEGJlDueV6NfaiFWlc
   TcW5aP95A5PZXsA8Vw27aMFNhGZctQNbC+jwj4AO08pip8XVFvnLRxrp6
   WPKcgSRppz1+ApIYSA+fsuudssKmjHaQy8TE0GOHs09fUYGg83MqImPcB
   c=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: GpRorewRqtZcOLksbPnldzKs1Pqmxxw6RL9wXSCQ9ItNxYZQ69qLHHL8cSKhBZF0Gz1cjTwxC7
 oH/224HspcV0JUlefJEdH4eFI5psDR42B4eVKkEO8nLTO/RA+la/Gc7oyhGnAkyCyKjSKoSuC0
 pSnHb2+QJJCK73jt58WD2B6Dviuzf1wIK21hUw5XSXC9CoDHZst4N95OUfBFL2x2jb3ACZRxRp
 FYx7S1qLnNVwqpZNuRSVtSTl5gWnK0IG95c3PLJLKf66WHdnckm8mXtYxlHlF8GNQVcvAvW7tS
 YE4=
X-SBRS: 5.1
X-MesageID: 45256819
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:ZWQX76nNVAUtHk7yNlt3Z2bgKhjpDfMpimdD5ihNYBxZY6Wkfp
 +V88jzhCWZtN9OYhwdcLC7WZVpQRvnhPtICPoqTMiftW7dyReVxeBZnPbfKljbdREWmdQtrZ
 uIH5IOb+EYSGIK9/oSgzPIYOrIouP3iZxA7N22pxwGLXAIGtRdBkVCe2Km+yVNNXl77PECZf
 yhD6R81lidkDgsH7+G7i5vZZmzmzSHruOrXfZobCRXpzWmvHeN0vrXAhKY1hARX3dk2rE561
 XIlAT/++GKr+y74gW07R6S071m3P/ajvdTDs2FjcYYbh/2jByzWYhnU7qe+BgoveCU7kowmt
 WkmWZgAy1K0QKSQoiJm2qp5+G5uwxer0MKiGXoz0cLmPaJBw7TUKF69MVkmnKz0TtTgDl+uJ
 g7lF5x+aAnSy8opx6NkOQgYSsa3nZckUBS5dL7sEYvJ7f2SIUh57D3r3klXavpIkrBmcka+b
 5Vfb/hDbBtAAqnU0w=
X-IronPort-AV: E=Sophos;i="5.82,296,1613451600"; 
   d="scan'208";a="45256819"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Yj1xG/IaZc3WGZBEK/U0+sELdXf1JuhhtHyMN2kLvAgPt3DG0QWEHf5fR55om5ia9SKRAYeXOLHV2xgiuVIiJXuu3K4BWCQnBoNVfzZYv73U0MBGCTh8A+oY25gCdZ0LYx8VvC/fIyKFsI6VssymTYRpqmmF1/n0+K08xR6T7Yb2v3MWPr20/izUaZB1rxZqtQA9tM7kC2/+Glr2rd5NoNVh7r8LTLJGkIwTPqJKcs2fAYd/UStjBXWbt+rYtYzfnSoR0ot1I6lDzFO7aJ1aZAewFmhPX5BrAnGw6kr4LUZ289au9InMyDVWTvokYdD8kZ4Bl3NkDH3Qm1F0uvzkFg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ohf7UPG1S3BPgYS665xHfBXuDMNUeU7vCuoEhzzTtkI=;
 b=MwKigK3UxN77xZ9VW8Vkng0Nibrs2Fus9Hmkodjcc2TqKw8EZ/qmfgbRTdjkRx0e/Zaor28WeUTc59vH7Z+Ft4ZehNGOqxlXHftlguxFpd0syuI7txdWhY6BhnZ4v663k2g/IYijief6TNwnnhfpWO9XxusyrTwVWfjLkP0zmnqk5y0z0zZN6EUjIqUSKX0OiX1R2gkTV1SVGWmFS1GpnJ3AHQEKTKVjGz0h3z/r7R5UQO3HS/oPLuqCnM3v2PJUPbljG9sbSzWBYJdfFJUzEyZijNt6l4K1yDRZpjLaCXO3XLCqi73H8qtMQdpBuJNy04zYnXE9urPMtplwVS0nFg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ohf7UPG1S3BPgYS665xHfBXuDMNUeU7vCuoEhzzTtkI=;
 b=PfdvSxOwDaJmX8oFrOk2slQ8n0nh1sGDhjnxXr62vrODmMAoToDpMprhj78cFPIV2ISyNETWk22rLLNJVWNvi5BW9Nb5OmMqHjJq2cTZUEdbDd8dqcmAq0jWKBgKYlPnM8uIbHxjua5/ixo/LeXAMxiZ0wmtO1inQvsBu5LfpGI=
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
 <378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
 <20210513142229.2d2aa0b4.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <389f9d76-ed23-f8ee-6081-322699d7e816@citrix.com>
Date: Thu, 13 May 2021 13:29:32 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210513142229.2d2aa0b4.olaf@aepfle.de>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0137.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:193::16) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a359349e-efee-403d-8ea4-08d9160ac62f
X-MS-TrafficTypeDiagnostic: BYAPR03MB4485:
X-Microsoft-Antispam-PRVS: <BYAPR03MB4485160F4E52E24FF3CF9A85BA519@BYAPR03MB4485.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: JXyVMW88ttiyT1f7eDSrWMt1J1pRR3oA8Ym656p2zPaLZSfdCLFfAvf5wQVUf8JLgLuqDQarnZ1znq7Q7E43JF6L9DkgCrScaQXM9MfLd1ZkKcQwyEPuOW4tV8+zD2sI8Ug7RitVXcRTRR5MBLjhCjAqZP7Y7esrV0iYXG0EJ4JD5nYZrsGZzZzKhqexKy7jQoMbCdb0BhhCUQRaXc8YCU8jKVZiF1VzSENLZ9bU2/dVCwpmDmLNfCfKrwhHf5eqGroG+FL3UVMf/TDGF6IFTh+omrxrbObbw5B/+ZUFWY6gX+Mo6hc5xJT0NhebN7TJSoHACR362LoQCCLEOhk2tJaPz2KCoW/DWBkfqSdpUKf2xJ78maRZ7gHsSxLUhwtv7iXjA6o+g96HWF83SUB8eV9dI/YJ2J/HZvrC/ozDN8YmH257NP4NsG5f/XyJIiF/5BINU1HkivdeolPru7w0yyqf68bpBFJSbQ2fLjLfYMVTOpH5FmmG+3Z/yHhXjeqOuJUuVJLcyNoC3w6bqjZJVLDaW2/o979y8STzNNzj8xds48YnxcvieVXvO8An5/ej0hCBt/ycK4GFPgK94L9A5kGZ0om0GpwzreeTU5GAlDdlo5vQLT17d6eP+BR4Nu4tnWruzyc5M/OUp8biNbmEWuvEVQz8kbAddTizeE/JL8oRLVa6CIb39xUkbFC7GeSs
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(396003)(39850400004)(366004)(6486002)(31696002)(38100700002)(26005)(6916009)(83380400001)(2906002)(86362001)(478600001)(5660300002)(2616005)(956004)(36756003)(186003)(16576012)(8936002)(6666004)(16526019)(66556008)(66946007)(66476007)(31686004)(53546011)(4744005)(4326008)(8676002)(316002)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?ajg4ZkN5YnBvdmZLV25xM1lUTU5wa3d0bGlMbGhXS3Y3TzQrN3lGQUJZTUd3?=
 =?utf-8?B?SGdiY0FiNDRyZ1JyY1ErbEhxVmNsbUdBUEhhYS8xam1zVzlMTXVNVFlRN3pM?=
 =?utf-8?B?Q0Zpb2NpT2R4NVlDY2N2c0p0M3RGLzZXaTh6WHB0QUVIaVc4TllQK0xaQ0pF?=
 =?utf-8?B?NUFIVmFoMitOWmJiR21BWWRLM1BSNnd4NUlKV1FUY01SYndBZm1lbnJneHpZ?=
 =?utf-8?B?Y2lBYjVOajdCejJ4SWdNRVBZVDIrZWJFdzRRU21rdHprZk9GL25mSTdGUlUz?=
 =?utf-8?B?a3lpUDZHQ05uSlRVNUhnQTJFcVpOcFM4ckRqTGNPcTA2bEdDcUNCTThjTldG?=
 =?utf-8?B?a1V6Y3k3TDNsZS9Yb25kRHo0dnFvV0Q0dVRhSnZFdDJpSVFIQmZVMkd1a20w?=
 =?utf-8?B?ZVZGSXI1bHNKdzFJU3pJQ09wbmxOdytjZ2hPMWZvRjAzYmVsQVZ0TG9HWitX?=
 =?utf-8?B?ME5PQ1BRTlVYWEQ4QllDKzRtMVRaa2wyQXFNTVNZUHN4TlYzcm1yYy9sVjVZ?=
 =?utf-8?B?M1Nwam5veEJxRWFqaWZnYmxNaHZ2VFNMUTV1NnpoVW5jaTk4ZWJ1SmtXeFJa?=
 =?utf-8?B?MnZOOGt1NDY2VmZQN1MxS2dOaVZLTWYyQk9aeEVFN1FlTWt1Wjl3MVF5RTFR?=
 =?utf-8?B?VlpFcWlqTDRoZWZGaDBiUFVnb2poYzhSbzRzdDIrY0MxVVY5a2FVM3U3K1pm?=
 =?utf-8?B?SHVmK0ozWkp3NlBCK3N0YjFDNFgyUjNWY1dnbjdWM1ZIK1NCQWhkWjB1VmFs?=
 =?utf-8?B?dy96Y1BLNzZNbCtyM0taQkhTM3FOVVB6Uk9ieDB6dFpMZ3hkWk5RRG8wRW5O?=
 =?utf-8?B?cUtYNEpLZW1ybWU5QkQ0bVBxWWxOaGhjSGoveStHNEs1clllT3ZQTTZGcHlF?=
 =?utf-8?B?c2RTVVBuTzJLWkVkU2hKcnVMK2RTNC9GY25QZndJd0JHSWVrcVNXSWNYcW5v?=
 =?utf-8?B?Z1paMWYxMS9sOEpxTG51MWRLOWhBdGJxbDJURTkzVU1aT1BtRmo4WEJZSzQ0?=
 =?utf-8?B?RkNja0pQSFRMYTJwakIvSTI3azFMTSt3WHo0Y25BYS9iQkk0QkxrV1M4dHJr?=
 =?utf-8?B?YmpXS3FLKzdiK3BNbkhOdDl5d3pjb3lXVmUxMWllNksvYUpSM2hhVU9XOGgy?=
 =?utf-8?B?dm9tY054WGc0TWRCNk9VTG9OZGRHdkpZRUNlMEs2Z1pQU0ZBQ2NQSDM2Tk1O?=
 =?utf-8?B?a3JYRlBGOGRJV2E1c0N2RHc3RVhhV0FhTllURjh0amZsdGd2dTBKMTFyYmg1?=
 =?utf-8?B?cXhyRG9iMWRPcEhLZitVL1lOUnVMNjRDc0VxZ0RXYTZLWHdHdjdsRGozRXhX?=
 =?utf-8?B?ZzgzOUdrL3NXRWRNU0d6RlNUbUZWVXR5Vnc2UHFxaVVtaXRRMHhCMUxnYmIw?=
 =?utf-8?B?MkRpdkpScm1aeHcvNmkvSFdkbW9xWXVrWlZtMzJsRjFkUmhrd3NVU3pQYkVN?=
 =?utf-8?B?Y1dseGEvME9Vak9XNHpDZlVXQWwyZ2RVR2N2MVpxRlFhYXJzNEhzSEdzR0NN?=
 =?utf-8?B?ZDNQT0FmWXJLUFJKd3hlZnB5NjJXUkZ0NnJqaDlGTmdVUDcwZElobjJ6WWJM?=
 =?utf-8?B?Q3FhK2pBRU44ZVkxRUxFYWE5MVVnLzRpOGs4RG1QWHN6RzFkcnkxM0cwRTJn?=
 =?utf-8?B?REJIZ2t4aHN1b0dlcWV4cHZJTTk3dFhFOEpaODBNOUwvaElvN2FzWVhxdGJz?=
 =?utf-8?B?WWJTZWxXTVVOeHA1aGF4TTdmZkdxNHhDK2ViZjYxc2NsSzBvS3phUXVzT2h2?=
 =?utf-8?Q?vCs9mBzhQ/PuC3Wj2Gh7K6MPeehqeXYXdaSwbfB?=
X-MS-Exchange-CrossTenant-Network-Message-Id: a359349e-efee-403d-8ea4-08d9160ac62f
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 May 2021 12:29:38.1244
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: OXDQw7jlyvwhj2pOQyCjoSAgD6Cregsy0rGhdeqt6dK7kf4wiXmbjy5z1ZnEnARLc86KTa7Yra+yJDa1kxqjAivrK3PdJSEjCA7XkdxnMWU=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4485
X-OriginatorOrg: citrix.com

On 13/05/2021 13:22, Olaf Hering wrote:
> Am Thu, 13 May 2021 13:11:10 +0100
> schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
>
>> If I'm counting bits correctly, that is Xen rejecting the use of the NX
>> bit, which is suspicious.  Do you have the full Xen boot log on this
>> box?  I wonder if we've some problem clobbering the XD-disable bit.
>
> Yes, it was attached.
> Is there any other Xen cmdline knob to enable more debug?

Urgh sorry - I've not had enough coffee yet today.

Warning: NX (Execute Disable) protection not active

And this is an AMD box not an Intel box, so no XD-disable nonsense (that
I'm aware of).

So, the two options are:
1) This box legitimately doesn't have NX, and the dom0 kernel is buggy
for trying to use it.
2) This box does actually have NX, Xen has failed to turn it on, and
dom0 (through non-CPUID means) thinks that NX is usable.

Can we first establish whether this box really does, or does not, have NX ?
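[Editorial note: one way to establish that on the box itself, booted without Xen, is the `nx` flag in /proc/cpuinfo, which Linux sets from CPUID leaf 0x80000001, EDX bit 20. A minimal sketch of the check; the sample `flags` line below is invented for illustration:]

```python
# Check for NX the way one would on bare metal: look for the "nx"
# token in a /proc/cpuinfo "flags" line (the kernel derives it from
# CPUID leaf 0x80000001, EDX bit 20).

def has_nx(cpuinfo_text: str) -> bool:
    """Return True if any 'flags' line in cpuinfo_text lists 'nx'."""
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            _, _, flags = line.partition(":")
            if "nx" in flags.split():
                return True
    return False

# Hypothetical excerpt of /proc/cpuinfo from an Opteron-era box:
sample = """\
model name\t: Quad-Core AMD Opteron(tm) Processor 2356
flags\t\t: fpu vme de pse tsc msr pae mce cx8 apic syscall nx mmxext
"""
print(has_nx(sample))  # True
```

In practice one would feed it `open("/proc/cpuinfo").read()`; splitting on whole tokens avoids false matches against flags like "mmxext".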

~Andrew



From xen-devel-bounces@lists.xenproject.org Thu May 13 12:31:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 12:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126931.238512 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhAVH-0002y4-4b; Thu, 13 May 2021 12:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126931.238512; Thu, 13 May 2021 12:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhAVH-0002xx-1d; Thu, 13 May 2021 12:31:47 +0000
Received: by outflank-mailman (input) for mailman id 126931;
 Thu, 13 May 2021 12:31:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yjCE=KI=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lhAVF-0002xn-Pe
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 12:31:45 +0000
Received: from mo6-p00-ob.smtp.rzone.de (unknown [2a01:238:400:100::4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e1bc68ad-0aa7-4d85-a81a-d9aa0fa3b086;
 Thu, 13 May 2021 12:31:44 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.8 AUTH)
 with ESMTPSA id N048d9x4DCVh463
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 13 May 2021 14:31:43 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1bc68ad-0aa7-4d85-a81a-d9aa0fa3b086
ARC-Seal: i=1; a=rsa-sha256; t=1620909103; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=XBmFXqyxGQ8GQH5iC5Q6xG1UOXr5Z7938HnZxNoOZ/aMkOGUu3f2aFScHtAL/S/0bU
    ofrVi+Gl5osryZnHm9ttd5Bg7gEX2Rv4naENqmBeHXS3xSG4Uiv1/Y8RFOLhpKWTIM44
    qckDqjtiH38aOmcCUdwDv9q/yHNAMLcrwiV7O+J9s4CUpEQYxmg8hr6QP1F/B7WnB84c
    HqiDbO9+8yG7V8OG7SqbCUPqAwnQP481Yd+wft4yr9ZY/k/eyhD6FVavbaQBYS7QURBD
    ocW/JelnTOfYUnHhhn3o+CG6VzsjI/GkNmEeyhCF8Iivth98R4WbVQS9px27m71gTR2S
    sY3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620909103;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Zbe/lMiu4/b4x4nohzl46IEZIM8SA3++CKWwgzokx7I=;
    b=tyxImOWsW9nM7kL5vmVP0NztaBBje5dp7JI+Eu4q3f5StDilsLpeQ3Iy3fgIWMdcVa
    1N1SpxFaIHezQdG+EwufA2hHy/T6SI6B35m8H/BKWqrgnjidNnv9g7VPXUEhJzBo/gIh
    p9y2UlygdTYei1YFzhl/p9cpjgvJSgi0JfWZubrqJvUF9b0hMbMq8Lg9/q5yex75QzPG
    /tIpQyUxeDMpfgfasDj04cvCjcL4BvkUzPTZ76ZSAN3M6GxWBA+hys0aTd03o7HVf2Jm
    2FjMRvDs/oQ955vkgIJHim+IKlZAJIWwrC1Zx5esL9N+PCG88BxdJ03oK7jjOUszgo9F
    pOkw==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620909103;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=Zbe/lMiu4/b4x4nohzl46IEZIM8SA3++CKWwgzokx7I=;
    b=A+nmomBNrj20bEIPw7s0bRIThGiLGzy1XXiyxU6Wh/kEmex03stPlyuNffkZs600B3
    MeisrMzPvI1UiMEQka6lRe7yTRJUefDbZ39agkVWlxV50uHfP4oLaFLAWwgSu6vqAaPi
    9CM/AJP35tQFzfhMCOmHi6uo1s0GrvlB2M8LhdDlkBhLxJmW2p55fizKSXKkXNZWHTQF
    HpQaf3wcvGgXBoQZiuN4E16uygVtfsYVtsTL2Dvdm0K+qoKbhjWfqDqx0+LJy7iJJv1Y
    8xDHwOU8uInZzn3JpaNbuE4zIOkhu3MzQI2eQYrfHZwQMF3mPhYaV5bDdBeY4qsRVgrz
    BdiA==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Thu, 13 May 2021 14:31:41 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <20210513143141.6dfbd8a0.olaf@aepfle.de>
In-Reply-To: <389f9d76-ed23-f8ee-6081-322699d7e816@citrix.com>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
	<378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
	<20210513142229.2d2aa0b4.olaf@aepfle.de>
	<389f9d76-ed23-f8ee-6081-322699d7e816@citrix.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/8noblC+NTZiRCZpp_Z071AL";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/8noblC+NTZiRCZpp_Z071AL
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 13 May 2021 13:29:32 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> Can we first establish whether this box really does, or does not, have NX ?

According to lscpu of a native boot: no.

Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                8
On-line CPU(s) list:   0-7
Thread(s) per core:    1
Core(s) per socket:    4
Socket(s):             2
NUMA node(s):          2
Vendor ID:             AuthenticAMD
CPU family:            16
Model:                 2
Model name:            Quad-Core AMD Opteron(tm) Processor 2356
Stepping:              3
CPU MHz:               2300.057
BogoMIPS:              4600.11
Virtualization:        AMD-V
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
L3 cache:              2048K
NUMA node0 CPU(s):     0,2,4,6
NUMA node1 CPU(s):     1,3,5,7
Flags:                 fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall mmxext fxsr_opt pdpe1gb rdtscp lm 3dnowext 3dnow constant_tsc rep_good nopl nonstop_tsc cpuid extd_apicid pni monitor cx16 popcnt lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs hw_pstate vmmcall npt lbrv svm_lock
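The missing "nx" entry in that flags list corresponds to CPUID leaf 0x80000001, EDX bit 20 being clear. A minimal decoder for that raw bit (a sketch; the helper name and sample values are mine, not from the thread):

```python
# CPUID.80000001H:EDX[20] is the NX (AMD) / XD (Intel) bit per the
# vendor manuals.
NX_BIT = 1 << 20

def nx_from_cpuid_edx(edx: int) -> bool:
    """True if the NX/XD bit is set in the extended-feature EDX value."""
    return bool(edx & NX_BIT)
```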

Olaf

--Sig_/8noblC+NTZiRCZpp_Z071AL
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCdHC0ACgkQ86SN7mm1
DoB0og//WFzLztUALrmuohki2Y87cf3CZVS2TDpkIu7rivDEiHhkLwh9LRUqLvMf
LzxT7gl/u774CrJ/IxvFMCKyiFTJgc5JE7qIRDJ8Be4GhH4/9rqxy5Zlp8yvHpLh
7ypA9aY+R6ZUFqvLALvlXMO/8wTQ2VWjeNZVyVRFOfEl8IAdzcx5gVvY3t+raOyk
SmsMAdAZIS7BTTtoIVbxDEX+EthvA3sz8K1YeiTNChC185JUvpKxrH4zhjoVVKEX
vutNA+34ENapuH/7pbkC9mj1MCZQwmV6bOhDH8zY+UtxuGuf5TZ7GGQojEch+Bwv
edzg1ZIsVBCHKadYe76KrCVSFsm/wFOMAM4kTF/dmEpzRTJH5H0dniL/2rAhbry/
NkCFNY5WsKjaZL3WNOlrsw5cY6MxMdw8+psvlqCybicNKBooDqPcA+gHnBmRyBPt
aF3zKg4LdOfmEc9dH3CE91zn33BQ7LDAb2OC9jubG4LqmR+MrGe+02ZnVovaQS2R
JPmQS5RJDOxxQmBaFSMaGCXBIi5neZSpR+fuqvjvYTbBGQBXHGvGyf2SQB5PwZ4i
KhPcN/MRFa2L/E2IsgADiigRmdypbkSYF9X/2yqhuZJzYhKNPxOY39oZKhlJiaYB
AYBLb0zfSSSoQXuKagXytDQLFM9n/3RwVVoEI6w84Ec1isPiKbk=
=kQIC
-----END PGP SIGNATURE-----

--Sig_/8noblC+NTZiRCZpp_Z071AL--


From xen-devel-bounces@lists.xenproject.org Thu May 13 13:00:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 13:00:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126944.238531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhAxP-0006BB-G7; Thu, 13 May 2021 13:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126944.238531; Thu, 13 May 2021 13:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhAxP-0006B4-D6; Thu, 13 May 2021 13:00:51 +0000
Received: by outflank-mailman (input) for mailman id 126944;
 Thu, 13 May 2021 13:00:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=yjCE=KI=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1lhAxO-0006Ay-RG
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 13:00:51 +0000
Received: from mo6-p01-ob.smtp.rzone.de (unknown [2a01:238:400:200::9])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ed72c616-108e-4c2b-be7a-b239aa834ae6;
 Thu, 13 May 2021 13:00:49 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.25.8 AUTH)
 with ESMTPSA id N048d9x4DD0l48G
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 13 May 2021 15:00:47 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ed72c616-108e-4c2b-be7a-b239aa834ae6
ARC-Seal: i=1; a=rsa-sha256; t=1620910848; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=fE20vfkRVmpJhOM3cLGjRIXU+P2FXPDvbQT9RcK4IxGSKHHQtkQJnjod6Le8B8pDgP
    eZFLbB7Us0slRVXJ7N3454Gr/5SmkgDclx9HzONAOq5xJvrEDnetECaZLFV/QjoanGKS
    d4DukNggkQ4MPhW0eZqZQ0vlxaXYvJ5uMJCQEwZGhKjFQLKtmRZH4xVWxzz00mv5BHLP
    wvIMsZpeaEFzQL34JM9AKbyf/zyUBsHHRgcCagB3zDm0j+hlJkF/DYRpiW0ZuDBP5w/H
    +s6lqYVyrcXGZ9rwluCsvnXaWnizpFnM6u+esQxqdL8ExuCI1LWrMeZP6O/4E+uZtnm2
    Lw6Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1620910848;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=VAaDm+7Sqi0/Wawe5GvM8BYwU+cG8DW+YhY0b1pyaPU=;
    b=Kc/DA/7C2n0q7ZqNyu9WwojZdGoussBh4scVJNM5JKKZxBrIq0mO3YMCeu0+2huY/R
    Ovfu14+6Qj0rVEdUlcFUHJyTGToC+Wb+O7uiqM7z1snWQwQupZsZShQtoxwhCUoID9Td
    Iah3MNZh9y7ptc3Zb7v4flX9mYh4WSBRVLBZUWKPzUlq3nimmUVVMYAWBF6pwf9i9GfZ
    fu/BFHiuC/kn8AI58YA8ZXOgWXVrSt3WEeYdDQ1LFKOt/Nf9Dj0+VNQMooBarMEzW1d3
    Zmflv5bSSB8vFB61mnUBOilwdJJ+FPs6yfTdQ7MjdgoI8RLRYDR94WzhJrOhH8rZ5Nqc
    XLdg==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1620910848;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=VAaDm+7Sqi0/Wawe5GvM8BYwU+cG8DW+YhY0b1pyaPU=;
    b=VEtz5OfXI//bKAjpTwJM6YVlBQlu3fWU71yvR9VND81+z8Fkz3eSlgzfklf8kVCqsV
    msof5zFRcom6ysI0DUKdUE/gdcTkOQHw/eg2ibI6TBYjqN9YWErPr3EjD2W3wchwszAi
    mWRmGXRRQm66i2W37KQXLsnZCa6K///OBnMjcLHOYUI6m+w1Y0Liu5YT6VgMGKePk8Sn
    pPv+2ERCY90sran0pqlGkEDtKbvXpKFC2wsLsTO8K20SymfqqQHSEHYeWxyQ102h98V5
    7KAjuJ8mbebySdYyLYVmqVw+pphSKxJF+ikM44a+fMTfAMewH3RZcAMF8x2UsaAuqPzy
    /9dw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF/Wx6Ea03sAi8O4Y0c9DLMc9kgmB2KMHkQZ2le"
X-RZG-CLASS-ID: mo00
Date: Thu, 13 May 2021 15:00:33 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: <xen-devel@lists.xenproject.org>
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <20210513150033.2448f533.olaf@aepfle.de>
In-Reply-To: <389f9d76-ed23-f8ee-6081-322699d7e816@citrix.com>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
	<378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
	<20210513142229.2d2aa0b4.olaf@aepfle.de>
	<389f9d76-ed23-f8ee-6081-322699d7e816@citrix.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/iSjik7i_kC6qWodgkfSsITE";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/iSjik7i_kC6qWodgkfSsITE
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 13 May 2021 13:29:32 +0100
schrieb Andrew Cooper <andrew.cooper3@citrix.com>:

> Warning: NX (Execute Disable) protection not active

There was a knob in the BIOS, it was set to "Disabled" for some reason.
Once enabled, the flag is seen and the dom0 starts fine.

If Xen is booted with 'cpuid=no-nx', the dom0 crashes again.

Thanks for the help, Andrew.


Olaf

--Sig_/iSjik7i_kC6qWodgkfSsITE
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCdIvEACgkQ86SN7mm1
DoC/fQ//aHWh5Fulc+rReth+PZhI7yUroVASVGuNUQ7j3A4zF9alOoGY9qd/mkrz
6xw7jUMLr9ZxcEfPWiNH1DR2ZUw/akEjJobYOc0juLNu26Ms0Z6k5VsvD3UPtYK0
j4LFkD+ZoNURkEgEkQ5CJyozQAL1bVhY/8bInExtztimGP4PFfoMKbJXEZNnWPK/
2px/b4KBC4wPbhH50w75odEj/BTxYjuE5gSzOtvhsX+M4li6QNxjxYg5l4IIX5VY
h6B4MeiACkIRjOCVMt4iqaZJeI/VplbxEMs7bQ54CVzs9OG/e3rG55tqtAPf2Uag
9mODnT/pShJNcE217aRpUi4KalFQN/5B0B7Zt9Lx5k0bqlKDERmGt/ltSB/CeS0w
BvWRV0YrH3zO0Y86eFR4Cmcx7cSVlZ5/fnmtZlqdgE+E+thornp6VzBAdwosRPSU
KV+4G1xlVKDwFR8vy8rR+ah8KZXOM6DvtyWdEt49rgxdd91Wrt4xEkhqalSX2xjh
kcL0GoWuP8R39o8l1Vxo6jpJVsxzSwszKW3thWf1OZZzUHhYCsRqSdbUcXBBx7Yd
pxeqBQ5O208pEvANcbyVsalrxK5tSnfS0Hk1tyZMGwidQpjlCcZMHu13rZ1mD82b
ncA6RlXCvTIBdkJjVguX9Dh+gs0O+Al7lccYjP0Q/zQleUkb+X4=
=AHmM
-----END PGP SIGNATURE-----

--Sig_/iSjik7i_kC6qWodgkfSsITE--


From xen-devel-bounces@lists.xenproject.org Thu May 13 13:10:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 13:10:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126951.238543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhB6N-0007ca-ES; Thu, 13 May 2021 13:10:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126951.238543; Thu, 13 May 2021 13:10:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhB6N-0007cT-BL; Thu, 13 May 2021 13:10:07 +0000
Received: by outflank-mailman (input) for mailman id 126951;
 Thu, 13 May 2021 13:10:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U61U=KI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lhB6M-0007cN-My
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 13:10:06 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b0cd3b0-9ed5-41a8-b05a-ce5167fa5761;
 Thu, 13 May 2021 13:10:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b0cd3b0-9ed5-41a8-b05a-ce5167fa5761
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620911404;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=tLLMue0pia2RSHPtu+KRxD5gYBb/zBQVw+kdxIB/vzQ=;
  b=DP1wGG5Y7Wlc8HA9vqwwYVOFWW2TPUrhp4wAYRsQee9z+IXKLJSJu11n
   3Wg0o5D3tdV9iE96UV6OkGj0Cd9b/GIbNUcFJxEE4jPnFTNBpnd3Dfuwx
   kQuMjM9+4H/gSuydjxB/BvH/QrrkEPyWqB3vU+eLBzubY2ZqwoT9CdLR7
   E=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: BP9Lngatup1QCAh0563TstMMBV3EgkQGRO1N2HDsaZ32tK94BE+XToakAvDamANOeIe2vm+MpK
 EQiumc7iHU8ZAaazCRNFJBVNP6Tak/wuorFG9YrnIe/RQlm7MQNsXBzgvfPAh1MdygIN0ZgIUT
 Im6RtWh5GY0TxfhJgUaDQg971Sx9Efb+qLNNUbc5pTJRTqjSJ6eS597rs6TqYQFQjC40ema3zv
 jH8Ev2j59R3cvUEUYce5jpXMxFOO/v8jJpszK1ijO0qjRLAlPl0MNKgR4G7yDd+8iD70ysR8ho
 g5Q=
X-SBRS: 5.1
X-MesageID: 43514647
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:AJNt6KrgmCKZ5mqdEnP1w28aV5rReYIsimQD101hICG9Evb0qy
 lhppQmPH7P+VIssRQb8+xoV5PufZqxz/BICOoqTNKftWvdyQiVxehZhOOP/9SJIUbDH4VmpM
 VdmsZFaeEZDTJB/LvHCAvTKadd/DFQmprY+ts3zB1WPH9Xg7kL1XYfNu4CeHcGPzWvA/ACZf
 yhz/sCnRWMU1INYP+2A3EUNtKz3eEixPrdEGc77wdM0nj3sQ+V
X-IronPort-AV: E=Sophos;i="5.82,296,1613451600"; 
   d="scan'208";a="43514647"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=ku20cUilb4VzVGm/0dCFNWBFFTMIU3hMW77YxdKICvcmhGtfjNed0iX0HztiYrUTTNi6m3MhT/Rzd/yl9m31GD3OsoE04O2lkRlJtD4+ZIY4lLIe99cC8HwoDQX1yF8kw8Yueil+hJI3Z6N2a8j7rlJeoGfLxS1I7HhIxIxoaaM2UXBZDuCbS2dxBqWSjwVWYs9kb7F0g3kno4JVomcTYHtSezjlbfWuMnk1hbzhOn5PhlVmKDv070lVgRMffNKq2zLCOvYeYHYfDs1N2o2UElGEb7KAgOwbJTn1ERVKrGesigq/CJn7Xcoq0bM6a4YQlXinvY03yzRZAfNTXryDPA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tLLMue0pia2RSHPtu+KRxD5gYBb/zBQVw+kdxIB/vzQ=;
 b=hf1unGZKuU4uHtD6mJ926BvYYoubNi1v1Yg4QhnB854DrYzeWz2ZK6ar2MJVYj2xbLzAISBOHCgpdB5ioGMl5+zAsxpKzFJ+Q+kAd7JSKGyrSNxBJCSX5ypsQbo1bAB963r4c9wKhjCoOr0Rjs7bCcNYXvTXUXcz/UyEp8lXYt0P/MESwZNSOCHlidMRl9o3fEtxlgdD5cyxFF44O/FcbZ5GWQ4U8htGLUAAbZQKpBlFSi/fEdcS0SmNaq1jisNv346ZCg1/E17T92UYR/5rudaE50Tv9RYntzjTxKUkEniJBYOci4zcJNtZ8WC7KYP4eV9DnzRf3VYHtDRrwqnX9g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tLLMue0pia2RSHPtu+KRxD5gYBb/zBQVw+kdxIB/vzQ=;
 b=iNiwmYoOSNdCqewcRfI/c1ZLFZoKeVs5Sz1I4qtPXua61IMR8LMN89uh37l8X9hpicVjST4+frwwMDE9LdFRQWcE8JqBRqMJJN5Z8kBTJch53B70JIjCNkpqLI70Gpu+up105gE06Q0EGeo2od8kr2M94coirYEd8jlSFw85nLc=
Subject: Re: regression in recent pvops kernels, dom0 crashes early
To: Olaf Hering <olaf@aepfle.de>
CC: <xen-devel@lists.xenproject.org>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
 <378acbb3-7bb0-6512-2e68-0a6999926811@citrix.com>
 <20210513142229.2d2aa0b4.olaf@aepfle.de>
 <389f9d76-ed23-f8ee-6081-322699d7e816@citrix.com>
 <20210513150033.2448f533.olaf@aepfle.de>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <67d1fb0f-1cbc-07b7-2224-8f9762b3a7c9@citrix.com>
Date: Thu, 13 May 2021 14:09:53 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210513150033.2448f533.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0425.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a0::29) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 46b0ce1c-c7ac-4483-f6ad-08d9161069a7
X-MS-TrafficTypeDiagnostic: SJ0PR03MB6392:
X-Microsoft-Antispam-PRVS: <SJ0PR03MB639223142B8D56F7ED8C6886BA519@SJ0PR03MB6392.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: cUc/7JSxhQyz7ZvqjVj9mtmmK9+2UpG5GjogGBd5+dynPNcGEMBT1mt/paeMxBmxdGeh9Kl0ASQ5V2XhJXETbE/oZkLAaN2TfYq1PbfuoQOT3dhKZrbBD5E+ys4mM0jT1EGTXQ0gKQkN4UhFGn2ot9rByC8kLlsjGuvl35ibKuzjB83Mjn4itMpR4uMtozPsACfPpc9U6CDZUTGnNMlH7h2ARslheIijstKf5SqwmyacFgbA/oH330/0Y1kpAFInkT4O6dlbEpgtgc3y5ClwGWN++RzH7tei15OxqZGZzEdwpd+OTQpAQbcWcHebLWf69anzRY0firviJw05Wlh+wkF+aujR9MP/6JHxdSWflLbtt60i0BLkC6B49OBGMIh6lK1u2eA3IbpGqMvsFy8kFwHeJNJSX6plr+akx7iVEHr10a+vjixCQYr9963ZPcc+ZKb0+8yuteEb9TLZlhcfbWgz+xHx2mRSjnPswAyqt+wY4yrD2Nf/xb66ASWXNkxuxXX/SL2X0eq9gvQPthHf6OcVNwaSHF9jIA07beay5VUr8eiuWEHF4arndrW2KIfn3lvgJQd+NzC+VE7F0rBdVP6luQ+jQDzCfmDCBKyCnT7wLrtFaVjXgMj5Ijcg/KMH8B/xeU8BRhvm6h5kFlkw7rpz5CT9iyjh7ZGdfIDAK5AkZLidWcCf1kGV3aKYSGzP
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39850400004)(366004)(396003)(346002)(376002)(86362001)(8936002)(2906002)(26005)(956004)(186003)(4326008)(2616005)(83380400001)(6666004)(31686004)(16576012)(4744005)(31696002)(8676002)(478600001)(38100700002)(5660300002)(66476007)(66946007)(16526019)(6916009)(66556008)(6486002)(316002)(36756003)(53546011)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?Windows-1252?Q?b6Fgut/Ug5dxiyyqtG1XB67WYTiAxc9dJ6Sv6ZSjkn7ke8yOi5ckbtbn?=
 =?Windows-1252?Q?yXGKXp4e/UBa3auMLHtiKfCMImSz95doECl3KFishNrD6KWK0CRNIAfI?=
 =?Windows-1252?Q?1v+LoolYbjpR8KVzlWCYgqP0OY0CgbxOHtxHsEHGNjlG+mWpTC/r5Rmg?=
 =?Windows-1252?Q?wUBzXIucNC6RRuiZE8d+Gbmcsf7xrY0XyMtnQkrIM5vdk+lCrczYhaV0?=
 =?Windows-1252?Q?d3Z9C/Fx8vC5bK5gXLXe2MTi+Mev+w/HrPayF1nFKRWxRePKnU1yzE5W?=
 =?Windows-1252?Q?BXk20AaAvrLDUuHIyUjt53zMb4vlXWjd5ToMfRWledttGH5bXoCjXi+F?=
 =?Windows-1252?Q?Wk02pjHa9yyGEAOssVGDCcwPJ3/BxkSDhOBtSpJVc9j7QYYcI0aBhp8y?=
 =?Windows-1252?Q?jChl4FmyY7dZWQ0G79kjrrkELwFLRd4iN87aiB3lbaWSys+VZu7ByQ7s?=
 =?Windows-1252?Q?t1Adxzr8RxCf+biNW9tZiNA48WiuhL6e4HBlg1tRU8eNSdA2jvopOyd3?=
 =?Windows-1252?Q?kVYzkHtpF/txGGMOsMB5s7YT3DVeUQYAawHd2fIAeRiQIzem0Xgn+lSS?=
 =?Windows-1252?Q?jd5vPwiPdtHRTT+qxhAXNSSZhMzBeXIK2AKd2dy+64hP7gUqKlUtPm++?=
 =?Windows-1252?Q?yclAfQGdurFlQNio+LzsZV9n3JpDyMYgGdf3N4kk56HX799zVuTDEzTq?=
 =?Windows-1252?Q?mOnqF2iwzZv7YTwPbWZtIjaee8oPkPOlY1tWBN6tMOWV1Z4kS9UOpvBS?=
 =?Windows-1252?Q?LFUKo1vYnSED6rcz+j1Du36AVnjbYDVh+mjtQbjC7xYwZLiHrPbJ1fbr?=
 =?Windows-1252?Q?4/Xo2riygTbpFdGABktNB6jvtbCKbtizHO4hbVmtLpmOMct8Klbv54aD?=
 =?Windows-1252?Q?Brl4a0/zJYrxw31ZtFomABc8qU/aHK4nVPVQLS5uEjS5cGnAoKJL3zmw?=
 =?Windows-1252?Q?rKfU2yE9o7KwMEfksrh5UcUrY6Lg6wUrqpEu3qcvMF7FUWAHlQ3hSHEO?=
 =?Windows-1252?Q?w61wqH+rdtAg0vzQKiP4mr8SATUdYDA1L4tskgvlqSMlaO3ilBpWvUYL?=
 =?Windows-1252?Q?dCM/erCi+lOgZ24p+40Tt7FjDKgDvLCAFv1RlAwYpRlKSRUuXNkS6sCy?=
 =?Windows-1252?Q?lkS8p/UhdfGu++uTTgyTHQpR5Vlqj8x+yWx47Y0dNTGX60s8trgsQQjd?=
 =?Windows-1252?Q?hUGu1aJiESuunTtL85dg9jnRsQnoiJ0GcNl7SRrKpROVQPHDPeSDv3SI?=
 =?Windows-1252?Q?I3oyqNem4C54LhnqaLp/8+eRQaQkxx0YaQC3lau9ts7G+aX3r24NPmsS?=
 =?Windows-1252?Q?Mz8caSD8XzIWSi5jquRO0ZmTMgz8iuJTgVC+JwqdAbveroCUeIUKZNRb?=
 =?Windows-1252?Q?0qolEAT5bIutOcf5Uip6+yLBY5a1E8AVymVtPQqEisvP29rMhc6Tzi4B?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 46b0ce1c-c7ac-4483-f6ad-08d9161069a7
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 May 2021 13:10:00.0617
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Zh9okuM/eOsZ0XwdXvXDwkBg8FWIX1HJ0ApXEPUtxqMQf5hrsXUn92wgQgvGUE9CPQde28PApP6SR4FRmUYTJq8q22IFSXK2F2fRYx9Vc6w=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB6392
X-OriginatorOrg: citrix.com

On 13/05/2021 14:00, Olaf Hering wrote:
> Am Thu, 13 May 2021 13:29:32 +0100
> schrieb Andrew Cooper <andrew.cooper3@citrix.com>:
>
>> Warning: NX (Execute Disable) protection not active
> There was a knob in the BIOS, it was set to "Disabled" for some reason.
> Once enabled, the flag is seen and the dom0 starts fine.
>
> If Xen is booted with 'cpuid=no-nx', the dom0 crashes again.
>
> Thanks for the help, Andrew.

Well - I wouldn't say we're quite done yet.

Clearly between sle12sp3 and sle12sp4 you've picked up a regression
where Linux decides to use NX despite its absence.

If NX is a mandatory feature now, then dom0 ought to error out cleanly
stating this fact.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 13 13:38:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 13:38:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126957.238554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhBXg-0001Zj-NP; Thu, 13 May 2021 13:38:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126957.238554; Thu, 13 May 2021 13:38:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhBXg-0001Zc-KY; Thu, 13 May 2021 13:38:20 +0000
Received: by outflank-mailman (input) for mailman id 126957;
 Thu, 13 May 2021 13:38:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhBXe-0001ZS-Mn; Thu, 13 May 2021 13:38:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhBXe-0002y2-FS; Thu, 13 May 2021 13:38:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhBXe-0005hJ-7V; Thu, 13 May 2021 13:38:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhBXe-0008MA-70; Thu, 13 May 2021 13:38:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UYHqy/X/Hx4sVGvXOBhQumJKsciWd5VsZ7wSWjlI/3Q=; b=QUlFULDUGmdtl1dIFz9ERoxVSs
	rGZbSDCF4HnHBc3WgD3lJyVrmaKOQpfR5UbiIN/o64rIwE5zwY1DhtmFXCn2GWMg7tJrw5l9nc3tg
	6wVEq81ReBkb6uWTbe3SLBwLPxYrLSSWRVpIRmidhc7oavCqRao+WppuFljAOha01owg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161926-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161926: tolerable trouble: fail/pass/starved - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-dom0pvh-xl-amd:hosts-allocate:starved:nonblocking
    xen-unstable:test-amd64-i386-qemut-rhel6hvm-amd:hosts-allocate:starved:nonblocking
    xen-unstable:test-amd64-amd64-xl-pvhv2-amd:hosts-allocate:starved:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:hosts-allocate:starved:nonblocking
    xen-unstable:test-amd64-coresched-amd64-xl:hosts-allocate:starved:nonblocking
    xen-unstable:test-amd64-coresched-i386-xl:hosts-allocate:starved:nonblocking
    xen-unstable:test-amd64-i386-qemuu-rhel6hvm-amd:hosts-allocate:starved:nonblocking
X-Osstest-Versions-This:
    xen=43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
X-Osstest-Versions-That:
    xen=982c89ed527bc5b0ffae5da9fd33f9d2d1528f06
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 13:38:18 +0000

flight 161926 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161926/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161898
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161898
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161898
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161898
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 161898
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161898
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161898
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161898
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161898
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161898
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161898
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-dom0pvh-xl-amd  3 hosts-allocate               starved  n/a
 test-amd64-i386-qemut-rhel6hvm-amd  3 hosts-allocate               starved n/a
 test-amd64-amd64-xl-pvhv2-amd  3 hosts-allocate               starved  n/a
 test-amd64-amd64-qemuu-nested-amd  3 hosts-allocate               starved  n/a
 test-amd64-coresched-amd64-xl  3 hosts-allocate               starved  n/a
 test-amd64-coresched-i386-xl  3 hosts-allocate               starved  n/a
 test-amd64-i386-qemuu-rhel6hvm-amd  3 hosts-allocate               starved n/a

version targeted for testing:
 xen                  43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
baseline version:
 xen                  982c89ed527bc5b0ffae5da9fd33f9d2d1528f06

Last test of basis   161898  2021-05-10 19:06:50 Z    2 days
Failing since        161904  2021-05-11 10:00:22 Z    2 days    4 attempts
Testing same since   161926  2021-05-13 03:59:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Michal Orzel <michal.orzel@arm.com>
  Olaf Hering <olaf@aepfle.de>
  Volodymyr Babchuk <volodymyr_babchuk@epam.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                starved 
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 starved 
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            starved 
 test-amd64-amd64-xl-pvhv2-amd                                starved 
 test-amd64-i386-qemut-rhel6hvm-amd                           starved 
 test-amd64-i386-qemuu-rhel6hvm-amd                           starved 
 test-amd64-amd64-dom0pvh-xl-amd                              starved 
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   982c89ed52..43d4cc7d36  43d4cc7d36503bcc3aa2aa6ebea2b7912808f254 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 13 13:50:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 13:50:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.126983.238629 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhBjX-0004Y2-OC; Thu, 13 May 2021 13:50:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 126983.238629; Thu, 13 May 2021 13:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhBjX-0004Xv-LB; Thu, 13 May 2021 13:50:35 +0000
Received: by outflank-mailman (input) for mailman id 126983;
 Thu, 13 May 2021 13:50:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhBjW-0004Xl-3K; Thu, 13 May 2021 13:50:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhBjV-0003C8-O9; Thu, 13 May 2021 13:50:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhBjV-0005xR-Ew; Thu, 13 May 2021 13:50:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhBjV-0006zh-ER; Thu, 13 May 2021 13:50:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qpyNVtuOR0wJnYXCQqyuOxtEPIvrFG5JeAEbjabvhhA=; b=4EH5wjr3ZcYbuH0OVIbAhUv+Mp
	308ZHHLXE4UWVmPbsI1L5UkxaPmewWGcdi48QIdtJRaKJqshzippJRjAfyd3fe4tOBYsTT/fi2Wyo
	2155uvY5DXQ+DhxpKRH5Ll0lvbQQO9iJY6JhNkiH3tz5dEx3yzazBhU3HZL22O1BxVQc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161922-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161922: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=88b06399c9c766c283e070b022b5ceafa4f63f19
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 13:50:33 +0000

flight 161922 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161922/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail in 161911 pass in 161922
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 161911

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                88b06399c9c766c283e070b022b5ceafa4f63f19
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  285 days
Failing since        152366  2020-08-01 20:49:34 Z  284 days  476 attempts
Testing same since   161911  2021-05-12 01:57:04 Z    1 days    2 attempts

------------------------------------------------------------
6042 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1639498 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 13 17:55:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 17:55:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127023.238669 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhFYY-0002Zg-3y; Thu, 13 May 2021 17:55:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127023.238669; Thu, 13 May 2021 17:55:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhFYY-0002ZZ-17; Thu, 13 May 2021 17:55:30 +0000
Received: by outflank-mailman (input) for mailman id 127023;
 Thu, 13 May 2021 17:55:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhFYX-0002ZT-8j
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 17:55:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhFYV-0007mi-Nt; Thu, 13 May 2021 17:55:27 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhFYV-0003tR-HY; Thu, 13 May 2021 17:55:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=83YPJzuVB7giuX9ioaeST6gflKL5uVIQXnDpdoZ07PQ=; b=j4YGJHFiJ7UU629Eeg35Y0Fedt
	DMSm5R+bG+gKy6ox79i23e5GZ6ABo3J9+Xgw/t7JHnAuf+GbTjV1aO80rZGt7bF9JokE8iOq4IRWM
	1UYuJRhPgSgWjk2xLeF9c1PXSsCzi0ZJHNq6QkEf51qVcd7/4rGT21GX7ZOJjtQVnbmw=;
Subject: Re: [PATCH RFCv2 09/15] xen/arm32: mm: Re-implement
 setup_xenheap_mappings() using map_pages_to_xen()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-10-julien@xen.org>
 <alpine.DEB.2.21.2105121506300.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <b77cd6d4-5cc4-c32e-b728-644f60324e6a@xen.org>
Date: Thu, 13 May 2021 18:55:25 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105121506300.5018@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Stefano,

On 12/05/2021 23:07, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Now that map_pages_to_xen() has been extended to support 2MB mappings,
>> we can replace the create_mappings() call with a map_pages_to_xen() call.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>      Changes in v2:
>>          - New patch
>>
>>      TODOs:
>>          - add support for contiguous mapping
>> ---
>>   xen/arch/arm/mm.c | 7 ++++++-
>>   1 file changed, 6 insertions(+), 1 deletion(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index 5c17cafff847..19ecf73542ce 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -806,7 +806,12 @@ void mmu_init_secondary_cpu(void)
>>   void __init setup_xenheap_mappings(unsigned long base_mfn,
>>                                      unsigned long nr_mfns)
>>   {
>> -    create_mappings(xen_second, XENHEAP_VIRT_START, base_mfn, nr_mfns, MB(32));
>> +    int rc;
>> +
>> +    rc = map_pages_to_xen(XENHEAP_VIRT_START, base_mfn, nr_mfns,
>> +                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
>> +    if ( rc )
>> +        panic("Unable to setup the xenheap mappings.\n");
>>   
>>       /* Record where the xenheap is, for translation routines. */
>>       xenheap_virt_end = XENHEAP_VIRT_START + nr_mfns * PAGE_SIZE;
> 
> I get the following build error:
> 
> mm.c: In function ‘setup_xenheap_mappings’:
> mm.c:811:47: error: incompatible type for argument 2 of ‘map_pages_to_xen’
>       rc = map_pages_to_xen(XENHEAP_VIRT_START, base_mfn, nr_mfns,
>                                                 ^~~~~~~~
> In file included from mm.c:24:0:
> /local/repos/xen-upstream/xen/include/xen/mm.h:89:5: note: expected ‘mfn_t {aka struct <anonymous>}’ but argument is of type ‘long unsigned int’
>   int map_pages_to_xen(
>       ^~~~~~~~~~~~~~~~
> 
> I think base_mfn needs to be converted to mfn_t

You are right. I think my scripts are build testing arm32 with debug=n. 
I will fix it and respin.
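
For the record, a minimal sketch of the likely fix (assuming the usual
typesafe _mfn() wrapper used elsewhere in Xen's mm code, so this is a
guess at the respin rather than the actual follow-up patch):

```diff
-    rc = map_pages_to_xen(XENHEAP_VIRT_START, base_mfn, nr_mfns,
+    rc = map_pages_to_xen(XENHEAP_VIRT_START, _mfn(base_mfn), nr_mfns,
                           PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
```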

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 13 18:09:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 18:09:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127028.238682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhFmK-0004BM-Ep; Thu, 13 May 2021 18:09:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127028.238682; Thu, 13 May 2021 18:09:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhFmK-0004BF-B7; Thu, 13 May 2021 18:09:44 +0000
Received: by outflank-mailman (input) for mailman id 127028;
 Thu, 13 May 2021 18:09:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhFmI-0004B9-Uj
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 18:09:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhFmH-00086d-FI; Thu, 13 May 2021 18:09:41 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhFmH-0005NU-8r; Thu, 13 May 2021 18:09:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=MgLoHA2k7GO7Z2EQR8jSmJY7nUTs06X7ED5K0XMm7d4=; b=V2sHJkHKzRz+7GgsfdF0/MPgIR
	r3cXFZ979bLKcQdgiw3ruSXb4mpm2UTd5AERp6vnXp+vJoY9faYDoNZ3+eSPwmAsMOGTEhGiyAcTA
	1jbvxQQu66kNp5HRVUAz5Xh4h537l7K39jksCtkcco25c0Q7U7Eo/U3WY5bS+W8x338g=;
Subject: Re: [PATCH RFCv2 10/15] xen/arm: mm: Allocate xen page tables in
 domheap rather than xenheap
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-11-julien@xen.org>
 <alpine.DEB.2.21.2105121529180.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <9429bec0-8706-42b9-cda6-77adde961235@xen.org>
Date: Thu, 13 May 2021 19:09:39 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105121529180.5018@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 12/05/2021 23:44, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> xen_{un,}map_table() already use the helpers to map/unmap pages
>> on-demand (note this is currently a NOP on arm64). So switching to
>> domheap doesn't have any disadvantage.
>>
>> But this has the following benefits:
>>      - keep the page tables unmapped if an arch decided to do so
>>      - reduce xenheap use on arm32, which can be pretty small
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Thanks for the patch. It looks OK but let me ask a couple of questions
> to clarify my doubts.
> 
> This change should have no impact to arm64, right?
> 
> For arm32, I wonder why we were using map_domain_page before in
> xen_map_table: it wasn't necessary, was it? In fact, one could even say
> that it was wrong?
In xen_map_table() we need to be able to map pages from the Xen binary, 
the xenheap... On arm64, we would be able to use mfn_to_virt() because 
everything is mapped in Xen. But that's not the case on arm32. So we 
need a way to map anything easily.

The only difference between xenheap and domheap is that the former is 
always mapped while the latter may not be. So one can also view a 
xenheap page as a glorified domheap page.

I also don't really want to create yet another interface to map pages 
(we have vmap(), map_domain_page(), map_domain_global_page()...). So, 
when I implemented xen_map_table() a couple of years ago, I came to the 
conclusion that this is a convenient (ab)use of the interface.
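
To illustrate the (ab)use being described: xen_map_table() is essentially 
a thin wrapper over the domheap mapping helpers. A simplified sketch (not 
necessarily the exact code in xen/arch/arm/mm.c):

```c
/* Sketch: map/unmap a page-table page via the domheap interface.
 * On arm64 map_domain_page() is effectively mfn_to_virt(), since
 * everything is mapped; on arm32 it creates a temporary mapping. */
static lpae_t *xen_map_table(mfn_t mfn)
{
    return map_domain_page(mfn);
}

static void xen_unmap_table(const lpae_t *table)
{
    unmap_domain_page(table);
}
```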

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 13 18:18:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 18:18:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127040.238694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhFuZ-0005gg-G8; Thu, 13 May 2021 18:18:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127040.238694; Thu, 13 May 2021 18:18:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhFuZ-0005gZ-D9; Thu, 13 May 2021 18:18:15 +0000
Received: by outflank-mailman (input) for mailman id 127040;
 Thu, 13 May 2021 18:18:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhFuY-0005gT-No
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 18:18:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhFuY-0008GP-Hh; Thu, 13 May 2021 18:18:14 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhFuY-00061B-Bj; Thu, 13 May 2021 18:18:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=L8aKK04N7qjvZWJ9p9nrmKWBIoOnOWBIu3ogpWvCblM=; b=3dpYZvwvUTYA6uDfHrrLgihs+a
	qmHVC5V6W15vkAbgf/DoHn/oqeZkUOAdpvH4H7g6LORmdwwtpaAc6y4sLByP05E2IOgM0NIedqn4Q
	o9Ohju/WPvRtncmzwLYzc4AJRLDpRMAyksk5QzeZ69PKKqoQv62RpwQ1/iapldzbP3xI=;
Subject: Re: Discussion of Xenheap problems on AArch64
To: Henry Wang <Henry.Wang@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <PA4PR08MB6253F49C13ED56811BA5B64E92479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <cdde98ca-4183-c92b-adca-801330992fc5@xen.org>
 <PA4PR08MB62538BBA256E66A0415F0C7192479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <f14aa1d6-35d2-a9a3-0672-7f0d3ae3ec89@xen.org>
 <PA4PR08MB62534C4130B59CAA9A8A8BF792419@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <PA4PR08MB6253FBC7F5E690DB74F2E11F92409@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
 <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba649865-410b-e1be-39a3-c4cac802f464@xen.org>
 <PA4PR08MB6253F85E184CA51BDB99786992539@PA4PR08MB6253.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <ba1bc084-5a5b-1410-acba-33bfca7c4f6a@xen.org>
Date: Thu, 13 May 2021 19:18:12 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <PA4PR08MB6253F85E184CA51BDB99786992539@PA4PR08MB6253.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 11/05/2021 02:11, Henry Wang wrote:
> Hi Julien,

Hi Henry,

> 
>> From: Julien Grall <julien@xen.org>
>> Hi Henry,
>>
>> On 07/05/2021 05:06, Henry Wang wrote:
>>>> From: Julien Grall <julien@xen.org>
>>>> On 28/04/2021 10:28, Henry Wang wrote:
>> [...]
>>
>>> when I continue booting Xen, I got following error log:
>>>
>>> (XEN) CPU:    0
>>> (XEN) PC:     00000000002b5a5c alloc_boot_pages+0x94/0x98
>>> (XEN) LR:     00000000002ca3bc
>>> (XEN) SP:     00000000002ffde0
>>> (XEN) CPSR:   600003c9 MODE:64-bit EL2h (Hypervisor, handler)
>>> (XEN)
>>> (XEN)   VTCR_EL2: 80000000
>>> (XEN)  VTTBR_EL2: 0000000000000000
>>> (XEN)
>>> (XEN)  SCTLR_EL2: 30cd183d
>>> (XEN)    HCR_EL2: 0000000000000038
>>> (XEN)  TTBR0_EL2: 000000008413c000
>>> (XEN)
>>> (XEN)    ESR_EL2: f2000001
>>> (XEN)  HPFAR_EL2: 0000000000000000
>>> (XEN)    FAR_EL2: 0000000000000000
>>> (XEN)
>>> (XEN) Xen call trace:
>>> (XEN)    [<00000000002b5a5c>] alloc_boot_pages+0x94/0x98 (PC)
>>> (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
>> (LR)
>>> (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
>>> (XEN)    [<00000000002cb988>] start_xen+0x344/0xbcc
>>> (XEN)    [<00000000002001c0>]
>> arm64/head.o#primary_switched+0x10/0x30
>>> (XEN)
>>> (XEN) ****************************************
>>> (XEN) Panic on CPU 0:
>>> (XEN) Xen BUG at page_alloc.c:432
>>> (XEN) ****************************************
>>
>> This is happening without my patch series applied, right? If so, what
>> happen if you apply it?
> 
> No, I am afraid this is with your patch series applied, and that is why I
> am a little bit confused about the error log...

You are hitting the BUG() at the end of alloc_boot_pages(). This is hit 
because the boot allocator couldn't allocate memory for your request.

Would you be able to apply the following diff and paste the output here?

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ace6333c18ea..dbb736fdb275 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -329,6 +329,8 @@ void __init init_boot_pages(paddr_t ps, paddr_t pe)
      if ( pe <= ps )
          return;

+    printk("%s: ps %"PRI_paddr" pe %"PRI_paddr"\n", __func__, ps, pe);
+
      first_valid_mfn = mfn_min(maddr_to_mfn(ps), first_valid_mfn);

      bootmem_region_add(ps >> PAGE_SHIFT, pe >> PAGE_SHIFT);
@@ -395,6 +397,8 @@ mfn_t __init alloc_boot_pages(unsigned long nr_pfns, unsigned long pfn_align)
      unsigned long pg, _e;
      unsigned int i = nr_bootmem_regions;

+    printk("%s: nr_pfns %lu pfn_align %lu\n", __func__, nr_pfns, pfn_align);
+
      BUG_ON(!nr_bootmem_regions);

      while ( i-- )

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 13 18:38:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 18:38:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127047.238706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhGE4-0007zN-69; Thu, 13 May 2021 18:38:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127047.238706; Thu, 13 May 2021 18:38:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhGE4-0007zG-2q; Thu, 13 May 2021 18:38:24 +0000
Received: by outflank-mailman (input) for mailman id 127047;
 Thu, 13 May 2021 18:38:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhGE2-0007z6-SE; Thu, 13 May 2021 18:38:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhGE2-00008M-LD; Thu, 13 May 2021 18:38:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhGE2-0000j3-A5; Thu, 13 May 2021 18:38:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhGE2-0008UY-9Y; Thu, 13 May 2021 18:38:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wXZaI9zA7oszQ4KnrBJ+OujN7968N+EjNNy0kbe3XF4=; b=yW1vnvrPiXz3vmsELSdhJWurMS
	+cMHX1j8q0FrLYYUF5ZY8nLONY2lhPSK+DxJoiE0cBKUD1B9u3FyR/JBhpqLr2ifnjguo4pWOOvyi
	Wd5tjTfAAaS/CpUXBJ/XOlgA1fNiw1rlmnt2FlPO71ziucK2MgvCZSdYE8dUyzg3hOUQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161924-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161924: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3e9f48bcdabe57f8f90cf19f01bbbf3c86937267
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 18:38:22 +0000

flight 161924 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161924/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                3e9f48bcdabe57f8f90cf19f01bbbf3c86937267
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  266 days
Failing since        152659  2020-08-21 14:07:39 Z  265 days  484 attempts
Testing same since   161924  2021-05-12 23:09:35 Z    0 days    1 attempts

------------------------------------------------------------
499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 151336 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 13 20:15:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 20:15:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127059.238721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhHjy-0000QH-Ek; Thu, 13 May 2021 20:15:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127059.238721; Thu, 13 May 2021 20:15:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhHjy-0000QA-BD; Thu, 13 May 2021 20:15:26 +0000
Received: by outflank-mailman (input) for mailman id 127059;
 Thu, 13 May 2021 20:15:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=U61U=KI=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lhHjx-0000Q4-3E
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 20:15:25 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d3db3fe9-445a-43b6-bba2-1accff3a4fd7;
 Thu, 13 May 2021 20:15:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d3db3fe9-445a-43b6-bba2-1accff3a4fd7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1620936923;
  h=to:references:from:subject:message-id:date:in-reply-to:
   content-transfer-encoding:mime-version;
  bh=LfhQ/0e+rIfZ1fGhjJDwLaWo5qq8gYKoelOO0M3wLrA=;
  b=Lv85+MiG260QzuLLkSUEYKAFq5PeuqgACAJ7zAcStps66rcKJ9pCnBvL
   fTNeQ8TV5M2jFvkv/rG5VfSiHPWYU6oB4RciQbl997i9ANSjkL6RHTNcF
   4dwULTG3hFg1T6PuBjNPrn4XZmZFqyuXjEaOe8jPm6Q53Y1S1IUkdWdwo
   A=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: pzzQJlzjlKNG8S5e9BF32grl1vO3cxa/jaH9CjPqf5NXxe/Wui6+v3VBQ0lRwIoyIkvnyVYZfy
 9Ci2GvxKYRNQ4Kj4ij9wIwLwPdG/mx/P0U80ZDCX5uDjokddpxYq2OyjX4oCchznjfHoV+wOAQ
 qKuWJhgUjavDOcDfDoKX34BbGiVBsc5roS9JD/5Hz+VPF7TOMhk0mv4gVXRv2QuoHu5EipFu0G
 OauGBUvfDaxD4bEMtQluMVafNvyKHNOjuf3hzO42JXCT5VtNQzoTRBUSk8IpRMc4genok23VGq
 j/s=
X-SBRS: 5.1
X-MesageID: 43555868
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:fx/BIK9LU4xYzXrvXlFuk+Hidr1zdoMgy1knxilNoENuEvBwxv
 rOoB1E73HJYW4qKTgdcBW7SeS9qADnhNZICOgqTO+ftWzd1RrKEGgM1/qW/9SRIUfDH4JmpN
 ldmu1FeavN5DtB/J3HCWuDYqIdKbC8mcjC6YiurQYJPGcaEtAdnnhE5x6gYypLrUt9dO0E/f
 Gnl7p6Tk+bCAYqh7OAdwo4tob41qz2fabdEEM7Li9ixBiFiDup7LLgMh6DwxsSaTNAxr8+7X
 PIiUjc6r+4u/+28wTb3WPI9Zha8eGRsudrNYihm8IRIjXphh2JYJ17W7qelDopoOepgWxa7e
 XkklMNLs5343PUcnqUpQL32w789T4y53jp2Taj8AHeiP28aCMxDsJAgY5DSwDe+loEtMxx16
 hatljpzKa/QCmwzBgUKLDzJlZXfh7fmwtHrccjy1hkFacOYr5YqoISuGlPFo0bIS784Ic7VM
 FzEcD1/p9tAHqnRkGcmlMq7M2nX3w1EBvDaFMFoNap3z9fm20851cExfYYgmwL+PsGOs95Dt
 z/Q4xVfYx1P5srhONGdbI8qPKMez7wqMf3QTWvyVeOLtBEB5uCke+skZ4IDCfDQu1V8HJ4ou
 WLbLpijx99R6o1Yff+g6Gjuiq9CllVfQ6dlP22tKIJ64EVstLQQGm+oG5Hqbrhnx3paverHs
 pbfqgmUMMKZQbVaKh0N0aSYeh8FZFWPfdllurSET+10+P2wvWGjJ33TN/jYInMIBcDYF7SLz
 8oYALPAuFt1SmQK1XFaRPqKjXQk2nEjMtN+ZnhjqsuIdI2R8JxWyAu+BWED++wWHF/jpA=
X-IronPort-AV: E=Sophos;i="5.82,296,1613451600"; 
   d="scan'208";a="43555868"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j8TQTEr02zgV4aGE6+AGOYEPzSOpj6ud/i2Qmf3zi+kuDU5rNJLT7iyw33GcHuDj61Uus3ZAkUruEQstBEFBQONCNWebsZa9tri6tQNCYg0azG8BlIyJuYzgGAx3MIkocDPskT63kS6YirbVTKk96vuT9JrYf0nHGYUJcu8QlB/HtycdSMVIuwJoy+asZDI26nbgX/lm0eyrxxM1ZPiCc+RrMZ1nLDGYZ9b5fo7E+6a6kWvYqaNe7+fJU7oq5WCFQv5G+1lBagyWi7d0zTccTY0Afu+EO28NMEm/QreLoCubrLqsOfiVojXfq0DV58fY0HVnFh7xYOEnQ1NRH6D3zA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eNUPB6RNTND5hqCgZhFxtEHC3q+VgGfoK6l471YuAi8=;
 b=QYtM9BKN+GrSSnlsbzcvbBUmGDBEAozvqMYH8eZ2AxCpkf4Ox5REGx5l71iKPhUtaUeIzPI0jkE8wXLOxf5hZTJ2bpNbUlPs3TGSg6maQgQljABvrPULcNBnmLLonlhKP5H4809PdUJvPGqDJ+SuzVCuRRmL5OtljjKlA4KTymx4j1YvLLG2Fo01GEdoPYFDFMoWw2QkXPiYOnGWVWt7KVqD0NsiL3sliWOfSfBjWv/QH1ftRcKljLjsiiSmWAkpayjg1tPrNxVD8hSig+uBcTUKwOFRthNsxQZIRmJYH9vR/8Y7OtfeAvhjzYUzJ7PZ+NWKZWcAm4uRwVXZbqUj4w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eNUPB6RNTND5hqCgZhFxtEHC3q+VgGfoK6l471YuAi8=;
 b=hC/52uBPCzBMttboKf61U6A6QV5bQEK1xf/AOk9aLU2fgyLNZ4eymsZAZviI2KMEMWveUx174rwFeR+wAYqnwRf6MVF1nCOs/2oLM6UoUrpFSSuB7x0yMGsG8WXpk1l2T8FsFDp73s6Ht1/pAYfpmgANode/nc6VqRwNB/veh1w=
To: osstest service owner <osstest-admin@xenproject.org>,
	<xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, "Jan
 Beulich" <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
References: <osstest-161917-mainreport@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Regressed XSA-286, was [xen-unstable test] 161917: regressions - FAIL
Message-ID: <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
Date: Thu, 13 May 2021 21:15:10 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <osstest-161917-mainreport@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0288.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a1::36) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 004e8a3a-f516-42db-3bb0-08d9164bd2ec
X-MS-TrafficTypeDiagnostic: BYAPR03MB4871:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BYAPR03MB4871440FB77B82FCC1397704BA519@BYAPR03MB4871.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: sPild0zL8dZxCa+rXjOS75ZQTQuDMBO65ZqV8vi7u5DMoy4uLjvTMnj6GoHxvy/xVw0BxGPTsmgEafW5f8Feu/wQIfV1FvahEjTUdQm09icKkA4oujaWKuWAwj1CfgeQo/BOnl/phUbrEwaygoaZLhNl1uhsLtLgqV0GOIP/YLJq9HPDi0vo/wU2TGH0IQcI4dnnCGL4ou9EY64Ukcm6wUq0YNUBGv2chL2N72g3IGOmqPEjQwlsG3j5ookYF7289VTMO2+UGTUj6Xu5mKHtkQi4xDEftEzn9AWd0253rpGSFxTEOoetFgM+QGFt2BSotCUHJQjkJ2+Z3eQyr2d3uY8bjhwLoVj0QkKV6tYLSoUasGZeyET2Xq4/5OzrHesKFwcuLRoiu1i7wTlu2bPOvDc9sKfdm9fdJSXYO9jBZwVrpdvi8JtplbY2DZHGu/zsYXG5INqkrQVRE0KkmApQsFThqkJ9ncUUw+C5J9Fww2C4yXkffigRufkxYRvT4SGPyb36k5IU4TXqYYDtoIagBKnnsvSfkL5czgcgcSGw1rWw32CsqR7pE8Rqv+UdvHd/rdnSPU8+Jj/p3Kaz1Q16YbpU/ttrz52OF6+hbqlOcquou4coFZw3ZeE84Sz/VG36Ovrxi13vZXpFESANuvaiywxw5jdldTkWwTpSsdNWR4g4bZfMVyuBvA8XNQrQOUOQdVGVLCWlom1QyrL6rqeWQHqvPwlyCc5OlEpMn0m2ws+PGl6HybXOT+DC3e0LitCS
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(366004)(39860400002)(376002)(2906002)(478600001)(8936002)(53546011)(26005)(66476007)(31686004)(31696002)(66556008)(2616005)(956004)(36756003)(6636002)(66946007)(5660300002)(316002)(16576012)(38100700002)(83380400001)(6486002)(110136005)(966005)(16526019)(6666004)(186003)(86362001)(8676002)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?Ym84ckdXOEJkNnE3V0pqa0ZLUEd0ZytVeFZ3eTdSSjByT0l2RERaRENWRmtP?=
 =?utf-8?B?VWt1SDlJelUwaTJ3N1ErcDZRTUtjWVRGUzc4ZW1CUFZQa255UXJWcnpxdm5r?=
 =?utf-8?B?ejZLZ2pkTUxhbHdXeFcreWNPSWFkWWtsVGRpbHI3WEMvdlpuUzR4dFI5czBK?=
 =?utf-8?B?amJ2RlNYcVdoL0ZxNXZOTWJiaXkyY2dFZFpnZHlDU2djbmlCT2tXNlRrR2ZV?=
 =?utf-8?B?ak9SRWdUOXpMSVJwSW9TVDgxT0FBRkY5bXhiN1hEa0hYaVdOZzBNd2dUbDRO?=
 =?utf-8?B?Ry9nWHZuSHVpczZhZUt0ZlZkSGpoaWZ6K21NWVBYeHZXV0dQdi9QYUttakRX?=
 =?utf-8?B?Q1BMcGo0SHhsUW56MyszRXBWZWc1WmZHQ1RTRVNJSGxJR2tyRmJVNlhQaExH?=
 =?utf-8?B?RkNZaFJhdFFnakc5Z0tOQ1BFd3pITTRsUy9LLzI2M2gra2pJRjVGUFlrMTFr?=
 =?utf-8?B?eXVZdDVTWVA2REhNYUFVUnora081UnBsSmFJckZlSWFxTUFBR3dodDl5cnNw?=
 =?utf-8?B?cnZTamNGZFNaaTZsRWdpdkc5Zi8zd1FQSUd3WHlSZVNkb0d2R3VRSkN4REQ5?=
 =?utf-8?B?Y1dLRXM1WkE4eEFPL1FHazd0dnJFUGZSZDhtdS92eWsxOVo4WkhpcUw0WmVj?=
 =?utf-8?B?QVpVVEkxQmh3K1pYcXdGaVd5bE02RFlTT0duOW9IbXh6blpETVZ2MFlNbzlm?=
 =?utf-8?B?VFhnZW1SdGJuV0VHeGpNUjJvS0swTW9od0NrcVZPWTlqaTNpRkZqWUlRekhk?=
 =?utf-8?B?ZXpnM2xTdzkvK2FXWXk0bzhFK0g3VE1TTGtRZkpFSGV3aVJGV0pzS0xWd2Nh?=
 =?utf-8?B?Yk5pajlaYktRYnVTMHd4V3hQUHpicGo3WnN3U2toSUtSczg4OVM2MUUyZ1Jl?=
 =?utf-8?B?WDQxQ0g4dkh5MkFEVnpOdUNTMkRNbFRtK3JyMWMvZCs5WER5cCtjWEJ4RWVO?=
 =?utf-8?B?T2lUbmFQYURNbUdha1MwQWsybWNwWEJ6eHhyMThsaHE5eXNQU1FBanR2OVdB?=
 =?utf-8?B?R0JpK1VjRFVKWEluMWE3OWlmU0NXSjZFMkYwaGw0S281dERrekxWNUpXcWhZ?=
 =?utf-8?B?d0gybzk5d2dVNmswYXBLZWtrTCtmci9ONncyVnFCMUhtVGdpUStBcXlDZlUr?=
 =?utf-8?B?Uk1PWU1MMllXUmNmRmEvM0kyNWtNdWFCWVpDTGxhVnVLVllkUUgwaTVDZkJh?=
 =?utf-8?B?ZTFNL01LSUFwa3pHTHFwbk5XRGR4R3RmMzM3R051WjdhUEljYkNYampBVW9j?=
 =?utf-8?B?eUJ1czZjZ0tDakRPRjd6TSs2SnR0OEtFN0Y3NUJhRks4cUluMlJYWjlKVVVo?=
 =?utf-8?B?TDYwOGtIa2FQaDZjR3RXMHV2R1dMaVBWdFhEbTFyS2t3VzdEZnR5K3NscWRV?=
 =?utf-8?B?MEZEZlVyRFh1ZVBUTm1lcDlHQmtZeUJBZlJxYlgvNUxoMkZiSzg4UC93eXJ4?=
 =?utf-8?B?Uk1FYjQvQlczSHpwcDlCZW9BZUNlMDdSRHdGRjFxdXBMaGxmc0QzWUkyK1N0?=
 =?utf-8?B?TmdxeXFudWxDdEdiQVM2cHN2Z2lFSHdnbktuN0VnWExKcGhGV0Q2TjdGOWgr?=
 =?utf-8?B?bkc5SkNZcGtwcHpjK2FYYkErNW5zOG5xbjBieTZ6VWJFTk9UeEY5Q1kyMFQ1?=
 =?utf-8?B?NFR2YjhEL0xCcjQ5NWNtZ0JjYmFOUm4yTE5oRkZlbms4UXJsNE5UcXlyQmk5?=
 =?utf-8?B?cVNPZUhMQkVpZ3hwWjB0T0lEN1Y1NjRwNHpLM2RBMWJ6MmV5Q3B4S3ZpN2lu?=
 =?utf-8?Q?fWITIJh912dh1sxtpRh4W7tzoNIwNE+YrLPdxNs?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 004e8a3a-f516-42db-3bb0-08d9164bd2ec
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 13 May 2021 20:15:17.3547
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: y385eZs55DRyL0AM3Rnro/c4DXN8bHEt5nSwi/Se5LP2lGkPIV2KZFNp9xEQ9LsSm9PC1XYrl9+rzx//USPdz30l0m0qEuzLNni3mniy+a4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4871
X-OriginatorOrg: citrix.com

On 13/05/2021 04:56, osstest service owner wrote:
> flight 161917 xen-unstable real [real]
> http://logs.test-lab.xenproject.org/osstest/logs/161917/
>
> Regressions :-(
>
> Tests which did not succeed and are blocking,
> including tests which could not be run:
>  test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 161898
>  test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 161898
>  test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 161898
>  test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 161898
>  test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 161898

I reported these on IRC, and Julien/Stefano have already committed a fix.

> Tests which are failing intermittently (not blocking):
>  test-xtf-amd64-amd64-3 92 xtf/test-pv32pae-xsa-286 fail in 161909 pass in 161917

While noticing the ARM issue above, I also spotted this one by chance.
There are two issues.

First, I have reverted bed7e6cad30 and edcfce55917.  The XTF test is
correct, and they really do reintroduce XSA-286.  It is a miracle of
timing that we don't need an XSA/CVE against Xen 4.15.

Given that I was unhappy with the changes in the first place, I don't
particularly want to see an attempt to resurrect them.  I did not find
the claim that they were a perf improvement in the first place very
convincing, and the XTF test demonstrates that the reasoning about their
safety was incorrect.


Second, the unexplained OSSTest behaviour.

When I repro'd this on pinot1, test-pv32pae-xsa-286 failing was totally
deterministic and repeatable (I tried 100 times because the test is a
fraction of a second).

From the log trawling which Ian already did, the first recorded failure
was flight 160912 on April 11th.  All failures (12, but this number is a
few flights old now) were on pinot*.

What would be interesting to see is whether there have been any passes
on pinot since 160912.

I can't see any reason why the test would be reliable for me, but
unreliable for OSSTest, so I'm wondering whether it is actually
reliable, and something is wrong with the stickiness heuristic.

~Andrew



From xen-devel-bounces@lists.xenproject.org Thu May 13 20:56:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 20:56:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127068.238733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhINV-0004WC-Li; Thu, 13 May 2021 20:56:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127068.238733; Thu, 13 May 2021 20:56:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhINV-0004W5-I1; Thu, 13 May 2021 20:56:17 +0000
Received: by outflank-mailman (input) for mailman id 127068;
 Thu, 13 May 2021 20:56:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhINU-0004Vv-8x; Thu, 13 May 2021 20:56:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhINU-0002Rc-1s; Thu, 13 May 2021 20:56:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhINT-0005Tw-QW; Thu, 13 May 2021 20:56:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhINT-0000zK-Q2; Thu, 13 May 2021 20:56:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=kRNhkIuMluB+FjfUt0Brezb4Ir65kInge4U+2EptWSg=; b=uU7KHSNTFE5NZJE9fGmneduz4D
	tgv/CvOVJhN6lBGqa1AXG+Y1Tpu3lxZiCuC0LcXXvdvOh+qgq1z0+86PlJ/YspViFhDV4/ySP6z6Y
	e2Onlq3SCflQ39Tk1qVeBeuBWWscJEd9L+nMUAp/1euOUBzNpv72sGTBG8VkV5awS9es=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161937-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161937: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
X-Osstest-Versions-That:
    xen=43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 20:56:15 +0000

flight 161937 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161937/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34
baseline version:
 xen                  43d4cc7d36503bcc3aa2aa6ebea2b7912808f254

Last test of basis   161923  2021-05-12 21:01:32 Z    0 days
Testing same since   161937  2021-05-13 18:01:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   43d4cc7d36..cb199cc7de  cb199cc7de987cfda4659fccf51059f210f6ad34 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 13 21:30:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 21:30:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127075.238748 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhIul-0008S8-AK; Thu, 13 May 2021 21:30:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127075.238748; Thu, 13 May 2021 21:30:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhIul-0008S1-78; Thu, 13 May 2021 21:30:39 +0000
Received: by outflank-mailman (input) for mailman id 127075;
 Thu, 13 May 2021 21:30:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhIuj-0008Rr-KF; Thu, 13 May 2021 21:30:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhIuj-0002z1-A6; Thu, 13 May 2021 21:30:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhIui-0006YZ-Tr; Thu, 13 May 2021 21:30:37 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhIui-0005X4-TN; Thu, 13 May 2021 21:30:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ufAW49OrJhaps4lhhmSl1pRr/IfXfbltmA33c3ipVPM=; b=afZOUo7V61Tpn9JnVOqHkH8OGI
	3gEX2joQE1sPo2HNyUr76Jp8qOkpzOs4RZKGlDJDzmmcGa4WGbRwUOXmDbdJPNaaXrM1Tb2343rTk
	NOz61NT1Dc41AHAlxheRpzUo7rSR4RKjyYvCmj97ELJ0zD7H5rRz8i4fDKD/TmHZOGKs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161936-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161936: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c06a2ba62fc401b7aaefd23f5d0bc06d2457ccc1
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 13 May 2021 21:30:36 +0000

flight 161936 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161936/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 19 guest-localmigrate/x10   fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                c06a2ba62fc401b7aaefd23f5d0bc06d2457ccc1
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  286 days
Failing since        152366  2020-08-01 20:49:34 Z  285 days  477 attempts
Testing same since   161936  2021-05-13 13:53:14 Z    0 days    1 attempts

------------------------------------------------------------
6046 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1639877 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 13 22:27:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 13 May 2021 22:27:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127082.238763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhJo2-000537-QY; Thu, 13 May 2021 22:27:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127082.238763; Thu, 13 May 2021 22:27:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhJo2-000530-Lj; Thu, 13 May 2021 22:27:46 +0000
Received: by outflank-mailman (input) for mailman id 127082;
 Thu, 13 May 2021 22:27:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=J2h5=KI=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lhJo0-00052u-Pu
 for xen-devel@lists.xenproject.org; Thu, 13 May 2021 22:27:44 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eca2f0c2-d48f-4ec2-b447-bda0ece8ea24;
 Thu, 13 May 2021 22:27:44 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 0FC3161396;
 Thu, 13 May 2021 22:27:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eca2f0c2-d48f-4ec2-b447-bda0ece8ea24
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620944863;
	bh=kSvMpuSvfKZ36WVRmixfgj1PowjjluhXCUNLL+3vFXY=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=Cp1AdoE028S7Gq+S03bu9kN1sqGyf268NlVt6EitqtiArJaZtndWwEwOOQFYrlyI5
	 ClSudX1mMAUOQhVahvwX7MKYaJnzy1Y/lkgOvwLllOvNa1DIDzyQm6dFwJKR7pkN8U
	 927RwggVajIqQGVOdECRq/ODZBV1lUN/hAmiWqpWevZm0Qpni7CLZRlNAyqKJ/6dRM
	 Xl97/Pl4WI8UjmNz+pMYkzjNl8XPOuWGnJlExAcvkvyL6wiK/V7SFQDfmCGUvlr2bv
	 YJ7TuNMMe1oaG4EXJIMZgWwvLEDHXBWZAMQ+dvcXonKG5v3ygXQHMaH7EJuk8Izigw
	 ohygEV4kQDISQ==
Date: Thu, 13 May 2021 15:27:42 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 10/15] xen/arm: mm: Allocate xen page tables in
 domheap rather than xenheap
In-Reply-To: <9429bec0-8706-42b9-cda6-77adde961235@xen.org>
Message-ID: <alpine.DEB.2.21.2105131522030.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-11-julien@xen.org> <alpine.DEB.2.21.2105121529180.5018@sstabellini-ThinkPad-T480s> <9429bec0-8706-42b9-cda6-77adde961235@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 13 May 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 12/05/2021 23:44, Stefano Stabellini wrote:
> > On Sun, 25 Apr 2021, Julien Grall wrote:
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > xen_{un,}map_table() already uses the helper to map/unmap pages
> > > on-demand (note this is currently a NOP on arm64). So switching to
> > > domheap doesn't have any disadvantage.
> > > 
> > > But this has the benefits:
> > >      - to keep the page tables unmapped if an arch decides to do so
> > >      - to reduce xenheap use on arm32, which can be pretty small
> > > 
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > 
> > Thanks for the patch. It looks OK but let me ask a couple of questions
> > to clarify my doubts.
> > 
> > This change should have no impact to arm64, right?
> > 
> > For arm32, I wonder why we were using map_domain_page before in
> > xen_map_table: it wasn't necessary, was it? In fact, one could even say
> > that it was wrong?
> In xen_map_table() we need to be able to map pages from Xen binary, xenheap...
> On arm64, we would be able to use mfn_to_virt() because everything is mapped
> in Xen. But that's not the case on arm32. So we need a way to map anything
> easily.
> 
> The only difference between xenheap and domheap is that the former is always
> mapped while the latter may not be. So one can also view a xenheap page as a
> glorified domheap page.
> 
> I also don't really want to create yet another interface to map pages (we have
> vmap(), map_domain_page(), map_domain_global_page()...). So, when I
> implemented xen_map_table() a couple of years ago, I came to the conclusion
> that this is a convenient (ab)use of the interface.

Got it. Repeating to check if I see the full picture. On ARM64 there are
no changes. On ARM32, at runtime there are no changes to mapping/unmapping
pages, because xen_map_table already maps all pages through domheap (even
xenheap pages are mapped as domheap pages); so this patch makes no
difference in mapping/unmapping, correct?

The only difference is that on arm32 we are using domheap to allocate
the pages, which is a different (larger) pool.


From xen-devel-bounces@lists.xenproject.org Thu May 13 22:33:01 2021
Date: Thu, 13 May 2021 15:32:49 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 08/15] xen/arm32: mm: Check if the virtual address
 is shared before updating it
In-Reply-To: <caec9741-8c0e-b80a-1020-c985beb1e100@xen.org>
Message-ID: <alpine.DEB.2.21.2105131528230.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-9-julien@xen.org> <alpine.DEB.2.21.2105121448090.5018@sstabellini-ThinkPad-T480s> <caec9741-8c0e-b80a-1020-c985beb1e100@xen.org>

On Wed, 12 May 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 12/05/2021 23:00, Stefano Stabellini wrote:
> > On Sun, 25 Apr 2021, Julien Grall wrote:
> > > From: Julien Grall <jgrall@amazon.com>
> > > 
> > > Only the first 2GB of the virtual address space is shared between all
> > > the page-tables on Arm32.
> > > 
> > > There is a long outstanding TODO in xen_pt_update() stating that the
> > > function is can only work with shared mapping. Nobody has ever called
> >             ^ remove
> > 
> > > the function with private mapping, however as we add more callers
> > > there is a risk to mess things up.
> > > 
> > > Introduce a new define to mark the ened of the shared mappings and use
> >                                       ^end
> > 
> > > it in xen_pt_update() to verify if the address is correct.
> > > 
> > > Note that on Arm64, all the mappings are shared. Some compiler may
> > > complain about an always true check, so the new define is not introduced
> > > for arm64 and the code is protected with an #ifdef.
> >   On arm64 we could maybe define SHARED_VIRT_END to an arbitrarely large
> > value, such as:
> > 
> > #define SHARED_VIRT_END (1UL<<48)
> > 
> > or:
> > 
> > #define SHARED_VIRT_END DIRECTMAP_VIRT_END
> > 
> > ?
> 
> I thought about it but I didn't want to define to a random value... I felt not
> define it was better.

Yeah, I see your point: any restrictions in addressing (e.g. 48bits)
are physical address restrictions. Here we are talking about virtual
address restriction, and I don't think there are actually any
restrictions there?  We could validly map something at
0xffff_ffff_ffff_ffff. So even (1<<48) which makes sense at the physical
level, doesn't make sense in terms of virtual addresses.


> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > > 
> > > ---
> > >      Changes in v2:
> > >          - New patch
> > > ---
> > >   xen/arch/arm/mm.c            | 11 +++++++++--
> > >   xen/include/asm-arm/config.h |  4 ++++
> > >   2 files changed, 13 insertions(+), 2 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > > index 8fac24d80086..5c17cafff847 100644
> > > --- a/xen/arch/arm/mm.c
> > > +++ b/xen/arch/arm/mm.c
> > > @@ -1275,11 +1275,18 @@ static int xen_pt_update(unsigned long virt,
> > >        * For arm32, page-tables are different on each CPUs. Yet, they
> > > share
> > >        * some common mappings. It is assumed that only common mappings
> > >        * will be modified with this function.
> > > -     *
> > > -     * XXX: Add a check.
> > >        */
> > >       const mfn_t root = virt_to_mfn(THIS_CPU_PGTABLE);
> > >   +#ifdef SHARED_VIRT_END
> > > +    if ( virt > SHARED_VIRT_END ||
> > > +         (SHARED_VIRT_END - virt) < nr_mfns )
> > 
> > The following would be sufficient, right?
> > 
> >      if ( virt + nr_mfns > SHARED_VIRT_END )
> 
> This would not protect against an overflow. So I think it is best if we keep
> my version.

But there can be no overflow with the way SHARED_VIRT_END is defined.
Even if SHARED_VIRT_END was defined at (1<<48) there can be no overflow.
Only if we defined SHARED_VIRT_END as 0xffff_ffff_ffff_ffff we would
have an overflow, but you wrote above that your preference is not to do
that.


From xen-devel-bounces@lists.xenproject.org Thu May 13 22:44:39 2021
Date: Thu, 13 May 2021 15:44:30 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 02/15] xen/arm: lpae: Use the generic helpers to
 defined the Xen PT helpers
In-Reply-To: <e834b447-46c2-14fe-a39c-209d4d6ca5fe@xen.org>
Message-ID: <alpine.DEB.2.21.2105131533070.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-3-julien@xen.org> <alpine.DEB.2.21.2105111515470.5018@sstabellini-ThinkPad-T480s> <94e364a7-de40-93ab-6cde-a2f493540439@xen.org> <alpine.DEB.2.21.2105121425500.5018@sstabellini-ThinkPad-T480s>
 <e834b447-46c2-14fe-a39c-209d4d6ca5fe@xen.org>

On Wed, 12 May 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 12/05/2021 22:30, Stefano Stabellini wrote:
> > On Wed, 12 May 2021, Julien Grall wrote:
> > > > > +#define LPAE_SHIFT          LPAE_SHIFT_GS(PAGE_SHIFT)
> > > > > +#define LPAE_ENTRIES        LPAE_ENTRIES_GS(PAGE_SHIFT)
> > > > > +#define LPAE_ENTRY_MASK     LPAE_ENTRY_MASK_GS(PAGE_SHIFT)
> > > > > 
> > > > > +#define LEVEL_SHIFT(lvl)    LEVEL_SHIFT_GS(PAGE_SHIFT, lvl)
> > > > > +#define LEVEL_ORDER(lvl)    LEVEL_ORDER_GS(PAGE_SHIFT, lvl)
> > > > > +#define LEVEL_SIZE(lvl)     LEVEL_SIZE_GS(PAGE_SHIFT, lvl)
> > > > > +#define LEVEL_MASK(lvl)     (~(LEVEL_SIZE(lvl) - 1))
> > > > 
> > > > I would avoid adding these 4 macros. It would be OK if they were just
> > > > used within this file but lpae.h is a header: they could end up be used
> > > > anywhere in the xen/ code and they have a very generic name. My
> > > > suggestion would be to skip them and just do:
> > > 
> > > Those macros will be used in follow-up patches. They are pretty useful to
> > > avoid introduce static array with the different information for each
> > > level.
> > > 
> > > Would prefix them with XEN_ be better?
> > 
> > Maybe. The concern I have is that there are multiple page granularities
> > (4kb, 16kb, etc) and multiple page sizes (4kb, 2mb, etc). If I just see
> > LEVEL_ORDER it is not immediately obvious what granularity and what size
> > we are talking about.
> 
> I am a bit puzzled with your answer. AFAIU, you are happy with the existing
> macros (THIRD_*, SECOND_*) but not with the new macros.
>
> In reality, there is no difference because THIRD_* doesn't tell you the exact
> size but only "this is a level 3 mapping".
> 
> So can you clarify what you are after? IOW is it reworking the current naming
> scheme?

You are right -- there is no real difference between THIRD_*, SECOND_*
and LEVEL_*.

The original reason for my comments is that I hadn't read the following
patches, and the definition of the LEVEL_* macros is simple enough that
they could be open-coded. It looked like they were only going to be used
to make the definitions of THIRD_*, SECOND_* a bit easier. So, at first,
I was wondering if they were needed at all.

Secondly, I realized that they were going to be used in *.c files by
other patches; that's why they are there. But I started wondering
whether we should find a way to make it a bit clearer that they are for
Xen pages, currently at 4KB granularity. THIRD_*, SECOND_*, etc. are
already generic names which don't convey the granularity, or whether
they are Xen pages at all, but LEVEL_* seems even more generic.

As I mentioned, I don't have any good suggestions for changes to make
here, so unless you can come up with a good idea let's keep it as is.


From xen-devel-bounces@lists.xenproject.org Thu May 13 22:58:24 2021
Date: Thu, 13 May 2021 15:58:14 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFCv2 11/15] xen/arm: mm: Allow page-table allocation
 from the boot allocator
In-Reply-To: <20210425201318.15447-12-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105131556050.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-12-julien@xen.org>

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> At the moment, page-table can only be allocated from domheap. This means
> it is not possible to create mapping in the page-tables via
> map_pages_to_xen() if page-table needs to be allocated.
> 
> In order to avoid open-coding page-tables update in early boot, we need
> to be able to allocate page-tables much earlier. Thankfully, we have the
> boot allocator for those cases.
> 
> create_xen_table() is updated to cater early boot allocation by using
> alloc_boot_pages().
> 
> Note, this is not sufficient to bootstrap the page-tables (i.e mapping
> before any memory is actually mapped). This will be addressed
> separately.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>


> ---
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/mm.c | 20 ++++++++++++++------
>  1 file changed, 14 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index ae5a07ea956b..d090fdfd5994 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -1011,19 +1011,27 @@ static void xen_unmap_table(const lpae_t *table)
>  
>  static int create_xen_table(lpae_t *entry)
>  {
> -    struct page_info *pg;
> +    mfn_t mfn;
>      void *p;
>      lpae_t pte;
>  
> -    pg = alloc_domheap_page(NULL, 0);
> -    if ( pg == NULL )
> -        return -ENOMEM;
> +    if ( system_state != SYS_STATE_early_boot )
> +    {
> +        struct page_info *pg = alloc_domheap_page(NULL, 0);
> +
> +        if ( pg == NULL )
> +            return -ENOMEM;
> +
> +        mfn = page_to_mfn(pg);
> +    }
> +    else
> +        mfn = alloc_boot_pages(1, 1);
>  
> -    p = xen_map_table(page_to_mfn(pg));
> +    p = xen_map_table(mfn);
>      clear_page(p);
>      xen_unmap_table(p);
>  
> -    pte = mfn_to_xen_entry(page_to_mfn(pg), MT_NORMAL);
> +    pte = mfn_to_xen_entry(mfn, MT_NORMAL);
>      pte.pt.table = 1;
>      write_pte(entry, pte);
>  
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Thu May 13 22:59:27 2021
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-9-julien@xen.org>
 <alpine.DEB.2.21.2105121448090.5018@sstabellini-ThinkPad-T480s>
 <caec9741-8c0e-b80a-1020-c985beb1e100@xen.org> <alpine.DEB.2.21.2105131528230.5018@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2105131528230.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien.grall.oss@gmail.com>
Date: Thu, 13 May 2021 23:59:13 +0100
Message-ID: <CAJ=z9a18zq06AKTGRJHHzR1JeabdO+-FKTmu77ZmdPdQQi=NMA@mail.gmail.com>
Subject: Re: [PATCH RFCv2 08/15] xen/arm32: mm: Check if the virtual address
 is shared before updating it
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, Henry.Wang@arm.com, 
	Penny.Zheng@arm.com, Bertrand Marquis <Bertrand.Marquis@arm.com>, 
	Julien Grall <jgrall@amazon.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>


On Thu, 13 May 2021, 23:32 Stefano Stabellini, <sstabellini@kernel.org>
wrote:

> On Wed, 12 May 2021, Julien Grall wrote:
> > Hi Stefano,
> >
> > On 12/05/2021 23:00, Stefano Stabellini wrote:
> > > On Sun, 25 Apr 2021, Julien Grall wrote:
> > > > From: Julien Grall <jgrall@amazon.com>
> > > >
> > > > Only the first 2GB of the virtual address space is shared between all
> > > > the page-tables on Arm32.
> > > >
> > > > There is a long outstanding TODO in xen_pt_update() stating that the
> > > > function is can only work with shared mapping. Nobody has ever called
> > >             ^ remove
> > >
> > > > the function with private mapping, however as we add more callers
> > > > there is a risk to mess things up.
> > > >
> > > > Introduce a new define to mark the ened of the shared mappings and
> use
> > >                                       ^end
> > >
> > > > it in xen_pt_update() to verify if the address is correct.
> > > >
> > > > Note that on Arm64, all the mappings are shared. Some compiler may
> > > > complain about an always true check, so the new define is not
> introduced
> > > > for arm64 and the code is protected with an #ifdef.
> > >   On arm64 we could maybe define SHARED_VIRT_END to an arbitrarely
> large
> > > value, such as:
> > >
> > > #define SHARED_VIRT_END (1UL<<48)
> > >
> > > or:
> > >
> > > #define SHARED_VIRT_END DIRECTMAP_VIRT_END
> > >
> > > ?
> >
> > I thought about it but I didn't want to define to a random value... I
> felt not
> > define it was better.
>
> Yeah, I see your point: any restrictions in addressing (e.g. 48bits)
> are physical address restrictions. Here we are talking about virtual
> address restriction, and I don't think there are actually any
> restrictions there?  We could validly map something at
> 0xffff_ffff_ffff_ffff. So even (1<<48) which makes sense at the physical
> level, doesn't make sense in terms of virtual addresses.
>

The limit for the virtual address is 2^64.


>
> > > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > > >
> > > > ---
> > > >      Changes in v2:
> > > >          - New patch
> > > > ---
> > > >   xen/arch/arm/mm.c            | 11 +++++++++--
> > > >   xen/include/asm-arm/config.h |  4 ++++
> > > >   2 files changed, 13 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > > > index 8fac24d80086..5c17cafff847 100644
> > > > --- a/xen/arch/arm/mm.c
> > > > +++ b/xen/arch/arm/mm.c
> > > > @@ -1275,11 +1275,18 @@ static int xen_pt_update(unsigned long virt,
> > > >        * For arm32, page-tables are different on each CPUs. Yet, they
> > > > share
> > > >        * some common mappings. It is assumed that only common
> mappings
> > > >        * will be modified with this function.
> > > > -     *
> > > > -     * XXX: Add a check.
> > > >        */
> > > >       const mfn_t root = virt_to_mfn(THIS_CPU_PGTABLE);
> > > >   +#ifdef SHARED_VIRT_END
> > > > +    if ( virt > SHARED_VIRT_END ||
> > > > +         (SHARED_VIRT_END - virt) < nr_mfns )
> > >
> > > The following would be sufficient, right?
> > >
> > >      if ( virt + nr_mfns > SHARED_VIRT_END )
> >
> > This would not protect against an overflow. So I think it is best if we
> keep
> > my version.
>
> But there can be no overflow with the way SHARED_VIRT_END is defined.

> Even if SHARED_VIRT_END was defined at (1<<48) there can be no overflow.
> Only if we defined SHARED_VIRT_END as 0xffff_ffff_ffff_ffff we would
> have an overflow, but you wrote above that your preference is not to do
> that.
>

You can have an overflow regardless of the value of SHARED_VIRT_END.

Imagine virt = 2^64 - 1 and nr_mfns = 1. The addition would wrap around
to 0.

As a consequence the check would pass when it should not.

One can argue that the caller will always provide sane values. However,
given the simplicity of the check, it is not worth the trouble if a
caller is buggy.

Now, the problem with SHARED_VIRT_END equal to 2^64 - 1 is not the
overflow but the compiler, which may throw an error/warning for an
always-true check. Hence the reason for not defining SHARED_VIRT_END on
arm64.

Cheers,

>Cheers,</div><div dir=3D"auto"><br></div><div dir=3D"auto"><div class=3D"g=
mail_quote"><blockquote class=3D"gmail_quote" style=3D"margin:0 0 0 .8ex;bo=
rder-left:1px #ccc solid;padding-left:1ex">
</blockquote></div></div></div>
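[Editor's note: the overflow argument in the thread above can be sketched in a few lines of C. The names SHARED_VIRT_END, check_naive and check_safe are stand-ins for the values discussed, not the actual Xen code; the thread also compares nr_mfns against addresses directly, which is preserved here for simplicity.]

```c
/* Sketch: why the single-comparison form of the range check is unsafe.
 * Unsigned arithmetic wraps modulo 2^64, so virt + nr can come out
 * smaller than virt. */
#include <stdint.h>
#include <stdbool.h>

#define SHARED_VIRT_END (1ULL << 48)

/* The form proposed in review: vulnerable to wraparound. */
static bool reject_naive(uint64_t virt, uint64_t nr)
{
    return virt + nr > SHARED_VIRT_END;   /* true means "reject" */
}

/* The form in the patch: once virt <= SHARED_VIRT_END is established,
 * the subtraction cannot wrap, so the check is overflow-safe. */
static bool reject_safe(uint64_t virt, uint64_t nr)
{
    return virt > SHARED_VIRT_END || (SHARED_VIRT_END - virt) < nr;
}
```

With virt = 2^64 - 1 and nr = 1 (Julien's example), virt + nr wraps to 0, so the naive form wrongly accepts, while the safe form rejects.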

--0000000000000bf2b905c23e143d--


From xen-devel-bounces@lists.xenproject.org Fri May 14 00:57:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 00:57:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127118.238847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhM93-0005Q3-W2; Fri, 14 May 2021 00:57:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127118.238847; Fri, 14 May 2021 00:57:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhM93-0005Pw-Sl; Fri, 14 May 2021 00:57:37 +0000
Received: by outflank-mailman (input) for mailman id 127118;
 Fri, 14 May 2021 00:57:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhM92-0004or-7V
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 00:57:36 +0000
Received: from mail-il1-x130.google.com (unknown [2607:f8b0:4864:20::130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e29672f-6039-4915-8178-e9f5cbd9e28c;
 Fri, 14 May 2021 00:57:30 +0000 (UTC)
Received: by mail-il1-x130.google.com with SMTP id j20so24501126ilo.10
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 17:57:30 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g25sm1981538ion.32.2021.05.13.17.57.29
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 17:57:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e29672f-6039-4915-8178-e9f5cbd9e28c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=z8NQKucDlsKkxpg+tgse/goMcErijkf42rV8EohDD7E=;
        b=XPPSHBsT1q8Fxl+w7tsnDyetbBVg32loktCtiMutZPMZIISb8G6/GOpGW/bBxc7Fyk
         ovDFeN8lUi41edNc42vxwYIpcbQSWxIimpk/T9X4PdIro4BqkmTQEQIl2BRbrflIHNtL
         FR32/T6odk0CN7rtrJxiT9yaelI1GYH8zx7ipxdWXMtE3dENIKAEQcir1Y6jXo62X44O
         2B0r5QCCaySuiDu1AOCDk3UAYtbNJqNq9GXDlgLfPkJIWZ+NDzqKYJ/x80gq7GyD3zd4
         wecV1mmOxov5sOzjwH4D3cMXaUlU/PFSBZ3xaqkdtbKrJH7k5MiA/gs3wQ4kGBXtlwyq
         LHGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=z8NQKucDlsKkxpg+tgse/goMcErijkf42rV8EohDD7E=;
        b=R8Y7GddajITsVjQLPfLqwv+i1AF61nyYKb4FxtawEuCUuria3amPeKvc48R0Lj+iO8
         5Cr78oLqmOERKC3WsQTJxQXmsa50MAh+YU5aJfZj4Y6idznAOJa4W/B00QhWYKpvCWh0
         mqC6wyDJEZ4Yyg36Nh8G2UbYjw/0skz9w6mQUYLhP0UgS0NkBYkhMqkcWgUXBuotUh4i
         CfEgiJ+hGFGezW3S8uJN1dlzsLlynxSm/93tm6ERHWBYAqi66qRqe33dFzUTE2Bngb7N
         VP+PuozukEAhah/gV4foh/SLvJkXB6H7sOsiMZwnkBk0WdzbulqbbYSOPoYuod1fakEw
         hn7A==
X-Gm-Message-State: AOAM533wCMvCvXnxYqsrW1slfd7X00En98Ur2LTXjWpGN9KhY84E1o3E
	3xSlFxLG+cyvEMppOZO/czA=
X-Google-Smtp-Source: ABdhPJz55umq4EwDtw961MeN5IVmysZGtNDtXlv5xDHKKiDYO8Bs+8fzkkIvkTpg7mxfFPiPP4ae6A==
X-Received: by 2002:a92:b751:: with SMTP id c17mr39545880ilm.121.1620953849854;
        Thu, 13 May 2021 17:57:29 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Connor Davis <connojdavis@gmail.com>,
	xen-devel@lists.xenproject.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v2 2/4] xen: Export dbgp functions when CONFIG_XEN_DOM0 is enabled
Date: Thu, 13 May 2021 18:56:49 -0600
Message-Id: <291659390aff63df7c071367ad4932bf41e11aef.1620952511.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
References: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Export xen_dbgp_reset_prep and xen_dbgp_external_startup
when CONFIG_XEN_DOM0 is defined. This allows use of these symbols
even if CONFIG_EARLY_PRINTK_DBGP is defined.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
Acked-by: Juergen Gross <jgross@suse.com>
---
 drivers/xen/dbgp.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/xen/dbgp.c b/drivers/xen/dbgp.c
index cfb5de31d860..fef32dd1a5dc 100644
--- a/drivers/xen/dbgp.c
+++ b/drivers/xen/dbgp.c
@@ -44,7 +44,7 @@ int xen_dbgp_external_startup(struct usb_hcd *hcd)
 	return xen_dbgp_op(hcd, PHYSDEVOP_DBGP_RESET_DONE);
 }
 
-#ifndef CONFIG_EARLY_PRINTK_DBGP
+#if defined(CONFIG_XEN_DOM0) || !defined(CONFIG_EARLY_PRINTK_DBGP)
 #include <linux/export.h>
 EXPORT_SYMBOL_GPL(xen_dbgp_reset_prep);
 EXPORT_SYMBOL_GPL(xen_dbgp_external_startup);
-- 
2.31.1
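[Editor's note: the one-line guard change above can be summarized as a truth table. The sketch below models the preprocessor condition as a plain function; the boolean parameters are stand-ins for the Kconfig symbols.]

```c
/* Models: #if defined(CONFIG_XEN_DOM0) || !defined(CONFIG_EARLY_PRINTK_DBGP)
 * Before the patch only the !defined(CONFIG_EARLY_PRINTK_DBGP) half
 * existed, so enabling early printk hid the exports even for dom0. */
#include <stdbool.h>

static bool symbols_exported(bool xen_dom0, bool early_printk_dbgp)
{
    return xen_dom0 || !early_printk_dbgp;
}
```

The only combination whose outcome changes is CONFIG_XEN_DOM0=y with CONFIG_EARLY_PRINTK_DBGP=y, which now exports the symbols.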



From xen-devel-bounces@lists.xenproject.org Fri May 14 00:57:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 00:57:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127117.238835 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhM8y-00055n-PD; Fri, 14 May 2021 00:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127117.238835; Fri, 14 May 2021 00:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhM8y-00055g-K3; Fri, 14 May 2021 00:57:32 +0000
Received: by outflank-mailman (input) for mailman id 127117;
 Fri, 14 May 2021 00:57:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhM8x-0004or-7P
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 00:57:31 +0000
Received: from mail-io1-xd34.google.com (unknown [2607:f8b0:4864:20::d34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d522bf0-32e1-4d87-8967-518a75310e3d;
 Fri, 14 May 2021 00:57:28 +0000 (UTC)
Received: by mail-io1-xd34.google.com with SMTP id k16so12586227ios.10
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 17:57:28 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g25sm1981538ion.32.2021.05.13.17.57.27
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 17:57:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d522bf0-32e1-4d87-8967-518a75310e3d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=puac9P/1pH2gTjwWoFDjpCGPhH1RSMCtsgJsRORZsps=;
        b=uleaoi7pw6pKKSvyheVtu5WrdsRZupmmjo1QujTNyVGJQ8DgVhO7y4jFYpwCkJIhy2
         68VKEMoj9rYzzCOBLGWHjXSfvisPps5OKm/Jqp9PN5IR9eHiG2WO3QCYVHXMwSzLNd9A
         O0uK2IdzmG6w6ZUgWg0XIor/h80j1JteyuNu5gj+cCHZ6YX9EGGAKryY0N7AcUPnQcIK
         vIdeab3oi0stWuXuAWdv1sQQaa2U/oM1C01bnoAVje+WPnbUZxDSwbt0jDrGDvqpwA1v
         tpqspuZUJ3yxTcLvPcN9YJEahK0dWjqgC/eQ8eTK/DAK7lEatlPwq4qb3xayS5RwgZXG
         5QPQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=puac9P/1pH2gTjwWoFDjpCGPhH1RSMCtsgJsRORZsps=;
        b=hGQSuS5Ya6ZXoEbsfwfVaM1JLbuomZh9lYT8NVhfgsgS9/Vrcm5EoDKGjhl7GIaHWT
         MWHP++OThP9Z4ar3ua0wVp5GSAYIKTKbfZKDs05QLlhxVd+6hhJxEswt02xi6umpbhAD
         E2wRpqVaB0VyC3Go66hcl/Y53mY1r4YmUwRWbytc8x7vcJ1kDyJZN+EujH75dtX05zz/
         VqdJYu+zWiELZf4a3aIa1VkR2StcMBpKyK8PTfJu7iB88TYORG+t6H0Sr+Cgnnq6UZ+1
         vMGjgDtf3piuV4XM1AG4kOwgMuSjN4mOS9DOm98AgLZaeR2I5pXiHd2OvlId/uFV5BX5
         Yqxg==
X-Gm-Message-State: AOAM532h/dYTJhFxoJL6Yaka0zqCbQd/9MVyBYP1m2aXMCiQaar2jFKY
	rMhqo7g8TCLMJPjrwRucMKU=
X-Google-Smtp-Source: ABdhPJwxVFI67Tc3W19dlbyZNc1e3oxlbpVUX3LqDmZg7zTCXURGdjpwQ/MpJv+vQ6Drwqm86KEn+w==
X-Received: by 2002:a6b:7413:: with SMTP id s19mr32097168iog.151.1620953848097;
        Thu, 13 May 2021 17:57:28 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: Connor Davis <connojdavis@gmail.com>,
	Jann Horn <jannh@google.com>,
	Lee Jones <lee.jones@linaro.org>,
	Chunfeng Yun <chunfeng.yun@mediatek.com>,
	linux-usb@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 1/4] usb: early: Avoid using DbC if already enabled
Date: Thu, 13 May 2021 18:56:48 -0600
Message-Id: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620950220.git.connojdavis@gmail.com>
References: <cover.1620950220.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Check if the debug capability is enabled in early_xdbc_parse_parameter,
and if it is, return with an error. This avoids collisions with whatever
enabled the DbC prior to Linux starting.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 drivers/usb/early/xhci-dbc.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
index be4ecbabdd58..ca67fddc2d36 100644
--- a/drivers/usb/early/xhci-dbc.c
+++ b/drivers/usb/early/xhci-dbc.c
@@ -642,6 +642,16 @@ int __init early_xdbc_parse_parameter(char *s)
 	}
 	xdbc.xdbc_reg = (struct xdbc_regs __iomem *)(xdbc.xhci_base + offset);
 
+	if (readl(&xdbc.xdbc_reg->control) & CTRL_DBC_ENABLE) {
+		pr_notice("xhci debug capability already in use\n");
+		early_iounmap(xdbc.xhci_base, xdbc.xhci_length);
+		xdbc.xdbc_reg = NULL;
+		xdbc.xhci_base = NULL;
+		xdbc.xhci_length = 0;
+
+		return -ENODEV;
+	}
+
 	return 0;
 }
 
-- 
2.31.1
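[Editor's note: the check added above follows a common pattern — probe an MMIO status bit and refuse to claim hardware that another agent already owns. The sketch below illustrates only the value mapping; CTRL_DBC_ENABLE's bit position and the register layout are stand-ins, not the real xHCI DbC definitions.]

```c
/* Sketch: bail out of early DbC setup if the capability is already
 * enabled (e.g. by a hypervisor or firmware), mirroring the -ENODEV
 * path added to early_xdbc_parse_parameter. */
#include <stdint.h>
#include <errno.h>

#define CTRL_DBC_ENABLE (1u << 31)   /* illustrative bit position */

/* Returns 0 if the DbC is free to claim, -ENODEV if it is in use. */
static int xdbc_try_claim(const uint32_t *control_reg)
{
    if (*control_reg & CTRL_DBC_ENABLE)
        return -ENODEV;   /* someone else is driving the DbC */
    return 0;
}
```

In the real patch the failure path also unmaps the xHCI MMIO region and clears the cached base/length before returning.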



From xen-devel-bounces@lists.xenproject.org Fri May 14 00:57:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 00:57:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127116.238823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhM8t-0004p4-FI; Fri, 14 May 2021 00:57:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127116.238823; Fri, 14 May 2021 00:57:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhM8t-0004ox-CB; Fri, 14 May 2021 00:57:27 +0000
Received: by outflank-mailman (input) for mailman id 127116;
 Fri, 14 May 2021 00:57:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhM8s-0004or-C2
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 00:57:26 +0000
Received: from mail-io1-xd2e.google.com (unknown [2607:f8b0:4864:20::d2e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c44bed0-55f6-4d98-b88c-2f63eb0ebddd;
 Fri, 14 May 2021 00:57:25 +0000 (UTC)
Received: by mail-io1-xd2e.google.com with SMTP id d11so6153610iod.5
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 17:57:25 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g25sm1981538ion.32.2021.05.13.17.57.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 17:57:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c44bed0-55f6-4d98-b88c-2f63eb0ebddd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=o3wEUqD5QhLS2UhtL5j7UqBMamQk3HhwXmumgBPx1P8=;
        b=hcN6w+BypiE5zN1KXHBF4f9O0oom+Q2j0Uo1g9Gd/LiXSvBT4gUXeolwGYSAMv61Su
         GmZhr+7jNEd20bCPrT+SsrfZ53U+zbwlqr9z2QTnpm8oEEZ7Nm9Qdk/PvNgh9C7+rJLt
         DKcHhb1MgYUN08Ued/nwzuZ/lX7vaizEBXJOhGRm7vNn5/CBzEcJTOTBbnG3UF6pQOaR
         EVBLUd1K9yVT/2JunLG+yac246uWjJwWcrg4XgHvPrZmIYHHSwifnwzhD4cXcJyaswir
         zgg/SC0mFliMPn4HB7kfRUC/QydemZzgfZpwza4MT6selUltS0uKSrMJLq61uj0CJSti
         +Lkg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=o3wEUqD5QhLS2UhtL5j7UqBMamQk3HhwXmumgBPx1P8=;
        b=TxwNF4kU4+PQ8bUFFPIGFaAncnxEdKASnk1hfdHbDWhk7zKCJHOB2ZG0i9qt4fqAiE
         hGJ/6I6Pp/S5M1/F7kTQtwyj8pEmEL2er+PoCQL+0gGnQ/ZtQwRhlLJpOWUds/bu2dHN
         C80/nCTgtjcOUyld/6d6Hgg5KDxgLk+CCsHDlF+rJL5SIzD75nuqBEIctE3reiP01aon
         8h7j8yDz9y8XvN5wVEaEL6MfyJy/ukfeCD7/sz0cBM/W+pyoQxsRZUw/+x0/sL4mW1Aa
         YbuVrEoqzcoxhsKXeEPRoMqlpBiSuy1UlSpvlDnUUtJbvjiWccVzbmXTpWktEiHf+H4o
         LeFQ==
X-Gm-Message-State: AOAM531fLMEbZGLwNRhpCz/ZRswFrc/Rv9wpUr0hFte9gdTQ2Q+o5597
	Vraksbfbp1JZ1LAGaQbc1tc=
X-Google-Smtp-Source: ABdhPJyB4KylG8BajfWzW2sICVndWhPHbf+Sta1dYqbL8ii+leuCVgYktVAW4oFkpqaC0W9obhMlLA==
X-Received: by 2002:a5d:89c5:: with SMTP id a5mr33586495iot.172.1620953845046;
        Thu, 13 May 2021 17:57:25 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: linux-kernel@vger.kernel.org
Cc: Connor Davis <connojdavis@gmail.com>,
	linux-usb@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Lee Jones <lee.jones@linaro.org>,
	Jann Horn <jannh@google.com>,
	Chunfeng Yun <chunfeng.yun@mediatek.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Mathias Nyman <mathias.nyman@intel.com>,
	Douglas Anderson <dianders@chromium.org>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Petr Mladek <pmladek@suse.com>,
	Sumit Garg <sumit.garg@linaro.org>
Subject: [PATCH v2 0/4] Support xen-driven USB3 debug capability
Date: Thu, 13 May 2021 18:56:47 -0600
Message-Id: <cover.1620950220.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

The goal of this series is to allow the USB3 debug capability (DbC) to be
used safely by Xen while Linux runs as dom0.

The first patch prevents the early DbC driver from using an
already-running DbC.

The second exports xen_dbgp_reset_prep and xen_dbgp_external_startup
functions when CONFIG_XEN_DOM0 is enabled so they may be used by the
xHCI driver.

The third ensures that xen_dbgp_reset_prep/xen_dbgp_external_startup
return consistent values in failure cases. This inconsistency illustrated
another issue: dbgp_reset_prep returned the value of xen_dbgp_reset_prep
if it was nonzero, but callers of dbgp_reset_prep interpret nonzero
as "keep using the debug port" and would eventually (needlessly) call
dbgp_external_startup. Patch three _should_ fix this issue, but
unfortunately I don't have any EHCI hardware available to test with.

The last uses the xen_dbgp_* functions to notify xen of unsafe periods
(e.g. reset and D3hot transition).

Thanks,
Connor

--
Changes since v1:
 - Added patch for dbgp return value fixes
 - Return -EPERM when !xen_initial_domain() in xen_dbgp_op
 - Moved #ifdef-ary out of xhci.c into xhci-dbgcap.h

--
Connor Davis (4):
  usb: early: Avoid using DbC if already enabled
  xen: Export dbgp functions when CONFIG_XEN_DOM0 is enabled
  usb: dbgp: Fix return values for reset prep and startup
  usb: xhci: Notify xen when DbC is unsafe to use

 drivers/usb/early/ehci-dbgp.c  |  9 ++++---
 drivers/usb/early/xhci-dbc.c   | 10 ++++++++
 drivers/usb/host/xhci-dbgcap.h | 19 ++++++++++++++
 drivers/usb/host/xhci.c        | 47 ++++++++++++++++++++++++++++++++++
 drivers/usb/host/xhci.h        |  1 +
 drivers/xen/dbgp.c             |  4 +--
 include/linux/usb/ehci-dbgp.h  | 14 ++++++----
 7 files changed, 94 insertions(+), 10 deletions(-)


base-commit: 88b06399c9c766c283e070b022b5ceafa4f63f19
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 00:57:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 00:57:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127119.238859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhM99-0005nO-AR; Fri, 14 May 2021 00:57:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127119.238859; Fri, 14 May 2021 00:57:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhM99-0005nD-4l; Fri, 14 May 2021 00:57:43 +0000
Received: by outflank-mailman (input) for mailman id 127119;
 Fri, 14 May 2021 00:57:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhM97-0004or-7a
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 00:57:41 +0000
Received: from mail-io1-xd2a.google.com (unknown [2607:f8b0:4864:20::d2a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44082472-4656-4a6d-84d0-b4f193f1ea8e;
 Fri, 14 May 2021 00:57:31 +0000 (UTC)
Received: by mail-io1-xd2a.google.com with SMTP id d11so6153836iod.5
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 17:57:31 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g25sm1981538ion.32.2021.05.13.17.57.30
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 17:57:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44082472-4656-4a6d-84d0-b4f193f1ea8e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=kaolo11x7BDn/0QfbTzKyesjjtUEBXBeIP8y1OlvuS8=;
        b=oNgcz//ZDfFesKvkG1DPiB9mV5Cjp3q09B7rZg/T1l5DnxEicJF0uHcIatOkszgULL
         SQEJkOvXso5TLe2DSdp1mnCW64dlGG6ReyWh0c7IIHMw9cwQsc362Qp27R2rjjO8CVQd
         Hsw6El7hP5PO+qzVt53ggMKbyvifV+phwY4TkxOVrtqS6hgLNDvZV6mBSbUARwiLfufR
         Ly6yjfKaiELXc2e4+vGzFYHJ0ZIMTafqZLtowv+yMlm4zsWPw3g2DEpcIKeDfUdmlAhM
         CdJOXn8FA9S3rpcCWGZkKyU4k5ld6wYJ5P997dkD5hYRHf17+oUiV7WuTWVcVh/IGXQM
         RjMQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=kaolo11x7BDn/0QfbTzKyesjjtUEBXBeIP8y1OlvuS8=;
        b=pe2bZeeiyWKawuIFSeIHAhu/NIyhFPeWGdoF4B/vDbL6TyYiCzyC88iWqhc45koHAa
         6Km9gyKoPnEcu2jYOLDdlZduvi3kBGH8xyUjLzkA29LlFwdiFZl3AUxPHCUQSK6pTnG2
         nd/lQEddadNF+Le/BM9VAzF/+2nSfL4Zd+mjBOdl8HAiZ8rJ7wlddMT6moFlK1hxJef2
         HzNGvFxq8IhGjo1DaRKGw+WDp7H7LsHLD6b1svshqDVhjn8NIuxLiuFHHHLXO3lUAwgE
         HyVjkVjgVxiYVjZBx2G4mfDaGCsC4NHD6G7V0bEH37Aks/5/4m+Zq56CHeU9wwa7zELf
         y9HA==
X-Gm-Message-State: AOAM531IYC2JZBZg/75ciyR6SSa+YUAHkMxCycwzJXG2/fPDMjtVc5Mq
	PhlLz3qvEDqUMJHH+RJ7/IY=
X-Google-Smtp-Source: ABdhPJygkbUw3fnDxhQdUp2/4URvzqUJF2PZVWTCLi/wLAoCtayuWSrLwm2fsyOwJJpdijs/Hs4iAg==
X-Received: by 2002:a05:6638:3013:: with SMTP id r19mr41336649jak.36.1620953851491;
        Thu, 13 May 2021 17:57:31 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>
Cc: Connor Davis <connojdavis@gmail.com>,
	Douglas Anderson <dianders@chromium.org>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Chunfeng Yun <chunfeng.yun@mediatek.com>,
	Petr Mladek <pmladek@suse.com>,
	Sumit Garg <sumit.garg@linaro.org>,
	Lee Jones <lee.jones@linaro.org>,
	linux-usb@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH v2 3/4] usb: dbgp: Fix return values for reset prep and startup
Date: Thu, 13 May 2021 18:56:50 -0600
Message-Id: <0010a6165f3560f16123a142d297276e7d6c2087.1620952511.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
References: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Callers of dbgp_reset_prep treat a 0 return value as "stop using
the debug port", which means they don't make any subsequent calls to
dbgp_reset_prep or dbgp_external_startup.

To ensure the callers' interpretation is correct, first return -EPERM
from xen_dbgp_op if !xen_initial_domain(). This ensures that
both xen_dbgp_reset_prep and xen_dbgp_external_startup return 0
iff the PHYSDEVOP_DBGP_RESET_{PREPARE,DONE} hypercalls succeed. Also
update xen_dbgp_reset_prep and xen_dbgp_external_startup to return
-EPERM when !CONFIG_XEN_DOM0 for consistency.

Next, return nonzero from dbgp_reset_prep if xen_dbgp_reset_prep returns
0. The nonzero value ensures that callers of dbgp_reset_prep will
subsequently call dbgp_external_startup when it is safe to do so.

Also invert the return values from dbgp_external_startup for
consistency with dbgp_reset_prep (this inversion has no functional
change since no callers actually check the value).

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 drivers/usb/early/ehci-dbgp.c |  9 ++++++---
 drivers/xen/dbgp.c            |  2 +-
 include/linux/usb/ehci-dbgp.h | 14 +++++++++-----
 3 files changed, 16 insertions(+), 9 deletions(-)

diff --git a/drivers/usb/early/ehci-dbgp.c b/drivers/usb/early/ehci-dbgp.c
index 45b42d8f6453..ff993d330c01 100644
--- a/drivers/usb/early/ehci-dbgp.c
+++ b/drivers/usb/early/ehci-dbgp.c
@@ -970,8 +970,8 @@ int dbgp_reset_prep(struct usb_hcd *hcd)
 	int ret = xen_dbgp_reset_prep(hcd);
 	u32 ctrl;
 
-	if (ret)
-		return ret;
+	if (!ret)
+		return 1;
 
 	dbgp_not_safe = 1;
 	if (!ehci_debug)
@@ -995,7 +995,10 @@ EXPORT_SYMBOL_GPL(dbgp_reset_prep);
 
 int dbgp_external_startup(struct usb_hcd *hcd)
 {
-	return xen_dbgp_external_startup(hcd) ?: _dbgp_external_startup();
+	if (!xen_dbgp_external_startup(hcd))
+		return 1;
+
+	return !_dbgp_external_startup();
 }
 EXPORT_SYMBOL_GPL(dbgp_external_startup);
 #endif /* USB */
diff --git a/drivers/xen/dbgp.c b/drivers/xen/dbgp.c
index fef32dd1a5dc..d54f98380e6f 100644
--- a/drivers/xen/dbgp.c
+++ b/drivers/xen/dbgp.c
@@ -15,7 +15,7 @@ static int xen_dbgp_op(struct usb_hcd *hcd, int op)
 	struct physdev_dbgp_op dbgp;
 
 	if (!xen_initial_domain())
-		return 0;
+		return -EPERM;
 
 	dbgp.op = op;
 
diff --git a/include/linux/usb/ehci-dbgp.h b/include/linux/usb/ehci-dbgp.h
index 62ab3805172d..c0e98557efc0 100644
--- a/include/linux/usb/ehci-dbgp.h
+++ b/include/linux/usb/ehci-dbgp.h
@@ -56,28 +56,32 @@ extern int xen_dbgp_external_startup(struct usb_hcd *);
 #else
 static inline int xen_dbgp_reset_prep(struct usb_hcd *hcd)
 {
-	return 1; /* Shouldn't this be 0? */
+	return -EPERM;
 }
 
 static inline int xen_dbgp_external_startup(struct usb_hcd *hcd)
 {
-	return -1;
+	return -EPERM;
 }
 #endif
 
 #ifdef CONFIG_EARLY_PRINTK_DBGP
-/* Call backs from ehci host driver to ehci debug driver */
+/*
+ * Call backs from ehci host driver to ehci debug driver.
+ * Returns 0 if the debug port should stop being used,
+ * nonzero otherwise.
+ */
 extern int dbgp_external_startup(struct usb_hcd *);
 extern int dbgp_reset_prep(struct usb_hcd *);
 #else
 static inline int dbgp_reset_prep(struct usb_hcd *hcd)
 {
-	return xen_dbgp_reset_prep(hcd);
+	return !xen_dbgp_reset_prep(hcd);
 }
 
 static inline int dbgp_external_startup(struct usb_hcd *hcd)
 {
-	return xen_dbgp_external_startup(hcd);
+	return !xen_dbgp_external_startup(hcd);
 }
 #endif
 
-- 
2.31.1
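[Editor's note: the return-value plumbing fixed by patch 3 is easy to get backwards, so here is a minimal sketch of the corrected convention. The *_stub functions are simplifications standing in for xen_dbgp_reset_prep and dbgp_reset_prep; the native EHCI path is elided and the initial_domain flag replaces xen_initial_domain().]

```c
/* Convention after the patch: callers of dbgp_reset_prep() treat 0 as
 * "stop using the debug port" and nonzero as "keep using it", while
 * the xen_* helpers return 0 only when the hypercall succeeded. */
#include <stdbool.h>
#include <errno.h>

static int xen_dbgp_reset_prep_stub(bool initial_domain)
{
    if (!initial_domain)
        return -EPERM;   /* patch 3: was 0, which read as "success" */
    return 0;            /* PHYSDEVOP_DBGP_RESET_PREPARE succeeded */
}

static int dbgp_reset_prep_stub(bool initial_domain)
{
    if (!xen_dbgp_reset_prep_stub(initial_domain))
        return 1;   /* Xen owns the port: caller should keep using it */
    return 0;       /* fall through to the native EHCI path (elided),
                     * simplified here to "stop using the port" */
}
```

The point is the inversion: a 0 from the Xen helper (hypercall success) must surface to callers as nonzero, so that they later call dbgp_external_startup when it is safe.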



From xen-devel-bounces@lists.xenproject.org Fri May 14 01:04:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 01:04:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127137.238871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhMG0-0005ON-1y; Fri, 14 May 2021 01:04:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127137.238871; Fri, 14 May 2021 01:04:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhMFz-0005OG-Ui; Fri, 14 May 2021 01:04:47 +0000
Received: by outflank-mailman (input) for mailman id 127137;
 Fri, 14 May 2021 01:04:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=saLk=KJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lhMFy-0005OA-KC
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 01:04:46 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2364f62-8199-4894-a393-ec3e80e73e7a;
 Fri, 14 May 2021 01:04:46 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 08824613B5;
 Fri, 14 May 2021 01:04:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2364f62-8199-4894-a393-ec3e80e73e7a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1620954285;
	bh=h8ak1LFNpEuvvrnEcyltA6NYwXM3pDzQJjkY5k1zegM=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pAWeeTTmkOE7ea830IPQIJN+SBeYpUarDv5wIeY1DmCdaSw0O85yDIUSYfmeP7DQT
	 Vo29uoYNHQNZD1dyinBccn/fVUttftU9ivrf8zRuJTOaVwd41rTt5EevpbywmtiTr+
	 eyKomCdFpN/L9N+ZvYyo3FMln03dduTzWCOXwLj6hGDqswg+puRh82FQNy6HU+bfX+
	 X0MkAU66eTKidHnrcFOK50e44ASsNrfpl6PmaZvviatrqK9XbPmAAV0skBc3klBSTS
	 T7uEBjgoYlOSLTyG9gl9brl9zAgHJvxbL4ggl7OPtnnGqpoWCGx0DRbMff/PCndPuw
	 6xcL+83MyXaKw==
Date: Thu, 13 May 2021 18:04:44 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien.grall.oss@gmail.com>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel <xen-devel@lists.xenproject.org>, Wei Chen <Wei.Chen@arm.com>, 
    Henry.Wang@arm.com, Penny.Zheng@arm.com, 
    Bertrand Marquis <Bertrand.Marquis@arm.com>, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 08/15] xen/arm32: mm: Check if the virtual address
 is shared before updating it
In-Reply-To: <CAJ=z9a18zq06AKTGRJHHzR1JeabdO+-FKTmu77ZmdPdQQi=NMA@mail.gmail.com>
Message-ID: <alpine.DEB.2.21.2105131801290.5018@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-9-julien@xen.org> <alpine.DEB.2.21.2105121448090.5018@sstabellini-ThinkPad-T480s> <caec9741-8c0e-b80a-1020-c985beb1e100@xen.org> <alpine.DEB.2.21.2105131528230.5018@sstabellini-ThinkPad-T480s>
 <CAJ=z9a18zq06AKTGRJHHzR1JeabdO+-FKTmu77ZmdPdQQi=NMA@mail.gmail.com>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-835495777-1620954177=:5018"
Content-ID: <alpine.DEB.2.21.2105131803320.5018@sstabellini-ThinkPad-T480s>

  This message is in MIME format.  The first part should be readable text,
  while the remaining parts are likely unreadable without MIME-aware tools.

--8323329-835495777-1620954177=:5018
Content-Type: text/plain; CHARSET=UTF-8
Content-Transfer-Encoding: 8BIT
Content-ID: <alpine.DEB.2.21.2105131803321.5018@sstabellini-ThinkPad-T480s>

On Thu, 13 May 2021, Julien Grall wrote:
> On Thu, 13 May 2021, 23:32 Stefano Stabellini, <sstabellini@kernel.org> wrote:
>       On Wed, 12 May 2021, Julien Grall wrote:
>       > Hi Stefano,
>       >
>       > On 12/05/2021 23:00, Stefano Stabellini wrote:
>       > > On Sun, 25 Apr 2021, Julien Grall wrote:
>       > > > From: Julien Grall <jgrall@amazon.com>
>       > > >
>       > > > Only the first 2GB of the virtual address space is shared between all
>       > > > the page-tables on Arm32.
>       > > >
>       > > > There is a long outstanding TODO in xen_pt_update() stating that the
>       > > > function is can only work with shared mapping. Nobody has ever called
>       > >             ^ remove
>       > >
>       > > > the function with private mapping, however as we add more callers
>       > > > there is a risk to mess things up.
>       > > >
>       > > > Introduce a new define to mark the ened of the shared mappings and use
>       > >                                       ^end
>       > >
>       > > > it in xen_pt_update() to verify if the address is correct.
>       > > >
>       > > > Note that on Arm64, all the mappings are shared. Some compiler may
>       > > > complain about an always true check, so the new define is not introduced
>       > > > for arm64 and the code is protected with an #ifdef.
>       > >   On arm64 we could maybe define SHARED_VIRT_END to an arbitrarily large
>       > > value, such as:
>       > >
>       > > #define SHARED_VIRT_END (1UL<<48)
>       > >
>       > > or:
>       > >
>       > > #define SHARED_VIRT_END DIRECTMAP_VIRT_END
>       > >
>       > > ?
>       >
>       > I thought about it but I didn't want to define it to a random value... I felt
>       > not defining it was better.
> 
>       Yeah, I see your point: any restrictions in addressing (e.g. 48bits)
>       are physical address restrictions. Here we are talking about virtual
>       address restriction, and I don't think there are actually any
>       restrictions there?  We could validly map something at
>       0xffff_ffff_ffff_ffff. So even (1<<48) which makes sense at the physical
>       level, doesn't make sense in terms of virtual addresses.
> 
> 
> The limit for the virtual address is 2^64.
> 
> 
> 
>       > > > Signed-off-by: Julien Grall <jgrall@amazon.com>
>       > > >
>       > > > ---
>       > > >      Changes in v2:
>       > > >          - New patch
>       > > > ---
>       > > >   xen/arch/arm/mm.c            | 11 +++++++++--
>       > > >   xen/include/asm-arm/config.h |  4 ++++
>       > > >   2 files changed, 13 insertions(+), 2 deletions(-)
>       > > >
>       > > > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>       > > > index 8fac24d80086..5c17cafff847 100644
>       > > > --- a/xen/arch/arm/mm.c
>       > > > +++ b/xen/arch/arm/mm.c
>       > > > @@ -1275,11 +1275,18 @@ static int xen_pt_update(unsigned long virt,
>       > > >        * For arm32, page-tables are different on each CPUs. Yet, they
>       > > > share
>       > > >        * some common mappings. It is assumed that only common mappings
>       > > >        * will be modified with this function.
>       > > > -     *
>       > > > -     * XXX: Add a check.
>       > > >        */
>       > > >       const mfn_t root = virt_to_mfn(THIS_CPU_PGTABLE);
>       > > >   +#ifdef SHARED_VIRT_END
>       > > > +    if ( virt > SHARED_VIRT_END ||
>       > > > +         (SHARED_VIRT_END - virt) < nr_mfns )
>       > >
>       > > The following would be sufficient, right?
>       > >
>       > >      if ( virt + nr_mfns > SHARED_VIRT_END )
>       >
>       > This would not protect against an overflow. So I think it is best if we keep
>       > my version.
> 
>       But there can be no overflow with the way SHARED_VIRT_END is defined.
> 
>       Even if SHARED_VIRT_END was defined at (1<<48) there can be no overflow.
>       Only if we defined SHARED_VIRT_END as 0xffff_ffff_ffff_ffff we would
>       have an overflow, but you wrote above that your preference is not to do
>       that.
> 
> 
> You can have an overflow regardless of the value of SHARED_VIRT_END.
> 
> Imagine virt = 2^64 - 1 and nr_mfns = 1. The addition would result in 0.
> 
> As a consequence the check would pass when it should not.

Yes, you are right; I don't know how I missed it!


> One can argue that the caller will always provide sane values. However, given how
> cheap the check is, it is worth guarding against a buggy caller.
> 
> Now, the problem with SHARED_VIRT_END equal to 2^64 - 1 is not the overflow but the compiler, which may throw an
> error/warning for an always-true check. Hence the reason for not defining SHARED_VIRT_END on arm64.

OK, all checks out.

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>
--8323329-835495777-1620954177=:5018--


From xen-devel-bounces@lists.xenproject.org Fri May 14 02:20:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 02:20:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127149.238883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhNR9-0004mn-NM; Fri, 14 May 2021 02:20:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127149.238883; Fri, 14 May 2021 02:20:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhNR9-0004mf-HF; Fri, 14 May 2021 02:20:23 +0000
Received: by outflank-mailman (input) for mailman id 127149;
 Fri, 14 May 2021 02:20:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhNR8-0004mV-Eh; Fri, 14 May 2021 02:20:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhNR8-0005cr-8s; Fri, 14 May 2021 02:20:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhNR7-0005hd-Uo; Fri, 14 May 2021 02:20:22 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhNR7-0006FF-QK; Fri, 14 May 2021 02:20:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X7eviHk0VE/fTxzCSfJm+FyNP0ZBStdWRLyVAXMDG5A=; b=G/DFoOq2ZLSXTu7jthCM1VOeZO
	ZziQXQH8A3WMjOFZiDwR5teVq5IBViUJDLjqZlFdqoEQ3UjrKtz1lh73eCU6ye3X85Gb57rxUlUGW
	rBpRGfuLGZCSwiAy0iqluNyrjkIpUXKewxYQrSYNzYReGM2/kaz3vsnhMSMRL2nHfRIY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161938-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161938: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=dab59ce031228066eb95a9c518846fcacfb0dbbf
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 02:20:21 +0000

flight 161938 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161938/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                dab59ce031228066eb95a9c518846fcacfb0dbbf
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  266 days
Failing since        152659  2020-08-21 14:07:39 Z  265 days  485 attempts
Testing same since   161938  2021-05-13 18:40:23 Z    0 days    1 attempts

------------------------------------------------------------
499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 151427 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 14 02:42:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 02:42:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127155.238900 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhNmi-00078B-Gv; Fri, 14 May 2021 02:42:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127155.238900; Fri, 14 May 2021 02:42:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhNmi-000784-DD; Fri, 14 May 2021 02:42:40 +0000
Received: by outflank-mailman (input) for mailman id 127155;
 Fri, 14 May 2021 02:42:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rjOs=KJ=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lhNmh-00077y-4S
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 02:42:39 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3cd4390e-c50a-4cd4-835b-171f2d4b7675;
 Fri, 14 May 2021 02:42:37 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 14E2gSm8073818
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Thu, 13 May 2021 22:42:34 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 14E2gSQW073817;
 Thu, 13 May 2021 19:42:28 -0700 (PDT) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cd4390e-c50a-4cd4-835b-171f2d4b7675
Date: Thu, 13 May 2021 19:42:28 -0700
From: Elliott Mitchell <ehem+undef@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Roger Pau Monn?? <royger@freebsd.org>,
        Mitchell Horne <mhorne@freebsd.org>
Subject: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
Message-ID: <YJ3jlGSxs60Io+dp@mattapan.m5p.com>
References: <YIhSbkfShjN/gMCe@Air-de-Roger>
 <YIndyh0sRqcmcMim@mattapan.m5p.com>
 <YIptpndhk6MOJFod@Air-de-Roger>
 <YItwHirnih6iUtRS@mattapan.m5p.com>
 <YIu80FNQHKS3+jVN@Air-de-Roger>
 <YJDcDjjgCsQUdsZ7@mattapan.m5p.com>
 <YJURGaqAVBSYnMRf@Air-de-Roger>
 <YJYem5CW/97k/e5A@mattapan.m5p.com>
 <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <54427968-9b13-36e6-0001-27fb49f85635@xen.org>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.5
X-Spam-Checker-Version: SpamAssassin 3.4.5 (2021-03-20) on mattapan.m5p.com

Upon thinking about it, this seems appropriate to bring to the attention
of the Xen development list since it seems to have wider implications.


On Wed, May 12, 2021 at 11:08:39AM +0100, Julien Grall wrote:
> On 12/05/2021 03:37, Elliott Mitchell wrote:
> > 
> > What about the approach to the grant-table/xenpv memory situation?
> > 
> > As stated for a 768MB VM, Xen suggested a 16MB range.  I'm unsure whether
> > that is strictly meant for grant-table use or is meant for any foreign
> > memory mappings (Julien?).
> 
> An OS is free to use it as it wants. However, there is no promise that:
>    1) The region will not shrink
>    2) The region will stay where it is

The issue is: what is the intended use of the memory range allocated to
/hypervisor in the device-tree on ARM?  What do the Xen developers plan
for?  What is expected?


With FreeBSD, Julien Grall's attempt 5 years ago at getting Xen/ARM
support treated the grant table as distinct from other foreign memory
mappings.  Yet for the current code (which is oriented towards x86) it is
rather easier to treat all foreign mappings the same.

Limiting foreign mappings to a total of 16MB for a 768MB domain is tight.
Was the /hypervisor range intended *strictly* for mapping grant-tables?
Was it intended for the /hypervisor range to dynamically scale with the
size of the domain?  Was it intended for /hypervisor to grow over the
years as hardware got cheaper?

Might it be better to deprecate the /hypervisor range and have domains
allocate any available address space for foreign mappings?

Should the FreeBSD implementation be treating grant tables as distinct
from other foreign mappings?  (is treating them the same likely to
induce buggy behavior on x86?)


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri May 14 03:25:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 03:25:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127160.238915 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhORY-0002u2-RJ; Fri, 14 May 2021 03:24:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127160.238915; Fri, 14 May 2021 03:24:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhORY-0002tv-NJ; Fri, 14 May 2021 03:24:52 +0000
Received: by outflank-mailman (input) for mailman id 127160;
 Fri, 14 May 2021 03:24:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhORX-0002tn-Ad
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 03:24:51 +0000
Received: from mail-il1-x131.google.com (unknown [2607:f8b0:4864:20::131])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d585e6ba-f43b-487b-b9e5-c51e5fa812d1;
 Fri, 14 May 2021 03:24:50 +0000 (UTC)
Received: by mail-il1-x131.google.com with SMTP id j12so24759098ils.4
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 20:24:50 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id d2sm2412666ile.18.2021.05.13.20.24.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 13 May 2021 20:24:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d585e6ba-f43b-487b-b9e5-c51e5fa812d1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=rjEVrWO3moq8rpuZa0b0GYhpCLgyhm8VlqMw+Ea4qUc=;
        b=vgTI8EkZ7AKRgt7SnHw1F4ELlQWM/xswnMWiCyu5J1CHZNdhbNAQPUNjIVx42VHXpJ
         FrUrV0LAcDEL3QfZaUxezVPEU+2y/eTLThxTcICb4JZFBSvw6Jg+pNFybLi57CFRoiwf
         1+XGBR0qqfNz025ZE5TwpTPPAglGbRgZshdoTkOyM6ilMYhoFCbINBoRboDZaiCDAyV/
         pmuFXGYUt50VhlOauUXKoM7179ZJ8LVHXPMnWAILSvsqPXdIEg2XbEl40X/nWLsPGYRY
         SFZXUjih3/WWNi0mASJUZOjYN2J8JJxwhP3lWXqlksynNng74sX1OylVg6Obdwfu6ZEi
         5UYA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=rjEVrWO3moq8rpuZa0b0GYhpCLgyhm8VlqMw+Ea4qUc=;
        b=hAOm+jlYCMLWVsV2O4wuz51DIB8yuPD5cfG27vuRndZWdFJPdA9apgQEjtWE9lpZYp
         CHC7bcHK4CI1okJPeVPJmpw9Vlb48lC731RQpxRl28Iq+y1coQS7MvK2oO6uVwxNbhRJ
         t2nmclqFYN1KvqYDWhK0NkHH3HN6F3klIL85VHZeKQ0c5c19dAwGvTl2wkZI2v7e7WHI
         SR1D/TcOl/1e0+FtOQ71pIXv0gsILwtJEcNCKWs3av4dbOx3v9tZKSUbL180UnnkIyrL
         ubl8UuzpFw79iZbjaLXg4vWtxOAJa6dUpK0TC/MTWk6HzxiE2HgbpIbIU89XKTsg8+BT
         7CMA==
X-Gm-Message-State: AOAM530NvaNaC0tewZXmqsiw1Izkc4XQr+7FchsQwsweON8Ng7C1kR7R
	+oYinTB1d4do7Hn+Q04Iovx6pHMxMtZxsg==
X-Google-Smtp-Source: ABdhPJxy11y1+SFBML7QznyUV8e/TjAncibvLacUI8v4Jbu5NWkYW0SX5D0aPeABVndeJXmhg3rfow==
X-Received: by 2002:a92:6804:: with SMTP id d4mr38570373ilc.5.1620962689992;
        Thu, 13 May 2021 20:24:49 -0700 (PDT)
Subject: Re: [PATCH for-next 5/6] xen: Add files needed for minimal riscv
 build
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, xen-devel@lists.xenproject.org
References: <cover.1614265718.git.connojdavis@gmail.com>
 <7652ce3486c026a3a9f7d850170ea81ba8a18bdb.1614265718.git.connojdavis@gmail.com>
 <84f490e8-7035-565d-4b20-6e46ccc800f2@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <58454a32-12d4-3bf3-f962-887be7bda381@gmail.com>
Date: Thu, 13 May 2021 21:25:01 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <84f490e8-7035-565d-4b20-6e46ccc800f2@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 3/12/21 10:09 AM, Jan Beulich wrote:
> On 25.02.2021 16:24, Connor Davis wrote:
>> --- a/xen/include/public/hvm/save.h
>> +++ b/xen/include/public/hvm/save.h
>> @@ -106,6 +106,8 @@ DECLARE_HVM_SAVE_TYPE(END, 0, struct hvm_save_end);
>>   #include "../arch-x86/hvm/save.h"
>>   #elif defined(__arm__) || defined(__aarch64__)
>>   #include "../arch-arm/hvm/save.h"
>> +#elif defined(__riscv)
>> +#include "../arch-riscv/hvm/save.h"
> Does the compiler not also provide __riscv__? If it does, using it
> here (and elsewhere) would fit better with the existing logic.
>
No, only __riscv is defined.


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Fri May 14 03:41:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 03:41:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127164.238926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhOhl-00057V-6T; Fri, 14 May 2021 03:41:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127164.238926; Fri, 14 May 2021 03:41:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhOhl-00057O-2q; Fri, 14 May 2021 03:41:37 +0000
Received: by outflank-mailman (input) for mailman id 127164;
 Fri, 14 May 2021 03:41:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QDWZ=KJ=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1lhOhj-00057G-Qe
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 03:41:35 +0000
Received: from mail-qt1-x829.google.com (unknown [2607:f8b0:4864:20::829])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f7900ed3-785a-45cb-8455-275a0609307d;
 Fri, 14 May 2021 03:41:35 +0000 (UTC)
Received: by mail-qt1-x829.google.com with SMTP id c10so11586905qtx.10
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 20:41:35 -0700 (PDT)
Received: from walnut.ice.pyrology.org (mobile-166-176-184-32.mycingular.net.
 [166.176.184.32])
 by smtp.gmail.com with ESMTPSA id g15sm3873432qka.49.2021.05.13.20.41.31
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 20:41:34 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7900ed3-785a-45cb-8455-275a0609307d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=A/v1QITXYqG1suB6tX0Jl2pKcl1d0G85XYf1su05dpo=;
        b=Lyuv6a0BPDqNJ/edx+Zud4iNbYYNiU1CMAJ6cmpcG2PHWt476Gjb7a1ASQXO42e2J/
         DGSjcGXeje45z+Yu3YzbZajs8nNiE7VT8xHi0fRz5KpVH0nu5K27Q60cmXxASPcOKFJu
         QsngTOvH2uQuqOcMXFZk+rVVaXz91VwvkxUSxAJmUR9i94pcNTWFT43pXRR3VBnQQyx0
         SCSERhCVNEtc3VGV/NL34VqZPJNWxj5DDUNxI4x120HYH2KEMkYt73wZ/NEdhEd8falV
         HlxPxpQYd6f7Eaj9hZ3J1WfhOjXGGxbMvdnvx077kJ5uBbELpZ/gKsi8BAVv5WNdPQH0
         kEKg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=A/v1QITXYqG1suB6tX0Jl2pKcl1d0G85XYf1su05dpo=;
        b=Q3zx9CEMz0ZTjpmIoqSxFOOezFY8ZO9mfqzVspS7OULIJCeZ2YDm5OR0ckZIMgZ7eK
         WzthWm+68AXk/2/nIay2ku8yzUsFGcxnR8iRHCCobI+X38SjR7IXQ6F5GGJh8VFpcEZV
         F6RaJzWzAjUiMhTCu18UT9VTTUXk+3jrJMfvy8I8XGKrLeANlPLV+6/3Mo/4Ep3TSrf6
         3QCIVtNsxJMCEMEQ4hyFoOVO0DfkmtvkDv2mB5axe8PCAg6EVsFL9+sTpqqIYFwVWmBE
         VtKjW5etHr4TzwPAXBJkrRrpReqFZToJjEKGQmt7aupox9XWYo0SYdtlXn9qodNiannQ
         DVTQ==
X-Gm-Message-State: AOAM532WtKQLuVJh5aEFVXf9L0GN5oVLYK1OiCnVy2Kao05bAc2iQT1T
	gNnUTp5DDHt+S7Y+wKiFYaJu0MD2YvTp4w==
X-Google-Smtp-Source: ABdhPJwX5AYTyjg6nGHF/DXpwwDwO9nYNQY2nIOn5uj0jbRKXhGNHG9fGOpLTKEyYv1Xya7FizR8TQ==
X-Received: by 2002:a05:622a:15c9:: with SMTP id d9mr24643186qty.103.1620963694737;
        Thu, 13 May 2021 20:41:34 -0700 (PDT)
From: Christopher Clark <christopher.w.clark@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: dpsmith@apertussolutions.com,
	andrew.cooper3@citrix.com,
	stefano.stabellini@xilinx.com,
	jgrall@amazon.com,
	Julien.grall.oss@gmail.com,
	iwj@xenproject.org,
	wl@xen.org,
	george.dunlap@citrix.com,
	jbeulich@suse.com,
	persaur@gmail.com,
	Bertrand.Marquis@arm.com,
	roger.pau@citrix.com,
	luca.fancellu@arm.com,
	paul@xen.org,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io,
	Christopher Clark <christopher.clark@starlab.io>
Subject: [PATCH v4 0/2] Introducing Hyperlaunch capability design (formerly: DomB mode of dom0less)
Date: Thu, 13 May 2021 20:40:59 -0700
Message-Id: <20210514034101.3683-1-christopher.w.clark@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

We are submitting for inclusion in the Xen documentation:

- the Hyperlaunch design document, and
- the Hyperlaunch device tree design document

to describe a new method for launching the Xen hypervisor.

The Hyperlaunch feature builds upon prior dom0less work to bring a
flexible and security-minded means to launch a variety of VM
configurations as part of the startup of Xen.

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>


Daniel P. Smith (2):
  docs/designs/launch: hyperlaunch design document
  docs/designs/launch: hyperlaunch device tree

 .../designs/launch/hyperlaunch-devicetree.rst |  343 ++++++
 docs/designs/launch/hyperlaunch.rst           | 1004 +++++++++++++++++
 2 files changed, 1347 insertions(+)
 create mode 100644 docs/designs/launch/hyperlaunch-devicetree.rst
 create mode 100644 docs/designs/launch/hyperlaunch.rst

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 03:41:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 03:41:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127165.238937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhOhw-0005SC-JP; Fri, 14 May 2021 03:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127165.238937; Fri, 14 May 2021 03:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhOhw-0005S5-Fp; Fri, 14 May 2021 03:41:48 +0000
Received: by outflank-mailman (input) for mailman id 127165;
 Fri, 14 May 2021 03:41:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QDWZ=KJ=gmail.com=christopher.w.clark@srs-us1.protection.inumbo.net>)
 id 1lhOhv-0005RK-B3
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 03:41:47 +0000
Received: from mail-qv1-xf2b.google.com (unknown [2607:f8b0:4864:20::f2b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e17bf28-b5b9-49c2-a70b-5c049ad0651c;
 Fri, 14 May 2021 03:41:44 +0000 (UTC)
Received: by mail-qv1-xf2b.google.com with SMTP id o59so1668297qva.1
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 20:41:44 -0700 (PDT)
Received: from walnut.ice.pyrology.org (mobile-166-176-184-32.mycingular.net.
 [166.176.184.32])
 by smtp.gmail.com with ESMTPSA id g15sm3873432qka.49.2021.05.13.20.41.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 20:41:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e17bf28-b5b9-49c2-a70b-5c049ad0651c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=jEFwomjUiAfu63gnJ/imoReC+k3WHkjDiN31O5bV5mw=;
        b=U23Lw0L4zuv6T0JwdydBeLFOvmkhIhzsHiBWLJZl7aV+2kDDPQXxaFDMJ69PIyclnT
         TnQabamvfZuexf4k0IazbVV4Tm7QddIxcxZ/hX+C42VO2jOWlarZjYsbOlWGP4qZsU1Q
         OQOOt5biRX9w1K1K9LsYzuaKoKmr1bESM5HPF4gUalVzhkX3MaNyPDav3s4Mksi2MMrL
         1vYE8Yw0xYlMQ116M2oIKM3UYboFNgC3vVA8t5t6QY0til66uB2WYgoTdHZ8MV9nko+p
         k/xl17af32QOfWIott8e2NoeblRkNIfFsvNMTVvhPFx8crgBCHrkNq06hsuX3pOITuvW
         qQUg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=jEFwomjUiAfu63gnJ/imoReC+k3WHkjDiN31O5bV5mw=;
        b=p1HqeffNKcJTujeiczq/fNLVXWynCiQdXf7DFIA2J0BiMI4WRcAKkaL4ZOAgLHttLT
         t5k28GZ9WRLmcIHJI0Mc0ZP6Y37ITkPMGsj5TydLhbw4LB2/BIQ21eSY6itjcQqe8u0Q
         0ozIiRgKBEHtVpNO0KIvD3WCaLko9wJoXCgmQK8WycNxoxN0YU4/UScMkc+n3VSf70bk
         +KMN+bVkriSYuIJHVJ7Ch1XqKZ0glbZUMO5FJyB5rif3OAvmGWMytc3+JcsksWC8SfhQ
         l+TtwWPMWZJ9+tC7eAY3r7FeePOfFCTUl39rD+Pyo6FGSkEs0u8EzJl83vtlDT0zwS5C
         h0Iw==
X-Gm-Message-State: AOAM533Fc2/q4cvv3Zzmpbzt4JaBc6KJkpAPLQFM591lK/vbFn7kMibo
	1/wZIoiCH2kIC7X9JrrttwPb+8nUpfoYYQ==
X-Google-Smtp-Source: ABdhPJx4ZWhqlb8L3Muq/l0fThUfeh/ZjiMw9SGdtD79LKcciTAOPSjzinFLv+7s+hARZpo2UX1j0A==
X-Received: by 2002:a0c:e486:: with SMTP id n6mr43853662qvl.21.1620963703731;
        Thu, 13 May 2021 20:41:43 -0700 (PDT)
From: Christopher Clark <christopher.w.clark@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	andrew.cooper3@citrix.com,
	stefano.stabellini@xilinx.com,
	jgrall@amazon.com,
	Julien.grall.oss@gmail.com,
	iwj@xenproject.org,
	wl@xen.org,
	george.dunlap@citrix.com,
	jbeulich@suse.com,
	persaur@gmail.com,
	Bertrand.Marquis@arm.com,
	roger.pau@citrix.com,
	luca.fancellu@arm.com,
	paul@xen.org,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io,
	Christopher Clark <christopher.clark@starlab.io>
Subject: [PATCH v4 2/2] docs/designs/launch: Hyperlaunch device tree
Date: Thu, 13 May 2021 20:41:01 -0700
Message-Id: <20210514034101.3683-3-christopher.w.clark@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210514034101.3683-1-christopher.w.clark@gmail.com>
References: <20210514034101.3683-1-christopher.w.clark@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: "Daniel P. Smith" <dpsmith@apertussolutions.com>

Adds a design document for the Hyperlaunch device tree structure.

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 .../designs/launch/hyperlaunch-devicetree.rst | 343 ++++++++++++++++++
 1 file changed, 343 insertions(+)
 create mode 100644 docs/designs/launch/hyperlaunch-devicetree.rst

diff --git a/docs/designs/launch/hyperlaunch-devicetree.rst b/docs/designs/launch/hyperlaunch-devicetree.rst
new file mode 100644
index 0000000000..f97d357407
--- /dev/null
+++ b/docs/designs/launch/hyperlaunch-devicetree.rst
@@ -0,0 +1,343 @@
+-------------------------------------
+Xen Hyperlaunch Device Tree Bindings
+-------------------------------------
+
+The Xen Hyperlaunch device tree adopts the dom0less device tree structure and
+extends it to meet the requirements for the Hyperlaunch capability. The primary
+difference is the introduction of the ``hypervisor`` node that is under the
+``/chosen`` node. The move to a dedicated node was driven by:
+
+1. It reduces the need to walk over nodes that are not of interest; only
+   nodes of interest should be in ``/chosen/hypervisor``.
+
+2. It allows the domain construction information to be easily sanitized by
+   simply removing the ``/chosen/hypervisor`` node.
+
+Example Configuration
+---------------------
+
+Below are two example device tree definitions for the hypervisor node. The
+first is an example of a multiboot-based configuration for x86 and the second
+is a module-based configuration for Arm.
+
+Multiboot x86 Configuration:
+""""""""""""""""""""""""""""
+
+::
+
+    hypervisor {
+        #address-cells = <1>;
+        #size-cells = <0>;
+        compatible = "hypervisor,xen";
+ 
+        // Configuration container
+        config {
+            compatible = "xen,config";
+ 
+            module {
+                compatible = "module,microcode", "multiboot,module";
+                mb-index = <1>;
+            };
+ 
+            module {
+                compatible = "module,xsm-policy", "multiboot,module";
+                mb-index = <2>;
+            };
+        };
+ 
+        // Boot Domain definition
+        domain {
+            compatible = "xen,domain";
+ 
+            domid = <0x7FF5>;
+ 
+            // FUNCTION_NONE            (0)
+            // FUNCTION_BOOT            (1 << 0)
+            // FUNCTION_CRASH           (1 << 1)
+            // FUNCTION_CONSOLE         (1 << 2)
+            // FUNCTION_XENSTORE        (1 << 30)
+            // FUNCTION_LEGACY_DOM0     (1 << 31)
+            functions = <0x00000001>;
+ 
+            memory = <0x0 0x20000>;
+            cpus = <1>;
+            module {
+                compatible = "module,kernel", "multiboot,module";
+                mb-index = <3>;
+            };
+ 
+            module {
+                compatible = "module,ramdisk", "multiboot,module";
+                mb-index = <4>;
+            };
+            module {
+                compatible = "module,config", "multiboot,module";
+                mb-index = <5>;
+            };
+        };
+ 
+        // Classic Dom0 definition
+        domain {
+            compatible = "xen,domain";
+ 
+            domid = <0>;
+ 
+            // PERMISSION_NONE          (0)
+            // PERMISSION_CONTROL       (1 << 0)
+            // PERMISSION_HARDWARE      (1 << 1)
+            permissions = <3>;
+ 
+            // FUNCTION_NONE            (0)
+            // FUNCTION_BOOT            (1 << 0)
+            // FUNCTION_CRASH           (1 << 1)
+            // FUNCTION_CONSOLE         (1 << 2)
+            // FUNCTION_XENSTORE        (1 << 30)
+            // FUNCTION_LEGACY_DOM0     (1 << 31)
+            functions = <0xC0000006>;
+ 
+            // MODE_PARAVIRTUALIZED     (1 << 0) /* PV | PVH/HVM */
+            // MODE_ENABLE_DEVICE_MODEL (1 << 1) /* HVM | PVH */
+            // MODE_LONG                (1 << 2) /* 64 BIT | 32 BIT */
+            mode = <5>; /* 64 BIT, PV */
+ 
+            // UUID
+            domain-uuid = [B3 FB 98 FB 8F 9F 67 A3];
+ 
+            cpus = <1>;
+            memory = <0x0 0x20000>;
+            security-id = "dom0_t";
+ 
+            module {
+                compatible = "module,kernel", "multiboot,module";
+                mb-index = <6>;
+                bootargs = "console=hvc0";
+            };
+            module {
+                compatible = "module,ramdisk", "multiboot,module";
+                mb-index = <7>;
+            };
+        };
+    };
+
+The multiboot modules supplied when using the above config would be, in order:
+
+* (the above config, compiled)
+* CPU microcode
+* XSM policy
+* kernel for boot domain
+* ramdisk for boot domain
+* boot domain configuration file
+* kernel for the classic dom0 domain
+* ramdisk for the classic dom0 domain
+
+Module Arm Configuration:
+"""""""""""""""""""""""""
+
+::
+
+    hypervisor {
+        compatible = "hypervisor,xen";
+ 
+        // Configuration container
+        config {
+            compatible = "xen,config";
+ 
+            module {
+                compatible = "module,microcode";
+                module-addr = <0x0000ff00 0x80>;
+            };
+ 
+            module {
+                compatible = "module,xsm-policy";
+                module-addr = <0x0000ff00 0x80>;
+ 
+            };
+        };
+ 
+        // Boot Domain definition
+        domain {
+            compatible = "xen,domain";
+ 
+            domid = <0x7FF5>;
+ 
+            // FUNCTION_NONE            (0)
+            // FUNCTION_BOOT            (1 << 0)
+            // FUNCTION_CRASH           (1 << 1)
+            // FUNCTION_CONSOLE         (1 << 2)
+            // FUNCTION_XENSTORE        (1 << 30)
+            // FUNCTION_LEGACY_DOM0     (1 << 31)
+            functions = <0x00000001>;
+ 
+            memory = <0x0 0x20000>;
+            cpus = <1>;
+            module {
+                compatible = "module,kernel";
+                module-addr = <0x0000ff00 0x80>;
+            };
+ 
+            module {
+                compatible = "module,ramdisk";
+                module-addr = <0x0000ff00 0x80>;
+            };
+            module {
+                compatible = "module,config";
+                module-addr = <0x0000ff00 0x80>;
+            };
+        };
+ 
+        // Classic Dom0 definition
+        domain@0 {
+            compatible = "xen,domain";
+ 
+            domid = <0>;
+ 
+            // PERMISSION_NONE          (0)
+            // PERMISSION_CONTROL       (1 << 0)
+            // PERMISSION_HARDWARE      (1 << 1)
+            permissions = <3>;
+ 
+            // FUNCTION_NONE            (0)
+            // FUNCTION_BOOT            (1 << 0)
+            // FUNCTION_CRASH           (1 << 1)
+            // FUNCTION_CONSOLE         (1 << 2)
+            // FUNCTION_XENSTORE        (1 << 30)
+            // FUNCTION_LEGACY_DOM0     (1 << 31)
+            functions = <0xC0000006>;
+ 
+            // MODE_PARAVIRTUALIZED     (1 << 0) /* PV | PVH/HVM */
+            // MODE_ENABLE_DEVICE_MODEL (1 << 1) /* HVM | PVH */
+            // MODE_LONG                (1 << 2) /* 64 BIT | 32 BIT */
+            mode = <5>; /* 64 BIT, PV */
+ 
+            // UUID
+            domain-uuid = [B3 FB 98 FB 8F 9F 67 A3];
+ 
+            cpus = <1>;
+            memory = <0x0 0x20000>;
+            security-id = "dom0_t";
+ 
+            module {
+                compatible = "module,kernel";
+                module-addr = <0x0000ff00 0x80>;
+                bootargs = "console=hvc0";
+            };
+            module {
+                compatible = "module,ramdisk";
+                module-addr = <0x0000ff00 0x80>;
+            };
+        };
+    };
+
+The modules that would be supplied when using the above config would be:
+
+* (the above config, compiled into the hardware device tree)
+* CPU microcode
+* XSM policy
+* kernel for boot domain
+* ramdisk for boot domain
+* boot domain configuration file
+* kernel for the classic dom0 domain
+* ramdisk for the classic dom0 domain
+
+The hypervisor device tree would be compiled into the hardware device tree and
+provided to Xen using the standard method currently in use. The remaining
+modules would need to be loaded at the respective addresses specified in their
+``module-addr`` properties.
+
+
+The Hypervisor node
+-------------------
+
+The hypervisor node is a top level container for the domains that will be built
+by the hypervisor on startup. On the ``hypervisor`` node the ``compatible``
+property is used to identify the type of hypervisor node present.
+
+compatible
+  Identifies the type of node. Required.
+
+The Config node
+---------------
+
+A config node details any modules that are of interest to Xen itself. For
+example, this is where Xen would be informed of microcode or XSM policy
+locations. If the modules are multiboot modules and can be located by index
+within the module chain, the ``mb-index`` property should be used to specify
+the index in the multiboot module chain. If the module will be located by
+physical memory address, then the ``module-addr`` property should be used to
+identify the location and size of the module.
+
+compatible
+  Identifies the type of node. Required.
+
+The Domain node
+---------------
+
+A domain node describes the construction of a domain. It may provide a
+``domid`` property, which will be used as the requested domain id for the
+domain; a value of "0" signifies that the next available domain id should be
+used, which is also the default behavior if the property is omitted. A domain
+configuration is not able to explicitly request a domid of "0". A domain node
+may have any of the following properties:
+
+compatible
+  Identifies the type of node. Required.
+
+domid
+  Identifies the domid requested to assign to the domain. Required.
+
+permissions
+  This sets what Discretionary Access Control permissions 
+  a domain is assigned. Optional, default is none.
+
+functions
+  This identifies what system functions a domain will fulfill.
+  Optional, the default is none.
+
+.. note::  The `functions` bits that have been selected to indicate
+   ``FUNCTION_XENSTORE`` and ``FUNCTION_LEGACY_DOM0`` are the last two bits
+   (30, 31) such that should these features ever be fully retired, the flags may
+   be dropped without leaving a gap in the flag set.
+
+mode
+  The mode the domain will be executed under. Required.
+
+domain-uuid
+  A globally unique identifier for the domain. Optional,
+  the default is NULL.
+
+cpus
+  The number of vCPUs to be assigned to the domain. Optional,
+  the default is “1”.
+
+memory
+  The amount of memory to assign to the domain, in KBs.
+  Required.
+
+security-id
+  The security identity to be assigned to the domain when XSM
+  is the access control mechanism being used. Optional,
+  the default is “domu_t”.
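+
+As an illustration, the bitmask values used in the examples above decompose
+as shown by the following short C program. It is a sketch: the macro names
+mirror the comments in the example configurations and are not taken from any
+actual Xen header.
+
+::
+
+    #include <assert.h>
+    #include <stdint.h>
+    #include <stdio.h>
+
+    /* Flag values as listed in the comments of the example configuration. */
+    #define FUNCTION_BOOT        (UINT32_C(1) << 0)
+    #define FUNCTION_CRASH       (UINT32_C(1) << 1)
+    #define FUNCTION_CONSOLE     (UINT32_C(1) << 2)
+    #define FUNCTION_XENSTORE    (UINT32_C(1) << 30)
+    #define FUNCTION_LEGACY_DOM0 (UINT32_C(1) << 31)
+
+    #define PERMISSION_CONTROL   (1u << 0)
+    #define PERMISSION_HARDWARE  (1u << 1)
+
+    #define MODE_PARAVIRTUALIZED (1u << 0)
+    #define MODE_LONG            (1u << 2)
+
+    int main(void)
+    {
+        /* Classic dom0: crash handler, console, xenstore, legacy dom0. */
+        uint32_t dom0_functions = FUNCTION_CRASH | FUNCTION_CONSOLE |
+                                  FUNCTION_XENSTORE | FUNCTION_LEGACY_DOM0;
+        /* Boot domain: boot function only. */
+        uint32_t boot_functions = FUNCTION_BOOT;
+        /* 64-bit PV guest: long mode plus paravirtualized. */
+        uint32_t dom0_mode = MODE_LONG | MODE_PARAVIRTUALIZED;
+
+        assert(dom0_functions == 0xC0000006u);              /* functions = <0xC0000006> */
+        assert(boot_functions == 0x00000001u);              /* functions = <0x00000001> */
+        assert((PERMISSION_CONTROL | PERMISSION_HARDWARE) == 3u); /* permissions = <3> */
+        assert(dom0_mode == 5u);                            /* mode = <5> */
+        printf("functions=0x%08X mode=%u\n", dom0_functions, dom0_mode);
+        return 0;
+    }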
+
+The Module node
+---------------
+
+This node describes a boot module loaded by the boot loader. The required
+``compatible`` property follows the format ``module,<type>``, where type can
+be "kernel", "ramdisk", "device-tree", "microcode", "xsm-policy" or "config".
+In the case that the module is a multiboot module, the additional property
+string "multiboot,module" may be present. One of two properties is required to
+identify how to locate the module: ``mb-index``, used for multiboot modules,
+or ``module-addr``, for memory-address-based location.
+
+compatible
+  This identifies what the module is and thus what the hypervisor
+  should use the module for during domain construction. Required.
+
+mb-index
+  This identifies the index for this module in the multiboot module chain.
+  Required for multiboot environments.
+
+module-addr
+  This identifies where in memory this module is located. Required for
+  non-multiboot environments.
+
+bootargs
+  This is used to provide the boot parameters to kernel modules.
+
+.. note::  The ``bootargs`` property is intended for situations where the
+   same kernel multiboot module is used for more than one domain.
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 03:41:49 2021
From: Christopher Clark <christopher.w.clark@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: "Daniel P. Smith" <dpsmith@apertussolutions.com>,
	andrew.cooper3@citrix.com,
	stefano.stabellini@xilinx.com,
	jgrall@amazon.com,
	Julien.grall.oss@gmail.com,
	iwj@xenproject.org,
	wl@xen.org,
	george.dunlap@citrix.com,
	jbeulich@suse.com,
	persaur@gmail.com,
	Bertrand.Marquis@arm.com,
	roger.pau@citrix.com,
	luca.fancellu@arm.com,
	paul@xen.org,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io,
	Christopher Clark <christopher.clark@starlab.io>
Subject: [PATCH v4 1/2] docs/designs/launch: Hyperlaunch design document
Date: Thu, 13 May 2021 20:41:00 -0700
Message-Id: <20210514034101.3683-2-christopher.w.clark@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210514034101.3683-1-christopher.w.clark@gmail.com>
References: <20210514034101.3683-1-christopher.w.clark@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: "Daniel P. Smith" <dpsmith@apertussolutions.com>

Adds a design document for Hyperlaunch, formerly DomB mode of dom0less.

Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
Reviewed-by: Rich Persaud <rp@stacktrust.org>

---
Changes since v3:
* Rename the Landscape table
* Changed Crash Domain to Recovery Domain
  * amended text to indicate that this will be new rather than existing Xen
    functionality
  * including update to the configuration, permission, function table
* Add definitions for “recovery domain” and “crash environment”, describing
  the different functionalities
  * some design issues deferred
* Added section to explain the motivations for the separation between VM
  creation (by the hypervisor) and VM configuration (by the boot domain)
* Adjusted the description of the current process for creating a domain
* Added recommendation for UEFI boot to use GRUB.efi to load via multiboot2
  method.
* Added Document Structure section
* Added section on Communication of Domain Configuration

 docs/designs/launch/hyperlaunch.rst | 1004 +++++++++++++++++++++++++++
 1 file changed, 1004 insertions(+)
 create mode 100644 docs/designs/launch/hyperlaunch.rst

diff --git a/docs/designs/launch/hyperlaunch.rst b/docs/designs/launch/hyperlaunch.rst
new file mode 100644
index 0000000000..30fce8c9c3
--- /dev/null
+++ b/docs/designs/launch/hyperlaunch.rst
@@ -0,0 +1,1004 @@
+###########################
+Hyperlaunch Design Document
+###########################
+
+.. sectnum:: :depth: 4
+
+This post is a Request for Comment on the included v4 of a design document that
+describes Hyperlaunch: a new method of launching the Xen hypervisor, relating
+to dom0less and work from the Hyperlaunch project. We invite discussion of this
+on this list, at the monthly Xen Community Calls, and at dedicated meetings on
+this topic in the Xen Working Group which will be announced in advance on the
+Xen Development mailing list.
+
+
+.. contents:: :depth: 3
+
+
+Introduction
+============
+
+This document describes the design and motivation for the funded development
+of "Hyperlaunch": a new, flexible system for launching the Xen hypervisor and
+virtual machines.
+
+The design enables seamless transition for existing systems that require a
+dom0, and provides a new general capability to build and launch alternative
+configurations of virtual machines, including support for static partitioning
+and accelerated start of VMs during host boot, while adhering to the principles
+of least privilege. It incorporates the existing dom0less functionality,
+extended to fold in the new developments from the Hyperlaunch project, with
+support for both x86 and Arm platform architectures, building upon and
+replacing the earlier 'late hardware domain' feature for disaggregation of
+dom0.
+
+Hyperlaunch is designed to be flexible and reusable across multiple use cases,
+and our aim is to ensure that it is capable, widely exercised, comprehensively
+tested, and well understood by the Xen community.
+
+Document Structure
+==================
+
+This is the primary design document for Hyperlaunch, to provide an overview of
+the feature. Separate additional documents will cover specific aspects of
+Hyperlaunch in further detail, including:
+
+  - The Device Tree specification for Hyperlaunch metadata
+  - New Domain Roles for Xen and the Xen Security Modules (XSM) policy
+  - Passthrough of PCI devices with Hyperlaunch
+
+Approach
+========
+
+Born out of improving support for Dynamic Root of Trust for Measurement (DRTM),
+the Hyperlaunch project is focused on restructuring the system launch of Xen.
+The Hyperlaunch design provides a security architecture that builds on the
+principles of Least Privilege and Strong Isolation, achieving this through the
+disaggregation of system functions. It enables this with the introduction of a
+boot domain that works in conjunction with the hypervisor to provide the
+ability to launch multiple domains as part of host boot while maintaining a
+least privilege implementation.
+
+While the Hyperlaunch project was and continues to be driven by a
+focus on security through disaggregation, there are multiple use cases with a
+non-security focus that require or benefit from the ability to launch multiple
+domains at host boot. This was proven by the need that drove the implementation
+of the dom0less capability in the Arm branch of Xen.
+
+Hyperlaunch is designed to be flexible and reusable across multiple use cases,
+and our aim is to ensure that it is capable, widely exercised, comprehensively
+tested, and provides a robust foundation for current and emerging system launch
+requirements of the Xen community.
+
+
+Objectives
+----------
+
+* In general strive to maintain compatibility with existing Xen behavior
+* A default build of the hypervisor should be capable of booting both legacy-compatible and new styles of launch:
+
+        * classic Xen boot: starting a single, privileged Dom0
+        * classic Xen boot with late hardware domain: starting a Dom0 that transitions hardware access/control to another domain
+        * a dom0less boot: starting multiple domains without privilege assignment controls
+        * Hyperlaunch: starting one or more VMs, with flexible configuration
+
+* Preferred that it be managed via Kconfig options to govern inclusion of support for each style
+* The selection between classic boot and Hyperlaunch boot should be automatic
+
+        * Preferred that it not require a kernel command line parameter for selection
+
+* It should not require modification to boot loaders
+* It should provide a user friendly interface for its configuration and management
+* It must provide a method for building systems that fall back to console access in the event of misconfiguration
+* It should be able to boot an x86 Xen environment without the need for a Dom0 domain
+
+
+Requirements and Design
+=======================
+
+Hyperlaunch is defined as the ability of a hypervisor to construct and start
+one or more virtual machines at system launch in a specific way. A hypervisor
+can support one or both modes of configuration, Hyperlaunch Static and
+Hyperlaunch Dynamic. The Hyperlaunch Static mode functions as a static
+partitioning hypervisor ensuring only the virtual machines started at system
+launch are running on the system. The Hyperlaunch Dynamic mode functions as a
+dynamic hypervisor allowing for additional virtual machines to be started after
+the initial virtual machines have started. The Xen hypervisor is capable of
+both modes of configuration from the same binary and when paired with its XSM
+flask, provides strong controls that enable fine grained system partitioning.
+
+Hypervisor Launch Landscape
+---------------------------
+
+This comparison table presents the distinctive capabilities of Hyperlaunch with
+reference to existing launch configurations currently available in Xen and
+other hypervisors.
+
+::
+
+ +---------------+-----------+------------+-----------+-------------+---------------------+
+ | **Xen Dom0**  | **Linux** | **Late**   | **Jail**  | **Xen**     | **Xen Hyperlaunch** |
+ | **(Classic)** | **KVM**   | **HW Dom** | **house** | **dom0less**+---------+-----------+
+ |               |           |            |           |             | Static  | Dynamic   |
+ +===============+===========+============+===========+=============+=========+===========+
+ | Hypervisor able to launch multiple VMs during host boot                                |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |     Y     |       Y     |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Hypervisor supports Static Partitioning                                                |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |     Y     |       Y     |    Y    |           |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Able to launch VMs dynamically after host boot                                         |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |       Y       |     Y     |      Y*    |     Y     |       Y*    |         |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Supports strong isolation between all VMs started at host boot                         |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |     Y     |       Y     |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Enables flexible sequencing of VM start during host boot                               |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |           |             |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Prevent all-powerful static root domain being launched at boot                         |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |           |       Y*    |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Operates without a Highly-privileged management VM (eg. Dom0)                          |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |      Y*    |           |       Y*    |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Operates without a privileged toolstack VM (Control Domain)                            |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |           |       Y*    |    Y    |           |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Extensible VM configuration applied before launch of VMs at host boot                  |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |           |             |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Flexible granular assignment of permissions and functions to VMs                       |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |           |             |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | Supports extensible VM measurement architecture for DRTM and attestation               |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |           |             |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ | PCI passthrough configured at host boot                                                |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+ |               |           |            |           |             |    Y    |     Y     |
+ +---------------+-----------+------------+-----------+-------------+---------+-----------+
+
+
+Domain Construction
+-------------------
+
+An important aspect of the Hyperlaunch architecture is that the hypervisor
+performs domain construction for all the Initial Domains, i.e. it builds each
+domain that is described in the Launch Control Module (LCM). More
+specifically, the hypervisor will perform the function of *domain creation*
+for each Initial
+Domain: it allocates the unique domain identifier assigned to the virtual
+machine and records essential metadata about it in the internal data structure
+that enables scheduling the domain to run. It will also perform *basic domain
+construction*: build the initial page tables with data from the kernel and
+initial ramdisk supplied, and as appropriate for the domain type, populate the
+p2m table and ACPI tables.
+
+Subsequent to this, the boot domain can apply additional configuration to the
+initial domains from the data in the LCM, in *extended domain construction*.
+
+The benefits of this structure include:
+
+* Security: Constrains the permissions required by the boot domain: in this
+  structure it does not require the capability to create domains. This aligns
+  with the principle of least privilege.
+* Flexibility: Enables policy-based dynamic assignment of hardware by the boot
+  domain, customizable according to use case and able to adapt to hardware
+  discovery.
+* Compatibility: Supports reuse of familiar tools with use-case customized boot
+  domains.
+* Commonality: Reuses the same logic for initial basic domain building across
+  diverse Xen deployments.
+
+  * It aligns the x86 initial domain construction with the existing Arm
+    dom0less feature for construction of multiple domains at boot.
+  * The boot domain implementation may vary significantly with different
+    deployment use cases, whereas the hypervisor implementation is common.
+
+* Correctness: Increases confidence in the implementation of domain
+  construction, since it is performed by the hypervisor in well maintained and
+  centrally tested logic.
+* Performance: Enables launch for configurations where a fast start of
+  multiple domains at boot is a requirement.
+* Capability: Supports launch of advanced configurations where a sequenced
+  start of multiple domains is required, or multiple domains are involved in
+  startup of the running system configuration.
+
+  * e.g. for PCI passthrough on systems where the toolstack runs in a
+    separate domain to the hardware management.
+
+Please see the ‘Hyperlaunch Device Tree’ design document, which describes the
+configuration module that is provided to the hypervisor by the bootloader.
+
+The hypervisor determines how these domains are started as host boot completes:
+in some systems the Boot Domain acts upon the extended boot configuration
+supplied as part of launch, performing configuration tasks for preparing the
+other domains for the hypervisor to commence running them.
+
+Common Boot Configurations
+--------------------------
+
+Across those who have expressed interest in, or discussed a need for,
+launching multiple domains at host boot, the Hyperlaunch approach is to
+provide the means to start nearly any combination of domains. Below is an
+enumerated selection of common boot configurations, for reference in the
+following section.
+
+Dynamic Launch with a Highly-Privileged Domain 0
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Hyperlaunch Classic: Dom0
+        This configuration mimics the classic Xen start and domain construction
+        where a single domain is constructed with all privileges and functions for
+        managing hardware and running virtualization toolstack software.
+
+Hyperlaunch Classic: Extended Launch Dom0
+        This configuration is where a Dom0 is started via a Boot Domain that runs
+        first. This is for cases where some preprocessing in a less privileged domain
+        is required before starting the all-privileged Domain 0.
+
+Hyperlaunch Classic: Basic Cloud
+        This configuration constructs a Dom0 that is started in parallel with some
+        number of workload domains.
+
+Hyperlaunch Classic: Cloud
+        This configuration builds a Dom0 and some number of workload domains, launched
+        via a Boot Domain that runs first.
+
+
+Static Launch Configurations: without a Domain 0 or a Control Domain
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Hyperlaunch Static: Basic
+        Simple static partitioning where all domains that can be run on this system are
+        built and started during host boot and where no domain is started with the
+        Control Domain permissions, thus making it not possible to create/start any
+        further new domains.
+
+Hyperlaunch Static: Standard
+        This is a variation of the “Hyperlaunch Static: Basic” static partitioning
+        configuration with the introduction of a Boot Domain. This configuration allows
+        for use of a Boot Domain to be able to apply extended configuration
+        to the Initial Domains before they are started and
+        sequence the order in which they start.
+
+Hyperlaunch Static: Disaggregated
+        This is a variation of the “Hyperlaunch Static: Standard” configuration with
+        the introduction of a Boot Domain and an illustration that some functions can
+        be disaggregated to dedicated domains.
+
+Dynamic Launch of Disaggregated System Configurations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Hyperlaunch Dynamic: Hardware Domain
+        This configuration mimics the existing Xen feature late hardware domain with
+        the one difference being that the hardware domain is constructed by the
+        hypervisor at startup instead of later by Dom0.
+
+Hyperlaunch Dynamic: Flexible Disaggregation
+        This configuration is similar to the “Hyperlaunch Classic: Dom0” configuration
+        except that it includes starting a separate hardware domain during Xen startup.
+        It is also similar to “Hyperlaunch Dynamic: Hardware Domain” configuration, but
+        it launches via a Boot Domain that runs first.
+
+Hyperlaunch Dynamic: Full Disaggregation
+        This configuration demonstrates how to start a fully disaggregated
+        system: the virtualization toolstack runs in a Control Domain,
+        separate from the domains responsible for managing hardware, XenStore, the Xen
+        Console and Crash functions, each launched via a Boot Domain.
+
+
+Example Use Cases and Configurations
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The following example use cases can be matched to configurations listed in the
+previous section.
+
+Use case: Modern cloud hypervisor
+"""""""""""""""""""""""""""""""""
+
+**Option:** Hyperlaunch Classic: Cloud
+
+This configuration will support strong isolation for virtual TPM domains and
+measured launch in support of attestation to infrastructure management, while
+allowing the use of existing Dom0 virtualization toolstack software.
+
+Use case: Edge device with security or safety requirements
+""""""""""""""""""""""""""""""""""""""""""""""""""""""""""
+
+**Option:** Hyperlaunch Static: Standard
+
+This configuration runs without requiring a highly-privileged Dom0, and enables
+extended VM configuration to be applied to the Initial VMs prior to launching
+them, optionally in a sequenced start.
+
+Use case: Client hypervisor
+"""""""""""""""""""""""""""
+
+**Option:** Hyperlaunch Dynamic: Flexible Disaggregation
+
+**Option:** Hyperlaunch Dynamic: Full Disaggregation
+
+These configurations enable dynamic client workloads, strong isolation for the
+domain running the virtualization toolstack software and each domain managing
+hardware, with PCI passthrough performed during host boot and support for
+measured launch.
+
+Hyperlaunch Disaggregated Launch
+--------------------------------
+
+
+Existing in Xen today are two primary permissions, *control domain* and
+*hardware domain*, and two functions, *console domain* and *xenstore domain*,
+that can be assigned to a domain. Traditionally all of these permissions and
+functions are all assigned to Dom0 at start and can then be delegated to other
+domains created by the toolstack in Dom0. With Hyperlaunch it becomes possible
+to assign these permissions and functions to any domain for which there is a
+definition provided at startup.
+
+Additionally, two further functions are introduced: the *recovery domain*,
+intended to assist with recovery from failures encountered starting VMs during
+host boot, and the *boot domain*, for performing aspects of domain construction
+during startup.
+
+Supporting the booting of each of the above common boot configurations is
+accomplished by considering the set of initial domains and the assignment of
+Xen’s permissions and functions, including the ones introduced by Hyperlaunch,
+to these domains. A discussion of these will be covered later but for now they
+are laid out in a table with a mapping to the common boot configurations. This
+table is not intended to be an exhaustive list of configurations and does not
+account for flask policy specified functions that are use case specific.
+
+In the table, each number represents a separate domain being constructed by
+the Hyperlaunch construction path as Xen starts. The designator ``{n}``
+signifies that “n” additional domains may be constructed that do not have any
+special role in a general Xen system.
+
+::
+
+ +-------------------+------------------+-----------------------------------+
+ | Configuration     |    Permission    |            Function               |
+ |                   +------+------+----+------+--------+--------+----------+
+ |                   | None | Ctrl | HW | Boot |Recovery| Console| Xenstore |
+ +===================+======+======+====+======+========+========+==========+
+ | Classic: Dom0     |      |  0   | 0  |      |   0    |   0    |    0     |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Classic: Extended |      |  1   | 1  |  0   |   1    |   1    |    1     |
+ | Launch Dom0       |      |      |    |      |        |        |          |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Classic:          | {n}  |  0   | 0  |      |   0    |   0    |    0     |
+ | Basic Cloud       |      |      |    |      |        |        |          |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Classic: Cloud    | {n}  |  1   | 1  |  0   |   1    |   1    |    1     |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Static: Basic     | {n}  |      | 0  |      |   0    |   0    |    0     |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Static: Standard  | {n}  |      | 1  |  0   |   1    |   1    |    1     |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Static:           | {n}  |      | 2  |  0   |   3    |   4    |    1     |
+ | Disaggregated     |      |      |    |      |        |        |          |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Dynamic:          |      |  0   | 1  |      |   0    |   0    |    0     |
+ | Hardware Domain   |      |      |    |      |        |        |          |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Dynamic: Flexible | {n}  |  1   | 2  |  0   |   1    |   1    |    1     |
+ | Disaggregation    |      |      |    |      |        |        |          |
+ +-------------------+------+------+----+------+--------+--------+----------+
+ | Dynamic: Full     | {n}  |  2   | 3  |  0   |   4    |   5    |    1     |
+ | Disaggregation    |      |      |    |      |        |        |          |
+ +-------------------+------+------+----+------+--------+--------+----------+
+
+Overview of Hyperlaunch Flow
+----------------------------
+
+Before delving into Hyperlaunch, a good basis to start with is an understanding
+of the current process to create a domain. A way to view this process starts
+with the core configuration, which is the information the hypervisor requires
+to make the call to `domain_create`, followed by basic construction to
+provide the
+memory image to run, including the kernel and ramdisk. A subsequent step
+applies the extended configuration used by the toolstack to provide a domain
+with any additional configuration information. Until the extended configuration
+is completed, a domain has access to no resources except its allocated vcpus
+and memory. The exception to this is Dom0, to which the hypervisor explicitly
+grants control of, and access to, all system resources except those that only
+the hypervisor should control. This exception for Dom0 is driven by
+the system structure with a monolithic Dom0 domain predating introduction of
+support for disaggregation into Xen, and the corresponding default assignment
+of multiple roles within the Xen system to Dom0.
+
+While not a different domain creation path, there does exist the Hardware
+Domain (hwdom), sometimes also referred to as late-Dom0. It is an early effort
+to disaggregate Dom0’s roles into a separate control domain and hardware
+domain. This capability is activated by the passing of a domain id to the
+`hardware_dom` kernel command line parameter, and the Xen hypervisor will then
+flag that domain id as the hardware domain. Later when the toolstack constructs
+a domain with that domain id as the requested domid, the hypervisor will
+transfer all device I/O from Dom0 to this domain. It will also transfer
+the “host shutdown on domain shutdown” flag from Dom0 to the hardware
+domain. It is worth mentioning that this approach for disaggregation was
+created in this manner due to the inability of Xen to launch more than one
+domain at startup.
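+
+As a concrete illustration, the late hardware domain capability described
+above is activated with a Xen command line setting such as the following (the
+domain id value here is only an example)::
+
+    hardware_dom=1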
+
+Hyperlaunch Xen startup
+^^^^^^^^^^^^^^^^^^^^^^^
+
+The Hyperlaunch approach’s primary focus is on how to assign the roles
+traditionally granted to Dom0 to one or more domains at host boot. While the
+statement is simple to make, the implications are not trivial by any means.
+This also explains why the Hyperlaunch approach is orthogonal to the existing
+dom0less capability. The dom0less capability focuses on enabling the launch of
+multiple domains in parallel with Dom0 at host boot. A corollary for dom0less
+is that systems that don’t require Dom0 after all guest domains have started
+are able to perform the host boot without a Dom0, though it should be noted
+that it may be possible to start Dom0 at a later point. Hyperlaunch, in
+contrast, separates Dom0’s roles, which requires the ability to launch
+multiple domains at host boot. The direct consequences of this
+approach are profound and provide a myriad of possible configurations for which
+a sample of common boot configurations were already presented.
+
+To enable the Hyperlaunch approach a new alternative path for host boot within
+the hypervisor must be introduced. This alternative path effectively branches
+just before the current point of Dom0 construction and begins an alternate
+means of system construction. The determination if this alternate path should
+be taken is through the inspection of the boot chain. If the bootloader has
+loaded a specific configuration, as described later, it will enable Xen to
+detect that a Hyperlaunch configuration has been provided. Once a Hyperlaunch
+configuration is detected, this alternate path can be thought of as occurring
+in phases: domain creation, domain preparation, and launch finalization.
+
+Domain Creation
+"""""""""""""""
+
+The domain creation phase begins with Xen parsing the bootloader provided
+material, to understand the content of the modules provided. It will then load
+any microcode or XSM policy it discovers. For each domain configuration Xen
+finds, it parses the configuration to construct the necessary domain
+definition, instantiate the domain, and leave it in a paused state. When all
+domain configurations have been instantiated as domains, if one of them is
+flagged as the Boot Domain, that domain will be unpaused, starting the domain
+preparation phase. If there is no Boot Domain defined, then the domain
+preparation phase will be skipped and Xen will trigger the launch finalization
+phase.
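
The creation phase described above can be sketched as a simple model. This is
an illustrative sketch only: the names (``DomainConfig``, ``Domain``, the
boot-domain flag) are invented for the example and do not correspond to Xen's
internal API.

```python
# Illustrative model of the Hyperlaunch domain creation phase.
# All names here are invented for the sketch; they are not Xen's API.
from dataclasses import dataclass


@dataclass
class DomainConfig:
    name: str
    is_boot_domain: bool = False


@dataclass
class Domain:
    config: DomainConfig
    paused: bool = True  # every domain is instantiated paused


def domain_creation_phase(configs):
    """Instantiate every configured domain in a paused state, then
    unpause the Boot Domain if one was flagged.  Returns the domain
    list and the Boot Domain (or None, in which case the hypervisor
    would proceed straight to launch finalization)."""
    domains = [Domain(cfg) for cfg in configs]
    boot = next((d for d in domains if d.config.is_boot_domain), None)
    if boot is not None:
        boot.paused = False  # begins the domain preparation phase
    return domains, boot
```

If no Boot Domain is present, ``boot`` is ``None`` and, per the text above,
the domain preparation phase is skipped.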
+
+Domain Preparation Phase
+""""""""""""""""""""""""
+
+The domain preparation phase is an optional check point for the execution of a
+workload specific domain, the Boot Domain. While the Boot Domain is the first
+domain to run and has some degree of control over the system, it is extremely
+restricted in both system resource access and hypervisor operations. Its
+purpose is to:
+
+* Access the configuration provided by the bootloader
+* Finalize the configuration of the domains
+* Conduct any setup and launch related operations
+* Do an ordered unpause of domains that require an ordered start
+
+When the Boot Domain has completed, it will notify the hypervisor that it is
+done, triggering the launch finalization phase.
+
+
+Launch Finalization
+"""""""""""""""""""
+
+The hypervisor handles the launch finalization phase, which is effectively a
+clean-up phase. The steps taken by the hypervisor, not necessarily in
+implementation order, are as follows:
+
+* Free the boot module chain
+* If a Boot Domain was used, reclaim its resources; the Boot Domain serves a
+  reserved function and thus can never be respawned
+* Unpause any domains still in a paused state
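
The finalization steps can be modelled as follows. Again this is a sketch with
invented names, not Xen's implementation; in particular, resource reclamation
is reduced to dropping the Boot Domain from the domain list.

```python
# Illustrative model of launch finalization; names invented for the sketch.
from dataclasses import dataclass


@dataclass
class Domain:
    name: str
    paused: bool = True


def launch_finalization(domains, boot_domain=None, recovery_domain=None):
    """Reclaim the Boot Domain, if one ran, and unpause every remaining
    domain.  The recovery domain (if configured) stays paused unless a
    failure is later encountered."""
    remaining = [d for d in domains if d is not boot_domain]
    for d in remaining:
        if d is not recovery_domain:
            d.paused = False
    return remaining
```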
+
+While the focus thus far has been on how the Hyperlaunch capability will work,
+it is worth mentioning what it does not do or prevent. It does not stop or
+inhibit assigning the control domain role, which gives the domain the ability
+to create, start, stop, restart, and destroy domains, or the hardware domain
+role, which gives access to all I/O devices except those that the hypervisor
+has reserved for itself. In particular, it is still possible to construct a
+domain with all the privileged roles, i.e. a Dom0, with or without the domain
+id being zero. In fact, whatever limitations are imposed now become fully
+configurable, without the risk of circumvention by an all-privileged domain.
+
+Structuring of Hyperlaunch
+--------------------------
+
+The structure of Hyperlaunch is built around the existing capabilities of the
+host boot protocol. This approach was driven by the objective not to require
+modifications to the boot loader. The only requirement is that the boot loader
+supports the Multiboot2 (MB2) protocol. For UEFI boot, our recommendation is to
+use GRUB.efi to load Xen and the initial domain materials via the multiboot2
+method. On Arm platforms, Hyperlaunch is compatible with the existing
+interface for booting into the hypervisor.
+
+
+x86 Multiboot2
+^^^^^^^^^^^^^^
+
+The MB2 protocol has no concept of a manifest to tell the initial kernel what
+is contained in the chain, leaving it to the kernel to impose a loading
+convention, use magic number identification, or both. When considering the
+passing of multiple kernels, ramdisks, and domain configurations along with
+any existing modules already passed, there is no sane convention that could
+be imposed, and magic number identification is nearly impossible given the
+objective of not imposing unnecessary complexity on the hypervisor.
+
+As alluded to previously, a manifest is needed that describes the contents of
+the MB2 chain and how they relate within a Xen context. To address this need,
+the Launch Control Module (LCM) was designed to provide such a manifest. The
+LCM was designed with a specific set of properties:
+
+* minimize the complexity of the parsing logic required by the hypervisor
+* allow for expanding and optional configuration fragments without breaking
+  backwards compatibility
+
+To enable automatic detection of a Hyperlaunch configuration, the LCM must be
+the first MB2 module in the MB2 module chain. The LCM is implemented using
+Device Tree, as defined in the Hyperlaunch Device Tree design document. Being
+implemented in Device Tree gives the LCM a magic number that enables the
+hypervisor to detect its presence when used in a Multiboot2 module chain. The
+hypervisor can then confirm that it is a proper LCM by checking for a
+compliant Hyperlaunch Device Tree. The Hyperlaunch Device Tree nodes are
+designed to allow:
+
+* for the hypervisor to parse only those entries it understands,
+* for packing custom information for a custom boot domain,
+* the ability to use a new LCM with an older hypervisor,
+* and the ability to use an older LCM with a new hypervisor.
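
To make the manifest concrete, a hypothetical LCM fragment might look like the
following. The node and property names here are placeholders chosen for
illustration only; the authoritative binding is defined in the Hyperlaunch
Device Tree design document.

```dts
/dts-v1/;

/ {
    chosen {
        hypervisor {
            compatible = "hypervisor,xen";

            /* One node per initial domain; all property names below
             * are illustrative placeholders, not the real binding. */
            domain@0 {
                compatible = "hypervisor,domain";
                memory = <0x0 0x20000>;
                cpus = <1>;
                boot-domain;            /* flags the Boot Domain */
            };

            domain@1 {
                compatible = "hypervisor,domain";
                memory = <0x0 0x40000>;
                cpus = <2>;
            };
        };
    };
};
```

A hypervisor parsing such a tree would skip nodes and properties it does not
understand, which is what allows a newer LCM to be used with an older
hypervisor, and vice versa.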
+
+Arm Device Tree
+^^^^^^^^^^^^^^^
+
+As discussed, the LCM is in Device Tree format and was designed to co-exist
+within the Device Tree ecosystem, in particular alongside dom0less Device
+Tree entries. On Arm, Xen is already designed to boot from a host Device Tree
+description (dtb) file, and the LCM entries can be embedded into this host
+dtb file. This makes detecting the LCM entries and supporting Hyperlaunch on
+Arm relatively straightforward. Where the described x86 approach has Xen
+inspect the first MB2 module, on Arm Xen will check whether the top-level LCM
+node exists in the host dtb file. If the LCM node does exist, Xen will then
+enter the same code path that the x86 entry would take.
+
+Xen hypervisor
+^^^^^^^^^^^^^^
+
+The new host boot flow was previously discussed at a high level. Within this
+flow is the configuration parsing and domain creation phase, which is
+expanded upon here. The hypervisor will inspect the LCM for a config node
+and, if found, will iterate through all module nodes. The module nodes are
+used to identify whether any modules contain microcode or an XSM policy. As
+it processes domain nodes, it will construct each domain using the node
+properties and the module nodes. Once it has finished iterating through all
+the entries in the LCM, if a constructed domain has the Boot Domain
+attribute, that domain will be unpaused. Otherwise the hypervisor will start
+the launch finalization phase.
+
+Boot Domain
+^^^^^^^^^^^
+
+Traditionally domain creation was controlled by the user within the Dom0
+environment whereby custom toolstacks could be implemented to impose
+requirements on the process. The Boot Domain is a means to enable the user to
+continue to maintain a degree of that control over domain creation but within a
+limited privilege environment. The Boot Domain will have access to the LCM and
+the boot chain, along with access to a subset of the hypercall operations.
+When the Boot Domain is finished, it will notify the hypervisor through a
+hypercall op.
+
+Recovery Domain
+^^^^^^^^^^^^^^^
+
+With the existing Dom0 host boot path, when a failure occurs there are several
+assumptions that can safely be made to get the user to a console for
+troubleshooting. With the Hyperlaunch host boot path those assumptions can no
+longer be made, thus a means is needed to get the user to a console in the case
+of a recoverable failure. The recovery domain is configured by a domain
+configuration entry in the LCM, in the same manner as the other initial
+domains, and it will not be unpaused at launch finalization unless a failure is
+encountered starting the initial domains.
+
+Xen has existing support for a Crash Environment where memory can be reserved
+at host boot and a kernel loaded into it, to be jumped into at any point while
+the system is running when a crash is detected. The Recovery Domain
+functionality is a separate, complementary capability. The Crash Environment
+replaces the previously active hypervisor and running guests, and enables a
+process for mounting disks to write out log information prior to rebooting the
+system. In contrast, the Recovery Domain is able to use the functionality of
+the Xen hypervisor, that is still present and running, to perform recovery
+handling for errors encountered with starting the initial domains.
+
+Deferred Design
+"""""""""""""""
+
+To be determined:
+
+* Define what is detected as a crash
+* Explain how crash detection is performed and which components are involved
+* Explain how the recovery domain is unpaused
+* Explain how and when the resources assigned to the recovery domain are reclaimed
+* Define what the recovery domain is able to do
+* Determine what permissions the recovery domain requires to perform its job
+
+
+Control Domain
+^^^^^^^^^^^^^^
+
+The concept of the Control Domain already exists within Xen as a boolean,
+`is_privileged`, that governs access to many of the privileged interfaces of
+the hypervisor that support a domain running a virtualization system toolstack.
+Hyperlaunch will allow the `is_privileged` flag to be set on any domain that is
+created at launch, rather than only a Dom0. It may potentially be set on
+multiple domains.
+
+Hardware Domain
+^^^^^^^^^^^^^^^
+
+The Hardware Domain is also an existing concept for Xen that is enabled through
+the `is_hardware_domain` check. With Hyperlaunch, the previous process of I/O
+accesses being assigned to Dom0 for later transfer to the hardware domain is
+no longer required. Instead, during the configuration phase, the Xen
+hypervisor directly assigns the I/O accesses to the domain with the hardware
+domain permission bit enabled.
+
+Console Domain
+^^^^^^^^^^^^^^
+
+Traditionally the Xen console is assigned to the control domain and is then
+reassignable by the toolstack to another domain. With Hyperlaunch it becomes
+possible to construct a boot configuration where there is no control domain,
+or a use case where the Xen console needs to be isolated. As such, it becomes
+necessary to be able to designate which of the initial domains should be
+assigned the Xen console. Therefore Hyperlaunch introduces the ability to
+specify an initial domain to which the console is assigned, along with a
+convention of ordered assignment for when there is no explicit assignment.
+
+Communication of Domain Configurations
+======================================
+
+There are several standard methods for an Operating System to access machine
+configuration and environment information: ACPI is common on x86 systems,
+whereas Device Tree is more typical on Arm platforms. There are currently
+implementations of both in Xen.
+
+* For dom0less, guest Device Trees are dynamically constructed by the
+  hypervisor to convey domain configuration data
+
+* For PVH dom0 on x86, ACPI tables are built by the hypervisor before the
+  domain is started
+
+Note that both of these mechanisms convey static data that is fixed prior to
+the point of domain construction. Hyperlaunch will retain both the existing
+ACPI and Device Tree methods.
+
+Communication of data between a Boot Domain and a Control Domain is of note
+since they may not be running concurrently: the method used will depend on
+their specific implementations, but one option available is to use Xen’s hypfs
+for transfer of basic data to support system bootstrap.
+
+-------------------------------------------------------------------------------
+
+Appendix
+========
+
+Appendix 1: Flow Sequence of Steps of a Hyperlaunch Boot
+--------------------------------------------------------
+
+Provided here is an ordered flow of a Hyperlaunch boot with its key logic
+decision points highlighted. Not all branch points are recorded, in
+particular the variety of error conditions that may occur. ::
+
+  1. Hypervisor Startup:
+  2a. (x86) Inspect first module provided by the bootloader
+      a. Is the module an LCM
+          i. YES: proceed with the Hyperlaunch host boot path
+          ii. NO: proceed with a Dom0 host boot path
+  2b. (Arm) Inspect host dtb for `/chosen/hypervisor` node
+      a. Is the LCM present
+          i. YES: proceed with the Hyperlaunch host boot path
+          ii. NO: proceed with a Dom0/dom0less host boot path
+  3. Iterate through the LCM entries looking for the module description
+     entry
+      a. Check if any of the modules are microcode or policy and if so,
+         load
+  4. Iterate through the LCM entries processing all domain description
+     entries
+      a. Use the details from the Basic Configuration to call
+         `domain_create`
+      b. Record if a domain is flagged as the Boot Domain
+      c. Record if a domain is flagged as the Recovery Domain
+  5. Was a Boot Domain created
+      a. YES:
+          i. Attach console to Boot Domain
+          ii. Unpause Boot Domain
+          iii. Goto Boot Domain (step 6)
+      b. NO: Goto Launch Finalization (step 10)
+  6. Boot Domain:
+  7. Boot Domain comes online and may do any of the following actions
+      a. Process the LCM
+      b. Validate the MB2 chain
+      c. Make additional configuration settings for staged domains
+      d. Unpause any precursor domains
+      e. Set any runtime configurations
+  8. Boot Domain does any necessary cleanup
+  9. Boot Domain makes a hypercall op call to signal it is finished
+      i. Hypervisor reclaims all Boot Domain resources
+      ii. Hypervisor records that the Boot Domain ran
+      iii. Goto Launch Finalization (step 10)
+  10. Launch Finalization
+  11. If a configured domain was flagged to have the console, the
+      hypervisor assigns it
+  12. The hypervisor clears the LCM and bootloader loaded module,
+      reclaiming the memory
+  13. The hypervisor iterates through domains unpausing any domain not
+      flagged as the recovery domain
+
+
+Appendix 2: Considerations in Naming the Hyperlaunch Feature
+------------------------------------------------------------
+
+* The term “Launch” is preferred over “Boot”
+
+        * Multiple individual component boots can occur in the new system start
+          process; Launch is preferable for describing the whole process
+        * Fortunately there is consensus in the current group of stakeholders
+          that the term “Launch” is good and appropriate
+
+* The names we define must support becoming meaningful and simple to use
+  outside the Xen community
+
+        * They must be able to be resolved quickly via search engine to a clear
+          explanation (eg. Xen marketing material, documentation or wiki)
+        * We prefer that the terms be helpful for marketing communications
+        * Consequence: avoid the term “domain” which is Xen-specific and
+          requires a definition to be provided each time when used elsewhere
+
+
+* There is a need to communicate that Xen is capable of being used as a Static
+  Partitioning hypervisor
+
+        * The community members using and maintaining dom0less are the current
+          primary stakeholders for this
+
+* There is a need to communicate that the new launch functionality provides new
+  capabilities not available elsewhere, and is more than just supporting Static
+  Partitioning
+
+        * No other hypervisor known to the authors of this document is capable
+          of providing what Hyperlaunch will be able to do. The launch sequence is
+          designed to:
+
+                * Remove dependency on a single, highly-privileged initial domain
+                * Allow the initial domains started to be independent and fully
+                  isolated from each other
+                * Support configurations where no further VMs can be launched
+                  once the initial domains have started
+                * Use a standard, extensible format for conveying VM
+                  configuration data
+                * Ensure that domain building of all initial domains is
+                  performed by the hypervisor from materials supplied by the
+                  bootloader
+                * Enable flexible configuration to be applied to all initial
+                  domains by an optional Boot Domain, that runs with limited
+                  privilege, before any other domain starts and obtains the VM
+                  configuration data from the bootloader materials via the
+                  hypervisor
+                * Enable measurements of all of the boot materials prior to
+                  their use, in a sequence with minimized privilege
+                * Support use-case-specific customized Boot Domains
+                * Complement the hypervisor’s existing ability to enforce
+                  policy-based Mandatory Access Control
+
+
+* “Static” and “Dynamic” have different and important meanings in different
+  communities
+
+        * Static and Dynamic Partitioning describe the ability to create new
+          virtual machines, or not, after the initial host boot process
+          completes
+        * Static and Dynamic Root of Trust describe the nature of the trust
+          chain for a measured launch. In this case Static refers to the fact
+          that the trust chain is fixed and non-repeatable until the next host
+          reboot or shutdown, whereas Dynamic refers to the ability to conduct
+          the measured launch at any time, and potentially multiple times,
+          before the next host reboot or shutdown.
+
+                * We will be using Hyperlaunch with both Static and Dynamic
+                  Roots of Trust, to launch both Static and Dynamically
+                  Partitioned Systems, and being clear about exactly which
+                  combination is being started will be very important (eg. for
+                  certification processes)
+
+        * Consequence: uses of “Static” and “Dynamic” need to be qualified if
+          they are incorporated into the naming of this functionality
+
+                * This can be done by adding the preceding, stronger branded
+                  term: “Hyperlaunch”, before “Static” or “Dynamic”
+                * ie. “Hyperlaunch Static” describes launch of a
+                  Statically Partitioned system
+                * and “Hyperlaunch Dynamic” describes launch of a
+                  Dynamically Partitioned system.
+                * In practice, this means that “Hyperlaunch Static” describes
+                  starting a Static Partitioned system where no new domains can
+                  be started later (ie. no VM has the Control Domain
+                  permission), whereas “Hyperlaunch Dynamic” will launch some
+                  VM with the Control Domain permission, able to create VMs
+                  dynamically at a later point.
+
+**Naming Proposal:**
+
+* New Term: “Hyperlaunch” : the ability of a hypervisor to construct and start
+  one or more virtual machines at system launch, in the following manner:
+
+        * The hypervisor must build all of the domains that it starts at host
+          boot
+
+                * Similar to the way the dom0 domain is built by the hypervisor
+                  today, and how dom0less works: it will run a loop to build
+                  them all, driven from the configuration provided
+                * This is a requirement for ensuring that there is Strong
+                  Isolation between each of the initial VMs
+
+        * A single file containing the VM configs (“Launch Control Module”:
+          LCM, in Device Tree binary format) is provided to the hypervisor
+
+                * The hypervisor parses it and builds domains
+                * If the LCM config says that a Boot Domain should run first,
+                  then the LCM file itself is made available to the Boot Domain
+                  for it to parse and act on, to invoke operations via the
+                  hypervisor to apply additional configuration to the other VMs
+                  (ie. executing a privilege-constrained toolstack)
+
+* New Term: “Hyperlaunch Static”: starts a Static Partitioned system, where
+  only the virtual machines started at system launch are running on the system
+
+* New Term: “Hyperlaunch Dynamic”: starts a system where virtual machines may
+  be dynamically added after the initial virtual machines have started.
+
+
+In the default configuration, Xen will be capable of both styles of
+Hyperlaunch from the same hypervisor binary, which, when paired with its XSM
+flask support, provides strong controls that enable fine-grained system
+partitioning.
+
+
+* Retiring Term: “DomB”: will no longer be used to describe the optional first
+  domain that is started. It is replaced with the more general term: “Boot
+  Domain”.
+
+* Retiring Term: “Dom0less”: it is to be replaced with “Hyperlaunch Static”
+
+
+Appendix 3: Terminology
+-----------------------
+
+To help ensure clarity in reading this document, the following is the
+definition of terminology used within this document.
+
+
+Basic Configuration
+    the minimal information the hypervisor requires to instantiate a domain instance
+
+
+Boot Domain
+    a domain with limited privileges launched by the hypervisor during a
+    Multiple Domain Boot that runs as the first domain started. In the Hyperlaunch
+    architecture, it is responsible for assisting with higher level operations of
+    the domain setup process.
+
+
+Classic Launch
+    a backwards-compatible host boot that ends with the launch of a single domain (Dom0)
+
+
+Console Domain
+    a domain that has the Xen console assigned to it
+
+
+Control Domain
+    a privileged domain that has been granted Control Domain permissions which
+    are those that are required by the Xen toolstack for managing other domains.
+    These permissions are a subset of those that are granted to Dom0.
+
+
+Device Tree
+    a standardized data structure, with defined file formats, for describing
+    initial system configuration
+
+
+Disaggregation
+    the separation of system roles and responsibilities across multiple
+    connected components that work together to provide functionality
+
+
+Dom0
+    the highly-privileged, first and only domain started at host boot on a
+    conventional Xen system
+
+
+Dom0less
+    an existing feature of Xen on Arm that provides Multiple Domain Boot
+
+
+Domain
+    a running instance of a virtual machine (as the term is commonly used in
+    the Xen community)
+
+DomB
+     the former name for Hyperlaunch
+
+
+Extended Configuration
+    any configuration options for a domain beyond its Basic Configuration
+
+
+Hardware Domain
+    a privileged domain that has been granted permissions to access and manage
+    host hardware. These permissions are a subset of those that are granted to
+    Dom0.
+
+
+Host Boot
+    the system startup of Xen using the configuration provided by the bootloader
+
+
+Hyperlaunch
+    a flexible host boot that ends with the launch of one or more domains
+
+
+Initial Domain
+    a domain that is described in the LCM that is run as part of a multiple
+    domain boot. This includes the Boot Domain, Recovery Domain and all Launched
+    Domains.
+
+
+Late Hardware Domain
+    a Hardware Domain that is launched after host boot has already completed
+    with a running Dom0. When the Late Hardware Domain is started, Dom0
+    relinquishes and transfers the permissions to access and manage host hardware
+    to it.
+
+
+Launch Control Module (LCM)
+    A file supplied to the hypervisor by the bootloader that contains
+    configuration data for the hypervisor and the initial set of virtual machines
+    to be run at boot
+
+
+Launched Domain
+    a domain, aside from the boot domain and recovery domain, that is started as
+    part of a multiple domain boot and remains running once the boot process is
+    complete
+
+
+Multiple Domain Boot
+    a system configuration where the hypervisor and multiple virtual machines
+    are all launched when the host system hardware boots
+
+
+Recovery Domain
+    an optional fallback domain that the hypervisor may start in the event of a
+    detectable error encountered during the multiple domain boot process
+
+
+System Device Tree
+    this is the product of an Arm community project to extend Device Tree to
+    cover more aspects of initial system configuration
+
+
+Appendix 4: Copyright License
+-----------------------------
+
+This work is licensed under a Creative Commons Attribution 4.0 International
+License. A copy of this license may be obtained from the Creative Commons
+website (https://creativecommons.org/licenses/by/4.0/legalcode).
+
+| Contributions by:
+| Christopher Clark are Copyright © 2021 Star Lab Corporation
+| Daniel P. Smith are Copyright © 2021 Apertus Solutions, LLC
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 04:17:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 04:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127182.238959 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPGl-0001OV-Tq; Fri, 14 May 2021 04:17:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127182.238959; Fri, 14 May 2021 04:17:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPGl-0001OO-Qu; Fri, 14 May 2021 04:17:47 +0000
Received: by outflank-mailman (input) for mailman id 127182;
 Fri, 14 May 2021 04:17:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhPGk-0001OI-SF
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 04:17:46 +0000
Received: from mail-io1-xd33.google.com (unknown [2607:f8b0:4864:20::d33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 87656a00-7770-4f5f-8ce8-5cd577aad3e3;
 Fri, 14 May 2021 04:17:44 +0000 (UTC)
Received: by mail-io1-xd33.google.com with SMTP id d24so17097648ios.2
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 21:17:44 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g11sm2401505ilq.38.2021.05.13.21.17.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 21:17:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 87656a00-7770-4f5f-8ce8-5cd577aad3e3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=vTOGk9Ujn5QyJxmbDoyws86RnXTyUDZH51yCs4bct4o=;
        b=hjTIh9tkjnIn1yDFr8AyHLkj5SfHQCsv4IvPEYNIud99Pdm9MXwbvrWvny515zTM0W
         XQPgUiVqUDmsbljf81EJinSxbWxkOZd03HrZzo2jXZpziCbk5GxlNL6n1RDiMCJ1IyO8
         YkmD2rgD3vQGmNEisUFV4er/kpy+NWBJZMUZTpfZ1dmfAeqwx9X9GAxNdHIKKFAw2eBv
         4GMF+gSflRgTaD9pnC2QBiAJo7YdVb6jT4CezQMB1Ejqxqu+lvFqWD3iPFax5quKeBcA
         5Mxydvc34VgW8IigZcB2J0R+aSNfowc+GG3piRga84Z98fHY604PQJ2TuMjlULk3Ua7c
         NIYg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=vTOGk9Ujn5QyJxmbDoyws86RnXTyUDZH51yCs4bct4o=;
        b=ea08uSCHYq3Ic/MrjZkCxCfJFsO00CoyB1vc5Dy8G5x+5oy2QQOPlqMcPDBkL2A7Er
         RoENzkOey/4N5+buURp2tw8hHZ1VNKZ08oN7i3zDEo8z38/YuhKWfnKLdcyCQ6z+NUFx
         q+9DBRTGgGADvdixSyczK1Dae85/g9TFHpmnyQciwa9dZmbNyQduitgAi29FS2MZweiz
         bL6R9VJ7Qk1yZssnwQThYL7Z5lCfAq7RBclr/+bRn9PRq9C/KOfVtlaHL6/0UKN2zHM9
         bt9Asn3CMFja9xMk2oRiDz0U82epEC/c4QSrNj2obyaTv96Z61BY1/R4kgygecwyhVNs
         yuog==
X-Gm-Message-State: AOAM533amnXSPwY1j3Q68fxSEcSJksgR4hO9Nr+YknJlD+YWoqz8VG8n
	uY0QJ3Zvx2WSyd1Y82a5CPFcriKEWWVRZg==
X-Google-Smtp-Source: ABdhPJy6Q16nkZLa+RVnWmiLi8Y0JVcflNb9CIxqn8QqUN/wV7ZyB0eaHZ0UOiAlsioVxXwczMN2RQ==
X-Received: by 2002:a5e:a902:: with SMTP id c2mr30568174iod.80.1620965864155;
        Thu, 13 May 2021 21:17:44 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 0/5] Minimal build for RISCV
Date: Thu, 13 May 2021 22:17:07 -0600
Message-Id: <cover.1620965208.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

This series introduces a minimal build for RISCV. It is based on Bobby's
previous work from last year[0]. I have worked to rebase onto current Xen,
as well as update the various header files borrowed from Linux.

This series provides the patches necessary to get a minimal build
working. The build is "minimal" in the sense that 1) it uses a
minimal config and 2) files, functions, and variables are included if
and only if they are required for a successful build based on the
config. It doesn't run at all, as the functions just have stub
implementations.

My hope is that this can serve as a useful example for future ports as
well as inform the community of exactly what is imposed by common code
onto new architectures.

The first 3 patches are mods to non-RISCV bits that enable building a
config with:

  !CONFIG_HAS_NS16550
  !CONFIG_HAS_PASSTHROUGH
  !CONFIG_GRANT_TABLE

respectively. The fourth patch adds the RISCV files, and the last patch
adds a docker container for doing the build. To build from the docker
container (after creating it locally), you can run the following:

  $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen 

The sources taken from Linux are documented in arch/riscv/README.sources.
There were also some files copied from arm:

  asm-arm/softirq.h
  asm-arm/random.h
  asm-arm/nospec.h
  asm-arm/numa.h
  asm-arm/p2m.h
  asm-arm/delay.h
  asm-arm/debugger.h
  asm-arm/desc.h
  asm-arm/guest_access.h
  asm-arm/hardirq.h
  lib/find_next_bit.c

I imagine some of these will want some consolidation, but I put them
under the respective RISCV directories for now.

[0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/

Thanks,
Connor

--
Changes since v1:
  - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
    fixed for 4.15
  - Moved #ifdef-ary around iommu_enabled to iommu.h
  - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
    instead of defining an empty struct when !CONFIG_GRANT_TABLE

Connor Davis (5):
  xen/char: Default HAS_NS16550 to y only for X86 and ARM
  xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
  xen: Fix build when !CONFIG_GRANT_TABLE
  xen: Add files needed for minimal riscv build
  automation: add container for riscv64 builds

 automation/build/archlinux/riscv64.dockerfile |  33 ++
 automation/scripts/containerize               |   1 +
 config/riscv64.mk                             |   7 +
 xen/Makefile                                  |   8 +-
 xen/arch/riscv/Kconfig                        |  54 +++
 xen/arch/riscv/Kconfig.debug                  |   0
 xen/arch/riscv/Makefile                       |  57 +++
 xen/arch/riscv/README.source                  |  19 +
 xen/arch/riscv/Rules.mk                       |  13 +
 xen/arch/riscv/arch.mk                        |   7 +
 xen/arch/riscv/configs/riscv64_defconfig      |  12 +
 xen/arch/riscv/delay.c                        |  16 +
 xen/arch/riscv/domain.c                       | 144 +++++++
 xen/arch/riscv/domctl.c                       |  36 ++
 xen/arch/riscv/guestcopy.c                    |  57 +++
 xen/arch/riscv/head.S                         |   6 +
 xen/arch/riscv/irq.c                          |  78 ++++
 xen/arch/riscv/lib/Makefile                   |   1 +
 xen/arch/riscv/lib/find_next_bit.c            | 284 +++++++++++++
 xen/arch/riscv/mm.c                           |  93 +++++
 xen/arch/riscv/p2m.c                          | 144 +++++++
 xen/arch/riscv/percpu.c                       |  17 +
 xen/arch/riscv/platforms/Kconfig              |  31 ++
 xen/arch/riscv/riscv64/asm-offsets.c          |  31 ++
 xen/arch/riscv/setup.c                        |  27 ++
 xen/arch/riscv/shutdown.c                     |  28 ++
 xen/arch/riscv/smp.c                          |  35 ++
 xen/arch/riscv/smpboot.c                      |  34 ++
 xen/arch/riscv/sysctl.c                       |  33 ++
 xen/arch/riscv/time.c                         |  35 ++
 xen/arch/riscv/traps.c                        |  35 ++
 xen/arch/riscv/vm_event.c                     |  39 ++
 xen/arch/riscv/xen.lds.S                      | 113 ++++++
 xen/common/memory.c                           |  10 +
 xen/drivers/char/Kconfig                      |   2 +-
 xen/include/asm-riscv/altp2m.h                |  39 ++
 xen/include/asm-riscv/asm.h                   |  77 ++++
 xen/include/asm-riscv/asm_defns.h             |  24 ++
 xen/include/asm-riscv/atomic.h                | 204 ++++++++++
 xen/include/asm-riscv/bitops.h                | 331 +++++++++++++++
 xen/include/asm-riscv/bug.h                   |  54 +++
 xen/include/asm-riscv/byteorder.h             |  16 +
 xen/include/asm-riscv/cache.h                 |  24 ++
 xen/include/asm-riscv/cmpxchg.h               | 382 ++++++++++++++++++
 xen/include/asm-riscv/compiler_types.h        |  32 ++
 xen/include/asm-riscv/config.h                | 110 +++++
 xen/include/asm-riscv/cpufeature.h            |  17 +
 xen/include/asm-riscv/csr.h                   | 219 ++++++++++
 xen/include/asm-riscv/current.h               |  47 +++
 xen/include/asm-riscv/debugger.h              |  15 +
 xen/include/asm-riscv/delay.h                 |  17 +
 xen/include/asm-riscv/desc.h                  |  12 +
 xen/include/asm-riscv/device.h                |  15 +
 xen/include/asm-riscv/div64.h                 |  23 ++
 xen/include/asm-riscv/domain.h                |  50 +++
 xen/include/asm-riscv/event.h                 |  42 ++
 xen/include/asm-riscv/fence.h                 |  12 +
 xen/include/asm-riscv/flushtlb.h              |  34 ++
 xen/include/asm-riscv/grant_table.h           |  12 +
 xen/include/asm-riscv/guest_access.h          |  41 ++
 xen/include/asm-riscv/guest_atomics.h         |  60 +++
 xen/include/asm-riscv/hardirq.h               |  27 ++
 xen/include/asm-riscv/hypercall.h             |  12 +
 xen/include/asm-riscv/init.h                  |  42 ++
 xen/include/asm-riscv/io.h                    | 283 +++++++++++++
 xen/include/asm-riscv/iocap.h                 |  13 +
 xen/include/asm-riscv/iommu.h                 |  46 +++
 xen/include/asm-riscv/irq.h                   |  58 +++
 xen/include/asm-riscv/mem_access.h            |   4 +
 xen/include/asm-riscv/mm.h                    | 246 +++++++++++
 xen/include/asm-riscv/monitor.h               |  65 +++
 xen/include/asm-riscv/nospec.h                |  25 ++
 xen/include/asm-riscv/numa.h                  |  41 ++
 xen/include/asm-riscv/p2m.h                   | 218 ++++++++++
 xen/include/asm-riscv/page-bits.h             |  11 +
 xen/include/asm-riscv/page.h                  |  73 ++++
 xen/include/asm-riscv/paging.h                |  15 +
 xen/include/asm-riscv/pci.h                   |  31 ++
 xen/include/asm-riscv/percpu.h                |  33 ++
 xen/include/asm-riscv/processor.h             |  59 +++
 xen/include/asm-riscv/random.h                |   9 +
 xen/include/asm-riscv/regs.h                  |  23 ++
 xen/include/asm-riscv/setup.h                 |  14 +
 xen/include/asm-riscv/smp.h                   |  46 +++
 xen/include/asm-riscv/softirq.h               |  16 +
 xen/include/asm-riscv/spinlock.h              |  12 +
 xen/include/asm-riscv/string.h                |  28 ++
 xen/include/asm-riscv/sysregs.h               |  16 +
 xen/include/asm-riscv/system.h                |  99 +++++
 xen/include/asm-riscv/time.h                  |  31 ++
 xen/include/asm-riscv/trace.h                 |  12 +
 xen/include/asm-riscv/types.h                 |  60 +++
 xen/include/asm-riscv/vm_event.h              |  60 +++
 xen/include/asm-riscv/xenoprof.h              |  12 +
 xen/include/public/arch-riscv.h               | 134 ++++++
 xen/include/public/arch-riscv/hvm/save.h      |  39 ++
 xen/include/public/hvm/save.h                 |   2 +
 xen/include/public/pmu.h                      |   2 +
 xen/include/public/xen.h                      |   2 +
 xen/include/xen/domain.h                      |   1 +
 xen/include/xen/grant_table.h                 |   3 +-
 xen/include/xen/iommu.h                       |   8 +-
 102 files changed, 5375 insertions(+), 5 deletions(-)
 create mode 100644 automation/build/archlinux/riscv64.dockerfile
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/README.source
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/riscv64_defconfig
 create mode 100644 xen/arch/riscv/delay.c
 create mode 100644 xen/arch/riscv/domain.c
 create mode 100644 xen/arch/riscv/domctl.c
 create mode 100644 xen/arch/riscv/guestcopy.c
 create mode 100644 xen/arch/riscv/head.S
 create mode 100644 xen/arch/riscv/irq.c
 create mode 100644 xen/arch/riscv/lib/Makefile
 create mode 100644 xen/arch/riscv/lib/find_next_bit.c
 create mode 100644 xen/arch/riscv/mm.c
 create mode 100644 xen/arch/riscv/p2m.c
 create mode 100644 xen/arch/riscv/percpu.c
 create mode 100644 xen/arch/riscv/platforms/Kconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/setup.c
 create mode 100644 xen/arch/riscv/shutdown.c
 create mode 100644 xen/arch/riscv/smp.c
 create mode 100644 xen/arch/riscv/smpboot.c
 create mode 100644 xen/arch/riscv/sysctl.c
 create mode 100644 xen/arch/riscv/time.c
 create mode 100644 xen/arch/riscv/traps.c
 create mode 100644 xen/arch/riscv/vm_event.c
 create mode 100644 xen/arch/riscv/xen.lds.S
 create mode 100644 xen/include/asm-riscv/altp2m.h
 create mode 100644 xen/include/asm-riscv/asm.h
 create mode 100644 xen/include/asm-riscv/asm_defns.h
 create mode 100644 xen/include/asm-riscv/atomic.h
 create mode 100644 xen/include/asm-riscv/bitops.h
 create mode 100644 xen/include/asm-riscv/bug.h
 create mode 100644 xen/include/asm-riscv/byteorder.h
 create mode 100644 xen/include/asm-riscv/cache.h
 create mode 100644 xen/include/asm-riscv/cmpxchg.h
 create mode 100644 xen/include/asm-riscv/compiler_types.h
 create mode 100644 xen/include/asm-riscv/config.h
 create mode 100644 xen/include/asm-riscv/cpufeature.h
 create mode 100644 xen/include/asm-riscv/csr.h
 create mode 100644 xen/include/asm-riscv/current.h
 create mode 100644 xen/include/asm-riscv/debugger.h
 create mode 100644 xen/include/asm-riscv/delay.h
 create mode 100644 xen/include/asm-riscv/desc.h
 create mode 100644 xen/include/asm-riscv/device.h
 create mode 100644 xen/include/asm-riscv/div64.h
 create mode 100644 xen/include/asm-riscv/domain.h
 create mode 100644 xen/include/asm-riscv/event.h
 create mode 100644 xen/include/asm-riscv/fence.h
 create mode 100644 xen/include/asm-riscv/flushtlb.h
 create mode 100644 xen/include/asm-riscv/grant_table.h
 create mode 100644 xen/include/asm-riscv/guest_access.h
 create mode 100644 xen/include/asm-riscv/guest_atomics.h
 create mode 100644 xen/include/asm-riscv/hardirq.h
 create mode 100644 xen/include/asm-riscv/hypercall.h
 create mode 100644 xen/include/asm-riscv/init.h
 create mode 100644 xen/include/asm-riscv/io.h
 create mode 100644 xen/include/asm-riscv/iocap.h
 create mode 100644 xen/include/asm-riscv/iommu.h
 create mode 100644 xen/include/asm-riscv/irq.h
 create mode 100644 xen/include/asm-riscv/mem_access.h
 create mode 100644 xen/include/asm-riscv/mm.h
 create mode 100644 xen/include/asm-riscv/monitor.h
 create mode 100644 xen/include/asm-riscv/nospec.h
 create mode 100644 xen/include/asm-riscv/numa.h
 create mode 100644 xen/include/asm-riscv/p2m.h
 create mode 100644 xen/include/asm-riscv/page-bits.h
 create mode 100644 xen/include/asm-riscv/page.h
 create mode 100644 xen/include/asm-riscv/paging.h
 create mode 100644 xen/include/asm-riscv/pci.h
 create mode 100644 xen/include/asm-riscv/percpu.h
 create mode 100644 xen/include/asm-riscv/processor.h
 create mode 100644 xen/include/asm-riscv/random.h
 create mode 100644 xen/include/asm-riscv/regs.h
 create mode 100644 xen/include/asm-riscv/setup.h
 create mode 100644 xen/include/asm-riscv/smp.h
 create mode 100644 xen/include/asm-riscv/softirq.h
 create mode 100644 xen/include/asm-riscv/spinlock.h
 create mode 100644 xen/include/asm-riscv/string.h
 create mode 100644 xen/include/asm-riscv/sysregs.h
 create mode 100644 xen/include/asm-riscv/system.h
 create mode 100644 xen/include/asm-riscv/time.h
 create mode 100644 xen/include/asm-riscv/trace.h
 create mode 100644 xen/include/asm-riscv/types.h
 create mode 100644 xen/include/asm-riscv/vm_event.h
 create mode 100644 xen/include/asm-riscv/xenoprof.h
 create mode 100644 xen/include/public/arch-riscv.h
 create mode 100644 xen/include/public/arch-riscv/hvm/save.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 04:17:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 04:17:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127183.238970 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPGr-0001fg-5C; Fri, 14 May 2021 04:17:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127183.238970; Fri, 14 May 2021 04:17:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPGr-0001fZ-1p; Fri, 14 May 2021 04:17:53 +0000
Received: by outflank-mailman (input) for mailman id 127183;
 Fri, 14 May 2021 04:17:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhPGp-0001OI-Op
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 04:17:51 +0000
Received: from mail-io1-xd29.google.com (unknown [2607:f8b0:4864:20::d29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c813b84c-7565-4142-b8fb-a43f6111bbcd;
 Fri, 14 May 2021 04:17:50 +0000 (UTC)
Received: by mail-io1-xd29.google.com with SMTP id k25so27007385iob.6
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 21:17:50 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g11sm2401505ilq.38.2021.05.13.21.17.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 21:17:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c813b84c-7565-4142-b8fb-a43f6111bbcd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=3ZyiRnb9WVtOr1miy5zSjmis6+BOGxMggG62SA/umH8=;
        b=i6FKJqyF+Xk2tmcAcez6cPGV1ZKusR7+pKqGgdrNuN/FC+2v7cCt5bFEqQ2EfuThrD
         PXm03/YcOGFz+mwYopL+G/eosrSt/G2XEyle8jqxGQZjqJNWaKAB8dmf12XKNeRaTG2s
         ejOAujxyN7QxxbyfIdE7QzGmbYjJxNSdS7ugo2efj/UA+cTFG/kI+GJO11d74vN2NUr1
         /GWxzmtvsHeUVf7dOLSvAVkdR49ml/P3JXMyQ1VatKyfBQzh/niQmjVmNDgekzxSVe2V
         27LbGjxdGsTF/2edXaHKXc5PzdP8OrDwKxx9Gbw6UUnpdEBO49NGUfa59ZTeWl1Oxhnt
         Kckg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=3ZyiRnb9WVtOr1miy5zSjmis6+BOGxMggG62SA/umH8=;
        b=sOL/lGVyLEbOM8TZhDSS3Ucxx4VWHPPJb4iz3WzN5YvqJPsKj2Ldr43pPjTSu3w9tF
         mLpK0BK5QnkmX7x6ciuyeh6MKMWiX51+Mnx7AlQNKNC6hUpokJwLAuQXgKRWSZdt5l+H
         VGLyZ5CoAcOHa/2KhOJyc/ICXcAkN0EVd+x4SaNwx22/8viFyuGj28MaQQvL1NMqlazF
         sDchluqwFa7uFzF/BwJnRLEf4zQBT1Tb4Hdo71iKuSRv5ofN9FOqjCUMZA9QKxneazvY
         KgusoHx/zar9mFdtJHR5fvRQV+hal+JhuaO5xpjsHtxQhxX5iTG8wC93vKjzsg1PzpCL
         NblQ==
X-Gm-Message-State: AOAM530sneCnoVb5Yrnz9U1TijAzjHOgz95j6ZwQdVZ1Jo9LbTS8PXZV
	2FhyW+f8VWGqQV5xgFgpsNuW3nDjrjr9Bg==
X-Google-Smtp-Source: ABdhPJwIwS9f6N+lPyctzv8VNftZHmfOX9YZFwfBjbfRWb8nYN+en2Qog6o0n8Ilno+SY0bp9slkcg==
X-Received: by 2002:a05:6638:3ab:: with SMTP id z11mr41058043jap.58.1620965870246;
        Thu, 13 May 2021 21:17:50 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/5] xen/char: Default HAS_NS16550 to y only for X86 and ARM
Date: Thu, 13 May 2021 22:17:08 -0600
Message-Id: <3960a676376e0163d97ac02f968966cdfaccbf75.1620965208.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620965208.git.connojdavis@gmail.com>
References: <cover.1620965208.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defaulting to yes only on X86 and ARM reduces the requirements
for a minimal build when porting to new architectures.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/drivers/char/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..b15b0c8d6a 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -1,6 +1,6 @@
 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
-	default y
+	default y if (ARM || X86)
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
 
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 04:17:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 04:17:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127184.238981 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPGw-00020P-D1; Fri, 14 May 2021 04:17:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127184.238981; Fri, 14 May 2021 04:17:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPGw-00020I-9r; Fri, 14 May 2021 04:17:58 +0000
Received: by outflank-mailman (input) for mailman id 127184;
 Fri, 14 May 2021 04:17:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhPGu-0001OI-P8
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 04:17:56 +0000
Received: from mail-io1-xd31.google.com (unknown [2607:f8b0:4864:20::d31])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b879d6e7-86b3-41ff-be44-6f7f575995ce;
 Fri, 14 May 2021 04:17:53 +0000 (UTC)
Received: by mail-io1-xd31.google.com with SMTP id d11so6462089iod.5
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 21:17:53 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g11sm2401505ilq.38.2021.05.13.21.17.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 21:17:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b879d6e7-86b3-41ff-be44-6f7f575995ce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=iluLvhPa3OY2DopHKBkR7Xvis99pDwru79D3Q8FxEio=;
        b=jPMW/VtCtDMS19GIRdyXHZIBKMlFqX8JI92XiGO40z/NHetUBJ2TuIskv1/WSxBmQf
         Sf1w2t33r4nytOTg7XOwrEh7hr42zRECc1sDJ8SanNw0XG6lDdMGsCHLje25stJzgurx
         wnct/xjh5vXsZ0DxEYo1l9Rip4WNzApI6xkt4NouPHETOPIU7bSvLqHtSXrpMSct24Wl
         y4ur7tkFPwDrxL87N23g0dc8gqVi4FxgnXyrEl2+FLQSUJm9jaje9KjsAB97Rl8njoDz
         7k5Pz6v+at0KyCMvT/J3zWY6CHQGcCXZpp8TfgQVWhfHUocepLh/0zKDsPAqCn1+VYi/
         g8bQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=iluLvhPa3OY2DopHKBkR7Xvis99pDwru79D3Q8FxEio=;
        b=qUumlQToiluZvylP6URBVdRJuVqtPTCCY3q29NKsn0ie22h4RGiyJSh263Zk+jx1Yu
         RPUKKQaZ8sv4luzsYiI9GhPixuJ6Ly0lR9tw2Bkayk+WSWlsRV0hnMyFlhRs3WQrj6KE
         lD2h3a08Djh7q8NdnQIBNPjfvGE7Zf7GwEa3MDHHWXAUP0W4Jxi7MXC4fWfGuxX3orLq
         nLM5l2SZJVPGk90hvRNxy6FGmsg2UXx1VUn1IJE/u1uRVhIAMaer8KnQPPFZ3NwttWxW
         QtBoBLM7EDt6ukKl6HFnkKY64mBcUUPn2MH41q8BNyTkii9mQ9Jh8TU6FlJKvYLNJCxJ
         N1nw==
X-Gm-Message-State: AOAM531h+2pkXpfsvDtAorQEs4Y0lZNMbYDjgH9KueVOY9WiZHSAmiFv
	zayrOuIM9pHz63bFZHvJx0VEDD+vueNrBQ==
X-Google-Smtp-Source: ABdhPJyFVJSatGbAphL2zDuKfeIWjon6OLAGPGNjVPrqio5x0pw2AJ8syH7B+6o2fgXcBhlJAEr46w==
X-Received: by 2002:a5d:9ada:: with SMTP id x26mr32706913ion.209.1620965872745;
        Thu, 13 May 2021 21:17:52 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v2 2/5] xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
Date: Thu, 13 May 2021 22:17:09 -0600
Message-Id: <1156cb116da19ef64323e472bb6b6e87c6c73d77.1620965208.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620965208.git.connojdavis@gmail.com>
References: <cover.1620965208.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variables iommu_enabled and iommu_dont_flush_iotlb are defined in
drivers/passthrough/iommu.c and are referenced in common code, which
causes the link to fail when !CONFIG_HAS_PASSTHROUGH.

Guard references to these variables in common code so that Xen
builds when !CONFIG_HAS_PASSTHROUGH.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/common/memory.c     | 10 ++++++++++
 xen/include/xen/iommu.h |  8 +++++++-
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index b5c70c4b85..72a6b70cb5 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     p2m_type_t p2mt;
 #endif
     mfn_t mfn;
+#ifdef CONFIG_HAS_PASSTHROUGH
     bool *dont_flush_p, dont_flush;
+#endif
     int rc;
 
 #ifdef CONFIG_X86
@@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
      * Since we're likely to free the page below, we need to suspend
      * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
      */
+#ifdef CONFIG_HAS_PASSTHROUGH
     dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
     dont_flush = *dont_flush_p;
     *dont_flush_p = false;
+#endif
 
     rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     *dont_flush_p = dont_flush;
+#endif
 
     /*
      * With the lack of an IOMMU on some platforms, domains with DMA-capable
@@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
     xatp->gpfn += start;
     xatp->size -= start;
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     if ( is_iommu_enabled(d) )
     {
        this_cpu(iommu_dont_flush_iotlb) = 1;
        extra.ppage = &pages[0];
     }
+#endif
 
     while ( xatp->size > done )
     {
@@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         }
     }
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     if ( is_iommu_enabled(d) )
     {
         int ret;
@@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
+#endif
 
     return rc;
 }
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 460755df29..d878a93269 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -51,9 +51,15 @@ static inline bool_t dfn_eq(dfn_t x, dfn_t y)
     return dfn_x(x) == dfn_x(y);
 }
 
-extern bool_t iommu_enable, iommu_enabled;
+extern bool_t iommu_enable;
 extern bool force_iommu, iommu_quarantine, iommu_verbose;
 
+#ifdef CONFIG_HAS_PASSTHROUGH
+extern bool_t iommu_enabled;
+#else
+#define iommu_enabled false
+#endif
+
 #ifdef CONFIG_X86
 extern enum __packed iommu_intremap {
    /*
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 04:18:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 04:18:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127185.238992 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPH0-0002MI-OX; Fri, 14 May 2021 04:18:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127185.238992; Fri, 14 May 2021 04:18:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPH0-0002MB-KC; Fri, 14 May 2021 04:18:02 +0000
Received: by outflank-mailman (input) for mailman id 127185;
 Fri, 14 May 2021 04:18:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhPGz-0001OI-PR
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 04:18:01 +0000
Received: from mail-il1-x129.google.com (unknown [2607:f8b0:4864:20::129])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7257c6c0-4170-4379-b10f-cec379add951;
 Fri, 14 May 2021 04:17:54 +0000 (UTC)
Received: by mail-il1-x129.google.com with SMTP id j20so24780813ilo.10
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 21:17:54 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g11sm2401505ilq.38.2021.05.13.21.17.53
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 21:17:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7257c6c0-4170-4379-b10f-cec379add951
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=aZUcLbAyYZr/mu1DuUlZ6kf0uoeQ0LVDzaRiHOP28GY=;
        b=ivTifj4x38uZkQDMLhQcm1GuvzLQ3TzyBul3hVo/nHjNJWoOdpeEMYIvLK71DkZSUU
         /LS3ZSt/yGGEBZcGE0lM/eHQpTaD2vwr7jhJjPyEX9jVzn35wfszV55Zd6ifdC5Wmzrv
         rJPX+SYpAbS2dc0GbPQNK9m0aOQJtWxhOqrJMiLmsZF0x8ILCIE1kMB2o+IL9QkKWdVN
         aafh2dQgScTIjVWr/yyxU3TaxXxZmSSKBMQSxHqWaDK132vi2+VDCLO1yAcDKz1B9bqw
         9YHRdR2cZ+7OBh/kNskMHo/BscvqmrTpCHlnYZ/zsYf6VgMvk2LSecmbJoQMM5hKkzkd
         9biA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=aZUcLbAyYZr/mu1DuUlZ6kf0uoeQ0LVDzaRiHOP28GY=;
        b=WT5TJgscEDmVLModMM326FwYTFIAjNmYejdX4r614thgc6uHKFl7Ir2ShKrEy1YDs/
         fLkLSzA3pXt70ayHrBVeT9h+ZdVoTP5VQykUvKbyx39OpB/aUc+uOAqBhzCB4KaBF0ta
         ebwqkJVAnxxfAL89SeueO5aLs6pdIAVgV1y0X0uo+eu1oOA5CHHJvYtmkuW0deoN7V7/
         dx2eY/ImtWJJX/I7Jo0NLwMgaKnDHOolt2wCIP4NUiNELpgQv7/yEiRMC2ywjenBJs9S
         4/J0VmeheSnlUI00x6Cmv223HLW4Rv2pdX1zGMy8VT7GHJd80fw1Xv3zFDdziXLCOBjG
         nSlA==
X-Gm-Message-State: AOAM5333BTmXcCl5UhcxdcHmQoylVYR1vZJPegeRLZqzObfHraiV1wFb
	qrFQFXh9Vy9Spne9KYlPFJFjYaSxJYfQGQ==
X-Google-Smtp-Source: ABdhPJxYgVIKTsFkK5X5NNbwPOKWiV1blY4u9yJdwXEOsFjDzN/gIMSGEZ5yTJsADZ7uPelTf3QhFw==
X-Received: by 2002:a92:c645:: with SMTP id 5mr4223943ill.142.1620965874438;
        Thu, 13 May 2021 21:17:54 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 3/5] xen: Fix build when !CONFIG_GRANT_TABLE
Date: Thu, 13 May 2021 22:17:10 -0600
Message-Id: <834f7995ae80a3b37b6d508d1c989b4ee391f61b.1620965208.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620965208.git.connojdavis@gmail.com>
References: <cover.1620965208.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the forward declaration of struct grant_table in grant_table.h
above the #ifdef CONFIG_GRANT_TABLE guard. This fixes the following
build error:

/build/xen/include/xen/grant_table.h:84:50: error: 'struct grant_table'
declared inside parameter list will not be visible outside of this
definition or declaration [-Werror]
   84 | static inline int mem_sharing_gref_to_gfn(struct grant_table *gt,
      |

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/include/xen/grant_table.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 63b6dc78f4..9f8b7e66c1 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -28,9 +28,10 @@
 #include <public/grant_table.h>
 #include <asm/grant_table.h>
 
-#ifdef CONFIG_GRANT_TABLE
 struct grant_table;
 
+#ifdef CONFIG_GRANT_TABLE
+
 extern unsigned int opt_max_grant_frames;
 
 /* Create/destroy per-domain grant table context. */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 04:18:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 04:18:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127186.239003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPH6-0002mR-8B; Fri, 14 May 2021 04:18:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127186.239003; Fri, 14 May 2021 04:18:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPH6-0002mG-3E; Fri, 14 May 2021 04:18:08 +0000
Received: by outflank-mailman (input) for mailman id 127186;
 Fri, 14 May 2021 04:18:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhPH4-0001OI-PZ
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 04:18:06 +0000
Received: from mail-il1-x12b.google.com (unknown [2607:f8b0:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58f48555-1565-4208-bab3-9c76742ed6b0;
 Fri, 14 May 2021 04:17:58 +0000 (UTC)
Received: by mail-il1-x12b.google.com with SMTP id e14so24780380ils.12
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 21:17:58 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g11sm2401505ilq.38.2021.05.13.21.17.57
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 21:17:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58f48555-1565-4208-bab3-9c76742ed6b0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=BT6O1zmlP2z29HXEprVn3g9HsKzNF9PGCbDM4AzPEW0=;
        b=pQczM9ZC3BARuMOdCbfLFjsZnq+3ddd9ElV754WAqTrDIqoCQXvmaKbdkRCW2IsFKZ
         uZeO6tTNGPBUyHX+2EHI54nkX+6xKd8h4r1AdC9I6SBgvaRtlBb939NrhDxkXUprIQUb
         TuFtwBkqOsNkuO0TvV1jFBVLFi22NZoUIfpHR2+4JCSlM9IwfrOX61wkScF4Gf8J/ME9
         YxZHEnT5YruOxW0Dvbbe4OZMsOG/HSJ7atIei8DCKq8TTF20jI/zMSKRMVLoY2/14lr4
         FR257G1aHV9RzK5TN+oGtd/TVeVaORZWIV29gB2/+gn19r6YDiAcByPdAUq0jDtRzpdR
         CtdA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=BT6O1zmlP2z29HXEprVn3g9HsKzNF9PGCbDM4AzPEW0=;
        b=em9zajpMjslai1vj1NJfSFsI1pEbCwjbVGwGc1EUHGXb9HwPJhiOfwofVkw65DFz+p
         YSxbCj8k02MHsAJh6yUQ8fXv0pzDJyn81Gpt/JpzxQMjBRpc/IdBcCAMEtaKTkkPk7bN
         kKev+ldl8/IT84rDiIptE3dS76WlxFaEivOJoq+Qz3OFrjFWq3KazWEvfGsSBr1OBG5N
         Y+V2AqPFYZhjREFDA6MiowfgtiNz60xXhsg7Uw8z5bchXqZu/1+/uZpUgpzOQr5nXU4k
         uCw2f6gD8hrPo+MMbjlmV01r32/WPF6lL1RNoPDD6fMlbyjSS9xNAfQO9u4wpRCxmJy2
         pADQ==
X-Gm-Message-State: AOAM533E4g1XjEiIEMAAZ/hggiCu3jptuDaf/tqbr+1CREqa00buw3dt
	GW3RH4U0b87tDKv51+27Flhi/1lFBt0oyA==
X-Google-Smtp-Source: ABdhPJyCzzMyZqih7E+w2fiBBhZDynBm+hJhmYKL9iCW6+0WujSWgCNHq/2HDM81oLh3HLNtRqod1g==
X-Received: by 2002:a05:6e02:1a4b:: with SMTP id u11mr38672185ilv.258.1620965877573;
        Thu, 13 May 2021 21:17:57 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v2 5/5] automation: add container for riscv64 builds
Date: Thu, 13 May 2021 22:17:12 -0600
Message-Id: <3fc237b4350832e63be4943d4fd1b029fea8d486.1620965208.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620965208.git.connojdavis@gmail.com>
References: <cover.1620965208.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a container for cross-compiling xen to riscv64.
It includes only the cross-compiler and the packages necessary for
building xen itself (packages for tools, stubdoms, etc., can be
added later).

To build xen in the container run the following:

$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen
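
The container name is resolved by the automation/scripts/containerize
change in this patch; a minimal standalone sketch of the new case arm,
with the registry base hard-coded as in the script:

```shell
# Standalone sketch of the container-name resolution that the
# automation/scripts/containerize hunk in this patch adds.
BASE="registry.gitlab.com/xen-project/xen"
CONTAINER=riscv64
case "_${CONTAINER}" in
    _riscv64) CONTAINER="${BASE}/archlinux:riscv64" ;;
esac
echo "${CONTAINER}"   # registry.gitlab.com/xen-project/xen/archlinux:riscv64
```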

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 automation/build/archlinux/riscv64.dockerfile | 33 +++++++++++++++++++
 automation/scripts/containerize               |  1 +
 2 files changed, 34 insertions(+)
 create mode 100644 automation/build/archlinux/riscv64.dockerfile

diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/riscv64.dockerfile
new file mode 100644
index 0000000000..505b623c01
--- /dev/null
+++ b/automation/build/archlinux/riscv64.dockerfile
@@ -0,0 +1,33 @@
+FROM archlinux
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+# Packages needed for the build
+RUN pacman --noconfirm --needed -Syu \
+    base-devel \
+    gcc \
+    git
+
+# Packages needed for QEMU
+RUN pacman --noconfirm --needed -Syu \
+    pixman \
+    python \
+    sh
+
+# There is a regression in GDB that causes an assertion failure
+# when setting breakpoints; use this commit until it is fixed.
+RUN git clone --recursive -j$(nproc) --progress https://github.com/riscv/riscv-gnu-toolchain && \
+    cd riscv-gnu-toolchain/riscv-gdb && \
+    git checkout 1dd588507782591478882a891f64945af9e2b86c && \
+    cd  .. && \
+    ./configure --prefix=/opt/riscv && \
+    make linux -j$(nproc) && \
+    rm -R /riscv-gnu-toolchain
+
+# Add compiler path
+ENV PATH=/opt/riscv/bin/:${PATH}
+ENV CROSS_COMPILE=riscv64-unknown-linux-gnu-
+
+RUN useradd --create-home user
+USER user
+WORKDIR /build
diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index b7c81559fb..59edf0ba40 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -26,6 +26,7 @@ BASE="registry.gitlab.com/xen-project/xen"
 case "_${CONTAINER}" in
     _alpine) CONTAINER="${BASE}/alpine:3.12" ;;
     _archlinux|_arch) CONTAINER="${BASE}/archlinux:current" ;;
+    _riscv64) CONTAINER="${BASE}/archlinux:riscv64" ;;
     _centos7) CONTAINER="${BASE}/centos:7" ;;
     _centos72) CONTAINER="${BASE}/centos:7.2" ;;
     _fedora) CONTAINER="${BASE}/fedora:29";;
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 04:18:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 04:18:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127190.239014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPHJ-0003jF-KB; Fri, 14 May 2021 04:18:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127190.239014; Fri, 14 May 2021 04:18:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPHJ-0003j7-Fk; Fri, 14 May 2021 04:18:21 +0000
Received: by outflank-mailman (input) for mailman id 127190;
 Fri, 14 May 2021 04:18:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhPHH-0003er-Mh
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 04:18:19 +0000
Received: from mail-il1-x134.google.com (unknown [2607:f8b0:4864:20::134])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85699491-d6b8-4c61-a5c5-239e5970dadc;
 Fri, 14 May 2021 04:17:58 +0000 (UTC)
Received: by mail-il1-x134.google.com with SMTP id w7so8107589ilg.13
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 21:17:58 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g11sm2401505ilq.38.2021.05.13.21.17.55
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Thu, 13 May 2021 21:17:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85699491-d6b8-4c61-a5c5-239e5970dadc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=fwwzqx7aNAcHmdpy5K/S5WgbBGtOuN5CLTXXUV7UWpg=;
        b=HI9H8ha1t552U/6traNKVkv3VUhu+cj6bFoxHQGkz/polbf61kMxToCgD2/M4A2a9P
         xOvbctKNG3u/GfRY3JTmmPJUlOKIUW2qae2T1c4vXhmTViI18TP51pSBTrE+ryKv8Jgh
         7hOqbYzJC7/TYUqcJPjvh4vNzgLj0N/9J9hXupM2x4RzjiOZUw+4B/uDe9jFSf1vv9wM
         hpfX6jbIAiwCih+L89DpCu3hrXPXkErmUFOn3+nHd6SnE/lzo5cdCzgmIOPhlD5Sw1Qk
         Yl5idiuHDLZWQPWZOCXK1Z08CQyFrZynuxdlKF7ZeZsj0keI1N7yyRFKveozZkpH4OHw
         1CSA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=fwwzqx7aNAcHmdpy5K/S5WgbBGtOuN5CLTXXUV7UWpg=;
        b=jMB7t69i2svb0o55GE8RJn1+RmG5TmhP+IALUsC6CrV81sMFnGStsRxae54aykBlLR
         jEfvXLU0g7xvwoEOaDbJrRs8kwA8hnB5bLuVVl/muTcNpYBh1YtrJ4eid9dbLktbrBVM
         9S/WnmxY/wHklos8iPuNe60EYt3k/kYKj0BuXOWdyPZK6g7nqPzzZwFSpsJ/dGT9Q4vK
         jwWEtruPqGE0yb/tf8q1ZkYs+zb98dWnV2on3T1anNRctlxn1xtsDRjBoarFZ9MhOTF4
         O7kRog4gBnE8QVOFWY1pL03NBzK9jVsRpS3fFAfhOw5nrA8jPCGm6p3pFwHriDMSNf9V
         rtYw==
X-Gm-Message-State: AOAM532BurWUc4r9Jb3g0uUodYdCSJ7oy3N7UcaW6PEtBve4pTTzQ8AK
	GnEZF+o6T+wLtvAk9si2ZOcQOfA2utW5Ow==
X-Google-Smtp-Source: ABdhPJx+fJcZVVCrxsnJ9oIOQs9GDEnDIuBa9BERRgVFcoMcVKd9RFqlo9IMgW8xDd6J9bazqwXVUA==
X-Received: by 2002:a92:ce07:: with SMTP id b7mr10748131ilo.94.1620965876282;
        Thu, 13 May 2021 21:17:56 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Tamas K Lengyel <tamas@tklengyel.com>,
	Alexandru Isaila <aisaila@bitdefender.com>,
	Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: [PATCH v2 4/5] xen: Add files needed for minimal riscv build
Date: Thu, 13 May 2021 22:17:11 -0600
Message-Id: <c5d130b06de3d724921488387f1743d7996aac11.1620965208.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1620965208.git.connojdavis@gmail.com>
References: <cover.1620965208.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the minimum code required to get xen to build with
XEN_TARGET_ARCH=riscv64. It is minimal in the sense that every file and
function added is required for a successful build, given the .config
generated from riscv64_defconfig. The function implementations are just
stubs; actual implementations will be added later.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 config/riscv64.mk                        |   7 +
 xen/Makefile                             |   8 +-
 xen/arch/riscv/Kconfig                   |  54 ++++
 xen/arch/riscv/Kconfig.debug             |   0
 xen/arch/riscv/Makefile                  |  57 ++++
 xen/arch/riscv/README.source             |  19 ++
 xen/arch/riscv/Rules.mk                  |  13 +
 xen/arch/riscv/arch.mk                   |   7 +
 xen/arch/riscv/configs/riscv64_defconfig |  12 +
 xen/arch/riscv/delay.c                   |  16 +
 xen/arch/riscv/domain.c                  | 144 +++++++++
 xen/arch/riscv/domctl.c                  |  36 +++
 xen/arch/riscv/guestcopy.c               |  57 ++++
 xen/arch/riscv/head.S                    |   6 +
 xen/arch/riscv/irq.c                     |  78 +++++
 xen/arch/riscv/lib/Makefile              |   1 +
 xen/arch/riscv/lib/find_next_bit.c       | 284 +++++++++++++++++
 xen/arch/riscv/mm.c                      |  93 ++++++
 xen/arch/riscv/p2m.c                     | 144 +++++++++
 xen/arch/riscv/percpu.c                  |  17 +
 xen/arch/riscv/platforms/Kconfig         |  31 ++
 xen/arch/riscv/riscv64/asm-offsets.c     |  31 ++
 xen/arch/riscv/setup.c                   |  27 ++
 xen/arch/riscv/shutdown.c                |  28 ++
 xen/arch/riscv/smp.c                     |  35 +++
 xen/arch/riscv/smpboot.c                 |  34 ++
 xen/arch/riscv/sysctl.c                  |  33 ++
 xen/arch/riscv/time.c                    |  35 +++
 xen/arch/riscv/traps.c                   |  35 +++
 xen/arch/riscv/vm_event.c                |  39 +++
 xen/arch/riscv/xen.lds.S                 | 113 +++++++
 xen/include/asm-riscv/altp2m.h           |  39 +++
 xen/include/asm-riscv/asm.h              |  77 +++++
 xen/include/asm-riscv/asm_defns.h        |  24 ++
 xen/include/asm-riscv/atomic.h           | 204 ++++++++++++
 xen/include/asm-riscv/bitops.h           | 331 ++++++++++++++++++++
 xen/include/asm-riscv/bug.h              |  54 ++++
 xen/include/asm-riscv/byteorder.h        |  16 +
 xen/include/asm-riscv/cache.h            |  24 ++
 xen/include/asm-riscv/cmpxchg.h          | 382 +++++++++++++++++++++++
 xen/include/asm-riscv/compiler_types.h   |  32 ++
 xen/include/asm-riscv/config.h           | 110 +++++++
 xen/include/asm-riscv/cpufeature.h       |  17 +
 xen/include/asm-riscv/csr.h              | 219 +++++++++++++
 xen/include/asm-riscv/current.h          |  47 +++
 xen/include/asm-riscv/debugger.h         |  15 +
 xen/include/asm-riscv/delay.h            |  17 +
 xen/include/asm-riscv/desc.h             |  12 +
 xen/include/asm-riscv/device.h           |  15 +
 xen/include/asm-riscv/div64.h            |  23 ++
 xen/include/asm-riscv/domain.h           |  50 +++
 xen/include/asm-riscv/event.h            |  42 +++
 xen/include/asm-riscv/fence.h            |  12 +
 xen/include/asm-riscv/flushtlb.h         |  34 ++
 xen/include/asm-riscv/grant_table.h      |  12 +
 xen/include/asm-riscv/guest_access.h     |  41 +++
 xen/include/asm-riscv/guest_atomics.h    |  60 ++++
 xen/include/asm-riscv/hardirq.h          |  27 ++
 xen/include/asm-riscv/hypercall.h        |  12 +
 xen/include/asm-riscv/init.h             |  42 +++
 xen/include/asm-riscv/io.h               | 283 +++++++++++++++++
 xen/include/asm-riscv/iocap.h            |  13 +
 xen/include/asm-riscv/iommu.h            |  46 +++
 xen/include/asm-riscv/irq.h              |  58 ++++
 xen/include/asm-riscv/mem_access.h       |   4 +
 xen/include/asm-riscv/mm.h               | 246 +++++++++++++++
 xen/include/asm-riscv/monitor.h          |  65 ++++
 xen/include/asm-riscv/nospec.h           |  25 ++
 xen/include/asm-riscv/numa.h             |  41 +++
 xen/include/asm-riscv/p2m.h              | 218 +++++++++++++
 xen/include/asm-riscv/page-bits.h        |  11 +
 xen/include/asm-riscv/page.h             |  73 +++++
 xen/include/asm-riscv/paging.h           |  15 +
 xen/include/asm-riscv/pci.h              |  31 ++
 xen/include/asm-riscv/percpu.h           |  33 ++
 xen/include/asm-riscv/processor.h        |  59 ++++
 xen/include/asm-riscv/random.h           |   9 +
 xen/include/asm-riscv/regs.h             |  23 ++
 xen/include/asm-riscv/setup.h            |  14 +
 xen/include/asm-riscv/smp.h              |  46 +++
 xen/include/asm-riscv/softirq.h          |  16 +
 xen/include/asm-riscv/spinlock.h         |  12 +
 xen/include/asm-riscv/string.h           |  28 ++
 xen/include/asm-riscv/sysregs.h          |  16 +
 xen/include/asm-riscv/system.h           |  99 ++++++
 xen/include/asm-riscv/time.h             |  31 ++
 xen/include/asm-riscv/trace.h            |  12 +
 xen/include/asm-riscv/types.h            |  60 ++++
 xen/include/asm-riscv/vm_event.h         |  60 ++++
 xen/include/asm-riscv/xenoprof.h         |  12 +
 xen/include/public/arch-riscv.h          | 134 ++++++++
 xen/include/public/arch-riscv/hvm/save.h |  39 +++
 xen/include/public/hvm/save.h            |   2 +
 xen/include/public/pmu.h                 |   2 +
 xen/include/public/xen.h                 |   2 +
 xen/include/xen/domain.h                 |   1 +
 96 files changed, 5321 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/README.source
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/configs/riscv64_defconfig
 create mode 100644 xen/arch/riscv/delay.c
 create mode 100644 xen/arch/riscv/domain.c
 create mode 100644 xen/arch/riscv/domctl.c
 create mode 100644 xen/arch/riscv/guestcopy.c
 create mode 100644 xen/arch/riscv/head.S
 create mode 100644 xen/arch/riscv/irq.c
 create mode 100644 xen/arch/riscv/lib/Makefile
 create mode 100644 xen/arch/riscv/lib/find_next_bit.c
 create mode 100644 xen/arch/riscv/mm.c
 create mode 100644 xen/arch/riscv/p2m.c
 create mode 100644 xen/arch/riscv/percpu.c
 create mode 100644 xen/arch/riscv/platforms/Kconfig
 create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
 create mode 100644 xen/arch/riscv/setup.c
 create mode 100644 xen/arch/riscv/shutdown.c
 create mode 100644 xen/arch/riscv/smp.c
 create mode 100644 xen/arch/riscv/smpboot.c
 create mode 100644 xen/arch/riscv/sysctl.c
 create mode 100644 xen/arch/riscv/time.c
 create mode 100644 xen/arch/riscv/traps.c
 create mode 100644 xen/arch/riscv/vm_event.c
 create mode 100644 xen/arch/riscv/xen.lds.S
 create mode 100644 xen/include/asm-riscv/altp2m.h
 create mode 100644 xen/include/asm-riscv/asm.h
 create mode 100644 xen/include/asm-riscv/asm_defns.h
 create mode 100644 xen/include/asm-riscv/atomic.h
 create mode 100644 xen/include/asm-riscv/bitops.h
 create mode 100644 xen/include/asm-riscv/bug.h
 create mode 100644 xen/include/asm-riscv/byteorder.h
 create mode 100644 xen/include/asm-riscv/cache.h
 create mode 100644 xen/include/asm-riscv/cmpxchg.h
 create mode 100644 xen/include/asm-riscv/compiler_types.h
 create mode 100644 xen/include/asm-riscv/config.h
 create mode 100644 xen/include/asm-riscv/cpufeature.h
 create mode 100644 xen/include/asm-riscv/csr.h
 create mode 100644 xen/include/asm-riscv/current.h
 create mode 100644 xen/include/asm-riscv/debugger.h
 create mode 100644 xen/include/asm-riscv/delay.h
 create mode 100644 xen/include/asm-riscv/desc.h
 create mode 100644 xen/include/asm-riscv/device.h
 create mode 100644 xen/include/asm-riscv/div64.h
 create mode 100644 xen/include/asm-riscv/domain.h
 create mode 100644 xen/include/asm-riscv/event.h
 create mode 100644 xen/include/asm-riscv/fence.h
 create mode 100644 xen/include/asm-riscv/flushtlb.h
 create mode 100644 xen/include/asm-riscv/grant_table.h
 create mode 100644 xen/include/asm-riscv/guest_access.h
 create mode 100644 xen/include/asm-riscv/guest_atomics.h
 create mode 100644 xen/include/asm-riscv/hardirq.h
 create mode 100644 xen/include/asm-riscv/hypercall.h
 create mode 100644 xen/include/asm-riscv/init.h
 create mode 100644 xen/include/asm-riscv/io.h
 create mode 100644 xen/include/asm-riscv/iocap.h
 create mode 100644 xen/include/asm-riscv/iommu.h
 create mode 100644 xen/include/asm-riscv/irq.h
 create mode 100644 xen/include/asm-riscv/mem_access.h
 create mode 100644 xen/include/asm-riscv/mm.h
 create mode 100644 xen/include/asm-riscv/monitor.h
 create mode 100644 xen/include/asm-riscv/nospec.h
 create mode 100644 xen/include/asm-riscv/numa.h
 create mode 100644 xen/include/asm-riscv/p2m.h
 create mode 100644 xen/include/asm-riscv/page-bits.h
 create mode 100644 xen/include/asm-riscv/page.h
 create mode 100644 xen/include/asm-riscv/paging.h
 create mode 100644 xen/include/asm-riscv/pci.h
 create mode 100644 xen/include/asm-riscv/percpu.h
 create mode 100644 xen/include/asm-riscv/processor.h
 create mode 100644 xen/include/asm-riscv/random.h
 create mode 100644 xen/include/asm-riscv/regs.h
 create mode 100644 xen/include/asm-riscv/setup.h
 create mode 100644 xen/include/asm-riscv/smp.h
 create mode 100644 xen/include/asm-riscv/softirq.h
 create mode 100644 xen/include/asm-riscv/spinlock.h
 create mode 100644 xen/include/asm-riscv/string.h
 create mode 100644 xen/include/asm-riscv/sysregs.h
 create mode 100644 xen/include/asm-riscv/system.h
 create mode 100644 xen/include/asm-riscv/time.h
 create mode 100644 xen/include/asm-riscv/trace.h
 create mode 100644 xen/include/asm-riscv/types.h
 create mode 100644 xen/include/asm-riscv/vm_event.h
 create mode 100644 xen/include/asm-riscv/xenoprof.h
 create mode 100644 xen/include/public/arch-riscv.h
 create mode 100644 xen/include/public/arch-riscv/hvm/save.h

diff --git a/config/riscv64.mk b/config/riscv64.mk
new file mode 100644
index 0000000000..0ec97838f9
--- /dev/null
+++ b/config/riscv64.mk
@@ -0,0 +1,7 @@
+CONFIG_RISCV := y
+CONFIG_RISCV_64 := y
+CONFIG_RISCV_$(XEN_OS) := y
+
+CONFIG_XEN_INSTALL_SUFFIX :=
+
+CFLAGS +=
diff --git a/xen/Makefile b/xen/Makefile
index 9f3be7766d..60de4cc6cd 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -26,7 +26,9 @@ MAKEFLAGS += -rR
 EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
 
 ARCH=$(XEN_TARGET_ARCH)
-SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+SRCARCH=$(shell echo $(ARCH) | \
+	  sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+	      -e s'/riscv.*/riscv/g')
 
 # Don't break if the build process wasn't called from the top level
 # we need XEN_TARGET_ARCH to generate the proper config
@@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
 # Set ARCH/SUBARCH appropriately.
 export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+			        -e s'/riscv.*/riscv/g')
 
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
@@ -335,6 +338,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
+	$(MAKE) $(clean) arch/riscv
 	$(MAKE) $(clean) arch/x86
 	$(MAKE) $(clean) test
 	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
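
The extended sed expression in the xen/Makefile hunk above can be
checked on its own; each target architecture collapses to its source
directory (standalone sketch, GNU sed assumed for the `\|` alternation):

```shell
# Map XEN_TARGET_ARCH to SRCARCH the same way the Makefile now does.
srcarch() {
    echo "$1" | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
                    -e s'/riscv.*/riscv/g'
}
srcarch x86_64    # x86
srcarch arm64     # arm
srcarch riscv64   # riscv
```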
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
new file mode 100644
index 0000000000..1b44564053
--- /dev/null
+++ b/xen/arch/riscv/Kconfig
@@ -0,0 +1,54 @@
+config 64BIT
+	bool
+
+config RISCV_64
+	bool
+	depends on 64BIT
+
+config RISCV
+	def_bool y
+
+config ARCH_DEFCONFIG
+	string
+	default "arch/riscv/configs/riscv64_defconfig" if RISCV_64
+
+menu "Architecture Features"
+
+source "arch/Kconfig"
+
+endmenu
+
+menu "ISA Selection"
+
+choice
+	prompt "Base ISA"
+	default RISCV_ISA_RV64IMA
+	help
+	  This selects the base ISA extensions that Xen will target.
+
+config RISCV_ISA_RV64IMA
+	bool "RV64IMA"
+	select 64BIT
+	select RISCV_64
+	help
+	  Use the RV64I base ISA, plus the "M" and "A" extensions
+	  for integer multiply/divide and atomic instructions, respectively.
+
+endchoice
+
+config RISCV_ISA_C
+	bool "Compressed extension"
+	help
+	  Add "C" to the ISA subsets that the toolchain is allowed
+	  to emit when building Xen, which results in compressed
+	  instructions in the Xen binary.
+
+	  If unsure, say N.
+
+endmenu
+
+source "arch/riscv/platforms/Kconfig"
+
+source "common/Kconfig"
+
+source "drivers/Kconfig"
diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
new file mode 100644
index 0000000000..bf67c17d1b
--- /dev/null
+++ b/xen/arch/riscv/Makefile
@@ -0,0 +1,57 @@
+obj-y += lib/
+
+obj-y += domain.o
+obj-y += domctl.o
+obj-y += delay.o
+obj-y += guestcopy.o
+obj-y += irq.o
+obj-y += mm.o
+obj-y += p2m.o
+obj-y += percpu.o
+obj-y += setup.o
+obj-y += shutdown.o
+obj-y += smp.o
+obj-y += smpboot.o
+obj-y += sysctl.o
+obj-y += time.o
+obj-y += traps.o
+obj-y += vm_event.o
+
+ALL_OBJS := head.o $(ALL_OBJS)
+
+$(TARGET): $(TARGET)-syms
+	$(OBJCOPY) -O binary -S $< $@
+
+prelink.o: $(ALL_OBJS) $(ALL_LIBS) FORCE
+	$(call if_changed,ld)
+
+targets += prelink.o
+
+$(TARGET)-syms: prelink.o xen.lds
+	$(LD) $(XEN_LDFLAGS) -T xen.lds -N prelink.o \
+	    $(BASEDIR)/common/symbols-dummy.o -o $(@D)/.$(@F).0
+	$(NM) -pa --format=sysv $(@D)/.$(@F).0 \
+		| $(BASEDIR)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).0.S
+	$(MAKE) -f $(BASEDIR)/Rules.mk $(@D)/.$(@F).0.o
+	$(LD) $(XEN_LDFLAGS) -T xen.lds -N prelink.o \
+	    $(@D)/.$(@F).0.o -o $(@D)/.$(@F).1
+	$(NM) -pa --format=sysv $(@D)/.$(@F).1 \
+		| $(BASEDIR)/tools/symbols $(all_symbols) --sysv --sort >$(@D)/.$(@F).1.S
+	$(MAKE) -f $(BASEDIR)/Rules.mk $(@D)/.$(@F).1.o
+	$(LD) $(XEN_LDFLAGS) -T xen.lds -N prelink.o $(build_id_linker) \
+	    $(@D)/.$(@F).1.o -o $@
+	$(NM) -pa --format=sysv $(@D)/$(@F) \
+		| $(BASEDIR)/tools/symbols --all-symbols --xensyms --sysv --sort \
+		>$(@D)/$(@F).map
+	rm -f $(@D)/.$(@F).[0-9]*
+
+asm-offsets.s: $(TARGET_SUBARCH)/asm-offsets.c
+	$(CC) $(filter-out -flto,$(c_flags)) -S -o $@ $<
+
+xen.lds: xen.lds.S
+	$(CPP) -P $(a_flags) -MQ $@ -o $@ $<
+
+.PHONY: clean
+clean::
+	rm -f asm-offsets.s xen.lds
+	rm -f $(BASEDIR)/.xen-syms.[0-9]*
diff --git a/xen/arch/riscv/README.source b/xen/arch/riscv/README.source
new file mode 100644
index 0000000000..a04e06c5f7
--- /dev/null
+++ b/xen/arch/riscv/README.source
@@ -0,0 +1,19 @@
+External RISC-V Sources
+=======================
+This file documents the sources copied from other projects for use
+in the RISC-V code of Xen.
+
+Linux (commit f40ddce88593, Feb. 14 2021)
+=========================================
+The following files were copied from arch/riscv/include/asm to
+xen/include/asm-riscv:
+
+asm.h -> asm.h
+atomic.h -> atomic.h
+bitops.h -> bitops.h
+csr.h -> csr.h
+{mmio,io}.h -> io.h
+fence.h -> fence.h
+cmpxchg.h -> cmpxchg.h
+compiler_types.h -> compiler_types.h
+timex.h -> time.h
diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
new file mode 100644
index 0000000000..3c368fa05d
--- /dev/null
+++ b/xen/arch/riscv/Rules.mk
@@ -0,0 +1,13 @@
+########################################
+# RISCV-specific definitions
+
+ifeq ($(CONFIG_RISCV_64),y)
+    c_flags += -mabi=lp64
+    a_flags += -mabi=lp64
+endif
+
+riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
+riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
+
+c_flags += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+a_flags += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
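
The -march string composition above can be sketched outside make: the
base ISA selection sets rv64ima, and enabling the compressed extension
appends "c" (standalone shell sketch mirroring the make logic):

```shell
# Mirror of the riscv-march-y composition in Rules.mk, using
# shell variables in place of Kconfig symbols.
CONFIG_RISCV_ISA_RV64IMA=y
CONFIG_RISCV_ISA_C=y

march=""
[ "$CONFIG_RISCV_ISA_RV64IMA" = y ] && march="rv64ima"
[ "$CONFIG_RISCV_ISA_C" = y ] && march="${march}c"
echo "-march=${march}"   # -march=rv64imac
```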
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
new file mode 100644
index 0000000000..d5d68c9150
--- /dev/null
+++ b/xen/arch/riscv/arch.mk
@@ -0,0 +1,7 @@
+########################################
+# riscv-specific definitions
+
+CFLAGS += -I$(BASEDIR)/include
+
+$(call cc-options-add,CFLAGS,CC,$(EMBEDDED_EXTRA_CFLAGS))
+$(call cc-option-add,CFLAGS,CC,-Wnested-externs)
diff --git a/xen/arch/riscv/configs/riscv64_defconfig b/xen/arch/riscv/configs/riscv64_defconfig
new file mode 100644
index 0000000000..664a5d2378
--- /dev/null
+++ b/xen/arch/riscv/configs/riscv64_defconfig
@@ -0,0 +1,12 @@
+# CONFIG_SCHED_CREDIT is not set
+# CONFIG_SCHED_RTDS is not set
+# CONFIG_SCHED_NULL is not set
+# CONFIG_SCHED_ARINC653 is not set
+# CONFIG_TRACEBUFFER is not set
+# CONFIG_DEBUG is not set
+# CONFIG_DEBUG_INFO is not set
+# CONFIG_HYPFS is not set
+# CONFIG_GRANT_TABLE is not set
+# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
+
+CONFIG_EXPERT=y
diff --git a/xen/arch/riscv/delay.c b/xen/arch/riscv/delay.c
new file mode 100644
index 0000000000..403b139b96
--- /dev/null
+++ b/xen/arch/riscv/delay.c
@@ -0,0 +1,16 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+void udelay(unsigned long usecs)
+{
+}
+EXPORT_SYMBOL(udelay);
diff --git a/xen/arch/riscv/domain.c b/xen/arch/riscv/domain.c
new file mode 100644
index 0000000000..a9fdb1f94f
--- /dev/null
+++ b/xen/arch/riscv/domain.c
@@ -0,0 +1,144 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/errno.h>
+#include <xen/sched.h>
+#include <xen/domain.h>
+#include <public/domctl.h>
+#include <public/xen.h>
+
+DEFINE_PER_CPU(struct vcpu *, curr_vcpu);
+
+void context_switch(struct vcpu *prev, struct vcpu *next)
+{
+}
+
+void continue_running(struct vcpu *same)
+{
+}
+
+void sync_local_execstate(void)
+{
+}
+
+void sync_vcpu_execstate(struct vcpu *v)
+{
+}
+
+unsigned long hypercall_create_continuation(
+    unsigned int op, const char *format, ...)
+{
+
+    return 0;
+}
+
+struct domain *alloc_domain_struct(void)
+{
+    return 0;
+}
+
+void free_domain_struct(struct domain *d)
+{
+}
+
+void dump_pageframe_info(struct domain *d)
+{
+}
+
+int arch_sanitise_domain_config(struct xen_domctl_createdomain *config)
+{
+    return -EOPNOTSUPP;
+}
+
+
+int arch_domain_create(struct domain *d,
+                       struct xen_domctl_createdomain *config)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_domain_destroy(struct domain *d)
+{
+}
+
+void arch_domain_shutdown(struct domain *d)
+{
+}
+
+void arch_domain_pause(struct domain *d)
+{
+}
+
+void arch_domain_unpause(struct domain *d)
+{
+}
+
+int arch_domain_soft_reset(struct domain *d)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_domain_creation_finished(struct domain *d)
+{
+}
+
+int domain_relinquish_resources(struct domain *d)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_dump_domain_info(struct domain *d)
+{
+}
+
+long arch_do_vcpu_op(int cmd, struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_dump_vcpu_info(struct vcpu *v)
+{
+}
+
+int arch_set_info_guest(
+    struct vcpu *v, vcpu_guest_context_u c)
+{
+    return -EOPNOTSUPP;
+}
+
+struct vcpu *alloc_vcpu_struct(const struct domain *d)
+{
+    return 0;
+}
+
+void free_vcpu_struct(struct vcpu *v)
+{
+}
+
+int arch_initialise_vcpu(struct vcpu *v, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    return -EOPNOTSUPP;
+}
+
+int arch_vcpu_reset(struct vcpu *v)
+{
+    return -EOPNOTSUPP;
+}
+
+int arch_vcpu_create(struct vcpu *v)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_vcpu_destroy(struct vcpu *v)
+{
+}
diff --git a/xen/arch/riscv/domctl.c b/xen/arch/riscv/domctl.c
new file mode 100644
index 0000000000..f81a13a9c4
--- /dev/null
+++ b/xen/arch/riscv/domctl.c
@@ -0,0 +1,36 @@
+/******************************************************************************
+ * Arch-specific domctl.c
+ *
+ * Copyright (c) 2012, Citrix Systems
+ */
+
+#include <xen/errno.h>
+#include <xen/guest_access.h>
+#include <xen/hypercall.h>
+#include <xen/sched.h>
+#include <public/domctl.h>
+
+void arch_get_domain_info(const struct domain *d,
+                          struct xen_domctl_getdomaininfo *info)
+{
+}
+
+long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
+                    XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
+{
+    return -EOPNOTSUPP;
+}
+
+void arch_get_info_guest(struct vcpu *v, vcpu_guest_context_u c)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/guestcopy.c b/xen/arch/riscv/guestcopy.c
new file mode 100644
index 0000000000..d8fcf98a0e
--- /dev/null
+++ b/xen/arch/riscv/guestcopy.c
@@ -0,0 +1,57 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <asm/guest_access.h>
+
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len)
+{
+    return len; /* nothing copied */
+}
+
+unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
+                                             unsigned len)
+{
+    return len; /* nothing copied */
+}
+
+unsigned long raw_clear_guest(void *to, unsigned len)
+{
+    return len; /* nothing cleared */
+}
+
+unsigned long raw_copy_from_guest(void *to, const void __user *from, unsigned len)
+{
+    return len; /* nothing copied */
+}
+
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t gpa,
+                                              void *buf,
+                                              unsigned int len)
+{
+    return len; /* nothing copied */
+}
+
+int access_guest_memory_by_ipa(struct domain *d, paddr_t gpa, void *buf,
+                               uint32_t size, bool is_write)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/head.S b/xen/arch/riscv/head.S
new file mode 100644
index 0000000000..0dbc27ba75
--- /dev/null
+++ b/xen/arch/riscv/head.S
@@ -0,0 +1,6 @@
+#include <asm/config.h>
+
+        .text
+
+ENTRY(start)
+        j  start
diff --git a/xen/arch/riscv/irq.c b/xen/arch/riscv/irq.c
new file mode 100644
index 0000000000..65137e5f11
--- /dev/null
+++ b/xen/arch/riscv/irq.c
@@ -0,0 +1,78 @@
+/*
+ * RISC-V Interrupt support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/sched.h>
+
+const unsigned int nr_irqs = NR_IRQS;
+
+static void ack_none(struct irq_desc *irq)
+{
+}
+
+static void end_none(struct irq_desc *irq)
+{
+}
+
+hw_irq_controller no_irq_type = {
+    .typename = "none",
+    .startup = irq_startup_none,
+    .shutdown = irq_shutdown_none,
+    .enable = irq_enable_none,
+    .disable = irq_disable_none,
+    .ack = ack_none,
+    .end = end_none
+};
+
+int arch_init_one_irq_desc(struct irq_desc *desc)
+{
+    return -EOPNOTSUPP;
+}
+
+struct pirq *alloc_pirq_struct(struct domain *d)
+{
+    return NULL;
+}
+
+irq_desc_t *__irq_to_desc(int irq)
+{
+    return NULL;
+}
+
+int pirq_guest_bind(struct vcpu *v, struct pirq *pirq, int will_share)
+{
+    return -EOPNOTSUPP;
+}
+
+void pirq_guest_unbind(struct domain *d, struct pirq *pirq)
+{
+}
+
+void pirq_set_affinity(struct domain *d, int pirq, const cpumask_t *mask)
+{
+}
+
+void smp_send_state_dump(unsigned int cpu)
+{
+}
+
+void arch_move_irqs(struct vcpu *v)
+{
+}
+
+int setup_irq(unsigned int irq, unsigned int irqflags, struct irqaction *new)
+{
+    return -EOPNOTSUPP;
+}
diff --git a/xen/arch/riscv/lib/Makefile b/xen/arch/riscv/lib/Makefile
new file mode 100644
index 0000000000..6fae6a1f10
--- /dev/null
+++ b/xen/arch/riscv/lib/Makefile
@@ -0,0 +1 @@
+obj-y += find_next_bit.o
diff --git a/xen/arch/riscv/lib/find_next_bit.c b/xen/arch/riscv/lib/find_next_bit.c
new file mode 100644
index 0000000000..adaa25f32b
--- /dev/null
+++ b/xen/arch/riscv/lib/find_next_bit.c
@@ -0,0 +1,284 @@
+/* find_next_bit.c: fallback find next bit implementation
+ *
+ * Copyright (C) 2004 Red Hat, Inc. All Rights Reserved.
+ * Written by David Howells (dhowells@redhat.com)
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public License
+ * as published by the Free Software Foundation; either version
+ * 2 of the License, or (at your option) any later version.
+ */
+#include <xen/bitops.h>
+#include <asm/bitops.h>
+#include <asm/types.h>
+#include <asm/byteorder.h>
+
+#define BITOP_WORD(nr)		((nr) / BITS_PER_LONG)
+
+#ifndef find_next_bit
+/*
+ * Find the next set bit in a memory region.
+ */
+unsigned long find_next_bit(const unsigned long *addr, unsigned long size,
+			    unsigned long offset)
+{
+	const unsigned long *p = addr + BITOP_WORD(offset);
+	unsigned long result = offset & ~(BITS_PER_LONG-1);
+	unsigned long tmp;
+
+	if (offset >= size)
+		return size;
+	size -= result;
+	offset %= BITS_PER_LONG;
+	if (offset) {
+		tmp = *(p++);
+		tmp &= (~0UL << offset);
+		if (size < BITS_PER_LONG)
+			goto found_first;
+		if (tmp)
+			goto found_middle;
+		size -= BITS_PER_LONG;
+		result += BITS_PER_LONG;
+	}
+	while (size & ~(BITS_PER_LONG-1)) {
+		if ((tmp = *(p++)))
+			goto found_middle;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+	tmp = *p;
+
+found_first:
+	tmp &= (~0UL >> (BITS_PER_LONG - size));
+	if (tmp == 0UL)		/* Are any bits set? */
+		return result + size;	/* Nope. */
+found_middle:
+	return result + __ffs(tmp);
+}
+EXPORT_SYMBOL(find_next_bit);
+#endif
+
+#ifndef find_next_zero_bit
+/*
+ * This implementation of find_{first,next}_zero_bit was stolen from
+ * Linus' asm-alpha/bitops.h.
+ */
+unsigned long find_next_zero_bit(const unsigned long *addr, unsigned long size,
+				 unsigned long offset)
+{
+	const unsigned long *p = addr + BITOP_WORD(offset);
+	unsigned long result = offset & ~(BITS_PER_LONG-1);
+	unsigned long tmp;
+
+	if (offset >= size)
+		return size;
+	size -= result;
+	offset %= BITS_PER_LONG;
+	if (offset) {
+		tmp = *(p++);
+		tmp |= ~0UL >> (BITS_PER_LONG - offset);
+		if (size < BITS_PER_LONG)
+			goto found_first;
+		if (~tmp)
+			goto found_middle;
+		size -= BITS_PER_LONG;
+		result += BITS_PER_LONG;
+	}
+	while (size & ~(BITS_PER_LONG-1)) {
+		if (~(tmp = *(p++)))
+			goto found_middle;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+	tmp = *p;
+
+found_first:
+	tmp |= ~0UL << size;
+	if (tmp == ~0UL)	/* Are any bits zero? */
+		return result + size;	/* Nope. */
+found_middle:
+	return result + ffz(tmp);
+}
+EXPORT_SYMBOL(find_next_zero_bit);
+#endif
+
+#ifndef find_first_bit
+/*
+ * Find the first set bit in a memory region.
+ */
+unsigned long find_first_bit(const unsigned long *addr, unsigned long size)
+{
+	const unsigned long *p = addr;
+	unsigned long result = 0;
+	unsigned long tmp;
+
+	while (size & ~(BITS_PER_LONG-1)) {
+		if ((tmp = *(p++)))
+			goto found;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+
+	tmp = (*p) & (~0UL >> (BITS_PER_LONG - size));
+	if (tmp == 0UL)		/* Are any bits set? */
+		return result + size;	/* Nope. */
+found:
+	return result + __ffs(tmp);
+}
+EXPORT_SYMBOL(find_first_bit);
+#endif
+
+#ifndef find_first_zero_bit
+/*
+ * Find the first cleared bit in a memory region.
+ */
+unsigned long find_first_zero_bit(const unsigned long *addr, unsigned long size)
+{
+	const unsigned long *p = addr;
+	unsigned long result = 0;
+	unsigned long tmp;
+
+	while (size & ~(BITS_PER_LONG-1)) {
+		if (~(tmp = *(p++)))
+			goto found;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+
+	tmp = (*p) | (~0UL << size);
+	if (tmp == ~0UL)	/* Are any bits zero? */
+		return result + size;	/* Nope. */
+found:
+	return result + ffz(tmp);
+}
+EXPORT_SYMBOL(find_first_zero_bit);
+#endif
+
+#ifdef __BIG_ENDIAN
+
+/* include/linux/byteorder does not support "unsigned long" type */
+static inline unsigned long ext2_swabp(const unsigned long * x)
+{
+#if BITS_PER_LONG == 64
+	return (unsigned long) __swab64p((u64 *) x);
+#elif BITS_PER_LONG == 32
+	return (unsigned long) __swab32p((u32 *) x);
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+/* include/linux/byteorder doesn't support "unsigned long" type */
+static inline unsigned long ext2_swab(const unsigned long y)
+{
+#if BITS_PER_LONG == 64
+	return (unsigned long) __swab64((u64) y);
+#elif BITS_PER_LONG == 32
+	return (unsigned long) __swab32((u32) y);
+#else
+#error BITS_PER_LONG not defined
+#endif
+}
+
+#ifndef find_next_zero_bit_le
+unsigned long find_next_zero_bit_le(const void *addr, unsigned
+		long size, unsigned long offset)
+{
+	const unsigned long *p = addr;
+	unsigned long result = offset & ~(BITS_PER_LONG - 1);
+	unsigned long tmp;
+
+	if (offset >= size)
+		return size;
+	p += BITOP_WORD(offset);
+	size -= result;
+	offset &= (BITS_PER_LONG - 1UL);
+	if (offset) {
+		tmp = ext2_swabp(p++);
+		tmp |= (~0UL >> (BITS_PER_LONG - offset));
+		if (size < BITS_PER_LONG)
+			goto found_first;
+		if (~tmp)
+			goto found_middle;
+		size -= BITS_PER_LONG;
+		result += BITS_PER_LONG;
+	}
+
+	while (size & ~(BITS_PER_LONG - 1)) {
+		if (~(tmp = *(p++)))
+			goto found_middle_swap;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+	tmp = ext2_swabp(p);
+found_first:
+	tmp |= ~0UL << size;
+	if (tmp == ~0UL)	/* Are any bits zero? */
+		return result + size; /* Nope. Skip ffz */
+found_middle:
+	return result + ffz(tmp);
+
+found_middle_swap:
+	return result + ffz(ext2_swab(tmp));
+}
+EXPORT_SYMBOL(find_next_zero_bit_le);
+#endif
+
+#ifndef find_next_bit_le
+unsigned long find_next_bit_le(const void *addr, unsigned
+		long size, unsigned long offset)
+{
+	const unsigned long *p = addr;
+	unsigned long result = offset & ~(BITS_PER_LONG - 1);
+	unsigned long tmp;
+
+	if (offset >= size)
+		return size;
+	p += BITOP_WORD(offset);
+	size -= result;
+	offset &= (BITS_PER_LONG - 1UL);
+	if (offset) {
+		tmp = ext2_swabp(p++);
+		tmp &= (~0UL << offset);
+		if (size < BITS_PER_LONG)
+			goto found_first;
+		if (tmp)
+			goto found_middle;
+		size -= BITS_PER_LONG;
+		result += BITS_PER_LONG;
+	}
+
+	while (size & ~(BITS_PER_LONG - 1)) {
+		tmp = *(p++);
+		if (tmp)
+			goto found_middle_swap;
+		result += BITS_PER_LONG;
+		size -= BITS_PER_LONG;
+	}
+	if (!size)
+		return result;
+	tmp = ext2_swabp(p);
+found_first:
+	tmp &= (~0UL >> (BITS_PER_LONG - size));
+	if (tmp == 0UL)		/* Are any bits set? */
+		return result + size; /* Nope. */
+found_middle:
+	return result + __ffs(tmp);
+
+found_middle_swap:
+	return result + __ffs(ext2_swab(tmp));
+}
+EXPORT_SYMBOL(find_next_bit_le);
+#endif
+
+#endif /* __BIG_ENDIAN */
diff --git a/xen/arch/riscv/mm.c b/xen/arch/riscv/mm.c
new file mode 100644
index 0000000000..72322b9adc
--- /dev/null
+++ b/xen/arch/riscv/mm.c
@@ -0,0 +1,93 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/compile.h>
+#include <xen/types.h>
+#include <xen/init.h>
+#include <xen/mm.h>
+
+unsigned long max_page;
+unsigned long total_pages;
+unsigned long frametable_base_mfn;
+
+void flush_page_to_ram(unsigned long mfn, bool sync_icache)
+{
+}
+
+void arch_dump_shared_mem_info(void)
+{
+}
+
+int steal_page(struct domain *d, struct page_info *page, unsigned int memflags)
+{
+    return 0;
+}
+
+int page_is_ram_type(unsigned long mfn, unsigned long mem_type)
+{
+    return 0;
+}
+
+unsigned long domain_get_maximum_gpfn(struct domain *d)
+{
+    return 0;
+}
+
+int xenmem_add_to_physmap_one(struct domain *d, unsigned int space,
+                              union add_to_physmap_extra extra,
+                              unsigned long idx, gfn_t gfn)
+{
+    return 0;
+}
+
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
+{
+    return 0;
+}
+
+struct domain *page_get_owner_and_reference(struct page_info *page)
+{
+    return (void *) 0xdeadbeef;
+}
+
+void put_page(struct page_info *page)
+{
+}
+
+bool get_page(struct page_info *page, const struct domain *domain)
+{
+    return false;
+}
+
+int get_page_type(struct page_info *page, unsigned long type)
+{
+    return 0;
+}
+
+void put_page_type(struct page_info *page)
+{
+    return;
+}
+
+unsigned long get_upper_mfn_bound(void)
+{
+    return -1;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/p2m.c b/xen/arch/riscv/p2m.c
new file mode 100644
index 0000000000..91b86a2bc7
--- /dev/null
+++ b/xen/arch/riscv/p2m.c
@@ -0,0 +1,144 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/sched.h>
+
+#define INVALID_VMID 0 /* VMID 0 is reserved */
+
+void p2m_write_unlock(struct p2m_domain *p2m)
+{
+}
+
+void p2m_dump_info(struct domain *d)
+{
+}
+
+void memory_type_changed(struct domain *d)
+{
+}
+
+void dump_p2m_lookup(struct domain *d, paddr_t addr)
+{
+}
+
+void p2m_save_state(struct vcpu *p)
+{
+}
+
+void p2m_restore_state(struct vcpu *n)
+{
+}
+
+mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn)
+{
+    return _mfn(gfn_x(gfn));
+}
+
+int p2m_set_entry(struct p2m_domain *p2m,
+                  gfn_t sgfn,
+                  unsigned long nr,
+                  mfn_t smfn,
+                  p2m_type_t t,
+                  p2m_access_t a)
+{
+    int rc = 0;
+
+
+    return rc;
+}
+
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t)
+{
+    return _mfn(gfn_x(gfn));
+}
+
+mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                    p2m_type_t *t, p2m_access_t *a,
+                    unsigned int *page_order,
+                    bool *valid)
+{
+    return _mfn(gfn_x(gfn));
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m)
+{
+}
+
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt)
+{
+    return 0;
+}
+
+int unmap_regions_p2mt(struct domain *d,
+                       gfn_t gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return 0;
+}
+
+int map_mmio_regions(struct domain *d,
+                     gfn_t start_gfn,
+                     unsigned long nr,
+                     mfn_t mfn)
+{
+    return 0;
+}
+
+int unmap_mmio_regions(struct domain *d,
+                       gfn_t start_gfn,
+                       unsigned long nr,
+                       mfn_t mfn)
+{
+    return 0;
+}
+
+int map_dev_mmio_region(struct domain *d,
+                        gfn_t gfn,
+                        unsigned long nr,
+                        mfn_t mfn)
+{
+    return 0;
+}
+
+int guest_physmap_add_entry(struct domain *d,
+                            gfn_t gfn,
+                            mfn_t mfn,
+                            unsigned long page_order,
+                            p2m_type_t t)
+{
+    return 0;
+}
+
+int set_foreign_p2m_entry(struct domain *d, const struct domain *fd,
+                          unsigned long gfn, mfn_t mfn)
+{
+    return 0;
+}
+
+struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                        p2m_type_t *t)
+{
+    return NULL;
+}
+
+void vcpu_mark_events_pending(struct vcpu *v)
+{
+}
+
+void vcpu_update_evtchn_irq(struct vcpu *v)
+{
+}
diff --git a/xen/arch/riscv/percpu.c b/xen/arch/riscv/percpu.c
new file mode 100644
index 0000000000..31c0cce606
--- /dev/null
+++ b/xen/arch/riscv/percpu.c
@@ -0,0 +1,17 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/percpu.h>
+#include <xen/cpu.h>
+#include <xen/init.h>
+
+unsigned long __per_cpu_offset[NR_CPUS];
diff --git a/xen/arch/riscv/platforms/Kconfig b/xen/arch/riscv/platforms/Kconfig
new file mode 100644
index 0000000000..6959ec35a2
--- /dev/null
+++ b/xen/arch/riscv/platforms/Kconfig
@@ -0,0 +1,31 @@
+choice
+	prompt "Platform Support"
+	default ALL_PLAT
+	---help---
+	Choose which hardware platform to enable in Xen.
+
+	If unsure, choose ALL_PLAT.
+
+config ALL_PLAT
+	bool "All Platforms"
+	---help---
+	Enable support for all available hardware platforms. It doesn't
+	automatically select any of the related drivers.
+
+config QEMU
+	bool "QEMU RISC-V virt machine support"
+	depends on RISCV
+	select HAS_NS16550
+	---help---
+	Enable all the required drivers for QEMU RISC-V virt emulated
+	machine.
+
+endchoice
+
+config ALL64_PLAT
+	bool
+	default (ALL_PLAT && RISCV_64)
+
+config ALL32_PLAT
+	bool
+	default (ALL_PLAT && RISCV_32)
diff --git a/xen/arch/riscv/riscv64/asm-offsets.c b/xen/arch/riscv/riscv64/asm-offsets.c
new file mode 100644
index 0000000000..994d5f60c9
--- /dev/null
+++ b/xen/arch/riscv/riscv64/asm-offsets.c
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#define COMPILE_OFFSETS
+
+#include <asm/init.h>
+
+#define DEFINE(_sym, _val)                                                 \
+    asm volatile ("\n.ascii\"==>#define " #_sym " %0 /* " #_val " */<==\"" \
+                  : : "i" (_val) )
+#define BLANK()                                                            \
+    asm volatile ( "\n.ascii\"==><==\"" : : )
+#define OFFSET(_sym, _str, _mem)                                           \
+    DEFINE(_sym, offsetof(_str, _mem))
+
+void asm_offsets(void)
+{
+    BLANK();
+    OFFSET(INITINFO_stack, struct init_info, stack);
+}
diff --git a/xen/arch/riscv/setup.c b/xen/arch/riscv/setup.c
new file mode 100644
index 0000000000..129e3db58f
--- /dev/null
+++ b/xen/arch/riscv/setup.c
@@ -0,0 +1,27 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/types.h>
+#include <public/version.h>
+
+void arch_get_xen_caps(xen_capabilities_info_t *info)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/shutdown.c b/xen/arch/riscv/shutdown.c
new file mode 100644
index 0000000000..bfa1174366
--- /dev/null
+++ b/xen/arch/riscv/shutdown.c
@@ -0,0 +1,28 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+void machine_halt(void)
+{
+}
+
+void machine_restart(unsigned int delay_millisecs)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/smp.c b/xen/arch/riscv/smp.c
new file mode 100644
index 0000000000..66f1012b37
--- /dev/null
+++ b/xen/arch/riscv/smp.c
@@ -0,0 +1,35 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/cpumask.h>
+#include <asm/smp.h>
+
+void arch_flush_tlb_mask(const cpumask_t *mask)
+{
+}
+
+void smp_send_event_check_mask(const cpumask_t *mask)
+{
+}
+
+void smp_send_call_function_mask(const cpumask_t *mask)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/smpboot.c b/xen/arch/riscv/smpboot.c
new file mode 100644
index 0000000000..567d12a262
--- /dev/null
+++ b/xen/arch/riscv/smpboot.c
@@ -0,0 +1,34 @@
+/*
+ * Dummy smpboot support
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+#include <xen/cpu.h>
+#include <xen/cpumask.h>
+#include <xen/errno.h>
+#include <xen/init.h>
+#include <xen/sched.h>
+#include <xen/smp.h>
+#include <xen/nodemask.h>
+
+cpumask_t cpu_online_map;
+cpumask_t cpu_present_map;
+cpumask_t cpu_possible_map;
+
+DEFINE_PER_CPU(unsigned int, cpu_id);
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_sibling_mask);
+DEFINE_PER_CPU_READ_MOSTLY(cpumask_var_t, cpu_core_mask);
+
+/* Fake one node for now. See also include/asm-arm/numa.h */
+nodemask_t __read_mostly node_online_map = { { [0] = 1UL } };
+
+/* Boot cpu data */
+struct init_info init_data = {};
diff --git a/xen/arch/riscv/sysctl.c b/xen/arch/riscv/sysctl.c
new file mode 100644
index 0000000000..9b4ef27aac
--- /dev/null
+++ b/xen/arch/riscv/sysctl.c
@@ -0,0 +1,33 @@
+/******************************************************************************
+ * Arch-specific sysctl.c
+ *
+ * System management operations. For use by node control stack.
+ *
+ * Copyright (c) 2012, Citrix Systems
+ */
+
+#include <xen/types.h>
+#include <xen/lib.h>
+#include <xen/errno.h>
+#include <xen/hypercall.h>
+#include <public/sysctl.h>
+
+void arch_do_physinfo(struct xen_sysctl_physinfo *pi)
+{
+}
+
+long arch_do_sysctl(struct xen_sysctl *sysctl,
+                    XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
+{
+    return -EOPNOTSUPP;
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/time.c b/xen/arch/riscv/time.c
new file mode 100644
index 0000000000..4d7269195d
--- /dev/null
+++ b/xen/arch/riscv/time.c
@@ -0,0 +1,35 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <xen/sched.h>
+#include <xen/time.h>
+
+unsigned long __read_mostly cpu_khz;  /* CPU clock frequency in kHz. */
+
+s_time_t get_s_time(void)
+{
+    return 0;
+}
+
+/* VCPU PV timers. */
+void send_timer_event(struct vcpu *v)
+{
+}
+
+void domain_set_time_offset(struct domain *d, int64_t time_offset_seconds)
+{
+}
+
+int reprogram_timer(s_time_t timeout)
+{
+    return 0;
+}
diff --git a/xen/arch/riscv/traps.c b/xen/arch/riscv/traps.c
new file mode 100644
index 0000000000..5287894954
--- /dev/null
+++ b/xen/arch/riscv/traps.c
@@ -0,0 +1,35 @@
+/*
+ * RISC-V Trap handlers
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2 of the License, or
+ * (at your option) any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ */
+
+#include <public/xen.h>
+#include <xen/multicall.h>
+#include <xen/sched.h>
+#include <asm/processor.h>
+
+void show_execution_state(const struct cpu_user_regs *regs)
+{
+}
+
+void vcpu_show_execution_state(struct vcpu *v)
+{
+}
+
+void arch_hypercall_tasklet_result(struct vcpu *v, long res)
+{
+}
+
+enum mc_disposition arch_do_multicall_call(struct mc_state *state)
+{
+    return mc_continue;
+}
diff --git a/xen/arch/riscv/vm_event.c b/xen/arch/riscv/vm_event.c
new file mode 100644
index 0000000000..6c759f85a6
--- /dev/null
+++ b/xen/arch/riscv/vm_event.c
@@ -0,0 +1,39 @@
+/*
+ * Architecture-specific vm_event handling routines
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#include <xen/sched.h>
+#include <asm/vm_event.h>
+
+void vm_event_fill_regs(vm_event_request_t *req)
+{
+}
+
+void vm_event_set_registers(struct vcpu *v, vm_event_response_t *rsp)
+{
+}
+
+void vm_event_monitor_next_interrupt(struct vcpu *v)
+{
+}
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/arch/riscv/xen.lds.S b/xen/arch/riscv/xen.lds.S
new file mode 100644
index 0000000000..6b95fc84da
--- /dev/null
+++ b/xen/arch/riscv/xen.lds.S
@@ -0,0 +1,113 @@
+/* Excerpts written by Martin Mares <mj@atrey.karlin.mff.cuni.cz> */
+/* Modified for i386/x86-64 Xen by Keir Fraser */
+/* Modified for ARM Xen by Ian Campbell */
+
+#include <xen/cache.h>
+#include <asm/page.h>
+#undef ENTRY
+#undef ALIGN
+
+ENTRY(start)
+OUTPUT_ARCH(riscv)
+
+PHDRS
+{
+  text PT_LOAD ;
+#if defined(BUILD_ID)
+  note PT_NOTE ;
+#endif
+}
+SECTIONS
+{
+  . = XEN_VIRT_START;
+  _start = .;
+  .text : {
+        _stext = .;            /* Text section */
+       *(.text)
+       *(.text.cold)
+       *(.text.unlikely)
+       *(.fixup)
+       *(.gnu.warning)
+       _etext = .;             /* End of text section */
+  } :text = 0x9090
+
+  . = ALIGN(PAGE_SIZE);
+  .rodata : {
+        _srodata = .;          /* Read-only data */
+        /* Bug frames table */
+       __start_bug_frames = .;
+       *(.bug_frames.0)
+       __stop_bug_frames_0 = .;
+       *(.bug_frames.1)
+       __stop_bug_frames_1 = .;
+       *(.bug_frames.2)
+       __stop_bug_frames_2 = .;
+       *(.bug_frames.3)
+       __stop_bug_frames_3 = .;
+       *(.rodata)
+       *(.rodata.*)
+       *(.data.rel.ro)
+       *(.data.rel.ro.*)
+  } :text
+
+#if defined(BUILD_ID)
+  . = ALIGN(4);
+  .note.gnu.build-id : {
+       __note_gnu_build_id_start = .;
+       *(.note.gnu.build-id)
+       __note_gnu_build_id_end = .;
+  } :note :text
+#endif
+  _erodata = .;                /* End of read-only data */
+
+  .data : {                    /* Data */
+       . = ALIGN(PAGE_SIZE);
+       *(.data.page_aligned)
+       *(.data)
+
+       . = ALIGN(8);
+       __start_schedulers_array = .;
+       *(.data.schedulers)
+       __end_schedulers_array = .;
+
+       *(.data.rel)
+       *(.data.rel.*)
+       CONSTRUCTORS
+  } :text
+
+  . = ALIGN(SMP_CACHE_BYTES);
+  .data.read_mostly : {
+       *(.data.read_mostly)
+  } :text
+
+  . = ALIGN(PAGE_SIZE);        /* Init code and data */
+  __init_begin = .;
+  .init.text : {
+       _sinittext = .;
+       *(.init.text)
+       _einittext = .;
+  } :text
+  . = ALIGN(PAGE_SIZE);
+  .init.data : {
+       *(.init.rodata)
+       *(.init.rodata.*)
+
+       . = ALIGN(POINTER_ALIGN);
+       __setup_start = .;
+       *(.init.setup)
+       __setup_end = .;
+
+       __initcall_start = .;
+       *(.initcallpresmp.init)
+       __presmp_initcall_end = .;
+       *(.initcall1.init)
+       __initcall_end = .;
+
+       *(.init.data)
+       *(.init.data.rel)
+       *(.init.data.rel.*)
+  } :text
+  . = ALIGN(STACK_SIZE);
+  __init_end = .;
+  _end = . ;
+}
diff --git a/xen/include/asm-riscv/altp2m.h b/xen/include/asm-riscv/altp2m.h
new file mode 100644
index 0000000000..8554495f94
--- /dev/null
+++ b/xen/include/asm-riscv/altp2m.h
@@ -0,0 +1,39 @@
+/*
+ * Alternate p2m
+ *
+ * Copyright (c) 2014, Intel Corporation.
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_RISCV_ALTP2M_H
+#define __ASM_RISCV_ALTP2M_H
+
+#include <xen/sched.h>
+
+/* Alternate p2m on/off per domain */
+static inline bool altp2m_active(const struct domain *d)
+{
+    /* Not implemented on RISCV. */
+    return false;
+}
+
+/* Alternate p2m VCPU */
+static inline uint16_t altp2m_vcpu_idx(const struct vcpu *v)
+{
+    /* Not implemented on RISCV, should not be reached. */
+    BUG();
+    return 0;
+}
+
+#endif /* __ASM_RISCV_ALTP2M_H */
diff --git a/xen/include/asm-riscv/asm.h b/xen/include/asm-riscv/asm.h
new file mode 100644
index 0000000000..2dafac5b35
--- /dev/null
+++ b/xen/include/asm-riscv/asm.h
@@ -0,0 +1,77 @@
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_ASM_H
+#define _ASM_RISCV_ASM_H
+
+#ifdef __ASSEMBLY__
+#define __ASM_STR(x)	x
+#else
+#define __ASM_STR(x)	#x
+#endif
+
+#if __riscv_xlen == 64
+#define __REG_SEL(a, b)	__ASM_STR(a)
+#elif __riscv_xlen == 32
+#define __REG_SEL(a, b)	__ASM_STR(b)
+#else
+#error "Unexpected __riscv_xlen"
+#endif
+
+#define REG_L		__REG_SEL(ld, lw)
+#define REG_S		__REG_SEL(sd, sw)
+#define REG_SC		__REG_SEL(sc.d, sc.w)
+#define SZREG		__REG_SEL(8, 4)
+#define LGREG		__REG_SEL(3, 2)
+
+#if __SIZEOF_POINTER__ == 8
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.dword
+#define RISCV_SZPTR		8
+#define RISCV_LGPTR		3
+#else
+#define RISCV_PTR		".dword"
+#define RISCV_SZPTR		"8"
+#define RISCV_LGPTR		"3"
+#endif
+#elif __SIZEOF_POINTER__ == 4
+#ifdef __ASSEMBLY__
+#define RISCV_PTR		.word
+#define RISCV_SZPTR		4
+#define RISCV_LGPTR		2
+#else
+#define RISCV_PTR		".word"
+#define RISCV_SZPTR		"4"
+#define RISCV_LGPTR		"2"
+#endif
+#else
+#error "Unexpected __SIZEOF_POINTER__"
+#endif
+
+#if (__SIZEOF_INT__ == 4)
+#define RISCV_INT		__ASM_STR(.word)
+#define RISCV_SZINT		__ASM_STR(4)
+#define RISCV_LGINT		__ASM_STR(2)
+#else
+#error "Unexpected __SIZEOF_INT__"
+#endif
+
+#if (__SIZEOF_SHORT__ == 2)
+#define RISCV_SHORT		__ASM_STR(.half)
+#define RISCV_SZSHORT		__ASM_STR(2)
+#define RISCV_LGSHORT		__ASM_STR(1)
+#else
+#error "Unexpected __SIZEOF_SHORT__"
+#endif
+
+#endif /* _ASM_RISCV_ASM_H */
diff --git a/xen/include/asm-riscv/asm_defns.h b/xen/include/asm-riscv/asm_defns.h
new file mode 100644
index 0000000000..9145f9cbf1
--- /dev/null
+++ b/xen/include/asm-riscv/asm_defns.h
@@ -0,0 +1,24 @@
+#ifndef __RISCV_ASM_DEFNS_H__
+#define __RISCV_ASM_DEFNS_H__
+
+#ifndef COMPILE_OFFSETS
+/* NB. Auto-generated from arch/.../asm-offsets.c */
+#include <asm/asm-offsets.h>
+#endif
+#include <asm/processor.h>
+
+#define ASM_INT(label, val)                 \
+    .p2align 2;                             \
+label: .long (val);                         \
+    .size label, . - label;                 \
+    .type label, @object
+
+#endif /* __RISCV_ASM_DEFNS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/atomic.h b/xen/include/asm-riscv/atomic.h
new file mode 100644
index 0000000000..7ffae3bd74
--- /dev/null
+++ b/xen/include/asm-riscv/atomic.h
@@ -0,0 +1,204 @@
+/**
+ * Copyright (c) 2018 Anup Patel.
+ * Copyright (c) 2019 Alistair Francis <alistair.francis@wdc.com>
+ * Copyright (c) 2021 Connor Davis <connojd@pm.me>
+ * All rights reserved.
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ *
+ */
+
+#ifndef _ASM_RISCV_ATOMIC_H
+#define _ASM_RISCV_ATOMIC_H
+
+#include <xen/atomic.h>
+#include <asm/compiler_types.h>
+#include <asm/cmpxchg.h>
+#include <asm/system.h>
+
+void __bad_atomic_size(void);
+
+/*
+ * Adapted from {READ,WRITE}_ONCE in linux/include/asm-generic/rwonce.h,
+ * with the exception of only allowing types with size at most sizeof(long).
+ * Linux allows sizes <= sizeof(long long), but long long accesses will tear
+ * on RV32, so we exclude them.
+ */
+#define read_atomic(p) ({                                        \
+    BUILD_BUG_ON(!__native_word(typeof(*(p))));                  \
+    (*(const volatile __unqual_scalar_typeof(*(p)) *)(p));       \
+})
+
+#define write_atomic(p, x)                                       \
+do {                                                             \
+    BUILD_BUG_ON(!__native_word(typeof(*(p))));                  \
+    *(volatile typeof(*(p)) *)(p) = (x);                         \
+} while (0)
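As a portable model of what read_atomic()/write_atomic() above guarantee — a single, non-torn volatile access of a native-word type — the following sketch behaves the same way on any architecture (the model_* names are illustrative and not part of this patch):

```c
#include <assert.h>
#include <stdint.h>

/* Model of read_atomic()/write_atomic(): one volatile access of a
 * native-word type, so the compiler can neither tear, repeat, nor
 * elide the load/store. The real macros additionally reject types
 * wider than long via BUILD_BUG_ON(). */
static inline uint32_t model_read_atomic(const volatile uint32_t *p)
{
    return *p;
}

static inline void model_write_atomic(volatile uint32_t *p, uint32_t x)
{
    *p = x;
}
```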
+
+#define build_add_sized(name, size, type)                        \
+static inline void name(volatile type *addr, type val)           \
+{                                                                \
+    type t;                                                      \
+    asm volatile("l" size "u %1, %0\n"                           \
+                 "add %1, %1, %2\n"                              \
+                 "s" size " %1, %0\n"                            \
+                 : "+m" (*addr), "=&r" (t)                       \
+                 : "r" (val));                                   \
+}
+
+build_add_sized(add_u8_sized, "b", uint8_t)
+build_add_sized(add_u16_sized, "h", uint16_t)
+build_add_sized(add_u32_sized, "w", uint32_t)
+
+#define add_sized(p, x) ({                                \
+    typeof(*(p)) x_ = (x);                                \
+    switch ( sizeof(*(p)) )                               \
+    {                                                     \
+    case 1: add_u8_sized((uint8_t *)(p), x_); break;      \
+    case 2: add_u16_sized((uint16_t *)(p), x_); break;    \
+    case 4: add_u32_sized((uint32_t *)(p), x_); break;    \
+    default: __bad_atomic_size(); break;                  \
+    }                                                     \
+})
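The add_sized() dispatch above can be sketched in portable C; the point is that the read-modify-write happens at exactly the operand's width, so narrow counters wrap at their own size rather than promoting to int. The model_* names are hypothetical stand-ins for the inline-asm helpers:

```c
#include <assert.h>
#include <stdint.h>

/* Width-exact read-modify-write, mirroring build_add_sized()'s
 * lbu/lhu/lwu + add + sb/sh/sw sequences. */
static inline void model_add_u8 (volatile uint8_t  *p, uint8_t  v) { *p = (uint8_t)(*p + v); }
static inline void model_add_u16(volatile uint16_t *p, uint16_t v) { *p = (uint16_t)(*p + v); }
static inline void model_add_u32(volatile uint32_t *p, uint32_t v) { *p += v; }

/* Dispatch on operand size, like add_sized() in the header. */
#define model_add_sized(p, x) do {                                          \
    switch (sizeof(*(p))) {                                                 \
    case 1: model_add_u8((volatile uint8_t *)(p), (uint8_t)(x)); break;     \
    case 2: model_add_u16((volatile uint16_t *)(p), (uint16_t)(x)); break;  \
    case 4: model_add_u32((volatile uint32_t *)(p), (uint32_t)(x)); break;  \
    }                                                                       \
} while (0)
```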
+
+/*
+ * Snipped from linux/arch/riscv/include/asm/atomic.h:
+ *
+ * First, the atomic ops that have no ordering constraints and therefore don't
+ * have the AQ or RL bits set.  These don't return anything, so there's only
+ * one version to worry about.
+ */
+#define ATOMIC_OP(op, asm_op, I)                              		\
+static always_inline void atomic_##op(int i, atomic_t *v)               \
+{									\
+	__asm__ __volatile__ (						\
+		"	amo" #asm_op ".w" " zero, %1, %0"	        \
+		: "+A" (v->counter)					\
+		: "r" (I)						\
+		: "memory");						\
+}									\
+
+ATOMIC_OP(add, add,  i)
+ATOMIC_OP(sub, add, -i)
+ATOMIC_OP(and, and,  i)
+
+#undef ATOMIC_OP
+
+/* The *_return variants provide full barriers */
+#define ATOMIC_OP_RETURN(op, asm_op, c_op, I)                   	\
+static always_inline int atomic_fetch_##op(int i, atomic_t *v)	        \
+{									\
+	register int ret;						\
+	__asm__ __volatile__ (						\
+		"	amo" #asm_op ".w.aqrl  %1, %2, %0"      	\
+		: "+A" (v->counter), "=r" (ret)				\
+		: "r" (I)						\
+		: "memory");						\
+	return ret;							\
+}                                                                       \
+static always_inline int atomic_##op##_return(int i, atomic_t *v)	\
+{									\
+        return atomic_fetch_##op(i, v) c_op I;          		\
+}
+
+ATOMIC_OP_RETURN(add, add, +,  i)
+ATOMIC_OP_RETURN(sub, add, +, -i)
+
+#undef ATOMIC_OP_RETURN
+
+static inline int atomic_read(const atomic_t *v)
+{
+    return read_atomic(&v->counter);
+}
+
+static inline int _atomic_read(atomic_t v)
+{
+    return v.counter;
+}
+
+static inline void atomic_set(atomic_t *v, int i)
+{
+    write_atomic(&v->counter, i);
+}
+
+static inline void _atomic_set(atomic_t *v, int i)
+{
+    v->counter = i;
+}
+
+static inline int atomic_sub_and_test(int i, atomic_t *v)
+{
+    return atomic_sub_return(i, v) == 0;
+}
+
+static inline void atomic_inc(atomic_t *v)
+{
+    atomic_add(1, v);
+}
+
+static inline int atomic_inc_return(atomic_t *v)
+{
+    return atomic_add_return(1, v);
+}
+
+static inline int atomic_inc_and_test(atomic_t *v)
+{
+    return atomic_add_return(1, v) == 0;
+}
+
+static inline void atomic_dec(atomic_t *v)
+{
+    atomic_sub(1, v);
+}
+
+static inline int atomic_dec_return(atomic_t *v)
+{
+    return atomic_sub_return(1, v);
+}
+
+static inline int atomic_dec_and_test(atomic_t *v)
+{
+    return atomic_sub_return(1, v) == 0;
+}
+
+static inline int atomic_add_negative(int i, atomic_t *v)
+{
+    return atomic_add_return(i, v) < 0;
+}
+
+static inline int atomic_cmpxchg(atomic_t *v, int old, int new)
+{
+	return cmpxchg(&v->counter, old, new);
+}
+
+static inline int atomic_add_unless(atomic_t *v, int a, int u)
+{
+	int prev, rc;
+
+	__asm__ __volatile__ (
+		"0:	lr.w     %[p],  %[c]\n"
+		"	beq      %[p],  %[u], 1f\n"
+		"	add      %[rc], %[p], %[a]\n"
+		"	sc.w.rl  %[rc], %[rc], %[c]\n"
+		"	bnez     %[rc], 0b\n"
+		"	fence    rw, rw\n"
+		"1:\n"
+		: [p]"=&r" (prev), [rc]"=&r" (rc), [c]"+A" (v->counter)
+		: [a]"r" (a), [u]"r" (u)
+		: "memory");
+	return prev;
+}
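The lr.w/sc.w.rl retry loop above implements the usual add-unless contract: add @a to the counter unless it currently equals @u, and return the prior value. A portable sketch of the same loop using the GCC __atomic builtins (model_atomic_add_unless is an illustrative name, not part of the patch):

```c
#include <assert.h>
#include <stdbool.h>

/* Add @a to *v unless *v == u; return the value observed before the
 * update. The compare-exchange retries if another writer intervenes,
 * matching the bnez-back-to-lr retry in the asm version. */
static inline int model_atomic_add_unless(int *v, int a, int u)
{
    int prev = __atomic_load_n(v, __ATOMIC_RELAXED);

    while (prev != u &&
           !__atomic_compare_exchange_n(v, &prev, prev + a, false,
                                        __ATOMIC_SEQ_CST, __ATOMIC_RELAXED))
        ;
    return prev;
}
```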
+
+#endif /* _ASM_RISCV_ATOMIC_H */
diff --git a/xen/include/asm-riscv/bitops.h b/xen/include/asm-riscv/bitops.h
new file mode 100644
index 0000000000..f2f6f63b03
--- /dev/null
+++ b/xen/include/asm-riscv/bitops.h
@@ -0,0 +1,331 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BITOPS_H
+#define _ASM_RISCV_BITOPS_H
+
+#include <asm/system.h>
+
+#define BIT_ULL(nr)		(1ULL << (nr))
+#define BIT_MASK(nr)		(1UL << ((nr) % BITS_PER_LONG))
+#define BIT_WORD(nr)		((nr) / BITS_PER_LONG)
+#define BIT_ULL_MASK(nr)	(1ULL << ((nr) % BITS_PER_LONG_LONG))
+#define BIT_ULL_WORD(nr)	((nr) / BITS_PER_LONG_LONG)
+#define BITS_PER_BYTE		8
+
+#define __set_bit(n,p)            set_bit(n,p)
+#define __clear_bit(n,p)          clear_bit(n,p)
+
+#define BITS_PER_WORD           32
+
+#ifndef smp_mb__before_clear_bit
+#define smp_mb__before_clear_bit()  smp_mb()
+#define smp_mb__after_clear_bit()   smp_mb()
+#endif /* smp_mb__before_clear_bit */
+
+#if (BITS_PER_LONG == 64)
+#define __AMO(op)	"amo" #op ".d"
+#elif (BITS_PER_LONG == 32)
+#define __AMO(op)	"amo" #op ".w"
+#else
+#error "Unexpected BITS_PER_LONG"
+#endif
+
+#define __test_and_op_bit_ord(op, mod, nr, addr, ord)		\
+({								\
+	unsigned long __res, __mask;				\
+	__mask = BIT_MASK(nr);					\
+	__asm__ __volatile__ (					\
+		__AMO(op) #ord " %0, %2, %1"			\
+		: "=r" (__res), "+A" (addr[BIT_WORD(nr)])	\
+		: "r" (mod(__mask))				\
+		: "memory");					\
+	((__res & __mask) != 0);				\
+})
+
+#define __op_bit_ord(op, mod, nr, addr, ord)			\
+	__asm__ __volatile__ (					\
+		__AMO(op) #ord " zero, %1, %0"			\
+		: "+A" (addr[BIT_WORD(nr)])			\
+		: "r" (mod(BIT_MASK(nr)))			\
+		: "memory");
+
+#define __test_and_op_bit(op, mod, nr, addr) 			\
+	__test_and_op_bit_ord(op, mod, nr, addr, .aqrl)
+#define __op_bit(op, mod, nr, addr)				\
+	__op_bit_ord(op, mod, nr, addr, )
+
+/* Bitmask modifiers */
+#define __NOP(x)	(x)
+#define __NOT(x)	(~(x))
+
+/**
+ * __test_and_set_bit - Set a bit and return its old value
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation may be reordered on architectures other than x86.
+ */
+static inline int __test_and_set_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	return __test_and_op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * __test_and_clear_bit - Clear a bit and return its old value
+ * @nr: Bit to clear
+ * @addr: Address to count from
+ *
+ * This operation can be reordered on architectures other than x86.
+ */
+static inline int __test_and_clear_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	return __test_and_op_bit(and, __NOT, nr, addr);
+}
+
+/**
+ * __test_and_change_bit - Change a bit and return its old value
+ * @nr: Bit to change
+ * @addr: Address to count from
+ *
+ * This operation is atomic and cannot be reordered.
+ * It also implies a memory barrier.
+ */
+static inline int __test_and_change_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	return __test_and_op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * set_bit - Atomically set a bit in memory
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non-x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ *
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void set_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	__op_bit(or, __NOP, nr, addr);
+}
+
+/**
+ * clear_bit - Clears a bit in memory
+ * @nr: Bit to clear
+ * @addr: Address to start counting from
+ *
+ * Note: there are no guarantees that this function will not be reordered
+ * on non-x86 architectures, so if you are writing portable code,
+ * make sure not to rely on its reordering guarantees.
+ */
+static inline void clear_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	__op_bit(and, __NOT, nr, addr);
+}
+
+static inline int test_bit(int nr, const volatile void *p)
+{
+        const volatile unsigned int *addr = (const volatile unsigned int *)p;
+
+        return 1UL & (addr[BIT_WORD(nr)] >> (nr & (BITS_PER_WORD-1)));
+}
+
+/**
+ * change_bit - Toggle a bit in memory
+ * @nr: Bit to change
+ * @addr: Address to start counting from
+ *
+ * change_bit() may be reordered on architectures other than x86.
+ * Note that @nr may be almost arbitrarily large; this function is not
+ * restricted to acting on a single-word quantity.
+ */
+static inline void change_bit(int nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	__op_bit(xor, __NOP, nr, addr);
+}
+
+/**
+ * test_and_set_bit_lock - Set a bit and return its old value, for lock
+ * @nr: Bit to set
+ * @addr: Address to count from
+ *
+ * This operation is atomic and provides acquire barrier semantics.
+ * It can be used to implement bit locks.
+ */
+static inline int test_and_set_bit_lock(
+	unsigned long nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	return __test_and_op_bit_ord(or, __NOP, nr, addr, .aq);
+}
+
+/**
+ * clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is atomic and provides release barrier semantics.
+ */
+static inline void clear_bit_unlock(
+	unsigned long nr, volatile void *p)
+{
+	volatile unsigned long *addr = p;
+
+	__op_bit_ord(and, __NOT, nr, addr, .rl);
+}
+
+/**
+ * __clear_bit_unlock - Clear a bit in memory, for unlock
+ * @nr: the bit to set
+ * @addr: the address to start counting from
+ *
+ * This operation is like clear_bit_unlock, however it is not atomic.
+ * It does provide release barrier semantics so it can be used to unlock
+ * a bit lock, however it would only be used if no other CPU can modify
+ * any bits in the memory until the lock is released (a good example is
+ * if the bit lock itself protects access to the other bits in the word).
+ *
+ * On RISC-V systems there seems to be no benefit to taking advantage of the
+ * non-atomic property here: it's a lot more instructions and we still have to
+ * provide release semantics anyway.
+ */
+static inline void __clear_bit_unlock(
+	unsigned long nr, volatile unsigned long *addr)
+{
+	clear_bit_unlock(nr, addr);
+}
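The acquire/release pairing described for test_and_set_bit_lock()/clear_bit_unlock() is the classic bit-spinlock pattern. A portable sketch with GCC __atomic builtins standing in for the amoor.w.aq / amoand.w.rl instructions (model_* names are illustrative only):

```c
#include <assert.h>

/* Returns the old value of bit @nr: 0 means the lock was acquired,
 * 1 means it was already held. The acquire ordering keeps the
 * critical section from floating above the lock. */
static inline int model_test_and_set_bit_lock(unsigned int nr,
                                              unsigned long *p)
{
    unsigned long mask = 1UL << nr;

    return (__atomic_fetch_or(p, mask, __ATOMIC_ACQUIRE) & mask) != 0;
}

/* Clears bit @nr with release ordering, publishing the critical
 * section's stores before the lock is seen as free. */
static inline void model_clear_bit_unlock(unsigned int nr, unsigned long *p)
{
    __atomic_fetch_and(p, ~(1UL << nr), __ATOMIC_RELEASE);
}
```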
+
+#undef __test_and_op_bit
+#undef __op_bit
+#undef __NOP
+#undef __NOT
+#undef __AMO
+
+static inline int fls(unsigned int x)
+{
+    return generic_fls(x);
+}
+
+static inline int flsl(unsigned long x)
+{
+   return generic_flsl(x);
+}
+
+#define test_and_set_bit   __test_and_set_bit
+#define test_and_clear_bit __test_and_clear_bit
+
+/* Based on linux/include/asm-generic/bitops/find.h */
+
+#ifndef find_next_bit
+/**
+ * find_next_bit - find the next set bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The bitmap size in bits
+ * @offset: The bit number to start searching at
+ */
+extern unsigned long find_next_bit(const unsigned long *addr,
+		unsigned long size, unsigned long offset);
+#endif
+
+#ifndef find_next_zero_bit
+/**
+ * find_next_zero_bit - find the next cleared bit in a memory region
+ * @addr: The address to base the search on
+ * @size: The bitmap size in bits
+ * @offset: The bit number to start searching at
+ */
+extern unsigned long find_next_zero_bit(const unsigned long *addr,
+		unsigned long size, unsigned long offset);
+#endif
+
+#ifdef CONFIG_GENERIC_FIND_FIRST_BIT
+
+/**
+ * find_first_bit - find the first set bit in a memory region
+ * @addr: The address to start the search at
+ * @size: The maximum size to search
+ *
+ * Returns the bit number of the first set bit.
+ */
+extern unsigned long find_first_bit(const unsigned long *addr,
+				    unsigned long size);
+
+/**
+ * find_first_zero_bit - find the first cleared bit in a memory region
+ * @addr: The address to start the search at
+ * @size: The maximum size to search
+ *
+ * Returns the bit number of the first cleared bit.
+ */
+extern unsigned long find_first_zero_bit(const unsigned long *addr,
+					 unsigned long size);
+#else /* CONFIG_GENERIC_FIND_FIRST_BIT */
+
+#define find_first_bit(addr, size) find_next_bit((addr), (size), 0)
+#define find_first_zero_bit(addr, size) find_next_zero_bit((addr), (size), 0)
+
+#endif /* CONFIG_GENERIC_FIND_FIRST_BIT */
+
+#define ffs(x) ({ unsigned int __t = (x); fls(__t & -__t); })
+#define ffsl(x) ({ unsigned long __t = (x); flsl(__t & -__t); })
+
+/*
+ * ffz - find first zero in word.
+ * @x: The word to search
+ *
+ * Undefined if no zero exists, so code should check against ~0UL first.
+ */
+#define ffz(x)  ffs(~(x))
+
+/**
+ * find_first_set_bit - find the first set bit in @word
+ * @word: the word to search
+ *
+ * Returns the bit-number of the first set bit (first bit being 0).
+ * The input must *not* be zero.
+ */
+static inline unsigned int find_first_set_bit(unsigned long word)
+{
+        return ffsl(word) - 1;
+}
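The relationships between ffs(), ffz(), and find_first_set_bit() defined above can be modelled with the GCC builtins they are equivalent to; model_* names are illustrative, not part of the patch:

```c
#include <assert.h>

/* ffs(): 1-based index of the lowest set bit, 0 when x == 0. */
static inline unsigned int model_ffs(unsigned int x)
{
    return (unsigned int)__builtin_ffs((int)x);
}

/* ffz(): first zero bit is the first set bit of the complement. */
static inline unsigned int model_ffz(unsigned int x)
{
    return model_ffs(~x);
}

/* find_first_set_bit(): 0-based, input must be nonzero. */
static inline unsigned int model_find_first_set_bit(unsigned long w)
{
    return (unsigned int)__builtin_ffsl((long)w) - 1;
}
```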
+
+/**
+ * hweightN - returns the hamming weight of a N-bit word
+ * @x: the word to weigh
+ *
+ * The Hamming Weight of a number is the total number of bits set in it.
+ */
+#define hweight64(x) generic_hweight64(x)
+#define hweight32(x) generic_hweight32(x)
+#define hweight16(x) generic_hweight16(x)
+#define hweight8(x) generic_hweight8(x)
+
+#endif /* _ASM_RISCV_BITOPS_H */
diff --git a/xen/include/asm-riscv/bug.h b/xen/include/asm-riscv/bug.h
new file mode 100644
index 0000000000..cdf4c0ebd4
--- /dev/null
+++ b/xen/include/asm-riscv/bug.h
@@ -0,0 +1,54 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_BUG_H
+#define _ASM_RISCV_BUG_H
+
+#define BUGFRAME_NR     4
+
+#ifndef __ASSEMBLY__
+
+struct bug_frame {
+    signed int loc_disp;    /* Relative address to the bug address */
+    signed int file_disp;   /* Relative address to the filename */
+    signed int msg_disp;    /* Relative address to the predicate (for ASSERT) */
+    uint16_t line;          /* Line number */
+    uint32_t pad0:16;       /* Padding for 8-byte alignment */
+};
+
+#define BUG()							\
+do {								\
+    __asm__ __volatile__ ("ebreak\n");			        \
+    unreachable();						\
+} while (0)
+
+#define WARN()                                                  \
+do {                                                            \
+    BUG();                                                      \
+} while (0)
+
+#define assert_failed(msg) do {                                 \
+    BUG();                                                      \
+} while (0)
+
+#define run_in_exception_handler(fn) BUG()
+
+extern const struct bug_frame __start_bug_frames[],
+                              __stop_bug_frames_0[],
+                              __stop_bug_frames_1[],
+                              __stop_bug_frames_2[],
+                              __stop_bug_frames_3[];
+
+#endif /* !__ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_BUG_H */
diff --git a/xen/include/asm-riscv/byteorder.h b/xen/include/asm-riscv/byteorder.h
new file mode 100644
index 0000000000..320a03c88f
--- /dev/null
+++ b/xen/include/asm-riscv/byteorder.h
@@ -0,0 +1,16 @@
+#ifndef __ASM_RISCV_BYTEORDER_H__
+#define __ASM_RISCV_BYTEORDER_H__
+
+#define __BYTEORDER_HAS_U64__
+
+#include <xen/byteorder/little_endian.h>
+
+#endif /* __ASM_RISCV_BYTEORDER_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/cache.h b/xen/include/asm-riscv/cache.h
new file mode 100644
index 0000000000..394782ca8e
--- /dev/null
+++ b/xen/include/asm-riscv/cache.h
@@ -0,0 +1,24 @@
+/*
+ * Copyright (C) 2017 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2021 Connor Davis <connojd@pm.me>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CACHE_H
+#define _ASM_RISCV_CACHE_H
+
+#define L1_CACHE_SHIFT		CONFIG_RISCV_L1_CACHE_SHIFT
+#define L1_CACHE_BYTES		(1 << L1_CACHE_SHIFT)
+
+#define __read_mostly __section(".data.read_mostly")
+
+#endif /* _ASM_RISCV_CACHE_H */
diff --git a/xen/include/asm-riscv/cmpxchg.h b/xen/include/asm-riscv/cmpxchg.h
new file mode 100644
index 0000000000..b7113fa546
--- /dev/null
+++ b/xen/include/asm-riscv/cmpxchg.h
@@ -0,0 +1,382 @@
+/*
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_CMPXCHG_H
+#define _ASM_RISCV_CMPXCHG_H
+
+#include <asm/system.h>
+#include <asm/fence.h>
+#include <xen/lib.h>
+
+#define __xchg_relaxed(ptr, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.w %0, %2, %1\n"			\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.d %0, %2, %1\n"			\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define xchg_relaxed(ptr, x)						\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg_relaxed((ptr),			\
+					    _x_, sizeof(*(ptr)));	\
+})
+
+#define __xchg_acquire(ptr, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.w %0, %2, %1\n"			\
+			RISCV_ACQUIRE_BARRIER				\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.d %0, %2, %1\n"			\
+			RISCV_ACQUIRE_BARRIER				\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define xchg_acquire(ptr, x)						\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg_acquire((ptr),			\
+					    _x_, sizeof(*(ptr)));	\
+})
+
+#define __xchg_release(ptr, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret = 0;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			RISCV_RELEASE_BARRIER				\
+			"	amoswap.w %0, %2, %1\n"			\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			RISCV_RELEASE_BARRIER				\
+			"	amoswap.d %0, %2, %1\n"			\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define xchg_release(ptr, x)						\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg_release((ptr),			\
+					    _x_, sizeof(*(ptr)));	\
+})
+
+#define __xchg(ptr, new, size)						\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(new) __new = (new);					\
+	__typeof__(*(ptr)) __ret = 0;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.w.aqrl %0, %2, %1\n"		\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"	amoswap.d.aqrl %0, %2, %1\n"		\
+			: "=r" (__ret), "+A" (*__ptr)			\
+			: "r" (__new)					\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define xchg(ptr, x)							\
+({									\
+	__typeof__(*(ptr)) _x_ = (x);					\
+	(__typeof__(*(ptr))) __xchg((ptr), _x_, sizeof(*(ptr)));	\
+})
+
+#define xchg32(ptr, x)							\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
+	xchg((ptr), (x));						\
+})
+
+#define xchg64(ptr, x)							\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
+	xchg((ptr), (x));						\
+})
+
+/*
+ * Atomic compare and exchange.  Compare OLD with MEM, if identical,
+ * store NEW in MEM.  Return the initial value in MEM.  Success is
+ * indicated by comparing RETURN with OLD.
+ */
+#define __cmpxchg_relaxed(ptr, old, new, size)				\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (old);				\
+	__typeof__(*(ptr)) __new = (new);				\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.w %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.w %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg_relaxed(ptr, o, n)					\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg_relaxed((ptr),			\
+					_o_, _n_, sizeof(*(ptr)));	\
+})
+
+#define __cmpxchg_acquire(ptr, old, new, size)				\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (old);				\
+	__typeof__(*(ptr)) __new = (new);				\
+	__typeof__(*(ptr)) __ret;					\
+	register unsigned int __rc;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.w %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.w %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			RISCV_ACQUIRE_BARRIER				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			RISCV_ACQUIRE_BARRIER				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg_acquire(ptr, o, n)					\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg_acquire((ptr),			\
+					_o_, _n_, sizeof(*(ptr)));	\
+})
+
+#define __cmpxchg_release(ptr, old, new, size)				\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (old);				\
+	__typeof__(*(ptr)) __new = (new);				\
+	__typeof__(*(ptr)) __ret = 0;					\
+	register unsigned int __rc = 0;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			RISCV_RELEASE_BARRIER				\
+			"0:	lr.w %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.w %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			RISCV_RELEASE_BARRIER				\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg_release(ptr, o, n)					\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg_release((ptr),			\
+					_o_, _n_, sizeof(*(ptr)));	\
+})
+
+#define __cmpxchg(ptr, old, new, size)					\
+({									\
+	__typeof__(ptr) __ptr = (ptr);					\
+	__typeof__(*(ptr)) __old = (__typeof__(*(ptr)))(old);		\
+	__typeof__(*(ptr)) __new = (__typeof__(*(ptr)))(new);	        \
+	__typeof__(*(ptr)) __ret = 0;					\
+	register unsigned int __rc = 0;					\
+	switch (size) {							\
+	case 4:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.w %0, %2\n"				\
+			"	bne  %0, %z3, 1f\n"			\
+			"	sc.w.rl %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"	fence rw, rw\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	case 8:								\
+		__asm__ __volatile__ (					\
+			"0:	lr.d %0, %2\n"				\
+			"	bne %0, %z3, 1f\n"			\
+			"	sc.d.rl %1, %z4, %2\n"			\
+			"	bnez %1, 0b\n"				\
+			"	fence rw, rw\n"				\
+			"1:\n"						\
+			: "=&r" (__ret), "=&r" (__rc), "+A" (*__ptr)	\
+			: "rJ" (__old), "rJ" (__new)			\
+			: "memory");					\
+		break;							\
+	default:							\
+		ASSERT_UNREACHABLE();					\
+	}								\
+	__ret;								\
+})
+
+#define cmpxchg(ptr, o, n)						\
+({									\
+	__typeof__(*(ptr)) _o_ = (o);					\
+	__typeof__(*(ptr)) _n_ = (n);					\
+	(__typeof__(*(ptr))) __cmpxchg((ptr),				\
+				       _o_, _n_, sizeof(*(ptr)));	\
+})
+
+#define cmpxchg_local(ptr, o, n)					\
+	(__cmpxchg_relaxed((ptr), (o), (n), sizeof(*(ptr))))
+
+#define cmpxchg32(ptr, o, n)						\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
+	cmpxchg((ptr), (o), (n));					\
+})
+
+#define cmpxchg32_local(ptr, o, n)					\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 4);				\
+	cmpxchg_relaxed((ptr), (o), (n));				\
+})
+
+#define cmpxchg64(ptr, o, n)						\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
+	cmpxchg((ptr), (o), (n));					\
+})
+
+#define cmpxchg64_local(ptr, o, n)					\
+({									\
+	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
+	cmpxchg_relaxed((ptr), (o), (n));				\
+})
+
+#endif /* _ASM_RISCV_CMPXCHG_H */
diff --git a/xen/include/asm-riscv/compiler_types.h b/xen/include/asm-riscv/compiler_types.h
new file mode 100644
index 0000000000..dbe4a8bbff
--- /dev/null
+++ b/xen/include/asm-riscv/compiler_types.h
@@ -0,0 +1,32 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_COMPILER_TYPES_H
+#define __LINUX_COMPILER_TYPES_H
+
+/*
+ * __unqual_scalar_typeof(x) - Declare an unqualified scalar type, leaving
+ *			       non-scalar types unchanged.
+ */
+/*
+ * Prefer C11 _Generic for better compile-times and simpler code. Note: 'char'
+ * is not type-compatible with 'signed char' or 'unsigned char', so it gets a
+ * separate case.
+ */
+#define __scalar_type_to_expr_cases(type)				\
+		unsigned type:	(unsigned type)0,			\
+		signed type:	(signed type)0
+
+#define __unqual_scalar_typeof(x) typeof(				\
+		_Generic((x),						\
+			 char:	(char)0,				\
+			 __scalar_type_to_expr_cases(char),		\
+			 __scalar_type_to_expr_cases(short),		\
+			 __scalar_type_to_expr_cases(int),		\
+			 __scalar_type_to_expr_cases(long),		\
+			 __scalar_type_to_expr_cases(long long),	\
+			 default: (x)))
+
+/* Is this type a native word size -- useful for atomic operations */
+#define __native_word(t) \
+	(sizeof(t) == sizeof(char) || sizeof(t) == sizeof(short) || \
+	 sizeof(t) == sizeof(int) || sizeof(t) == sizeof(long))
+
+#endif /* __LINUX_COMPILER_TYPES_H */
diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
new file mode 100644
index 0000000000..84cb436dc1
--- /dev/null
+++ b/xen/include/asm-riscv/config.h
@@ -0,0 +1,110 @@
+/******************************************************************************
+ * config.h
+ *
+ * A Linux-style configuration list.
+ */
+
+#ifndef __RISCV_CONFIG_H__
+#define __RISCV_CONFIG_H__
+
+#if defined(CONFIG_RISCV_64)
+# define LONG_BYTEORDER 3
+# define ELFSIZE 64
+#else
+# error "Unsupported RISCV variant"
+#endif
+
+#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
+#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
+#define POINTER_ALIGN  BYTES_PER_LONG
+
+#define BITS_PER_LLONG 64
+
+/* xen_ulong_t is always 64 bits */
+#define BITS_PER_XEN_ULONG 64
+
+#define CONFIG_RISCV 1
+#define CONFIG_RISCV_L1_CACHE_SHIFT 6
+
+#define CONFIG_PAGEALLOC_MAX_ORDER 18
+#define CONFIG_DOMU_MAX_ORDER      9
+#define CONFIG_HWDOM_MAX_ORDER     10
+
+#define OPT_CONSOLE_STR "dtuart"
+
+#ifdef CONFIG_RISCV_64
+#define MAX_VIRT_CPUS 128u
+#else
+#error "Unsupported RISCV variant"
+#endif
+
+#define INVALID_VCPU_ID MAX_VIRT_CPUS
+
+/* Linkage for RISCV */
+#ifdef __ASSEMBLY__
+#define ALIGN .align 2
+
+#define ENTRY(name)                                \
+  .globl name;                                     \
+  ALIGN;                                           \
+  name:
+#endif
+
+#include <xen/const.h>
+
+#ifdef CONFIG_RISCV_64
+
+/*
+ * RISC-V Layout:
+ *   0x0000000000000000 - 0x0000003fffffffff (256GB, L2 slots [0-255])
+ *     Unmapped
+ *   0x0000004000000000 - 0xffffffbfffffffff
+ *     Inaccessible: sv39 only supports 39-bit sign-extended VAs.
+ *   0xffffffc000000000 - 0xffffffc0001fffff (2MB, L2 slot [256])
+ *     Unmapped
+ *   0xffffffc000200000 - 0xffffffc0003fffff (2MB, L2 slot [256])
+ *     Xen text, data, bss
+ *   0xffffffc000400000 - 0xffffffc0005fffff (2MB, L2 slot [256])
+ *     Fixmap: special-purpose 4K mapping slots
+ *   0xffffffc000600000 - 0xffffffc0009fffff (4MB, L2 slot [256])
+ *     Early boot mapping of FDT
+ *   0xffffffc000a00000 - 0xffffffc000bfffff (2MB, L2 slot [256])
+ *     Early relocation address, used when relocating Xen and later
+ *     for livepatch vmap (if compiled in)
+ *   0xffffffc040000000 - 0xffffffc07fffffff (1GB, L2 slot [257])
+ *     VMAP: ioremap and early_ioremap
+ *   0xffffffc080000000 - 0xffffffc13fffffff (3GB, L2 slots [258..260])
+ *     Unmapped
+ *   0xffffffc140000000 - 0xffffffc1bfffffff (2GB, L2 slots [261..262])
+ *     Frametable: 48 bytes per page for 133GB of RAM
+ *   0xffffffc1c0000000 - 0xffffffe1bfffffff (128GB, L2 slots [263..390])
+ *     1:1 direct mapping of RAM
+ *   0xffffffe1c0000000 - 0xffffffffffffffff (121GB, L2 slots [391..511])
+ *     Unmapped
+ */
+
+#define L2_ENTRY_BITS  30
+#define L2_ENTRY_BYTES (_AC(1,UL) << L2_ENTRY_BITS)
+#define L2_ADDR(_slot)                                      \
+    (((_AC(_slot, UL) >> 8) * _AC(0xffffff8000000000,UL)) | \
+     (_AC(_slot, UL) << L2_ENTRY_BITS))
+
+#define XEN_VIRT_START         _AT(vaddr_t, L2_ADDR(256) + MB(2))
+#define HYPERVISOR_VIRT_START  XEN_VIRT_START
+
+#define FRAMETABLE_VIRT_START  _AT(vaddr_t, L2_ADDR(261))
+
+#endif /* CONFIG_RISCV_64 */
+
+#define STACK_ORDER            3
+#define STACK_SIZE             (PAGE_SIZE << STACK_ORDER)
+
+#endif /* __RISCV_CONFIG_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/cpufeature.h b/xen/include/asm-riscv/cpufeature.h
new file mode 100644
index 0000000000..15133ed63e
--- /dev/null
+++ b/xen/include/asm-riscv/cpufeature.h
@@ -0,0 +1,17 @@
+#ifndef __ASM_RISCV_CPUFEATURE_H
+#define __ASM_RISCV_CPUFEATURE_H
+
+static inline int cpu_nr_siblings(unsigned int cpu)
+{
+    return 1;
+}
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/csr.h b/xen/include/asm-riscv/csr.h
new file mode 100644
index 0000000000..2c84efde99
--- /dev/null
+++ b/xen/include/asm-riscv/csr.h
@@ -0,0 +1,219 @@
+/* SPDX-License-Identifier: GPL-2.0-only */
+/*
+ * Copyright (C) 2015 Regents of the University of California
+ */
+
+#ifndef _ASM_RISCV_CSR_H
+#define _ASM_RISCV_CSR_H
+
+#include <asm/asm.h>
+#include <xen/const.h>
+
+/* Status register flags */
+#define SR_SIE		_AC(0x00000002, UL) /* Supervisor Interrupt Enable */
+#define SR_MIE		_AC(0x00000008, UL) /* Machine Interrupt Enable */
+#define SR_SPIE		_AC(0x00000020, UL) /* Previous Supervisor IE */
+#define SR_MPIE		_AC(0x00000080, UL) /* Previous Machine IE */
+#define SR_SPP		_AC(0x00000100, UL) /* Previously Supervisor */
+#define SR_MPP		_AC(0x00001800, UL) /* Previously Machine */
+#define SR_SUM		_AC(0x00040000, UL) /* Supervisor User Memory Access */
+
+#define SR_FS		_AC(0x00006000, UL) /* Floating-point Status */
+#define SR_FS_OFF	_AC(0x00000000, UL)
+#define SR_FS_INITIAL	_AC(0x00002000, UL)
+#define SR_FS_CLEAN	_AC(0x00004000, UL)
+#define SR_FS_DIRTY	_AC(0x00006000, UL)
+
+#define SR_XS		_AC(0x00018000, UL) /* Extension Status */
+#define SR_XS_OFF	_AC(0x00000000, UL)
+#define SR_XS_INITIAL	_AC(0x00008000, UL)
+#define SR_XS_CLEAN	_AC(0x00010000, UL)
+#define SR_XS_DIRTY	_AC(0x00018000, UL)
+
+#ifndef CONFIG_64BIT
+#define SR_SD		_AC(0x80000000, UL) /* FS/XS dirty */
+#else
+#define SR_SD		_AC(0x8000000000000000, UL) /* FS/XS dirty */
+#endif
+
+/* SATP flags */
+#ifndef CONFIG_64BIT
+#define SATP_PPN	_AC(0x003FFFFF, UL)
+#define SATP_MODE_32	_AC(0x80000000, UL)
+#define SATP_MODE	SATP_MODE_32
+#else
+#define SATP_PPN	_AC(0x00000FFFFFFFFFFF, UL)
+#define SATP_MODE_39	_AC(0x8000000000000000, UL)
+#define SATP_MODE	SATP_MODE_39
+#endif
+
+/* Exception cause high bit - is an interrupt if set */
+#define CAUSE_IRQ_FLAG		(_AC(1, UL) << (__riscv_xlen - 1))
+
+/* Interrupt causes (minus the high bit) */
+#define IRQ_S_SOFT		1
+#define IRQ_M_SOFT		3
+#define IRQ_S_TIMER		5
+#define IRQ_M_TIMER		7
+#define IRQ_S_EXT		9
+#define IRQ_M_EXT		11
+
+/* Exception causes */
+#define EXC_INST_MISALIGNED	0
+#define EXC_INST_ACCESS		1
+#define EXC_BREAKPOINT		3
+#define EXC_LOAD_ACCESS		5
+#define EXC_STORE_ACCESS	7
+#define EXC_SYSCALL		8
+#define EXC_INST_PAGE_FAULT	12
+#define EXC_LOAD_PAGE_FAULT	13
+#define EXC_STORE_PAGE_FAULT	15
+
+/* PMP configuration */
+#define PMP_R			0x01
+#define PMP_W			0x02
+#define PMP_X			0x04
+#define PMP_A			0x18
+#define PMP_A_TOR		0x08
+#define PMP_A_NA4		0x10
+#define PMP_A_NAPOT		0x18
+#define PMP_L			0x80
+
+/* symbolic CSR names: */
+#define CSR_CYCLE		0xc00
+#define CSR_TIME		0xc01
+#define CSR_INSTRET		0xc02
+#define CSR_CYCLEH		0xc80
+#define CSR_TIMEH		0xc81
+#define CSR_INSTRETH		0xc82
+
+#define CSR_SSTATUS		0x100
+#define CSR_SIE			0x104
+#define CSR_STVEC		0x105
+#define CSR_SCOUNTEREN		0x106
+#define CSR_SSCRATCH		0x140
+#define CSR_SEPC		0x141
+#define CSR_SCAUSE		0x142
+#define CSR_STVAL		0x143
+#define CSR_SIP			0x144
+#define CSR_SATP		0x180
+
+#define CSR_MSTATUS		0x300
+#define CSR_MISA		0x301
+#define CSR_MIE			0x304
+#define CSR_MTVEC		0x305
+#define CSR_MSCRATCH		0x340
+#define CSR_MEPC		0x341
+#define CSR_MCAUSE		0x342
+#define CSR_MTVAL		0x343
+#define CSR_MIP			0x344
+#define CSR_PMPCFG0		0x3a0
+#define CSR_PMPADDR0		0x3b0
+#define CSR_MHARTID		0xf14
+
+#ifdef CONFIG_RISCV_M_MODE
+# define CSR_STATUS	CSR_MSTATUS
+# define CSR_IE		CSR_MIE
+# define CSR_TVEC	CSR_MTVEC
+# define CSR_SCRATCH	CSR_MSCRATCH
+# define CSR_EPC	CSR_MEPC
+# define CSR_CAUSE	CSR_MCAUSE
+# define CSR_TVAL	CSR_MTVAL
+# define CSR_IP		CSR_MIP
+
+# define SR_IE		SR_MIE
+# define SR_PIE		SR_MPIE
+# define SR_PP		SR_MPP
+
+# define RV_IRQ_SOFT    IRQ_M_SOFT
+# define RV_IRQ_TIMER	IRQ_M_TIMER
+# define RV_IRQ_EXT     IRQ_M_EXT
+#else /* CONFIG_RISCV_M_MODE */
+# define CSR_STATUS	CSR_SSTATUS
+# define CSR_IE		CSR_SIE
+# define CSR_TVEC	CSR_STVEC
+# define CSR_SCRATCH	CSR_SSCRATCH
+# define CSR_EPC	CSR_SEPC
+# define CSR_CAUSE	CSR_SCAUSE
+# define CSR_TVAL	CSR_STVAL
+# define CSR_IP		CSR_SIP
+
+# define SR_IE		SR_SIE
+# define SR_PIE		SR_SPIE
+# define SR_PP		SR_SPP
+
+# define RV_IRQ_SOFT    IRQ_S_SOFT
+# define RV_IRQ_TIMER	IRQ_S_TIMER
+# define RV_IRQ_EXT     IRQ_S_EXT
+#endif /* CONFIG_RISCV_M_MODE */
+
+/* IE/IP (Supervisor/Machine Interrupt Enable/Pending) flags */
+#define IE_SIE		(_AC(0x1, UL) << RV_IRQ_SOFT)
+#define IE_TIE		(_AC(0x1, UL) << RV_IRQ_TIMER)
+#define IE_EIE		(_AC(0x1, UL) << RV_IRQ_EXT)
+
+#ifndef __ASSEMBLY__
+
+#define csr_swap(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrw %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_read(csr)						\
+({								\
+	register unsigned long __v;				\
+	__asm__ __volatile__ ("csrr %0, " __ASM_STR(csr)	\
+			      : "=r" (__v) :			\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_write(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrw " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#define csr_read_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrs %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_set(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrs " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#define csr_read_clear(csr, val)				\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrrc %0, " __ASM_STR(csr) ", %1"\
+			      : "=r" (__v) : "rK" (__v)		\
+			      : "memory");			\
+	__v;							\
+})
+
+#define csr_clear(csr, val)					\
+({								\
+	unsigned long __v = (unsigned long)(val);		\
+	__asm__ __volatile__ ("csrc " __ASM_STR(csr) ", %0"	\
+			      : : "rK" (__v)			\
+			      : "memory");			\
+})
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_CSR_H */
diff --git a/xen/include/asm-riscv/current.h b/xen/include/asm-riscv/current.h
new file mode 100644
index 0000000000..b9f319e9c4
--- /dev/null
+++ b/xen/include/asm-riscv/current.h
@@ -0,0 +1,47 @@
+#ifndef __ASM_CURRENT_H
+#define __ASM_CURRENT_H
+
+#include <xen/page-size.h>
+#include <xen/percpu.h>
+#include <asm/processor.h>
+
+#ifndef __ASSEMBLY__
+
+struct vcpu;
+
+/* Which VCPU is "current" on this PCPU. */
+DECLARE_PER_CPU(struct vcpu *, curr_vcpu);
+
+#define current              (this_cpu(curr_vcpu))
+#define set_current(vcpu)    do { current = (vcpu); } while (0)
+#define get_cpu_current(cpu) (per_cpu(curr_vcpu, cpu))
+
+/* Per-VCPU state that lives at the top of the stack */
+struct cpu_info {
+    struct cpu_user_regs guest_cpu_user_regs;
+    unsigned long elr;
+    uint32_t flags;
+};
+
+static inline struct cpu_info *get_cpu_info(void)
+{
+    register unsigned long sp asm ("sp");
+
+    return (struct cpu_info *)((sp & ~(STACK_SIZE - 1)) +
+                               STACK_SIZE - sizeof(struct cpu_info));
+}
+
+#define guest_cpu_user_regs() (&get_cpu_info()->guest_cpu_user_regs)
+
+DECLARE_PER_CPU(unsigned int, cpu_id);
+
+#define get_processor_id() (this_cpu(cpu_id))
+
+#define set_processor_id(id)  do {                      \
+    csr_write(CSR_SCRATCH, __per_cpu_offset[id]);       \
+    this_cpu(cpu_id) = (id);                            \
+} while (0)
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __ASM_CURRENT_H */
diff --git a/xen/include/asm-riscv/debugger.h b/xen/include/asm-riscv/debugger.h
new file mode 100644
index 0000000000..af4fc8a838
--- /dev/null
+++ b/xen/include/asm-riscv/debugger.h
@@ -0,0 +1,15 @@
+#ifndef __RISCV_DEBUGGER_H__
+#define __RISCV_DEBUGGER_H__
+
+#define debugger_trap_fatal(v, r) (0)
+#define debugger_trap_immediate() ((void) 0)
+
+#endif /* __RISCV_DEBUGGER_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/delay.h b/xen/include/asm-riscv/delay.h
new file mode 100644
index 0000000000..8f2da8b78b
--- /dev/null
+++ b/xen/include/asm-riscv/delay.h
@@ -0,0 +1,17 @@
+#ifndef __RISCV_DELAY_H__
+#define __RISCV_DELAY_H__
+
+#include <asm/processor.h>
+
+extern void udelay(unsigned long usecs);
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/desc.h b/xen/include/asm-riscv/desc.h
new file mode 100644
index 0000000000..a4d02d5eef
--- /dev/null
+++ b/xen/include/asm-riscv/desc.h
@@ -0,0 +1,12 @@
+#ifndef __ARCH_DESC_H
+#define __ARCH_DESC_H
+
+#endif /* __ARCH_DESC_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/device.h b/xen/include/asm-riscv/device.h
new file mode 100644
index 0000000000..e38d2a9712
--- /dev/null
+++ b/xen/include/asm-riscv/device.h
@@ -0,0 +1,15 @@
+#ifndef __ASM_RISCV_DEVICE_H
+#define __ASM_RISCV_DEVICE_H
+
+typedef struct device device_t;
+
+#endif /* __ASM_RISCV_DEVICE_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/div64.h b/xen/include/asm-riscv/div64.h
new file mode 100644
index 0000000000..0a88dd30ad
--- /dev/null
+++ b/xen/include/asm-riscv/div64.h
@@ -0,0 +1,23 @@
+#ifndef __ASM_RISCV_DIV64
+#define __ASM_RISCV_DIV64
+
+#include <asm/system.h>
+#include <xen/types.h>
+
+# define do_div(n,base) ({                                      \
+        uint32_t __base = (base);                               \
+        uint32_t __rem;                                         \
+        __rem = ((uint64_t)(n)) % __base;                       \
+        (n) = ((uint64_t)(n)) / __base;                         \
+        __rem;                                                  \
+ })
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/domain.h b/xen/include/asm-riscv/domain.h
new file mode 100644
index 0000000000..ebf2c5bfe1
--- /dev/null
+++ b/xen/include/asm-riscv/domain.h
@@ -0,0 +1,50 @@
+#ifndef __ASM_DOMAIN_H__
+#define __ASM_DOMAIN_H__
+
+#include <xen/cache.h>
+#include <xen/sched.h>
+#include <asm/page.h>
+#include <asm/p2m.h>
+#include <public/hvm/params.h>
+#include <xen/serial.h>
+#include <xen/rbtree.h>
+
+struct hvm_domain {
+    uint64_t params[HVM_NR_PARAMS];
+};
+
+/* The hardware domain always has its memory direct mapped. */
+#define is_domain_direct_mapped(d) ((d) == hardware_domain)
+
+struct arch_domain {
+    struct hvm_domain hvm;
+} __cacheline_aligned;
+
+struct arch_vcpu {
+}  __cacheline_aligned;
+
+void vcpu_show_execution_state(struct vcpu *);
+void vcpu_show_registers(const struct vcpu *);
+
+static inline struct vcpu_guest_context *alloc_vcpu_guest_context(void)
+{
+    return (struct vcpu_guest_context *)0xdeadbeef;
+}
+
+static inline void free_vcpu_guest_context(struct vcpu_guest_context *vgc)
+{
+}
+
+static inline void arch_vcpu_block(struct vcpu *v) {}
+
+#endif /* __ASM_DOMAIN_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/event.h b/xen/include/asm-riscv/event.h
new file mode 100644
index 0000000000..88e10f414b
--- /dev/null
+++ b/xen/include/asm-riscv/event.h
@@ -0,0 +1,42 @@
+#ifndef __ASM_EVENT_H__
+#define __ASM_EVENT_H__
+
+#include <xen/errno.h>
+#include <asm/domain.h>
+#include <asm/bug.h>
+
+void vcpu_kick(struct vcpu *v);
+void vcpu_mark_events_pending(struct vcpu *v);
+void vcpu_update_evtchn_irq(struct vcpu *v);
+void vcpu_block_unless_event_pending(struct vcpu *v);
+
+static inline int vcpu_event_delivery_is_enabled(struct vcpu *v)
+{
+    return 0;
+}
+
+static inline int local_events_need_delivery(void)
+{
+    return 0;
+}
+
+static inline void local_event_delivery_enable(void)
+{
+}
+
+/* No arch specific virq definition now. Default to global. */
+static inline bool arch_virq_is_global(unsigned int virq)
+{
+    return true;
+}
+
+#endif
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/fence.h b/xen/include/asm-riscv/fence.h
new file mode 100644
index 0000000000..2b443a3a48
--- /dev/null
+++ b/xen/include/asm-riscv/fence.h
@@ -0,0 +1,12 @@
+#ifndef _ASM_RISCV_FENCE_H
+#define _ASM_RISCV_FENCE_H
+
+#ifdef CONFIG_SMP
+#define RISCV_ACQUIRE_BARRIER		"\tfence r , rw\n"
+#define RISCV_RELEASE_BARRIER		"\tfence rw,  w\n"
+#else
+#define RISCV_ACQUIRE_BARRIER
+#define RISCV_RELEASE_BARRIER
+#endif
+
+#endif	/* _ASM_RISCV_FENCE_H */
diff --git a/xen/include/asm-riscv/flushtlb.h b/xen/include/asm-riscv/flushtlb.h
new file mode 100644
index 0000000000..7a4a4eee23
--- /dev/null
+++ b/xen/include/asm-riscv/flushtlb.h
@@ -0,0 +1,34 @@
+#ifndef __ASM_RISCV_FLUSHTLB_H__
+#define __ASM_RISCV_FLUSHTLB_H__
+
+#include <xen/cpumask.h>
+
+/*
+ * Filter the given set of CPUs, removing those that definitely flushed their
+ * TLB since @page_timestamp.
+ */
+/* XXX: lazy implementation that just doesn't clear anything. */
+static inline void tlbflush_filter(cpumask_t *mask, uint32_t page_timestamp) {}
+
+/* Returning 0 from tlbflush_current_time will always force a flush. */
+static inline uint32_t tlbflush_current_time(void)
+{
+    return 0;
+}
+
+static inline void page_set_tlbflush_timestamp(struct page_info *page)
+{
+}
+
+/* Flush specified CPUs' TLBs */
+void arch_flush_tlb_mask(const cpumask_t *mask);
+
+#endif /* __ASM_RISCV_FLUSHTLB_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/grant_table.h b/xen/include/asm-riscv/grant_table.h
new file mode 100644
index 0000000000..8bcc05a60b
--- /dev/null
+++ b/xen/include/asm-riscv/grant_table.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_GRANT_TABLE_H__
+#define __ASM_GRANT_TABLE_H__
+
+#endif /* __ASM_GRANT_TABLE_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/guest_access.h b/xen/include/asm-riscv/guest_access.h
new file mode 100644
index 0000000000..61a16044b2
--- /dev/null
+++ b/xen/include/asm-riscv/guest_access.h
@@ -0,0 +1,41 @@
+#ifndef __ASM_RISCV_GUEST_ACCESS_H__
+#define __ASM_RISCV_GUEST_ACCESS_H__
+
+#include <xen/errno.h>
+#include <xen/sched.h>
+
+unsigned long raw_copy_to_guest(void *to, const void *from, unsigned len);
+unsigned long raw_copy_to_guest_flush_dcache(void *to, const void *from,
+                                             unsigned len);
+unsigned long raw_copy_from_guest(void *to, const void *from, unsigned len);
+unsigned long raw_clear_guest(void *to, unsigned len);
+
+/* Copy data to guest physical address, then clean the region. */
+unsigned long copy_to_guest_phys_flush_dcache(struct domain *d,
+                                              paddr_t phys,
+                                              void *buf,
+                                              unsigned int len);
+
+int access_guest_memory_by_ipa(struct domain *d, paddr_t ipa, void *buf,
+                               uint32_t size, bool is_write);
+
+#define __raw_copy_to_guest raw_copy_to_guest
+#define __raw_copy_from_guest raw_copy_from_guest
+#define __raw_clear_guest raw_clear_guest
+
+/*
+ * Pre-validate a guest handle.
+ * Allows use of faster __copy_* functions.
+ */
+#define guest_handle_okay(hnd, nr) (1)
+#define guest_handle_subrange_okay(hnd, first, last) (1)
+
+#endif /* __ASM_RISCV_GUEST_ACCESS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/guest_atomics.h b/xen/include/asm-riscv/guest_atomics.h
new file mode 100644
index 0000000000..85e82e8c7c
--- /dev/null
+++ b/xen/include/asm-riscv/guest_atomics.h
@@ -0,0 +1,60 @@
+/*
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License as published by
+ * the Free Software Foundation; either version 2, or (at your option)
+ * any later version.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.
+ */
+
+#ifndef _RISCV_GUEST_ATOMICS_H
+#define _RISCV_GUEST_ATOMICS_H
+
+#define guest_testop(name)                                                  \
+static inline int guest_##name(struct domain *d, int nr, volatile void *p)  \
+{                                                                           \
+    (void) d;                                                               \
+    (void) nr;                                                              \
+    (void) p;                                                               \
+                                                                            \
+    return 0;                                                               \
+}
+
+guest_testop(test_and_set_bit)
+guest_testop(test_and_clear_bit)
+guest_testop(test_and_change_bit)
+
+#undef guest_testop
+
+#define guest_bitop(name)                                                   \
+static inline void guest_##name(struct domain *d, int nr, volatile void *p) \
+{                                                                           \
+    (void) d;                                                               \
+    (void) nr;                                                              \
+    (void) p;                                                               \
+}
+
+guest_bitop(set_bit)
+guest_bitop(clear_bit)
+guest_bitop(change_bit)
+
+#undef guest_bitop
+
+#define guest_test_bit(d, nr, p) ((void)(d), test_bit(nr, p))
+
+#endif /* _RISCV_GUEST_ATOMICS_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/hardirq.h b/xen/include/asm-riscv/hardirq.h
new file mode 100644
index 0000000000..67b6a673db
--- /dev/null
+++ b/xen/include/asm-riscv/hardirq.h
@@ -0,0 +1,27 @@
+#ifndef __ASM_HARDIRQ_H
+#define __ASM_HARDIRQ_H
+
+#include <xen/cache.h>
+#include <xen/smp.h>
+
+typedef struct {
+        unsigned long __softirq_pending;
+        unsigned int __local_irq_count;
+} __cacheline_aligned irq_cpustat_t;
+
+#include <xen/irq_cpustat.h>    /* Standard mappings for irq_cpustat_t above */
+
+#define in_irq() (local_irq_count(smp_processor_id()) != 0)
+
+#define irq_enter()     (local_irq_count(smp_processor_id())++)
+#define irq_exit()      (local_irq_count(smp_processor_id())--)
+
+#endif /* __ASM_HARDIRQ_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/hypercall.h b/xen/include/asm-riscv/hypercall.h
new file mode 100644
index 0000000000..8af474b5e2
--- /dev/null
+++ b/xen/include/asm-riscv/hypercall.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_RISCV_HYPERCALL_H__
+#define __ASM_RISCV_HYPERCALL_H__
+
+#endif /* __ASM_RISCV_HYPERCALL_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/init.h b/xen/include/asm-riscv/init.h
new file mode 100644
index 0000000000..d72e62f0c9
--- /dev/null
+++ b/xen/include/asm-riscv/init.h
@@ -0,0 +1,42 @@
+#ifndef _XEN_ASM_INIT_H
+#define _XEN_ASM_INIT_H
+
+#ifndef __ASSEMBLY__
+
+struct init_info {
+    /* Pointer to the stack, used by head.S when entering in C */
+    unsigned char *stack;
+
+    /* Logical CPU ID, used by start_secondary */
+    unsigned int cpuid;
+};
+
+#endif /* __ASSEMBLY__ */
+
+/* For assembly routines */
+#define __HEAD		.section	".head.text","ax"
+#define __INIT		.section	".init.text","ax"
+#define __FINIT		.previous
+
+#define __INITDATA	.section	".init.data","aw",%progbits
+#define __INITRODATA	.section	".init.rodata","a",%progbits
+#define __FINITDATA	.previous
+
+#define __MEMINIT        .section	".meminit.text", "ax"
+#define __MEMINITDATA    .section	".meminit.data", "aw"
+#define __MEMINITRODATA  .section	".meminit.rodata", "a"
+
+/* silence warnings when references are OK */
+#define __REF            .section       ".ref.text", "ax"
+#define __REFDATA        .section       ".ref.data", "aw"
+#define __REFCONST       .section       ".ref.rodata", "a"
+
+#endif /* _XEN_ASM_INIT_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/io.h b/xen/include/asm-riscv/io.h
new file mode 100644
index 0000000000..92d17ebfa8
--- /dev/null
+++ b/xen/include/asm-riscv/io.h
@@ -0,0 +1,283 @@
+/*
+ * {read,write}{b,w,l,q} based on arch/arm64/include/asm/io.h
+ *   which was based on arch/arm/include/io.h
+ *
+ * Copyright (C) 1996-2000 Russell King
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2014 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_IO_H
+#define _ASM_RISCV_IO_H
+
+#include <asm/byteorder.h>
+
+/*
+ * The RISC-V ISA doesn't yet specify how to query or modify PMAs, so we can't
+ * change the properties of memory regions.  This should be fixed by the
+ * upcoming platform spec.
+ */
+#define ioremap_nocache(addr, size) ioremap((addr), (size))
+#define ioremap_wc(addr, size) ioremap((addr), (size))
+#define ioremap_wt(addr, size) ioremap((addr), (size))
+
+/* Generic IO read/write.  These perform native-endian accesses. */
+#define __raw_writeb __raw_writeb
+static inline void __raw_writeb(u8 val, volatile void __iomem *addr)
+{
+	asm volatile("sb %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writew __raw_writew
+static inline void __raw_writew(u16 val, volatile void __iomem *addr)
+{
+	asm volatile("sh %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#define __raw_writel __raw_writel
+static inline void __raw_writel(u32 val, volatile void __iomem *addr)
+{
+	asm volatile("sw %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+
+#ifdef CONFIG_64BIT
+#define __raw_writeq __raw_writeq
+static inline void __raw_writeq(u64 val, volatile void __iomem *addr)
+{
+	asm volatile("sd %0, 0(%1)" : : "r" (val), "r" (addr));
+}
+#endif
+
+#define __raw_readb __raw_readb
+static inline u8 __raw_readb(const volatile void __iomem *addr)
+{
+	u8 val;
+
+	asm volatile("lb %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readw __raw_readw
+static inline u16 __raw_readw(const volatile void __iomem *addr)
+{
+	u16 val;
+
+	asm volatile("lh %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#define __raw_readl __raw_readl
+static inline u32 __raw_readl(const volatile void __iomem *addr)
+{
+	u32 val;
+
+	asm volatile("lw %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+
+#ifdef CONFIG_64BIT
+#define __raw_readq __raw_readq
+static inline u64 __raw_readq(const volatile void __iomem *addr)
+{
+	u64 val;
+
+	asm volatile("ld %0, 0(%1)" : "=r" (val) : "r" (addr));
+	return val;
+}
+#endif
+
+/*
+ * Unordered I/O memory access primitives.  These are even more relaxed than
+ * the relaxed versions, as they don't even order accesses between successive
+ * operations to the I/O regions.
+ */
+#define readb_cpu(c)		({ u8  __r = __raw_readb(c); __r; })
+#define readw_cpu(c)		({ u16 __r = le16_to_cpu((__force __le16)__raw_readw(c)); __r; })
+#define readl_cpu(c)		({ u32 __r = le32_to_cpu((__force __le32)__raw_readl(c)); __r; })
+
+#define writeb_cpu(v,c)		((void)__raw_writeb((v),(c)))
+#define writew_cpu(v,c)		((void)__raw_writew((__force u16)cpu_to_le16(v),(c)))
+#define writel_cpu(v,c)		((void)__raw_writel((__force u32)cpu_to_le32(v),(c)))
+
+#ifdef CONFIG_64BIT
+#define readq_cpu(c)		({ u64 __r = le64_to_cpu((__force __le64)__raw_readq(c)); __r; })
+#define writeq_cpu(v,c)		((void)__raw_writeq((__force u64)cpu_to_le64(v),(c)))
+#endif
+
+/*
+ * Relaxed I/O memory access primitives. These follow the Device memory
+ * ordering rules but do not guarantee any ordering relative to Normal memory
+ * accesses.  These are defined to order the indicated access (either a read or
+ * write) with all other I/O memory accesses. Since the platform specification
+ * defines that all I/O regions are strongly ordered on channel 2, no explicit
+ * fences are required to enforce this ordering.
+ */
+/* FIXME: These are now the same as asm-generic */
+#define __io_rbr()		do {} while (0)
+#define __io_rar()		do {} while (0)
+#define __io_rbw()		do {} while (0)
+#define __io_raw()		do {} while (0)
+
+#define readb_relaxed(c)	({ u8  __v; __io_rbr(); __v = readb_cpu(c); __io_rar(); __v; })
+#define readw_relaxed(c)	({ u16 __v; __io_rbr(); __v = readw_cpu(c); __io_rar(); __v; })
+#define readl_relaxed(c)	({ u32 __v; __io_rbr(); __v = readl_cpu(c); __io_rar(); __v; })
+
+#define writeb_relaxed(v,c)	({ __io_rbw(); writeb_cpu((v),(c)); __io_raw(); })
+#define writew_relaxed(v,c)	({ __io_rbw(); writew_cpu((v),(c)); __io_raw(); })
+#define writel_relaxed(v,c)	({ __io_rbw(); writel_cpu((v),(c)); __io_raw(); })
+
+#ifdef CONFIG_64BIT
+#define readq_relaxed(c)	({ u64 __v; __io_rbr(); __v = readq_cpu(c); __io_rar(); __v; })
+#define writeq_relaxed(v,c)	({ __io_rbw(); writeq_cpu((v),(c)); __io_raw(); })
+#endif
+
+/*
+ * I/O memory access primitives. Reads are ordered relative to any
+ * following Normal memory access. Writes are ordered relative to any prior
+ * Normal memory access.  The memory barriers here are necessary as RISC-V
+ * doesn't define any ordering between the memory space and the I/O space.
+ */
+#define __io_br()	do {} while (0)
+#define __io_ar(v)	__asm__ __volatile__ ("fence i,r" : : : "memory");
+#define __io_bw()	__asm__ __volatile__ ("fence w,o" : : : "memory");
+#define __io_aw()	do { } while (0)
+
+#define readb(c)	({ u8  __v; __io_br(); __v = readb_cpu(c); __io_ar(__v); __v; })
+#define readw(c)	({ u16 __v; __io_br(); __v = readw_cpu(c); __io_ar(__v); __v; })
+#define readl(c)	({ u32 __v; __io_br(); __v = readl_cpu(c); __io_ar(__v); __v; })
+
+#define writeb(v,c)	({ __io_bw(); writeb_cpu((v),(c)); __io_aw(); })
+#define writew(v,c)	({ __io_bw(); writew_cpu((v),(c)); __io_aw(); })
+#define writel(v,c)	({ __io_bw(); writel_cpu((v),(c)); __io_aw(); })
+
+#ifdef CONFIG_64BIT
+#define readq(c)	({ u64 __v; __io_br(); __v = readq_cpu(c); __io_ar(__v); __v; })
+#define writeq(v,c)	({ __io_bw(); writeq_cpu((v),(c)); __io_aw(); })
+#endif
+
+/*
+ * Emulation routines for the port-mapped IO space used by some PCI drivers.
+ * These are defined as being "fully synchronous", but also "not guaranteed to
+ * be fully ordered with respect to other memory and I/O operations".  We're
+ * going to be on the safe side here and just make them:
+ *  - Fully ordered WRT each other, by bracketing them with two fences.  The
+ *    outer set contains both I/O so inX is ordered with outX, while the inner just
+ *    needs the type of the access (I for inX and O for outX).
+ *  - Ordered in the same manner as readX/writeX WRT memory by subsuming their
+ *    fences.
+ *  - Ordered WRT timer reads, so udelay and friends don't get elided by the
+ *    implementation.
+ * Note that there is no way to actually enforce that outX is a non-posted
+ * operation on RISC-V, but hopefully the timer ordering constraint is
+ * sufficient to ensure this works sanely on controllers that support I/O
+ * writes.
+ */
+#define __io_pbr()	__asm__ __volatile__ ("fence io,i"  : : : "memory");
+#define __io_par(v)	__asm__ __volatile__ ("fence i,ior" : : : "memory");
+#define __io_pbw()	__asm__ __volatile__ ("fence iow,o" : : : "memory");
+#define __io_paw()	__asm__ __volatile__ ("fence o,io"  : : : "memory");
+
+#define inb(c)		({ u8  __v; __io_pbr(); __v = readb_cpu((void*)(PCI_IOBASE + (c))); __io_par(__v); __v; })
+#define inw(c)		({ u16 __v; __io_pbr(); __v = readw_cpu((void*)(PCI_IOBASE + (c))); __io_par(__v); __v; })
+#define inl(c)		({ u32 __v; __io_pbr(); __v = readl_cpu((void*)(PCI_IOBASE + (c))); __io_par(__v); __v; })
+
+#define outb(v,c)	({ __io_pbw(); writeb_cpu((v),(void*)(PCI_IOBASE + (c))); __io_paw(); })
+#define outw(v,c)	({ __io_pbw(); writew_cpu((v),(void*)(PCI_IOBASE + (c))); __io_paw(); })
+#define outl(v,c)	({ __io_pbw(); writel_cpu((v),(void*)(PCI_IOBASE + (c))); __io_paw(); })
+
+#ifdef CONFIG_64BIT
+#define inq(c)		({ u64 __v; __io_pbr(); __v = readq_cpu((void*)(c)); __io_par(__v); __v; })
+#define outq(v,c)	({ __io_pbw(); writeq_cpu((v),(void*)(c)); __io_paw(); })
+#endif
+
+/*
+ * Accesses from a single hart to a single I/O address must be ordered.  This
+ * allows us to use the raw read macros, but we still need to fence before and
+ * after the block to ensure ordering WRT other macros.  These are defined to
+ * perform host-endian accesses so we use __raw instead of __cpu.
+ */
+#define __io_reads_ins(port, ctype, len, bfence, afence)			\
+	static inline void __ ## port ## len(const volatile void __iomem *addr,	\
+					     void *buffer,			\
+					     unsigned int count)		\
+	{									\
+		bfence;								\
+		if (count) {							\
+			ctype *buf = buffer;					\
+										\
+			do {							\
+				ctype x = __raw_read ## len(addr);		\
+				*buf++ = x;					\
+			} while (--count);					\
+		}								\
+		afence;								\
+	}
+
+#define __io_writes_outs(port, ctype, len, bfence, afence)			\
+	static inline void __ ## port ## len(volatile void __iomem *addr,	\
+					     const void *buffer,		\
+					     unsigned int count)		\
+	{									\
+		bfence;								\
+		if (count) {							\
+			const ctype *buf = buffer;				\
+										\
+			do {							\
+				__raw_write ## len(*buf++, addr);		\
+			} while (--count);					\
+		}								\
+		afence;								\
+	}
+
+__io_reads_ins(reads,  u8, b, __io_br(), __io_ar(addr))
+__io_reads_ins(reads, u16, w, __io_br(), __io_ar(addr))
+__io_reads_ins(reads, u32, l, __io_br(), __io_ar(addr))
+#define readsb(addr, buffer, count) __readsb(addr, buffer, count)
+#define readsw(addr, buffer, count) __readsw(addr, buffer, count)
+#define readsl(addr, buffer, count) __readsl(addr, buffer, count)
+
+__io_reads_ins(ins,  u8, b, __io_pbr(), __io_par(addr))
+__io_reads_ins(ins, u16, w, __io_pbr(), __io_par(addr))
+__io_reads_ins(ins, u32, l, __io_pbr(), __io_par(addr))
+#define insb(addr, buffer, count) __insb((void __iomem *)(long)(addr), buffer, count)
+#define insw(addr, buffer, count) __insw((void __iomem *)(long)(addr), buffer, count)
+#define insl(addr, buffer, count) __insl((void __iomem *)(long)(addr), buffer, count)
+
+__io_writes_outs(writes,  u8, b, __io_bw(), __io_aw())
+__io_writes_outs(writes, u16, w, __io_bw(), __io_aw())
+__io_writes_outs(writes, u32, l, __io_bw(), __io_aw())
+#define writesb(addr, buffer, count) __writesb(addr, buffer, count)
+#define writesw(addr, buffer, count) __writesw(addr, buffer, count)
+#define writesl(addr, buffer, count) __writesl(addr, buffer, count)
+
+__io_writes_outs(outs,  u8, b, __io_pbw(), __io_paw())
+__io_writes_outs(outs, u16, w, __io_pbw(), __io_paw())
+__io_writes_outs(outs, u32, l, __io_pbw(), __io_paw())
+#define outsb(addr, buffer, count) __outsb((void __iomem *)(long)(addr), buffer, count)
+#define outsw(addr, buffer, count) __outsw((void __iomem *)(long)(addr), buffer, count)
+#define outsl(addr, buffer, count) __outsl((void __iomem *)(long)(addr), buffer, count)
+
+#ifdef CONFIG_64BIT
+__io_reads_ins(reads, u64, q, __io_br(), __io_ar(addr))
+#define readsq(addr, buffer, count) __readsq(addr, buffer, count)
+
+__io_reads_ins(ins, u64, q, __io_pbr(), __io_par(addr))
+#define insq(addr, buffer, count) __insq((void __iomem *)(addr), buffer, count)
+
+__io_writes_outs(writes, u64, q, __io_bw(), __io_aw())
+#define writesq(addr, buffer, count) __writesq(addr, buffer, count)
+
+__io_writes_outs(outs, u64, q, __io_pbw(), __io_paw())
+#define outsq(addr, buffer, count) __outsq((void __iomem *)(addr), buffer, count)
+#endif
+
+#endif /* _ASM_RISCV_IO_H */
diff --git a/xen/include/asm-riscv/iocap.h b/xen/include/asm-riscv/iocap.h
new file mode 100644
index 0000000000..e38a7ff3dc
--- /dev/null
+++ b/xen/include/asm-riscv/iocap.h
@@ -0,0 +1,13 @@
+#ifndef __RISCV_IOCAP_H__
+#define __RISCV_IOCAP_H__
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/iommu.h b/xen/include/asm-riscv/iommu.h
new file mode 100644
index 0000000000..c4f24574ec
--- /dev/null
+++ b/xen/include/asm-riscv/iommu.h
@@ -0,0 +1,46 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __ARCH_RISCV_IOMMU_H__
+#define __ARCH_RISCV_IOMMU_H__
+
+struct arch_iommu
+{
+    /* Private information for the IOMMU drivers */
+    void *priv;
+};
+
+const struct iommu_ops *iommu_get_ops(void);
+void iommu_set_ops(const struct iommu_ops *ops);
+
+#endif /* __ARCH_RISCV_IOMMU_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/irq.h b/xen/include/asm-riscv/irq.h
new file mode 100644
index 0000000000..ae17872d4d
--- /dev/null
+++ b/xen/include/asm-riscv/irq.h
@@ -0,0 +1,58 @@
+#ifndef _ASM_HW_IRQ_H
+#define _ASM_HW_IRQ_H
+
+#include <public/device_tree_defs.h>
+
+/*
+ * These defines correspond to the Xen internal representation of the
+ * IRQ types. We choose to make them the same as the existing device
+ * tree definitions for convenience.
+ */
+#define IRQ_TYPE_NONE           DT_IRQ_TYPE_NONE
+#define IRQ_TYPE_EDGE_RISING    DT_IRQ_TYPE_EDGE_RISING
+#define IRQ_TYPE_EDGE_FALLING   DT_IRQ_TYPE_EDGE_FALLING
+#define IRQ_TYPE_EDGE_BOTH      DT_IRQ_TYPE_EDGE_BOTH
+#define IRQ_TYPE_LEVEL_HIGH     DT_IRQ_TYPE_LEVEL_HIGH
+#define IRQ_TYPE_LEVEL_LOW      DT_IRQ_TYPE_LEVEL_LOW
+#define IRQ_TYPE_LEVEL_MASK     DT_IRQ_TYPE_LEVEL_MASK
+#define IRQ_TYPE_SENSE_MASK     DT_IRQ_TYPE_SENSE_MASK
+#define IRQ_TYPE_INVALID        DT_IRQ_TYPE_INVALID
+
+#define NR_LOCAL_IRQS	32
+#define NR_IRQS		1024
+
+typedef struct {
+} vmask_t;
+
+struct arch_pirq
+{
+};
+
+struct arch_irq_desc {
+};
+
+struct irq_desc;
+
+struct irq_desc *__irq_to_desc(int irq);
+
+#define irq_to_desc(irq)    __irq_to_desc(irq)
+
+void arch_move_irqs(struct vcpu *v);
+
+#define domain_pirq_to_irq(d, pirq) (pirq)
+
+extern const unsigned int nr_irqs;
+#define nr_static_irqs NR_IRQS
+#define arch_hwdom_irqs(domid) NR_IRQS
+
+#define arch_evtchn_bind_pirq(d, pirq) ((void)((d) + (pirq)))
+
+#endif /* _ASM_HW_IRQ_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/mem_access.h b/xen/include/asm-riscv/mem_access.h
new file mode 100644
index 0000000000..8348a04a53
--- /dev/null
+++ b/xen/include/asm-riscv/mem_access.h
@@ -0,0 +1,4 @@
+#ifndef __RISCV_MEM_ACCESS_H__
+#define __RISCV_MEM_ACCESS_H__
+
+#endif
diff --git a/xen/include/asm-riscv/mm.h b/xen/include/asm-riscv/mm.h
new file mode 100644
index 0000000000..e1972a8c20
--- /dev/null
+++ b/xen/include/asm-riscv/mm.h
@@ -0,0 +1,246 @@
+#ifndef __ARCH_RISCV_MM__
+#define __ARCH_RISCV_MM__
+
+#include <xen/errno.h>
+#include <asm/page.h>
+#include <public/xen.h>
+
+extern unsigned long max_page;
+extern unsigned long total_pages;
+extern unsigned long frametable_base_mfn;
+extern mfn_t xenheap_mfn_start;
+extern mfn_t xenheap_mfn_end;
+extern vaddr_t xenheap_virt_end;
+extern vaddr_t xenheap_virt_start;
+
+/* Per-page-frame information. */
+struct page_info {
+    /* Each frame can be threaded onto a doubly-linked list. */
+    struct page_list_entry list;
+
+    /* Reference count and various PGC_xxx flags and fields. */
+    unsigned long count_info;
+
+    /* Context-dependent fields follow... */
+    union {
+        /* Page is in use: ((count_info & PGC_count_mask) != 0). */
+        struct {
+            /* Type reference count and various PGT_xxx flags and fields. */
+            unsigned long type_info;
+        } inuse;
+        /* Page is on a free list: ((count_info & PGC_count_mask) == 0). */
+        union {
+            struct {
+                /*
+                 * Index of the first *possibly* unscrubbed page in the buddy.
+                 * One more bit than maximum possible order to accommodate
+                 * INVALID_DIRTY_IDX.
+                 */
+#define INVALID_DIRTY_IDX ((1UL << (MAX_ORDER + 1)) - 1)
+                unsigned long first_dirty:MAX_ORDER + 1;
+
+                /* Do TLBs need flushing for safety before next page use? */
+                bool need_tlbflush:1;
+
+#define BUDDY_NOT_SCRUBBING    0
+#define BUDDY_SCRUBBING        1
+#define BUDDY_SCRUB_ABORT      2
+                unsigned long scrub_state:2;
+            };
+
+            unsigned long val;
+        } free;
+    } u;
+
+    union {
+        /* Page is in use, but not as a shadow. */
+        struct {
+            /* Owner of this page (zero if page is anonymous). */
+            struct domain *domain;
+        } inuse;
+
+        /* Page is on a free list. */
+        struct {
+            /* Order-size of the free chunk this page is the head of. */
+            unsigned int order;
+        } free;
+    } v;
+
+    union {
+        /*
+         * Timestamp from 'TLB clock', used to avoid extra safety flushes.
+         * Only valid for: a) free pages, and b) pages with zero type count
+         */
+        u32 tlbflush_timestamp;
+    };
+};
+
+#define PFN_ORDER(_pfn) ((_pfn)->v.free.order)
+
+#define PG_shift(idx)   (BITS_PER_LONG - (idx))
+#define PG_mask(x, idx) (x ## UL << PG_shift(idx))
+
+#define PGT_writable_page   PG_mask(1, 1)  /* has writable mappings?         */
+
+/* Count of uses of this frame as its current type. */
+#define PGT_count_width     PG_shift(2)
+#define PGT_count_mask      ((1UL<<PGT_count_width)-1)
+
+/* Cleared when the owning guest 'frees' this page. */
+#define _PGC_allocated      PG_shift(1)
+#define PGC_allocated       PG_mask(1, 1)
+
+/* Page is Xen heap? */
+#define _PGC_xen_heap       PG_shift(2)
+#define PGC_xen_heap        PG_mask(1, 2)
+
+/* Page is broken? */
+#define _PGC_broken         PG_shift(7)
+#define PGC_broken          PG_mask(1, 7)
+
+/* Mutually-exclusive page states: { inuse, offlining, offlined, free }. */
+#define PGC_state           PG_mask(3, 9)
+#define PGC_state_inuse     PG_mask(0, 9)
+#define PGC_state_offlining PG_mask(1, 9)
+#define PGC_state_offlined  PG_mask(2, 9)
+#define PGC_state_free      PG_mask(3, 9)
+#define page_state_is(pg, st) (((pg)->count_info & PGC_state) == PGC_state_##st)
+
+/* Page is not reference counted */
+#define _PGC_extra          PG_shift(10)
+#define PGC_extra           PG_mask(1, 10)
+
+/* Count of references to this frame. */
+#define PGC_count_width     PG_shift(9)
+#define PGC_count_mask      ((1UL<<PGC_count_width)-1)
+
+/*
+ * Page needs to be scrubbed. Since this bit can only be set on a page that is
+ * free (i.e. in PGC_state_free) we can reuse PGC_allocated bit.
+ */
+#define _PGC_need_scrub   _PGC_allocated
+#define PGC_need_scrub    PGC_allocated
+
+#define is_xen_heap_page(page) ((page)->count_info & PGC_xen_heap)
+#define is_xen_heap_mfn(mfn) \
+    (mfn_valid(_mfn(mfn)) && is_xen_heap_page(mfn_to_page(_mfn(mfn))))
+
+#define is_xen_fixed_mfn(mfn)                                   \
+    ((mfn_to_maddr(mfn) >= virt_to_maddr(&_start)) &&           \
+     (mfn_to_maddr(mfn) <= virt_to_maddr(&_end)))
+
+#define page_get_owner(_p)    (_p)->v.inuse.domain
+#define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
+
+#define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
+
+#define mfn_valid(mfn) ({                                          \
+    unsigned long mfn_ = mfn_x(mfn);                               \
+    likely(mfn_ >= frametable_base_mfn && mfn_ < max_page);        \
+})
+
+/* Convert between machine frame numbers and page-info structures. */
+#define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
+#define mfn_to_page(mfn)                                           \
+    (frame_table + (mfn_x(mfn) - frametable_base_mfn))
+#define page_to_mfn(pg)                                            \
+    _mfn(((unsigned long)((pg) - frame_table) + frametable_base_mfn))
+
+/* Convert between machine addresses and page-info structures. */
+#define maddr_to_page(ma) mfn_to_page(maddr_to_mfn(ma))
+#define page_to_maddr(pg) (mfn_to_maddr(page_to_mfn(pg)))
+
+/* Convert between frame number and address formats.  */
+#define pfn_to_paddr(pfn) ((paddr_t)(pfn) << PAGE_SHIFT)
+#define paddr_to_pfn(pa)  ((unsigned long)((pa) >> PAGE_SHIFT))
+#define mfn_to_maddr(mfn) pfn_to_paddr(mfn_x(mfn))
+#define maddr_to_mfn(ma)  _mfn(paddr_to_pfn(ma))
+#define vmap_to_mfn(va)   maddr_to_mfn(virt_to_maddr((vaddr_t)va))
+#define vmap_to_page(va)  mfn_to_page(vmap_to_mfn(va))
+
+static inline void *maddr_to_virt(paddr_t ma)
+{
+    return (void *)0xdeadbeef;
+}
+
+static inline paddr_t __virt_to_maddr(vaddr_t va)
+{
+    return 0;
+}
+
+#define virt_to_maddr(va)  __virt_to_maddr((vaddr_t) (va))
+
+/* Convert between Xen-heap virtual addresses and machine addresses. */
+#define __pa(x)            (virt_to_maddr(x))
+#define __va(x)            (maddr_to_virt(x))
+
+/* Convert between Xen-heap virtual addresses and machine frame numbers. */
+#define __virt_to_mfn(va)  (virt_to_maddr(va) >> PAGE_SHIFT)
+#define __mfn_to_virt(mfn) (maddr_to_virt((paddr_t)(mfn) << PAGE_SHIFT))
+
+/*
+ * We define non-underscored wrappers for the above conversion
+ * functions. These are overridden in various source files, while the
+ * underscored versions remain intact.
+ */
+#define virt_to_mfn(va)    __virt_to_mfn(va)
+#define mfn_to_virt(mfn)   __mfn_to_virt(mfn)
+
+/* Convert between Xen-heap virtual addresses and page-info structures. */
+static inline struct page_info *virt_to_page(const void *v)
+{
+    return (void *)0xdeadbeef;
+}
+
+static inline void *page_to_virt(const struct page_info *pg)
+{
+    return (void *)0xdeadbeef;
+}
+
+#define domain_set_alloc_bitsize(d) ((void)0)
+#define domain_clamp_alloc_bitsize(d, b) (b)
+
+/*
+ * RISC-V does not have an M2P, but common code expects a handful of
+ * M2P-related defines and functions. Provide dummy versions of these.
+ */
+#define INVALID_M2P_ENTRY        (~0UL)
+#define SHARED_M2P_ENTRY         (~0UL - 1UL)
+#define SHARED_M2P(_e)           ((_e) == SHARED_M2P_ENTRY)
+
+/* Xen always owns P2M on RISC-V */
+#define set_gpfn_from_mfn(mfn, pfn) do { (void)(mfn); (void)(pfn); } while (0)
+#define mfn_to_gmfn(_d, mfn)  (mfn)
+
+/* Arch-specific portion of memory_op hypercall. */
+long arch_memory_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg);
+
+extern void put_page_type(struct page_info *page);
+
+static inline void put_page_and_type(struct page_info *page)
+{
+}
+
+int guest_physmap_mark_populate_on_demand(struct domain *d, unsigned long gfn,
+                                          unsigned int order);
+
+unsigned long domain_get_maximum_gpfn(struct domain *d);
+
+/*
+ * On RISC-V, all RAM is currently direct mapped in Xen,
+ * so always return true.
+ */
+static inline bool arch_mfn_in_directmap(unsigned long mfn)
+{
+    return true;
+}
+
+#endif /*  __ARCH_RISCV_MM__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/monitor.h b/xen/include/asm-riscv/monitor.h
new file mode 100644
index 0000000000..e77d21dba4
--- /dev/null
+++ b/xen/include/asm-riscv/monitor.h
@@ -0,0 +1,65 @@
+/*
+ * include/asm-riscv/monitor.h
+ *
+ * Arch-specific monitor_op domctl handler.
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ * Copyright (c) 2016, Bitdefender S.R.L.
+ *
+ * This program is free software; you can redistribute it and/or
+ * modify it under the terms of the GNU General Public
+ * License v2 as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
+ * General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public
+ * License along with this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_RISCV_MONITOR_H__
+#define __ASM_RISCV_MONITOR_H__
+
+#include <xen/sched.h>
+#include <public/domctl.h>
+
+static inline
+void arch_monitor_allow_userspace(struct domain *d, bool allow_userspace)
+{
+}
+
+static inline
+int arch_monitor_domctl_op(struct domain *d, struct xen_domctl_monitor_op *mop)
+{
+    /* No arch-specific monitor ops on RISCV. */
+    return -EOPNOTSUPP;
+}
+
+int arch_monitor_domctl_event(struct domain *d,
+                              struct xen_domctl_monitor_op *mop);
+
+static inline
+int arch_monitor_init_domain(struct domain *d)
+{
+    /* No arch-specific domain initialization on RISCV. */
+    return 0;
+}
+
+static inline
+void arch_monitor_cleanup_domain(struct domain *d)
+{
+    /* No arch-specific domain cleanup on RISCV. */
+}
+
+static inline uint32_t arch_monitor_get_capabilities(struct domain *d)
+{
+    uint32_t capabilities = 0;
+
+    return capabilities;
+}
+
+int monitor_smc(void);
+
+#endif /* __ASM_RISCV_MONITOR_H__ */
diff --git a/xen/include/asm-riscv/nospec.h b/xen/include/asm-riscv/nospec.h
new file mode 100644
index 0000000000..55087fa831
--- /dev/null
+++ b/xen/include/asm-riscv/nospec.h
@@ -0,0 +1,25 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/* Copyright 2018 Amazon.com, Inc. or its affiliates. All Rights Reserved. */
+
+#ifndef _ASM_RISCV_NOSPEC_H
+#define _ASM_RISCV_NOSPEC_H
+
+static inline bool evaluate_nospec(bool condition)
+{
+    return condition;
+}
+
+static inline void block_speculation(void)
+{
+}
+
+#endif /* _ASM_RISCV_NOSPEC_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/numa.h b/xen/include/asm-riscv/numa.h
new file mode 100644
index 0000000000..52bdfbc16b
--- /dev/null
+++ b/xen/include/asm-riscv/numa.h
@@ -0,0 +1,41 @@
+#ifndef __ARCH_RISCV_NUMA_H
+#define __ARCH_RISCV_NUMA_H
+
+#include <xen/mm.h>
+
+typedef u8 nodeid_t;
+
+/* Fake one node for now. See also node_online_map. */
+#define cpu_to_node(cpu) 0
+#define node_to_cpumask(node)   (cpu_online_map)
+
+static inline __attribute__((pure)) nodeid_t phys_to_nid(paddr_t addr)
+{
+    return 0;
+}
+
+/*
+ * TODO: make first_valid_mfn static once NUMA is supported on RISC-V;
+ * it must be exported for now because the dummy helpers below use it.
+ */
+extern mfn_t first_valid_mfn;
+
+/* XXX: implement NUMA support */
+#define node_spanned_pages(nid) (max_page - mfn_x(first_valid_mfn))
+#define node_start_pfn(nid) (mfn_x(first_valid_mfn))
+#define __node_distance(a, b) (20)
+
+static inline unsigned int arch_get_dma_bitsize(void)
+{
+    return 32;
+}
+
+#endif /* __ARCH_RISCV_NUMA_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/p2m.h b/xen/include/asm-riscv/p2m.h
new file mode 100644
index 0000000000..1bb2009d53
--- /dev/null
+++ b/xen/include/asm-riscv/p2m.h
@@ -0,0 +1,218 @@
+#ifndef _XEN_P2M_H
+#define _XEN_P2M_H
+
+#include <xen/mm.h>
+#include <xen/mem_access.h>
+#include <xen/errno.h>
+
+struct domain;
+
+extern void memory_type_changed(struct domain *);
+
+/* Per-p2m-table state */
+struct p2m_domain {
+};
+
+typedef enum {
+    p2m_invalid = 0
+} p2m_type_t;
+
+/* All common type definitions should live ahead of this inclusion. */
+#ifdef _XEN_P2M_COMMON_H
+# error "xen/p2m-common.h should not be included directly"
+#endif
+#include <xen/p2m-common.h>
+
+static inline bool arch_acquire_resource_check(struct domain *d)
+{
+    return true;
+}
+
+static inline
+void p2m_altp2m_check(struct vcpu *v, uint16_t idx)
+{
+}
+
+/* Second stage paging setup, to be called on all CPUs */
+void setup_virt_paging(void);
+
+/* Initialise the data structures for later use by the p2m code */
+int p2m_init(struct domain *d);
+
+/* Return all the p2m resources to Xen. */
+void p2m_teardown(struct domain *d);
+
+/* Remove mapping refcount on each mapping page in the p2m */
+int relinquish_p2m_mapping(struct domain *d);
+
+/* Context switch */
+void p2m_save_state(struct vcpu *p);
+void p2m_restore_state(struct vcpu *n);
+
+/* Print debugging/statistical info about a domain's p2m */
+void p2m_dump_info(struct domain *d);
+
+static inline void p2m_write_lock(struct p2m_domain *p2m)
+{
+}
+
+void p2m_write_unlock(struct p2m_domain *p2m);
+
+static inline void p2m_read_lock(struct p2m_domain *p2m)
+{
+}
+
+static inline void p2m_read_unlock(struct p2m_domain *p2m)
+{
+}
+
+static inline int p2m_is_locked(struct p2m_domain *p2m)
+{
+    return 0;
+}
+
+static inline int p2m_is_write_locked(struct p2m_domain *p2m)
+{
+    return 0;
+}
+
+void p2m_tlb_flush_sync(struct p2m_domain *p2m);
+
+/* Look up the MFN corresponding to a domain's GFN. */
+mfn_t p2m_lookup(struct domain *d, gfn_t gfn, p2m_type_t *t);
+
+/*
+ * Get details of a given gfn.
+ * The P2M lock should be taken by the caller.
+ */
+mfn_t p2m_get_entry(struct p2m_domain *p2m, gfn_t gfn,
+                    p2m_type_t *t, p2m_access_t *a,
+                    unsigned int *page_order,
+                    bool *valid);
+
+/*
+ * Direct set a p2m entry: only for use by the P2M code.
+ * The P2M write lock should be taken.
+ */
+int p2m_set_entry(struct p2m_domain *p2m,
+                  gfn_t sgfn,
+                  unsigned long nr,
+                  mfn_t smfn,
+                  p2m_type_t t,
+                  p2m_access_t a);
+
+bool p2m_resolve_translation_fault(struct domain *d, gfn_t gfn);
+
+void p2m_invalidate_root(struct p2m_domain *p2m);
+
+/*
+ * Clean & invalidate caches corresponding to a region [start,end) of guest
+ * address space.
+ *
+ * start will get updated if the function is preempted.
+ */
+int p2m_cache_flush_range(struct domain *d, gfn_t *pstart, gfn_t end);
+
+void p2m_set_way_flush(struct vcpu *v);
+
+void p2m_toggle_cache(struct vcpu *v, bool was_enabled);
+
+void p2m_flush_vm(struct vcpu *v);
+
+/*
+ * Map a region in the guest p2m with a specific p2m type.
+ * The memory attributes will be derived from the p2m type.
+ */
+int map_regions_p2mt(struct domain *d,
+                     gfn_t gfn,
+                     unsigned long nr,
+                     mfn_t mfn,
+                     p2m_type_t p2mt);
+
+int unmap_regions_p2mt(struct domain *d,
+                       gfn_t gfn,
+                       unsigned long nr,
+                       mfn_t mfn);
+
+int map_dev_mmio_region(struct domain *d,
+                        gfn_t gfn,
+                        unsigned long nr,
+                        mfn_t mfn);
+
+int guest_physmap_add_entry(struct domain *d,
+                            gfn_t gfn,
+                            mfn_t mfn,
+                            unsigned long page_order,
+                            p2m_type_t t);
+
+/* Untyped version for RAM only, for compatibility */
+static inline int guest_physmap_add_page(struct domain *d,
+                                         gfn_t gfn,
+                                         mfn_t mfn,
+                                         unsigned int page_order)
+{
+    return 0;
+}
+
+mfn_t gfn_to_mfn(struct domain *d, gfn_t gfn);
+
+/* Look up a GFN and take a reference count on the backing page. */
+typedef unsigned int p2m_query_t;
+#define P2M_ALLOC    (1u<<0)   /* Populate PoD and paged-out entries */
+#define P2M_UNSHARE  (1u<<1)   /* Break CoW sharing */
+
+struct page_info *p2m_get_page_from_gfn(struct domain *d, gfn_t gfn,
+                                        p2m_type_t *t);
+
+static inline struct page_info *get_page_from_gfn(
+    struct domain *d, unsigned long gfn, p2m_type_t *t, p2m_query_t q)
+{
+    *t = p2m_invalid;
+    return (void *) 0xdeadbeef;
+}
+
+int get_page_type(struct page_info *page, unsigned long type);
+bool is_iomem_page(mfn_t mfn);
+static inline int get_page_and_type(struct page_info *page,
+                                    struct domain *domain,
+                                    unsigned long type)
+{
+    return 0;
+}
+
+/* get host p2m table */
+#define p2m_get_hostp2m(d) (&(d)->arch.p2m)
+
+static inline bool p2m_vm_event_sanity_check(struct domain *d)
+{
+    return true;
+}
+
+/*
+ * Return the start of the next mapping based on the order of the
+ * current one.
+ */
+static inline gfn_t gfn_next_boundary(gfn_t gfn, unsigned int order)
+{
+    return gfn;
+}
+
+/*
+ * A vCPU has cache enabled only when the MMU is enabled and data cache
+ * is enabled.
+ */
+static inline bool vcpu_has_cache_enabled(struct vcpu *v)
+{
+    return false;
+}
+
+#endif /* _XEN_P2M_H */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/page-bits.h b/xen/include/asm-riscv/page-bits.h
new file mode 100644
index 0000000000..5a47701fea
--- /dev/null
+++ b/xen/include/asm-riscv/page-bits.h
@@ -0,0 +1,11 @@
+#ifndef __RISCV_PAGE_SHIFT_H__
+#define __RISCV_PAGE_SHIFT_H__
+
+#define PAGE_SHIFT              12
+
+#ifdef CONFIG_RISCV_64
+#define PADDR_BITS              56
+#define VADDR_BITS              39
+#endif
+
+#endif /* __RISCV_PAGE_SHIFT_H__ */
diff --git a/xen/include/asm-riscv/page.h b/xen/include/asm-riscv/page.h
new file mode 100644
index 0000000000..36c8732efe
--- /dev/null
+++ b/xen/include/asm-riscv/page.h
@@ -0,0 +1,73 @@
+/*
+ * Copyright (C) 2009 Chen Liqin <liqin.chen@sunplusct.com>
+ * Copyright (C) 2012 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2017 XiaojingZhu <zhuxiaoj@ict.ac.cn>
+ * Copyright (C) 2019 Bobby Eshleman <bobbyeshleman@gmail.com>
+ * Copyright (C) 2021 Connor Davis <connojd@pm.me>
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_PAGE_H
+#define _ASM_RISCV_PAGE_H
+
+#include <xen/const.h>
+#include <xen/page-size.h>
+
+#define PAGE_ALIGN(x)   (((x) + PAGE_SIZE - 1) & PAGE_MASK)
+#define paddr_bits      PADDR_BITS
+
+#define PTE_VALID       BIT(0, UL)
+#define PTE_READABLE    BIT(1, UL)
+#define PTE_WRITABLE    BIT(2, UL)
+#define PTE_EXECUTABLE  BIT(3, UL)
+#define PTE_USER        BIT(4, UL)
+#define PTE_GLOBAL      BIT(5, UL)
+#define PTE_ACCESSED    BIT(6, UL)
+#define PTE_DIRTY       BIT(7, UL)
+#define PTE_RSW         (BIT(8, UL) | BIT(9, UL))
+
+#ifndef __ASSEMBLY__
+
+#define PAGE_HYPERVISOR_RO  (PTE_VALID|PTE_READABLE|PTE_ACCESSED)
+#define PAGE_HYPERVISOR_RX  (PAGE_HYPERVISOR_RO|PTE_EXECUTABLE)
+#define PAGE_HYPERVISOR_RW  (PAGE_HYPERVISOR_RO|PTE_WRITABLE|PTE_DIRTY)
+
+/*
+ * RISC-V PTEs carry no cacheability attributes; cacheability is fixed by
+ * the platform's PMAs, so the NOCACHE and WC variants alias the default.
+ */
+#define PAGE_HYPERVISOR         PAGE_HYPERVISOR_RW
+#define PAGE_HYPERVISOR_NOCACHE PAGE_HYPERVISOR
+#define PAGE_HYPERVISOR_WC      PAGE_HYPERVISOR
+
+typedef struct {
+    unsigned long pte;
+} pte_t;
+
+#define clear_page(pgaddr)  memset((pgaddr), 0, PAGE_SIZE)
+#define copy_page(to, from) memcpy((to), (from), PAGE_SIZE)
+
+/*
+ * Ensure that stores to instruction memory are locally visible to
+ * subsequent fetches on this hart.
+ */
+static inline void invalidate_icache(void)
+{
+    asm volatile ("fence.i" ::: "memory");
+}
+
+/* Flush the dcache for an entire page. */
+void flush_page_to_ram(unsigned long mfn, bool sync_icache);
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PAGE_H */
diff --git a/xen/include/asm-riscv/paging.h b/xen/include/asm-riscv/paging.h
new file mode 100644
index 0000000000..3f9f704273
--- /dev/null
+++ b/xen/include/asm-riscv/paging.h
@@ -0,0 +1,15 @@
+#ifndef __ASM_RISCV_PAGING_H__
+#define __ASM_RISCV_PAGING_H__
+
+#define paging_mode_translate(d)              (1)
+
+#endif /* __ASM_RISCV_PAGING_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/pci.h b/xen/include/asm-riscv/pci.h
new file mode 100644
index 0000000000..0ccf335e34
--- /dev/null
+++ b/xen/include/asm-riscv/pci.h
@@ -0,0 +1,31 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef __RISCV_PCI_H__
+#define __RISCV_PCI_H__
+
+struct arch_pci_dev {
+};
+
+#endif /* __RISCV_PCI_H__ */
diff --git a/xen/include/asm-riscv/percpu.h b/xen/include/asm-riscv/percpu.h
new file mode 100644
index 0000000000..0d165d6aa1
--- /dev/null
+++ b/xen/include/asm-riscv/percpu.h
@@ -0,0 +1,33 @@
+#ifndef __RISCV_PERCPU_H__
+#define __RISCV_PERCPU_H__
+
+#ifndef __ASSEMBLY__
+
+#include <xen/types.h>
+#include <asm/sysregs.h>
+
+extern char __per_cpu_start[], __per_cpu_data_end[];
+extern unsigned long __per_cpu_offset[NR_CPUS];
+void percpu_init_areas(void);
+
+#define per_cpu(var, cpu)  \
+    (*RELOC_HIDE(&per_cpu__##var, __per_cpu_offset[cpu]))
+#define this_cpu(var) \
+    (*RELOC_HIDE(&per_cpu__##var, csr_read(CSR_SCRATCH)))
+
+#define per_cpu_ptr(var, cpu)  \
+    (*RELOC_HIDE(var, __per_cpu_offset[cpu]))
+#define this_cpu_ptr(var) \
+    (*RELOC_HIDE(var, csr_read(CSR_SCRATCH)))
+
+#endif
+
+#endif /* __RISCV_PERCPU_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/processor.h b/xen/include/asm-riscv/processor.h
new file mode 100644
index 0000000000..19e681652a
--- /dev/null
+++ b/xen/include/asm-riscv/processor.h
@@ -0,0 +1,59 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _ASM_RISCV_PROCESSOR_H
+#define _ASM_RISCV_PROCESSOR_H
+
+#ifndef __ASSEMBLY__
+
+/* On stack VCPU state */
+struct cpu_user_regs {
+    unsigned long r0;
+};
+
+void show_execution_state(const struct cpu_user_regs *regs);
+void show_registers(const struct cpu_user_regs *regs);
+
+/* All a bit UP for the moment */
+#define cpu_to_core(_cpu)   (0)
+#define cpu_to_socket(_cpu) (0)
+
+/* Based on Linux: arch/riscv/include/asm/processor.h */
+
+static inline void cpu_relax(void)
+{
+	int dummy;
+	/* In lieu of a halt instruction, induce a long-latency stall. */
+	__asm__ __volatile__ ("div %0, %0, zero" : "=r" (dummy));
+	barrier();
+}
+
+static inline void wait_for_interrupt(void)
+{
+	__asm__ __volatile__ ("wfi");
+}
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_PROCESSOR_H */
diff --git a/xen/include/asm-riscv/random.h b/xen/include/asm-riscv/random.h
new file mode 100644
index 0000000000..b4acee276b
--- /dev/null
+++ b/xen/include/asm-riscv/random.h
@@ -0,0 +1,9 @@
+#ifndef __ASM_RANDOM_H__
+#define __ASM_RANDOM_H__
+
+static inline unsigned int arch_get_random(void)
+{
+    return 0;
+}
+
+#endif /* __ASM_RANDOM_H__ */
diff --git a/xen/include/asm-riscv/regs.h b/xen/include/asm-riscv/regs.h
new file mode 100644
index 0000000000..82e7dd2aee
--- /dev/null
+++ b/xen/include/asm-riscv/regs.h
@@ -0,0 +1,23 @@
+#ifndef __RISCV_REGS_H__
+#define __RISCV_REGS_H__
+
+#ifndef __ASSEMBLY__
+
+#include <asm/current.h>
+
+static inline bool guest_mode(const struct cpu_user_regs *r)
+{
+    return false;
+}
+
+#endif
+
+#endif /* __RISCV_REGS_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/setup.h b/xen/include/asm-riscv/setup.h
new file mode 100644
index 0000000000..d0fc75054e
--- /dev/null
+++ b/xen/include/asm-riscv/setup.h
@@ -0,0 +1,14 @@
+#ifndef __RISCV_SETUP_H__
+#define __RISCV_SETUP_H__
+
+#define max_init_domid (0)
+
+#endif /* __RISCV_SETUP_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/smp.h b/xen/include/asm-riscv/smp.h
new file mode 100644
index 0000000000..f0f0b06501
--- /dev/null
+++ b/xen/include/asm-riscv/smp.h
@@ -0,0 +1,46 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _ASM_RISCV_SMP_H
+#define _ASM_RISCV_SMP_H
+
+#ifndef __ASSEMBLY__
+#include <xen/cpumask.h>
+#include <asm/current.h>
+#endif
+
+DECLARE_PER_CPU(cpumask_var_t, cpu_sibling_mask);
+DECLARE_PER_CPU(cpumask_var_t, cpu_core_mask);
+
+/*
+ * Do we, for platform reasons, need to actually keep CPUs online when we
+ * would otherwise prefer them to be off?
+ */
+#define park_offline_cpus true
+
+#define cpu_is_offline(cpu) unlikely(!cpu_online(cpu))
+
+#define smp_processor_id() get_processor_id()
+
+#endif /* _ASM_RISCV_SMP_H */
diff --git a/xen/include/asm-riscv/softirq.h b/xen/include/asm-riscv/softirq.h
new file mode 100644
index 0000000000..976e0ebd70
--- /dev/null
+++ b/xen/include/asm-riscv/softirq.h
@@ -0,0 +1,16 @@
+#ifndef __ASM_SOFTIRQ_H__
+#define __ASM_SOFTIRQ_H__
+
+#define NR_ARCH_SOFTIRQS       0
+
+#define arch_skip_send_event_check(cpu) 0
+
+#endif /* __ASM_SOFTIRQ_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/spinlock.h b/xen/include/asm-riscv/spinlock.h
new file mode 100644
index 0000000000..77e6736e71
--- /dev/null
+++ b/xen/include/asm-riscv/spinlock.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_SPINLOCK_H
+#define __ASM_SPINLOCK_H
+
+#define arch_lock_acquire_barrier()
+#define arch_lock_release_barrier()
+
+#define arch_lock_relax()
+#define arch_lock_signal()
+
+#define arch_lock_signal_wmb()
+
+#endif /* __ASM_SPINLOCK_H */
diff --git a/xen/include/asm-riscv/string.h b/xen/include/asm-riscv/string.h
new file mode 100644
index 0000000000..733e9e00d3
--- /dev/null
+++ b/xen/include/asm-riscv/string.h
@@ -0,0 +1,28 @@
+/******************************************************************************
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ */
+
+#ifndef _ASM_RISCV_STRING_H
+#define _ASM_RISCV_STRING_H
+
+#endif /* _ASM_RISCV_STRING_H */
diff --git a/xen/include/asm-riscv/sysregs.h b/xen/include/asm-riscv/sysregs.h
new file mode 100644
index 0000000000..ae0945d902
--- /dev/null
+++ b/xen/include/asm-riscv/sysregs.h
@@ -0,0 +1,16 @@
+#ifndef __ASM_RISCV_SYSREGS_H
+#define __ASM_RISCV_SYSREGS_H
+
+#include <asm/csr.h>
+
+#endif /* __ASM_RISCV_SYSREGS_H */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
+
+
diff --git a/xen/include/asm-riscv/system.h b/xen/include/asm-riscv/system.h
new file mode 100644
index 0000000000..276e7ba550
--- /dev/null
+++ b/xen/include/asm-riscv/system.h
@@ -0,0 +1,99 @@
+/*
+ * Based on arch/arm/include/asm/system.h
+ *
+ * Copyright (C) 2012 ARM Ltd.
+ * Copyright (C) 2013 Regents of the University of California
+ * Copyright (C) 2017 SiFive
+ * Copyright (C) 2021 Connor Davis <connojd@pm.me>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program.  If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef _ASM_RISCV_SYSTEM_H
+#define _ASM_RISCV_SYSTEM_H
+
+#include <asm/csr.h>
+#include <xen/lib.h>
+
+#ifndef __ASSEMBLY__
+
+#define nop()		__asm__ __volatile__ ("nop")
+
+#define RISCV_FENCE(p, s) \
+	__asm__ __volatile__ ("fence " #p "," #s : : : "memory")
+
+/* These barriers need to enforce ordering on both devices or memory. */
+#define mb()		RISCV_FENCE(iorw,iorw)
+#define rmb()		RISCV_FENCE(ir,ir)
+#define wmb()		RISCV_FENCE(ow,ow)
+
+/* These barriers do not need to enforce ordering on devices, just memory. */
+#define smp_mb()	RISCV_FENCE(rw,rw)
+#define smp_rmb()	RISCV_FENCE(r,r)
+#define smp_wmb()	RISCV_FENCE(w,w)
+
+#define smp_mb__before_atomic()    smp_mb()
+#define smp_mb__after_atomic()     smp_mb()
+
+#define __smp_store_release(p, v)					\
+do {									\
+	compiletime_assert_atomic_type(*p);				\
+	RISCV_FENCE(rw,w);						\
+	WRITE_ONCE(*p, v);						\
+} while (0)
+
+#define __smp_load_acquire(p)						\
+({									\
+	typeof(*p) ___p1 = READ_ONCE(*p);				\
+	compiletime_assert_atomic_type(*p);				\
+	RISCV_FENCE(r,rw);						\
+	___p1;								\
+})
+
+static inline unsigned long local_save_flags(void)
+{
+	return csr_read(CSR_STATUS);
+}
+
+static inline void local_irq_enable(void)
+{
+	csr_set(CSR_STATUS, SR_IE);
+}
+
+static inline void local_irq_disable(void)
+{
+	csr_clear(CSR_STATUS, SR_IE);
+}
+
+#define local_irq_save(x)                                               \
+({                                                                      \
+	x = csr_read_clear(CSR_STATUS, SR_IE);                          \
+})
+
+static inline void local_irq_restore(unsigned long flags)
+{
+	csr_set(CSR_STATUS, flags & SR_IE);
+}
+
+static inline int local_irq_is_enabled(void)
+{
+	unsigned long flags = local_save_flags();
+
+	return !!(flags & SR_IE);
+}
+
+#define arch_fetch_and_add(x, v) __sync_fetch_and_add(x, v)
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* _ASM_RISCV_SYSTEM_H */
diff --git a/xen/include/asm-riscv/time.h b/xen/include/asm-riscv/time.h
new file mode 100644
index 0000000000..af1a8ece45
--- /dev/null
+++ b/xen/include/asm-riscv/time.h
@@ -0,0 +1,31 @@
+/*
+ * Copyright (C) 2012 Regents of the University of California
+ *
+ *   This program is free software; you can redistribute it and/or
+ *   modify it under the terms of the GNU General Public License
+ *   as published by the Free Software Foundation, version 2.
+ *
+ *   This program is distributed in the hope that it will be useful,
+ *   but WITHOUT ANY WARRANTY; without even the implied warranty of
+ *   MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ *   GNU General Public License for more details.
+ */
+
+#ifndef _ASM_RISCV_TIMEX_H
+#define _ASM_RISCV_TIMEX_H
+
+typedef uint64_t cycles_t;
+
+#ifdef CONFIG_64BIT
+static inline cycles_t get_cycles(void)
+{
+	cycles_t n;
+
+	__asm__ __volatile__ (
+		"rdtime %0"
+		: "=r" (n));
+	return n;
+}
+#endif
+
+#endif /* _ASM_RISCV_TIMEX_H */
diff --git a/xen/include/asm-riscv/trace.h b/xen/include/asm-riscv/trace.h
new file mode 100644
index 0000000000..e06def61f6
--- /dev/null
+++ b/xen/include/asm-riscv/trace.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_TRACE_H__
+#define __ASM_TRACE_H__
+
+#endif /* __ASM_TRACE_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/types.h b/xen/include/asm-riscv/types.h
new file mode 100644
index 0000000000..b1c76a59c2
--- /dev/null
+++ b/xen/include/asm-riscv/types.h
@@ -0,0 +1,60 @@
+#ifndef __RISCV_TYPES_H__
+#define __RISCV_TYPES_H__
+
+#ifndef __ASSEMBLY__
+
+typedef __signed__ char __s8;
+typedef unsigned char __u8;
+
+typedef __signed__ short __s16;
+typedef unsigned short __u16;
+
+typedef __signed__ int __s32;
+typedef unsigned int __u32;
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+#if defined(CONFIG_RISCV_64)
+typedef __signed__ long __s64;
+typedef unsigned long __u64;
+#endif
+#endif
+
+typedef signed char s8;
+typedef unsigned char u8;
+
+typedef signed short s16;
+typedef unsigned short u16;
+
+typedef signed int s32;
+typedef unsigned int u32;
+
+#if defined(CONFIG_RISCV_64)
+typedef signed long s64;
+typedef unsigned long u64;
+typedef u64 vaddr_t;
+#define PRIvaddr PRIx64
+typedef u64 paddr_t;
+#define INVALID_PADDR (~0UL)
+#define PRIpaddr "016lx"
+typedef u64 register_t;
+#define PRIregister "lx"
+#endif
+
+#if defined(__SIZE_TYPE__)
+typedef __SIZE_TYPE__ size_t;
+#else
+typedef unsigned long size_t;
+#endif
+typedef signed long ssize_t;
+
+#endif /* __ASSEMBLY__ */
+
+#endif /* __RISCV_TYPES_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/asm-riscv/vm_event.h b/xen/include/asm-riscv/vm_event.h
new file mode 100644
index 0000000000..a0a9ce81d4
--- /dev/null
+++ b/xen/include/asm-riscv/vm_event.h
@@ -0,0 +1,60 @@
+/*
+ * vm_event.h: architecture specific vm_event handling routines
+ *
+ * Copyright (c) 2015 Tamas K Lengyel (tamas@tklengyel.com)
+ *
+ * This program is free software; you can redistribute it and/or modify it
+ * under the terms and conditions of the GNU General Public License,
+ * version 2, as published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope it will be useful, but WITHOUT
+ * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
+ * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
+ * more details.
+ *
+ * You should have received a copy of the GNU General Public License along with
+ * this program; If not, see <http://www.gnu.org/licenses/>.
+ */
+
+#ifndef __ASM_RISCV_VM_EVENT_H__
+#define __ASM_RISCV_VM_EVENT_H__
+
+#include <xen/sched.h>
+#include <xen/vm_event.h>
+#include <public/domctl.h>
+
+static inline int vm_event_init_domain(struct domain *d)
+{
+    return 0;
+}
+
+static inline void vm_event_cleanup_domain(struct domain *d)
+{
+}
+
+static inline void vm_event_toggle_singlestep(struct domain *d, struct vcpu *v,
+                                              vm_event_response_t *rsp)
+{
+}
+
+static inline
+void vm_event_register_write_resume(struct vcpu *v, vm_event_response_t *rsp)
+{
+}
+
+static inline
+void vm_event_emulate_check(struct vcpu *v, vm_event_response_t *rsp)
+{
+}
+
+static inline
+void vm_event_sync_event(struct vcpu *v, bool value)
+{
+}
+
+static inline
+void vm_event_reset_vmtrace(struct vcpu *v)
+{
+}
+
+#endif /* __ASM_RISCV_VM_EVENT_H__ */
diff --git a/xen/include/asm-riscv/xenoprof.h b/xen/include/asm-riscv/xenoprof.h
new file mode 100644
index 0000000000..3db6ce3ab2
--- /dev/null
+++ b/xen/include/asm-riscv/xenoprof.h
@@ -0,0 +1,12 @@
+#ifndef __ASM_XENOPROF_H__
+#define __ASM_XENOPROF_H__
+
+#endif /* __ASM_XENOPROF_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-riscv.h b/xen/include/public/arch-riscv.h
new file mode 100644
index 0000000000..1b31ba7290
--- /dev/null
+++ b/xen/include/public/arch-riscv.h
@@ -0,0 +1,134 @@
+/******************************************************************************
+ * arch-riscv.h
+ *
+ * Guest OS interface to RISC-V Xen.
+ * Initially based on the ARM implementation
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ *
+ * Copyright 2019 (C) Alistair Francis <alistair.francis@wdc.com>
+ */
+
+#ifndef __XEN_PUBLIC_ARCH_RISCV_H__
+#define __XEN_PUBLIC_ARCH_RISCV_H__
+
+#include <xen/types.h>
+
+#define  int64_aligned_t  int64_t __attribute__((aligned(8)))
+#define uint64_aligned_t uint64_t __attribute__((aligned(8)))
+
+#ifndef __ASSEMBLY__
+#define ___DEFINE_XEN_GUEST_HANDLE(name, type)                  \
+    typedef union { type *p; unsigned long q; }                 \
+        __guest_handle_ ## name;                                \
+    typedef union { type *p; uint64_aligned_t q; }              \
+        __guest_handle_64_ ## name
+
+/*
+ * XEN_GUEST_HANDLE represents a guest pointer, when passed as a field
+ * in a struct in memory. On rv64 it is 8 bytes long and 8-byte aligned.
+ *
+ * XEN_GUEST_HANDLE_PARAM represents a guest pointer, when passed as a
+ * hypercall argument. It is 4 bytes on rv32 and 8 bytes on rv64.
+ */
+#define __DEFINE_XEN_GUEST_HANDLE(name, type) \
+    ___DEFINE_XEN_GUEST_HANDLE(name, type);   \
+    ___DEFINE_XEN_GUEST_HANDLE(const_##name, const type)
+#define DEFINE_XEN_GUEST_HANDLE(name)   __DEFINE_XEN_GUEST_HANDLE(name, name)
+#define __XEN_GUEST_HANDLE(name)        __guest_handle_64_ ## name
+#define XEN_GUEST_HANDLE(name)          __XEN_GUEST_HANDLE(name)
+#define XEN_GUEST_HANDLE_PARAM(name)    __guest_handle_ ## name
+#define set_xen_guest_handle_raw(hnd, val)                  \
+    do {                                                    \
+        typeof(&(hnd)) _sxghr_tmp = &(hnd);                 \
+        _sxghr_tmp->q = 0;                                  \
+        _sxghr_tmp->p = val;                                \
+    } while ( 0 )
+#define set_xen_guest_handle(hnd, val) set_xen_guest_handle_raw(hnd, val)
+
+#if defined(__GNUC__) && !defined(__STRICT_ANSI__)
+/* Anonymous union includes both 32- and 64-bit names (e.g., x0). */
+# define __DECL_REG(n64, n32) union {          \
+        uint64_t n64;                          \
+        uint32_t n32;                          \
+    }
+#else
+/* Non-gcc sources must always use the proper 64-bit name (e.g., x0). */
+#define __DECL_REG(n64, n32) uint64_t n64
+#endif
+
+struct vcpu_guest_core_regs {
+};
+typedef struct vcpu_guest_core_regs vcpu_guest_core_regs_t;
+DEFINE_XEN_GUEST_HANDLE(vcpu_guest_core_regs_t);
+
+typedef uint64_t xen_pfn_t;
+#define PRI_xen_pfn PRIx64
+#define PRIu_xen_pfn PRIu64
+
+typedef uint64_t xen_ulong_t;
+#define PRI_xen_ulong PRIx64
+
+#if defined(__XEN__) || defined(__XEN_TOOLS__)
+
+struct vcpu_guest_context {
+};
+typedef struct vcpu_guest_context vcpu_guest_context_t;
+DEFINE_XEN_GUEST_HANDLE(vcpu_guest_context_t);
+
+struct xen_arch_domainconfig {
+};
+
+struct arch_vcpu_info {
+};
+typedef struct arch_vcpu_info arch_vcpu_info_t;
+
+struct arch_shared_info {
+};
+typedef struct arch_shared_info arch_shared_info_t;
+
+typedef uint64_t xen_callback_t;
+
+#endif
+
+/* Maximum number of virtual CPUs in legacy multi-processor guests. */
+/* Only one. All other VCPUs must use VCPUOP_register_vcpu_info. */
+#define XEN_LEGACY_MAX_VCPUS 1
+
+/* Currently supported guest VCPUs */
+#define GUEST_MAX_VCPUS 128
+
+#endif /* __ASSEMBLY__ */
+
+#ifndef __ASSEMBLY__
+/* Stub definition of PMU structure */
+typedef struct xen_pmu_arch { uint8_t dummy; } xen_pmu_arch_t;
+#endif
+
+#endif /*  __XEN_PUBLIC_ARCH_RISCV_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/arch-riscv/hvm/save.h b/xen/include/public/arch-riscv/hvm/save.h
new file mode 100644
index 0000000000..fa010f0315
--- /dev/null
+++ b/xen/include/public/arch-riscv/hvm/save.h
@@ -0,0 +1,39 @@
+/*
+ * Structure definitions for HVM state that is held by Xen and must
+ * be saved along with the domain's memory and device-model state.
+ *
+ * Copyright (c) 2012 Citrix Systems Ltd.
+ *
+ * Permission is hereby granted, free of charge, to any person obtaining a copy
+ * of this software and associated documentation files (the "Software"), to
+ * deal in the Software without restriction, including without limitation the
+ * rights to use, copy, modify, merge, publish, distribute, sublicense, and/or
+ * sell copies of the Software, and to permit persons to whom the Software is
+ * furnished to do so, subject to the following conditions:
+ *
+ * The above copyright notice and this permission notice shall be included in
+ * all copies or substantial portions of the Software.
+ *
+ * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+ * IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+ * FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+ * AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+ * LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING
+ * FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER
+ * DEALINGS IN THE SOFTWARE.
+ */
+
+#ifndef __XEN_PUBLIC_HVM_SAVE_RISCV_H__
+#define __XEN_PUBLIC_HVM_SAVE_RISCV_H__
+
+#endif
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
diff --git a/xen/include/public/hvm/save.h b/xen/include/public/hvm/save.h
index bea5e9f50f..c0b245596a 100644
--- a/xen/include/public/hvm/save.h
+++ b/xen/include/public/hvm/save.h
@@ -106,6 +106,8 @@ DECLARE_HVM_SAVE_TYPE(END, 0, struct hvm_save_end);
 #include "../arch-x86/hvm/save.h"
 #elif defined(__arm__) || defined(__aarch64__)
 #include "../arch-arm/hvm/save.h"
+#elif defined(__riscv)
+#include "../arch-riscv/hvm/save.h"
 #else
 #error "unsupported architecture"
 #endif
diff --git a/xen/include/public/pmu.h b/xen/include/public/pmu.h
index cc2fcf8816..3fb1bcd900 100644
--- a/xen/include/public/pmu.h
+++ b/xen/include/public/pmu.h
@@ -28,6 +28,8 @@
 #include "arch-x86/pmu.h"
 #elif defined (__arm__) || defined (__aarch64__)
 #include "arch-arm.h"
+#elif defined (__riscv)
+#include "arch-riscv.h"
 #else
 #error "Unsupported architecture"
 #endif
diff --git a/xen/include/public/xen.h b/xen/include/public/xen.h
index e373592c33..1d80b64ee0 100644
--- a/xen/include/public/xen.h
+++ b/xen/include/public/xen.h
@@ -33,6 +33,8 @@
 #include "arch-x86/xen.h"
 #elif defined(__arm__) || defined (__aarch64__)
 #include "arch-arm.h"
+#elif defined(__riscv)
+#include "arch-riscv.h"
 #else
 #error "Unsupported architecture"
 #endif
diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
index 1708c36964..fd0b75677c 100644
--- a/xen/include/xen/domain.h
+++ b/xen/include/xen/domain.h
@@ -60,6 +60,7 @@ void arch_vcpu_destroy(struct vcpu *v);
 int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
 void unmap_vcpu_info(struct vcpu *v);
 
+struct xen_domctl_createdomain;
 int arch_domain_create(struct domain *d,
                        struct xen_domctl_createdomain *config);
 
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 04:36:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 04:36:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127212.239025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPY4-0006q0-53; Fri, 14 May 2021 04:35:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127212.239025; Fri, 14 May 2021 04:35:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPY4-0006pt-1e; Fri, 14 May 2021 04:35:40 +0000
Received: by outflank-mailman (input) for mailman id 127212;
 Fri, 14 May 2021 04:35:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=fkNH=KJ=arm.com=henry.wang@srs-us1.protection.inumbo.net>)
 id 1lhPY3-0006pn-4e
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 04:35:39 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.78]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0891d212-8865-499c-8e77-e78a0efc873a;
 Fri, 14 May 2021 04:35:36 +0000 (UTC)
Received: from AM6P191CA0075.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8a::16)
 by AM0PR08MB5233.eurprd08.prod.outlook.com (2603:10a6:208:164::15) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Fri, 14 May
 2021 04:35:34 +0000
Received: from VE1EUR03FT045.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8a:cafe::a0) by AM6P191CA0075.outlook.office365.com
 (2603:10a6:209:8a::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Fri, 14 May 2021 04:35:34 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT045.mail.protection.outlook.com (10.152.19.51) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Fri, 14 May 2021 04:35:33 +0000
Received: ("Tessian outbound 1e34f83e4964:v91");
 Fri, 14 May 2021 04:35:33 +0000
Received: from 973ee7c1a37a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 24F7DF35-1C84-4BB7-9C4B-D504833F5266.1; 
 Fri, 14 May 2021 04:35:22 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 973ee7c1a37a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 14 May 2021 04:35:22 +0000
Received: from PA4PR08MB6253.eurprd08.prod.outlook.com (2603:10a6:102:e4::8)
 by PAXPR08MB6591.eurprd08.prod.outlook.com (2603:10a6:102:150::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Fri, 14 May
 2021 04:35:12 +0000
Received: from PA4PR08MB6253.eurprd08.prod.outlook.com
 ([fe80::19f9:d346:b9af:5cad]) by PA4PR08MB6253.eurprd08.prod.outlook.com
 ([fe80::19f9:d346:b9af:5cad%3]) with mapi id 15.20.4129.028; Fri, 14 May 2021
 04:35:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0891d212-8865-499c-8e77-e78a0efc873a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=orPWnfeKWw+qcO1rL/1QfQTyiifsXPu6f4832g9BrqM=;
 b=fyuXZ56spaJhGbqdAUpgnAnpPpoDbMgiwnI00taIdXqc5FYj7BuY1tVKoMZb2t98pWZhFPRxYYnyfa4QIKpDxFKNkAG+wElGzWzYgjbqwnANMPAV98zhWCIzt9JuWzV+0y8pax/gTfzKswGRlqcf7ZIyBF21kXYG52gFUNkDIf4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>
Subject: RE: Discussion of Xenheap problems on AArch64
Thread-Topic: Discussion of Xenheap problems on AArch64
Thread-Index:
 Adc2dyA8lkZGRqbyRiSglHolanVkwQAFhaqAAACgy/AA4CfqgABHcHyAADhcqlAABznSAAGrycWAALiGZgAAEDKF4ACJdUUAABHcYPA=
Date: Fri, 14 May 2021 04:35:11 +0000
Message-ID:
 <PA4PR08MB6253E95579D8277D7FD1BE9A92509@PA4PR08MB6253.eurprd08.prod.outlook.com>
References:
 <PA4PR08MB6253F49C13ED56811BA5B64E92479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <cdde98ca-4183-c92b-adca-801330992fc5@xen.org>
 <PA4PR08MB62538BBA256E66A0415F0C7192479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <f14aa1d6-35d2-a9a3-0672-7f0d3ae3ec89@xen.org>
 <PA4PR08MB62534C4130B59CAA9A8A8BF792419@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <PA4PR08MB6253FBC7F5E690DB74F2E11F92409@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
 <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba649865-410b-e1be-39a3-c4cac802f464@xen.org>
 <PA4PR08MB6253F85E184CA51BDB99786992539@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba1bc084-5a5b-1410-acba-33bfca7c4f6a@xen.org>
In-Reply-To: <ba1bc084-5a5b-1410-acba-33bfca7c4f6a@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: D833DBD41F6837459A3F57C8C9D0F4DE.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 8e8274a5-dec8-4b25-c728-08d91691b6bc
x-ms-traffictypediagnostic: PAXPR08MB6591:|AM0PR08MB5233:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB52337F0842C6AFE2FB78908D92509@AM0PR08MB5233.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2089;OLM:2089;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6591
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT045.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1e280ff6-78aa-41db-0fdd-08d91691a99a
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 May 2021 04:35:33.9752
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 8e8274a5-dec8-4b25-c728-08d91691b6bc
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT045.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5233

> From: Julien Grall <julien@xen.org>
Hi Julien,

> 
> On 11/05/2021 02:11, Henry Wang wrote:
> > Hi Julien,
> Hi Henry,
> >
> >> From: Julien Grall <julien@xen.org>
> >> Hi Henry,
> >>
> >> On 07/05/2021 05:06, Henry Wang wrote:
> >>>> From: Julien Grall <julien@xen.org>
> >>>> On 28/04/2021 10:28, Henry Wang wrote:
> >> [...]
> >>
> >>> when I continue booting Xen, I got following error log:
> >>>
> >>> (XEN) Xen call trace:
> >>> (XEN)    [<00000000002b5a5c>] alloc_boot_pages+0x94/0x98 (PC)
> >>> (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
> >> (LR)
> >>> (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
> >>> (XEN)    [<00000000002cb988>] start_xen+0x344/0xbcc
> >>> (XEN)    [<00000000002001c0>]
> >> arm64/head.o#primary_switched+0x10/0x30
> >>> (XEN)
> >>> (XEN) ****************************************
> >>> (XEN) Panic on CPU 0:
> >>> (XEN) Xen BUG at page_alloc.c:432
> >>> (XEN) ****************************************
> >>
> >> This is happening without my patch series applied, right? If so, what
> >> happen if you apply it?
> >
> > No, I am afraid this is with your patch series applied, and that is why I
> > am a little bit confused about the error log...
> 
> You are hitting the BUG() at the end of alloc_boot_pages(). This is hit
> because the boot allocator couldn't allocate memory for your request.
> 
> Would you be able to apply the following diff and paste the output here?

Thank you, of course yes, please see below output attached :)

> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index ace6333c18ea..dbb736fdb275 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -329,6 +329,8 @@ void __init init_boot_pages(paddr_t ps, paddr_t pe)
>       if ( pe <= ps )
>           return;
> 
> +    printk("%s: ps %"PRI_paddr" pe %"PRI_paddr"\n", __func__, ps, pe);
                                              ^ FYI: I have to change this PRI_paddr to PRIpaddr
                                              to make compiler happy

> +
>       first_valid_mfn = mfn_min(maddr_to_mfn(ps), first_valid_mfn);
> 
>       bootmem_region_add(ps >> PAGE_SHIFT, pe >> PAGE_SHIFT);
> @@ -395,6 +397,8 @@ mfn_t __init alloc_boot_pages(unsigned long nr_pfns,
> unsigned long pfn_align)
>       unsigned long pg, _e;
>       unsigned int i = nr_bootmem_regions;
> 
> +    printk("%s: nr_pfns %lu pfn_align %lu\n", __func__, nr_pfns,
> pfn_align);
> +
>       BUG_ON(!nr_bootmem_regions);
> 
>       while ( i-- )
> 

I also added some printk to make sure the dtb is parsed correctly, and for the
Error case, I get following log:

(XEN) ----------banks=2--------
(XEN) ----------start=80000000--------
(XEN) ----------size=7F000000--------
(XEN) ----------start=F900000000--------
(XEN) ----------size=80000000--------
(XEN) Checking for initrd in /chosen
(XEN) RAM: 0000000080000000 - 00000000feffffff
(XEN) RAM: 000000f900000000 - 000000f97fffffff
(XEN)
(XEN) MODULE[0]: 0000000084000000 - 00000000841464c8 Xen
(XEN) MODULE[1]: 00000000841464c8 - 0000000084148c9b Device Tree
(XEN) MODULE[2]: 0000000080080000 - 0000000081080000 Kernel
(XEN)  RESVD[0]: 0000000080000000 - 0000000080010000
(XEN)
(XEN) Command line: noreboot dom0_mem=1024M console=dtuart 
dtuart=serial0 bootscrub=0
(XEN) PFN compression on bits 21...22
(XEN) init_boot_pages: ps 0000000080010000 pe 0000000080080000
(XEN) init_boot_pages: ps 0000000081080000 pe 0000000084000000
(XEN) init_boot_pages: ps 0000000084149000 pe 00000000ff000000
(XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
(XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
(XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
(XEN) init_boot_pages: ps 000000f900000000 pe 000000f980000000
(XEN) alloc_boot_pages: nr_pfns 909312 pfn_align 8192
(XEN) Xen BUG at page_alloc.c:436

To compare with the maximum start address (f800000000) of second part mem
where xen boots correctly, I also attached the log for your information:

(XEN) ----------banks=2--------
(XEN) ----------start=80000000--------
(XEN) ----------size=7F000000--------
(XEN) ----------start=F800000000--------
(XEN) ----------size=80000000--------
(XEN) Checking for initrd in /chosen
(XEN) RAM: 0000000080000000 - 00000000feffffff
(XEN) RAM: 000000f800000000 - 000000f87fffffff
(XEN)
(XEN) MODULE[0]: 0000000084000000 - 00000000841464c8 Xen
(XEN) MODULE[1]: 00000000841464c8 - 0000000084148c9b Device Tree
(XEN) MODULE[2]: 0000000080080000 - 0000000081080000 Kernel
(XEN)  RESVD[0]: 0000000080000000 - 0000000080010000
(XEN)
(XEN) Command line: noreboot dom0_mem=1024M console=dtuart
dtuart=serial0 bootscrub=0
(XEN) PFN compression on bits 20...22
(XEN) init_boot_pages: ps 0000000080010000 pe 0000000080080000
(XEN) init_boot_pages: ps 0000000081080000 pe 0000000084000000
(XEN) init_boot_pages: ps 0000000084149000 pe 00000000ff000000
(XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
(XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
(XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
(XEN) init_boot_pages: ps 000000f800000000 pe 000000f880000000
(XEN) alloc_boot_pages: nr_pfns 450560 pfn_align 8192
(XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
(...A lot of (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1...)
(XEN) Domain heap initialised
(XEN) Booting using Device Tree

Hope these can help. Thank you.

Kind regards,

Henry

> Cheers,
> 
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 14 04:43:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 04:43:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127216.239036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPfd-0008Gg-VS; Fri, 14 May 2021 04:43:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127216.239036; Fri, 14 May 2021 04:43:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhPfd-0008GZ-SY; Fri, 14 May 2021 04:43:29 +0000
Received: by outflank-mailman (input) for mailman id 127216;
 Fri, 14 May 2021 04:43:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=aSgG=KJ=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1lhPfd-0008GT-6f
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 04:43:29 +0000
Received: from mail-io1-xd2a.google.com (unknown [2607:f8b0:4864:20::d2a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c2ea9cd7-abc9-42c5-8f2b-0b4a90c61e81;
 Fri, 14 May 2021 04:43:27 +0000 (UTC)
Received: by mail-io1-xd2a.google.com with SMTP id a11so27028556ioo.0
 for <xen-devel@lists.xenproject.org>; Thu, 13 May 2021 21:43:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c2ea9cd7-abc9-42c5-8f2b-0b4a90c61e81
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=/ohOlU24ZyxnvH4YqPLRSXTTK9HNXrxWiOxOEVf5vco=;
        b=kRV18wWlqMMkd9ibq68MSFE+8cg5iQ389YgMy1T+WzKgoVOG4bqeaEc4taOJET2JP5
         WcHwZLJy8AD+VZztwskxuXs6ITojrTRUD5lasdvPAaATExLzdhF+ww8tnfhj7MN1u3Q6
         +80oM6BiA7NkUoS/mwgzdQFCXDG74mxdAOJpThk5bMgzLV2MYH970Kt/T6pzkk6s+TmB
         Vdr1j4BAL1noxx66iLl1zv/F63CrIbKhgjtDKGpg72ctnZJbSX5lj/Ej+yN7/DFOqSoQ
         uuYfShHhD+a7pp4xSOyQnwe1YJuCUQ7WOU5PrCIJr10skZSDeCnNGwWsG/UQoFAJ/E/7
         poTw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=/ohOlU24ZyxnvH4YqPLRSXTTK9HNXrxWiOxOEVf5vco=;
        b=mLJlVNTurdJZCrTQ385XudhHJCLZruPif1dao8YjOhy0sX67Bgc0LImD0jWQbV9IZe
         RKURhWjozq323bkylqU+swDD+qkZwG8efk70oJBO1LDLvLPinXmaiN0Qlu19hHnaK7k1
         PkZMn2ZaqxCS3w7O3fc6OgUtA1rwbbNQADRMbiQ3EaD61veq9uMcH/KP8qgr0MJRydv2
         PvvEvoJLdJObaRxVavIRyUwxmuJFfoECFa96rPQLDZvqvhSYwne8fVNeoqvJtyiDqlzT
         g1e3C3b7dUtl51Cnu1f7Z/kxdtWgq5SUJaZ+yEO20cG5aiGLPoYk7dykc2PLw4OKYX4a
         Ezsw==
X-Gm-Message-State: AOAM530wpVZstR/hR1ZSdAFwjpbTpEH3e/ihP/yO/nDIkbgSn4N1Jgij
	VS9IH2inl9TchLiOaw9gfr157aSDVYYxpsNlHYY=
X-Google-Smtp-Source: ABdhPJzHfcaAEUtxzNxWdACKRyEYaafuqG+OMFivYj7DICVf3Lnqx9QCI6DmIN1DP1ixdxzWbYXbouc/kbnkoBknykg=
X-Received: by 2002:a6b:c857:: with SMTP id y84mr33153688iof.118.1620967407231;
 Thu, 13 May 2021 21:43:27 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1620965208.git.connojdavis@gmail.com>
In-Reply-To: <cover.1620965208.git.connojdavis@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Fri, 14 May 2021 14:43:00 +1000
Message-ID: <CAKmqyKN1+we16d3AkYg9GLXxic-Y=JZKdjqHfE5JRJTaGmaaHw@mail.gmail.com>
Subject: Re: [PATCH v2 0/5] Minimal build for RISCV
To: Connor Davis <connojdavis@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, 
	Tamas K Lengyel <tamas@tklengyel.com>, Alexandru Isaila <aisaila@bitdefender.com>, 
	Petre Pircalabu <ppircalabu@bitdefender.com>, Doug Goldstein <cardoe@cardoe.com>
Content-Type: text/plain; charset="UTF-8"

On Fri, May 14, 2021 at 2:18 PM Connor Davis <connojdavis@gmail.com> wrote:
>
> Hi all,
>
> This series introduces a minimal build for RISCV. It is based on Bobby's
> previous work from last year[0]. I have worked to rebase onto current Xen,
> as well as update the various header files borrowed from Linux.
>
> This series provides the patches necessary to get a minimal build
> working. The build is "minimal" in the sense that 1) it uses a
> minimal config and 2) files, functions, and variables are included if
> and only if they are required for a successful build based on the
> config. It doesn't run at all, as the functions just have stub
> implementations.
>
> My hope is that this can serve as a useful example for future ports as
> well as inform the community of exactly what is imposed by common code
> onto new architectures.
>
> The first 3 patches are mods to non-RISCV bits that enable building a
> config with:
>
>   !CONFIG_HAS_NS16550
>   !CONFIG_HAS_PASSTHROUGH
>   !CONFIG_GRANT_TABLE
>
> respectively. The fourth patch adds the RISCV files, and the last patch
> adds a docker container for doing the build. To build from the docker
> container (after creating it locally), you can run the following:
>
>   $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen
>
> The sources taken from Linux are documented in arch/riscv/README.sources.
> There were also some files copied from arm:
>
>   asm-arm/softirq.h
>   asm-arm/random.h
>   asm-arm/nospec.h
>   asm-arm/numa.h
>   asm-arm/p2m.h
>   asm-arm/delay.h
>   asm-arm/debugger.h
>   asm-arm/desc.h
>   asm-arm/guest_access.h
>   asm-arm/hardirq.h
>   lib/find_next_bit.c
>
> I imagine some of these will want some consolidation, but I put them
> under the respective RISCV directories for now.

Awesome!

Do you have a public branch I could pull these from to try out?

Alistair

>
> [0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/
>
> Thanks,
> Connor
>
> --
> Changes since v1:
>   - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
>     fixed for 4.15
>   - Moved #ifdef-ary around iommu_enabled to iommu.h
>   - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
>     instead of defining an empty struct when !CONFIG_GRANT_TABLE
>
> Connor Davis (5):
>   xen/char: Default HAS_NS16550 to y only for X86 and ARM
>   xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
>   xen: Fix build when !CONFIG_GRANT_TABLE
>   xen: Add files needed for minimal riscv build
>   automation: add container for riscv64 builds
>
>  automation/build/archlinux/riscv64.dockerfile |  33 ++
>  automation/scripts/containerize               |   1 +
>  config/riscv64.mk                             |   7 +
>  xen/Makefile                                  |   8 +-
>  xen/arch/riscv/Kconfig                        |  54 +++
>  xen/arch/riscv/Kconfig.debug                  |   0
>  xen/arch/riscv/Makefile                       |  57 +++
>  xen/arch/riscv/README.source                  |  19 +
>  xen/arch/riscv/Rules.mk                       |  13 +
>  xen/arch/riscv/arch.mk                        |   7 +
>  xen/arch/riscv/configs/riscv64_defconfig      |  12 +
>  xen/arch/riscv/delay.c                        |  16 +
>  xen/arch/riscv/domain.c                       | 144 +++++++
>  xen/arch/riscv/domctl.c                       |  36 ++
>  xen/arch/riscv/guestcopy.c                    |  57 +++
>  xen/arch/riscv/head.S                         |   6 +
>  xen/arch/riscv/irq.c                          |  78 ++++
>  xen/arch/riscv/lib/Makefile                   |   1 +
>  xen/arch/riscv/lib/find_next_bit.c            | 284 +++++++++++++
>  xen/arch/riscv/mm.c                           |  93 +++++
>  xen/arch/riscv/p2m.c                          | 144 +++++++
>  xen/arch/riscv/percpu.c                       |  17 +
>  xen/arch/riscv/platforms/Kconfig              |  31 ++
>  xen/arch/riscv/riscv64/asm-offsets.c          |  31 ++
>  xen/arch/riscv/setup.c                        |  27 ++
>  xen/arch/riscv/shutdown.c                     |  28 ++
>  xen/arch/riscv/smp.c                          |  35 ++
>  xen/arch/riscv/smpboot.c                      |  34 ++
>  xen/arch/riscv/sysctl.c                       |  33 ++
>  xen/arch/riscv/time.c                         |  35 ++
>  xen/arch/riscv/traps.c                        |  35 ++
>  xen/arch/riscv/vm_event.c                     |  39 ++
>  xen/arch/riscv/xen.lds.S                      | 113 ++++++
>  xen/common/memory.c                           |  10 +
>  xen/drivers/char/Kconfig                      |   2 +-
>  xen/include/asm-riscv/altp2m.h                |  39 ++
>  xen/include/asm-riscv/asm.h                   |  77 ++++
>  xen/include/asm-riscv/asm_defns.h             |  24 ++
>  xen/include/asm-riscv/atomic.h                | 204 ++++++++++
>  xen/include/asm-riscv/bitops.h                | 331 +++++++++++++++
>  xen/include/asm-riscv/bug.h                   |  54 +++
>  xen/include/asm-riscv/byteorder.h             |  16 +
>  xen/include/asm-riscv/cache.h                 |  24 ++
>  xen/include/asm-riscv/cmpxchg.h               | 382 ++++++++++++++++++
>  xen/include/asm-riscv/compiler_types.h        |  32 ++
>  xen/include/asm-riscv/config.h                | 110 +++++
>  xen/include/asm-riscv/cpufeature.h            |  17 +
>  xen/include/asm-riscv/csr.h                   | 219 ++++++++++
>  xen/include/asm-riscv/current.h               |  47 +++
>  xen/include/asm-riscv/debugger.h              |  15 +
>  xen/include/asm-riscv/delay.h                 |  17 +
>  xen/include/asm-riscv/desc.h                  |  12 +
>  xen/include/asm-riscv/device.h                |  15 +
>  xen/include/asm-riscv/div64.h                 |  23 ++
>  xen/include/asm-riscv/domain.h                |  50 +++
>  xen/include/asm-riscv/event.h                 |  42 ++
>  xen/include/asm-riscv/fence.h                 |  12 +
>  xen/include/asm-riscv/flushtlb.h              |  34 ++
>  xen/include/asm-riscv/grant_table.h           |  12 +
>  xen/include/asm-riscv/guest_access.h          |  41 ++
>  xen/include/asm-riscv/guest_atomics.h         |  60 +++
>  xen/include/asm-riscv/hardirq.h               |  27 ++
>  xen/include/asm-riscv/hypercall.h             |  12 +
>  xen/include/asm-riscv/init.h                  |  42 ++
>  xen/include/asm-riscv/io.h                    | 283 +++++++++++++
>  xen/include/asm-riscv/iocap.h                 |  13 +
>  xen/include/asm-riscv/iommu.h                 |  46 +++
>  xen/include/asm-riscv/irq.h                   |  58 +++
>  xen/include/asm-riscv/mem_access.h            |   4 +
>  xen/include/asm-riscv/mm.h                    | 246 +++++++++++
>  xen/include/asm-riscv/monitor.h               |  65 +++
>  xen/include/asm-riscv/nospec.h                |  25 ++
>  xen/include/asm-riscv/numa.h                  |  41 ++
>  xen/include/asm-riscv/p2m.h                   | 218 ++++++++++
>  xen/include/asm-riscv/page-bits.h             |  11 +
>  xen/include/asm-riscv/page.h                  |  73 ++++
>  xen/include/asm-riscv/paging.h                |  15 +
>  xen/include/asm-riscv/pci.h                   |  31 ++
>  xen/include/asm-riscv/percpu.h                |  33 ++
>  xen/include/asm-riscv/processor.h             |  59 +++
>  xen/include/asm-riscv/random.h                |   9 +
>  xen/include/asm-riscv/regs.h                  |  23 ++
>  xen/include/asm-riscv/setup.h                 |  14 +
>  xen/include/asm-riscv/smp.h                   |  46 +++
>  xen/include/asm-riscv/softirq.h               |  16 +
>  xen/include/asm-riscv/spinlock.h              |  12 +
>  xen/include/asm-riscv/string.h                |  28 ++
>  xen/include/asm-riscv/sysregs.h               |  16 +
>  xen/include/asm-riscv/system.h                |  99 +++++
>  xen/include/asm-riscv/time.h                  |  31 ++
>  xen/include/asm-riscv/trace.h                 |  12 +
>  xen/include/asm-riscv/types.h                 |  60 +++
>  xen/include/asm-riscv/vm_event.h              |  60 +++
>  xen/include/asm-riscv/xenoprof.h              |  12 +
>  xen/include/public/arch-riscv.h               | 134 ++++++
>  xen/include/public/arch-riscv/hvm/save.h      |  39 ++
>  xen/include/public/hvm/save.h                 |   2 +
>  xen/include/public/pmu.h                      |   2 +
>  xen/include/public/xen.h                      |   2 +
>  xen/include/xen/domain.h                      |   1 +
>  xen/include/xen/grant_table.h                 |   3 +-
>  xen/include/xen/iommu.h                       |   8 +-
>  102 files changed, 5375 insertions(+), 5 deletions(-)
>  create mode 100644 automation/build/archlinux/riscv64.dockerfile
>  create mode 100644 config/riscv64.mk
>  create mode 100644 xen/arch/riscv/Kconfig
>  create mode 100644 xen/arch/riscv/Kconfig.debug
>  create mode 100644 xen/arch/riscv/Makefile
>  create mode 100644 xen/arch/riscv/README.source
>  create mode 100644 xen/arch/riscv/Rules.mk
>  create mode 100644 xen/arch/riscv/arch.mk
>  create mode 100644 xen/arch/riscv/configs/riscv64_defconfig
>  create mode 100644 xen/arch/riscv/delay.c
>  create mode 100644 xen/arch/riscv/domain.c
>  create mode 100644 xen/arch/riscv/domctl.c
>  create mode 100644 xen/arch/riscv/guestcopy.c
>  create mode 100644 xen/arch/riscv/head.S
>  create mode 100644 xen/arch/riscv/irq.c
>  create mode 100644 xen/arch/riscv/lib/Makefile
>  create mode 100644 xen/arch/riscv/lib/find_next_bit.c
>  create mode 100644 xen/arch/riscv/mm.c
>  create mode 100644 xen/arch/riscv/p2m.c
>  create mode 100644 xen/arch/riscv/percpu.c
>  create mode 100644 xen/arch/riscv/platforms/Kconfig
>  create mode 100644 xen/arch/riscv/riscv64/asm-offsets.c
>  create mode 100644 xen/arch/riscv/setup.c
>  create mode 100644 xen/arch/riscv/shutdown.c
>  create mode 100644 xen/arch/riscv/smp.c
>  create mode 100644 xen/arch/riscv/smpboot.c
>  create mode 100644 xen/arch/riscv/sysctl.c
>  create mode 100644 xen/arch/riscv/time.c
>  create mode 100644 xen/arch/riscv/traps.c
>  create mode 100644 xen/arch/riscv/vm_event.c
>  create mode 100644 xen/arch/riscv/xen.lds.S
>  create mode 100644 xen/include/asm-riscv/altp2m.h
>  create mode 100644 xen/include/asm-riscv/asm.h
>  create mode 100644 xen/include/asm-riscv/asm_defns.h
>  create mode 100644 xen/include/asm-riscv/atomic.h
>  create mode 100644 xen/include/asm-riscv/bitops.h
>  create mode 100644 xen/include/asm-riscv/bug.h
>  create mode 100644 xen/include/asm-riscv/byteorder.h
>  create mode 100644 xen/include/asm-riscv/cache.h
>  create mode 100644 xen/include/asm-riscv/cmpxchg.h
>  create mode 100644 xen/include/asm-riscv/compiler_types.h
>  create mode 100644 xen/include/asm-riscv/config.h
>  create mode 100644 xen/include/asm-riscv/cpufeature.h
>  create mode 100644 xen/include/asm-riscv/csr.h
>  create mode 100644 xen/include/asm-riscv/current.h
>  create mode 100644 xen/include/asm-riscv/debugger.h
>  create mode 100644 xen/include/asm-riscv/delay.h
>  create mode 100644 xen/include/asm-riscv/desc.h
>  create mode 100644 xen/include/asm-riscv/device.h
>  create mode 100644 xen/include/asm-riscv/div64.h
>  create mode 100644 xen/include/asm-riscv/domain.h
>  create mode 100644 xen/include/asm-riscv/event.h
>  create mode 100644 xen/include/asm-riscv/fence.h
>  create mode 100644 xen/include/asm-riscv/flushtlb.h
>  create mode 100644 xen/include/asm-riscv/grant_table.h
>  create mode 100644 xen/include/asm-riscv/guest_access.h
>  create mode 100644 xen/include/asm-riscv/guest_atomics.h
>  create mode 100644 xen/include/asm-riscv/hardirq.h
>  create mode 100644 xen/include/asm-riscv/hypercall.h
>  create mode 100644 xen/include/asm-riscv/init.h
>  create mode 100644 xen/include/asm-riscv/io.h
>  create mode 100644 xen/include/asm-riscv/iocap.h
>  create mode 100644 xen/include/asm-riscv/iommu.h
>  create mode 100644 xen/include/asm-riscv/irq.h
>  create mode 100644 xen/include/asm-riscv/mem_access.h
>  create mode 100644 xen/include/asm-riscv/mm.h
>  create mode 100644 xen/include/asm-riscv/monitor.h
>  create mode 100644 xen/include/asm-riscv/nospec.h
>  create mode 100644 xen/include/asm-riscv/numa.h
>  create mode 100644 xen/include/asm-riscv/p2m.h
>  create mode 100644 xen/include/asm-riscv/page-bits.h
>  create mode 100644 xen/include/asm-riscv/page.h
>  create mode 100644 xen/include/asm-riscv/paging.h
>  create mode 100644 xen/include/asm-riscv/pci.h
>  create mode 100644 xen/include/asm-riscv/percpu.h
>  create mode 100644 xen/include/asm-riscv/processor.h
>  create mode 100644 xen/include/asm-riscv/random.h
>  create mode 100644 xen/include/asm-riscv/regs.h
>  create mode 100644 xen/include/asm-riscv/setup.h
>  create mode 100644 xen/include/asm-riscv/smp.h
>  create mode 100644 xen/include/asm-riscv/softirq.h
>  create mode 100644 xen/include/asm-riscv/spinlock.h
>  create mode 100644 xen/include/asm-riscv/string.h
>  create mode 100644 xen/include/asm-riscv/sysregs.h
>  create mode 100644 xen/include/asm-riscv/system.h
>  create mode 100644 xen/include/asm-riscv/time.h
>  create mode 100644 xen/include/asm-riscv/trace.h
>  create mode 100644 xen/include/asm-riscv/types.h
>  create mode 100644 xen/include/asm-riscv/vm_event.h
>  create mode 100644 xen/include/asm-riscv/xenoprof.h
>  create mode 100644 xen/include/public/arch-riscv.h
>  create mode 100644 xen/include/public/arch-riscv/hvm/save.h
>
> --
> 2.31.1
>
>


From xen-devel-bounces@lists.xenproject.org Fri May 14 05:34:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 05:34:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127224.239051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhQSw-0005LX-QB; Fri, 14 May 2021 05:34:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127224.239051; Fri, 14 May 2021 05:34:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhQSw-0005LQ-NI; Fri, 14 May 2021 05:34:26 +0000
Received: by outflank-mailman (input) for mailman id 127224;
 Fri, 14 May 2021 05:34:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=rjOs=KJ=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lhQSv-0005LK-5k
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 05:34:25 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 253116a6-17f3-4e4e-a67c-36d38d4489f2;
 Fri, 14 May 2021 05:34:24 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 14E5Y6Yx074385
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 14 May 2021 01:34:12 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 14E5Y56K074384;
 Thu, 13 May 2021 22:34:05 -0700 (PDT) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 253116a6-17f3-4e4e-a67c-36d38d4489f2
Date: Thu, 13 May 2021 22:34:05 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Connor Davis <connojdavis@gmail.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
        Julien Grall <julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 1/5] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
Message-ID: <YJ4LzUcajOJncKUP@mattapan.m5p.com>
References: <cover.1620965208.git.connojdavis@gmail.com>
 <3960a676376e0163d97ac02f968966cdfaccbf75.1620965208.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <3960a676376e0163d97ac02f968966cdfaccbf75.1620965208.git.connojdavis@gmail.com>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.5
X-Spam-Checker-Version: SpamAssassin 3.4.5 (2021-03-20) on mattapan.m5p.com

On Thu, May 13, 2021 at 10:17:08PM -0600, Connor Davis wrote:
> Defaulting to yes only for X86 and ARM reduces the requirements
> for a minimal build when porting new architectures.
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  xen/drivers/char/Kconfig | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
> 
> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
> index b572305657..b15b0c8d6a 100644
> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -1,6 +1,6 @@
>  config HAS_NS16550
>  	bool "NS16550 UART driver" if ARM
> -	default y
> +	default y if (ARM || X86)
>  	help
>  	  This selects the 16550-series UART support. For most systems, say Y.

Are you sure this is necessary?  I've been working on something else
recently, but did you confirm this with a full build?

Notice the line directly above that one: `_if_ARM_`.  I'm pretty
sure this driver will refuse to show up in a RISC-V build.
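[For context, a minimal sketch of the kconfig-language semantics being
debated here — this is illustrative, not the actual xen/drivers/char/Kconfig;
per the kconfig language rules, a symbol whose prompt condition is false
cannot be changed by the user and silently takes its `default` value:]

```kconfig
config HAS_NS16550
	# The prompt ("NS16550 UART driver") is only *offered* when ARM is
	# set; on other architectures the symbol is invisible in menuconfig.
	bool "NS16550 UART driver" if ARM
	# When no prompt is offered, the symbol takes this default.  With an
	# unconditional "default y" the symbol would still be y on a new
	# architecture; narrowing it to (ARM || X86) makes it n elsewhere.
	default y if (ARM || X86)
	help
	  This selects the 16550-series UART support. For most systems, say Y.
```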


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Fri May 14 06:37:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 06:37:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127232.239072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhRRf-0002pb-Nz; Fri, 14 May 2021 06:37:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127232.239072; Fri, 14 May 2021 06:37:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhRRf-0002pU-K4; Fri, 14 May 2021 06:37:11 +0000
Received: by outflank-mailman (input) for mailman id 127232;
 Fri, 14 May 2021 06:37:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhRRe-0002pK-E9; Fri, 14 May 2021 06:37:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhRRe-00022N-7L; Fri, 14 May 2021 06:37:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhRRd-0000qq-Sq; Fri, 14 May 2021 06:37:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhRRd-00028o-SK; Fri, 14 May 2021 06:37:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tpHj1GtOgkNaTIrb2fWInS3Xl9yp8lUxhc8UZUJ1roc=; b=6WuFKuL/iOFzWTh5J6UO0NQYUe
	fKD+aTEpiHi8cU1596Urq02n7LgxGdU1kjJmBUgGBS+v6tibj6ZJj+YvlDYDUrojAKdG6rLcC0TLh
	wgAqr6ZMPNNLKVitfO2hSszpmSbp1Pmz6gz5p/MCYlJ4mrmI3TrUcQypgKUBoHV0cN/g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161939-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161939: regressions - trouble: broken/fail/pass
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl:<job status>:broken:regression
    xen-unstable:test-armhf-armhf-xl:host-install(5):broken:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
X-Osstest-Versions-That:
    xen=43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 06:37:09 +0000

flight 161939 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161939/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-armhf-armhf-xl             <job status>                 broken
 test-armhf-armhf-xl           5 host-install(5)        broken REGR. vs. 161926
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 161926

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate       fail REGR. vs. 161926

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161926
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161926
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161926
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161926
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161926
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161926
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161926
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161926
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161926
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161926
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail starved in 161926

version targeted for testing:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34
baseline version:
 xen                  43d4cc7d36503bcc3aa2aa6ebea2b7912808f254

Last test of basis   161926  2021-05-13 03:59:53 Z    1 days
Testing same since   161939  2021-05-13 21:07:48 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          broken  
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary

broken-job test-armhf-armhf-xl broken
broken-step test-armhf-armhf-xl host-install(5)

Not pushing.

------------------------------------------------------------
commit cb199cc7de987cfda4659fccf51059f210f6ad34
Author: Andrew Cooper <andrew.cooper3@citrix.com>
Date:   Thu May 13 16:43:27 2021 +0100

    Revert "x86/PV32: avoid TLB flushing after mod_l3_entry()" and "x86/PV: restrict TLB flushing after mod_l[234]_entry()"
    
    These reintroduce XSA-286 / CVE-2018-15469, as confirmed by the xsa-286 XTF
    test run by OSSTest.
    
    The TLB flushing is for Xen's correctness, not the guest's.
    
    The text in c/s bed7e6cad30 is technically correct, from the guest's point of
    view, but clearly false as far as XSA-286 is concerned.  That said, it is
    edcfce55917 which introduced the regression, which demonstrates that the
    reasoning is flawed.
    
    This reverts commit bed7e6cad30ec8db0c9ce9a1676856e9dc4c39da.
    This reverts commit edcfce55917bb412f986d7b28358f6ef155b3664.
    
    Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Fri May 14 06:46:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 06:46:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127237.239086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhRaK-0004Gv-NY; Fri, 14 May 2021 06:46:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127237.239086; Fri, 14 May 2021 06:46:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhRaK-0004Go-Ja; Fri, 14 May 2021 06:46:08 +0000
Received: by outflank-mailman (input) for mailman id 127237;
 Fri, 14 May 2021 06:46:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=P2xF=KJ=linuxfoundation.org=gregkh@srs-us1.protection.inumbo.net>)
 id 1lhRaJ-0004GP-G4
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 06:46:07 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id df222eb5-2d68-4ab5-946f-28489d10fe21;
 Fri, 14 May 2021 06:46:06 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 3EFC061447;
 Fri, 14 May 2021 06:46:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: df222eb5-2d68-4ab5-946f-28489d10fe21
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=linuxfoundation.org;
	s=korg; t=1620974766;
	bh=LNWmhn0/F3I4Twb0AdNvtSFD/SzwC6jAdWvk+gwCZBQ=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=uzzpqMGgIXuJr7hx6CQqXRdBPKKVsMTrvIIkpwFFbwVDTtYHqNCe/yPEPrVV8UEAL
	 3hNZMO4oMAtJ2oG/vj2XYjiACDAP4egVyT0+ig0Nx2RKPsY1vw1irU7hw1fQYufOyV
	 4Jy1cf/CC5HV8+waMQMi7t8pk6sO5xt1+SSx5aNM=
Date: Fri, 14 May 2021 08:46:02 +0200
From: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
To: Connor Davis <connojdavis@gmail.com>
Cc: linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org,
	xen-devel@lists.xenproject.org, Lee Jones <lee.jones@linaro.org>,
	Jann Horn <jannh@google.com>,
	Chunfeng Yun <chunfeng.yun@mediatek.com>,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Mathias Nyman <mathias.nyman@intel.com>,
	Douglas Anderson <dianders@chromium.org>,
	"Eric W. Biederman" <ebiederm@xmission.com>,
	Petr Mladek <pmladek@suse.com>, Sumit Garg <sumit.garg@linaro.org>
Subject: Re: [PATCH v2 0/4] Support xen-driven USB3 debug capability
Message-ID: <YJ4cqntf7YdZCOPk@kroah.com>
References: <cover.1620950220.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <cover.1620950220.git.connojdavis@gmail.com>

On Thu, May 13, 2021 at 06:56:47PM -0600, Connor Davis wrote:
> Hi all,
> 
> The goal of this series is to allow the USB3 debug capability (DbC) to be
> safely used by xen while linux runs as dom0.

Patch 2/4 does not seem to be showing up anywhere, did it get lost?

thanks,

greg k-h


From xen-devel-bounces@lists.xenproject.org Fri May 14 08:26:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 08:26:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127248.239108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhT93-0005Zf-Ew; Fri, 14 May 2021 08:26:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127248.239108; Fri, 14 May 2021 08:26:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhT93-0005ZY-Bu; Fri, 14 May 2021 08:26:05 +0000
Received: by outflank-mailman (input) for mailman id 127248;
 Fri, 14 May 2021 08:26:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhT91-0005ZO-SP; Fri, 14 May 2021 08:26:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhT91-0004NI-2w; Fri, 14 May 2021 08:26:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhT90-00066u-M1; Fri, 14 May 2021 08:26:02 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhT90-0007gI-LV; Fri, 14 May 2021 08:26:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Pt+5xqEEcWJnMfUOfVbhTNaYyUXkAMs/wqTMR4VCHjY=; b=vQKFsvgVeuZbMCgV2YXmjthU3p
	tLuWrt0FKf0Wzo+wAg4J7+Wp25szAwHAC9F7fpw5nkk4I/KY5CnIsIPyiiltsyBj8D/HoSZ9gFlTq
	G8uaQVJ2VVopptfrWVtO9dMoZ1gilS3pWCdmnR70h8gECw0rxakJGLSXHZiRlUghidIg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161944-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161944: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=8f390ae310021a2e392d089ab7d4ac0a250551c7
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 08:26:02 +0000

flight 161944 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161944/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              8f390ae310021a2e392d089ab7d4ac0a250551c7
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  308 days
Failing since        151818  2020-07-11 04:18:52 Z  307 days  300 attempts
Testing same since   161944  2021-05-14 04:18:58 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57546 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 14 08:32:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 08:32:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127256.239121 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhTEz-00071N-Ax; Fri, 14 May 2021 08:32:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127256.239121; Fri, 14 May 2021 08:32:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhTEz-00071G-7z; Fri, 14 May 2021 08:32:13 +0000
Received: by outflank-mailman (input) for mailman id 127256;
 Fri, 14 May 2021 08:32:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhTEy-00071A-H4
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 08:32:12 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhTEy-0004Uj-3l; Fri, 14 May 2021 08:32:12 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhTEx-0004tW-UT; Fri, 14 May 2021 08:32:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=0sbUH7G16PopEH/rjYcHFkkcZyBjinXGx914CCtJe/I=; b=FEJe7Wk0st9R/2eEYMXvTOElfs
	27DZs4Cm/qiMODrLEZ5hjoqcS38a3sMN5CenFLoQBoeMhw436FGyqjv9wT7r39FdL/wi0DasPN0ol
	UFWuyr8QGh0XgFf2iWIDv6CdVaMz5uGtqw3o4qBQ1iynj9eqyqO3RTnUuMJoInqX0KnM=;
Subject: Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
To: Elliott Mitchell <ehem+undef@m5p.com>
Cc: xen-devel@lists.xenproject.org, Roger Pau Monn?? <royger@freebsd.org>,
 Mitchell Horne <mhorne@freebsd.org>
References: <YIhSbkfShjN/gMCe@Air-de-Roger>
 <YIndyh0sRqcmcMim@mattapan.m5p.com> <YIptpndhk6MOJFod@Air-de-Roger>
 <YItwHirnih6iUtRS@mattapan.m5p.com> <YIu80FNQHKS3+jVN@Air-de-Roger>
 <YJDcDjjgCsQUdsZ7@mattapan.m5p.com> <YJURGaqAVBSYnMRf@Air-de-Roger>
 <YJYem5CW/97k/e5A@mattapan.m5p.com> <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org>
 <YJ3jlGSxs60Io+dp@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org>
Date: Fri, 14 May 2021 09:32:10 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJ3jlGSxs60Io+dp@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Elliott,

On 14/05/2021 03:42, Elliott Mitchell wrote:
> Upon thinking about it, this seems appropriate to bring to the attention
> of the Xen development list since it seems to have wider implications.
> 
> 
> On Wed, May 12, 2021 at 11:08:39AM +0100, Julien Grall wrote:
>> On 12/05/2021 03:37, Elliott Mitchell wrote:
>>>
>>> What about the approach to the grant-table/xenpv memory situation?
>>>
>>> As stated for a 768MB VM, Xen suggested a 16MB range.  I'm unsure whether
>>> that is strictly meant for grant-table use or is meant for any foreign
>>> memory mappings (Julien?).
>>
>> An OS is free to use it as it wants. However, there is no promise that:
>>     1) The region will not shrink
>>     2) The region will stay where it is
> 
> Issue is what is the intended use of the memory range allocated to
> /hypervisor in the device-tree on ARM?  What do the Xen developers plan
> for?  What is expected?

 From docs/misc/arm/device-tree/guest.txt:

"
- reg: specifies the base physical address and size of a region in
   memory where the grant table should be mapped to, using an
   HYPERVISOR_memory_op hypercall. The memory region is large enough to map
   the whole grant table (it is larger than or equal to gnttab_max_grant_frames()).
   This property is unnecessary when booting Dom0 using ACPI.
"

Effectively, this is a region of guest-physical address space that is known 
to be unallocated. Not all guests will use it if they have a better way to 
find unallocated space.
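For illustration, the region is advertised to the guest as a device-tree node 
along these lines (the node name, compatible strings, reg layout and interrupt 
specifier follow guest.txt; the base address and size below are made-up example 
values, since Xen chooses them per guest):

```dts
hypervisor {
	compatible = "xen,xen-4.15", "xen,xen";
	/* base physical address and size of the unallocated region
	 * where the grant table can be mapped (example values only) */
	reg = <0x0 0x38000000 0x0 0x1000000>;
	/* event-channel upcall interrupt */
	interrupts = <1 15 0xf08>;
};
```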

> 
> 
> With FreeBSD, Julien Grall's attempt 5 years ago at getting Xen/ARM
> support treated the grant table as distinct from other foreign memory
> mappings.  Yet for the current code (which is oriented towards x86) it is
> rather easier to treat all foreign mappings the same.
> 
> Limiting foreign mappings to a total of 16MB for a 768MB domain is tight.

It is not clear to me whether you are referring to the frontend or the 
backend domain.

However, there is no relation between the size of a domain and how many 
foreign pages it will map. You can have a tiny backend (say, 128MB of 
RAM) that will handle a large domain (e.g. 2GB).

Instead, it depends on the maximum number of pages that will be mapped 
at a given point. If you are running a device emulator, then it is more 
convenient to try to keep as many foreign pages as possible mapped.

PV backends (e.g. block, net) tend to use grant mappings. Most of 
the time these are ephemeral (they last for the duration of the 
request), but in some cases they will be kept mapped for longer (for 
instance, the block backend may support persistent grants).

> Was the /hypervisor range intended *strictly* for mapping grant-tables?

It was introduced to tell the OS a place where the grant-table could be 
conveniently mapped.

> Was it intended for the /hypervisor range to dynamically scale with the
> size of the domain?

As per above, this doesn't depend on the size of the domain. Instead, 
it depends on what sort of backend will be present in the domain.

> Was it intended for /hypervisor to grow over the
> years as hardware got cheaper?

I don't understand this question.

> Might it be better to deprecate the /hypervisor range and have domains
> allocate any available address space for foreign mappings?

It may be easy for FreeBSD to find available address space but so far 
this has not been the case in Linux (I haven't checked the latest 
version though).

To be clear, an OS is free to not use the range provided in /hypervisor 
(maybe this is not clear enough in the spec?). This was mostly 
introduced to overcome some issues we saw in Linux when Xen on Arm was 
introduced.

> 
> Should the FreeBSD implementation be treating grant tables as distinct
> from other foreign mappings?

Both require unallocated address space to work. IIRC FreeBSD is able to 
find unallocated space easily, so I would recommend using it.

> (is treating them the same likely to
> induce buggy behavior on x86?)

I will leave this answer to Roger.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 14 08:39:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 08:39:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127260.239133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhTLj-0007k7-2C; Fri, 14 May 2021 08:39:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127260.239133; Fri, 14 May 2021 08:39:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhTLi-0007k0-VM; Fri, 14 May 2021 08:39:10 +0000
Received: by outflank-mailman (input) for mailman id 127260;
 Fri, 14 May 2021 08:39:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhTLh-0007ju-MV
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 08:39:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2b8b0916-4a3a-44fb-a9e7-cbcfd3e61c7a;
 Fri, 14 May 2021 08:39:08 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 578DCB16C;
 Fri, 14 May 2021 08:39:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b8b0916-4a3a-44fb-a9e7-cbcfd3e61c7a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620981547; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=ob63ffXUj7jX5HQcQ9WrfD7PQ0n7r+vG/seNM0m+Z/w=;
	b=r2GdA0+j5+7TI8JKzqA13YEoIpnc3Sy+TMU0nWwZYVDm0ykPyGad4Vhrvxu8fxrIa8mNGR
	GfGvA6pGa3iJK90fQ11aT7yDfaIAWbZipmvBmOE981+SVYInIx66Y/8ZFhy+IH3FeFpT1F
	kxWS0c4NTSUiSSbqto96yIk1IFOOkNw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/xenstore: simplify xenstored main loop
Date: Fri, 14 May 2021 10:39:05 +0200
Message-Id: <20210514083905.18212-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The main loop of xenstored is rather complicated due to different
handling of socket and ring-page interfaces. Unify that handling by
introducing interface-type-specific functions can_read() and
can_write().

Put the interface-type-specific functions in their own structure and let
struct connection contain only a pointer to that new function vector.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   | 117 ++++++++++++++----------------
 tools/xenstore/xenstored_core.h   |  19 ++---
 tools/xenstore/xenstored_domain.c |  11 ++-
 3 files changed, 73 insertions(+), 74 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 4b7b71cfb3..b66d119a98 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -226,8 +226,8 @@ static bool write_messages(struct connection *conn)
 				sockmsg_string(out->hdr.msg.type),
 				out->hdr.msg.len,
 				out->buffer, conn);
-		ret = conn->write(conn, out->hdr.raw + out->used,
-				  sizeof(out->hdr) - out->used);
+		ret = conn->funcs->write(conn, out->hdr.raw + out->used,
+					 sizeof(out->hdr) - out->used);
 		if (ret < 0)
 			return false;
 
@@ -243,8 +243,8 @@ static bool write_messages(struct connection *conn)
 			return true;
 	}
 
-	ret = conn->write(conn, out->buffer + out->used,
-			  out->hdr.msg.len - out->used);
+	ret = conn->funcs->write(conn, out->buffer + out->used,
+				 out->hdr.msg.len - out->used);
 	if (ret < 0)
 		return false;
 
@@ -1531,8 +1531,8 @@ static void handle_input(struct connection *conn)
 	/* Not finished header yet? */
 	if (in->inhdr) {
 		if (in->used != sizeof(in->hdr)) {
-			bytes = conn->read(conn, in->hdr.raw + in->used,
-					   sizeof(in->hdr) - in->used);
+			bytes = conn->funcs->read(conn, in->hdr.raw + in->used,
+						  sizeof(in->hdr) - in->used);
 			if (bytes < 0)
 				goto bad_client;
 			in->used += bytes;
@@ -1557,8 +1557,8 @@ static void handle_input(struct connection *conn)
 		in->inhdr = false;
 	}
 
-	bytes = conn->read(conn, in->buffer + in->used,
-			   in->hdr.msg.len - in->used);
+	bytes = conn->funcs->read(conn, in->buffer + in->used,
+				  in->hdr.msg.len - in->used);
 	if (bytes < 0)
 		goto bad_client;
 
@@ -1581,7 +1581,7 @@ static void handle_output(struct connection *conn)
 		ignore_connection(conn);
 }
 
-struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
+struct connection *new_connection(struct interface_funcs *funcs)
 {
 	struct connection *new;
 
@@ -1591,8 +1591,7 @@ struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
 
 	new->fd = -1;
 	new->pollfd_idx = -1;
-	new->write = write;
-	new->read = read;
+	new->funcs = funcs;
 	new->is_ignored = false;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
@@ -1622,17 +1621,7 @@ static void accept_connection(int sock)
 {
 }
 
-int writefd(struct connection *conn, const void *data, unsigned int len)
-{
-	errno = EBADF;
-	return -1;
-}
-
-int readfd(struct connection *conn, void *data, unsigned int len)
-{
-	errno = EBADF;
-	return -1;
-}
+struct interface_funcs socket_funcs;
 #else
 int writefd(struct connection *conn, const void *data, unsigned int len)
 {
@@ -1672,6 +1661,29 @@ int readfd(struct connection *conn, void *data, unsigned int len)
 	return rc;
 }
 
+static bool socket_can_process(struct connection *conn, int mask)
+{
+	if (conn->pollfd_idx == -1)
+		return false;
+
+	if (fds[conn->pollfd_idx].revents & ~(POLLIN | POLLOUT)) {
+		talloc_free(conn);
+		return false;
+	}
+
+	return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
+}
+
+static bool socket_can_write(struct connection *conn)
+{
+	return socket_can_process(conn, POLLOUT);
+}
+
+static bool socket_can_read(struct connection *conn)
+{
+	return socket_can_process(conn, POLLIN);
+}
+
 static void accept_connection(int sock)
 {
 	int fd;
@@ -1681,12 +1693,19 @@ static void accept_connection(int sock)
 	if (fd < 0)
 		return;
 
-	conn = new_connection(writefd, readfd);
+	conn = new_connection(&socket_funcs);
 	if (conn)
 		conn->fd = fd;
 	else
 		close(fd);
 }
+
+struct interface_funcs socket_funcs = {
+	.write = writefd,
+	.read = readfd,
+	.can_write = socket_can_write,
+	.can_read = socket_can_read,
+};
 #endif
 
 static int tdb_flags;
@@ -2304,47 +2323,19 @@ int main(int argc, char *argv[])
 			if (&next->list != &connections)
 				talloc_increase_ref_count(next);
 
-			if (conn->domain) {
-				if (domain_can_read(conn))
-					handle_input(conn);
-				if (talloc_free(conn) == 0)
-					continue;
-
-				talloc_increase_ref_count(conn);
-				if (domain_can_write(conn) &&
-				    !list_empty(&conn->out_list))
-					handle_output(conn);
-				if (talloc_free(conn) == 0)
-					continue;
-			} else {
-				if (conn->pollfd_idx != -1) {
-					if (fds[conn->pollfd_idx].revents
-					    & ~(POLLIN|POLLOUT))
-						talloc_free(conn);
-					else if ((fds[conn->pollfd_idx].revents
-						  & POLLIN) &&
-						 !conn->is_ignored)
-						handle_input(conn);
-				}
-				if (talloc_free(conn) == 0)
-					continue;
-
-				talloc_increase_ref_count(conn);
-
-				if (conn->pollfd_idx != -1) {
-					if (fds[conn->pollfd_idx].revents
-					    & ~(POLLIN|POLLOUT))
-						talloc_free(conn);
-					else if ((fds[conn->pollfd_idx].revents
-						  & POLLOUT) &&
-						 !conn->is_ignored)
-						handle_output(conn);
-				}
-				if (talloc_free(conn) == 0)
-					continue;
+			if (conn->funcs->can_read(conn))
+				handle_input(conn);
+			if (talloc_free(conn) == 0)
+				continue;
 
-				conn->pollfd_idx = -1;
-			}
+			talloc_increase_ref_count(conn);
+
+			if (conn->funcs->can_write(conn))
+				handle_output(conn);
+			if (talloc_free(conn) == 0)
+				continue;
+
+			conn->pollfd_idx = -1;
 		}
 
 		if (delayed_requests) {
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 6a6d0448e8..1467270476 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -86,8 +86,13 @@ struct delayed_request {
 };
 
 struct connection;
-typedef int connwritefn_t(struct connection *, const void *, unsigned int);
-typedef int connreadfn_t(struct connection *, void *, unsigned int);
+
+struct interface_funcs {
+	int (*write)(struct connection *, const void *, unsigned int);
+	int (*read)(struct connection *, void *, unsigned int);
+	bool (*can_write)(struct connection *);
+	bool (*can_read)(struct connection *);
+};
 
 struct connection
 {
@@ -131,9 +136,8 @@ struct connection
 	/* My watches. */
 	struct list_head watches;
 
-	/* Methods for communicating over this connection: write can be NULL */
-	connwritefn_t *write;
-	connreadfn_t *read;
+	/* Methods for communicating over this connection. */
+	struct interface_funcs *funcs;
 
 	/* Support for live update: connection id. */
 	unsigned int conn_id;
@@ -196,7 +200,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 struct node *read_node(struct connection *conn, const void *ctx,
 		       const char *name);
 
-struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
+struct connection *new_connection(struct interface_funcs *funcs);
 struct connection *get_connection_by_id(unsigned int conn_id);
 void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
@@ -254,9 +258,6 @@ void finish_daemonize(void);
 /* Open a pipe for signal handling */
 void init_pipe(int reopen_log_pipe[2]);
 
-int writefd(struct connection *conn, const void *data, unsigned int len);
-int readfd(struct connection *conn, void *data, unsigned int len);
-
 extern struct interface_funcs socket_funcs;
 extern xengnttab_handle **xgt_handle;
 
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 0c17937c0f..6e0fa6e861 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -172,6 +172,13 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
 	return len;
 }
 
+static struct interface_funcs domain_funcs = {
+	.write = writechn,
+	.read = readchn,
+	.can_write = domain_can_write,
+	.can_read = domain_can_read,
+};
+
 static void *map_interface(domid_t domid)
 {
 	return xengnttab_map_grant_ref(*xgt_handle, domid,
@@ -389,7 +396,7 @@ static int new_domain(struct domain *domain, int port, bool restore)
 
 	domain->introduced = true;
 
-	domain->conn = new_connection(writechn, readchn);
+	domain->conn = new_connection(&domain_funcs);
 	if (!domain->conn)  {
 		errno = ENOMEM;
 		return errno;
@@ -1288,7 +1295,7 @@ void read_state_connection(const void *ctx, const void *state)
 	struct domain *domain, *tdomain;
 
 	if (sc->conn_type == XS_STATE_CONN_TYPE_SOCKET) {
-		conn = new_connection(writefd, readfd);
+		conn = new_connection(&socket_funcs);
 		if (!conn)
 			barf("error restoring connection");
 		conn->fd = sc->spec.socket_fd;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 14 08:41:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 08:41:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127263.239144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhTO6-0000cy-Gt; Fri, 14 May 2021 08:41:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127263.239144; Fri, 14 May 2021 08:41:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhTO6-0000cr-CZ; Fri, 14 May 2021 08:41:38 +0000
Received: by outflank-mailman (input) for mailman id 127263;
 Fri, 14 May 2021 08:41:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhTO4-0000cg-SX
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 08:41:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04b490e1-07ca-42fe-b757-3b75b7893ed9;
 Fri, 14 May 2021 08:41:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5AC31AF35;
 Fri, 14 May 2021 08:41:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04b490e1-07ca-42fe-b757-3b75b7893ed9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620981695; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=i5kvPLZ5qhDTjMIfcwzywnthE6OAXMQX2NmMcx+mJdk=;
	b=bxgvp8Fe1e42kLgWOViOxSzxxNi00O23Vtt0Y9O1rvoUbVK3dybwrRLOlL6WU0VwDxdBDU
	63iq67Tdkn4zIk+gX//v66KiE614Oprk4t6yf1tzU0biD6TphhpCOQQTWlt+Jr7VDWor6V
	PCm4FSgNJ7giz+nqabxtCAblxOkuC4Y=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/xenstore: claim resources when running as daemon
Date: Fri, 14 May 2021 10:41:33 +0200
Message-Id: <20210514084133.18658-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Xenstored is mandatory for a Xen host and can't be restarted, so it being
killed by the OOM killer in case of memory shortage must be avoided.

Set /proc/$pid/oom_score_adj (if available) to -500 in order to allow
xenstored to use large amounts of memory without being killed.

In order to support large numbers of domains, the limit for open file
descriptors might need to be raised. Each domain needs two file
descriptors (one for the event channel and one for the xl per-domain
daemon to connect to xenstored).

Try to raise the ulimit for open files to 65536: first the hard limit, if
needed, and then the soft limit.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 tools/xenstore/xenstored_core.c   |  2 ++
 tools/xenstore/xenstored_core.h   |  3 ++
 tools/xenstore/xenstored_minios.c |  4 +++
 tools/xenstore/xenstored_posix.c  | 46 +++++++++++++++++++++++++++++++
 4 files changed, 55 insertions(+)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index b66d119a98..964e693450 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -2243,6 +2243,8 @@ int main(int argc, char *argv[])
 		xprintf = trace;
 #endif
 
+	claim_resources();
+
 	signal(SIGHUP, trigger_reopen_log);
 	if (tracefile)
 		tracefile = talloc_strdup(NULL, tracefile);
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 1467270476..ac26973648 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -255,6 +255,9 @@ void daemonize(void);
 /* Close stdin/stdout/stderr to complete daemonize */
 void finish_daemonize(void);
 
+/* Set OOM-killer score and raise ulimit. */
+void claim_resources(void);
+
 /* Open a pipe for signal handling */
 void init_pipe(int reopen_log_pipe[2]);
 
diff --git a/tools/xenstore/xenstored_minios.c b/tools/xenstore/xenstored_minios.c
index c94493e52a..df8ff580b0 100644
--- a/tools/xenstore/xenstored_minios.c
+++ b/tools/xenstore/xenstored_minios.c
@@ -32,6 +32,10 @@ void finish_daemonize(void)
 {
 }
 
+void claim_resources(void)
+{
+}
+
 void init_pipe(int reopen_log_pipe[2])
 {
 	reopen_log_pipe[0] = -1;
diff --git a/tools/xenstore/xenstored_posix.c b/tools/xenstore/xenstored_posix.c
index 48c37ffe3e..0074fbd8b2 100644
--- a/tools/xenstore/xenstored_posix.c
+++ b/tools/xenstore/xenstored_posix.c
@@ -22,6 +22,7 @@
 #include <fcntl.h>
 #include <stdlib.h>
 #include <sys/mman.h>
+#include <sys/resource.h>
 
 #include "utils.h"
 #include "xenstored_core.h"
@@ -87,6 +88,51 @@ void finish_daemonize(void)
 	close(devnull);
 }
 
+static void avoid_oom_killer(void)
+{
+	char path[32];
+	char val[] = "-500";
+	int fd;
+
+	snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", (int)getpid());
+
+	fd = open(path, O_WRONLY);
+	/* Do nothing if file doesn't exist. */
+	if (fd < 0)
+		return;
+	/* Ignore errors. */
+	write(fd, val, sizeof(val));
+	close(fd);
+}
+
+/* Max. 32752 domains with 2 open files per domain, plus some spare. */
+#define MAX_FILES 65536
+static void raise_ulimit(void)
+{
+	struct rlimit rlim;
+
+	if (getrlimit(RLIMIT_NOFILE, &rlim))
+		return;
+	if (rlim.rlim_max != RLIM_INFINITY && rlim.rlim_max < MAX_FILES)
+	{
+		rlim.rlim_max = MAX_FILES;
+		setrlimit(RLIMIT_NOFILE, &rlim);
+	}
+	if (getrlimit(RLIMIT_NOFILE, &rlim))
+		return;
+	if (rlim.rlim_max == RLIM_INFINITY || rlim.rlim_max > MAX_FILES)
+		rlim.rlim_cur = MAX_FILES;
+	else
+		rlim.rlim_cur = rlim.rlim_max;
+	setrlimit(RLIMIT_NOFILE, &rlim);
+}
+
+void claim_resources(void)
+{
+	avoid_oom_killer();
+	raise_ulimit();
+}
+
 void init_pipe(int reopen_log_pipe[2])
 {
 	int flags;
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 14 09:01:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 09:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127268.239155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhThC-0002xU-4O; Fri, 14 May 2021 09:01:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127268.239155; Fri, 14 May 2021 09:01:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhThC-0002xN-12; Fri, 14 May 2021 09:01:22 +0000
Received: by outflank-mailman (input) for mailman id 127268;
 Fri, 14 May 2021 09:01:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhThA-0002xG-3k
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 09:01:20 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e0ebfca-82ed-48a5-8b78-e9a354de4e95;
 Fri, 14 May 2021 09:01:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5A6D6ACAD;
 Fri, 14 May 2021 09:01:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e0ebfca-82ed-48a5-8b78-e9a354de4e95
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620982878; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=2GgQF7bJgYZPUZXWhATSQC6d8JF/XChiJEgYX2JYtPc=;
	b=PWvYctyvW9CjDi0iFLx6fSJsMPFb5TM+m9U4JUW4cSjdCFkqOjk0aF7xOqoMDtegeiP/KP
	8Z9UEyhRpmAS3Fv1ALAOywlbLrg3JZqQD7AONr4FLga6XzCegeEfElh/zczSp4hWKtzAv9
	s1je5/OUhQFgcPPgUJnizKxikleWID8=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/xenstore: cleanup Makefile and gitignore
Date: Fri, 14 May 2021 11:01:16 +0200
Message-Id: <20210514090116.21002-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The xenstore Makefile and, related to it, the global .gitignore file
contain some leftovers from ancient times. Remove those.

While at it, sort the tools/xenstore/* entries in .gitignore.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
 .gitignore              | 7 +++----
 tools/xenstore/Makefile | 2 +-
 2 files changed, 4 insertions(+), 5 deletions(-)

diff --git a/.gitignore b/.gitignore
index 1c2fa1530b..4aad2ddd65 100644
--- a/.gitignore
+++ b/.gitignore
@@ -288,15 +288,15 @@ tools/xenpaging/xenpaging
 tools/xenpmd/xenpmd
 tools/xenstore/xenstore
 tools/xenstore/xenstore-chmod
+tools/xenstore/xenstore-control
 tools/xenstore/xenstore-exists
 tools/xenstore/xenstore-list
+tools/xenstore/xenstore-ls
 tools/xenstore/xenstore-read
 tools/xenstore/xenstore-rm
+tools/xenstore/xenstore-watch
 tools/xenstore/xenstore-write
-tools/xenstore/xenstore-control
-tools/xenstore/xenstore-ls
 tools/xenstore/xenstored
-tools/xenstore/xenstored_test
 tools/xenstore/xs_tdb_dump
 tools/xentop/xentop
 tools/xentrace/xentrace_setsize
@@ -428,7 +428,6 @@ tools/firmware/etherboot/ipxe.tar.gz
 tools/firmware/etherboot/ipxe/
 tools/python/xen/lowlevel/xl/_pyxl_types.c
 tools/python/xen/lowlevel/xl/_pyxl_types.h
-tools/xenstore/xenstore-watch
 tools/xl/_paths.h
 tools/xl/xl
 
diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
index 01c9ccc70f..292b478fa1 100644
--- a/tools/xenstore/Makefile
+++ b/tools/xenstore/Makefile
@@ -90,7 +90,7 @@ xs_tdb_dump: xs_tdb_dump.o utils.o tdb.o talloc.o
 .PHONY: clean
 clean:
 	rm -f *.a *.o xenstored_probes.h
-	rm -f xenstored xs_random xs_stress xs_crashme
+	rm -f xenstored
 	rm -f xs_tdb_dump xenstore-control init-xenstore-domain
 	rm -f xenstore $(CLIENTS)
 	$(RM) $(DEPS_RM)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 14 09:35:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 09:35:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127272.239166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhUEI-0006AB-M3; Fri, 14 May 2021 09:35:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127272.239166; Fri, 14 May 2021 09:35:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhUEI-0006A4-Hk; Fri, 14 May 2021 09:35:34 +0000
Received: by outflank-mailman (input) for mailman id 127272;
 Fri, 14 May 2021 09:35:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhUEH-00069y-Oe
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 09:35:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhUEG-0005Yt-AM; Fri, 14 May 2021 09:35:32 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhUEG-0002GH-4S; Fri, 14 May 2021 09:35:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=VPEjS/8W/3sbbWYkEBEk4qGvapMDFBEIrQK2ZWVg0gM=; b=4em98B6RHaM9VrX+cSnPnmD+87
	VxsG3BftRIype7zJsu+mZc4XTraHilXGOcwZReT8wipAFJUC86dKfth7c5OM4NYwKSreqfsn9eo72
	cWMRq5Yus8otHGX8YmwD8/eDgT3vIQ7Z9Re9MnYtd48IzHIryoVwTtgyILU3hN/50I94=;
Subject: Re: [PATCH] tools/xenstore: simplify xenstored main loop
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514083905.18212-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <304944cf-ac92-be14-e088-1975ef073255@xen.org>
Date: Fri, 14 May 2021 10:35:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210514083905.18212-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 14/05/2021 09:39, Juergen Gross wrote:
> The main loop of xenstored is rather complicated due to different
> handling of socket and ring-page interfaces. Unify that handling by
> introducing interface type specific functions can_read() and
> can_write().
> 
> Put the interface type specific functions in an own structure and let
> struct connection contain only a pointer to that new function vector.
I would split the renaming into a separate patch. That would be easier to 
review and would remove some churn from this patch.

> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/xenstored_core.c   | 117 ++++++++++++++----------------
>   tools/xenstore/xenstored_core.h   |  19 ++---
>   tools/xenstore/xenstored_domain.c |  11 ++-
>   3 files changed, 73 insertions(+), 74 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 4b7b71cfb3..b66d119a98 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -226,8 +226,8 @@ static bool write_messages(struct connection *conn)
>   				sockmsg_string(out->hdr.msg.type),
>   				out->hdr.msg.len,
>   				out->buffer, conn);
> -		ret = conn->write(conn, out->hdr.raw + out->used,
> -				  sizeof(out->hdr) - out->used);
> +		ret = conn->funcs->write(conn, out->hdr.raw + out->used,
> +					 sizeof(out->hdr) - out->used);
>   		if (ret < 0)
>   			return false;
>   
> @@ -243,8 +243,8 @@ static bool write_messages(struct connection *conn)
>   			return true;
>   	}
>   
> -	ret = conn->write(conn, out->buffer + out->used,
> -			  out->hdr.msg.len - out->used);
> +	ret = conn->funcs->write(conn, out->buffer + out->used,
> +				 out->hdr.msg.len - out->used);
>   	if (ret < 0)
>   		return false;
>   
> @@ -1531,8 +1531,8 @@ static void handle_input(struct connection *conn)
>   	/* Not finished header yet? */
>   	if (in->inhdr) {
>   		if (in->used != sizeof(in->hdr)) {
> -			bytes = conn->read(conn, in->hdr.raw + in->used,
> -					   sizeof(in->hdr) - in->used);
> +			bytes = conn->funcs->read(conn, in->hdr.raw + in->used,
> +						  sizeof(in->hdr) - in->used);
>   			if (bytes < 0)
>   				goto bad_client;
>   			in->used += bytes;
> @@ -1557,8 +1557,8 @@ static void handle_input(struct connection *conn)
>   		in->inhdr = false;
>   	}
>   
> -	bytes = conn->read(conn, in->buffer + in->used,
> -			   in->hdr.msg.len - in->used);
> +	bytes = conn->funcs->read(conn, in->buffer + in->used,
> +				  in->hdr.msg.len - in->used);
>   	if (bytes < 0)
>   		goto bad_client;
>   
> @@ -1581,7 +1581,7 @@ static void handle_output(struct connection *conn)
>   		ignore_connection(conn);
>   }
>   
> -struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
> +struct connection *new_connection(struct interface_funcs *funcs)

I don't think the interface is meant to change after the connection is 
created, so this should be const.

>   {
>   	struct connection *new;
>   
> @@ -1591,8 +1591,7 @@ struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
>   
>   	new->fd = -1;
>   	new->pollfd_idx = -1;
> -	new->write = write;
> -	new->read = read;
> +	new->funcs = funcs;
>   	new->is_ignored = false;
>   	new->transaction_started = 0;
>   	INIT_LIST_HEAD(&new->out_list);
> @@ -1622,17 +1621,7 @@ static void accept_connection(int sock)
>   {
>   }
>   
> -int writefd(struct connection *conn, const void *data, unsigned int len)
> -{
> -	errno = EBADF;
> -	return -1;
> -}
> -
> -int readfd(struct connection *conn, void *data, unsigned int len)
> -{
> -	errno = EBADF;
> -	return -1;
> -}
> +struct interface_funcs socket_funcs;

AFAICT, this is defined for mini-os because read_state_connection() may 
use it. The assumption here is that XS_STATE_CONN_TYPE_SOCKET will never 
show up in the stream.

If there is any mistake in the stream, this could lead to dereferencing 
NULL and crashing afterwards. AFAICT, before this patch, we would just 
ignore the connection.

I think it would be best if socket_funcs is either not defined at all, or 
we continue to ignore the connection. The latter can probably be done by 
implementing dummy callbacks for can_write/can_read.

>   #else
>   int writefd(struct connection *conn, const void *data, unsigned int len)
>   {
> @@ -1672,6 +1661,29 @@ int readfd(struct connection *conn, void *data, unsigned int len)
>   	return rc;
>   }
>   
> +static bool socket_can_process(struct connection *conn, int mask)
> +{
> +	if (conn->pollfd_idx == -1)
> +		return false;
> +
> +	if (fds[conn->pollfd_idx].revents & ~(POLLIN | POLLOUT)) {
> +		talloc_free(conn);
> +		return false;
> +	}
> +
> +	return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
> +}
> +
> +static bool socket_can_write(struct connection *conn)
> +{
> +	return socket_can_process(conn, POLLOUT);
> +}
> +
> +static bool socket_can_read(struct connection *conn)
> +{
> +	return socket_can_process(conn, POLLIN);
> +}
> +
>   static void accept_connection(int sock)
>   {
>   	int fd;
> @@ -1681,12 +1693,19 @@ static void accept_connection(int sock)
>   	if (fd < 0)
>   		return;
>   
> -	conn = new_connection(writefd, readfd);
> +	conn = new_connection(&socket_funcs);
>   	if (conn)
>   		conn->fd = fd;
>   	else
>   		close(fd);
>   }
> +
> +struct interface_funcs socket_funcs = {

This should be const.

> +	.write = writefd,
> +	.read = readfd,
> +	.can_write = socket_can_write,
> +	.can_read = socket_can_read,
> +};
>   #endif
>   
>   static int tdb_flags;
> @@ -2304,47 +2323,19 @@ int main(int argc, char *argv[])
>   			if (&next->list != &connections)
>   				talloc_increase_ref_count(next);
>   
> -			if (conn->domain) {
> -				if (domain_can_read(conn))
> -					handle_input(conn);
> -				if (talloc_free(conn) == 0)
> -					continue;
> -
> -				talloc_increase_ref_count(conn);
> -				if (domain_can_write(conn) &&
> -				    !list_empty(&conn->out_list))
> -					handle_output(conn);
> -				if (talloc_free(conn) == 0)
> -					continue;
> -			} else {
> -				if (conn->pollfd_idx != -1) {
> -					if (fds[conn->pollfd_idx].revents
> -					    & ~(POLLIN|POLLOUT))
> -						talloc_free(conn);
> -					else if ((fds[conn->pollfd_idx].revents
> -						  & POLLIN) &&
> -						 !conn->is_ignored)
> -						handle_input(conn);
> -				}
> -				if (talloc_free(conn) == 0)
> -					continue;
> -
> -				talloc_increase_ref_count(conn);
> -
> -				if (conn->pollfd_idx != -1) {
> -					if (fds[conn->pollfd_idx].revents
> -					    & ~(POLLIN|POLLOUT))
> -						talloc_free(conn);
> -					else if ((fds[conn->pollfd_idx].revents
> -						  & POLLOUT) &&
> -						 !conn->is_ignored)
> -						handle_output(conn);
> -				}
> -				if (talloc_free(conn) == 0)
> -					continue;
> +			if (conn->funcs->can_read(conn))
> +				handle_input(conn);
> +			if (talloc_free(conn) == 0)
> +				continue;
>   
> -				conn->pollfd_idx = -1;
> -			}
> +			talloc_increase_ref_count(conn);
> +
> +			if (conn->funcs->can_write(conn))
> +				handle_output(conn);
> +			if (talloc_free(conn) == 0)
> +				continue;
> +
> +			conn->pollfd_idx = -1;
>   		}
>   
>   		if (delayed_requests) {
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 6a6d0448e8..1467270476 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -86,8 +86,13 @@ struct delayed_request {
>   };
>   
>   struct connection;
> -typedef int connwritefn_t(struct connection *, const void *, unsigned int);
> -typedef int connreadfn_t(struct connection *, void *, unsigned int);
> +
> +struct interface_funcs {
> +	int (*write)(struct connection *, const void *, unsigned int);
> +	int (*read)(struct connection *, void *, unsigned int);
> +	bool (*can_write)(struct connection *);
> +	bool (*can_read)(struct connection *);
> +};
>   
>   struct connection
>   {
> @@ -131,9 +136,8 @@ struct connection
>   	/* My watches. */
>   	struct list_head watches;
>   
> -	/* Methods for communicating over this connection: write can be NULL */
> -	connwritefn_t *write;
> -	connreadfn_t *read;
> +	/* Methods for communicating over this connection. */
> +	struct interface_funcs *funcs;
>   
>   	/* Support for live update: connection id. */
>   	unsigned int conn_id;
> @@ -196,7 +200,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
>   struct node *read_node(struct connection *conn, const void *ctx,
>   		       const char *name);
>   
> -struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
> +struct connection *new_connection(struct interface_funcs *funcs);
>   struct connection *get_connection_by_id(unsigned int conn_id);
>   void check_store(void);
>   void corrupt(struct connection *conn, const char *fmt, ...);
> @@ -254,9 +258,6 @@ void finish_daemonize(void);
>   /* Open a pipe for signal handling */
>   void init_pipe(int reopen_log_pipe[2]);
>   
> -int writefd(struct connection *conn, const void *data, unsigned int len);
> -int readfd(struct connection *conn, void *data, unsigned int len);
> -
>   extern struct interface_funcs socket_funcs;

Hmmm... I guess this change slipped into staging beforehand?

>   extern xengnttab_handle **xgt_handle;
>   
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 0c17937c0f..6e0fa6e861 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -172,6 +172,13 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
>   	return len;
>   }
>   
> +static struct interface_funcs domain_funcs = {
> +	.write = writechn,
> +	.read = readchn,
> +	.can_write = domain_can_write,
> +	.can_read = domain_can_read,
> +};
> +
>   static void *map_interface(domid_t domid)
>   {
>   	return xengnttab_map_grant_ref(*xgt_handle, domid,
> @@ -389,7 +396,7 @@ static int new_domain(struct domain *domain, int port, bool restore)
>   
>   	domain->introduced = true;
>   
> -	domain->conn = new_connection(writechn, readchn);
> +	domain->conn = new_connection(&domain_funcs);
>   	if (!domain->conn)  {
>   		errno = ENOMEM;
>   		return errno;
> @@ -1288,7 +1295,7 @@ void read_state_connection(const void *ctx, const void *state)
>   	struct domain *domain, *tdomain;
>   
>   	if (sc->conn_type == XS_STATE_CONN_TYPE_SOCKET) {
> -		conn = new_connection(writefd, readfd);
> +		conn = new_connection(&socket_funcs);
>   		if (!conn)
>   			barf("error restoring connection");
>   		conn->fd = sc->spec.socket_fd;
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 14 09:36:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 09:36:27 +0000
Date: Fri, 14 May 2021 11:36:14 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Anthony PERARD
	<anthony.perard@citrix.com>
Subject: Re: [PATCH v4 09/10] libs/{light,guest}: implement
 xc_cpuid_apply_policy in libxl
Message-ID: <YJ5EjsgzVXFDqlE1@Air-de-Roger>
References: <20210507110422.24608-1-roger.pau@citrix.com>
 <20210507110422.24608-10-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210507110422.24608-10-roger.pau@citrix.com>
MIME-Version: 1.0

On Fri, May 07, 2021 at 01:04:21PM +0200, Roger Pau Monne wrote:
> diff --git a/tools/libs/light/libxl_cpuid.c b/tools/libs/light/libxl_cpuid.c
> index eb6feaa96d1..6d17e89191f 100644
> --- a/tools/libs/light/libxl_cpuid.c
> +++ b/tools/libs/light/libxl_cpuid.c
> @@ -430,9 +430,11 @@ int libxl__cpuid_legacy(libxl_ctx *ctx, uint32_t domid, bool restore,
>                          libxl_domain_build_info *info)
>  {
>      GC_INIT(ctx);
> +    xc_cpu_policy_t *policy = NULL;
> +    bool hvm = info->type == LIBXL_DOMAIN_TYPE_HVM;

This is wrong and should instead be:

bool hvm = info->type != LIBXL_DOMAIN_TYPE_PV;

This accounts for libxl having a separate domain type for PVH. I've
fixed it in my local copy of the patch.

Maybe the variable should also have a different name? I cannot think of
anything better than 'translated' or 'hvm_container', though. Let me
know if I should change the variable name.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 14 09:38:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 09:38:49 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161943-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161943: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=22ac5cc9d9db34056f7c97e994fd9def683ebb2e
X-Osstest-Versions-That:
    ovmf=5531fd48ded1271b8775725355ab83994e4bc77c
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 09:38:47 +0000

flight 161943 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161943/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 22ac5cc9d9db34056f7c97e994fd9def683ebb2e
baseline version:
 ovmf                 5531fd48ded1271b8775725355ab83994e4bc77c

Last test of basis   161912  2021-05-12 02:02:58 Z    2 days
Testing same since   161943  2021-05-14 03:41:19 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sachin Agrawal <sachin.agrawal@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   5531fd48de..22ac5cc9d9  22ac5cc9d9db34056f7c97e994fd9def683ebb2e -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 14 09:42:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 09:42:17 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161940-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161940: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=315d99318179b9cd5077ccc9f7f26a164c9fa998
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 09:42:10 +0000

flight 161940 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161940/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     14 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                315d99318179b9cd5077ccc9f7f26a164c9fa998
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  286 days
Failing since        152366  2020-08-01 20:49:34 Z  285 days  478 attempts
Testing same since   161940  2021-05-13 21:40:37 Z    0 days    1 attempts

------------------------------------------------------------
6046 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1640334 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 14 09:42:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 09:42:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127290.239216 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhUKt-0000kh-F5; Fri, 14 May 2021 09:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127290.239216; Fri, 14 May 2021 09:42:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhUKt-0000ka-CA; Fri, 14 May 2021 09:42:23 +0000
Received: by outflank-mailman (input) for mailman id 127290;
 Fri, 14 May 2021 09:42:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhUKs-0000jv-6P
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 09:42:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1ce2e103-085a-4205-bafe-4c208ab0b78e;
 Fri, 14 May 2021 09:42:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED311B15A;
 Fri, 14 May 2021 09:42:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ce2e103-085a-4205-bafe-4c208ab0b78e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620985339; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Q172meoakAbkZf2higsfk3dmMCEQ9Vow2IpeO8dgoSk=;
	b=eq1sZFDLP2o6ied94pExxPmSlt6HpYPtJafkH5URZXyZvFsDGJryzcb6nAgaeGu/VsbmlH
	SIxxwQM2bdO/NsxKjXxI8/E7frVhmtBTQg/Vz5puTgATqkfeP/Yq+m9qVKkQB9ZL4Mh/Dr
	7DQJVzl1bIEA5oW2u++M3hNb92j1NHo=
Subject: Re: [PATCH] tools/xenstore: simplify xenstored main loop
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514083905.18212-1-jgross@suse.com>
 <304944cf-ac92-be14-e088-1975ef073255@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <3be1937f-3cd9-3eb8-48fd-bc9c9a85c051@suse.com>
Date: Fri, 14 May 2021 11:42:18 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <304944cf-ac92-be14-e088-1975ef073255@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="CYGYGRGGH1Jj5YX63YFP1i7kmLiSTbNs5"

On 14.05.21 11:35, Julien Grall wrote:
> Hi,
>=20
> On 14/05/2021 09:39, Juergen Gross wrote:
>> The main loop of xenstored is rather complicated due to different
>> handling of socket and ring-page interfaces. Unify that handling by
>> introducing interface type specific functions can_read() and
>> can_write().
>>
>> Put the interface type specific functions in an own structure and let
>> struct connection contain only a pointer to that new function vector.
> I would split the renaming in a separate patch. This would be easier to=
=20
> review and remove some churn from this patch.

Okay, I'll split the patch.

>=20
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> =C2=A0 tools/xenstore/xenstored_core.c=C2=A0=C2=A0 | 117 +++++++++++++=
+----------------
>> =C2=A0 tools/xenstore/xenstored_core.h=C2=A0=C2=A0 |=C2=A0 19 ++---
>> =C2=A0 tools/xenstore/xenstored_domain.c |=C2=A0 11 ++-
>> =C2=A0 3 files changed, 73 insertions(+), 74 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c=20
>> b/tools/xenstore/xenstored_core.c
>> index 4b7b71cfb3..b66d119a98 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -226,8 +226,8 @@ static bool write_messages(struct connection *conn=
)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 sockmsg_string(out->hdr.msg.type),
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 out->hdr.msg.len,
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 out->buffer, conn);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 ret =3D conn->write(conn, =
out->hdr.raw + out->used,
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 sizeof(out->hdr) - out->used);
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 ret =3D conn->funcs->write=
(conn, out->hdr.raw + out->used,
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 sizeof(out->hdr) - ou=
t->used);
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (ret < 0)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0 return false;
>> @@ -243,8 +243,8 @@ static bool write_messages(struct connection *conn=
)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0 return true;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 }
>> -=C2=A0=C2=A0=C2=A0 ret =3D conn->write(conn, out->buffer + out->used,=

>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0 out->hdr.msg.len - out->used);
>> +=C2=A0=C2=A0=C2=A0 ret =3D conn->funcs->write(conn, out->buffer + out=
->used,
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0 out->hdr.msg.len - out->used);
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (ret < 0)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 return false;
>> @@ -1531,8 +1531,8 @@ static void handle_input(struct connection *conn=
)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 /* Not finished header yet? */
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (in->inhdr) {
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (in->used !=3D=
 sizeof(in->hdr)) {
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 by=
tes =3D conn->read(conn, in->hdr.raw + in->used,
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 sizeof(in=
->hdr) - in->used);
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 by=
tes =3D conn->funcs->read(conn, in->hdr.raw + in->used,
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0 sizeof(in->hdr) - in->used);
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0 if (bytes < 0)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 goto bad_client;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0 in->used +=3D bytes;
>> @@ -1557,8 +1557,8 @@ static void handle_input(struct connection *conn=
)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 in->inhdr =3D f=
alse;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 }
>> -=C2=A0=C2=A0=C2=A0 bytes =3D conn->read(conn, in->buffer + in->used,
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0 in->hdr.msg.len - in->used);
>> +=C2=A0=C2=A0=C2=A0 bytes =3D conn->funcs->read(conn, in->buffer + in-=
>used,
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 in->hdr.msg.len - in->used);
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (bytes < 0)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 goto bad_client=
;
>> @@ -1581,7 +1581,7 @@ static void handle_output(struct connection *con=
n)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 ignore_connecti=
on(conn);
>> =C2=A0 }
>> -struct connection *new_connection(connwritefn_t *write, connreadfn_t =

>> *read)
>> +struct connection *new_connection(struct interface_funcs *funcs)
>=20
> I don't think the interface is meant to be changed after the connection=
=20
> is created. So this should be const.

Yes.

>=20
>> =C2=A0 {
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 struct connection *new;
>> @@ -1591,8 +1591,7 @@ struct connection *new_connection(connwritefn_t =

>> *write, connreadfn_t *read)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 new->fd =3D -1;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 new->pollfd_idx =3D -1;
>> -=C2=A0=C2=A0=C2=A0 new->write =3D write;
>> -=C2=A0=C2=A0=C2=A0 new->read =3D read;
>> +=C2=A0=C2=A0=C2=A0 new->funcs =3D funcs;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 new->is_ignored =3D false;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 new->transaction_started =3D 0;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 INIT_LIST_HEAD(&new->out_list);
>> @@ -1622,17 +1621,7 @@ static void accept_connection(int sock)
>> =C2=A0 {
>> =C2=A0 }
>> -int writefd(struct connection *conn, const void *data, unsigned int l=
en)
>> -{
>> -=C2=A0=C2=A0=C2=A0 errno =3D EBADF;
>> -=C2=A0=C2=A0=C2=A0 return -1;
>> -}
>> -
>> -int readfd(struct connection *conn, void *data, unsigned int len)
>> -{
>> -=C2=A0=C2=A0=C2=A0 errno =3D EBADF;
>> -=C2=A0=C2=A0=C2=A0 return -1;
>> -}
>> +struct interface_funcs socket_funcs;
>=20
> AFAICT, this is defined for mini-os because read_state_connection() may=
=20
> use it. The assumption here is XS_STATE_CONN_TYPE_SOCKET will never sho=
w=20
> up in the stream.
>=20
> If there is any mistake in the stream, this could lead to dereference=20
> NULL and crash after. AFAICT, before, we would just ignore the connecti=
on.
>=20
> I think it would be best if sockets_funcs() is not defined at all or we=
=20
> continue to ignore the connection. This can be probably done by=20
> implementing dummy callback for can_write/can_read.

Hmm, yes. I can put the referencing part inside #ifdef NO_SOCKETS.

>=20
>> =C2=A0 #else
>> =C2=A0 int writefd(struct connection *conn, const void *data, unsigned=
 int=20
>> len)
>> =C2=A0 {
>> @@ -1672,6 +1661,29 @@ int readfd(struct connection *conn, void *data,=
=20
>> unsigned int len)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 return rc;
>> =C2=A0 }
>> +static bool socket_can_process(struct connection *conn, int mask)
>> +{
>> +=C2=A0=C2=A0=C2=A0 if (conn->pollfd_idx =3D=3D -1)
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 return false;
>> +
>> +=C2=A0=C2=A0=C2=A0 if (fds[conn->pollfd_idx].revents & ~(POLLIN | POL=
LOUT)) {
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 talloc_free(conn);
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 return false;
>> +=C2=A0=C2=A0=C2=A0 }
>> +
>> +=C2=A0=C2=A0=C2=A0 return (fds[conn->pollfd_idx].revents & mask) && !=
conn->is_ignored;
>> +}
>> +
>> +static bool socket_can_write(struct connection *conn)
>> +{
>> +=C2=A0=C2=A0=C2=A0 return socket_can_process(conn, POLLOUT);
>> +}
>> +
>> +static bool socket_can_read(struct connection *conn)
>> +{
>> +=C2=A0=C2=A0=C2=A0 return socket_can_process(conn, POLLIN);
>> +}
>> +
>> =C2=A0 static void accept_connection(int sock)
>> =C2=A0 {
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 int fd;
>> @@ -1681,12 +1693,19 @@ static void accept_connection(int sock)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (fd < 0)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 return;
>> -=C2=A0=C2=A0=C2=A0 conn =3D new_connection(writefd, readfd);
>> +=C2=A0=C2=A0=C2=A0 conn =3D new_connection(&socket_funcs);
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (conn)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 conn->fd =3D fd=
;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 else
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 close(fd);
>> =C2=A0 }
>> +
>> +struct interface_funcs socket_funcs =3D {
>=20
> This should be const.

Yes.

>=20
>> +=C2=A0=C2=A0=C2=A0 .write =3D writefd,
>> +=C2=A0=C2=A0=C2=A0 .read =3D readfd,
>> +=C2=A0=C2=A0=C2=A0 .can_write =3D socket_can_write,
>> +=C2=A0=C2=A0=C2=A0 .can_read =3D socket_can_read,
>> +};
>> =C2=A0 #endif
>> =C2=A0 static int tdb_flags;
>> @@ -2304,47 +2323,19 @@ int main(int argc, char *argv[])
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0 if (&next->list !=3D &connections)
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 talloc_increase_ref_count(next);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if=
 (conn->domain) {
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 if (domain_can_read(conn))
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 handle_input(conn);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 if (talloc_free(conn) =3D=3D 0)
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 continue;
>> -
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 talloc_increase_ref_count(conn);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 if (domain_can_write(conn) &&
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 !list_empty(&conn->out_list=
))
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 handle_output(conn);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 if (talloc_free(conn) =3D=3D 0)
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 continue;
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 } =
else {
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 if (conn->pollfd_idx !=3D -1) {
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (fds[conn->pollfd_idx].r=
events
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 & ~=
(POLLIN|POLLOUT))
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 tal=
loc_free(conn);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 else if ((fds[conn->pollfd_=
idx].revents
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0 & POLLIN) &&
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
 !conn->is_ignored)
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 han=
dle_input(conn);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 }
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 if (talloc_free(conn) =3D=3D 0)
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 continue;
>> -
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 talloc_increase_ref_count(conn);
>> -
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 if (conn->pollfd_idx !=3D -1) {
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (fds[conn->pollfd_idx].r=
events
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 & ~=
(POLLIN|POLLOUT))
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 tal=
loc_free(conn);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 else if ((fds[conn->pollfd_=
idx].revents
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
=C2=A0 & POLLOUT) &&
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=
 !conn->is_ignored)
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 han=
dle_output(conn);
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 }
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 if (talloc_free(conn) =3D=3D 0)
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 continue;
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if=
 (conn->funcs->can_read(conn))
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 handle_input(conn);
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if=
 (talloc_free(conn) =3D=3D 0)
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 continue;
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 conn->pollfd_idx =3D -1;
>> -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 }
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 ta=
lloc_increase_ref_count(conn);
>> +
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if=
 (conn->funcs->can_write(conn))
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 handle_output(conn);
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if=
 (talloc_free(conn) =3D=3D 0)
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=
=A0=C2=A0=C2=A0=C2=A0 continue;
>> +
>> +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 co=
nn->pollfd_idx =3D -1;
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 }
>> =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 if (delayed_req=
uests) {
>> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
>> index 6a6d0448e8..1467270476 100644
>> --- a/tools/xenstore/xenstored_core.h
>> +++ b/tools/xenstore/xenstored_core.h
>> @@ -86,8 +86,13 @@ struct delayed_request {
>>  };
>>  struct connection;
>> -typedef int connwritefn_t(struct connection *, const void *, unsigned int);
>> -typedef int connreadfn_t(struct connection *, void *, unsigned int);
>> +
>> +struct interface_funcs {
>> +    int (*write)(struct connection *, const void *, unsigned int);
>> +    int (*read)(struct connection *, void *, unsigned int);
>> +    bool (*can_write)(struct connection *);
>> +    bool (*can_read)(struct connection *);
>> +};
>>  struct connection
>>  {
>> @@ -131,9 +136,8 @@ struct connection
>>      /* My watches. */
>>      struct list_head watches;
>> -    /* Methods for communicating over this connection: write can be NULL */
>> -    connwritefn_t *write;
>> -    connreadfn_t *read;
>> +    /* Methods for communicating over this connection. */
>> +    struct interface_funcs *funcs;
>>      /* Support for live update: connection id. */
>>      unsigned int conn_id;
>> @@ -196,7 +200,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
>>  struct node *read_node(struct connection *conn, const void *ctx,
>>                         const char *name);
>> -struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
>> +struct connection *new_connection(struct interface_funcs *funcs);
>>  struct connection *get_connection_by_id(unsigned int conn_id);
>>  void check_store(void);
>>  void corrupt(struct connection *conn, const char *fmt, ...);
>> @@ -254,9 +258,6 @@ void finish_daemonize(void);
>>  /* Open a pipe for signal handling */
>>  void init_pipe(int reopen_log_pipe[2]);
>> -int writefd(struct connection *conn, const void *data, unsigned int len);
>> -int readfd(struct connection *conn, void *data, unsigned int len);
>> -
>>  extern struct interface_funcs socket_funcs;
> 
> Hmmm... I guess this change slipped into staging beforehand?

No, I just forgot to make the functions static.


Juergen
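The change under review replaces the two separate read/write function-pointer typedefs with a single ops table shared by each transport. As a hedged illustration of that pattern (a toy in-memory transport, not the actual xenstored code; all names below besides `struct interface_funcs` are made up):

```c
/* Sketch of the ops-table pattern from the patch: each transport bundles
 * its methods in one struct instead of carrying loose function pointers. */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

struct connection;

struct interface_funcs {
    int (*write)(struct connection *, const void *, unsigned int);
    int (*read)(struct connection *, void *, unsigned int);
    bool (*can_write)(struct connection *);
    bool (*can_read)(struct connection *);
};

struct connection {
    const struct interface_funcs *funcs;  /* methods for this transport */
    char buf[64];
    unsigned int len;
};

/* A toy buffer transport standing in for the socket or ring backends. */
static int buf_write(struct connection *c, const void *data, unsigned int len)
{
    if (len > sizeof(c->buf))
        return -1;
    memcpy(c->buf, data, len);
    c->len = len;
    return (int)len;
}

static int buf_read(struct connection *c, void *data, unsigned int len)
{
    if (len > c->len)
        len = c->len;
    memcpy(data, c->buf, len);
    return (int)len;
}

static bool buf_can_write(struct connection *c) { (void)c; return true; }
static bool buf_can_read(struct connection *c) { return c->len > 0; }

static const struct interface_funcs buf_funcs = {
    .write     = buf_write,
    .read      = buf_read,
    .can_write = buf_can_write,
    .can_read  = buf_can_read,
};
```

A caller then only stores one pointer (`conn->funcs`) and dispatches through it, which is what lets the patch drop the exported `writefd`/`readfd` symbols.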


--wAO9iePUs26Q35umTs4dTT6P6H9ilRvbx--



From xen-devel-bounces@lists.xenproject.org Fri May 14 09:47:04 2021
From: Julien Grall <julien@xen.org>
To: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>
Subject: Re: [PATCH v2 4/5] xen: Add files needed for minimal riscv build
Date: Fri, 14 May 2021 10:46:49 +0100
Message-ID: <a5fd6d72-3a02-4c12-4021-bf06d0eeb174@xen.org>
References: <cover.1620965208.git.connojdavis@gmail.com>
 <c5d130b06de3d724921488387f1743d7996aac11.1620965208.git.connojdavis@gmail.com>
In-Reply-To: <c5d130b06de3d724921488387f1743d7996aac11.1620965208.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
MIME-Version: 1.0

Hi Connor,

On 14/05/2021 05:17, Connor Davis wrote:
> Add the minimum code required to get xen to build with
> XEN_TARGET_ARCH=riscv64. It is minimal in the sense that every file and
> function added is required for a successful build, given the .config
> generated from riscv64_defconfig. The function implementations are just
> stubs; actual implementations will need to be added later.

Thank you for the contribution. This is quite a large patch to review.
Could you consider splitting it into smaller ones (I think Stefano
suggested one per header file or group of headers)? This would make it
easier to review and to find whether some bits can be moved into common
code.

I would be happy to help move some of the pieces.

> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>   config/riscv64.mk                        |   7 +
>   xen/Makefile                             |   8 +-
>   xen/arch/riscv/Kconfig                   |  54 ++++
>   xen/arch/riscv/Kconfig.debug             |   0
>   xen/arch/riscv/Makefile                  |  57 ++++
>   xen/arch/riscv/README.source             |  19 ++
>   xen/arch/riscv/Rules.mk                  |  13 +
>   xen/arch/riscv/arch.mk                   |   7 +
>   xen/arch/riscv/configs/riscv64_defconfig |  12 +
>   xen/arch/riscv/delay.c                   |  16 +
>   xen/arch/riscv/domain.c                  | 144 +++++++++
>   xen/arch/riscv/domctl.c                  |  36 +++
>   xen/arch/riscv/guestcopy.c               |  57 ++++
>   xen/arch/riscv/head.S                    |   6 +
>   xen/arch/riscv/irq.c                     |  78 +++++
>   xen/arch/riscv/lib/Makefile              |   1 +
>   xen/arch/riscv/lib/find_next_bit.c       | 284 +++++++++++++++++

I quickly skimmed through the code and I think some of the files can be
made common, such as this one.

[...]

> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
> index 1708c36964..fd0b75677c 100644
> --- a/xen/include/xen/domain.h
> +++ b/xen/include/xen/domain.h
> @@ -60,6 +60,7 @@ void arch_vcpu_destroy(struct vcpu *v);
>   int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
>   void unmap_vcpu_info(struct vcpu *v);
>   
> +struct xen_domctl_createdomain;

This is needed because?

>   int arch_domain_create(struct domain *d,
>                          struct xen_domctl_createdomain *config);
>   
> 
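For readers following along: a forward declaration like the one being questioned is the usual idiom when a header refers to a type only through pointers and does not want to pull in the header that defines it. A minimal illustration (hypothetical type and function names, not the actual Xen headers):

```c
#include <stddef.h>

/* Forward declaration: an incomplete type is enough for prototypes that
 * only take a pointer, mirroring how xen/domain.h only needs
 * "struct xen_domctl_createdomain *". */
struct config_blob;

int setup(struct config_blob *cfg);  /* legal with the incomplete type */

/* The full definition can appear later; in real code it would live in a
 * separate header that this one deliberately avoids including. */
struct config_blob {
    int flags;
};

int setup(struct config_blob *cfg)
{
    return cfg ? cfg->flags : -1;
}
```

Without the forward declaration, the prototype would implicitly declare a new `struct` scoped to the parameter list, which most compilers warn about.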

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 14 09:51:13 2021
From: Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>
To: "julien@xen.org" <julien@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
CC: Andrii Chepurnyi <Andrii_Chepurnyi@epam.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
Subject: Re: Hand over of the Xen shared info page
Date: Fri, 14 May 2021 09:50:52 +0000
Message-ID: <1db54c363eae22613280e7181805abee396fe5e9.camel@epam.com>
References: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
 <8ff05bdf-a6c4-6b14-b39c-7d9b3bb9d279@xen.org>
In-Reply-To: <8ff05bdf-a6c4-6b14-b39c-7d9b3bb9d279@xen.org>
Content-Type: text/plain; charset="utf-8"
MIME-Version: 1.0
Hi Julien!

On Thu, 2021-05-13 at 09:37 +0100, Julien Grall wrote:
> 
> On 13/05/2021 09:03, Anastasiia Lukianenko wrote:
> > Hi all,
> 
> Hi,
> 
> > The problem described below concerns cases when a shared info page
> > needs to be handed over from one entity in the system to another,
> > for example, when there is a bootloader or any other code that may
> > run before the guest OS' kernel.
> > Normally, to map the shared info page guests allocate a memory page
> > from their RAM and map the shared info on top of it. Specifically,
> > we use the XENMAPSPACE_shared_info memory space in the
> > XENMEM_add_to_physmap hypercall. As the info page exists throughout
> > the guest's existence this doesn't hurt the guest, but when the page
> > gets out of accounting, e.g. after the bootloader jumps to Linux and
> > the page is not handed over to it, the mapped page becomes a
> > problem.
> > Consider the case of the U-Boot bootloader, which already has Xen
> > support. U-Boot's Xen guest implementation allocates a shared info
> > page between Xen and the guest domain, and U-Boot uses the domain's
> > RAM address space to create and map the shared info page using the
> > XENMEM_add_to_physmap hypercall [1].
> > 
> > After U-Boot transfers control to the operating system (Linux,
> > Android, etc.), the shared info page is still mapped in the domain's
> > address space, i.e. its RAM. So, after we leave U-Boot, this page
> > becomes just an ordinary memory page from Linux's point of view,
> > while it is still a shared info page from Xen's point of view. This
> > can lead to undefined behavior, errors, etc., as Xen can write
> > something to the shared info page, and when Linux tries to use it,
> > data corruption may happen.
> > This happens because there is no unmap function in the Xen API to
> > remove an existing shared info page mapping. We could only use the
> > XENMEM_remove_from_physmap hypercall, which will eventually create a
> > hole in the domain's RAM address space, which may also lead to a
> > guest crash while accessing that memory.
> 
> The hypercall XENMEM_remove_from_physmap is the correct hypercall here
> and works as intended. It is not Xen's business to keep track of what
> the original page was (it may have been RAM, a device...).
> 
> The problem here is that the hypercall XENMEM_add_to_physmap is
> misused in U-Boot. When you give an address for the mapping, you are
> telling Xen "here is a free region to map the shared page". IOW, Xen
> will throw away whatever was there before, because that is what you
> asked for.
> 
> If you want to map in place of a RAM page, then the correct approach
> is to:
>    1) Request Xen to remove the RAM page from the P2M
>    2) Map the shared page
>    /* Use it */
>    3) Unmap the shared page
>    4) Allocate the memory
> 
> You can avoid 1) and 4) by finding a free region in the address space.
> 
> > 
> > We noticed this problem, and a workaround was implemented using the
> > special GUEST_MAGIC memory region [2].
> > 
> > Now we want to make a proper solution based on GUEST_MAGIC_BASE,
> > which does not belong to the guest's RAM address space [3]. Using
> > the example of how offsets for the console and xenstore are
> > implemented, we can add a new shared_info offset, increase the
> > number of magic pages [4], and implement the related functionality,
> > so there is a similar API to query that magic page location as is
> > done for the console PFN and others.
> 
> They are not the same type. The console PFN points to memory already
> populated in the guest address space.
> 
> For the domain shared page, this is memory belonging to Xen that you
> will map in your address space. A domain can map it anywhere it
> wants.
> 
> > This approach would allow the use of the XENMEM_remove_from_physmap
> > hypercall without creating gaps in the RAM address space for the Xen
> > guest OS [5].
> 
> See above on how to prevent the gap. I appreciate this means a
> superpage may get shattered.
> 
> The alternative is for U-Boot to go through the DT and infer which
> regions are free (IOW, any region not described).

Thank you for your interest in the problem and the advice on how to
solve it. Could you please clarify how we could find free regions using
the DT in U-Boot?

Regards,
Anastasiia
> 
> Cheers,
> 
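The four-step sequence Julien describes in this message can be sketched as pseudocode. Note that `remove_ram_page()`, `map_shared_info()`, `unmap_shared_info()` and `repopulate_ram_page()` are purely illustrative wrapper names (not the actual U-Boot or Xen API); they stand for invocations of the `XENMEM_remove_from_physmap` / `XENMEM_add_to_physmap` hypercalls:

```c
/* Pseudocode sketch of the suggested hand-over-friendly sequence. */
remove_ram_page(gpfn);       /* 1) take the RAM page out of the P2M        */
map_shared_info(gpfn);       /* 2) XENMEM_add_to_physmap with
                              *    XENMAPSPACE_shared_info                 */
/* ... use the shared info page ... */
unmap_shared_info(gpfn);     /* 3) XENMEM_remove_from_physmap              */
repopulate_ram_page(gpfn);   /* 4) give the guest its RAM page back        */

/* Steps 1) and 4) can be skipped by picking a gpfn in a hole, i.e. a
 * region not described in the device tree, rather than in guest RAM.     */
```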


From xen-devel-bounces@lists.xenproject.org Fri May 14 09:59:26 2021
From: Juergen Gross <jgross@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: boris.ostrovsky@oracle.com, hch@lst.de,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2 1/3] xen/arm: move xen_swiotlb_detect to
 arm/swiotlb-xen.h
Date: Fri, 14 May 2021 11:59:18 +0200
Message-ID: <3a54675f-d3f3-bc49-e10f-edf4e9c94cf1@suse.com>
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
 <20210512201823.1963-1-sstabellini@kernel.org>
In-Reply-To: <20210512201823.1963-1-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="ax8YgDkHoBDjxrNheNJb6Az0w3RVWfcx6"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--ax8YgDkHoBDjxrNheNJb6Az0w3RVWfcx6
Content-Type: multipart/mixed; boundary="C7oYKy3Mw72Cvinm75Q4Ew9nuG4Btzs1i";
 protected-headers="v1"

--C7oYKy3Mw72Cvinm75Q4Ew9nuG4Btzs1i
Content-Type: multipart/mixed;
 boundary="------------5FB5208856B6D8F743644AAA"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------5FB5208856B6D8F743644AAA
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.05.21 22:18, Stefano Stabellini wrote:
> From: Stefano Stabellini <stefano.stabellini@xilinx.com>
>
> Move xen_swiotlb_detect to a static inline function to make it available
> to !CONFIG_XEN builds.
>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
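For readers outside the thread, the pattern being reviewed can be sketched stand-alone; this is an illustrative stub, not the actual contents of arm/swiotlb-xen.h, and only the function name mirrors the patch:

```c
/*
 * Illustrative sketch of the pattern under review: keep a detection
 * helper as a static inline in a header so that builds without Xen
 * support (!CONFIG_XEN) still compile and simply see "not detected".
 * The body is a hypothetical stand-in, not the kernel's code.
 */
#ifdef CONFIG_XEN
/* The real detection logic would be provided by the Xen-enabled build. */
int xen_swiotlb_detect(void);
#else
static inline int xen_swiotlb_detect(void)
{
	return 0;	/* no Xen: the Xen swiotlb is never needed */
}
#endif
```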

--------------5FB5208856B6D8F743644AAA
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------5FB5208856B6D8F743644AAA--

--C7oYKy3Mw72Cvinm75Q4Ew9nuG4Btzs1i--

--ax8YgDkHoBDjxrNheNJb6Az0w3RVWfcx6
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCeSfYFAwAAAAAACgkQsN6d1ii/Ey+C
IAf/RfXE5sCIIPncy23CE2fot8TZTqS930g2RgV7TR8Cjj5BOoxuK6dizqmS2AJVv1EFBQqjfhLW
sQ+DUtej8kOO5fCKsRTMq0P03vB9Cmhbm70KXEuAq5I8BPokHkVsyc93A4wdBt/UuMG+3mt7dUgp
5QFj/rbOqgPSfV4HnltOqJpqIzCJCk+NxxUTGdMWQSb9xdoyvXKxbSZAGsiKZX/rLvcMB5P96R4v
wZD/LbbLTvYn3aNWvHI1wS/Dn2RQMvdppMxdTp/9BP99k0kEUb/17Qv+mRZY9BIhp94JgJTcXE8M
J6uCviQtf3IDfKm2Zur0TcuToEAbOEhk+0lLjoW5YQ==
=4DlO
-----END PGP SIGNATURE-----

--ax8YgDkHoBDjxrNheNJb6Az0w3RVWfcx6--


From xen-devel-bounces@lists.xenproject.org Fri May 14 10:00:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 10:00:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127314.239260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhUcr-0005HS-TY; Fri, 14 May 2021 10:00:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127314.239260; Fri, 14 May 2021 10:00:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhUcr-0005HJ-QP; Fri, 14 May 2021 10:00:57 +0000
Received: by outflank-mailman (input) for mailman id 127314;
 Fri, 14 May 2021 10:00:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhUcq-0005HB-AZ
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 10:00:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9cb16d02-9d14-4602-a466-10c142f6ac6e;
 Fri, 14 May 2021 10:00:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6BE3AABF6;
 Fri, 14 May 2021 10:00:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9cb16d02-9d14-4602-a466-10c142f6ac6e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620986454; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=S1AasyfX6dCn7eKy9VmzPxCNY0w7MLgEqeTYoimG95M=;
	b=KVjFKKs3eW6M1ftQosK1jd9O4JCjhh7bDVra+G9S7eY4zNb2nMVaxyFlDKw2s1tXtp3Om9
	MBpKRArS4dNSMhlFSzZH4u4o6WNLwF3usk9J2O0ldHOlzvnKC6dnXVotQlPYfpUxtVZEOQ
	k8Ls81Uq5qKk6Li/oW2mLVKtnfOlXs0=
Subject: Re: [PATCH v2 2/3] arm64: do not set SWIOTLB_NO_FORCE when swiotlb is
 required
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: boris.ostrovsky@oracle.com, hch@lst.de, catalin.marinas@arm.com,
 will@kernel.org, linux-arm-kernel@lists.infradead.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
 <20210512201823.1963-2-sstabellini@kernel.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <12d992d9-30de-2d74-9e87-5e5dfdf8e785@suse.com>
Date: Fri, 14 May 2021 12:00:53 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <20210512201823.1963-2-sstabellini@kernel.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="4LHxa7KdNQCrLvpAMN7ARqgIqLpVDMjIM"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--4LHxa7KdNQCrLvpAMN7ARqgIqLpVDMjIM
Content-Type: multipart/mixed; boundary="RlFh2c8j1O0H9rdQGzratxOGFlAhxMIMn";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: boris.ostrovsky@oracle.com, hch@lst.de, catalin.marinas@arm.com,
 will@kernel.org, linux-arm-kernel@lists.infradead.org,
 Stefano Stabellini <stefano.stabellini@xilinx.com>
Message-ID: <12d992d9-30de-2d74-9e87-5e5dfdf8e785@suse.com>
Subject: Re: [PATCH v2 2/3] arm64: do not set SWIOTLB_NO_FORCE when swiotlb is
 required
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
 <20210512201823.1963-2-sstabellini@kernel.org>
In-Reply-To: <20210512201823.1963-2-sstabellini@kernel.org>

--RlFh2c8j1O0H9rdQGzratxOGFlAhxMIMn
Content-Type: multipart/mixed;
 boundary="------------653F18E978F1107FBF01B043"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------653F18E978F1107FBF01B043
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.05.21 22:18, Stefano Stabellini wrote:
> From: Christoph Hellwig <hch@lst.de>
>
> Although SWIOTLB_NO_FORCE is meant to allow later calls to swiotlb_init,
> today dma_direct_map_page returns error if SWIOTLB_NO_FORCE.
>
> For now, without a larger overhaul of SWIOTLB_NO_FORCE, the best we can
> do is to avoid setting SWIOTLB_NO_FORCE in mem_init when we know that it
> is going to be required later (e.g. Xen requires it).
>
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> CC: catalin.marinas@arm.com
> CC: will@kernel.org
> CC: linux-arm-kernel@lists.infradead.org
> Fixes: 2726bf3ff252 ("swiotlb: Make SWIOTLB_NO_FORCE perform no allocation")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen
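The decision the quoted patch implements can be sketched as a small helper; the name and parameters below are invented for illustration (the arm64 kernel does this inline in mem_init), so treat it as a hedged sketch rather than the actual code:

```c
/*
 * Hypothetical sketch of the fix under review: only choose
 * SWIOTLB_NO_FORCE at mem_init time when no later user (such as
 * swiotlb-xen) is known to need bounce buffers. Helper name and
 * parameters are illustrative, not the arm64 kernel interface.
 */
enum swiotlb_mode { SWIOTLB_NORMAL, SWIOTLB_NO_FORCE };

static enum swiotlb_mode choose_swiotlb_mode(int all_ram_dma_addressable,
					     int running_on_xen)
{
	/* Xen requires swiotlb-xen later, so never disable allocation. */
	if (all_ram_dma_addressable && !running_on_xen)
		return SWIOTLB_NO_FORCE;
	return SWIOTLB_NORMAL;
}
```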

--------------653F18E978F1107FBF01B043--

--RlFh2c8j1O0H9rdQGzratxOGFlAhxMIMn--

--4LHxa7KdNQCrLvpAMN7ARqgIqLpVDMjIM
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCeSlUFAwAAAAAACgkQsN6d1ii/Ey8l
OggAif18ZgrsFLVdYp+O0xPpes+xFlJKXgKPZayoWc2m1hSqUAFdmf6xQIux2zdRRedU4m2xOr2u
hKk69P6JQIIYm7cRyQ79HAlhjNWRWuioOyQG5DIJK5cMRyJOGjgyN7lgnunhStsoJ1/IAdgdPVbw
ta5gnV2mk2RsaiWupp9AZIBg2RxF/r/SsCG7ehVMJsB1pJU8eIrtGdgC7JqjUErDopk4RBo7/wxf
g3O+CTKu4wZUvAH9mgVbQ6sOnQ0XWDKWpl4BDqgDnLEuz1tD8OW/uK4qNrKY5Zi+7R51hk6lXzHr
nOPCUkxYH1vZBHEU8ceH4M1I7aQYEfYrmkzuGj/1tQ==
=Lz5u
-----END PGP SIGNATURE-----

--4LHxa7KdNQCrLvpAMN7ARqgIqLpVDMjIM--


From xen-devel-bounces@lists.xenproject.org Fri May 14 10:08:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 10:08:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127322.239271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhUjl-00067P-Pu; Fri, 14 May 2021 10:08:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127322.239271; Fri, 14 May 2021 10:08:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhUjl-00067I-Mt; Fri, 14 May 2021 10:08:05 +0000
Received: by outflank-mailman (input) for mailman id 127322;
 Fri, 14 May 2021 10:08:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+syT=KJ=freebsd.org=royger@srs-us1.protection.inumbo.net>)
 id 1lhUjk-00067A-Ld
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 10:08:04 +0000
Received: from mx2.freebsd.org (unknown [2610:1c1:1:606c::19:2])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d55b3dde-8f08-4378-9e93-fa6d2c2f3489;
 Fri, 14 May 2021 10:08:03 +0000 (UTC)
Received: from mx1.freebsd.org (mx1.freebsd.org [96.47.72.80])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits)
 client-signature RSA-PSS (4096 bits))
 (Client CN "mx1.freebsd.org", Issuer "R3" (verified OK))
 by mx2.freebsd.org (Postfix) with ESMTPS id 076D78682A;
 Fri, 14 May 2021 10:08:03 +0000 (UTC)
 (envelope-from royger@freebsd.org)
Received: from smtp.freebsd.org (smtp.freebsd.org [96.47.72.83])
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256/256 bits)
 key-exchange X25519 server-signature RSA-PSS (4096 bits) server-digest SHA256
 client-signature RSA-PSS (4096 bits) client-digest SHA256)
 (Client CN "smtp.freebsd.org", Issuer "R3" (verified OK))
 by mx1.freebsd.org (Postfix) with ESMTPS id 4FhPNf6XZDz3FKW;
 Fri, 14 May 2021 10:08:02 +0000 (UTC)
 (envelope-from royger@freebsd.org)
Received: from localhost (unknown [93.176.185.224])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (Client did not present a certificate) (Authenticated sender: royger)
 by smtp.freebsd.org (Postfix) with ESMTPSA id 6FCB7236D9;
 Fri, 14 May 2021 10:08:02 +0000 (UTC)
 (envelope-from royger@freebsd.org)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d55b3dde-8f08-4378-9e93-fa6d2c2f3489
Date: Fri, 14 May 2021 12:07:53 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <royger@freebsd.org>
To: Julien Grall <julien@xen.org>
Cc: Elliott Mitchell <ehem+undef@m5p.com>, xen-devel@lists.xenproject.org,
	Mitchell Horne <mhorne@freebsd.org>
Subject: Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
Message-ID: <YJ5L+ar29Q8g+xm+@Air-de-Roger>
References: <YIptpndhk6MOJFod@Air-de-Roger>
 <YItwHirnih6iUtRS@mattapan.m5p.com>
 <YIu80FNQHKS3+jVN@Air-de-Roger>
 <YJDcDjjgCsQUdsZ7@mattapan.m5p.com>
 <YJURGaqAVBSYnMRf@Air-de-Roger>
 <YJYem5CW/97k/e5A@mattapan.m5p.com>
 <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org>
 <YJ3jlGSxs60Io+dp@mattapan.m5p.com>
 <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org>

On Fri, May 14, 2021 at 09:32:10AM +0100, Julien Grall wrote:
> Hi Elliott,
> 
> On 14/05/2021 03:42, Elliott Mitchell wrote:
> > Was it intended for the /hypervisor range to dynamically scale with the
> > size of the domain?
> As per above, this doesn't depend on the size of the domain. Instead, this
> depends on what sort of backend will be present in the domain.

It should instead scale with the total memory on the system, i.e. if
your hardware has 4GB of RAM the unpopulated range should be at least
4GB minus the memory of the current domain, so that it can map any
possible page assigned to a different domain (and even then I'm not
sure we shouldn't account for duplicated mappings).

> > Might it be better to deprecate the /hypervisor range and have domains
> > allocate any available address space for foreign mappings?
> 
> It may be easy for FreeBSD to find available address space but so far this
> has not been the case in Linux (I haven't checked the latest version
> though).
> 
> To be clear, an OS is free to not use the range provided in /hypervisor
> (maybe this is not clear enough in the spec?). This was mostly introduced to
> overcome some issues we saw in Linux when Xen on Arm was introduced.
> 
> > 
> > Should the FreeBSD implementation be treating grant tables as distinct
> > from other foreign mappings?
> 
> Both require unallocated address space to work. IIRC FreeBSD is able to find
> unallocated space easily, so I would recommend using it.

I agree. I think the main issue here is that there seems to be some
bug (or behavior not properly understood) with the resource manager
on Arm that returns an error when requesting a region anywhere in the
memory address space, i.e. [0, ~0].

> > (is treating them the same likely to
> > induce buggy behavior on x86?)
> 
> I will leave this answer to Roger.

x86 is already treating them the same by using xenmem_alloc to request
memory to map the grant table or foreign mappings, so there's no
change on x86 in that regard.

Maybe I'm not getting that last question right.

Roger.
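The sizing rule described earlier in this mail (range >= total host RAM minus the domain's own memory) reduces to simple arithmetic; the helper below is invented for illustration, is not part of any Xen or FreeBSD interface, and deliberately ignores the duplicated-mappings caveat:

```c
#include <stdint.h>

/*
 * Sketch of the sizing rule from the discussion: an unpopulated range
 * for foreign mappings should cover at least the host RAM that does
 * not belong to the current domain, so any page of another domain can
 * be mapped. Hypothetical helper; duplicated mappings not accounted.
 */
static uint64_t min_foreign_range_bytes(uint64_t host_ram_bytes,
					uint64_t domain_ram_bytes)
{
	if (host_ram_bytes <= domain_ram_bytes)
		return 0;
	return host_ram_bytes - domain_ram_bytes;
}
```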


From xen-devel-bounces@lists.xenproject.org Fri May 14 10:30:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 10:30:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127326.239282 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhV4y-00006q-KC; Fri, 14 May 2021 10:30:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127326.239282; Fri, 14 May 2021 10:30:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhV4y-00006j-Gw; Fri, 14 May 2021 10:30:00 +0000
Received: by outflank-mailman (input) for mailman id 127326;
 Fri, 14 May 2021 10:29:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Bqrj=KJ=kernel.org=cmarinas@srs-us1.protection.inumbo.net>)
 id 1lhV4x-00006d-9s
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 10:29:59 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c42e3c97-8764-40f5-a4ec-2e6367d52c77;
 Fri, 14 May 2021 10:29:58 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 918BC61396;
 Fri, 14 May 2021 10:29:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c42e3c97-8764-40f5-a4ec-2e6367d52c77
Date: Fri, 14 May 2021 11:29:54 +0100
From: Catalin Marinas <catalin.marinas@arm.com>
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com,
	jgross@suse.com, hch@lst.de, will@kernel.org,
	linux-arm-kernel@lists.infradead.org,
	Stefano Stabellini <stefano.stabellini@xilinx.com>
Subject: Re: [PATCH v2 2/3] arm64: do not set SWIOTLB_NO_FORCE when swiotlb
 is required
Message-ID: <20210514102953.GA855@arm.com>
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
 <20210512201823.1963-2-sstabellini@kernel.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210512201823.1963-2-sstabellini@kernel.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed, May 12, 2021 at 01:18:22PM -0700, Stefano Stabellini wrote:
> From: Christoph Hellwig <hch@lst.de>
> 
> Although SWIOTLB_NO_FORCE is meant to allow later calls to swiotlb_init,
> today dma_direct_map_page returns error if SWIOTLB_NO_FORCE.
> 
> For now, without a larger overhaul of SWIOTLB_NO_FORCE, the best we can
> do is to avoid setting SWIOTLB_NO_FORCE in mem_init when we know that it
> is going to be required later (e.g. Xen requires it).
> 
> CC: boris.ostrovsky@oracle.com
> CC: jgross@suse.com
> CC: catalin.marinas@arm.com
> CC: will@kernel.org
> CC: linux-arm-kernel@lists.infradead.org
> Fixes: 2726bf3ff252 ("swiotlb: Make SWIOTLB_NO_FORCE perform no allocation")
> Signed-off-by: Christoph Hellwig <hch@lst.de>
> Signed-off-by: Stefano Stabellini <stefano.stabellini@xilinx.com>

Acked-by: Catalin Marinas <catalin.marinas@arm.com>


From xen-devel-bounces@lists.xenproject.org Fri May 14 11:56:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 11:56:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127332.239293 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhWQb-0008GY-Ow; Fri, 14 May 2021 11:56:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127332.239293; Fri, 14 May 2021 11:56:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhWQb-0008GR-Ll; Fri, 14 May 2021 11:56:25 +0000
Received: by outflank-mailman (input) for mailman id 127332;
 Fri, 14 May 2021 11:56:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhWQa-0008GF-B8
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 11:56:24 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d8146b39-3b69-41b9-96e2-c2199d651479;
 Fri, 14 May 2021 11:56:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6DE6EAEA6;
 Fri, 14 May 2021 11:56:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d8146b39-3b69-41b9-96e2-c2199d651479
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620993382; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=tdOBs2NkLw0YS/tdRPalc0PWkKYkTcp857mDLm1t7FE=;
	b=QOSmSUIrvYIulLhS0Lryzj2QP8HHxprHuMCYhwNjkOA6l8/T//HIO+4mKNZLtaNSNKtb3g
	vEUvhSp1c0cXCD5iwnWM/U7k799cnr3ERaocUVsqg16KasqQYEkNCEh0CxsdBVfQyKaHz3
	I/kvbjS5ThBezmzAlGu2HNJePfeDnqw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] tools/xenstore: simplify xenstored main loop
Date: Fri, 14 May 2021 13:56:18 +0200
Message-Id: <20210514115620.32731-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Small series to make the main loop of xenstored more readable.

Changes in V2:
- split into two patches
- use const
- NO_SOCKETS handling

Juergen Gross (2):
  tools/xenstore: move per connection read and write func hooks into a
    struct
  tools/xenstore: simplify xenstored main loop

 tools/xenstore/xenstored_core.c   | 121 ++++++++++++++----------------
 tools/xenstore/xenstored_core.h   |  21 +++---
 tools/xenstore/xenstored_domain.c |  15 +++-
 3 files changed, 79 insertions(+), 78 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 14 11:56:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 11:56:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127334.239315 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhWQg-0000Nk-Ac; Fri, 14 May 2021 11:56:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127334.239315; Fri, 14 May 2021 11:56:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhWQg-0000NX-7M; Fri, 14 May 2021 11:56:30 +0000
Received: by outflank-mailman (input) for mailman id 127334;
 Fri, 14 May 2021 11:56:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhWQf-0008GF-7L
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 11:56:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b132285a-3e73-4659-81bc-9120a1e39dd2;
 Fri, 14 May 2021 11:56:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 74EDFAF3B;
 Fri, 14 May 2021 11:56:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b132285a-3e73-4659-81bc-9120a1e39dd2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620993382; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JABpMKFbWwF/cBhHZRIRb/5VTXEahnZX0PdYMClipEU=;
	b=IQKz1nlZQlEG+F7cT45CH7aovSZu9S450UcKyoE6gBQFJt0KpOXhRxOEficqLga1tnVQ3e
	AgisJbGGI6oWGYp0pn8JesamC3/E/xNH5GTzU1oRo0RuSL2/HlWCpGXrx/TomK/m5/BMvw
	dLcwXx/P5StDg5K1Q/FVtXTnTvUxKks=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] tools/xenstore: move per connection read and write func hooks into a struct
Date: Fri, 14 May 2021 13:56:19 +0200
Message-Id: <20210514115620.32731-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210514115620.32731-1-jgross@suse.com>
References: <20210514115620.32731-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Put the interface-type-specific functions into their own structure and
let struct connection contain only a pointer to that new function
vector.

Don't define the socket-based functions at all in case of NO_SOCKETS
(Mini-OS).

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- split off from V1 patch (Julien Grall)
- use const qualifier (Julien Grall)
- drop socket specific case for Mini-OS (Julien Grall)
---
 tools/xenstore/xenstored_core.c   | 44 +++++++++++++------------------
 tools/xenstore/xenstored_core.h   | 19 +++++++------
 tools/xenstore/xenstored_domain.c | 13 +++++++--
 3 files changed, 38 insertions(+), 38 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 4b7b71cfb3..856f518075 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -226,8 +226,8 @@ static bool write_messages(struct connection *conn)
 				sockmsg_string(out->hdr.msg.type),
 				out->hdr.msg.len,
 				out->buffer, conn);
-		ret = conn->write(conn, out->hdr.raw + out->used,
-				  sizeof(out->hdr) - out->used);
+		ret = conn->funcs->write(conn, out->hdr.raw + out->used,
+					 sizeof(out->hdr) - out->used);
 		if (ret < 0)
 			return false;
 
@@ -243,8 +243,8 @@ static bool write_messages(struct connection *conn)
 			return true;
 	}
 
-	ret = conn->write(conn, out->buffer + out->used,
-			  out->hdr.msg.len - out->used);
+	ret = conn->funcs->write(conn, out->buffer + out->used,
+				 out->hdr.msg.len - out->used);
 	if (ret < 0)
 		return false;
 
@@ -1531,8 +1531,8 @@ static void handle_input(struct connection *conn)
 	/* Not finished header yet? */
 	if (in->inhdr) {
 		if (in->used != sizeof(in->hdr)) {
-			bytes = conn->read(conn, in->hdr.raw + in->used,
-					   sizeof(in->hdr) - in->used);
+			bytes = conn->funcs->read(conn, in->hdr.raw + in->used,
+						  sizeof(in->hdr) - in->used);
 			if (bytes < 0)
 				goto bad_client;
 			in->used += bytes;
@@ -1557,8 +1557,8 @@ static void handle_input(struct connection *conn)
 		in->inhdr = false;
 	}
 
-	bytes = conn->read(conn, in->buffer + in->used,
-			   in->hdr.msg.len - in->used);
+	bytes = conn->funcs->read(conn, in->buffer + in->used,
+				  in->hdr.msg.len - in->used);
 	if (bytes < 0)
 		goto bad_client;
 
@@ -1581,7 +1581,7 @@ static void handle_output(struct connection *conn)
 		ignore_connection(conn);
 }
 
-struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
+struct connection *new_connection(const struct interface_funcs *funcs)
 {
 	struct connection *new;
 
@@ -1591,8 +1591,7 @@ struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
 
 	new->fd = -1;
 	new->pollfd_idx = -1;
-	new->write = write;
-	new->read = read;
+	new->funcs = funcs;
 	new->is_ignored = false;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
@@ -1621,20 +1620,8 @@ struct connection *get_connection_by_id(unsigned int conn_id)
 static void accept_connection(int sock)
 {
 }
-
-int writefd(struct connection *conn, const void *data, unsigned int len)
-{
-	errno = EBADF;
-	return -1;
-}
-
-int readfd(struct connection *conn, void *data, unsigned int len)
-{
-	errno = EBADF;
-	return -1;
-}
 #else
-int writefd(struct connection *conn, const void *data, unsigned int len)
+static int writefd(struct connection *conn, const void *data, unsigned int len)
 {
 	int rc;
 
@@ -1650,7 +1637,7 @@ int writefd(struct connection *conn, const void *data, unsigned int len)
 	return rc;
 }
 
-int readfd(struct connection *conn, void *data, unsigned int len)
+static int readfd(struct connection *conn, void *data, unsigned int len)
 {
 	int rc;
 
@@ -1672,6 +1659,11 @@ int readfd(struct connection *conn, void *data, unsigned int len)
 	return rc;
 }
 
+const struct interface_funcs socket_funcs = {
+	.write = writefd,
+	.read = readfd,
+};
+
 static void accept_connection(int sock)
 {
 	int fd;
@@ -1681,7 +1673,7 @@ static void accept_connection(int sock)
 	if (fd < 0)
 		return;
 
-	conn = new_connection(writefd, readfd);
+	conn = new_connection(&socket_funcs);
 	if (conn)
 		conn->fd = fd;
 	else
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 6a6d0448e8..021e41076d 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -86,8 +86,11 @@ struct delayed_request {
 };
 
 struct connection;
-typedef int connwritefn_t(struct connection *, const void *, unsigned int);
-typedef int connreadfn_t(struct connection *, void *, unsigned int);
+
+struct interface_funcs {
+	int (*write)(struct connection *, const void *, unsigned int);
+	int (*read)(struct connection *, void *, unsigned int);
+};
 
 struct connection
 {
@@ -131,9 +134,8 @@ struct connection
 	/* My watches. */
 	struct list_head watches;
 
-	/* Methods for communicating over this connection: write can be NULL */
-	connwritefn_t *write;
-	connreadfn_t *read;
+	/* Methods for communicating over this connection. */
+	const struct interface_funcs *funcs;
 
 	/* Support for live update: connection id. */
 	unsigned int conn_id;
@@ -196,7 +198,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 struct node *read_node(struct connection *conn, const void *ctx,
 		       const char *name);
 
-struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
+struct connection *new_connection(const struct interface_funcs *funcs);
 struct connection *get_connection_by_id(unsigned int conn_id);
 void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
@@ -254,10 +256,7 @@ void finish_daemonize(void);
 /* Open a pipe for signal handling */
 void init_pipe(int reopen_log_pipe[2]);
 
-int writefd(struct connection *conn, const void *data, unsigned int len);
-int readfd(struct connection *conn, void *data, unsigned int len);
-
-extern struct interface_funcs socket_funcs;
+extern const struct interface_funcs socket_funcs;
 extern xengnttab_handle **xgt_handle;
 
 int remember_string(struct hashtable *hash, const char *str);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 0c17937c0f..f3cd56050e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -172,6 +172,11 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
 	return len;
 }
 
+static const struct interface_funcs domain_funcs = {
+	.write = writechn,
+	.read = readchn,
+};
+
 static void *map_interface(domid_t domid)
 {
 	return xengnttab_map_grant_ref(*xgt_handle, domid,
@@ -389,7 +394,7 @@ static int new_domain(struct domain *domain, int port, bool restore)
 
 	domain->introduced = true;
 
-	domain->conn = new_connection(writechn, readchn);
+	domain->conn = new_connection(&domain_funcs);
 	if (!domain->conn)  {
 		errno = ENOMEM;
 		return errno;
@@ -1288,10 +1293,14 @@ void read_state_connection(const void *ctx, const void *state)
 	struct domain *domain, *tdomain;
 
 	if (sc->conn_type == XS_STATE_CONN_TYPE_SOCKET) {
-		conn = new_connection(writefd, readfd);
+#ifdef NO_SOCKETS
+		barf("socket based connection without sockets");
+#else
+		conn = new_connection(&socket_funcs);
 		if (!conn)
 			barf("error restoring connection");
 		conn->fd = sc->spec.socket_fd;
+#endif
 	} else {
 		domain = introduce_domain(ctx, sc->spec.ring.domid,
 					  sc->spec.ring.evtchn, true);
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 14 11:56:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 11:56:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127333.239299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhWQc-0008K3-41; Fri, 14 May 2021 11:56:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127333.239299; Fri, 14 May 2021 11:56:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhWQb-0008J9-Tt; Fri, 14 May 2021 11:56:25 +0000
Received: by outflank-mailman (input) for mailman id 127333;
 Fri, 14 May 2021 11:56:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhWQb-0008GL-9G
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 11:56:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8bdca7cd-3804-4779-9596-66eec4f043cf;
 Fri, 14 May 2021 11:56:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 95F88AF75;
 Fri, 14 May 2021 11:56:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8bdca7cd-3804-4779-9596-66eec4f043cf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1620993382; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4hSSh9cATKqR0QNGAlp/GDcIoX5aJtU3EsT3hQxejts=;
	b=hxOrKdTuuyPDTAc43edyzIxCBTbJ8uruHiE8N3i8GOWa0td96loMUFMMqud5fA9b/tZT7R
	tK41GacFZf5l1bkD1HosJLeBmhlu7maRTJPE/vgWl1kKsbEDlj5BkZ1q+oIWrhLI1nb2T5
	EqIEISD0Pi3tAGxyTB0si5MkajE3FfY=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] tools/xenstore: simplify xenstored main loop
Date: Fri, 14 May 2021 13:56:20 +0200
Message-Id: <20210514115620.32731-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210514115620.32731-1-jgross@suse.com>
References: <20210514115620.32731-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The main loop of xenstored is rather complicated due to the different
handling of socket and ring-page interfaces. Unify that handling by
introducing the interface-type-specific functions can_read() and
can_write().

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- split off function vector introduction (Julien Grall)
---
 tools/xenstore/xenstored_core.c   | 77 +++++++++++++++----------------
 tools/xenstore/xenstored_core.h   |  2 +
 tools/xenstore/xenstored_domain.c |  2 +
 3 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 856f518075..883a1a582a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1659,9 +1659,34 @@ static int readfd(struct connection *conn, void *data, unsigned int len)
 	return rc;
 }
 
+static bool socket_can_process(struct connection *conn, int mask)
+{
+	if (conn->pollfd_idx == -1)
+		return false;
+
+	if (fds[conn->pollfd_idx].revents & ~(POLLIN | POLLOUT)) {
+		talloc_free(conn);
+		return false;
+	}
+
+	return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
+}
+
+static bool socket_can_write(struct connection *conn)
+{
+	return socket_can_process(conn, POLLOUT);
+}
+
+static bool socket_can_read(struct connection *conn)
+{
+	return socket_can_process(conn, POLLIN);
+}
+
 const struct interface_funcs socket_funcs = {
 	.write = writefd,
 	.read = readfd,
+	.can_write = socket_can_write,
+	.can_read = socket_can_read,
 };
 
 static void accept_connection(int sock)
@@ -2296,47 +2321,19 @@ int main(int argc, char *argv[])
 			if (&next->list != &connections)
 				talloc_increase_ref_count(next);
 
-			if (conn->domain) {
-				if (domain_can_read(conn))
-					handle_input(conn);
-				if (talloc_free(conn) == 0)
-					continue;
-
-				talloc_increase_ref_count(conn);
-				if (domain_can_write(conn) &&
-				    !list_empty(&conn->out_list))
-					handle_output(conn);
-				if (talloc_free(conn) == 0)
-					continue;
-			} else {
-				if (conn->pollfd_idx != -1) {
-					if (fds[conn->pollfd_idx].revents
-					    & ~(POLLIN|POLLOUT))
-						talloc_free(conn);
-					else if ((fds[conn->pollfd_idx].revents
-						  & POLLIN) &&
-						 !conn->is_ignored)
-						handle_input(conn);
-				}
-				if (talloc_free(conn) == 0)
-					continue;
-
-				talloc_increase_ref_count(conn);
-
-				if (conn->pollfd_idx != -1) {
-					if (fds[conn->pollfd_idx].revents
-					    & ~(POLLIN|POLLOUT))
-						talloc_free(conn);
-					else if ((fds[conn->pollfd_idx].revents
-						  & POLLOUT) &&
-						 !conn->is_ignored)
-						handle_output(conn);
-				}
-				if (talloc_free(conn) == 0)
-					continue;
+			if (conn->funcs->can_read(conn))
+				handle_input(conn);
+			if (talloc_free(conn) == 0)
+				continue;
 
-				conn->pollfd_idx = -1;
-			}
+			talloc_increase_ref_count(conn);
+
+			if (conn->funcs->can_write(conn))
+				handle_output(conn);
+			if (talloc_free(conn) == 0)
+				continue;
+
+			conn->pollfd_idx = -1;
 		}
 
 		if (delayed_requests) {
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 021e41076d..c6e04c0708 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -90,6 +90,8 @@ struct connection;
 struct interface_funcs {
 	int (*write)(struct connection *, const void *, unsigned int);
 	int (*read)(struct connection *, void *, unsigned int);
+	bool (*can_write)(struct connection *);
+	bool (*can_read)(struct connection *);
 };
 
 struct connection
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f3cd56050e..708bf68af0 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -175,6 +175,8 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
 static const struct interface_funcs domain_funcs = {
 	.write = writechn,
 	.read = readchn,
+	.can_write = domain_can_write,
+	.can_read = domain_can_read,
 };
 
 static void *map_interface(domid_t domid)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Fri May 14 13:49:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 13:49:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127352.239325 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYCG-00030M-4i; Fri, 14 May 2021 13:49:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127352.239325; Fri, 14 May 2021 13:49:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYCG-00030F-1g; Fri, 14 May 2021 13:49:44 +0000
Received: by outflank-mailman (input) for mailman id 127352;
 Fri, 14 May 2021 13:49:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhYCF-000305-Ec; Fri, 14 May 2021 13:49:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhYCF-0001Tm-9u; Fri, 14 May 2021 13:49:43 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhYCE-00071t-VP; Fri, 14 May 2021 13:49:43 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhYCE-00014U-Ue; Fri, 14 May 2021 13:49:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Lt+EBHTpH+QJvgcMBbMtpLeJpR3wdMF9GN7UDfMBtSk=; b=6NuIk/N06+ANIQMUfYWYYDFj1g
	H+I4ntvRbKyt8D6/1eNKjouYMmwzEoPu7QCHDIF7p4OUHuxRc3bHXS5gDiiBVRJDmQughHDyBG6K7
	M532pQSkHJH6Zga0OW0mMVFFEeKtZnTuYw3BK930/IgPj0hyiW34+IVjvrIAj2XzA4LM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161941-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161941: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=dab59ce031228066eb95a9c518846fcacfb0dbbf
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 13:49:42 +0000

flight 161941 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161941/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                dab59ce031228066eb95a9c518846fcacfb0dbbf
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  267 days
Failing since        152659  2020-08-21 14:07:39 Z  265 days  486 attempts
Testing same since   161938  2021-05-13 18:40:23 Z    0 days    2 attempts

------------------------------------------------------------
499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 151427 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 14 13:50:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 13:50:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127355.239339 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYD1-0004Fa-G1; Fri, 14 May 2021 13:50:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127355.239339; Fri, 14 May 2021 13:50:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYD1-0004FT-Cz; Fri, 14 May 2021 13:50:31 +0000
Received: by outflank-mailman (input) for mailman id 127355;
 Fri, 14 May 2021 13:50:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6p66=KJ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lhYCz-0004FG-LN
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 13:50:29 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d260cf7-2d2a-4e01-91df-7053fb5422b7;
 Fri, 14 May 2021 13:50:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d260cf7-2d2a-4e01-91df-7053fb5422b7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621000227;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=AHbYOkcy9kIAQ0eSbuae9Vtd8wItzsZRRWfrgcnuRwM=;
  b=WgjS+mtj7Xn6RIeW+8iH0UkFp7KPvhV2gBgy+ZWMiw7IpSKRJYA6IrMt
   hnk5+eqV78bC4Y6iR0sCDKp7Vxc/uN4bfGqmosmByjxAaBF0GKTj3h31V
   MZgDpYl6n5DM35wmFzyEscZIh4t+bCm1oCFxA2YyVcVwPfT+s1Cf2QVL/
   M=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: jSDKHFZxzm0DRZ9o5cjJEYV4IvsDTi41a7Oi0RsbjfnP/iFnPWGq3LGkNsyqFD0S3I20pvaKZ+
 82AHTfwZ7JBsqtZrxk4I9jwe642/X4LUWkdtoGobx0K4oXc8fJjQvVuWucn7/Qk9ZhNFgfLf4Z
 GuDAyI0NoHDVNlrj/VFarLUREHviDXzBDJQ4Ni9sMpTEAC8RMvUollUIwtxltGIKUGNNCK/Dvs
 Hyc3E9RSqE2L+x0UdFcvSPG13O9Ir3Td3DkYuosukE/omIOzutnZWwEfwvG6YHP10k5uurnRqn
 eXA=
X-SBRS: 5.1
X-MesageID: 43804762
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:eDm6r6s8vK+FabW3I5EL2xq/7skDgNV00zEX/kB9WHVpm6yj+v
 xGUs566faUskd2ZJhEo7q90ca7Lk80maQa3WBVB8bBYOCEghrOEGgB1/qA/9SIIUSXmtK1l5
 0QFpSWYOeaMbEQt7ef3ODXKbcdKNnsytHWuQ/dpU0dMz2DvctbnnZE4gXwKDwHeOFfb6BJba
 Z1fqB81kedkXJ8VLXCOlA1G9Ltivfsj5zcbRsPF3ccmXWzZWPB0s+AL/CAtC1uKQ9y/Q==
X-IronPort-AV: E=Sophos;i="5.82,299,1613451600"; 
   d="scan'208";a="43804762"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OuJ1ZiMuYx7HIJoWtb5STZGijngnZhMIhEXNEXorD+dG9gBgKhC6D5mYCvhyhnscaV76cEzwFBbgtnLgZXdSxujAONnlcrBG3dE0IsK93JLByt2TJ+G/WSKaLTvPJgJqTnl68ttXm+H0CIDcgChzPamA/0ZV18+tO/c+iOmdTufFYJjpoCdbPWVkpli4bULILl3k9tQ8W2CXbasGso0R3Morg0sQQj+0hoW/Ph+pBrkccfqrxzAcV/0NsWhfEW0V6fhjVdvI5ielZ4ebfyFmI36SoNykVS8woriSS5IDy9rgNbTXoSdfzv0RzIIXocgsYTykKo/cfVbenWzQLaC5gQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aW6BBD0PtckF53M/0IbS19dxoVxv8BX67XijzZUAxD4=;
 b=f4Il/l3fiXbj9tjj+PFKf5KDoOVU+8NzbKPh/ucmy/OmKqPH66mnEjjc7re1Fjvxv9LMsqbEGUXyoQMtALI4EsDNT3OST6Axgr1JXuJOdoftpdVbuIaY74tAVXnXQe9h7+7QjMGhXHkLdLJlfIYxHIuDYw9syhyggG6hGYruoKvrg12gwBa87YgmQUlVkx957Eu5mh+29WeOTn7z5Vt56Nq8WfleP8EypHklW/MdmNqZmaX819MVeh3vwTHyzok87yBhfIKmcGbJbJsrR4diANdg4Q8NSIq6Q1XO9XPWD9gM8T9Z74u0ct7LR/ElPPDDcUvrtk7F3nh2ThJZ9JA7Yg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=aW6BBD0PtckF53M/0IbS19dxoVxv8BX67XijzZUAxD4=;
 b=Q3566cpx3neFGaUeelX8hEMwi+b+iLBc3k39ERS5LuVXa8rEqPd1MJNyAG/5dUIYxSlkKdPf9BwHdgYn74NafJ6cs9wTr0UA07UGhngGowKVTkxLqu/rLDlFAWVTZjvZlb6Jfzk8D9Syb6UpS0GtYVxMDtMFbbO81dvFcPTWg88=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, George
 Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH] libelf: improve PVH elfnote parsing
Date: Fri, 14 May 2021 15:50:14 +0200
Message-ID: <20210514135014.78389-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0150.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:1::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e81be80e-3b9b-448c-95a5-08d916df391b
X-MS-TrafficTypeDiagnostic: DM6PR03MB4970:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4970ACBDB38679280792F2AB8F509@DM6PR03MB4970.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: e81be80e-3b9b-448c-95a5-08d916df391b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 14 May 2021 13:50:24.2487
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: BAH5xKCROraJhvqIxMdCTrFP0YxWV9awRdZ3PCTD4kMBvpoHAVhD2K5TbGNkEs9umh4qIp7iBpYefFaTZTdFOg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4970
X-OriginatorOrg: citrix.com

Pass an hvm boolean parameter to the ELF note parsing and checking
routines, so that stricter checks can be applied when libelf is
dealing with an HVM container.

elf_xen_note_check shouldn't return early unless PHYS32_ENTRY is set
and the container is of type HVM; otherwise the loader and version
checks would be skipped for kernels intended to be booted as PV that
also happen to set PHYS32_ENTRY.

Adjust elf_xen_addr_calc_check so that the virtual addresses are
actually physical ones (by setting virt_base and elf_paddr_offset to
zero) when the container is of type HVM, as that container is always
started with paging disabled.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 tools/fuzz/libelf/libelf-fuzzer.c   |  3 ++-
 tools/libs/guest/xg_dom_elfloader.c |  6 ++++--
 tools/libs/guest/xg_dom_hvmloader.c |  2 +-
 xen/arch/x86/hvm/dom0_build.c       |  2 +-
 xen/arch/x86/pv/dom0_build.c        |  2 +-
 xen/common/libelf/libelf-dominfo.c  | 25 +++++++++++++++----------
 xen/include/xen/libelf.h            |  2 +-
 7 files changed, 25 insertions(+), 17 deletions(-)

diff --git a/tools/fuzz/libelf/libelf-fuzzer.c b/tools/fuzz/libelf/libelf-fuzzer.c
index 1ba85717114..84fb84720fa 100644
--- a/tools/fuzz/libelf/libelf-fuzzer.c
+++ b/tools/fuzz/libelf/libelf-fuzzer.c
@@ -17,7 +17,8 @@ int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
         return -1;
 
     elf_parse_binary(elf);
-    elf_xen_parse(elf, &parms);
+    elf_xen_parse(elf, &parms, false);
+    elf_xen_parse(elf, &parms, true);
 
     return 0;
 }
diff --git a/tools/libs/guest/xg_dom_elfloader.c b/tools/libs/guest/xg_dom_elfloader.c
index 06e713fe111..ad71163dd92 100644
--- a/tools/libs/guest/xg_dom_elfloader.c
+++ b/tools/libs/guest/xg_dom_elfloader.c
@@ -135,7 +135,8 @@ static elf_negerrnoval xc_dom_probe_elf_kernel(struct xc_dom_image *dom)
      * or else we might be trying to load a plain ELF.
      */
     elf_parse_binary(&elf);
-    rc = elf_xen_parse(&elf, dom->parms);
+    rc = elf_xen_parse(&elf, dom->parms,
+                       dom->container_type == XC_DOM_HVM_CONTAINER);
     if ( rc != 0 )
         return rc;
 
@@ -166,7 +167,8 @@ static elf_negerrnoval xc_dom_parse_elf_kernel(struct xc_dom_image *dom)
 
     /* parse binary and get xen meta info */
     elf_parse_binary(elf);
-    if ( elf_xen_parse(elf, dom->parms) != 0 )
+    if ( elf_xen_parse(elf, dom->parms,
+                       dom->container_type == XC_DOM_HVM_CONTAINER) != 0 )
     {
         rc = -EINVAL;
         goto out;
diff --git a/tools/libs/guest/xg_dom_hvmloader.c b/tools/libs/guest/xg_dom_hvmloader.c
index ec6ebad7fd5..3a63b23ba39 100644
--- a/tools/libs/guest/xg_dom_hvmloader.c
+++ b/tools/libs/guest/xg_dom_hvmloader.c
@@ -73,7 +73,7 @@ static elf_negerrnoval xc_dom_probe_hvm_kernel(struct xc_dom_image *dom)
      * else we might be trying to load a PV kernel.
      */
     elf_parse_binary(&elf);
-    rc = elf_xen_parse(&elf, dom->parms);
+    rc = elf_xen_parse(&elf, dom->parms, true);
     if ( rc == 0 )
         return -EINVAL;
 
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 878dc1d808e..c24b9efdb0a 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -561,7 +561,7 @@ static int __init pvh_load_kernel(struct domain *d, const module_t *image,
     elf_set_verbose(&elf);
 #endif
     elf_parse_binary(&elf);
-    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    if ( (rc = elf_xen_parse(&elf, &parms, true)) != 0 )
     {
         printk("Unable to parse kernel for ELFNOTES\n");
         return rc;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index e0801a9e6d1..af47615b226 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -353,7 +353,7 @@ int __init dom0_construct_pv(struct domain *d,
         elf_set_verbose(&elf);
 
     elf_parse_binary(&elf);
-    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    if ( (rc = elf_xen_parse(&elf, &parms, false)) != 0 )
         goto out;
 
     /* compatibility check */
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index 69c94b6f3bb..584be0f6fb2 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -360,7 +360,7 @@ elf_errorstatus elf_xen_parse_guest_info(struct elf_binary *elf,
 /* sanity checks                                                            */
 
 static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
-                              struct elf_dom_parms *parms)
+                              struct elf_dom_parms *parms, bool hvm)
 {
     if ( (ELF_PTRVAL_INVALID(parms->elf_note_start)) &&
          (ELF_PTRVAL_INVALID(parms->guest_info)) )
@@ -382,7 +382,7 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
     }
 
     /* PVH only requires one ELF note to be set */
-    if ( parms->phys_entry != UNSET_ADDR32 )
+    if ( parms->phys_entry != UNSET_ADDR32 && hvm )
     {
         elf_msg(elf, "ELF: Found PVH image\n");
         return 0;
@@ -414,7 +414,7 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
 }
 
 static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
-                                   struct elf_dom_parms *parms)
+                                   struct elf_dom_parms *parms, bool hvm)
 {
     uint64_t virt_offset;
 
@@ -426,7 +426,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
     }
 
     /* Initial guess for virt_base is 0 if it is not explicitly defined. */
-    if ( parms->virt_base == UNSET_ADDR )
+    if ( parms->virt_base == UNSET_ADDR || hvm )
     {
         parms->virt_base = 0;
         elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
@@ -442,7 +442,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
      * If we are using the modern ELF notes interface then the default
      * is 0.
      */
-    if ( parms->elf_paddr_offset == UNSET_ADDR )
+    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
     {
         if ( parms->elf_note_start )
             parms->elf_paddr_offset = 0;
@@ -456,8 +456,13 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
     parms->virt_kstart = elf->pstart + virt_offset;
     parms->virt_kend   = elf->pend   + virt_offset;
 
-    if ( parms->virt_entry == UNSET_ADDR )
-        parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
+    if ( parms->virt_entry == UNSET_ADDR || hvm )
+    {
+        if ( parms->phys_entry != UNSET_ADDR32 )
+            parms->virt_entry = parms->phys_entry;
+        else
+            parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
+    }
 
     if ( parms->bsd_symtab )
     {
@@ -499,7 +504,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
 /* glue it all together ...                                                 */
 
 elf_errorstatus elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms)
+                  struct elf_dom_parms *parms, bool hvm)
 {
     ELF_HANDLE_DECL(elf_shdr) shdr;
     ELF_HANDLE_DECL(elf_phdr) phdr;
@@ -594,9 +599,9 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
         }
     }
 
-    if ( elf_xen_note_check(elf, parms) != 0 )
+    if ( elf_xen_note_check(elf, parms, hvm) != 0 )
         return -1;
-    if ( elf_xen_addr_calc_check(elf, parms) != 0 )
+    if ( elf_xen_addr_calc_check(elf, parms, hvm) != 0 )
         return -1;
     return 0;
 }
diff --git a/xen/include/xen/libelf.h b/xen/include/xen/libelf.h
index b73998150fc..be47b0cc366 100644
--- a/xen/include/xen/libelf.h
+++ b/xen/include/xen/libelf.h
@@ -454,7 +454,7 @@ int elf_xen_parse_note(struct elf_binary *elf,
 int elf_xen_parse_guest_info(struct elf_binary *elf,
                              struct elf_dom_parms *parms);
 int elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms);
+                  struct elf_dom_parms *parms, bool hvm);
 
 static inline void *elf_memcpy_unchecked(void *dest, const void *src, size_t n)
     { return memcpy(dest, src, n); }
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 13:56:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 13:56:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127364.239351 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYIT-00054W-9P; Fri, 14 May 2021 13:56:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127364.239351; Fri, 14 May 2021 13:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYIT-00054P-6F; Fri, 14 May 2021 13:56:09 +0000
Received: by outflank-mailman (input) for mailman id 127364;
 Fri, 14 May 2021 13:56:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=sDpF=KJ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lhYIR-00054J-96
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 13:56:07 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 161ba445-aa20-48e8-b9ef-cae014c0dcbc;
 Fri, 14 May 2021 13:56:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1C029AC47;
 Fri, 14 May 2021 13:56:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 161ba445-aa20-48e8-b9ef-cae014c0dcbc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621000564; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=ogAYC34f/q7VMRbs3ZtWRqAlocqeIndtCfA4GK6PXnU=;
	b=PV+zv1nGbB7gW6V1V+iooWmMtNw2Am+yRWbCNRZLjxXaD0SEqscckKZ4LoTyURLd+SkwRS
	QWVLMcvBiFrvOc0RoKZ6pvKi2UYrxzW3dSmMDMbPX4Py7hAZ+aUJiFmeSVG9U3B85RxMBM
	8x4qJlB39isly3Eq1kcpHYk0ajHWE2k=
Subject: Re: [PATCH v2 0/3] swiotlb-xen init fixes
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: boris.ostrovsky@oracle.com, hch@lst.de
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
From: Juergen Gross <jgross@suse.com>
Message-ID: <1cbce1dc-bf1f-2448-f839-47a4e06f43f0@suse.com>
Date: Fri, 14 May 2021 15:56:03 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="yHdcDRAVbxfU5I8D2hPUiwkn1l2CwsVnE"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--yHdcDRAVbxfU5I8D2hPUiwkn1l2CwsVnE
Content-Type: multipart/mixed; boundary="aaQUglZ92RYiMKAAjAHv2FbcOlIBdsWJI";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel@lists.xenproject.org
Cc: boris.ostrovsky@oracle.com, hch@lst.de
Message-ID: <1cbce1dc-bf1f-2448-f839-47a4e06f43f0@suse.com>
Subject: Re: [PATCH v2 0/3] swiotlb-xen init fixes
References: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2105121313060.5018@sstabellini-ThinkPad-T480s>

--aaQUglZ92RYiMKAAjAHv2FbcOlIBdsWJI
Content-Type: multipart/mixed;
 boundary="------------83F0EB9215915B02E79026A1"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------83F0EB9215915B02E79026A1
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 12.05.21 22:18, Stefano Stabellini wrote:
> Hi all,
> 
> This short patch series comes with a preparation patch and 2 unrelated
> fixes to swiotlb-xen initialization.
> 
> 
> Christoph Hellwig (1):
>        arm64: do not set SWIOTLB_NO_FORCE when swiotlb is required
> 
> Stefano Stabellini (2):
>        xen/arm: move xen_swiotlb_detect to arm/swiotlb-xen.h
>        xen/swiotlb: check if the swiotlb has already been initialized
> 
>   arch/arm/xen/mm.c             | 20 +++++++-------------
>   arch/arm64/mm/init.c          |  3 ++-
>   drivers/xen/swiotlb-xen.c     |  5 +++++
>   include/xen/arm/swiotlb-xen.h | 15 ++++++++++++++-
>   4 files changed, 28 insertions(+), 15 deletions(-)
> 

Series pushed to xen/tip.git for-linus-5.13b


Juergen

--------------83F0EB9215915B02E79026A1
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------83F0EB9215915B02E79026A1--

--aaQUglZ92RYiMKAAjAHv2FbcOlIBdsWJI--

--yHdcDRAVbxfU5I8D2hPUiwkn1l2CwsVnE
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCegXMFAwAAAAAACgkQsN6d1ii/Ey8f
uQf/WkB4isD+ugClXbVzv7vjbJIe7xf8jNseT0HV4SJgSR8QbnIpkHDnD6De5SCEHR5+SN7zbWzO
Zg8qhc0clN6nBUxijEb6iSVGNxxPqoRCNTcqtuDdk1DbYLSpic/IeoS7IjtJ3aXzq5wITAc2zZ42
DwiMEOTZrkL4FgNb5Cx1Wt6lUAe7lhpCcHcXtHAr/5FlvzwQVBJsD04Kf2RHOUFaL5RT0d9TVsH1
vN/iIqo34jOBXsNEd3PBMLx9/q9iGhQm0E68b4iDXppdZKcMQYCUpxBckn2vsKatIDatr/BImwl7
rWWkFzA7uDGP92N+KT17iPHnMbsmqA8Vm77O+dKETg==
=RoaB
-----END PGP SIGNATURE-----

--yHdcDRAVbxfU5I8D2hPUiwkn1l2CwsVnE--


From xen-devel-bounces@lists.xenproject.org Fri May 14 13:57:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 13:57:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127367.239361 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYJl-0005eD-JR; Fri, 14 May 2021 13:57:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127367.239361; Fri, 14 May 2021 13:57:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYJl-0005e6-GO; Fri, 14 May 2021 13:57:29 +0000
Received: by outflank-mailman (input) for mailman id 127367;
 Fri, 14 May 2021 13:57:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhYJj-0005dx-P1
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 13:57:27 +0000
Received: from mail-io1-xd2c.google.com (unknown [2607:f8b0:4864:20::d2c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 58814e96-f0f1-4688-be29-dad8ffff7f80;
 Fri, 14 May 2021 13:57:27 +0000 (UTC)
Received: by mail-io1-xd2c.google.com with SMTP id i7so20701499ioa.12
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 06:57:27 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id x14sm3041398ill.74.2021.05.14.06.57.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 May 2021 06:57:26 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 58814e96-f0f1-4688-be29-dad8ffff7f80
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=I80+X0SMixcNNEjoWnMY8lYgMp184ikKB9bKEeh4kMM=;
        b=e/XP3V/oJD8xj9XMhPnYIlaQcL2XdhSV/7/cMQDOrZJrHCVvE4j3k5/3RgqdwL9WTZ
         AFxFhS40Mrjza4i8lwfyUtHwSyQiRLR/Hjo+mUSXmWgpVpsRzHIyUYkYSaDQMyrs2hv0
         /jXRXbfhKJCHH8VbZYDFWed46KnIVxo+Y0u4GuYAAkYogM4coTb/XHMiwnR6NouGRwSl
         jnazP9hJI/O+AjkCV4LxQjBrEVPgyi7vJE1AKtYbw2yd2fEGJ4FoQVMVohAhWkTHBNmD
         IPWYuje/9neYCP5hUVVpTIctfUWdfj1WrZSQvjPLL65OECDR1obD7U51F931w7E4cMpU
         kK1A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=I80+X0SMixcNNEjoWnMY8lYgMp184ikKB9bKEeh4kMM=;
        b=Gc0v/tKnL6qvd+MYG8wd7W63TNaUrL19FVEen1DSqKGg7fbMlbw90dRglKJvwO8eCG
         erIJLxBRQHJJjsQVaxcQq2BkCv9lG5mxd3z30MlGWfUjrCWI4Qqa90LJsSJlCCnHh3Xz
         EYcE2521fdq1g1+yNoEodU/+1nU+FbNtQQmq+iYVfVjOUuB9q2Jj6qmMqWhBHOLnDic9
         fSpRruwPbzlwyFz+e9MwAdOgak5DKKIgFhAEGzIxq1CNPf6yFDiB2EHDS8jwxIBEgn6i
         W9qjyU+FwU5eJH1L8tYj3VTPRGO6B1i68+7niVofmRcsQrxME9rvc+u1wLt8B9C7k9IY
         KO4Q==
X-Gm-Message-State: AOAM530THepahTQJTpMmWJCAOQWEnDjzVTT+768ArU5j4FhyJ2PbA1RY
	wqB0wIrRGVwMxJZfvHKcjlo=
X-Google-Smtp-Source: ABdhPJzXjGdg+BBKyCIjFNC0gIohwCQWX9tFQjn0LXStxT30mkkcNqoHI+gWRsnFxAtSVbWuedelFA==
X-Received: by 2002:a02:caa3:: with SMTP id e3mr43006658jap.57.1621000646597;
        Fri, 14 May 2021 06:57:26 -0700 (PDT)
Subject: Re: [PATCH v2 4/5] xen: Add files needed for minimal riscv build
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>
References: <cover.1620965208.git.connojdavis@gmail.com>
 <c5d130b06de3d724921488387f1743d7996aac11.1620965208.git.connojdavis@gmail.com>
 <a5fd6d72-3a02-4c12-4021-bf06d0eeb174@xen.org>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <2582c509-3123-f193-0b72-bfc46e798741@gmail.com>
Date: Fri, 14 May 2021 07:57:38 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <a5fd6d72-3a02-4c12-4021-bf06d0eeb174@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 5/14/21 3:46 AM, Julien Grall wrote:
> Hi Connor,
>
> On 14/05/2021 05:17, Connor Davis wrote:
>> Add the minimum code required to get xen to build with
>> XEN_TARGET_ARCH=riscv64. It is minimal in the sense that every file and
>> function added is required for a successful build, given the .config
>> generated from riscv64_defconfig. The function implementations are just
>> stubs; actual implementations will need to be added later.
>
> Thank you for the contribution. This is quite a large patch to review.
> Could you consider splitting it into smaller ones (I think Stefano suggested
> one per header file or group of headers)? This would help with reviewing
> and with finding whether some bits can be moved into common code.
>
Ok, yes, I will work on that.

> I would be happy to help moving some of the pieces.
>
Great!
>>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>> ---
>>   config/riscv64.mk                        |   7 +
>>   xen/Makefile                             |   8 +-
>>   xen/arch/riscv/Kconfig                   |  54 ++++
>>   xen/arch/riscv/Kconfig.debug             |   0
>>   xen/arch/riscv/Makefile                  |  57 ++++
>>   xen/arch/riscv/README.source             |  19 ++
>>   xen/arch/riscv/Rules.mk                  |  13 +
>>   xen/arch/riscv/arch.mk                   |   7 +
>>   xen/arch/riscv/configs/riscv64_defconfig |  12 +
>>   xen/arch/riscv/delay.c                   |  16 +
>>   xen/arch/riscv/domain.c                  | 144 +++++++++
>>   xen/arch/riscv/domctl.c                  |  36 +++
>>   xen/arch/riscv/guestcopy.c               |  57 ++++
>>   xen/arch/riscv/head.S                    |   6 +
>>   xen/arch/riscv/irq.c                     |  78 +++++
>>   xen/arch/riscv/lib/Makefile              |   1 +
>>   xen/arch/riscv/lib/find_next_bit.c       | 284 +++++++++++++++++
>
> I quickly skimmed through the code and I think some of the files can be
> made common, such as this one.
Yep, there is quite a bit of overlap with ARM.
>
> [...]
>
>> diff --git a/xen/include/xen/domain.h b/xen/include/xen/domain.h
>> index 1708c36964..fd0b75677c 100644
>> --- a/xen/include/xen/domain.h
>> +++ b/xen/include/xen/domain.h
>> @@ -60,6 +60,7 @@ void arch_vcpu_destroy(struct vcpu *v);
>>   int map_vcpu_info(struct vcpu *v, unsigned long gfn, unsigned offset);
>>   void unmap_vcpu_info(struct vcpu *v);
>>   +struct xen_domctl_createdomain;
>
> This is needed because?
>
The build was failing without it. With the one commit-per-file approach 
we can probably avoid this.


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Fri May 14 14:02:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 14:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127372.239373 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYOY-00077V-6p; Fri, 14 May 2021 14:02:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127372.239373; Fri, 14 May 2021 14:02:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYOY-00077O-2I; Fri, 14 May 2021 14:02:26 +0000
Received: by outflank-mailman (input) for mailman id 127372;
 Fri, 14 May 2021 14:02:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhYOX-00077I-39
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 14:02:25 +0000
Received: from mail-il1-x12f.google.com (unknown [2607:f8b0:4864:20::12f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d2a19cb-8365-4b0d-b450-7620496b5d4b;
 Fri, 14 May 2021 14:02:24 +0000 (UTC)
Received: by mail-il1-x12f.google.com with SMTP id j30so4012142ila.5
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 07:02:24 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id h14sm3092234ils.13.2021.05.14.07.02.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 May 2021 07:02:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d2a19cb-8365-4b0d-b450-7620496b5d4b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=uCLR6cY/lL8p7GrHVOGJbJLWtyXk2C9x9HUgrPSWLvo=;
        b=eGJr9DvMrrfXRbrjjysZHqyij7bTdRuRfBMgAAe0pxdpsFlfQbw2YLvEBv2HwufKhd
         v+NhGYafD4mXBqiFMuHJaYA3+M4XmGtaeZTrCne8901nXVrruu7PWu3e5TWI8mdhfftd
         fqJ3Nro6nxGvWrYvRZb53iDoArNk5tU+lAMsjQ9NMmE7mMnLVs2kxO7W5DqB4WvXKiuG
         5+Nu/uEVTtRNowBBM2lTBa5mWMy+PFe82IOJcxNbrUKms9XkmIbpA94ozNAJLd6GHWz4
         Ya6uoVzWWPsaHvsnUEcPL3m60NviFOIRqwkT1MvTgeIThkzhlDcHUMORXXBmbIGU99IA
         QThA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=uCLR6cY/lL8p7GrHVOGJbJLWtyXk2C9x9HUgrPSWLvo=;
        b=unWCW3FhunjcZ3fMPOevylMUoJDgovkbOnJ4NnjBzd1MP1cOMU3l5BQc4hGZOezRvk
         NqpBnSIJhWCcRD03eixDQ6vELqPW5DultdQaJ2yuiFM4j0jRVaS9kdJQ306e+qntdmB4
         YcjlDxzmGkrJdwkgLlog37gGGjgiiK6PQ5a9wP0mWA2cLiQYZbOhtZ27/eW8kzPXHn/y
         hzoZAIIFOZi1Vn2oh5hSjMq3Kez/1f9Bh/8nEV/rEslTnjUGbdRjp2aJxyfNXPSqOl0O
         DzwiweE2xS3cHCNl9yS/Ui+AGgrATaoFO4n3Kta7kRd42+IwnqesnMH7/KK9OKiO9tW5
         D5uw==
X-Gm-Message-State: AOAM532XEb0SrpQ2y5rqQOthLtV8riO9QCkDjlqtfFEWX5iKu3t/vapX
	6rpEaKJ4dYt6j+Ia0TATPF4=
X-Google-Smtp-Source: ABdhPJx1LV9n+PI9gZBXokDgcuoYKWqQFxuABdKAmUOVHdVBGouxSjgjVeZ2t2gwaUbKB+ZzAJ21ug==
X-Received: by 2002:a05:6e02:48b:: with SMTP id b11mr29413540ils.110.1621000943788;
        Fri, 14 May 2021 07:02:23 -0700 (PDT)
Subject: Re: [PATCH v2 0/5] Minimal build for RISCV
To: Alistair Francis <alistair23@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>,
 Doug Goldstein <cardoe@cardoe.com>
References: <cover.1620965208.git.connojdavis@gmail.com>
 <CAKmqyKN1+we16d3AkYg9GLXxic-Y=JZKdjqHfE5JRJTaGmaaHw@mail.gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <3d2e6213-3b38-1329-cb78-0e964577eb84@gmail.com>
Date: Fri, 14 May 2021 08:02:35 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <CAKmqyKN1+we16d3AkYg9GLXxic-Y=JZKdjqHfE5JRJTaGmaaHw@mail.gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/13/21 10:43 PM, Alistair Francis wrote:
> On Fri, May 14, 2021 at 2:18 PM Connor Davis <connojdavis@gmail.com> wrote:
>> Hi all,
>>
>> This series introduces a minimal build for RISCV. It is based on Bobby's
>> previous work from last year[0]. I have worked to rebase onto current Xen,
>> as well as update the various header files borrowed from Linux.
>>
>> This series provides the patches necessary to get a minimal build
>> working. The build is "minimal" in the sense that 1) it uses a
>> minimal config and 2) files, functions, and variables are included if
>> and only if they are required for a successful build based on the
>> config. It doesn't run at all, as the functions just have stub
>> implementations.
>>
>> My hope is that this can serve as a useful example for future ports as
>> well as inform the community of exactly what is imposed by common code
>> onto new architectures.
>>
>> The first 3 patches are mods to non-RISCV bits that enable building a
>> config with:
>>
>>    !CONFIG_HAS_NS16550
>>    !CONFIG_HAS_PASSTHROUGH
>>    !CONFIG_GRANT_TABLE
>>
>> respectively. The fourth patch adds the RISCV files, and the last patch
>> adds a docker container for doing the build. To build from the docker
>> container (after creating it locally), you can run the following:
>>
>>    $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen
>>
>> The sources taken from Linux are documented in arch/riscv/README.sources.
>> There were also some files copied from arm:
>>
>>    asm-arm/softirq.h
>>    asm-arm/random.h
>>    asm-arm/nospec.h
>>    asm-arm/numa.h
>>    asm-arm/p2m.h
>>    asm-arm/delay.h
>>    asm-arm/debugger.h
>>    asm-arm/desc.h
>>    asm-arm/guest_access.h
>>    asm-arm/hardirq.h
>>    lib/find_next_bit.c
>>
>> I imagine some of these will want some consolidation, but I put them
>> under the respective RISCV directories for now.
> Awesome!
>
> Do you have a public branch I could pull these from to try out?
>
> Alistair
Yes you can find the latest here: 
https://gitlab.com/connojd/xen/-/commits/riscv-build


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Fri May 14 14:06:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 14:06:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127375.239383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYSP-0007na-Mb; Fri, 14 May 2021 14:06:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127375.239383; Fri, 14 May 2021 14:06:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYSP-0007nT-Jd; Fri, 14 May 2021 14:06:25 +0000
Received: by outflank-mailman (input) for mailman id 127375;
 Fri, 14 May 2021 14:06:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhYSO-0007nN-MW
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 14:06:24 +0000
Received: from mail-il1-x135.google.com (unknown [2607:f8b0:4864:20::135])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc75dba8-8282-4ae5-b591-f51a0c9a032b;
 Fri, 14 May 2021 14:06:23 +0000 (UTC)
Received: by mail-il1-x135.google.com with SMTP id w7so9182911ilg.13
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 07:06:23 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id q18sm3094808ile.33.2021.05.14.07.06.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 May 2021 07:06:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc75dba8-8282-4ae5-b591-f51a0c9a032b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=p778h4mDEFACoY3jLIwbbMe6YCGbmf3vLibN4Bx2pk4=;
        b=nC8L4dQI89t4hzo3h+eXBqlhwLeHOghclMnHjZf+MAel5fuBM81cHhC1O25S5hW/il
         y96X3yCwwkPOdCOdVOy2asoa6gkK69mX1zff9NAlBLl6K+ClrrnSIn1pVwCH6EoKLNMJ
         kMe5RzAkqe7zgOFW/n9XniT6aIBB+GbQDxNcqMhouZEzOJyZo7/0X44tVsriScZCPh3D
         eZXSjREmV6hTeRWC6KuC+K3z85BfiZDhfLKc5qkOCbPNf65/8M4wf4Bcl1r7d+8VypAA
         0B6vBnLdxAC1qVfWxaecdHwSerGPqiAm9NE0QV+Oj6VJuXDBQL3C90Bk3+BJ5otjkZgr
         6r1A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=p778h4mDEFACoY3jLIwbbMe6YCGbmf3vLibN4Bx2pk4=;
        b=tA/CQcjD3k9xDCK6Kq8J21Z+d6NOnpyraC9IA/bsor6YbCLbW9EPZVABC75khTZ22c
         9HC1Cn3taki1Eu1K9YBbefQrdcD+eAqQmUWRcq3rwSSTbnYKb3FCt91L+29CnffUKZlT
         yksBppX+ihZWc8HYPSk8V+4vlKQETB8mTIZy0XHqJoZzzUMi0i8kx3gAmTy77otJqSio
         cE5S9/wdLy+c6JKFb1Z66B8OtAZQu51ESvp8BTh8xjdTgYp8Pg+AsQ1pW+z4o7041rpB
         WyEGMBn4dGnB0g6gV5KRJU+6A7yMofXNpfDd8oDefFn0lg9Hof5ro1zFjPmAWbx8NLqQ
         PJog==
X-Gm-Message-State: AOAM531XYhyScIKGSqBCP/8RX/+UUV4fsnf2J11IXHOUV68gJBfoZdDZ
	4jpAf5eR8ew4lMKiZIoQQdY=
X-Google-Smtp-Source: ABdhPJzjePt6YwBAYgds4Vy4ai2dQpEtXYwz25hrD40aPq72dVFwZx8fW0JmXwVyeb7KNXlrQUn6AQ==
X-Received: by 2002:a92:ce90:: with SMTP id r16mr41615408ilo.220.1621001183633;
        Fri, 14 May 2021 07:06:23 -0700 (PDT)
Subject: Re: [PATCH v2 2/4] xen: Export dbgp functions when CONFIG_XEN_DOM0 is
 enabled
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Juergen Gross <jgross@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
 linux-usb@vger.kernel.org
References: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
 <291659390aff63df7c071367ad4932bf41e11aef.1620952511.git.connojdavis@gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <236c31fe-2373-be23-bed4-48012a6a9765@gmail.com>
Date: Fri, 14 May 2021 08:06:36 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <291659390aff63df7c071367ad4932bf41e11aef.1620952511.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US

Adding Greg and linux-usb

On 5/13/21 6:56 PM, Connor Davis wrote:
> Export xen_dbgp_reset_prep and xen_dbgp_external_startup
> when CONFIG_XEN_DOM0 is defined. This allows use of these symbols
> even if CONFIG_EARLY_PRINTK_DBGP is defined.
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> Acked-by: Juergen Gross <jgross@suse.com>
> ---
>   drivers/xen/dbgp.c | 2 +-
>   1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/drivers/xen/dbgp.c b/drivers/xen/dbgp.c
> index cfb5de31d860..fef32dd1a5dc 100644
> --- a/drivers/xen/dbgp.c
> +++ b/drivers/xen/dbgp.c
> @@ -44,7 +44,7 @@ int xen_dbgp_external_startup(struct usb_hcd *hcd)
>   	return xen_dbgp_op(hcd, PHYSDEVOP_DBGP_RESET_DONE);
>   }
>   
> -#ifndef CONFIG_EARLY_PRINTK_DBGP
> +#if defined(CONFIG_XEN_DOM0) || !defined(CONFIG_EARLY_PRINTK_DBGP)
>   #include <linux/export.h>
>   EXPORT_SYMBOL_GPL(xen_dbgp_reset_prep);
>   EXPORT_SYMBOL_GPL(xen_dbgp_external_startup);


From xen-devel-bounces@lists.xenproject.org Fri May 14 14:06:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 14:06:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127377.239395 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYSv-0008L1-11; Fri, 14 May 2021 14:06:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127377.239395; Fri, 14 May 2021 14:06:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYSu-0008Ku-Sr; Fri, 14 May 2021 14:06:56 +0000
Received: by outflank-mailman (input) for mailman id 127377;
 Fri, 14 May 2021 14:06:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhYSu-0008Kk-14
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 14:06:56 +0000
Received: from mail-il1-x136.google.com (unknown [2607:f8b0:4864:20::136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3614fd90-b3f6-4415-8d89-29236d4a0ad5;
 Fri, 14 May 2021 14:06:55 +0000 (UTC)
Received: by mail-il1-x136.google.com with SMTP id v13so25879222ilj.8
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 07:06:55 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id t7sm3068586ilq.34.2021.05.14.07.06.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 May 2021 07:06:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3614fd90-b3f6-4415-8d89-29236d4a0ad5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=R4gCuqdbZR815nXx2XVEAAsDO0qJ8mZy8Tzf13PeXRw=;
        b=IETsefp/ddaBv67PB9DwzcBjShhaX873XycpoRKkMCKFRHECqeDTtDzXPsmph0KL/0
         0EW7sCQ5V7cAmGYnLtPzplJ+8EhLBdwC2joA93USf76Bxa/HyWuVhzRtjCXUyx7wvWcj
         B2weCbs7gGz0br63woBP633R7m2JHQ2VUnq9+nrdA96L7KVmuT1U9iuzxI/1mfEc+ri+
         WgPxdLhDpQwuubRZ6I1SUEdnlbt394yfkm3EF3KwjEB9A0CW8JKyUIhVjlsQMZkQtWQb
         QBliIPfHpcgfUkeFkcO43fbqNllnV6uBY3BFGpj+DcDdaUlDpQMWV2PCTRqNruPwZRH1
         sFyQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=R4gCuqdbZR815nXx2XVEAAsDO0qJ8mZy8Tzf13PeXRw=;
        b=s/dc0KqXUo/0NppOQaRbzn1oK1vr7LHlYK3thJOTk7Bi0vRMHNjS7nqq0XdB35HqBn
         z8TWrQmDaRZJHvMf1XJiBkUxA1iPrBCMs2Ynxpe628wor48I+Xf80wdlturOyl5OLMDD
         2TiWsc1z4lBPXkV1qeuMWhnByAdBWypOobZqaOdRWI301eC/IwF/1Qr5+JsxAAIYWBxG
         i4iXqv79cLi+AxQaNCK1K5JH9+n5f4DEQ/df+Xq/JQkM1Q+UzxL/Bzi5hq28w0IDSH1W
         SgNttY5pHbJjYjWTkUAlYMAsuDX5afebhQQX4KWsAO4o/YKfwfwc29Yv4+8ldCABhDdd
         AFRw==
X-Gm-Message-State: AOAM5330E+H8Ecz4gLy11mr2rrj6guZHPO8u3WxNqeDU7fTCNMInMDCo
	QxvwcXBatNotvuTWT1pmW5E=
X-Google-Smtp-Source: ABdhPJylsAg5N0IK+T5+EJrrTV3WwEBwg+lAI3XbJ38wuYsMOY+RKBJs4s+/eBw/u02YpGo+A49tZA==
X-Received: by 2002:a05:6e02:f42:: with SMTP id y2mr29468885ilj.216.1621001215109;
        Fri, 14 May 2021 07:06:55 -0700 (PDT)
Subject: Re: [PATCH v2 0/4] Support xen-driven USB3 debug capability
To: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Cc: linux-kernel@vger.kernel.org, linux-usb@vger.kernel.org,
 xen-devel@lists.xenproject.org, Lee Jones <lee.jones@linaro.org>,
 Jann Horn <jannh@google.com>, Chunfeng Yun <chunfeng.yun@mediatek.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>, Juergen Gross
 <jgross@suse.com>, Stefano Stabellini <sstabellini@kernel.org>,
 Mathias Nyman <mathias.nyman@intel.com>,
 Douglas Anderson <dianders@chromium.org>,
 "Eric W. Biederman" <ebiederm@xmission.com>, Petr Mladek <pmladek@suse.com>,
 Sumit Garg <sumit.garg@linaro.org>
References: <cover.1620950220.git.connojdavis@gmail.com>
 <YJ4cqntf7YdZCOPk@kroah.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <e2d96a91-3f0f-d2b3-9a1a-16caaf82c24a@gmail.com>
Date: Fri, 14 May 2021 08:07:07 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJ4cqntf7YdZCOPk@kroah.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/14/21 12:46 AM, Greg Kroah-Hartman wrote:
> On Thu, May 13, 2021 at 06:56:47PM -0600, Connor Davis wrote:
>> Hi all,
>>
>> The goal of this series is to allow the USB3 debug capability (DbC) to be
>> safely used by xen while linux runs as dom0.
> Patch 2/4 does not seem to be showing up anywhere, did it get lost?

Yep, just added you, sorry about that


Thanks,

Connor

> thanks,
>
> greg k-h


From xen-devel-bounces@lists.xenproject.org Fri May 14 14:13:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 14:13:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127394.239434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYZD-0001xF-6s; Fri, 14 May 2021 14:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127394.239434; Fri, 14 May 2021 14:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYZD-0001x8-3n; Fri, 14 May 2021 14:13:27 +0000
Received: by outflank-mailman (input) for mailman id 127394;
 Fri, 14 May 2021 14:13:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhYZC-0001x2-AV
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 14:13:26 +0000
Received: from mail-io1-xd29.google.com (unknown [2607:f8b0:4864:20::d29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40054e21-1838-49ff-9f20-688efc4ae478;
 Fri, 14 May 2021 14:13:25 +0000 (UTC)
Received: by mail-io1-xd29.google.com with SMTP id i7so20752140ioa.12
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 07:13:25 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id g20sm415084iox.44.2021.05.14.07.13.24
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 May 2021 07:13:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40054e21-1838-49ff-9f20-688efc4ae478
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=o70sDiQtQ+dDfpZ+eU3QLQfUt2kO/ieBAaoM5zoEMRk=;
        b=I3oDkRj7N+G8uClJphH8sp3CphcAUJHuonMpStGeC7ZOtm5DueOr2UWwz1mk4VOVFJ
         hgjz9Jo3qcPm2tLziHJGrWfMJJK5WEfWxJ6oSzqQrvqgB22wiU7t/PR+45R6N3AWzhTI
         LSVyMAwFfx6JGpM4fEMFFCmJh7hq6aQ7vwlZB2R+iXmYYujHjEgp+YsbeouvNMq1qIm1
         ls965ce58FGjCDQXbhYO3JzOMH7/U5HaObkgrv6Wzl29/c0uCUQun4+/5sS1S7/OkUAS
         iOPitaLnFGZnEkd/2iHcKNemDb786ayavYewJhOm8exZxB10NYkhOQk561Yxal3mNC/T
         HeRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=o70sDiQtQ+dDfpZ+eU3QLQfUt2kO/ieBAaoM5zoEMRk=;
        b=VZZCm53kjB0KDQEtcJSj/2++yAHPzCUoksEgR5mA9Uj3LZ53pFMdwURTsxgG2bR6yy
         uD/Ox5rEKSRWbBrY//Lk4svK2WucfAPYYKCMVjbGtnTU7dOzYVz6VYPvcMP1RFGDNn61
         AQkDUmiJkojtYbnS5DiZZgQACZbfOXazDyABhEueHOoXxUiOBrj7oJwScvxxLW3luk1m
         IQY8G11vSkYgcYgq5uf3Zr5xDaz4Rq2GQkboLmYfsvH9J64X7A5aAeaLbFwn+eMRG3Jm
         uGiwnOHOY52l/rmaNrs4LI3V3f4TDKDZ8UwjHd92BAzH9ossWhIF1X8+An+wxHSaWdJ2
         tlfg==
X-Gm-Message-State: AOAM531bD/HKyg2beKkTcLMaVblkrztVPesFCpaWkL1//q6pvffm1f+B
	ROLqKtDsQMOvkdk7YnDk7X4=
X-Google-Smtp-Source: ABdhPJxcba5H+VgtPFTRDUwwnpPXcXTlpl/P5iX1tSlcrlDbT8KuB2zaA6XFPrNnA/+pl4Sjn0Q9gA==
X-Received: by 2002:a6b:4e15:: with SMTP id c21mr31591852iob.116.1621001605156;
        Fri, 14 May 2021 07:13:25 -0700 (PDT)
Subject: Re: [PATCH v2 1/5] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org, Andrew Cooper
 <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <cover.1620965208.git.connojdavis@gmail.com>
 <3960a676376e0163d97ac02f968966cdfaccbf75.1620965208.git.connojdavis@gmail.com>
 <YJ4LzUcajOJncKUP@mattapan.m5p.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <e597d3cf-39c4-bfaa-f0dd-ea9c84b0a178@gmail.com>
Date: Fri, 14 May 2021 08:13:37 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJ4LzUcajOJncKUP@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/13/21 11:34 PM, Elliott Mitchell wrote:
> On Thu, May 13, 2021 at 10:17:08PM -0600, Connor Davis wrote:
>> Defaulting to yes only for X86 and ARM reduces the requirements
>> for a minimal build when porting new architectures.
>>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>> ---
>>   xen/drivers/char/Kconfig | 2 +-
>>   1 file changed, 1 insertion(+), 1 deletion(-)
>>
>> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
>> index b572305657..b15b0c8d6a 100644
>> --- a/xen/drivers/char/Kconfig
>> +++ b/xen/drivers/char/Kconfig
>> @@ -1,6 +1,6 @@
>>   config HAS_NS16550
>>   	bool "NS16550 UART driver" if ARM
>> -	default y
>> +	default y if (ARM || X86)
>>   	help
>>   	  This selects the 16550-series UART support. For most systems, say Y.
> Are you sure this is necessary?  I've been working on something else
> recently, but did you confirm this with a full build?
>
> If you observe the line directly above that one, `_if_ARM_`.  I'm pretty
> sure this driver will refuse to show up in a RISC-V build.
>
It isn't visible in Kconfig, true, but it will still be built because of

the unconditional "default y"


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Fri May 14 14:19:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 14:19:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127398.239445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYex-0002ec-UL; Fri, 14 May 2021 14:19:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127398.239445; Fri, 14 May 2021 14:19:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhYex-0002eV-QY; Fri, 14 May 2021 14:19:23 +0000
Received: by outflank-mailman (input) for mailman id 127398;
 Fri, 14 May 2021 14:19:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lhYev-0002eP-WC
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 14:19:22 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86e4dc79-464e-4ffc-a253-c0f31e224b72;
 Fri, 14 May 2021 14:19:20 +0000 (UTC)
Received: from [10.10.1.24] (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 16210019252001009.2056968807565;
 Fri, 14 May 2021 07:18:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86e4dc79-464e-4ffc-a253-c0f31e224b72
ARC-Seal: i=1; a=rsa-sha256; t=1621001937; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=K7lLugE8T7JI0ob19KpaId4PfZrt73+pubbIeM+/ozk98r4IgGeZOexqBN2rJnUVm3Y18gj+MVukjb/uO2ZTmoLuuR8ynvkrvDBX+DnA02bLIN9cFvn9hYv9KqUrWhTnqUMpzDO4CazWL276v89VqubxPvOx7Fq0jNYnK27MbjQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621001937; h=Content-Type:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=j8JJ219Rjv77FFvOnad2A084AOfcITB+64Zrfzp3+ao=; 
	b=GBEGm45ay9OzFaNEPsX72md3xgAQ612YmiQogMQoNCB2aP3RUNi2cNBZIMp4oJDhu84f+iaO7R0XiLII/+ppV4jb4K46SYmevP2sw0ZvH3avNoKiZj6Er9FtyjYuBxzrNPCRV/yYWkVgaMU5xwmYUtGpOXFcjHCazI430NYMExg=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621001937;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=Subject:To:Cc:References:From:Message-ID:Date:MIME-Version:In-Reply-To:Content-Type;
	bh=j8JJ219Rjv77FFvOnad2A084AOfcITB+64Zrfzp3+ao=;
	b=fDtd3zIHJoR5enqogoI1y8dkigWY/SvFXpqPQF8iib7TtEcbF9joUVynbirhBXZI
	gCLbN/dt0UsP+Fv0uxVoZNGUSsrynW8fACBFFdztsaosTkhMaUsaSv/lZO1TvMoQQ9/
	E5k7sKBwz1Ze7So4VgQV4k//HwZsnOulCVjF7TWQ=
Subject: Re: [PATCH v4 0/2] Introducing Hyperlaunch capability design
 (formerly: DomB mode of dom0less)
To: Christopher Clark <christopher.w.clark@gmail.com>,
 xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com, stefano.stabellini@xilinx.com,
 jgrall@amazon.com, Julien.grall.oss@gmail.com, iwj@xenproject.org,
 wl@xen.org, george.dunlap@citrix.com, jbeulich@suse.com, persaur@gmail.com,
 Bertrand.Marquis@arm.com, roger.pau@citrix.com, luca.fancellu@arm.com,
 paul@xen.org, adam.schwalm@starlab.io, scott.davis@starlab.io,
 Christopher Clark <christopher.clark@starlab.io>, openxt@googlegroups.com,
 Daniel DeGraaf <dgdegra@tycho.nsa.gov>, quinnr@ainfosec.com
References: <20210514034101.3683-1-christopher.w.clark@gmail.com>
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
Message-ID: <16e9e430-e684-46f9-ca48-3fdd80b9e8df@apertussolutions.com>
Date: Fri, 14 May 2021 10:18:41 -0400
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210514034101.3683-1-christopher.w.clark@gmail.com>
Content-Type: multipart/mixed;
 boundary="------------AE2245E82A7EC007FBDDF8CD"
Content-Language: en-US
X-Zoho-Virus-Status: 1
X-ZohoMailClient: External

This is a multi-part message in MIME format.
--------------AE2245E82A7EC007FBDDF8CD
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit

On 5/13/21 11:40 PM, Christopher Clark wrote:
> We are submitting for inclusion in the Xen documentation:
> 
> - the Hyperlaunch design document, and
> - the Hyperlaunch device tree design document
> 
> to describe a new method for launching the Xen hypervisor.
> 
> The Hyperlaunch feature builds upon prior dom0less work,
> to bring a flexible and security-minded means to launch a
> variety of VM configurations as part of the startup of Xen.
> 
> Signed-off-by: Christopher Clark <christopher.clark@starlab.io>
> Signed-off by: Daniel P. Smith <dpsmith@apertussolutions.com>
> 
> 
> Daniel P. Smith (2):
>   docs/designs/launch: hyperlaunch design document
>   docs/designs/launch: hyperlaunch device tree
> 
>  .../designs/launch/hyperlaunch-devicetree.rst |  343 ++++++
>  docs/designs/launch/hyperlaunch.rst           | 1004 +++++++++++++++++
>  2 files changed, 1347 insertions(+)
>  create mode 100644 docs/designs/launch/hyperlaunch-devicetree.rst
>  create mode 100644 docs/designs/launch/hyperlaunch.rst
> 

All,

Please find a rendered PDF copy of each document attached for ease of
reading.

V/r,
Daniel P. Smith
Apertus Solutions, LLC

--------------AE2245E82A7EC007FBDDF8CD
Content-Type: application/pdf;
 name="hyperlaunch.pdf"
Content-Transfer-Encoding: base64
Content-Disposition: attachment;
 filename="hyperlaunch.pdf"

JVBERi0xLjQNCiWTjIueIFJlcG9ydExhYiBHZW5lcmF0ZWQgUERGIGRvY3VtZW50IGh0dHA6
Ly93d3cucmVwb3J0bGFiLmNvbQ0KMSAwIG9iag0KPDwgL0YxIDIgMCBSIC9GMiAzIDAgUiAv
RjMgNjcgMCBSIC9GNCA2OSAwIFIgL0Y1IDcxIDAgUiA+Pg0KZW5kb2JqDQoyIDAgb2JqDQo8
PCAvQmFzZUZvbnQgL0hlbHZldGljYSAvRW5jb2RpbmcgL1dpbkFuc2lFbmNvZGluZyAvTmFt
ZSAvRjEgL1N1YnR5cGUgL1R5cGUxIC9UeXBlIC9Gb250ID4+DQplbmRvYmoNCjMgMCBvYmoN
Cjw8IC9CYXNlRm9udCAvSGVsdmV0aWNhLUJvbGQgL0VuY29kaW5nIC9XaW5BbnNpRW5jb2Rp
bmcgL05hbWUgL0YyIC9TdWJ0eXBlIC9UeXBlMSAvVHlwZSAvRm9udCA+Pg0KZW5kb2JqDQo0
IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA2NSAw
IFIgL1hZWiA2Mi42OTI5MSA3NjUuMDIzNiAwIF0gL1JlY3QgWyA2Mi42OTI5MSA2MTcuMDIz
NiAxMzQuOTIyOSA2MjkuMDIzNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0K
ZW5kb2JqDQo1IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rl
c3QgWyA2NSAwIFIgL1hZWiA2Mi42OTI5MSA3NjUuMDIzNiAwIF0gL1JlY3QgWyA1MjcuMDIy
NyA2MTcuNzczNiA1MzIuNTgyNyA2MjkuNzczNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9B
bm5vdCA+Pg0KZW5kb2JqDQo2IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVu
dHMgKCkgL0Rlc3QgWyA2NSAwIFIgL1hZWiA2Mi42OTI5MSA1OTQuMDIzNiAwIF0gL1JlY3Qg
WyA2Mi42OTI5MSA1OTkuMDIzNiAxNzIuNzEyOSA2MTEuMDIzNiBdIC9TdWJ0eXBlIC9MaW5r
IC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo3IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAg
XSAvQ29udGVudHMgKCkgL0Rlc3QgWyA2NSAwIFIgL1hZWiA2Mi42OTI5MSA1OTQuMDIzNiAw
IF0gL1JlY3QgWyA1MjcuMDIyNyA1OTkuNzczNiA1MzIuNTgyNyA2MTEuNzczNiBdIC9TdWJ0
eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo4IDAgb2JqDQo8PCAvQm9yZGVy
IFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA2NSAwIFIgL1hZWiA2Mi42OTI5MSA0
NzEuMDIzNiAwIF0gL1JlY3QgWyA2Mi42OTI5MSA1ODEuMDIzNiAxMjMuMjYyOSA1OTMuMDIz
NiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo5IDAgb2JqDQo8
PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA2NSAwIFIgL1hZWiA2
Mi42OTI5MSA0NzEuMDIzNiAwIF0gL1JlY3QgWyA1MjcuMDIyNyA1ODEuNzczNiA1MzIuNTgy
NyA1OTMuNzczNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQox
MCAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNjUg
MCBSIC9YWVogNjIuNjkyOTEgMjY0LjAyMzYgMCBdIC9SZWN0IFsgODIuNjkyOTEgNTYzLjAy
MzYgMTUxLjYxMjkgNTc1LjAyMzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4N
CmVuZG9iag0KMTEgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAv
RGVzdCBbIDY1IDAgUiAvWFlaIDYyLjY5MjkxIDI2NC4wMjM2IDAgXSAvUmVjdCBbIDUyNy4w
MjI3IDU2My43NzM2IDUzMi41ODI3IDU3NS43NzM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUg
L0Fubm90ID4+DQplbmRvYmoNCjEyIDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29u
dGVudHMgKCkgL0Rlc3QgWyA2NiAwIFIgL1hZWiA2Mi42OTI5MSA2MzMuMDIzNiAwIF0gL1Jl
Y3QgWyA2Mi42OTI5MSA1NDUuMDIzNiAxOTkuOTUyOSA1NTcuMDIzNiBdIC9TdWJ0eXBlIC9M
aW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQoxMyAwIG9iag0KPDwgL0JvcmRlciBbIDAg
MCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNjYgMCBSIC9YWVogNjIuNjkyOTEgNjMzLjAy
MzYgMCBdIC9SZWN0IFsgNTI3LjAyMjcgNTQ1Ljc3MzYgNTMyLjU4MjcgNTU3Ljc3MzYgXSAv
U3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KMTQgMCBvYmoNCjw8IC9C
b3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDY2IDAgUiAvWFlaIDYyLjY5
MjkxIDQ5OC4wMjM2IDAgXSAvUmVjdCBbIDgyLjY5MjkxIDUyNy4wMjM2IDIzOS45OTI5IDUz
OS4wMjM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjE1IDAg
b2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA2NiAwIFIg
L1hZWiA2Mi42OTI5MSA0OTguMDIzNiAwIF0gL1JlY3QgWyA1MjcuMDIyNyA1MjcuNzczNiA1
MzIuNTgyNyA1MzkuNzczNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5k
b2JqDQoxNiAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0
IFsgNjggMCBSIC9YWVogNjIuNjkyOTEgNjQ1LjAyMzYgMCBdIC9SZWN0IFsgODIuNjkyOTEg
NTA5LjAyMzYgMTk4LjI5MjkgNTIxLjAyMzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5u
b3QgPj4NCmVuZG9iag0KMTcgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50
cyAoKSAvRGVzdCBbIDY4IDAgUiAvWFlaIDYyLjY5MjkxIDY0NS4wMjM2IDAgXSAvUmVjdCBb
IDUyNy4wMjI3IDUwOS43NzM2IDUzMi41ODI3IDUyMS43NzM2IF0gL1N1YnR5cGUgL0xpbmsg
L1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjE4IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAg
XSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3MCAwIFIgL1hZWiA2Mi42OTI5MSA3NjUuMDIzNiAw
IF0gL1JlY3QgWyA4Mi42OTI5MSA0OTEuMDIzNiAyMzYuMDkyOSA1MDMuMDIzNiBdIC9TdWJ0
eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQoxOSAwIG9iag0KPDwgL0JvcmRl
ciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNzAgMCBSIC9YWVogNjIuNjkyOTEg
NzY1LjAyMzYgMCBdIC9SZWN0IFsgNTI3LjAyMjcgNDkxLjc3MzYgNTMyLjU4MjcgNTAzLjc3
MzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KMjAgMCBvYmoN
Cjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDcwIDAgUiAvWFla
IDYyLjY5MjkxIDY4MS4wMjM2IDAgXSAvUmVjdCBbIDEwMi42OTI5IDQ3My4wMjM2IDM2MC4w
MDI5IDQ4NS4wMjM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoN
CjIxIDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3
MCAwIFIgL1hZWiA2Mi42OTI5MSA2ODEuMDIzNiAwIF0gL1JlY3QgWyA1MjcuMDIyNyA0NzMu
NzczNiA1MzIuNTgyNyA0ODUuNzczNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+
Pg0KZW5kb2JqDQoyMiAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgp
IC9EZXN0IFsgNzAgMCBSIC9YWVogNjIuNjkyOTEgNDU2LjAyMzYgMCBdIC9SZWN0IFsgMTAy
LjY5MjkgNDU1LjAyMzYgNDQ0LjUxMjkgNDY3LjAyMzYgXSAvU3VidHlwZSAvTGluayAvVHlw
ZSAvQW5ub3QgPj4NCmVuZG9iag0KMjMgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9D
b250ZW50cyAoKSAvRGVzdCBbIDcwIDAgUiAvWFlaIDYyLjY5MjkxIDQ1Ni4wMjM2IDAgXSAv
UmVjdCBbIDUyNy4wMjI3IDQ1NS43NzM2IDUzMi41ODI3IDQ2Ny43NzM2IF0gL1N1YnR5cGUg
L0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjI0IDAgb2JqDQo8PCAvQm9yZGVyIFsg
MCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3MCAwIFIgL1hZWiA2Mi42OTI5MSAyNjIu
MDIzNiAwIF0gL1JlY3QgWyAxMDIuNjkyOSA0MzcuMDIzNiAzOTAuMDQyOSA0NDkuMDIzNiBd
IC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQoyNSAwIG9iag0KPDwg
L0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNzAgMCBSIC9YWVogNjIu
NjkyOTEgMjYyLjAyMzYgMCBdIC9SZWN0IFsgNTI3LjAyMjcgNDM3Ljc3MzYgNTMyLjU4Mjcg
NDQ5Ljc3MzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KMjYg
MCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDcyIDAg
UiAvWFlaIDYyLjY5MjkxIDc2NS4wMjM2IDAgXSAvUmVjdCBbIDEwMi42OTI5IDQxOS4wMjM2
IDMxMC41NjI5IDQzMS4wMjM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQpl
bmRvYmoNCjI3IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rl
c3QgWyA3MiAwIFIgL1hZWiA2Mi42OTI5MSA3NjUuMDIzNiAwIF0gL1JlY3QgWyA1MjcuMDIy
NyA0MTkuNzczNiA1MzIuNTgyNyA0MzEuNzczNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9B
bm5vdCA+Pg0KZW5kb2JqDQoyOCAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRl
bnRzICgpIC9EZXN0IFsgNzIgMCBSIC9YWVogNjIuNjkyOTEgNDYyLjAyMzYgMCBdIC9SZWN0
IFsgODIuNjkyOTEgNDAxLjAyMzYgMjY0LjQ1MjkgNDEzLjAyMzYgXSAvU3VidHlwZSAvTGlu
ayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KMjkgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAg
MCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDcyIDAgUiAvWFlaIDYyLjY5MjkxIDQ2Mi4wMjM2
IDAgXSAvUmVjdCBbIDUyNy4wMjI3IDQwMS43NzM2IDUzMi41ODI3IDQxMy43NzM2IF0gL1N1
YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjMwIDAgb2JqDQo8PCAvQm9y
ZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3MyAwIFIgL1hZWiA2Mi42OTI5
MSA1OTEuMDIzNiAwIF0gL1JlY3QgWyA4Mi42OTI5MSAzODMuMDIzNiAyNDAuNTIyOSAzOTUu
MDIzNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQozMSAwIG9i
ag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNzMgMCBSIC9Y
WVogNjIuNjkyOTEgNTkxLjAyMzYgMCBdIC9SZWN0IFsgNTI3LjAyMjcgMzgzLjc3MzYgNTMy
LjU4MjcgMzk1Ljc3MzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9i
ag0KMzIgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBb
IDczIDAgUiAvWFlaIDYyLjY5MjkxIDMwOS4wMjM2IDAgXSAvUmVjdCBbIDEwMi42OTI5IDM2
NS4wMjM2IDI0My4zMjI5IDM3Ny4wMjM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90
ID4+DQplbmRvYmoNCjMzIDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMg
KCkgL0Rlc3QgWyA3MyAwIFIgL1hZWiA2Mi42OTI5MSAzMDkuMDIzNiAwIF0gL1JlY3QgWyA1
MjcuMDIyNyAzNjUuNzczNiA1MzIuNTgyNyAzNzcuNzczNiBdIC9TdWJ0eXBlIC9MaW5rIC9U
eXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQozNCAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0g
L0NvbnRlbnRzICgpIC9EZXN0IFsgNzQgMCBSIC9YWVogNjIuNjkyOTEgMjU1LjAyMzYgMCBd
IC9SZWN0IFsgODIuNjkyOTEgMzQ3LjAyMzYgMjIzLjMxMjkgMzU5LjAyMzYgXSAvU3VidHlw
ZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KMzUgMCBvYmoNCjw8IC9Cb3JkZXIg
WyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDc0IDAgUiAvWFlaIDYyLjY5MjkxIDI1
NS4wMjM2IDAgXSAvUmVjdCBbIDUyNy4wMjI3IDM0Ny43NzM2IDUzMi41ODI3IDM1OS43NzM2
IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjM2IDAgb2JqDQo8
PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3NSAwIFIgL1hZWiA2
Mi42OTI5MSA3NjUuMDIzNiAwIF0gL1JlY3QgWyAxMDIuNjkyOSAzMjkuMDIzNiAxOTguMzAy
OSAzNDEuMDIzNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQoz
NyAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNzUg
MCBSIC9YWVogNjIuNjkyOTEgNzY1LjAyMzYgMCBdIC9SZWN0IFsgNTI3LjAyMjcgMzI5Ljc3
MzYgNTMyLjU4MjcgMzQxLjc3MzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4N
CmVuZG9iag0KMzggMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAv
RGVzdCBbIDc1IDAgUiAvWFlaIDYyLjY5MjkxIDQzMi4wMjM2IDAgXSAvUmVjdCBbIDEwMi42
OTI5IDMxMS4wMjM2IDIwOC4yODI5IDMyMy4wMjM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUg
L0Fubm90ID4+DQplbmRvYmoNCjM5IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29u
dGVudHMgKCkgL0Rlc3QgWyA3NSAwIFIgL1hZWiA2Mi42OTI5MSA0MzIuMDIzNiAwIF0gL1Jl
Y3QgWyA1MjcuMDIyNyAzMTEuNzczNiA1MzIuNTgyNyAzMjMuNzczNiBdIC9TdWJ0eXBlIC9M
aW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo0MCAwIG9iag0KPDwgL0JvcmRlciBbIDAg
MCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNzUgMCBSIC9YWVogNjIuNjkyOTEgMzE1LjAy
MzYgMCBdIC9SZWN0IFsgMTAyLjY5MjkgMjkzLjAyMzYgMTk5Ljk2MjkgMzA1LjAyMzYgXSAv
U3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KNDEgMCBvYmoNCjw8IC9C
b3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDc1IDAgUiAvWFlaIDYyLjY5
MjkxIDMxNS4wMjM2IDAgXSAvUmVjdCBbIDUyNy4wMjI3IDI5My43NzM2IDUzMi41ODI3IDMw
NS43NzM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjQyIDAg
b2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3NSAwIFIg
L1hZWiA2Mi42OTI5MSAxOTguMDIzNiAwIF0gL1JlY3QgWyAxMDIuNjkyOSAyNzUuMDIzNiAx
OTEuMDcyOSAyODcuMDIzNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5k
b2JqDQo0MyAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0
IFsgNzUgMCBSIC9YWVogNjIuNjkyOTEgMTk4LjAyMzYgMCBdIC9SZWN0IFsgNTI3LjAyMjcg
Mjc1Ljc3MzYgNTMyLjU4MjcgMjg3Ljc3MzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5u
b3QgPj4NCmVuZG9iag0KNDQgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50
cyAoKSAvRGVzdCBbIDc2IDAgUiAvWFlaIDYyLjY5MjkxIDc2NS4wMjM2IDAgXSAvUmVjdCBb
IDEwMi42OTI5IDI1Ny4wMjM2IDIxMi43MzI5IDI2OS4wMjM2IF0gL1N1YnR5cGUgL0xpbmsg
L1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjQ1IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAg
XSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3NiAwIFIgL1hZWiA2Mi42OTI5MSA3NjUuMDIzNiAw
IF0gL1JlY3QgWyA1MjEuNDYyNyAyNTcuNzczNiA1MzIuNTgyNyAyNjkuNzczNiBdIC9TdWJ0
eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo0NiAwIG9iag0KPDwgL0JvcmRl
ciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNzYgMCBSIC9YWVogNjIuNjkyOTEg
NDIwLjAyMzYgMCBdIC9SZWN0IFsgMTAyLjY5MjkgMjM5LjAyMzYgMjAyLjczMjkgMjUxLjAy
MzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KNDcgMCBvYmoN
Cjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDc2IDAgUiAvWFla
IDYyLjY5MjkxIDQyMC4wMjM2IDAgXSAvUmVjdCBbIDUyMS40NjI3IDIzOS43NzM2IDUzMi41
ODI3IDI1MS43NzM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoN
CjQ4IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3
NiAwIFIgL1hZWiA2Mi42OTI5MSAzMzkuMDIzNiAwIF0gL1JlY3QgWyAxMDIuNjkyOSAyMjEu
MDIzNiAyMTMuODQyOSAyMzMuMDIzNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+
Pg0KZW5kb2JqDQo0OSAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgp
IC9EZXN0IFsgNzYgMCBSIC9YWVogNjIuNjkyOTEgMzM5LjAyMzYgMCBdIC9SZWN0IFsgNTIx
LjQ2MjcgMjIxLjc3MzYgNTMyLjU4MjcgMjMzLjc3MzYgXSAvU3VidHlwZSAvTGluayAvVHlw
ZSAvQW5ub3QgPj4NCmVuZG9iag0KNTAgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9D
b250ZW50cyAoKSAvRGVzdCBbIDc2IDAgUiAvWFlaIDYyLjY5MjkxIDI0Ni4wMjM2IDAgXSAv
UmVjdCBbIDEwMi42OTI5IDIwMy4wMjM2IDIwNy4xODI5IDIxNS4wMjM2IF0gL1N1YnR5cGUg
L0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjUxIDAgb2JqDQo8PCAvQm9yZGVyIFsg
MCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3NiAwIFIgL1hZWiA2Mi42OTI5MSAyNDYu
MDIzNiAwIF0gL1JlY3QgWyA1MjEuNDYyNyAyMDMuNzczNiA1MzIuNTgyNyAyMTUuNzczNiBd
IC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo1MiAwIG9iag0KPDwg
L0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNzYgMCBSIC9YWVogNjIu
NjkyOTEgMTQxLjAyMzYgMCBdIC9SZWN0IFsgNjIuNjkyOTEgMTg1LjAyMzYgMjc3LjcxMjkg
MTk3LjAyMzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KNTMg
MCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDc2IDAg
UiAvWFlaIDYyLjY5MjkxIDE0MS4wMjM2IDAgXSAvUmVjdCBbIDUyMS40NjI3IDE4NS43NzM2
IDUzMi41ODI3IDE5Ny43NzM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQpl
bmRvYmoNCjU0IDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rl
c3QgWyA3NyAwIFIgL1hZWiA2Mi42OTI5MSA2MTAuNjc3MiAwIF0gL1JlY3QgWyA2Mi42OTI5
MSAxNjcuMDIzNiAxMjIuMTUyOSAxNzkuMDIzNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9B
bm5vdCA+Pg0KZW5kb2JqDQo1NSAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRl
bnRzICgpIC9EZXN0IFsgNzcgMCBSIC9YWVogNjIuNjkyOTEgNjEwLjY3NzIgMCBdIC9SZWN0
IFsgNTIxLjQ2MjcgMTY3Ljc3MzYgNTMyLjU4MjcgMTc5Ljc3MzYgXSAvU3VidHlwZSAvTGlu
ayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KNTYgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAg
MCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDc3IDAgUiAvWFlaIDYyLjY5MjkxIDU3Ny42Nzcy
IDAgXSAvUmVjdCBbIDgyLjY5MjkxIDE0OS4wMjM2IDM3MC42MzI5IDE2MS4wMjM2IF0gL1N1
YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjU3IDAgb2JqDQo8PCAvQm9y
ZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA3NyAwIFIgL1hZWiA2Mi42OTI5
MSA1NzcuNjc3MiAwIF0gL1JlY3QgWyA1MjEuNDYyNyAxNDkuNzczNiA1MzIuNTgyNyAxNjEu
NzczNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo1OCAwIG9i
ag0KPDwgL0JvcmRlciBbIDAgMCAwIF0gL0NvbnRlbnRzICgpIC9EZXN0IFsgNzggMCBSIC9Y
WVogNjIuNjkyOTEgNjMxLjgyMzYgMCBdIC9SZWN0IFsgODIuNjkyOTEgMTMxLjAyMzYgMzg3
LjgzMjkgMTQzLjAyMzYgXSAvU3VidHlwZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9i
ag0KNTkgMCBvYmoNCjw8IC9Cb3JkZXIgWyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBb
IDc4IDAgUiAvWFlaIDYyLjY5MjkxIDYzMS44MjM2IDAgXSAvUmVjdCBbIDUyMS40NjI3IDEz
MS43NzM2IDUzMi41ODI3IDE0My43NzM2IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90
ID4+DQplbmRvYmoNCjYwIDAgb2JqDQo8PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMg
KCkgL0Rlc3QgWyA4MCAwIFIgL1hZWiA2Mi42OTI5MSA2MDkuMDIzNiAwIF0gL1JlY3QgWyA4
Mi42OTI5MSAxMTMuMDIzNiAyMTUuNTMyOSAxMjUuMDIzNiBdIC9TdWJ0eXBlIC9MaW5rIC9U
eXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo2MSAwIG9iag0KPDwgL0JvcmRlciBbIDAgMCAwIF0g
L0NvbnRlbnRzICgpIC9EZXN0IFsgODAgMCBSIC9YWVogNjIuNjkyOTEgNjA5LjAyMzYgMCBd
IC9SZWN0IFsgNTIxLjQ2MjcgMTEzLjc3MzYgNTMyLjU4MjcgMTI1Ljc3MzYgXSAvU3VidHlw
ZSAvTGluayAvVHlwZSAvQW5ub3QgPj4NCmVuZG9iag0KNjIgMCBvYmoNCjw8IC9Cb3JkZXIg
WyAwIDAgMCBdIC9Db250ZW50cyAoKSAvRGVzdCBbIDgyIDAgUiAvWFlaIDYyLjY5MjkxIDM4
Mi4wMjM2IDAgXSAvUmVjdCBbIDgyLjY5MjkxIDk1LjAyMzYyIDI0MC41NTI5IDEwNy4wMjM2
IF0gL1N1YnR5cGUgL0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjYzIDAgb2JqDQo8
PCAvQm9yZGVyIFsgMCAwIDAgXSAvQ29udGVudHMgKCkgL0Rlc3QgWyA4MiAwIFIgL1hZWiA2
Mi42OTI5MSAzODIuMDIzNiAwIF0gL1JlY3QgWyA1MjEuNDYyNyA5NS43NzM2MiA1MzIuNTgy
NyAxMDcuNzczNiBdIC9TdWJ0eXBlIC9MaW5rIC9UeXBlIC9Bbm5vdCA+Pg0KZW5kb2JqDQo2
NCAwIG9iag0KPDwgL0Fubm90cyBbIDQgMCBSIDUgMCBSIDYgMCBSIDcgMCBSIDggMCBSIDkg
MCBSIDEwIDAgUiAxMSAwIFIgMTIgMCBSIDEzIDAgUiANCiAgMTQgMCBSIDE1IDAgUiAxNiAw
IFIgMTcgMCBSIDE4IDAgUiAxOSAwIFIgMjAgMCBSIDIxIDAgUiAyMiAwIFIgMjMgMCBSIA0K
ICAyNCAwIFIgMjUgMCBSIDI2IDAgUiAyNyAwIFIgMjggMCBSIDI5IDAgUiAzMCAwIFIgMzEg
MCBSIDMyIDAgUiAzMyAwIFIgDQogIDM0IDAgUiAzNSAwIFIgMzYgMCBSIDM3IDAgUiAzOCAw
IFIgMzkgMCBSIDQwIDAgUiA0MSAwIFIgNDIgMCBSIDQzIDAgUiANCiAgNDQgMCBSIDQ1IDAg
UiA0NiAwIFIgNDcgMCBSIDQ4IDAgUiA0OSAwIFIgNTAgMCBSIDUxIDAgUiA1MiAwIFIgNTMg
MCBSIA0KICA1NCAwIFIgNTUgMCBSIDU2IDAgUiA1NyAwIFIgNTggMCBSIDU5IDAgUiA2MCAw
IFIgNjEgMCBSIDYyIDAgUiA2MyAwIFIgXSAvQ29udGVudHMgMTI0IDAgUiAvTWVkaWFCb3gg
WyAwIDAgNTk1LjI3NTYgODQxLjg4OTggXSAvUGFyZW50IDEyMyAwIFIgL1Jlc291cmNlcyA8
PCAvRm9udCAxIDAgUiAvUHJvY1NldCBbIC9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9J
bWFnZUkgXSA+PiAvUm90YXRlIDAgDQogIC9UcmFucyA8PCAgPj4gL1R5cGUgL1BhZ2UgPj4N
CmVuZG9iag0KNjUgMCBvYmoNCjw8IC9Db250ZW50cyAxMjUgMCBSIC9NZWRpYUJveCBbIDAg
MCA1OTUuMjc1NiA4NDEuODg5OCBdIC9QYXJlbnQgMTIzIDAgUiAvUmVzb3VyY2VzIDw8IC9G
b250IDEgMCBSIC9Qcm9jU2V0IFsgL1BERiAvVGV4dCAvSW1hZ2VCIC9JbWFnZUMgL0ltYWdl
SSBdID4+IC9Sb3RhdGUgMCAvVHJhbnMgPDwgID4+IA0KICAvVHlwZSAvUGFnZSA+Pg0KZW5k
b2JqDQo2NiAwIG9iag0KPDwgL0NvbnRlbnRzIDEyNiAwIFIgL01lZGlhQm94IFsgMCAwIDU5
NS4yNzU2IDg0MS44ODk4IF0gL1BhcmVudCAxMjMgMCBSIC9SZXNvdXJjZXMgPDwgL0ZvbnQg
MSAwIFIgL1Byb2NTZXQgWyAvUERGIC9UZXh0IC9JbWFnZUIgL0ltYWdlQyAvSW1hZ2VJIF0g
Pj4gL1JvdGF0ZSAwIC9UcmFucyA8PCAgPj4gDQogIC9UeXBlIC9QYWdlID4+DQplbmRvYmoN
CjY3IDAgb2JqDQo8PCAvQmFzZUZvbnQgL0hlbHZldGljYS1PYmxpcXVlIC9FbmNvZGluZyAv
V2luQW5zaUVuY29kaW5nIC9OYW1lIC9GMyAvU3VidHlwZSAvVHlwZTEgL1R5cGUgL0ZvbnQg
Pj4NCmVuZG9iag0KNjggMCBvYmoNCjw8IC9Db250ZW50cyAxMjcgMCBSIC9NZWRpYUJveCBb
IDAgMCA1OTUuMjc1NiA4NDEuODg5OCBdIC9QYXJlbnQgMTIzIDAgUiAvUmVzb3VyY2VzIDw8
IC9Gb250IDEgMCBSIC9Qcm9jU2V0IFsgL1BERiAvVGV4dCAvSW1hZ2VCIC9JbWFnZUMgL0lt
YWdlSSBdID4+IC9Sb3RhdGUgMCAvVHJhbnMgPDwgID4+IA0KICAvVHlwZSAvUGFnZSA+Pg0K
ZW5kb2JqDQo2OSAwIG9iag0KPDwgL0Jhc2VGb250IC9IZWx2ZXRpY2EtQm9sZE9ibGlxdWUg
L0VuY29kaW5nIC9XaW5BbnNpRW5jb2RpbmcgL05hbWUgL0Y0IC9TdWJ0eXBlIC9UeXBlMSAv
VHlwZSAvRm9udCA+Pg0KZW5kb2JqDQo3MCAwIG9iag0KPDwgL0NvbnRlbnRzIDEyOCAwIFIg
L01lZGlhQm94IFsgMCAwIDU5NS4yNzU2IDg0MS44ODk4IF0gL1BhcmVudCAxMjMgMCBSIC9S
ZXNvdXJjZXMgPDwgL0ZvbnQgMSAwIFIgL1Byb2NTZXQgWyAvUERGIC9UZXh0IC9JbWFnZUIg
L0ltYWdlQyAvSW1hZ2VJIF0gPj4gL1JvdGF0ZSAwIC9UcmFucyA8PCAgPj4gDQogIC9UeXBl
IC9QYWdlID4+DQplbmRvYmoNCjcxIDAgb2JqDQo8PCAvQmFzZUZvbnQgL0NvdXJpZXIgL0Vu
Y29kaW5nIC9XaW5BbnNpRW5jb2RpbmcgL05hbWUgL0Y1IC9TdWJ0eXBlIC9UeXBlMSAvVHlw
ZSAvRm9udCA+Pg0KZW5kb2JqDQo3MiAwIG9iag0KPDwgL0NvbnRlbnRzIDEyOSAwIFIgL01l
ZGlhQm94IFsgMCAwIDU5NS4yNzU2IDg0MS44ODk4IF0gL1BhcmVudCAxMjMgMCBSIC9SZXNv
dXJjZXMgPDwgL0ZvbnQgMSAwIFIgL1Byb2NTZXQgWyAvUERGIC9UZXh0IC9JbWFnZUIgL0lt
YWdlQyAvSW1hZ2VJIF0gPj4gL1JvdGF0ZSAwIC9UcmFucyA8PCAgPj4gDQogIC9UeXBlIC9Q
YWdlID4+DQplbmRvYmoNCjczIDAgb2JqDQo8PCAvQ29udGVudHMgMTMwIDAgUiAvTWVkaWFC
b3ggWyAwIDAgNTk1LjI3NTYgODQxLjg4OTggXSAvUGFyZW50IDEyMyAwIFIgL1Jlc291cmNl
cyA8PCAvRm9udCAxIDAgUiAvUHJvY1NldCBbIC9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VD
IC9JbWFnZUkgXSA+PiAvUm90YXRlIDAgL1RyYW5zIDw8ICA+PiANCiAgL1R5cGUgL1BhZ2Ug
Pj4NCmVuZG9iag0KNzQgMCBvYmoNCjw8IC9Db250ZW50cyAxMzEgMCBSIC9NZWRpYUJveCBb
IDAgMCA1OTUuMjc1NiA4NDEuODg5OCBdIC9QYXJlbnQgMTIzIDAgUiAvUmVzb3VyY2VzIDw8
IC9Gb250IDEgMCBSIC9Qcm9jU2V0IFsgL1BERiAvVGV4dCAvSW1hZ2VCIC9JbWFnZUMgL0lt
YWdlSSBdID4+IC9Sb3RhdGUgMCAvVHJhbnMgPDwgID4+IA0KICAvVHlwZSAvUGFnZSA+Pg0K
ZW5kb2JqDQo3NSAwIG9iag0KPDwgL0NvbnRlbnRzIDEzMiAwIFIgL01lZGlhQm94IFsgMCAw
IDU5NS4yNzU2IDg0MS44ODk4IF0gL1BhcmVudCAxMjMgMCBSIC9SZXNvdXJjZXMgPDwgL0Zv
bnQgMSAwIFIgL1Byb2NTZXQgWyAvUERGIC9UZXh0IC9JbWFnZUIgL0ltYWdlQyAvSW1hZ2VJ
IF0gPj4gL1JvdGF0ZSAwIC9UcmFucyA8PCAgPj4gDQogIC9UeXBlIC9QYWdlID4+DQplbmRv
YmoNCjc2IDAgb2JqDQo8PCAvQ29udGVudHMgMTMzIDAgUiAvTWVkaWFCb3ggWyAwIDAgNTk1
LjI3NTYgODQxLjg4OTggXSAvUGFyZW50IDEyMyAwIFIgL1Jlc291cmNlcyA8PCAvRm9udCAx
IDAgUiAvUHJvY1NldCBbIC9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9JbWFnZUkgXSA+
PiAvUm90YXRlIDAgL1RyYW5zIDw8ICA+PiANCiAgL1R5cGUgL1BhZ2UgPj4NCmVuZG9iag0K
NzcgMCBvYmoNCjw8IC9Db250ZW50cyAxMzQgMCBSIC9NZWRpYUJveCBbIDAgMCA1OTUuMjc1
NiA4NDEuODg5OCBdIC9QYXJlbnQgMTIzIDAgUiAvUmVzb3VyY2VzIDw8IC9Gb250IDEgMCBS
IC9Qcm9jU2V0IFsgL1BERiAvVGV4dCAvSW1hZ2VCIC9JbWFnZUMgL0ltYWdlSSBdID4+IC9S
b3RhdGUgMCAvVHJhbnMgPDwgID4+IA0KICAvVHlwZSAvUGFnZSA+Pg0KZW5kb2JqDQo3OCAw
IG9iag0KPDwgL0NvbnRlbnRzIDEzNSAwIFIgL01lZGlhQm94IFsgMCAwIDU5NS4yNzU2IDg0
MS44ODk4IF0gL1BhcmVudCAxMjMgMCBSIC9SZXNvdXJjZXMgPDwgL0ZvbnQgMSAwIFIgL1By
b2NTZXQgWyAvUERGIC9UZXh0IC9JbWFnZUIgL0ltYWdlQyAvSW1hZ2VJIF0gPj4gL1JvdGF0
ZSAwIC9UcmFucyA8PCAgPj4gDQogIC9UeXBlIC9QYWdlID4+DQplbmRvYmoNCjc5IDAgb2Jq
DQo8PCAvQ29udGVudHMgMTM2IDAgUiAvTWVkaWFCb3ggWyAwIDAgNTk1LjI3NTYgODQxLjg4
OTggXSAvUGFyZW50IDEyMyAwIFIgL1Jlc291cmNlcyA8PCAvRm9udCAxIDAgUiAvUHJvY1Nl
dCBbIC9QREYgL1RleHQgL0ltYWdlQiAvSW1hZ2VDIC9JbWFnZUkgXSA+PiAvUm90YXRlIDAg
L1RyYW5zIDw8ICA+PiANCiAgL1R5cGUgL1BhZ2UgPj4NCmVuZG9iag0KODAgMCBvYmoNCjw8
IC9Db250ZW50cyAxMzcgMCBSIC9NZWRpYUJveCBbIDAgMCA1OTUuMjc1NiA4NDEuODg5OCBd
IC9QYXJlbnQgMTIzIDAgUiAvUmVzb3VyY2VzIDw8IC9Gb250IDEgMCBSIC9Qcm9jU2V0IFsg
L1BERiAvVGV4dCAvSW1hZ2VCIC9JbWFnZUMgL0ltYWdlSSBdID4+IC9Sb3RhdGUgMCAvVHJh
bnMgPDwgID4+IA0KICAvVHlwZSAvUGFnZSA+Pg0KZW5kb2JqDQo4MSAwIG9iag0KPDwgL0Eg
PDwgL1MgL1VSSSAvVHlwZSAvQWN0aW9uIC9VUkkgKGh0dHBzOi8vY3JlYXRpdmVjb21tb25z
Lm9yZy9saWNlbnNlcy9ieS80LjAvbGVnYWxjb2RlKSA+PiAvQm9yZGVyIFsgMCAwIDAgXSAv
UmVjdCBbIDY2LjAyMjkxIDMyMi4wMjM2IDMwNi4xMjI5IDMzNC4wMjM2IF0gL1N1YnR5cGUg
L0xpbmsgL1R5cGUgL0Fubm90ID4+DQplbmRvYmoNCjgyIDAgb2JqDQo8PCAvQW5ub3RzIFsg
ODEgMCBSIF0gL0NvbnRlbnRzIDEzOCAwIFIgL01lZGlhQm94IFsgMCAwIDU5NS4yNzU2IDg0
MS44ODk4IF0gL1BhcmVudCAxMjMgMCBSIC9SZXNvdXJjZXMgPDwgL0ZvbnQgMSAwIFIgL1By
b2NTZXQgWyAvUERGIC9UZXh0IC9JbWFnZUIgL0ltYWdlQyAvSW1hZ2VJIF0gPj4gL1JvdGF0
ZSAwIA0KICAvVHJhbnMgPDwgID4+IC9UeXBlIC9QYWdlID4+DQplbmRvYmoNCjgzIDAgb2Jq
DQo8PCAvT3V0bGluZXMgODUgMCBSIC9QYWdlTGFiZWxzIDEzOSAwIFIgL1BhZ2VNb2RlIC9V
c2VOb25lIC9QYWdlcyAxMjMgMCBSIC9UeXBlIC9DYXRhbG9nID4+DQplbmRvYmoNCjg0IDAg
b2JqDQo8PCAvQXV0aG9yICgpIC9DcmVhdGlvbkRhdGUgKEQ6MjAyMTA1MTQxMDU4NDUrMDAn
MDAnKSAvQ3JlYXRvciAoXCh1bnNwZWNpZmllZFwpKSAvS2V5d29yZHMgKCkgL01vZERhdGUg
KEQ6MjAyMTA1MTQxMDU4NDUrMDAnMDAnKSAvUHJvZHVjZXIgKFJlcG9ydExhYiBQREYgTGli
cmFyeSAtIHd3dy5yZXBvcnRsYWIuY29tKSANCiAgL1N1YmplY3QgKFwodW5zcGVjaWZpZWRc
KSkgL1RpdGxlIChIeXBlcmxhdW5jaCBEZXNpZ24gRG9jdW1lbnQpIC9UcmFwcGVkIC9GYWxz
ZSA+Pg0KZW5kb2JqDQo4NSAwIG9iag0KPDwgL0NvdW50IDQ2IC9GaXJzdCA4NiAwIFIgL0xh
c3QgMTE4IDAgUiAvVHlwZSAvT3V0bGluZXMgPj4NCmVuZG9iag0KODYgMCBvYmoNCjw8IC9E
ZXN0IFsgNjUgMCBSIC9YWVogNjIuNjkyOTEgNzY1LjAyMzYgMCBdIC9OZXh0IDg3IDAgUiAv
UGFyZW50IDg1IDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDFcMDAwXDI0MFwwMDBcMjQwXDAw
MFwyNDBcMDAwSVwwMDBuXDAwMHRcMDAwclwwMDBvXDAwMGRcMDAwdVwwMDBjXDAwMHRcMDAw
aVwwMDBvXDAwMG4pID4+DQplbmRvYmoNCjg3IDAgb2JqDQo8PCAvRGVzdCBbIDY1IDAgUiAv
WFlaIDYyLjY5MjkxIDU5NC4wMjM2IDAgXSAvTmV4dCA4OCAwIFIgL1BhcmVudCA4NSAwIFIg
L1ByZXYgODYgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAwMlwwMDBcMjQwXDAwMFwyNDBcMDAw
XDI0MFwwMDBEXDAwMG9cMDAwY1wwMDB1XDAwMG1cMDAwZVwwMDBuXDAwMHRcMDAwIFwwMDBT
XDAwMHRcMDAwclwwMDB1XDAwMGNcMDAwdFwwMDB1XDAwMHJcMDAwZSkgPj4NCmVuZG9iag0K
ODggMCBvYmoNCjw8IC9Db3VudCAxIC9EZXN0IFsgNjUgMCBSIC9YWVogNjIuNjkyOTEgNDcx
LjAyMzYgMCBdIC9GaXJzdCA4OSAwIFIgL0xhc3QgODkgMCBSIC9OZXh0IDkwIDAgUiAvUGFy
ZW50IDg1IDAgUiANCiAgL1ByZXYgODcgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAwM1wwMDBc
MjQwXDAwMFwyNDBcMDAwXDI0MFwwMDBBXDAwMHBcMDAwcFwwMDByXDAwMG9cMDAwYVwwMDBj
XDAwMGgpID4+DQplbmRvYmoNCjg5IDAgb2JqDQo8PCAvRGVzdCBbIDY1IDAgUiAvWFlaIDYy
LjY5MjkxIDI2NC4wMjM2IDAgXSAvUGFyZW50IDg4IDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAw
MDNcMDAwLlwwMDAxXDAwMFwyNDBcMDAwXDI0MFwwMDBcMjQwXDAwME9cMDAwYlwwMDBqXDAw
MGVcMDAwY1wwMDB0XDAwMGlcMDAwdlwwMDBlXDAwMHMpID4+DQplbmRvYmoNCjkwIDAgb2Jq
DQo8PCAvQ291bnQgMjYgL0Rlc3QgWyA2NiAwIFIgL1hZWiA2Mi42OTI5MSA2MzMuMDIzNiAw
IF0gL0ZpcnN0IDkxIDAgUiAvTGFzdCAxMDcgMCBSIC9OZXh0IDExNyAwIFIgL1BhcmVudCA4
NSAwIFIgDQogIC9QcmV2IDg4IDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDRcMDAwXDI0MFww
MDBcMjQwXDAwMFwyNDBcMDAwUlwwMDBlXDAwMHFcMDAwdVwwMDBpXDAwMHJcMDAwZVwwMDBt
XDAwMGVcMDAwblwwMDB0XDAwMHNcMDAwIFwwMDBhXDAwMG5cMDAwZFwwMDAgXDAwMERcMDAw
ZVwwMDBzXDAwMGlcMDAwZ1wwMDBuKSA+Pg0KZW5kb2JqDQo5MSAwIG9iag0KPDwgL0Rlc3Qg
WyA2NiAwIFIgL1hZWiA2Mi42OTI5MSA0OTguMDIzNiAwIF0gL05leHQgOTIgMCBSIC9QYXJl
bnQgOTAgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAwNFwwMDAuXDAwMDFcMDAwXDI0MFwwMDBc
MjQwXDAwMFwyNDBcMDAwSFwwMDB5XDAwMHBcMDAwZVwwMDByXDAwMHZcMDAwaVwwMDBzXDAw
MG9cMDAwclwwMDAgXDAwMExcMDAwYVwwMDB1XDAwMG5cMDAwY1wwMDBoXDAwMCBcMDAwTFww
MDBhXDAwMG5cMDAwZFwwMDBzXDAwMGNcMDAwYVwwMDBwXDAwMGUpID4+DQplbmRvYmoNCjky
IDAgb2JqDQo8PCAvRGVzdCBbIDY4IDAgUiAvWFlaIDYyLjY5MjkxIDY0NS4wMjM2IDAgXSAv
TmV4dCA5MyAwIFIgL1BhcmVudCA5MCAwIFIgL1ByZXYgOTEgMCBSIC9UaXRsZSAoXDM3Nlwz
NzdcMDAwNFwwMDAuXDAwMDJcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwRFwwMDBvXDAw
MG1cMDAwYVwwMDBpXDAwMG5cMDAwIFwwMDBDXDAwMG9cMDAwblwwMDBzXDAwMHRcMDAwclww
MDB1XDAwMGNcMDAwdFwwMDBpXDAwMG9cMDAwbikgPj4NCmVuZG9iag0KOTMgMCBvYmoNCjw8
IC9Db3VudCA3IC9EZXN0IFsgNzAgMCBSIC9YWVogNjIuNjkyOTEgNzY1LjAyMzYgMCBdIC9G
aXJzdCA5NCAwIFIgL0xhc3QgOTcgMCBSIC9OZXh0IDEwMSAwIFIgL1BhcmVudCA5MCAwIFIg
DQogIC9QcmV2IDkyIDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDRcMDAwLlwwMDAzXDAwMFwy
NDBcMDAwXDI0MFwwMDBcMjQwXDAwMENcMDAwb1wwMDBtXDAwMG1cMDAwb1wwMDBuXDAwMCBc
MDAwQlwwMDBvXDAwMG9cMDAwdFwwMDAgXDAwMENcMDAwb1wwMDBuXDAwMGZcMDAwaVwwMDBn
XDAwMHVcMDAwclwwMDBhXDAwMHRcMDAwaVwwMDBvXDAwMG5cMDAwcykgPj4NCmVuZG9iag0K
OTQgMCBvYmoNCjw8IC9EZXN0IFsgNzAgMCBSIC9YWVogNjIuNjkyOTEgNjgxLjAyMzYgMCBd
IC9OZXh0IDk1IDAgUiAvUGFyZW50IDkzIDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDRcMDAw
LlwwMDAzXDAwMC5cMDAwMVwwMDBcMjQwXDAwMFwyNDBcMDAwXDI0MFwwMDBEXDAwMHlcMDAw
blwwMDBhXDAwMG1cMDAwaVwwMDBjXDAwMCBcMDAwTFwwMDBhXDAwMHVcMDAwblwwMDBjXDAw
MGhcMDAwIFwwMDB3XDAwMGlcMDAwdFwwMDBoXDAwMCBcMDAwYVwwMDAgXDAwMEhcMDAwaVww
MDBnXDAwMGhcMDAwbFwwMDB5XDAwMC1cMDAwUFwwMDByXDAwMGlcMDAwdlwwMDBpXDAwMGxc
MDAwZVwwMDBnXDAwMGVcMDAwZFwwMDAgXDAwMERcMDAwb1wwMDBtXDAwMGFcMDAwaVwwMDBu
XDAwMCBcMDAwMCkgPj4NCmVuZG9iag0KOTUgMCBvYmoNCjw8IC9EZXN0IFsgNzAgMCBSIC9Y
WVogNjIuNjkyOTEgNDU2LjAyMzYgMCBdIC9OZXh0IDk2IDAgUiAvUGFyZW50IDkzIDAgUiAv
UHJldiA5NCAwIFIgL1RpdGxlIChcMzc2XDM3N1wwMDA0XDAwMC5cMDAwM1wwMDAuXDAwMDJc
MDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwU1wwMDB0XDAwMGFcMDAwdFwwMDBpXDAwMGNc
MDAwIFwwMDBMXDAwMGFcMDAwdVwwMDBuXDAwMGNcMDAwaFwwMDAgXDAwMENcMDAwb1wwMDBu
XDAwMGZcMDAwaVwwMDBnXDAwMHVcMDAwclwwMDBhXDAwMHRcMDAwaVwwMDBvXDAwMG5cMDAw
c1wwMDA6XDAwMCBcMDAwd1wwMDBpXDAwMHRcMDAwaFwwMDBvXDAwMHVcMDAwdFwwMDAgXDAw
MGFcMDAwIFwwMDBEXDAwMG9cMDAwbVwwMDBhXDAwMGlcMDAwblwwMDAgXDAwMDBcMDAwIFww
MDBvXDAwMHJcMDAwIFwwMDBhXDAwMCBcMDAwQ1wwMDBvXDAwMG5cMDAwdFwwMDByXDAwMG9c
MDAwbFwwMDAgXDAwMERcMDAwb1wwMDBtXDAwMGFcMDAwaVwwMDBuKSA+Pg0KZW5kb2JqDQo5
NiAwIG9iag0KPDwgL0Rlc3QgWyA3MCAwIFIgL1hZWiA2Mi42OTI5MSAyNjIuMDIzNiAwIF0g
L05leHQgOTcgMCBSIC9QYXJlbnQgOTMgMCBSIC9QcmV2IDk1IDAgUiAvVGl0bGUgKFwzNzZc
Mzc3XDAwMDRcMDAwLlwwMDAzXDAwMC5cMDAwM1wwMDBcMjQwXDAwMFwyNDBcMDAwXDI0MFww
MDBEXDAwMHlcMDAwblwwMDBhXDAwMG1cMDAwaVwwMDBjXDAwMCBcMDAwTFwwMDBhXDAwMHVc
MDAwblwwMDBjXDAwMGhcMDAwIFwwMDBvXDAwMGZcMDAwIFwwMDBEXDAwMGlcMDAwc1wwMDBh
XDAwMGdcMDAwZ1wwMDByXDAwMGVcMDAwZ1wwMDBhXDAwMHRcMDAwZVwwMDBkXDAwMCBcMDAw
U1wwMDB5XDAwMHNcMDAwdFwwMDBlXDAwMG1cMDAwIFwwMDBDXDAwMG9cMDAwblwwMDBmXDAw
MGlcMDAwZ1wwMDB1XDAwMHJcMDAwYVwwMDB0XDAwMGlcMDAwb1wwMDBuXDAwMHMpID4+DQpl
bmRvYmoNCjk3IDAgb2JqDQo8PCAvQ291bnQgMyAvRGVzdCBbIDcyIDAgUiAvWFlaIDYyLjY5
MjkxIDc2NS4wMjM2IDAgXSAvRmlyc3QgOTggMCBSIC9MYXN0IDEwMCAwIFIgL1BhcmVudCA5
MyAwIFIgL1ByZXYgOTYgMCBSIA0KICAvVGl0bGUgKFwzNzZcMzc3XDAwMDRcMDAwLlwwMDAz
XDAwMC5cMDAwNFwwMDBcMjQwXDAwMFwyNDBcMDAwXDI0MFwwMDBFXDAwMHhcMDAwYVwwMDBt
XDAwMHBcMDAwbFwwMDBlXDAwMCBcMDAwVVwwMDBzXDAwMGVcMDAwIFwwMDBDXDAwMGFcMDAw
c1wwMDBlXDAwMHNcMDAwIFwwMDBhXDAwMG5cMDAwZFwwMDAgXDAwMENcMDAwb1wwMDBuXDAw
MGZcMDAwaVwwMDBnXDAwMHVcMDAwclwwMDBhXDAwMHRcMDAwaVwwMDBvXDAwMG5cMDAwcykg
Pj4NCmVuZG9iag0KOTggMCBvYmoNCjw8IC9EZXN0IFsgNzIgMCBSIC9YWVogNjIuNjkyOTEg
NzIwLjAyMzYgMCBdIC9OZXh0IDk5IDAgUiAvUGFyZW50IDk3IDAgUiAvVGl0bGUgKFwzNzZc
Mzc3XDAwMDRcMDAwLlwwMDAzXDAwMC5cMDAwNFwwMDAuXDAwMDFcMDAwXDI0MFwwMDBcMjQw
XDAwMFwyNDBcMDAwVVwwMDBzXDAwMGVcMDAwIFwwMDBjXDAwMGFcMDAwc1wwMDBlXDAwMDpc
MDAwIFwwMDBNXDAwMG9cMDAwZFwwMDBlXDAwMHJcMDAwblwwMDAgXDAwMGNcMDAwbFwwMDBv
XDAwMHVcMDAwZFwwMDAgXDAwMGhcMDAweVwwMDBwXDAwMGVcMDAwclwwMDB2XDAwMGlcMDAw
c1wwMDBvXDAwMHIpID4+DQplbmRvYmoNCjk5IDAgb2JqDQo8PCAvRGVzdCBbIDcyIDAgUiAv
WFlaIDYyLjY5MjkxIDYzNi4wMjM2IDAgXSAvTmV4dCAxMDAgMCBSIC9QYXJlbnQgOTcgMCBS
IC9QcmV2IDk4IDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDRcMDAwLlwwMDAzXDAwMC5cMDAw
NFwwMDAuXDAwMDJcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwVVwwMDBzXDAwMGVcMDAw
IFwwMDBjXDAwMGFcMDAwc1wwMDBlXDAwMDpcMDAwIFwwMDBFXDAwMGRcMDAwZ1wwMDBlXDAw
MCBcMDAwZFwwMDBlXDAwMHZcMDAwaVwwMDBjXDAwMGVcMDAwIFwwMDB3XDAwMGlcMDAwdFww
MDBoXDAwMCBcMDAwc1wwMDBlXDAwMGNcMDAwdVwwMDByXDAwMGlcMDAwdFwwMDB5XDAwMCBc
MDAwb1wwMDByXDAwMCBcMDAwc1wwMDBhXDAwMGZcMDAwZVwwMDB0XDAwMHlcMDAwIFwwMDBy
XDAwMGVcMDAwcVwwMDB1XDAwMGlcMDAwclwwMDBlXDAwMG1cMDAwZVwwMDBuXDAwMHRcMDAw
cykgPj4NCmVuZG9iag0KMTAwIDAgb2JqDQo8PCAvRGVzdCBbIDcyIDAgUiAvWFlaIDYyLjY5
MjkxIDU2NC4wMjM2IDAgXSAvUGFyZW50IDk3IDAgUiAvUHJldiA5OSAwIFIgL1RpdGxlIChc
Mzc2XDM3N1wwMDA0XDAwMC5cMDAwM1wwMDAuXDAwMDRcMDAwLlwwMDAzXDAwMFwyNDBcMDAw
XDI0MFwwMDBcMjQwXDAwMFVcMDAwc1wwMDBlXDAwMCBcMDAwY1wwMDBhXDAwMHNcMDAwZVww
MDA6XDAwMCBcMDAwQ1wwMDBsXDAwMGlcMDAwZVwwMDBuXDAwMHRcMDAwIFwwMDBoXDAwMHlc
MDAwcFwwMDBlXDAwMHJcMDAwdlwwMDBpXDAwMHNcMDAwb1wwMDByKSA+Pg0KZW5kb2JqDQox
MDEgMCBvYmoNCjw8IC9EZXN0IFsgNzIgMCBSIC9YWVogNjIuNjkyOTEgNDYyLjAyMzYgMCBd
IC9OZXh0IDEwMiAwIFIgL1BhcmVudCA5MCAwIFIgL1ByZXYgOTMgMCBSIC9UaXRsZSAoXDM3
NlwzNzdcMDAwNFwwMDAuXDAwMDRcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwSFwwMDB5
XDAwMHBcMDAwZVwwMDByXDAwMGxcMDAwYVwwMDB1XDAwMG5cMDAwY1wwMDBoXDAwMCBcMDAw
RFwwMDBpXDAwMHNcMDAwYVwwMDBnXDAwMGdcMDAwclwwMDBlXDAwMGdcMDAwYVwwMDB0XDAw
MGVcMDAwZFwwMDAgXDAwMExcMDAwYVwwMDB1XDAwMG5cMDAwY1wwMDBoKSA+Pg0KZW5kb2Jq
DQoxMDIgMCBvYmoNCjw8IC9Db3VudCA0IC9EZXN0IFsgNzMgMCBSIC9YWVogNjIuNjkyOTEg
NTkxLjAyMzYgMCBdIC9GaXJzdCAxMDMgMCBSIC9MYXN0IDEwMyAwIFIgL05leHQgMTA3IDAg
UiAvUGFyZW50IDkwIDAgUiANCiAgL1ByZXYgMTAxIDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAw
MDRcMDAwLlwwMDA1XDAwMFwyNDBcMDAwXDI0MFwwMDBcMjQwXDAwME9cMDAwdlwwMDBlXDAw
MHJcMDAwdlwwMDBpXDAwMGVcMDAwd1wwMDAgXDAwMG9cMDAwZlwwMDAgXDAwMEhcMDAweVww
MDBwXDAwMGVcMDAwclwwMDBsXDAwMGFcMDAwdVwwMDBuXDAwMGNcMDAwaFwwMDAgXDAwMEZc
MDAwbFwwMDBvXDAwMHcpID4+DQplbmRvYmoNCjEwMyAwIG9iag0KPDwgL0NvdW50IDMgL0Rl
c3QgWyA3MyAwIFIgL1hZWiA2Mi42OTI5MSAzMDkuMDIzNiAwIF0gL0ZpcnN0IDEwNCAwIFIg
L0xhc3QgMTA2IDAgUiAvUGFyZW50IDEwMiAwIFIgL1RpdGxlIChcMzc2XDM3N1wwMDA0XDAw
MC5cMDAwNVwwMDAuXDAwMDFcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwSFwwMDB5XDAw
MHBcMDAwZVwwMDByXDAwMGxcMDAwYVwwMDB1XDAwMG5cMDAwY1wwMDBoXDAwMCBcMDAwWFww
MDBlXDAwMG5cMDAwIFwwMDBzXDAwMHRcMDAwYVwwMDByXDAwMHRcMDAwdVwwMDBwKSA+Pg0K
ZW5kb2JqDQoxMDQgMCBvYmoNCjw8IC9EZXN0IFsgNzQgMCBSIC9YWVogNjIuNjkyOTEgNzY1
LjAyMzYgMCBdIC9OZXh0IDEwNSAwIFIgL1BhcmVudCAxMDMgMCBSIC9UaXRsZSAoXDM3Nlwz
NzdcMDAwNFwwMDAuXDAwMDVcMDAwLlwwMDAxXDAwMC5cMDAwMVwwMDBcMjQwXDAwMFwyNDBc
MDAwXDI0MFwwMDBEXDAwMG9cMDAwbVwwMDBhXDAwMGlcMDAwblwwMDAgXDAwMENcMDAwclww
MDBlXDAwMGFcMDAwdFwwMDBpXDAwMG9cMDAwbikgPj4NCmVuZG9iag0KMTA1IDAgb2JqDQo8
PCAvRGVzdCBbIDc0IDAgUiAvWFlaIDYyLjY5MjkxIDY1MS4wMjM2IDAgXSAvTmV4dCAxMDYg
MCBSIC9QYXJlbnQgMTAzIDAgUiAvUHJldiAxMDQgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAw
NFwwMDAuXDAwMDVcMDAwLlwwMDAxXDAwMC5cMDAwMlwwMDBcMjQwXDAwMFwyNDBcMDAwXDI0
MFwwMDBEXDAwMG9cMDAwbVwwMDBhXDAwMGlcMDAwblwwMDAgXDAwMFBcMDAwclwwMDBlXDAw
MHBcMDAwYVwwMDByXDAwMGFcMDAwdFwwMDBpXDAwMG9cMDAwblwwMDAgXDAwMFBcMDAwaFww
MDBhXDAwMHNcMDAwZSkgPj4NCmVuZG9iag0KMTA2IDAgb2JqDQo8PCAvRGVzdCBbIDc0IDAg
UiAvWFlaIDYyLjY5MjkxIDQ3MS4wMjM2IDAgXSAvUGFyZW50IDEwMyAwIFIgL1ByZXYgMTA1
IDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDRcMDAwLlwwMDA1XDAwMC5cMDAwMVwwMDAuXDAw
MDNcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwTFwwMDBhXDAwMHVcMDAwblwwMDBjXDAw
MGhcMDAwIFwwMDBGXDAwMGlcMDAwblwwMDBhXDAwMGxcMDAwaVwwMDB6XDAwMGFcMDAwdFww
MDBpXDAwMG9cMDAwbikgPj4NCmVuZG9iag0KMTA3IDAgb2JqDQo8PCAvQ291bnQgOSAvRGVz
dCBbIDc0IDAgUiAvWFlaIDYyLjY5MjkxIDI1NS4wMjM2IDAgXSAvRmlyc3QgMTA4IDAgUiAv
TGFzdCAxMTYgMCBSIC9QYXJlbnQgOTAgMCBSIC9QcmV2IDEwMiAwIFIgDQogIC9UaXRsZSAo
XDM3NlwzNzdcMDAwNFwwMDAuXDAwMDZcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwU1ww
MDB0XDAwMHJcMDAwdVwwMDBjXDAwMHRcMDAwdVwwMDByXDAwMGlcMDAwblwwMDBnXDAwMCBc
MDAwb1wwMDBmXDAwMCBcMDAwSFwwMDB5XDAwMHBcMDAwZVwwMDByXDAwMGxcMDAwYVwwMDB1
XDAwMG5cMDAwY1wwMDBoKSA+Pg0KZW5kb2JqDQoxMDggMCBvYmoNCjw8IC9EZXN0IFsgNzUg
MCBSIC9YWVogNjIuNjkyOTEgNzY1LjAyMzYgMCBdIC9OZXh0IDEwOSAwIFIgL1BhcmVudCAx
MDcgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAwNFwwMDAuXDAwMDZcMDAwLlwwMDAxXDAwMFwy
NDBcMDAwXDI0MFwwMDBcMjQwXDAwMHhcMDAwOFwwMDA2XDAwMCBcMDAwTVwwMDB1XDAwMGxc
MDAwdFwwMDBpXDAwMGJcMDAwb1wwMDBvXDAwMHRcMDAwMikgPj4NCmVuZG9iag0KMTA5IDAg
b2JqDQo8PCAvRGVzdCBbIDc1IDAgUiAvWFlaIDYyLjY5MjkxIDQzMi4wMjM2IDAgXSAvTmV4
dCAxMTAgMCBSIC9QYXJlbnQgMTA3IDAgUiAvUHJldiAxMDggMCBSIC9UaXRsZSAoXDM3Nlwz
NzdcMDAwNFwwMDAuXDAwMDZcMDAwLlwwMDAyXDAwMFwyNDBcMDAwXDI0MFwwMDBcMjQwXDAw
MEFcMDAwclwwMDBtXDAwMCBcMDAwRFwwMDBlXDAwMHZcMDAwaVwwMDBjXDAwMGVcMDAwIFww
MDBUXDAwMHJcMDAwZVwwMDBlKSA+Pg0KZW5kb2JqDQoxMTAgMCBvYmoNCjw8IC9EZXN0IFsg
NzUgMCBSIC9YWVogNjIuNjkyOTEgMzE1LjAyMzYgMCBdIC9OZXh0IDExMSAwIFIgL1BhcmVu
dCAxMDcgMCBSIC9QcmV2IDEwOSAwIFIgL1RpdGxlIChcMzc2XDM3N1wwMDA0XDAwMC5cMDAw
NlwwMDAuXDAwMDNcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwWFwwMDBlXDAwMG5cMDAw
IFwwMDBoXDAwMHlcMDAwcFwwMDBlXDAwMHJcMDAwdlwwMDBpXDAwMHNcMDAwb1wwMDByKSA+
Pg0KZW5kb2JqDQoxMTEgMCBvYmoNCjw8IC9EZXN0IFsgNzUgMCBSIC9YWVogNjIuNjkyOTEg
MTk4LjAyMzYgMCBdIC9OZXh0IDExMiAwIFIgL1BhcmVudCAxMDcgMCBSIC9QcmV2IDExMCAw
IFIgL1RpdGxlIChcMzc2XDM3N1wwMDA0XDAwMC5cMDAwNlwwMDAuXDAwMDRcMDAwXDI0MFww
MDBcMjQwXDAwMFwyNDBcMDAwQlwwMDBvXDAwMG9cMDAwdFwwMDAgXDAwMERcMDAwb1wwMDBt
XDAwMGFcMDAwaVwwMDBuKSA+Pg0KZW5kb2JqDQoxMTIgMCBvYmoNCjw8IC9Db3VudCAxIC9E
ZXN0IFsgNzYgMCBSIC9YWVogNjIuNjkyOTEgNzY1LjAyMzYgMCBdIC9GaXJzdCAxMTMgMCBS
IC9MYXN0IDExMyAwIFIgL05leHQgMTE0IDAgUiAvUGFyZW50IDEwNyAwIFIgDQogIC9QcmV2
IDExMSAwIFIgL1RpdGxlIChcMzc2XDM3N1wwMDA0XDAwMC5cMDAwNlwwMDAuXDAwMDVcMDAw
XDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwUlwwMDBlXDAwMGNcMDAwb1wwMDB2XDAwMGVcMDAw
clwwMDB5XDAwMCBcMDAwRFwwMDBvXDAwMG1cMDAwYVwwMDBpXDAwMG4pID4+DQplbmRvYmoN
CjExMyAwIG9iag0KPDwgL0Rlc3QgWyA3NiAwIFIgL1hZWiA2Mi42OTI5MSA1NzAuMDIzNiAw
IF0gL1BhcmVudCAxMTIgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAwNFwwMDAuXDAwMDZcMDAw
LlwwMDA1XDAwMC5cMDAwMVwwMDBcMjQwXDAwMFwyNDBcMDAwXDI0MFwwMDBEXDAwMGVcMDAw
ZlwwMDBlXDAwMHJcMDAwclwwMDBlXDAwMGRcMDAwIFwwMDBEXDAwMGVcMDAwc1wwMDBpXDAw
MGdcMDAwbikgPj4NCmVuZG9iag0KMTE0IDAgb2JqDQo8PCAvRGVzdCBbIDc2IDAgUiAvWFla
IDYyLjY5MjkxIDQyMC4wMjM2IDAgXSAvTmV4dCAxMTUgMCBSIC9QYXJlbnQgMTA3IDAgUiAv
UHJldiAxMTIgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAwNFwwMDAuXDAwMDZcMDAwLlwwMDA2
XDAwMFwyNDBcMDAwXDI0MFwwMDBcMjQwXDAwMENcMDAwb1wwMDBuXDAwMHRcMDAwclwwMDBv
XDAwMGxcMDAwIFwwMDBEXDAwMG9cMDAwbVwwMDBhXDAwMGlcMDAwbikgPj4NCmVuZG9iag0K
MTE1IDAgb2JqDQo8PCAvRGVzdCBbIDc2IDAgUiAvWFlaIDYyLjY5MjkxIDMzOS4wMjM2IDAg
XSAvTmV4dCAxMTYgMCBSIC9QYXJlbnQgMTA3IDAgUiAvUHJldiAxMTQgMCBSIC9UaXRsZSAo
XDM3NlwzNzdcMDAwNFwwMDAuXDAwMDZcMDAwLlwwMDA3XDAwMFwyNDBcMDAwXDI0MFwwMDBc
MjQwXDAwMEhcMDAwYVwwMDByXDAwMGRcMDAwd1wwMDBhXDAwMHJcMDAwZVwwMDAgXDAwMERc
MDAwb1wwMDBtXDAwMGFcMDAwaVwwMDBuKSA+Pg0KZW5kb2JqDQoxMTYgMCBvYmoNCjw8IC9E
ZXN0IFsgNzYgMCBSIC9YWVogNjIuNjkyOTEgMjQ2LjAyMzYgMCBdIC9QYXJlbnQgMTA3IDAg
UiAvUHJldiAxMTUgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAwNFwwMDAuXDAwMDZcMDAwLlww
MDA4XDAwMFwyNDBcMDAwXDI0MFwwMDBcMjQwXDAwMENcMDAwb1wwMDBuXDAwMHNcMDAwb1ww
MDBsXDAwMGVcMDAwIFwwMDBEXDAwMG9cMDAwbVwwMDBhXDAwMGlcMDAwbikgPj4NCmVuZG9i
ag0KMTE3IDAgb2JqDQo8PCAvRGVzdCBbIDc2IDAgUiAvWFlaIDYyLjY5MjkxIDE0MS4wMjM2
IDAgXSAvTmV4dCAxMTggMCBSIC9QYXJlbnQgODUgMCBSIC9QcmV2IDkwIDAgUiAvVGl0bGUg
KFwzNzZcMzc3XDAwMDVcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwQ1wwMDBvXDAwMG1c
MDAwbVwwMDB1XDAwMG5cMDAwaVwwMDBjXDAwMGFcMDAwdFwwMDBpXDAwMG9cMDAwblwwMDAg
XDAwMG9cMDAwZlwwMDAgXDAwMERcMDAwb1wwMDBtXDAwMGFcMDAwaVwwMDBuXDAwMCBcMDAw
Q1wwMDBvXDAwMG5cMDAwZlwwMDBpXDAwMGdcMDAwdVwwMDByXDAwMGFcMDAwdFwwMDBpXDAw
MG9cMDAwblwwMDBzKSA+Pg0KZW5kb2JqDQoxMTggMCBvYmoNCjw8IC9Db3VudCA0IC9EZXN0
IFsgNzcgMCBSIC9YWVogNjIuNjkyOTEgNjEwLjY3NzIgMCBdIC9GaXJzdCAxMTkgMCBSIC9M
YXN0IDEyMiAwIFIgL1BhcmVudCA4NSAwIFIgL1ByZXYgMTE3IDAgUiANCiAgL1RpdGxlIChc
Mzc2XDM3N1wwMDA2XDAwMFwyNDBcMDAwXDI0MFwwMDBcMjQwXDAwMEFcMDAwcFwwMDBwXDAw
MGVcMDAwblwwMDBkXDAwMGlcMDAweCkgPj4NCmVuZG9iag0KMTE5IDAgb2JqDQo8PCAvRGVz
dCBbIDc3IDAgUiAvWFlaIDYyLjY5MjkxIDU3Ny42NzcyIDAgXSAvTmV4dCAxMjAgMCBSIC9Q
YXJlbnQgMTE4IDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDZcMDAwLlwwMDAxXDAwMFwyNDBc
MDAwXDI0MFwwMDBcMjQwXDAwMEFcMDAwcFwwMDBwXDAwMGVcMDAwblwwMDBkXDAwMGlcMDAw
eFwwMDAgXDAwMDFcMDAwOlwwMDAgXDAwMEZcMDAwbFwwMDBvXDAwMHdcMDAwIFwwMDBTXDAw
MGVcMDAwcVwwMDB1XDAwMGVcMDAwblwwMDBjXDAwMGVcMDAwIFwwMDBvXDAwMGZcMDAwIFww
MDBTXDAwMHRcMDAwZVwwMDBwXDAwMHNcMDAwIFwwMDBvXDAwMGZcMDAwIFwwMDBhXDAwMCBc
MDAwSFwwMDB5XDAwMHBcMDAwZVwwMDByXDAwMGxcMDAwYVwwMDB1XDAwMG5cMDAwY1wwMDBo
XDAwMCBcMDAwQlwwMDBvXDAwMG9cMDAwdCkgPj4NCmVuZG9iag0KMTIwIDAgb2JqDQo8PCAv
RGVzdCBbIDc4IDAgUiAvWFlaIDYyLjY5MjkxIDYzMS44MjM2IDAgXSAvTmV4dCAxMjEgMCBS
IC9QYXJlbnQgMTE4IDAgUiAvUHJldiAxMTkgMCBSIC9UaXRsZSAoXDM3NlwzNzdcMDAwNlww
MDAuXDAwMDJcMDAwXDI0MFwwMDBcMjQwXDAwMFwyNDBcMDAwQVwwMDBwXDAwMHBcMDAwZVww
MDBuXDAwMGRcMDAwaVwwMDB4XDAwMCBcMDAwMlwwMDA6XDAwMCBcMDAwQ1wwMDBvXDAwMG5c
MDAwc1wwMDBpXDAwMGRcMDAwZVwwMDByXDAwMGFcMDAwdFwwMDBpXDAwMG9cMDAwblwwMDBz
XDAwMCBcMDAwaVwwMDBuXDAwMCBcMDAwTlwwMDBhXDAwMG1cMDAwaVwwMDBuXDAwMGdcMDAw
IFwwMDB0XDAwMGhcMDAwZVwwMDAgXDAwMEhcMDAweVwwMDBwXDAwMGVcMDAwclwwMDBsXDAw
MGFcMDAwdVwwMDBuXDAwMGNcMDAwaFwwMDAgXDAwMEZcMDAwZVwwMDBhXDAwMHRcMDAwdVww
MDByXDAwMGUpID4+DQplbmRvYmoNCjEyMSAwIG9iag0KPDwgL0Rlc3QgWyA4MCAwIFIgL1hZ
WiA2Mi42OTI5MSA2MDkuMDIzNiAwIF0gL05leHQgMTIyIDAgUiAvUGFyZW50IDExOCAwIFIg
L1ByZXYgMTIwIDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDZcMDAwLlwwMDAzXDAwMFwyNDBc
MDAwXDI0MFwwMDBcMjQwXDAwMEFcMDAwcFwwMDBwXDAwMGVcMDAwblwwMDBkXDAwMGlcMDAw
eFwwMDAgXDAwMDNcMDAwOlwwMDAgXDAwMFRcMDAwZVwwMDByXDAwMG1cMDAwaVwwMDBuXDAw
MG9cMDAwbFwwMDBvXDAwMGdcMDAweSkgPj4NCmVuZG9iag0KMTIyIDAgb2JqDQo8PCAvRGVz
dCBbIDgyIDAgUiAvWFlaIDYyLjY5MjkxIDM4Mi4wMjM2IDAgXSAvUGFyZW50IDExOCAwIFIg
L1ByZXYgMTIxIDAgUiAvVGl0bGUgKFwzNzZcMzc3XDAwMDZcMDAwLlwwMDA0XDAwMFwyNDBc
MDAwXDI0MFwwMDBcMjQwXDAwMEFcMDAwcFwwMDBwXDAwMGVcMDAwblwwMDBkXDAwMGlcMDAw
eFwwMDAgXDAwMDRcMDAwOlwwMDAgXDAwMENcMDAwb1wwMDBwXDAwMHlcMDAwclwwMDBpXDAw
MGdcMDAwaFwwMDB0XDAwMCBcMDAwTFwwMDBpXDAwMGNcMDAwZVwwMDBuXDAwMHNcMDAwZSkg
Pj4NCmVuZG9iag0KMTIzIDAgb2JqDQo8PCAvQ291bnQgMTUgL0tpZHMgWyA2NCAwIFIgNjUg
MCBSIDY2IDAgUiA2OCAwIFIgNzAgMCBSIDcyIDAgUiA3MyAwIFIgNzQgMCBSIDc1IDAgUiA3
NiAwIFIgDQogIDc3IDAgUiA3OCAwIFIgNzkgMCBSIDgwIDAgUiA4MiAwIFIgXSAvVHlwZSAv
UGFnZXMgPj4NCmVuZG9iag0KMTI0IDAgb2JqDQo8PCAvTGVuZ3RoIDg5NjIgPj4NCnN0cmVh
bQ0KMSAwIDAgMSAwIDAgY20gIEJUIC9GMSAxMiBUZiAxNC40IFRMIEVUDQpxDQoxIDAgMCAx
IDYyLjY5MjkxIDc0MS4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDQgVG0g
L0YyIDIwIFRmIDI0IFRMIDg2LjU4NDg4IDAgVGQgKEh5cGVybGF1bmNoIERlc2lnbiBEb2N1
bWVudCkgVGogVCogLTg2LjU4NDg4IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgNjIuNjky
OTEgNjcxLjAyMzYgY20NCnENCkJUIDEgMCAwIDEgMCA1MCBUbSAuMDc4ODEgVHcgMTIgVEwg
L0YxIDEwIFRmIDAgMCAwIHJnIChUaGlzIHBvc3QgaXMgYSBSZXF1ZXN0IGZvciBDb21tZW50
IG9uIHRoZSBpbmNsdWRlZCB2NCBvZiBhIGRlc2lnbiBkb2N1bWVudCB0aGF0IGRlc2NyaWJl
cyBIeXBlcmxhdW5jaDopIFRqIFQqIDAgVHcgMS44MzEzMTggVHcgKGEgbmV3IG1ldGhvZCBv
ZiBsYXVuY2hpbmcgdGhlIFhlbiBoeXBlcnZpc29yLCByZWxhdGluZyB0byBkb20wbGVzcyBh
bmQgd29yayBmcm9tIHRoZSBIeXBlcmxhdW5jaCkgVGogVCogMCBUdyAxLjE5MTc1MSBUdyAo
cHJvamVjdC4gV2UgaW52aXRlIGRpc2N1c3Npb24gb2YgdGhpcyBvbiB0aGlzIGxpc3QsIGF0
IHRoZSBtb250aGx5IFhlbiBDb21tdW5pdHkgQ2FsbHMsIGFuZCBhdCBkZWRpY2F0ZWQpIFRq
IFQqIDAgVHcgMi40NjQ2OTIgVHcgKG1lZXRpbmdzIG9uIHRoaXMgdG9waWMgaW4gdGhlIFhl
biBXb3JraW5nIEdyb3VwIHdoaWNoIHdpbGwgYmUgYW5ub3VuY2VkIGluIGFkdmFuY2Ugb24g
dGhlIFhlbikgVGogVCogMCBUdyAoRGV2ZWxvcG1lbnQgbWFpbGluZyBsaXN0LikgVGogVCog
RVQNClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNjM4LjAyMzYgY20NCnENCkJUIDEgMCAw
IDEgMCAzLjUgVG0gMjEgVEwgL0YyIDE3LjUgVGYgMCAwIDAgcmcgKENvbnRlbnRzKSBUaiBU
KiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA5Mi4wMjM2MiBjbQ0KMCAwIDAgcmcN
CkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KcQ0KMSAwIDAgMSAwIDUyNSBjbQ0KcQ0KQlQgMSAw
IDAgMSAwIDIgVG0gMTIgVEwgL0YyIDEwIFRmIDAgMCAuNTAxOTYxIHJnICgxICAgSW50cm9k
dWN0aW9uKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzOTcuODg5OCA1MjUgY20NCnEN
CjAgMCAuNTAxOTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQgMSAwIDAgMSAwIDIgVG0gL0Yy
IDEwIFRmIDEyIFRMIDY2LjQ0IDAgVGQgKDIpIFRqIFQqIC02Ni40NCAwIFRkIEVUDQpRDQpR
DQpxDQoxIDAgMCAxIDAgNTA3IGNtDQpxDQpCVCAxIDAgMCAxIDAgMiBUbSAxMiBUTCAvRjIg
MTAgVGYgMCAwIC41MDE5NjEgcmcgKDIgICBEb2N1bWVudCBTdHJ1Y3R1cmUpIFRqIFQqIEVU
DQpRDQpRDQpxDQoxIDAgMCAxIDM5Ny44ODk4IDUwNyBjbQ0KcQ0KMCAwIC41MDE5NjEgcmcN
CjAgMCAuNTAxOTYxIFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgNjYu
NDQgMCBUZCAoMikgVGogVCogLTY2LjQ0IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMCA0
ODkgY20NCnENCkJUIDEgMCAwIDEgMCAyIFRtIDEyIFRMIC9GMiAxMCBUZiAwIDAgLjUwMTk2
MSByZyAoMyAgIEFwcHJvYWNoKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzOTcuODg5
OCA0ODkgY20NCnENCjAgMCAuNTAxOTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQgMSAwIDAg
MSAwIDIgVG0gL0YyIDEwIFRmIDEyIFRMIDY2LjQ0IDAgVGQgKDIpIFRqIFQqIC02Ni40NCAw
IFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDAgNDcxIGNtDQpxDQpCVCAxIDAgMCAxIDIwIDIg
VG0gMTIgVEwgL0YxIDEwIFRmIDAgMCAuNTAxOTYxIHJnICgzLjEgICBPYmplY3RpdmVzKSBU
aiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzOTcuODg5OCA0NzEgY20NCnENCjAgMCAuNTAx
OTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEy
IFRMIDY2LjQ0IDAgVGQgKDIpIFRqIFQqIC02Ni40NCAwIFRkIEVUDQpRDQpRDQpxDQoxIDAg
MCAxIDAgNDUzIGNtDQpxDQpCVCAxIDAgMCAxIDAgMiBUbSAxMiBUTCAvRjIgMTAgVGYgMCAw
IC41MDE5NjEgcmcgKDQgICBSZXF1aXJlbWVudHMgYW5kIERlc2lnbikgVGogVCogRVQNClEN
ClENCnENCjEgMCAwIDEgMzk3Ljg4OTggNDUzIGNtDQpxDQowIDAgLjUwMTk2MSByZw0KMCAw
IC41MDE5NjEgUkcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMiAxMCBUZiAxMiBUTCA2Ni40NCAw
IFRkICgzKSBUaiBUKiAtNjYuNDQgMCBUZCBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAwIDQzNSBj
bQ0KcQ0KQlQgMSAwIDAgMSAyMCAyIFRtIDEyIFRMIC9GMSAxMCBUZiAwIDAgLjUwMTk2MSBy
ZyAoNC4xICAgSHlwZXJ2aXNvciBMYXVuY2ggTGFuZHNjYXBlKSBUaiBUKiBFVA0KUQ0KUQ0K
cQ0KMSAwIDAgMSAzOTcuODg5OCA0MzUgY20NCnENCjAgMCAuNTAxOTYxIHJnDQowIDAgLjUw
MTk2MSBSRw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMIDY2LjQ0IDAgVGQg
KDMpIFRqIFQqIC02Ni40NCAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDAgNDE3IGNtDQpx
DQpCVCAxIDAgMCAxIDIwIDIgVG0gMTIgVEwgL0YxIDEwIFRmIDAgMCAuNTAxOTYxIHJnICg0
LjIgICBEb21haW4gQ29uc3RydWN0aW9uKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAz
OTcuODg5OCA0MTcgY20NCnENCjAgMCAuNTAxOTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQg
MSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMIDY2LjQ0IDAgVGQgKDQpIFRqIFQqIC02
Ni40NCAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDAgMzk5IGNtDQpxDQpCVCAxIDAgMCAx
IDIwIDIgVG0gMTIgVEwgL0YxIDEwIFRmIDAgMCAuNTAxOTYxIHJnICg0LjMgICBDb21tb24g
Qm9vdCBDb25maWd1cmF0aW9ucykgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgMzk3Ljg4
OTggMzk5IGNtDQpxDQowIDAgLjUwMTk2MSByZw0KMCAwIC41MDE5NjEgUkcNCkJUIDEgMCAw
IDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCA2Ni40NCAwIFRkICg1KSBUaiBUKiAtNjYuNDQg
MCBUZCBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAwIDM4MSBjbQ0KcQ0KQlQgMSAwIDAgMSA0MCAy
IFRtIDEyIFRMIC9GMSAxMCBUZiAwIDAgLjUwMTk2MSByZyAoNC4zLjEgICBEeW5hbWljIExh
dW5jaCB3aXRoIGEgSGlnaGx5LVByaXZpbGVnZWQgRG9tYWluIDApIFRqIFQqIEVUDQpRDQpR
DQpxDQoxIDAgMCAxIDM5Ny44ODk4IDM4MSBjbQ0KcQ0KMCAwIC41MDE5NjEgcmcNCjAgMCAu
NTAxOTYxIFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgNjYuNDQgMCBU
ZCAoNSkgVGogVCogLTY2LjQ0IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMCAzNjMgY20N
CnENCkJUIDEgMCAwIDEgNDAgMiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAwIC41MDE5NjEgcmcg
KDQuMy4yICAgU3RhdGljIExhdW5jaCBDb25maWd1cmF0aW9uczogd2l0aG91dCBhIERvbWFp
biAwIG9yIGEgQ29udHJvbCBEb21haW4pIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDM5
Ny44ODk4IDM2MyBjbQ0KcQ0KMCAwIC41MDE5NjEgcmcNCjAgMCAuNTAxOTYxIFJHDQpCVCAx
IDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgNjYuNDQgMCBUZCAoNSkgVGogVCogLTY2
LjQ0IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMCAzNDUgY20NCnENCkJUIDEgMCAwIDEg
NDAgMiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAwIC41MDE5NjEgcmcgKDQuMy4zICAgRHluYW1p
YyBMYXVuY2ggb2YgRGlzYWdncmVnYXRlZCBTeXN0ZW0gQ29uZmlndXJhdGlvbnMpIFRqIFQq
IEVUDQpRDQpRDQpxDQoxIDAgMCAxIDM5Ny44ODk4IDM0NSBjbQ0KcQ0KMCAwIC41MDE5NjEg
cmcNCjAgMCAuNTAxOTYxIFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwg
NjYuNDQgMCBUZCAoNSkgVGogVCogLTY2LjQ0IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEg
MCAzMjcgY20NCnENCkJUIDEgMCAwIDEgNDAgMiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAwIC41
MDE5NjEgcmcgKDQuMy40ICAgRXhhbXBsZSBVc2UgQ2FzZXMgYW5kIENvbmZpZ3VyYXRpb25z
KSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzOTcuODg5OCAzMjcgY20NCnENCjAgMCAu
NTAxOTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRm
IDEyIFRMIDY2LjQ0IDAgVGQgKDYpIFRqIFQqIC02Ni40NCAwIFRkIEVUDQpRDQpRDQpxDQox
IDAgMCAxIDAgMzA5IGNtDQpxDQpCVCAxIDAgMCAxIDIwIDIgVG0gMTIgVEwgL0YxIDEwIFRm
IDAgMCAuNTAxOTYxIHJnICg0LjQgICBIeXBlcmxhdW5jaCBEaXNhZ2dyZWdhdGVkIExhdW5j
aCkgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgMzk3Ljg4OTggMzA5IGNtDQpxDQowIDAg
LjUwMTk2MSByZw0KMCAwIC41MDE5NjEgUkcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBU
ZiAxMiBUTCA2Ni40NCAwIFRkICg2KSBUaiBUKiAtNjYuNDQgMCBUZCBFVA0KUQ0KUQ0KcQ0K
MSAwIDAgMSAwIDI5MSBjbQ0KcQ0KQlQgMSAwIDAgMSAyMCAyIFRtIDEyIFRMIC9GMSAxMCBU
ZiAwIDAgLjUwMTk2MSByZyAoNC41ICAgT3ZlcnZpZXcgb2YgSHlwZXJsYXVuY2ggRmxvdykg
VGogVCogRVQNClENClENCnENCjEgMCAwIDEgMzk3Ljg4OTggMjkxIGNtDQpxDQowIDAgLjUw
MTk2MSByZw0KMCAwIC41MDE5NjEgUkcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAx
MiBUTCA2Ni40NCAwIFRkICg3KSBUaiBUKiAtNjYuNDQgMCBUZCBFVA0KUQ0KUQ0KcQ0KMSAw
IDAgMSAwIDI3MyBjbQ0KcQ0KQlQgMSAwIDAgMSA0MCAyIFRtIDEyIFRMIC9GMSAxMCBUZiAw
IDAgLjUwMTk2MSByZyAoNC41LjEgICBIeXBlcmxhdW5jaCBYZW4gc3RhcnR1cCkgVGogVCog
RVQNClENClENCnENCjEgMCAwIDEgMzk3Ljg4OTggMjczIGNtDQpxDQowIDAgLjUwMTk2MSBy
Zw0KMCAwIC41MDE5NjEgUkcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCA2
Ni40NCAwIFRkICg3KSBUaiBUKiAtNjYuNDQgMCBUZCBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAw
IDI1NSBjbQ0KcQ0KQlQgMSAwIDAgMSAyMCAyIFRtIDEyIFRMIC9GMSAxMCBUZiAwIDAgLjUw
MTk2MSByZyAoNC42ICAgU3RydWN0dXJpbmcgb2YgSHlwZXJsYXVuY2gpIFRqIFQqIEVUDQpR
DQpRDQpxDQoxIDAgMCAxIDM5Ny44ODk4IDI1NSBjbQ0KcQ0KMCAwIC41MDE5NjEgcmcNCjAg
MCAuNTAxOTYxIFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgNjYuNDQg
MCBUZCAoOCkgVGogVCogLTY2LjQ0IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMCAyMzcg
Y20NCnENCkJUIDEgMCAwIDEgNDAgMiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAwIC41MDE5NjEg
cmcgKDQuNi4xICAgeDg2IE11bHRpYm9vdDIpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAx
IDM5Ny44ODk4IDIzNyBjbQ0KcQ0KMCAwIC41MDE5NjEgcmcNCjAgMCAuNTAxOTYxIFJHDQpC
VCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgNjYuNDQgMCBUZCAoOSkgVGogVCog
LTY2LjQ0IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMCAyMTkgY20NCnENCkJUIDEgMCAw
IDEgNDAgMiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAwIC41MDE5NjEgcmcgKDQuNi4yICAgQXJt
IERldmljZSBUcmVlKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzOTcuODg5OCAyMTkg
Y20NCnENCjAgMCAuNTAxOTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQgMSAwIDAgMSAwIDIg
VG0gL0YxIDEwIFRmIDEyIFRMIDY2LjQ0IDAgVGQgKDkpIFRqIFQqIC02Ni40NCAwIFRkIEVU
DQpRDQpRDQpxDQoxIDAgMCAxIDAgMjAxIGNtDQpxDQpCVCAxIDAgMCAxIDQwIDIgVG0gMTIg
VEwgL0YxIDEwIFRmIDAgMCAuNTAxOTYxIHJnICg0LjYuMyAgIFhlbiBoeXBlcnZpc29yKSBU
aiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzOTcuODg5OCAyMDEgY20NCnENCjAgMCAuNTAx
OTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEy
IFRMIDY2LjQ0IDAgVGQgKDkpIFRqIFQqIC02Ni40NCAwIFRkIEVUDQpRDQpRDQpxDQoxIDAg
MCAxIDAgMTgzIGNtDQpxDQpCVCAxIDAgMCAxIDQwIDIgVG0gMTIgVEwgL0YxIDEwIFRmIDAg
MCAuNTAxOTYxIHJnICg0LjYuNCAgIEJvb3QgRG9tYWluKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0K
MSAwIDAgMSAzOTcuODg5OCAxODMgY20NCnENCjAgMCAuNTAxOTYxIHJnDQowIDAgLjUwMTk2
MSBSRw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMIDY2LjQ0IDAgVGQgKDkp
IFRqIFQqIC02Ni40NCAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDAgMTY1IGNtDQpxDQpC
VCAxIDAgMCAxIDQwIDIgVG0gMTIgVEwgL0YxIDEwIFRmIDAgMCAuNTAxOTYxIHJnICg0LjYu
NSAgIFJlY292ZXJ5IERvbWFpbikgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgMzk3Ljg4
OTggMTY1IGNtDQpxDQowIDAgLjUwMTk2MSByZw0KMCAwIC41MDE5NjEgUkcNCkJUIDEgMCAw
IDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCA2MC44OCAwIFRkICgxMCkgVGogVCogLTYwLjg4
IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMCAxNDcgY20NCnENCkJUIDEgMCAwIDEgNDAg
MiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAwIC41MDE5NjEgcmcgKDQuNi42ICAgQ29udHJvbCBE
b21haW4pIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDM5Ny44ODk4IDE0NyBjbQ0KcQ0K
MCAwIC41MDE5NjEgcmcNCjAgMCAuNTAxOTYxIFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEg
MTAgVGYgMTIgVEwgNjAuODggMCBUZCAoMTApIFRqIFQqIC02MC44OCAwIFRkIEVUDQpRDQpR
DQpxDQoxIDAgMCAxIDAgMTI5IGNtDQpxDQpCVCAxIDAgMCAxIDQwIDIgVG0gMTIgVEwgL0Yx
IDEwIFRmIDAgMCAuNTAxOTYxIHJnICg0LjYuNyAgIEhhcmR3YXJlIERvbWFpbikgVGogVCog
RVQNClENClENCnENCjEgMCAwIDEgMzk3Ljg4OTggMTI5IGNtDQpxDQowIDAgLjUwMTk2MSBy
Zw0KMCAwIC41MDE5NjEgUkcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCA2
MC44OCAwIFRkICgxMCkgVGogVCogLTYwLjg4IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEg
MCAxMTEgY20NCnENCkJUIDEgMCAwIDEgNDAgMiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAwIC41
MDE5NjEgcmcgKDQuNi44ICAgQ29uc29sZSBEb21haW4pIFRqIFQqIEVUDQpRDQpRDQpxDQox
IDAgMCAxIDM5Ny44ODk4IDExMSBjbQ0KcQ0KMCAwIC41MDE5NjEgcmcNCjAgMCAuNTAxOTYx
IFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgNjAuODggMCBUZCAoMTAp
IFRqIFQqIC02MC44OCAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDAgOTMgY20NCnENCkJU
IDEgMCAwIDEgMCAyIFRtIDEyIFRMIC9GMiAxMCBUZiAwIDAgLjUwMTk2MSByZyAoNSAgIENv
bW11bmljYXRpb24gb2YgRG9tYWluIENvbmZpZ3VyYXRpb25zKSBUaiBUKiBFVA0KUQ0KUQ0K
cQ0KMSAwIDAgMSAzOTcuODg5OCA5MyBjbQ0KcQ0KMCAwIC41MDE5NjEgcmcNCjAgMCAuNTAx
OTYxIFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgNjAuODggMCBUZCAo
MTApIFRqIFQqIC02MC44OCAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDAgNzUgY20NCnEN
CkJUIDEgMCAwIDEgMCAyIFRtIDEyIFRMIC9GMiAxMCBUZiAwIDAgLjUwMTk2MSByZyAoNiAg
IEFwcGVuZGl4KSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzOTcuODg5OCA3NSBjbQ0K
cQ0KMCAwIC41MDE5NjEgcmcNCjAgMCAuNTAxOTYxIFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAv
RjIgMTAgVGYgMTIgVEwgNjAuODggMCBUZCAoMTEpIFRqIFQqIC02MC44OCAwIFRkIEVUDQpR
DQpRDQpxDQoxIDAgMCAxIDAgNTcgY20NCnENCkJUIDEgMCAwIDEgMjAgMiBUbSAxMiBUTCAv
RjEgMTAgVGYgMCAwIC41MDE5NjEgcmcgKDYuMSAgIEFwcGVuZGl4IDE6IEZsb3cgU2VxdWVu
Y2Ugb2YgU3RlcHMgb2YgYSBIeXBlcmxhdW5jaCBCb290KSBUaiBUKiBFVA0KUQ0KUQ0KcQ0K
MSAwIDAgMSAzOTcuODg5OCA1NyBjbQ0KcQ0KMCAwIC41MDE5NjEgcmcNCjAgMCAuNTAxOTYx
IFJHDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgNjAuODggMCBUZCAoMTEp
IFRqIFQqIC02MC44OCAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDAgMzkgY20NCnENCkJU
IDEgMCAwIDEgMjAgMiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAwIC41MDE5NjEgcmcgKDYuMiAg
IEFwcGVuZGl4IDI6IENvbnNpZGVyYXRpb25zIGluIE5hbWluZyB0aGUgSHlwZXJsYXVuY2gg
RmVhdHVyZSkgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgMzk3Ljg4OTggMzkgY20NCnEN
CjAgMCAuNTAxOTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQgMSAwIDAgMSAwIDIgVG0gL0Yx
IDEwIFRmIDEyIFRMIDYwLjg4IDAgVGQgKDEyKSBUaiBUKiAtNjAuODggMCBUZCBFVA0KUQ0K
UQ0KcQ0KMSAwIDAgMSAwIDIxIGNtDQpxDQpCVCAxIDAgMCAxIDIwIDIgVG0gMTIgVEwgL0Yx
IDEwIFRmIDAgMCAuNTAxOTYxIHJnICg2LjMgICBBcHBlbmRpeCAzOiBUZXJtaW5vbG9neSkg
VGogVCogRVQNClENClENCnENCjEgMCAwIDEgMzk3Ljg4OTggMjEgY20NCnENCjAgMCAuNTAx
OTYxIHJnDQowIDAgLjUwMTk2MSBSRw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEy
IFRMIDYwLjg4IDAgVGQgKDE0KSBUaiBUKiAtNjAuODggMCBUZCBFVA0KUQ0KUQ0KcQ0KMSAw
IDAgMSAwIDMgY20NCnENCkJUIDEgMCAwIDEgMjAgMiBUbSAxMiBUTCAvRjEgMTAgVGYgMCAw
IC41MDE5NjEgcmcgKDYuNCAgIEFwcGVuZGl4IDQ6IENvcHlyaWdodCBMaWNlbnNlKSBUaiBU
KiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzOTcuODg5OCAzIGNtDQpxDQowIDAgLjUwMTk2MSBy
Zw0KMCAwIC41MDE5NjEgUkcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCA2
MC44OCAwIFRkICgxNSkgVGogVCogLTYwLjg4IDAgVGQgRVQNClENClENCnENClENClENCiAN
CmVuZHN0cmVhbQ0KZW5kb2JqDQoxMjUgMCBvYmoNCjw8IC9MZW5ndGggODI1MCA+Pg0Kc3Ry
ZWFtDQoxIDAgMCAxIDAgMCBjbSAgQlQgL0YxIDEyIFRmIDE0LjQgVEwgRVQNCnENCjEgMCAw
IDEgNjIuNjkyOTEgNzQ0LjAyMzYgY20NCnENCkJUIDEgMCAwIDEgMCAzLjUgVG0gMjEgVEwg
L0YyIDE3LjUgVGYgMCAwIDAgcmcgKDEgICBJbnRyb2R1Y3Rpb24pIFRqIFQqIEVUDQpRDQpR
DQpxDQoxIDAgMCAxIDYyLjY5MjkxIDcxNC4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAw
IDAgMSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAuMzg1MzE4IFR3IChUaGlzIGRvY3VtZW50
IGRlc2NyaWJlcyB0aGUgZGVzaWduIGFuZCBtb3RpdmF0aW9uIGZvciB0aGUgZnVuZGVkIGRl
dmVsb3BtZW50IG9mIGEgbmV3LCBmbGV4aWJsZSBzeXN0ZW0pIFRqIFQqIDAgVHcgKGZvciBs
YXVuY2hpbmcgdGhlIFhlbiBoeXBlcnZpc29yIGFuZCB2aXJ0dWFsIG1hY2hpbmVzIG5hbWVk
OiAiSHlwZXJsYXVuY2giLikgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEg
NjM2LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgNjIgVG0gL0YxIDEwIFRm
IDEyIFRMIDEuNDYxMzE4IFR3IChUaGUgZGVzaWduIGVuYWJsZXMgc2VhbWxlc3MgdHJhbnNp
dGlvbiBmb3IgZXhpc3Rpbmcgc3lzdGVtcyB0aGF0IHJlcXVpcmUgYSBkb20wLCBhbmQgcHJv
dmlkZXMgYSBuZXcpIFRqIFQqIDAgVHcgLjc0NDU5NyBUdyAoZ2VuZXJhbCBjYXBhYmlsaXR5
IHRvIGJ1aWxkIGFuZCBsYXVuY2ggYWx0ZXJuYXRpdmUgY29uZmlndXJhdGlvbnMgb2Ygdmly
dHVhbCBtYWNoaW5lcywgaW5jbHVkaW5nIHN1cHBvcnQgZm9yKSBUaiBUKiAwIFR3IC43Nzgx
MSBUdyAoc3RhdGljIHBhcnRpdGlvbmluZyBhbmQgYWNjZWxlcmF0ZWQgc3RhcnQgb2YgVk1z
IGR1cmluZyBob3N0IGJvb3QsIHdoaWxlIGFkaGVyaW5nIHRvIHRoZSBwcmluY2lwbGVzIG9m
IGxlYXN0KSBUaiBUKiAwIFR3IDEuNDcwNzUxIFR3IChwcml2aWxlZ2UuIEl0IGluY29ycG9y
YXRlcyB0aGUgZXhpc3RpbmcgZG9tMGxlc3MgZnVuY3Rpb25hbGl0eSwgZXh0ZW5kZWQgdG8g
Zm9sZCBpbiB0aGUgbmV3IGRldmVsb3BtZW50cykgVGogVCogMCBUdyAuMTYzOTg0IFR3IChm
cm9tIHRoZSBIeXBlcmxhdW5jaCBwcm9qZWN0LCB3aXRoIHN1cHBvcnQgZm9yIGJvdGggeDg2
IGFuZCBBcm0gcGxhdGZvcm0gYXJjaGl0ZWN0dXJlcywgYnVpbGRpbmcgdXBvbiBhbmQpIFRq
IFQqIDAgVHcgKHJlcGxhY2luZyB0aGUgZWFybGllciAnbGF0ZSBoYXJkd2FyZSBkb21haW4n
IGZlYXR1cmUgZm9yIGRpc2FnZ3JlZ2F0aW9uIG9mIGRvbTAuKSBUaiBUKiBFVA0KUQ0KUQ0K
cQ0KMSAwIDAgMSA2Mi42OTI5MSA2MDYuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAw
IDEgMCAxNCBUbSAvRjEgMTAgVGYgMTIgVEwgLjYzNTI4IFR3IChIeXBlcmxhdW5jaCBpcyBk
ZXNpZ25lZCB0byBiZSBmbGV4aWJsZSBhbmQgcmV1c2FibGUgYWNyb3NzIG11bHRpcGxlIHVz
ZSBjYXNlcywgYW5kIG91ciBhaW0gaXMgdG8gZW5zdXJlKSBUaiBUKiAwIFR3ICh0aGF0IGl0
IGlzIGNhcGFibGUsIHdpZGVseSBleGVyY2lzZWQsIGNvbXByZWhlbnNpdmVseSB0ZXN0ZWQs
IGFuZCB3ZWxsIHVuZGVyc3Rvb2QgYnkgdGhlIFhlbiBjb21tdW5pdHkuKSBUaiBUKiBFVA0K
UQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA1NzMuMDIzNiBjbQ0KcQ0KQlQgMSAwIDAgMSAw
IDMuNSBUbSAyMSBUTCAvRjIgMTcuNSBUZiAwIDAgMCByZyAoMiAgIERvY3VtZW50IFN0cnVj
dHVyZSkgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNTQzLjAyMzYgY20N
CnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMTQgVG0gL0YxIDEwIFRmIDEyIFRMIDEuNDk3
OTg0IFR3IChUaGlzIGlzIHRoZSBwcmltYXJ5IGRlc2lnbiBkb2N1bWVudCBmb3IgSHlwZXJs
YXVuY2gsIHRvIHByb3ZpZGUgYW4gb3ZlcnZpZXcgb2YgdGhlIGZlYXR1cmUuIFNlcGFyYXRl
KSBUaiBUKiAwIFR3IChhZGRpdGlvbmFsIGRvY3VtZW50cyB3aWxsIGNvdmVyIHNwZWNpZmlj
IGFzcGVjdHMgb2YgSHlwZXJsYXVuY2ggaW4gZnVydGhlciBkZXRhaWwsIGluY2x1ZGluZzop
IFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDUzNy4wMjM2IGNtDQpRDQpx
DQoxIDAgMCAxIDYyLjY5MjkxIDQ4My4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0YxIDEwIFRm
IDEyIFRMIEVUDQpCVCAxIDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEgMCAwIDEgMjAgNDgg
Y20NClENCnENCjEgMCAwIDEgMjAgNDggY20NClENCnENCjEgMCAwIDEgMjAgMzYgY20NCjAg
MCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCnENCjEgMCAwIDEgNiAtMyBjbQ0KcQ0K
MCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAxMC41IDAgVGQg
KFwxNzcpIFRqIFQqIC0xMC41IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMjMgLTMgY20N
CnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgKFRoZSBE
ZXZpY2UgVHJlZSBzcGVjaWZpY2F0aW9uIGZvciBIeXBlcmxhdW5jaCBtZXRhZGF0YSkgVGog
VCogRVQNClENClENCnENClENClENCnENCjEgMCAwIDEgMjAgMzAgY20NClENCnENCjEgMCAw
IDEgMjAgMTggY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCnENCjEgMCAw
IDEgNiAtMyBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAx
MiBUTCAxMC41IDAgVGQgKFwxNzcpIFRqIFQqIC0xMC41IDAgVGQgRVQNClENClENCnENCjEg
MCAwIDEgMjMgLTMgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAg
VGYgMTIgVEwgKE5ldyBEb21haW4gUm9sZXMgZm9yIFhlbiBhbmQgdGhlIFhlbiBTZWN1cml0
eSBNb2R1bGVzIFwoWFNNXCkgcG9saWN5KSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0KcQ0K
MSAwIDAgMSAyMCAxMiBjbQ0KUQ0KcQ0KMSAwIDAgMSAyMCAwIGNtDQowIDAgMCByZw0KQlQg
L0YxIDEwIFRmIDEyIFRMIEVUDQpxDQoxIDAgMCAxIDYgLTMgY20NCnENCjAgMCAwIHJnDQpC
VCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgMTAuNSAwIFRkIChcMTc3KSBUaiBU
KiAtMTAuNSAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDIzIC0zIGNtDQpxDQowIDAgMCBy
Zw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMIChQYXNzdGhyb3VnaCBvZiBQ
Q0kgZGV2aWNlcyB3aXRoIEh5cGVybGF1bmNoKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0K
cQ0KMSAwIDAgMSAyMCAwIGNtDQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDQ4
My4wMjM2IGNtDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDQ1MC4wMjM2IGNtDQpxDQpCVCAx
IDAgMCAxIDAgMy41IFRtIDIxIFRMIC9GMiAxNy41IFRmIDAgMCAwIHJnICgzICAgQXBwcm9h
Y2gpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDM3Mi4wMjM2IGNtDQpx
DQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDYyIFRtIC9GMSAxMCBUZiAxMiBUTCAxLjkyNjQx
MiBUdyAoQm9ybiBvdXQgb2YgaW1wcm92aW5nIHN1cHBvcnQgZm9yIER5bmFtaWMgUm9vdCBv
ZiBUcnVzdCBmb3IgTWVhc3VyZW1lbnQgXChEUlRNXCksIHRoZSBIeXBlcmxhdW5jaCkgVGog
VCogMCBUdyAuMzEyNjUxIFR3IChwcm9qZWN0IGlzIGZvY3VzZWQgb24gcmVzdHJ1Y3R1cmlu
ZyB0aGUgc3lzdGVtIGxhdW5jaCBvZiBYZW4uIFRoZSBIeXBlcmxhdW5jaCBkZXNpZ24gcHJv
dmlkZXMgYSBzZWN1cml0eSkgVGogVCogMCBUdyAuMTk5MzE4IFR3IChhcmNoaXRlY3R1cmUg
dGhhdCBidWlsZHMgb24gdGhlIHByaW5jaXBsZXMgb2YgTGVhc3QgUHJpdmlsZWdlIGFuZCBT
dHJvbmcgSXNvbGF0aW9uLCBhY2hpZXZpbmcgdGhpcyB0aHJvdWdoIHRoZSkgVGogVCogMCBU
dyAxLjI5ODczNSBUdyAoZGlzYWdncmVnYXRpb24gb2Ygc3lzdGVtIGZ1bmN0aW9ucy4gSXQg
ZW5hYmxlcyB0aGlzIHdpdGggdGhlIGludHJvZHVjdGlvbiBvZiBhIGJvb3QgZG9tYWluIHRo
YXQgd29ya3MgaW4pIFRqIFQqIDAgVHcgLjM3MzUxNiBUdyAoY29uanVuY3Rpb24gd2l0aCB0
aGUgaHlwZXJ2aXNvciB0byBwcm92aWRlIHRoZSBhYmlsaXR5IHRvIGxhdW5jaCBtdWx0aXBs
ZSBkb21haW5zIGFzIHBhcnQgb2YgaG9zdCBib290IHdoaWxlKSBUaiBUKiAwIFR3IChtYWlu
dGFpbmluZyBhIGxlYXN0IHByaXZpbGVnZSBpbXBsZW1lbnRhdGlvbi4pIFRqIFQqIEVUDQpR
DQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDMxOC4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQg
MSAwIDAgMSAwIDM4IFRtIC9GMSAxMCBUZiAxMiBUTCAxLjA1NjIzNSBUdyAoV2hpbGUgdGhl
IEh5cGVybGF1bmNoIHByb2plY3QgaW5jZXB0aW9uIHdhcyBhbmQgY29udGludWVzIHRvIGJl
IGRyaXZlbiBieSBhIGZvY3VzIG9uIHNlY3VyaXR5IHRocm91Z2gpIFRqIFQqIDAgVHcgMS41
NzE5ODQgVHcgKGRpc2FnZ3JlZ2F0aW9uLCB0aGVyZSBhcmUgbXVsdGlwbGUgdXNlIGNhc2Vz
IHdpdGggYSBub24tc2VjdXJpdHkgZm9jdXMgdGhhdCByZXF1aXJlIG9yIGJlbmVmaXQgZnJv
bSB0aGUpIFRqIFQqIDAgVHcgLjAxMjkyNyBUdyAoYWJpbGl0eSB0byBsYXVuY2ggbXVsdGlw
bGUgZG9tYWlucyBhdCBob3N0IGJvb3QuIFRoaXMgd2FzIHByb3ZlbiBieSB0aGUgbmVlZCB0
aGF0IGRyb3ZlIHRoZSBpbXBsZW1lbnRhdGlvbikgVGogVCogMCBUdyAob2YgdGhlIGRvbTBs
ZXNzIGNhcGFiaWxpdHkgaW4gdGhlIEFybSBicmFuY2ggb2YgWGVuLikgVGogVCogRVQNClEN
ClENCnENCjEgMCAwIDEgNjIuNjkyOTEgMjc2LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAx
IDAgMCAxIDAgMjYgVG0gL0YxIDEwIFRmIDEyIFRMIC42MzUyOCBUdyAoSHlwZXJsYXVuY2gg
aXMgZGVzaWduZWQgdG8gYmUgZmxleGlibGUgYW5kIHJldXNhYmxlIGFjcm9zcyBtdWx0aXBs
ZSB1c2UgY2FzZXMsIGFuZCBvdXIgYWltIGlzIHRvIGVuc3VyZSkgVGogVCogMCBUdyAuNjEy
MTI2IFR3ICh0aGF0IGl0IGlzIGNhcGFibGUsIHdpZGVseSBleGVyY2lzZWQsIGNvbXByZWhl
bnNpdmVseSB0ZXN0ZWQsIGFuZCBwcm92aWRlcyBhIHJvYnVzdCBmb3VuZGF0aW9uIGZvciBj
dXJyZW50KSBUaiBUKiAwIFR3IChhbmQgZW1lcmdpbmcgc3lzdGVtIGxhdW5jaCByZXF1aXJl
bWVudHMgb2YgdGhlIFhlbiBjb21tdW5pdHkuKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAg

3.1 Objectives

- In general strive to maintain compatibility with existing Xen behavior
- A default build of the hypervisor should be capable of booting both legacy-compatible and new styles of launch:
  - classic Xen boot: starting a single, privileged Dom0
  - classic Xen boot with late hardware domain: starting a Dom0 that transitions hardware access/control to another domain
  - a dom0less boot: starting multiple domains without privilege assignment controls
  - Hyperlaunch: starting one or more VMs, with flexible configuration
- Preferred that it be managed via KCONFIG options to govern inclusion of support for each style
- The selection between classic boot and Hyperlaunch boot should be automatic
  - Preferred that it not require a kernel command line parameter for selection
- It should not require modification to boot loaders
- It should provide a user-friendly interface for its configuration and management
- It must provide a method for building systems that fall back to console access in the event of misconfiguration
- It should be able to boot an x86 Xen environment without the need for a Dom0 domain
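
As a sketch of how the KCONFIG objective above might take shape, the fragment below uses Kconfig syntax. The option names are hypothetical illustrations for this document only, not Xen's actual Kconfig symbols:

```kconfig
# Hypothetical sketch -- option names are illustrative, not Xen's real symbols.
config BOOT_STYLE_CLASSIC_DOM0
	bool "Classic boot: construct a single privileged Dom0"
	default y

config BOOT_STYLE_HYPERLAUNCH
	bool "Hyperlaunch boot: construct multiple initial domains"
	default y
	help
	  Include hypervisor support for constructing one or more initial
	  domains at host boot, selected automatically from the boot
	  material rather than by a command line parameter.
```

A default build would enable both options, satisfying the objective that one hypervisor binary can boot both legacy-compatible and new styles of launch.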

4 Requirements and Design

Hyperlaunch is defined as the ability of a hypervisor to construct and start one or more virtual machines at system launch in a specific way. A hypervisor can support one or both modes of configuration: Hyperlaunch Static and Hyperlaunch Dynamic. The Hyperlaunch Static mode functions as a static partitioning hypervisor, ensuring that only the virtual machines started at system launch are running on the system. The Hyperlaunch Dynamic mode functions as a dynamic hypervisor, allowing additional virtual machines to be started after the initial virtual machines have started. The Xen hypervisor is capable of both modes of configuration from the same binary and, when paired with its XSM Flask, provides strong controls that enable fine-grained system partitioning.

4.1 Hypervisor Launch Landscape

This comparison table presents the distinctive capabilities of Hyperlaunch with reference to existing launch configurations currently available in Xen and other hypervisors.

| Capability | Xen Dom0 (Classic) | Linux KVM | Late HW Dom | Jailhouse | Xen dom0less | Xen Hyperlaunch (Static) | Xen Hyperlaunch (Dynamic) |
|---|---|---|---|---|---|---|---|
| Hypervisor able to launch multiple VMs during host boot | | | | Y | Y | Y | Y |
| Hypervisor supports Static Partitioning | | | | Y | Y | Y | |
| Able to launch VMs dynamically after host boot | Y | Y | Y* | Y | Y* | | Y |
| Supports strong isolation between all VMs started at host boot | | | | Y | Y | Y | Y |
| Enables flexible sequencing of VM start during host boot | | | | | | Y | Y |
| Prevents all-powerful static root domain being launched at boot | | | | | Y* | Y | Y |
| Operates without a highly-privileged management VM (e.g. Dom0) | | | Y* | | Y* | Y | Y |
| Operates without a privileged toolstack VM (Control Domain) | | | | | Y* | Y | |
| Extensible VM configuration applied before launch of VMs at host boot | | | | | | Y | Y |
| Flexible granular assignment of permissions and functions to VMs | | | | | | Y | Y |
| Supports extensible VM measurement architecture for DRTM and attestation | | | | | | Y | Y |
| PCI passthrough configured at host boot | | | | | | Y | Y |

4.2 Domain Construction

An important aspect of the Hyperlaunch architecture is that the hypervisor performs domain construction for all the Initial Domains, i.e. it builds each domain that is described in the Launch Control Module. More specifically, the hypervisor will perform the function of *domain creation* for each Initial Domain: it allocates the unique domain identifier assigned to the virtual machine and records essential metadata about it in the internal data structure that enables scheduling the domain to run. It will also perform *basic domain construction*: build the initial page tables with data from the kernel and initial ramdisk supplied, and, as appropriate for the domain type, populate the p2m table and ACPI tables.

Subsequent to this, the boot domain can apply additional configuration to the initial domains from the data in the LCM, in *extended domain construction*.

The benefits of this structure include:

- Security: Constrains the permissions required by the boot domain: it does not require the capability to create domains in this structure. This aligns with the principle of least privilege.
- Flexibility: Enables policy-based dynamic assignment of hardware by the boot domain, customizable according to use case and able to adapt to hardware discovery.
- Compatibility: Supports reuse of familiar tools with use-case-customized boot domains.
- Commonality: Reuses the same logic for initial basic domain building across diverse Xen deployments.
  - It aligns the x86 initial domain construction with the existing Arm dom0less feature for construction of multiple domains at boot.
  - The boot domain implementation may vary significantly with different deployment use cases, whereas the hypervisor implementation is common.
- Correctness: Increases confidence in the implementation of domain construction, since it is performed by the hypervisor in well-maintained and centrally tested logic.
- Performance: Enables launch for configurations where a fast start of multiple domains at boot is a requirement.
- Capability: Supports launch of advanced configurations where a sequen
Y2VkIHN0YXJ0IG9mIG11bHRpcGxlIGRvbWFpbnMpIFRqIFQqIDAgVHcgKGlzIHJlcXVpcmVk
LCBvciBtdWx0aXBsZSBkb21haW5zIGFyZSBpbnZvbHZlZCBpbiBzdGFydHVwIG9mIHRoZSBy
dW5uaW5nIHN5c3RlbSBjb25maWd1cmF0aW9uKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAg
MSAyMyAyNyBjbQ0KUQ0KcQ0KMSAwIDAgMSAyMyAtMyBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAx
MCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDIgVG0gIFQqIEVUDQpxDQoxIDAgMCAxIDIw
IDI0IGNtDQpRDQpxDQoxIDAgMCAxIDIwIDI0IGNtDQpRDQpxDQoxIDAgMCAxIDIwIDAgY20N
CjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCnENCjEgMCAwIDEgNiA5IGNtDQpx
DQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMIDEwLjUgMCBU
ZCAoXDE3NykgVGogVCogLTEwLjUgMCBUZCBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAyMyAtMyBj
bQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAxNCBUbSAvRjEgMTAgVGYgMTIgVEwgLjY1
Mzk4NCBUdyAoZWcuIGZvciBQQ0kgcGFzc3Rocm91Z2ggb24gc3lzdGVtcyB3aGVyZSB0aGUg
dG9vbHN0YWNrIHJ1bnMgaW4gYSBzZXBhcmF0ZSBkb21haW4gdG8gdGhlKSBUaiBUKiAwIFR3
IChoYXJkd2FyZSBtYW5hZ2VtZW50LikgVGogVCogRVQNClENClENCnENClENClENCnENCjEg
MCAwIDEgMjAgMCBjbQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSAyMyAtMyBjbQ0KUQ0KcQ0K
UQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAxODkuMDIzNiBjbQ0KUQ0KcQ0KMSAwIDAgMSA2
Mi42OTI5MSAxNTkuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAxNCBUbSAv
RjEgMTAgVGYgMTIgVEwgMS4wODU4MTQgVHcgKFBsZWFzZSwgc2VlIHRoZSBcMjIxSHlwZXJs
YXVuY2ggRGV2aWNlIFRyZWVcMjIyIGRlc2lnbiBkb2N1bWVudCwgd2hpY2ggZGVzY3JpYmVz
IHRoZSBjb25maWd1cmF0aW9uIG1vZHVsZSkgVGogVCogMCBUdyAodGhhdCBpcyBwcm92aWRl
ZCB0byB0aGUgaHlwZXJ2aXNvciBieSB0aGUgYm9vdGxvYWRlci4pIFRqIFQqIEVUDQpRDQpR
DQpxDQoxIDAgMCAxIDYyLjY5MjkxIDExNy4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAw
IDAgMSAwIDI2IFRtIC9GMSAxMCBUZiAxMiBUTCAuNjQ2NjUxIFR3IChUaGUgaHlwZXJ2aXNv
ciBkZXRlcm1pbmVzIGhvdyB0aGVzZSBkb21haW5zIGFyZSBzdGFydGVkIGFzIGhvc3QgYm9v
dCBjb21wbGV0ZXM6IGluIHNvbWUgc3lzdGVtcyB0aGUpIFRqIFQqIDAgVHcgNC4wNzY5MDUg
VHcgKEJvb3QgRG9tYWluIGFjdHMgdXBvbiB0aGUgZXh0ZW5kZWQgYm9vdCBjb25maWd1cmF0
aW9uIHN1cHBsaWVkIGFzIHBhcnQgb2YgbGF1bmNoLCBwZXJmb3JtaW5nKSBUaiBUKiAwIFR3
IChjb25maWd1cmF0aW9uIHRhc2tzIGZvciBwcmVwYXJpbmcgdGhlIG90aGVyIGRvbWFpbnMg
Zm9yIHRoZSBoeXBlcnZpc29yIHRvIGNvbW1lbmNlIHJ1bm5pbmcgdGhlbS4pIFRqIFQqIEVU
DQpRDQpRDQogDQplbmRzdHJlYW0NCmVuZG9iag0KMTI4IDAgb2JqDQo8PCAvTGVuZ3RoIDcw
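The exact bindings for that configuration module are defined in the separate "Hyperlaunch Device Tree" document. As a rough, non-authoritative sketch of the general shape such a module takes, the existing Arm dom0less bindings (with which Hyperlaunch aligns) describe boot-time domains as nodes under /chosen; all node names, addresses, and sizes below are purely illustrative:

```dts
/* Illustrative only: an Arm dom0less-style fragment, not the Hyperlaunch bindings. */
/ {
    chosen {
        #address-cells = <1>;
        #size-cells = <1>;

        /* One boot-time domain; further domU nodes would describe further domains. */
        domU1 {
            compatible = "xen,domain";
            #address-cells = <1>;
            #size-cells = <1>;
            memory = <0 0x40000>;            /* RAM in KiB (here 256 MiB) */
            cpus = <1>;                      /* number of vCPUs */

            module@4a000000 {
                compatible = "multiboot,kernel", "multiboot,module";
                reg = <0x4a000000 0x800000>; /* kernel load address and size */
                bootargs = "console=ttyAMA0";
            };
        };
    };
};
```

A Hyperlaunch configuration would additionally need to express the permissions and functions (control, hardware, boot, etc.) assigned to each domain, which is exactly what the dedicated design document specifies.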
4.3   Common Boot Configurations

Looking across those who have expressed interest in, or discussed a need for, launching multiple domains at host boot, the Hyperlaunch approach is to provide the means to start nearly any combination of domains. Below is an enumerated selection of common boot configurations, for reference in the following section.

4.3.1   Dynamic Launch with a Highly-Privileged Domain 0

Hyperlaunch Classic: Dom0

This configuration mimics the classic Xen start and domain construction, where a single domain is constructed with all privileges and functions for managing hardware and running virtualization toolstack software.

Hyperlaunch Classic: Extended Launch Dom0

In this configuration a Dom0 is started via a Boot Domain that runs first. This is for cases where some preprocessing in a less privileged domain is required before starting the all-privileged Domain 0.

Hyperlaunch Classic: Basic Cloud

This configuration constructs a Dom0 that is started in parallel with some number of workload domains.

Hyperlaunch Classic: Cloud

This configuration builds a Dom0 and some number of workload domains, launched via a Boot Domain that runs first.

4.3.2   Static Launch Configurations: without a Domain 0 or a Control Domain

Hyperlaunch Static: Basic

Simple static partitioning where all domains that can be run on this system are built and started during host boot, and where no domain is started with the Control Domain permissions, thus making it not possible to create or start any further new domains.

Hyperlaunch Static: Standard

This is a variation of the "Hyperlaunch Static: Basic" static partitioning configuration with the introduction of a Boot Domain. This configuration allows a Boot Domain to apply extended configuration to the Initial Domains before they are started and to sequence the order in which they start.

Hyperlaunch Static: Disaggregated

This is a variation of the "Hyperlaunch Static: Standard" configuration with the introduction of a Boot Domain, illustrating that some functions can be disaggregated to dedicated domains.

4.3.3   Dynamic Launch of Disaggregated System Configurations

Hyperlaunch Dynamic: Hardware Domain

This configuration mimics the existing Xen late hardware domain feature, with the one difference that the hardware domain is constructed by the hypervisor at startup instead of later by Dom0.

Hyperlaunch Dynamic: Flexible Disaggregation

This configuration is similar to the "Hyperlaunch Classic: Dom0" configuration except that it includes starting a separate hardware domain during Xen startup. It is also similar to the "Hyperlaunch Dynamic: Hardware Domain" configuration, but it launches via a Boot Domain that runs first.

Hyperlaunch Dynamic: Full Disaggregation

This configuration demonstrates how a fully disaggregated system can be started: the virtualization toolstack runs in a Control Domain, separate from the domains responsible for managing hardware, XenStore, the Xen Console and Crash functions, each launched via a Boot Domain.
4.3.4   Example Use Cases and Configurations

The following example use cases can be matched to configurations listed in the previous section.

4.3.4.1   Use case: Modern cloud hypervisor

Option: Hyperlaunch Classic: Cloud

This configuration will support strong isolation for virtual TPM domains and measured launch in support of attestation to infrastructure management, while allowing the use of existing Dom0 virtualization toolstack software.

4.3.4.2   Use case: Edge device with security or safety requirements

Option: Hyperlaunch Static: Boot

This configuration runs without requiring a highly-privileged Dom0, and enables extended VM configuration to be applied to the Initial VMs prior to launching them, optionally in a sequenced start.

4.3.4.3   Use case: Client hypervisor

Option: Hyperlaunch Dynamic: Flexible Disaggregation
Option: Hyperlaunch Dynamic: Full Disaggregation

These configurations enable dynamic client workloads, strong isolation for the domain running the virtualization toolstack software and for each domain managing hardware, with PCI passthrough performed during host boot and support for measured launch.

4.4   Hyperlaunch Disaggregated Launch

Existing in Xen today are two primary permissions, control domain and hardware domain, and two functions, console domain and xenstore domain, that can be assigned to a domain. Traditionally all of these permissions and functions are assigned to Dom0 at start and can then be delegated to other domains created by the toolstack in Dom0. With Hyperlaunch it becomes possible to assign these permissions and functions to any domain for which a definition is provided at startup.

Additionally, two further functions are introduced: the recovery domain, intended to assist with recovery from failures encountered starting VMs during host boot, and the boot domain, for performing aspects of domain construction during startup.

Supporting the booting of each of the above common boot configurations is accomplished by considering the set of initial domains and the assignment of Xen's permissions and functions, including the ones introduced by Hyperlaunch, to these domains. These will be discussed in detail later; for now they are laid out in a table with a mapping to the common boot configurations. This table is not intended to be an exhaustive list of configurations and does not account for Flask-policy-specified functions that are use-case specific.

In the table, each number represents a separate domain being constructed by the Hyperlaunch construction path as Xen starts, and the designator {n} signifies that there may be "n" additional domains constructed that do not have any special role for a general Xen system.

Configuration                     Permission           Function
                                  None  Ctrl  HW       Boot  Recovery  Console  Xenstore
Classic: Dom0                     -     0     0        -     0         0        0
Classic: Extended Launch Dom0     -     1     1        0     1         1        1
Classic: Basic Cloud              {n}   0     0        -     0         0        0
Classic: Cloud                    {n}   1     1        0     1         1        1
Static: Basic                     {n}   -     0        -     0         0        0
Static: Standard                  {n}   -     1        0     1         1        1
Static: Disaggregated             {n}   -     2        0     3         4        1
Dynamic: Hardware Domain          -     0     1        -     0         0        0
Dynamic: Flexible Disaggregation  {n}   1     2        0     1         1        1
Dynamic: Full Disaggregation      {n}   2     3        …     …         …        …
MCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgKDApIFRqIFQqIEVU
DQpRDQpRDQpxDQoxIDAgMCAxIDI3Ny42MzQgMTUgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAg
MCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgKDQpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAg
MCAxIDMzMC42MzU4IDE1IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0Yx
IDEwIFRmIDEyIFRMICg1KSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAzODMuNjM3NiAx
NSBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAo
MSkgVGogVCogRVQNClENClENCnENCjEgSg0KMSBqDQowIDAgMCBSRw0KLjI1IHcNCm4gMCAx
NjIgbSA0NDMuODg5OCAxNjIgbCBTDQpuIDAgMTQ0IG0gNDQzLjg4OTggMTQ0IGwgUw0KbiAw
IDEyNiBtIDQ0My44ODk4IDEyNiBsIFMNCm4gMCAxMDggbSA0NDMuODg5OCAxMDggbCBTDQpu
IDAgOTAgbSA0NDMuODg5OCA5MCBsIFMNCm4gMCA2MCBtIDQ0My44ODk4IDYwIGwgUw0KbiAw
IDMwIG0gNDQzLjg4OTggMzAgbCBTDQpuIDEyNS44NzkyIDAgbSAxMjUuODc5MiAxNjIgbCBT
DQpuIDE2NS42MzA1IDAgbSAxNjUuNjMwNSAxNjIgbCBTDQpuIDIwNS4zODE4IDAgbSAyMDUu
MzgxOCAxNjIgbCBTDQpuIDIzMS44ODI3IDAgbSAyMzEuODgyNyAxNjIgbCBTDQpuIDI3MS42
MzQgMCBtIDI3MS42MzQgMTYyIGwgUw0KbiAzMjQuNjM1OCAwIG0gMzI0LjYzNTggMTYyIGwg
Uw0KbiAzNzcuNjM3NiAwIG0gMzc3LjYzNzYgMTYyIGwgUw0KbiAwIDAgbSAwIDE2MiBsIFMN
Cm4gNDQzLjg4OTggMCBtIDQ0My44ODk4IDE2MiBsIFMNCm4gMCAwIG0gNDQzLjg4OTggMCBs
IFMNClENClENCnENCjEgMCAwIDEgMjAgMCBjbQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSA2
Mi42OTI5MSA2MDMuMDIzNiBjbQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA1NzMuMDIzNiBj
4.5   Overview of Hyperlaunch Flow

Before delving into Hyperlaunch, a good basis to start with is an understanding of the current process to create a domain. A way to view this process starts with the core configuration, which is the information the hypervisor requires to make the call to domain_create, followed by basic construction to provide the memory image to run, including the kernel and ramdisk. A subsequent step applies the extended configuration used by the toolstack to provide a domain with any additional configuration information. Until the extended configuration is completed, a domain has access to no resources except its allocated vcpus and memory. The exception to this is Dom0, which the hypervisor explicitly grants control and access to all system resources, except for those that only the hypervisor should have control over. This exception for Dom0 is driven by the system structure: a monolithic Dom0 domain predates the introduction of disaggregation support into Xen, along with the corresponding default assignment of multiple roles within the Xen system to Dom0.
While not a different domain creation path, there does exist the Hardware Domain (hwdom), sometimes also referred to as late-Dom0. It is an early effort to disaggregate Dom0's roles into a separate control domain and hardware domain. This capability is activated by passing a domain id to the hardware_dom kernel command line parameter; the Xen hypervisor will then flag that domain id as the hardware domain. Later, when the toolstack constructs a domain with that domain id as the requested domid, the hypervisor will transfer all device I/O from Dom0 to this domain. In addition it will also transfer the "host shutdown on domain shutdown" flag from Dom0 to the hardware domain. It is worth mentioning that this approach to disaggregation was created in this manner due to the inability of Xen to launch more than one domain at startup.
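The activation described above can be illustrated with a boot entry. This is a sketch only: the file paths and the domid value 1 are placeholders, while hardware_dom is the parameter named in the text.

```
# Illustrative GRUB fragment: designate domid 1 as the hardware domain
multiboot2 /boot/xen.gz console=vga hardware_dom=1
module2 /boot/vmlinuz root=/dev/sda1
module2 /boot/initrd.img
```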
4.5.1   Hyperlaunch Xen startup

The Hyperlaunch approach's primary focus is on how to assign the roles traditionally granted to Dom0 to one or more domains at host boot. While the statement is simple to make, the implications are not trivial by any means. This also explains why the Hyperlaunch approach is orthogonal to the existing dom0less capability. The dom0less capability focuses on enabling the launch of multiple domains in parallel with Dom0 at host boot. A corollary for dom0less is that systems that don't require Dom0 after all guest domains have started are able to do the host boot without a Dom0, though it should be noted that it may be possible to start Dom0 at a later point. Hyperlaunch, by contrast, with its approach of separating Dom0's roles, requires the ability to launch multiple domains at host boot. The direct consequences of this approach are profound and provide a myriad of possible configurations, a sample of which were already presented as the common boot configurations.
To enable the Hyperlaunch approach, a new alternative path for host boot within the hypervisor must be introduced. This alternative path effectively branches just before the current point of Dom0 construction and begins an alternate means of system construction. The determination of whether this alternate path should be taken is made through inspection of the boot chain. If the bootloader has loaded a specific configuration, as described later, Xen will detect that a Hyperlaunch configuration has been provided. Once a Hyperlaunch configuration is detected, this alternate path can be thought of as occurring in phases: domain creation, domain preparation, and launch finalization.
4.5.1.1   Domain Creation

The domain creation phase begins with Xen parsing the bootloader-provided material to understand the content of the modules provided. It will then load any microcode or XSM policy it discovers. For each domain configuration Xen finds, it parses the configuration to construct the necessary domain definition, instantiates an instance of the domain, and leaves it in a paused state. When all domain configurations have been instantiated as domains, if one of them is flagged as the Boot Domain, that domain will be unpaused, starting the domain preparation phase. If there is no Boot Domain defined, then the domain preparation phase will be skipped and Xen will trigger the launch finalization phase.
4.5.1.2   Domain Preparation Phase

The domain preparation phase is an optional checkpoint for the execution of a workload-specific domain, the Boot Domain. While the Boot Domain is the first domain to run and has some degree of control over the system, it is extremely restricted in both system resource access and hypervisor operations. Its purpose is to:
- Access the configuration provided by the bootloader
- Finalize the configuration of the domains
- Conduct any setup and launch related operations
- Do an ordered unpause of domains that require an ordered start
When the Boot Domain has completed, it will notify the hypervisor that it is done, triggering the launch finalization phase.
4.5.1.3   Launch Finalization

The hypervisor handles the launch finalization phase, which is equivalent to a clean-up phase. The steps taken by the hypervisor, not necessarily in implementation order, are as follows:
- Free the boot module chain
- If a Boot Domain was used, reclaim Boot Domain resources
- Unpause any domains still in a paused state
- The Boot Domain uses a reserved function and thus can never be respawned
While the focus thus far has been on how the Hyperlaunch capability will work, it is worth mentioning what it does not do or limit from occurring. It does not stop or inhibit the assigning of the control domain role, which gives a domain the ability to create, start, stop, restart, and destroy domains, or the hardware domain role, which gives access to all I/O devices except those that the hypervisor has reserved for itself. In particular it is still possible to construct a domain with all the privileged roles, i.e. a Dom0, with or without the domain id being zero. In fact, whatever limitations are imposed now become fully configurable, without the risk of circumvention by an all-privileged domain.
4.6   Structuring of Hyperlaunch

The structure of Hyperlaunch is built around the existing capabilities of the host boot protocol. This approach was driven by the objective of not requiring modifications to the boot loader. The only requirement is that the boot loader supports the Multiboot2 (MB2) protocol. For UEFI boot, our recommendation is to use GRUB.efi to load Xen and the initial domain materials via the multiboot2 method. On Arm platforms, Hyperlaunch is compatible with the existing interface for boot into the hypervisor.
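Under the stated requirement of an MB2-capable boot loader, a boot entry might look like the following sketch. All file names are placeholders; the LCM module introduced in the next subsection is shown first in the module chain, in line with the detection requirement described there.

```
# Illustrative GRUB menuentry for a Hyperlaunch boot via Multiboot2
menuentry 'Xen Hyperlaunch' {
    multiboot2 /boot/xen.gz
    module2 /boot/lcm.dtb            # Launch Control Module, first module
    module2 /boot/boot-domain-kernel
    module2 /boot/hwdom-kernel
    module2 /boot/hwdom-initrd
}
```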
4.6.1   x86 Multiboot2

The MB2 protocol has no concept of a manifest to tell the initial kernel what is contained in the chain, leaving it to the kernel to impose a loading convention, use magic number identification, or both. When considering the passing of multiple kernels, ramdisks, and domain configurations along with any existing modules already passed, there is no sane convention that could be imposed, and magic number identification is nearly impossible given the objective of not imposing unnecessary complication on the hypervisor.
As alluded to previously, a manifest describing the contents of the MB2 chain and how they relate within a Xen context is needed. To address this need the Launch Control Module (LCM) was designed to provide such a manifest. The LCM was designed to have a specific set of properties:
- minimize the complexity of the parsing logic required by the hypervisor
- allow for expanding and optional configuration fragments without breaking backwards compatibility
To enable automatic detection of a Hyperlaunch configuration, the LCM must be the first MB2 module in the MB2 module chain. The LCM is implemented using the Device Tree as defined in the Hyperlaunch Device Tree design document. With the LCM implemented in Device Tree, it has a magic number that enables the hypervisor to detect its presence when used in a Multiboot2 module chain. The hypervisor can confirm that it is a proper LCM Device Tree by checking for a compliant Hyperlaunch Device Tree. The Hyperlaunch Device Tree nodes are designed to allow:
- for the hypervisor to parse only those entries it understands,
- for packing custom information for a custom boot domain,
- the ability to use a new LCM with an older hypervisor,
- and the ability to use an older LCM with a new hypervisor.
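As a rough illustration of shape only — the authoritative node and property names are defined in the separate Hyperlaunch Device Tree design document, and every name below is an assumption, not a binding — an LCM fragment might be laid out along these lines:

```
/* Illustrative sketch; consult the Hyperlaunch Device Tree design
 * document for the actual bindings. */
/dts-v1/;
/ {
    chosen {
        hypervisor {
            /* magic/compatible identification the hypervisor can check */

            boot-domain {
                /* properties marking this as the Boot Domain, plus
                 * references to its kernel module, memory, vcpus, ... */
            };

            hardware-domain {
                /* hardware domain definition */
            };
        };
    };
};
```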
4.6.2   Arm Device Tree

As discussed, the LCM is in Device Tree format and was designed to co-exist within the Device Tree ecosystem, and in particular in parallel with dom0less Device Tree entries. On Arm, Xen is already designed to boot from a host Device Tree description (dtb) file, and the LCM entries can be embedded into this host dtb file. This makes detecting the LCM entries and supporting Hyperlaunch on Arm relatively straightforward. Where the described x86 approach has Xen inspect the first MB2 module, on Arm Xen will check whether the top-level LCM node exists in the host dtb file. If the LCM node does exist, Xen will at that point enter the same code path as the x86 entry would.
4.6.3   Xen hypervisor

The new host boot flow to be introduced was previously discussed at a higher level. Within this new flow is the configuration parsing and domain creation phase, which will be expanded upon here. The hypervisor will inspect the LCM for a config node and, if found, will iterate through all module nodes. The module nodes are used to identify whether any modules contain microcode or an XSM policy. As it processes domain nodes, it will construct each domain using the node properties and the module nodes. Once it has completed iterating through all the entries in the LCM, if a constructed domain has the Boot Domain attribute, that domain will be unpaused. Otherwise the hypervisor will start the launch finalization phase.
4.6.4   Boot Domain

Traditionally domain creation was controlled by the user within the Dom0 environment, whereby custom toolstacks could be implemented to impose requirements on the process. The Boot Domain is a means to enable the user to continue to maintain a degree of that control over domain creation, but within a limited-privilege environment. The Boot Domain will have access to the LCM and the boot chain, along with access to a subset of the hypercall operations. When the Boot Domain is finished, it will notify the hypervisor through a hypercall op.
4.6.5   Recovery Domain

With the existing Dom0 host boot path, when a failure occurs there are several assumptions that can safely be made to get the user to a console for troubleshooting. With the Hyperlaunch host boot path those assumptions can no longer be made, thus a means is needed to get the user to a console in the case of a recoverable failure. The recovery domain is configured by a domain configuration entry in the LCM, in the same manner as the other initial domains, and it will not be unpaused at launch finalization unless a failure is encountered starting the initial domains.
Xen has existing support for a Crash Environment, where memory can be reserved at host boot and a kernel loaded into it, to be jumped into at any point while the system is running when a crash is detected. The Recovery Domain functionality is a separate, complementary capability. The Crash Environment replaces the previously active hypervisor and running guests, and enables a process for mounting disks to write out log information prior to rebooting the system. In contrast, the Recovery Domain is able to use the functionality of the Xen hypervisor, which is still present and running, to perform recovery handling for errors encountered while starting the initial domains.
4.6.5.1   Deferred Design

To be determined:

* Define what is detected as a crash
* Explain how crash detection is performed and which components are involved
* Explain how the recovery domain is unpaused
* Explain how and when the resources assigned to the recovery domain are reclaimed
* Define what the recovery domain is able to do
* Determine what permissions the recovery domain requires to perform its job
4.6.6   Control Domain

The concept of the Control Domain already exists within Xen as a boolean, is_privileged, that governs access to many of the privileged interfaces of the hypervisor that support a domain running a virtualization system toolstack. Hyperlaunch will allow the is_privileged flag to be set on any domain that is created at launch, rather than only a Dom0. It may potentially be set on multiple domains.
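As a rough illustration of this generalization (the LCM field and flag names below are invented for the sketch and are not part of the design), the launch-time creation path could derive the existing boolean from any flagged LCM domain entry, rather than hard-coding it for Dom0:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical flag bit in an LCM domain entry; real names are TBD. */
#define LCM_DOMAIN_FLAG_PRIVILEGED (1u << 0)

struct domain_cfg {
    uint32_t flags;      /* flags parsed from the LCM domain entry */
    bool is_privileged;  /* mirrors Xen's existing boolean */
};

/* Any launched domain whose LCM entry carries the privileged flag gets
 * is_privileged set -- not only a Dom0, and possibly more than one. */
static void apply_privilege(struct domain_cfg *d)
{
    d->is_privileged = (d->flags & LCM_DOMAIN_FLAG_PRIVILEGED) != 0;
}
```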
4.6.7   Hardware Domain

The Hardware Domain is also an existing concept for Xen that is enabled through the is_hardware_domain check. With Hyperlaunch the previous process of I/O accesses being assigned to Dom0 for later transfer to the hardware domain would no longer be required. Instead, during the configuration phase the Xen hypervisor would directly assign the I/O accesses to the domain with the hardware domain permission bit enabled.
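A minimal sketch of the direct-assignment idea, assuming invented types (the structures and function below are illustrative only, not Xen interfaces): during configuration, host I/O ranges are granted straight to whichever initial domain carries the hardware-domain bit, with no Dom0-then-transfer step.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical representation of a set of I/O port or MMIO ranges. */
struct io_range { uint64_t start, end; };

struct dom {
    bool is_hardware_domain;     /* existing Xen concept */
    const struct io_range *io;   /* ranges granted to this domain */
    size_t nr_io;
};

/* Grant the host I/O ranges directly to the initial domain with the
 * hardware-domain bit set.  Returns the recipient, or NULL if none. */
static struct dom *assign_host_io(struct dom *doms, size_t n,
                                  const struct io_range *io, size_t nr_io)
{
    for (size_t i = 0; i < n; i++) {
        if (doms[i].is_hardware_domain) {
            doms[i].io = io;
            doms[i].nr_io = nr_io;
            return &doms[i];
        }
    }
    return NULL;
}
```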
IDEgMCAwIDEgMCAyLjUgVG0gMTUgVEwgL0Y0IDEyLjUgVGYgMCAwIDAgcmcgKDQuNi44ICAg
Q29uc29sZSBEb21haW4pIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDE1
My4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDYyIFRtIC9GMSAxMCBUZiAx
MiBUTCAuNDk5MzYgVHcgKFRyYWRpdGlvbmFsbHkgdGhlIFhlbiBjb25zb2xlIGlzIGFzc2ln
bmVkIHRvIHRoZSBjb250cm9sIGRvbWFpbiBhbmQgdGhlbiByZWFzc2lnbmFibGUgYnkgdGhl
IHRvb2xzdGFjayB0bykgVGogVCogMCBUdyAuMDUyNjUxIFR3IChhbm90aGVyIGRvbWFpbi4g
V2l0aCBIeXBlcmxhdW5jaCBpdCBiZWNvbWVzIHBvc3NpYmxlIHRvIGNvbnN0cnVjdCBhIGJv
b3QgY29uZmlndXJhdGlvbiB3aGVyZSB0aGVyZSBpcyBubykgVGogVCogMCBUdyAxLjU1NTU0
MiBUdyAoY29udHJvbCBkb21haW4gb3IgaGF2ZSBhIHVzZSBjYXNlIHdoZXJlIHRoZSBYZW4g
Y29uc29sZSBuZWVkcyB0byBiZSBpc29sYXRlZC4gQXMgc3VjaCBpdCBiZWNvbWVzKSBUaiBU
KiAwIFR3IDEuOTkyNDg1IFR3IChuZWNlc3NhcnkgdG8gYmUgYWJsZSB0byBkZXNpZ25hdGUg
d2hpY2ggb2YgdGhlIGluaXRpYWwgZG9tYWlucyBzaG91bGQgYmUgYXNzaWduZWQgdGhlIFhl
biBjb25zb2xlLikgVGogVCogMCBUdyAxLjMyNzg0IFR3IChUaGVyZWZvcmUgSHlwZXJsYXVu
Y2ggaW50cm9kdWNlcyB0aGUgYWJpbGl0eSB0byBzcGVjaWZ5IGFuIGluaXRpYWwgZG9tYWlu
IHdoaWNoIHRoZSBjb25zb2xlIGlzIGFzc2lnbmVkKSBUaiBUKiAwIFR3IChhbG9uZyB3aXRo
IGEgY29udmVudGlvbiBvZiBvcmRlcmVkIGFzc2lnbm1lbnQgZm9yIHdoZW4gdGhlcmUgaXMg
bm8gZXhwbGljaXQgYXNzaWdubWVudC4pIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYy
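The exact ordered-assignment convention is not spelled out here; as a sketch only, one plausible ordering (an assumption for illustration, not the specified behavior) is: an explicitly flagged domain wins, then the first control domain, then the first initial domain.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct console_candidate {
    bool console_flagged;  /* explicitly designated for the Xen console */
    bool is_control;       /* a control domain */
};

/* Assumed ordering for illustration only: explicit flag, then first
 * control domain, then the first initial domain. */
static size_t pick_console_dom(const struct console_candidate *d, size_t n)
{
    for (size_t i = 0; i < n; i++)
        if (d[i].console_flagged)
            return i;
    for (size_t i = 0; i < n; i++)
        if (d[i].is_control)
            return i;
    return 0;
}
```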
5   Communication of Domain Configurations

There are several standard methods for an Operating System to access machine configuration and environment information: ACPI is common on x86 systems, whereas Device Tree is more typical on Arm platforms. There are currently implementations of both in Xen.
* For dom0less, guest Device Trees are dynamically constructed by the hypervisor to convey domain configuration data
* For PVH dom0 on x86, ACPI tables are built by the hypervisor before the domain is started
Note that both of these mechanisms convey static data that is fixed prior to the point of domain construction. Hyperlaunch will retain both the existing ACPI and Device Tree methods.
Communication of data between a Boot Domain and a Control Domain is of note since they may not be running concurrently: the method used will depend on their specific implementations, but one option available is to use Xen's hypfs for transfer of basic data to support system bootstrap.
6   Appendix

6.1   Appendix 1: Flow Sequence of Steps of a Hyperlaunch Boot

Provided here is an ordered flow of a Hyperlaunch boot, highlighting the logic decision points. Not all branch points are recorded, specifically for the variety of error conditions that may occur.
1. Hypervisor Startup:
2a. (x86) Inspect first module provided by the bootloader
    a. Is the module an LCM
        i. YES: proceed with the Hyperlaunch host boot path
        ii. NO: proceed with a Dom0 host boot path
2b. (Arm) Inspect host dtb for `/chosen/hypervisor` node
    a. Is the LCM present
        i. YES: proceed with the Hyperlaunch host boot path
        ii. NO: proceed with a Dom0/dom0less host boot path
3. Iterate through the LCM entries looking for the module description entry
    a. Check if any of the modules are microcode or policy and if so, load
4. Iterate through the LCM entries processing all domain description entries
    a. Use the details from the Basic Configuration to call `domain_create`
    b. Record if a domain is flagged as the Boot Domain
    c. Record if a domain is flagged as the Recovery Domain
5. Was a Boot Domain created
    a. YES:
        i. Attach console to Boot Domain
        ii. Unpause Boot Domain
        iii. Goto Boot Domain (step 6)
    b. NO: Goto Launch Finalization (step 10)
6. Boot Domain:
7. Boot Domain comes online and may do any of the following actions
    a. Process the LCM
    b. Validate the MB2 chain
    c. Make additional configuration settings for staged domains
    d. Unpause any precursor domains
    e. Set any runtime configurations
8. Boot Domain does any necessary cleanup
9. Boot Domain makes a hypercall op call to signal it is finished
    i. Hypervisor reclaims all Boot Domain resources
    ii. Hypervisor records that the Boot Domain ran
    iii. Goto Launch Finalization (step 10)
10. Launch Finalization
11. If a configured domain was flagged to have the console, the hypervisor assigns it
12. The hypervisor clears the LCM and bootloader loaded module, reclaiming the memory
13. The hypervisor iterates through domains unpausing any domain not flagged as the recovery domain
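Steps 3-5 of the flow above can be sketched as a single pass over the LCM. This is a simplified model only: the entry kinds, flag names, and `process_lcm` helper are invented for illustration, and the domain creation of step 4a is stood in for by a counter.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical LCM entry kinds and flags; the real layout is defined
 * by the LCM format elsewhere in the design. */
enum lcm_kind { LCM_MODULES, LCM_DOMAIN };
#define LCM_DOM_BOOT     (1u << 0)
#define LCM_DOM_RECOVERY (1u << 1)

struct lcm_entry { enum lcm_kind kind; uint32_t flags; };

struct launch_state {
    int boot_dom;      /* index of the Boot Domain, or -1 */
    int recovery_dom;  /* index of the Recovery Domain, or -1 */
    int created;       /* number of domains created */
};

/* Walk the LCM, "create" each domain description entry, and record
 * which entries are flagged as the Boot and Recovery Domains. */
static struct launch_state process_lcm(const struct lcm_entry *e, int n)
{
    struct launch_state s = { -1, -1, 0 };
    for (int i = 0; i < n; i++) {
        if (e[i].kind != LCM_DOMAIN)
            continue;          /* module entries are handled in step 3 */
        if (e[i].flags & LCM_DOM_BOOT)
            s.boot_dom = s.created;
        if (e[i].flags & LCM_DOM_RECOVERY)
            s.recovery_dom = s.created;
        s.created++;           /* stands in for domain_create() */
    }
    return s;
}
```

After this pass, step 5 reduces to checking whether `boot_dom` is non-negative, and step 13 to unpausing every created domain except `recovery_dom`.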
6.2   Appendix 2: Considerations in Naming the Hyperlaunch Feature

* The term "Launch" is preferred over "Boot"
    * Multiple individual component boots can occur in the new system start process; Launch is preferable for describing the whole process
    * Fortunately there is consensus in the current group of stakeholders that the term "Launch" is good and appropriate
* The names we define must support becoming meaningful and simple to use outside the Xen community
    * They must be able to be resolved quickly via search engine to a clear explanation (eg. Xen marketing material, documentation or wiki)
    * We prefer that the terms be helpful for marketing communications
    * Consequence: avoid the term "domain", which is Xen-specific and requires a definition to be provided each time it is used elsewhere
* There is a need to communicate that Xen is capable of being used as a Static Partitioning hypervisor
    * The community members using and maintaining dom0less are the current primary stakeholders for this
* There is a need to communicate that the new launch functionality provides new capabilities not available elsewhere, and is more than just supporting Static Partitioning
    * No other hypervisor known to the authors of this document is capable of providing what Hyperlaunch will be able to do. The launch sequence is designed to:
        * Remove dependency on a single, highly-privileged initial domain
        * Allow the initial domains started to be independent and fully isolated from each other
        * Support configurations where no further VMs can be launched once the initial domains have started
        * Use a standard, extensible format for conveying VM configuration data
        * Ensure that domain building of all initial domains is performed by the hypervisor from materials supplied by the bootloader
        * Enable flexible configuration to be applied to all initial domains by an optional Boot Domain, that runs with limited privilege, before any other domain starts and obtains the VM configuration data from the bootloader materials via the hypervisor
        * Enable measurements of all of the boot materials prior to their use, in a sequence with minimized privilege
        * Support use-case-specific customized Boot Domains
        * Complement the hypervisor's existing ability to enforce policy-based Mandatory Access Control
* "Static" and "Dynamic" have different and important meanings in different communities
    * Static and Dynamic Partitioning describe the ability to create new virtual machines, or not, after the initial host boot process completes
    * Static and Dynamic Root of Trust describe the nature of the trust chain for a measured launch. In this case Static refers to the fact that the trust chain is fixed and non-repeatable until the next host reboot or shutdown, whereas Dynamic refers to the ability to conduct the measured launch at any time and potentially multiple times before the next host reboot or shutdown.
    * We will be using Hyperlaunch with both Static and Dynamic Roots of Trust, to launch both Statically and Dynamically Partitioned Systems, and being clear about exactly which combination is being started will be very important (eg. for certification processes)
    * Consequence: uses of "Static" and "Dynamic" need to be qualified if they are incorporated into the naming of this functionality
        * This can be done by adding the preceding, stronger branded term: "Hyperlaunch", before "Static" or "Dynamic"
CjEgMCAwIDEgMjAgNjYgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCnEN
CjEgMCAwIDEgNiA5IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEw
IFRmIDEyIFRMIDEwLjUgMCBUZCAoXDE3NykgVGogVCogLTEwLjUgMCBUZCBFVA0KUQ0KUQ0K
cQ0KMSAwIDAgMSAyMyAtMyBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAxNCBUbSAv
RjEgMTAgVGYgMTIgVEwgMS4yNDE5NzYgVHcgKGllLiBcMjIzSHlwZXJsYXVuY2ggU3RhdGlj
XDIyNCBkZXNjcmliZXMgbGF1bmNoIG9mIGEgU3RhdGljYWxseSBQYXJ0aXRpb25lZCBzeXN0
ZW0gYW5kKSBUaiBUKiAwIFR3IChcMjIzSHlwZXJsYXVuY2ggRHluYW1pY1wyMjQgZGVzY3Jp
YmVzIGxhdW5jaCBvZiBhIER5bmFtaWNhbGx5IFBhcnRpdGlvbmVkIHN5c3RlbS4pIFRqIFQq
IEVUDQpRDQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAxIDIwIDYwIGNtDQpRDQpxDQoxIDAgMCAx
IDIwIDAgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCnENCjEgMCAwIDEg
NiA0NSBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBU
TCAxMC41IDAgVGQgKFwxNzcpIFRqIFQqIC0xMC41IDAgVGQgRVQNClENClENCnENCjEgMCAw
IDEgMjMgLTMgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgNTAgVG0gL0YxIDEwIFRm
IDEyIFRMIDMuMDc1OTc2IFR3IChJbiBwcmFjdGljZSwgdGhpcyBtZWFucyB0aGF0IFwyMjNI
eXBlcmxhdW5jaCBTdGF0aWNcMjI0IGRlc2NyaWJlcyBzdGFydGluZyBhIFN0YXRpYykgVGog
VCogMCBUdyAuMDUyMTI2IFR3IChQYXJ0aXRpb25lZCBzeXN0ZW0gd2hlcmUgbm8gbmV3IGRv
bWFpbnMgY2FuIGJlIHN0YXJ0ZWQgbGF0ZXIgXChpZS4gbm8gVk0gaGFzIHRoZSkgVGogVCog
MCBUdyAxLjI4MjQ3IFR3IChDb250cm9sIERvbWFpbiBwZXJtaXNzaW9uXCksIHdoZXJlYXMg
XDIyM0h5cGVybGF1bmNoIER5bmFtaWNcMjI0IHdpbGwgbGF1bmNoIHNvbWUpIFRqIFQqIDAg
VHcgMS44MjQxNDcgVHcgKFZNIHdpdGggdGhlIENvbnRyb2wgRG9tYWluIHBlcm1pc3Npb24s
IGFibGUgdG8gY3JlYXRlIFZNcyBkeW5hbWljYWxseSBhdCBhKSBUaiBUKiAwIFR3IChsYXRl
ciBwb2ludC4pIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAxIDIwIDAgY20N
ClENCnENClENClENCnENCjEgMCAwIDEgMjMgLTMgY20NClENCnENClENClENCnENCjEgMCAw
IDEgMjAgMCBjbQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSAyMyAtMyBjbQ0KUQ0KcQ0KUQ0K
UQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAzNTEuMDIzNiBjbQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42
OTI5MSAzMzMuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMiAx
MCBUZiAxMiBUTCAoTmFtaW5nIFByb3Bvc2FsOikgVGogVCogRVQNClENClENCnENCjEgMCAw
IDEgNjIuNjkyOTEgMzI3LjAyMzYgY20NClENCnENCjEgMCAwIDEgNjIuNjkyOTEgMzI3LjAy
MzYgY20NClENCnENCjEgMCAwIDEgNjIuNjkyOTEgOTMuMDIzNjIgY20NCjAgMCAwIHJnDQpC
VCAvRjEgMTAgVGYgMTIgVEwgRVQNCnENCjEgMCAwIDEgNiAyMTkgY20NCnENCjAgMCAwIHJn
DQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgMTAuNSAwIFRkIChcMTc3KSBU
aiBUKiAtMTAuNSAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDIzIDIwNyBjbQ0KcQ0KMCAw
IDAgcmcNCkJUIDEgMCAwIDEgMCAxNCBUbSAvRjEgMTAgVGYgMTIgVEwgMS45NDkzNiBUdyAo
TmV3IFRlcm06IFwyMjNIeXBlcmxhdW5jaFwyMjQgOiB0aGUgYWJpbGl0eSBvZiBhIGh5cGVy
dmlzb3IgdG8gY29uc3RydWN0IGFuZCBzdGFydCBvbmUgb3IgbW9yZSB2aXJ0dWFsKSBUaiBU
KiAwIFR3IChtYWNoaW5lcyBhdCBzeXN0ZW0gbGF1bmNoLCBpbiB0aGUgZm9sbG93aW5nIG1h
bm5lcjopIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDIzIDIwMSBjbQ0KUQ0KcQ0KMSAw
IDAgMSAyMyAtMyBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAw
IDAgMSAwIDIgVG0gIFQqIEVUDQpxDQoxIDAgMCAxIDIwIDE5OCBjbQ0KUQ0KcQ0KMSAwIDAg
MSAyMCAxOTggY20NClENCnENCjEgMCAwIDEgMjAgMTA4IGNtDQowIDAgMCByZw0KQlQgL0Yx
IDEwIFRmIDEyIFRMIEVUDQpxDQoxIDAgMCAxIDYgNzUgY20NCnENCjAgMCAwIHJnDQpCVCAx
IDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgMTAuNSAwIFRkIChcMTc3KSBUaiBUKiAt
MTAuNSAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDIzIDc1IGNtDQpxDQowIDAgMCByZw0K
QlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMIChUaGUgaHlwZXJ2aXNvciBtdXN0
IGJ1aWxkIGFsbCBvZiB0aGUgZG9tYWlucyB0aGF0IGl0IHN0YXJ0cyBhdCBob3N0IGJvb3Qp
IFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDIzIDY5IGNtDQpRDQpxDQoxIDAgMCAxIDIz
IC0zIGNtDQowIDAgMCByZw0KQlQgL0YxIDEwIFRmIDEyIFRMIEVUDQpCVCAxIDAgMCAxIDAg
MiBUbSAgVCogRVQNCnENCjEgMCAwIDEgMjAgNjYgY20NClENCnENCjEgMCAwIDEgMjAgNjYg
Y20NClENCnENCjEgMCAwIDEgMjAgMzAgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIg
VEwgRVQNCnENCjEgMCAwIDEgNiAyMSBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAy
IFRtIC9GMSAxMCBUZiAxMiBUTCAxMC41IDAgVGQgKFwxNzcpIFRqIFQqIC0xMC41IDAgVGQg
RVQNClENClENCnENCjEgMCAwIDEgMjMgLTMgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAx
IDAgMjYgVG0gL0YxIDEwIFRmIDEyIFRMIDEuNTIxNDEyIFR3IChTaW1pbGFyIHRvIHRoZSB3
YXkgdGhlIGRvbTAgZG9tYWluIGlzIGJ1aWx0IGJ5IHRoZSBoeXBlcnZpc29yIHRvZGF5LCBh
bmQgaG93KSBUaiBUKiAwIFR3IC42ODkyNjkgVHcgKGRvbTBsZXNzIHdvcmtzOiBpdCB3aWxs
IHJ1biBhIGxvb3AgdG8gYnVpbGQgdGhlbSBhbGwsIGRyaXZlbiBmcm9tIHRoZSBjb25maWd1
cmF0aW9uKSBUaiBUKiAwIFR3IChwcm92aWRlZCkgVGogVCogRVQNClENClENCnENClENClEN
CnENCjEgMCAwIDEgMjAgMjQgY20NClENCnENCjEgMCAwIDEgMjAgMCBjbQ0KMCAwIDAgcmcN
CkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KcQ0KMSAwIDAgMSA2IDkgY20NCnENCjAgMCAwIHJn
DQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgMTAuNSAwIFRkIChcMTc3KSBU
aiBUKiAtMTAuNSAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDIzIC0zIGNtDQpxDQowIDAg
MCByZw0KQlQgMSAwIDAgMSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAuNjUzODI4IFR3IChU
aGlzIGlzIGEgcmVxdWlyZW1lbnQgZm9yIGVuc3VyaW5nIHRoYXQgdGhlcmUgaXMgU3Ryb25n
IElzb2xhdGlvbiBiZXR3ZWVuIGVhY2ggb2YpIFRqIFQqIDAgVHcgKHRoZSBpbml0aWFsIFZN
cykgVGogVCogRVQNClENClENCnENClENClENCnENCjEgMCAwIDEgMjAgMCBjbQ0KUQ0KcQ0K
UQ0KUQ0KcQ0KMSAwIDAgMSAyMyAtMyBjbQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSAyMCAx
MDIgY20NClENCnENCjEgMCAwIDEgMjAgMCBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAx
MiBUTCBFVA0KcQ0KMSAwIDAgMSA2IDg3IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAw
IDIgVG0gL0YxIDEwIFRmIDEyIFRMIDEwLjUgMCBUZCAoXDE3NykgVGogVCogLTEwLjUgMCBU
ZCBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSAyMyA3NSBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAw
IDEgMCAxNCBUbSAvRjEgMTAgVGYgMTIgVEwgLjA2OTI2OSBUdyAoQSBzaW5nbGUgZmlsZSBj
b250YWlucyB0aGUgVk0gY29uZmlncyBcKFwyMjNMYXVuY2ggQ29udHJvbCBNb2R1bGVcMjI0
OiBMQ00sIGluIERldmljZSBUcmVlIGJpbmFyeSkgVGogVCogMCBUdyAoZm9ybWF0XCkgaXMg
cHJvdmlkZWQgdG8gdGhlIGh5cGVydmlzb3IpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAx
IDIzIDY5IGNtDQpRDQpxDQoxIDAgMCAxIDIzIC0zIGNtDQowIDAgMCByZw0KQlQgL0YxIDEw
IFRmIDEyIFRMIEVUDQpCVCAxIDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEgMCAwIDEgMjAg
NjYgY20NClENCnENCjEgMCAwIDEgMjAgNjYgY20NClENCnENCjEgMCAwIDEgMjAgNTQgY20N
CjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCnENCjEgMCAwIDEgNiAtMyBjbQ0K
cQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAxMC41IDAg
VGQgKFwxNzcpIFRqIFQqIC0xMC41IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMjMgLTMg
Y20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgKFRo
ZSBoeXBlcnZpc29yIHBhcnNlcyBpdCBhbmQgYnVpbGRzIGRvbWFpbnMpIFRqIFQqIEVUDQpR
DQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAxIDIwIDQ4IGNtDQpRDQpxDQoxIDAgMCAxIDIwIDAg
Y20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCnENCjEgMCAwIDEgNiAzMyBj
bQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAxMC41
IDAgVGQgKFwxNzcpIFRqIFQqIC0xMC41IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMjMg
LTMgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMzggVG0gL0YxIDEwIFRmIDEyIFRM
IC4wNDU2MSBUdyAoSWYgdGhlIExDTSBjb25maWcgc2F5cyB0aGF0IGEgQm9vdCBEb21haW4g
c2hvdWxkIHJ1biBmaXJzdCwgdGhlbiB0aGUgTENNIGZpbGUgaXRzZWxmKSBUaiBUKiAwIFR3
IDIuNjAzOTg0IFR3IChpcyBtYWRlIGF2YWlsYWJsZSB0byB0aGUgQm9vdCBEb21haW4gZm9y
IGl0IHRvIHBhcnNlIGFuZCBhY3Qgb24sIHRvIGludm9rZSkgVGogVCogMCBUdyAxLjIyNzI1
MSBUdyAob3BlcmF0aW9ucyB2aWEgdGhlIGh5cGVydmlzb3IgdG8gYXBwbHkgYWRkaXRpb25h
bCBjb25maWd1cmF0aW9uIHRvIHRoZSBvdGhlciBWTXMpIFRqIFQqIDAgVHcgKFwoaWUuIGV4
ZWN1dGluZyBhIHByaXZpbGVnZS1jb25zdHJhaW5lZCB0b29sc3RhY2tcKSkgVGogVCogRVQN
ClENClENCnENClENClENCnENCjEgMCAwIDEgMjAgMCBjbQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAw
IDAgMSAyMyAtMyBjbQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpRDQpxDQpR
DQpRDQpxDQoxIDAgMCAxIDIzIC0zIGNtDQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5
MjkxIDg3LjAyMzYyIGNtDQpRDQogDQplbmRzdHJlYW0NCmVuZG9iag0KMTM3IDAgb2JqDQo8
PCAvTGVuZ3RoIDgxNzQgPj4NCnN0cmVhbQ0KMSAwIDAgMSAwIDAgY20gIEJUIC9GMSAxMiBU
ZiAxNC40IFRMIEVUDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDc0MS4wMjM2IGNtDQowIDAgMCBy
Zw0KQlQgL0YxIDEwIFRmIDEyIFRMIEVUDQpxDQoxIDAgMCAxIDYgOSBjbQ0KcQ0KMCAwIDAg
cmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAxMC41IDAgVGQgKFwxNzcp
IFRqIFQqIC0xMC41IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMjMgLTMgY20NCnENCjAg
MCAwIHJnDQpCVCAxIDAgMCAxIDAgMTQgVG0gL0YxIDEwIFRmIDEyIFRMIC42MDQ1OTcgVHcg
KE5ldyBUZXJtOiBcMjIzSHlwZXJsYXVuY2ggU3RhdGljXDIyNDogc3RhcnRzIGEgU3RhdGlj
IFBhcnRpdGlvbmVkIHN5c3RlbSwgd2hlcmUgb25seSB0aGUgdmlydHVhbCBtYWNoaW5lcykg
VGogVCogMCBUdyAoc3RhcnRlZCBhdCBzeXN0ZW0gbGF1bmNoIGFyZSBydW5uaW5nIG9uIHRo
ZSBzeXN0ZW0pIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5Mjkx
IDczNS4wMjM2IGNtDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDcxMS4wMjM2IGNtDQowIDAg
MCByZw0KQlQgL0YxIDEwIFRmIDEyIFRMIEVUDQpxDQoxIDAgMCAxIDYgOSBjbQ0KcQ0KMCAw
IDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAxMC41IDAgVGQgKFwx
NzcpIFRqIFQqIC0xMC41IDAgVGQgRVQNClENClENCnENCjEgMCAwIDEgMjMgLTMgY20NCnEN
CjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMTQgVG0gL0YxIDEwIFRmIDEyIFRMIDEuODYyNDgg
VHcgKE5ldyBUZXJtOiBcMjIzSHlwZXJsYXVuY2ggRHluYW1pY1wyMjQ6IHN0YXJ0cyBhIHN5
c3RlbSB3aGVyZSB2aXJ0dWFsIG1hY2hpbmVzIG1heSBiZSBkeW5hbWljYWxseSkgVGogVCog
MCBUdyAoYWRkZWQgYWZ0ZXIgdGhlIGluaXRpYWwgdmlydHVhbCBtYWNoaW5lcyBoYXZlIHN0
YXJ0ZWQuKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA3
MTEuMDIzNiBjbQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA2NjkuMDIzNiBjbQ0KcQ0KMCAw
IDAgcmcNCkJUIDEgMCAwIDEgMCAyNiBUbSAvRjEgMTAgVGYgMTIgVEwgLjgxMzExIFR3IChJ
biB0aGUgZGVmYXVsdCBjb25maWd1cmF0aW9uLCBYZW4gd2lsbCBiZSBjYXBhYmxlIG9mIGJv
dGggc3R5bGVzIG9mIEh5cGVybGF1bmNoIGZyb20gdGhlIHNhbWUgaHlwZXJ2aXNvcikgVGog
VCogMCBUdyAzLjQzMjEyNiBUdyAoYmluYXJ5LCB3aGVuIHBhaXJlZCB3aXRoIGl0cyBYU00g
Zmxhc2ssIHByb3ZpZGVzIHN0cm9uZyBjb250cm9scyB0aGF0IGVuYWJsZSBmaW5lIGdyYWlu
ZWQgc3lzdGVtKSBUaiBUKiAwIFR3IChwYXJ0aXRpb25pbmcuKSBUaiBUKiBFVA0KUQ0KUQ0K
cQ0KMSAwIDAgMSA2Mi42OTI5MSA2NjMuMDIzNiBjbQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5
MSA2NjMuMDIzNiBjbQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA2MzkuMDIzNiBjbQ0KMCAw
IDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KcQ0KMSAwIDAgMSA2IDkgY20NCnENCjAg
MCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgMTAuNSAwIFRkIChc
MTc3KSBUaiBUKiAtMTAuNSAwIFRkIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDIzIC0zIGNtDQpx
DQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAuNTU4ODEg
VHcgKFJldGlyaW5nIFRlcm06IFwyMjNEb21CXDIyNDogd2lsbCBubyBsb25nZXIgYmUgdXNl
ZCB0byBkZXNjcmliZSB0aGUgb3B0aW9uYWwgZmlyc3QgZG9tYWluIHRoYXQgaXMgc3RhcnRl
ZC4gSXQpIFRqIFQqIDAgVHcgKGlzIHJlcGxhY2VkIHdpdGggdGhlIG1vcmUgZ2VuZXJhbCB0
ZXJtOiBcMjIzQm9vdCBEb21haW5cMjI0LikgVGogVCogRVQNClENClENCnENClENClENCnEN
CjEgMCAwIDEgNjIuNjkyOTEgNjMzLjAyMzYgY20NClENCnENCjEgMCAwIDEgNjIuNjkyOTEg
NjIxLjAyMzYgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCnENCjEgMCAw
IDEgNiAtMyBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAx
MiBUTCAxMC41IDAgVGQgKFwxNzcpIFRqIFQqIC0xMC41IDAgVGQgRVQNClENClENCnENCjEg
MCAwIDEgMjMgLTMgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAg
VGYgMTIgVEwgKFJldGlyaW5nIFRlcm06IFwyMjNEb20wbGVzc1wyMjQ6IGl0IGlzIHRvIGJl
IHJlcGxhY2VkIHdpdGggXDIyM0h5cGVybGF1bmNoIFN0YXRpY1wyMjQpIFRqIFQqIEVUDQpR
DQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDYyMS4wMjM2IGNtDQpRDQpxDQox
IDAgMCAxIDYyLjY5MjkxIDU5MS4wMjM2IGNtDQpxDQpCVCAxIDAgMCAxIDAgMyBUbSAxOCBU
TCAvRjIgMTUgVGYgMCAwIDAgcmcgKDYuMyAgIEFwcGVuZGl4IDM6IFRlcm1pbm9sb2d5KSBU
aiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA1NjEuMDIzNiBjbQ0KcQ0KMCAw
IDAgcmcNCkJUIDEgMCAwIDEgMCAxNCBUbSAvRjEgMTAgVGYgMTIgVEwgLjg0ODczNSBUdyAo
VG8gaGVscCBlbnN1cmUgY2xhcml0eSBpbiByZWFkaW5nIHRoaXMgZG9jdW1lbnQsIHRoZSBm
b2xsb3dpbmcgaXMgdGhlIGRlZmluaXRpb24gb2YgdGVybWlub2xvZ3kgdXNlZCB3aXRoaW4p
IFRqIFQqIDAgVHcgKHRoaXMgZG9jdW1lbnQuKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAg
MSA2Mi42OTI5MSA1NDUuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRt
IC9GMiAxMCBUZiAxMiBUTCAoQmFzaWMgQ29uZmlndXJhdGlvbikgVGogVCogRVQNClENClEN
CnENCjEgMCAwIDEgNjIuNjkyOTEgNTMwLjAyMzYgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAg
VGYgMTIgVEwgRVQNCkJUIDEgMCAwIDEgMCAyIFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAw
IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMICh0
aGUgbWluaW1hbCBpbmZvcm1hdGlvbiB0aGUgaHlwZXJ2aXNvciByZXF1aXJlcyB0byBpbnN0
YW50aWF0ZSBhIGRvbWFpbiBpbnN0YW5jZSkgVGogVCogRVQNClENClENCnENClENClENCnEN
CjEgMCAwIDEgNjIuNjkyOTEgNTE0LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAx
IDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKEJvb3QgRG9tYWluKSBUaiBUKiBFVA0KUQ0KUQ0K
cQ0KMSAwIDAgMSA2Mi42OTI5MSA0NzUuMDIzNiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBU
ZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDI2IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAw
IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDI2IFRtIC9GMSAxMCBUZiAxMiBUTCAu
NzIxOTg0IFR3IChhIGRvbWFpbiB3aXRoIGxpbWl0ZWQgcHJpdmlsZWdlcyBsYXVuY2hlZCBi
eSB0aGUgaHlwZXJ2aXNvciBkdXJpbmcgYSBNdWx0aXBsZSBEb21haW4gQm9vdCB0aGF0IHJ1
bnMpIFRqIFQqIDAgVHcgLjU3MzMxOCBUdyAoYXMgdGhlIGZpcnN0IGRvbWFpbiBzdGFydGVk
LiBJbiB0aGUgSHlwZXJsYXVuY2ggYXJjaGl0ZWN0dXJlLCBpdCBpcyByZXNwb25zaWJsZSBm
b3IgYXNzaXN0aW5nIHdpdGggaGlnaGVyKSBUaiBUKiAwIFR3IChsZXZlbCBvcGVyYXRpb25z
IG9mIHRoZSBkb21haW4gc2V0dXAgcHJvY2Vzcy4pIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpR
DQpxDQoxIDAgMCAxIDYyLjY5MjkxIDQ1OS4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAw
IDAgMSAwIDIgVG0gL0YyIDEwIFRmIDEyIFRMIChDbGFzc2ljIExhdW5jaCkgVGogVCogRVQN
ClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNDQ0LjAyMzYgY20NCjAgMCAwIHJnDQpCVCAv
RjEgMTAgVGYgMTIgVEwgRVQNCkJUIDEgMCAwIDEgMCAyIFRtICBUKiBFVA0KcQ0KMSAwIDAg
MSAyMCAwIGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEy
IFRMIChhIGJhY2t3YXJkcy1jb21wYXRpYmxlIGhvc3QgYm9vdCB0aGF0IGVuZHMgd2l0aCB0
aGUgbGF1bmNoIG9mIGEgc2luZ2xlIGRvbWFpbiBcKERvbTBcKSkgVGogVCogRVQNClENClEN
CnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNDI4LjAyMzYgY20NCnENCjAgMCAwIHJn
DQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKENvbnNvbGUgRG9tYWluKSBU
aiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA0MTMuMDIzNiBjbQ0KMCAwIDAg
cmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDIgVG0gIFQqIEVUDQpx
DQoxIDAgMCAxIDIwIDAgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEg
MTAgVGYgMTIgVEwgKGEgZG9tYWluIHRoYXQgaGFzIHRoZSBYZW4gY29uc29sZSBhc3NpZ25l
ZCB0byBpdCkgVGogVCogRVQNClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEg
Mzk3LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYg
MTIgVEwgKENvbnRyb2wgRG9tYWluKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42
OTI5MSAzNTguMDIzNiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQg
MSAwIDAgMSAwIDI2IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQowIDAgMCBy
Zw0KQlQgMSAwIDAgMSAwIDI2IFRtIC9GMSAxMCBUZiAxMiBUTCAxLjgwNDI2OSBUdyAoYSBw
cml2aWxlZ2VkIGRvbWFpbiB0aGF0IGhhcyBiZWVuIGdyYW50ZWQgQ29udHJvbCBEb21haW4g
cGVybWlzc2lvbnMgd2hpY2ggYXJlIHRob3NlIHRoYXQgYXJlKSBUaiBUKiAwIFR3IC40OTcz
MTggVHcgKHJlcXVpcmVkIGJ5IHRoZSBYZW4gdG9vbHN0YWNrIGZvciBtYW5hZ2luZyBvdGhl
ciBkb21haW5zLiBUaGVzZSBwZXJtaXNzaW9ucyBhcmUgYSBzdWJzZXQgb2YgdGhvc2UpIFRq
IFQqIDAgVHcgKHRoYXQgYXJlIGdyYW50ZWQgdG8gRG9tMC4pIFRqIFQqIEVUDQpRDQpRDQpx
DQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDM0Mi4wMjM2IGNtDQpxDQowIDAgMCByZw0K
QlQgMSAwIDAgMSAwIDIgVG0gL0YyIDEwIFRmIDEyIFRMIChEZXZpY2UgVHJlZSkgVGogVCog
RVQNClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgMzI3LjAyMzYgY20NCjAgMCAwIHJnDQpC
VCAvRjEgMTAgVGYgMTIgVEwgRVQNCkJUIDEgMCAwIDEgMCAyIFRtICBUKiBFVA0KcQ0KMSAw
IDAgMSAyMCAwIGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRm
IDEyIFRMIChhIHN0YW5kYXJkaXplZCBkYXRhIHN0cnVjdHVyZSwgd2l0aCBkZWZpbmVkIGZp
bGUgZm9ybWF0cywgZm9yIGRlc2NyaWJpbmcgaW5pdGlhbCBzeXN0ZW0gY29uZmlndXJhdGlv
bikgVGogVCogRVQNClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgMzExLjAy
MzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwg
KERpc2FnZ3JlZ2F0aW9uKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAy
ODQuMDIzNiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAg
MSAwIDE0IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQowIDAgMCByZw0KQlQg
MSAwIDAgMSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAuODA5MTQ3IFR3ICh0aGUgc2VwYXJh
dGlvbiBvZiBzeXN0ZW0gcm9sZXMgYW5kIHJlc3BvbnNpYmlsaXRpZXMgYWNyb3NzIG11bHRp
cGxlIGNvbm5lY3RlZCBjb21wb25lbnRzIHRoYXQgd29yaykgVGogVCogMCBUdyAodG9nZXRo
ZXIgdG8gcHJvdmlkZSBmdW5jdGlvbmFsaXR5KSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0K
cQ0KMSAwIDAgMSA2Mi42OTI5MSAyNjguMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAw
IDEgMCAyIFRtIC9GMiAxMCBUZiAxMiBUTCAoRG9tMCkgVGogVCogRVQNClENClENCnENCjEg
MCAwIDEgNjIuNjkyOTEgMjUzLjAyMzYgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIg
VEwgRVQNCkJUIDEgMCAwIDEgMCAyIFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpx
DQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMICh0aGUgaGln
aGx5LXByaXZpbGVnZWQsIGZpcnN0IGFuZCBvbmx5IGRvbWFpbiBzdGFydGVkIGF0IGhvc3Qg
Ym9vdCBvbiBhIGNvbnZlbnRpb25hbCBYZW4gc3lzdGVtKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0K
UQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAyMzcuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJU
IDEgMCAwIDEgMCAyIFRtIC9GMiAxMCBUZiAxMiBUTCAoRG9tMGxlc3MpIFRqIFQqIEVUDQpR
DQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDIyMi4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0Yx
IDEwIFRmIDEyIFRMIEVUDQpCVCAxIDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEgMCAwIDEg
MjAgMCBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBU
TCAoYW4gZXhpc3RpbmcgZmVhdHVyZSBvZiBYZW4gb24gQXJtIHRoYXQgcHJvdmlkZXMgTXVs
dGlwbGUgRG9tYWluIEJvb3QpIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAx
IDYyLjY5MjkxIDIwNi4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0g
L0YyIDEwIFRmIDEyIFRMIChEb21haW4pIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYy
LjY5MjkxIDE5MS4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0YxIDEwIFRmIDEyIFRMIEVUDQpC
VCAxIDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEgMCAwIDEgMjAgMCBjbQ0KcQ0KMCAwIDAg
cmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAoYSBydW5uaW5nIGluc3Rh
bmNlIG9mIGEgdmlydHVhbCBtYWNoaW5lOyBcKGFzIHRoZSB0ZXJtIGlzIGNvbW1vbmx5IHVz
ZWQgaW4gdGhlIFhlbiBDb21tdW5pdHlcKSkgVGogVCogRVQNClENClENCnENClENClENCnEN
CjEgMCAwIDEgNjIuNjkyOTEgMTc1LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAx
IDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKERvbUIpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAg
MCAxIDYyLjY5MjkxIDE2MC4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0YxIDEwIFRmIDEyIFRM
IEVUDQpCVCAxIDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEgMCAwIDEgMjAgMCBjbQ0KcQ0K
MCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAodGhlIGZvcm1l
ciBuYW1lIGZvciBIeXBlcmxhdW5jaCkgVGogVCogRVQNClENClENCnENClENClENCnENCjEg
MCAwIDEgNjIuNjkyOTEgMTQ0LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAg
MiBUbSAvRjIgMTAgVGYgMTIgVEwgKEV4dGVuZGVkIENvbmZpZ3VyYXRpb24pIFRqIFQqIEVU
DQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDEyOS4wMjM2IGNtDQowIDAgMCByZw0KQlQg
L0YxIDEwIFRmIDEyIFRMIEVUDQpCVCAxIDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEgMCAw
IDEgMjAgMCBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAx
MiBUTCAoYW55IGNvbmZpZ3VyYXRpb24gb3B0aW9ucyBmb3IgYSBkb21haW4gYmV5b25kIGl0
cyBCYXNpYyBDb25maWd1cmF0aW9uKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAw
IDAgMSA2Mi42OTI5MSAxMTMuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAy
IFRtIC9GMiAxMCBUZiAxMiBUTCAoSGFyZHdhcmUgRG9tYWluKSBUaiBUKiBFVA0KUQ0KUQ0K
cQ0KMSAwIDAgMSA2Mi42OTI5MSA4Ni4wMjM2MiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBU
ZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDE0IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAw
IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAu
Mjk0MjY5IFR3IChhIHByaXZpbGVnZWQgZG9tYWluIHRoYXQgaGFzIGJlZW4gZ3JhbnRlZCBw
ZXJtaXNzaW9ucyB0byBhY2Nlc3MgYW5kIG1hbmFnZSBob3N0IGhhcmR3YXJlLiBUaGVzZSkg
VGogVCogMCBUdyAocGVybWlzc2lvbnMgYXJlIGEgc3Vic2V0IG9mIHRob3NlIHRoYXQgYXJl
IGdyYW50ZWQgdG8gRG9tMC4pIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQogDQplbmRzdHJl
YW0NCmVuZG9iag0KMTM4IDAgb2JqDQo8PCAvTGVuZ3RoIDUyNjYgPj4NCnN0cmVhbQ0KMSAw
IDAgMSAwIDAgY20gIEJUIC9GMSAxMiBUZiAxNC40IFRMIEVUDQpxDQoxIDAgMCAxIDYyLjY5
MjkxIDc1My4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YyIDEw
IFRmIDEyIFRMIChIb3N0IEJvb3QpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5
MjkxIDczOC4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0YxIDEwIFRmIDEyIFRMIEVUDQpCVCAx
IDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEgMCAwIDEgMjAgMCBjbQ0KcQ0KMCAwIDAgcmcN
CkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAodGhlIHN5c3RlbSBzdGFydHVw
IG9mIFhlbiB1c2luZyB0aGUgY29uZmlndXJhdGlvbiBwcm92aWRlZCBieSB0aGUgYm9vdGxv
YWRlcikgVGogVCogRVQNClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNzIy
LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIg
VEwgKEh5cGVybGF1bmNoKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA3
MDcuMDIzNiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAg
MSAwIDIgVG0gIFQqIEVUDQpxDQoxIDAgMCAxIDIwIDAgY20NCnENCjAgMCAwIHJnDQpCVCAx
IDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwgKGEgZmxleGlibGUgaG9zdCBib290IHRo
YXQgZW5kcyB3aXRoIHRoZSBsYXVuY2ggb2Ygb25lIG9yIG1vcmUgZG9tYWlucykgVGogVCog
RVQNClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNjkxLjAyMzYgY20NCnEN
CjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKEluaXRpYWwg
RG9tYWluKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA2NjQuMDIzNiBj
bQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDE0IFRt
ICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAw
IDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAuMzQ1NDg4IFR3IChhIGRvbWFpbiB0aGF0IGlzIGRl
c2NyaWJlZCBpbiB0aGUgTENNIHRoYXQgaXMgcnVuIGFzIHBhcnQgb2YgYSBtdWx0aXBsZSBk
b21haW4gYm9vdC4gVGhpcyBpbmNsdWRlcyB0aGUpIFRqIFQqIDAgVHcgKEJvb3QgRG9tYWlu
LCBSZWNvdmVyeSBEb21haW4gYW5kIGFsbCBMYXVuY2hlZCBEb21haW5zLikgVGogVCogRVQN
ClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNjQ4LjAyMzYgY20NCnENCjAg
MCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKExhdGUgSGFyZHdh
cmUgRG9tYWluKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA2MDkuMDIz
NiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDI2
IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAg
MSAwIDI2IFRtIC9GMSAxMCBUZiAxMiBUTCAxLjM4NzMxOCBUdyAoYSBIYXJkd2FyZSBEb21h
aW4gdGhhdCBpcyBsYXVuY2hlZCBhZnRlciBob3N0IGJvb3QgaGFzIGFscmVhZHkgY29tcGxl
dGVkIHdpdGggYSBydW5uaW5nIERvbTAuKSBUaiBUKiAwIFR3IDEuOTAyMjkgVHcgKFdoZW4g
dGhlIExhdGUgSGFyZHdhcmUgRG9tYWluIGlzIHN0YXJ0ZWQsIERvbTAgcmVsaW5xdWlzaGVz
IGFuZCB0cmFuc2ZlcnMgdGhlIHBlcm1pc3Npb25zIHRvKSBUaiBUKiAwIFR3IChhY2Nlc3Mg
YW5kIG1hbmFnZSBob3N0IGhhcmR3YXJlIHRvIGl0Li4pIFRqIFQqIEVUDQpRDQpRDQpxDQpR
DQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDU5My4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQg
MSAwIDAgMSAwIDIgVG0gL0YyIDEwIFRmIDEyIFRMIChMYXVuY2ggQ29udHJvbCBNb2R1bGUg
XChMQ01cKSkgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNTY2LjAyMzYg
Y20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCkJUIDEgMCAwIDEgMCAxNCBU
bSAgVCogRVQNCnENCjEgMCAwIDEgMjAgMCBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEg
MCAxNCBUbSAvRjEgMTAgVGYgMTIgVEwgLjYwNzMxOCBUdyAoQSBmaWxlIHN1cHBsaWVkIHRv
IHRoZSBoeXBlcnZpc29yIGJ5IHRoZSBib290bG9hZGVyIHRoYXQgY29udGFpbnMgY29uZmln
dXJhdGlvbiBkYXRhIGZvciB0aGUgaHlwZXJ2aXNvcikgVGogVCogMCBUdyAoYW5kIHRoZSBp
bml0aWFsIHNldCBvZiB2aXJ0dWFsIG1hY2hpbmVzIHRvIGJlIHJ1biBhdCBib290KSBUaiBU
KiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA1NTAuMDIzNiBjbQ0K
cQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMiAxMCBUZiAxMiBUTCAoTGF1bmNo
ZWQgRG9tYWluKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA1MjMuMDIz
NiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDE0
IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAg
MSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAxLjc3OTM5OCBUdyAoYSBkb21haW4sIGFzaWRl
IGZyb20gdGhlIGJvb3QgZG9tYWluIGFuZCByZWNvdmVyeSBkb21haW4sIHRoYXQgaXMgc3Rh
cnRlZCBhcyBwYXJ0IG9mIGEgbXVsdGlwbGUpIFRqIFQqIDAgVHcgKGRvbWFpbiBib290IGFu
ZCByZW1haW5zIHJ1bm5pbmcgb25jZSB0aGUgYm9vdCBwcm9jZXNzIGlzIGNvbXBsZXRlKSBU
aiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA1MDcuMDIzNiBj
bQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMiAxMCBUZiAxMiBUTCAoTXVs
dGlwbGUgRG9tYWluIEJvb3QpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5Mjkx
IDQ4MC4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0YxIDEwIFRmIDEyIFRMIEVUDQpCVCAxIDAg
MCAxIDAgMTQgVG0gIFQqIEVUDQpxDQoxIDAgMCAxIDIwIDAgY20NCnENCjAgMCAwIHJnDQpC
VCAxIDAgMCAxIDAgMTQgVG0gL0YxIDEwIFRmIDEyIFRMIC40MTU2OTcgVHcgKGEgc3lzdGVt
IGNvbmZpZ3VyYXRpb24gd2hlcmUgdGhlIGh5cGVydmlzb3IgYW5kIG11bHRpcGxlIHZpcnR1
YWwgbWFjaGluZXMgYXJlIGFsbCBsYXVuY2hlZCB3aGVuIHRoZSkgVGogVCogMCBUdyAoaG9z
dCBzeXN0ZW0gaGFyZHdhcmUgYm9vdHMpIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQpxDQox
IDAgMCAxIDYyLjY5MjkxIDQ2NC4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAw
IDIgVG0gL0YyIDEwIFRmIDEyIFRMIChSZWNvdmVyeSBEb21haW4pIFRqIFQqIEVUDQpRDQpR
DQpxDQoxIDAgMCAxIDYyLjY5MjkxIDQzNy4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0YxIDEw
IFRmIDEyIFRMIEVUDQpCVCAxIDAgMCAxIDAgMTQgVG0gIFQqIEVUDQpxDQoxIDAgMCAxIDIw
IDAgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMTQgVG0gL0YxIDEwIFRmIDEyIFRM
IDMuNTM1MzE4IFR3IChhbiBvcHRpb25hbCBmYWxsYmFjayBkb21haW4gdGhhdCB0aGUgaHlw
ZXJ2aXNvciBtYXkgc3RhcnQgaW4gdGhlIGV2ZW50IG9mIGEgZGV0ZWN0YWJsZSBlcnJvcikg
VGogVCogMCBUdyAoZW5jb3VudGVyZWQgZHVyaW5nIHRoZSBtdWx0aXBsZSBkb21haW4gYm9v
dCBwcm9jZXNzKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5
MSA0MjEuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMiAxMCBU
ZiAxMiBUTCAoU3lzdGVtIERldmljZSBUcmVlKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAg
MSA2Mi42OTI5MSAzOTQuMDIzNiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBF
VA0KQlQgMSAwIDAgMSAwIDE0IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQow
IDAgMCByZw0KQlQgMSAwIDAgMSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAuMTY5NDMxIFR3
ICh0aGlzIGlzIHRoZSBwcm9kdWN0IG9mIGFuIEFybSBjb21tdW5pdHkgcHJvamVjdCB0byBl
eHRlbmQgRGV2aWNlIFRyZWUgdG8gY292ZXIgbW9yZSBhc3BlY3RzIG9mIGluaXRpYWwpIFRq
IFQqIDAgVHcgKHN5c3RlbSBjb25maWd1cmF0aW9uKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0K
UQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAzNjQuMDIzNiBjbQ0KcQ0KQlQgMSAwIDAgMSAwIDMg
VG0gMTggVEwgL0YyIDE1IFRmIDAgMCAwIHJnICg2LjQgICBBcHBlbmRpeCA0OiBDb3B5cmln
aHQgTGljZW5zZSkgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgMzIyLjAy
MzYgY20NCnENCkJUIDEgMCAwIDEgMCAyNiBUbSAxLjgzMjY1MSBUdyAxMiBUTCAvRjEgMTAg
VGYgMCAwIDAgcmcgKFRoaXMgd29yayBpcyBsaWNlbnNlZCB1bmRlciBhIENyZWF0aXZlIENv
bW1vbnMgQXR0cmlidXRpb24gNC4wIEludGVybmF0aW9uYWwgTGljZW5zZS4gQSBjb3B5IG9m
IHRoaXMpIFRqIFQqIDAgVHcgMjQuNjk2MjIgVHcgKGxpY2Vuc2UgbWF5IGJlIG9idGFpbmVk
IGZyb20gdGhlIENyZWF0aXZlIENvbW1vbnMgd2Vic2l0ZSkgVGogVCogMCBUdyAoXCgpIFRq
IDAgMCAuNTAxOTYxIHJnIChodHRwczovL2NyZWF0aXZlY29tbW9ucy5vcmcvbGljZW5zZXMv
YnkvNC4wL2xlZ2FsY29kZSkgVGogMCAwIDAgcmcgKFwpLikgVGogVCogRVQNClENClENCnEN
CjEgMCAwIDEgNjIuNjkyOTEgMzE2LjAyMzYgY20NClENCnENCjEgMCAwIDEgNjIuNjkyOTEg
MzA0LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYg
MTIgVEwgKENvbnRyaWJ1dGlvbnMgYnk6KSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2
Mi42OTI5MSAyOTIuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9G
MSAxMCBUZiAxMiBUTCAoQ2hyaXN0b3BoZXIgQ2xhcmsgYXJlIENvcHlyaWdodCBcMjUxIDIw
MjEgU3RhciBMYWIgQ29ycG9yYXRpb24pIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYy
LjY5MjkxIDI4MC4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0Yx
IDEwIFRmIDEyIFRMIChEYW5pZWwgUC4gU21pdGggYXJlIENvcHlyaWdodCBcMjUxIDIwMjEg
QXBlcnR1cyBTb2x1dGlvbnMsIExMQykgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgNjIu
NjkyOTEgMjgwLjAyMzYgY20NClENCiANCmVuZHN0cmVhbQ0KZW5kb2JqDQoxMzkgMCBvYmoN
Cjw8IC9OdW1zIFsgMCAxNDAgMCBSIDEgMTQxIDAgUiAyIDE0MiAwIFIgMyAxNDMgMCBSIDQg
MTQ0IDAgUiANCiAgNSAxNDUgMCBSIDYgMTQ2IDAgUiA3IDE0NyAwIFIgOCAxNDggMCBSIDkg
MTQ5IDAgUiANCiAgMTAgMTUwIDAgUiAxMSAxNTEgMCBSIDEyIDE1MiAwIFIgMTMgMTUzIDAg
UiAxNCAxNTQgMCBSIF0gPj4NCmVuZG9iag0KMTQwIDAgb2JqDQo8PCAvUyAvRCAvU3QgMSA+
Pg0KZW5kb2JqDQoxNDEgMCBvYmoNCjw8IC9TIC9EIC9TdCAyID4+DQplbmRvYmoNCjE0MiAw
IG9iag0KPDwgL1MgL0QgL1N0IDMgPj4NCmVuZG9iag0KMTQzIDAgb2JqDQo8PCAvUyAvRCAv
U3QgNCA+Pg0KZW5kb2JqDQoxNDQgMCBvYmoNCjw8IC9TIC9EIC9TdCA1ID4+DQplbmRvYmoN
CjE0NSAwIG9iag0KPDwgL1MgL0QgL1N0IDYgPj4NCmVuZG9iag0KMTQ2IDAgb2JqDQo8PCAv
UyAvRCAvU3QgNyA+Pg0KZW5kb2JqDQoxNDcgMCBvYmoNCjw8IC9TIC9EIC9TdCA4ID4+DQpl
bmRvYmoNCjE0OCAwIG9iag0KPDwgL1MgL0QgL1N0IDkgPj4NCmVuZG9iag0KMTQ5IDAgb2Jq
DQo8PCAvUyAvRCAvU3QgMTAgPj4NCmVuZG9iag0KMTUwIDAgb2JqDQo8PCAvUyAvRCAvU3Qg
MTEgPj4NCmVuZG9iag0KMTUxIDAgb2JqDQo8PCAvUyAvRCAvU3QgMTIgPj4NCmVuZG9iag0K
MTUyIDAgb2JqDQo8PCAvUyAvRCAvU3QgMTMgPj4NCmVuZG9iag0KMTUzIDAgb2JqDQo8PCAv
UyAvRCAvU3QgMTQgPj4NCmVuZG9iag0KMTU0IDAgb2JqDQo8PCAvUyAvRCAvU3QgMTUgPj4N
CmVuZG9iag0KeHJlZg0KMCAxNTUNCjAwMDAwMDAwMDAgNjU1MzUgZg0KMDAwMDAwMDA3NSAw
MDAwMCBuDQowMDAwMDAwMTUyIDAwMDAwIG4NCjAwMDAwMDAyNjIgMDAwMDAgbg0KMDAwMDAw
MDM3NyAwMDAwMCBuDQowMDAwMDAwNTQ4IDAwMDAwIG4NCjAwMDAwMDA3MTkgMDAwMDAgbg0K
MDAwMDAwMDg5MCAwMDAwMCBuDQowMDAwMDAxMDYxIDAwMDAwIG4NCjAwMDAwMDEyMzIgMDAw
MDAgbg0KMDAwMDAwMTQwMyAwMDAwMCBuDQowMDAwMDAxNTc1IDAwMDAwIG4NCjAwMDAwMDE3
NDcgMDAwMDAgbg0KMDAwMDAwMTkxOSAwMDAwMCBuDQowMDAwMDAyMDkxIDAwMDAwIG4NCjAw
MDAwMDIyNjMgMDAwMDAgbg0KMDAwMDAwMjQzNSAwMDAwMCBuDQowMDAwMDAyNjA3IDAwMDAw
IG4NCjAwMDAwMDI3NzkgMDAwMDAgbg0KMDAwMDAwMjk1MSAwMDAwMCBuDQowMDAwMDAzMTIz
IDAwMDAwIG4NCjAwMDAwMDMyOTUgMDAwMDAgbg0KMDAwMDAwMzQ2NyAwMDAwMCBuDQowMDAw
MDAzNjM5IDAwMDAwIG4NCjAwMDAwMDM4MTEgMDAwMDAgbg0KMDAwMDAwMzk4MyAwMDAwMCBu
DQowMDAwMDA0MTU1IDAwMDAwIG4NCjAwMDAwMDQzMjcgMDAwMDAgbg0KMDAwMDAwNDQ5OSAw
MDAwMCBuDQowMDAwMDA0NjcxIDAwMDAwIG4NCjAwMDAwMDQ4NDMgMDAwMDAgbg0KMDAwMDAw
NTAxNSAwMDAwMCBuDQowMDAwMDA1MTg3IDAwMDAwIG4NCjAwMDAwMDUzNTkgMDAwMDAgbg0K
MDAwMDAwNTUzMSAwMDAwMCBuDQowMDAwMDA1NzAzIDAwMDAwIG4NCjAwMDAwMDU4NzUgMDAw
MDAgbg0KMDAwMDAwNjA0NyAwMDAwMCBuDQowMDAwMDA2MjE5IDAwMDAwIG4NCjAwMDAwMDYz
OTEgMDAwMDAgbg0KMDAwMDAwNjU2MyAwMDAwMCBuDQowMDAwMDA2NzM1IDAwMDAwIG4NCjAw
MDAwMDY5MDcgMDAwMDAgbg0KMDAwMDAwNzA3OSAwMDAwMCBuDQowMDAwMDA3MjUxIDAwMDAw
IG4NCjAwMDAwMDc0MjMgMDAwMDAgbg0KMDAwMDAwNzU5NSAwMDAwMCBuDQowMDAwMDA3NzY3
IDAwMDAwIG4NCjAwMDAwMDc5MzkgMDAwMDAgbg0KMDAwMDAwODExMSAwMDAwMCBuDQowMDAw
MDA4MjgzIDAwMDAwIG4NCjAwMDAwMDg0NTUgMDAwMDAgbg0KMDAwMDAwODYyNyAwMDAwMCBu
[Attachment: hyperlaunch-devicetree.pdf (application/pdf, "Xen Hyperlaunch Device Tree Bindings"); base64-encoded content omitted]
IGlzIHRoZSBkZWZhdWx0IGJlaGF2aW9yIGlmIG9taXR0ZWQuIEEgZG9tYWluIGNvbmZpZ3Vy
YXRpb24gaXMgbm90IGFibGUgdG8gcmVxdWVzdCBhIGRvbWlkKSBUaiBUKiAwIFR3IChvZiBc
MjIzMFwyMjQuIEFmdGVyIHRoYXQgYSBkb21haW4gbm9kZSBtYXkgaGF2ZSBhbnkgb2YgdGhl
IGZvbGxvd2luZyBwYXJhbWV0ZXJzLCkgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgNjIu
NjkyOTEgNDM4LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIg
MTAgVGYgMTIgVEwgKGNvbXBhdGlibGUpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYy
LjY5MjkxIDQyMy4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0YxIDEwIFRmIDEyIFRMIEVUDQpC
VCAxIDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEgMCAwIDEgMjAgMCBjbQ0KcQ0KMCAwIDAg
cmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBUZiAxMiBUTCAoSWRlbnRpZmllcyB0aGUg
dHlwZSBvZiBub2RlLiBSZXF1aXJlZC4pIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQpxDQox
IDAgMCAxIDYyLjY5MjkxIDQwNy4wMjM2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAw
IDIgVG0gL0YyIDEwIFRmIDEyIFRMIChkb21pZCkgVGogVCogRVQNClENClENCnENCjEgMCAw
IDEgNjIuNjkyOTEgMzkyLjAyMzYgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwg
RVQNCkJUIDEgMCAwIDEgMCAyIFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQow
IDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMIChJZGVudGlmaWVz
IHRoZSBkb21pZCByZXF1ZXN0ZWQgdG8gYXNzaWduIHRvIHRoZSBkb21haW4uIFJlcXVpcmVk
LikgVGogVCogRVQNClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgMzc2LjAy
MzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwg
KHBlcm1pc3Npb25zKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAzNDku
MDIzNiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAw
IDE0IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQowIDAgMCByZw0KQlQgMSAw
IDAgMSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAxLjA0ODQ0MyBUdyAoVGhpcyBzZXRzIHdo
YXQgRGlzY3JldGlvbmFyeSBBY2Nlc3MgQ29udHJvbCBwZXJtaXNzaW9ucyBhIGRvbWFpbiBp
cyBhc3NpZ25lZC4gT3B0aW9uYWwsIGRlZmF1bHQgaXMpIFRqIFQqIDAgVHcgKG5vbmUuKSBU
aiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAzMzMuMDIzNiBj
bQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMiAxMCBUZiAxMiBUTCAoZnVu
Y3Rpb25zKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAzMTguMDIzNiBj
bQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDIgVG0g
IFQqIEVUDQpxDQoxIDAgMCAxIDIwIDAgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAg
MiBUbSAvRjEgMTAgVGYgMTIgVEwgKFRoaXMgaWRlbnRpZmllcyB3aGF0IHN5c3RlbSBmdW5j
dGlvbnMgYSBkb21haW4gd2lsbCBmdWxmaWxsLiBPcHRpb25hbCwgdGhlIGRlZmF1bHQgaXMg
bm9uZS4pIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDMw
Ni4wMjM2IGNtDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDIxMS4wMjM2IGNtDQouOTYwNzg0
IC45NjA3ODQgLjg2Mjc0NSByZw0KbiAwIDk1IDQ2OS44ODk4IC05NSByZSBmKg0KMCAwIDAg
cmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSA2IDY5IFRtICBUKiBFVA0K
cQ0KMSAwIDAgMSAxNiA2NCBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyLjUgVG0g
L0Y1IDEyLjUgVGYgMTUgVEwgKE5vdGUpIFRqIFQqIEVUDQpRDQpRDQpxDQoxIDAgMCAxIDE2
IDE2IGNtDQpxDQpCVCAxIDAgMCAxIDAgMjYgVG0gNy40NzM5NzYgVHcgMTIgVEwgL0YxIDEw
IFRmIDAgMCAwIHJnIChUaGUgKSBUaiAvRjQgMTAgVGYgMCAwIDAgcmcgKGZ1bmN0aW9ucyAp
IFRqIC9GMSAxMCBUZiAwIDAgMCByZyAoYml0cyB0aGF0IGhhdmUgYmVlbiBzZWxlY3RlZCB0
byBpbmRpY2F0ZSApIFRqIC9GMyAxMCBUZiAwIDAgMCByZyAoRlVOQ1RJT05fWEVOU1RPUkUg
KSBUaiAvRjEgMTAgVGYgMCAwIDAgcmcgKGFuZCkgVGogVCogMCBUdyAuOTg4NTU1IFR3IC9G
MyAxMCBUZiAwIDAgMCByZyAoRlVOQ1RJT05fTEVHQUNZX0RPTTAgKSBUaiAvRjEgMTAgVGYg
MCAwIDAgcmcgKGFyZSB0aGUgbGFzdCB0d28gYml0cyBcKDMwLCAzMVwpIHN1Y2ggdGhhdCBz
aG91bGQgdGhlc2UgZmVhdHVyZXMgZXZlciBiZSkgVGogVCogMCBUdyAoZnVsbHkgcmV0aXJl
ZCwgdGhlIGZsYWdzIG1heSBiZSBkcm9wcGVkIHdpdGhvdXQgbGVhdmluZyBhIGdhcCBpbiB0
aGUgZmxhZyBzZXQuKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSBKDQoxIGoNCi42NjI3NDUgLjY2
Mjc0NSAuNjYyNzQ1IFJHDQouNSB3DQpuIDAgOTUgbSA0NjkuODg5OCA5NSBsIFMNCm4gMCAw
IG0gNDY5Ljg4OTggMCBsIFMNCm4gMCAwIG0gMCA5NSBsIFMNCm4gNDY5Ljg4OTggMCBtIDQ2
OS44ODk4IDk1IGwgUw0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAyMDUuMDIzNiBjbQ0K
UQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAxODkuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEg
MCAwIDEgMCAyIFRtIC9GMiAxMCBUZiAxMiBUTCAobW9kZSkgVGogVCogRVQNClENClENCnEN
CjEgMCAwIDEgNjIuNjkyOTEgMTc0LjAyMzYgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYg
MTIgVEwgRVQNCkJUIDEgMCAwIDEgMCAyIFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNt
DQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRmIDEyIFRMIChUaGUg
bW9kZSB0aGUgZG9tYWluIHdpbGwgYmUgZXhlY3V0ZWQgdW5kZXIuIFJlcXVpcmVkLikgVGog
VCogRVQNClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgMTU4LjAyMzYgY20N
CnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKGRvbWFp
bi11dWlkKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAxNDMuMDIzNiBj
bQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDIgVG0g
IFQqIEVUDQpxDQoxIDAgMCAxIDIwIDAgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAg
MiBUbSAvRjEgMTAgVGYgMTIgVEwgKEEgZ2xvYmFsbHkgdW5pcXVlIGlkZW50aWZpZXIgZm9y
IHRoZSBkb21haW4uIE9wdGlvbmFsLCB0aGUgZGVmYXVsdCBpcyBOVUxMLikgVGogVCogRVQN
ClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgMTI3LjAyMzYgY20NCnENCjAg
MCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKGNwdXMpIFRqIFQq
IEVUDQpRDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDExMi4wMjM2IGNtDQowIDAgMCByZw0K
QlQgL0YxIDEwIFRmIDEyIFRMIEVUDQpCVCAxIDAgMCAxIDAgMiBUbSAgVCogRVQNCnENCjEg
MCAwIDEgMjAgMCBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMSAxMCBU
ZiAxMiBUTCAoVGhlIG51bWJlciBvZiB2Q1BVcyB0byBiZSBhc3NpZ25lZCB0byB0aGUgZG9t
YWluLiBPcHRpb25hbCwgdGhlIGRlZmF1bHQgaXMgXDIyMzFcMjI0LikgVGogVCogRVQNClEN
ClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgOTYuMDIzNjIgY20NCnENCjAgMCAw
IHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKG1lbW9yeSkgVGogVCog
RVQNClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgODEuMDIzNjIgY20NCjAgMCAwIHJnDQpC
VCAvRjEgMTAgVGYgMTIgVEwgRVQNCkJUIDEgMCAwIDEgMCAyIFRtICBUKiBFVA0KcQ0KMSAw
IDAgMSAyMCAwIGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDIgVG0gL0YxIDEwIFRm
IDEyIFRMIChUaGUgYW1vdW50IG9mIG1lbW9yeSB0byBhc3NpZ24gdG8gdGhlIGRvbWFpbiwg
aW4gS0JzLiBSZXF1aXJlZC4pIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQogDQplbmRzdHJl
YW0NCmVuZG9iag0KMzEgMCBvYmoNCjw8IC9MZW5ndGggMzcyMSA+Pg0Kc3RyZWFtDQoxIDAg
MCAxIDAgMCBjbSAgQlQgL0YxIDEyIFRmIDE0LjQgVEwgRVQNCnENCjEgMCAwIDEgNjIuNjky
OTEgNzUzLjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAg
VGYgMTIgVEwgKHNlY3VyaXR5LWlkKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42
OTI5MSA3MjYuMDIzNiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQg
MSAwIDAgMSAwIDE0IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAyMCAwIGNtDQpxDQowIDAgMCBy
Zw0KQlQgMSAwIDAgMSAwIDE0IFRtIC9GMSAxMCBUZiAxMiBUTCAuMjU4NzM1IFR3IChUaGUg
c2VjdXJpdHkgaWRlbnRpdHkgdG8gYmUgYXNzaWduZWQgdG8gdGhlIGRvbWFpbiB3aGVuIFhT
TSBpcyB0aGUgYWNjZXNzIGNvbnRyb2wgbWVjaGFuaXNtIGJlaW5nKSBUaiBUKiAwIFR3ICh1
c2VkLiBPcHRpb25hbCwgdGhlIGRlZmF1bHQgaXMgXDIyM2RvbXVfdFwyMjQuKSBUaiBUKiBF
VA0KUQ0KUQ0KcQ0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA2OTMuMDIzNiBjbQ0KcQ0K
QlQgMSAwIDAgMSAwIDMuNSBUbSAyMSBUTCAvRjIgMTcuNSBUZiAwIDAgMCByZyAoVGhlIE1v
ZHVsZSBub2RlKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA2MjcuMDIz
NiBjbQ0KcQ0KQlQgMSAwIDAgMSAwIDUwIFRtIC44Mjk5ODQgVHcgMTIgVEwgL0YxIDEwIFRm
IDAgMCAwIHJnIChUaGlzIG5vZGUgZGVzY3JpYmVzIGEgYm9vdCBtb2R1bGUgbG9hZGVkIGJ5
IHRoZSBib290IGxvYWRlci4gVGhlIHJlcXVpcmVkIGNvbXBhdGlibGUgcHJvcGVydHkgZm9s
bG93cykgVGogVCogMCBUdyAuNDgzNjE1IFR3ICh0aGUgZm9ybWF0OiBtb2R1bGUsKSBUaiAo
PCkgVGogKHR5cGUpIFRqICg+IHdoZXJlIHR5cGUgY2FuIGJlIFwyMjNrZXJuZWxcMjI0LCBc
MjIzcmFtZGlza1wyMjQsIFwyMjNkZXZpY2UtdHJlZVwyMjQsIFwyMjNtaWNyb2NvZGVcMjI0
LCBcMjIzeHNtLXBvbGljeVwyMjQpIFRqIFQqIDAgVHcgLjIzODY1MSBUdyAob3IgXDIyM2Nv
bmZpZ1wyMjQuIEluIHRoZSBjYXNlIHRoZSBtb2R1bGUgaXMgYSBtdWx0aWJvb3QgbW9kdWxl
LCB0aGUgYWRkaXRpb25hbCBwcm9wZXJ0eSBzdHJpbmcgXDIyM211bHRpYm9vdCxtb2R1bGVc
MjI0KSBUaiBUKiAwIFR3IC43MjIyMDkgVHcgKG1heSBiZSBwcmVzZW50LiBPbmUgb2YgdHdv
IHByb3BlcnRpZXMgaXMgcmVxdWlyZWQgYW5kIGlkZW50aWZpZXMgaG93IHRvIGxvY2F0ZSB0
aGUgbW9kdWxlLiBUaGV5IGFyZSB0aGUpIFRqIFQqIDAgVHcgKG1iLWluZGV4LCB1c2VkIGZv
ciBtdWx0aWJvb3QgbW9kdWxlcywgYW5kIHRoZSBtb2R1bGUtYWRkciBmb3IgbWVtb3J5IGFk
ZHJlc3MgYmFzZWQgbG9jYXRpb24uKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42
OTI5MSA2MTEuMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAwIDEgMCAyIFRtIC9GMiAx
MCBUZiAxMiBUTCAoY29tcGF0aWJsZSkgVGogVCogRVQNClENClENCnENCjEgMCAwIDEgNjIu
NjkyOTEgNTg0LjAyMzYgY20NCjAgMCAwIHJnDQpCVCAvRjEgMTAgVGYgMTIgVEwgRVQNCkJU
IDEgMCAwIDEgMCAxNCBUbSAgVCogRVQNCnENCjEgMCAwIDEgMjAgMCBjbQ0KcQ0KMCAwIDAg
cmcNCkJUIDEgMCAwIDEgMCAxNCBUbSAvRjEgMTAgVGYgMTIgVEwgMS40NzM3MzUgVHcgKFRo
aXMgaWRlbnRpZmllcyB3aGF0IHRoZSBtb2R1bGUgaXMgYW5kIHRodXMgd2hhdCB0aGUgaHlw
ZXJ2aXNvciBzaG91bGQgdXNlIHRoZSBtb2R1bGUgZm9yIGR1cmluZykgVGogVCogMCBUdyAo
ZG9tYWluIGNvbnN0cnVjdGlvbi4gUmVxdWlyZWQuKSBUaiBUKiBFVA0KUQ0KUQ0KcQ0KUQ0K
UQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA1NjguMDIzNiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEg
MCAwIDEgMCAyIFRtIC9GMiAxMCBUZiAxMiBUTCAobWItaW5kZXgpIFRqIFQqIEVUDQpRDQpR
DQpxDQoxIDAgMCAxIDYyLjY5MjkxIDU0MS4wMjM2IGNtDQowIDAgMCByZw0KQlQgL0YxIDEw
IFRmIDEyIFRMIEVUDQpCVCAxIDAgMCAxIDAgMTQgVG0gIFQqIEVUDQpxDQoxIDAgMCAxIDIw
IDAgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMTQgVG0gL0YxIDEwIFRmIDEyIFRM
IDIuODc2NDEyIFR3IChUaGlzIGlkZW50aWZpZXMgdGhlIGluZGV4IGZvciB0aGlzIG1vZHVs
ZSBpbiB0aGUgbXVsdGlib290IG1vZHVsZSBjaGFpbi4gUmVxdWlyZWQgZm9yIG11bHRpYm9v
dCkgVGogVCogMCBUdyAoZW52aXJvbm1lbnRzLikgVGogVCogRVQNClENClENCnENClENClEN
CnENCjEgMCAwIDEgNjIuNjkyOTEgNTI1LjAyMzYgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAg
MCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKG1vZHVsZS1hZGRyKSBUaiBUKiBFVA0KUQ0K
UQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA1MTAuMDIzNiBjbQ0KMCAwIDAgcmcNCkJUIC9GMSAx
MCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDIgVG0gIFQqIEVUDQpxDQoxIDAgMCAxIDIw
IDAgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEgMTAgVGYgMTIgVEwg
KFRoaXMgaWRlbnRpZmllcyB3aGVyZSBpbiBtZW1vcnkgdGhpcyBtb2R1bGUgaXMgbG9jYXRl
ZC4gUmVxdWlyZWQgZm9yIG5vbi1tdWx0aWJvb3QgZW52aXJvbm1lbnRzLikgVGogVCogRVQN
ClENClENCnENClENClENCnENCjEgMCAwIDEgNjIuNjkyOTEgNDk0LjAyMzYgY20NCnENCjAg
MCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjIgMTAgVGYgMTIgVEwgKGJvb3RhcmdzKSBU
aiBUKiBFVA0KUQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSA0NzkuMDIzNiBjbQ0KMCAwIDAg
cmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSAwIDIgVG0gIFQqIEVUDQpx
DQoxIDAgMCAxIDIwIDAgY20NCnENCjAgMCAwIHJnDQpCVCAxIDAgMCAxIDAgMiBUbSAvRjEg
MTAgVGYgMTIgVEwgKFRoaXMgaXMgdXNlZCB0byBwcm92aWRlIHRoZSBib290IHBhcmFtcyB0
byBrZXJuZWwgbW9kdWxlcy4pIFRqIFQqIEVUDQpRDQpRDQpxDQpRDQpRDQpxDQoxIDAgMCAx
IDYyLjY5MjkxIDQ2Ny4wMjM2IGNtDQpRDQpxDQoxIDAgMCAxIDYyLjY5MjkxIDM4NC4wMjM2
IGNtDQouOTYwNzg0IC45NjA3ODQgLjg2Mjc0NSByZw0KbiAwIDgzIDQ2OS44ODk4IC04MyBy
ZSBmKg0KMCAwIDAgcmcNCkJUIC9GMSAxMCBUZiAxMiBUTCBFVA0KQlQgMSAwIDAgMSA2IDU3
IFRtICBUKiBFVA0KcQ0KMSAwIDAgMSAxNiA1MiBjbQ0KcQ0KMCAwIDAgcmcNCkJUIDEgMCAw
IDEgMCAyLjUgVG0gL0Y1IDEyLjUgVGYgMTUgVEwgKE5vdGUpIFRqIFQqIEVUDQpRDQpRDQpx
DQoxIDAgMCAxIDE2IDE2IGNtDQpxDQowIDAgMCByZw0KQlQgMSAwIDAgMSAwIDE0IFRtIC9G
MSAxMCBUZiAxMiBUTCAuODI3ODQgVHcgKFRoZSBib290YXJncyBwcm9wZXJ0eSBpcyBpbnRl
bmRlZCBmb3Igc2l0dWF0aW9ucyB3aGVyZSB0aGUgc2FtZSBrZXJuZWwgbXVsdGlib290IG1v
ZHVsZSBpcyB1c2VkKSBUaiBUKiAwIFR3IChmb3IgbW9yZSB0aGFuIG9uZSBkb21haW4uKSBU
aiBUKiBFVA0KUQ0KUQ0KcQ0KMSBKDQoxIGoNCi42NjI3NDUgLjY2Mjc0NSAuNjYyNzQ1IFJH
DQouNSB3DQpuIDAgODMgbSA0NjkuODg5OCA4MyBsIFMNCm4gMCAwIG0gNDY5Ljg4OTggMCBs
IFMNCm4gMCAwIG0gMCA4MyBsIFMNCm4gNDY5Ljg4OTggMCBtIDQ2OS44ODk4IDgzIGwgUw0K
UQ0KUQ0KcQ0KMSAwIDAgMSA2Mi42OTI5MSAzNzguMDIzNiBjbQ0KUQ0KIA0KZW5kc3RyZWFt
DQplbmRvYmoNCjMyIDAgb2JqDQo8PCAvTnVtcyBbIDAgMzMgMCBSIDEgMzQgMCBSIDIgMzUg
MCBSIDMgMzYgMCBSIDQgMzcgMCBSIA0KICA1IDM4IDAgUiA2IDM5IDAgUiBdID4+DQplbmRv
YmoNCjMzIDAgb2JqDQo8PCAvUyAvRCAvU3QgMSA+Pg0KZW5kb2JqDQozNCAwIG9iag0KPDwg
L1MgL0QgL1N0IDIgPj4NCmVuZG9iag0KMzUgMCBvYmoNCjw8IC9TIC9EIC9TdCAzID4+DQpl
bmRvYmoNCjM2IDAgb2JqDQo8PCAvUyAvRCAvU3QgNCA+Pg0KZW5kb2JqDQozNyAwIG9iag0K
PDwgL1MgL0QgL1N0IDUgPj4NCmVuZG9iag0KMzggMCBvYmoNCjw8IC9TIC9EIC9TdCA2ID4+
DQplbmRvYmoNCjM5IDAgb2JqDQo8PCAvUyAvRCAvU3QgNyA+Pg0KZW5kb2JqDQp4cmVmDQow
IDQwDQowMDAwMDAwMDAwIDY1NTM1IGYNCjAwMDAwMDAwNzUgMDAwMDAgbg0KMDAwMDAwMDE1
MCAwMDAwMCBuDQowMDAwMDAwMjYwIDAwMDAwIG4NCjAwMDAwMDAzNzUgMDAwMDAgbg0KMDAw
MDAwMDQ4MyAwMDAwMCBuDQowMDAwMDAwNjkyIDAwMDAwIG4NCjAwMDAwMDA5MDEgMDAwMDAg
bg0KMDAwMDAwMTExMCAwMDAwMCBuDQowMDAwMDAxMzE5IDAwMDAwIG4NCjAwMDAwMDE0Mzcg
MDAwMDAgbg0KMDAwMDAwMTY0NyAwMDAwMCBuDQowMDAwMDAxNzcwIDAwMDAwIG4NCjAwMDAw
MDE5ODAgMDAwMDAgbg0KMDAwMDAwMjE5MCAwMDAwMCBuDQowMDAwMDAyMjk5IDAwMDAwIG4N
CjAwMDAwMDI1OTcgMDAwMDAgbg0KMDAwMDAwMjY3NCAwMDAwMCBuDQowMDAwMDAyODM5IDAw
MDAwIG4NCjAwMDAwMDI5NzEgMDAwMDAgbg0KMDAwMDAwMzEwMCAwMDAwMCBuDQowMDAwMDAz
MjM3IDAwMDAwIG4NCjAwMDAwMDMzNzAgMDAwMDAgbg0KMDAwMDAwMzUwMyAwMDAwMCBuDQow
MDAwMDAzNjIzIDAwMDAwIG4NCjAwMDAwMDM3MjUgMDAwMDAgbg0KMDAwMDAwNTk5NSAwMDAw
MCBuDQowMDAwMDA4NTQzIDAwMDAwIG4NCjAwMDAwMTM2MDkgMDAwMDAgbg0KMDAwMDAxNjA4
OCAwMDAwMCBuDQowMDAwMDIxNTU5IDAwMDAwIG4NCjAwMDAwMjg0NzAgMDAwMDAgbg0KMDAw
MDAzMjI0OCAwMDAwMCBuDQowMDAwMDMyMzUwIDAwMDAwIG4NCjAwMDAwMzIzODcgMDAwMDAg
bg0KMDAwMDAzMjQyNCAwMDAwMCBuDQowMDAwMDMyNDYxIDAwMDAwIG4NCjAwMDAwMzI0OTgg
MDAwMDAgbg0KMDAwMDAzMjUzNSAwMDAwMCBuDQowMDAwMDMyNTcyIDAwMDAwIG4NCnRyYWls
ZXINCjw8IC9JRCANCiAlIFJlcG9ydExhYiBnZW5lcmF0ZWQgUERGIGRvY3VtZW50IC0tIGRp
Z2VzdCAoaHR0cDovL3d3dy5yZXBvcnRsYWIuY29tKQ0KIFsoLX5cMTc3XDMwNFwzMTBYXDAw
N1wyNjUrXDM1N1wyNTArXDIwM1wyNDFhXDI0NikgKC1+XDE3N1wzMDRcMzEwWFwwMDdcMjY1
K1wzNTdcMjUwK1wyMDNcMjQxYVwyNDYpXQ0KIC9JbmZvIDE1IDAgUiAvUm9vdCAxNCAwIFIg
L1NpemUgNDAgPj4NCnN0YXJ0eHJlZg0KMzI2MDkNCiUlRU9GDQo=
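
To make the layout concrete, the nodes described above might be arranged
roughly as follows. This is a sketch only: the node names, the compatible
string values (other than the documented module,<type> format), and all
property values are invented placeholders, not taken from the document.

```dts
/ {
    hypervisor {
        /* Placeholder: the document only says this property identifies
         * the type of hypervisor node present. */
        compatible = "hypervisor,xen";

        config {
            compatible = "config";               /* placeholder value */
            module@0 {
                compatible = "module,microcode", "multiboot,module";
                mb-index = <1>;                  /* multiboot chain index */
            };
        };

        domain@0 {
            compatible = "domain";               /* placeholder value */
            domid = <0>;          /* "0": use the next available domid */
            mode = <0>;           /* placeholder value */
            cpus = <1>;
            memory = <262144>;    /* in KBs (here: 256 MB) */
            security-id = "domu_t";

            module@0 {
                compatible = "module,kernel", "multiboot,module";
                mb-index = <2>;
                bootargs = "console=hvc0";       /* placeholder value */
            };
        };
    };
};
```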
--------------AE2245E82A7EC007FBDDF8CD--


From xen-devel-bounces@lists.xenproject.org Fri May 14 15:11:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 15:11:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127405.239456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhZTO-0008IB-66; Fri, 14 May 2021 15:11:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127405.239456; Fri, 14 May 2021 15:11:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhZTO-0008I4-2w; Fri, 14 May 2021 15:11:30 +0000
Received: by outflank-mailman (input) for mailman id 127405;
 Fri, 14 May 2021 15:11:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zRnq=KJ=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lhZTM-0008Hy-Dq
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 15:11:28 +0000
Received: from mail-lj1-x234.google.com (unknown [2a00:1450:4864:20::234])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 62292a28-6481-4c80-94a0-501172984c0b;
 Fri, 14 May 2021 15:11:27 +0000 (UTC)
Received: by mail-lj1-x234.google.com with SMTP id e11so24716209ljn.13
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 08:11:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 62292a28-6481-4c80-94a0-501172984c0b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=Dzs7MRMo9A8buEh5sBLpY4PovQtIpPtoa7SQSWRHsVY=;
        b=ZmyunJeGSLA5dQeNDTamB5aFmXggZhNUYLe1C/nzWG+R0LKmNAxyE7UfzPdRflAMQv
         0Ac+04szGVx7nrF+GGLCMvA27D4LV9ALjJMeBaxrLXgpOD/hSwer4UcP+XETZWbwBcGO
         kSxdKmNZq0bUSTR/9mX2XEPbpYgoX8LWp8v+1Isk4lqonkDm+oyom9PmT3uu2INTR/b9
         uEyCkRxrJwsjwBaPua0cvB30z8+/1Sb09ZkAGSnah3NsOUbS1D2/BDUcQVHPm1Pg2XSV
         DNd/LLGcfpAZHxgkgEL3hMFQsLP01OsBcXnVTzDoLxMebxM5K1hrGnJ0PPIiE7YoNlwJ
         8wUA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=Dzs7MRMo9A8buEh5sBLpY4PovQtIpPtoa7SQSWRHsVY=;
        b=oP02SQukkxUlRpe1HP19QqGmABA52bNeeVbyM1qctc64B4xkjKb3YN+mIyWSrdKcOS
         Nx7K5gGSS6vzBgfuBHGzvy9EzHX5ldffry0IR7YR7B+RuwR53wRI3QpZmJin8XOIXmGJ
         T/IH/BBvyH920zkZVgzNy8sJCs6MPPp1rlkZIecoefATPImaX+qbWR4kq1NSiLs2HbBX
         SzB3YBein06bsz9hDgg0CsWXt2TgWXIqHZvmqyFRbYxJaHtV+XFHZ0xu6vXrxXihHM8I
         EuMNeu1y1kz50icS7pjJO2iQUgG+OHHoObo2pvSpyN9g2MgavVhSKICwS0nlIrW3GKXP
         2NQw==
X-Gm-Message-State: AOAM532zXWoetWUveGv0q96P8GlNvMA/m8yNzEWK41V+tNiNTM3OXHb9
	pnPStuRuPZX/46/bhyWjg+P/KbF7n0RNxEBTTJ0=
X-Google-Smtp-Source: ABdhPJxgmceqpv46lA4a6UawsrEwtRjDxu78SMsYvcGI8+siPNkSPoaqEcBbRAWbdjC6+oZ1TBpD9niWDRG72qcQo2E=
X-Received: by 2002:a2e:9a8b:: with SMTP id p11mr26349892lji.285.1621005086100;
 Fri, 14 May 2021 08:11:26 -0700 (PDT)
MIME-Version: 1.0
References: <20210514135014.78389-1-roger.pau@citrix.com>
In-Reply-To: <20210514135014.78389-1-roger.pau@citrix.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 14 May 2021 11:11:14 -0400
Message-ID: <CAKf6xpsyzazbY_mA0QtAuAqpOPkpuhjrZ1wid0khWy1urh4iBg@mail.gmail.com>
Subject: Re: [PATCH] libelf: improve PVH elfnote parsing
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, 
	Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Content-Type: text/plain; charset="UTF-8"

On Fri, May 14, 2021 at 9:50 AM Roger Pau Monne <roger.pau@citrix.com> wrote:
>
> Pass an hvm boolean parameter to the elf note parsing and checking
> routines, so that better checking can be done in case libelf is
> dealing with an hvm container.
>
> elf_xen_note_check shouldn't return early unless PHYS32_ENTRY is set
> and the container is of type HVM, or else the loader and version
> checks would be avoided for kernels intended to be booted as PV but
> that also have PHYS32_ENTRY set.
>
> Adjust elf_xen_addr_calc_check so that the virtual addresses are
> actually physical ones (by setting virt_base and elf_paddr_offset to
> zero) when the container is of type HVM, as that container is always
> started with paging disabled.

Should elf_xen_addr_calc_check be changed so that PV operates on
virtual addresses and HVM operates on physical addresses?

I worked on some patches for this a while back, but lost track when
other work pulled me away.  I'll send out what I had, but I think I
had not tested many of the cases.  Also, I had other questions about
the approach.  Fundamentally, what notes and limits need to be checked
for PVH vs. PV?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri May 14 15:18:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 15:18:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127409.239467 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhZa0-0000bu-Vo; Fri, 14 May 2021 15:18:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127409.239467; Fri, 14 May 2021 15:18:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhZa0-0000bn-Rp; Fri, 14 May 2021 15:18:20 +0000
Received: by outflank-mailman (input) for mailman id 127409;
 Fri, 14 May 2021 15:18:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zRnq=KJ=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lhZa0-0000bh-9X
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 15:18:20 +0000
Received: from mail-qt1-x829.google.com (unknown [2607:f8b0:4864:20::829])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8f5de20c-72b4-48a5-806e-fa33dd6974df;
 Fri, 14 May 2021 15:18:19 +0000 (UTC)
Received: by mail-qt1-x829.google.com with SMTP id j11so22399683qtn.12
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 08:18:19 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
 by smtp.gmail.com with ESMTPSA id
 d84sm4665216qke.131.2021.05.14.08.18.16
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 May 2021 08:18:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8f5de20c-72b4-48a5-806e-fa33dd6974df
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=t+535v201V6yOpH45NDGnYgfBVri8G9cr1TNn8Y5vxU=;
        b=aqGv2GnlndOQBWyVyD3ZsYDc7cry9v08kxCeP9FzXhBoxrkRcnboBS2hvCtxtf5q+b
         RWiAj9ttpPDaP8tcH0INQosU3lQo2Y1cSHQoprtpQVLNJlN6PYlkIZ/JK/rNAmeJUawe
         VIJU20tJL1kXjX4dr3fifYJTAEvgYEjot0Wib+gUn/U3od+9jCvwWDsvE5izagE4VM+C
         q5STODozGC75f5fRVZUcv93Z0toSvXSazMmriWRwJ8V1seaZfZkSnXjf7Jrbbn7pGeX+
         V1tagf1utMXwx4TOKGz+A0wb9VnbbyMcIo6xVcyygWXqNq5/4DoE7QxK/SzEWHF3h+ZT
         cJ+A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=t+535v201V6yOpH45NDGnYgfBVri8G9cr1TNn8Y5vxU=;
        b=Xa3I2Rr+QEeIK3CCRkTV8CzmAzxqGM3zaSLNJ6NxMcHw81wHOm5PC4VXmCPDEkoVLD
         C/5FE6P+n8379fBo3Lqm+oXU2HfeTFLy13CKbFIitqUaUaqz1UsmnblyB1VV/4yaAi92
         8sCEdZornsuY6JmGBB59/L+TSXcUbEEG3ueZbfZLblNkDE2pDmG9ai9lDP3+djV08SnI
         NdgatJEWM0Dz0e1z9fsiX0znXep7FKxPVBTBfC6ie6jG4zV+OGrzMn5NWx2KK56UI8R3
         Xe+dBTL/wBuKkd2pQbSNFG0PGg3Yv4gbeWnJQsZO/28nBoxts8iVrjNOBYZOXJ2ePsBA
         16eQ==
X-Gm-Message-State: AOAM531FsIZBtr94yIALC/KZRGL0EqVJt+tGCEixuEdo6AcJoSQmKLiV
	h/cFyIr0/wTtiuufgYk99o8=
X-Google-Smtp-Source: ABdhPJwNJGJf0niwJ7a/By1VXpPQr+BwLSKBfe2noB/9Y6LJbs2lPd8B50wNf9lLpl1gI7uZcAcc3A==
X-Received: by 2002:a05:622a:1005:: with SMTP id d5mr21548657qte.0.1621005499131;
        Fri, 14 May 2021 08:18:19 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: jandryuk@gmail.com,
	xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	julien@xen.org,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	wl@xen.org
Subject: [RFC PATCH 1/3] libelf: Introduce phys_kstart/end
Date: Fri, 14 May 2021 11:17:29 -0400
Message-Id: <20210514151731.19272-1-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <CAKf6xpsyzazbY_mA0QtAuAqpOPkpuhjrZ1wid0khWy1urh4iBg@mail.gmail.com>
References: <CAKf6xpsyzazbY_mA0QtAuAqpOPkpuhjrZ1wid0khWy1urh4iBg@mail.gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The physical start and end addresses matter for PVH.  These are only
used by a PVH dom0, but will help when separating the PV and PVH ELF
checking in the next patch.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/arch/x86/hvm/dom0_build.c      | 4 ++--
 xen/common/libelf/libelf-dominfo.c | 3 +++
 xen/include/xen/libelf.h           | 2 ++
 3 files changed, 7 insertions(+), 2 deletions(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 878dc1d808..5b9192ecc6 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -574,8 +574,8 @@ static int __init pvh_load_kernel(struct domain *d, const module_t *image,
     }
 
     /* Copy the OS image and free temporary buffer. */
-    elf.dest_base = (void *)(parms.virt_kstart - parms.virt_base);
-    elf.dest_size = parms.virt_kend - parms.virt_kstart;
+    elf.dest_base = (void *)parms.phys_kstart - parms.elf_paddr_offset;
+    elf.dest_size = parms.phys_kend - parms.phys_kstart;
 
     elf_set_vcpu(&elf, v);
     rc = elf_load_binary(&elf);
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index 69c94b6f3b..b1f36866eb 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -453,6 +453,8 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
     }
 
     virt_offset = parms->virt_base - parms->elf_paddr_offset;
+    parms->phys_kstart = elf->pstart;
+    parms->phys_kend   = elf->pend;
     parms->virt_kstart = elf->pstart + virt_offset;
     parms->virt_kend   = elf->pend   + virt_offset;
 
@@ -464,6 +466,9 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
         elf_parse_bsdsyms(elf, elf->pend);
         if ( elf->bsd_symtab_pend )
+        {
             parms->virt_kend = elf->bsd_symtab_pend + virt_offset;
+            parms->phys_kend = elf->bsd_symtab_pend;
+        }
     }
 
     elf_msg(elf, "ELF: addresses:\n");
diff --git a/xen/include/xen/libelf.h b/xen/include/xen/libelf.h
index b73998150f..8d80d0812a 100644
--- a/xen/include/xen/libelf.h
+++ b/xen/include/xen/libelf.h
@@ -434,6 +434,8 @@ struct elf_dom_parms {
     /* calculated */
     uint64_t virt_kstart;
     uint64_t virt_kend;
+    uint64_t phys_kstart;
+    uint64_t phys_kend;
 };
 
 static inline void elf_xen_feature_set(int nr, uint32_t * addr)
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 15:18:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 15:18:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127410.239478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhZa8-0000uf-6X; Fri, 14 May 2021 15:18:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127410.239478; Fri, 14 May 2021 15:18:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhZa8-0000uW-3W; Fri, 14 May 2021 15:18:28 +0000
Received: by outflank-mailman (input) for mailman id 127410;
 Fri, 14 May 2021 15:18:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=zRnq=KJ=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lhZa6-0000tx-RS
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 15:18:26 +0000
Received: from mail-qt1-x830.google.com (unknown [2607:f8b0:4864:20::830])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a6ddc3a4-12b4-4881-b946-300767f7ec71;
 Fri, 14 May 2021 15:18:25 +0000 (UTC)
Received: by mail-qt1-x830.google.com with SMTP id t7so22466997qtn.3
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 08:18:25 -0700 (PDT)
Received: from pm2-ws13.praxislan02.com ([2001:470:8:67e:ba27:ebff:fee8:ce27])
 by smtp.gmail.com with ESMTPSA id
 d84sm4665216qke.131.2021.05.14.08.18.23
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 May 2021 08:18:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6ddc3a4-12b4-4881-b946-300767f7ec71
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=SNd6wxVxBEfsPWZeVHLMlxGx5QbhPtmaFcdR2jE0DhI=;
        b=isN6jpvZ9+wi8AC1pGAG6y5F4EAetzbasi8ZdtM/pwptczuN3tbLUVmqKI+zR87xqa
         +rdRCbaYy34lYptq4sCCCDY86vYO5AFlDUkhsrf6jXVq1cvP4J8Hj+eU88JfrLAnoUSp
         ntRbc0V7DgM/CDq07IZzBRvXseZkzUuQLzQA9oskxyMi17VKowfdXqAvQBw6dmrFKOoX
         RQXCQ6WBpKZghTZTPkqrV+v1BLp/53a04iaFrDJBDVZWItQfetK1botqRtryWV3aATXq
         MoQqY4TiLNN98UEbH+U/brw0yRN/bSCZq5YJBrqevAchJoGnVFeJmddyIe6cuNsSjSMx
         gfhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=SNd6wxVxBEfsPWZeVHLMlxGx5QbhPtmaFcdR2jE0DhI=;
        b=kE+Uuwf9NWYdM8avjPQsDVqgG6HQ7eunxKEYwa+HucYoHchrZb+Lc+DVcRoVr4FQqL
         eM0y5lV/70vMY3y4iFQKR3SOkq9uuCZ3CNj2kaiyoHyaV39IyXJRoQpYdLKk3WPRPe/f
         YySMC9hJmCJrw63Cu2L85RNsFkQoQDQCOfXg1kvSx6jhnaYPVu4OxAYdCgutD3HiF3jl
         Qvn3ct4DD4gPlEzU+2OD8QUmdof1NN+KpscLv8LPKe7fYLcwZz7DoQgnUY0nu/g4Dc15
         dv7bClysrHCrOLr+9uWOC7YdT0TV4Gd5kRkCIogk0D4hu7VRtjuUswxXgUT1YWqTh6xe
         4EKQ==
X-Gm-Message-State: AOAM531H5DABgoJ9vB7uTwn8z/6CmqQeL2cLU9Yz48BYwSxxWQMJRMfJ
	KQz6nJRiJvLJp9FEW8aAJG4=
X-Google-Smtp-Source: ABdhPJx0mfe0AbHORJpxukjRQaK2Wz47YBNKWagyiAy/SODP9shExCXw374BbPNWWk/TDo++M0wDrg==
X-Received: by 2002:a05:622a:4e:: with SMTP id y14mr10512177qtw.186.1621005505325;
        Fri, 14 May 2021 08:18:25 -0700 (PDT)
From: Jason Andryuk <jandryuk@gmail.com>
To: jandryuk@gmail.com,
	xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	julien@xen.org,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	wl@xen.org
Subject: [RFC PATCH 2/3] libelf: Use flags to check pv or pvh in elf_xen_parse
Date: Fri, 14 May 2021 11:17:30 -0400
Message-Id: <20210514151731.19272-2-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210514151731.19272-1-jandryuk@gmail.com>
References: <CAKf6xpsyzazbY_mA0QtAuAqpOPkpuhjrZ1wid0khWy1urh4iBg@mail.gmail.com>
 <20210514151731.19272-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Certain checks are applicable only to PV or only to PVH, so split them
and run just the appropriate set for each guest type.

This fixes loading a PVH kernel that has a PHYS32_ENTRY ELF note but no
ENTRY note.  Such a kernel would previously fail the virt_entry check,
which does not apply to PVH.

This re-instates the loader and xen version checks for the PV case that
were omitted for kernels passing the PHYS32_ENTRY check.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 tools/fuzz/libelf/libelf-fuzzer.c   |  2 +-
 tools/libs/guest/xg_dom_elfloader.c | 11 +++-
 tools/libs/guest/xg_dom_hvmloader.c |  2 +-
 xen/arch/x86/hvm/dom0_build.c       |  2 +-
 xen/arch/x86/pv/dom0_build.c        |  2 +-
 xen/common/libelf/libelf-dominfo.c  | 91 +++++++++++++++++++++++------
 xen/include/xen/libelf.h            |  7 ++-
 7 files changed, 93 insertions(+), 24 deletions(-)

diff --git a/tools/fuzz/libelf/libelf-fuzzer.c b/tools/fuzz/libelf/libelf-fuzzer.c
index 1ba8571711..f488510618 100644
--- a/tools/fuzz/libelf/libelf-fuzzer.c
+++ b/tools/fuzz/libelf/libelf-fuzzer.c
@@ -17,7 +17,7 @@ int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
         return -1;
 
     elf_parse_binary(elf);
-    elf_xen_parse(elf, &parms);
+    elf_xen_parse(elf, &parms, ELF_XEN_CHECK_PV | ELF_XEN_CHECK_PVH);
 
     return 0;
 }
diff --git a/tools/libs/guest/xg_dom_elfloader.c b/tools/libs/guest/xg_dom_elfloader.c
index 06e713fe11..c3280b1603 100644
--- a/tools/libs/guest/xg_dom_elfloader.c
+++ b/tools/libs/guest/xg_dom_elfloader.c
@@ -120,6 +120,7 @@ static elf_negerrnoval check_elf_kernel(struct xc_dom_image *dom, bool verbose)
 static elf_negerrnoval xc_dom_probe_elf_kernel(struct xc_dom_image *dom)
 {
     struct elf_binary elf;
+    unsigned int flags;
     int rc;
 
     rc = check_elf_kernel(dom, 0);
@@ -135,7 +136,9 @@ static elf_negerrnoval xc_dom_probe_elf_kernel(struct xc_dom_image *dom)
      * or else we might be trying to load a plain ELF.
      */
     elf_parse_binary(&elf);
-    rc = elf_xen_parse(&elf, dom->parms);
+    flags = dom->container_type == XC_DOM_PV_CONTAINER ? ELF_XEN_CHECK_PV :
+                                                         ELF_XEN_CHECK_PVH;
+    rc = elf_xen_parse(&elf, dom->parms, flags);
     if ( rc != 0 )
         return rc;
 
@@ -146,6 +149,7 @@ static elf_negerrnoval xc_dom_parse_elf_kernel(struct xc_dom_image *dom)
 {
     struct elf_binary *elf;
     elf_negerrnoval rc;
+    unsigned int flags;
 
     rc = check_elf_kernel(dom, 1);
     if ( rc != 0 )
@@ -166,7 +170,10 @@ static elf_negerrnoval xc_dom_parse_elf_kernel(struct xc_dom_image *dom)
 
     /* parse binary and get xen meta info */
     elf_parse_binary(elf);
-    if ( elf_xen_parse(elf, dom->parms) != 0 )
+    flags = dom->container_type == XC_DOM_PV_CONTAINER ? ELF_XEN_CHECK_PV :
+                                                         ELF_XEN_CHECK_PVH;
+    rc = elf_xen_parse(elf, dom->parms, flags);
+    if ( rc != 0 )
     {
         rc = -EINVAL;
         goto out;
diff --git a/tools/libs/guest/xg_dom_hvmloader.c b/tools/libs/guest/xg_dom_hvmloader.c
index ec6ebad7fd..bf28690415 100644
--- a/tools/libs/guest/xg_dom_hvmloader.c
+++ b/tools/libs/guest/xg_dom_hvmloader.c
@@ -73,7 +73,7 @@ static elf_negerrnoval xc_dom_probe_hvm_kernel(struct xc_dom_image *dom)
      * else we might be trying to load a PV kernel.
      */
     elf_parse_binary(&elf);
-    rc = elf_xen_parse(&elf, dom->parms);
+    rc = elf_xen_parse(&elf, dom->parms, ELF_XEN_CHECK_PV | ELF_XEN_CHECK_PVH);
     if ( rc == 0 )
         return -EINVAL;
 
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 5b9192ecc6..552448ce5d 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -561,7 +561,7 @@ static int __init pvh_load_kernel(struct domain *d, const module_t *image,
     elf_set_verbose(&elf);
 #endif
     elf_parse_binary(&elf);
-    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    if ( (rc = elf_xen_parse(&elf, &parms, ELF_XEN_CHECK_PVH)) != 0 )
     {
         printk("Unable to parse kernel for ELFNOTES\n");
         return rc;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index e0801a9e6d..8bc77b0366 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -353,7 +353,7 @@ int __init dom0_construct_pv(struct domain *d,
         elf_set_verbose(&elf);
 
     elf_parse_binary(&elf);
-    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    if ( (rc = elf_xen_parse(&elf, &parms, ELF_XEN_CHECK_PV)) != 0 )
         goto out;
 
     /* compatibility check */
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index b1f36866eb..13eb39ec52 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -359,7 +359,21 @@ elf_errorstatus elf_xen_parse_guest_info(struct elf_binary *elf,
 /* ------------------------------------------------------------------------ */
 /* sanity checks                                                            */
 
-static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
+static elf_errorstatus elf_xen_note_check_pvh(struct elf_binary *elf,
+                              struct elf_dom_parms *parms)
+{
+    /* PVH only requires one ELF note to be set */
+    if ( parms->phys_entry != UNSET_ADDR32 )
+    {
+        elf_msg(elf, "ELF: Found PVH image\n");
+        return 0;
+    }
+
+    elf_err(elf, "ELF: Missing PVH PHYS32_ENTRY\n");
+    return -1;
+}
+
+static elf_errorstatus elf_xen_note_check_pv(struct elf_binary *elf,
                               struct elf_dom_parms *parms)
 {
     if ( (ELF_PTRVAL_INVALID(parms->elf_note_start)) &&
@@ -381,13 +395,6 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
          return 0;
     }
 
-    /* PVH only requires one ELF note to be set */
-    if ( parms->phys_entry != UNSET_ADDR32 )
-    {
-        elf_msg(elf, "ELF: Found PVH image\n");
-        return 0;
-    }
-
     /* Check the contents of the Xen notes or guest string. */
     if ( ((strlen(parms->loader) == 0) ||
           strncmp(parms->loader, "generic", 7)) &&
@@ -413,7 +420,36 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
     return 0;
 }
 
-static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
+static elf_errorstatus elf_xen_addr_calc_check_pvh(struct elf_binary *elf,
+                                                   struct elf_dom_parms *parms)
+{
+    parms->phys_kstart = elf->pstart;
+    parms->phys_kend   = elf->pend;
+
+    if ( parms->bsd_symtab )
+    {
+        elf_parse_bsdsyms(elf, elf->pend);
+        if ( elf->bsd_symtab_pend )
+            parms->phys_kend = elf->bsd_symtab_pend;
+    }
+
+    elf_msg(elf, "ELF: addresses:\n");
+    elf_msg(elf, "    phys_kstart      = 0x%" PRIx64 "\n", parms->phys_kstart);
+    elf_msg(elf, "    phys_kend        = 0x%" PRIx64 "\n", parms->phys_kend);
+    elf_msg(elf, "    phys_entry       = 0x%" PRIx32 "\n", parms->phys_entry);
+
+    if ( parms->phys_kstart > parms->phys_kend ||
+         parms->phys_entry < parms->phys_kstart ||
+         parms->phys_entry > parms->phys_kend )
+    {
+        elf_err(elf, "ERROR: ELF start or entries are out of bounds\n");
+        return -1;
+    }
+
+    return 0;
+}
+
+static elf_errorstatus elf_xen_addr_calc_check_pv(struct elf_binary *elf,
                                    struct elf_dom_parms *parms)
 {
     uint64_t virt_offset;
@@ -453,8 +489,6 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
     }
 
     virt_offset = parms->virt_base - parms->elf_paddr_offset;
-    parms->phys_kstart = elf->pstart;
-    parms->phys_kend   = elf->pend;
     parms->virt_kstart = elf->pstart + virt_offset;
     parms->virt_kend   = elf->pend   + virt_offset;
 
@@ -466,7 +500,6 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
         elf_parse_bsdsyms(elf, elf->pend);
         if ( elf->bsd_symtab_pend )
             parms->virt_kend = elf->bsd_symtab_pend + virt_offset;
-            parms->phys_kend = elf->bsd_symtab_pend;
     }
 
     elf_msg(elf, "ELF: addresses:\n");
@@ -500,9 +533,8 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
 
 /* ------------------------------------------------------------------------ */
 /* glue it all together ...                                                 */
-
-elf_errorstatus elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms)
+static elf_errorstatus elf_xen_parse_common(struct elf_binary *elf,
+                                            struct elf_dom_parms *parms)
 {
     ELF_HANDLE_DECL(elf_shdr) shdr;
     ELF_HANDLE_DECL(elf_phdr) phdr;
@@ -597,10 +629,35 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
         }
     }
 
-    if ( elf_xen_note_check(elf, parms) != 0 )
+    return 0;
+}
+
+elf_errorstatus elf_xen_parse(struct elf_binary *elf,
+                              struct elf_dom_parms *parms,
+                              unsigned int flags)
+{
+    if ( !flags ) {
+        elf_err(elf, "Must specify ELF_XEN_CHECK_{PV,PVH} flags to check\n");
         return -1;
-    if ( elf_xen_addr_calc_check(elf, parms) != 0 )
+    }
+
+    if ( elf_xen_parse_common(elf, parms) != 0 )
         return -1;
+
+    if ( flags & ELF_XEN_CHECK_PV ) {
+        if ( elf_xen_note_check_pv(elf, parms) != 0 )
+            return -1;
+        if ( elf_xen_addr_calc_check_pv(elf, parms) != 0 )
+            return -1;
+    }
+
+    if ( flags & ELF_XEN_CHECK_PVH ) {
+        if ( elf_xen_note_check_pvh(elf, parms) != 0 )
+            return -1;
+        if ( elf_xen_addr_calc_check_pvh(elf, parms) != 0 )
+            return -1;
+    }
+
     return 0;
 }
 
diff --git a/xen/include/xen/libelf.h b/xen/include/xen/libelf.h
index 8d80d0812a..858f42cf07 100644
--- a/xen/include/xen/libelf.h
+++ b/xen/include/xen/libelf.h
@@ -455,8 +455,13 @@ int elf_xen_parse_note(struct elf_binary *elf,
                        ELF_HANDLE_DECL(elf_note) note);
 int elf_xen_parse_guest_info(struct elf_binary *elf,
                              struct elf_dom_parms *parms);
+
+#define ELF_XEN_CHECK_PV  (1 << 0)
+#define ELF_XEN_CHECK_PVH (1 << 1)
+
 int elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms);
+                  struct elf_dom_parms *parms,
+                  unsigned int flags);
 
 static inline void *elf_memcpy_unchecked(void *dest, const void *src, size_t n)
     { return memcpy(dest, src, n); }
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 15:18:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 15:18:33 +0000
From: Jason Andryuk <jandryuk@gmail.com>
To: jandryuk@gmail.com,
	xen-devel@lists.xenproject.org
Cc: andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	julien@xen.org,
	roger.pau@citrix.com,
	sstabellini@kernel.org,
	wl@xen.org
Subject: [RFC PATCH 3/3] libelf: PVH: only allow elf_paddr_offset of 0
Date: Fri, 14 May 2021 11:17:31 -0400
Message-Id: <20210514151731.19272-3-jandryuk@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210514151731.19272-1-jandryuk@gmail.com>
References: <CAKf6xpsyzazbY_mA0QtAuAqpOPkpuhjrZ1wid0khWy1urh4iBg@mail.gmail.com>
 <20210514151731.19272-1-jandryuk@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Modern Linux and FreeBSD kernels hardcode elf_paddr_offset to 0, so just
drop its use for PVH.

Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
---
 xen/arch/x86/hvm/dom0_build.c      | 2 +-
 xen/common/libelf/libelf-dominfo.c | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 552448ce5d..335321ed3e 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -574,7 +574,7 @@ static int __init pvh_load_kernel(struct domain *d, const module_t *image,
     }
 
     /* Copy the OS image and free temporary buffer. */
-    elf.dest_base = (void *)parms.phys_kstart - parms.elf_paddr_offset;
+    elf.dest_base = (void *)parms.phys_kstart;
     elf.dest_size = parms.phys_kend - parms.phys_kstart;
 
     elf_set_vcpu(&elf, v);
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index 13eb39ec52..12feb8755e 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -433,6 +433,12 @@ static elf_errorstatus elf_xen_addr_calc_check_pvh(struct elf_binary *elf,
             parms->phys_kend = elf->bsd_symtab_pend;
     }
 
+    if ( parms->elf_paddr_offset != 0 ) {
+        elf_err(elf, "ERROR: ELF elf_paddr_offset (0x%" PRIx64 ") is non-zero\n",
+                parms->elf_paddr_offset);
+        return -1;
+    }
+
     elf_msg(elf, "ELF: addresses:\n");
     elf_msg(elf, "    phys_kstart      = 0x%" PRIx64 "\n", parms->phys_kstart);
     elf_msg(elf, "    phys_kend        = 0x%" PRIx64 "\n", parms->phys_kend);
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 15:29:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 15:29:47 +0000
Date: Fri, 14 May 2021 17:29:26 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
Subject: Re: Ping: [PATCH v5 0/6] evtchn: (not so) recent XSAs follow-on
Message-ID: <YJ6XVmadaDbP3aUx@Air-de-Roger>
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
 <d29fa89b-ea0a-bdbd-04c9-02eff0854d47@suse.com>
 <40e90456-90dc-7932-68ec-6f4d0941999f@xen.org>
 <0e19fb4c-4a87-ff80-fa98-fab623d6704f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <0e19fb4c-4a87-ff80-fa98-fab623d6704f@suse.com>
MIME-Version: 1.0

On Thu, Apr 22, 2021 at 10:53:05AM +0200, Jan Beulich wrote:
> On 21.04.2021 17:56, Julien Grall wrote:
> > 
> > 
> > On 21/04/2021 16:23, Jan Beulich wrote:
> >> On 27.01.2021 09:13, Jan Beulich wrote:
> >>> These are grouped into a series largely because of their origin,
> >>> not so much because there are (heavy) dependencies among them.
> >>> The main change from v4 is the dropping of the two patches trying
> >>> to do away with the double event lock acquires in interdomain
> >>> channel handling. See also the individual patches.
> >>>
> >>> 1: use per-channel lock where possible
> >>> 2: convert domain event lock to an r/w one
> >>> 3: slightly defer lock acquire where possible
> >>> 4: add helper for port_is_valid() + evtchn_from_port()
> >>> 5: type adjustments
> >>> 6: drop acquiring of per-channel lock from send_guest_{global,vcpu}_virq()
> >>
> >> Only patch 4 here has got an ack so far - may I ask for clear feedback
> >> as to at least some of these being acceptable (I can see the last one
> >> being controversial, and if this was the only one left I probably
> >> wouldn't even ping, despite thinking that it helps reduce unnecessary
> >> overhead).
> > 
> > I left feedback for the series on the previous version (see [1]). It
> > would have been nice if it had been mentioned somewhere, as this is still
> > unresolved.
> 
> I will admit I forgot about the controversy on patch 1. I did, however,
> reply to your concerns. What didn't happen is the feedback from others
> that you did ask for.
> 
> And of course there are 4 more patches here (one of them having an ack,
> yes) which could do with feedback. I'm certainly willing, where possible,
> to further re-order the series such that controversial changes are at its
> end.

I think it would be easier to figure out whether the changes are correct
if we had some kind of documentation about what the per-domain
event_lock and the per-event locks protect and how they are supposed to
be used. I don't seem to be able to find any comments regarding how
they are to be used.

Regarding the changes themselves in patch 1 (which I think have caused
part of the controversy here), I'm unsure they are worth it, because the
functions modified all seem to be non-performance-critical:
evtchn_status, domain_dump_evtchn_info, flask_get_peer_sid.

So I would say that unless we have clear rules written down for what
the per-domain event_lock protects, I would be hesitant to change any
of the logic, especially for critical paths.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 14 15:34:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 15:34:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127428.239511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhZpA-0004j3-8h; Fri, 14 May 2021 15:34:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127428.239511; Fri, 14 May 2021 15:34:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhZpA-0004iw-4h; Fri, 14 May 2021 15:34:00 +0000
Received: by outflank-mailman (input) for mailman id 127428;
 Fri, 14 May 2021 15:33:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dpaq=KJ=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lhZp8-0004iq-TW
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 15:33:59 +0000
Received: from mx0b-00069f02.pphosted.com (unknown [205.220.177.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 88b2a99c-88a2-4b74-a687-4d6133737a7e;
 Fri, 14 May 2021 15:33:58 +0000 (UTC)
Received: from pps.filterd (m0246632.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14EFQM4e003094; Fri, 14 May 2021 15:33:55 GMT
Received: from oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 38gpphrqxn-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 May 2021 15:33:55 +0000
Received: from userp3020.oracle.com (userp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14EFSvtL175312;
 Fri, 14 May 2021 15:33:54 GMT
Received: from nam02-sn1-obe.outbound.protection.outlook.com
 (mail-sn1anam02lp2046.outbound.protection.outlook.com [104.47.57.46])
 by userp3020.oracle.com with ESMTP id 38gpphaguh-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 14 May 2021 15:33:53 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BL0PR10MB3073.namprd10.prod.outlook.com (2603:10b6:208:32::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Fri, 14 May
 2021 15:33:52 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4129.028; Fri, 14 May 2021
 15:33:51 +0000
Received: from [10.74.97.42] (160.34.89.42) by
 BY3PR05CA0058.namprd05.prod.outlook.com (2603:10b6:a03:39b::33) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.11 via Frontend
 Transport; Fri, 14 May 2021 15:33:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 88b2a99c-88a2-4b74-a687-4d6133737a7e
Subject: Re: [PATCH v2 3/4] usb: dbgp: Fix return values for reset prep and
 startup
To: Connor Davis <connojdavis@gmail.com>,
        Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>
Cc: Douglas Anderson <dianders@chromium.org>,
        "Eric W. Biederman" <ebiederm@xmission.com>,
        Chunfeng Yun <chunfeng.yun@mediatek.com>,
        Petr Mladek <pmladek@suse.com>, Sumit Garg <sumit.garg@linaro.org>,
        Lee Jones <lee.jones@linaro.org>, linux-usb@vger.kernel.org,
        linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org
References: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
 <0010a6165f3560f16123a142d297276e7d6c2087.1620952511.git.connojdavis@gmail.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <f5c8d1a5-84fa-19fc-14da-6bec1705cb5e@oracle.com>
Date: Fri, 14 May 2021 11:33:45 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <0010a6165f3560f16123a142d297276e7d6c2087.1620952511.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
MIME-Version: 1.0


On 5/13/21 8:56 PM, Connor Davis wrote:
> Callers of dbgp_reset_prep treat a 0 return value as "stop using
> the debug port", which means they don't make any subsequent calls to
> dbgp_reset_prep or dbgp_external_startup.
>
> To ensure the callers' interpretation is correct, first return -EPERM
> from xen_dbgp_op if !xen_initial_domain(). This ensures that
> both xen_dbgp_reset_prep and xen_dbgp_external_startup return 0
> iff the PHYSDEVOP_DBGP_RESET_{PREPARE,DONE} hypercalls succeed. Also
> update xen_dbgp_reset_prep and xen_dbgp_external_startup to return
> -EPERM when !CONFIG_XEN_DOM0 for consistency.
>
> Next, return nonzero from dbgp_reset_prep if xen_dbgp_reset_prep returns
> 0. The nonzero value ensures that callers of dbgp_reset_prep will
> subsequently call dbgp_external_startup when it is safe to do so.
>
> Also invert the return values from dbgp_external_startup for
> consistency with dbgp_reset_prep (this inversion has no functional
> change since no callers actually check the value).
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>


For Xen bits:


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>


For the rest, it seems to me that the error-code passing could be improved: if a function only ever returns 0 or 1, it should return bool. Alternatively, pass an actual error code, with zero meaning the no-error case, such as ...


> ---
>  drivers/usb/early/ehci-dbgp.c |  9 ++++++---
>  drivers/xen/dbgp.c            |  2 +-
>  include/linux/usb/ehci-dbgp.h | 14 +++++++++-----
>  3 files changed, 16 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/usb/early/ehci-dbgp.c b/drivers/usb/early/ehci-dbgp.c
> index 45b42d8f6453..ff993d330c01 100644
> --- a/drivers/usb/early/ehci-dbgp.c
> +++ b/drivers/usb/early/ehci-dbgp.c
> @@ -970,8 +970,8 @@ int dbgp_reset_prep(struct usb_hcd *hcd)
>  	int ret = xen_dbgp_reset_prep(hcd);
>  	u32 ctrl;
>  
> -	if (ret)
> -		return ret;
> +	if (!ret)
> +		return 1;


... here or ...


>  
>  	dbgp_not_safe = 1;
>  	if (!ehci_debug)
> @@ -995,7 +995,10 @@ EXPORT_SYMBOL_GPL(dbgp_reset_prep);
>  
>  int dbgp_external_startup(struct usb_hcd *hcd)
>  {
> -	return xen_dbgp_external_startup(hcd) ?: _dbgp_external_startup();
> +	if (!xen_dbgp_external_startup(hcd))
> +		return 1;


... here.
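Either convention suggested above could be sketched as follows. The helper
names and the wrapping of the hypercall result are hypothetical
illustrations, not the actual kernel functions being patched.

```c
#include <stdbool.h>

/* Option 1 (hypothetical): a predicate-style API returns bool, where
 * true means the caller should go on to call external startup. */
static bool dbgp_reset_prep_ok(int hypercall_rc)
{
    return hypercall_rc == 0;   /* 0 from the hypercall means success */
}

/* Option 2 (hypothetical): propagate a real error code, where 0 is
 * the no-error case and nonzero is an -Exxx error. */
static int dbgp_reset_prep_err(int hypercall_rc)
{
    return hypercall_rc;        /* caller tests "if (err)" */
}
```

Either way, the caller's test reads naturally ("if (ok)" or "if (err)"),
instead of the inverted 0-means-keep-going convention in the diff.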


-boris




From xen-devel-bounces@lists.xenproject.org Fri May 14 16:34:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 16:34:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127433.239526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhal9-0002Xx-Tr; Fri, 14 May 2021 16:33:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127433.239526; Fri, 14 May 2021 16:33:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhal9-0002Xq-Qr; Fri, 14 May 2021 16:33:55 +0000
Received: by outflank-mailman (input) for mailman id 127433;
 Fri, 14 May 2021 16:33:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhal8-0002Xk-V1
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 16:33:54 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhal7-0004ox-RB; Fri, 14 May 2021 16:33:53 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhal7-0003Og-LF; Fri, 14 May 2021 16:33:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH v2 1/2] tools/xenstore: move per connection read and write
 func hooks into a struct
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514115620.32731-1-jgross@suse.com>
 <20210514115620.32731-2-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7cdd7f43-3f3f-12e4-abf9-0e4d698a85b1@xen.org>
Date: Fri, 14 May 2021 17:33:51 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210514115620.32731-2-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 14/05/2021 12:56, Juergen Gross wrote:
> -struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
> +struct connection *new_connection(const struct interface_funcs *funcs);
>   struct connection *get_connection_by_id(unsigned int conn_id);
>   void check_store(void);
>   void corrupt(struct connection *conn, const char *fmt, ...);
> @@ -254,10 +256,7 @@ void finish_daemonize(void);
>   /* Open a pipe for signal handling */
>   void init_pipe(int reopen_log_pipe[2]);
>   
> -int writefd(struct connection *conn, const void *data, unsigned int len);
> -int readfd(struct connection *conn, void *data, unsigned int len);
> -
> -extern struct interface_funcs socket_funcs;
> +extern const struct interface_funcs socket_funcs;
Shouldn't this be protected with #ifdef NO_SOCKETS?

The rest looks good to me:

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 14 17:05:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 17:05:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127437.239536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhbG5-0005qM-HV; Fri, 14 May 2021 17:05:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127437.239536; Fri, 14 May 2021 17:05:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhbG5-0005qF-Ef; Fri, 14 May 2021 17:05:53 +0000
Received: by outflank-mailman (input) for mailman id 127437;
 Fri, 14 May 2021 17:05:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhbG4-0005q9-4j
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 17:05:52 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhbG2-0005MO-PQ; Fri, 14 May 2021 17:05:50 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhbG2-0001NP-J0; Fri, 14 May 2021 17:05:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH v2 2/2] tools/xenstore: simplify xenstored main loop
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514115620.32731-1-jgross@suse.com>
 <20210514115620.32731-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <24e89076-4440-a32e-f701-71957cc2a9e4@xen.org>
Date: Fri, 14 May 2021 18:05:48 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210514115620.32731-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 14/05/2021 12:56, Juergen Gross wrote:
> The main loop of xenstored is rather complicated due to different
> handling of socket and ring-page interfaces. Unify that handling by
> introducing interface type specific functions can_read() and
> can_write().
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
> V2:
> - split off function vector introduction (Julien Grall)
> ---
>   tools/xenstore/xenstored_core.c   | 77 +++++++++++++++----------------
>   tools/xenstore/xenstored_core.h   |  2 +
>   tools/xenstore/xenstored_domain.c |  2 +
>   3 files changed, 41 insertions(+), 40 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 856f518075..883a1a582a 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -1659,9 +1659,34 @@ static int readfd(struct connection *conn, void *data, unsigned int len)
>   	return rc;
>   }
>   
> +static bool socket_can_process(struct connection *conn, int mask)
> +{
> +	if (conn->pollfd_idx == -1)
> +		return false;
> +
> +	if (fds[conn->pollfd_idx].revents & ~(POLLIN | POLLOUT)) {
> +		talloc_free(conn);
> +		return false;
> +	}
> +
> +	return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
> +}
> +
> +static bool socket_can_write(struct connection *conn)
> +{
> +	return socket_can_process(conn, POLLOUT);
> +}
> +
> +static bool socket_can_read(struct connection *conn)
> +{
> +	return socket_can_process(conn, POLLIN);
> +}
> +
>   const struct interface_funcs socket_funcs = {
>   	.write = writefd,
>   	.read = readfd,
> +	.can_write = socket_can_write,
> +	.can_read = socket_can_read,
>   };
>   
>   static void accept_connection(int sock)
> @@ -2296,47 +2321,19 @@ int main(int argc, char *argv[])
>   			if (&next->list != &connections)
>   				talloc_increase_ref_count(next);
>   
> -			if (conn->domain) {
> -				if (domain_can_read(conn))
> -					handle_input(conn);
> -				if (talloc_free(conn) == 0)
> -					continue;
> -
> -				talloc_increase_ref_count(conn);
> -				if (domain_can_write(conn) &&
> -				    !list_empty(&conn->out_list))

AFAICT, the check "!list_empty(&conn->out_list)" can be safely removed 
because write_messages() already checks whether the list is empty 
(list_top() returns NULL in that case). Is that correct?
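The redundancy being pointed out can be seen in a generic sketch
(hypothetical list helpers, not the actual xenstored ones): when the
write path starts by fetching the head and bails out on NULL, a
caller-side emptiness check adds nothing.

```c
#include <stddef.h>

struct list_node { struct list_node *next; int payload; };
struct list_head { struct list_node *first; };

/* returns the first entry, or NULL when the list is empty */
static struct list_node *list_top(struct list_head *h)
{
    return h->first;
}

/* sketch of a write_messages()-style loop: the empty list is already
 * handled, because list_top() returns NULL and the loop never runs */
static int write_messages(struct list_head *h)
{
    int written = 0;
    struct list_node *n;

    while ((n = list_top(h)) != NULL) {
        written += n->payload;  /* "send" the message */
        h->first = n->next;     /* pop it off the list */
    }
    return written;
}
```

Calling write_messages() on an empty list is a no-op, so a preceding
list_empty() test in the caller is pure redundancy.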

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 14 17:07:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 17:07:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127443.239548 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhbHD-0006UE-2r; Fri, 14 May 2021 17:07:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127443.239548; Fri, 14 May 2021 17:07:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhbHC-0006U7-VE; Fri, 14 May 2021 17:07:02 +0000
Received: by outflank-mailman (input) for mailman id 127443;
 Fri, 14 May 2021 17:07:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhbHC-0006Tz-Fv
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 17:07:02 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhbH9-0005NK-IF; Fri, 14 May 2021 17:06:59 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhbH9-0001Rw-Bn; Fri, 14 May 2021 17:06:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH] tools/xenstore: cleanup Makefile and gitignore
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210514090116.21002-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a67f922a-935e-2b8b-dde6-2362ca3371c3@xen.org>
Date: Fri, 14 May 2021 18:06:56 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210514090116.21002-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 14/05/2021 10:01, Juergen Gross wrote:
> The Makefile of xenstore and related to that the global .gitignore
> file contain some leftovers from ancient times. Remove those.
> 
> While at it sort the tools/xenstore/* entries in .gitignore.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
>   .gitignore              | 7 +++----
>   tools/xenstore/Makefile | 2 +-
>   2 files changed, 4 insertions(+), 5 deletions(-)
> 
> diff --git a/.gitignore b/.gitignore
> index 1c2fa1530b..4aad2ddd65 100644
> --- a/.gitignore
> +++ b/.gitignore
> @@ -288,15 +288,15 @@ tools/xenpaging/xenpaging
>   tools/xenpmd/xenpmd
>   tools/xenstore/xenstore
>   tools/xenstore/xenstore-chmod
> +tools/xenstore/xenstore-control
>   tools/xenstore/xenstore-exists
>   tools/xenstore/xenstore-list
> +tools/xenstore/xenstore-ls
>   tools/xenstore/xenstore-read
>   tools/xenstore/xenstore-rm
> +tools/xenstore/xenstore-watch
>   tools/xenstore/xenstore-write
> -tools/xenstore/xenstore-control
> -tools/xenstore/xenstore-ls
>   tools/xenstore/xenstored
> -tools/xenstore/xenstored_test
>   tools/xenstore/xs_tdb_dump
>   tools/xentop/xentop
>   tools/xentrace/xentrace_setsize
> @@ -428,7 +428,6 @@ tools/firmware/etherboot/ipxe.tar.gz
>   tools/firmware/etherboot/ipxe/
>   tools/python/xen/lowlevel/xl/_pyxl_types.c
>   tools/python/xen/lowlevel/xl/_pyxl_types.h
> -tools/xenstore/xenstore-watch
>   tools/xl/_paths.h
>   tools/xl/xl
>   
> diff --git a/tools/xenstore/Makefile b/tools/xenstore/Makefile
> index 01c9ccc70f..292b478fa1 100644
> --- a/tools/xenstore/Makefile
> +++ b/tools/xenstore/Makefile
> @@ -90,7 +90,7 @@ xs_tdb_dump: xs_tdb_dump.o utils.o tdb.o talloc.o
>   .PHONY: clean
>   clean:
>   	rm -f *.a *.o xenstored_probes.h
> -	rm -f xenstored xs_random xs_stress xs_crashme
> +	rm -f xenstored
>   	rm -f xs_tdb_dump xenstore-control init-xenstore-domain
>   	rm -f xenstore $(CLIENTS)
>   	$(RM) $(DEPS_RM)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 14 17:53:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 17:53:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127473.239577 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhc0S-00045S-ED; Fri, 14 May 2021 17:53:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127473.239577; Fri, 14 May 2021 17:53:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhc0S-00045L-Ah; Fri, 14 May 2021 17:53:48 +0000
Received: by outflank-mailman (input) for mailman id 127473;
 Fri, 14 May 2021 17:53:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhc0R-00045B-C8; Fri, 14 May 2021 17:53:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhc0R-0006GB-1X; Fri, 14 May 2021 17:53:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhc0Q-0003Ym-Kl; Fri, 14 May 2021 17:53:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhc0Q-0000ie-KF; Fri, 14 May 2021 17:53:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hIT8yoW8kt1VZVQJ/dV8yOIHV6azliuuzPlgmKnhZSQ=; b=v21u/eSOBupSMkFRFchWxnM0zi
	cYOPGYWbJD6IKTJul3BPzN4+amzGcrEXw1q+Pp1zdUGzQbqbCSn5xOzXVJR+hzZVaNvHfa6z+jp2b
	k4wFMsKZOlq+wlUgAHFfNh4t+3qNMWNq7AjVv1Sdx2tyz4KsnofkzEPSmTFeYqUgmhVA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161946-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161946: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
X-Osstest-Versions-This:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
X-Osstest-Versions-That:
    xen=43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 17:53:46 +0000

flight 161946 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161946/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161926
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161926
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161926
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161926
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161926
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161926
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161926
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161926
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161926
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161926
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail starved in 161926

version targeted for testing:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34
baseline version:
 xen                  43d4cc7d36503bcc3aa2aa6ebea2b7912808f254

Last test of basis   161926  2021-05-13 03:59:53 Z    1 days
Testing same since   161939  2021-05-13 21:07:48 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   43d4cc7d36..cb199cc7de  cb199cc7de987cfda4659fccf51059f210f6ad34 -> master


From xen-devel-bounces@lists.xenproject.org Fri May 14 18:54:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 18:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127485.239606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcwt-0001jx-CR; Fri, 14 May 2021 18:54:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127485.239606; Fri, 14 May 2021 18:54:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcwt-0001jq-8G; Fri, 14 May 2021 18:54:11 +0000
Received: by outflank-mailman (input) for mailman id 127485;
 Fri, 14 May 2021 18:54:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhcwr-0001R5-ID
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 18:54:09 +0000
Received: from mail-il1-x130.google.com (unknown [2607:f8b0:4864:20::130])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 31d89682-3904-4638-b714-9e0170676e68;
 Fri, 14 May 2021 18:54:05 +0000 (UTC)
Received: by mail-il1-x130.google.com with SMTP id r5so625109ilb.2
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 11:54:05 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id d81sm2815190iof.26.2021.05.14.11.54.04
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 May 2021 11:54:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 31d89682-3904-4638-b714-9e0170676e68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=3ZyiRnb9WVtOr1miy5zSjmis6+BOGxMggG62SA/umH8=;
        b=eJGco2a1tomoj6gw/zyZjjfGUagfbHQEckqHj6YRKZQbiYMoK9bCjFDBqZgAztQktc
         Hkd6n5m8LtxCtw4q8UY7hrru5QsaZ8FHT32W5bsmJPHY+zIzmvHIaKwiLaSKCmMcehv1
         qcIMb1uUuImTwGfHO6QNlR/2dmbwJGj8c24PQ79p8xnUANJvdG3YskahyT3dEaQY/Xaw
         vafYkHMjhp3knJhThf1q92mxfWPKmudE+y1Vq7NNatoRZrKEYDbWB+IKQ1G9zsaQdRpN
         2UWS1tQ4C+xdpm0hcp2aEjNkrDK4tnLtk6MZ0K1F3CP2m6FONzQ3x3HQxSEh9rrOpcaK
         GSNg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=3ZyiRnb9WVtOr1miy5zSjmis6+BOGxMggG62SA/umH8=;
        b=jDxpHxMmIE5qwWUJ6KfkBG4cKXofC6qmArFR46RLOUf8jvNOLR3D8G2kbvNuqgWAnc
         6qpjW+wrREepsc+t/S6mNznyTe0rCvha5HraWxuIGXMGRQPDLuquYvir2e0nzMuzbKke
         UNSVy2PXRNQvO+QWkn9TAwt4QhfDBZMwCIHtA/l0uvphhUP6VM3qExM/fjVap/F4TbjE
         gTjOkLgn4aDDh1dOwJu1cdTeIUd9Pc84Nvd24WhntqJ7sZud65doWclQFgScKDGkFLIk
         JZw2o9+f34OUdNmPpkdK65Ii1cAreW8Z0h0sHe/Hdr+QwHN9PrsxXR0lccsPFGA16fTo
         cmUQ==
X-Gm-Message-State: AOAM533M2BKZLp0dNgvcFm63/9qM2VoQkQLWabXNPP+uHXe0/cGGSsQN
	gN5o4kyVVunduYbSf3/alrgZRU3wjF8Z7w==
X-Google-Smtp-Source: ABdhPJxORXVTkCb9xQ/n10JtvuniniP2lgIXzB4QiGFKmbPlpndKtHWiUIW2Wfn+lytSZAbzUlQEog==
X-Received: by 2002:a05:6e02:1545:: with SMTP id j5mr11038056ilu.35.1621018444806;
        Fri, 14 May 2021 11:54:04 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 1/5] xen/char: Default HAS_NS16550 to y only for X86 and ARM
Date: Fri, 14 May 2021 12:53:21 -0600
Message-Id: <3960a676376e0163d97ac02f968966cdfaccbf75.1621017334.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621017334.git.connojdavis@gmail.com>
References: <cover.1621017334.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defaulting to yes only for X86 and ARM reduces the set of drivers
required for a minimal build when porting Xen to new architectures.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/drivers/char/Kconfig | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..b15b0c8d6a 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -1,6 +1,6 @@
 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
-	default y
+	default y if (ARM || X86)
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
 
-- 
2.31.1
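[Editorial note: the effect of the one-line change above can be read directly from the resulting Kconfig entry. The fragment below is a sketch of the post-patch state with explanatory comments added; the comments reflect standard Kconfig semantics and are not part of the patch.]

```kconfig
config HAS_NS16550
	bool "NS16550 UART driver" if ARM    # prompt is only user-visible on ARM
	default y if (ARM || X86)            # was: "default y" on every architecture
	help
	  This selects the 16550-series UART support. For most systems, say Y.
```

With the old unconditional `default y`, any new architecture port inherited the NS16550 driver and had to either satisfy its build dependencies or explicitly override the default; with `default y if (ARM || X86)`, other architectures simply get `n` unless they opt in.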



From xen-devel-bounces@lists.xenproject.org Fri May 14 18:54:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 18:54:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127486.239617 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcwx-00023K-Le; Fri, 14 May 2021 18:54:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127486.239617; Fri, 14 May 2021 18:54:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcwx-000239-GP; Fri, 14 May 2021 18:54:15 +0000
Received: by outflank-mailman (input) for mailman id 127486;
 Fri, 14 May 2021 18:54:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhcww-0001R5-IA
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 18:54:14 +0000
Received: from mail-io1-xd33.google.com (unknown [2607:f8b0:4864:20::d33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e627cd2f-e916-46be-afea-f98f6f0457e1;
 Fri, 14 May 2021 18:54:06 +0000 (UTC)
Received: by mail-io1-xd33.google.com with SMTP id d24so19102570ios.2
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 11:54:06 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id d81sm2815190iof.26.2021.05.14.11.54.05
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 May 2021 11:54:05 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e627cd2f-e916-46be-afea-f98f6f0457e1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=iluLvhPa3OY2DopHKBkR7Xvis99pDwru79D3Q8FxEio=;
        b=ZqAPjMz81sQ6Mja0yWmqh4jyQlLNIZ6DIOXrf1dXTRs2Aesv50iF5gqFQcS4dqxtb9
         9DzeKF17eajqevL3qqo8FyrGEV3cm0PUl+td/fSDUZCWwDIzlTn3q8xAFAY9QlznRr3K
         O15sPo9MmEWSmk6DAaB8u8dNonz/ik14TIaUtRm8OL66VlsjR+UagJZOOUYXxYeeEEGW
         ct4kupLtSalUduJQyWWW8YmMG6vxx5vG6vFXpocP1BWiMt1I5o9MN5X6q7QXmwMHwSxG
         KiTO71g1zqbQXUm0juxkq7JJ6MLivyehc5flXDdLjrsK1fqVRro5pznnOoO+V/bC5dJm
         frIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=iluLvhPa3OY2DopHKBkR7Xvis99pDwru79D3Q8FxEio=;
        b=PlXX8wf0llHDacIErUL37NWlkI4UBEZOJdKElVWKPuMyugaV1ngYyngKQQmT7kZoCk
         4Mx7/Uk5uRi0WgDxantP+55Ta7QwNdgYEaPAYoe3WBRqomaZcReTJm59f+8Gy6K6ygPZ
         no+njcBN/ieA0aZpHz0xWSXXDWPOx6yKW1rhktyyX+Cv/wA6UqWqTSvEJW9ijGkiST4R
         9tvAmGN42J6R2MZ1FyuT4npkj4z1x6LJ556Z+EE+feTfu/Wzrndm7Kz4qCvF5zto3+Dm
         n7FvlM84SELv6IQmauPSDS5wBc2w+dCs4gQ6zinD/G0ZTaBPykj/esCAtrF6Zer3UKrp
         LcQQ==
X-Gm-Message-State: AOAM530NiuEEV2YG4VoTQAi3xXCXdUzOOKzx9EQjNOZqhkwgHburyF9f
	HrmcaIbGND7Cs0EkgrbgSmtM3+tE51cWBw==
X-Google-Smtp-Source: ABdhPJzSTRWnJ9GQzEmknI4HLkg5i5QpjvFzfYDGmQBZXIxyQH47g0uQBH3YteEv1TAPUbYAPf0zQw==
X-Received: by 2002:a05:6638:3ab:: with SMTP id z11mr44310166jap.58.1621018446053;
        Fri, 14 May 2021 11:54:06 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v3 2/5] xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
Date: Fri, 14 May 2021 12:53:22 -0600
Message-Id: <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621017334.git.connojdavis@gmail.com>
References: <cover.1621017334.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variables iommu_enabled and iommu_dont_flush_iotlb are defined in
drivers/passthrough/iommu.c and are referenced in common code, which
causes the link to fail when !CONFIG_HAS_PASSTHROUGH.

Guard references to these variables in common code so that Xen
builds when !CONFIG_HAS_PASSTHROUGH.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/common/memory.c     | 10 ++++++++++
 xen/include/xen/iommu.h |  8 +++++++-
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index b5c70c4b85..72a6b70cb5 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     p2m_type_t p2mt;
 #endif
     mfn_t mfn;
+#ifdef CONFIG_HAS_PASSTHROUGH
     bool *dont_flush_p, dont_flush;
+#endif
     int rc;
 
 #ifdef CONFIG_X86
@@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
      * Since we're likely to free the page below, we need to suspend
      * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
      */
+#ifdef CONFIG_HAS_PASSTHROUGH
     dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
     dont_flush = *dont_flush_p;
     *dont_flush_p = false;
+#endif
 
     rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     *dont_flush_p = dont_flush;
+#endif
 
     /*
      * With the lack of an IOMMU on some platforms, domains with DMA-capable
@@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
     xatp->gpfn += start;
     xatp->size -= start;
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     if ( is_iommu_enabled(d) )
     {
        this_cpu(iommu_dont_flush_iotlb) = 1;
        extra.ppage = &pages[0];
     }
+#endif
 
     while ( xatp->size > done )
     {
@@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         }
     }
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     if ( is_iommu_enabled(d) )
     {
         int ret;
@@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
+#endif
 
     return rc;
 }
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 460755df29..d878a93269 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -51,9 +51,15 @@ static inline bool_t dfn_eq(dfn_t x, dfn_t y)
     return dfn_x(x) == dfn_x(y);
 }
 
-extern bool_t iommu_enable, iommu_enabled;
+extern bool_t iommu_enable;
 extern bool force_iommu, iommu_quarantine, iommu_verbose;
 
+#ifdef CONFIG_HAS_PASSTHROUGH
+extern bool_t iommu_enabled;
+#else
+#define iommu_enabled false
+#endif
+
 #ifdef CONFIG_X86
 extern enum __packed iommu_intremap {
    /*
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 18:54:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 18:54:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127484.239595 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcwo-0001S3-0w; Fri, 14 May 2021 18:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127484.239595; Fri, 14 May 2021 18:54:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcwn-0001Rw-Sh; Fri, 14 May 2021 18:54:05 +0000
Received: by outflank-mailman (input) for mailman id 127484;
 Fri, 14 May 2021 18:54:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhcwm-0001R5-OS
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 18:54:04 +0000
Received: from mail-il1-x133.google.com (unknown [2607:f8b0:4864:20::133])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a9de6ac-36d1-4f37-9263-88d172cc9fb5;
 Fri, 14 May 2021 18:54:03 +0000 (UTC)
Received: by mail-il1-x133.google.com with SMTP id e14so574849ils.12
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 11:54:03 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id d81sm2815190iof.26.2021.05.14.11.54.02
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 May 2021 11:54:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a9de6ac-36d1-4f37-9263-88d172cc9fb5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=R/XW10cmwkhPazydDdamIao3szTmAhbyZrhX27c+PLs=;
        b=ifIY9XbKe4yoa6UXhdaMFtBWfGrjFeXFAR9zLOiY6v0OQIxbvJv1j1qB4RAQ0QdV4n
         SBRndWzcvTBuVMGld03AhSTC4lqgUNZep31NHvsOXU1pRQ1/VcAEBXhBNc5KOhk1qK/Y
         H3rAoZqWw6ncgZcfKkA+6e0i87u8KoWtGlOvxk9hfjPOakwl2JocT1vpIayanPXz9sGX
         RDB4jWM1/vINxEfMsk3lps/Vs1l59SZCVoJL2IhpOSk/rrNPmD/zIulnuL6o5NyzeAV/
         IQZ/a4U4ywhd7gPhvjREqUVR5iHhCIvWETrusGJk4EFnc8l/letOi9v1YmSssnvHmA8z
         23LA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=R/XW10cmwkhPazydDdamIao3szTmAhbyZrhX27c+PLs=;
        b=khV0P5ZaqCjhGBSYRuO2GLnLjZeBMrXzqKvJTnQlhTQXL0Xa7qPW/uR3BuOfueI+D2
         uoOWsidF2J9gaR2PwKBSJBa5e9HesC0pzPunSWcdVVfFA0kX+fbIFglYNrvT/O6mosOX
         +e+sduE3SyweuqoqEZkSPZUn3EiHLeqya4p08OSeK/aVgwJEIVOvTUel+/3dF0exsaZY
         ztAST1VGne0COYWHoGzUj8qfqSiO1o6iI4nlE2S5pTHXu/HfOJZqbM46bD59tiVdEGAY
         sj3o//Va79DSyAAs46mmCkbkHIaxIFvPT/EOl0S88vbu+bn0cKKBNZSTnI/vHAdMmDGM
         zgUA==
X-Gm-Message-State: AOAM533BomzboMkpQwR7dmIyrRGiYwef/KNAN9vW1zcYvT36tB/cYD1S
	dpaRAfryRQ638X9yinQ41Ii3q5Pt8fSw7w==
X-Google-Smtp-Source: ABdhPJzleuecYy7Ze+QcSJFHDhfQij/pvGegmijCQy8EoHk1R/E5zAvLVCSFrNbj7udveQGiqkPSpQ==
X-Received: by 2002:a05:6e02:ec3:: with SMTP id i3mr32479245ilk.250.1621018443134;
        Fri, 14 May 2021 11:54:03 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v3 0/5] Minimal build for RISCV
Date: Fri, 14 May 2021 12:53:20 -0600
Message-Id: <cover.1621017334.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

This series introduces a minimal build for RISCV. It is based on Bobby's
previous work from last year [0], rebased onto current Xen.

This series provides the patches necessary to get a minimal build
working. The build is "minimal" in the sense that it only supports
building TARGET=head.o; arch/riscv/head.S is just a simple while(1) loop.

The first 3 patches are mods to non-RISCV bits that enable building a
config with:

  !CONFIG_HAS_NS16550
  !CONFIG_HAS_PASSTHROUGH
  !CONFIG_GRANT_TABLE

respectively. The fourth patch adds the make/Kconfig boilerplate
alongside head.S and asm-riscv/config.h (head.S references the ENTRY
macro, which is defined in asm-riscv/config.h).

The last patch adds a Docker container for doing the build. To build from
within the container (after creating it locally), you can run the following:

  $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=head.o

--
Changes since v2:
  - Reduced number of riscv files added to ease review

Changes since v1:
  - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
    fixed for 4.15
  - Moved #ifdef-ary around iommu_enabled to iommu.h
  - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
    instead of defining an empty struct when !CONFIG_GRANT_TABLE

Connor Davis (5):
  xen/char: Default HAS_NS16550 to y only for X86 and ARM
  xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
  xen: Fix build when !CONFIG_GRANT_TABLE
  xen: Add files needed for minimal riscv build
  automation: Add container for riscv64 builds

 automation/build/archlinux/riscv64.dockerfile |  33 ++++++
 automation/scripts/containerize               |   1 +
 config/riscv64.mk                             |   5 +
 xen/Makefile                                  |   8 +-
 xen/arch/riscv/Kconfig                        |  52 +++++++++
 xen/arch/riscv/Kconfig.debug                  |   0
 xen/arch/riscv/Makefile                       |   0
 xen/arch/riscv/Rules.mk                       |   0
 xen/arch/riscv/arch.mk                        |  16 +++
 xen/arch/riscv/asm-offsets.c                  |   0
 xen/arch/riscv/configs/riscv64_defconfig      |  12 ++
 xen/arch/riscv/head.S                         |   6 +
 xen/common/memory.c                           |  10 ++
 xen/drivers/char/Kconfig                      |   2 +-
 xen/include/asm-riscv/config.h                | 110 ++++++++++++++++++
 xen/include/xen/grant_table.h                 |   3 +-
 xen/include/xen/iommu.h                       |   8 +-
 17 files changed, 261 insertions(+), 5 deletions(-)
 create mode 100644 automation/build/archlinux/riscv64.dockerfile
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/asm-offsets.c
 create mode 100644 xen/arch/riscv/configs/riscv64_defconfig
 create mode 100644 xen/arch/riscv/head.S
 create mode 100644 xen/include/asm-riscv/config.h

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 18:54:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 18:54:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127488.239628 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcx2-0002Rk-UG; Fri, 14 May 2021 18:54:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127488.239628; Fri, 14 May 2021 18:54:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcx2-0002Rd-QS; Fri, 14 May 2021 18:54:20 +0000
Received: by outflank-mailman (input) for mailman id 127488;
 Fri, 14 May 2021 18:54:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhcx1-0001R5-IN
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 18:54:19 +0000
Received: from mail-il1-x12e.google.com (unknown [2607:f8b0:4864:20::12e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ee43060e-79e4-409b-8d3d-c3c59584781e;
 Fri, 14 May 2021 18:54:07 +0000 (UTC)
Received: by mail-il1-x12e.google.com with SMTP id h11so584625ili.9
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 11:54:07 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id d81sm2815190iof.26.2021.05.14.11.54.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 May 2021 11:54:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee43060e-79e4-409b-8d3d-c3c59584781e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=aZUcLbAyYZr/mu1DuUlZ6kf0uoeQ0LVDzaRiHOP28GY=;
        b=P0jc2FBfxZxHsaAQlVSO1k1XLOGOTzpFWKon1uh24SlEm7rgvPi0dGmXN8P1vTKrYo
         JCoU/XWcVqogkukVPvDqyIQyBfwwPiPfIbS7n9Oc2QLx9OEbpbcLPuIEmwkW5/J9UCPj
         VMlgHhs/4fcQyURanvKpXtXLUY66Ll2KfjSJVUoO1iOOBC/qLHLsKOV6mcBtgdWeSIMY
         BywEHROGml2C1EKy8JNaStusibZw+VJt8b6Y0xRWSuqy0AM+6ORcRw9J/w3zTy/KdzDb
         61tuLiYKe4NjmqA+DnvL0q/v6cLTZ7w3NvHL8LtJ3dSJAubQFIkMXXAMP/3buFA0xNkB
         YqCA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=aZUcLbAyYZr/mu1DuUlZ6kf0uoeQ0LVDzaRiHOP28GY=;
        b=s9JSW9r8U6u01ulnO69RlmDt1hCgQFE7WKMU1X3RNkxzLw/Z7hrkobb6W5r83zJ4A+
         PhJEXz74fhgfvUBtxVJ6hbB53e6fKWAm6b3ppqSHUA3jawbBphzDmlgtSEtJJ947xTsw
         DQfBT7Y8rBxStIGLdKTOjtT2EZY/8zbyrHgdSPrefnTc8eecdU0GjYpP4cI6I8oiL6ER
         Em8HF+gYPkYcvIKvYou2J8UOSpvJ/+8bB9MAEjNwIrXoVWZnAW8HMM1WRzl8adDoA9+R
         ioxJC+4fL6GIhDsv1RZFJUaV+pc0CZcnmeCOO0HO0GYygbO4vEO4qJzXGrI2lyxbi+2Z
         4pjg==
X-Gm-Message-State: AOAM533fGlDmhCXotb1trhpGAIvX2rAV7DKmN4Z831vMZtwsu6t+yJQa
	wnw5ozPYBGl8G1revIwW2+tFk7EUDm/nkw==
X-Google-Smtp-Source: ABdhPJxbZQD1P9VmTwcBwo83IVE6LpSFxrPkfj+tOB1A88tB10+Ieq3hahvnTcEANQtwwQz06xYqvQ==
X-Received: by 2002:a05:6e02:1be8:: with SMTP id y8mr2239039ilv.52.1621018447314;
        Fri, 14 May 2021 11:54:07 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 3/5] xen: Fix build when !CONFIG_GRANT_TABLE
Date: Fri, 14 May 2021 12:53:23 -0600
Message-Id: <834f7995ae80a3b37b6d508d1c989b4ee391f61b.1621017334.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621017334.git.connojdavis@gmail.com>
References: <cover.1621017334.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the forward declaration of struct grant_table in grant_table.h
above the #ifdef CONFIG_GRANT_TABLE guard. This fixes the following
build error:

/build/xen/include/xen/grant_table.h:84:50: error: 'struct grant_table'
declared inside parameter list will not be visible outside of this
definition or declaration [-Werror]
   84 | static inline int mem_sharing_gref_to_gfn(struct grant_table *gt,
      |

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/include/xen/grant_table.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/xen/include/xen/grant_table.h b/xen/include/xen/grant_table.h
index 63b6dc78f4..9f8b7e66c1 100644
--- a/xen/include/xen/grant_table.h
+++ b/xen/include/xen/grant_table.h
@@ -28,9 +28,10 @@
 #include <public/grant_table.h>
 #include <asm/grant_table.h>
 
-#ifdef CONFIG_GRANT_TABLE
 struct grant_table;
 
+#ifdef CONFIG_GRANT_TABLE
+
 extern unsigned int opt_max_grant_frames;
 
 /* Create/destroy per-domain grant table context. */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 18:54:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 18:54:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127490.239639 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcx8-00032K-8V; Fri, 14 May 2021 18:54:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127490.239639; Fri, 14 May 2021 18:54:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcx8-000328-4h; Fri, 14 May 2021 18:54:26 +0000
Received: by outflank-mailman (input) for mailman id 127490;
 Fri, 14 May 2021 18:54:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhcx6-0001R5-IQ
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 18:54:24 +0000
Received: from mail-il1-x12b.google.com (unknown [2607:f8b0:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1cdc3d92-352d-4bb1-a4c9-206b3234064f;
 Fri, 14 May 2021 18:54:09 +0000 (UTC)
Received: by mail-il1-x12b.google.com with SMTP id p15so617921iln.3
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 11:54:09 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id d81sm2815190iof.26.2021.05.14.11.54.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 May 2021 11:54:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1cdc3d92-352d-4bb1-a4c9-206b3234064f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=zIAC1xDVnT0fSpVJ2YCItiTMjfHRx3z5TUS+XXT8py4=;
        b=SHx4dDseUJs48OcZHLFxbEPAs0NXzTtBpoMGJJh8LN9ACY41pUxnOiSaL2RlP9GLHf
         x9VJXeoYaq8ZgC//KRQqIRKWtm/gyH7kqTuMf/MT0YSo8FlTKgJJfUKuGPftvTdFyKP9
         qPy91+zeECVbVSQiqSX60ahLxwR21Y5sF53fPxIo3QWjONQ9xWvmWfBEuxarh2iwvL5z
         aodttAL7pj0rwHejpLmZaBBYU2O6+OvV5eHDpbvS22UTFHAFb4JwQduPkE5W+GUIKtPs
         g2e+GiGVWsmG0tFNVwG+7+ZN/X/W1fMmABiFM4B45RQOU8mAT11ert1NrEVNJdMSTzbs
         slGw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=zIAC1xDVnT0fSpVJ2YCItiTMjfHRx3z5TUS+XXT8py4=;
        b=QGOx4peHazxNy/deSuSmWf3GlHF4B2xFQ4z+xr79c9isFLVeOufgTqxSIx+KQnkasJ
         to0fY0C7ou2ZkWIucx+nDM5T1WVWJfIhn5mEirdeD6Ypj369H1bbYa6Lg3lnC2SDwXJQ
         Mlr2OQsVd/cBTfoqZVqYu8l4V874WNiS8YGD9EnAzEqjXIktgbYP2iGOGWDTkvcKoZwV
         yPC6nJyXiC3+dhoCVF1TOk3IgNan/xuK2Oxk4zOQ2+mSQusSQYa13i+PrATLXH+umHSw
         4exa+inglhmdpAg81dsmd/gME50CY9lirGZ6WK9bHYnncOWbQyHri1hEWjGMtlyIJOtA
         47vQ==
X-Gm-Message-State: AOAM53252RUeh8v/yuvJQZZSzNtit+cJ1Y9PduwxSCa050TnGODBV7at
	UCR+fy/9F8Qc2eRn1xjk4Xs5y2NOuukoPA==
X-Google-Smtp-Source: ABdhPJyE0Ge3YTrYbl06lpF4QsFq/fv7XLOHq2YIzGyA33haC3A+jAD9yRkfi1RApvOFQeKqy1FNSw==
X-Received: by 2002:a92:4a0a:: with SMTP id m10mr6156291ilf.118.1621018448468;
        Fri, 14 May 2021 11:54:08 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 4/5] xen: Add files needed for minimal riscv build
Date: Fri, 14 May 2021 12:53:24 -0600
Message-Id: <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621017334.git.connojdavis@gmail.com>
References: <cover.1621017334.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add arch-specific makefiles and configs needed to build for
riscv64. Also add a minimal head.S that is a simple infinite loop.
head.o can be built with

$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=head.o

No other TARGET is supported at the moment.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 config/riscv64.mk                        |   5 ++
 xen/Makefile                             |   8 +-
 xen/arch/riscv/Kconfig                   |  52 +++++++++++
 xen/arch/riscv/Kconfig.debug             |   0
 xen/arch/riscv/Makefile                  |   0
 xen/arch/riscv/Rules.mk                  |   0
 xen/arch/riscv/arch.mk                   |  16 ++++
 xen/arch/riscv/asm-offsets.c             |   0
 xen/arch/riscv/configs/riscv64_defconfig |  12 +++
 xen/arch/riscv/head.S                    |   6 ++
 xen/include/asm-riscv/config.h           | 110 +++++++++++++++++++++++
 11 files changed, 207 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv64.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/asm-offsets.c
 create mode 100644 xen/arch/riscv/configs/riscv64_defconfig
 create mode 100644 xen/arch/riscv/head.S
 create mode 100644 xen/include/asm-riscv/config.h

diff --git a/config/riscv64.mk b/config/riscv64.mk
new file mode 100644
index 0000000000..a5a21e5fa2
--- /dev/null
+++ b/config/riscv64.mk
@@ -0,0 +1,5 @@
+CONFIG_RISCV := y
+CONFIG_RISCV_64 := y
+CONFIG_RISCV_$(XEN_OS) := y
+
+CONFIG_XEN_INSTALL_SUFFIX :=
diff --git a/xen/Makefile b/xen/Makefile
index 9f3be7766d..60de4cc6cd 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -26,7 +26,9 @@ MAKEFLAGS += -rR
 EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
 
 ARCH=$(XEN_TARGET_ARCH)
-SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+SRCARCH=$(shell echo $(ARCH) | \
+	  sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+	      -e s'/riscv.*/riscv/g')
 
 # Don't break if the build process wasn't called from the top level
 # we need XEN_TARGET_ARCH to generate the proper config
@@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
 # Set ARCH/SUBARCH appropriately.
 export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+			        -e s'/riscv.*/riscv/g')
 
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
@@ -335,6 +338,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
+	$(MAKE) $(clean) arch/riscv
 	$(MAKE) $(clean) arch/x86
 	$(MAKE) $(clean) test
 	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
new file mode 100644
index 0000000000..d4bbd4294e
--- /dev/null
+++ b/xen/arch/riscv/Kconfig
@@ -0,0 +1,52 @@
+config 64BIT
+	bool
+
+config RISCV_64
+	bool
+	depends on 64BIT
+
+config RISCV
+	def_bool y
+
+config ARCH_DEFCONFIG
+	string
+	default "arch/riscv/configs/riscv64_defconfig" if RISCV_64
+
+menu "Architecture Features"
+
+source "arch/Kconfig"
+
+endmenu
+
+menu "ISA Selection"
+
+choice
+	prompt "Base ISA"
+        default RISCV_ISA_RV64IMA
+        help
+          This selects the base ISA extensions that Xen will target.
+
+config RISCV_ISA_RV64IMA
+	bool "RV64IMA"
+        select 64BIT
+        select RISCV_64
+        help
+           Use the RV64I base ISA, plus the "M" and "A" extensions
+           for integer multiply/divide and atomic instructions, respectively.
+
+endchoice
+
+config RISCV_ISA_C
+	bool "Compressed extension"
+        help
+           Add "C" to the ISA subsets that the toolchain is allowed
+           to emit when building Xen, which results in compressed
+           instructions in the Xen binary.
+
+           If unsure, say N.
+
+endmenu
+
+source "common/Kconfig"
+
+source "drivers/Kconfig"
diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
new file mode 100644
index 0000000000..10229c5440
--- /dev/null
+++ b/xen/arch/riscv/arch.mk
@@ -0,0 +1,16 @@
+########################################
+# RISCV-specific definitions
+
+ifeq ($(CONFIG_RISCV_64),y)
+    CFLAGS += -mabi=lp64
+endif
+
+riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
+riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
+
+# Note that -mcmodel=medany is used so that Xen can be mapped
+# into the upper half _or_ the lower half of the address space.
+# -mcmodel=medlow would force Xen into the lower half.
+
+CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+CFLAGS += -I$(BASEDIR)/include
diff --git a/xen/arch/riscv/asm-offsets.c b/xen/arch/riscv/asm-offsets.c
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/configs/riscv64_defconfig b/xen/arch/riscv/configs/riscv64_defconfig
new file mode 100644
index 0000000000..664a5d2378
--- /dev/null
+++ b/xen/arch/riscv/configs/riscv64_defconfig
@@ -0,0 +1,12 @@
+# CONFIG_SCHED_CREDIT is not set
+# CONFIG_SCHED_RTDS is not set
+# CONFIG_SCHED_NULL is not set
+# CONFIG_SCHED_ARINC653 is not set
+# CONFIG_TRACEBUFFER is not set
+# CONFIG_DEBUG is not set
+# CONFIG_DEBUG_INFO is not set
+# CONFIG_HYPFS is not set
+# CONFIG_GRANT_TABLE is not set
+# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
+
+CONFIG_EXPERT=y
diff --git a/xen/arch/riscv/head.S b/xen/arch/riscv/head.S
new file mode 100644
index 0000000000..0dbc27ba75
--- /dev/null
+++ b/xen/arch/riscv/head.S
@@ -0,0 +1,6 @@
+#include <asm/config.h>
+
+        .text
+
+ENTRY(start)
+        j  start
diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
new file mode 100644
index 0000000000..84cb436dc1
--- /dev/null
+++ b/xen/include/asm-riscv/config.h
@@ -0,0 +1,110 @@
+/******************************************************************************
+ * config.h
+ *
+ * A Linux-style configuration list.
+ */
+
+#ifndef __RISCV_CONFIG_H__
+#define __RISCV_CONFIG_H__
+
+#if defined(CONFIG_RISCV_64)
+# define LONG_BYTEORDER 3
+# define ELFSIZE 64
+#else
+# error "Unsupported RISCV variant"
+#endif
+
+#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
+#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
+#define POINTER_ALIGN  BYTES_PER_LONG
+
+#define BITS_PER_LLONG 64
+
+/* xen_ulong_t is always 64 bits */
+#define BITS_PER_XEN_ULONG 64
+
+#define CONFIG_RISCV 1
+#define CONFIG_RISCV_L1_CACHE_SHIFT 6
+
+#define CONFIG_PAGEALLOC_MAX_ORDER 18
+#define CONFIG_DOMU_MAX_ORDER      9
+#define CONFIG_HWDOM_MAX_ORDER     10
+
+#define OPT_CONSOLE_STR "dtuart"
+
+#ifdef CONFIG_RISCV_64
+#define MAX_VIRT_CPUS 128u
+#else
+#error "Unsupported RISCV variant"
+#endif
+
+#define INVALID_VCPU_ID MAX_VIRT_CPUS
+
+/* Linkage for RISCV */
+#ifdef __ASSEMBLY__
+#define ALIGN .align 2
+
+#define ENTRY(name)                                \
+  .globl name;                                     \
+  ALIGN;                                           \
+  name:
+#endif
+
+#include <xen/const.h>
+
+#ifdef CONFIG_RISCV_64
+
+/*
+ * RISC-V Layout:
+ *   0x0000000000000000 - 0x0000003fffffffff (256GB, L2 slots [0-255])
+ *     Unmapped
+ *   0x0000004000000000 - 0xffffffbfffffffff
+ *     Inaccessible: sv39 only supports 39-bit sign-extended VAs.
+ *   0xffffffc000000000 - 0xffffffc0001fffff (2MB, L2 slot [256])
+ *     Unmapped
+ *   0xffffffc000200000 - 0xffffffc0003fffff (2MB, L2 slot [256])
+ *     Xen text, data, bss
+ *   0xffffffc000400000 - 0xffffffc0005fffff (2MB, L2 slot [256])
+ *     Fixmap: special-purpose 4K mapping slots
+ *   0xffffffc000600000 - 0xffffffc0009fffff (4MB, L2 slot [256])
+ *     Early boot mapping of FDT
+ *   0xffffffc000a00000 - 0xffffffc000bfffff (2MB, L2 slot [256])
+ *     Early relocation address, used when relocating Xen and later
+ *     for livepatch vmap (if compiled in)
+ *   0xffffffc040000000 - 0xffffffc07fffffff (1GB, L2 slot [257])
+ *     VMAP: ioremap and early_ioremap
+ *   0xffffffc080000000 - 0xffffffc13fffffff (3GB, L2 slots [258..260])
+ *     Unmapped
+ *   0xffffffc140000000 - 0xffffffc1bfffffff (2GB, L2 slots [261..262])
+ *     Frametable: 48 bytes per page for 133GB of RAM
+ *   0xffffffc1c0000000 - 0xffffffe1bfffffff (128GB, L2 slots [263..390])
+ *     1:1 direct mapping of RAM
+ *   0xffffffe1c0000000 - 0xffffffffffffffff (121GB, L2 slots [391..511])
+ *     Unmapped
+ */
+
+#define L2_ENTRY_BITS  30
+#define L2_ENTRY_BYTES (_AC(1,UL) << L2_ENTRY_BITS)
+#define L2_ADDR(_slot)                                      \
+    (((_AC(_slot, UL) >> 8) * _AC(0xffffff8000000000,UL)) | \
+     (_AC(_slot, UL) << L2_ENTRY_BITS))
+
+#define XEN_VIRT_START         _AT(vaddr_t, L2_ADDR(256) + MB(2))
+#define HYPERVISOR_VIRT_START  XEN_VIRT_START
+
+#define FRAMETABLE_VIRT_START  _AT(vaddr_t, L2_ADDR(261))
+
+#endif /* CONFIG_RISCV_64 */
+
+#define STACK_ORDER            3
+#define STACK_SIZE             (PAGE_SIZE << STACK_ORDER)
+
+#endif /* __RISCV_CONFIG_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 18:54:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 18:54:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127494.239649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcxD-0003XE-Ic; Fri, 14 May 2021 18:54:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127494.239649; Fri, 14 May 2021 18:54:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhcxD-0003X7-E7; Fri, 14 May 2021 18:54:31 +0000
Received: by outflank-mailman (input) for mailman id 127494;
 Fri, 14 May 2021 18:54:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=CSkR=KJ=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lhcxB-0001R5-IY
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 18:54:29 +0000
Received: from mail-il1-x132.google.com (unknown [2607:f8b0:4864:20::132])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d41a3445-4e51-4fca-b306-9e1ab418212c;
 Fri, 14 May 2021 18:54:10 +0000 (UTC)
Received: by mail-il1-x132.google.com with SMTP id j20so581131ilo.10
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 11:54:10 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id d81sm2815190iof.26.2021.05.14.11.54.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 14 May 2021 11:54:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d41a3445-4e51-4fca-b306-9e1ab418212c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=dKx3kHJenx3C/CwPtUUrPl3dcyda2vxTcbUQPlrKK/I=;
        b=u+78FH0Vr3aWOovk/6ndFCwcIoD4RYH9MU6CT9zrTY7IgYS7bRgUP/VgaR98+UBOvF
         8SlAnc9z2PHpbvanxUea/ViSfW8w04eQ2laQdvzDIQBGah0W57NX9WwzJUzjD8HGJPzm
         b3POupSVCma7wnpV/aIQdm7dlFq1NF1y98qwbs1Q4NYo/iQx2TNYHh0GBDmiplI6Tcjr
         vVBVAsxMF/sXfn/dThp/VV+7SnaTKVVIXmJltiGQGZpr2nXQuh2MLqQbBd1T6OO9dUmn
         97ybZZHlQVXK4XyRjxmRkAnJ1qQt+jF0V/7NlX4p6J/XghYfACuA6bAk1+Z8BhbvZckQ
         0ZaA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=dKx3kHJenx3C/CwPtUUrPl3dcyda2vxTcbUQPlrKK/I=;
        b=ZGeMNVXgCpqAV4Qs1BFS+OPSL3FDnS2seeLuIyYQVpHnY+U4ap7W9AWJoburwlZ+EN
         opFc+HZTyJUc7z7hbz+w92DTobueCVhWjSdU+eNZ9ZgDBHtTDAZ2XWsXXPeFEbjlC2nC
         2KnE9x5gA1u9q2fJcoNHJ49OCQsDccJuupwDi61qVeZq8ovYa9nLWxgwIYX3k5+EoLbw
         RJ+ftCms3fVfhn4+/SD5zxtl3oiz93r7o8K1yCh0psos9mtza2j4Kpuu4YJVJENp7Jx8
         LFuCQT0ClL1vK/AlPn4IIkXyMu7LcVUt49VyVvwz4iRx7ipVDnYPKxHPgX8HS8RxtZbD
         pi9g==
X-Gm-Message-State: AOAM533f/6rhaFmbycdbmyfcZAGybdgnB7uhG4PJWFMDsX2g1haAQ5a/
	bWtIngVCTuqycr2j/wivDZzidCRSOmavmA==
X-Google-Smtp-Source: ABdhPJxtvP4iLSoWwduNkKcEriM6cSfd2KJ6iN8SoZ/MqzVNFqk2QN6vVu89CSHckxRX7gIVfd6Baw==
X-Received: by 2002:a92:c569:: with SMTP id b9mr28228572ilj.117.1621018449545;
        Fri, 14 May 2021 11:54:09 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v3 5/5] automation: Add container for riscv64 builds
Date: Fri, 14 May 2021 12:53:25 -0600
Message-Id: <5a78ff425e45588da5c97c68e94275b649346012.1621017334.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621017334.git.connojdavis@gmail.com>
References: <cover.1621017334.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a container for cross-compiling Xen to riscv64. It includes just the
cross-compiler and the packages necessary for building Xen itself
(packages for tools, stubdoms, etc. can be added later).

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 automation/build/archlinux/riscv64.dockerfile | 33 +++++++++++++++++++
 automation/scripts/containerize               |  1 +
 2 files changed, 34 insertions(+)
 create mode 100644 automation/build/archlinux/riscv64.dockerfile

diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/riscv64.dockerfile
new file mode 100644
index 0000000000..505b623c01
--- /dev/null
+++ b/automation/build/archlinux/riscv64.dockerfile
@@ -0,0 +1,33 @@
+FROM archlinux
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+# Packages needed for the build
+RUN pacman --noconfirm --needed -Syu \
+    base-devel \
+    gcc \
+    git
+
+# Packages needed for QEMU
+RUN pacman --noconfirm --needed -Syu \
+    pixman \
+    python \
+    sh
+
+# There is a regression in GDB that causes an assertion error
+# when setting breakpoints; use this pinned commit until it is fixed.
+RUN git clone --recursive -j$(nproc) --progress https://github.com/riscv/riscv-gnu-toolchain && \
+    cd riscv-gnu-toolchain/riscv-gdb && \
+    git checkout 1dd588507782591478882a891f64945af9e2b86c && \
+    cd  .. && \
+    ./configure --prefix=/opt/riscv && \
+    make linux -j$(nproc) && \
+    rm -R /riscv-gnu-toolchain
+
+# Add compiler path
+ENV PATH=/opt/riscv/bin/:${PATH}
+ENV CROSS_COMPILE=riscv64-unknown-linux-gnu-
+
+RUN useradd --create-home user
+USER user
+WORKDIR /build
diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index b7c81559fb..59edf0ba40 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -26,6 +26,7 @@ BASE="registry.gitlab.com/xen-project/xen"
 case "_${CONTAINER}" in
     _alpine) CONTAINER="${BASE}/alpine:3.12" ;;
     _archlinux|_arch) CONTAINER="${BASE}/archlinux:current" ;;
+    _riscv64) CONTAINER="${BASE}/archlinux:riscv64" ;;
     _centos7) CONTAINER="${BASE}/centos:7" ;;
     _centos72) CONTAINER="${BASE}/centos:7.2" ;;
     _fedora) CONTAINER="${BASE}/fedora:29";;
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:19:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:19:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127517.239660 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lheHT-0003hC-RE; Fri, 14 May 2021 20:19:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127517.239660; Fri, 14 May 2021 20:19:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lheHT-0003h5-OM; Fri, 14 May 2021 20:19:31 +0000
Received: by outflank-mailman (input) for mailman id 127517;
 Fri, 14 May 2021 20:19:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lheHS-0003gz-Hw
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:19:30 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lheHQ-0000OK-Sv; Fri, 14 May 2021 20:19:28 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lheHQ-0006fG-N6; Fri, 14 May 2021 20:19:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=RLS6JldehuGTIIhFGQNiOJtm+l36RFnIbCf+o7A02JQ=; b=2WVV2WnVrMJ4RsnrbjqHHyEaUl
	K9iUN/5ICFw0lFpMTnwY0p0kXteMUmF1ddhAnehcX90vncOG1Lo4I26DXKC05SbdbfKuJjrRJdQuM
	MJ1GgQN9hcxJxzuyurot8sXYSqg6gsj3418QRCCrXjQ/kYCSDBFWoqk17w+dGkTBR/vs=;
Subject: Re: [PATCH] tools/xenstore: claim resources when running as daemon
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514084133.18658-1-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1e38cce0-6960-ac21-b349-dac8551e23ed@xen.org>
Date: Fri, 14 May 2021 21:19:27 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210514084133.18658-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 14/05/2021 09:41, Juergen Gross wrote:
> Xenstored is absolutely mandatory for a Xen host and it can't be
> restarted, so being killed by OOM-killer in case of memory shortage is
> to be avoided.
> 
> Set /proc/$pid/oom_score_adj (if available) to -500 in order to allow
> xenstored to use large amounts of memory without being killed.
> 
> In order to support large numbers of domains the limit for open file
> descriptors might need to be raised. Each domain needs 2 file
> descriptors (one for the event channel and one for the xl per-domain
> daemon to connect to xenstored).

Hmmm... AFAICT there is only one file descriptor to handle all the event 
channels. Could you point out the code showing one event FD per domain?

> 
> Try to raise ulimit for open files to 65536. First the hard limit if
> needed, and then the soft limit.

I am not sure it is right to impose this limit on everyone. For 
instance, an admin may know that there will be no more than 100 domains 
on their system.

So the admin should be able to configure the limits. At this point, I 
think the two limits should be set by the initscript rather than by 
xenstored itself.

This would also avoid the problem where Xenstored is not allowed to 
modify its limit (see more below).
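For reference, the soft-limit selection the patch performs (whether done in
xenstored or in an initscript wrapper) reduces to a small pure clamp. The
helper below is purely illustrative, not code from the patch:

```c
#include <sys/resource.h>

/*
 * Illustrative helper (not in the patch): choose the soft limit given the
 * current hard limit, capping at the wanted value. An RLIM_INFINITY hard
 * limit allows the full wanted value; otherwise the hard limit wins when
 * it is lower.
 */
static rlim_t clamp_soft_limit(rlim_t hard, rlim_t wanted)
{
    return (hard == RLIM_INFINITY || hard > wanted) ? wanted : hard;
}
```

An initscript could apply the same logic via `ulimit -n` before starting the
daemon, which keeps the policy in the admin's hands.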

> 
> Signed-off-by: Juergen Gross <jgross@suse.com>
> ---
>   tools/xenstore/xenstored_core.c   |  2 ++
>   tools/xenstore/xenstored_core.h   |  3 ++
>   tools/xenstore/xenstored_minios.c |  4 +++
>   tools/xenstore/xenstored_posix.c  | 46 +++++++++++++++++++++++++++++++
>   4 files changed, 55 insertions(+)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index b66d119a98..964e693450 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -2243,6 +2243,8 @@ int main(int argc, char *argv[])
>   		xprintf = trace;
>   #endif
>   
> +	claim_resources();
> +
>   	signal(SIGHUP, trigger_reopen_log);
>   	if (tracefile)
>   		tracefile = talloc_strdup(NULL, tracefile);
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 1467270476..ac26973648 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -255,6 +255,9 @@ void daemonize(void);
>   /* Close stdin/stdout/stderr to complete daemonize */
>   void finish_daemonize(void);
>   
> +/* Set OOM-killer score and raise ulimit. */
> +void claim_resources(void);
> +
>   /* Open a pipe for signal handling */
>   void init_pipe(int reopen_log_pipe[2]);
>   
> diff --git a/tools/xenstore/xenstored_minios.c b/tools/xenstore/xenstored_minios.c
> index c94493e52a..df8ff580b0 100644
> --- a/tools/xenstore/xenstored_minios.c
> +++ b/tools/xenstore/xenstored_minios.c
> @@ -32,6 +32,10 @@ void finish_daemonize(void)
>   {
>   }
>   
> +void claim_resources(void)
> +{
> +}
> +
>   void init_pipe(int reopen_log_pipe[2])
>   {
>   	reopen_log_pipe[0] = -1;
> diff --git a/tools/xenstore/xenstored_posix.c b/tools/xenstore/xenstored_posix.c
> index 48c37ffe3e..0074fbd8b2 100644
> --- a/tools/xenstore/xenstored_posix.c
> +++ b/tools/xenstore/xenstored_posix.c
> @@ -22,6 +22,7 @@
>   #include <fcntl.h>
>   #include <stdlib.h>
>   #include <sys/mman.h>
> +#include <sys/resource.h>
>   
>   #include "utils.h"
>   #include "xenstored_core.h"
> @@ -87,6 +88,51 @@ void finish_daemonize(void)
>   	close(devnull);
>   }
>   
> +static void avoid_oom_killer(void)
> +{
> +	char path[32];
> +	char val[] = "-500";
> +	int fd;
> +
> +	snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", (int)getpid());

This looks Linux specific. How about other OSes?

> +
> +	fd = open(path, O_WRONLY);
> +	/* Do nothing if file doesn't exist. */

Your commit message leads one to think that we *must* configure the OOM 
score; if we cannot, then we should not continue. But the code here 
suggests it is optional. In fact...

> +	if (fd < 0)
> +		return;
> +	/* Ignore errors. */
> +	write(fd, val, sizeof(val));

... xenstored may not be allowed to modify its own parameters. So this 
would continue silently, without the admin necessarily knowing the limit 
wasn't applied.

> +	close(fd);
> +}
> +
> +/* Max. 32752 domains with 2 open files per domain, plus some spare. */
> +#define MAX_FILES 65536
> +static void raise_ulimit(void)
> +{
> +	struct rlimit rlim;
> +
> +	if (getrlimit(RLIMIT_NOFILE, &rlim))
> +		return;
> +	if (rlim.rlim_max != RLIM_INFINITY && rlim.rlim_max < MAX_FILES)
> +	{
> +		rlim.rlim_max = MAX_FILES;
> +		setrlimit(RLIMIT_NOFILE, &rlim);
> +	}
> +	if (getrlimit(RLIMIT_NOFILE, &rlim))
> +		return;
> +	if (rlim.rlim_max == RLIM_INFINITY || rlim.rlim_max > MAX_FILES)
> +		rlim.rlim_cur = MAX_FILES;
> +	else
> +		rlim.rlim_cur = rlim.rlim_max;
> +	setrlimit(RLIMIT_NOFILE, &rlim);
> +}
> +
> +void claim_resources(void)
> +{
> +	avoid_oom_killer();
> +	raise_ulimit();
> +}
> +
>   void init_pipe(int reopen_log_pipe[2])
>   {
>   	int flags;
> 
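To make the silent-failure concern above concrete, here is a minimal sketch
of an error-reporting variant. This is not the patch's code;
`set_oom_score_adj` is a hypothetical helper, and the path is a parameter
purely so the behaviour can be exercised outside /proc:

```c
#include <fcntl.h>
#include <string.h>
#include <unistd.h>

/*
 * Hypothetical error-reporting variant of the patch's avoid_oom_killer():
 * returns 0 only when the whole value was written, so the caller can log
 * when the adjustment was not applied (e.g. EACCES under a restrictive
 * security policy, as raised in the review).
 */
static int set_oom_score_adj(const char *path, const char *val)
{
    int fd = open(path, O_WRONLY);
    ssize_t written;

    if (fd < 0)
        return -1;
    written = write(fd, val, strlen(val));
    close(fd);
    return written == (ssize_t)strlen(val) ? 0 : -1;
}
```

A caller checking this return value could at least emit a warning to the log
instead of continuing silently.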

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 14 20:46:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:46:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127521.239671 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhehK-0006lJ-QL; Fri, 14 May 2021 20:46:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127521.239671; Fri, 14 May 2021 20:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhehK-0006lC-NR; Fri, 14 May 2021 20:46:14 +0000
Received: by outflank-mailman (input) for mailman id 127521;
 Fri, 14 May 2021 20:46:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lhehK-0006l6-2P
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:46:14 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ee04ce41-d9d4-4706-8495-47c9c4031e2c;
 Fri, 14 May 2021 20:46:13 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025154661522.5568518117393;
 Fri, 14 May 2021 13:45:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ee04ce41-d9d4-4706-8495-47c9c4031e2c
ARC-Seal: i=1; a=rsa-sha256; t=1621025158; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=ehKabzDPxLZDX2U4T3mPrx3ngawThjUlqyJXQtNuLFU4PBbH3CLYwftGAowUKR8Ly37HiMdB1R94vE0LpSTbaY4K8BXf0oC51KYC/A9SL1UR24W+vTVLVqs4CHSVoWN2UrUVcqoStIOgX5vRC2QX5USxv/EMxeDLTdARsgOHuxk=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025158; h=Content-Transfer-Encoding:Cc:Date:From:MIME-Version:Message-ID:Subject:To; 
	bh=Y7sXM4Hf3vHolzJWQE5uuSY38u+U13ssP/wqK949sMw=; 
	b=oEklJHLWrwpZPIgn5lY0gSEGe8SSEE2aTibbITS3+EJbQrgdY8S99S6ZbFBHzbN20ztyKWEH+fD9SVoIjrWmW8IDUlXTy9UsjqtaWoIMakQtVd0cuN7o9wexY8335fC5gLHyjkDPdnM4hATsP0iN2NJZ85lXKITPKIHEV6scM5A=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025158;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:MIME-Version:Content-Transfer-Encoding;
	bh=Y7sXM4Hf3vHolzJWQE5uuSY38u+U13ssP/wqK949sMw=;
	b=ApyvX3EhlNb4aJ890tpdbGGT54aagKoCIizWONhAqShGA9K7XCLen3OkChUklpQ3
	hLMG5nVrf1+jW6JGnLuMLpEWwrfsPyxX6eISeSf06T4TR1tHh5pyPmiyiizNurLgGsJ
	vdvHc7SNpVtMzTCwdJzCdB9mELnm0Rvifcshyw44=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 00/10] xsm: introducing domain roles
Date: Fri, 14 May 2021 16:54:27 -0400
Message-Id: <20210514205437.13661-1-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

During the hyperlaunch design sessions a request was made to come up with a
formal definition of the roles a domain is allowed to take on. In particular
the primary focus was to answer what the control domain is and what the
hardware domain is. A related comment came up during the discussion of PCI
passthrough and how it would work on a disaggregated platform, which was
being proposed as a primary use case for hyperlaunch. Based on these
concerns, the hyperlaunch team took a hard look at all the roles that are
explicitly defined in code, loosely defined in code, more conceptual in
nature, or that require a solution such as Flask.

The result is that a set of seven explicitly assignable domain roles and
three implied domain roles were identified and defined. While working out
how to provide for and enforce these roles, it became apparent that the
core XSM system exists in a pseudo "unsupported but supported" state:
turning XSM on or off does not turn the XSM hooks on or off, it only
determines whether the base "dummy policy module" is inlined into the XSM
hooks or whether the hooks are made available through the xsm_ops dispatch
structure.
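The two dispatch styles described above can be pictured with a tiny sketch.
All names here are illustrative stand-ins, not Xen's actual definitions:

```c
/* Illustrative only: the two hook-dispatch styles the cover letter describes. */
struct xsm_ops {
    int (*domctl_check)(unsigned int cmd);
};

/* The "dummy policy module": the default check, inlined when XSM is off. */
static int dummy_domctl_check(unsigned int cmd)
{
    (void)cmd;
    return 0; /* permit */
}

/* When XSM is on, the very same hook is reached through the dispatch
 * structure instead, so the hooks themselves are always present. */
static const struct xsm_ops ops = { .domctl_check = dummy_domctl_check };

static int xsm_domctl_check(unsigned int cmd)
{
    return ops.domctl_check(cmd);
}
```

The point of the series is that the hooks fire either way; only the routing
(inline default vs. xsm_ops indirection) differs.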

This patch set starts by converting the existing security controls to use
the identified domain roles. It then makes the domain roles the core
enforcement mechanism for XSM, merging the split state of existence into
an equivalent of its supported form. With XSM converted, the SILO policy
module is refactored to achieve its security goal as an extension of the
domain roles mechanism. The necessary adjustments are made to Flask and
the Kconfig system to support this work.

Due to the impact of this change, every effort was made to ensure the patch
set is bisectable and the features can be tested incrementally. This is an
RFC with limited building and testing completed against it, so one may find
build configurations and runtime configurations that do not work.

Daniel P. Smith (10):
  headers: introduce new default privilege model
  control domain: refactor is_control_domain
  xenstore: migrate to default privilege model
  xsm: convert rewrite privilege check function
  hardware domain: convert to domain roles
  xsm-roles: convert the dummy system to roles
  xsm-roles: adjusting core xsm
  xsm-silo: convert silo over to domain roles
  xsm-flask: clean up for domain roles conversion
  common/Kconfig: updating Kconfig for domain roles

 xen/arch/arm/dm.c                     |   2 +-
 xen/arch/arm/domctl.c                 |   6 +-
 xen/arch/arm/hvm.c                    |   2 +-
 xen/arch/arm/mm.c                     |   2 +-
 xen/arch/arm/platform_hypercall.c     |   2 +-
 xen/arch/x86/acpi/cpu_idle.c          |   3 +-
 xen/arch/x86/cpu/mcheck/mce.c         |   2 +-
 xen/arch/x86/cpu/mcheck/vmce.h        |   3 +-
 xen/arch/x86/cpu/vpmu.c               |   9 +-
 xen/arch/x86/crash.c                  |   2 +-
 xen/arch/x86/domctl.c                 |   8 +-
 xen/arch/x86/hvm/dm.c                 |   2 +-
 xen/arch/x86/hvm/hvm.c                |  12 +-
 xen/arch/x86/io_apic.c                |   9 +-
 xen/arch/x86/irq.c                    |   4 +-
 xen/arch/x86/mm.c                     |  22 +-
 xen/arch/x86/mm/mem_paging.c          |   2 +-
 xen/arch/x86/mm/mem_sharing.c         |   8 +-
 xen/arch/x86/mm/p2m.c                 |   2 +-
 xen/arch/x86/mm/paging.c              |   4 +-
 xen/arch/x86/mm/shadow/set.c          |   2 +-
 xen/arch/x86/msi.c                    |   6 +-
 xen/arch/x86/nmi.c                    |   3 +-
 xen/arch/x86/pci.c                    |   2 +-
 xen/arch/x86/physdev.c                |  16 +-
 xen/arch/x86/platform_hypercall.c     |  10 +-
 xen/arch/x86/pv/emul-priv-op.c        |   2 +-
 xen/arch/x86/setup.c                  |   3 +
 xen/arch/x86/sysctl.c                 |   4 +-
 xen/arch/x86/traps.c                  |   2 +-
 xen/arch/x86/x86_64/mm.c              |  11 +-
 xen/common/Kconfig                    |  14 +-
 xen/common/domain.c                   | 120 ++++-
 xen/common/domctl.c                   |  12 +-
 xen/common/event_channel.c            |  15 +-
 xen/common/grant_table.c              |  16 +-
 xen/common/hypfs.c                    |   2 +-
 xen/common/kernel.c                   |   2 +-
 xen/common/kexec.c                    |   4 +-
 xen/common/keyhandler.c               |   4 +-
 xen/common/mem_access.c               |   2 +-
 xen/common/memory.c                   |  16 +-
 xen/common/monitor.c                  |   2 +-
 xen/common/sched/core.c               |   6 +-
 xen/common/shutdown.c                 |  14 +-
 xen/common/sysctl.c                   |   8 +-
 xen/common/vm_event.c                 |   7 +-
 xen/common/xenoprof.c                 |   5 +-
 xen/drivers/char/console.c            |   2 +-
 xen/drivers/char/ns16550.c            |   3 +-
 xen/drivers/passthrough/device_tree.c |   4 +-
 xen/drivers/passthrough/pci.c         |  24 +-
 xen/drivers/passthrough/vtd/iommu.c   |   2 +-
 xen/include/xen/sched.h               |  30 +-
 xen/include/xsm/dummy.h               | 256 +++++-----
 xen/include/xsm/roles.h               |  70 +++
 xen/include/xsm/xsm.h                 | 710 +++++++++++++++++---------
 xen/xsm/Makefile                      |   3 +-
 xen/xsm/dummy.c                       | 160 ------
 xen/xsm/flask/flask_op.c              |   2 +-
 xen/xsm/silo.c                        |  22 +-
 xen/xsm/xsm_core.c                    |  46 +-
 62 files changed, 991 insertions(+), 759 deletions(-)
 create mode 100644 xen/include/xsm/roles.h
 delete mode 100644 xen/xsm/dummy.c

-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:46:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127523.239683 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhehn-0007HO-4g; Fri, 14 May 2021 20:46:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127523.239683; Fri, 14 May 2021 20:46:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhehn-0007HH-0Y; Fri, 14 May 2021 20:46:43 +0000
Received: by outflank-mailman (input) for mailman id 127523;
 Fri, 14 May 2021 20:46:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lhehm-0007Gy-6V
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:46:42 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4e4a224a-d718-485e-886a-cb1fda1194a5;
 Fri, 14 May 2021 20:46:41 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025158511259.7346088384993;
 Fri, 14 May 2021 13:45:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e4a224a-d718-485e-886a-cb1fda1194a5
ARC-Seal: i=1; a=rsa-sha256; t=1621025160; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=U7/ezs2SLRUXirSIGHz96sNUGoDN4smUP3Do/O6Sd6zqoWt5zE/QZPJ8G6rwxWkwMART86OK1ssNB7HwrV8O8W32p+VPqN/O9+Q0Joa+WSHqhp8at8+tCuRSv3KnT/Hpz8Qg/cx7FkIy42f0lVXWBILARQooolJzROuCXtqqfxQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025160; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=TEHXWnGRgz+Mdn2AlmOOOGtjzZhVm4N8sw0QqTaoWeQ=; 
	b=QpfyjjF1aLfywbcLwS2aD3lcuNEzK4jxdKwxR3bhtOCeyUXjVztVuSXWJFbOHebEGoGtAkiwxKdqpSm65e3/KJbAbRoYPgSg3XIRDhKWan7ucXpi2HMFWELwwB+pJFhORhfaiY6bj+MMvz/mX02du2ECyRJ6kfGY6BsoGZyrIUk=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025160;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=TEHXWnGRgz+Mdn2AlmOOOGtjzZhVm4N8sw0QqTaoWeQ=;
	b=EDFm+B95+jfimlSNaZ87WFX17Yl3Dn1YwDN87NbF03ZAdGJ9tQhci0nhAghYO4E7
	XduczyeaGNias9+ugjmgRqTvgfDVxs8qih6AIXsgMGl+jWWrRo0KkZrC033I8tBALEr
	F7AkZ7l8jIR9cF2StCr3Eq6gW+FBHCPmlUKnyCsk=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 01/10] headers: introduce new default privilege model
Date: Fri, 14 May 2021 16:54:28 -0400
Message-Id: <20210514205437.13661-2-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

This defines the new privilege roles that a domain may be assigned.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/include/xen/sched.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index cc633fdc07..9b2c277ede 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -457,6 +457,24 @@ struct domain
      */
     bool             creation_finished;
 
+    /* When neither SILO nor Flask is in use, a domain may have one or more
+     * roles that it is desired to fulfill. To accomplish these roles a set
+     * of privileges is required. A breakdown of the basic privileges is
+     * mapped to a bit field for assignment and verification.
+     */
+#define XSM_NONE      (1U<<0)  /* No role required to make the call */
+#define XSM_SELF      (1U<<1)  /* Allowed to make the call on self */
+#define XSM_TARGET    (1U<<2)  /* Allowed to make the call on a domain's target */
+#define XSM_PLAT_CTRL (1U<<3)  /* Platform Control: domain that controls the overall platform */
+#define XSM_DOM_BUILD (1U<<4)  /* Domain Builder: domain that does domain construction and destruction */
+#define XSM_DOM_SUPER (1U<<5)  /* Domain Supervisor: domain that controls the lifecycle of all domains */
+#define XSM_DEV_EMUL  (1U<<6)  /* Device Emulator: domain that provides its target domain's device emulator */
+#define XSM_DEV_BACK  (1U<<7)  /* Device Backend: domain that provides a device backend */
+#define XSM_HW_CTRL   (1U<<8)  /* Hardware Control: domain with physical hardware access and its allocation for domain usage */
+#define XSM_HW_SUPER  (1U<<9)  /* Hardware Supervisor: domain that controls allocated physical hardware */
+#define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
+    uint32_t         xsm_roles;
+
     /* Which guest this guest has privileges on */
     struct domain   *target;
 
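As a quick illustration of how a role bit field like the one in this patch
can be verified (the helper below is hypothetical and not part of the
patch; the role bits are reproduced here just for the sketch):

```c
#include <stdbool.h>
#include <stdint.h>

/* A few of the role bits from the patch, reproduced for this sketch. */
#define XSM_SELF      (1U << 1)
#define XSM_HW_CTRL   (1U << 8)
#define XSM_XENSTORE  (1U << 31)

/*
 * Hypothetical verification helper: a domain is permitted when its
 * assigned roles include every bit in the required mask.
 */
static bool xsm_has_roles(uint32_t xsm_roles, uint32_t required)
{
    return (xsm_roles & required) == required;
}
```

A hook implementation would then test the caller's `d->xsm_roles` against
the mask the operation demands.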
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:47:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:47:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127527.239694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lheiG-0007sj-D5; Fri, 14 May 2021 20:47:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127527.239694; Fri, 14 May 2021 20:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lheiG-0007sc-9l; Fri, 14 May 2021 20:47:12 +0000
Received: by outflank-mailman (input) for mailman id 127527;
 Fri, 14 May 2021 20:47:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lheiE-0007sK-FP
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:47:10 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 71ada7a0-8925-4181-85d8-6e9f193f8087;
 Fri, 14 May 2021 20:47:09 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025161143822.0352510861576;
 Fri, 14 May 2021 13:46:01 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 71ada7a0-8925-4181-85d8-6e9f193f8087
ARC-Seal: i=1; a=rsa-sha256; t=1621025162; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=JsZKcJD39Og/nefTLX+TvOb8CuRSrwy5xKtC5nMOOuJQ3FIprgsQzdlNbWUwEUhkOLIVba8MvBPkDaPFjWZV9CXOIKcnHI58pU+Lwp5DgrijIur0IvYjP6s3yYKq73JwAeU2EaZ3yDOCAD2DRZocIST1tI5vImDYDJAsL3H4gds=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025162; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=pRHPQ71We1MCZnJ3D/0BIGqhHZQGekLr/e7V1+0hc4U=; 
	b=RgmnKMbZibOUI5QLhhAhUDV0hzh0hJ0BxpoF5X8/7G4SK3Vy4FbCmryQh5TTsmW9sllub17dkarSS8DgCJrNaQ0OA+y53IQRvHyRkzksYyUK4U1agilS938k52JwSKl/AYr+SP2eo799ahGE7rs/rX2AKfAZNB+YuntaOTMJfD8=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025162;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=pRHPQ71We1MCZnJ3D/0BIGqhHZQGekLr/e7V1+0hc4U=;
	b=F+BBWFRoj4QkNVc7nuq5NyUsNZAWwYOn5SOgjvONyRbOSz8BDYkdKlAmWYfhQ53W
	pY5fOALfVNc72XJAhcede0oFHclahwe9LEoUouToxSO4U7ExgNc4FFfzRrjPmmCmTYZ
	Ams3yJ27ozCjoe+zSzAsUpcu27Z2MP3ObcTfvVoU=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 02/10] control domain: refactor is_control_domain
Date: Fri, 14 May 2021 16:54:29 -0400
Message-Id: <20210514205437.13661-3-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

Switch the is_control_domain() check to be backed by the new Domain
Control role rather than the is_privileged flag.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/common/domain.c     | 3 +++
 xen/include/xen/sched.h | 4 +++-
 2 files changed, 6 insertions(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index cdda0d1f29..26bba8666d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -556,6 +556,9 @@ struct domain *domain_create(domid_t domid,
     /* Sort out our idea of is_control_domain(). */
     d->is_privileged = is_priv;
 
+    if (is_priv)
+        d->xsm_roles = CLASSIC_DOM0_PRIVS;
+
     /* Sort out our idea of is_hardware_domain(). */
     if ( domid == 0 || domid == hardware_domid )
     {
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 9b2c277ede..66b79d9c9f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -473,6 +473,8 @@ struct domain
 #define XSM_HW_CTRL   (1U<<8)  /* Hardware Control: domain with physical hardware access and its allocation for domain usage */
 #define XSM_HW_SUPER  (1U<<9)  /* Hardware Supervisor: domain that control allocated physical hardware */
 #define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
+#define CLASSIC_DOM0_PRIVS (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | \
+		XSM_DEV_EMUL | XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)
     uint32_t         xsm_roles;
 
     /* Which guest this guest has privileges on */
@@ -1049,7 +1051,7 @@ static always_inline bool is_control_domain(const struct domain *d)
     if ( IS_ENABLED(CONFIG_PV_SHIM_EXCLUSIVE) )
         return false;
 
-    return evaluate_nospec(d->is_privileged);
+    return evaluate_nospec(d->xsm_roles & XSM_DOM_SUPER);
 }
 
 #define VM_ASSIST(d, t) (test_bit(VMASST_TYPE_ ## t, &(d)->vm_assist))
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:47:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:47:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127530.239705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lheim-0008UO-LX; Fri, 14 May 2021 20:47:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127530.239705; Fri, 14 May 2021 20:47:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lheim-0008UH-IH; Fri, 14 May 2021 20:47:44 +0000
Received: by outflank-mailman (input) for mailman id 127530;
 Fri, 14 May 2021 20:47:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lheik-0008T4-H0
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:47:42 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d74994f-f036-4af5-8adc-4ad68a9277b9;
 Fri, 14 May 2021 20:47:41 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025163659110.00355228974297;
 Fri, 14 May 2021 13:46:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d74994f-f036-4af5-8adc-4ad68a9277b9
ARC-Seal: i=1; a=rsa-sha256; t=1621025166; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=NuKcIjf2Mse2sjH2moVr61XODOZU9hZ7ybaTOSiqn2u76cuw3cj+6NdbQfhZnHK8/B5OnWdyI8Q0Meb8MCAh9M+Onsy0tueL9kPELhO34aQchcTbhshihAlc2dzLPJr+xCIR54ztO8mimxA7LIYF/GPwhGTSwdX0oul7/QVtVg4=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025166; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=u4HQkexSnVzuR1YPZlazV5uqbJ/6zfhWNSE1eH1HBJ8=; 
	b=BRVhxSFJ1+V6DgKq770g0BJXXkUdv9m3JhIaFrmfOJlY5tRJ15LJ2weFAsRlRcm2yPNR1l1/SksWU0JqGguYe2PmVo8+DTKq7TWfibFnZH87ISyqWgOHer+z5lavaSl0/3il9QQ1tw6g/HOvg8PvoBVctbBgduST7uV5dmNBMeM=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025166;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=u4HQkexSnVzuR1YPZlazV5uqbJ/6zfhWNSE1eH1HBJ8=;
	b=QshMINoCesaBByUUc7RYJF2XUhdDIPtOsNwd4aL3NI1Lpp0BjLYDfpiA/22STJ8Z
	5jL06EIbeVtCvbqZubV/lskZ2rgYpwCYJ8j3nxOruREH2qhe+rHYvPChnBQSYkCg1nd
	qavHwxoZCaXrjBoqwPF2znlwyQGrYu5/C1W2Uz3g=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 03/10] xenstore: migrate to default privilege model
Date: Fri, 14 May 2021 16:54:30 -0400
Message-Id: <20210514205437.13661-4-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

Switch the is_xenstore_domain() check to be backed by the Xenstore
Domain role rather than the XEN_DOMCTL_CDF_xs_domain creation flag.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/common/domain.c     | 3 +++
 xen/include/xen/sched.h | 2 +-
 2 files changed, 4 insertions(+), 1 deletion(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 26bba8666d..1f2c569e5d 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -551,6 +551,9 @@ struct domain *domain_create(domid_t domid,
     {
         d->options = config->flags;
         d->vmtrace_size = config->vmtrace_size;
+
+        if (config->flags & XEN_DOMCTL_CDF_xs_domain)
+            d->xsm_roles = XSM_XENSTORE;
     }
 
     /* Sort out our idea of is_control_domain(). */
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 66b79d9c9f..9a88e5b00f 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1129,7 +1129,7 @@ static inline bool is_vcpu_online(const struct vcpu *v)
 
 static inline bool is_xenstore_domain(const struct domain *d)
 {
-    return d->options & XEN_DOMCTL_CDF_xs_domain;
+    return d->xsm_roles & XSM_XENSTORE;
 }
 
 static always_inline bool is_iommu_enabled(const struct domain *d)
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:49:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:49:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127536.239716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhekp-0000rA-7z; Fri, 14 May 2021 20:49:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127536.239716; Fri, 14 May 2021 20:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhekp-0000r3-42; Fri, 14 May 2021 20:49:51 +0000
Received: by outflank-mailman (input) for mailman id 127536;
 Fri, 14 May 2021 20:49:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lheko-0000qv-Ft
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:49:50 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a330b98-4568-4f8a-990d-244c0f4f2527;
 Fri, 14 May 2021 20:49:49 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025166577979.807324518762;
 Fri, 14 May 2021 13:46:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a330b98-4568-4f8a-990d-244c0f4f2527
ARC-Seal: i=1; a=rsa-sha256; t=1621025168; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=DqquiB0vcshufhX81dYR856SpqjmGXYTkT5hRNpUvFQ3pA/2BrgcXVR9JRXUIDwAwapBXIaV4wxym6Fflcy8XdCLw3QOpS7rL+KllWFEYFVcCDz7vnEC13Nt2f6nVTbVzYYprsA1MSlMOUBjxHDrSZk/2h++HtU1mUWTGam30yQ=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025168; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=+YSQvju142df7o/Gs9B5LinHIqbuYDA8GAjRaF5brhs=; 
	b=nKD/UcxBpJFTnyjN/pvu106KfhlXt60tTJG+wWGFjAdi/RYsZxD6TRNzC9YkCcyHQkVAgwRM5HRoAGwkRgMMVEPgTOX75Y35hao8cnh0qWQFb4avnkr05Jj/j9gjhLEXQVaRjmYrZ7dN4xIakaM4Lez0RmfYWZmfvDBrzSx0z2Y=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025168;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=+YSQvju142df7o/Gs9B5LinHIqbuYDA8GAjRaF5brhs=;
	b=ILb0cWCBrYqzAtyEV8JbEC6XUl5hDbSuLB4rAWlO7S3V7lyEAfWTFPcTJHlO2rbc
	GfKZyoG3Xl9oTrdsUy9b4OeIM2WMhJKJd1O+ujM0Jp/Ibjx00GRMGf2oAe9jcVDGxQj
	9Zs7rb5AuurdXSWQU7ZPlimXFpMSWejDKGVxmL1Y=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 04/10] xsm: rewrite privilege check functions
Date: Fri, 14 May 2021 16:54:31 -0400
Message-Id: <20210514205437.13661-5-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

Convert the previous XSM dummy hook checks to use the equivalent domain
role privileges.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/arch/arm/dm.c                     |   2 +-
 xen/arch/arm/domctl.c                 |   6 +-
 xen/arch/arm/hvm.c                    |   2 +-
 xen/arch/arm/mm.c                     |   2 +-
 xen/arch/arm/platform_hypercall.c     |   2 +-
 xen/arch/x86/cpu/mcheck/mce.c         |   2 +-
 xen/arch/x86/cpu/vpmu.c               |   2 +-
 xen/arch/x86/domctl.c                 |   8 +-
 xen/arch/x86/hvm/dm.c                 |   2 +-
 xen/arch/x86/hvm/hvm.c                |  12 +-
 xen/arch/x86/irq.c                    |   4 +-
 xen/arch/x86/mm.c                     |  20 +-
 xen/arch/x86/mm/mem_paging.c          |   2 +-
 xen/arch/x86/mm/mem_sharing.c         |   8 +-
 xen/arch/x86/mm/p2m.c                 |   2 +-
 xen/arch/x86/mm/paging.c              |   4 +-
 xen/arch/x86/mm/shadow/set.c          |   2 +-
 xen/arch/x86/msi.c                    |   2 +-
 xen/arch/x86/pci.c                    |   2 +-
 xen/arch/x86/physdev.c                |  16 +-
 xen/arch/x86/platform_hypercall.c     |  10 +-
 xen/arch/x86/pv/emul-priv-op.c        |   2 +-
 xen/arch/x86/sysctl.c                 |   4 +-
 xen/common/domain.c                   |   4 +-
 xen/common/domctl.c                   |  12 +-
 xen/common/event_channel.c            |  12 +-
 xen/common/grant_table.c              |  16 +-
 xen/common/hypfs.c                    |   2 +-
 xen/common/kernel.c                   |   2 +-
 xen/common/kexec.c                    |   2 +-
 xen/common/mem_access.c               |   2 +-
 xen/common/memory.c                   |  16 +-
 xen/common/monitor.c                  |   2 +-
 xen/common/sched/core.c               |   6 +-
 xen/common/sysctl.c                   |   8 +-
 xen/common/vm_event.c                 |   2 +-
 xen/common/xenoprof.c                 |   2 +-
 xen/drivers/char/console.c            |   2 +-
 xen/drivers/passthrough/device_tree.c |   4 +-
 xen/drivers/passthrough/pci.c         |  12 +-
 xen/include/xen/sched.h               |   6 +
 xen/include/xsm/dummy.h               | 256 ++++++++++++++------------
 xen/include/xsm/xsm.h                 |  13 +-
 43 files changed, 253 insertions(+), 246 deletions(-)

diff --git a/xen/arch/arm/dm.c b/xen/arch/arm/dm.c
index 1b3fd6bc7d..7bc2ec42f6 100644
--- a/xen/arch/arm/dm.c
+++ b/xen/arch/arm/dm.c
@@ -45,7 +45,7 @@ int dm_op(const struct dmop_args *op_args)
     if ( rc )
         return rc;
 
-    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    rc = xsm_dm_op(DEV_EMU_PRIVS, d);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/arm/domctl.c b/xen/arch/arm/domctl.c
index b7d27f37df..fff8829b9b 100644
--- a/xen/arch/arm/domctl.c
+++ b/xen/arch/arm/domctl.c
@@ -95,11 +95,11 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
          * done by the 2 hypercalls for consistency with other
          * architectures.
          */
-        rc = xsm_map_domain_irq(XSM_HOOK, d, irq, NULL);
+        rc = xsm_map_domain_irq(XSM_NONE, d, irq, NULL);
         if ( rc )
             return rc;
 
-        rc = xsm_bind_pt_irq(XSM_HOOK, d, bind);
+        rc = xsm_bind_pt_irq(XSM_NONE, d, bind);
         if ( rc )
             return rc;
 
@@ -130,7 +130,7 @@ long arch_do_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( irq != virq )
             return -EINVAL;
 
-        rc = xsm_unbind_pt_irq(XSM_HOOK, d, bind);
+        rc = xsm_unbind_pt_irq(XSM_NONE, d, bind);
         if ( rc )
             return rc;
 
diff --git a/xen/arch/arm/hvm.c b/xen/arch/arm/hvm.c
index 8951b34086..ec84077988 100644
--- a/xen/arch/arm/hvm.c
+++ b/xen/arch/arm/hvm.c
@@ -101,7 +101,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_hvm_param(XSM_TARGET, d, op);
+        rc = xsm_hvm_param(TARGET_PRIVS, d, op);
         if ( rc )
             goto param_fail;
 
diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
index 59f8a3f15f..7e88d9b1c7 100644
--- a/xen/arch/arm/mm.c
+++ b/xen/arch/arm/mm.c
@@ -1446,7 +1446,7 @@ int xenmem_add_to_physmap_one(
             return -EINVAL;
         }
 
-        rc = xsm_map_gmfn_foreign(XSM_TARGET, d, od);
+        rc = xsm_map_gmfn_foreign(TARGET_PRIVS, d, od);
         if ( rc )
         {
             put_pg_owner(od);
diff --git a/xen/arch/arm/platform_hypercall.c b/xen/arch/arm/platform_hypercall.c
index 8efac7ee60..4913f65e13 100644
--- a/xen/arch/arm/platform_hypercall.c
+++ b/xen/arch/arm/platform_hypercall.c
@@ -33,7 +33,7 @@ long do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     if ( d == NULL )
         return -ESRCH;
 
-    ret = xsm_platform_op(XSM_PRIV, op->cmd);
+    ret = xsm_platform_op(XSM_PLAT_CTRL, op->cmd);
     if ( ret )
         return ret;
 
diff --git a/xen/arch/x86/cpu/mcheck/mce.c b/xen/arch/x86/cpu/mcheck/mce.c
index 7f433343bc..f6ce05cba9 100644
--- a/xen/arch/x86/cpu/mcheck/mce.c
+++ b/xen/arch/x86/cpu/mcheck/mce.c
@@ -1376,7 +1376,7 @@ long do_mca(XEN_GUEST_HANDLE_PARAM(xen_mc_t) u_xen_mc)
     struct xen_mc_msrinject *mc_msrinject;
     struct xen_mc_mceinject *mc_mceinject;
 
-    ret = xsm_do_mca(XSM_PRIV);
+    ret = xsm_do_mca(XSM_PLAT_CTRL);
     if ( ret )
         return x86_mcerr("", ret);
 
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index d8659c63f8..612b87526b 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -706,7 +706,7 @@ long do_xenpmu_op(unsigned int op, XEN_GUEST_HANDLE_PARAM(xen_pmu_params_t) arg)
     if ( !opt_vpmu_enabled || has_vlapic(current->domain) )
         return -EOPNOTSUPP;
 
-    ret = xsm_pmu_op(XSM_OTHER, current->domain, op);
+    ret = xsm_pmu_op(XSM_NONE | XSM_DOM_SUPER, current->domain, op);
     if ( ret )
         return ret;
 
diff --git a/xen/arch/x86/domctl.c b/xen/arch/x86/domctl.c
index e440bd021e..5cbe55a700 100644
--- a/xen/arch/x86/domctl.c
+++ b/xen/arch/x86/domctl.c
@@ -234,7 +234,7 @@ long arch_do_domctl(
         if ( (fp + np) <= fp || (fp + np) > MAX_IOPORTS )
             ret = -EINVAL;
         else if ( !ioports_access_permitted(currd, fp, fp + np - 1) ||
-                  xsm_ioport_permission(XSM_HOOK, d, fp, fp + np - 1, allow) )
+                  xsm_ioport_permission(XSM_NONE, d, fp, fp + np - 1, allow) )
             ret = -EPERM;
         else if ( allow )
             ret = ioports_permit_access(d, fp, fp + np - 1);
@@ -534,7 +534,7 @@ long arch_do_domctl(
         if ( !is_hvm_domain(d) )
             break;
 
-        ret = xsm_bind_pt_irq(XSM_HOOK, d, bind);
+        ret = xsm_bind_pt_irq(XSM_NONE, d, bind);
         if ( ret )
             break;
 
@@ -569,7 +569,7 @@ long arch_do_domctl(
         if ( irq <= 0 || !irq_access_permitted(currd, irq) )
             break;
 
-        ret = xsm_unbind_pt_irq(XSM_HOOK, d, bind);
+        ret = xsm_unbind_pt_irq(XSM_NONE, d, bind);
         if ( ret )
             break;
 
@@ -616,7 +616,7 @@ long arch_do_domctl(
         if ( !ioports_access_permitted(currd, fmp, fmp + np - 1) )
             break;
 
-        ret = xsm_ioport_mapping(XSM_HOOK, d, fmp, fmp + np - 1, add);
+        ret = xsm_ioport_mapping(XSM_NONE, d, fmp, fmp + np - 1, add);
         if ( ret )
             break;
 
diff --git a/xen/arch/x86/hvm/dm.c b/xen/arch/x86/hvm/dm.c
index b60b9f3364..bc452b551e 100644
--- a/xen/arch/x86/hvm/dm.c
+++ b/xen/arch/x86/hvm/dm.c
@@ -370,7 +370,7 @@ int dm_op(const struct dmop_args *op_args)
     if ( !is_hvm_domain(d) )
         goto out;
 
-    rc = xsm_dm_op(XSM_DM_PRIV, d);
+    rc = xsm_dm_op(DEV_EMU_PRIVS, d);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
index ae37bc434a..7e9c624037 100644
--- a/xen/arch/x86/hvm/hvm.c
+++ b/xen/arch/x86/hvm/hvm.c
@@ -4064,7 +4064,7 @@ static int hvm_allow_set_param(struct domain *d,
     uint64_t value;
     int rc;
 
-    rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_set_param);
+    rc = xsm_hvm_param(TARGET_PRIVS, d, HVMOP_set_param);
     if ( rc )
         return rc;
 
@@ -4211,7 +4211,7 @@ static int hvm_set_param(struct domain *d, uint32_t index, uint64_t value)
         rc = pmtimer_change_ioport(d, value);
         break;
     case HVM_PARAM_ALTP2M:
-        rc = xsm_hvm_param_altp2mhvm(XSM_PRIV, d);
+        rc = xsm_hvm_param_altp2mhvm(XSM_DOM_SUPER, d);
         if ( rc )
             break;
         if ( (value > XEN_ALTP2M_limited) ||
@@ -4340,7 +4340,7 @@ static int hvm_allow_get_param(struct domain *d,
 {
     int rc;
 
-    rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_get_param);
+    rc = xsm_hvm_param(TARGET_PRIVS, d, HVMOP_get_param);
     if ( rc )
         return rc;
 
@@ -4550,7 +4550,7 @@ static int do_altp2m_op(
         goto out;
     }
 
-    if ( (rc = xsm_hvm_altp2mhvm_op(XSM_OTHER, d, mode, a.cmd)) )
+    if ( (rc = xsm_hvm_altp2mhvm_op(TARGET_PRIVS | DEV_EMU_PRIVS, d, mode, a.cmd)) )
         goto out;
 
     switch ( a.cmd )
@@ -4931,7 +4931,7 @@ static int hvmop_get_mem_type(
     if ( d == NULL )
         return -ESRCH;
 
-    rc = xsm_hvm_param(XSM_TARGET, d, HVMOP_get_mem_type);
+    rc = xsm_hvm_param(TARGET_PRIVS, d, HVMOP_get_mem_type);
     if ( rc )
         goto out;
 
@@ -5024,7 +5024,7 @@ long do_hvm_op(unsigned long op, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( unlikely(d != current->domain) )
             rc = -EOPNOTSUPP;
         else if ( is_hvm_domain(d) && paging_mode_shadow(d) )
-            rc = xsm_hvm_param(XSM_TARGET, d, op);
+            rc = xsm_hvm_param(TARGET_PRIVS, d, op);
         if ( !rc )
             pagetable_dying(a.gpa);
 
diff --git a/xen/arch/x86/irq.c b/xen/arch/x86/irq.c
index a1693f92dd..cff7cb11cd 100644
--- a/xen/arch/x86/irq.c
+++ b/xen/arch/x86/irq.c
@@ -2122,7 +2122,7 @@ int map_domain_pirq(
         return 0;
     }
 
-    ret = xsm_map_domain_irq(XSM_HOOK, d, irq, data);
+    ret = xsm_map_domain_irq(XSM_NONE, d, irq, data);
     if ( ret )
     {
         dprintk(XENLOG_G_ERR, "dom%d: could not permit access to irq %d mapping to pirq %d\n",
@@ -2342,7 +2342,7 @@ int unmap_domain_pirq(struct domain *d, int pirq)
         nr = msi_desc->msi.nvec;
     }
 
-    ret = xsm_unmap_domain_irq(XSM_HOOK, d, irq,
+    ret = xsm_unmap_domain_irq(XSM_NONE, d, irq,
                                msi_desc ? msi_desc->dev : NULL);
     if ( ret )
         goto done;
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index b7a10bbdd4..8ecb982a84 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -977,7 +977,7 @@ get_page_from_l1e(
          * minor hack can go away.
          */
         if ( (real_pg_owner == NULL) || (pg_owner == l1e_owner) ||
-             xsm_priv_mapping(XSM_TARGET, pg_owner, real_pg_owner) )
+             xsm_priv_mapping(TARGET_PRIVS, pg_owner, real_pg_owner) )
         {
             gdprintk(XENLOG_WARNING,
                      "pg_owner d%d l1e_owner d%d, but real_pg_owner d%d\n",
@@ -3407,7 +3407,7 @@ long do_mmuext_op(
         return -EINVAL;
     }
 
-    rc = xsm_mmuext_op(XSM_TARGET, currd, pg_owner);
+    rc = xsm_mmuext_op(TARGET_PRIVS, currd, pg_owner);
     if ( rc )
     {
         put_pg_owner(pg_owner);
@@ -3497,7 +3497,7 @@ long do_mmuext_op(
                 break;
             }
 
-            rc = xsm_memory_pin_page(XSM_HOOK, currd, pg_owner, page);
+            rc = xsm_memory_pin_page(XSM_NONE, currd, pg_owner, page);
             if ( !rc && unlikely(test_and_set_bit(_PGT_pinned,
                                                   &page->u.inuse.type_info)) )
             {
@@ -4005,7 +4005,7 @@ long do_mmu_update(
             }
             if ( xsm_needed != xsm_checked )
             {
-                rc = xsm_mmu_update(XSM_TARGET, d, pt_owner, pg_owner, xsm_needed);
+                rc = xsm_mmu_update(TARGET_PRIVS, d, pt_owner, pg_owner, xsm_needed);
                 if ( rc )
                     break;
                 xsm_checked = xsm_needed;
@@ -4148,7 +4148,7 @@ long do_mmu_update(
             xsm_needed |= XSM_MMU_MACHPHYS_UPDATE;
             if ( xsm_needed != xsm_checked )
             {
-                rc = xsm_mmu_update(XSM_TARGET, d, NULL, pg_owner, xsm_needed);
+                rc = xsm_mmu_update(TARGET_PRIVS, d, NULL, pg_owner, xsm_needed);
                 if ( rc )
                     break;
                 xsm_checked = xsm_needed;
@@ -4393,7 +4393,7 @@ static int __do_update_va_mapping(
 
     perfc_incr(calls_to_update_va);
 
-    rc = xsm_update_va_mapping(XSM_TARGET, d, pg_owner, val);
+    rc = xsm_update_va_mapping(TARGET_PRIVS, d, pg_owner, val);
     if ( rc )
         return rc;
 
@@ -4632,7 +4632,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_domain_memory_map(XSM_TARGET, d);
+        rc = xsm_domain_memory_map(TARGET_PRIVS, d);
         if ( rc )
         {
             rcu_unlock_domain(d);
@@ -4699,7 +4699,7 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         unsigned int i;
         bool store;
 
-        rc = xsm_machine_memory_map(XSM_PRIV);
+        rc = xsm_machine_memory_map(XSM_PLAT_CTRL);
         if ( rc )
             return rc;
 
@@ -4789,9 +4789,9 @@ long arch_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -ESRCH;
 
         if ( cmd == XENMEM_set_pod_target )
-            rc = xsm_set_pod_target(XSM_PRIV, d);
+            rc = xsm_set_pod_target(XSM_DOM_SUPER, d);
         else
-            rc = xsm_get_pod_target(XSM_PRIV, d);
+            rc = xsm_get_pod_target(XSM_DOM_SUPER, d);
 
         if ( rc != 0 )
             goto pod_target_out_unlock;
diff --git a/xen/arch/x86/mm/mem_paging.c b/xen/arch/x86/mm/mem_paging.c
index 01281f786e..6f8420f988 100644
--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -452,7 +452,7 @@ int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg)
     if ( rc )
         return rc;
 
-    rc = xsm_mem_paging(XSM_DM_PRIV, d);
+    rc = xsm_mem_paging(DEV_EMU_PRIVS, d);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/x86/mm/mem_sharing.c b/xen/arch/x86/mm/mem_sharing.c
index 98b14f7b0a..ba7a479de0 100644
--- a/xen/arch/x86/mm/mem_sharing.c
+++ b/xen/arch/x86/mm/mem_sharing.c
@@ -1883,7 +1883,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
     if ( rc )
         return rc;
 
-    rc = xsm_mem_sharing(XSM_DM_PRIV, d);
+    rc = xsm_mem_sharing(DEV_EMU_PRIVS, d);
     if ( rc )
         goto out;
 
@@ -1928,7 +1928,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         if ( rc )
             goto out;
 
-        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
+        rc = xsm_mem_sharing_op(DEV_EMU_PRIVS, d, cd, mso.op);
         if ( rc )
         {
             rcu_unlock_domain(cd);
@@ -1994,7 +1994,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
         if ( rc )
             goto out;
 
-        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd, mso.op);
+        rc = xsm_mem_sharing_op(DEV_EMU_PRIVS, d, cd, mso.op);
         if ( rc )
         {
             rcu_unlock_domain(cd);
@@ -2056,7 +2056,7 @@ int mem_sharing_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_sharing_op_t) arg)
          * We reuse XENMEM_sharing_op_share XSM check here as this is
          * essentially the same concept repeated over multiple pages.
          */
-        rc = xsm_mem_sharing_op(XSM_DM_PRIV, d, cd,
+        rc = xsm_mem_sharing_op(DEV_EMU_PRIVS, d, cd,
                                 XENMEM_sharing_op_share);
         if ( rc )
         {
diff --git a/xen/arch/x86/mm/p2m.c b/xen/arch/x86/mm/p2m.c
index 3840f167b0..5dc0aafd51 100644
--- a/xen/arch/x86/mm/p2m.c
+++ b/xen/arch/x86/mm/p2m.c
@@ -2611,7 +2611,7 @@ static int p2m_add_foreign(struct domain *tdom, unsigned long fgfn,
             goto out;
     }
 
-    rc = xsm_map_gmfn_foreign(XSM_TARGET, tdom, fdom);
+    rc = xsm_map_gmfn_foreign(TARGET_PRIVS, tdom, fdom);
     if ( rc )
         goto out;
 
diff --git a/xen/arch/x86/mm/paging.c b/xen/arch/x86/mm/paging.c
index 8bc14df943..6db47c7101 100644
--- a/xen/arch/x86/mm/paging.c
+++ b/xen/arch/x86/mm/paging.c
@@ -712,7 +712,7 @@ int paging_domctl(struct domain *d, struct xen_domctl_shadow_op *sc,
         return -EBUSY;
     }
 
-    rc = xsm_shadow_control(XSM_HOOK, d, sc->op);
+    rc = xsm_shadow_control(XSM_NONE, d, sc->op);
     if ( rc )
         return rc;
 
@@ -769,7 +769,7 @@ long paging_domctl_continuation(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
     if ( d == NULL )
         return -ESRCH;
 
-    ret = xsm_domctl(XSM_OTHER, d, op.cmd);
+    ret = xsm_domctl(DEV_EMU_PRIVS | XENSTORE_PRIVS | XSM_DOM_SUPER, d, op.cmd);
     if ( !ret )
     {
         if ( domctl_lock_acquire() )
diff --git a/xen/arch/x86/mm/shadow/set.c b/xen/arch/x86/mm/shadow/set.c
index fff4d1633c..066865e1a6 100644
--- a/xen/arch/x86/mm/shadow/set.c
+++ b/xen/arch/x86/mm/shadow/set.c
@@ -106,7 +106,7 @@ shadow_get_page_from_l1e(shadow_l1e_t sl1e, struct domain *d, p2m_type_t type)
          (owner = page_get_owner(mfn_to_page(mfn))) &&
          (d != owner) )
     {
-        res = xsm_priv_mapping(XSM_TARGET, d, owner);
+        res = xsm_priv_mapping(TARGET_PRIVS, d, owner);
         if ( !res )
         {
             res = get_page_from_l1e(sl1e, d, owner);
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 5febc0ea4b..6d4a873130 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -1310,7 +1310,7 @@ int pci_restore_msi_state(struct pci_dev *pdev)
     if ( !use_msi )
         return -EOPNOTSUPP;
 
-    ret = xsm_resource_setup_pci(XSM_PRIV,
+    ret = xsm_resource_setup_pci(XSM_HW_CTRL,
                                 (pdev->seg << 16) | (pdev->bus << 8) |
                                 pdev->devfn);
     if ( ret )
diff --git a/xen/arch/x86/pci.c b/xen/arch/x86/pci.c
index a9decd4f33..7ca9fc68f2 100644
--- a/xen/arch/x86/pci.c
+++ b/xen/arch/x86/pci.c
@@ -74,7 +74,7 @@ int pci_conf_write_intercept(unsigned int seg, unsigned int bdf,
                              uint32_t *data)
 {
     struct pci_dev *pdev;
-    int rc = xsm_pci_config_permission(XSM_HOOK, current->domain, bdf,
+    int rc = xsm_pci_config_permission(XSM_NONE, current->domain, bdf,
                                        reg, reg + size - 1, 1);
 
     if ( rc < 0 )
diff --git a/xen/arch/x86/physdev.c b/xen/arch/x86/physdev.c
index 23465bcd00..73e5757faf 100644
--- a/xen/arch/x86/physdev.c
+++ b/xen/arch/x86/physdev.c
@@ -110,7 +110,7 @@ int physdev_map_pirq(domid_t domid, int type, int *index, int *pirq_p,
     if ( d == NULL )
         return -ESRCH;
 
-    ret = xsm_map_domain_pirq(XSM_DM_PRIV, d);
+    ret = xsm_map_domain_pirq(DEV_EMU_PRIVS, d);
     if ( ret )
         goto free_domain;
 
@@ -148,7 +148,7 @@ int physdev_unmap_pirq(domid_t domid, int pirq)
         return -ESRCH;
 
     if ( domid != DOMID_SELF || !is_hvm_domain(d) || !has_pirq(d) )
-        ret = xsm_unmap_domain_pirq(XSM_DM_PRIV, d);
+        ret = xsm_unmap_domain_pirq(DEV_EMU_PRIVS, d);
     if ( ret )
         goto free_domain;
 
@@ -355,7 +355,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&apic, arg, 1) != 0 )
             break;
-        ret = xsm_apic(XSM_PRIV, currd, cmd);
+        ret = xsm_apic(XSM_HW_CTRL, currd, cmd);
         if ( ret )
             break;
         ret = ioapic_guest_read(apic.apic_physbase, apic.reg, &apic.value);
@@ -369,7 +369,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         ret = -EFAULT;
         if ( copy_from_guest(&apic, arg, 1) != 0 )
             break;
-        ret = xsm_apic(XSM_PRIV, currd, cmd);
+        ret = xsm_apic(XSM_HW_CTRL, currd, cmd);
         if ( ret )
             break;
         ret = ioapic_guest_write(apic.apic_physbase, apic.reg, apic.value);
@@ -385,7 +385,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 
         /* Use the APIC check since this dummy hypercall should still only
          * be called by the domain with access to program the ioapic */
-        ret = xsm_apic(XSM_PRIV, currd, cmd);
+        ret = xsm_apic(XSM_HW_CTRL, currd, cmd);
         if ( ret )
             break;
 
@@ -535,7 +535,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( copy_from_guest(&dev, arg, 1) )
             ret = -EFAULT;
         else
-            ret = xsm_resource_setup_pci(XSM_PRIV,
+            ret = xsm_resource_setup_pci(XSM_HW_CTRL,
                                          (dev.seg << 16) | (dev.bus << 8) |
                                          dev.devfn) ?:
                   pci_prepare_msix(dev.seg, dev.bus, dev.devfn,
@@ -546,7 +546,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
     case PHYSDEVOP_pci_mmcfg_reserved: {
         struct physdev_pci_mmcfg_reserved info;
 
-        ret = xsm_resource_setup_misc(XSM_PRIV);
+        ret = xsm_resource_setup_misc(XSM_HW_CTRL);
         if ( ret )
             break;
 
@@ -611,7 +611,7 @@ ret_t do_physdev_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( setup_gsi.gsi < 0 || setup_gsi.gsi >= nr_irqs_gsi )
             break;
 
-        ret = xsm_resource_setup_gsi(XSM_PRIV, setup_gsi.gsi);
+        ret = xsm_resource_setup_gsi(XSM_HW_CTRL, setup_gsi.gsi);
         if ( ret )
             break;
 
diff --git a/xen/arch/x86/platform_hypercall.c b/xen/arch/x86/platform_hypercall.c
index 23fadbc782..a3e4db9f02 100644
--- a/xen/arch/x86/platform_hypercall.c
+++ b/xen/arch/x86/platform_hypercall.c
@@ -196,7 +196,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     if ( op->interface_version != XENPF_INTERFACE_VERSION )
         return -EACCES;
 
-    ret = xsm_platform_op(XSM_PRIV, op->cmd);
+    ret = xsm_platform_op(XSM_PLAT_CTRL, op->cmd);
     if ( ret )
         return ret;
 
@@ -614,7 +614,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     {
         int cpu = op->u.cpu_ol.cpuid;
 
-        ret = xsm_resource_plug_core(XSM_HOOK);
+        ret = xsm_resource_plug_core(XSM_NONE);
         if ( ret )
             break;
 
@@ -640,7 +640,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     {
         int cpu = op->u.cpu_ol.cpuid;
 
-        ret = xsm_resource_unplug_core(XSM_HOOK);
+        ret = xsm_resource_unplug_core(XSM_NONE);
         if ( ret )
             break;
 
@@ -669,7 +669,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     break;
 
     case XENPF_cpu_hotadd:
-        ret = xsm_resource_plug_core(XSM_HOOK);
+        ret = xsm_resource_plug_core(XSM_NONE);
         if ( ret )
             break;
 
@@ -679,7 +679,7 @@ ret_t do_platform_op(XEN_GUEST_HANDLE_PARAM(xen_platform_op_t) u_xenpf_op)
     break;
 
     case XENPF_mem_hotadd:
-        ret = xsm_resource_plug_core(XSM_HOOK);
+        ret = xsm_resource_plug_core(XSM_NONE);
         if ( ret )
             break;
 
diff --git a/xen/arch/x86/pv/emul-priv-op.c b/xen/arch/x86/pv/emul-priv-op.c
index 8889509d2a..b3f7896271 100644
--- a/xen/arch/x86/pv/emul-priv-op.c
+++ b/xen/arch/x86/pv/emul-priv-op.c
@@ -250,7 +250,7 @@ static bool pci_cfg_ok(struct domain *currd, unsigned int start,
     }
 
     return !write ?
-           xsm_pci_config_permission(XSM_HOOK, currd, machine_bdf,
+           xsm_pci_config_permission(XSM_NONE, currd, machine_bdf,
                                      start, start + size - 1, 0) == 0 :
            pci_conf_write_intercept(0, machine_bdf, start, size, write) >= 0;
 }
diff --git a/xen/arch/x86/sysctl.c b/xen/arch/x86/sysctl.c
index aff52a13f3..a843d5aac5 100644
--- a/xen/arch/x86/sysctl.c
+++ b/xen/arch/x86/sysctl.c
@@ -190,8 +190,8 @@ long arch_do_sysctl(
         }
 
         if ( !ret )
-            ret = plug ? xsm_resource_plug_core(XSM_HOOK)
-                       : xsm_resource_unplug_core(XSM_HOOK);
+            ret = plug ? xsm_resource_plug_core(XSM_NONE)
+                       : xsm_resource_unplug_core(XSM_NONE);
 
         if ( !ret )
             ret = continue_hypercall_on_cpu(0, fn, hcpu);
diff --git a/xen/common/domain.c b/xen/common/domain.c
index 1f2c569e5d..b3a3864421 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -311,7 +311,7 @@ static int late_hwdom_init(struct domain *d)
     if ( d != hardware_domain || d->domain_id == 0 )
         return 0;
 
-    rv = xsm_init_hardware_domain(XSM_HOOK, d);
+    rv = xsm_init_hardware_domain(XSM_NONE, d);
     if ( rv )
         return rv;
 
@@ -655,7 +655,7 @@ struct domain *domain_create(domid_t domid,
         if ( !d->iomem_caps || !d->irq_caps )
             goto fail;
 
-        if ( (err = xsm_domain_create(XSM_HOOK, d, config->ssidref)) != 0 )
+        if ( (err = xsm_domain_create(XSM_NONE, d, config->ssidref)) != 0 )
             goto fail;
 
         d->controller_pause_count = 1;
diff --git a/xen/common/domctl.c b/xen/common/domctl.c
index af044e2eda..be7533caf9 100644
--- a/xen/common/domctl.c
+++ b/xen/common/domctl.c
@@ -314,7 +314,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             return -ESRCH;
     }
 
-    ret = xsm_domctl(XSM_OTHER, d, op->cmd);
+    ret = xsm_domctl(DEV_EMU_PRIVS | XENSTORE_PRIVS | XSM_DOM_SUPER, d, op->cmd);
     if ( ret )
         goto domctl_out_unlock_domonly;
 
@@ -553,7 +553,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
         if ( d == NULL )
             goto getdomaininfo_out;
 
-        ret = xsm_getdomaininfo(XSM_HOOK, d);
+        ret = xsm_getdomaininfo(XSM_NONE, d);
         if ( ret )
             goto getdomaininfo_out;
 
@@ -688,7 +688,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
             break;
         }
         irq = pirq_access_permitted(current->domain, pirq);
-        if ( !irq || xsm_irq_permission(XSM_HOOK, d, irq, allow) )
+        if ( !irq || xsm_irq_permission(XSM_NONE, d, irq, allow) )
             ret = -EPERM;
         else if ( allow )
             ret = irq_permit_access(d, irq);
@@ -709,7 +709,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         if ( !iomem_access_permitted(current->domain,
                                      mfn, mfn + nr_mfns - 1) ||
-             xsm_iomem_permission(XSM_HOOK, d, mfn, mfn + nr_mfns - 1, allow) )
+             xsm_iomem_permission(XSM_NONE, d, mfn, mfn + nr_mfns - 1, allow) )
             ret = -EPERM;
         else if ( allow )
             ret = iomem_permit_access(d, mfn, mfn + nr_mfns - 1);
@@ -746,7 +746,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
              !iomem_access_permitted(d, mfn, mfn_end) )
             break;
 
-        ret = xsm_iomem_mapping(XSM_HOOK, d, mfn, mfn_end, add);
+        ret = xsm_iomem_mapping(XSM_NONE, d, mfn, mfn_end, add);
         if ( ret )
             break;
 
@@ -801,7 +801,7 @@ long do_domctl(XEN_GUEST_HANDLE_PARAM(xen_domctl_t) u_domctl)
 
         ret = -EOPNOTSUPP;
         if ( is_hvm_domain(e) )
-            ret = xsm_set_target(XSM_HOOK, d, e);
+            ret = xsm_set_target(XSM_NONE, d, e);
         if ( ret )
         {
             put_domain(e);
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 5479315aae..5c987096d9 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -296,7 +296,7 @@ static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
         ERROR_EXIT_DOM(port, d);
     chn = evtchn_from_port(d, port);
 
-    rc = xsm_evtchn_unbound(XSM_TARGET, d, chn, alloc->remote_dom);
+    rc = xsm_evtchn_unbound(TARGET_PRIVS, d, chn, alloc->remote_dom);
     if ( rc )
         goto out;
 
@@ -372,7 +372,7 @@ static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
          (rchn->u.unbound.remote_domid != ld->domain_id) )
         ERROR_EXIT_DOM(-EINVAL, rd);
 
-    rc = xsm_evtchn_interdomain(XSM_HOOK, ld, lchn, rd, rchn);
+    rc = xsm_evtchn_interdomain(XSM_NONE, ld, lchn, rd, rchn);
     if ( rc )
         goto out;
 
@@ -760,7 +760,7 @@ int evtchn_send(struct domain *ld, unsigned int lport)
         goto out;
     }
 
-    ret = xsm_evtchn_send(XSM_HOOK, ld, lchn);
+    ret = xsm_evtchn_send(XSM_NONE, ld, lchn);
     if ( ret )
         goto out;
 
@@ -985,7 +985,7 @@ int evtchn_status(evtchn_status_t *status)
         goto out;
     }
 
-    rc = xsm_evtchn_status(XSM_TARGET, d, chn);
+    rc = xsm_evtchn_status(TARGET_PRIVS, d, chn);
     if ( rc )
         goto out;
 
@@ -1310,7 +1310,7 @@ long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_evtchn_reset(XSM_TARGET, current->domain, d);
+        rc = xsm_evtchn_reset(TARGET_PRIVS, current->domain, d);
         if ( !rc )
             rc = evtchn_reset(d, cmd == EVTCHNOP_reset_cont);
 
@@ -1371,7 +1371,7 @@ int alloc_unbound_xen_event_channel(
         goto out;
     chn = evtchn_from_port(ld, port);
 
-    rc = xsm_evtchn_unbound(XSM_TARGET, ld, chn, remote_domid);
+    rc = xsm_evtchn_unbound(TARGET_PRIVS, ld, chn, remote_domid);
     if ( rc )
         goto out;
 
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ab30e2e8cf..27e4eb1d65 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -1063,7 +1063,7 @@ map_grant_ref(
         return;
     }
 
-    rc = xsm_grant_mapref(XSM_HOOK, ld, rd, op->flags);
+    rc = xsm_grant_mapref(XSM_NONE, ld, rd, op->flags);
     if ( rc )
     {
         rcu_unlock_domain(rd);
@@ -1403,7 +1403,7 @@ unmap_common(
         return;
     }
 
-    rc = xsm_grant_unmapref(XSM_HOOK, ld, rd);
+    rc = xsm_grant_unmapref(XSM_NONE, ld, rd);
     if ( rc )
     {
         rcu_unlock_domain(rd);
@@ -2021,7 +2021,7 @@ gnttab_setup_table(
         goto out;
     }
 
-    if ( xsm_grant_setup(XSM_TARGET, curr->domain, d) )
+    if ( xsm_grant_setup(TARGET_PRIVS, curr->domain, d) )
     {
         op.status = GNTST_permission_denied;
         goto out;
@@ -2103,7 +2103,7 @@ gnttab_query_size(
         goto out;
     }
 
-    if ( xsm_grant_query_size(XSM_TARGET, current->domain, d) )
+    if ( xsm_grant_query_size(TARGET_PRIVS, current->domain, d) )
     {
         op.status = GNTST_permission_denied;
         goto out;
@@ -2274,7 +2274,7 @@ gnttab_transfer(
             goto put_gfn_and_copyback;
         }
 
-        if ( xsm_grant_transfer(XSM_HOOK, d, e) )
+        if ( xsm_grant_transfer(XSM_NONE, d, e) )
         {
             gop.status = GNTST_permission_denied;
         unlock_and_copyback:
@@ -2812,7 +2812,7 @@ static int gnttab_copy_lock_domains(const struct gnttab_copy *op,
     if ( rc < 0 )
         goto error;
 
-    rc = xsm_grant_copy(XSM_HOOK, src->domain, dest->domain);
+    rc = xsm_grant_copy(XSM_NONE, src->domain, dest->domain);
     if ( rc < 0 )
     {
         rc = GNTST_permission_denied;
@@ -3231,7 +3231,7 @@ gnttab_get_status_frames(XEN_GUEST_HANDLE_PARAM(gnttab_get_status_frames_t) uop,
         op.status = GNTST_bad_domain;
         goto out1;
     }
-    rc = xsm_grant_setup(XSM_TARGET, current->domain, d);
+    rc = xsm_grant_setup(TARGET_PRIVS, current->domain, d);
     if ( rc )
     {
         op.status = GNTST_permission_denied;
@@ -3295,7 +3295,7 @@ gnttab_get_version(XEN_GUEST_HANDLE_PARAM(gnttab_get_version_t) uop)
     if ( d == NULL )
         return -ESRCH;
 
-    rc = xsm_grant_query_size(XSM_TARGET, current->domain, d);
+    rc = xsm_grant_query_size(TARGET_PRIVS, current->domain, d);
     if ( rc )
     {
         rcu_unlock_domain(d);
diff --git a/xen/common/hypfs.c b/xen/common/hypfs.c
index e71f7df479..207556896d 100644
--- a/xen/common/hypfs.c
+++ b/xen/common/hypfs.c
@@ -679,7 +679,7 @@ long do_hypfs_op(unsigned int cmd,
     struct hypfs_entry *entry;
     static char path[XEN_HYPFS_MAX_PATHLEN];
 
-    if ( xsm_hypfs_op(XSM_PRIV) )
+    if ( xsm_hypfs_op(XSM_PLAT_CTRL) )
         return -EPERM;
 
     if ( cmd == XEN_HYPFS_OP_get_version )
diff --git a/xen/common/kernel.c b/xen/common/kernel.c
index d77756a81e..5c065e403f 100644
--- a/xen/common/kernel.c
+++ b/xen/common/kernel.c
@@ -459,7 +459,7 @@ __initcall(param_init);
 
 DO(xen_version)(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    bool_t deny = !!xsm_xen_version(XSM_OTHER, cmd);
+    bool_t deny = !!xsm_xen_version(XSM_NONE | XSM_PLAT_CTRL, cmd);
 
     switch ( cmd )
     {
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index ebeee6405a..2d1d1ce205 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -1219,7 +1219,7 @@ static int do_kexec_op_internal(unsigned long op,
 {
     int ret = -EINVAL;
 
-    ret = xsm_kexec(XSM_PRIV);
+    ret = xsm_kexec(XSM_PLAT_CTRL);
     if ( ret )
         return ret;
 
diff --git a/xen/common/mem_access.c b/xen/common/mem_access.c
index 010e6f8dbf..6cbe12994d 100644
--- a/xen/common/mem_access.c
+++ b/xen/common/mem_access.c
@@ -47,7 +47,7 @@ int mem_access_memop(unsigned long cmd,
     if ( !p2m_mem_access_sanity_check(d) )
         goto out;
 
-    rc = xsm_mem_access(XSM_DM_PRIV, d);
+    rc = xsm_mem_access(DEV_EMU_PRIVS, d);
     if ( rc )
         goto out;
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index 76b9f58478..f51a9cea73 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -603,7 +603,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
         goto fail_early;
     }
 
-    rc = xsm_memory_exchange(XSM_TARGET, d);
+    rc = xsm_memory_exchange(TARGET_PRIVS, d);
     if ( rc )
     {
         rcu_unlock_domain(d);
@@ -1062,7 +1062,7 @@ static long xatp_permission_check(struct domain *d, unsigned int space)
          (!is_hardware_domain(d) || (d != current->domain)) )
         return -EACCES;
 
-    return xsm_add_to_physmap(XSM_TARGET, current->domain, d);
+    return xsm_add_to_physmap(TARGET_PRIVS, current->domain, d);
 }
 
 unsigned int ioreq_server_max_frames(const struct domain *d)
@@ -1222,7 +1222,7 @@ static int acquire_resource(
     if ( rc )
         return rc;
 
-    rc = xsm_domain_resource_map(XSM_DM_PRIV, d);
+    rc = xsm_domain_resource_map(DEV_EMU_PRIVS, d);
     if ( rc )
         goto out;
 
@@ -1378,7 +1378,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
              && (reservation.mem_flags & XENMEMF_populate_on_demand) )
             args.memflags |= MEMF_populate_on_demand;
 
-        if ( xsm_memory_adjust_reservation(XSM_TARGET, curr_d, d) )
+        if ( xsm_memory_adjust_reservation(TARGET_PRIVS, curr_d, d) )
         {
             rcu_unlock_domain(d);
             return start_extent;
@@ -1452,7 +1452,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -ESRCH;
 
-        rc = xsm_memory_stat_reservation(XSM_TARGET, curr_d, d);
+        rc = xsm_memory_stat_reservation(TARGET_PRIVS, curr_d, d);
         if ( rc )
         {
             rcu_unlock_domain(d);
@@ -1574,7 +1574,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
             return -ESRCH;
 
         rc = paging_mode_translate(d)
-             ? xsm_remove_from_physmap(XSM_TARGET, curr_d, d)
+             ? xsm_remove_from_physmap(TARGET_PRIVS, curr_d, d)
              : -EACCES;
         if ( rc )
         {
@@ -1621,7 +1621,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             return -EINVAL;
 
-        rc = xsm_claim_pages(XSM_PRIV, d);
+        rc = xsm_claim_pages(XSM_DOM_SUPER, d);
 
         if ( !rc )
             rc = domain_set_outstanding_pages(d, reservation.nr_extents);
@@ -1652,7 +1652,7 @@ long do_memory_op(unsigned long cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( (d = rcu_lock_domain_by_any_id(topology.domid)) == NULL )
             return -ESRCH;
 
-        rc = xsm_get_vnumainfo(XSM_TARGET, d);
+        rc = xsm_get_vnumainfo(TARGET_PRIVS, d);
         if ( rc )
         {
             rcu_unlock_domain(d);
diff --git a/xen/common/monitor.c b/xen/common/monitor.c
index d5c9ff1cbf..5649097ad5 100644
--- a/xen/common/monitor.c
+++ b/xen/common/monitor.c
@@ -36,7 +36,7 @@ int monitor_domctl(struct domain *d, struct xen_domctl_monitor_op *mop)
     if ( unlikely(current->domain == d) ) /* no domain_pause() */
         return -EPERM;
 
-    rc = xsm_vm_event_control(XSM_PRIV, d, mop->op, mop->event);
+    rc = xsm_vm_event_control(XSM_DOM_SUPER, d, mop->op, mop->event);
     if ( unlikely(rc) )
         return rc;
 
diff --git a/xen/common/sched/core.c b/xen/common/sched/core.c
index 6d34764d38..ff397d6971 100644
--- a/xen/common/sched/core.c
+++ b/xen/common/sched/core.c
@@ -1944,7 +1944,7 @@ ret_t do_sched_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
         if ( d == NULL )
             break;
 
-        ret = xsm_schedop_shutdown(XSM_DM_PRIV, current->domain, d);
+        ret = xsm_schedop_shutdown(DEV_EMU_PRIVS, current->domain, d);
         if ( likely(!ret) )
             domain_shutdown(d, sched_remote_shutdown.reason);
 
@@ -2046,7 +2046,7 @@ long sched_adjust(struct domain *d, struct xen_domctl_scheduler_op *op)
 {
     long ret;
 
-    ret = xsm_domctl_scheduler_op(XSM_HOOK, d, op->cmd);
+    ret = xsm_domctl_scheduler_op(XSM_NONE, d, op->cmd);
     if ( ret )
         return ret;
 
@@ -2081,7 +2081,7 @@ long sched_adjust_global(struct xen_sysctl_scheduler_op *op)
     struct cpupool *pool;
     int rc;
 
-    rc = xsm_sysctl_scheduler_op(XSM_HOOK, op->cmd);
+    rc = xsm_sysctl_scheduler_op(XSM_NONE, op->cmd);
     if ( rc )
         return rc;
 
diff --git a/xen/common/sysctl.c b/xen/common/sysctl.c
index 3558641cd9..172f9b528d 100644
--- a/xen/common/sysctl.c
+++ b/xen/common/sysctl.c
@@ -41,7 +41,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
     if ( op->interface_version != XEN_SYSCTL_INTERFACE_VERSION )
         return -EACCES;
 
-    ret = xsm_sysctl(XSM_PRIV, op->cmd);
+    ret = xsm_sysctl(XSM_PLAT_CTRL, op->cmd);
     if ( ret )
         return ret;
 
@@ -58,7 +58,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
     switch ( op->cmd )
     {
     case XEN_SYSCTL_readconsole:
-        ret = xsm_readconsole(XSM_HOOK, op->u.readconsole.clear);
+        ret = xsm_readconsole(XSM_NONE, op->u.readconsole.clear);
         if ( ret )
             break;
 
@@ -88,7 +88,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
             if ( num_domains == op->u.getdomaininfolist.max_domains )
                 break;
 
-            ret = xsm_getdomaininfo(XSM_HOOK, d);
+            ret = xsm_getdomaininfo(XSM_NONE, d);
             if ( ret )
                 continue;
 
@@ -191,7 +191,7 @@ long do_sysctl(XEN_GUEST_HANDLE_PARAM(xen_sysctl_t) u_sysctl)
         if ( op->u.page_offline.end < op->u.page_offline.start )
             break;
 
-        ret = xsm_page_offline(XSM_HOOK, op->u.page_offline.cmd);
+        ret = xsm_page_offline(XSM_NONE, op->u.page_offline.cmd);
         if ( ret )
             break;
 
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 44d542f23e..103d0a207f 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -584,7 +584,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
         return 0;
     }
 
-    rc = xsm_vm_event_control(XSM_PRIV, d, vec->mode, vec->op);
+    rc = xsm_vm_event_control(XSM_DOM_SUPER, d, vec->mode, vec->op);
     if ( rc )
         return rc;
 
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index 1926a92fe4..4268c12e5d 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -737,7 +737,7 @@ ret_t do_xenoprof_op(int op, XEN_GUEST_HANDLE_PARAM(void) arg)
         return -EPERM;
     }
 
-    ret = xsm_profile(XSM_HOOK, current->domain, op);
+    ret = xsm_profile(XSM_NONE, current->domain, op);
     if ( ret )
         return ret;
 
diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 2358375170..93d51d6420 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -680,7 +680,7 @@ long do_console_io(unsigned int cmd, unsigned int count,
     long rc;
     unsigned int idx, len;
 
-    rc = xsm_console_io(XSM_OTHER, current->domain, cmd);
+    rc = xsm_console_io(XSM_NONE | XSM_DOM_SUPER, current->domain, cmd);
     if ( rc )
         return rc;
 
diff --git a/xen/drivers/passthrough/device_tree.c b/xen/drivers/passthrough/device_tree.c
index 999b831d90..a51bdd51d6 100644
--- a/xen/drivers/passthrough/device_tree.c
+++ b/xen/drivers/passthrough/device_tree.c
@@ -230,7 +230,7 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( ret )
             break;
 
-        ret = xsm_assign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
+        ret = xsm_assign_dtdevice(XSM_NONE, d, dt_node_full_name(dev));
         if ( ret )
             break;
 
@@ -284,7 +284,7 @@ int iommu_do_dt_domctl(struct xen_domctl *domctl, struct domain *d,
         if ( ret )
             break;
 
-        ret = xsm_deassign_dtdevice(XSM_HOOK, d, dt_node_full_name(dev));
+        ret = xsm_deassign_dtdevice(XSM_NONE, d, dt_node_full_name(dev));
 
         if ( d == dom_io )
             return -EINVAL;
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index 705137f8be..f9669c6afa 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -704,7 +704,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
     else
         pdev_type = "device";
 
-    ret = xsm_resource_plug_pci(XSM_PRIV, (seg << 16) | (bus << 8) | devfn);
+    ret = xsm_resource_plug_pci(XSM_HW_CTRL, (seg << 16) | (bus << 8) | devfn);
     if ( ret )
         return ret;
 
@@ -814,7 +814,7 @@ int pci_remove_device(u16 seg, u8 bus, u8 devfn)
     struct pci_dev *pdev;
     int ret;
 
-    ret = xsm_resource_unplug_pci(XSM_PRIV, (seg << 16) | (bus << 8) | devfn);
+    ret = xsm_resource_unplug_pci(XSM_HW_CTRL, (seg << 16) | (bus << 8) | devfn);
     if ( ret )
         return ret;
 
@@ -1484,7 +1484,7 @@ static int iommu_get_device_group(
              ((pdev->bus == bus) && (pdev->devfn == devfn)) )
             continue;
 
-        if ( xsm_get_device_group(XSM_HOOK, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
+        if ( xsm_get_device_group(XSM_NONE, (seg << 16) | (pdev->bus << 8) | pdev->devfn) )
             continue;
 
         sdev_id = ops->get_device_group_id(seg, pdev->bus, pdev->devfn);
@@ -1552,7 +1552,7 @@ int iommu_do_pci_domctl(
         u32 max_sdevs;
         XEN_GUEST_HANDLE_64(uint32) sdevs;
 
-        ret = xsm_get_device_group(XSM_HOOK, domctl->u.get_device_group.machine_sbdf);
+        ret = xsm_get_device_group(XSM_NONE, domctl->u.get_device_group.machine_sbdf);
         if ( ret )
             break;
 
@@ -1603,7 +1603,7 @@ int iommu_do_pci_domctl(
 
         machine_sbdf = domctl->u.assign_device.u.pci.machine_sbdf;
 
-        ret = xsm_assign_device(XSM_HOOK, d, machine_sbdf);
+        ret = xsm_assign_device(XSM_NONE, d, machine_sbdf);
         if ( ret )
             break;
 
@@ -1648,7 +1648,7 @@ int iommu_do_pci_domctl(
 
         machine_sbdf = domctl->u.assign_device.u.pci.machine_sbdf;
 
-        ret = xsm_deassign_device(XSM_HOOK, d, machine_sbdf);
+        ret = xsm_deassign_device(XSM_NONE, d, machine_sbdf);
         if ( ret )
             break;
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 9a88e5b00f..39681a5dff 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -475,6 +475,12 @@ struct domain
 #define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
 #define CLASSIC_DOM0_PRIVS (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | \
 		XSM_DEV_EMUL | XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)
+/* Wherever XSM_DEV_EMUL is the required role, XSM_DOM_SUPER also overrides */
+#define DEV_EMU_PRIVS (XSM_DOM_SUPER | XSM_DEV_EMUL)
+/* Wherever XSM_TARGET is checked, XSM_SELF also applies and XSM_DOM_SUPER overrides */
+#define TARGET_PRIVS (XSM_TARGET | XSM_SELF | XSM_DOM_SUPER)
+/* Wherever XSM_XENSTORE is checked, XSM_DOM_SUPER overrides */
+#define XENSTORE_PRIVS (XSM_XENSTORE | XSM_DOM_SUPER)
     uint32_t         xsm_roles;
 
     /* Which guest this guest has privileges on */
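To make the new role semantics above concrete, here is a minimal standalone sketch of how the OR'd role masks (TARGET_PRIVS, DEV_EMU_PRIVS, etc.) are meant to combine in the rewritten default-action check. The bit positions, struct layout, and helper names here are hypothetical stand-ins for illustration only (Xen's real values differ, and the real code adds speculative-execution hardening via evaluate_nospec()); note the mask that strips the conditional roles must OR the bits together so that all three clear:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical role bits; Xen's actual values differ. */
#define XSM_NONE      (1u << 0)
#define XSM_SELF      (1u << 1)
#define XSM_TARGET    (1u << 2)
#define XSM_DOM_SUPER (1u << 3)
#define XSM_DEV_EMUL  (1u << 4)
#define XSM_XENSTORE  (1u << 5)

/* Composite masks mirroring the patch's macros. */
#define TARGET_PRIVS   (XSM_TARGET | XSM_SELF | XSM_DOM_SUPER)
#define DEV_EMU_PRIVS  (XSM_DOM_SUPER | XSM_DEV_EMUL)
#define XENSTORE_PRIVS (XSM_XENSTORE | XSM_DOM_SUPER)

/* Toy stand-in for struct domain. */
struct dom {
    unsigned int xsm_roles;   /* roles held by this domain */
    const struct dom *target; /* domain this one emulates devices for */
};

/* Simplified model of the patch's xsm_default_action(): allow if any
 * requested role matches, without the nospec hardening. */
static int default_action(unsigned int action, const struct dom *src,
                          const struct dom *target)
{
    if ( action & XSM_NONE )
        return 0;
    if ( (action & XSM_SELF) && (!target || src == target) )
        return 0;
    if ( (action & XSM_TARGET) && target && src->target == target )
        return 0;
    /* XSM_DEV_EMUL only applies against the domain's emulation target. */
    if ( (action & XSM_DEV_EMUL) && (src->xsm_roles & XSM_DEV_EMUL) &&
         target && src->target == target )
        return 0;
    /* Strip the conditional roles; OR the bits so all three are cleared. */
    action &= ~(XSM_SELF | XSM_TARGET | XSM_DEV_EMUL);
    return (src->xsm_roles & action) ? 0 : -1; /* -1 stands in for -EPERM */
}

/* Sample domains: an unprivileged guest, a super/control domain, and a
 * device-model domain whose emulation target is the guest. */
static const struct dom guest = { 0, NULL };
static const struct dom ctrl  = { XSM_DOM_SUPER, NULL };
static const struct dom dm    = { XSM_DEV_EMUL, &guest };
```

With these masks, a TARGET_PRIVS check passes for self-access and for XSM_DOM_SUPER holders, while DEV_EMU_PRIVS passes only for the device model acting on its own target (or a super domain); had the mask used `&` instead of `|`, the DEV_EMUL bit would survive into the role comparison and wrongly grant the device model access to unrelated domains.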
diff --git a/xen/include/xsm/dummy.h b/xen/include/xsm/dummy.h
index a6dab0c809..35c9a4f2d4 100644
--- a/xen/include/xsm/dummy.h
+++ b/xen/include/xsm/dummy.h
@@ -65,37 +65,48 @@ void __xsm_action_mismatch_detected(void);
 #define XSM_INLINE always_inline
 #define XSM_DEFAULT_ARG xsm_default_t action,
 #define XSM_DEFAULT_VOID xsm_default_t action
-#define XSM_ASSERT_ACTION(def) LINKER_BUG_ON(def != action)
+#define XSM_ASSERT_ACTION(def) LINKER_BUG_ON((def) != action)
 
 #endif /* CONFIG_XSM */
 
 static always_inline int xsm_default_action(
     xsm_default_t action, struct domain *src, struct domain *target)
 {
-    switch ( action ) {
-    case XSM_HOOK:
+    /* TODO: these three ifs could be squashed into one, at some cost
+     *       to readability, but doing so may reduce the number of
+     *       Spectre gadgets.
+     */
+    if ( action & XSM_NONE )
         return 0;
-    case XSM_TARGET:
-        if ( evaluate_nospec(src == target) )
-        {
-            return 0;
-    case XSM_XS_PRIV:
-            if ( evaluate_nospec(is_xenstore_domain(src)) )
-                return 0;
-        }
-        /* fall through */
-    case XSM_DM_PRIV:
-        if ( target && evaluate_nospec(src->target == target) )
-            return 0;
-        /* fall through */
-    case XSM_PRIV:
-        if ( is_control_domain(src) )
-            return 0;
-        return -EPERM;
-    default:
-        LINKER_BUG_ON(1);
-        return -EPERM;
-    }
+
+    if ( (action & XSM_SELF) && (!target || src == target) )
+        return 0;
+
+    if ( (action & XSM_TARGET) && target && src->target == target )
+        return 0;
+
+    /* XSM_DEV_EMUL is the only conditional domain role: it grants
+     * access only when the target is the domain's emulation target.
+     */
+    if ( (action & XSM_DEV_EMUL) && (src->xsm_roles & XSM_DEV_EMUL) &&
+         target && src->target == target )
+        return 0;
+
+    /* Mask out SELF, TARGET, and DEV_EMUL as they have been handled */
+    action &= ~(XSM_SELF | XSM_TARGET | XSM_DEV_EMUL);
+
+    /* Check whether the domain holds any of the remaining roles:
+     *      XSM_PLAT_CTRL
+     *      XSM_DOM_BUILD
+     *      XSM_DOM_SUPER
+     *      XSM_HW_CTRL
+     *      XSM_HW_SUPER
+     *      XSM_XENSTORE
+     */
+    if ( src->xsm_roles & action )
+        return 0;
+
+    return -EPERM;
 }
 
 static XSM_INLINE void xsm_security_domaininfo(struct domain *d,
@@ -106,60 +117,60 @@ static XSM_INLINE void xsm_security_domaininfo(struct domain *d,
 
 static XSM_INLINE int xsm_domain_create(XSM_DEFAULT_ARG struct domain *d, u32 ssidref)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_getdomaininfo(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_domctl_scheduler_op(XSM_DEFAULT_ARG struct domain *d, int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_sysctl_scheduler_op(XSM_DEFAULT_ARG int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_set_target(XSM_DEFAULT_ARG struct domain *d, struct domain *e)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_domctl(XSM_DEFAULT_ARG struct domain *d, int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS | XENSTORE_PRIVS | XSM_DOM_SUPER);
     switch ( cmd )
     {
     case XEN_DOMCTL_ioport_mapping:
     case XEN_DOMCTL_memory_mapping:
     case XEN_DOMCTL_bind_pt_irq:
     case XEN_DOMCTL_unbind_pt_irq:
-        return xsm_default_action(XSM_DM_PRIV, current->domain, d);
+        return xsm_default_action(DEV_EMU_PRIVS, current->domain, d);
     case XEN_DOMCTL_getdomaininfo:
-        return xsm_default_action(XSM_XS_PRIV, current->domain, d);
+        return xsm_default_action(XENSTORE_PRIVS, current->domain, d);
     default:
-        return xsm_default_action(XSM_PRIV, current->domain, d);
+        return xsm_default_action(XSM_DOM_SUPER, current->domain, d);
     }
 }
 
 static XSM_INLINE int xsm_sysctl(XSM_DEFAULT_ARG int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_PLAT_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_readconsole(XSM_DEFAULT_ARG uint32_t clear)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, NULL);
 }
 
@@ -176,113 +187,113 @@ static XSM_INLINE void xsm_free_security_domain(struct domain *d)
 static XSM_INLINE int xsm_grant_mapref(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2,
                                                                 uint32_t flags)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_grant_unmapref(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_grant_setup(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_grant_transfer(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_grant_copy(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_grant_query_size(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_memory_exchange(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_memory_adjust_reservation(XSM_DEFAULT_ARG struct domain *d1,
                                                             struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_memory_stat_reservation(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_console_io(XSM_DEFAULT_ARG struct domain *d, int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
+    XSM_ASSERT_ACTION(XSM_NONE|XSM_DOM_SUPER);
     if ( d->is_console )
-        return xsm_default_action(XSM_HOOK, d, NULL);
+        return xsm_default_action(XSM_NONE, d, NULL);
 #ifdef CONFIG_VERBOSE_DEBUG
     if ( cmd == CONSOLEIO_write )
-        return xsm_default_action(XSM_HOOK, d, NULL);
+        return xsm_default_action(XSM_NONE, d, NULL);
 #endif
-    return xsm_default_action(XSM_PRIV, d, NULL);
+    return xsm_default_action(XSM_DOM_SUPER, d, NULL);
 }
 
 static XSM_INLINE int xsm_profile(XSM_DEFAULT_ARG struct domain *d, int op)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, d, NULL);
 }
 
 static XSM_INLINE int xsm_kexec(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_PLAT_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_schedop_shutdown(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_memory_pin_page(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2,
                                           struct page_info *page)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_claim_pages(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_DOM_SUPER);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_evtchn_unbound(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn,
                                          domid_t id2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_evtchn_interdomain(XSM_DEFAULT_ARG struct domain *d1, struct evtchn
                                 *chan1, struct domain *d2, struct evtchn *chan2)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, d1, d2);
 }
 
@@ -293,19 +304,19 @@ static XSM_INLINE void xsm_evtchn_close_post(struct evtchn *chn)
 
 static XSM_INLINE int xsm_evtchn_send(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, d, NULL);
 }
 
 static XSM_INLINE int xsm_evtchn_status(XSM_DEFAULT_ARG struct domain *d, struct evtchn *chn)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_evtchn_reset(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d1, d2);
 }
 
@@ -328,44 +339,44 @@ static XSM_INLINE char *xsm_show_security_evtchn(struct domain *d, const struct
 
 static XSM_INLINE int xsm_init_hardware_domain(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_get_pod_target(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_DOM_SUPER);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_set_pod_target(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_DOM_SUPER);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_get_vnumainfo(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
 static XSM_INLINE int xsm_get_device_group(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_assign_device(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_deassign_device(XSM_DEFAULT_ARG struct domain *d, uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
@@ -375,14 +386,14 @@ static XSM_INLINE int xsm_deassign_device(XSM_DEFAULT_ARG struct domain *d, uint
 static XSM_INLINE int xsm_assign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
                                           const char *dtpath)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
                                             const char *dtpath)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
@@ -390,55 +401,55 @@ static XSM_INLINE int xsm_deassign_dtdevice(XSM_DEFAULT_ARG struct domain *d,
 
 static XSM_INLINE int xsm_resource_plug_core(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_resource_unplug_core(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_resource_plug_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_HW_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_resource_unplug_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_HW_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_resource_setup_pci(XSM_DEFAULT_ARG uint32_t machine_bdf)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_HW_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_resource_setup_gsi(XSM_DEFAULT_ARG int gsi)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_HW_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_resource_setup_misc(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_HW_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_page_offline(XSM_DEFAULT_ARG uint32_t cmd)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_hypfs_op(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_PLAT_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
@@ -461,57 +472,57 @@ static XSM_INLINE char *xsm_show_irq_sid(int irq)
 
 static XSM_INLINE int xsm_map_domain_pirq(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_map_domain_irq(XSM_DEFAULT_ARG struct domain *d,
                                          int irq, const void *data)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_unmap_domain_pirq(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_bind_pt_irq(XSM_DEFAULT_ARG struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_unbind_pt_irq(XSM_DEFAULT_ARG struct domain *d, struct xen_domctl_bind_pt_irq *bind)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_unmap_domain_irq(XSM_DEFAULT_ARG struct domain *d,
                                            int irq, const void *data)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_irq_permission(XSM_DEFAULT_ARG struct domain *d, int pirq, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_iomem_permission(XSM_DEFAULT_ARG struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_iomem_mapping(XSM_DEFAULT_ARG struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
@@ -519,60 +530,61 @@ static XSM_INLINE int xsm_pci_config_permission(XSM_DEFAULT_ARG struct domain *d
                                         uint16_t start, uint16_t end,
                                         uint8_t access)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_add_to_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_remove_from_physmap(XSM_DEFAULT_ARG struct domain *d1, struct domain *d2)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d1, d2);
 }
 
 static XSM_INLINE int xsm_map_gmfn_foreign(XSM_DEFAULT_ARG struct domain *d, struct domain *t)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d, t);
 }
 
 static XSM_INLINE int xsm_hvm_param(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
+/* This check is no longer being called */
 static XSM_INLINE int xsm_hvm_control(XSM_DEFAULT_ARG struct domain *d, unsigned long op)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_hvm_param_altp2mhvm(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_DOM_SUPER);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_hvm_altp2mhvm_op(XSM_DEFAULT_ARG struct domain *d, uint64_t mode, uint32_t op)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
+    XSM_ASSERT_ACTION(TARGET_PRIVS | DEV_EMU_PRIVS);
 
     switch ( mode )
     {
     case XEN_ALTP2M_mixed:
-        return xsm_default_action(XSM_TARGET, current->domain, d);
+        return xsm_default_action(TARGET_PRIVS, current->domain, d);
     case XEN_ALTP2M_external:
-        return xsm_default_action(XSM_DM_PRIV, current->domain, d);
+        return xsm_default_action(DEV_EMU_PRIVS, current->domain, d);
     case XEN_ALTP2M_limited:
         if ( HVMOP_altp2m_vcpu_enable_notify == op )
-            return xsm_default_action(XSM_TARGET, current->domain, d);
-        return xsm_default_action(XSM_DM_PRIV, current->domain, d);
+            return xsm_default_action(TARGET_PRIVS, current->domain, d);
+        return xsm_default_action(DEV_EMU_PRIVS, current->domain, d);
     default:
         return -EPERM;
     }
@@ -580,14 +592,14 @@ static XSM_INLINE int xsm_hvm_altp2mhvm_op(XSM_DEFAULT_ARG struct domain *d, uin
 
 static XSM_INLINE int xsm_vm_event_control(XSM_DEFAULT_ARG struct domain *d, int mode, int op)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_DOM_SUPER);
     return xsm_default_action(action, current->domain, d);
 }
 
 #ifdef CONFIG_MEM_ACCESS
 static XSM_INLINE int xsm_mem_access(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 #endif
@@ -595,7 +607,7 @@ static XSM_INLINE int xsm_mem_access(XSM_DEFAULT_ARG struct domain *d)
 #ifdef CONFIG_HAS_MEM_PAGING
 static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 #endif
@@ -603,51 +615,51 @@ static XSM_INLINE int xsm_mem_paging(XSM_DEFAULT_ARG struct domain *d)
 #ifdef CONFIG_MEM_SHARING
 static XSM_INLINE int xsm_mem_sharing(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 #endif
 
 static XSM_INLINE int xsm_platform_op(XSM_DEFAULT_ARG uint32_t op)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_PLAT_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 #ifdef CONFIG_X86
 static XSM_INLINE int xsm_do_mca(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_PLAT_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_shadow_control(XSM_DEFAULT_ARG struct domain *d, uint32_t op)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_mem_sharing_op(XSM_DEFAULT_ARG struct domain *d, struct domain *cd, int op)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, cd);
 }
 
 static XSM_INLINE int xsm_apic(XSM_DEFAULT_ARG struct domain *d, int cmd)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_HW_CTRL);
     return xsm_default_action(action, d, NULL);
 }
 
 static XSM_INLINE int xsm_machine_memory_map(XSM_DEFAULT_VOID)
 {
-    XSM_ASSERT_ACTION(XSM_PRIV);
+    XSM_ASSERT_ACTION(XSM_PLAT_CTRL);
     return xsm_default_action(action, current->domain, NULL);
 }
 
 static XSM_INLINE int xsm_domain_memory_map(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
@@ -655,7 +667,7 @@ static XSM_INLINE int xsm_mmu_update(XSM_DEFAULT_ARG struct domain *d, struct do
                                      struct domain *f, uint32_t flags)
 {
     int rc = 0;
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     if ( f != dom_io )
         rc = xsm_default_action(action, d, f);
     if ( evaluate_nospec(t) && !rc )
@@ -665,47 +677,47 @@ static XSM_INLINE int xsm_mmu_update(XSM_DEFAULT_ARG struct domain *d, struct do
 
 static XSM_INLINE int xsm_mmuext_op(XSM_DEFAULT_ARG struct domain *d, struct domain *f)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d, f);
 }
 
 static XSM_INLINE int xsm_update_va_mapping(XSM_DEFAULT_ARG struct domain *d, struct domain *f, 
                                                             l1_pgentry_t pte)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d, f);
 }
 
 static XSM_INLINE int xsm_priv_mapping(XSM_DEFAULT_ARG struct domain *d, struct domain *t)
 {
-    XSM_ASSERT_ACTION(XSM_TARGET);
+    XSM_ASSERT_ACTION(TARGET_PRIVS);
     return xsm_default_action(action, d, t);
 }
 
 static XSM_INLINE int xsm_ioport_permission(XSM_DEFAULT_ARG struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_ioport_mapping(XSM_DEFAULT_ARG struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    XSM_ASSERT_ACTION(XSM_HOOK);
+    XSM_ASSERT_ACTION(XSM_NONE);
     return xsm_default_action(action, current->domain, d);
 }
 
 static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int op)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
+    XSM_ASSERT_ACTION(XSM_NONE | XSM_DOM_SUPER);
     switch ( op )
     {
     case XENPMU_init:
     case XENPMU_finish:
     case XENPMU_lvtpc_set:
     case XENPMU_flush:
-        return xsm_default_action(XSM_HOOK, d, current->domain);
+        return xsm_default_action(XSM_NONE, d, current->domain);
     default:
-        return xsm_default_action(XSM_PRIV, d, current->domain);
+        return xsm_default_action(XSM_DOM_SUPER, d, current->domain);
     }
 }
 
@@ -713,7 +725,7 @@ static XSM_INLINE int xsm_pmu_op (XSM_DEFAULT_ARG struct domain *d, unsigned int
 
 static XSM_INLINE int xsm_dm_op(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
 
@@ -745,7 +757,7 @@ static XSM_INLINE int xsm_argo_send(const struct domain *d,
 #include <public/version.h>
 static XSM_INLINE int xsm_xen_version (XSM_DEFAULT_ARG uint32_t op)
 {
-    XSM_ASSERT_ACTION(XSM_OTHER);
+    XSM_ASSERT_ACTION(XSM_NONE | XSM_PLAT_CTRL);
     switch ( op )
     {
     case XENVER_version:
@@ -761,14 +773,14 @@ static XSM_INLINE int xsm_xen_version (XSM_DEFAULT_ARG uint32_t op)
     case XENVER_pagesize:
     case XENVER_guest_handle:
         /* These MUST always be accessible to any guest by default. */
-        return xsm_default_action(XSM_HOOK, current->domain, NULL);
+        return xsm_default_action(XSM_NONE, current->domain, NULL);
     default:
-        return xsm_default_action(XSM_PRIV, current->domain, NULL);
+        return xsm_default_action(XSM_PLAT_CTRL, current->domain, NULL);
     }
 }
 
 static XSM_INLINE int xsm_domain_resource_map(XSM_DEFAULT_ARG struct domain *d)
 {
-    XSM_ASSERT_ACTION(XSM_DM_PRIV);
+    XSM_ASSERT_ACTION(DEV_EMU_PRIVS);
     return xsm_default_action(action, current->domain, d);
 }
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 7bdd254420..b50d8a711f 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -30,18 +30,7 @@ typedef u32 xsm_magic_t;
 #define XSM_MAGIC 0x0
 #endif
 
-/* These annotations are used by callers and in dummy.h to document the
- * default actions of XSM hooks. They should be compiled out otherwise.
- */
-enum xsm_default {
-    XSM_HOOK,     /* Guests can normally access the hypercall */
-    XSM_DM_PRIV,  /* Device model can perform on its target domain */
-    XSM_TARGET,   /* Can perform on self or your target domain */
-    XSM_PRIV,     /* Privileged - normally restricted to dom0 */
-    XSM_XS_PRIV,  /* Xenstore domain - can do some privileged operations */
-    XSM_OTHER     /* Something more complex */
-};
-typedef enum xsm_default xsm_default_t;
+typedef uint32_t xsm_default_t;
 
 struct xsm_operations {
     void (*security_domaininfo) (struct domain *d,
-- 
2.20.1
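The patch above replaces the xsm_default enum with a plain uint32_t so that default actions such as XSM_ASSERT_ACTION(TARGET_PRIVS | DEV_EMU_PRIVS) can name several acceptable privilege levels at once. A minimal sketch of how such OR-able privilege bits might compose — the bit values and the priv_allows() helper below are illustrative assumptions, not Xen's actual definitions:

```c
#include <assert.h>
#include <stdint.h>

/* The patch collapses the old enum into a flag word of this type. */
typedef uint32_t xsm_default_t;

/* Hypothetical bit assignments; only the names come from the patch. */
#define XSM_NONE        0x00000000U  /* any guest may invoke */
#define TARGET_PRIVS    0x00000001U  /* self or one's target domain */
#define DEV_EMU_PRIVS   0x00000002U  /* device model over its target */
#define XENSTORE_PRIVS  0x00000004U  /* xenstore domain */
#define XSM_DOM_SUPER   0x00000008U  /* fully privileged, dom0-like */

/* A caller's effective privileges are checked against the allowed mask;
 * XSM_NONE (no bits set) means the hook is open to every domain. */
static int priv_allows(xsm_default_t allowed, xsm_default_t have)
{
    return allowed == XSM_NONE || (allowed & have) != 0;
}
```

With bit flags, a hook that accepts either a device model or the target relationship needs one mask instead of the old XSM_OTHER switch fallthrough.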



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:50:36 2021
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 05/10] hardware domain: convert to domain roles
Date: Fri, 14 May 2021 16:54:32 -0400
Message-Id: <20210514205437.13661-6-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This refactors hardware_domain handling so that it works within the
new domain roles construct: direct uses of the hardware_domain pointer
are replaced with the get_hardware_domain() and
is_hardware_domain_started() accessors.
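The accessor pattern the diff applies throughout can be sketched as follows; the struct layout and the assert policy inside get_hardware_domain() are assumptions for illustration, not Xen's actual implementation:

```c
#include <assert.h>
#include <stddef.h>

/* Illustrative stand-in for Xen's struct domain. */
struct domain {
    unsigned int max_vcpus;
};

/* Set once during boot when the hardware-domain role is assigned. */
static struct domain *hardware_domain;

/* Callers that previously tested `if ( !hardware_domain )` use this. */
static int is_hardware_domain_started(void)
{
    return hardware_domain != NULL;
}

/* Callers that previously dereferenced the pointer directly use this;
 * in this sketch it asserts the role has been assigned, so sites that
 * can run before dom0 exists must check is_hardware_domain_started()
 * first, mirroring the msi.c and nmi.c hunks below. */
static struct domain *get_hardware_domain(void)
{
    assert(hardware_domain != NULL);
    return hardware_domain;
}
```

Funnelling every access through one accessor gives a single place to enforce whatever invariants the roles construct needs, instead of scattering NULL checks across callers.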

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/arch/x86/acpi/cpu_idle.c        |   3 +-
 xen/arch/x86/cpu/mcheck/vmce.h      |   3 +-
 xen/arch/x86/cpu/vpmu.c             |   7 +-
 xen/arch/x86/crash.c                |   2 +-
 xen/arch/x86/io_apic.c              |   9 ++-
 xen/arch/x86/mm.c                   |   2 +-
 xen/arch/x86/msi.c                  |   4 +-
 xen/arch/x86/nmi.c                  |   3 +-
 xen/arch/x86/setup.c                |   3 +
 xen/arch/x86/traps.c                |   2 +-
 xen/arch/x86/x86_64/mm.c            |  11 +--
 xen/common/domain.c                 | 114 ++++++++++++++++++++++------
 xen/common/event_channel.c          |   3 +-
 xen/common/kexec.c                  |   2 +-
 xen/common/keyhandler.c             |   4 +-
 xen/common/shutdown.c               |  14 ++--
 xen/common/vm_event.c               |   5 +-
 xen/common/xenoprof.c               |   3 +-
 xen/drivers/char/ns16550.c          |   3 +-
 xen/drivers/passthrough/pci.c       |  12 +--
 xen/drivers/passthrough/vtd/iommu.c |   2 +-
 xen/include/xen/sched.h             |   7 +-
 22 files changed, 152 insertions(+), 66 deletions(-)

diff --git a/xen/arch/x86/acpi/cpu_idle.c b/xen/arch/x86/acpi/cpu_idle.c
index c092086b33..7a42c56944 100644
--- a/xen/arch/x86/acpi/cpu_idle.c
+++ b/xen/arch/x86/acpi/cpu_idle.c
@@ -1206,7 +1206,8 @@ static void set_cx(
             cx->entry_method = ACPI_CSTATE_EM_HALT;
         break;
     case ACPI_ADR_SPACE_SYSTEM_IO:
-        if ( ioports_deny_access(hardware_domain, cx->address, cx->address) )
+        if ( ioports_deny_access(get_hardware_domain(),
+             cx->address, cx->address) )
             printk(XENLOG_WARNING "Could not deny access to port %04x\n",
                    cx->address);
         cx->entry_method = ACPI_CSTATE_EM_SYSIO;
diff --git a/xen/arch/x86/cpu/mcheck/vmce.h b/xen/arch/x86/cpu/mcheck/vmce.h
index 2e9b32a9bd..774cd8a5af 100644
--- a/xen/arch/x86/cpu/mcheck/vmce.h
+++ b/xen/arch/x86/cpu/mcheck/vmce.h
@@ -6,8 +6,7 @@
 int vmce_init(struct cpuinfo_x86 *c);
 
 #define dom0_vmce_enabled() \
-    (hardware_domain && \
-     evtchn_virq_enabled(domain_vcpu(hardware_domain, 0), VIRQ_MCA))
+    (evtchn_virq_enabled(domain_vcpu(get_hardware_domain(), 0), VIRQ_MCA))
 
 int unmmap_broken_page(struct domain *d, mfn_t mfn, unsigned long gfn);
 
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index 612b87526b..79715ce7e7 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -169,13 +169,14 @@ int vpmu_do_msr(unsigned int msr, uint64_t *msr_content,
 static inline struct vcpu *choose_hwdom_vcpu(void)
 {
     unsigned idx;
+    struct domain *hwdom = get_hardware_domain();
 
-    if ( hardware_domain->max_vcpus == 0 )
+    if ( hwdom->max_vcpus == 0 )
         return NULL;
 
-    idx = smp_processor_id() % hardware_domain->max_vcpus;
+    idx = smp_processor_id() % hwdom->max_vcpus;
 
-    return hardware_domain->vcpu[idx];
+    return hwdom->vcpu[idx];
 }
 
 void vpmu_do_interrupt(struct cpu_user_regs *regs)
diff --git a/xen/arch/x86/crash.c b/xen/arch/x86/crash.c
index 0611b4fb9b..e47f7da36d 100644
--- a/xen/arch/x86/crash.c
+++ b/xen/arch/x86/crash.c
@@ -210,7 +210,7 @@ void machine_crash_shutdown(void)
     info = kexec_crash_save_info();
     info->xen_phys_start = xen_phys_start;
     info->dom0_pfn_to_mfn_frame_list_list =
-        arch_get_pfn_to_mfn_frame_list_list(hardware_domain);
+        arch_get_pfn_to_mfn_frame_list_list(get_hardware_domain());
 }
 
 /*
diff --git a/xen/arch/x86/io_apic.c b/xen/arch/x86/io_apic.c
index 58b26d962c..520dea2552 100644
--- a/xen/arch/x86/io_apic.c
+++ b/xen/arch/x86/io_apic.c
@@ -2351,6 +2351,7 @@ int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
     struct IO_APIC_route_entry rte = { 0 };
     unsigned long flags;
     struct irq_desc *desc;
+    struct domain *hwdom = get_hardware_domain();
 
     if ( (apic = ioapic_physbase_to_id(physbase)) < 0 )
         return apic;
@@ -2401,7 +2402,7 @@ int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
     if ( !rte.mask )
     {
         pirq = (irq >= 256) ? irq : rte.vector;
-        if ( pirq >= hardware_domain->nr_pirqs )
+        if ( pirq >= hwdom->nr_pirqs )
             return -EINVAL;
     }
     else
@@ -2443,10 +2444,10 @@ int ioapic_guest_write(unsigned long physbase, unsigned int reg, u32 val)
     }
     if ( pirq >= 0 )
     {
-        spin_lock(&hardware_domain->event_lock);
-        ret = map_domain_pirq(hardware_domain, pirq, irq,
+        spin_lock(&hwdom->event_lock);
+        ret = map_domain_pirq(hwdom, pirq, irq,
                               MAP_PIRQ_TYPE_GSI, NULL);
-        spin_unlock(&hardware_domain->event_lock);
+        spin_unlock(&hwdom->event_lock);
         if ( ret < 0 )
             return ret;
     }
diff --git a/xen/arch/x86/mm.c b/xen/arch/x86/mm.c
index 8ecb982a84..7859eef303 100644
--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -4917,7 +4917,7 @@ mfn_t alloc_xen_pagetable_new(void)
     {
         void *ptr = alloc_xenheap_page();
 
-        BUG_ON(!hardware_domain && !ptr);
+        BUG_ON(!ptr);
         return ptr ? virt_to_mfn(ptr) : INVALID_MFN;
     }
 
diff --git a/xen/arch/x86/msi.c b/xen/arch/x86/msi.c
index 6d4a873130..ea8a9224ce 100644
--- a/xen/arch/x86/msi.c
+++ b/xen/arch/x86/msi.c
@@ -660,7 +660,7 @@ static int msi_capability_init(struct pci_dev *dev,
 
     *desc = entry;
     /* Restore the original MSI enabled bits  */
-    if ( !hardware_domain )
+    if ( !is_hardware_domain_started() )
     {
         /*
          * ..., except for internal requests (before Dom0 starts), in which
@@ -965,7 +965,7 @@ static int msix_capability_init(struct pci_dev *dev,
     ++msix->used_entries;
 
     /* Restore MSI-X enabled bits */
-    if ( !hardware_domain )
+    if ( !is_hardware_domain_started() )
     {
         /*
          * ..., except for internal requests (before Dom0 starts), in which
diff --git a/xen/arch/x86/nmi.c b/xen/arch/x86/nmi.c
index ab94a96c4d..61a083a836 100644
--- a/xen/arch/x86/nmi.c
+++ b/xen/arch/x86/nmi.c
@@ -594,7 +594,8 @@ static void do_nmi_stats(unsigned char key)
     for_each_online_cpu ( cpu )
         printk("%3u\t%3u\n", cpu, per_cpu(nmi_count, cpu));
 
-    if ( !hardware_domain || !(v = domain_vcpu(hardware_domain, 0)) )
+    if ( !is_hardware_domain_started() ||
+         !(v = domain_vcpu(get_hardware_domain(), 0)) )
         return;
 
     pend = v->arch.nmi_pending;
diff --git a/xen/arch/x86/setup.c b/xen/arch/x86/setup.c
index a6658d9769..e184f00117 100644
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -776,6 +776,9 @@ static struct domain *__init create_dom0(const module_t *image,
     if ( IS_ERR(d) || (alloc_dom0_vcpu0(d) == NULL) )
         panic("Error creating domain 0\n");
 
+    /* Ensure the correct roles are assigned */
+    d->xsm_roles = CLASSIC_DOM0_PRIVS;
+
     /* Grab the DOM0 command line. */
     cmdline = image->string ? __va(image->string) : NULL;
     if ( cmdline || kextra )
diff --git a/xen/arch/x86/traps.c b/xen/arch/x86/traps.c
index 3c2e563cce..dd47afe765 100644
--- a/xen/arch/x86/traps.c
+++ b/xen/arch/x86/traps.c
@@ -1683,7 +1683,7 @@ static bool pci_serr_nmicont(void)
 
 static void nmi_hwdom_report(unsigned int reason_idx)
 {
-    struct domain *d = hardware_domain;
+    struct domain *d = get_hardware_domain();
 
     if ( !d || !d->vcpu || !d->vcpu[0] || !is_pv_domain(d) /* PVH fixme */ )
         return;
diff --git a/xen/arch/x86/x86_64/mm.c b/xen/arch/x86/x86_64/mm.c
index d7e67311fa..7bdb7a2487 100644
--- a/xen/arch/x86/x86_64/mm.c
+++ b/xen/arch/x86/x86_64/mm.c
@@ -1198,6 +1198,7 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
     unsigned long old_max = max_page, old_total = total_pages;
     unsigned long old_node_start, old_node_span, orig_online;
     unsigned long i;
+    struct domain *hwdom = get_hardware_domain();
 
     dprintk(XENLOG_INFO, "memory_add %lx ~ %lx with pxm %x\n", spfn, epfn, pxm);
 
@@ -1280,12 +1281,12 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
      * shared or being kept in sync then newly added memory needs to be
      * mapped here.
      */
-    if ( is_iommu_enabled(hardware_domain) &&
-         !iommu_use_hap_pt(hardware_domain) &&
-         !need_iommu_pt_sync(hardware_domain) )
+    if ( is_iommu_enabled(hwdom) &&
+         !iommu_use_hap_pt(hwdom) &&
+         !need_iommu_pt_sync(hwdom) )
     {
         for ( i = spfn; i < epfn; i++ )
-            if ( iommu_legacy_map(hardware_domain, _dfn(i), _mfn(i),
+            if ( iommu_legacy_map(hwdom, _dfn(i), _mfn(i),
                                   1ul << PAGE_ORDER_4K,
                                   IOMMUF_readable | IOMMUF_writable) )
                 break;
@@ -1293,7 +1294,7 @@ int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm)
         {
             while (i-- > old_max)
                 /* If statement to satisfy __must_check. */
-                if ( iommu_legacy_unmap(hardware_domain, _dfn(i),
+                if ( iommu_legacy_unmap(hwdom, _dfn(i),
                                         1ul << PAGE_ORDER_4K) )
                     continue;
 
diff --git a/xen/common/domain.c b/xen/common/domain.c
index b3a3864421..d9b75bf835 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -45,6 +45,7 @@
 
 #ifdef CONFIG_X86
 #include <asm/guest.h>
+#include <asm/pv/shim.h>
 #endif
 
 /* Linux config option: propageted to domain0 */
@@ -302,23 +303,50 @@ struct vcpu *vcpu_create(struct domain *d, unsigned int vcpu_id)
     return NULL;
 }
 
-static int late_hwdom_init(struct domain *d)
+/*
+ * pivot_hw_ctl: a one-way pivot from the existing hardware domain to the
+ * new one. Upon success the domain *next_hwdom is in control of the
+ * hardware and the former hardware domain no longer has access.
+ */
+static int pivot_hw_ctl(struct domain *next_hwdom)
 {
 #ifdef CONFIG_LATE_HWDOM
-    struct domain *dom0;
+    bool already_found = false;
+    struct domain **pd = &domain_list, *curr_hwdom = NULL;
+    domid_t dom0_id = 0;
     int rv;
 
-    if ( d != hardware_domain || d->domain_id == 0 )
+#ifdef CONFIG_PV_SHIM
+    /* On PV shim dom0 != 0 */
+    dom0_id = get_initial_domain_id();
+#endif
+
+    if ( !(next_hwdom->xsm_roles & XSM_HW_CTRL) ||
+         next_hwdom->domain_id == dom0_id )
         return 0;
 
-    rv = xsm_init_hardware_domain(XSM_NONE, d);
+    rv = xsm_init_hardware_domain(XSM_NONE, next_hwdom);
     if ( rv )
         return rv;
 
-    printk("Initialising hardware domain %d\n", hardware_domid);
+    rcu_read_lock(&domlist_read_lock);
+
+    /* Walk the whole list to ensure there is only one XSM_HW_CTRL domain */
+    for ( ; *pd != NULL; pd = &(*pd)->next_in_list )
+        if ( (*pd)->xsm_roles & XSM_HW_CTRL ) {
+            if ( already_found )
+                panic("There should be only one domain with XSM_HW_CTRL\n");
+            already_found = true;
+            curr_hwdom = *pd;
+        }
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    ASSERT(curr_hwdom != NULL);
+
+    printk("Initialising hardware domain %d\n", next_hwdom->domain_id);
 
-    dom0 = rcu_lock_domain_by_id(0);
-    ASSERT(dom0 != NULL);
+    rcu_lock_domain(curr_hwdom);
     /*
      * Hardware resource ranges for domain 0 have been set up from
      * various sources intended to restrict the hardware domain's
@@ -331,17 +359,19 @@ static int late_hwdom_init(struct domain *d)
      * may be modified after this hypercall returns if a more complex
      * device model is desired.
      */
-    rangeset_swap(d->irq_caps, dom0->irq_caps);
-    rangeset_swap(d->iomem_caps, dom0->iomem_caps);
+    rangeset_swap(next_hwdom->irq_caps, curr_hwdom->irq_caps);
+    rangeset_swap(next_hwdom->iomem_caps, curr_hwdom->iomem_caps);
 #ifdef CONFIG_X86
-    rangeset_swap(d->arch.ioport_caps, dom0->arch.ioport_caps);
-    setup_io_bitmap(d);
-    setup_io_bitmap(dom0);
+    rangeset_swap(next_hwdom->arch.ioport_caps, curr_hwdom->arch.ioport_caps);
+    setup_io_bitmap(next_hwdom);
+    setup_io_bitmap(curr_hwdom);
 #endif
 
-    rcu_unlock_domain(dom0);
+    curr_hwdom->xsm_roles &= ~XSM_HW_CTRL;
 
-    iommu_hwdom_init(d);
+    rcu_unlock_domain(curr_hwdom);
+
+    iommu_hwdom_init(next_hwdom);
 
     return rv;
 #else
@@ -530,7 +560,7 @@ struct domain *domain_create(domid_t domid,
                              struct xen_domctl_createdomain *config,
                              bool is_priv)
 {
-    struct domain *d, **pd, *old_hwdom = NULL;
+    struct domain *d, **pd;
     enum { INIT_watchdog = 1u<<1,
            INIT_evtchn = 1u<<3, INIT_gnttab = 1u<<4, INIT_arch = 1u<<5 };
     int err, init_status = 0;
@@ -559,17 +589,19 @@ struct domain *domain_create(domid_t domid,
     /* Sort out our idea of is_control_domain(). */
     d->is_privileged = is_priv;
 
-    if (is_priv)
+    /* In reality, is_priv is only set when constructing dom0 */
+    if (is_priv) {
         d->xsm_roles = CLASSIC_DOM0_PRIVS;
+        hardware_domain = d;
+    }
 
     /* Sort out our idea of is_hardware_domain(). */
-    if ( domid == 0 || domid == hardware_domid )
+    if ( domid == hardware_domid )
     {
         if ( hardware_domid < 0 || hardware_domid >= DOMID_FIRST_RESERVED )
             panic("The value of hardware_dom must be a valid domain ID\n");
 
-        old_hwdom = hardware_domain;
-        hardware_domain = d;
+        d->xsm_roles = CLASSIC_HWDOM_PRIVS;
     }
 
     TRACE_1D(TRC_DOM0_DOM_ADD, d->domain_id);
@@ -682,12 +714,14 @@ struct domain *domain_create(domid_t domid,
         if ( (err = sched_init_domain(d, 0)) != 0 )
             goto fail;
 
-        if ( (err = late_hwdom_init(d)) != 0 )
+        if ( (err = pivot_hw_ctl(d)) != 0 )
             goto fail;
 
         /*
          * Must not fail beyond this point, as our caller doesn't know whether
-         * the domain has been entered into domain_list or not.
+         * the domain has been entered into domain_list or not. Additionally,
+         * if a hardware control pivot has occurred, a failure here would
+         * leave the platform without access to hardware.
          */
 
         spin_lock(&domlist_update_lock);
@@ -711,8 +745,6 @@ struct domain *domain_create(domid_t domid,
     err = err ?: -EILSEQ; /* Release build safety. */
 
     d->is_dying = DOMDYING_dead;
-    if ( hardware_domain == d )
-        hardware_domain = old_hwdom;
     atomic_set(&d->refcnt, DOMAIN_DESTROYED);
 
     sched_destroy_domain(d);
@@ -808,6 +840,42 @@ out:
 }
 
 
+bool is_hardware_domain_started(void)
+{
+    bool exists = false;
+    struct domain **pd = &domain_list;
+
+    rcu_read_lock(&domlist_read_lock);
+
+    for ( ; *pd != NULL; pd = &(*pd)->next_in_list )
+        if ( (*pd)->xsm_roles & XSM_HW_CTRL )
+        {
+            exists = true;
+            break;
+        }
+
+    /* Check the cached pointer while *pd may still be safely
+     * dereferenced, i.e. before dropping the RCU read lock. */
+    if ( exists )
+        ASSERT(*pd == hardware_domain);
+
+    rcu_read_unlock(&domlist_read_lock);
+
+    return exists;
+}
+
+
+struct domain *get_hardware_domain(void)
+{
+    if ( hardware_domain == NULL )
+        return NULL;
+
+    ASSERT(hardware_domain->xsm_roles & XSM_HW_CTRL);
+
+    return hardware_domain;
+}
+
+
 struct domain *get_domain_by_id(domid_t dom)
 {
     struct domain *d;
diff --git a/xen/common/event_channel.c b/xen/common/event_channel.c
index 5c987096d9..775f7aa00c 100644
--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -904,7 +904,8 @@ void send_global_virq(uint32_t virq)
 {
     ASSERT(virq_is_global(virq));
 
-    send_guest_global_virq(global_virq_handlers[virq] ?: hardware_domain, virq);
+    send_guest_global_virq(
+        global_virq_handlers[virq] ?: get_hardware_domain(), virq);
 }
 
 int set_global_virq_handler(struct domain *d, uint32_t virq)
diff --git a/xen/common/kexec.c b/xen/common/kexec.c
index 2d1d1ce205..f36d3f880c 100644
--- a/xen/common/kexec.c
+++ b/xen/common/kexec.c
@@ -903,7 +903,7 @@ static int kexec_load_slot(struct kexec_image *kimage)
 static uint16_t kexec_load_v1_arch(void)
 {
 #ifdef CONFIG_X86
-    return is_pv_32bit_domain(hardware_domain) ? EM_386 : EM_X86_64;
+    return is_pv_32bit_domain(get_hardware_domain()) ? EM_386 : EM_X86_64;
 #else
     return EM_NONE;
 #endif
diff --git a/xen/common/keyhandler.c b/xen/common/keyhandler.c
index 8b9f378371..c22d02dea7 100644
--- a/xen/common/keyhandler.c
+++ b/xen/common/keyhandler.c
@@ -228,12 +228,12 @@ static void dump_hwdom_registers(unsigned char key)
 {
     struct vcpu *v;
 
-    if ( hardware_domain == NULL )
+    if ( !is_hardware_domain_started() )
         return;
 
     printk("'%c' pressed -> dumping Dom0's registers\n", key);
 
-    for_each_vcpu ( hardware_domain, v )
+    for_each_vcpu ( get_hardware_domain(), v )
     {
         if ( alt_key_handling && softirq_pending(smp_processor_id()) )
         {
diff --git a/xen/common/shutdown.c b/xen/common/shutdown.c
index abde48aa4c..a8f475cc6f 100644
--- a/xen/common/shutdown.c
+++ b/xen/common/shutdown.c
@@ -32,43 +32,45 @@ static void noreturn maybe_reboot(void)
 
 void hwdom_shutdown(u8 reason)
 {
+    struct domain *hwdom = get_hardware_domain();
+
     switch ( reason )
     {
     case SHUTDOWN_poweroff:
         printk("Hardware Dom%u halted: halting machine\n",
-               hardware_domain->domain_id);
+               hwdom->domain_id);
         machine_halt();
         break; /* not reached */
 
     case SHUTDOWN_crash:
         debugger_trap_immediate();
-        printk("Hardware Dom%u crashed: ", hardware_domain->domain_id);
+        printk("Hardware Dom%u crashed: ", hwdom->domain_id);
         kexec_crash(CRASHREASON_HWDOM);
         maybe_reboot();
         break; /* not reached */
 
     case SHUTDOWN_reboot:
         printk("Hardware Dom%u shutdown: rebooting machine\n",
-               hardware_domain->domain_id);
+               hwdom->domain_id);
         machine_restart(0);
         break; /* not reached */
 
     case SHUTDOWN_watchdog:
         printk("Hardware Dom%u shutdown: watchdog rebooting machine\n",
-               hardware_domain->domain_id);
+               hwdom->domain_id);
         kexec_crash(CRASHREASON_WATCHDOG);
         machine_restart(0);
         break; /* not reached */
 
     case SHUTDOWN_soft_reset:
         printk("Hardware domain %d did unsupported soft reset, rebooting.\n",
-               hardware_domain->domain_id);
+               hwdom->domain_id);
         machine_restart(0);
         break; /* not reached */
 
     default:
         printk("Hardware Dom%u shutdown (unknown reason %u): ",
-               hardware_domain->domain_id, reason);
+               hwdom->domain_id, reason);
         maybe_reboot();
         break; /* not reached */
     }
diff --git a/xen/common/vm_event.c b/xen/common/vm_event.c
index 103d0a207f..58cfcea056 100644
--- a/xen/common/vm_event.c
+++ b/xen/common/vm_event.c
@@ -577,6 +577,7 @@ void vm_event_cleanup(struct domain *d)
 int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
 {
     int rc;
+    struct domain *hwdom = get_hardware_domain();
 
     if ( vec->op == XEN_VM_EVENT_GET_VERSION )
     {
@@ -624,7 +625,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
         {
             rc = -EOPNOTSUPP;
             /* hvm fixme: p2m_is_foreign types need addressing */
-            if ( is_hvm_domain(hardware_domain) )
+            if ( is_hvm_domain(hwdom) )
                 break;
 
             rc = -ENODEV;
@@ -717,7 +718,7 @@ int vm_event_domctl(struct domain *d, struct xen_domctl_vm_event_op *vec)
         case XEN_VM_EVENT_ENABLE:
             rc = -EOPNOTSUPP;
             /* hvm fixme: p2m_is_foreign types need addressing */
-            if ( is_hvm_domain(hardware_domain) )
+            if ( is_hvm_domain(hwdom) )
                 break;
 
             rc = -ENODEV;
diff --git a/xen/common/xenoprof.c b/xen/common/xenoprof.c
index 4268c12e5d..bd8d17df1f 100644
--- a/xen/common/xenoprof.c
+++ b/xen/common/xenoprof.c
@@ -270,7 +270,8 @@ static int alloc_xenoprof_struct(
     bufsize = sizeof(struct xenoprof_buf);
     i = sizeof(struct event_log);
 #ifdef CONFIG_COMPAT
-    d->xenoprof->is_compat = is_pv_32bit_domain(is_passive ? hardware_domain : d);
+    d->xenoprof->is_compat =
+        is_pv_32bit_domain(is_passive ? get_hardware_domain() : d);
     if ( XENOPROF_COMPAT(d->xenoprof) )
     {
         bufsize = sizeof(struct compat_oprof_buf);
diff --git a/xen/drivers/char/ns16550.c b/xen/drivers/char/ns16550.c
index 16a73d0c0e..e957b4732d 100644
--- a/xen/drivers/char/ns16550.c
+++ b/xen/drivers/char/ns16550.c
@@ -566,7 +566,8 @@ static void __init ns16550_endboot(struct serial_port *port)
 
     if ( uart->remapped_io_base )
         return;
-    rv = ioports_deny_access(hardware_domain, uart->io_base, uart->io_base + 7);
+    rv = ioports_deny_access(get_hardware_domain(),
+            uart->io_base, uart->io_base + 7);
     if ( rv != 0 )
         BUG();
 #endif
diff --git a/xen/drivers/passthrough/pci.c b/xen/drivers/passthrough/pci.c
index f9669c6afa..dcb1472e7e 100644
--- a/xen/drivers/passthrough/pci.c
+++ b/xen/drivers/passthrough/pci.c
@@ -776,7 +776,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
     ret = 0;
     if ( !pdev->domain )
     {
-        pdev->domain = hardware_domain;
+        pdev->domain = get_hardware_domain();
         ret = iommu_add_device(pdev);
         if ( ret )
         {
@@ -784,7 +784,7 @@ int pci_add_device(u16 seg, u8 bus, u8 devfn,
             goto out;
         }
 
-        list_add(&pdev->domain_list, &hardware_domain->pdev_list);
+        list_add(&pdev->domain_list, &pdev->domain->pdev_list);
     }
     else
         iommu_enable_device(pdev);
@@ -860,7 +860,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
     /* De-assignment from dom_io should de-quarantine the device */
     target = ((pdev->quarantine || iommu_quarantine) &&
               pdev->domain != dom_io) ?
-        dom_io : hardware_domain;
+        dom_io : get_hardware_domain();
 
     while ( pdev->phantom_stride )
     {
@@ -879,7 +879,7 @@ static int deassign_device(struct domain *d, uint16_t seg, uint8_t bus,
     if ( ret )
         goto out;
 
-    if ( pdev->domain == hardware_domain  )
+    if ( is_hardware_domain(pdev->domain) )
         pdev->quarantine = false;
 
     pdev->fault.count = 0;
@@ -1403,7 +1403,7 @@ static int device_assigned(u16 seg, u8 bus, u8 devfn)
      * domain or dom_io then it must be assigned to a guest, or be
      * hidden (owned by dom_xen).
      */
-    else if ( pdev->domain != hardware_domain &&
+    else if ( !is_hardware_domain(pdev->domain) &&
               pdev->domain != dom_io )
         rc = -EBUSY;
 
@@ -1426,7 +1426,7 @@ static int assign_device(struct domain *d, u16 seg, u8 bus, u8 devfn, u32 flag)
     /* device_assigned() should already have cleared the device for assignment */
     ASSERT(pcidevs_locked());
     pdev = pci_get_pdev(seg, bus, devfn);
-    ASSERT(pdev && (pdev->domain == hardware_domain ||
+    ASSERT(pdev && (is_hardware_domain(pdev->domain) ||
                     pdev->domain == dom_io));
 
     if ( pdev->msix )
diff --git a/xen/drivers/passthrough/vtd/iommu.c b/xen/drivers/passthrough/vtd/iommu.c
index b2ca152e1f..580b329db9 100644
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -2358,7 +2358,7 @@ static int reassign_device_ownership(
      * can attempt to send arbitrary LAPIC/MSI messages. We are unprotected
      * by the root complex unless interrupt remapping is enabled.
      */
-    if ( (target != hardware_domain) && !iommu_intremap )
+    if ( (!is_hardware_domain(target)) && !iommu_intremap )
         untrusted_msi = true;
 
     /*
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 39681a5dff..55b7de93d2 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -475,6 +475,7 @@ struct domain
 #define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
 #define CLASSIC_DOM0_PRIVS (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | \
 		XSM_DEV_EMUL | XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)
+#define CLASSIC_HWDOM_PRIVS (XSM_HW_CTRL | XSM_DEV_EMUL)
 /* Any access for which XSM_DEV_EMUL is the restriction, XSM_DOM_SUPER is an override */
 #define DEV_EMU_PRIVS (XSM_DOM_SUPER | XSM_DEV_EMUL)
 /* Anytime there is an XSM_TARGET check, XSM_SELF also applies, and XSM_DOM_SUPER is an override */
@@ -731,6 +732,10 @@ static inline struct domain *rcu_lock_current_domain(void)
     return /*rcu_lock_domain*/(current->domain);
 }
 
+bool is_hardware_domain_started(void);
+
+struct domain *get_hardware_domain(void);
+
 struct domain *get_domain_by_id(domid_t dom);
 
 struct domain *get_pg_owner(domid_t domid);
@@ -1048,7 +1053,7 @@ static always_inline bool is_hardware_domain(const struct domain *d)
     if ( IS_ENABLED(CONFIG_PV_SHIM_EXCLUSIVE) )
         return false;
 
-    return evaluate_nospec(d == hardware_domain);
+    return evaluate_nospec(d->xsm_roles & XSM_HW_CTRL);
 }
 
 /* This check is for functionality specific to a control domain */
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:51:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:51:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127547.239737 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhem0-0002lE-VY; Fri, 14 May 2021 20:51:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127547.239737; Fri, 14 May 2021 20:51:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhem0-0002l7-SX; Fri, 14 May 2021 20:51:04 +0000
Received: by outflank-mailman (input) for mailman id 127547;
 Fri, 14 May 2021 20:51:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lhelz-0002i0-MV
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:51:03 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ce0d206f-6a8f-4b51-b457-e3240df6f070;
 Fri, 14 May 2021 20:51:02 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025171671569.192602179863;
 Fri, 14 May 2021 13:46:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ce0d206f-6a8f-4b51-b457-e3240df6f070
ARC-Seal: i=1; a=rsa-sha256; t=1621025172; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=WsV8LKMh3OE0XxwkuHnGxZE/mjjJwhRHXx0sf7r0WpsqPh/5jD4+vIOANX2AZEl2pR3nu8u4alXdBsxxj9sXZaDSWPqicVsMCmEmHyNU5MZXEagZ8YZH2pXTFKEM0rbabsDOg5GBxn+YTE7t4A0+FDGGNUkP9RrdLme2Nip859E=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025172; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=eKFqf1oaQMvGDC3mpSiXKObyKb9uq3sTtie25mvd3t8=; 
	b=D6hk11zEhoA2nUMWH/zPJ+WjRX4OX1vIwmt0xYZJFjBLVqDDnRfjmd7qnlhQSP1dBfiVMMT/25gO3Lj6sf67YbNZ04JAE00XeKH6EXWHmDamP8kFWKoxVutCW3kFDj6KWgYvZ1dWAayipVt/YOwW+kJKSmMEbhM6gmz9ISya2fU=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025172;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=eKFqf1oaQMvGDC3mpSiXKObyKb9uq3sTtie25mvd3t8=;
	b=B7SxaKsaMJZJHnDwgNrMMx7MrW2MddhBfor12b1YR1s4iRZZDO3ic96Tt7f1r19+
	H87sVQOt72ERh2atfFZ+Mq66N1CrLiDOkOWot5BCvTVI6u+I8SR5qfK7sWRmGoESEWc
	mAk1YcC5Q9sUnbjSUHnYj1GjWBembqNlTiBwcdpk=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 06/10] xsm-roles: convert the dummy system to roles
Date: Fri, 14 May 2021 16:54:33 -0400
Message-Id: <20210514205437.13661-7-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

The difference between XSM and non-XSM was whether the "dummy" policy was
invoked via direct calls or through function pointers. The "dummy" policy
enforced a set of rules that effectively defined a loose set of roles a
domain may hold. This patch builds on the work of replacing those rules with
well-defined roles, moving away from the pseudo "is or is not XSM"
distinction and formalizing the role checks as the core security framework.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/include/xen/sched.h |   9 -
 xen/include/xsm/roles.h |  70 ++++
 xen/include/xsm/xsm.h   | 689 +++++++++++++++++++++++++++-------------
 xen/xsm/xsm_core.c      |   4 +-
 4 files changed, 544 insertions(+), 228 deletions(-)
 create mode 100644 xen/include/xsm/roles.h

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 55b7de93d2..d84b047359 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -473,15 +473,6 @@ struct domain
 #define XSM_HW_CTRL   (1U<<8)  /* Hardware Control: domain with physical hardware access and its allocation for domain usage */
 #define XSM_HW_SUPER  (1U<<9)  /* Hardware Supervisor: domain that control allocated physical hardware */
 #define XSM_XENSTORE  (1U<<31) /* Xenstore: domain that can do privileged operations on xenstore */
-#define CLASSIC_DOM0_PRIVS (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | \
-		XSM_DEV_EMUL | XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)
-#define CLASSIC_HWDOM_PRIVS (XSM_HW_CTRL | XSM_DEV_EMUL)
-/* Any access for which XSM_DEV_EMUL is the restriction, XSM_DOM_SUPER is an override */
-#define DEV_EMU_PRIVS (XSM_DOM_SUPER | XSM_DEV_EMUL)
-/* Anytime there is an XSM_TARGET check, XSM_SELF also applies, and XSM_DOM_SUPER is an override */
-#define TARGET_PRIVS (XSM_TARGET | XSM_SELF | XSM_DOM_SUPER)
-/* Anytime there is an XSM_XENSTORE check, XSM_DOM_SUPER is an override */
-#define XENSTORE_PRIVS (XSM_XENSTORE | XSM_DOM_SUPER)
     uint32_t         xsm_roles;
 
     /* Which guest this guest has privileges on */
diff --git a/xen/include/xsm/roles.h b/xen/include/xsm/roles.h
new file mode 100644
index 0000000000..e6989fffa6
--- /dev/null
+++ b/xen/include/xsm/roles.h
@@ -0,0 +1,70 @@
+/*
+ *  This file contains the XSM roles.
+ *
+ *  This work is based on the original XSM dummy policy.
+ *
+ *  Author:  Daniel P. Smith, <dpsmith@apertussolutions.com>
+ *
+ *  This program is free software; you can redistribute it and/or modify
+ *  it under the terms of the GNU General Public License version 2,
+ *  as published by the Free Software Foundation.
+ */
+
+#ifndef __XSM_ROLES_H__
+#define __XSM_ROLES_H__
+
+#include <xen/sched.h>
+
+#define CLASSIC_DOM0_PRIVS (XSM_PLAT_CTRL | XSM_DOM_BUILD | XSM_DOM_SUPER | \
+		XSM_DEV_EMUL | XSM_HW_CTRL | XSM_HW_SUPER | XSM_XENSTORE)
+
+#define CLASSIC_HWDOM_PRIVS (XSM_HW_CTRL | XSM_DEV_EMUL)
+
+/* Any access for which XSM_DEV_EMUL is the restriction, XSM_DOM_SUPER is an override */
+#define DEV_EMU_PRIVS (XSM_DOM_SUPER | XSM_DEV_EMUL)
+
+/* Anytime there is an XSM_TARGET check, XSM_SELF also applies, and XSM_DOM_SUPER is an override */
+#define TARGET_PRIVS (XSM_TARGET | XSM_SELF | XSM_DOM_SUPER)
+
+/* Anytime there is an XSM_XENSTORE check, XSM_DOM_SUPER is an override */
+#define XENSTORE_PRIVS (XSM_XENSTORE | XSM_DOM_SUPER)
+
+typedef uint32_t xsm_role_t;
+
+static always_inline int xsm_validate_role(
+    xsm_role_t allowed, struct domain *src, struct domain *target)
+{
+    if ( allowed & XSM_NONE )
+        return 0;
+
+    if ( (allowed & XSM_SELF) && ((!target) || (src == target)) )
+        return 0;
+
+    if ( (allowed & XSM_TARGET) && ((target) && (src->target == target)) )
+        return 0;
+
+    /* XSM_DEV_EMUL is the only domain role with a condition, i.e. the
+     * role only applies to a domain's target.
+     */
+    if ( (allowed & XSM_DEV_EMUL) && (src->xsm_roles & XSM_DEV_EMUL)
+        && (target) && (src->target == target) )
+        return 0;
+
+    /* Mask out SELF, TARGET, and DEV_EMUL as they have been handled */
+    allowed &= ~(XSM_SELF | XSM_TARGET | XSM_DEV_EMUL);
+
+    /* Checks if the domain has one of the remaining roles set on it:
+     *      XSM_PLAT_CTRL
+     *      XSM_DOM_BUILD
+     *      XSM_DOM_SUPER
+     *      XSM_HW_CTRL
+     *      XSM_HW_SUPER
+     *      XSM_XENSTORE
+     */
+    if (src->xsm_roles & allowed)
+        return 0;
+
+    return -EPERM;
+}
+
+#endif /* __XSM_ROLES_H__ */
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index b50d8a711f..50f2f547dc 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -16,8 +16,12 @@
 #define __XSM_H__
 
 #include <xen/sched.h>
+#include <xsm/roles.h>
 #include <xen/multiboot.h>
 
+#include <public/version.h>
+#include <public/hvm/params.h>
+
 typedef void xsm_op_t;
 DEFINE_XEN_GUEST_HANDLE(xsm_op_t);
 
@@ -30,8 +34,6 @@ typedef u32 xsm_magic_t;
 #define XSM_MAGIC 0x0
 #endif
 
-typedef uint32_t xsm_default_t;
-
 struct xsm_operations {
     void (*security_domaininfo) (struct domain *d,
                                         struct xen_domctl_getdomaininfo *info);
@@ -178,564 +180,797 @@ struct xsm_operations {
 #endif
 };
 
-#ifdef CONFIG_XSM
-
 extern struct xsm_operations *xsm_ops;
 
-#ifndef XSM_NO_WRAPPERS
+#define CALL_XSM_OP(op, ...)                            \
+    do {                                                \
+        if ( xsm_ops && xsm_ops->op )                   \
+            return xsm_ops->op(__VA_ARGS__);            \
+    } while ( 0 )
+
+#define CALL_XSM_OP_NORET(op, ...)                      \
+    do {                                                \
+        if ( xsm_ops && xsm_ops->op ) {                 \
+            xsm_ops->op(__VA_ARGS__);                   \
+            return;                                     \
+        }                                               \
+    } while ( 0 )
+
+#define XSM_ALLOWED_ROLES(def)                          \
+    do {                                                \
+        BUG_ON( !((def) & role) );                      \
+    } while ( 0 )
 
 static inline void xsm_security_domaininfo (struct domain *d,
                                         struct xen_domctl_getdomaininfo *info)
 {
-    xsm_ops->security_domaininfo(d, info);
+    CALL_XSM_OP_NORET(security_domaininfo, d, info);
+
+    return;
 }
 
-static inline int xsm_domain_create (xsm_default_t def, struct domain *d, u32 ssidref)
+static inline int xsm_domain_create (xsm_role_t role, struct domain *d, u32 ssidref)
 {
-    return xsm_ops->domain_create(d, ssidref);
+    CALL_XSM_OP(domain_create, d, ssidref);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_getdomaininfo (xsm_default_t def, struct domain *d)
+static inline int xsm_getdomaininfo (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->getdomaininfo(d);
+    CALL_XSM_OP(getdomaininfo, d);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_domctl_scheduler_op (xsm_default_t def, struct domain *d, int cmd)
+static inline int xsm_domctl_scheduler_op (xsm_role_t role, struct domain *d, int cmd)
 {
-    return xsm_ops->domctl_scheduler_op(d, cmd);
+    CALL_XSM_OP(domctl_scheduler_op, d, cmd);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_sysctl_scheduler_op (xsm_default_t def, int cmd)
+static inline int xsm_sysctl_scheduler_op (xsm_role_t role, int cmd)
 {
-    return xsm_ops->sysctl_scheduler_op(cmd);
+    CALL_XSM_OP(sysctl_scheduler_op, cmd);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_set_target (xsm_default_t def, struct domain *d, struct domain *e)
+static inline int xsm_set_target (xsm_role_t role, struct domain *d, struct domain *e)
 {
-    return xsm_ops->set_target(d, e);
+    CALL_XSM_OP(set_target, d, e);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_domctl (xsm_default_t def, struct domain *d, int cmd)
+static inline int xsm_domctl (xsm_role_t role, struct domain *d, int cmd)
 {
-    return xsm_ops->domctl(d, cmd);
+    CALL_XSM_OP(domctl, d, cmd);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS | XENSTORE_PRIVS | XSM_DOM_SUPER);
+    switch ( cmd )
+    {
+    case XEN_DOMCTL_ioport_mapping:
+    case XEN_DOMCTL_memory_mapping:
+    case XEN_DOMCTL_bind_pt_irq:
+    case XEN_DOMCTL_unbind_pt_irq:
+        return xsm_validate_role(DEV_EMU_PRIVS, current->domain, d);
+    case XEN_DOMCTL_getdomaininfo:
+        return xsm_validate_role(XENSTORE_PRIVS, current->domain, d);
+    default:
+        return xsm_validate_role(XSM_DOM_SUPER, current->domain, d);
+    }
 }
 
-static inline int xsm_sysctl (xsm_default_t def, int cmd)
+static inline int xsm_sysctl (xsm_role_t role, int cmd)
 {
-    return xsm_ops->sysctl(cmd);
+    CALL_XSM_OP(sysctl, cmd);
+    XSM_ALLOWED_ROLES(XSM_PLAT_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_readconsole (xsm_default_t def, uint32_t clear)
+static inline int xsm_readconsole (xsm_role_t role, uint32_t clear)
 {
-    return xsm_ops->readconsole(clear);
+    CALL_XSM_OP(readconsole, clear);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_evtchn_unbound (xsm_default_t def, struct domain *d1, struct evtchn *chn,
+static inline int xsm_evtchn_unbound (xsm_role_t role, struct domain *d1, struct evtchn *chn,
                                                                     domid_t id2)
 {
-    return xsm_ops->evtchn_unbound(d1, chn, id2);
+    CALL_XSM_OP(evtchn_unbound, d1, chn, id2);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, current->domain, d1);
 }
 
-static inline int xsm_evtchn_interdomain (xsm_default_t def, struct domain *d1,
+static inline int xsm_evtchn_interdomain (xsm_role_t role, struct domain *d1,
                 struct evtchn *chan1, struct domain *d2, struct evtchn *chan2)
 {
-    return xsm_ops->evtchn_interdomain(d1, chan1, d2, chan2);
+    CALL_XSM_OP(evtchn_interdomain, d1, chan1, d2, chan2);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, d1, d2);
 }
 
 static inline void xsm_evtchn_close_post (struct evtchn *chn)
 {
-    xsm_ops->evtchn_close_post(chn);
+    CALL_XSM_OP_NORET(evtchn_close_post, chn);
 }
 
-static inline int xsm_evtchn_send (xsm_default_t def, struct domain *d, struct evtchn *chn)
+static inline int xsm_evtchn_send (xsm_role_t role, struct domain *d, struct evtchn *chn)
 {
-    return xsm_ops->evtchn_send(d, chn);
+    CALL_XSM_OP(evtchn_send, d, chn);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, d, NULL);
 }
 
-static inline int xsm_evtchn_status (xsm_default_t def, struct domain *d, struct evtchn *chn)
+static inline int xsm_evtchn_status (xsm_role_t role, struct domain *d, struct evtchn *chn)
 {
-    return xsm_ops->evtchn_status(d, chn);
+    CALL_XSM_OP(evtchn_status, d, chn);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_evtchn_reset (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_evtchn_reset (xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->evtchn_reset(d1, d2);
+    CALL_XSM_OP(evtchn_reset, d1, d2);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_grant_mapref (xsm_default_t def, struct domain *d1, struct domain *d2,
+static inline int xsm_grant_mapref (xsm_role_t role, struct domain *d1, struct domain *d2,
                                                                 uint32_t flags)
 {
-    return xsm_ops->grant_mapref(d1, d2, flags);
+    CALL_XSM_OP(grant_mapref, d1, d2, flags);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_grant_unmapref (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_unmapref (xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_unmapref(d1, d2);
+    CALL_XSM_OP(grant_unmapref, d1, d2);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_grant_setup (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_setup (xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_setup(d1, d2);
+    CALL_XSM_OP(grant_setup, d1, d2);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_grant_transfer (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_transfer (xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_transfer(d1, d2);
+    CALL_XSM_OP(grant_transfer, d1, d2);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_grant_copy (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_copy (xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_copy(d1, d2);
+    CALL_XSM_OP(grant_copy, d1, d2);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_grant_query_size (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_grant_query_size (xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->grant_query_size(d1, d2);
+    CALL_XSM_OP(grant_query_size, d1, d2);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d1, d2);
 }
 
 static inline int xsm_alloc_security_domain (struct domain *d)
 {
-    return xsm_ops->alloc_security_domain(d);
+    CALL_XSM_OP(alloc_security_domain, d);
+    return 0;
 }
 
 static inline void xsm_free_security_domain (struct domain *d)
 {
-    xsm_ops->free_security_domain(d);
+    CALL_XSM_OP_NORET(free_security_domain, d);
 }
 
 static inline int xsm_alloc_security_evtchns(
     struct evtchn chn[], unsigned int nr)
 {
-    return xsm_ops->alloc_security_evtchns(chn, nr);
+    CALL_XSM_OP(alloc_security_evtchns, chn, nr);
+    return 0;
 }
 
 static inline void xsm_free_security_evtchns(
     struct evtchn chn[], unsigned int nr)
 {
-    xsm_ops->free_security_evtchns(chn, nr);
+    CALL_XSM_OP_NORET(free_security_evtchns, chn, nr);
 }
 
 static inline char *xsm_show_security_evtchn (struct domain *d, const struct evtchn *chn)
 {
-    return xsm_ops->show_security_evtchn(d, chn);
+    CALL_XSM_OP(show_security_evtchn, d, chn);
+    return NULL;
 }
 
-static inline int xsm_init_hardware_domain (xsm_default_t def, struct domain *d)
+static inline int xsm_init_hardware_domain (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->init_hardware_domain(d);
+    CALL_XSM_OP(init_hardware_domain, d);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_get_pod_target (xsm_default_t def, struct domain *d)
+static inline int xsm_get_pod_target (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->get_pod_target(d);
+    CALL_XSM_OP(get_pod_target, d);
+    XSM_ALLOWED_ROLES(XSM_DOM_SUPER);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_set_pod_target (xsm_default_t def, struct domain *d)
+static inline int xsm_set_pod_target (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->set_pod_target(d);
+    CALL_XSM_OP(set_pod_target, d);
+    XSM_ALLOWED_ROLES(XSM_DOM_SUPER);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_memory_exchange (xsm_default_t def, struct domain *d)
+static inline int xsm_memory_exchange (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->memory_exchange(d);
+    CALL_XSM_OP(memory_exchange, d);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_memory_adjust_reservation (xsm_default_t def, struct domain *d1, struct
+static inline int xsm_memory_adjust_reservation (xsm_role_t role, struct domain *d1, struct
                                                                     domain *d2)
 {
-    return xsm_ops->memory_adjust_reservation(d1, d2);
+    CALL_XSM_OP(memory_adjust_reservation, d1, d2);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_memory_stat_reservation (xsm_default_t def, struct domain *d1,
+static inline int xsm_memory_stat_reservation (xsm_role_t role, struct domain *d1,
                                                             struct domain *d2)
 {
-    return xsm_ops->memory_stat_reservation(d1, d2);
+    CALL_XSM_OP(memory_stat_reservation, d1, d2);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_memory_pin_page(xsm_default_t def, struct domain *d1, struct domain *d2,
+static inline int xsm_memory_pin_page(xsm_role_t role, struct domain *d1, struct domain *d2,
                                       struct page_info *page)
 {
-    return xsm_ops->memory_pin_page(d1, d2, page);
+    CALL_XSM_OP(memory_pin_page, d1, d2, page);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_add_to_physmap(xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_add_to_physmap(xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->add_to_physmap(d1, d2);
+    CALL_XSM_OP(add_to_physmap, d1, d2);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_remove_from_physmap(xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_remove_from_physmap(xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->remove_from_physmap(d1, d2);
+    CALL_XSM_OP(remove_from_physmap, d1, d2);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d1, d2);
 }
 
-static inline int xsm_map_gmfn_foreign (xsm_default_t def, struct domain *d, struct domain *t)
+static inline int xsm_map_gmfn_foreign (xsm_role_t role, struct domain *d, struct domain *t)
 {
-    return xsm_ops->map_gmfn_foreign(d, t);
+    CALL_XSM_OP(map_gmfn_foreign, d, t);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d, t);
 }
 
-static inline int xsm_claim_pages(xsm_default_t def, struct domain *d)
+static inline int xsm_claim_pages(xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->claim_pages(d);
+    CALL_XSM_OP(claim_pages, d);
+    XSM_ALLOWED_ROLES(XSM_DOM_SUPER);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_console_io (xsm_default_t def, struct domain *d, int cmd)
+static inline int xsm_console_io (xsm_role_t role, struct domain *d, int cmd)
 {
-    return xsm_ops->console_io(d, cmd);
+    CALL_XSM_OP(console_io, d, cmd);
+    XSM_ALLOWED_ROLES(XSM_NONE | XSM_DOM_SUPER);
+    if ( d->is_console )
+        return xsm_validate_role(XSM_NONE, d, NULL);
+#ifdef CONFIG_VERBOSE_DEBUG
+    if ( cmd == CONSOLEIO_write )
+        return xsm_validate_role(XSM_NONE, d, NULL);
+#endif
+    return xsm_validate_role(XSM_DOM_SUPER, d, NULL);
 }
 
-static inline int xsm_profile (xsm_default_t def, struct domain *d, int op)
+static inline int xsm_profile (xsm_role_t role, struct domain *d, int op)
 {
-    return xsm_ops->profile(d, op);
+    CALL_XSM_OP(profile, d, op);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, d, NULL);
 }
 
-static inline int xsm_kexec (xsm_default_t def)
+static inline int xsm_kexec (xsm_role_t role)
 {
-    return xsm_ops->kexec();
+    CALL_XSM_OP(kexec);
+    XSM_ALLOWED_ROLES(XSM_PLAT_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_schedop_shutdown (xsm_default_t def, struct domain *d1, struct domain *d2)
+static inline int xsm_schedop_shutdown (xsm_role_t role, struct domain *d1, struct domain *d2)
 {
-    return xsm_ops->schedop_shutdown(d1, d2);
+    CALL_XSM_OP(schedop_shutdown, d1, d2);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, d1, d2);
 }
 
 static inline char *xsm_show_irq_sid (int irq)
 {
-    return xsm_ops->show_irq_sid(irq);
+    CALL_XSM_OP(show_irq_sid, irq);
+    return NULL;
 }
 
-static inline int xsm_map_domain_pirq (xsm_default_t def, struct domain *d)
+static inline int xsm_map_domain_pirq (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->map_domain_pirq(d);
+    CALL_XSM_OP(map_domain_pirq, d);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_map_domain_irq (xsm_default_t def, struct domain *d, int irq, void *data)
+static inline int xsm_map_domain_irq (xsm_role_t role, struct domain *d, int irq, void *data)
 {
-    return xsm_ops->map_domain_irq(d, irq, data);
+    CALL_XSM_OP(map_domain_irq, d, irq, data);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_unmap_domain_pirq (xsm_default_t def, struct domain *d)
+static inline int xsm_unmap_domain_pirq (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->unmap_domain_pirq(d);
+    CALL_XSM_OP(unmap_domain_pirq, d);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_unmap_domain_irq (xsm_default_t def, struct domain *d, int irq, void *data)
+static inline int xsm_unmap_domain_irq (xsm_role_t role, struct domain *d, int irq, void *data)
 {
-    return xsm_ops->unmap_domain_irq(d, irq, data);
+    CALL_XSM_OP(unmap_domain_irq, d, irq, data);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_bind_pt_irq(xsm_default_t def, struct domain *d,
+static inline int xsm_bind_pt_irq(xsm_role_t role, struct domain *d,
                                   struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_ops->bind_pt_irq(d, bind);
+    CALL_XSM_OP(bind_pt_irq, d, bind);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_unbind_pt_irq(xsm_default_t def, struct domain *d,
+static inline int xsm_unbind_pt_irq(xsm_role_t role, struct domain *d,
                                     struct xen_domctl_bind_pt_irq *bind)
 {
-    return xsm_ops->unbind_pt_irq(d, bind);
+    CALL_XSM_OP(unbind_pt_irq, d, bind);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_irq_permission (xsm_default_t def, struct domain *d, int pirq, uint8_t allow)
+static inline int xsm_irq_permission (xsm_role_t role, struct domain *d, int pirq, uint8_t allow)
 {
-    return xsm_ops->irq_permission(d, pirq, allow);
+    CALL_XSM_OP(irq_permission, d, pirq, allow);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_iomem_permission (xsm_default_t def, struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+static inline int xsm_iomem_permission (xsm_role_t role, struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    return xsm_ops->iomem_permission(d, s, e, allow);
+    CALL_XSM_OP(iomem_permission, d, s, e, allow);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_iomem_mapping (xsm_default_t def, struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
+static inline int xsm_iomem_mapping (xsm_role_t role, struct domain *d, uint64_t s, uint64_t e, uint8_t allow)
 {
-    return xsm_ops->iomem_mapping(d, s, e, allow);
+    CALL_XSM_OP(iomem_mapping, d, s, e, allow);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_pci_config_permission (xsm_default_t def, struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
+static inline int xsm_pci_config_permission (xsm_role_t role, struct domain *d, uint32_t machine_bdf, uint16_t start, uint16_t end, uint8_t access)
 {
-    return xsm_ops->pci_config_permission(d, machine_bdf, start, end, access);
+    CALL_XSM_OP(pci_config_permission, d, machine_bdf, start, end, access);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
-static inline int xsm_get_device_group(xsm_default_t def, uint32_t machine_bdf)
+static inline int xsm_get_device_group(xsm_role_t role, uint32_t machine_bdf)
 {
-    return xsm_ops->get_device_group(machine_bdf);
+    CALL_XSM_OP(get_device_group, machine_bdf);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_assign_device(xsm_default_t def, struct domain *d, uint32_t machine_bdf)
+static inline int xsm_assign_device(xsm_role_t role, struct domain *d, uint32_t machine_bdf)
 {
-    return xsm_ops->assign_device(d, machine_bdf);
+    CALL_XSM_OP(assign_device, d, machine_bdf);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_deassign_device(xsm_default_t def, struct domain *d, uint32_t machine_bdf)
+static inline int xsm_deassign_device(xsm_role_t role, struct domain *d, uint32_t machine_bdf)
 {
-    return xsm_ops->deassign_device(d, machine_bdf);
+    CALL_XSM_OP(deassign_device, d, machine_bdf);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 #endif /* HAS_PASSTHROUGH && HAS_PCI) */
 
 #if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
-static inline int xsm_assign_dtdevice(xsm_default_t def, struct domain *d,
+static inline int xsm_assign_dtdevice(xsm_role_t role, struct domain *d,
                                       const char *dtpath)
 {
-    return xsm_ops->assign_dtdevice(d, dtpath);
+    CALL_XSM_OP(assign_dtdevice, d, dtpath);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_deassign_dtdevice(xsm_default_t def, struct domain *d,
+static inline int xsm_deassign_dtdevice(xsm_role_t role, struct domain *d,
                                         const char *dtpath)
 {
-    return xsm_ops->deassign_dtdevice(d, dtpath);
+    CALL_XSM_OP(deassign_dtdevice, d, dtpath);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
 #endif /* HAS_PASSTHROUGH && HAS_DEVICE_TREE */
 
-static inline int xsm_resource_plug_pci (xsm_default_t def, uint32_t machine_bdf)
+static inline int xsm_resource_plug_pci (xsm_role_t role, uint32_t machine_bdf)
 {
-    return xsm_ops->resource_plug_pci(machine_bdf);
+    CALL_XSM_OP(resource_plug_pci, machine_bdf);
+    XSM_ALLOWED_ROLES(XSM_HW_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_resource_unplug_pci (xsm_default_t def, uint32_t machine_bdf)
+static inline int xsm_resource_unplug_pci (xsm_role_t role, uint32_t machine_bdf)
 {
-    return xsm_ops->resource_unplug_pci(machine_bdf);
+    CALL_XSM_OP(resource_unplug_pci, machine_bdf);
+    XSM_ALLOWED_ROLES(XSM_HW_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_resource_plug_core (xsm_default_t def)
+static inline int xsm_resource_plug_core (xsm_role_t role)
 {
-    return xsm_ops->resource_plug_core();
+    CALL_XSM_OP(resource_plug_core);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_resource_unplug_core (xsm_default_t def)
+static inline int xsm_resource_unplug_core (xsm_role_t role)
 {
-    return xsm_ops->resource_unplug_core();
+    CALL_XSM_OP(resource_unplug_core);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_resource_setup_pci (xsm_default_t def, uint32_t machine_bdf)
+static inline int xsm_resource_setup_pci (xsm_role_t role, uint32_t machine_bdf)
 {
-    return xsm_ops->resource_setup_pci(machine_bdf);
+    CALL_XSM_OP(resource_setup_pci, machine_bdf);
+    XSM_ALLOWED_ROLES(XSM_HW_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_resource_setup_gsi (xsm_default_t def, int gsi)
+static inline int xsm_resource_setup_gsi (xsm_role_t role, int gsi)
 {
-    return xsm_ops->resource_setup_gsi(gsi);
+    CALL_XSM_OP(resource_setup_gsi, gsi);
+    XSM_ALLOWED_ROLES(XSM_HW_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_resource_setup_misc (xsm_default_t def)
+static inline int xsm_resource_setup_misc (xsm_role_t role)
 {
-    return xsm_ops->resource_setup_misc();
+    CALL_XSM_OP(resource_setup_misc);
+    XSM_ALLOWED_ROLES(XSM_HW_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_page_offline(xsm_default_t def, uint32_t cmd)
+static inline int xsm_page_offline(xsm_role_t role, uint32_t cmd)
 {
-    return xsm_ops->page_offline(cmd);
+    CALL_XSM_OP(page_offline, cmd);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_hypfs_op(xsm_default_t def)
+static inline int xsm_hypfs_op(xsm_role_t role)
 {
-    return xsm_ops->hypfs_op();
+    CALL_XSM_OP(hypfs_op);
+    XSM_ALLOWED_ROLES(XSM_PLAT_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
 static inline long xsm_do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
-    return xsm_ops->do_xsm_op(op);
+    CALL_XSM_OP(do_xsm_op, op);
+    return -ENOSYS;
 }
 
 #ifdef CONFIG_COMPAT
 static inline int xsm_do_compat_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
-    return xsm_ops->do_compat_op(op);
+    CALL_XSM_OP(do_compat_op, op);
+    return -ENOSYS;
 }
 #endif
 
-static inline int xsm_hvm_param (xsm_default_t def, struct domain *d, unsigned long op)
+static inline int xsm_hvm_param (xsm_role_t role, struct domain *d, unsigned long op)
 {
-    return xsm_ops->hvm_param(d, op);
+    CALL_XSM_OP(hvm_param, d, op);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_hvm_control(xsm_default_t def, struct domain *d, unsigned long op)
+static inline int xsm_hvm_control(xsm_role_t role, struct domain *d, unsigned long op)
 {
-    return xsm_ops->hvm_control(d, op);
+    CALL_XSM_OP(hvm_control, d, op);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_hvm_param_altp2mhvm (xsm_default_t def, struct domain *d)
+static inline int xsm_hvm_param_altp2mhvm (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->hvm_param_altp2mhvm(d);
+    CALL_XSM_OP(hvm_param_altp2mhvm, d);
+    XSM_ALLOWED_ROLES(XSM_DOM_SUPER);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_hvm_altp2mhvm_op (xsm_default_t def, struct domain *d, uint64_t mode, uint32_t op)
+static inline int xsm_hvm_altp2mhvm_op (xsm_role_t role, struct domain *d, uint64_t mode, uint32_t op)
 {
-    return xsm_ops->hvm_altp2mhvm_op(d, mode, op);
+    CALL_XSM_OP(hvm_altp2mhvm_op, d, mode, op);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS | DEV_EMU_PRIVS);
+
+    switch ( mode )
+    {
+    case XEN_ALTP2M_mixed:
+        return xsm_validate_role(TARGET_PRIVS, current->domain, d);
+    case XEN_ALTP2M_external:
+        return xsm_validate_role(DEV_EMU_PRIVS, current->domain, d);
+    case XEN_ALTP2M_limited:
+        if ( op == HVMOP_altp2m_vcpu_enable_notify )
+            return xsm_validate_role(TARGET_PRIVS, current->domain, d);
+        return xsm_validate_role(DEV_EMU_PRIVS, current->domain, d);
+    default:
+        return -EPERM;
+    }
 }
 
-static inline int xsm_get_vnumainfo (xsm_default_t def, struct domain *d)
+static inline int xsm_get_vnumainfo (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->get_vnumainfo(d);
+    CALL_XSM_OP(get_vnumainfo, d);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_vm_event_control (xsm_default_t def, struct domain *d, int mode, int op)
+static inline int xsm_vm_event_control (xsm_role_t role, struct domain *d, int mode, int op)
 {
-    return xsm_ops->vm_event_control(d, mode, op);
+    CALL_XSM_OP(vm_event_control, d, mode, op);
+    XSM_ALLOWED_ROLES(XSM_DOM_SUPER);
+    return xsm_validate_role(role, current->domain, d);
 }
 
 #ifdef CONFIG_MEM_ACCESS
-static inline int xsm_mem_access (xsm_default_t def, struct domain *d)
+static inline int xsm_mem_access (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->mem_access(d);
+    CALL_XSM_OP(mem_access, d);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 #endif
 
 #ifdef CONFIG_HAS_MEM_PAGING
-static inline int xsm_mem_paging (xsm_default_t def, struct domain *d)
+static inline int xsm_mem_paging (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->mem_paging(d);
+    CALL_XSM_OP(mem_paging, d);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 #endif
 
 #ifdef CONFIG_MEM_SHARING
-static inline int xsm_mem_sharing (xsm_default_t def, struct domain *d)
+static inline int xsm_mem_sharing (xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->mem_sharing(d);
+    CALL_XSM_OP(mem_sharing, d);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 #endif
 
-static inline int xsm_platform_op (xsm_default_t def, uint32_t op)
+static inline int xsm_platform_op (xsm_role_t role, uint32_t op)
 {
-    return xsm_ops->platform_op(op);
+    CALL_XSM_OP(platform_op, op);
+    XSM_ALLOWED_ROLES(XSM_PLAT_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
 #ifdef CONFIG_X86
-static inline int xsm_do_mca(xsm_default_t def)
-{
-    return xsm_ops->do_mca();
-}
-
-static inline int xsm_shadow_control (xsm_default_t def, struct domain *d, uint32_t op)
+static inline int xsm_do_mca(xsm_role_t role)
 {
-    return xsm_ops->shadow_control(d, op);
+    CALL_XSM_OP(do_mca);
+    XSM_ALLOWED_ROLES(XSM_PLAT_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_mem_sharing_op (xsm_default_t def, struct domain *d, struct domain *cd, int op)
+static inline int xsm_shadow_control (xsm_role_t role, struct domain *d, uint32_t op)
 {
-    return xsm_ops->mem_sharing_op(d, cd, op);
+    CALL_XSM_OP(shadow_control, d, op);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_apic (xsm_default_t def, struct domain *d, int cmd)
+static inline int xsm_mem_sharing_op (xsm_role_t role, struct domain *d, struct domain *cd, int op)
 {
-    return xsm_ops->apic(d, cmd);
+    CALL_XSM_OP(mem_sharing_op, d, cd, op);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, cd);
 }
 
-static inline int xsm_memtype (xsm_default_t def, uint32_t access)
+static inline int xsm_apic (xsm_role_t role, struct domain *d, int cmd)
 {
-    return xsm_ops->memtype(access);
+    CALL_XSM_OP(apic, d, cmd);
+    XSM_ALLOWED_ROLES(XSM_HW_CTRL);
+    return xsm_validate_role(role, d, NULL);
 }
 
-static inline int xsm_machine_memory_map(xsm_default_t def)
+static inline int xsm_machine_memory_map(xsm_role_t role)
 {
-    return xsm_ops->machine_memory_map();
+    CALL_XSM_OP(machine_memory_map);
+    XSM_ALLOWED_ROLES(XSM_PLAT_CTRL);
+    return xsm_validate_role(role, current->domain, NULL);
 }
 
-static inline int xsm_domain_memory_map(xsm_default_t def, struct domain *d)
+static inline int xsm_domain_memory_map(xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->domain_memory_map(d);
+    CALL_XSM_OP(domain_memory_map, d);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_mmu_update (xsm_default_t def, struct domain *d, struct domain *t,
+static inline int xsm_mmu_update (xsm_role_t role, struct domain *d, struct domain *t,
                                   struct domain *f, uint32_t flags)
 {
-    return xsm_ops->mmu_update(d, t, f, flags);
+    int rc = 0;
+
+    CALL_XSM_OP(mmu_update, d, t, f, flags);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    if ( f != dom_io )
+        rc = xsm_validate_role(role, d, f);
+    if ( evaluate_nospec(t) && !rc )
+        rc = xsm_validate_role(role, d, t);
+    return rc;
 }
 
-static inline int xsm_mmuext_op (xsm_default_t def, struct domain *d, struct domain *f)
+static inline int xsm_mmuext_op (xsm_role_t role, struct domain *d, struct domain *f)
 {
-    return xsm_ops->mmuext_op(d, f);
+    CALL_XSM_OP(mmuext_op, d, f);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d, f);
 }
 
-static inline int xsm_update_va_mapping(xsm_default_t def, struct domain *d, struct domain *f,
+static inline int xsm_update_va_mapping(xsm_role_t role, struct domain *d, struct domain *f,
                                                             l1_pgentry_t pte)
 {
-    return xsm_ops->update_va_mapping(d, f, pte);
+    CALL_XSM_OP(update_va_mapping, d, f, pte);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d, f);
 }
 
-static inline int xsm_priv_mapping(xsm_default_t def, struct domain *d, struct domain *t)
+static inline int xsm_priv_mapping(xsm_role_t role, struct domain *d, struct domain *t)
 {
-    return xsm_ops->priv_mapping(d, t);
+    CALL_XSM_OP(priv_mapping, d, t);
+    XSM_ALLOWED_ROLES(TARGET_PRIVS);
+    return xsm_validate_role(role, d, t);
 }
 
-static inline int xsm_ioport_permission (xsm_default_t def, struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+static inline int xsm_ioport_permission (xsm_role_t role, struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    return xsm_ops->ioport_permission(d, s, e, allow);
+    CALL_XSM_OP(ioport_permission, d, s, e, allow);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_ioport_mapping (xsm_default_t def, struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
+static inline int xsm_ioport_mapping (xsm_role_t role, struct domain *d, uint32_t s, uint32_t e, uint8_t allow)
 {
-    return xsm_ops->ioport_mapping(d, s, e, allow);
+    CALL_XSM_OP(ioport_mapping, d, s, e, allow);
+    XSM_ALLOWED_ROLES(XSM_NONE);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_pmu_op (xsm_default_t def, struct domain *d, unsigned int op)
+static inline int xsm_pmu_op (xsm_role_t role, struct domain *d, unsigned int op)
 {
-    return xsm_ops->pmu_op(d, op);
+    CALL_XSM_OP(pmu_op, d, op);
+    XSM_ALLOWED_ROLES(XSM_NONE | XSM_DOM_SUPER);
+    switch ( op )
+    {
+    case XENPMU_init:
+    case XENPMU_finish:
+    case XENPMU_lvtpc_set:
+    case XENPMU_flush:
+        return xsm_validate_role(XSM_NONE, d, current->domain);
+    default:
+        return xsm_validate_role(XSM_DOM_SUPER, d, current->domain);
+    }
 }
 
 #endif /* CONFIG_X86 */
 
-static inline int xsm_dm_op(xsm_default_t def, struct domain *d)
+static inline int xsm_dm_op(xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->dm_op(d);
+    CALL_XSM_OP(dm_op, d);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
-static inline int xsm_xen_version (xsm_default_t def, uint32_t op)
+static inline int xsm_xen_version (xsm_role_t role, uint32_t op)
 {
-    return xsm_ops->xen_version(op);
+    CALL_XSM_OP(xen_version, op);
+    XSM_ALLOWED_ROLES(XSM_NONE | XSM_PLAT_CTRL);
+    switch ( op )
+    {
+    case XENVER_version:
+    case XENVER_platform_parameters:
+    case XENVER_get_features:
+        /* These sub-ops ignore the permission checks and return data. */
+        block_speculation();
+        return 0;
+    case XENVER_extraversion:
+    case XENVER_compile_info:
+    case XENVER_capabilities:
+    case XENVER_changeset:
+    case XENVER_pagesize:
+    case XENVER_guest_handle:
+        /* These MUST always be accessible to any guest by default. */
+        return xsm_validate_role(XSM_NONE, current->domain, NULL);
+    default:
+        return xsm_validate_role(XSM_PLAT_CTRL, current->domain, NULL);
+    }
 }
 
-static inline int xsm_domain_resource_map(xsm_default_t def, struct domain *d)
+static inline int xsm_domain_resource_map(xsm_role_t role, struct domain *d)
 {
-    return xsm_ops->domain_resource_map(d);
+    CALL_XSM_OP(domain_resource_map, d);
+    XSM_ALLOWED_ROLES(DEV_EMU_PRIVS);
+    return xsm_validate_role(role, current->domain, d);
 }
 
 #ifdef CONFIG_ARGO
 static inline int xsm_argo_enable(const struct domain *d)
 {
-    return xsm_ops->argo_enable(d);
+    CALL_XSM_OP(argo_enable, d);
+    return 0;
 }
 
 static inline int xsm_argo_register_single_source(const struct domain *d,
                                                   const struct domain *t)
 {
-    return xsm_ops->argo_register_single_source(d, t);
+    CALL_XSM_OP(argo_register_single_source, d, t);
+    return 0;
 }
 
 static inline int xsm_argo_register_any_source(const struct domain *d)
 {
-    return xsm_ops->argo_register_any_source(d);
+    CALL_XSM_OP(argo_register_any_source, d);
+    return 0;
 }
 
 static inline int xsm_argo_send(const struct domain *d, const struct domain *t)
 {
-    return xsm_ops->argo_send(d, t);
+    CALL_XSM_OP(argo_send, d, t);
+    return 0;
 }
 
 #endif /* CONFIG_ARGO */
 
-#endif /* XSM_NO_WRAPPERS */
-
-#ifdef CONFIG_MULTIBOOT
-extern int xsm_multiboot_init(unsigned long *module_map,
-                              const multiboot_info_t *mbi);
-extern int xsm_multiboot_policy_init(unsigned long *module_map,
-                                     const multiboot_info_t *mbi,
-                                     void **policy_buffer,
-                                     size_t *policy_size);
-#endif
-
-#ifdef CONFIG_HAS_DEVICE_TREE
-/*
- * Initialize XSM
- *
- * On success, return 1 if using SILO mode else 0.
- */
-extern int xsm_dt_init(void);
-extern int xsm_dt_policy_init(void **policy_buffer, size_t *policy_size);
-extern bool has_xsm_magic(paddr_t);
-#endif
-
 extern int register_xsm(struct xsm_operations *ops);
 
 extern struct xsm_operations dummy_xsm_ops;
@@ -760,9 +995,29 @@ extern void silo_init(void);
 static inline void silo_init(void) {}
 #endif
 
-#else /* CONFIG_XSM */
+#ifdef CONFIG_XSM_POLICY_MODULES
+
+#ifdef CONFIG_MULTIBOOT
+extern int xsm_multiboot_init(unsigned long *module_map,
+                              const multiboot_info_t *mbi);
+extern int xsm_multiboot_policy_init(unsigned long *module_map,
+                                     const multiboot_info_t *mbi,
+                                     void **policy_buffer,
+                                     size_t *policy_size);
+#endif
+
+#ifdef CONFIG_HAS_DEVICE_TREE
+/*
+ * Initialize XSM
+ *
+ * On success, return 1 if using SILO mode else 0.
+ */
+extern int xsm_dt_init(void);
+extern int xsm_dt_policy_init(void **policy_buffer, size_t *policy_size);
+extern bool has_xsm_magic(paddr_t);
+#endif
 
-#include <xsm/dummy.h>
+#else /* CONFIG_XSM_POLICY_MODULES */
 
 #ifdef CONFIG_MULTIBOOT
 static inline int xsm_multiboot_init (unsigned long *module_map,
@@ -784,6 +1039,6 @@ static inline bool has_xsm_magic(paddr_t start)
 }
 #endif /* CONFIG_HAS_DEVICE_TREE */
 
-#endif /* CONFIG_XSM */
+#endif /* CONFIG_XSM_POLICY_MODULES */
 
 #endif /* __XSM_H */
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 5eab21e1b1..6bd8ad8751 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -18,8 +18,6 @@
 #include <xen/hypercall.h>
 #include <xsm/xsm.h>
 
-#ifdef CONFIG_XSM
-
 #ifdef CONFIG_MULTIBOOT
 #include <asm/setup.h>
 #endif
@@ -32,6 +30,8 @@
 
 struct xsm_operations *xsm_ops;
 
+#ifdef CONFIG_XSM
+
 enum xsm_bootparam {
     XSM_BOOTPARAM_DUMMY,
     XSM_BOOTPARAM_FLASK,
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:51:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:51:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127554.239749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhemZ-0003QV-DR; Fri, 14 May 2021 20:51:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127554.239749; Fri, 14 May 2021 20:51:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhemZ-0003QO-AN; Fri, 14 May 2021 20:51:39 +0000
Received: by outflank-mailman (input) for mailman id 127554;
 Fri, 14 May 2021 20:51:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lhemY-0003Po-HX
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:51:38 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a9973e2f-0f1c-4300-af93-06caf840ab44;
 Fri, 14 May 2021 20:51:37 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025174137572.2487861385831;
 Fri, 14 May 2021 13:46:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a9973e2f-0f1c-4300-af93-06caf840ab44
ARC-Seal: i=1; a=rsa-sha256; t=1621025176; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=j29W/jvDvHchWTZy87G1FmjM7/1/qoCbhnR0Q+0s1nFfn4bO1FSeKPtIpX/Zzv1315U8wVkGpTH1QBkgHbzqAa2OW0qX8eEBk1u1CJPpQt4WKNTgPlB40LVrJqK61ffGX4RzHlt28QBeogJd6Y0LsZzbvy9rddLTWNNPtbjtN7g=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025176; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=UOuUY/j2uqB9R7XKS5OSaop21yJAw0RickXVL7Xgjpg=; 
	b=biGDvtKposTKEZW5zKxADvfVpQelxWMZt3wKsIf8C4AB7Ymrlebs7wyRtGBiqpd0HB7fRn7W4fUr3WgqI31VYo6AneKQXQNV00MT3+WufoMOgFVhiukjxyrQJSNkIyZAxjAWb+NIxc809WUelede7U6stKjrrXblynKk0zbIe88=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025176;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=UOuUY/j2uqB9R7XKS5OSaop21yJAw0RickXVL7Xgjpg=;
	b=X1v2pp3Tp0QSW2NOqS2evUr2UdMYHFbuYdOg/FvtTVZN00VSpjEc511lnm/HrRY0
	uINmdGBSrV1Htc9i2vXDRXzVB+JXp9Xbe+tSVUK7NqWpu5RqREBnJK43EpWri4/wLHb
	OqFFL3Ek/AQkZ5a/2AYTTQHA0FSyBIkW0W/wn49E=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 07/10] xsm-roles: adjusting core xsm
Date: Fri, 14 May 2021 16:54:34 -0400
Message-Id: <20210514205437.13661-8-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

This patch makes adjustments and clean-ups to the XSM core for the adoption of
domain roles.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/include/xen/sched.h |   2 +-
 xen/include/xsm/xsm.h   |  26 -------
 xen/xsm/Makefile        |   3 +-
 xen/xsm/dummy.c         | 160 ----------------------------------------
 xen/xsm/xsm_core.c      |  46 +++---------
 5 files changed, 14 insertions(+), 223 deletions(-)
 delete mode 100644 xen/xsm/dummy.c

diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index d84b047359..a00d7fc260 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -120,7 +120,7 @@ struct evtchn
     unsigned short notify_vcpu_id; /* VCPU for local delivery notification */
     uint32_t fifo_lastq;           /* Data for identifying last queue. */
 
-#ifdef CONFIG_XSM
+#ifdef CONFIG_XSM_POLICY
     union {
 #ifdef XSM_NEED_GENERIC_EVTCHN_SSID
         /*
diff --git a/xen/include/xsm/xsm.h b/xen/include/xsm/xsm.h
index 50f2f547dc..8b5e9c737b 100644
--- a/xen/include/xsm/xsm.h
+++ b/xen/include/xsm/xsm.h
@@ -995,8 +995,6 @@ extern void silo_init(void);
 static inline void silo_init(void) {}
 #endif
 
-#ifdef CONFIG_XSM_POLICY_MODULES
-
 #ifdef CONFIG_MULTIBOOT
 extern int xsm_multiboot_init(unsigned long *module_map,
                               const multiboot_info_t *mbi);
@@ -1017,28 +1015,4 @@ extern int xsm_dt_policy_init(void **policy_buffer, size_t *policy_size);
 extern bool has_xsm_magic(paddr_t);
 #endif
 
-#else /* CONFIG_XSM_POLICY_MODULES */
-
-#ifdef CONFIG_MULTIBOOT
-static inline int xsm_multiboot_init (unsigned long *module_map,
-                                      const multiboot_info_t *mbi)
-{
-    return 0;
-}
-#endif
-
-#ifdef CONFIG_HAS_DEVICE_TREE
-static inline int xsm_dt_init(void)
-{
-    return 0;
-}
-
-static inline bool has_xsm_magic(paddr_t start)
-{
-    return false;
-}
-#endif /* CONFIG_HAS_DEVICE_TREE */
-
-#endif /* CONFIG_XSM_POLICY_MODULES */
-
 #endif /* __XSM_H */
diff --git a/xen/xsm/Makefile b/xen/xsm/Makefile
index cf0a728f1c..870bbb8247 100644
--- a/xen/xsm/Makefile
+++ b/xen/xsm/Makefile
@@ -1,6 +1,5 @@
 obj-y += xsm_core.o
-obj-$(CONFIG_XSM) += xsm_policy.o
-obj-$(CONFIG_XSM) += dummy.o
+obj-$(CONFIG_XSM_POLICY) += xsm_policy.o
 obj-$(CONFIG_XSM_SILO) += silo.o
 
 obj-$(CONFIG_XSM_FLASK) += flask/
diff --git a/xen/xsm/dummy.c b/xen/xsm/dummy.c
deleted file mode 100644
index 627f12dbff..0000000000
--- a/xen/xsm/dummy.c
+++ /dev/null
@@ -1,160 +0,0 @@
-/*
- *  This work is based on the LSM implementation in Linux 2.6.13.4.
- *
- *  Author:  George Coker, <gscoker@alpha.ncsc.mil>
- *
- *  Contributors: Michael LeMay, <mdlemay@epoch.ncsc.mil>
- *
- *  This program is free software; you can redistribute it and/or modify
- *  it under the terms of the GNU General Public License version 2,
- *  as published by the Free Software Foundation.
- */
-
-#define XSM_NO_WRAPPERS
-#include <xsm/dummy.h>
-
-struct xsm_operations dummy_xsm_ops;
-
-#define set_to_dummy_if_null(ops, function)                            \
-    do {                                                               \
-        if ( !ops->function )                                          \
-            ops->function = xsm_##function;                            \
-    } while (0)
-
-void __init xsm_fixup_ops (struct xsm_operations *ops)
-{
-    set_to_dummy_if_null(ops, security_domaininfo);
-    set_to_dummy_if_null(ops, domain_create);
-    set_to_dummy_if_null(ops, getdomaininfo);
-    set_to_dummy_if_null(ops, domctl_scheduler_op);
-    set_to_dummy_if_null(ops, sysctl_scheduler_op);
-    set_to_dummy_if_null(ops, set_target);
-    set_to_dummy_if_null(ops, domctl);
-    set_to_dummy_if_null(ops, sysctl);
-    set_to_dummy_if_null(ops, readconsole);
-
-    set_to_dummy_if_null(ops, evtchn_unbound);
-    set_to_dummy_if_null(ops, evtchn_interdomain);
-    set_to_dummy_if_null(ops, evtchn_close_post);
-    set_to_dummy_if_null(ops, evtchn_send);
-    set_to_dummy_if_null(ops, evtchn_status);
-    set_to_dummy_if_null(ops, evtchn_reset);
-
-    set_to_dummy_if_null(ops, grant_mapref);
-    set_to_dummy_if_null(ops, grant_unmapref);
-    set_to_dummy_if_null(ops, grant_setup);
-    set_to_dummy_if_null(ops, grant_transfer);
-    set_to_dummy_if_null(ops, grant_copy);
-    set_to_dummy_if_null(ops, grant_query_size);
-
-    set_to_dummy_if_null(ops, alloc_security_domain);
-    set_to_dummy_if_null(ops, free_security_domain);
-    set_to_dummy_if_null(ops, alloc_security_evtchns);
-    set_to_dummy_if_null(ops, free_security_evtchns);
-    set_to_dummy_if_null(ops, show_security_evtchn);
-    set_to_dummy_if_null(ops, init_hardware_domain);
-
-    set_to_dummy_if_null(ops, get_pod_target);
-    set_to_dummy_if_null(ops, set_pod_target);
-
-    set_to_dummy_if_null(ops, memory_exchange);
-    set_to_dummy_if_null(ops, memory_adjust_reservation);
-    set_to_dummy_if_null(ops, memory_stat_reservation);
-    set_to_dummy_if_null(ops, memory_pin_page);
-    set_to_dummy_if_null(ops, claim_pages);
-
-    set_to_dummy_if_null(ops, console_io);
-
-    set_to_dummy_if_null(ops, profile);
-
-    set_to_dummy_if_null(ops, kexec);
-    set_to_dummy_if_null(ops, schedop_shutdown);
-
-    set_to_dummy_if_null(ops, show_irq_sid);
-    set_to_dummy_if_null(ops, map_domain_pirq);
-    set_to_dummy_if_null(ops, map_domain_irq);
-    set_to_dummy_if_null(ops, unmap_domain_pirq);
-    set_to_dummy_if_null(ops, unmap_domain_irq);
-    set_to_dummy_if_null(ops, bind_pt_irq);
-    set_to_dummy_if_null(ops, unbind_pt_irq);
-    set_to_dummy_if_null(ops, irq_permission);
-    set_to_dummy_if_null(ops, iomem_permission);
-    set_to_dummy_if_null(ops, iomem_mapping);
-    set_to_dummy_if_null(ops, pci_config_permission);
-    set_to_dummy_if_null(ops, get_vnumainfo);
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_PCI)
-    set_to_dummy_if_null(ops, get_device_group);
-    set_to_dummy_if_null(ops, assign_device);
-    set_to_dummy_if_null(ops, deassign_device);
-#endif
-
-#if defined(CONFIG_HAS_PASSTHROUGH) && defined(CONFIG_HAS_DEVICE_TREE)
-    set_to_dummy_if_null(ops, assign_dtdevice);
-    set_to_dummy_if_null(ops, deassign_dtdevice);
-#endif
-
-    set_to_dummy_if_null(ops, resource_plug_core);
-    set_to_dummy_if_null(ops, resource_unplug_core);
-    set_to_dummy_if_null(ops, resource_plug_pci);
-    set_to_dummy_if_null(ops, resource_unplug_pci);
-    set_to_dummy_if_null(ops, resource_setup_pci);
-    set_to_dummy_if_null(ops, resource_setup_gsi);
-    set_to_dummy_if_null(ops, resource_setup_misc);
-
-    set_to_dummy_if_null(ops, page_offline);
-    set_to_dummy_if_null(ops, hypfs_op);
-    set_to_dummy_if_null(ops, hvm_param);
-    set_to_dummy_if_null(ops, hvm_control);
-    set_to_dummy_if_null(ops, hvm_param_altp2mhvm);
-    set_to_dummy_if_null(ops, hvm_altp2mhvm_op);
-
-    set_to_dummy_if_null(ops, do_xsm_op);
-#ifdef CONFIG_COMPAT
-    set_to_dummy_if_null(ops, do_compat_op);
-#endif
-
-    set_to_dummy_if_null(ops, add_to_physmap);
-    set_to_dummy_if_null(ops, remove_from_physmap);
-    set_to_dummy_if_null(ops, map_gmfn_foreign);
-
-    set_to_dummy_if_null(ops, vm_event_control);
-
-#ifdef CONFIG_MEM_ACCESS
-    set_to_dummy_if_null(ops, mem_access);
-#endif
-
-#ifdef CONFIG_HAS_MEM_PAGING
-    set_to_dummy_if_null(ops, mem_paging);
-#endif
-
-#ifdef CONFIG_MEM_SHARING
-    set_to_dummy_if_null(ops, mem_sharing);
-#endif
-
-    set_to_dummy_if_null(ops, platform_op);
-#ifdef CONFIG_X86
-    set_to_dummy_if_null(ops, do_mca);
-    set_to_dummy_if_null(ops, shadow_control);
-    set_to_dummy_if_null(ops, mem_sharing_op);
-    set_to_dummy_if_null(ops, apic);
-    set_to_dummy_if_null(ops, machine_memory_map);
-    set_to_dummy_if_null(ops, domain_memory_map);
-    set_to_dummy_if_null(ops, mmu_update);
-    set_to_dummy_if_null(ops, mmuext_op);
-    set_to_dummy_if_null(ops, update_va_mapping);
-    set_to_dummy_if_null(ops, priv_mapping);
-    set_to_dummy_if_null(ops, ioport_permission);
-    set_to_dummy_if_null(ops, ioport_mapping);
-    set_to_dummy_if_null(ops, pmu_op);
-#endif
-    set_to_dummy_if_null(ops, dm_op);
-    set_to_dummy_if_null(ops, xen_version);
-    set_to_dummy_if_null(ops, domain_resource_map);
-#ifdef CONFIG_ARGO
-    set_to_dummy_if_null(ops, argo_enable);
-    set_to_dummy_if_null(ops, argo_register_single_source);
-    set_to_dummy_if_null(ops, argo_register_any_source);
-    set_to_dummy_if_null(ops, argo_send);
-#endif
-}
diff --git a/xen/xsm/xsm_core.c b/xen/xsm/xsm_core.c
index 6bd8ad8751..89c16511b8 100644
--- a/xen/xsm/xsm_core.c
+++ b/xen/xsm/xsm_core.c
@@ -26,14 +26,12 @@
 #include <asm/setup.h>
 #endif
 
-#define XSM_FRAMEWORK_VERSION    "1.0.0"
+#define XSM_FRAMEWORK_VERSION    "2.0.0"
 
 struct xsm_operations *xsm_ops;
 
-#ifdef CONFIG_XSM
-
 enum xsm_bootparam {
-    XSM_BOOTPARAM_DUMMY,
+    XSM_BOOTPARAM_ROLE,
     XSM_BOOTPARAM_FLASK,
     XSM_BOOTPARAM_SILO,
 };
@@ -44,15 +42,15 @@ static enum xsm_bootparam __initdata xsm_bootparam =
 #elif CONFIG_XSM_SILO_DEFAULT
     XSM_BOOTPARAM_SILO;
 #else
-    XSM_BOOTPARAM_DUMMY;
+    XSM_BOOTPARAM_ROLE;
 #endif
 
 static int __init parse_xsm_param(const char *s)
 {
     int rc = 0;
 
-    if ( !strcmp(s, "dummy") )
-        xsm_bootparam = XSM_BOOTPARAM_DUMMY;
+    if ( !strcmp(s, "role") )
+        xsm_bootparam = XSM_BOOTPARAM_ROLE;
 #ifdef CONFIG_XSM_FLASK
     else if ( !strcmp(s, "flask") )
         xsm_bootparam = XSM_BOOTPARAM_FLASK;
@@ -68,15 +66,6 @@ static int __init parse_xsm_param(const char *s)
 }
 custom_param("xsm", parse_xsm_param);
 
-static inline int verify(struct xsm_operations *ops)
-{
-    /* verify the security_operations structure exists */
-    if ( !ops )
-        return -EINVAL;
-    xsm_fixup_ops(ops);
-    return 0;
-}
-
 static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
 {
 #ifdef CONFIG_XSM_FLASK_POLICY
@@ -87,17 +76,9 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
     }
 #endif
 
-    if ( verify(&dummy_xsm_ops) )
-    {
-        printk(XENLOG_ERR "Could not verify dummy_xsm_ops structure\n");
-        return -EIO;
-    }
-
-    xsm_ops = &dummy_xsm_ops;
-
     switch ( xsm_bootparam )
     {
-    case XSM_BOOTPARAM_DUMMY:
+    case XSM_BOOTPARAM_ROLE:
         break;
 
     case XSM_BOOTPARAM_FLASK:
@@ -116,6 +97,7 @@ static int __init xsm_core_init(const void *policy_buffer, size_t policy_size)
     return 0;
 }
 
+
 #ifdef CONFIG_MULTIBOOT
 int __init xsm_multiboot_init(unsigned long *module_map,
                               const multiboot_info_t *mbi)
@@ -126,6 +108,7 @@ int __init xsm_multiboot_init(unsigned long *module_map,
 
     printk("XSM Framework v" XSM_FRAMEWORK_VERSION " initialized\n");
 
+#ifdef CONFIG_XSM_POLICY
     if ( XSM_MAGIC )
     {
         ret = xsm_multiboot_policy_init(module_map, mbi,
@@ -137,6 +120,7 @@ int __init xsm_multiboot_init(unsigned long *module_map,
             return -EINVAL;
         }
     }
+#endif
 
     ret = xsm_core_init(policy_buffer, policy_size);
     bootstrap_map(NULL);
@@ -154,6 +138,7 @@ int __init xsm_dt_init(void)
 
     printk("XSM Framework v" XSM_FRAMEWORK_VERSION " initialized\n");
 
+#ifdef CONFIG_XSM_POLICY
     if ( XSM_MAGIC )
     {
         ret = xsm_dt_policy_init(&policy_buffer, &policy_size);
@@ -163,6 +148,7 @@ int __init xsm_dt_init(void)
             return -EINVAL;
         }
     }
+#endif
 
     ret = xsm_core_init(policy_buffer, policy_size);
 
@@ -197,13 +183,7 @@ bool __init has_xsm_magic(paddr_t start)
 
 int __init register_xsm(struct xsm_operations *ops)
 {
-    if ( verify(ops) )
-    {
-        printk(XENLOG_ERR "Could not verify xsm_operations structure\n");
-        return -EINVAL;
-    }
-
-    if ( xsm_ops != &dummy_xsm_ops )
+    if ( xsm_ops != NULL )
         return -EAGAIN;
 
     xsm_ops = ops;
@@ -211,8 +191,6 @@ int __init register_xsm(struct xsm_operations *ops)
     return 0;
 }
 
-#endif
-
 long do_xsm_op (XEN_GUEST_HANDLE_PARAM(xsm_op_t) op)
 {
     return xsm_do_xsm_op(op);
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:52:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127556.239759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhen0-00040A-Ma; Fri, 14 May 2021 20:52:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127556.239759; Fri, 14 May 2021 20:52:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhen0-000403-Ja; Fri, 14 May 2021 20:52:06 +0000
Received: by outflank-mailman (input) for mailman id 127556;
 Fri, 14 May 2021 20:52:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lhemz-0003y3-6b
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:52:05 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a22dd97f-fa72-4fa7-9bdb-ccd91e95adc1;
 Fri, 14 May 2021 20:52:02 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 16210251766101006.373208647906;
 Fri, 14 May 2021 13:46:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a22dd97f-fa72-4fa7-9bdb-ccd91e95adc1
ARC-Seal: i=1; a=rsa-sha256; t=1621025178; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=gav8mVX8+Lj1JTfaJ5VuA2M8Ok24RS6vhX8KXf7rzfCYSDnOwEX9lHL7tUX5ZzqYKiDW3WFLQO013z8LWtxccEbYRirT2dCeMCd9mpcjMwrQiD2bj5xQ8qc/KbgFgh0zAWM2sNa0lqvt7MAlzQ1/qUfae88DjwmbEf3m1dPEQZ4=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025178; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=j1TdhTTR8bEErLKTI/Pj6mQGmlE2NmJBrVMzKAEiNsk=; 
	b=F3fy2Nh05W9WnuoreC3Trcl43bodrA8xRhtVRbtpEO5L/1y1TtIU1LOsijI1X4ypb93E2ztMT4Axs2tvCt1XHSVmd+cNHSMCvvDWACJU+/MnZLkZih1ISUzFS/fbLHE41uvPROIkBQrHAFSp4gwY26cmSs6Hke2JSRdQRz8Ggis=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025178;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=j1TdhTTR8bEErLKTI/Pj6mQGmlE2NmJBrVMzKAEiNsk=;
	b=l0Yqy5F4mMljqC6V6mAA1YlCL9AH9R+sJ+svxJS7KWKFiihOx1jFeIXx7swg3GUC
	xZBCzchv8qdcW/aae7e6VcaR5vGKgE3fVdRctLljQ5hp9JJibJ77JdCnp5Ob/2ko/Xo
	Rf1gQQBCtOqcX9wLFmfS9p9Ut7zDqYU6vpQeutcQ=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 08/10] xsm-silo: convert silo over to domain roles
Date: Fri, 14 May 2021 16:54:35 -0400
Message-Id: <20210514205437.13661-9-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

This converts the SILO XSM module into an extension of the domain roles
system, implementing an extended enforcement policy on top of it.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/xsm/silo.c | 22 +++++++++++++---------
 1 file changed, 13 insertions(+), 9 deletions(-)

diff --git a/xen/xsm/silo.c b/xen/xsm/silo.c
index 4850756a3d..3b3ca8fb84 100644
--- a/xen/xsm/silo.c
+++ b/xen/xsm/silo.c
@@ -17,9 +17,11 @@
  * You should have received a copy of the GNU General Public License along with
  * this program; If not, see <http://www.gnu.org/licenses/>.
  */
-#define XSM_NO_WRAPPERS
-#include <xsm/dummy.h>
 
+#include <xsm/xsm.h>
+#include <xsm/roles.h>
+
+#define SILO_ALLOWED_ROLES ( XSM_DOM_SUPER | XSM_DEV_BACK )
 /*
  * Check if inter-domain communication is allowed.
  * Return true when pass check.
@@ -29,8 +31,10 @@ static bool silo_mode_dom_check(const struct domain *ldom,
 {
     const struct domain *currd = current->domain;
 
-    return (is_control_domain(currd) || is_control_domain(ldom) ||
-            is_control_domain(rdom) || ldom == rdom);
+    return ( currd->xsm_roles & SILO_ALLOWED_ROLES ||
+            ldom->xsm_roles & SILO_ALLOWED_ROLES ||
+            rdom->xsm_roles & SILO_ALLOWED_ROLES ||
+            ldom == rdom );
 }
 
 static int silo_evtchn_unbound(struct domain *d1, struct evtchn *chn,
@@ -44,7 +48,7 @@ static int silo_evtchn_unbound(struct domain *d1, struct evtchn *chn,
     else
     {
         if ( silo_mode_dom_check(d1, d2) )
-            rc = xsm_evtchn_unbound(d1, chn, id2);
+            rc = xsm_validate_role(TARGET_PRIVS, current->domain, d1);
         rcu_unlock_domain(d2);
     }
 
@@ -55,7 +59,7 @@ static int silo_evtchn_interdomain(struct domain *d1, struct evtchn *chan1,
                                    struct domain *d2, struct evtchn *chan2)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_evtchn_interdomain(d1, chan1, d2, chan2);
+        return xsm_validate_role(XSM_NONE, d1, d2);
     return -EPERM;
 }
 
@@ -63,21 +67,21 @@ static int silo_grant_mapref(struct domain *d1, struct domain *d2,
                              uint32_t flags)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_grant_mapref(d1, d2, flags);
+        return xsm_validate_role(XSM_NONE, d1, d2);
     return -EPERM;
 }
 
 static int silo_grant_transfer(struct domain *d1, struct domain *d2)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_grant_transfer(d1, d2);
+        return xsm_validate_role(XSM_NONE, d1, d2);
     return -EPERM;
 }
 
 static int silo_grant_copy(struct domain *d1, struct domain *d2)
 {
     if ( silo_mode_dom_check(d1, d2) )
-        return xsm_grant_copy(d1, d2);
+        return xsm_validate_role(XSM_NONE, d1, d2);
     return -EPERM;
 }
 
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:54:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:54:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127564.239771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lheow-0004mE-3z; Fri, 14 May 2021 20:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127564.239771; Fri, 14 May 2021 20:54:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lheov-0004m7-W2; Fri, 14 May 2021 20:54:05 +0000
Received: by outflank-mailman (input) for mailman id 127564;
 Fri, 14 May 2021 20:54:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lheou-0004lD-Sd
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:54:04 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aaebd173-a53f-4e78-aea0-ac41d5c5eed3;
 Fri, 14 May 2021 20:54:04 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025180253480.449466746447;
 Fri, 14 May 2021 13:46:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aaebd173-a53f-4e78-aea0-ac41d5c5eed3
ARC-Seal: i=1; a=rsa-sha256; t=1621025182; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=QuCdde0fjdutlAxwM9kKB3t1vix6ZO9ja6ftxUtFU5PTT+mT01BepUJAImFtZCt1PrMoirZz80AKjsuHzpdb3g5dXTbfKHDtqm+X9YH5fg66O3drFSAe41e3b9MjK1YKftTx0uIfsygNCvsszhYvCcLZ8REen/Tx301dL3rI248=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025182; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=QnVrpx/J4fkxlMFtNOziXCA6XJRJeUiOyN6q0Hwhw5Y=; 
	b=mD3QR+YjAK+wBBbeNj7yqMc0JdpX1RSKSBnABjgO5mdIQfu6f109cN/xOO3dT/wZB72rDh6lxdXr/cg7V5qRtzlA+F8WXTHMH7tW8qbHr7f+TzdfU9w2nPRFwK0/8R3F4DaoJ11kl2WeM3OoHrTDGb+jk160aMgWfV1pbXVgUGo=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025182;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=QnVrpx/J4fkxlMFtNOziXCA6XJRJeUiOyN6q0Hwhw5Y=;
	b=OHUkE+EjKAUhjRrr16m1dNfPBDZuczmiVIPCBOQd4fBdiNf1T8RHWU64o7CjW1Cb
	lXZv4C0cc7mtXkTc2FKYwLKKEKZ/2U/PZVES/le8LSIf/9ZCxI7u8QmpIt3AqW9bSyq
	N8qwFz1yc3ixvKLLbzcxb6vMDQBAUEaJqimkOD0k=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 09/10] xsm-flask: clean up for domain roles conversion
Date: Fri, 14 May 2021 16:54:36 -0400
Message-Id: <20210514205437.13661-10-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

The domain roles conversion changes how the default XSM policy module is
configured. Make the minor adjustment needed for that change.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/xsm/flask/flask_op.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/xen/xsm/flask/flask_op.c b/xen/xsm/flask/flask_op.c
index 01e52138a1..63c263ebed 100644
--- a/xen/xsm/flask/flask_op.c
+++ b/xen/xsm/flask/flask_op.c
@@ -244,7 +244,7 @@ static int flask_disable(void)
     flask_disabled = 1;
 
     /* Reset xsm_ops to the original module. */
-    xsm_ops = &dummy_xsm_ops;
+    xsm_ops = NULL;
 
     return 0;
 }
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 20:54:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 20:54:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127567.239782 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhepR-0005M4-CN; Fri, 14 May 2021 20:54:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127567.239782; Fri, 14 May 2021 20:54:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhepR-0005Lx-8y; Fri, 14 May 2021 20:54:37 +0000
Received: by outflank-mailman (input) for mailman id 127567;
 Fri, 14 May 2021 20:54:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=l2R2=KJ=apertussolutions.com=dpsmith@srs-us1.protection.inumbo.net>)
 id 1lhepP-0005Lh-Ot
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 20:54:35 +0000
Received: from sender4-of-o51.zoho.com (unknown [136.143.188.51])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a3e8d256-e19f-460e-ba88-322a74a98629;
 Fri, 14 May 2021 20:54:35 +0000 (UTC)
Received: from sisyou.hme. (static-72-81-132-2.bltmmd.fios.verizon.net
 [72.81.132.2]) by mx.zohomail.com
 with SMTPS id 1621025182737819.1959056472596;
 Fri, 14 May 2021 13:46:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a3e8d256-e19f-460e-ba88-322a74a98629
ARC-Seal: i=1; a=rsa-sha256; t=1621025184; cv=none; 
	d=zohomail.com; s=zohoarc; 
	b=am66yHbMEbvVXda4yV02AM7B3UfoQs94eod4/+Nbhecxu6jNtYb7wp3iFrALk6oLmjok8zUUdwmVqjy10Or7Ylca18JhynRYiUM+/mixQOEV4N3zBx0+eFMd3X3svlowZ7XuHY6KunsthH88cYM74UR4ngHpnd0a2o8YMDDgnCA=
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=zohomail.com; s=zohoarc; 
	t=1621025184; h=Content-Transfer-Encoding:Cc:Date:From:In-Reply-To:MIME-Version:Message-ID:References:Subject:To; 
	bh=wGvCSSkU+bbasgBbtH8pFrI6xBZiDUPG25gTuiiEgZg=; 
	b=fzl33zqwkpAb35n+OFVABlbb2MnwJNaXTVp/HFeQw3T1k3Lf8Od9844KtkyVf4PCurUW0g+BuOuYBlyj5cGaC/LAYr4VAoF0U7HzwK315Yhdx2dIjaQNOCJY7qGGy7kbicnZ13b33jMQm2g7gTIGxlBF2riRHTWOnDm4p9o4ZJs=
ARC-Authentication-Results: i=1; mx.zohomail.com;
	dkim=pass  header.i=apertussolutions.com;
	spf=pass  smtp.mailfrom=dpsmith@apertussolutions.com;
	dmarc=pass header.from=<dpsmith@apertussolutions.com> header.from=<dpsmith@apertussolutions.com>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; t=1621025184;
	s=zoho; d=apertussolutions.com; i=dpsmith@apertussolutions.com;
	h=From:To:Cc:Subject:Date:Message-Id:In-Reply-To:References:MIME-Version:Content-Transfer-Encoding;
	bh=wGvCSSkU+bbasgBbtH8pFrI6xBZiDUPG25gTuiiEgZg=;
	b=B/FKSizqkYATTS3vrvUFnslLv3AB7UwEBf9dn/VFIO7eq3IKX4otTAAUYoj28D3y
	yOfTUs2ikHw5Wvq5YKC+YNUxPZ4VfFoIpx9/SDTy9+Ssr+yT/cJZjqbXKJ3gYSVgF98
	bK8MDMxF8mM2BjeMT/fbClzZ3obnHj4MtLRFH43A=
From: "Daniel P. Smith" <dpsmith@apertussolutions.com>
To: xen-devel@lists.xenproject.org
Cc: sstabellini@kernel.org,
	julien@xen.org,
	Volodymyr_Babchuk@epam.com,
	andrew.cooper3@citrix.com,
	george.dunlap@citrix.com,
	iwj@xenproject.org,
	jbeulich@suse.com,
	wl@xen.org,
	roger.pau@citrix.com,
	tamas@tklengyel.com,
	tim@xen.org,
	jgross@suse.com,
	aisaila@bitdefender.com,
	ppircalabu@bitdefender.com,
	dfaggioli@suse.com,
	paul@xen.org,
	kevin.tian@intel.com,
	dgdegra@tycho.nsa.gov,
	adam.schwalm@starlab.io,
	scott.davis@starlab.io
Subject: [RFC PATCH 10/10] common/Kconfig: updating Kconfig for domain roles
Date: Fri, 14 May 2021 16:54:37 -0400
Message-Id: <20210514205437.13661-11-dpsmith@apertussolutions.com>
X-Mailer: git-send-email 2.20.1
In-Reply-To: <20210514205437.13661-1-dpsmith@apertussolutions.com>
References: <20210514205437.13661-1-dpsmith@apertussolutions.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
X-ZohoMailClient: External

Adjust the Kconfig options for the reorganization of XSM brought about by the
introduction of domain roles.

Signed-off-by: Daniel P. Smith <dpsmith@apertussolutions.com>
---
 xen/common/Kconfig | 14 ++++++++------
 1 file changed, 8 insertions(+), 6 deletions(-)

diff --git a/xen/common/Kconfig b/xen/common/Kconfig
index 3064bf6b89..560ad274c4 100644
--- a/xen/common/Kconfig
+++ b/xen/common/Kconfig
@@ -199,11 +199,12 @@ config XENOPROF
 
 	  If unsure, say Y.
 
-config XSM
-	bool "Xen Security Modules support"
-	default ARM
+menu "Xen Security Modules"
+
+config XSM_POLICY
+	bool "XSM policy support"
 	---help---
-	  Enables the security framework known as Xen Security Modules which
+	  Enables loadable policy support for Xen Security Modules which
 	  allows administrators fine-grained control over a Xen domain and
 	  its capabilities by defining permissible interactions between domains,
 	  the hypervisor itself, and related resources such as memory and
@@ -214,7 +215,7 @@ config XSM
 config XSM_FLASK
 	def_bool y
 	prompt "FLux Advanced Security Kernel support"
-	depends on XSM
+	depends on XSM_POLICY
 	---help---
 	  Enables FLASK (FLux Advanced Security Kernel) as the access control
 	  mechanism used by the XSM framework.  This provides a mandatory access
@@ -254,7 +255,6 @@ config XSM_FLASK_POLICY
 config XSM_SILO
 	def_bool y
 	prompt "SILO support"
-	depends on XSM
 	---help---
 	  Enables SILO as the access control mechanism used by the XSM framework.
 	  This is not the default module, add boot parameter xsm=silo to choose
@@ -278,6 +278,8 @@ choice
 		bool "SILO" if XSM_SILO
 endchoice
 
+endmenu
+
 config LATE_HWDOM
 	bool "Dedicated hardware domain"
 	default n
-- 
2.20.1



From xen-devel-bounces@lists.xenproject.org Fri May 14 21:02:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 21:02:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127577.239793 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhewZ-0006tS-8B; Fri, 14 May 2021 21:01:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127577.239793; Fri, 14 May 2021 21:01:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhewZ-0006tL-57; Fri, 14 May 2021 21:01:59 +0000
Received: by outflank-mailman (input) for mailman id 127577;
 Fri, 14 May 2021 21:01:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZhtO=KJ=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lhewX-0006tF-Rs
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 21:01:57 +0000
Received: from mail-pj1-x1030.google.com (unknown [2607:f8b0:4864:20::1030])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8370a17f-116b-4cff-990f-3a063f56c7d2;
 Fri, 14 May 2021 21:01:57 +0000 (UTC)
Received: by mail-pj1-x1030.google.com with SMTP id
 gb21-20020a17090b0615b029015d1a863a91so2279522pjb.2
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 14:01:57 -0700 (PDT)
Received: from ?IPv6:2601:1c2:4f80:d230::1? ([2601:1c2:4f80:d230::1])
 by smtp.gmail.com with ESMTPSA id m6sm1756204pfc.133.2021.05.14.14.01.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 May 2021 14:01:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8370a17f-116b-4cff-990f-3a063f56c7d2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=guhBn2/qVoLPjSGAw62plNUlRR+xAO3c6yC3ikgOGR4=;
        b=py0NTGeU8RfAmLSjOaEaSNeFMX0CPSIpN+e3/gLMOdsGpiC5PItDhnoR5r/ozPQG8E
         z2E5BEIDf00q2GBewtAOwVVcYT7fo+Km7eig/FhGhh53eO11A+rRCNDU4B8G5apBM58w
         LHz0lhI3KZMXeTemVGt4JIKojv09u9PcK1s3rM17oKXbYImchEdkqYIlrfzOxL6ZfxUY
         3cuAZRgsOrsJ91vMi2zhgSFwUPwP7fov7MvhIqFha/1+ZtvlYat9sSXpxVoTLkrEvLhT
         wT3useWR/tKkpV+IvMrXW1Cfo/hkr7Gumkfel0vKWFTzZnS9BlKb6ssM5EvzuYzdfiZC
         LuPA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=guhBn2/qVoLPjSGAw62plNUlRR+xAO3c6yC3ikgOGR4=;
        b=XUPsCjeQiko/BapUu4P22Ep2lKhEt78KzNJGHp4VV3iBtMqOA32+HlDvZNq+R2lQ7Q
         JEmn+JJncK/7yzG4/YK8FWWPUvMhxOi3uPE3Ogn5eo42978V5CWJCPSxcVMq5C+NRH1R
         SJU4YGQ54Ef0clAOK5AgFOvCu0xksinxYv4jibOMue5n2Y3V7Ya3hpGBXQTboC/tlfXu
         rRt6N6pPtGbZuF5x6IuBZ8rKs1XNrSLj7FzPaOo2g+jzIv989DrXACgPcOoDro0+5xDt
         Qn2A1NXNrnxWRcSkeYkQ1FfSqD5zEYqMStNVTFG8tpgEBxkw3ZvbqE+mPTROXcLvw5R7
         N7bA==
X-Gm-Message-State: AOAM531TPRnsEVCzw1C6bB63RUfnB7epbtAflSo8BoB6Roa9oTLlF4TN
	UWdmeOnh6ZKdwtyxBxBPvxQ=
X-Google-Smtp-Source: ABdhPJzbCjhnbZSLvb8Dp9TsmWzt/xNcrSfLZDD3tozX3wzncMpjyO8hmlzKv2u9cDhy3yCb5Ri/gA==
X-Received: by 2002:a17:90a:4493:: with SMTP id t19mr7188635pjg.217.1621026116323;
        Fri, 14 May 2021 14:01:56 -0700 (PDT)
Subject: Re: [PATCH v3 5/5] automation: Add container for riscv64 builds
To: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair23@gmail.com>,
 Doug Goldstein <cardoe@cardoe.com>
References: <cover.1621017334.git.connojdavis@gmail.com>
 <5a78ff425e45588da5c97c68e94275b649346012.1621017334.git.connojdavis@gmail.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <7efa3461-b8f0-c93b-95a2-c596a6dc2c1e@gmail.com>
Date: Fri, 14 May 2021 14:01:53 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <5a78ff425e45588da5c97c68e94275b649346012.1621017334.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/14/21 11:53 AM, Connor Davis wrote:
> +
> +# There is a regression in GDB that causes an assertion error
> +# when setting breakpoints, use this commit until it is fixed!
> +RUN git clone --recursive -j$(nproc) --progress https://github.com/riscv/riscv-gnu-toolchain && \
> +    cd riscv-gnu-toolchain/riscv-gdb && \
> +    git checkout 1dd588507782591478882a891f64945af9e2b86c && \
> +    cd  .. && \
> +    ./configure --prefix=/opt/riscv && \
> +    make linux -j$(nproc) && \
> +    rm -R /riscv-gnu-toolchain

What do you think about using the RISC-V toolchain from the Arch
repos now?

I've also discovered that the symbol table error avoided by the commit
pin can be worked around by removing the already loaded symbols with
`file` (no args) before calling `file path/to/file` to load new
ones.  So people who still want to use the container for development
could use the gdb installed by pacman (with the symbols caveat).
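
For reference, a minimal sketch of that workaround in an interactive
gdb session (the binary path is just a placeholder):

```
(gdb) file                    # no argument: discard the currently loaded symbols
(gdb) file path/to/new-binary # then load the new symbol file cleanly
```

gdb will ask for confirmation before discarding the old symbol table.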

-- 
Bobby Eshleman
SE at Vates SAS


From xen-devel-bounces@lists.xenproject.org Fri May 14 21:26:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 21:26:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127587.239804 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhfK8-0000rn-5Q; Fri, 14 May 2021 21:26:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127587.239804; Fri, 14 May 2021 21:26:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhfK8-0000rg-2H; Fri, 14 May 2021 21:26:20 +0000
Received: by outflank-mailman (input) for mailman id 127587;
 Fri, 14 May 2021 21:26:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhfK7-0000rW-1j; Fri, 14 May 2021 21:26:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhfK6-0001dn-Tp; Fri, 14 May 2021 21:26:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhfK6-00060w-MF; Fri, 14 May 2021 21:26:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhfK6-0008Tq-Lp; Fri, 14 May 2021 21:26:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=opZifqcHdh4O0vUzMm5OwdDp5MuulbEJ1kC8iNjUyds=; b=PYkGwntXTN5zshGsiqf9XVh6E0
	+dCYXmXu6KgQ2Ap3xY8OOXlWKlq+z3CfeGp/Aauqb2IND7J4WUxKdygqHFr0v0ngJMxCiP2G7c5SF
	bG6Du/tGKnOHOG5/aFuUVFf1o/RH098CrMS+pRaEQRnn/UWzbuR/HlrabtoyNGEl6E6w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161949-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161949: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d82c4693f8d5c6b05f40ccf351c84645201067c1
X-Osstest-Versions-That:
    ovmf=22ac5cc9d9db34056f7c97e994fd9def683ebb2e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 21:26:18 +0000

flight 161949 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161949/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d82c4693f8d5c6b05f40ccf351c84645201067c1
baseline version:
 ovmf                 22ac5cc9d9db34056f7c97e994fd9def683ebb2e

Last test of basis   161943  2021-05-14 03:41:19 Z    0 days
Testing same since   161949  2021-05-14 10:10:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sergei Dmitrouk <sergei@posteo.net>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   22ac5cc9d9..d82c4693f8  d82c4693f8d5c6b05f40ccf351c84645201067c1 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 14 21:54:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 21:54:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127592.239817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhfks-0003wh-CH; Fri, 14 May 2021 21:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127592.239817; Fri, 14 May 2021 21:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhfks-0003wa-9G; Fri, 14 May 2021 21:53:58 +0000
Received: by outflank-mailman (input) for mailman id 127592;
 Fri, 14 May 2021 21:53:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZhtO=KJ=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1lhfkq-0003wU-K1
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 21:53:56 +0000
Received: from mail-pl1-x62a.google.com (unknown [2607:f8b0:4864:20::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 40890c1f-8e7b-43c0-95e5-e590480fa95a;
 Fri, 14 May 2021 21:53:55 +0000 (UTC)
Received: by mail-pl1-x62a.google.com with SMTP id 69so121687plc.5
 for <xen-devel@lists.xenproject.org>; Fri, 14 May 2021 14:53:55 -0700 (PDT)
Received: from ?IPv6:2601:1c2:4f80:d230::1? ([2601:1c2:4f80:d230::1])
 by smtp.gmail.com with ESMTPSA id d23sm4742563pfo.80.2021.05.14.14.53.53
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 14 May 2021 14:53:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 40890c1f-8e7b-43c0-95e5-e590480fa95a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=ZdmFyIX342j4rEeA8lF7p7/HHeBCV5GocOzzYwKVt5I=;
        b=XRa89AgGsiP0mC8r5JL33/pw1by/n2TyWpVmoCmbmKB6uk798YDa6O4FGv5Jods4EC
         2gXgPgt6stQEc6ODhCtmy/vZyUSyYClt133dgpuYu0DhOVFfEIOJpT+JTBic4lwGFNuR
         YAbb8O3lM9mkkYqZgz1B0Y+0yDNBQxcpV89tlmJOWuaDTHOdp5QS/Cy1XFfIrL4DG99G
         iFpoYysl8rKppE6EE3m7yNwDUG+VpuQYSuifO13Mauun1UEOVfcFTny3CZxyVQbhH1Wd
         1bgTYJanO1uXdMazeMNugUzmTVXZmBFx6VcmoGv/XdmYhsAbBkvFxt/QnkhrovB8z3/m
         FshQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=ZdmFyIX342j4rEeA8lF7p7/HHeBCV5GocOzzYwKVt5I=;
        b=S0xSzTxRimnbdcb2RZ93dwvaCud+3cf1dXrIH2JtYqzApNChk3UOL/JrH1ZvIJ+4N4
         C/Nfxef/9cA2Fvf+UKNX5iK5TSXvKU/zWh9jhl8jV/1CSzl8I5NZT9/d2y/NQO9GUvW8
         RUDa5jOzH23EsxnLqVPJoX/Wrnf+VIf/8JPMqaR+3BcE+FrREEe99TOo2KDoaF+Tebds
         n5gMwvS5MC8Zca7NtonN1MWckXDHxAMFXgyFoT8xMkswhiAR5oam6ffYz34vco455yUv
         /8H9A552LF3mpjs5QRHE/pf7bw8iK0W7Y5QbIloNorg1wdmOSMmDXOUGTt9v4WkygxCX
         EGMQ==
X-Gm-Message-State: AOAM530GUjNNFnDCF+tGyUqxr21g7mHCPf98QPL2U4Ta6Pieul0y3518
	tUFn6LDOPdFd0+wmlpzLtOQ=
X-Google-Smtp-Source: ABdhPJysIpxQUCbSJQdoFOhbYkU6zLJQoQzNtjho4BBD1T05Wzk7OUYvY3csmqaNBBSYpb/CF947oA==
X-Received: by 2002:a17:902:e00e:b029:ef:5f1c:18a8 with SMTP id o14-20020a170902e00eb02900ef5f1c18a8mr21109264plo.38.1621029234996;
        Fri, 14 May 2021 14:53:54 -0700 (PDT)
Subject: Re: [PATCH v3 4/5] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1621017334.git.connojdavis@gmail.com>
 <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <97815ecd-7335-9c85-5df8-802ed119d518@gmail.com>
Date: Fri, 14 May 2021 14:53:52 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/14/21 11:53 AM, Connor Davis wrote:
> Add arch-specific makefiles and configs needed to build for
> riscv64. Also add a minimal head.S that is a simple infinite loop.
> head.o can be built with
> 
> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=head.o
> 

I recently realized that the Linux kernel build uses `ARCH=riscv`,
with 32-bit vs 64-bit differentiated by Kconfig options (and __riscv_xlen).
I think `riscv64` was inherited from `arm64`.  This is something I'd
eventually like to move the Xen build to (i.e.,
`XEN_TARGET_ARCH=riscv make`); would it be possible for us to get that
in this series?
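
To make that concrete, the difference would look something like the
following (the `XEN_TARGET_ARCH=riscv` spelling and a Kconfig option
such as `CONFIG_RISCV_64` are hypothetical here, not existing Xen
build options):

```
# Today: word size is baked into the target architecture name.
$ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=head.o

# Linux-style alternative: one arch name, word size selected by
# Kconfig (e.g. CONFIG_RISCV_64=y) and __riscv_xlen.
$ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
```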

...

> diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
> new file mode 100644
> index 0000000000..84cb436dc1
> --- /dev/null
> +++ b/xen/include/asm-riscv/config.h
> @@ -0,0 +1,110 @@
> +/******************************************************************************
> + * config.h
> + *
> + * A Linux-style configuration list.
> + */
> +
> +#ifndef __RISCV_CONFIG_H__
> +#define __RISCV_CONFIG_H__
> +

...

> +
> +#ifdef CONFIG_RISCV_64
> +
> +/*
> + * RISC-V Layout:
> + *   0x0000000000000000 - 0x0000003fffffffff (256GB, L2 slots [0-255])
> + *     Unmapped
> + *   0x0000004000000000 - 0xffffffbfffffffff
> + *     Inaccessible: sv39 only supports 39-bit sign-extended VAs.
> + *   0xffffffc000000000 - 0xffffffc0001fffff (2MB, L2 slot [256])
> + *     Unmapped
> + *   0xffffffc000200000 - 0xffffffc0003fffff (2MB, L2 slot [256])
> + *     Xen text, data, bss
> + *   0xffffffc000400000 - 0xffffffc0005fffff (2MB, L2 slot [256])
> + *     Fixmap: special-purpose 4K mapping slots
> + *   0xffffffc000600000 - 0xffffffc0009fffff (4MB, L2 slot [256])
> + *     Early boot mapping of FDT
> + *   0xffffffc000a00000 - 0xffffffc000bfffff (2MB, L2 slot [256])
> + *     Early relocation address, used when relocating Xen and later
> + *     for livepatch vmap (if compiled in)
> + *   0xffffffc040000000 - 0xffffffc07fffffff (1GB, L2 slot [257])
> + *     VMAP: ioremap and early_ioremap
> + *   0xffffffc080000000 - 0xffffffc13fffffff (3GB, L2 slots [258..260])
> + *     Unmapped
> + *   0xffffffc140000000 - 0xffffffc1bfffffff (2GB, L2 slots [261..262])
> + *     Frametable: 48 bytes per page for 133GB of RAM
> + *   0xffffffc1c0000000 - 0xffffffe1bfffffff (128GB, L2 slots [263..390])
> + *     1:1 direct mapping of RAM
> + *   0xffffffe1c0000000 - 0xffffffffffffffff (121GB, L2 slots [391..511])
> + *     Unmapped
> + */
> +
What is the benefit of moving the layout up to 0xffffffc000000000?

-- 
Bobby Eshleman
SE at Vates SAS


From xen-devel-bounces@lists.xenproject.org Fri May 14 21:56:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 21:56:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127595.239829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhfmy-0004ZW-PV; Fri, 14 May 2021 21:56:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127595.239829; Fri, 14 May 2021 21:56:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhfmy-0004ZP-MN; Fri, 14 May 2021 21:56:08 +0000
Received: by outflank-mailman (input) for mailman id 127595;
 Fri, 14 May 2021 21:56:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhfmx-0004ZF-BP; Fri, 14 May 2021 21:56:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhfmx-00027L-51; Fri, 14 May 2021 21:56:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhfmw-0006qr-Mf; Fri, 14 May 2021 21:56:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhfmw-0003Nd-MC; Fri, 14 May 2021 21:56:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PBq8CQWLVDaiYCYkQW6BWiCc3Xm2xjsazOUG0Gr2vas=; b=AUpwWYslMLUbdudj4lvLy7+RRA
	DW0oQsftrdG8DSGks4wD1PlYctJFxhKNYnu3clRaKrUD6mCrHgRmGC5LxNUmmvF1h/vpU9SpUVQ2q
	E2cMUXNWqAShBp45FJuqsBBPmSLUAZMTYJ6Zp/JhbFPhWxPa3KUGzUl4ylyt1f0SQado=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161947-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 161947: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b82e5721a17325739adef83a029340a63b53d4ad
X-Osstest-Versions-That:
    linux=16022114de9869743d6304290815cdb8a8c7deaa
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 21:56:06 +0000

flight 161947 linux-5.4 real [real]
flight 161951 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161947/
http://logs.test-lab.xenproject.org/osstest/logs/161951/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 161951-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 161918

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161918
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161918
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161918
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161918
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161918
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161918
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161918
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161918
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161918
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161918
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161918
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                b82e5721a17325739adef83a029340a63b53d4ad
baseline version:
 linux                16022114de9869743d6304290815cdb8a8c7deaa

Last test of basis   161918  2021-05-12 12:31:47 Z    2 days
Testing same since   161947  2021-05-14 08:11:45 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alain Volmat <alain.volmat@foss.st.com>
  Alex Deucher <alexander.deucher@amd.com>
  Alex Williamson <alex.williamson@redhat.com>
  Alexander Lobakin <alobakin@pm.me>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Anders Trier Olesen <anders.trier.olesen@gmail.com>
  Andrew Morton <akpm@linux-foundation.org>
  Andrew Scull <ascull@google.com>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Antonio Borneo <antonio.borneo@foss.st.com>
  Archie Pusaka <apusaka@chromium.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Arnd Bergmann <arnd@arndb.de>
  Artur Petrosyan <Arthur.Petrosyan@synopsys.com>
  Arun Easi <aeasi@marvell.com>
  Ashok Raj <ashok.raj@intel.com>
  Athira Rajeev <atrajeev@linux.vnet.ibm.com>
  Badhri Jagan Sridharan <badhri@google.com>
  Bard Liao <yung-chuan.liao@linux.intel.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Boris Brezillon <boris.brezillon@collabora.com>
  Borislav Petkov <bp@suse.de>
  Brian King <brking@linux.vnet.ibm.com>
  Brian Norris <computersforpeace@gmail.com>
  Chanwoo Choi <cw00.choi@samsung.com>
  Chen Huang <chenhuang5@huawei.com>
  Chen Hui <clare.chenhui@huawei.com>
  Chen-Yu Tsai <wens@csie.org>
  Christian Borntraeger <borntraeger@de.ibm.com>
  Christian König <christian.koenig@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christophe Leroy <christophe.leroy@csgroup.eu>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Claudio Imbrenda <imbrenda@linux.ibm.com>
  Colin Ian King <colin.king@canonical.com>
  Cédric Le Goater <clg@kaod.org>
  Dan Carpenter <dan.carpenter@oracle.com>
  David Hildenbrand <david@redhat.com>
  David S. Miller <davem@davemloft.net>
  Devesh Sharma <devesh.sharma@broadcom.com>
  Dong Aisheng <aisheng.dong@nxp.com>
  Eric Dumazet <edumazet@google.com>
  Erwan Le Ray <erwan.leray@foss.st.com>
  Fabian Vogt <fabian@ritter-vogt.de>
  Fabrice Gasnier <fabrice.gasnier@foss.st.com>
  Felix Kuehling <Felix.Kuehling@amd.com>
  Finn Thain <fthain@telegraphics.com.au>
  Florian Fainelli <f.fainelli@gmail.com>
  Geert Uytterhoeven <geert+renesas@glider.be>
  Geert Uytterhoeven <geert@linux-m68k.org>
  gexueyuan <gexueyuan@gmail.com>
  Giovanni Cabiddu <giovanni.cabiddu@intel.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Greg Kurz <groug@kaod.org>
  Gregory CLEMENT <gregory.clement@bootlin.com>
  Hannes Reinecke <hare@suse.de>
  Hans de Goede <hdegoede@redhat.com>
  Hans Verkuil <hverkuil-cisco@xs4all.nl>
  Harry Wentland <harry.wentland@amd.com>
  He Ying <heying24@huawei.com>
  Heikki Krogerus <heikki.krogerus@linux.intel.com>
  Heiko Carstens <hca@linux.ibm.com>
  Heming Zhao <heming.zhao@suse.com>
  Herbert Xu <herbert@gondor.apana.org.au>
  Hulk Robot <hulkrobot@huawei.com>
  Ilya Lipnitskiy <ilya.lipnitskiy@gmail.com>
  Ingo Molnar <mingo@kernel.org>
  Jae Hyun Yoo <jae.hyun.yoo@linux.intel.com>
  Jakub Kicinski <kubakici@wp.pl>
  Jan Glauber <jglauber@digitalocean.com>
  Jane Chu <jane.chu@oracle.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jason Self <jason@bluehome.net>
  Jens Axboe <axboe@kernel.dk>
  Jia Zhou <zhou.jia2@zte.com.cn>
  Jia-Ju Bai <baijiaju1990@gmail.com>
  Jiri Kosina <jkosina@suse.cz>
  Joel Stanley <joel@jms.id.au>
  Johan Hovold <johan@kernel.org>
  Johannes Berg <johannes.berg@intel.com>
  John Cox <jc@kynesim.co.uk>
  John Garry <john.garry@huawei.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathon Reinhart <jonathon.reinhart@gmail.com>
  Jordan Niethe <jniethe5@gmail.com>
  Juergen Gross <jgross@suse.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kalle Valo <kvalo@codeaurora.org>
  Krzysztof Kozlowski <krzk@kernel.org>
  Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
  Kunihiko Hayashi <hayashi.kunihiko@socionext.com>
  Larry Finger <Larry.Finger@lwfinger.net>
  Lee Jones <lee.jones@linaro.org>
  Lin Ma <linma@zju.edu.cn>
  Linus Lüssing <linus.luessing@c0d3.blue>
  Linus Torvalds <torvalds@linux-foundation.org>
  Lukasz Majczak <lma@semihalf.com>
  Lv Yunlong <lyl2019@mail.ustc.edu.cn>
  Maciej W. Rozycki <macro@orcam.me.uk>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Marc Zyngier <maz@kernel.org>
  Marcel Holtmann <marcel@holtmann.org>
  Marek Behún <kabel@kernel.org>
  Mark Brown <broonie@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Martin Schiller <ms@dev.tdt.de>
  Masami Hiramatsu <mhiramat@kernel.org>
  Mauro Carvalho Chehab <mchehab+huawei@kernel.org>
  Maxim Mikityanskiy <maxtram95@gmail.com>
  Meng Li <Meng.Li@windriver.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Kelley <mikelley@microsoft.com>
  Michael Walle <michael@walle.cc>
  Michal Kalderon <michal.kalderon@marvell.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Miquel Raynal <miquel.raynal@bootlin.com>
  Nathan Chancellor <nathan@kernel.org>
  Nicholas Piggin <npiggin@gmail.com>
  Niklas Cassel <niklas.cassel@wdc.com>
  Niklas Söderlund <niklas.soderlund+renesas@ragnatech.se>
  Oliver Neukum <oneukum@suse.com>
  Or Cohen <orcohen@paloaltonetworks.com>
  Otavio Pontes <otavio.pontes@intel.com>
  Pali Rohár <pali@kernel.org>
  Pan Bian <bianpan2016@163.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Durrant <pdurrant@amazon.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Petr Machata <petrm@nvidia.com>
  Philip Soares <philips@netisense.com>
  Phillip Potter <phil@philpotter.co.uk>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Ping-Ke Shih <pkshih@realtek.com>
  Potnuri Bharat Teja <bharat@chelsio.com>
  Qinglang Miao <miaoqinglang@huawei.com>
  Quanyang Wang <quanyang.wang@windriver.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Rafał Miłecki <rafal@milecki.pl>
  Rander Wang <rander.wang@intel.com>
  Randy Dunlap <rdunlap@infradead.org>
  Randy Dunlap <rdunlap@infradead.org> # build-tested
  Richard Weinberger <richard@nod.at>
  Sabrina Dubroca <sd@queasysnail.net>
  Sagi Grimberg <sagi@grimberg.me>
  Salil Mehta <salil.mehta@huawei.com>
  Sameer Pujar <spujar@nvidia.com>
  Sami Loone <sami@loone.fi>
  Sasha Levin <sashal@kernel.org>
  Sean Christopherson <seanjc@google.com>
  Sebastian Reichel <sebastian.reichel@collabora.com>
  Sergey Shtylyov <s.shtylyov@omprussia.ru>
  Shawn Guo <shawn.guo@linaro.org>
  Shengjiu Wang <shengjiu.wang@nxp.com>
  Shiraz Saleem <shiraz.saleem@intel.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Sindhu Devale <sindhu.devale@intel.com>
  Song Liu <song@kernel.org>
  Song Liu <songliubraving@fb.com>
  Srinivas Kandagatla <srinivas.kandagatla@linaro.org>
  Stanislav Yakovlev <stas.yakovlev@gmail.com>
  Stefano Garzarella <sgarzare@redhat.com>
  Steffen Dirkwinkel <s.dirkwinkel@beckhoff.com>
  Stephen Boyd <sboyd@kernel.org>
  Sudhakar Panneerselvam <sudhakar.panneerselvam@oracle.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  syzbot <syzbot+43e93968b964e369db0b@syzkaller.appspotmail.com>
  Taehee Yoo <ap420073@gmail.com>
  Takashi Iwai <tiwai@suse.de>
  Tao Ren <rentao.bupt@gmail.com>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Toke Høiland-Jørgensen <toke@redhat.com>
  Tomasz Maciej Nowak <tmn505@gmail.com>
  Tong Zhang <ztong0001@gmail.com>
  Tyrel Datwyler <tyreld@linux.ibm.com>
  Vinod Koul <vkoul@kernel.org>
  Viresh Kumar <viresh.kumar@linaro.org>
  Vitaly Chikunov <vt@altlinux.org>
  Vladimir Barinov <vladimir.barinov@cogentembedded.com>
  Waiman Long <longman@redhat.com>
  Wang Li <wangli74@huawei.com>
  Wang Wensheng <wangwensheng4@huawei.com>
  Wei Liu <wei.liu@kernel.org>
  Will Deacon <will@kernel.org>
  William A. Kennington III <wak@google.com>
  William Breathitt Gray <vilhelm.gray@gmail.com>
  Wolfram Sang <wsa@kernel.org>
  Xie He <xie.he.0141@gmail.com>
  Xin Long <lucien.xin@gmail.com>
  Yang Yingliang <yangyingliang@huawei.com>
  Yaqii Wu <yaqii.wu@mediatek.com>
  Ye Bin <yebin10@huawei.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yoshihiro Shimoda <yoshihiro.shimoda.uh@renesas.com>
  Zhao Heming <heming.zhao@suse.com>
  Zhenyu Wang <zhenyuw@linux.intel.com>
  Álvaro Fernández Rojas <noltari@gmail.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   16022114de98..b82e5721a173  b82e5721a17325739adef83a029340a63b53d4ad -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Fri May 14 23:13:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 23:13:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127609.239843 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhgzz-0003dA-JM; Fri, 14 May 2021 23:13:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127609.239843; Fri, 14 May 2021 23:13:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhgzz-0003d3-FY; Fri, 14 May 2021 23:13:39 +0000
Received: by outflank-mailman (input) for mailman id 127609;
 Fri, 14 May 2021 23:13:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhgzy-0003cn-VM; Fri, 14 May 2021 23:13:38 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhgzy-0003Nl-GX; Fri, 14 May 2021 23:13:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhgzy-0001H9-69; Fri, 14 May 2021 23:13:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhgzy-0005fi-5j; Fri, 14 May 2021 23:13:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ClpB4QKAW+JtzfJSxhdS5bU20C7Aoswqmh1AZyIYS4s=; b=43LrGXOhD/pmhcJjP9ise6bnqK
	g2U95UQ9eQs8CRW4lc4l3IIeWagJr5nk5kHbpwPV/x4fxzi+jdUKgjUIv9UWxxOf27sUAu3T1zCqm
	4d5nrvqoMO1mwQARsTK0ye3/6x6Wc69NMaitud5XyF+DYZ/iI4kyark48UzWi6XB7GBs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161948-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161948: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    linux-linus:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=315d99318179b9cd5077ccc9f7f26a164c9fa998
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 14 May 2021 23:13:38 +0000

flight 161948 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161948/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-examine    4 memdisk-try-append fail in 161940 pass in 161948
 test-armhf-armhf-xl-rtds     14 guest-start      fail in 161940 pass in 161948
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 18 guest-start/debianhvm.repeat fail pass in 161940

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 152332
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                315d99318179b9cd5077ccc9f7f26a164c9fa998
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  287 days
Failing since        152366  2020-08-01 20:49:34 Z  286 days  479 attempts
Testing same since   161940  2021-05-13 21:40:37 Z    1 days    2 attempts

------------------------------------------------------------
6046 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           fail    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1640334 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 14 23:26:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 14 May 2021 23:26:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127614.239856 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhhC5-00057G-PJ; Fri, 14 May 2021 23:26:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127614.239856; Fri, 14 May 2021 23:26:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhhC5-000579-M9; Fri, 14 May 2021 23:26:09 +0000
Received: by outflank-mailman (input) for mailman id 127614;
 Fri, 14 May 2021 23:26:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=saLk=KJ=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1lhhC4-000573-8p
 for xen-devel@lists.xenproject.org; Fri, 14 May 2021 23:26:08 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 528bff05-dd5f-482f-a94c-ca0a1d337fa8;
 Fri, 14 May 2021 23:26:07 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id D13C861444;
 Fri, 14 May 2021 23:26:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 528bff05-dd5f-482f-a94c-ca0a1d337fa8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621034766;
	bh=oza9Ea4xWnj3E1LE9L0Fa1SUx3Ez8tkeVj9BnvB7K1g=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=eVL1V+nfF17HcvKvu6pyfgllNvPvky/vp70qUMTPJGu5aRmKlxOhy0rzwaDW4n7C3
	 RTek3fIXrTmCk3qjbGNyrJ8ZYNgy8ntD2hE2q6nV15f5VczpdqPYFfftN0q3nY8VMU
	 WEGPQtGoc37V9pgCcEPUpTDz/gTtfEcNZE0rKq4Z9ns9JzZVNUlGyuHWjUst52F0wQ
	 bdXioet10SxfJAjg41NrZKMsUxch52BOTXjd9co9T+RAbPeVFqRmdqLaAm/DjFdbFi
	 mFjeMmx6MGZjRGJr3DCQPMfFi1Y5leRh+b9qf+5h7tjC2T7FGCcj8yo4/LKY1cFBhn
	 n2Yy82bg1cDRw==
Date: Fri, 14 May 2021 16:25:58 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Wei Liu <wei.liu2@citrix.com>, Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Hongyan Xia <hongyxia@amazon.com>, Julien Grall <jgrall@amazon.com>, 
    Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>, 
    =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
    Hongyan Xia <hongyax@amazon.com>
Subject: Re: [PATCH RFCv2 12/15] xen/arm: add Persistent Map (PMAP)
 infrastructure
In-Reply-To: <20210425201318.15447-13-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105141602330.14426@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-13-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: multipart/mixed; BOUNDARY="8323329-2105078586-1621033812=:14426"
Content-ID: <alpine.DEB.2.21.2105141610170.14426@sstabellini-ThinkPad-T480s>


On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Wei Liu <wei.liu2@citrix.com>
> 
> The basic idea is like Persistent Kernel Map (PKMAP) in Linux. We
> pre-populate all the relevant page tables before the system is fully
> set up.
> 
> We will need it on Arm in order to rework the arm64 version of
> xenheap_setup_mappings() as we may need to use pages allocated from
> the boot allocator before they are effectively mapped.
> 
> This infrastructure is not lock-protected therefore can only be used
> before smpboot. After smpboot, map_domain_page() has to be used.
> 
> This is based on the x86 version [1] that was originally implemented
> by Wei Liu.
> 
> Take the opportunity to switch the parameter attr from unsigned to
> unsigned int.
> 
> [1] <e92da4ad6015b6089737fcccba3ec1d6424649a5.1588278317.git.hongyxia@amazon.com>
> 
> Signed-off-by: Wei Liu <wei.liu2@citrix.com>
> Signed-off-by: Hongyan Xia <hongyxia@amazon.com>
> [julien: Adapted for Arm]
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - New patch
> 
> Cc: Jan Beulich <jbeulich@suse.com>
> Cc: Wei Liu <wl@xen.org>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> Cc: Roger Pau Monné <roger.pau@citrix.com>
> Cc: Hongyan Xia <hongyax@amazon.com>
> 
> This is mostly a copy of the PMAP infrastructure currently discussed
> on x86. The only difference is how the page-tables are updated.
> 
> I think we want to consider to provide a common infrastructure. But
> I haven't done it yet to gather feedback on the overall series
> first.
> ---
>  xen/arch/arm/Makefile        |   1 +
>  xen/arch/arm/mm.c            |   7 +--
>  xen/arch/arm/pmap.c          | 101 +++++++++++++++++++++++++++++++++++
>  xen/include/asm-arm/config.h |   2 +
>  xen/include/asm-arm/lpae.h   |   8 +++
>  xen/include/asm-arm/pmap.h   |  10 ++++
>  6 files changed, 123 insertions(+), 6 deletions(-)
>  create mode 100644 xen/arch/arm/pmap.c
>  create mode 100644 xen/include/asm-arm/pmap.h
> 
> diff --git a/xen/arch/arm/Makefile b/xen/arch/arm/Makefile
> index ca75f1040dcc..2196ada24941 100644
> --- a/xen/arch/arm/Makefile
> +++ b/xen/arch/arm/Makefile
> @@ -40,6 +40,7 @@ obj-y += percpu.o
>  obj-y += platform.o
>  obj-y += platform_hypercall.o
>  obj-y += physdev.o
> +obj-y += pmap.init.o
>  obj-y += processor.o
>  obj-y += psci.o
>  obj-y += setup.o
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index d090fdfd5994..5e713b599611 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -288,12 +288,7 @@ void dump_hyp_walk(vaddr_t addr)
>      dump_pt_walk(ttbr, addr, HYP_PT_ROOT_LEVEL, 1);
>  }
>  
> -/*
> - * Standard entry type that we'll use to build Xen's own pagetables.
> - * We put the same permissions at every level, because they're ignored
> - * by the walker in non-leaf entries.
> - */
> -static inline lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned attr)
> +lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr)
>  {
>      lpae_t e = (lpae_t) {
>          .pt = {
> diff --git a/xen/arch/arm/pmap.c b/xen/arch/arm/pmap.c
> new file mode 100644
> index 000000000000..702b1bde982d
> --- /dev/null
> +++ b/xen/arch/arm/pmap.c
> @@ -0,0 +1,101 @@
> +#include <xen/init.h>
> +#include <xen/mm.h>
> +
> +#include <asm/bitops.h>
> +#include <asm/flushtlb.h>
> +#include <asm/pmap.h>
> +
> +/*
> + * To be able to use FIXMAP_PMAP_BEGIN.
> + * XXX: move fixmap definition in a separate header
> + */

yes please :-)


> +#include <xen/acpi.h>
> +
> +/*
> + * Simple mapping infrastructure to map / unmap pages in fixed map.
> + * This is used to set up the page table for mapcache, which is used
> + * by map domain page infrastructure.
> + *
> + * This structure is not protected by any locks, so it must not be used after
> + * smp bring-up.
> + */
> +
> +/* Bitmap to track which slot is used */
> +static unsigned long __initdata inuse;
> +
> +/* XXX: Find an header to declare it */

yep


> +extern lpae_t xen_fixmap[LPAE_ENTRIES];
> +
> +void *__init pmap_map(mfn_t mfn)
> +{
> +    unsigned long flags;
> +    unsigned int idx;
> +    vaddr_t linear;
> +    unsigned int slot;
> +    lpae_t *entry, pte;
> +
> +    BUILD_BUG_ON(sizeof(inuse) * BITS_PER_LONG < NUM_FIX_PMAP);
> +
> +    ASSERT(system_state < SYS_STATE_smp_boot);

One small concern here is that we have been using SYS_STATE_early_boot
to switch implementation of things like xen_map_table. Between
SYS_STATE_early_boot and SYS_STATE_smp_boot there is SYS_STATE_boot.

I guess I am wondering if instead of three potentially different mapping
functions (<= SYS_STATE_early_boot, < SYS_STATE_smp_boot, >=
SYS_STATE_smp_boot) we can get away with only two?
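To make the two-tier alternative concrete, here is a minimal standalone model — not the real Xen definitions, enum values and the helper name are purely illustrative — of dispatching on the boot state, with SYS_STATE_boot sharing the pre-SMP path instead of getting a third implementation:

```c
#include <assert.h>
#include <string.h>

/* Illustrative model of Xen's boot-state ordering; not the real definitions. */
enum sys_state {
    SYS_STATE_early_boot,
    SYS_STATE_boot,
    SYS_STATE_smp_boot,
    SYS_STATE_active,
};

static enum sys_state system_state = SYS_STATE_early_boot;

/*
 * Two-tier dispatch: everything strictly before SMP bring-up uses the
 * PMAP path; from smp_boot onwards map_domain_page() takes over.
 * SYS_STATE_boot deliberately falls into the first tier.
 */
static const char *pick_mapper(void)
{
    return system_state < SYS_STATE_smp_boot ? "pmap_map" : "map_domain_page";
}
```

Whether the early_boot-only users such as xen_map_table can really share the pmap path is exactly the open question.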



> +    local_irq_save(flags);
> +
> +    idx = find_first_zero_bit(&inuse, NUM_FIX_PMAP);
> +    if ( idx == NUM_FIX_PMAP )
> +        panic("Out of PMAP slots\n");
> +
> +    __set_bit(idx, &inuse);
> +
> +    slot = idx + FIXMAP_PMAP_BEGIN;
> +    ASSERT(slot >= FIXMAP_PMAP_BEGIN && slot <= FIXMAP_PMAP_END);
> +
> +    linear = FIXMAP_ADDR(slot);
> +    /*
> +     * We cannot use set_fixmap() here. We use PMAP when there is no direct map,
> +     * so map_pages_to_xen() called by set_fixmap() needs to map pages on
> +     * demand, which then calls pmap() again, resulting in a loop. Modify the
> +     * PTEs directly instead. The same is true for pmap_unmap().
> +     */
> +    entry = &xen_fixmap[third_table_offset(linear)];
> +
> +    ASSERT(!lpae_is_valid(*entry));
> +
> +    pte = mfn_to_xen_entry(mfn, PAGE_HYPERVISOR_RW);
> +    pte.pt.table = 1;
> +    write_pte(entry, pte);
> +
> +    local_irq_restore(flags);
> +
> +    return (void *)linear;
> +}
> +
> +void __init pmap_unmap(const void *p)
> +{
> +    unsigned long flags;
> +    unsigned int idx;
> +    lpae_t *entry;
> +    lpae_t pte = { 0 };
> +    unsigned int slot = third_table_offset((vaddr_t)p);
> +
> +    ASSERT(system_state < SYS_STATE_smp_boot);
> +    ASSERT(slot >= FIXMAP_PMAP_BEGIN && slot <= FIXMAP_PMAP_END);
> +
> +    idx = slot - FIXMAP_PMAP_BEGIN;
> +    local_irq_save(flags);
> +
> +    __clear_bit(idx, &inuse);
> +    entry = &xen_fixmap[third_table_offset((vaddr_t)p)];
> +    write_pte(entry, pte);
> +    flush_xen_tlb_range_va_local((vaddr_t)p, PAGE_SIZE);
> +
> +    local_irq_restore(flags);
> +}
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> diff --git a/xen/include/asm-arm/config.h b/xen/include/asm-arm/config.h
> index 85d4a510ce8a..35050855b6e1 100644
> --- a/xen/include/asm-arm/config.h
> +++ b/xen/include/asm-arm/config.h
> @@ -180,6 +180,8 @@
>  #define FIXMAP_MISC     1  /* Ephemeral mappings of hardware */
>  #define FIXMAP_ACPI_BEGIN  2  /* Start mappings of ACPI tables */
>  #define FIXMAP_ACPI_END    (FIXMAP_ACPI_BEGIN + NUM_FIXMAP_ACPI_PAGES - 1)  /* End mappings of ACPI tables */
> +#define FIXMAP_PMAP_BEGIN (FIXMAP_ACPI_END + 1)
> +#define FIXMAP_PMAP_END (FIXMAP_PMAP_BEGIN + NUM_FIX_PMAP - 1)
>  
>  #define NR_hypercalls 64
>  
> diff --git a/xen/include/asm-arm/lpae.h b/xen/include/asm-arm/lpae.h
> index 310f5225e056..81fd482ab2ce 100644
> --- a/xen/include/asm-arm/lpae.h
> +++ b/xen/include/asm-arm/lpae.h
> @@ -4,6 +4,7 @@
>  #ifndef __ASSEMBLY__
>  
>  #include <xen/page-defs.h>
> +#include <xen/mm-frame.h>
>  
>  /*
>   * WARNING!  Unlike the x86 pagetable code, where l1 is the lowest level and
> @@ -168,6 +169,13 @@ static inline bool lpae_is_superpage(lpae_t pte, unsigned int level)
>          third_table_offset(addr)            \
>      }
>  
> +/*
> + * Standard entry type that we'll use to build Xen's own pagetables.
> + * We put the same permissions at every level, because they're ignored
> + * by the walker in non-leaf entries.
> + */
> +lpae_t mfn_to_xen_entry(mfn_t mfn, unsigned int attr);
> +
>  #endif /* __ASSEMBLY__ */
>  
>  /*
> diff --git a/xen/include/asm-arm/pmap.h b/xen/include/asm-arm/pmap.h
> new file mode 100644
> index 000000000000..8e1dce93f8e4
> --- /dev/null
> +++ b/xen/include/asm-arm/pmap.h
> @@ -0,0 +1,10 @@
> +#ifndef __ASM_PMAP_H__
> +#define __ARM_PMAP_H__

ASM/ARM


> +/* Large enough for mapping 5 levels of page tables with some headroom */
> +#define NUM_FIX_PMAP 8
> +
> +void *pmap_map(mfn_t mfn);
> +void pmap_unmap(const void *p);

> +#endif	/* __ASM_PMAP_H__ */
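As a sanity check on the slot bookkeeping quoted above, here is a standalone model of the inuse bitmap handling in pmap_map()/pmap_unmap(), stripped of the page-table writes and IRQ masking. The helper names are invented for this sketch; the real code uses find_first_zero_bit(), __set_bit() and __clear_bit():

```c
#include <assert.h>

#define NUM_FIX_PMAP 8

/* Bitmap tracking which of the NUM_FIX_PMAP fixmap slots are in use. */
static unsigned long inuse;

/* Index of the first clear bit, or NUM_FIX_PMAP if every slot is taken. */
static unsigned int find_first_zero_slot(void)
{
    for ( unsigned int i = 0; i < NUM_FIX_PMAP; i++ )
        if ( !(inuse & (1UL << i)) )
            return i;
    return NUM_FIX_PMAP;
}

/* Claim the lowest free slot; returns NUM_FIX_PMAP when exhausted. */
static unsigned int slot_alloc(void)
{
    unsigned int idx = find_first_zero_slot();

    if ( idx < NUM_FIX_PMAP )
        inuse |= 1UL << idx;

    return idx;
}

/* Release a previously claimed slot. */
static void slot_free(unsigned int idx)
{
    assert(idx < NUM_FIX_PMAP);
    inuse &= ~(1UL << idx);
}
```

The real pmap_map() panics instead of returning NUM_FIX_PMAP, and converts the index into a fixmap slot via idx + FIXMAP_PMAP_BEGIN before writing the PTE directly.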


From xen-devel-bounces@lists.xenproject.org Fri May 14 23:36:00 2021
Date: Fri, 14 May 2021 16:35:52 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 13/15] xen/arm: mm: Use the PMAP helpers in
 xen_{,un}map_table()
In-Reply-To: <20210425201318.15447-14-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105141628130.14426@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-14-julien@xen.org>

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> During early boot, it is not possible to use xen_{,un}map_table()
> if the page tables are not residing in the Xen binary.
> 
> This is a blocker for switching some of the helpers to xen_pt_update(),
> as we may need to allocate extra page tables and access them before
> the domheap has been initialized (see setup_xenheap_mappings()).
> 
> xen_{,un}map_table() are now updated to use the PMAP helpers for early
> boot map/unmap. Note that the special case for page-tables residing
> in the Xen binary has been dropped, because it is "complex" and was
> only added as a workaround in 8d4f1b8878e0 ("xen/arm: mm: Allow
> generic xen page-tables helpers to be called early").
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

In terms of boot stages:

- SYS_STATE_early_boot --> use pmap_map
- greater than SYS_STATE_early_boot --> map_domain_page

Actually, pmap would be able to work as late as SYS_STATE_boot, but we
don't need it to. Is it worth simplifying things by changing the checks
in the previous patch to be against SYS_STATE_early_boot?

In any case:

Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>



> ---
>     Changes in v2:
>         - New patch
> ---
>  xen/arch/arm/mm.c | 33 +++++++++------------------------
>  1 file changed, 9 insertions(+), 24 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index 5e713b599611..f5768f2d4a81 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -41,6 +41,7 @@
>  #include <xen/sizes.h>
>  #include <xen/libfdt/libfdt.h>
>  
> +#include <asm/pmap.h>
>  #include <asm/setup.h>
>  
>  /* Override macros from asm/page.h to make them work with mfn_t */
> @@ -967,27 +968,11 @@ void *ioremap(paddr_t pa, size_t len)
>  static lpae_t *xen_map_table(mfn_t mfn)
>  {
>      /*
> -     * We may require to map the page table before map_domain_page() is
> -     * useable. The requirements here is it must be useable as soon as
> -     * page-tables are allocated dynamically via alloc_boot_pages().
> -     *
> -     * We need to do the check on physical address rather than virtual
> -     * address to avoid truncation on Arm32. Therefore is_kernel() cannot
> -     * be used.
> +     * During early boot, map_domain_page() may be unusable. Use the
> +     * PMAP to temporarily map a page-table.
>       */
>      if ( system_state == SYS_STATE_early_boot )
> -    {
> -        if ( is_xen_fixed_mfn(mfn) )
> -        {
> -            /*
> -             * It is fine to demote the type because the size of Xen
> -             * will always fit in vaddr_t.
> -             */
> -            vaddr_t offset = mfn_to_maddr(mfn) - virt_to_maddr(&_start);
> -
> -            return (lpae_t *)(XEN_VIRT_START + offset);
> -        }
> -    }
> +        return pmap_map(mfn);
>  
>      return map_domain_page(mfn);
>  }
> @@ -996,12 +981,12 @@ static void xen_unmap_table(const lpae_t *table)
>  {
>      /*
>       * During early boot, xen_map_table() will not use map_domain_page()
> -     * for page-tables residing in Xen binary. So skip the unmap part.
> +     * but the PMAP.
>       */
> -    if ( system_state == SYS_STATE_early_boot && is_kernel(table) )
> -        return;
> -
> -    unmap_domain_page(table);
> +    if ( system_state == SYS_STATE_early_boot )
> +        pmap_unmap(table);
> +    else
> +        unmap_domain_page(table);


From xen-devel-bounces@lists.xenproject.org Fri May 14 23:47:19 2021
Subject: Re: [PATCH v3 4/5] xen: Add files needed for minimal riscv build
To: Bob Eshleman <bobbyeshleman@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1621017334.git.connojdavis@gmail.com>
 <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
 <97815ecd-7335-9c85-5df8-802ed119d518@gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <fcc06468-3249-6e6a-dfbd-fc9f69a3de2b@gmail.com>
Date: Fri, 14 May 2021 17:47:26 -0600
In-Reply-To: <97815ecd-7335-9c85-5df8-802ed119d518@gmail.com>


On 5/14/21 3:53 PM, Bob Eshleman wrote:
> On 5/14/21 11:53 AM, Connor Davis wrote:
>> Add arch-specific makefiles and configs needed to build for
>> riscv64. Also add a minimal head.S that is a simple infinite loop.
>> head.o can be built with
>>
>> $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=head.o
>>
> I recently realized that the Linux kernel build uses `ARCH=riscv`,
> with 32 vs 64 being differentiated by Kconfig options (and __riscv_xlen).
> I think `riscv64` was inherited from `arm64`.  This is something I'd
> like to eventually change the Xen build to as well (i.e.,
> `XEN_TARGET_ARCH=riscv make`).  Would it be possible for us to get that
> in this series?

Sure, I can do that

>
>> diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
>> new file mode 100644
>> index 0000000000..84cb436dc1
>> --- /dev/null
>> +++ b/xen/include/asm-riscv/config.h
>> @@ -0,0 +1,110 @@
>> +/******************************************************************************
>> + * config.h
>> + *
>> + * A Linux-style configuration list.
>> + */
>> +
>> +#ifndef __RISCV_CONFIG_H__
>> +#define __RISCV_CONFIG_H__
>> +
> ...
>
>> +
>> +#ifdef CONFIG_RISCV_64
>> +
>> +/*
>> + * RISC-V Layout:
>> + *   0x0000000000000000 - 0x0000003fffffffff (256GB, L2 slots [0-255])
>> + *     Unmapped
>> + *   0x0000004000000000 - 0xffffffbfffffffff
>> + *     Inaccessible: sv39 only supports 39-bit sign-extended VAs.
>> + *   0xffffffc000000000 - 0xffffffc0001fffff (2MB, L2 slot [256])
>> + *     Unmapped
>> + *   0xffffffc000200000 - 0xffffffc0003fffff (2MB, L2 slot [256])
>> + *     Xen text, data, bss
>> + *   0xffffffc000400000 - 0xffffffc0005fffff (2MB, L2 slot [256])
>> + *     Fixmap: special-purpose 4K mapping slots
>> + *   0xffffffc000600000 - 0xffffffc0009fffff (4MB, L2 slot [256])
>> + *     Early boot mapping of FDT
>> + *   0xffffffc000a00000 - 0xffffffc000bfffff (2MB, L2 slot [256])
>> + *     Early relocation address, used when relocating Xen and later
>> + *     for livepatch vmap (if compiled in)
>> + *   0xffffffc040000000 - 0xffffffc07fffffff (1GB, L2 slot [257])
>> + *     VMAP: ioremap and early_ioremap
>> + *   0xffffffc080000000 - 0xffffffc13fffffff (3GB, L2 slots [258..260])
>> + *     Unmapped
>> + *   0xffffffc140000000 - 0xffffffc1bfffffff (2GB, L2 slots [261..262])
>> + *     Frametable: 48 bytes per page for 133GB of RAM
>> + *   0xffffffc1c0000000 - 0xffffffe1bfffffff (128GB, L2 slots [263..390])
>> + *     1:1 direct mapping of RAM
>> + *   0xffffffe1c0000000 - 0xffffffffffffffff (121GB, L2 slots [391..511])
>> + *     Unmapped
>> + */
>> +
> What is the benefit of moving the layout up to 0xffffffc000000000?

I thought it made the most sense to use the upper half since Xen is
privileged, and privileged code is typically mapped in the upper half,
at least on x86. I'm happy to move it down if that would be preferred.


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Fri May 14 23:51:16 2021
Date: Fri, 14 May 2021 16:51:11 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFCv2 14/15] xen/arm: mm: Rework
 setup_xenheap_mappings()
In-Reply-To: <20210425201318.15447-15-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105141646340.14426@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-15-julien@xen.org>

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> A few issues have been reported with setup_xenheap_mappings() over the
> last couple of years. The main ones are:
>     - It will break on platforms supporting more than 512GB of RAM,
>       because the memory allocated by the boot allocator is not yet
>       mapped.
>     - Aligning all the regions to 1GB may lead to unexpected results,
>       because we may alias non-cacheable regions (such as device or
>       reserved regions).
> 
> map_pages_to_xen() was recently reworked to allow superpage mappings and
> deal with the use of page-tables before they are mapped.
> 
> Most of the code in setup_xenheap_mappings() is now replaced with a
> single call to map_pages_to_xen().
> 
> This also requires re-ordering the steps in setup_mm() so that the
> regions are given to the boot allocator first, and then the xenheap
> mappings are set up.

I know this is paranoia, but doesn't this introduce a requirement on the
size of the first bank in bootinfo.mem.bank[]?

It should be at least 8KB?

I know it is unlikely, but it is theoretically possible to have a first
bank of just 1KB.

I think we should write the requirement down in an in-code comment. Or,
better, maybe we should mention it in the panic message below?


> Note that the 1GB alignment is not yet removed.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - New patch
> 
>     TODO:
>         - Remove the 1GB alignment
>         - Add support for setting the contiguous bit
> ---
>  xen/arch/arm/mm.c    | 60 ++++----------------------------------------
>  xen/arch/arm/setup.c | 10 ++++++--

I love it!


>  2 files changed, 13 insertions(+), 57 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index f5768f2d4a81..c49403b687f5 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -143,17 +143,6 @@ static DEFINE_PAGE_TABLE(cpu0_pgtable);
>  static DEFINE_PAGE_TABLES(cpu0_dommap, DOMHEAP_SECOND_PAGES);
>  #endif
>  
> -#ifdef CONFIG_ARM_64
> -/* The first page of the first level mapping of the xenheap. The
> - * subsequent xenheap first level pages are dynamically allocated, but
> - * we need this one to bootstrap ourselves. */
> -static DEFINE_PAGE_TABLE(xenheap_first_first);
> -/* The zeroeth level slot which uses xenheap_first_first. Used because
> - * setup_xenheap_mappings otherwise relies on mfn_to_virt which isn't
> - * valid for a non-xenheap mapping. */
> -static __initdata int xenheap_first_first_slot = -1;
> -#endif
> -
>  /* Common pagetable leaves */
>  /* Second level page tables.
>   *
> @@ -818,9 +807,9 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
>  void __init setup_xenheap_mappings(unsigned long base_mfn,
>                                     unsigned long nr_mfns)
>  {
> -    lpae_t *first, pte;
>      unsigned long mfn, end_mfn;
>      vaddr_t vaddr;
> +    int rc;
>  
>      /* Align to previous 1GB boundary */
>      mfn = base_mfn & ~((FIRST_SIZE>>PAGE_SHIFT)-1);
> @@ -846,49 +835,10 @@ void __init setup_xenheap_mappings(unsigned long base_mfn,
>       */
>      vaddr = (vaddr_t)__mfn_to_virt(base_mfn) & FIRST_MASK;
>  
> -    while ( mfn < end_mfn )
> -    {
> -        int slot = zeroeth_table_offset(vaddr);
> -        lpae_t *p = &xen_pgtable[slot];
> -
> -        if ( p->pt.valid )
> -        {
> -            /* mfn_to_virt is not valid on the 1st 1st mfn, since it
> -             * is not within the xenheap. */
> -            first = slot == xenheap_first_first_slot ?
> -                xenheap_first_first : mfn_to_virt(lpae_get_mfn(*p));
> -        }
> -        else if ( xenheap_first_first_slot == -1)
> -        {
> -            /* Use xenheap_first_first to bootstrap the mappings */
> -            first = xenheap_first_first;
> -
> -            pte = pte_of_xenaddr((vaddr_t)xenheap_first_first);
> -            pte.pt.table = 1;
> -            write_pte(p, pte);
> -
> -            xenheap_first_first_slot = slot;
> -        }
> -        else
> -        {
> -            mfn_t first_mfn = alloc_boot_pages(1, 1);
> -
> -            clear_page(mfn_to_virt(first_mfn));
> -            pte = mfn_to_xen_entry(first_mfn, MT_NORMAL);
> -            pte.pt.table = 1;
> -            write_pte(p, pte);
> -            first = mfn_to_virt(first_mfn);
> -        }
> -
> -        pte = mfn_to_xen_entry(_mfn(mfn), MT_NORMAL);
> -        /* TODO: Set pte.pt.contig when appropriate. */
> -        write_pte(&first[first_table_offset(vaddr)], pte);
> -
> -        mfn += FIRST_SIZE>>PAGE_SHIFT;
> -        vaddr += FIRST_SIZE;
> -    }
> -
> -    flush_xen_tlb_local();
> +    rc = map_pages_to_xen(vaddr, _mfn(mfn), end_mfn - mfn,
> +                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
> +    if ( rc )
> +        panic("Unable to setup the xenheap mappings.\n");

This is the panic I was talking about


>  }
>  #endif
>  
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 00aad1c194b9..0993a4bb52d4 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -761,8 +761,11 @@ static void __init setup_mm(void)
>          ram_start = min(ram_start,bank_start);
>          ram_end = max(ram_end,bank_end);
>  
> -        setup_xenheap_mappings(bank_start>>PAGE_SHIFT, bank_size>>PAGE_SHIFT);
> -
> +        /*
> +         * Add the region to the boot allocator first, so we can use
> +         * some to allocate page-tables for setting up the xenheap
> +         * mappings.
> +         */
>          s = bank_start;
>          while ( s < bank_end )
>          {
> @@ -781,6 +784,9 @@ static void __init setup_mm(void)
>              fw_unreserved_regions(s, e, init_boot_pages, 0);
>              s = n;
>          }
> +
> +        setup_xenheap_mappings(bank_start >> PAGE_SHIFT,
> +                               bank_size >> PAGE_SHIFT);
>      }
>  
>      total_pages += ram_size >> PAGE_SHIFT;
> -- 
> 2.17.1
> 


From xen-devel-bounces@lists.xenproject.org Fri May 14 23:54:19 2021
Subject: Re: [PATCH v3 5/5] automation: Add container for riscv64 builds
To: Bob Eshleman <bobbyeshleman@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair23@gmail.com>,
 Doug Goldstein <cardoe@cardoe.com>
References: <cover.1621017334.git.connojdavis@gmail.com>
 <5a78ff425e45588da5c97c68e94275b649346012.1621017334.git.connojdavis@gmail.com>
 <7efa3461-b8f0-c93b-95a2-c596a6dc2c1e@gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <eed7d4e6-6efb-c166-b947-0163df864f8b@gmail.com>
Date: Fri, 14 May 2021 17:54:25 -0600
In-Reply-To: <7efa3461-b8f0-c93b-95a2-c596a6dc2c1e@gmail.com>


On 5/14/21 3:01 PM, Bob Eshleman wrote:
> On 5/14/21 11:53 AM, Connor Davis wrote:
>> +
>> +# There is a regression in GDB that causes an assertion error
>> +# when setting breakpoints, use this commit until it is fixed!
>> +RUN git clone --recursive -j$(nproc) --progress https://github.com/riscv/riscv-gnu-toolchain && \
>> +    cd riscv-gnu-toolchain/riscv-gdb && \
>> +    git checkout 1dd588507782591478882a891f64945af9e2b86c && \
>> +    cd  .. && \
>> +    ./configure --prefix=/opt/riscv && \
>> +    make linux -j$(nproc) && \
>> +    rm -R /riscv-gnu-toolchain
> What do you think about using the RISCV tool chain from the Arch
> repos now?
That sounds much better, will update
>
> I've also discovered that the symbol table error avoided by the commit
> pin can be worked around by removing already-loaded symbols with
> `file` (no args) prior to calling `file path/to/file` to load new
> ones.  So if people still want to use the container for
> development, they can still use the gdb installed by pacman
> (with the symbols caveat).
>
Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Sat May 15 00:03:10 2021
Date: Fri, 14 May 2021 17:02:59 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
To: Julien Grall <julien@xen.org>
cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFCv2 15/15] xen/arm: mm: Re-implement setup_frame_table_mappings()
 with map_pages_to_xen()
In-Reply-To: <20210425201318.15447-16-julien@xen.org>
Message-ID: <alpine.DEB.2.21.2105141658510.14426@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-16-julien@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sun, 25 Apr 2021, Julien Grall wrote:
> From: Julien Grall <julien.grall@arm.com>
> 
> Now that map_pages_to_xen() has been extended to support 2MB mappings,
> we can replace the create_mappings() calls with map_pages_to_xen() calls.
> 
> This has the advantage of removing the difference between the 32-bit and
> 64-bit code.
> 
> Lastly, remove create_mappings() as there are no remaining callers.
> 
> Signed-off-by: Julien Grall <julien.grall@arm.com>
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - New patch
> 
>     TODO:
>         - Add support for setting the contiguous bit
> ---
>  xen/arch/arm/mm.c | 64 +++++------------------------------------------
>  1 file changed, 6 insertions(+), 58 deletions(-)
> 
> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> index c49403b687f5..5f8ae029dd6d 100644
> --- a/xen/arch/arm/mm.c
> +++ b/xen/arch/arm/mm.c
> @@ -359,40 +359,6 @@ void clear_fixmap(unsigned map)
>      BUG_ON(res != 0);
>  }
>  
> -/* Create Xen's mappings of memory.
> - * Mapping_size must be either 2MB or 32MB.
> - * Base and virt must be mapping_size aligned.
> - * Size must be a multiple of mapping_size.
> - * second must be a contiguous set of second level page tables
> - * covering the region starting at virt_offset. */
> -static void __init create_mappings(lpae_t *second,
> -                                   unsigned long virt_offset,
> -                                   unsigned long base_mfn,
> -                                   unsigned long nr_mfns,
> -                                   unsigned int mapping_size)
> -{
> -    unsigned long i, count;
> -    const unsigned long granularity = mapping_size >> PAGE_SHIFT;
> -    lpae_t pte, *p;
> -
> -    ASSERT((mapping_size == MB(2)) || (mapping_size == MB(32)));
> -    ASSERT(!((virt_offset >> PAGE_SHIFT) % granularity));
> -    ASSERT(!(base_mfn % granularity));
> -    ASSERT(!(nr_mfns % granularity));
> -
> -    count = nr_mfns / LPAE_ENTRIES;
> -    p = second + second_linear_offset(virt_offset);
> -    pte = mfn_to_xen_entry(_mfn(base_mfn), MT_NORMAL);
> -    if ( granularity == 16 * LPAE_ENTRIES )
> -        pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
> -    for ( i = 0; i < count; i++ )
> -    {
> -        write_pte(p + i, pte);
> -        pte.pt.base += 1 << LPAE_SHIFT;
> -    }
> -    flush_xen_tlb_local();
> -}
> -
>  #ifdef CONFIG_DOMAIN_PAGE
>  void *map_domain_page_global(mfn_t mfn)
>  {
> @@ -850,36 +816,18 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>      unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
>      mfn_t base_mfn;
>      const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
> -#ifdef CONFIG_ARM_64
> -    lpae_t *second, pte;
> -    unsigned long nr_second;
> -    mfn_t second_base;
> -    int i;
> -#endif
> +    int rc;
>  
>      frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
>      /* Round up to 2M or 32M boundary, as appropriate. */
>      frametable_size = ROUNDUP(frametable_size, mapping_size);
>      base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
>  
> -#ifdef CONFIG_ARM_64
> -    /* Compute the number of second level pages. */
> -    nr_second = ROUNDUP(frametable_size, FIRST_SIZE) >> FIRST_SHIFT;
> -    second_base = alloc_boot_pages(nr_second, 1);
> -    second = mfn_to_virt(second_base);
> -    for ( i = 0; i < nr_second; i++ )
> -    {
> -        clear_page(mfn_to_virt(mfn_add(second_base, i)));
> -        pte = mfn_to_xen_entry(mfn_add(second_base, i), MT_NORMAL);
> -        pte.pt.table = 1;
> -        write_pte(&xen_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> -    }
> -    create_mappings(second, 0, mfn_x(base_mfn), frametable_size >> PAGE_SHIFT,
> -                    mapping_size);
> -#else
> -    create_mappings(xen_second, FRAMETABLE_VIRT_START, mfn_x(base_mfn),
> -                    frametable_size >> PAGE_SHIFT, mapping_size);
> -#endif
> +    /* XXX: Handle contiguous bit */
> +    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
> +                          frametable_size >> PAGE_SHIFT, PAGE_HYPERVISOR_RW);
> +    if ( rc )
> +        panic("Unable to setup the frametable mappings.\n");

This is a lot better.

I take it that "XXX: Handle contiguous bit" refers to the lack of
_PAGE_BLOCK. Why can't we just OR in _PAGE_BLOCK here, as in other places?
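i.e. something along these lines (hypothetical sketch, assuming the
_PAGE_BLOCK flag from earlier in this series; not a tested change):

```
    /* Hypothetical: also pass _PAGE_BLOCK so map_pages_to_xen() may use
     * superpage (block) mappings, as done elsewhere in this series. */
    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
                          frametable_size >> PAGE_SHIFT,
                          PAGE_HYPERVISOR_RW | _PAGE_BLOCK);
```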


>      memset(&frame_table[0], 0, nr_pdxs * sizeof(struct page_info));
>      memset(&frame_table[nr_pdxs], -1,



From xen-devel-bounces@lists.xenproject.org Sat May 15 01:18:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 01:18:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127648.239923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhiwc-0007ax-M7; Sat, 15 May 2021 01:18:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127648.239923; Sat, 15 May 2021 01:18:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhiwc-0007aq-HU; Sat, 15 May 2021 01:18:18 +0000
Received: by outflank-mailman (input) for mailman id 127648;
 Sat, 15 May 2021 01:18:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=u/QD=KK=m5p.com=ehem@srs-us1.protection.inumbo.net>)
 id 1lhiwb-0007ak-Bu
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 01:18:17 +0000
Received: from mailhost.m5p.com (unknown [74.104.188.4])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a59e7c4-5712-4516-b7cf-7e7ef6253405;
 Sat, 15 May 2021 01:18:16 +0000 (UTC)
Received: from m5p.com (mailhost.m5p.com [IPv6:2001:470:1f07:15ff:0:0:0:f7])
 by mailhost.m5p.com (8.16.1/8.15.2) with ESMTPS id 14F1I4w8079097
 (version=TLSv1.3 cipher=TLS_AES_256_GCM_SHA384 bits=256 verify=NO);
 Fri, 14 May 2021 21:18:10 -0400 (EDT) (envelope-from ehem@m5p.com)
Received: (from ehem@localhost)
 by m5p.com (8.16.1/8.15.2/Submit) id 14F1I4dM079096;
 Fri, 14 May 2021 18:18:04 -0700 (PDT) (envelope-from ehem)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a59e7c4-5712-4516-b7cf-7e7ef6253405
Date: Fri, 14 May 2021 18:18:04 -0700
From: Elliott Mitchell <ehem+xen@m5p.com>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Roger Pau Monné <royger@freebsd.org>,
        Mitchell Horne <mhorne@freebsd.org>
Subject: Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
Message-ID: <YJ8hTE/JbJygtVAL@mattapan.m5p.com>
References: <YIptpndhk6MOJFod@Air-de-Roger>
 <YItwHirnih6iUtRS@mattapan.m5p.com>
 <YIu80FNQHKS3+jVN@Air-de-Roger>
 <YJDcDjjgCsQUdsZ7@mattapan.m5p.com>
 <YJURGaqAVBSYnMRf@Air-de-Roger>
 <YJYem5CW/97k/e5A@mattapan.m5p.com>
 <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org>
 <YJ3jlGSxs60Io+dp@mattapan.m5p.com>
 <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org>
X-Spam-Status: No, score=0.4 required=10.0 tests=KHOP_HELO_FCRDNS autolearn=no
	autolearn_force=no version=3.4.5
X-Spam-Checker-Version: SpamAssassin 3.4.5 (2021-03-20) on mattapan.m5p.com

On Fri, May 14, 2021 at 09:32:10AM +0100, Julien Grall wrote:
> On 14/05/2021 03:42, Elliott Mitchell wrote:
> > 
> > Issue is what is the intended use of the memory range allocated to
> > /hypervisor in the device-tree on ARM?  What do the Xen developers plan
> > for?  What is expected?
> 
>  From docs/misc/arm/device-tree/guest.txt:
> 
> "
> - reg: specifies the base physical address and size of a region in
>    memory where the grant table should be mapped to, using an
>    HYPERVISOR_memory_op hypercall. The memory region is large enough to map
>    the whole grant table (it is larger or equal to 
> gnttab_max_grant_frames()).
>    This property is unnecessary when booting Dom0 using ACPI.
> "
> 
> Effectively, this is a known space in memory that is unallocated. Not 
> all the guests will use it if they have a better way to find unallocated 
> space.

The use of "should" is generally considered strong encouragement to do
so.  It reads as a warning that $something is lurking here and you may
regret recklessly disobeying it without knowing what is going on behind
the scenes.

Whereas your language here suggests "can" would be a better word, since it
is simply a random unused address range.
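For readers following along: the node under discussion is typically
presented in the guest device tree like the following, per
docs/misc/arm/device-tree/guest.txt (the base, size, and version string are
illustrative only, not normative):

```
hypervisor {
        compatible = "xen,xen-4.15", "xen,xen";
        /* Region where the grant table "should"/"can" be mapped
         * (illustrative base and size). */
        reg = <0x0 0x38000000 0x0 0x01000000>;
        /* Event-channel notification interrupt. */
        interrupts = <1 15 0xf08>;
};
```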


> > Was the /hypervisor range intended *strictly* for mapping grant-tables?
> 
> It was introduced to tell the OS a place where the grant-table could be 
> conveniently mapped.

Yet this is strange.  If any $random unused address range is acceptable,
why bother suggesting a particular one?  If this is really purely the
OS's choice, why is Xen bothering to suggest a range at all?


> > Was it intended for /hypervisor to grow over the
> > years as hardware got cheaper?
> I don't understand this question.

Going to the trouble of suggesting a range points to something going on.
I'm looking for an explanation since strange choices might hint at
something unpleasant lurking below and I should watch where I step.


> > Might it be better to deprecate the /hypervisor range and have domains
> > allocate any available address space for foreign mappings?
> 
> It may be easy for FreeBSD to find available address space but so far 
> this has not been the case in Linux (I haven't checked the latest 
> version though).
> 
> To be clear, an OS is free to not use the range provided in /hypervisor 
> (maybe this is not clear enough in the spec?). This was mostly 
> introduced to overcome some issues we saw in Linux when Xen on Arm was 
> introduced.

Mind if I paraphrase this?

"this is a bring-up hack for Linux which hangs around since we haven't
felt any pressure to fix the underlying Linux issue"

Is that reasonable?


> > Should the FreeBSD implementation be treating grant tables as distinct
> > from other foreign mappings?
> 
> Both require unallocated address space to work. IIRC FreeBSD is able to 
> find unallocated space easily, so I would recommend using it.

That is supposed to be the case, but it appears there is presently a bug
which has broken that functionality on ARM.  As such, as a proper lazy
developer, if I can abuse the /hypervisor address range for all foreign
mappings, I will.

My feeling is one of two things should happen with the /hypervisor
address range:

1>  OSes could be encouraged to use it for all foreign mappings.  The
range should be dynamic in some fashion.  There could be a handy way to
allow configuring the amount of address space thus reserved.

2>  The range should be declared deprecated.  Everyone should be put on
the same page that this was a quick hack for bringing up Xen/ARM/Linux,
and really it shouldn't have escaped.


> > (is treating them the same likely to
> > induce buggy behavior on x86?)
> 
> I will leave this answer to Roger.

This was directed towards *you*.  There is this thing here which looks
kind of odd in a vaguely unpleasant way.  I'm trying to figure out
whether I should embrace it, versus running away.



On Fri, May 14, 2021 at 12:07:53PM +0200, Roger Pau Monné wrote:
> On Fri, May 14, 2021 at 09:32:10AM +0100, Julien Grall wrote:
> > On 14/05/2021 03:42, Elliott Mitchell wrote:
> > > Was it intended for the /hypervisor range to dynamically scale with the
> > > size of the domain?
> > As per above, this doesn't depend on the size of the domain. Instead, this
> > depends on what sort of backend will be present in the domain.
> 
> It should instead scale based on the total memory on the system, ie:
> if your hardware has 4GB of RAM the unpopulated range should at least
> be: 4GB - memory of the current domain, so that it could map any
> possible page assigned to a different domain (and even then I'm not
> sure we shouldn't account for duplicated mappings).

This would be approach #1 from above.  Going fully in this direction
seems reasonable if the entire Xen/ARM team is up for this approach.
Otherwise approach #2 also seems reasonable.  The problem is that the
current situation seems to be an unreasonable hybrid.

> > > Should the FreeBSD implementation be treating grant tables as distinct
> > > from other foreign mappings?
> > 
> > Both require unallocated address space to work. IIRC FreeBSD is able to find
> > unallocated space easily, so I would recommend using it.
> 
> I agree. I think the main issue here is that there seems to be some
> bug (or behavior not understood properly) with the resource manager
> on Arm that returns an error when requesting a region anywhere in the
> memory address space, ie: [0, ~0].

I'm pretty sure there IS a bug somewhere.  The question is whether it is
in the ARM nexus code or in the xenpv code.  The thing is, as a lazy
developer I would love to avoid the task of fully diagnosing the bug by
using an alternative approach.

Alas, the alternative approach may not be viable longer term at which
point I want to force everyone to endure the hardship of getting this
fully fixed.   :-)


-- 
(\___(\___(\______          --=> 8-) EHM <=--          ______/)___/)___/)
 \BS (    |         ehem+sigmsg@m5p.com  PGP 87145445         |    )   /
  \_CS\   |  _____  -O #include <stddisclaimer.h> O-   _____  |   /  _/
8A19\___\_|_/58D2 7E3D DDF4 7BA6 <-PGP-> 41D1 B375 37D0 8714\_|_/___/5445




From xen-devel-bounces@lists.xenproject.org Sat May 15 02:13:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 02:13:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127654.239934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhjoP-000576-0a; Sat, 15 May 2021 02:13:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127654.239934; Sat, 15 May 2021 02:13:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhjoO-00056z-SS; Sat, 15 May 2021 02:13:52 +0000
Received: by outflank-mailman (input) for mailman id 127654;
 Sat, 15 May 2021 02:13:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhjoN-00056p-Hc; Sat, 15 May 2021 02:13:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhjoN-0004Gu-Ad; Sat, 15 May 2021 02:13:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhjoN-0001G7-1g; Sat, 15 May 2021 02:13:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhjoN-0006sz-1D; Sat, 15 May 2021 02:13:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=1Ka5LXLP7hMWJJnFMb0DFf2uJsLkcsJI0AOlNFufYUs=; b=BtJg53GKOxWZjCqf5o2Lj3QiGT
	e7e5k39HaxJYj/Mtu0reXMhwRwaYcWxqYs0SIfHsaNDyCHaszgsTktVO5eOw2YDmY8h1e7p0jhALM
	HrW/F5Yzod9O4Q6OWbpPDXxRUWDzcyhMRXFmw3mtlltFKhSOimpKHJcQRVhLeKzSsaFo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161952-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161952: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=32928415e36b3e234efb5c24143e06060a68fba3
X-Osstest-Versions-That:
    ovmf=d82c4693f8d5c6b05f40ccf351c84645201067c1
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 May 2021 02:13:51 +0000

flight 161952 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161952/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 32928415e36b3e234efb5c24143e06060a68fba3
baseline version:
 ovmf                 d82c4693f8d5c6b05f40ccf351c84645201067c1

Last test of basis   161949  2021-05-14 10:10:07 Z    0 days
Testing same since   161952  2021-05-14 21:40:05 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Lendacky, Thomas <thomas.lendacky@amd.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d82c4693f8..32928415e3  32928415e36b3e234efb5c24143e06060a68fba3 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 15 02:38:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 02:38:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127660.239948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhkCC-0007T8-1b; Sat, 15 May 2021 02:38:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127660.239948; Sat, 15 May 2021 02:38:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhkCB-0007T1-UM; Sat, 15 May 2021 02:38:27 +0000
Received: by outflank-mailman (input) for mailman id 127660;
 Sat, 15 May 2021 02:38:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhkCB-0007Sr-Dc; Sat, 15 May 2021 02:38:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhkCB-0004kj-4z; Sat, 15 May 2021 02:38:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhkCA-000236-MR; Sat, 15 May 2021 02:38:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhkCA-0001kT-Lu; Sat, 15 May 2021 02:38:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JvNO1VVt8MyGgSdxaRV1VtpySwTXWftsb7SDWWur5KU=; b=KhA1LTkJKnKo2bLEB8FYhAu7SL
	agqFLjE5wc9sXYCfoSoeuTrKnq2uMgXZVeHw8lTVA0mskxPzysVN3RjVwh8Qm8kQ9pKCOcMBDBf3L
	RXyZ9iQN1nnFH0FHYZlUaJhEUqK7ON8HtAjhrjQTs0/a68w98RGPI5FAWUdx0egRB/Vg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161950-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161950: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=96662996eda78c48aadddd4e76d8615c7eb72d80
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 May 2021 02:38:26 +0000

flight 161950 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161950/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                96662996eda78c48aadddd4e76d8615c7eb72d80
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  267 days
Failing since        152659  2020-08-21 14:07:39 Z  266 days  487 attempts
Testing same since   161950  2021-05-14 14:09:23 Z    0 days    1 attempts

------------------------------------------------------------
499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 151958 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 15 07:10:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 07:10:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127669.239962 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhoRh-0007Az-U1; Sat, 15 May 2021 07:10:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127669.239962; Sat, 15 May 2021 07:10:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhoRh-0007As-Qy; Sat, 15 May 2021 07:10:45 +0000
Received: by outflank-mailman (input) for mailman id 127669;
 Sat, 15 May 2021 07:10:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhoRh-0007Ai-Eq; Sat, 15 May 2021 07:10:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhoRh-0001Df-8q; Sat, 15 May 2021 07:10:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhoRg-0007jx-TZ; Sat, 15 May 2021 07:10:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhoRg-0004BS-T5; Sat, 15 May 2021 07:10:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PlpFy1Vf0O1xUPhYtQz1EltoqG942fkRTXTRZ+BVCBE=; b=AJDxP9cmmeXXxz0JH4tL+ciK9T
	2U4vB3UTkyH5pZzOtIhD3Vf5F3yUiEpIWmhHybthp+K5qzforDdCECEwH6lJi0Wur7Y/bUEYzTeN9
	yBZGUelKQ4Ku0Dld1XKiaPWoe8RDwbIwrQqHNKJGzE03jwY35+hzIVkD00EOuzm8RlOg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161953-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161953: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=25a1298726e97b9d25379986f5d54d9e62ad6e93
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 May 2021 07:10:44 +0000

flight 161953 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161953/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                25a1298726e97b9d25379986f5d54d9e62ad6e93
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  287 days
Failing since        152366  2020-08-01 20:49:34 Z  286 days  480 attempts
Testing same since   161953  2021-05-14 23:42:13 Z    0 days    1 attempts

------------------------------------------------------------
6049 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1641070 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 15 08:28:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 08:28:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127678.239975 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhpf7-0005xs-5Q; Sat, 15 May 2021 08:28:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127678.239975; Sat, 15 May 2021 08:28:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhpf7-0005xl-28; Sat, 15 May 2021 08:28:41 +0000
Received: by outflank-mailman (input) for mailman id 127678;
 Sat, 15 May 2021 08:28:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhpf5-0005xY-CX; Sat, 15 May 2021 08:28:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhpf4-0002xg-U4; Sat, 15 May 2021 08:28:38 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhpf4-0003tR-Kw; Sat, 15 May 2021 08:28:38 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhpf4-0004Rf-KT; Sat, 15 May 2021 08:28:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z4gsBjnToKYPFEBRsVqjrNUCKHQkbvoJ2M3kolFsMlg=; b=xkud0zCkzrCguVMBsomJlcmGLR
	UiHqFs89CUyzihgIQk/lw/ENDian6VYyS0kgqYpAP7lVRn5S0EhZnTctcKnk9kKXVgLQ64/kqHSH4
	V0KFlldgXTKM6NyIqPB2hDMXq65TWNDT3qD/OeBBQ2TvxSj8iJqfZ9YOSmmHAMq/PXm8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161956-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161956: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=df28ba289c7d74cda5a9022e1376bfb9cd18ebb7
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 May 2021 08:28:38 +0000

flight 161956 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161956/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              df28ba289c7d74cda5a9022e1376bfb9cd18ebb7
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  309 days
Failing since        151818  2020-07-11 04:18:52 Z  308 days  301 attempts
Testing same since   161956  2021-05-15 04:20:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57600 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 15 08:48:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 08:48:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127688.239993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhpyP-0008K3-QX; Sat, 15 May 2021 08:48:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127688.239993; Sat, 15 May 2021 08:48:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhpyP-0008Jw-NM; Sat, 15 May 2021 08:48:37 +0000
Received: by outflank-mailman (input) for mailman id 127688;
 Sat, 15 May 2021 08:48:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhpyO-0008Jq-FJ
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 08:48:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhpyN-0003HE-67; Sat, 15 May 2021 08:48:35 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhpyM-0004se-RT; Sat, 15 May 2021 08:48:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Vu/Ot5cSLCpfQgycmsExf1exDNYGKOqncuBqdooCr0U=; b=v9z/IdBG1NQVdb/bHrcuN8efdC
	iKZe6ZCJ3VtvVG+9EZis0qc82/7ng4gJ+f1S5Dq7k5aJiNzHM5y6iju3dQKFkKzevnz4/qkoSFpvs
	5qfM6FkMJHlkOn0i/11KF18gMdpZCfjUdb+jOwTBpEJ7y7PcdJ3dch6uO7sssN/CTIZA=;
Subject: Re: [PATCH RFCv2 10/15] xen/arm: mm: Allocate xen page tables in
 domheap rather than xenheap
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-11-julien@xen.org>
 <alpine.DEB.2.21.2105121529180.5018@sstabellini-ThinkPad-T480s>
 <9429bec0-8706-42b9-cda6-77adde961235@xen.org>
 <alpine.DEB.2.21.2105131522030.5018@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <602bea61-9db2-a27d-1fed-001e2b5b2176@xen.org>
Date: Sat, 15 May 2021 09:48:32 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105131522030.5018@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 13/05/2021 23:27, Stefano Stabellini wrote:
> On Thu, 13 May 2021, Julien Grall wrote:
>> Hi Stefano,
>>
>> On 12/05/2021 23:44, Stefano Stabellini wrote:
>>> On Sun, 25 Apr 2021, Julien Grall wrote:
>>>> From: Julien Grall <jgrall@amazon.com>
>>>>
>>>> xen_{un,}map_table() already uses the helper to map/unmap pages
>>>> on-demand (note this is currently a NOP on arm64). So switching to
>>>> domheap doesn't have any disadvantage.
>>>>
>>>> But this has the benefits:
>>>>       - to keep the page tables unmapped if an arch decided to do so
>>>>       - reduce xenheap use on arm32 which can be pretty small
>>>>
>>>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>>
>>> Thanks for the patch. It looks OK but let me ask a couple of questions
>>> to clarify my doubts.
>>>
>>> This change should have no impact to arm64, right?
>>>
>>> For arm32, I wonder why we were using map_domain_page before in
>>> xen_map_table: it wasn't necessary, was it? In fact, one could even say
>>> that it was wrong?
>> In xen_map_table() we need to be able to map pages from Xen binary, xenheap...
>> On arm64, we would be able to use mfn_to_virt() because everything is mapped
>> in Xen. But that's not the case on arm32. So we need a way to map anything
>> easily.
>>
>> The only difference between xenheap and domheap is that the former is always
>> mapped while the latter may not be. So one can also view a xenheap page as a
>> glorified domheap page.
>>
>> I also don't really want to create yet another interface to map pages (we have
>> vmap(), map_domain_page(), map_domain_global_page()...). So, when I
>> implemented xen_map_table() a couple of years ago, I came to the conclusion
>> that this is a convenient (ab)use of the interface.
> 
> Got it. Repeating to check if I see the full picture. On ARM64 there are
> no changes. On ARM32, at runtime there are no changes mapping/unmapping
> pages because xen_map_table is already mapping all pages using domheap,
> even xenheap pages are mapped as domheap; so this patch makes no
> difference in mapping/unmapping, correct?

For arm32, it makes a slight difference when allocating a new page table 
(we didn't call map/unmap before) but this is not called often.

The main "drop" in performance happened when xen_{,map}_table() was 
introduced.

> 
> The only difference is that on arm32 we are using domheap to allocate
> the pages, which is a different (larger) pool.

Yes.
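[Editor's sketch: the "glorified domheap" idea above can be modelled in a few lines of self-contained C. All names here (map_page, unmap_page, page_pool) are stand-ins for illustration, not the real Xen map_domain_page() API: a xenheap page is always mapped, so one uniform map/unmap interface covers both pools and only domheap mappings need tearing down.]

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model of the two pools discussed above; not the real Xen types. */
enum page_pool { POOL_XENHEAP, POOL_DOMHEAP };

struct page {
    enum page_pool pool;
    bool mapped;          /* xenheap pages are permanently mapped */
    char payload[16];
};

/* One interface for both pools: a xenheap page is already mapped, a
 * domheap page may need a temporary mapping created first. */
static char *map_page(struct page *pg)
{
    if (!pg->mapped)
        pg->mapped = true;    /* stands in for creating a temporary mapping */
    return pg->payload;
}

static void unmap_page(struct page *pg)
{
    if (pg->pool == POOL_DOMHEAP)
        pg->mapped = false;   /* only domheap mappings are torn down */
}
```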

Would you be happy to give your acked-by/reviewed-by on this basis?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 15 08:54:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 08:54:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127693.240003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhq4K-0001J6-GK; Sat, 15 May 2021 08:54:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127693.240003; Sat, 15 May 2021 08:54:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhq4K-0001Iz-DA; Sat, 15 May 2021 08:54:44 +0000
Received: by outflank-mailman (input) for mailman id 127693;
 Sat, 15 May 2021 08:54:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhq4I-0001It-TP
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 08:54:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhq4E-0003Mv-Jo; Sat, 15 May 2021 08:54:38 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhq4E-000592-BD; Sat, 15 May 2021 08:54:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ATct03ZRjkatB2Crje2oNczGFYgbArRpdYy7QxMVz6s=; b=KtqeDFJ1LgSDjvVbt/3wglgi4H
	PaUCCAGx9OBoecuA9wFt5NGR+eELWoZvIlvIhS5BXxzZ8Iwh7bXXG1oRdfMH4MI+xMxv+z4oMkNXN
	zRy07/iK4BD7AbS+tUwD+ymCmlAnVtWHIBo8kipBPfEA9v1d9CTY2nb02LftbPGVyWZM=;
Subject: Re: [PATCH RFCv2 12/15] xen/arm: add Persistent Map (PMAP)
 infrastructure
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, Wei Liu
 <wei.liu2@citrix.com>, Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Hongyan Xia <hongyxia@amazon.com>, Julien Grall <jgrall@amazon.com>,
 Jan Beulich <jbeulich@suse.com>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Hongian Xia <hongyax@amazon.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-13-julien@xen.org>
 <alpine.DEB.2.21.2105141602330.14426@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <46b2da6d-d0db-0016-079c-72e18991eb1e@xen.org>
Date: Sat, 15 May 2021 09:54:34 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105141602330.14426@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 15/05/2021 00:25, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Wei Liu <wei.liu2@citrix.com>
>> +extern lpae_t xen_fixmap[LPAE_ENTRIES];
>> +
>> +void *__init pmap_map(mfn_t mfn)
>> +{
>> +    unsigned long flags;
>> +    unsigned int idx;
>> +    vaddr_t linear;
>> +    unsigned int slot;
>> +    lpae_t *entry, pte;
>> +
>> +    BUILD_BUG_ON(sizeof(inuse) * BITS_PER_LONG < NUM_FIX_PMAP);
>> +
>> +    ASSERT(system_state < SYS_STATE_smp_boot);
> 
> One small concern here is that we have been using SYS_STATE_early_boot
> to switch implementation of things like xen_map_table. Between
> SYS_STATE_early_boot and SYS_STATE_smp_boot there is SYS_STATE_boot.
> 
> I guess I am wondering if instead of three potentially different mapping
> functions (<= SYS_STATE_early_boot, < SYS_STATE_smp_boot, >=
> SYS_STATE_smp_boot) we can get away with only two?

This is more flexible than the existing method used to map memory when 
state == SYS_STATE_early_boot. If you look at the next patch (#13), you 
will see that there will be only two methods to map memory.
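[Editor's sketch: the quoted pmap_map() relies on a small bitmap of NUM_FIX_PMAP fixmap slots, which is what the BUILD_BUG_ON above checks fits in one word. The slot bookkeeping can be sketched in self-contained C as below; the names are stand-ins for illustration, not the real Xen implementation, which additionally writes the PTE for the chosen fixmap slot.]

```c
#include <stddef.h>

#define NUM_FIX_PMAP 8            /* from the quoted patch */

/* One bit per fixmap slot; must fit in a single unsigned long,
 * mirroring the BUILD_BUG_ON in the quoted pmap_map(). */
static unsigned long inuse;

/* Claim the lowest free slot; returns its index, or -1 if all busy. */
static int pmap_alloc_slot(void)
{
    for (int idx = 0; idx < NUM_FIX_PMAP; idx++) {
        if (!(inuse & (1UL << idx))) {
            inuse |= 1UL << idx;
            return idx;
        }
    }
    return -1;
}

/* Release a previously claimed slot so it can be reused. */
static void pmap_free_slot(int idx)
{
    inuse &= ~(1UL << idx);
}
```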

[...]

>> diff --git a/xen/include/asm-arm/pmap.h b/xen/include/asm-arm/pmap.h
>> new file mode 100644
>> index 000000000000..8e1dce93f8e4
>> --- /dev/null
>> +++ b/xen/include/asm-arm/pmap.h
>> @@ -0,0 +1,10 @@
>> +#ifndef __ASM_PMAP_H__
>> +#define __ARM_PMAP_H__
> 
> ASM/ARM

I will fix it.

> 
> 
>> +/* Large enough for mapping 5 levels of page tables with some headroom */
>> +#define NUM_FIX_PMAP 8
>> +
>> +void *pmap_map(mfn_t mfn);
>> +void pmap_unmap(const void *p);
> 
>> +#endif	/* __ASM_PMAP_H__ */

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 15 09:03:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 09:03:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127701.240018 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhqCs-0002lH-Eo; Sat, 15 May 2021 09:03:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127701.240018; Sat, 15 May 2021 09:03:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhqCs-0002lA-BY; Sat, 15 May 2021 09:03:34 +0000
Received: by outflank-mailman (input) for mailman id 127701;
 Sat, 15 May 2021 09:03:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhqCr-0002l4-9h
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 09:03:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhqCp-0003YX-Kd; Sat, 15 May 2021 09:03:31 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhqCp-000616-EE; Sat, 15 May 2021 09:03:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=y3NJOJ5MxxGvXEvPLHrkIjwPF7NbgRCq4+z8KJCmBAw=; b=vKcMfiCLuL+/Rub5jb0qv1wE5m
	rH0LEWCZ04cQpeE4HPq6vKcF8SCqvLBODk5MeYEYVPQxE+t0mnHOtbREjF/PH39IL6L1LCWAShwN0
	W+OZBL8vQmD55nNY87NEKH30G4xyQ2KDbFSKk/TNtrWBNBvnid5lrvI4D+b+8e8GZbWk=;
Subject: Re: [PATCH RFCv2 13/15] xen/arm: mm: Use the PMAP helpers in
 xen_{,un}map_table()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <jgrall@amazon.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-14-julien@xen.org>
 <alpine.DEB.2.21.2105141628130.14426@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <9ea1479f-d914-3ee0-7794-f7a7a2c8ee0c@xen.org>
Date: Sat, 15 May 2021 10:03:29 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105141628130.14426@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 15/05/2021 00:35, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> During early boot, it is not possible to use xen_{,un}map_table()
>> if the page tables are not residing in the Xen binary.
>>
>> This is a blocker to switch some of the helpers to use xen_pt_update()
>> as we may need to allocate extra page tables and access them before
>> the domheap has been initialized (see setup_xenheap_mappings()).
>>
>> xen_{,un}map_table() are now updated to use the PMAP helpers for early
>> boot map/unmap. Note that the special case for page-tables residing
>> in the Xen binary has been dropped because it is "complex" and was
>> only added as a workaround in 8d4f1b8878e0 ("xen/arm: mm: Allow
>> generic xen page-tables helpers to be called early").
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> In terms of boot stages:
> 
> - SYS_STATE_early_boot --> use pmap_map
> - greater than SYS_STATE_early_boot --> map_domain_page
> 
> While actually pmap would be able to work as far as SYS_STATE_boot, but
> we don't need it. Is it worth simplifying it by changing the checks in
> the previous patch to be against SYS_STATE_early_boot?

We need to differentiate between when this is used and when PMAP can be 
used. The ASSERT() is here to check that the PMAP code is not used 
outside of its limits.

In the current implementation, PMAP can be used at any time until we 
start bringing up secondary CPUs. So the ASSERT() is correct because it 
doesn't unnecessarily restrict its use.

Note, the code is going to be moved to common code in the next revision.

> 
> In any case:
> 
> Reviewed-by: Stefano Stabellini <sstabellini@kernel.org>

Thank you!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 15 09:21:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 09:21:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127706.240029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhqU6-00050T-Vj; Sat, 15 May 2021 09:21:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127706.240029; Sat, 15 May 2021 09:21:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhqU6-00050M-Sf; Sat, 15 May 2021 09:21:22 +0000
Received: by outflank-mailman (input) for mailman id 127706;
 Sat, 15 May 2021 09:21:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhqU5-00050G-Oh
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 09:21:21 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhqU2-0003qJ-2m; Sat, 15 May 2021 09:21:18 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhqU1-00078p-R3; Sat, 15 May 2021 09:21:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=EXjVYPGzAzSOgIsaYooiVOnwHh36GTNnxdAwU9E4nqw=; b=JTiNl9w90dS7vIsPPbGOY/6QkX
	dSZ7Wxu1ITWzWc8cM156+WCliA840fRIufu9fdXEMRe7wezPbiyMoAIzykKHLgFEQdd+IEr0Xnawe
	EvTSGn4z1o7Cx5ITkk33PoI382eqsDFLVSP2SfztXsm28/e3lKtiS6yQ9D9VwoBaowrA=;
Subject: Re: [PATCH RFCv2 14/15] xen/arm: mm: Rework setup_xenheap_mappings()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-15-julien@xen.org>
 <alpine.DEB.2.21.2105141646340.14426@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <8cda6d2d-7f3c-fef6-c534-2fadabed1bad@xen.org>
Date: Sat, 15 May 2021 10:21:15 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105141646340.14426@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 15/05/2021 00:51, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> A few issues have been reported with setup_xenheap_mappings() over the
>> last couple of years. The main ones are:
>>      - It will break on platforms supporting more than 512GB of RAM
>>        because the memory allocated by the boot allocator is not yet
>>        mapped.
>>      - Aligning all the regions to 1GB may lead to unexpected results
>>        because we may alias non-cacheable regions (such as device or reserved
>>        regions).
>>
>> map_pages_to_xen() was recently reworked to allow superpage mappings and
>> deal with the use of page-tables before they are mapped.
>>
>> Most of the code in setup_xenheap_mappings() is now replaced with a
>> single call to map_pages_to_xen().
>>
>> This also requires re-ordering the steps in setup_mm() so the regions are
>> given to the boot allocator first and then we set up the xenheap
>> mappings.
> 
> I know this is paranoia but doesn't this introduce a requirement on the
> size of the first bank in bootinfo.mem.bank[] ?
> 
> It should be at least 8KB?

AFAIK, the current requirement is 4KB because of the 1GB mapping. Long 
term, it would be 8KB.

> 
> I know it is unlikely but it is theoretically possible to have a first
> bank of just 1KB.

All the page allocators work at page granularity. I am not entirely 
sure whether the current Xen would ignore the region or break.

Note that setup_xenheap_mappings() takes a base MFN and a number of 
pages to map. So this doesn't look like a new problem here.

For the 8KB requirement, we can first give all the pages to the boot 
allocator and then do the mapping.
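The re-ordering described here could be sketched as follows (a toy 
model, not the actual setup_mm(); the structure and function names are 
invented for the example). The point is that handing *all* banks to 
the boot allocator before creating any mapping means page-table pages 
can come from any bank, so a very small first bank no longer matters:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define PAGE_SIZE 4096UL

struct bank { unsigned long start, size; };

/* Pages currently owned by the (modelled) boot allocator. */
static unsigned long boot_pages;

static void boot_allocator_add(const struct bank *b)
{
    boot_pages += b->size / PAGE_SIZE;
}

/* Mapping a bank may consume page-table pages from the boot
 * allocator; fails if none are left. */
static bool map_bank(unsigned long pt_pages_needed)
{
    if ( boot_pages < pt_pages_needed )
        return false;
    boot_pages -= pt_pages_needed;
    return true;
}

/* Re-ordered flow: give all banks to the allocator first, then set up
 * the xenheap mappings. */
static bool setup_mm_sketch(const struct bank *banks, size_t n,
                            unsigned long pt_pages_per_bank)
{
    size_t i;

    boot_pages = 0;
    for ( i = 0; i < n; i++ )
        boot_allocator_add(&banks[i]);
    for ( i = 0; i < n; i++ )
        if ( !map_bank(pt_pages_per_bank) )
            return false;
    return true;
}
```

In the model, a 4KB first bank is fine as long as the other banks 
supply enough pages overall, which mirrors the suggestion above.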

> 
> I think we should write the requirement down with an in-code comment?
> Or better maybe we should write a message about it in the panic below?

I can write an in-code comment, but I think writing it down in the 
panic() message would be wrong because there is no promise this is 
going to be the only reason it fails (imagine there is a bug in the 
code...).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 15 09:25:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 09:25:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127712.240039 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhqYY-0005hQ-HY; Sat, 15 May 2021 09:25:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127712.240039; Sat, 15 May 2021 09:25:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhqYY-0005hJ-EP; Sat, 15 May 2021 09:25:58 +0000
Received: by outflank-mailman (input) for mailman id 127712;
 Sat, 15 May 2021 09:25:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhqYW-0005hD-SK
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 09:25:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhqYV-0003vk-86; Sat, 15 May 2021 09:25:55 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhqYV-0007XR-1x; Sat, 15 May 2021 09:25:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=XHtW0I3Psmjv7cxq+rSSr3L70g0qHUVT7vSAfNDsmHU=; b=5QLwqQ7TQs3urAFnAoIqzSfLlH
	Ql06Hsq/YFp0CtnIHRoayP0wLJbNKzu2i08/ZmpvcL13RX/3spqXSd8owDtSdD13n2UK2ZhWh0Y4Y
	6Fv9Ke0GXZXctV5edHoCxlPnI4d9Z5XGjXpUDWwjuPkZNy/+CcdaQY7eQoeZTykMuYT8=;
Subject: Re: [PATCH RFCv2 15/15] xen/arm: mm: Re-implement
 setup_frame_table_mappings() with map_pages_to_xen()
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com,
 Penny.Zheng@arm.com, Bertrand.Marquis@arm.com,
 Julien Grall <julien.grall@arm.com>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Julien Grall <jgrall@amazon.com>
References: <20210425201318.15447-1-julien@xen.org>
 <20210425201318.15447-16-julien@xen.org>
 <alpine.DEB.2.21.2105141658510.14426@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <2308478e-527b-a54a-206a-785f80515835@xen.org>
Date: Sat, 15 May 2021 10:25:52 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105141658510.14426@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 15/05/2021 01:02, Stefano Stabellini wrote:
> On Sun, 25 Apr 2021, Julien Grall wrote:
>> From: Julien Grall <julien.grall@arm.com>
>>
>> Now that map_pages_to_xen() has been extended to support 2MB mappings,
>> we can replace the create_mappings() call with a map_pages_to_xen() call.
>>
>> This has the advantage of removing the difference between 32-bit and 64-bit
>> code.
>>
>> Lastly, remove create_mappings() as it has no more callers.
>>
>> Signed-off-by: Julien Grall <julien.grall@arm.com>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>>
>> ---
>>      Changes in v2:
>>          - New patch
>>
>>      TODO:
>>          - Add support for setting the contiguous bit
>> ---
>>   xen/arch/arm/mm.c | 64 +++++------------------------------------------
>>   1 file changed, 6 insertions(+), 58 deletions(-)
>>
>> diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
>> index c49403b687f5..5f8ae029dd6d 100644
>> --- a/xen/arch/arm/mm.c
>> +++ b/xen/arch/arm/mm.c
>> @@ -359,40 +359,6 @@ void clear_fixmap(unsigned map)
>>       BUG_ON(res != 0);
>>   }
>>   
>> -/* Create Xen's mappings of memory.
>> - * Mapping_size must be either 2MB or 32MB.
>> - * Base and virt must be mapping_size aligned.
>> - * Size must be a multiple of mapping_size.
>> - * second must be a contiguous set of second level page tables
>> - * covering the region starting at virt_offset. */
>> -static void __init create_mappings(lpae_t *second,
>> -                                   unsigned long virt_offset,
>> -                                   unsigned long base_mfn,
>> -                                   unsigned long nr_mfns,
>> -                                   unsigned int mapping_size)
>> -{
>> -    unsigned long i, count;
>> -    const unsigned long granularity = mapping_size >> PAGE_SHIFT;
>> -    lpae_t pte, *p;
>> -
>> -    ASSERT((mapping_size == MB(2)) || (mapping_size == MB(32)));
>> -    ASSERT(!((virt_offset >> PAGE_SHIFT) % granularity));
>> -    ASSERT(!(base_mfn % granularity));
>> -    ASSERT(!(nr_mfns % granularity));
>> -
>> -    count = nr_mfns / LPAE_ENTRIES;
>> -    p = second + second_linear_offset(virt_offset);
>> -    pte = mfn_to_xen_entry(_mfn(base_mfn), MT_NORMAL);
>> -    if ( granularity == 16 * LPAE_ENTRIES )
>> -        pte.pt.contig = 1;  /* These maps are in 16-entry contiguous chunks. */
>> -    for ( i = 0; i < count; i++ )
>> -    {
>> -        write_pte(p + i, pte);
>> -        pte.pt.base += 1 << LPAE_SHIFT;
>> -    }
>> -    flush_xen_tlb_local();
>> -}
>> -
>>   #ifdef CONFIG_DOMAIN_PAGE
>>   void *map_domain_page_global(mfn_t mfn)
>>   {
>> @@ -850,36 +816,18 @@ void __init setup_frametable_mappings(paddr_t ps, paddr_t pe)
>>       unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
>>       mfn_t base_mfn;
>>       const unsigned long mapping_size = frametable_size < MB(32) ? MB(2) : MB(32);
>> -#ifdef CONFIG_ARM_64
>> -    lpae_t *second, pte;
>> -    unsigned long nr_second;
>> -    mfn_t second_base;
>> -    int i;
>> -#endif
>> +    int rc;
>>   
>>       frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
>>       /* Round up to 2M or 32M boundary, as appropriate. */
>>       frametable_size = ROUNDUP(frametable_size, mapping_size);
>>       base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT, 32<<(20-12));
>>   
>> -#ifdef CONFIG_ARM_64
>> -    /* Compute the number of second level pages. */
>> -    nr_second = ROUNDUP(frametable_size, FIRST_SIZE) >> FIRST_SHIFT;
>> -    second_base = alloc_boot_pages(nr_second, 1);
>> -    second = mfn_to_virt(second_base);
>> -    for ( i = 0; i < nr_second; i++ )
>> -    {
>> -        clear_page(mfn_to_virt(mfn_add(second_base, i)));
>> -        pte = mfn_to_xen_entry(mfn_add(second_base, i), MT_NORMAL);
>> -        pte.pt.table = 1;
>> -        write_pte(&xen_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
>> -    }
>> -    create_mappings(second, 0, mfn_x(base_mfn), frametable_size >> PAGE_SHIFT,
>> -                    mapping_size);
>> -#else
>> -    create_mappings(xen_second, FRAMETABLE_VIRT_START, mfn_x(base_mfn),
>> -                    frametable_size >> PAGE_SHIFT, mapping_size);
>> -#endif
>> +    /* XXX: Handle contiguous bit */
>> +    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
>> +                          frametable_size >> PAGE_SHIFT, PAGE_HYPERVISOR_RW);
>> +    if ( rc )
>> +        panic("Unable to setup the frametable mappings.\n");
> 
> This is a lot better.
> 
> I take that "XXX: Handle contiguous bit" refers to the lack of
> _PAGE_BLOCK. Why can't we just | _PAGE_BLOCK like in other places?

I forgot to add _PAGE_BLOCK; however, this is unrelated to my comment.

Currently, the frametable is mapped using 2MB mappings, setting the 
contiguous bit for each entry when the mapping is 32MB aligned.

_PAGE_BLOCK will only create 2MB mappings but will not set the 
contiguous bit. This will increase the pressure on the TLBs (we would 
get 16 entries rather than 1) on systems where the TLBs can take 
advantage of it.

So map_pages_to_xen() needs to gain support for the contiguous bit. I 
haven't yet looked at it (hence the RFC state).
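The rule being described can be sketched as a small predicate (an 
illustration of the architectural constraint, not the 
map_pages_to_xen() implementation): on Arm, 16 consecutive, naturally 
aligned translation-table entries mapping contiguous, identically 
attributed memory may set the contiguous bit and share one TLB entry.

```c
#include <assert.h>
#include <stdbool.h>

#define CONTIG_ENTRIES 16

/* idx: index of a 2MB entry within its table;
 * remaining: number of 2MB entries still to be mapped from idx on.
 * The bit may only be set if the run starts on a 16-entry boundary
 * and the mapping covers all 16 entries of the run (i.e. the region
 * is 32MB aligned and at least 32MB, as in the old create_mappings()). */
static bool can_set_contig(unsigned int idx, unsigned int remaining)
{
    return !(idx % CONTIG_ENTRIES) && remaining >= CONTIG_ENTRIES;
}
```

This is why _PAGE_BLOCK alone loses the 16-to-1 TLB saving: it builds 
the 2MB entries but never applies a check like this across runs.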

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 15 10:08:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 10:08:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127720.240051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhrDo-0001Kk-R6; Sat, 15 May 2021 10:08:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127720.240051; Sat, 15 May 2021 10:08:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhrDo-0001Kd-O4; Sat, 15 May 2021 10:08:36 +0000
Received: by outflank-mailman (input) for mailman id 127720;
 Sat, 15 May 2021 10:08:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhrDm-0001KX-Oa
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 10:08:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhrDk-0004hy-LZ; Sat, 15 May 2021 10:08:32 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhrDk-0002Fx-Eu; Sat, 15 May 2021 10:08:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=TKlSwp9ZcMh7TrtmWc5j+Vq3PuYmS2slhfCEEiinvNA=; b=f0ybz9kcEGSQUoI3iS5cEFEmjd
	2h/BuqsrTHea0ESzJ3SvrcQtIQfVq3XfScA0vz/bW4FmpUqr9DVSDbvcwjbZhYyUnPLzXjpi3UnmT
	68zMyLgrM0rd5yGSGVndgn9K4pofvyMmv/ifvgQVZXMDqpLoaGOwPTdigtk0OoiIF2mY=;
Subject: Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
To: Elliott Mitchell <ehem+xen@m5p.com>
Cc: xen-devel@lists.xenproject.org, Roger Pau Monn?? <royger@freebsd.org>,
 Mitchell Horne <mhorne@freebsd.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <YIptpndhk6MOJFod@Air-de-Roger>
 <YItwHirnih6iUtRS@mattapan.m5p.com> <YIu80FNQHKS3+jVN@Air-de-Roger>
 <YJDcDjjgCsQUdsZ7@mattapan.m5p.com> <YJURGaqAVBSYnMRf@Air-de-Roger>
 <YJYem5CW/97k/e5A@mattapan.m5p.com> <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org>
 <YJ3jlGSxs60Io+dp@mattapan.m5p.com>
 <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org>
 <YJ8hTE/JbJygtVAL@mattapan.m5p.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f7360dac-5d83-733b-7ec5-c73d4dc0350d@xen.org>
Date: Sat, 15 May 2021 11:08:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJ8hTE/JbJygtVAL@mattapan.m5p.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

(+ Andrew, + Stefano)

On 15/05/2021 02:18, Elliott Mitchell wrote:
> On Fri, May 14, 2021 at 09:32:10AM +0100, Julien Grall wrote:
>> On 14/05/2021 03:42, Elliott Mitchell wrote:
>>>
>>> Issue is what is the intended use of the memory range allocated to
>>> /hypervisor in the device-tree on ARM?  What do the Xen developers plan
>>> for?  What is expected?
>>
>>   From docs/misc/arm/device-tree/guest.txt:
>>
>> "
>> - reg: specifies the base physical address and size of a region in
>>     memory where the grant table should be mapped to, using an
>>     HYPERVISOR_memory_op hypercall. The memory region is large enough to map
>>     the whole grant table (it is larger or equal to
>> gnttab_max_grant_frames()).
>>     This property is unnecessary when booting Dom0 using ACPI.
>> "
>>
>> Effectively, this is a known space in memory that is unallocated. Not
>> all the guests will use it if they have a better way to find unallocated
>> space.
> 
> The use of "should" is generally considered strong encouragement to do
> so.  A warning $something is lurking here and you may regret it if you
> recklessly disobey this without knowing what is going on behind the
> scenes.

I thought a bit more overnight. The potential trouble I can think of 
for a domU is that the magic pages are not described in the DT.

I think every other region should be discoverable from the DT (at 
least for a domU).

> 
> Whereas your language here suggests "can" is a better word since it is
> simply a random unused address range.
> 
> 
>>> Was the /hypervisor range intended *strictly* for mapping grant-tables?
>>
>> It was introduced to tell the OS a place where the grant-table could be
>> conveniently mapped.
> 
> Yet this is strange.  If any $random unused address range is acceptable,
> why bother suggesting a particular one?  If this is really purely the
> OS's choice, why is Xen bothering to suggest a range at all?

I have added Stefano who may have more historical context than what I 
wrote in my previous e-mail.

> 
> 
>>> Was it intended for /hypervisor to grow over the
>>> years as hardware got cheaper?
>> I don't understand this question.
> 
> Going to the trouble of suggesting a range points to something going on.
> I'm looking for an explanation since strange choices might hint at
> something unpleasant lurking below and I should watch where I step.
> 
> 
>>> Might it be better to deprecate the /hypervisor range and have domains
>>> allocate any available address space for foreign mappings?
>>
>> It may be easy for FreeBSD to find available address space but so far
>> this has not been the case in Linux (I haven't checked the latest
>> version though).
>>
>> To be clear, an OS is free to not use the range provided in /hypervisor
>> (maybe this is not clear enough in the spec?). This was mostly
>> introduced to overcome some issues we saw in Linux when Xen on Arm was
>> introduced.
> 
> Mind if I paraphrase this?
> 
> "this is a bring-up hack for Linux which hangs around since we haven't
> felt any pressure to fix the underlying Linux issue"
> 
> Is that reasonable?

Yes. I have revisited the problem a few times and every time I got 
stuck because not all the I/O regions were reported to Linux. So Linux 
would not be able to find a safe unallocated space.

> 
> 
>>> Should the FreeBSD implementation be treating grant tables as distinct
>>> from other foreign mappings?
>>
>> Both require unallocated address space to work. IIRC FreeBSD is able to
>> find unallocated space easily, so I would recommend using it.
> 
> That is supposed to be, but it appears there is presently a bug which has
> broken the functionality on ARM.  

Do you mind sharing some details?

> As such, as a proper lazy developer if
> I can abuse the /hypervisor address range for all foreign mappings, I
> will.

Are you aiming to support dom0 now?

> 
> My feeling is one of two things should happen with the /hypervisor
> address range:
> 
> 1>  OSes could be encouraged to use it for all foreign mappings.  The
> range should be dynamic in some fashion.  There could be a handy way to
> allow configuring the amount of address space thus reserved.

In the context of XSA-300 and virtio on Xen on Arm, we discussed 
providing a region for foreign mappings. The main trouble here is 
figuring out the size; if you mess it up, then you may break all the 
PV drivers.

If the problem is finding space, then I would like to suggest a 
different approach (I think I may have discussed it with Andrew). Xen 
is maintaining the P2M for the guest and therefore knows where the 
unallocated space is. How about introducing a new hypercall to ask for 
"unallocated space"?

This would not work for older hypervisors, but you could use the RAM 
instead (as Linux does). This has drawbacks (e.g. shattering 
superpages, reducing the amount of usable memory...), but for 1> you 
would also need a workaround for older Xen.
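Since Xen maintains the guest's P2M, the lookup such a hypercall would 
do amounts to finding a hole in the allocated guest-physical ranges. 
The sketch below models only that lookup; the hypercall itself (its 
name and ABI) is purely hypothetical and does not exist in Xen today:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct range { uint64_t start, end; };  /* allocated [start, end) */

/* Assume a 4GB guest-physical address space for the example. */
#define ADDR_SPACE_END 0x100000000ULL

/* Return the start of the first hole of at least 'size' bytes between
 * the sorted, non-overlapping allocated ranges, or 0 if none fits. */
static uint64_t find_unallocated(const struct range *used, size_t n,
                                 uint64_t size)
{
    uint64_t cursor = 0;
    size_t i;

    for ( i = 0; i < n; i++ )
    {
        if ( used[i].start - cursor >= size )
            return cursor;
        if ( used[i].end > cursor )
            cursor = used[i].end;
    }
    return (ADDR_SPACE_END - cursor >= size) ? cursor : 0;
}
```

A guest receiving such an answer could place foreign mappings or grant 
tables there without shattering superpages backing its RAM.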

> 
> 2>  The range should be declared deprecated.  Everyone should be put on
> the same page that this was a quick hack for bringing up Xen/ARM/Linux,
> and really it shouldn't have escaped.

How about relaxing the wording instead?

> 
> 
>>> (is treating them the same likely to
>>> induce buggy behavior on x86?)
>>
>> I will leave this answer to Roger.
> 
> This was directed towards *you*.  There is this thing here which looks
> kind of odd in a vaguely unpleasant way.  I'm trying to figure out
> whether I should embrace it, versus running away.

I am not aware of any potential buggy behavior here. On both Arm and 
x86, the requirement is to find unallocated address space (unless you 
want to waste RAM as Linux does).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 15 11:51:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 11:51:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127728.240062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhspW-0002g3-Kz; Sat, 15 May 2021 11:51:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127728.240062; Sat, 15 May 2021 11:51:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhspW-0002fw-Gy; Sat, 15 May 2021 11:51:38 +0000
Received: by outflank-mailman (input) for mailman id 127728;
 Sat, 15 May 2021 11:51:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhspV-0002fq-Q3
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 11:51:37 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhspU-0006Kd-II; Sat, 15 May 2021 11:51:36 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhspU-0000uc-Bq; Sat, 15 May 2021 11:51:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Wpju7DBlbS1T4x153hwpUWrnAHoqUjoofi9gJKTKU7A=; b=Q/SNTjg4zFfBEvgQem+IlZSkHA
	ww9ikLnxMEH3XaMQKNkXvrEmfH2CNWEpjKtyoVATP5Ud7w+CeVvSr3ENK+QZuJik/LR5UDTLUhpSx
	UeOLJAdI9VN/Eig6BUCDY8eTsatxQVS2RSDteZxP6KoLL0KYaGzCma6TlBRHSK3TI4tc=;
Subject: Re: [PATCH] tools/xenstore: simplify xenstored main loop
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514083905.18212-1-jgross@suse.com>
 <304944cf-ac92-be14-e088-1975ef073255@xen.org>
 <3be1937f-3cd9-3eb8-48fd-bc9c9a85c051@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5744a347-282c-ff61-7507-03b7a1e9d4c9@xen.org>
Date: Sat, 15 May 2021 12:51:34 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <3be1937f-3cd9-3eb8-48fd-bc9c9a85c051@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 14/05/2021 10:42, Juergen Gross wrote:
> On 14.05.21 11:35, Julien Grall wrote:
>>> -struct connection *new_connection(connwritefn_t *write, connreadfn_t 
>>> *read);
>>> +struct connection *new_connection(struct interface_funcs *funcs);
>>>   struct connection *get_connection_by_id(unsigned int conn_id);
>>>   void check_store(void);
>>>   void corrupt(struct connection *conn, const char *fmt, ...);
>>> @@ -254,9 +258,6 @@ void finish_daemonize(void);
>>>   /* Open a pipe for signal handling */
>>>   void init_pipe(int reopen_log_pipe[2]);
>>> -int writefd(struct connection *conn, const void *data, unsigned int 
>>> len);
>>> -int readfd(struct connection *conn, void *data, unsigned int len);
>>> -
>>>   extern struct interface_funcs socket_funcs;
>>
>> Hmmm... I guess this change slipped into staging beforehand?
> 
> No, I just forgot to make the functions static.

Hmmm... I am not sure how this is related to my question. What I meant 
is that the line "extern struct interface_funcs ..." doesn't have a '+' 
in front.

If you look at the history, this was added by mistake in:

commit 2ea411bc2c0a5a4c7ab145270f1949630460e72b
Author: Juergen Gross <jgross@suse.com>
Date:   Wed Jan 13 14:00:20 2021 +0100

     tools/xenstore: add read connection state for live update

     Add the needed functions for reading connection state for live update.

     As the connection is identified by a unique connection id in the state
     records we need to add this to struct connection. Add a new function
     to return the connection based on a connection id.

     Signed-off-by: Juergen Gross <jgross@suse.com>
     Reviewed-by: Julien Grall <jgrall@amazon.com>
     Acked-by: Wei Liu <wl@xen.org>

> 
> 
> Juergen

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 15 12:46:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 12:46:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127765.240097 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhtgW-0000fp-R8; Sat, 15 May 2021 12:46:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127765.240097; Sat, 15 May 2021 12:46:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhtgW-0000fi-M2; Sat, 15 May 2021 12:46:24 +0000
Received: by outflank-mailman (input) for mailman id 127765;
 Sat, 15 May 2021 12:46:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhtgV-0000fY-T2; Sat, 15 May 2021 12:46:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhtgV-0007DN-LJ; Sat, 15 May 2021 12:46:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhtgV-0008Uw-D2; Sat, 15 May 2021 12:46:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhtgV-0002FF-CV; Sat, 15 May 2021 12:46:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=frJaHaMmhZpz1gtC+4PBpmjwRzDGTJ1pSsC5Ou7X6Cw=; b=6VZ+HDkGXg0Tfl0p8udUPJvNIZ
	mTcCUEsDfK6wRtyVy731jkTnd1EnTPo45r+rm38KA5BGNLW55TlqMqInfqdWkwF0whDMx3SOOblEp
	KmCz9YL41fHaIuYOiTmkYFVzeuwg7aW2OKlC9Cr+n6zQpYvrbH7HHH1fas6udBybMs54=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161954-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161954: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-multivcpu:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
X-Osstest-Versions-That:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 May 2021 12:46:23 +0000

flight 161954 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161954/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-multivcpu  8 xen-boot                  fail pass in 161946

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161946
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161946
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161946
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161946
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161946
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161946
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161946
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161946
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161946
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161946
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161946
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34
baseline version:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34

Last test of basis   161954  2021-05-15 01:52:45 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat May 15 17:27:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 17:27:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127850.240183 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhy4U-0004Zz-01; Sat, 15 May 2021 17:27:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127850.240183; Sat, 15 May 2021 17:27:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhy4T-0004Zs-TF; Sat, 15 May 2021 17:27:25 +0000
Received: by outflank-mailman (input) for mailman id 127850;
 Sat, 15 May 2021 17:27:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhy4S-0004Zi-8O; Sat, 15 May 2021 17:27:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhy4S-0003sk-2K; Sat, 15 May 2021 17:27:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lhy4R-0001Jm-Ju; Sat, 15 May 2021 17:27:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lhy4R-0000MU-JP; Sat, 15 May 2021 17:27:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YpbaMt3unx3IayLRakl7UxO/rSDebb+khlXss5Q1ZAE=; b=b1i3Hx1Gz6Ik83eiOlHZWDQTjW
	eBs/CU9N5gvboPdv4mZ9jJudet8QgMbqekIRRQBwqHt30wce6mTGC8YJ2DvygrjtXTXtWeE1gTylS
	gHGauUBOR+vLqgzyUIVGsILXpqvpEbtZ+Rh+FMY0Epp1mlI8fJ5laOtdLmwu9YIbdyPA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161955-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161955: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 May 2021 17:27:23 +0000

flight 161955 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161955/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     18 guest-localmigrate       fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  268 days
Failing since        152659  2020-08-21 14:07:39 Z  267 days  488 attempts
Testing same since   161955  2021-05-15 02:41:48 Z    0 days    1 attempts

------------------------------------------------------------
499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 152107 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 15 19:11:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 19:11:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127861.240197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhzhD-0006BL-NT; Sat, 15 May 2021 19:11:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127861.240197; Sat, 15 May 2021 19:11:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lhzhD-0006BE-K2; Sat, 15 May 2021 19:11:31 +0000
Received: by outflank-mailman (input) for mailman id 127861;
 Sat, 15 May 2021 19:11:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lhzhB-0006B4-VC
 for xen-devel@lists.xenproject.org; Sat, 15 May 2021 19:11:29 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhzhB-0005gE-PR; Sat, 15 May 2021 19:11:29 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lhzhB-0003qR-JZ; Sat, 15 May 2021 19:11:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=8i4vi/4f3EfQcOXxuDrH+O6VCvyUFJjoTdfDRUWQMuk=; b=wYRzgMHAHg/4ziyltnKC8WNNgS
	55CMEILoHwhZbzIH0Qa2zUuTLMl4ENo5yankA8ScckD+YnwnN35hDMkFRuWsxxzLP+TpYeOf6cM3a
	XN/cChEAMcNkZSuxtd+rQHsIwqQNrEEDU1YYQ5Jy0Kf/mICN/Qt7WNWJiwtKlVdU2Bk8=;
Subject: Re: Discussion of Xenheap problems on AArch64
To: Henry Wang <Henry.Wang@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <PA4PR08MB6253F49C13ED56811BA5B64E92479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <cdde98ca-4183-c92b-adca-801330992fc5@xen.org>
 <PA4PR08MB62538BBA256E66A0415F0C7192479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <f14aa1d6-35d2-a9a3-0672-7f0d3ae3ec89@xen.org>
 <PA4PR08MB62534C4130B59CAA9A8A8BF792419@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <PA4PR08MB6253FBC7F5E690DB74F2E11F92409@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
 <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba649865-410b-e1be-39a3-c4cac802f464@xen.org>
 <PA4PR08MB6253F85E184CA51BDB99786992539@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba1bc084-5a5b-1410-acba-33bfca7c4f6a@xen.org>
 <PA4PR08MB6253E95579D8277D7FD1BE9A92509@PA4PR08MB6253.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7247122c-127d-705c-78a5-7f9460f5821a@xen.org>
Date: Sat, 15 May 2021 20:11:27 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <PA4PR08MB6253E95579D8277D7FD1BE9A92509@PA4PR08MB6253.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Henry,

On 14/05/2021 05:35, Henry Wang wrote:
>> From: Julien Grall <julien@xen.org>
> Hi Julien,
> 
>>
>> On 11/05/2021 02:11, Henry Wang wrote:
>>> Hi Julien,
>> Hi Henry,
>>>
>>>> From: Julien Grall <julien@xen.org>
>>>> Hi Henry,
>>>>
>>>> On 07/05/2021 05:06, Henry Wang wrote:
>>>>>> From: Julien Grall <julien@xen.org>
>>>>>> On 28/04/2021 10:28, Henry Wang wrote:
>>>> [...]
>>>>
>>>>> when I continue booting Xen, I got following error log:
>>>>>
>>>>> (XEN) Xen call trace:
>>>>> (XEN)    [<00000000002b5a5c>] alloc_boot_pages+0x94/0x98 (PC)
>>>>> (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108 (LR)
>>>>> (XEN)    [<00000000002ca3bc>] setup_frametable_mappings+0xa4/0x108
>>>>> (XEN)    [<00000000002cb988>] start_xen+0x344/0xbcc
>>>>> (XEN)    [<00000000002001c0>] arm64/head.o#primary_switched+0x10/0x30
>>>>> (XEN)
>>>>> (XEN) ****************************************
>>>>> (XEN) Panic on CPU 0:
>>>>> (XEN) Xen BUG at page_alloc.c:432
>>>>> (XEN) ****************************************
>>>>
>>>> This is happening without my patch series applied, right? If so, what
>>>> happen if you apply it?
>>>
>>> No, I am afraid this is with your patch series applied, and that is why I
>>> am a little bit confused about the error log...
>>
>> You are hitting the BUG() at the end of alloc_boot_pages(). This is hit
>> because the boot allocator couldn't allocate memory for your request.
>>
>> Would you be able to apply the following diff and paste the output here?
> 
> Thank you, of course yes, please see below output attached :)
> 
>>
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index ace6333c18ea..dbb736fdb275 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -329,6 +329,8 @@ void __init init_boot_pages(paddr_t ps, paddr_t pe)
>>        if ( pe <= ps )
>>            return;
>>
>> +    printk("%s: ps %"PRI_paddr" pe %"PRI_paddr"\n", __func__, ps, pe);
>                                                ^ FYI: I had to change this PRI_paddr to PRIpaddr
>                                                   to make the compiler happy

Ah yes, we don't have a variant with _. I thought I had compile-tested it 
before sending it :(.

> 
>> +
>>        first_valid_mfn = mfn_min(maddr_to_mfn(ps), first_valid_mfn);
>>
>>        bootmem_region_add(ps >> PAGE_SHIFT, pe >> PAGE_SHIFT);
>> @@ -395,6 +397,8 @@ mfn_t __init alloc_boot_pages(unsigned long nr_pfns, unsigned long pfn_align)
>>        unsigned long pg, _e;
>>        unsigned int i = nr_bootmem_regions;
>>
>> +    printk("%s: nr_pfns %lu pfn_align %lu\n", __func__, nr_pfns, pfn_align);
>> +
>>        BUG_ON(!nr_bootmem_regions);
>>
>>        while ( i-- )
>>
> 
> I also added some printk to make sure the dtb is parsed correctly, and for the
> Error case, I get following log:

Thank you for the log.

> 
> (XEN) ----------banks=2--------
> (XEN) ----------start=80000000--------
> (XEN) ----------size=7F000000--------
> (XEN) ----------start=F900000000--------
> (XEN) ----------size=80000000--------
> (XEN) Checking for initrd in /chosen
> (XEN) RAM: 0000000080000000 - 00000000feffffff
> (XEN) RAM: 000000f900000000 - 000000f97fffffff
> (XEN)
> (XEN) MODULE[0]: 0000000084000000 - 00000000841464c8 Xen
> (XEN) MODULE[1]: 00000000841464c8 - 0000000084148c9b Device Tree
> (XEN) MODULE[2]: 0000000080080000 - 0000000081080000 Kernel
> (XEN)  RESVD[0]: 0000000080000000 - 0000000080010000
> (XEN)
> (XEN) Command line: noreboot dom0_mem=1024M console=dtuart dtuart=serial0 bootscrub=0
> (XEN) PFN compression on bits 21...22
> (XEN) init_boot_pages: ps 0000000080010000 pe 0000000080080000

The size of this region is 448KB.

> (XEN) init_boot_pages: ps 0000000081080000 pe 0000000084000000

The size of this region is 47MB.

> (XEN) init_boot_pages: ps 0000000084149000 pe 00000000ff000000

The size of this region is 1966MB.


> (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> (XEN) init_boot_pages: ps 000000f900000000 pe 000000f980000000

The size of this region is 2048MB.

> (XEN) alloc_boot_pages: nr_pfns 909312 pfn_align 8192

This is asking for 3552MB of contiguous memory, which cannot be 
accommodated. In any case, this is quite a large region to ask for.
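As a quick sanity check on that number (a Python sketch using only the values printed in the log; the helper name is mine, not Xen's):

```python
PAGE_SHIFT = 12  # 4KB pages, as used by Xen here

def pages_to_mb(nr_pfns: int) -> int:
    """Convert a page count from the alloc_boot_pages trace to whole MB."""
    return (nr_pfns << PAGE_SHIFT) >> 20

print(pages_to_mb(909312))  # requested size: 3552 MB
print(pages_to_mb(8192))    # requested alignment: 32 MB
```

The largest free region reported above is 2048MB, so a 3552MB contiguous request can never be satisfied.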

Same...

> (XEN) Xen BUG at page_alloc.c:436
> 
> To compare with the maximum start address (f800000000) of second part mem
> where xen boots correctly, I also attached the log for your information:
> 
> (XEN) ----------banks=2--------
> (XEN) ----------start=80000000--------
> (XEN) ----------size=7F000000--------
> (XEN) ----------start=F800000000--------
> (XEN) ----------size=80000000--------
> (XEN) Checking for initrd in /chosen
> (XEN) RAM: 0000000080000000 - 00000000feffffff
> (XEN) RAM: 000000f800000000 - 000000f87fffffff
> (XEN)
> (XEN) MODULE[0]: 0000000084000000 - 00000000841464c8 Xen
> (XEN) MODULE[1]: 00000000841464c8 - 0000000084148c9b Device Tree
> (XEN) MODULE[2]: 0000000080080000 - 0000000081080000 Kernel
> (XEN)  RESVD[0]: 0000000080000000 - 0000000080010000
> (XEN)
> (XEN) Command line: noreboot dom0_mem=1024M console=dtuart dtuart=serial0 bootscrub=0
> (XEN) PFN compression on bits 20...22
> (XEN) init_boot_pages: ps 0000000080010000 pe 0000000080080000
> (XEN) init_boot_pages: ps 0000000081080000 pe 0000000084000000
> (XEN) init_boot_pages: ps 0000000084149000 pe 00000000ff000000
> (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> (XEN) init_boot_pages: ps 000000f800000000 pe 000000f880000000
> (XEN) alloc_boot_pages: nr_pfns 450560 pfn_align 8192

... here. We are trying to allocate a ~1.7GB frametable. You have only 
4GB of memory, so the frametable should be a lot smaller (a few tens of MB).
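Comparing the two logs (again just arithmetic on the logged page counts, not a model of Xen's frametable code):

```python
PAGE_SIZE = 4096  # bytes per page

def alloc_mb(nr_pfns: int) -> int:
    """Size in whole MB of an alloc_boot_pages request."""
    return nr_pfns * PAGE_SIZE // (1 << 20)

failing = alloc_mb(909312)  # bank at 0xF900000000, bits 21...22 compressed
booting = alloc_mb(450560)  # bank at 0xF800000000, bits 20...22 compressed

print(failing, booting)  # -> 3552 1760
# The failing layout asks for roughly twice as much: one fewer
# compressed PDX bit doubles the PDX span the frametable must cover
# (modulo the 32MB alignment rounding of the request).
```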

This is happening because PDX is not able to find many bits to compress. 
I am not sure we can compress more with the current PDX algorithm; this 
may require some extensive improvements to reduce the footprint.
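To illustrate why this layout compresses so poorly, here is a simplified sketch of the bit-hole search (my own reconstruction, loosely modelled on the mask logic in xen/common/pdx.c; it is not the actual Xen implementation): a frame-number bit can only be dropped if it is zero in every page of every RAM bank.

```python
def fill_mask(m: int) -> int:
    # Smear the set bits downwards: fill_mask(0b1010) -> 0b1111.
    while m & (m + 1):
        m |= m + 1
    return m

def compressible_bits(banks):
    # banks: (start_pfn, end_pfn_exclusive) pairs.  Accumulate every bit
    # that is set somewhere in some bank; the remaining zero bits are the
    # candidates PDX compression could squeeze out.
    mask = 0
    for start, end in banks:
        mask |= start | fill_mask(start ^ (end - 1))
    return [b for b in range(mask.bit_length()) if not (mask >> b) & 1]

# PFNs (4KB pages) for the two layouts in the logs:
print(compressible_bits([(0x80000, 0xFF000), (0xF900000, 0xF980000)]))
# -> [21, 22]      matches "PFN compression on bits 21...22"
print(compressible_bits([(0x80000, 0xFF000), (0xF800000, 0xF880000)]))
# -> [20, 21, 22]  matches "PFN compression on bits 20...22"
```

With the high bank at 0xF900000000 only two bits can be dropped, so the PDX space, and the frametable that covers it, stays nearly twice as large as in the 0xF800000000 layout.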

In a previous e-mail, you said you tweaked the FVP model to set those 
regions. Were you trying to mimic the memory layout of real HW 
(either current or future)?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Sat May 15 20:03:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 15 May 2021 20:03:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127866.240207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1li0Vb-0002oT-NI; Sat, 15 May 2021 20:03:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127866.240207; Sat, 15 May 2021 20:03:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1li0Vb-0002oM-KL; Sat, 15 May 2021 20:03:35 +0000
Received: by outflank-mailman (input) for mailman id 127866;
 Sat, 15 May 2021 20:03:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li0Va-0002oC-L9; Sat, 15 May 2021 20:03:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li0Va-0006bW-B0; Sat, 15 May 2021 20:03:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li0VZ-0005zq-VD; Sat, 15 May 2021 20:03:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1li0VZ-0002cK-Ui; Sat, 15 May 2021 20:03:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=c1g15X45V/9Qh7R9PqsA1Zlj0X2o4f/wLYUazLvKVYI=; b=oK6kmXatsMInUqPsuVIp6sA/gt
	2+YmrQ/SFNl/ZdLlXWdNsurrX+vT8+Dul4nkvKPL9a5QAXeDg/c1QggcBtLSbjyZP5dnzfADs6XEV
	Kx0rRsax1wCbnf1/T2OrjeCIWyWgfAcy0HTjem38skGK9gnsMJO3k0xr/9GjLGp37eXk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161957-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161957: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt-xsm:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-xsm:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=25a1298726e97b9d25379986f5d54d9e62ad6e93
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 15 May 2021 20:03:33 +0000

flight 161957 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161957/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10 fail in 161953 pass in 161957
 test-amd64-amd64-libvirt-xsm 20 guest-start/debian.repeat  fail pass in 161953
 test-amd64-amd64-xl-xsm      22 guest-start/debian.repeat  fail pass in 161953
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 161953

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                25a1298726e97b9d25379986f5d54d9e62ad6e93
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  288 days
Failing since        152366  2020-08-01 20:49:34 Z  286 days  481 attempts
Testing same since   161953  2021-05-14 23:42:13 Z    0 days    2 attempts

------------------------------------------------------------
6049 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 fail    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      fail    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1641070 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 16 01:09:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 01:09:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127877.240222 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1li5Gz-0001cO-1x; Sun, 16 May 2021 01:08:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127877.240222; Sun, 16 May 2021 01:08:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1li5Gy-0001cH-Tr; Sun, 16 May 2021 01:08:48 +0000
Received: by outflank-mailman (input) for mailman id 127877;
 Sun, 16 May 2021 01:08:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li5Gx-0001c7-32; Sun, 16 May 2021 01:08:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li5Gw-0000Wu-Kr; Sun, 16 May 2021 01:08:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li5Gw-0003wt-8X; Sun, 16 May 2021 01:08:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1li5Gw-0006ou-81; Sun, 16 May 2021 01:08:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8EBtTlbceS/gVrQhVHd1YVOWxddoYNnkoAc7noc0hPQ=; b=QmBoFuNc1qnFG9drilvART+Wca
	y7qCJxeIgiOlutBoa7bE0pTdTpc1ucMDEjbad8ULan7PeIXlBMnFYMFF2bAyKjB3qsohLSmhgC4jK
	kvRMfOULja7lZtW8w7RxAXdbovuFnaOGSPNAHumSmHvGgc430RyFsPxjumhmKHJhHb08=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161961-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161961: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 01:08:46 +0000

flight 161961 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161961/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  268 days
Failing since        152659  2020-08-21 14:07:39 Z  267 days  489 attempts
Testing same since   161955  2021-05-15 02:41:48 Z    0 days    2 attempts

------------------------------------------------------------
499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 152107 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 16 04:10:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 04:10:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127884.240236 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1li86f-0001sU-Pe; Sun, 16 May 2021 04:10:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127884.240236; Sun, 16 May 2021 04:10:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1li86f-0001sN-Kp; Sun, 16 May 2021 04:10:21 +0000
Received: by outflank-mailman (input) for mailman id 127884;
 Sun, 16 May 2021 04:10:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li86e-0001sB-SW; Sun, 16 May 2021 04:10:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li86e-0003vm-LV; Sun, 16 May 2021 04:10:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1li86e-00022D-92; Sun, 16 May 2021 04:10:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1li86e-0003H7-87; Sun, 16 May 2021 04:10:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=H3X8B8P20ue8c/aCBHaZ2wpT2epz4fflLAsYAstep84=; b=C3Kk8hkXf08VplJDO2oUOo4Fln
	8DpczjBT3p3M9ugABZ9D1zDTW3HhdRKemMc0gkeOZwZtd8fDVXHXMrfdaw3WuY9r894Vg5vernv4Z
	gH2Ce+lG4Mv9tJRB/NpPKnv1hCsCzXibxPvKX33oR6VPNVv9aA5bJt3GpCGIZW+MwNZs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161962-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161962: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c12a29ed9094b4b9cde8965c12850460b9a79d7c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 04:10:20 +0000

flight 161962 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161962/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                c12a29ed9094b4b9cde8965c12850460b9a79d7c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  288 days
Failing since        152366  2020-08-01 20:49:34 Z  287 days  482 attempts
Testing same since   161962  2021-05-15 20:12:12 Z    0 days    1 attempts

------------------------------------------------------------
6056 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1643489 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 16 06:30:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 06:30:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127896.240249 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liAIB-0006Qu-53; Sun, 16 May 2021 06:30:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127896.240249; Sun, 16 May 2021 06:30:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liAIB-0006Qn-1s; Sun, 16 May 2021 06:30:23 +0000
Received: by outflank-mailman (input) for mailman id 127896;
 Sun, 16 May 2021 06:30:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=E7ce=KL=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1liAIA-0006Qh-19
 for xen-devel@lists.xenproject.org; Sun, 16 May 2021 06:30:22 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3f177d21-e47e-4e70-85de-31837e3da669;
 Sun, 16 May 2021 06:30:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 31C65AED7;
 Sun, 16 May 2021 06:30:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f177d21-e47e-4e70-85de-31837e3da669
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621146620; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=v8til0hEEG2JW3BLQ4jQ4a/ff0IVW6Ge5T7xlA6EHMc=;
	b=i3+w7E43uApRIFjjRPrk53E9roPfVn2nk/KLRgw44xW2S4mCZCBvBx6m5oN/6Z/ZsE0O8s
	EKcO/KbRVF44/RZcWPPCBLfsnzb3ZL3h22zp7xqZLXuPouMF/KQ8BA/Bs6mzDqND+jSxPI
	MF09hpK2S2YICkwnv7hHHARUy4OAzhA=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.13-rc2
Date: Sun, 16 May 2021 08:30:19 +0200
Message-Id: <20210516063019.3296-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc2-tag

xen: branch for v5.13-rc2

It contains the following patches:

- 2 patches for error path fixes
- a small series fixing a swiotlb regression with Xen on Arm


Thanks.

Juergen

 arch/arm/xen/mm.c               | 20 +++++++-------------
 arch/arm64/mm/init.c            |  3 ++-
 drivers/xen/gntdev.c            |  4 +++-
 drivers/xen/swiotlb-xen.c       |  5 +++++
 drivers/xen/unpopulated-alloc.c |  4 +++-
 include/xen/arm/swiotlb-xen.h   | 15 ++++++++++++++-
 6 files changed, 34 insertions(+), 17 deletions(-)

Christoph Hellwig (1):
      arm64: do not set SWIOTLB_NO_FORCE when swiotlb is required

Juergen Gross (2):
      xen/gntdev: fix gntdev_mmap() error exit path
      Merge tag 'for-linus-5.13b-rc2-tag' of gitolite.kernel.org:pub/scm/linux/kernel/git/xen/tip into __for-linus-5.13b-rc2-tag

Stefano Stabellini (2):
      xen/arm: move xen_swiotlb_detect to arm/swiotlb-xen.h
      xen/swiotlb: check if the swiotlb has already been initialized

Zhen Lei (1):
      xen/unpopulated-alloc: fix error return code in fill_list()


Message-Id: <20210516063020.3349-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc2-tag

xen: branch for v5.13-rc2

It contains the following patches:

- 2 patches for error path fixes
- a small series for fixing a regression with swiotlb with Xen on Arm


Thanks.

Juergen

 arch/arm/xen/mm.c               | 20 +++++++-------------
 arch/arm64/mm/init.c            |  3 ++-
 drivers/xen/gntdev.c            |  4 +++-
 drivers/xen/swiotlb-xen.c       |  5 +++++
 drivers/xen/unpopulated-alloc.c |  4 +++-
 include/xen/arm/swiotlb-xen.h   | 15 ++++++++++++++-
 6 files changed, 34 insertions(+), 17 deletions(-)

Christoph Hellwig (1):
      arm64: do not set SWIOTLB_NO_FORCE when swiotlb is required

Juergen Gross (2):
      xen/gntdev: fix gntdev_mmap() error exit path
      Merge tag 'for-linus-5.13b-rc2-tag' of gitolite.kernel.org:pub/scm/linux/kernel/git/xen/tip into __for-linus-5.13b-rc2-tag

Stefano Stabellini (2):
      xen/arm: move xen_swiotlb_detect to arm/swiotlb-xen.h
      xen/swiotlb: check if the swiotlb has already been initialized

Zhen Lei (1):
      xen/unpopulated-alloc: fix error return code in fill_list()


From xen-devel-bounces@lists.xenproject.org Sun May 16 08:58:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 08:58:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127909.240271 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liCbH-0003GR-Gj; Sun, 16 May 2021 08:58:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127909.240271; Sun, 16 May 2021 08:58:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liCbH-0003GK-Do; Sun, 16 May 2021 08:58:15 +0000
Received: by outflank-mailman (input) for mailman id 127909;
 Sun, 16 May 2021 08:58:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liCbF-0003GA-L9; Sun, 16 May 2021 08:58:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liCbF-0001GP-BP; Sun, 16 May 2021 08:58:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liCbE-0001uT-Va; Sun, 16 May 2021 08:58:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liCbE-0002Nq-V7; Sun, 16 May 2021 08:58:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NNcTwNEARiC4n0YxPL1D4IQEbvaXAA+LJXYpS+dgX+Q=; b=56tlCcftWI2l5lfbCkM4fPAP4q
	lsJNJYb/MddFTkQ2tXZRBybCpTBDzoBUo2bi1IwYIdHLroj8h3DC+TDMvB3n/Rt9+a3KvtzpuKSJn
	sfOSTgZGu28rLZcwrQErzOLaE4UfWSQT3ENNB1HQndlzU6lGdetClsbg7P4IJBgstEnU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161963-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161963: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 08:58:12 +0000

flight 161963 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161963/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-i386-xl           20 guest-localmigrate/x10     fail pass in 161961

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  268 days
Failing since        152659  2020-08-21 14:07:39 Z  267 days  490 attempts
Testing same since   161955  2021-05-15 02:41:48 Z    1 days    3 attempts

------------------------------------------------------------
499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 152107 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 16 10:07:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 10:07:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127915.240286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liDgV-0001QB-C5; Sun, 16 May 2021 10:07:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127915.240286; Sun, 16 May 2021 10:07:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liDgV-0001Q4-8S; Sun, 16 May 2021 10:07:43 +0000
Received: by outflank-mailman (input) for mailman id 127915;
 Sun, 16 May 2021 10:07:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liDgT-0001PY-PA; Sun, 16 May 2021 10:07:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liDgT-0002TM-Fw; Sun, 16 May 2021 10:07:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liDgT-0005hK-7F; Sun, 16 May 2021 10:07:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liDgT-0005GM-6n; Sun, 16 May 2021 10:07:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UPpeMP2K6jljoIwD2N18pN4fj3qQf0PvatL9ciSzk5w=; b=XXyHCrtj9flJatKJC3iarpw8co
	DMWeZp+jqdnUwYBsR5EsFy4ORgd8Lz794VmV47usnA1Bst4KnhCPskwvdkteao0yA8Tpx4q0FA68+
	pbJYDYJcehnIEY3aaBEem07LPSFwqyrKuhudlGTUCilA9giOlG/aO7vpi8Rd4fYQRRFQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161966-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161966: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=df28ba289c7d74cda5a9022e1376bfb9cd18ebb7
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 10:07:41 +0000

flight 161966 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161966/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              df28ba289c7d74cda5a9022e1376bfb9cd18ebb7
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  310 days
Failing since        151818  2020-07-11 04:18:52 Z  309 days  302 attempts
Testing same since   161956  2021-05-15 04:20:06 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57600 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 16 10:07:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 10:07:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127917.240300 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liDgj-0001la-Sa; Sun, 16 May 2021 10:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127917.240300; Sun, 16 May 2021 10:07:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liDgj-0001lT-Od; Sun, 16 May 2021 10:07:57 +0000
Received: by outflank-mailman (input) for mailman id 127917;
 Sun, 16 May 2021 10:07:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liDgi-0001kT-3A; Sun, 16 May 2021 10:07:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liDgh-0002Tt-Rx; Sun, 16 May 2021 10:07:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liDgh-0005jD-Ko; Sun, 16 May 2021 10:07:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liDgh-0005ww-KE; Sun, 16 May 2021 10:07:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7IBT4hdzKo0Nr4gpXKsP+oaYWReQYY4f9EpbqK83PwU=; b=1zM+IlHtVy1fJnDVr3mRHUiV9w
	AnMIOl9WZoZndfz48uhQXpc4jz85bQT2h9JhJC58/LO+w5tSZ4dRRi7BtEKtY1b55lt5sPVMrOfxM
	kxgoXSshUlqE9uH8ib3cDWZzjjLDZuIcWM37x1X7zgEskTnqQbm9F9uFZtm/SRYbifIg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161968-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 161968: all pass - PUSHED
X-Osstest-Versions-This:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
X-Osstest-Versions-That:
    xen=d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 10:07:55 +0000

flight 161968 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161968/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34
baseline version:
 xen                  d4fb5f166c2bfbaf9ba0de69da0d411288f437a9

Last test of basis   161916  2021-05-12 09:19:39 Z    4 days
Testing same since   161968  2021-05-16 09:19:30 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Olaf Hering <olaf@aepfle.de>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   d4fb5f166c..cb199cc7de  cb199cc7de987cfda4659fccf51059f210f6ad34 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 16 13:24:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 13:24:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127982.240352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liGkg-0005O3-E5; Sun, 16 May 2021 13:24:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127982.240352; Sun, 16 May 2021 13:24:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liGkg-0005Nw-Ac; Sun, 16 May 2021 13:24:14 +0000
Received: by outflank-mailman (input) for mailman id 127982;
 Sun, 16 May 2021 13:24:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liGkf-0005Nm-0B; Sun, 16 May 2021 13:24:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liGke-0005do-QA; Sun, 16 May 2021 13:24:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liGke-0005sQ-Fo; Sun, 16 May 2021 13:24:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liGke-0004op-FE; Sun, 16 May 2021 13:24:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o9P7bo/7P0Fz4LBhpFvVCfLwyO2nAO4sEiGKgbCbtcw=; b=ek8mNnwAuHx9JpVrMPOJRnkeCe
	TPhnegqyhsCKagp0l9ufYIGrVZmJgiEKWPrWQr4UwFjEbeTfAXaFqNlKoR8usfBNTPyYEbeBiLiFx
	Equ0yzUyf2SmjuSJ81enewlMwjB+d34ySQkZD+eNd3gboZGiXX7/1fs/dCjIhHjmr3ZE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161964-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161964: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
X-Osstest-Versions-That:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 13:24:12 +0000

flight 161964 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161964/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161954
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161954
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161954
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161954
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161954
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161954
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161954
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161954
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161954
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161954
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161954
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34
baseline version:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34

Last test of basis   161964  2021-05-16 01:51:36 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 16 16:27:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 16:27:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127988.240366 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liJc3-0005Tj-GO; Sun, 16 May 2021 16:27:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127988.240366; Sun, 16 May 2021 16:27:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liJc3-0005Tc-DS; Sun, 16 May 2021 16:27:31 +0000
Received: by outflank-mailman (input) for mailman id 127988;
 Sun, 16 May 2021 16:27:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liJc2-0005TR-1I; Sun, 16 May 2021 16:27:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liJc1-0000go-Kr; Sun, 16 May 2021 16:27:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liJc1-0004wA-9G; Sun, 16 May 2021 16:27:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liJc1-0005Fl-8l; Sun, 16 May 2021 16:27:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=CVTirLn9dECkGCMpNSe29VhuX5G67sPgATj9XVCKVIE=; b=AN5EQQbj/ZVwZ+SIS55cia2td6
	N0Rsrtw4BVpRA8w7Bwbdzw9gx+1TLgvQDlpKLVwjNBg6xTphQTAffO0cNTYxoX9BZa3bpoYVZvB09
	ntWCDBz3Oek+mLuYFnBPcTEea7s2udRcvZWrFZk4hdI+6vPaYNRp/gui2I4+vR0juqc0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161965-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161965: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=63d1cb53e26a9a4168b84a8981b225c0a9cfa235
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 16:27:29 +0000

flight 161965 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161965/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                63d1cb53e26a9a4168b84a8981b225c0a9cfa235
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  288 days
Failing since        152366  2020-08-01 20:49:34 Z  287 days  483 attempts
Testing same since   161965  2021-05-16 04:14:11 Z    0 days    1 attempts

------------------------------------------------------------
6056 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1644007 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 16 16:45:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 16:45:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.127997.240381 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liJtA-0007sm-7q; Sun, 16 May 2021 16:45:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 127997.240381; Sun, 16 May 2021 16:45:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liJtA-0007sf-4k; Sun, 16 May 2021 16:45:12 +0000
Received: by outflank-mailman (input) for mailman id 127997;
 Sun, 16 May 2021 16:45:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nw2+=KL=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1liJt9-0007sZ-BX
 for xen-devel@lists.xenproject.org; Sun, 16 May 2021 16:45:11 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3cfb92b8-fd65-41af-9f32-9f0b661e89f6;
 Sun, 16 May 2021 16:45:10 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPS id B451A610CA;
 Sun, 16 May 2021 16:45:09 +0000 (UTC)
Received: from pdx-korg-docbuild-2.ci.codeaurora.org (localhost.localdomain
 [127.0.0.1])
 by pdx-korg-docbuild-2.ci.codeaurora.org (Postfix) with ESMTP id A15C960A23;
 Sun, 16 May 2021 16:45:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cfb92b8-fd65-41af-9f32-9f0b661e89f6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621183509;
	bh=4FKIJRZigr3MmLKiP7lAxUR2d8nC79R8APeyi0lOsz0=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=sGHPtMJYH59AGTjqnQWE23IiNWId05Sv8rBZMN0T0EhGyXNMgq+Y0cOC9UotDFGcC
	 45uMBbw116dGMMmhoHl7EiTQ3MNbkFx7p65oTsEqI3x1/ZNypKZizvwBFrskC0eT/4
	 u1LUN0LQiTyV2OEgPQTL/OCrCFrf2beprmBYj7S/pKIZ21C+KXYfI0/ZG0nDFKIzls
	 hs+Dx4mObHsEsfkMktTO3Mi/364PHPvRww632YxZDn6JTrnNkPsXS8m2yvLUlj6yLB
	 fkHiExVCgQ4jF8H5dvEf2AA1Ue3Wlb1ouapii6SsrA7tXYeeMLJFpaQUYn7w2NBlE6
	 yEV1fqQzJ5ikg==
Subject: Re: [GIT PULL] xen: branch for v5.13-rc2
From: pr-tracker-bot@kernel.org
In-Reply-To: <20210516063020.3349-1-jgross@suse.com>
References: <20210516063020.3349-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20210516063020.3349-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc2-tag
X-PR-Tracked-Commit-Id: 97729b653de52ba98e08732dd8855586e37a3a31
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: f44e58bb1905ada4910f26676d2ea22a35545276
Message-Id: <162118350959.14702.2339566205360958922.pr-tracker-bot@kernel.org>
Date: Sun, 16 May 2021 16:45:09 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Sun, 16 May 2021 08:30:20 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc2-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/f44e58bb1905ada4910f26676d2ea22a35545276

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sun May 16 20:37:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 20:37:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128003.240392 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liNWB-0003Sw-Sq; Sun, 16 May 2021 20:37:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128003.240392; Sun, 16 May 2021 20:37:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liNWB-0003Sp-P4; Sun, 16 May 2021 20:37:43 +0000
Received: by outflank-mailman (input) for mailman id 128003;
 Sun, 16 May 2021 20:37:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liNWA-0003SZ-Ey; Sun, 16 May 2021 20:37:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liNWA-0004uq-9L; Sun, 16 May 2021 20:37:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liNW9-0006hn-UU; Sun, 16 May 2021 20:37:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liNW9-0007Ii-Tb; Sun, 16 May 2021 20:37:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OpyuBCNGUwOY/9tC/+Y3KPdvgS8xQhLuN2v8luP7f9E=; b=GCDXRgEPUYUDO2XHKG4VSf9uns
	tN9mPfJdXjCzhOKi/vYqlvK1XvDbl6w9uuRpnrAgD16L6uMPL7aux1mkoa2xHrbw2d4opA3RU6E07
	YR0ff8zl1Pzcw2kA93j8HkW+O5Jb18laLP8XVz/anMjEI56tsBKBWDFJuAeYPu4sXb6Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161967-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161967: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start.2:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 20:37:41 +0000

flight 161967 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161967/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds     19 guest-start.2           fail blocked in 152631
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  269 days
Failing since        152659  2020-08-21 14:07:39 Z  268 days  491 attempts
Testing same since   161955  2021-05-15 02:41:48 Z    1 days    4 attempts

------------------------------------------------------------
499 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 152107 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 16 22:49:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 22:49:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128010.240406 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liPZ9-0006ce-Iq; Sun, 16 May 2021 22:48:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128010.240406; Sun, 16 May 2021 22:48:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liPZ9-0006cX-EZ; Sun, 16 May 2021 22:48:55 +0000
Received: by outflank-mailman (input) for mailman id 128010;
 Sun, 16 May 2021 22:48:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=M0Ii=KL=gmail.com=alistair23@srs-us1.protection.inumbo.net>)
 id 1liPZ7-0006cR-Bs
 for xen-devel@lists.xenproject.org; Sun, 16 May 2021 22:48:53 +0000
Received: from mail-il1-x136.google.com (unknown [2607:f8b0:4864:20::136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 940f7ca9-406d-402b-9956-7580411b36ec;
 Sun, 16 May 2021 22:48:52 +0000 (UTC)
Received: by mail-il1-x136.google.com with SMTP id h15so1657379ilr.2
 for <xen-devel@lists.xenproject.org>; Sun, 16 May 2021 15:48:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 940f7ca9-406d-402b-9956-7580411b36ec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=813jZWrJgwbGDM0rg0Twntty63F/whIPr9Jlee3RRDM=;
        b=HxmJu5ODXps9H/4Ys9/o0JMv1ptdEQavDzq69ub4+09awhcuOkqx7pwyRQLF7Rgry1
         d799ZuSqo1qZf5jS/LPmfIgcYJZQkMl5Bsa1BWLaI+GmICA2bBUYIlA6JbN7j3hLji7a
         W7/O9pSmZQKYeQn4BKN4vG/AXFXM7Q7NwWVyNRcOHjBjTulBvBfFXTABJEh6N5MzuLFB
         /6J8PBUszVonYsnpAZBeE63aBa2jc5wZRXQWHJckzOs2iTrIIcFnm5vaBV4ZyAthkL9z
         4Fi7rX9tU5dUXlk+SWEl+BobNP1QZv+v5v0OcgdB+xNSI9ukke/O/cuWastxU3VG2FiS
         iEMQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=813jZWrJgwbGDM0rg0Twntty63F/whIPr9Jlee3RRDM=;
        b=YDwSM4S6yNaQaH9hP/2UZ7WDi7Y5OG+sI8WI8MzNFy/HbpiT2n8nrInetLlNU1zILf
         M1qvePBvKrvLcyKVvJd1Uv8lM9nGOKZmVx23+vpTtTrgdj0goF8pw4i3ziDo2/7Nx5tj
         I8GusyaZVyXLckj3E74NnbdCUisXg4mAMOebwaDM1imB773rW9bLGi3eBO63iU7Bachb
         avwIaZ2NzvdS+yc+bX1cd7uONijyWELl6txdbwjeTGN6fytYhjCViyudNzfTl03yKPj+
         i8ZT3RR2nlHU5cKgVpifoHEbo1AwsndkXdwW4XXl8oTiGbq9/u9c4BkPmY8LTWNju4AW
         ojjA==
X-Gm-Message-State: AOAM53196mq9yKBzTrTVoWwZLZARDyJRCjrH/R97FxIxbjXJVlDw8UwZ
	FXmrOz7mE76KdhQdLI7qBEcLx5xiqsuk58A1wO4=
X-Google-Smtp-Source: ABdhPJyPduUHLxtCBHpDMk77p3AQ1w9SH4eWFdPTzFlOW4bkdGpkWY8Kiu4hsYsKEmNK0o5O4dPM92AF02FwlLCYGsc=
X-Received: by 2002:a05:6e02:eac:: with SMTP id u12mr50967693ilj.177.1621205332119;
 Sun, 16 May 2021 15:48:52 -0700 (PDT)
MIME-Version: 1.0
References: <cover.1621017334.git.connojdavis@gmail.com> <3960a676376e0163d97ac02f968966cdfaccbf75.1621017334.git.connojdavis@gmail.com>
In-Reply-To: <3960a676376e0163d97ac02f968966cdfaccbf75.1621017334.git.connojdavis@gmail.com>
From: Alistair Francis <alistair23@gmail.com>
Date: Mon, 17 May 2021 08:48:26 +1000
Message-ID: <CAKmqyKPKU6-P5yrkNG4upCEjOZUngF8QrfZs4Q0mBYyKRsuPqw@mail.gmail.com>
Subject: Re: [PATCH v3 1/5] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Connor Davis <connojdavis@gmail.com>
Cc: "open list:X86" <xen-devel@lists.xenproject.org>, 
	Bobby Eshleman <bobbyeshleman@gmail.com>, Andrew Cooper <andrew.cooper3@citrix.com>, 
	George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
	Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
Content-Type: text/plain; charset="UTF-8"

On Sat, May 15, 2021 at 4:54 AM Connor Davis <connojdavis@gmail.com> wrote:
>
> Defaulting to yes only for X86 and ARM reduces the requirements
> for a minimal build when porting new architectures.
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>

Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Alistair

> ---
>  xen/drivers/char/Kconfig | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
> index b572305657..b15b0c8d6a 100644
> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -1,6 +1,6 @@
>  config HAS_NS16550
>         bool "NS16550 UART driver" if ARM
> -       default y
> +       default y if (ARM || X86)
>         help
>           This selects the 16550-series UART support. For most systems, say Y.
>
> --
> 2.31.1
>
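
[Editorial note: a minimal Kconfig sketch of the behavior the quoted patch describes. The `RISCV` symbol below is hypothetical, used only to stand in for "a new architecture being ported"; it is not part of the patch or of the Xen tree as quoted.]

```
# Hypothetical fragment for illustration; assumes an architecture
# symbol RISCV that the quoted patch does not introduce.
config HAS_NS16550
        bool "NS16550 UART driver" if ARM
        default y if (ARM || X86)
        help
          This selects the 16550-series UART support. For most systems, say Y.

# Effect of the change:
#  - ARM or X86 builds: HAS_NS16550 still defaults to y, as before.
#  - A build with only RISCV=y: the default is now n, so a minimal
#    port does not have to provide or stub out the NS16550 driver.
```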


From xen-devel-bounces@lists.xenproject.org Sun May 16 23:52:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 16 May 2021 23:52:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128015.240417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liQYV-0004bu-5J; Sun, 16 May 2021 23:52:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128015.240417; Sun, 16 May 2021 23:52:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liQYV-0004bn-26; Sun, 16 May 2021 23:52:19 +0000
Received: by outflank-mailman (input) for mailman id 128015;
 Sun, 16 May 2021 23:52:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liQYT-0004bd-9e; Sun, 16 May 2021 23:52:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liQYT-00082d-1l; Sun, 16 May 2021 23:52:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liQYS-0005QD-KZ; Sun, 16 May 2021 23:52:16 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liQYS-0004fp-K1; Sun, 16 May 2021 23:52:16 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=a0xph0GjQrrDPbuhc7pZYlRX/0Yq2OZXuYCCU/ut0PY=; b=GWbCP7wTZox2Ai/uUc7oWzVde9
	WRmkMdODPupaa5a1e9V1si4IKm8heSVDNOZ/pTQrl0lrXzJNRe1+d1oWqkxPdaS8s3Qf+7pRvtL1z
	/RnuVI3CiNb5wSvXbCNsU1vJCRVWcgJs8b/y7GHNpfQOKJc3dv3X7WASBxeJlJ/3PiTM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161970-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161970: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-arm64-arm64-examine:reboot:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:xen-boot:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-seattle:xen-boot:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=63d1cb53e26a9a4168b84a8981b225c0a9cfa235
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 16 May 2021 23:52:16 +0000

flight 161970 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161970/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-arm64-arm64-examine      8 reboot                   fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm  8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl           8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1   8 xen-boot                 fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm       8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx  8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-seattle   8 xen-boot                 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 161965
 test-armhf-armhf-libvirt      8 xen-boot                   fail pass in 161965

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 161965 like 152332
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 161965 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                63d1cb53e26a9a4168b84a8981b225c0a9cfa235
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  289 days
Failing since        152366  2020-08-01 20:49:34 Z  288 days  484 attempts
Testing same since   161965  2021-05-16 04:14:11 Z    0 days    2 attempts

------------------------------------------------------------
6056 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     fail    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1644007 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 17 04:48:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 04:48:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161971-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161971: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6005ee07c380cbde44292f5f6c96e7daa70f4f7d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 04:47:48 +0000

flight 161971 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161971/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                6005ee07c380cbde44292f5f6c96e7daa70f4f7d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  269 days
Failing since        152659  2020-08-21 14:07:39 Z  268 days  492 attempts
Testing same since   161971  2021-05-16 21:09:32 Z    0 days    1 attempts

------------------------------------------------------------
502 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 152903 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 17 06:07:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 06:07:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128030.240445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liWPa-00042D-6g; Mon, 17 May 2021 06:07:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128030.240445; Mon, 17 May 2021 06:07:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liWPa-000426-3o; Mon, 17 May 2021 06:07:30 +0000
Received: by outflank-mailman (input) for mailman id 128030;
 Mon, 17 May 2021 06:07:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=frGc=KM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1liWPY-000420-D6
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 06:07:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b6487a78-84b3-45c7-af65-ff66772a6c66;
 Mon, 17 May 2021 06:07:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C6A45B195;
 Mon, 17 May 2021 06:07:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b6487a78-84b3-45c7-af65-ff66772a6c66
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621231645; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=FqLH/VtAy0PQS9wOptFGjOCuKLp7AxzOxWM+3+rSrcU=;
	b=Il8JoTICmHz+jtN/mO3kTeA5J6DZr3p3wbv8+7lQw1Oh+nWo1nlZFAIKccHsM1Ofdym0Jj
	UWBfuk3AUpYLaeL2KJboV2kpeJmJ7scfkHlEivGoIzHR9nSd/M8ZLXePdX6VqzrVGSRcGi
	dqjCI8lZoIpUSlfxUyBkepQ2uEAWjjY=
Subject: Re: [PATCH v2 1/2] tools/xenstore: move per connection read and write
 func hooks into a struct
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514115620.32731-1-jgross@suse.com>
 <20210514115620.32731-2-jgross@suse.com>
 <7cdd7f43-3f3f-12e4-abf9-0e4d698a85b1@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <fa09bf28-8aab-b90c-b0bf-937e4867afa0@suse.com>
Date: Mon, 17 May 2021 08:07:24 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <7cdd7f43-3f3f-12e4-abf9-0e4d698a85b1@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="FwFs7zGLzu17JuxcLR6sz0s90RqKhiY18"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--FwFs7zGLzu17JuxcLR6sz0s90RqKhiY18
Content-Type: multipart/mixed; boundary="xVBvLyV4pfW39HtDfdXXBDOPpsXq7Mw4C";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <fa09bf28-8aab-b90c-b0bf-937e4867afa0@suse.com>
Subject: Re: [PATCH v2 1/2] tools/xenstore: move per connection read and write
 func hooks into a struct
References: <20210514115620.32731-1-jgross@suse.com>
 <20210514115620.32731-2-jgross@suse.com>
 <7cdd7f43-3f3f-12e4-abf9-0e4d698a85b1@xen.org>
In-Reply-To: <7cdd7f43-3f3f-12e4-abf9-0e4d698a85b1@xen.org>

--xVBvLyV4pfW39HtDfdXXBDOPpsXq7Mw4C
Content-Type: multipart/mixed;
 boundary="------------E04E6C0CA53741A6FB7D1439"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------E04E6C0CA53741A6FB7D1439
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.05.21 18:33, Julien Grall wrote:
> Hi Juergen,
> 
> On 14/05/2021 12:56, Juergen Gross wrote:
>> -struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
>> +struct connection *new_connection(const struct interface_funcs *funcs);
>>   struct connection *get_connection_by_id(unsigned int conn_id);
>>   void check_store(void);
>>   void corrupt(struct connection *conn, const char *fmt, ...);
>> @@ -254,10 +256,7 @@ void finish_daemonize(void);
>>   /* Open a pipe for signal handling */
>>   void init_pipe(int reopen_log_pipe[2]);
>> -int writefd(struct connection *conn, const void *data, unsigned int len);
>> -int readfd(struct connection *conn, void *data, unsigned int len);
>> -
>> -extern struct interface_funcs socket_funcs;
>> +extern const struct interface_funcs socket_funcs;
> Shouldn't this be protected with #ifdef NO_SOCKETS?

Yes, I can add it.

> 
> The rest looks good to me:
> 
> Reviewed-by: Julien Grall <jgrall@amazon.com>

Thanks,

Juergen


--------------E04E6C0CA53741A6FB7D1439
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------E04E6C0CA53741A6FB7D1439--

--xVBvLyV4pfW39HtDfdXXBDOPpsXq7Mw4C--

--FwFs7zGLzu17JuxcLR6sz0s90RqKhiY18
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCiCBwFAwAAAAAACgkQsN6d1ii/Ey9T
4gf+OXR6DXWcKVnVy18z/6VCi/61pXrNNrvKUebdrGta/86Ofmt+Y2kVbXnwP9XeXMASFJt/cr6g
aJvKvqjJtVSS7potmuro+7TRseYqHLSRz9PH38n40ATg2sacWzW7oMt+H7+owR/fXRzoCYvaKz7j
zaamTPMGJHOHjyB/FFjrmIxJFYiZuwoWK7aguHjk1Z6wEzRVm5F8YYET0o7Ud0LZFmSkggCvGsEz
ThPsUZUTJZY6Ygg87TQJh8NryG3Gmd2LyjRMqASDik/HCwscb4Oaml6Y5eX7q4ZlK5WfniJWr+LB
iw4stRHjSmP9qPDSPcQYGNLRk4ph9a+qkKvrtu6dow==
=E8px
-----END PGP SIGNATURE-----

--FwFs7zGLzu17JuxcLR6sz0s90RqKhiY18--


From xen-devel-bounces@lists.xenproject.org Mon May 17 06:10:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 06:10:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128035.240457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liWSq-0005O6-P4; Mon, 17 May 2021 06:10:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128035.240457; Mon, 17 May 2021 06:10:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liWSq-0005Nz-Jk; Mon, 17 May 2021 06:10:52 +0000
Received: by outflank-mailman (input) for mailman id 128035;
 Mon, 17 May 2021 06:10:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=frGc=KM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1liWSp-0005Nt-Sr
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 06:10:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e947e231-7526-4dcd-af13-fdba7e689a5b;
 Mon, 17 May 2021 06:10:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 084AEB17E;
 Mon, 17 May 2021 06:10:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e947e231-7526-4dcd-af13-fdba7e689a5b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621231850; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=qCdkIsV7Fle2n1PZaY30cqASV5X3cN/rbGLPhtugg8c=;
	b=OJU8n3JP5M1r5K/ywx4r/wORD9puCMBfqWVZQjtIaIcTTA1NaReRMG8o07/sj70xQ/OwSi
	fvzEIKXPbnKSuKX2gHI3lBOI3/vnmUCeMgOBxe/Wkb/ONn3HvaD2+F8L+Mk5XXnDiSFe1v
	s//w5jvIIx+4a8qqIb95u3YeVk8dNaI=
Subject: Re: [PATCH v2 2/2] tools/xenstore: simplify xenstored main loop
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514115620.32731-1-jgross@suse.com>
 <20210514115620.32731-3-jgross@suse.com>
 <24e89076-4440-a32e-f701-71957cc2a9e4@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <12b13143-717b-c288-b96b-50613dafc6d3@suse.com>
Date: Mon, 17 May 2021 08:10:49 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <24e89076-4440-a32e-f701-71957cc2a9e4@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="uhyCoXdgLTLBAJJpxQta8UGpiqqAG1jzm"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--uhyCoXdgLTLBAJJpxQta8UGpiqqAG1jzm
Content-Type: multipart/mixed; boundary="ivS4C9vKhIRFbI3T3j9Wwpbn8bjBbAU3x";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <12b13143-717b-c288-b96b-50613dafc6d3@suse.com>
Subject: Re: [PATCH v2 2/2] tools/xenstore: simplify xenstored main loop
References: <20210514115620.32731-1-jgross@suse.com>
 <20210514115620.32731-3-jgross@suse.com>
 <24e89076-4440-a32e-f701-71957cc2a9e4@xen.org>
In-Reply-To: <24e89076-4440-a32e-f701-71957cc2a9e4@xen.org>

--ivS4C9vKhIRFbI3T3j9Wwpbn8bjBbAU3x
Content-Type: multipart/mixed;
 boundary="------------445480A3475B0BCD97F9E246"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------445480A3475B0BCD97F9E246
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 14.05.21 19:05, Julien Grall wrote:
> Hi Juergen,
> 
> On 14/05/2021 12:56, Juergen Gross wrote:
>> The main loop of xenstored is rather complicated due to different
>> handling of socket and ring-page interfaces. Unify that handling by
>> introducing interface type specific functions can_read() and
>> can_write().
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>> V2:
>> - split off function vector introduction (Julien Grall)
>> ---
>>   tools/xenstore/xenstored_core.c   | 77 ++++++++++++++-----------------
>>   tools/xenstore/xenstored_core.h   |  2 +
>>   tools/xenstore/xenstored_domain.c |  2 +
>>   3 files changed, 41 insertions(+), 40 deletions(-)
>>
>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>> index 856f518075..883a1a582a 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -1659,9 +1659,34 @@ static int readfd(struct connection *conn, void *data, unsigned int len)
>>       return rc;
>>   }
>> +static bool socket_can_process(struct connection *conn, int mask)
>> +{
>> +    if (conn->pollfd_idx == -1)
>> +        return false;
>> +
>> +    if (fds[conn->pollfd_idx].revents & ~(POLLIN | POLLOUT)) {
>> +        talloc_free(conn);
>> +        return false;
>> +    }
>> +
>> +    return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
>> +}
>> +
>> +static bool socket_can_write(struct connection *conn)
>> +{
>> +    return socket_can_process(conn, POLLOUT);
>> +}
>> +
>> +static bool socket_can_read(struct connection *conn)
>> +{
>> +    return socket_can_process(conn, POLLIN);
>> +}
>> +
>>   const struct interface_funcs socket_funcs = {
>>       .write = writefd,
>>       .read = readfd,
>> +    .can_write = socket_can_write,
>> +    .can_read = socket_can_read,
>>   };
>>   static void accept_connection(int sock)
>> @@ -2296,47 +2321,19 @@ int main(int argc, char *argv[])
>>               if (&next->list != &connections)
>>                   talloc_increase_ref_count(next);
>> -            if (conn->domain) {
>> -                if (domain_can_read(conn))
>> -                    handle_input(conn);
>> -                if (talloc_free(conn) == 0)
>> -                    continue;
>> -
>> -                talloc_increase_ref_count(conn);
>> -                if (domain_can_write(conn) &&
>> -                    !list_empty(&conn->out_list))
> 
> AFAICT, the check "!list_empty(&conn->out_list)" can be safely removed
> because write_messages() will check if the list is empty (list_top()
> returns NULL in this case). Is that correct?

Yes.


Juergen
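[Editorial note: a minimal sketch of the unification the patch describes — one loop that services every connection through its can_read()/can_write() hooks, with no socket-vs-domain branching. The hook bodies and the connection array are toy stand-ins; a real socket backend would consult poll() revents (as socket_can_process() does) and a domain backend the shared ring's indexes.]

```c
#include <stdbool.h>
#include <stddef.h>

struct connection;

/* Function vector extended with the readiness hooks from the patch. */
struct interface_funcs {
	bool (*can_read)(struct connection *conn);
	bool (*can_write)(struct connection *conn);
};

struct connection {
	const struct interface_funcs *funcs;
	bool readable;
	bool writable;
	int reads;	/* count of handle_input() calls in this toy */
	int writes;	/* count of write_messages() calls */
};

/* Toy hooks: fixed flags instead of poll()/ring state. */
static bool toy_can_read(struct connection *c)  { return c->readable; }
static bool toy_can_write(struct connection *c) { return c->writable; }

static const struct interface_funcs toy_funcs = {
	.can_read = toy_can_read,
	.can_write = toy_can_write,
};

/* One pass of the unified main loop: every connection is handled
 * identically, the per-interface differences live in the hooks. */
static void service(struct connection *conns, size_t n)
{
	for (size_t i = 0; i < n; i++) {
		struct connection *c = &conns[i];

		if (c->funcs->can_read(c))
			c->reads++;	/* handle_input(c) in xenstored */
		if (c->funcs->can_write(c))
			c->writes++;	/* write_messages(c) */
	}
}
```

This also illustrates Julien's point above: if write_messages() itself tolerates an empty out_list, the loop needs no extra !list_empty() guard before calling the write path.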



--------------445480A3475B0BCD97F9E246--

--ivS4C9vKhIRFbI3T3j9Wwpbn8bjBbAU3x--

--uhyCoXdgLTLBAJJpxQta8UGpiqqAG1jzm
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCiCOkFAwAAAAAACgkQsN6d1ii/Ey+b
qAf+Jznf+p+VRCCemACVSCt2xff1CIaEVCB7yyWf+gPHbET4ofdFcYnB2F58uVnd5cYOTGSu3jPI
emXuD4jTxytBB2u1u8zXFjqHCxcAIECQ6SvXDTbLaIHr4TZJBu6u3pyqrBCmExxH7+YIDV7HSOYS
Z1BYLV3fn+814+5ICZ6ibJ9G5Tg0qYsPnhX8gTkOQXO5ykGqKiD1aqvh2CltNzX10Qd81eqfKsL5
wOP1Rasbtb+NafC0LfO8Kr06kM3SuWOLSvLvKXqA32vKZyfctqY6+mRDmof+2C89NusxhGAV7Igh
5HX26mWy0XkjT3nx2u8TS07ZeHtq2YiTxxpRe0mezw==
=JNT/
-----END PGP SIGNATURE-----

--uhyCoXdgLTLBAJJpxQta8UGpiqqAG1jzm--


From xen-devel-bounces@lists.xenproject.org Mon May 17 06:38:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 06:38:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128045.240466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liWtQ-000820-3R; Mon, 17 May 2021 06:38:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128045.240466; Mon, 17 May 2021 06:38:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liWtQ-00081t-0P; Mon, 17 May 2021 06:38:20 +0000
Received: by outflank-mailman (input) for mailman id 128045;
 Mon, 17 May 2021 06:38:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+8Xj=KM=arm.com=henry.wang@srs-us1.protection.inumbo.net>)
 id 1liWtP-00081m-0F
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 06:38:19 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe09::609])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ff0bc462-95e1-48ac-9603-2426e63be582;
 Mon, 17 May 2021 06:38:15 +0000 (UTC)
Received: from DB6PR0501CA0030.eurprd05.prod.outlook.com (2603:10a6:4:67::16)
 by DB6PR0801MB1909.eurprd08.prod.outlook.com (2603:10a6:4:72::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Mon, 17 May
 2021 06:38:12 +0000
Received: from DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:67:cafe::1f) by DB6PR0501CA0030.outlook.office365.com
 (2603:10a6:4:67::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Mon, 17 May 2021 06:38:12 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT061.mail.protection.outlook.com (10.152.21.234) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Mon, 17 May 2021 06:38:12 +0000
Received: ("Tessian outbound 1e34f83e4964:v91");
 Mon, 17 May 2021 06:38:12 +0000
Received: from 7391579a5213.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F4469669-9969-47EB-BFAF-6E39D6A32153.1; 
 Mon, 17 May 2021 06:38:06 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 7391579a5213.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 17 May 2021 06:38:06 +0000
Received: from PA4PR08MB6253.eurprd08.prod.outlook.com (2603:10a6:102:e4::8)
 by PA4PR08MB6064.eurprd08.prod.outlook.com (2603:10a6:102:e2::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Mon, 17 May
 2021 06:38:05 +0000
Received: from PA4PR08MB6253.eurprd08.prod.outlook.com
 ([fe80::19f9:d346:b9af:5cad]) by PA4PR08MB6253.eurprd08.prod.outlook.com
 ([fe80::19f9:d346:b9af:5cad%3]) with mapi id 15.20.4129.031; Mon, 17 May 2021
 06:38:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff0bc462-95e1-48ac-9603-2426e63be582
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IdVtVJ+9fSc+lGKAXBgShaRNGr2ufhmlajbFBoKDwZA=;
 b=8/giHiCPUc8p+qLimNTdW2XWo/daG0rwQWBzlQFcJAQTV/4Y2Hkkuq6cxhN0HUYXcxI+cejTs+sM9C7ozPOJg75WjPSaaTcMIGSu/5CH4cfMfFnloE8vh1/AchHkeRSZ5KYWC1ZW4s2AIXwnWl/QkWg3r4aJQ5gY0igIdwjAyX8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=TM5Mv1nLuPDOyq99HJQHCH5uzzG/SwS2I/ZiV4G5uyijcyZT+WLJRlJQI8PPtJyKh7zK+LPtRfuTHRyGKbAVBqspYTtB+hhhKaefa2LbDu9vh1PIaWlXPvYdYQBmr+uoPVO/bTMBX7hswdeKb25q39IpLpGTciVm5qU2G6djxKuGWbcQl7EbNLYxHniczzy3ChjSdSdl4zHfrRcoimxNJk+337BqjSB5tSn8ZDFGA4NWsMeZB+0u3Gxw7huv5j2h2o/qsetP4Z7d5N784mdWOqPqoUy4uDthytRD8s+5ALZ9wyq7Ud2PwtOiKhQGnYr123u5DyZfjKPByLhOzkxqTQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IdVtVJ+9fSc+lGKAXBgShaRNGr2ufhmlajbFBoKDwZA=;
 b=lxsC34kjeIDwQE00tM3unUdU2W+EpkU8lnO2ujJKIveVHKOOUZA0PVI2IoOTKLW2OKNP/SyiAc6sXOKSSGxxGOjgIRxvV8wo7hcHQjlPKuv3msY8AEBAkgRaYr5UDPCEDdk1AgpDXypDdtghl4o8t69/0cl3DXBLZ5u8NH/cuMwe0J9tmx7WxVnldNZwtqZXQN7cXa9ZxHMxSR/ud35LfkqTHovd09NSbe1XECBydD+aS4xft4D0o2ibzgn+959GpdU8wWF8HCfeB+LA8hWAZbp0LY8d8AR5FpFfAkIWApmMDYsBVU4+bGoQ4uRVeyeQuM9B34VHCt18d1A6E5NBUA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Henry Wang <Henry.Wang@arm.com>
To: Julien Grall <julien@xen.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>, Bertrand
 Marquis <Bertrand.Marquis@arm.com>
Subject: RE: Discussion of Xenheap problems on AArch64
Thread-Topic: Discussion of Xenheap problems on AArch64
Thread-Index:
 Adc2dyA8lkZGRqbyRiSglHolanVkwQAFhaqAAACgy/AA4CfqgABHcHyAADhcqlAABznSAAGrycWAALiGZgAAEDKF4ACJdUUAABHcYPAAVJTsgABJ8qkw
Date: Mon, 17 May 2021 06:38:05 +0000
Message-ID:
 <PA4PR08MB6253AB1B1286086E4EDE60A2922D9@PA4PR08MB6253.eurprd08.prod.outlook.com>
References:
 <PA4PR08MB6253F49C13ED56811BA5B64E92479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <cdde98ca-4183-c92b-adca-801330992fc5@xen.org>
 <PA4PR08MB62538BBA256E66A0415F0C7192479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <f14aa1d6-35d2-a9a3-0672-7f0d3ae3ec89@xen.org>
 <PA4PR08MB62534C4130B59CAA9A8A8BF792419@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <PA4PR08MB6253FBC7F5E690DB74F2E11F92409@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
 <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba649865-410b-e1be-39a3-c4cac802f464@xen.org>
 <PA4PR08MB6253F85E184CA51BDB99786992539@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba1bc084-5a5b-1410-acba-33bfca7c4f6a@xen.org>
 <PA4PR08MB6253E95579D8277D7FD1BE9A92509@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <7247122c-127d-705c-78a5-7f9460f5821a@xen.org>
In-Reply-To: <7247122c-127d-705c-78a5-7f9460f5821a@xen.org>
Accept-Language: zh-CN, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: C1F9BDF1B6D2F84DB05CB25F5DBE09BA.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.112]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: ef257bcc-0f14-4320-c309-08d918fe57d5
x-ms-traffictypediagnostic: PA4PR08MB6064:|DB6PR0801MB1909:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0801MB1909B15A7C8F512FCF270990922D9@DB6PR0801MB1909.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PA4PR08MB6064
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	f0bdd7d3-cfa5-4aea-35a9-08d918fe53a3
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2021 06:38:12.3942
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ef257bcc-0f14-4320-c309-08d918fe57d5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT061.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0801MB1909

> From: Julien Grall <julien@xen.org>
Hi Julien,

> Hi Henry,
>
> >>>> [...]
>
> Ah yes, we don't have a variant with _. I thought I had compile-tested
> it before sending it :(.

No worries :)

>
> >
> > (XEN) ----------banks=2--------
> > (XEN) ----------start=80000000--------
> > (XEN) ----------size=7F000000--------
> > (XEN) ----------start=F900000000--------
> > (XEN) ----------size=80000000--------
> > (XEN) Checking for initrd in /chosen
> > (XEN) RAM: 0000000080000000 - 00000000feffffff
> > (XEN) RAM: 000000f900000000 - 000000f97fffffff
> > (XEN)
> > (XEN) MODULE[0]: 0000000084000000 - 00000000841464c8 Xen
> > (XEN) MODULE[1]: 00000000841464c8 - 0000000084148c9b Device Tree
> > (XEN) MODULE[2]: 0000000080080000 - 0000000081080000 Kernel
> > (XEN)  RESVD[0]: 0000000080000000 - 0000000080010000
> > (XEN)
> > (XEN) Command line: noreboot dom0_mem=1024M console=dtuart
> > dtuart=serial0 bootscrub=0
> > (XEN) PFN compression on bits 21...22
> > (XEN) init_boot_pages: ps 0000000080010000 pe 0000000080080000
>
> The size of this region is 448KB.
>
> > (XEN) init_boot_pages: ps 0000000081080000 pe 0000000084000000
>
> The size of this region is 47MB.
>
> > (XEN) init_boot_pages: ps 0000000084149000 pe 00000000ff000000
>
> The size of this region is 1966MB.
>
>
> > (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> > (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> > (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> > (XEN) init_boot_pages: ps 000000f900000000 pe 000000f980000000
>
> The size of this region is 2048MB.
>
> > (XEN) alloc_boot_pages: nr_pfns 909312 pfn_align 8192
>
> This is asking for 3552MB of contiguous memory which cannot be
> accommodated. In any case, this is quite a large region to ask for.
>
> Same...
>
> > (XEN) Xen BUG at page_alloc.c:436
> >
> > To compare with the maximum start address (f800000000) of the second
> > part of memory where Xen boots correctly, I also attached the log for
> > your information:
> >
> > (XEN) ----------banks=2--------
> > (XEN) ----------start=80000000--------
> > (XEN) ----------size=7F000000--------
> > (XEN) ----------start=F800000000--------
> > (XEN) ----------size=80000000--------
> > (XEN) Checking for initrd in /chosen
> > (XEN) RAM: 0000000080000000 - 00000000feffffff
> > (XEN) RAM: 000000f800000000 - 000000f87fffffff
> > (XEN)
> > (XEN) MODULE[0]: 0000000084000000 - 00000000841464c8 Xen
> > (XEN) MODULE[1]: 00000000841464c8 - 0000000084148c9b Device Tree
> > (XEN) MODULE[2]: 0000000080080000 - 0000000081080000 Kernel
> > (XEN)  RESVD[0]: 0000000080000000 - 0000000080010000
> > (XEN)
> > (XEN) Command line: noreboot dom0_mem=1024M console=dtuart
> > dtuart=serial0 bootscrub=0
> > (XEN) PFN compression on bits 20...22
> > (XEN) init_boot_pages: ps 0000000080010000 pe 0000000080080000
> > (XEN) init_boot_pages: ps 0000000081080000 pe 0000000084000000
> > (XEN) init_boot_pages: ps 0000000084149000 pe 00000000ff000000
> > (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> > (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> > (XEN) alloc_boot_pages: nr_pfns 1 pfn_align 1
> > (XEN) init_boot_pages: ps 000000f800000000 pe 000000f880000000
> > (XEN) alloc_boot_pages: nr_pfns 450560 pfn_align 8192
>
> ... here. We are trying to allocate a 1.5GB frametable. You have only
> 4GB of memory, so the frametable should be a lot smaller (a few tens
> of MB).
>
> This is happening because PDX is not able to find many bits to compress.
> I am not sure we can compress more with the current PDX algorithm. This
> may require some extensive improvement to reduce the footprint.

Yes, you are right, so I don't have any more questions. Thanks very much
for the detailed explanation.

>
> On a previous e-mail, you said you tweaked the FVP model to set those
> regions. Were you trying to mimic the memory layout of real HW
> (either current or future)?

Not really, I was just trying to cover as many cases as possible, and
these regions were just picked for testing your patchset in different
scenarios.

As the issue is related to the PDX algorithm rather than the heap
allocation, and the "allocating a big heap or two heap banks with a big
gap" case is tested, I think this patchset is perfect ^^ Thank you.

Kind regards,

Henry

>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 06:47:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 06:47:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128050.240478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liX25-00017B-Vf; Mon, 17 May 2021 06:47:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128050.240478; Mon, 17 May 2021 06:47:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liX25-000172-Si; Mon, 17 May 2021 06:47:17 +0000
Received: by outflank-mailman (input) for mailman id 128050;
 Mon, 17 May 2021 06:47:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=frGc=KM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1liX24-00016p-Lu
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 06:47:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 378f459e-59dd-4b99-b20c-3dbce9c81e75;
 Mon, 17 May 2021 06:47:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 851E7AC5B;
 Mon, 17 May 2021 06:47:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 378f459e-59dd-4b99-b20c-3dbce9c81e75
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621234034; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=TnQzZ3p6KNinDGhPECmEcsk2w1Ztj/zC8BRqaYk8avE=;
	b=Ozy++M1jnsE6eFw9rlCaUFgupzGtVIWJkVfHwvNwWLu1NaypFj6ubscz9LgTrPkmd0paag
	iHmGpE6LKuhj28Hh2oB4jce8RvJABrK8WgiipENFnNk50K8dK6082UdFncpiSmWOf9dYOe
	Hnt+s6ez1ZflCzLSwNErku3o/Zi/dYs=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514084133.18658-1-jgross@suse.com>
 <1e38cce0-6960-ac21-b349-dac8551e23ed@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/xenstore: claim resources when running as daemon
Message-ID: <fe5f1e6a-1a89-ea12-feb5-318f25d4281f@suse.com>
Date: Mon, 17 May 2021 08:47:13 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <1e38cce0-6960-ac21-b349-dac8551e23ed@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="r5eGLDFeDnxNKyd6apUHOTAS37yE7rq1k"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--r5eGLDFeDnxNKyd6apUHOTAS37yE7rq1k
Content-Type: multipart/mixed; boundary="6VhrY2KrthhTmiCITCVUL9M6xkNtWjIRh";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <fe5f1e6a-1a89-ea12-feb5-318f25d4281f@suse.com>
Subject: Re: [PATCH] tools/xenstore: claim resources when running as daemon
References: <20210514084133.18658-1-jgross@suse.com>
 <1e38cce0-6960-ac21-b349-dac8551e23ed@xen.org>
In-Reply-To: <1e38cce0-6960-ac21-b349-dac8551e23ed@xen.org>

--6VhrY2KrthhTmiCITCVUL9M6xkNtWjIRh
Content-Type: multipart/mixed;
 boundary="------------9B7C61A149234CEBB298A308"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------9B7C61A149234CEBB298A308
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit

On 14.05.21 22:19, Julien Grall wrote:
> Hi Juergen,
>
> On 14/05/2021 09:41, Juergen Gross wrote:
>> Xenstored is absolutely mandatory for a Xen host and it can't be
>> restarted, so being killed by the OOM-killer in case of memory
>> shortage is to be avoided.
>>
>> Set /proc/$pid/oom_score_adj (if available) to -500 in order to allow
>> xenstored to use large amounts of memory without being killed.
>>
>> In order to support large numbers of domains the limit for open file
>> descriptors might need to be raised. Each domain needs 2 file
>> descriptors (one for the event channel and one for the xl per-domain
>> daemon to connect to xenstored).
>
> Hmmm... AFAICT there is only one file descriptor to handle all the
> event channels. Could you point out the code showing one event FD per
> domain?

I let myself be fooled by just counting the file descriptors used with
one or two domains active.

So you are right that all event channels only use one fd, but each xl
daemon will use two (which should be fixed, IMO). And thinking more
about it, it is even worse: each qemu process will require at least one
additional fd.

>
>> Try to raise the ulimit for open files to 65536. First the hard limit
>> if needed, and then the soft limit.
>
> I am not sure it is right to impose this limit on everyone. For
> instance, an admin may know that there will be no more than 100
> domains on their system.

Is setting a higher limit really a problem?

> So the admin should be able to configure them. At this point, I think
> the two limits should be set by the initscript rather than xenstored
> itself.

But the admin would need to know the Xen internals for selecting the
correct limits. In the end I'd be fine with moving this modification to
the script starting Xenstore (which would be launch-xenstore), but the
configuration item should be "max number of domains to support".

>
> This would also avoid the problem where Xenstored is not allowed to
> modify its limit (see more below).
>
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>> ---
>>   tools/xenstore/xenstored_core.c   |  2 ++
>>   tools/xenstore/xenstored_core.h   |  3 ++
>>   tools/xenstore/xenstored_minios.c |  4 +++
>>   tools/xenstore/xenstored_posix.c  | 46 +++++++++++++++++++++++++++++++++++++++
>>   4 files changed, 55 insertions(+)
>>
>> diff --git a/tools/xenstore/xenstored_core.c
>> b/tools/xenstore/xenstored_core.c
>> index b66d119a98..964e693450 100644
>> --- a/tools/xenstore/xenstored_core.c
>> +++ b/tools/xenstore/xenstored_core.c
>> @@ -2243,6 +2243,8 @@ int main(int argc, char *argv[])
>>           xprintf = trace;
>>   #endif
>> +    claim_resources();
>> +
>>       signal(SIGHUP, trigger_reopen_log);
>>       if (tracefile)
>>           tracefile = talloc_strdup(NULL, tracefile);
>> diff --git a/tools/xenstore/xenstored_core.h
>> b/tools/xenstore/xenstored_core.h
>> index 1467270476..ac26973648 100644
>> --- a/tools/xenstore/xenstored_core.h
>> +++ b/tools/xenstore/xenstored_core.h
>> @@ -255,6 +255,9 @@ void daemonize(void);
>>   /* Close stdin/stdout/stderr to complete daemonize */
>>   void finish_daemonize(void);
>> +/* Set OOM-killer score and raise ulimit. */
>> +void claim_resources(void);
>> +
>>   /* Open a pipe for signal handling */
>>   void init_pipe(int reopen_log_pipe[2]);
>> diff --git a/tools/xenstore/xenstored_minios.c
>> b/tools/xenstore/xenstored_minios.c
>> index c94493e52a..df8ff580b0 100644
>> --- a/tools/xenstore/xenstored_minios.c
>> +++ b/tools/xenstore/xenstored_minios.c
>> @@ -32,6 +32,10 @@ void finish_daemonize(void)
>>   {
>>   }
>> +void claim_resources(void)
>> +{
>> +}
>> +
>>   void init_pipe(int reopen_log_pipe[2])
>>   {
>>       reopen_log_pipe[0] = -1;
>> diff --git a/tools/xenstore/xenstored_posix.c
>> b/tools/xenstore/xenstored_posix.c
>> index 48c37ffe3e..0074fbd8b2 100644
>> --- a/tools/xenstore/xenstored_posix.c
>> +++ b/tools/xenstore/xenstored_posix.c
>> @@ -22,6 +22,7 @@
>>   #include <fcntl.h>
>>   #include <stdlib.h>
>>   #include <sys/mman.h>
>> +#include <sys/resource.h>
>>   #include "utils.h"
>>   #include "xenstored_core.h"
>> @@ -87,6 +88,51 @@ void finish_daemonize(void)
>>       close(devnull);
>>   }
>> +static void avoid_oom_killer(void)
>> +{
>> +    char path[32];
>> +    char val[] = "-500";
>> +    int fd;
>> +
>> +    snprintf(path, sizeof(path), "/proc/%d/oom_score_adj",
>> (int)getpid());
>
> This looks Linux specific. How about other OSes?

I don't know whether other OSes have an OOM killer, and if they do, how
to configure it. It is a best-effort attempt, after all.

>
>> +
>> +    fd = open(path, O_WRONLY);
>> +    /* Do nothing if file doesn't exist. */
>
> Your commit message leads one to think that we *must* configure the
> OOM killer. If not, then we should not continue. But here, this
> suggests it is optional. In fact...

I can modify the commit message by adding a "Try to".

>
>> +    if (fd < 0)
>> +        return;
>> +    /* Ignore errors. */
>> +    write(fd, val, sizeof(val));
>
> ... xenstored may not be allowed to modify its own parameters. So this
> would continue silently without the admin necessarily knowing the
> limit wasn't applied.

I can add a line in the Xenstore log in this regard.


Juergen

--------------9B7C61A149234CEBB298A308
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9B7C61A149234CEBB298A308--

--6VhrY2KrthhTmiCITCVUL9M6xkNtWjIRh--

--r5eGLDFeDnxNKyd6apUHOTAS37yE7rq1k
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCiEXEFAwAAAAAACgkQsN6d1ii/Ey8v
Wgf+K7GXSyh3MUOn5Lp1qtJo3kRa8UHspxUwyRqxergfHPcC1DAtw6iGBWU49SkWa7HFwNd9MPXn
VM482Q//sVz8YQcvFlhU6XSmZH1ZugkBvQ3qQ4fOfgls22u/25B3JfQwDUe/D636tK9Prz95tJCl
9oXAnIIZWua3czSc6UZo9ghg7Alai+XzbBA4OStv+QZOm/OemUyV5rvFjy4Pr84s2Nv94PvVfloi
tmhOJ8E84ilCe9QwSXIufbHOeBiwhy53mfGbWSyOTB09nlGGp4FptMDd3zMvsEEFXn/WIo6HNw1R
fO72y5Q4fPmwi2k8+XKp0uRp7iyxdgqDlhcvcUUkIA==
=4lOD
-----END PGP SIGNATURE-----

--r5eGLDFeDnxNKyd6apUHOTAS37yE7rq1k--


From xen-devel-bounces@lists.xenproject.org Mon May 17 06:48:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 06:48:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128056.240489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liX3D-0001pf-Ek; Mon, 17 May 2021 06:48:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128056.240489; Mon, 17 May 2021 06:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liX3D-0001pY-B8; Mon, 17 May 2021 06:48:27 +0000
Received: by outflank-mailman (input) for mailman id 128056;
 Mon, 17 May 2021 06:48:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liX3C-0001pO-EN
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 06:48:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 37de0106-a572-435a-a931-98f813e36962;
 Mon, 17 May 2021 06:48:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7C556AF3E;
 Mon, 17 May 2021 06:48:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37de0106-a572-435a-a931-98f813e36962
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621234104; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=O6qUVKWUKUzptKyI9QMY4G49qkAM1nkIhX4ZXC+4+Hs=;
	b=JbqL611RsNq4/5E6QtlsiYkidds/foslsKgqnWQ5dAYqrtLQxTCYSauKzvizAHAg1xw6RT
	TjhRBDeDGC8/RQTUlU1sBzpoW4gQ6bJSBiUMVD5M4PcmIqGnkBT5VMHkXOiT13adGKry6w
	H/SWgUB5SX8Iu1jQYXb13AEK3LnLtFA=
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
To: Bob Eshleman <bobbyeshleman@gmail.com>
Cc: Daniel Kiper <daniel.kiper@oracle.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
Date: Mon, 17 May 2021 08:48:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.05.2021 22:26, Bob Eshleman wrote:
> What is your intuition WRT the idea that instead of trying to add a PE/COFF
> header in front of Xen's mb2 binary, we instead go the route of introducing
> valid mb2 entry points into xen.efi?

At first glance I think this is going to be less intrusive, and hence
preferable. But of course I haven't experimented in any way ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 06:49:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 06:49:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128059.240499 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liX3l-0002P7-O4; Mon, 17 May 2021 06:49:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128059.240499; Mon, 17 May 2021 06:49:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liX3l-0002P0-Kz; Mon, 17 May 2021 06:49:01 +0000
Received: by outflank-mailman (input) for mailman id 128059;
 Mon, 17 May 2021 06:49:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=frGc=KM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1liX3k-0002MT-A9
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 06:49:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e31979ac-1134-46ab-92d6-0baaaab921a2;
 Mon, 17 May 2021 06:48:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9760EAAFD;
 Mon, 17 May 2021 06:48:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e31979ac-1134-46ab-92d6-0baaaab921a2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621234132; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=t82WmwomixiXQU1aJXx8VZmvd3rCMkGrjCKqHwePMiQ=;
	b=cgJttrL3pOSPIKstid/5v/wLuxbVMpVcvwRUYoQTLD4C4ua+zzanH+pI3cmHmPI7CpAdFi
	Ya2nZwSrPu0phiZM4l3tdHV7KV79m9JCJbp6o+weNnVFKN5fdwALcTsHQxkqD4RolVTpYM
	xHlPmSLRo1BI6PNmg7LJx0arrFe4oEI=
Subject: Re: [PATCH] tools/xenstore: simplify xenstored main loop
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514083905.18212-1-jgross@suse.com>
 <304944cf-ac92-be14-e088-1975ef073255@xen.org>
 <3be1937f-3cd9-3eb8-48fd-bc9c9a85c051@suse.com>
 <5744a347-282c-ff61-7507-03b7a1e9d4c9@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <ae53addb-7e64-8a93-10fc-149f5ae4bea3@suse.com>
Date: Mon, 17 May 2021 08:48:51 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.0
MIME-Version: 1.0
In-Reply-To: <5744a347-282c-ff61-7507-03b7a1e9d4c9@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="3r2qIYjf4DevjROFZBuXZb59Kj1JbQ2AQ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--3r2qIYjf4DevjROFZBuXZb59Kj1JbQ2AQ
Content-Type: multipart/mixed; boundary="QhB45rtkKkDRUkzjfuIUtPO2RZ9DFAk1e";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <ae53addb-7e64-8a93-10fc-149f5ae4bea3@suse.com>
Subject: Re: [PATCH] tools/xenstore: simplify xenstored main loop
References: <20210514083905.18212-1-jgross@suse.com>
 <304944cf-ac92-be14-e088-1975ef073255@xen.org>
 <3be1937f-3cd9-3eb8-48fd-bc9c9a85c051@suse.com>
 <5744a347-282c-ff61-7507-03b7a1e9d4c9@xen.org>
In-Reply-To: <5744a347-282c-ff61-7507-03b7a1e9d4c9@xen.org>

--QhB45rtkKkDRUkzjfuIUtPO2RZ9DFAk1e
Content-Type: multipart/mixed;
 boundary="------------09DE16596C228E3025C1508A"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------09DE16596C228E3025C1508A
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 15.05.21 13:51, Julien Grall wrote:
> Hi Juergen,
> 
> On 14/05/2021 10:42, Juergen Gross wrote:
>> On 14.05.21 11:35, Julien Grall wrote:
>>>> -struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
>>>> +struct connection *new_connection(struct interface_funcs *funcs);
>>>>   struct connection *get_connection_by_id(unsigned int conn_id);
>>>>   void check_store(void);
>>>>   void corrupt(struct connection *conn, const char *fmt, ...);
>>>> @@ -254,9 +258,6 @@ void finish_daemonize(void);
>>>>   /* Open a pipe for signal handling */
>>>>   void init_pipe(int reopen_log_pipe[2]);
>>>> -int writefd(struct connection *conn, const void *data, unsigned int len);
>>>> -int readfd(struct connection *conn, void *data, unsigned int len);
>>>> -
>>>>   extern struct interface_funcs socket_funcs;
>>>
>>> Hmmm... I guess this change slipped into staging beforehand?
>>
>> No, I just forgot to make the functions static.
> 
> Hmmm... I am not sure how this is related to my question. What I meant is
> that the line "extern struct interface_funcs ..." doesn't have a '+' in
> front.

Oh, I misunderstood you.

> 
> If you look at the history, this was added by mistake in:
> 
> commit 2ea411bc2c0a5a4c7ab145270f1949630460e72b
> Author: Juergen Gross <jgross@suse.com>
> Date:   Wed Jan 13 14:00:20 2021 +0100

Yes, that seems to be the case. I had this patch lying around for some
time, and when the LU patches were moved further up in the queue this
line was apparently missed.


Juergen

--------------09DE16596C228E3025C1508A
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------09DE16596C228E3025C1508A--

--QhB45rtkKkDRUkzjfuIUtPO2RZ9DFAk1e--

--3r2qIYjf4DevjROFZBuXZb59Kj1JbQ2AQ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCiEdMFAwAAAAAACgkQsN6d1ii/Ey92
1AgAlalw3rHLwfJESZJzKZG93gplPQtm7s+yq/Qs/zC2K1r7H70dSY8nEjOlSWe6sjraqUrQtMFw
UYty2YPlQwRWc1aJW/pu/lV56PpZGgHigxOtj9bOwZGsXh30eiJngQAhwxXJ0OCTtNFQPNZ2BspU
iQMtrwuvJdYv2zAgo00WlGzA5/jcjEqmqhoSMtzf4O7zFMSiOBkuoW6aOXiHikfyPnmZ0Yv1zent
FlEW8PGrP0EYuMyehBFCjp+AoP/xaagFyAt1/EpSE9CtPQ52yeovC0/BbCzku+prCn1C6E9f01VP
RpwV18/AgBOMB16JNNE9kwNNfpDNsS7dkmy7mfhrrQ==
=IIU3
-----END PGP SIGNATURE-----

--3r2qIYjf4DevjROFZBuXZb59Kj1JbQ2AQ--


From xen-devel-bounces@lists.xenproject.org Mon May 17 06:54:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 06:54:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128071.240511 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liX8n-0003vh-CF; Mon, 17 May 2021 06:54:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128071.240511; Mon, 17 May 2021 06:54:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liX8n-0003va-8d; Mon, 17 May 2021 06:54:13 +0000
Received: by outflank-mailman (input) for mailman id 128071;
 Mon, 17 May 2021 06:54:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liX8m-0003vQ-0o; Mon, 17 May 2021 06:54:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liX8l-0005L5-LE; Mon, 17 May 2021 06:54:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liX8l-0002Gb-B2; Mon, 17 May 2021 06:54:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liX8l-0005gJ-AZ; Mon, 17 May 2021 06:54:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oTnHydhJUvpu89ito2JfZlwFOdm4tcORXsg1PcgpXMU=; b=dp+UdrUmNADxyjiAWrYvbssect
	jA1+zUoCN8gLSvS/3+/XHZSknaUDJlIXbaM41cr7CrkaWvgVz8sNaGhp+mA30dSNbRiuAIHca8QBZ
	PQjsHY331+TP92f7CQw7vUhiVFeBiZghewD3L174WIB0Kpd74dOy3YRyMP+pcMMx8IBc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161975-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 161975: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=df28ba289c7d74cda5a9022e1376bfb9cd18ebb7
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 06:54:11 +0000

flight 161975 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161975/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              df28ba289c7d74cda5a9022e1376bfb9cd18ebb7
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  311 days
Failing since        151818  2020-07-11 04:18:52 Z  310 days  303 attempts
Testing same since   161956  2021-05-15 04:20:06 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57600 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 17 07:01:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 07:01:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128078.240524 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liXG7-0005PR-5s; Mon, 17 May 2021 07:01:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128078.240524; Mon, 17 May 2021 07:01:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liXG7-0005PK-2r; Mon, 17 May 2021 07:01:47 +0000
Received: by outflank-mailman (input) for mailman id 128078;
 Mon, 17 May 2021 07:01:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liXG5-0005PD-Df
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 07:01:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a322092-0a55-4d25-ab2b-7265287aeb60;
 Mon, 17 May 2021 07:01:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 91391AAFD;
 Mon, 17 May 2021 07:01:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a322092-0a55-4d25-ab2b-7265287aeb60
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621234903; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rmL2HexsXj1JXtqIY/fFqK9Ye+EMy5r2sQuzR9Oyc7A=;
	b=PCgwvR2nkqPltXhRzCciNtXyCrL59qxz0c0BAaRSGbMOOMLO6tTBFDtNoR4J31Zs+Leqs5
	qKhiNBRNBmipi0YxjD0onx3AtP/Qn+BgpF5p/SIFZclOnm5oU5GQwPQL6pZBH+A1UNnTQ8
	wB86BuccTymr5tM2oE0q0BR3RevsiWw=
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Julien Grall <julien@xen.org>, Michal Orzel <michal.orzel@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com, xen-devel@lists.xenproject.org
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
 <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
 <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <54e845e1-f283-d70c-a0c2-73e768e5a56e@suse.com>
Date: Mon, 17 May 2021 09:01:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 12.05.2021 19:59, Julien Grall wrote:
> Hi,
> 
> On 11/05/2021 07:37, Michal Orzel wrote:
>> On 05.05.2021 10:00, Jan Beulich wrote:
>>> On 05.05.2021 09:43, Michal Orzel wrote:
>>>> --- a/xen/include/public/arch-arm.h
>>>> +++ b/xen/include/public/arch-arm.h
>>>> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>>>>   
>>>>       /* Return address and mode */
>>>>       __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
>>>> -    uint32_t cpsr;                              /* SPSR_EL2 */
>>>> +    uint64_t cpsr;                              /* SPSR_EL2 */
>>>>   
>>>>       union {
>>>> -        uint32_t spsr_el1;       /* AArch64 */
>>>> +        uint64_t spsr_el1;       /* AArch64 */
>>>>           uint32_t spsr_svc;       /* AArch32 */
>>>>       };
>>>
>>> This change affects, besides domctl, also default_initialise_vcpu(),
>>> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
>>> only allows two unrelated VCPUOP_* to pass, but then I don't
>>> understand why arch_initialise_vcpu() doesn't simply return e.g.
>>> -EOPNOTSUPP. Hence I suspect I'm missing something.
> 
> I think it was just overlooked when reviewing the following commit:
> 
> commit 192df6f9122ddebc21d0a632c10da3453aeee1c2
> Author: Roger Pau Monné <roger.pau@citrix.com>
> Date:   Tue Dec 15 14:12:32 2015 +0100
> 
>      x86: allow HVM guests to use hypercalls to bring up vCPUs
> 
>      Allow the usage of the VCPUOP_initialise, VCPUOP_up, VCPUOP_down,
>      VCPUOP_is_up, VCPUOP_get_physid and VCPUOP_send_nmi hypercalls from HVM
>      guests.
> 
>      This patch introduces a new structure (vcpu_hvm_context) that
>      should be used in conjunction with the VCPUOP_initialise hypercall
>      in order to initialize vCPUs for HVM guests.
> 
>      Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>      Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>      Reviewed-by: Jan Beulich <jbeulich@suse.com>
>      Acked-by: Ian Campbell <ian.campbell@citrix.com>
> 
> On Arm, the structure vcpu_guest_context is not exposed outside of Xen
> and the tools. Interestingly, vcpu_guest_core_regs is, but it should
> only be used within vcpu_guest_context.
> 
> So as this is not used by the stable ABI, it is fine to break it.
> 
>>>
>> I agree that do_arm_vcpu_op only allows two VCPUOP* to pass, and
>> calling arch_initialise_vcpu in the case of VCPUOP_initialise
>> makes no sense, as VCPUOP_initialise is not supported on arm.
>> It makes sense that it should return -EOPNOTSUPP.
>> However, do_arm_vcpu_op will not accept VCPUOP_initialise and will
>> return -EINVAL, so arch_initialise_vcpu for arm will not be called.
>> Do you think that changing this behaviour so that arch_initialise_vcpu returns
>> -EOPNOTSUPP should be part of this patch?
> 
> I think this change is unrelated. So it should be handled in a follow-up 
> patch.

My only difference in viewing this is that I'd say the adjustment
would better be a prereq patch to this one, such that the one here
ends up being more obviously correct. Also, if the function is
indeed not meant to be reachable, besides making it return
-EOPNOTSUPP (or the like), it should probably also have
ASSERT_UNREACHABLE() added.

Jan

> If you are taking care of this, would you mind also looking into moving 
> struct vcpu_guest_core_regs within the #if defined(__XEN__) || 
> defined(__XEN_TOOLS__)?
> 
> I will attempt to do a proper review of this patch by the end of the week.
> 
> Cheers,
> 



From xen-devel-bounces@lists.xenproject.org Mon May 17 07:16:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 07:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128087.240535 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liXTh-00076T-Kp; Mon, 17 May 2021 07:15:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128087.240535; Mon, 17 May 2021 07:15:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liXTh-00076M-Ht; Mon, 17 May 2021 07:15:49 +0000
Received: by outflank-mailman (input) for mailman id 128087;
 Mon, 17 May 2021 07:15:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liXTg-00076F-A4
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 07:15:48 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba2031d2-9de2-4329-85d7-05e958f99082;
 Mon, 17 May 2021 07:15:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 62796ADCE;
 Mon, 17 May 2021 07:15:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba2031d2-9de2-4329-85d7-05e958f99082
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621235745; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=y0qDswvG0WUKp8Q75+0gU3lwbXvDXvdVCWH5bc3s6oY=;
	b=c0MRyN/iPpRFzOiex28Ph14xZ42xNvkSaqTIYpREaE7nz9Bze23X4jUg4VafhVSUzG3cTT
	5qI9FfPCz2tYpKSe/UZHzhjUg4okKnqMxzfxHzTJ7/lgJnobk75LgzPBvsm8uOYhiEkeX5
	Cp7tFS1odOsX8t5OwD1AzS6euLJ3psM=
Subject: Re: Ping: [PATCH v5 0/6] evtchn: (not so) recent XSAs follow-on
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Julien Grall <julien@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
 <d29fa89b-ea0a-bdbd-04c9-02eff0854d47@suse.com>
 <40e90456-90dc-7932-68ec-6f4d0941999f@xen.org>
 <0e19fb4c-4a87-ff80-fa98-fab623d6704f@suse.com>
 <YJ6XVmadaDbP3aUx@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <19a976e1-a4cb-16a9-cd53-1b7198859ddd@suse.com>
Date: Mon, 17 May 2021 09:15:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJ6XVmadaDbP3aUx@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 14.05.2021 17:29, Roger Pau Monné wrote:
> On Thu, Apr 22, 2021 at 10:53:05AM +0200, Jan Beulich wrote:
>> On 21.04.2021 17:56, Julien Grall wrote:
>>>
>>>
>>> On 21/04/2021 16:23, Jan Beulich wrote:
>>>> On 27.01.2021 09:13, Jan Beulich wrote:
>>>>> These are grouped into a series largely because of their origin,
>>>>> not so much because there are (heavy) dependencies among them.
>>>>> The main change from v4 is the dropping of the two patches trying
>>>>> to do away with the double event lock acquires in interdomain
>>>>> channel handling. See also the individual patches.
>>>>>
>>>>> 1: use per-channel lock where possible
>>>>> 2: convert domain event lock to an r/w one
>>>>> 3: slightly defer lock acquire where possible
>>>>> 4: add helper for port_is_valid() + evtchn_from_port()
>>>>> 5: type adjustments
>>>>> 6: drop acquiring of per-channel lock from send_guest_{global,vcpu}_virq()
>>>>
>>>> Only patch 4 here has got an ack so far - may I ask for clear feedback
>>>> as to at least some of these being acceptable (I can see the last one
>>>> being controversial, and if this was the only one left I probably
>>>> wouldn't even ping, despite thinking that it helps reduce unnecessary
>>>> overhead).
>>>
>>> I left feedback for the series on the previous version (see [1]). It 
>>> would have been nice if it had been mentioned somewhere, as this is 
>>> still unresolved.
>>
>> I will admit I forgot about the controversy on patch 1. I did, however,
>> reply to your concerns. What didn't happen is the feedback from others
>> that you did ask for.
>>
>> And of course there are 4 more patches here (one of them having an ack,
>> yes) which could do with feedback. I'm certainly willing, where possible,
>> to further re-order the series such that controversial changes are at its
>> end.
> 
> I think it would be easier to figure out whether the changes are correct
> if we had some kind of documentation about what/how the per-domain
> event_lock and the per-event locks are supposed to be used. I don't
> seem to be able to find any comments regarding how they are to be
> used.

I think especially in pass-through code there are a number of cases
where the per-domain lock really is being abused, simply for being
available without much further thought. I'm not convinced documenting
such abuse is going to help the situation. Yet of course I can see
that having documentation would make review easier ...

> Regarding the changes themselves in patch 1 (which I think has caused part
> of the controversy here), I'm unsure they are worth it because the
> functions modified all seem to be non-performance critical:
> evtchn_status, domain_dump_evtchn_info, flask_get_peer_sid.
> 
> So I would say that unless we have clear rules written down for what
> the per-domain event_lock protects, I would be hesitant to change any
> of the logic, especially for critical paths.

Okay, I'll drop patch 1 and also patch 6 for being overly controversial.
Some of the other patches still look worthwhile to me, though. I'll
also consider moving the spin->r/w lock conversion last in the series.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 07:33:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 07:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128093.240547 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liXkn-00011r-51; Mon, 17 May 2021 07:33:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128093.240547; Mon, 17 May 2021 07:33:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liXkn-00011k-1i; Mon, 17 May 2021 07:33:29 +0000
Received: by outflank-mailman (input) for mailman id 128093;
 Mon, 17 May 2021 07:33:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liXkl-00011e-LS
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 07:33:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cfab9a4d-a304-4965-8c03-bf02bdc275a1;
 Mon, 17 May 2021 07:33:26 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9E385AEC5;
 Mon, 17 May 2021 07:33:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cfab9a4d-a304-4965-8c03-bf02bdc275a1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621236805; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=24Z/UW0NpvL4JqusYZlmgtoTjXyjdYgR/Gpy6zpGA+o=;
	b=vSsm2BzQPq4+Xaq1c2uTlCwIZM8THIWuVZOTH3YTcnKBRwjwui7CP4Xx8t9+66YSi0VPYy
	p2ZamSZzoTshLGX2KHLgmOWQyGGiM5rJMxPFCfkVO3Qxp89QMwk2dHWvwIWufqWZyTdP3n
	saTJFtZ/qigFA2ZqrZRvVafsMEmGj5I=
Subject: Re: [PATCH v3 03/22] x86/xstate: re-size save area when CPUID policy
 changes
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: George Dunlap <george.dunlap@citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <322de6db-e01f-0b57-5777-5d94a13c441a@suse.com>
 <8ba8f016-0aed-277b-bbea-80022d057791@suse.com>
 <5a954be8-e213-36d8-27da-4c51243dc280@citrix.com>
 <f515fdfb-d1a6-56d8-5db3-ebddeed23806@suse.com>
 <f16afc8a-ccd4-7e5e-e08d-d96597c6e8ab@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <aa5a5495-d0e9-4951-f2ed-c87121b1dfe8@suse.com>
Date: Mon, 17 May 2021 09:33:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <f16afc8a-ccd4-7e5e-e08d-d96597c6e8ab@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 11.05.2021 18:41, Andrew Cooper wrote:
> On 03/05/2021 15:22, Jan Beulich wrote:
>>> Another consequence is that we need to rethink our hypercall behaviour. 
>>> There is no such thing as supervisor states in an uncompressed XSAVE
>>> image, which means we can't continue with that being the ABI.
>> I don't think the hypercall input / output blob needs to follow any
>> specific hardware layout.
> 
> Currently, the blob is { xcr0, xcr0_accum, uncompressed image }.
> 
> As we haven't supported any compressed states yet, we are at liberty to
> create a forward compatible change by logically s/xcr0/xstate/ and
> permitting an uncompressed image.
> 
> Irritatingly, we have xcr0=0 as a permitted state and out in the field,
> for "no xsave state".  This contributes a substantial quantity of
> complexity in our xstate logic, and invalidates the easy fix I had for
> not letting the HVM initpath explode.
> 
> The first task is to untangle the non-architectural xcr0=0 case, and to
> support compressed images.  Size parsing needs to be split into two, as
> for compressed images, we need to consume XSTATE_BV and XCOMP_BV to
> cross-check the size.

Not sure about the need to eliminate the xcr0=0 (or xstates=0) case.
That isn't to say I'm opposed, if you want to do so and it's not
overly intrusive.

> I think we also want a rule that Xen will always send compressed if it
> is using XSAVES (/XSAVEC in the interim?)

If this is sufficiently neutral to tool stack code, why not (albeit
I don't think there needs to be a "rule" - Xen should be free to
provide what it deems best, with consumers in a position to easily
recognize the format; similarly Xen should consume whatever it
gets handed, as long as that's valid state). Luckily the layout is
visible only through tool-stack-only interfaces.

>  We do not want to be working
> with uncompressed images at all, now that MPX is a reasonable sized hole
> in the middle.

Together they're no larger (128 bytes) than the LWP hole right ahead
of them (at 0x340). I agree avoiding the uncompressed format is
worthwhile, but perhaps quite a bit more so on systems where
higher-numbered components follow unavailable, even bigger ones.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 08:11:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 08:11:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128101.240558 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liYL9-0005iE-8z; Mon, 17 May 2021 08:11:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128101.240558; Mon, 17 May 2021 08:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liYL9-0005i7-5i; Mon, 17 May 2021 08:11:03 +0000
Received: by outflank-mailman (input) for mailman id 128101;
 Mon, 17 May 2021 08:11:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liYL7-0005hw-7i; Mon, 17 May 2021 08:11:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liYL7-00078t-0i; Mon, 17 May 2021 08:11:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liYL6-0005OE-NK; Mon, 17 May 2021 08:11:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liYL6-0005LU-Mn; Mon, 17 May 2021 08:11:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vlCWYWkppZS9LhftMNvDchEn78ngbUs2M6yGCxzwjlY=; b=UjiCiYGi0W8BkfQv+CK3TBPTv4
	DhMbCG9MGlhy6GO8rqu/ylPfTOStKXf1a/mBnPtZvY4LNYUcRs0C9ZArDmeyxbt9oxqRrVmyBQ2al
	OnYIlF8O4f99nVnkK0OTxHe5gDjy++IiT0bFM02pOjmOIXawsyyiZyowZVYoi0x+soj4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161972-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161972: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d07f6ca923ea0927a1024dfccafc5b53b61cfecc
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 08:11:00 +0000

flight 161972 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161972/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d07f6ca923ea0927a1024dfccafc5b53b61cfecc
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  289 days
Failing since        152366  2020-08-01 20:49:34 Z  288 days  485 attempts
Testing same since   161972  2021-05-17 00:12:03 Z    0 days    1 attempts

------------------------------------------------------------
6063 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1645314 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 17 08:43:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 08:43:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128112.240572 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liYqf-0000jz-3b; Mon, 17 May 2021 08:43:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128112.240572; Mon, 17 May 2021 08:43:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liYqf-0000js-0T; Mon, 17 May 2021 08:43:37 +0000
Received: by outflank-mailman (input) for mailman id 128112;
 Mon, 17 May 2021 08:43:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liYqe-0000jl-BX
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 08:43:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 5898dc21-1079-4a96-8941-46827d273861;
 Mon, 17 May 2021 08:43:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id ED1FEAF0E;
 Mon, 17 May 2021 08:43:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5898dc21-1079-4a96-8941-46827d273861
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621241013; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=aIUSYbFhtCQA3XTRYSu8x2xKo88VfWD2v5wur0/1FQs=;
	b=Z63+bmFglkWq6CbeWJmy4rltlpG1mPkn36fT72mHCcnAKY2BDDHt4HkXdbIG7ospb0KmsU
	wGToXFWMKazNnh2UfeN9GAZYsVllcXH7Z+0YzVylDDQeLxyGhtgXEdRuSacZubZP7dWXlp
	fN51papuLHRxbVWlSFhWg18Za8yhlPA=
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
To: Andrew Cooper <andrew.cooper3@citrix.com>
References: <osstest-161917-mainreport@xen.org>
 <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
Cc: osstest service owner <osstest-admin@xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e8fae962-1a5b-cc91-d429-a716b009f062@suse.com>
Date: Mon, 17 May 2021 10:43:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 13.05.2021 22:15, Andrew Cooper wrote:
> On 13/05/2021 04:56, osstest service owner wrote:
>> Tests which are failing intermittently (not blocking):
>>  test-xtf-amd64-amd64-3 92 xtf/test-pv32pae-xsa-286 fail in 161909 pass in 161917
> 
> While noticing the ARM issue above, I also spotted this one by chance. 
> There are two issues.
> 
> First, I have reverted bed7e6cad30 and edcfce55917.  The XTF test is
> correct, and they really do reintroduce XSA-286.  It is a miracle of
> timing that we don't need an XSA/CVE against Xen 4.15.

I have to admit that from the description in the revert (on top of
what you say here) it is not really clear to me what is
wrong with _either_ of these changes:

"The TLB flushing is for Xen's correctness, not the guest's."

XSA-286 was solely about guest correctness, which was broken by Xen's
behavior. Hence we're still only talking about guest observable
behavior here.

"The text in c/s bed7e6cad30 is technically correct, from the guests
 point of view, but clearly false as far as XSA-286 is concerned."

As a result I also don't understand this, nor the actual reason why
you did revert both, rather than just ...

"That said, it is edcfce55917 which introduced the regression, which
 demonstrates that the reasoning is flawed."

... this. Furthermore you merely state an observation here, without
going into any detail as to what's wrong with the reasoning, and
hence why it is the change that's wrong and the test that's correct
(and no issue elsewhere). Don't get me wrong - I'm not excluding
you're right, but you fail to explain things properly. I can't see
how avoiding a flush for a page table which isn't hooked up anywhere
(and which hence isn't accessible via lookups through the linear
page tables) can have caused a problem (except perhaps uncover an
issue, e.g. a missing flush, elsewhere). Nor can I see how the XTF
test would trigger the flush avoidance, as it doesn't play with
free floating page tables. Plus this change affects 64-bit guests
as much as 32-bit ones, yet no (apparent) regression could be seen
there.

Similarly for the other change: Since only guest perspective matters,
the flush ought to be fine to defer until the guest actually reloads
CR3; until then using either the stale or updated linear page tables
is acceptable, and guests need to not rely on either, just like would
be the case on bare metal (and there it's even stronger: an OS can
rely upon the prior page tables to continue to be used, as the PDPTEs
get reloaded _only_ during CR3 loads; mimicking this for PV would be
not exactly trivial, I think). And I notice that the XTF test
exercises an L3 entry update without a subsequent CR3 write, which
is wrong for PAE. (I therefore suspect it is bed7e6cad30 which has
caused the test failure, not edcfce55917 as you have said in the
description of the revert.)

> Given that I was unhappy with the changes in the first place, I don't
> particularly want to see an attempt to resurrect them.  I did not find
> the claim that they were a perf improvement in the first place very
> convincing, and the XTF test demonstrates that the reasoning about their
> safety was incorrect.

Interesting: Where did you voice your unhappiness? All I can find on
that entire series' thread is a reply of yours on a post-commit-
message remark regarding a comment you had introduced with the 286
fix. All other discussion there was between Roger and me.

Additionally I don't see why you treated this as an emergency and
reverted without posting a patch and getting an ack.

> Second, the unexplained OSSTest behaviour.
> 
> When I repro'd this on pinot1, test-pv32pae-xsa-286 failing was totally
> deterministic and repeatable (I tried 100 times because the test is a
> fraction of a second).
> 
> From the log trawling which Ian already did, the first recorded failure
> was flight 160912 on April 11th.  All failures (12, but this number is a
> few flights old now) were on pinot*.
> 
> What would be interesting to see is whether there have been any passes
> on pinot since 160912.
> 
> I can't see any reason why the test would be reliable for me, but
> unreliable for OSSTest, so I'm wondering whether it is actually
> reliable, and something is wrong with the stickiness heuristic.

Isn't (un)reliability of this test, besides the sensitivity to IRQs
and context switches, tied to hardware behavior, in particular TLB
capacity and replacement policy? Aiui the test has

    xtf_success("Success: Probably not vulnerable to XSA-286\n");

for the combination of all of these reasons.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 09:27:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 09:27:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128121.240583 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liZX6-00051U-Mc; Mon, 17 May 2021 09:27:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128121.240583; Mon, 17 May 2021 09:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liZX6-00051N-I9; Mon, 17 May 2021 09:27:28 +0000
Received: by outflank-mailman (input) for mailman id 128121;
 Mon, 17 May 2021 09:27:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liZX5-00051H-2q
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 09:27:27 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f1ad6a74-381c-4322-bc71-74a7ff25a2eb;
 Mon, 17 May 2021 09:27:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 36D7DAFF8;
 Mon, 17 May 2021 09:27:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f1ad6a74-381c-4322-bc71-74a7ff25a2eb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621243644; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7TuSZq57slPhRjEpsOagedplSNHlE1CPas+IHMHEAuA=;
	b=evwleRH3g7PteA9yos1s6bIyXReBgnSIPuZiUso5pI0dPrEph8MBjaVaxcoFsCy1Oh3HJo
	BZMRpgAR+jh6hJkHY18ezsZRPxgysm8gql7H/0lmE0PU8fHBUCHIEwS51PuF/JhD1Z89W2
	u9O7FvGfdXQFKFSbp5w2HfUgw6nzs/0=
Subject: Re: [PATCH] drivers/char: Add USB3 debug capability driver
To: Connor Davis <connojdavis@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <9a6a15ebc538105c83be88883ab3a7125ed52d37.1620776791.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <912fa28c-5cb3-cf40-00db-19423c442da3@suse.com>
Date: Mon, 17 May 2021 11:27:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <9a6a15ebc538105c83be88883ab3a7125ed52d37.1620776791.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.05.2021 02:12, Connor Davis wrote:
> Add support for the xHCI debug capability (DbC). The DbC provides
> a SuperSpeed serial link between a debug target running Xen and a
> debug host. To use it you will need a USB3 A/A debug cable plugged into
> a root port on the target machine. Recent kernels report the existence
> of the DbC capability at
> 
>   /sys/kernel/debug/usb/xhci/<seg>:<bus>:<slot>.<func>/reg-ext-dbc:00
> 
> The host machine should have the usb_debug.ko module on the system
> (the cable can be plugged into any port on the host side). After the
> host usb_debug enumerates the DbC, it will create a /dev/ttyUSB<n> file
> that can be used with things like minicom.
> 
> To use the DbC as a console, pass `console=dbgp dbgp=xhci`
> on the xen command line. This will select the first host controller
> found that implements the DbC. Other variations to 'dbgp=' are accepted,
> please see xen-command-line.pandoc for more. Remote GDB is also supported
> with `gdb=dbgp dbgp=xhci`. Note that to see output and/or provide input
> after dom0 starts, DMA remapping of the host controller must be
> disabled.
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  MAINTAINERS                       |    6 +
>  docs/misc/xen-command-line.pandoc |   19 +-
>  xen/arch/x86/Kconfig              |    1 -
>  xen/arch/x86/setup.c              |    1 +
>  xen/drivers/char/Kconfig          |   15 +
>  xen/drivers/char/Makefile         |    1 +
>  xen/drivers/char/xhci-dbc.c       | 1122 +++++++++++++++++++++++++++++
>  xen/drivers/char/xhci-dbc.h       |  621 ++++++++++++++++

What is the reason for needing the separate header? Isn't / can't all
logic (be) contained within xhci-dbc.c? If this is because you clone
code from elsewhere (as the initial comment in the files seems to
suggest), it might be a good idea for the description to say so.
Depending on the origin and possible plans to keep our clone in sync
down the road, undoing this separation as well as correction of certain
style issues (which I'm not going to try to spot consistently just yet)
may then not be worth requesting. Otoh the files look to have been converted
to Xen style, so direct syncing may not be a goal.

> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -714,9 +714,26 @@ Available alternatives, with their meaning, are:
>  ### dbgp
>  > `= ehci[ <integer> | @pci<bus>:<slot>.<func> ]`
>  
> -Specify the USB controller to use, either by instance number (when going
> +Specify the EHCI USB controller to use, either by instance number (when going
>  over the PCI busses sequentially) or by PCI device (must be on segment 0).
>  
> +If you have a system with an xHCI USB controller that supports the Debug
> +Capability (DbC), you can use
> +
> +> `= xhci[ <integer> | @pci<bus>:<slot>.<func> ]`
> +
> +To use this, you need a USB3 A/A debugging cable plugged into a SuperSpeed
> +root port on the target machine. Recent kernels expose the existence of the
> +DbC at /sys/kernel/debug/usb/xhci/<seg>:<bus>:<slot>.<func>/reg-ext-dbc:00.
> +Note that to see output and process input after dom0 is started, you need to
> +ensure that the host controller's DMA is not remapped (e.g. with
> +dom0-iommu=passthrough).

This is a relatively bad limitation imo - people had better not get
used to using passthrough mode, and debugging other IOMMU modes (for
Dom0) may then be impossible altogether.

> --- a/xen/arch/x86/Kconfig
> +++ b/xen/arch/x86/Kconfig
> @@ -11,7 +11,6 @@ config X86
>  	select HAS_ALTERNATIVE
>  	select HAS_COMPAT
>  	select HAS_CPUFREQ
> -	select HAS_EHCI

Why?

> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -63,6 +63,21 @@ config HAS_SCIF
>  config HAS_EHCI
>  	bool
>  	depends on X86
> +        default y if !HAS_XHCI_DBC

Again, why? The two drivers shouldn't be exclusive of one another.

Also please note the mixture of indentation you introduce.

>  	help
>  	  This selects the USB based EHCI debug port to be used as a UART. If
>  	  you have an x86 based system with USB, say Y.
> +
> +config HAS_XHCI_DBC
> +	bool "xHCI Debug Capability driver"

A setting name HAS_* wouldn't normally have a prompt.

> +	depends on X86 && HAS_PCI
> +	help
> +	  This selects the xHCI Debug Capability to be used as a UART.
> +
> +config XHCI_FIXMAP_PAGES
> +        int "Number of fixmap entries to allocate for the xHC"
> +	depends on HAS_XHCI_DBC
> +        default 16
> +        help
> +          This should equal the size (in 4K pages) of the first 64-bit
> +          BAR of the host controller in which the DbC is being used.

Again - please use consistent (in itself as well as with the rest of
the file) indentation.
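For illustration, the two entries from the hunk with the file's usual
indentation (tab for option lines, tab plus two spaces for help text) -
a sketch only, reusing the names from the patch as posted:

```kconfig
config HAS_XHCI_DBC
	bool "xHCI Debug Capability driver"
	depends on X86 && HAS_PCI
	help
	  This selects the xHCI Debug Capability to be used as a UART.

config XHCI_FIXMAP_PAGES
	int "Number of fixmap entries to allocate for the xHC"
	depends on HAS_XHCI_DBC
	default 16
	help
	  This should equal the size (in 4K pages) of the first 64-bit
	  BAR of the host controller in which the DbC is being used.
```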

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 09:31:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 09:31:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128126.240594 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liZbR-0006QQ-9E; Mon, 17 May 2021 09:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128126.240594; Mon, 17 May 2021 09:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liZbR-0006QI-43; Mon, 17 May 2021 09:31:57 +0000
Received: by outflank-mailman (input) for mailman id 128126;
 Mon, 17 May 2021 09:31:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liZbP-0006QC-Nn
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 09:31:55 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7d0e0010-f48c-4ade-bb2c-dedf2fceadaf;
 Mon, 17 May 2021 09:31:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 372A5B05C;
 Mon, 17 May 2021 09:31:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7d0e0010-f48c-4ade-bb2c-dedf2fceadaf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621243914; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=tA3XRwdT8ok6d1059+xvA8E+I4telJ7GeyBKUaR4o14=;
	b=mAhY5yhHPYD3W2hP39/fxKfs6wNBqpydeAo2pLbRj/XbaPdsPUkMCt6xWN9nmg3GUvnzvm
	wr5LGulJL3xSZQwJhPg0+IBh/7/6VEr9qNracpZIJ+ImNiIFacbG2ajapD5UzViYA/wdy1
	EEOK7ksb34J50ytZqhi9FMnaPebwk5s=
Subject: Re: [PATCH v2 1/4] usb: early: Avoid using DbC if already enabled
To: Connor Davis <connojdavis@gmail.com>
Cc: Jann Horn <jannh@google.com>, Lee Jones <lee.jones@linaro.org>,
 Chunfeng Yun <chunfeng.yun@mediatek.com>, linux-usb@vger.kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
References: <cover.1620950220.git.connojdavis@gmail.com>
 <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8ccce25a-e3ca-cb30-f6a3-f9243a85a49b@suse.com>
Date: Mon, 17 May 2021 11:32:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.05.2021 02:56, Connor Davis wrote:
> Check if the debug capability is enabled in early_xdbc_parse_parameter,
> and if it is, return with an error. This avoids collisions with whatever
> enabled the DbC prior to linux starting.

Doesn't this go too far and prevent use even if firmware (perhaps
mistakenly) left it enabled?

Jan

> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  drivers/usb/early/xhci-dbc.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
> index be4ecbabdd58..ca67fddc2d36 100644
> --- a/drivers/usb/early/xhci-dbc.c
> +++ b/drivers/usb/early/xhci-dbc.c
> @@ -642,6 +642,16 @@ int __init early_xdbc_parse_parameter(char *s)
>  	}
>  	xdbc.xdbc_reg = (struct xdbc_regs __iomem *)(xdbc.xhci_base + offset);
>  
> +	if (readl(&xdbc.xdbc_reg->control) & CTRL_DBC_ENABLE) {
> +		pr_notice("xhci debug capability already in use\n");
> +		early_iounmap(xdbc.xhci_base, xdbc.xhci_length);
> +		xdbc.xdbc_reg = NULL;
> +		xdbc.xhci_base = NULL;
> +		xdbc.xhci_length = 0;
> +
> +		return -ENODEV;
> +	}
> +
>  	return 0;
>  }
>  
> 



From xen-devel-bounces@lists.xenproject.org Mon May 17 09:36:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 09:36:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128133.240604 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liZfo-0007A4-PH; Mon, 17 May 2021 09:36:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128133.240604; Mon, 17 May 2021 09:36:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liZfo-00079x-MN; Mon, 17 May 2021 09:36:28 +0000
Received: by outflank-mailman (input) for mailman id 128133;
 Mon, 17 May 2021 09:36:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liZfo-00079r-6p
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 09:36:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liZfl-00006B-DE; Mon, 17 May 2021 09:36:25 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liZfl-0005Oh-6j; Mon, 17 May 2021 09:36:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=J1WSNSzJeyuM+ouj28g9EeljK31fGQKeT+wbfTEyemw=; b=wpF/0NPYNuowfHSAn5O3Vl9Mkh
	YlLAwL3UomNc1O9kqxzpjrQwc8LjsuciKtpdsdGjJ5l6z74xXL2WU2P6YlPw65K14g5N4xLtM678g
	Ino7uSz3FP/OLaJ8TtsY4x8YYfhKNVWOGmgwAXk98V6EYFtKo7W1kvcfHk8oh7URJSuk=;
Subject: Re: [PATCH] drivers/char: Add USB3 debug capability driver
To: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9a6a15ebc538105c83be88883ab3a7125ed52d37.1620776791.git.connojdavis@gmail.com>
From: Julien Grall <julien@xen.org>
Message-ID: <46931334-d4a8-eb89-0b81-727ff30c0ec0@xen.org>
Date: Mon, 17 May 2021 10:36:22 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <9a6a15ebc538105c83be88883ab3a7125ed52d37.1620776791.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Connor,

On 12/05/2021 01:12, Connor Davis wrote:
> +config XHCI_FIXMAP_PAGES
> +	int "Number of fixmap entries to allocate for the xHC"
> +	depends on HAS_XHCI_DBC
> +	default 16
> +	help
> +	  This should equal the size (in 4K pages) of the first 64-bit
> +	  BAR of the host controller in which the DbC is being used.

Why can't you use ioremap() for the new serial controller? Is this 
going to be used by Xen very early?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 09:38:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 09:38:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128138.240615 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liZhH-0007lu-4Z; Mon, 17 May 2021 09:37:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128138.240615; Mon, 17 May 2021 09:37:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liZhH-0007ln-1h; Mon, 17 May 2021 09:37:59 +0000
Received: by outflank-mailman (input) for mailman id 128138;
 Mon, 17 May 2021 09:37:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liZhG-0007lR-40
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 09:37:58 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49b7c9ea-3462-4fcf-b705-60f99bbd78f1;
 Mon, 17 May 2021 09:37:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9039B179;
 Mon, 17 May 2021 09:37:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49b7c9ea-3462-4fcf-b705-60f99bbd78f1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621244276; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cupozExXSDEk2x6F4/kVlSk0FfTDS14py9FNXF+6+qY=;
	b=JwPk3ZUf5cV6QirZEXyDbEucNXJPmCQDyoElMNdcFu5z8Aq6Nw9ANdW+BcIM7Gcxwhd1gM
	Mg5wzs7HCj2rydCJjGCg8uqIrlO4+/RNtE0jHTkJnEm33rsgspndz0VghOgyRLswAo5zsB
	dk5oXo7kbT5z2Sq1YNHkOwDPDFb4f+w=
Subject: Re: [PATCH] include/public: add RING_RESPONSE_PROD_OVERFLOW macro
To: Juergen Gross <jgross@suse.com>
References: <20210512164800.26236-1-jgross@suse.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <26985871-d70d-2184-c27e-fe161127de5f@suse.com>
Date: Mon, 17 May 2021 11:38:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210512164800.26236-1-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 12.05.2021 18:48, Juergen Gross wrote:
> Add a new RING_RESPONSE_PROD_OVERFLOW() macro for being able to
> detect an ill-behaved backend tampering with the response producer
> index.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Acked-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/include/public/io/ring.h
> +++ b/xen/include/public/io/ring.h
> @@ -259,6 +259,10 @@ typedef struct __name##_back_ring __name##_back_ring_t
>  #define RING_REQUEST_PROD_OVERFLOW(_r, _prod)                           \
>      (((_prod) - (_r)->rsp_prod_pvt) > RING_SIZE(_r))
>  
> +/* Ill-behaved backend determination: Can there be this many responses? */
> +#define RING_RESPONSE_PROD_OVERFLOW(_r, _prod)                          \
> +    (((_prod) - (_r)->rsp_cons) > RING_SIZE(_r))
> +
>  #define RING_PUSH_REQUESTS(_r) do {                                     \
>      xen_wmb(); /* back sees requests /before/ updated producer index */ \
>      (_r)->sring->req_prod = (_r)->req_prod_pvt;                         \
> 



From xen-devel-bounces@lists.xenproject.org Mon May 17 10:54:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 10:54:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128146.240627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liaso-0006nu-Sz; Mon, 17 May 2021 10:53:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128146.240627; Mon, 17 May 2021 10:53:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liaso-0006nn-Pe; Mon, 17 May 2021 10:53:58 +0000
Received: by outflank-mailman (input) for mailman id 128146;
 Mon, 17 May 2021 10:53:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liasn-0006nh-Fb
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 10:53:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b5e337f3-7c88-491c-aa49-cc69f58fba63;
 Mon, 17 May 2021 10:53:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A807BB039;
 Mon, 17 May 2021 10:53:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b5e337f3-7c88-491c-aa49-cc69f58fba63
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621248834; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=oNLlcpSq0NAE2LV/drYmUJJTMCioiwFltcXsOAXuNIM=;
	b=nRsxuN4XtAfL7Q2P82LJNHU0XTAnfJ36XojWj0QrhStsXoMxj7AE/Qk2sCqLwDlPv0U/iZ
	29gwgTy7L9dt5rhjhriXMQVsD8PnLHPAAu+ZKZkP7qUsUnuIxij8zt9jOoNL3oMdvNSFql
	YoHD1aQ4g78Vw6QQJXQdzUg1Wut5cWk=
Subject: Re: regression in recent pvops kernels, dom0 crashes early
To: Olaf Hering <olaf@aepfle.de>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7abb3c8f-4a9b-700b-5c0c-dc6f42336eab@suse.com>
Date: Mon, 17 May 2021 12:54:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210513122457.4182eb7f.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.05.2021 12:24, Olaf Hering wrote:
> Recent pvops dom0 kernels fail to boot on this particular ProLiant BL465c G5 box.
> It happens to work with every Xen and a 4.4-based sle12sp3 kernel, but fails with every Xen and a 4.12-based sle12sp4 (and every newer) kernel.
> 
> Any idea what is going on?
> 
> ....
> (XEN) Freed 256kB init memory.
> (XEN) mm.c:1758:d0 Bad L1 flags 800000
> (XEN) traps.c:458:d0 Unhandled invalid opcode fault/trap [#6] on VCPU 0 [ec=0000]
> (XEN) domain_crash_sync called from entry.S: fault at ffff82d08022a2a0 create_bounce_frame+0x133/0x143
> (XEN) Domain 0 (vcpu#0) crashed on cpu#0:
> (XEN) ----[ Xen-4.4.20170405T152638.6bf0560e12-9.xen44  x86_64  debug=y  Not tainted ]----
> ....
> 
> ....
> (XEN) Freed 656kB init memory
> (XEN) mm.c:2165:d0v0 Bad L1 flags 800000
> (XEN) d0v0 Unhandled invalid opcode fault/trap [#6, ec=ffffffff]
> (XEN) domain_crash_sync called from entry.S: fault at ffff82d04031a016 x86_64/entry.S#create_bounce_frame+0x15d/0x177
> (XEN) Domain 0 (vcpu#0) crashed on cpu#5:
> (XEN) ----[ Xen-4.15.20210504T145803.280d472f4f-6.xen415  x86_64  debug=y  Not tainted ]----
> ....
> 
> I can probably cycle through all kernels between 4.4 and 4.12 to see where it broke.

I didn't try to figure out where exactly it broke, but could you give the
patch below a try, perhaps on top of my almost-one-year-old submission at
https://lkml.org/lkml/2020/5/27/1035?

Jan

x86/Xen: swap NX determination and GDT setup on BSP

xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
For this to work when NX is not available, x86_configure_nx() needs to
be called first.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1262,16 +1262,16 @@ asmlinkage __visible void __init xen_sta
 	/* Get mfn list */
 	xen_build_dynamic_phys_to_machine();
 
+	/* Work out if we support NX */
+	get_cpu_cap(&boot_cpu_data);
+	x86_configure_nx();
+
 	/*
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
 	xen_setup_gdt(0);
 
-	/* Work out if we support NX */
-	get_cpu_cap(&boot_cpu_data);
-	x86_configure_nx();
-
 	/* Determine virtual and physical address sizes */
 	get_cpu_address_sizes(&boot_cpu_data);
 


From xen-devel-bounces@lists.xenproject.org Mon May 17 10:54:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 10:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128149.240638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liatE-0007Jg-9d; Mon, 17 May 2021 10:54:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128149.240638; Mon, 17 May 2021 10:54:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liatE-0007JZ-5N; Mon, 17 May 2021 10:54:24 +0000
Received: by outflank-mailman (input) for mailman id 128149;
 Mon, 17 May 2021 10:54:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liatD-0007JI-Eo; Mon, 17 May 2021 10:54:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liatD-0001Rn-8o; Mon, 17 May 2021 10:54:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liatC-0004dX-TZ; Mon, 17 May 2021 10:54:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liatC-0003i1-T1; Mon, 17 May 2021 10:54:22 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sSHo1MjU5JoYrH1njg82cGZwNBR+yyvlLwc6se+H7AM=; b=Ff5HJKleEiGdyjwuw0hBWGzU75
	7CMChR7Fe7XU2VDzyD2zT7jsj+ZnjNls1rLy+FFWSKmSlXYRsJgoCLlG2rK9e9VSQ5zF/sqQEhWW2
	UJWz+YpJwlwXro+eg6grlb4WDFRXMzvbaqihbeKhuWW7lc3N3KRemMd2dSzf0fj8Pgtk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161974-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161974: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e0cb5e1814a67bb12dd476a72d1698350633bcbb
X-Osstest-Versions-That:
    ovmf=32928415e36b3e234efb5c24143e06060a68fba3
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 10:54:22 +0000

flight 161974 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161974/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e0cb5e1814a67bb12dd476a72d1698350633bcbb
baseline version:
 ovmf                 32928415e36b3e234efb5c24143e06060a68fba3

Last test of basis   161952  2021-05-14 21:40:05 Z    2 days
Testing same since   161974  2021-05-17 02:40:16 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Rebecca Cran <rebecca@nuviainc.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   32928415e3..e0cb5e1814  e0cb5e1814a67bb12dd476a72d1698350633bcbb -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 17 10:59:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 10:59:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128162.240651 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liaxi-00089i-RE; Mon, 17 May 2021 10:59:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128162.240651; Mon, 17 May 2021 10:59:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liaxi-00089b-ON; Mon, 17 May 2021 10:59:02 +0000
Received: by outflank-mailman (input) for mailman id 128162;
 Mon, 17 May 2021 10:59:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liaxh-000892-9G
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 10:59:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0ed95dc4-2799-4b56-b405-4659370b0ae7;
 Mon, 17 May 2021 10:58:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 31039AE89;
 Mon, 17 May 2021 10:58:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0ed95dc4-2799-4b56-b405-4659370b0ae7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621249134; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PRce/xL7hdszTFBxalqPXo3Pbt1l/3oKPkY+6zNLqOY=;
	b=CRkUpWSgK0JhPft+XYOuXWj+QkwoTvfTXDQvOII5GP5o8wKdRfcwgnVLgRejhbv4ZfMIyX
	/d5kcB+NynVdP2Xo3uCvk6BGZqOmojj7/Zor+YzSQDeZZO9f6T0DzlrBR+VJyYJhYSWOxk
	LTUamU+dIEs5juTtRJNpC48j+bjmmnk=
Subject: Re: Regressed XSA-286, was [xen-unstable test] 161917: regressions -
 FAIL
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: osstest service owner <osstest-admin@xenproject.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>
References: <osstest-161917-mainreport@xen.org>
 <7cfa28ae-2fbe-0945-8a6c-a965ec52155f@citrix.com>
 <e8fae962-1a5b-cc91-d429-a716b009f062@suse.com>
Message-ID: <f365c8f9-362c-f7f5-6ed4-34131a1dce63@suse.com>
Date: Mon, 17 May 2021 12:59:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <e8fae962-1a5b-cc91-d429-a716b009f062@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.05.2021 10:43, Jan Beulich wrote:
> On 13.05.2021 22:15, Andrew Cooper wrote:
>> Second, the unexplained OSSTest behaviour.
>>
>> When I repro'd this on pinot1, test-pv32pae-xsa-286 failing was totally
>> deterministic and repeatable (I tried 100 times, because the test takes
>> only a fraction of a second).
>>
>> From the log trawling which Ian already did, the first recorded failure
>> was flight 160912 on April 11th.  All failures (12, but this number is a
>> few flights old now) were on pinot*.
>>
>> What would be interesting to see is whether there have been any passes
>> on pinot since 160912.
>>
>> I can't see any reason why the test would be reliable for me, but
>> unreliable for OSSTest, so I'm wondering whether it is actually
>> reliable, and something is wrong with the stickiness heuristic.
> 
> Isn't (un)reliability of this test, besides the sensitivity to IRQs
> and context switches, tied to hardware behavior, in particular TLB
> capacity and replacement policy? Aiui the test has
> 
>     xtf_success("Success: Probably not vulnerable to XSA-286\n");
> 
> for the combination of all of these reasons.

I've just done a dozen runs on my Skylake - all reported SUCCESS.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 11:09:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 11:09:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128168.240663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lib7S-0001E1-QT; Mon, 17 May 2021 11:09:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128168.240663; Mon, 17 May 2021 11:09:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lib7S-0001Dt-NA; Mon, 17 May 2021 11:09:06 +0000
Received: by outflank-mailman (input) for mailman id 128168;
 Mon, 17 May 2021 11:09:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lib7R-0001Cj-Kl
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 11:09:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12ccbfdf-e298-4f82-aa07-e4f375556a82;
 Mon, 17 May 2021 11:09:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CC31FACAD;
 Mon, 17 May 2021 11:09:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12ccbfdf-e298-4f82-aa07-e4f375556a82
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621249743; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NjEdIiDKtNAoSDdn93yV72T7E4qq1qeSCiIJdfWX3Qs=;
	b=cW0LduHsIJP07rJWe4sxJtAM24z3+Yg+X9VJV4cmz/vbHZJrPcu9pslEAdy5142t2sCELM
	kKvHfCBpJRepOf+kU5zucWs+N7luJviHaNkSYTabN4/7gDMSc/G8xdcPPMJ0dDUAb94a7o
	ya3ucXebXFc9PWs9jD8zHYs7fIhr0Y8=
Subject: Re: [PATCH] libelf: improve PVH elfnote parsing
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20210514135014.78389-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8ef298f4-24b5-e133-fc3e-87a67132a61b@suse.com>
Date: Mon, 17 May 2021 13:09:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210514135014.78389-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.05.2021 15:50, Roger Pau Monne wrote:
> @@ -426,7 +426,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>      }
>  
>      /* Initial guess for virt_base is 0 if it is not explicitly defined. */
> -    if ( parms->virt_base == UNSET_ADDR )
> +    if ( parms->virt_base == UNSET_ADDR || hvm )
>      {
>          parms->virt_base = 0;
>          elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
> @@ -442,7 +442,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>       * If we are using the modern ELF notes interface then the default
>       * is 0.
>       */
> -    if ( parms->elf_paddr_offset == UNSET_ADDR )
> +    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
>      {
>          if ( parms->elf_note_start )
>              parms->elf_paddr_offset = 0;

Both of these would want their respective comments also updated, I
think: There's no defaulting or guessing really in PVH mode, is
there?

> @@ -456,8 +456,13 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>      parms->virt_kstart = elf->pstart + virt_offset;
>      parms->virt_kend   = elf->pend   + virt_offset;
>  
> -    if ( parms->virt_entry == UNSET_ADDR )
> -        parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
> +    if ( parms->virt_entry == UNSET_ADDR || hvm )
> +    {
> +        if ( parms->phys_entry != UNSET_ADDR32 )

Don't you need "&& hvm" here to prevent ...

> +            parms->virt_entry = parms->phys_entry;

... this taking effect for a kernel capable of running in both
PV and PVH modes, instead of ...

> +        else
> +            parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);

... this (when actually in PV mode)?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 11:16:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 11:16:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128174.240674 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1libEB-0002cv-I7; Mon, 17 May 2021 11:16:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128174.240674; Mon, 17 May 2021 11:16:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1libEB-0002co-EI; Mon, 17 May 2021 11:16:03 +0000
Received: by outflank-mailman (input) for mailman id 128174;
 Mon, 17 May 2021 11:16:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1libEA-0002ci-CK
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 11:16:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 16287dd3-1efe-4ab1-8762-93cc88d733f5;
 Mon, 17 May 2021 11:16:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8CB07B080;
 Mon, 17 May 2021 11:16:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 16287dd3-1efe-4ab1-8762-93cc88d733f5
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621250160; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PRnmnJ6cgYdDT0uurga8Jr2PBHHTqFSap47T56nENTg=;
	b=cXV0F0KBqPK7B49CKPPegCNzgNNgGi6WlH/6VCF0+mExvGA5o7Wh/jBturiVZjKrczpCST
	cJg2Bk2gY+VuTZpy83LMKpbz6ncsyBzjXkRGnqhnox4LqEO0PgPwE26/Khc3QF6GAE6iws
	2QeLylBkRNbvvnipsX/oMV9Jg07mSEE=
Subject: Re: [PATCH v3 2/5] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Connor Davis <connojdavis@gmail.com>, Julien Grall <julien@xen.org>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
Date: Mon, 17 May 2021 13:16:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.05.2021 20:53, Connor Davis wrote:
> --- a/xen/common/memory.c
> +++ b/xen/common/memory.c
> @@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>      p2m_type_t p2mt;
>  #endif
>      mfn_t mfn;
> +#ifdef CONFIG_HAS_PASSTHROUGH
>      bool *dont_flush_p, dont_flush;
> +#endif
>      int rc;
>  
>  #ifdef CONFIG_X86
> @@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>       * Since we're likely to free the page below, we need to suspend
>       * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
>       */
> +#ifdef CONFIG_HAS_PASSTHROUGH
>      dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
>      dont_flush = *dont_flush_p;
>      *dont_flush_p = false;
> +#endif
>  
>      rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
>  
> +#ifdef CONFIG_HAS_PASSTHROUGH
>      *dont_flush_p = dont_flush;
> +#endif
>  
>      /*
>       * With the lack of an IOMMU on some platforms, domains with DMA-capable
> @@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>      xatp->gpfn += start;
>      xatp->size -= start;
>  
> +#ifdef CONFIG_HAS_PASSTHROUGH
>      if ( is_iommu_enabled(d) )
>      {
>         this_cpu(iommu_dont_flush_iotlb) = 1;
>         extra.ppage = &pages[0];
>      }
> +#endif
>  
>      while ( xatp->size > done )
>      {
> @@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>          }
>      }
>  
> +#ifdef CONFIG_HAS_PASSTHROUGH
>      if ( is_iommu_enabled(d) )
>      {
>          int ret;
> @@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>          if ( unlikely(ret) && rc >= 0 )
>              rc = ret;
>      }
> +#endif
>  
>      return rc;
>  }

I wonder whether all of these wouldn't better become CONFIG_X86:
ISTR Julien indicating that he doesn't see the override getting used
on Arm. (Julien, please correct me if I'm misremembering.)
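For concreteness, under that alternative the first hunk would simply become

 -#ifdef CONFIG_HAS_PASSTHROUGH
 +#ifdef CONFIG_X86
      bool *dont_flush_p, dont_flush;
  #endif

(just a sketch, of course, pending Julien's confirmation), with the
remaining hunks adjusted the same way.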

> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -51,9 +51,15 @@ static inline bool_t dfn_eq(dfn_t x, dfn_t y)
>      return dfn_x(x) == dfn_x(y);
>  }
>  
> -extern bool_t iommu_enable, iommu_enabled;
> +extern bool_t iommu_enable;
>  extern bool force_iommu, iommu_quarantine, iommu_verbose;
>  
> +#ifdef CONFIG_HAS_PASSTHROUGH
> +extern bool_t iommu_enabled;

Just bool please, like is already the case for the line in context
above. We're in the process of phasing out bool_t.
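I.e.:

 #ifdef CONFIG_HAS_PASSTHROUGH
 extern bool iommu_enabled;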

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 11:22:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 11:22:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128180.240685 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1libJy-0003zZ-6T; Mon, 17 May 2021 11:22:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128180.240685; Mon, 17 May 2021 11:22:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1libJy-0003zS-3U; Mon, 17 May 2021 11:22:02 +0000
Received: by outflank-mailman (input) for mailman id 128180;
 Mon, 17 May 2021 11:22:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1libJw-0003zM-30
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 11:22:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0fbc4c5a-d9e1-4eff-9e14-0f1028d77262;
 Mon, 17 May 2021 11:21:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 767D0B077;
 Mon, 17 May 2021 11:21:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0fbc4c5a-d9e1-4eff-9e14-0f1028d77262
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621250518; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JqORidrnXvepoyATdPCMnKUTiaJhRNOhWnYZ8IcDbzI=;
	b=HNrWiRIppa3RD141YkyGccYkkVFB46DBdvuXAWCVFGlOZpPaQ7xNKbLWFYG88u3+mKxsFb
	Yjh/r8gxZe4ySbDbxZRRwQTgfs/k030O3bvnSjUZu8X2S1PcasJG6YJRFoLzJiqJP7b8U5
	5T7bp/6w2b5ALjx1lzOH9Fi9thTq9LM=
Subject: Re: [PATCH v3 3/5] xen: Fix build when !CONFIG_GRANT_TABLE
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <834f7995ae80a3b37b6d508d1c989b4ee391f61b.1621017334.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b56a1cfd-46ab-c601-883c-73537dfaac92@suse.com>
Date: Mon, 17 May 2021 13:22:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <834f7995ae80a3b37b6d508d1c989b4ee391f61b.1621017334.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.05.2021 20:53, Connor Davis wrote:
> Move struct grant_table; in grant_table.h above
> ifdef CONFIG_GRANT_TABLE. This fixes the following:
> 
> /build/xen/include/xen/grant_table.h:84:50: error: 'struct grant_table'
> declared inside parameter list will not be visible outside of this
> definition or declaration [-Werror]
>    84 | static inline int mem_sharing_gref_to_gfn(struct grant_table *gt,
>       |

There must be more to this, as e.g. the PV shim does get built with
!GRANT_TABLE. Nevertheless, ...

> Signed-off-by: Connor Davis <connojdavis@gmail.com>

... since the potential for breaking the build is obvious enough,
Acked-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 11:51:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 11:51:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128186.240696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1libmB-0007Dn-GF; Mon, 17 May 2021 11:51:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128186.240696; Mon, 17 May 2021 11:51:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1libmB-0007Dg-CL; Mon, 17 May 2021 11:51:11 +0000
Received: by outflank-mailman (input) for mailman id 128186;
 Mon, 17 May 2021 11:51:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1libmA-0007Da-Jn
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 11:51:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f41515cf-504f-46e3-9321-7846ee63903d;
 Mon, 17 May 2021 11:51:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7B673AD9F;
 Mon, 17 May 2021 11:51:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f41515cf-504f-46e3-9321-7846ee63903d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621252266; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hNgqxTeEaaedrCQZvAxUUKMN+PMgdcQJkWDCdUu/0FM=;
	b=DGxOcKyLjpnV2boVs4M2hetRyt0QSgVKhxgnQgEstG7ROln+dDO9gL2YwVf4qRizlFfWXa
	/1NnBDjjmAlEM33f6FDTAQDi0c16Oph9InXYvJ4UWbMnmi/KoFYCf5YQBo2GbmghwuLplL
	UQ8zjIJzcKQtVvmCe8oFrfxc/tSkQUw=
Subject: Re: [PATCH v3 4/5] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ce3ff72e-611b-3b9c-96fa-afc9e8767681@suse.com>
Date: Mon, 17 May 2021 13:51:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.05.2021 20:53, Connor Davis wrote:
> --- /dev/null
> +++ b/config/riscv64.mk
> @@ -0,0 +1,5 @@
> +CONFIG_RISCV := y
> +CONFIG_RISCV_64 := y
> +CONFIG_RISCV_$(XEN_OS) := y

I wonder whether the last one actually gets used anywhere, but I do
realize that other architectures have similar definitions.

> --- a/xen/Makefile
> +++ b/xen/Makefile
> @@ -26,7 +26,9 @@ MAKEFLAGS += -rR
>  EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
>  
>  ARCH=$(XEN_TARGET_ARCH)
> -SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> +SRCARCH=$(shell echo $(ARCH) | \
> +	  sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
> +	      -e s'/riscv.*/riscv/g')

I think in makefiles tab indentation would better be restricted to
rules. While here it's a minor nit, ...

> @@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
>  # Set ARCH/SUBARCH appropriately.
>  export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
>  export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
> -                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
> +                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
> +			        -e s'/riscv.*/riscv/g')

... here you're actually introducing locally inconsistent indentation.

> --- /dev/null
> +++ b/xen/arch/riscv/Kconfig
> @@ -0,0 +1,52 @@
> +config 64BIT
> +	bool

I'm afraid this is stale now - the option now lives in xen/arch/Kconfig,
available to all architectures.

> +config RISCV_64
> +	bool
> +	depends on 64BIT

Such a "depends on" is relatively pointless here - it matters mainly
when there is a prompt.

> +config ARCH_DEFCONFIG
> +	string
> +	default "arch/riscv/configs/riscv64_defconfig" if RISCV_64

For the RISCV_64 dependency to be really useful (at least with the
command-line kconfig), you want its selection to live above the use.

> +menu "Architecture Features"
> +
> +source "arch/Kconfig"
> +
> +endmenu
> +
> +menu "ISA Selection"
> +
> +choice
> +	prompt "Base ISA"
> +        default RISCV_ISA_RV64IMA
> +        help
> +          This selects the base ISA extensions that Xen will target.
> +
> +config RISCV_ISA_RV64IMA
> +	bool "RV64IMA"
> +        select 64BIT
> +        select RISCV_64

I think you want only the latter here, and the former done by RISCV_64
(or select the former here, and have "default y if 64BIT" at RISCV_64).
That way, not every party wanting 64-bit has to select both.
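I.e. something along the lines of (a structural sketch only)

 config RISCV_64
 	bool
 	select 64BIT

 config RISCV_ISA_RV64IMA
 	bool "RV64IMA"
 	select RISCV_64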

> +        help
> +           Use the RV64I base ISA, plus the "M" and "A" extensions
> +           for integer multiply/divide and atomic instructions, respectively.
> +
> +endchoice
> +
> +config RISCV_ISA_C
> +	bool "Compressed extension"
> +        help
> +           Add "C" to the ISA subsets that the toolchain is allowed
> +           to emit when building Xen, which results in compressed
> +           instructions in the Xen binary.
> +
> +           If unsure, say N.

For all of the above, please adjust indentation to be consistent with
(the bulk of) what we have elsewhere.

> --- /dev/null
> +++ b/xen/arch/riscv/arch.mk
> @@ -0,0 +1,16 @@
> +########################################
> +# RISCV-specific definitions
> +
> +ifeq ($(CONFIG_RISCV_64),y)
> +    CFLAGS += -mabi=lp64
> +endif

Wherever possible I think we should prefer the list approach:

CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64

> --- /dev/null
> +++ b/xen/arch/riscv/configs/riscv64_defconfig
> @@ -0,0 +1,12 @@
> +# CONFIG_SCHED_CREDIT is not set
> +# CONFIG_SCHED_RTDS is not set
> +# CONFIG_SCHED_NULL is not set
> +# CONFIG_SCHED_ARINC653 is not set
> +# CONFIG_TRACEBUFFER is not set
> +# CONFIG_DEBUG is not set
> +# CONFIG_DEBUG_INFO is not set
> +# CONFIG_HYPFS is not set
> +# CONFIG_GRANT_TABLE is not set
> +# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
> +
> +CONFIG_EXPERT=y

These are rather odd defaults, more like for a special-purpose
config than a general-purpose one. None of what you turn off here
is guaranteed to stay off for people actually trying to build
things, so it's not clear to me what the idea here is. As a
specific remark, especially during bringup work I think it is
quite important to not default DEBUG to off: You definitely want
to see whether any assertions trigger.

> --- /dev/null
> +++ b/xen/include/asm-riscv/config.h
> @@ -0,0 +1,110 @@
> +/******************************************************************************
> + * config.h
> + *
> + * A Linux-style configuration list.

May I suggest not replicating this now-inapplicable comment any
further? It was already somewhat bogus for Arm to clone it from x86.

> + */
> +
> +#ifndef __RISCV_CONFIG_H__
> +#define __RISCV_CONFIG_H__
> +
> +#if defined(CONFIG_RISCV_64)
> +# define LONG_BYTEORDER 3
> +# define ELFSIZE 64
> +#else
> +# error "Unsupported RISCV variant"
> +#endif
> +
> +#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
> +#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
> +#define POINTER_ALIGN  BYTES_PER_LONG
> +
> +#define BITS_PER_LLONG 64
> +
> +/* xen_ulong_t is always 64 bits */
> +#define BITS_PER_XEN_ULONG 64
> +
> +#define CONFIG_RISCV 1

This duplicates a "real" Kconfig setting, doesn't it?

> +#define CONFIG_RISCV_L1_CACHE_SHIFT 6
> +
> +#define CONFIG_PAGEALLOC_MAX_ORDER 18
> +#define CONFIG_DOMU_MAX_ORDER      9
> +#define CONFIG_HWDOM_MAX_ORDER     10
> +
> +#define OPT_CONSOLE_STR "dtuart"
> +
> +#ifdef CONFIG_RISCV_64
> +#define MAX_VIRT_CPUS 128u
> +#else
> +#error "Unsupported RISCV variant"
> +#endif

Could this be folded with the other similar construct further up?
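E.g. (sketch):

 #if defined(CONFIG_RISCV_64)
 # define LONG_BYTEORDER 3
 # define ELFSIZE 64
 # define MAX_VIRT_CPUS 128u
 #else
 # error "Unsupported RISCV variant"
 #endif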

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 11:56:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 11:56:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128195.240706 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1libr0-00081E-6c; Mon, 17 May 2021 11:56:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128195.240706; Mon, 17 May 2021 11:56:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1libr0-000817-3Y; Mon, 17 May 2021 11:56:10 +0000
Received: by outflank-mailman (input) for mailman id 128195;
 Mon, 17 May 2021 11:56:09 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1libqz-000811-3L
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 11:56:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0b09e2cf-cdb7-44ca-8712-5e3072878035;
 Mon, 17 May 2021 11:56:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C6CE6ABF6;
 Mon, 17 May 2021 11:56:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b09e2cf-cdb7-44ca-8712-5e3072878035
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621252566; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jpM7LXCKjoAEKzHogk/TQuEiqSyV2Ek4pl3OH8UCDkk=;
	b=dkfsgVuXNLPgpitWSc3AXIGdfVfoW7UwYEUJ7m0XSF0vQfwhyHW6ZTc7DyRz3rZD8lykNl
	H9z2dVfsndkqW9qjjC4RhQWEtv5sMmoITnACAVjwjZiTMSLUOWLEDpAkdZj7ZRQz6+mBMy
	l+OoHQ07esx9NZWJwE7AQGgPwfqGY/s=
Subject: Re: [PATCH v3 1/5] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <3960a676376e0163d97ac02f968966cdfaccbf75.1621017334.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <76b5e429-a0b0-b8a2-cd31-85cbb4da8680@suse.com>
Date: Mon, 17 May 2021 13:56:14 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <3960a676376e0163d97ac02f968966cdfaccbf75.1621017334.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 14.05.2021 20:53, Connor Davis wrote:
> Defaulting to yes only for X86 and ARM reduces the requirements
> for a minimal build when porting new architectures.

While I agree with the statement, ...

> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -1,6 +1,6 @@
>  config HAS_NS16550
>  	bool "NS16550 UART driver" if ARM
> -	default y
> +	default y if (ARM || X86)

... this approach doesn't scale very well. You would likely have
been hesitant to add a, say, 12-way || here if we had this many
architectures already. I think you instead want

 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
+	default n if RISCV
 	default y

which then can be adjusted back by another one line change once
the driver code actually builds.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 12:14:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 12:14:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128209.240717 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lic8U-0002Bo-3C; Mon, 17 May 2021 12:14:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128209.240717; Mon, 17 May 2021 12:14:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lic8U-0002Bh-00; Mon, 17 May 2021 12:14:14 +0000
Received: by outflank-mailman (input) for mailman id 128209;
 Mon, 17 May 2021 12:14:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lic8S-0002BP-JK; Mon, 17 May 2021 12:14:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lic8S-0002pe-Bh; Mon, 17 May 2021 12:14:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lic8S-0000Ed-1V; Mon, 17 May 2021 12:14:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lic8S-0000eb-0w; Mon, 17 May 2021 12:14:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SZiHRDfRxtYhRKeXThEwlXIEgL4QwVxkgk9SwPDzEU4=; b=nP60TO/WCy70huMXumyci2Hbo+
	MHPzYkwECiLwUEkPG7o1VikRVR6/krvqe4ekR3z67x/b3CH3CYwC0UtKNi/CSVEA9I13LhEl6G0Bm
	HO+SR7paTJpCEM+ikOtgPtwjEJ5zcV+0NQsUiehd1Ykd85uLU4umMz9Z4e6fTuhk2V/4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161973-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161973: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
X-Osstest-Versions-That:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 12:14:12 +0000

flight 161973 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161973/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat fail pass in 161964

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161964
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161964
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161964
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161964
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161964
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161964
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161964
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161964
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161964
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161964
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161964
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34
baseline version:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34

Last test of basis   161973  2021-05-17 01:52:45 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 17 12:44:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 12:44:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128216.240731 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1licbT-0005PO-Dw; Mon, 17 May 2021 12:44:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128216.240731; Mon, 17 May 2021 12:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1licbT-0005PH-Ai; Mon, 17 May 2021 12:44:11 +0000
Received: by outflank-mailman (input) for mailman id 128216;
 Mon, 17 May 2021 12:44:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1CF0=KM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1licbS-0005PB-10
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 12:44:10 +0000
Received: from mail-wm1-f53.google.com (unknown [209.85.128.53])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c279eabf-8ec3-4e5f-90c2-b11bca4b6c97;
 Mon, 17 May 2021 12:44:09 +0000 (UTC)
Received: by mail-wm1-f53.google.com with SMTP id
 b19-20020a05600c06d3b029014258a636e8so3520303wmn.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 05:44:09 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id e10sm17579130wrw.20.2021.05.17.05.44.07
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 May 2021 05:44:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c279eabf-8ec3-4e5f-90c2-b11bca4b6c97
X-Received: by 2002:a7b:ca42:: with SMTP id m2mr64020614wml.67.1621255448538;
        Mon, 17 May 2021 05:44:08 -0700 (PDT)
Date: Mon, 17 May 2021 12:44:06 +0000
From: Wei Liu <wl@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org,
	Ian Jackson <iwj@xenproject.org>
Subject: Re: [PATCH v2 2/6] tools/libs/ctrl: fix xc_core_arch_map_p2m() to
 support linear p2m table
Message-ID: <20210517124406.khnpmw5l7mzru5zm@liuwe-devbox-debian-v2>
References: <20210412152236.1975-1-jgross@suse.com>
 <20210412152236.1975-3-jgross@suse.com>
 <20210421101334.4wuzqjkwiq7ixsbh@liuwe-devbox-debian-v2>
 <6f3853d8-6900-cf27-bd2b-750b6ee2cfb1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <6f3853d8-6900-cf27-bd2b-750b6ee2cfb1@suse.com>

On Wed, Apr 21, 2021 at 12:17:49PM +0200, Juergen Gross wrote:
> On 21.04.21 12:13, Wei Liu wrote:
> > On Mon, Apr 12, 2021 at 05:22:32PM +0200, Juergen Gross wrote:
> > > The core of a PV Linux guest produced via "xl dump-core" is not usable,
> > > because since kernel 4.14 only the linear p2m table is kept if Xen
> > > indicates it supports that. Unfortunately xc_core_arch_map_p2m() still
> > > supports only the 3-level p2m tree.
> > > 
> > > Fix that by copying the functionality of map_p2m() from libxenguest to
> > > libxenctrl.
> > > 
> > 
> > So there are now two copies of the same logic? Is it possible to reduce
> > it to only one?
> 
> Yes. See the intro mail of the series.
> 
> I wanted to fix the issue first, before doing the major cleanup work.

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon May 17 13:21:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:21:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128222.240743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidB3-0000tP-AW; Mon, 17 May 2021 13:20:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128222.240743; Mon, 17 May 2021 13:20:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidB3-0000tI-76; Mon, 17 May 2021 13:20:57 +0000
Received: by outflank-mailman (input) for mailman id 128222;
 Mon, 17 May 2021 13:20:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W5oJ=KM=oracle.com=daniel.kiper@srs-us1.protection.inumbo.net>)
 id 1lidB1-0000tC-VW
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:20:56 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bf202809-4199-4624-aa81-1e0949c339a0;
 Mon, 17 May 2021 13:20:54 +0000 (UTC)
Received: from pps.filterd (m0246617.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14HDCvfC003912; Mon, 17 May 2021 13:20:52 GMT
Received: from oracle.com (userp3030.oracle.com [156.151.31.80])
 by mx0b-00069f02.pphosted.com with ESMTP id 38j6usgqtv-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 17 May 2021 13:20:52 +0000
Received: from userp3030.oracle.com (userp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14HDA55Y067742;
 Mon, 17 May 2021 13:20:50 GMT
Received: from nam02-bn1-obe.outbound.protection.outlook.com
 (mail-bn1nam07lp2044.outbound.protection.outlook.com [104.47.51.44])
 by userp3030.oracle.com with ESMTP id 38j3dtb6w3-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 17 May 2021 13:20:50 +0000
Received: from CY4PR1001MB2230.namprd10.prod.outlook.com
 (2603:10b6:910:46::36) by CY4PR10MB1477.namprd10.prod.outlook.com
 (2603:10b6:903:2b::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Mon, 17 May
 2021 13:20:48 +0000
Received: from CY4PR1001MB2230.namprd10.prod.outlook.com
 ([fe80::71d1:56f1:d7fb:bb8f]) by CY4PR1001MB2230.namprd10.prod.outlook.com
 ([fe80::71d1:56f1:d7fb:bb8f%7]) with mapi id 15.20.4065.039; Mon, 17 May 2021
 13:20:48 +0000
Received: from tomti.i.net-space.pl (84.10.22.86) by
 AM6P194CA0054.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Mon, 17 May 2021 13:20:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bf202809-4199-4624-aa81-1e0949c339a0
Date: Mon, 17 May 2021 15:20:39 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
Message-ID: <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
 <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
User-Agent: NeoMutt/20170113 (1.7.2)
X-Originating-IP: [84.10.22.86]
X-ClientProxiedBy: AM6P194CA0054.EURP194.PROD.OUTLOOK.COM
 (2603:10a6:209:84::31) To CY4PR1001MB2230.namprd10.prod.outlook.com
 (2603:10b6:910:46::36)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 56606d4b-acf2-465e-0c1d-08d9193695c8
X-MS-TrafficTypeDiagnostic: CY4PR10MB1477:
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 56606d4b-acf2-465e-0c1d-08d9193695c8
X-MS-Exchange-CrossTenant-AuthSource: CY4PR1001MB2230.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 17 May 2021 13:20:48.5527
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: gACJNgfT8t69pp+LrLOOuOTG1vSuJlZCL0gGXZND+I1LoBdmUzoZyfX3zcLd2wT7frhgKXtwxcRHppHd1PytBA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CY4PR10MB1477

On Mon, May 17, 2021 at 08:48:32AM +0200, Jan Beulich wrote:
> On 07.05.2021 22:26, Bob Eshleman wrote:
> > What is your intuition WRT the idea that instead of trying add a PE/COFF hdr
> > in front of Xen's mb2 bin, we instead go the route of introducing valid mb2
> > entry points into xen.efi?
>
> At the first glance I think this is going to be less intrusive, and hence
> to be preferred. But of course I haven't experimented in any way ...

When I worked on this a few years ago I tried that way. Sadly I failed
because I was not able to produce a "linear" PE image with the binutils
available in those days. Still, I think you should try again: newer
binutils may be more flexible and able to produce a PE image with the
properties required by the Multiboot2 protocol.

Daniel


From xen-devel-bounces@lists.xenproject.org Mon May 17 13:22:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:22:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128230.240754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidCE-0001WG-PX; Mon, 17 May 2021 13:22:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128230.240754; Mon, 17 May 2021 13:22:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidCE-0001W9-Li; Mon, 17 May 2021 13:22:10 +0000
Received: by outflank-mailman (input) for mailman id 128230;
 Mon, 17 May 2021 13:22:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lidCC-0001W3-Hf
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:22:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7f3ba80b-5fb1-4c77-a076-907c4bcf72f2;
 Mon, 17 May 2021 13:22:07 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 054C8B162;
 Mon, 17 May 2021 13:22:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f3ba80b-5fb1-4c77-a076-907c4bcf72f2
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621257727; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=tYp/S6dDZV1GNnEKVFysZqTjdXObr/Rva9bndRnzBW8=;
	b=ihxuP61zjkBqOnd3jeVXPfy6maw5O90eAkYOJbYifWOx/1FulEqI0AvNf/Ha89vZ8dWTgO
	PAxJe5OjQyMcpi4zTKLGYOsIvOZFOkxQFsshtuziNP+zsSiYtzM4ldNiZr9xUS9zCtVVzd
	MFcMvDtCgbG3quTEO3+mJcZwub6hqoQ=
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH][XTF] XSA-286: fix PAE variant of test
Message-ID: <d6f4bfd1-791f-8191-5cce-6c98977a5175@suse.com>
Date: Mon, 17 May 2021 15:22:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

L3 entry updates aren't specified to take immediate effect in PAE mode:
On bare metal, only the next CR3 load actually loads the PDPTEs, and a
32-bit Xen also wouldn't immediately propagate new entries into the
PDPTEs. That they take immediate effect (leaving aside the need to
flush the TLB) on 64-bit Xen is merely to avoid complicating the
hypervisor implementation more than necessary. Guests cannot depend on
such behavior, and hence this test shouldn't either.

Insert the hypercall equivalent of a CR3 reload into the multicall.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
With this, cb199cc7de98 ('Revert "x86/PV32: avoid TLB flushing after
mod_l3_entry()" and "x86/PV: restrict TLB flushing after
mod_l[234]_entry()"') should imo be reverted from the Xen tree. The
claim that the test was correct and the hypervisor code flawed was wrong.

--- a/tests/xsa-286/main.c
+++ b/tests/xsa-286/main.c
@@ -128,9 +128,18 @@ void test_main(void)
          *
          * - update_va_mapping(addr, 0, INLVPG)
          * - mmu_update(&l3t[slot], l2t2)
+         * - (PAE only) new_baseptr(cr3)
          * - update_va_mapping(addr, gfn0 | AD|WR|P, INLVPG)
          */
         mu[0].val = pte_from_virt(l2t2, PF_SYM(AD, RW, P));
+#ifdef __i386__
+        mmuext_op_t mx[] = {
+            {
+                .cmd = MMUEXT_NEW_BASEPTR,
+                .arg1.mfn = read_cr3() >> PAGE_SHIFT,
+            },
+        };
+#endif
         intpte_t nl1e = pte_from_gfn(pfn_to_mfn(0), PF_SYM(AD, RW, P));
         multicall_entry_t multi[] = {
             {
@@ -153,6 +162,17 @@ void test_main(void)
                     DOMID_SELF,
                 },
             },
+#ifdef __i386__
+            {
+                .op = __HYPERVISOR_mmuext_op,
+                .args = {
+                    _u(mx),
+                    ARRAY_SIZE(mx),
+                    _u(NULL),
+                    DOMID_SELF,
+                },
+            },
+#endif
             {
                 .op = __HYPERVISOR_update_va_mapping,
                 .args = {


From xen-devel-bounces@lists.xenproject.org Mon May 17 13:24:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:24:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128235.240765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidEW-0002Dt-5p; Mon, 17 May 2021 13:24:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128235.240765; Mon, 17 May 2021 13:24:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidEW-0002Dm-2k; Mon, 17 May 2021 13:24:32 +0000
Received: by outflank-mailman (input) for mailman id 128235;
 Mon, 17 May 2021 13:24:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lidEV-0002Dg-02
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:24:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 24599434-454e-48fa-bc39-738f5e1affd8;
 Mon, 17 May 2021 13:24:30 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 401D5B1C8;
 Mon, 17 May 2021 13:24:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 24599434-454e-48fa-bc39-738f5e1affd8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621257869; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=AUdEQ2AJRw6LNZgC0oNGHQ9ysBMUAECATObwQXJikp8=;
	b=PylPRWcaT7eCTtQyGa0Rdz8MzXKjVerT4YmcBUN2m0lyeyb81LQ6dHXC4E+Eg6EP6vYLZV
	JjSU4JUZMfMYeoJ5oYgYQG1W1BaOx1BdDRcImx5P9g74fGBkv7pNDtVEKszweFueqSMZMq
	TOlrTSvNVE6npc84BzE1J19Pxmrn4ik=
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
To: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
 <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
 <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ee89a22d-5f46-51ed-4c46-63cfc60cbafc@suse.com>
Date: Mon, 17 May 2021 15:24:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.05.2021 15:20, Daniel Kiper wrote:
> On Mon, May 17, 2021 at 08:48:32AM +0200, Jan Beulich wrote:
>> On 07.05.2021 22:26, Bob Eshleman wrote:
>>> What is your intuition WRT the idea that instead of trying to add a PE/COFF hdr
>>> in front of Xen's mb2 bin, we instead go the route of introducing valid mb2
>>> entry points into xen.efi?
>>
>> At the first glance I think this is going to be less intrusive, and hence
>> to be preferred. But of course I haven't experimented in any way ...
> 
> When I worked on this a few years ago I tried that way. Sadly I failed
> because I was not able to produce a "linear" PE image using the
> binutils existing in those days.

What is a "linear" PE image?

> Maybe
> newer binutils are more flexible and will be able to produce a PE image
> with properties required by Multiboot2 protocol.

Isn't all you need the MB2 header within the first so many bytes of the
(disk) image? Or was it the image as loaded into memory? Both should be
possible to arrange for.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 13:40:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:40:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128243.240775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidTy-0004Sv-J2; Mon, 17 May 2021 13:40:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128243.240775; Mon, 17 May 2021 13:40:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidTy-0004So-G7; Mon, 17 May 2021 13:40:30 +0000
Received: by outflank-mailman (input) for mailman id 128243;
 Mon, 17 May 2021 13:40:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X6rY=KM=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lidTx-0004Si-Iv
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:40:29 +0000
Received: from mail-oi1-x235.google.com (unknown [2607:f8b0:4864:20::235])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78b1ceaa-4b32-495d-91da-d5d3c7f28f82;
 Mon, 17 May 2021 13:40:28 +0000 (UTC)
Received: by mail-oi1-x235.google.com with SMTP id u144so6491838oie.6
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 06:40:28 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id q26sm3157730otn.0.2021.05.17.06.40.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 06:40:27 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78b1ceaa-4b32-495d-91da-d5d3c7f28f82
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=PHuiLV63C2tSmFFHnFLyLJ7i3+2YiHakKhRUDluCH84=;
        b=TRiPcrDpaCx6v8yoGO6mzvbi7SuyVGoWdMb9UzinGPzL9gExNh73ZBR9NGSwqpafg/
         iv/yve3NKLQDNfXlQS9lASd/P7qVhBfGu+Cbap/g6CQlO4w3f6khoiJT+f1LnHujSBVv
         4908Ol61233hB6ZZZlxrp+gUZSPo31Bta1hZolITCz9CZMRPogwgzBKL62pe/TpjqClY
         T8n8u1xfo3XCmbXzsnpVFrztk329Ale/YsxEB3ba8wimKLbuS5sSSpVLsnet2V0uQ97f
         UvYAnLaZmHtqt/Bi2O3T9Dz5vlfm/BdaNlM08ElV2T+z9j/n4kHPNLWhJg0hRjuh5hxr
         xEIQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=PHuiLV63C2tSmFFHnFLyLJ7i3+2YiHakKhRUDluCH84=;
        b=YPQ01z7WJSYShu9AoW0F6hap18mBOzDE/Squb3E8hUjSSlIwCyUNa/V1F+eQBuhB+L
         hb7/MpZwF6mSKWANlfw+6KTUmasnbQ7InMNvGt+XYGxRLHcJY6Q5WzVunZTyb6yLg+x8
         G6K5wqnA6MzsmaKTHDEyPOr71JORwwx29GbOg/+llCinKESH09GHMF41ql6GRxnDIzq5
         DTPCtWph71CuVRD0yTSGnn7FippDRMX4+MnuPUU4g3APQ7+3SX+5tlcz01hOj8XLopC6
         xBon8sgItD2HMcQHterwL/lvShNqnZk1n1Go7w/Pza2Lpg67/7jQfkHHzcjS7V+0FFGW
         viDQ==
X-Gm-Message-State: AOAM5328hO1ukZlaF9ZvNwL8dSbxwKsM/yDLwIxexvuJKG6FYUtxOKFi
	16C9ujyG2dJd2SDEjZ3ONmi3g9g+xJhfDg==
X-Google-Smtp-Source: ABdhPJzgFGr6evghvbaa+ti/HoijdnyLyjMOzhrdOHT5ZKtj4JgcWxRoCKxRNj9dMTkjzcIqA3UJAw==
X-Received: by 2002:aca:f0c:: with SMTP id 12mr39647654oip.131.1621258827645;
        Mon, 17 May 2021 06:40:27 -0700 (PDT)
Subject: Re: [PATCH] drivers/char: Add USB3 debug capability driver
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <9a6a15ebc538105c83be88883ab3a7125ed52d37.1620776791.git.connojdavis@gmail.com>
 <912fa28c-5cb3-cf40-00db-19423c442da3@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <b31e6eed-437e-de23-1395-48db052d6237@gmail.com>
Date: Mon, 17 May 2021 07:40:46 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <912fa28c-5cb3-cf40-00db-19423c442da3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/17/21 3:27 AM, Jan Beulich wrote:
> On 12.05.2021 02:12, Connor Davis wrote:
>> Add support for the xHCI debug capability (DbC). The DbC provides
>> a SuperSpeed serial link between a debug target running Xen and a
>> debug host. To use it you will need a USB3 A/A debug cable plugged into
>> a root port on the target machine. Recent kernels report the existence
>> of the DbC capability at
>>
>>    /sys/kernel/debug/usb/xhci/<seg>:<bus>:<slot>.<func>/reg-ext-dbc:00
>>
>> The host machine should have the usb_debug.ko module on the system
>> (the cable can be plugged into any port on the host side). After the
>> host usb_debug enumerates the DbC, it will create a /dev/ttyUSB<n> file
>> that can be used with things like minicom.
>>
>> To use the DbC as a console, pass `console=dbgp dbgp=xhci`
>> on the xen command line. This will select the first host controller
>> found that implements the DbC. Other variations to 'dbgp=' are accepted,
>> please see xen-command-line.pandoc for more. Remote GDB is also supported
>> with `gdb=dbgp dbgp=xhci`. Note that to see output and/or provide input
>> after dom0 starts, DMA remapping of the host controller must be
>> disabled.
>>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>> ---
>>   MAINTAINERS                       |    6 +
>>   docs/misc/xen-command-line.pandoc |   19 +-
>>   xen/arch/x86/Kconfig              |    1 -
>>   xen/arch/x86/setup.c              |    1 +
>>   xen/drivers/char/Kconfig          |   15 +
>>   xen/drivers/char/Makefile         |    1 +
>>   xen/drivers/char/xhci-dbc.c       | 1122 +++++++++++++++++++++++++++++
>>   xen/drivers/char/xhci-dbc.h       |  621 ++++++++++++++++
> What is the reason for needing the separate header? Isn't / can't all
> logic (be) contained within xhci-dbc.c? If this is because you clone
> code from elsewhere (as the initial comment in the files seems to
> suggest), it might be a good idea for the description to say so.
> Depending on the origin and possible plans to keep our clone in sync
> down the road, undoing this separation as well as correction of certain
> style issues (which I'm not going to try to spot consistently just yet)
> may then not be worth requesting. Otoh the files look to have been
> converted to Xen style, so direct syncing may not be a goal.
>
No reason other than cosmetic, separating "header-ish" things from source.

I can put it all in one .c if that is preferred.

>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -714,9 +714,26 @@ Available alternatives, with their meaning, are:
>>   ### dbgp
>>   > `= ehci[ <integer> | @pci<bus>:<slot>.<func> ]`
>>   
>> -Specify the USB controller to use, either by instance number (when going
>> +Specify the EHCI USB controller to use, either by instance number (when going
>>   over the PCI busses sequentially) or by PCI device (must be on segment 0).
>>   
>> +If you have a system with an xHCI USB controller that supports the Debug
>> +Capability (DbC), you can use
>> +
>> +> `= xhci[ <integer> | @pci<bus>:<slot>.<func> ]`
>> +
>> +To use this, you need a USB3 A/A debugging cable plugged into a SuperSpeed
>> +root port on the target machine. Recent kernels expose the existence of the
>> +DbC at /sys/kernel/debug/usb/xhci/<seg>:<bus>:<slot>.<func>/reg-ext-dbc:00.
>> +Note that to see output and process input after dom0 is started, you need to
>> +ensure that the host controller's DMA is not remapped (e.g. with
>> +dom0-iommu=passthrough).
> This is a relatively bad limitation imo - people would better not get
> used to using passthrough mode, and debugging other IOMMU modes (for
> Dom0) may then be impossible at all.

Why is turning on passthrough mode to debug something bad? Sure, it
shouldn't be done in a production deployment, but I don't see the
issue if it is used in a debug environment.

>> --- a/xen/arch/x86/Kconfig
>> +++ b/xen/arch/x86/Kconfig
>> @@ -11,7 +11,6 @@ config X86
>>   	select HAS_ALTERNATIVE
>>   	select HAS_COMPAT
>>   	select HAS_CPUFREQ
>> -	select HAS_EHCI
> Why?
>
>> --- a/xen/drivers/char/Kconfig
>> +++ b/xen/drivers/char/Kconfig
>> @@ -63,6 +63,21 @@ config HAS_SCIF
>>   config HAS_EHCI
>>   	bool
>>   	depends on X86
>> +        default y if !HAS_XHCI_DBC
> Again, why? The two drivers shouldn't be exclusive of one another.

They both implement the dbgp_op hypercall, so they can't both be
built, otherwise the build fails (as the code stands now with this
patch, that is).

>
> Also please note the mixture of indentation you introduce.
Thanks, will fix
>
>>   	help
>>   	  This selects the USB based EHCI debug port to be used as a UART. If
>>   	  you have an x86 based system with USB, say Y.
>> +
>> +config HAS_XHCI_DBC
>> +	bool "xHCI Debug Capability driver"
> A setting name HAS_* wouldn't normally have a prompt.
Understood
>
>> +	depends on X86 && HAS_PCI
>> +	help
>> +	  This selects the xHCI Debug Capabilty to be used as a UART.
>> +
>> +config XHCI_FIXMAP_PAGES
>> +        int "Number of fixmap entries to allocate for the xHC"
>> +	depends on HAS_XHCI_DBC
>> +        default 16
>> +        help
>> +          This should equal the size (in 4K pages) of the first 64-bit
>> +          BAR of the host controller in which the DbC is being used.
> Again - please use consistent (in itself as well as with the rest of
> the file) indentation.
>
> Jan


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Mon May 17 13:41:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:41:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128247.240787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidUz-000527-U5; Mon, 17 May 2021 13:41:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128247.240787; Mon, 17 May 2021 13:41:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidUz-000520-R3; Mon, 17 May 2021 13:41:33 +0000
Received: by outflank-mailman (input) for mailman id 128247;
 Mon, 17 May 2021 13:41:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X6rY=KM=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lidUy-00051s-M3
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:41:32 +0000
Received: from mail-ot1-x32f.google.com (unknown [2607:f8b0:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dc42a2ee-2fc1-4a4e-b6fa-2c5d4192dcf2;
 Mon, 17 May 2021 13:41:32 +0000 (UTC)
Received: by mail-ot1-x32f.google.com with SMTP id
 69-20020a9d0a4b0000b02902ed42f141e1so5565191otg.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 06:41:32 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id a19sm3183745otk.31.2021.05.17.06.41.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 06:41:31 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dc42a2ee-2fc1-4a4e-b6fa-2c5d4192dcf2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=367gddCLKIQMUSKWrDX1dfdM4/aoyN7KbKCA66qOI0c=;
        b=hEAy7E8piRYEtO0NfCWzNs9fm92Xeij1/nsUVb5/X6nKmXrHx2UVRVjGaVjoWiaHLK
         3VexaKHveGVJdT5PXr+lAyQ1qONbBOm1U79+Q6VZalapPHvIgqxxd2/LOgOg4/BTkweP
         mJ30xF+iOii+8PZ+jToYRkMU+L+HIY2iom/fptHcfLX1nkTrLrLdyY1gyGHZ4f8dubrC
         lBObasSkUtR4/ZDMrKP4ZfrC+LY/tnajzVz4dpYw4OZUwDOdjOP2a60Lyh0yWTtcLRwH
         qOgtg91MvKrt+ptACsRCMKREeSThMeVKLo3IcjE3DI6r3z8OTtjnQ8WHtJLd8cTZqzO+
         POZw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=367gddCLKIQMUSKWrDX1dfdM4/aoyN7KbKCA66qOI0c=;
        b=mWnJhIABy1GT1sJbFYcLMbrj0XDJts4GF3FOpRg5kDLqL9xixKf4wp2gOqa+k+WDvK
         h5ia1gB2vB83g7orRa0+DW8L6Bw0gha3UQ1ThY1DiHqfDXnXMjLIDqGy65/TO3TyFqWW
         Ccno/EAFgj5acbMGVLsDqO8mD1Vn3JT+AT7kjUwkhmgOFrvqYTylYykosIgsjPNEG/II
         R0/gwM88wCCAx1EB2VAFhYQt9kLblekuERwtUbvFQbvDgN0S/dqJnD7lulg9NpsljrAy
         K8YNYZztkJpxnIQoeejKsICe0KPM9gE77CE0K8CMSA8Pyf68tf+CqxzR0chEv8RjkMFr
         B1dw==
X-Gm-Message-State: AOAM531xpf92k6DTDZPsZLVuN7X019zFZyW28BLu7+HxD41ZhX9OuVEu
	W32z3yFNNWsf6LVQIOkKu8U=
X-Google-Smtp-Source: ABdhPJw/4KPF6rbMWTH9G9q05p+HF1Asqzzkba6PyLLfowGcTnoJchrRCQYhLDrR/cAXEDacVwmqSQ==
X-Received: by 2002:a05:6830:248d:: with SMTP id u13mr49382062ots.121.1621258891721;
        Mon, 17 May 2021 06:41:31 -0700 (PDT)
Subject: Re: [PATCH] drivers/char: Add USB3 debug capability driver
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <9a6a15ebc538105c83be88883ab3a7125ed52d37.1620776791.git.connojdavis@gmail.com>
 <46931334-d4a8-eb89-0b81-727ff30c0ec0@xen.org>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <60cc20a9-03b1-214e-d3f2-6f383d10ac03@gmail.com>
Date: Mon, 17 May 2021 07:41:50 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <46931334-d4a8-eb89-0b81-727ff30c0ec0@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 8bit
Content-Language: en-US


On 5/17/21 3:36 AM, Julien Grall wrote:
> Hi Connor,
>
> On 12/05/2021 01:12, Connor Davis wrote:
>> +config XHCI_FIXMAP_PAGES
>> +        int "Number of fixmap entries to allocate for the xHC"
>> +    depends on HAS_XHCI_DBC
>> +        default 16
>> +        help
>> +          This should equal the size (in 4K pages) of the first 64-bit
>> +          BAR of the host controller in which the DbC is being used.
>
> Why can't you use the ioremap() for the new serial controller? Is this 
> going to be used by Xen very early?
>
> Cheers,
>
Yes, it is used very early.


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Mon May 17 13:48:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:48:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128262.240825 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidbF-00069x-Va; Mon, 17 May 2021 13:48:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128262.240825; Mon, 17 May 2021 13:48:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidbF-00069q-SK; Mon, 17 May 2021 13:48:01 +0000
Received: by outflank-mailman (input) for mailman id 128262;
 Mon, 17 May 2021 13:48:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lidbE-00069U-Kl
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:48:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d1bdf183-4b76-4cf9-943f-5d6723c56553;
 Mon, 17 May 2021 13:47:59 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C4DFCAD2D;
 Mon, 17 May 2021 13:47:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d1bdf183-4b76-4cf9-943f-5d6723c56553
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621259278; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bl+QFlr3DTg8dgbGhORW0TRkVne+U+NysgYfakjAjVE=;
	b=uF4XldrdwPHMJlisteYMEMgYgujoVRt8uRrpo/cA0OLrefdRXGT43/SCknaQqkO2q5FUKv
	lGoceC5VcemztObAEZqrusPMxfyilH7SzdmcfUux18WV8MZHY9O153ziVKMEbrZHMAD/v0
	CNQ2Mm6v92kyr5o1q7fDLOK+rAPJ6WY=
Subject: Re: [PATCH] drivers/char: Add USB3 debug capability driver
To: Connor Davis <connojdavis@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <9a6a15ebc538105c83be88883ab3a7125ed52d37.1620776791.git.connojdavis@gmail.com>
 <912fa28c-5cb3-cf40-00db-19423c442da3@suse.com>
 <b31e6eed-437e-de23-1395-48db052d6237@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b049745e-3004-cfc5-7130-e70452936cf9@suse.com>
Date: Mon, 17 May 2021 15:47:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <b31e6eed-437e-de23-1395-48db052d6237@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.05.2021 15:40, Connor Davis wrote:
> On 5/17/21 3:27 AM, Jan Beulich wrote:
>> On 12.05.2021 02:12, Connor Davis wrote:
>>> --- a/docs/misc/xen-command-line.pandoc
>>> +++ b/docs/misc/xen-command-line.pandoc
>>> @@ -714,9 +714,26 @@ Available alternatives, with their meaning, are:
>>>   ### dbgp
>>>   > `= ehci[ <integer> | @pci<bus>:<slot>.<func> ]`
>>>   
>>> -Specify the USB controller to use, either by instance number (when going
>>> +Specify the EHCI USB controller to use, either by instance number (when going
>>>   over the PCI busses sequentially) or by PCI device (must be on segment 0).
>>>   
>>> +If you have a system with an xHCI USB controller that supports the Debug
>>> +Capability (DbC), you can use
>>> +
>>> +> `= xhci[ <integer> | @pci<bus>:<slot>.<func> ]`
>>> +
>>> +To use this, you need a USB3 A/A debugging cable plugged into a SuperSpeed
>>> +root port on the target machine. Recent kernels expose the existence of the
>>> +DbC at /sys/kernel/debug/usb/xhci/<seg>:<bus>:<slot>.<func>/reg-ext-dbc:00.
>>> +Note that to see output and process input after dom0 is started, you need to
>>> +ensure that the host controller's DMA is not remapped (e.g. with
>>> +dom0-iommu=passthrough).
>> This is a relatively bad limitation imo - people would better not get
>> used to using passthrough mode, and debugging other IOMMU modes (for
>> Dom0) may then be impossible at all.
> 
> Why is turning on passthrough mode to debug something bad? Sure, it
> shouldn't be done in a production deployment, but I don't see the
> issue if it is used in a debug environment.

But then you develop with and test something that's not what gets
used in production.

>>> --- a/xen/arch/x86/Kconfig
>>> +++ b/xen/arch/x86/Kconfig
>>> @@ -11,7 +11,6 @@ config X86
>>>   	select HAS_ALTERNATIVE
>>>   	select HAS_COMPAT
>>>   	select HAS_CPUFREQ
>>> -	select HAS_EHCI
>> Why?
>>
>>> --- a/xen/drivers/char/Kconfig
>>> +++ b/xen/drivers/char/Kconfig
>>> @@ -63,6 +63,21 @@ config HAS_SCIF
>>>   config HAS_EHCI
>>>   	bool
>>>   	depends on X86
>>> +        default y if !HAS_XHCI_DBC
>> Again, why? The two drivers shouldn't be exclusive of one another.
> 
> They both implement the dbgp_op hypercall, so they can't both be
> built, otherwise the build fails (as the code stands now with this
> patch, that is).

Oh, right - this hypercall needs properly multiplexing onto
whichever of the two drivers is actually in use. Not sure
whether even the case when both are in use at the same time
needs considering / dealing with.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 13:48:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:48:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128264.240836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidbn-0006hh-7n; Mon, 17 May 2021 13:48:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128264.240836; Mon, 17 May 2021 13:48:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidbn-0006ha-4n; Mon, 17 May 2021 13:48:35 +0000
Received: by outflank-mailman (input) for mailman id 128264;
 Mon, 17 May 2021 13:48:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X6rY=KM=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lidbl-0006gG-WD
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:48:34 +0000
Received: from mail-ot1-x32e.google.com (unknown [2607:f8b0:4864:20::32e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dee4ff9-b37a-4522-9bc7-e0050e95dbf0;
 Mon, 17 May 2021 13:48:33 +0000 (UTC)
Received: by mail-ot1-x32e.google.com with SMTP id
 q7-20020a9d57870000b02902a5c2bd8c17so5578637oth.5
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 06:48:33 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id f9sm3160208otq.27.2021.05.17.06.48.32
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 06:48:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4dee4ff9-b37a-4522-9bc7-e0050e95dbf0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=Y7G3TzUfMF26NrwSfO4KjIdPwYhYispCIDNJBE6rnKU=;
        b=IlUjmEUztnRpvBfIDXO7inyXDXdMskWLTvaqVIFQNg/sz7Sm85rGV+V/NycCnzCMqp
         pR41oIEn6xzSWBDCE66TpcjX6qvaIxxkpGkrsNqRDG6fpqnsu2OSG8vTPdzkJ4UQcAmF
         vCk4Og0AUaXC+Rx6Tycg1z1yITNJBeGIVh+EKO+wJpyfqne9xBYo6GBQVfvgLU03xVpB
         qHvIQ3dwr+nVtRMdn4xrH2UGRqgIu9jxk4ditSoGxHEyIiEN7hQeODI70v97xH6/mTw3
         0h3rOWsJ0JiedFQaXs1hiDZDR4l0+g8RR+/wHKjxHwaBPWEc0DGLcB9//Ng8jc9U+/9E
         a5MQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=Y7G3TzUfMF26NrwSfO4KjIdPwYhYispCIDNJBE6rnKU=;
        b=cykTW77MJ5VJkYKe1cV4X9tA2x315TtUCDAPjL1EW8j2oevI0ke+5/t2ia3/dqzkII
         gJ67YAwop4j/UHsookWxELSG8nkNV8Q/FFo3ZoWQXSo9QYxttAXsLucMkIEJLxin5Ftt
         hm3u7c58O28Tgw/OjOlgsGe2mbYIfYZSTejk1gkDcXWfx42Udr6HFN78iL1QESpQyHg3
         kv3j4MlGmgulg6iuiJEoS0fhaxsng++U/ZLBIt6Q2Eq6599wxQGzuEZYF+U8VR2bZ1Hy
         jKZoLHph21xIFp51G/MqW0DjL+qZVsaJ9NDrivdDQnEc/QYGfJJVRzT1dqOqN6cE8Lmn
         B0ww==
X-Gm-Message-State: AOAM531A6cT3f87ypcbOazVlEBumT5KxlGKvn8uyjLgzHOF0yhCrnI7p
	t11WQd9ByhsnT2JhqC38ykU=
X-Google-Smtp-Source: ABdhPJyLzO83i4D+oIyg6HoPQKkQpEgGaCBeACLMXfFi8Xmghwtvoqj4t3iSgcq4bojWvN+4i7BQhA==
X-Received: by 2002:a9d:a14:: with SMTP id 20mr45335812otg.86.1621259312975;
        Mon, 17 May 2021 06:48:32 -0700 (PDT)
Subject: Re: [PATCH v2 1/4] usb: early: Avoid using DbC if already enabled
To: Jan Beulich <jbeulich@suse.com>
Cc: Jann Horn <jannh@google.com>, Lee Jones <lee.jones@linaro.org>,
 Chunfeng Yun <chunfeng.yun@mediatek.com>, linux-usb@vger.kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
References: <cover.1620950220.git.connojdavis@gmail.com>
 <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
 <8ccce25a-e3ca-cb30-f6a3-f9243a85a49b@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <16400ee4-4406-8b26-10c0-a423b2b1fed0@gmail.com>
Date: Mon, 17 May 2021 07:48:52 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <8ccce25a-e3ca-cb30-f6a3-f9243a85a49b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/17/21 3:32 AM, Jan Beulich wrote:
> On 14.05.2021 02:56, Connor Davis wrote:
>> Check if the debug capability is enabled in early_xdbc_parse_parameter,
>> and if it is, return with an error. This avoids collisions with whatever
>> enabled the DbC prior to Linux starting.
> Doesn't this go too far and prevent use even if firmware (perhaps
> mistakenly) left it enabled?
>
> Jan

Yes, but how is one supposed to distinguish the broken-firmware
case from the non-broken one?

>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>> ---
>>   drivers/usb/early/xhci-dbc.c | 10 ++++++++++
>>   1 file changed, 10 insertions(+)
>>
>> diff --git a/drivers/usb/early/xhci-dbc.c b/drivers/usb/early/xhci-dbc.c
>> index be4ecbabdd58..ca67fddc2d36 100644
>> --- a/drivers/usb/early/xhci-dbc.c
>> +++ b/drivers/usb/early/xhci-dbc.c
>> @@ -642,6 +642,16 @@ int __init early_xdbc_parse_parameter(char *s)
>>   	}
>>   	xdbc.xdbc_reg = (struct xdbc_regs __iomem *)(xdbc.xhci_base + offset);
>>   
>> +	if (readl(&xdbc.xdbc_reg->control) & CTRL_DBC_ENABLE) {
>> +		pr_notice("xhci debug capability already in use\n");
>> +		early_iounmap(xdbc.xhci_base, xdbc.xhci_length);
>> +		xdbc.xdbc_reg = NULL;
>> +		xdbc.xhci_base = NULL;
>> +		xdbc.xhci_length = 0;
>> +
>> +		return -ENODEV;
>> +	}
>> +
>>   	return 0;
>>   }
>>   
>>
Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Mon May 17 13:50:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:50:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128273.240848 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liddu-00089w-OU; Mon, 17 May 2021 13:50:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128273.240848; Mon, 17 May 2021 13:50:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liddu-00089p-LP; Mon, 17 May 2021 13:50:46 +0000
Received: by outflank-mailman (input) for mailman id 128273;
 Mon, 17 May 2021 13:50:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liddt-00089h-Nn
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:50:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 86d86362-0d13-4ec1-8858-52a87dfc5761;
 Mon, 17 May 2021 13:50:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 23CA6AEB6;
 Mon, 17 May 2021 13:50:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86d86362-0d13-4ec1-8858-52a87dfc5761
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621259444; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+7sasbVGYUEZ5AUFJyqaV2/43qQDZihNx+bYQeU0oug=;
	b=I7bynsVI2NinzMt6jLlxaT7EoTUkMzIIhstMwHywg/jo1hAPH9EXoyrraJJ4DSG2YpY6QI
	bFqeWwhBNI9Rq1Lz1hnEGwT+7cTBLBPJCgWs+IEqtsZF72gl9f1oYwWJgFZBD5AoVCVQ8w
	San4lRm2W38SFSoeeTeHGWizy6gMsAo=
Subject: Re: [PATCH 2/8] xen/blkfront: read response from backend only once
To: Juergen Gross <jgross@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
 linux-block@vger.kernel.org, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-3-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8bc3e4e7-a93f-cd8c-3f96-3a8caea38ed3@suse.com>
Date: Mon, 17 May 2021 15:50:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210513100302.22027-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.05.2021 12:02, Juergen Gross wrote:
> In order to avoid problems in case the backend is modifying a response
> on the ring page while the frontend has already seen it, just read the
> response into a local buffer in one go and then operate on that buffer
> only.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
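The pattern the patch applies can be sketched as follows (illustrative
types, not the real blkif structures): copy the response out of the shared
ring in one go, then operate only on the private copy, so a backend that
rewrites the ring slot mid-processing cannot change what the frontend acts on.

```c
#include <assert.h>
#include <string.h>

struct resp { int id; int status; };

/* Stands in for the shared ring page mapped from the backend. */
static struct resp shared_ring[8];

static int process_response(unsigned idx)
{
    struct resp local;

    /* Single bulk read; every later access uses the local buffer. */
    memcpy(&local, &shared_ring[idx & 7], sizeof(local));

    /* Even if the backend flips the slot now, 'local' is stable. */
    shared_ring[idx & 7].status = -1;

    return local.status;
}
```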



From xen-devel-bounces@lists.xenproject.org Mon May 17 13:52:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:52:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128278.240859 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidfE-0000M4-3V; Mon, 17 May 2021 13:52:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128278.240859; Mon, 17 May 2021 13:52:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidfE-0000Lx-01; Mon, 17 May 2021 13:52:08 +0000
Received: by outflank-mailman (input) for mailman id 128278;
 Mon, 17 May 2021 13:52:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lidfC-0000Ln-1y; Mon, 17 May 2021 13:52:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lidfB-0004bV-R8; Mon, 17 May 2021 13:52:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lidfB-0006MX-It; Mon, 17 May 2021 13:52:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lidfB-0007SY-IO; Mon, 17 May 2021 13:52:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=/kgCad0uW6uSSrmGEqX8ns+Q6TNOUJaal9t6fA1GGKo=; b=HAFM4W6IAI48GUj4FqHk3SmMz8
	xilwO4+n7Fiwf4Tu9K4kYOUZ/sVBBe+evXdBttR3+DApEmQ0BAlgf7bwIIb8qebqT+q+teawg02vj
	8kVWcqaO1myHJCE+IskcQoUkc0wAhB+wUTxGvjBpnUaqWAxPRH9mMJiLieDcJ7oymsVU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161978-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xtf test] 161978: all pass - PUSHED
X-Osstest-Versions-This:
    xtf=5ead491e36af6cb8681fc1278bd36c756ad62ac2
X-Osstest-Versions-That:
    xtf=880092854e5473558af77289bb7c01a9fa9dda5a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 13:52:05 +0000

flight 161978 xtf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161978/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xtf                  5ead491e36af6cb8681fc1278bd36c756ad62ac2
baseline version:
 xtf                  880092854e5473558af77289bb7c01a9fa9dda5a

Last test of basis   161814  2021-05-06 14:40:11 Z   10 days
Testing same since   161978  2021-05-17 10:41:15 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>

jobs:
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-amd64-pvops                                            pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xtf.git
   8800928..5ead491  5ead491e36af6cb8681fc1278bd36c756ad62ac2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 17 13:52:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 13:52:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128280.240873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidfM-0000g9-CD; Mon, 17 May 2021 13:52:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128280.240873; Mon, 17 May 2021 13:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidfM-0000g2-91; Mon, 17 May 2021 13:52:16 +0000
Received: by outflank-mailman (input) for mailman id 128280;
 Mon, 17 May 2021 13:52:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X6rY=KM=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lidfL-0000fT-1s
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 13:52:15 +0000
Received: from mail-oi1-x22d.google.com (unknown [2607:f8b0:4864:20::22d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3a18b85a-8d6e-444e-8f7e-c5918ac3cdc6;
 Mon, 17 May 2021 13:52:14 +0000 (UTC)
Received: by mail-oi1-x22d.google.com with SMTP id u144so6525402oie.6
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 06:52:14 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id e26sm2728345oig.9.2021.05.17.06.52.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 06:52:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3a18b85a-8d6e-444e-8f7e-c5918ac3cdc6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=3Q5/aVh8S/Ya8SZ9Z+yZMN/mZ8v1AGIbYyuRPr9kbSg=;
        b=BGFqnrx52Lg5Kk/yz/K8ABPMSsMSfdxvlAUaAv6w4A02S9Bfhd7kvDIYlZgpbStmgp
         siBGgjqIBk5nFgdx4A5lQsoDQ6xnW78GJyTmP/5ZXu9NPCFEU24+S3t/0fJ20N4z6hPq
         /gNtIIGa2hh2rJFjk6LVn6W2CU8pk2R1w8NhOzFNxVISK/+BxuA8l/52Yc2kko9ioqHG
         rMneXAIqteO4mi8rMIPjhQTiDIl8iMHs3OUxL/I0PlBTArrpvhyx4WaK6f3lzOo+o4P0
         4zK/M6b58Za3pJMFzAPFmKg5SD4LcAWg8Rqssm9Mogd6L10ROaAxF4lQZZrOmHFwjtAh
         7+LQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=3Q5/aVh8S/Ya8SZ9Z+yZMN/mZ8v1AGIbYyuRPr9kbSg=;
        b=oCckxWXF8HMs9CrNLkZ1q3Izn24UZ5pIRHLjEFz2mB22nNxS6oVg7ekiLr54Bu4Ex9
         AFQAOlC72FaRd8JroT8Ts5X1ibDnzuZCW4mZLGeOK87G17vdjR/cd/GVBuWH5mv/qEXD
         oLEJTMrXiQdNWKtW+YKvWvuP9eanv11IL7IeD6UL9o99S2hc5jE3o8dg4BhpBLJXYsja
         +N5gInSVnND84Nb3QxtI6gXOiEWDskhJUXGwDY58LmHi39KblAvDUTz3kKWzSXLru5dq
         3ZT9TyjF1+CV8csQJM+VNR62bL26H03qiCS9b6IrfJTH4oL2E57jm8p4DbWV5WRj7L4w
         TGEw==
X-Gm-Message-State: AOAM530WFM/0sSbCd7FoGeuPr6Ls/J8JGbXZ1ZjO9DL9iy0oJkkFcYXg
	S2/nbLqgqWa2xOXwe1zNN2v/VREL6CVNww==
X-Google-Smtp-Source: ABdhPJxZ3fH4rgHtyBtCqxBzKM2g4sRlW39TC70MWTYVwn2+mZttUu/uUYAdw3JTBzyh7OKvpjRoaA==
X-Received: by 2002:aca:280a:: with SMTP id 10mr16184090oix.13.1621259533652;
        Mon, 17 May 2021 06:52:13 -0700 (PDT)
Subject: Re: [PATCH v3 2/5] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
 <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <beed18a6-02e7-d3a5-0a86-e2872d8927ac@gmail.com>
Date: Mon, 17 May 2021 07:52:32 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/17/21 5:16 AM, Jan Beulich wrote:
> On 14.05.2021 20:53, Connor Davis wrote:
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>       p2m_type_t p2mt;
>>   #endif
>>       mfn_t mfn;
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       bool *dont_flush_p, dont_flush;
>> +#endif
>>       int rc;
>>   
>>   #ifdef CONFIG_X86
>> @@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>        * Since we're likely to free the page below, we need to suspend
>>        * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
>>        */
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
>>       dont_flush = *dont_flush_p;
>>       *dont_flush_p = false;
>> +#endif
>>   
>>       rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
>>   
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       *dont_flush_p = dont_flush;
>> +#endif
>>   
>>       /*
>>        * With the lack of an IOMMU on some platforms, domains with DMA-capable
>> @@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>       xatp->gpfn += start;
>>       xatp->size -= start;
>>   
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       if ( is_iommu_enabled(d) )
>>       {
>>          this_cpu(iommu_dont_flush_iotlb) = 1;
>>          extra.ppage = &pages[0];
>>       }
>> +#endif
>>   
>>       while ( xatp->size > done )
>>       {
>> @@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>           }
>>       }
>>   
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       if ( is_iommu_enabled(d) )
>>       {
>>           int ret;
>> @@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>           if ( unlikely(ret) && rc >= 0 )
>>               rc = ret;
>>       }
>> +#endif
>>   
>>       return rc;
>>   }
> I wonder whether all of these wouldn't better become CONFIG_X86:
> ISTR Julien indicating that he doesn't see the override getting used
> on Arm. (Julien, please correct me if I'm misremembering.)
>
>> --- a/xen/include/xen/iommu.h
>> +++ b/xen/include/xen/iommu.h
>> @@ -51,9 +51,15 @@ static inline bool_t dfn_eq(dfn_t x, dfn_t y)
>>       return dfn_x(x) == dfn_x(y);
>>   }
>>   
>> -extern bool_t iommu_enable, iommu_enabled;
>> +extern bool_t iommu_enable;
>>   extern bool force_iommu, iommu_quarantine, iommu_verbose;
>>   
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>> +extern bool_t iommu_enabled;
> Just bool please, like is already the case for the line in context
> above. We're in the process of phasing out bool_t.
Got it, thanks.
>
> Jan


Thanks,

Connor
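One hypothetical way to reduce the number of #ifdef CONFIG_HAS_PASSTHROUGH
blocks in common code (names here are illustrative, not the actual Xen
interfaces) is to have the header supply a no-op accessor when passthrough
support is compiled out, so the callers stay unconditional:

```c
#include <assert.h>
#include <stdbool.h>

#ifdef CONFIG_HAS_PASSTHROUGH
/* Real build: the flag backs the per-CPU IOTLB-flush suppression. */
static bool iommu_dont_flush;
static bool *iommu_dont_flush_ptr(void) { return &iommu_dont_flush; }
#else
/* Passthrough compiled out: writes land in a dummy, reads see it too,
 * so common code behaves consistently without any #ifdef. */
static bool dummy_flag;
static bool *iommu_dont_flush_ptr(void) { return &dummy_flag; }
#endif

/* Common code path, free of conditional compilation: save the current
 * suppression state and clear it, returning the old value. */
static bool suspend_iotlb_flush(void)
{
    bool *p = iommu_dont_flush_ptr();
    bool old = *p;

    *p = false;
    return old;
}
```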



From xen-devel-bounces@lists.xenproject.org Mon May 17 14:01:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 14:01:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128292.240884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidnt-0002cv-AW; Mon, 17 May 2021 14:01:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128292.240884; Mon, 17 May 2021 14:01:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidnt-0002co-6q; Mon, 17 May 2021 14:01:05 +0000
Received: by outflank-mailman (input) for mailman id 128292;
 Mon, 17 May 2021 14:01:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lidnr-0002ci-TQ
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 14:01:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c4332d7-69a9-4752-9c54-e65bff54d52b;
 Mon, 17 May 2021 14:01:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 67309AC8F;
 Mon, 17 May 2021 14:01:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c4332d7-69a9-4752-9c54-e65bff54d52b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621260061; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+NdK/avQ8FBevNdzVt7abclY7ENC4Xp3kwu3bH3BXGU=;
	b=hYPsv114snNpTzRdG0F8FjOixxA+SuYB69/5PExQu2q6jetyOORhTmI33XUIRFQoSsuumH
	wm8zR2YajUMDfNKrOTJZip22sEn+XReUuQtivi9aVHWVXxe3Q/Ne8zCb43HBKhQol9uRzg
	SbZcgBXtsfJDlrDU97x5cATUj+CAjT4=
Subject: Re: [PATCH 3/8] xen/blkfront: don't take local copy of a request from
 the ring page
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-4-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4cbf7b7f-5f00-4aba-4d54-06aa73d1bc32@suse.com>
Date: Mon, 17 May 2021 16:01:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210513100302.22027-4-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.05.2021 12:02, Juergen Gross wrote:
> In order to avoid a malicious backend being able to influence the local
> copy of a request, build the request locally first and then copy it to
> the ring page, instead of doing it the other way round as today.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one remark/question:

> @@ -703,6 +704,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  {
>  	struct blkfront_info *info = rinfo->dev_info;
>  	struct blkif_request *ring_req, *extra_ring_req = NULL;
> +	struct blkif_request *final_ring_req, *final_extra_ring_req;

Without setting final_extra_ring_req to NULL just like is done for
extra_ring_req, ...

> @@ -840,10 +845,10 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>  	if (setup.segments)
>  		kunmap_atomic(setup.segments);
>  
> -	/* Keep a private copy so we can reissue requests when recovering. */
> -	rinfo->shadow[id].req = *ring_req;
> +	/* Copy request(s) to the ring page. */
> +	*final_ring_req = *ring_req;
>  	if (unlikely(require_extra_req))
> -		rinfo->shadow[extra_id].req = *extra_ring_req;
> +		*final_extra_ring_req = *extra_ring_req;

... are you sure all supported compilers will recognize the
conditional use and not warn about use of a possibly uninitialized
variable?

Jan
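The initialization Jan asks about can be sketched like this (simplified,
hypothetical types; the real function operates on the blkif ring slots):
giving the conditionally-used pointer a NULL initializer keeps compilers
that cannot prove the two require_extra_req paths match from warning about
a possibly uninitialized use.

```c
#include <assert.h>
#include <stddef.h>

struct req { int data; };

static void queue_req(int require_extra_req,
                      struct req *slot, struct req *extra_slot)
{
    struct req ring_req = { .data = 1 };
    struct req extra_ring_req = { .data = 2 };
    struct req *final_ring_req = slot;
    struct req *final_extra_ring_req = NULL; /* mirrors extra_ring_req */

    if (require_extra_req)
        final_extra_ring_req = extra_slot;

    /* Copy request(s) to the ring only as the final step. */
    *final_ring_req = ring_req;
    if (require_extra_req)
        *final_extra_ring_req = extra_ring_req;
}
```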


From xen-devel-bounces@lists.xenproject.org Mon May 17 14:11:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 14:11:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128302.240895 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidxd-00046t-8o; Mon, 17 May 2021 14:11:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128302.240895; Mon, 17 May 2021 14:11:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidxd-00046m-5n; Mon, 17 May 2021 14:11:09 +0000
Received: by outflank-mailman (input) for mailman id 128302;
 Mon, 17 May 2021 14:11:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lidxc-00046g-6K
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 14:11:08 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2475df2b-71d5-4864-be04-448e29dea9fc;
 Mon, 17 May 2021 14:11:06 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B3985B226;
 Mon, 17 May 2021 14:11:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2475df2b-71d5-4864-be04-448e29dea9fc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621260665; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EHKWqf4XFF6yDcLUh1zTWK08SLSurVXp3htCEcxP5vE=;
	b=E/smndzrybtZpCiCz9AUTl7jnNh2pkguz/VZt6nPJHhKIgSHg5e+tw9ARnJDn1EcyK2+cn
	lFybTRXBH2ocNsv/+wp+GGviqEPear+M6xfRObO9q4jdTFrboawxEbyCMWSdUqoavyooLm
	QhEcb3OMdpcAQwB8NU5o7YmQaot/KFs=
Subject: Re: [PATCH 4/8] xen/blkfront: don't trust the backend response data
 blindly
To: Juergen Gross <jgross@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-5-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>
Date: Mon, 17 May 2021 16:11:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210513100302.22027-5-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.05.2021 12:02, Juergen Gross wrote:
> @@ -1574,10 +1580,16 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>  	spin_lock_irqsave(&rinfo->ring_lock, flags);
>   again:
>  	rp = rinfo->ring.sring->rsp_prod;
> +	if (RING_RESPONSE_PROD_OVERFLOW(&rinfo->ring, rp)) {
> +		pr_alert("%s: illegal number of responses %u\n",
> +			 info->gd->disk_name, rp - rinfo->ring.rsp_cons);
> +		goto err;
> +	}
>  	rmb(); /* Ensure we see queued responses up to 'rp'. */

I think you want to insert after the barrier.

> @@ -1680,6 +1707,11 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>  	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
>  
>  	return IRQ_HANDLED;
> +
> + err:
> +	info->connected = BLKIF_STATE_ERROR;
> +	pr_alert("%s disabled for further use\n", info->gd->disk_name);
> +	return IRQ_HANDLED;
>  }

Am I understanding that a suspend (and then resume) can be used to
recover from error state? If so - is this intentional? If so in turn,
would it make sense to spell this out in the description?

Jan
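[Editor's note: RING_RESPONSE_PROD_OVERFLOW in the hunk above is, in essence, a bounds check on the backend-controlled producer index. The following is a simplified stand-in to illustrate the idea, not the actual Xen macro (which operates on the shared ring structure); names are illustrative only.]

```c
#include <assert.h>

#define RING_SIZE 32u  /* illustrative ring size, power of two in practice */

/* Simplified version of the check: with free-running unsigned indices,
 * (prod - cons) is the number of outstanding responses even across
 * integer wraparound; a well-behaved backend can never make that count
 * exceed the ring size, so a larger value means a bogus rsp_prod. */
static int prod_overflow(unsigned int rsp_prod, unsigned int rsp_cons)
{
    return rsp_prod - rsp_cons > RING_SIZE;
}
```

The unsigned subtraction is what keeps the check correct when the indices wrap around.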


From xen-devel-bounces@lists.xenproject.org Mon May 17 14:11:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 14:11:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128303.240906 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidxi-0004Om-HB; Mon, 17 May 2021 14:11:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128303.240906; Mon, 17 May 2021 14:11:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lidxi-0004Of-DS; Mon, 17 May 2021 14:11:14 +0000
Received: by outflank-mailman (input) for mailman id 128303;
 Mon, 17 May 2021 14:11:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=frGc=KM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lidxh-00046g-4J
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 14:11:13 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a4d60386-fe0d-4626-9de0-51d330615f08;
 Mon, 17 May 2021 14:11:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 96EB6B231;
 Mon, 17 May 2021 14:11:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4d60386-fe0d-4626-9de0-51d330615f08
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621260669; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=S44vK4RdOnJ04XbS2vkRaVaifdqAqJWbTuWTIISy/hw=;
	b=iR3+DbGvkyg7WQ8Z8cDFgxHoRSgvWIAeS4+DecIQb7fAJrKhgLBfc5ZSxuiHq2XmAhBmQR
	luVDcXvc2cLSQkLIAonFcQrRB/311xaPJMSYV7yGraVt/7uAQ3jJ8z8lfm7f3GQAxi1t5K
	7z9s9bPIyZSS3xPTHT1+bekE/mf2q0Y=
Subject: Re: [PATCH 3/8] xen/blkfront: don't take local copy of a request from
 the ring page
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-4-jgross@suse.com>
 <4cbf7b7f-5f00-4aba-4d54-06aa73d1bc32@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <278182af-044a-9445-bbdb-fdbb65d0da7c@suse.com>
Date: Mon, 17 May 2021 16:11:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <4cbf7b7f-5f00-4aba-4d54-06aa73d1bc32@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="r1814kCXy88HKlk6GseNHQ5Jl1bQjvzK4"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--r1814kCXy88HKlk6GseNHQ5Jl1bQjvzK4
Content-Type: multipart/mixed; boundary="3z0SaXV9bT9EMUHkBlogZ1gUM2CrDn6LJ";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Jens Axboe <axboe@kernel.dk>, xen-devel@lists.xenproject.org,
 linux-kernel@vger.kernel.org, linux-block@vger.kernel.org
Message-ID: <278182af-044a-9445-bbdb-fdbb65d0da7c@suse.com>
Subject: Re: [PATCH 3/8] xen/blkfront: don't take local copy of a request from
 the ring page
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-4-jgross@suse.com>
 <4cbf7b7f-5f00-4aba-4d54-06aa73d1bc32@suse.com>
In-Reply-To: <4cbf7b7f-5f00-4aba-4d54-06aa73d1bc32@suse.com>

--3z0SaXV9bT9EMUHkBlogZ1gUM2CrDn6LJ
Content-Type: multipart/mixed;
 boundary="------------D884FEC9E12ABF923568C4A9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------D884FEC9E12ABF923568C4A9
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.05.21 16:01, Jan Beulich wrote:
> On 13.05.2021 12:02, Juergen Gross wrote:
>> In order to avoid a malicious backend being able to influence the local
>> copy of a request, build the request locally first and then copy it to
>> the ring page instead of doing it the other way round as today.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
>
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one remark/question:
>
>> @@ -703,6 +704,7 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>>   {
>>   	struct blkfront_info *info = rinfo->dev_info;
>>   	struct blkif_request *ring_req, *extra_ring_req = NULL;
>> +	struct blkif_request *final_ring_req, *final_extra_ring_req;
>
> Without setting final_extra_ring_req to NULL just like is done for
> extra_ring_req, ...
>
>> @@ -840,10 +845,10 @@ static int blkif_queue_rw_req(struct request *req, struct blkfront_ring_info *ri
>>   	if (setup.segments)
>>   		kunmap_atomic(setup.segments);
>>
>> -	/* Keep a private copy so we can reissue requests when recovering. */
>> -	rinfo->shadow[id].req = *ring_req;
>> +	/* Copy request(s) to the ring page. */
>> +	*final_ring_req = *ring_req;
>>   	if (unlikely(require_extra_req))
>> -		rinfo->shadow[extra_id].req = *extra_ring_req;
>> +		*final_extra_ring_req = *extra_ring_req;
>
> ... are you sure all supported compilers will recognize the
> conditional use and not warn about use of a possibly uninitialized
> variable?

Hmm, probably better safe than sorry. Will change it.


Juergen
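[Editor's note: the fix agreed above is a one-line NULL initializer. As a standalone sketch of the pattern, with mock types rather than the actual blkfront structures:]

```c
#include <assert.h>
#include <stddef.h>

struct req { int id; };

/* Sketch of the pattern discussed above: final_extra_req is only
 * assigned and dereferenced on the "extra" path.  Initializing it to
 * NULL (just like extra_ring_req) keeps compilers that cannot prove
 * the matched conditionals from warning about a possibly
 * uninitialized variable. */
static void queue_rw_req(struct req *ring_slot, struct req *extra_slot,
                         int need_extra)
{
    struct req local = { .id = 1 };
    struct req extra_local = { .id = 2 };
    struct req *final_req, *final_extra_req = NULL;

    final_req = ring_slot;
    if (need_extra)
        final_extra_req = extra_slot;

    /* ... requests are built in the local copies first ... */

    /* Only at the very end are they copied to the shared slots. */
    *final_req = local;
    if (need_extra)
        *final_extra_req = extra_local;
}
```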

--------------D884FEC9E12ABF923568C4A9
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------D884FEC9E12ABF923568C4A9--

--3z0SaXV9bT9EMUHkBlogZ1gUM2CrDn6LJ--

--r1814kCXy88HKlk6GseNHQ5Jl1bQjvzK4
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCieXwFAwAAAAAACgkQsN6d1ii/Ey8E
nwf/b6Rnrwjrlv1zmZ3HhUue7Q9cl/SAVFjVfclytWJgqiZVWDpMOksl6hC1n93jfIItXpYQDtyn
9dvJ6u7gOKrKV+XFmJfhoxMevjfpznM68qdiEIxgbAf+nI/7EbyV6w2keI/FkR5SFcQcLgO8QhSv
beCxfIaVphAAvcCQo1Pd3QW0s9jqfyQY+GYuSjzLtrLggZFbwGhBwCCcyiofV6scsAwVYEmTF+Sg
zU/nVqZEM5FgpGj/ITxXCDK1LcIGkWwzTBub7lGK/0wPv9c5pIcLlWFOAApAPv6HL37e9f8Ul9Ot
VnCoqPVtM8ZgRnhNmleb5LNbGV/nn49CbDfyHuEyig==
=BHS7
-----END PGP SIGNATURE-----

--r1814kCXy88HKlk6GseNHQ5Jl1bQjvzK4--


From xen-devel-bounces@lists.xenproject.org Mon May 17 14:13:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 14:13:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128315.240917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lie0N-0005QD-40; Mon, 17 May 2021 14:13:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128315.240917; Mon, 17 May 2021 14:13:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lie0N-0005Q6-0T; Mon, 17 May 2021 14:13:59 +0000
Received: by outflank-mailman (input) for mailman id 128315;
 Mon, 17 May 2021 14:13:57 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lie0L-0005Ps-Qb
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 14:13:57 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3bc33be9-2541-4f64-ad84-1a677bb6e092;
 Mon, 17 May 2021 14:13:57 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4BF45B231;
 Mon, 17 May 2021 14:13:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3bc33be9-2541-4f64-ad84-1a677bb6e092
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621260836; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=lwE+T0c+xlHjc49FEylrM2BvekW9eKLArf9QaPrjeEw=;
	b=MWgi0KHh0Jd9DgsygqlTrv7WF3mMH+tR979KiKRtH17JWUc0vjATnRHwRxkz7WqJdOap+b
	gDoh6thuXBQePPtv7wwg2NWmEnhuBeBhzRWG89YqvaBML/HVasdng0lVHp1Ff7jBJ4yMWg
	mOhDSclmEAjvafp9vOBWrPvrvpMt/nY=
Subject: Re: [PATCH v2 1/4] usb: early: Avoid using DbC if already enabled
To: Connor Davis <connojdavis@gmail.com>
Cc: Jann Horn <jannh@google.com>, Lee Jones <lee.jones@linaro.org>,
 Chunfeng Yun <chunfeng.yun@mediatek.com>, linux-usb@vger.kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
References: <cover.1620950220.git.connojdavis@gmail.com>
 <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
 <8ccce25a-e3ca-cb30-f6a3-f9243a85a49b@suse.com>
 <16400ee4-4406-8b26-10c0-a423b2b1fed0@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ddb58cbd-0a72-f680-80f4-ce09b13a2cee@suse.com>
Date: Mon, 17 May 2021 16:13:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <16400ee4-4406-8b26-10c0-a423b2b1fed0@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.05.2021 15:48, Connor Davis wrote:
> 
> On 5/17/21 3:32 AM, Jan Beulich wrote:
>> On 14.05.2021 02:56, Connor Davis wrote:
>>> Check if the debug capability is enabled in early_xdbc_parse_parameter,
>>> and if it is, return with an error. This avoids collisions with whatever
>>> enabled the DbC prior to linux starting.
>> Doesn't this go too far and prevent use even if firmware (perhaps
>> mistakenly) left it enabled?
> 
> Yes, but how is one supposed to distinguish the broken firmware and
> non-broken firmware cases?

Well, a first step might be to only check if running virtualized.
And then, if you're running virtualized, there might be a way to
inquire the hypervisor?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 14:20:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 14:20:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128323.240928 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lie6c-0006ro-Ro; Mon, 17 May 2021 14:20:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128323.240928; Mon, 17 May 2021 14:20:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lie6c-0006rh-N7; Mon, 17 May 2021 14:20:26 +0000
Received: by outflank-mailman (input) for mailman id 128323;
 Mon, 17 May 2021 14:20:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lie6c-0006rb-5n
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 14:20:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a15a006d-cecd-4b62-b0d8-13c70180635c;
 Mon, 17 May 2021 14:20:25 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C0BC3B271;
 Mon, 17 May 2021 14:20:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a15a006d-cecd-4b62-b0d8-13c70180635c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621261224; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pBvkavdK2+tEKfPns8/0KtXvMxeeFCb6/eSO70iXjJI=;
	b=KLTOub7TZSUOtkuHjpjekTAeT7wCeVfOQvosdmBXGcqwk4DiC/5/awqVvT7u7JBOfoHIyE
	v7KGYFo5pFFclu6Ss7cK58jfDScXMPZZBsPG4CyN9PmNL1hEk1EfZO+4UK8ttQkgRhH+L2
	e/oEsvaU+rkOSW2y8kZNRwFlPjxMsro=
Subject: Re: [PATCH 5/8] xen/netfront: read response from backend only once
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-6-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c9f90370-fc02-3f05-0670-35f795c59d95@suse.com>
Date: Mon, 17 May 2021 16:20:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210513100302.22027-6-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.05.2021 12:02, Juergen Gross wrote:
> In order to avoid problems in case the backend is modifying a response
> on the ring page while the frontend has already seen it, just read the
> response into a local buffer in one go and then operate on that buffer
> only.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>
with one remark:

> @@ -830,24 +830,22 @@ static int xennet_get_extras(struct netfront_queue *queue,
>  			break;
>  		}
>  
> -		extra = (struct xen_netif_extra_info *)
> -			RING_GET_RESPONSE(&queue->rx, ++cons);
> +		RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
>  
> -		if (unlikely(!extra->type ||
> -			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
> +		if (unlikely(!extra.type ||
> +			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
>  			if (net_ratelimit())
>  				dev_warn(dev, "Invalid extra type: %d\n",
> -					extra->type);
> +					extra.type);
>  			err = -EINVAL;
>  		} else {
> -			memcpy(&extras[extra->type - 1], extra,
> -			       sizeof(*extra));
> +			memcpy(&extras[extra.type - 1], &extra, sizeof(extra));

Maybe take the opportunity and switch to (type safe) structure
assignment?

Jan
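[Editor's note: Jan's suggestion is to replace the memcpy() with a plain structure assignment. A minimal standalone sketch with a mock struct (not the real xen_netif_extra_info) of why that is preferable:]

```c
#include <assert.h>

struct extra_info { int type; int val; };

/* Hypothetical helper showing the suggested change: for two objects of
 * the same plain struct type, assignment copies exactly what the
 * memcpy() did, but the compiler now verifies that source and
 * destination types actually match. */
static void store_extra(struct extra_info *extras,
                        const struct extra_info *extra)
{
    /* was: memcpy(&extras[extra->type - 1], extra, sizeof(*extra)); */
    extras[extra->type - 1] = *extra;
}
```

With memcpy(), a size or type mismatch compiles silently; with assignment it is a hard error.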


From xen-devel-bounces@lists.xenproject.org Mon May 17 14:23:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 14:23:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128328.240938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lie9L-0007W2-7u; Mon, 17 May 2021 14:23:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128328.240938; Mon, 17 May 2021 14:23:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lie9L-0007Vv-4p; Mon, 17 May 2021 14:23:15 +0000
Received: by outflank-mailman (input) for mailman id 128328;
 Mon, 17 May 2021 14:23:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=frGc=KM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lie9K-0007Vp-DJ
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 14:23:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab46aa3a-8061-4269-956e-eca1b86980d6;
 Mon, 17 May 2021 14:23:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 75198B231;
 Mon, 17 May 2021 14:23:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab46aa3a-8061-4269-956e-eca1b86980d6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621261392; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=l0UfPfd8q4JFoy/Gm6qLKY+Gv2/rFgvF9TXVY1oORvg=;
	b=e9BZ6Kdl3nJMZU0ppxyFtdn0Phjh0jmK4TVR7JmHJWSH/+iXIA95y8g3jMI6CsFW6eN9La
	TSABvjb7BjHvRl3nzK8ZbCXF+3lhkPvZO/MLTxDG5iOXhCqk++XLa3svj1iHXNokHdfBUQ
	jE6etTanday/Omsjn2Ioza+tXyEr26o=
To: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-5-jgross@suse.com>
 <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 4/8] xen/blkfront: don't trust the backend response data
 blindly
Message-ID: <6095c4b9-a9bb-8a38-fb6c-a5483105b802@suse.com>
Date: Mon, 17 May 2021 16:23:11 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="uFelNYe0WoxMBWJXMM8pj0dKp98OxKi7g"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--uFelNYe0WoxMBWJXMM8pj0dKp98OxKi7g
Content-Type: multipart/mixed; boundary="BxGqQncOtjZqp2zBgNsSAzrhnURnu5KtO";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
Message-ID: <6095c4b9-a9bb-8a38-fb6c-a5483105b802@suse.com>
Subject: Re: [PATCH 4/8] xen/blkfront: don't trust the backend response data
 blindly
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-5-jgross@suse.com>
 <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>
In-Reply-To: <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>

--BxGqQncOtjZqp2zBgNsSAzrhnURnu5KtO
Content-Type: multipart/mixed;
 boundary="------------186FB5FB25262D3342EC3271"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------186FB5FB25262D3342EC3271
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.05.21 16:11, Jan Beulich wrote:
> On 13.05.2021 12:02, Juergen Gross wrote:
>> @@ -1574,10 +1580,16 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>>   	spin_lock_irqsave(&rinfo->ring_lock, flags);
>>    again:
>>   	rp = rinfo->ring.sring->rsp_prod;
>> +	if (RING_RESPONSE_PROD_OVERFLOW(&rinfo->ring, rp)) {
>> +		pr_alert("%s: illegal number of responses %u\n",
>> +			 info->gd->disk_name, rp - rinfo->ring.rsp_cons);
>> +		goto err;
>> +	}
>>   	rmb(); /* Ensure we see queued responses up to 'rp'. */
>
> I think you want to insert after the barrier.

Why? The relevant variable which is checked is "rp". The result of the
check does not depend on the responses themselves in any way. And any
change of rsp_cons is protected by ring_lock, so there is no possibility
of reading an old value here.

>
>> @@ -1680,6 +1707,11 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>>   	spin_unlock_irqrestore(&rinfo->ring_lock, flags);
>>
>>   	return IRQ_HANDLED;
>> +
>> + err:
>> +	info->connected = BLKIF_STATE_ERROR;
>> +	pr_alert("%s disabled for further use\n", info->gd->disk_name);
>> +	return IRQ_HANDLED;
>>   }
>
> Am I understanding that a suspend (and then resume) can be used to
> recover from error state? If so - is this intentional? If so in turn,
> would it make sense to spell this out in the description?

I'd call it a nice side effect rather than intention. I can add a remark
to the commit message if you want.


Juergen
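[Editor's note: Juergen's argument is that the overflow check consumes only the index values, and that the error path latches a terminal state. A condensed, self-contained model of that control flow, with mock types rather than the driver code:]

```c
#include <assert.h>

#define RING_SIZE 32u  /* illustrative */

enum state { ST_CONNECTED, ST_ERROR };

struct ring { enum state connected; unsigned int rsp_prod, rsp_cons; };

/* Condensed model of the interrupt handler discussed above: an
 * impossible producer index latches ST_ERROR, after which the device
 * is ignored.  Note the check needs only rsp_prod/rsp_cons, not the
 * response contents behind them. */
static int handle_responses(struct ring *r)
{
    if (r->connected != ST_CONNECTED)
        return 0;                       /* disabled for further use */
    if (r->rsp_prod - r->rsp_cons > RING_SIZE) {
        r->connected = ST_ERROR;        /* latch the error state */
        return -1;
    }
    /* ... read barrier, then process responses up to rsp_prod ... */
    r->rsp_cons = r->rsp_prod;
    return 0;
}
```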


--------------186FB5FB25262D3342EC3271--

--BxGqQncOtjZqp2zBgNsSAzrhnURnu5KtO--

--uFelNYe0WoxMBWJXMM8pj0dKp98OxKi7g
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCifE8FAwAAAAAACgkQsN6d1ii/Ey/s
3Qf9HTZOC5syO0wOHlDd2ImlJLkEvymVnZeq/cv6R/zVD85g8sRgRbKW9Kk5YYous1E6EfxSZnIY
kqZeqH3gRL+Tn3NAcn1fUHKUEhzYAHgpwzuUqm2vj9t43B83IatRzcmAn1zERvvGL6wXpkJnDc86
OVohuIaeecTCEbFzynnB7JpQIM5XGLaTmPUtwOGVDVEQzCRlIAl5qEPNkJ4OWiVUgfJ4DqaFdxil
fQSMnBI6l3V5Lr6Hbkejus0bfNJKZX1GDiry6HS4RFfKssw8JlXJLUaHGEcc6Go2bVK2qsB+Lt7P
BOMjevMUX8GBurmcCt74HCmhUUgY1L7RYv5YjxENvg==
=98zG
-----END PGP SIGNATURE-----

--uFelNYe0WoxMBWJXMM8pj0dKp98OxKi7g--


From xen-devel-bounces@lists.xenproject.org Mon May 17 14:24:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 14:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128333.240950 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lieAS-00087P-Jw; Mon, 17 May 2021 14:24:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128333.240950; Mon, 17 May 2021 14:24:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lieAS-00087I-Fz; Mon, 17 May 2021 14:24:24 +0000
Received: by outflank-mailman (input) for mailman id 128333;
 Mon, 17 May 2021 14:24:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X6rY=KM=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1lieAR-000878-6N
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 14:24:23 +0000
Received: from mail-ot1-x335.google.com (unknown [2607:f8b0:4864:20::335])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a30cd112-584b-4abd-9a00-caaa86fa6486;
 Mon, 17 May 2021 14:24:22 +0000 (UTC)
Received: by mail-ot1-x335.google.com with SMTP id
 d3-20020a9d29030000b029027e8019067fso5648315otb.13
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 07:24:22 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id 19sm2728380oiy.11.2021.05.17.07.24.21
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 07:24:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a30cd112-584b-4abd-9a00-caaa86fa6486
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=wP0qVRRFLaqKdNsvIebHqzMAQA0sHv3p78V+XQan4IE=;
        b=SIIetbEYExbcUM1QfmpytUTa2IVHHxjrX5L6JO2j1xRF6GNf1rzKAyM40K3Hi5hYwf
         REzXM3GmnmYKtfOC3JTzd+8Q5wzYuan+AmqRuwrgoXOPPqC9S/ccK8CLcBCIZT7ro76q
         vdQ4N0tLO/zXJP4b5wg1VMck/JcqcLbL+m0YijN1z9Wy4KH1jVjVXeFNlbqjhBuEZpFL
         clyJPWEn6AwkURWyVxA9BsFa3GCFHCy/9/3s1au+ksMHEydPpvbnXHqdH34qXED252Fr
         3ipPzVpKQiT02tYGuDwi4Puho9+BjNndIhyvsV44R5iSoKNE4kJ2qajjZsijf8TctP+u
         qB3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=wP0qVRRFLaqKdNsvIebHqzMAQA0sHv3p78V+XQan4IE=;
        b=isZmC00gTfYyKNMr9kV5oYYzpQN5Fpd72C2uSdSXvdNHLC8Q9XI4nIViE6vxe7pEXU
         vFLbULKGRTZ56hIaykeaLTAbty6GXHNxqErech54FWfMIZ/CzsdfD2pxMVZHWoUKXASz
         fLS68pq0K/TwE27FCL6nO0y9jrOGIz/RTy9coKke5acItfCnxzBDIzpmv7Mst8DrhgnA
         SYhTLsjIn5g3V9UgfkQQLq30AJZPAcRJo8GAYVAwGOOaJsObgAcdyQajl4VPWENoRtoW
         quVpRfDOWlhkjMXCV+c6YIm/g6wYvO65q+qHpGTEAJdlOoJElhOZKRYtQwXEQtzE7nfJ
         ZS6w==
X-Gm-Message-State: AOAM533wTzMGj6pVEZgVLrWbPanXPrYKgYfrV/5eMOVDGBa8yIum3Sod
	MvIa9khRwZ0x4MZicbAy7+0=
X-Google-Smtp-Source: ABdhPJxdOLV4H8+0nYsjoYe7S0t3PHTaJfdsTy6vFeUExppMn/WtX8UkSoPTI9j9rQzIApvCTHBZWA==
X-Received: by 2002:a9d:4e88:: with SMTP id v8mr28025140otk.110.1621261462130;
        Mon, 17 May 2021 07:24:22 -0700 (PDT)
Subject: Re: [PATCH v2 1/4] usb: early: Avoid using DbC if already enabled
To: Jan Beulich <jbeulich@suse.com>
Cc: Jann Horn <jannh@google.com>, Lee Jones <lee.jones@linaro.org>,
 Chunfeng Yun <chunfeng.yun@mediatek.com>, linux-usb@vger.kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
References: <cover.1620950220.git.connojdavis@gmail.com>
 <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
 <8ccce25a-e3ca-cb30-f6a3-f9243a85a49b@suse.com>
 <16400ee4-4406-8b26-10c0-a423b2b1fed0@gmail.com>
 <ddb58cbd-0a72-f680-80f4-ce09b13a2cee@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <55325db1-b086-fc81-9117-6560c4914a12@gmail.com>
Date: Mon, 17 May 2021 08:24:41 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <ddb58cbd-0a72-f680-80f4-ce09b13a2cee@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/17/21 8:13 AM, Jan Beulich wrote:
> On 17.05.2021 15:48, Connor Davis wrote:
>> On 5/17/21 3:32 AM, Jan Beulich wrote:
>>> On 14.05.2021 02:56, Connor Davis wrote:
>>>> Check if the debug capability is enabled in early_xdbc_parse_parameter,
>>>> and if it is, return with an error. This avoids collisions with whatever
>>>> enabled the DbC prior to linux starting.
>>> Doesn't this go too far and prevent use even if firmware (perhaps
>>> mistakenly) left it enabled?
>> Yes, but how is one supposed to distinguish the broken firmware and
>> non-broken firmware cases?
> Well, a first step might be to only check if running virtualized.
> And then if you're running virtualized, there might be a way to
> inquire the hypervisor?

Right, but if it was enabled by something other than a hypervisor, or
you're not running virtualized, how do you distinguish then? IMO the
proper thing to do in any case is to simply not use the DbC in Linux.

Thanks,

Connor
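The behaviour being discussed, backing off if the DbC is already enabled, comes down to a single status-bit test before claiming the hardware. The sketch below is stand-alone and hypothetical: `CTRL_DBC_ENABLE` and the bare-integer "register" are simplifications for illustration, not the real early-xhci-dbc register access path.

```c
#include <assert.h>

/* Hypothetical enable flag; the bit position is loosely modelled on the
 * xHCI DbC control register, but treat this layout as illustrative. */
#define CTRL_DBC_ENABLE (1u << 31)

/* Refuse to set up the DbC if some earlier agent (firmware, hypervisor,
 * ...) already enabled it, to avoid colliding with that user. */
static int dbc_may_claim(unsigned int ctrl_reg)
{
    if (ctrl_reg & CTRL_DBC_ENABLE)
        return -1;  /* already in use, back off */
    return 0;
}
```

As the thread notes, this check cannot tell a legitimate prior user from firmware that merely forgot to disable the capability; the conservative choice is to back off in both cases.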



From xen-devel-bounces@lists.xenproject.org Mon May 17 14:24:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 14:24:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128335.240961 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lieAq-0000C0-Tm; Mon, 17 May 2021 14:24:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128335.240961; Mon, 17 May 2021 14:24:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lieAq-0000Bs-PE; Mon, 17 May 2021 14:24:48 +0000
Received: by outflank-mailman (input) for mailman id 128335;
 Mon, 17 May 2021 14:24:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=frGc=KM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lieAp-0000Bd-Gs
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 14:24:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 031c8065-74a0-4f8a-8b55-850ccf68e2a7;
 Mon, 17 May 2021 14:24:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E0048B233;
 Mon, 17 May 2021 14:24:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 031c8065-74a0-4f8a-8b55-850ccf68e2a7
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621261486; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=psXfcclVTBaiBuEIRBHeyf5N3qXDyygm0+4A2JGIQ2I=;
	b=KFAlAk5kSj9kN5kOhQ5f5uuc1L1wcG6xJBafUPAuFXYrJkKsStZorfXq69wfCCssTNvTJh
	PaYfYH1w2jy9ZnCYJZcNLEMZqvG79fCPDvFgLurQo1JND9YfAbxks3S8HU3Ob+ucoAuMRn
	FYNE0pl+fX5WB776RmWaWZwPUDmaP8Y=
Subject: Re: [PATCH 5/8] xen/netfront: read response from backend only once
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-6-jgross@suse.com>
 <c9f90370-fc02-3f05-0670-35f795c59d95@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <132c33ae-52c9-532b-b0b4-b5952db09720@suse.com>
Date: Mon, 17 May 2021 16:24:45 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <c9f90370-fc02-3f05-0670-35f795c59d95@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="erLtx1Uu4AqeXpHOV2LdFAUXM77dGXQRf"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--erLtx1Uu4AqeXpHOV2LdFAUXM77dGXQRf
Content-Type: multipart/mixed; boundary="iIxXbN4RlBG5iLT9GjgqjHMfgfF7HsdnN";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
Message-ID: <132c33ae-52c9-532b-b0b4-b5952db09720@suse.com>
Subject: Re: [PATCH 5/8] xen/netfront: read response from backend only once
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-6-jgross@suse.com>
 <c9f90370-fc02-3f05-0670-35f795c59d95@suse.com>
In-Reply-To: <c9f90370-fc02-3f05-0670-35f795c59d95@suse.com>

--iIxXbN4RlBG5iLT9GjgqjHMfgfF7HsdnN
Content-Type: multipart/mixed;
 boundary="------------06866E4B8953044595525960"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------06866E4B8953044595525960
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.05.21 16:20, Jan Beulich wrote:
> On 13.05.2021 12:02, Juergen Gross wrote:
>> In order to avoid problems in case the backend is modifying a response
>> on the ring page while the frontend has already seen it, just read the
>> response into a local buffer in one go and then operate on that buffer
>> only.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> with one remark:
> 
>> @@ -830,24 +830,22 @@ static int xennet_get_extras(struct netfront_queue *queue,
>>   			break;
>>   		}
>>   
>> -		extra = (struct xen_netif_extra_info *)
>> -			RING_GET_RESPONSE(&queue->rx, ++cons);
>> +		RING_COPY_RESPONSE(&queue->rx, ++cons, &extra);
>>   
>> -		if (unlikely(!extra->type ||
>> -			     extra->type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
>> +		if (unlikely(!extra.type ||
>> +			     extra.type >= XEN_NETIF_EXTRA_TYPE_MAX)) {
>>   			if (net_ratelimit())
>>   				dev_warn(dev, "Invalid extra type: %d\n",
>> -					extra->type);
>> +					extra.type);
>>   			err = -EINVAL;
>>   		} else {
>> -			memcpy(&extras[extra->type - 1], extra,
>> -			       sizeof(*extra));
>> +			memcpy(&extras[extra.type - 1], &extra, sizeof(extra));
> 
> Maybe take the opportunity and switch to (type safe) structure
> assignment?

Yes, good idea.


Juergen
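The suggested switch from memcpy to structure assignment can be sketched outside the kernel; `struct extra_info` below is a simplified stand-in for `struct xen_netif_extra_info` (the real layout lives in `xen/interface/io/netif.h`), and both helpers are illustrative names.

```c
#include <string.h>

/* Simplified stand-in for struct xen_netif_extra_info. */
struct extra_info {
    unsigned char type;
    unsigned char flags;
    unsigned short data;
};

/* memcpy form, as in the patch under review: correct, but the compiler
 * cannot check that source and destination have the same type. */
static void store_extra_memcpy(struct extra_info *extras,
                               const struct extra_info *extra)
{
    memcpy(&extras[extra->type - 1], extra, sizeof(*extra));
}

/* Structure assignment, as suggested in the review: same copy, but the
 * compiler rejects mismatched operand types instead of copying blindly. */
static void store_extra_assign(struct extra_info *extras,
                               const struct extra_info *extra)
{
    extras[extra->type - 1] = *extra;
}
```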

--------------06866E4B8953044595525960
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------06866E4B8953044595525960--

--iIxXbN4RlBG5iLT9GjgqjHMfgfF7HsdnN--

--erLtx1Uu4AqeXpHOV2LdFAUXM77dGXQRf
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCifK0FAwAAAAAACgkQsN6d1ii/Ey/t
Owf/W6F4yLE4ar8JzbuqpMqSkwm/nom8yx9ADY79ZCrrpRZH3MMBlbmzarNCoGZ4+hQknSwyFHf9
XRLfQEFoRGt1EYJKhiwkT5jnqa0DXQmEphb8rYVlhVtt87OBs8xOBhvDR5qYa0lT9rNl6jon/h/R
k6eo3fQ6U3i0SsPghYxwxsOrUJ5T8PApQwn2kBC+VnzlOfdTByLw+fA84fxf+6WKqsRt/SDyr/4t
3FpdfKqgjI+5xyeaGY+KIz8tpBnAgk+8NzfGzV7mT1n++SQwaZtc4hLiTIyONAoFPWg8FxxcZ0iW
aOOpG0nJdWsqIT8w301IS8zVGqTgfBwGOGPcNKaWtw==
=vTDb
-----END PGP SIGNATURE-----

--erLtx1Uu4AqeXpHOV2LdFAUXM77dGXQRf--


From xen-devel-bounces@lists.xenproject.org Mon May 17 15:08:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 15:08:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128351.240972 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lieqw-0004Zs-EF; Mon, 17 May 2021 15:08:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128351.240972; Mon, 17 May 2021 15:08:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lieqw-0004Zl-9S; Mon, 17 May 2021 15:08:18 +0000
Received: by outflank-mailman (input) for mailman id 128351;
 Mon, 17 May 2021 15:08:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liequ-0004Zf-T9
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 15:08:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b4cd7299-4a29-4c85-9c88-4a6e90d2183e;
 Mon, 17 May 2021 15:08:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A90C7AF2C;
 Mon, 17 May 2021 15:08:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4cd7299-4a29-4c85-9c88-4a6e90d2183e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621264094; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PTgCAclxCK80C0ynDaeVVPOUMYjSdOogYauiOm/CJrI=;
	b=MArlY3MJXa+52P1t/iSJJeD/qNAhCuTAN0iZOJJ2q9uaYg9bu/wWt0bE9R1mPPeG64q4pu
	h0EqiSrA+2ZJmPbzFGcOWPfk6kIk5ibIXP/l5JbxO8ojYUQfP8wSDwQaVeWVO/GpFVPaug
	h+LDH9s3QLePU+fKgk4qMW3/659oFww=
Subject: Re: [PATCH 6/8] xen/netfront: don't read data from request on the
 ring page
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-7-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <72cb5265-aca2-8717-6860-19b66ccf9191@suse.com>
Date: Mon, 17 May 2021 17:08:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210513100302.22027-7-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.05.2021 12:03, Juergen Gross wrote:
> In order to avoid a malicious backend being able to influence the local
> processing of a request build the request locally first and then copy
> it to the ring page. Any reading from the request needs to be done on
> the local instance.

"Any reading" isn't really true - you don't change xennet_make_one_txreq(),
yet that has a read-modify-write operation. Without that I would have
been inclined to ask whether ...

> --- a/drivers/net/xen-netfront.c
> +++ b/drivers/net/xen-netfront.c
> @@ -435,7 +435,8 @@ struct xennet_gnttab_make_txreq {
>  	struct netfront_queue *queue;
>  	struct sk_buff *skb;
>  	struct page *page;
> -	struct xen_netif_tx_request *tx; /* Last request */
> +	struct xen_netif_tx_request *tx;      /* Last request on ring page */
> +	struct xen_netif_tx_request tx_local; /* Last request local copy */

... retaining the tx field here is a good idea.

> @@ -463,30 +464,27 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
>  	queue->grant_tx_page[id] = page;
>  	queue->grant_tx_ref[id] = ref;
>  
> -	tx->id = id;
> -	tx->gref = ref;
> -	tx->offset = offset;
> -	tx->size = len;
> -	tx->flags = 0;
> +	info->tx_local.id = id;
> +	info->tx_local.gref = ref;
> +	info->tx_local.offset = offset;
> +	info->tx_local.size = len;
> +	info->tx_local.flags = 0;
> +
> +	*tx = info->tx_local;
>  
>  	info->tx = tx;
> -	info->size += tx->size;
> +	info->size += info->tx_local.size;
>  }
>  
>  static struct xen_netif_tx_request *xennet_make_first_txreq(
> -	struct netfront_queue *queue, struct sk_buff *skb,
> -	struct page *page, unsigned int offset, unsigned int len)
> +	struct xennet_gnttab_make_txreq *info,
> +	unsigned int offset, unsigned int len)
>  {
> -	struct xennet_gnttab_make_txreq info = {
> -		.queue = queue,
> -		.skb = skb,
> -		.page = page,
> -		.size = 0,
> -	};
> +	info->size = 0;
>  
> -	gnttab_for_one_grant(page, offset, len, xennet_tx_setup_grant, &info);
> +	gnttab_for_one_grant(info->page, offset, len, xennet_tx_setup_grant, info);
>  
> -	return info.tx;
> +	return info->tx;
>  }

Similarly this returning of a pointer into the ring looks at least
risky to me. At the very least it looks as if ...

> @@ -704,14 +699,16 @@ static netdev_tx_t xennet_start_xmit(struct sk_buff *skb, struct net_device *dev
>  	}
>  
>  	/* First request for the linear area. */
> -	first_tx = tx = xennet_make_first_txreq(queue, skb,
> -						page, offset, len);
> +	info.queue = queue;
> +	info.skb = skb;
> +	info.page = page;
> +	first_tx = tx = xennet_make_first_txreq(&info, offset, len);

... you could avoid setting tx here; perhaps the local variable
could go away altogether, showing it's really just first_tx that
is still needed. It's odd that ...

>  	offset += tx->size;

... you don't change this one, when ...

>  	if (offset == PAGE_SIZE) {
>  		page++;
>  		offset = 0;
>  	}
> -	len -= tx->size;
> +	len -= info.tx_local.size;

... you do so here. Likely just an oversight.

Jan
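The build-locally-then-publish pattern the patch introduces can be sketched with a simplified stand-in for `struct xen_netif_tx_request`; the struct layout and the `make_txreq` helper below are illustrative, not the driver's actual code.

```c
/* Simplified stand-in for struct xen_netif_tx_request. */
struct tx_request {
    unsigned int id;
    unsigned int gref;
    unsigned int offset;
    unsigned int size;
    unsigned int flags;
};

/* Fill a local copy first, publish it to the shared ring slot in one
 * assignment, and keep reading only from the local copy afterwards, so
 * a backend scribbling on the ring cannot influence later processing. */
static void make_txreq(struct tx_request *ring_slot,
                       struct tx_request *local,
                       unsigned int id, unsigned int gref,
                       unsigned int offset, unsigned int size)
{
    local->id = id;
    local->gref = gref;
    local->offset = offset;
    local->size = size;
    local->flags = 0;

    *ring_slot = *local;  /* single write to shared memory */
}
```

The review comments above are about the places where the patch still reads through the pointer into the ring (`info->tx`, `tx->size`) instead of through the local copy.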


From xen-devel-bounces@lists.xenproject.org Mon May 17 15:12:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 15:12:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128356.240983 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lieul-0005wl-Tr; Mon, 17 May 2021 15:12:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128356.240983; Mon, 17 May 2021 15:12:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lieul-0005we-Qc; Mon, 17 May 2021 15:12:15 +0000
Received: by outflank-mailman (input) for mailman id 128356;
 Mon, 17 May 2021 15:12:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lieuk-0005wY-Pm
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 15:12:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a4b5c7b5-a4ae-4395-b74d-9ba8be501bf9;
 Mon, 17 May 2021 15:12:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 105D8AF75;
 Mon, 17 May 2021 15:12:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4b5c7b5-a4ae-4395-b74d-9ba8be501bf9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621264333; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ACzB0DWbQGMuhP7p+vCM4+g4XN+9TxlE9DnI7LNXZBs=;
	b=vDfEffzs87TIlyJUBoxJiFeoxiofBZyZFCKTYenvgXrKex0QVkTui63PUD/mFZrrvqzWov
	+AYG2+nff9AsadGKL2MUFXg09LY7mghXZS9EigOW0wWNu9PYsOIXbDPqKOVJ9fab+FOGoK
	NAJ/O7l2p8W6IrmSmmMzF+HEbCj8zcE=
Subject: Re: [PATCH 4/8] xen/blkfront: don't trust the backend response data
 blindly
To: Juergen Gross <jgross@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-5-jgross@suse.com>
 <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>
 <6095c4b9-a9bb-8a38-fb6c-a5483105b802@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a19a13ba-a386-2808-ad85-338d47085fa6@suse.com>
Date: Mon, 17 May 2021 17:12:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <6095c4b9-a9bb-8a38-fb6c-a5483105b802@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.05.2021 16:23, Juergen Gross wrote:
> On 17.05.21 16:11, Jan Beulich wrote:
>> On 13.05.2021 12:02, Juergen Gross wrote:
>>> @@ -1574,10 +1580,16 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>>>   	spin_lock_irqsave(&rinfo->ring_lock, flags);
>>>    again:
>>>   	rp = rinfo->ring.sring->rsp_prod;
>>> +	if (RING_RESPONSE_PROD_OVERFLOW(&rinfo->ring, rp)) {
>>> +		pr_alert("%s: illegal number of responses %u\n",
>>> +			 info->gd->disk_name, rp - rinfo->ring.rsp_cons);
>>> +		goto err;
>>> +	}
>>>   	rmb(); /* Ensure we see queued responses up to 'rp'. */
>>
>> I think you want to insert after the barrier.
> 
> Why? The relevant variable which is checked is "rp". The result of the
> check is in no way depending on the responses themselves. And any change
> of rsp_cons is protected by ring_lock, so there is no possibility of
> reading an old value here.

But this is a standard double read situation: You might check a value
and then (via a separate read) use a different one past the barrier.

Jan
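The double-read situation can be shown with a toy ring: read the shared producer index exactly once into a local, and derive every later value from that local copy. `toy_ring` and `fetch_resp_count` are illustrative names; the real code uses `RING_RESPONSE_PROD_OVERFLOW()` and an `rmb()` barrier.

```c
/* Toy shared ring state; in the real code rsp_prod lives on a page the
 * (untrusted) backend can rewrite at any time. */
struct toy_ring {
    unsigned int rsp_prod;   /* written by the backend */
    unsigned int rsp_cons;   /* private to the frontend */
    unsigned int size;
};

/* Safe pattern: one read of the shared index, validate that local copy,
 * use only the local copy afterwards. Checking one read and then using
 * a second, separate read is the double-read hazard described above. */
static int fetch_resp_count(const struct toy_ring *ring, unsigned int *count)
{
    unsigned int rp = ring->rsp_prod;   /* single read of shared memory */
    /* in kernel code an rmb() here orders the subsequent response reads */
    if (rp - ring->rsp_cons > ring->size)
        return -1;                      /* illegal number of responses */
    *count = rp - ring->rsp_cons;       /* derived from the same read */
    return 0;
}
```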


From xen-devel-bounces@lists.xenproject.org Mon May 17 15:22:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 15:22:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128363.240993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lif4o-0007Qe-Qd; Mon, 17 May 2021 15:22:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128363.240993; Mon, 17 May 2021 15:22:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lif4o-0007QX-Nk; Mon, 17 May 2021 15:22:38 +0000
Received: by outflank-mailman (input) for mailman id 128363;
 Mon, 17 May 2021 15:22:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=frGc=KM=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lif4n-0007QR-G7
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 15:22:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3cafce4e-539e-4159-861d-25cab62e2b47;
 Mon, 17 May 2021 15:22:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A8110B038;
 Mon, 17 May 2021 15:22:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cafce4e-539e-4159-861d-25cab62e2b47
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621264955; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Vlvma5SArPLioBtujsyqPd1/4ptzN56ZQ5HrU7Um1A8=;
	b=dsf5hCh9gw42Kv7RzGgxUZdbkPDWFfhjIVw09dnzrJ+ntMxGiQVufUCQ8y0QB2gshjHtRr
	A4S+4XihTXIXBERzAbfGrL5sLPwK+2VWy5Fz8qbR3yYofhjEzzYTQDht8KJ1z5YUSMgvuV
	0NQQsRFAJKBQsGcOuH/Q2V0IoLudI2k=
To: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-5-jgross@suse.com>
 <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>
 <6095c4b9-a9bb-8a38-fb6c-a5483105b802@suse.com>
 <a19a13ba-a386-2808-ad85-338d47085fa6@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH 4/8] xen/blkfront: don't trust the backend response data
 blindly
Message-ID: <030ef85e-b5af-f46e-c8dc-88b8d195c4e1@suse.com>
Date: Mon, 17 May 2021 17:22:34 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <a19a13ba-a386-2808-ad85-338d47085fa6@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="KaxRaOfMOTV3ONHVOcPibhwOQdgKQK4xW"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--KaxRaOfMOTV3ONHVOcPibhwOQdgKQK4xW
Content-Type: multipart/mixed; boundary="lSdrJgQotrkl2EvuPHMBD3tbYp2zzvjtI";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
Message-ID: <030ef85e-b5af-f46e-c8dc-88b8d195c4e1@suse.com>
Subject: Re: [PATCH 4/8] xen/blkfront: don't trust the backend response data
 blindly
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-5-jgross@suse.com>
 <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>
 <6095c4b9-a9bb-8a38-fb6c-a5483105b802@suse.com>
 <a19a13ba-a386-2808-ad85-338d47085fa6@suse.com>
In-Reply-To: <a19a13ba-a386-2808-ad85-338d47085fa6@suse.com>

--lSdrJgQotrkl2EvuPHMBD3tbYp2zzvjtI
Content-Type: multipart/mixed;
 boundary="------------8A9128D43577DDC6E826F9C7"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------8A9128D43577DDC6E826F9C7
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 17.05.21 17:12, Jan Beulich wrote:
> On 17.05.2021 16:23, Juergen Gross wrote:
>> On 17.05.21 16:11, Jan Beulich wrote:
>>> On 13.05.2021 12:02, Juergen Gross wrote:
>>>> @@ -1574,10 +1580,16 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>>>>    	spin_lock_irqsave(&rinfo->ring_lock, flags);
>>>>     again:
>>>>    	rp = rinfo->ring.sring->rsp_prod;
>>>> +	if (RING_RESPONSE_PROD_OVERFLOW(&rinfo->ring, rp)) {
>>>> +		pr_alert("%s: illegal number of responses %u\n",
>>>> +			 info->gd->disk_name, rp - rinfo->ring.rsp_cons);
>>>> +		goto err;
>>>> +	}
>>>>    	rmb(); /* Ensure we see queued responses up to 'rp'. */
>>>
>>> I think you want to insert after the barrier.
>>
>> Why? The relevant variable which is checked is "rp". The result of the
>> check is in no way depending on the responses themselves. And any change
>> of rsp_cons is protected by ring_lock, so there is no possibility of
>> reading an old value here.
>
> But this is a standard double read situation: You might check a value
> and then (via a separate read) use a different one past the barrier.

Yes and no.

rsp_cons should never be written by the other side, and additionally
it would be read multiple times anyway.

So if the other side is writing it, the write could always happen after
the test and before the loop is started. This is no real issue here as
the frontend would very soon stumble over an illegal response (either
no request pending, or some other inconsistency). The test is meant to
have a more detailed error message in case it hits.

In the end it doesn't really matter, so I can change it. I just wanted
to point out that IMO both variants are equally valid.


Juergen

--------------8A9128D43577DDC6E826F9C7--

--lSdrJgQotrkl2EvuPHMBD3tbYp2zzvjtI--

--KaxRaOfMOTV3ONHVOcPibhwOQdgKQK4xW
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCiijoFAwAAAAAACgkQsN6d1ii/Ey9a
Zwf/YmchUj1ZWTNFS/m8dlWkjN8r50Iyj1K9E+f0LDRvOs+mUJfszCsV6Wob1jVgcmBChKGuWmxb
cDv0pa2hNYaPqYhTHWZhoqXgxEkgPWp4J2ymhbzKf1/kMgbC/uJwR4pvITf1GiRJSIGkskYYc4+G
PhOXfnMVcU6YLSmgqOi4aJ2YU5nmSzRr7m8QY/aREcrN3pRJAPFkAAY/z6RhGOvS7dfw+SljoQdV
5NGJ3JuxzCPcpc8V1+/ECRDVTDbIl341rAOYOEG6it0/ueXKkIq12mBlY2HF9Gbzy8dDsETaKJ7o
Ztj55aATT1gFwIpnn93Eu5OZLMaPgWrF8CAbDqtqjg==
=ViMj
-----END PGP SIGNATURE-----

--KaxRaOfMOTV3ONHVOcPibhwOQdgKQK4xW--


From xen-devel-bounces@lists.xenproject.org Mon May 17 15:32:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 15:32:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128369.241005 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifDf-0000Vv-NH; Mon, 17 May 2021 15:31:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128369.241005; Mon, 17 May 2021 15:31:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifDf-0000Vo-K7; Mon, 17 May 2021 15:31:47 +0000
Received: by outflank-mailman (input) for mailman id 128369;
 Mon, 17 May 2021 15:31:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lifDe-0000Ve-8j
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 15:31:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ab36d020-7f0e-4330-8333-1a477c5f57b6;
 Mon, 17 May 2021 15:31:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4AAFDB038;
 Mon, 17 May 2021 15:31:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ab36d020-7f0e-4330-8333-1a477c5f57b6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621265504; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=D78x0RQykGxtuFVKDLK2tegONNqQIINNV1fp5HE+u1o=;
	b=UlpURxft8mCNp5WtrDPvtUpGOpEsRnDR0jJF0c7mh+vDsIm6ScJ7qTGjqT60UiVDQRJ6k0
	41KGiWtA9xdJD4l8MO7kltHMJ9fUX+Dq08VZR9dbQatXEsB9+Cf/xLGKQnaKCC2nOvEHGu
	Hwt42g0mrif5bw2UWDSVartMsW1ZQPg=
Subject: Re: [PATCH 7/8] xen/netfront: don't trust the backend response data
 blindly
To: Juergen Gross <jgross@suse.com>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 "David S. Miller" <davem@davemloft.net>, Jakub Kicinski <kuba@kernel.org>,
 xen-devel@lists.xenproject.org, netdev@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-8-jgross@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <18aa307e-edf0-cb8b-1fd2-2b5c89522d02@suse.com>
Date: Mon, 17 May 2021 17:31:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210513100302.22027-8-jgross@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 13.05.2021 12:03, Juergen Gross wrote:
> @@ -429,6 +453,12 @@ static void xennet_tx_buf_gc(struct netfront_queue *queue)
>  	} while (more_to_do);
>  
>  	xennet_maybe_wake_tx(queue);
> +
> +	return;
> +
> + err:
> +	queue->info->broken = true;
> +	dev_alert(dev, "Disabled for further use\n");
>  }

If in blkfront the ability to revive a device via a suspend/resume cycle
is "a nice side effect", wouldn't it be nice for all frontends to behave
similarly in this regard? I.e. wouldn't you want to also clear this flag
somewhere? And shouldn't additionally / more generally a disconnect /
connect cycle allow proper operation again?

> @@ -472,6 +502,13 @@ static void xennet_tx_setup_grant(unsigned long gfn, unsigned int offset,
>  
>  	*tx = info->tx_local;
>  
> +	/*
> +	 * The request is not in its final form, as size and flags might be
> +	 * modified later, but even if a malicious backend will send a response
> +	 * now, nothing bad regarding security could happen.
> +	 */
> +	queue->tx_pending[id] = true;

I'm not sure I can agree with what the comment says. If the backend
sent a response prematurely, wouldn't the underlying slot(s) become
available for re-use, and hence potentially get filled / updated by
two parties?

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 15:33:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 15:33:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128374.241016 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifFc-000172-3a; Mon, 17 May 2021 15:33:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128374.241016; Mon, 17 May 2021 15:33:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifFc-00016v-0D; Mon, 17 May 2021 15:33:48 +0000
Received: by outflank-mailman (input) for mailman id 128374;
 Mon, 17 May 2021 15:33:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lifFa-00016k-Ko
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 15:33:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b7629a21-e443-408d-8a59-159b9b53ed6b;
 Mon, 17 May 2021 15:33:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4AF1DB27A;
 Mon, 17 May 2021 15:33:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b7629a21-e443-408d-8a59-159b9b53ed6b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621265625; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=P6+oiXgUBbiaQ/XIdIi+J5lKOSw2bw8dPTfk4MM0uUE=;
	b=u0zDqmEJFdoLYXoTYowO0v/J27aISHzHszyWtgmZuKH+yd+ZSrOxGCOf0HZARqivSnMxJS
	NNX8CgyZW4gE/h3faxtUuyR6KAZVvVY+yYcSZ735UPBNopr+2siAxh+7xs/1wfEaQAM5nW
	iGm5n19sIIzLG48c6IV2RHhLcNXVzNw=
Subject: Re: [PATCH 4/8] xen/blkfront: don't trust the backend response data
 blindly
To: Juergen Gross <jgross@suse.com>
Cc: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Jens Axboe <axboe@kernel.dk>,
 xen-devel@lists.xenproject.org, linux-block@vger.kernel.org,
 linux-kernel@vger.kernel.org
References: <20210513100302.22027-1-jgross@suse.com>
 <20210513100302.22027-5-jgross@suse.com>
 <315ad8b9-8a98-8d3e-f66c-ab32af2731a8@suse.com>
 <6095c4b9-a9bb-8a38-fb6c-a5483105b802@suse.com>
 <a19a13ba-a386-2808-ad85-338d47085fa6@suse.com>
 <030ef85e-b5af-f46e-c8dc-88b8d195c4e1@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <477f01cd-8793-705c-10f9-cf0c0cd6ed84@suse.com>
Date: Mon, 17 May 2021 17:33:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <030ef85e-b5af-f46e-c8dc-88b8d195c4e1@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 17.05.2021 17:22, Juergen Gross wrote:
> On 17.05.21 17:12, Jan Beulich wrote:
>> On 17.05.2021 16:23, Juergen Gross wrote:
>>> On 17.05.21 16:11, Jan Beulich wrote:
>>>> On 13.05.2021 12:02, Juergen Gross wrote:
>>>>> @@ -1574,10 +1580,16 @@ static irqreturn_t blkif_interrupt(int irq, void *dev_id)
>>>>>    	spin_lock_irqsave(&rinfo->ring_lock, flags);
>>>>>     again:
>>>>>    	rp = rinfo->ring.sring->rsp_prod;
>>>>> +	if (RING_RESPONSE_PROD_OVERFLOW(&rinfo->ring, rp)) {
>>>>> +		pr_alert("%s: illegal number of responses %u\n",
>>>>> +			 info->gd->disk_name, rp - rinfo->ring.rsp_cons);
>>>>> +		goto err;
>>>>> +	}
>>>>>    	rmb(); /* Ensure we see queued responses up to 'rp'. */
>>>>
>>>> I think you want to insert after the barrier.
>>>
>>> Why? The relevant variable which is checked is "rp". The result of the
>>> check is in no way depending on the responses themselves. And any change
>>> of rsp_cons is protected by ring_lock, so there is no possibility of
>>> reading an old value here.
>>
>> But this is a standard double read situation: You might check a value
>> and then (via a separate read) use a different one past the barrier.
> 
> Yes and no.
> 
> rsp_cons should never be written by the other side, and additionally
> it would be read multiple times anyway.

But I'm talking about rsp_prod, as that's what rp gets loaded from.

Jan

> So if the other side is writing it, the write could always happen after
> the test and before the loop is started. This is no real issue here as
> the frontend would very soon stumble over an illegal response (either
> no request pending, or some other inconsistency). The test is meant to
> have a more detailed error message in case it hits.
> 
> In the end it doesn't really matter, so I can change it. I just wanted
> to point out that IMO both variants are equally valid.
> 
> 
> Juergen
> 



From xen-devel-bounces@lists.xenproject.org Mon May 17 15:42:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 15:42:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128381.241027 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifO7-0002bW-VX; Mon, 17 May 2021 15:42:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128381.241027; Mon, 17 May 2021 15:42:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifO7-0002bP-SK; Mon, 17 May 2021 15:42:35 +0000
Received: by outflank-mailman (input) for mailman id 128381;
 Mon, 17 May 2021 15:42:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lifO6-0002bJ-Nz
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 15:42:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lifO5-0006YV-R7; Mon, 17 May 2021 15:42:33 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lifO5-0008UT-KY; Mon, 17 May 2021 15:42:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zMfxWZHlyGd70oueurk5bT8CNJWsIFlwRgrweT2ly1c=; b=l4ZHCPQHoGoK6b1lraDGl1QmLh
	afNfc3XmsLiga+uAPHDEFeYYdGdxEr69IoyKJXINMPdtEvd+02t4s7xMCBMdszXA1pyE2z0Xghvn5
	7E1fnkZqkwHC8VwWAYsb/wOkKkFZlHCE+hGEy473pV0JjXSlcp7ZHo1cHW1mt2R8ByEg=;
Subject: Re: [PATCH v3 2/5] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Jan Beulich <jbeulich@suse.com>, Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
 <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <98b429d0-2673-624e-1690-9c0e8373ed5b@xen.org>
Date: Mon, 17 May 2021 16:42:30 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 17/05/2021 12:16, Jan Beulich wrote:
> On 14.05.2021 20:53, Connor Davis wrote:
>> --- a/xen/common/memory.c
>> +++ b/xen/common/memory.c
>> @@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>       p2m_type_t p2mt;
>>   #endif
>>       mfn_t mfn;
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       bool *dont_flush_p, dont_flush;
>> +#endif
>>       int rc;
>>   
>>   #ifdef CONFIG_X86
>> @@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>        * Since we're likely to free the page below, we need to suspend
>>        * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
>>        */
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
>>       dont_flush = *dont_flush_p;
>>       *dont_flush_p = false;
>> +#endif
>>   
>>       rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
>>   
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       *dont_flush_p = dont_flush;
>> +#endif
>>   
>>       /*
>>        * With the lack of an IOMMU on some platforms, domains with DMA-capable
>> @@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>       xatp->gpfn += start;
>>       xatp->size -= start;
>>   
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       if ( is_iommu_enabled(d) )
>>       {
>>          this_cpu(iommu_dont_flush_iotlb) = 1;
>>          extra.ppage = &pages[0];
>>       }
>> +#endif
>>   
>>       while ( xatp->size > done )
>>       {
>> @@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>           }
>>       }
>>   
>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>       if ( is_iommu_enabled(d) )
>>       {
>>           int ret;
>> @@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>           if ( unlikely(ret) && rc >= 0 )
>>               rc = ret;
>>       }
>> +#endif
>>   
>>       return rc;
>>   }
> 
> I wonder whether all of these wouldn't better become CONFIG_X86:
> ISTR Julien indicating that he doesn't see the override getting used
> on Arm. (Julien, please correct me if I'm misremembering.)

Right. So far, I haven't been in favor of introducing it because:
    1) The P2M code may free some memory, so you can't always skip the 
flush (and I think it is wrong for the upper layer to have to know when 
this can happen).
    2) It is unclear what happens if the IOMMU TLBs and the page tables 
contain different mappings (I have received conflicting advice).

So it is better to always flush, and as early as possible.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 15:43:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 15:43:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128387.241037 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifP7-0003EG-Bn; Mon, 17 May 2021 15:43:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128387.241037; Mon, 17 May 2021 15:43:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifP7-0003E9-8x; Mon, 17 May 2021 15:43:37 +0000
Received: by outflank-mailman (input) for mailman id 128387;
 Mon, 17 May 2021 15:43:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lifP5-0003E2-Lb
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 15:43:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6bd78708-c571-46ba-a4c2-d6598dfb4e74;
 Mon, 17 May 2021 15:43:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D663EB038;
 Mon, 17 May 2021 15:43:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6bd78708-c571-46ba-a4c2-d6598dfb4e74
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621266214; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=trpc5OXpqumYbEA+bx+DsO2tOUK31/Ah7ze3sn5dElo=;
	b=M6YZcIJyBPPHMXZ/1Xo18e4Hp+ADHXu8vFMR6oPayqEda/MV/MZS2YrJQ/24RqsVs92b/t
	DZqbKbStY5C2cRUXKk+kQp6MbRmnA9MbYjPSWSh/yqCaxMaRq2krTXNh8FYjcVktT053vk
	H/+Ep0N6JmdGZMgItbY1z049kbWg6lc=
Subject: Re: [PATCH v4 03/10] libx86: introduce helper to fetch msr entry
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
References: <20210507110422.24608-1-roger.pau@citrix.com>
 <20210507110422.24608-4-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <035cc783-6083-f141-d4a3-db7a6adc36f5@suse.com>
Date: Mon, 17 May 2021 17:43:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210507110422.24608-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 07.05.2021 13:04, Roger Pau Monne wrote:
> @@ -91,6 +91,21 @@ int x86_msr_copy_from_buffer(struct msr_policy *policy,
>                               const msr_entry_buffer_t msrs, uint32_t nr_entries,
>                               uint32_t *err_msr);
>  
> +/**
> + * Get a MSR entry from a policy object.
> + *
> + * @param policy      The msr_policy object.
> + * @param idx         The index.
> + * @returns a pointer to the requested leaf or NULL in case of error.
> + *
> + * Do not call this function directly and instead use x86_msr_get_entry that
> + * will deal with both const and non-const policies returning a pointer with
> + * constness matching that of the input.
> + */
> +const uint64_t *_x86_msr_get_entry(const struct msr_policy *policy,
> +                                   uint32_t idx);
> +#define x86_msr_get_entry(p, i) \
> +    ((__typeof__(&(p)->platform_info.raw))_x86_msr_get_entry(p, i))
>  #endif /* !XEN_LIB_X86_MSR_H */

Just two nits: I think it would be nice to retain a blank line ahead of
the #endif. And here, as well as in the CPUID counterpart, you introduce,
strictly speaking, name-space violations (via the leading underscore).
I realize I'm not well liked for pointing such things out, but it may be
more relevant here than in pure hypervisor code, as this library code
is supposed to be usable in userland as well.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 15:55:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 15:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128396.241049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifay-0004pe-H9; Mon, 17 May 2021 15:55:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128396.241049; Mon, 17 May 2021 15:55:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifay-0004pX-E5; Mon, 17 May 2021 15:55:52 +0000
Received: by outflank-mailman (input) for mailman id 128396;
 Mon, 17 May 2021 15:55:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lifaw-0004pR-Uk
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 15:55:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lifav-0006mg-CB; Mon, 17 May 2021 15:55:49 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lifav-00012J-5m; Mon, 17 May 2021 15:55:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=GyY41iX1vlf77gKulqztQA2fMaPkG60hufFzfz6A6Us=; b=cPrAZCOexrieixFlPtCSM7qExG
	4KIkh31uPc1V/9ENHbC//2+5Spclicy6sY8uGsF+f1a+L4OkGf6ZLdpr1M0iznkyVT1UHUMVbW8rk
	5vIuTp+MFOhDbVI1K52rCvjzPJFRAHzQQC6W2mGdG6OTGggrPE6KQIZ3J/UrZsaYzf2o=;
Subject: Re: [PATCH] tools/xenstore: claim resources when running as daemon
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210514084133.18658-1-jgross@suse.com>
 <1e38cce0-6960-ac21-b349-dac8551e23ed@xen.org>
 <fe5f1e6a-1a89-ea12-feb5-318f25d4281f@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <39860a0c-5ac5-2537-532f-6ce288cc7219@xen.org>
Date: Mon, 17 May 2021 16:55:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <fe5f1e6a-1a89-ea12-feb5-318f25d4281f@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 17/05/2021 07:47, Juergen Gross wrote:
> On 14.05.21 22:19, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 14/05/2021 09:41, Juergen Gross wrote:
>>> Xenstored is absolutely mandatory for a Xen host and it can't be
>>> restarted, so being killed by OOM-killer in case of memory shortage is
>>> to be avoided.
>>>
>>> Set /proc/$pid/oom_score_adj (if available) to -500 in order to allow
>>> xenstored to use large amounts of memory without being killed.
>>>
>>> In order to support large numbers of domains the limit for open file
>>> descriptors might need to be raised. Each domain needs 2 file
>>> descriptors (one for the event channel and one for the xl per-domain
>>> daemon to connect to xenstored).
>>
>> Hmmm... AFAICT there is only one file descriptor to handle all the 
>> event channels. Could you point out the code showing one event FD per 
>> domain?
> 
> I let myself be fooled by just counting the file descriptors used with
> one or two domains active.
> 
> So you are right that all event channels only use one fd, but each xl
> daemon will use two (which should be fixed, IMO). And thinking more
> about it, it is even worse: each qemu process will require at least one
> additional fd.
> 
>>
>>>
>>> Try to raise ulimit for open files to 65536. First the hard limit if
>>> needed, and then the soft limit.
>>
>> I am not sure it is right to impose this limit on everyone. For
>> instance, an admin may know that there will be no more than 100
>> domains on their system.
> 
> Is setting a higher limit really a problem?

I am quite uneasy about setting a limit that nearly nobody will reach
unless something has gone horribly wrong on the system.

> 
>> So the admin should be able to configure them. At this point, I think
>> the two limits should be set by the initscript rather than by
>> xenstored itself.
> 
> But the admin would need to know the Xen internals for selecting the
> correct limits. In the end I'd be fine with moving this modification to
> the script starting Xenstore (which would be launch-xenstore), but the
> configuration item should be "max number of domains to support".

I would be fine with "max number of domains to support". What I care about
most here is that the limits are actually applied most (if not all) of the time.
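
A minimal POSIX sketch of the scheme the patch describes (raise the hard
limit first if needed, which requires privilege, then the soft limit). The
function name, the 65536-independent `want` parameter, and the loud failure
on error are my additions for illustration, not the patch as posted:

```c
#include <assert.h>
#include <sys/resource.h>

/* Raise RLIMIT_NOFILE to at least 'want'; return 0 on success, -1 on
 * failure so the caller can fail early instead of continuing silently. */
static int raise_nofile_limit(rlim_t want)
{
    struct rlimit lim;

    if (getrlimit(RLIMIT_NOFILE, &lim))
        return -1;
    if (lim.rlim_cur >= want)
        return 0;                       /* nothing to do */
    if (lim.rlim_max < want) {
        lim.rlim_max = want;            /* hard limit first; needs privilege */
        if (setrlimit(RLIMIT_NOFILE, &lim))
            return -1;
    }
    lim.rlim_cur = want;                /* then the soft limit */
    return setrlimit(RLIMIT_NOFILE, &lim);
}
```

Keeping the call in the startup script (as discussed above) rather than in
xenstored itself would let the admin derive `want` from a "max number of
domains" setting.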

> 
>>
>> This would also avoid the problem where Xenstored is not allowed to 
>> modify its limit (see more below).
>>
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>>   tools/xenstore/xenstored_core.c   |  2 ++
>>>   tools/xenstore/xenstored_core.h   |  3 ++
>>>   tools/xenstore/xenstored_minios.c |  4 +++
>>>   tools/xenstore/xenstored_posix.c  | 46 +++++++++++++++++++++++++++++++
>>>   4 files changed, 55 insertions(+)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c 
>>> b/tools/xenstore/xenstored_core.c
>>> index b66d119a98..964e693450 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -2243,6 +2243,8 @@ int main(int argc, char *argv[])
>>>           xprintf = trace;
>>>   #endif
>>> +    claim_resources();
>>> +
>>>       signal(SIGHUP, trigger_reopen_log);
>>>       if (tracefile)
>>>           tracefile = talloc_strdup(NULL, tracefile);
>>> diff --git a/tools/xenstore/xenstored_core.h 
>>> b/tools/xenstore/xenstored_core.h
>>> index 1467270476..ac26973648 100644
>>> --- a/tools/xenstore/xenstored_core.h
>>> +++ b/tools/xenstore/xenstored_core.h
>>> @@ -255,6 +255,9 @@ void daemonize(void);
>>>   /* Close stdin/stdout/stderr to complete daemonize */
>>>   void finish_daemonize(void);
>>> +/* Set OOM-killer score and raise ulimit. */
>>> +void claim_resources(void);
>>> +
>>>   /* Open a pipe for signal handling */
>>>   void init_pipe(int reopen_log_pipe[2]);
>>> diff --git a/tools/xenstore/xenstored_minios.c 
>>> b/tools/xenstore/xenstored_minios.c
>>> index c94493e52a..df8ff580b0 100644
>>> --- a/tools/xenstore/xenstored_minios.c
>>> +++ b/tools/xenstore/xenstored_minios.c
>>> @@ -32,6 +32,10 @@ void finish_daemonize(void)
>>>   {
>>>   }
>>> +void claim_resources(void)
>>> +{
>>> +}
>>> +
>>>   void init_pipe(int reopen_log_pipe[2])
>>>   {
>>>       reopen_log_pipe[0] = -1;
>>> diff --git a/tools/xenstore/xenstored_posix.c 
>>> b/tools/xenstore/xenstored_posix.c
>>> index 48c37ffe3e..0074fbd8b2 100644
>>> --- a/tools/xenstore/xenstored_posix.c
>>> +++ b/tools/xenstore/xenstored_posix.c
>>> @@ -22,6 +22,7 @@
>>>   #include <fcntl.h>
>>>   #include <stdlib.h>
>>>   #include <sys/mman.h>
>>> +#include <sys/resource.h>
>>>   #include "utils.h"
>>>   #include "xenstored_core.h"
>>> @@ -87,6 +88,51 @@ void finish_daemonize(void)
>>>       close(devnull);
>>>   }
>>> +static void avoid_oom_killer(void)
>>> +{
>>> +    char path[32];
>>> +    char val[] = "-500";
>>> +    int fd;
>>> +
>>> +    snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", 
>>> (int)getpid());
>>
>> This looks Linux specific. How about other OSes?
> 
> I don't know whether other OSes have an OOM killer, and if they do, how
> to configure it. It is a best effort attempt, after all.

I have CCed Roger who should be able to help for FreeBSD.

> 
>>
>>> +
>>> +    fd = open(path, O_WRONLY);
>>> +    /* Do nothing if file doesn't exist. */
>>
>> Your commit message leads one to think that we *must* configure the
>> OOM killer, and that if we can't, we should not continue. But here,
>> this suggests it is optional. In fact...
> 
> I can modify the commit message by adding a "Try to".
> 
>>
>>> +    if (fd < 0)
>>> +        return;
>>> +    /* Ignore errors. */
>>> +    write(fd, val, sizeof(val));
>>
>> ... xenstored may not be allowed to modify its own parameters. So this
>> would continue silently, without the admin necessarily knowing the
>> limit wasn't applied.
> 
> I can add a line in the Xenstore log in this regard.

This feels wrong to me. If a limit cannot be applied then it should fail
early rather than possibly at the wrong moment a few days (or months) later.
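
A sketch of the "fail early" variant argued for here (Linux-specific; the
error handling is the point of the example and deliberately differs from
the posted patch, which ignores errors): a missing knob is tolerated, but
an existing knob that cannot be written is reported to the caller.

```c
#include <assert.h>
#include <errno.h>
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Return 0 on success or when /proc/<pid>/oom_score_adj does not exist
 * (non-Linux), -1 when it exists but could not be written (e.g. EACCES),
 * so the caller can abort at startup instead of dying to the OOM killer
 * months later. */
static int set_oom_score_adj(const char *val)
{
    char path[32];
    ssize_t len = (ssize_t)strlen(val);
    int fd, ret = 0;

    snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", (int)getpid());
    fd = open(path, O_WRONLY);
    if (fd < 0)
        return errno == ENOENT ? 0 : -1;   /* no such knob: not an error */
    if (write(fd, val, len) != len)
        ret = -1;                          /* knob exists but write failed */
    close(fd);
    return ret;
}
```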

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 16:03:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 16:03:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128403.241060 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifiY-0006v1-Bc; Mon, 17 May 2021 16:03:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128403.241060; Mon, 17 May 2021 16:03:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifiY-0006uu-8K; Mon, 17 May 2021 16:03:42 +0000
Received: by outflank-mailman (input) for mailman id 128403;
 Mon, 17 May 2021 16:03:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lifiX-0006uo-DE
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 16:03:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lifiV-0007S0-Bu; Mon, 17 May 2021 16:03:39 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lifiV-0001jf-4w; Mon, 17 May 2021 16:03:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=XetJgpg7ogIfiyXCh6Kea/AMzHx5/dYOss+zsrZSEQo=; b=woqgGPCON0u3UrwFmvvxSdW+vI
	3w/IXyB5wehdN5ZWTKmyYxQeLOvzcw0CpV93b+rqLRbAdyNKfkMgRbOz9UorgOnJ69wztXNys7sBC
	AqMBsUYqY5qaGQWN9tUUQxZLcnCCzFfPFcaug3DgtIGS40bbjDc6R31qUS8d51w2jxA4=;
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Jan Beulich <jbeulich@suse.com>, Michal Orzel <michal.orzel@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com, xen-devel@lists.xenproject.org
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
 <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
 <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
 <54e845e1-f283-d70c-a0c2-73e768e5a56e@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b8a14892-0290-3aff-c4b5-6d363b884db7@xen.org>
Date: Mon, 17 May 2021 17:03:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <54e845e1-f283-d70c-a0c2-73e768e5a56e@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 17/05/2021 08:01, Jan Beulich wrote:
> On 12.05.2021 19:59, Julien Grall wrote:
>> Hi,
>>
>> On 11/05/2021 07:37, Michal Orzel wrote:
>>> On 05.05.2021 10:00, Jan Beulich wrote:
>>>> On 05.05.2021 09:43, Michal Orzel wrote:
>>>>> --- a/xen/include/public/arch-arm.h
>>>>> +++ b/xen/include/public/arch-arm.h
>>>>> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>>>>>    
>>>>>        /* Return address and mode */
>>>>>        __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
>>>>> -    uint32_t cpsr;                              /* SPSR_EL2 */
>>>>> +    uint64_t cpsr;                              /* SPSR_EL2 */
>>>>>    
>>>>>        union {
>>>>> -        uint32_t spsr_el1;       /* AArch64 */
>>>>> +        uint64_t spsr_el1;       /* AArch64 */
>>>>>            uint32_t spsr_svc;       /* AArch32 */
>>>>>        };
>>>>
>>>> This change affects, besides domctl, also default_initialise_vcpu(),
>>>> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
>>>> only allows two unrelated VCPUOP_* to pass, but then I don't
>>>> understand why arch_initialise_vcpu() doesn't simply return e.g.
>>>> -EOPNOTSUPP. Hence I suspect I'm missing something.
>>
>> I think it is just an oversight from the review of the following commit:
>>
>> commit 192df6f9122ddebc21d0a632c10da3453aeee1c2
>> Author: Roger Pau Monné <roger.pau@citrix.com>
>> Date:   Tue Dec 15 14:12:32 2015 +0100
>>
>>       x86: allow HVM guests to use hypercalls to bring up vCPUs
>>
>>       Allow the usage of the VCPUOP_initialise, VCPUOP_up, VCPUOP_down,
>>       VCPUOP_is_up, VCPUOP_get_physid and VCPUOP_send_nmi hypercalls from HVM
>>       guests.
>>
>>       This patch introduces a new structure (vcpu_hvm_context) that
>> should be used
>>       in conjuction with the VCPUOP_initialise hypercall in order to
>> initialize
>>       vCPUs for HVM guests.
>>
>>       Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>       Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>       Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>       Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>
>> On Arm, the structure vcpu_guest_context is not exposed outside of Xen
>> and the tools. Interestingly, vcpu_guest_core_regs is, but it should
>> only be used within vcpu_guest_context.
>>
>> So, as this is not used by the stable ABI, it is fine to break it.
>>
>>>>
>>> I agree that do_arm_vcpu_op only allows two VCPUOP* to pass, and that
>>> calling arch_initialise_vcpu in the case of VCPUOP_initialise makes
>>> no sense, as VCPUOP_initialise is not supported on Arm.
>>> It makes sense that it should return -EOPNOTSUPP.
>>> However, do_arm_vcpu_op will not accept VCPUOP_initialise and will return
>>> -EINVAL, so arch_initialise_vcpu will not be called on Arm.
>>> Do you think that changing this behaviour so that arch_initialise_vcpu returns
>>> -EOPNOTSUPP should be part of this patch?
>>
>> I think this change is unrelated. So it should be handled in a follow-up
>> patch.
> 
> My only difference in viewing this is that I'd say the adjustment
> would better be a prereq patch to this one, such that the one here
> ends up being more obviously correct.

The function is already unreachable, so I felt it was unfair to require
the clean-up as a prerequisite for merging this code.

> Also, if the function is
> indeed not meant to be reachable, besides making it return
> -EOPNOTSUPP (or alike) it should probably also have
> ASSERT_UNREACHABLE() added.

+1 on the idea.
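
The shape of the agreed follow-up can be sketched as below. This is
illustrative only, not Xen's code: ASSERT_UNREACHABLE() is stubbed out so
the example is self-contained, whereas in the hypervisor it fires an
assertion in debug builds.

```c
#include <assert.h>
#include <errno.h>

/* Stand-in for Xen's ASSERT_UNREACHABLE(); a no-op here so the sketch
 * compiles on its own. */
#define ASSERT_UNREACHABLE() ((void)0)

/* A handler that is not supposed to be reachable flags itself and
 * returns a distinct "not supported" error instead of -EINVAL. */
static int arch_initialise_vcpu_stub(void)
{
    ASSERT_UNREACHABLE();
    return -EOPNOTSUPP;
}
```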

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 16:04:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 16:04:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128407.241071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifjQ-0007Ue-LM; Mon, 17 May 2021 16:04:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128407.241071; Mon, 17 May 2021 16:04:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifjQ-0007UX-IC; Mon, 17 May 2021 16:04:36 +0000
Received: by outflank-mailman (input) for mailman id 128407;
 Mon, 17 May 2021 16:04:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FIJu=KM=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lifjP-0007UK-8T
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 16:04:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 59a491ef-dc0d-4fe9-8afe-f8fdb702ceb0;
 Mon, 17 May 2021 16:04:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 91CBDAE93;
 Mon, 17 May 2021 16:04:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59a491ef-dc0d-4fe9-8afe-f8fdb702ceb0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621267473; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=p2udeP0s/0Phpi1/aYfpqyEl1NJS8a8IWCFR+P5vLWA=;
	b=EOEyGU8fjmXayF7AlDefARVbsi+00SC4LzmSFjXYqqEtw9bmtfghNkj3oJRDXek1SWiMnW
	4wHJFHcWNeUTo1gnYJ+I9Q9sueezBVTYBw8zHSFScPQha9tX0m9Y+YMXwyFs8PR/30UXHW
	VMM3qqq51yBvD1NBxKgEHmeOHBTJYFU=
Subject: Re: [PATCH v5 1/2] xen/pci: Refactor PCI MSI intercept related code
To: Rahul Singh <rahul.singh@arm.com>
Cc: bertrand.marquis@arm.com, Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1620661205.git.rahul.singh@arm.com>
 <73305c5cb5b805618ec24405c1848d240a786a89.1620661205.git.rahul.singh@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1c049229-8a5e-3100-f27d-c8eaa4914a55@suse.com>
Date: Mon, 17 May 2021 18:04:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <73305c5cb5b805618ec24405c1848d240a786a89.1620661205.git.rahul.singh@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 10.05.2021 17:47, Rahul Singh wrote:
> --- /dev/null
> +++ b/xen/drivers/passthrough/msi-intercept.c
> @@ -0,0 +1,55 @@
> +/*
> + * Copyright (C) 2008,  Netronome Systems, Inc.
> + *
> + * This program is free software; you can redistribute it and/or modify it
> + * under the terms and conditions of the GNU General Public License,
> + * version 2, as published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope it will be useful, but WITHOUT
> + * ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or
> + * FITNESS FOR A PARTICULAR PURPOSE.  See the GNU General Public License for
> + * more details.
> + *
> + * You should have received a copy of the GNU General Public License along with
> + * this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/init.h>
> +#include <xen/pci.h>
> +#include <asm/msi.h>
> +#include <asm/hvm/io.h>
> +
> +int pdev_msix_assign(struct domain *d, struct pci_dev *pdev)
> +{
> +    int rc;
> +
> +    if ( pdev->msix )
> +    {
> +        rc = pci_reset_msix_state(pdev);
> +        if ( rc )
> +            return rc;
> +        msixtbl_init(d);
> +    }
> +
> +    return 0;
> +}
> +
> +void pdev_dump_msi(const struct pci_dev *pdev)
> +{
> +    const struct msi_desc *msi;
> +
> +    printk("- MSIs < ");
> +    list_for_each_entry ( msi, &pdev->msi_list, list )
> +        printk("%d ", msi->irq);
> +    printk(">");
> +}

I guess I didn't look closely enough in earlier versions: I don't
see how either of the two functions is intercept related. If they
need separating out, they want to go into a file named msi.c (and,
as said before on other similar changes in the following patch,
not be keyed to HAS_PCI_MSI_INTERCEPT).

I'm sorry for noticing this only now.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 17 16:20:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 16:20:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128415.241086 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifz4-0001QP-1E; Mon, 17 May 2021 16:20:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128415.241086; Mon, 17 May 2021 16:20:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lifz3-0001QI-Th; Mon, 17 May 2021 16:20:45 +0000
Received: by outflank-mailman (input) for mailman id 128415;
 Mon, 17 May 2021 16:20:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lifz2-0001Q7-Sj; Mon, 17 May 2021 16:20:44 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lifz2-0007lK-Ni; Mon, 17 May 2021 16:20:44 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lifz2-0002hQ-AJ; Mon, 17 May 2021 16:20:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lifz2-0006Ly-9m; Mon, 17 May 2021 16:20:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pKSQfsMUgDKjBZx3C19DoI5EAL4o/lZFtYvQWY3qUp8=; b=M9cUvZ4GEJjGnTuWkI+0CbUqft
	q4SjHqkJBeMOu9MEDwkIQVLs9Ek/3vBpJd2PSg2NN3NjVlvCYyhZcR3rtRXsa+F8B9atMHqv2wwHr
	W6yA5tDPgcTkNHUnPK8K0lHBJLMvWyY3Z3Y0NV9emAg/0oBitoUpVtHhzlUjAZ3zBvR8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161976-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161976: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=6005ee07c380cbde44292f5f6c96e7daa70f4f7d
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 16:20:44 +0000

flight 161976 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161976/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                6005ee07c380cbde44292f5f6c96e7daa70f4f7d
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  270 days
Failing since        152659  2020-08-21 14:07:39 Z  269 days  493 attempts
Testing same since   161971  2021-05-16 21:09:32 Z    0 days    2 attempts

------------------------------------------------------------
502 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 152903 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 17 16:56:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 16:56:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128427.241103 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ligXT-00050H-2i; Mon, 17 May 2021 16:56:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128427.241103; Mon, 17 May 2021 16:56:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ligXS-00050A-Vh; Mon, 17 May 2021 16:56:18 +0000
Received: by outflank-mailman (input) for mailman id 128427;
 Mon, 17 May 2021 16:56:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ligXS-000500-2u; Mon, 17 May 2021 16:56:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ligXR-0008Mj-Se; Mon, 17 May 2021 16:56:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ligXR-0003XN-JX; Mon, 17 May 2021 16:56:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ligXR-0000Tf-J4; Mon, 17 May 2021 16:56:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=zMgjh6YpABgnxvknzfTY8OQV1Z7wc7pwB2h6b7d7xvw=; b=sIs6eY3XlId25l/Tbh2FAykwta
	0QolTQVX+vzwlwJY5ESj178zQyMw2BU/o/FT2UOlc+j0D+hyA6Di1cXeTDwkouay0J+krU/xvXuzq
	As2UC/NknONwJMOFAsWfWnCr0Uy1qPxtYaisgmlpOuRNTDeeK15+vOG5un7OfpuUWcJw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161980-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161980: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=71a25d03b70b399d666d05a3d0046d821248c80e
X-Osstest-Versions-That:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 16:56:17 +0000

flight 161980 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161980/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  71a25d03b70b399d666d05a3d0046d821248c80e
baseline version:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34

Last test of basis   161937  2021-05-13 18:01:32 Z    3 days
Testing same since   161980  2021-05-17 14:02:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cb199cc7de..71a25d03b7  71a25d03b70b399d666d05a3d0046d821248c80e -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 17 17:36:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 17:36:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128434.241117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihAE-0000xQ-7d; Mon, 17 May 2021 17:36:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128434.241117; Mon, 17 May 2021 17:36:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihAE-0000xJ-3w; Mon, 17 May 2021 17:36:22 +0000
Received: by outflank-mailman (input) for mailman id 128434;
 Mon, 17 May 2021 17:36:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lihAC-0000x8-RF; Mon, 17 May 2021 17:36:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lihAC-0000av-Ke; Mon, 17 May 2021 17:36:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lihAC-000586-Ca; Mon, 17 May 2021 17:36:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lihAC-0004nR-C7; Mon, 17 May 2021 17:36:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=SJSSs1DxLASH8X3r16XZ5xp/emnhb4l75L9+ruwEx4o=; b=COR1BeNRimN6hqn2/ZVtZAqzCP
	8fIZDVwttsd7mZ9Mv518N9o8FF54hCa+qoh9jz9ajJhBC2Jk2wcetbjCun3uESTlUqyDeiNgDxPn3
	vYPRwWMed0Uk06YseVRqYtJ8eGmMgt796ZY8W2qaUTWGatStuRYUVsCjZ3HOPliYjwHY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161979-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161979: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=d2e0c473e6f0d518e8acb187f99bb7e61f7df862
X-Osstest-Versions-That:
    ovmf=e0cb5e1814a67bb12dd476a72d1698350633bcbb
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 17:36:20 +0000

flight 161979 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161979/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 d2e0c473e6f0d518e8acb187f99bb7e61f7df862
baseline version:
 ovmf                 e0cb5e1814a67bb12dd476a72d1698350633bcbb

Last test of basis   161974  2021-05-17 02:40:16 Z    0 days
Testing same since   161979  2021-05-17 11:10:07 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Chen, Christine <Yuwei.Chen@intel.com>
  Daniel Schaefer <daniel.schaefer@hpe.com>
  Yuwei Chen <yuwei.chen@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e0cb5e1814..d2e0c473e6  d2e0c473e6f0d518e8acb187f99bb7e61f7df862 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 17 17:54:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 17:54:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128441.241132 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihRy-0003Ip-RB; Mon, 17 May 2021 17:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128441.241132; Mon, 17 May 2021 17:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihRy-0003Ii-NY; Mon, 17 May 2021 17:54:42 +0000
Received: by outflank-mailman (input) for mailman id 128441;
 Mon, 17 May 2021 17:54:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lihRx-0003Ic-AN
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 17:54:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihRw-0000sq-AC; Mon, 17 May 2021 17:54:40 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihRw-0007Po-43; Mon, 17 May 2021 17:54:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=dUUzb2Zr6mZEpVhQ+uzkc3u1SAW4enauwTXK1MYXImg=; b=NfAu8w99hMTWjfQvc0fjptwze0
	Yggjya6Ol97XcL9Sy21MzJE1D1NGKuQK7jgZXrbeu6p6vsHvkVrvCcuyiRpfYQJXrIGSNXdcc5Ak4
	0ie8oYzKv+sfmnv88gBTW2oaQJk4+Ool2yXuECglpM1CK66L/498QxBHCil8hWpvldzY=;
Subject: Re: [PATCH v2 2/2] tools/xenstore: simplify xenstored main loop
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210514115620.32731-1-jgross@suse.com>
 <20210514115620.32731-3-jgross@suse.com>
 <24e89076-4440-a32e-f701-71957cc2a9e4@xen.org>
 <12b13143-717b-c288-b96b-50613dafc6d3@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <3cf89fa9-9b04-5f5e-7190-8ca2a2b01c92@xen.org>
Date: Mon, 17 May 2021 18:54:38 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <12b13143-717b-c288-b96b-50613dafc6d3@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 17/05/2021 07:10, Juergen Gross wrote:
> On 14.05.21 19:05, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 14/05/2021 12:56, Juergen Gross wrote:
>>> The main loop of xenstored is rather complicated due to different
>>> handling of socket and ring-page interfaces. Unify that handling by
>>> introducing interface type specific functions can_read() and
>>> can_write().
>>>
>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>> ---
>>> V2:
>>> - split off function vector introduction (Julien Grall)
>>> ---
>>>   tools/xenstore/xenstored_core.c   | 77 +++++++++++++++----------------
>>>   tools/xenstore/xenstored_core.h   |  2 +
>>>   tools/xenstore/xenstored_domain.c |  2 +
>>>   3 files changed, 41 insertions(+), 40 deletions(-)
>>>
>>> diff --git a/tools/xenstore/xenstored_core.c 
>>> b/tools/xenstore/xenstored_core.c
>>> index 856f518075..883a1a582a 100644
>>> --- a/tools/xenstore/xenstored_core.c
>>> +++ b/tools/xenstore/xenstored_core.c
>>> @@ -1659,9 +1659,34 @@ static int readfd(struct connection *conn, 
>>> void *data, unsigned int len)
>>>       return rc;
>>>   }
>>> +static bool socket_can_process(struct connection *conn, int mask)
>>> +{
>>> +    if (conn->pollfd_idx == -1)
>>> +        return false;
>>> +
>>> +    if (fds[conn->pollfd_idx].revents & ~(POLLIN | POLLOUT)) {
>>> +        talloc_free(conn);
>>> +        return false;
>>> +    }
>>> +
>>> +    return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
>>> +}
>>> +
>>> +static bool socket_can_write(struct connection *conn)
>>> +{
>>> +    return socket_can_process(conn, POLLOUT);
>>> +}
>>> +
>>> +static bool socket_can_read(struct connection *conn)
>>> +{
>>> +    return socket_can_process(conn, POLLIN);
>>> +}
>>> +
>>>   const struct interface_funcs socket_funcs = {
>>>       .write = writefd,
>>>       .read = readfd,
>>> +    .can_write = socket_can_write,
>>> +    .can_read = socket_can_read,
>>>   };
>>>   static void accept_connection(int sock)
>>> @@ -2296,47 +2321,19 @@ int main(int argc, char *argv[])
>>>               if (&next->list != &connections)
>>>                   talloc_increase_ref_count(next);
>>> -            if (conn->domain) {
>>> -                if (domain_can_read(conn))
>>> -                    handle_input(conn);
>>> -                if (talloc_free(conn) == 0)
>>> -                    continue;
>>> -
>>> -                talloc_increase_ref_count(conn);
>>> -                if (domain_can_write(conn) &&
>>> -                    !list_empty(&conn->out_list))
>>
>> AFAICT, the check "!list_empty(&conn->out_list)" can be safely removed 
>> because write_messages() will check if the list is empty (list_top() 
>> returns NULL in this case). Is that correct?
> 
> Yes.

Thanks, how about adding in the commit message:

"Take the opportunity to remove the empty list check before calling 
write_messages() because the function is already able to cope with an 
empty list."

I can update the commit message while committing it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 17:57:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 17:57:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128446.241143 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihUa-0003vD-7m; Mon, 17 May 2021 17:57:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128446.241143; Mon, 17 May 2021 17:57:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihUa-0003v6-4m; Mon, 17 May 2021 17:57:24 +0000
Received: by outflank-mailman (input) for mailman id 128446;
 Mon, 17 May 2021 17:57:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lihUY-0003v0-UL
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 17:57:22 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihUX-0000xF-Br; Mon, 17 May 2021 17:57:21 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihUX-0007jg-5u; Mon, 17 May 2021 17:57:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=0BpIULNvqppXfJ8kFTXCLdLrLKmy7mOmn4MW6g2NJ98=; b=Sz1qizjxXbeUxtWnHiwmXSECi7
	Yb5qmMdVeRfUZvEQxMfHby1fPFgOFt4Ea8/FdSZ52xe+Q1qQcq061PIDhdk316cxY2SQlj8vjfzrc
	5EAHoMXhjzSuW5XLOvrs1SqgRglmvR+gbfEcpxoVSnampEVbu72pr7wePrvYKv5c0bRw=;
Subject: Re: [PATCH] tools/xenstore: cleanup Makefile and gitignore
From: Julien Grall <julien@xen.org>
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210514090116.21002-1-jgross@suse.com>
 <a67f922a-935e-2b8b-dde6-2362ca3371c3@xen.org>
Message-ID: <e9fb6fad-f8b5-7566-5c5c-ec79ba146530@xen.org>
Date: Mon, 17 May 2021 18:57:19 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <a67f922a-935e-2b8b-dde6-2362ca3371c3@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 14/05/2021 18:06, Julien Grall wrote:
> Hi Juergen,
> 
> On 14/05/2021 10:01, Juergen Gross wrote:
>> The Makefile of xenstore and related to that the global .gitignore
>> file contain some leftovers from ancient times. Remove those.
>>
>> While at it sort the tools/xenstore/* entries in .gitignore.
>>
>> Signed-off-by: Juergen Gross <jgross@suse.com>
> 
> Acked-by: Julien Grall <jgrall@amazon.com>

Committed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 18:12:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 18:12:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128454.241157 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihjG-0006Jp-CN; Mon, 17 May 2021 18:12:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128454.241157; Mon, 17 May 2021 18:12:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihjG-0006Ji-9J; Mon, 17 May 2021 18:12:34 +0000
Received: by outflank-mailman (input) for mailman id 128454;
 Mon, 17 May 2021 18:12:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lihjF-0006Jc-7Z
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 18:12:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihjB-0001Ib-LY; Mon, 17 May 2021 18:12:29 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihjB-0000d7-Bg; Mon, 17 May 2021 18:12:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=1UCfJABfTb6rfYAygapJEkqrS8gUWVMJiYUEUif62Ww=; b=rOp3S6lX7OF9+TUx1iyKJNxiH5
	/xE57QzzJc2YFt8tdSYrdpkPD8mQQAa1LDBjPl7RisaEHqcgNQ1F3ADkQ4jdgqQrTVMdjk+a9xQn4
	bB+LzOZzTad5HEbgNhEnS32i5gamOOXKSxCzpjDBwughLhjzU+VBGmxsDPj9XZn3UBqk=;
Subject: Re: [PATCH v3 3/5] tools/libs/foreignmemory: Fix PAGE_SIZE
 redefinition error
To: Costin Lupu <costin.lupu@cs.pub.ro>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
 <2040286fc39b7e1d46376a8e75ac59d8d3be5aff.1620633386.git.costin.lupu@cs.pub.ro>
From: Julien Grall <julien@xen.org>
Message-ID: <690806fb-e6e2-f61f-d7d6-a17efa6329d9@xen.org>
Date: Mon, 17 May 2021 19:12:27 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <2040286fc39b7e1d46376a8e75ac59d8d3be5aff.1620633386.git.costin.lupu@cs.pub.ro>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Costin,

On 10/05/2021 09:35, Costin Lupu wrote:
> @@ -168,7 +168,7 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
>       size_t i;
>       int rc;
>   
> -    addr = mmap(addr, num << PAGE_SHIFT, prot, flags | MAP_SHARED,
> +    addr = mmap(addr, num << XC_PAGE_SHIFT, prot, flags | MAP_SHARED,
>                   fd, 0);
>       if ( addr == MAP_FAILED )
>           return NULL;
> @@ -198,9 +198,10 @@ void *osdep_xenforeignmemory_map(xenforeignmemory_handle *fmem,
>            */
>           privcmd_mmapbatch_t ioctlx;
>           xen_pfn_t *pfn;
> -        unsigned int pfn_arr_size = ROUNDUP((num * sizeof(*pfn)), PAGE_SHIFT);
> +        unsigned int pfn_arr_size = ROUNDUP((num * sizeof(*pfn)), XC_PAGE_SHIFT);
> +        int os_page_size = getpagesize();

Hmmm... Sorry I only realized now that the manpage suggests that 
getpagesize() is legacy:

    SVr4, 4.4BSD, SUSv2.  In SUSv2 the getpagesize() call is labeled
    LEGACY, and in POSIX.1-2001 it has been dropped; HP-UX does not
    have this call.

And then:

    Portable applications should employ sysconf(_SC_PAGESIZE) instead of 
getpagesize():

            #include <unistd.h>
            long sz = sysconf(_SC_PAGESIZE);

As this code is only used on Linux, it is not clear to me whether this 
is important. Ian, what do you think?

>   
> -        if ( pfn_arr_size <= PAGE_SIZE )
> +        if ( pfn_arr_size <= os_page_size )

Your commit message suggests this is only a s/PAGE_SHIFT/XC_PAGE_SHIFT/ 
change, but this hunk also switches to getpagesize(). I agree it should 
be using the OS page size. However, this should be clarified in the 
commit message.

The rest of the patch looks fine to me.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 18:16:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 18:16:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128459.241168 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihmf-00070D-Rq; Mon, 17 May 2021 18:16:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128459.241168; Mon, 17 May 2021 18:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihmf-000706-Ow; Mon, 17 May 2021 18:16:05 +0000
Received: by outflank-mailman (input) for mailman id 128459;
 Mon, 17 May 2021 18:16:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lihme-0006zz-Rm
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 18:16:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihmd-0001NY-Km; Mon, 17 May 2021 18:16:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihmd-0000oG-FJ; Mon, 17 May 2021 18:16:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ZuiPfTS7JrllEztYBNswa8Awq4uj96P9deDzQaJ70xQ=; b=H1OgAjPjw2a6F6HSOCjev6/ih2
	iE3A2yHOkrIBjyRb6+siGDaVHxRhX0iCZbqNfgX7XUJvxGAtu/vbXc7Vq1sVqLidqOhi6KeI6FeuU
	n7eaKsrqRiatIdHo25oCCYIFKacLP7RclThnpAW0u8/HReajIQjkSLo16i23PUJpFgxo=;
Subject: Re: [PATCH v3 4/5] tools/libs/gnttab: Fix PAGE_SIZE redefinition
 error
To: Costin Lupu <costin.lupu@cs.pub.ro>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
 <b1e87eb24dfde3b1f11c5a14a4298531b4a308ad.1620633386.git.costin.lupu@cs.pub.ro>
From: Julien Grall <julien@xen.org>
Message-ID: <5603464e-2ef5-9358-d039-cfb1f93340d3@xen.org>
Date: Mon, 17 May 2021 19:16:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <b1e87eb24dfde3b1f11c5a14a4298531b4a308ad.1620633386.git.costin.lupu@cs.pub.ro>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Costin,

On 10/05/2021 09:35, Costin Lupu wrote:
> If PAGE_SIZE is already defined in the system (e.g. in /usr/include/limits.h
> header) then gcc will trigger a redefinition error because of -Werror. This
> patch replaces usage of PAGE_* macros with XC_PAGE_* macros in order to avoid
> confusion between control domain page granularity (PAGE_* definitions) and
> guest domain page granularity (which is what we are dealing with here).
> 
> Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
> ---
>   tools/libs/gnttab/freebsd.c | 28 +++++++++++++---------------
>   tools/libs/gnttab/linux.c   | 28 +++++++++++++---------------
>   tools/libs/gnttab/netbsd.c  | 23 ++++++++++-------------
>   3 files changed, 36 insertions(+), 43 deletions(-)
> 
> diff --git a/tools/libs/gnttab/freebsd.c b/tools/libs/gnttab/freebsd.c
> index 768af701c6..e42ac3fbf3 100644
> --- a/tools/libs/gnttab/freebsd.c
> +++ b/tools/libs/gnttab/freebsd.c
> @@ -30,14 +30,11 @@
>   
>   #include <xen/sys/gntdev.h>
>   
> +#include <xenctrl.h>
>   #include <xen-tools/libs.h>
>   
>   #include "private.h"
>   
> -#define PAGE_SHIFT           12
> -#define PAGE_SIZE            (1UL << PAGE_SHIFT)
> -#define PAGE_MASK            (~(PAGE_SIZE-1))
> -
>   #define DEVXEN "/dev/xen/gntdev"
>   
>   int osdep_gnttab_open(xengnttab_handle *xgt)
> @@ -77,10 +74,11 @@ void *osdep_gnttab_grant_map(xengnttab_handle *xgt,
>       int domids_stride;
>       unsigned int refs_size = ROUNDUP(count *
>                                        sizeof(struct ioctl_gntdev_grant_ref),
> -                                     PAGE_SHIFT);
> +                                     XC_PAGE_SHIFT);
> +    int os_page_size = getpagesize();

Same remark as for patch #3. This at least wants to be explained in the 
commit message.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 18:16:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 18:16:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128460.241179 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihn3-0007Uu-49; Mon, 17 May 2021 18:16:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128460.241179; Mon, 17 May 2021 18:16:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihn3-0007Un-1A; Mon, 17 May 2021 18:16:29 +0000
Received: by outflank-mailman (input) for mailman id 128460;
 Mon, 17 May 2021 18:16:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lihn1-0007TL-Ld; Mon, 17 May 2021 18:16:27 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lihn1-0001Nr-FY; Mon, 17 May 2021 18:16:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lihn1-00077U-7y; Mon, 17 May 2021 18:16:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lihn1-0006Qw-7V; Mon, 17 May 2021 18:16:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AgHSIh8aLOcw1O2rS5LsnIZC/tGIqDgcQfnI1p5zDaE=; b=kJzdMqlqXKEGNaMZH27XMLXkzc
	tPv1rbpU6ndtcJ9JamtyH1iBUC7/q7mia/cItSLF7A5Tw7RafccEiLOzZCj3vubJHt2HCopjsHJz8
	AoiWoPrhefFYgg8qaeoQoiDBYqI33Bddj+lqK+kgwUJDNTs9cYqmCDEmVmkZtZK5Bl70=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161977-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161977: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d07f6ca923ea0927a1024dfccafc5b53b61cfecc
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 18:16:27 +0000

flight 161977 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161977/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start    fail in 161972 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start      fail pass in 161972
 test-arm64-arm64-xl-credit1  13 debian-fixup               fail pass in 161972
 test-arm64-arm64-libvirt-xsm 13 debian-fixup               fail pass in 161972

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 161972 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 161972 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d07f6ca923ea0927a1024dfccafc5b53b61cfecc
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  289 days
Failing since        152366  2020-08-01 20:49:34 Z  288 days  486 attempts
Testing same since   161972  2021-05-17 00:12:03 Z    0 days    2 attempts

------------------------------------------------------------
6063 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1645314 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 17 18:17:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 18:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128470.241194 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihna-0008DK-Ko; Mon, 17 May 2021 18:17:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128470.241194; Mon, 17 May 2021 18:17:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lihna-0008DC-Hh; Mon, 17 May 2021 18:17:02 +0000
Received: by outflank-mailman (input) for mailman id 128470;
 Mon, 17 May 2021 18:17:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lihnY-0008Co-Gm
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 18:17:00 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihnU-0001O5-MN; Mon, 17 May 2021 18:16:56 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lihnU-0000rj-G4; Mon, 17 May 2021 18:16:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=SyUBnMh+Uf9qXIh1WsVbdqWxPUPoylyUGyRQWe/+SIQ=; b=29Fk9DDQ8SmZDeU1fVVsvumPZM
	h37HtRKo1HRGltprZz8EzDlqSyBjn57fPgVduBc4G6+ZxxNiFT1CuvXY4rNDh8RNVdV7eShbdRxAA
	VUQ7yY1/E2j69CGd1rCzPmY/nqYGO77vAVDLPL4RW3Orec9zm9Dzh8W3pj12gZviUGfg=;
Subject: Re: [PATCH v3 5/5] tools/ocaml: Fix redefinition errors
To: Costin Lupu <costin.lupu@cs.pub.ro>, xen-devel@lists.xenproject.org
Cc: Christian Lindig <christian.lindig@citrix.com>,
 David Scott <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
 <50763a92df0c58ed0d7749717b7ff5e2a039a4dd.1620633386.git.costin.lupu@cs.pub.ro>
From: Julien Grall <julien@xen.org>
Message-ID: <bb15b60d-ebbb-d0c1-9b95-605cf5ae5a41@xen.org>
Date: Mon, 17 May 2021 19:16:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <50763a92df0c58ed0d7749717b7ff5e2a039a4dd.1620633386.git.costin.lupu@cs.pub.ro>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Costin,

On 10/05/2021 09:35, Costin Lupu wrote:
> If PAGE_SIZE is already defined in the system (e.g. in /usr/include/limits.h
> header) then gcc will trigger a redefinition error because of -Werror. This
> patch replaces usage of PAGE_* macros with XC_PAGE_* macros in order to avoid
> confusion between control domain page granularity (PAGE_* definitions) and
> guest domain page granularity (which is what we are dealing with here).
> 
> The same issue applies to redefinitions of the Val_none and Some_val macros,
> which may already be defined in the OCaml system headers (e.g.
> /usr/lib/ocaml/caml/mlvalues.h).
> 
> Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>

Reviewed-by: Julien Grall <jgrall@amazon.com>

Cheers,

> ---
>   tools/ocaml/libs/xc/xenctrl_stubs.c            | 10 ++++------
>   tools/ocaml/libs/xentoollog/xentoollog_stubs.c |  4 ++++
>   tools/ocaml/libs/xl/xenlight_stubs.c           |  4 ++++
>   3 files changed, 12 insertions(+), 6 deletions(-)
> 
> diff --git a/tools/ocaml/libs/xc/xenctrl_stubs.c b/tools/ocaml/libs/xc/xenctrl_stubs.c
> index d05d7bb30e..f9e33e599a 100644
> --- a/tools/ocaml/libs/xc/xenctrl_stubs.c
> +++ b/tools/ocaml/libs/xc/xenctrl_stubs.c
> @@ -36,14 +36,12 @@
>   
>   #include "mmap_stubs.h"
>   
> -#define PAGE_SHIFT		12
> -#define PAGE_SIZE               (1UL << PAGE_SHIFT)
> -#define PAGE_MASK               (~(PAGE_SIZE-1))
> -
>   #define _H(__h) ((xc_interface *)(__h))
>   #define _D(__d) ((uint32_t)Int_val(__d))
>   
> +#ifndef Val_none
>   #define Val_none (Val_int(0))
> +#endif
>   
>   #define string_of_option_array(array, index) \
>   	((Field(array, index) == Val_none) ? NULL : String_val(Field(Field(array, index), 0)))
> @@ -818,7 +816,7 @@ CAMLprim value stub_xc_domain_memory_increase_reservation(value xch,
>   	CAMLparam3(xch, domid, mem_kb);
>   	int retval;
>   
> -	unsigned long nr_extents = ((unsigned long)(Int64_val(mem_kb))) >> (PAGE_SHIFT - 10);
> +	unsigned long nr_extents = ((unsigned long)(Int64_val(mem_kb))) >> (XC_PAGE_SHIFT - 10);
>   
>   	uint32_t c_domid = _D(domid);
>   	caml_enter_blocking_section();
> @@ -924,7 +922,7 @@ CAMLprim value stub_pages_to_kib(value pages)
>   {
>   	CAMLparam1(pages);
>   
> -	CAMLreturn(caml_copy_int64(Int64_val(pages) << (PAGE_SHIFT - 10)));
> +	CAMLreturn(caml_copy_int64(Int64_val(pages) << (XC_PAGE_SHIFT - 10)));
>   }
>   
>   
> diff --git a/tools/ocaml/libs/xentoollog/xentoollog_stubs.c b/tools/ocaml/libs/xentoollog/xentoollog_stubs.c
> index bf64b211c2..e4306a0c2f 100644
> --- a/tools/ocaml/libs/xentoollog/xentoollog_stubs.c
> +++ b/tools/ocaml/libs/xentoollog/xentoollog_stubs.c
> @@ -53,8 +53,12 @@ static char * dup_String_val(value s)
>   #include "_xtl_levels.inc"
>   
>   /* Option type support as per http://www.linux-nantes.org/~fmonnier/ocaml/ocaml-wrapping-c.php */
> +#ifndef Val_none
>   #define Val_none Val_int(0)
> +#endif
> +#ifndef Some_val
>   #define Some_val(v) Field(v,0)
> +#endif
>   
>   static value Val_some(value v)
>   {
> diff --git a/tools/ocaml/libs/xl/xenlight_stubs.c b/tools/ocaml/libs/xl/xenlight_stubs.c
> index 352a00134d..45b8af61c7 100644
> --- a/tools/ocaml/libs/xl/xenlight_stubs.c
> +++ b/tools/ocaml/libs/xl/xenlight_stubs.c
> @@ -227,8 +227,12 @@ static value Val_string_list(libxl_string_list *c_val)
>   }
>   
>   /* Option type support as per http://www.linux-nantes.org/~fmonnier/ocaml/ocaml-wrapping-c.php */
> +#ifndef Val_none
>   #define Val_none Val_int(0)
> +#endif
> +#ifndef Some_val
>   #define Some_val(v) Field(v,0)
> +#endif
>   
>   static value Val_some(value v)
>   {
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 17 18:42:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 18:42:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128482.241205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liiBi-00030G-Kn; Mon, 17 May 2021 18:41:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128482.241205; Mon, 17 May 2021 18:41:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liiBi-000309-Ha; Mon, 17 May 2021 18:41:58 +0000
Received: by outflank-mailman (input) for mailman id 128482;
 Mon, 17 May 2021 18:41:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=1CF0=KM=gmail.com=wei.liu.xen@srs-us1.protection.inumbo.net>)
 id 1liiBh-000303-9U
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 18:41:57 +0000
Received: from mail-wm1-f54.google.com (unknown [209.85.128.54])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5393c8c7-48b4-4c7e-91d4-078aed8fd2b4;
 Mon, 17 May 2021 18:41:56 +0000 (UTC)
Received: by mail-wm1-f54.google.com with SMTP id
 b19-20020a05600c06d3b029014258a636e8so156881wmn.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 11:41:56 -0700 (PDT)
Received: from liuwe-devbox-debian-v2 ([51.145.34.42])
 by smtp.gmail.com with ESMTPSA id h15sm1691655wmq.1.2021.05.17.11.41.54
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 17 May 2021 11:41:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5393c8c7-48b4-4c7e-91d4-078aed8fd2b4
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:in-reply-to;
        bh=EuPClXwtkqqvCFK9WnOJFo34QC+yCQRQ0OVjAS1R/20=;
        b=l6Mk+ErcVmP4YLOgYQt7YOkg0+HJvo+UzZKhqvHqlM7DaNZX2+nfouUGbelGbIQ/rH
         JUlWd7gc7u4YTHnF5IAMutJNBX0/Ek8jjPsz7mbGyfMubTjU5VqoUcoTgsLsihhmyNi6
         ctIKDKd9s6FuNbzYVLK/ySys4r76xtov6ll9vpmLaG0qXyyOkpymm1v2WKLYlS+ftih1
         npo4qd1Gyl6oUo7ZhZxRjLQBzD5pJKjvG9uVMrc5l0jLy3iq8KE7v31eYxK/LLiW4iqd
         F2LAmiGqVkk0RxSAcOpYVrKLBS0a+yFkA8frhsYfkUgUYXrltOw0mZ3CP0b2fxG2hDWK
         m6KA==
X-Gm-Message-State: AOAM532aAXDZxqJqWQNc+wNp9qUdcgLTOKf5WWZ/ZQXzN+UoXcADxK9U
	KTEwVw3w5b2f6mgq+V2YBxI=
X-Google-Smtp-Source: ABdhPJwqYvchC/R5Vm+kenSBq3TwNsoSvuZ9oZMyqpos07ojN/oMekcNvVgZMuB8ywXnkrhIkv8plA==
X-Received: by 2002:a05:600c:4e86:: with SMTP id f6mr469628wmq.104.1621276915852;
        Mon, 17 May 2021 11:41:55 -0700 (PDT)
Date: Mon, 17 May 2021 18:41:53 +0000
From: Wei Liu <wl@xen.org>
To: Julien Grall <julien@xen.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Dario Faggioli <dfaggioli@suse.com>, Tim Deegan <tim@xen.org>,
	Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
Subject: Re: PING Re: [PATCH 00/14] Use const whether we point to literal
 strings (take 1)
Message-ID: <20210517184153.wwj4ek4bkmelpuia@liuwe-devbox-debian-v2>
References: <20210405155713.29754-1-julien@xen.org>
 <05eaa910-7630-d1e4-4b70-3008d672fe5c@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <05eaa910-7630-d1e4-4b70-3008d672fe5c@xen.org>

On Mon, May 10, 2021 at 06:49:01PM +0100, Julien Grall wrote:
> Hi,
> 
> Ian, Wei, Anthony, can I get some feedbacks on the tools side?

I think this is moving in the right direction, so

Acked-by: Wei Liu <wl@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon May 17 20:25:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 20:25:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128490.241215 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lijnv-00047F-HJ; Mon, 17 May 2021 20:25:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128490.241215; Mon, 17 May 2021 20:25:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lijnv-000478-EJ; Mon, 17 May 2021 20:25:31 +0000
Received: by outflank-mailman (input) for mailman id 128490;
 Mon, 17 May 2021 20:25:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lijnt-00046t-Re; Mon, 17 May 2021 20:25:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lijnt-0003eY-O9; Mon, 17 May 2021 20:25:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lijnt-0004Lz-Bk; Mon, 17 May 2021 20:25:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lijnt-0003uK-B0; Mon, 17 May 2021 20:25:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vMupC1Zk01vcLw+37x3sbX+tAlmpMXylLwY4pEx9Tq0=; b=Rj1WR/+yoMfNHP7uAaMxxUUEQF
	qyiNDgNt3HMDr+J3zJAecpDQ8XE1Cz5P2TIU85lbSqQuMlOgouhqyXOcV2DlKomysyAIEFyiYeF8z
	xMKuIbkDPyHH9yWPCwKTrZVEHy/q36l31q5knHnSkbysiwL7KwP7T/OokN42WYBwkVkg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161982-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161982: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=27eb6833134d0f3ddfb02d09055776e837e9a747
X-Osstest-Versions-That:
    xen=71a25d03b70b399d666d05a3d0046d821248c80e
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 20:25:29 +0000

flight 161982 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161982/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  27eb6833134d0f3ddfb02d09055776e837e9a747
baseline version:
 xen                  71a25d03b70b399d666d05a3d0046d821248c80e

Last test of basis   161980  2021-05-17 14:02:34 Z    0 days
Testing same since   161982  2021-05-17 17:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   71a25d03b7..27eb683313  27eb6833134d0f3ddfb02d09055776e837e9a747 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 17 21:44:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 21:44:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128497.241230 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lil20-00036d-EO; Mon, 17 May 2021 21:44:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128497.241230; Mon, 17 May 2021 21:44:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lil20-00036W-Ak; Mon, 17 May 2021 21:44:08 +0000
Received: by outflank-mailman (input) for mailman id 128497;
 Mon, 17 May 2021 21:44:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qOHf=KM=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lil1z-00036C-Bt
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 21:44:07 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8a70d150-cfb9-4a78-b535-896bcb79b6a4;
 Mon, 17 May 2021 21:44:05 +0000 (UTC)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id E9AD25C0172;
 Mon, 17 May 2021 17:44:04 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute1.internal (MEProxy); Mon, 17 May 2021 17:44:04 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Mon, 17 May 2021 17:44:03 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8a70d150-cfb9-4a78-b535-896bcb79b6a4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=tlZBfa
	KRDABNllg2+fYxJA26vcxtyQM/CjtTLSQKZrM=; b=wmVUTuSrwBIVD7qZwtTXXJ
	fjslq3lomOz3PPW4idFTqChNCAD87/t0a8guBGZnKurQc77OcSekraUKD/2nZKmP
	/kS5rJ6OJofq8LznRv1uwrQU7eNq2BC4KjX6Zv09rcCy8TsSWzIvcSA6fyzyNgVt
	nZKMmk0oGACUfpzw/NApWIMcAjwkF0Mm4VZSK9roLWcnB50chVO+Z5k5jfGqtJoX
	lhwz3v4rp0PB+h/c47vuh1urr0UIwJznczuHg4kJ+73RWDMebDpTKUG8FY42Jw+E
	XVvjkJTmrqRBQT24hBz4TyCmR1FaJqBb+wezpVyd5W0Wx4mI/dmTNc2aqHoG2ejg
	==
X-ME-Sender: <xms:pOOiYChedNEkEWou1mpGuigpM6k-D_vi2AQRjwdmiMfsc6NAcSTJGA>
    <xme:pOOiYDAj-n8vVld6WLCqWAVnM0EBkurCK43h5pCfSGCX_hd0JUUJSXPIhoyTDTzzO
    zshQKdBMiLLUw>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdeihedgudeigecutefuodetggdotefrod
    ftvfcurfhrohhfihhlvgemucfhrghsthforghilhdpqfgfvfdpuffrtefokffrpgfnqfgh
    necuuegrihhlohhuthemuceftddtnecusecvtfgvtghiphhivghnthhsucdlqddutddtmd
    enucfjughrpeffhffvuffkfhggtggujgesghdtreertddtjeenucfhrhhomhepofgrrhgv
    khcuofgrrhgtiiihkhhofihskhhiqdfikphrvggtkhhiuceomhgrrhhmrghrvghksehinh
    hvihhsihgslhgvthhhihhnghhslhgrsgdrtghomheqnecuggftrfgrthhtvghrnhepkeeg
    tdfgvdeihefhhedtvdelieeiueetveehteffjeejjedvieejvefhueeffeegnecuffhomh
    grihhnpehgihhthhhusgdrtghomhenucfkphepledurdeijedrjeelrdegnecuvehluhhs
    thgvrhfuihiivgeptdenucfrrghrrghmpehmrghilhhfrhhomhepmhgrrhhmrghrvghkse
    hinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtghomh
X-ME-Proxy: <xmx:pOOiYKFzvkC1uenmtT0zvUAm6JqNUmLYitaTxurgJ-h_vlBy92rGqw>
    <xmx:pOOiYLRNSSjS09hu-plzFv6O2-M3bz4-3_-RaKUkuEcbTzSlF_1cww>
    <xmx:pOOiYPyROfmJjGKeG6WqB_ho4GsWglGJWBVhUqjKEQH68PCVBxZ1EQ>
    <xmx:pOOiYKqQIZ5VBnBjYPQGZ71ZiEB1dPVT6FU0xS9Q_IrC41cgvZZPZA>
Date: Mon, 17 May 2021 23:43:59 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Michael Brown <mbrown@fensystems.co.uk>, "paul@xen.org" <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YKLjoALdw4oKSZ04@mail-itl>
References: <20210413152512.903750-1-mbrown@fensystems.co.uk>
 <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
 <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
 <YJpfORXIgEaWlQ7E@mail-itl>
 <YJpgNvOmDL9SuRye@mail-itl>
 <9edd6873034f474baafd70b1df693001@EX13D32EUC003.ant.amazon.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="k7Ovt1DhdVgIdtca"
Content-Disposition: inline
In-Reply-To: <9edd6873034f474baafd70b1df693001@EX13D32EUC003.ant.amazon.com>



On Tue, May 11, 2021 at 12:46:38PM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > Sent: 11 May 2021 11:45
> > To: Durrant, Paul <pdurrant@amazon.co.uk>
> > Cc: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org; xen-devel@lists.xenproject.org;
> > netdev@vger.kernel.org; wei.liu@kernel.org
> > Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status existence before watching
> >
> > On Tue, May 11, 2021 at 12:40:54PM +0200, Marek Marczykowski-Górecki wrote:
> > > On Tue, May 11, 2021 at 07:06:55AM +0000, Durrant, Paul wrote:
> > > > > -----Original Message-----
> > > > > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > > > > Sent: 10 May 2021 20:43
> > > > > To: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org
> > > > > Cc: paul@xen.org; xen-devel@lists.xenproject.org; netdev@vger.kernel.org; wei.liu@kernel.org;
> > > > > Durrant, Paul <pdurrant@amazon.co.uk>
> > > > > Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status existence before watching
> > > > >
> > > > > On Mon, May 10, 2021 at 08:06:55PM +0100, Michael Brown wrote:
> > > > > > If you have a suggested patch, I'm happy to test that it doesn't reintroduce
> > > > > > the regression bug that was fixed by this commit.
> > > > >
> > > > > Actually, I've just tested with simply reloading the xen-netfront module. It
> > > > > seems in this case, the hotplug script is not re-executed. In fact, I
> > > > > think it should not be re-executed at all, since the vif interface
> > > > > remains in place (it just gets the NO-CARRIER flag).
> > > > >
> > > > > This raises a question: why remove 'hotplug-status' in the first place?
> > > > > The interface remains correctly configured by the hotplug script after
> > > > > all. From the commit message:
> > > > >
> > > > >     xen-netback: remove 'hotplug-status' once it has served its purpose
> > > > >
> > > > >     Removing the 'hotplug-status' node in netback_remove() is wrong; the script
> > > > >     may not have completed. Only remove the node once the watch has fired and
> > > > >     has been unregistered.
> > > > >
> > > > > I think the intention was to remove the 'hotplug-status' node _later_ in
> > > > > case of quickly adding and removing the interface. Is that right, Paul?
> > > >
> > > > The removal was done to allow unbind/bind to function correctly. IIRC before the original patch
> > > > doing a bind would stall forever waiting for the hotplug status to change, which would never happen.
> > >
> > > Hmm, in that case maybe don't remove it at all in the backend, and let
> > > it be cleaned up by the toolstack, when it removes other backend-related
> > > nodes?
> >
> > No, unbind/bind _does_ require the hotplug script to be called again.
> >
>
> Yes, sorry, I was misremembering. My memory is hazy but there was definitely a problem at the time with leaving the node in place.
>
> > When exactly does the vif interface appear in the system (start to be available
> > for the hotplug script)? Maybe remove 'hotplug-status' just before that
> > point?
> >
>
> I really can't remember any detail. Perhaps try reverting both patches then and check that the unbind/rmmod/modprobe/bind sequence still works (and the backend actually makes it into connected state).

Ok, I've tried this. I've reverted both commits, then used your test
script from the 9476654bd5e8ad42abe8ee9f9e90069ff8e60c17:
   =20
    This has been tested by running iperf as a server in the test VM and
    then running a client against it in a continuous loop, whilst also
    running:
   =20
    while true;
      do echo vif-$DOMID-$VIF >unbind;
      echo down;
      rmmod xen-netback;
      echo unloaded;
      modprobe xen-netback;
      cd $(pwd);
      brctl addif xenbr0 vif$DOMID.$VIF;
      ip link set vif$DOMID.$VIF up;
      echo up;
      sleep 5;
      done
   =20
    in dom0 from /sys/bus/xen-backend/drivers/vif to continuously unbind,
    unload, re-load, re-bind and re-plumb the backend.

In fact, the need to call `brctl` and `ip link` manually is exactly
because the hotplug script isn't executed. When I execute it manually,
the backend properly gets back to a working state. So, removing
'hotplug-status' was done in the correct place (netback_remove). The
missing part is the toolstack calling the hotplug script on xen-netback
re-bind.

In this case, I'm not sure what the proper way is. If I restart the
xendriverdomain service (I run the backend in a domU), it properly
executes the hotplug script and the backend interface gets properly
configured. But it doesn't do it on its own. It seems to be related to
the device "state" in xenstore. The specific part of libxl is
backend_watch_callback():
https://github.com/xen-project/xen/blob/master/tools/libs/light/libxl_device.c#L1664

    ddev = search_for_device(dguest, dev);
    if (ddev == NULL && state == XenbusStateClosed) {
        /*
         * Spurious state change, device has already been disconnected
         * or never attached.
         */
        goto skip;
    } else if (ddev == NULL) {
        rc = add_device(egc, nested_ao, dguest, dev);
        if (rc > 0)
            free_ao = true;
    } else if (state == XenbusStateClosed && online == 0) {
        rc = remove_device(egc, nested_ao, dguest, ddev);
        if (rc > 0)
            free_ao = true;
        check_and_maybe_remove_guest(gc, ddomain, dguest);
    }

In short: if the device reaches XenbusStateInitWait for the first time
(the ddev == NULL case), it goes to add_device(), which executes the
hotplug script and stores the device.
Then, if the device goes to XenbusStateClosed with online == 0, it
executes the hotplug script again (with the "offline" parameter) and
forgets the device. If you unbind the driver, the device stays in the
XenbusStateConnected state (in xenstore), and after you bind it again,
it goes to XenbusStateInitWait. I don't think it goes through
XenbusStateClosed, and online stays at 1 too, so libxl doesn't execute
the hotplug script again.

One solution could be to add an extra case at the end, like "ddev !=
NULL && state == XenbusStateInitWait && hotplug-status != connected".
We would also need to make sure xl devd won't call the same hotplug
script multiple times for the same device _at the same time_ (I'm not
sure about the async machinery here).

But even if xl devd (aka the xendriverdomain service) gets "fixed" to
execute the hotplug script in that case, I don't think it would work in
the backend-in-dom0 case - there, I think nothing watches
already-configured vif interfaces (there is no xl devd daemon in dom0,
and the xl background process watches only domain death and cdrom eject
events).

I'm adding toolstack maintainers, maybe they'll have some idea...

In any case, the issue is that the hotplug script, responsible for
configuring the newly created vif interface, is not being called; it is
not the kernel waiting for it. So, I think both commits should still be
reverted.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--k7Ovt1DhdVgIdtca
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCi46AACgkQ24/THMrX
1ywsqAf/VEiZIJ5Qxk9keLiV804ReqfQfqsC6iuj3exTvVtvR7hzRRe1nMxvuVS6
rjmYRoWYY5V36IH0GXaUb9wri7Kg8uBZ8J9fTsaq8sJTXj+Re0aFckoDTeTwkzJl
CdpdgL1meNwvE7znIpA92LsObRPKqFPzAYMzFNt7eoaFYA7Y81n4nBKbKLfI4PiS
r9mzZSevt3yzGnbU6thYvYbGfmlGYArgZZ2mKi8eaMfnh7lLtHBD692t6AARjce3
j897zPf44EisvYMowITaF1A/D1SVl8cOabPzVj/VC2NP0z2Hjl46aRwrqd5FAM3V
cVO/tttRGl7cxzMOexxRzyn3GeI2rg==
=5+LX
-----END PGP SIGNATURE-----

--k7Ovt1DhdVgIdtca--


From xen-devel-bounces@lists.xenproject.org Mon May 17 21:51:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 21:51:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128502.241240 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lil9M-0004Ta-7i; Mon, 17 May 2021 21:51:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128502.241240; Mon, 17 May 2021 21:51:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lil9M-0004TT-4t; Mon, 17 May 2021 21:51:44 +0000
Received: by outflank-mailman (input) for mailman id 128502;
 Mon, 17 May 2021 21:51:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Z2KI=KM=fensystems.co.uk=mbrown@srs-us1.protection.inumbo.net>)
 id 1lil9L-0004TN-1f
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 21:51:43 +0000
Received: from blyat.fensystems.co.uk (unknown [54.246.183.96])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2bde7f93-e5fb-4acd-800d-565526daa305;
 Mon, 17 May 2021 21:51:42 +0000 (UTC)
Received: from dolphin.home (unknown
 [IPv6:2a00:23c6:5495:5e00:72b3:d5ff:feb1:e101])
 by blyat.fensystems.co.uk (Postfix) with ESMTPSA id 4DA8144319;
 Mon, 17 May 2021 21:51:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2bde7f93-e5fb-4acd-800d-565526daa305
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: "paul@xen.org" <paul@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
 "wei.liu@kernel.org" <wei.liu@kernel.org>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20210413152512.903750-1-mbrown@fensystems.co.uk>
 <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
 <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
 <YJpfORXIgEaWlQ7E@mail-itl> <YJpgNvOmDL9SuRye@mail-itl>
 <9edd6873034f474baafd70b1df693001@EX13D32EUC003.ant.amazon.com>
 <YKLjoALdw4oKSZ04@mail-itl>
From: Michael Brown <mbrown@fensystems.co.uk>
Message-ID: <ed4057aa-d69e-e803-752b-c007ab4e707d@fensystems.co.uk>
Date: Mon, 17 May 2021 22:51:38 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.7.0
MIME-Version: 1.0
In-Reply-To: <YKLjoALdw4oKSZ04@mail-itl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit
X-Spam-Status: No, score=-2.9 required=5.0 tests=ALL_TRUSTED,BAYES_00
	autolearn=ham autolearn_force=no version=3.4.2
X-Spam-Checker-Version: SpamAssassin 3.4.2 (2018-09-13) on
	blyat.fensystems.co.uk

On 17/05/2021 22:43, Marek Marczykowski-Górecki wrote:
> In any case, the issue is that the hotplug script, responsible for
> configuring the newly created vif interface, is not being called; it is
> not the kernel waiting for it. So, I think both commits should still be
> reverted.

Did you also test the ability for a domU to have the netfront driver 
reloaded?  (That should be roughly equivalent to my original test 
scenario comprising the iPXE netfront driver used to boot a kernel that 
then loads the Linux netfront driver.)

Michael




From xen-devel-bounces@lists.xenproject.org Mon May 17 21:58:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 21:58:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128511.241251 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lilFw-0005Fe-4p; Mon, 17 May 2021 21:58:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128511.241251; Mon, 17 May 2021 21:58:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lilFw-0005FX-1x; Mon, 17 May 2021 21:58:32 +0000
Received: by outflank-mailman (input) for mailman id 128511;
 Mon, 17 May 2021 21:58:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=qOHf=KM=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lilFu-0005FR-K6
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 21:58:30 +0000
Received: from out3-smtp.messagingengine.com (unknown [66.111.4.27])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 565ebd18-4ed8-4355-bf1e-d1e4f4756b67;
 Mon, 17 May 2021 21:58:30 +0000 (UTC)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id DA9D15C0184;
 Mon, 17 May 2021 17:58:29 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute1.internal (MEProxy); Mon, 17 May 2021 17:58:29 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Mon, 17 May 2021 17:58:28 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 565ebd18-4ed8-4355-bf1e-d1e4f4756b67
Date: Mon, 17 May 2021 23:58:25 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Michael Brown <mbrown@fensystems.co.uk>
Cc: "Durrant, Paul" <pdurrant@amazon.co.uk>, "paul@xen.org" <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YKLnAcaVsMUQUC74@mail-itl>
References: <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
 <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
 <YJpfORXIgEaWlQ7E@mail-itl>
 <YJpgNvOmDL9SuRye@mail-itl>
 <9edd6873034f474baafd70b1df693001@EX13D32EUC003.ant.amazon.com>
 <YKLjoALdw4oKSZ04@mail-itl>
 <ed4057aa-d69e-e803-752b-c007ab4e707d@fensystems.co.uk>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="v82u8Wko8ZqO+zT5"
Content-Disposition: inline
In-Reply-To: <ed4057aa-d69e-e803-752b-c007ab4e707d@fensystems.co.uk>


--v82u8Wko8ZqO+zT5
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Mon, 17 May 2021 23:58:25 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Michael Brown <mbrown@fensystems.co.uk>
Cc: "Durrant, Paul" <pdurrant@amazon.co.uk>, "paul@xen.org" <paul@xen.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching

On Mon, May 17, 2021 at 10:51:38PM +0100, Michael Brown wrote:
> On 17/05/2021 22:43, Marek Marczykowski-Górecki wrote:
> > In any case, the issue is that the hotplug script, responsible for
> > configuring the newly created vif interface, is not being called; it
> > is not the kernel waiting for it. So, I think both commits should
> > still be reverted.
>
> Did you also test the ability for a domU to have the netfront driver
> reloaded?  (That should be roughly equivalent to my original test scenario
> comprising the iPXE netfront driver used to boot a kernel that then loads
> the Linux netfront driver.)

Yes, with both commits reverted, it just works, without the need to do
anything on the backend side.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--v82u8Wko8ZqO+zT5
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCi5wEACgkQ24/THMrX
1yxtFgf+OzfFn2JRwVyKHdBAPwZoJJT+BTQds43iuXSkSLYOR07z8k8q3oTEBfPM
R0B5K9O869Prjinv+14tkO1coZs4vNb/4GTmxd4WEPwbA5AqGiq2Y+5VKbDL6Wwv
FQeHDSgZ3xIgX+mq4gLX2Nccwlc4X8R6Pidq+5TLPGpEkIlutDQpIPfHMncYyrli
Xr9ffRXZvbkck1UTNzx/h1epY3UwIupyhzIhz5ieqhhqRRsZKVKmd/D0Q8zUX9oS
Wa60THmmamKjIPgEvN5LfFe0z6tYohJIWA5AnY6hQCSVbg29t1zXMIVpwYE+WL8n
vPegbPQ2KbLcHd9RZ8igEIL0BlX+eA==
=p4UQ
-----END PGP SIGNATURE-----

--v82u8Wko8ZqO+zT5--


From xen-devel-bounces@lists.xenproject.org Mon May 17 23:20:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 23:20:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128519.241263 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1limXO-0004uV-9n; Mon, 17 May 2021 23:20:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128519.241263; Mon, 17 May 2021 23:20:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1limXO-0004uO-61; Mon, 17 May 2021 23:20:38 +0000
Received: by outflank-mailman (input) for mailman id 128519;
 Mon, 17 May 2021 23:20:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=FDvn=KM=zededa.com=roman@srs-us1.protection.inumbo.net>)
 id 1limXM-0004uI-W5
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 23:20:37 +0000
Received: from mail-qt1-x829.google.com (unknown [2607:f8b0:4864:20::829])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb74cd38-aafa-40c4-8105-c44409986b93;
 Mon, 17 May 2021 23:20:36 +0000 (UTC)
Received: by mail-qt1-x829.google.com with SMTP id c10so6163012qtx.10
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 16:20:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb74cd38-aafa-40c4-8105-c44409986b93
MIME-Version: 1.0
References: <cover.1621017334.git.connojdavis@gmail.com>
In-Reply-To: <cover.1621017334.git.connojdavis@gmail.com>
From: Roman Shaposhnik <roman@zededa.com>
Date: Mon, 17 May 2021 16:20:24 -0700
Message-ID: <CAMmSBy-9_egM5j4aiOWx_fa5FwDy1ZpgOSdt8Pywqui38z52fw@mail.gmail.com>
Subject: Re: [PATCH v3 0/5] Minimal build for RISCV
To: Connor Davis <connojdavis@gmail.com>
Cc: Xen-devel <xen-devel@lists.xenproject.org>, 
	Bobby Eshleman <bobbyeshleman@gmail.com>, Alistair Francis <alistair23@gmail.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>, 
	Doug Goldstein <cardoe@cardoe.com>
Content-Type: text/plain; charset="UTF-8"

FWIW: for project EVE, the timing on this is perfect -- I've just
started a complete RISC-V port:
    https://github.com/lf-edge/eve/pull/2083

Targeting KVM for now, but would be awesome to see at least some
rudimentary RISC-V support land in Xen.

Connor, I'll be merging your changes into my patchset on EVE side ASAP
-- just to start playing with it.

Thanks,
Roman.

On Fri, May 14, 2021 at 11:54 AM Connor Davis <connojdavis@gmail.com> wrote:
>
> Hi all,
>
> This series introduces a minimal build for RISCV. It is based on Bobby's
> previous work from last year[0] rebased onto current Xen.
>
> This series provides the patches necessary to get a minimal build
> working. The build is "minimal" in the sense that it only supports
> building TARGET=head.o. The arch/riscv/head.S is just a simple while(1).
>
> The first 3 patches are mods to non-RISCV bits that enable building a
> config with:
>
>   !CONFIG_HAS_NS16550
>   !CONFIG_HAS_PASSTHROUGH
>   !CONFIG_GRANT_TABLE
>
> respectively. The fourth patch adds the make/Kconfig boilerplate
> alongside head.S and asm-riscv/config.h (head.S references ENTRY
> that is defined in asm-riscv/config.h).
>
> The last patch adds a docker container for doing the build. To build from
> the docker container (after creating it locally), you can run the following:
>
>   $ make XEN_TARGET_ARCH=riscv64 SUBSYSTEMS=xen -C xen TARGET=head.o
>
> --
> Changes since v2:
>   - Reduced number of riscv files added to ease review
>
> Changes since v1:
>   - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
>     fixed for 4.15
>   - Moved #ifdef-ary around iommu_enabled to iommu.h
>   - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
>     instead of defining an empty struct when !CONFIG_GRANT_TABLE
>
> Connor Davis (5):
>   xen/char: Default HAS_NS16550 to y only for X86 and ARM
>   xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
>   xen: Fix build when !CONFIG_GRANT_TABLE
>   xen: Add files needed for minimal riscv build
>   automation: Add container for riscv64 builds
>
>  automation/build/archlinux/riscv64.dockerfile |  33 ++++++
>  automation/scripts/containerize               |   1 +
>  config/riscv64.mk                             |   5 +
>  xen/Makefile                                  |   8 +-
>  xen/arch/riscv/Kconfig                        |  52 +++++++++
>  xen/arch/riscv/Kconfig.debug                  |   0
>  xen/arch/riscv/Makefile                       |   0
>  xen/arch/riscv/Rules.mk                       |   0
>  xen/arch/riscv/arch.mk                        |  16 +++
>  xen/arch/riscv/asm-offsets.c                  |   0
>  xen/arch/riscv/configs/riscv64_defconfig      |  12 ++
>  xen/arch/riscv/head.S                         |   6 +
>  xen/common/memory.c                           |  10 ++
>  xen/drivers/char/Kconfig                      |   2 +-
>  xen/include/asm-riscv/config.h                | 110 ++++++++++++++++++
>  xen/include/xen/grant_table.h                 |   3 +-
>  xen/include/xen/iommu.h                       |   8 +-
>  17 files changed, 261 insertions(+), 5 deletions(-)
>  create mode 100644 automation/build/archlinux/riscv64.dockerfile
>  create mode 100644 config/riscv64.mk
>  create mode 100644 xen/arch/riscv/Kconfig
>  create mode 100644 xen/arch/riscv/Kconfig.debug
>  create mode 100644 xen/arch/riscv/Makefile
>  create mode 100644 xen/arch/riscv/Rules.mk
>  create mode 100644 xen/arch/riscv/arch.mk
>  create mode 100644 xen/arch/riscv/asm-offsets.c
>  create mode 100644 xen/arch/riscv/configs/riscv64_defconfig
>  create mode 100644 xen/arch/riscv/head.S
>  create mode 100644 xen/include/asm-riscv/config.h
>
> --
> 2.31.1
>
>


From xen-devel-bounces@lists.xenproject.org Mon May 17 23:28:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 23:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128526.241273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1limfB-0005cO-2l; Mon, 17 May 2021 23:28:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128526.241273; Mon, 17 May 2021 23:28:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1limfA-0005cH-W4; Mon, 17 May 2021 23:28:40 +0000
Received: by outflank-mailman (input) for mailman id 128526;
 Mon, 17 May 2021 23:28:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1limfA-0005c7-8O; Mon, 17 May 2021 23:28:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1limf9-0006kw-Uk; Mon, 17 May 2021 23:28:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1limf9-0006hG-N1; Mon, 17 May 2021 23:28:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1limf9-0001fF-MW; Mon, 17 May 2021 23:28:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161985-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 161985: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3ac8835a80b27fc4e7116dbde78d3eececc66fc9
X-Osstest-Versions-That:
    xen=27eb6833134d0f3ddfb02d09055776e837e9a747
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 17 May 2021 23:28:39 +0000

flight 161985 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161985/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3ac8835a80b27fc4e7116dbde78d3eececc66fc9
baseline version:
 xen                  27eb6833134d0f3ddfb02d09055776e837e9a747

Last test of basis   161982  2021-05-17 17:00:27 Z    0 days
Testing same since   161985  2021-05-17 21:00:36 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   27eb683313..3ac8835a80  3ac8835a80b27fc4e7116dbde78d3eececc66fc9 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 17 23:43:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 23:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128532.241288 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1limtG-0007tC-D3; Mon, 17 May 2021 23:43:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128532.241288; Mon, 17 May 2021 23:43:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1limtG-0007t5-9w; Mon, 17 May 2021 23:43:14 +0000
Received: by outflank-mailman (input) for mailman id 128532;
 Mon, 17 May 2021 23:43:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X6rY=KM=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1limtF-0007sz-1z
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 23:43:13 +0000
Received: from mail-ot1-x331.google.com (unknown [2607:f8b0:4864:20::331])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac17041e-42ae-4a64-a6b3-5bc24904251d;
 Mon, 17 May 2021 23:43:12 +0000 (UTC)
Received: by mail-ot1-x331.google.com with SMTP id
 n32-20020a9d1ea30000b02902a53d6ad4bdso7102957otn.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 16:43:12 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id 9sm3045457oie.51.2021.05.17.16.43.10
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 16:43:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac17041e-42ae-4a64-a6b3-5bc24904251d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=MyBYGXe0dG09ETtSlfZDNaPmp0kxxQmpe+PbWLlzy4c=;
        b=mgLv5OfD2ycIN0KxRISg4k0gn5vjkyhkWmFeh/j7OJeMZ9usQe/l6+95iajvHV9Pz9
         YIB/ScvDeIS81UZRt5yTYPIYN2yG04qRIZAVZ1ym7xVjZLBGxyk9jOAOPlzN0L86qFok
         KEUYFQZnYjZ/35LzGpJ3I1iqI+cqX7LLPqGlG7hZMCmXTWxEwZTyoT2wtb+KtQeztzRG
         1X48GuK0RRXyWqumE2mUnxRd1z5cWYAJ4NCVHYkT2sRhpnRst0XDwJkQTlHpoPWB5XXQ
         zF8BUB9uOvKOzd4a43vQHF8kSOsPVN7fmIBUrMl6UDOtUlnDqqZC+kBvZ2Ry62C69sgZ
         ZXNg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=MyBYGXe0dG09ETtSlfZDNaPmp0kxxQmpe+PbWLlzy4c=;
        b=JcppIzHKRtZzo/7p97eTILCgtb+6Lg6Ne0TSCIowvbhNDaRWooN8R9rDi7pisSjU29
         a0Y9TLetOU/hdcX/+A1LOM/4L8ZH1/UBKpchn9Q7o40ICHRTG2EzXcb1WcPpKFZ8gD6X
         URUenGrIP2mfYAUn6AtDfqCBJMSLU3TZGrmMKwEEZAAqsFEksGOKpTtSCC8B/681+NNJ
         2Lx9Ip2ohCLDM2ndKVO+0cm97kGXhJQKn9N5yEQdRBAKxLFTWATn122PgwhVa/3/JRYf
         +JTE5Dwrr2t4KKMek4aRSnmqyfa45DGV0zPkw63VPmPLewQ8w9ox4Z9MbMp9lhOVFyHK
         fQPA==
X-Gm-Message-State: AOAM530gvi7EEs6FbVnEeTMpX1SprznQk1OH3cyNL4c0Pp7JK6RYIer/
	m3P8V8YyuiaLJqfwPgqdJ24nGSCbzoOA8A==
X-Google-Smtp-Source: ABdhPJzdUt0Fo0OgFQ6Iu/Lf7MWF3l4+O2bTmQvkryQ3aEUA80Uk5NsYhsRGQL/rANUfdqrARjezPg==
X-Received: by 2002:a05:6830:410d:: with SMTP id w13mr1734902ott.173.1621294991812;
        Mon, 17 May 2021 16:43:11 -0700 (PDT)
Subject: Re: [PATCH v3 1/5] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <3960a676376e0163d97ac02f968966cdfaccbf75.1621017334.git.connojdavis@gmail.com>
 <76b5e429-a0b0-b8a2-cd31-85cbb4da8680@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <86738c5c-28b2-b82f-0ff4-ac84cb03a64b@gmail.com>
Date: Mon, 17 May 2021 17:43:31 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <76b5e429-a0b0-b8a2-cd31-85cbb4da8680@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/17/21 5:56 AM, Jan Beulich wrote:
>> --- a/xen/drivers/char/Kconfig
>> +++ b/xen/drivers/char/Kconfig
>> @@ -1,6 +1,6 @@
>>   config HAS_NS16550
>>   	bool "NS16550 UART driver" if ARM
>> -	default y
>> +	default y if (ARM || X86)
> ... this approach doesn't scale very well. You would likely have
> been hesitant to add a, say, 12-way || here if we had this many
> architectures already. I think you instead want
>
>   config HAS_NS16550
>   	bool "NS16550 UART driver" if ARM
> +	default n if RISCV
>   	default y
>
> which then can be adjusted back by another one line change once
> the driver code actually builds.
>
> Jan

Agreed, I will update this in the next version.


Thanks,

Connor
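
[Editorial note, not part of the original message: written out in full, the
pattern Jan suggests would read as below. Kconfig evaluates `default` lines
in order and uses the first one whose condition holds, so the RISCV line
masks the unconditional `default y` only on that architecture.]

```
config HAS_NS16550
	bool "NS16550 UART driver" if ARM
	default n if RISCV
	default y
```

Removing the `default n if RISCV` line later is the "one line change"
that re-enables the driver everywhere once it builds on RISC-V.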



From xen-devel-bounces@lists.xenproject.org Mon May 17 23:45:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 17 May 2021 23:45:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128536.241298 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1limvr-0008WM-Sk; Mon, 17 May 2021 23:45:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128536.241298; Mon, 17 May 2021 23:45:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1limvr-0008WF-Pc; Mon, 17 May 2021 23:45:55 +0000
Received: by outflank-mailman (input) for mailman id 128536;
 Mon, 17 May 2021 23:45:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=X6rY=KM=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1limvq-0008W9-2G
 for xen-devel@lists.xenproject.org; Mon, 17 May 2021 23:45:54 +0000
Received: from mail-oi1-x22e.google.com (unknown [2607:f8b0:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 341beecd-4ff2-4ed8-99c6-76f2ec0071a0;
 Mon, 17 May 2021 23:45:53 +0000 (UTC)
Received: by mail-oi1-x22e.google.com with SMTP id b25so8142005oic.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 16:45:53 -0700 (PDT)
Received: from [192.168.99.80] (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id l11sm3289900ooq.44.2021.05.17.16.45.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 16:45:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 341beecd-4ff2-4ed8-99c6-76f2ec0071a0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=0Td9hzMwh0l7nVPyPWJs5h3Oib92yw47wCHz+OsqDpw=;
        b=T+6mAgbkl+b8IFSlSCGNMM2mJ/c/iaQFGhfg6fP+HM61EYma0ePPNTI/OB+njrnd2a
         +fvSUhqhKPwSDUPToS4Nu9YJUnjUiKhkNlVQP8+3glRYY2O6c+9w1ATKIlxdrBXO1G16
         C4RRJmV8vh6FfMiCtQhu585ep47HhFEtBUM11aueYYPhvvjI94zFl464SdsaSlKJiCog
         D3ht30rZgSEMEwt7VCjCHLb7plwbB8jlALbul8r7QQTA+0xHbaMyhcx0wlexU1BguHza
         Y3fI60035uurmYTAM6jrUtunD0KccuF3vFux8W6qeqUFmX0BgLZvHFMybZvVz8bGEa9I
         BXQg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=0Td9hzMwh0l7nVPyPWJs5h3Oib92yw47wCHz+OsqDpw=;
        b=lRxc0FwUcKt+u60E4hx854hV9+FyH/0KKllgPxH8EE1OT9nfNkA9hVTAvisoLAtqp1
         7Vz4Jp3EW1tWUE2r/rbdR4txDgM4LKNq5QcS2AGbLGwQcoXCbcIkfWtobVtdVrBjIkHy
         w/TzDhcPaRhNCOdg3GRa1CvfdewN3OKgolpr/q5eDGkjKZt+t8yJmByvFZz/6gEsNejF
         ZeiuvDncRgFLIe/8eJH8CEEBCEB3f0jqyU62BRs8NBjMbGbbmyLNpsagWajyst4Tpji9
         ITlIbH2mScHCsMCHduGEkBBKrFAyIkOp1Dysj7hWaKBH4u63Yriif9ppyU6BXEj3EJER
         9Ang==
X-Gm-Message-State: AOAM530j4QwzeqkoZkOBdQV5yXf6j4tsXyqInGNYqWfQ2FjLji/JBYA5
	Qowi2bRz3g549j9QEQ9WhIhc+MaN/cdV+w==
X-Google-Smtp-Source: ABdhPJzHyDxnUrn7L68qr7XKG/umoQqc5tYC5XKWM9l8HmSbyEg8bbSKHVYxHsr6srMr/fZvEm9EsA==
X-Received: by 2002:aca:c6d7:: with SMTP id w206mr1735235oif.87.1621295152791;
        Mon, 17 May 2021 16:45:52 -0700 (PDT)
Subject: Re: [PATCH v3 3/5] xen: Fix build when !CONFIG_GRANT_TABLE
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <834f7995ae80a3b37b6d508d1c989b4ee391f61b.1621017334.git.connojdavis@gmail.com>
 <b56a1cfd-46ab-c601-883c-73537dfaac92@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <42d16824-0dd2-0aea-e396-62231a71a0e0@gmail.com>
Date: Mon, 17 May 2021 17:46:12 -0600
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <b56a1cfd-46ab-c601-883c-73537dfaac92@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 5/17/21 5:22 AM, Jan Beulich wrote:
> On 14.05.2021 20:53, Connor Davis wrote:
>> Move struct grant_table; in grant_table.h above
>> ifdef CONFIG_GRANT_TABLE. This fixes the following:
>>
>> /build/xen/include/xen/grant_table.h:84:50: error: 'struct grant_table'
>> declared inside parameter list will not be visible outside of this
>> definition or declaration [-Werror]
>>     84 | static inline int mem_sharing_gref_to_gfn(struct grant_table *gt,
>>        |
> There must be more to this, as e.g. the PV shim does get built with
> !GRANT_TABLE. Nevertheless, ...
>
You are right... your comment made me realize I had only tested this
with XEN_TARGET_ARCH=riscv64. I will rework this.

>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ... since the potential of breaking the build is obvious enough,
> Acked-by: Jan Beulich <jbeulich@suse.com>
>
> Jan


Thanks,

Connor
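
[Editorial note, not part of the original message: a minimal sketch of the
diagnostic being discussed. In C, a struct tag first named inside a
prototype's parameter list has scope limited to that prototype, which is
what the quoted warning reports; a file-scope forward declaration fixes it.
The one-argument signature and stub body here are hypothetical, for
illustration only — the real mem_sharing_gref_to_gfn() takes more
parameters.]

```c
#include <stddef.h>

/* Forward declaration at file scope: without this line, the tag
 * "struct grant_table" below would be scoped to the prototype alone,
 * triggering the -Werror diagnostic quoted in the patch description. */
struct grant_table;

/* Hypothetical stub standing in for the stubbed-out inline helper. */
static inline int mem_sharing_gref_to_gfn(struct grant_table *gt)
{
    (void)gt;   /* unused in this sketch */
    return 0;
}
```

Moving the forward declaration above the `#ifdef CONFIG_GRANT_TABLE` block,
as the patch does, makes the tag visible to the `!CONFIG_GRANT_TABLE` stubs
as well.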



From xen-devel-bounces@lists.xenproject.org Tue May 18 00:13:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 00:13:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128543.241310 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1linMY-0003mU-R9; Tue, 18 May 2021 00:13:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128543.241310; Tue, 18 May 2021 00:13:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1linMY-0003mN-NT; Tue, 18 May 2021 00:13:30 +0000
Received: by outflank-mailman (input) for mailman id 128543;
 Tue, 18 May 2021 00:13:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1linMW-0003mD-Rb; Tue, 18 May 2021 00:13:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1linMW-00089W-LT; Tue, 18 May 2021 00:13:28 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1linMW-0001C6-9I; Tue, 18 May 2021 00:13:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1linMW-0003Za-8m; Tue, 18 May 2021 00:13:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9fCichlTJwdhCXn3ezITWXN6pj5euRXLsasarHeteBw=; b=EoB2owWRDGcOG36TVmImmnZyUU
	XaWTQ3WzMxn+Omqw6Eaqu5OIVFTBITEIxfIZFmd/KDkIoQwaZo2pF++Pp8K5ThysEmTekqfGUqgeI
	yX37uDVIeCi90CvPIqd5aPNm4khUkhWdJkHUaaAFyPOY2pb+oEAeeMksl44AkJTvLk3o=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161981-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161981: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=32de74a1ac188cef3b996a65954d5b87128a4368
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 00:13:28 +0000

flight 161981 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161981/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                32de74a1ac188cef3b996a65954d5b87128a4368
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  270 days
Failing since        152659  2020-08-21 14:07:39 Z  269 days  494 attempts
Testing same since   161981  2021-05-17 16:38:18 Z    0 days    1 attempts

------------------------------------------------------------
502 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 153386 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 18 00:37:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 00:37:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128555.241324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1linjm-0006Bz-4O; Tue, 18 May 2021 00:37:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128555.241324; Tue, 18 May 2021 00:37:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1linjm-0006Bs-0I; Tue, 18 May 2021 00:37:30 +0000
Received: by outflank-mailman (input) for mailman id 128555;
 Tue, 18 May 2021 00:37:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nvJe=KN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1linjk-0006Bm-D4
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 00:37:28 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ad5faf4a-0344-439e-b3e1-8bb8d6a67239;
 Tue, 18 May 2021 00:37:27 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 6E3EE61369;
 Tue, 18 May 2021 00:37:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad5faf4a-0344-439e-b3e1-8bb8d6a67239
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621298246;
	bh=/M5LhQjF5T2pAAu/brxavGChh15xMLHZFxXULsJvDOA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=HFV2o0YPT9s35MLRr0aO622YEdUN40Kh/F5ah9jpPHlK2nYqeM42M7QcVTGFVj5M+
	 5NquKipzijCa7TsA7XX8iIg+tgmbF/dT+aCSg/LxCM49ta7a6cOtXfReBWCl0FEo/9
	 cHrxdeULToU3C0oVOWRR0NkSl7WY9OgF2/0DbBjN8v6vdjnQLi+MLIsrQxdCCSTqUJ
	 F50eDjDIesL1f1jjzAzs6wth7KCOGSVdC2xsxrNH/DbVvQRLh41GAyHnaP3Rc1afUn
	 mz7AUkgszdvl8cilU2g07mrAoEuDM+fWORxJG7MLEdg516EzwEmPFvwVbT6iJXuesN
	 V2m3mJPe0y/8A==
Date: Mon, 17 May 2021 17:37:25 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <jgrall@amazon.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>
Subject: Re: [PATCH RFCv2 10/15] xen/arm: mm: Allocate xen page tables in
 domheap rather than xenheap
In-Reply-To: <602bea61-9db2-a27d-1fed-001e2b5b2176@xen.org>
Message-ID: <alpine.DEB.2.21.2105171736450.14426@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-11-julien@xen.org> <alpine.DEB.2.21.2105121529180.5018@sstabellini-ThinkPad-T480s> <9429bec0-8706-42b9-cda6-77adde961235@xen.org> <alpine.DEB.2.21.2105131522030.5018@sstabellini-ThinkPad-T480s>
 <602bea61-9db2-a27d-1fed-001e2b5b2176@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 15 May 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 13/05/2021 23:27, Stefano Stabellini wrote:
> > On Thu, 13 May 2021, Julien Grall wrote:
> > > Hi Stefano,
> > > 
> > > On 12/05/2021 23:44, Stefano Stabellini wrote:
> > > > On Sun, 25 Apr 2021, Julien Grall wrote:
> > > > > From: Julien Grall <jgrall@amazon.com>
> > > > > 
> > > > > xen_{un,}map_table() already uses the helper to map/unmap pages
> > > > > on-demand (note this is currently a NOP on arm64). So switching to
> > > > > domheap doesn't have any disadvantage.
> > > > > 
> > > > > But this has the benefits:
> > > > >       - to keep the page tables unmapped if an arch decides to do so
> > > > >       - to reduce xenheap use on arm32, which can be pretty small
> > > > > 
> > > > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > > > 
> > > > Thanks for the patch. It looks OK but let me ask a couple of questions
> > > > to clarify my doubts.
> > > > 
> > > > This change should have no impact on arm64, right?
> > > > 
> > > > For arm32, I wonder why we were using map_domain_page before in
> > > > xen_map_table: it wasn't necessary, was it? In fact, one could even say
> > > > that it was wrong?
> > > In xen_map_table() we need to be able to map pages from the Xen binary,
> > > xenheap... On arm64, we would be able to use mfn_to_virt() because
> > > everything is mapped in Xen. But that's not the case on arm32. So we
> > > need a way to map anything easily.
> > > 
> > > The only difference between xenheap and domheap is that the former is
> > > always mapped while the latter may not be. So one can also view a
> > > xenheap page as a glorified domheap page.
> > > 
> > > I also don't really want to create yet another interface to map pages
> > > (we have vmap(), map_domain_page(), map_domain_page_global()...). So,
> > > when I implemented xen_map_table() a couple of years ago, I came to the
> > > conclusion that this is a convenient (ab)use of the interface.
> > 
> > Got it. Repeating to check if I see the full picture. On ARM64 there are
> > no changes. On ARM32, at runtime there are no changes to mapping/unmapping
> > pages because xen_map_table() is already mapping all pages using domheap;
> > even xenheap pages are mapped as domheap. So this patch makes no
> > difference in mapping/unmapping, correct?
> 
> For arm32, it makes a slight difference when allocating a new page table (we
> didn't call map/unmap before) but this is not called often.
> 
> The main "drop" in performance happened when xen_{un,}map_table() was
> introduced.
> 
> > 
> > The only difference is that on arm32 we are using domheap to allocate
> > the pages, which is a different (larger) pool.
> 
> Yes.
> 
> Would you be happy to give your acked-by/reviewed-by on this basis?

Yes

Acked-by: Stefano Stabellini <sstabellini@kernel.org>


From xen-devel-bounces@lists.xenproject.org Tue May 18 00:50:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 00:50:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128560.241335 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1linwR-0008NB-BE; Tue, 18 May 2021 00:50:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128560.241335; Tue, 18 May 2021 00:50:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1linwR-0008N4-6L; Tue, 18 May 2021 00:50:35 +0000
Received: by outflank-mailman (input) for mailman id 128560;
 Tue, 18 May 2021 00:50:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nvJe=KN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1linwP-0008My-AQ
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 00:50:33 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 43b99d20-dbff-4225-8155-6c78ab3c0ab8;
 Tue, 18 May 2021 00:50:32 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 69B49613AE;
 Tue, 18 May 2021 00:50:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43b99d20-dbff-4225-8155-6c78ab3c0ab8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621299031;
	bh=OTJQP3YJP2vtwFR7puXkTCgaA7pkfGxka3vjjiVAoBc=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=FggtqzTJ9+aDR3QkqHtBjiYmMvNkD/wseXgNtFiptmLKa794OuSoEfElOgSCto/UY
	 nyXg69W5lshe2Xud5ZjvLdbsI47XQEBgsnJo4j/FQCjzS+AV9LjSP10hnXjVW4Ju/5
	 ZLbl0ZzLDH+BGWlLZgI+ToTrmAwHwrgtHufRxarwyLAMkTL6G52KCco/ZIv9wV65gh
	 jInDcr1RNiGJADQo+/YwbJVyDiAoEVKJRLZVqABNxjv0HNwvdv5tFNCHG+XzQG7dMB
	 oUuDFtbP25BKoVMgYoMSOqqt7++PUne5LL4yRkRypFGQRaQkEIAgYVmAJg8/zZykmF
	 LhA+kN5lZpEPg==
Date: Mon, 17 May 2021 17:50:30 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFCv2 14/15] xen/arm: mm: Rework
 setup_xenheap_mappings()
In-Reply-To: <8cda6d2d-7f3c-fef6-c534-2fadabed1bad@xen.org>
Message-ID: <alpine.DEB.2.21.2105171738320.14426@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-15-julien@xen.org> <alpine.DEB.2.21.2105141646340.14426@sstabellini-ThinkPad-T480s> <8cda6d2d-7f3c-fef6-c534-2fadabed1bad@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 15 May 2021, Julien Grall wrote:
> Hi,
> 
> On 15/05/2021 00:51, Stefano Stabellini wrote:
> > On Sun, 25 Apr 2021, Julien Grall wrote:
> > > From: Julien Grall <julien.grall@arm.com>
> > > 
> > > A few issues have been reported with setup_xenheap_mappings() over the
> > > last couple of years. The main ones are:
> > >      - It will break on platforms supporting more than 512GB of RAM
> > >        because the memory allocated by the boot allocator is not yet
> > >        mapped.
> > >      - Aligning all the regions to 1GB may lead to unexpected results
> > >        because we may alias non-cacheable regions (such as device or
> > >        reserved regions).
> > > 
> > > map_pages_to_xen() was recently reworked to allow superpage mappings and
> > > deal with the use of page-tables before they are mapped.
> > > 
> > > Most of the code in setup_xenheap_mappings() is now replaced with a
> > > single call to map_pages_to_xen().
> > > 
> > > This also requires re-ordering the steps in setup_mm() so the regions
> > > are given to the boot allocator first and then we set up the xenheap
> > > mappings.
> > 
> > I know this is paranoia, but doesn't this introduce a requirement on the
> > size of the first bank in bootinfo.mem.bank[]?
> > 
> > It should be at least 8KB?
> 
> AFAIK, the current requirement is 4KB because of the 1GB mapping. Long term,
> it would be 8KB.
> 
> > 
> > I know it is unlikely but it is theoretically possible to have a first
> > bank of just 1KB.
> 
> All the page allocators work at page granularity. I am not entirely
> sure whether the current Xen would ignore the region or break.
> 
> Note that setup_xenheap_mappings() takes a base MFN and a number of pages
> to map. So this doesn't look like a new problem here.

Yeah... the example of the first bank being 1KB is wrong because, as you
wrote, it wouldn't work before your patches either, and probably it
never will.

Maybe a better example is a first bank of 4KB exactly.


> For the 8KB requirement, we can first give all the pages to the boot
> allocator and then do the mapping.
> 
> > 
> > I think we should write the requirement down with an in-code comment?
> > Or better maybe we should write a message about it in the panic below?
> 
> I can write an in-code comment but I think writing down in the panic() would
> be wrong because there is no promise this is going to be the only reason it
> fails (imagine there is a bug in the code...).

Maybe it is sufficient to print out the error code (EINVAL / ENOMEM, etc.)
in the panic. If you see -12, you know what to look for.

Looking into it I realize that we are not returning -ENOMEM in case of
alloc_boot_pages failures in create_xen_table. It looks like we would
hit a BUG() in the implementation of alloc_boot_pages. Maybe that's good
enough as is then.


From xen-devel-bounces@lists.xenproject.org Tue May 18 00:51:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 00:51:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128563.241346 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1linxM-0000VY-Iq; Tue, 18 May 2021 00:51:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128563.241346; Tue, 18 May 2021 00:51:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1linxM-0000VR-Fw; Tue, 18 May 2021 00:51:32 +0000
Received: by outflank-mailman (input) for mailman id 128563;
 Tue, 18 May 2021 00:51:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nvJe=KN=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1linxK-0000VJ-Cm
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 00:51:30 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2fd4c7e-04f7-4356-9feb-f178b7afc5f2;
 Tue, 18 May 2021 00:51:29 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id BA92D613AE;
 Tue, 18 May 2021 00:51:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2fd4c7e-04f7-4356-9feb-f178b7afc5f2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621299089;
	bh=J2cSjDnsf7weHduuUYT1lTR3uyi8DNz7FUP2VJDiO2M=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=LeOI7WMxp0j5Insgdz16veHhsbQVrpoWNUgpgFtOBiA5U/jTn2BAX4aphwqc9+2x6
	 47Rjl/hx2DONHcAjEcyKx+FZM9VZTzMf9cAs1YNrvRPZ4XxYq3lTrobeCRjE9jHh3e
	 u5yTUIoFBz/EUzhXyc7pqoZA73Z/Md7tHEQ5SsoC0AnjErpN2cN76njeN8i3Tzdzjb
	 +MA67RVABDxnWSTeWCIWgki08EuNmYely3joTpj0vQj5ZYvy4OE9/DC/MlbiFWu+q4
	 e+PyzxsxUOaZ80goKiOhWS/MrreYCtZ6iAb36SVpv9xhjaJGL8Z4aasOL+OsTeO31H
	 KsGcEI3O53cRg==
Date: Mon, 17 May 2021 17:51:28 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    xen-devel@lists.xenproject.org, Wei.Chen@arm.com, Henry.Wang@arm.com, 
    Penny.Zheng@arm.com, Bertrand.Marquis@arm.com, 
    Julien Grall <julien.grall@arm.com>, 
    Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>, 
    Julien Grall <jgrall@amazon.com>
Subject: Re: [PATCH RFCv2 15/15] xen/arm: mm: Re-implement setup_frame_table_mappings()
 with map_pages_to_xen()
In-Reply-To: <2308478e-527b-a54a-206a-785f80515835@xen.org>
Message-ID: <alpine.DEB.2.21.2105171751110.14426@sstabellini-ThinkPad-T480s>
References: <20210425201318.15447-1-julien@xen.org> <20210425201318.15447-16-julien@xen.org> <alpine.DEB.2.21.2105141658510.14426@sstabellini-ThinkPad-T480s> <2308478e-527b-a54a-206a-785f80515835@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 15 May 2021, Julien Grall wrote:
> Hi Stefano,
> 
> On 15/05/2021 01:02, Stefano Stabellini wrote:
> > On Sun, 25 Apr 2021, Julien Grall wrote:
> > > From: Julien Grall <julien.grall@arm.com>
> > > 
> > > Now that map_pages_to_xen() has been extended to support 2MB mappings,
> > > we can replace the create_mappings() call by map_pages_to_xen() call.
> > > 
> > > This has the advantage of removing the difference between 32-bit and
> > > 64-bit code.
> > > 
> > > Lastly, remove create_mappings() as there are no more callers.
> > > 
> > > Signed-off-by: Julien Grall <julien.grall@arm.com>
> > > Signed-off-by: Julien Grall <jgrall@amazon.com>
> > > 
> > > ---
> > >      Changes in v2:
> > >          - New patch
> > > 
> > >      TODO:
> > >          - Add support for setting the contiguous bit
> > > ---
> > >   xen/arch/arm/mm.c | 64 +++++------------------------------------------
> > >   1 file changed, 6 insertions(+), 58 deletions(-)
> > > 
> > > diff --git a/xen/arch/arm/mm.c b/xen/arch/arm/mm.c
> > > index c49403b687f5..5f8ae029dd6d 100644
> > > --- a/xen/arch/arm/mm.c
> > > +++ b/xen/arch/arm/mm.c
> > > @@ -359,40 +359,6 @@ void clear_fixmap(unsigned map)
> > >       BUG_ON(res != 0);
> > >   }
> > >   -/* Create Xen's mappings of memory.
> > > - * Mapping_size must be either 2MB or 32MB.
> > > - * Base and virt must be mapping_size aligned.
> > > - * Size must be a multiple of mapping_size.
> > > - * second must be a contiguous set of second level page tables
> > > - * covering the region starting at virt_offset. */
> > > -static void __init create_mappings(lpae_t *second,
> > > -                                   unsigned long virt_offset,
> > > -                                   unsigned long base_mfn,
> > > -                                   unsigned long nr_mfns,
> > > -                                   unsigned int mapping_size)
> > > -{
> > > -    unsigned long i, count;
> > > -    const unsigned long granularity = mapping_size >> PAGE_SHIFT;
> > > -    lpae_t pte, *p;
> > > -
> > > -    ASSERT((mapping_size == MB(2)) || (mapping_size == MB(32)));
> > > -    ASSERT(!((virt_offset >> PAGE_SHIFT) % granularity));
> > > -    ASSERT(!(base_mfn % granularity));
> > > -    ASSERT(!(nr_mfns % granularity));
> > > -
> > > -    count = nr_mfns / LPAE_ENTRIES;
> > > -    p = second + second_linear_offset(virt_offset);
> > > -    pte = mfn_to_xen_entry(_mfn(base_mfn), MT_NORMAL);
> > > -    if ( granularity == 16 * LPAE_ENTRIES )
> > > -        pte.pt.contig = 1;  /* These maps are in 16-entry contiguous
> > > chunks. */
> > > -    for ( i = 0; i < count; i++ )
> > > -    {
> > > -        write_pte(p + i, pte);
> > > -        pte.pt.base += 1 << LPAE_SHIFT;
> > > -    }
> > > -    flush_xen_tlb_local();
> > > -}
> > > -
> > >   #ifdef CONFIG_DOMAIN_PAGE
> > >   void *map_domain_page_global(mfn_t mfn)
> > >   {
> > > @@ -850,36 +816,18 @@ void __init setup_frametable_mappings(paddr_t ps,
> > > paddr_t pe)
> > >       unsigned long frametable_size = nr_pdxs * sizeof(struct page_info);
> > >       mfn_t base_mfn;
> > >       const unsigned long mapping_size = frametable_size < MB(32) ? MB(2)
> > > : MB(32);
> > > -#ifdef CONFIG_ARM_64
> > > -    lpae_t *second, pte;
> > > -    unsigned long nr_second;
> > > -    mfn_t second_base;
> > > -    int i;
> > > -#endif
> > > +    int rc;
> > >         frametable_base_pdx = mfn_to_pdx(maddr_to_mfn(ps));
> > >       /* Round up to 2M or 32M boundary, as appropriate. */
> > >       frametable_size = ROUNDUP(frametable_size, mapping_size);
> > >       base_mfn = alloc_boot_pages(frametable_size >> PAGE_SHIFT,
> > > 32<<(20-12));
> > >   -#ifdef CONFIG_ARM_64
> > > -    /* Compute the number of second level pages. */
> > > -    nr_second = ROUNDUP(frametable_size, FIRST_SIZE) >> FIRST_SHIFT;
> > > -    second_base = alloc_boot_pages(nr_second, 1);
> > > -    second = mfn_to_virt(second_base);
> > > -    for ( i = 0; i < nr_second; i++ )
> > > -    {
> > > -        clear_page(mfn_to_virt(mfn_add(second_base, i)));
> > > -        pte = mfn_to_xen_entry(mfn_add(second_base, i), MT_NORMAL);
> > > -        pte.pt.table = 1;
> > > -
> > > write_pte(&xen_first[first_table_offset(FRAMETABLE_VIRT_START)+i], pte);
> > > -    }
> > > -    create_mappings(second, 0, mfn_x(base_mfn), frametable_size >>
> > > PAGE_SHIFT,
> > > -                    mapping_size);
> > > -#else
> > > -    create_mappings(xen_second, FRAMETABLE_VIRT_START, mfn_x(base_mfn),
> > > -                    frametable_size >> PAGE_SHIFT, mapping_size);
> > > -#endif
> > > +    /* XXX: Handle contiguous bit */
> > > +    rc = map_pages_to_xen(FRAMETABLE_VIRT_START, base_mfn,
> > > +                          frametable_size >> PAGE_SHIFT,
> > > PAGE_HYPERVISOR_RW);
> > > +    if ( rc )
> > > +        panic("Unable to setup the frametable mappings.\n");
> > 
> > This is a lot better.
> > 
> > I take it that "XXX: Handle contiguous bit" refers to the lack of
> > _PAGE_BLOCK. Why can't we just | _PAGE_BLOCK like in other places?
> 
> I forgot to add _PAGE_BLOCK, however this is unrelated to my comment.
> 
> Currently, the frametable is mapped using 2MB mappings, setting the
> contiguous bit for each entry if the mapping is 32MB aligned.
> 
> _PAGE_BLOCK will only create 2MB mappings but will not set the contiguous bit.
> This will increase the pressure on the TLBs (we would get 16 entries rather
> than 1) on systems where the TLBs can take advantage of it.
> 
> So map_pages_to_xen() needs to gain support for the contiguous bit. I
> haven't yet looked at it (hence the RFC state).

I see, thanks for the explanation.


From xen-devel-bounces@lists.xenproject.org Tue May 18 01:44:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 01:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128570.241357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liom6-0002gp-Ko; Tue, 18 May 2021 01:43:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128570.241357; Tue, 18 May 2021 01:43:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liom6-0002gN-DV; Tue, 18 May 2021 01:43:58 +0000
Received: by outflank-mailman (input) for mailman id 128570;
 Tue, 18 May 2021 01:43:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VpwN=KN=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1liom5-0002gH-Rn
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 01:43:57 +0000
Received: from mail-pl1-x630.google.com (unknown [2607:f8b0:4864:20::630])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 198b57ce-c00f-4e78-9262-3de83e0e873d;
 Tue, 18 May 2021 01:43:56 +0000 (UTC)
Received: by mail-pl1-x630.google.com with SMTP id t21so4197824plo.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 18:43:56 -0700 (PDT)
Received: from ?IPv6:2601:1c2:4f80:d230::1? ([2601:1c2:4f80:d230::1])
 by smtp.gmail.com with ESMTPSA id s3sm12297222pgs.62.2021.05.17.18.43.54
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 18:43:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 198b57ce-c00f-4e78-9262-3de83e0e873d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=KoUioznIzJM7AjTLN+DNXOBvSstctxmIoCBWsVfkzhE=;
        b=Fiw9xJCv3W9YngUlDtQpnpx48h47JG8Uhw23whajawJ1Nio9xseZghoMNc0n3ZXHiE
         bTfSWaTR4bPW3RyLtn9+BV5n26dLAIIfXoluot0RsOYx991hBezY6Xly+eMCJo9n2o4w
         wHldKOH0rhNq2AI2QmXLdU6wi0AXJxFmLbjth55x1nyRrGbuKYL+c+b3Kunz41bqmjF+
         nj7K2aOuo/HCC5rHyeM8IookIceD1lIWMRsJQkiwWpYSlfEywerJAnFHVFWYDYZ1xXFK
         MOI0v2JC6aqvPw3h5ecqMUtH/JP/H8swv0xtG2vXP+St6otPr7FJufSr85hoIdyOT37r
         5img==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=KoUioznIzJM7AjTLN+DNXOBvSstctxmIoCBWsVfkzhE=;
        b=cFkiQKbkhaCmvOKx4g68zsVVamq77NeVmBkbFqpJ007pC8q9qvOxDMRT3doFFOEpZE
         hub1oY7fgTeUN6nQYx+j7zU05qBj55oL+w/EYqwEwVV2/TKLiIeReeU+LN+WD23/cINL
         u4vzFNyvuZnvzh1Ft4jZeeQ1qWmo+h9LRL7McPp9ddWNNo1Q9Kh9694p4W5tIjMVuziF
         6mDqmz7Mfcf1+WR5XKN2wDLV6KmnDe3v58Ob1hwmJqYEnN5amJGow4VzfNIcbz9FTB3m
         oMjaIvCxC9a9ipcIwFIUEffGvWLa7coUoBZRduVRKn/lgl+K1BfcgyRL3HkustpY5Y97
         AYLw==
X-Gm-Message-State: AOAM531wJORdOImMtHs+dwhjLiVLld9gG+R7HJRGMLhnyfmFk3hLP3wv
	awRrlvyivzxZ73A88fQcnyk=
X-Google-Smtp-Source: ABdhPJy7aMZWkLq9+eZqwm+r2bt9FSjZ9qeJ/RttoyarJp28gJ2EFqTWvTn7nihKnzBC/o7ZDNFySw==
X-Received: by 2002:a17:902:d4c6:b029:ef:80f3:c543 with SMTP id o6-20020a170902d4c6b02900ef80f3c543mr1674170plg.85.1621302235890;
        Mon, 17 May 2021 18:43:55 -0700 (PDT)
Subject: Re: [PATCH v3 4/5] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1621017334.git.connojdavis@gmail.com>
 <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
 <97815ecd-7335-9c85-5df8-802ed119d518@gmail.com>
 <fcc06468-3249-6e6a-dfbd-fc9f69a3de2b@gmail.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <bbfaf1d6-5d2c-1a79-6237-c42083635d77@gmail.com>
Date: Mon, 17 May 2021 18:43:53 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <fcc06468-3249-6e6a-dfbd-fc9f69a3de2b@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 5/14/21 4:47 PM, Connor Davis wrote:
> 
> On 5/14/21 3:53 PM, Bob Eshleman wrote:
>> On 5/14/21 11:53 AM, Connor Davis wrote:
>>
>>> +
>>> +#ifdef CONFIG_RISCV_64
>>> +
>>> +/*
>>> + * RISC-V Layout:
>>> + *   0x0000000000000000 - 0x0000003fffffffff (256GB, L2 slots [0-255])
>>> + *     Unmapped
>>> + *   0x0000004000000000 - 0xffffffbfffffffff
>>> + *     Inaccessible: sv39 only supports 39-bit sign-extended VAs.
>>> + *   0xffffffc000000000 - 0xffffffc0001fffff (2MB, L2 slot [256])
>>> + *     Unmapped
>>> + *   0xffffffc000200000 - 0xffffffc0003fffff (2MB, L2 slot [256])
>>> + *     Xen text, data, bss
>>> + *   0xffffffc000400000 - 0xffffffc0005fffff (2MB, L2 slot [256])
>>> + *     Fixmap: special-purpose 4K mapping slots
>>> + *   0xffffffc000600000 - 0xffffffc0009fffff (4MB, L2 slot [256])
>>> + *     Early boot mapping of FDT
>>> + *   0xffffffc000a00000 - 0xffffffc000bfffff (2MB, L2 slot [256])
>>> + *     Early relocation address, used when relocating Xen and later
>>> + *     for livepatch vmap (if compiled in)
>>> + *   0xffffffc040000000 - 0xffffffc07fffffff (1GB, L2 slot [257])
>>> + *     VMAP: ioremap and early_ioremap
>>> + *   0xffffffc080000000 - 0xffffffc13fffffff (3GB, L2 slots [258..260])
>>> + *     Unmapped
>>> + *   0xffffffc140000000 - 0xffffffc1bfffffff (2GB, L2 slots [261..262])
>>> + *     Frametable: 48 bytes per page for 133GB of RAM
>>> + *   0xffffffc1c0000000 - 0xffffffe1bfffffff (128GB, L2 slots [263..390])
>>> + *     1:1 direct mapping of RAM
>>> + *   0xffffffe1c0000000 - 0xffffffffffffffff (121GB, L2 slots [391..511])
>>> + *     Unmapped
>>> + */
>>> +
>> What is the benefit of moving the layout up to 0xffffffc000000000?
> 
> I thought it made the most sense to use the upper half since Xen is
> privileged and privileged code is typically mapped in the upper half, at
> least on x86. I'm happy to move it down if that would be preferred.
> 
> 

I do like the idea of following norms, but I prefer following the ARM norm
over the x86 norm unless there is a technical reason not to, simply because
ARM and RISC-V have much more overlap than x86 and RISC-V. In this case,
all things being equal, I'd prefer following the ARM model and using the
lower half. I definitely like adding the note on the inaccessible region.

Thanks,

-- 
Bobby Eshleman
SE at Vates SAS


From xen-devel-bounces@lists.xenproject.org Tue May 18 03:58:41 2021
Subject: Re: [PATCH v3 3/5] xen: Fix build when !CONFIG_GRANT_TABLE
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <834f7995ae80a3b37b6d508d1c989b4ee391f61b.1621017334.git.connojdavis@gmail.com>
 <b56a1cfd-46ab-c601-883c-73537dfaac92@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <9b231486-bdf8-d6a7-c9c6-126d5bc207f8@gmail.com>
Date: Mon, 17 May 2021 21:58:43 -0600
In-Reply-To: <b56a1cfd-46ab-c601-883c-73537dfaac92@suse.com>


On 5/17/21 5:22 AM, Jan Beulich wrote:
> On 14.05.2021 20:53, Connor Davis wrote:
>> Move struct grant_table; in grant_table.h above
>> ifdef CONFIG_GRANT_TABLE. This fixes the following:
>>
>> /build/xen/include/xen/grant_table.h:84:50: error: 'struct grant_table'
>> declared inside parameter list will not be visible outside of this
>> definition or declaration [-Werror]
>>     84 | static inline int mem_sharing_gref_to_gfn(struct grant_table *gt,
>>        |
> There must be more to this, as e.g. the PV shim does get built with
> !GRANT_TABLE. Nevertheless, ...
>
Can you elaborate? I tested all defconfigs with and without grant tables
enabled on x86 and ARM, and they all build fine.

>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ... since the potential of breaking the build is obvious enough,
> Acked-by: Jan Beulich <jbeulich@suse.com>
>
> Jan


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Tue May 18 04:05:01 2021
Subject: Re: [PATCH v3 4/5] xen: Add files needed for minimal riscv build
To: Bob Eshleman <bobbyeshleman@gmail.com>, xen-devel@lists.xenproject.org
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <cover.1621017334.git.connojdavis@gmail.com>
 <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
 <97815ecd-7335-9c85-5df8-802ed119d518@gmail.com>
 <fcc06468-3249-6e6a-dfbd-fc9f69a3de2b@gmail.com>
 <bbfaf1d6-5d2c-1a79-6237-c42083635d77@gmail.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <07b22852-5e80-695b-2877-bc6ecd03d35c@gmail.com>
Date: Mon, 17 May 2021 22:05:16 -0600
In-Reply-To: <bbfaf1d6-5d2c-1a79-6237-c42083635d77@gmail.com>


On 5/17/21 7:43 PM, Bob Eshleman wrote:
> On 5/14/21 4:47 PM, Connor Davis wrote:
>> On 5/14/21 3:53 PM, Bob Eshleman wrote:
>>> On 5/14/21 11:53 AM, Connor Davis wrote:
>>>
>>>> +
>>>> +#ifdef CONFIG_RISCV_64
>>>> +
>>>> +/*
>>>> + * RISC-V Layout:
>>>> + *   0x0000000000000000 - 0x0000003fffffffff (256GB, L2 slots [0-255])
>>>> + *     Unmapped
>>>> + *   0x0000004000000000 - 0xffffffbfffffffff
>>>> + *     Inaccessible: sv39 only supports 39-bit sign-extended VAs.
>>>> + *   0xffffffc000000000 - 0xffffffc0001fffff (2MB, L2 slot [256])
>>>> + *     Unmapped
>>>> + *   0xffffffc000200000 - 0xffffffc0003fffff (2MB, L2 slot [256])
>>>> + *     Xen text, data, bss
>>>> + *   0xffffffc000400000 - 0xffffffc0005fffff (2MB, L2 slot [256])
>>>> + *     Fixmap: special-purpose 4K mapping slots
>>>> + *   0xffffffc000600000 - 0xffffffc0009fffff (4MB, L2 slot [256])
>>>> + *     Early boot mapping of FDT
>>>> + *   0xffffffc000a00000 - 0xffffffc000bfffff (2MB, L2 slot [256])
>>>> + *     Early relocation address, used when relocating Xen and later
>>>> + *     for livepatch vmap (if compiled in)
>>>> + *   0xffffffc040000000 - 0xffffffc07fffffff (1GB, L2 slot [257])
>>>> + *     VMAP: ioremap and early_ioremap
>>>> + *   0xffffffc080000000 - 0xffffffc13fffffff (3GB, L2 slots [258..260])
>>>> + *     Unmapped
>>>> + *   0xffffffc140000000 - 0xffffffc1bfffffff (2GB, L2 slots [261..262])
>>>> + *     Frametable: 48 bytes per page for 133GB of RAM
>>>> + *   0xffffffc1c0000000 - 0xffffffe1bfffffff (128GB, L2 slots [263..390])
>>>> + *     1:1 direct mapping of RAM
>>>> + *   0xffffffe1c0000000 - 0xffffffffffffffff (121GB, L2 slots [391..511])
>>>> + *     Unmapped
>>>> + */
>>>> +
>>> What is the benefit of moving the layout up to 0xffffffc000000000?
>> I thought it made the most sense to use the upper half since Xen is privileged
>> and privileged code is typically mapped in the upper half, at least on x86.
>> I'm happy to move it down if that would be preferred.
>>
>>
> I do like the idea of following norms, but I think I prefer following the ARM norm
> over the x86 norm unless there is a technical reason not to, simply because
> ARM and RISC-V have much more overlap than x86 and RISC-V.  In this case,
> all things being equal, I'd prefer following the ARM model and using the lower half.
> I definitely like adding the note on the inaccessible region.

Sounds good, I will move it down.


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Tue May 18 04:10:58 2021
Subject: Re: [PATCH v3 2/5] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
 <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
 <98b429d0-2673-624e-1690-9c0e8373ed5b@xen.org>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <7cf966f6-7ccf-ba63-2b67-129577a7ca53@gmail.com>
Date: Mon, 17 May 2021 22:11:11 -0600
In-Reply-To: <98b429d0-2673-624e-1690-9c0e8373ed5b@xen.org>


On 5/17/21 9:42 AM, Julien Grall wrote:
> Hi Jan,
>
> On 17/05/2021 12:16, Jan Beulich wrote:
>> On 14.05.2021 20:53, Connor Davis wrote:
>>> --- a/xen/common/memory.c
>>> +++ b/xen/common/memory.c
>>> @@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>>       p2m_type_t p2mt;
>>>   #endif
>>>       mfn_t mfn;
>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>       bool *dont_flush_p, dont_flush;
>>> +#endif
>>>       int rc;
>>>     #ifdef CONFIG_X86
>>> @@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
>>>        * Since we're likely to free the page below, we need to suspend
>>>        * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
>>>        */
>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>       dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
>>>       dont_flush = *dont_flush_p;
>>>       *dont_flush_p = false;
>>> +#endif
>>>         rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
>>>   +#ifdef CONFIG_HAS_PASSTHROUGH
>>>       *dont_flush_p = dont_flush;
>>> +#endif
>>>         /*
>>>        * With the lack of an IOMMU on some platforms, domains with DMA-capable
>>> @@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>>       xatp->gpfn += start;
>>>       xatp->size -= start;
>>>   +#ifdef CONFIG_HAS_PASSTHROUGH
>>>       if ( is_iommu_enabled(d) )
>>>       {
>>>          this_cpu(iommu_dont_flush_iotlb) = 1;
>>>          extra.ppage = &pages[0];
>>>       }
>>> +#endif
>>>         while ( xatp->size > done )
>>>       {
>>> @@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>>           }
>>>       }
>>>   +#ifdef CONFIG_HAS_PASSTHROUGH
>>>       if ( is_iommu_enabled(d) )
>>>       {
>>>           int ret;
>>> @@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
>>>           if ( unlikely(ret) && rc >= 0 )
>>>               rc = ret;
>>>       }
>>> +#endif
>>>         return rc;
>>>   }
>>
>> I wonder whether all of these wouldn't better become CONFIG_X86:
>> ISTR Julien indicating that he doesn't see the override getting used
>> on Arm. (Julien, please correct me if I'm misremembering.)
>
> Right, so far I haven't been in favor of introducing it because:
>    1) The P2M code may free some memory, so you can't always ignore
> the flush (and I think it is wrong for the upper layer to have to know
> when this can happen).
>    2) It is unclear what happens if the IOMMU TLBs and the PT contain
> different mappings (I have received conflicting advice).
>
> So it is better to always flush and as early as possible.

So keep it as is or switch to CONFIG_X86?


Thanks,

Connor



From xen-devel-bounces@lists.xenproject.org Tue May 18 04:58:33 2021
Subject: Re: [PATCH v3 4/5] xen: Add files needed for minimal riscv build
To: Jan Beulich <jbeulich@suse.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
 <ce3ff72e-611b-3b9c-96fa-afc9e8767681@suse.com>
From: Connor Davis <connojdavis@gmail.com>
Message-ID: <95399fcf-54b0-828f-b3cb-9332ad779f68@gmail.com>
Date: Mon, 17 May 2021 22:58:42 -0600
In-Reply-To: <ce3ff72e-611b-3b9c-96fa-afc9e8767681@suse.com>



On 5/17/21 5:51 AM, Jan Beulich wrote:
> On 14.05.2021 20:53, Connor Davis wrote:
>> --- /dev/null
>> +++ b/config/riscv64.mk
>> @@ -0,0 +1,5 @@
>> +CONFIG_RISCV := y
>> +CONFIG_RISCV_64 := y
>> +CONFIG_RISCV_$(XEN_OS) := y
> I wonder whether the last one actually gets used anywhere, but I do
> realize that other architectures have similar definitions.
>
>> --- a/xen/Makefile
>> +++ b/xen/Makefile
>> @@ -26,7 +26,9 @@ MAKEFLAGS += -rR
>>   EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
>>   
>>   ARCH=$(XEN_TARGET_ARCH)
>> -SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
>> +SRCARCH=$(shell echo $(ARCH) | \
>> +	  sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
>> +	      -e s'/riscv.*/riscv/g')
> I think in makefiles tab indentation would better be restricted to
> rules. While here it's a minor nit, ...
>
>> @@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
>>   # Set ARCH/SUBARCH appropriately.
>>   export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
>>   export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
>> -                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
>> +                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
>> +			        -e s'/riscv.*/riscv/g')
> ... here you're actually introducing locally inconsistent indentation.
>
>> --- /dev/null
>> +++ b/xen/arch/riscv/Kconfig
>> @@ -0,0 +1,52 @@
>> +config 64BIT
>> +	bool
> I'm afraid this is stale now - the option now lives in xen/arch/Kconfig,
> available to all architectures.
>
>> +config RISCV_64
>> +	bool
>> +	depends on 64BIT
> Such a "depends on" is relatively pointless - they're more important to
> have when there is a prompt.
>
>> +config ARCH_DEFCONFIG
>> +	string
>> +	default "arch/riscv/configs/riscv64_defconfig" if RISCV_64
> For the RISCV_64 dependency to be really useful (at least with the
> command line kconfig), you want its selection to live above the use.
>
>> +menu "Architecture Features"
>> +
>> +source "arch/Kconfig"
>> +
>> +endmenu
>> +
>> +menu "ISA Selection"
>> +
>> +choice
>> +	prompt "Base ISA"
>> +        default RISCV_ISA_RV64IMA
>> +        help
>> +          This selects the base ISA extensions that Xen will target.
>> +
>> +config RISCV_ISA_RV64IMA
>> +	bool "RV64IMA"
>> +        select 64BIT
>> +        select RISCV_64
> I think you want only the latter here, and the former done by RISCV_64
> (or select the former here, and have "default y if 64BIT" at RISCV_64).
> That way, not every party wanting 64-bit has to select both.
>
>> +        help
>> +           Use the RV64I base ISA, plus the "M" and "A" extensions
>> +           for integer multiply/divide and atomic instructions, respectively.
>> +
>> +endchoice
>> +
>> +config RISCV_ISA_C
>> +	bool "Compressed extension"
>> +        help
>> +           Add "C" to the ISA subsets that the toolchain is allowed
>> +           to emit when building Xen, which results in compressed
>> +           instructions in the Xen binary.
>> +
>> +           If unsure, say N.
> For all of the above, please adjust indentation to be consistent with
> (the bulk of) what we have elsewhere.
Will do
>> --- /dev/null
>> +++ b/xen/arch/riscv/arch.mk
>> @@ -0,0 +1,16 @@
>> +########################################
>> +# RISCV-specific definitions
>> +
>> +ifeq ($(CONFIG_RISCV_64),y)
>> +    CFLAGS += -mabi=lp64
>> +endif
> Wherever possible I think we should prefer the list approach:
>
> CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
>
>> --- /dev/null
>> +++ b/xen/arch/riscv/configs/riscv64_defconfig
>> @@ -0,0 +1,12 @@
>> +# CONFIG_SCHED_CREDIT is not set
>> +# CONFIG_SCHED_RTDS is not set
>> +# CONFIG_SCHED_NULL is not set
>> +# CONFIG_SCHED_ARINC653 is not set
>> +# CONFIG_TRACEBUFFER is not set
>> +# CONFIG_DEBUG is not set
>> +# CONFIG_DEBUG_INFO is not set
>> +# CONFIG_HYPFS is not set
>> +# CONFIG_GRANT_TABLE is not set
>> +# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
>> +
>> +CONFIG_EXPERT=y
> These are rather odd defaults, more like for a special-purpose
> config than a general-purpose one. Nothing you turn off here is
> guaranteed to stay off for people actually trying to build
> things, so it's not clear to me what the idea here is. As a
> specific remark, especially during bringup work I think it is
> quite important not to default DEBUG to off: You definitely want
> to see whether any assertions trigger.
The idea was to turn off as much as possible to get a minimal build
(involving xen/common) working. Now that we're focused on only a few
files at a time, though, they could be enabled without adding any
undue burden (at least for now).

Perhaps it would be best to rename the file to include "tiny" or something,
and then add a normal defconfig once things are actually running?

At any rate, agreed on the DEBUG setting, I will enable that.
>> --- /dev/null
>> +++ b/xen/include/asm-riscv/config.h
>> @@ -0,0 +1,110 @@
>> +/******************************************************************************
>> + * config.h
>> + *
>> + * A Linux-style configuration list.
> May I suggest to not further replicate this now inapplicable
> comment? It was already somewhat bogus for Arm to clone from x86.
Sure thing.
>
>> + */
>> +
>> +#ifndef __RISCV_CONFIG_H__
>> +#define __RISCV_CONFIG_H__
>> +
>> +#if defined(CONFIG_RISCV_64)
>> +# define LONG_BYTEORDER 3
>> +# define ELFSIZE 64
>> +#else
>> +# error "Unsupported RISCV variant"
>> +#endif
>> +
>> +#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
>> +#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
>> +#define POINTER_ALIGN  BYTES_PER_LONG
>> +
>> +#define BITS_PER_LLONG 64
>> +
>> +/* xen_ulong_t is always 64 bits */
>> +#define BITS_PER_XEN_ULONG 64
>> +
>> +#define CONFIG_RISCV 1
> This duplicates a "real" Kconfig setting, doesn't it?
Yes, will remove, thanks
>
>> +#define CONFIG_RISCV_L1_CACHE_SHIFT 6
>> +
>> +#define CONFIG_PAGEALLOC_MAX_ORDER 18
>> +#define CONFIG_DOMU_MAX_ORDER      9
>> +#define CONFIG_HWDOM_MAX_ORDER     10
>> +
>> +#define OPT_CONSOLE_STR "dtuart"
>> +
>> +#ifdef CONFIG_RISCV_64
>> +#define MAX_VIRT_CPUS 128u
>> +#else
>> +#error "Unsupported RISCV variant"
>> +#endif
> Could this be folded with the other similar construct further up?
Yep, will do.
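
For reference, the folded construct could look something like this (keeping
the current values):

    #if defined(CONFIG_RISCV_64)
    # define LONG_BYTEORDER 3
    # define ELFSIZE 64
    # define MAX_VIRT_CPUS 128u
    #else
    # error "Unsupported RISCV variant"
    #endif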

Thanks,
Connor


From xen-devel-bounces@lists.xenproject.org Tue May 18 05:08:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:08:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128605.241412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lirxf-0006i9-9r; Tue, 18 May 2021 05:08:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128605.241412; Tue, 18 May 2021 05:08:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lirxf-0006i2-65; Tue, 18 May 2021 05:08:07 +0000
Received: by outflank-mailman (input) for mailman id 128605;
 Tue, 18 May 2021 05:08:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lirxd-0006hw-9g
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:08:05 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.43]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0d7caf99-845f-4c1a-8493-9ff95be76fcf;
 Tue, 18 May 2021 05:08:03 +0000 (UTC)
Received: from DB6PR0802CA0029.eurprd08.prod.outlook.com (2603:10a6:4:a3::15)
 by DB7PR08MB3820.eurprd08.prod.outlook.com (2603:10a6:10:7f::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.27; Tue, 18 May
 2021 05:08:02 +0000
Received: from DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a3:cafe::24) by DB6PR0802CA0029.outlook.office365.com
 (2603:10a6:4:a3::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:08:02 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT029.mail.protection.outlook.com (10.152.20.131) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:08:01 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Tue, 18 May 2021 05:08:01 +0000
Received: from db17a58bebc5.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 60777063-A240-4833-9BCA-4A42F57A272C.1; 
 Tue, 18 May 2021 05:07:55 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id db17a58bebc5.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:07:55 +0000
Received: from DB6PR07CA0021.eurprd07.prod.outlook.com (2603:10a6:6:2d::31) by
 DB7PR08MB3114.eurprd08.prod.outlook.com (2603:10a6:5:1b::30) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25; Tue, 18 May 2021 05:07:53 +0000
Received: from DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:2d:cafe::33) by DB6PR07CA0021.outlook.office365.com
 (2603:10a6:6:2d::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.11 via Frontend
 Transport; Tue, 18 May 2021 05:07:53 +0000
Received: from nebula.arm.com (40.67.248.234) by
 DB5EUR03FT027.mail.protection.outlook.com (10.152.20.121) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:07:53 +0000
Received: from AZ-NEU-EX03.Arm.com (10.251.24.31) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:07:52 +0000
Received: from entos-thunderx2-04.shanghai.arm.com (10.169.212.221) by
 mail.arm.com (10.251.24.31) with Microsoft SMTP Server id 15.1.2176.14 via
 Frontend Transport; Tue, 18 May 2021 05:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d7caf99-845f-4c1a-8493-9ff95be76fcf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b/Mv1T2M0Bx1B+fMN0UAjGi/7MkFiN2JRmd2c51npb8=;
 b=HmB2DtkYoYocC8MaPgmA1kB1ldDSiRP0/ruhomrRpX3e68I7Bcgz0A7iSYTHwWLv720uwBB1LCaGKoNWfenruWTfNl7J5fMo8Y4vd39pTgCfq8JvoTw0xaFxpadYGMFMMyXKdrFr3jYMhK1ahZTQRQYi329rd5vN/UcuOXCT96U=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 6f2b5464e4b0ed90
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=DWB92OENDXoY25hzRFHYLUNn3SYwapYJ03OscNjEHUpaZEucBkpX6Z2DtlxQ/Kankdtb8xfpHIjuLArVxgROTfxiu/+onFO+qnDsBjyw89kn3LyFzljKdvdKvq+/auD4dve8prgIxu1ESumnyYv1mjrW2BIpY4UPl/rRJdomb2So0DbwthVbEgjMYE5bd1qYlaUduJFZBgQ90Rkg45bYi2mImD+L3jBB9n29qMjmeVXq3N5lEP6x2NSQ11Fp4v8vCHKolG827l/JiPv6k+3SmapvFW4oj4tZwntwXjV/YpjkiLzTzBExVuujFR+JVt2wfsihn3jIoa4IGR7hREAb3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b/Mv1T2M0Bx1B+fMN0UAjGi/7MkFiN2JRmd2c51npb8=;
 b=ZlmugA+oKKzq5Ix+vl9kdgXNXJdA7TOlM8gF5NTai00A+lfyyuhdDVtqdgvJyNELIYPsTozCvEKzZYk13GDZaZqRl077+yGodKKmQJYjd0+hpxntBazYc078nzBENnZB+51mJaroAOlFcXuHgur8QBbtzGa1XKMEG3XAfIoMkuGZcRvdjqU0jmOdGiWlG0fAv6+jXiTybqmOLPXLbfL4LGJWOzbg10r36B6knbx5VkDL2MaIUaLJxzf7oGRNdWElcigf7eajChN2jghUrFLPpQcJdi+lGa0/qCJW/XjNXb5hYN95m858mSE86jM2Q6NvSptYoh4NejHm7v5/nxsg4g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b/Mv1T2M0Bx1B+fMN0UAjGi/7MkFiN2JRmd2c51npb8=;
 b=HmB2DtkYoYocC8MaPgmA1kB1ldDSiRP0/ruhomrRpX3e68I7Bcgz0A7iSYTHwWLv720uwBB1LCaGKoNWfenruWTfNl7J5fMo8Y4vd39pTgCfq8JvoTw0xaFxpadYGMFMMyXKdrFr3jYMhK1ahZTQRQYi329rd5vN/UcuOXCT96U=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation
Date: Tue, 18 May 2021 01:07:38 -0400
Message-ID: <20210518050738.163156-1-penny.zheng@arm.com>
X-Mailer: git-send-email 2.17.1
MIME-Version: 1.0
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: dbe0e276-d68f-48e0-4323-08d919bae944
X-MS-TrafficTypeDiagnostic: DB7PR08MB3114:|DB7PR08MB3820:
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB38203B47F5853FDB9E0D1AA3F72C9@DB7PR08MB3820.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 DrpkQxLy/IdLBKaLAulstC33bldfutzmHbyWk8aC1usWQ/iOZh3GFEW+sk9/cGDNBIdw0inV8C2KZKs0zJjcA13J9WQxDVTzQDrVkplRgavwkqphWd23KQAFnxMZpVaZgrCZRRZ9glxV0tswQHQPY5g7V9yyqPuRSBdgeQPmPk1+x7espgll6BEydH6NkOl725iW2Uh1xhg4i7Kt70ryrzr3mtJ0FBPy4Qzx9jY7mXhTJRv/VesDdnmvYkI9MB9yI1yQiz9HXAze4Yui414VxwJn8HT8EvUMxFCkuDDGguz/NhDDymd3slwc6p+aCluMIiVjZcQtqLCSt8Wl5po3GoO1fUVeKr+PflfRy6oi6j75EjgX4soCvm93tnGyVl4ECZh9O2+6rDN+0MCPMnrxjkKO5MjNSTNXFTx2khbPB9Cp2eZa0gmLtRgk8n023mfUOAeEfMqa3d0fgLFDZftedJ/GOMxB89GzXwhy/urWk9Bzx25S/JcdtfbX5PExrEJoC/F+iWGRCYZ831PSXPDhIv23fwK3D1datqnDuvjgncbddCnvZz1XtAJMbbLKcLobnzYX4HZCz6xMbhXi1Miehkq2RuwqvFVzpd0xb+McpEEV0uE14CvtsH72Ul/dSRxyWgJLLHASKwSAsIvg7WDY6KxhEl2G9CAoXS8vASrn4w4gGyVmOdWm20h4a0gE5tRmyJrrGvQG1l2pb+iWkdn/Y6Qnq97/sQZHigxBxuj1fjd02Lr3PxDhFpaC2PLdQp/C6/Y1lZLfD0x0xnE79MlG3A==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(346002)(396003)(39850400004)(376002)(136003)(46966006)(36840700001)(82310400003)(7696005)(70206006)(186003)(478600001)(4326008)(82740400003)(5660300002)(70586007)(1076003)(8936002)(86362001)(36756003)(336012)(2616005)(6666004)(316002)(44832011)(8676002)(47076005)(426003)(26005)(83380400001)(54906003)(81166007)(36860700001)(110136005)(2906002)(356005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3114
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	856b2ac4-762c-4f04-0981-08d919bae432
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	b03BZI//UOyKfD9IqWjp0hbcEzq1g1QlNocVWhhvMQ4qzPdWekz3jYZWy7fworKzOh3jNOELAyRTAsu7mfrnnM+sVG6pNFfvvRyZiQRCDcDNa93xLPrrZLiYY3CtYWx65m0NLKcmsUU3kykv3xfvT/CNj0oaNYZ/+ak9DGVkwsFCcA1QF/qBO3AT53JSbDjdwpXkzUmr3XfSHBb6jCjsfsCNBlF9VxHeUXYrzg30bpygkRIaFHCQR1J7sgSO2EHQmgW9RZD/cPLkZyfJuELtO2w+dbTE5dumKbFcVqFzxKXIZBnt21GAyulPNzoTSlLAvimiZAoJaT3b2CbkpEUpqGgJh0gZYmIQEUIgdQh8l9+nhyl29ZeI9Q6Ej5xFOtKgMaZ6FqwaiuzY8VE3orFnOYPmpouzDzmvloyLYi7/Ikx8lz5dwUc9zvXbDvxL6gcO60R0Mu9U8CnKDzZV5YG3oJ6r/KO6eCfUAbbp2MDGMrXDz1T6BK5/xFGDo+q+UweNK83uyIlozu6X7raBnzNGBjxrLTmqLLp77WnrL3iNZFUBKgXfIT3jzaeBZ5HVqMeRXzsPNY37ehhfHR71dGl0G3dHWyji+H0q8EoknKPxt8bNkdDmQ/v8VH7pcO+324jfNgfFIvDlZTedeOGgC+Ts73NApnGPH4yWrCQv+T4SdSMp7GiDrePPA/F+Rg84Iexts0RSDTfWAytSAhV5/I9X7HLUg8zoXSXPyGbJlweRrRE=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(136003)(376002)(39850400004)(36840700001)(46966006)(478600001)(8936002)(54906003)(110136005)(316002)(83380400001)(8676002)(2616005)(81166007)(82740400003)(336012)(186003)(26005)(7696005)(36756003)(5660300002)(1076003)(4326008)(36860700001)(47076005)(6666004)(82310400003)(44832011)(426003)(70206006)(70586007)(2906002)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:08:01.7722
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: dbe0e276-d68f-48e0-4323-08d919bae944
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT029.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3820

Create one design doc for 1:1 direct-map and static allocation.
It is the first draft and aims to describe why and how we allocate
1:1 direct-map (guest physical == physical) domains, and why and how
we put domains on static allocation.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 docs/designs/static_alloc_and_direct_map.md | 239 ++++++++++++++++++++
 1 file changed, 239 insertions(+)
 create mode 100644 docs/designs/static_alloc_and_direct_map.md

diff --git a/docs/designs/static_alloc_and_direct_map.md b/docs/designs/static_alloc_and_direct_map.md
new file mode 100644
index 0000000000..fdda162188
--- /dev/null
+++ b/docs/designs/static_alloc_and_direct_map.md
@@ -0,0 +1,239 @@
+# Preface
+
+This document is an early draft covering the 1:1 direct-map memory layout
+(`guest physical == physical`) for domUs, and static allocation.
+Since the implementations of these two features overlap considerably, we
+would like to introduce both in one design.
+
+Right now, these two features are limited to the Arm architecture.
+
+This design aims to describe why and how a guest would be created as a 1:1
+direct-map domain, and why and what static allocation is.
+
+This document is partly based on Stefano Stabellini's patch series v1:
+[direct-map DomUs](
+https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).
+
+This is a first draft and some questions are still unanswered. Where this is
+the case, they are noted under the `DISCUSSION` chapters.
+
+# Introduction on Static Allocation
+
+Static allocation refers to a system or sub-system (domains) for which memory
+areas are pre-defined in the configuration, using physical address ranges.
+
+## Background
+
+Cases where static allocation is needed:
+
+  * Static allocation is needed whenever a system has pre-defined, non-changing
+behaviour. This is usually the case in the safety world, where the system must
+behave the same upon every reboot, so memory resources for both Xen and domains
+should be static and pre-defined.
+
+  * Static allocation is needed whenever a guest wants to allocate memory
+from specific memory ranges. For example, a system has one high-speed RAM
+region and would like to assign it to one specific domain.
+
+  * Static allocation is needed whenever a system needs a guest restricted to
+some known memory area due to hardware limitations. For example, some devices
+can only do DMA to a specific part of memory.
+
+Limitations:
+  * There is no consideration for PV devices at the moment.
+
+## Design on Static Allocation
+
+Static allocation refers to a system or sub-system (domains) for which memory
+areas are pre-defined in the configuration, using physical address ranges.
+
+This pre-defined memory -- static memory -- consists of parts of RAM reserved
+at the beginning, which shall never be handed to the heap allocator or boot
+allocator for any use.
+
+### Static Allocation for Domains
+
+### New Device Tree Node: `xen,static-mem`
+
+This introduces a new `xen,static-mem` node to define static memory regions
+for one specific domain.
+
+For domains on static allocation, users need to pre-define guest RAM regions
+in the configuration, through a `xen,static-mem` node under the appropriate
+`domUx` node.
+
+Here is one example:
+
+
+        domU1 {
+            compatible = "xen,domain";
+            #address-cells = <0x2>;
+            #size-cells = <0x2>;
+            cpus = <2>;
+            xen,static-mem = <0x0 0xa0000000 0x0 0x20000000>;
+            ...
+        };
+
+The 512MB of RAM at 0xa0000000 is static memory reserved for domU1 as its RAM.
+
+### New Page Flag: `PGC_reserved`
+
+In order to differentiate pages reserved as static memory from those which
+are allocated from the heap allocator for normal domains, we introduce a new
+page flag, `PGC_reserved`.
+
+Pages are granted `PGC_reserved` when static memory is initialized.
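+
+As a sketch, the new flag could be defined alongside the existing `PGC_*`
+flags (the exact bit chosen here is illustrative, not final):
+
+
+        /* Page is reserved as static memory. */
+        #define _PGC_reserved     PG_shift(4)
+        #define PGC_reserved      PG_mask(1, 4)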
+
+### New linked page list: `reserved_page_list` in `struct domain`
+
+Right now, for normal domains, pages allocated from the heap allocator as
+guest RAM are inserted into the linked page list `page_list` on assignment
+to the domain, for later management.
+
+In order to tell them apart, pages allocated from static memory shall instead
+be inserted into a different linked page list, `reserved_page_list`.
+
+Later, when the domain is destroyed and its memory relinquished, only pages in
+`page_list` go back to the heap; pages in `reserved_page_list` shall not.
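+
+As a sketch, the relevant fields in `struct domain` would be (only the
+`reserved_page_list` field is new; other fields are elided):
+
+
+        struct domain {
+            ...
+            struct page_list_head page_list;          /* heap-allocated RAM */
+            struct page_list_head reserved_page_list; /* static memory */
+            ...
+        };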
+
+### Memory Allocation for Domains on Static Allocation
+
+RAM regions pre-defined as static memory for one specific domain shall be
+parsed and reserved from the beginning, and shall never go to any memory
+allocator for any use.
+
+Later, when allocating static memory for this specific domain, after acquiring
+those reserved regions, users need to do a set of verifications before
+assigning.
+For each page there, it at least includes the following steps:
+1. Check if it is in free state and has zero reference count.
+2. Check if the page is reserved(`PGC_reserved`).
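+
+The checks above could be sketched as follows (the helper name is
+hypothetical; the masks follow the existing page state/count flags):
+
+
+        /* A page is suitable for static allocation only if it is free,
+         * has a zero reference count, and carries PGC_reserved. */
+        static bool page_ok_for_static_alloc(const struct page_info *pg)
+        {
+            return page_state_is(pg, free) &&
+                   (pg->count_info & PGC_count_mask) == 0 &&
+                   (pg->count_info & PGC_reserved);
+        }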
+
+Then these pages are assigned to the specific domain, and all of them are
+inserted into the new linked page list `reserved_page_list`.
+
+At last, the guest P2M mapping is set up. By default, it shall be mapped to
+the fixed guest RAM addresses `GUEST_RAM0_BASE` and `GUEST_RAM1_BASE`, just
+like normal domains. But later, in the 1:1 direct-map design, if `direct-map`
+is set, the guest physical address will be equal to the physical address.
+
+### Static Allocation for Xen itself
+
+### New Device Tree Node: `xen,reserved-heap`
+
+Static memory for the Xen heap refers to parts of RAM reserved at the
+beginning for Xen heap use only. The memory is pre-defined through the Xen
+configuration using physical address ranges.
+
+The reserved memory for Xen heap is an optional feature and can be enabled
+by adding a device tree property in the `chosen` node. Currently, this feature
+is only supported on AArch64.
+
+Here is one example:
+
+
+        chosen {
+            xen,reserved-heap = <0x0 0x30000000 0x0 0x40000000>;
+            ...
+        };
+
+The 1GB of RAM at 0x30000000 will be reserved as heap memory. Later, the heap
+allocator will allocate memory only from this specific region.
+
+# Introduction on 1:1 direct-map
+
+## Background
+
+Cases where a domU needs a 1:1 direct-map memory map:
+
+  * IOMMU not present in the system.
+  * The IOMMU is disabled because it doesn't cover a specific device and all
+the guests are trusted. Consider a mixed scenario, with a few devices behind
+the IOMMU and a few without: guest DMA security still cannot be totally
+guaranteed, so users may want to disable the IOMMU to at least gain some
+performance improvement.
+  * The IOMMU is disabled as a workaround when it doesn't have enough
+bandwidth. To be specific, in a few extreme situations, when multiple devices
+do DMA concurrently, these requests may exceed the IOMMU's transmission
+capacity.
+  * The IOMMU is disabled when it adds too much latency on DMA. For example,
+the TLB may be missing in some IOMMU hardware, which may add latency to DMA
+processing, so users may want to disable it in some real-time scenarios.
+
+*WARNING:
+Users should be aware that it is not always secure to assign a device without
+IOMMU/SMMU protection.
+When the device is not protected by the IOMMU/SMMU, the administrator should
+make sure that:
+ 1. The device is assigned to a trusted guest.
+ 2. Users have additional security mechanisms on the platform.
+
+Limitations:
+  * There is no consideration for PV devices at the moment.
+
+## Design on 1:1 direct-map
+
+Here, only 1:1 direct-map with user-defined memory regions is supported.
+
+The implementation may cover the following aspects:
+
+### Native Address and IRQ numbers for GIC and UART(vPL011)
+
+Today, fixed addresses and IRQ numbers are used to map the GIC and UART
+(vPL011) in DomUs, which may cause clashes in 1:1 direct-map domains.
+So, using native addresses and IRQ numbers for the GIC and UART (vPL011)
+in 1:1 direct-map domains is necessary.
+
+For the virtual interrupt of vPL011: instead of always using
+`GUEST_VPL011_SPI`, try to reuse the physical SPI number if possible.
+
+### New Device Tree Node: `direct-map` Option
+
+Introduce a new option `direct-map` for 1:1 direct-map domains.
+
+When users allocate a 1:1 direct-map domain, the `direct-map` property needs
+to be added under the appropriate `/chosen/domUx`. For now, since only 1:1
+direct-map with user-defined memory regions is supported, users must choose
+RAM banks as 1:1 direct-map guest RAM through `xen,static-mem`, which has
+been elaborated before in chapter `New Device Tree Node: `xen,static-mem``.
+
+Here is one example of allocating a 1:1 direct-map domain:
+
+
+            chosen {
+                ...
+                domU1 {
+                    compatible = "xen,domain";
+                    #address-cells = <0x2>;
+                    #size-cells = <0x2>;
+                    cpus = <2>;
+                    vpl011;
+                    direct-map;
+                    xen,static-mem = <0x0 0x30000000 0x0 0x40000000>;
+                    ...
+                };
+                ...
+            };
+
+DomU1 is a 1:1 direct-map domain with 1GB of reserved RAM at 0x30000000.
+
+### Memory Allocation for 1:1 direct-map Domain
+
+Implementing memory allocation for a 1:1 direct-map domain includes two parts:
+static allocation for the domain, and the 1:1 direct-map itself.
+
+The first part has been elaborated before in chapter `Memory Allocation for
+Domains on Static Allocation`. Then, to ensure the 1:1 direct-map, when
+setting up the guest P2M mapping, it needs to be made sure that the guest
+physical address is equal to the physical address (`gfn == mfn`).
+
+*DISCUSSION:
+
+  * Here, only booting one domain on static allocation or on 1:1 direct-map
+through the device tree is supported; is `xl` support also needed?
+
+  * Here, only 1:1 direct-map domains with user-defined memory regions are
+supported; are 1:1 direct-map domains with arbitrary memory regions also
+needed? We had quite a discussion [here](
+https://patchew.org/Xen/20201208052113.1641514-1-penny.zheng@arm.com/). In
+order to mitigate guest memory fragmentation, we introduce a static memory
+pool (same implementation as `xen,reserved-heap`) and a static memory
+allocator (a new linear memory allocator, much like the boot allocator). This
+new allocator also applies to MPU systems, so I may create a new design doc
+to elaborate on this.
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:21:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:21:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128613.241423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisAx-0000Xf-Is; Tue, 18 May 2021 05:21:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128613.241423; Tue, 18 May 2021 05:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisAx-0000XY-Fq; Tue, 18 May 2021 05:21:51 +0000
Received: by outflank-mailman (input) for mailman id 128613;
 Tue, 18 May 2021 05:21:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisAv-0000XS-FI
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:21:49 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.3.64]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78af3743-edf4-4978-a783-2edaca25601b;
 Tue, 18 May 2021 05:21:46 +0000 (UTC)
Received: from AM6PR08CA0007.eurprd08.prod.outlook.com (2603:10a6:20b:b2::19)
 by AM0PR08MB3330.eurprd08.prod.outlook.com (2603:10a6:208:5c::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Tue, 18 May
 2021 05:21:43 +0000
Received: from AM5EUR03FT024.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:b2:cafe::e2) by AM6PR08CA0007.outlook.office365.com
 (2603:10a6:20b:b2::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:43 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT024.mail.protection.outlook.com (10.152.16.175) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:43 +0000
Received: ("Tessian outbound 0f1e4509c199:v92");
 Tue, 18 May 2021 05:21:43 +0000
Received: from c6bec359284a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 72016849-DD8E-447A-AF69-C5D4479BD6E6.1; 
 Tue, 18 May 2021 05:21:36 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c6bec359284a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:21:36 +0000
Received: from AM6P192CA0065.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:82::42)
 by DB8PR08MB5404.eurprd08.prod.outlook.com (2603:10a6:10:117::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Tue, 18 May
 2021 05:21:34 +0000
Received: from VE1EUR03FT018.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:82:cafe::bb) by AM6P192CA0065.outlook.office365.com
 (2603:10a6:209:82::42) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:34 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT018.mail.protection.outlook.com (10.152.18.135) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:34 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:33 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.14; Tue, 18
 May 2021 05:21:33 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78af3743-edf4-4978-a783-2edaca25601b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=du5OP3mWpKQtM/QoyT2ZsDEjLUVsC0mPKC4RWPNn37o=;
 b=tZjWizBbTesSpribmwEqGTbBsIOJMbGGEhrrtIeC+9Npwub6mARxvcjvgcDjkzas9iOo6fZc3Sv6RCOYmU2VBVAlyJBNw8nmKBd6TFCikBTf+ysRvgw3YMMa38Z14RaWAKrOcjkPO+xWbVp70+CNiWBjepgPPR7bWBjb7eurGI0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 82aa9357170c4fdc
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=UpES8RNL1KPyWVL8RKVqVOzPurftiNp94elIgFVM0AUvLBH7cJfnQPrPaGlVWPlRxz0B7dBCq7kBKKZmvTYGtofk3PeKff9WIg9FxpFyB9X50c3Ukd6VtofuEVVGn4QyjLTwEhSCLoivv1vu+YnpX87o1eM5A8b+sFA1K7J6hClrsUJbVjIguCdz/oo5/ZK53NQiUH6rleclBpq6BJL94tFd69p6n8O3f2KIIi3BTrQB4HAnREHOEu/lHLdrxZS6OCPS1rdDd8MwblZZCg2LMOx0N6bh7n6UXCt6hqUPe4iOGprFVBk47IWpdc7K1oIzc1R5Ec5zFDYh/rxsYjDuWA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=du5OP3mWpKQtM/QoyT2ZsDEjLUVsC0mPKC4RWPNn37o=;
 b=bv3vj4HuFRpRMePJBoMkzjXnOwjSMojsln3VQWQQKx0eSblTk0RbXNF3wKJeDRxPY3Sf3BoT5rByPQCgCcg/sXomjPIMEgM087TOqd5ku3RexINOaZgQyrz/7GSmHVmxqYfETKOv2kvf7PZ+ZI38CH1j38jJ7dBK8364wc+yJdWfYw68N4TUwFqJeQmupMp8hy44o27lTG+aH9u8q8qEXxR9XGWFlmUN6d/ziWniNYWXa1laVPLpihUiYGdjHofg162yaX+Vo8yuPFNuP+gZ13p8d+kQnRLC4mNzUOuTRb8zc4jgw1k1V/+C8xLysU7Je72EtuqwffZec3NWynFAOQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=du5OP3mWpKQtM/QoyT2ZsDEjLUVsC0mPKC4RWPNn37o=;
 b=tZjWizBbTesSpribmwEqGTbBsIOJMbGGEhrrtIeC+9Npwub6mARxvcjvgcDjkzas9iOo6fZc3Sv6RCOYmU2VBVAlyJBNw8nmKBd6TFCikBTf+ysRvgw3YMMa38Z14RaWAKrOcjkPO+xWbVp70+CNiWBjepgPPR7bWBjb7eurGI0=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Date: Tue, 18 May 2021 05:21:04 +0000
Message-ID: <20210518052113.725808-2-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 76e46bd6-cf87-43e2-95dd-08d919bcd2ed
X-MS-TrafficTypeDiagnostic: DB8PR08MB5404:|AM0PR08MB3330:
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB3330A50662E33B29DD11887DF72C9@AM0PR08MB3330.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 F9SiYjmWZKYsM3s96cl/mX/fsp1oReI9T7a5wXs2Ny/cCmjDOr23X5otT6BGv7KVjnR+b2qBh0ibXgrPOd+eEijoazT5upDgCA+8LN6z0NqMn+YwiIRHMfcw6svNOBsksOC+Kyw04gqgQR2jhcL7sJXWWg9k6M3A9mJtXYgmHD8LD3VL3LZ60VH/9IjOHI3bzCfAbLpERbkx5/bW64m2ylBI4LKtdlnh5gbMYGt11RH12PmP+WypwtLGfjVqKfKI7jRJERexwzOhr3ArEkKVlOS5wDbPAR5GTwW53zuZF3idQWRquVQUCtM0VUvRxrXTcnKsrxaNddh2SK6Cnh8ZG0sfu1l09eRgdym8Q+bA4fl/+BIXstAqpVATJYHxhYU0BTHmmJ7X5cZoMlGwNTJgedVqzZ7OqkFETPUDrdQ6azArK//RtUFMMRcC55BDeKl/bhGa5E5BQCiF4/rzmv0SQWjTmpijDCCDZZDfAOtoVe0jTfRub1/b+bS7tOkGxPssHj+HGedLhZJPKHn8Yv5LpuoZQRHCS0hX6+sT3ovhLaUByrsh8/KcogvanxwzexjAFwEnn4b1EmZG/iWymD9UmpYno9CZrP4pTzWOR6iPQ4J5vp3WAasmRLInTpqZ+5QMbmsmm4sSPhdEGDleUjQtBYcoOlEtCrqc0Td4/dya//W3FNYd6sW1VbMWhynKiW70
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(396003)(39850400004)(36840700001)(46966006)(47076005)(36860700001)(5660300002)(186003)(26005)(2616005)(336012)(478600001)(81166007)(2906002)(82740400003)(70586007)(4326008)(54906003)(82310400003)(356005)(426003)(8676002)(1076003)(70206006)(36756003)(86362001)(44832011)(110136005)(83380400001)(316002)(8936002)(6666004)(7696005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5404
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT024.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b82fa1be-3500-4c41-9175-08d919bccda9
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	PT/koVu75C1Fj/Dhw+UvdhPExqGstVbqnbDKZMTIY1SzGtRAbzmJQygrIAdVuLdukcREXfbwmeE5Rj92KB2Sy23/MoHjXl2SDTUP5HnvwQN8hB8NL5LM11BAf8lgduZp6Hkdu/DRt+90jTWoMxFJ7G9VJxV+qCGoowxaSJ7pxSAHK0e9/Pkz5cQspv3S37H/+s4dl82YPn8LsMFt7c3entl9LEa8s2dLhDSDPjj9DAYkyDm9HnBFxd5hUNl5BQWM37BlMppg3fCmhkk8SBTc8I1C01fYvaVFdupXXbYvPfQxq4+UBtUlMDlH4hFB8QE3FgCJVF4Uv1+Xq0e0nWlxwNbLFYcbEkB6MdHPOjIikeaQzo/87Qje84y8JO41UejYTA/MwWRfTduKZQVDzUyHxibyqmippv6cS5Z7qGxzK8Yboy1BsrDI8Zv8lfknQPKCb+6YxTuch02G7FzjXRj615lWENmPNvMdusM3THqkWxklL+SRZDZx9m0AbyAeWCZy52ZS2ctDOfHZ+re5941su2RT1immrX39genrj6gPE4rEfob3HtD/6xiWhrx0kaiYiJcnmV/RjY7LwLPnGBdWRlJ8ll6adQp6PscI8EgJcxDAU2hGhY+y8hma72EBpRqmAuuWDySEpH1rec8RxKOQ1lF3j2pEpSZ7u8zd7T4/7+Q=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(396003)(39850400004)(36840700001)(46966006)(7696005)(83380400001)(81166007)(54906003)(5660300002)(110136005)(36860700001)(47076005)(82740400003)(316002)(8676002)(8936002)(478600001)(44832011)(2616005)(36756003)(82310400003)(336012)(186003)(1076003)(2906002)(4326008)(70586007)(426003)(6666004)(26005)(86362001)(70206006);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:21:43.2208
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 76e46bd6-cf87-43e2-95dd-08d919bcd2ed
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT024.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3330

Static Allocation refers to domains (or sub-systems) whose memory areas are
pre-defined in the configuration as physical address ranges.
These pre-defined memory regions, called Static Memory, are reserved from RAM
at boot and shall never go to the heap or boot allocator for any other use.

Domains on Static Allocation are supported through the device tree property
`xen,static-mem`, which specifies reserved RAM banks as the domain's guest RAM.
By default, these banks shall be mapped to the fixed guest RAM addresses
`GUEST_RAM0_BASE` and `GUEST_RAM1_BASE`.

This patch introduces the new `xen,static-mem` property for defining static
memory nodes in the device tree file.
It also documents and parses this new property at boot time and stores the
related information in static_mem for later initialization.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 docs/misc/arm/device-tree/booting.txt | 33 +++++++++++++++++
 xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++++++++
 xen/include/asm-arm/setup.h           |  2 ++
 3 files changed, 87 insertions(+)

diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
index 5243bc7fd3..d209149d71 100644
--- a/docs/misc/arm/device-tree/booting.txt
+++ b/docs/misc/arm/device-tree/booting.txt
@@ -268,3 +268,36 @@ The DTB fragment is loaded at 0xc000000 in the example above. It should
 follow the convention explained in docs/misc/arm/passthrough.txt. The
 DTB fragment will be added to the guest device tree, so that the guest
 kernel will be able to discover the device.
+
+
+Static Allocation
+=================
+
+Static Allocation refers to domains (or sub-systems) whose memory areas are
+pre-defined in the configuration as physical address ranges.
+These pre-defined memory regions, called Static Memory, are reserved from RAM
+at boot and shall never go to the heap or boot allocator for any other use.
+
+Domains on Static Allocation are supported through the device tree property
+`xen,static-mem`, which specifies reserved RAM banks as the domain's guest RAM.
+By default, these banks shall be mapped to the fixed guest RAM addresses
+`GUEST_RAM0_BASE` and `GUEST_RAM1_BASE`.
+
+Static Allocation is only supported on AArch64 for now.
+
+The dtb property should look as follows:
+
+        chosen {
+            domU1 {
+                compatible = "xen,domain";
+                #address-cells = <0x2>;
+                #size-cells = <0x2>;
+                cpus = <2>;
+                xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
+
+                ...
+            };
+        };
+
+DomU1 on Static Allocation has a reserved RAM bank at 0x30000000 of 512MB
+as its guest RAM.
diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
index dcff512648..e9f14e6a44 100644
--- a/xen/arch/arm/bootfdt.c
+++ b/xen/arch/arm/bootfdt.c
@@ -327,6 +327,55 @@ static void __init process_chosen_node(const void *fdt, int node,
     add_boot_module(BOOTMOD_RAMDISK, start, end-start, false);
 }
 
+static int __init process_static_memory(const void *fdt, int node,
+                                        const char *name,
+                                        u32 address_cells, u32 size_cells,
+                                        void *data)
+{
+    int i;
+    int banks;
+    const __be32 *cell;
+    paddr_t start, size;
+    u32 reg_cells = address_cells + size_cells;
+    struct meminfo *mem = data;
+    const struct fdt_property *prop;
+
+    if ( address_cells < 1 || size_cells < 1 )
+    {
+        printk("fdt: invalid #address-cells or #size-cells for static memory\n");
+        return -EINVAL;
+    }
+
+    /*
+     * Check whether the static memory property belongs to a specific domain,
+     * i.e. its node `domUx` has the compatible string "xen,domain".
+     */
+    if ( fdt_node_check_compatible(fdt, node, "xen,domain") != 0 )
+        printk("xen,static-mem property can only be located under a /domUx node.\n");
+
+    prop = fdt_get_property(fdt, node, name, NULL);
+    if ( !prop )
+        return -ENOENT;
+
+    cell = (const __be32 *)prop->data;
+    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof(u32));
+
+    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
+    {
+        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
+        /* Some DTs may describe empty banks; ignore them. */
+        if ( !size )
+            continue;
+        mem->bank[mem->nr_banks].start = start;
+        mem->bank[mem->nr_banks].size = size;
+        mem->nr_banks++;
+    }
+
+    if ( i < banks )
+        return -ENOSPC;
+    return 0;
+}
+
 static int __init early_scan_node(const void *fdt,
                                   int node, const char *name, int depth,
                                   u32 address_cells, u32 size_cells,
@@ -345,6 +394,9 @@ static int __init early_scan_node(const void *fdt,
         process_multiboot_node(fdt, node, name, address_cells, size_cells);
     else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
         process_chosen_node(fdt, node, name, address_cells, size_cells);
+    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
+        process_static_memory(fdt, node, "xen,static-mem", address_cells,
+                              size_cells, &bootinfo.static_mem);
 
     if ( rc < 0 )
         printk("fdt: node `%s': parsing failed\n", name);
diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
index 5283244015..5e9f296760 100644
--- a/xen/include/asm-arm/setup.h
+++ b/xen/include/asm-arm/setup.h
@@ -74,6 +74,8 @@ struct bootinfo {
 #ifdef CONFIG_ACPI
     struct meminfo acpi;
 #endif
+    /* Static Memory */
+    struct meminfo static_mem;
 };
 
 extern struct bootinfo bootinfo;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:21:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:21:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128614.241434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisB2-0000p9-SH; Tue, 18 May 2021 05:21:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128614.241434; Tue, 18 May 2021 05:21:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisB2-0000ov-OO; Tue, 18 May 2021 05:21:56 +0000
Received: by outflank-mailman (input) for mailman id 128614;
 Tue, 18 May 2021 05:21:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisB2-0000oV-7N
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:21:56 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.1.83]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c87f45e4-de27-4076-a08c-d183282498bb;
 Tue, 18 May 2021 05:21:54 +0000 (UTC)
Received: from DB7PR05CA0038.eurprd05.prod.outlook.com (2603:10a6:10:2e::15)
 by AS8PR08MB6085.eurprd08.prod.outlook.com (2603:10a6:20b:294::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Tue, 18 May
 2021 05:21:50 +0000
Received: from DB5EUR03FT031.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:2e:cafe::15) by DB7PR05CA0038.outlook.office365.com
 (2603:10a6:10:2e::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.32 via Frontend
 Transport; Tue, 18 May 2021 05:21:50 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT031.mail.protection.outlook.com (10.152.20.142) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:50 +0000
Received: ("Tessian outbound 3050e7a5b95d:v92");
 Tue, 18 May 2021 05:21:50 +0000
Received: from b27086a73afc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 0594D1CA-7287-4677-9637-438FA4D7785C.1; 
 Tue, 18 May 2021 05:21:44 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b27086a73afc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:21:44 +0000
Received: from AM6PR01CA0056.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::33) by AM6PR08MB4375.eurprd08.prod.outlook.com
 (2603:10a6:20b:b8::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:21:43 +0000
Received: from AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::3d) by AM6PR01CA0056.outlook.office365.com
 (2603:10a6:20b:e0::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:43 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT032.mail.protection.outlook.com (10.152.16.84) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:42 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:41 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c87f45e4-de27-4076-a08c-d183282498bb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2KQQ4ZqD+jOLvYXqT3cXJP1kPDbk4rROsPwkwbO9YKo=;
 b=JDxdlW6xBOBhuFREHdVyEtA595DeKXvfGAFemmyD6FMPYnhp0UhVhYTVDbof/OqnzfUbdW6OJUI/rF8vAFUxP3L8zAyO3AEoWGQs7k7mDczENsqHas6lTW85XEd6JcM0lr/jkLzd2QICTUX0r8VufLsZQ8APz/cbN/N34ryzRtw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: c2b47eb8e8d018e7
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HIoFZcaTw40XFy6IMbUwZU+b6LayTkxP6PqLFXnLsiM7qiG7FWiC1Kd2J/ERk7vRPKOjIpZ/9R9tr8c3MclK9CGxc/4+oTjt9lNKo2I+LjKCAdBaMwa3TTTO4SUeMQNHgfMiVrnvD4SDSaFUxlUugk7PEaU1ZlR+yUblS7gpqO6/F3n0I7NE96K3AEQeT7NXs4s86W9n2w7ETZRYmiAiDpA1b27RLmzXpM001ZfGN8G7UHpk+KEkahSWbcWYJuc5NKnf5xhjW9N21tmHp7G8DpfQSAZ7ZpJpsY9NGmR6Uug3213g5SSNv+mmM+lTNi7kDOMt4HbIdWt5ZupJnAnkxw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2KQQ4ZqD+jOLvYXqT3cXJP1kPDbk4rROsPwkwbO9YKo=;
 b=CTWbuMTF9penajP0yGvz2bkMJLud6CusF9oU+YDEXOswSv8quczqFaM4Z0eyqOYo7psmAq4YU5SiMfX0pkXJ5QwGrmAyjT5+TqCci1kD4vZWY/GTeD1Gkq2XwOEi+rbK+UsNSmoNk95fK8rzt8Mwa4cC97J9VBFdxEwyH33IvOlwCx64YQjp4zoK0OF0veV+N8nXXhjGJzAUJeHHgvOecdtGB84fo6lMkJPgjrhkdAew9gUz4AljsAI2ySSAaOTgC1EzeIlu0S3umM8CUrOhDAPuSmj61LPrDcxeIXtCDvuuIT5ZrZB1OazzkUsReHgb5Y3K1CI13weU/HfasyJ5wQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2KQQ4ZqD+jOLvYXqT3cXJP1kPDbk4rROsPwkwbO9YKo=;
 b=JDxdlW6xBOBhuFREHdVyEtA595DeKXvfGAFemmyD6FMPYnhp0UhVhYTVDbof/OqnzfUbdW6OJUI/rF8vAFUxP3L8zAyO3AEoWGQs7k7mDczENsqHas6lTW85XEd6JcM0lr/jkLzd2QICTUX0r8VufLsZQ8APz/cbN/N34ryzRtw=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 04/10] xen/arm: static memory initialization
Date: Tue, 18 May 2021 05:21:07 +0000
Message-ID: <20210518052113.725808-5-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e33081cd-4603-415e-b2b6-08d919bcd740
X-MS-TrafficTypeDiagnostic: AM6PR08MB4375:|AS8PR08MB6085:
X-Microsoft-Antispam-PRVS:
	<AS8PR08MB6085815D8CFFAA4C9603117EF72C9@AS8PR08MB6085.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:2582;OLM:2582;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 64H980U++Cmpl4LANGBOruWixAxgFkMB+Cuk3e94o+bRlXf9gsCAmZ2xUYWcSzn3MQli481yWiTYFF1nMGrXH8qv8PvtjL+wE7mc5hzzRg0agpbLvX0EDfV4JyzFy3/+cPdi8OXoIpgrlglzZE4y57DcOGdczb5bdVWKbcLmEzRzJmvHRx7FGlA0QLG0HMjtM5oodFaIvE+2m+d+3oqidwCp8kyj1SX7wx2OHlrUZIcSpTjsSyD4ZZwxH8BIKKL0SDG4eBf5Kaln6/sQsiaZAs/ibHPYfM9FhadsP8rjb89JHrFt+wgp501rOmbqvBTafjdKY7QS505iIwWznb9cWVSEbw+BLn+uMn6lMSpmBNIYF2czhLgogB5DGPeTdyI65ru8wuT1XQdEPKhFG4uwcoyCZxRpyBJWuCv2aGhPRHU4mhprhZPCW7MY8I7Ynjzw86DF6GnrnorgbE+MDJ2Zq6qb/E88xCaea8IK10DLgIlFryVsy+QFcOwnuzUfJ3f/SphSZQDBL9YkkQGXm8Q1nq+WvYKp0cIK+zjGUD+OCpa4IlEA6XtYGJPTcfY8V9qhWSWpo6/GfHDmlp4wFavvna3NbfoRZBJzXA1yS1ga8ugcdkybet9XsI/y+gw16O9AG/pwjAe2FoYArf+hERkq02k0o0WzyiSG5zRuUEzxfVwMj1A4w+0ajVNWD8dduiBl
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(346002)(376002)(396003)(136003)(39850400004)(36840700001)(46966006)(2906002)(6666004)(5660300002)(81166007)(82310400003)(36860700001)(36756003)(83380400001)(1076003)(70586007)(336012)(478600001)(4326008)(54906003)(426003)(7696005)(110136005)(44832011)(86362001)(316002)(356005)(186003)(8676002)(82740400003)(70206006)(8936002)(47076005)(26005)(2616005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB4375
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	218a7176-d02e-4c49-bdc1-08d919bcd2b8
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	bCU5qWrBGZ0UCNeOC3pBXr/Nic6zWzCU9YSFdZGp4tlpF/8oVDW0T4d22KcOkNJko7/2HC3dejpTO4rhBFMY3FERG8Rk2ZMJxSp+ess3JmiPwcyBTvemvrXN/zMpaWVFFeXDoGQzfvJB23r/OjoeACZiigogl1vrORr0+yaxl09Pwb8o/eKKFzNLJvKITCV7Dj9TVPHqu7GLB9PX4h/wvdasPpSpY/uxg666zkunVs+mriVwvia2u3sQ4UwjS6cusV8HDVq5YcH9KRE4SjaT+ACVeb8Mu5tqdp11ceaRf6811kiuu8xTxD9/+V4ZFIIuWI3Pyt4aDHKQOspD0nB7JLRe8JpbmYUzwdmaIjO4SzPZrCddVr1YDeI/f7tLSXQEjrzDkNog289PGGcZOh657/iMVZ3wU/YF4oZ40dclbuFyId70yirk0JLZ6uavlvlEY+PzY3wv4+KU+i/OZi2wCpHhwJUVgBj6jjathmVWKJ9SinsJvi/8iZvIIc8aw5LSNjx3v0e1K23XjoLtFH8Jqzb3aHo6lVuVCXdgVoyI8Oeoy/Z1wkdrD+d0cJD4NE8RaiH5XMQI1MJXVWQfYScFE29jIW/jdCJdMy43HE1+Anzw2imKkak29Mh5VaiSNdM7xruudtP2eo5amhomNgLEnE5VastPE7ofeQpSzEARasg=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39850400004)(346002)(136003)(376002)(36840700001)(46966006)(86362001)(4326008)(36860700001)(6666004)(2906002)(2616005)(70206006)(478600001)(70586007)(8676002)(5660300002)(8936002)(82740400003)(36756003)(316002)(110136005)(426003)(186003)(1076003)(44832011)(81166007)(82310400003)(26005)(47076005)(83380400001)(7696005)(336012)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:21:50.5453
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e33081cd-4603-415e-b2b6-08d919bcd740
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT031.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6085

This patch introduces static memory initialization during system boot.

The new function init_staticmem_pages is the equivalent of init_heap_pages,
responsible for static memory initialization.

The helper free_staticmem_pages is the equivalent of free_heap_pages, freeing
nr_pfns pages of static memory.
For each page, it performs the following steps:
1. change the page state from in-use (also the initialization state) to free,
and set PGC_reserved
2. clear the page owner and make sure the page is no longer a guest frame
3. follow the same cache coherency policy as free_heap_pages
4. scrub the page if needed

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/setup.c    |  2 ++
 xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/mm.h    |  3 ++
 3 files changed, 75 insertions(+)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 444dbbd676..f80162c478 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -818,6 +818,8 @@ static void __init setup_mm(void)
 
     setup_frametable_mappings(ram_start, ram_end);
     max_page = PFN_DOWN(ram_end);
+
+    init_staticmem_pages();
 }
 #endif
 
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ace6333c18..58b53c6ac2 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -150,6 +150,9 @@
 #define p2m_pod_offline_or_broken_hit(pg) 0
 #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
 #endif
+#ifdef CONFIG_ARM_64
+#include <asm/setup.h>
+#endif
 
 /*
  * Comma-separated list of hexadecimal page numbers containing bad bytes.
@@ -1512,6 +1515,49 @@ static void free_heap_pages(
     spin_unlock(&heap_lock);
 }
 
+/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
+static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,
+                                 bool need_scrub)
+{
+    mfn_t mfn = page_to_mfn(pg);
+    unsigned long i;
+
+    for ( i = 0; i < nr_pfns; i++ )
+    {
+        switch ( pg[i].count_info & PGC_state )
+        {
+        case PGC_state_inuse:
+            BUG_ON(pg[i].count_info & PGC_broken);
+            /* Make it free and reserved. */
+            pg[i].count_info = PGC_state_free | PGC_reserved;
+            break;
+
+        default:
+            printk(XENLOG_ERR
+                   "Page state must be PGC_state_inuse. "
+                   "pg[%lu] MFN %"PRI_mfn" count_info=%#lx tlbflush_timestamp=%#x.\n",
+                   i, mfn_x(mfn) + i,
+                   pg[i].count_info,
+                   pg[i].tlbflush_timestamp);
+            BUG();
+        }
+
+        /*
+         * Follow the same cache coherence scheme as in free_heap_pages.
+         * If a page has no owner, it needs no safety TLB flush.
+         */
+        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
+        if ( pg[i].u.free.need_tlbflush )
+            page_set_tlbflush_timestamp(&pg[i]);
+
+        /* This page is not a guest frame any more. */
+        page_set_owner(&pg[i], NULL);
+        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
+
+        if ( need_scrub )
+            scrub_one_page(&pg[i]);
+    }
+}
 
 /*
  * Following rules applied for page offline:
@@ -1828,6 +1874,30 @@ static void init_heap_pages(
     }
 }
 
+/* Equivalent of init_heap_pages for static memory initialization. */
+void __init init_staticmem_pages(void)
+{
+    unsigned int bank;
+
+    /* TODO: Consider the NUMA-support scenario. */
+    for ( bank = 0; bank < bootinfo.static_mem.nr_banks; bank++ )
+    {
+        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
+        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
+        paddr_t bank_end = bank_start + bank_size;
+
+        bank_start = round_pgup(bank_start);
+        bank_end = round_pgdown(bank_end);
+        /* Skip banks that are empty after page alignment. */
+        if ( bank_end <= bank_start )
+            continue;
+
+        free_staticmem_pages(maddr_to_page(bank_start),
+                             (bank_end - bank_start) >> PAGE_SHIFT, false);
+    }
+}
+
 static unsigned long avail_heap_pages(
     unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
 {
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 667f9dac83..8b1a2207b2 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -85,6 +85,9 @@ bool scrub_free_pages(void);
 } while ( false )
 #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
 
+/* Static Memory */
+void init_staticmem_pages(void);
+
 /* Map machine page range in Xen virtual address space. */
 int map_pages_to_xen(
     unsigned long virt,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:22:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:22:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128615.241445 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisB8-0001AJ-9M; Tue, 18 May 2021 05:22:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128615.241445; Tue, 18 May 2021 05:22:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisB8-0001AC-5T; Tue, 18 May 2021 05:22:02 +0000
Received: by outflank-mailman (input) for mailman id 128615;
 Tue, 18 May 2021 05:22:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisB7-0000oV-2K
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:01 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.82]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7dd5b481-45ae-4d0b-843f-89b10c884e13;
 Tue, 18 May 2021 05:21:55 +0000 (UTC)
Received: from AM6P191CA0091.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:8a::32)
 by AS8PR08MB6469.eurprd08.prod.outlook.com (2603:10a6:20b:33c::17) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Tue, 18 May
 2021 05:21:46 +0000
Received: from AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8a:cafe::8a) by AM6P191CA0091.outlook.office365.com
 (2603:10a6:209:8a::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT046.mail.protection.outlook.com (10.152.16.164) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:45 +0000
Received: ("Tessian outbound 3c5232d12880:v92");
 Tue, 18 May 2021 05:21:45 +0000
Received: from 8e78856243c1.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 28167E2F-E4E9-4F96-A7DC-125E3053516C.1; 
 Tue, 18 May 2021 05:21:38 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 8e78856243c1.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:21:38 +0000
Received: from AM6PR08CA0010.eurprd08.prod.outlook.com (2603:10a6:20b:b2::22)
 by VE1PR08MB5598.eurprd08.prod.outlook.com (2603:10a6:800:1a2::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.27; Tue, 18 May
 2021 05:21:33 +0000
Received: from VE1EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:b2:cafe::c) by AM6PR08CA0010.outlook.office365.com
 (2603:10a6:20b:b2::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.24 via Frontend
 Transport; Tue, 18 May 2021 05:21:33 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT046.mail.protection.outlook.com (10.152.19.226) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:33 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:32 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.14; Tue, 18
 May 2021 05:21:30 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7dd5b481-45ae-4d0b-843f-89b10c884e13
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=04Kt5IDhcODuyzlAcnmdKfPcs/5dhHEBgwMoFTW8ze0=;
 b=ro/GdX5ypMYTeaywINLsfHtCrfO2vThTnqs4eVzDiz307WdxBZFYje8dhg1d4XuN3HNPUAXCt+y7OAxURSvGnp8PDztZA8XxFSpOnEW9TlimyMjOs+7Dp5TytyCLAw8Ym3LrQTghc7j4LuT5DX8N72krrRV+h0b1/kSG2uTk/mY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: c1f9d5eb67de605c
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jy5VgMKCfjKFIBwgc53M7xtCEIqrqYt4MEfH6a4sWVwj5LfcFlFQ2guWLIs6AuDDFe55ko4bBO1lUUtOo5ax3lKicEswz/nsKmxZLLmYkzWJtxNjzKxjS3i8xHyjr8hoVO9DIvtSXYDDAyVUiD9zhcZZsEQcbHu4ynPIAeDNt7r5OZP/HCEBor85u7OHL92uT1HNGZIqAMwcitopUnHWy3gErrpaMtlBOdkZmq6pGePVAoaEgpmlrSfE/xVTvTNSRjCcPr5jTnaoGRf9Pa8G/aq5tCax1H7UTP1Y1tWvR+sO3drYGn8Dpap4TEKZmPabc2vS2ZYVarHTMbvZoBO1Mw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=04Kt5IDhcODuyzlAcnmdKfPcs/5dhHEBgwMoFTW8ze0=;
 b=dAYEAuT9SeZOVqs0lh2UUP3MG5dlL8+cxQ2s/PJyZ7q2srEMabsn91JY8YpOFxpsmU01AzcPwcvaRX17zvh7go/Pd38THz9M1aRHkAl78Y2EOBcGY5cvtrAi5tUXys9IIDTLlVlHC4h4jh1EUF6SF3OvqLBy2FvnQ3KU7e5H7giVoRxRWn5A2OqkKcxk5mm9hcUKQs+NhFDN9qVPOnkn3rCJ4srWerSRm3M5lSistpCNy/6lqAQviFdi1oHYRs6aPm/fTvXJWjQR7M6xyIW0FW3A+S/uLzJ+Q/ys/pH+KS+Fz2M/tT5deZpQYvtDyQV+m3vD2bV/TVp3DuYYby9x7A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=04Kt5IDhcODuyzlAcnmdKfPcs/5dhHEBgwMoFTW8ze0=;
 b=ro/GdX5ypMYTeaywINLsfHtCrfO2vThTnqs4eVzDiz307WdxBZFYje8dhg1d4XuN3HNPUAXCt+y7OAxURSvGnp8PDztZA8XxFSpOnEW9TlimyMjOs+7Dp5TytyCLAw8Ym3LrQTghc7j4LuT5DX8N72krrRV+h0b1/kSG2uTk/mY=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 00/10] Domain on Static Allocation
Date: Tue, 18 May 2021 05:21:03 +0000
Message-ID: <20210518052113.725808-1-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a6f758ff-6e8d-426c-f007-08d919bcd453
X-MS-TrafficTypeDiagnostic: VE1PR08MB5598:|AS8PR08MB6469:
X-Microsoft-Antispam-PRVS:
	<AS8PR08MB64698A8EC5C754999ADDD82FF72C9@AS8PR08MB6469.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 yvs3tBFIuhlrRwcHAwF0ABEO7Au7dgo9w824Gsb//vp1rvnUowKAcBI6ODxZ5eXEzrAzmU/SqyPkBPdFKUKui6HQL9S7jzJH6pB5VrO4phB8EAqdFTahw6RtzhjtkqIombm4xxj0sPzm5lCd1XzM+woO8e53q0ACmTP/uNe1+La3nIMb/3d3QH/2yJBioV0bPxyewr81Ipt9kUyr7kI9VdME9B6wPlL0s4ATPd0NRDTTlgOxERM3LJo0o7tkLS9lQ57omGZ3ZtgUpzoMQeCYmP2EUFKchGLYyoSbh3WCVVFg7/L5EyjbEjX+KeTJtn7oxvij9ShN+8OXr3QC/t3Jxl41nHPfREF8XBwTrArjqepC+8P9ol/yoI3voJU2O3l37iGnO25wXbbcN1Rh3FypqNFzImzAtIm1Z3lw7J5IDDKVOqtLlWvShrwdkepra8Eomm/kdl3TR7ay3bzQp9G5tySgwZ5AaBAtxzLIq1lOoc5lfupGH9Lk0jjgudSv6X7UzddMWgT6Ie08KQBkS+AoDDSpwrXeTevH7xAjaqx+LwSy+r7Pnp5iSx2/+xpl+gg15ppvIkkBR+lC/xXuyunmE/rqzqyRnEfXDXEi91QuX/1kkTDQAGnnPgKZ9Dto6CvRrxJVF6SPheV3yhD2gMVfQzYx1hvE8oxY40LevRX1r6pq5WOzEN3mG/mMdIyqYFmxhu1wF3NBE2k3WX8pW+xDKqCz8rPtntpsuIZX/ahD/i7xRu8R/9yhRNQQ3MHZCre87QiV5v9muLSgdsPP8cJ5kQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(39850400004)(396003)(346002)(376002)(136003)(46966006)(36840700001)(70586007)(47076005)(82740400003)(83380400001)(5660300002)(2616005)(70206006)(86362001)(1076003)(36860700001)(36756003)(4326008)(186003)(7696005)(82310400003)(8936002)(54906003)(26005)(478600001)(8676002)(6666004)(966005)(44832011)(316002)(2906002)(426003)(336012)(81166007)(110136005)(356005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5598
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c89c5324-3c5f-45f6-718e-08d919bccd08
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	5o6ZNudDoaUD6ehT1aZMyLLlyi2EJRhVjrh9BgZ6uPBccKneJIZe11uLISSoo1X538ZyCxH9pfi2JT0hbeeEo6wLIuT6hXGH+a7/tPlj6x7ljrcO8Fqaqu6+ksiqRKB0dYciGHByiJ71ewKITR7ujjNSHQl47t1Q1X9P6ZPtKi1RAyJNYd3FPRXn/JvmE3jXYyoPAOy15nUGoI1m7e76fRHkmHlMxAObBohWN1cm89LjfQCMNb5Sw08ndWtjpZw8sg6oWavzTEnd1LK0A+zScfB7vp2n7RWF1bz1WCSSYgCsPfat+04CQDgE3fAolobfqWP85Oqe55DzHopn9ALGYIWDVY3UkgBF9AmEjQHQG7tnrgfD+jeBhu9R90PrR2fIRkFBUCXv7EQyE4PKUjiBCLiaubkmRdS0JmVfy0pOIZmLOy7i1yn+BEoGsVDz7VytRH4/3w441K4OF+CIoWeXx+RVTbpG40D4Z+VKhieIS3U7ZcujnnLZgykBlOo8SllORVQHxfccD5t5vO2jweiQ4rsRMCyypEaLGSDS1ZSqV1mRBjV5z/5QmbJTbeSpSwS4edcIhITFeyTl6pMP9Q5NWLmBQaiBPnSb0ShgHxfr3MTQ0aeUoo0H3S2aNqOJGpnWCABnVQQlMiU+2sHy2i50/n+3IUcNFeYL8nOszSOMqTUd8nYxJ9kuygLFyws+ZsiYFyIM+RQB/a892Qas6eqtOSkjTfpx0iynmuXhhSbXI50=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(39850400004)(136003)(376002)(346002)(36840700001)(46966006)(86362001)(54906003)(186003)(82310400003)(2616005)(316002)(36860700001)(2906002)(7696005)(336012)(26005)(110136005)(6666004)(82740400003)(44832011)(47076005)(81166007)(8676002)(5660300002)(83380400001)(478600001)(70206006)(966005)(1076003)(8936002)(426003)(36756003)(70586007)(4326008);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:21:45.5922
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a6f758ff-6e8d-426c-f007-08d919bcd453
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT046.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AS8PR08MB6469

Static allocation refers to a system or sub-system (domains) for which memory
areas are pre-defined by configuration, using physical address ranges.
This pre-defined memory -- static memory -- is reserved from RAM at boot
and shall never be handed to the heap allocator or boot allocator for any use.

This patch series only covers Domain on Static Allocation.

Domain on Static Allocation is supported through the device tree property
`xen,static-mem`, which specifies reserved RAM banks as the domain's guest RAM.
By default, these banks shall be mapped at the fixed guest RAM addresses
`GUEST_RAM0_BASE` and `GUEST_RAM1_BASE`.
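
As a rough illustration, the property would sit under a domain node in the
`chosen` node of the host device tree. This is a sketch only: the node
names, cell counts and the address/size values below are assumptions for
illustration, not taken verbatim from this series.

```dts
/* Hypothetical domU definition: one static-memory bank of 512MB
 * starting at host physical address 0x30000000. */
chosen {
    domU1 {
        compatible = "xen,domain";
        #address-cells = <0x2>;
        #size-cells = <0x2>;
        cpus = <2>;
        xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
    };
};
```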

See the related design discussion for more details:
https://lists.xenproject.org/archives/html/xen-devel/2021-05/msg00882.html

The whole design covers both Static Allocation and 1:1 direct-map; this
patch series only covers part of it, namely Domain on Static Allocation.
The other features will be delivered through separate patch series.

Penny Zheng (10):
  xen/arm: introduce domain on Static Allocation
  xen/arm: handle static memory in dt_unreserved_regions
  xen/arm: introduce PGC_reserved
  xen/arm: static memory initialization
  xen/arm: introduce alloc_staticmem_pages
  xen: replace order with nr_pfns in assign_pages for better
    compatibility
  xen/arm: introduce alloc_domstatic_pages
  xen/arm: introduce reserved_page_list
  xen/arm: parse `xen,static-mem` info during domain construction
  xen/arm: introduce allocate_static_memory

 docs/misc/arm/device-tree/booting.txt |  33 ++++
 xen/arch/arm/bootfdt.c                |  52 +++++++
 xen/arch/arm/domain_build.c           | 211 +++++++++++++++++++++++++-
 xen/arch/arm/setup.c                  |  41 ++++-
 xen/arch/x86/pv/dom0_build.c          |   2 +-
 xen/common/domain.c                   |   1 +
 xen/common/grant_table.c              |   2 +-
 xen/common/memory.c                   |   4 +-
 xen/common/page_alloc.c               | 210 +++++++++++++++++++++++--
 xen/include/asm-arm/domain.h          |   3 +
 xen/include/asm-arm/mm.h              |  16 +-
 xen/include/asm-arm/setup.h           |   2 +
 xen/include/xen/mm.h                  |   9 +-
 xen/include/xen/sched.h               |   5 +
 14 files changed, 564 insertions(+), 27 deletions(-)

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:22:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:22:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128616.241451 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisB8-0001EH-OM; Tue, 18 May 2021 05:22:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128616.241451; Tue, 18 May 2021 05:22:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisB8-0001DC-Ey; Tue, 18 May 2021 05:22:02 +0000
Received: by outflank-mailman (input) for mailman id 128616;
 Tue, 18 May 2021 05:22:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisB7-00019F-7s
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:01 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.54]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7776fe5-c659-4fa9-bae3-6038b9f1d11b;
 Tue, 18 May 2021 05:21:59 +0000 (UTC)
Received: from AM6PR08CA0035.eurprd08.prod.outlook.com (2603:10a6:20b:c0::23)
 by DB7PR08MB3852.eurprd08.prod.outlook.com (2603:10a6:10:7f::31) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Tue, 18 May
 2021 05:21:57 +0000
Received: from AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:c0:cafe::cc) by AM6PR08CA0035.outlook.office365.com
 (2603:10a6:20b:c0::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT006.mail.protection.outlook.com (10.152.16.122) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:57 +0000
Received: ("Tessian outbound 6c8a2be3c2e7:v92");
 Tue, 18 May 2021 05:21:57 +0000
Received: from 58d3c760b65e.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 8281B599-D76D-4C7B-907F-0008096FA724.1; 
 Tue, 18 May 2021 05:21:50 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 58d3c760b65e.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:21:50 +0000
Received: from AM6PR01CA0048.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::25) by PAXPR08MB6986.eurprd08.prod.outlook.com
 (2603:10a6:102:1de::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:21:49 +0000
Received: from AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::71) by AM6PR01CA0048.outlook.office365.com
 (2603:10a6:20b:e0::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:49 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT032.mail.protection.outlook.com (10.152.16.84) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:49 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:46 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7776fe5-c659-4fa9-bae3-6038b9f1d11b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QirTiM5BMFqVEm2akRsnyKmv1ZPwRYUKo4oLM4omp7M=;
 b=6cv8kmt+/cThiBqyQqFikKTfM1ITBUInNckkOehGk2AmAjlkK37LdGGt7XaoAZZvnr/mZqQdyY3AuiOeB9L7zrnKlb9FHbXvPQXq5Mx/XqV/ohXhQ9BHasIDWS01nMplPBXOBctWv9Bqn/xCS8YvHCVs7s6zTui/iAWrP/kJmvk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 7b81643dc9652aef
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OldNX2vr0OfF/2CaQzl8FkQu0fxRpLNUJ5jq5NCrCnGCZAdox1GAL1jIp+NnHYeBT/+VrsgMiCcohUsMY7GG76ri15uJ/A1lQzTItr4OwsBJt8rqIROEGLnYOY1voY6eNAbfttUPAF/iTbB6wgWOwACeVHlQM+mrytAZhrbPJx+B9Wz2yqRGko1/XAu03x7bETOjsg/AEOZQ/TQj12F30u8OEUj8wfoeAajYeqwz6+WuldA4v8x9vjk8rjLRllBQCKU+NSWJmaTQ6OJWi7mdWVaDnONYrh7fwaRGISWAIrIcyJmVXph/A+ZVxHyIvgx7/9UDUCQmg3P7pnO00jpldA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QirTiM5BMFqVEm2akRsnyKmv1ZPwRYUKo4oLM4omp7M=;
 b=HMYz6EH1/lZGqkBIJszMxDiow+xGMT+9F/oNnJVIqH7SosT//jORLs/0bIV1x6PYYk9s3tIyAeRXV+NPCjJcMEi6PHkDL2lEVrSe6CkD+jRNaZDMubYm28JCrffx4NErGywXo9GLowv6RjyVeD0N7dxWsHQx7U8Sl3jMN1a6GiQmRCUQcSPEFAa4M2vtWYVtOheZ1xrCJx5V7vpQQX3HtchdklVBo1CSIpDb6pweV5Hrc1ffApbNqOgUkyZR2pQR9DbyYfcEAc5NYCZXckjpX9GRoa5xK/aVybudhMatJramP+nK7lKjdQm8b4eo0BEAvAJ6cPzzru/NWyq4l57Zyg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QirTiM5BMFqVEm2akRsnyKmv1ZPwRYUKo4oLM4omp7M=;
 b=6cv8kmt+/cThiBqyQqFikKTfM1ITBUInNckkOehGk2AmAjlkK37LdGGt7XaoAZZvnr/mZqQdyY3AuiOeB9L7zrnKlb9FHbXvPQXq5Mx/XqV/ohXhQ9BHasIDWS01nMplPBXOBctWv9Bqn/xCS8YvHCVs7s6zTui/iAWrP/kJmvk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for better compatibility
Date: Tue, 18 May 2021 05:21:09 +0000
Message-ID: <20210518052113.725808-7-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9fd858e9-608b-4f28-e450-08d919bcdb74
X-MS-TrafficTypeDiagnostic: PAXPR08MB6986:|DB7PR08MB3852:
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB38521AB5BDA8249C81BBE3EBF72C9@DB7PR08MB3852.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:2657;OLM:2657;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 QUNkkpUVMuE18FlZNUqu0BjVFNSUCeqE5btNzUBPQyzvBYuqHW7oJ8EFbzXmtQPlPr0JGYBQyN+rwu8XGCOKjbIhsXiScO2vA/rndoK77nAmwteNEiZY84Yq6xffZgPAFSu9zR1SQgsO+Kq0iGIf7xzTWU/fpmpaUzWUx7Xe0bAOeuIwd8m+pfKDovhZ4sFLm02WHxcpm+Z0ISWnfQRIztKq1ujK/2J/EoguviHcrBSjCsFJrLmEEoODImljH3tcldn48Y0/K3ZG66ikfeQAFCZ+aj8VfzvXX5O5iD0hbQZKHDTzthosps8Hwb8oSKiNTwVGAT6MmyUDGH0n1bc6Pebe4vDBaUzGjaLf9uWOX0S8N0uYcaAEkxDhO9eO32fldiNlYVHmcbcFC57ceYpenc+PCWAvZRrMAWjG1onuiqtp8bfZcv1mhCOnppVLnL5/AZPA4q61w4iS7izZzm2oTnwOGvAOq/w7A4shWWVE5INlc83m7gayUyFRr4FrA0NJ8bg6F742Cu9Y7COIEMoj51ju+8AkSd4uNyc/hOouxctN8zlAWbwXjkfzprzJZZsGt1Z9gbtc8Ow8/BXppLsMdOXeWtVFvlaqycD1FCfp8X6/8/5gWTZpB1/m4y+/OQQLigjUUU0QkGgb5iNW5TbI6483uIbUUoCnMPx1Y7101GL13sOkM524zvCR4MwAoXYs
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(39850400004)(136003)(36840700001)(46966006)(83380400001)(316002)(6666004)(36756003)(7696005)(70206006)(2616005)(70586007)(478600001)(86362001)(110136005)(5660300002)(8676002)(82740400003)(336012)(36860700001)(1076003)(44832011)(356005)(426003)(81166007)(47076005)(2906002)(26005)(8936002)(186003)(54906003)(82310400003)(4326008)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6986
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	2b07f62c-18f4-4de8-9caf-08d919bcd65a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	sxrucFlrFuBPBzDaZIveWaHc74CnV8uvzdEV1JZTfSttL05n584wrkyOEgrS/p0HLOR2whxXNMP9lDrqv71thBg7Fv+30YWxaa0Ia6uXhbT2FuUQKWGeMA+ILcyh85NY491Lkp5RsALji4TPZkfnMqiWVZADB6SBcu1G3NPSb24wJlRXdHgO5zRwc0cAvFHYdtBAUDQwwl3kFmxsjunNQ6Q0mAc5BnqDDvxnxWdRgsOq1NRz4mVSx2vp+dOYy7l/PsupIkJ9sQZFyQmHfVSMBJr5sC7VMASeGJNTp0Si6nqmNkuIAlO2E17r5GzuOrQhzD4YLadTysg8VbxSZ3+fTcDWsrC3J9WtKxMdPWVfoXb2/g7fRuT6k3JQK1LDgf2uf0kqPob+aZBnCJufS07c5hILFbiUS50WAWlXmD0qZbqnIOu0OJ8SW1Q9VH1qqLdbKLaj4M+5e1reO8D/9qb010YhOQG30LZVlTCtJtdZB83OLR2RhBH7jUZgDSruZlxrEdFq0th8NhtFS00YHmE0E714uWtWiLs2LSoStGmsKfSfrEW61L3xR9FYJN60hOq5/MZpHdJvJTjzTjTSEOZvMTDkMaBu72QFpiNffCRUiz08Wq99zQWNLYeHxVYLFDtODiQSAVQbKoSbx+ZxNaXSGw==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(396003)(39850400004)(36840700001)(46966006)(1076003)(47076005)(86362001)(54906003)(26005)(2906002)(110136005)(36860700001)(7696005)(2616005)(8676002)(81166007)(316002)(478600001)(6666004)(4326008)(186003)(44832011)(70206006)(5660300002)(82310400003)(336012)(70586007)(8936002)(426003)(36756003)(82740400003)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:21:57.5323
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9fd858e9-608b-4f28-e450-08d919bcdb74
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT006.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3852

The order parameter of assign_pages is always used as 1UL << order,
i.e. it refers to 2^order pages.

Now, for better compatibility with the new static memory, which may cover
page counts that are not powers of two (e.g. 250MB), replace order with
nr_pfns, a plain page count with no power-of-two constraint.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/x86/pv/dom0_build.c |  2 +-
 xen/common/grant_table.c     |  2 +-
 xen/common/memory.c          |  4 ++--
 xen/common/page_alloc.c      | 16 ++++++++--------
 xen/include/xen/mm.h         |  2 +-
 5 files changed, 13 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index e0801a9e6d..4e57836763 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -556,7 +556,7 @@ int __init dom0_construct_pv(struct domain *d,
         else
         {
             while ( count-- )
-                if ( assign_pages(d, mfn_to_page(_mfn(mfn++)), 0, 0) )
+                if ( assign_pages(d, mfn_to_page(_mfn(mfn++)), 1, 0) )
                     BUG();
         }
         initrd->mod_end = 0;
diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ab30e2e8cf..925bf924bd 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -2354,7 +2354,7 @@ gnttab_transfer(
          * is respected and speculative execution is blocked accordingly
          */
         if ( unlikely(!evaluate_nospec(okay)) ||
-            unlikely(assign_pages(e, page, 0, MEMF_no_refcount)) )
+            unlikely(assign_pages(e, page, 1, MEMF_no_refcount)) )
         {
             bool drop_dom_ref;
 
diff --git a/xen/common/memory.c b/xen/common/memory.c
index b5c70c4b85..2dca23aa7f 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -722,7 +722,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
         /* Assign each output page to the domain. */
         for ( j = 0; (page = page_list_remove_head(&out_chunk_list)); ++j )
         {
-            if ( assign_pages(d, page, exch.out.extent_order,
+            if ( assign_pages(d, page, 1UL << exch.out.extent_order,
                               MEMF_no_refcount) )
             {
                 unsigned long dec_count;
@@ -791,7 +791,7 @@ static long memory_exchange(XEN_GUEST_HANDLE_PARAM(xen_memory_exchange_t) arg)
      * cleared PGC_allocated.
      */
     while ( (page = page_list_remove_head(&in_chunk_list)) )
-        if ( assign_pages(d, page, 0, MEMF_no_refcount) )
+        if ( assign_pages(d, page, 1, MEMF_no_refcount) )
         {
             BUG_ON(!d->is_dying);
             free_domheap_page(page);
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index adf2889e76..0eb9f22a00 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2388,7 +2388,7 @@ void init_domheap_pages(paddr_t ps, paddr_t pe)
 int assign_pages(
     struct domain *d,
     struct page_info *pg,
-    unsigned int order,
+    unsigned long nr_pfns,
     unsigned int memflags)
 {
     int rc = 0;
@@ -2408,7 +2408,7 @@ int assign_pages(
     {
         unsigned int extra_pages = 0;
 
-        for ( i = 0; i < (1ul << order); i++ )
+        for ( i = 0; i < nr_pfns; i++ )
         {
             ASSERT(!(pg[i].count_info & ~PGC_extra));
             if ( pg[i].count_info & PGC_extra )
@@ -2417,18 +2417,18 @@ int assign_pages(
 
         ASSERT(!extra_pages ||
                ((memflags & MEMF_no_refcount) &&
-                extra_pages == 1u << order));
+                extra_pages == nr_pfns));
     }
 #endif
 
     if ( pg[0].count_info & PGC_extra )
     {
-        d->extra_pages += 1u << order;
+        d->extra_pages += nr_pfns;
         memflags &= ~MEMF_no_refcount;
     }
     else if ( !(memflags & MEMF_no_refcount) )
     {
-        unsigned int tot_pages = domain_tot_pages(d) + (1 << order);
+        unsigned int tot_pages = domain_tot_pages(d) + nr_pfns;
 
         if ( unlikely(tot_pages > d->max_pages) )
         {
@@ -2440,10 +2440,10 @@ int assign_pages(
     }
 
     if ( !(memflags & MEMF_no_refcount) &&
-         unlikely(domain_adjust_tot_pages(d, 1 << order) == (1 << order)) )
+         unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
         get_knownalive_domain(d);
 
-    for ( i = 0; i < (1 << order); i++ )
+    for ( i = 0; i < nr_pfns; i++ )
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
         page_set_owner(&pg[i], d);
@@ -2499,7 +2499,7 @@ struct page_info *alloc_domheap_pages(
                 pg[i].count_info = PGC_extra;
             }
         }
-        if ( assign_pages(d, pg, order, memflags) )
+        if ( assign_pages(d, pg, 1ul << order, memflags) )
         {
             free_heap_pages(pg, order, memflags & MEMF_no_scrub);
             return NULL;
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index 8b1a2207b2..dcf9daaa46 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -131,7 +131,7 @@ void heap_init_late(void);
 int assign_pages(
     struct domain *d,
     struct page_info *pg,
-    unsigned int order,
+    unsigned long nr_pfns,
     unsigned int memflags);
 
 /* Dump info to serial console */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:22:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:22:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128617.241457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisB9-0001Oo-9j; Tue, 18 May 2021 05:22:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128617.241457; Tue, 18 May 2021 05:22:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisB9-0001Mc-32; Tue, 18 May 2021 05:22:03 +0000
Received: by outflank-mailman (input) for mailman id 128617;
 Tue, 18 May 2021 05:22:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisB7-00019F-Ot
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:01 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.56]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22c98885-b59f-492a-a86c-5ea2549982ed;
 Tue, 18 May 2021 05:21:59 +0000 (UTC)
Received: from AM6P193CA0064.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:8e::41)
 by DB8PR08MB5290.eurprd08.prod.outlook.com (2603:10a6:10:a5::29) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.31; Tue, 18 May
 2021 05:21:57 +0000
Received: from AM5EUR03FT043.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8e:cafe::ee) by AM6P193CA0064.outlook.office365.com
 (2603:10a6:209:8e::41) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT043.mail.protection.outlook.com (10.152.17.43) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:56 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Tue, 18 May 2021 05:21:56 +0000
Received: from e49ec5f258b2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 44D1C212-83AF-41CD-A324-6BDCCE7C1ADB.1; 
 Tue, 18 May 2021 05:21:50 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e49ec5f258b2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:21:50 +0000
Received: from AM6P194CA0017.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:90::30)
 by AM0PR08MB3474.eurprd08.prod.outlook.com (2603:10a6:208:e1::32) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Tue, 18 May
 2021 05:21:49 +0000
Received: from VE1EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:90:cafe::5c) by AM6P194CA0017.outlook.office365.com
 (2603:10a6:209:90::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:49 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT005.mail.protection.outlook.com (10.152.18.172) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:49 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:38 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22c98885-b59f-492a-a86c-5ea2549982ed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5J8iLZFedAFmS+5HNJIXYA28hBbm27AI3WM5PMpmaN0=;
 b=x/GM7FDAFvwaDYzkBoyoH2VUTNak9wEY9n1pmmndAdQiLuX/OlmfqO+uxxJL7C8aUtDu73Xjqopo+/trgvc9ULO6yHcn8puL9e79O8/8PZnEnlw8Dbo/EjRVInR1RJy6RxvVjGVcYkODFrh3bSI1NDXmSVs3tRqnqf6HxSoqVio=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1936533a0e7f1746
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AYFQ8chCOUp825RgWfo00oWm/7pxUUD3VTpMLkX/ugTVvHvEwY1ol10k7xB6jEjZl9ml8DaKZQztCdK+J2BzqKVvQcyFwCPJbq95S1CqH/OnlYaZn00kCjTaqCVmilYmURpXzM8X/iLhYgNCcFbG7W6tlOuWP1mCEmuGIVIW+3i9fnxqdKIixuX8qhygoMi1N4IxJFcOsLI7DVIEn6iK8El2Pm2XIGd4EI9gzeD9Mp0C9XoGysjexHDT+Tg/Wl1aImWW6rJacauVkLWyUZUNyT8hNlWXbO/X9D1vH0nnw03BDObtBoF99ZAFcGUji6OSsi+wYcdKY3fE10iMPrrP7g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5J8iLZFedAFmS+5HNJIXYA28hBbm27AI3WM5PMpmaN0=;
 b=CuRjXJqhjRQklzzSdIcUwefl5iFeEXgd2iWCTJoZZDnv97tMObotAD6S3ksm9tZsFGJlcgq78u7voAYsMHfokQXEiyiDDu4ZqJeXlIBZAgK82HCCxViNRUsZV8sivFHnTHwLNZtpkDLOyW/XwIMXA5C2PwG+hv7xmrNYoQImsd61hPYwjRdeMXZ/c+dIGqWyYOsRH1+KuLfJYLO44C0bVaaRI0MvpEg4PuOJySBWLQmCjf+iXYtYto2f/+a79C/AT3y0hdigfWyC647W/vEJpsAUnSTkhda6rxaaaqZxgFFnzqvdk+JtUaDAd6/+8l5fcJXoTrg0hi4c2z/4B8Unyw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5J8iLZFedAFmS+5HNJIXYA28hBbm27AI3WM5PMpmaN0=;
 b=x/GM7FDAFvwaDYzkBoyoH2VUTNak9wEY9n1pmmndAdQiLuX/OlmfqO+uxxJL7C8aUtDu73Xjqopo+/trgvc9ULO6yHcn8puL9e79O8/8PZnEnlw8Dbo/EjRVInR1RJy6RxvVjGVcYkODFrh3bSI1NDXmSVs3tRqnqf6HxSoqVio=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 03/10] xen/arm: introduce PGC_reserved
Date: Tue, 18 May 2021 05:21:06 +0000
Message-ID: <20210518052113.725808-4-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 17dbf392-8957-448f-c17e-08d919bcdb1a
X-MS-TrafficTypeDiagnostic: AM0PR08MB3474:|DB8PR08MB5290:
X-Microsoft-Antispam-PRVS:
	<DB8PR08MB529063D0C2A72AC513BB7E4CF72C9@DB8PR08MB5290.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:2733;OLM:2733;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 IOZYbxKdCQYup2rYlh5fjDhoVqa58RYI7xubtEg5OOX0KLU96wxhLLghb5MzADeNKQADqP4G06dS5hJTSLIfc5phB4/m8Tt0AQwjKUF8I5/g8aGGRIH7PFUSdvVnoIspJsRcq9+7ZfucR9hE0b/C+n/rIfFpGoOggzNasBU44JtiGDBIue3s8fxMo4GURX3C8uhmn5Kq4/MLZeX/YAx4ppDIHH0Bw6M4++vwJpqkIM/cW1kEbvdUPgbL0fwgo0LFoCNrzJI/gZ90RvpfqBOxHhQa/jwYvDDlxKJD04GGKWqr4jZXCypmXI8MxAqPF3JloTwJjvZF8Ta4XmRBn93YyQtpOyKTpnrXTcIVU3wgj4+h6cpY1yR7FTr5vLHYxNTOBBuZP0mIgGpvkc37B8DuNJn/V0cDYHBlH4x4Fc8RhEhZvcBIcXYMNYKQNSYLOkqlWhctqZJUVETK2aR/DFsRjTuhlaLSATa2qcNmbA18Wf0a9vubjDOtZbrXEnq6uTxBwYikMXiirtXqSW5FUCNGmQV9upCUyOes0F9t47w1SSQBIW4gzkIl0zGvpcv8GV6/KxMvVdDe20sEHH2dZV6BBw8lBHVF3Ze7FKNsLJ81k2G/Hxwb6KpVomcL9CJQ3FOwLuMlhC4jxm09pzPpjVOq17mkIZX5OpRghmlLp1806ABr42P7CUlJoYIXG7uFGh6X
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(376002)(346002)(136003)(39850400004)(396003)(46966006)(36840700001)(316002)(86362001)(81166007)(186003)(1076003)(8676002)(110136005)(82740400003)(478600001)(47076005)(8936002)(426003)(36756003)(70206006)(83380400001)(356005)(5660300002)(54906003)(2906002)(82310400003)(336012)(70586007)(4326008)(7696005)(6666004)(44832011)(26005)(2616005)(36860700001)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3474
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ea7cf5e5-212b-48a1-a2d4-08d919bcd677
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tMXJaSvyh+X49EXeU+Ko9a/5bIpUQNTDPZjWdefmLpouE++uFU5Wdt9qRgkqMuVlpFPM0vo/YIWL57p22Tr7WyoYPejBlHkjiYADHIsfFO+iVURIT8rd9QdQYuP8u6tRZiP/kjwKpd1FtuhQKbhAAH00F80bTXuOVKV47VccHzA0k8U+LfhIzvGb/D9V09laVSkNqcJUogcN0tHnhjx4N/ClXmLmgfldlxBWCGwlU2fz/zsp0qPjmQKUhIQ5KxjI0Co5Qv2Aj86LNY+0E17/73zqmHLZFXIhruI643mbCNBw55F1ixsfre1Ls5ESlO+NOgkLJURIVQeQK+pRKBMRU65bGlKAPMKPSST8pBhZZdW8kdzsInEt4C+s1mg9ra/4Y9tn10LWLORSTu2i9WU5T46nbuIQJRPzd9vblOta2qf7bdSlBY86IERrD/yhua/YD06h54ozia2dIsNT3isYw0hXIbf60tX7ljE+rLJLlqx3P3DvCDx6c33R0rc0EujQoOADr6evIn2860DGUKqD1TwEMZoUDc5qxuY+gNLpgmcDhBi+EgwTNmRkaQ4aV1Lnp2/Ry0xYoWEnEBe1VoEDfXKQjZKlcUJk/BE25XN5O91PDuEoOZ8kS5lGEG2OA0luUJIbl/JTixX8RqYLR9NLc6CWMNtc83yR0ab+gGFqbqA=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(396003)(136003)(39850400004)(376002)(46966006)(36840700001)(110136005)(26005)(81166007)(7696005)(82310400003)(478600001)(83380400001)(70586007)(316002)(70206006)(4326008)(36860700001)(47076005)(186003)(82740400003)(2616005)(1076003)(426003)(5660300002)(336012)(8676002)(44832011)(6666004)(86362001)(2906002)(36756003)(8936002)(54906003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:21:56.9625
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 17dbf392-8957-448f-c17e-08d919bcdb1a
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT043.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5290

In order to differentiate pages of static memory from those allocated from
the heap, this patch introduces a new page flag, PGC_reserved.

A new struct reserved in struct page_info describes reserved page info,
that is, the specific domain to which the page is reserved.

Helpers page_get_reserved_owner and page_set_reserved_owner are
introduced to get/set a reserved page's owner.

Struct domain is enlarged to more than PAGE_SIZE due to the
newly-introduced struct reserved in struct page_info.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/include/asm-arm/mm.h | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
index 0b7de3102e..d8922fd5db 100644
--- a/xen/include/asm-arm/mm.h
+++ b/xen/include/asm-arm/mm.h
@@ -88,7 +88,15 @@ struct page_info
          */
         u32 tlbflush_timestamp;
     };
-    u64 pad;
+
+    /* Page is reserved. */
+    struct {
+        /*
+         * Owner of this page,
+         * if the page is reserved to a specific domain.
+         */
+        struct domain *domain;
+    } reserved;
 };
 
 #define PG_shift(idx)   (BITS_PER_LONG - (idx))
@@ -108,6 +116,9 @@ struct page_info
   /* Page is Xen heap? */
 #define _PGC_xen_heap     PG_shift(2)
 #define PGC_xen_heap      PG_mask(1, 2)
+  /* Page is reserved, referring to static memory. */
+#define _PGC_reserved     PG_shift(3)
+#define PGC_reserved      PG_mask(1, 3)
 /* ... */
 /* Page is broken? */
 #define _PGC_broken       PG_shift(7)
@@ -161,6 +172,9 @@ extern unsigned long xenheap_base_pdx;
 #define page_get_owner(_p)    (_p)->v.inuse.domain
 #define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
 
+#define page_get_reserved_owner(_p)    (_p)->reserved.domain
+#define page_set_reserved_owner(_p,_d) ((_p)->reserved.domain = (_d))
+
 #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
 
 #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:22:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128618.241478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBD-00029O-Nr; Tue, 18 May 2021 05:22:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128618.241478; Tue, 18 May 2021 05:22:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBD-000297-KB; Tue, 18 May 2021 05:22:07 +0000
Received: by outflank-mailman (input) for mailman id 128618;
 Tue, 18 May 2021 05:22:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisBC-00019F-6q
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:06 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::61f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 539c1e41-0f1d-4ab4-8353-6e8862bf21ca;
 Tue, 18 May 2021 05:22:00 +0000 (UTC)
Received: from AS8P251CA0022.EURP251.PROD.OUTLOOK.COM (2603:10a6:20b:2f2::6)
 by AM5PR0801MB2018.eurprd08.prod.outlook.com (2603:10a6:203:43::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:21:58 +0000
Received: from VE1EUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2f2::4) by AS8P251CA0022.outlook.office365.com
 (2603:10a6:20b:2f2::6) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.27 via Frontend
 Transport; Tue, 18 May 2021 05:21:58 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT013.mail.protection.outlook.com (10.152.19.37) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:57 +0000
Received: ("Tessian outbound ea2c9a942a09:v92");
 Tue, 18 May 2021 05:21:57 +0000
Received: from 5236c737a9ab.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 CA7AF26D-EB39-43EE-87A6-756D2538B304.1; 
 Tue, 18 May 2021 05:21:50 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5236c737a9ab.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:21:50 +0000
Received: from AM6P194CA0032.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:90::45)
 by VI1PR08MB4015.eurprd08.prod.outlook.com (2603:10a6:803:e1::28) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:21:48 +0000
Received: from VE1EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:90:cafe::10) by AM6P194CA0032.outlook.office365.com
 (2603:10a6:209:90::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.32 via Frontend
 Transport; Tue, 18 May 2021 05:21:48 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT005.mail.protection.outlook.com (10.152.18.172) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:48 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:35 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 539c1e41-0f1d-4ab4-8353-6e8862bf21ca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y6ATApPmkJrCTf9a/bdU7n8KOS835CGS4AwJ5jf2OAQ=;
 b=FtAwzZyfqQX6TCwB96aAijXOtcr2L/e245WtP8F4vkcoCkanGYmwJHzqzAsvSu14CKU1FVs6IKbBF+wPpAx0067f+FiPz2azGpke+cMPOWUwZTa5HIRTuBMVYvLE82WZy7bjcbj/asGnm5aflWUsKwKs5R0g/RJuTYKC7hRGyH4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 97192d34f5350264
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FTHnQ07fRNS5BlvlqHDqyskc/9S0UEz6GUcdpGuapbw7glfOX5H7aGelZnGk6D4BSmAwMnFEivcUwz/YIQxATxATrZCj12JSkOflDxTLktQXWjg+GTeK4tDVX60nL7eoull+KpXL+lHF8LAwVRLgpLddxEdoMEsvYD5Uf5DSxwvnc03SZPWAZMnJfzuNB5Uksuwj7+6TsssAc4I+Csd5q6qO8RgJ6c6wg0OwdgTwxG9M96NgU8tM9dVnWASjLWSgtKym1RRvVlToNI6X4pAizb9e06o5uSgkpM1Un5m3Yfoon0+RE8t3gerc6k3zChtbxj9TmFcisDTBr32gimMkhw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y6ATApPmkJrCTf9a/bdU7n8KOS835CGS4AwJ5jf2OAQ=;
 b=lvDdsQKErF1AEelc10ZAdO00PEOqtgNwm9Vy05ZOLdEbdXsJQKTfgu1l8I3zXJ/uNQITuwmBpymin+TOQRmRwAQmLQH2uB6nAc7LxueFmip2NrFfhjKT4z7nsmhsS8X2sco/4juM3XDrr3ptOO1kuRBEsVgglRVzGln8+JLGRSbULlr7e/opHaXVqTDn0Cvwt7GOECQ8ytr6JcVWRTq3PKc7SpBDAyQtyffd1PKz7skVHtLzYqRzPAuGlZFef6mwPFFid6ViXhakHcFbP3abe6UnaAHYEpdQCyPk8YWuGccGAz886D1TnduWalPw+FP5cXG3gXkNox1vuRNTgmNTOA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Y6ATApPmkJrCTf9a/bdU7n8KOS835CGS4AwJ5jf2OAQ=;
 b=FtAwzZyfqQX6TCwB96aAijXOtcr2L/e245WtP8F4vkcoCkanGYmwJHzqzAsvSu14CKU1FVs6IKbBF+wPpAx0067f+FiPz2azGpke+cMPOWUwZTa5HIRTuBMVYvLE82WZy7bjcbj/asGnm5aflWUsKwKs5R0g/RJuTYKC7hRGyH4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 02/10] xen/arm: handle static memory in dt_unreserved_regions
Date: Tue, 18 May 2021 05:21:05 +0000
Message-ID: <20210518052113.725808-3-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4df65769-59d3-45fb-4aab-08d919bcdbac
X-MS-TrafficTypeDiagnostic: VI1PR08MB4015:|AM5PR0801MB2018:
X-Microsoft-Antispam-PRVS:
	<AM5PR0801MB201817FC5AD2ADFC828528DAF72C9@AM5PR0801MB2018.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:854;OLM:854;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 odg6zEdrcvElYOsQxbaQWCT4gGB5zc8gDjzmdtmiwBoQz1tNER0qsUSurEMYNRtVrnjpGNyIDSVPs8dkuoFOdXTz/Pp+O7mUOR+1GCdbg70VHpPIehOwpEDEMWp6t7EPKqzfQNiuXITpflYE1ogrFqNk0nveaBtCIaN45QERjaSxAFSjL26cWFynti0HpJDYjr75cuX+UFbVkTRZ4jWM9x4ia+qyNiBbIqhmuPHpGMZysgUrUPEZhoqZjNuNe4CJ4hme4mft3dnd3Gk9KsbWVP59+rVCMww30AtGd2kMmncNM0EnIQ2Y41psqPL3aD2E23cFuT/OHpOm+FzuBCJ6xsWafmIzDRHjUCU0hWarJVZZtM6cNZ9mlf3+ntfLeRQ+WhfEGgMRSxtuvDLg7AhL7Y9ghV6kSPZwZAzunurUrhTWOUTdDGGLGLQml3TtiW31vGQB04qRDqn8qJdEV+6meTwJHk4WixRnnLnydLryXUWuVW8GbWhf2tAl3MdiurPA4/NwKoc6164F7+kvp7mMgK94ADga8Gj9DH42YeXTfoKfGSl9m01XpeOYfILLKnb+xb2I5VIRCGz5zKqM2QrRy418IvbUyq0sV8kMVHNQbs7s16rMqbcrGJSDbUAh3Cnn+BLBdRe3JD/85I4Q8U8yrwzGRixFMZkAg/O9gGc4Cmt45q1GBq0rvAmN0+cdNxKX
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(346002)(136003)(39850400004)(396003)(376002)(36840700001)(46966006)(2906002)(82740400003)(44832011)(81166007)(83380400001)(8936002)(8676002)(82310400003)(356005)(1076003)(86362001)(110136005)(70586007)(336012)(6666004)(54906003)(47076005)(316002)(2616005)(26005)(478600001)(426003)(36860700001)(70206006)(4326008)(5660300002)(7696005)(36756003)(186003)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4015
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b83e22af-2083-44d1-af6a-08d919bcd5ca
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	7vtvXyDQF++YL3xqJ+VVxUESCwPJY4FZfp0uq2OUrS5LQcz8vy4qvZJT42YXtORfN+3Sh7m4TSySFRJXR78LUGXA9GvnN3AJADewCjgQJl5kc46ECXSjpqAl+T7+S6pBe3aOrCDVVnvMUQ6NOsOElXFwzGIWHfJaWbsIgvwV2prF+g9sr0Vg/hgktjT+kZI1uuzjPqmUOzCGkjHpxlNzfeIt7ak3EMj5vsO+ZpI+xIXkRQfBVDeroU7pdIm0zRlF/W1GcwYXD8gGhCzKjbZCEBlcfCC9UjiNbI2hq4dai5aYBwXt/7fEYeaZCO61jBXD+kqTzb0PisYEOvc3W6k/si06lr+caI/JxUGJf7ko7oEBGtIOGaeygW44Jj+npG87gJj7m2OYgHmP2h3Pal9rcxhScJDuivdxOTASOaDGLjLGesGnDBu1UBMMHRITRbLJlqWhSrrrofB95R5tAhrCuGRMpFSIRhn+7p0GCAv/MYhTZDGBHnxIOWcfnDwiNoS2Fk6QF+pddMf4GbfjEIzn30KpDS6y5msOEdD/AXVgGxw19rdnoOi+09J06ahcogONBwRw331cqF/41zd3vW2LrAtB8XRyDbkSeXCesBRurSeRFu0qkZWTvk+ZkpfZfGQvq9estt2bxFC93KQYrFNpUAfr1q5eK5a+LP2XYpjMpmM=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(39850400004)(376002)(36840700001)(46966006)(81166007)(36756003)(54906003)(5660300002)(1076003)(2906002)(316002)(82310400003)(4326008)(110136005)(86362001)(36860700001)(186003)(8676002)(7696005)(70206006)(26005)(2616005)(8936002)(6666004)(478600001)(47076005)(44832011)(70586007)(83380400001)(426003)(82740400003)(336012);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:21:57.8415
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 4df65769-59d3-45fb-4aab-08d919bcdbac
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM5PR0801MB2018

Static memory regions overlap with memory nodes. The
overlapping memory is reserved-memory and should be
handled accordingly:
dt_unreserved_regions should skip these regions the
same way it already skips mem-reserved regions.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/setup.c | 39 +++++++++++++++++++++++++++++++++------
 1 file changed, 33 insertions(+), 6 deletions(-)

diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
index 00aad1c194..444dbbd676 100644
--- a/xen/arch/arm/setup.c
+++ b/xen/arch/arm/setup.c
@@ -201,7 +201,7 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
                                          void (*cb)(paddr_t, paddr_t),
                                          int first)
 {
-    int i, nr = fdt_num_mem_rsv(device_tree_flattened);
+    int i, nr_reserved, nr_static, nr = fdt_num_mem_rsv(device_tree_flattened);
 
     for ( i = first; i < nr ; i++ )
     {
@@ -222,18 +222,45 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
     }
 
     /*
-     * i is the current bootmodule we are evaluating across all possible
-     * kinds.
+     * i is the index of the current reserved RAM bank, counted across
+     * all possible kinds.
      *
      * When retrieving the corresponding reserved-memory addresses
      * below, we need to index the bootinfo.reserved_mem bank starting
      * from 0, and only counting the reserved-memory modules. Hence,
      * we need to use i - nr.
      */
-    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
+    i = i - nr;
+    nr_reserved = bootinfo.reserved_mem.nr_banks;
+    for ( ; i < nr_reserved; i++ )
     {
-        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
-        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
+        paddr_t r_s = bootinfo.reserved_mem.bank[i].start;
+        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i].size;
+
+        if ( s < r_e && r_s < e )
+        {
+            dt_unreserved_regions(r_e, e, cb, i + 1);
+            dt_unreserved_regions(s, r_s, cb, i + 1);
+            return;
+        }
+    }
+
+    /*
+     * i is the index of the current reserved RAM bank, counted across
+     * all possible kinds.
+     *
+     * When retrieving the corresponding static-memory bank addresses
+     * below, we need to index bootinfo.static_mem starting
+     * from 0, counting only the static-memory banks. Hence,
+     * we need to use i - nr_reserved.
+     */
+
+    i = i - nr_reserved;
+    nr_static = bootinfo.static_mem.nr_banks;
+    for ( ; i < nr_static; i++ )
+    {
+        paddr_t r_s = bootinfo.static_mem.bank[i].start;
+        paddr_t r_e = r_s + bootinfo.static_mem.bank[i].size;
 
         if ( s < r_e && r_s < e )
         {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:22:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:22:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128620.241489 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBI-0002ia-4N; Tue, 18 May 2021 05:22:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128620.241489; Tue, 18 May 2021 05:22:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBH-0002i4-Ul; Tue, 18 May 2021 05:22:11 +0000
Received: by outflank-mailman (input) for mailman id 128620;
 Tue, 18 May 2021 05:22:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisBH-00019F-6x
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:11 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.82]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffbe51d9-08e8-430c-b575-47a5bd832918;
 Tue, 18 May 2021 05:22:01 +0000 (UTC)
Received: from AM6PR05CA0032.eurprd05.prod.outlook.com (2603:10a6:20b:2e::45)
 by DBAPR08MB5830.eurprd08.prod.outlook.com (2603:10a6:10:1a7::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:21:57 +0000
Received: from VE1EUR03FT020.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:2e:cafe::88) by AM6PR05CA0032.outlook.office365.com
 (2603:10a6:20b:2e::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:56 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT020.mail.protection.outlook.com (10.152.18.242) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:56 +0000
Received: ("Tessian outbound ea2c9a942a09:v92");
 Tue, 18 May 2021 05:21:56 +0000
Received: from a5edd8368e23.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 C1CEBFC5-4D84-47DC-8713-CC85463C7614.1; 
 Tue, 18 May 2021 05:21:50 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a5edd8368e23.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:21:50 +0000
Received: from AM6PR01CA0047.eurprd01.prod.exchangelabs.com
 (2603:10a6:20b:e0::24) by AM8PR08MB6546.eurprd08.prod.outlook.com
 (2603:10a6:20b:355::18) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:21:47 +0000
Received: from AM5EUR03FT032.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:e0:cafe::23) by AM6PR01CA0047.outlook.office365.com
 (2603:10a6:20b:e0::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:47 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT032.mail.protection.outlook.com (10.152.16.84) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:46 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:43 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffbe51d9-08e8-430c-b575-47a5bd832918
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mWXR0/xvkg8fl1DOrhLwDN/UAdqmRXxJdB80UDoJgTw=;
 b=zRqFKgfhKUrXm0Zrx6A5JsqiASeNTzkjJxXJ4sGhNS3J2GDfriW223+WDbWrLN96Mk2v9vhwnm4Zz7dd1b+OubdicpgLKOAglqFF3j2JYY4HGS+p3LswDbtauY2m2jbDLqiHljGzg3izgZfh9Yh+kWiAm75P9Qi+HJ9QGs/gjL4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 0465a10a71ce3249
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iP3BGCR0fld0pDZE3tSsUgSe8073ZIiIzLijEvu1Ik9Fx4BjGBMUJdnFfNLIFR+tYffnRyWEFWrwZv3ymuiZdrGWmHPuawUt90XafcAnHVwAbC0TLknxWAOm1r6l+fbBB3ry8xa2Oss1pnSPL9z/rwNGeA1o/yLqNTG5olwyn792rIs8LTK6hfxEKHVZpdCM46curYQwk83bkKutDcsU8A0G9e0b5mvClvkWPOg9Ta0oXGqR/T8OTnsQQiWr846kV1CeKGKHUcOUO/IgLA5tcC4Z/XhoMefP3acip2JTx/rbI7fcFoaSLV9Wo7B1zLHWoaMEDeDbm6chWSEIdx97lw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mWXR0/xvkg8fl1DOrhLwDN/UAdqmRXxJdB80UDoJgTw=;
 b=iquAEduFif0g/09hNY2aabYW+y+n95Q6Ja77IWClB5+KtU+6/QZHdhN6ArYCEVgBYwsCLpEFpZdSK442MBZtFTaD0j2tdD6E7rqmW+aCoxHfIZNMCcUYUwfYKN7EHDcSDVXxAjDyF2eOVSe0x5f9sLd34muLHMF8jee6edrFPRxiNmYEgCs8WF7gcaoAu81KFoNMYr5d7KYTNgva+T/9NadNBCBKDgfYdNtbpWiCQqFCXEDyqk09iDjWN/fwrj7jolwxZWe2R7p4a9StR696GrKcRcdOIzkgU2xxWa3hw4/WD+DXyP3RN5rzjGyEUcZQagRFo81kBsP8bgwMRNpMzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mWXR0/xvkg8fl1DOrhLwDN/UAdqmRXxJdB80UDoJgTw=;
 b=zRqFKgfhKUrXm0Zrx6A5JsqiASeNTzkjJxXJ4sGhNS3J2GDfriW223+WDbWrLN96Mk2v9vhwnm4Zz7dd1b+OubdicpgLKOAglqFF3j2JYY4HGS+p3LswDbtauY2m2jbDLqiHljGzg3izgZfh9Yh+kWiAm75P9Qi+HJ9QGs/gjL4=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
Date: Tue, 18 May 2021 05:21:08 +0000
Message-ID: <20210518052113.725808-6-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 367e8842-5f01-42d4-9d80-08d919bcdade
X-MS-TrafficTypeDiagnostic: AM8PR08MB6546:|DBAPR08MB5830:
X-Microsoft-Antispam-PRVS:
	<DBAPR08MB58304EB4D95A5958D31AEB3EF72C9@DBAPR08MB5830.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:64;OLM:64;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 pozjuBEdLJWPLQ2j9pjfaBPDvZUqeVcREMpRtm7y+CZHk5iokvwhVnvZisXUL6dfcXjnDfcwann0iTr8MP37hcWn2pqkE0j5P5euUg02bi3cLs+EXqkhvvpJK71dULaWxMVkRVaA1ZP1QQherNxP5mhzWUqQoDcLnWqwuwzAZ8A6sDtCiY7sx1qpFrFGL7a/6mwdbY6K/ZZVCWbmgltde9C7rp+j1NchVOxlb3lBEF4aSLC3FN8czNIfO7O8E7rFc7KwnMm5pmc7IapLAHfPTgrmtMLpXL4nfbFsmtKbt5uIxeU75AZJ5HFuE891UT4D65fojiKqpTYOXm7g1kWz/ksYh0GG8vN4+9x/2zPVIXQnaVmZyLrtd2E62nAfhCTpDfKn7lyyqupLXWU0Tuu89O+ArGk8KvtXd0rTwcZHWkUge04aJUgyfNuF9oH+MKOXxhzTgLrPAFYms0MS/bOq50aUr159QVmraqMRtLpt7RPDEwu0aVUUxuB8e9brlXdGL0QtgZMaLwpa7QsVRdGWA7G56XnEHErUoMDcTLivWdyNu/5NrO3fHjAmf9NW/snbecATPIFWRir5XYfQpNHIp5SvRDRoKdR1L13LwIekb1Pihg7gTb6e58dK3KoJrYbKg3d/SuQH1Z9TUw30b3W7znd6JGUw7pb9I4BpC7pnMiWKrWar6gQjeeJSzEudUaHP
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(346002)(136003)(376002)(39850400004)(396003)(36840700001)(46966006)(82740400003)(6666004)(86362001)(8936002)(70206006)(4326008)(1076003)(8676002)(47076005)(44832011)(110136005)(426003)(7696005)(36756003)(70586007)(26005)(82310400003)(478600001)(186003)(316002)(2906002)(5660300002)(2616005)(81166007)(54906003)(336012)(36860700001)(356005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6546
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	1a1bd56a-c0ee-4be3-a05a-08d919bcd50a
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Cx00cHDr2xGlzBf5u9DoopmiCjUE/x1kBXiXQcgUpQG0rHSrk2DcWZZoShvIcTZEFA0VVeX1jdbkdkwrQpLNNoKj2V0Kk5wFfVPRB4ovhsj27tu9ub788+AKQk1uyU5poUjLyGUTjcrG61BgvEu0W992PP/l+Zoru1XNLtOOgioMrful4dQWFuKoUu0TJYaX0FnMeWY8D0V0BsUt3Cbvezv6DmhELMpDmgPNvrhYmRRZPV8vO4/yydRDZA2vXhE56YhMtL7J530XVyWQc27PwbChDpNjGobAGlzu6SYo7bhKPTrfrnOXYsENpTSzVGMMKXgYBTizO/EYpJRdWfiKzlokmmDbdzXuTUtuwfOep1/ipdJx/COJ0Mcx81QtrnRtoDLr4Gs5n1E+ifsm/WYKtOLbVwllW0tMzw9rOINDBppH5myu8Yjdj1PQ9mn+4+vqd+O/OyUiRXkTFZKQAXfHF693awjVto5pacPyM5mfmgNHNKo4xCtr7DGm97SccypoqhpJRsPp6CUXm7CbjYHA3Gdx7QQX8lgctREq/cG9nolUcT3RBU9RbcTNOYtgycHe8rbPkXQmp4n7V9qkOETDqzhnKg43fVM+rsjgbkReCmZhM6kqp6szg+46J/l+mlFBOPgZLHq4LlYKOl0Pl2d72w==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39850400004)(396003)(136003)(346002)(376002)(36840700001)(46966006)(316002)(2616005)(426003)(8676002)(82310400003)(86362001)(47076005)(36860700001)(1076003)(70586007)(81166007)(478600001)(6666004)(8936002)(110136005)(7696005)(186003)(82740400003)(5660300002)(4326008)(2906002)(36756003)(54906003)(70206006)(26005)(336012)(44832011);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:21:56.4844
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 367e8842-5f01-42d4-9d80-08d919bcdade
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT020.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5830

alloc_staticmem_pages is designed to allocate nr_pfns contiguous
pages of static memory. It is the equivalent of alloc_heap_pages
for static memory.
This commit only covers allocating at a specified starting address.

For each page, it shall check that the page is reserved
(PGC_reserved) and free. It shall also perform the necessary
initialization, which is mostly the same as in alloc_heap_pages,
e.g. following the same cache-coherency policy and changing the
page state to PGC_state_inuse, etc.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
 1 file changed, 64 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 58b53c6ac2..adf2889e76 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
     return pg;
 }
 
+/*
+ * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
+ * It is the equivalent of alloc_heap_pages for static memory.
+ */
+static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
+                                                paddr_t start,
+                                                unsigned int memflags)
+{
+    bool need_tlbflush = false;
+    uint32_t tlbflush_timestamp = 0;
+    unsigned int i;
+    struct page_info *pg;
+    mfn_t s_mfn;
+
+    /* For now, it only supports allocating at specified address. */
+    s_mfn = maddr_to_mfn(start);
+    pg = mfn_to_page(s_mfn);
+    if ( !pg )
+        return NULL;
+
+    for ( i = 0; i < nr_pfns; i++ )
+    {
+        /*
+         * Reference count must continuously be zero for free pages
+         * of static memory (PGC_reserved).
+         */
+        ASSERT(pg[i].count_info & PGC_reserved);
+        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
+        {
+            printk(XENLOG_ERR
+                    "Reference count must continuously be zero for free pages, "
+                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
+                    i, mfn_x(page_to_mfn(pg + i)),
+                    pg[i].count_info, pg[i].tlbflush_timestamp);
+            BUG();
+        }
+
+        if ( !(memflags & MEMF_no_tlbflush) )
+            accumulate_tlbflush(&need_tlbflush, &pg[i],
+                                &tlbflush_timestamp);
+
+        /*
+         * Reserve flag PGC_reserved and change page state
+         * to PGC_state_inuse.
+         */
+        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
+        /* Initialise fields which have other uses for free pages. */
+        pg[i].u.inuse.type_info = 0;
+        page_set_owner(&pg[i], NULL);
+
+        /*
+         * Ensure cache and RAM are consistent for platforms where the
+         * guest can control its own visibility of/through the cache.
+         */
+        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
+                            !(memflags & MEMF_no_icache_flush));
+    }
+
+    if ( need_tlbflush )
+        filtered_flush_tlb_mask(tlbflush_timestamp);
+
+    return pg;
+}
+
 /* Remove any offlined page in the buddy pointed to by head. */
 static int reserve_offlined_page(struct page_info *head)
 {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:22:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:22:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128624.241500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBN-0003M1-Mu; Tue, 18 May 2021 05:22:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128624.241500; Tue, 18 May 2021 05:22:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBN-0003Lm-II; Tue, 18 May 2021 05:22:17 +0000
Received: by outflank-mailman (input) for mailman id 128624;
 Tue, 18 May 2021 05:22:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisBM-00019F-73
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:16 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.48]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ffa3db3a-c3a6-4fde-b98f-8d43feb17dde;
 Tue, 18 May 2021 05:22:08 +0000 (UTC)
Received: from AM6P193CA0119.EURP193.PROD.OUTLOOK.COM (2603:10a6:209:85::24)
 by HE1PR0801MB1850.eurprd08.prod.outlook.com (2603:10a6:3:86::20) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:22:05 +0000
Received: from AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:85:cafe::88) by AM6P193CA0119.outlook.office365.com
 (2603:10a6:209:85::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.32 via Frontend
 Transport; Tue, 18 May 2021 05:22:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT004.mail.protection.outlook.com (10.152.16.163) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:22:05 +0000
Received: ("Tessian outbound 3c287b285c95:v92");
 Tue, 18 May 2021 05:22:04 +0000
Received: from e6e41f494c34.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 6002B449-B487-40A5-A21B-C21F9B34A70E.1; 
 Tue, 18 May 2021 05:21:58 +0000
Received: from EUR02-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e6e41f494c34.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:21:58 +0000
Received: from AM6P194CA0036.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:90::49)
 by DBBPR08MB6314.eurprd08.prod.outlook.com (2603:10a6:10:20f::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:21:56 +0000
Received: from VE1EUR03FT005.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:90:cafe::10) by AM6P194CA0036.outlook.office365.com
 (2603:10a6:209:90::49) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:21:56 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT005.mail.protection.outlook.com (10.152.18.172) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:21:56 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:49 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.14; Tue, 18
 May 2021 05:21:49 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffa3db3a-c3a6-4fde-b98f-8d43feb17dde
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZE+hOEo1iLiVZbrYdwypFc9m8SMes4QkXJwoF5cLLCI=;
 b=A8z/3Jxdey1736gclVsCGXTKdwqJJz09tMR1UFWSSyjbSXE9MGOZINe24d7v0xafWulN8eenAOdGopUeDxZHby/yjaemWRjCwq3PCO3e+aCi1Nz3Lz56tBCKjKNOi6WVsXx3MyTMMTgZoHzzoIjQdmRrYTigRyXJxbjE4K0opBc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1d38469fbf5448af
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FCtIR/RVhesXgjkJHQGF9EjRi7lTkaz/TATKyOncArQXgMZ5K+8aHWA12A3MHcOJmel90Og6Nhasi8MWFTwoJAg6XY/gzGfYejKZAviC89TQeIx/i4qv7IL1YYjNqiA+aFN5aR6Z0pGjEpQ+NgKje0IcK1x0w5NpDs0VM7A//2UTlRFkWUO0+MTrkXFt9RF0e4PPnA5ye0oLow/Rq6v4QO4H2PaSuV5s9NLeHEo096CGFSSWtYIQi8eJJvsRgRYuChXuc7S5rYN0v8zANbsfVj1TESbGged3vbYvaomXJdcaZuChTi4ZNjFy0dA6TWJD3zOsJFmH/1AURpPGrfImyw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZE+hOEo1iLiVZbrYdwypFc9m8SMes4QkXJwoF5cLLCI=;
 b=VjwXAY/5BMymAUmTnnGwMxb/gtiIeU+cji14rGREjDA+MP9rEKWnDGBl2yHpodrKvqFvF+cLmI/fN2J5xw6Mx1bykYi45PkUfCpUt/njKlztPU+BQN/XT0bAfxoVOXCSFTLc25ylU4hfSNzpPCIPeZtmvV2a7B1bFSybdA6F84N2HZSuSob0zVcEFji6WyHjF6e0KmI90eYK7gFERdzVu8k6wamFtHk6ICACE1pn/YpiOM2t8rgZyYrmheZRz7N5I1tu7PQK15l6IE+2joGehiZojd//cFJu8D8Hw8XcxLx9Q5BUiovuVsm2z3LJLe2ne0p1hIgwd/ZSgbyFKmoHTw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ZE+hOEo1iLiVZbrYdwypFc9m8SMes4QkXJwoF5cLLCI=;
 b=A8z/3Jxdey1736gclVsCGXTKdwqJJz09tMR1UFWSSyjbSXE9MGOZINe24d7v0xafWulN8eenAOdGopUeDxZHby/yjaemWRjCwq3PCO3e+aCi1Nz3Lz56tBCKjKNOi6WVsXx3MyTMMTgZoHzzoIjQdmRrYTigRyXJxbjE4K0opBc=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 07/10] xen/arm: introduce alloc_domstatic_pages
Date: Tue, 18 May 2021 05:21:10 +0000
Message-ID: <20210518052113.725808-8-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 337c6998-d82d-4726-f50e-08d919bce017
X-MS-TrafficTypeDiagnostic: DBBPR08MB6314:|HE1PR0801MB1850:
X-Microsoft-Antispam-PRVS:
	<HE1PR0801MB18507C21EB9421B437C15292F72C9@HE1PR0801MB1850.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 N0cvI42XJ3Wn/KwSal7wBIvhv5hDDhMQnllxpDKfddcNR2RaCEyLk44ft/YipEa7J7AAZ2DOVxNDYTK3VOJRhY5ya8nOPI07KBrMJ7BT/UtSyG9T3Xu+4GJPoOyX42ku7pPS9S4v8vL8MFrAzAV/AEgf7R3w/oTGNtVXLZbrMhuqTL0cHwdnoS9AkLd4hgALVhp7y2Jlf1GuTPjQSnXgeTwCTWQXo0I7qwFFtPVjvNgvaKaeMuLDksMPlphPsNM+Ac/xjnmucYQYAr91+ONERoKjpSEyCv0UYt+q48EN/gYHYOs0m2jpBGt1C/kV1jfO2mPESl1fV+mz1SVl8V4YjkCKx00mi9rxAooISlv2S2G+TLiq3b2P2bGSzMrovivyOs8Gkqcw6msUobLQD5349hXflMJFkvz7NuYEV6rao0BJgHIn8T6uNzHPeycj7//gZxXOfOzUrfxG2+TNw3WZlJhR2u462SXEDRixba0IMMMHJbfik5mFfALgk1Ey7n8hQ0ygkoDdGwqBFArZCdkfF1vUV2nkz29wIVoAC/7d0bFunHIo8kV5ZSt1dF1tR1DFP2yyZaR6PKSnALo5Vp6hwEe0bleVW0n/pJb38O0334ktvg8Ee9S226rfSr76BXFrEmYPfPc/SayEGw65yFkjtVNInbP0D3HZWbsCXonJ0ZZ7x2u0RAUfEuqv/LBjZUtg
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(39850400004)(396003)(46966006)(36840700001)(36756003)(2616005)(83380400001)(36860700001)(82310400003)(86362001)(6666004)(336012)(478600001)(8936002)(7696005)(2906002)(82740400003)(44832011)(426003)(70206006)(81166007)(70586007)(316002)(8676002)(5660300002)(186003)(1076003)(356005)(47076005)(54906003)(4326008)(26005)(110136005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB6314
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a040dca4-7c1f-477c-dbd6-08d919bcda99
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	KzMZVs4BLGX2lhVgTBdihpq5Ma33oRbAAZ7AY+SwYq6MrNfLVAtkB3G2CFS43zUBnl24KqeYC3Zmm86pym1b0jcBM3M7UTYuLryjxeKOn1cL9lO4siNRnTprdiVy0MOai3oxkl1HM1QPETEOjoBCNW15+WbsMeVR2s2adevEewF/q/sZ4/flpjnOI9g9pgsq9t/pYHDYLTl0DrGr015IEzv3gpwz092l8s+Od7YMXlLO5DYs4FUZnuhbmndBmLe0Ekgh1wwRAIrQj1Q9m8Be68yEyDEbRvIR/C245GQvj9/Wzy78PbTxQj5qStLlNRzmzwiH2jM5BpypSV80a+oOAmxY9jKyBtuDf3y+bAvbf3uKZQOKVeP6yknM6UzAabQuUsldsyHu/zyk5A8R5lTjRBchUcMsINLB+8a8h6LG3DTlla9bl+PHAr7BJyB5dVB1DsiW4MkNjMmBTecjoF10ZfJKTmqo4HYI4dxWWdhBzg4XtuAP39uPDz6eOQ5cGcak5Lmb5JK8FYAjj/8kXpL6nVGhM87xcO/uRaMi0odLuL3nYz2WuTQXOiZujZdte2dXghmlC9v59IHS2Zc+01E6aXZRYkXRIRybxD5oQ3rTY8ppr9E6bXhprr20y1Eop6fi+WdsX8FUE/yRlCwOjQYLlQ==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(396003)(346002)(39850400004)(136003)(46966006)(36840700001)(26005)(36860700001)(54906003)(86362001)(47076005)(8676002)(2906002)(36756003)(478600001)(110136005)(186003)(81166007)(6666004)(336012)(2616005)(316002)(1076003)(70586007)(5660300002)(83380400001)(70206006)(44832011)(8936002)(82740400003)(7696005)(4326008)(426003)(82310400003);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:22:05.3279
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 337c6998-d82d-4726-f50e-08d919bce017
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT004.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB1850

alloc_domstatic_pages is the equivalent of alloc_domheap_pages for
static memory: it allocates nr_pfns pages of static memory and
assigns them to one specific domain.

It uses alloc_staticmem_pages to get nr_pfns pages of static memory;
on success, it then uses assign_pages to assign those pages to the
specified domain, including using page_set_reserved_owner to set
their reserved domain owner.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/common/page_alloc.c | 53 +++++++++++++++++++++++++++++++++++++++++
 xen/include/xen/mm.h    |  4 ++++
 2 files changed, 57 insertions(+)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index 0eb9f22a00..f1f1296a61 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2447,6 +2447,9 @@ int assign_pages(
     {
         ASSERT(page_get_owner(&pg[i]) == NULL);
         page_set_owner(&pg[i], d);
+        /* Use page_set_reserved_owner to set its reserved domain owner. */
+        if ( (pg[i].count_info & PGC_reserved) )
+            page_set_reserved_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
             (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
@@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
     return pg;
 }
 
+/*
+ * Allocate nr_pfns contiguous pages, starting at #start, of static memory,
+ * then assign them to one specific domain #d.
+ * It is the equivalent of alloc_domheap_pages for static memory.
+ */
+struct page_info *alloc_domstatic_pages(
+        struct domain *d, unsigned long nr_pfns, paddr_t start,
+        unsigned int memflags)
+{
+    struct page_info *pg = NULL;
+    unsigned long dma_size;
+
+    ASSERT(!in_irq());
+
+    if ( memflags & MEMF_no_owner )
+        memflags |= MEMF_no_refcount;
+
+    if ( !dma_bitsize )
+        memflags &= ~MEMF_no_dma;
+    else
+    {
+        dma_size = 1ul << bits_to_zone(dma_bitsize);
+        /* Starting address shall meet the DMA limitation. */
+        if ( dma_size && start < dma_size )
+            return NULL;
+    }
+
+    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
+    if ( !pg )
+        return NULL;
+
+    if ( d && !(memflags & MEMF_no_owner) )
+    {
+        if ( memflags & MEMF_no_refcount )
+        {
+            unsigned long i;
+
+            for ( i = 0; i < nr_pfns; i++ )
+                pg[i].count_info = PGC_extra;
+        }
+        if ( assign_pages(d, pg, nr_pfns, memflags) )
+        {
+            free_staticmem_pages(pg, nr_pfns, memflags & MEMF_no_scrub);
+            return NULL;
+        }
+    }
+
+    return pg;
+}
+
 void free_domheap_pages(struct page_info *pg, unsigned int order)
 {
     struct domain *d = page_get_owner(pg);
diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
index dcf9daaa46..e45987f0ed 100644
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -111,6 +111,10 @@ unsigned long __must_check domain_adjust_tot_pages(struct domain *d,
 int domain_set_outstanding_pages(struct domain *d, unsigned long pages);
 void get_outstanding_claims(uint64_t *free_pages, uint64_t *outstanding_pages);
 
+/* Static Memory */
+struct page_info *alloc_domstatic_pages(struct domain *d,
+        unsigned long nr_pfns, paddr_t start, unsigned int memflags);
+
 /* Domain suballocator. These functions are *not* interrupt-safe.*/
 void init_domheap_pages(paddr_t ps, paddr_t pe);
 struct page_info *alloc_domheap_pages(
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:22:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:22:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128629.241510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBV-0004CU-5L; Tue, 18 May 2021 05:22:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128629.241510; Tue, 18 May 2021 05:22:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBV-0004CH-1p; Tue, 18 May 2021 05:22:25 +0000
Received: by outflank-mailman (input) for mailman id 128629;
 Tue, 18 May 2021 05:22:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisBT-00047I-M3
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:23 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.77]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8e1f321c-4bc0-4b0c-8634-350deec083ba;
 Tue, 18 May 2021 05:22:22 +0000 (UTC)
Received: from AM7PR02CA0009.eurprd02.prod.outlook.com (2603:10a6:20b:100::19)
 by DB6PR0802MB2199.eurprd08.prod.outlook.com (2603:10a6:4:82::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Tue, 18 May
 2021 05:22:20 +0000
Received: from AM5EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:100:cafe::ae) by AM7PR02CA0009.outlook.office365.com
 (2603:10a6:20b:100::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Tue, 18 May 2021 05:22:20 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT051.mail.protection.outlook.com (10.152.16.246) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:22:20 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Tue, 18 May 2021 05:22:20 +0000
Received: from 4ca63d3bf940.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 87B07377-33EE-4E9D-828B-361404BDB204.1; 
 Tue, 18 May 2021 05:22:13 +0000
Received: from FRA01-PR2-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 4ca63d3bf940.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:22:13 +0000
Received: from AM6PR08CA0008.eurprd08.prod.outlook.com (2603:10a6:20b:b2::20)
 by PR2PR08MB4858.eurprd08.prod.outlook.com (2603:10a6:101:1f::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Tue, 18 May
 2021 05:22:12 +0000
Received: from VE1EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:b2:cafe::bd) by AM6PR08CA0008.outlook.office365.com
 (2603:10a6:20b:b2::20) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.24 via Frontend
 Transport; Tue, 18 May 2021 05:22:12 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT046.mail.protection.outlook.com (10.152.19.226) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:22:12 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:52 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.14; Tue, 18
 May 2021 05:21:52 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e1f321c-4bc0-4b0c-8634-350deec083ba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9ZNC/nfkeDYicCmVJe5toeMnkfsndF5g/9+kkSrRk+M=;
 b=ogoyPpU9KTIuq7lcLHcOa/YvXOVxN63hmoRn/riXVZwmt7yGHohth7ynnprH9hUORhJQpUNfvEMqWL7T0cz/zKoi8xVKA9VXKF3P6KAtBK2TkgZOAJM1W9VjaQMEH2pprXYP4zwpWkH3vHcShAbDqEwF5kg+KtbDdDetK+cEJQ8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a6f4acc816c8473b
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=aZyWU+3eFnzBFFCDhXJPb49pfhbkDhc4nP2Bs3gb2QRIaCPo4c8ZFDg6Zkt9HCzEGkU3ByOM8XVLDSTOFTRYlBz/0NCEMOEUg2+K/QAbKQxhTJPD0IKaYcRK+iVyoYVZsUHg2UEtEjNTme3+/gVciwlWjNy5LNeYWjdLJDSO2QhEU85vGkzw2pF2vj3TQzaFzuuCPtY2fYp4f56ybOW70odgHPjJ8ET9MM/mO3d4YhdYUN7qpXFGw8PfO1aKuNgrjDgVT456weO86lXqxPbZTOK1WCbIIRIQJPB54/gpbHe2tqPvu3Q8I4D3GRoUGSgMBF1VzeOANGxcmyDmufr1mg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9ZNC/nfkeDYicCmVJe5toeMnkfsndF5g/9+kkSrRk+M=;
 b=f3PJgjXFuxfeZ1WcNt/5uRXvtEjUrhh0H9RHj3S/1oEFyRHZZOkPwqB2ZPn0OEj/MmaDYXNGYSjYW6CLK9m+toqKmU+uZJ8jAje6DKYeidU68WdvEvfpOOk9cIMu9sgWr6aThJRyYQBEn35f2/QWV0buFdFI7Wc/gLt6LQ6DGR/9TqLdPJC5SGZNaUIaK5Y5JRm3drNgx6dwgJn5bBCOd8+o4PZRn4zCK2DyC7t/zlEKiDXQ5mVlHz++5PfiUPG+aMlFVA+4jjjrQz3nFK83gJ/dtMllfmsbcpnILpXTfF0ZFRXdf99E+Qn1tcuYppKaJEb2gBBpHG6eQANK/gn/8Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=9ZNC/nfkeDYicCmVJe5toeMnkfsndF5g/9+kkSrRk+M=;
 b=ogoyPpU9KTIuq7lcLHcOa/YvXOVxN63hmoRn/riXVZwmt7yGHohth7ynnprH9hUORhJQpUNfvEMqWL7T0cz/zKoi8xVKA9VXKF3P6KAtBK2TkgZOAJM1W9VjaQMEH2pprXYP4zwpWkH3vHcShAbDqEwF5kg+KtbDdDetK+cEJQ8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 08/10] xen/arm: introduce reserved_page_list
Date: Tue, 18 May 2021 05:21:11 +0000
Message-ID: <20210518052113.725808-9-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 9770d729-03d4-46d6-dc47-08d919bce8f1
X-MS-TrafficTypeDiagnostic: PR2PR08MB4858:|DB6PR0802MB2199:
X-Microsoft-Antispam-PRVS:
	<DB6PR0802MB2199792B4E62AAB42D1D0DA8F72C9@DB6PR0802MB2199.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:5516;OLM:5516;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 gzOnoWMOgYntjcBJG0Et1nPQDFF9M/nB5sCOh8SprxxAAtaZH+NUFZsnS9vPLnpORLqAUtX6hBoBP4uBq6QcayoWsU1+/YxCA/A/a+MhEMFj99BHawEc874+VMjiKjjuIBhZFr8yL4viIPpkdaW3JbtOM/RwSYxTDx5a0/Ic6K2StSDugtIvtri+tMgVVoSiyX8F2aR0Rta+kpZVKhQW0lUHmBB1jVAMaJ6ClIxXLm7wY8srnCnSTQJs2+k5IUaGZTx++2+i5daFb1RQZoUcr0HoPjBivy0zmwjmkHY1zywt3aKGJ0GGwTokH1ir7tdzU9xu2nrTuYe2FOD2UMLRTksA2Q6hR+Jh54EU1YpiG2WcRDaOdTyUgeoN4ojTAAwLiqqr12ceg0mlZMzgYyDg8o2cjLiFckgxNZ8PE8+OiU2bABHkKedN1iHKkbpPrLrt1y4pcHTP4Xk0dMp1pHvNAIbibsGD92i+2buUGuQ4H5IZEVeub5/pNQw54MRSRtBM4Yk+vr9Dt6ICkD3JqWKVDwK1F9+IM5/74nWUWBrnzvG/8vaAKLuppb38TIgfagMw1LFpfOJ+DZdhBjXyTzyxSx7Nezre5c0alLDQN2Ailau8qoX2hTi303Y09Si7x5rLdxzqDh9kjdf3T2qmQDeT8DOD/hMEJHo9KuPpz/vFcbEUO9nyjdjltpPRbhEZxdwa
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(396003)(376002)(39850400004)(346002)(136003)(46966006)(36840700001)(1076003)(5660300002)(426003)(186003)(336012)(2616005)(82310400003)(47076005)(82740400003)(6666004)(70206006)(2906002)(70586007)(356005)(83380400001)(36756003)(86362001)(8936002)(478600001)(110136005)(4326008)(44832011)(316002)(54906003)(7696005)(8676002)(81166007)(36860700001)(26005)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PR2PR08MB4858
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	feca53fd-4922-449d-5d93-08d919bce41b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	S+kNN0rta+uUSFCEoGwCrJU8l8DrsFFlAp0DR+TABuk93hRdrvzCzRAcx1U1Ftq+zRrwM+eh675sLTVbEs7bdLKhdCT7UA4Qid71SKTTLU9jRWkrs8kUuQncah/K4AbEaZN9B0kTKClePnVrQVEblbUNHb8laEkeSTHzQVZZ8jkT35cIuM634fdoy+ayW7VfVdWcQf69zBDVbaLzF9UYkslZbYQKaGkLZXyHX9lSj5rAbpiNogr0AuVCwA6NblxNKvaWLaaxA9K1yGIEBQoMAt0qUTS9V+LqTHnn/nQQ0GwhQ4JKmc2stftvFztJvKO/lMJ8wsniNHAVt0N4yv8qvrA+SwSKs2MWYaVFpYrwscacPX2pXnmmbYqrX38foO2HFObUXx8OJff/u1uG71jC34OgZJb3/BrTJ34m5nwPL88tKWV4VMfs5hqq7P5eg6lBLy5/WfKIkccOKKcq85TjTs7Wvr9v3tW/uVi/wnhhI4FEJ1CKVz6jjt/SFTYPIdH3tZBRRDxZkgh3eYwtuShsBHdmtza3nwaGWE9CK5zCXNdgmo416P0Mmzy3mUN6lQaKv+bzBqoUXMjNQatg9AjSTleXw21DnPpZtDJpnbDchbQITVA6jfiufWRS92Rod/RIUceNQpshve+rptEwXLPb9R/yZhDSqjWuEAEJdqaunAM=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39850400004)(396003)(346002)(136003)(376002)(36840700001)(46966006)(82310400003)(7696005)(6666004)(47076005)(36860700001)(426003)(478600001)(4326008)(26005)(336012)(1076003)(5660300002)(2616005)(44832011)(36756003)(8676002)(8936002)(316002)(54906003)(186003)(110136005)(82740400003)(81166007)(83380400001)(70206006)(70586007)(2906002)(86362001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:22:20.1646
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9770d729-03d4-46d6-dc47-08d919bce8f1
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2199

Since the page_list under struct domain links pages of guest RAM allocated from the heap, it should not include reserved pages of static memory.

The number of PGC_reserved pages assigned to a domain is tracked in a
new 'reserved_pages' counter. Also introduce a new reserved_page_list
to link pages of static memory, and let page_to_list return the
reserved_page_list when the PGC_reserved flag is set.

Later, when a domain is destroyed or rebooted, these new fields will help
relinquish the memory to the proper place, rather than handing it back to the heap.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/common/domain.c     | 1 +
 xen/common/page_alloc.c | 7 +++++--
 xen/include/xen/sched.h | 5 +++++
 3 files changed, 11 insertions(+), 2 deletions(-)

diff --git a/xen/common/domain.c b/xen/common/domain.c
index 6b71c6d6a9..c38afd2969 100644
--- a/xen/common/domain.c
+++ b/xen/common/domain.c
@@ -578,6 +578,7 @@ struct domain *domain_create(domid_t domid,
     INIT_PAGE_LIST_HEAD(&d->page_list);
     INIT_PAGE_LIST_HEAD(&d->extra_page_list);
     INIT_PAGE_LIST_HEAD(&d->xenpage_list);
+    INIT_PAGE_LIST_HEAD(&d->reserved_page_list);
 
     spin_lock_init(&d->node_affinity_lock);
     d->node_affinity = NODE_MASK_ALL;
diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index f1f1296a61..e3f07ec6c5 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -2410,7 +2410,7 @@ int assign_pages(
 
         for ( i = 0; i < nr_pfns; i++ )
         {
-            ASSERT(!(pg[i].count_info & ~PGC_extra));
+            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
             if ( pg[i].count_info & PGC_extra )
                 extra_pages++;
         }
@@ -2439,6 +2439,9 @@ int assign_pages(
         }
     }
 
+    if ( pg[0].count_info & PGC_reserved )
+        d->reserved_pages += nr_pfns;
+
     if ( !(memflags & MEMF_no_refcount) &&
          unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
         get_knownalive_domain(d);
@@ -2452,7 +2455,7 @@ int assign_pages(
             page_set_reserved_owner(&pg[i], d);
         smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
         pg[i].count_info =
-            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
+            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
         page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
     }
 
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3982167144..b6333ed8bb 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -368,6 +368,7 @@ struct domain
     struct page_list_head page_list;  /* linked list */
     struct page_list_head extra_page_list; /* linked list (size extra_pages) */
     struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
+    struct page_list_head reserved_page_list; /* linked list (size reserved pages) */
 
     /*
      * This field should only be directly accessed by domain_adjust_tot_pages()
@@ -379,6 +380,7 @@ struct domain
     unsigned int     outstanding_pages; /* pages claimed but not possessed */
     unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
     unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
+    unsigned int     reserved_pages;    /* pages of static memory */
     atomic_t         shr_pages;         /* shared pages */
     atomic_t         paged_pages;       /* paged-out pages */
 
@@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
     if ( pg->count_info & PGC_extra )
         return &d->extra_page_list;
 
+    if ( pg->count_info & PGC_reserved )
+        return &d->reserved_page_list;
+
     return &d->page_list;
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:22:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:22:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128635.241522 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBc-00051m-Qf; Tue, 18 May 2021 05:22:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128635.241522; Tue, 18 May 2021 05:22:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisBc-00051f-MD; Tue, 18 May 2021 05:22:32 +0000
Received: by outflank-mailman (input) for mailman id 128635;
 Tue, 18 May 2021 05:22:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisBb-00019F-7m
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:31 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe1e::629])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 599ea074-c24c-4eab-97e5-6cd0e1c41332;
 Tue, 18 May 2021 05:22:18 +0000 (UTC)
Received: from AM0PR04CA0120.eurprd04.prod.outlook.com (2603:10a6:208:55::25)
 by DB6PR0802MB2598.eurprd08.prod.outlook.com (2603:10a6:4:97::20)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:22:16 +0000
Received: from AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:55:cafe::61) by AM0PR04CA0120.outlook.office365.com
 (2603:10a6:208:55::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:22:16 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT025.mail.protection.outlook.com (10.152.16.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:22:15 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Tue, 18 May 2021 05:22:15 +0000
Received: from a8228ba31155.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 22C9CDE8-865D-4F08-95D8-F9829E163804.1; 
 Tue, 18 May 2021 05:22:08 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a8228ba31155.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:22:08 +0000
Received: from AM5PR0601CA0044.eurprd06.prod.outlook.com
 (2603:10a6:203:68::30) by DB7PR08MB3338.eurprd08.prod.outlook.com
 (2603:10a6:5:1b::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:22:07 +0000
Received: from AM5EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:68:cafe::fa) by AM5PR0601CA0044.outlook.office365.com
 (2603:10a6:203:68::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:22:07 +0000
Received: from nebula.arm.com (40.67.248.234) by
 AM5EUR03FT050.mail.protection.outlook.com (10.152.17.47) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:22:06 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX04.Arm.com
 (10.251.24.32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:54 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 599ea074-c24c-4eab-97e5-6cd0e1c41332
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F8m43zx7x8u+u5yr2EVRgwygvKFWL/IN38zVtxS/SDg=;
 b=JmWBMpF4NraHMaTK5kI4Os0kGwewpd12qeakHYdubxm7l6C+2JPRidUwONaWFZCnZWED5YQ0woyRwGXmurm0kR2SASMrgiUDPZKr+7yF7I/aGHZ595nJNQ2yVvIfqij2lWLb4DmWElWFp2fd7KABZsu/zmyIGtvxG6lL9y5CH0E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 146620f27eb284f0
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=n4jPM40N69hYTMhvI75fN7KlbGig55PO4u/gP2b4rY39s2UlT/79cy6rg4WJ/6JW87hFyDt6XeUj27W9h/tqYouu3xUXK/9uThzGT2fNh9fmdItfWuDnN1feroRq8MkEJYZxzp1xqwj4JNOgH62cgPAjWNl8QdJozwtBSht4UKXrqqSyfKyy4E7hPrmqLBK7dZDEoKTRfgUDymO/Knt5RKzsSQpczJVQJ5tRgJctwCRD8s727F3IU9XxBZNWu85pNCAxUtL1+P1S3eQV5lP3glzzrmRexmBY4oVSvwx9StHeIEQUz4tpf/bgzoGhEgOLiBspgIshVFx9YfujeIYbZQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F8m43zx7x8u+u5yr2EVRgwygvKFWL/IN38zVtxS/SDg=;
 b=cpd3Lj3Jfi+yoTAoxfXZTRq10LI3pjZecetiQNJTbNX2UyJwG79TrFEG6YWLX7qUwD4UD5JYasTyiaPw/obKJ/c8PBYAkSF2MKm74RceSbgTR4ykXX3h/mIH/a9zMwqmabQtxTA/NPeQbsvlZhER46ZCcC+v7WrKEg6t7Q5cMEKO4K0GPDMIaeRc7CSXDTgDiPpyEagIyk33p4OByhZaKYwj7V4G1jJNMY6BAG5/+jeS4MT/yBgQKOtSysC+h2E7hKYX/KoYmMABsPCJVMg6s4x1zV3bdKQ8m0ys2AmyrTXgXtbIl3FZcfQJsMPrY3wlXq/HDg5kUypgNA2O/Rb+1w==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=F8m43zx7x8u+u5yr2EVRgwygvKFWL/IN38zVtxS/SDg=;
 b=JmWBMpF4NraHMaTK5kI4Os0kGwewpd12qeakHYdubxm7l6C+2JPRidUwONaWFZCnZWED5YQ0woyRwGXmurm0kR2SASMrgiUDPZKr+7yF7I/aGHZ595nJNQ2yVvIfqij2lWLb4DmWElWFp2fd7KABZsu/zmyIGtvxG6lL9y5CH0E=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain construction
Date: Tue, 18 May 2021 05:21:12 +0000
Message-ID: <20210518052113.725808-10-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a905f5fe-fd47-4f2b-247f-08d919bce630
X-MS-TrafficTypeDiagnostic: DB7PR08MB3338:|DB6PR0802MB2598:
X-Microsoft-Antispam-PRVS:
	<DB6PR0802MB2598BAE51FDC5CD8F03789E2F72C9@DB6PR0802MB2598.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:2089;OLM:2089;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 jnNDjiU2nU/HY1AORxFCFCTAGP2pCTFhyhvse5NTOrWByXqEdOq2FC0Jz82+NjyAvxBbf96Qsy1/M7pk4Fkd0u2hvPI4h0BiSogj5txcOqGzyuAFAjUMcRULgbhf9rGdHxRPYqW1baWm39T6/5dv2IO5yara3kcF/Ydmyn9P4fLa2+K52oyDsmhfaLgbLjrd7YEzMXa89bLAZDFaYWY+NvXFE970TCkX+T/WSF+cfzZ/HQTfwVng8+p61SvdD8uujcvvbt54tXTdg15ZAO1z6W6BBiwvt402YBSMsKA+GccfPX3bmKtc8kZnSeTHD6y6DJHU4uESm7BHfr1LjBraj3/Nars4D6bxWGs32MEAeuZxZ4QR1Jp1+c2zmXJ6UZxEaxg2E60BNuoWI4xw1luqqvMFQxKa4zUQJm/Zd43c+xUWXAaP1XYANSj537Z0ueGgG3rZDULI/kYCZbB9xHmBPxG0RhhMPTGsO4suzxqnv2I0rczyYEkM2BoQXM/1FcJDN5Ir6x8OAVSRuNb1LDueqrMycZq7efaiyfrPF42h0NcgRoDOuK9FguM4pJs+cpita8MrJ35qH7KT9KSx08NNHX90Lddp6yvFBmWHviozzgiAC+sbU35RK/tPjWwKnrTJbTtSBR6iM58P7JUsBux4V0372B3oeW/uN1DOFNTRfB9MOM0pOprBifrHcmzGMf2uq7kLTsUzBLrmzCsereIbRQ==
X-Forefront-Antispam-Report-Untrusted:
 CIP:40.67.248.234;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:nebula.arm.com;PTR:InfoDomainNonexistent;CAT:NONE;SFS:(4636009)(136003)(376002)(396003)(346002)(39850400004)(36840700001)(46966006)(426003)(110136005)(54906003)(70586007)(4326008)(336012)(186003)(316002)(47076005)(8676002)(5660300002)(70206006)(2906002)(86362001)(7696005)(6666004)(82740400003)(8936002)(81166007)(36860700001)(82310400003)(44832011)(36756003)(356005)(83380400001)(1076003)(2616005)(26005)(478600001)(81973001)(36900700001);DIR:OUT;SFP:1101;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3338
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	20edcc16-d774-43e2-b8db-08d919bce10b
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Af0pTLUmx8CTPEk0bdl3eAZflXMSVL3UryWs3i1sEveOiYpY2oqPxqJjSY9nGAyAW3vMDVmrExfLxXLeC9POw1zkbVH1OB3HrnbUpHnjTcneHZKIcrBk96VnZPyOy0OCZsI1LEVXAsEIMH6VSj/pm3MNHKXLSArnEEEF1BMVlREf0peNRd4tMui5v3Lhc+k1VmGmchvdBI6q/4xUiVo3LoyCaSzcPJbqBEEc4+e12B/eU1o3q2a8N8kPEEPX7P0k8uh3LwbFGWLrln0MS9rtNv5QEKEJgUmHzJi+O0DHVJa+q/gct0e/X6386uKllG4w0ydrw5ZI2I0kGiArhBWt6n2Whvn6//jXz2M7GAoTP28gPxZ1a4grLaztuJq+COZ7KUfVbZDIHY4YCUir/ECALNTomTJ+g4pq37qq8bpAiABTexosZwAM0bDMpNirB+a7Ti4L22DnKXxDmwP8FsgqZacw7ZyDrsZUOH9U23q5BQuTHmPdB2YR4+kmsem7BRWRTFX5veg0r1EO6J1Sw/UQ5mhaXuvo3NfPZ2HL5ZobFqIq7VSQqxbC0yUgHVJ0qk6ctQg4DIKykbFYfa+fKnBaGmv+EtRTjwvNygd7uSGGdpPYCfsxB/lD1gK4qxmRAp6RqfRcSOiVJrNsKzTV6USs0XiLH8jKnGrHgX5bKotC6uqEYwf4W97SjliIJmTOEy9m
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(136003)(346002)(396003)(376002)(39850400004)(36840700001)(46966006)(110136005)(54906003)(8936002)(7696005)(26005)(44832011)(316002)(81166007)(36756003)(478600001)(86362001)(47076005)(8676002)(36860700001)(4326008)(82310400003)(82740400003)(186003)(1076003)(5660300002)(336012)(6666004)(426003)(70206006)(83380400001)(2616005)(2906002)(70586007)(81973001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:22:15.5516
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: a905f5fe-fd47-4f2b-247f-08d919bce630
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2598

This commit parses the `xen,static-mem` device tree property to acquire the
static memory information reserved for a domain, when constructing that
domain during boot.

The related information is stored in a new static_mem field under the
per-domain struct arch_domain.

Right now, the implementation of allocate_static_memory is missing and will
be introduced later; the code simply BUG()s out at that point for the moment.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/domain_build.c  | 58 ++++++++++++++++++++++++++++++++----
 xen/include/asm-arm/domain.h |  3 ++
 2 files changed, 56 insertions(+), 5 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 282416e74d..30b55588b7 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -2424,17 +2424,61 @@ static int __init construct_domU(struct domain *d,
 {
     struct kernel_info kinfo = {};
     int rc;
-    u64 mem;
+    u64 mem, static_mem_size = 0;
+    const struct dt_property *prop;
+    u32 static_mem_len;
+    bool static_mem = false;
+
+    /*
+     * Guest RAM could be of static memory from static allocation,
+     * which will be specified through "xen,static-mem" property.
+     */
+    prop = dt_find_property(node, "xen,static-mem", &static_mem_len);
+    if ( prop )
+    {
+        const __be32 *cell;
+        u32 addr_cells = 2, size_cells = 2, reg_cells;
+        u64 start, size;
+        int i, banks;
+        static_mem = true;
+
+        dt_property_read_u32(node, "#address-cells", &addr_cells);
+        dt_property_read_u32(node, "#size-cells", &size_cells);
+        BUG_ON(size_cells > 2 || addr_cells > 2);
+        reg_cells = addr_cells + size_cells;
+
+        cell = (const __be32 *)prop->value;
+        banks = static_mem_len / (reg_cells * sizeof (u32));
+        BUG_ON(banks > NR_MEM_BANKS);
+
+        for ( i = 0; i < banks; i++ )
+        {
+            device_tree_get_reg(&cell, addr_cells, size_cells, &start, &size);
+            d->arch.static_mem.bank[i].start = start;
+            d->arch.static_mem.bank[i].size = size;
+            static_mem_size += size;
+
+            printk(XENLOG_INFO
+                    "Static Memory Bank[%d] for Domain %pd:"
+                    "0x%"PRIx64"-0x%"PRIx64"\n",
+                    i, d,
+                    d->arch.static_mem.bank[i].start,
+                    d->arch.static_mem.bank[i].start +
+                    d->arch.static_mem.bank[i].size);
+        }
+        d->arch.static_mem.nr_banks = banks;
+    }
 
     rc = dt_property_read_u64(node, "memory", &mem);
-    if ( !rc )
+    if ( !static_mem && !rc )
     {
         printk("Error building DomU: cannot read \"memory\" property\n");
         return -EINVAL;
     }
-    kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
+    kinfo.unassigned_mem = static_mem ? static_mem_size : (paddr_t)mem * SZ_1K;
 
-    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
+    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n",
+            d->max_vcpus, (kinfo.unassigned_mem) >> 10);
 
     kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
 
@@ -2452,7 +2496,11 @@ static int __init construct_domU(struct domain *d,
     /* type must be set before allocate memory */
     d->arch.type = kinfo.type;
 #endif
-    allocate_memory(d, &kinfo);
+    if ( static_mem )
+        /* allocate_static_memory(d, &kinfo); */
+        BUG();
+    else
+        allocate_memory(d, &kinfo);
 
     rc = prepare_dtb_domU(d, &kinfo);
     if ( rc < 0 )
diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
index c9277b5c6d..81b8eb453c 100644
--- a/xen/include/asm-arm/domain.h
+++ b/xen/include/asm-arm/domain.h
@@ -10,6 +10,7 @@
 #include <asm/gic.h>
 #include <asm/vgic.h>
 #include <asm/vpl011.h>
+#include <asm/setup.h>
 #include <public/hvm/params.h>
 
 struct hvm_domain
@@ -89,6 +90,8 @@ struct arch_domain
 #ifdef CONFIG_TEE
     void *tee;
 #endif
+
+    struct meminfo static_mem;
 }  __cacheline_aligned;
 
 struct arch_vcpu
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:23:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:23:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128655.241533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisCD-0006XY-4S; Tue, 18 May 2021 05:23:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128655.241533; Tue, 18 May 2021 05:23:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisCD-0006XN-1A; Tue, 18 May 2021 05:23:09 +0000
Received: by outflank-mailman (input) for mailman id 128655;
 Tue, 18 May 2021 05:23:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1lisBq-00019F-8J
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 05:22:46 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.15.73]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a23c45e-d7ee-4558-92c6-37e4e9481593;
 Tue, 18 May 2021 05:22:24 +0000 (UTC)
Received: from DB6PR0301CA0067.eurprd03.prod.outlook.com (2603:10a6:6:30::14)
 by DB7PR08MB3595.eurprd08.prod.outlook.com (2603:10a6:10:40::14) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 05:22:23 +0000
Received: from DB5EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:30:cafe::76) by DB6PR0301CA0067.outlook.office365.com
 (2603:10a6:6:30::14) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Tue, 18 May 2021 05:22:23 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT050.mail.protection.outlook.com (10.152.21.128) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:22:23 +0000
Received: ("Tessian outbound 6c8a2be3c2e7:v92");
 Tue, 18 May 2021 05:22:22 +0000
Received: from 3507816d2bf8.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B8B6842C-802E-4E8C-8D3A-DCCFDAC485D5.1; 
 Tue, 18 May 2021 05:22:14 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3507816d2bf8.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 05:22:14 +0000
Received: from AM6PR08CA0013.eurprd08.prod.outlook.com (2603:10a6:20b:b2::25)
 by AM8PR08MB6514.eurprd08.prod.outlook.com (2603:10a6:20b:36b::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Tue, 18 May
 2021 05:22:13 +0000
Received: from VE1EUR03FT046.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:b2:cafe::7) by AM6PR08CA0013.outlook.office365.com
 (2603:10a6:20b:b2::25) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 05:22:13 +0000
Received: from nebula.arm.com (40.67.248.234) by
 VE1EUR03FT046.mail.protection.outlook.com (10.152.19.226) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 05:22:13 +0000
Received: from AZ-NEU-EX01.Emea.Arm.com (10.251.26.4) by AZ-NEU-EX03.Arm.com
 (10.251.24.31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id 15.1.2176.14; Tue, 18 May
 2021 05:21:58 +0000
Received: from AZ-NEU-EX04.Arm.com (10.251.24.32) by AZ-NEU-EX01.Emea.Arm.com
 (10.251.26.4) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.14; Tue, 18
 May 2021 05:21:57 +0000
Received: from penny.shanghai.arm.com (10.169.190.66) by mail.arm.com
 (10.251.24.32) with Microsoft SMTP Server id 15.1.2176.14 via Frontend
 Transport; Tue, 18 May 2021 05:21:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a23c45e-d7ee-4558-92c6-37e4e9481593
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=86kt5SDAa1ZjE9UwrzfaVaiOyuKnBJmNhimAFnTYVsQ=;
 b=FNDXUdDE6XbSgKQzOGrBQoWgBBTxX0iyW4vzWkKqz9aBiByrMDockZL+L/txjKwMsppbVuxrjZwbMRpCr+dXCDe1k0EtKNOrQooXM8pGF+fOI9bNZmfVRkPspvjERtnOjv7zoe0s6zfD7W793aSbhlaUl1TYGDKVtYxNVuO/fck=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 1d0e93b991ec7343
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass (sender ip is
 40.67.248.234) smtp.rcpttodomain=lists.xenproject.org smtp.mailfrom=arm.com;
 dmarc=pass (p=none sp=none pct=100) action=none header.from=arm.com;
 dkim=none (message not signed); arc=none
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 40.67.248.234)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=pass action=none
 header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 40.67.248.234 as permitted sender) receiver=protection.outlook.com;
 client-ip=40.67.248.234; helo=nebula.arm.com;
From: Penny Zheng <penny.zheng@arm.com>
To: <xen-devel@lists.xenproject.org>, <sstabellini@kernel.org>,
	<julien@xen.org>
CC: <Bertrand.Marquis@arm.com>, <Penny.Zheng@arm.com>, <Wei.Chen@arm.com>,
	<nd@arm.com>
Subject: [PATCH 10/10] xen/arm: introduce allocate_static_memory
Date: Tue, 18 May 2021 05:21:13 +0000
Message-ID: <20210518052113.725808-11-penny.zheng@arm.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210518052113.725808-1-penny.zheng@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-EOPAttributedMessage: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c1fd1713-aab3-4fc5-c7ce-08d919bceaa5
X-MS-TrafficTypeDiagnostic: AM8PR08MB6514:|DB7PR08MB3595:
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB3595DAA477F5A2EC4300BB8CF72C9@DB7PR08MB3595.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:4714;OLM:4714;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB6514
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	7ab52b75-5457-4ff1-7abb-08d919bce4c8
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 05:22:23.0845
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c1fd1713-aab3-4fc5-c7ce-08d919bceaa5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT050.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3595

This commit introduces allocate_static_memory to allocate static memory
as guest RAM for a domain on Static Allocation.

It uses alloc_domstatic_pages to allocate the pre-defined static memory
banks for the domain, and guest_physmap_add_page to set up the P2M table.
The guest physical address space starts at the fixed GUEST_RAM0_BASE and,
once that region is full, continues at GUEST_RAM1_BASE.

Signed-off-by: Penny Zheng <penny.zheng@arm.com>
---
 xen/arch/arm/domain_build.c | 157 +++++++++++++++++++++++++++++++++++-
 1 file changed, 155 insertions(+), 2 deletions(-)

diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
index 30b55588b7..9f662313ad 100644
--- a/xen/arch/arm/domain_build.c
+++ b/xen/arch/arm/domain_build.c
@@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct domain *d,
     return true;
 }
 
+/*
+ * #ram_index and #ram_addr refer to the index and starting address of the
+ * guest memory bank stored in kinfo->mem.
+ * Static memory at #smfn of #tot_size shall be mapped to #sgfn, and
+ * #sgfn will be the next guest address to map when returning.
+ */
+static bool __init allocate_static_bank_memory(struct domain *d,
+                                               struct kernel_info *kinfo,
+                                               int ram_index,
+                                               paddr_t ram_addr,
+                                               gfn_t* sgfn,
+                                               mfn_t smfn,
+                                               paddr_t tot_size)
+{
+    int res;
+    struct membank *bank;
+    paddr_t _size = tot_size;
+
+    bank = &kinfo->mem.bank[ram_index];
+    bank->start = ram_addr;
+    bank->size = bank->size + tot_size;
+
+    while ( tot_size > 0 )
+    {
+        unsigned int order = get_allocation_size(tot_size);
+
+        res = guest_physmap_add_page(d, *sgfn, smfn, order);
+        if ( res )
+        {
+            dprintk(XENLOG_ERR, "Failed to map pages to DomU: %d\n", res);
+            return false;
+        }
+
+        *sgfn = gfn_add(*sgfn, 1UL << order);
+        smfn = mfn_add(smfn, 1UL << order);
+        tot_size -= (1ULL << (PAGE_SHIFT + order));
+    }
+
+    kinfo->mem.nr_banks = ram_index + 1;
+    kinfo->unassigned_mem -= _size;
+
+    return true;
+}
+
 static void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
 {
     unsigned int i;
@@ -480,6 +524,116 @@ fail:
           (unsigned long)kinfo->unassigned_mem >> 10);
 }
 
+/* Allocate memory from static memory as RAM for one specific domain d. */
+static void __init allocate_static_memory(struct domain *d,
+                                            struct kernel_info *kinfo)
+{
+    int nr_banks, _banks = 0;
+    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
+    paddr_t bank_start, bank_size;
+    gfn_t sgfn;
+    mfn_t smfn;
+
+    kinfo->mem.nr_banks = 0;
+    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
+    nr_banks = d->arch.static_mem.nr_banks;
+    ASSERT(nr_banks >= 0);
+
+    if ( kinfo->unassigned_mem <= 0 )
+        goto fail;
+
+    while ( _banks < nr_banks )
+    {
+        bank_start = d->arch.static_mem.bank[_banks].start;
+        smfn = maddr_to_mfn(bank_start);
+        bank_size = d->arch.static_mem.bank[_banks].size;
+
+        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT, bank_start, 0) )
+        {
+            printk(XENLOG_ERR
+                    "%pd: cannot allocate static memory "
+                    "(%#"PRIpaddr" - %#"PRIpaddr")\n",
+                    d, bank_start, bank_start + bank_size);
+            goto fail;
+        }
+
+        /*
+         * By default, it shall be mapped to the fixed guest RAM address
+         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
+         * Starting from RAM0(GUEST_RAM0_BASE).
+         */
+        if ( ram0_size )
+        {
+            /* RAM0 at most holds GUEST_RAM0_SIZE. */
+            if ( ram0_size >= bank_size )
+            {
+                if ( !allocate_static_bank_memory(d, kinfo,
+                                                  0, GUEST_RAM0_BASE,
+                                                  &sgfn, smfn, bank_size) )
+                    goto fail;
+
+                ram0_size = ram0_size - bank_size;
+                _banks++;
+                continue;
+            }
+            else /* bank_size > ram0_size */
+            {
+                if ( !allocate_static_bank_memory(d, kinfo,
+                                                  0, GUEST_RAM0_BASE,
+                                                  &sgfn, smfn, ram0_size) )
+                    goto fail;
+
+                /* This bank hasn't been totally mapped; continue in RAM1. */
+                bank_size = bank_size - ram0_size;
+                smfn = mfn_add(smfn, ram0_size >> PAGE_SHIFT);
+                sgfn = gaddr_to_gfn(GUEST_RAM1_BASE);
+                /* Now the whole RAM0 is consumed. */
+                ram0_size = 0;
+            }
+        }
+
+        if ( ram1_size >= bank_size )
+        {
+            if ( !allocate_static_bank_memory(d, kinfo,
+                                              1, GUEST_RAM1_BASE,
+                                              &sgfn, smfn, bank_size) )
+                goto fail;
+
+            ram1_size = ram1_size - bank_size;
+            _banks++;
+            continue;
+        }
+        else
+            /*
+             * Even RAM1 couldn't hold the rest of this bank, so there
+             * is nowhere left to map it.
+             */
+            goto fail;
+    }
+
+    if ( kinfo->unassigned_mem )
+        goto fail;
+
+    for ( int i = 0; i < kinfo->mem.nr_banks; i++ )
+    {
+        printk(XENLOG_INFO "%pd BANK[%d] %#"PRIpaddr"-%#"PRIpaddr" (%ldMB)\n",
+               d,
+               i,
+               kinfo->mem.bank[i].start,
+               kinfo->mem.bank[i].start + kinfo->mem.bank[i].size,
+               /* Don't want to format this as PRIpaddr (16 digit hex) */
+               (unsigned long)(kinfo->mem.bank[i].size >> 20));
+    }
+
+    return;
+
+fail:
+    panic("Failed to allocate requested domain memory."
+          /* Don't want to format this as PRIpaddr (16 digit hex) */
+          " %ldKB unallocated. Fix the VM's configuration.\n",
+          (unsigned long)kinfo->unassigned_mem >> 10);
+}
+
 static int __init write_properties(struct domain *d, struct kernel_info *kinfo,
                                    const struct dt_device_node *node)
 {
@@ -2497,8 +2651,7 @@ static int __init construct_domU(struct domain *d,
     d->arch.type = kinfo.type;
 #endif
     if ( static_mem )
-        /* allocate_static_memory(d, &kinfo); */
-        BUG();
+        allocate_static_memory(d, &kinfo);
     else
         allocate_memory(d, &kinfo);
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 05:47:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:47:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128688.241543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisZV-0000wA-9C; Tue, 18 May 2021 05:47:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128688.241543; Tue, 18 May 2021 05:47:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisZV-0000w3-6C; Tue, 18 May 2021 05:47:13 +0000
Received: by outflank-mailman (input) for mailman id 128688;
 Tue, 18 May 2021 05:47:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lisZT-0000vt-Ul; Tue, 18 May 2021 05:47:11 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lisZT-00038C-Na; Tue, 18 May 2021 05:47:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lisZT-0007dx-CR; Tue, 18 May 2021 05:47:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lisZT-0004H5-Ac; Tue, 18 May 2021 05:47:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sLCErour/VPr0cRc6n2LPlQ+EfRQQPPITg9y/wTF2no=; b=Wyt+NHYWgyvYZz1XsYM459gESY
	t60emySBjnuoSAhYNX+Q1PT/OyfqtvaFKZU6141UR9uZNUPep3VPG0o+GZ4DqQvDhljGmdQWygQ7u
	Z1LayLlFjawtXMcGewQ95jDr0zhM4QWdgesFHLG1BHHKLCQtQERm/xKhXLo/g94tAXv8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161983-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161983: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=71a25d03b70b399d666d05a3d0046d821248c80e
X-Osstest-Versions-That:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 05:47:11 +0000

flight 161983 xen-unstable real [real]
flight 161989 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161983/
http://logs.test-lab.xenproject.org/osstest/logs/161989/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 161973

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161973
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161973
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161973
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161973
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161973
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161973
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161973
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161973
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161973
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161973
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161973
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  71a25d03b70b399d666d05a3d0046d821248c80e
baseline version:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34

Last test of basis   161973  2021-05-17 01:52:45 Z    1 days
Testing same since   161983  2021-05-17 17:06:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 71a25d03b70b399d666d05a3d0046d821248c80e
Author: Connor Davis <connojdavis@gmail.com>
Date:   Mon May 17 15:43:19 2021 +0200

    xen: fix build when !CONFIG_GRANT_TABLE
    
    Move the forward declaration of struct grant_table in grant_table.h
    above the #ifdef CONFIG_GRANT_TABLE guard. This fixes the following
    build error:
    
    /build/xen/include/xen/grant_table.h:84:50: error: 'struct grant_table'
    declared inside parameter list will not be visible outside of this
    definition or declaration [-Werror]
       84 | static inline int mem_sharing_gref_to_gfn(struct grant_table *gt,
          |
    
    Signed-off-by: Connor Davis <connojdavis@gmail.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit d1f6296053f464c9da6f6fa5f1ece864130718ce
Author: Juergen Gross <jgross@suse.com>
Date:   Mon May 17 15:43:04 2021 +0200

    include/public: add RING_RESPONSE_PROD_OVERFLOW macro
    
    Add a new RING_RESPONSE_PROD_OVERFLOW() macro to make it possible to
    detect an ill-behaved backend tampering with the response producer
    index.
    
    Signed-off-by: Juergen Gross <jgross@suse.com>
    Acked-by: Jan Beulich <jbeulich@suse.com>

commit 599bca58849effbac4ef03a5986d20e38e26a854
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 17 15:42:32 2021 +0200

    Argo/XSM: add SILO hooks
    
    In SILO mode, restrictions on inter-domain communication should apply
    to Argo as well, along the lines of those already in place for evtchn
    and gnttab.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Daniel P. Smith <dpsmith@apertussolutions.com>

commit bd1e7b47bac00735a47055e2cba4106b54175138
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 17 15:42:00 2021 +0200

    x86/shim: fix build when !PV32
    
    In this case compat headers don't get generated (and aren't needed).
    The changes made by 527922008bce ("x86: slim down hypercall handling
    when !PV32") also weren't quite sufficient for this case.
    
    Try to limit #ifdef-ary by introducing two "fallback" #define-s.
    
    Fixes: d23d792478db ("x86: avoid building COMPAT code when !HVM && !PV32")
    Reported-by: Andrew Cooper <andrew.cooper3@citrix.com>
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>

commit aa803ba38a867551917d11059eaa044955556e05
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 17 15:41:28 2021 +0200

    x86emul: fix test harness build for gas 2.36
    
    All of a sudden, besides the usual .text, .rodata, and similar
    sections, an always present .note.gnu.property section has appeared.
    When converting to binary format output, this section gets placed
    according to its linked address, causing the resulting blobs to be
    about 128MB in size. The headers holding a C representation of the
    binary blobs are then, of course, all a multiple of that size (and
    take correspondingly long to create). I didn't bother waiting to see
    what size the final test_x86_emulator binary would have ended up at.
    
    See also https://sourceware.org/bugzilla/show_bug.cgi?id=27753.
    
    Rather than figuring out whether gas supports -mx86-used-note=, simply
    remove the section while creating *.bin.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
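
The approach described can be sketched as a Makefile rule (a hedged
sketch, not the exact rule from the tree; the %.o/%.bin pattern is
assumed): objcopy's standard -R/--remove-section option drops the section
while converting to a flat binary, so its huge link address cannot inflate
the output.

```make
# Strip the gas 2.36 .note.gnu.property section when flattening to binary
# (illustrative pattern rule; the real harness Makefile may differ).
%.bin: %.o
	$(OBJCOPY) -O binary -R .note.gnu.property $< $@
```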

commit 12a963b22b02a377ddb6a46db304fa4a0eee8c39
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 17 15:40:53 2021 +0200

    x86/AMD: also determine L3 cache size
    
    For Intel CPUs we record the L3 cache size, so we should do the same
    for AMD and compatible vendors.
    
    While making these additions, also make sure (throughout the function)
    that we don't needlessly overwrite prior values when the new value to be
    stored is zero.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>

commit b6ecd5c8bc0b9727f095c0bb2fedf62a565417f1
Author: Jan Beulich <jbeulich@suse.com>
Date:   Mon May 17 15:38:39 2021 +0200

    build: centralize / unify asm-offsets generation
    
    Except for an additional prerequisite, Arm and x86 have the same needs
    here, and Arm can also benefit from the recent improvement on the x86
    side. Recurse into arch/*/ only for a phony include target (doing
    nothing on Arm), and handle asm-offsets entirely locally to
    xen/Makefile.
    
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Acked-by: Roger Pau Monné <roger.pau@citrix.com>
    Acked-by: Julien Grall <jgrall@amazon.com>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 18 05:51:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 05:51:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128698.241557 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisdQ-0002Jq-S0; Tue, 18 May 2021 05:51:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128698.241557; Tue, 18 May 2021 05:51:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lisdQ-0002Jj-Oo; Tue, 18 May 2021 05:51:16 +0000
Received: by outflank-mailman (input) for mailman id 128698;
 Tue, 18 May 2021 05:51:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lisdP-0002JZ-Mj; Tue, 18 May 2021 05:51:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lisdP-0003Cc-Ja; Tue, 18 May 2021 05:51:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lisdP-0007jn-4C; Tue, 18 May 2021 05:51:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lisdP-0005qK-3k; Tue, 18 May 2021 05:51:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jfetwrJU6bfxQEej+U/c8REmJzV+OC1Yo7USGadVO28=; b=yQuQtQdOVgKXsiydSIUM4EuAPL
	d6YCfqR2YYTcz6mvElxchc5w95ebx1z9byuQBgBuj2EDGUfhuiBzpXQhcLWDuWM9JLcJAA15tfJJS
	CCV3+F+nufTXjaem5i4ZUFZOpAtB3+LPszfjroCfFfIkhXyHsL9sY6vxlSngpWQIDhRA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161987-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 161987: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1fbf5e30ae8eb725f4e10984f7b0a208f78abbd0
X-Osstest-Versions-That:
    ovmf=d2e0c473e6f0d518e8acb187f99bb7e61f7df862
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 05:51:15 +0000

flight 161987 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161987/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1fbf5e30ae8eb725f4e10984f7b0a208f78abbd0
baseline version:
 ovmf                 d2e0c473e6f0d518e8acb187f99bb7e61f7df862

Last test of basis   161979  2021-05-17 11:10:07 Z    0 days
Testing same since   161987  2021-05-18 01:10:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Loo Tung Lun <tung.lun.loo@intel.com>
  Loo, Tung Lun <tung.lun.loo@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   d2e0c473e6..1fbf5e30ae  1fbf5e30ae8eb725f4e10984f7b0a208f78abbd0 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 18 06:19:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:19:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128707.241575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lit4f-0004rQ-3I; Tue, 18 May 2021 06:19:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128707.241575; Tue, 18 May 2021 06:19:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lit4f-0004rJ-0D; Tue, 18 May 2021 06:19:25 +0000
Received: by outflank-mailman (input) for mailman id 128707;
 Tue, 18 May 2021 06:19:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+8gn=KN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lit4d-0004r5-Gb
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:19:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b2ff66ce-25e5-4b24-a9d4-6c2f2447f62d;
 Tue, 18 May 2021 06:19:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D0D42AEFF;
 Tue, 18 May 2021 06:19:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2ff66ce-25e5-4b24-a9d4-6c2f2447f62d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621318760; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=z6KF+YTRzP6/Lbt7W9Y/V0Lrtmt/kNDWUTq2Wg8gNHU=;
	b=kErWyrRD9ASwp2ZUgI2yDeV0aaMzRntC9NDH1/HVtgwTOinfw0Lmr3EjnK6QBG1mvVBVV7
	iHDszG6XSdBN+yezJY1idn/B+Ty53BENNnYM/9DqXBujf91Hn5ZS+ct6zyomMBnrWOdfKK
	zDq52J25hHLMcMZUYGeokLNlij2I6Pw=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 0/2] tools/xenstore: simplify xenstored main loop
Date: Tue, 18 May 2021 08:19:05 +0200
Message-Id: <20210518061907.16374-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Small series to make the main loop of xenstored more readable.

Changes in V2:
- split into two patches
- use const
- NO_SOCKETS handling

Changes in V3:
- guard extern declaration with NO_SOCKETS

Juergen Gross (2):
  tools/xenstore: move per connection read and write func hooks into a
    struct
  tools/xenstore: simplify xenstored main loop

 tools/xenstore/xenstored_core.c   | 121 ++++++++++++++----------------
 tools/xenstore/xenstored_core.h   |  23 +++---
 tools/xenstore/xenstored_domain.c |  15 +++-
 3 files changed, 81 insertions(+), 78 deletions(-)

-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:19:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:19:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128709.241597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lit4j-0005Pe-Ou; Tue, 18 May 2021 06:19:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128709.241597; Tue, 18 May 2021 06:19:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lit4j-0005PX-LR; Tue, 18 May 2021 06:19:29 +0000
Received: by outflank-mailman (input) for mailman id 128709;
 Tue, 18 May 2021 06:19:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+8gn=KN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lit4i-0004r5-Av
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:19:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9eb95af-3541-49c4-a5d0-f0780bced95c;
 Tue, 18 May 2021 06:19:21 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 257A4AF09;
 Tue, 18 May 2021 06:19:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9eb95af-3541-49c4-a5d0-f0780bced95c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621318761; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=989znmd4h1mcR15D9WJ4lxb1rPSUzv44K4HKKcR8q7E=;
	b=K8D2WDHhpm027ir+2xZwrN16a1x6CaO7Ak4rl1QNL70OwzTqMlWNqEk1opWnNsDYhr3xAq
	ZtZ+XeRyCjhPYWpQPoBUhtfRsgrbi0dJ9nGsC/88UCRhQr6RXvxHKVW1CoU8hGFeTI3J+i
	OIz1J3uqccvF2psrQILsefoKpj8HIV0=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v3 2/2] tools/xenstore: simplify xenstored main loop
Date: Tue, 18 May 2021 08:19:07 +0200
Message-Id: <20210518061907.16374-3-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210518061907.16374-1-jgross@suse.com>
References: <20210518061907.16374-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The main loop of xenstored is rather complicated due to the different
handling of socket and ring-page interfaces. Unify that handling by
introducing interface-type-specific functions can_read() and
can_write().

Take the opportunity to remove the empty list check before calling
write_messages() because the function is already able to cope with an
empty list.

Signed-off-by: Juergen Gross <jgross@suse.com>
---
V2:
- split off function vector introduction (Julien Grall)
V3:
- expand commit message (Julien Grall)
---
 tools/xenstore/xenstored_core.c   | 77 +++++++++++++++----------------
 tools/xenstore/xenstored_core.h   |  2 +
 tools/xenstore/xenstored_domain.c |  2 +
 3 files changed, 41 insertions(+), 40 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 856f518075..883a1a582a 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -1659,9 +1659,34 @@ static int readfd(struct connection *conn, void *data, unsigned int len)
 	return rc;
 }
 
+static bool socket_can_process(struct connection *conn, int mask)
+{
+	if (conn->pollfd_idx == -1)
+		return false;
+
+	if (fds[conn->pollfd_idx].revents & ~(POLLIN | POLLOUT)) {
+		talloc_free(conn);
+		return false;
+	}
+
+	return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
+}
+
+static bool socket_can_write(struct connection *conn)
+{
+	return socket_can_process(conn, POLLOUT);
+}
+
+static bool socket_can_read(struct connection *conn)
+{
+	return socket_can_process(conn, POLLIN);
+}
+
 const struct interface_funcs socket_funcs = {
 	.write = writefd,
 	.read = readfd,
+	.can_write = socket_can_write,
+	.can_read = socket_can_read,
 };
 
 static void accept_connection(int sock)
@@ -2296,47 +2321,19 @@ int main(int argc, char *argv[])
 			if (&next->list != &connections)
 				talloc_increase_ref_count(next);
 
-			if (conn->domain) {
-				if (domain_can_read(conn))
-					handle_input(conn);
-				if (talloc_free(conn) == 0)
-					continue;
-
-				talloc_increase_ref_count(conn);
-				if (domain_can_write(conn) &&
-				    !list_empty(&conn->out_list))
-					handle_output(conn);
-				if (talloc_free(conn) == 0)
-					continue;
-			} else {
-				if (conn->pollfd_idx != -1) {
-					if (fds[conn->pollfd_idx].revents
-					    & ~(POLLIN|POLLOUT))
-						talloc_free(conn);
-					else if ((fds[conn->pollfd_idx].revents
-						  & POLLIN) &&
-						 !conn->is_ignored)
-						handle_input(conn);
-				}
-				if (talloc_free(conn) == 0)
-					continue;
-
-				talloc_increase_ref_count(conn);
-
-				if (conn->pollfd_idx != -1) {
-					if (fds[conn->pollfd_idx].revents
-					    & ~(POLLIN|POLLOUT))
-						talloc_free(conn);
-					else if ((fds[conn->pollfd_idx].revents
-						  & POLLOUT) &&
-						 !conn->is_ignored)
-						handle_output(conn);
-				}
-				if (talloc_free(conn) == 0)
-					continue;
+			if (conn->funcs->can_read(conn))
+				handle_input(conn);
+			if (talloc_free(conn) == 0)
+				continue;
 
-				conn->pollfd_idx = -1;
-			}
+			talloc_increase_ref_count(conn);
+
+			if (conn->funcs->can_write(conn))
+				handle_output(conn);
+			if (talloc_free(conn) == 0)
+				continue;
+
+			conn->pollfd_idx = -1;
 		}
 
 		if (delayed_requests) {
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 6c4845c196..bb36111ecc 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -90,6 +90,8 @@ struct connection;
 struct interface_funcs {
 	int (*write)(struct connection *, const void *, unsigned int);
 	int (*read)(struct connection *, void *, unsigned int);
+	bool (*can_write)(struct connection *);
+	bool (*can_read)(struct connection *);
 };
 
 struct connection
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index f3cd56050e..708bf68af0 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -175,6 +175,8 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
 static const struct interface_funcs domain_funcs = {
 	.write = writechn,
 	.read = readchn,
+	.can_write = domain_can_write,
+	.can_read = domain_can_read,
 };
 
 static void *map_interface(domid_t domid)
-- 
2.26.2



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:19:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:19:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128708.241579 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lit4f-0004uQ-Ct; Tue, 18 May 2021 06:19:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128708.241579; Tue, 18 May 2021 06:19:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lit4f-0004tB-7f; Tue, 18 May 2021 06:19:25 +0000
Received: by outflank-mailman (input) for mailman id 128708;
 Tue, 18 May 2021 06:19:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+8gn=KN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lit4d-0004r6-Rz
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:19:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 343f3315-56cf-42ad-8ffc-dfa3b0f2d8d8;
 Tue, 18 May 2021 06:19:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 00EACAF06;
 Tue, 18 May 2021 06:19:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 343f3315-56cf-42ad-8ffc-dfa3b0f2d8d8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621318761; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=G6YyW3jOF1EJwsgj7luS33EIS2oEXyrGcjmcRKpanwo=;
	b=E62mX5q/nY0dWfHcPR42VkWIvyXDLZ2Pg14cHH20i38zYBTY/l2bpWkAbniZbPcUr7D1Ls
	fYDWGKkrWOm9vhrltbZmpX/HBETa1CKd/MQoiyp6IxRAIzik85Mi2i2fetwkUCyy7FD8pu
	kgw0o4/iSMReo3OJpiji3F5VjD1xX2I=
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Juergen Gross <jgross@suse.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH v3 1/2] tools/xenstore: move per connection read and write func hooks into a struct
Date: Tue, 18 May 2021 08:19:06 +0200
Message-Id: <20210518061907.16374-2-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
In-Reply-To: <20210518061907.16374-1-jgross@suse.com>
References: <20210518061907.16374-1-jgross@suse.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Put the interface-type-specific functions into a structure of their own
and let struct connection contain only a pointer to that new function
vector.

Don't even define the socket based functions in case of NO_SOCKETS
(Mini-OS).

Signed-off-by: Juergen Gross <jgross@suse.com>
Reviewed-by: Julien Grall <jgrall@amazon.com>
---
V2:
- split off from V1 patch (Julien Grall)
- use const qualifier (Julien Grall)
- drop socket specific case for Mini-OS (Julien Grall)
V3:
- guard extern declaration with #ifndef NO_SOCKETS (Julien Grall)
---
 tools/xenstore/xenstored_core.c   | 44 +++++++++++++------------------
 tools/xenstore/xenstored_core.h   | 21 ++++++++-------
 tools/xenstore/xenstored_domain.c | 13 +++++++--
 3 files changed, 40 insertions(+), 38 deletions(-)

diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
index 4b7b71cfb3..856f518075 100644
--- a/tools/xenstore/xenstored_core.c
+++ b/tools/xenstore/xenstored_core.c
@@ -226,8 +226,8 @@ static bool write_messages(struct connection *conn)
 				sockmsg_string(out->hdr.msg.type),
 				out->hdr.msg.len,
 				out->buffer, conn);
-		ret = conn->write(conn, out->hdr.raw + out->used,
-				  sizeof(out->hdr) - out->used);
+		ret = conn->funcs->write(conn, out->hdr.raw + out->used,
+					 sizeof(out->hdr) - out->used);
 		if (ret < 0)
 			return false;
 
@@ -243,8 +243,8 @@ static bool write_messages(struct connection *conn)
 			return true;
 	}
 
-	ret = conn->write(conn, out->buffer + out->used,
-			  out->hdr.msg.len - out->used);
+	ret = conn->funcs->write(conn, out->buffer + out->used,
+				 out->hdr.msg.len - out->used);
 	if (ret < 0)
 		return false;
 
@@ -1531,8 +1531,8 @@ static void handle_input(struct connection *conn)
 	/* Not finished header yet? */
 	if (in->inhdr) {
 		if (in->used != sizeof(in->hdr)) {
-			bytes = conn->read(conn, in->hdr.raw + in->used,
-					   sizeof(in->hdr) - in->used);
+			bytes = conn->funcs->read(conn, in->hdr.raw + in->used,
+						  sizeof(in->hdr) - in->used);
 			if (bytes < 0)
 				goto bad_client;
 			in->used += bytes;
@@ -1557,8 +1557,8 @@ static void handle_input(struct connection *conn)
 		in->inhdr = false;
 	}
 
-	bytes = conn->read(conn, in->buffer + in->used,
-			   in->hdr.msg.len - in->used);
+	bytes = conn->funcs->read(conn, in->buffer + in->used,
+				  in->hdr.msg.len - in->used);
 	if (bytes < 0)
 		goto bad_client;
 
@@ -1581,7 +1581,7 @@ static void handle_output(struct connection *conn)
 		ignore_connection(conn);
 }
 
-struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
+struct connection *new_connection(const struct interface_funcs *funcs)
 {
 	struct connection *new;
 
@@ -1591,8 +1591,7 @@ struct connection *new_connection(connwritefn_t *write, connreadfn_t *read)
 
 	new->fd = -1;
 	new->pollfd_idx = -1;
-	new->write = write;
-	new->read = read;
+	new->funcs = funcs;
 	new->is_ignored = false;
 	new->transaction_started = 0;
 	INIT_LIST_HEAD(&new->out_list);
@@ -1621,20 +1620,8 @@ struct connection *get_connection_by_id(unsigned int conn_id)
 static void accept_connection(int sock)
 {
 }
-
-int writefd(struct connection *conn, const void *data, unsigned int len)
-{
-	errno = EBADF;
-	return -1;
-}
-
-int readfd(struct connection *conn, void *data, unsigned int len)
-{
-	errno = EBADF;
-	return -1;
-}
 #else
-int writefd(struct connection *conn, const void *data, unsigned int len)
+static int writefd(struct connection *conn, const void *data, unsigned int len)
 {
 	int rc;
 
@@ -1650,7 +1637,7 @@ int writefd(struct connection *conn, const void *data, unsigned int len)
 	return rc;
 }
 
-int readfd(struct connection *conn, void *data, unsigned int len)
+static int readfd(struct connection *conn, void *data, unsigned int len)
 {
 	int rc;
 
@@ -1672,6 +1659,11 @@ int readfd(struct connection *conn, void *data, unsigned int len)
 	return rc;
 }
 
+const struct interface_funcs socket_funcs = {
+	.write = writefd,
+	.read = readfd,
+};
+
 static void accept_connection(int sock)
 {
 	int fd;
@@ -1681,7 +1673,7 @@ static void accept_connection(int sock)
 	if (fd < 0)
 		return;
 
-	conn = new_connection(writefd, readfd);
+	conn = new_connection(&socket_funcs);
 	if (conn)
 		conn->fd = fd;
 	else
diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
index 6a6d0448e8..6c4845c196 100644
--- a/tools/xenstore/xenstored_core.h
+++ b/tools/xenstore/xenstored_core.h
@@ -86,8 +86,11 @@ struct delayed_request {
 };
 
 struct connection;
-typedef int connwritefn_t(struct connection *, const void *, unsigned int);
-typedef int connreadfn_t(struct connection *, void *, unsigned int);
+
+struct interface_funcs {
+	int (*write)(struct connection *, const void *, unsigned int);
+	int (*read)(struct connection *, void *, unsigned int);
+};
 
 struct connection
 {
@@ -131,9 +134,8 @@ struct connection
 	/* My watches. */
 	struct list_head watches;
 
-	/* Methods for communicating over this connection: write can be NULL */
-	connwritefn_t *write;
-	connreadfn_t *read;
+	/* Methods for communicating over this connection. */
+	const struct interface_funcs *funcs;
 
 	/* Support for live update: connection id. */
 	unsigned int conn_id;
@@ -196,7 +198,7 @@ int write_node_raw(struct connection *conn, TDB_DATA *key, struct node *node,
 struct node *read_node(struct connection *conn, const void *ctx,
 		       const char *name);
 
-struct connection *new_connection(connwritefn_t *write, connreadfn_t *read);
+struct connection *new_connection(const struct interface_funcs *funcs);
 struct connection *get_connection_by_id(unsigned int conn_id);
 void check_store(void);
 void corrupt(struct connection *conn, const char *fmt, ...);
@@ -254,10 +256,9 @@ void finish_daemonize(void);
 /* Open a pipe for signal handling */
 void init_pipe(int reopen_log_pipe[2]);
 
-int writefd(struct connection *conn, const void *data, unsigned int len);
-int readfd(struct connection *conn, void *data, unsigned int len);
-
-extern struct interface_funcs socket_funcs;
+#ifndef NO_SOCKETS
+extern const struct interface_funcs socket_funcs;
+#endif
 extern xengnttab_handle **xgt_handle;
 
 int remember_string(struct hashtable *hash, const char *str);
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 0c17937c0f..f3cd56050e 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -172,6 +172,11 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
 	return len;
 }
 
+static const struct interface_funcs domain_funcs = {
+	.write = writechn,
+	.read = readchn,
+};
+
 static void *map_interface(domid_t domid)
 {
 	return xengnttab_map_grant_ref(*xgt_handle, domid,
@@ -389,7 +394,7 @@ static int new_domain(struct domain *domain, int port, bool restore)
 
 	domain->introduced = true;
 
-	domain->conn = new_connection(writechn, readchn);
+	domain->conn = new_connection(&domain_funcs);
 	if (!domain->conn)  {
 		errno = ENOMEM;
 		return errno;
@@ -1288,10 +1293,14 @@ void read_state_connection(const void *ctx, const void *state)
 	struct domain *domain, *tdomain;
 
 	if (sc->conn_type == XS_STATE_CONN_TYPE_SOCKET) {
-		conn = new_connection(writefd, readfd);
+#ifdef NO_SOCKETS
+		barf("socket based connection without sockets");
+#else
+		conn = new_connection(&socket_funcs);
 		if (!conn)
 			barf("error restoring connection");
 		conn->fd = sc->spec.socket_fd;
+#endif
 	} else {
 		domain = introduce_domain(ctx, sc->spec.ring.domid,
 					  sc->spec.ring.evtchn, true);
-- 
2.26.2
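[Editor's note] The patch above replaces xenstored's two per-connection function pointers with a single pointer to a const ops table. A minimal standalone sketch of that pattern (names follow the patch, but the stub implementations and `demo()` harness below are hypothetical, for illustration only):

```c
#include <assert.h>
#include <errno.h>

struct connection;

/* Same shape as the struct added in xenstored_core.h. */
struct interface_funcs {
	int (*write)(struct connection *, const void *, unsigned int);
	int (*read)(struct connection *, void *, unsigned int);
};

struct connection {
	int fd;
	const struct interface_funcs *funcs;
};

/* Hypothetical stubs standing in for writefd()/readfd(). */
static int stub_write(struct connection *conn, const void *data,
		      unsigned int len)
{
	(void)conn; (void)data;
	return (int)len;	/* pretend everything was written */
}

static int stub_read(struct connection *conn, void *data, unsigned int len)
{
	(void)conn; (void)data; (void)len;
	errno = EBADF;		/* pretend the fd is gone */
	return -1;
}

static const struct interface_funcs stub_funcs = {
	.write = stub_write,
	.read  = stub_read,
};

int demo(void)
{
	/* new_connection() now takes one table instead of two pointers. */
	struct connection conn = { .fd = -1, .funcs = &stub_funcs };
	char buf[8];

	/* Callers go through conn->funcs->write / conn->funcs->read. */
	int w = conn.funcs->write(&conn, buf, sizeof(buf));
	int r = conn.funcs->read(&conn, buf, sizeof(buf));

	return (w == (int)sizeof(buf) && r == -1) ? 0 : 1;
}
```

One benefit of the ops-table form, visible in the patch, is that the table can be `const` and file-scope `static` helpers no longer need to be exported.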



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:27:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:27:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128727.241608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litCc-0007YU-Fn; Tue, 18 May 2021 06:27:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128727.241608; Tue, 18 May 2021 06:27:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litCc-0007YN-C2; Tue, 18 May 2021 06:27:38 +0000
Received: by outflank-mailman (input) for mailman id 128727;
 Tue, 18 May 2021 06:27:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1litCb-0007YH-26
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:27:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0f343995-a874-42e9-8273-d1fa9b7f3c7f;
 Tue, 18 May 2021 06:27:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3EDE9AF0B;
 Tue, 18 May 2021 06:27:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0f343995-a874-42e9-8273-d1fa9b7f3c7f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621319255; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vH3Fcfj+/jcnPVnbRR9qiOn/UQ1RAcuNx8X60zLDJa8=;
	b=aGFP+S0i8Exn4fzP/ywQEp4O5zEpIbrXDP953dzfidosYZMrL5n+Npx3ND3SVHiLua6/8X
	ravvah4gD/1Jds1eRoAMOmiCXHz8X+KJNjufSgeOv6vUuycsMcVJQOAzqVt3nMZ4bolr01
	9Psvt6iBZhrcYtGklafxObPkHiXZCvE=
Subject: Re: [PATCH v3 2/5] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org,
 Julien Grall <julien@xen.org>
References: <cover.1621017334.git.connojdavis@gmail.com>
 <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
 <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
 <98b429d0-2673-624e-1690-9c0e8373ed5b@xen.org>
 <7cf966f6-7ccf-ba63-2b67-129577a7ca53@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8e415cac-a8b3-67a6-2f7b-489b964ceb50@suse.com>
Date: Tue, 18 May 2021 08:27:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <7cf966f6-7ccf-ba63-2b67-129577a7ca53@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.05.2021 06:11, Connor Davis wrote:
> 
> On 5/17/21 9:42 AM, Julien Grall wrote:
>> Hi Jan,
>>
>> On 17/05/2021 12:16, Jan Beulich wrote:
>>> On 14.05.2021 20:53, Connor Davis wrote:
>>>> --- a/xen/common/memory.c
>>>> +++ b/xen/common/memory.c
>>>> @@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned 
>>>> long gmfn)
>>>>       p2m_type_t p2mt;
>>>>   #endif
>>>>       mfn_t mfn;
>>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>       bool *dont_flush_p, dont_flush;
>>>> +#endif
>>>>       int rc;
>>>>     #ifdef CONFIG_X86
>>>> @@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, 
>>>> unsigned long gmfn)
>>>>        * Since we're likely to free the page below, we need to suspend
>>>>        * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
>>>>        */
>>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>       dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
>>>>       dont_flush = *dont_flush_p;
>>>>       *dont_flush_p = false;
>>>> +#endif
>>>>         rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
>>>>   +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>       *dont_flush_p = dont_flush;
>>>> +#endif
>>>>         /*
>>>>        * With the lack of an IOMMU on some platforms, domains with 
>>>> DMA-capable
>>>> @@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, 
>>>> struct xen_add_to_physmap *xatp,
>>>>       xatp->gpfn += start;
>>>>       xatp->size -= start;
>>>>   +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>       if ( is_iommu_enabled(d) )
>>>>       {
>>>>          this_cpu(iommu_dont_flush_iotlb) = 1;
>>>>          extra.ppage = &pages[0];
>>>>       }
>>>> +#endif
>>>>         while ( xatp->size > done )
>>>>       {
>>>> @@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, 
>>>> struct xen_add_to_physmap *xatp,
>>>>           }
>>>>       }
>>>>   +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>       if ( is_iommu_enabled(d) )
>>>>       {
>>>>           int ret;
>>>> @@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, 
>>>> struct xen_add_to_physmap *xatp,
>>>>           if ( unlikely(ret) && rc >= 0 )
>>>>               rc = ret;
>>>>       }
>>>> +#endif
>>>>         return rc;
>>>>   }
>>>
>>> I wonder whether all of these wouldn't better become CONFIG_X86:
>>> ISTR Julien indicating that he doesn't see the override getting used
>>> on Arm. (Julien, please correct me if I'm misremembering.)
>>
>> Right, so far I haven't been in favor of introducing it because:
>>    1) The P2M code may free some memory, so you can't always ignore
>> the flush (and I think it is wrong for the upper layer to have to know
>> when this can happen).
>>    2) It is unclear what happens if the IOMMU TLBs and the page tables
>> contain different mappings (I have received conflicting advice).
>>
>> So it is better to always flush, and as early as possible.
> 
> So keep it as is or switch to CONFIG_X86?

Please switch, unless anyone else voices a strong opinion towards
keeping as is.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 06:31:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:31:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128732.241618 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litGT-0000Uw-04; Tue, 18 May 2021 06:31:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128732.241618; Tue, 18 May 2021 06:31:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litGS-0000Up-TJ; Tue, 18 May 2021 06:31:36 +0000
Received: by outflank-mailman (input) for mailman id 128732;
 Tue, 18 May 2021 06:31:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1litGR-0000Uj-0s
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:31:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e373942d-b41f-420c-af38-ca3c02330dfb;
 Tue, 18 May 2021 06:31:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 47F5FAF0B;
 Tue, 18 May 2021 06:31:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e373942d-b41f-420c-af38-ca3c02330dfb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621319493; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=HnESSdBCvYuGGZZyXbf4rxoatfqwLtUxoeI8jOpi424=;
	b=E5fhe/bpF/MtCa0BP1hMCmDXonZdvixJXNIAecPhPhZaNv9mpFY66eXS9z+GlRBrnQ264t
	gvUARtLz0ir4RoQu7zqXiLwWZqRxVg/rvMlJPHrOJ10K3siQsRHyIesCZ0jvZ4bh2oSsf7
	zZj7RfY18iAmcEklKM4Dx1KK1uJZ1Kk=
Subject: Re: [PATCH v3 3/5] xen: Fix build when !CONFIG_GRANT_TABLE
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <834f7995ae80a3b37b6d508d1c989b4ee391f61b.1621017334.git.connojdavis@gmail.com>
 <b56a1cfd-46ab-c601-883c-73537dfaac92@suse.com>
 <9b231486-bdf8-d6a7-c9c6-126d5bc207f8@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <31d96abe-ca60-e621-3b6c-e70663b8199d@suse.com>
Date: Tue, 18 May 2021 08:31:32 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <9b231486-bdf8-d6a7-c9c6-126d5bc207f8@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 05:58, Connor Davis wrote:
> 
> On 5/17/21 5:22 AM, Jan Beulich wrote:
>> On 14.05.2021 20:53, Connor Davis wrote:
>>> Move the struct grant_table; forward declaration in grant_table.h
>>> above the #ifdef CONFIG_GRANT_TABLE. This fixes the following:
>>>
>>> /build/xen/include/xen/grant_table.h:84:50: error: 'struct grant_table'
>>> declared inside parameter list will not be visible outside of this
>>> definition or declaration [-Werror]
>>>     84 | static inline int mem_sharing_gref_to_gfn(struct grant_table *gt,
>>>        |
>> There must be more to this, as e.g. the PV shim does get built with
>> !GRANT_TABLE. Nevertheless, ...
>>
> Can you elaborate? I tested all defconfigs with and without grant tables
> enabled on x86 and ARM, and they all build fine.

I'm confused: Everything building fine supports my statement, so if
more elaboration was needed, it would be for you to make more precise
the conditions under which a build failure would occur. But this is
largely moot now, since I already committed your change yesterday,
the potential for breaking the build being, as said, obvious enough.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 06:33:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:33:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128736.241630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litIO-00018O-DR; Tue, 18 May 2021 06:33:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128736.241630; Tue, 18 May 2021 06:33:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litIO-00018H-9j; Tue, 18 May 2021 06:33:36 +0000
Received: by outflank-mailman (input) for mailman id 128736;
 Tue, 18 May 2021 06:33:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1litIM-000187-7f
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:33:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8951687a-a21a-4ab7-bbc5-a26f613f3890;
 Tue, 18 May 2021 06:33:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A0C5AAFEF;
 Tue, 18 May 2021 06:33:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8951687a-a21a-4ab7-bbc5-a26f613f3890
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621319612; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hl44uEQtqJbnIvb/ZnExV6o4e3Y1rACn6HN7F+6vs8E=;
	b=C5ThG2csbrrjGMIetdHXRKuyYipdYp0UAsvz2mDPkf4r6wAwyMGFXYOuX+Y0sK41bBi/3M
	mczsLvGrFvarrCEOERTcVEpWVVKZsSrkVBapyXZB5ebSSYa0+0uD2DdeLaksr/J54/Bxh7
	RoJnnrSauTALcGcwCAkjHGqvHogwYgw=
Subject: Re: [PATCH v3 4/5] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <a7d2d43d0d9de9e10a3e92bc6f977d6f4b53bef6.1621017334.git.connojdavis@gmail.com>
 <ce3ff72e-611b-3b9c-96fa-afc9e8767681@suse.com>
 <95399fcf-54b0-828f-b3cb-9332ad779f68@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1f551526-dde9-8641-ea41-915dbaac9c46@suse.com>
Date: Tue, 18 May 2021 08:33:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <95399fcf-54b0-828f-b3cb-9332ad779f68@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 06:58, Connor Davis wrote:
> On 5/17/21 5:51 AM, Jan Beulich wrote:
>> On 14.05.2021 20:53, Connor Davis wrote:
>>> --- /dev/null
>>> +++ b/xen/arch/riscv/configs/riscv64_defconfig
>>> @@ -0,0 +1,12 @@
>>> +# CONFIG_SCHED_CREDIT is not set
>>> +# CONFIG_SCHED_RTDS is not set
>>> +# CONFIG_SCHED_NULL is not set
>>> +# CONFIG_SCHED_ARINC653 is not set
>>> +# CONFIG_TRACEBUFFER is not set
>>> +# CONFIG_DEBUG is not set
>>> +# CONFIG_DEBUG_INFO is not set
>>> +# CONFIG_HYPFS is not set
>>> +# CONFIG_GRANT_TABLE is not set
>>> +# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
>>> +
>>> +CONFIG_EXPERT=y
>> These are rather odd defaults, more like for a special purpose
>> config than a general purpose one. None of what you turn off here
>> will guarantee to be off for people actually trying to build
>> things, so it's not clear to me what the idea here is. As a
>> specific remark, especially during bringup work I think it is
>> quite important to not default DEBUG to off: You definitely want
>> to see whether any assertions trigger.
> The idea was to turn off as much stuff as possible to get a minimal
> build (involving xen/common) working. Although now that we're focused on
> only a few files at a time, they could be enabled without adding any
> undue burden (at least for now).
> 
> Perhaps it would be best to rename the file to include "tiny" or something,
> and then add a normal defconfig once things are actually running?

Yes please.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 06:42:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:42:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128744.241641 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litR3-0002fd-DS; Tue, 18 May 2021 06:42:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128744.241641; Tue, 18 May 2021 06:42:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litR3-0002fW-9W; Tue, 18 May 2021 06:42:33 +0000
Received: by outflank-mailman (input) for mailman id 128744;
 Tue, 18 May 2021 06:42:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litR1-0002fQ-RK
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:42:31 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c90909e-46fb-40dd-9fc1-a687de519c3b;
 Tue, 18 May 2021 06:42:30 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id gm21so4966067pjb.5
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:42:30 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id f21sm7240386pjt.11.2021.05.17.23.42.22
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:42:29 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c90909e-46fb-40dd-9fc1-a687de519c3b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=PDyrwk3dKvoxKjer/W/nKoHZ25xkHMzcXLNqNVSjI98=;
        b=VtOmdjwVV5WinnYM8kjIEP+f3T2dP4qhZew2qo2b8rOixnhhoKqE73hDfXFYiPhVG+
         U6BAIa1KXGyc+Nw/pBnyzHhaT1tIpO8FD1D5RuU3Fbq8THMadmNoknnSwKB9hGOx3JSo
         9OgCVPZ0ChLJsVYfXDyAbZmg8Bo+ncPHVka+o=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=PDyrwk3dKvoxKjer/W/nKoHZ25xkHMzcXLNqNVSjI98=;
        b=GPOgErzcMnV9L2yeZUoaLzctxJ8d2mquXtsYjCZVxPtN8c3la2u6dBA8MHD0YDc+hF
         LlNUs0gY1dsXY1BPkORE//oHl4i2yUos5PXVGDliF95+KakYT6oe8zqPqwOkrQXE+hbt
         u8gmWStRmKW65XCrqH1t9Sb59Y26XtlMK1bRZvaD50VREtzw3nFA7qd7IRUMoNW5xFs1
         3NPgY/8uYFDGPL/DzN00N7ERAVHKYUcsgDJdGicL32jtVi02K4ve+BAxnKM13sX19tPB
         nPS+BFCglfUMCmqmdgCp5Ag+GfLqLZ4pn3uQu2GDDCTdin9ImIDc/4fDosfx5Dngk07f
         EygQ==
X-Gm-Message-State: AOAM532cgH9GBU2DoNw+Flc/3n2I9QmMeL6Cw/IplwfpUQccDME+YeqC
	vYMoLaNGH47OpOdqk/Ewfv+2FA==
X-Google-Smtp-Source: ABdhPJw/3HNuoTaSGjywqAr2hcme1ypfuSgw3ugqZ0Om4CEwcZ/8hmi6P3tglXLSkIpe+RJxzArq6w==
X-Received: by 2002:a17:902:b408:b029:ec:e879:bbd8 with SMTP id x8-20020a170902b408b02900ece879bbd8mr2877299plr.65.1621320149869;
        Mon, 17 May 2021 23:42:29 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 00/15] Restricted DMA 
Date: Tue, 18 May 2021 14:42:00 +0800
Message-Id: <20210518064215.2856977-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for lack of DMA access control on
systems without an IOMMU, which could result in the DMA accessing the
system memory at unexpected times and/or unexpected addresses, possibly
leading to data leakage or corruption.

For example, we plan to use the PCI-e bus for Wi-Fi, and that PCI-e bus is
not behind an IOMMU. As PCI-e, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (see the remote Wi-Fi exploits [1a] and [1b],
which show a full chain of exploits, as well as [2] and [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region and does memory allocation from the same region.
The feature on its own provides a basic level of protection against the DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system needs
to provide a way to restrict the DMA to a predefined memory region (this is
usually done at the firmware level, e.g. the MPU in ATF on some ARM platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
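[Editor's note] The series' "dt-bindings: of: Add restricted DMA pool" patch extends the reserved-memory binding so a device can opt into such a pool. A hedged sketch of what that might look like (node names, labels, unit addresses, and sizes below are illustrative, not taken from the series):

```dts
/* Sketch only: addresses and sizes are made up for illustration. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	wifi_restricted_dma: restricted-dma-pool@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>;
	};
};

wifi: wifi@10000000 {
	/* ...device-specific properties... */
	memory-region = <&wifi_restricted_dma>;
};
```

With such a node present, the device's streaming DMA is bounced through the pool and its coherent allocations come from the same region, so the device never needs access to memory outside it.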

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/

Claire Chang (15):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Add DMA_RESTRICTED_POOL
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Add a new get_io_tlb_mem getter
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Bounce data from/to restricted DMA pool if available
  swiotlb: Move alloc_size to find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  dma-direct: Add a new wrapper __dma_direct_free_pages()
  swiotlb: Add restricted DMA alloc/free support.
  dma-direct: Allocate memory from restricted DMA pool if available
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  27 ++
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  25 ++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   5 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  41 ++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  63 +++--
 kernel/dma/direct.h                           |   9 +-
 kernel/dma/swiotlb.c                          | 242 +++++++++++++-----
 15 files changed, 356 insertions(+), 97 deletions(-)

-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:42:41 2021
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
Date: Tue, 18 May 2021 14:42:01 +0800
Message-Id: <20210518064215.2856977-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem(), to perform the io_tlb_mem
struct initialization so that the code can be reused.

Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..d3232fc19385 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,30 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+
+	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +207,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,7 +295,6 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -297,20 +309,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:42:51 2021
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 02/15] swiotlb: Refactor swiotlb_create_debugfs
Date: Tue, 18 May 2021 14:42:02 +0800
Message-Id: <20210518064215.2856977-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split out the debugfs creation so the code can be reused to support
different bounce buffer pools, e.g. the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d3232fc19385..b849b01a446f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -64,6 +64,7 @@
 enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem *io_tlb_default_mem;
+static struct dentry *debugfs_dir;
 
 /*
  * Max segment that we can provide which (if pages are contiguous) will
@@ -662,18 +663,30 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
 	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
+		return;
+
+	mem->debugfs = debugfs_create_dir(name, debugfs_dir);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	if (mem) {
+		swiotlb_create_debugfs(mem, "swiotlb");
+		debugfs_dir = mem->debugfs;
+	} else {
+		debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	}
+
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:43:00 2021
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 03/15] swiotlb: Add DMA_RESTRICTED_POOL
Date: Tue, 18 May 2021 14:42:03 +0800
Message-Id: <20210518064215.2856977-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new Kconfig symbol, DMA_RESTRICTED_POOL, for the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/Kconfig | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:43:09 2021
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 04/15] swiotlb: Add restricted DMA pool initialization
Date: Tue, 18 May 2021 14:42:04 +0800
Message-Id: <20210518064215.2856977-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/device.h  |  4 +++
 include/linux/swiotlb.h |  3 +-
 kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 38a2071cf776..4987608ea4ff 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -521,6 +522,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..03ad6e3b4056 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b849b01a446f..1d8eb4de0d01 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -690,3 +697,72 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	if (dev->dma_io_tlb_mem)
+		return 0;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+			kfree(mem);
+			return -EINVAL;
+		}
+
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS))
+			swiotlb_create_debugfs(mem, rmem->name);
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	if (dev)
+		dev->dma_io_tlb_mem = NULL;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created device swiotlb memory pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
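
For reference, a reserved-memory node consumed by the rmem_swiotlb_setup() hook above might look like the following sketch (the node name, label, base address, and size are illustrative assumptions, not part of the patch):

```dts
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	/* Picked up by rmem_swiotlb_setup() via the
	 * "restricted-dma-pool" compatible; base/size are examples. */
	restricted_dma: restricted-dma-pool@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>; /* 4 MiB */
	};
};

/* A device is then attached to the pool (triggering
 * rmem_swiotlb_device_init()) via its memory-region property: */
&some_device {
	memory-region = <&restricted_dma>;
};
```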
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:43:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:43:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128756.241696 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litRm-0005BI-0h; Tue, 18 May 2021 06:43:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128756.241696; Tue, 18 May 2021 06:43:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litRl-0005Ah-Sf; Tue, 18 May 2021 06:43:17 +0000
Received: by outflank-mailman (input) for mailman id 128756;
 Tue, 18 May 2021 06:43:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+8gn=KN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1litRk-00057E-Ta
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:43:16 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 165d5409-afab-4c53-bb8a-598367cd407a;
 Tue, 18 May 2021 06:43:15 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C0578AF0B;
 Tue, 18 May 2021 06:43:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 165d5409-afab-4c53-bb8a-598367cd407a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621320194; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=VzK2ZTcLLJDRr8iwYOgO5IkRKWhEI6GtSBSnkGnGsuw=;
	b=V3Uq6X1wYQTp72X7hH0jhJII2jlkExGEwnAFl5Gh/635CcyxLzwd5f4of+3gvfvq/UEE49
	5ZoE6ikVWVB6FpcVnl8s4pwzTHOGWS0+FFMfpG26NsRVMKvDCET/9NmkcVs5JmosCC9fSF
	1k/5DCAx75qFQx5e77b9RRAtO/cont4=
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210514084133.18658-1-jgross@suse.com>
 <1e38cce0-6960-ac21-b349-dac8551e23ed@xen.org>
 <fe5f1e6a-1a89-ea12-feb5-318f25d4281f@suse.com>
 <39860a0c-5ac5-2537-532f-6ce288cc7219@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] tools/xenstore: claim resources when running as daemon
Message-ID: <e69f7d4c-a616-1265-e909-fd14feea7412@suse.com>
Date: Tue, 18 May 2021 08:43:13 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <39860a0c-5ac5-2537-532f-6ce288cc7219@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="EvWBj5DGzB9c3mKJmXnOmy6KaTqJ3rTZ2"


On 17.05.21 17:55, Julien Grall wrote:
> Hi Juergen,
>
> On 17/05/2021 07:47, Juergen Gross wrote:
>> On 14.05.21 22:19, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 14/05/2021 09:41, Juergen Gross wrote:
>>>> Xenstored is absolutely mandatory for a Xen host and it can't be
>>>> restarted, so being killed by the OOM-killer in case of memory
>>>> shortage is to be avoided.
>>>>
>>>> Set /proc/$pid/oom_score_adj (if available) to -500 in order to allow
>>>> xenstored to use large amounts of memory without being killed.
>>>>
>>>> In order to support large numbers of domains the limit for open file
>>>> descriptors might need to be raised. Each domain needs 2 file
>>>> descriptors (one for the event channel and one for the xl per-domain
>>>> daemon to connect to xenstored).
>>>
>>> Hmmm... AFAICT there is only one file descriptor to handle all the
>>> event channels. Could you point out the code showing one event FD per
>>> domain?
>>
>> I let myself be fooled by just counting the file descriptors used with
>> one or two domains active.
>>
>> So you are right that all event channels only use one fd, but each xl
>> daemon will use two (which should be fixed, IMO). And thinking more
>> about it, it is even worse: each qemu process will require at least one
>> additional fd.
>>
>>>
>>>>
>>>> Try to raise the ulimit for open files to 65536. First the hard limit
>>>> if needed, and then the soft limit.
>>>
>>> I am not sure it is right to impose this limit on everyone. For
>>> instance, one admin may know that there will be no more than 100
>>> domains on their system.
>>
>> Is setting a higher limit really a problem?
>
> I am quite uneasy about setting a limit that nearly nobody will reach
> unless something went horribly wrong on the system.

Hmm, I really don't see the downside of having the possibility to let
xenstored open lots of files.

Anyway, we can do it via prlimit if you like that better.

>
>>
>>> So the admin should be able to configure them. At this point, I think
>>> the two limits should be set by the initscript rather than xenstored
>>> itself.
>>
>> But the admin would need to know the Xen internals for selecting the
>> correct limits. In the end I'd be fine with moving this modification to
>> the script starting Xenstore (which would be launch-xenstore), but the
>> configuration item should be "max number of domains to support".
>
> I would be fine with "max number of domains to support". What I care
> about most here is that the limits are actually applied most of (if not
> all) the time.

I did another test and found:

- the xl daemon for a guest will use 2 socket connections
- qemu for a HVM guest will use 3 socket connections
- qemu for a PV guest will use 1 socket connection
- 14 other files are used by xenstored

So we should set the limit to 5 * n_dom + 100 (some headroom will be
nice IMO).

>
>>
>>>
>>> This would also avoid the problem where Xenstored is not allowed to
>>> modify its limit (see more below).
>>>
>>>>
>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>> ---
>>>>  tools/xenstore/xenstored_core.c   |  2 ++
>>>>  tools/xenstore/xenstored_core.h   |  3 ++
>>>>  tools/xenstore/xenstored_minios.c |  4 +++
>>>>  tools/xenstore/xenstored_posix.c  | 46 +++++++++++++++++++++++++++++++
>>>>  4 files changed, 55 insertions(+)
>>>>
>>>> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
>>>> index b66d119a98..964e693450 100644
>>>> --- a/tools/xenstore/xenstored_core.c
>>>> +++ b/tools/xenstore/xenstored_core.c
>>>> @@ -2243,6 +2243,8 @@ int main(int argc, char *argv[])
>>>>  		xprintf = trace;
>>>>  #endif
>>>> +	claim_resources();
>>>> +
>>>>  	signal(SIGHUP, trigger_reopen_log);
>>>>  	if (tracefile)
>>>>  		tracefile = talloc_strdup(NULL, tracefile);
>>>> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
>>>> index 1467270476..ac26973648 100644
>>>> --- a/tools/xenstore/xenstored_core.h
>>>> +++ b/tools/xenstore/xenstored_core.h
>>>> @@ -255,6 +255,9 @@ void daemonize(void);
>>>>  /* Close stdin/stdout/stderr to complete daemonize */
>>>>  void finish_daemonize(void);
>>>> +/* Set OOM-killer score and raise ulimit. */
>>>> +void claim_resources(void);
>>>> +
>>>>  /* Open a pipe for signal handling */
>>>>  void init_pipe(int reopen_log_pipe[2]);
>>>> diff --git a/tools/xenstore/xenstored_minios.c b/tools/xenstore/xenstored_minios.c
>>>> index c94493e52a..df8ff580b0 100644
>>>> --- a/tools/xenstore/xenstored_minios.c
>>>> +++ b/tools/xenstore/xenstored_minios.c
>>>> @@ -32,6 +32,10 @@ void finish_daemonize(void)
>>>>  {
>>>>  }
>>>> +void claim_resources(void)
>>>> +{
>>>> +}
>>>> +
>>>>  void init_pipe(int reopen_log_pipe[2])
>>>>  {
>>>>  	reopen_log_pipe[0] = -1;
>>>> diff --git a/tools/xenstore/xenstored_posix.c b/tools/xenstore/xenstored_posix.c
>>>> index 48c37ffe3e..0074fbd8b2 100644
>>>> --- a/tools/xenstore/xenstored_posix.c
>>>> +++ b/tools/xenstore/xenstored_posix.c
>>>> @@ -22,6 +22,7 @@
>>>>  #include <fcntl.h>
>>>>  #include <stdlib.h>
>>>>  #include <sys/mman.h>
>>>> +#include <sys/resource.h>
>>>>  #include "utils.h"
>>>>  #include "xenstored_core.h"
>>>> @@ -87,6 +88,51 @@ void finish_daemonize(void)
>>>>  	close(devnull);
>>>>  }
>>>> +static void avoid_oom_killer(void)
>>>> +{
>>>> +	char path[32];
>>>> +	char val[] = "-500";
>>>> +	int fd;
>>>> +
>>>> +	snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", (int)getpid());
>>>
>>> This looks Linux specific. How about other OSes?
>>
>> I don't know whether other OSes have an OOM killer, and if they do, how
>> to configure it. It is a best effort attempt, after all.
>
> I have CCed Roger who should be able to help for FreeBSD.

It would be possible to set the OOM-score from the launch script, too.

>
>>
>>>
>>>> +
>>>> +	fd = open(path, O_WRONLY);
>>>> +	/* Do nothing if file doesn't exist. */
>>>
>>> Your commit message leads one to think that we *must* configure the
>>> OOM. If not, then we should not continue. But here, this suggests it
>>> is optional. In fact...
>>
>> I can modify the commit message by adding a "Try to".
>>
>>>
>>>> +	if (fd < 0)
>>>> +		return;
>>>> +	/* Ignore errors. */
>>>> +	write(fd, val, sizeof(val));
>>>
>>> ... xenstored may not be allowed to modify its own parameters. So
>>> this would continue silently without the admin necessarily knowing
>>> the limit wasn't applied.
>>
>> I can add a line in the Xenstore log in this regard.
>
> This feels wrong to me. If a limit cannot be applied then it should fail
> early rather than possibly at the wrong moment a few days (months?)
> later.

I think issuing a warning would be better here. We have been running
with no OOM adjustments for years now.


Juergen



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:43:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:43:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128761.241707 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litRt-0005j0-9A; Tue, 18 May 2021 06:43:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128761.241707; Tue, 18 May 2021 06:43:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litRt-0005ir-5t; Tue, 18 May 2021 06:43:25 +0000
Received: by outflank-mailman (input) for mailman id 128761;
 Tue, 18 May 2021 06:43:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litRr-0003sj-3G
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:43:23 +0000
Received: from mail-pj1-x102a.google.com (unknown [2607:f8b0:4864:20::102a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d2042684-1f8d-4605-bd86-e70284f48dec;
 Tue, 18 May 2021 06:43:15 +0000 (UTC)
Received: by mail-pj1-x102a.google.com with SMTP id t11so5004161pjm.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:43:15 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id k10sm4407807pfu.175.2021.05.17.23.43.07
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:43:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2042684-1f8d-4605-bd86-e70284f48dec
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=um8C1f1vzKtWMFhzfAaBUmj42oNwrbNCekMrLvRdk1U=;
        b=Nz40zJd0pKEYV/k8IK4lTQTRDZOUiOUwbA1Arvy6QUQBlPK9z/7NLsKlNL3/1e+jPn
         Rvsno7nzSv10lOGgseQXJjtNssSPkX7wvn5b3le+v615CbL79fsk9Hc2gr53iJRIIsL9
         joN4AoOubYBk2J2DPKHOuv42MtUoFxjdy1UIc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=um8C1f1vzKtWMFhzfAaBUmj42oNwrbNCekMrLvRdk1U=;
        b=i08iY70btLs1Gug3jZtz+b4gupPMdNifd8jvTlQ7O3PfLdG7kmgOdIiurmUSEE3I1x
         diJIdQ5+fchQwZNJtbtOlrpzM44TWNnLsPnM712fmCx8fO8dSqrl0d2Nmyl5RC+4NcQI
         MHoFyrX4ArGkM3NpwxQimLtIGcohtlBI5W9GdkAympipM+jK0u4FO/BT8X5aEaXM2V9/
         8aIufDVizO+gUOpO2u1PJqeXFbpmYQOe7PiOcXyRiIaO/JKAkVZJoKay1kMmIO2cO6ZQ
         haW9b8PY3bhMFv/65199BAphUPnnWGJ4ukPr6Fz+sEKM5v6PaIpM0YHDpsfCzdtYx+6l
         YYWQ==
X-Gm-Message-State: AOAM530ANga3vxoX5CnNkw9EYj6QBPVdNE0pREF8uTuDUQgp8HTXsw+O
	rw0guvPlTzVg2rWkNEVg6V3ZRA==
X-Google-Smtp-Source: ABdhPJzOUb2or4i3Atyb+wUWqF92M1zBsVH9PXp2RXUL88l16TO6O0yu3WeWl6JLm0exg/RwsGT9VA==
X-Received: by 2002:a17:90a:f987:: with SMTP id cq7mr1872645pjb.30.1621320194605;
        Mon, 17 May 2021 23:43:14 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 05/15] swiotlb: Add a new get_io_tlb_mem getter
Date: Tue, 18 May 2021 14:42:05 +0800
Message-Id: <20210518064215.2856977-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new getter, get_io_tlb_mem, to help select the io_tlb_mem struct.
The restricted DMA pool is preferred if available.
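
As a self-contained model of the selection logic (the struct layouts are simplified stand-ins for the kernel types, and the per-device is_swiotlb_buffer() shown here previews a later patch in the series):

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel's io_tlb_mem and device. */
struct io_tlb_mem { unsigned long start, end; };
struct device { struct io_tlb_mem *dma_io_tlb_mem; };

static struct io_tlb_mem default_mem = { 0x1000, 0x2000 };
static struct io_tlb_mem *io_tlb_default_mem = &default_mem;

/* Prefer the device's restricted pool when one was attached;
 * otherwise fall back to the default swiotlb pool. */
static struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
{
	if (dev && dev->dma_io_tlb_mem)
		return dev->dma_io_tlb_mem;
	return io_tlb_default_mem;
}

/* With the getter, a buffer check becomes per-device. */
static bool is_swiotlb_buffer(struct device *dev, unsigned long paddr)
{
	struct io_tlb_mem *mem = get_io_tlb_mem(dev);

	return mem && paddr >= mem->start && paddr < mem->end;
}
```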

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 03ad6e3b4056..b469f04cca26 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -102,6 +103,16 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
+static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev && dev->dma_io_tlb_mem)
+		return dev->dma_io_tlb_mem;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
+	return io_tlb_default_mem;
+}
+
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:43:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:43:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128764.241718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litS1-0006IG-KN; Tue, 18 May 2021 06:43:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128764.241718; Tue, 18 May 2021 06:43:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litS1-0006I5-Gr; Tue, 18 May 2021 06:43:33 +0000
Received: by outflank-mailman (input) for mailman id 128764;
 Tue, 18 May 2021 06:43:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litS1-0003sj-3O
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:43:33 +0000
Received: from mail-pf1-x436.google.com (unknown [2607:f8b0:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d535fcec-9b62-4a36-8b79-1f05ccfe66de;
 Tue, 18 May 2021 06:43:24 +0000 (UTC)
Received: by mail-pf1-x436.google.com with SMTP id e19so6693756pfv.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:43:24 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id a15sm5106553pff.128.2021.05.17.23.43.16
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:43:23 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d535fcec-9b62-4a36-8b79-1f05ccfe66de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=3asSzBPVxr0zE6eyP6v7PyM8kt+EOH1otNu6J+5Wb6I=;
        b=M8M39u1LmCJUiXoirbQIsQ3wvupyoMOIn4OVyycIkrHpEpRk7wxu8b4s/53RX/5sw/
         02mwjM9/fUv8ua36J3oEuXr1byMFqjh5My6EHk8/Wn3dbgn5TNnfoiGpUI3hhAV5EidP
         9U+MQp/fd62H/569GaG/ZgF6fdCKaFVtYpchI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=3asSzBPVxr0zE6eyP6v7PyM8kt+EOH1otNu6J+5Wb6I=;
        b=UvLYx6nA4rTpOyItNPdCbMjBfy4X4v1JGd5YKgGZoM9Z7cXxBGHlQwVAZaP3y+u5ns
         Zq8fx2jYJE/Ztt8ohNf8moJbiyClqybo1YLTd6ba/vPfAGrByNWwFSx9Vm8yRbsliIrY
         o+ANElLZkbvaxN/+uYoOuPYmRL9kqWq+PDzPIgbqCzpuwLAlhtL2fLF6fXKIN7eEa7x/
         cUyEH7cz4kW9lxKS1Hnp3CCiD/ipjBUtj8X3UyvF/kraP8BtLHpdZqW+FE08HvJmW8Ff
         tPaFmbCCi1smYDsI8s5u9uuEM5JBt87HqKBRjWDiyUDAYu8C2vQsUeL3UJWwNW7F6e7g
         li4g==
X-Gm-Message-State: AOAM5323BBWQatgZev2F59qErKWjAYsabxKx6lnXlto6lEzhc6UEROK2
	xR4FJZEMrUlmUnKNRPhSHs/6Uw==
X-Google-Smtp-Source: ABdhPJy5aChnGF5tFfJAWMY4dwToRgKaNj0hASBbg+V9hmmfmPhYdkw3Ls+3DYj9Bq61VCROrKDfVw==
X-Received: by 2002:a63:416:: with SMTP id 22mr3567052pge.363.1621320204146;
        Mon, 17 May 2021 23:43:24 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 06/15] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Tue, 18 May 2021 14:42:06 +0800
Message-Id: <20210518064215.2856977-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to take a struct device argument. This will be
useful later to allow for restricted DMA pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  6 +++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7bcdd1205535..a5df35bfd150 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -504,7 +504,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -575,7 +575,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -781,7 +781,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -794,7 +794,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -815,7 +815,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -832,7 +832,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 4c89afc0df62..0c6ed09f8513 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index b469f04cca26..2a6cca07540b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -113,9 +113,9 @@ static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
 	return io_tlb_default_mem;
 }
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -127,7 +127,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.31.1.751.gd2f1c929bd-goog
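The effect of this patch is that the buffer check is resolved per device rather than against the single global pool. The following is a self-contained userspace model of that logic, not kernel code: the structure and function names mirror the patch, but the field layout, addresses, and the `dma_io_tlb_mem` device member (introduced elsewhere in this series) are stand-ins for illustration.

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Model of an swiotlb pool: a contiguous physical address range. */
struct io_tlb_mem {
	uint64_t start;
	uint64_t end;
};

/* Model of a device that may carry its own (restricted) pool. */
struct device {
	struct io_tlb_mem *dma_io_tlb_mem; /* NULL means "use the default" */
};

static struct io_tlb_mem io_tlb_default_mem = { 0x1000, 0x2000 };

/* Per-device lookup: prefer the device's own pool, else the global one. */
static struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
{
	if (dev && dev->dma_io_tlb_mem)
		return dev->dma_io_tlb_mem;
	return &io_tlb_default_mem;
}

/* After this patch, the check consults the device's pool, so the same
 * physical address can be a bounce buffer for one device but not another. */
static bool is_swiotlb_buffer(struct device *dev, uint64_t paddr)
{
	struct io_tlb_mem *mem = get_io_tlb_mem(dev);

	return mem && paddr >= mem->start && paddr < mem->end;
}
```

This is why every call site in the diff gains a `dev` argument: the answer now depends on which device performed the mapping.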



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:44:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:44:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128777.241732 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litSW-0007SC-7K; Tue, 18 May 2021 06:44:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128777.241732; Tue, 18 May 2021 06:44:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litSW-0007S5-46; Tue, 18 May 2021 06:44:04 +0000
Received: by outflank-mailman (input) for mailman id 128777;
 Tue, 18 May 2021 06:44:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litSV-0007Nz-3j
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:44:03 +0000
Received: from mail-pf1-x430.google.com (unknown [2607:f8b0:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 39c0ef2e-1c34-4b28-a921-96ab2962ec73;
 Tue, 18 May 2021 06:43:59 +0000 (UTC)
Received: by mail-pf1-x430.google.com with SMTP id c17so6684207pfn.6
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:43:59 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id a20sm11420303pfc.186.2021.05.17.23.43.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:43:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39c0ef2e-1c34-4b28-a921-96ab2962ec73
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=rgayaZ7GWHTeliLzRc8wy4aBShyZ94T8DlXHnve2v7k=;
        b=CVg5cNHTebHTJBqunCV47Mo4qFeoHBEr6ld6Z/DEG1lU3dLG5r4Wtlzfn2KP5mrHvt
         1FVkRCobdEPqqc2lDqiiicjReh7MqCklniPhFZkWQ+I1p8japwv1ZRgN9VbgBNgBNXG2
         qhC05pQchQ4qBPIe+0xKhViNhyd4HiCnod0sI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=rgayaZ7GWHTeliLzRc8wy4aBShyZ94T8DlXHnve2v7k=;
        b=tM+yarZGU8BU0DKI1/Tl1ntScUctvXXIMO1mbutzUDBsBDL5TVXpL/Nf0O62N/8Ofs
         +Rtq3j4FqUsx9CEThd39n7za+K9QO9rxlQy7HKZ5GPKxpWeD+umYzRsm7catIX8wR8ek
         kp53EAhY1HYDGd3dPAgcZA+/ni8/olAXS02oA9cRHLq74+GjlJ5R/GjJYNoTXlTrY2jX
         AiNOhiFRIXnuiQ4JOR86ynvZczDKYvcfMZPFCFYbEp08jDSWRUl6il7Pi35shBumyoLv
         bkVJN+d+yTKAkxJmDZGUqxzOYQ9joWPZAMl7dPD1tE4lpAhG94CAcOeNg8xEVH447q+x
         rbfA==
X-Gm-Message-State: AOAM532TFuAsyimyA5aUVLjRWl5YVNKdVVa6iS6uOcRect7O2Rm8zkbs
	sumx9b+hPMuPq9rMq04rnO8YOQ==
X-Google-Smtp-Source: ABdhPJzly3p1J9a9fnkqt1peWSejUIuoFQSkE4/Sieh8f/p1YXKxhe9yeeLQRNOCgO9q/A9p4jxA0A==
X-Received: by 2002:a63:ae01:: with SMTP id q1mr3455733pgf.216.1621320238755;
        Mon, 17 May 2021 23:43:58 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 10/15] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Tue, 18 May 2021 14:42:10 +0800
Message-Id: <20210518064215.2856977-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, release_slots, to make the code reusable for supporting
different bounce buffer pools, e.g. the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 2ec6711071de..cef856d23194 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -550,27 +550,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -605,6 +593,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.31.1.751.gd2f1c929bd-goog
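The point of the split is that unmap is "sync back to the CPU, then free the slots", while a plain free needs only the second half. A minimal userspace model of the resulting call structure follows; the function bodies are counters standing in for the real bounce copy and free-list update, so this is an illustration of the control flow, not kernel code.

```c
#include <stddef.h>
#include <stdint.h>

enum dma_data_direction { DMA_BIDIRECTIONAL, DMA_TO_DEVICE, DMA_FROM_DEVICE };
#define DMA_ATTR_SKIP_CPU_SYNC (1UL << 5)

static int bounce_calls;	/* stands in for copying data back to the CPU */
static int release_calls;	/* stands in for the free-list update */

static void swiotlb_bounce(uint64_t tlb_addr, size_t size,
			   enum dma_data_direction dir)
{
	(void)tlb_addr; (void)size; (void)dir;
	bounce_calls++;
}

/* The new helper: slot bookkeeping only, no DMA sync. A later
 * swiotlb_free() can call this directly. */
static void release_slots(uint64_t tlb_addr)
{
	(void)tlb_addr;
	release_calls++;
}

/* After the refactor, unmap is just "maybe sync, then release". */
static void swiotlb_tbl_unmap_single(uint64_t tlb_addr, size_t mapping_size,
				     enum dma_data_direction dir,
				     unsigned long attrs)
{
	/* First, sync the memory before unmapping the entry. */
	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
		swiotlb_bounce(tlb_addr, mapping_size, DMA_FROM_DEVICE);

	release_slots(tlb_addr);
}
```

An allocation freed via a future swiotlb_free() never had device data to sync back, which is exactly why the sync step stays out of release_slots().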



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:44:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:44:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128780.241743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litSc-0007ol-Fw; Tue, 18 May 2021 06:44:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128780.241743; Tue, 18 May 2021 06:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litSc-0007oc-Co; Tue, 18 May 2021 06:44:10 +0000
Received: by outflank-mailman (input) for mailman id 128780;
 Tue, 18 May 2021 06:44:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litSb-0007nQ-3R
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:44:09 +0000
Received: from mail-pj1-x102f.google.com (unknown [2607:f8b0:4864:20::102f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2e42c7e5-6363-4e5e-bf53-3d89b9691143;
 Tue, 18 May 2021 06:44:08 +0000 (UTC)
Received: by mail-pj1-x102f.google.com with SMTP id ot16so3046470pjb.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:44:08 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id v15sm12381541pgc.57.2021.05.17.23.44.00
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:44:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e42c7e5-6363-4e5e-bf53-3d89b9691143
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=H/0xCRd+ir9ugEan1LBVBlJiEk+45QLwGnb5fc8838I=;
        b=KR7bAMKVXiuef9Enfc3qqOsvPP2FyKN9RXcHtF7tuhacS+g7AW6vqxy6Oxigwcnj/w
         NML1jjMlV0WumJ3qj48zCzjjpMb4o5kwSR3uUi5FPDJ3Io5wXWz3iufc8OItPyWghGm7
         nUiT16XOLQ2azpPF67BtZlco+kTeYLED2mK30=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=H/0xCRd+ir9ugEan1LBVBlJiEk+45QLwGnb5fc8838I=;
        b=ezDOZPT6nlz7/SBiy0bsGAw9mRh4xahnkpfCv4Qyw+j5L6PsUt6Nc8u+BWf8sh2A1y
         XGGq27iIpRoUUdN3wK2MlTIOyShm+tl+om2gDtex/Tio03CiLkkdTDf2g3AAzyfhG5Mw
         yAAOa1IfGP0Jmr1kA7lgugydg4Np10xwRvFASe4XoQ2KVlOBMiQB8Ppgq05Y6H1Du5Du
         rRHyEfqd67PF5LO214Yar1624ZAzA1IW0KLHhM8tB5zMlp0vT2dqss4cMMHXbsEG1Amh
         JdESlVsOTJG1I5k8qnBa4TQspjO/H3ajPY73xPlu6uYKLyZW2MlaDk4Oz5LFJ/6p7hKG
         iIxA==
X-Gm-Message-State: AOAM533vpKB5UruFG4g7Bjzuhdcg4KvGTyQbLnD9m1E524GzwhRf48RU
	WjxAqewwZJv+KQiVLBxMvVUGvQ==
X-Google-Smtp-Source: ABdhPJyHq2ByMSFXLwa1tqvbpQpdfl+6mE0dgptE4Iebu3B0u/RV1TTeI+xPp14xDPDAQBFpeOD6KQ==
X-Received: by 2002:a17:90b:188f:: with SMTP id mn15mr3720648pjb.219.1621320247763;
        Mon, 17 May 2021 23:44:07 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 11/15] dma-direct: Add a new wrapper __dma_direct_free_pages()
Date: Tue, 18 May 2021 14:42:11 +0800
Message-Id: <20210518064215.2856977-12-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new wrapper __dma_direct_free_pages() that will be useful later
for swiotlb_free().

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 078f7087e466..eb4098323bbc 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,12 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +243,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +279,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +316,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +335,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
-- 
2.31.1.751.gd2f1c929bd-goog
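The wrapper buys a single funnel point: once every free path goes through __dma_direct_free_pages(), a later patch can add a "does this page belong to a restricted pool?" branch in one place instead of four call sites. The sketch below models that eventual shape in userspace; the swiotlb_free() branch is *not* in this patch — it is an assumption based on the commit message's mention of swiotlb_free(), and the boolean flag stands in for the real pool-membership check.

```c
#include <stdbool.h>

static int contiguous_frees;	/* stands in for dma_free_contiguous() */
static int pool_frees;		/* stands in for returning slots to a pool */
static bool page_is_pool;	/* stands in for an is_swiotlb_buffer() check */

static void dma_free_contiguous(void)
{
	contiguous_frees++;
}

/* Hypothetical later hook: free only succeeds for pool pages. */
static bool swiotlb_free(void)
{
	if (!page_is_pool)
		return false;
	pool_frees++;
	return true;
}

/* The wrapper from this patch, with the anticipated branch added: try the
 * restricted pool first, fall back to the normal contiguous free. */
static void __dma_direct_free_pages(void)
{
	if (swiotlb_free())
		return;
	dma_free_contiguous();
}
```

In the patch as posted the wrapper simply forwards to dma_free_contiguous(); the value is in the indirection, not the body.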



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:44:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:44:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128782.241754 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litSl-0008Kf-Q8; Tue, 18 May 2021 06:44:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128782.241754; Tue, 18 May 2021 06:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litSl-0008KW-LA; Tue, 18 May 2021 06:44:19 +0000
Received: by outflank-mailman (input) for mailman id 128782;
 Tue, 18 May 2021 06:44:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litSk-0008HI-Hl
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:44:18 +0000
Received: from mail-pf1-x42f.google.com (unknown [2607:f8b0:4864:20::42f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1a22553b-ddbd-4efe-9be4-3340206f65f4;
 Tue, 18 May 2021 06:44:17 +0000 (UTC)
Received: by mail-pf1-x42f.google.com with SMTP id c17so6684729pfn.6
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:44:17 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id a7sm11612338pfg.76.2021.05.17.23.44.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:44:16 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1a22553b-ddbd-4efe-9be4-3340206f65f4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=XERRPmZD9LZ8NlvTKf69Ki0Z3JOqJsfzsMjeGM/hmyc=;
        b=oT5AI3WxhLeqEGvrMWNth5ilokIoDf3aUpAY7upThFXBQOnHIqoXsdqVxmKsJ2ibAV
         p2tS4tJqThBqZvPuEA86u4i7tHwUvgMMmS6jwMmWzqazAKGD8hQRrm4uJZAQme8fQgO7
         xMpoBHann0anuHkotadEJq5SA2Uh6yM8s5FoE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=XERRPmZD9LZ8NlvTKf69Ki0Z3JOqJsfzsMjeGM/hmyc=;
        b=YnKdsB1K9uz01vDSSSuO8X2bf/5d7jqSYCcM5TdGp82p/h/ZOfJgiAG/wIfvHdu8fo
         YMZKmIVk9cpOMcvVtz+g3GEry02QwKIYaK6MIePi5/vUshP9q1KMQDn8QcRXDbg4eLnx
         SnAzmyNjxXYArM1e15llU1MtB6HkRA3j2LpEAxKztZOBY9oXJ7QDwhwEbQRFihU1k/Pl
         qA9r5gw5VYg//AS/4irAp4vNeaDxGbZJErLgsaExOd4mkh73LKhHDDA/YV8iAqOHGqCt
         mivzwAevWfQje0IoQoBo7dzNu7lQsG3mkID4DmgvQUajYK34TMjz/sA0EPwGhHWrgbp+
         CyDA==
X-Gm-Message-State: AOAM531Kb4rqG5bcZ2fdMMFptAPTtG2AfOihKyynRZXEJ/QrTTpYeeyE
	7Beut9148W+TtGhvA+f0DQG8/g==
X-Google-Smtp-Source: ABdhPJyi8T1bUdRh2weFxzn5qV51jHdwUH18sdVlzVJRpz5+Czvdmvjaq4flIajQeG3jnSiA7cCp7w==
X-Received: by 2002:a05:6a00:15d4:b029:2de:a538:c857 with SMTP id o20-20020a056a0015d4b02902dea538c857mr438129pfu.51.1621320256584;
        Mon, 17 May 2021 23:44:16 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 12/15] swiotlb: Add restricted DMA alloc/free support.
Date: Tue, 18 May 2021 14:42:12 +0800
Message-Id: <20210518064215.2856977-13-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} to support memory allocation from the
restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h |  4 ++++
 kernel/dma/swiotlb.c    | 35 +++++++++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 0c5a18d9cf89..e8cf49bd90c5 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -134,6 +134,10 @@ unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index cef856d23194..d3fa4669229b 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -457,8 +457,9 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -704,6 +705,36 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	release_slots(dev, tlb_addr);
+
+	return true;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.31.1.751.gd2f1c929bd-goog
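The alloc/free pair reuses the slot machinery: allocation is find_slots() with no original address (hence the new `orig_addr &&` guard, which skips the alignment check for pool allocations), and free is the is_swiotlb_buffer() test followed by release_slots(). The userspace model below captures that shape with a naive first-fit slot allocator; the slot size, pool bounds, and the explicit `size` argument to the free are simplifications for illustration (the kernel records the allocation size in its slot metadata).

```c
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define POOL_START 0x8000u
#define SLOT_SIZE  0x800u	/* stand-in for IO_TLB_SIZE */
#define NSLOTS     8

static bool slot_used[NSLOTS];

/* Naive first-fit stand-in for the kernel's find_slots(). */
static int find_slots(size_t size)
{
	size_t need = (size + SLOT_SIZE - 1) / SLOT_SIZE;

	for (size_t i = 0; i + need <= NSLOTS; i++) {
		size_t j;

		for (j = 0; j < need && !slot_used[i + j]; j++)
			;
		if (j == need) {
			while (j--)
				slot_used[i + j] = true;
			return (int)i;
		}
	}
	return -1;
}

/* Allocation from the pool: claim slots, convert index to address. */
static uint64_t swiotlb_alloc(size_t size)
{
	int index = find_slots(size);

	if (index == -1)
		return 0;
	return POOL_START + (uint64_t)index * SLOT_SIZE;
}

/* Free returns false for addresses outside the pool, so the caller can
 * fall back to its ordinary free path (mirroring the kernel's bool return). */
static bool swiotlb_free(uint64_t addr, size_t size)
{
	size_t index, need;

	if (addr < POOL_START || addr >= POOL_START + NSLOTS * SLOT_SIZE)
		return false;

	index = (addr - POOL_START) / SLOT_SIZE;
	need = (size + SLOT_SIZE - 1) / SLOT_SIZE;
	for (size_t i = 0; i < need; i++)
		slot_used[index + i] = false;
	return true;
}
```

The bool return from swiotlb_free() is what lets __dma_direct_free_pages() (patch 11 in this series) treat the pool as optional: a false result simply means the page was never a pool page.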



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:44:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:44:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128783.241765 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litSu-0000Pf-4X; Tue, 18 May 2021 06:44:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128783.241765; Tue, 18 May 2021 06:44:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litSt-0000PW-Vc; Tue, 18 May 2021 06:44:27 +0000
Received: by outflank-mailman (input) for mailman id 128783;
 Tue, 18 May 2021 06:44:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litSs-0000OJ-N8
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:44:26 +0000
Received: from mail-pl1-x62a.google.com (unknown [2607:f8b0:4864:20::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb695a3e-000b-4a1f-b436-7f2d4eabcfaa;
 Tue, 18 May 2021 06:44:25 +0000 (UTC)
Received: by mail-pl1-x62a.google.com with SMTP id b7so376811plg.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:44:25 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id l15sm3725121pjj.23.2021.05.17.23.44.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:44:24 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb695a3e-000b-4a1f-b436-7f2d4eabcfaa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=5ME8xbVrLT1BUR1nn0WZNGLiSd7pBJOleL70nFokS5s=;
        b=OF5AjuorPUcDKWCIY2cI21H2QqMOy56tOlb0bZaIuHy41Pwxx9aMTnun5rob6gYmNy
         tgGRoEPhDR55PbYrhqOPK0hpUqcTAR/MhBXnq/aiEU0qq5WmIFoKlkSikiQ9Em1vIBZa
         h3DKJVKEpqlEZnw8ltIXGcqxM6Zc9DKkgDW2s=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=5ME8xbVrLT1BUR1nn0WZNGLiSd7pBJOleL70nFokS5s=;
        b=dbZ4Okal5ZsoiNudE7nUhtlUy5YUv93gC0m4isvAcbS2+GE27ZI5BbEd4FgB2r0+tE
         J7gxED0dDVCXJD5R3aUepqXB8nwMrwAh23COUV0DS7TWIQDTSNsLaY/zegIqh+Q1qqv8
         2Hch/2OnHNNL2EVlYlM2MgiiTheuvxZlMjeZhAgp09uVN09wBnL/4MgPtVBw9/6NgDQv
         xaAsUs1JN3OOKYdK/kGuTOGzEx78IKldWaY6jt2L8yCLkcFVBVFAEfATwbsRgh8MSbAa
         qiL/8+BWjuW98Qyvzm2woUQ+VPx1QXLflCT7jp6vSkmPlkTZ0cQR6vKrV+KzeL0M7uyl
         7xXA==
X-Gm-Message-State: AOAM532vNozwVnV68mkh6bQezLoABgzz4vHSW/d3lKU8M+zdwVQ/z1kO
	R/Cixyn0N7v2+HX0ZeVLomXl+w==
X-Google-Smtp-Source: ABdhPJwvGub0AH2WlfaqVuMPJjLs6kJJo5gDHdTcQu2VZaWHgS/8ilwSFIpGismzEgDCho/YgbEpcg==
X-Received: by 2002:a17:902:c24c:b029:ee:f427:9808 with SMTP id 12-20020a170902c24cb02900eef4279808mr2892227plg.58.1621320265144;
        Mon, 17 May 2021 23:44:25 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 13/15] dma-direct: Allocate memory from restricted DMA pool if available
Date: Tue, 18 May 2021 14:42:13 +0800
Message-Id: <20210518064215.2856977-14-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The restricted DMA pool is preferred if available.

Restricted DMA pools provide a basic level of protection against DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.

Note that since coherent allocation needs remapping, atomic coherent
allocations must instead come from a separate device coherent pool,
declared via shared-dma-pool and allocated with
dma_alloc_from_dev_coherent().
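
The fallback order this patch introduces in __dma_direct_alloc_pages() can be
modelled in plain userspace C. Every type and function below is an
illustrative stand-in, not a kernel API:

```c
#include <stdbool.h>

/*
 * Userspace model of the allocation order: try the restricted
 * (per-device swiotlb) pool first, discard the page if it fails the
 * coherency check, and fall back to the regular contiguous allocator.
 */
enum pool { POOL_RESTRICTED, POOL_CONTIGUOUS };

struct fake_dev {
	bool has_restricted_pool;    /* device has a restricted-dma-pool */
	bool restricted_coherent_ok; /* pool pages pass dma_coherent_ok() */
};

static enum pool model_alloc_pages(const struct fake_dev *dev)
{
	/* 1. Prefer the restricted pool when one is attached. */
	if (dev->has_restricted_pool && dev->restricted_coherent_ok)
		return POOL_RESTRICTED;
	/*
	 * 2. Either no restricted pool exists or its page failed
	 *    dma_coherent_ok(); fall back to dma_alloc_contiguous().
	 */
	return POOL_CONTIGUOUS;
}
```

The key point the model captures is that a restricted-pool page that falls
outside the device's coherent DMA mask is released rather than used, so the
contiguous allocator remains the path of last resort.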

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 38 +++++++++++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index eb4098323bbc..0d521f78c7b9 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 static void __dma_direct_free_pages(struct device *dev, struct page *page,
 				    size_t size)
 {
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (swiotlb_free(dev, page, size))
+		return;
+#endif
 	dma_free_contiguous(dev, page, size);
 }
 
@@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, size, gfp);
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	page = swiotlb_alloc(dev, size);
+	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+		__dma_direct_free_pages(dev, page, size);
+		page = NULL;
+	}
+#endif
+
+	if (!page)
+		page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -161,18 +175,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -253,15 +272,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -289,7 +308,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:44:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:44:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128785.241776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litT2-000143-Hh; Tue, 18 May 2021 06:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128785.241776; Tue, 18 May 2021 06:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litT2-00013h-EM; Tue, 18 May 2021 06:44:36 +0000
Received: by outflank-mailman (input) for mailman id 128785;
 Tue, 18 May 2021 06:44:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litT1-00010x-6V
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:44:35 +0000
Received: from mail-pf1-x429.google.com (unknown [2607:f8b0:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4aa19155-2cc3-42ae-b28e-ed07ba814704;
 Tue, 18 May 2021 06:44:34 +0000 (UTC)
Received: by mail-pf1-x429.google.com with SMTP id x188so6684903pfd.7
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:44:34 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id t22sm4382996pfl.50.2021.05.17.23.44.26
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:44:33 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4aa19155-2cc3-42ae-b28e-ed07ba814704
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=CTSTD7WHMjQLnf+yB2P/0oEkEOIVo79nGE1UI2KUH0Q=;
        b=nAAUAjIfoQT+ak6LjVaVetBSUMNnB6n7A11MZcPIr+b55pauAURqTYLKnD47uanmZJ
         3cJyu/W7zXu7vcXxot+z5bCjTjNShkL3mMnRTemIfV7uSqDhggL5tXTfLRj2IiU5yZER
         EMx5ceT6GrGiYCMKZLVA9T2R6o8FKbKEjkXrU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=CTSTD7WHMjQLnf+yB2P/0oEkEOIVo79nGE1UI2KUH0Q=;
        b=U3C+XbuKbb2vQfnHOImCv7VVCN2OD+59OxnWq7AQqfIirvWGT3Y+xI2s3ZHAurkhOK
         Alf1QCAa3odyDLY7vUYLKf6I39eRy4fReS8qyWFVDcyCLWFt4YHkN0hkgqK06CVl09kw
         hOGk97Tb5qQckKNNo8yT9rPdHEpMsDJNql+L1nFg5NrkudZEuLpboCfzPVb3lq9DAT0V
         /niTbwMvJlw+bjVxPMYabLxXeYZZxOHh0JyzlSrBpz2FYOPVdkf1aNVZ8G2vVm56unGi
         2ZuI2WHW16vOZ/2Pa5YkaqCTbW4OT3Fy3dV3eRWGDFpQYsGwJxj38H+AZQY/IWB9ytOb
         Y2Pw==
X-Gm-Message-State: AOAM5316GTgHBB9p1rWJE1J+lSSVR8hk95ek50qe1Fs34+bqmx/q/WDa
	MrQz5IQjhMhGhTFTADmhkQDfLw==
X-Google-Smtp-Source: ABdhPJw36g57cjbl14iJOGlbcydmr3R8efhe81FZmLfhunRUCZecqFNQ71U0gUeZm0dYMjJhBmkTmQ==
X-Received: by 2002:aa7:8e0d:0:b029:214:a511:d88b with SMTP id c13-20020aa78e0d0000b0290214a511d88bmr3600961pfr.2.1621320273735;
        Mon, 17 May 2021 23:44:33 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 14/15] dt-bindings: of: Add restricted DMA pool
Date: Tue, 18 May 2021 14:42:14 +0800
Message-Id: <20210518064215.2856977-15-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region are
specified by a restricted-dma-pool child node under the reserved-memory
node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 27 +++++++++++++++++++
 1 file changed, 27 insertions(+)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..284aea659015 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -120,6 +137,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_mem_reserved: restricted_dma_mem_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x400000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		memory-region = <&restricted_dma_mem_reserved>;
+		/* ... */
+	};
 };
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:44:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:44:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128788.241787 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litTB-0001eu-Su; Tue, 18 May 2021 06:44:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128788.241787; Tue, 18 May 2021 06:44:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litTB-0001ek-OS; Tue, 18 May 2021 06:44:45 +0000
Received: by outflank-mailman (input) for mailman id 128788;
 Tue, 18 May 2021 06:44:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litT9-0001as-OG
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:44:43 +0000
Received: from mail-pl1-x635.google.com (unknown [2607:f8b0:4864:20::635])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ef5806f-09ec-4e4c-a594-f57ba5ab4664;
 Tue, 18 May 2021 06:44:42 +0000 (UTC)
Received: by mail-pl1-x635.google.com with SMTP id a11so4536541plh.3
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:44:42 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id u23sm1635824pfn.106.2021.05.17.23.44.35
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:44:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ef5806f-09ec-4e4c-a594-f57ba5ab4664
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=I5cfiiuVy355q+pMGkQSgcYSZ5RdxJgoq0S/wr5t2jQ=;
        b=VCVUNM1MLDDCM1YEpGwmMdmzVg+pX5xmo6WpPBxB4HadDcwl/CwygcBJ5Z5xVX7EGU
         JOp+gYR2V8ugVsU2F+sXgC/jRu3mFZRgRkzZFN+5qA7YQg/e6fHR9WHgCovgX+k3cvqU
         Wun2WPZQV8InkeRjlR+5gd/zLl5l3irc9NkFk=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=I5cfiiuVy355q+pMGkQSgcYSZ5RdxJgoq0S/wr5t2jQ=;
        b=GcFbIqsyEdYxk3i3vJvN4Xx6guwG7MeAYeY/QLGaJ0KG1CME1iW/nWHORP+l44HBpk
         P06SWEZRRooUnmYBs70MKBdIWN4/p12TrDJajXqrEHeqyY1KCLSWpUYaeLpHCL6e9zyP
         xoaTGoXRr6wMx8m1wW9zTZY8/mU9xKxnA8K3oJbaheygMLHtN0GFvZANehxddPqMJZ3g
         e/X7FLOrhwsT+vvnpVQek8GtXfBzlWpzEhLEyulJCb7OKhxj3l2M5VEEv9WcjKyAz25n
         oi/MIrR5sSO0TTGWl7y8KZSHQf7zfsJciyrv8W5YuY4ymJ7H8jMy5oeFcORLxkBU8yLd
         cSCQ==
X-Gm-Message-State: AOAM530a0GtZJnK8dR5T616HiLnRZ7UlH6T+qXw7LgZ67yal+IFF9xaN
	QXgole8QDabDAQl5zWqoCLFAaA==
X-Google-Smtp-Source: ABdhPJwzOAZEABY7sDXY0ei+QGCalaTIdclP5XZ2XQEkXIWtCph5uBf4dqlngGwrRv8DyOssDP/zXQ==
X-Received: by 2002:a17:90a:9312:: with SMTP id p18mr3499688pjo.171.1621320282285;
        Mon, 17 May 2021 23:44:42 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 15/15] of: Add plumbing for restricted DMA pool
Date: Tue, 18 May 2021 14:42:15 +0800
Message-Id: <20210518064215.2856977-16-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, look up its device node and set up
restricted DMA when a restricted-dma-pool memory region is present.
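
The lookup can be sketched in userspace C. The structure and helper below are
hypothetical models of the device-tree walk performed by
of_dma_set_restricted_buffer(), not kernel APIs:

```c
#include <string.h>
#include <stdbool.h>

/*
 * Userspace model: walk the device's "memory-region" entries and pick
 * the first available region whose compatible is "restricted-dma-pool".
 * struct fake_region is a stand-in for a parsed phandle target.
 */
struct fake_region {
	const char *compatible;
	bool available;
};

/*
 * Returns the index that would be handed to
 * of_reserved_mem_device_init_by_idx(), or -1 when the device has no
 * usable restricted-dma-pool region (in which case nothing is set up).
 */
static int find_restricted_region(const struct fake_region *regions,
				  int count)
{
	for (int i = 0; i < count; i++) {
		if (strcmp(regions[i].compatible, "restricted-dma-pool") == 0 &&
		    regions[i].available)
			return i;
	}
	return -1;
}
```

A device may reference several memory regions; only the first available
restricted-dma-pool entry is used, matching the "only one region allowed"
comment in the patch.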

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 25 +++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  5 +++++
 3 files changed, 33 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index aca94c348bd4..c562a9ff5f0b 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1112,6 +1113,30 @@ bool of_dma_is_coherent(struct device_node *np)
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
 
+int of_dma_set_restricted_buffer(struct device *dev)
+{
+	struct device_node *node;
+	int count, i;
+
+	if (!dev->of_node)
+		return 0;
+
+	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+						sizeof(phandle));
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		/* There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(
+				dev, dev->of_node, i);
+	}
+
+	return 0;
+}
+
 /**
  * of_mmio_is_nonposted - Check if device uses non-posted MMIO
  * @np:	device node
diff --git a/drivers/of/device.c b/drivers/of/device.c
index c5a9473a5fb1..d8d865223e51 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d717efbd637d..9fc874548528 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,12 +163,17 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:49:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:49:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128824.241800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litXJ-00034J-GX; Tue, 18 May 2021 06:49:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128824.241800; Tue, 18 May 2021 06:49:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litXJ-00034C-DV; Tue, 18 May 2021 06:49:01 +0000
Received: by outflank-mailman (input) for mailman id 128824;
 Tue, 18 May 2021 06:49:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litXI-000344-5l
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:49:00 +0000
Received: from mail-il1-x136.google.com (unknown [2607:f8b0:4864:20::136])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c6374ad9-b12a-43f1-8c36-1fd7a2b53ba1;
 Tue, 18 May 2021 06:48:59 +0000 (UTC)
Received: by mail-il1-x136.google.com with SMTP id z1so8275420ils.0
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:48:59 -0700 (PDT)
Received: from mail-io1-f49.google.com (mail-io1-f49.google.com.
 [209.85.166.49])
 by smtp.gmail.com with ESMTPSA id d5sm10097421ilf.55.2021.05.17.23.48.57
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:48:58 -0700 (PDT)
Received: by mail-io1-f49.google.com with SMTP id k16so8345400ios.10
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:48:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c6374ad9-b12a-43f1-8c36-1fd7a2b53ba1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=3HopsH6N9F8FpkG4FSoTMtRMwuiSWSuF3ZB9X+VJCjU=;
        b=LETfA+HNnfbbscLemZLHMHTU1ucDVUTbar59Z+Ip1n1aOSxU2vCiyi6AnzZ7ISe1Ci
         d5+6nDZp7u09C4wccj2HMQoLbhxVgGmyilxSsMMqy6XnJ9Gh0XN/+XAYDZzV88am1gOT
         FB7ehF0SoAEZFAQ80iPByl3CmJPeIGWiXIdbM=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=3HopsH6N9F8FpkG4FSoTMtRMwuiSWSuF3ZB9X+VJCjU=;
        b=NBYGZBhezN0Lk/PmwG7rkzQ/1AzKylacmKphA1ml8ElZxez/LDgT68HagyLcQjw/qR
         177yY1ngHzS+WIscYgbH+1a0LgRrQMplc1yZx+fSfsCwb3kOQVyFly6QiweRRuLI23pg
         SQXLgVTsrKo9a5hM9UsLxAjI6bqcCZjVfAJ5e2pPbmteIAcwyU9FQep4rRfGmlAHcymo
         KfJ3RQj23/XMHD5bFxtKGeT+kXpvtK6fbymBffY9EjfIobjEN6iTbvmp8EBhgFI+TbE9
         Xa1Jxmlh9GYYjfnI+4ntrZRWAw9bbYy9qUDD10GWuXmPtyYEPX7mzsIe+AEzNJn43qSA
         7HCQ==
X-Gm-Message-State: AOAM532TEMGJ3gXsWLp66c6bHwWsG8U+HVwdMZUZb7jUV7g+XIFFbTJJ
	lCGB+iGs1gAmI92UZktXgXojSacJlype2w==
X-Google-Smtp-Source: ABdhPJwhBO3KSW0EFRoDWRJTzhgsHA7JeKRlSuNNrCD9NbBU8x11PvPhCDM4mOvgurfC0J/tHIMe9A==
X-Received: by 2002:a05:6e02:1d15:: with SMTP id i21mr3072489ila.2.1621320538962;
        Mon, 17 May 2021 23:48:58 -0700 (PDT)
X-Received: by 2002:a05:6638:32a8:: with SMTP id f40mr3969029jav.84.1621320526895;
 Mon, 17 May 2021 23:48:46 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-5-tientzu@chromium.org>
In-Reply-To: <20210518064215.2856977-5-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 18 May 2021 14:48:35 +0800
X-Gmail-Original-Message-ID: <CALiNf2_AWsnGqCnh02ZAGt+B-Ypzs1=-iOG2owm4GZHz2JAc4A@mail.gmail.com>
Message-ID: <CALiNf2_AWsnGqCnh02ZAGt+B-Ypzs1=-iOG2owm4GZHz2JAc4A@mail.gmail.com>
Subject: Re: [PATCH v7 04/15] swiotlb: Add restricted DMA pool initialization
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

I didn't move this to a separate file because I feel it might be
confusing for swiotlb_alloc/free (and it would require more functions
to be made non-static).
Maybe instead of moving it to a separate file, we can try to come up
with better naming?
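[Editor's note: to make the "more functions to be non-static" concern concrete, here is a minimal illustrative sketch. The structures and helper names below are simplified stand-ins invented for this note, not the kernel's real definitions; the point is only that the alloc path leans on file-local helpers that would have to be exported if it moved out of kernel/dma/swiotlb.c.]

```c
#include <stdlib.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures; illustrative only. */
struct io_tlb_mem {
	char  *vaddr;          /* backing memory of the restricted pool */
	size_t used, size;
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;  /* NULL: no restricted pool */
};

/*
 * In swiotlb.c this kind of helper (akin to find_slots) is static.
 * Moving the alloc path to a separate file would force such helpers
 * to become non-static, which is the concern raised above.
 */
static void *pool_take(struct io_tlb_mem *mem, size_t size)
{
	if (mem->used + size > mem->size)
		return NULL;
	void *p = mem->vaddr + mem->used;
	mem->used += size;
	return p;
}

/* Hypothetical alloc entry point: restricted pool first, else default. */
void *restricted_alloc(struct device *dev, size_t size)
{
	if (dev->dma_io_tlb_mem)
		return pool_take(dev->dma_io_tlb_mem, size);
	return malloc(size);   /* stand-in for the default allocation path */
}
```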


From xen-devel-bounces@lists.xenproject.org Tue May 18 06:52:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:52:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128829.241812 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litaU-0004Qq-WD; Tue, 18 May 2021 06:52:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128829.241812; Tue, 18 May 2021 06:52:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litaU-0004Qj-Ss; Tue, 18 May 2021 06:52:18 +0000
Received: by outflank-mailman (input) for mailman id 128829;
 Tue, 18 May 2021 06:52:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litaT-0004Qd-UT
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:52:17 +0000
Received: from mail-pg1-x529.google.com (unknown [2607:f8b0:4864:20::529])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 393078a2-017e-4e09-a9e6-96eb30d28542;
 Tue, 18 May 2021 06:52:17 +0000 (UTC)
Received: by mail-pg1-x529.google.com with SMTP id y32so6311420pga.11
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:52:17 -0700 (PDT)
Received: from mail-pg1-f170.google.com (mail-pg1-f170.google.com.
 [209.85.215.170])
 by smtp.gmail.com with ESMTPSA id w127sm11382900pfw.4.2021.05.17.23.52.14
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:52:15 -0700 (PDT)
Received: by mail-pg1-f170.google.com with SMTP id l70so6341648pga.1
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:52:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 393078a2-017e-4e09-a9e6-96eb30d28542
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=okFGSzwxy3Xl1EDwrbsZ6bJWGdPaUEKxSezi4/SE71A=;
        b=KvboTWH6SCfdQ7LLxm6vH9xhTNNbPkylAhr3iqPQWH1QNODKOLIbMt/OYjboWMwVvr
         +KD/UQeQoUETk6OK7TCWS0Utmt5EvBfWbA4IzzRRuwJkDy5Qi3ICeWOTChp1Py6zaNQt
         fa1WHS9ymuIo1ivMcIga1N5vVpkmfX46UnGI0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=okFGSzwxy3Xl1EDwrbsZ6bJWGdPaUEKxSezi4/SE71A=;
        b=tEFE/OdqgR+nGsev2IChzSNHDgW3/hM5aFm5QueXwVndSwSKsWY6NsEfImjYKoGt/p
         3y+z+OkG4cLnGoV3eaa/Mj3VT6ejQaJmYYAGe1lmet5IlFOBSZ2mWipqN0bxG5Zksv4Z
         Q/nEHA8zhXxp9hO6+Rd1+9qEPfKWEjtaZxf0pEykP4P2N8525gcaEfwTXs8c52o+Rqe5
         UYdmrhA7c0zCgis6jaRc20vTEg0fJm6vvhpDg65uERgQsfmEMg8d0ZD1zqanXI4btt/d
         bekCyXswWjHEEdG77mRw0Dufpo9Ohj1O/zRHxkpesr6i2Ob8JwMaBr2oSU6dHlqdhud5
         wzzw==
X-Gm-Message-State: AOAM530wtXi+/1Wfl/Z/KKT6C168D3Xg3XoGoCMDsBSYfE/KhZilJqrU
	+vPgGwSDYImVogIDtxB7u+TYPgNzziutcQ==
X-Google-Smtp-Source: ABdhPJybF9TFd7lA+MuSYK+zBgx2S+SGjMxTUOapTnfl+2UGju/4PVYr3NXyV1OZDZk5YxDsUd4lAQ==
X-Received: by 2002:a62:6101:0:b029:215:3a48:4e6e with SMTP id v1-20020a6261010000b02902153a484e6emr3717961pfb.2.1621320736314;
        Mon, 17 May 2021 23:52:16 -0700 (PDT)
X-Received: by 2002:a05:6e02:1a4d:: with SMTP id u13mr3011800ilv.64.1621320723564;
 Mon, 17 May 2021 23:52:03 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-6-tientzu@chromium.org>
In-Reply-To: <20210518064215.2856977-6-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 18 May 2021 14:51:52 +0800
X-Gmail-Original-Message-ID: <CALiNf28ke3c91Y7xaHUgvJePKXqYA7UmsYJV9yaeZc3-4Lzs8Q@mail.gmail.com>
Message-ID: <CALiNf28ke3c91Y7xaHUgvJePKXqYA7UmsYJV9yaeZc3-4Lzs8Q@mail.gmail.com>
Subject: Re: [PATCH v7 05/15] swiotlb: Add a new get_io_tlb_mem getter
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

I still keep this function because directly using dev->dma_io_tlb_mem
would cause issues with memory allocation for existing devices: the
restricted pool can't support atomic coherent allocation, so we need to
distinguish between the per-device pool and the default pool in
swiotlb_alloc.
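[Editor's note: the getter pattern under discussion can be sketched as below. This is a self-contained illustration with simplified stand-in types, not the kernel code; the shape matches the get_io_tlb_mem calls visible in the diff of patch 08/15 (return the device's restricted pool when attached, otherwise the global default, so callers never see a NULL pool).]

```c
#include <stddef.h>

/* Simplified stand-ins for the kernel structures; illustrative only. */
struct io_tlb_mem {
	int restricted;    /* nonzero for a per-device restricted pool */
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;  /* NULL: no restricted pool */
};

/* Stand-in for the global default swiotlb pool. */
static struct io_tlb_mem io_tlb_default_mem_obj;
static struct io_tlb_mem *io_tlb_default_mem = &io_tlb_default_mem_obj;

/*
 * The getter: prefer the device's restricted pool, fall back to the
 * default pool. Centralizing this choice lets swiotlb_alloc treat the
 * two pools differently where needed (e.g. atomic coherent allocation).
 */
static struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
{
	return dev->dma_io_tlb_mem ? dev->dma_io_tlb_mem
				   : io_tlb_default_mem;
}
```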


From xen-devel-bounces@lists.xenproject.org Tue May 18 06:53:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:53:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128833.241823 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litbJ-00050Z-Ax; Tue, 18 May 2021 06:53:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128833.241823; Tue, 18 May 2021 06:53:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litbJ-00050S-7J; Tue, 18 May 2021 06:53:09 +0000
Received: by outflank-mailman (input) for mailman id 128833;
 Tue, 18 May 2021 06:53:07 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litSL-0003sj-3r
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:43:53 +0000
Received: from mail-pf1-x430.google.com (unknown [2607:f8b0:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b19adae2-3a06-4435-8926-6329eff50d7c;
 Tue, 18 May 2021 06:43:50 +0000 (UTC)
Received: by mail-pf1-x430.google.com with SMTP id d16so6668207pfn.12
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:43:50 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id r28sm8094082pgm.53.2021.05.17.23.43.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:43:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b19adae2-3a06-4435-8926-6329eff50d7c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=yOejb1pLptOBLwtr8s3muQhGJxeYmn24mhC6x3aimXI=;
        b=bJvp0zIiInSN5ZWaIMe8e5K+HNR7ISq4UEo1mLZbEgC1ZXykjzDgXCamzybxChCCYg
         Q5AzY3unCvHW5q2CcyLhQX/6v1MGhmSqONQXoQ4kX2g8toPd8iXpQU2fcNShC0z5alp0
         825yXIYKzFDJuPxiZomIf8Ob1ZeTDy0i8cMNU=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=yOejb1pLptOBLwtr8s3muQhGJxeYmn24mhC6x3aimXI=;
        b=gEwaWjb61a1gd/Vl2op5ZTK7DM/KT9fFCVRdLZ6QUfTj/J/vU/4+Z4dk7WcRpDRClk
         acsQ+RqzQP4sVyXCRJrdied1/kEPfSCi2X0YaETUaDdX36thIgWSijb+UhfE1vsGnu2W
         eqcCQXgLKFfOuT+t5o/6RiPDdfREkgc7bpI+WfeMhA3o+ZhnjCotvQFmw3lvCtfNPr8J
         zjDE3Ei/Dg5WuWcDT0Pq6dAfoezJDxk3f4Y19n0tVAZ4IlbVDdPQO0FzzWyx8Bfblsyf
         n8rbgVRU2q18ytcdTzN93EPeOaJGdEGpKwUMc3I/zlZYT56/BVlIxEfkgc598rM6WWnJ
         oFEg==
X-Gm-Message-State: AOAM5327xXYX/HQIOVN75BpdQ34MeOFKvtDCNnDguoAkRhFcYAUK4xBf
	sLflwctha8xKqajf8cBa6jwedw==
X-Google-Smtp-Source: ABdhPJzNkJJAn2T0mZyFQDA+s/wRVj8jNsrTpysmpN0l8T2c67G6jzizBg6Ue6FXBk8PrqmSh8meVg==
X-Received: by 2002:a63:4b5b:: with SMTP id k27mr3656248pgl.368.1621320230160;
        Mon, 17 May 2021 23:43:50 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 09/15] swiotlb: Move alloc_size to find_slots
Date: Tue, 18 May 2021 14:42:09 +0800
Message-Id: <20210518064215.2856977-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the maintenance of alloc_size to find_slots for better code
reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 95f482c4408c..2ec6711071de 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -482,8 +482,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -538,11 +541,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:53:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:53:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128839.241839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litbZ-0005Ye-3k; Tue, 18 May 2021 06:53:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128839.241839; Tue, 18 May 2021 06:53:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litbY-0005Y0-V3; Tue, 18 May 2021 06:53:24 +0000
Received: by outflank-mailman (input) for mailman id 128839;
 Tue, 18 May 2021 06:53:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litSG-0003sj-3k
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:43:48 +0000
Received: from mail-pl1-x62b.google.com (unknown [2607:f8b0:4864:20::62b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7e8e9231-2e6b-4da5-a7c2-32d53399d372;
 Tue, 18 May 2021 06:43:42 +0000 (UTC)
Received: by mail-pl1-x62b.google.com with SMTP id n3so4530538plf.7
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:43:42 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id c16sm11590759pgl.79.2021.05.17.23.43.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:43:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7e8e9231-2e6b-4da5-a7c2-32d53399d372
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=JT5d6YAbqSvmA2KlfQD5hf+kbiYfdJYrN1IkT+aNsag=;
        b=ZD9htTkyvkMMEUEkLQy/gcEpM7a0MIQiTSmGWFCNXxq+cS4fwyD1/1tr56UCMPrm90
         EVzumZJoS3llTlb5ISKK/cVcn3v918VUbLmvCTKfyykZEE8GPe8Bv+qSbFp6wqzg5TUP
         ctly0anMGiSEZrCnObhPGgfqy4/ugiUoIEdJo=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=JT5d6YAbqSvmA2KlfQD5hf+kbiYfdJYrN1IkT+aNsag=;
        b=PCfoaVxgFq2FnLBwmrg4h0klCQkhphOPFeN0lAq3NUFQ88Tkgre1zJKZtKImv9a9FC
         t+QmwO9VEZWtbNHSkeuPdkSvF6y4prr//fmqfNWKe2hXnhXZjt1QBIjEvC8qx6OKWnA0
         zPRVcH4RsU9HZ1WzEqD63bIo1PeVw2ObmEWBIPnKjp5MAXCUgamPQzfwd7jeqOlYqRi7
         ZE9Psw4HK6bL6Xml/edjbcDlLiJlsCZUzdBcqwGPKy9b5dwTT/DxvQcVg+rC8vZ9kkiC
         4I45ay0vQWbBTsAPI3cEqkNukB+SOom+H5PBYG7APW4eAwUrlF7A7Vehg8qRLqjWqOUN
         KuPA==
X-Gm-Message-State: AOAM532vArOkCF1Jhp52PWsYzzXE88Vcxs9m/DLoDsjnfzTw9xT2SC3s
	OHPWKIoCODMNlC22aQBKZMWrdg==
X-Google-Smtp-Source: ABdhPJygj36pSY5SiWUplzqr15FxIdvmJ+/xyuGNliCPbWZ+Ih2Wqk2RJDjta6zpAFevyluE/pwdNw==
X-Received: by 2002:a17:90a:bc4a:: with SMTP id t10mr3648458pjv.4.1621320221637;
        Mon, 17 May 2021 23:43:41 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 08/15] swiotlb: Bounce data from/to restricted DMA pool if available
Date: Tue, 18 May 2021 14:42:08 +0800
Message-Id: <20210518064215.2856977-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Regardless of swiotlb setting, the restricted DMA pool is preferred if
available.

The restricted DMA pools provide a basic level of protection against
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down memory access, e.g., an MPU.

Note that is_dev_swiotlb_force doesn't check whether
swiotlb_force == SWIOTLB_FORCE. Otherwise, the memory allocation behavior
with the default swiotlb would be changed by the following patch
("dma-direct: Allocate memory from restricted DMA pool if available").

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 13 +++++++++++++
 kernel/dma/direct.c     |  3 ++-
 kernel/dma/direct.h     |  3 ++-
 kernel/dma/swiotlb.c    |  8 ++++----
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index c530c976d18b..0c5a18d9cf89 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -120,6 +120,15 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev->dma_io_tlb_mem)
+		return true;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+	return false;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -131,6 +140,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..078f7087e466 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
+	     is_dev_swiotlb_force(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..f94813674e23 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
+	    is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 68e7633f11fe..95f482c4408c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -347,7 +347,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -429,7 +429,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -506,7 +506,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -557,7 +557,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:53:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:53:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128838.241834 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litbY-0005VR-PC; Tue, 18 May 2021 06:53:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128838.241834; Tue, 18 May 2021 06:53:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litbY-0005VK-Ln; Tue, 18 May 2021 06:53:24 +0000
Received: by outflank-mailman (input) for mailman id 128838;
 Tue, 18 May 2021 06:53:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litSB-0003sj-3j
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:43:43 +0000
Received: from mail-pl1-x631.google.com (unknown [2607:f8b0:4864:20::631])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dbf6dbc6-87bb-47e2-a9de-ccd97ae7e114;
 Tue, 18 May 2021 06:43:33 +0000 (UTC)
Received: by mail-pl1-x631.google.com with SMTP id p6so4519643plr.11
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:43:33 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:f284:b819:54ca:c198])
 by smtp.gmail.com with UTF8SMTPSA id a24sm11876328pgv.76.2021.05.17.23.43.25
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:43:32 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dbf6dbc6-87bb-47e2-a9de-ccd97ae7e114
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=bjArqV+hlKAu/YR4Bf0HrWAznxaG6/ViqX0/3NOaKhI=;
        b=MBjLoWJqVzfSb73dWupZUiKk/ssAN5N8GYocYapl77WaRkNBCU30CJ0pDmn9w80yhw
         MVtBb1EwytesjyLVS6eZfEylicocKFDZNAQcKX4lNQtm+ZmlmZfM95nH9fUCnMI2XRuc
         ZvQRkUg9Z86JquSdPyH9kszW5zVTlG0gx5t8A=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=bjArqV+hlKAu/YR4Bf0HrWAznxaG6/ViqX0/3NOaKhI=;
        b=T5Akw1uK3fcLgCgWsA0820LHmJWo2hqKCntgXgdQoP6DKNxuh32JyAkoCNL1IDbUrG
         lFRXxOvMLnMUWgnT66O9nOijGq9AVEIQN9BUxg2yrqq8ne+Uj5s2w7qzDwAEeW3AdggV
         Tb3xr9OeJbzDgI+qLCao7wz6sT5Er/09er8GmcyiElZRhnJyAqQjKRhDfZf5JD9lnneE
         YCNqQ9AX1EV3Z9AVeadvzMk8zBBJAupt1r6iW1bUxhJkEXvNTjjP76DwYUelE9wWKSrS
         UImUHceJ2NszxjWMOGgvpY0C35jxHqytMTBpKEhkY7BteSUGAEYmvo/A76ufOXVtzGGu
         A6pA==
X-Gm-Message-State: AOAM530p2NuTcfovcIGD8wwVGQ/ZbIpIypHq+J347q45Uuhfi1bZuq3F
	k/O2zwv/trPbSOtcjQpWFftgXA==
X-Google-Smtp-Source: ABdhPJy3X/gYk1dAyTIcg7bj+kPTZGJG2LJMuMDzmbRVZqezfKfGe6Z4HpV75vj+3EDW1ExXLQbhiw==
X-Received: by 2002:a17:902:8d83:b029:ef:9dd8:4d9 with SMTP id v3-20020a1709028d83b02900ef9dd804d9mr2930613plo.40.1621320212652;
        Mon, 17 May 2021 23:43:32 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v7 07/15] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Tue, 18 May 2021 14:42:07 +0800
Message-Id: <20210518064215.2856977-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.751.gd2f1c929bd-goog
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
References: <20210518064215.2856977-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to add a struct device argument. This will be
useful later when adding support for restricted DMA pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ce6b664b10aa..7d48c433446b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(NULL)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index e8b506a6685b..2a2ae6d6cf6d 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -321,7 +321,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(NULL);
 #endif
 
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..6d548ce53ce7 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(NULL)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 2a6cca07540b..c530c976d18b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -123,7 +123,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -143,7 +143,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 1d8eb4de0d01..68e7633f11fe 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -662,9 +662,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return get_io_tlb_mem(dev) != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.31.1.751.gd2f1c929bd-goog



From xen-devel-bounces@lists.xenproject.org Tue May 18 06:54:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:54:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128848.241858 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litcz-0006x1-Gf; Tue, 18 May 2021 06:54:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128848.241858; Tue, 18 May 2021 06:54:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litcz-0006wu-Dg; Tue, 18 May 2021 06:54:53 +0000
Received: by outflank-mailman (input) for mailman id 128848;
 Tue, 18 May 2021 06:54:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RJ/V=KN=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1litcx-0006wi-Gi
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:54:51 +0000
Received: from mail-qt1-x82b.google.com (unknown [2607:f8b0:4864:20::82b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 711b1f9f-e76a-498f-bbd6-6fe98088155a;
 Tue, 18 May 2021 06:54:50 +0000 (UTC)
Received: by mail-qt1-x82b.google.com with SMTP id y12so6700636qtx.11
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:54:50 -0700 (PDT)
Received: from mail-qk1-f173.google.com (mail-qk1-f173.google.com.
 [209.85.222.173])
 by smtp.gmail.com with ESMTPSA id v22sm11996883qtq.77.2021.05.17.23.54.48
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:54:49 -0700 (PDT)
Received: by mail-qk1-f173.google.com with SMTP id x8so8358558qkl.2
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:54:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 711b1f9f-e76a-498f-bbd6-6fe98088155a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=XgyF0OvHzw8KTyadFBcEzsCamkUD+/ukWzei3hnGvkc=;
        b=VO0BvOlWIIiGkUdtOb1VSOWiBrk2KLMr0fiDCPwaBrvrf/j2/7hCX1JlEDcxzMqFmv
         42I6m34flwgfsu+jKFLERtm6bjzporQ7M0BTCF240tpdoJChz4PcESq3XpYvN81Vfxez
         ZW4CMor3EwZRISv4nvwP/XJ/8ZZ+MrGyZniYE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=XgyF0OvHzw8KTyadFBcEzsCamkUD+/ukWzei3hnGvkc=;
        b=OeaJxwYyQns4Zhjwxp8TVMZayRFVGWrVck4Hvu7PJml8Rj08vCWoaCgrittBvJqiqX
         plUh2NwkZNR/TKlAf87ii89jexZPNF95jQuPHl6+w8NipzncMCDtpwA1gGNZgubHxlbx
         q8vNJKML3SBUpwMzOk4HHdlGrH0XL5Nc32GOYRkx4ZkJUsrGBR/Hwv/00ZIOxrchs4Um
         an2GuoEOSA9I9gHFPpbh6JeXqiq8FAsAQFAMV1FKAlsW1XcUCWNhQ9GZ0gpQn3TkUaZK
         z4d1CH9PQXA7T8udYOJ2zEpXUAdIxc4qr7iE23MXGLo+xxvvs+Wx/JayV6rwk6MxHW2o
         dTvA==
X-Gm-Message-State: AOAM532rN3rjdEWe7l6kZ9s7HXYbNfCSLT10Kb3YcK51NgXUaNzUrcTc
	vG17K8KYcEWoXRYKvX9Dx9RRPpRtjJ1Rag==
X-Google-Smtp-Source: ABdhPJwfzdK4+8KJVWbMKgj8mChKiWXU/yNI9BVpl9D/qPH/IpeBkY+4edgJVwY3S2okDMprAFaBBA==
X-Received: by 2002:ac8:6e8b:: with SMTP id c11mr3310132qtv.133.1621320889963;
        Mon, 17 May 2021 23:54:49 -0700 (PDT)
X-Received: by 2002:a05:6638:10e4:: with SMTP id g4mr3960623jae.90.1621320877050;
 Mon, 17 May 2021 23:54:37 -0700 (PDT)
MIME-Version: 1.0
References: <20210510095026.3477496-1-tientzu@chromium.org>
In-Reply-To: <20210510095026.3477496-1-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 18 May 2021 14:54:26 +0800
X-Gmail-Original-Message-ID: <CALiNf2-LhQqAX3kJSETOxG4ipu9Nhs97yYiGm0XZKG7vBQ_hNQ@mail.gmail.com>
Message-ID: <CALiNf2-LhQqAX3kJSETOxG4ipu9Nhs97yYiGm0XZKG7vBQ_hNQ@mail.gmail.com>
Subject: Re: [PATCH v6 00/15] Restricted DMA
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, nouveau@lists.freedesktop.org, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v7: https://lore.kernel.org/patchwork/cover/1431031/

On Mon, May 10, 2021 at 5:50 PM Claire Chang <tientzu@chromium.org> wrote:
>
> From: Claire Chang <tientzu@google.com>
>
> This series implements mitigations for lack of DMA access control on
> systems without an IOMMU, which could result in the DMA accessing the
> system memory at unexpected times and/or unexpected addresses, possibly
> leading to data leakage or corruption.
>
> For example, we plan to use the PCI-e bus for Wi-Fi and that PCI-e bus is
> not behind an IOMMU. As PCI-e, by design, gives the device full access to
> system memory, a vulnerability in the Wi-Fi firmware could easily escalate
> to a full system exploit (remote Wi-Fi exploits: [1a] and [1b], which show a
> full chain of exploits; [2], [3]).
>
> To mitigate the security concerns, we introduce restricted DMA. Restricted
> DMA utilizes the existing swiotlb to bounce streaming DMA in and out of a
> specially allocated region and does memory allocation from the same region.
> The feature on its own provides a basic level of protection against the DMA
> overwriting buffer contents at unexpected times. However, to protect
> against general data leakage and system memory corruption, the system needs
> to provide a way to restrict the DMA to a predefined memory region (this is
> usually done at firmware level, e.g. MPU in ATF on some ARM platforms [4]).
>
> [1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
> [1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
> [2] https://blade.tencent.com/en/advisories/qualpwn/
> [3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
> [4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
>
> v6:
> Address the comments in v5
>
> v5:
> Rebase on latest linux-next
> https://lore.kernel.org/patchwork/cover/1416899/
>
> v4:
> - Fix spinlock bad magic
> - Use rmem->name for debugfs entry
> - Address the comments in v3
> https://lore.kernel.org/patchwork/cover/1378113/
>
> v3:
> Using only one reserved memory region for both streaming DMA and memory
> allocation.
> https://lore.kernel.org/patchwork/cover/1360992/
>
> v2:
> Building on top of swiotlb.
> https://lore.kernel.org/patchwork/cover/1280705/
>
> v1:
> Using dma_map_ops.
> https://lore.kernel.org/patchwork/cover/1271660/
>
> Claire Chang (15):
>   swiotlb: Refactor swiotlb init functions
>   swiotlb: Refactor swiotlb_create_debugfs
>   swiotlb: Add DMA_RESTRICTED_POOL
>   swiotlb: Add restricted DMA pool initialization
>   swiotlb: Add a new get_io_tlb_mem getter
>   swiotlb: Update is_swiotlb_buffer to add a struct device argument
>   swiotlb: Update is_swiotlb_active to add a struct device argument
>   swiotlb: Bounce data from/to restricted DMA pool if available
>   swiotlb: Move alloc_size to find_slots
>   swiotlb: Refactor swiotlb_tbl_unmap_single
>   dma-direct: Add a new wrapper __dma_direct_free_pages()
>   swiotlb: Add restricted DMA alloc/free support.
>   dma-direct: Allocate memory from restricted DMA pool if available
>   dt-bindings: of: Add restricted DMA pool
>   of: Add plumbing for restricted DMA pool
>
>  .../reserved-memory/reserved-memory.txt       |  27 ++
>  drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
>  drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
>  drivers/iommu/dma-iommu.c                     |  12 +-
>  drivers/of/address.c                          |  25 ++
>  drivers/of/device.c                           |   3 +
>  drivers/of/of_private.h                       |   5 +
>  drivers/pci/xen-pcifront.c                    |   2 +-
>  drivers/xen/swiotlb-xen.c                     |   2 +-
>  include/linux/device.h                        |   4 +
>  include/linux/swiotlb.h                       |  41 ++-
>  kernel/dma/Kconfig                            |  14 +
>  kernel/dma/direct.c                           |  63 +++--
>  kernel/dma/direct.h                           |   9 +-
>  kernel/dma/swiotlb.c                          | 242 +++++++++++++-----
>  15 files changed, 356 insertions(+), 97 deletions(-)
>
> --
> 2.31.1.607.g51e8a6a459-goog
>
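For reference, a restricted DMA pool as introduced by the last two patches of the series ("dt-bindings: of: Add restricted DMA pool" and "of: Add plumbing for restricted DMA pool") would be declared in the device tree roughly as below. The `compatible` string and `memory-region` linkage follow the series' binding; the wifi node, labels, and addresses are made-up examples:

```dts
/* Hypothetical reserved-memory fragment; addresses are illustrative. */
reserved-memory {
	#address-cells = <2>;
	#size-cells = <2>;
	ranges;

	wifi_pool: restricted-dma-pool@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x0 0x50000000 0x0 0x400000>; /* 4 MiB pool */
	};
};

pcie_wifi: wifi@0,0 {
	memory-region = <&wifi_pool>;
	/* Streaming DMA is bounced through, and coherent memory is
	 * allocated from, the restricted pool above. */
};
```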


From xen-devel-bounces@lists.xenproject.org Tue May 18 06:57:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 06:57:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128855.241870 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litfN-0007dQ-V6; Tue, 18 May 2021 06:57:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128855.241870; Tue, 18 May 2021 06:57:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litfN-0007dJ-RA; Tue, 18 May 2021 06:57:21 +0000
Received: by outflank-mailman (input) for mailman id 128855;
 Tue, 18 May 2021 06:57:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=p7W/=KN=gmail.com=xadimgnik@srs-us1.protection.inumbo.net>)
 id 1litfM-0007dD-11
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 06:57:20 +0000
Received: from mail-wr1-x42e.google.com (unknown [2a00:1450:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6344acd-213b-43f1-b256-a5e7bb24c968;
 Tue, 18 May 2021 06:57:18 +0000 (UTC)
Received: by mail-wr1-x42e.google.com with SMTP id z17so8871298wrq.7
 for <xen-devel@lists.xenproject.org>; Mon, 17 May 2021 23:57:18 -0700 (PDT)
Received: from ?IPv6:2a00:23c5:5785:9a01:8896:b860:e287:45cb?
 ([2a00:23c5:5785:9a01:8896:b860:e287:45cb])
 by smtp.gmail.com with ESMTPSA id h14sm24965388wrq.45.2021.05.17.23.57.17
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 17 May 2021 23:57:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6344acd-213b-43f1-b256-a5e7bb24c968
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:reply-to:subject:to:cc:references:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=Lkyx21VRSMiMSZnYaucpBR50CYphKVffI2Qp7NzZRYw=;
        b=PSl/UlgL7D6yaaiILnEj/A5skHrYwipO3Kz22ZdjvhRz5X9lBJ4MM9lknTX1LlFl5j
         +rtJmfqynOf9gQmCUOr7oeW2vEFTJkxIFhCx50xwfgWEx9tFWDz7H3nNcqP97YLggEwi
         ocynSirHMYPwfrGlVvUczmfM8wxP8WzVbaEt6IDo7J+RODKhCfjMlAeovyS1+FKaHfqv
         bKLqsZknqlKQQQdTPvGp0NjXYH5MDwgVpEb6eHLP9NzSZWGAvkpDu0nhHnivA/IkKvaY
         falX0+pxA4kx2wH0cvt8aIh65WgndxlrDrjj72e/YIAw/LYBvUqXZlVGFBlyB49JiEes
         loiw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:reply-to:subject:to:cc:references
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=Lkyx21VRSMiMSZnYaucpBR50CYphKVffI2Qp7NzZRYw=;
        b=QNAa9ef58cYNAoQCsocGQyBK8iXrdFqAPqAGc13o0IEitFH95cQGjLBWJDLpF3vV8k
         cXtARnLircAYvaB8A4lyuDhI7VEhHmt9KZsf6Wcq4nQ4MeLDM7g5MNqzMXmXmP+AqzCe
         x2B4bA/YLRG/Wm+LbltRInIUnFnwSIrSyIPEgEUXVnsn6OYGS06FzGdx6WUtKjnBm3AO
         6B3BbwH3CanxzQfZFrWFw6RGc3pbtE95XwP0qFUu6tM/lzwh7LYtLHx8TJfIf4jIaOH8
         93TMhiPzPvt72MBo7xcZ0vtfikWN9oPmK4YS3jjFGQBbzv78fRtRHN7gGU8jqvSXGzka
         /Hew==
X-Gm-Message-State: AOAM532MxG2NZQX+8wTOPUV64eLaH9tuAG07pmzXb4UAl9HJx7Y4hC2a
	XUTW3sdeEor83m/wzRo45AQ=
X-Google-Smtp-Source: ABdhPJz0eCctjJyTTpyrO6jmCxzTLMb2Z0aPswDW0oB9UdnlaDFQrRGGu3gSl9DpvIMVS8+YhxPgmg==
X-Received: by 2002:adf:e6c2:: with SMTP id y2mr4742719wrm.384.1621321037892;
        Mon, 17 May 2021 23:57:17 -0700 (PDT)
From: Paul Durrant <xadimgnik@gmail.com>
X-Google-Original-From: Paul Durrant <paul@xen.org>
Reply-To: paul@xen.org
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
To: =?UTF-8?Q?Marek_Marczykowski-G=c3=b3recki?=
 <marmarek@invisiblethingslab.com>, "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Michael Brown <mbrown@fensystems.co.uk>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
 "wei.liu@kernel.org" <wei.liu@kernel.org>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Anthony PERARD <anthony.perard@citrix.com>
References: <20210413152512.903750-1-mbrown@fensystems.co.uk>
 <YJl8IC7EbXKpARWL@mail-itl>
 <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
 <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
 <YJpfORXIgEaWlQ7E@mail-itl> <YJpgNvOmDL9SuRye@mail-itl>
 <9edd6873034f474baafd70b1df693001@EX13D32EUC003.ant.amazon.com>
 <YKLjoALdw4oKSZ04@mail-itl>
Message-ID: <8b7a9cd5-3696-65c2-5656-a1c8eb174344@xen.org>
Date: Tue, 18 May 2021 07:57:16 +0100
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YKLjoALdw4oKSZ04@mail-itl>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17/05/2021 22:43, Marek Marczykowski-Górecki wrote:
> On Tue, May 11, 2021 at 12:46:38PM +0000, Durrant, Paul wrote:
>>> -----Original Message-----
>>> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>>> Sent: 11 May 2021 11:45
>>> To: Durrant, Paul <pdurrant@amazon.co.uk>
>>> Cc: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org; xen-devel@lists.xenproject.org;
>>> netdev@vger.kernel.org; wei.liu@kernel.org
>>> Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status existence before watching
>>>
>>> On Tue, May 11, 2021 at 12:40:54PM +0200, Marek Marczykowski-Górecki wrote:
>>>> On Tue, May 11, 2021 at 07:06:55AM +0000, Durrant, Paul wrote:
>>>>>> -----Original Message-----
>>>>>> From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
>>>>>> Sent: 10 May 2021 20:43
>>>>>> To: Michael Brown <mbrown@fensystems.co.uk>; paul@xen.org
>>>>>> Cc: paul@xen.org; xen-devel@lists.xenproject.org; netdev@vger.kernel.org; wei.liu@kernel.org;
>>> Durrant,
>>>>>> Paul <pdurrant@amazon.co.uk>
>>>>>> Subject: RE: [EXTERNAL] [PATCH] xen-netback: Check for hotplug-status existence before watching
>>>>>>
>>>>>> On Mon, May 10, 2021 at 08:06:55PM +0100, Michael Brown wrote:
>>>>>>> If you have a suggested patch, I'm happy to test that it doesn't reintroduce
>>>>>>> the regression bug that was fixed by this commit.
>>>>>>
>>>>>> Actually, I've just tested by simply reloading the xen-netfront module. It
>>>>>> seems that in this case the hotplug script is not re-executed. In fact, I
>>>>>> think it should not be re-executed at all, since the vif interface
>>>>>> remains in place (it just gets NO-CARRIER flag).
>>>>>>
>>>>>> This raises a question: why remove 'hotplug-status' in the first place?
>>>>>> The interface remains correctly configured by the hotplug script after
>>>>>> all. From the commit message:
>>>>>>
>>>>>>      xen-netback: remove 'hotplug-status' once it has served its purpose
>>>>>>
>>>>>>      Removing the 'hotplug-status' node in netback_remove() is wrong; the script
>>>>>>      may not have completed. Only remove the node once the watch has fired and
>>>>>>      has been unregistered.
>>>>>>
>>>>>> I think the intention was to remove 'hotplug-status' node _later_ in
>>>>>> case of quickly adding and removing the interface. Is that right, Paul?
>>>>>
>>>>> The removal was done to allow unbind/bind to function correctly. IIRC before the original patch
>>> doing a bind would stall forever waiting for the hotplug status to change, which would never happen.
>>>>
>>>> Hmm, in that case maybe don't remove it at all in the backend, and let
>>>> it be cleaned up by the toolstack, when it removes other backend-related
>>>> nodes?
>>>
>>> No, unbind/bind _does_ require hotplug script to be called again.
>>>
>>
>> Yes, sorry I was misremembering. My memory is hazy but there was definitely a problem at the time with leaving the node in place.
>>
>>> When exactly does the vif interface appear in the system (i.e. start to be
>>> available to the hotplug script)? Maybe remove 'hotplug-status' just before
>>> that point?
>>>
>>
>> I really can't remember any detail. Perhaps try reverting both patches then and check that the unbind/rmmod/modprobe/bind sequence still works (and the backend actually makes it into connected state).
> 
> Ok, I've tried this. I've reverted both commits, then used your test
> script from the 9476654bd5e8ad42abe8ee9f9e90069ff8e60c17:
>      
>      This has been tested by running iperf as a server in the test VM and
>      then running a client against it in a continuous loop, whilst also
>      running:
>      
>      while true;
>        do echo vif-$DOMID-$VIF >unbind;
>        echo down;
>        rmmod xen-netback;
>        echo unloaded;
>        modprobe xen-netback;
>        cd $(pwd);
>        brctl addif xenbr0 vif$DOMID.$VIF;
>        ip link set vif$DOMID.$VIF up;
>        echo up;
>        sleep 5;
>        done
>      
>      in dom0 from /sys/bus/xen-backend/drivers/vif to continuously unbind,
>      unload, re-load, re-bind and re-plumb the backend.
>      
> In fact, the need to call `brctl` and `ip link` manually is exactly
> because the hotplug script isn't executed. When I execute it manually,
> the backend properly gets back to working. So, removing 'hotplug-status'
> was in the correct place (netback_remove). The missing part is the toolstack
> calling the hotplug script on xen-netback re-bind.
> 

Why is that missing? We're going behind the back of the toolstack to do 
the unbind and bind so why should we expect it to re-execute a hotplug 
script?

> In this case, I'm not sure what is the proper way. If I restart
> xendriverdomain service (I do run the backend in domU), it properly
> executes hotplug script and the backend interface gets properly
> configured. But it doesn't do it on its own. It seems to be related to
> device "state" in xenstore. The specific part of the libxl is
> backend_watch_callback():
> https://github.com/xen-project/xen/blob/master/tools/libs/light/libxl_device.c#L1664
> 
>      ddev = search_for_device(dguest, dev);
>      if (ddev == NULL && state == XenbusStateClosed) {
>          /*
>           * Spurious state change, device has already been disconnected
>           * or never attached.
>           */
>          goto skip;
>      } else if (ddev == NULL) {
>          rc = add_device(egc, nested_ao, dguest, dev);
>          if (rc > 0)
>              free_ao = true;
>      } else if (state == XenbusStateClosed && online == 0) {
>          rc = remove_device(egc, nested_ao, dguest, ddev);
>          if (rc > 0)
>              free_ao = true;
>          check_and_maybe_remove_guest(gc, ddomain, dguest);
>      }
> 
> In short: if the device gets XenbusStateInitWait for the first time (the ddev ==
> NULL case), it goes to add_device(), which executes the hotplug script
> and stores the device.
> Then, if the device goes to the XenbusStateClosed + online==0 state, it
> executes the hotplug script again (with the "offline" parameter) and forgets the
> device. If you unbind the driver, the device stays in the
> XenbusStateConnected state (in xenstore), and after you bind it again,
> it goes to XenbusStateInitWait. I don't think it goes through
> XenbusStateClosed, and online stays at 1 too, so libxl doesn't execute
> the hotplug script again.

This is pretty key. The frontend should not notice an unbind/bind, i.e. 
there should be no evidence of it happening when examining states in 
xenstore (from the guest side).

   Paul

> 
> One solution could be to add an extra case at the end, like "ddev !=
> NULL && state == XenbusStateInitWait && hotplug-status != connected".
> And make sure xl devd won't call the same hotplug script multiple times
> for the same device _at the same time_ (I'm not sure about the async
> machinery here).
> 
> But even if xl devd (aka xendriverdomain service) gets "fixes" to
> execute hotplug script in that case, I don't think it would work in
> backend in dom0 case - there, I think nothing watches already configured
> vif interfaces (there is no xl devd daemon in dom0, and xl background
> process watches only domain death and cdrom eject events).
> 
> I'm adding toolstack maintainers, maybe they'll have some idea...
> 
> In any case, the issue is that the hotplug script, responsible for
> configuring the newly created vif interface, is not being called; it is
> not the kernel waiting for it. So, I think both commits should still be
> reverted.
> 



From xen-devel-bounces@lists.xenproject.org Tue May 18 07:04:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 07:04:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128869.241884 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litlv-0000oB-Sf; Tue, 18 May 2021 07:04:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128869.241884; Tue, 18 May 2021 07:04:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litlv-0000o4-PC; Tue, 18 May 2021 07:04:07 +0000
Received: by outflank-mailman (input) for mailman id 128869;
 Tue, 18 May 2021 07:04:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1litlu-0000nC-Bf
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 07:04:06 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 854eedd6-6e71-4544-b6b1-298f24d9f509;
 Tue, 18 May 2021 07:04:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0E6E3B0B7;
 Tue, 18 May 2021 07:04:04 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 854eedd6-6e71-4544-b6b1-298f24d9f509
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621321444; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/Eu+F6rHuFh5vngwMexR1YiB/bks/cxL5wk3e4UM5iw=;
	b=mOl9PR8cW22dB/qZUM8NAVsF7alJl7Cq+XCE9HlC2tfq6vWSqJWOGu4LM6ERln3Xyn1wE3
	yCAa2MAYkBFmjkHQqKSh4QxPKQtItBQZd2KPa+3ceeUnrlPoT37V9sSscNkqHxJ4NPKh9R
	QwD2+i3pJqbXDIw6c4RHr27Gk3hCmSs=
Subject: Re: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210518050738.163156-1-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <afce8ae7-8079-c73c-3eab-bc67a37fdf8e@suse.com>
Date: Tue, 18 May 2021 09:04:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518050738.163156-1-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 07:07, Penny Zheng wrote:
> +## Background
> +
> +Cases where static allocation is needed:
> +
> +  * Static allocation is needed whenever a system has a pre-defined,
> +non-changing behaviour. This is usually the case in the safety world, where
> +the system must behave the same upon every reboot, so memory resources for
> +both Xen and domains should be static and pre-defined.
> +
> +  * Static allocation is needed whenever a guest wants to allocate memory
> +from well-defined memory ranges. For example, a system has one high-speed
> +RAM region, and would like to assign it to one specific domain.
> +
> +  * Static allocation is needed whenever a system needs a guest restricted
> +to some known memory area due to hardware limitations. For example, some
> +devices can only do DMA to a specific part of the memory.

This isn't a reason for fully static partitioning. Such devices also exist
in the x86 world, without there having been a need to statically partition
systems. All you want to guarantee is that for I/O purposes a domain has
_some_ memory in the accessible range.

> +Limitations:
> +  * There is no consideration for PV devices at the moment.

How would PV devices be affected? Drivers had better use grant
transfers, but that's about it afaics.

> +## Design on Static Allocation
> +
> +Static allocation refers to a system or sub-systems (domains) for which
> +memory areas are pre-defined by configuration using physical address ranges.
> +
> +This pre-defined memory -- static memory -- consists of parts of RAM
> +reserved from the beginning, and shall never go to heap allocator or boot
> +allocator for any use.
> +
> +### Static Allocation for Domains
> +
> +### New Device Tree Node: `xen,static_mem`
> +
> +This introduces a new `xen,static_mem` node to define static memory regions
> +for one specific domain.
> +
> +For domains on static allocation, users need to pre-define guest RAM
> +regions in configuration, through the `xen,static_mem` node under the
> +appropriate `domUx` node.
> +
> +Here is one example:
> +
> +
> +        domU1 {
> +            compatible = "xen,domain";
> +            #address-cells = <0x2>;
> +            #size-cells = <0x2>;
> +            cpus = <2>;
> +            xen,static-mem = <0x0 0xa0000000 0x0 0x20000000>;
> +            ...
> +        };
> +
> +The 512 MB of RAM at 0xa0000000 are static memory reserved for domU1 as
> +its RAM.
> +
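As a side note, with `#address-cells = <0x2>` and `#size-cells = <0x2>` each
64-bit value in the example is split across two 32-bit cells. A minimal
standalone sketch of the decoding (illustrative only, not Xen's device tree
parsing code, and assuming the cells were already converted to host byte
order):

```c
#include <stdint.h>

/* Combine two 32-bit device tree cells (high cell first) into one
 * 64-bit value.  Real DT blobs store cells big-endian; this sketch
 * assumes they were already converted to host order. */
uint64_t dt_read_u64(const uint32_t *cells)
{
    return ((uint64_t)cells[0] << 32) | cells[1];
}

/* Decode one <base size> pair of a xen,static-mem style property. */
void decode_static_mem(const uint32_t *prop, uint64_t *base, uint64_t *size)
{
    *base = dt_read_u64(&prop[0]);
    *size = dt_read_u64(&prop[2]);
}
```

For the example property `<0x0 0xa0000000 0x0 0x20000000>` this yields base
0xa0000000 and size 0x20000000, i.e. 512 MB.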
> +### New Page Flag: `PGC_reserved`
> +
> +In order to differentiate and manage pages reserved as static memory apart
> +from those allocated from the heap allocator for normal domains, we shall
> +introduce a new page flag, `PGC_reserved`.

This contradicts you saying higher up "shall never go to heap allocator
or boot allocator for any use" - no such flag ought to be needed if the
allocators never get to see these pages. And even if such a flag were
needed, I can't see how it would be sufficient to express the page ->
domain relationship.

> +Grant pages `PGC_reserved` when initializing static memory.

I'm afraid I don't understand this sentence at all.

> +### New linked page list: `reserved_page_list` in `struct domain`
> +
> +Right now, for normal domains, when assigning pages to a domain, pages
> +allocated from the heap allocator as guest RAM are inserted into one linked
> +page list, `page_list`, for later management and storage.
> +
> +So, in order to tell them apart, pages allocated from static memory shall
> +be inserted into a different linked page list, `reserved_page_list`.
> +
> +Later, when the domain gets destroyed and its memory relinquished, only
> +pages in `page_list` go back to the heap; pages in `reserved_page_list`
> +shall not.

If such a domain can be destroyed (and re-created), how would the
association between memory and intended owner be retained / propagated?
Where else would the pages from reserved_page_list go (they need to go
somewhere, as the struct domain instance will go away)?

> +### Memory Allocation for Domains on Static Allocation
> +
> +RAM regions pre-defined as static memory for one specific domain shall be
> +parsed and reserved from the beginning. And they shall never go to any
> +memory allocator for any use.
> +
> +Later, when allocating static memory for this specific domain, after
> +acquiring those reserved regions, users need to do a set of verifications
> +before assigning.
> +For each page, this at least includes the following steps:
> +1. Check if it is in the free state and has a zero reference count.
> +2. Check if the page is reserved (`PGC_reserved`).

If this memory is reserved for a specific domain, why is such verification
necessary?

> +Then these pages are assigned to the specific domain, and all of them go
> +onto one new linked page list, `reserved_page_list`.
> +
> +At last, set up guest P2M mapping. By default, it shall be mapped to the fixed
> +guest RAM address `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`, just like normal
> +domains. But later in 1:1 direct-map design, if `direct-map` is set, the guest
> +physical address will equal to physical address.

I think you're missing "host" ahead of the 2nd "physical address"?

> +### Static Allocation for Xen itself
> +
> +### New Device Tree Node: `xen,reserved_heap`
> +
> +Static memory for Xen heap refers to parts of RAM reserved in the beginning
> +for Xen heap only. The memory is pre-defined through XEN configuration
> +using physical address ranges.
> +
> +The reserved memory for Xen heap is an optional feature and can be enabled
> +by adding a device tree property in the `chosen` node. Currently, this feature
> +is only supported on AArch64.

The earlier "Cases where static allocation is needed" doesn't really seem
to cover any case where this would be needed for Xen itself. Without a
need, I don't see the point of having the feature.

> +## Background
> +
> +Cases where a domU needs a 1:1 direct-mapped memory map:
> +
> +  * IOMMU not present in the system.
> +  * IOMMU disabled when it doesn't cover a specific device and all the
> +guests are trusted. Consider a mixed scenario, with a few devices behind
> +the IOMMU and a few not: guest DMA security still could not be totally
> +guaranteed, so users may want to disable the IOMMU to at least gain some
> +performance improvement.
> +  * IOMMU disabled as a workaround when it doesn't have enough bandwidth.
> +To be specific, in a few extreme situations, when multiple devices do DMA
> +concurrently, these requests may exceed the IOMMU's transmission capacity.
> +  * IOMMU disabled when it adds too much latency to DMA. For example, a
> +TLB may be missing in some IOMMU hardware, which may add latency to DMA
> +progress, so users may want to disable it in some realtime scenarios.
> +
> +*WARNING:
> +Users should be aware that it is not always secure to assign a device without
> +IOMMU/SMMU protection.
> +When the device is not protected by the IOMMU/SMMU, the administrator should
> +make sure that:
> + 1. The device is assigned to a trusted guest.
> + 2. Users have additional security mechanisms on the platform.
> +
> +Limitations:
> +  * There is no consideration for PV devices at the moment.

Again I'm struggling to see how PV devices might be impacted.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 07:15:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 07:15:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128875.241898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litx9-0002Jq-19; Tue, 18 May 2021 07:15:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128875.241898; Tue, 18 May 2021 07:15:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1litx8-0002Jj-U1; Tue, 18 May 2021 07:15:42 +0000
Received: by outflank-mailman (input) for mailman id 128875;
 Tue, 18 May 2021 07:15:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1litx7-0002Jd-PR
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 07:15:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b4526d1e-a354-41a5-9b7b-74c0f2dd05ee;
 Tue, 18 May 2021 07:15:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EA50ACF5;
 Tue, 18 May 2021 07:15:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b4526d1e-a354-41a5-9b7b-74c0f2dd05ee
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621322139; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ViA/9VBVx2zGG16Lk8Vsi237Q91m+xKo1Y9XUEB7aEs=;
	b=huruEDjKMzdoThn+foEHZt/KI+++H8fkOyM1qrxMhhcrFV4fDriXnocrVa3PSa4H5kyB4d
	ZU0PsHoFaBXia8pYQDY++TJERrfBLPWHkruqP78NHuwm3ER7Zt4vWlDZAhAr/4mcRZkxtF
	ixvU7BNEOPabEiphLR6yN2U92SfLx4o=
Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-5-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <dbffa647-37e2-93b6-4041-a1344aeb1837@suse.com>
Date: Tue, 18 May 2021 09:15:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-5-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 07:21, Penny Zheng wrote:
> This patch introduces static memory initialization during system RAM
> boot-up.
> 
> The new function init_staticmem_pages is the equivalent of init_heap_pages,
> responsible for static memory initialization.
> 
> The helper free_staticmem_pages is the equivalent of free_heap_pages, to
> free nr_pfns pages of static memory.
> For each page, this includes the following steps:
> 1. change the page state from in-use (also the initialization state) to the
> free state and grant PGC_reserved.
> 2. set its owner to NULL and make sure the page is not a guest frame any
> more

But isn't the goal (as per the previous patch) to associate such pages
with a _specific_ domain?
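For illustration, the two per-page steps from the commit message can be
sketched with stand-in types (Xen's real `struct page_info` layout and flag
encoding differ; the `PGC_*` values here are placeholders):

```c
#include <stddef.h>

/* Placeholder flag values -- Xen's real PGC_* encoding differs. */
#define PGC_state_free (1UL << 0)
#define PGC_reserved   (1UL << 1)

struct domain;                     /* opaque here */

struct page_info {
    unsigned long count_info;
    struct domain *owner;
};

/* The per-page work described above: move the page to the free state
 * while keeping PGC_reserved set, and drop its owner so the page is
 * no longer treated as a guest frame. */
void free_staticmem_page(struct page_info *pg)
{
    pg->count_info = PGC_state_free | PGC_reserved;
    pg->owner = NULL;
}
```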

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -150,6 +150,9 @@
>  #define p2m_pod_offline_or_broken_hit(pg) 0
>  #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>  #endif
> +#ifdef CONFIG_ARM_64
> +#include <asm/setup.h>
> +#endif

Whatever it is that's needed from this header suggests the code won't
build for other architectures. I think init_staticmem_pages() in its
current shape shouldn't live in this (common) file.

> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>      spin_unlock(&heap_lock);
>  }
>  
> +/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
> +static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,
> +                                 bool need_scrub)

Right now this function gets called only from an __init one. Unless
it is intended to gain further callers, it should be marked __init
itself then. Otherwise it should be made sure that other
architectures don't include this (dead there) code.

> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    int i;

This type doesn't match nr_pfns's.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 07:24:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 07:24:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128883.241912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liu5R-0003lO-TL; Tue, 18 May 2021 07:24:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128883.241912; Tue, 18 May 2021 07:24:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liu5R-0003lH-PS; Tue, 18 May 2021 07:24:17 +0000
Received: by outflank-mailman (input) for mailman id 128883;
 Tue, 18 May 2021 07:24:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liu5P-0003lB-QO
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 07:24:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 15e980a4-3869-4f80-bc37-6830443bc962;
 Tue, 18 May 2021 07:24:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C8CF2B149;
 Tue, 18 May 2021 07:24:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15e980a4-3869-4f80-bc37-6830443bc962
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621322653; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=6SN2t2kr4nCsHFYuK21KCl0FjZfwoZetDLAugbr7I14=;
	b=hBhG+ahrGOgP+rI4LbMBMNvUW2A2oNb1NgBulLrRipGRxh5UF0gLSpvs1fzXXspkFpqh2R
	D7BF1176sFKsAuiKIa6gbASK0cwl4KvMA5Zshv/uF0Y+KyIfPt2ayJRpGMIQuwhXlGDws5
	3Rrcoe4YHleERz248UYqwRHk8DEopgw=
Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-6-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a890200d-b75b-dd59-5d13-b0b211a58da5@suse.com>
Date: Tue, 18 May 2021 09:24:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-6-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 07:21, Penny Zheng wrote:
> alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> pages of static memory. It is the equivalent of alloc_heap_pages
> for static memory.
> This commit only covers allocating at a specified starting address.
> 
> For each page, it shall check whether the page is reserved
> (PGC_reserved) and free. It shall also do a set of necessary
> initializations, mostly the same as in alloc_heap_pages, such as
> following the same cache-coherency policy and turning the page
> state into PGC_state_inuse, etc.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>  xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
>  1 file changed, 64 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 58b53c6ac2..adf2889e76 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
>      return pg;
>  }
>  
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> + * It is the equivalent of alloc_heap_pages for static memory
> + */
> +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> +                                                paddr_t start,
> +                                                unsigned int memflags)

This is surely breaking the build (at this point in the series -
recall that a series should build fine at every patch boundary),
for introducing an unused static function, which most compilers
will warn about.

Also again - please avoid introducing code that's always dead for
certain architectures. Quite likely you want a Kconfig option to
put a suitable #ifdef around such functions.

And a nit: Please correct the apparently off-by-one indentation.

> +{
> +    bool need_tlbflush = false;
> +    uint32_t tlbflush_timestamp = 0;
> +    unsigned int i;

This variable's type should (again) match nr_pfns's (albeit I
think that parameter really wants to be nr_mfns).

> +    struct page_info *pg;
> +    mfn_t s_mfn;
> +
> +    /* For now, it only supports allocating at specified address. */
> +    s_mfn = maddr_to_mfn(start);
> +    pg = mfn_to_page(s_mfn);
> +    if ( !pg )
> +        return NULL;

Under what conditions would mfn_to_page() return NULL?

> +    for ( i = 0; i < nr_pfns; i++)
> +    {
> +        /*
> +         * Reference count must continuously be zero for free pages
> +         * of static memory(PGC_reserved).
> +         */
> +        ASSERT(pg[i].count_info & PGC_reserved);
> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> +        {
> +            printk(XENLOG_ERR
> +                    "Reference count must continuously be zero for free pages"
> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +                    i, mfn_x(page_to_mfn(pg + i)),
> +                    pg[i].count_info, pg[i].tlbflush_timestamp);

Nit: Indentation again.

> +            BUG();
> +        }
> +
> +        if ( !(memflags & MEMF_no_tlbflush) )
> +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> +                                &tlbflush_timestamp);
> +
> +        /*
> +         * Reserve flag PGC_reserved and change page state

DYM "Preserve ..."?

> +         * to PGC_state_inuse.
> +         */
> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> +        /* Initialise fields which have other uses for free pages. */
> +        pg[i].u.inuse.type_info = 0;
> +        page_set_owner(&pg[i], NULL);
> +
> +        /*
> +         * Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +                            !(memflags & MEMF_no_icache_flush));
> +    }
> +
> +    if ( need_tlbflush )
> +        filtered_flush_tlb_mask(tlbflush_timestamp);

With reserved pages dedicated to a specific domain, to what extent is it
possible that stale mappings from a prior use can still be around,
making such TLB flushing necessary?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 07:28:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 07:28:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128890.241925 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liu95-0004P0-Ds; Tue, 18 May 2021 07:28:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128890.241925; Tue, 18 May 2021 07:28:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liu95-0004Ot-Av; Tue, 18 May 2021 07:28:03 +0000
Received: by outflank-mailman (input) for mailman id 128890;
 Tue, 18 May 2021 07:28:01 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liu93-0004Oi-59
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 07:28:01 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e7e31dd5-77d0-47ec-a44f-5ed0a5a838a1;
 Tue, 18 May 2021 07:28:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id B3EC6AD12;
 Tue, 18 May 2021 07:27:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7e31dd5-77d0-47ec-a44f-5ed0a5a838a1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621322879; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=okVqz5iwYM84IBzU/vQJDnw5VWOy4jzxieeVsM429Go=;
	b=V3nmKJIMpoYOEH5pYDx7JcGrusF7dTf7kcJO8QuHTAzjK57Qk8fQ7lLplxxBx2z2hjon+c
	E/VU1QEpHSEF5BdzLYBWsMaaQEYiUqECcJ7PTpeZGWyDYZwjlD/KFo+7HnwXirrftxSvkS
	cUCR29FBPznTU1fBHlqQudWPSul32GA=
Subject: Re: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for
 better compatibility
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-7-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ede08d62-5240-bc52-3475-abdaef1afd30@suse.com>
Date: Tue, 18 May 2021 09:27:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-7-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 07:21, Penny Zheng wrote:
> The function parameter order in assign_pages is always used as
> 1ul << order, referring to 2^order pages.
> 
> Now, for better compatibility with new static memory, order shall
> be replaced with nr_pfns, pointing to a page count with no power-of-2
> constraint, like 250MB.

While I'm not entirely opposed, I'm also not convinced. The new
user could as well break up the range into suitable power-of-2
chunks. In no case do I view the wording "compatibility" here as
appropriate. There's no incompatibility at present.
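For what it's worth, the splitting alluded to above can be sketched as
follows (a standalone illustration with made-up names, not Xen code): break
an arbitrary page range into naturally aligned power-of-2 chunks, each of
which still fits assign_pages()'s current (mfn, order) shape.

```c
#include <stddef.h>

/* Largest order such that a 2^order-page block starting at mfn is both
 * naturally aligned and no bigger than the nr pages remaining. */
unsigned int max_chunk_order(unsigned long mfn, unsigned long nr)
{
    unsigned int order = 0;

    while ( !(mfn & (1UL << order)) &&   /* doubling keeps alignment */
            (2UL << order) <= nr )       /* doubled chunk still fits */
        ++order;

    return order;
}

/* Walk the range, handing each chunk to a callback taking (mfn, order),
 * the shape assign_pages() has today.  Returns the number of chunks. */
unsigned long split_range(unsigned long mfn, unsigned long nr,
                          void (*assign)(unsigned long, unsigned int))
{
    unsigned long chunks = 0;

    while ( nr )
    {
        unsigned int order = max_chunk_order(mfn, nr);

        if ( assign )
            assign(mfn, order);
        mfn += 1UL << order;
        nr  -= 1UL << order;
        ++chunks;
    }

    return chunks;
}
```

E.g. a 250-page range starting at mfn 0 splits into 6 chunks
(128 + 64 + 32 + 16 + 8 + 2).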

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 07:29:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 07:29:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128893.241937 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liuA1-0004y3-Og; Tue, 18 May 2021 07:29:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128893.241937; Tue, 18 May 2021 07:29:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liuA1-0004xw-LJ; Tue, 18 May 2021 07:29:01 +0000
Received: by outflank-mailman (input) for mailman id 128893;
 Tue, 18 May 2021 07:29:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liuA0-0004xk-DM; Tue, 18 May 2021 07:29:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liuA0-00052A-6y; Tue, 18 May 2021 07:29:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liu9z-0003Jg-W3; Tue, 18 May 2021 07:29:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liu9z-0005jP-VY; Tue, 18 May 2021 07:28:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eifrEq2zJlsFA+cxGvL6N0119e2m5gxeb7s9wxyiuEw=; b=cT6TtmEzD8Ft3W7Mzmo6HcO9PF
	ryZntEBl470WucuJ9a1hGx8B/lsUB4h1tH1dpuxXSWQ2TqpjHzncyjuhN6YKlcnDSlvXdhiNfn9+p
	CdyqWRroaZfba+EsO5xql72vMltGLaU+WkpD/9KzoQdXHW0cpkz/MoAHXsfaiO9OedhY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161984-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161984: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d07f6ca923ea0927a1024dfccafc5b53b61cfecc
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 07:28:59 +0000

flight 161984 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161984/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-qemuu-freebsd12-amd64 13 guest-start fail in 161977 pass in 161984
 test-arm64-arm64-xl          13 debian-fixup     fail in 161977 pass in 161984
 test-arm64-arm64-xl-xsm      13 debian-fixup     fail in 161977 pass in 161984
 test-arm64-arm64-xl-credit1  13 debian-fixup     fail in 161977 pass in 161984
 test-arm64-arm64-libvirt-xsm 13 debian-fixup               fail pass in 161972
 test-arm64-arm64-xl-credit2  13 debian-fixup               fail pass in 161977

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 161972 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 161972 never pass
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 161977 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 161977 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                d07f6ca923ea0927a1024dfccafc5b53b61cfecc
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  290 days
Failing since        152366  2020-08-01 20:49:34 Z  289 days  487 attempts
Testing same since   161972  2021-05-17 00:12:03 Z    1 days    3 attempts

------------------------------------------------------------
6063 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1645314 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 18 07:34:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 07:34:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128905.241954 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liuFY-0006Yq-Nz; Tue, 18 May 2021 07:34:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128905.241954; Tue, 18 May 2021 07:34:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liuFY-0006Yj-KQ; Tue, 18 May 2021 07:34:44 +0000
Received: by outflank-mailman (input) for mailman id 128905;
 Tue, 18 May 2021 07:34:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liuFX-0006Yd-2M
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 07:34:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f7e0ecfa-3ae4-4128-b52a-82cbc2c9d463;
 Tue, 18 May 2021 07:34:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 09C0EAD12;
 Tue, 18 May 2021 07:34:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f7e0ecfa-3ae4-4128-b52a-82cbc2c9d463
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621323281; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=kShlrabaw4ls32FAThQfFmZqAFnLYEfnF1x1a3abZqg=;
	b=l/jEKPLyua2VMl/MeXHdPP2JyqQ0iWgGJAt5cP97LF4Z8pMNacc/LXgKt0Gp5ACLW5XsWq
	DY1tXQHkgN0gYO8pavKMKTqNxvOesXl3DGw/oC+f8iBMOoXVlZSqsWuvvQinFIy337K9di
	aKb9u3PxO0Jed17KB4e6MkfvRGCmpdg=
Subject: Re: [PATCH 07/10] xen/arm: introduce alloc_domstatic_pages
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
Date: Tue, 18 May 2021 09:34:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-8-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 07:21, Penny Zheng wrote:
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2447,6 +2447,9 @@ int assign_pages(
>      {
>          ASSERT(page_get_owner(&pg[i]) == NULL);
>          page_set_owner(&pg[i], d);
> +        /* use page_set_reserved_owner to set its reserved domain owner. */
> +        if ( (pg[i].count_info & PGC_reserved) )
> +            page_set_reserved_owner(&pg[i], d);

Now this is puzzling: What's the point of setting two owner fields
to the same value? I also don't recall you having introduced
page_set_reserved_owner() for x86, so how is this going to build
there?

> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>      return pg;
>  }
>  
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory,
> + * then assign them to one specific domain #d.
> + * It is the equivalent of alloc_domheap_pages for static memory.
> + */
> +struct page_info *alloc_domstatic_pages(
> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> +        unsigned int memflags)
> +{
> +    struct page_info *pg = NULL;
> +    unsigned long dma_size;
> +
> +    ASSERT(!in_irq());
> +
> +    if ( memflags & MEMF_no_owner )
> +        memflags |= MEMF_no_refcount;
> +
> +    if ( !dma_bitsize )
> +        memflags &= ~MEMF_no_dma;
> +    else
> +    {
> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> +        /* Starting address shall meet the DMA limitation. */
> +        if ( dma_size && start < dma_size )
> +            return NULL;

It is the entire range (i.e. in particular the last byte) which needs
to meet such a restriction. I'm not convinced, though, that DMA width
restrictions and static allocation can sensibly coexist.

> +    }
> +
> +    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
> +    if ( !pg )
> +        return NULL;
> +
> +    if ( d && !(memflags & MEMF_no_owner) )
> +    {
> +        if ( memflags & MEMF_no_refcount )
> +        {
> +            unsigned long i;
> +
> +            for ( i = 0; i < nr_pfns; i++ )
> +                pg[i].count_info = PGC_extra;
> +        }

Is this as well as the MEMF_no_owner case actually meaningful for
statically allocated pages?

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 07:39:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 07:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128911.241968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liuK5-0007Eg-DX; Tue, 18 May 2021 07:39:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128911.241968; Tue, 18 May 2021 07:39:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liuK5-0007EZ-A3; Tue, 18 May 2021 07:39:25 +0000
Received: by outflank-mailman (input) for mailman id 128911;
 Tue, 18 May 2021 07:39:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1liuK3-0007ER-7U
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 07:39:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ac6be63f-c116-4368-a548-a59cbce9c77a;
 Tue, 18 May 2021 07:39:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5ED9AB14B;
 Tue, 18 May 2021 07:39:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ac6be63f-c116-4368-a548-a59cbce9c77a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621323561; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ztMMXNnmS1AqoJfVHmRGuV/LMebAV7sEvMIAdb5V5GU=;
	b=sNFmCrAiJxZU6hFWJApe/NTkz8x0LMLJPxhKkr0s9BTtQMd/A4R5yRoDXJm4pEQ5dBXOqB
	YKh+Zez0O/JOnaRAo4MkxrkIu2EKRAzfZ1dp80ShDSYzeyVaAItScqyDtrWllBM1kY4jpS
	S9Yrku/1Y8q68wu1jvk1aLf0oQPPbs8=
Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
To: Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org, julien@xen.org
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-9-penny.zheng@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <97bc6ca6-206b-197f-de08-20691578b9db@suse.com>
Date: Tue, 18 May 2021 09:39:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-9-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 07:21, Penny Zheng wrote:
> Since the page_list under struct domain links pages allocated from the heap
> as guest RAM, it should not include reserved pages of static memory.
> 
> The number of PGC_reserved pages assigned to a domain is tracked in
> a new 'reserved_pages' counter. Also introduce a new reserved_page_list
> to link pages of static memory. Let page_to_list return reserved_page_list,
> when flag is PGC_reserved.
> 
> Later, when a domain gets destroyed or restarted, these new values will help
> relinquish the memory to the proper place, rather than returning it to the heap.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>  xen/common/domain.c     | 1 +
>  xen/common/page_alloc.c | 7 +++++--
>  xen/include/xen/sched.h | 5 +++++
>  3 files changed, 11 insertions(+), 2 deletions(-)

This contradicts the title's prefix: There's no Arm-specific change
here at all. But imo the title is correct, and the changes should
be Arm-specific. There's no point having struct domain fields on
e.g. x86 which aren't used there at all.

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2410,7 +2410,7 @@ int assign_pages(
>  
>          for ( i = 0; i < nr_pfns; i++ )
>          {
> -            ASSERT(!(pg[i].count_info & ~PGC_extra));
> +            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
>              if ( pg[i].count_info & PGC_extra )
>                  extra_pages++;
>          }
> @@ -2439,6 +2439,9 @@ int assign_pages(
>          }
>      }
>  
> +    if ( pg[0].count_info & PGC_reserved )
> +        d->reserved_pages += nr_pfns;

I guess this again will fail to build on x86.

> @@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
>      if ( pg->count_info & PGC_extra )
>          return &d->extra_page_list;
>  
> +    if ( pg->count_info & PGC_reserved )
> +        return &d->reserved_page_list;

Same here.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 08:19:13 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161988-mainreport@xen.org>
Subject: [libvirt test] 161988: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 08:19:04 +0000

flight 161988 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161988/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              530715bd0b619b87c969571c0c5a10ebda514218
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  312 days
Failing since        151818  2020-07-11 04:18:52 Z  311 days  304 attempts
Testing same since   161988  2021-05-18 04:19:59 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 57738 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 18 08:39:01 2021
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 08/10] xen/arm: introduce reserved_page_list
Date: Tue, 18 May 2021 08:38:32 +0000
Message-ID:
 <VE1PR08MB521538CF7B0BFED3007E5D8DF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-9-penny.zheng@arm.com>
 <97bc6ca6-206b-197f-de08-20691578b9db@suse.com>
In-Reply-To: <97bc6ca6-206b-197f-de08-20691578b9db@suse.com>

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:39 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > Since page_list under struct domain refers to linked pages as guest
> > RAM allocated from heap, it should not include reserved pages of static
> memory.
> >
> > The number of PGC_reserved pages assigned to a domain is tracked in a
> > new 'reserved_pages' counter. Also introduce a new reserved_page_list
> > to link pages of static memory. Let page_to_list return
> > reserved_page_list, when flag is PGC_reserved.
> >
> > Later, when domain get destroyed or restarted, those new values will
> > help relinquish memory to proper place, not been given back to heap.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >  xen/common/domain.c     | 1 +
> >  xen/common/page_alloc.c | 7 +++++--
> >  xen/include/xen/sched.h | 5 +++++
> >  3 files changed, 11 insertions(+), 2 deletions(-)
> 
> This contradicts the title's prefix: There's no Arm-specific change here at all.
> But imo the title is correct, and the changes should be Arm-specific. There's
> no point having struct domain fields on e.g. x86 which aren't used there at all.
> 

Yep, you're right.
I'll add ifdefs in the following changes.

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2410,7 +2410,7 @@ int assign_pages(
> >
> >          for ( i = 0; i < nr_pfns; i++ )
> >          {
> > -            ASSERT(!(pg[i].count_info & ~PGC_extra));
> > +            ASSERT(!(pg[i].count_info & ~(PGC_extra |
> > + PGC_reserved)));
> >              if ( pg[i].count_info & PGC_extra )
> >                  extra_pages++;
> >          }
> > @@ -2439,6 +2439,9 @@ int assign_pages(
> >          }
> >      }
> >
> > +    if ( pg[0].count_info & PGC_reserved )
> > +        d->reserved_pages += nr_pfns;
> 
> I guess this again will fail to build on x86.
> 
> > @@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
> >      if ( pg->count_info & PGC_extra )
> >          return &d->extra_page_list;
> >
> > +    if ( pg->count_info & PGC_reserved )
> > +        return &d->reserved_page_list;
> 
> Same here.
> 
> Jan

Thanks
Penny


From xen-devel-bounces@lists.xenproject.org Tue May 18 08:58:16 2021
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 08:57:59 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VE1PR08MB5616.eurprd08.prod.outlook.com (2603:10a6:800:1a1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 08:57:56 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Tue, 18 May 2021
 08:57:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06af0c4f-e17a-483a-9d7b-fdd9c496cddb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=JIudVAO9QCkK7W0L6n9pcEes+ogSsARVdoirSSecxg0=;
 b=MJJMW1O+kudhqBmGYSwWvFJugPGIxTCRPYm0X5j29/3Y2UWDcLPYVr63/cUhZQHNxx988bqZyi/EgFdDwN5T2Qr36lHgAPzcc3O+75oVZ+LU4yyfL4p+t9SVDHaU/COkOxvpqHETItN+PdbS6dJbzQmC+gyJ/w665bEPy9dUxNg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Topic: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Index: AQHXS6W/k/joZEkeaUiDjrfYH3jVEaro2ToAgAAR3sA=
Date: Tue, 18 May 2021 08:57:55 +0000
Message-ID:
 <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
In-Reply-To: <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: EDF81ACCB07B0F4184A6B29E9B8B3D72.0
x-checkrecipientchecked: true
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 50ab0eab-01e8-431e-f84a-08d919db0c83
x-ms-traffictypediagnostic: VE1PR08MB5616:|PAXPR08MB6751:
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5616
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	802aec0f-bbe5-4152-44d9-08d919db0745
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 08:58:04.8078
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 50ab0eab-01e8-431e-f84a-08d919db0c83
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT033.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6751

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:35 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>
> On 18.05.2021 07:21, Penny Zheng wrote:
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2447,6 +2447,9 @@ int assign_pages(
> >      {
> >          ASSERT(page_get_owner(&pg[i]) == NULL);
> >          page_set_owner(&pg[i], d);
> > +        /* use page_set_reserved_owner to set its reserved domain owner.
> */
> > +        if ( (pg[i].count_info & PGC_reserved) )
> > +            page_set_reserved_owner(&pg[i], d);
>
> Now this is puzzling: What's the point of setting two owner fields to the same
> value? I also don't recall you having introduced
> page_set_reserved_owner() for x86, so how is this going to build there?
>

Thanks for pointing out that it will fail to build on x86.
As for the duplicate value, sure, I shall change it to a domid_t domid field to
record the reserved owner; the domid alone is enough to differentiate. Even when
a domain gets rebooted and its struct domain is destroyed, the domid stays the
same. The major use cases for domains on static allocation assume the whole
system is static, with no runtime domain creation.

> > @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
> >      return pg;
> >  }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static
> > +memory,
> > + * then assign them to one specific domain #d.
> > + * It is the equivalent of alloc_domheap_pages for static memory.
> > + */
> > +struct page_info *alloc_domstatic_pages(
> > +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> > +        unsigned int memflags)
> > +{
> > +    struct page_info *pg = NULL;
> > +    unsigned long dma_size;
> > +
> > +    ASSERT(!in_irq());
> > +
> > +    if ( memflags & MEMF_no_owner )
> > +        memflags |= MEMF_no_refcount;
> > +
> > +    if ( !dma_bitsize )
> > +        memflags &= ~MEMF_no_dma;
> > +    else
> > +    {
> > +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> > +        /* Starting address shall meet the DMA limitation. */
> > +        if ( dma_size && start < dma_size )
> > +            return NULL;
>
> It is the entire range (i.e. in particular the last byte) which needs to meet such
> a restriction. I'm not convinced though that DMA width restrictions and static
> allocation are sensible to coexist.
>

FWIW, since dma_size acts as a lower bound here, if the starting address meets
the limitation, the last byte, being greater than the starting address, meets it
too.

> > +    }
> > +
> > +    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
> > +    if ( !pg )
> > +        return NULL;
> > +
> > +    if ( d && !(memflags & MEMF_no_owner) )
> > +    {
> > +        if ( memflags & MEMF_no_refcount )
> > +        {
> > +            unsigned long i;
> > +
> > +            for ( i = 0; i < nr_pfns; i++ )
> > +                pg[i].count_info = PGC_extra;
> > +        }
>
> Is this as well as the MEMF_no_owner case actually meaningful for statically
> allocated pages?
>

Thanks for pointing this out. Indeed, we do not need to take it into
consideration.

> Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 08:58:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 08:58:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128949.242057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1livYY-0008Rf-Ho; Tue, 18 May 2021 08:58:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128949.242057; Tue, 18 May 2021 08:58:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1livYY-0008RY-Eg; Tue, 18 May 2021 08:58:26 +0000
Received: by outflank-mailman (input) for mailman id 128949;
 Tue, 18 May 2021 08:58:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1livYW-0008P6-R7
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 08:58:24 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1livYW-00078k-Le; Tue, 18 May 2021 08:58:24 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1livYW-0002ZH-F7; Tue, 18 May 2021 08:58:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=g5vg7X3Q/mEzrFWNuBk1vvNTZJEr4MJvkFawdSa5SIE=; b=oBQurQcVJzTH/kAJpCn1p2qpKO
	fRBGgw8qkTDY3OJW+/yAUdZIi9Ygl/3QWj172D9juyxTkfRZ3vy09gxUCpEpu2Hd8aFw4ZrrFEJPF
	YTgTj4IJ3WkiHYRGwL/FFo17stgTjI8xEu4KHouDGbSSPfpj9LFmjjHaLMR8NmJcHGVg=;
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
Date: Tue, 18 May 2021 09:58:21 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-2-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> Static Allocation refers to system or sub-system(domains) for which memory
> areas are pre-defined by configuration using physical address ranges.
> Those pre-defined memory, -- Static Momery, as parts of RAM reserved in the

s/Momery/Memory/

> beginning, shall never go to heap allocator or boot allocator for any use.
> 
> Domains on Static Allocation is supported through device tree property
> `xen,static-mem` specifying reserved RAM banks as this domain's guest RAM.
> By default, they shall be mapped to the fixed guest RAM address
> `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> 
> This patch introduces this new `xen,static-mem` property to define static
> memory nodes in device tree file.
> This patch also documents and parses this new attribute at boot time and
> stores related info in static_mem for later initialization.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   docs/misc/arm/device-tree/booting.txt | 33 +++++++++++++++++
>   xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++++++++
>   xen/include/asm-arm/setup.h           |  2 ++
>   3 files changed, 87 insertions(+)
> 
> diff --git a/docs/misc/arm/device-tree/booting.txt b/docs/misc/arm/device-tree/booting.txt
> index 5243bc7fd3..d209149d71 100644
> --- a/docs/misc/arm/device-tree/booting.txt
> +++ b/docs/misc/arm/device-tree/booting.txt
> @@ -268,3 +268,36 @@ The DTB fragment is loaded at 0xc000000 in the example above. It should
>   follow the convention explained in docs/misc/arm/passthrough.txt. The
>   DTB fragment will be added to the guest device tree, so that the guest
>   kernel will be able to discover the device.
> +
> +
> +Static Allocation
> +=============
> +
> +Static Allocation refers to system or sub-system(domains) for which memory
> +areas are pre-defined by configuration using physical address ranges.
> +Those pre-defined memory, -- Static Momery, as parts of RAM reserved in the

s/Momery/Memory/

> +beginning, shall never go to heap allocator or boot allocator for any use.
> +
> +Domains on Static Allocation is supported through device tree property
> +`xen,static-mem` specifying reserved RAM banks as this domain's guest RAM.

I would suggest to use "physical RAM" when you refer to the host memory.

> +By default, they shall be mapped to the fixed guest RAM address
> +`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.

There are a few bits that need to be clarified or made part of the description:
   1) "By default" suggests there is an alternative possibility.
However, I don't see any.
   2) Will the first region of xen,static-mem be mapped to
GUEST_RAM0_BASE and the second to GUEST_RAM1_BASE? What if a third
region is specified?
   3) We don't guarantee the base address and the size of the banks. 
Wouldn't it be better to let the admin select the region he/she wants?
   4) How do you determine the number of cells for the address and the size?

> +Static Allocation is only supported on AArch64 for now.

The code doesn't seem to be AArch64 specific. So why can't this be used 
for 32-bit Arm?

> +
> +The dtb property should look like as follows:
> +
> +        chosen {
> +            domU1 {
> +                compatible = "xen,domain";
> +                #address-cells = <0x2>;
> +                #size-cells = <0x2>;
> +                cpus = <2>;
> +                xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
> +
> +                ...
> +            };
> +        };
> +
> +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of 512MB size

Do you mean "DomU1 will have a static memory of 512MB reserved from the 
physical address..."?

> +as guest RAM.
> diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c
> index dcff512648..e9f14e6a44 100644
> --- a/xen/arch/arm/bootfdt.c
> +++ b/xen/arch/arm/bootfdt.c
> @@ -327,6 +327,55 @@ static void __init process_chosen_node(const void *fdt, int node,
>       add_boot_module(BOOTMOD_RAMDISK, start, end-start, false);
>   }
>   
> +static int __init process_static_memory(const void *fdt, int node,
> +                                        const char *name,
> +                                        u32 address_cells, u32 size_cells,
> +                                        void *data)
> +{
> +    int i;
> +    int banks;
> +    const __be32 *cell;
> +    paddr_t start, size;
> +    u32 reg_cells = address_cells + size_cells;
> +    struct meminfo *mem = data;
> +    const struct fdt_property *prop;
> +
> +    if ( address_cells < 1 || size_cells < 1 )
> +    {
> +        printk("fdt: invalid #address-cells or #size-cells for static memory");
> +        return -EINVAL;
> +    }
> +
> +    /*
> +     * Check if static memory property belongs to a specific domain, that is,
> +     * its node `domUx` has compatible string "xen,domain".
> +     */
> +    if ( fdt_node_check_compatible(fdt, node, "xen,domain") != 0 )
> +        printk("xen,static-mem property can only locate under /domUx node.\n");
> +
> +    prop = fdt_get_property(fdt, node, name, NULL);
> +    if ( !prop )
> +        return -ENOENT;
> +
> +    cell = (const __be32 *)prop->data;
> +    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
> +
> +    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
> +    {
> +        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> +        /* Some DT may describe empty bank, ignore them */
> +        if ( !size )
> +            continue;
> +        mem->bank[mem->nr_banks].start = start;
> +        mem->bank[mem->nr_banks].size = size;
> +        mem->nr_banks++;
> +    }
> +
> +    if ( i < banks )
> +        return -ENOSPC;
> +    return 0;
> +}
> +
>   static int __init early_scan_node(const void *fdt,
>                                     int node, const char *name, int depth,
>                                     u32 address_cells, u32 size_cells,
> @@ -345,6 +394,9 @@ static int __init early_scan_node(const void *fdt,
>           process_multiboot_node(fdt, node, name, address_cells, size_cells);
>       else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
>           process_chosen_node(fdt, node, name, address_cells, size_cells);
> +    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem", NULL) )
> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> +                              size_cells, &bootinfo.static_mem);

I am a bit concerned about adding yet another method to parse the DT and
all the extra code it brings, as in patch #2.

From the host PoV, these are memory regions reserved for a specific
purpose. Would it be possible to consider the reserved-memory binding for
that purpose? This would happen outside of chosen, but we could use a
phandle to refer to the region.

>   
>       if ( rc < 0 )
>           printk("fdt: node `%s': parsing failed\n", name);
> diff --git a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h
> index 5283244015..5e9f296760 100644
> --- a/xen/include/asm-arm/setup.h
> +++ b/xen/include/asm-arm/setup.h
> @@ -74,6 +74,8 @@ struct bootinfo {
>   #ifdef CONFIG_ACPI
>       struct meminfo acpi;
>   #endif
> +    /* Static Memory */
> +    struct meminfo static_mem;
>   };
>   
>   extern struct bootinfo bootinfo;
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 09:04:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 09:04:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128959.242071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1live5-0001hH-9p; Tue, 18 May 2021 09:04:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128959.242071; Tue, 18 May 2021 09:04:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1live5-0001hA-6j; Tue, 18 May 2021 09:04:09 +0000
Received: by outflank-mailman (input) for mailman id 128959;
 Tue, 18 May 2021 09:04:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1live4-0001h4-0O
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 09:04:08 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1live3-0007GX-RW; Tue, 18 May 2021 09:04:07 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1live3-00038h-Lk; Tue, 18 May 2021 09:04:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=frunRnsuH7x4tQCoV4Rb9XfJ5xUVTaEMUkxJWXIDpjA=; b=XSVItL0WRhGaUGuwEyYdXDoPZ+
	ik8kpqt4Z7XwNjrjkg7124BylMYzr61/J6u2gMW3A02TYwlrH7e4K20juuLZn6LOjvwlHgVca9P53
	3lrqiL7gk+bXT1hYIZ2CQyZHucx4lZTiYzIrQ56FEYI200ezQu3DUAH3VLzTR8vic5EA=;
Subject: Re: [PATCH 02/10] xen/arm: handle static memory in
 dt_unreserved_regions
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-3-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <eb262284-fd02-f5a7-be45-69f11bbca7d6@xen.org>
Date: Tue, 18 May 2021 10:04:05 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-3-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> Static memory regions overlap with memory nodes. The
> overlapping memory is reserved-memory and should be
> handled accordingly:
> dt_unreserved_regions should skip these regions the
> same way it already skips mem-reserved regions.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/setup.c | 39 +++++++++++++++++++++++++++++++++------
>   1 file changed, 33 insertions(+), 6 deletions(-)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 00aad1c194..444dbbd676 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -201,7 +201,7 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>                                            void (*cb)(paddr_t, paddr_t),
>                                            int first)
>   {
> -    int i, nr = fdt_num_mem_rsv(device_tree_flattened);
> +    int i, nr_reserved, nr_static, nr = fdt_num_mem_rsv(device_tree_flattened);
>   
>       for ( i = first; i < nr ; i++ )
>       {
> @@ -222,18 +222,45 @@ static void __init dt_unreserved_regions(paddr_t s, paddr_t e,
>       }
>   
>       /*
> -     * i is the current bootmodule we are evaluating across all possible
> -     * kinds.
> +     * i is the index of the current reserved RAM bank we are evaluating
> +     * across all possible kinds.
>        *
>        * When retrieving the corresponding reserved-memory addresses
>        * below, we need to index the bootinfo.reserved_mem bank starting
>        * from 0, and only counting the reserved-memory modules. Hence,
>        * we need to use i - nr.
>        */
> -    for ( ; i - nr < bootinfo.reserved_mem.nr_banks; i++ )
> +    i = i - nr;
> +    nr_reserved = bootinfo.reserved_mem.nr_banks;
> +    for ( ; i < nr_reserved; i++ )
>       {
> -        paddr_t r_s = bootinfo.reserved_mem.bank[i - nr].start;
> -        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i - nr].size;
> +        paddr_t r_s = bootinfo.reserved_mem.bank[i].start;
> +        paddr_t r_e = r_s + bootinfo.reserved_mem.bank[i].size;
> +
> +        if ( s < r_e && r_s < e )
> +        {
> +            dt_unreserved_regions(r_e, e, cb, i + 1);
> +            dt_unreserved_regions(s, r_s, cb, i + 1);
> +            return;
> +        }
> +    }
> +
> +    /*
> +     * i is the index of the current reserved RAM bank we are evaluating
> +     * across all possible kinds.
> +     *
> +     * When retrieving the corresponding static-memory bank address
> +     * below, we need to index bootinfo.static_mem starting
> +     * from 0, counting only the static-memory banks. Hence,
> +     * we need to use i - nr_reserved.
> +     */
> +
> +    i = i - nr_reserved;
> +    nr_static = bootinfo.static_mem.nr_banks;
> +    for ( ; i < nr_static; i++ )
> +    {
> +        paddr_t r_s = bootinfo.static_mem.bank[i].start;
> +        paddr_t r_e = r_s + bootinfo.static_mem.bank[i].size;

This is the 3rd loop we are adding in dt_unreserved_regions(). Each loop
is doing pretty much the same thing except with a different array. I'd
like to avoid the new loop if possible.

As mentioned in patch #1, the static memory is another kind of reserved
memory. So could we describe the static memory using reserved-memory?
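Something along these lines, perhaps (an untested, stand-alone sketch only; the struct and function names below are simplified stand-ins for illustration, not the actual bootinfo types):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

struct membank { paddr_t start, size; };

/* Generic form of the overlap loop: walk [s, e) against any array of
 * banks, splitting around each overlapping bank and invoking cb on the
 * remaining unreserved sub-ranges. */
static void walk_unreserved(paddr_t s, paddr_t e,
                            const struct membank *bank, unsigned int nr,
                            unsigned int first,
                            void (*cb)(paddr_t, paddr_t))
{
    unsigned int i;

    for ( i = first; i < nr; i++ )
    {
        paddr_t r_s = bank[i].start;
        paddr_t r_e = r_s + bank[i].size;

        if ( s < r_e && r_s < e )
        {
            /* The bank splits [s, e) in two; recurse on each half,
             * skipping banks already checked. */
            walk_unreserved(r_e, e, bank, nr, i + 1, cb);
            walk_unreserved(s, r_s, bank, nr, i + 1, cb);
            return;
        }
    }

    if ( e > s )
        cb(s, e);
}
```

The three existing loops would then reduce to calls into one helper (or disappear entirely if the static banks were folded into bootinfo.reserved_mem as suggested above).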

>   
>           if ( s < r_e && r_s < e )
>           {
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 09:11:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 09:11:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128969.242084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1livlL-0003Ae-90; Tue, 18 May 2021 09:11:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128969.242084; Tue, 18 May 2021 09:11:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1livlL-0003AX-5a; Tue, 18 May 2021 09:11:39 +0000
Received: by outflank-mailman (input) for mailman id 128969;
 Tue, 18 May 2021 09:11:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1livlK-0003AR-6O
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 09:11:38 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:fe0e::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 466effdb-cadc-41b2-b751-df7b4013c7e7;
 Tue, 18 May 2021 09:11:35 +0000 (UTC)
Received: from AM6PR08CA0034.eurprd08.prod.outlook.com (2603:10a6:20b:c0::22)
 by AM0PR08MB5252.eurprd08.prod.outlook.com (2603:10a6:208:15a::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 09:11:33 +0000
Received: from VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:c0:cafe::ca) by AM6PR08CA0034.outlook.office365.com
 (2603:10a6:20b:c0::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Tue, 18 May 2021 09:11:33 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT054.mail.protection.outlook.com (10.152.19.64) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 09:11:32 +0000
Received: ("Tessian outbound ea2c9a942a09:v92");
 Tue, 18 May 2021 09:11:32 +0000
Received: from 5f4943b216ab.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 11CB0373-9AE0-4E60-9D65-C9146234ABF9.1; 
 Tue, 18 May 2021 09:11:26 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 5f4943b216ab.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 09:11:26 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VE1PR08MB5824.eurprd08.prod.outlook.com (2603:10a6:800:1a8::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 09:11:24 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Tue, 18 May 2021
 09:11:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 466effdb-cadc-41b2-b751-df7b4013c7e7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Q7KXzE/8MKF9VUsFuMvHcTZu5f1VfqfVr7nJoEsK1A=;
 b=Iv384AANHHuGfjuChl76CnUIx9EU08zh6AxI7jFUwfiYUC1AW1TWPsUKoWQmF6RBAj6iWy5WVEJfJuYda+R6xJEPnpeC5vUJbkgSxQoim6GdjbHIlK4pVSr6RD+Ldyrsqd84YWb0Cwtm+lkX4pPaAgtovnE/mV4qUCqTjBGE3PE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=RkqhRsdP5tX0YzNSqoGQvwoktyoxmuGgUZjHvA3nJWz0Aqju7rViAVmF7zEsuItLb/IGNxn5Wu+b8Amn/VEod/TsQnNHMOAX3sQuCCY8Rl7DI05DDFz5tNTJGCGc2vKp3AWfEw4MOpEk0SA5EcemFgHH6nl4E1xvyzpwvFHwpc+HLhomUX2LGZFNr54bogyQ0w5sgqKFz8vdgUyN51sHGxKFDbl4Kkpe7NIK7MUY2SWWEtLSlEOQ61ypg8iQoA435fJo/JNfeL7FoJm49wSjae3/MjT7C5bDQSt7powYFRem8wJskNjWDImA0NAdpIohmhU7KcWXYaLM710/eaozFQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Q7KXzE/8MKF9VUsFuMvHcTZu5f1VfqfVr7nJoEsK1A=;
 b=UxS0k/JazVllUCznwx7NYb9y5sE/h5LBxo2maDQydyBsTGcHbh79X/GTlGZt42sm/JqDd7Gyj220sX1uQdQXtgQRj32wt2C2hIO0NGiKaQezSnR50/lGgIBRqtccjs92U/OSgcqLF014TyRMnFpzHmSag1BmXucqid+5qqLmZLZthgUBXeF/mxTJbZB/zlKjZDH2pfxqnnEXOx6S0qxrJC92VKMta+ulwQWnhOe+AHN5jAFPcvJrU+W/IK2IbZJ2PU1n8jcmWqu/X39HqbU5Sh20YXCUXmw5IDp9cuGb5NVe/C0osZzwndXvhqLznyB/K8gzB854mOTcLfyFykoECQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=5Q7KXzE/8MKF9VUsFuMvHcTZu5f1VfqfVr7nJoEsK1A=;
 b=Iv384AANHHuGfjuChl76CnUIx9EU08zh6AxI7jFUwfiYUC1AW1TWPsUKoWQmF6RBAj6iWy5WVEJfJuYda+R6xJEPnpeC5vUJbkgSxQoim6GdjbHIlK4pVSr6RD+Ldyrsqd84YWb0Cwtm+lkX4pPaAgtovnE/mV4qUCqTjBGE3PE=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for
 better compatibility
Thread-Topic: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages
 for better compatibility
Thread-Index: AQHXS6W5eIGNu7DPG0i7DMYwhf4hF6ro11yAgAAZR+A=
Date: Tue, 18 May 2021 09:11:24 +0000
Message-ID:
 <VE1PR08MB52153185C5E2C25E798FB376F72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-7-penny.zheng@arm.com>
 <ede08d62-5240-bc52-3475-abdaef1afd30@suse.com>
In-Reply-To: <ede08d62-5240-bc52-3475-abdaef1afd30@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: CE5D4A2733D91149A4ECA2E40FEA8EA8.0
x-checkrecipientchecked: true
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: c98c72e1-d4f3-47d1-8caa-08d919dcee35
x-ms-traffictypediagnostic: VE1PR08MB5824:|AM0PR08MB5252:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB52524DD1BC6188B7BB826508F72C9@AM0PR08MB5252.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5824
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	28e1b53f-4ec0-4df3-8b19-08d919dce92f
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 09:11:32.8322
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: c98c72e1-d4f3-47d1-8caa-08d919dcee35
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5252

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:28 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages
> for better compatibility
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > Function parameter order in assign_pages is always used as 1ul <<
> > order, referring to 2@order pages.
> >
> > Now, for better compatibility with new static memory, order shall be
> > replaced with nr_pfns pointing to page count with no constraint, like
> > 250MB.
> 
> While I'm not entirely opposed, I'm also not convinced. The new user could
> as well break up the range into suitable power-of-2 chunks. In no case do I
> view the wording "compatibility" here as appropriate. There's no
> incompatibility at present.
> 

Yes, maybe "incompatibility" is not the right word here.
Sure, the new user could definitely use the workaround of breaking up the
range, but it may cost extra time.
Also, on MPU systems, memory range sizes are often not a power of 2.

> Jan

Thanks
Penny
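On the workaround debated in this thread (breaking an arbitrary page count into suitable power-of-2 chunks for an order-based allocator), the decomposition itself is cheap. A stand-alone sketch, not Xen code; the helper names here are invented for illustration:

```c
#include <assert.h>
#include <stdint.h>

/* Largest order such that a chunk of 1 << order pages, starting at pfn,
 * keeps pfn alignment and does not exceed nr_pfns. */
static unsigned int max_chunk_order(uint64_t pfn, uint64_t nr_pfns)
{
    unsigned int order = 0;

    while ( (1ULL << (order + 1)) <= nr_pfns &&
            (pfn & ((1ULL << (order + 1)) - 1)) == 0 )
        order++;

    return order;
}

/* Walk [pfn, pfn + nr_pfns) in power-of-2 chunks, calling cb on each. */
static void for_each_chunk(uint64_t pfn, uint64_t nr_pfns,
                           void (*cb)(uint64_t pfn, unsigned int order))
{
    while ( nr_pfns )
    {
        unsigned int order = max_chunk_order(pfn, nr_pfns);

        cb(pfn, order);
        pfn += 1ULL << order;
        nr_pfns -= 1ULL << order;
    }
}
```

Each chunk is the largest order that both fits in the remaining count and keeps the start pfn aligned, so even an odd-sized range (as on MPU systems) decomposes into a small, roughly logarithmic number of chunks.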


From xen-devel-bounces@lists.xenproject.org Tue May 18 09:18:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 09:18:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128975.242099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1livsC-0003sk-3p; Tue, 18 May 2021 09:18:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128975.242099; Tue, 18 May 2021 09:18:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1livsC-0003sd-0k; Tue, 18 May 2021 09:18:44 +0000
Received: by outflank-mailman (input) for mailman id 128975;
 Tue, 18 May 2021 09:18:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ttlr=KN=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1livsA-0003sX-AI
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 09:18:42 +0000
Received: from out5-smtp.messagingengine.com (unknown [66.111.4.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef05ad80-2aa8-4b33-8dec-7acfe4a62890;
 Tue, 18 May 2021 09:18:40 +0000 (UTC)
Received: from compute4.internal (compute4.nyi.internal [10.202.2.44])
 by mailout.nyi.internal (Postfix) with ESMTP id B75B05C01C6;
 Tue, 18 May 2021 05:18:40 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute4.internal (MEProxy); Tue, 18 May 2021 05:18:40 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Tue, 18 May 2021 05:18:38 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef05ad80-2aa8-4b33-8dec-7acfe4a62890
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=pQOEY9
	uzaBbBGayfBBHGueQpT/CjFd/DrtGtIFRgTjc=; b=O7+4JlrellB3ex6Fnt9+Ge
	D/TTpy0wedSr9jVAKUZLwrrmKzob4xubqRF00xZRHC35wvgyVTKvF43lg7A7rmCF
	eeypZyImb6/S8K5I93ous5ioW3PjhKegRNyU5vHYjSSCDD/R6+zvYPtjBEE9mLDd
	BCifk+9YlM4g6GmI5lEJCVyByaQgwvbMGxtqHEKW8gqtTy9bGUEe+YkXEmRAXT6W
	6ExJE42kTGpz5vHh6DUh2gRVBgRHwWi6FJ5SML+cjyHKyMoP08jzvgZERv0AowTq
	z0Wo+99T+9KNcCmjjNiQxPctiP7cbBc2ilUOzbOdSHTS7pmD2AYs1W+1JeRKiZBw
	==
Date: Tue, 18 May 2021 11:18:35 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: paul@xen.org
Cc: "Durrant, Paul" <pdurrant@amazon.co.uk>,
	Michael Brown <mbrown@fensystems.co.uk>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YKOGayhGghjfgNXZ@mail-itl>
References: <404130e4-210d-2214-47a8-833c0463d997@fensystems.co.uk>
 <YJmBDpqQ12ZBGf58@mail-itl>
 <21f38a92-c8ae-12a7-f1d8-50810c5eb088@fensystems.co.uk>
 <YJmMvTkp2Y1hlLLm@mail-itl>
 <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
 <YJpfORXIgEaWlQ7E@mail-itl>
 <YJpgNvOmDL9SuRye@mail-itl>
 <9edd6873034f474baafd70b1df693001@EX13D32EUC003.ant.amazon.com>
 <YKLjoALdw4oKSZ04@mail-itl>
 <8b7a9cd5-3696-65c2-5656-a1c8eb174344@xen.org>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="f87/K/rmaC2eIhg2"
Content-Disposition: inline
In-Reply-To: <8b7a9cd5-3696-65c2-5656-a1c8eb174344@xen.org>


--f87/K/rmaC2eIhg2
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 18 May 2021 11:18:35 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: paul@xen.org
Cc: "Durrant, Paul" <pdurrant@amazon.co.uk>,
	Michael Brown <mbrown@fensystems.co.uk>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching

On Tue, May 18, 2021 at 07:57:16AM +0100, Paul Durrant wrote:
> On 17/05/2021 22:43, Marek Marczykowski-Górecki wrote:
> > On Tue, May 11, 2021 at 12:46:38PM +0000, Durrant, Paul wrote:
> > > I really can't remember any detail. Perhaps try reverting both patches
> > > then and check that the unbind/rmmod/modprobe/bind sequence still works
> > > (and the backend actually makes it into connected state).
> >
> > Ok, I've tried this. I've reverted both commits, then used your test
> > script from the 9476654bd5e8ad42abe8ee9f9e90069ff8e60c17:
> >      This has been tested by running iperf as a server in the test VM and
> >      then running a client against it in a continuous loop, whilst also
> >      running:
> >      while true;
> >        do echo vif-$DOMID-$VIF >unbind;
> >        echo down;
> >        rmmod xen-netback;
> >        echo unloaded;
> >        modprobe xen-netback;
> >        cd $(pwd);
> >        brctl addif xenbr0 vif$DOMID.$VIF;
> >        ip link set vif$DOMID.$VIF up;
> >        echo up;
> >        sleep 5;
> >        done
> >      in dom0 from /sys/bus/xen-backend/drivers/vif to continuously unbind,
> >      unload, re-load, re-bind and re-plumb the backend.
> > In fact, the need to call `brctl` and `ip link` manually is exactly
> > because the hotplug script isn't executed. When I execute it manually,
> > the backend properly gets back to working. So, removing 'hotplug-status'
> > was in the correct place (netback_remove). The missing part is the toolstack
> > calling the hotplug script on xen-netback re-bind.
> >
>
> Why is that missing? We're going behind the back of the toolstack to do the
> unbind and bind so why should we expect it to re-execute a hotplug script?

Ok, then simply execute the whole hotplug script (instead of its subset)
after re-loading the backend module and everything will be fine.

For example like this:
    XENBUS_PATH=backend/vif/$DOMID/$VIF \
    XENBUS_TYPE=vif \
    XENBUS_BASE_PATH=backend \
    script=/etc/xen/scripts/vif-bridge \
    vif=vif.$DOMID.$VIF \
    /etc/xen/scripts/vif-bridge online

(...)

> > In short: if device gets XenbusStateInitWait for the first time (ddev ==
> > NULL case), it goes to add_device() which executes the hotplug script
> > and stores the device.
> > Then, if device goes to XenbusStateClosed + online==0 state, then it
> > executes hotplug script again (with "offline" parameter) and forgets the
> > device. If you unbind the driver, the device stays in
> > XenbusStateConnected state (in xenstore), and after you bind it again,
> > it goes to XenbusStateInitWait. I don't think it goes through
> > XenbusStateClosed, and online stays at 1 too, so libxl doesn't execute
> > the hotplug script again.
>
> This is pretty key. The frontend should not notice an unbind/bind i.e. there
> should be no evidence of it happening by examining states in xenstore (from
> the guest side).

If you update the backend module, I think the frontend needs at least to
re-evaluate feature-* nodes. In case of applying just a bug fix, they
should not change (in theory), but technically that would be the correct
thing to do.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--f87/K/rmaC2eIhg2
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCjhmsACgkQ24/THMrX
1yx8oAf/XCZOI2Ckmb3Ii8u0x2jWgKb9lUQJOhjXf1KjxF5xkUaM0EGGflO20D3h
VuobCUFcsEsrjBqJkaKT3mST0yYVyQzQhGerIVEn46UulxekbclZUCfhVylqi4ft
epRzNdTuENg9Rdsb5j7DL2/pq/LVTdOdK5r0En8vXE903YK6ylYj1zlnAl4L5hGP
kDQpRXZZpvvGzCynS6QrIsN3amJY4i+gq4C/WHZzzVBQPbFy2rnDTjlCljaBfd0v
/xA4eoYvJpcg6ia1O5JAKG34sD9I9PeFsY/A+6shzghDe+5L5R/CBCBwU4XNVKg6
wJPOcMJyZijDHT1jacYGybor/WxeoA==
=gEzQ
-----END PGP SIGNATURE-----

--f87/K/rmaC2eIhg2--


From xen-devel-bounces@lists.xenproject.org Tue May 18 09:30:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 09:30:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128985.242119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liw3x-00066E-Hs; Tue, 18 May 2021 09:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128985.242119; Tue, 18 May 2021 09:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liw3x-000667-Ee; Tue, 18 May 2021 09:30:53 +0000
Received: by outflank-mailman (input) for mailman id 128985;
 Tue, 18 May 2021 09:30:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1liw3w-000661-4e
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 09:30:52 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.40]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 275d0a9a-e116-4e79-82ce-b912dbe153ba;
 Tue, 18 May 2021 09:30:49 +0000 (UTC)
Received: from AS8PR04CA0177.eurprd04.prod.outlook.com (2603:10a6:20b:331::32)
 by AM6PR08MB3701.eurprd08.prod.outlook.com (2603:10a6:20b:8b::31)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Tue, 18 May
 2021 09:30:47 +0000
Received: from AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:331:cafe::cf) by AS8PR04CA0177.outlook.office365.com
 (2603:10a6:20b:331::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 09:30:47 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT055.mail.protection.outlook.com (10.152.17.214) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 09:30:47 +0000
Received: ("Tessian outbound 0f1e4509c199:v92");
 Tue, 18 May 2021 09:30:46 +0000
Received: from c3fab964e81d.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 E85E35FE-896D-454C-8150-A5C204B8ECAC.1; 
 Tue, 18 May 2021 09:30:36 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id c3fab964e81d.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 09:30:36 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB4368.eurprd08.prod.outlook.com (2603:10a6:803:fe::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Tue, 18 May
 2021 09:30:33 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Tue, 18 May 2021
 09:30:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 275d0a9a-e116-4e79-82ce-b912dbe153ba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8Fu3GR2rcCmIzIx/09j5PZZGbEugwR4xo996xg3Au1g=;
 b=dd1GglZ1BpENjNIBOCDj6aOqaPm9gvvgWtIQfCnoR3vPzrJ+P38Y0kO945+5OWbPEovKbSzavXEtMkTYzkUwoQ4rwozVC+ieWCWep2uK3K8/ZpfMQmfbySPIEzUYmnHnYH5M3D+Q0gAn3H2qiMIdRV3k192P/6NVNAj5nWxQUCg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
Thread-Topic: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
Thread-Index: AQHXS6W1I2rvO8FRzkm++uFejE9ZNKro1k+AgAAeO2A=
Date: Tue, 18 May 2021 09:30:33 +0000
Message-ID:
 <VE1PR08MB5215F5A28F5B08A2F4950DB9F72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-6-penny.zheng@arm.com>
 <a890200d-b75b-dd59-5d13-b0b211a58da5@suse.com>
In-Reply-To: <a890200d-b75b-dd59-5d13-b0b211a58da5@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 1BC433EC98280248BAFDECA55250C11F.0
x-checkrecipientchecked: true
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: bfc92ea8-1a1c-47f0-de14-08d919df9e4d
x-ms-traffictypediagnostic: VI1PR08MB4368:|AM6PR08MB3701:
x-ms-exchange-transport-forked: True
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:551;OLM:551;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4368
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	310d69b7-df74-40fd-287c-08d919df9611
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 09:30:47.3130
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bfc92ea8-1a1c-47f0-de14-08d919df9e4d
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT055.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3701

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:24 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > pages of static memory. And it is the equivalent of alloc_heap_pages
> > for static memory.
> > This commit only covers allocating at specified starting address.
> >
> > For each page, it shall check if the page is reserved
> > (PGC_reserved) and free. It shall also do a set of necessary
> > initialization, which are mostly the same ones in alloc_heap_pages,
> > like, following the same cache-coherency policy and turning page
> > status into PGC_state_used, etc.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >  xen/common/page_alloc.c | 64
> > ++++++++++++++++++++++++++++++++++++++++++++++
> >  1 file changed, 64 insertions(+)
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > 58b53c6ac2..adf2889e76 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> >      return pg;
> >  }
> >
> > +/*
> > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > + * It is the equivalent of alloc_heap_pages for static memory  */
> > +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,
> > +                                               paddr_t start,
> > +                                               unsigned int
> > +memflags)
> 
> This is surely breaking the build (at this point in the series - recall that a series
> should build fine at every patch boundary), for introducing an unused static
> function, which most compilers will warn about.
>

Sure, I'll combine it with other commits

> Also again - please avoid introducing code that's always dead for certain
> architectures. Quite likely you want a Kconfig option to put a suitable #ifdef
> around such functions.
> 

Sure, sorry for all the missing #ifdefs.

> And a nit: Please correct the apparently off-by-one indentation.
>

Sure, I'll check through the code more carefully.

> > +{
> > +    bool need_tlbflush = false;
> > +    uint32_t tlbflush_timestamp = 0;
> > +    unsigned int i;
> 
> This variable's type should (again) match nr_pfns'es (albeit I think that
> parameter really wants to be nr_mfns).
> 

Correct if I understand you wrongly, you mean that parameters in alloc_staticmem_pages
are better be named after unsigned long nr_mfns, right?

> > +    struct page_info *pg;
> > +    mfn_t s_mfn;
> > +
> > +    /* For now, it only supports allocating at specified address. */
> > +    s_mfn = maddr_to_mfn(start);
> > +    pg = mfn_to_page(s_mfn);
> > +    if ( !pg )
> > +        return NULL;
> 
> Under what conditions would mfn_to_page() return NULL?

Right, my mistake.

>
> > +    for ( i = 0; i < nr_pfns; i++)
> > +    {
> > +        /*
> > +         * Reference count must continuously be zero for free pages
> > +         * of static memory(PGC_reserved).
> > +         */
> > +        ASSERT(pg[i].count_info & PGC_reserved);
> > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > +        {
> > +            printk(XENLOG_ERR
> > +                    "Reference count must continuously be zero for free pages"
> > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > +                    i, mfn_x(page_to_mfn(pg + i)),
> > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> 
> Nit: Indentation again.
>

Thx

> > +            BUG();
> > +        }
> > +
> > +        if ( !(memflags & MEMF_no_tlbflush) )
> > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > +                                &tlbflush_timestamp);
> > +
> > +        /*
> > +         * Reserve flag PGC_reserved and change page state
> 
> DYM "Preserve ..."?
> 

Sure, thx

> > +         * to PGC_state_inuse.
> > +         */
> > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) |
> PGC_state_inuse;
> > +        /* Initialise fields which have other uses for free pages. */
> > +        pg[i].u.inuse.type_info = 0;
> > +        page_set_owner(&pg[i], NULL);
> > +
> > +        /*
> > +         * Ensure cache and RAM are consistent for platforms where the
> > +         * guest can control its own visibility of/through the cache.
> > +         */
> > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > +                          !(memflags & MEMF_no_icache_flush));
> > +    }
> > +
> > +    if ( need_tlbflush )
> > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> 
> With reserved pages dedicated to a specific domain, in how far is it possible
> that stale mappings from a prior use can still be around, making such TLB
> flushing necessary?
> 

Yes, you're right.

> Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 09:45:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 09:45:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.128995.242135 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwIG-0007lB-5C; Tue, 18 May 2021 09:45:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 128995.242135; Tue, 18 May 2021 09:45:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwIG-0007l4-28; Tue, 18 May 2021 09:45:40 +0000
Received: by outflank-mailman (input) for mailman id 128995;
 Tue, 18 May 2021 09:45:38 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liwIE-0007kw-Sv
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 09:45:38 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwIE-0007xr-Mw; Tue, 18 May 2021 09:45:38 +0000
Received: from [54.239.6.190] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwIE-0006WW-HA; Tue, 18 May 2021 09:45:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=s+1ASYDTgKbn0iWzEQV8ALRZyiGQ0s1pFIA8eajwG8s=; b=KOlehsTrTQ1iwT/AdUByONkJhn
	ybzWLipvsHkD7MfNDs7tcpizWlg77b3mZMvvYGwJ4LWMCEABV/XCkBSSW0CPPKLKqWIBbskpAye40
	Zryg8Yaocap6QVuXYPGkcXlSznkfUKD+WGZI8s5XoZAs7HM9FbmjajEOCj5Fm2QqdbN4=;
Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
Date: Tue, 18 May 2021 10:45:36 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-4-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 18/05/2021 06:21, Penny Zheng wrote:
> In order to differentiate pages of static memory, from those allocated from
> heap, this patch introduces a new page flag PGC_reserved to tell.
> 
> New struct reserved in struct page_info is to describe reserved page info,
> that is, which specific domain this page is reserved to.
>
> Helper page_get_reserved_owner and page_set_reserved_owner are
> designated to get/set reserved page's owner.
> 
> Struct domain is enlarged to more than PAGE_SIZE, due to newly-imported
> struct reserved in struct page_info.

struct domain may embed a pointer to a struct page_info but never 
directly embed the structure. So can you clarify what you mean?

> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/include/asm-arm/mm.h | 16 +++++++++++++++-
>   1 file changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> index 0b7de3102e..d8922fd5db 100644
> --- a/xen/include/asm-arm/mm.h
> +++ b/xen/include/asm-arm/mm.h
> @@ -88,7 +88,15 @@ struct page_info
>            */
>           u32 tlbflush_timestamp;
>       };
> -    u64 pad;
> +
> +    /* Page is reserved. */
> +    struct {
> +        /*
> +         * Reserved Owner of this page,
> +         * if this page is reserved to a specific domain.
> +         */
> +        struct domain *domain;
> +    } reserved;

The space in page_info is quite tight, so I would like to avoid 
introducing new fields unless we can't get away from it.

In this case, it is not clear why we need to differentiate the "Owner" 
vs the "Reserved Owner". It might be clearer if this change were folded 
into the first user of the field.

As an aside, for 32-bit Arm, you need to add a 4-byte padding.

>   };
>   
>   #define PG_shift(idx)   (BITS_PER_LONG - (idx))
> @@ -108,6 +116,9 @@ struct page_info
>     /* Page is Xen heap? */
>   #define _PGC_xen_heap     PG_shift(2)
>   #define PGC_xen_heap      PG_mask(1, 2)
> +  /* Page is reserved, referring static memory */

I would drop the second part of the sentence because the flag could be 
used for other purposes. One example is reserved memory when Live Updating.

> +#define _PGC_reserved     PG_shift(3)
> +#define PGC_reserved      PG_mask(1, 3)
>   /* ... */
>   /* Page is broken? */
>   #define _PGC_broken       PG_shift(7)
> @@ -161,6 +172,9 @@ extern unsigned long xenheap_base_pdx;
>   #define page_get_owner(_p)    (_p)->v.inuse.domain
>   #define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
>   
> +#define page_get_reserved_owner(_p)    (_p)->reserved.domain
> +#define page_set_reserved_owner(_p,_d) ((_p)->reserved.domain = (_d))
> +
>   #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
>   
>   #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 09:51:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 09:51:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129001.242147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwO4-0000iE-R2; Tue, 18 May 2021 09:51:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129001.242147; Tue, 18 May 2021 09:51:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwO4-0000i7-Nt; Tue, 18 May 2021 09:51:40 +0000
Received: by outflank-mailman (input) for mailman id 129001;
 Tue, 18 May 2021 09:51:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2je3=KN=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1liwO4-0000hz-3Q
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 09:51:40 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.81]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 737d3679-f0fa-49aa-ab02-cd4949aaf8e8;
 Tue, 18 May 2021 09:51:38 +0000 (UTC)
Received: from DB8PR04CA0025.eurprd04.prod.outlook.com (2603:10a6:10:110::35)
 by DBAPR08MB5717.eurprd08.prod.outlook.com (2603:10a6:10:1ae::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Tue, 18 May
 2021 09:51:37 +0000
Received: from DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:110:cafe::a4) by DB8PR04CA0025.outlook.office365.com
 (2603:10a6:10:110::35) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 09:51:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT016.mail.protection.outlook.com (10.152.20.141) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 09:51:36 +0000
Received: ("Tessian outbound 6c8a2be3c2e7:v92");
 Tue, 18 May 2021 09:51:36 +0000
Received: from bcae2406f976.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 64F1C447-83C4-4A9B-8B73-B9DC90B96E7F.1; 
 Tue, 18 May 2021 09:51:26 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id bcae2406f976.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 09:51:26 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VE1PR08MB5581.eurprd08.prod.outlook.com (2603:10a6:800:1a0::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 09:51:25 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Tue, 18 May 2021
 09:51:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 737d3679-f0fa-49aa-ab02-cd4949aaf8e8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ACcCiPpJf1X/XpSp4DB6Km0XuczRENrQcuj3msIh8v0=;
 b=RttbBikGk4qknTM+9xnS+drlyMoHYsSErXkaC2MBjGbJ0uRbZicCRbUSx1jBRG8GwHT//piqpk0CNdsqj6hmYcgxtyaQxu8j7bi2aRIohnG91JH8hPQnJpzr2MQ4w7zYS8LAg60zdpUh2lPTT5X6FjGI3ZMUV+QWjhpKyElKJWQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 04/10] xen/arm: static memory initialization
Thread-Topic: [PATCH 04/10] xen/arm: static memory initialization
Thread-Index: AQHXS6WzoKPo0e3K4EKgzRPDOGKP8Kro0+kAgAAm7iA=
Date: Tue, 18 May 2021 09:51:25 +0000
Message-ID:
 <VE1PR08MB5215E7203960F535BC857F5CF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-5-penny.zheng@arm.com>
 <dbffa647-37e2-93b6-4041-a1344aeb1837@suse.com>
In-Reply-To: <dbffa647-37e2-93b6-4041-a1344aeb1837@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 3C665C90F9340D4AB36D95639ACD7A34.0
x-checkrecipientchecked: true
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: bc8ee6ce-7786-482b-9119-08d919e2871e
x-ms-traffictypediagnostic: VE1PR08MB5581:|DBAPR08MB5717:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DBAPR08MB5717E30B5485659C2479D8D2F72C9@DBAPR08MB5717.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5581
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	0d4151f5-cb57-477b-e23b-08d919e28036
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 09:51:36.9796
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: bc8ee6ce-7786-482b-9119-08d919e2871e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT016.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBAPR08MB5717

Hi Jan,

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 3:16 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
> 
> On 18.05.2021 07:21, Penny Zheng wrote:
> > This patch introduces static memory initialization, during system RAM boot
> up.
> >
> > New func init_staticmem_pages is the equivalent of init_heap_pages,
> > responsible for static memory initialization.
> >
> > Helper func free_staticmem_pages is the equivalent of free_heap_pages,
> > to free nr_pfns pages of static memory.
> > For each page, it includes the following steps:
> > 1. change page state from in-use(also initialization state) to free
> > state and grant PGC_reserved.
> > 2. set its owner NULL and make sure this page is not a guest frame any
> > more
> 
> But isn't the goal (as per the previous patch) to associate such pages with a
> _specific_ domain?
> 

free_staticmem_pages is like free_heap_pages: it is not used only for
initialization; freeing used pages back to the unused state is also covered.
Here, setting the owner to NULL means clearing the owner in the page's "used"
field. Still, I need to add more explanation here.

> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -150,6 +150,9 @@
> >  #define p2m_pod_offline_or_broken_hit(pg) 0  #define
> > p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)  #endif
> > +#ifdef CONFIG_ARM_64
> > +#include <asm/setup.h>
> > +#endif
> 
> Whatever it is that's needed from this header suggests the code won't build
> for other architectures. I think init_staticmem_pages() in its current shape
> shouldn't live in this (common) file.
> 

Yes, I should put them both under one specific config option, maybe something
like CONFIG_STATIC_MEM, with this config being arm-specific.

> > @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >      spin_unlock(&heap_lock);
> >  }
> >
> > +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> > +memory. */ static void free_staticmem_pages(struct page_info *pg,
> unsigned long nr_pfns,
> > +                                 bool need_scrub)
> 
> Right now this function gets called only from an __init one. Unless it is
> intended to gain further callers, it should be marked __init itself then.
> Otherwise it should be made sure that other architectures don't include this
> (dead there) code.
> 

Sure, I'll add __init. Thanks.

> > +{
> > +    mfn_t mfn = page_to_mfn(pg);
> > +    int i;
> 
> This type doesn't fit nr_pfns'es.
> 

Sure, nr_mfns is better in many other places as well.

> Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:00:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:00:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129008.242164 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwWi-0002Ia-S1; Tue, 18 May 2021 10:00:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129008.242164; Tue, 18 May 2021 10:00:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwWi-0002IT-Op; Tue, 18 May 2021 10:00:36 +0000
Received: by outflank-mailman (input) for mailman id 129008;
 Tue, 18 May 2021 10:00:35 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liwWh-0002IN-Q5
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:00:35 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwWh-0008J7-K4; Tue, 18 May 2021 10:00:35 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwWh-0007jJ-EC; Tue, 18 May 2021 10:00:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=drkvHIk96+hlAk0nPZOZ/eVvPQ0bE0nhr0+Q1JfNjOg=; b=lUnH5mbQz0IZRN/rKiRhSm4FfJ
	INEysNrOL613+IGqTOMPINYfgSvjpZFbDrs4qECmkZyZwG9FPw9uyJcdcqr8t+pmOSNHeYo14lNCM
	AP6tuc/UmbRWprb0lMPiuGVvWBgsN6PcevA2o9xeyQmZw/7D+bPhX/2YmBVoMQod7neo=;
Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-5-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d492ca1a-b9d6-6250-750c-7f511b183735@xen.org>
Date: Tue, 18 May 2021 11:00:33 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-5-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> This patch introduces static memory initialization, during system RAM boot up.
> 
> New func init_staticmem_pages is the equivalent of init_heap_pages, responsible
> for static memory initialization.
> 
> Helper func free_staticmem_pages is the equivalent of free_heap_pages, to free
> nr_pfns pages of static memory.
> For each page, it includes the following steps:
> 1. change page state from in-use(also initialization state) to free state and
> grant PGC_reserved.
> 2. set its owner NULL and make sure this page is not a guest frame any more
> 3. follow the same cache coherency policy in free_heap_pages
> 4. scrub the page in need
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/setup.c    |  2 ++
>   xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
>   xen/include/xen/mm.h    |  3 ++
>   3 files changed, 75 insertions(+)
> 
> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
> index 444dbbd676..f80162c478 100644
> --- a/xen/arch/arm/setup.c
> +++ b/xen/arch/arm/setup.c
> @@ -818,6 +818,8 @@ static void __init setup_mm(void)
>   
>       setup_frametable_mappings(ram_start, ram_end);
>       max_page = PFN_DOWN(ram_end);
> +
> +    init_staticmem_pages();
>   }
>   #endif
>   
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index ace6333c18..58b53c6ac2 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -150,6 +150,9 @@
>   #define p2m_pod_offline_or_broken_hit(pg) 0
>   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>   #endif
> +#ifdef CONFIG_ARM_64
> +#include <asm/setup.h>
> +#endif
>   
>   /*
>    * Comma-separated list of hexadecimal page numbers containing bad bytes.
> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>       spin_unlock(&heap_lock);
>   }
>   
> +/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
> +static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,

This function is dealing with MFNs, so the second parameter should be 
called nr_mfns.

> +                                 bool need_scrub)
> +{
> +    mfn_t mfn = page_to_mfn(pg);
> +    int i;
> +
> +    for ( i = 0; i < nr_pfns; i++ )
> +    {
> +        switch ( pg[i].count_info & PGC_state )
> +        {
> +        case PGC_state_inuse:
> +            BUG_ON(pg[i].count_info & PGC_broken);
> +            /* Make it free and reserved. */
> +            pg[i].count_info = PGC_state_free | PGC_reserved;
> +            break;
> +
> +        default:
> +            printk(XENLOG_ERR
> +                   "Page state shall be only in PGC_state_inuse. "
> +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx tlbflush_timestamp=%#x.\n",
> +                   i, mfn_x(mfn) + i,
> +                   pg[i].count_info,
> +                   pg[i].tlbflush_timestamp);
> +            BUG();
> +        }
> +
> +        /*
> +         * Follow the same cache coherence scheme in free_heap_pages.
> +         * If a page has no owner it will need no safety TLB flush.
> +         */
> +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
> +        if ( pg[i].u.free.need_tlbflush )
> +            page_set_tlbflush_timestamp(&pg[i]);
> +
> +        /* This page is not a guest frame any more. */
> +        page_set_owner(&pg[i], NULL);
> +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);

The code looks quite similar to free_heap_pages(). Could we possibly 
create a helper which can be called from both?

> +
> +        if ( need_scrub )
> +            scrub_one_page(&pg[i]);

So the scrubbing will be synchronous. Is it what we want?

You also seem to miss the call to flush_page_to_ram().

> +    }
> +}
>   
>   /*
>    * Following rules applied for page offline:
> @@ -1828,6 +1874,30 @@ static void init_heap_pages(
>       }
>   }
>   
> +/* Equivalent of init_heap_pages to do static memory initialization */
> +void __init init_staticmem_pages(void)
> +{
> +    int bank;
> +
> +    /*
> +     * TODO: Considering NUMA-support scenario.
> +     */
> +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )

bootinfo is arm specific, so this code should live in arch/arm rather 
than common/.

> +    {
> +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
> +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
> +        paddr_t bank_end = bank_start + bank_size;
> +
> +        bank_start = round_pgup(bank_start);
> +        bank_end = round_pgdown(bank_end);
> +        if ( bank_end <= bank_start )
> +            return;
> +
> +        free_staticmem_pages(maddr_to_page(bank_start),
> +                            (bank_end - bank_start) >> PAGE_SHIFT, false);
> +    }
> +}
> +
>   static unsigned long avail_heap_pages(
>       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
>   {
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index 667f9dac83..8b1a2207b2 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
>   } while ( false )
>   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>   
> +/* Static Memory */
> +void init_staticmem_pages(void);
> +
>   /* Map machine page range in Xen virtual address space. */
>   int map_pages_to_xen(
>       unsigned long virt,
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:02:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:02:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129014.242175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwY6-0002xR-BJ; Tue, 18 May 2021 10:02:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129014.242175; Tue, 18 May 2021 10:02:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwY6-0002xK-7I; Tue, 18 May 2021 10:02:02 +0000
Received: by outflank-mailman (input) for mailman id 129014;
 Tue, 18 May 2021 10:02:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liwY5-0002xE-8F
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:02:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwY5-0008Lq-59; Tue, 18 May 2021 10:02:01 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwY4-0007qe-Vm; Tue, 18 May 2021 10:02:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=AWeXZwlo24QjDn/2E3cEo2cH1U2LwQhlkCAKy875CzU=; b=KLuWPjNnEK1A3ZKqqOaGwsliUa
	/5KxIFQUH8dycqFMvkzC0ExcMsU4yTuwnKLDYsxujtETCT+8wXuRDPDLHiPxTO3oZjt1QoBVe2rnp
	CAGWERM6SBjEaiekRSemM9nQVURsIMxWhTeuNBUaPXbDVSB3tVZ/kC/ySlsK77Us5wC4=;
Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
From: Julien Grall <julien@xen.org>
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-5-penny.zheng@arm.com>
 <d492ca1a-b9d6-6250-750c-7f511b183735@xen.org>
Message-ID: <f28b6b77-3891-d5ab-7a4a-a8dbac643be1@xen.org>
Date: Tue, 18 May 2021 11:01:59 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <d492ca1a-b9d6-6250-750c-7f511b183735@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 18/05/2021 11:00, Julien Grall wrote:
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
>> This patch introduces static memory initialization, during system RAM 
>> boot up.
>>
>> New func init_staticmem_pages is the equivalent of init_heap_pages, 
>> responsible
>> for static memory initialization.
>>
>> Helper func free_staticmem_pages is the equivalent of free_heap_pages, 
>> to free
>> nr_pfns pages of static memory.
>> For each page, it includes the following steps:
>> 1. change page state from in-use(also initialization state) to free 
>> state and
>> grant PGC_reserved.
>> 2. set its owner NULL and make sure this page is not a guest frame any 
>> more
>> 3. follow the same cache coherency policy in free_heap_pages
>> 4. scrub the page in need
>>
>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>> ---
>>   xen/arch/arm/setup.c    |  2 ++
>>   xen/common/page_alloc.c | 70 +++++++++++++++++++++++++++++++++++++++++
>>   xen/include/xen/mm.h    |  3 ++
>>   3 files changed, 75 insertions(+)
>>
>> diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c
>> index 444dbbd676..f80162c478 100644
>> --- a/xen/arch/arm/setup.c
>> +++ b/xen/arch/arm/setup.c
>> @@ -818,6 +818,8 @@ static void __init setup_mm(void)
>>       setup_frametable_mappings(ram_start, ram_end);
>>       max_page = PFN_DOWN(ram_end);
>> +
>> +    init_staticmem_pages();
>>   }
>>   #endif
>> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
>> index ace6333c18..58b53c6ac2 100644
>> --- a/xen/common/page_alloc.c
>> +++ b/xen/common/page_alloc.c
>> @@ -150,6 +150,9 @@
>>   #define p2m_pod_offline_or_broken_hit(pg) 0
>>   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
>>   #endif
>> +#ifdef CONFIG_ARM_64
>> +#include <asm/setup.h>
>> +#endif
>>   /*
>>    * Comma-separated list of hexadecimal page numbers containing bad 
>> bytes.
>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>>       spin_unlock(&heap_lock);
>>   }
>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static 
>> memory. */
>> +static void free_staticmem_pages(struct page_info *pg, unsigned long 
>> nr_pfns,
> 
> This function is dealing with MFNs, so the second parameter should be 
> called nr_mfns.
> 
>> +                                 bool need_scrub)
>> +{
>> +    mfn_t mfn = page_to_mfn(pg);
>> +    int i;
>> +
>> +    for ( i = 0; i < nr_pfns; i++ )
>> +    {
>> +        switch ( pg[i].count_info & PGC_state )
>> +        {
>> +        case PGC_state_inuse:
>> +            BUG_ON(pg[i].count_info & PGC_broken);
>> +            /* Make it free and reserved. */
>> +            pg[i].count_info = PGC_state_free | PGC_reserved;
>> +            break;
>> +
>> +        default:
>> +            printk(XENLOG_ERR
>> +                   "Page state shall be only in PGC_state_inuse. "
>> +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx 
>> tlbflush_timestamp=%#x.\n",
>> +                   i, mfn_x(mfn) + i,
>> +                   pg[i].count_info,
>> +                   pg[i].tlbflush_timestamp);
>> +            BUG();
>> +        }
>> +
>> +        /*
>> +         * Follow the same cache coherence scheme in free_heap_pages.
>> +         * If a page has no owner it will need no safety TLB flush.
>> +         */
>> +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
>> +        if ( pg[i].u.free.need_tlbflush )
>> +            page_set_tlbflush_timestamp(&pg[i]);
>> +
>> +        /* This page is not a guest frame any more. */
>> +        page_set_owner(&pg[i], NULL);
>> +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
> 
> The code looks quite similar to free_heap_pages(). Could we possibly 
> create a helper which can be called from both?
> 
>> +
>> +        if ( need_scrub )
>> +            scrub_one_page(&pg[i]);
> 
> So the scrubbing will be synchronous. Is it what we want?
> 
> You also seem to miss the call to flush_page_to_ram().

Hmmmm... Sorry I looked at the wrong function. This is not necessary for 
the free part.

> 
>> +    }
>> +}
>>   /*
>>    * Following rules applied for page offline:
>> @@ -1828,6 +1874,30 @@ static void init_heap_pages(
>>       }
>>   }
>> +/* Equivalent of init_heap_pages to do static memory initialization */
>> +void __init init_staticmem_pages(void)
>> +{
>> +    int bank;
>> +
>> +    /*
>> +     * TODO: Considering NUMA-support scenario.
>> +     */
>> +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
> 
> bootinfo is arm specific, so this code should live in arch/arm rather 
> than common/.
> 
>> +    {
>> +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
>> +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
>> +        paddr_t bank_end = bank_start + bank_size;
>> +
>> +        bank_start = round_pgup(bank_start);
>> +        bank_end = round_pgdown(bank_end);
>> +        if ( bank_end <= bank_start )
>> +            return;
>> +
>> +        free_staticmem_pages(maddr_to_page(bank_start),
>> +                            (bank_end - bank_start) >> PAGE_SHIFT, 
>> false);
>> +    }
>> +}
>> +
>>   static unsigned long avail_heap_pages(
>>       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
>>   {
>> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
>> index 667f9dac83..8b1a2207b2 100644
>> --- a/xen/include/xen/mm.h
>> +++ b/xen/include/xen/mm.h
>> @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
>>   } while ( false )
>>   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
>> +/* Static Memory */
>> +void init_staticmem_pages(void);
>> +
>>   /* Map machine page range in Xen virtual address space. */
>>   int map_pages_to_xen(
>>       unsigned long virt,
>>
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:10:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:10:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129022.242189 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwfl-0003ki-5T; Tue, 18 May 2021 10:09:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129022.242189; Tue, 18 May 2021 10:09:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwfl-0003kb-2R; Tue, 18 May 2021 10:09:57 +0000
Received: by outflank-mailman (input) for mailman id 129022;
 Tue, 18 May 2021 10:09:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liwfk-0003kV-OD
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:09:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwfk-0008T6-Il; Tue, 18 May 2021 10:09:56 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwfk-0008Nc-Cq; Tue, 18 May 2021 10:09:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=+WmblekLkivM2UjT70d4R92sZ3gx7s/ukyceI63fapA=; b=nGkc9rkqZriuRn6A93VKLLDWAc
	WmkeoiJQtBGeC+qTSUgPx1DnBwF8FBYm3zxyRFsgn0NH2/k9KmauMKFK3m7AzpOU20KyI2tVHVXlY
	PgDVcihPapKYuvI2Gkd87KBqHdWP9/FfLbeBrrxd4Un+7q8oSuguVv6tlxSqKUU4+R78=;
Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
To: Jan Beulich <jbeulich@suse.com>, Penny Zheng <penny.zheng@arm.com>
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com,
 xen-devel@lists.xenproject.org, sstabellini@kernel.org
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-6-penny.zheng@arm.com>
 <a890200d-b75b-dd59-5d13-b0b211a58da5@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7f36351e-fa08-ea74-cbc2-049ced7aac4e@xen.org>
Date: Tue, 18 May 2021 11:09:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <a890200d-b75b-dd59-5d13-b0b211a58da5@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 18/05/2021 08:24, Jan Beulich wrote:
> On 18.05.2021 07:21, Penny Zheng wrote:
>> +         * to PGC_state_inuse.
>> +         */
>> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
>> +        /* Initialise fields which have other uses for free pages. */
>> +        pg[i].u.inuse.type_info = 0;
>> +        page_set_owner(&pg[i], NULL);
>> +
>> +        /*
>> +         * Ensure cache and RAM are consistent for platforms where the
>> +         * guest can control its own visibility of/through the cache.
>> +         */
>> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
>> +                            !(memflags & MEMF_no_icache_flush));
>> +    }
>> +
>> +    if ( need_tlbflush )
>> +        filtered_flush_tlb_mask(tlbflush_timestamp);
> 
> With reserved pages dedicated to a specific domain, in how far is it
> possible that stale mappings from a prior use can still be around,
> making such TLB flushing necessary?

I would rather not make that assumption. I can see a future where we just 
want to allocate memory from a static pool that may be shared with 
multiple domains.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:15:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:15:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129030.242205 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwl2-0005DY-Ty; Tue, 18 May 2021 10:15:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129030.242205; Tue, 18 May 2021 10:15:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwl2-0005DR-Qx; Tue, 18 May 2021 10:15:24 +0000
Received: by outflank-mailman (input) for mailman id 129030;
 Tue, 18 May 2021 10:15:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liwl1-0005DL-MK
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:15:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwl1-00007d-GO; Tue, 18 May 2021 10:15:23 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwl1-0000N3-9u; Tue, 18 May 2021 10:15:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=1/88tc7pt7D9K/vJHj8cbqJZgUXseqadmmKl2VSgJcU=; b=aXDAbC3bEh6CIJnus3aAV9jJJ0
	TMxY56RbOEUKKYHcpoodkTSPbzo61guWAc2g7Qmp+Gxu2TprIdEdLSHSb3d/VpLtl4F7a/jCGBaB7
	OXOQrgPolvcPxmLtTW8dp+19mdP9ORi4EZ5ZkE8ztl+iU072Fwkqvicag7giHXPcfz7w=;
Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-6-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <e8e4148e-017b-955b-dd18-4576ce7c94ec@xen.org>
Date: Tue, 18 May 2021 11:15:21 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-6-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> pages of static memory. And it is the equivalent of alloc_heap_pages
> for static memory.
> This commit only covers allocating at specified starting address.
> 
> For each page, it shall check if the page is reserved
> (PGC_reserved) and free. It shall also do a set of necessary
> initialization, which are mostly the same ones in alloc_heap_pages,
> like, following the same cache-coherency policy and turning page
> status into PGC_state_used, etc.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/common/page_alloc.c | 64 +++++++++++++++++++++++++++++++++++++++++
>   1 file changed, 64 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 58b53c6ac2..adf2889e76 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
>       return pg;
>   }
>   
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> + * It is the equivalent of alloc_heap_pages for static memory
> + */
> +static struct page_info *alloc_staticmem_pages(unsigned long nr_pfns,

This wants to be nr_mfns.

> +                                                paddr_t start,

I would prefer if this helper takes an mfn_t in parameter.

> +                                                unsigned int memflags)
> +{
> +    bool need_tlbflush = false;
> +    uint32_t tlbflush_timestamp = 0;
> +    unsigned int i;
> +    struct page_info *pg;
> +    mfn_t s_mfn;
> +
> +    /* For now, it only supports allocating at specified address. */
> +    s_mfn = maddr_to_mfn(start);
> +    pg = mfn_to_page(s_mfn);

We should avoid making the assumption that the start address will be 
valid. So you want to call mfn_valid() first.

At the same time, there is no guarantee that, if the first page is valid, 
then the next nr_pfns pages will be. So the check should be performed for 
all of them.
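The per-page check being suggested could look something like the sketch below. The mfn_t definition, VALID_MFN_LIMIT and the helper bodies are illustrative stand-ins (in Xen the real mfn_valid()/mfn_add() come from xen/mm.h); only the loop structure is the point:

```c
#include <stdbool.h>

/* Illustrative stand-ins for Xen's mfn_t, mfn_valid() and mfn_add(). */
typedef struct { unsigned long m; } mfn_t;
#define VALID_MFN_LIMIT 0x1000UL
static bool mfn_valid(mfn_t mfn) { return mfn.m < VALID_MFN_LIMIT; }
static mfn_t mfn_add(mfn_t mfn, unsigned long i) { return (mfn_t){ mfn.m + i }; }

/*
 * Validate every frame in [smfn, smfn + nr_mfns), not just the first:
 * a valid first page does not imply the rest of the range is valid.
 */
static bool staticmem_range_valid(mfn_t smfn, unsigned long nr_mfns)
{
    for ( unsigned long i = 0; i < nr_mfns; i++ )
        if ( !mfn_valid(mfn_add(smfn, i)) )
            return false;
    return true;
}
```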

> +    if ( !pg )
> +        return NULL;
> +
> +    for ( i = 0; i < nr_pfns; i++)
> +    {
> +        /*
> +         * Reference count must continuously be zero for free pages
> +         * of static memory(PGC_reserved).
> +         */
> +        ASSERT(pg[i].count_info & PGC_reserved);
> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> +        {
> +            printk(XENLOG_ERR
> +                    "Reference count must continuously be zero for free pages"
> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> +                    i, mfn_x(page_to_mfn(pg + i)),
> +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> +            BUG();

So we would crash Xen if the caller pass a wrong range. Is it what we want?

Also, who is going to prevent concurrent access?
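One way to fail gracefully on a bad range rather than BUG()ing the hypervisor could be sketched as below. The flag bit positions and the pared-down struct page_info are made up for illustration; the real definitions live in Xen's mm headers:

```c
#include <stddef.h>

/* Illustrative flag values and a pared-down struct page_info. */
#define PGC_reserved   (1UL << 9)
#define PGC_state_free (1UL << 3)

struct page_info { unsigned long count_info; };

/*
 * Return NULL for a page that is not a free reserved page, letting the
 * caller unwind and report failure instead of crashing Xen on a wrong
 * caller-supplied range.
 */
static struct page_info *check_static_page(struct page_info *pg)
{
    if ( !(pg->count_info & PGC_reserved) ||
         (pg->count_info & ~PGC_reserved) != PGC_state_free )
        return NULL;
    return pg;
}
```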

> +        }
> +
> +        if ( !(memflags & MEMF_no_tlbflush) )
> +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> +                                &tlbflush_timestamp);
> +
> +        /*
> +         * Reserve flag PGC_reserved and change page state
> +         * to PGC_state_inuse.
> +         */
> +        pg[i].count_info = (pg[i].count_info & PGC_reserved) | PGC_state_inuse;
> +        /* Initialise fields which have other uses for free pages. */
> +        pg[i].u.inuse.type_info = 0;
> +        page_set_owner(&pg[i], NULL);
> +
> +        /*
> +         * Ensure cache and RAM are consistent for platforms where the
> +         * guest can control its own visibility of/through the cache.
> +         */
> +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> +                            !(memflags & MEMF_no_icache_flush));
> +    }
> +
> +    if ( need_tlbflush )
> +        filtered_flush_tlb_mask(tlbflush_timestamp);
> +
> +    return pg;
> +}
> +
>   /* Remove any offlined page in the buddy pointed to by head. */
>   static int reserve_offlined_page(struct page_info *head)
>   {
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:20:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:20:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129035.242217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwq3-0006Zy-LH; Tue, 18 May 2021 10:20:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129035.242217; Tue, 18 May 2021 10:20:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwq3-0006Zr-G4; Tue, 18 May 2021 10:20:35 +0000
Received: by outflank-mailman (input) for mailman id 129035;
 Tue, 18 May 2021 10:20:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liwq2-0006Zl-2G
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:20:34 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwq1-0000DD-RV; Tue, 18 May 2021 10:20:33 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwq1-0000eh-Lg; Tue, 18 May 2021 10:20:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=JuAqfttAr0cqJXlEstfQXvr8kBJVIZtkS87RGGDsMy8=; b=EwpvBuvHE/lEv02EmNX1fo69Le
	4PqJbOS1DGAAhLxVcK572WTcTeWKzPkOF8TZnY5B799kU2tmS82wsMqxpZpNzHdXw8UQVZT5MiQxl
	Qy+IJJKIPL09QsApTQqTgv6DFFkhvfH+CMvrG4kAWdCCALy74Tk94O3KgX/wCJwpzS0E=;
Subject: Re: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for
 better compatibility
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-7-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7dc01bcd-1570-82fa-5d15-11c28a857b3f@xen.org>
Date: Tue, 18 May 2021 11:20:31 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-7-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> Function parameter order in assign_pages is always used as 1ul << order,
> referring to 2@order pages.
> 
> Now, for better compatibility with new static memory, order shall
> be replaced with nr_pfns pointing to page count with no constraint,
> like 250MB.

We have a similar requirement for LiveUpdate because we are preserving 
memory as a number of pages (rather than a power of two). With the 
current interface we would need to split the range into powers of 2, 
which is a bit of a pain.

However, I think I would prefer if we introduce a new interface (maybe 
assign_pages_nr()) rather than change the meaning of the field. This is 
for two reasons:
   1) We limit the risk of making mistakes when backporting a patch 
touching assign_pages().
   2) Adding (1UL << order) for pretty much all the callers is not nice.
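The interface split could be sketched as follows. The signatures are heavily simplified (the real assign_pages() also takes a struct domain *, a page array and memflags), and the bookkeeping counter is a stand-in for the actual assignment work:

```c
/* Simplified bookkeeping stand-in for the real page-assignment work. */
static unsigned long pages_assigned;

/* New interface: an arbitrary page count, e.g. enough pages for 250MB. */
static int assign_pages_nr(unsigned long nr_pages)
{
    pages_assigned += nr_pages;
    return 0;
}

/* Existing interface keeps its current meaning: callers pass an order. */
static int assign_pages(unsigned int order)
{
    return assign_pages_nr(1UL << order);
}
```

This keeps every existing call site unchanged while letting the static-memory code pass a non-power-of-two count directly.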

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:30:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:30:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129042.242234 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwzU-00082n-Mk; Tue, 18 May 2021 10:30:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129042.242234; Tue, 18 May 2021 10:30:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liwzU-00082g-Ie; Tue, 18 May 2021 10:30:20 +0000
Received: by outflank-mailman (input) for mailman id 129042;
 Tue, 18 May 2021 10:30:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liwzT-00082a-Nr
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:30:19 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwzT-0000Nw-Iq; Tue, 18 May 2021 10:30:19 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liwzT-0001Oh-CP; Tue, 18 May 2021 10:30:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=RRj92iEerUBkZ4fQVZ1PLXfsaygjd/QklXf58ZdiFrE=; b=SH8BpndxuP9BJBf4oc8GaQrysj
	hjTsx7R1nSRGDOMiTyUxXQUqcmTH9QdNW6OmqTCpKn73X4aRHiIOYR8UHPJUMjbquKbqpIOELzqEw
	pf5EQjlv07aJEPeGZ+rsxLE8dAYKMyZ5x3l5nb+5SlrdQqWLVGBxgpfP4nULfgq+KHQA=;
Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d2d1c50f-16bb-778b-acdd-0684878c100f@xen.org>
Date: Tue, 18 May 2021 11:30:17 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-8-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

Title: s/intruduce/introduce/

On 18/05/2021 06:21, Penny Zheng wrote:
> alloc_domstatic_pages is the equivalent of alloc_domheap_pages for
> static memory, and it is to allocate nr_pfns pages of static memory
> and assign them to one specific domain.
> 
> It uses alloc_staticmem_pages to get nr_pages pages of static memory,
> then on success, it will use assign_pages to assign those pages to
> one specific domain, including using page_set_reserved_owner to set its
> reserved domain owner.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/common/page_alloc.c | 53 +++++++++++++++++++++++++++++++++++++++++
>   xen/include/xen/mm.h    |  4 ++++
>   2 files changed, 57 insertions(+)
> 
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index 0eb9f22a00..f1f1296a61 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2447,6 +2447,9 @@ int assign_pages(
>       {
>           ASSERT(page_get_owner(&pg[i]) == NULL);
>           page_set_owner(&pg[i], d);
> +        /* use page_set_reserved_owner to set its reserved domain owner. */
> +        if ( (pg[i].count_info & PGC_reserved) )
> +            page_set_reserved_owner(&pg[i], d);

I have skimmed through the rest of the series and couldn't find anyone 
calling page_get_reserved_owner(). The value is also going to be exactly 
the same as the one set by page_set_owner().

So why do we need it?

>           smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>           pg[i].count_info =
>               (pg[i].count_info & PGC_extra) | PGC_allocated | 1;

This will clobber PGC_reserved.
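The clobbering can be seen with illustrative flag values (the bit positions below are made up, not Xen's real ones): the assignment keeps only PGC_extra, so a possible fix would widen the preserved mask.

```c
#define PGC_extra     (1UL << 10)
#define PGC_reserved  (1UL << 9)
#define PGC_allocated (1UL << 11)

/* As in the patch: drops PGC_reserved along with every other flag. */
static unsigned long clobbering_update(unsigned long count_info)
{
    return (count_info & PGC_extra) | PGC_allocated | 1;
}

/* Possible fix: preserve PGC_reserved alongside PGC_extra. */
static unsigned long preserving_update(unsigned long count_info)
{
    return (count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;
}
```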

> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>       return pg;
>   }
>   
> +/*
> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory,

s/nr_pfns/nr_mfns/

> + * then assign them to one specific domain #d.
> + * It is the equivalent of alloc_domheap_pages for static memory.
> + */
> +struct page_info *alloc_domstatic_pages(
> +        struct domain *d, unsigned long nr_pfns, paddr_t start,

s/nr_pfns/nr_mfns/. Also, I would prefer the third parameter to be an mfn_t.

> +        unsigned int memflags)
> +{
> +    struct page_info *pg = NULL;
> +    unsigned long dma_size;
> +
> +    ASSERT(!in_irq());
> +
> +    if ( memflags & MEMF_no_owner )
> +        memflags |= MEMF_no_refcount;
> +
> +    if ( !dma_bitsize )
> +        memflags &= ~MEMF_no_dma;
> +    else
> +    {
> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> +        /* Starting address shall meet the DMA limitation. */
> +        if ( dma_size && start < dma_size )
> +            return NULL;
> +    }
> +
> +    pg = alloc_staticmem_pages(nr_pfns, start, memflags);
> +    if ( !pg )
> +        return NULL;
> +
> +    if ( d && !(memflags & MEMF_no_owner) )
> +    {
> +        if ( memflags & MEMF_no_refcount )
> +        {
> +            unsigned long i;
> +
> +            for ( i = 0; i < nr_pfns; i++ )
> +                pg[i].count_info = PGC_extra;
> +        }
> +        if ( assign_pages(d, pg, nr_pfns, memflags) )
> +        {
> +            free_staticmem_pages(pg, nr_pfns, memflags & MEMF_no_scrub);
> +            return NULL;
> +        }
> +    }
> +
> +    return pg;
> +}
> +
>   void free_domheap_pages(struct page_info *pg, unsigned int order)
>   {
>       struct domain *d = page_get_owner(pg);
> diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h
> index dcf9daaa46..e45987f0ed 100644
> --- a/xen/include/xen/mm.h
> +++ b/xen/include/xen/mm.h
> @@ -111,6 +111,10 @@ unsigned long __must_check domain_adjust_tot_pages(struct domain *d,
>   int domain_set_outstanding_pages(struct domain *d, unsigned long pages);
>   void get_outstanding_claims(uint64_t *free_pages, uint64_t *outstanding_pages);
>   
> +/* Static Memory */
> +struct page_info *alloc_domstatic_pages(struct domain *d,
> +        unsigned long nr_pfns, paddr_t start, unsigned int memflags);
> +
>   /* Domain suballocator. These functions are *not* interrupt-safe.*/
>   void init_domheap_pages(paddr_t ps, paddr_t pe);
>   struct page_info *alloc_domheap_pages(
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:42:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:42:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129049.242248 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixB4-00014x-QL; Tue, 18 May 2021 10:42:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129049.242248; Tue, 18 May 2021 10:42:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixB4-00014q-NH; Tue, 18 May 2021 10:42:18 +0000
Received: by outflank-mailman (input) for mailman id 129049;
 Tue, 18 May 2021 10:42:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ttlr=KN=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lixB3-00014k-DU
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:42:17 +0000
Received: from out1-smtp.messagingengine.com (unknown [66.111.4.25])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 373ad088-adaa-49b7-803f-4f692d7a1945;
 Tue, 18 May 2021 10:42:15 +0000 (UTC)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.nyi.internal (Postfix) with ESMTP id 04EE35C00CC;
 Tue, 18 May 2021 06:42:15 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Tue, 18 May 2021 06:42:15 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Tue, 18 May 2021 06:42:12 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 373ad088-adaa-49b7-803f-4f692d7a1945
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:in-reply-to
	:message-id:mime-version:references:subject:to:x-me-proxy
	:x-me-proxy:x-me-sender:x-me-sender:x-sasl-enc; s=fm2; bh=A1IP30
	HlocCCSSewwTTQKoI3s1gaJPVsV9LOgsd0CWU=; b=PDd2AlXO81dFZXzywDoRBh
	a63zAUk2V9q1iSsGg5X1gFq3efI4ALWqUkn4508HsyVIA5G94DTpbUkGnBl0fCQF
	G5ej9sK73N4brI2B2w2edoTXNKYjjLFlT39RTJp2dTpSAE+bHgX0pBdzw6KgHZwd
	TAnY5WG1SNX/KSs2lifgfPIOWeNWDqkse2xPnGhIDsU1506qk/S3ntKyswn6R7lP
	ITFPcGGVvCAsG5YoVkkooMHwSnhS8K2FvyuUhOdIMStteNld/tlBkl9Gv1Km2vXH
	tfOOGLac9Wci1w7H88ecY53DxGxZ8AtC/pWn2wBY4g26ooK7a8pZXMUfElBjz2fg
	==
X-ME-Sender: <xms:BpqjYFvNjlVWTJ4Vk5D7XAE3lpEWEfjS6el9uPIDEvT8Xl39_utgjA>
    <xme:BpqjYOdNJzy0IUMeBEaW8LKOekqSMH2KoneVIYPa5Cel72kg9h0hts-ycQYjUgd_W
    WaUF18js64t7A>
X-ME-Proxy-Cause: gggruggvucftvghtrhhoucdtuddrgeduledrvdeijedgfedtucetufdoteggodetrfdotf
    fvucfrrhhofhhilhgvmecuhfgrshhtofgrihhlpdfqfgfvpdfurfetoffkrfgpnffqhgen
    uceurghilhhouhhtmecufedttdenucesvcftvggtihhpihgvnhhtshculddquddttddmne
    cujfgurhepfffhvffukfhfgggtuggjsehgtderredttdejnecuhfhrohhmpeforghrvghk
    ucforghrtgiihihkohifshhkihdqifpkrhgvtghkihcuoehmrghrmhgrrhgvkhesihhnvh
    hishhisghlvghthhhinhhgshhlrggsrdgtohhmqeenucggtffrrghtthgvrhhnpeetveff
    iefghfekhffggeeffffhgeevieektedthfehveeiheeiiedtudegfeetffenucfkpheple
    durdeijedrjeelrdegnecuvehluhhsthgvrhfuihiivgeptdenucfrrghrrghmpehmrghi
    lhhfrhhomhepmhgrrhhmrghrvghksehinhhvihhsihgslhgvthhhihhnghhslhgrsgdrtg
    homh
X-ME-Proxy: <xmx:BpqjYIwNDgcxGaUt_ioEaRwdThfVwvp_T0jJqf72wietrjCbpBl2TQ>
    <xmx:BpqjYMP4GssPYn30Z3GaO2PxxRSyHVRvyJUKIme98PDkMRiVZ7gK9Q>
    <xmx:BpqjYF_hAmAd4Mr-OL69JtqbQzDvP0yucH6qvweNN-8tIcQpyfU7Ng>
    <xmx:B5qjYBbHVDNzO1Qw5mikBbtXDwnvS9eURlR4Sp-U7JNg-IuDWOQDwQ>
Date: Tue, 18 May 2021 12:42:09 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Michael Brown <mbrown@fensystems.co.uk>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching
Message-ID: <YKOaAjdO5H3dRTiK@mail-itl>
References: <df9e9a32b0294aee814eeb58d2d71edd@EX13D32EUC003.ant.amazon.com>
 <YJpfORXIgEaWlQ7E@mail-itl>
 <YJpgNvOmDL9SuRye@mail-itl>
 <9edd6873034f474baafd70b1df693001@EX13D32EUC003.ant.amazon.com>
 <YKLjoALdw4oKSZ04@mail-itl>
 <8b7a9cd5-3696-65c2-5656-a1c8eb174344@xen.org>
 <YKOGayhGghjfgNXZ@mail-itl>
 <887f9533f5c54bfabfbff7231eb99b08@EX13D32EUC003.ant.amazon.com>
 <YKOMpXwcnr9QiXy8@mail-itl>
 <2c23e102b6254e42877eb1e8fe68a4f7@EX13D32EUC003.ant.amazon.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="6EOIiVkRIQqWqOV/"
Content-Disposition: inline
In-Reply-To: <2c23e102b6254e42877eb1e8fe68a4f7@EX13D32EUC003.ant.amazon.com>


--6EOIiVkRIQqWqOV/
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Tue, 18 May 2021 12:42:09 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: "Durrant, Paul" <pdurrant@amazon.co.uk>
Cc: Michael Brown <mbrown@fensystems.co.uk>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"wei.liu@kernel.org" <wei.liu@kernel.org>,
	Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>
Subject: Re: [PATCH] xen-netback: Check for hotplug-status existence before
 watching

On Tue, May 18, 2021 at 09:48:25AM +0000, Durrant, Paul wrote:
> > -----Original Message-----
> > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > 
> > On Tue, May 18, 2021 at 09:34:45AM +0000, Durrant, Paul wrote:
> > > > -----Original Message-----
> > > > From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
> > > >
> > > > On Tue, May 18, 2021 at 07:57:16AM +0100, Paul Durrant wrote:
> > > > > Why is that missing? We're going behind the back of the toolstack to do the
> > > > > unbind and bind so why should we expect it to re-execute a hotplug script?
> > > >
> > > > Ok, then simply execute the whole hotplug script (instead of its subset)
> > > > after re-loading the backend module and everything will be fine.
> > > >
> > > > For example like this:
> > > >     XENBUS_PATH=backend/vif/$DOMID/$VIF \
> > > >     XENBUS_TYPE=vif \
> > > >     XENBUS_BASE_PATH=backend \
> > > >     script=/etc/xen/scripts/vif-bridge \
> > > >     vif=vif.$DOMID.$VIF \
> > > >     /etc/xen/scripts/vif-bridge online
> > > >
> > >
> > > ... as long as there's no xenstore fall-out that the guest can observe.
> > 
> > Backend will set state to XenbusStateInitWait on load anyway...
> > 
> 
> Oh, that sounds like a bug then... It ought to go straight to connected if the frontend is already there.

To me this sounds very suspicious. But if that's really what the backend
should do, then it would also "solve" the hotplug-status node issue.
See the end of the netback_probe() function.
But I think if you start processing traffic before the hotplug script
configures the interface (so, without switching to XenbusStateInitWait
and waiting for the hotplug-status node), you'll post some packets into a
not-yet-enabled interface, which I think will drop them (not queue them).
TCP will be fine with that, but many other protocols will not.

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab
Invisible Things Lab

--6EOIiVkRIQqWqOV/
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCjmgEACgkQ24/THMrX
1yzCmwf/QjWF5aAIE38gHNVk+1kC9JDgsgIoaXSFus+0rK5dcZvAPXv15NxNpSeF
IGm9jhgEylYrv/cj/bwx8Wq1H2bKS2Dr3s3qxr3nP7MSCHxn3EmhaPy+5ZzhHkY4
nrlD6xqRSEV1aPoNVX97oASD5eLIZCai21T36fqk4ZH1YJfVy29mqI695ZRR0IFP
5wkIu7JgRyCGrqWVB0/RWjCuw3A2vGU3lIe4wKkS67K6EjCSh26V+/GDtKoJA07i
RH0W3V5iwOvsw0S9PtSDGelb+9SUP8gbqvcJ2akMLyxhOxVkM6ao0/1Xc+IwHxbt
xdURruEb31oJbMXvIC4caVACSmN9dQ==
=2E+b
-----END PGP SIGNATURE-----

--6EOIiVkRIQqWqOV/--


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:43:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:43:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129057.242262 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixCI-0001kt-CR; Tue, 18 May 2021 10:43:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129057.242262; Tue, 18 May 2021 10:43:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixCI-0001km-8e; Tue, 18 May 2021 10:43:34 +0000
Received: by outflank-mailman (input) for mailman id 129057;
 Tue, 18 May 2021 10:43:32 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lixCG-0001kT-MH
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:43:32 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a4bca774-b1c5-43c8-8709-c448521a2227;
 Tue, 18 May 2021 10:43:31 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8D851AFCF;
 Tue, 18 May 2021 10:43:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a4bca774-b1c5-43c8-8709-c448521a2227
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621334610; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=PSANzXWlZkynQMwMKqGA1cWX6aVeLV4RM+53ftRNKvo=;
	b=AiAgQIROb/1CwM/eoQmSkx+ZNMguD0ks/EdhGBVK+klQuUV988RAbNduB+95j1LDhaMFh2
	vSonqw7M67rZrettEj+8SsvguIiAGLzh7gZfHOoC5CRgxSE4ywqmx9gffrJ18yNHhZGE7u
	eFG5Q+/guycki4m2Or3tv4K/8mP8BPE=
Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "julien@xen.org" <julien@xen.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-5-penny.zheng@arm.com>
 <dbffa647-37e2-93b6-4041-a1344aeb1837@suse.com>
 <VE1PR08MB5215E7203960F535BC857F5CF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f9776245-0a33-4cb8-fd5a-f7be8ab38b78@suse.com>
Date: Tue, 18 May 2021 12:43:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB5215E7203960F535BC857F5CF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 11:51, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:16 PM
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> This patch introduces static memory initialization, during system RAM boot up.
>>>
>>> New func init_staticmem_pages is the equivalent of init_heap_pages,
>>> responsible for static memory initialization.
>>>
>>> Helper func free_staticmem_pages is the equivalent of free_heap_pages,
>>> to free nr_pfns pages of static memory.
>>> For each page, it includes the following steps:
>>> 1. change page state from in-use(also initialization state) to free
>>> state and grant PGC_reserved.
>>> 2. set its owner NULL and make sure this page is not a guest frame any
>>> more
>>
>> But isn't the goal (as per the previous patch) to associate such pages with a
>> _specific_ domain?
>>
> 
> free_staticmem_pages is like free_heap_pages: it is not used only for
> initialization, but also for freeing used pages back to unused.
> Here, setting its owner to NULL means clearing the owner stored in the
> in-use fields.

I'm afraid I still don't understand.

> Still, I need to add more explanation here.

Yes please.

>>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
>>>      spin_unlock(&heap_lock);
>>>  }
>>>
>>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static memory. */
>>> +static void free_staticmem_pages(struct page_info *pg, unsigned long nr_pfns,
>>> +                                 bool need_scrub)
>>
>> Right now this function gets called only from an __init one. Unless it is
>> intended to gain further callers, it should be marked __init itself then.
>> Otherwise it should be made sure that other architectures don't include this
>> (dead there) code.
>>
> 
> Sure, I'll add __init. Thx.

Didn't I see a 2nd call to the function later in the series? That
one doesn't permit the function to be __init, iirc.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 10:50:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 10:50:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129062.242273 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixIk-0003CI-44; Tue, 18 May 2021 10:50:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129062.242273; Tue, 18 May 2021 10:50:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixIk-0003CB-06; Tue, 18 May 2021 10:50:14 +0000
Received: by outflank-mailman (input) for mailman id 129062;
 Tue, 18 May 2021 10:50:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9UfV=KN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lixIi-0003C5-Jc
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 10:50:12 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a79f1267-04be-4a7f-9499-5bc1f1706bed;
 Tue, 18 May 2021 10:50:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a79f1267-04be-4a7f-9499-5bc1f1706bed
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621335011;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=trKKf++AzOW/eOKJ07UkttS0C4U3twhhwK2n6basGfY=;
  b=Gzveldvsk7IuZTbTd2fKx5rCNK0nE3bI6aSNMFJYZ3UyWBxVGtvDzsHH
   JYuMb4DD5ZNYuOjalC9wy8LeuPvDO6TC0i9iarxXyUHj5PBWuu39N+GFG
   xNHQ72/QtySoJizBRma0je3JffcKNd1FGrQ4QZ9kGQEZr51ZU6mDZ6Ira
   Y=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: Cl4RLLTtjbvYQwRyF1aqSfaIRDAHBKdxtlGin+2jzRydDkRusiQWkz9IVX8MW2rvSqbmJuhoNR
 gAXjlTfX+MLMj9E0wtQ4m7nvsBdRnE160k8rNRRejYgH812o86ZPNed76tJO7/ZsHn/4pgrF/5
 8zgp2C1nDS3ckDhmNEz8CdDDd8LMnt9OarYEZQHHhfdlEfsOJZNITLnF03i4y+Kfq/cNpscP77
 NwkVX43/JidDm69M21gKPcjec3/AdlwCDpC7n4oJV1tuD8iU0L0ssnvOSp5N/rZtv5NtSdR5lK
 v4I=
X-SBRS: 5.1
X-MesageID: 43809368
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:JssPva7EjhXk4QuXmAPXwDLXdLJyesId70hD6qkQc3FomwKj9/
 xG/c5rsSMc7Qx6ZJhOo7+90cW7L080lqQFhLX5X43SPzUO0VHARO1fBOPZqAEIcBeOlNK1u5
 0AT0B/YueAcGSTj6zBkXWF+wBL+qj5zEiq792usUuEVWtRGsZdB58SMHfhLqVxLjM2Y6YRJd
 6nyedsgSGvQngTZtTTPAh+YwCSz+e77a4PeHQ9dmYa1DU=
X-IronPort-AV: E=Sophos;i="5.82,309,1613451600"; 
   d="scan'208";a="43809368"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=QmtPLdVBKFSiE03gZI5XsELw+SwhBuzkymtIUuT1j0+TS9a+7YSqpoPPfKYjA0xTTzloPtQdI3TNXjBdrNZ0UYkJ9q46sPVXII5vIuzUYaU9iJnqLEXFNrjwucxlK4gK+x7Ksuvv1uvvoU7XNwIZ6VKXlmJUNc1KgP83YABDg9cP/sBYg6foxErKTNOe0JHuqb14yC62FaOYGPYLaEEeeHKStGvCPkatU/nGfwLQ6ddOhFbDpSVgWTrXUegu1rNXIVeS6M6vFnPkjPGXO8V6BL3tfqi1EcxDwIPtTALgWmTtjT/wcGU00k+iPPpF5QBkuJRgSlxA1L41JwVLcQ7V9Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tUn5XI88xd1B744IQxuFJuWJsdR5tDB/QShlTfvEat8=;
 b=OvZHlc+NF2mAhcLlA74Jwb33A3sC5mQHobk3vIbzvB5682/mYqbzWO21x905CPXZhWLXOjYqVaqg2jdibM6K4q36KGvJDdOgZslA7cHSoXxLIYfPf5yPiQk04DXrvA18XRExDZMoRgSl6hVJ7w+8ZAYrBFgI+vlbJ0WukGivU8UfigvLhK3wfXnAGECjw0ls/BrAD5KJreawe3mr6bFdZP68yBUz60fbEgxBBnZ+2bZu7r/wDHxkriHK8hoHjVf8kt7Y3lmSpveYyM3LpKdGSdmGyOB12UipO8AiH13LtG7VjZqXUWEjTg0HvuyWHpfdcq3hDaRxijH/ZHBN/EdexA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=tUn5XI88xd1B744IQxuFJuWJsdR5tDB/QShlTfvEat8=;
 b=kMOiTMSC9+gqaRIXSQVs7dqvpTL52Q2eg2Gx7xA1s/EbgPY08BSmOSvM5K/4fgYsDPQy7SNk1p5VKH/eFrsgTejkkgaebpbeWXrhtWXMC7dNiIEa11yH6YFk+ItoyF2zTBGaRIwxvHvlKuPI03DrUFckKngo5bPEUhyclNh3BOg=
Date: Tue, 18 May 2021 12:50:02 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, "Ian
 Jackson" <iwj@xenproject.org>, <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v4 03/10] libx86: introduce helper to fetch msr entry
Message-ID: <YKOb2hn9XHPGhM5g@Air-de-Roger>
References: <20210507110422.24608-1-roger.pau@citrix.com>
 <20210507110422.24608-4-roger.pau@citrix.com>
 <035cc783-6083-f141-d4a3-db7a6adc36f5@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <035cc783-6083-f141-d4a3-db7a6adc36f5@suse.com>
X-ClientProxiedBy: MR2P264CA0053.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c70f8b8e-5702-4d52-2a5f-08d919eab38d
X-MS-TrafficTypeDiagnostic: DS7PR03MB5463:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB546311B51F216A39C5843D438F2C9@DS7PR03MB5463.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6108;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: BAE+skgH8NolIoHm7B9yRbZMGhiglFAbVACk37G0g+f+TjzswTFr1wk1IunVZrEjA9iCpPdtCBSme8r1DdAAivi+Mzn496AU1z7vq6ukMkuV2LjibCkWo3NIRtW4CiBzs6RzMwz/hRt+BwlPpZLrluso/lDHZJtU0ij4QEgE5VZuoOYKdwOeErCrHsSxGTZBX1ajoHiuZQ/Bu3mKLdbchQfQziJBuje1pZ5LQTVvONIzJ61TM2bgbv5po9Gaq5ap4cRAiHzub31v+I01OTmUpEugHAFi+mEVi0J2R/A6LjZrlwPwbpgOabSrRpgXyhumYIsVwx1DtW1QJQl/bdZQLwUqGsKvcP26r1a34YkTOM4y15TlZV33K9//3KjEaTHMF2V03pgugRx2/UraYpytSBLEp2YoLZRu7xXceFtjJSVI6qf94mj0wsYdH60JNfZe9JPuZRVeILSIABzFjr8639FnvzKmzdVBqnpRjIAJMjSQJKijnqEXYw18ZPgsP9KSweMzt7bEDvpc1drjj/+afJP8Blj3kUhcBmI8sK5JAD8xyMaulMukkGqUA1xe4KBASsx2l+ii+PjfNvYk0HOazsf6ha6zV4DR5kPJf77rkAs=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(39860400002)(376002)(366004)(136003)(346002)(396003)(54906003)(6496006)(66946007)(66556008)(66476007)(8676002)(316002)(8936002)(478600001)(86362001)(85182001)(4326008)(6486002)(186003)(16526019)(9686003)(26005)(53546011)(956004)(33716001)(38100700002)(6666004)(83380400001)(2906002)(6916009)(5660300002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?NWJnQm93MWxDc0F2Sys0UjkzdEJJYnpTc29hdzZyM3RPUmtDS1NQV0Z0Rlly?=
 =?utf-8?B?TjZQMWxDRTRYTnMzd2pzSlZhOEY1NGZ5TVpTN1luMlM0ODl6b2IxNFc0dWZL?=
 =?utf-8?B?WUdFRnpYdDl3TFBCVEtQMTNSbkR3QWoyVGpncjhVRFlQWXR3SGlGN2pVSzZS?=
 =?utf-8?B?aHY0VWZXUE0rQUVEbWN1aTJQZE9YTDJZT3pqSjFBR1lWcUMvQ1B6ZytFSFVS?=
 =?utf-8?B?QThRRFUvbTJFRnNSa0E3c0FjMkV5UitwM0FPd1I1ZXBmOXBDS1FUSHV2eWkz?=
 =?utf-8?B?b0NXbzQzS21pSkVxN09uOTV1L3BTdkR0d3RQU0dOQ0tHYXFOcXRxWCtvcHRN?=
 =?utf-8?B?UFIyMjV5Q1NGNmU1dTFTUHFSdmJTcWVDeEdMTWxTczI5bWJDNlZJVnBQcExi?=
 =?utf-8?B?MWpQQWtjOHZkdEFxU2dsWS90V0FjZWZxTFJkTGFGaEgyTHpHR0R1cGhGV2Jy?=
 =?utf-8?B?T1U3Y0JoRGRjTGRrK3JCRUgyWm1IYzBsUGFHZm5qaXNtWHZqVUtDM3F0MGI5?=
 =?utf-8?B?WVViL1V2c1kxVnluWWp0aDA3UTBBMFVoQWZlUW0zTVMwNHpuRjZVVkgxV3Fs?=
 =?utf-8?B?cUVlQUdvWXJrZjBKaHNLTHBBeFBXUjhDYS9zOFZsbHJMc05Jb0tHaS9GSm9h?=
 =?utf-8?B?c3doaDRQeFkwVTRpS2hPSXFOVkVBc1JoYmdNNVJLYVZFQjl5MktWbyszU3ls?=
 =?utf-8?B?YUZkRmJVNFNzYTlRRGRWRno1QVEwRmpqb0ZNMTVSVE9VaDM5c2VBdXBMaHJl?=
 =?utf-8?B?ekUvenUxL2czQS93cEV0SHpsa1Y1WlIwY2dsTW9FVTVMTWFOdDRiZm1SN1Yx?=
 =?utf-8?B?RUNnNmtWQjMvcVZ4SlRwc3FiYmZYdzEra0xhM0xkQzl4NzcvNWZYNTlSSDVp?=
 =?utf-8?B?dHkvOEhlLzJ5eFl1OXgzYUdUdENiZHRJckoyLzNEYnowQXJFVWtEYi9Ja2pU?=
 =?utf-8?B?OVY3eDltYVJDQVVvWERZN3J3eUlsa2pWaS82b2xEanN0S2hpMHI2cmtNZGxR?=
 =?utf-8?B?eGlwODZKMjJodnpUYzZNaExRNEFPbi83MnVuV3E2ZEwvRk56VHFvTFAvQTlm?=
 =?utf-8?B?K2lNSk8xemdIblFBci85T0hURWwwODR0UVBlWGE0N0hPV2lWdWdQNmN6R3Bo?=
 =?utf-8?B?Z1ZUdlpaNEk0di9RMG53MG1NRGJ0ZG5meE94MWV5bXVPNGNPclVoczJhM0Er?=
 =?utf-8?B?bDNIaGVGTTJkVW54QWpGdkhiR3pMTE5WYWhycDZkS0k5SjNKZi9rRG42RytW?=
 =?utf-8?B?bkorMUJYdDgrdXd3aXNFaVZuaEIzeFQxaTZOWUpINTBsOXRMcnhnWVBKTk5k?=
 =?utf-8?B?UVhNRWhJQ3YyOTNwOUVFWlU1bW5kNlJyVzYyTllJbk9DUThUclpDbXFwbDI5?=
 =?utf-8?B?TXIwTElTWldndXAxZTQwNzFDYkgwcnNCSHRvT2xiYTAzYkJROHF3V29zRXVy?=
 =?utf-8?B?cWIwcW50Y3E4M1RLdDhwTXp1bUZpblEweEhZa0N0TElRU2tjcnRCZTd2dml0?=
 =?utf-8?B?Vjg2N3ZoS1E3a3NHekRsSm94VFZvSWZ3MldYNk02UlFLb05lUlA2dktTVHBk?=
 =?utf-8?B?Q3R2VXcySFJxRzV6T3k5RWxrdm80SEppRE03bEZ0VGMrb0VvQ3NnWDlJTlpi?=
 =?utf-8?B?d2w5SVpqbkFjNnphNkIwcWt2R3FKT2pyb1RjZnNlSWlGOFh6QWFnYVo0UVBv?=
 =?utf-8?B?a0NLTXBtRlphUzJZbDV4SEpJdE00bkM1cUNabnlKYW1NTVVxZVlEcnNPNENp?=
 =?utf-8?Q?vcomwl3RAY8yICSVhSqMjYRa7NHVhm/utrQASTC?=
X-MS-Exchange-CrossTenant-Network-Message-Id: c70f8b8e-5702-4d52-2a5f-08d919eab38d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 10:50:07.7240
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vdW+C/770A9gKhZ3K8DxYTJ9uwNTqoqqnXhFaTPfSVHVNJNPv6vxXSFWLgn7Z1wkplOew/7WE1BNg5E1XBB6rg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5463
X-OriginatorOrg: citrix.com

On Mon, May 17, 2021 at 05:43:33PM +0200, Jan Beulich wrote:
> On 07.05.2021 13:04, Roger Pau Monne wrote:
> > @@ -91,6 +91,21 @@ int x86_msr_copy_from_buffer(struct msr_policy *policy,
> >                               const msr_entry_buffer_t msrs, uint32_t nr_entries,
> >                               uint32_t *err_msr);
> >  
> > +/**
> > + * Get a MSR entry from a policy object.
> > + *
> > + * @param policy      The msr_policy object.
> > + * @param idx         The index.
> > + * @returns a pointer to the requested leaf or NULL in case of error.
> > + *
> > + * Do not call this function directly and instead use x86_msr_get_entry that
> > + * will deal with both const and non-const policies returning a pointer with
> > + * constness matching that of the input.
> > + */
> > +const uint64_t *_x86_msr_get_entry(const struct msr_policy *policy,
> > +                                   uint32_t idx);
> > +#define x86_msr_get_entry(p, i) \
> > +    ((__typeof__(&(p)->platform_info.raw))_x86_msr_get_entry(p, i))
> >  #endif /* !XEN_LIB_X86_MSR_H */
> 
> Just two nits: I think it would be nice to retain a blank line ahead of
> the #endif. And here as well as in the CPUID counterpart you introduce,
> strictly speaking, name space violations (via the leading underscore).

I guess another option would be to name the function
x86_msr_get_entry_const, and keep the x86_msr_get_entry macro as-is.

Does that seem better?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 18 11:02:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 11:02:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129071.242290 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixUI-0004hm-BY; Tue, 18 May 2021 11:02:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129071.242290; Tue, 18 May 2021 11:02:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixUI-0004hf-8S; Tue, 18 May 2021 11:02:10 +0000
Received: by outflank-mailman (input) for mailman id 129071;
 Tue, 18 May 2021 11:02:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lixUH-0004hZ-HA
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 11:02:09 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lixUH-0000y3-AM; Tue, 18 May 2021 11:02:09 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lixUH-000465-2C; Tue, 18 May 2021 11:02:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=sM8DWwqOPfusTvnpTGsD9A9ei6YMfjOEAiqzbWORjx0=; b=TyEt8Uv7XdsKpKf+Ixgzi4ItnI
	XYqIhjthN/BImQoZphI0Em5fd3BOYHQrjOTVkyohtNWwSIV8bnqkanqGdOS0JldQkPtQu5/c4KgKu
	SaPGpKMA+UOCpnZaegqw3jgCqCCtYJO8NJEs8bKhEw0Q27hzUd1ujagUcThUo5rgRixc=;
Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-9-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c002d9b2-8210-1c03-b374-76e037b65e2f@xen.org>
Date: Tue, 18 May 2021 12:02:06 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-9-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> Since page_list under struct domain refers to linked pages as gueast RAM

s/gueast/guest/

> allocated from heap, it should not include reserved pages of static memory.

You already have PGC_reserved to indicate they are "static memory". So 
why do you need yet another list?

> 
> The number of PGC_reserved pages assigned to a domain is tracked in
> a new 'reserved_pages' counter. Also introduce a new reserved_page_list
> to link pages of static memory. Let page_to_list return reserved_page_list,
> when flag is PGC_reserved.
> 
> Later, when a domain gets destroyed or restarted, those new values will help
> relinquish its memory to the proper place, instead of giving it back to the heap.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/common/domain.c     | 1 +
>   xen/common/page_alloc.c | 7 +++++--
>   xen/include/xen/sched.h | 5 +++++
>   3 files changed, 11 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/common/domain.c b/xen/common/domain.c
> index 6b71c6d6a9..c38afd2969 100644
> --- a/xen/common/domain.c
> +++ b/xen/common/domain.c
> @@ -578,6 +578,7 @@ struct domain *domain_create(domid_t domid,
>       INIT_PAGE_LIST_HEAD(&d->page_list);
>       INIT_PAGE_LIST_HEAD(&d->extra_page_list);
>       INIT_PAGE_LIST_HEAD(&d->xenpage_list);
> +    INIT_PAGE_LIST_HEAD(&d->reserved_page_list);
>   
>       spin_lock_init(&d->node_affinity_lock);
>       d->node_affinity = NODE_MASK_ALL;
> diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
> index f1f1296a61..e3f07ec6c5 100644
> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -2410,7 +2410,7 @@ int assign_pages(
>   
>           for ( i = 0; i < nr_pfns; i++ )
>           {
> -            ASSERT(!(pg[i].count_info & ~PGC_extra));
> +            ASSERT(!(pg[i].count_info & ~(PGC_extra | PGC_reserved)));
I think this change belongs to the previous patch.

>               if ( pg[i].count_info & PGC_extra )
>                   extra_pages++;
>           }
> @@ -2439,6 +2439,9 @@ int assign_pages(
>           }
>       }
>   
> +    if ( pg[0].count_info & PGC_reserved )
> +        d->reserved_pages += nr_pfns;

reserved_pages doesn't seem to be ever read or decremented. So why do 
you need it?

> +
>       if ( !(memflags & MEMF_no_refcount) &&
>            unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
>           get_knownalive_domain(d);
> @@ -2452,7 +2455,7 @@ int assign_pages(
>               page_set_reserved_owner(&pg[i], d);
>           smp_wmb(); /* Domain pointer must be visible before updating refcnt. */
>           pg[i].count_info =
> -            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> +            (pg[i].count_info & (PGC_extra | PGC_reserved)) | PGC_allocated | 1;

Same here.

>           page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
>       }
>   
> diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
> index 3982167144..b6333ed8bb 100644
> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -368,6 +368,7 @@ struct domain
>       struct page_list_head page_list;  /* linked list */
>       struct page_list_head extra_page_list; /* linked list (size extra_pages) */
>       struct page_list_head xenpage_list; /* linked list (size xenheap_pages) */
> +    struct page_list_head reserved_page_list; /* linked list (size reserved pages) */
>   
>       /*
>        * This field should only be directly accessed by domain_adjust_tot_pages()
> @@ -379,6 +380,7 @@ struct domain
>       unsigned int     outstanding_pages; /* pages claimed but not possessed */
>       unsigned int     max_pages;         /* maximum value for domain_tot_pages() */
>       unsigned int     extra_pages;       /* pages not included in domain_tot_pages() */
> +    unsigned int     reserved_pages;    /* pages of static memory */
>       atomic_t         shr_pages;         /* shared pages */
>       atomic_t         paged_pages;       /* paged-out pages */
>   
> @@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
>       if ( pg->count_info & PGC_extra )
>           return &d->extra_page_list;
>   
> +    if ( pg->count_info & PGC_reserved )
> +        return &d->reserved_page_list;
> +
>       return &d->page_list;
>   }
>   
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 11:22:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 11:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129079.242307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixnn-00070E-6I; Tue, 18 May 2021 11:22:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129079.242307; Tue, 18 May 2021 11:22:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lixnn-000707-2n; Tue, 18 May 2021 11:22:19 +0000
Received: by outflank-mailman (input) for mailman id 129079;
 Tue, 18 May 2021 11:22:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9UfV=KN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lixnl-000701-Nt
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 11:22:17 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f87a7cad-e82f-4cad-9903-da7c1d167681;
 Tue, 18 May 2021 11:22:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f87a7cad-e82f-4cad-9903-da7c1d167681
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621336936;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=FwdNIL7qXVPlTmxhA3J+zSc7qIgdvIigpL79T2ZcotM=;
  b=Hp0wtejt2VJAgybOjgIOYOhFt0R6OSZoK5Zu6gVuE2TQeBtcaiEw79pF
   ZymHmZtHarUCTXj+/YQrS1BUZC4RpUtTpmJJjyBuCFQ+koTSgLt1d3d72
   rCEd3a5vVSTu7vJd0ckmC2NH/w8iuviJdBYPQTWwhpQrZYWYBrxb2PkKA
   g=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: nsiFW6O9ZJAwvwkt3nImK2sH8ap24MFzq8HQjiDQFTDbZBqanxOSQK+RSIZuCX8OGDKjtbFkml
 CFbNR7lqukG0k16tAK4eAUv1GOM6md+uoWSLOP/X1MiLuB7VFOQxu0siWmZyR3v6V4+mPRpGHA
 wxTdmJY1J8FxGLju9zqQ0csPoXA8FQn023m06wWHqqxyIzk6ubN8vZQBvgysvVIlMJ6zuCMj0g
 gqN4MoZVxbS+dav4A/zff7ttiKAwATNt+U2hIJ13jHvga9GA54w5/qbrTcAxwrZ3qLYtLDeJ3H
 NDU=
X-SBRS: 5.1
X-MesageID: 43811264
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:FB2zAaO1OFrpIsBcT8H155DYdb4zR+YMi2TDiHofdfUFSKClfp
 6V8cjztSWUtN4QMEtQ/uxoHJPwO080lKQFmrX5WI3NYOCIghrLEGgP1/qG/9SkIVyCygc/79
 YQT0EdMqyIMbESt6+Ti2PZYrVQs+VvsprY/ds2p00dMz2CAJsQiTuRZDzrdnGfE2J9dOUE/d
 enl4d6T1XKQwVaUu2LQl0+G8TTrdzCk5zrJTQcAQQ81QWIhTS0rJbnDhmxxH4lInJy6IZn1V
 KAvx3y562lvf3+4ATbzXXv45Nfn8ak4sdfBfaLltMeJlzX+0iVjbxaKv6/VQ0O0aOSAA5Aqq
 iIn/5gBbU915rpRBD0nfO3sDOQlArHghTZuC+laXiKm72weNt1MbsHuWr1GiGpnHbIh+sMpZ
 6j6Vjp/qa/PSmw7hgV2OK4Iy2CtnDE6kbKwtRjxUC2b+MlGclsRNskjT9o+dE7bWTH1Lw=
X-IronPort-AV: E=Sophos;i="5.82,310,1613451600"; 
   d="scan'208";a="43811264"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eMP8c6ipiwoFOKyECASbD7GFzNmiIDuucIMN/WPSFwnqlK5UEW+JRQM33ZZTswYr4kpD66wRXMhqSQqExdjBoqmJht4z0mIOU/yvRLU1UmjZ7yns3dn6tbHVVAKesUPPqeJHm6t+idQQobTZnkSJpfLFwusyClLpsk0RG5OSJRXMdybIaywGPJTDwweoo4XiRPMSiXuamfKjv+7Rby0VEqJEGsDHD9Xwj66g9Tw/IwqJhkWkfpRcGtO90HuA7kNiSwh6IaJRb+k4R6CDnQ2KoZE9bu3Sb+BWGGjDGqwTFt4qKj8pvTB1XXj8TgEgra5+D2H5nHi98BpwU3qzRoOuEQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OgOXpGvZRfLjobJD5KW4J7iqSCteow7nPWaKurE1NW8=;
 b=XypIPlp/U7kToJA4GqyEDKvoC6IAFdHYwY8A43fKxP/L2Boj1jRiu8z/BwgMV/TBtOVFKXJjoj2ioJYU+bzxZC9xD1s43ETPKOrAbWGuCMySraiqlkI7num1SuYmYL4qYgedOfvM1/DOIqhSbbdH2MhzK+QEImD+jvwgaVyFvjSWcf6MkYvmD8jDsHEesMtv5s3igeOwHOt2/XeJFrSDGgjuQ74vYdjmsP9nuJx4lqomGgV4T/2LOmVC86z8az2n36EW1wyAwQpCKyZ47Q4Djoh9oeI8hYTJ/byENrGrTU81SWLgd2yXAeaGHSyGXvy8ZxlSj5q1Vkdj5vPodoV58Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OgOXpGvZRfLjobJD5KW4J7iqSCteow7nPWaKurE1NW8=;
 b=IOEq9aXIdU4IxVI/9GKs1LDYdFksACxWJXqlToHRy509W+GtGLLbp9LrBY187DUC8gjzgqJ0iGxZ1shCzdga/bZr5cq7/s0L+ian9qSfzbtIiWk85JLTrxhz1RNcRmzueLwRVsLhp4GlNveMukfToj9w5AEQpJW5SFMbTP3Y/3Q=
Date: Tue, 18 May 2021 13:22:07 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Julien
 Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
	<xen-devel@lists.xenproject.org>
Subject: Re: [PATCH] libelf: improve PVH elfnote parsing
Message-ID: <YKOjX2pYc+SCHyBn@Air-de-Roger>
References: <20210514135014.78389-1-roger.pau@citrix.com>
 <8ef298f4-24b5-e133-fc3e-87a67132a61b@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <8ef298f4-24b5-e133-fc3e-87a67132a61b@suse.com>
X-ClientProxiedBy: PAYP264CA0015.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:11e::20) To DS7PR03MB5608.namprd03.prod.outlook.com
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 11:22:12.6113
 (UTC)
X-OriginatorOrg: citrix.com

On Mon, May 17, 2021 at 01:09:11PM +0200, Jan Beulich wrote:
> On 14.05.2021 15:50, Roger Pau Monne wrote:
> > @@ -426,7 +426,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
> >      }
> >  
> >      /* Initial guess for virt_base is 0 if it is not explicitly defined. */
> > -    if ( parms->virt_base == UNSET_ADDR )
> > +    if ( parms->virt_base == UNSET_ADDR || hvm )
> >      {
> >          parms->virt_base = 0;
> >          elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
> > @@ -442,7 +442,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
> >       * If we are using the modern ELF notes interface then the default
> >       * is 0.
> >       */
> > -    if ( parms->elf_paddr_offset == UNSET_ADDR )
> > +    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
> >      {
> >          if ( parms->elf_note_start )
> >              parms->elf_paddr_offset = 0;
> 
> Both of these would want their respective comments also updated, I
> think: There's no defaulting or guessing really in PVH mode, is
> there?
> 
> > @@ -456,8 +456,13 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
> >      parms->virt_kstart = elf->pstart + virt_offset;
> >      parms->virt_kend   = elf->pend   + virt_offset;
> >  
> > -    if ( parms->virt_entry == UNSET_ADDR )
> > -        parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
> > +    if ( parms->virt_entry == UNSET_ADDR || hvm )
> > +    {
> > +        if ( parms->phys_entry != UNSET_ADDR32 )
> 
> Don't you need "&& hvm" here to prevent ...
> 
> > +            parms->virt_entry = parms->phys_entry;
> 
> ... this taking effect for a kernel capable of running in both
> PV and PVH modes, instead of ...
> 
> > +        else
> > +            parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
> 
> ... this (when actually in PV mode)?

Oh, I somehow assumed that PV guests _must_ provide the entry point in
XEN_ELFNOTE_ENTRY, but I don't think that's the case. Will update and
send a new version.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 18 11:23:15 2021
Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "julien@xen.org" <julien@xen.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
 <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <4389b5be-7d23-31d7-67e0-0068cba79934@suse.com>
Date: Tue, 18 May 2021 13:23:11 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 10:57, Penny Zheng wrote:
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:35 PM
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>>      {
>>>          ASSERT(page_get_owner(&pg[i]) == NULL);
>>>          page_set_owner(&pg[i], d);
>>> +        /* Use page_set_reserved_owner to set its reserved domain owner. */
>>> +        if ( (pg[i].count_info & PGC_reserved) )
>>> +            page_set_reserved_owner(&pg[i], d);
>>
>> Now this is puzzling: What's the point of setting two owner fields to the same
>> value? I also don't recall you having introduced
>> page_set_reserved_owner() for x86, so how is this going to build there?
>>
> 
> Thanks for pointing out that it will fail on x86.
> As for the same value, sure, I shall change it to a domid_t domid to record
> its reserved owner. The domid alone is enough to differentiate.
> And even when a domain gets rebooted, the struct domain may be destroyed,
> but the domid will stay the same.

Will it? Are you intending to put in place restrictions that make it
impossible for the ID to get re-used by another domain?

> The major use cases for domains on static allocation assume the whole
> system is static, with no runtime domain creation.

Right, but that's not currently enforced afaics. If you would
enforce it, it may simplify a number of things.

>>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>>>      return pg;
>>>  }
>>>
>>> +/*
>>> + * Allocate nr_pfns contiguous pages, starting at #start, of static memory,
>>> + * then assign them to one specific domain #d.
>>> + * It is the equivalent of alloc_domheap_pages for static memory.
>>> + */
>>> +struct page_info *alloc_domstatic_pages(
>>> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
>>> +        unsigned int memflags)
>>> +{
>>> +    struct page_info *pg = NULL;
>>> +    unsigned long dma_size;
>>> +
>>> +    ASSERT(!in_irq());
>>> +
>>> +    if ( memflags & MEMF_no_owner )
>>> +        memflags |= MEMF_no_refcount;
>>> +
>>> +    if ( !dma_bitsize )
>>> +        memflags &= ~MEMF_no_dma;
>>> +    else
>>> +    {
>>> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
>>> +        /* Starting address shall meet the DMA limitation. */
>>> +        if ( dma_size && start < dma_size )
>>> +            return NULL;
>>
>> It is the entire range (i.e. in particular the last byte) which needs to meet such
>> a restriction. I'm not convinced though that DMA width restrictions and static
>> allocation are sensible to coexist.
>>
> 
> FWIW, if the starting address meets the limitation, then the last byte,
> being greater than the starting address, will meet it too.

I'm afraid I don't know what you're meaning to tell me here.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 11:25:04 2021
Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "julien@xen.org" <julien@xen.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-9-penny.zheng@arm.com>
 <97bc6ca6-206b-197f-de08-20691578b9db@suse.com>
 <VE1PR08MB521538CF7B0BFED3007E5D8DF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <285e39b3-1bfc-aff3-31e7-d29993d77a20@suse.com>
Date: Tue, 18 May 2021 13:24:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB521538CF7B0BFED3007E5D8DF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 10:38, Penny Zheng wrote:
> Hi Jan
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:39 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> Since page_list under struct domain refers to linked pages as guest
>>> RAM allocated from the heap, it should not include reserved pages of
>>> static memory.
>>>
>>> The number of PGC_reserved pages assigned to a domain is tracked in a
>>> new 'reserved_pages' counter. Also introduce a new reserved_page_list
>>> to link pages of static memory. Let page_to_list return
>>> reserved_page_list, when flag is PGC_reserved.
>>>
>>> Later, when a domain gets destroyed or restarted, those new values will
>>> help relinquish the memory to the proper place, rather than giving it
>>> back to the heap.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>>  xen/common/domain.c     | 1 +
>>>  xen/common/page_alloc.c | 7 +++++--
>>>  xen/include/xen/sched.h | 5 +++++
>>>  3 files changed, 11 insertions(+), 2 deletions(-)
>>
>> This contradicts the title's prefix: There's no Arm-specific change here at all.
>> But imo the title is correct, and the changes should be Arm-specific. There's
>> no point having struct domain fields on e.g. x86 which aren't used there at all.
>>
> 
> Yep, you're right.
> I'll add ifdefs in the following changes.

As necessary, please. Moving stuff to Arm-specific files would be
preferable.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 11:28:32 2021
Date: Tue, 18 May 2021 13:28:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jason Andryuk <jandryuk@gmail.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, Jan
 Beulich <jbeulich@suse.com>, Julien Grall <julien@xen.org>, Stefano
 Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH] libelf: improve PVH elfnote parsing
Message-ID: <YKOk0Jy+jD8xs0j5@Air-de-Roger>
References: <20210514135014.78389-1-roger.pau@citrix.com>
 <CAKf6xpsyzazbY_mA0QtAuAqpOPkpuhjrZ1wid0khWy1urh4iBg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <CAKf6xpsyzazbY_mA0QtAuAqpOPkpuhjrZ1wid0khWy1urh4iBg@mail.gmail.com>
MIME-Version: 1.0

On Fri, May 14, 2021 at 11:11:14AM -0400, Jason Andryuk wrote:
> On Fri, May 14, 2021 at 9:50 AM Roger Pau Monne <roger.pau@citrix.com> wrote:
> >
> > Pass an hvm boolean parameter to the elf note parsing and checking
> > routines, so that better checking can be done in case libelf is
> > dealing with an hvm container.
> >
> > elf_xen_note_check shouldn't return early unless PHYS32_ENTRY is set
> > and the container is of type HVM, or else the loader and version
> > checks would be avoided for kernels intended to be booted as PV but
> > that also have PHYS32_ENTRY set.
> >
> > Adjust elf_xen_addr_calc_check so that the virtual addresses are
> > actually physical ones (by setting virt_base and elf_paddr_offset to
> > zero) when the container is of type HVM, as that container is always
> > started with paging disabled.
> 
> Should elf_xen_addr_calc_check be changed so that PV operates on
> virtual addresses and HVM operates on physical addresses?

Right... I was aiming to get away with something simpler and just
assume phys == virt on HVM, in order to avoid more complicated
changes and the need to introduce new fields in the structure.

> I worked on some patches for this a while back, but lost track when
> other work pulled me away.  I'll send out what I had, but I think I
> had not tested many of the cases.  Also, I had other questions about
> the approach.  Fundamentally, what notes and limits need to be checked
> for PVH vs. PV?

Those are only sanity checks to assert that the image is broadly sane;
libelf also has checks when loading to make sure a malicious ELF
payload cannot fool the loader.

I'm unlikely to be able to do much work on this aside from this
current patch.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 18 11:29:24 2021
Subject: Re: [PATCH v4 03/10] libx86: introduce helper to fetch msr entry
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, xen-devel@lists.xenproject.org
References: <20210507110422.24608-1-roger.pau@citrix.com>
 <20210507110422.24608-4-roger.pau@citrix.com>
 <035cc783-6083-f141-d4a3-db7a6adc36f5@suse.com>
 <YKOb2hn9XHPGhM5g@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ea4477df-9360-7a45-e0b5-2ebc7052b451@suse.com>
Date: Tue, 18 May 2021 13:29:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YKOb2hn9XHPGhM5g@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.05.2021 12:50, Roger Pau Monné wrote:
> On Mon, May 17, 2021 at 05:43:33PM +0200, Jan Beulich wrote:
>> On 07.05.2021 13:04, Roger Pau Monne wrote:
>>> @@ -91,6 +91,21 @@ int x86_msr_copy_from_buffer(struct msr_policy *policy,
>>>                               const msr_entry_buffer_t msrs, uint32_t nr_entries,
>>>                               uint32_t *err_msr);
>>>  
>>> +/**
>>> + * Get a MSR entry from a policy object.
>>> + *
>>> + * @param policy      The msr_policy object.
>>> + * @param idx         The index.
>>> + * @returns a pointer to the requested leaf or NULL in case of error.
>>> + *
>>> + * Do not call this function directly and instead use x86_msr_get_entry that
>>> + * will deal with both const and non-const policies returning a pointer with
>>> + * constness matching that of the input.
>>> + */
>>> +const uint64_t *_x86_msr_get_entry(const struct msr_policy *policy,
>>> +                                   uint32_t idx);
>>> +#define x86_msr_get_entry(p, i) \
>>> +    ((__typeof__(&(p)->platform_info.raw))_x86_msr_get_entry(p, i))
>>>  #endif /* !XEN_LIB_X86_MSR_H */
>>
>> Just two nits: I think it would be nice to retain a blank line ahead of
>> the #endif. And here as well as in the CPUID counterpart you introduce,
>> strictly speaking, name space violations (via the leading underscore).
> 
> I guess another option would be to name the function
> x86_msr_get_entry_const, and keep the x86_msr_get_entry macro as-is.
> 
> Does that seem better?

This would be fine with me, as would be a trailing underscore or a
double underscore after e.g. the first name component.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 11:48:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 11:48:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129113.242371 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liyD3-0003bz-Jj; Tue, 18 May 2021 11:48:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129113.242371; Tue, 18 May 2021 11:48:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liyD3-0003bs-GN; Tue, 18 May 2021 11:48:25 +0000
Received: by outflank-mailman (input) for mailman id 129113;
 Tue, 18 May 2021 11:48:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liyD1-0003bm-Sa
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 11:48:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liyD1-0001ki-Mm; Tue, 18 May 2021 11:48:23 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liyD1-0007r1-Er; Tue, 18 May 2021 11:48:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=UDUv/qFwQc+LgMrvqS2F+F0f1ctPFePV3udtqjPTNQc=; b=igZalYhRfcDTV6d7iYThAq9IVo
	Oqs8dzwLKfuaWNpVWjjyrMLG1EG0Cntkpvt9wm6EvjW9ZzMOiIXMO1Xhn56ycqYhOtBYeqzEsE/Oo
	Eu4OmEDEv04IZ9L6QvVH9ersH3JG+sUdhOsa3BmE9f+lo+vzHbVS2a2JIniacUZ+L5Ec=;
Subject: Re: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518050738.163156-1-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7ab73cb0-39d5-f8bf-660f-b3d77f3247bd@xen.org>
Date: Tue, 18 May 2021 12:48:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518050738.163156-1-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:07, Penny Zheng wrote:
> Create one design doc for 1:1 direct-map and static allocation.
> It is the first draft and aims to describe why and how we allocate
> 1:1 direct-map(guest physical == physical) domains, and why and how
> we let domains on static allocation.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   docs/designs/static_alloc_and_direct_map.md | 239 ++++++++++++++++++++
>   1 file changed, 239 insertions(+)
>   create mode 100644 docs/designs/static_alloc_and_direct_map.md
> 
> diff --git a/docs/designs/static_alloc_and_direct_map.md b/docs/designs/static_alloc_and_direct_map.md
> new file mode 100644
> index 0000000000..fdda162188
> --- /dev/null
> +++ b/docs/designs/static_alloc_and_direct_map.md
> @@ -0,0 +1,239 @@
> +# Preface
> +
> +The document is an early draft for 1:1 direct-map memory map
> +(`guest physical == physical`) of domUs and Static Allocation.
> +Since the implementation of these two features shares a lot, we would like
> +to introduce both in one design.
> +
> +Right now, these two features are limited to ARM architecture.
> +
> +This design aims to describe why and how the guest would be created as 1:1
> +direct-map domain, and why and what the static allocation is.
> +
> +This document is partly based on Stefano Stabellini's patch series v1:
> +[direct-map DomUs](
> +https://lists.xenproject.org/archives/html/xen-devel/2020-04/msg00707.html).

While this is useful information for the reviewer to have, I am not 
sure a future reader needs to know all the history. So I would move this 
to the commit message.

> +
> +This is a first draft and some questions are still unanswered. When this is
> +the case, it will be included under chapter `DISCUSSION`.
> +
> +# Introduction on Static Allocation
> +
> +Static allocation refers to system or sub-system(domains) for which memory
> +areas are pre-defined by configuration using physical address ranges.
> +
> +## Background
> +
> +Cases where static allocation is needed:
> +
> +  * Static allocation needed whenever a system has a pre-defined non-changing
> +behaviour. This is usually the case in safety world where system must behave
> +the same upon reboot, so memory resource for both XEN and domains should be
> +static and pre-defined.
> +
> +  * Static allocation needed whenever a guest wants to allocate memory
> +from refined memory ranges. For example, a system has one high-speed RAM
> +region, and would like to assign it to one specific domain.
> +
> +  * Static allocation needed whenever a system needs a guest restricted to some
> +known memory area due to hardware limitations. For example, some device
> +can only do DMA to a specific part of the memory.
> +
> +Limitations:
> +  * There is no consideration for PV devices at the moment.
> +
> +## Design on Static Allocation
> +
> +Static allocation refers to system or sub-system(domains) for which memory
> +areas are pre-defined by configuration using physical address ranges.
> +
> +These pre-defined memory, -- Static Momery, as parts of RAM reserved in the

s/Momery/Memory/

> +beginning, shall never go to heap allocator or boot allocator for any use.

I think you mean "buddy" rather than "heap". Looking at your code, you 
are treating static memory regions as domheap pages.

> +
> +### Static Allocation for Domains
> +
> +### New Deivce Tree Node: `xen,static_mem`

s/Deivce/Device/

> +
> +Here introduces new `xen,static_mem` node to define static memory nodes for
> +one specific domain.
> +
> +For domains on static allocation, users need to pre-define guest RAM regions in
> +configuration, through `xen,static_mem` node under appropriate `domUx` node.
> +
> +Here is one example:
> +
> +
> +        domU1 {
> +            compatible = "xen,domain";
> +            #address-cells = <0x2>;
> +            #size-cells = <0x2>;
> +            cpus = <2>;
> +            xen,static-mem = <0x0 0xa0000000 0x0 0x20000000>;
> +            ...
> +        };
> +
> +RAM at 0xa0000000 of 512 MB are static memory reserved for domU1 as its RAM.
> +
> +### New Page Flag: `PGC_reserved`
> +
> +In order to differentiate and manage pages reserved as static memory with
> +those which are allocated from heap allocator for normal domains, we shall
> +introduce a new page flag `PGC_reserved` to tell.
> +
> +Grant pages `PGC_reserved` when initializing static memory.
> +
> +### New linked page list: `reserved_page_list` in  `struct domain`
> +
> +Right now, for normal domains, on assigning pages to domain, pages allocated
> +from heap allocator as guest RAM shall be inserted to one linked page
> +list `page_list` for later managing and storing.
> +
> +So in order to tell, pages allocated from static memory, shall be inserted
> +to a different linked page list `reserved_page_list`.

You already have the flag ``PGC_reserved`` to indicate whether the 
memory is reserved or not. So why do you also need to keep such pages in 
a separate linked list?

> +
> +Later, when domain get destroyed and memory relinquished, only pages in
> +`page_list` go back to heap, and pages in `reserved_page_list` shall not.

While going through the series, I could not find any code implementing 
this. However, this is not enough to prevent a page from going to the 
heap allocator, because a domain can release memory at runtime using 
hypercalls like XENMEM_remove_from_physmap.

One of the use case is when the guest decides to balloon out some 
memory. This will call free_domheap_pages().

Effectively, you are treating static memory as domheap pages. So I think 
it would be better if you hook in free_domheap_pages() to decide which 
allocator is used.

Now, if a guest can balloon out memory, it can also balloon in memory. 
There are two cases:
    1) The region used to be RAM region statically allocated
    2) The region used to be unallocated.

I think for 1), we need to be able to re-use the pages previously 
allocated. For 2), it is not clear to me whether a guest with memory 
statically allocated should be allowed to allocate "dynamic" pages.

> +### Memory Allocation for Domains on Static Allocation
> +
> +RAM regions pre-defined as static memory for one specific domain shall be parsed
> +and reserved from the beginning. And they shall never go to any memory
> +allocator for any use.

Technically, you are introducing a new allocator. So do you mean they 
should be given to neither the buddy allocator nor the boot allocator?

> +
> +Later when allocating static memory for this specific domain, after acquiring
> +those reserved regions, users need to do a set of verifications before
> +assigning.
> +For each page there, it at least includes the following steps:
> +1. Check if it is in free state and has zero reference count.
> +2. Check if the page is reserved(`PGC_reserved`).
> +
> +Then, assigning these pages to this specific domain, and all pages go to one
> +new linked page list `reserved_page_list`.
> +
> +At last, set up guest P2M mapping. By default, it shall be mapped to the fixed
> +guest RAM address `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`, just like normal
> +domains. But later in 1:1 direct-map design, if `direct-map` is set, the guest
> +physical address will equal to physical address.
> +
> +### Static Allocation for Xen itself
> +
> +### New Deivce Tree Node: `xen,reserved_heap`

s/Deivce/Device/

> +
> +Static memory for Xen heap refers to parts of RAM reserved in the beginning
> +for Xen heap only. The memory is pre-defined through XEN configuration
> +using physical address ranges.
> +
> +The reserved memory for Xen heap is an optional feature and can be enabled
> +by adding a device tree property in the `chosen` node. Currently, this feature
> +is only supported on AArch64.
> +
> +Here is one example:
> +
> +
> +        chosen {
> +            xen,reserved-heap = <0x0 0x30000000 0x0 0x40000000>;
> +            ...
> +        };
> +
> +RAM at 0x30000000 of 1G size will be reserved as heap memory. Later, heap
> +allocator will allocate memory only from this specific region.

This section is quite confusing. I think we need to clearly 
differentiate heap vs allocator.

In Xen we have two heaps:
    1) Xen heap: It is always mapped in Xen virtual address space. This 
is mainly used for xen internal allocation.
    2) Domain heap: It may not always be mapped in Xen virtual address 
space. This is mainly used for domain memory and mapped on-demand.

For Arm64 (and x86), two heaps are allocated from the same region. But 
on Arm32, they are different.

We also have two allocators:
    1) Boot allocator: This is used during boot only. There is no 
concept of heap at this time.
    2) Buddy allocator: This is the current runtime allocator. This can 
allocate from either heap.

AFAICT, this design is introducing a 3rd allocator that will return 
domain heap pages.

Now, back to this section, are you saying you will separate the two 
heaps and force the buddy allocator to allocate xen heap pages from a 
specific region?

[...]

> +### Memory Allocation for 1:1 direct-map Domain
> +
> +Implementing memory allocation for 1:1 direct-map domain includes two parts:
> +Static Allocation for Domain and 1:1 direct-map.
> +
> +The first part has been elaborated before in chapter `Memory Allocation for
> +Domains on Static Allocation`. Then, to ensure 1:1 direct-map, when setting up
> +guest P2M mapping, it needs to make sure that guest physical address equal to
> +physical address(`gfn == mfn`).
> +
> +*DISCUSSION:
> +
> +  * Here only supports booting up one domain on static allocation or on 1:1
> +direct-map through device tree, is `xl` also needed?

I think they can be separated for now.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 12:04:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 12:04:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129126.242401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liySJ-0006KZ-LH; Tue, 18 May 2021 12:04:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129126.242401; Tue, 18 May 2021 12:04:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liySJ-0006KS-ID; Tue, 18 May 2021 12:04:11 +0000
Received: by outflank-mailman (input) for mailman id 129126;
 Tue, 18 May 2021 12:04:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1liySH-00061r-Th
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 12:04:09 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e2bf2292-acc3-4494-8e46-f23a1392c8d4;
 Tue, 18 May 2021 12:04:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5D168AE4D;
 Tue, 18 May 2021 12:04:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e2bf2292-acc3-4494-8e46-f23a1392c8d4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621339448; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ShVArUyOIEVD2D6Itz0h2kIqNU7o0HPqwOJHjk14MSo=;
	b=cBfB0klVUc0lnpdiDqIF2AlRDRju4JYcXjkVCwdwexwjHvWWjW/nQqUoBtEcQOqdCOB928
	KYG9P+k+5oO3fI5DEYtE9YQp0Wxu9CNQp8Zy5L2XG07GkTcsOSc5NBBIIMZiuUmKvWHvE5
	HnkA+qAbwWkGZTSHhFEMpal6L0z04hk=
Subject: [PATCH 1/2] automation: use DOCKER_CMD for building containers too
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 May 2021 14:04:07 +0200
Message-ID: <162133944760.25010.12549941575201320853.stgit@Wayrath>
In-Reply-To: <162133919718.25010.4197057069904956422.stgit@Wayrath>
References: <162133919718.25010.4197057069904956422.stgit@Wayrath>
User-Agent: StGit/0.23
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

From: Dario Faggioli <dario@Solace.fritz.box>

Use DOCKER_CMD from the environment (if defined) in the containers'
makefile too, so that, e.g., after doing `export DOCKER_CMD=podman`,
podman is used for building the containers as well.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: Doug Goldstein <cardoe@cardoe.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 automation/build/Makefile |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/automation/build/Makefile b/automation/build/Makefile
index 7c7612b1d9..a4b2b85178 100644
--- a/automation/build/Makefile
+++ b/automation/build/Makefile
@@ -2,6 +2,7 @@
 # the base of where these containers will appear
 REGISTRY := registry.gitlab.com/xen-project/xen
 CONTAINERS = $(subst .dockerfile,,$(wildcard */*.dockerfile))
+DOCKER_CMD ?= docker
 
 help:
 	@echo "Builds containers for building Xen based on different distros"
@@ -10,9 +11,9 @@ help:
 	@echo "To push container builds, set the env var PUSH"
 
 %: %.dockerfile ## Builds containers
-	docker build -t $(REGISTRY)/$(@D):$(@F) -f $< $(<D)
+	$(DOCKER_CMD) build -t $(REGISTRY)/$(@D):$(@F) -f $< $(<D)
 	@if [ ! -z $${PUSH+x} ]; then \
-		docker push $(REGISTRY)/$(@D):$(@F); \
+		$(DOCKER_CMD) push $(REGISTRY)/$(@D):$(@F); \
 	fi
 
 .PHONY: all
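
For what it's worth, the `?=` operator the patch relies on can be
demonstrated standalone (with a throwaway makefile, not the real
automation/build one):

```shell
# '?=' assigns only when the variable is not already set (e.g. in the
# environment), which is what lets `export DOCKER_CMD=podman` take effect.
printf 'DOCKER_CMD ?= docker\nshow:\n\t@echo $(DOCKER_CMD)\n' \
    > /tmp/docker_cmd_demo.mk

make -s -f /tmp/docker_cmd_demo.mk show                    # prints: docker
DOCKER_CMD=podman make -s -f /tmp/docker_cmd_demo.mk show  # prints: podman
```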




From xen-devel-bounces@lists.xenproject.org Tue May 18 12:04:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 12:04:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129125.242391 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liySE-000635-E5; Tue, 18 May 2021 12:04:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129125.242391; Tue, 18 May 2021 12:04:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liySE-00062n-AK; Tue, 18 May 2021 12:04:06 +0000
Received: by outflank-mailman (input) for mailman id 129125;
 Tue, 18 May 2021 12:04:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1liySC-00061r-Q0
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 12:04:04 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b615a3b2-df48-4451-b539-484ab59b726e;
 Tue, 18 May 2021 12:04:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BC99BAE58;
 Tue, 18 May 2021 12:04:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b615a3b2-df48-4451-b539-484ab59b726e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621339442; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=UMXYFcUxc0UVJaWe2oeuoPn4/3Ru9pOjbKNvFVy4ETM=;
	b=rBUpWMgv2QEfnJROEx+VxI1V3L2nK5TYoevgFXX4Ot3U6gNq1H1yRSRT5E8wtyY01ux/kC
	cuhEm56msvFycCN45D9n3WtPxx6I506Bi5xUT1yTIeWxUSOnhAPWGiePXp6wcMG09a4fEc
	WpdFpit1t1caKB7Qn4QB6ELCUBbN5zg=
Subject: [PATCH 0/2] automation: fix building in the openSUSE Tumbleweed
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Roger Pau Monne <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Doug Goldstein <cardoe@cardoe.com>
Date: Tue, 18 May 2021 14:04:01 +0200
Message-ID: <162133919718.25010.4197057069904956422.stgit@Wayrath>
User-Agent: StGit/0.23
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Building Xen in openSUSE Tumbleweed in the GitLab CI was broken due to
python and libzstd development dependencies, so let's update the docker
file to fix that.

If this change is accepted, I'm happy to push a new, updated image to
our registry (ISTR that I used to have the right to do that).

While there, extend the container-runtime generalization we already have
in containerize (through the DOCKER_CMD variable) to the local building
of the containers as well.

Regards
---
Dario Faggioli (2):
      automation: use DOCKER_CMD for building containers too
      automation: fix dependencies on openSUSE Tumbleweed containers

 automation/build/suse/opensuse-tumbleweed.dockerfile | 5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



From xen-devel-bounces@lists.xenproject.org Tue May 18 12:04:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 12:04:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129128.242412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liySO-0006f0-U6; Tue, 18 May 2021 12:04:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129128.242412; Tue, 18 May 2021 12:04:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liySO-0006et-R5; Tue, 18 May 2021 12:04:16 +0000
Received: by outflank-mailman (input) for mailman id 129128;
 Tue, 18 May 2021 12:04:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1liySN-0006dQ-N2
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 12:04:15 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7fa726d3-bca4-4850-bbbc-4a5f223fb383;
 Tue, 18 May 2021 12:04:14 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A3A9B06A;
 Tue, 18 May 2021 12:04:14 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7fa726d3-bca4-4850-bbbc-4a5f223fb383
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621339454; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0LOPcwSGuMwwumFleyNeVrr2U7rfzgFVjivAhaw7nFo=;
	b=OL3mT7/6I2YER8HlvagwVM7uEqzxX/TjmpmeQkICLX+2lr2gUhavv5AD1znFJNyXqUt2+U
	klN3PrzI9CRjDOUJ4FQwGw80YjC0QXND7/lhxawll6f+Qo8WA2+ra9OA+CUeCa1K+K1RAR
	Qg/g95OHWBTVn3D4vmMZ96PBfovN3J8=
Subject: [PATCH 2/2] automation: fix dependencies on openSUSE Tumbleweed
 containers
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 May 2021 14:04:13 +0200
Message-ID: <162133945335.25010.4601866854997664898.stgit@Wayrath>
In-Reply-To: <162133919718.25010.4197057069904956422.stgit@Wayrath>
References: <162133919718.25010.4197057069904956422.stgit@Wayrath>
User-Agent: StGit/0.23
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

From: Dario Faggioli <dario@Solace.fritz.box>

Fix the build inside our openSUSE Tumbleweed container by using
the proper python development packages (and adding zstd headers).

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: Doug Goldstein <cardoe@cardoe.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
 .../build/suse/opensuse-tumbleweed.dockerfile      |    5 ++---
 1 file changed, 2 insertions(+), 3 deletions(-)

diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile b/automation/build/suse/opensuse-tumbleweed.dockerfile
index 8ff7b9b5ce..5b99cb8a53 100644
--- a/automation/build/suse/opensuse-tumbleweed.dockerfile
+++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
@@ -45,6 +45,7 @@ RUN zypper install -y --no-recommends \
         libtasn1-devel \
         libuuid-devel \
         libyajl-devel \
+        libzstd-devel \
         lzo-devel \
         make \
         nasm \
@@ -56,10 +57,8 @@ RUN zypper install -y --no-recommends \
         pandoc \
         patch \
         pkg-config \
-        python \
         python-devel \
-        python3 \
-        python3-devel \
+        python38-devel \
         systemd-devel \
         tar \
         transfig \




From xen-devel-bounces@lists.xenproject.org Tue May 18 12:05:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 12:05:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129136.242424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liyU2-0007p7-AZ; Tue, 18 May 2021 12:05:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129136.242424; Tue, 18 May 2021 12:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liyU2-0007p0-7N; Tue, 18 May 2021 12:05:58 +0000
Received: by outflank-mailman (input) for mailman id 129136;
 Tue, 18 May 2021 12:05:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liyU0-0007or-QD
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 12:05:56 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liyU0-000259-L4; Tue, 18 May 2021 12:05:56 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liyU0-0000rv-FH; Tue, 18 May 2021 12:05:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=s527q3jSUoyvqtJKS+4JbLgnjGNc4W4dYEbupvnnUeU=; b=0necZ8mO6HtDLKmYlRqpP4gpia
	pOYS9rmlo7OuCJuo69vF7VfUfYveMy6jc83FFZn+Tfv8ORP+oQzzCdqy95UgmBq/QIKeYSOptf3CQ
	b1suK+8HJOI7fal4+43tPH3XGNPlM9jH3xj7sLhfHv+Aox57mEjzo9y+xJRYRPtisFSg=;
Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-11-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7e9bacde-8a1c-c9f8-a06d-2f39f2192315@xen.org>
Date: Tue, 18 May 2021 13:05:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-11-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> This commit introduces allocate_static_memory to allocate static memory as
> guest RAM for domain on Static Allocation.
> 
> It uses alloc_domstatic_pages to allocate pre-defined static memory banks
> for this domain, and uses guest_physmap_add_page to set up P2M table,
> guest starting at fixed GUEST_RAM0_BASE, GUEST_RAM1_BASE.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c | 157 +++++++++++++++++++++++++++++++++++-
>   1 file changed, 155 insertions(+), 2 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 30b55588b7..9f662313ad 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct domain *d,
>       return true;
>   }
>   
> +/*
> + * #ram_index and #ram_addr refer to the index and starting address of the
> + * guest memory bank stored in kinfo->mem.
> + * Static memory at #smfn of #tot_size shall be mapped at #sgfn, and
> + * #sgfn will be the next guest address to map when returning.
> + */
> +static bool __init allocate_static_bank_memory(struct domain *d,
> +                                               struct kernel_info *kinfo,
> +                                               int ram_index,

Please use unsigned.

> +                                               paddr_t ram_addr,
> +                                               gfn_t* sgfn,

I am confused, what is the difference between ram_addr and sgfn?

> +                                               mfn_t smfn,
> +                                               paddr_t tot_size)
> +{
> +    int res;
> +    struct membank *bank;
> +    paddr_t _size = tot_size;
> +
> +    bank = &kinfo->mem.bank[ram_index];
> +    bank->start = ram_addr;
> +    bank->size = bank->size + tot_size;
> +
> +    while ( tot_size > 0 )
> +    {
> +        unsigned int order = get_allocation_size(tot_size);
> +
> +        res = guest_physmap_add_page(d, *sgfn, smfn, order);
> +        if ( res )
> +        {
> +            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
> +            return false;
> +        }
> +
> +        *sgfn = gfn_add(*sgfn, 1UL << order);
> +        smfn = mfn_add(smfn, 1UL << order);
> +        tot_size -= (1ULL << (PAGE_SHIFT + order));
> +    }
> +
> +    kinfo->mem.nr_banks = ram_index + 1;
> +    kinfo->unassigned_mem -= _size;
> +
> +    return true;
> +}
> +
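[Editor's note: the mapping loop above splits each static bank into power-of-two
page chunks, mapping the largest chunk that fits on every iteration. The sketch
below is a simplified host-side model of that chunking; chunk_order() is a
stand-in for get_allocation_size() (which additionally caps the order), and
PAGE_SHIFT/the helper names are assumptions, not the actual Xen code.]

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12 /* 4K pages, as on Arm */

/*
 * Simplified stand-in for get_allocation_size(): the largest order such
 * that (1 << (PAGE_SHIFT + order)) bytes still fit in 'remaining'.
 */
static unsigned int chunk_order(uint64_t remaining)
{
    unsigned int order = 0;

    while ( (1ULL << (PAGE_SHIFT + order + 1)) <= remaining )
        order++;

    return order;
}

/* Count how many guest_physmap_add_page() calls the loop would make. */
static unsigned int count_map_calls(uint64_t tot_size)
{
    unsigned int calls = 0;

    while ( tot_size > 0 )
    {
        unsigned int order = chunk_order(tot_size);

        /* Each iteration maps 2^order pages, then advances sgfn/smfn. */
        tot_size -= 1ULL << (PAGE_SHIFT + order);
        calls++;
    }

    return calls;
}
```

So a 2MB + 4KB bank, for example, takes two mapping calls: one order-9
chunk followed by one order-0 page.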
>   static void __init allocate_memory(struct domain *d, struct kernel_info *kinfo)
>   {
>       unsigned int i;
> @@ -480,6 +524,116 @@ fail:
>             (unsigned long)kinfo->unassigned_mem >> 10);
>   }
>   
> +/* Allocate memory from static memory as RAM for one specific domain d. */
> +static void __init allocate_static_memory(struct domain *d,
> +                                            struct kernel_info *kinfo)
> +{
> +    int nr_banks, _banks = 0;

AFAICT, _banks is the index in the array. I think it would be clearer if 
it were called 'bank' or 'idx'.

> +    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
> +    paddr_t bank_start, bank_size;
> +    gfn_t sgfn;
> +    mfn_t smfn;
> +
> +    kinfo->mem.nr_banks = 0;
> +    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
> +    nr_banks = d->arch.static_mem.nr_banks;
> +    ASSERT(nr_banks >= 0);
> +
> +    if ( kinfo->unassigned_mem <= 0 )
> +        goto fail;
> +
> +    while ( _banks < nr_banks )
> +    {
> +        bank_start = d->arch.static_mem.bank[_banks].start;
> +        smfn = maddr_to_mfn(bank_start);
> +        bank_size = d->arch.static_mem.bank[_banks].size;

The variable names are slightly confusing because they don't tell whether 
this is physical or guest RAM. You might want to consider prefixing them 
with p (resp. g) for physical (resp. guest) RAM.

> +
> +        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT, bank_start, 0) )
> +        {
> +            printk(XENLOG_ERR
> +                    "%pd: cannot allocate static memory"
> +                    "(0x%"PRIx64" - 0x%"PRIx64")",

bank_start and bank_size are both paddr_t. So this should be PRIpaddr.

> +                    d, bank_start, bank_start + bank_size);
> +            goto fail;
> +        }
> +
> +        /*
> +         * By default, it shall be mapped to the fixed guest RAM address
> +         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> +         * Starting from RAM0(GUEST_RAM0_BASE).
> +         */

Ok. So you are first trying to exhaust the guest bank 0 and then move 
to bank 1. This wasn't entirely clear from the design document.

I am fine with that, but in this case, the developer should not need to 
know that (in fact this is not part of the ABI).

Regarding this code, I am a bit concerned about the scalability if we 
introduce a second bank.

Can we have an array of the possible guest banks and increment the index 
when exhausting the current bank?
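[Editor's note: for what it's worth, the suggestion above could look roughly
like the sketch below: a table of candidate guest RAM regions plus a cursor
that advances when the current region is exhausted. The base/size constants
and helper names here are made up for illustration; only the table-plus-index
idea comes from the review.]

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Hypothetical table of guest RAM regions (values illustrative only). */
static const struct {
    paddr_t base;
    paddr_t size;
} guest_ram[] = {
    { 0x40000000ULL,  0xc0000000ULL },   /* akin to GUEST_RAM0_BASE/SIZE */
    { 0x200000000ULL, 0xfe00000000ULL }, /* akin to GUEST_RAM1_BASE/SIZE */
};

/*
 * Place 'len' bytes of guest RAM, filling each region in turn.
 * '*idx' is the current region, '*used' how much of it is consumed.
 * Returns 0 on success, -1 if all regions are exhausted.
 */
static int place_ram(paddr_t len, unsigned int *idx, paddr_t *used)
{
    while ( len > 0 )
    {
        paddr_t left;

        if ( *idx >= sizeof(guest_ram) / sizeof(guest_ram[0]) )
            return -1;

        left = guest_ram[*idx].size - *used;
        if ( left == 0 )
        {
            ++*idx;      /* current region exhausted, move to the next */
            *used = 0;
            continue;
        }

        if ( len < left )
        {
            *used += len;
            len = 0;
        }
        else
        {
            len -= left;
            *used = guest_ram[*idx].size;
        }
    }

    return 0;
}
```

With this shape, adding a third guest bank is just one more table entry
rather than another special-cased branch.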

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 12:09:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 12:09:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129143.242435 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liyXQ-00005p-R9; Tue, 18 May 2021 12:09:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129143.242435; Tue, 18 May 2021 12:09:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liyXQ-00005i-Ne; Tue, 18 May 2021 12:09:28 +0000
Received: by outflank-mailman (input) for mailman id 129143;
 Tue, 18 May 2021 12:09:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liyXP-00005V-FW
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 12:09:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liyXP-00028H-8c; Tue, 18 May 2021 12:09:27 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liyXP-00017N-2r; Tue, 18 May 2021 12:09:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=B75SvbFvvSGSgBV6m719z90SnL+abssGo0p4Kj3jQmk=; b=BzkxiSKuut5z200zepDDLDmlTk
	etBxQoar823Om8n5W4OAlnIRIQEr8+iYW8dqegilYaLVVpLD+kL8rN6Y0UmiO1uDZx61+SIpKCmBP
	SfwPztRivE9HMRDpEDovr2XUub1yDB/Eexb1rSJwLOwm5ddUaKBnPYY82V4KZqVJmR7E=;
Subject: Re: [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain
 construction
To: Penny Zheng <penny.zheng@arm.com>, xen-devel@lists.xenproject.org,
 sstabellini@kernel.org
Cc: Bertrand.Marquis@arm.com, Wei.Chen@arm.com, nd@arm.com
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-10-penny.zheng@arm.com>
From: Julien Grall <julien@xen.org>
Message-ID: <61b41d12-c69e-fe41-0b5e-d35a485b4a51@xen.org>
Date: Tue, 18 May 2021 13:09:25 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518052113.725808-10-penny.zheng@arm.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 06:21, Penny Zheng wrote:
> This commit parses `xen,static-mem` device tree property, to acquire
> static memory info reserved for this domain, when constructing domain
> during boot-up.
> 
> Related info shall be stored in new static_mem value under per domain
> struct arch_domain.

So far, this seems to only be used during boot. So can't this be kept in 
the kinfo structure?

> 
> Right now, the implementation of allocate_static_memory is missing, and
> will be introduced later. It just BUG() out at the moment.
> 
> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> ---
>   xen/arch/arm/domain_build.c  | 58 ++++++++++++++++++++++++++++++++----
>   xen/include/asm-arm/domain.h |  3 ++
>   2 files changed, 56 insertions(+), 5 deletions(-)
> 
> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> index 282416e74d..30b55588b7 100644
> --- a/xen/arch/arm/domain_build.c
> +++ b/xen/arch/arm/domain_build.c
> @@ -2424,17 +2424,61 @@ static int __init construct_domU(struct domain *d,
>   {
>       struct kernel_info kinfo = {};
>       int rc;
> -    u64 mem;
> +    u64 mem, static_mem_size = 0;
> +    const struct dt_property *prop;
> +    u32 static_mem_len;
> +    bool static_mem = false;
> +
> +    /*
> +     * Guest RAM could be of static memory from static allocation,
> +     * which will be specified through "xen,static-mem" property.
> +     */
> +    prop = dt_find_property(node, "xen,static-mem", &static_mem_len);
> +    if ( prop )
> +    {
> +        const __be32 *cell;
> +        u32 addr_cells = 2, size_cells = 2, reg_cells;
> +        u64 start, size;
> +        int i, banks;
> +        static_mem = true;
> +
> +        dt_property_read_u32(node, "#address-cells", &addr_cells);
> +        dt_property_read_u32(node, "#size-cells", &size_cells);
> +        BUG_ON(size_cells > 2 || addr_cells > 2);
> +        reg_cells = addr_cells + size_cells;
> +
> +        cell = (const __be32 *)prop->value;
> +        banks = static_mem_len / (reg_cells * sizeof (u32));
> +        BUG_ON(banks > NR_MEM_BANKS);
> +
> +        for ( i = 0; i < banks; i++ )
> +        {
> +            device_tree_get_reg(&cell, addr_cells, size_cells, &start, &size);
> +            d->arch.static_mem.bank[i].start = start;
> +            d->arch.static_mem.bank[i].size = size;
> +            static_mem_size += size;
> +
> +            printk(XENLOG_INFO
> +                    "Static Memory Bank[%d] for Domain %pd:"
> +                    "0x%"PRIx64"-0x%"PRIx64"\n",
> +                    i, d,
> +                    d->arch.static_mem.bank[i].start,
> +                    d->arch.static_mem.bank[i].start +
> +                    d->arch.static_mem.bank[i].size);
> +        }
> +        d->arch.static_mem.nr_banks = banks;
> +    }

Could we allocate the memory as we parse?
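[Editor's note: the cell arithmetic the hunk above relies on (bank count from
the property length, and folding big-endian address/size cells into u64s, as
device_tree_get_reg() does) can be sketched host-side as below. be32() is a
stand-in for Xen's be32_to_cpu() and assumes a little-endian host; all names
here are illustrative.]

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for be32_to_cpu(); assumes a little-endian host. */
static uint32_t be32(uint32_t v)
{
    return __builtin_bswap32(v);
}

/* Each bank entry occupies (addr_cells + size_cells) u32 cells. */
static uint32_t nr_banks(uint32_t prop_len, uint32_t addr_cells,
                         uint32_t size_cells)
{
    return prop_len / ((addr_cells + size_cells) * (uint32_t)sizeof(uint32_t));
}

/*
 * Fold 'cells' big-endian u32 cells into one u64, advancing the cursor,
 * mirroring what device_tree_get_reg() does for each address and size.
 */
static uint64_t read_cells(const uint32_t **cellp, uint32_t cells)
{
    uint64_t val = 0;

    while ( cells-- )
        val = (val << 32) | be32(*(*cellp)++);

    return val;
}
```

E.g. with #address-cells = #size-cells = 2, a 32-byte property describes
exactly two banks, each consuming four cells.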

>   
>       rc = dt_property_read_u64(node, "memory", &mem);
> -    if ( !rc )
> +    if ( !static_mem && !rc )
>       {
>           printk("Error building DomU: cannot read \"memory\" property\n");
>           return -EINVAL;
>       }
> -    kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
> +    kinfo.unassigned_mem = static_mem ? static_mem_size : (paddr_t)mem * SZ_1K;
>   
> -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d->max_vcpus, mem);
> +    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n",
> +            d->max_vcpus, (kinfo.unassigned_mem) >> 10);
>   
>       kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
>   
> @@ -2452,7 +2496,11 @@ static int __init construct_domU(struct domain *d,
>       /* type must be set before allocate memory */
>       d->arch.type = kinfo.type;
>   #endif
> -    allocate_memory(d, &kinfo);
> +    if ( static_mem )
> +        /* allocate_static_memory(d, &kinfo); */
> +        BUG();
> +    else
> +        allocate_memory(d, &kinfo);
>   
>       rc = prepare_dtb_domU(d, &kinfo);
>       if ( rc < 0 )
> diff --git a/xen/include/asm-arm/domain.h b/xen/include/asm-arm/domain.h
> index c9277b5c6d..81b8eb453c 100644
> --- a/xen/include/asm-arm/domain.h
> +++ b/xen/include/asm-arm/domain.h
> @@ -10,6 +10,7 @@
>   #include <asm/gic.h>
>   #include <asm/vgic.h>
>   #include <asm/vpl011.h>
> +#include <asm/setup.h>
>   #include <public/hvm/params.h>
>   
>   struct hvm_domain
> @@ -89,6 +90,8 @@ struct arch_domain
>   #ifdef CONFIG_TEE
>       void *tee;
>   #endif
> +
> +    struct meminfo static_mem;
>   }  __cacheline_aligned;
>   
>   struct arch_vcpu
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 12:09:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 12:09:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129146.242449 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liyXj-0000aC-5Q; Tue, 18 May 2021 12:09:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129146.242449; Tue, 18 May 2021 12:09:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liyXj-0000a5-1a; Tue, 18 May 2021 12:09:47 +0000
Received: by outflank-mailman (input) for mailman id 129146;
 Tue, 18 May 2021 12:09:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liyXh-0000Zg-PI; Tue, 18 May 2021 12:09:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liyXh-00028X-LK; Tue, 18 May 2021 12:09:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1liyXh-0002Fq-DK; Tue, 18 May 2021 12:09:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1liyXh-0003SG-Ct; Tue, 18 May 2021 12:09:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=+UpWNLE32KAvv+1XsXlrXZouWNF+/rhXcVRlyuZFAwg=; b=1WiIGVYYfhb+y4FmYLwPNf5NV5
	7ZRYSQqgDW3v0nPKTHsGkc7TSCLF8waHl+T1V8kfaLjNo8laJBSJcM0C99tiA5iC18xCCwRj0ER9D
	1xvMLwzsZiLq4GvXtmyTW2V/tVJ5RCoxNTC/CTc2f9JiN8uSDcQ9Yx0g80WSVU0MtK8U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161986-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 161986: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=367196caa07ac31443bc360145cc10fbef4fdf92
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 12:09:45 +0000

flight 161986 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161986/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                367196caa07ac31443bc360145cc10fbef4fdf92
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  271 days
Failing since        152659  2020-08-21 14:07:39 Z  269 days  495 attempts
Testing same since   161986  2021-05-18 00:39:37 Z    0 days    1 attempts

------------------------------------------------------------
504 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 153748 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 18 12:13:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 12:13:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129157.242462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liybE-00029o-Tp; Tue, 18 May 2021 12:13:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129157.242462; Tue, 18 May 2021 12:13:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1liybE-00029h-Qk; Tue, 18 May 2021 12:13:24 +0000
Received: by outflank-mailman (input) for mailman id 129157;
 Tue, 18 May 2021 12:13:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1liybD-00029b-M4
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 12:13:23 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liybD-0002Dq-9v; Tue, 18 May 2021 12:13:23 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1liybD-0001fz-3z; Tue, 18 May 2021 12:13:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=pM30EnypZaP2BuhRnIaEemhN0UI2YBiwJXbTiA4Khus=; b=3su872PzYVx/huA7JB+KkOJ74S
	an8Y9jKSi2ai9tzSAegtpIrbyreR9qeiU3N6K38k9sz1PWrjo75j4GL3W3Heig+rKvpu64Pu94pXn
	aqjiOWldfU942gTgqUcEF8cR3LFt0Zu3IEmjhh8AK9ENLJltUP6GxWg5VzxHfDaajlkE=;
Subject: Re: [PATCH 07/10] xen/arm: introduce alloc_domstatic_pages
To: Penny Zheng <Penny.Zheng@arm.com>, Jan Beulich <jbeulich@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
 <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <75275b2f-9de3-944a-d55c-a62bbbf1bb8c@xen.org>
Date: Tue, 18 May 2021 13:13:21 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Penny,

On 18/05/2021 09:57, Penny Zheng wrote:
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 3:35 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 07/10] xen/arm: introduce alloc_domstatic_pages
>>
>> On 18.05.2021 07:21, Penny Zheng wrote:
>>> --- a/xen/common/page_alloc.c
>>> +++ b/xen/common/page_alloc.c
>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>>       {
>>>           ASSERT(page_get_owner(&pg[i]) == NULL);
>>>           page_set_owner(&pg[i], d);
>>> +        /* Use page_set_reserved_owner to set its reserved domain owner. */
>>> +        if ( (pg[i].count_info & PGC_reserved) )
>>> +            page_set_reserved_owner(&pg[i], d);
>>
>> Now this is puzzling: What's the point of setting two owner fields to the same
>> value? I also don't recall you having introduced
>> page_set_reserved_owner() for x86, so how is this going to build there?
>>
> 
> Thanks for pointing out that it will fail on x86.
> As for the same value, sure, I shall change it to a domid_t domid to record its reserved owner.
> The domid alone is enough to differentiate.
> And even when a domain gets rebooted, its struct domain may be destroyed, but its domid will
> stay the same.
> The major use case for domains on static allocation is a fully static system,
> with no runtime domain creation.

One may want to have static memory yet not care about the domid. So 
I am not in favor of restricting the domid unless there is no other way.
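For illustration, the change Penny describes — keeping a domid_t in struct 
page_info instead of a second owner pointer, so the reserved owner survives 
a domain reboot — could be modelled roughly as below. The field name, the 
PGC_reserved bit value, and the helper are illustrative assumptions, not 
the actual Xen definitions:

```c
/* Minimal model of recording a reserved page's owner by domid.
 * All names here are hypothetical stand-ins for the real Xen code. */
#include <assert.h>
#include <stdint.h>

typedef uint16_t domid_t;

#define PGC_reserved (1UL << 0)   /* illustrative bit position */

struct page_info {
    unsigned long count_info;
    domid_t reserved_domid;       /* hypothetical field */
};

/* Record the reserved owner only for pages marked PGC_reserved. */
static void page_set_reserved_domid(struct page_info *pg, domid_t d)
{
    if (pg->count_info & PGC_reserved)
        pg->reserved_domid = d;
}
```

Because only a domid is stored, nothing here dereferences a struct domain 
that may have been destroyed across a reboot.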

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 12:52:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 12:52:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129176.242492 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lizCq-0006Cc-BV; Tue, 18 May 2021 12:52:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129176.242492; Tue, 18 May 2021 12:52:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lizCq-0006CV-8U; Tue, 18 May 2021 12:52:16 +0000
Received: by outflank-mailman (input) for mailman id 129176;
 Tue, 18 May 2021 12:52:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lizCo-0006CP-QR
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 12:52:14 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lizCn-0002q3-PP; Tue, 18 May 2021 12:52:13 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lizCn-0004eJ-JS; Tue, 18 May 2021 12:52:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=4BAW2gMWNYCTUhSbN+226fa0j0oBx0VQ07FNZy+FR8E=; b=3YEdx0DzwFBhUzxEeafrMnxoX1
	8mqxeFQNhUFrqtFOzlIgqbVytnhTbD1iUdqeW3sGiJ8KHFG4SQRKXjLBjMvguZ7yu7i1t8oriuxUb
	0eQ5V1v4MDenbl12H7ZabVFlxHpR8SJmAYSsR99xr3Hhele+iSHLdWkIWjq63TnM3p1E=;
Subject: Re: [PATCH v3 2/2] tools/xenstore: simplify xenstored main loop
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210518061907.16374-1-jgross@suse.com>
 <20210518061907.16374-3-jgross@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1f3f7620-bd8a-d705-6348-136a452a0a74@xen.org>
Date: Tue, 18 May 2021 13:52:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518061907.16374-3-jgross@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 18/05/2021 07:19, Juergen Gross wrote:
> The main loop of xenstored is rather complicated due to different
> handling of socket and ring-page interfaces. Unify that handling by
> introducing interface type specific functions can_read() and
> can_write().
> 
> Take the opportunity to remove the empty list check before calling
> write_messages() because the function is already able to cope with an
> empty list.
> 
> Signed-off-by: Juergen Gross <jgross@suse.com>

Reviewed-by: Julien Grall <jgrall@amazon.com>

I have also committed the series.

Cheers,

> ---
> V2:
> - split off function vector introduction (Julien Grall)
> V3:
> - expand commit message (Julien Grall)
> ---
>   tools/xenstore/xenstored_core.c   | 77 +++++++++++++++----------------
>   tools/xenstore/xenstored_core.h   |  2 +
>   tools/xenstore/xenstored_domain.c |  2 +
>   3 files changed, 41 insertions(+), 40 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_core.c b/tools/xenstore/xenstored_core.c
> index 856f518075..883a1a582a 100644
> --- a/tools/xenstore/xenstored_core.c
> +++ b/tools/xenstore/xenstored_core.c
> @@ -1659,9 +1659,34 @@ static int readfd(struct connection *conn, void *data, unsigned int len)
>   	return rc;
>   }
>   
> +static bool socket_can_process(struct connection *conn, int mask)
> +{
> +	if (conn->pollfd_idx == -1)
> +		return false;
> +
> +	if (fds[conn->pollfd_idx].revents & ~(POLLIN | POLLOUT)) {
> +		talloc_free(conn);
> +		return false;
> +	}
> +
> +	return (fds[conn->pollfd_idx].revents & mask) && !conn->is_ignored;
> +}
> +
> +static bool socket_can_write(struct connection *conn)
> +{
> +	return socket_can_process(conn, POLLOUT);
> +}
> +
> +static bool socket_can_read(struct connection *conn)
> +{
> +	return socket_can_process(conn, POLLIN);
> +}
> +
>   const struct interface_funcs socket_funcs = {
>   	.write = writefd,
>   	.read = readfd,
> +	.can_write = socket_can_write,
> +	.can_read = socket_can_read,
>   };
>   
>   static void accept_connection(int sock)
> @@ -2296,47 +2321,19 @@ int main(int argc, char *argv[])
>   			if (&next->list != &connections)
>   				talloc_increase_ref_count(next);
>   
> -			if (conn->domain) {
> -				if (domain_can_read(conn))
> -					handle_input(conn);
> -				if (talloc_free(conn) == 0)
> -					continue;
> -
> -				talloc_increase_ref_count(conn);
> -				if (domain_can_write(conn) &&
> -				    !list_empty(&conn->out_list))
> -					handle_output(conn);
> -				if (talloc_free(conn) == 0)
> -					continue;
> -			} else {
> -				if (conn->pollfd_idx != -1) {
> -					if (fds[conn->pollfd_idx].revents
> -					    & ~(POLLIN|POLLOUT))
> -						talloc_free(conn);
> -					else if ((fds[conn->pollfd_idx].revents
> -						  & POLLIN) &&
> -						 !conn->is_ignored)
> -						handle_input(conn);
> -				}
> -				if (talloc_free(conn) == 0)
> -					continue;
> -
> -				talloc_increase_ref_count(conn);
> -
> -				if (conn->pollfd_idx != -1) {
> -					if (fds[conn->pollfd_idx].revents
> -					    & ~(POLLIN|POLLOUT))
> -						talloc_free(conn);
> -					else if ((fds[conn->pollfd_idx].revents
> -						  & POLLOUT) &&
> -						 !conn->is_ignored)
> -						handle_output(conn);
> -				}
> -				if (talloc_free(conn) == 0)
> -					continue;
> +			if (conn->funcs->can_read(conn))
> +				handle_input(conn);
> +			if (talloc_free(conn) == 0)
> +				continue;
>   
> -				conn->pollfd_idx = -1;
> -			}
> +			talloc_increase_ref_count(conn);
> +
> +			if (conn->funcs->can_write(conn))
> +				handle_output(conn);
> +			if (talloc_free(conn) == 0)
> +				continue;
> +
> +			conn->pollfd_idx = -1;
>   		}
>   
>   		if (delayed_requests) {
> diff --git a/tools/xenstore/xenstored_core.h b/tools/xenstore/xenstored_core.h
> index 6c4845c196..bb36111ecc 100644
> --- a/tools/xenstore/xenstored_core.h
> +++ b/tools/xenstore/xenstored_core.h
> @@ -90,6 +90,8 @@ struct connection;
>   struct interface_funcs {
>   	int (*write)(struct connection *, const void *, unsigned int);
>   	int (*read)(struct connection *, void *, unsigned int);
> +	bool (*can_write)(struct connection *);
> +	bool (*can_read)(struct connection *);
>   };
>   
>   struct connection
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index f3cd56050e..708bf68af0 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -175,6 +175,8 @@ static int readchn(struct connection *conn, void *data, unsigned int len)
>   static const struct interface_funcs domain_funcs = {
>   	.write = writechn,
>   	.read = readchn,
> +	.can_write = domain_can_write,
> +	.can_read = domain_can_read,
>   };
>   
>   static void *map_interface(domid_t domid)
> 

-- 
Julien Grall
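The unification the patch performs — replacing the socket/ring-page special 
cases in the main loop with per-interface can_read()/can_write() callbacks — 
can be sketched in isolation as below. The "readable"/"writable" flags stand 
in for the real poll() bookkeeping, and the helper names outside 
struct interface_funcs are assumptions for illustration:

```c
/* Model of the interface_funcs dispatch pattern introduced by the patch. */
#include <assert.h>
#include <stdbool.h>

struct connection;

struct interface_funcs {
    bool (*can_write)(struct connection *);
    bool (*can_read)(struct connection *);
};

struct connection {
    const struct interface_funcs *funcs;
    bool readable, writable;   /* stand-ins for poll() results */
};

static bool model_can_read(struct connection *conn)
{
    return conn->readable;
}

static bool model_can_write(struct connection *conn)
{
    return conn->writable;
}

static const struct interface_funcs model_funcs = {
    .can_write = model_can_write,
    .can_read = model_can_read,
};

/* The main loop no longer needs to know the interface type: */
static bool conn_can_read(struct connection *conn)
{
    return conn->funcs->can_read(conn);
}

static bool conn_can_write(struct connection *conn)
{
    return conn->funcs->can_write(conn);
}
```

A second funcs table (as domain_funcs does for ring pages) plugs into the 
same two call sites, which is what lets the if/else arms in main() collapse.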


From xen-devel-bounces@lists.xenproject.org Tue May 18 13:09:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 13:09:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129185.242516 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lizTI-0007u5-2S; Tue, 18 May 2021 13:09:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129185.242516; Tue, 18 May 2021 13:09:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lizTH-0007ty-Vd; Tue, 18 May 2021 13:09:15 +0000
Received: by outflank-mailman (input) for mailman id 129185;
 Tue, 18 May 2021 13:09:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lizTH-0007ts-EX
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 13:09:15 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lizTF-00038X-3P; Tue, 18 May 2021 13:09:13 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lizTE-000687-T1; Tue, 18 May 2021 13:09:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=VpFEJf3dK1mzzzqLQxOB5odV1T/4BCQbbWCtEN/OTNU=; b=LZGktt1l2jIoT6Zl3QJL17AB1A
	U/uxZDGFO12tk45PrsyvmSl+vU2u9ZqTiH/BU1MM+TVhH9ZdIpI2v0PngRhpPyxCYDhrvlKk4DPDV
	OsmqkfBZ/sEYWYHZDR5pogRp4+r/80ogBMoB0QD6a567CravTukQN753hdINUEzr/O78=;
Subject: Re: [PATCH] tools/xenstore: claim resources when running as daemon
To: Juergen Gross <jgross@suse.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210514084133.18658-1-jgross@suse.com>
 <1e38cce0-6960-ac21-b349-dac8551e23ed@xen.org>
 <fe5f1e6a-1a89-ea12-feb5-318f25d4281f@suse.com>
 <39860a0c-5ac5-2537-532f-6ce288cc7219@xen.org>
 <e69f7d4c-a616-1265-e909-fd14feea7412@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <1b1e32ce-a43e-65d5-ad59-35c045b03fbe@xen.org>
Date: Tue, 18 May 2021 14:09:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <e69f7d4c-a616-1265-e909-fd14feea7412@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 18/05/2021 07:43, Juergen Gross wrote:
> On 17.05.21 17:55, Julien Grall wrote:
>>
>>>
>>>> So the admin should be able to configure them. At this point, I think
>>>> the two limits should be set by the initscript rather than xenstored
>>>> itself.
>>>
>>> But the admin would need to know the Xen internals for selecting the
>>> correct limits. In the end I'd be fine with moving this modification to
>>> the script starting Xenstore (which would be launch-xenstore), but the
>>> configuration item should be "max number of domains to support".
>>
>> I would be fine with "max number of domains to support". What I care
>> most about here is that the limits are actually applied most of (if not
>> all) the time.
> 
> I did another test and found:
> 
> - the xl daemon for a guest will use 2 socket connections
> - qemu for a HVM guest will use 3 socket connections
> - qemu for a PV guest is using one socket connection
> - 14 other files are used by xenstored
> 
> So we should set the limit to 5 * n_dom + 100 (some headroom will be
> nice IMO).

This looks fine to me.
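A launch script could apply the "5 * n_dom + 100" file-descriptor formula 
from the figures above along these lines. The variable names and the 
default of 256 domains are assumptions for illustration, not taken from 
launch-xenstore:

```shell
# Hypothetical fragment for a xenstored launch script: derive the
# open-file limit from the configured maximum number of domains,
# using 5 sockets/files per domain plus 100 of headroom.
max_doms=${XENSTORED_MAX_DOMAINS:-256}
fd_limit=$(( 5 * max_doms + 100 ))
echo "raising open-file limit to $fd_limit for up to $max_doms domains"
ulimit -n "$fd_limit" 2>/dev/null || \
    echo "warning: could not raise the open-file limit to $fd_limit" >&2
```

Doing this in the script rather than in xenstored means the admin only has 
to set "max number of domains", not know the per-domain fd accounting.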

> 
>>
>>>
>>>>
>>>> This would also avoid the problem where Xenstored is not allowed to 
>>>> modify its limit (see more below).
>>>>
>>>>>
>>>>> Signed-off-by: Juergen Gross <jgross@suse.com>
>>>>> ---
>>>>>   tools/xenstore/xenstored_core.c   |  2 ++
>>>>>   tools/xenstore/xenstored_core.h   |  3 ++
>>>>>   tools/xenstore/xenstored_minios.c |  4 +++
>>>>>   tools/xenstore/xenstored_posix.c  | 46 
>>>>> +++++++++++++++++++++++++++++++
>>>>>   4 files changed, 55 insertions(+)
>>>>>
>>>>> diff --git a/tools/xenstore/xenstored_core.c 
>>>>> b/tools/xenstore/xenstored_core.c
>>>>> index b66d119a98..964e693450 100644
>>>>> --- a/tools/xenstore/xenstored_core.c
>>>>> +++ b/tools/xenstore/xenstored_core.c
>>>>> @@ -2243,6 +2243,8 @@ int main(int argc, char *argv[])
>>>>>           xprintf = trace;
>>>>>   #endif
>>>>> +    claim_resources();
>>>>> +
>>>>>       signal(SIGHUP, trigger_reopen_log);
>>>>>       if (tracefile)
>>>>>           tracefile = talloc_strdup(NULL, tracefile);
>>>>> diff --git a/tools/xenstore/xenstored_core.h 
>>>>> b/tools/xenstore/xenstored_core.h
>>>>> index 1467270476..ac26973648 100644
>>>>> --- a/tools/xenstore/xenstored_core.h
>>>>> +++ b/tools/xenstore/xenstored_core.h
>>>>> @@ -255,6 +255,9 @@ void daemonize(void);
>>>>>   /* Close stdin/stdout/stderr to complete daemonize */
>>>>>   void finish_daemonize(void);
>>>>> +/* Set OOM-killer score and raise ulimit. */
>>>>> +void claim_resources(void);
>>>>> +
>>>>>   /* Open a pipe for signal handling */
>>>>>   void init_pipe(int reopen_log_pipe[2]);
>>>>> diff --git a/tools/xenstore/xenstored_minios.c 
>>>>> b/tools/xenstore/xenstored_minios.c
>>>>> index c94493e52a..df8ff580b0 100644
>>>>> --- a/tools/xenstore/xenstored_minios.c
>>>>> +++ b/tools/xenstore/xenstored_minios.c
>>>>> @@ -32,6 +32,10 @@ void finish_daemonize(void)
>>>>>   {
>>>>>   }
>>>>> +void claim_resources(void)
>>>>> +{
>>>>> +}
>>>>> +
>>>>>   void init_pipe(int reopen_log_pipe[2])
>>>>>   {
>>>>>       reopen_log_pipe[0] = -1;
>>>>> diff --git a/tools/xenstore/xenstored_posix.c 
>>>>> b/tools/xenstore/xenstored_posix.c
>>>>> index 48c37ffe3e..0074fbd8b2 100644
>>>>> --- a/tools/xenstore/xenstored_posix.c
>>>>> +++ b/tools/xenstore/xenstored_posix.c
>>>>> @@ -22,6 +22,7 @@
>>>>>   #include <fcntl.h>
>>>>>   #include <stdlib.h>
>>>>>   #include <sys/mman.h>
>>>>> +#include <sys/resource.h>
>>>>>   #include "utils.h"
>>>>>   #include "xenstored_core.h"
>>>>> @@ -87,6 +88,51 @@ void finish_daemonize(void)
>>>>>       close(devnull);
>>>>>   }
>>>>> +static void avoid_oom_killer(void)
>>>>> +{
>>>>> +    char path[32];
>>>>> +    char val[] = "-500";
>>>>> +    int fd;
>>>>> +
>>>>> +    snprintf(path, sizeof(path), "/proc/%d/oom_score_adj", 
>>>>> (int)getpid());
>>>>
>>>> This looks Linux specific. How about other OSes?
>>>
>>> I don't know whether other OSes have an OOM killer, and if they do, how
>>> to configure it. It is a best effort attempt, after all.
>>
>> I have CCed Roger who should be able to help for FreeBSD.
> 
> It would be possible to set the OOM-score from the launch script, too.

It would be ideal if all the limits were set from the launch script. At 
least then they can be changed by the admin and also possibly be enforced 
(if Xenstored is not allowed to change them itself).
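Setting the OOM score from the more privileged launch script, as suggested 
here, might look like the fragment below. XENSTORED_PID is a stand-in (a 
real script would use the daemon's pid), and the -500 score follows the 
patch under discussion; the oom_score_adj file is Linux-specific:

```shell
# Hypothetical fragment for a launch script: lower the OOM score of an
# already-started xenstored, and report loudly if that fails so the
# admin notices before the OOM killer does.
XENSTORED_PID=$$      # stand-in; use the real daemon pid in practice
OOM_SCORE_ADJ=-500
adj="/proc/$XENSTORED_PID/oom_score_adj"
if [ -w "$adj" ] && echo "$OOM_SCORE_ADJ" > "$adj" 2>/dev/null; then
    echo "OOM score for $XENSTORED_PID set to $OOM_SCORE_ADJ"
else
    echo "warning: could not adjust OOM score for $XENSTORED_PID" >&2
fi
```

Because the script typically runs with more privilege than xenstored, the 
write is less likely to fail silently than the in-daemon attempt.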

> 
>>
>>>
>>>>
>>>>> +
>>>>> +    fd = open(path, O_WRONLY);
>>>>> +    /* Do nothing if file doesn't exist. */
>>>>
>>>> Your commit message leads to think that we *must* configure the OOM. 
>>>> If not, then we should not continue. But here, this suggest this is 
>>>> optional. In fact...
>>>
>>> I can modify the commit message by adding a "Try to".
>>>
>>>>
>>>>> +    if (fd < 0)
>>>>> +        return;
>>>>> +    /* Ignore errors. */
>>>>> +    write(fd, val, sizeof(val));
>>>>
>>>> ... xenstored may not be allowed to modify its own parameters. So 
>>>> this would continue silently without the admin necessarily knowning 
>>>> the limit wasn't applied.
>>>
>>> I can add a line in the Xenstore log in this regard.
>>
>> This feels wrong to me. If a limit cannot be applied then it should 
>> fail early rather than possibly at the wrong moment a few days 
>> (months?) after.
> 
I think issuing a warning would be better here. We have been running with
no OOM adjustments for years now.

Right, this is a sign that the OOM adjustment is not necessary in most 
cases. But the fact you sent this patch suggests that you or someone else 
saw Xenstored crashing.

The idea with failing hard and early is that the admin will immediately be 
aware the adjustment didn't happen. They can then take action before it is 
too late (e.g. Xenstored getting killed while VMs are running).

With a warning, I am worried the admin may not notice it, because warnings 
are easy to miss.

Anyway, if the OOM adjustment is moved to the launch script, then there is 
less chance it will fail (the launch script should have higher privileges 
than xenstored).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 13:20:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 13:20:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129194.242533 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lizdy-0001eE-8T; Tue, 18 May 2021 13:20:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129194.242533; Tue, 18 May 2021 13:20:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lizdy-0001e7-5U; Tue, 18 May 2021 13:20:18 +0000
Received: by outflank-mailman (input) for mailman id 129194;
 Tue, 18 May 2021 13:20:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9UfV=KN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lizdw-0001dl-JG
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 13:20:16 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 90d936fb-ddbc-456d-a3fd-9d1c5735fcd0;
 Tue, 18 May 2021 13:20:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 90d936fb-ddbc-456d-a3fd-9d1c5735fcd0
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621344015;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=tkr509xDl2ml6KFuF5tfkfOoC8mwnSycDFcQN+B7JHU=;
  b=AuqrxVKu849fBhgyfZm7YZQrzK6mfRNYXwcf3mczxw9JhUqeNEbHkZfJ
   UocxWBwaMr3bIPKlVKx3C5tP8AeTe0MUlatmtPz9CeRzVuvOfUHtmooog
   SQD19YXD87KePM/6X3z2XO7QF+Sr0AArXrGN+O56KypYlPM6k382h8fsP
   k=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: +OrUdvr3a+yVlnLCPLOtGjZZ06ij6/hPUNoHgPo6fzm6oqtoKnW3vdmynCu8ppIV6KzLTgUX2X
 Z75bwTm+8MbAVJaW/2dtC+2YwTjwaXVAcvJL6xQNgZqZWcLf7w7ryIt0hRFWQfShnChtUShJ45
 O8kli5Zat40Kaqhlmrfs0ByOCY7iT+mZpZ1Iz3BuRDAxnTdLuTBRKXnMp0org5G0Yw2xov6nhQ
 fQPHmrTJ1/Y7R86ebg0NIj6mwEMPcLeXQMgvpaVpsoRUQHUXfVaeDMrcEGhbmst/k7QNobzIBL
 LMA=
X-SBRS: 5.1
X-MesageID: 45580777
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:VUYRO6riVf07sgGZaQOuyV0aV5trL9V00zEX/kB9WHVpm5Sj5r
 mTdYcgpGfJYVcqKQ0dcLW7UpVoLkmskKKdjbN+AV7mZniBhILKFvAc0WKB+UyFJ8SWzIc0vs
 oNHJSWSueAamSS5vyb3ODMKadD/DDxytHKuQ6x9RZQpEpRGtpdBk9Ce3ymO3wzYAxIQbcCON
 6i6tFcpzymEE5nEPiTNz0gWueGi9rVmJfheBJDKAUg7GC1/ECVwY+/Nx2WmjIfWT9J27tKyx
 mxryXJooGnvLWB1hrRzSvt449NmN3no+EzdfCku4wwLzqpsAKhf5lnV6DHgzwvuuGo7z8R4a
 nxiiZlG8F9r0/XfnmorTvhsjOQpAoG2jvHzFDdvnf5u8z+Q1sBerh8bKxiA2bk13Y=
X-IronPort-AV: E=Sophos;i="5.82,310,1613451600"; 
   d="scan'208";a="45580777"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fawffb896T402Ww8IceMFxLi23FMxpbNrPQio+Lotwt4k3FGnNL/IcnyjwUQTxrwww5flblWcIbGuk+8LC6+bbG0FUpdyfniegJ69+wj37pDiF7bmmwndk9r7XO57ycneTTMoxiQTz+mnKSIlkOslez2GCvx/pzcvg8K2LePGclW16wCqj9vAvV2jKZi5AAjrvZlHlWcf6AK8JZKZKP9XuA9bhKm8EB86ok8DL0mnWOWBNIXau2W0k4HVd//jmXqE9nlyd1WJcHg3SuGu8ByKYvDW2yKrRbuMI6kFDkpP27HBE8ty+yYjvMgnQm0sHiC/N42IkcMFfrt/mDmWyJRhg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rok4PL/LVmbq0cbbPKzEQbQJ5uwA90o6l02yW9rgrIY=;
 b=lZ/nCKXQCZ4vKBgswhdetYKUgS/pV4GfPXO3nYK/GjnDMTVp+Hi9x8iLNe0baIPYIzpoop6/22VEYDykRm0LsK2MqsOfcf2mRZX3BJnxhdO75qCPjKU01YOAY3hLT9wzE4IkvTL6r5w0cOHjb0qoiyY2/JtgdtBbiG4dfIN5rSQYHZm3pGnGi1geLWME+DLUIRAPM8kFywqXqNvaPBiZNuvgBDdYXMfAhx80bGo2jyP+u/nxgLitaSQcmO6VpGIWGvfOzL2kQKpq4pg9pt5sRqJSL9QUYwkj6A8yYUuRimdl/oJe/l00+i6piYOHDiYVU+KdiUZnvCHrpX7vJSKYzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Rok4PL/LVmbq0cbbPKzEQbQJ5uwA90o6l02yW9rgrIY=;
 b=Md4tihh8ialx4Bb5kgVKzlBfL5puNPX5pzNuVHuB5lzXe3kWRK3KX/wfzQ2E8IFNuXtkZWGhPT3GNqV4gHWtRp/LOl/PwgKCNXtqaayVIJfW+7M6RLbYVRNDd/ga2rcdF33J5MqZUYHWjee/MiEAkDnBLOwF5nJT3LRNBIgmJyI=
Date: Tue, 18 May 2021 15:20:05 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
CC: <xen-devel@lists.xenproject.org>, Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/2] automation: fix dependencies on openSUSE Tumbleweed
 containers
Message-ID: <YKO/BcUAtjSgc2pV@Air-de-Roger>
References: <162133919718.25010.4197057069904956422.stgit@Wayrath>
 <162133945335.25010.4601866854997664898.stgit@Wayrath>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <162133945335.25010.4601866854997664898.stgit@Wayrath>
MIME-Version: 1.0

On Tue, May 18, 2021 at 02:04:13PM +0200, Dario Faggioli wrote:
> From: Dario Faggioli <dario@Solace.fritz.box>
> 
> Fix the build inside our openSUSE Tumbleweed container by using
> the proper python development packages (and adding zstd headers).
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> ---
> Cc: Doug Goldstein <cardoe@cardoe.com>
> Cc: Roger Pau Monne <roger.pau@citrix.com>
> Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> ---
>  .../build/suse/opensuse-tumbleweed.dockerfile      |    5 ++---
>  1 file changed, 2 insertions(+), 3 deletions(-)
> 
> diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile b/automation/build/suse/opensuse-tumbleweed.dockerfile
> index 8ff7b9b5ce..5b99cb8a53 100644
> --- a/automation/build/suse/opensuse-tumbleweed.dockerfile
> +++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
> @@ -45,6 +45,7 @@ RUN zypper install -y --no-recommends \
>          libtasn1-devel \
>          libuuid-devel \
>          libyajl-devel \
> +        libzstd-devel \
>          lzo-devel \
>          make \
>          nasm \
> @@ -56,10 +57,8 @@ RUN zypper install -y --no-recommends \
>          pandoc \
>          patch \
>          pkg-config \
> -        python \
>          python-devel \
> -        python3 \
> -        python3-devel \
> +        python38-devel \

When I tested, python3-devel was translated into python38-devel, which I
think is better, as we then don't need to modify the Dockerfile for
every new Python version?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 18 13:33:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 13:33:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129205.242551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lizr6-0003Ed-Sf; Tue, 18 May 2021 13:33:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129205.242551; Tue, 18 May 2021 13:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lizr6-0003EW-PM; Tue, 18 May 2021 13:33:52 +0000
Received: by outflank-mailman (input) for mailman id 129205;
 Tue, 18 May 2021 13:33:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lizr5-0003EQ-Jx
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 13:33:51 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lizr3-0003YP-9l; Tue, 18 May 2021 13:33:49 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lizr3-0007pZ-3X; Tue, 18 May 2021 13:33:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH 05/14] tools/libs: guest: Use const whenever we point to
 literal strings
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210405155713.29754-1-julien@xen.org>
 <20210405155713.29754-6-julien@xen.org> <YJqbrs482KY23QQE@perard>
From: Julien Grall <julien@xen.org>
Message-ID: <c933099d-c448-a57d-0510-ca8bc7e368ab@xen.org>
Date: Tue, 18 May 2021 14:33:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJqbrs482KY23QQE@perard>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Anthony,

On 11/05/2021 15:58, Anthony PERARD wrote:
> On Mon, Apr 05, 2021 at 04:57:04PM +0100, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Literal strings are not meant to be modified. So we should use const
>> char * rather than char * when we want to store a pointer to them.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>> diff --git a/tools/libs/guest/xg_dom_x86.c b/tools/libs/guest/xg_dom_x86.c
>> index 2953aeb90b35..e379b07f9945 100644
>> --- a/tools/libs/guest/xg_dom_x86.c
>> +++ b/tools/libs/guest/xg_dom_x86.c
>> @@ -1148,11 +1148,12 @@ static int vcpu_hvm(struct xc_dom_image *dom)
>>   
>>   /* ------------------------------------------------------------------------ */
>>   
>> -static int x86_compat(xc_interface *xch, uint32_t domid, char *guest_type)
>> +static int x86_compat(xc_interface *xch, uint32_t domid,
>> +                      const char *guest_type)
>>   {
>>       static const struct {
>> -        char           *guest;
>> -        uint32_t        size;
>> +        const char      *guest;
>> +        uint32_t       size;
> 
> It seems that one space has been removed by mistake just before "size".

Well spotted. I will fix on commit.

> 
> The rest looks good:
> Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thank you!

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 13:48:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 13:48:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129216.242568 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj05J-0004l9-Ag; Tue, 18 May 2021 13:48:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129216.242568; Tue, 18 May 2021 13:48:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj05J-0004l2-7c; Tue, 18 May 2021 13:48:33 +0000
Received: by outflank-mailman (input) for mailman id 129216;
 Tue, 18 May 2021 13:48:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj05H-0004kw-RA
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 13:48:31 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj05G-0003ne-4p; Tue, 18 May 2021 13:48:30 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj05F-0000bX-Uw; Tue, 18 May 2021 13:48:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Subject: Re: [PATCH 09/14] tools/console: Use const whenever we point to
 literal strings
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel@lists.xenproject.org, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210405155713.29754-1-julien@xen.org>
 <20210405155713.29754-10-julien@xen.org> <YJqgXz1s8N3T4+Fo@perard>
From: Julien Grall <julien@xen.org>
Message-ID: <87662d43-82a9-d0ac-8aaf-c42e19647961@xen.org>
Date: Tue, 18 May 2021 14:48:28 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <YJqgXz1s8N3T4+Fo@perard>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Anthony,

On 11/05/2021 16:18, Anthony PERARD wrote:
> On Mon, Apr 05, 2021 at 04:57:08PM +0100, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Literal strings are not meant to be modified. So we should use const
>> char * rather than char * when we want to store a pointer to them.
>>
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
>> ---
>> diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
>> index 4af27ffc5d02..6a8a94e31b65 100644
>> --- a/tools/console/daemon/io.c
>> +++ b/tools/console/daemon/io.c
>> @@ -109,9 +109,9 @@ struct console {
>>   };
>>   
>>   struct console_type {
>> -	char *xsname;
>> -	char *ttyname;
>> -	char *log_suffix;
>> +	const char *xsname;
> 
> I think that const of `xsname` is lost in console_init() in the same
> file.
> We have:
> 
>      static int console_init(.. )
>      {
>          struct console_type **con_type = (struct console_type **)data;
>          char *xsname, *xspath;
>          xsname = (char *)(*con_type)->xsname;
>      }
> 
> So constify "xsname" in console_init() should be part of the patch, I
> think.

Good point. I am not entirely sure why the cast was added, because the
field has always been a char *.

Anyway, I will remove it.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 14:01:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:01:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129235.242605 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0I1-0007IK-Tc; Tue, 18 May 2021 14:01:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129235.242605; Tue, 18 May 2021 14:01:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0I1-0007ID-Qd; Tue, 18 May 2021 14:01:41 +0000
Received: by outflank-mailman (input) for mailman id 129235;
 Tue, 18 May 2021 14:01:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj0Hz-0007I1-VN
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:01:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0Hy-00047v-JD; Tue, 18 May 2021 14:01:38 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0Hy-0001pY-9t; Tue, 18 May 2021 14:01:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 0/2] Use const whenever we point to literal strings (take 1)
Date: Tue, 18 May 2021 15:01:32 +0100
Message-Id: <20210518140134.31541-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Hi all,

By default, both Clang and GCC will happily compile C code where
non-const char * point to literal strings. This means the following
code will be accepted:

    char *str = "test";

    str[0] = 'a';

Literal strings reside in rodata, so they are not modifiable. Writing to
one will result in a permission fault at runtime if the permissions are
enforced in the page tables (as is the case in Xen).

I am not aware of any code trying to modify literal strings in Xen.
However, non-const char * is frequently used to point to literal
strings. Given the size of the codebase, there is a risk of
inadvertently introducing code that modifies a literal string.

Therefore it would be better to enforce using const when pointing to
such strings. Both GCC and Clang provide an option to warn in such
cases (see -Wwrite-strings), which Xen could therefore use.

This series doesn't yet make use of -Wwrite-strings because
the tree is not fully converted. Instead, it contains some easy
and non-controversial use of const in the code.

Julien Grall (2):
  xen/char: console: Use const whenever we point to literal strings
  tools/console: Use const whenever we point to literal strings

 tools/console/client/main.c |  4 ++--
 tools/console/daemon/io.c   | 15 ++++++++-------
 xen/drivers/char/console.c  |  7 ++++---
 3 files changed, 14 insertions(+), 12 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 14:01:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:01:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129237.242627 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0I3-0007oa-F1; Tue, 18 May 2021 14:01:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129237.242627; Tue, 18 May 2021 14:01:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0I3-0007nk-BW; Tue, 18 May 2021 14:01:43 +0000
Received: by outflank-mailman (input) for mailman id 129237;
 Tue, 18 May 2021 14:01:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj0I2-0007KH-29
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:01:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0I1-00048D-3F; Tue, 18 May 2021 14:01:41 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0I0-0001pY-R3; Tue, 18 May 2021 14:01:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 2/2] tools/console: Use const whenever we point to literal strings
Date: Tue, 18 May 2021 15:01:34 +0100
Message-Id: <20210518140134.31541-3-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210518140134.31541-1-julien@xen.org>
References: <20210518140134.31541-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Literal strings are not meant to be modified. So we should use const
char * rather than char * when we want to store a pointer to them.

Take the opportunity to remove the (char *) cast in console_init(). It
is unnecessary and casts away the const.

Signed-off-by: Julien Grall <jgrall@amazon.com>
Acked-by: Wei Liu <wl@xen.org>

---
    Changes in v2:
        - Remove the cast (char *) in console_init()
        - Add Wei's acked-by
---
 tools/console/client/main.c |  4 ++--
 tools/console/daemon/io.c   | 15 ++++++++-------
 2 files changed, 10 insertions(+), 9 deletions(-)

diff --git a/tools/console/client/main.c b/tools/console/client/main.c
index 088be28dff02..80157be42144 100644
--- a/tools/console/client/main.c
+++ b/tools/console/client/main.c
@@ -325,7 +325,7 @@ int main(int argc, char **argv)
 {
 	struct termios attr;
 	int domid;
-	char *sopt = "hn:";
+	const char *sopt = "hn:";
 	int ch;
 	unsigned int num = 0;
 	int opt_ind=0;
@@ -345,7 +345,7 @@ int main(int argc, char **argv)
 	char *end;
 	console_type type = CONSOLE_INVAL;
 	bool interactive = 0;
-	char *console_names = "serial, pv, vuart";
+	const char *console_names = "serial, pv, vuart";
 
 	while((ch = getopt_long(argc, argv, sopt, lopt, &opt_ind)) != -1) {
 		switch(ch) {
diff --git a/tools/console/daemon/io.c b/tools/console/daemon/io.c
index 4af27ffc5d02..200b575d76f8 100644
--- a/tools/console/daemon/io.c
+++ b/tools/console/daemon/io.c
@@ -87,14 +87,14 @@ struct buffer {
 };
 
 struct console {
-	char *ttyname;
+	const char *ttyname;
 	int master_fd;
 	int master_pollfd_idx;
 	int slave_fd;
 	int log_fd;
 	struct buffer buffer;
 	char *xspath;
-	char *log_suffix;
+	const char *log_suffix;
 	int ring_ref;
 	xenevtchn_handle *xce_handle;
 	int xce_pollfd_idx;
@@ -109,9 +109,9 @@ struct console {
 };
 
 struct console_type {
-	char *xsname;
-	char *ttyname;
-	char *log_suffix;
+	const char *xsname;
+	const char *ttyname;
+	const char *log_suffix;
 	bool optional;
 	bool use_gnttab;
 };
@@ -813,7 +813,8 @@ static int console_init(struct console *con, struct domain *dom, void **data)
 	int err = -1;
 	struct timespec ts;
 	struct console_type **con_type = (struct console_type **)data;
-	char *xsname, *xspath;
+	const char *xsname;
+	char *xspath;
 
 	if (clock_gettime(CLOCK_MONOTONIC, &ts) < 0) {
 		dolog(LOG_ERR, "Cannot get time of day %s:%s:L%d",
@@ -835,7 +836,7 @@ static int console_init(struct console *con, struct domain *dom, void **data)
 	con->log_suffix = (*con_type)->log_suffix;
 	con->optional = (*con_type)->optional;
 	con->use_gnttab = (*con_type)->use_gnttab;
-	xsname = (char *)(*con_type)->xsname;
+	xsname = (*con_type)->xsname;
 	xspath = xs_get_domain_path(xs, dom->domid);
 	s = realloc(xspath, strlen(xspath) +
 		    strlen(xsname) + 1);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 14:01:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:01:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129236.242610 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0I2-0007LK-AN; Tue, 18 May 2021 14:01:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129236.242610; Tue, 18 May 2021 14:01:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0I2-0007KK-3A; Tue, 18 May 2021 14:01:42 +0000
Received: by outflank-mailman (input) for mailman id 129236;
 Tue, 18 May 2021 14:01:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj0I1-0007I7-8f
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:01:41 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0I0-000483-2G; Tue, 18 May 2021 14:01:40 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0Hz-0001pY-Pu; Tue, 18 May 2021 14:01:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2 1/2] xen/char: console: Use const whenever we point to literal strings
Date: Tue, 18 May 2021 15:01:33 +0100
Message-Id: <20210518140134.31541-2-julien@xen.org>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <20210518140134.31541-1-julien@xen.org>
References: <20210518140134.31541-1-julien@xen.org>

From: Julien Grall <jgrall@amazon.com>

Literal strings are not meant to be modified. So we should use const
char * rather than char * when we want to store a pointer to them.

The array should also not be modified at all and is only used by
xenlog_update_val(). So take the opportunity to add an extra const and
move the definition into the function.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - The array content should never be modified
        - Move lvl2opt in xenlog_update_val()
---
 xen/drivers/char/console.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
index 23583751709c..7d0a603d0311 100644
--- a/xen/drivers/char/console.c
+++ b/xen/drivers/char/console.c
@@ -168,10 +168,11 @@ static int parse_guest_loglvl(const char *s);
 static char xenlog_val[LOGLVL_VAL_SZ];
 static char xenlog_guest_val[LOGLVL_VAL_SZ];
 
-static char *lvl2opt[] = { "none", "error", "warning", "info", "all" };
-
 static void xenlog_update_val(int lower, int upper, char *val)
 {
+    static const char * const lvl2opt[] =
+        { "none", "error", "warning", "info", "all" };
+
     snprintf(val, LOGLVL_VAL_SZ, "%s/%s", lvl2opt[lower], lvl2opt[upper]);
 }
 
@@ -262,7 +263,7 @@ static int parse_guest_loglvl(const char *s)
     return ret;
 }
 
-static char *loglvl_str(int lvl)
+static const char *loglvl_str(int lvl)
 {
     switch ( lvl )
     {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 14:02:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:02:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129245.242638 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0J1-0000YG-OM; Tue, 18 May 2021 14:02:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129245.242638; Tue, 18 May 2021 14:02:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0J1-0000Y9-L2; Tue, 18 May 2021 14:02:43 +0000
Received: by outflank-mailman (input) for mailman id 129245;
 Tue, 18 May 2021 14:02:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj0J0-0000Xt-LO
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:02:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0Ir-00049G-W2; Tue, 18 May 2021 14:02:33 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0Ir-0001uS-Qb; Tue, 18 May 2021 14:02:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Djo0Z8UCuwusoCbx3XP4eRV8Bal3+Nb6wmPQyTK86qQ=; b=qGUEWcKhGEZGeKSCM2sfbrqm81
	8n/e9CzaDKyRMDKAQuPRLZwnPUJC0WnrtMssAo5yOtNkUpdK/W08xU+gLo7kCzq90RNb/dmLN8W0F
	3QR370j7yOlhoFStBJ4x69GeAba55Lhy78LWz+qH5u4m0z/yK2ZMZ6p+HVAVGjz2uzSI=;
Subject: Re: PING Re: [PATCH 00/14] Use const whether we point to literal
 strings (take 1)
To: Wei Liu <wl@xen.org>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Anthony PERARD <anthony.perard@citrix.com>, Julien Grall
 <jgrall@amazon.com>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Dario Faggioli <dfaggioli@suse.com>, Tim Deegan <tim@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <20210405155713.29754-1-julien@xen.org>
 <05eaa910-7630-d1e4-4b70-3008d672fe5c@xen.org>
 <20210517184153.wwj4ek4bkmelpuia@liuwe-devbox-debian-v2>
From: Julien Grall <julien@xen.org>
Message-ID: <d7db6d71-f285-44b1-f76b-72f18f0e5f24@xen.org>
Date: Tue, 18 May 2021 15:02:31 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210517184153.wwj4ek4bkmelpuia@liuwe-devbox-debian-v2>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Wei,

On 17/05/2021 19:41, Wei Liu wrote:
> On Mon, May 10, 2021 at 06:49:01PM +0100, Julien Grall wrote:
>> Hi,
>>
>> Ian, Wei, Anthony, can I get some feedbacks on the tools side?
> 
> I think this is moving to the right direction so
> 
> Acked-by: Wei Liu <wl@xen.org>

Thanks! I have committed all of the tools patches but one. I have kept
your acked-by on that patch and will wait for Anthony to do a full review.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 14:06:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:06:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129255.242649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0Ml-0001Mm-8l; Tue, 18 May 2021 14:06:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129255.242649; Tue, 18 May 2021 14:06:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0Ml-0001Mf-5i; Tue, 18 May 2021 14:06:35 +0000
Received: by outflank-mailman (input) for mailman id 129255;
 Tue, 18 May 2021 14:06:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj0Mj-0001MY-EM
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:06:33 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0Mi-0004Eb-JG; Tue, 18 May 2021 14:06:32 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0Mi-0002FN-DI; Tue, 18 May 2021 14:06:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=fmCN197QOAwfYxZ0Bl55KFhJHHuAFHlw1wnxSXS7vJ0=; b=xzbfKm+oZEvvIE0WDoXQfpTdBb
	Pq2I6uFcGKner7CBdCwS4SKEKTFVmdfaKq5AloErR36hHmFrDEpd2mtIH0wCTRI9vEaUlxEpLiXKV
	W31jkiF22JtDSy6d35B2JOqhxFXk6XZmVyeZGNXVyypzUxFRd+ga0e3sZhWpfK+ZHs/M=;
Subject: Re: [PATCH v3 2/5] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Jan Beulich <jbeulich@suse.com>, Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
 <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
 <98b429d0-2673-624e-1690-9c0e8373ed5b@xen.org>
 <7cf966f6-7ccf-ba63-2b67-129577a7ca53@gmail.com>
 <8e415cac-a8b3-67a6-2f7b-489b964ceb50@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <fc967847-4a08-050c-aaac-5cfb42742f0e@xen.org>
Date: Tue, 18 May 2021 15:06:29 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <8e415cac-a8b3-67a6-2f7b-489b964ceb50@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 18/05/2021 07:27, Jan Beulich wrote:
> On 18.05.2021 06:11, Connor Davis wrote:
>>
>> On 5/17/21 9:42 AM, Julien Grall wrote:
>>> Hi Jan,
>>>
>>> On 17/05/2021 12:16, Jan Beulich wrote:
>>>> On 14.05.2021 20:53, Connor Davis wrote:
>>>>> --- a/xen/common/memory.c
>>>>> +++ b/xen/common/memory.c
>>>>> @@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned
>>>>> long gmfn)
>>>>>        p2m_type_t p2mt;
>>>>>    #endif
>>>>>        mfn_t mfn;
>>>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>        bool *dont_flush_p, dont_flush;
>>>>> +#endif
>>>>>        int rc;
>>>>>      #ifdef CONFIG_X86
>>>>> @@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d,
>>>>> unsigned long gmfn)
>>>>>         * Since we're likely to free the page below, we need to suspend
>>>>>         * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
>>>>>         */
>>>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>        dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
>>>>>        dont_flush = *dont_flush_p;
>>>>>        *dont_flush_p = false;
>>>>> +#endif
>>>>>          rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
>>>>>    +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>        *dont_flush_p = dont_flush;
>>>>> +#endif
>>>>>          /*
>>>>>         * With the lack of an IOMMU on some platforms, domains with
>>>>> DMA-capable
>>>>> @@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>> struct xen_add_to_physmap *xatp,
>>>>>        xatp->gpfn += start;
>>>>>        xatp->size -= start;
>>>>>    +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>        if ( is_iommu_enabled(d) )
>>>>>        {
>>>>>           this_cpu(iommu_dont_flush_iotlb) = 1;
>>>>>           extra.ppage = &pages[0];
>>>>>        }
>>>>> +#endif
>>>>>          while ( xatp->size > done )
>>>>>        {
>>>>> @@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>> struct xen_add_to_physmap *xatp,
>>>>>            }
>>>>>        }
>>>>>    +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>        if ( is_iommu_enabled(d) )
>>>>>        {
>>>>>            int ret;
>>>>> @@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>> struct xen_add_to_physmap *xatp,
>>>>>            if ( unlikely(ret) && rc >= 0 )
>>>>>                rc = ret;
>>>>>        }
>>>>> +#endif
>>>>>          return rc;
>>>>>    }
>>>>
>>>> I wonder whether all of these wouldn't better become CONFIG_X86:
>>>> ISTR Julien indicating that he doesn't see the override getting used
>>>> on Arm. (Julien, please correct me if I'm misremembering.)
>>>
>>> Right, so far, I haven't been in favor of introducing it because:
>>>     1) The P2M code may free some memory, so you can't always ignore
>>> the flush (I think it is wrong for the upper layer to have to know
>>> when this can happen).
>>>     2) It is unclear what happens if the IOMMU TLBs and the page
>>> tables contain different mappings (I have received conflicting advice).
>>>
>>> So it is better to always flush, and as early as possible.
>>
>> So keep it as is or switch to CONFIG_X86?
> 
> Please switch, unless anyone else voices a strong opinion towards
> keeping as is.

I would like to avoid adding more #ifdef CONFIG_X86 to the common code. 
Can we instead provide wrappers for them?

Cheers,

-- 
Julien Grall

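One way the wrapper suggestion above could look: hide the save/clear/restore of the flush-suppression flag behind small helpers that compile away when passthrough support is absent, so call sites need no #ifdef. The sketch below is hypothetical, not actual Xen code: the helper names are invented, and a plain global stands in for `this_cpu(iommu_dont_flush_iotlb)`.

```c
#include <stdbool.h>

#ifdef CONFIG_HAS_PASSTHROUGH
/* Plain global standing in for this_cpu(iommu_dont_flush_iotlb). */
static bool iommu_dont_flush_iotlb;

/* Save the current suppression state and re-enable IOMMU TLB flushes. */
static inline bool iommu_clear_dont_flush(void)
{
    bool old = iommu_dont_flush_iotlb;

    iommu_dont_flush_iotlb = false;
    return old;
}

static inline void iommu_restore_dont_flush(bool old)
{
    iommu_dont_flush_iotlb = old;
}
#else
/* Without passthrough the helpers are no-ops: no #ifdef at call sites. */
static inline bool iommu_clear_dont_flush(void) { return false; }
static inline void iommu_restore_dont_flush(bool old) { (void)old; }
#endif

/* Caller shaped like guest_remove_page(): no #ifdef needed here. */
static int remove_page_sketch(void)
{
    bool dont_flush = iommu_clear_dont_flush();
    int rc = 0;                     /* guest_physmap_remove_page(...) */

    iommu_restore_dont_flush(dont_flush);
    return rc;
}
```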

From xen-devel-bounces@lists.xenproject.org Tue May 18 14:09:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:09:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129261.242663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0PS-00021K-QB; Tue, 18 May 2021 14:09:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129261.242663; Tue, 18 May 2021 14:09:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0PS-00021D-MC; Tue, 18 May 2021 14:09:22 +0000
Received: by outflank-mailman (input) for mailman id 129261;
 Tue, 18 May 2021 14:09:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj0PQ-000213-F9
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:09:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0PQ-0004HE-9Z; Tue, 18 May 2021 14:09:20 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj0PQ-0002Uw-3m; Tue, 18 May 2021 14:09:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=8VzoNuSMnkdjcHIDa7TdMHySCUtujRo8+2Q4ZI57YSU=; b=GWeCIJBse188UdY5LN3vc66NDl
	NSGBTjrlPdjiUZzzauAUjKAeHGuELjsUSm264+K6wQsyX1QeRcpp//yUooaQEYEU+MBNYYFTiQDe0
	NkJdBbaK+H4IpDuFvgXcZRBQknh+I56iQMazFckfeXnDuh2kATpZo5Upz6m0voaYlM2U=;
Subject: Re: Discussion of Xenheap problems on AArch64
To: Henry Wang <Henry.Wang@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Wei Chen <Wei.Chen@arm.com>, Penny Zheng <Penny.Zheng@arm.com>,
 Bertrand Marquis <Bertrand.Marquis@arm.com>
References: <PA4PR08MB6253F49C13ED56811BA5B64E92479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <cdde98ca-4183-c92b-adca-801330992fc5@xen.org>
 <PA4PR08MB62538BBA256E66A0415F0C7192479@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <f14aa1d6-35d2-a9a3-0672-7f0d3ae3ec89@xen.org>
 <PA4PR08MB62534C4130B59CAA9A8A8BF792419@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <PA4PR08MB6253FBC7F5E690DB74F2E11F92409@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <2a65b8c0-fccc-2ccc-f736-7f3f666e84d1@xen.org>
 <PA4PR08MB62537A958107CD234831E0B892579@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba649865-410b-e1be-39a3-c4cac802f464@xen.org>
 <PA4PR08MB6253F85E184CA51BDB99786992539@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <ba1bc084-5a5b-1410-acba-33bfca7c4f6a@xen.org>
 <PA4PR08MB6253E95579D8277D7FD1BE9A92509@PA4PR08MB6253.eurprd08.prod.outlook.com>
 <7247122c-127d-705c-78a5-7f9460f5821a@xen.org>
 <PA4PR08MB6253AB1B1286086E4EDE60A2922D9@PA4PR08MB6253.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <5cebec6d-0e41-f47f-0a8d-9b96b886c53e@xen.org>
Date: Tue, 18 May 2021 15:09:18 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <PA4PR08MB6253AB1B1286086E4EDE60A2922D9@PA4PR08MB6253.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Henry,

On 17/05/2021 07:38, Henry Wang wrote:
> 
>> From: Julien Grall <julien@xen.org>
>> On a previous e-mail, you said you tweaked the FVP model to set those
>> regions. Were you trying to mimick the memory layout of a real HW
>> (either current or future)?
> 
> Not really, I was just trying to cover as many cases as possible; these
> regions were picked to test your patchset in different scenarios.

Thanks for the confirmation. It means we don't have to fix it right now. :).

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 14:27:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:27:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129279.242680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0ga-0004O0-Iq; Tue, 18 May 2021 14:27:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129279.242680; Tue, 18 May 2021 14:27:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0ga-0004Nt-Fi; Tue, 18 May 2021 14:27:04 +0000
Received: by outflank-mailman (input) for mailman id 129279;
 Tue, 18 May 2021 14:27:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9UfV=KN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lj0gZ-0004Nn-5u
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:27:03 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e43e93e7-fa7a-42e5-9f53-e7a089f356a3;
 Tue, 18 May 2021 14:27:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e43e93e7-fa7a-42e5-9f53-e7a089f356a3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621348021;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=n29HjX0WQ8nIQ/X/SgZMOi6yhYglU+HENMd35asfXdI=;
  b=WsrbA9JJxEEEtNWiFKf69YWXkUqlHD66rwVykeUhxPd4KZrDWDUR8/oh
   q+WXMmqx8EAywVLj2B/ML5500MaPvl3y4/I0pn2rRfQkf6Y+PClJBeui3
   /EIUyXq6iaVhDpCOOPEtNresPjQuuTMHtsAgG7MtbcYA68ofkryVIalKp
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: +YgaiWr9bGJip5GtOYgkHMbEWYzp1uT3EHXwjptk9zQLICfH5U+UZ3ssv9E+/SGFCkpAq06T/d
 RHWJ3DUeOGQZVt9o41FrwaP80i2ZNAu4kKY/pWn2GNOmBSmrZOdGUHKfzCusuEkOhHrDfaxi2c
 QrnNZyzWvw3H4/JNXlH4lrwap+Xpdwx/o2CS7O5t9DQzK+6otf4cq7bQH6pTyQM0kib6p1NKNo
 8WfPgFiE8N75W+nILyDd/B8lPN6X5mLSCHWwpolVtonjG7NfQD34eLSgcBggbrz2DIs/KTZAXn
 FYo=
X-SBRS: 5.1
X-MesageID: 44034907
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:fH1AeKMIqS91QsBcTjujsMiBIKoaSvp037BK7S1MoNJuEvBw9v
 re+MjzsCWftN9/Yh4dcLy7VpVoIkmskKKdg7NhXotKNTOO0AeVxelZhrcKqAeQeREWmNQ96U
 9hGZIOdeEZDzJB/LrHCN/TKade/DGFmprY+9s31x1WPGZXgzkL1XYDNu6ceHcGIjVuNN4CO7
 e3wNFInDakcWR/VLXAOpFUN9Kz3uEijfjdEGY7OyI=
X-IronPort-AV: E=Sophos;i="5.82,310,1613451600"; 
   d="scan'208";a="44034907"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=obESudzM6sgOGOomWrUZ5F277v5v3AqBTMemo3eLjL7EZiyvJ9XyVe+uptjsoowXC0st4yw5YqAc7CLsX+k720ctP7RTYte/K3Yvw5E5EpjRb2Ap5nqKbxf4M5A1Z+2elbZinClJhVzwGdjSEKAZKKfEcW7i/+hSVsjPPKOhcpw5FdCNGj8dA8gZlNl/B6KsX3tirnrWc1Z987a6KPsRWatKLkjxpRQ9rKdvI1h+SfgWscqOW6Bq/bTjK/1BkQ3WjZQzAEeArVrZx6hn0xjkiYu9Qz9mlofkRlRatXtamDMNEF4mJ4w0WF7P2x06FPGp60MsEeMZZTjWNdSZBRCJNA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IFJzgAqhNGAANfFvgIfHiokGxfmjbaM78LQEcFqxtFU=;
 b=iwmOof0+0Yg1vPfePYOvE3dZt2QIkmswNPcnQA0nCMzWStPwWggMgO+QFdbA8I9T8r8/NBdD+YkJ91pW6qq50d6qAZrEBLd5GgptqfGWaAXrcE8qd7//OBv43a6sjIty4JO/9+kdnlXNjw7ZyR6O+dCYsQOgzlW63DBdqAIat8N9dY/4nmzwpZAJFH6C9MoItqXDUqi18s6r9H0+2cmn2YNhlm5lGpZLeP8j0ovOlSTtOD9XDZgQiLD46lO44haka+1DpV1wxPOeADdQUIwH3wOPoHJL96x7YdNI1ZCpCgS5+0JUjzZZ27rXYwIxLipkh0M5mIqIUJlo/g32fg5xcw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=IFJzgAqhNGAANfFvgIfHiokGxfmjbaM78LQEcFqxtFU=;
 b=aCxLNPQ7uSJk7Njb9ryiTTA3TcgdXAFEC5ZHYxykAt+D87FVmXNTcHAujHRPr9IXnoeGHFM+USUoB9GuuUhOLqhSGgd0hVLO/tr/JXzhjgrNINag8xaPF/ftVLO7DoIE86ot3ScznvSX9XpoZMnKV7uIz6D18zDPK+LGbvrQoYo=
Date: Tue, 18 May 2021 16:26:51 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
CC: <xen-devel@lists.xenproject.org>, Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 1/2] automation: use DOCKER_CMD for building containers
 too
Message-ID: <YKPOqwiiXqOZZ2cO@Air-de-Roger>
References: <162133919718.25010.4197057069904956422.stgit@Wayrath>
 <162133944760.25010.12549941575201320853.stgit@Wayrath>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <162133944760.25010.12549941575201320853.stgit@Wayrath>
X-ClientProxiedBy: PAZP264CA0007.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:102:21::12) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1936f78a-3deb-47dc-6c68-08d91a08fe5f
X-MS-TrafficTypeDiagnostic: DM6PR03MB4601:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4601AA32473FD55334C173738F2C9@DM6PR03MB4601.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: lukjFZf24/I469+cgK4eYfNhZNqsEqeQw893e7dcyCnXnpSj/VEhWfRGk0abev5tMNPO0wG81LvTECV/DsIlVr8A/p4XaCAN4uA7TqeGeLPZiw9HmLsk4mCKv7BSuCMyAEIUGqeV7cug4ASJ3osSj+jxbQXmGcDzMuZzrpFK62yNQL13YFVLFo7CRVOArbyIviRWiK1fhd4Lr9KNMJ4B2onO2nSXXlg7CO0Wuz7ZWZsIPJQSUpJ5vEnxMtAoEoJOdrQSmU23N/Atp0kZugjCIdnaLkw8/T4l/uT8CBoo2At7bt0atQb0RVui8FEHdVDQfj/tATgH4ee0GPaHyVsokyhtxY2vFTMVyVqCvM5e8dYw3ASnNIvN9SmDCKOuSPE+0/7OXtve6GhoVhrVrPwQJkd4uwuDAN8hCypNR2gascp85CuxH0lwx7dlaXNUaxD3rShrDJMwz8t8h62FT3Ts/xtN8oT6tmwvkrsTrq0hCU5e6II87Abg/nLqgHM9di9h6JQqTUhlffAQzSa9bndp+5inEWKZxdnv2tfIMBMEEMozHOvv5vZxQ6EZJoKmBWPfZd2yG7Rg6h+5XpO+SAbAIrSX3iD7zRi2P4nCu9zMZoQ=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(366004)(396003)(346002)(136003)(39850400004)(376002)(8676002)(16526019)(186003)(86362001)(8936002)(316002)(54906003)(956004)(2906002)(85182001)(9686003)(6666004)(66556008)(26005)(6916009)(6486002)(107886003)(38100700002)(6496006)(4326008)(66946007)(5660300002)(4744005)(33716001)(478600001)(66476007);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 1936f78a-3deb-47dc-6c68-08d91a08fe5f
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 14:26:58.0687
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bbG2sRxrEeNzU8/S9Id2AhjJpgJJK7LgwETAQxyP3Dl2N6lmgxkiv4XjagEjAZj/GNZMdjEcE1SYj9/2KDjFpw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4601
X-OriginatorOrg: citrix.com

On Tue, May 18, 2021 at 02:04:07PM +0200, Dario Faggioli wrote:
> From: Dario Faggioli <dario@Solace.fritz.box>
> 
> Use DOCKER_CMD from the environment (if defined) in the containers'
> makefile too, so that, e.g., when doing `export DOCKER_CMD=podman`,
> podman is used for building the containers as well.
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks!

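The pattern the acked patch applies can be mimicked in plain shell (in a makefile the equivalent idiom would be `DOCKER_CMD ?= docker`); `DOCKER_CMD` is the real variable name from the series, while the echo and the commented invocation are illustrative only.

```shell
#!/bin/sh
# Fall back to "docker" unless the caller exported DOCKER_CMD, e.g.
#   export DOCKER_CMD=podman
DOCKER_CMD="${DOCKER_CMD:-docker}"
echo "building containers with: ${DOCKER_CMD}"
# ${DOCKER_CMD} build -t xen-build-container .   (illustrative invocation)
```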

From xen-devel-bounces@lists.xenproject.org Tue May 18 14:33:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:33:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129285.242694 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0n6-0005nn-DC; Tue, 18 May 2021 14:33:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129285.242694; Tue, 18 May 2021 14:33:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0n6-0005ng-A1; Tue, 18 May 2021 14:33:48 +0000
Received: by outflank-mailman (input) for mailman id 129285;
 Tue, 18 May 2021 14:33:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lj0n5-0005na-4X
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:33:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 528c92dc-a590-409a-be68-fbff54e7dded;
 Tue, 18 May 2021 14:33:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A9C92B14B;
 Tue, 18 May 2021 14:33:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 528c92dc-a590-409a-be68-fbff54e7dded
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621348424; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=F2Tf6kI7UeHgpgT+OGr2H+OVhbgDckz6+Wy28MEwB+A=;
	b=aHHSS2cDAEWxgVraQPYn8uccQD1UEHQNvC9dDmrJtZZV9TlSnrDUsRoTBGTzopqNcwBctn
	SBA/3QQBqMzVDg7tzMGErAEnCtbjcaXAPsNO7qJPDj/BF7ZgkNyd2wSRNefjIeAaN2E7U9
	Gn7e2DfsxRzLtqIUv2LfjpBW9d+872E=
Message-ID: <9160502180e3c36a52cb841520615bc7fe91b42b.camel@suse.com>
Subject: Re: [PATCH 2/2] automation: fix dependencies on openSUSE Tumbleweed
 containers
From: Dario Faggioli <dfaggioli@suse.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 May 2021 16:33:43 +0200
In-Reply-To: <YKO/BcUAtjSgc2pV@Air-de-Roger>
References: <162133919718.25010.4197057069904956422.stgit@Wayrath>
	 <162133945335.25010.4601866854997664898.stgit@Wayrath>
	 <YKO/BcUAtjSgc2pV@Air-de-Roger>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-O66TOK2Q3vQOAWcOaALn"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-O66TOK2Q3vQOAWcOaALn
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2021-05-18 at 15:20 +0200, Roger Pau Monn=C3=A9 wrote:
> On Tue, May 18, 2021 at 02:04:13PM +0200, Dario Faggioli wrote:
> > From: Dario Faggioli <dario@Solace.fritz.box>
> >=20
Mmm... this email address does not really exist, and it's a mistake
that it ended up here. :-/

> > Fix the build inside our openSUSE Tumbleweed container by using
> > the proper python development packages (and adding zstd headers).
> >=20
> > Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> > ---
> > Cc: Doug Goldstein <cardoe@cardoe.com>
> > Cc: Roger Pau Monne <roger.pau@citrix.com>
> > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > ---
> > =C2=A0.../build/suse/opensuse-tumbleweed.dockerfile=C2=A0=C2=A0=C2=A0=
=C2=A0=C2=A0 |=C2=A0=C2=A0=C2=A0 5 ++---
> > =C2=A01 file changed, 2 insertions(+), 3 deletions(-)
> >=20
> > diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile
> > b/automation/build/suse/opensuse-tumbleweed.dockerfile
> > index 8ff7b9b5ce..5b99cb8a53 100644
> > --- a/automation/build/suse/opensuse-tumbleweed.dockerfile
> > +++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
> >=20
> > @@ -56,10 +57,8 @@ RUN zypper install -y --no-recommends \
> > =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 pandoc \
> > =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 patch \
> > =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 pkg-config \
> > -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 python \
> > =C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 python-devel \
> > -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 python3 \
> > -=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 python3-devel \
> > +=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0=C2=A0 python38-devel \
>=20
> When I tested python3-devel was translated into python38-devel,=C2=A0
>
Oh, really? And when was it that you tested it, if I can ask?

> which
> I think is better as we don't need to modify the docker file for
> every
> new Python version?
>
That would definitely be better. Right now, I don't see any
python3-devel package. If python3-devel can still be used (and it
somehow translates to the proper -devel package), then sure we should
use it. I'm not sure how that would happen, but maybe it's just me
being unaware of some packaging magic.

Let me put "python3-devel" there and test locally again, so we know if
it actually works.
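For reference, if `python3-devel` does resolve, the hunk under discussion would end up looking roughly like this (a sketch: `python3-devel` is assumed to be a name zypper resolves to the current versioned python3X-devel package, and `libzstd-devel` is an assumed spelling for the "zstd headers" mentioned in the commit message):

```dockerfile
# Excerpt of automation/build/suse/opensuse-tumbleweed.dockerfile (sketch).
RUN zypper install -y --no-recommends \
        pandoc \
        patch \
        pkg-config \
        python-devel \
        python3-devel \
        libzstd-devel
```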

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-O66TOK2Q3vQOAWcOaALn--



From xen-devel-bounces@lists.xenproject.org Tue May 18 14:35:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:35:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129290.242704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0p8-0006QZ-RF; Tue, 18 May 2021 14:35:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129290.242704; Tue, 18 May 2021 14:35:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0p8-0006QS-O6; Tue, 18 May 2021 14:35:54 +0000
Received: by outflank-mailman (input) for mailman id 129290;
 Tue, 18 May 2021 14:35:54 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lj0p8-0006QM-Ao
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:35:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 26de69a1-aa8c-4acd-b3da-b5bea3fc50a3;
 Tue, 18 May 2021 14:35:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F2A87AEF5;
 Tue, 18 May 2021 14:35:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26de69a1-aa8c-4acd-b3da-b5bea3fc50a3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621348553; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=s7941cM5/hhFGV8oIKLdXJ1XaE5cl95i2/P5PmmedkQ=;
	b=EnnHyTa1HOrDR0nFas/Ow1fvntDRZA2xjXPr3fzF4r31TO0QjK4fqF5y0Ku20kUjL7f2mE
	ghDf5GdaVaSUs9wNpmJS2HvRDBH+EIYe/KPlIcGgngFW6dKS6/yYqFjAIY7pf8GE20L98/
	PKQMGsFDVmx7xJ2OQKMjABsKI3LIi84=
Message-ID: <ed78f3c1aa4c31fa57ec6f9898a309ad0781b367.camel@suse.com>
Subject: Re: [PATCH 1/2] automation: use DOCKER_CMD for building containers
 too
From: Dario Faggioli <dfaggioli@suse.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 May 2021 16:35:51 +0200
In-Reply-To: <YKPOqwiiXqOZZ2cO@Air-de-Roger>
References: <162133919718.25010.4197057069904956422.stgit@Wayrath>
	 <162133944760.25010.12549941575201320853.stgit@Wayrath>
	 <YKPOqwiiXqOZZ2cO@Air-de-Roger>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-gXoIaqaTIx4oDURcQPRZ"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-gXoIaqaTIx4oDURcQPRZ
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2021-05-18 at 16:26 +0200, Roger Pau Monné wrote:
> On Tue, May 18, 2021 at 02:04:07PM +0200, Dario Faggioli wrote:
> > From: Dario Faggioli <dario@Solace.fritz.box>
> >
> > Use DOCKER_CMD from the environment (if defined) in the containers'
> > makefile too, so that, e.g., when doing `export DOCKER_CMD=podman`
> > podman is used for building the containers too.
> >
> > Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
>
Thanks! If it has not been committed yet, can I resend with a From:
that makes sense?
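The change being acked amounts to honoring an environment override in the containers' makefile, roughly like this (a sketch: only the DOCKER_CMD variable comes from the patch; the rule, target, and tag names are illustrative, not Xen's actual automation build makefile):

```make
# Use DOCKER_CMD from the environment if set (e.g. podman),
# otherwise fall back to docker.
DOCKER_CMD ?= docker

# Illustrative rule: the container runtime is no longer hard-coded.
build-%:
	$(DOCKER_CMD) build -t xen/$* -f $*.dockerfile .
```

With this, `export DOCKER_CMD=podman` makes `make build-opensuse-tumbleweed` build via podman instead of docker.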

And, either way, sorry for the noise... :-(

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-gXoIaqaTIx4oDURcQPRZ--



From xen-devel-bounces@lists.xenproject.org Tue May 18 14:40:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:40:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129299.242718 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0t5-00077K-Ed; Tue, 18 May 2021 14:39:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129299.242718; Tue, 18 May 2021 14:39:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj0t5-00077D-BY; Tue, 18 May 2021 14:39:59 +0000
Received: by outflank-mailman (input) for mailman id 129299;
 Tue, 18 May 2021 14:39:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/tj/=KN=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lj0t4-000777-LN
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:39:58 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com (unknown
 [40.107.20.68]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bac80553-c498-4bb2-b4c7-3b16a24ca9cf;
 Tue, 18 May 2021 14:39:56 +0000 (UTC)
Received: from DB8PR03CA0032.eurprd03.prod.outlook.com (2603:10a6:10:be::45)
 by DBBPR08MB4904.eurprd08.prod.outlook.com (2603:10a6:10:f2::10) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 14:39:55 +0000
Received: from DB5EUR03FT039.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:be:cafe::3c) by DB8PR03CA0032.outlook.office365.com
 (2603:10a6:10:be::45) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Tue, 18 May 2021 14:39:55 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT039.mail.protection.outlook.com (10.152.21.120) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 14:39:54 +0000
Received: ("Tessian outbound 3c5232d12880:v92");
 Tue, 18 May 2021 14:39:54 +0000
Received: from a31ab92ba805.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 041A91DC-A10D-4AF6-B7CD-9ED5ABA26EA1.1; 
 Tue, 18 May 2021 14:39:47 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a31ab92ba805.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 14:39:47 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VE1PR08MB5600.eurprd08.prod.outlook.com (2603:10a6:800:1b0::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Tue, 18 May
 2021 14:39:45 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a%5]) with mapi id 15.20.4129.031; Tue, 18 May 2021
 14:39:44 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0120.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:192::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Tue, 18 May 2021 14:39:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bac80553-c498-4bb2-b4c7-3b16a24ca9cf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SWeXAeQUtjtyw9RVs8UQWsQ4Zvnw+i9+BKnCk0yF7s0=;
 b=unEBeyqzuB7AkDVKIGm8oy8fw7o0FNkVs6fqXFT4SNt9kB20QSWOiATOGN7OByj/6zW6GJl0GcbU+xZqTBsF8ZDljJmYkMS63eAVpQhj+Q9XuBBQHIz4MzkTZCYNJDtnxJVZbEgN+2edEcypeu6lD5P+5e2EBnmNpKUyvZE0ivM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3d95174608c72e4d
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH v2 1/2] xen/char: console: Use const whenever we point to
 literal strings
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210518140134.31541-2-julien@xen.org>
Date: Tue, 18 May 2021 15:39:36 +0100
Cc: xen-devel@lists.xenproject.org,
 Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
Content-Transfer-Encoding: 7bit
Message-Id: <46179889-061C-43C2-843C-F8E7A687BA5D@arm.com>
References: <20210518140134.31541-1-julien@xen.org>
 <20210518140134.31541-2-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO4P123CA0120.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:192::17) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: fc35bd78-b203-4da0-0fa7-08d91a0acd6f
X-MS-TrafficTypeDiagnostic: VE1PR08MB5600:|DBBPR08MB4904:
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB4904FFCE63516CA62D768AB2E42C9@DBBPR08MB4904.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:1227;OLM:1227;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5600
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	410e5e25-ab5a-4554-dc41-08d91a0ac6db
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 14:39:54.8118
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: fc35bd78-b203-4da0-0fa7-08d91a0acd6f
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT039.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB4904



> On 18 May 2021, at 15:01, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> Literal strings are not meant to be modified. So we should use const
> char * rather than char * when we want to store a pointer to them.
> 
> The array should also not be modified at all and is only used by
> xenlog_update_val(). So take the opportunity to add an extra const and
> move the definition in the function.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>    Changes in v2:
>        - The array content should never be modified
>        - Move lvl2opt in xenlog_update_val()
> ---
> xen/drivers/char/console.c | 7 ++++---
> 1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/xen/drivers/char/console.c b/xen/drivers/char/console.c
> index 23583751709c..7d0a603d0311 100644
> --- a/xen/drivers/char/console.c
> +++ b/xen/drivers/char/console.c
> @@ -168,10 +168,11 @@ static int parse_guest_loglvl(const char *s);
> static char xenlog_val[LOGLVL_VAL_SZ];
> static char xenlog_guest_val[LOGLVL_VAL_SZ];
> 
> -static char *lvl2opt[] = { "none", "error", "warning", "info", "all" };
> -
> static void xenlog_update_val(int lower, int upper, char *val)
> {
> +    static const char * const lvl2opt[] =
> +        { "none", "error", "warning", "info", "all" };
> +
>     snprintf(val, LOGLVL_VAL_SZ, "%s/%s", lvl2opt[lower], lvl2opt[upper]);
> }
> 
> @@ -262,7 +263,7 @@ static int parse_guest_loglvl(const char *s)
>     return ret;
> }
> 
> -static char *loglvl_str(int lvl)
> +static const char *loglvl_str(int lvl)
> {
>     switch ( lvl )
>     {
> -- 
> 2.17.1
> 

Hi Julien,

Seems good to me and very sensible.

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

Cheers,
Luca
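For illustration, the effect of the reviewed patch can be sketched in a standalone translation unit (names mirrored from the quoted diff; the LOGLVL_VAL_SZ value is assumed here, the real one lives in xen/drivers/char/console.c):

```c
#include <stdio.h>

#define LOGLVL_VAL_SZ 16   /* assumed size, for the sketch only */

/* Mirror of the patched table: const in both positions, so neither the
 * array slots nor the string literals they point at can be written
 * through this name, and it can live in read-only storage. */
static const char *const lvl2opt[] =
    { "none", "error", "warning", "info", "all" };

/* Same shape as the patched xenlog_update_val(): formats "lower/upper"
 * into the caller-supplied buffer. */
static void xenlog_update_val(int lower, int upper, char *val)
{
    snprintf(val, LOGLVL_VAL_SZ, "%s/%s", lvl2opt[lower], lvl2opt[upper]);
}
```

With the pre-patch `static char *lvl2opt[]`, an accidental `lvl2opt[0][0] = 'N';` would compile yet be undefined behavior (writing a string literal); with the double const, the compiler rejects both reassigning a slot and writing through it.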



From xen-devel-bounces@lists.xenproject.org Tue May 18 14:48:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 14:48:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129308.242733 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj11A-0000AD-II; Tue, 18 May 2021 14:48:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129308.242733; Tue, 18 May 2021 14:48:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj11A-00009z-EJ; Tue, 18 May 2021 14:48:20 +0000
Received: by outflank-mailman (input) for mailman id 129308;
 Tue, 18 May 2021 14:48:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9UfV=KN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lj119-00009q-0A
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 14:48:19 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bda15b62-f9ce-482e-a680-432ff0bc542a;
 Tue, 18 May 2021 14:48:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bda15b62-f9ce-482e-a680-432ff0bc542a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621349297;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=icYH55/ktMuKqJXIYQX+IxbyRvKiSotvh1pWmXo1AUE=;
  b=BA0uU+r6oyKBw0C/3P18v05UDHWurPp6HJ9EKpxYmPe5FXr03RB/j9Ib
   bRYmO+bU8CbRaSe2hbzbgQZZ7hJw6lW41iGUxjVN7FSThDZm57kti5tSL
   r3/utMOAiOrZXd4OOyFPF3rL9ASOOZhzqSsyJQwLKNjY1L4vdYdoxIrNM
   w=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44432571
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,310,1613451600"; 
   d="scan'208";a="44432571"
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8jNCC6vvRDBjjCCC/T2rCWhHpwqAyBSyU3L/nHcUPaM=;
 b=iE2jLzJPNFs8OM0VzVjePA0u7V2jaETCsxxnIZ6xWEkFjBOvM1T9VlnSxwQSh7CeHxMyCau9rTwKBEV8lXVGdd8konnrP8fc7YOdGxf6+GFMLwUYg+aDHYbF9ucUo/ic35d4AyNcryHGBuF5fEkFGdgloEVJrzV6B1J1hRLxlBQ=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v2] libelf: improve PVH elfnote parsing
Date: Tue, 18 May 2021 16:47:41 +0200
Message-ID: <20210518144741.44395-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0060.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:31::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: cda54c9d-fcdf-49ed-f144-08d91a0be7e9
X-MS-TrafficTypeDiagnostic: DS7PR03MB5575:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB5575A6E95E19E2263D432B3B8F2C9@DS7PR03MB5575.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: cda54c9d-fcdf-49ed-f144-08d91a0be7e9
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 14:47:48.8880
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: bHnvk1bBf8x5Gg9zSrHvSIehzYyjR4pXo/yLJ/HKo6/lbB9ep6HsVpByhm/l1ZHEOTrddBeiVNzRl//Mll79Kg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5575
X-OriginatorOrg: citrix.com

Pass an hvm boolean parameter to the ELF note parsing and checking
routines, so that stricter checking can be done when libelf is
dealing with an HVM container.

elf_xen_note_check shouldn't return early unless PHYS32_ENTRY is set
and the container is of type HVM, or else the loader and version
checks would be avoided for kernels intended to be booted as PV but
that also have PHYS32_ENTRY set.

Adjust elf_xen_addr_calc_check so that the virtual addresses are
actually physical ones (by setting virt_base and elf_paddr_offset to
zero) when the container is of type HVM, as that container is always
started with paging disabled.

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v1:
 - Expand comments.
 - Do not set virt_entry to phys_entry unless it's an HVM container.
---
 tools/fuzz/libelf/libelf-fuzzer.c   |  3 ++-
 tools/libs/guest/xg_dom_elfloader.c |  6 ++++--
 tools/libs/guest/xg_dom_hvmloader.c |  2 +-
 xen/arch/x86/hvm/dom0_build.c       |  2 +-
 xen/arch/x86/pv/dom0_build.c        |  2 +-
 xen/common/libelf/libelf-dominfo.c  | 32 +++++++++++++++++++----------
 xen/include/xen/libelf.h            |  2 +-
 7 files changed, 31 insertions(+), 18 deletions(-)

diff --git a/tools/fuzz/libelf/libelf-fuzzer.c b/tools/fuzz/libelf/libelf-fuzzer.c
index 1ba85717114..84fb84720fa 100644
--- a/tools/fuzz/libelf/libelf-fuzzer.c
+++ b/tools/fuzz/libelf/libelf-fuzzer.c
@@ -17,7 +17,8 @@ int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
         return -1;
 
     elf_parse_binary(elf);
-    elf_xen_parse(elf, &parms);
+    elf_xen_parse(elf, &parms, false);
+    elf_xen_parse(elf, &parms, true);
 
     return 0;
 }
diff --git a/tools/libs/guest/xg_dom_elfloader.c b/tools/libs/guest/xg_dom_elfloader.c
index 06e713fe111..ad71163dd92 100644
--- a/tools/libs/guest/xg_dom_elfloader.c
+++ b/tools/libs/guest/xg_dom_elfloader.c
@@ -135,7 +135,8 @@ static elf_negerrnoval xc_dom_probe_elf_kernel(struct xc_dom_image *dom)
      * or else we might be trying to load a plain ELF.
      */
     elf_parse_binary(&elf);
-    rc = elf_xen_parse(&elf, dom->parms);
+    rc = elf_xen_parse(&elf, dom->parms,
+                       dom->container_type == XC_DOM_HVM_CONTAINER);
     if ( rc != 0 )
         return rc;
 
@@ -166,7 +167,8 @@ static elf_negerrnoval xc_dom_parse_elf_kernel(struct xc_dom_image *dom)
 
     /* parse binary and get xen meta info */
     elf_parse_binary(elf);
-    if ( elf_xen_parse(elf, dom->parms) != 0 )
+    if ( elf_xen_parse(elf, dom->parms,
+                       dom->container_type == XC_DOM_HVM_CONTAINER) != 0 )
     {
         rc = -EINVAL;
         goto out;
diff --git a/tools/libs/guest/xg_dom_hvmloader.c b/tools/libs/guest/xg_dom_hvmloader.c
index ec6ebad7fd5..3a63b23ba39 100644
--- a/tools/libs/guest/xg_dom_hvmloader.c
+++ b/tools/libs/guest/xg_dom_hvmloader.c
@@ -73,7 +73,7 @@ static elf_negerrnoval xc_dom_probe_hvm_kernel(struct xc_dom_image *dom)
      * else we might be trying to load a PV kernel.
      */
     elf_parse_binary(&elf);
-    rc = elf_xen_parse(&elf, dom->parms);
+    rc = elf_xen_parse(&elf, dom->parms, true);
     if ( rc == 0 )
         return -EINVAL;
 
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 878dc1d808e..c24b9efdb0a 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -561,7 +561,7 @@ static int __init pvh_load_kernel(struct domain *d, const module_t *image,
     elf_set_verbose(&elf);
 #endif
     elf_parse_binary(&elf);
-    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    if ( (rc = elf_xen_parse(&elf, &parms, true)) != 0 )
     {
         printk("Unable to parse kernel for ELFNOTES\n");
         return rc;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index e0801a9e6d1..af47615b226 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -353,7 +353,7 @@ int __init dom0_construct_pv(struct domain *d,
         elf_set_verbose(&elf);
 
     elf_parse_binary(&elf);
-    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    if ( (rc = elf_xen_parse(&elf, &parms, false)) != 0 )
         goto out;
 
     /* compatibility check */
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index 69c94b6f3bb..5ad2832aa75 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -360,7 +360,7 @@ elf_errorstatus elf_xen_parse_guest_info(struct elf_binary *elf,
 /* sanity checks                                                            */
 
 static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
-                              struct elf_dom_parms *parms)
+                              struct elf_dom_parms *parms, bool hvm)
 {
     if ( (ELF_PTRVAL_INVALID(parms->elf_note_start)) &&
          (ELF_PTRVAL_INVALID(parms->guest_info)) )
@@ -382,7 +382,7 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
     }
 
     /* PVH only requires one ELF note to be set */
-    if ( parms->phys_entry != UNSET_ADDR32 )
+    if ( parms->phys_entry != UNSET_ADDR32 && hvm )
     {
         elf_msg(elf, "ELF: Found PVH image\n");
         return 0;
@@ -414,7 +414,7 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
 }
 
 static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
-                                   struct elf_dom_parms *parms)
+                                   struct elf_dom_parms *parms, bool hvm)
 {
     uint64_t virt_offset;
 
@@ -425,8 +425,11 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
         return -1;
     }
 
-    /* Initial guess for virt_base is 0 if it is not explicitly defined. */
-    if ( parms->virt_base == UNSET_ADDR )
+    /*
+     * Initial guess for virt_base is 0 if it is not explicitly defined in the
+     * PV case. For PVH virt_base is forced to 0 because paging is disabled.
+     */
+    if ( parms->virt_base == UNSET_ADDR || hvm )
     {
         parms->virt_base = 0;
         elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
@@ -441,8 +444,10 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
      *
      * If we are using the modern ELF notes interface then the default
      * is 0.
+     *
+     * For PVH this is forced to 0, as it's already a legacy option for PV.
      */
-    if ( parms->elf_paddr_offset == UNSET_ADDR )
+    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
     {
         if ( parms->elf_note_start )
             parms->elf_paddr_offset = 0;
@@ -456,8 +461,13 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
     parms->virt_kstart = elf->pstart + virt_offset;
     parms->virt_kend   = elf->pend   + virt_offset;
 
-    if ( parms->virt_entry == UNSET_ADDR )
-        parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
+    if ( parms->virt_entry == UNSET_ADDR || hvm )
+    {
+        if ( parms->phys_entry != UNSET_ADDR32 && hvm )
+            parms->virt_entry = parms->phys_entry;
+        else
+            parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
+    }
 
     if ( parms->bsd_symtab )
     {
@@ -499,7 +509,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
 /* glue it all together ...                                                 */
 
 elf_errorstatus elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms)
+                  struct elf_dom_parms *parms, bool hvm)
 {
     ELF_HANDLE_DECL(elf_shdr) shdr;
     ELF_HANDLE_DECL(elf_phdr) phdr;
@@ -594,9 +604,9 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
         }
     }
 
-    if ( elf_xen_note_check(elf, parms) != 0 )
+    if ( elf_xen_note_check(elf, parms, hvm) != 0 )
         return -1;
-    if ( elf_xen_addr_calc_check(elf, parms) != 0 )
+    if ( elf_xen_addr_calc_check(elf, parms, hvm) != 0 )
         return -1;
     return 0;
 }
diff --git a/xen/include/xen/libelf.h b/xen/include/xen/libelf.h
index b73998150fc..be47b0cc366 100644
--- a/xen/include/xen/libelf.h
+++ b/xen/include/xen/libelf.h
@@ -454,7 +454,7 @@ int elf_xen_parse_note(struct elf_binary *elf,
 int elf_xen_parse_guest_info(struct elf_binary *elf,
                              struct elf_dom_parms *parms);
 int elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms);
+                  struct elf_dom_parms *parms, bool hvm);
 
 static inline void *elf_memcpy_unchecked(void *dest, const void *src, size_t n)
     { return memcpy(dest, src, n); }
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 15:08:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:08:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129318.242750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1KG-0002Vk-DY; Tue, 18 May 2021 15:08:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129318.242750; Tue, 18 May 2021 15:08:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1KG-0002Vd-A8; Tue, 18 May 2021 15:08:04 +0000
Received: by outflank-mailman (input) for mailman id 129318;
 Tue, 18 May 2021 15:08:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lj1KF-0002VX-R0
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:08:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74ee493d-68e5-432e-9ea0-c1ad0e14a456;
 Tue, 18 May 2021 15:08:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F043DB179;
 Tue, 18 May 2021 15:08:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74ee493d-68e5-432e-9ea0-c1ad0e14a456
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621350482; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jXao6gYeo6sgIBtQXwc8fsCaMUIigIiOiDjCPXMEif0=;
	b=fim7hYNHZMOOLw8Nu6NMIHJwdx/aHdN77itcpDDTLb1ycbLNjpcm9ZKjGHKbNq3oRuvOfD
	Hyt77pusZt5lhXS5SwAHbXDaKOfY7ZU3WxZ4v179AuBx3cQMZG+kyxQmLIw6ZndOpUb4kE
	0XomicQkqkwOU4LoWkqsafEzlUNwXFA=
Subject: Re: [PATCH v2 1/2] xen/char: console: Use const whenever we point to
 literal strings
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210518140134.31541-1-julien@xen.org>
 <20210518140134.31541-2-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d4a4acdf-9c73-d256-ee3b-f65126d08d37@suse.com>
Date: Tue, 18 May 2021 17:08:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518140134.31541-2-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 16:01, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Literal strings are not meant to be modified. So we should use const
> char * rather than char * when we want to store a pointer to them.
> 
> The array should also not be modified at all and is only used by
> xenlog_update_val(). So take the opportunity to add an extra const and
> move the definition in the function.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Tue May 18 15:09:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:09:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129325.242764 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1Li-00038w-RN; Tue, 18 May 2021 15:09:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129325.242764; Tue, 18 May 2021 15:09:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1Li-00038p-NO; Tue, 18 May 2021 15:09:34 +0000
Received: by outflank-mailman (input) for mailman id 129325;
 Tue, 18 May 2021 15:09:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9UfV=KN=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lj1Lh-00038f-KJ
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:09:33 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 205df946-5f22-4540-a464-fb6ddc60455b;
 Tue, 18 May 2021 15:09:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 205df946-5f22-4540-a464-fb6ddc60455b
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621350572;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Rtdu5ZoquGyyJYxzdLOR3fsayeqZcUW6wAn6nNSKNCc=;
  b=UIEbOgvxdVWZPaNhbrAhk0VH8s91IFlvUTgVPZAwx4rfFFmMz3CEduVR
   xypczHdNpY5bzUPoVg5/cInExnGB4cddwdWAq5ADOCAzj/Kq3mIiR+3yd
   +e8/5Jk/+J8+fu+idGCybmYlBkZrq/PgbLuPeJOwGd19uDkYRsAkO/0nS
   8=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sDZUmFxWengpWd/EA510qPChCg/XFq5fxaM7oEaAzFs=;
 b=ZcaAQgEbAMugJP87Z9WLXEVLsh6rKRBT67XsZXM/rEfqIrPUpw+7zTc4ubAHqGOcBFjxc2N+KtQrFUQSnIsyAJEHg22VQpqK4AFhS/tLj5bvh0WA1AvWHTJj8iXYXiIbOdwdG1ZSHXfvHEdhmJ74OJ3ESbxGrxzt0EMJEJpYo8g=
Date: Tue, 18 May 2021 17:09:22 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
CC: <xen-devel@lists.xenproject.org>, Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH 2/2] automation: fix dependencies on openSUSE Tumbleweed
 containers
Message-ID: <YKPYoq2p6a0avevB@Air-de-Roger>
References: <162133919718.25010.4197057069904956422.stgit@Wayrath>
 <162133945335.25010.4601866854997664898.stgit@Wayrath>
 <YKO/BcUAtjSgc2pV@Air-de-Roger>
 <9160502180e3c36a52cb841520615bc7fe91b42b.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
In-Reply-To: <9160502180e3c36a52cb841520615bc7fe91b42b.camel@suse.com>
MIME-Version: 1.0

On Tue, May 18, 2021 at 04:33:43PM +0200, Dario Faggioli wrote:
> On Tue, 2021-05-18 at 15:20 +0200, Roger Pau Monné wrote:
> > On Tue, May 18, 2021 at 02:04:13PM +0200, Dario Faggioli wrote:
> > > From: Dario Faggioli <dario@Solace.fritz.box>
> > >
> Mmm... this email address does not really exist, and it's a mistake
> that it ended up here. :-/
>
> > > Fix the build inside our openSUSE Tumbleweed container by using
> > > the proper python development packages (and adding zstd headers).
> > >
> > > Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> > > ---
> > > Cc: Doug Goldstein <cardoe@cardoe.com>
> > > Cc: Roger Pau Monne <roger.pau@citrix.com>
> > > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > > ---
> > >  .../build/suse/opensuse-tumbleweed.dockerfile |    5 ++---
> > >  1 file changed, 2 insertions(+), 3 deletions(-)
> > >
> > > diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile
> > > b/automation/build/suse/opensuse-tumbleweed.dockerfile
> > > index 8ff7b9b5ce..5b99cb8a53 100644
> > > --- a/automation/build/suse/opensuse-tumbleweed.dockerfile
> > > +++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
> > >
> > > @@ -56,10 +57,8 @@ RUN zypper install -y --no-recommends \
> > >          pandoc \
> > >          patch \
> > >          pkg-config \
> > > -        python \
> > >          python-devel \
> > > -        python3 \
> > > -        python3-devel \
> > > +        python38-devel \
> >
> > When I tested python3-devel was translated into python38-devel,
> >
> Oh, really? And when was it that you tested it, if I can ask?

I've tried just now with the current docker file, and this is the
(trimmed) output:

Step 7/7 : RUN zypper install -y --no-recommends acpica bc bin86 bison
bzip2 checkpolicy clang cmake dev86 discount flex gcc gcc-c++
gettext-tools git glib2-devel glibc-devel glibc-devel-32bit gzip
hostname libSDL2-devel libaio-devel libbz2-devel libext2fs-devel
libgnutls-devel libjpeg62-devel libnl3-devel libnuma-devel
libpixman-1-0-devel libpng16-devel libssh2-devel libtasn1-devel
libuuid-devel libyajl-devel lzo-devel make nasm ncurses-devel ocaml
ocaml-findlib-devel ocaml-ocamlbuild ocaml-ocamldoc pandoc patch
pkg-config python python-devel python3 python3-devel systemd-devel tar
transfig valgrind-devel wget which xz-devel zlib-devel && zypper clean -a
 ---> Running in af6a184e482b
Retrieving repository 'openSUSE-Tumbleweed-Non-Oss' metadata [..done]
Building repository 'openSUSE-Tumbleweed-Non-Oss' cache [....done]
Retrieving repository 'openSUSE-Tumbleweed-Oss' metadata [.......done]
Building repository 'openSUSE-Tumbleweed-Oss' cache [....done]
Retrieving repository 'openSUSE-Tumbleweed-Update' metadata [.done]
Building repository 'openSUSE-Tumbleweed-Update' cache [....done]
Loading repository data...
Reading installed packages...
'python3' not found in package names. Trying capabilities.
'python3-devel' not found in package names. Trying capabilities.

There's this message here ...

'pkg-config' not found in package names. Trying capabilities.
Resolving package dependencies...

The following 509 NEW packages are going to be installed:
[...] python38 python38-base python38-devel [...]

... but it seems python3-devel gets translated into python38-devel, or
maybe something else selects python38-devel?

Not familiar with the system, so maybe it's indeed a dependency of
some other package.

> > which
> > I think is better as we don't need to modify the docker file for
> > every
> > new Python version?
> >
> That would definitely be better. Right now, I don't see any
> python3-devel package. If python3-devel can still be used (and it
> somehow translates to the proper -devel package), then sure we should
> use it. I'm not sure how that would happen, but maybe it's just me
> being unaware of some packaging magic.
>
> Let me put "python3-devel" there and test locally again, so we know if
> it actually works.

It does seem to be picked up; whether that's because python3-devel
gets translated into python38-devel or because something else pulls it
in, I can't say for certain.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 18 15:13:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:13:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129331.242775 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1PT-0004Vv-Gi; Tue, 18 May 2021 15:13:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129331.242775; Tue, 18 May 2021 15:13:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1PT-0004Vo-Ca; Tue, 18 May 2021 15:13:27 +0000
Received: by outflank-mailman (input) for mailman id 129331;
 Tue, 18 May 2021 15:13:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lj1PR-0004Vi-Ih
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:13:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb3f3790-f0a2-4af4-87de-3164b64ab22b;
 Tue, 18 May 2021 15:13:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C69FCADAA;
 Tue, 18 May 2021 15:13:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb3f3790-f0a2-4af4-87de-3164b64ab22b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621350803; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=nzFvHABDtDjkJraAdgukYDaFZGMeiKGGlNuVgZ5vXjQ=;
	b=PZTg05gsZF4j+7eQ+uXdhXFGFGnlHjtz5z/Kxz320SSZMTRnsb90Ht+QRj6hcyh5L6AA1v
	mQBNFlNrJXD+TvypwD5Ce+dpJrn+L7eLJzCbBC8/AMVXG6gewnfhxbcvj7+7Cmy1MzC/Em
	8aOvewi6UmaEcN4g5DrtMsQHMxGo5RM=
Subject: Re: [PATCH v3 2/5] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Julien Grall <julien@xen.org>, Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
 <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
 <98b429d0-2673-624e-1690-9c0e8373ed5b@xen.org>
 <7cf966f6-7ccf-ba63-2b67-129577a7ca53@gmail.com>
 <8e415cac-a8b3-67a6-2f7b-489b964ceb50@suse.com>
 <fc967847-4a08-050c-aaac-5cfb42742f0e@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <922c6304-9299-a697-2405-1b7f6d069842@suse.com>
Date: Tue, 18 May 2021 17:13:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <fc967847-4a08-050c-aaac-5cfb42742f0e@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 18.05.2021 16:06, Julien Grall wrote:
> 
> 
> On 18/05/2021 07:27, Jan Beulich wrote:
>> On 18.05.2021 06:11, Connor Davis wrote:
>>>
>>> On 5/17/21 9:42 AM, Julien Grall wrote:
>>>> Hi Jan,
>>>>
>>>> On 17/05/2021 12:16, Jan Beulich wrote:
>>>>> On 14.05.2021 20:53, Connor Davis wrote:
>>>>>> --- a/xen/common/memory.c
>>>>>> +++ b/xen/common/memory.c
>>>>>> @@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned
>>>>>> long gmfn)
>>>>>>        p2m_type_t p2mt;
>>>>>>    #endif
>>>>>>        mfn_t mfn;
>>>>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>        bool *dont_flush_p, dont_flush;
>>>>>> +#endif
>>>>>>        int rc;
>>>>>>      #ifdef CONFIG_X86
>>>>>> @@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d,
>>>>>> unsigned long gmfn)
>>>>>>         * Since we're likely to free the page below, we need to suspend
>>>>>>         * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
>>>>>>         */
>>>>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>        dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
>>>>>>        dont_flush = *dont_flush_p;
>>>>>>        *dont_flush_p = false;
>>>>>> +#endif
>>>>>>          rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
>>>>>>    +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>        *dont_flush_p = dont_flush;
>>>>>> +#endif
>>>>>>          /*
>>>>>>         * With the lack of an IOMMU on some platforms, domains with
>>>>>> DMA-capable
>>>>>> @@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>>> struct xen_add_to_physmap *xatp,
>>>>>>        xatp->gpfn += start;
>>>>>>        xatp->size -= start;
>>>>>>    +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>        if ( is_iommu_enabled(d) )
>>>>>>        {
>>>>>>           this_cpu(iommu_dont_flush_iotlb) = 1;
>>>>>>           extra.ppage = &pages[0];
>>>>>>        }
>>>>>> +#endif
>>>>>>          while ( xatp->size > done )
>>>>>>        {
>>>>>> @@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>>> struct xen_add_to_physmap *xatp,
>>>>>>            }
>>>>>>        }
>>>>>>    +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>        if ( is_iommu_enabled(d) )
>>>>>>        {
>>>>>>            int ret;
>>>>>> @@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>>> struct xen_add_to_physmap *xatp,
>>>>>>            if ( unlikely(ret) && rc >= 0 )
>>>>>>                rc = ret;
>>>>>>        }
>>>>>> +#endif
>>>>>>          return rc;
>>>>>>    }
>>>>>
>>>>> I wonder whether all of these wouldn't better become CONFIG_X86:
>>>>> ISTR Julien indicating that he doesn't see the override getting used
>>>>> on Arm. (Julien, please correct me if I'm misremembering.)
>>>>
>>>> Right, so far I haven't been in favor of introducing it because:
>>>>     1) The P2M code may free some memory, so you can't always ignore
>>>> the flush (I think it is wrong for the upper layer to have to know
>>>> when this can happen).
>>>>     2) It is unclear what happens if the IOMMU TLBs and the PT contain
>>>> different mappings (I have received conflicting advice).
>>>>
>>>> So it is better to always flush, and as early as possible.
>>>
>>> So keep it as is or switch to CONFIG_X86?
>>
>> Please switch, unless anyone else voices a strong opinion towards
>> keeping as is.
> 
> I would like to avoid adding more #ifdef CONFIG_X86 in the common code. 
> Can we instead provide a wrapper for them?

Doable, sure, but I don't know whether Connor is up to going this
more extensive route.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 18 15:17:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:17:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129340.242789 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1T3-0005BJ-2F; Tue, 18 May 2021 15:17:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129340.242789; Tue, 18 May 2021 15:17:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1T2-0005BC-VR; Tue, 18 May 2021 15:17:08 +0000
Received: by outflank-mailman (input) for mailman id 129340;
 Tue, 18 May 2021 15:17:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj1T1-0005B6-5C
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:17:07 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj1T0-0005Rb-2A; Tue, 18 May 2021 15:17:06 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj1Sz-0008Kf-Ro; Tue, 18 May 2021 15:17:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Jb/nZpPfWCK59tKvpib0PXBBg+ntRrhpI15dC4MYvr8=; b=a/2T2mhtsV++KtKHRMnr0ZIel3
	VC587ykPBswkELrePHDn64klODKnwAOZ/fa0TzhgoBFxDUAF1fj8uYV0Z3gnP++U9TROWW0bKelaS
	QMrNuQXG9lid24qHACZEYRH8R/zUeTsRqXgvqaeDZNMiOkLQ9Lz9/IUQ3uZcRZm137g4=;
Subject: Re: [PATCH v3 2/5] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Jan Beulich <jbeulich@suse.com>, Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 Paul Durrant <paul@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621017334.git.connojdavis@gmail.com>
 <1156cb116da19ef64323e472bb6b6e87c6c73d77.1621017334.git.connojdavis@gmail.com>
 <556d1933-3b11-0780-edec-b6dc1729bc56@suse.com>
 <98b429d0-2673-624e-1690-9c0e8373ed5b@xen.org>
 <7cf966f6-7ccf-ba63-2b67-129577a7ca53@gmail.com>
 <8e415cac-a8b3-67a6-2f7b-489b964ceb50@suse.com>
 <fc967847-4a08-050c-aaac-5cfb42742f0e@xen.org>
 <922c6304-9299-a697-2405-1b7f6d069842@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d486c0f9-b615-0706-e1f2-3fd15bd7ec6a@xen.org>
Date: Tue, 18 May 2021 16:17:03 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <922c6304-9299-a697-2405-1b7f6d069842@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 18/05/2021 16:13, Jan Beulich wrote:
> On 18.05.2021 16:06, Julien Grall wrote:
>>
>>
>> On 18/05/2021 07:27, Jan Beulich wrote:
>>> On 18.05.2021 06:11, Connor Davis wrote:
>>>>
>>>> On 5/17/21 9:42 AM, Julien Grall wrote:
>>>>> Hi Jan,
>>>>>
>>>>> On 17/05/2021 12:16, Jan Beulich wrote:
>>>>>> On 14.05.2021 20:53, Connor Davis wrote:
>>>>>>> --- a/xen/common/memory.c
>>>>>>> +++ b/xen/common/memory.c
>>>>>>> @@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned
>>>>>>> long gmfn)
>>>>>>>         p2m_type_t p2mt;
>>>>>>>     #endif
>>>>>>>         mfn_t mfn;
>>>>>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>>         bool *dont_flush_p, dont_flush;
>>>>>>> +#endif
>>>>>>>         int rc;
>>>>>>>       #ifdef CONFIG_X86
>>>>>>> @@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d,
>>>>>>> unsigned long gmfn)
>>>>>>>          * Since we're likely to free the page below, we need to suspend
>>>>>>>          * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
>>>>>>>          */
>>>>>>> +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>>         dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
>>>>>>>         dont_flush = *dont_flush_p;
>>>>>>>         *dont_flush_p = false;
>>>>>>> +#endif
>>>>>>>           rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
>>>>>>>     +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>>         *dont_flush_p = dont_flush;
>>>>>>> +#endif
>>>>>>>           /*
>>>>>>>          * With the lack of an IOMMU on some platforms, domains with
>>>>>>> DMA-capable
>>>>>>> @@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>>>> struct xen_add_to_physmap *xatp,
>>>>>>>         xatp->gpfn += start;
>>>>>>>         xatp->size -= start;
>>>>>>>     +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>>         if ( is_iommu_enabled(d) )
>>>>>>>         {
>>>>>>>            this_cpu(iommu_dont_flush_iotlb) = 1;
>>>>>>>            extra.ppage = &pages[0];
>>>>>>>         }
>>>>>>> +#endif
>>>>>>>           while ( xatp->size > done )
>>>>>>>         {
>>>>>>> @@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>>>> struct xen_add_to_physmap *xatp,
>>>>>>>             }
>>>>>>>         }
>>>>>>>     +#ifdef CONFIG_HAS_PASSTHROUGH
>>>>>>>         if ( is_iommu_enabled(d) )
>>>>>>>         {
>>>>>>>             int ret;
>>>>>>> @@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d,
>>>>>>> struct xen_add_to_physmap *xatp,
>>>>>>>             if ( unlikely(ret) && rc >= 0 )
>>>>>>>                 rc = ret;
>>>>>>>         }
>>>>>>> +#endif
>>>>>>>           return rc;
>>>>>>>     }
>>>>>>
>>>>>> I wonder whether all of these wouldn't better become CONFIG_X86:
>>>>>> ISTR Julien indicating that he doesn't see the override getting used
>>>>>> on Arm. (Julien, please correct me if I'm misremembering.)
>>>>>
>>>>> Right, so far I haven't been in favor of introducing it because:
>>>>>     1) The P2M code may free some memory, so you can't always ignore
>>>>> the flush (I think it is wrong for the upper layer to have to know
>>>>> when this can happen).
>>>>>     2) It is unclear what happens if the IOMMU TLBs and the PT contain
>>>>> different mappings (I have received conflicting advice).
>>>>>
>>>>> So it is better to always flush, and as early as possible.
>>>>
>>>> So keep it as is or switch to CONFIG_X86?
>>>
>>> Please switch, unless anyone else voices a strong opinion towards
>>> keeping as is.
>>
>> I would like to avoid adding more #ifdef CONFIG_X86 in the common code.
>> Can we instead provide a wrapper for them?
> 
> Doable, sure, but I don't know whether Connor is up to going this
> more extensive route.

That's a fair point. If that's the case, then I prefer the #ifdef 
CONFIG_HAS_PASSTHROUGH version.

I can add an item to my todo list to introduce some helpers.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 15:21:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:21:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129349.242800 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1XW-0006Xk-Kp; Tue, 18 May 2021 15:21:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129349.242800; Tue, 18 May 2021 15:21:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1XW-0006Xd-H9; Tue, 18 May 2021 15:21:46 +0000
Received: by outflank-mailman (input) for mailman id 129349;
 Tue, 18 May 2021 15:21:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj1XV-0006XX-Ko
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:21:45 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj1XV-0005Wi-GD; Tue, 18 May 2021 15:21:45 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj1XV-0000Gt-7e; Tue, 18 May 2021 15:21:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=bihKm/0I5e2904VEtD9FhHuR9ZFEzgsDrIr7s+Qg+Yk=; b=jwjiEbVEp6SoBekt84khga6jzN
	TowDPHVP/SKdmNJhrs0KH57rsRRzbkhvmMtAFEioVQnfpXspgWK1J5+QLpXBAJuJGq5oKQVbB5Nbw
	ZeZ4mxpz8RsaRVdfoQkiT+X0qp7KOPmvCoi6czFW7R0Sm1nnuU7R61EhwGGsKNslRxHo=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	Julien Grall <jgrall@amazon.com>
Subject: [PATCH] tools/xenstored: Remove unused parameter in check_domains()
Date: Tue, 18 May 2021 16:21:40 +0100
Message-Id: <20210518152140.6333-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

The parameter of check_domains() is not used within the function. In fact,
this was a leftover from the original implementation, as the merged version
doesn't need to know whether we are restoring.

So remove it.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 2 +-
 tools/xenstore/xenstored_domain.c  | 4 ++--
 tools/xenstore/xenstored_domain.h  | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 8e470f2b2056..07458d7b48d0 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -589,7 +589,7 @@ void lu_read_state(void)
 	 * have died while we were live-updating. So check all the domains are
 	 * still alive.
 	 */
-	check_domains(true);
+	check_domains();
 }
 
 static const char *lu_activate_binary(const void *ctx)
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 3d4d0649a243..0e4bae9a9dd6 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -220,7 +220,7 @@ static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
 	       dominfo->domid == domid;
 }
 
-void check_domains(bool restore)
+void check_domains(void)
 {
 	xc_dominfo_t dominfo;
 	struct domain *domain;
@@ -277,7 +277,7 @@ void handle_event(void)
 		barf_perror("Failed to read from event fd");
 
 	if (port == virq_port)
-		check_domains(false);
+		check_domains();
 
 	if (xenevtchn_unmask(xce_handle, port) == -1)
 		barf_perror("Failed to write to event fd");
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index dc9759171317..cc5147d7e747 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -21,7 +21,7 @@
 
 void handle_event(void);
 
-void check_domains(bool restore);
+void check_domains(void);
 
 /* domid, mfn, eventchn, path */
 int do_introduce(struct connection *conn, struct buffered_data *in);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 15:22:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:22:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129350.242811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1Xq-0006zY-Tt; Tue, 18 May 2021 15:22:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129350.242811; Tue, 18 May 2021 15:22:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1Xq-0006zR-Qn; Tue, 18 May 2021 15:22:06 +0000
Received: by outflank-mailman (input) for mailman id 129350;
 Tue, 18 May 2021 15:22:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj1Xp-0006xj-Cg
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:22:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj1Xo-0005X2-Av; Tue, 18 May 2021 15:22:04 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj1Xo-0000Pi-28; Tue, 18 May 2021 15:22:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=bihKm/0I5e2904VEtD9FhHuR9ZFEzgsDrIr7s+Qg+Yk=; b=cJvMU+dWy3WYOLAX3QmAGWVAsW
	YIjXBbrZMB+RqlTqkxKeCMWLT1c1oeGC82Opkw4UrTTsSLTSpf4cqT0fTB9/vlCeVqVx8BSbOcwW5
	y6X5qlbHOu+Q65DSz5OteY4d2u3ycNHJkvaK7sdFZlyjS2UmERmBJFy6tqi9genSwIHY=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Juergen Gross <jgross@suse.com>
Subject: [PATCH] tools/xenstored: Remove unused parameter in check_domains()
Date: Tue, 18 May 2021 16:21:57 +0100
Message-Id: <20210518152157.6481-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

The parameter of check_domains() is not used within the function. In fact,
this was a leftover from the original implementation, as the merged version
doesn't need to know whether we are restoring.

So remove it.

Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/xenstore/xenstored_control.c | 2 +-
 tools/xenstore/xenstored_domain.c  | 4 ++--
 tools/xenstore/xenstored_domain.h  | 2 +-
 3 files changed, 4 insertions(+), 4 deletions(-)

diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
index 8e470f2b2056..07458d7b48d0 100644
--- a/tools/xenstore/xenstored_control.c
+++ b/tools/xenstore/xenstored_control.c
@@ -589,7 +589,7 @@ void lu_read_state(void)
 	 * have died while we were live-updating. So check all the domains are
 	 * still alive.
 	 */
-	check_domains(true);
+	check_domains();
 }
 
 static const char *lu_activate_binary(const void *ctx)
diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
index 3d4d0649a243..0e4bae9a9dd6 100644
--- a/tools/xenstore/xenstored_domain.c
+++ b/tools/xenstore/xenstored_domain.c
@@ -220,7 +220,7 @@ static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
 	       dominfo->domid == domid;
 }
 
-void check_domains(bool restore)
+void check_domains(void)
 {
 	xc_dominfo_t dominfo;
 	struct domain *domain;
@@ -277,7 +277,7 @@ void handle_event(void)
 		barf_perror("Failed to read from event fd");
 
 	if (port == virq_port)
-		check_domains(false);
+		check_domains();
 
 	if (xenevtchn_unmask(xce_handle, port) == -1)
 		barf_perror("Failed to write to event fd");
diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
index dc9759171317..cc5147d7e747 100644
--- a/tools/xenstore/xenstored_domain.h
+++ b/tools/xenstore/xenstored_domain.h
@@ -21,7 +21,7 @@
 
 void handle_event(void);
 
-void check_domains(bool restore);
+void check_domains(void);
 
 /* domid, mfn, eventchn, path */
 int do_introduce(struct connection *conn, struct buffered_data *in);
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 15:22:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:22:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129354.242822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1YX-0007gF-7V; Tue, 18 May 2021 15:22:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129354.242822; Tue, 18 May 2021 15:22:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1YX-0007g8-3w; Tue, 18 May 2021 15:22:49 +0000
Received: by outflank-mailman (input) for mailman id 129354;
 Tue, 18 May 2021 15:22:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj1YW-0007fu-69
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:22:48 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj1YV-0005Xl-RT; Tue, 18 May 2021 15:22:47 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj1YV-0000WN-LZ; Tue, 18 May 2021 15:22:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=4+nvdpyKFrDm8KjzAEYArHQX24Wn4s70eOMRzkrhMyI=; b=J26saD6UcelOVA5yiQSWtTqfy8
	F6qj2jMpwKg1iKqDuKIZq9MUCJcpTvhypsesQLTfhvqsWnMx9vU/XCV8ULhkA5p71x7DglyWHS9uV
	tBwvwktzStxKdty19WywTjM9mqj7LDo/+MSQbAf84UP4vFNs0ldjhospbhznLeNbxTv4=;
Subject: Re: [PATCH] tools/xenstored: Remove unused parameter in
 check_domains()
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, Julien Grall <jgrall@amazon.com>
References: <20210518152140.6333-1-julien@xen.org>
Message-ID: <bb1ce950-89eb-8d36-f554-76787fced11b@xen.org>
Date: Tue, 18 May 2021 16:22:45 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518152140.6333-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

Please ignore this patch. I forgot to CC the maintainers. I will resend it.

Sorry for the noise.

Cheers,

On 18/05/2021 16:21, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The parameter of check_domains() is not used within the function. In fact,
> this was a left over of the original implementation as the version merged
> doesn't need to know whether we are restoring.
> 
> So remove it.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
>   tools/xenstore/xenstored_control.c | 2 +-
>   tools/xenstore/xenstored_domain.c  | 4 ++--
>   tools/xenstore/xenstored_domain.h  | 2 +-
>   3 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index 8e470f2b2056..07458d7b48d0 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -589,7 +589,7 @@ void lu_read_state(void)
>   	 * have died while we were live-updating. So check all the domains are
>   	 * still alive.
>   	 */
> -	check_domains(true);
> +	check_domains();
>   }
>   
>   static const char *lu_activate_binary(const void *ctx)
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 3d4d0649a243..0e4bae9a9dd6 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -220,7 +220,7 @@ static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
>   	       dominfo->domid == domid;
>   }
>   
> -void check_domains(bool restore)
> +void check_domains(void)
>   {
>   	xc_dominfo_t dominfo;
>   	struct domain *domain;
> @@ -277,7 +277,7 @@ void handle_event(void)
>   		barf_perror("Failed to read from event fd");
>   
>   	if (port == virq_port)
> -		check_domains(false);
> +		check_domains();
>   
>   	if (xenevtchn_unmask(xce_handle, port) == -1)
>   		barf_perror("Failed to write to event fd");
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index dc9759171317..cc5147d7e747 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -21,7 +21,7 @@
>   
>   void handle_event(void);
>   
> -void check_domains(bool restore);
> +void check_domains(void);
>   
>   /* domid, mfn, eventchn, path */
>   int do_introduce(struct connection *conn, struct buffered_data *in);
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 15:24:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:24:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129363.242836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1aF-0008OQ-Lb; Tue, 18 May 2021 15:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129363.242836; Tue, 18 May 2021 15:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1aF-0008OJ-IJ; Tue, 18 May 2021 15:24:35 +0000
Received: by outflank-mailman (input) for mailman id 129363;
 Tue, 18 May 2021 15:24:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lj1aD-0008O6-Ik
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:24:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b4e4d50-4400-48c9-9929-0e59982739f1;
 Tue, 18 May 2021 15:24:32 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 08976AF21;
 Tue, 18 May 2021 15:24:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b4e4d50-4400-48c9-9929-0e59982739f1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621351472; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type;
	bh=LdchXg5sD+yim6rkWrmO58TKx/j7GGKezGaYKhI977k=;
	b=IztIi2iySIHDdcvp3+HaMVEEYj/xFEyqaVuIrrBSLLhj5BAr014qIT3w9OarOoxPRx7QHe
	s2eN1hrCyagxplH7QGRToiXDFFgIWtK4xh/bgXOlJBzG+eBOUzllBR1LNgbG12slYCSTc5
	LImoDYfPV910nkwp6blzp6Jp2wlTgKw=
Message-ID: <f7738499f24f6682f4ae1c1c750e30f322dfdbf3.camel@suse.com>
Subject: QEMU backport necessary for building with "recent" toolchain (on
 openSUSE Tumbleweed)
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel <xen-devel@lists.xenproject.org>
Cc: Anthony Perard <anthony.perard@citrix.com>, Ian Jackson
 <iwj@xenproject.org>,  Wei Liu <wl@xen.org>, Roger Pau Monne
 <roger.pau@citrix.com>
Date: Tue, 18 May 2021 17:24:30 +0200
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-dWM7hf+zHIeucaFvLuqN"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-dWM7hf+zHIeucaFvLuqN
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hello,

While trying to build Xen on openSUSE Tumbleweed, I ran into the
following error while qemu-xen was being built:

ld: Error: unable to disambiguate: -no-pie (did you mean --no-pie ?)
make[1]: *** [Makefile:53: multiboot.img] Error 1
make: *** [Makefile:576: pc-bios/optionrom/all] Error 2
make: Leaving directory '/build/tools/qemu-xen-build'
make[3]: *** [Makefile:212: subdir-all-qemu-xen-dir] Error 2
make[3]: Leaving directory '/build/tools'
make[2]: *** [/build/tools/../tools/Rules.mk:156: subdirs-install] Error 2
make[2]: Leaving directory '/build/tools'
make[1]: *** [Makefile:66: install] Error 2
make[1]: Leaving directory '/build/tools'
make: *** [Makefile:140: install-tools] Error 2

Build tools versions are as follows:

dario@885e566747e1:~> gcc -v
gcc version 10.3.0 (SUSE Linux)

dario@885e566747e1:~> ld -v
GNU ld (GNU Binutils; openSUSE Tumbleweed) 2.36.1.20210326-3

I think we need the following commit in our QEMU: bbd2d5a812077
("build: -no-pie is no functional linker flag").
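The incompatibility can be sketched outside the QEMU tree. This is a hedged illustration (assuming `gcc` and a binutils `ld` are on PATH; file names are made up): the GCC driver understands the abbreviated `-no-pie` and consumes it itself, whereas `ld` >= 2.36 only accepts the unambiguous `--no-pie` spelling, which is why QEMU's option-rom Makefile fails when the flag reaches the linker directly:

```shell
# Write a trivial test program (hypothetical path).
cat > /tmp/pie_probe.c <<'EOF'
int main(void) { return 0; }
EOF

# gcc accepts the abbreviated spelling and handles it itself:
if gcc -no-pie -o /tmp/pie_probe /tmp/pie_probe.c 2>/dev/null; then
    echo "gcc accepts -no-pie"
fi

# Handing the same flag straight to ld is what breaks with binutils >= 2.36:
#   ld -no-pie ...
#   -> ld: Error: unable to disambiguate: -no-pie (did you mean --no-pie ?)
```
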

I have attempted a quick-&-dirty backport of it here:
https://xenbits.xen.org/gitweb/?p=people/dariof/qemu-xen.git;a=commit;h=85575b7b661cedb8e6f6e192d36199ca9fde5841

Feel free to use it as a base, or tell me if I can help with it in any
other way.

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-dWM7hf+zHIeucaFvLuqN
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmCj3C4ACgkQFkJ4iaW4
c+7Vbw/9ETlckt36nVjn7zX5I12Od0zSnz9TLDFg/9Bpg3dG4Q+zjMMaK+h6DoOI
Iz7lVsgdqq+B5FPP26ITOqvL2ary7ULAQliaT9GquN4i3r9p8qvWRXo6NkYfDNcH
3BxGZuBrm0OkLiP4gixw+VlrCRN3LGct+dUcTXAcJ//9U0LSZyZJm5ryEMm2BJxn
8JRK8FNeisRIrnqpTfD39fVPB5Qn5ddgPyAEwVPd1koWK4RiEGbRaA0Et3ZJrNtl
2sID/1nkjScMGMpfYDQ6txnp3Izmta2ZskO1l9GdGnFN95CLaCSiA8dVHKSgjpO+
BGAtJ4RxTjr1tT+AF64sGV4pJ08SY6e+WSudmeKptH0vAoLKWjJLMlyYzpEjg8ic
lqhyqYtsE+iHWrrEwQZeufSCL3T6IH1AoyPqnxF8H5MvMYtJVCDovq2qOTpuQ6cx
Pvv9Gw0CfZYiIKhoYzOX7XDvoQHJEoPG8n4xsMr5XTUkZtYMqJAzSESURD5Fk0e4
eVCrXX27RvkKR+g4BbuVAHqDdeL+sBpFT6UBbHFMbb69h5ZxOt/5a1AIQ+2Ncbtz
D5cDpi8VWnt+nDMyXwJFjoUKnxd1MAekTzcYXE1ouQcRfFFewfKfyV+mYs0C7Vjy
4RJ0MtF7jm/jY0LZIaSshim9T/0PgPZkZj9U1sDY1cqBf+M/fGI=
=Sdo0
-----END PGP SIGNATURE-----

--=-dWM7hf+zHIeucaFvLuqN--



From xen-devel-bounces@lists.xenproject.org Tue May 18 15:30:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:30:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129375.242850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1fz-0001SB-KA; Tue, 18 May 2021 15:30:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129375.242850; Tue, 18 May 2021 15:30:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1fz-0001S4-HF; Tue, 18 May 2021 15:30:31 +0000
Received: by outflank-mailman (input) for mailman id 129375;
 Tue, 18 May 2021 15:30:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lj1fy-0001Ru-6D
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:30:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 70013234-f156-4ab8-8bb4-bfef67ccb8c4;
 Tue, 18 May 2021 15:30:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A86BAFBF;
 Tue, 18 May 2021 15:30:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 70013234-f156-4ab8-8bb4-bfef67ccb8c4
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621351828; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=X+czU1VmpyowP7r/f8lRmHd6JBxgqdQOEcws0oO8zlQ=;
	b=dWNAVAi3MOETbiOZOYzsONhAJzwQ9u0CeJ2p+ri2YkT2+U1l76eSR7/Yhig+qY53JgNmCl
	9PMbYx5+xvQzdCoLEO+zV6JC8sn6TPhocwzQboIDtUGBUwNXyqkNMx80wBta3UjH078Xno
	2k0G5bdXl4KneGOEApCM0jj5K7fZWzE=
Message-ID: <a393f47c5450195cf8be88e7ea5e9d3977576563.camel@suse.com>
Subject: Re: [PATCH 2/2] automation: fix dependencies on openSUSE Tumbleweed
 containers
From: Dario Faggioli <dfaggioli@suse.com>
To: Roger Pau =?ISO-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 May 2021 17:30:26 +0200
In-Reply-To: <9160502180e3c36a52cb841520615bc7fe91b42b.camel@suse.com>
References: <162133919718.25010.4197057069904956422.stgit@Wayrath>
	 <162133945335.25010.4601866854997664898.stgit@Wayrath>
	 <YKO/BcUAtjSgc2pV@Air-de-Roger>
	 <9160502180e3c36a52cb841520615bc7fe91b42b.camel@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-ZXqkCeZwfnw85QfOh/fH"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-ZXqkCeZwfnw85QfOh/fH
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2021-05-18 at 16:33 +0200, Dario Faggioli wrote:
> On Tue, 2021-05-18 at 15:20 +0200, Roger Pau Monné wrote:
> > On Tue, May 18, 2021 at 02:04:13PM +0200, Dario Faggioli wrote:
> > > From: Dario Faggioli <dario@Solace.fritz.box>
> > > 
> Mmm... this email address does not really exist, and it's a mistake
> that it ended up here. :-/
> 
> > > Fix the build inside our openSUSE Tumbleweed container by using
> > > the proper python development packages (and adding zstd headers).
> > > 
> > > Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> > > ---
> > > Cc: Doug Goldstein <cardoe@cardoe.com>
> > > Cc: Roger Pau Monne <roger.pau@citrix.com>
> > > Cc: Andrew Cooper <andrew.cooper3@citrix.com>
> > > ---
> > >  .../build/suse/opensuse-tumbleweed.dockerfile      |    5 ++---
> > >  1 file changed, 2 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile
> > > b/automation/build/suse/opensuse-tumbleweed.dockerfile
> > > index 8ff7b9b5ce..5b99cb8a53 100644
> > > --- a/automation/build/suse/opensuse-tumbleweed.dockerfile
> > > +++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
> > > 
> > > @@ -56,10 +57,8 @@ RUN zypper install -y --no-recommends \
> > >          pandoc \
> > >          patch \
> > >          pkg-config \
> > > -        python \
> > >          python-devel \
> > > -        python3 \
> > > -        python3-devel \
> > > +        python38-devel \
> > 
> > When I tested, python3-devel was translated into python38-devel,
> > 
> Oh, really? And when was it that you tested it, if I can ask?
> 
> > which
> > I think is better as we don't need to modify the docker file for
> > every
> > new Python version?
> > 
> That would definitely be better. Right now, I don't see any
> python3-devel package. If python3-devel can still be used (and it
> somehow translates to the proper -devel package), then sure we should
> use it. I'm not sure how that would happen, but maybe it's just me
> being unaware of some packaging magic.
> 
> Let me put "python3-devel" there and test locally again, so we know
> if it actually works.
> 
Ok, indeed it works. And, on second thought, it's not obscure at all
that it does.

It's just that python38-devel _provides_ python3-devel, which makes a
lot of sense; it was silly of me not to realize that and simply use
python3-devel in the first place:

dario@4b10a592ca98:~> zypper if --provides python38-devel
Information for package python38-devel:
---------------------------------------
Repository     : @System
Name           : python38-devel
Version        : 3.8.10-1.2
Arch           : x86_64
Vendor         : openSUSE
Installed Size : 882.7 KiB
Installed      : Yes
Status         : up-to-date
Source package : python38-core-3.8.10-1.2.src
Summary        : Include Files and Libraries Mandatory for Building Python Modules
[...]
Provides       : [8]
    libpython3.so()(64bit)
    pkgconfig(python-3.8) = 3.8
    pkgconfig(python-3.8-embed) = 3.8
    pkgconfig(python3) = 3.8
    pkgconfig(python3-embed) = 3.8
    python3-devel = 3.8.10
    python38-devel = 3.8.10-1.2
    python38-devel(x86-64) = 3.8.10-1.2

What now puzzles me a little, though, is why the build was failing, as
python3-devel was already there in the docker file. Maybe we "just"
forgot to push the image?

Well, there's a different issue (missing libzstd-devel), so I'll send a
v2 of this series anyway.
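Putting the two findings together, a v2 hunk might look something like the fragment below. This is only a sketch (package names assumed, surrounding dockerfile lines omitted), not the actual patch: it keeps the generic capability name, which python38-devel — and any future python3x-devel — provides, and adds the missing zstd headers:

```dockerfile
RUN zypper install -y --no-recommends \
        python-devel \
        python3-devel \
        libzstd-devel
```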

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-ZXqkCeZwfnw85QfOh/fH
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmCj3ZIACgkQFkJ4iaW4
c+4sbRAAjkTFmVhbLi+vYpgAPIgX+YyN41Y8x9Ejb8UqCFyB1HKWoeOaQdodOp1z
zIkdKS+8F1GaA0vxlD+rzD0s8DWtH1O30K/6DJDeSY/qpKnNB23Pp3M7lcgluwlO
Se0wOq1aYT88KaOg013inf2t21U/uN2frrYkM2UAD0Ha5o1qh7iy86qDhLBrpBjV
zkhY0VPQtFLn68m/1COqIB+G6DbZ8ccDo1tu/xk3tdV+9R/2Ll/o6V6+BaefvTUB
dwch/KYAqXRM8hvaYvGnXpKs9mAOlijqJNvYZQxP/clH7jTadVQQF3IIB1fXRwxX
ytVV8bBuX1Nxorb/hSqVI8xiwdoGOsZKuILK2Uh8Vjn/5KfvHPVyiNUEIOVD1TeY
tQzNZ2f4shqYkmvS+Rz/26g78RcN6UMcDEPHQV0/Pc0NpjQVzRfpnUC3WQLegKTN
92r9PqWjIxXPKd05pfQChtSTs8kJVwr3DqHiYiABJfVj5y0BsBZJOX9IJreoFffW
/50v8WlBmy+aADq8Kfv1DvM0mBne1l4XI7rwBUSEdB/SGyuzSuWgsFizBnG0WKyv
/Z7sVBBNedmGiMxOkDyDUJ7uEFd4KjuIgktAWEo8w0Ye2Gwqzv4sExibFJf7F1zs
0Sb4/Xh29JoGn/w8+oEU93R9EgeZOXdALYCnRRiBdXkQVt2APd0=
=80KJ
-----END PGP SIGNATURE-----

--=-ZXqkCeZwfnw85QfOh/fH--



From xen-devel-bounces@lists.xenproject.org Tue May 18 15:35:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:35:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129383.242861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1kW-00028M-8g; Tue, 18 May 2021 15:35:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129383.242861; Tue, 18 May 2021 15:35:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj1kW-00028F-3x; Tue, 18 May 2021 15:35:12 +0000
Received: by outflank-mailman (input) for mailman id 129383;
 Tue, 18 May 2021 15:35:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+8gn=KN=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lj1kU-000289-L9
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 15:35:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e871397d-90f4-4e5e-b9ac-8ec86cda7059;
 Tue, 18 May 2021 15:35:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EAEC5B188;
 Tue, 18 May 2021 15:35:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e871397d-90f4-4e5e-b9ac-8ec86cda7059
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621352109; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=T9nuqRtQoAQI2yRmNsEhIiOVqjnRHxHLJ6RBmYMXqmc=;
	b=gwchIatSZci2ox+pcRpc/3wh0LfD+elfwR6aIJylrscLphg992qxh8l6YmH4tGwfuVw6VI
	w8C9Iv5X3hGGArBKJdg5JzyRVwemIZkXNXjbvdV+YNBIvdRbvQG/aWXy2m7CawkIuBjxG+
	E6qDGfT9ot+IetQZfHACYgnzDGFGycg=
Subject: Re: [PATCH] tools/xenstored: Remove unused parameter in
 check_domains()
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
References: <20210518152157.6481-1-julien@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <2ed0d959-2d55-570e-1f2c-1688470ee331@suse.com>
Date: Tue, 18 May 2021 17:35:08 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <20210518152157.6481-1-julien@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="CVEah46odhSi8FV3h0QJgALmjpPePzNGN"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--CVEah46odhSi8FV3h0QJgALmjpPePzNGN
Content-Type: multipart/mixed; boundary="wO8va5x3MdUwAcEBkiDPwwVjO0wPiWNsd";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>, xen-devel@lists.xenproject.org
Cc: raphning@amazon.co.uk, Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Message-ID: <2ed0d959-2d55-570e-1f2c-1688470ee331@suse.com>
Subject: Re: [PATCH] tools/xenstored: Remove unused parameter in
 check_domains()
References: <20210518152157.6481-1-julien@xen.org>
In-Reply-To: <20210518152157.6481-1-julien@xen.org>

--wO8va5x3MdUwAcEBkiDPwwVjO0wPiWNsd
Content-Type: multipart/mixed;
 boundary="------------C0F8BD7C06459D98FBE1A237"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C0F8BD7C06459D98FBE1A237
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.05.21 17:21, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> The parameter of check_domains() is not used within the function. In
> fact, this was a leftover from the original implementation, as the
> version merged doesn't need to know whether we are restoring.
> 
> So remove it.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------C0F8BD7C06459D98FBE1A237
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------C0F8BD7C06459D98FBE1A237--

--wO8va5x3MdUwAcEBkiDPwwVjO0wPiWNsd--

--CVEah46odhSi8FV3h0QJgALmjpPePzNGN
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB4BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCj3qwFAwAAAAAACgkQsN6d1ii/Ey9m
LQf4wzFHofG7ej0Lb44/cq5SLWz8eI1sl9TZuOpitff/cLjMmmSx+rA4U7mvxaOnvxa/JeQt9OpH
2l7V8r6Jx7NFVYLCA2hmk7L0cy0NHelPQnojkNXDaWWt8MAdCksC+n44PW7L6ZrH67CNy2MPfFOG
rySkloD5J5YhUInESN0GEWGDzbOLn0+tXtNi6AGVJohxke2HkdEdYqx5woFKAudcSeRfbz/WsORa
kIufSijibj+OoF7affY0/Z9NZUsdZydlZn3eZ8p7BB0E7ZHtx6Qijq9bP5cfK57XDUxtyZlPXSeT
wcLB6BTrrdTU7vBEOrQ+i5S2IBqYsTBW3YH2HO6w
=8yPA
-----END PGP SIGNATURE-----

--CVEah46odhSi8FV3h0QJgALmjpPePzNGN--


From xen-devel-bounces@lists.xenproject.org Tue May 18 15:58:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 15:58:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129402.242881 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj26g-0004UA-Be; Tue, 18 May 2021 15:58:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129402.242881; Tue, 18 May 2021 15:58:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj26g-0004U3-7y; Tue, 18 May 2021 15:58:06 +0000
Received: by outflank-mailman (input) for mailman id 129402;
 Tue, 18 May 2021 15:58:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj26e-0004Tt-86; Tue, 18 May 2021 15:58:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj26e-00068B-45; Tue, 18 May 2021 15:58:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj26d-0005aX-QR; Tue, 18 May 2021 15:58:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj26d-0001Nb-Py; Tue, 18 May 2021 15:58:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=f6C4gzuL6Krb0hMVaktAMPll2FsCpQqolrf8UhCG8YY=; b=RVfGo8I7XaSYzjhNjcmM4KuADh
	SwO6JVAXnKJYnPqrDyqAPZoB38TwhNUj599SoyqY3VupONBcHk0cX16wlcbKTgVNo8I6u+OEn147u
	BaiuqjewPhCQl5WFdOJGF/Rbwnv7lrSMnj3jvGZ0w6QQTRI5amwtfLR3fWDd2SxOgPxc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162023-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162023: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
X-Osstest-Versions-That:
    xen=3ac8835a80b27fc4e7116dbde78d3eececc66fc9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 15:58:03 +0000

flight 162023 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162023/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4
baseline version:
 xen                  3ac8835a80b27fc4e7116dbde78d3eececc66fc9

Last test of basis   161985  2021-05-17 21:00:36 Z    0 days
Testing same since   162023  2021-05-18 13:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3ac8835a80..caa9c4471d  caa9c4471d1d74b2d236467aaf7e63a806ac11a4 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 18 16:02:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:02:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129411.242898 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2AW-0006OQ-05; Tue, 18 May 2021 16:02:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129411.242898; Tue, 18 May 2021 16:02:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2AV-0006OJ-SG; Tue, 18 May 2021 16:02:03 +0000
Received: by outflank-mailman (input) for mailman id 129411;
 Tue, 18 May 2021 16:02:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Xey/=KN=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1lj2AU-0006OD-TC
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:02:02 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6b480e7a-7b96-47b5-bda7-f3403399ee89;
 Tue, 18 May 2021 16:02:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6b480e7a-7b96-47b5-bda7-f3403399ee89
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621353721;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=xcGyOaCqAVUMCBuO1cFrZgtaxJNzh7MpEj13jxmSZN0=;
  b=c/ozoi4dFWpDq/QHQ3GFKn98/Kbs4NMeqFYLKRgtH1HJljxEPDHW8Ejv
   KE2aSN1/HldOnfVEkzx4yBRbpQznoejNnDEOoHDN6gVVIIJtdUHar78vG
   F7v6khNevAbEobLDkvrtw91S3bOwB5zFJhtPC6H2jPFvtbkp+Po8RqAmH
   Y=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: NJDesJ2glXr5d2tWFnCKXphqs33pM33/jj1Rh6M5/Zt3Y5ob9VSDzBPkS6ZOTsmYcRWvaFMv6/
 YBELsxBTDIQ2jqao6K6pUkYzzhDZtVvYfr5i9UJynTwtMlNEKQMlfMw8Tnex5m66Zyv2GeBsd2
 ZGTqANlBx7SyCw7xIrUETe/DbdLQ6h73fDtjKx9GfZ2pXd40psAVP5MxMx5BPRx/TPke+BYaiW
 T9yYfSRLsE4dr6N0+K20xrlewKGW3dmzO1w0Li5DyIQYIjmz5cTPOpvCUkCiUAtDA37kENOhhk
 utA=
X-SBRS: 5.1
X-MesageID: 44046508
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:7KZ50KAJy7qOBQjlHemu55DYdb4zR+YMi2TC1yhKKCC9Vvbo8P
 xG/c5rsSMc5wx8ZJhNo7+90ey7MBXhHP1OkOws1NWZLWrbUQKTRekIh+bfKn/bak/DH4ZmpN
 5dmsNFaOEYY2IVsfrH
X-IronPort-AV: E=Sophos;i="5.82,310,1613451600"; 
   d="scan'208";a="44046508"
Date: Tue, 18 May 2021 17:01:57 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Julien Grall <julien@xen.org>
CC: <xen-devel@lists.xenproject.org>, Julien Grall <jgrall@amazon.com>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH v2 2/2] tools/console: Use const whenever we point to
 literal strings
Message-ID: <YKPk9QJJS6kMy6uP@perard>
References: <20210518140134.31541-1-julien@xen.org>
 <20210518140134.31541-3-julien@xen.org>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <20210518140134.31541-3-julien@xen.org>

On Tue, May 18, 2021 at 03:01:34PM +0100, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Literal strings are not meant to be modified. So we should use const
> char * rather than char * when we want to store a pointer to them.
> 
> Take the opportunity to remove the cast (char *) in console_init(). It
> is unnecessary and casts away the const.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> Acked-by: Wei Liu <wl@xen.org>
> 
> ---
>     Changes in v2:
>         - Remove the cast (char *) in console_init()

Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Tue May 18 16:12:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:12:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129418.242912 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2Kq-0007qZ-3n; Tue, 18 May 2021 16:12:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129418.242912; Tue, 18 May 2021 16:12:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2Kq-0007qS-00; Tue, 18 May 2021 16:12:44 +0000
Received: by outflank-mailman (input) for mailman id 129418;
 Tue, 18 May 2021 16:12:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=/tj/=KN=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1lj2Ko-0007qG-Ap
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:12:42 +0000
Received: from EUR02-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.0.54]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 75276f83-fbfb-459c-a245-4907c6cc74b7;
 Tue, 18 May 2021 16:12:40 +0000 (UTC)
Received: from DU2PR04CA0079.eurprd04.prod.outlook.com (2603:10a6:10:232::24)
 by VI1PR08MB3183.eurprd08.prod.outlook.com (2603:10a6:803:47::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 16:12:32 +0000
Received: from DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:10:232:cafe::b0) by DU2PR04CA0079.outlook.office365.com
 (2603:10a6:10:232::24) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.32 via Frontend
 Transport; Tue, 18 May 2021 16:12:32 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT037.mail.protection.outlook.com (10.152.20.215) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Tue, 18 May 2021 16:12:32 +0000
Received: ("Tessian outbound 3050e7a5b95d:v92");
 Tue, 18 May 2021 16:12:32 +0000
Received: from 91e0cfdcd992.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 18D3F69B-46DB-467F-A563-F17877597127.1; 
 Tue, 18 May 2021 16:12:13 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 91e0cfdcd992.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Tue, 18 May 2021 16:12:13 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VE1PR08MB5647.eurprd08.prod.outlook.com (2603:10a6:800:1b2::9)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 16:12:11 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a%5]) with mapi id 15.20.4129.031; Tue, 18 May 2021
 16:12:11 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO2P265CA0307.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:a5::31) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Tue, 18 May 2021 16:12:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 75276f83-fbfb-459c-a245-4907c6cc74b7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eiBLi0AFnGNZcNcezd06ehlbAK8xI0EM1EKgwai0JwI=;
 b=6Y/0S6q9dlhOldzsl2hj49RoIycirPcJfCnDZhkeBRAqlSzdC4gPjgXD0TiF0hkA7kYy+EtHcmS3sgXFDP0Cwa5d9Pwd9EZtd8CmZL1l4o5E4LQQ7Fj23HrxYVvLJoe8uTMCmBoL9Mq91NIT3BQzs1d6AjnBfL0QosHLBnY5IDM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 3bbd0117cfd6853e
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=m2pC8JJY3aYOqQwRCNW6Sm1Y450hTsrI+OHmcpbjKDoF/NBGF7JwUOnwkfcKQ8jO8uqzdb1XnDRjqaFjlIfWcPyaKAgr51mjZYRAuu799wQd51asGjrt9vnDehlWKif6C5pYxAOBxnm59aDgTHSwHzEfiW/339AcYFocy2Uk2LxCqKhd1Z/+XExYMGTL7+9bBRJVU2BsGZiSaY7ArdUyunCvtWrgT1o5OtfKB4552wefSjwSjNy/NczaVHySR3gXr6I/xjrEifLNGb4X/D7ED6HRnYoIu8BUP5u/Dl44n+p2nM0PNoKzhPsaM7GS4qNyYo2qL5x3V0C6l9luKCBh3Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eiBLi0AFnGNZcNcezd06ehlbAK8xI0EM1EKgwai0JwI=;
 b=aSE24dozesi7+im1ur1Oh94GYw4WdsarPcUgdPtr4w6atEngOyt3mbuoyvpUViRseCP0VscZRNTjva5SGxKxE7YChInhKKSdZrRlvBCJHryQ5+k2VoEpcaIOo2WSbtiNFAudgxVEOvnJ70zvbT3Xk6eVjgiAmfKI3Sk4GjYToepIIFHTvUoeTAB9JopoDoUcf4jOmceF9QNMc6Un/pZn+IPkLNr17JrO6WPk2XnUCa2Pxu86KmSP+x8ChcUIXmUpPWhYbVZ5a97t4CmWuYC1gDmKif/+nRl3t/aDjvp/OzuIJN9SH1AmnyGpqDlbszd58rYgZXkyeIUAxR+0lJwOIA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=eiBLi0AFnGNZcNcezd06ehlbAK8xI0EM1EKgwai0JwI=;
 b=6Y/0S6q9dlhOldzsl2hj49RoIycirPcJfCnDZhkeBRAqlSzdC4gPjgXD0TiF0hkA7kYy+EtHcmS3sgXFDP0Cwa5d9Pwd9EZtd8CmZL1l4o5E4LQQ7Fj23HrxYVvLJoe8uTMCmBoL9Mq91NIT3BQzs1d6AjnBfL0QosHLBnY5IDM=
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH] tools/xenstored: Remove unused parameter in
 check_domains()
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <20210518152157.6481-1-julien@xen.org>
Date: Tue, 18 May 2021 17:12:04 +0100
Cc: xen-devel@lists.xenproject.org,
 raphning@amazon.co.uk,
 Julien Grall <jgrall@amazon.com>,
 Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>,
 Juergen Gross <jgross@suse.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <95A0C7AA-4C70-49EB-9DD9-C9AAE987A5B5@arm.com>
References: <20210518152157.6481-1-julien@xen.org>
To: Julien Grall <julien@xen.org>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO2P265CA0307.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a5::31) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 704957c8-0caf-45d8-e56d-08d91a17bdcf
X-MS-TrafficTypeDiagnostic: VE1PR08MB5647:|VI1PR08MB3183:
X-Microsoft-Antispam-PRVS:
	<VI1PR08MB318368F4E27A21CDCCD92269E42C9@VI1PR08MB3183.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5647
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	931df35a-7a09-4d55-ec4e-08d91a17b15a
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 16:12:32.0664
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 704957c8-0caf-45d8-e56d-08d91a17bdcf
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3183



> On 18 May 2021, at 16:21, Julien Grall <julien@xen.org> wrote:
> 
> From: Julien Grall <jgrall@amazon.com>
> 
> The parameter of check_domains() is not used within the function. In fact,
> this was a left over of the original implementation as the version merged
> doesn't need to know whether we are restoring.
> 
> So remove it.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> ---
> tools/xenstore/xenstored_control.c | 2 +-
> tools/xenstore/xenstored_domain.c  | 4 ++--
> tools/xenstore/xenstored_domain.h  | 2 +-
> 3 files changed, 4 insertions(+), 4 deletions(-)
> 
> diff --git a/tools/xenstore/xenstored_control.c b/tools/xenstore/xenstored_control.c
> index 8e470f2b2056..07458d7b48d0 100644
> --- a/tools/xenstore/xenstored_control.c
> +++ b/tools/xenstore/xenstored_control.c
> @@ -589,7 +589,7 @@ void lu_read_state(void)
> 	 * have died while we were live-updating. So check all the domains are
> 	 * still alive.
> 	 */
> -	check_domains(true);
> +	check_domains();
> }
> 
> static const char *lu_activate_binary(const void *ctx)
> diff --git a/tools/xenstore/xenstored_domain.c b/tools/xenstore/xenstored_domain.c
> index 3d4d0649a243..0e4bae9a9dd6 100644
> --- a/tools/xenstore/xenstored_domain.c
> +++ b/tools/xenstore/xenstored_domain.c
> @@ -220,7 +220,7 @@ static bool get_domain_info(unsigned int domid, xc_dominfo_t *dominfo)
> 	       dominfo->domid == domid;
> }
> 
> -void check_domains(bool restore)
> +void check_domains(void)
> {
> 	xc_dominfo_t dominfo;
> 	struct domain *domain;
> @@ -277,7 +277,7 @@ void handle_event(void)
> 		barf_perror("Failed to read from event fd");
> 
> 	if (port == virq_port)
> -		check_domains(false);
> +		check_domains();
> 
> 	if (xenevtchn_unmask(xce_handle, port) == -1)
> 		barf_perror("Failed to write to event fd");
> diff --git a/tools/xenstore/xenstored_domain.h b/tools/xenstore/xenstored_domain.h
> index dc9759171317..cc5147d7e747 100644
> --- a/tools/xenstore/xenstored_domain.h
> +++ b/tools/xenstore/xenstored_domain.h
> @@ -21,7 +21,7 @@
> 
> void handle_event(void);
> 
> -void check_domains(bool restore);
> +void check_domains(void);
> 
> /* domid, mfn, eventchn, path */
> int do_introduce(struct connection *conn, struct buffered_data *in);
> -- 
> 2.17.1
> 
> 

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

Cheers,
Luca


From xen-devel-bounces@lists.xenproject.org Tue May 18 16:12:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:12:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129419.242923 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2L3-0008By-GX; Tue, 18 May 2021 16:12:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129419.242923; Tue, 18 May 2021 16:12:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2L3-0008Bp-DB; Tue, 18 May 2021 16:12:57 +0000
Received: by outflank-mailman (input) for mailman id 129419;
 Tue, 18 May 2021 16:12:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lj2L2-0008B1-3E
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:12:56 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23203f20-c267-4e8d-813d-f49cf3375082;
 Tue, 18 May 2021 16:12:55 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A422EB020;
 Tue, 18 May 2021 16:12:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23203f20-c267-4e8d-813d-f49cf3375082
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621354374; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=Cq7biW7fc2hJQfE/Ns6QbKXcGZHWBPzcWQpMIgj02jo=;
	b=FJmcwiWscyjGUmggt+4X9eKg9IH90QGIU0LrfnELuMWGKftEl0olyHQMuUuUoSiWNyClXH
	LD/WxvOBLy6rDO0WTFU6qqNeZen4DzLp0iaof4xZ8oVQttETaRnS9RSL40D/0WYTE715bS
	/IHwzSsc5RNzommPKH7V2Ry2Rnv++98=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] xen-pciback: a fix and a workaround
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Konrad Wilk <konrad.wilk@oracle.com>
Message-ID: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
Date: Tue, 18 May 2021 18:12:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The first change completes a several-years-old but still incomplete
change. As mentioned there, reverting the original change may also
be an option. The second change works around some odd libxl behavior,
as described in [1]. As per a response to that mail, addressing the
issue in libxl may also be possible, but it's not clear to me who
would get to doing so, or when. Hence the kernel-side alternative is
being proposed here.

As to Konrad being on the Cc list: I find it puzzling that he's
listed under "XEN PCI SUBSYSTEM", but pciback isn't considered part
of this.

1: redo VF placement in the virtual topology
2: reconfigure also from backend watch handler

Jan

[1] https://lists.xen.org/archives/html/xen-devel/2021-03/msg00956.html


From xen-devel-bounces@lists.xenproject.org Tue May 18 16:13:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:13:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129425.242934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2Lp-0000df-RI; Tue, 18 May 2021 16:13:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129425.242934; Tue, 18 May 2021 16:13:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2Lp-0000dY-OG; Tue, 18 May 2021 16:13:45 +0000
Received: by outflank-mailman (input) for mailman id 129425;
 Tue, 18 May 2021 16:13:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lj2Lo-0000dE-Ki
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:13:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f83cbe19-95fc-4c38-b49a-355a1957fb1b;
 Tue, 18 May 2021 16:13:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C605AB020;
 Tue, 18 May 2021 16:13:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f83cbe19-95fc-4c38-b49a-355a1957fb1b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621354422; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=RVTfOHZt+5GTfUvqHRoOkSW+lbhVxEv08w2Zo444uTE=;
	b=iEqxJpQLu0ZIQx4U0CyNmVw7WjftF8ZLl6GpUil/nIg+X+4SOMWbBwS4UEQj8FFAqUdzsY
	XdA3Nh7xHNo1veHXHHEGFXiRT79nujhokYmlqGlcYv+k5k/uwt/G+EhakiEHzRRkDklYKS
	/JwR5pffR9UpeZ8DiWJ/DChfrk+zCao=
Subject: [PATCH v2 1/2] xen-pciback: redo VF placement in the virtual topology
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Konrad Wilk <konrad.wilk@oracle.com>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
Message-ID: <8def783b-404c-3452-196d-3f3fd4d72c9e@suse.com>
Date: Tue, 18 May 2021 18:13:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The commit referenced below was incomplete: It merely affected what
would get written to the vdev-<N> xenstore node. The guest would still
find the function at the original function number as long as 
__xen_pcibk_get_pci_dev() wouldn't be in sync. The same goes for AER wrt
__xen_pcibk_get_pcifront_dev().

Undo overriding the function to zero and instead make sure that VFs at
function zero remain alone in their slot. This has the added benefit of
improving overall capacity, considering that there's only a total of 32
slots available right now (PCI segment and bus can both only ever be
zero at present).

Fixes: 8a5248fe10b1 ("xen PV passthru: assign SR-IOV virtual functions to separate virtual slots")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@vger.kernel.org
---
Like the original change this has the effect of changing where devices
would appear in the guest, when there are multiple of them. I don't see
an immediate problem with this, but if there is we may need to reduce
the effect of the change.
Taking into account, besides the described breakage, how xen-pcifront's
pcifront_scan_bus() works, I also wonder what problem needed fixing in
the first place. It may therefore also be worth considering simply
reverting the original change.

--- a/drivers/xen/xen-pciback/vpci.c
+++ b/drivers/xen/xen-pciback/vpci.c
@@ -70,7 +70,7 @@ static int __xen_pcibk_add_pci_dev(struc
 				   struct pci_dev *dev, int devid,
 				   publish_pci_dev_cb publish_cb)
 {
-	int err = 0, slot, func = -1;
+	int err = 0, slot, func = PCI_FUNC(dev->devfn);
 	struct pci_dev_entry *t, *dev_entry;
 	struct vpci_dev_data *vpci_dev = pdev->pci_dev_data;
 
@@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struc
 
 	/*
 	 * Keep multi-function devices together on the virtual PCI bus, except
-	 * virtual functions.
+	 * that we want to keep virtual functions at func 0 on their own. They
+	 * aren't multi-function devices and hence their presence at func 0
+	 * may cause guests to not scan the other functions.
 	 */
-	if (!dev->is_virtfn) {
+	if (!dev->is_virtfn || func) {
 		for (slot = 0; slot < PCI_SLOT_MAX; slot++) {
 			if (list_empty(&vpci_dev->dev_list[slot]))
 				continue;
 
 			t = list_entry(list_first(&vpci_dev->dev_list[slot]),
 				       struct pci_dev_entry, list);
+			if (t->dev->is_virtfn && !PCI_FUNC(t->dev->devfn))
+				continue;
 
 			if (match_slot(dev, t->dev)) {
 				dev_info(&dev->dev, "vpci: assign to virtual slot %d func %d\n",
-					 slot, PCI_FUNC(dev->devfn));
+					 slot, func);
 				list_add_tail(&dev_entry->list,
 					      &vpci_dev->dev_list[slot]);
-				func = PCI_FUNC(dev->devfn);
 				goto unlock;
 			}
 		}
@@ -123,7 +126,6 @@ static int __xen_pcibk_add_pci_dev(struc
 				 slot);
 			list_add_tail(&dev_entry->list,
 				      &vpci_dev->dev_list[slot]);
-			func = dev->is_virtfn ? 0 : PCI_FUNC(dev->devfn);
 			goto unlock;
 		}
 	}



From xen-devel-bounces@lists.xenproject.org Tue May 18 16:14:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:14:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129431.242948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2MF-0001Es-6h; Tue, 18 May 2021 16:14:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129431.242948; Tue, 18 May 2021 16:14:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2MF-0001El-3K; Tue, 18 May 2021 16:14:11 +0000
Received: by outflank-mailman (input) for mailman id 129431;
 Tue, 18 May 2021 16:14:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=tO0P=KN=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lj2ME-0001CC-3m
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:14:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8495d333-d79c-45f0-994d-3ecf211878bc;
 Tue, 18 May 2021 16:14:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 68702ABED;
 Tue, 18 May 2021 16:14:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8495d333-d79c-45f0-994d-3ecf211878bc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621354448; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=3Qh/UZ1VOm2y48tyKXzASGbqQVo3psCNIRYWaNH1KbM=;
	b=BBDARNgKmSrexkj1M/FtvHzFy3OiIqf5YV/0KY1P9OL4vf2FFYZfOnZKts8XQVPRj3Otce
	xY0bwqdrovRTCwslvfcOtSo+nQYLWkXTK/i+hl1XKuOc8zaB4wqzw30jKi6MXYm9x8WpVf
	rflxrlwLXkO7fLhkuJtOnbdmAgYZBv8=
Subject: [PATCH v2 2/2] xen-pciback: reconfigure also from backend watch
 handler
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Konrad Wilk <konrad.wilk@oracle.com>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
Message-ID: <2337cbd6-94b9-4187-9862-c03ea12e0c61@suse.com>
Date: Tue, 18 May 2021 18:14:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When multiple PCI devices get assigned to a guest right at boot, libxl
incrementally populates the backend tree. The writes for the first of
the devices trigger the backend watch. In turn xen_pcibk_setup_backend()
will set the XenBus state to Initialised, at which point no further
reconfigures would happen unless a device got hotplugged. Arrange for
reconfigure to also get triggered from the backend watch handler.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Cc: stable@vger.kernel.org
---
v2: Also move comment. Add a comment.
---
I will admit that this isn't entirely race-free (with the guest actually
attaching in parallel), but from the looks of it such a race ought to be
benign (not least thanks to the mutex). Ideally the tool stack
wouldn't write num_devs until all devices had their information
populated. I tried doing so in libxl, but xen_pcibk_setup_backend()
calling xenbus_dev_fatal() when not being able to read that node
prohibits such an approach (or else libxl and driver changes would need
to go into use in lock-step).

I wonder why the watch isn't simply on the num_devs node. Right
now the watch triggers quite frequently without anything relevant
actually having changed (I suppose in at least some cases in response
to writes by the backend itself).

--- a/drivers/xen/xen-pciback/xenbus.c
+++ b/drivers/xen/xen-pciback/xenbus.c
@@ -359,7 +359,8 @@ out:
 	return err;
 }
 
-static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev)
+static int xen_pcibk_reconfigure(struct xen_pcibk_device *pdev,
+				 enum xenbus_state state)
 {
 	int err = 0;
 	int num_devs;
@@ -373,9 +374,7 @@ static int xen_pcibk_reconfigure(struct
 	dev_dbg(&pdev->xdev->dev, "Reconfiguring device ...\n");
 
 	mutex_lock(&pdev->dev_lock);
-	/* Make sure we only reconfigure once */
-	if (xenbus_read_driver_state(pdev->xdev->nodename) !=
-	    XenbusStateReconfiguring)
+	if (xenbus_read_driver_state(pdev->xdev->nodename) != state)
 		goto out;
 
 	err = xenbus_scanf(XBT_NIL, pdev->xdev->nodename, "num_devs", "%d",
@@ -500,6 +499,10 @@ static int xen_pcibk_reconfigure(struct
 		}
 	}
 
+	if (state != XenbusStateReconfiguring)
+		/* Make sure we only reconfigure once. */
+		goto out;
+
 	err = xenbus_switch_state(pdev->xdev, XenbusStateReconfigured);
 	if (err) {
 		xenbus_dev_fatal(pdev->xdev, err,
@@ -525,7 +528,7 @@ static void xen_pcibk_frontend_changed(s
 		break;
 
 	case XenbusStateReconfiguring:
-		xen_pcibk_reconfigure(pdev);
+		xen_pcibk_reconfigure(pdev, XenbusStateReconfiguring);
 		break;
 
 	case XenbusStateConnected:
@@ -664,6 +667,15 @@ static void xen_pcibk_be_watch(struct xe
 		xen_pcibk_setup_backend(pdev);
 		break;
 
+	case XenbusStateInitialised:
+		/*
+		 * We typically move to Initialised when the first device was
+		 * added. Hence subsequent devices getting added may need
+		 * reconfiguring.
+		 */
+		xen_pcibk_reconfigure(pdev, XenbusStateInitialised);
+		break;
+
 	default:
 		break;
 	}



From xen-devel-bounces@lists.xenproject.org Tue May 18 16:23:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:23:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129444.242958 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2Vc-0002o0-4W; Tue, 18 May 2021 16:23:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129444.242958; Tue, 18 May 2021 16:23:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2Vc-0002nt-1T; Tue, 18 May 2021 16:23:52 +0000
Received: by outflank-mailman (input) for mailman id 129444;
 Tue, 18 May 2021 16:23:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJ9Y=KN=vps.thesusis.net=psusi@srs-us1.protection.inumbo.net>)
 id 1lj2Vb-0002nn-0H
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:23:51 +0000
Received: from vps.thesusis.net (unknown [34.202.238.73])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e330265-db04-42b0-8e92-cec01f22dee4;
 Tue, 18 May 2021 16:23:50 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by vps.thesusis.net (Postfix) with ESMTP id 44EF421786;
 Tue, 18 May 2021 12:23:50 -0400 (EDT)
Received: from vps.thesusis.net ([127.0.0.1])
 by localhost (vps.thesusis.net [127.0.0.1]) (amavisd-new, port 10024)
 with ESMTP id Xl3Chf79Qxz2; Tue, 18 May 2021 12:23:50 -0400 (EDT)
Received: by vps.thesusis.net (Postfix, from userid 1000)
 id 17BE82178C; Tue, 18 May 2021 12:23:50 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e330265-db04-42b0-8e92-cec01f22dee4
References: <87o8dw52jc.fsf@vps.thesusis.net> <20210506143654.17924-1-phill@thesusis.net> <YJRRCEJrQOwVymdP@google.com>
User-agent: mu4e 1.5.7; emacs 26.3
From: Phillip Susi <phill@thesusis.net>
To: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: xen-devel@lists.xenproject.org, linux-input@vger.kernel.org
Subject: Re: [PATCH] Xen Keyboard: don't advertise every key known to man
Date: Tue, 18 May 2021 12:20:00 -0400
In-reply-to: <YJRRCEJrQOwVymdP@google.com>
Message-ID: <871ra4yprd.fsf@vps.thesusis.net>
MIME-Version: 1.0
Content-Type: text/plain


Dmitry Torokhov writes:

> By doing this you are stopping delivery of all key events from this
> device.

It does?  How does the PS/2 keyboard driver work then?  It has no way of
knowing what keys the keyboard has other than waiting to see what scan
codes are emitted.

If the keys must be advertised in order to emit them at runtime, then I
see no other possible fix than to remove the codes from the modalias
string in the input subsystem.  Or maybe allow certain drivers that
don't know their key set to set some sort of flag that lets them emit
all codes at runtime without advertising them, so you don't end up with
a mile-long modalias.



From xen-devel-bounces@lists.xenproject.org Tue May 18 16:42:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:42:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129457.242990 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2nq-0005Mq-6g; Tue, 18 May 2021 16:42:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129457.242990; Tue, 18 May 2021 16:42:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2nq-0005Mj-3A; Tue, 18 May 2021 16:42:42 +0000
Received: by outflank-mailman (input) for mailman id 129457;
 Tue, 18 May 2021 16:42:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lj2np-0005MN-H6
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:42:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85f1c055-f329-43d9-81f6-06d9b7c2226a;
 Tue, 18 May 2021 16:42:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 244E8AEA8;
 Tue, 18 May 2021 16:42:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85f1c055-f329-43d9-81f6-06d9b7c2226a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621356160; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yoL84o6smbYpcuGm8W7AhLmby+UbYQ66wf/eTVj8DXo=;
	b=IjHtJqvEmLTSax8FqNaEk2E5FA38npg4/KtL8yqaSTXffAivemIpLb701s0pHjbASHZIAh
	J4m3t8HowHBRl5eoB1k+88XOhitvrBR9vNypHFimZRI5M/2sbpwnHqxYPwr8JWLc9aJrSO
	aJQk7OO55aXuiNO6POtWCdOj08BmCXM=
Subject: [PATCH v2 1/2] automation: use DOCKER_CMD for building containers too
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Cc: =?utf-8?q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Doug Goldstein <cardoe@cardoe.com>, Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 May 2021 18:42:39 +0200
Message-ID: <162135615931.20014.4296434793748937843.stgit@Wayrath>
In-Reply-To: <162135593827.20014.14959979363028895972.stgit@Wayrath>
References: <162135593827.20014.14959979363028895972.stgit@Wayrath>
User-Agent: StGit/0.23
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit

Use DOCKER_CMD from the environment (if defined) in the containers'
makefile too, so that, e.g., when doing `export DOCKER_CMD=podman`,
podman is used for building the containers as well.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
Cc: Doug Goldstein <cardoe@cardoe.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes from v1:
- fix my email address in From:
---
 automation/build/Makefile |    5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/automation/build/Makefile b/automation/build/Makefile
index 7c7612b1d9..a4b2b85178 100644
--- a/automation/build/Makefile
+++ b/automation/build/Makefile
@@ -2,6 +2,7 @@
 # the base of where these containers will appear
 REGISTRY := registry.gitlab.com/xen-project/xen
 CONTAINERS = $(subst .dockerfile,,$(wildcard */*.dockerfile))
+DOCKER_CMD ?= docker
 
 help:
 	@echo "Builds containers for building Xen based on different distros"
@@ -10,9 +11,9 @@ help:
 	@echo "To push container builds, set the env var PUSH"
 
 %: %.dockerfile ## Builds containers
-	docker build -t $(REGISTRY)/$(@D):$(@F) -f $< $(<D)
+	$(DOCKER_CMD) build -t $(REGISTRY)/$(@D):$(@F) -f $< $(<D)
 	@if [ ! -z $${PUSH+x} ]; then \
-		docker push $(REGISTRY)/$(@D):$(@F); \
+		$(DOCKER_CMD) push $(REGISTRY)/$(@D):$(@F); \
 	fi
 
 .PHONY: all




From xen-devel-bounces@lists.xenproject.org Tue May 18 16:42:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:42:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129456.242979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2nk-00055U-Uq; Tue, 18 May 2021 16:42:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129456.242979; Tue, 18 May 2021 16:42:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2nk-00055N-Rt; Tue, 18 May 2021 16:42:36 +0000
Received: by outflank-mailman (input) for mailman id 129456;
 Tue, 18 May 2021 16:42:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lj2nk-00055E-3m
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:42:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ffad699-c342-421e-9f5a-85064485cb1e;
 Tue, 18 May 2021 16:42:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6EB20AEA8;
 Tue, 18 May 2021 16:42:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ffad699-c342-421e-9f5a-85064485cb1e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621356154; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=fz26dujv6Bet8gex+riX+B35ZkBkrIv/4Cx5jjYEB0E=;
	b=pbhx93TU2knJKiaDiY8rOQ2fDJCMgYW5JdnzUgFxTxsoT1i+D6fcumiHBI1R/FfBSgt9iE
	wzs2UkEQSOAwbLYPupTQuQPdRj8v9ibnE3IEjhjJib026BL2JPWFWgXbEBNDUxb1s2Vn95
	AxcmIvrqSvzGUHWYpssus0jCoStd/lA=
Subject: [PATCH v2 0/2] automation: fix building in the openSUSE Tumbleweed
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 =?utf-8?q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 May 2021 18:42:33 +0200
Message-ID: <162135593827.20014.14959979363028895972.stgit@Wayrath>
User-Agent: StGit/0.23
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Fix the build in the openSUSE Tumbleweed container within our CI. There
was a missing dependency (libzstd-devel), which needs to be added to the
dockerfile.

OTOH, python3-devel was in the dockerfile already, and hence should
have been present in the image. Yet the build was failing because it
was missing... Maybe we forgot to build and then push a new image after
adding it?

Well, whatever. If this change is accepted, I'm happy to push a new,
updated image to our registry (ISTR that I used to have the right to
do that).

While there, extend the generalization of the container runtime to use
(we have that in containerize already, through the DOCKER_CMD variable)
to the local building of the containers as well.

Thanks and Regards
---
Dario Faggioli (2):
      automation: use DOCKER_CMD for building containers too
      automation: fix dependencies on openSUSE Tumbleweed containers

 automation/build/suse/opensuse-tumbleweed.dockerfile | 3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)



From xen-devel-bounces@lists.xenproject.org Tue May 18 16:42:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:42:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129458.243000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2nw-0005ip-Fv; Tue, 18 May 2021 16:42:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129458.243000; Tue, 18 May 2021 16:42:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2nw-0005ig-CY; Tue, 18 May 2021 16:42:48 +0000
Received: by outflank-mailman (input) for mailman id 129458;
 Tue, 18 May 2021 16:42:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=h4/q=KN=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lj2nv-0005hb-AW
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 16:42:47 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6c45aca-585c-4d7f-aa04-754eed64d00e;
 Tue, 18 May 2021 16:42:46 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E9097AEA8;
 Tue, 18 May 2021 16:42:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6c45aca-585c-4d7f-aa04-754eed64d00e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621356166; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=jrX1NYllNsPfXSj7P+2XpQsI01YhtHcN3zywj7DDQY0=;
	b=XOQWxcqny7KFdAde0u7OR5RQmOGNifs9GFRadBftJy1ZgbBL5mfceh+reqZ2XnqG9jX57b
	8eM/7EyZWI8pNKJGQqrLsPVUcrjlupUl1lkNEH/ExZVUyZYnsuHEN4eGM9mVVSiXe24gpM
	+aIVEOzyxQLjEBen45A0E/0jSWFAK1E=
Subject: [PATCH v2 2/2] automation: fix dependencies on openSUSE Tumbleweed
 containers
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Doug Goldstein <cardoe@cardoe.com>,
 Roger Pau Monne <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>
Date: Tue, 18 May 2021 18:42:45 +0200
Message-ID: <162135616513.20014.6303562342690753615.stgit@Wayrath>
In-Reply-To: <162135593827.20014.14959979363028895972.stgit@Wayrath>
References: <162135593827.20014.14959979363028895972.stgit@Wayrath>
User-Agent: StGit/0.23
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 7bit

Fix the build inside our openSUSE Tumbleweed container by adding the
libzstd headers. While there, remove the explicit dependencies on
python and python3, as the respective -devel packages will pull them
in anyway.

Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: Doug Goldstein <cardoe@cardoe.com>
Cc: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>
---
Changes from v1:
- fix my email address in From:
- don't request python38-devel explicitly, python3-devel
  is just fine and is more generic (and hence better!)
---
 .../build/suse/opensuse-tumbleweed.dockerfile      |    3 +--
 1 file changed, 1 insertion(+), 2 deletions(-)

diff --git a/automation/build/suse/opensuse-tumbleweed.dockerfile b/automation/build/suse/opensuse-tumbleweed.dockerfile
index 8ff7b9b5ce..a33ab0d870 100644
--- a/automation/build/suse/opensuse-tumbleweed.dockerfile
+++ b/automation/build/suse/opensuse-tumbleweed.dockerfile
@@ -45,6 +45,7 @@ RUN zypper install -y --no-recommends \
         libtasn1-devel \
         libuuid-devel \
         libyajl-devel \
+        libzstd-devel \
         lzo-devel \
         make \
         nasm \
@@ -56,9 +57,7 @@ RUN zypper install -y --no-recommends \
         pandoc \
         patch \
         pkg-config \
-        python \
         python-devel \
-        python3 \
         python3-devel \
         systemd-devel \
         tar \




From xen-devel-bounces@lists.xenproject.org Tue May 18 16:49:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 16:49:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129472.243012 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2u5-00072H-6U; Tue, 18 May 2021 16:49:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129472.243012; Tue, 18 May 2021 16:49:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj2u5-00072A-3K; Tue, 18 May 2021 16:49:09 +0000
Received: by outflank-mailman (input) for mailman id 129472;
 Tue, 18 May 2021 16:49:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj2u3-000720-SW; Tue, 18 May 2021 16:49:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj2u3-0007YC-Ms; Tue, 18 May 2021 16:49:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj2u3-0007HG-Et; Tue, 18 May 2021 16:49:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj2u3-0004h7-EL; Tue, 18 May 2021 16:49:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=KQmanFUYe+sEA3RrhVRoilHE3Okr3/OVHXU5Pa4X1Gg=; b=1tosUs4qd9B1Ts3brRXBmXmYM4
	QcBcK5HeomyZwHCvO+KX7e0zvbGSMuLAiZAAolJqDbkJhYRuTNYEfiqbw/+25YYoa3m+7gWtdsSgO
	Ywn48RNRps2y6SZJaibFf0xTwRvLfA4sJX/WgtthqlMIDV2wfNc5vLn6yaBspVK/u3iU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162002-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162002: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=29e300ff815283259e81822ed3cb926bb9ad6460
X-Osstest-Versions-That:
    ovmf=1fbf5e30ae8eb725f4e10984f7b0a208f78abbd0
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 16:49:07 +0000

flight 162002 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162002/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 29e300ff815283259e81822ed3cb926bb9ad6460
baseline version:
 ovmf                 1fbf5e30ae8eb725f4e10984f7b0a208f78abbd0

Last test of basis   161987  2021-05-18 01:10:06 Z    0 days
Testing same since   162002  2021-05-18 08:10:25 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Ray Ni <ray.ni@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1fbf5e30ae..29e300ff81  29e300ff815283259e81822ed3cb926bb9ad6460 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 18 17:04:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 17:04:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129498.243049 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj38K-0001yN-3A; Tue, 18 May 2021 17:03:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129498.243049; Tue, 18 May 2021 17:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj38K-0001yG-0E; Tue, 18 May 2021 17:03:52 +0000
Received: by outflank-mailman (input) for mailman id 129498;
 Tue, 18 May 2021 17:03:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj38I-0001yA-Vx
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 17:03:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj38H-0007pL-Oe; Tue, 18 May 2021 17:03:49 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj38H-0006eg-FU; Tue, 18 May 2021 17:03:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=V2fnyoE48qQ6Hp55cFrx3NNSfeosJA7sSVYcgqQFi8k=; b=L0X4SzIKPssOfA1dn/4wmVhzoq
	2e0sCMC7H5CRHo0P5nYI2a60W5Ohr7tRCKM13HnABbexF19JJIFVleZSaKI9O4Z8xOwhfG4A5vAqA
	pGXqGUWhvXyotW51qhDAIVQgoQqo0STEMfFOsXk1xbOU8fTVKqvN0Qd4XO9cPCRt8pUk=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] tools/libs: guest: Fix Arm build after 8fc4916daf2a
Date: Tue, 18 May 2021 18:03:39 +0100
Message-Id: <20210518170339.29706-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Gitlab CI spotted an issue when building the tools for Arm:

xg_dom_arm.c: In function 'meminit':
xg_dom_arm.c:401:50: error: passing argument 3 of 'set_mode' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
  401 |     rc = set_mode(dom->xch, dom->guest_domid, dom->guest_type);
      |                                               ~~~^~~~~~~~~~~~

This is because the const was not propagated in the Arm code. Fix it
by constifying the 3rd parameter of set_mode().

Fixes: 8fc4916daf2a ("tools/libs: guest: Use const whenever we point to literal strings")
Signed-off-by: Julien Grall <jgrall@amazon.com>
---
 tools/libs/guest/xg_dom_arm.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/tools/libs/guest/xg_dom_arm.c b/tools/libs/guest/xg_dom_arm.c
index b4c24f15fb27..01e85e0ea9c7 100644
--- a/tools/libs/guest/xg_dom_arm.c
+++ b/tools/libs/guest/xg_dom_arm.c
@@ -195,7 +195,7 @@ static int vcpu_arm64(struct xc_dom_image *dom)
 
 /* ------------------------------------------------------------------------ */
 
-static int set_mode(xc_interface *xch, uint32_t domid, char *guest_type)
+static int set_mode(xc_interface *xch, uint32_t domid, const char *guest_type)
 {
     static const struct {
         char           *guest;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 18 17:05:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 17:05:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129502.243061 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj39z-0002bh-HN; Tue, 18 May 2021 17:05:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129502.243061; Tue, 18 May 2021 17:05:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj39z-0002ba-CU; Tue, 18 May 2021 17:05:35 +0000
Received: by outflank-mailman (input) for mailman id 129502;
 Tue, 18 May 2021 17:05:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ZUbH=KN=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lj39x-0002bU-EO
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 17:05:33 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b790591-ac42-436d-aebf-a30301176381;
 Tue, 18 May 2021 17:05:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b790591-ac42-436d-aebf-a30301176381
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621357532;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=OJlvLfAk82M2YX33XbcmWR+kJiIBCEtJlB5wXHetciM=;
  b=HADsnl+TYpdw8+rg0eidYQgs4Pa3vHTZ6W1KTCtKfaZFpmSuFeZk1grX
   sVumU8Y7ldMx8+ZcbKWTS4J3uQ9NC9k4F0LHMffxw/1YeBc9QZBpanOE2
   nDwTuFiBGlorxLpSstr/TMm4PepcCsc8yy56cvBADjQZAXHXDvfJSzK4B
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 7Kbi1QLlQtzngWwhMSkt5sSsH8fJRLwHf4JLwhnKIYfsY3pGJs6UOd93xcjl6jqCEfbnhgr6ax
 Cbmq4ear9ACtoKanH0i3kHcZgoDt94gQKySsIBnOGacn0aAcNfARVGApY/zex3vX/nY4k8AIAK
 4e2tDxlwHmWQKtWzdqmPd+0kL++v5R0cphcuv2KFY60nBk53UpZdq7Ld7elp8zcYO6TP5nuwiH
 ZMBp/t35K+HDhcjqucaRE42ePxLP93B73qIctsZh2JXg7vb1vMQ7lkTkX//TchQNFhGjXyC7R3
 1kw=
X-SBRS: 5.1
X-MesageID: 44167109
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:72aP6aENXTHmcGvupLqEw8eALOsnbusQ8zAXPiFKOHlom6mj/a
 2TdZsguSMc5Ax/ZJhYo6H4BEDiewK/yXcW2+ks1N6ZNWHbUQ2TQr2KhrGSoAEIdReeygdr79
 YFT0EvMrbN5IBB/L3HCdODYrAdKQS8gceVbDvlvg9QpN9RGttd0zs=
X-IronPort-AV: E=Sophos;i="5.82,310,1613451600"; 
   d="scan'208";a="44167109"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=gN8rbMcmv2St3Oi4PpzpbOXLaq7bCOsTUZITXDGeh735IlJf6ahGNbvbFiJ9Nvb2xj4WRr0TF00U2DSw1EZSN3wDBSUZxb5M4QT3q/+Blj5Q8aPs3087HGhAaf22YgJaxlx46EjvvhkhFZex2Of3fU9bf350HB7rriU2+7y6fyV3XzK5Q+2c6nOE5rvlyxzRJJ79yPJM2J7BPGe1NcHSli/OQnat93uLwkQeWmfbWsg1vwTnIYUmdUTeOKgGlp4DFp2n/1tOOBgdlYFgLGKoqGqSGdYXBYTJU96vbn3J6VH6o9aQq30qOJg7z6XX3wnwKunaQf1XKapPmbg/mXyCQA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ez8T/bRyCzQguqJlaEIIOTkIwuzhj9Syc9PpMALLdAE=;
 b=aYxOocdWsyMSeuxqlhcVBL/6dshbBhmiLIG4T9tigWLFvUQ/cJ3qvr4+WSqOPeg6Ewy8rsrC41M0KWrc8VviI2qYXPNTCYb7fd1UBJA7TYsRvUHdfVRYHs1kaLZyfovuXs7Hdtb9H21TwSrl33CVDyB+3xgoJp1aBu4FgYLXeQ1G2ebaHX/R0WG2IOA7Waj8pmDxv2AmXn8Jdg7nMgMKiLVZZbS3ru807O2h60Rp5/mkH2cVKAB29n4Py5wf47GEQv5tdM+JxKAhYM/ZIEVk3a6gWsg68DHSLf11/dxlLgTkJxLkG7NAkIcpmNbNs32mvxGtSgxFIq1iY5M+7ZOuOA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Ez8T/bRyCzQguqJlaEIIOTkIwuzhj9Syc9PpMALLdAE=;
 b=vSdLTo2bQ0t4s47Vt3PR4So+vSJZmkdsDXhGjEncG0cWzWIyNgo4w1H31Sop28bfayEDBkqGdi3x+phYsuqrCBeRoU51/EBpGGgw6pHjTnztf7wh7RoLXv48VWp6AfMaIyGg7gm50VgMr2igNgtduYscxzYMlzkm8ulAs97BDRA=
Subject: Re: [PATCH] tools/libs: guest: Fix Arm build after 8fc4916daf2a
To: Julien Grall <julien@xen.org>, <xen-devel@lists.xenproject.org>
CC: Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>, "Wei
 Liu" <wl@xen.org>
References: <20210518170339.29706-1-julien@xen.org>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <6d77f58a-06d6-aadf-0451-b46020169004@citrix.com>
Date: Tue, 18 May 2021 18:05:22 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210518170339.29706-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0325.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a4::25) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e679f782-42cc-41f4-156b-08d91a1f22ce
X-MS-TrafficTypeDiagnostic: BYAPR03MB4679:
X-Microsoft-Antispam-PRVS: <BYAPR03MB46793138D3FA9141C2F750ABBA2C9@BYAPR03MB4679.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 56C1J0V5Ded9AUADELTN4uGDX3+iVPM3UaZwZYs64yQcDK2yIsC7OYPlse2BxcB0uFXTIqqzfpmjUfCp/GKDC+G2PfUDqVNzRnMYjVgeMu2z0qX8DTcdYGLqBz9OgQ3KAXgsMhxPWe9zQm1um5ulf3+VCoaf+UxNIKZCm0p9kdGmCshvG/JIgM9+12QCGjwRPGuFyZ2EC3+30YPfZIAsuQpGbxb4XUQVptYI7bwH8HHov4JuhNB0G740OULMNovqp7VzQSzfG+Oi2xr97dSwZ0gBrQZ0pUieWT6+g7B/ny/+wCxSPdzIn++N3FEogvLR9QvEyatOnAQQv5VXrq103OzRl4sFWl86U1ZWpl1bID3SZ3wwH6OKEADYgi1D6kJ0ZkmWQeo5ZactGlr2S8zhDu1lsv4PzTEhyL5VCsAl+5DSsid0++azGB4ZUxj+xdh/c3hHth/SqAZlXmefsqRpwIxNaYLtDePczxK4uXxSt7pd8A/Js5qIBIaYvW7Kun5yksvQgwtaa07kfT178nu0XU1rkB5s9D1cPKXF/3Fb3XTh1U8JuhaTfzw3XvmWA2oLYQcw1mCVhqzLXpp7voGkG5MZgwRij6m8xOV9zu2nvLRJn158vgbK4zSS3+lBVr10shw/UE2nkv8G9rVni2qpAc8fgd2COhNLhRQ8QbBAEp0wUlow3Ghu/6kgUI89Cf8h
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR03MB3623.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(39860400002)(376002)(366004)(396003)(346002)(6666004)(8936002)(4326008)(186003)(6486002)(16526019)(31696002)(478600001)(8676002)(316002)(16576012)(54906003)(66946007)(956004)(2616005)(66556008)(66476007)(2906002)(5660300002)(31686004)(86362001)(36756003)(53546011)(38100700002)(26005)(4744005)(43740500002)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: e679f782-42cc-41f4-156b-08d91a1f22ce
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 17:05:28.3103
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /SGFaCFLmJuTwUfIn+8rhwRh4/Of018NvuVlEzeEl1zOSbRErmdfyoJRFKTbusNDIjsRXlsboMVqkWmaNEWlKsEOdJuDW0de/b+fMIu70Tc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR03MB4679
X-OriginatorOrg: citrix.com

On 18/05/2021 18:03, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
>
> Gitlab CI spotted an issue when building the tools for Arm:
>
> xg_dom_arm.c: In function 'meminit':
> xg_dom_arm.c:401:50: error: passing argument 3 of 'set_mode' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
>   401 |     rc = set_mode(dom->xch, dom->guest_domid, dom->guest_type);
>       |                                               ~~~^~~~~~~~~~~~
>
> This is because the const was not propagated in the Arm code. Fix it
> by constifying the 3rd parameter of set_mode().
>
> Fixes: 8fc4916daf2a ("tools/libs: guest: Use const whenever we point to literal strings")
> Signed-off-by: Julien Grall <jgrall@amazon.com>

Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>


From xen-devel-bounces@lists.xenproject.org Tue May 18 17:07:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 17:07:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129507.243072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj3Bg-0003Du-Sw; Tue, 18 May 2021 17:07:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129507.243072; Tue, 18 May 2021 17:07:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj3Bg-0003Dn-Ps; Tue, 18 May 2021 17:07:20 +0000
Received: by outflank-mailman (input) for mailman id 129507;
 Tue, 18 May 2021 17:07:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj3Bf-0003D2-3l; Tue, 18 May 2021 17:07:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj3Bf-0007tt-0K; Tue, 18 May 2021 17:07:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj3Be-0007vU-OE; Tue, 18 May 2021 17:07:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj3Be-0001zt-Nk; Tue, 18 May 2021 17:07:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DiIHgjXu8Gd4u8056KytDCiiT3nFBI5gOD8sF/r4YVk=; b=Cj6Zq63wyM2jhuWvlHk8Fybx99
	ChJ2TjklqDs6zaiiKteNb39QRfjdcXbbC3ykprBKF80fKbhO8oJSa69FJUyChvdJmzpWpXCRJMok+
	klcnZL3a56HINnEEPWd0FfD/YE0FIWrMfQCml026SnjKtaMNB6hZrpqR0SbX1DWpyv10=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162036-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162036: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 17:07:18 +0000

flight 162036 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162036/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() nor set_delay() is meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 18 17:47:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 17:47:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129533.243125 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj3oG-0007Lq-Gq; Tue, 18 May 2021 17:47:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129533.243125; Tue, 18 May 2021 17:47:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj3oG-0007Lj-Dd; Tue, 18 May 2021 17:47:12 +0000
Received: by outflank-mailman (input) for mailman id 129533;
 Tue, 18 May 2021 17:47:11 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=NjCN=KN=oracle.com=daniel.kiper@srs-us1.protection.inumbo.net>)
 id 1lj3oF-0007Ld-6u
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 17:47:11 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 987fbc12-c441-4d57-a04d-f9985179e926;
 Tue, 18 May 2021 17:47:09 +0000 (UTC)
Received: from pps.filterd (m0246627.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14IHfNFa030982; Tue, 18 May 2021 17:47:07 GMT
Received: from oracle.com (aserp3020.oracle.com [141.146.126.70])
 by mx0b-00069f02.pphosted.com with ESMTP id 38ker18sqg-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 18 May 2021 17:47:07 +0000
Received: from aserp3020.oracle.com (aserp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14IHl6Rv001480;
 Tue, 18 May 2021 17:47:06 GMT
Received: from nam04-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam08lp2171.outbound.protection.outlook.com [104.47.73.171])
 by aserp3020.oracle.com with ESMTP id 38mecfqup4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 18 May 2021 17:47:05 +0000
Received: from DM5PR1001MB2236.namprd10.prod.outlook.com (2603:10b6:4:35::18)
 by DS7PR10MB5088.namprd10.prod.outlook.com (2603:10b6:5:3a2::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Tue, 18 May
 2021 17:46:52 +0000
Received: from DM5PR1001MB2236.namprd10.prod.outlook.com
 ([fe80::c93a:7a62:bc1d:9a34]) by DM5PR1001MB2236.namprd10.prod.outlook.com
 ([fe80::c93a:7a62:bc1d:9a34%5]) with mapi id 15.20.4129.031; Tue, 18 May 2021
 17:46:52 +0000
Received: from tomti.i.net-space.pl (84.10.22.86) by
 AM6P192CA0108.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:8d::49) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4108.25 via Frontend Transport; Tue, 18 May 2021 17:46:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 987fbc12-c441-4d57-a04d-f9985179e926
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=OknAS0Vku0/6NWsRPI/c0PQqhkBgugDVL1iba7WFc74=;
 b=oBYUAn38tZ2bxqIVm3+SbDVBG7k2NJBGll4rxAjyJ0loTx/+LRgUBBswG2Hm3KQL3oB5
 ujeBRRdjvmSXVY7cSnxYozcqTjTNK/xaWQyWI+c4Q5l7lmafOYRzj4bnTbg8v3niK1Tg
 6M5YjWmMA5XSL34hlVE2UZ7dKuZd3VDVE6hCPpPFV6kHDCxeTQWYQHXc7QeAt/dWqUz5
 r0wC0Cbuxx/PlF3fNTZRWwW+8TCwQcmeiIU2V2XCiOxV4gahDR5eN+P9G+aQunsyIC09
 FKOfj3fqRGQbTp+naRBHWavVO2rnuWc2A6XZi02yA77UJCELhT/rNMriJ28rFOsKY5+J hQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HHyUTXKj+QHcDpUTvANI9v6SpQyem0xaWF71+jn/pXxYi82Aj+O9ONIDqn/iGslwPTnU4cmQ23k+8qLkpc64wKGkR0gaKE8I0kBr4qmIyjbop7AyuDwRmbqbHHusnfyH4t0iKC9q9bNtdhmbx+RnqPEbUnsD8REAy0YXTqzC3JmC4fSyAf80lgvUHzCOpePwK/Btg4epB7dumwEwj76BuVgrQzuXEw+2/qfzJOSlaYu4BdTzGUPtB+gIHNJOQ+JdUbyffPrrbIzkJnVCJcQmn46WLN+U1xox4Aug7EpeumpLxziFb0KpvGJZldK7Uq/JF3Knc8EnKQ/sVzlaLqOP7Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OknAS0Vku0/6NWsRPI/c0PQqhkBgugDVL1iba7WFc74=;
 b=UC7jVtZl64T4tjAko0GVk8Nrik93JEtzypy2mgZ7+suQ52REH+lTjm8VrI2Le9vKv2TnK0a1rCdWi49cnsjTG2bAIz3KRQtRNgkS2YWBsJLs8G137AS5FlnKxQKZszv0PqfjQdaVPxPmj78pchCZeMsueVwJ7SQ0pzNrXBFxrpZNF0sJ8X8OJidul5VV7myeSa2dAGYb03tUa6CZ2W/YTQWgMS/e+UqTynL6L8lhXYEGdLyBqL+V6MIXMNq6ijb3BXkSi+Yati7Bv4Dgz5tJJxWSy5bmUo1UvmaYLvzAoup1H7XvJM4W3S3uz/muzUi6Mw1DUD9hlvy6BwnTOG3L6A==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=OknAS0Vku0/6NWsRPI/c0PQqhkBgugDVL1iba7WFc74=;
 b=iSTiaO0iTa04uLtpwbwsxBEw7hHP6tm4iGHxoXKDxbEQxOKhLmkGfvF1fvjRnQUnUGzFNbOcM2FzmzOPUSAwN6JK2YsXL7YeoMPvoq70sV8cRTBZBelCo2wIkhqWISISpPEo6q3zJX3knfJzn3Dgr2abXDqaLG5t5HxbOogq/3A=
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=oracle.com;
Date: Tue, 18 May 2021 19:46:33 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
Message-ID: <20210518174633.spo5kmgcbuo6dg5k@tomti.i.net-space.pl>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
 <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
 <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
 <ee89a22d-5f46-51ed-4c46-63cfc60cbafc@suse.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <ee89a22d-5f46-51ed-4c46-63cfc60cbafc@suse.com>
User-Agent: NeoMutt/20170113 (1.7.2)
X-Originating-IP: [84.10.22.86]
X-ClientProxiedBy: AM6P192CA0108.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:209:8d::49) To DM5PR1001MB2236.namprd10.prod.outlook.com
 (2603:10b6:4:35::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 883dbd60-cc5f-4878-c272-08d91a24eba8
X-MS-TrafficTypeDiagnostic: DS7PR10MB5088:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<DS7PR10MB5088612D56862A1351AB12F3E32C9@DS7PR10MB5088.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	jy/T8pxZJXhNHv32or2ly3Drx1BtJ9q3qlDBYvRhtSVLgQ4jFn0CUbm5Y4AyOwBJ+ZHxpo06jZUS3f+KLBynthC8smEOo+kOTUtxWFvaBt+pNRll4tbwrWK4kOa/nLe1d40dCswstNiqNer1fnvSc7FquoRmreewLc4KNhiqD893HtV0jtpnh5wgUUJ09joAFIkD3oHUpFQxupJ1M1WoOjGY8RxYKsRdS//d3XJDKIy4q/jruZZIJniOdbOot0Syg4C7+oIrlrTxqRHoaURD2wchDg/TWA2KWxH66yZf7e/RACovU/IPf3AZffHTxSYvMQStUwecHEMb9E95UYKCrYIsamdQGcoTnKu2g6phFqlFEIK6iqrErRcJ/d2BDRDwpKFxkxXHb4xjg1qxsi6YPsEApI/TJoXf8xBZ8tN9fcApf9c2kC0c6ZaSUbH5QbTgPDxvESwg/BI1f23n8fSe4fNK1Akw/fCu+V8Dm84xbte+XiERTE2ACrof6Xl5FiIIV7leY7dvHjAr09OnXxz362BHXLfDMn2hIhA61C68ddEMXApXmeYXN7ShNjqXF32qpydadED90+/7FdTJDH1Di5Qfn0alAVOKA7Nel818l/LX4zB2fq82b1pCxUeI8x2Fzrx4lXh872Kt7wVBWmb/Kg==
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM5PR1001MB2236.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(39860400002)(346002)(366004)(376002)(396003)(136003)(316002)(16526019)(6666004)(8676002)(956004)(186003)(66946007)(54906003)(9686003)(4326008)(52116002)(86362001)(38100700002)(38350700002)(26005)(66476007)(55016002)(2906002)(8936002)(6506007)(7696005)(478600001)(6916009)(53546011)(1076003)(44832011)(66556008)(5660300002);DIR:OUT;SFP:1101;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 883dbd60-cc5f-4878-c272-08d91a24eba8
X-MS-Exchange-CrossTenant-AuthSource: DM5PR1001MB2236.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 18 May 2021 17:46:52.6405
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: Z4B1666DSpp56OqolJ4DCUeAHA/odPb94rih4uNiZboA4l37TPe6T+hF1fIqKzpOc3KcW8akogPowqLaSxAJrQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR10MB5088
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9988 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 bulkscore=0 mlxlogscore=999
 phishscore=0 suspectscore=0 malwarescore=0 adultscore=0 spamscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105180123
X-Proofpoint-GUID: TJm8NL4RWw5UrQTgBlBkf5ii-su9H3Jr
X-Proofpoint-ORIG-GUID: TJm8NL4RWw5UrQTgBlBkf5ii-su9H3Jr

On Mon, May 17, 2021 at 03:24:28PM +0200, Jan Beulich wrote:
> On 17.05.2021 15:20, Daniel Kiper wrote:
> > On Mon, May 17, 2021 at 08:48:32AM +0200, Jan Beulich wrote:
> >> On 07.05.2021 22:26, Bob Eshleman wrote:
> >>> What is your intuition WRT the idea that instead of trying add a PE/COFF hdr
> >>> in front of Xen's mb2 bin, we instead go the route of introducing valid mb2
> >>> entry points into xen.efi?
> >>
> >> At the first glance I think this is going to be less intrusive, and hence
> >> to be preferred. But of course I haven't experimented in any way ...
> >
> > When I worked on this a few years ago I tried that way. Sadly I failed
> > because I was not able to produce a "linear" PE image using the binutils
> > existing in those days.
>
> What is a "linear" PE image?

The problem with the Multiboot family of protocols is that all code and data
sections have to be glued together in the image and loaded into memory in
that form (IIRC BSS is an exception, but it has to live behind the image).
So, you cannot use a PE image which has a different representation in the
file and in memory. IIRC by default at least the code and data sections in
xen.efi have different sizes in the PE file and in memory. I tried to fix
that using a linker script and objcopy, but it did not work. Sadly I do not
remember the details, but there is a pretty good chance you can find the
relevant emails in the xen-devel archive where I explain the problems I ran
into.

> > Maybe
> > newer binutils are more flexible and will be able to produce a PE image
> > with properties required by Multiboot2 protocol.
>
> Isn't all you need the MB2 header within the first so many bytes of the
> (disk) image? Or was it the image as loaded into memory? Both should be
> possible to arrange for.

IIRC the Multiboot2 protocol requires the header to be within the first
32 KiB of the image. So, this is not a problem.

Daniel


From xen-devel-bounces@lists.xenproject.org Tue May 18 17:51:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 17:51:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129539.243136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj3sj-0000LU-7P; Tue, 18 May 2021 17:51:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129539.243136; Tue, 18 May 2021 17:51:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj3sj-0000LN-3Z; Tue, 18 May 2021 17:51:49 +0000
Received: by outflank-mailman (input) for mailman id 129539;
 Tue, 18 May 2021 17:51:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vJ9Y=KN=vps.thesusis.net=psusi@srs-us1.protection.inumbo.net>)
 id 1lj3sh-0000LH-56
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 17:51:47 +0000
Received: from vps.thesusis.net (unknown [34.202.238.73])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0d4aa79-b83e-42ee-9e0a-0dd3ba73c248;
 Tue, 18 May 2021 17:51:46 +0000 (UTC)
Received: from localhost (localhost [127.0.0.1])
 by vps.thesusis.net (Postfix) with ESMTP id 4492A2181A;
 Tue, 18 May 2021 13:51:46 -0400 (EDT)
Received: from vps.thesusis.net ([IPv6:::1])
 by localhost (vps.thesusis.net [IPv6:::1]) (amavisd-new, port 10024)
 with ESMTP id lFpdzBQUarjJ; Tue, 18 May 2021 13:51:46 -0400 (EDT)
Received: by vps.thesusis.net (Postfix, from userid 1000)
 id 0BC992181E; Tue, 18 May 2021 13:51:46 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0d4aa79-b83e-42ee-9e0a-0dd3ba73c248
References: <87o8dw52jc.fsf@vps.thesusis.net> <20210506143654.17924-1-phill@thesusis.net> <YJRRCEJrQOwVymdP@google.com> <871ra4yprd.fsf@vps.thesusis.net>
User-agent: mu4e 1.5.7; emacs 26.3
From: Phillip Susi <phill@thesusis.net>
To: Dmitry Torokhov <dmitry.torokhov@gmail.com>
Cc: xen-devel@lists.xenproject.org, linux-input@vger.kernel.org
Subject: Re: [PATCH] Xen Keyboard: don't advertise every key known to man
Date: Tue, 18 May 2021 13:13:45 -0400
In-reply-to: <871ra4yprd.fsf@vps.thesusis.net>
Message-ID: <87zgwsc4lp.fsf@vps.thesusis.net>
MIME-Version: 1.0
Content-Type: text/plain


Phillip Susi writes:

> Dmitry Torokhov writes:
>
>> By doing this you are stopping delivery of all key events from this
>> device.

Hrm... I don't have very many "interesting" keys to test, but when I hit
the menu key, I see KEY_COMPOSE, which is > KEY_MIN_INTERESTING.  When I
press the button to have my vnc client send a windows key, I see
KEY_LEFTCTRL+KEY_ESC.  I also see KEY_PAUSE when I hit that key and it
is also "interesting".  I get the same thing with or without this patch,
so it does not appear to be breaking delivery of the keys that are no
longer being advertised.
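For context, the "interesting" test that userspace applies is essentially
a comparison against KEY_MIN_INTERESTING. A minimal sketch, with the key
code values hard-coded from linux/input-event-codes.h so the snippet is
self-contained (the helper function is my own illustration):

```c
/* Key code values from linux/input-event-codes.h. */
#define KEY_ESC               1
#define KEY_LEFTCTRL         29
#define KEY_MUTE            113
#define KEY_PAUSE           119
#define KEY_COMPOSE         127
#define KEY_MIN_INTERESTING KEY_MUTE

/* A key counts as "interesting" if its code is at least KEY_MIN_INTERESTING. */
static int key_is_interesting(int code)
{
    return code >= KEY_MIN_INTERESTING;
}
```

So KEY_COMPOSE (127) and KEY_PAUSE (119) clear the threshold, while
modifier/basic keys such as KEY_LEFTCTRL and KEY_ESC do not.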

Oddly though, libinput list-devices does not even show the Xen Virtual
Keyboard.  Its sysfs path is /sys/class/input/input1, but it also does
not have a device node in /dev/input, so I can't even ask libinput to
only monitor that device.

Ok... this is really odd... it does show the device without this patch,
and not with it.  The input events I was seeing were coming through the
"AT Translated Set 2 keyboard", and no events come through the Xen Virtual
Keyboard (even without this patch).  This makes me wonder: why do we have
this device at all?


From xen-devel-bounces@lists.xenproject.org Tue May 18 18:06:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 18:06:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129550.243159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj470-000221-NR; Tue, 18 May 2021 18:06:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129550.243159; Tue, 18 May 2021 18:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj470-00021u-J0; Tue, 18 May 2021 18:06:34 +0000
Received: by outflank-mailman (input) for mailman id 129550;
 Tue, 18 May 2021 18:06:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj46z-00021j-UC; Tue, 18 May 2021 18:06:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj46z-0000Y8-O0; Tue, 18 May 2021 18:06:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj46z-0001Uc-Gq; Tue, 18 May 2021 18:06:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj46z-0001Mb-GH; Tue, 18 May 2021 18:06:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tpttrHIa8vZbrvDL5i0gls/cT9TgGdqQsPR5KclkmxI=; b=JJYmzauI38bfdQXyQX0XLEpCZ9
	U3KSm9Op8g18uehk1Z3+UuuOpZhE8Y4U758pzWBo8jabg8Ev6CxlUmXWRPTlW47yWrxqWPjR0Yzeq
	K2/wWc/9SCFBbyr6QW/Ua8LjtaWtHmWOcCsm+6Jm9OWpOm865Px4QH8z6SvE+lgYkKds=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161990-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 161990: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3ac8835a80b27fc4e7116dbde78d3eececc66fc9
X-Osstest-Versions-That:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 18:06:33 +0000

flight 161990 xen-unstable real [real]
flight 162045 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/161990/
http://logs.test-lab.xenproject.org/osstest/logs/162045/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot            fail pass in 162045-retest
 test-amd64-amd64-examine      4 memdisk-try-append  fail pass in 162045-retest

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 162045 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 162045 never pass
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail like 161939
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161973
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161973
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161973
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161973
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161973
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161973
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161973
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161973
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161973
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161973
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161973
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  3ac8835a80b27fc4e7116dbde78d3eececc66fc9
baseline version:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34

Last test of basis   161973  2021-05-17 01:52:45 Z    1 days
Failing since        161983  2021-05-17 17:06:44 Z    1 days    2 attempts
Testing same since   161990  2021-05-18 05:50:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cb199cc7de..3ac8835a80  3ac8835a80b27fc4e7116dbde78d3eececc66fc9 -> master


From xen-devel-bounces@lists.xenproject.org Tue May 18 18:11:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 18:11:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129574.243211 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj4C1-0004Vx-2P; Tue, 18 May 2021 18:11:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129574.243211; Tue, 18 May 2021 18:11:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj4C0-0004Ve-SI; Tue, 18 May 2021 18:11:44 +0000
Received: by outflank-mailman (input) for mailman id 129574;
 Tue, 18 May 2021 18:11:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lj4Bz-0004Uv-HZ
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 18:11:43 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj4By-0000e1-8Q; Tue, 18 May 2021 18:11:42 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lj4By-0004Mq-2p; Tue, 18 May 2021 18:11:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:MIME-Version:Date:
	Message-ID:Subject:From:Cc:To;
	bh=ofuvlVgP5VBf3Y1E5xtHZWsKDGDSQQPy3HRZEKYa4oE=; b=t+kYy/V3ucTlpUE/VwzINcxFuN
	6+sTYw+mMRM8LKa9iaxbJTCzI2e6gk2hqQ3m8U610fRt7cEYuZ3WYJHny0q3bc7pk39z7Bj8x3dr/
	6XVdLJbqVy3oX9qRRWWijiWDeuLICgWJHYa/YpnU7pDd7H1sOylNDsjXN3ojAYmmk4wI=;
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
From: Julien Grall <julien@xen.org>
Subject: Preserving transactions across Xenstored Live-Update
Message-ID: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
Date: Tue, 18 May 2021 19:11:40 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

I have started to look at preserving transactions across Live-Update in 
C Xenstored. So far, I have managed to transfer transactions that 
read/write existing nodes.

Now, I am running into trouble transferring nodes created or deleted 
within a transaction using the existing migration format.

C Xenstored keeps track of nodes accessed during the transaction, but 
not of their children (AFAICT for performance reasons).

Therefore we have the names of the children but not their content (i.e. 
permissions, data...).

I have been exploring a couple of approaches:

    1) Introduce a flag to indicate there is a child but no content.

Pros:
   * Close to the existing stream.
   * Fairly implementation agnostic.

Cons:
   * Memory overhead, as we need to transfer the full path (rather than 
just the child name).
   * Checking for duplication (whether the node was actually accessed) 
will introduce runtime overhead.

    2) Extend XS_STATE_TYPE_NODE (or introduce a new record) to allow 
transferring the children's names for a transaction.

Pros:
   * The implementation is more straightforward.

Cons:
   * The stream becomes implementation specific.

Neither approach looks very appealing to me, so I would like to request 
feedback: other proposals, or a preference between the two options.
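For illustration only, option 1 might look something like the sketch
below. The record layout and the CHILD_ONLY flag are hypothetical, not
part of the existing XS_STATE_TYPE_NODE format; the point is just that
the receiver would use the flag to decide whether to take the node's
content from the stream or look it up in its own database:

```c
#include <stdint.h>

/* Hypothetical flag for a node record in the migration stream. */
#define XS_STATE_NODE_FLAG_CHILD_ONLY  (1u << 0)  /* name known, content absent */

/* Hypothetical node record header (the real layout differs). */
struct xs_state_node_sketch {
    uint32_t flags;
    uint32_t path_len;   /* full NUL-terminated path follows */
    /* permissions and data follow only when CHILD_ONLY is not set */
};

/* A child-only record tells the receiver to fetch the node's content
 * from the live database rather than from the stream. */
static int needs_db_lookup(const struct xs_state_node_sketch *rec)
{
    return (rec->flags & XS_STATE_NODE_FLAG_CHILD_ONLY) != 0;
}
```

The duplication check mentioned under the cons would happen on the
receiving side: before the lookup, it must verify the same path was not
already transferred as a full node.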

Note that I haven't looked in much detail at how transactions work in 
OCaml Xenstored.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Tue May 18 19:11:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 19:11:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129634.243253 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj57H-0002HW-LW; Tue, 18 May 2021 19:10:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129634.243253; Tue, 18 May 2021 19:10:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj57H-0002HP-GQ; Tue, 18 May 2021 19:10:55 +0000
Received: by outflank-mailman (input) for mailman id 129634;
 Tue, 18 May 2021 19:10:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj57G-0002HF-Ue; Tue, 18 May 2021 19:10:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj57G-0001cu-Pt; Tue, 18 May 2021 19:10:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj57G-00055D-GI; Tue, 18 May 2021 19:10:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj57G-00037n-Fn; Tue, 18 May 2021 19:10:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=DLTNGSR4hE6YXfDGm6mYuoZFfOqNw0izv5qlGx2jDKI=; b=54lqJxSHEMIdmg7FkJzo3NjCEs
	9vbBP3ygMfD61OjBd2xhsx8AzFtrB9spjf9Q/3e7Joo2w5l85mwadFeduZthGLef8Tn+dUY0+bITE
	AgRXVGX9wVKFqwTTVvIGuV2QiBPpDqfqqE6WcfF8mN4LmLBjlmVNaGtMP/rNEFAvMt5s=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162055-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162055: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 19:10:54 +0000

flight 162055 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162055/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() and set_delay() is meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 18 19:49:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 19:49:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129675.243267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj5ih-0005cs-Rc; Tue, 18 May 2021 19:49:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129675.243267; Tue, 18 May 2021 19:49:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj5ih-0005cl-Oa; Tue, 18 May 2021 19:49:35 +0000
Received: by outflank-mailman (input) for mailman id 129675;
 Tue, 18 May 2021 19:49:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj5ig-0005cb-IP; Tue, 18 May 2021 19:49:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj5ig-0002Dy-DR; Tue, 18 May 2021 19:49:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj5ig-0006ci-5M; Tue, 18 May 2021 19:49:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj5ig-0003Wr-4d; Tue, 18 May 2021 19:49:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=JZtH0e8Rw4RZKB0Ws14E1rcVROtNfae3zs+fJUPtzGk=; b=3HXDr6CghLZSvlV3SbLJRQ2amN
	Qs1UOA2OJpEM3HK0ZgCKvEpgArONaEjfltsKgo7Y/bUW+xqoNIIxcIjlm53CBywrAQl+EnNtpPkLl
	pUuqj0JndY/qxWX2kztVI9aNkp4QOKilTQVB1z0XM0uwkuNZsxUS6yCtUw0Sx8LbSaI0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162046-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162046: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=42ec0a315b8a2f445b7a7d74b8d78965f1dff8f6
X-Osstest-Versions-That:
    ovmf=29e300ff815283259e81822ed3cb926bb9ad6460
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 19:49:34 +0000

flight 162046 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162046/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 42ec0a315b8a2f445b7a7d74b8d78965f1dff8f6
baseline version:
 ovmf                 29e300ff815283259e81822ed3cb926bb9ad6460

Last test of basis   162002  2021-05-18 08:10:25 Z    0 days
Testing same since   162046  2021-05-18 17:10:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Zhiguang Liu <zhiguang.liu@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   29e300ff81..42ec0a315b  42ec0a315b8a2f445b7a7d74b8d78965f1dff8f6 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Tue May 18 19:57:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 19:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129684.243286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj5q6-00072v-Mb; Tue, 18 May 2021 19:57:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129684.243286; Tue, 18 May 2021 19:57:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj5q6-00072o-Jc; Tue, 18 May 2021 19:57:14 +0000
Received: by outflank-mailman (input) for mailman id 129684;
 Tue, 18 May 2021 19:57:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj5q5-00072e-6H; Tue, 18 May 2021 19:57:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj5q4-0002MC-Sy; Tue, 18 May 2021 19:57:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj5q4-0006rV-Jl; Tue, 18 May 2021 19:57:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj5q4-0000pt-JG; Tue, 18 May 2021 19:57:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=HwTLZmbhQJ5HKgjMWVdRNXPurvq75c0bwSsXu+10LUQ=; b=08fO0mYPYmnfl+L26P9KugKbQB
	K87hQUEARx3rnDepzgsUF8RkKZWlFGQlYj4iGdXKybGQvx3QBty4PV9wn7Z+QFbX6dQ9cBgKAAcCO
	diuveGplGjCf13Pztd9/11Lr+lQf6rnclasRYYfE08sfyKmVjJIHfvsbkH5VYSwsYVik=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-arm64-xsm
Message-Id: <E1lj5q4-0000pt-JG@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 19:57:12 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-arm64-xsm
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
  Bug not present: caa9c4471d1d74b2d236467aaf7e63a806ac11a4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162060/


  commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
  Author: Julien Grall <jgrall@amazon.com>
  Date:   Tue May 18 14:34:22 2021 +0100
  
      tools/libs: guest: Use const whenever we point to literal strings
      
      Literal strings are not meant to be modified. So we should use const
      char * rather than char * when we want to store a pointer to them.
      
      Signed-off-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
      Acked-by: Wei Liu <wl@xen.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build --summary-out=tmp/162060.bisection-summary --basis-template=162023 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-arm64-xsm xen-build
Searching for failure / basis pass:
 162055 fail [host=laxton0] / 162023 ok.
Failure / basis pass flights: 162055 / 162023
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Basis pass 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#caa9c4471d1d74b2d236467aaf7e63a806ac11a4-01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Loaded 5001 nodes in revision graph
Searching for test results:
 162023 pass 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
 162036 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162047 pass 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
 162050 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162052 fail 7ea428895af2840d85c524f0bd11a38aac308308 89aae4ad8f495b647de33f2df5046b3ce68225f8
 162054 fail 7ea428895af2840d85c524f0bd11a38aac308308 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
 162056 pass 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
 162057 fail 7ea428895af2840d85c524f0bd11a38aac308308 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
 162055 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162059 pass 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
 162060 fail 7ea428895af2840d85c524f0bd11a38aac308308 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Searching for interesting versions
 Result found: flight 162023 (pass), for basis pass
 For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4, results HASH(0x5592b2a64f98) HASH(0x5592b2a705d8) HASH(0x5592b2a78da0) HASH(0x5592b2a80608) Result found: flight 162036 (fail), for basis failure (at ancestor ~449)
 Repro found: flight 162047 (pass), for basis pass
 Repro found: flight 162050 (fail), for basis failure
 0 revisions at 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
No revisions left to test, checking graph state.
 Result found: flight 162023 (pass), for last pass
 Result found: flight 162054 (fail), for first failure
 Repro found: flight 162056 (pass), for last pass
 Repro found: flight 162057 (fail), for first failure
 Repro found: flight 162059 (pass), for last pass
 Repro found: flight 162060 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
  Bug not present: caa9c4471d1d74b2d236467aaf7e63a806ac11a4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162060/


  commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
  Author: Julien Grall <jgrall@amazon.com>
  Date:   Tue May 18 14:34:22 2021 +0100
  
      tools/libs: guest: Use const whenever we point to literal strings
      
      Literal strings are not meant to be modified. So we should use const
      char * rather than char * when we want to store a pointer to them.
      
      Signed-off-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
      Acked-by: Wei Liu <wl@xen.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-arm64-xsm.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
162060: tolerable ALL FAIL

flight 162060 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162060/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-arm64-xsm               6 xen-build               fail baseline untested


jobs:
 build-arm64-xsm                                              fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Tue May 18 20:39:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 20:39:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129696.243304 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj6Ul-0002kY-12; Tue, 18 May 2021 20:39:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129696.243304; Tue, 18 May 2021 20:39:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj6Uk-0002kR-Tn; Tue, 18 May 2021 20:39:14 +0000
Received: by outflank-mailman (input) for mailman id 129696;
 Tue, 18 May 2021 20:39:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj6Uj-0002kH-Tt; Tue, 18 May 2021 20:39:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj6Uj-00037z-I2; Tue, 18 May 2021 20:39:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj6Uj-0000cA-7B; Tue, 18 May 2021 20:39:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj6Uj-0004jv-6g; Tue, 18 May 2021 20:39:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N1uTlx3cq94ee+jtaIWyPHu3EU00yie7O8q6IOHxU7o=; b=TTDMxExxF43QKstgInNxvY2s13
	257skctvdb0cu1tF0HVdE0Yjr4/WgFyUdkeTN1fDMjVZJRLEP9BuKzATDP5og/q1r/HkuY9SaIZeL
	hYSMrbemyzmZFz8LWiAVdqjj/V+sAF9H+ptZWq81V30Bx8/3DcsyvXvMZATnYzeTCthI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-161996-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 161996: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8ac91e6c6033ebc12c5c1e4aa171b81a662bd70f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 20:39:13 +0000

flight 161996 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/161996/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8ac91e6c6033ebc12c5c1e4aa171b81a662bd70f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  291 days
Failing since        152366  2020-08-01 20:49:34 Z  289 days  488 attempts
Testing same since   161996  2021-05-18 07:32:22 Z    0 days    1 attempts

------------------------------------------------------------
6063 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1645713 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 18 21:53:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 21:53:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129714.243324 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj7eN-0001lf-12; Tue, 18 May 2021 21:53:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129714.243324; Tue, 18 May 2021 21:53:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj7eM-0001lY-U8; Tue, 18 May 2021 21:53:14 +0000
Received: by outflank-mailman (input) for mailman id 129714;
 Tue, 18 May 2021 21:53:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj7eL-0001lO-7c; Tue, 18 May 2021 21:53:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj7eL-0004Xh-1u; Tue, 18 May 2021 21:53:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj7eK-0005KK-R9; Tue, 18 May 2021 21:53:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj7eK-0000C5-Qc; Tue, 18 May 2021 21:53:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=rbzwqeou8vLvtAN0eLuw1G5hAoFyBvGifRHOzqWhTJI=; b=PDOD3LOKpmeuktz15Wpi8gC3N6
	ppegnnW8bOjytgqIIly6GfnQRC0F2w8dO9zhkuLjksF9DDgJB94SxRpI/CylWsujRTtHxtFTc9LHG
	qKaeCQmbb0aJU+6zgaz8F8Nuy/t4wbzCOog6/yzR7OyG4DgOGwewbHMF5o6EAq1SuSgc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162062-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162062: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 21:53:12 +0000

flight 162062 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162062/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() and set_delay() are meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Tue May 18 23:22:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 18 May 2021 23:22:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129731.243343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj92R-0001nr-BH; Tue, 18 May 2021 23:22:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129731.243343; Tue, 18 May 2021 23:22:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lj92R-0001nk-88; Tue, 18 May 2021 23:22:11 +0000
Received: by outflank-mailman (input) for mailman id 129731;
 Tue, 18 May 2021 23:22:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj92P-0001na-Mi; Tue, 18 May 2021 23:22:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj92P-0005yl-Dv; Tue, 18 May 2021 23:22:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lj92P-00012R-6H; Tue, 18 May 2021 23:22:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lj92P-0000hA-5o; Tue, 18 May 2021 23:22:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Oc7XfKxxPbkkxqXx426aH5REj3f3x6CP+8nIANtWTAU=; b=uJTqSvMMooS2QdcomJW/hWrA1r
	oTB21LSMMAHJDmTAS6iuzOIog1qiopdslzJ9wTxOmFkS4892QM61ss+0IiKvzFmhIPQw0G/nQ9OjD
	oRUuaGQDLPUxjoZLaocxnCjX/wqdEdt3PBW6kUMIRq45DqHa8je5fB6Vn3AXP2iEDi/M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162065-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162065: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 18 May 2021 23:22:09 +0000

flight 162065 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162065/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() and set_delay() are meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 19 01:14:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 01:14:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129752.243369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljAnB-0001QB-Q0; Wed, 19 May 2021 01:14:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129752.243369; Wed, 19 May 2021 01:14:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljAnB-0001Q4-Mt; Wed, 19 May 2021 01:14:33 +0000
Received: by outflank-mailman (input) for mailman id 129752;
 Wed, 19 May 2021 01:14:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljAnA-0001Pu-7N; Wed, 19 May 2021 01:14:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljAnA-0005Gl-0r; Wed, 19 May 2021 01:14:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljAn9-0007e5-O1; Wed, 19 May 2021 01:14:31 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljAn9-0001kZ-NZ; Wed, 19 May 2021 01:14:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=YCj7YXhx2aUm7dRjdnt3TZn2kSxb0sAI+dJegxtPi04=; b=husYBQvtri4hxEYL5MzeIP0nDY
	UDPb5odBQv1VO99pEzWzp5CBLi0wJ6hxMjg4JKiXl7l1n3SqM6BnR8rPEi7LbkTlerGWZ/A184d6K
	kcW9VoyhVptyWM42D+1XK3tMya/X2D9rWG+OMnbkwKG5w4W7BLN8S1SkwQYB63GsyZiE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162067-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162067: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 01:14:31 +0000

flight 162067 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162067/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 162065

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    5 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
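
    The pattern described in this commit message can be sketched as follows.
    This is a hypothetical illustration, not the actual xenbaked code: the
    struct name stat_map_t matches the commit, but the field names and table
    contents here are invented for the example.

    ```c
    #include <stdio.h>

    /* A struct whose "text" field only ever points at string literals.
     * Declaring it const char * (rather than char *) makes any attempt
     * to write through the pointer a compile-time error. */
    typedef struct {
        int event_id;
        const char *text;   /* points at a string literal; never modified */
    } stat_map_t;

    static const stat_map_t map[] = {
        { 1, "domain created" },
        { 2, "domain destroyed" },
    };

    int main(void)
    {
        /* Reading through the const pointer is fine... */
        printf("%s\n", map[0].text);
        /* ...but a write such as:
         *     map[0].text[0] = 'D';
         * would now be rejected by the compiler instead of faulting
         * (or silently corrupting memory) at run time. */
        return 0;
    }
    ```

    Modifying a string literal is undefined behaviour in C, so the const
    qualifier turns a latent run-time bug into a build failure.
    
    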

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() nor in set_delay() is meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 19 01:27:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 01:27:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129762.243383 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljAzm-0002tD-0y; Wed, 19 May 2021 01:27:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129762.243383; Wed, 19 May 2021 01:27:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljAzl-0002t6-Tz; Wed, 19 May 2021 01:27:33 +0000
Received: by outflank-mailman (input) for mailman id 129762;
 Wed, 19 May 2021 01:27:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljAzk-0002sw-LP; Wed, 19 May 2021 01:27:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljAzk-0005UL-Ay; Wed, 19 May 2021 01:27:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljAzk-0008BK-0K; Wed, 19 May 2021 01:27:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljAzj-0008J6-W1; Wed, 19 May 2021 01:27:31 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=S1l2dPmvHxxJp/CZ2wuozuXrk1i/fCIS6Y8P57Iz9Z8=; b=LXc6ZDBHmAAUvTC+yNpeLXsz8S
	UtPd2f7UlRJJ+Z56CPpdmIc6HOZmi8fROu4xF/gtO2+xLxUVM/4SRkY4Sx3dHreMKm1pzfomXkweF
	osViWdZyD42OQDU/VHgWk2eChiesft86dNHArKigKSb/vPVKw5R01JBA5xOQ4sxxBiaE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162019-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162019: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=8e22b27994dba67f3d67d2cb7cf67d7863862b67
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 01:27:31 +0000

flight 162019 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162019/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                8e22b27994dba67f3d67d2cb7cf67d7863862b67
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  271 days
Failing since        152659  2020-08-21 14:07:39 Z  270 days  496 attempts
Testing same since   162019  2021-05-18 12:11:58 Z    0 days    1 attempts

------------------------------------------------------------
505 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 154440 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 19 02:23:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 02:23:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129779.243398 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljBrM-0000S0-FY; Wed, 19 May 2021 02:22:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129779.243398; Wed, 19 May 2021 02:22:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljBrM-0000Rt-Ac; Wed, 19 May 2021 02:22:56 +0000
Received: by outflank-mailman (input) for mailman id 129779;
 Wed, 19 May 2021 02:22:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljBrL-0000Rn-1Q
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 02:22:55 +0000
Received: from EUR04-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.7.70]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83f0bdbc-d648-46f8-8547-58bc50078462;
 Wed, 19 May 2021 02:22:51 +0000 (UTC)
Received: from DB6PR0202CA0006.eurprd02.prod.outlook.com (2603:10a6:4:29::16)
 by HE1PR0801MB2027.eurprd08.prod.outlook.com (2603:10a6:3:4f::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Wed, 19 May
 2021 02:22:49 +0000
Received: from DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:29:cafe::db) by DB6PR0202CA0006.outlook.office365.com
 (2603:10a6:4:29::16) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.32 via Frontend
 Transport; Wed, 19 May 2021 02:22:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT037.mail.protection.outlook.com (10.152.20.215) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 02:22:48 +0000
Received: ("Tessian outbound ea2c9a942a09:v92");
 Wed, 19 May 2021 02:22:48 +0000
Received: from 9e26abac3fdf.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 B117F9F7-03E7-4A0A-8080-0471CCB87BEC.1; 
 Wed, 19 May 2021 02:22:42 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9e26abac3fdf.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 02:22:42 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR0801MB1917.eurprd08.prod.outlook.com (2603:10a6:800:8a::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.27; Wed, 19 May
 2021 02:22:32 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 02:22:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83f0bdbc-d648-46f8-8547-58bc50078462
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ikhoc6GdUC42oBi1PvkZVyIyI2+Le60z8MEIPDNfNX8=;
 b=EDX6x8WVYNruD8VulONpKxLq3xubzTjSY0mRmz9eUhBgpcxy9NtMbLBfM5Ix7mNJ61JLUgJ5tZ6t2Z5x1tggnJlc7D29TsOfSgwwYWRh6DYTeDuyB5vEHiAenCHZTGFymr8WY9TDbLjdeuny+tXBusJ9XDYdoIWGKJrSBZKUXNo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=GfTFKY9jJfiDX8ng6P+1yCt4sYINNEMkGbv7vIq1xEYpekGV4VyuNmOsCI/hNVjkKGkQQG7UXYb8ZXpALTyhYepha3imrwWFeixx//0rEnOXKjX6o+0oEyt3WuKQyOQAkVt+loHbQxpYE/HwA5S8xjdxqlYIXTAGQPD4Agjv643hG/8Va8iqEoeZLn9y6KGfK5QXlW3ugtoUczmdtM3F2uvDrk3HfChaPxdINYpSL9jNyg5Xl7JGDgSXRXbrijtaX0dCoYCYixKHLYLHWmoicph5ctMPK0UD4vgJioMypG+Fv0M5EejkTqsXK+wRpYml5uRSV0M1HQnYY4p+uTBqfg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ikhoc6GdUC42oBi1PvkZVyIyI2+Le60z8MEIPDNfNX8=;
 b=YaiAS/146ld4MCgvg/u5k5ruV17PRFm3Nw+ZDLK5RMWCe28iyyO3lvqyX1JgnMFScTHUbhdJqPbwC/Idr09TZAVUk/qZ40bi6wY8gvAsIsRK1hq6xoN5BuTkcYMtPWEgK6WYIq3p3CKUvJ70tXtQZYOyTH/n7ScyUFCs9kpdVkWre1e6rbb+vaxhsU3y04ODd/bkhDln7BNrMaHiCkOpcUELM5SqDjcjIq0XpGMhMFMF1R6OrWTssRbckVg2oVx7zN8I7FcRNgTobeTDhMDLo/S+PTLozMCekqT31ZXmB+yhWPwR3lTCKT2iEjUOnWuh5Pv3Q2jQMiyHdER3bnbeHQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Thread-Topic: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Thread-Index: AQHXS6WvvFnJG9tHGECWPMZXor2qraro8JyAgAAPtYA=
Date: Wed, 19 May 2021 02:22:31 +0000
Message-ID:
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
In-Reply-To: <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 6B559B687229854E82AB1EBEC9A0A5BF.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 20d2035f-4b73-41f1-cb0a-08d91a6cff05
x-ms-traffictypediagnostic: VI1PR0801MB1917:|HE1PR0801MB2027:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<HE1PR0801MB20274E40F38C3FD01063AAC2F72B9@HE1PR0801MB2027.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:5236;OLM:5236;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 fnGZFGAA0HtckBxXjf6ALSZI6vDNqsTc0ZZpaeriPsW5MnfcvX07/Lz4lh4GZIjJgGZrwAlrVtOotxGJjP45Bix8G+QxiVbR07Kvusqj7w/oUXoVYjIEtsqiyQRTojvtYnbYmysFww/rI95L94fEJK1A+IQHB9kGkzADDWq/k26PQyT7oLUCysATCzHJWG1cYnuR07O0k5ryn2iYT0KDAoP4n/aFbSEgpU1inJrb50mZAKDrWnhpgUPRQRjMp2YzT5+F/G9CD7Xaexp8gV0byoRlPQGFZAuui9K7mZaDtnq4akNYvPWZj9HE7Qnf8Kbf9NX89tY3CP+0YSjzqfmn4TdAJM/3Tt7t7PgJm4aTpVJgxZpFPZCSAK+mzxhgxl0/V5BhjMTJDTWxB1QG+lMZbD/qPA+FjBXlukibpGUgFawgtp4m4hLZIUb4YqSFNNAWgpu7F4qlVxzrLpH5XZFMgl5RErShzrv2mIzIW4y6tyYOSuwbFDkjkh5hB+gQT2Ly/ceOIKW0Bcd/wAMiVhQil7vOUmvkHv0E5XJcRrZCFZ+m08+XWo5f0z/rK5+7/EzssPLc2PQ56tgNg6Pm/dPCZp4EI24jYn/se1YiFUtnpZU/TT2YKtGi67Rhr+fOAaHuWMzSAuT198X3gOHQyNnQA9Akm1kPAZJf+6745bfUJ0aVdErDdSz/AnT+IKEapl1tPO+BmDJYYOwuX89oSaUb6sd07v4RixpwF+uAPDMlOrw=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5215.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(366004)(346002)(39850400004)(396003)(26005)(7696005)(76116006)(66946007)(2906002)(9686003)(55016002)(6506007)(33656002)(186003)(53546011)(86362001)(83380400001)(4326008)(316002)(8936002)(5660300002)(110136005)(52536014)(8676002)(478600001)(64756008)(66476007)(54906003)(66556008)(66446008)(71200400001)(38100700002)(966005)(122000001);DIR:OUT;SFP:1101;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1917
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	09fadc7a-180c-4a61-fd92-08d91a6cf545
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	HTk0KFJIFPahU0LowWO+6pPTb59f0VYFbYMz0tQvdNdX5xb9K+9kC3YIuKrtJ5F2t7efLGzsY7l4tvErmX0jDK/hwFheOK/YiI6LUkDiub5LtBzf7wZeZWi+wWgrSZQfWEk5vxF1w0VFyBT15z11o4tAeBSL3fYoh6SDe/UdDs7KIKIs5leDCH4mD1/5+5F0C/Z53XwbNH0Dm0Q4wrTqgvQDeRJ1UjS1wsYbdwWZ9PxbhTqqAro8N2l/hF1pjA3QUcWsuOhxP30jAatKAAwOuT3QNurn9Tme5tg7BeevT6uRKeeq+eEVb/NsiwhrJNWl/GWz/fu9qbwqyi+4aeoPOJGUJksxmu9bkdpom88hsxAZg/y8oHkaaxufWFsFvnJJ6VZ3wq5ylwWl+boB2KYvNLI1k+mf/bOa8ME4m7fUWebwdnIvTPsTCzUUkV4DAJcloDLMG+J0S+ggs2oLKt09qYhsNqLHZl+sUcgOAT0ieccuQlSCQVa6T3siviVIoDmhVF5APDmK0zIhDFXKG/Xy9q0LkoNzyRFktI5IDDEE/NsZAB+y+xVCkefNVUOuR9qk9Dv9X3mJSq/w9MRS3X/STPVMMkQtWiMVsGf2tS+8CgCZoue/oKJkbtxWgE+D4dVhjQTvOYkYdqfEM16SBU/7wPf6/GZXWIiFaTJrj7WaDZCSv0T3OxIvLGlT5tXKS9NeNQWFlfvpd/o7/K+lA7daNStdcYXV5xfuKkyCH62rrZo=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39850400004)(136003)(376002)(346002)(396003)(36840700001)(46966006)(316002)(8936002)(478600001)(82740400003)(81166007)(356005)(336012)(83380400001)(52536014)(4326008)(9686003)(5660300002)(54906003)(70586007)(186003)(7696005)(36860700001)(53546011)(8676002)(55016002)(26005)(6506007)(110136005)(47076005)(82310400003)(33656002)(70206006)(2906002)(86362001)(966005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 02:22:48.6915
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 20d2035f-4b73-41f1-cb0a-08d91a6cff05
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: HE1PR0801MB2027

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 4:58 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
>
> Hi Penny,
>
> On 18/05/2021 06:21, Penny Zheng wrote:
> > Static Allocation refers to system or sub-system(domains) for which
> > memory areas are pre-defined by configuration using physical address
> ranges.
> > Those pre-defined memory, -- Static Momery, as parts of RAM reserved
> > in the
>
> s/Momery/Memory/

Oh, thx!

>
> > beginning, shall never go to heap allocator or boot allocator for any use.
> >
> > Domains on Static Allocation is supported through device tree property
> > `xen,static-mem` specifying reserved RAM banks as this domain's guest
> RAM.
> > By default, they shall be mapped to the fixed guest RAM address
> > `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> >
> > This patch introduces this new `xen,static-mem` property to define
> > static memory nodes in device tree file.
> > This patch also documents and parses this new attribute at boot time
> > and stores related info in static_mem for later initialization.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   docs/misc/arm/device-tree/booting.txt | 33 +++++++++++++++++
> >   xen/arch/arm/bootfdt.c                | 52 +++++++++++++++++++++++++++++++
> >   xen/include/asm-arm/setup.h           |  2 ++
> >   3 files changed, 87 insertions(+)
> >
> > diff --git a/docs/misc/arm/device-tree/booting.txt
> > b/docs/misc/arm/device-tree/booting.txt
> > index 5243bc7fd3..d209149d71 100644
> > --- a/docs/misc/arm/device-tree/booting.txt
> > +++ b/docs/misc/arm/device-tree/booting.txt
> > @@ -268,3 +268,36 @@ The DTB fragment is loaded at 0xc000000 in the
> example above. It should
> >   follow the convention explained in docs/misc/arm/passthrough.txt. The
> >   DTB fragment will be added to the guest device tree, so that the guest
> >   kernel will be able to discover the device.
> > +
> > +
> > +Static Allocation
> > +=============
> > +
> > +Static Allocation refers to system or sub-system(domains) for which
> > +memory areas are pre-defined by configuration using physical address
> ranges.
> > +Those pre-defined memory, -- Static Momery, as parts of RAM reserved
> > +in the
>
> s/Momery/Memory/
>

Oh, thx

> > +beginning, shall never go to heap allocator or boot allocator for any use.
> > +
> > +Domains on Static Allocation is supported through device tree
> > +property `xen,static-mem` specifying reserved RAM banks as this domain's
> guest RAM.
>
> I would suggest to use "physical RAM" when you refer to the host memory.
>
> > +By default, they shall be mapped to the fixed guest RAM address
> > +`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
>
> There are a few bits that needs to clarified or part of the description:
>    1) "By default" suggests there is an alternative possibility.
> However, I don't see any.
>    2) Will the first region of xen,static-mem be mapped to GUEST_RAM0_BASE
> and the second to GUEST_RAM1_BASE? What if a third region is specificed?
>    3) We don't guarantee the base address and the size of the banks.
> Wouldn't it be better to let the admin select the region he/she wants?
>    4) How do you determine the number of cells for the address and the size?
>

The specific implementation on this part could be traced to the last commit
https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-11-penny.zheng@arm.com/

It will exhaust the GUEST_RAM0_SIZE, then seek to the GUEST_RAM1_BASE.
GUEST_RAM0 may take up several regions.

Yes, I may add the 1:1 direct-map scenario here to explain the alternative possibility

For the third point, are you suggesting that we could provide an option that let user
also define guest memory base address/size?

I'm confused on the fourth point, you mean the address cell and size cell for xen,static-mem = <...>?
It will be consistent with the ones defined in the parent node, domUx.

> > +Static Allocation is only supported on AArch64 for now.
>
> The code doesn't seem to be AArch64 specific. So why can't this be used for
> 32-bit Arm?
>

True, we have plans to make it also workable in AArch32 in the future.
Because we considered XEN on cortex-R52.

> > +
> > +The dtb property should look like as follows:
> > +
> > +        chosen {
> > +            domU1 {
> > +                compatible = "xen,domain";
> > +                #address-cells = <0x2>;
> > +                #size-cells = <0x2>;
> > +                cpus = <2>;
> > +                xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
> > +
> > +                ...
> > +            };
> > +        };
> > +
> > +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of
> > +512MB size
>
> Do you mean "DomU1 will have a static memory of 512MB reserved from the
> physical address..."?
>

Yes, yes. You phrase it more clearly, thx

> > +as guest RAM.
> > diff --git a/xen/arch/arm/bootfdt.c b/xen/arch/arm/bootfdt.c index
> > dcff512648..e9f14e6a44 100644
> > --- a/xen/arch/arm/bootfdt.c
> > +++ b/xen/arch/arm/bootfdt.c
> > @@ -327,6 +327,55 @@ static void __init process_chosen_node(const void
> *fdt, int node,
> >       add_boot_module(BOOTMOD_RAMDISK, start, end-start, false);
> >   }
> >
> > +static int __init process_static_memory(const void *fdt, int node,
> > +                                        const char *name,
> > +                                        u32 address_cells, u32 size_cells,
> > +                                        void *data) {
> > +    int i;
> > +    int banks;
> > +    const __be32 *cell;
> > +    paddr_t start, size;
> > +    u32 reg_cells = address_cells + size_cells;
> > +    struct meminfo *mem = data;
> > +    const struct fdt_property *prop;
> > +
> > +    if ( address_cells < 1 || size_cells < 1 )
> > +    {
> > +        printk("fdt: invalid #address-cells or #size-cells for static memory");
> > +        return -EINVAL;
> > +    }
> > +
> > +    /*
> > +     * Check if static memory property belongs to a specific domain, that is,
> > +     * its node `domUx` has compatible string "xen,domain".
> > +     */
> > +    if ( fdt_node_check_compatible(fdt, node, "xen,domain") != 0 )
> > +        printk("xen,static-mem property can only locate under /domUx
> > + node.\n");
> > +
> > +    prop = fdt_get_property(fdt, node, name, NULL);
> > +    if ( !prop )
> > +        return -ENOENT;
> > +
> > +    cell = (const __be32 *)prop->data;
> > +    banks = fdt32_to_cpu(prop->len) / (reg_cells * sizeof (u32));
> > +
> > +    for ( i = 0; i < banks && mem->nr_banks < NR_MEM_BANKS; i++ )
> > +    {
> > +        device_tree_get_reg(&cell, address_cells, size_cells, &start, &size);
> > +        /* Some DT may describe empty bank, ignore them */
> > +        if ( !size )
> > +            continue;
> > +        mem->bank[mem->nr_banks].start = start;
> > +        mem->bank[mem->nr_banks].size = size;
> > +        mem->nr_banks++;
> > +    }
> > +
> > +    if ( i < banks )
> > +        return -ENOSPC;
> > +    return 0;
> > +}
> > +
> >   static int __init early_scan_node(const void *fdt,
> >                                     int node, const char *name, int depth,
> >                                     u32 address_cells, u32 size_cells,
> > @@ -345,6 +394,9 @@ static int __init early_scan_node(const void *fdt,
> >          process_multiboot_node(fdt, node, name, address_cells, size_cells);
> >      else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
> >          process_chosen_node(fdt, node, name, address_cells,
> > size_cells);
> > +    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem",
> NULL) )
> > +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> > +                              size_cells, &bootinfo.static_mem);
>
> I am a bit concerned to add yet another method to parse the DT and all the
> extra code it will add like in patch #2.
>
>  From the host PoV, they are memory reserved for a specific purpose.
> Would it be possible to consider the reserve-memory binding for that
> purpose? This will happen outside of chosen, but we could use a phandle to
> refer the region.
>

Correct me if I understand wrongly, do you mean what this device tree snippet looks like:

reserved-memory {
   #address-cells = <2>;
   #size-cells = <2>;
   ranges;

   static-mem-domU1: static-mem@0x30000000{
      reg = <0x0 0x30000000 0x0 0x20000000>;
   };
};

Chosen {
 ...
domU1 {
   xen,static-mem = <&static-mem-domU1>;
};
...
};

> >
> >       if ( rc < 0 )
> >          printk("fdt: node `%s': parsing failed\n", name); diff --git
> > a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h index
> > 5283244015..5e9f296760 100644
> > --- a/xen/include/asm-arm/setup.h
> > +++ b/xen/include/asm-arm/setup.h
> > @@ -74,6 +74,8 @@ struct bootinfo {
> >   #ifdef CONFIG_ACPI
> >       struct meminfo acpi;
> >   #endif
> > +    /* Static Memory */
> > +    struct meminfo static_mem;
> >   };
> >
> >   extern struct bootinfo bootinfo;
> >
>
> Cheers,
>
> --
> Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 03:17:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 03:17:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129792.243415 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljChb-0005N9-GU; Wed, 19 May 2021 03:16:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129792.243415; Wed, 19 May 2021 03:16:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljChb-0005N2-DO; Wed, 19 May 2021 03:16:55 +0000
Received: by outflank-mailman (input) for mailman id 129792;
 Wed, 19 May 2021 03:16:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljChZ-0005Mt-LT
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 03:16:53 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.64]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1f8d1b0a-06ed-4e27-b295-cb0b56c015b5;
 Wed, 19 May 2021 03:16:50 +0000 (UTC)
Received: from AM0PR10CA0113.EURPRD10.PROD.OUTLOOK.COM (2603:10a6:208:e6::30)
 by DB6PR0802MB2197.eurprd08.prod.outlook.com (2603:10a6:4:85::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 19 May
 2021 03:16:48 +0000
Received: from AM5EUR03FT041.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:208:e6:cafe::45) by AM0PR10CA0113.outlook.office365.com
 (2603:10a6:208:e6::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.25 via Frontend
 Transport; Wed, 19 May 2021 03:16:48 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT041.mail.protection.outlook.com (10.152.17.186) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 03:16:48 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Wed, 19 May 2021 03:16:48 +0000
Received: from 19eb3a260be2.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 BA835598-D0CF-4475-8F97-AE5AD337A706.1; 
 Wed, 19 May 2021 03:16:42 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 19eb3a260be2.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 03:16:42 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB4320.eurprd08.prod.outlook.com (2603:10a6:803:100::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 03:16:35 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 03:16:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f8d1b0a-06ed-4e27-b295-cb0b56c015b5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jZ5dh24l587YaolIFC7gsI2CKlT3cx0O/yOShvalRvk=;
 b=vi1E8EiM+ZK+jhu5sMUqGGU9xpH418gQ/hiKkyB+YqkeoXzKOjMxdpFqjxusDT2CHdM2XcTXydXXANlnZQjyLYKTkvQMH1oBPrARgd2Eh9OwFVsl1qRyxVPi1hlhdgQALkZ+50I7Q+pnfP0u0jpE8yJpzKWwLPzWrcbPZBZXMys=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=JguJ5EIFnuVxp8rz2QhSuLXj0tn4ahKHYWwsXznNe8+gtMAR1vtTzHRd4HG6phwut8z6j9zuscy+fjd8di5fxOffUTZvH+U8vVNnKauitlV8sSqYmyQMjVaw//0MJgkCjkMWtl4Rnz8r0JdBeQ0w+2/1PeNfF8a/Mco+ars3ckaufGgT64nMzresf+pjOAEAAYnH+TBdNM4ASVX/UT81lsjVvmLgC6HDG7YqedjAo6o5j6NLrTSTr94pOhKywT0Q3u+xRnVevzTVZFMOYMjHU0WQysslOiE6tN/UPjLDKwF50QbxEd7uqHUo4CprImQw4rgkaJdYAit8AwuGL+NRKw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=jZ5dh24l587YaolIFC7gsI2CKlT3cx0O/yOShvalRvk=;
 b=CprisYbhQMgNOIQAwujoAIJdennH4GleNmXHy4oLW/BS9wgvWiD54uo811y6fqaplAuiSnwDoY3gAe4Zr2xkkPdn6JluP9Twq8kp03fEeCDomma/KwoJb3ksj6bG0YqMTlwXdkyCjHdIzrZcNFDm2z50yUhjqSrjSIFfaXtK+Knd9cpXq4eci8D1SJfeSqGIolrZhpgLgSnMsQtYJhlz46CH4hZr+Cd/jOaTlcHmg8ZcyAYnXdoRM6zV4H9DhSxJ/Q+xxCqhCVRvD9QQq8OwQe8MliEHaFV0MRhDoe/P13kxjkTNhHPEl1GLBs1B76zxsAzF263IVO5/XwXVJ60DHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
Thread-Topic: [PATCH 03/10] xen/arm: introduce PGC_reserved
Thread-Index: AQHXS6W8Xfb+Jc0d2U+rpPnqyW8lIKro/c8AgAEW9PA=
Date: Wed, 19 May 2021 03:16:32 +0000
Message-ID:
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
In-Reply-To: <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 5:46 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
>
>
> On 18/05/2021 06:21, Penny Zheng wrote:
> > In order to differentiate pages of static memory, from those allocated
> > from heap, this patch introduces a new page flag PGC_reserved to tell.
> >
> > New struct reserved in struct page_info is to describe reserved page
> > info, that is, which specific domain this page is reserved to.
> >
> > Helper page_get_reserved_owner and page_set_reserved_owner are
> > designated to get/set reserved page's owner.
> >
> > Struct domain is enlarged to more than PAGE_SIZE, due to
> > newly-imported struct reserved in struct page_info.
>
> struct domain may embed a pointer to a struct page_info but never directly
> embed the structure. So can you clarify what you mean?
>
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/include/asm-arm/mm.h | 16 +++++++++++++++-
> >   1 file changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> > index 0b7de3102e..d8922fd5db 100644
> > --- a/xen/include/asm-arm/mm.h
> > +++ b/xen/include/asm-arm/mm.h
> > @@ -88,7 +88,15 @@ struct page_info
> >            */
> >           u32 tlbflush_timestamp;
> >       };
> > -    u64 pad;
> > +
> > +    /* Page is reserved. */
> > +    struct {
> > +        /*
> > +         * Reserved Owner of this page,
> > +         * if this page is reserved to a specific domain.
> > +         */
> > +        struct domain *domain;
> > +    } reserved;
>
> The space in page_info is quite tight, so I would like to avoid introducing new
> fields unless we can't get away from it.
>
> In this case, it is not clear why we need to differentiate the "Owner"
> vs the "Reserved Owner". It might be clearer if this change is folded in the
> first user of the field.
>
> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>

Yeah, I may delete this change. I introduced it with the functionality of
rebooting domains on static allocation in mind.

A little more discussion on rebooting domains on static allocation: the major
use cases for domains on static allocation are systems with a totally
pre-defined, static behavior all the time. There is no domain allocation at
runtime, while domain rebooting still exists.

And when rebooting a domain on static allocation, all these reserved pages
cannot go back to the heap when they are freed. So I am considering using one
global `struct page_info *[DOMID]` array to store them.

As Jan suggested, when a domain gets rebooted, its struct domain will not
exist anymore. But I think the DOMID info could last.

> >   };
> >
> >   #define PG_shift(idx)   (BITS_PER_LONG - (idx))
> > @@ -108,6 +116,9 @@ struct page_info
> >   /* Page is Xen heap? */
> >   #define _PGC_xen_heap     PG_shift(2)
> >   #define PGC_xen_heap      PG_mask(1, 2)
> > +  /* Page is reserved, referring static memory */
>
> I would drop the second part of the sentence because the flag could be used
> for other purposes. One example is reserved memory when Live Updating.
>

Sure, I will drop it.

> > +#define _PGC_reserved     PG_shift(3)
> > +#define PGC_reserved      PG_mask(1, 3)
> >   /* ... */
> >   /* Page is broken? */
> >   #define _PGC_broken       PG_shift(7)
> > @@ -161,6 +172,9 @@ extern unsigned long xenheap_base_pdx;
> >   #define page_get_owner(_p)    (_p)->v.inuse.domain
> >   #define page_set_owner(_p,_d) ((_p)->v.inuse.domain = (_d))
> >
> > +#define page_get_reserved_owner(_p)    (_p)->reserved.domain
> > +#define page_set_reserved_owner(_p,_d) ((_p)->reserved.domain = (_d))
> > +
> >   #define maddr_get_owner(ma)   (page_get_owner(maddr_to_page((ma))))
> >
> >   #define frame_table ((struct page_info *)FRAMETABLE_VIRT_START)
> >
>
> Cheers,
>
> --
> Julien Grall


Cheers,

Penny Zheng


From xen-devel-bounces@lists.xenproject.org Wed May 19 03:30:58 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162072-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162072: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 03:30:42 +0000

flight 162072 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162072/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    6 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither of the string parameters in set_prompt() and set_delay() is meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 19 05:03:20 2021
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 04/10] xen/arm: static memory initialization
Date: Wed, 19 May 2021 05:02:39 +0000
Message-ID:
 <VE1PR08MB521574C4D5DCAEA658A8541CF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-5-penny.zheng@arm.com>
 <d492ca1a-b9d6-6250-750c-7f511b183735@xen.org>
In-Reply-To: <d492ca1a-b9d6-6250-750c-7f511b183735@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 0B5365BAD35C22418AEF9B21A61010B8.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 093c47ec-facc-44c4-e8ba-08d91a835e75
x-ms-traffictypediagnostic: VI1PR08MB4511:|AM8PR08MB5826:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM8PR08MB582696FC09C14B896B2B3FA3F72B9@AM8PR08MB5826.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:4303;OLM:4303;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4511
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 05:02:57.5697
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 093c47ec-facc-44c4-e8ba-08d91a835e75
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM8PR08MB5826

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 6:01 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
>
> Hi Penny,
>
> On 18/05/2021 06:21, Penny Zheng wrote:
> > This patch introduces static memory initialization, during system RAM boot
> up.
> >
> > New func init_staticmem_pages is the equivalent of init_heap_pages,
> > responsible for static memory initialization.
> >
> > Helper func free_staticmem_pages is the equivalent of free_heap_pages,
> > to free nr_pfns pages of static memory.
> > For each page, it includes the following steps:
> > 1. change page state from in-use(also initialization state) to free
> > state and grant PGC_reserved.
> > 2. set its owner NULL and make sure this page is not a guest frame any
> > more 3. follow the same cache coherency policy in free_heap_pages 4.
> > scrub the page in need
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/setup.c    |  2 ++
> >   xen/common/page_alloc.c | 70
> +++++++++++++++++++++++++++++++++++++++++++++
> >   xen/include/xen/mm.h    |  3 ++
> >   3 files changed, 75 insertions(+)
> >
> > diff --git a/xen/arch/arm/setup.c b/xen/arch/arm/setup.c index
> > 444dbbd676..f80162c478 100644
> > --- a/xen/arch/arm/setup.c
> > +++ b/xen/arch/arm/setup.c
> > @@ -818,6 +818,8 @@ static void __init setup_mm(void)
> >
> >       setup_frametable_mappings(ram_start, ram_end);
> >       max_page = PFN_DOWN(ram_end);
> > +
> > +    init_staticmem_pages();
> >   }
> >   #endif
> >
> > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > ace6333c18..58b53c6ac2 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -150,6 +150,9 @@
> >   #define p2m_pod_offline_or_broken_hit(pg) 0
> >   #define p2m_pod_offline_or_broken_replace(pg) BUG_ON(pg != NULL)
> >   #endif
> > +#ifdef CONFIG_ARM_64
> > +#include <asm/setup.h>
> > +#endif
> >
> >   /*
> >    * Comma-separated list of hexadecimal page numbers containing bad
> bytes.
> > @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >       spin_unlock(&heap_lock);
> >   }
> >
> > +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> > +memory. */ static void free_staticmem_pages(struct page_info *pg,
> > +unsigned long nr_pfns,
>
> This function is dealing with MFNs, so the second parameter should be called
> nr_mfns.
>

Agreed, thx.

> > +                                 bool need_scrub) {
> > +    mfn_t mfn = page_to_mfn(pg);
> > +    int i;
> > +
> > +    for ( i = 0; i < nr_pfns; i++ )
> > +    {
> > +        switch ( pg[i].count_info & PGC_state )
> > +        {
> > +        case PGC_state_inuse:
> > +            BUG_ON(pg[i].count_info & PGC_broken);
> > +            /* Make it free and reserved. */
> > +            pg[i].count_info = PGC_state_free | PGC_reserved;
> > +            break;
> > +
> > +        default:
> > +            printk(XENLOG_ERR
> > +                   "Page state shall be only in PGC_state_inuse. "
> > +                   "pg[%u] MFN %"PRI_mfn" count_info=%#lx
> tlbflush_timestamp=%#x.\n",
> > +                   i, mfn_x(mfn) + i,
> > +                   pg[i].count_info,
> > +                   pg[i].tlbflush_timestamp);
> > +            BUG();
> > +        }
> > +
> > +        /*
> > +         * Follow the same cache coherence scheme in free_heap_pages.
> > +         * If a page has no owner it will need no safety TLB flush.
> > +         */
> > +        pg[i].u.free.need_tlbflush = (page_get_owner(&pg[i]) != NULL);
> > +        if ( pg[i].u.free.need_tlbflush )
> > +            page_set_tlbflush_timestamp(&pg[i]);
> > +
> > +        /* This page is not a guest frame any more. */
> > +        page_set_owner(&pg[i], NULL);
> > +        set_gpfn_from_mfn(mfn_x(mfn) + i, INVALID_M2P_ENTRY);
>
> The code looks quite similar to free_heap_pages(). Could we possibly create
> an helper which can be called from both?
>

Ok, I will extract the common code here and there.

> > +
> > +        if ( need_scrub )
> > +            scrub_one_page(&pg[i]);
>
> So the scrubbing will be synchronous. Is it what we want?
>
> You also seem to miss the flush the call to flush_page_to_ram().
>

Yeah, right now, it is synchronous.
I guess that it is not an optimal choice, only a workable one right now.
I'm trying to borrow some asynchronous scrubbing ideas from heap in the future.

Yes! If we are doing synchronous scrubbing, we need to flush_page_to_ram(). Thx.

> > +    }
> > +}
> >
> >   /*
> >    * Following rules applied for page offline:
> > @@ -1828,6 +1874,30 @@ static void init_heap_pages(
> >       }
> >   }
> >
> > +/* Equivalent of init_heap_pages to do static memory initialization
> > +*/ void __init init_staticmem_pages(void) {
> > +    int bank;
> > +
> > +    /*
> > +     * TODO: Considering NUMA-support scenario.
> > +     */
> > +    for ( bank = 0 ; bank < bootinfo.static_mem.nr_banks; bank++ )
>
> bootinfo is arm specific, so this code should live in arch/arm rather than
> common/.
>

Yes, I'm considering to create a new CONFIG_STATIC_MEM to include all
static-memory-related functions.

> > +    {
> > +        paddr_t bank_start = bootinfo.static_mem.bank[bank].start;
> > +        paddr_t bank_size = bootinfo.static_mem.bank[bank].size;
> > +        paddr_t bank_end = bank_start + bank_size;
> > +
> > +        bank_start = round_pgup(bank_start);
> > +        bank_end = round_pgdown(bank_end);
> > +        if ( bank_end <= bank_start )
> > +            return;
> > +
> > +        free_staticmem_pages(maddr_to_page(bank_start),
> > +                            (bank_end - bank_start) >> PAGE_SHIFT, false);
> > +    }
> > +}
> > +
> >   static unsigned long avail_heap_pages(
> >       unsigned int zone_lo, unsigned int zone_hi, unsigned int node)
> >   {
> > diff --git a/xen/include/xen/mm.h b/xen/include/xen/mm.h index
> > 667f9dac83..8b1a2207b2 100644
> > --- a/xen/include/xen/mm.h
> > +++ b/xen/include/xen/mm.h
> > @@ -85,6 +85,9 @@ bool scrub_free_pages(void);
> >   } while ( false )
> >   #define FREE_XENHEAP_PAGE(p) FREE_XENHEAP_PAGES(p, 0)
> >
> > +/* Static Memory */
> > +void init_staticmem_pages(void);
> > +
> >   /* Map machine page range in Xen virtual address space. */
> >   int map_pages_to_xen(
> >       unsigned long virt,
> >
>
> Cheers,
>
> --
> Julien Grall

Cheers

Penny


From xen-devel-bounces@lists.xenproject.org Wed May 19 05:03:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 05:03:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129817.243457 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljEN6-0000Ky-2I; Wed, 19 May 2021 05:03:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129817.243457; Wed, 19 May 2021 05:03:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljEN5-0000Kr-VT; Wed, 19 May 2021 05:03:51 +0000
Received: by outflank-mailman (input) for mailman id 129817;
 Wed, 19 May 2021 05:03:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljEN3-0000KS-UH; Wed, 19 May 2021 05:03:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljEN3-0001aR-Os; Wed, 19 May 2021 05:03:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljEN3-0000Kx-EM; Wed, 19 May 2021 05:03:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljEN3-00005g-Dr; Wed, 19 May 2021 05:03:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=y/9ASJu4L/9oFzgFvWlm3W/hPCYjY+Mth2W32NOYjns=; b=RC0umxiIRkp2of55g7Hdxuujla
	V7rh+wW5AKTaUJGixD3u78CL8ipjIj6bQXYP8ZUP8cymMWMRamL8HjcyN57QP60ozCrVfrMf86DW9
	7xTLS5i5IE96XRMefRoJrsTPE5/hjT8BravGwwJttPdey/h2UtRfGLH8/l0JDk5zmQQU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162058-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162058: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
X-Osstest-Versions-That:
    xen=3ac8835a80b27fc4e7116dbde78d3eececc66fc9
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 05:03:49 +0000

flight 162058 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162058/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161990
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161990
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161990
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161990
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161990
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161990
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161990
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161990
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161990
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161990
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161990
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4
baseline version:
 xen                  3ac8835a80b27fc4e7116dbde78d3eececc66fc9

Last test of basis   161990  2021-05-18 05:50:04 Z    0 days
Testing same since   162058  2021-05-18 18:36:47 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Juergen Gross <jgross@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3ac8835a80..caa9c4471d  caa9c4471d1d74b2d236467aaf7e63a806ac11a4 -> master


From xen-devel-bounces@lists.xenproject.org Wed May 19 05:10:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 05:10:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129711.243477 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljETn-0001wM-2K; Wed, 19 May 2021 05:10:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129711.243477; Wed, 19 May 2021 05:10:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljETm-0001wF-VR; Wed, 19 May 2021 05:10:46 +0000
Received: by outflank-mailman (input) for mailman id 129711;
 Tue, 18 May 2021 21:48:44 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5NBq=KN=linux.intel.com=mathias.nyman@srs-us1.protection.inumbo.net>)
 id 1lj7a0-0000y7-1a
 for xen-devel@lists.xenproject.org; Tue, 18 May 2021 21:48:44 +0000
Received: from mga14.intel.com (unknown [192.55.52.115])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12490555-4eec-4776-b6e0-da31b1133c59;
 Tue, 18 May 2021 21:48:43 +0000 (UTC)
Received: from orsmga008.jf.intel.com ([10.7.209.65])
 by fmsmga103.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 18 May 2021 14:48:41 -0700
Received: from mattu-haswell.fi.intel.com (HELO [10.237.72.170])
 ([10.237.72.170])
 by orsmga008.jf.intel.com with ESMTP; 18 May 2021 14:48:38 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12490555-4eec-4776-b6e0-da31b1133c59
IronPort-SDR: dC7HSHSAofpvcYZlqbs+hY35G8EGiMmoX4tzulrCoTB5ximZ0fcOaZfePIaemaAIU+BOv0Y4JK
 TftIAsttdmMA==
X-IronPort-AV: E=McAfee;i="6200,9189,9988"; a="200524580"
X-IronPort-AV: E=Sophos;i="5.82,310,1613462400"; 
   d="scan'208";a="200524580"
IronPort-SDR: pLyD72iVxOHTJiAI6YsrUf6P71lNg593hFHGbEUjWmynh4aV30K2s2UmOEhKCyB+vCY9JQqjve
 8ouXL7nbAycw==
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.82,310,1613462400"; 
   d="scan'208";a="439649931"
Subject: Re: [PATCH v2 1/4] usb: early: Avoid using DbC if already enabled
To: Connor Davis <connojdavis@gmail.com>, Jan Beulich <jbeulich@suse.com>
Cc: Jann Horn <jannh@google.com>, Lee Jones <lee.jones@linaro.org>,
 Chunfeng Yun <chunfeng.yun@mediatek.com>, linux-usb@vger.kernel.org,
 linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org,
 Greg Kroah-Hartman <gregkh@linuxfoundation.org>
References: <cover.1620950220.git.connojdavis@gmail.com>
 <d160cee9b61c0ec41c2cd5ff9b4e107011d39d8c.1620952511.git.connojdavis@gmail.com>
 <8ccce25a-e3ca-cb30-f6a3-f9243a85a49b@suse.com>
 <16400ee4-4406-8b26-10c0-a423b2b1fed0@gmail.com>
 <ddb58cbd-0a72-f680-80f4-ce09b13a2cee@suse.com>
 <55325db1-b086-fc81-9117-6560c4914a12@gmail.com>
From: Mathias Nyman <mathias.nyman@linux.intel.com>
Autocrypt: addr=mathias.nyman@linux.intel.com; prefer-encrypt=mutual; keydata=
 mQINBFMB0ccBEADd+nZnZrFDsIjQtclVz6OsqFOQ6k0nQdveiDNeBuwyFYykkBpaGekoHZ6f
 lH4ogPZzQ+pzoJEMlRGXc881BIggKMCMH86fYJGfZKWdfpg9O6mqSxyEuvBHKe9eZCBKPvoC
 L2iwygtO8TcXXSCynvXSeZrOwqAlwnxWNRm4J2ikDck5S5R+Qie0ZLJIfaId1hELofWfuhy+
 tOK0plFR0HgVVp8O7zWYT2ewNcgAzQrRbzidA3LNRfkL7jrzyAxDapuejuK8TMrFQT/wW53e
 uegnXcRJaibJD84RUJt+mJrn5BvZ0MYfyDSc1yHVO+aZcpNr+71yZBQVgVEI/AuEQ0+p9wpt
 O9Wt4zO2KT/R5lq2lSz1MYMJrtfFRKkqC6PsDSB4lGSgl91XbibK5poxrIouVO2g9Jabg04T
 MIPpVUlPme3mkYHLZUsboemRQp5/pxV4HTFR0xNBCmsidBICHOYAepCzNmfLhfo1EW2Uf+t4
 L8IowAaoURKdgcR2ydUXjhACVEA/Ldtp3ftF4hTQ46Qhba/p4MUFtDAQ5yeA5vQVuspiwsqB
 BoL/298+V119JzM998d70Z1clqTc8fiGMXyVnFv92QKShDKyXpiisQn2rrJVWeXEIVoldh6+
 J8M3vTwzetnvIKpoQdSFJ2qxOdQ8iYRtz36WYl7hhT3/hwkHuQARAQABtCdNYXRoaWFzIE55
 bWFuIDxtYXRoaWFzLm55bWFuQGdtYWlsLmNvbT6JAjsEEwECACUCGwMGCwkIBwMCBhUIAgkK
 CwQWAgMBAh4BAheABQJTAeo1AhkBAAoJEFiDn/uYk8VJOdIP/jhA+RpIZ7rdUHFIYkHEKzHw
 tkwrJczGA5TyLgQaI8YTCTPSvdNHU9Rj19mkjhUO/9MKvwfoT2RFYqhkrtk0K92STDaBNXTL
 JIi4IHBqjXOyJ/dPADU0xiRVtCHWkBgjEgR7Wihr7McSdVpgupsaXhbZjXXgtR/N7PE0Wltz
 hAL2GAnMuIeJyXhIdIMLb+uyoydPCzKdH6znfu6Ox76XfGWBCqLBbvqPXvk4oH03jcdt+8UG
 2nfSeti/To9ANRZIlSKGjddCGMa3xzjtTx9ryf1Xr0MnY5PeyNLexpgHp93sc1BKxKKtYaT0
 lR6p0QEKeaZ70623oB7Sa2Ts4IytqUVxkQKRkJVWeQiPJ/dZYTK5uo15GaVwufuF8VTwnMkC
 4l5X+NUYNAH1U1bpRtlT40aoLEUhWKAyVdowxW4yGCP3nL5E69tZQQgsag+OnxBa6f88j63u
 wxmOJGNXcwCerkCb+wUPwJzChSifFYmuV5l89LKHgSbv0WHSN9OLkuhJO+I9fsCNvro1Y7dT
 U/yq4aSVzjaqPT3yrnQkzVDxrYT54FLWO1ssFKAOlcfeWzqrT9QNcHIzHMQYf5c03Kyq3yMI
 Xi91hkw2uc/GuA2CZ8dUD3BZhUT1dm0igE9NViE1M7F5lHQONEr7MOCg1hcrkngY62V6vh0f
 RcDeV0ISwlZWuQINBFMB0ccBEACXKmWvojkaG+kh/yipMmqZTrCozsLeGitxJzo5hq9ev31N
 2XpPGx4AGhpccbco63SygpVN2bOd0W62fJJoxGohtf/g0uVtRSuK43OTstoBPqyY/35+VnAV
 oA5cnfvtdx5kQPIL6LRcxmYKgN4/3+A7ejIxbOrjWFmbWCC+SgX6mzHHBrV0OMki8R+NnrNa
 NkUmMmosi7jBSKdoi9VqDqgQTJF/GftvmaZHqgmVJDWNrCv7UiorhesfIWPt1O/AIk9luxlE
 dHwkx5zkWa9CGYvV6LfP9BznendEoO3qYZ9IcUlW727Le80Q1oh69QnHoI8pODDBBTJvEq1h
 bOWcPm/DsNmDD8Rwr/msRmRyIoxjasFi5WkM/K/pzujICKeUcNGNsDsEDJC5TCmRO/TlvCvm
 0X+vdfEJRZV6Z+QFBflK1asUz9QHFre5csG8MyVZkwTR9yUiKi3KiqQdaEu+LuDD2CGF5t68
 xEl66Y6mwfyiISkkm3ETA4E8rVZP1rZQBBm83c5kJEDvs0A4zrhKIPTcI1smK+TWbyVyrZ/a
 mGYDrZzpF2N8DfuNSqOQkLHIOL3vuOyx3HPzS05lY3p+IIVmnPOEdZhMsNDIGmVorFyRWa4K
 uYjBP/W3E5p9e6TvDSDzqhLoY1RHfAIadM3I8kEx5wqco67VIgbIHHB9DbRcxQARAQABiQIf
 BBgBAgAJBQJTAdHHAhsMAAoJEFiDn/uYk8VJb7AQAK56tgX8V1Wa6RmZDmZ8dmBC7W8nsMRz
 PcKWiDSMIvTJT5bygMy1lf7gbHXm7fqezRtSfXAXr/OJqSA8LB2LWfThLyuuCvrdNsQNrI+3
 D+hjHJjhW/4185y3EdmwwHcelixPg0X9EF+lHCltV/w29Pv3PiGDkoKxJrnOpnU6jrwiBebz
 eAYBfpSEvrCm4CR4hf+T6MdCs64UzZnNt0nxL8mLCCAGmq1iks9M4bZk+LG36QjCKGh8PDXz
 9OsnJmCggptClgjTa7pO6040OW76pcVrP2rZrkjo/Ld/gvSc7yMO/m9sIYxLIsR2NDxMNpmE
 q/H7WO+2bRG0vMmsndxpEYS4WnuhKutoTA/goBEhtHu1fg5KC+WYXp9wZyTfeNPrL0L8F3N1
 BCEYefp2JSZ/a355X6r2ROGSRgIIeYjAiSMgGAZMPEVsdvKsYw6BH17hDRzltNyIj5S0dIhb
 Gjynb3sXforM/GVbr4mnuxTdLXQYlj2EJ4O4f0tkLlADT7podzKSlSuZsLi2D+ohKxtP3U/r
 42i8PBnX2oAV0UIkYk7Oel/3hr0+BP666SnTls9RJuoXc7R5XQVsomqXID6GmjwFQR5Wh/RE
 IJtkiDAsk37cfZ9d1kZ2gCQryTV9lmflSOB6AFZkOLuEVSC5qW8M/s6IGDfYXN12YJaZPptJ fiD/
Message-ID: <0dac96cc-dce1-e908-400e-9c8e68c41a54@linux.intel.com>
Date: Wed, 19 May 2021 00:50:46 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <55325db1-b086-fc81-9117-6560c4914a12@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 17.5.2021 17.24, Connor Davis wrote:
> 
> On 5/17/21 8:13 AM, Jan Beulich wrote:
>> On 17.05.2021 15:48, Connor Davis wrote:
>>> On 5/17/21 3:32 AM, Jan Beulich wrote:
>>>> On 14.05.2021 02:56, Connor Davis wrote:
>>>>> Check if the debug capability is enabled in early_xdbc_parse_parameter,
>>>>> and if it is, return with an error. This avoids collisions with whatever
>>>>> enabled the DbC prior to Linux starting.
>>>> Doesn't this go too far and prevent use even if firmware (perhaps
>>>> mistakenly) left it enabled?
>>> Yes, but how is one supposed to distinguish the broken-firmware and
>>> non-broken-firmware cases?
>> Well, a first step might be to only check if running virtualized.
>> And then if you're running virtualized, there might be a way to
>> inquire the hypervisor?
> 
> Right, but if it was enabled by something other than a hypervisor,
> or you're not running virtualized, how do you distinguish then? IMO
> the proper thing to do in any case is to simply not use the DbC in Linux.
> 

Sounds reasonable.

We can address "broken firmware" during the xHC handoff from BIOS to OS.
Only the first OS that loads after the BIOS should see the
"HC BIOS Owned Semaphore" bit set in the xHCI MMIO space.

If it's set, the OS requests ownership, which clears the BIOS-owned bit.
If the DbC is running at that point, we can stop it.

-Mathias


From xen-devel-bounces@lists.xenproject.org Wed May 19 05:23:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 05:23:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129837.243496 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljEfh-0003X7-95; Wed, 19 May 2021 05:23:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129837.243496; Wed, 19 May 2021 05:23:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljEfh-0003X0-62; Wed, 19 May 2021 05:23:05 +0000
Received: by outflank-mailman (input) for mailman id 129837;
 Wed, 19 May 2021 05:23:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljEff-0003Wq-FO; Wed, 19 May 2021 05:23:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljEff-0001ud-7S; Wed, 19 May 2021 05:23:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljEfe-0001Nx-Uz; Wed, 19 May 2021 05:23:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljEfe-0001FZ-UV; Wed, 19 May 2021 05:23:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=87ydD1HB9gCVmHN3a5GzoC2CEbmYhOl3lkzWRrXAyjc=; b=HAhRlpzyd8bQb27S0llqs2yk9Y
	nfuXdcrWcJ+6++8RTZNzL43elj5LGyz40F5XysL26kGeuN/XrlOFdFuUzF7Sq90a0pa1nYN209FZh
	8svA2/x5tF4n7l7WaEUP9eeXh99lXiIcXFOKX4e4fja6OjbgAYhkicGR4P/83Ia9bju8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162075-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162075: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 05:23:02 +0000

flight 162075 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162075/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    7 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
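
    A minimal illustration of the point these commits make; the struct here
    is a reduced stand-in for stat_map_t, not the actual xenbaked definition.

    ```c
    #include <stdio.h>

    /* Reduced stand-in for stat_map_t: text points at string literals,
     * so it is declared const char *. */
    typedef struct {
        int event_id;
        const char *text;
    } stat_map_t;

    static const stat_map_t map[] = {
        { 1, "runnable" },
        { 2, "blocked"  },
    };

    int main(void)
    {
        /* map[0].text[0] = 'R';  <-- with const, the compiler rejects
         * this; without const it compiles, but writing to a string
         * literal is undefined behaviour (typically a crash, since the
         * literal lives in a read-only section). */
        printf("%s\n", map[0].text);  /* prints "runnable" */
        return 0;
    }
    ```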

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() and set_delay() is meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 19 05:23:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 05:23:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129840.243510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljEgR-00047P-JA; Wed, 19 May 2021 05:23:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129840.243510; Wed, 19 May 2021 05:23:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljEgR-00047I-Fg; Wed, 19 May 2021 05:23:51 +0000
Received: by outflank-mailman (input) for mailman id 129840;
 Wed, 19 May 2021 05:23:49 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljEgP-00047A-OX
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 05:23:49 +0000
Received: from EUR01-HE1-obe.outbound.protection.outlook.com (unknown
 [40.107.13.53]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2212bd21-7e88-47b8-8d3a-0812dd6cab43;
 Wed, 19 May 2021 05:23:48 +0000 (UTC)
Received: from AS8PR05CA0026.eurprd05.prod.outlook.com (2603:10a6:20b:311::31)
 by DB6PR0802MB2328.eurprd08.prod.outlook.com (2603:10a6:4:87::15)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 05:23:46 +0000
Received: from AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:311:cafe::13) by AS8PR05CA0026.outlook.office365.com
 (2603:10a6:20b:311::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.33 via Frontend
 Transport; Wed, 19 May 2021 05:23:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT012.mail.protection.outlook.com (10.152.16.161) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 05:23:45 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Wed, 19 May 2021 05:23:45 +0000
Received: from 0bf437523488.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 19F5032D-B6DF-4786-AAE2-E3DE496EC631.1; 
 Wed, 19 May 2021 05:23:40 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 0bf437523488.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 05:23:40 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VE1PR08MB5616.eurprd08.prod.outlook.com (2603:10a6:800:1a1::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Wed, 19 May
 2021 05:23:38 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 05:23:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2212bd21-7e88-47b8-8d3a-0812dd6cab43
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UWp3zk2rSfL5Q8py+LPgARHCiYwC4mREdAmjMgK18jQ=;
 b=V2C1S3lanoGFXLwLAMPrjtiSNXXufc/n7iZPIBio3+KVej7ehdD847DwgxsF/yxi5oJ5AVHkbK/oa2AhWZV/NV30/Te7MeOw7rBQcSyEgJuoUQ/2Sm4WaWd2Aq861+BYLMLlGSO+hcIZ0tVWb/L34+uFDJboVx0ifcp6W+5/SyE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eR7vrKuEUWKKemD7Idu50O8TwbDQ+8uCsVb7kKW776znWYL0FlCz0xs07mitp/71hsZHnazjQIfNol6fveBIgPpxG6qFlPjgNag8ndPuAU9baiJn9mhND3robstib6GCQdV3DEye/v1dPIy6HMlQUNBzd7kKb3eP1Rha49QHE3a4gqF8V1tZDX88aFuzkaT45X50B9XbGuFfL8wRWrPKvVeFXrRW4zqgZgnzRYgM3HcutBfcptfnmSrGvCaYmc1fVP0RRoTlpnuL4/Y7VBIjWsX1wh+0SwNjhYuO7/Xbt832da3dzV7f7Jzm0vEu6mBlUEnNcpy0KbQzieoN56Soqw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UWp3zk2rSfL5Q8py+LPgARHCiYwC4mREdAmjMgK18jQ=;
 b=fyjJLOX9YVgICEzfwJLs1bNZvsVy9nu/a8BWUaOwSUY+Z8Oot+99GrxnyX+dGyiidpV+021/Nn0fovM2DfOPtcypDeguE7MpGP4PO4oMVtGqNNCH9dTBSHyv80Q5y6sygMNWUK6hytqEvF3bGT3ZPk0+BfIWkKfeCXSEpc7cUDBhfsiNz4rET8c8b9ajTactcEtXhiCunflWlADCFKDRg7FUm7evSzhG/laRca99W+7CGvtsS+D0UleEaGQofSr9IZmxNRG2VzE9zcVlsVg+sb9Z9NZ/GzDJepX3e18n1/KGX5NW9abzKJiysTPfzg0DQa/O4wf7yLdhetj5WYahiw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=UWp3zk2rSfL5Q8py+LPgARHCiYwC4mREdAmjMgK18jQ=;
 b=V2C1S3lanoGFXLwLAMPrjtiSNXXufc/n7iZPIBio3+KVej7ehdD847DwgxsF/yxi5oJ5AVHkbK/oa2AhWZV/NV30/Te7MeOw7rBQcSyEgJuoUQ/2Sm4WaWd2Aq861+BYLMLlGSO+hcIZ0tVWb/L34+uFDJboVx0ifcp6W+5/SyE=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
Thread-Topic: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
Thread-Index: AQHXS6W1I2rvO8FRzkm++uFejE9ZNKrpBh+AgAE7kqA=
Date: Wed, 19 May 2021 05:23:38 +0000
Message-ID:
 <VE1PR08MB52156570D7795C3595674BE5F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-6-penny.zheng@arm.com>
 <e8e4148e-017b-955b-dd18-4576ce7c94ec@xen.org>
In-Reply-To: <e8e4148e-017b-955b-dd18-4576ce7c94ec@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 5838AA240C5F0D40996A904F5970C569.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: f545e556-75f1-407c-2469-08d91a864678
x-ms-traffictypediagnostic: VE1PR08MB5616:|DB6PR0802MB2328:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB6PR0802MB232842DD13D9070F3FC03A5DF72B9@DB6PR0802MB2328.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:1751;OLM:1751;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 X4rbQkVEz8DQzb3emKi8evMtl2Can6ErsuahFwV5NV1uXnFMLOBXSLbs83cojhg+DPxnXqFwq8D9hOL0h71VSJT2EdYvEi24BZ6HFoAOlwvzmSZ9luTutm7QwXW5yYffbjYX5c7gZx1qgRE7ZjQo6kJP5yb3+FXU3+eW0zHKfV2aBx6X6ElhzFBXgQ+VEa3eivn/nLvHLfvU0AyvNO5rRBlXXWY6WmvrpsLqrntohXcq0tP7LrpiSt4oFF1O6pXhpyYomq5o6ZjGBw4GPpJiFQWNfbvhnxEHUaX7qEgnpDrE20VQWTRjUPijbXWJvNINFypTrVA4uOTv89EB+iO8SSdZKCJneVqVUNr+TeTucmrZJoYXe9IP0ONjH2UKzmaeVCBHvhmm9DkXOGzbGSfhRqwUC0/oO0c6FRyO6ok14pAZ3EzR9vPqsbT58BuZJl8LYRmfGsM1gaepBBU7UProlG+dQP7t67u+ATk0BVLHcoBgQ5Dc9UVMI5WBBPZ5NMh380+bw+dlnlMEIf5/vCJnnQk8yLxjI1b8Z/7S4elQwIjAZ+Iiiq57WBmZ6NJgYRTvixwTIrpVjs7ZaerbfjrXnXixc+S+sVU3MmfOk1UPh/E=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5215.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(39860400002)(376002)(346002)(136003)(396003)(83380400001)(8676002)(122000001)(6506007)(53546011)(478600001)(33656002)(52536014)(9686003)(5660300002)(38100700002)(7696005)(86362001)(26005)(71200400001)(66476007)(2906002)(55016002)(76116006)(64756008)(66946007)(66446008)(66556008)(316002)(8936002)(110136005)(54906003)(4326008)(186003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?ZGFZbFJVT3EvL3EzUDZBRE95WFBKcFZFY050ZVlrbFBhK3kwR2RCbHFvNXZF?=
 =?utf-8?B?OXRYeDQyRVo1aU5WcGRNL0szVGlkRFVrajYyaUN0YVlvY2FjWnBLSFdORnRJ?=
 =?utf-8?B?SGIwZCtxMW04Z1hMTUVUTTBOazVTbHBlVFVOTVNwNVlTSWlCeFFKbjV4Z2pj?=
 =?utf-8?B?K0NNUStmZzIyY21tWUowYmE0YkIveHQ5ZGFpOHQ3V00xbHlXb2M0VWdqVjBP?=
 =?utf-8?B?RldtM3QwL3I4TmZVSWR3aC8vdG80VTlmcDFGb2NIbjZkRkNNRlF4OHVpWE5L?=
 =?utf-8?B?bU5DMTUxK1BNMUR4NUpGcDRXc0FJZjhzTk5xelB2cUlnMTVrSllVaGNTQTZL?=
 =?utf-8?B?SUhvZ1FxMlpjQmRFUVRQdzBEVVpocHRQeTY2cVRSaEVCLzVIbzQwdFdzNG5r?=
 =?utf-8?B?ZmdYOGU1VUZYSW9HRC9mcFRuT2RjYXVodG94M1BVQ3N6M3M4Z1BIcjBaampx?=
 =?utf-8?B?R2tESjFsNEFaVGpsbkhGKzN5bnNmQmIxZVU4RDdla1psTEhzc0x4NlI3L0Ru?=
 =?utf-8?B?R0psMllpMC9xUmRnWkRWdHNxUEJWbytqMW54TUQ5bkpnZEVaV1NURlBmL0dh?=
 =?utf-8?B?aWIydVI4NWxmalprUnYwYXlHK2dTZzU2cXdqKzdIQUpBcVZLU2hxVlhmbVlW?=
 =?utf-8?B?ZnlMUE1GTWtJMkpnRUhjYkdLRHY4Y1Nna1hpSEw2dnRRUk1yWmF0d082ZUYv?=
 =?utf-8?B?YzZaSVh0M2hpVmZTOElocHNTMmJSRzdhVjZTY1BBUDJFenlyV2xreXFIdlJB?=
 =?utf-8?B?NHJ1aUdqQkQyT045V0dIdG5IUE9zMUxab2Z3WG0xU2ZwY2F3RG1IK1FvT25y?=
 =?utf-8?B?SmhlQkoxbVhramlydHNveTFsT0x2KytETEhYdTFBV1V1bk8yWHlKL0l3Szh6?=
 =?utf-8?B?MmkxY1RhaWNURGJRdDBTcGFPN1ZzWDY0VVcrSFYzTWw4SDhiakZjd1RVV0ov?=
 =?utf-8?B?MTdKaExlQjNGTEtrbmpOTjgrT2h4OHpQYTY0R1grdkhnVWdmeXNVNnVURnhJ?=
 =?utf-8?B?dVJLUVkyWnRUMmg0bHhJVG9hQW9jZGpxWFNkUGt3WVFqMjlBVkxLeU5lWGFs?=
 =?utf-8?B?b1YvQW4za3J3dmlkWmNjWldMUzM1cndicktWRkNYcHVLcSt4OXBmc0NJK0V5?=
 =?utf-8?B?WGpJN0dtNEdJY05GN3ZFalFiNEFIaFI3MzZnUjlTdHcyNHVHbS9VS1RmSjg0?=
 =?utf-8?B?bVJ4b21VZ3FPUGU0WkpxVHJ3Y3RkajErTlR3NzNwdklaZHdaYTY4S3BodzJP?=
 =?utf-8?B?WkFITnVMMWhwU1U4clhnZi9lUTR0SDBEeHJxcm8yVmRtc3FvTUJtbGdmb0xQ?=
 =?utf-8?B?Z202aE9iV2t3ZS8zdmhiMmxBREZLVVBZMnlFMkJmMDNMNGRJNmJ5NEllU2po?=
 =?utf-8?B?Y3JrbXcvbGJZd2VKMndZcUxlcE1pTjBCcEE1a0JkaW43QnRMRjRPV2NwVFQv?=
 =?utf-8?B?VURQcVUwL3lFalNLNTVIbjhVUGVDamJ6T3lVTW9qUHFPU3dHZGtodjFWaE9O?=
 =?utf-8?B?MW8vZVZ0eFFya0d0UVdjdC9qLzF3YllVclVtV1AzeEVObmFqcnFWTEhGNTRq?=
 =?utf-8?B?WC9QOXZ0ejdGZnUwZUtxZVNDVWxscmMvNmY4UWtFOVRmMkhNNmtKVFhlb0Iz?=
 =?utf-8?B?RVQ3OGRmODZOSVMrckZOcEFLb05EaXVIcERhc2xhMzFwOXl2S1ZOUHlKTTI5?=
 =?utf-8?B?b1RSR1lTUUhrYjB1a3Z1cVBZWU1WMW85NUZKTGZHR1phZ3lrcDcxVGZJaUgy?=
 =?utf-8?Q?qxEnYhQj0Win9KHyApdIjFfd7ZqPV0wsKmQYfhn?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB5616
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	6cabd9c4-cba5-4423-dc70-08d91a864203
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	CVS9blvmmhuC2WUMXRJPTTtCHX/E6gexExUiauzVcP1O1U/sVWN/Q/IZl4rnp4k5gf5rTLSJz8wnSgsrRcmlIscIR29iYlU5GZYb54HO3rtledF+O1QhgiJ2rykF92LqcRaSV9WJCcalT9Y4BQdqZCnfWlbYvAfMqJQ1vRdZ3VCudn6d31/LvyqjXTWFsbqwauBmcikyCgEGtnK9IDs0R6HC2+pq3bpA/0LM1Vsxu1NhE+R8ow4i50R1wCWeBB2W8PjOPkvwlBGbn4tZqA29hzb3Jet7N8cmaUDOCBZtAJblP3Ha+dRvdnQek19N3wQ8aaaKkDm29g5LDWT1poyVVDzjqCTRnNSTZXXbexBGRapUkIcleiOXFnoB6uz9bp6kFHGkmifRlmy7Ms6pn0Nphv6svDr5llaDFmmpc3hozozyRqvipks7t65P/atRKt3D33+gZu93fytAq2IqIEMQrslaCpEwdw7XGKASKa5Agru8u2Xpet1mNv+4VKvocqE+WJccqT6NKFd8haap16pE1Vjryo5J/70uYtIoBUYP1rY+6pZTnA5YHJ0+PFRMQziR2sefYQ6SoXBsAOo0hbvXnFvPdjjSl7iC2Yz9InGVPSF4WPQlN7yPhULd21do/AAqyKN1QeJpBdRQoZGSGuS5ubPEu7zNz9/d8UtX/WxpZoc=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(39860400002)(376002)(396003)(346002)(136003)(36840700001)(46966006)(26005)(81166007)(110136005)(36860700001)(336012)(82740400003)(83380400001)(9686003)(6506007)(316002)(53546011)(4326008)(70586007)(86362001)(54906003)(7696005)(55016002)(70206006)(478600001)(33656002)(52536014)(82310400003)(186003)(8936002)(5660300002)(356005)(2906002)(47076005)(8676002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 05:23:45.9044
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: f545e556-75f1-407c-2469-08d91a864678
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT012.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB6PR0802MB2328

SGkgSnVsaWVuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4NCj4gU2VudDogVHVlc2RheSwgTWF5IDE4LCAyMDIxIDY6
MTUgUE0NCj4gVG86IFBlbm55IFpoZW5nIDxQZW5ueS5aaGVuZ0Bhcm0uY29tPjsgeGVuLWRldmVs
QGxpc3RzLnhlbnByb2plY3Qub3JnOw0KPiBzc3RhYmVsbGluaUBrZXJuZWwub3JnDQo+IENjOiBC
ZXJ0cmFuZCBNYXJxdWlzIDxCZXJ0cmFuZC5NYXJxdWlzQGFybS5jb20+OyBXZWkgQ2hlbg0KPiA8
V2VpLkNoZW5AYXJtLmNvbT47IG5kIDxuZEBhcm0uY29tPg0KPiBTdWJqZWN0OiBSZTogW1BBVENI
IDA1LzEwXSB4ZW4vYXJtOiBpbnRyb2R1Y2UgYWxsb2Nfc3RhdGljbWVtX3BhZ2VzDQo+IA0KPiBI
aSBQZW5ueSwNCj4gDQo+IE9uIDE4LzA1LzIwMjEgMDY6MjEsIFBlbm55IFpoZW5nIHdyb3RlOg0K
PiA+IGFsbG9jX3N0YXRpY21lbV9wYWdlcyBpcyBkZXNpZ25hdGVkIHRvIGFsbG9jYXRlIG5yX3Bm
bnMgY29udGlndW91cw0KPiA+IHBhZ2VzIG9mIHN0YXRpYyBtZW1vcnkuIEFuZCBpdCBpcyB0aGUg
ZXF1aXZhbGVudCBvZiBhbGxvY19oZWFwX3BhZ2VzDQo+ID4gZm9yIHN0YXRpYyBtZW1vcnkuDQo+
ID4gVGhpcyBjb21taXQgb25seSBjb3ZlcnMgYWxsb2NhdGluZyBhdCBzcGVjaWZpZWQgc3RhcnRp
bmcgYWRkcmVzcy4NCj4gPg0KPiA+IEZvciBlYWNoIHBhZ2UsIGl0IHNoYWxsIGNoZWNrIGlmIHRo
ZSBwYWdlIGlzIHJlc2VydmVkDQo+ID4gKFBHQ19yZXNlcnZlZCkgYW5kIGZyZWUuIEl0IHNoYWxs
IGFsc28gZG8gYSBzZXQgb2YgbmVjZXNzYXJ5DQo+ID4gaW5pdGlhbGl6YXRpb24sIHdoaWNoIGFy
ZSBtb3N0bHkgdGhlIHNhbWUgb25lcyBpbiBhbGxvY19oZWFwX3BhZ2VzLA0KPiA+IGxpa2UsIGZv
bGxvd2luZyB0aGUgc2FtZSBjYWNoZS1jb2hlcmVuY3kgcG9saWN5IGFuZCB0dXJuaW5nIHBhZ2UN
Cj4gPiBzdGF0dXMgaW50byBQR0Nfc3RhdGVfdXNlZCwgZXRjLg0KPiA+DQo+ID4gU2lnbmVkLW9m
Zi1ieTogUGVubnkgWmhlbmcgPHBlbm55LnpoZW5nQGFybS5jb20+DQo+ID4gLS0tDQo+ID4gICB4
ZW4vY29tbW9uL3BhZ2VfYWxsb2MuYyB8IDY0DQo+ICsrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysrDQo+ID4gICAxIGZpbGUgY2hhbmdlZCwgNjQgaW5zZXJ0aW9ucygrKQ0K
PiA+DQo+ID4gZGlmZiAtLWdpdCBhL3hlbi9jb21tb24vcGFnZV9hbGxvYy5jIGIveGVuL2NvbW1v
bi9wYWdlX2FsbG9jLmMgaW5kZXgNCj4gPiA1OGI1M2M2YWMyLi5hZGYyODg5ZTc2IDEwMDY0NA0K
PiA+IC0tLSBhL3hlbi9jb21tb24vcGFnZV9hbGxvYy5jDQo+ID4gKysrIGIveGVuL2NvbW1vbi9w
YWdlX2FsbG9jLmMNCj4gPiBAQCAtMTA2OCw2ICsxMDY4LDcwIEBAIHN0YXRpYyBzdHJ1Y3QgcGFn
ZV9pbmZvICphbGxvY19oZWFwX3BhZ2VzKA0KPiA+ICAgICAgIHJldHVybiBwZzsNCj4gPiAgIH0N
Cj4gPg0KPiA+ICsvKg0KPiA+ICsgKiBBbGxvY2F0ZSBucl9wZm5zIGNvbnRpZ3VvdXMgcGFnZXMs
IHN0YXJ0aW5nIGF0ICNzdGFydCwgb2Ygc3RhdGljIG1lbW9yeS4NCj4gPiArICogSXQgaXMgdGhl
IGVxdWl2YWxlbnQgb2YgYWxsb2NfaGVhcF9wYWdlcyBmb3Igc3RhdGljIG1lbW9yeSAgKi8NCj4g
PiArc3RhdGljIHN0cnVjdCBwYWdlX2luZm8gKmFsbG9jX3N0YXRpY21lbV9wYWdlcyh1bnNpZ25l
ZCBsb25nIG5yX3BmbnMsDQo+IA0KPiBUaGlzIHdhbnRzIHRvIGJlIG5yX21mbnMuDQo+IA0KPiA+
ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICBwYWRkcl90
IHN0YXJ0LA0KPiANCj4gSSB3b3VsZCBwcmVmZXIgaWYgdGhpcyBoZWxwZXIgdGFrZXMgYW4gbWZu
X3QgaW4gcGFyYW1ldGVyLg0KPiANCg0KU3VyZSwgSSB3aWxsIGNoYW5nZSBib3RoLg0KDQo+ID4g
KyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHVuc2lnbmVk
IGludA0KPiA+ICttZW1mbGFncykgew0KPiA+ICsgICAgYm9vbCBuZWVkX3RsYmZsdXNoID0gZmFs
c2U7DQo+ID4gKyAgICB1aW50MzJfdCB0bGJmbHVzaF90aW1lc3RhbXAgPSAwOw0KPiA+ICsgICAg
dW5zaWduZWQgaW50IGk7DQo+ID4gKyAgICBzdHJ1Y3QgcGFnZV9pbmZvICpwZzsNCj4gPiArICAg
IG1mbl90IHNfbWZuOw0KPiA+ICsNCj4gPiArICAgIC8qIEZvciBub3csIGl0IG9ubHkgc3VwcG9y
dHMgYWxsb2NhdGluZyBhdCBzcGVjaWZpZWQgYWRkcmVzcy4gKi8NCj4gPiArICAgIHNfbWZuID0g
bWFkZHJfdG9fbWZuKHN0YXJ0KTsNCj4gPiArICAgIHBnID0gbWZuX3RvX3BhZ2Uoc19tZm4pOw0K
PiANCj4gV2Ugc2hvdWxkIGF2b2lkIHRvIG1ha2UgdGhlIGFzc3VtcHRpb24gdGhlIHN0YXJ0IGFk
ZHJlc3Mgd2lsbCBiZSB2YWxpZC4NCj4gU28geW91IHdhbnQgdG8gY2FsbCBtZm5fdmFsaWQoKSBm
aXJzdC4NCj4gDQo+IEF0IHRoZSBzYW1lIHRpbWUsIHRoZXJlIGlzIG5vIGd1YXJhbnRlZSB0aGF0
IGlmIHRoZSBmaXJzdCBwYWdlIGlzIHZhbGlkLCB0aGVuIHRoZQ0KPiBuZXh0IG5yX3BmbnMgd2ls
bCBiZS4gU28gdGhlIGNoZWNrIHNob3VsZCBiZSBwZXJmb3JtZWQgZm9yIGFsbCBvZiB0aGVtLg0K
PiANCg0KT2suIEknbGwgZG8gdmFsaWRhdGlvbiBjaGVjayBvbiBib3RoIG9mIHRoZW0uDQoNCj4g
PiArICAgIGlmICggIXBnICkNCj4gPiArICAgICAgICByZXR1cm4gTlVMTDsNCj4gPiArDQo+ID4g
KyAgICBmb3IgKCBpID0gMDsgaSA8IG5yX3BmbnM7IGkrKykNCj4gPiArICAgIHsNCj4gPiArICAg
ICAgICAvKg0KPiA+ICsgICAgICAgICAqIFJlZmVyZW5jZSBjb3VudCBtdXN0IGNvbnRpbnVvdXNs
eSBiZSB6ZXJvIGZvciBmcmVlIHBhZ2VzDQo+ID4gKyAgICAgICAgICogb2Ygc3RhdGljIG1lbW9y
eShQR0NfcmVzZXJ2ZWQpLg0KPiA+ICsgICAgICAgICAqLw0KPiA+ICsgICAgICAgIEFTU0VSVChw
Z1tpXS5jb3VudF9pbmZvICYgUEdDX3Jlc2VydmVkKTsNCj4gPiArICAgICAgICBpZiAoIChwZ1tp
XS5jb3VudF9pbmZvICYgflBHQ19yZXNlcnZlZCkgIT0gUEdDX3N0YXRlX2ZyZWUgKQ0KPiA+ICsg
ICAgICAgIHsNCj4gPiArICAgICAgICAgICAgcHJpbnRrKFhFTkxPR19FUlINCj4gPiArICAgICAg
ICAgICAgICAgICAgICAiUmVmZXJlbmNlIGNvdW50IG11c3QgY29udGludW91c2x5IGJlIHplcm8g
Zm9yIGZyZWUgcGFnZXMiDQo+ID4gKyAgICAgICAgICAgICAgICAgICAgInBnWyV1XSBNRk4gJSJQ
UklfbWZuIiBjPSUjbHggdD0lI3hcbiIsDQo+ID4gKyAgICAgICAgICAgICAgICAgICAgaSwgbWZu
X3gocGFnZV90b19tZm4ocGcgKyBpKSksDQo+ID4gKyAgICAgICAgICAgICAgICAgICAgcGdbaV0u
Y291bnRfaW5mbywgcGdbaV0udGxiZmx1c2hfdGltZXN0YW1wKTsNCj4gPiArICAgICAgICAgICAg
QlVHKCk7DQo+IA0KPiBTbyB3ZSB3b3VsZCBjcmFzaCBYZW4gaWYgdGhlIGNhbGxlciBwYXNzIGEg
d3JvbmcgcmFuZ2UuIElzIGl0IHdoYXQgd2Ugd2FudD8NCj4gDQo+IEFsc28sIHdobyBpcyBnb2lu
ZyB0byBwcmV2ZW50IGNvbmN1cnJlbnQgYWNjZXNzPw0KPiANCg0KU3VyZSwgdG8gZml4IGNvbmN1
cnJlbmN5IGlzc3VlLCBJIG1heSBuZWVkIHRvIGFkZCBvbmUgc3BpbmxvY2sgbGlrZQ0KYHN0YXRp
YyBERUZJTkVfU1BJTkxPQ0soc3RhdGljbWVtX2xvY2spO2ANCg0KSW4gY3VycmVudCBhbGxvY19o
ZWFwX3BhZ2VzLCBpdCB3aWxsIGRvIHNpbWlsYXIgY2hlY2ssIHRoYXQgcGFnZXMgaW4gZnJlZSBz
dGF0ZSBNVVNUIGhhdmUNCnplcm8gcmVmZXJlbmNlIGNvdW50LiBJIGd1ZXNzLCBpZiBjb25kaXRp
b24gbm90IG1ldCwgdGhlcmUgaXMgbm8gbmVlZCB0byBwcm9jZWVkLg0KDQo+ID4gKyAgICAgICAg
fQ0KPiA+ICsNCj4gPiArICAgICAgICBpZiAoICEobWVtZmxhZ3MgJiBNRU1GX25vX3RsYmZsdXNo
KSApDQo+ID4gKyAgICAgICAgICAgIGFjY3VtdWxhdGVfdGxiZmx1c2goJm5lZWRfdGxiZmx1c2gs
ICZwZ1tpXSwNCj4gPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAmdGxiZmx1c2hf
dGltZXN0YW1wKTsNCj4gPiArDQo+ID4gKyAgICAgICAgLyoNCj4gPiArICAgICAgICAgKiBSZXNl
cnZlIGZsYWcgUEdDX3Jlc2VydmVkIGFuZCBjaGFuZ2UgcGFnZSBzdGF0ZQ0KPiA+ICsgICAgICAg
ICAqIHRvIFBHQ19zdGF0ZV9pbnVzZS4NCj4gPiArICAgICAgICAgKi8NCj4gPiArICAgICAgICBw
Z1tpXS5jb3VudF9pbmZvID0gKHBnW2ldLmNvdW50X2luZm8gJiBQR0NfcmVzZXJ2ZWQpIHwNCj4g
UEdDX3N0YXRlX2ludXNlOw0KPiA+ICsgICAgICAgIC8qIEluaXRpYWxpc2UgZmllbGRzIHdoaWNo
IGhhdmUgb3RoZXIgdXNlcyBmb3IgZnJlZSBwYWdlcy4gKi8NCj4gPiArICAgICAgICBwZ1tpXS51
LmludXNlLnR5cGVfaW5mbyA9IDA7DQo+ID4gKyAgICAgICAgcGFnZV9zZXRfb3duZXIoJnBnW2ld
LCBOVUxMKTsNCj4gPiArDQo+ID4gKyAgICAgICAgLyoNCj4gPiArICAgICAgICAgKiBFbnN1cmUg
Y2FjaGUgYW5kIFJBTSBhcmUgY29uc2lzdGVudCBmb3IgcGxhdGZvcm1zIHdoZXJlIHRoZQ0KPiA+
ICsgICAgICAgICAqIGd1ZXN0IGNhbiBjb250cm9sIGl0cyBvd24gdmlzaWJpbGl0eSBvZi90aHJv
dWdoIHRoZSBjYWNoZS4NCj4gPiArICAgICAgICAgKi8NCj4gPiArICAgICAgICBmbHVzaF9wYWdl
X3RvX3JhbShtZm5feChwYWdlX3RvX21mbigmcGdbaV0pKSwNCj4gPiArICAgICAgICAgICAgICAg
ICAgICAgICAgICAgICEobWVtZmxhZ3MgJiBNRU1GX25vX2ljYWNoZV9mbHVzaCkpOw0KPiA+ICsg
ICAgfQ0KPiA+ICsNCj4gPiArICAgIGlmICggbmVlZF90bGJmbHVzaCApDQo+ID4gKyAgICAgICAg
ZmlsdGVyZWRfZmx1c2hfdGxiX21hc2sodGxiZmx1c2hfdGltZXN0YW1wKTsNCj4gPiArDQo+ID4g
KyAgICByZXR1cm4gcGc7DQo+ID4gK30NCj4gPiArDQo+ID4gICAvKiBSZW1vdmUgYW55IG9mZmxp
bmVkIHBhZ2UgaW4gdGhlIGJ1ZGR5IHBvaW50ZWQgdG8gYnkgaGVhZC4gKi8NCj4gPiAgIHN0YXRp
YyBpbnQgcmVzZXJ2ZV9vZmZsaW5lZF9wYWdlKHN0cnVjdCBwYWdlX2luZm8gKmhlYWQpDQo+ID4g
ICB7DQo+ID4NCj4gDQo+IENoZWVycywNCj4gDQo+IC0tDQo+IEp1bGllbiBHcmFsbA0KDQpDaGVl
cnMsDQoNClBlbm55IFpoZW5nDQo=


From xen-devel-bounces@lists.xenproject.org Wed May 19 05:35:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 05:35:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129853.243520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljErw-0005kw-Nu; Wed, 19 May 2021 05:35:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129853.243520; Wed, 19 May 2021 05:35:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljErw-0005kp-Kx; Wed, 19 May 2021 05:35:44 +0000
Received: by outflank-mailman (input) for mailman id 129853;
 Wed, 19 May 2021 05:35:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljErv-0005kj-Ox
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 05:35:43 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.70]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d21f5dc-8b27-4d1a-a626-aeacd5c6730f;
 Wed, 19 May 2021 05:35:42 +0000 (UTC)
Received: from AM6PR08CA0044.eurprd08.prod.outlook.com (2603:10a6:20b:c0::32)
 by AM0PR08MB4146.eurprd08.prod.outlook.com (2603:10a6:208:129::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 19 May
 2021 05:35:37 +0000
Received: from VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:c0:cafe::a8) by AM6PR08CA0044.outlook.office365.com
 (2603:10a6:20b:c0::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.33 via Frontend
 Transport; Wed, 19 May 2021 05:35:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT025.mail.protection.outlook.com (10.152.18.74) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 05:35:36 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Wed, 19 May 2021 05:35:36 +0000
Received: from 9259a0bf5ff6.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 020C6141-DB80-46B2-9DB0-2C85A1C4728A.1; 
 Wed, 19 May 2021 05:35:30 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9259a0bf5ff6.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 05:35:30 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VE1PR08MB4669.eurprd08.prod.outlook.com (2603:10a6:802:a8::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 05:35:27 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 05:35:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d21f5dc-8b27-4d1a-a626-aeacd5c6730f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X+HouS1vdV1PlPvVgBL1LTnzyMBfsrJLb2gV+keXSvw=;
 b=Hidk6jU+u/Vb6jD3+JnCl7jkgq5S5OabmBipMFN8ExHJJsGOLQ22QicwDEIZC5qyNrP+1++VQZcUjjIj62i5V5sh6qcxgWDTK7CkPHSheRjQN7zb7GOrZuYnaeyCTc01S7ilO9UGqyct2PCL5pqta7Ye6E9LmzqTDkeWF2UXjx8=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Ef05ZaA89fKxGu1hLhSpqItVGXd9n51yo6rt3HGMOtKw/gV3vIogRS8HN3dAC1aSgfJkeCJFJt2jlsDpYmf5zJGQDMrqQnVZAyNUMq0OSKfOd62eh8Y6DD/VMszbKRzmpczFSJuRwbo+QeMyBzvNX5S7qAwHrfUwlm6WuoePeCIWzV+EOqqH9IRM1Zn56z40N4U4KmuBdHqUBL3XN9EaQjwmb2n6jUm32+14f/249e5a5CD1HplMH+VWWGMpG91AQk53eV4uFn9TQXxvsSslrL39SpTBnzxWTh1Y7MuQ4an5k6zFw/5xNabEPA4iJ3A7IJSYE+LKj3Ete3aAqeRz7Q==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X+HouS1vdV1PlPvVgBL1LTnzyMBfsrJLb2gV+keXSvw=;
 b=L9lXcEWBEmnYyGgw+QpFVifdjy+06gIUNM1oXtA5YkBtARByZb1ujrdtYOwNM70ng7TtpCH6jTsxe+6rEEh0u5//m4WXf5HIgAz4lXRcZNSMyIza2BVBgg6tuGpBQ9QLQMDViM5oXsxh1dza2HP0vIAzCFij9xBPUXUdBwc0BRwhsEJhE3EE+HN5RsjHOTST/+LSBtl3/2IFaKkzitd2mbWpLdvbmnlRfvB7AvV5ciIqFw7S6TQMHgOLryT+9pAC4thP0BYSJKU3xzxGSA6TzcmxFzJiV8NxdGgh7wcunhsJkYH21tM86/ZCRylDiTY2qMXdTKbT7gZkZDKUJFbYKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=X+HouS1vdV1PlPvVgBL1LTnzyMBfsrJLb2gV+keXSvw=;
 b=Hidk6jU+u/Vb6jD3+JnCl7jkgq5S5OabmBipMFN8ExHJJsGOLQ22QicwDEIZC5qyNrP+1++VQZcUjjIj62i5V5sh6qcxgWDTK7CkPHSheRjQN7zb7GOrZuYnaeyCTc01S7ilO9UGqyct2PCL5pqta7Ye6E9LmzqTDkeWF2UXjx8=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages for
 better compatibility
Thread-Topic: [PATCH 06/10] xen: replace order with nr_pfns in assign_pages
 for better compatibility
Thread-Index: AQHXS6W5eIGNu7DPG0i7DMYwhf4hF6rpB5CAgAFAuBA=
Date: Wed, 19 May 2021 05:35:26 +0000
Message-ID:
 <VE1PR08MB52156E4C005F6B9675377548F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-7-penny.zheng@arm.com>
 <7dc01bcd-1570-82fa-5d15-11c28a857b3f@xen.org>
In-Reply-To: <7dc01bcd-1570-82fa-5d15-11c28a857b3f@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 45F244372718734B9FA5144BEC341513.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 7e2961a8-4cf9-4dc0-0fff-08d91a87ee40
x-ms-traffictypediagnostic: VE1PR08MB4669:|AM0PR08MB4146:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB41466CEC10CEAFD74516F81BF72B9@AM0PR08MB4146.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 /LUPUxFxerRTzJFJ9uL1ZtLSQnpPO9D4ISf8IhOVCq2ZYF5E7rJUICNpYew6BHMbwV48UQWwVWWYEXugUx4I+NnFAYAryPbIqdCNdsHK7VRXEpDJ9kp2D54BQUuiG/GMAt+gkEG0Upva1+EKsoPOZcL7Silr6IuptIMYr0ppXs/QJPE4LwZEW/qM0tdxssg5R+SVajrDICW0STDgulZ4lyrX1J5adBfbZB+xl+MRJEWaAaqWRJb+XOMKC25lybTUMm3EmWDQl1Lxp8M7I2XGCTAxhqKLCYF3pNqXler9MByuM8CSOH0fUp4y1SrtUCPp1l+qO+esgHWaj1hMt6Es61wEqhS6HCSEY+o5ZqhYmXsMh1TgqvsNUVikw1wlGpiRQ4UJQHmQitc/7m+1BikwKyhS/A4qz8aUk9BObPPPthvsZOk34tiGMECY7rEto2BmvPwS42IgtWpqXHI8k1iIN5wypoURDIWS+2JA9/+qnLyd1RsC/wF3W9R1q/pfm+Yg4Zb74WMsMzsO0nKwRGbe21aNa1oIHl+TY7JaMUs2gL7LmD8Yt8JCwXfCZ0D3oBvmWy1beEPzWZepX6IOa2ZyTRWFBQs36pZrq+WinKSdrMQ=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5215.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(39860400002)(376002)(346002)(366004)(136003)(478600001)(33656002)(64756008)(5660300002)(8936002)(66556008)(66946007)(66476007)(76116006)(66446008)(71200400001)(26005)(86362001)(2906002)(8676002)(52536014)(38100700002)(7696005)(122000001)(186003)(316002)(4326008)(110136005)(54906003)(53546011)(55016002)(83380400001)(6506007)(9686003);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?ZUlzdmJOVC9QZTFPTDRzYjZYclBES0p2cVVGTUNtenRhNHp0V1VCR0RwRFlX?=
 =?utf-8?B?Z0xmWEI5RGNOOXQ1Rlp1Tm5aelRvMEpRM2tVS1cxcUdLUnNRMktOS3BWQ0Fn?=
 =?utf-8?B?NnI2azhrZTErZ3BBWFBwTjU5R3FzSE42b1NRaXB4VE10ejRnKzU0OWRZN1Q4?=
 =?utf-8?B?b2N2UHlwa2V4WkN6bnpCaEd5Y3A0Ym5TRWFuLzhhL3czSnYyUWN6L0NmMUdo?=
 =?utf-8?B?ckdneUs1ZHllaElkMFNtNU9Ha0xjQ2orR3NtVXludkRaTmdJMjJnY3FycmZE?=
 =?utf-8?B?aVI5Y3dhNHQ0dUx1WGpiMnZsSDZMZ3pNRlZqZ3N5SVdVSFhPVkJWZ2ZpV2tB?=
 =?utf-8?B?MVEwSFBkejVleXZtNmlQeGVJMHc1NFJpWTB5UFhiKzZHUS8rTnlVT0w3MGsy?=
 =?utf-8?B?b0FRR1AvQXpPMHZjazRRa2VwL1RzY09CNXlJOWZGZ0JaTXdoZ054R2NxdmNj?=
 =?utf-8?B?RWlEbDFvNnVuNVVPTFBpcXprQ05nWnNZdWYwczNyZWc4WnFyTnczMUNVTFpl?=
 =?utf-8?B?T2JoMFd2MHBzMUxtZHRvR1B0ZHBtSHlBVVcvcysxM2IyZU5mUG9pSU1YL0xQ?=
 =?utf-8?B?RGMyQnMvdFlSOUxRNWRCOEJEdm5UWGJocnNCaWI1RS8wWnlMYkNzclJoY0ty?=
 =?utf-8?B?dTN0TDJCakVLTGdka21zcE93SWlWQ21DSHdLSnd3b21BMEdGdndlMzFGTEJW?=
 =?utf-8?B?Z2ZZMDgrYXlsTWk0RGxxd1l6M0crTHdPdW1Ic3F4V1I2cTc5NFBvRnBnMlZ5?=
 =?utf-8?B?SlFhRzNGTVcrdkwyNVZNRVBQSlhNNTJJVDY4TUEzbDBBNEhCTkgzMmt3MnZL?=
 =?utf-8?B?a09xUHlNQ0xqbFN3NjVENS9uSDRZV2tWekhQV0krbVhRMitldkxiMit2VjN3?=
 =?utf-8?B?dEoyVkRmQkJRRDRoK0VZOTdMQm9BY2Z2SHdGc2NvNC9RMHpwWHBCWUd4cVc3?=
 =?utf-8?B?c0tsdjdQakFTWHZZYUh2NDJBMnNJNlptTXFqZlRTUEQ3c0FpTGVManBFZk9Q?=
 =?utf-8?B?ZS9SUjJGY2hNMDBsRmEzdnE1a0I1dUxjb0tyZ1dESDUwL21xWmdQRWJ6RGI2?=
 =?utf-8?B?N2RBYVVnZlRqbUhHOVRHRDd5NVJkVU5iRmhBSGVVeDZYUGRIK2crVm80YjY0?=
 =?utf-8?B?bElpb0VINnFvL0dtZENoQWVXRkFweVgrMGFjTWxoT200Y3dqb1hucUpoMEZi?=
 =?utf-8?B?Y1ZiMmh6aHBZSndwRXZNQ05DaWVMMndTY3kydjRYWGhQMVg3UndXYUluZ290?=
 =?utf-8?B?dDVjT3FtMDk1L3IxL1dXQUQ0bVNXdFpGZko1WjFCVXcwcDAwdUdSMmJ3d3ZI?=
 =?utf-8?B?Rm1ibE5xeU1qeWllVVBpalFjV0Vpemx5Vllvajk1RzlhNndEdkV1OEFjck9Z?=
 =?utf-8?B?eHhMTjBvNzJQK05Fb2k0dzFxTFJVNWc5Q0FDeWhNK2pUN2FSZ2UxbGtNRWJQ?=
 =?utf-8?B?cnlUTS9SMG5rN3JRMXJFeHZwUjhzcktjSjRZU2RWSmRpQW9uU3FQakJ0blRW?=
 =?utf-8?B?MU41MWNndzFoWWFrSVBqbXJPU2RCZktoVEJHYnRBS0k3LzBOeUJ6QlNMZXla?=
 =?utf-8?B?M3VmZ3FjMVhFeGdXNDJlK091VE4vdmJkRi80VUk4a3drdkVVdFlPM2FJY2tl?=
 =?utf-8?B?TWVQdyt0V3MwSEtubE8xR3ZUdmhqSjFDZEpLUmpYaTFHYU5kMzIzZ0ZEQXVw?=
 =?utf-8?B?dkR2SnFpQXAzZGlQakJyOFh3c21MNmhqUTZseXc4Yy9peEhhZkR4cjBuRmdF?=
 =?utf-8?Q?xk4Gfp7F1h6rzKOFz8F++oNHM6dNAktpFPkhGqc?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4669
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4a79fd96-9d76-4028-1ba0-08d91a87e854
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Qo3SIg9UO+E7aDiohW+livzI647TZFb3UPHCZDsAcQrdOiStI/TTFA4SJu5SHZb8PwVKKPNUPqLzQ7CnVITEDFUl65gaeifWOcuu6fM4bGYzcq+1h+sv+QTo68BukwxuprfI8/jQobrXhQyV5g6PyOrfQLahrhjsPhYGaoZWJJWkXPhejNOJjtwJeVqmEBuGwdlBAj5ZTNkq1MGJh0SbcD6MUVsAyudKdiBsOa6pCkFIjEf1ITIozww1pp+EOwa3rtiLQXCPGoiylBu0ZfwE5H1dDQpAuwlhr3HzjZvIyGqQXsaOeI90W5rBp8HoFnI06KznZm2ZVTXlYj7qVwv73Hb7kj2AYFNSZXN2N6Rp1JN7V30mGp8mZPvZhPyrysoNF/3vU8n+Zf672KT8VeoHHX8KTSW7fjKqRAv3CMITX6zHXsU7PqPJWL7y9cvAMWiD5LL7j0WMjx60RmDGEdA4vury91J0dbDVtQn4OQs1BATrI48q2XJfXAkWqyQ2CihKyvvBTJ6SKItRjBQ4uVS4Hs3AL6bYo41B3k92uYTTyKZBPnLr/FLF4i3hHU+blpo1KD2EoU4tXKxSgJGLi1aBP3HXYZ78p8gbIChc1HtIIWM6aR/FI7CWWIe3+n7D89eyqmJGheuuo0oVQUtyRyLxyg==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(39860400002)(376002)(46966006)(36840700001)(70586007)(54906003)(478600001)(2906002)(9686003)(8676002)(55016002)(70206006)(356005)(83380400001)(81166007)(110136005)(316002)(4326008)(47076005)(53546011)(186003)(6506007)(26005)(82310400003)(7696005)(336012)(8936002)(82740400003)(36860700001)(33656002)(5660300002)(86362001)(52536014);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 05:35:36.8059
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7e2961a8-4cf9-4dc0-0fff-08d91a87ee40
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB4146

SGkgSnVsaWVuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4NCj4gU2VudDogVHVlc2RheSwgTWF5IDE4LCAyMDIxIDY6
MjEgUE0NCj4gVG86IFBlbm55IFpoZW5nIDxQZW5ueS5aaGVuZ0Bhcm0uY29tPjsgeGVuLWRldmVs
QGxpc3RzLnhlbnByb2plY3Qub3JnOw0KPiBzc3RhYmVsbGluaUBrZXJuZWwub3JnDQo+IENjOiBC
ZXJ0cmFuZCBNYXJxdWlzIDxCZXJ0cmFuZC5NYXJxdWlzQGFybS5jb20+OyBXZWkgQ2hlbg0KPiA8
V2VpLkNoZW5AYXJtLmNvbT47IG5kIDxuZEBhcm0uY29tPg0KPiBTdWJqZWN0OiBSZTogW1BBVENI
IDA2LzEwXSB4ZW46IHJlcGxhY2Ugb3JkZXIgd2l0aCBucl9wZm5zIGluIGFzc2lnbl9wYWdlcw0K
PiBmb3IgYmV0dGVyIGNvbXBhdGliaWxpdHkNCj4gDQo+IEhpIFBlbm55LA0KPiANCj4gT24gMTgv
MDUvMjAyMSAwNjoyMSwgUGVubnkgWmhlbmcgd3JvdGU6DQo+ID4gRnVuY3Rpb24gcGFyYW1ldGVy
IG9yZGVyIGluIGFzc2lnbl9wYWdlcyBpcyBhbHdheXMgdXNlZCBhcyAxdWwgPDwNCj4gPiBvcmRl
ciwgcmVmZXJyaW5nIHRvIDJAb3JkZXIgcGFnZXMuDQo+ID4NCj4gPiBOb3csIGZvciBiZXR0ZXIg
Y29tcGF0aWJpbGl0eSB3aXRoIG5ldyBzdGF0aWMgbWVtb3J5LCBvcmRlciBzaGFsbCBiZQ0KPiA+
IHJlcGxhY2VkIHdpdGggbnJfcGZucyBwb2ludGluZyB0byBwYWdlIGNvdW50IHdpdGggbm8gY29u
c3RyYWludCwgbGlrZQ0KPiA+IDI1ME1CLg0KPiANCj4gV2UgaGF2ZSBzaW1pbGFyIHJlcXVpcmVt
ZW50cyBmb3IgTGl2ZVVwZGF0ZSBiZWNhdXNlIGFyZSBwcmVzZXJ2aW5nIHRoZQ0KPiBtZW1vcnkg
d2l0aCBhIG51bWJlciBvZiBwYWdlcyAocmF0aGVyIHRoYW4gYSBwb3dlci1vZi10d28pLiBXaXRo
IHRoZQ0KPiBjdXJyZW50IGludGVyZmFjZSB3b3VsZCBiZSBuZWVkIHRvIHNwbGl0IHRoZSByYW5n
ZSBpbiBhIHBvd2VyIG9mIDIgd2hpY2ggaXMgYQ0KPiBiaXQgb2YgcGFpbi4NCj4gDQo+IEhvd2V2
ZXIsIEkgdGhpbmsgSSB3b3VsZCBwcmVmZXIgaWYgd2UgaW50cm9kdWNlIGEgbmV3IGludGVyZmFj
ZSAobWF5YmUNCj4gYXNzaWduX3BhZ2VzX25yKCkpIHJhdGhlciB0aGFuIGNoYW5nZSB0aGUgbWVh
bmluZyBvZiB0aGUgZmllbGQuIFRoaXMgaXMgZm9yDQo+IHR3byByZWFzb25zOg0KPiAgICAxKSBX
ZSBsaW1pdCB0aGUgcmlzayB0byBtYWtlIG1pc3Rha2Ugd2hlbiBiYWNrcG9ydGluZyBhIHBhdGNo
IHRvdWNoDQo+IGFzc2lnbl9wYWdlcygpLg0KPiAgICAyKSBBZGRpbmcgKDFVTCA8PCBvcmRlcikg
Zm9yIHByZXR0eSBtdWNoIGFsbCB0aGUgY2FsbGVyIGlzIG5vdCBuaWNlLg0KPiANCg0KT2suIEkg
d2lsbCBjcmVhdGUgYSBuZXcgaW50ZXJmYWNlIGFzc2lnbl9wYWdlc19ucigpLCBhbmQgbGV0IGFz
c2lnbl9wYWdlcyB0byBjYWxsIGl0IHdpdGgNCjJAb3JkZXIuDQoNCj4gQ2hlZXJzLA0KPiANCj4g
LS0NCj4gSnVsaWVuIEdyYWxsDQoNCkNoZWVycw0KDQpQZW5ueSBaaGVuZw0K


From xen-devel-bounces@lists.xenproject.org Wed May 19 05:59:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 05:59:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129859.243532 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFFG-0008AK-QH; Wed, 19 May 2021 05:59:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129859.243532; Wed, 19 May 2021 05:59:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFFG-0008AD-Mk; Wed, 19 May 2021 05:59:50 +0000
Received: by outflank-mailman (input) for mailman id 129859;
 Wed, 19 May 2021 05:59:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ihdG=KO=gmail.com=technologyrss.mail@srs-us1.protection.inumbo.net>)
 id 1ljFFF-0008A7-Rw
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 05:59:49 +0000
Received: from mail-pl1-x62f.google.com (unknown [2607:f8b0:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7db13265-c259-4b87-aa10-5ef375deefaa;
 Wed, 19 May 2021 05:59:48 +0000 (UTC)
Received: by mail-pl1-x62f.google.com with SMTP id n8so1173373plf.7
 for <xen-devel@lists.xenproject.org>; Tue, 18 May 2021 22:59:48 -0700 (PDT)
Received: from [10.66.100.3] ([144.48.119.14])
 by smtp.gmail.com with ESMTPSA id z25sm15239141pgu.89.2021.05.18.22.59.46
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 18 May 2021 22:59:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7db13265-c259-4b87-aa10-5ef375deefaa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=to:from:subject:message-id:date:user-agent:mime-version
         :content-language;
        bh=kiWS2QcDpg5SOw42EK6XFIhDfEh9lFxVSQUCtCUZLqo=;
        b=GJ59tyD6rnzfpvRJlZCQelswEdlVXib6R3AJvi1J7bJWjzDQAJFEM7zdtVWjd13zEg
         93YOs0ySVpZhrCi3qVgsMtZFEpMLa+jdTn2D9rDSGgy72G5WHnei697tvum/Yn4a2ICh
         BL/euVU/J29xX4//p6rttXGdx5PpWlco2V9dC0CZrczte7LxgRZSAbgZJZMUgqzEbbnB
         9rr6pWlcqPQVVn+G3Qy5Ecw4y7iD5d6F9dHBXFEA3lrRwsp+N2Kbp9jxHo/PiuskDBio
         5ev9+smhK4MRfRcLoI4Mu1++4khEhw1wdhtyZSjmcmk2HeXzbCJk7Nix3f8t1xKJXvJo
         w3EA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:to:from:subject:message-id:date:user-agent
         :mime-version:content-language;
        bh=kiWS2QcDpg5SOw42EK6XFIhDfEh9lFxVSQUCtCUZLqo=;
        b=e4On3y4tGcIlO2kHum5HuQxk7fF3QcR0qJpjM8MaayyvmbJam6n0rcyQB4ZLQx38QE
         Iq5+WidzD3EpJ7HyrDec4EiBvxv0ngChGU6P/YIZhiJTG5dGDqe5gykUGoqtdImoD1wE
         30bSzkFyuux+sjKoXEBtH5VsfPFGzpaxJTLJ33lbDBn9B3QoTD1tx1mJB8LahLrjM+m3
         QLvw7ObBLACEnitEwgJ537PVEEkWrLqRAr2Wx8C/aT70blvL+aQARJKypBHC9Ne6EQ7j
         Dv2pI9Zz/lYjEnAwtPPTZzAxpome2E4c9VNcQe2RszM/a3BgTnvSqVOZEYGpw3/CKox/
         XaKA==
X-Gm-Message-State: AOAM533Z0kuuI3gEdZOhFMPJa/IJYHara6gvNPK0UNU6yOJrd1ZGIn0G
	JhqTA+gaRZjXhk0IgFcofrnBO373TZk=
X-Google-Smtp-Source: ABdhPJw3BUdH3TgZG/l6mh5bbgPotDVTwy4WHlHVxgygSTzzld+1c5/g/6U1UwFVWDg8x7zEuSTjPg==
X-Received: by 2002:a17:902:d690:b029:f0:beca:59b9 with SMTP id v16-20020a170902d690b02900f0beca59b9mr8969311ply.71.1621403987890;
        Tue, 18 May 2021 22:59:47 -0700 (PDT)
To: xen-devel@lists.xenproject.org
From: Technologyrss Mail <technologyrss.mail@gmail.com>
Subject: Deploy XEN Project
Message-ID: <4468222a-ac0b-7544-351d-286231a6bc9c@gmail.com>
Date: Wed, 19 May 2021 11:59:44 +0600
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: multipart/alternative;
 boundary="------------1BD072F9CEE4686738D11295"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1BD072F9CEE4686738D11295
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit

*Hi,*

I am a new user of the XEN project on my CentOS server. Please guide me 
on how to install & deploy the XEN project on CentOS 7?


---

*Thanks & Regards.*

Support Admin

Facebook <https://facebook.com/technologyrss> | Twitter 
<https://twitter.com/technologyrss1> | Website <https://technologyrss.com>

116/1 West Malibagh, D. I. T Road

Dhaka-1217, Bangladesh

*Mob :* +088 01716915504

*Email :* support.admin@technologyrss.com

*Web :* www.technologyrss.com <https://technologyrss.com>


--------------1BD072F9CEE4686738D11295
Content-Type: text/html; charset=utf-8
Content-Transfer-Encoding: 7bit

<html>
  <head>

    <meta http-equiv="content-type" content="text/html; charset=UTF-8">
  </head>
  <body>
    <p><b>Hi,</b></p>
    <p>I am a new user of the XEN project on my CentOS server. Please
      guide me on how to install &amp; deploy the XEN project on CentOS 7? <br>
    </p>
    <p><br>
    </p>
    <p>---<br>
    </p>
    <p><b style="font-size:12px">Thanks &amp; Regards.</b> </p>
    <p style="font-size:12px">Support Admin</p>
    <p style="font-size:12px"><a
        href="https://facebook.com/technologyrss">Facebook</a> | <a
        href="https://twitter.com/technologyrss1">Twitter</a> | <a
        href="https://technologyrss.com">Website</a></p>
    <p style="font-size:12px">116/1 West Malibagh, D. I. T Road</p>
    <p style="font-size:12px">Dhaka-1217, Bangladesh</p>
    <p style="font-size:12px"><b>Mob :</b> +088 01716915504</p>
    <p style="font-size:12px"><b>Email :</b> <a
        class="moz-txt-link-abbreviated"
        href="mailto:support.admin@technologyrss.com">support.admin@technologyrss.com</a></p>
    <p style="font-size:12px"><b>Web :</b> <a
        href="https://technologyrss.com">www.technologyrss.com</a></p>
  </body>
</html>

--------------1BD072F9CEE4686738D11295--


From xen-devel-bounces@lists.xenproject.org Wed May 19 06:04:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 06:04:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129864.243543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFJE-0001BZ-Ad; Wed, 19 May 2021 06:03:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129864.243543; Wed, 19 May 2021 06:03:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFJE-0001BS-7V; Wed, 19 May 2021 06:03:56 +0000
Received: by outflank-mailman (input) for mailman id 129864;
 Wed, 19 May 2021 06:03:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljFJC-0001BM-QO
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 06:03:54 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [40.107.22.88]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 79e00011-461e-4369-9161-efd0e595be1e;
 Wed, 19 May 2021 06:03:53 +0000 (UTC)
Received: from AM5PR0701CA0068.eurprd07.prod.outlook.com (2603:10a6:203:2::30)
 by DB7PR08MB3292.eurprd08.prod.outlook.com (2603:10a6:5:1f::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Wed, 19 May
 2021 06:03:50 +0000
Received: from AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:2:cafe::a1) by AM5PR0701CA0068.outlook.office365.com
 (2603:10a6:203:2::30) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.11 via Frontend
 Transport; Wed, 19 May 2021 06:03:49 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT013.mail.protection.outlook.com (10.152.16.140) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 06:03:49 +0000
Received: ("Tessian outbound ea2c9a942a09:v92");
 Wed, 19 May 2021 06:03:48 +0000
Received: from 01e3ef9e16dc.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 5A9DE626-8494-4A57-B32C-17C09B1E58F6.1; 
 Wed, 19 May 2021 06:03:42 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 01e3ef9e16dc.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 06:03:42 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB5536.eurprd08.prod.outlook.com (2603:10a6:803:13b::24)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 19 May
 2021 06:03:35 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 06:03:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 79e00011-461e-4369-9161-efd0e595be1e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wIoj3AhThDtknK6f4+HF9yXlP2uS6wGNCvd8TbjyTTU=;
 b=Ub/8QLWt6mwW4vWahPvAgJEmdOmJa72HikNtqbk+0eyxVrtDdDETO5za+VEK1UXzpjaZjiFhQyh0tv6/dDty56IeoCyq7dHWoDH+wAvOJoh8FuGpAPw3Ux9wS5SlaLCKWpDqpVT2/3AYA9ih3U24pyEdQo6EDXSeNMCIY2ATzwk=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=fA59ABGY0p8mds1jnDoZbvbeFq8ILX7ACTwF8wRshxkzEQejFg8TsrXH4TLmKAK55iwUrhRV2BnKQVM/ZWBR6clUflzvrB/RJHPaaAm4WKhcUpssfBOjmbRcfkHL3JHWZStwLjJHDvGAWZfBtYT/FHWqBs0wDBgrcA5SQWE6XT7blOMRvXoqRjPJTulKHLhtw8jszVoBnJbtyZ4NVFvzIV9dLZb8laj4ZrgOZQaGklEGL4fwOsAKMG7HTYlT0xrU8Yi2xFdmSX9no9us/eQ3TiFXxONGihGSLIDmuD9h649Kyh8l4ga6m0oAvvn9OWrf/b6EgAQZ6+Sdwwxjs8MXZA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wIoj3AhThDtknK6f4+HF9yXlP2uS6wGNCvd8TbjyTTU=;
 b=R/zpSYcKB/jaTk7DOPo0e8fa6mG2BvnrSfIUb14qp4QtUzjfzErx2mlLKXe8yq6RGo1kbvikH40agQ4dvWnUsp3h6smRJRufKD7Hp8T7JogaWb2DYaVjJoGL8EQwRyj0PXjK+gWw1RQlHsn+2e/9UhPwjeL1IEQ0gFod91lxDaSe9Hvw8ILD3aBflJxX74n86NWcZvqIzBx0w8S5hSOJ1dJXQEZBR4fJbSTVrezJn2Fs7VzVm0ngj0qk5c92GRSAPvLLwvPy+AaGevucJOFocjyBVqnupaU5boyG2DzSvxolhB7f2LWrJqam9eK+i68dmr5ahzgclx1NIIaMyIKMVw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wIoj3AhThDtknK6f4+HF9yXlP2uS6wGNCvd8TbjyTTU=;
 b=Ub/8QLWt6mwW4vWahPvAgJEmdOmJa72HikNtqbk+0eyxVrtDdDETO5za+VEK1UXzpjaZjiFhQyh0tv6/dDty56IeoCyq7dHWoDH+wAvOJoh8FuGpAPw3Ux9wS5SlaLCKWpDqpVT2/3AYA9ih3U24pyEdQo6EDXSeNMCIY2ATzwk=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Topic: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Index: AQHXS6W/k/joZEkeaUiDjrfYH3jVEarpCkuAgAE/+8A=
Date: Wed, 19 May 2021 06:03:35 +0000
Message-ID:
 <VE1PR08MB521506E1668D332008260215F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <d2d1c50f-16bb-778b-acdd-0684878c100f@xen.org>
In-Reply-To: <d2d1c50f-16bb-778b-acdd-0684878c100f@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: E1D52A67E0E15E4D83FBC246C1848AE8.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 80cab728-f968-4116-c0e1-08d91a8bdf0e
x-ms-traffictypediagnostic: VI1PR08MB5536:|DB7PR08MB3292:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB3292CEFA4256C8D57390A2DBF72B9@DB7PR08MB3292.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:3968;OLM:3968;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 gkwiEo3vkLUZEA+ucwEoKD5NoXdcQy1thvj5ddyeRgxQsUbsl2fdQrdgpswNUl5agcH2usVVLkrhTKvdP7nZ/jdtkWyPvWtTrZqO2/yiQisoWs/EwV5NOQGj4sM3oo99omjRgXYn4DhnRFG0bSirTOxr3Jq1fjbjNrrAdhRaZQ+6wWJ2/fkKcIzwgRdVoXgzWmez/FGfskOQeeUyholVAou59dzAszJpyDHPFxHyHjW1oNILmXMCvFEOzpPXKepVD50plQikCvuEUJh/ZEZGTQRSyLwVyZT7TkebHpMr75sf8A0ekp8X4B0+2lbkpHmpZ51kbtqHXjd44iijRx7JQSuenYa+j0TMDNOYfBmf0TJaiuvt95lIPHMpOdf2AmmqL6D21yVEEq5AUoI0tIObtLGLSvQVIPkkeU9GDWcfsVLKMPkLYNz9uLY1+0zNfrYL4rgXPuya/KGXDEPyTYyDbnhMyDoOCPjdBuy6cB9l4gUjgsTMpPprZlZq/h2o/s6y0P5nig5B4drCkGQ37Gj07D24V7fGu4AR/mxySGWQphs9kWllRArQihS0OV4TjTSb0siczux9kGBiichKUDDo/yX+iOM1rnS/n1NV7jPmlxw=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5215.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(366004)(136003)(396003)(346002)(376002)(39860400002)(7696005)(2906002)(8936002)(66446008)(66476007)(478600001)(76116006)(66946007)(8676002)(110136005)(52536014)(53546011)(54906003)(26005)(186003)(64756008)(86362001)(6506007)(66556008)(9686003)(38100700002)(122000001)(55016002)(5660300002)(71200400001)(83380400001)(4326008)(316002)(33656002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?YjcrV1l4OEJlK3VpS0FFK2hHRDVkRTNnT1NRY2l1MHlBS0RVRkMyRnBGRTlO?=
 =?utf-8?B?dTgxY0Z6Q3lIWDZ4SEhvcEJ2d2Q0M1FKZW1zWW9PTjJ6VGpSVjZrSFBic2Ur?=
 =?utf-8?B?WWFnaDJEcWVKSU50ZlhKbkFOVmJtUVNveGpPYk1RQURwQkc5Q1JrcEhZVWIy?=
 =?utf-8?B?bWNBSStVR3VNNHJmekMyZlEwR3hQcUsrVE96aGg1WVQ0WmFGTkZoMm5kTnpr?=
 =?utf-8?B?OHhtS1Z1bVZ1SWFJNHBicGF2RW9peXBTU1Z2OWFoTnJvK0g4dTVWdXFueFBG?=
 =?utf-8?B?b01Za25aL2l6b3BZTUt6NVVMYmN1aGEycnhFbWVSbVNHUVR3ZlJ5dktud1Z4?=
 =?utf-8?B?OFczREhiaHdDS0RlY1pUTEYwU0ZsbExOSUowejIwU3dOTGk0cVBPQmJreXUy?=
 =?utf-8?B?VENUdEhwaGZIVXNCT2ljMXgxQkpLM29XbEwyNFFtRWxOZFNIZll4cEdiUGZY?=
 =?utf-8?B?OE5xVXQ5VGo5RGVoMlRIMVRCamM2YVJ6U2dicCtkeW9MVTRyWjRtWDNPTEJC?=
 =?utf-8?B?S2daT1RCY1dGZGxmQ1BmTkJhVDd0SXA5UC96dTZuVCt6K09VaUtiY3I0Y2hW?=
 =?utf-8?B?eUxKdG5mTWU5eGVKUWZoNWhjc3crb1NTcEl3SCsxdDJ0Z2c3YndvRVhUVmto?=
 =?utf-8?B?OEFwem5SeVdEN0s3eW95Ulh0bUJpeTEzRVRac1pHTTc3VFJGclNPOVhwL0gy?=
 =?utf-8?B?U2JrMGVaVFhxUkxkL0FWMVRqTmlmeEYvc0p3a2xBVFdCRHN2Q2FOMFhDa0pR?=
 =?utf-8?B?SWxNeXY4bUdtdmNXOHJxYmhHVFBFRHVMUnFzMGNQNUlOeXZRL0JMdlJQdmNL?=
 =?utf-8?B?aWVmQUp3MlJJMnVtcm4wNE1ObG15ZS9jdm1wRWpFaUlwT0hRVUJFUy9RQzFx?=
 =?utf-8?B?K1A5Vm1zWGRFcmxzaDNTRWFmazA1Y2plSkNJSFpaRkJrVmkxTDB3M1J1aWVJ?=
 =?utf-8?B?a3Q0K05uMElacVpONWp1UkR5bFZjYzJFTlBHWThsWlBoYzNGQ250d0xKdnJQ?=
 =?utf-8?B?N0ZUK01mb3hnb25DalpCM0Q4VXFQcXlPbjE1UmVwdlVHRlMvQjM5YmVjVWZ2?=
 =?utf-8?B?M2UxcU4vRDdEd2ZtM00xS01vUGsxZTkwNTVFMWdhRG9ETXlmTVVRZ0dlR3VO?=
 =?utf-8?B?ajgwcVcvMGR0aWtHc1BFSWZMUG5iajlBT0NwdnhZNWZjT2dCbEY1a3U5bkMy?=
 =?utf-8?B?NzRoTU9adEQvbHljWDl5dWFEdU5mM2cycVk3UTFPdkZpd2Y5N05FVlk1OEVw?=
 =?utf-8?B?dzF6WU1pTjhZK3YrOW50QTUvMjhBczhxQ0dIdmJENjdDZmxJRGNsQVZKTDYv?=
 =?utf-8?B?QnFkams2YzFiLy9VN2tuVXpUV2hPSzRpVk1DVmkrK0pXbW1nK3JaWnRGazVp?=
 =?utf-8?B?RDN6bGRPMytjS3NidkNOdkZiQjBGbW9hYklmWHdhb0V3bHJpRmZhTWJwa1Bl?=
 =?utf-8?B?Vi8vTmdoK1JHamRVZ1VhR0NmdURGcGtQWjVDNjMzTEtUWUpYTWpyMzZrc2NU?=
 =?utf-8?B?UUV0OWZtaGc3Y0VEelR4S05lYTZwQ004T0EycFFYNnJjM21tZ3hkbUU0ZVdv?=
 =?utf-8?B?cFcxa3pCZFdWOFZsd0NBenZrOVA1U2FScjV2RzhmTzU2cFVkYWVRcFp4QnVE?=
 =?utf-8?B?WjR4Y1kyQ2dNeUFTdUJvNzBxWUhDMURlMUVoMEIxTThtUmo4c3hpc21jSFFu?=
 =?utf-8?B?di96T0VzNlFXOWlqT2RNVU0rTXhsc1JoRE94Qyt4UlBYaEYrd2lCdXlFNThT?=
 =?utf-8?Q?H9iswxBRZMWvn+8NR+mglm6H/PBYCCcIPl46cdq?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5536
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	381b6a12-2696-4f87-86c4-08d91a8bd6bb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	pouN4ocJkxHXN8bS7zM6mDOCg5RCiRZ0kp6p+ttK1cjETMqLQl+yC1svD1TxaUOb8tF4F88ILfWMVTs2Wxpqpsgy9F4IwJ/PMXTPu591lY3t+ydPqqax+0EQ+p58hMUBkG5zLnSCPfDlF6criY32zz4ZMJJN8Pu9JugwAYOdyDDMDL+YwzYOYyyZrf8+bTdC669MswkQ7EXjwETeq1iR9edRBQX+fAxYS9VWB6mW+xiv10UXnbg88OJSWJjMD9BfsZa1BUzO1FBINjhKlFnQEB1rtmFioT7nESDQTiICRc8vmNRUaV/K1JdVBIZNRnkWxb65iNvPZmzbOHDmWNr3zQrLF8pNHBWP2FE4VfUzDRdp292Np5VkRNCoEQaG7UaKvmJrK9AWh0pMpbsJgqCDEdlQa1GSnaW320A6bxT8Cwj02O5EC5BrscNMDk6kfWV0oQCybbMx/uhZdkzsYg6SqA8Ot2QfT9oEADBNglItDQCDVX86VjGRX3JxocD3NsUmx2E52C78ySJ8az7WdW/NTzBcr9zXvJYi5S1vyseLYu3Tfp3qgyc3hq3dIiJDqjFaU4YWFvYGZVNtFybzh/N3Mnrt4+I3qO8RFWjCX1axJurr6QbZNhnXCzSJP0vKxX/KheahBweYY+RYpdptFnM38g==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(39860400002)(376002)(396003)(136003)(46966006)(36840700001)(26005)(82740400003)(4326008)(52536014)(186003)(7696005)(6506007)(82310400003)(53546011)(86362001)(8676002)(81166007)(478600001)(70586007)(110136005)(54906003)(70206006)(5660300002)(36860700001)(316002)(336012)(33656002)(55016002)(9686003)(47076005)(2906002)(356005)(83380400001)(8936002);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 06:03:49.3944
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 80cab728-f968-4116-c0e1-08d91a8bdf0e
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3292

SGkgSnVsaWVuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4NCj4gU2VudDogVHVlc2RheSwgTWF5IDE4LCAyMDIxIDY6
MzAgUE0NCj4gVG86IFBlbm55IFpoZW5nIDxQZW5ueS5aaGVuZ0Bhcm0uY29tPjsgeGVuLWRldmVs
QGxpc3RzLnhlbnByb2plY3Qub3JnOw0KPiBzc3RhYmVsbGluaUBrZXJuZWwub3JnDQo+IENjOiBC
ZXJ0cmFuZCBNYXJxdWlzIDxCZXJ0cmFuZC5NYXJxdWlzQGFybS5jb20+OyBXZWkgQ2hlbg0KPiA8
V2VpLkNoZW5AYXJtLmNvbT47IG5kIDxuZEBhcm0uY29tPg0KPiBTdWJqZWN0OiBSZTogW1BBVENI
IDA3LzEwXSB4ZW4vYXJtOiBpbnRydWR1Y2UgYWxsb2NfZG9tc3RhdGljX3BhZ2VzDQo+IA0KPiBI
aSBQZW5ueSwNCj4gDQo+IFRpdGxlOiBzL2ludHJ1ZHVjZS9pbnRyb2R1Y2UvDQo+IA0KDQpUaHh+
DQoNCj4gT24gMTgvMDUvMjAyMSAwNjoyMSwgUGVubnkgWmhlbmcgd3JvdGU6DQo+ID4gYWxsb2Nf
ZG9tc3RhdGljX3BhZ2VzIGlzIHRoZSBlcXVpdmFsZW50IG9mIGFsbG9jX2RvbWhlYXBfcGFnZXMg
Zm9yDQo+ID4gc3RhdGljIG1tZW9yeSwgYW5kIGl0IGlzIHRvIGFsbG9jYXRlIG5yX3BmbnMgcGFn
ZXMgb2Ygc3RhdGljIG1lbW9yeQ0KPiA+IGFuZCBhc3NpZ24gdGhlbSB0byBvbmUgc3BlY2lmaWMg
ZG9tYWluLg0KPiA+DQo+ID4gSXQgdXNlcyBhbGxvY19zdGF0aWNtZW5fcGFnZXMgdG8gZ2V0IG5y
X3BhZ2VzIHBhZ2VzIG9mIHN0YXRpYyBtZW1vcnksDQo+ID4gdGhlbiBvbiBzdWNjZXNzLCBpdCB3
aWxsIHVzZSBhc3NpZ25fcGFnZXMgdG8gYXNzaWduIHRob3NlIHBhZ2VzIHRvIG9uZQ0KPiA+IHNw
ZWNpZmljIGRvbWFpbiwgaW5jbHVkaW5nIHVzaW5nIHBhZ2Vfc2V0X3Jlc2VydmVkX293bmVyIHRv
IHNldCBpdHMNCj4gPiByZXNlcnZlZCBkb21haW4gb3duZXIuDQo+ID4NCj4gPiBTaWduZWQtb2Zm
LWJ5OiBQZW5ueSBaaGVuZyA8cGVubnkuemhlbmdAYXJtLmNvbT4NCj4gPiAtLS0NCj4gPiAgIHhl
bi9jb21tb24vcGFnZV9hbGxvYy5jIHwgNTMNCj4gKysrKysrKysrKysrKysrKysrKysrKysrKysr
KysrKysrKysrKysrKysNCj4gPiAgIHhlbi9pbmNsdWRlL3hlbi9tbS5oICAgIHwgIDQgKysrKw0K
PiA+ICAgMiBmaWxlcyBjaGFuZ2VkLCA1NyBpbnNlcnRpb25zKCspDQo+ID4NCj4gPiBkaWZmIC0t
Z2l0IGEveGVuL2NvbW1vbi9wYWdlX2FsbG9jLmMgYi94ZW4vY29tbW9uL3BhZ2VfYWxsb2MuYyBp
bmRleA0KPiA+IDBlYjlmMjJhMDAuLmYxZjEyOTZhNjEgMTAwNjQ0DQo+ID4gLS0tIGEveGVuL2Nv
bW1vbi9wYWdlX2FsbG9jLmMNCj4gPiArKysgYi94ZW4vY29tbW9uL3BhZ2VfYWxsb2MuYw0KPiA+
IEBAIC0yNDQ3LDYgKzI0NDcsOSBAQCBpbnQgYXNzaWduX3BhZ2VzKA0KPiA+ICAgICAgIHsNCj4g
PiAgICAgICAgICAgQVNTRVJUKHBhZ2VfZ2V0X293bmVyKCZwZ1tpXSkgPT0gTlVMTCk7DQo+ID4g
ICAgICAgICAgIHBhZ2Vfc2V0X293bmVyKCZwZ1tpXSwgZCk7DQo+ID4gKyAgICAgICAgLyogdXNl
IHBhZ2Vfc2V0X3Jlc2VydmVkX293bmVyIHRvIHNldCBpdHMgcmVzZXJ2ZWQgZG9tYWluIG93bmVy
Lg0KPiAqLw0KPiA+ICsgICAgICAgIGlmICggKHBnW2ldLmNvdW50X2luZm8gJiBQR0NfcmVzZXJ2
ZWQpICkNCj4gPiArICAgICAgICAgICAgcGFnZV9zZXRfcmVzZXJ2ZWRfb3duZXIoJnBnW2ldLCBk
KTsNCj4gDQo+IEkgaGF2ZSBza2ltbWVkIHRocm91Z2ggdGhlIHJlc3Qgb2YgdGhlIHNlcmllcyBh
bmQgY291bGRuJ3QgZmluZCBhbnlvbmUNCj4gY2FsbGluZyBwYWdlX2dldF9yZXNlcnZlZF9vd25l
cigpLiBUaGUgdmFsdWUgaXMgYWxzbyBnb2luZyB0byBiZSB0aGUgZXhhY3QNCj4gc2FtZSBhcyBw
YWdlX3NldF9vd25lcigpLg0KPiANCj4gU28gd2h5IGRvIHdlIG5lZWQgaXQ/DQo+IA0KDQpJbiBt
eSBmaXJzdCBpbnRlbnQsIFRoaXMgdHdvIGhlbHBlciBwYWdlX2dldF9yZXNlcnZlZF9vd25lci8g
cGFnZV9zZXRfcmVzZXJ2ZWRfb3duZXINCmFuZCB0aGUgbmV3IGZpZWxkIGByZXNlcnZlZGAgaW4g
cGFnZV9pbmZvIGFyZSBhbGwgZm9yIHJlYm9vdGluZyBkb21haW4gb24gc3RhdGljIGFsbG9jYXRp
b24uIA0KDQpJIHdhcyBjb25zaWRlcmluZyB0aGF0LCB3aGVuIGltcGxlbWVudGluZyByZWJvb3Rp
bmcgZG9tYWluIG9uIHN0YXRpYyBhbGxvY2F0aW9uLCBtZW1vcnkNCndpbGwgYmUgcmVsaW5xdWlz
aGVkIGFuZCByaWdodCBub3csIGFsbCBmcmVlZCBiYWNrIHRvIGhlYXAsIHdoaWNoIGlzIG5vdCBz
dWl0YWJsZSBmb3Igc3RhdGljIG1lbW9yeSBoZXJlLg0KYCByZWxpbnF1aXNoX21lbW9yeShkLCAm
ZC0+cGFnZV9saXN0KSAgLS0+IHB1dF9wYWdlIC0tPiAgZnJlZV9kb21oZWFwX3BhZ2VgDQoNCkZv
ciBwYWdlcyBpbiBQR0NfcmVzZXJ2ZWQsIG5vdywgSSBhbSBjb25zaWRlcmluZyB0aGF0LCBvdGhl
ciB0aGFuIGdpdmluZyBpdCBiYWNrIHRvIGhlYXAsIG1heWJlIGNyZWF0aW5nDQphIG5ldyBnbG9i
YWwgYHN0cnVjdCBwYWdlX2luZm8qW0RPTUlEXWAgdmFsdWUgdG8gaG9sZC4NCg0KU28gaXQgaXMg
YmV0dGVyIHRvIGhhdmUgYSBuZXcgZmllbGQgaW4gc3RydWN0IHBhZ2VfaW5mbywgYXMgZm9sbG93
cywgdG8gaG9sZCBzdWNoIGluZm8uDQoNCi8qIFBhZ2UgaXMgcmVzZXJ2ZWQuICovDQpzdHJ1Y3Qg
ew0KICAgIC8qDQogICAgICogUmVzZXJ2ZWQgT3duZXIgb2YgdGhpcyBwYWdlLA0KICAgICAqIGlm
IHRoaXMgcGFnZSBpcyByZXNlcnZlZCB0byBhIHNwZWNpZmljIGRvbWFpbi4NCiAgICAgKi8NCiAg
ICBkb21pZF90IHJlc2VydmVkX293bmVyOw0KfSByZXNlcnZlZDsNCg0KQnV0IHRoaXMgcGF0Y2gg
U2VyaWUgaXMgbm90IGdvaW5nIHRvIGluY2x1ZGUgdGhpcyBmZWF0dXJlLCBhbmQgSSB3aWxsIGRl
bGV0ZSByZWxhdGVkIGhlbHBlcnMgYW5kIHZhbHVlcy4NCg0KPiA+ICAgICAgICAgICBzbXBfd21i
KCk7IC8qIERvbWFpbiBwb2ludGVyIG11c3QgYmUgdmlzaWJsZSBiZWZvcmUgdXBkYXRpbmcNCj4g
cmVmY250LiAqLw0KPiA+ICAgICAgICAgICBwZ1tpXS5jb3VudF9pbmZvID0NCj4gPiAgICAgICAg
ICAgICAgIChwZ1tpXS5jb3VudF9pbmZvICYgUEdDX2V4dHJhKSB8IFBHQ19hbGxvY2F0ZWQgfCAx
Ow0KPiANCj4gVGhpcyB3aWxsIGNsb2JiZXIgUEdDX3Jlc2VydmVkLg0KPiANCg0KcmVsYXRlZCBj
aGFuZ2VzIGhhdmUgYmVlbiBpbmNsdWRlZCBpbnRvIHRoZSBjb21taXQgb2YgIjAwMDgteGVuLWFy
bS1pbnRyb2R1Y2UtcmVzZXJ2ZWRfcGFnZV9saXN0LnBhdGNoIi4NCiANCj4gPiBAQCAtMjUwOSw2
ICsyNTEyLDU2IEBAIHN0cnVjdCBwYWdlX2luZm8gKmFsbG9jX2RvbWhlYXBfcGFnZXMoDQo+ID4g
ICAgICAgcmV0dXJuIHBnOw0KPiA+ICAgfQ0KPiA+DQo+ID4gKy8qDQo+ID4gKyAqIEFsbG9jYXRl
IG5yX3BmbnMgY29udGlndW91cyBwYWdlcywgc3RhcnRpbmcgYXQgI3N0YXJ0LCBvZiBzdGF0aWMN
Cj4gPiArbWVtb3J5LA0KPiANCj4gcy9ucl9wZm5zL25yX21mbnMvDQo+IA0KDQpTdXJlLg0KDQo+
ID4gKyAqIHRoZW4gYXNzaWduIHRoZW0gdG8gb25lIHNwZWNpZmljIGRvbWFpbiAjZC4NCj4gPiAr
ICogSXQgaXMgdGhlIGVxdWl2YWxlbnQgb2YgYWxsb2NfZG9taGVhcF9wYWdlcyBmb3Igc3RhdGlj
IG1lbW9yeS4NCj4gPiArICovDQo+ID4gK3N0cnVjdCBwYWdlX2luZm8gKmFsbG9jX2RvbXN0YXRp
Y19wYWdlcygNCj4gPiArICAgICAgICBzdHJ1Y3QgZG9tYWluICpkLCB1bnNpZ25lZCBsb25nIG5y
X3BmbnMsIHBhZGRyX3Qgc3RhcnQsDQo+IA0KPiBzL25yX3BmbnMvbmZfbWZucy8uIEFsc28sIEkg
d291bGQgdGhlIHRoaXJkIHBhcmFtZXRlciB0byBiZSBhbiBtZm5fdC4NCj4gDQoNClN1cmUuDQoN
Cj4gPiArICAgICAgICB1bnNpZ25lZCBpbnQgbWVtZmxhZ3MpDQo+ID4gK3sNCj4gPiArICAgIHN0
cnVjdCBwYWdlX2luZm8gKnBnID0gTlVMTDsNCj4gPiArICAgIHVuc2lnbmVkIGxvbmcgZG1hX3Np
emU7DQo+ID4gKw0KPiA+ICsgICAgQVNTRVJUKCFpbl9pcnEoKSk7DQo+ID4gKw0KPiA+ICsgICAg
aWYgKCBtZW1mbGFncyAmIE1FTUZfbm9fb3duZXIgKQ0KPiA+ICsgICAgICAgIG1lbWZsYWdzIHw9
IE1FTUZfbm9fcmVmY291bnQ7DQo+ID4gKw0KPiA+ICsgICAgaWYgKCAhZG1hX2JpdHNpemUgKQ0K
PiA+ICsgICAgICAgIG1lbWZsYWdzICY9IH5NRU1GX25vX2RtYTsNCj4gPiArICAgIGVsc2UNCj4g
PiArICAgIHsNCj4gPiArICAgICAgICBkbWFfc2l6ZSA9IDF1bCA8PCBiaXRzX3RvX3pvbmUoZG1h
X2JpdHNpemUpOw0KPiA+ICsgICAgICAgIC8qIFN0YXJ0aW5nIGFkZHJlc3Mgc2hhbGwgbWVldCB0
aGUgRE1BIGxpbWl0YXRpb24uICovDQo+ID4gKyAgICAgICAgaWYgKCBkbWFfc2l6ZSAmJiBzdGFy
dCA8IGRtYV9zaXplICkNCj4gPiArICAgICAgICAgICAgcmV0dXJuIE5VTEw7DQo+ID4gKyAgICB9
DQo+ID4gKw0KPiA+ICsgICAgcGcgPSBhbGxvY19zdGF0aWNtZW1fcGFnZXMobnJfcGZucywgc3Rh
cnQsIG1lbWZsYWdzKTsNCj4gPiArICAgIGlmICggIXBnICkNCj4gPiArICAgICAgICByZXR1cm4g
TlVMTDsNCj4gPiArDQo+ID4gKyAgICBpZiAoIGQgJiYgIShtZW1mbGFncyAmIE1FTUZfbm9fb3du
ZXIpICkNCj4gPiArICAgIHsNCj4gPiArICAgICAgICBpZiAoIG1lbWZsYWdzICYgTUVNRl9ub19y
ZWZjb3VudCApDQo+ID4gKyAgICAgICAgew0KPiA+ICsgICAgICAgICAgICB1bnNpZ25lZCBsb25n
IGk7DQo+ID4gKw0KPiA+ICsgICAgICAgICAgICBmb3IgKCBpID0gMDsgaSA8IG5yX3BmbnM7IGkr
KyApDQo+ID4gKyAgICAgICAgICAgICAgICBwZ1tpXS5jb3VudF9pbmZvID0gUEdDX2V4dHJhOw0K
PiA+ICsgICAgICAgIH0NCj4gPiArICAgICAgICBpZiAoIGFzc2lnbl9wYWdlcyhkLCBwZywgbnJf
cGZucywgbWVtZmxhZ3MpICkNCj4gPiArICAgICAgICB7DQo+ID4gKyAgICAgICAgICAgIGZyZWVf
c3RhdGljbWVtX3BhZ2VzKHBnLCBucl9wZm5zLCBtZW1mbGFncyAmIE1FTUZfbm9fc2NydWIpOw0K
PiA+ICsgICAgICAgICAgICByZXR1cm4gTlVMTDsNCj4gPiArICAgICAgICB9DQo+ID4gKyAgICB9
DQo+ID4gKw0KPiA+ICsgICAgcmV0dXJuIHBnOw0KPiA+ICt9DQo+ID4gKw0KPiA+ICAgdm9pZCBm
cmVlX2RvbWhlYXBfcGFnZXMoc3RydWN0IHBhZ2VfaW5mbyAqcGcsIHVuc2lnbmVkIGludCBvcmRl
cikNCj4gPiAgIHsNCj4gPiAgICAgICBzdHJ1Y3QgZG9tYWluICpkID0gcGFnZV9nZXRfb3duZXIo
cGcpOyBkaWZmIC0tZ2l0DQo+ID4gYS94ZW4vaW5jbHVkZS94ZW4vbW0uaCBiL3hlbi9pbmNsdWRl
L3hlbi9tbS5oIGluZGV4DQo+ID4gZGNmOWRhYWE0Ni4uZTQ1OTg3ZjBlZCAxMDA2NDQNCj4gPiAt
LS0gYS94ZW4vaW5jbHVkZS94ZW4vbW0uaA0KPiA+ICsrKyBiL3hlbi9pbmNsdWRlL3hlbi9tbS5o
DQo+ID4gQEAgLTExMSw2ICsxMTEsMTAgQEAgdW5zaWduZWQgbG9uZyBfX211c3RfY2hlY2sNCj4g
ZG9tYWluX2FkanVzdF90b3RfcGFnZXMoc3RydWN0IGRvbWFpbiAqZCwNCj4gPiAgIGludCBkb21h
aW5fc2V0X291dHN0YW5kaW5nX3BhZ2VzKHN0cnVjdCBkb21haW4gKmQsIHVuc2lnbmVkIGxvbmcN
Cj4gcGFnZXMpOw0KPiA+ICAgdm9pZCBnZXRfb3V0c3RhbmRpbmdfY2xhaW1zKHVpbnQ2NF90ICpm
cmVlX3BhZ2VzLCB1aW50NjRfdA0KPiA+ICpvdXRzdGFuZGluZ19wYWdlcyk7DQo+ID4NCj4gPiAr
LyogU3RhdGljIE1lbW9yeSAqLw0KPiA+ICtzdHJ1Y3QgcGFnZV9pbmZvICphbGxvY19kb21zdGF0
aWNfcGFnZXMoc3RydWN0IGRvbWFpbiAqZCwNCj4gPiArICAgICAgICB1bnNpZ25lZCBsb25nIG5y
X3BmbnMsIHBhZGRyX3Qgc3RhcnQsIHVuc2lnbmVkIGludCBtZW1mbGFncyk7DQo+ID4gKw0KPiA+
ICAgLyogRG9tYWluIHN1YmFsbG9jYXRvci4gVGhlc2UgZnVuY3Rpb25zIGFyZSAqbm90KiBpbnRl
cnJ1cHQtc2FmZS4qLw0KPiA+ICAgdm9pZCBpbml0X2RvbWhlYXBfcGFnZXMocGFkZHJfdCBwcywg
cGFkZHJfdCBwZSk7DQo+ID4gICBzdHJ1Y3QgcGFnZV9pbmZvICphbGxvY19kb21oZWFwX3BhZ2Vz
KA0KPiA+DQo+IA0KPiBDaGVlcnMsDQo+IA0KPiAtLQ0KPiBKdWxpZW4gR3JhbGwNCg0KQ2hlZXJz
LA0KDQpQZW5ueQ0K


From xen-devel-bounces@lists.xenproject.org Wed May 19 06:31:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 06:31:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129875.243554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFjy-0004Pp-N8; Wed, 19 May 2021 06:31:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129875.243554; Wed, 19 May 2021 06:31:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFjy-0004Pi-JB; Wed, 19 May 2021 06:31:34 +0000
Received: by outflank-mailman (input) for mailman id 129875;
 Wed, 19 May 2021 06:31:33 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xObs=KO=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ljFjw-0004Pa-SZ
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 06:31:33 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c9101ca6-fbf5-489b-ba66-296d49a5f86a;
 Wed, 19 May 2021 06:31:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c9101ca6-fbf5-489b-ba66-296d49a5f86a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621405891;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=FZ6+xoS8uThBtECknO4vRFgPzMMZeVCDhkbUuV3RH4Q=;
  b=c6f53r8OQm2nWt3tTZuFRIOmCeZeibcjaBk/jXa/VzUPwDQ8J0IL5ONj
   LmKU67Bq/OmU8c4c84NmDUlD8GV52q/SO2KArLs1DxDK3RAS81dCx+ZbB
   nP2j1AIvdtvdriGAl2GzlJr8jJsONFXQZR49SkBWs8ZzkeqcTKJb+2JjG
   s=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 8zE8lz1tGO7MBnWzpEN1+cZm3iw/UHVV49mGC+Pw4YnpVZXaqx8TIPHgcnhaQWk6BcUzsgO4VF
 wdhKU3tc4iJH/jFj2PndUyruG8gRIAQFHpNqlJsfSROLLv7SCDaGpW6VuyIpqnTcdkqXhemGTx
 CISWOabYAU9RfpS+JxYizrTDu2ZgtFvKKMSgVs+Iq4424ljhABaeUuYsL+TmP7ke1eXtCgjL7R
 0hQpqI6WhJXaKr+3nXLwsqDOS9Fxo4I3wpOVkEUfuVagn/iIq6gK+61C8/HOS4gRvK1UEL2WWq
 Z90=
X-SBRS: 5.1
X-MesageID: 44095803
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:0mxoJ6HUDNZ54AcdpLqF0JHXdLJyesId70hD6qkvc3Jom52j+P
 xGws526faVslYssHFJo6HnBEClewKgyXcT2/hsAV7CZnidhILMFuBfBOTZsljd8kHFh4pgPO
 JbAtdD4b7LfCtHZKTBkXGF+r8bqbHtms3Y5pa9vgJQpENRGtpdBm9Ce3am+yZNNXB77PQCZf
 2hDp0tnUvQRZ1bVLX1OpFrNNKz6eHjpdbDW1orFhQn4A6BgXeD87jhCSWV2R8YTndm3aoi2X
 KtqX272oyT99WAjjPM3W7a6Jpb3PH7zMFYOcCKgs8Jbh3xlweTYph7UbHqhkF0nAiW0idurD
 DwmWZlAywqgEmhOV1d4CGdmjUI6QxeqUMLkjSj8D3eSaWTfkNJNyJD7bgpOCcxpXBQ5e2Vfc
 pwriukXqFsfGT9dRLGlpP1viFR5z6JSEUZ4JguZlxkIMYjgexq3MAiFH08KuZJIMus0vFYLA
 ApNrCF2Mpr
X-IronPort-AV: E=Sophos;i="5.82,312,1613451600"; 
   d="scan'208";a="44095803"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=HtFeiNlhP7Bb6TSiyJr/0ROqe87MfuHlGGK7K65ocmeDg99+BCiHuc/6dIjNbKhVKSyece1Y0VtEepjXBJ6ZX5KbxpzZTCL2EFmMgVzwTgP4oMQ+8XiobxDG4j2GWinEtRaD9dacqosG5hcBGBaUFLvIy3NefuQZKIv5CbhcC59yop0ESnMCgN+XogSBqW4uhOuT5iq0K0O/KYv0ZF9AOIdTd0nu0E8/PfGYm/376PtHA9BAGWOuybPFV/NHKPeTEgiOd17EaLoe46AVqdz3ITPVdxMV00T2SCiRkx2jHkSQLzCqV2Q7R+ND/BxYN898zyGX2YADxJmPWSzWLIPy3g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ISQhvIx7mV5WsPnv++Xg0eaJ16OM11/+z5eJsHkoCRc=;
 b=cOR2aneSrzxpbHrzmS3HKOy5pkWlNKyNhBecZJHciimxAYpied1BD0DYktg3vMdGHT7BME+K2DN6VThc6Kisujtf6cENZgxH/nw3YoKXV4ixRVl0rCFT8wHI6tYF7aOr39GMfetZnr9rSA3eBTeJNxLjSNknYl5KMAiYCaGzmoQhCS8TriiSQxv+rQwwQ5InFizYrWRIyNbpSxDxJNnqU6qqrAtW5ghCAcdYTXHUNvkuAN64a0eTnMwZpMRAGTZi/hIWgLTfCzxnS/W7sC02hg83TKFNzR+u1CDvV/3wRdT9rEfbs+sqJFyoOUw6Owd6go8avlPiSgZcjABQGhLWQg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ISQhvIx7mV5WsPnv++Xg0eaJ16OM11/+z5eJsHkoCRc=;
 b=tMK459cxYhc0m6IULOs4cijqb4aDw3jGgaKTG88sdq2y9NLoviWsoBkrhXSPeVJ0o7BzrhJShDkKJzfB1ZPo3agFxLSK+8I9wi455k4lMlVn5INIMJ8N+rFyK3oa+5HKmK5B646exYzHC6ehrLAIWlVPsInNHJ42xWDnHx5E6Pc=
Date: Wed, 19 May 2021 08:28:12 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
CC: <xen-devel@lists.xenproject.org>, Doug Goldstein <cardoe@cardoe.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 2/2] automation: fix dependencies on openSUSE
 Tumbleweed containers
Message-ID: <YKSv/BGxuy+OCn3t@Air-de-Roger>
References: <162135593827.20014.14959979363028895972.stgit@Wayrath>
 <162135616513.20014.6303562342690753615.stgit@Wayrath>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <162135616513.20014.6303562342690753615.stgit@Wayrath>
X-ClientProxiedBy: PR0P264CA0053.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:100:1d::17) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2508ad14-5e0b-4618-03c6-08d91a8f49be
X-MS-TrafficTypeDiagnostic: DM5PR03MB2971:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB2971AEC4A1D806B00BEA44468F2B9@DM5PR03MB2971.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6430;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: NBfXBDKk8G7xzwlDNPIbw2060Sg1emeseKqamemgil5hcoql9SVAaHPke6Q1JbuF+gZYoKzQJaTIN9PUSF2OXuD5qNVb+TxURS56ccg3Mo3zl6CgMKYsqHDnMuRFMZ+q1zBdShzsRK+N5RMD+tqdHAG8T4fIUse4yfr7HsP1sAgwzNYImZj79vcTATZjLP2laIplULUKWQtL79Ka3lzTsuBKHntRtfhsc8IL5o7i4GbRdE+T+DOD3KLp7zL9S6svtLCQDfeLSnPX2vRzMFCtyNYNHCWa3niUERrPHlDuMIt+xHcsmLC4YlNzBYEUl4lfoo0lyW5ELRitE/+K4lFQmByeQI5OW44K/MFiMgOLOXB+DEgK8IQoKbnwD9DoM4SZjBampqQ3sLrGFPeyobexLSupuTwhTfSd8wBZlXzwBVbnw3KSxaN0V/l09p1HXMl6g7QASI2Z0zmC9Ez/6G/GG/8fJ9wrt+zrKEJpM2DUMGaOGMojRfW+4IQtg0V5v1SQzM4INDyE+J7wivCdK2X+cW1HAS5YPUz70kHbwMyyeGkF6sb+NEJrhAMk2ALcye6tKuJfJFjk9PTJCpyRbRQDMCAaF4SslaSR0guzrn22S60=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(346002)(376002)(136003)(396003)(39850400004)(366004)(8936002)(8676002)(33716001)(66476007)(66556008)(66946007)(478600001)(956004)(54906003)(6666004)(186003)(4744005)(85182001)(6486002)(6496006)(4326008)(316002)(16526019)(38100700002)(5660300002)(107886003)(6916009)(86362001)(26005)(83380400001)(2906002)(9686003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?d3JyaGJoODNNVFJiWSs4NUtQSFdXTVJWUzJhak9qR3RhWFEyVVRwNWI5TGNt?=
 =?utf-8?B?YWJleDdJYnl1ZzFaa0lpUkE2cnBMWk1SS1lraUZWMmxsSEhNR2w0ZTE5Z3BQ?=
 =?utf-8?B?NTAzUEo0VktnMTZUSmF3dWkxczM2UHhFSVY3cFIrK0xSWEw3dUFlQ1ZNTW1w?=
 =?utf-8?B?RFlRWUJIbWwvNHlkaGhUMjNkTm5nQi9rMnY3UXZIaHhMZGNLRExsOFJjdlBn?=
 =?utf-8?B?aWhMZE11SG5LSzdCcUd1YWIvMC9MaHEzcEtKTHlnd2ljYjBCRFZIaU1qZlU2?=
 =?utf-8?B?WkhrR29yTmZOWWtCWDQydStnM1g1QTZSQy9tb1hDc2hTQUNDWkkzaVZQZTlY?=
 =?utf-8?B?V1h3VisyUk1OdXJUMmQvV3RzVXd2MC82SU5hZllFb3FPSDl2RGJQaVNwUlNK?=
 =?utf-8?B?bTh2VnEraFg3MXFOc0E5aWlEeXQ0S2ZzZFV5L21ieis1QTgrZ2xHa0Z2Tktm?=
 =?utf-8?B?czE0ZU44b3ZnUllKNXZzN0FoYUJpOFlFdWxQZGFzWHFEdUtEWWVzOFVYTjds?=
 =?utf-8?B?WXRSVDBwNU9qU2NVVmlFUWthc2pINEpGS3lrL0VKWmwrdFNDOWRwOVZlcXp1?=
 =?utf-8?B?WDBRRm5sQVpTQmdBdnFsWkZURDdBNURvb2tJNnRMWjBCdHhMVEJwYi91YlFz?=
 =?utf-8?B?TDhUT3JDeWszay8yZVNUUjFUUG1lWm1uN05aV0xRTXFYTjNiNHdaeDY4V2U3?=
 =?utf-8?B?Zmhid3Jpc1hZUHhJQUEwNVRyTG1zbGs5THNKY052NU9JazBuQnVRRW55L0gx?=
 =?utf-8?B?SlhRdHgwVUR6Z3VFSGhrMnVBZkdqNUhyQ0JqbmVTMDVKMFNJeUlMTGozMktt?=
 =?utf-8?B?Y0txbUJEeWdOYjNmZndiQzFBTHQ3U3d0a2cweHNXRXpGUDAyczlZSEpxMThl?=
 =?utf-8?B?M2dkbWpnT0JmRVpMMDFZeHg4anNCOC95MTc0K3BQelNmYWNBZ2JwamlPL0x0?=
 =?utf-8?B?c2E5U1l3THZUWjlWVmtMQnF5UlByRURaZENDYnFuSGNZQzlvVkg1V3p3cVpr?=
 =?utf-8?B?eEc2UFRMNXlOK1g4TkRpVndTaCtyTnFoUVNhTTkySTRFNlFSNXp2bkJ4WkxE?=
 =?utf-8?B?YW9rTVpUWlFsUmF4WHpBckplYmk3VmRGR2l3UXd5ZnFBbFdmS29ZMWhIY1dL?=
 =?utf-8?B?VzlNZ1E1cmhndXdOd2w1SHVOWDRGZmZiMVNaZk5YcGJENWt4RDhESVIvekpG?=
 =?utf-8?B?S0RDMnZoR3ZFbkN1TVgvVTI5cmVMakdIdW1FcTh3bk9vdEFDYjk2MzliT3RY?=
 =?utf-8?B?dG1KRVZBUlQ5RjVydy9hNWM0QjlqYTJucnpYWTFndU85RlZ3cUt2aHJKeURu?=
 =?utf-8?B?MGI2ak9VQUpNWkw3VjM5a0FMMStiWlM4Nmd3UzFVZE9rdmN2R3cvaTZtemV1?=
 =?utf-8?B?KzlLOUVrZkh5eVZMUlpGcXNCWVpsVENGb0hmRTFDUXhBcVZBbk1EWGFsU1lq?=
 =?utf-8?B?cVpyS0dNVDR1Yk05ejlmWS9JRVZJV2QxNDhlTzR6aWZ4YVcvbFphc2JCelh5?=
 =?utf-8?B?SlltdkhNSFRGd1VHVmlZR1RVUXNsSllveXd3ZktuK2dVWjFqMXdocXFEMWZU?=
 =?utf-8?B?cys3WjBjVkJYTytIcWpYVERndlcwYm9VVmU0cW5WUit2MDFxZk04cXVOUzYx?=
 =?utf-8?B?OHNzWWMzVzI4UTdnV0lhRVVaWU9PWWQvdTZUVCtjZ24rNXlZTzlST0tEUHJ1?=
 =?utf-8?B?UEErTFdtcGxNQ0hyTWp4c2ZuVE5qY2xjbTBYa2JUd3NnQXl4cVdJZGoveG1w?=
 =?utf-8?Q?s7xkIflm6zAVObtwb2h09NnQGp3TrGnkWB9+1Ce?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 2508ad14-5e0b-4618-03c6-08d91a8f49be
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 06:28:17.0715
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: wY5wreM47g4GD6BO3NUUgQyjJVOVOFcCYwLnVWiFt8hWIQ79uB9LCLUhjkBBW3RUyxqKZS+xBfTUjtB9uVCFmw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2971
X-OriginatorOrg: citrix.com

On Tue, May 18, 2021 at 06:42:45PM +0200, Dario Faggioli wrote:
> Fix the build inside our openSUSE Tumbleweed container by adding
> libzstd headers. While there, remove the explicit dependency
> for python and python3 as the respective -devel packages will pull
> them in anyway.
> 
> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Can you try to push an updated container to the registry?

Thanks!


From xen-devel-bounces@lists.xenproject.org Wed May 19 06:43:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 06:43:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129880.243565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFvs-0005vB-QK; Wed, 19 May 2021 06:43:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129880.243565; Wed, 19 May 2021 06:43:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFvs-0005v4-NI; Wed, 19 May 2021 06:43:52 +0000
Received: by outflank-mailman (input) for mailman id 129880;
 Wed, 19 May 2021 06:43:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljFvs-0005uy-59
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 06:43:52 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.81]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9a3c4676-f627-41f0-b1bb-47e4d84d35e9;
 Wed, 19 May 2021 06:43:48 +0000 (UTC)
Received: from AM5PR0602CA0011.eurprd06.prod.outlook.com
 (2603:10a6:203:a3::21) by AM0PR08MB3972.eurprd08.prod.outlook.com
 (2603:10a6:208:127::32) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 19 May
 2021 06:43:46 +0000
Received: from AM5EUR03FT015.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:a3:cafe::d6) by AM5PR0602CA0011.outlook.office365.com
 (2603:10a6:203:a3::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.33 via Frontend
 Transport; Wed, 19 May 2021 06:43:46 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT015.mail.protection.outlook.com (10.152.16.132) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 06:43:46 +0000
Received: ("Tessian outbound 3c5232d12880:v92");
 Wed, 19 May 2021 06:43:45 +0000
Received: from a14a8ee00a0d.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 AB202A36-AF05-409E-A74D-3D6E89A796B9.1; 
 Wed, 19 May 2021 06:43:39 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a14a8ee00a0d.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 06:43:39 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB4368.eurprd08.prod.outlook.com (2603:10a6:803:fe::16)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 06:43:28 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 06:43:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9a3c4676-f627-41f0-b1bb-47e4d84d35e9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TDkPoAmre9aQNnw42rYDOBvpVXncRLGsNG06vQE78PI=;
 b=HD6HuCzGFz4KIzqcFP+opPFptCfRfTiPbCFYfst/EbFBWQFuKaHuYG3SMo+qtc0to/14a6hpCf7nd8dSkDa7BamwHTSjqBkOZjjSQznoZIIfBeTVdMnnbVKnCRGDO6v49TFKfz3cY6BXFcs2DFYnjQ/pLgzoac0jsDxcbJk1yLE=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Gvo/E+0Ogfogq5gznPetqKAh3HunWoI0bRDqxx1jd3z/IHe6xe7qzsEe1SEZnXxf6p8LvYgApS+zQy0B+GMhoafQFh9mdyKksSOfkcJ8PGCW9soFo+hVpvT2P15at2k6i+zHvzIrAjoq7fwvtAUbKzurmOLKRBUKDvIGPWYhhIg2txSk2Wkh1I3W8g0QsMhCMFSCitnpGmZzwPa/ZQMaQyh3bETPDztkgnOXYXHRne0xp2Je4F1HfAUBSDkPNuxEMj7kYEi1YKFb9Cj9p68oM1jNx2Fz0nidYxWsFJqKbOwCtcXHsVgPkYNZfiS13JaatDSBz2t0D4S31ZQuxsXWgQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TDkPoAmre9aQNnw42rYDOBvpVXncRLGsNG06vQE78PI=;
 b=SNVc+zcM4LhWRu4XU/qBv+huzCKyfNMBxIpTLOFXG/tWeGZXOJ7mofXFDTCRnPu24Y3VjxRLl7neB495IuaclnokXLiUDAxWY4t7GPy3p9m1c8WLp1sUkUwT3vObqYGajECDPS/Y0gyaEZ70eelTmek45+oBTVJwzWDYWDVruiOJtDDQdXVqqs7SL0UeLPX0j1E299TKD9el1pQznnH4M8jtLGfZCyGNknohtOkiHSnUxdXD6gZ6iRWiOeQVwoBsbnVjVs0fFUoYXBIoSXMzpPSPesALF/kqkwRBJ9tjmfG+tS6DYtHp0a9lJzcynp2xpgX+Air5j9ccklBu5bcW4Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TDkPoAmre9aQNnw42rYDOBvpVXncRLGsNG06vQE78PI=;
 b=HD6HuCzGFz4KIzqcFP+opPFptCfRfTiPbCFYfst/EbFBWQFuKaHuYG3SMo+qtc0to/14a6hpCf7nd8dSkDa7BamwHTSjqBkOZjjSQznoZIIfBeTVdMnnbVKnCRGDO6v49TFKfz3cY6BXFcs2DFYnjQ/pLgzoac0jsDxcbJk1yLE=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 08/10] xen/arm: introduce reserved_page_list
Thread-Topic: [PATCH 08/10] xen/arm: introduce reserved_page_list
Thread-Index: AQHXS6XFXoougD95T02cEJMQScUNQ6rpEy8AgAFDiqA=
Date: Wed, 19 May 2021 06:43:28 +0000
Message-ID:
 <VE1PR08MB5215D90DCB8B2BB6DF6140EDF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-9-penny.zheng@arm.com>
 <c002d9b2-8210-1c03-b374-76e037b65e2f@xen.org>
In-Reply-To: <c002d9b2-8210-1c03-b374-76e037b65e2f@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 0C72FC08968CD642B9A78CEA2E5C66DB.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 64c3b6d2-4d35-4a3f-16ce-08d91a9173a5
x-ms-traffictypediagnostic: VI1PR08MB4368:|AM0PR08MB3972:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB397242897EE9C76A301F1110F72B9@AM0PR08MB3972.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:935;OLM:935;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4368
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT015.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	db6c401e-cdc4-4c2b-0ef1-08d91a916910
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 06:43:46.1595
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 64c3b6d2-4d35-4a3f-16ce-08d91a9173a5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT015.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB3972

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 7:02 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
>
> Hi Penny,
>
> On 18/05/2021 06:21, Penny Zheng wrote:
> > Since page_list under struct domain refers to linked pages as gueast
> > RAM
>
> s/gueast/guest/
>

Thx~

> > allocated from heap, it should not include reserved pages of static memory.
>
> You already have PGC_reserved to indicate they are "static memory". So why
> do you need yet another list?
>
> >
> > The number of PGC_reserved pages assigned to a domain is tracked in a
> > new 'reserved_pages' counter. Also introduce a new reserved_page_list
> > to link pages of static memory. Let page_to_list return
> > reserved_page_list, when flag is PGC_reserved.
> >
> > Later, when domain get destroyed or restarted, those new values will
> > help relinquish memory to proper place, not been given back to heap.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/common/domain.c     | 1 +
> >   xen/common/page_alloc.c | 7 +++++--
> >   xen/include/xen/sched.h | 5 +++++
> >   3 files changed, 11 insertions(+), 2 deletions(-)
> >
> > diff --git a/xen/common/domain.c b/xen/common/domain.c index
> > 6b71c6d6a9..c38afd2969 100644
> > --- a/xen/common/domain.c
> > +++ b/xen/common/domain.c
> > @@ -578,6 +578,7 @@ struct domain *domain_create(domid_t domid,
> >       INIT_PAGE_LIST_HEAD(&d->page_list);
> >       INIT_PAGE_LIST_HEAD(&d->extra_page_list);
> >       INIT_PAGE_LIST_HEAD(&d->xenpage_list);
> > +    INIT_PAGE_LIST_HEAD(&d->reserved_page_list);
> >
> >       spin_lock_init(&d->node_affinity_lock);
> >       d->node_affinity = NODE_MASK_ALL; diff --git
> > a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > f1f1296a61..e3f07ec6c5 100644
> > --- a/xen/common/page_alloc.c
> > +++ b/xen/common/page_alloc.c
> > @@ -2410,7 +2410,7 @@ int assign_pages(
> >
> >           for ( i = 0; i < nr_pfns; i++ )
> >           {
> > -            ASSERT(!(pg[i].count_info & ~PGC_extra));
> > +            ASSERT(!(pg[i].count_info & ~(PGC_extra |
> > + PGC_reserved)));
> I think this change belongs to the previous patch.
>

Ok. It will be re-organized into the previous commit,
"xen/arm: introduce alloc_domstatic_pages".

> >               if ( pg[i].count_info & PGC_extra )
> >                   extra_pages++;
> >           }
> > @@ -2439,6 +2439,9 @@ int assign_pages(
> >           }
> >       }
> >
> > +    if ( pg[0].count_info & PGC_reserved )
> > +        d->reserved_pages += nr_pfns;
>
> reserved_pages doesn't seem to be ever read or decremented. So why do
> you need it?
>

Yeah, I may delete it from this patch series.

Like I addressed in previous commits:

"when implementing rebooting domain on static allocation, memory will be relinquished
and right now, all shall be freed back to heap, which is not suitable for static memory here.
` relinquish_memory(d, &d->page_list)  --> put_page -->  free_domheap_page`

For pages in PGC_reserved, now, I am considering that, other than giving it back to heap,
maybe creating a new global `struct page_info*[DOMID]` value to hold.

So it is better to have a new field in struct page_info, as follows, to hold such info.

/* Page is reserved. */
struct {
    /*
     * Reserved Owner of this page,
     * if this page is reserved to a specific domain.
     */
    domid_t reserved_owner;
} reserved;
"

So I will delete it here, and re-import it when implementing rebooting a domain on static allocation.

> > +
> >       if ( !(memflags & MEMF_no_refcount) &&
> >           unlikely(domain_adjust_tot_pages(d, nr_pfns) == nr_pfns) )
> >           get_knownalive_domain(d);
> > @@ -2452,7 +2455,7 @@ int assign_pages(
> >               page_set_reserved_owner(&pg[i], d);
> >           smp_wmb(); /* Domain pointer must be visible before updating
> refcnt. */
> >           pg[i].count_info =
> > -            (pg[i].count_info & PGC_extra) | PGC_allocated | 1;
> > +            (pg[i].count_info & (PGC_extra | PGC_reserved)) |
> > + PGC_allocated | 1;
>
> Same here.

I'll re-organize it into the previous commit.

>
> >           page_list_add_tail(&pg[i], page_to_list(d, &pg[i]));
> >       }
> >
> > diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h index
> > 3982167144..b6333ed8bb 100644
> > --- a/xen/include/xen/sched.h
> > +++ b/xen/include/xen/sched.h
> > @@ -368,6 +368,7 @@ struct domain
> >       struct page_list_head page_list;  /* linked list */
> >       struct page_list_head extra_page_list; /* linked list (size extra_pages) */
> >       struct page_list_head xenpage_list; /* linked list (size
> > xenheap_pages) */
> > +    struct page_list_head reserved_page_list; /* linked list (size
> > + reserved pages) */
> >
> >       /*
> >        * This field should only be directly accessed by
> > domain_adjust_tot_pages() @@ -379,6 +380,7 @@ struct domain
> >       unsigned int     outstanding_pages; /* pages claimed but not possessed
> */
> >       unsigned int     max_pages;         /* maximum value for
> domain_tot_pages() */
> >       unsigned int     extra_pages;       /* pages not included in
> domain_tot_pages() */
> > +    unsigned int     reserved_pages;    /* pages of static memory */
> >       atomic_t         shr_pages;         /* shared pages */
> >       atomic_t         paged_pages;       /* paged-out pages */
> >
> > @@ -588,6 +590,9 @@ static inline struct page_list_head *page_to_list(
> >       if ( pg->count_info & PGC_extra )
> >           return &d->extra_page_list;
> >
> > +    if ( pg->count_info & PGC_reserved )
> > +        return &d->reserved_page_list;
> > +
> >       return &d->page_list;
> >   }
> >
> >
>
> Cheers,
>
> --
> Julien Grall

Cheers

Penny Zheng


From xen-devel-bounces@lists.xenproject.org Wed May 19 06:46:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 06:46:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129886.243576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFyc-0006dM-Cy; Wed, 19 May 2021 06:46:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129886.243576; Wed, 19 May 2021 06:46:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljFyc-0006dF-9o; Wed, 19 May 2021 06:46:42 +0000
Received: by outflank-mailman (input) for mailman id 129886;
 Wed, 19 May 2021 06:46:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljFya-0006d8-Uo
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 06:46:41 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.73]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 77c3da16-163c-4abf-ad64-8c0348d52495;
 Wed, 19 May 2021 06:46:39 +0000 (UTC)
Received: from DB6PR0801CA0058.eurprd08.prod.outlook.com (2603:10a6:4:2b::26)
 by AM9PR08MB5873.eurprd08.prod.outlook.com (2603:10a6:20b:2dd::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Wed, 19 May
 2021 06:46:37 +0000
Received: from DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:2b:cafe::b2) by DB6PR0801CA0058.outlook.office365.com
 (2603:10a6:4:2b::26) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.32 via Frontend
 Transport; Wed, 19 May 2021 06:46:37 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT035.mail.protection.outlook.com (10.152.20.65) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 06:46:36 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Wed, 19 May 2021 06:46:36 +0000
Received: from 1dd00e7453c0.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1774D172-7DFA-473C-BA3E-27FADDC84E4F.1; 
 Wed, 19 May 2021 06:46:30 +0000
Received: from EUR01-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1dd00e7453c0.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 06:46:30 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR0801MB1920.eurprd08.prod.outlook.com (2603:10a6:800:8f::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 06:46:29 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 06:46:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 77c3da16-163c-4abf-ad64-8c0348d52495
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2pxsC1t+NLF4X5GmOARzvzf6dayhkqZGGMAervDQAUE=;
 b=LstdcJkjQDzuXc8Rc95uZOPN9VCCRu/Ccle6f5iSG5c8mqwHXIROYkbPKcWarZIWSg0Nnm1BCNlpdlBQrEYayY58jvsqDpjDsm79JNBZVUKqLVDtF3hqdlMXBs3Bd9Ur70TmOGHdWT6PG/MKaXQLDKlam15zFTry1iwa+OhuMOg=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=hfor7oII/j5wGZwvzLyqe14viMjqr0alhmKgknzJK8aWOZ1HwUjkwN3PIVtuN8595xy9UM7jd45J4McI+yz81V7Kul24RzS4qsDFiLWl1yD3o9qEaovP2x0xCwXqQ0UzU9nKz2g7nsm1brDioCKv8CFHH+0vIkzKZEqjMhkJ0q7ET9lVym1ZyFJ8lLRipg0yBkPLrDqZZ1kP0/FlPz0TgQfPTZT5Ls5RWudsvGVHHwB1PLrgp4jovk/rc5jmEtNiTZOsFxsD4kS72cXrulftRGsIG1NxSJQJI8kBjwkJK8Gx6rfJWApKvQ8hgQgc7E5i9ndpT5oDFZM7Vhg/L4TbeQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2pxsC1t+NLF4X5GmOARzvzf6dayhkqZGGMAervDQAUE=;
 b=V5DAUOsY1fXtQlspeYI8HD4B9bi+BOSgb15n1MuLLMSiCvbfs371TNmJ9zekabozeVjwv71ly+y58fznCTH68wbJnYjM/lBmDt2uUelocCCbgRu4zm+1YyjKoc907z51nkAM+T/i8sTbW8Bz/aLHjro97J9gyRuJnTDwTLXclbTqpigcb3SIolQQ6acsA0m5I7TXN8WlyOUdN/l6GpglTk1+Mdj3YMPMHBgtq9DyS64e8PciMb0LIdJ87HnmFch65WvGf12L76lxwOzdpajv84cpmuqj3eNJruf6UrM+z02J/XIoi6nwZl/mNvLKr2pZGRo9HXpUk/0+UgjosgeMpA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2pxsC1t+NLF4X5GmOARzvzf6dayhkqZGGMAervDQAUE=;
 b=LstdcJkjQDzuXc8Rc95uZOPN9VCCRu/Ccle6f5iSG5c8mqwHXIROYkbPKcWarZIWSg0Nnm1BCNlpdlBQrEYayY58jvsqDpjDsm79JNBZVUKqLVDtF3hqdlMXBs3Bd9Ur70TmOGHdWT6PG/MKaXQLDKlam15zFTry1iwa+OhuMOg=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 08/10] xen/arm: introduce reserved_page_list
Thread-Topic: [PATCH 08/10] xen/arm: introduce reserved_page_list
Thread-Index: AQHXS6XFXoougD95T02cEJMQScUNQ6ro2ogAgAAQIWCAAC7qgIABQ90g
Date: Wed, 19 May 2021 06:46:29 +0000
Message-ID:
 <VE1PR08MB521533FA7B7D799DA7083256F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-9-penny.zheng@arm.com>
 <97bc6ca6-206b-197f-de08-20691578b9db@suse.com>
 <VE1PR08MB521538CF7B0BFED3007E5D8DF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <285e39b3-1bfc-aff3-31e7-d29993d77a20@suse.com>
In-Reply-To: <285e39b3-1bfc-aff3-31e7-d29993d77a20@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 361DFC8BAB7DF544BB4BCD3708316DB1.0
x-checkrecipientchecked: true
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 572e14e1-3b5f-449e-5ca3-08d91a91d967
x-ms-traffictypediagnostic: VI1PR0801MB1920:|AM9PR08MB5873:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB587342E9E58EBB79F35F64B3F72B9@AM9PR08MB5873.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?MTZQSDhlMWx0Sis4Zm4xUHlSMFNkQkhUUlRsSUJ1MGdrNWZZY2J5MkRvMWpF?=
 =?utf-8?B?Vys0aDlOQThkVHR4bU00dUZSaWZnNW1ueEo5TGNxQjJnWkpaVlAvcmQwcXNz?=
 =?utf-8?B?R3ZqdmNuQXFHVVNNOEUwM09UZFZPekgwUUt2V3RxNlNQTS9hamFROVRPMGZE?=
 =?utf-8?B?UGhPZEEvcGQ1a1kyeGFzOG5CY0VlcFNQbytHSFBHZ3ROUEtzZEFGTnVsMVZZ?=
 =?utf-8?B?eUpxdTdodmJINnFNNzdZc29lSmRZeUpKblVLTkdENHJhVDAyY01wNmlBUVhJ?=
 =?utf-8?B?K0ZTaXI2eW9IekVjY0JiVXJ5RTF1NXdBOXBqOStQWVRLeHRaOHR1aHUxcDd5?=
 =?utf-8?B?MlMwQjl4S3phcTY4N3FqWS9DRGxqQlhBMmtCUFhlbUdNVG5LY0d6SnhRZ0Qx?=
 =?utf-8?B?MlRLUndNS1QwbUwxdVA3RlRmbVArckxMOVFneFdJRi9nWXZISEZWWHVFRlor?=
 =?utf-8?B?N0w2VWQxSmFKWGRKeGFDUHRVSlEvcVNnU2Uzc0ZmaTRmR1VVSXVYY0hmdHk0?=
 =?utf-8?B?cXkzajk2TDhUOTFCZWl0NU1YRU83SjhGZU04NEFzV3NhWXpKaUR4aXNQMTZh?=
 =?utf-8?B?dm5nYS9DajRzM1UvbEpIa2xFVlVUcFhNQks0SFlkVHB5TytjdXpGMW1oTGtR?=
 =?utf-8?B?Q2ZzN01DeHlIcmV2MlhBYi81a3pBdXVxVWFFNks0dDNRc1M0WUk1S3VvQVFY?=
 =?utf-8?B?V05nZDlsbmRsb012amVublAzN05SbGFMRGhIUGpYWlh6YzRoNDRRTzllbjJt?=
 =?utf-8?B?QlhOTk1rTFFqcGc2TjNtdUN3ZyswamlkSWZpVlY2UmlCN041SUVCZFlvUVFT?=
 =?utf-8?B?Sk13VWlDV3RlRFdkUlVob0p4K3hNdWFCbnFaYzdMZ0dHQzhBdVlsb3FZSEFQ?=
 =?utf-8?B?Y2s4QXNwNTJaZXlQQ1NTcExOYWE5aW1XUHFHOVpMa3JDNnM0OWFMblVzak5S?=
 =?utf-8?B?RXgrazJqOXBVVHZmUCtEWThaY0RsbkJJVC8zTE9OMVhkNjR0U3FGVGQvcS9L?=
 =?utf-8?B?QXhHb1hFdFY3S2dZV0pSeklXVWEwYUZhZzJKeGE0dGZNN1BtS053RGxLVTlr?=
 =?utf-8?B?NDN1alVFb0luS3UvT2lBcW9qT2NWcjNMN0JFSEV5VjJORVU5MjFIVTliTFFV?=
 =?utf-8?B?bnliZW01ZS9WWVdnTXY2ZFI2RkZXVVRwUENuYjE0dVY2UDRrdDlmL2tLbTlK?=
 =?utf-8?B?cE1yS25xWk5uZkhHM1BVcVdHd1gweHEzZVpkNnZJSVZINURNbmdnRzlsWVQ5?=
 =?utf-8?B?QnBWMklXcU9BZTlBU2dNMVNTY29ma1VVcklxaWFnWitYQ2NQMU1pRDZpZVRM?=
 =?utf-8?B?M2ZUQk04QlIvSW1kbS8xR1pzZjNnMEM1L3N5bUdhalhIdzk3VUtVaDRxQ25P?=
 =?utf-8?B?SUdjbmVmZGlnTFJhOVJzY1BjU0tIS3VJKzc0OVNTS1d1U0ZKZCtGNGpZaHpU?=
 =?utf-8?B?NVRNTzR1aUxQa01JWFhkcmhZWGZxTjRhWXRUVGVpOVRUR25PblE2czJvaU1X?=
 =?utf-8?B?SlcySXF5Wk55a0JnS2hRZ2J0b3BBK3laY2xyL1dQV3U5YzdmNERpWDRHcGRx?=
 =?utf-8?B?b3pvVkRhOStpeUtZY1BzQVpPWkFKQk41TVU4ZmV0SnBkaEFUMzNnNmhYZzZL?=
 =?utf-8?B?YjZjV2hDMFpUM3pkbkhLYzNVYXdSU05od1BPZnV3VjYyNlA4ZVoyQ2ZFNzhx?=
 =?utf-8?B?em5oV2pKaU9yZmpkRmExSE5xdjdWOVNKYUxBUjNhYVZGRFZjMHdtYTF2MVZl?=
 =?utf-8?Q?7T+CA090UcrL5U7S1IFgdwW5KTNIA4w9OmgZN5z?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1920
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	e9ecb5b2-fbde-4749-b851-08d91a91d4e8
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 06:46:36.9575
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 572e14e1-3b5f-449e-5ca3-08d91a91d967
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB5873

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 7:25 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
>
> On 18.05.2021 10:38, Penny Zheng wrote:
> > Hi Jan
> >
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:39 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> >> sstabellini@kernel.org; julien@xen.org
> >> Subject: Re: [PATCH 08/10] xen/arm: introduce reserved_page_list
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> Since page_list under struct domain refers to linked pages as guest
> >>> RAM allocated from the heap, it should not include reserved pages of
> >>> static memory.
> >>>
> >>> The number of PGC_reserved pages assigned to a domain is tracked in
> >>> a new 'reserved_pages' counter. Also introduce a new
> >>> reserved_page_list to link pages of static memory. Let page_to_list
> >>> return reserved_page_list, when the flag is PGC_reserved.
> >>>
> >>> Later, when a domain gets destroyed or restarted, these new values
> >>> will help relinquish memory to the proper place, not give it back
> >>> to the heap.
> >>>
> >>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> >>> ---
> >>>  xen/common/domain.c     | 1 +
> >>>  xen/common/page_alloc.c | 7 +++++--
> >>>  xen/include/xen/sched.h | 5 +++++
> >>>  3 files changed, 11 insertions(+), 2 deletions(-)
> >>
> >> This contradicts the title's prefix: there's no Arm-specific change
> >> here at all. But imo the title is correct, and the changes should be
> >> Arm-specific. There's no point having struct domain fields on e.g.
> >> x86 which aren't used there at all.
> >>
> >
> > Yep, you're right.
> > I'll add ifdefs in the following changes.
>
> As necessary, please. Moving stuff to Arm-specific files would be
> preferable.
>

Sure, I'll add a new CONFIG_STATICMEM to include all related functions and
variables. Thx

> Jan

Cheers

Penny Zheng


From xen-devel-bounces@lists.xenproject.org Wed May 19 07:20:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 07:20:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129894.243593 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljGUm-0001Xp-3r; Wed, 19 May 2021 07:19:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129894.243593; Wed, 19 May 2021 07:19:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljGUm-0001Xi-0f; Wed, 19 May 2021 07:19:56 +0000
Received: by outflank-mailman (input) for mailman id 129894;
 Wed, 19 May 2021 07:19:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGUk-0001XY-Fq; Wed, 19 May 2021 07:19:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGUk-0003vz-5i; Wed, 19 May 2021 07:19:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGUj-0006dn-Si; Wed, 19 May 2021 07:19:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGUj-0002SX-SB; Wed, 19 May 2021 07:19:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wOrohpe3Py79bOxXeFACUbiM2l2MFfrCE7Vh5vtdEUY=; b=D6NFHdpmUKAScjS559lM19N7Zp
	MN31uaXnsOJ/oXGGcx802EczlY1D4ur4LpYzHz3u/DpRA042TswCd26RTEEf2jDJGYWlCJVrNM9zZ
	GtIKGpI/IRJrv3Xr563tmb6NJZOigJK2X612HzqQXjr5ibhqbhTJKH/nnoyOP6xMz5Uk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162076-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162076: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=5663be9f3a97843665d6d75024a22905578cf7a6
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 07:19:53 +0000

flight 162076 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162076/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              5663be9f3a97843665d6d75024a22905578cf7a6
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  313 days
Failing since        151818  2020-07-11 04:18:52 Z  312 days  305 attempts
Testing same since   162076  2021-05-19 04:18:55 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 58038 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 19 07:26:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 07:26:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129903.243607 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljGas-00031b-07; Wed, 19 May 2021 07:26:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129903.243607; Wed, 19 May 2021 07:26:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljGar-00031U-TT; Wed, 19 May 2021 07:26:13 +0000
Received: by outflank-mailman (input) for mailman id 129903;
 Wed, 19 May 2021 07:26:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGaq-00031K-HL; Wed, 19 May 2021 07:26:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGaq-00043m-Cq; Wed, 19 May 2021 07:26:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGaq-0006ph-2I; Wed, 19 May 2021 07:26:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGaq-0007wd-1l; Wed, 19 May 2021 07:26:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5xwpkOEMsFdFlrQoBG9On4tXX5H7ttIadmES1HwwLoE=; b=KMPJlAqHsMJB7bVv1I7eU5txDe
	pVXt2EMgDl9/oDEA8Wre/0jv9GnMTCVyYvy4DJvDC1gxMJAPf7O8hiuoX0RdTlAlD4Yl9Qz7oEAbF
	+YmhbgjDSjDE6j/p8QL34ia4Fi7OBTQzmPf1v1OIhbhH/dXrnRhZz4Wp/OeveQ0WTiZk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162079-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162079: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 07:26:12 +0000

flight 162079 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162079/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    8 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    



Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() and set_delay() is meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 19 07:27:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 07:27:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129907.243621 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljGbv-0003br-Bv; Wed, 19 May 2021 07:27:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129907.243621; Wed, 19 May 2021 07:27:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljGbv-0003bk-8W; Wed, 19 May 2021 07:27:19 +0000
Received: by outflank-mailman (input) for mailman id 129907;
 Wed, 19 May 2021 07:27:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljGbt-0003be-Mj
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 07:27:17 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.77]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c1846d52-9693-4657-827b-77b4930036cb;
 Wed, 19 May 2021 07:27:15 +0000 (UTC)
Received: from AM5PR0202CA0018.eurprd02.prod.outlook.com
 (2603:10a6:203:69::28) by DB7PR08MB3499.eurprd08.prod.outlook.com
 (2603:10a6:10:4b::31) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Wed, 19 May
 2021 07:27:09 +0000
Received: from VE1EUR03FT057.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:69:cafe::bb) by AM5PR0202CA0018.outlook.office365.com
 (2603:10a6:203:69::28) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.32 via Frontend
 Transport; Wed, 19 May 2021 07:27:09 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT057.mail.protection.outlook.com (10.152.19.123) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 07:27:09 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Wed, 19 May 2021 07:27:08 +0000
Received: from e37bf086faab.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 F85D0727-7E2D-4CAC-B057-0AF74B3B9432.1; 
 Wed, 19 May 2021 07:27:03 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e37bf086faab.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 07:27:03 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB4173.eurprd08.prod.outlook.com (2603:10a6:803:e8::32)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 19 May
 2021 07:27:01 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 07:27:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1846d52-9693-4657-827b-77b4930036cb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0p1owc7bEvWjg5Nl2oUmkysCRCk6ZAGC2+/X1Md/3qY=;
 b=j4BWsXo7oYlcs1O29nm7QIe28aWw9b1oN8z37jInc1Irx0CIZNFyl557YOyp5QSE5SnJBM/jQ85LSLFQM9oGODEpza5WwaQbCSVzqUKAG2NzesnZpjX1Sk6W/PEAHo2zL5BaQIN5/HKeQh+3kBo+qoGkIttL6p8KNFW6hooqsWo=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=H0Zs+cZOIZy2HRBzJLpPbqwWFgowzCB5jG9519HgB9EoGGYq1kXQIo22EJiq0SCHasnRSoLdayfQ4QWslQr5otn+rDyk5zRXJC5shkkH7Bv2SFK9gfjVzHcm2SpGUjAbofxsoyu4vxSGjv4DkzzexDD11prDAqIKjt1bLZrYexvKVqVwjBn1alnwJhPRvltXM4laGu6+B4zl96q531AbCarCUz4U8WdEl91HJnxfYSjhhpqdN0OryNuAOJn95bu7CahQiWdiLlwg0sb80dBdR56V76VaV3bK7HT7HM+oKU2YXmWOAoJF6Kt5lGsyrOv71lT8H0gVa826a7rB9+13og==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0p1owc7bEvWjg5Nl2oUmkysCRCk6ZAGC2+/X1Md/3qY=;
 b=AHlNXKPZfbfIqThC3rHsp7gdZxyIqPESsP+u5Zj8+1BNWSThDHPfeadzGJw57elOrzKk0jmSZFBdcrt/7QeLImhVCZP1gPeWLlN9QSX/cSJzWPWRkWgNSzxTCuqFeh7Y/4nIxObRa8hhKwdpvKfYY6D3djWLeL3JpuNCbC+Cr7oMI+1Qi4dErGkoPqh24Wncb0yXe8mW4v7UG8XdEt3b1yr/Uszlxid6oIzkD19HeLBvR45q1fPJvnqBg2a4oS1NJEmFo5v7hioV0AwmpsDFnHoyxv648b9uA3h79A0s/qKfLMinZQanlRmnqhjmy1ykNSVuwIhqppYJoIqJrpLGKg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=0p1owc7bEvWjg5Nl2oUmkysCRCk6ZAGC2+/X1Md/3qY=;
 b=j4BWsXo7oYlcs1O29nm7QIe28aWw9b1oN8z37jInc1Irx0CIZNFyl557YOyp5QSE5SnJBM/jQ85LSLFQM9oGODEpza5WwaQbCSVzqUKAG2NzesnZpjX1Sk6W/PEAHo2zL5BaQIN5/HKeQh+3kBo+qoGkIttL6p8KNFW6hooqsWo=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 10/10] xen/arm: introduce allocate_static_memory
Thread-Topic: [PATCH 10/10] xen/arm: introduce allocate_static_memory
Thread-Index: AQHXS6XHA2Y5LfY1BUmlVQXGKXjd9qrpJQIAgAE5LmA=
Date: Wed, 19 May 2021 07:27:00 +0000
Message-ID:
 <VE1PR08MB5215B4D187DFE8AE20DF2B95F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-11-penny.zheng@arm.com>
 <7e9bacde-8a1c-c9f8-a06d-2f39f2192315@xen.org>
In-Reply-To: <7e9bacde-8a1c-c9f8-a06d-2f39f2192315@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 0C3D7B8FF800544084DBEF3E07B9D9B5.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 3f9fb75c-e93d-4c39-dbd8-08d91a978348
x-ms-traffictypediagnostic: VI1PR08MB4173:|DB7PR08MB3499:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB34997142C04055A471C8F68AF72B9@DB7PR08MB3499.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 4fgaQKdIfd/K1O+Hz0Ucb65q7KpuZnxm04UbqlilTV9WIbgK1NNRFF9lgej0aRV88falV5lM5KsMXY2Y75elnuJNg2GC5P+sBj3xzXVOKq3h3xQrjbeck/YljxJXOeL+hFLBREb04+YE5AfPJArVf4aOJl5zzmnGptFYgnfoH4G7P/BIcpF/0TImqnZdgZEq3Krdxpsb5E0IuHUOfIWdNZl8loq6hHrqEG637uvWmDhPwuUSSCAIpgOqudQ5sGwavNhDIoFi4CxOtYX5J77QiLcCHeZfa9jgedFVeax4rIrPlGE6nvD2YDIIGtHo99N1YEI1F3K1JXbN2L64bi2xV/Eh4+LInC1YH6olAUK4Or0BpDIOmxTqZ3pt7/eaO8AoMJwwxCkBnj4HlBPTLSeG2gI2L+HTunqHKVkKjiEh+4/EaQGVAElQgXfvk6KqsODTdgMShBQWaiL5hqUmokEjzxr+kSJi2wjulziPrjpshotmeJdp1h8w45ypCDv4P2gV3WW+bKiw9r51txrP+33n5564kU8IK4gS7vRUHFxa6WLA1IBGdjm/8hoZknX/zb3MIycgJ3ajmULviSVYNx+2UGs+pr5XWkjEoqgyQGxEbug=
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VE1PR08MB5215.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(346002)(376002)(136003)(366004)(396003)(39860400002)(5660300002)(316002)(110136005)(52536014)(33656002)(8676002)(8936002)(4326008)(26005)(122000001)(9686003)(38100700002)(6506007)(186003)(53546011)(66556008)(83380400001)(478600001)(71200400001)(55016002)(86362001)(7696005)(54906003)(76116006)(66946007)(64756008)(66446008)(2906002)(66476007);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?WG1vdEszWWlvSVc1cFoxNFZMcmd1THpXaVBjZVlSN1dLL3BxczUyMWphamNs?=
 =?utf-8?B?L0FhZVQ4aExoQUZibGs0MkJHZkhKeXNyLzJSalhLKzVHS21nTDlocnFMU0hX?=
 =?utf-8?B?alFvYUgvdHYrcURtb2dtN3N0eWxpK09vdGk2NFBnTDh0c1prajVmOU9ibENB?=
 =?utf-8?B?OHJTZnNNck9CNk1OU0JOb1NTU2ZFdUJISE9UZTd3TFZTMStTZXU2QmhQeUZp?=
 =?utf-8?B?TTczbzN1Mmw5TDZxNEJMNnV0QThwKzJEbWRkSUxnL0t5QTEzdnVXVjJVL0tE?=
 =?utf-8?B?UTg4VE5DZ01ubURYcmUwTkN3dVA3aVFKT0ZPNHR1ZUJpNzlWaGNFaUlmVHlr?=
 =?utf-8?B?KzR4Ump6UGNTZFh0Q2Y1VGxhd2Q4cThJQi9MU3A1TUhDdUd1NDN1TGNSVmNy?=
 =?utf-8?B?VkoyQS9LbU9QS1VvOVdZSzNTYmtFNHdUTVQ2OXg5bXdLa3FSYUpQblVYeUpo?=
 =?utf-8?B?TVdUM2F4S3ZHbEczZzhHWFo5UzhyWnd0YktidVhtTjM5YkhhVzZTaU1rT2tD?=
 =?utf-8?B?SHg5ZVhRY21pVUxteDhPbjlKeUJvN3pONVF5UlJBQ2lkaEhEbTdzTEdTeVZw?=
 =?utf-8?B?dGNyNVRma2tZRUdIQ2VvSE5OMDRvZDVYSk91N0p3QUtuMkFCL3BQUm5JWE5t?=
 =?utf-8?B?NUZraGgxeGJzMHEzQmZrcm0xcHdZK3JMcVR4Kys0L0ZRekNzQXQrVU1LYTdt?=
 =?utf-8?B?MXYzZEl4dnZqMG40aXRLUy91anFpUFE1dEZ2M0hLMDlDMzR6ck9ONlZrSEtq?=
 =?utf-8?B?MklOZWt5dThHSys4QUVsZzF5VU5lTU4xMzVnMUQwRE1xUWlxTDBiNC9Cb0hh?=
 =?utf-8?B?RVI0NUlSOHllVWx3dTZoNmJab0dTZDc1QW8xOXE5K200VXJ5MmFjYzFza1F4?=
 =?utf-8?B?U3hVUHR6cWNOaEljSkF2OEFSMlNIb0Q0TjM3ckVVS0pmU20ySmhmVkJQejBa?=
 =?utf-8?B?N0pOd2pINUxLaXY0Z0dsWE5oWEJlM05ZQVlocGQvQ1VaVy82dlVsM0pCWENK?=
 =?utf-8?B?U1lNQmoyYUxTNEVlV0xUVHlPamFmNjdkV3E0R0ZqS3NoUkFwVGpDbjE5Yjdj?=
 =?utf-8?B?NWRIL2dzeGlRRnY4eE5VcGVienZiNTJHeHg1RkNTb21RM0JnS3hxTXlrcU84?=
 =?utf-8?B?dmhmeFErQTcyZmRsTjl4Q1kwK0k3Sms3bzJ6STZrZldnVDdzSmlwNEJxd2c0?=
 =?utf-8?B?R3FVbUhPdmxmalBYVEcwKytWc0tQUUQwaXd6V2lYYzJGelQ0d08vWFN4cUd1?=
 =?utf-8?B?ZG83Y0liRC81a1ZSTVNRV3FucnE3OFI4STQ0RjZQc0xkWFV6ZW9mNitkK0Jn?=
 =?utf-8?B?aFNMYTJmMDlMcGJIWUtTSndSV0NibTBFYXJ3Y2NCR29COGdLOHVlSkl2OEdV?=
 =?utf-8?B?ekdWU2VzangvRXN3UmlOWmhRaktkajRLaUc5TDNFRSthUXlYR1AxOWFvRVcr?=
 =?utf-8?B?SlA0ZGJaM2VFV0p3MDROZjV5VkltajBPbGtVcHNlb1I5SDZ6MHMvdkRkaVVG?=
 =?utf-8?B?eGRFbDYzK1c1U215TE5qdEc5SmVtT1lsWnNSQXUxMzR0WS9TWlBQZ1BtTFZh?=
 =?utf-8?B?WVpxTUNmQ0oxNGF3TDBHY2FMK05NOG9zWDJTUW0reVhyVlZKZmxkdU1LRGo2?=
 =?utf-8?B?Tkh3Kzh6YjdXQXFZODJSSVlsbkNQU2dmOCtqWFArUjVnQVdQeDEzbXJoTDVR?=
 =?utf-8?B?ckxRSTRFaVB0am5uSTlXUG1ZOVpua3ZBWXlNZ2dJQ0pocXZYemsyRUY4MVVk?=
 =?utf-8?Q?8ppWdHungC6s5zWpanpnitAkhN1kq9jaGBj6hPK?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4173
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 VE1EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	afcf4dd3-6642-4c7e-3216-08d91a977e40
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	QP6N4xhSog53DVHC+xRFBEhBJZF0p2u2EqLZ1EnJ2wfncnE+TxmzLdvMGWiVI41z3eEdRdsxmd/U/TN6f6zX61ogz9EmxGaaXM34uEE1DEGlQpHboyeqB0Mn4lxmNg06AfSnsylVV+WEdJnaNdYNHeAbgsu+RPN3TZBu+Y94lSvKGInsQK19XvWj8znTmriBbLT8eqmxdAptOiK2qHXzXiFaOSIGXz09kttzxpBXYySpZ/Fz6AZC2kNAtC69S6v93Io0zXJg1KJ6y0Xgcvy+wy0rIuZG6L5IyE3DPNLtWd/fInwHluJDHsHG7GBRzRsUb0c1/9zrklX0S2nPAUDPS8kGSiIGPLIWztEqBcmAFQ6hfUwPyFATRJJOLuHnqjhxdgK+H0e2JaRWK00niuzLTU6C48Ybgagphu5V6qSLPg+sf9un66K5DZcDnVWQ4QYx9HaCJmzVrn6+zZOrJZ3dU8ySUQE7lpf1/Ci/STxbSjFtV3iQfXCmTEdN2UKYHEV6WkheoF2VusXw4glRt8ukIKY+5zZhvj38nZ1C6L+D8LbedX8gRkQPFQHvjQVj+euXa2Vs7i3ZAgTLyV1F3HPYNgcCQqFsvx1LZqE/NrsEZ5vFAR/qICJtwec9T9vggzL4Bcfrk8QpB4d46efQO44y9A==
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(396003)(346002)(376002)(136003)(39860400002)(46966006)(36840700001)(36860700001)(8676002)(2906002)(70206006)(54906003)(70586007)(8936002)(186003)(110136005)(316002)(7696005)(52536014)(47076005)(5660300002)(4326008)(82740400003)(81166007)(55016002)(478600001)(86362001)(83380400001)(6506007)(53546011)(336012)(82310400003)(9686003)(33656002)(26005)(356005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 07:27:09.2910
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 3f9fb75c-e93d-4c39-dbd8-08d91a978348
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	VE1EUR03FT057.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3499

SGkgSnVsaWVuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4NCj4gU2VudDogVHVlc2RheSwgTWF5IDE4LCAyMDIxIDg6
MDYgUE0NCj4gVG86IFBlbm55IFpoZW5nIDxQZW5ueS5aaGVuZ0Bhcm0uY29tPjsgeGVuLWRldmVs
QGxpc3RzLnhlbnByb2plY3Qub3JnOw0KPiBzc3RhYmVsbGluaUBrZXJuZWwub3JnDQo+IENjOiBC
ZXJ0cmFuZCBNYXJxdWlzIDxCZXJ0cmFuZC5NYXJxdWlzQGFybS5jb20+OyBXZWkgQ2hlbg0KPiA8
V2VpLkNoZW5AYXJtLmNvbT47IG5kIDxuZEBhcm0uY29tPg0KPiBTdWJqZWN0OiBSZTogW1BBVENI
IDEwLzEwXSB4ZW4vYXJtOiBpbnRyb2R1Y2UgYWxsb2NhdGVfc3RhdGljX21lbW9yeQ0KPiANCj4g
SGkgUGVubnksDQo+IA0KPiBPbiAxOC8wNS8yMDIxIDA2OjIxLCBQZW5ueSBaaGVuZyB3cm90ZToN
Cj4gPiBUaGlzIGNvbW1pdCBpbnRyb2R1Y2VzIGFsbG9jYXRlX3N0YXRpY19tZW1vcnkgdG8gYWxs
b2NhdGUgc3RhdGljDQo+ID4gbWVtb3J5IGFzIGd1ZXN0IFJBTSBmb3IgZG9tYWluIG9uIFN0YXRp
YyBBbGxvY2F0aW9uLg0KPiA+DQo+ID4gSXQgdXNlcyBhbGxvY19kb21zdGF0aWNfcGFnZXMgdG8g
YWxsb2NhdGUgcHJlLWRlZmluZWQgc3RhdGljIG1lbW9yeQ0KPiA+IGJhbmtzIGZvciB0aGlzIGRv
bWFpbiwgYW5kIHVzZXMgZ3Vlc3RfcGh5c21hcF9hZGRfcGFnZSB0byBzZXQgdXAgUDJNDQo+ID4g
dGFibGUsIGd1ZXN0IHN0YXJ0aW5nIGF0IGZpeGVkIEdVRVNUX1JBTTBfQkFTRSwgR1VFU1RfUkFN
MV9CQVNFLg0KPiA+DQo+ID4gU2lnbmVkLW9mZi1ieTogUGVubnkgWmhlbmcgPHBlbm55LnpoZW5n
QGFybS5jb20+DQo+ID4gLS0tDQo+ID4gICB4ZW4vYXJjaC9hcm0vZG9tYWluX2J1aWxkLmMgfCAx
NTcNCj4gKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKysrKystDQo+ID4gICAxIGZpbGUg
Y2hhbmdlZCwgMTU1IGluc2VydGlvbnMoKyksIDIgZGVsZXRpb25zKC0pDQo+ID4NCj4gPiBkaWZm
IC0tZ2l0IGEveGVuL2FyY2gvYXJtL2RvbWFpbl9idWlsZC5jIGIveGVuL2FyY2gvYXJtL2RvbWFp
bl9idWlsZC5jDQo+ID4gaW5kZXggMzBiNTU1ODhiNy4uOWY2NjIzMTNhZCAxMDA2NDQNCj4gPiAt
LS0gYS94ZW4vYXJjaC9hcm0vZG9tYWluX2J1aWxkLmMNCj4gPiArKysgYi94ZW4vYXJjaC9hcm0v
ZG9tYWluX2J1aWxkLmMNCj4gPiBAQCAtNDM3LDYgKzQzNyw1MCBAQCBzdGF0aWMgYm9vbCBfX2lu
aXQgYWxsb2NhdGVfYmFua19tZW1vcnkoc3RydWN0DQo+IGRvbWFpbiAqZCwNCj4gPiAgICAgICBy
ZXR1cm4gdHJ1ZTsNCj4gPiAgIH0NCj4gPg0KPiA+ICsvKg0KPiA+ICsgKiAjcmFtX2luZGV4IGFu
ZCAjcmFtX2luZGV4IHJlZmVyIHRvIHRoZSBpbmRleCBhbmQgc3RhcnRpbmcgYWRkcmVzcw0KPiA+
ICtvZiBndWVzdA0KPiA+ICsgKiBtZW1vcnkga2FuayBzdG9yZWQgaW4ga2luZm8tPm1lbS4NCj4g
PiArICogU3RhdGljIG1lbW9yeSBhdCAjc21mbiBvZiAjdG90X3NpemUgc2hhbGwgYmUgbWFwcGVk
ICNzZ2ZuLCBhbmQNCj4gPiArICogI3NnZm4gd2lsbCBiZSBuZXh0IGd1ZXN0IGFkZHJlc3MgdG8g
bWFwIHdoZW4gcmV0dXJuaW5nLg0KPiA+ICsgKi8NCj4gPiArc3RhdGljIGJvb2wgX19pbml0IGFs
bG9jYXRlX3N0YXRpY19iYW5rX21lbW9yeShzdHJ1Y3QgZG9tYWluICpkLA0KPiA+ICsgICAgICAg
ICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHN0cnVjdCBrZXJuZWxfaW5m
byAqa2luZm8sDQo+ID4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgaW50IHJhbV9pbmRleCwNCj4gDQo+IFBsZWFzZSB1c2UgdW5zaWduZWQuDQo+IA0KPiA+
ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHBhZGRyX3Qg
cmFtX2FkZHIsDQo+ID4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgZ2ZuX3QqIHNnZm4sDQo+IA0KPiBJIGFtIGNvbmZ1c2VkLCB3aGF0IGlzIHRoZSBkaWZm
ZXJlbmNlIGJldHdlZW4gcmFtX2FkZHIgYW5kIHNnZm4/DQo+IA0KDQpXZSBuZWVkIHRvIGNvbnN0
cnVjdGluZyBraW5mby0+bWVtKGd1ZXN0IFJBTSBiYW5rcykgaGVyZSwgYW5kDQp3ZSBhcmUgaW5k
ZXhpbmcgaW4gc3RhdGljX21lbShwaHlzaWNhbCByYW0gYmFua3MpLiBNdWx0aXBsZSBwaHlzaWNh
bCByYW0NCmJhbmtzIGNvbnNpc3Qgb2Ygb25lIGd1ZXN0IHJhbSBiYW5rKGxpa2UsIEdVRVNUX1JB
TTApLiANCg0KcmFtX2FkZHIgIGhlcmUgd2lsbCBlaXRoZXIgYmUgR1VFU1RfUkFNMF9CQVNFLCBv
ciBHVUVTVF9SQU0xX0JBU0UsDQpmb3Igbm93LiBJIGtpbmRzIHN0cnVnZ2xlZCBpbiBob3cgdG8g
bmFtZSBpdC4gQW5kIG1heWJlIGl0IHNoYWxsIG5vdCBiZSBhDQpwYXJhbWV0ZXIgaGVyZS4NCg0K
TWF5YmUgSSBzaG91bGQgc3dpdGNoLi4gY2FzZS4uIG9uIHRoZSByYW1faW5kZXgsIGlmIGl0cyAw
LCBpdHMgR1VFU1RfUkFNMF9CQVNFLA0KQW5kIGlmIGl0cyAxLCBpdHMgR1VFU1RfUkFNMV9CQVNF
Lg0KDQo+ID4gKyAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
bWZuX3Qgc21mbiwNCj4gPiArICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAg
ICAgICAgICBwYWRkcl90IHRvdF9zaXplKSB7DQo+ID4gKyAgICBpbnQgcmVzOw0KPiA+ICsgICAg
c3RydWN0IG1lbWJhbmsgKmJhbms7DQo+ID4gKyAgICBwYWRkcl90IF9zaXplID0gdG90X3NpemU7
DQo+ID4gKw0KPiA+ICsgICAgYmFuayA9ICZraW5mby0+bWVtLmJhbmtbcmFtX2luZGV4XTsNCj4g
PiArICAgIGJhbmstPnN0YXJ0ID0gcmFtX2FkZHI7DQo+ID4gKyAgICBiYW5rLT5zaXplID0gYmFu
ay0+c2l6ZSArIHRvdF9zaXplOw0KPiA+ICsNCj4gPiArICAgIHdoaWxlICggdG90X3NpemUgPiAw
ICkNCj4gPiArICAgIHsNCj4gPiArICAgICAgICB1bnNpZ25lZCBpbnQgb3JkZXIgPSBnZXRfYWxs
b2NhdGlvbl9zaXplKHRvdF9zaXplKTsNCj4gPiArDQo+ID4gKyAgICAgICAgcmVzID0gZ3Vlc3Rf
cGh5c21hcF9hZGRfcGFnZShkLCAqc2dmbiwgc21mbiwgb3JkZXIpOw0KPiA+ICsgICAgICAgIGlm
ICggcmVzICkNCj4gPiArICAgICAgICB7DQo+ID4gKyAgICAgICAgICAgIGRwcmludGsoWEVOTE9H
X0VSUiwgIkZhaWxlZCBtYXAgcGFnZXMgdG8gRE9NVTogJWQiLCByZXMpOw0KPiA+ICsgICAgICAg
ICAgICByZXR1cm4gZmFsc2U7DQo+ID4gKyAgICAgICAgfQ0KPiA+ICsNCj4gPiArICAgICAgICAq
c2dmbiA9IGdmbl9hZGQoKnNnZm4sIDFVTCA8PCBvcmRlcik7DQo+ID4gKyAgICAgICAgc21mbiA9
IG1mbl9hZGQoc21mbiwgMVVMIDw8IG9yZGVyKTsNCj4gPiArICAgICAgICB0b3Rfc2l6ZSAtPSAo
MVVMTCA8PCAoUEFHRV9TSElGVCArIG9yZGVyKSk7DQo+ID4gKyAgICB9DQo+ID4gKw0KPiA+ICsg
ICAga2luZm8tPm1lbS5ucl9iYW5rcyA9IHJhbV9pbmRleCArIDE7DQo+ID4gKyAgICBraW5mby0+
dW5hc3NpZ25lZF9tZW0gLT0gX3NpemU7DQo+ID4gKw0KPiA+ICsgICAgcmV0dXJuIHRydWU7DQo+
ID4gK30NCj4gPiArDQo+ID4gICBzdGF0aWMgdm9pZCBfX2luaXQgYWxsb2NhdGVfbWVtb3J5KHN0
cnVjdCBkb21haW4gKmQsIHN0cnVjdCBrZXJuZWxfaW5mbw0KPiAqa2luZm8pDQo+ID4gICB7DQo+
ID4gICAgICAgdW5zaWduZWQgaW50IGk7DQo+ID4gQEAgLTQ4MCw2ICs1MjQsMTE2IEBAIGZhaWw6
DQo+ID4gICAgICAgICAgICAgKHVuc2lnbmVkIGxvbmcpa2luZm8tPnVuYXNzaWduZWRfbWVtID4+
IDEwKTsNCj4gPiAgIH0NCj4gPg0KPiA+ICsvKiBBbGxvY2F0ZSBtZW1vcnkgZnJvbSBzdGF0aWMg
bWVtb3J5IGFzIFJBTSBmb3Igb25lIHNwZWNpZmljIGRvbWFpbg0KPiA+ICtkLiAqLyBzdGF0aWMg
dm9pZCBfX2luaXQgYWxsb2NhdGVfc3RhdGljX21lbW9yeShzdHJ1Y3QgZG9tYWluICpkLA0KPiA+
ICsgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgICAgIHN0cnVjdCBrZXJu
ZWxfaW5mbw0KPiA+ICsqa2luZm8pIHsNCj4gPiArICAgIGludCBucl9iYW5rcywgX2JhbmtzID0g
MDsNCj4gDQo+IEFGQUlDVCwgX2JhbmtzIGlzIHRoZSBpbmRleCBpbiB0aGUgYXJyYXkuIEkgdGhp
bmsgaXQgd291bGQgYmUgY2xlYXJlciBpZiBpdCBpcw0KPiBjYWxsZXIgJ2JhbmsnIG9yICdpZHgn
Lg0KPiANCg0KU3VyZSwgSeKAmWxsIHVzZSB0aGUgJ2JhbmsnIGhlcmUuDQoNCj4gPiArICAgIHNp
emVfdCByYW0wX3NpemUgPSBHVUVTVF9SQU0wX1NJWkUsIHJhbTFfc2l6ZSA9IEdVRVNUX1JBTTFf
U0laRTsNCj4gPiArICAgIHBhZGRyX3QgYmFua19zdGFydCwgYmFua19zaXplOw0KPiA+ICsgICAg
Z2ZuX3Qgc2dmbjsNCj4gPiArICAgIG1mbl90IHNtZm47DQo+ID4gKw0KPiA+ICsgICAga2luZm8t
Pm1lbS5ucl9iYW5rcyA9IDA7DQo+ID4gKyAgICBzZ2ZuID0gZ2FkZHJfdG9fZ2ZuKEdVRVNUX1JB
TTBfQkFTRSk7DQo+ID4gKyAgICBucl9iYW5rcyA9IGQtPmFyY2guc3RhdGljX21lbS5ucl9iYW5r
czsNCj4gPiArICAgIEFTU0VSVChucl9iYW5rcyA+PSAwKTsNCj4gPiArDQo+ID4gKyAgICBpZiAo
IGtpbmZvLT51bmFzc2lnbmVkX21lbSA8PSAwICkNCj4gPiArICAgICAgICBnb3RvIGZhaWw7DQo+
ID4gKw0KPiA+ICsgICAgd2hpbGUgKCBfYmFua3MgPCBucl9iYW5rcyApDQo+ID4gKyAgICB7DQo+
ID4gKyAgICAgICAgYmFua19zdGFydCA9IGQtPmFyY2guc3RhdGljX21lbS5iYW5rW19iYW5rc10u
c3RhcnQ7DQo+ID4gKyAgICAgICAgc21mbiA9IG1hZGRyX3RvX21mbihiYW5rX3N0YXJ0KTsNCj4g
PiArICAgICAgICBiYW5rX3NpemUgPSBkLT5hcmNoLnN0YXRpY19tZW0uYmFua1tfYmFua3NdLnNp
emU7DQo+IA0KPiBUaGUgdmFyaWFibGUgbmFtZSBhcmUgc2xpZ2h0bHkgY29uZnVzaW5nIGJlY2F1
c2UgaXQgZG9lc24ndCB0ZWxsIHdoZXRoZXIgdGhpcw0KPiBpcyBwaHlzaWNhbCBhcmUgZ3Vlc3Qg
UkFNLiBZb3UgbWlnaHQgd2FudCB0byBjb25zaWRlciB0byBwcmVmaXggdGhlbSB3aXRoIHANCj4g
KHJlc3AuIGcpIGZvciBwaHlzaWNhbCAocmVzcC4gZ3Vlc3QpIFJBTS4NCg0KU3VyZSwgSSdsbCBy
ZW5hbWUgdG8gbWFrZSBpdCBtb3JlIGNsZWFybHkuDQoNCj4gDQo+ID4gKw0KPiA+ICsgICAgICAg
IGlmICggIWFsbG9jX2RvbXN0YXRpY19wYWdlcyhkLCBiYW5rX3NpemUgPj4gUEFHRV9TSElGVCwg
YmFua19zdGFydCwNCj4gMCkgKQ0KPiA+ICsgICAgICAgIHsNCj4gPiArICAgICAgICAgICAgcHJp
bnRrKFhFTkxPR19FUlINCj4gPiArICAgICAgICAgICAgICAgICAgICAiJXBkOiBjYW5ub3QgYWxs
b2NhdGUgc3RhdGljIG1lbW9yeSINCj4gPiArICAgICAgICAgICAgICAgICAgICAiKDB4JSJQUkl4
NjQiIC0gMHglIlBSSXg2NCIpIiwNCj4gDQo+IGJhbmtfc3RhcnQgYW5kIGJhbmtfc2l6ZSBhcmUg
Ym90aCBwYWRkcl90LiBTbyB0aGlzIHNob3VsZCBiZSBQUklwYWRkci4NCg0KU3VyZSwgSSdsbCBj
aGFuZ2UNCg0KPiANCj4gPiArICAgICAgICAgICAgICAgICAgICBkLCBiYW5rX3N0YXJ0LCBiYW5r
X3N0YXJ0ICsgYmFua19zaXplKTsNCj4gPiArICAgICAgICAgICAgZ290byBmYWlsOw0KPiA+ICsg
ICAgICAgIH0NCj4gPiArDQo+ID4gKyAgICAgICAgLyoNCj4gPiArICAgICAgICAgKiBCeSBkZWZh
dWx0LCBpdCBzaGFsbCBiZSBtYXBwZWQgdG8gdGhlIGZpeGVkIGd1ZXN0IFJBTSBhZGRyZXNzDQo+
ID4gKyAgICAgICAgICogYEdVRVNUX1JBTTBfQkFTRWAsIGBHVUVTVF9SQU0xX0JBU0VgLg0KPiA+
ICsgICAgICAgICAqIFN0YXJ0aW5nIGZyb20gUkFNMChHVUVTVF9SQU0wX0JBU0UpLg0KPiA+ICsg
ICAgICAgICAqLw0KPiANCj4gT2suIFNvIHlvdSBhcmUgZmlyc3QgdHJ5aW5nIHRvIGV4aGF1c3Qg
dGhlIGd1ZXN0IGJhbmsgMCBhbmQgdGhlbiBtb3ZlZCB0bw0KPiBiYW5rIDEuIFRoaXMgd2Fzbid0
IGVudGlyZWx5IGNsZWFyIGZyb20gdGhlIGRlc2lnbiBkb2N1bWVudC4NCj4gDQo+IEkgYW0gZmlu
ZSB3aXRoIHRoYXQsIGJ1dCBpbiB0aGlzIGNhc2UsIHRoZSBkZXZlbG9wcGVyIHNob3VsZCBub3Qg
bmVlZCB0byBrbm93DQo+IHRoYXQgKGluIGZhY3QgdGhpcyBpcyBub3QgcGFydCBvZiB0aGUgQUJJ
KS4NCj4gDQo+IFJlZ2FyZGluZyB0aGlzIGNvZGUsIEkgYW0gYSBiaXQgY29uY2VybmVkIGFib3V0
IHRoZSBzY2FsYWJpbGl0eSBpZiB3ZSBpbnRyb2R1Y2UNCj4gYSBzZWNvbmQgYmFuay4NCj4gDQo+
IENhbiB3ZSBoYXZlIGFuIGFycmF5IG9mIHRoZSBwb3NzaWJsZSBndWVzdCBiYW5rcyBhbmQgaW5j
cmVtZW50IHRoZSBpbmRleA0KPiB3aGVuIGV4aGF1c3RpbmcgdGhlIGN1cnJlbnQgYmFuaz8NCj4g
DQoNCkNvcnJlY3QgbWUgaWYgSSB1bmRlcnN0YW5kIHdyb25nbHksIA0KDQpXaGF0IHlvdSBzdWdn
ZXN0IGhlcmUgaXMgdGhhdCB3ZSBtYWtlIGFuIGFycmF5IG9mIGd1ZXN0IGJhbmtzLCByaWdodCBu
b3csIGluY2x1ZGluZw0KR1VFU1RfUkFNMCBhbmQgR1VFU1RfUkFNMS4gQW5kIGlmIGxhdGVyLCBh
ZGRpbmcgbW9yZSBndWVzdCBiYW5rcywgaXQgd2lsbCBub3QNCmJyaW5nIHNjYWxhYmlsaXR5IHBy
b2JsZW0gaGVyZSwgcmlnaHQ/DQoNCg0KPiBDaGVlcnMsDQo+IA0KPiAtLQ0KPiBKdWxpZW4gR3Jh
bGwNCg0KQ2hlZXJzDQoNClBlbm55IFpoZW5nDQo=


From xen-devel-bounces@lists.xenproject.org Wed May 19 07:49:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 07:49:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129918.243637 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljGxc-000693-8S; Wed, 19 May 2021 07:49:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129918.243637; Wed, 19 May 2021 07:49:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljGxc-00068w-5I; Wed, 19 May 2021 07:49:44 +0000
Received: by outflank-mailman (input) for mailman id 129918;
 Wed, 19 May 2021 07:49:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGxa-00068m-Hw; Wed, 19 May 2021 07:49:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGxa-0004QF-AW; Wed, 19 May 2021 07:49:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGxa-0007VA-0J; Wed, 19 May 2021 07:49:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljGxZ-0006rF-W9; Wed, 19 May 2021 07:49:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=frJYnt6mRN04Paa8SlxKV/kkTsIqU8TNkahu8IogPM0=; b=j42IX8mg5ASAFHTn8Pyi/Dmqiw
	v6uLOFHkDS55PXlq3sy5b9UmP8JAI/D1a6G7s0zdvi993pGxYKloisbfsY6tG28imU/PyJHmpxxUC
	O9OjiGNAzqZqQOPEWFbMA3Lb7Dv2Zl6TogKIdMb0GoJAWW9EBNTfcGeuAusUi/FCnfFk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162063-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162063: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8ac91e6c6033ebc12c5c1e4aa171b81a662bd70f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 07:49:41 +0000

flight 162063 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162063/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start    fail in 161996 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2  13 debian-fixup               fail pass in 161996

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8ac91e6c6033ebc12c5c1e4aa171b81a662bd70f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  291 days
Failing since        152366  2020-08-01 20:49:34 Z  290 days  489 attempts
Testing same since   161996  2021-05-18 07:32:22 Z    1 days    2 attempts

------------------------------------------------------------
6063 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1645713 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 19 07:52:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 07:52:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129927.243652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljH0D-0007YA-VY; Wed, 19 May 2021 07:52:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129927.243652; Wed, 19 May 2021 07:52:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljH0D-0007Y3-SY; Wed, 19 May 2021 07:52:25 +0000
Received: by outflank-mailman (input) for mailman id 129927;
 Wed, 19 May 2021 07:52:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljH0C-0007Xx-Uz
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 07:52:25 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.64]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4ea648ae-681b-4c27-9b61-e3ed3956048f;
 Wed, 19 May 2021 07:52:23 +0000 (UTC)
Received: from AS8PR04CA0046.eurprd04.prod.outlook.com (2603:10a6:20b:312::21)
 by AM6PR08MB3285.eurprd08.prod.outlook.com (2603:10a6:209:4b::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 07:52:21 +0000
Received: from AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:312:cafe::f0) by AS8PR04CA0046.outlook.office365.com
 (2603:10a6:20b:312::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Wed, 19 May 2021 07:52:21 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT025.mail.protection.outlook.com (10.152.16.157) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 07:52:21 +0000
Received: ("Tessian outbound 3c5232d12880:v92");
 Wed, 19 May 2021 07:52:20 +0000
Received: from 3656df387953.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 85BB060B-384B-4004-AB82-DAF8700CAAD7.1; 
 Wed, 19 May 2021 07:52:15 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 3656df387953.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 07:52:15 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR0802MB2494.eurprd08.prod.outlook.com (2603:10a6:800:b6::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 19 May
 2021 07:52:10 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 07:52:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4ea648ae-681b-4c27-9b61-e3ed3956048f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=Oz/j1/U/AkMxxZ8CB/mK3KXR+dZo3nuUFYqFkmitehk=;
 b=v0VQh8h6WZ8g6mgy1vuglGuzxktI6qwdEuhJJlvLy0lzaOoAMP/3bBno9DnedtoKRqm8Z8z/zy8qo8bV/jJ3lqYqZgA6ceFDQoHXNnLr2C91mzgExJyQSbNHXtOezILcCCFHd4jrV5rHtMqYUakcgagbXwYxi4rJf2DqgIXyavI=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
Subject: RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Topic: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Index: AQHXS6W/k/joZEkeaUiDjrfYH3jVEaro2ToAgAAR3sCAADv/gIABQxZg
Date: Wed, 19 May 2021 07:52:10 +0000
Message-ID:
 <VE1PR08MB5215CB5102529F32DC695CFDF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
 <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <75275b2f-9de3-944a-d55c-a62bbbf1bb8c@xen.org>
In-Reply-To: <75275b2f-9de3-944a-d55c-a62bbbf1bb8c@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: A02319949143924B889E8A3D2B2F1646.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 35666b76-6cb8-41b3-fa71-08d91a9b0868
x-ms-traffictypediagnostic: VI1PR0802MB2494:|AM6PR08MB3285:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM6PR08MB3285BCB1A0BB62FD7E063D31F72B9@AM6PR08MB3285.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0802MB2494
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	938458ba-e00c-48bc-d612-08d91a9b021a
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 07:52:21.2196
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 35666b76-6cb8-41b3-fa71-08d91a9b0868
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT025.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM6PR08MB3285

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 8:13 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich <jbeulich@suse.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> 
> Hi Penny,
> 
> On 18/05/2021 09:57, Penny Zheng wrote:
> >> -----Original Message-----
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:35 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-
> devel@lists.xenproject.org;
> >> sstabellini@kernel.org; julien@xen.org
> >> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -2447,6 +2447,9 @@ int assign_pages(
> >>>       {
> >>>           ASSERT(page_get_owner(&pg[i]) == NULL);
> >>>           page_set_owner(&pg[i], d);
> >>> +        /* use page_set_reserved_owner to set its reserved domain owner.
> >> */
> >>> +        if ( (pg[i].count_info & PGC_reserved) )
> >>> +            page_set_reserved_owner(&pg[i], d);
> >>
> >> Now this is puzzling: What's the point of setting two owner fields to
> >> the same value? I also don't recall you having introduced
> >> page_set_reserved_owner() for x86, so how is this going to build there?
> >>
> >
> > Thanks for pointing out that it will fail on x86.
> > As for the same value, sure, I shall change it to domid_t domid to record its
> reserved owner.
> > Only domid is enough for differentiate.
> > And even when domain get rebooted, struct domain may be destroyed, but
> > domid will stays The same.
> > Major user cases for domain on static allocation are referring to the
> > whole system are static, No runtime creation.
> 
> One may want to have static memory yet doesn't care about the domid. So I
> am not in favor to restrict about the domid unless there is no other way.
> 

Is the use case you bring up here the static memory pool?

Right now, the use cases are mostly restricted to static systems.
Bringing in runtime allocation (`xl` here) would add a lot more complexity.
But if the system has static behavior, the domid is also static.

Rebooting a domain from the static memory pool brings up more discussion,
such as: do we intend to give the memory back to the static memory pool on
reboot? If so, RAM could be allocated from a different place than before,
which rather breaks the system's static behavior.

Alternatively, we do not give it back, and instead store it in a global
variable `struct page_info *[DOMID]` like the others.

> Cheers,
> 
> --
> Julien Grall

Cheers

Penny Zheng


From xen-devel-bounces@lists.xenproject.org Wed May 19 07:58:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 07:58:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129933.243663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljH5x-0008HL-Le; Wed, 19 May 2021 07:58:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129933.243663; Wed, 19 May 2021 07:58:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljH5x-0008HE-IM; Wed, 19 May 2021 07:58:21 +0000
Received: by outflank-mailman (input) for mailman id 129933;
 Wed, 19 May 2021 07:58:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=TeaP=KO=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljH5w-0008H8-Mw
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 07:58:20 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com (unknown
 [40.107.4.68]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4459739e-9d99-4310-99b3-cf9d058bf421;
 Wed, 19 May 2021 07:58:19 +0000 (UTC)
Received: from AM6P191CA0042.EURP191.PROD.OUTLOOK.COM (2603:10a6:209:7f::19)
 by AM9PR08MB6004.eurprd08.prod.outlook.com (2603:10a6:20b:285::5) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Wed, 19 May
 2021 07:58:18 +0000
Received: from VE1EUR03FT050.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:7f:cafe::1a) by AM6P191CA0042.outlook.office365.com
 (2603:10a6:209:7f::19) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Wed, 19 May 2021 07:58:18 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 VE1EUR03FT050.mail.protection.outlook.com (10.152.19.209) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 07:58:17 +0000
Received: ("Tessian outbound 3c5232d12880:v92");
 Wed, 19 May 2021 07:58:17 +0000
Received: from a15a64af0ede.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 95C9BC84-2C03-48EC-92AF-14DC46B9433C.1; 
 Wed, 19 May 2021 07:58:11 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id a15a64af0ede.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 07:58:11 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB4366.eurprd08.prod.outlook.com (2603:10a6:803:fc::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 07:58:08 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.032; Wed, 19 May 2021
 07:58:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4459739e-9d99-4310-99b3-cf9d058bf421
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain
 construction
Thread-Topic: [PATCH 09/10] xen/arm: parse `xen,static-mem` info during domain
 construction
Thread-Index: AQHXS6XDy5MrKETGVUijoDUAUijwiqrpJf6AgAFKs2A=
Date: Wed, 19 May 2021 07:58:08 +0000
Message-ID:
 <VE1PR08MB5215E7B93DC2E7FCBE7B0898F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-10-penny.zheng@arm.com>
 <61b41d12-c69e-fe41-0b5e-d35a485b4a51@xen.org>
In-Reply-To: <61b41d12-c69e-fe41-0b5e-d35a485b4a51@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 8:09 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 09/10] xen/arm: parse `xen,static-mem` info during
> domain construction
> 
> Hi Penny,
> 
> On 18/05/2021 06:21, Penny Zheng wrote:
> > This commit parses `xen,static-mem` device tree property, to acquire
> > static memory info reserved for this domain, when constructing domain
> > during boot-up.
> >
> > Related info shall be stored in new static_mem value under per domain
> > struct arch_domain.
> 
> So far, this seems to only be used during boot. So can't this be kept in the
> kinfo structure?
> 

Sure, I'll store it in kinfo.

> >
> > Right now, the implementation of allocate_static_memory is missing,
> > and will be introduced later. It just BUG() out at the moment.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   xen/arch/arm/domain_build.c  | 58
> ++++++++++++++++++++++++++++----
> >   xen/include/asm-arm/domain.h |  3 ++
> >   2 files changed, 56 insertions(+), 5 deletions(-)
> >
> > diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
> > index 282416e74d..30b55588b7 100644
> > --- a/xen/arch/arm/domain_build.c
> > +++ b/xen/arch/arm/domain_build.c
> > @@ -2424,17 +2424,61 @@ static int __init construct_domU(struct domain
> *d,
> >   {
> >       struct kernel_info kinfo = {};
> >       int rc;
> > -    u64 mem;
> > +    u64 mem, static_mem_size = 0;
> > +    const struct dt_property *prop;
> > +    u32 static_mem_len;
> > +    bool static_mem = false;
> > +
> > +    /*
> > +     * Guest RAM could be of static memory from static allocation,
> > +     * which will be specified through "xen,static-mem" property.
> > +     */
> > +    prop = dt_find_property(node, "xen,static-mem", &static_mem_len);
> > +    if ( prop )
> > +    {
> > +        const __be32 *cell;
> > +        u32 addr_cells = 2, size_cells = 2, reg_cells;
> > +        u64 start, size;
> > +        int i, banks;
> > +        static_mem = true;
> > +
> > +        dt_property_read_u32(node, "#address-cells", &addr_cells);
> > +        dt_property_read_u32(node, "#size-cells", &size_cells);
> > +        BUG_ON(size_cells > 2 || addr_cells > 2);
> > +        reg_cells = addr_cells + size_cells;
> > +
> > +        cell = (const __be32 *)prop->value;
> > +        banks = static_mem_len / (reg_cells * sizeof (u32));
> > +        BUG_ON(banks > NR_MEM_BANKS);
> > +
> > +        for ( i = 0; i < banks; i++ )
> > +        {
> > +            device_tree_get_reg(&cell, addr_cells, size_cells, &start, &size);
> > +            d->arch.static_mem.bank[i].start = start;
> > +            d->arch.static_mem.bank[i].size = size;
> > +            static_mem_size += size;
> > +
> > +            printk(XENLOG_INFO
> > +                    "Static Memory Bank[%d] for Domain %pd:"
> > +                    "0x%"PRIx64"-0x%"PRIx64"\n",
> > +                    i, d,
> > +                    d->arch.static_mem.bank[i].start,
> > +                    d->arch.static_mem.bank[i].start +
> > +                    d->arch.static_mem.bank[i].size);
> > +        }
> > +        d->arch.static_mem.nr_banks = banks;
> > +    }
> 
> Could we allocate the memory as we parse?
> 

Ok. I'll try.

> >
> >       rc = dt_property_read_u64(node, "memory", &mem);
> > -    if ( !rc )
> > +    if ( !static_mem && !rc )
> >       {
> >           printk("Error building DomU: cannot read \"memory\" property\n");
> >           return -EINVAL;
> >       }
> > -    kinfo.unassigned_mem = (paddr_t)mem * SZ_1K;
> > +    kinfo.unassigned_mem = static_mem ? static_mem_size :
> > + (paddr_t)mem * SZ_1K;
> >
> > -    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n", d-
> >max_vcpus, mem);
> > +    printk("*** LOADING DOMU cpus=%u memory=%"PRIx64"KB ***\n",
> > +            d->max_vcpus, (kinfo.unassigned_mem) >> 10);
> >
> >       kinfo.vpl011 = dt_property_read_bool(node, "vpl011");
> >
> > @@ -2452,7 +2496,11 @@ static int __init construct_domU(struct domain
> *d,
> >       /* type must be set before allocate memory */
> >       d->arch.type = kinfo.type;
> >   #endif
> > -    allocate_memory(d, &kinfo);
> > +    if ( static_mem )
> > +        /* allocate_static_memory(d, &kinfo); */
> > +        BUG();
> > +    else
> > +        allocate_memory(d, &kinfo);
> >
> >       rc = prepare_dtb_domU(d, &kinfo);
> >       if ( rc < 0 )
> > diff --git a/xen/include/asm-arm/domain.h
> > b/xen/include/asm-arm/domain.h index c9277b5c6d..81b8eb453c 100644
> > --- a/xen/include/asm-arm/domain.h
> > +++ b/xen/include/asm-arm/domain.h
> > @@ -10,6 +10,7 @@
> >   #include <asm/gic.h>
> >   #include <asm/vgic.h>
> >   #include <asm/vpl011.h>
> > +#include <asm/setup.h>
> >   #include <public/hvm/params.h>
> >
> >   struct hvm_domain
> >   @@ -89,6 +90,8 @@ struct arch_domain
> >   #ifdef CONFIG_TEE
> >       void *tee;
> >   #endif
> > +
> > +    struct meminfo static_mem;
> >   } __cacheline_aligned;
> >
> >   struct arch_vcpu
> >
> 
> Cheers,
> 
> --
> Julien Grall

Cheers

Penny


From xen-devel-bounces@lists.xenproject.org Wed May 19 09:09:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 09:09:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129952.243680 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljICQ-0007JE-7q; Wed, 19 May 2021 09:09:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129952.243680; Wed, 19 May 2021 09:09:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljICQ-0007J7-36; Wed, 19 May 2021 09:09:06 +0000
Received: by outflank-mailman (input) for mailman id 129952;
 Wed, 19 May 2021 09:09:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iEmX=KO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ljICP-0007IG-3L
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 09:09:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 229aa98e-3b1f-41b2-a926-e0084948aa17;
 Wed, 19 May 2021 09:09:03 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D314AAD4D;
 Wed, 19 May 2021 09:09:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 229aa98e-3b1f-41b2-a926-e0084948aa17
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: Preserving transactions accross Xenstored Live-Update
Message-ID: <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
Date: Wed, 19 May 2021 11:09:01 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Q0ERxfPhXqLWuomJi5eJsqfKJ0KpRzZPy"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Q0ERxfPhXqLWuomJi5eJsqfKJ0KpRzZPy
Content-Type: multipart/mixed; boundary="VEav0ghbVwDkAjldmXSNEOQ1smduzBaJd";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
Message-ID: <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
Subject: Re: Preserving transactions accross Xenstored Live-Update
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
In-Reply-To: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>

--VEav0ghbVwDkAjldmXSNEOQ1smduzBaJd
Content-Type: multipart/mixed;
 boundary="------------C69D794A7A1B5E58D3B75BCB"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------C69D794A7A1B5E58D3B75BCB
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.05.21 20:11, Julien Grall wrote:
> Hi Juergen,
> 
> I have started to look at preserving transaction accross Live-update in
> C Xenstored. So far, I managed to transfer transaction that read/write
> existing nodes.
> 
> Now, I am running into trouble to transfer new/deleted node within a
> transaction with the existing migration format.
> 
> C Xenstored will keep track of nodes accessed during the transaction but
> not the children (AFAICT for performance reason).

Not performance reasons, but because there isn't any need for that:

The children are either unchanged (so the non-transaction node records
apply), or they will be among the tracked nodes (transaction node
records apply). So in both cases all children should be known.

In case a child has been deleted in the transaction, the stream should
contain a node record for that child with the transaction-id and the
number of permissions being zero: see docs/designs/xenstore-migration.md


Juergen

--------------C69D794A7A1B5E58D3B75BCB--

--VEav0ghbVwDkAjldmXSNEOQ1smduzBaJd--

--Q0ERxfPhXqLWuomJi5eJsqfKJ0KpRzZPy
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCk1a0FAwAAAAAACgkQsN6d1ii/Ey8l
CQf/a9CHgVa36Y2Gvo8BsHSRnGL5wiAZ5CXE7SH2HwBpu7jED9cEptfUVgeekB7szsNHg1coDfgA
wCoSheXuD8ZOV86dRRtwwr1e/Fnw7TAJjSYxoJq6i0bjtvTXlg7fvC71+CWVlpwRqvWnEepZqFeA
pQb9HSJpkrcrA8f0QuImWsEAwtT7vTC57oua34cLf31OMcOktYM4fl2g28btRRqkGDvtlmo5GBAV
s4Cjh9eQ6FkMiCw/uUxjU9E85cWuN7glCebjtjpeNizhKz0YF6mw/8S1B3T4cJMWoD6aaHhrBRPU
ubadA4/2sPjzuutm8eYjSvdYy5RkIn3AZmt+JDYbKA==
=pGk5
-----END PGP SIGNATURE-----

--Q0ERxfPhXqLWuomJi5eJsqfKJ0KpRzZPy--


From xen-devel-bounces@lists.xenproject.org Wed May 19 09:29:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 09:29:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129957.243691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljIWV-0001Ny-Uk; Wed, 19 May 2021 09:29:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129957.243691; Wed, 19 May 2021 09:29:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljIWV-0001Nr-RY; Wed, 19 May 2021 09:29:51 +0000
Received: by outflank-mailman (input) for mailman id 129957;
 Wed, 19 May 2021 09:29:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fOiY=KO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljIWU-0001Nj-IJ
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 09:29:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d5c28893-fc35-4c89-8241-fac1070c28d8;
 Wed, 19 May 2021 09:29:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 84F3BAE39;
 Wed, 19 May 2021 09:29:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5c28893-fc35-4c89-8241-fac1070c28d8
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621416588; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=70fZJvNZqwFm//PLmJ2D6mSvvED+4URflxvX5wyUEiw=;
	b=UcrhyrzPWpUdVWTzvspsKzKq7tHfVXSF3pQQieM4GqdTK/6zK97L9U8oqfEUAtyQ/3JCtd
	/x8bQmv9YGusAr5cNsprTkCBWAM0LLPFZ3TARRzMo6w2F9qbVFA9uvbyZeicCR8zCaz73b
	gkWOBVM/In89PBmVi2WskOt3TLHe9jw=
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
To: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
 <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
 <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
 <ee89a22d-5f46-51ed-4c46-63cfc60cbafc@suse.com>
 <20210518174633.spo5kmgcbuo6dg5k@tomti.i.net-space.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <51333867-d693-38e2-bd1c-fce28241a604@suse.com>
Date: Wed, 19 May 2021 11:29:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518174633.spo5kmgcbuo6dg5k@tomti.i.net-space.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 19:46, Daniel Kiper wrote:
> On Mon, May 17, 2021 at 03:24:28PM +0200, Jan Beulich wrote:
>> On 17.05.2021 15:20, Daniel Kiper wrote:
>>> On Mon, May 17, 2021 at 08:48:32AM +0200, Jan Beulich wrote:
>>>> On 07.05.2021 22:26, Bob Eshleman wrote:
>>>>> What is your intuition WRT the idea that instead of trying to add a PE/COFF
>>>>> hdr in front of Xen's mb2 bin, we instead go the route of introducing valid
>>>>> mb2 entry points into xen.efi?
>>>>
>>>> At the first glance I think this is going to be less intrusive, and hence
>>>> to be preferred. But of course I haven't experimented in any way ...
>>>
>>> When I worked on this a few years ago I tried that way. Sadly I failed
>>> because I was not able to produce a "linear" PE image using the binutils
>>> existing in those days.
>>
>> What is a "linear" PE image?
> 
> The problem with Multiboot family protocols is that all code and data
> sections have to be glued together in the image and loaded into memory
> as such (IIRC BSS is an exception, but it has to live behind the
> image). So you cannot use a PE image which has different
> representations in the file and in memory. IIRC by default at least
> the code and data sections in xen.efi have different sizes in the PE
> file and in memory. I tried to fix that using a linker script and
> objcopy, but it did not work. Sadly I do not remember the details, but
> there is a pretty good chance you can find the relevant emails in the
> xen-devel archive with me explaining what kind of problems I ran into.

Ah, this rings a bell. Even the .bss-is-last assumption doesn't hold,
because .reloc (for us as well as in general) comes later, but needs
loading (in the right place). Since even xen.gz isn't simply the
compressed linker output, but a post-processed (by mkelf32) image,
maybe what we need is a build tool doing similar post-processing on
xen.efi? Otoh getting disk image and in-memory image aligned ought
to be possible by setting --section-alignment= and --file-alignment=
to the same value (resulting in a much larger file) - adjusting file
positions would effectively be what a post-processing tool would need
to do (like with mkelf32 perhaps we could then at least save the
first ~2Mb of space). Which would still leave .reloc to be dealt with
- maybe we could place this after .init, but still ahead of
__init_end (such that the memory would get freed late in the boot
process). Not sure whether EFI loaders would "like" such an unusual
placement.

Also not sure what to do with Dwarf debug info, which just recently
we managed to avoid needing to strip unconditionally.

>>> Maybe
>>> newer binutils are more flexible and will be able to produce a PE image
>>> with properties required by Multiboot2 protocol.
>>
>> Isn't all you need the MB2 header within the first so many bytes of the
>> (disk) image? Or was it the image as loaded into memory? Both should be
>> possible to arrange for.
> 
> IIRC Multiboot2 protocol requires the header in first 32 kiB of an image.
> So, this is not a problem.

I was about to ask "Disk image or in-memory image?" But this won't
matter if the image as a whole got linearized, as long as the first
section doesn't start too high up. I notice that xen-syms doesn't fit
this requirement either, only the output of mkelf32 does. Which
suggests that there may not be a way around a post-processing tool.
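
For reference, the 32 KiB constraint Daniel mentions can be checked
mechanically. Below is a minimal standalone sketch (not Xen code; the
function name is made up) that scans an image buffer for the Multiboot2
header magic, 0xE85250D6, within the first 32768 bytes and at the
64-bit alignment the spec requires:

```c
#include <stdint.h>
#include <stddef.h>

#define MB2_HEADER_MAGIC 0xE85250D6u
#define MB2_SEARCH_LIMIT 32768u   /* header must lie in the first 32 KiB */

/* Return the offset of the MB2 header magic, or -1 if none is found
 * within the search limit.  The header must be 64-bit aligned, hence
 * the step of 8. */
static long find_mb2_header(const uint8_t *img, size_t len)
{
    size_t limit = len < MB2_SEARCH_LIMIT ? len : MB2_SEARCH_LIMIT;

    for ( size_t off = 0; off + 4 <= limit; off += 8 )
    {
        /* Assemble the little-endian 32-bit magic candidate. */
        uint32_t magic = (uint32_t)img[off] |
                         (uint32_t)img[off + 1] << 8 |
                         (uint32_t)img[off + 2] << 16 |
                         (uint32_t)img[off + 3] << 24;

        if ( magic == MB2_HEADER_MAGIC )
            return (long)off;
    }
    return -1;
}
```

A loader-side check like this is why only the linearized mkelf32 output,
with its first section near the start of the file, satisfies the
requirement out of the box.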

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 19 09:37:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 09:37:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129963.243702 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljIeI-0002pf-PT; Wed, 19 May 2021 09:37:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129963.243702; Wed, 19 May 2021 09:37:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljIeI-0002pY-Ls; Wed, 19 May 2021 09:37:54 +0000
Received: by outflank-mailman (input) for mailman id 129963;
 Wed, 19 May 2021 09:37:53 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljIeH-0002pO-Pt; Wed, 19 May 2021 09:37:53 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljIeH-0006kA-JS; Wed, 19 May 2021 09:37:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljIeH-0004DZ-5q; Wed, 19 May 2021 09:37:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljIeH-00010c-5L; Wed, 19 May 2021 09:37:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=q3XXJ9nVptDPYEuZbRBBmkN4GRYPG4qP+kvFKD+BjHQ=; b=Ct83SWISVI9XEwixbUoN7JEMLX
	E6JIBwyQ9LZXhx1Qesu4iT4TJkb5BMNqSWa4u0AiboS9nr7DwdfmEvgpx+j7TvesvpMFqTb/cnGkv
	VnjSpWRTLs4/TF7Wh/LwN9udH1hwsfzn8FawLBh4fZ4wv6imfK5JDdjIaR9SBbayH6dY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162083-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162083: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 09:37:53 +0000

flight 162083 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162083/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days    9 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
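
The pattern this commit (and the ones below) applies can be shown with a
minimal standalone sketch; the field names and table entries here are
hypothetical, not the actual xenbaked definitions:

```c
#include <string.h>

/* A pointer that only ever refers to string literals is declared
 * const char *, so any attempted write through it is rejected at
 * compile time instead of faulting at run time. */
typedef struct {
    int event_id;
    const char *text;   /* points at string literals only */
} stat_map_t;

static const stat_map_t stat_map[] = {
    { 0, "idle" },
    { 1, "running" },
};

/* stat_map[0].text[0] = 'X';  -- would now be a compile-time error */
```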

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither of the string parameters in set_prompt() and set_delay() is meant
    to be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 19 09:49:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 09:49:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129971.243716 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljIp2-0004Kr-PQ; Wed, 19 May 2021 09:49:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129971.243716; Wed, 19 May 2021 09:49:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljIp2-0004Kk-Lp; Wed, 19 May 2021 09:49:00 +0000
Received: by outflank-mailman (input) for mailman id 129971;
 Wed, 19 May 2021 09:48:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljIp1-0004Ka-2O; Wed, 19 May 2021 09:48:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljIp0-0006un-Sv; Wed, 19 May 2021 09:48:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljIp0-0004lO-NA; Wed, 19 May 2021 09:48:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljIp0-00026z-Mf; Wed, 19 May 2021 09:48:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=G/Dui+a8LmOYX2GEiujI2/rAQ7kT9wYso3JtWUHFKJo=; b=yKDxoniwzz0W1FHDCVJeVJwm2V
	0i+shQtaJSJg8CeHYP5frya15mS9JxycLhDtQywJ9iwupoxQStSlrc5Yg4vwR+mYjCpy0sSP1hv+e
	KUu9QNMKsBYScjBLLtLTam9eAeixaUJ+ARff7fXWTs3oQPxjjQfMQ72NZXbPt6iLqX5g=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162086-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 162086: all pass - PUSHED
X-Osstest-Versions-This:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
X-Osstest-Versions-That:
    xen=cb199cc7de987cfda4659fccf51059f210f6ad34
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 09:48:58 +0000

flight 162086 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162086/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4
baseline version:
 xen                  cb199cc7de987cfda4659fccf51059f210f6ad34

Last test of basis   161968  2021-05-16 09:19:30 Z    3 days
Testing same since   162086  2021-05-19 09:18:32 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Juergen Gross <jgross@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   cb199cc7de..caa9c4471d  caa9c4471d1d74b2d236467aaf7e63a806ac11a4 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Wed May 19 09:49:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 09:49:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129974.243729 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljIpf-0004v3-2K; Wed, 19 May 2021 09:49:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129974.243729; Wed, 19 May 2021 09:49:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljIpe-0004uw-Vi; Wed, 19 May 2021 09:49:38 +0000
Received: by outflank-mailman (input) for mailman id 129974;
 Wed, 19 May 2021 09:49:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fOiY=KO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljIpd-0004uf-J2
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 09:49:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d7f0cb2e-dcec-4e9f-817b-4a7474c82bf9;
 Wed, 19 May 2021 09:49:36 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 633B8AEFF;
 Wed, 19 May 2021 09:49:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d7f0cb2e-dcec-4e9f-817b-4a7474c82bf9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621417775; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=I4LUYUhWxvpW3Gf1BFqwEtTorA2UD3yBZdhvlhtZSnc=;
	b=W7SzwF/67jqePExDPBHCsjTFyL3VHO66PzLMijMpIQfTj5DHwhyNFTgC4RTigoXzlglP9J
	7s5jAbc57awYfC/8iczlByW8auNssjUztcdOILGAcvQg+jdpsj1c+sIRKndUTq+4Jhe66p
	oIr8ryzC8CkAlxI/k2jxAMyk+9/JzPU=
Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>, Julien Grall <julien@xen.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <9e22e4de-0d09-5195-bd8f-2ca326264807@suse.com>
Date: Wed, 19 May 2021 11:49:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.05.2021 05:16, Penny Zheng wrote:
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 5:46 PM
>>
>> On 18/05/2021 06:21, Penny Zheng wrote:
>>> --- a/xen/include/asm-arm/mm.h
>>> +++ b/xen/include/asm-arm/mm.h
>>> @@ -88,7 +88,15 @@ struct page_info
>>>            */
>>>           u32 tlbflush_timestamp;
>>>       };
>>> -    u64 pad;
>>> +
>>> +    /* Page is reserved. */
>>> +    struct {
>>> +        /*
>>> +         * Reserved Owner of this page,
>>> +         * if this page is reserved to a specific domain.
>>> +         */
>>> +        struct domain *domain;
>>> +    } reserved;
>>
>> The space in page_info is quite tight, so I would like to avoid introducing new
>> fields unless we can't get away from it.
>>
>> In this case, it is not clear why we need to differentiate the "Owner"
>> vs the "Reserved Owner". It might be clearer if this change is folded in the
>> first user of the field.
>>
>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>
> 
> Yeah, I may delete this change. I introduced it while considering the
> functionality of rebooting domains on static allocation.
> 
> A little more discussion on rebooting domains on static allocation:
> the major use cases for domains on static allocation are systems with
> totally pre-defined, static behavior all the time. There is no domain
> allocation at runtime, while domain rebooting still exists.
> 
> And when rebooting a domain on static allocation, all these reserved
> pages cannot go back to the heap when freeing them. So I am
> considering using one global `struct page_info *[DOMID]` value to
> store them.

Except such a separate array will consume quite a bit of space for
no real gain: v.free has 32 bits of padding space right now on
Arm64, so there's room for a domid_t there already. Even on Arm32
this could be arranged for, as I doubt "order" needs to be 32 bits
wide.
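
A quick sketch of the space argument (a hypothetical stand-in, not the
real struct page_info): Xen's domid_t is 16 bits, so it fits alongside a
32-bit "order" field without growing an 8-byte union arm:

```c
#include <stdint.h>

typedef uint16_t domid_t;   /* as in Xen's public headers */

/* Hypothetical model of the v.free arm: storing the owning domid in
 * the formerly padded space costs no extra bytes on Arm64. */
struct free_arm {
    uint32_t order;     /* could even shrink on Arm32, as noted above */
    domid_t owner;      /* lands in the former padding */
    uint16_t pad;
};

_Static_assert(sizeof(struct free_arm) == 8, "layout must not grow");
```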

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 19 10:34:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 10:34:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.129990.243747 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljJWw-0001Zm-OF; Wed, 19 May 2021 10:34:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 129990.243747; Wed, 19 May 2021 10:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljJWw-0001Zf-Ku; Wed, 19 May 2021 10:34:22 +0000
Received: by outflank-mailman (input) for mailman id 129990;
 Wed, 19 May 2021 10:34:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fOiY=KO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljJWv-0001ZZ-6X
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 10:34:21 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 06d505b5-5b02-450e-9744-2e2d97332b71;
 Wed, 19 May 2021 10:34:19 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id CDD8AAE63;
 Wed, 19 May 2021 10:34:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 06d505b5-5b02-450e-9744-2e2d97332b71
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621420458; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=XqG/HsDt9WIQTMcyoH5Iz0kL/CFjqRbSI1L5G96TZ8o=;
	b=lODXIuJlR7w4TbHm8JzY0+5yDBFdmqkKsAqyormI0IxmRCvE/0K/3HXvVFtJSkjja3Ixym
	w6myFKOcUc/i8Zt9tB5VP6HlX733TeoegCaFTevSQyfFWH98tSeIUqocLfYBWOIW6F+gZM
	3J7i8a4Fh0tzrOD8YLq9EoLp6dVXg8I=
Subject: Re: [PATCH v2] libelf: improve PVH elfnote parsing
To: Roger Pau Monne <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210518144741.44395-1-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c645b764-00fe-2b90-3b31-7f2bb6f07c02@suse.com>
Date: Wed, 19 May 2021 12:34:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210518144741.44395-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 18.05.2021 16:47, Roger Pau Monne wrote:
> @@ -425,8 +425,11 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>          return -1;
>      }
>  
> -    /* Initial guess for virt_base is 0 if it is not explicitly defined. */
> -    if ( parms->virt_base == UNSET_ADDR )
> +    /*
> +     * Initial guess for virt_base is 0 if it is not explicitly defined in the
> +     * PV case. For PVH virt_base is forced to 0 because paging is disabled.
> +     */
> +    if ( parms->virt_base == UNSET_ADDR || hvm )
>      {
>          parms->virt_base = 0;
>          elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",

This message is wrong now if hvm is true but parms->virt_base != UNSET_ADDR.
Best perhaps is to avoid emitting the message altogether when hvm is true.
(Since you'll be touching it anyway, perhaps a good opportunity to do away
with passing parms->virt_base to elf_msg(), and instead simply use a literal
zero.)

> @@ -441,8 +444,10 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>       *
>       * If we are using the modern ELF notes interface then the default
>       * is 0.
> +     *
> +     * For PVH this is forced to 0, as it's already a legacy option for PV.
>       */
> -    if ( parms->elf_paddr_offset == UNSET_ADDR )
> +    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
>      {
>          if ( parms->elf_note_start )

Don't you want "|| hvm" here as well, or alternatively suppress the
fallback to the __xen_guest section in the PVH case (near the end of
elf_xen_parse())?

>              parms->elf_paddr_offset = 0;

Similar remark as further up for the elf_msg() down below here.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 19 11:24:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 11:24:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130000.243758 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljKJa-0006NY-83; Wed, 19 May 2021 11:24:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130000.243758; Wed, 19 May 2021 11:24:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljKJa-0006NR-51; Wed, 19 May 2021 11:24:38 +0000
Received: by outflank-mailman (input) for mailman id 130000;
 Wed, 19 May 2021 11:24:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljKJY-0006NH-G1; Wed, 19 May 2021 11:24:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljKJY-0000E5-91; Wed, 19 May 2021 11:24:36 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljKJX-0002rK-VG; Wed, 19 May 2021 11:24:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljKJX-0008Pw-Uo; Wed, 19 May 2021 11:24:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Vqa5VLklcEcziQ8fy2Y7uLaCROAFd2aPoycvnlS3LQk=; b=WxrAAPmQRp+ibKgD5+MTDWr61g
	986xFfNhLZrBJBSUgi8SKFG9IhMAs+5kHQHQBP+ebtAubQR4Bk1KAxWkHf6o2l/yi6M7bQt44YdNI
	RhPJjyKa+XJaxgsu/0MxO5ZCxCY9epKoMbQG1nrpa5vJ4AJZv7cbZ4ejPpGn+osqnk9k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162087-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162087: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 11:24:35 +0000

flight 162087 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162087/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    0 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days   10 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() and set_delay() are meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 19 11:25:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 11:25:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130005.243771 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljKKI-0006vq-Hy; Wed, 19 May 2021 11:25:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130005.243771; Wed, 19 May 2021 11:25:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljKKI-0006vj-Eo; Wed, 19 May 2021 11:25:22 +0000
Received: by outflank-mailman (input) for mailman id 130005;
 Wed, 19 May 2021 11:25:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljKKH-0006vZ-Bw; Wed, 19 May 2021 11:25:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljKKH-0000ES-9L; Wed, 19 May 2021 11:25:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljKKG-0002si-Vd; Wed, 19 May 2021 11:25:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljKKG-0000jv-V8; Wed, 19 May 2021 11:25:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xcam1RmKa/U7VRydrWmf9cCAruIOMeWcSIlQP07XfmI=; b=Kb0+/R52nngXAkFbnWoHxWIADv
	8z8h/pacSIfFhLSlu3zUGBvlswdt06zw4yS+VQdfMR9Y35ZX51lflK+Ja9/iNfXlX7lza3n0hz1vF
	61K4Pnc2Q2+hNHo9GvYr/VNZwHEWVxets6IBMBaVfZKxJoPHYR0kN2C/GtqE6hUjeL94=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162071-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162071: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=15ee7b76891a78141e6e30ef3f8572e8d6b326d2
X-Osstest-Versions-That:
    ovmf=42ec0a315b8a2f445b7a7d74b8d78965f1dff8f6
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 11:25:20 +0000

flight 162071 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162071/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 15ee7b76891a78141e6e30ef3f8572e8d6b326d2
baseline version:
 ovmf                 42ec0a315b8a2f445b7a7d74b8d78965f1dff8f6

Last test of basis   162046  2021-05-18 17:10:06 Z    0 days
Testing same since   162071  2021-05-19 01:40:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sergei Dmitrouk <sergei@posteo.net>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   42ec0a315b..15ee7b7689  15ee7b76891a78141e6e30ef3f8572e8d6b326d2 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Wed May 19 12:00:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 12:00:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130015.243786 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljKrt-0002Ew-Cr; Wed, 19 May 2021 12:00:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130015.243786; Wed, 19 May 2021 12:00:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljKrt-0002Eo-76; Wed, 19 May 2021 12:00:05 +0000
Received: by outflank-mailman (input) for mailman id 130015;
 Wed, 19 May 2021 12:00:04 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=udgl=KO=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ljKrs-00029A-Ov
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 12:00:04 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.42]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd7ce512-9f3f-4ab9-a1eb-eb3f50447b5d;
 Wed, 19 May 2021 12:00:02 +0000 (UTC)
Received: from AS8PR04CA0034.eurprd04.prod.outlook.com (2603:10a6:20b:312::9)
 by VI1PR0801MB1744.eurprd08.prod.outlook.com (2603:10a6:800:5c::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 11:59:57 +0000
Received: from AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:20b:312:cafe::93) by AS8PR04CA0034.outlook.office365.com
 (2603:10a6:20b:312::9) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25 via Frontend
 Transport; Wed, 19 May 2021 11:59:57 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT054.mail.protection.outlook.com (10.152.16.212) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 11:59:57 +0000
Received: ("Tessian outbound 6c8a2be3c2e7:v92");
 Wed, 19 May 2021 11:59:57 +0000
Received: from da2f4b75cf98.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 9D695A54-1C33-42C6-8E94-EFC063A55DA2.1; 
 Wed, 19 May 2021 11:59:46 +0000
Received: from EUR03-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id da2f4b75cf98.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 11:59:46 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR08MB5488.eurprd08.prod.outlook.com (2603:10a6:803:137::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Wed, 19 May
 2021 11:59:45 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a%5]) with mapi id 15.20.4129.033; Wed, 19 May 2021
 11:59:45 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO2P265CA0514.GBRP265.PROD.OUTLOOK.COM (2603:10a6:600:13b::21) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.33 via Frontend Transport; Wed, 19 May 2021 11:59:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd7ce512-9f3f-4ab9-a1eb-eb3f50447b5d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CaDl1v1cN/lR/FpGQKhQjoH+MGVIwO/8c3sSQjIN32Q=;
 b=D53K7dEv/pW339nV28RUTqxGFbkmEG7dvptmelzirBc06nReZOqeObIsoWW3oZXZ5ZXcOGH7R6xLZ+JzuGOQ77KElDlmaeFylAVZUK2CRH1UUc9S1DhT71xvGjgSQjq+3psOgoRE9jOL1lR42YRl63yXortt0fWk6bkpFu/NQcM=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: a26856f033cbb8a8
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=W1tdISRJ+Hy2iAazTP2PhEyREngX314YOZLTZZsSwSfu/QFP9Gefc6HbqoF7iEeKw5/TKpegmYw7joo2V/yiXeTgisjPKslZ7Hitg0qSkNodDEfK3Laz5sd/M4s3QLQG5uveiAKg2t3yh7A2LlRs3/CQxZhY3f6TAuasLEe42KhWgoA5wZhWRTwf8occ6+Btv2eiX29NjNWAAxz3unI7qUIP7aXcvJWPPy6bxMgPAoBkwt9Jt+YS89oefXT55fG4WPZkZOZzcedNDXEyIA/fom0iLA01McL4MrYx5oxUj3vEB88VhyV7yVlzk0CrwoJxapRPqIxOXYOVTG6J4coVoQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CaDl1v1cN/lR/FpGQKhQjoH+MGVIwO/8c3sSQjIN32Q=;
 b=SY9oddkX4eYr6QT7mvnDBV/3QbKPgOiZyLIa+pPMR5FBZr8U27Kxmu5pvX22Cep+8JXtjNk9C9YhZqNDv3+dpbI/febMXUntw9wM5qkZenzNsBbXZBoJR4sMgeOHWEiV1DBXIIhBALkOxHEiVc8KdElhy4tvSrzSgp9nnDdeLpyXqbLen4hcgx7lO/pcyOrZJ3oB5FDlfC7QTv/3ZzbuaqweiJ3TNSzXqxPHYtMsN9uuVJPKqUX4Uf5NoWNxTtiOWUNvKfD9UbUEICyEiZmeanKChmQnmCOkE/fV66suHaqxFXSrNSNxiUycs+n9NjxmOQE3T9YiXs+U37mLLufAiA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=CaDl1v1cN/lR/FpGQKhQjoH+MGVIwO/8c3sSQjIN32Q=;
 b=D53K7dEv/pW339nV28RUTqxGFbkmEG7dvptmelzirBc06nReZOqeObIsoWW3oZXZ5ZXcOGH7R6xLZ+JzuGOQ77KElDlmaeFylAVZUK2CRH1UUc9S1DhT71xvGjgSQjq+3psOgoRE9jOL1lR42YRl63yXortt0fWk6bkpFu/NQcM=
Authentication-Results-Original: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: Deploy XEN Project
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <4468222a-ac0b-7544-351d-286231a6bc9c@gmail.com>
Date: Wed, 19 May 2021 12:59:38 +0100
Cc: xen-devel@lists.xenproject.org
Content-Transfer-Encoding: quoted-printable
Message-Id: <2B331F1E-2177-427A-A531-FF8566CB7303@arm.com>
References: <4468222a-ac0b-7544-351d-286231a6bc9c@gmail.com>
To: Technologyrss Mail <technologyrss.mail@gmail.com>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO2P265CA0514.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:13b::21) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 7297e538-c834-4bc5-3020-08d91abd9f3c
X-MS-TrafficTypeDiagnostic: VI1PR08MB5488:|VI1PR0801MB1744:
X-Microsoft-Antispam-PRVS:
	<VI1PR0801MB1744732A51ADABF55B82973CE42B9@VI1PR0801MB1744.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-Microsoft-Antispam-Message-Info-Original:
 dXPEOZsoWcpD1SoS/8HJ/q2xqxCJSvTZj8tNzPZnNCaRiNIIQ0LV+YLOXixae5IQSKd9+9L88m7vM9J26cbiI8co5FqU6NFnLqoq/ZE55HXlmXuFVpenwPIJjc5sSs0KxLCAp9vHTLgyqMXtxRYTr/7k/sUdRhSDpcu+uQpiHNrJGH+2PM+vdOET3dlhEyn5SJf1ScrbjnRh/2Hkv5dPcpRm4dX/C88v+r8Jw/BmjGJ/5atzI95AZbeoXvdNhxMAOxfsXJjnFACFRUj4CoFcbv0dgXFwI7brGIm6Deg0243pFkDU6auni9Xt5xDQqsJn7TD+dFfYUUJ1sYBDAZ8QMt/LX2336k/AOr14PxQkIOJS5hIdVCWh1TaRkVksdBEvPaJZYsJnFm5/eamlkYqKTEmbfEekvHEh4vlwnOABufW7tbtVSOVVi2ChfIeWNz20louXA5oBvSF05oHe5S0nfd7Ksuwq2g05mjkbDCcB1f08RV8nawKgci5eIl6xNZVUFl0x6n0ctpgLRcJKoXtKH4UbfHCGhWICBe0XQWxXmDsb641h+I+7oIxw//TBamvNVtyurTY+kdOGgGMTblQMz7JEspBhqgXnTTK8zPExO3LuzrWoo0wsgYKHd0UY/DqRrbsD6Mo4ErvyAkULJbT7WRsAdzMcK0tvviSZSh33ON9mSIk0gxlC2zSjC1tYop7zkryRfWBmU+LE+2UxKYeaekKIT4KzUTAJFbYnBMWRiL/L0Yc9qMmVOQALKXUSzsfNT5bKQb3zKZdt/55R1b04Mg==
X-Forefront-Antispam-Report-Untrusted:
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:VI1PR08MB3629.eurprd08.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(366004)(346002)(376002)(396003)(15974865002)(2616005)(8676002)(956004)(6916009)(38100700002)(6512007)(52116002)(5660300002)(44832011)(66476007)(83380400001)(36756003)(26005)(6506007)(6486002)(316002)(86362001)(16526019)(33656002)(66556008)(2906002)(966005)(4326008)(4744005)(66946007)(8936002)(6666004)(3480700007)(38350700002)(186003)(53546011)(478600001)(7116003)(45980500001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
 =?us-ascii?Q?n5FPsSNRiHFjDYeyDvSJzTkmU4g+72ao8V8A8CoxPYEa8+v7ik600NnPBp2L?=
 =?us-ascii?Q?x0OTnX6pleq5BAASRVw61Z1Q3coH9AtDpJbNFQkq5b00Qxfec/5SXc2d55HX?=
 =?us-ascii?Q?lUPOESm/v4j+vUb1C2DlJoyjBGxSCtoKREDt5wQ1rFDus4pJdA1LHCzL8llD?=
 =?us-ascii?Q?x/E/AsQ4PpBV76VPT9Dj/lUfMG0ykB47IdSstoSF6uLUPdrfzISvP+h1t6W6?=
 =?us-ascii?Q?QHPmP5dh/a2QGNnFkC47P96cjEp+9Cji3i9RHzR5wiuLsmsHV1KIttJQYdwK?=
 =?us-ascii?Q?RIexzEVJPLV7XW0utLvfxbByg4ZpXBf1o36fGLZjlRhb8g/Mvx9gZpRCs/XF?=
 =?us-ascii?Q?/D4s8cSGrlPInXOmsFRcRzRcp6Z45fxsm5mWIPRjKYdNa/apD7qRiKmkrOQ/?=
 =?us-ascii?Q?rZiSLbSm/97IcEetsvCerOU2DQA5SLXlaqxsMTopiiI/sAR5gisxDuYjYy48?=
 =?us-ascii?Q?920g6V4pzmPh++/I7FldSgRAp5D3YTXHezpecQuoAZjJxr/YlyxTNQB6PZ2Q?=
 =?us-ascii?Q?NGXhSxGLmO1QV8aFsGXTE8NjvoWbiMH9Y+KQDcqxvtWaytN2Um9JqdK6e2Xg?=
 =?us-ascii?Q?HMmUD1xnq8aUo2MgfkYagnuUrNJ3wWuIuBP2QwHWQTt0UTx8/mmrg8wUdwA6?=
 =?us-ascii?Q?kWvYmOgQZ9+qm5nK7D3X2BOsN1KdxwGfo9xvkN/pK6FyDtdyA4SF6ZJ03gLy?=
 =?us-ascii?Q?Tu01tzGpV8j0waDXyZspT6ddRgRHF6QixgFeKjtSGnxzTC6BEYfNPO852DS+?=
 =?us-ascii?Q?4xa+4r16jx9zFMcniDHr/spy/YCYtGrD3JLtBGyqCcGn4PAHLHApguvyAvpm?=
 =?us-ascii?Q?v7BlODad3PBr2HRgCtW31L6be/6mV5aszhr3wicWYega402FU4we0nv85U/h?=
 =?us-ascii?Q?Cen0b7h+susKq5zXodwfUsxbZ2KzlICZn5oTPPLoShUWvKC8nWtfiOphZIis?=
 =?us-ascii?Q?F9aip2vqCgSCkfZvfyKbknLy2A5ss9CAy3USUG5psZ9TM6/n+XPL5Beng4pI?=
 =?us-ascii?Q?iwy+K4CdHPMm93c1EMTb+xlqj7igI71DQ+73AP71rIe0vEwRB57lLXMkPoVA?=
 =?us-ascii?Q?jJIJjYZVqzWqCGbOO/kOswnaLxpC6Bvh8mZx6iIKdLl+2v3RYhSq6r8Gd17N?=
 =?us-ascii?Q?n51fWBdN7bmNfMm9AbgIsv/89QaHbsJF1JbjXljovcr9vGHIPQs+Di2EuI6U?=
 =?us-ascii?Q?FVOiufr6GFCgv0AwT9RIYzDQJ4zM8Fy8h/ys6LY0GDK1D6/Xm9nDbKY1hIMH?=
 =?us-ascii?Q?+XfvKb4ysbXrU4Cxy9X3zHnfwtkI6RMVeHtUDj/9arfC6yXErnawGzJolPih?=
 =?us-ascii?Q?ZylvFF0dxY4t+Ywiaheh65j1?=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB5488
Original-Authentication-Results: gmail.com; dkim=none (message not signed)
 header.d=none;gmail.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	71fcf4ae-c58d-41f8-0768-08d91abd97cb
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	Prk9K+wAwsVUyEcerNUmUkvE50z5W+UGC1Cjb+/Lgz/L1SLySZKviSnnbnjBPJQJ8TRkxhi85yV82RZqAHCnbIx3WL+SgNdKKDfshOuDa3Uxc8Bv8lC0ymqyoQ3wNJfeAst7DeWO9ddhKkPprxfHgbEeRX5sP9DeCUBX5cmld4VXC2Wr7HjnFAXfB60+2z9yiY3fOCE9TsLZXJCP3ofPqbKRd+RqnIo59OVP4XqGZy1YPoSDCX+HwHsBAPIimKgq3BIJur5qKNOZE2fiiuoJ+PCGVeNLUygqiTMFoWW/4IdbsRMxTw69JhgKTS+m20ds6LzvbKQNMmRZ5gbsk8r5U2rbHM4S9RLJ9HjLDtGLtJOtfZLExtQ79mLq2/3/s0qYQTB3gObxvvcDEjGGrXqiccim6G3hFH4qFA94sUXiSBLRdD+XdWTy20EdPimGQzcKo0srqPfRrGlh8cSkQ5ttOP6+J4XfqDoNcnOXSuKvNEXoomvmHykAc80WCKHLY18VXyKOeJVl/JE1sNMivJuk2PLUENWtZ1cGeAhEJmw4Sp6X8RN4oAWEyTqARjWWhS/Pb7ON1I0uCw3JnxcgbPaIQk2AWFtwx2btGXD2uAdENxvkQQEAx4NcH+oHU76wdUy7wzz7bCJ2vWoNzvn5Mlc1Szab3QBW9yS4BQo1zDystdOICAJU+sapCuItXR4hqSVzUdcbSE1dAmCBbrCNo1Uix2ufR0JKQmG6qOY1p/Mw0Q4=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(136003)(346002)(396003)(39860400002)(36840700001)(46966006)(44832011)(82740400003)(4744005)(81166007)(3480700007)(53546011)(356005)(4326008)(16526019)(5660300002)(33656002)(6666004)(6862004)(86362001)(8936002)(186003)(6486002)(8676002)(2616005)(956004)(6506007)(6512007)(83380400001)(478600001)(316002)(70586007)(70206006)(47076005)(7116003)(15974865002)(36860700001)(36756003)(26005)(2906002)(82310400003)(336012)(966005);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 11:59:57.1619
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 7297e538-c834-4bc5-3020-08d91abd9f3c
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT054.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR0801MB1744



> On 19 May 2021, at 06:59, Technologyrss Mail <technologyrss.mail@gmail.com> wrote:
> 
> Hi,
> 
> I am a new user of the Xen project on my CentOS server. Please guide me on how to install & deploy the Xen project on CentOS 7?
> 

Hi,

I think you should write to xen-users@; here are the mailing lists for Xen: https://xenproject.org/help/mailing-list

Cheers,
Luca

> 
> 
> ---
> 
> Thanks & Regards.
> 
> Support Admin
> 
> Facebook | Twitter | Website
> 
> 116/1 West Malibagh, D. I. T Road
> 
> Dhaka-1217, Bangladesh
> 
> Mob : +088 01716915504
> 
> Email : support.admin@technologyrss.com
> 
> Web : www.technologyrss.com
> 



From xen-devel-bounces@lists.xenproject.org Wed May 19 12:28:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 12:28:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130030.243797 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLIz-0005CD-KS; Wed, 19 May 2021 12:28:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130030.243797; Wed, 19 May 2021 12:28:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLIz-0005C6-Gg; Wed, 19 May 2021 12:28:05 +0000
Received: by outflank-mailman (input) for mailman id 130030;
 Wed, 19 May 2021 12:28:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljLIy-0005Bu-HD; Wed, 19 May 2021 12:28:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljLIy-0001IK-Al; Wed, 19 May 2021 12:28:04 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljLIx-0006Sk-V4; Wed, 19 May 2021 12:28:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljLIx-0002YR-UZ; Wed, 19 May 2021 12:28:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=N3xY/mpi9pTE51mUt0RaZwch3VBKn1y396e+mz2feiM=; b=r4VcHKcaQM1u4y8PkPo8iCkJ/u
	scpqiots5Iq6eBw7K/boD7lPSLC4Ippkt1dDRcNemJqOh35sfEnjXsP4FiG7j/nhN2mBfWSvXMpSd
	JeMcIo7Q5RbwWEhdjC/1qTzPLj/6s8q2xOa50A8Fvn1Fz/JXzWK0Ppd+cC0v+bCq+79k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162070-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162070: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-saverestore.2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c313e52e6459de2e9064767083a0c949c476e32b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 12:28:03 +0000

flight 162070 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162070/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-libvirt-vhd 17 guest-saverestore.2      fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c313e52e6459de2e9064767083a0c949c476e32b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  272 days
Failing since        152659  2020-08-21 14:07:39 Z  270 days  497 attempts
Testing same since   162070  2021-05-19 01:38:54 Z    0 days    1 attempts

------------------------------------------------------------
506 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 155638 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 19 12:33:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 12:33:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130040.243811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLNk-0006hW-ED; Wed, 19 May 2021 12:33:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130040.243811; Wed, 19 May 2021 12:33:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLNk-0006hP-A0; Wed, 19 May 2021 12:33:00 +0000
Received: by outflank-mailman (input) for mailman id 130040;
 Wed, 19 May 2021 12:32:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljLNi-0006hJ-VF
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 12:32:59 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljLNh-0001Nd-D7; Wed, 19 May 2021 12:32:57 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljLNh-0000ne-6a; Wed, 19 May 2021 12:32:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=IQ3aCoUHwi/lTR0+WrwBYffNjniV/qd4sAoovl954I0=; b=Ab4D1FbQ6Clf3eFPiKghcDL8P9
	EFv+SgmKfQ1e6TZ0ppBEb4sIWj0YZGB9pRZ6lddXlgquhzSQHtjvjV/wsxJUZkF7BPV404JVlXvJM
	GlkVXK1Gr2zlOrDNGIFMu48rW6ue9t9taxaqYIao6eTSHQfPtHoBY5hjJeIHF3V7hi9g=;
Subject: Re: Preserving transactions across Xenstored Live-Update
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
 <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <c14d7a27-b486-01c1-1a24-70f286c34431@xen.org>
Date: Wed, 19 May 2021 13:32:54 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Juergen,

On 19/05/2021 10:09, Juergen Gross wrote:
> On 18.05.21 20:11, Julien Grall wrote:
>>
>> I have started to look at preserving transaction accross Live-update in 
>> C Xenstored. So far, I managed to transfer transaction that read/write 
>> existing nodes.
>>
>> Now, I am running into trouble to transfer new/deleted node within a 
>> transaction with the existing migration format.
>>
>> C Xenstored will keep track of nodes accessed during the transaction 
>> but not the children (AFAICT for performance reason).
> 
> Not performance reasons, but because there isn't any need for that:
> 
> The children are either unchanged (so the non-transaction node records
> apply), or they will be among the tracked nodes (transaction node
> records apply). So in both cases all children should be known.
In theory, opening a new transaction means you should not see any 
modification to the global database until the transaction has been 
committed. What you describe would break that, because a client would be 
able to see new nodes added outside of the transaction.

However, C Xenstored implements neither of the two behaviours. Currently, 
when a node is accessed within a transaction, we also store the names of 
its current children.

To give an example with accesses to the global DB (prefixed with TID0) and 
within a transaction (TID1):

	1) TID0: MKDIR "data/bar"
	2) Start transaction TID1
	3) TID1: DIRECTORY "data"
		-> This will cache the node data
	4) TID0: MKDIR "data/foo"
		-> This will create "foo" in the global database
	5) TID1: MKDIR "data/fish"
		-> This will create "fish" in the transaction
	6) TID1: DIRECTORY "data"
		-> This will only return "bar" and "fish"
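
The sequence above can be sketched with a small toy model (plain Python, 
illustrative only; the real C Xenstored is written in C and its data 
structures differ): a transaction snapshots the child list of a directory 
on first access, so later DIRECTORY calls inside the transaction do not 
see nodes created outside it.

```python
# Toy model of the child-list caching described above; NOT real Xenstored code.
class Store:
    """Global database: maps a directory path to the set of its children."""
    def __init__(self):
        self.children = {"data": set()}

    def mkdir(self, path):                # TID0: operate directly on the global DB
        parent, _, name = path.rpartition("/")
        self.children[parent].add(name)

class Transaction:
    """Caches the children of any directory it touches (TID1 behaviour)."""
    def __init__(self, store):
        self.store = store
        self.cache = {}                   # dir -> snapshot of its children

    def directory(self, path):            # DIRECTORY inside the transaction
        if path not in self.cache:        # first access: snapshot current children
            self.cache[path] = set(self.store.children[path])
        return sorted(self.cache[path])

    def mkdir(self, path):                # MKDIR inside the transaction
        parent, _, name = path.rpartition("/")
        self.directory(parent)            # ensure the snapshot exists
        self.cache[parent].add(name)

store = Store()
store.mkdir("data/bar")                   # 1) TID0
t = Transaction(store)                    # 2) start TID1
t.directory("data")                       # 3) caches {"bar"}
store.mkdir("data/foo")                   # 4) TID0: invisible to TID1
t.mkdir("data/fish")                      # 5) TID1
print(t.directory("data"))                # 6) -> ['bar', 'fish'], no 'foo'
```

Running the model reproduces step 6: the transaction lists "bar" and 
"fish" but never sees "foo" — the per-transaction child snapshot is 
exactly the state that would need to be carried across a Live-Update.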

If we Live-Update between 4) and 5), then we should make sure that "bar" 
cannot be seen in the listing by TID1.

Therefore, I don't think we can restore the children using the global 
node here. Instead, we need to find a way to transfer the list of known 
children within the transaction.

As a fun fact, C Xenstored implements transactions in a quirky way: TID1 
will be able to access "bar" if it knows the name, but will not list it.

> 
> In case a child has been deleted in the transaction, the stream should
> contain a node record for that child with the transaction-id and the
> number of permissions being zero: see docs/designs/xenstore-migration.md

See above why this is not sufficient.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 12:33:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 12:33:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130043.243821 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLOE-0007Fp-LS; Wed, 19 May 2021 12:33:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130043.243821; Wed, 19 May 2021 12:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLOE-0007Fi-IY; Wed, 19 May 2021 12:33:30 +0000
Received: by outflank-mailman (input) for mailman id 130043;
 Wed, 19 May 2021 12:33:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljLOC-0007EL-Ls
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 12:33:28 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljLOB-0001O2-Jg; Wed, 19 May 2021 12:33:27 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljLOB-000136-ER; Wed, 19 May 2021 12:33:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:References:Cc:To:From:Subject;
	bh=t4RY/3wXzuEjLelBJC+xa0CrMeeaCNHzY5mT4HWLUK0=; b=XcS/PG7ioDqgX/j2quPDrvbocZ
	oWjOGX+x7AWyy8EuS5zSsCIWwJm7KBp4WK6l4tKL/HyAiXDBqux7vqQMFX61EXh5uQ8hvcH9vpXyY
	8wIoWqYeGVRsHcUs7vYftRyWoLIfheqaSMnN5iMB9FLRshm94qXlu/xPNpiuWW7g+Wl0=;
Subject: Re: Preserving transactions across Xenstored Live-Update
From: Julien Grall <julien@xen.org>
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
 <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
 <c14d7a27-b486-01c1-1a24-70f286c34431@xen.org>
Message-ID: <b8413748-a889-8b0c-df93-2c93ed832369@xen.org>
Date: Wed, 19 May 2021 13:33:25 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <c14d7a27-b486-01c1-1a24-70f286c34431@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 19/05/2021 13:32, Julien Grall wrote:
> Hi Juergen,
> 
> On 19/05/2021 10:09, Juergen Gross wrote:
>> On 18.05.21 20:11, Julien Grall wrote:
>>>
>>> I have started to look at preserving transaction accross Live-update in 
>>> C Xenstored. So far, I managed to transfer transaction that 
>>> read/write existing nodes.
>>>
>>> Now, I am running into trouble to transfer new/deleted node within a 
>>> transaction with the existing migration format.
>>>
>>> C Xenstored will keep track of nodes accessed during the transaction 
>>> but not the children (AFAICT for performance reason).
>>
>> Not performance reasons, but because there isn't any need for that:
>>
>> The children are either unchanged (so the non-transaction node records
>> apply), or they will be among the tracked nodes (transaction node
>> records apply). So in both cases all children should be known.
> In theory, opening a new transaction means you will not see any 
> modification in the global database until the transaction has been 
> committed. What you describe would break that because a client would be 
> able to see new nodes added outside of the transaction.
> 
> However, C Xenstored implements neither of the two. Currently, when a 
> node is accessed within the transaction, we will also store the names of 
> the current children.
> 
> To give an example with access to the global DB (prefixed with TID0) and 
> within a transaction (TID1)
> 
>      1) TID0: MKDIR "data/bar"
>      2) Start transaction TID1
>      3) TID1: DIRECTORY "data"
>          -> This will cache the node data
>      4) TID0: MKDIR "data/foo"
>          -> This will create "foo" in the global database
>      5) TID1: MKDIR "data/fish"
>          -> This will create "fish" in the transaction
>      6) TID1: DIRECTORY "data"
>          -> This will only return "bar" and "fish"
> 
> If we Live-Update between 4) and 5). Then we should make sure that "bar" 
> cannot be seen in the listing by TID1.

I meant "foo" here. Sorry for the confusion.

> 
> Therefore, I don't think we can restore the children using the global 
> node here. Instead we need to find a way to transfer the list of known 
> children within the transaction.
> 
> As a fun fact, C Xenstored implements weirdly the transaction, so TID1 
> will be able to access "bar" if it knows the name but not list it.
> 
>>
>> In case a child has been deleted in the transaction, the stream should
>> contain a node record for that child with the transaction-id and the
>> number of permissions being zero: see docs/designs/xenstore-migration.md
> 
> See above why this is not sufficient.
> 
> Cheers,
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 12:36:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 12:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130052.243832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLR2-000824-3c; Wed, 19 May 2021 12:36:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130052.243832; Wed, 19 May 2021 12:36:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLR2-00081x-0Z; Wed, 19 May 2021 12:36:24 +0000
Received: by outflank-mailman (input) for mailman id 130052;
 Wed, 19 May 2021 12:36:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fOiY=KO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljLR1-00081r-K2
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 12:36:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9d8f5961-0bb4-4ea0-b448-0f7260d7ff8e;
 Wed, 19 May 2021 12:36:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 836D9ACAD;
 Wed, 19 May 2021 12:36:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d8f5961-0bb4-4ea0-b448-0f7260d7ff8e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621427781; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=s0i22D5OWmS40N801WsJtN48/IEfxSCZBm0fibDvw7o=;
	b=UbDiE9ZWbJR9icdJBPrSKwbEKr1Xv6WjWGs7h0XRL8k7DGcvGDs6lsD2oCghlv2ThIxE4p
	FdANSZcPMqlxtosheungWoNKfAw6henA6FL0L37P0BjlJZJh7jBkjTUx08KWG8cZMGvxJ5
	Bnc7QBNYspOO5wDKcYSz3xiYGXyX3ig=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Tim Deegan <tim@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/shadow: fix DO_UNSHADOW()
Message-ID: <cdee4753-674d-23a3-7b94-fed9f2bdd0c1@suse.com>
Date: Wed, 19 May 2021 14:36:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

When adding the HASH_CALLBACKS_CHECK() I failed to properly recognize
the (somewhat unusually formatted) if() around the call to
hash_domain_foreach(). Gcc 11 is absolutely right in pointing out the
apparently misleading indentation. Besides adding the missing braces,
also adjust the two oddly formatted if()-s in the macro.

Fixes: 90629587e16e ("x86/shadow: replace stale literal numbers in hash_{vcpu,domain}_foreach()")
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
I'm puzzled as to why this bug didn't cause any fallout.

--- a/xen/arch/x86/mm/shadow/common.c
+++ b/xen/arch/x86/mm/shadow/common.c
@@ -2220,8 +2220,8 @@ void sh_remove_shadows(struct domain *d,
      */
 #define DO_UNSHADOW(_type) do {                                         \
     t = (_type);                                                        \
-    if( !(pg->count_info & PGC_page_table)                              \
-        || !(pg->shadow_flags & (1 << t)) )                             \
+    if ( !(pg->count_info & PGC_page_table) ||                          \
+         !(pg->shadow_flags & (1 << t)) )                               \
         break;                                                          \
     smfn = shadow_hash_lookup(d, mfn_x(gmfn), t);                       \
     if ( unlikely(!mfn_valid(smfn)) )                                   \
@@ -2235,11 +2235,13 @@ void sh_remove_shadows(struct domain *d,
         sh_unpin(d, smfn);                                              \
     else if ( sh_type_has_up_pointer(d, t) )                            \
         sh_remove_shadow_via_pointer(d, smfn);                          \
-    if( !fast                                                           \
-        && (pg->count_info & PGC_page_table)                            \
-        && (pg->shadow_flags & (1 << t)) )                              \
+    if ( !fast &&                                                       \
+         (pg->count_info & PGC_page_table) &&                           \
+         (pg->shadow_flags & (1 << t)) )                                \
+    {                                                                   \
         HASH_CALLBACKS_CHECK(SHF_page_type_mask);                       \
         hash_domain_foreach(d, masks[t], callbacks, smfn);              \
+    }                                                                   \
 } while (0)
 
     DO_UNSHADOW(SH_type_l2_32_shadow);
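The bug class being fixed can be shown in isolation (the names below are made up; this is not the Xen code): without braces, and indentation notwithstanding, only the first statement is controlled by the if().

```c
#include <assert.h>

/*
 * Standalone illustration of the misleading-indentation hazard: in
 * buggy(), foreach() runs unconditionally even though the indentation
 * suggests it is guarded, which is exactly what GCC 11 warns about.
 */
static int checked;
static int called;

static void check(void)   { checked = 1; }
static void foreach(void) { called = 1; }

static void buggy(int cond)
{
    checked = called = 0;
    if (cond)
        check();
        foreach();   /* runs unconditionally despite the indentation */
}

static void fixed(int cond)
{
    checked = called = 0;
    if (cond)
    {
        check();
        foreach();   /* now properly guarded */
    }
}
```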


From xen-devel-bounces@lists.xenproject.org Wed May 19 12:37:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 12:37:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130057.243844 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLRg-00009H-Ck; Wed, 19 May 2021 12:37:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130057.243844; Wed, 19 May 2021 12:37:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLRg-00009A-9T; Wed, 19 May 2021 12:37:04 +0000
Received: by outflank-mailman (input) for mailman id 130057;
 Wed, 19 May 2021 12:37:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljLRf-000092-LM
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 12:37:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljLRd-0001UU-R9; Wed, 19 May 2021 12:37:01 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljLRd-0001OZ-Lc; Wed, 19 May 2021 12:37:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=i0JrGOKKtE5fuHn0dNGcL+2YqLV56zteMLqCjUw10Kw=; b=5XgRdbRqu9tox0AZoqdgtADnDY
	Vs547cZegxiwwv3iyZcDq2xq4tpYigkM2tPWlygEud1A1xmcOgCI9ujf1a5k/sixh80A0Inum4vy2
	eFFZHanP2H0LRMQi5aAFVSYbDyUYSt5BJ1QeDLEs4MleXsckGe23Kgg89WTdzjsXG830=;
Subject: Re: [PATCH] tools/libs: guest: Fix Arm build after 8fc4916daf2a
To: Andrew Cooper <andrew.cooper3@citrix.com>, xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>
References: <20210518170339.29706-1-julien@xen.org>
 <6d77f58a-06d6-aadf-0451-b46020169004@citrix.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8f2513b6-57a7-692d-8211-213a41ef7af6@xen.org>
Date: Wed, 19 May 2021 13:36:58 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <6d77f58a-06d6-aadf-0451-b46020169004@citrix.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 18/05/2021 18:05, Andrew Cooper wrote:
> On 18/05/2021 18:03, Julien Grall wrote:
>> From: Julien Grall <jgrall@amazon.com>
>>
>> Gitlab CI spotted an issue when building the tools for Arm:
>>
>> xg_dom_arm.c: In function 'meminit':
>> xg_dom_arm.c:401:50: error: passing argument 3 of 'set_mode' discards 'const' qualifier from pointer target type [-Werror=discarded-qualifiers]
>>    401 |     rc = set_mode(dom->xch, dom->guest_domid, dom->guest_type);
>>        |                                               ~~~^~~~~~~~~~~~
>>
>> This is because the const was not propagated in the Arm code. Fix it
>> by constifying the 3rd parameter of set_mode().
>>
>> Fixes: 8fc4916daf2a ("tools/libs: guest: Use const whenever we point to literal strings")
>> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

Thanks! I have committed with just your ack to unblock the build.
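The warning can be reproduced in a few lines (sketch; set_mode and meminit here are stand-ins, not the real libxenguest signatures): once the caller's string is const char *, every callee it is passed to must take const char * as well, or -Werror=discarded-qualifiers fires.

```c
#include <assert.h>
#include <string.h>

/*
 * Before the fix the callee would have been declared as
 *     static int set_mode(int domid, char *guest_type);
 * and passing a const char * to it discards the qualifier.
 * Constifying the parameter propagates the const and compiles cleanly.
 */
static int set_mode(int domid, const char *guest_type)
{
    (void)domid;
    return strcmp(guest_type, "xen-3.0-aarch64") == 0 ? 0 : -1;
}

static int meminit(void)
{
    const char *guest_type = "xen-3.0-aarch64"; /* points to a literal */
    return set_mode(0, guest_type);
}
```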

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 12:49:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 12:49:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130069.243862 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLdG-0001nn-Iu; Wed, 19 May 2021 12:49:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130069.243862; Wed, 19 May 2021 12:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLdG-0001ng-Fo; Wed, 19 May 2021 12:49:02 +0000
Received: by outflank-mailman (input) for mailman id 130069;
 Wed, 19 May 2021 12:49:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xx4F=KO=oracle.com=daniel.kiper@srs-us1.protection.inumbo.net>)
 id 1ljLdF-0001na-38
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 12:49:01 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c154c066-27dd-4785-be6a-d7c02fba45e8;
 Wed, 19 May 2021 12:48:59 +0000 (UTC)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14JCkasF030803; Wed, 19 May 2021 12:48:57 GMT
Received: from oracle.com (aserp3030.oracle.com [141.146.126.71])
 by mx0b-00069f02.pphosted.com with ESMTP id 38m9bggkay-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 19 May 2021 12:48:57 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14JCl0GH181888;
 Wed, 19 May 2021 12:48:56 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2101.outbound.protection.outlook.com [104.47.70.101])
 by aserp3030.oracle.com with ESMTP id 38meeg3xfw-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 19 May 2021 12:48:56 +0000
Received: from DM5PR1001MB2236.namprd10.prod.outlook.com (2603:10b6:4:35::18)
 by DM8PR10MB5495.namprd10.prod.outlook.com (2603:10b6:8:22::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 19 May
 2021 12:48:54 +0000
Received: from DM5PR1001MB2236.namprd10.prod.outlook.com
 ([fe80::c93a:7a62:bc1d:9a34]) by DM5PR1001MB2236.namprd10.prod.outlook.com
 ([fe80::c93a:7a62:bc1d:9a34%5]) with mapi id 15.20.4129.031; Wed, 19 May 2021
 12:48:54 +0000
Received: from tomti.i.net-space.pl (84.10.22.86) by
 AS8PR04CA0034.eurprd04.prod.outlook.com (2603:10a6:20b:312::9) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 12:48:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c154c066-27dd-4785-be6a-d7c02fba45e8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : content-type : in-reply-to :
 mime-version; s=corp-2020-01-29;
 bh=v8Mv0YxUg0LT0nZYkE+Wu3XOZkdXRRDwH3J3yR4Cs5I=;
 b=e3PTLyDCt/S31hS/tS4sqElzrkxn1AeTChg+lHbRLcLVxA1lqihxgz5FUi45C4cUaxXx
 wkkf0D3Q3fr3agGZj2/8XgTxQJ5AuFz/Zjbr4860vM4lJwg9P3dr0H8nJgslvaRYd463
 Qv+rxp84VWr4iAlrKcOTpM7hwnl9/GaXr+mz4STpjxESgF12tU7P29VLNHL1zLDsLt2H
 3DltivIPRhg4yNHQtPLhJpizjwtwkZ/qSQ3Cr31cPWtlQJEJTvojn+Mhj2XRWoJByiiQ
 1HVE9SMU5AVVpoX0qmhbo7yb4r//Dp+8KaZEOyk3IisqsvMB0piwaC/MgAf4/PItQGI6 NA== 
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=v8Mv0YxUg0LT0nZYkE+Wu3XOZkdXRRDwH3J3yR4Cs5I=;
 b=zQ9WfJ7XTotVyicqpc1R5q4NTAsaO1H0QhaBXf165sJ0SKSTFGRIck+Klo7Kedu53wEqvXzsbKaXhr3x0hQYz07j9ArtIUMNT6iZLI7dgXs84vE6CQsUzFuYIkcVIeauZ2j3ZLOcL1IgpmgHlZP3GMqhf1MBlfZOySuY8LjYJQQ=
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=oracle.com;
Date: Wed, 19 May 2021 14:48:46 +0200
From: Daniel Kiper <daniel.kiper@oracle.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
        Andrew Cooper <andrew.cooper3@citrix.com>,
        George Dunlap <george.dunlap@citrix.com>,
        Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
        Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
        Xen-devel <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
Message-ID: <20210519124846.go3zyqzojsaj35in@tomti.i.net-space.pl>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
 <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
 <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
 <ee89a22d-5f46-51ed-4c46-63cfc60cbafc@suse.com>
 <20210518174633.spo5kmgcbuo6dg5k@tomti.i.net-space.pl>
 <51333867-d693-38e2-bd1c-fce28241a604@suse.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <51333867-d693-38e2-bd1c-fce28241a604@suse.com>
User-Agent: NeoMutt/20170113 (1.7.2)
X-Originating-IP: [84.10.22.86]
X-ClientProxiedBy: AS8PR04CA0034.eurprd04.prod.outlook.com
 (2603:10a6:20b:312::9) To DM5PR1001MB2236.namprd10.prod.outlook.com
 (2603:10b6:4:35::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f86fca12-9d62-40c8-3fe2-08d91ac475c4
X-MS-TrafficTypeDiagnostic: DM8PR10MB5495:
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: f86fca12-9d62-40c8-3fe2-08d91ac475c4
X-MS-Exchange-CrossTenant-AuthSource: DM5PR1001MB2236.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 12:48:54.4331
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: jDnk3HvdasWlWdz5/8KhOudo1tu5dI5ktqersMn8ox5DZ7oNswlgx++q268BPoiEkK1dA/wj650lOWtQcHbtYw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM8PR10MB5495
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9988 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 mlxlogscore=999 adultscore=0
 phishscore=0 malwarescore=0 bulkscore=0 spamscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105190080
X-Proofpoint-ORIG-GUID: VaEeggWTii_LXCOwQ2A7VWi6arVPOhQf
X-Proofpoint-GUID: VaEeggWTii_LXCOwQ2A7VWi6arVPOhQf

On Wed, May 19, 2021 at 11:29:43AM +0200, Jan Beulich wrote:
> On 18.05.2021 19:46, Daniel Kiper wrote:
> > On Mon, May 17, 2021 at 03:24:28PM +0200, Jan Beulich wrote:
> >> On 17.05.2021 15:20, Daniel Kiper wrote:
> >>> On Mon, May 17, 2021 at 08:48:32AM +0200, Jan Beulich wrote:
> >>>> On 07.05.2021 22:26, Bob Eshleman wrote:
> >>>>> What is your intuition WRT the idea that instead of trying to add a PE/COFF hdr
> >>>>> in front of Xen's mb2 bin, we instead go the route of introducing valid mb2
> >>>>> entry points into xen.efi?
> >>>>
> >>>> At first glance I think this is going to be less intrusive, and hence
> >>>> to be preferred. But of course I haven't experimented in any way ...
> >>>
> >>> When I worked on this a few years ago I tried that way. Sadly I failed
> >>> because I was not able to produce a "linear" PE image using the
> >>> binutils existing in those days.
> >>
> >> What is a "linear" PE image?
> >
> > The problem with the Multiboot family of protocols is that all code and
> > data sections have to be glued together in the image and loaded into
> > memory as such (IIRC BSS is an exception but it has to live behind the
> > image). So, you cannot use a PE image which has different representations
> > in file and in memory. IIRC by default at least the code and data sections
> > in xen.efi have different sizes in the PE file and in memory. I tried to
> > fix that using a linker script and objcopy but it did not work. Sadly I do
> > not remember the details but there is a pretty good chance you can find
> > the relevant emails in the xen-devel archive with me explaining what kind
> > of problems I ran into.
>
> Ah, this rings a bell. Even the .bss-is-last assumption doesn't hold,
> because .reloc (for us as well as in general) comes later, but needs
> loading (in the right place). Since even xen.gz isn't simply the

However, IIRC it is not used when Xen is loaded through the Multiboot2
protocol. So, I think it may stay in the image as is and the Multiboot2
header should not cover the .reloc section.

By the way, why do we need a .reloc section in the PE image? Isn't
%rip-relative addressing sufficient? IIRC the Linux kernel just contains
a stub .reloc section. Couldn't we do the same?
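The "linear" property discussed above can be sketched as a simple check over section descriptors (the hand-rolled fields stand in for IMAGE_SECTION_HEADER's PointerToRawData/SizeOfRawData/VirtualAddress/VirtualSize; this is an illustration, not a real PE parser):

```c
#include <assert.h>

/*
 * A Multiboot-style loader can only memcpy the file into memory if
 * every section's file offset/size matches its memory offset/size.
 * BSS-style zero-filled tails would need extra handling.
 */
struct section {
    unsigned long file_off;  /* cf. PointerToRawData */
    unsigned long file_sz;   /* cf. SizeOfRawData */
    unsigned long mem_off;   /* cf. VirtualAddress */
    unsigned long mem_sz;    /* cf. VirtualSize */
};

/* Returns 1 if the image has the same representation on disk and in memory. */
static int pe_is_linear(const struct section *s, unsigned int n)
{
    for (unsigned int i = 0; i < n; i++)
        if (s[i].file_off != s[i].mem_off || s[i].file_sz != s[i].mem_sz)
            return 0;
    return 1;
}
```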

> compressed linker output, but a post-processed (by mkelf32) image,
> maybe what we need is a build tool doing similar post-processing on
> xen.efi? Otoh getting disk image and in-memory image aligned ought

Yep, this should work too.

> to be possible by setting --section-alignment= and --file-alignment=
> to the same value (resulting in a much larger file) - adjusting file

IIRC this did not work for some reason. Maybe it would be better to
enforce the correct alignment and required padding using a linker script.

> positions would effectively be what a post-processing tool would need
> to do (like with mkelf32 perhaps we could then at least save the
> first ~2Mb of space). Which would still leave .reloc to be dealt with
> - maybe we could place this after .init, but still ahead of
> __init_end (such that the memory would get freed late in the boot
> process). Not sure whether EFI loaders would "like" such an unusual
> placement.

Yeah, good question...

> Also not sure what to do with Dwarf debug info, which just recently
> we managed to avoid needing to strip unconditionally.

I think the debug info may stay as is. The Multiboot2 header just should
not cover it if it is not needed.

> >>> Maybe
> >>> newer binutils are more flexible and will be able to produce a PE image
> >>> with properties required by Multiboot2 protocol.
> >>
> >> Isn't all you need the MB2 header within the first so many bytes of the
> >> (disk) image? Or was it the image as loaded into memory? Both should be
> >> possible to arrange for.
> >
> > IIRC the Multiboot2 protocol requires the header in the first 32 KiB of an image.
> > So, this is not a problem.
>
> I was about to ask "Disk image or in-memory image?" But this won't
> matter if the image as a whole got linearized, as long as the first
> section doesn't start too high up. I notice that xen-syms doesn't fit
> this requirement either, only the output of mkelf32 does. Which
> suggests that there may not be a way around a post-processing tool.

Couldn't we drop the 2 MiB padding at the beginning of xen-syms by
changing some things in the linker script?

Daniel


From xen-devel-bounces@lists.xenproject.org Wed May 19 12:50:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 12:50:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130076.243873 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLeS-00039w-3S; Wed, 19 May 2021 12:50:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130076.243873; Wed, 19 May 2021 12:50:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljLeR-00039p-Ur; Wed, 19 May 2021 12:50:15 +0000
Received: by outflank-mailman (input) for mailman id 130076;
 Wed, 19 May 2021 12:50:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=iEmX=KO=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ljLeQ-00039f-9R
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 12:50:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3204d45d-16cc-4623-9c62-d91e2b399ebf;
 Wed, 19 May 2021 12:50:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5A7C0AED7;
 Wed, 19 May 2021 12:50:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3204d45d-16cc-4623-9c62-d91e2b399ebf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621428612; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=dBhF9NACif15HSNzoliXA7saFgfM+oNn346W0Y6vd+M=;
	b=Y7K4C1QtA56hQCNSa2KvZTiUsQQwhnOkb7HhprZFsvL1KO673mi1NxU5oKq+sB2zT12eYj
	d1WzY7LFDzNApojaie5dDW4lwse5rNFI05+nwFEO1/US6zbv2py5t2Ni1vhUKlgS3hZ0Og
	N/Ae/nWwOer3rOcvi0WZqvt8FHjR4HA=
Subject: Re: Preserving transactions accross Xenstored Live-Update
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
 <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
 <c14d7a27-b486-01c1-1a24-70f286c34431@xen.org>
 <b8413748-a889-8b0c-df93-2c93ed832369@xen.org>
From: Juergen Gross <jgross@suse.com>
Message-ID: <95144b63-292b-3d60-b7d2-1847a1611fd6@suse.com>
Date: Wed, 19 May 2021 14:50:11 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <b8413748-a889-8b0c-df93-2c93ed832369@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="iDH0aVccMOs4XnyG2g8xEsPXqCpFs9zyg"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--iDH0aVccMOs4XnyG2g8xEsPXqCpFs9zyg
Content-Type: multipart/mixed; boundary="QGImb2DvUUkWPU3p9vgHyO63SCX6sFpCc";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
Message-ID: <95144b63-292b-3d60-b7d2-1847a1611fd6@suse.com>
Subject: Re: Preserving transactions accross Xenstored Live-Update
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
 <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
 <c14d7a27-b486-01c1-1a24-70f286c34431@xen.org>
 <b8413748-a889-8b0c-df93-2c93ed832369@xen.org>
In-Reply-To: <b8413748-a889-8b0c-df93-2c93ed832369@xen.org>

--QGImb2DvUUkWPU3p9vgHyO63SCX6sFpCc
Content-Type: multipart/mixed;
 boundary="------------2924E910E0C4D61D5FEBF81B"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2924E910E0C4D61D5FEBF81B
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 19.05.21 14:33, Julien Grall wrote:
> On 19/05/2021 13:32, Julien Grall wrote:
>> Hi Juergen,
>>
>> On 19/05/2021 10:09, Juergen Gross wrote:
>>> On 18.05.21 20:11, Julien Grall wrote:
>>>>
>>>> I have started to look at preserving transactions across Live-Update
>>>> in C Xenstored. So far, I managed to transfer transactions that
>>>> read/write existing nodes.
>>>>
>>>> Now, I am running into trouble transferring new/deleted nodes within
>>>> a transaction with the existing migration format.
>>>>
>>>> C Xenstored will keep track of nodes accessed during the transaction
>>>> but not the children (AFAICT for performance reasons).
>>>
>>> Not performance reasons, but because there isn't any need for that:
>>>
>>> The children are either unchanged (so the non-transaction node records
>>> apply), or they will be among the tracked nodes (transaction node
>>> records apply). So in both cases all children should be known.
>>
>> In theory, opening a new transaction means you will not see any
>> modification in the global database until the transaction has been
>> committed. What you describe would break that, because a client would
>> be able to see new nodes added outside of the transaction.
>>
>> However, C Xenstored implements neither of the two. Currently, when a
>> node is accessed within the transaction, we will also store the names
>> of the current children.
>>
>> To give an example with access to the global DB (prefixed with TID0)
>> and within a transaction (TID1):
>>
>>      1) TID0: MKDIR "data/bar"
>>      2) Start transaction TID1
>>      3) TID1: DIRECTORY "data"
>>         -> This will cache the node data
>>      4) TID0: MKDIR "data/foo"
>>         -> This will create "foo" in the global database
>>      5) TID1: MKDIR "data/fish"
>>         -> This will create "fish" in the transaction
>>      6) TID1: DIRECTORY "data"
>>         -> This will only return "bar" and "fish"
>>
>> If we Live-Update between 4) and 5), then we should make sure that
>> "foo" cannot be seen in the listing by TID1.
>>
>> Therefore, I don't think we can restore the children using the global
>> node here. Instead, we need to find a way to transfer the list of known
>> children within the transaction.
>>
>> As a fun fact, C Xenstored implements transactions weirdly, so TID1
>> will be able to access "foo" if it knows the name, but not list it.

And this is the basic problem, I think.

C Xenstored should be fixed by adding all (remaining) children of a
node to the TID's database whenever the list of children is modified,
either globally or in a transaction. A child that has been added
globally needs to be recorded as "deleted" in the TID's database.


Juergen

--------------2924E910E0C4D61D5FEBF81B--

--QGImb2DvUUkWPU3p9vgHyO63SCX6sFpCc--

--iDH0aVccMOs4XnyG2g8xEsPXqCpFs9zyg--


From xen-devel-bounces@lists.xenproject.org Wed May 19 13:12:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 13:12:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130122.243926 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljM0D-0007eu-I8; Wed, 19 May 2021 13:12:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130122.243926; Wed, 19 May 2021 13:12:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljM0D-0007en-F6; Wed, 19 May 2021 13:12:45 +0000
Received: by outflank-mailman (input) for mailman id 130122;
 Wed, 19 May 2021 13:12:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=udgl=KO=arm.com=luca.fancellu@srs-us1.protection.inumbo.net>)
 id 1ljM0B-0007eh-Bs
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 13:12:43 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com (unknown
 [2a01:111:f400:7e1b::626])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0d111433-e209-4e0e-8f4d-4cd9960b0923;
 Wed, 19 May 2021 13:12:41 +0000 (UTC)
Received: from AM5P194CA0005.EURP194.PROD.OUTLOOK.COM (2603:10a6:203:8f::15)
 by DBBPR08MB5978.eurprd08.prod.outlook.com (2603:10a6:10:1f5::13) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.27; Wed, 19 May
 2021 13:12:39 +0000
Received: from AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:203:8f:cafe::e3) by AM5P194CA0005.outlook.office365.com
 (2603:10a6:203:8f::15) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.33 via Frontend
 Transport; Wed, 19 May 2021 13:12:39 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT014.mail.protection.outlook.com (10.152.16.130) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Wed, 19 May 2021 13:12:38 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Wed, 19 May 2021 13:12:38 +0000
Received: from b70c10a0e2d7.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 736B3D62-D8D1-45A2-97A9-6B56A3A8EF36.1; 
 Wed, 19 May 2021 13:12:30 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id b70c10a0e2d7.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Wed, 19 May 2021 13:12:30 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com (2603:10a6:803:7f::25)
 by VI1PR08MB3758.eurprd08.prod.outlook.com (2603:10a6:803:bd::26)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Wed, 19 May
 2021 13:12:24 +0000
Received: from VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a]) by VI1PR08MB3629.eurprd08.prod.outlook.com
 ([fe80::5ca9:87ed:e959:758a%5]) with mapi id 15.20.4129.033; Wed, 19 May 2021
 13:12:24 +0000
Received: from smtpclient.apple (82.8.129.65) by
 LO4P123CA0156.GBRP123.PROD.OUTLOOK.COM (2603:10a6:600:188::17) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.32 via Frontend Transport; Wed, 19 May 2021 13:12:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0d111433-e209-4e0e-8f4d-4cd9960b0923
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YvwwWGAEFt90ka/F0LiIrlodcOLwe4zL5M3dJ3EI4gU=;
 b=TYI8kD0B70LTUpUyHUTFAmpSjYHg81W2QhnYVFit6jWu/MW2ncu0nz4Rl7CkLJ0cVQS4zfsJL2+U8dvGSwFovFEfaxt6kHZ+S3dLv48vAiaDL5zyxsu2A3t5cPw9d3/wjBrI+dUyiChSlro+Iho8Q6gyedH7SIQdDMM6PWT8tHQ=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CheckRecipientChecked: true
X-CR-MTA-CID: 04811a044d062e1a
X-CR-MTA-TID: 64aa7808
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=j2qqUDKgeq43zGnMCcb0uD84s/+IbcGr5z+AfA3fs+hkiPsUrqYl8gqAmxgMsQC806WlN2y4Lh80CHlXpHjmMnObIUF5+YFJ8L9ap1aUEC7v9TBFD6KZ6TaLhvVKUhlhV1JQ5vWf/vgDSR2d+HqB5+LMd+TS9ciqvrzhyaGOd0Ahu+gC5pGzq+Z8STfCygo15noAIKA7zBVJRCBG16o2uFYUBH4z1p+vRVYueMRJbRbmvwLIs/6hYFmZkK5kOjTvktADlls14VXa4EcbNIL3A8d1SrfU4VjZzLXrOBTJNThCqeedwvT1IwmcdSbKtHenYxGY8+l31o5c05k7+Tcjaw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=YvwwWGAEFt90ka/F0LiIrlodcOLwe4zL5M3dJ3EI4gU=;
 b=Pinw3VJTU6GccX2NbNYmZh9t3MhiVpxMi/Ss+CaYKGommpdhuaWWhIc2Qsy6pIFFdzG1B2MoOydA+giKhQt6Yv0y8r8xqym0iTKxp7PV3vXKkJ7lAOCv9SkqcZniYa3B6RKdtlNKMURi5cgVHIg0uXlShlKCnBpWncKQ+pcce+0t55/UNY26gnvkTA71HKOdaqYlHzVFpBA6dI01wdMvaP8ac8GGds5vagY9rwEuFQnFH1mQz8rKaQ5roikIa34W1JzrWx0pBNKclI4bK4DTmxFA8TF2UwSg1JwBqbQ+7og196e6WMIN6FXgzMC+a8dpbnj72xztGjVfbRfPkLlhHg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
Content-Type: text/plain;
	charset=us-ascii
Subject: Re: [PATCH] x86/shadow: fix DO_UNSHADOW()
From: Luca Fancellu <luca.fancellu@arm.com>
In-Reply-To: <cdee4753-674d-23a3-7b94-fed9f2bdd0c1@suse.com>
Date: Wed, 19 May 2021 14:12:17 +0100
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Tim Deegan <tim@xen.org>,
 George Dunlap <george.dunlap@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 Wei Liu <wl@xen.org>,
 =?utf-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Transfer-Encoding: quoted-printable
Message-Id: <601E0335-5A78-462C-A43B-0B8195CAB70C@arm.com>
References: <cdee4753-674d-23a3-7b94-fed9f2bdd0c1@suse.com>
To: Jan Beulich <jbeulich@suse.com>
X-Mailer: Apple Mail (2.3654.80.0.2.43)
X-Originating-IP: [82.8.129.65]
X-ClientProxiedBy: LO4P123CA0156.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:188::17) To VI1PR08MB3629.eurprd08.prod.outlook.com
 (2603:10a6:803:7f::25)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 022b8c2a-0329-43c5-c79d-08d91ac7c6ee
X-MS-TrafficTypeDiagnostic: VI1PR08MB3758:|DBBPR08MB5978:
X-Microsoft-Antispam-PRVS:
	<DBBPR08MB597879744FAD21EDE71DFF8EE42B9@DBBPR08MB5978.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
NoDisclaimer: true
X-MS-Oob-TLC-OOBClassifiers: OLM:8273;OLM:8273;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3758
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	a961e46b-2240-4e8f-8208-08d91ac7be4f
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 13:12:38.7397
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 022b8c2a-0329-43c5-c79d-08d91ac7c6ee
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT014.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DBBPR08MB5978



> On 19 May 2021, at 13:36, Jan Beulich <jbeulich@suse.com> wrote:
> 
> When adding the HASH_CALLBACKS_CHECK() I failed to properly recognize
> the (somewhat unusually formatted) if() around the call to
> hash_domain_foreach(). Gcc 11 is absolutely right in pointing out the
> apparently misleading indentation. Besides adding the missing braces,
> also adjust the two oddly formatted if()-s in the macro.
> 
> Fixes: 90629587e16e ("x86/shadow: replace stale literal numbers in hash_{vcpu,domain}_foreach()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>

Cheers,
Luca

> ---
> I'm puzzled as to why this bug didn't cause any fallout.
> 
> --- a/xen/arch/x86/mm/shadow/common.c
> +++ b/xen/arch/x86/mm/shadow/common.c
> @@ -2220,8 +2220,8 @@ void sh_remove_shadows(struct domain *d,
>      */
> #define DO_UNSHADOW(_type) do {                                         \
>     t = (_type);                                                        \
> -    if( !(pg->count_info & PGC_page_table)                              \
> -        || !(pg->shadow_flags & (1 << t)) )                             \
> +    if ( !(pg->count_info & PGC_page_table) ||                          \
> +         !(pg->shadow_flags & (1 << t)) )                               \
>         break;                                                          \
>     smfn = shadow_hash_lookup(d, mfn_x(gmfn), t);                       \
>     if ( unlikely(!mfn_valid(smfn)) )                                   \
> @@ -2235,11 +2235,13 @@ void sh_remove_shadows(struct domain *d,
>         sh_unpin(d, smfn);                                              \
>     else if ( sh_type_has_up_pointer(d, t) )                            \
>         sh_remove_shadow_via_pointer(d, smfn);                          \
> -    if( !fast                                                           \
> -        && (pg->count_info & PGC_page_table)                            \
> -        && (pg->shadow_flags & (1 << t)) )                              \
> +    if ( !fast &&                                                       \
> +         (pg->count_info & PGC_page_table) &&                           \
> +         (pg->shadow_flags & (1 << t)) )                                \
> +    {                                                                   \
>         HASH_CALLBACKS_CHECK(SHF_page_type_mask);                       \
>         hash_domain_foreach(d, masks[t], callbacks, smfn);              \
> +    }                                                                   \
> } while (0)
> 
>     DO_UNSHADOW(SH_type_l2_32_shadow);
> 



From xen-devel-bounces@lists.xenproject.org Wed May 19 13:34:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 13:34:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130157.243955 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljML8-0002cX-QD; Wed, 19 May 2021 13:34:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130157.243955; Wed, 19 May 2021 13:34:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljML8-0002cQ-Ls; Wed, 19 May 2021 13:34:22 +0000
Received: by outflank-mailman (input) for mailman id 130157;
 Wed, 19 May 2021 13:34:21 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljML7-0002cG-9f; Wed, 19 May 2021 13:34:21 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljML7-0002Qk-2e; Wed, 19 May 2021 13:34:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljML6-0001CM-R5; Wed, 19 May 2021 13:34:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljML6-0005Fp-QZ; Wed, 19 May 2021 13:34:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4BJ93oPXmi+7XbqHlcYG/FcaWexZQdmKGRvoTBatQIU=; b=PGCfApbHfagl9sIUopwnZiIM5m
	jrjsu7OtETtZ+XTMs8LwZ7KI1wexHiTU0SkLgvpbk5SBA5Kj4D2XXZq3h/+l8SrBOGEXymRiLJzUw
	vyrDhX5nbdyqJhFQNIJmqv/1FD6hCxvBrkR5sJtgJ4vqoSYxHfF6C3VEP9ep4DsSFE4A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162089-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162089: regressions - FAIL
X-Osstest-Failures:
    xen-unstable-smoke:build-arm64-xsm:xen-build:fail:regression
    xen-unstable-smoke:build-armhf:xen-build:fail:regression
    xen-unstable-smoke:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:heisenbug
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:build-check(1):blocked:nonblocking
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=01d84420fb4a9be2ec474a7c1910bb22c28b53c8
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 13:34:20 +0000

flight 162089 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162089/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-xsm               6 xen-build                fail REGR. vs. 162023
 build-armhf                   6 xen-build                fail REGR. vs. 162023

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail pass in 162087

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-xsm       1 build-check(1)               blocked  n/a
 test-armhf-armhf-xl           1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  01d84420fb4a9be2ec474a7c1910bb22c28b53c8
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    1 days
Testing same since   162036  2021-05-18 16:00:26 Z    0 days   11 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              fail    
 build-amd64                                                  pass    
 build-armhf                                                  fail    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          blocked 
 test-arm64-arm64-xl-xsm                                      blocked 
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:48 2021 +0100

    tools/xenmon: xenbaked: Mark const the field text in stat_map_t
    
    The field text in stat_map_t will point to string literals. So mark it
    as const to allow the compiler to catch any modification of the string.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 4b7702727a8d89fea0a239adcbeb18aa2c85ede0
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:51:28 2021 +0100

    tools/top: The string parameter in set_prompt() and set_delay() should be const
    
    Neither string parameter in set_prompt() and set_delay() is meant to
    be modified. In particular, new_prompt can point to a literal string.
    
    So mark the two parameters as const and propagate it.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 5605cfd49a18df41a21fb50cd81528312a39d7c9
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:50:32 2021 +0100

    tools/misc: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified, so we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 89aae4ad8f495b647de33f2df5046b3ce68225f8
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:35:07 2021 +0100

    tools/libs: stat: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified, so we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>

commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Author: Julien Grall <jgrall@amazon.com>
Date:   Tue May 18 14:34:22 2021 +0100

    tools/libs: guest: Use const whenever we point to literal strings
    
    Literal strings are not meant to be modified. So we should use const
    char * rather than char * when we want to store a pointer to them.
    
    Signed-off-by: Julien Grall <jgrall@amazon.com>
    Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
    Acked-by: Wei Liu <wl@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 19 13:42:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 13:42:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130166.243969 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljMT1-00044w-Op; Wed, 19 May 2021 13:42:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130166.243969; Wed, 19 May 2021 13:42:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljMT1-00044p-Ld; Wed, 19 May 2021 13:42:31 +0000
Received: by outflank-mailman (input) for mailman id 130166;
 Wed, 19 May 2021 13:42:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=bFFr=KO=citrix.com=edvin.torok@srs-us1.protection.inumbo.net>)
 id 1ljMT0-00044j-1X
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 13:42:30 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c8000892-7ec5-41ab-b5d0-d9cff7f883ff;
 Wed, 19 May 2021 13:42:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c8000892-7ec5-41ab-b5d0-d9cff7f883ff
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621431747;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:mime-version;
  bh=x1rejkMGvgY24zii8MY/nB/kKar9b51SOKayRzdzmu8=;
  b=aiaARwRU1pGfk8dl9H7SopBGN5JC04jvQUPJ89xHgcrV3DKyKAIKzAOp
   pzWNYzJb2ijpvRJF9casWYL8+Krank+AZOtQpC8OTcJAwTlOUMIRnG6/k
   tLAbUPLpljGc+HVQ2VsM8j5iphgJfH4Yn9BcQ+vQB2GkPWrTXbG2Ocg24
   I=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: bUTU1UFmJjPwFYNZ5b6b0UDQwDBbZNcS02JHuM+8KVXfx8D12cyiPJnD2j5oo2a5QoeTHKNOyc
 tXdfvAE9ZzfWrl43s56wJuWuytlDf2+xGtmq5xapEAehKy3fhaw7WbY0j1yz7LYYWz3GNqwTXp
 cIDUUcJuIu5A8EOUlxz5581oro/x52CbEF4aa05yOjLNb8hTyEntAomeHPMD/woUiWAzzDeYlm
 V9jF2GsM1wVTLeJ7uUVnXtbJL8E7ic3mmxopv73M7PL7ofx1ljv6jpp5kwTSbWQ0Q7eggPGg1g
 gAU=
X-SBRS: 5.1
X-MesageID: 44150018
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:TDEJ7qNkPq2dCcBcTjujsMiBIKoaSvp037BK7S1MoNJuEvBw9v
 re+MjzsCWftN9/Yh4dcLy7VpVoIkmskKKdg7NhXotKNTOO0AeVxelZhrcKqAeQeREWmNQ96U
 9hGZIOdeEZDzJB/LrHCN/TKade/DGFmprY+9s31x1WPGZXgzkL1XYDNu6ceHcGIjVuNN4CO7
 e3wNFInDakcWR/VLXAOpFUN9Kz3uEijfjdEGY7OyI=
X-IronPort-AV: E=Sophos;i="5.82,312,1613451600"; 
   d="scan'208,217";a="44150018"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AwAYtHGFydEiIRg343voD3I53s+d+gV+qpTho3vYDpQYYROh0tgti0HPhPgrOEAO6gQSUbRooFNL0VHU8+cHdEdX5wEcUGuYM9Y5jQ+Cp2MTE4Qt/cWfr1BeoPv8xqMr0vMi/1QrtAfC6NaAdq95gCydHXpLCJwQj9adYb5oa4ZrupFrlSOSTWViodhzMxvSsGfeYd3wG3Gma7jKgK+uYoYhSU3gm5HVGHHmr+vJq5HOTArZKTMwAXeIQO8Hpfd7lStEdUSKKevEtvq/47Qvo/PcO9v7wM73G3/p74UFwPClAPZya2zgeE6AQO8mvrJZ17kAyFcoiritU79So1KkwA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x1rejkMGvgY24zii8MY/nB/kKar9b51SOKayRzdzmu8=;
 b=mCMKsxQulvLh8En72Amw+UFUwxj/b7YuTQQkGMkGoJHhrV85aVieSdKJkl9ctdohf2PKTFSzK8wtSnMaVHEDf/LtDJYQTLdtKmqpmR/zMQQwpzjus1xLy1SrByXMB9EWuhoLsbhtQv7zxxhr81dxW0ijL85lZEodKyMOmX0LhQISBmyK5bt6Fia8z1SwT8dizMKnXL44u2l3EgqOY9Sf7yuM/x2rV508E0Bhc7Tq5ME8qk/lAuy1Lene1x+kLqq7xU3/ptuwuOkTcJ1hSroM4FN2cQiGLFaTy0ImsrBbsoDT0gcl4c79/vceBflSAfa8ifyBrSFm3I6nbiLjlMWKFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=x1rejkMGvgY24zii8MY/nB/kKar9b51SOKayRzdzmu8=;
 b=n1cMnyR4WPQ886YNEp09DVNK5zf1b/am9YkSwcr49zONmKlHsktWNCW3iY4Ctx+VaZn4cWVr9BgdraUTdc5ceLiDBjG85o+XI/4h0h2neGpEk3082Ak3Pnx37Nn5rF4wa0rH77Dkmc0PrnvKt3Yh19X/F3CmQCBsJBqx1EVhFyo=
From: Edwin Torok <edvin.torok@citrix.com>
To: "jgross@suse.com" <jgross@suse.com>, "julien@xen.org" <julien@xen.org>
CC: "pdurrant@amazon.co.uk" <pdurrant@amazon.co.uk>, "raphning@amazon.co.uk"
	<raphning@amazon.co.uk>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "doebel@amazon.de" <doebel@amazon.de>
Subject: Re: Preserving transactions across Xenstored Live-Update
Thread-Topic: Preserving transactions across Xenstored Live-Update
Thread-Index: AQHXTBFhvlD0AcPet0+9JB30RQoD36rqhRSAgABMXoA=
Date: Wed, 19 May 2021 13:42:21 +0000
Message-ID: <a46bb4d5bbd3b8ee4d633de4c9d10a597f4dc5d6.camel@citrix.com>
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
	 <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
In-Reply-To: <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
Accept-Language: en-GB, en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
user-agent: Evolution 3.36.4-0ubuntu1 
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: d7faefed-9eae-4de0-8457-08d91acbedd1
x-ms-traffictypediagnostic: SJ0PR03MB5487:
x-microsoft-antispam-prvs: <SJ0PR03MB548725FB801708642646FABC9B2B9@SJ0PR03MB5487.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:10000;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: UQMF2qADwMaug2gRH+FFLnP6bEXEKe9xSuRlqw9a+vo3VZpiZGhTGFrc8BJq/phcdVX9Mw/GjqpAhB/xtmPg3Cf9enSYhfV4Z1Hq+pN4/puzTXoDvpv/mNcFdDtZE0f7UGcVcG3vrlca0QVfxIbesGeWFH4Ia1/E2Ti337gD5XJNdpIiGi/I8q6MhE+tG6TFFLUIWfJq/ZJ9hZl5JM4SZlv2ovBvuZRs+0OqTcBX8xJ5lA+xZFDeTWsjNzvsvwr79dzED/x9FdEatvYMzrRkkOO5rcNpoODabO0xbn4ZxSQvckt3pVpQPxbFga2SjnRYgnmOvqs8D9k5gugcDDQ0MA35FoNv5RA5YDfD/6T9bsGWGz/yaNjthQAyrhwdjgvyAD13Ft5w3261CjBi1j7Jr5E8+bd7FLK2PPgBQYnvc2Gfpnw8m8fDmXlqTv9XYwf/hG2gqQyArNU6AZ+74GeLt/nW4vfB/k5x9b21HBI7ZYWT2mzoA+EK9U54nnWkgzEse+WNjlARayOnIHCbdD4u2+z9+bkP66M/60HsGTKJHsc9dVlrVvFXEKIFXPQrYnV93eSln8qgi5yo/pyg7plL+HUfhuR/WKIkkHF1jpHBxAIHaU9RdzADBnil/ddKV3BSOelk4vEPL38vcibRiGT5+xP8D+8aNK+TxqPIm/8pFe33+iTbDQQFbGH7RwJbkPyZzkAHAiTmeLkCBaMMJJAiiHevrt3Zk7Iv2U7bVO2OiyI=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:SJ0PR03MB5888.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(136003)(376002)(366004)(396003)(346002)(316002)(66556008)(66946007)(966005)(15650500001)(64756008)(66446008)(110136005)(2906002)(66476007)(54906003)(478600001)(91956017)(8676002)(76116006)(6512007)(53546011)(8936002)(6506007)(2616005)(71200400001)(83380400001)(66574015)(36756003)(122000001)(38100700002)(26005)(86362001)(166002)(4326008)(6486002)(186003)(5660300002)(69594002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?utf-8?B?ZnBlenFPQzVlSVdXYmdndGJuWHVPN2U4blMwb1BaT3V3ZHBaRHRvM09qdVZa?=
 =?utf-8?B?ME4yNjZOSkhRVi96dDJKZjZvV0ViKzNCaE1ROUhVbTU2R1pxMndSMTVWZjRK?=
 =?utf-8?B?K1FyekJmMUcxZDZ3L2VWOS82OTlGYlhleTRmOWZURGdURVg4azU4SUl5d0hN?=
 =?utf-8?B?Sklqc3VydzZlNExRMUZmemh1blVqMVF3M0ZiNFVYWXRRbWh5UE9tbDl2V3lO?=
 =?utf-8?B?cGpvcGJhRE9kOFVWWkZiaFFxd1RBMjZ5OWtCcTBVVXNWS2FzeFhjSzl0ZCtk?=
 =?utf-8?B?VEFvbytpL0JtZUpJNzkvby9pbVlPQUZSMk1pbEM4RXV4MTdRUTFyeDF5RWF6?=
 =?utf-8?B?cm40amJXQ3hTV3FJbmNXUFA0a3BycmN3QlpDWWN6eVBweXdLOGloekdNaUtq?=
 =?utf-8?B?ZWx2V21pVGlFUTNtYUtHcCtMRFFSajN6aUg0VEVzMVhXd0ExT3Q2SkNySllX?=
 =?utf-8?B?WExZVEJXWEx6S2VQWHBtcnRxRExKb1Z5VlQ2bDBmeGNYOEpDMENiakVSd2dt?=
 =?utf-8?B?MjJKeWZPYmU5SjRIRkdSSTNCNWlrVWpQWFZMVEJGV21wcDlzSTY3bFQzYnZP?=
 =?utf-8?B?RGVXZi9hQjVmZWFyTGJXZ1R5YWNnVnNoQzZRR01SQUNTbUJFU3N5RE9KcDVY?=
 =?utf-8?B?SnFvaitvQVZ2K1dKYllqcWR3TEhLRm02OVVMTklrdEdxWXY1OXpVRTgvUlBj?=
 =?utf-8?B?OTZxemVrQ3prSmw0dE1IS2FTWUowcVQ5bUdvSVpsUEJRU0RBQktLZVltZ1NE?=
 =?utf-8?B?MDFnQzQvSTRkTEZCUkNwdEdHbFY2VkNidkhSaHdyZW8vQ3NkU1FzTzcvRm5I?=
 =?utf-8?B?T1NrNGxWMllLRjdnK0U2ZTBqanNjTXZlNEJMckxVU2U4ZytpbDdKelFpV3ZE?=
 =?utf-8?B?WEJwcG80bXkyUEdZVy9jWVZzemo3NzRaTzlBbjBDRVE0dDBwQmN3aWgrYTRY?=
 =?utf-8?B?aUpUcG8xdWV4dzNrd2hWSERkdTFRb0VnQm9VeFVCYkEwczJFMDM5emZrNWNS?=
 =?utf-8?B?VElwZllsVmZTWkJtTmxlWU9hSWpnTTFJMy81MHF6Sm43NUJXL2lpUWxtUzVZ?=
 =?utf-8?B?Z0lSN1QrTTJUMmRSSmsrUWxEbUhDakdJVHJIb0pUaUNwSjQwaXlEcmVkQ3JW?=
 =?utf-8?B?ZWxHUkg1Z3ZZR21QNnhFb1RaWlVDOE1lVERMelBUOTJqcVBRV21BTktDZ0Y2?=
 =?utf-8?B?Vkd2dTBIMHdSTkltb0ZRZWJQaU1mU0syUjlFRVorbVdhSU50RmJ1eDhlaTBi?=
 =?utf-8?B?ZFZyNUR1bHlvbXFHdzJnUVh2WEV4UjdNUU42UFExYVJ5a3lIazVGUklqOUpF?=
 =?utf-8?B?ajBubUR4OWUyM0NQQjVDaFRKaDZwcVEzMFlCdUJwMEV4V3EyVm9HLzJrWHZW?=
 =?utf-8?B?czQ4dVg5L1JyT3ErZjVyWHpwZmZNclJBVWxQZlgxODdMc2Y3MVRJalBtUDE4?=
 =?utf-8?B?MUxleDZvVjVLdjlZZnJuTGhydXRKdzZKUGczaDg1bVljOW9SSGx5S0t2Mytv?=
 =?utf-8?B?RHNsN1FySUJWL1JZYWk2bS9rZ0pIN3ZNc3dTUTdWYldJdmJHZ0Vja3ltTjhN?=
 =?utf-8?B?d3NJNStCR1diNXF5NEVpbG9WaUl0UHVHTzgxeFFVL1NBV0xubnVta2ZBcmw3?=
 =?utf-8?B?MS8vQWZFelpPVzlkTFdKREQvOXFFZjFZTHNDV2JPeHc1S25JK1lBQkZTOEFw?=
 =?utf-8?B?UHZvS0FKRXFVeXdnQldlQk1YNkw3bW4vdnNDL3VhTWUwcm9MRXc1c1B3eUJF?=
 =?utf-8?Q?gRVvOwxmKn7vIQIxtqhl93amiGbPx99NN4wScmh?=
x-ms-exchange-transport-forked: True
Content-Type: multipart/alternative;
	boundary="_000_a46bb4d5bbd3b8ee4d633de4c9d10a597f4dc5d6camelcitrixcom_"
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: SJ0PR03MB5888.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d7faefed-9eae-4de0-8457-08d91acbedd1
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 May 2021 13:42:21.9324
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: pWXJqSTfCierhj98gAWqkF8z2ZGrUOhXcKk/fhQnb4Pr4wc0DMVbzLl7gTB6e8Z5YGOgVSeCTKteMlnOJjafRQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: SJ0PR03MB5487
X-OriginatorOrg: citrix.com

--_000_a46bb4d5bbd3b8ee4d633de4c9d10a597f4dc5d6camelcitrixcom_
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64

T24gV2VkLCAyMDIxLTA1LTE5IGF0IDExOjA5ICswMjAwLCBKdWVyZ2VuIEdyb3NzIHdyb3RlOg0K
DQpPbiAxOC4wNS4yMSAyMDoxMSwgSnVsaWVuIEdyYWxsIHdyb3RlOg0KDQpIaSBKdWVyZ2VuLA0K
DQoNCkkgaGF2ZSBzdGFydGVkIHRvIGxvb2sgYXQgcHJlc2VydmluZyB0cmFuc2FjdGlvbiBhY2Ny
b3NzIExpdmUtdXBkYXRlIGluDQoNCg0KQyBYZW5zdG9yZWQuIFNvIGZhciwgSSBtYW5hZ2VkIHRv
IHRyYW5zZmVyIHRyYW5zYWN0aW9uIHRoYXQgcmVhZC93cml0ZQ0KDQpleGlzdGluZyBub2Rlcy4N
Cg0KDQpOb3csIEkgYW0gcnVubmluZyBpbnRvIHRyb3VibGUgdG8gdHJhbnNmZXIgbmV3L2RlbGV0
ZWQgbm9kZSB3aXRoaW4gYQ0KDQp0cmFuc2FjdGlvbiB3aXRoIHRoZSBleGlzdGluZyBtaWdyYXRp
b24gZm9ybWF0Lg0KDQoNCkMgWGVuc3RvcmVkIHdpbGwga2VlcCB0cmFjayBvZiBub2RlcyBhY2Nl
c3NlZCBkdXJpbmcgdGhlIHRyYW5zYWN0aW9uIGJ1dA0KDQpub3QgdGhlIGNoaWxkcmVuIChBRkFJ
Q1QgZm9yIHBlcmZvcm1hbmNlIHJlYXNvbikuDQoNCg0KTm90IHBlcmZvcm1hbmNlIHJlYXNvbnMs
IGJ1dCBiZWNhdXNlIHRoZXJlIGlzbid0IGFueSBuZWVkIGZvciB0aGF0Og0KDQoNClRoZSBjaGls
ZHJlbiBhcmUgZWl0aGVyIHVuY2hhbmdlZCAoc28gdGhlIG5vbi10cmFuc2FjdGlvbiBub2RlIHJl
Y29yZHMNCg0KYXBwbHkpLCBvciB0aGV5IHdpbGwgYmUgYW1vbmcgdGhlIHRyYWNrZWQgbm9kZXMg
KHRyYW5zYWN0aW9uIG5vZGUNCg0KcmVjb3JkcyBhcHBseSkuIFNvIGluIGJvdGggY2FzZXMgYWxs
IGNoaWxkcmVuIHNob3VsZCBiZSBrbm93bi4NCg0KDQpJbiBjYXNlIGEgY2hpbGQgaGFzIGJlZW4g
ZGVsZXRlZCBpbiB0aGUgdHJhbnNhY3Rpb24sIHRoZSBzdHJlYW0gc2hvdWxkDQoNCmNvbnRhaW4g
YSBub2RlIHJlY29yZCBmb3IgdGhhdCBjaGlsZCB3aXRoIHRoZSB0cmFuc2FjdGlvbi1pZCBhbmQg
dGhlDQoNCm51bWJlciBvZiBwZXJtaXNzaW9ucyBiZWluZyB6ZXJvOiBzZWUgZG9jcy9kZXNpZ25z
L3hlbnN0b3JlLW1pZ3JhdGlvbi5tZA0KDQoNClRoZSBwcm9ibGVtIGZvciBveGVuc3RvcmVkIGlz
IHRoYXQgeW91IG1pZ2h0J3ZlIHRha2VuIGEgc25hcHNob3QgaW4gdGhlIHBhc3QsIHlvdXIgcm9v
dCBoYXMgbW92ZWQgb24sIGJ1dCB5b3UgaGF2ZSBpbiB5b3VyIHNuYXBzaG90IGEgbG90IG9mIG5v
ZGVzIHRoYXQgaGF2ZSBiZWVuIGRlbGV0ZWQgaW4gdGhlIGxhdGVzdCByb290Lg0KDQpBIGJydXRl
IGZvcmNlIHdheSBtaWdodCBiZSB0byBkaWZmIHRoZSB0cmFuc2FjdGlvbidzIHN0YXRlIGFuZCB0
aGUgbGF0ZXN0IHJvb3Qgc3RhdGUgYW5kIGR1bXAgdGhlIGRlbHRhIGVudHJpZXMgYXMgYWRkaW5n
L2RlbGV0aW5nIG5vZGVzIGluIHRoZSBtaWdyYXRpb24gc3RyZWFtLg0KVGhpcyBjb3VsZCBsZWFk
IHRvIGR1bXBpbmcgYSBsb3Qgb2YgZHVwbGljYXRlIHN0YXRlLCBhbmQgcmVzdWx0IGluIGFuIGV4
cGxvc2lvbiBvZiBmaWxlIHNpemUgKGUuZy4gaWYgeW91IHJ1biAxMDAwIGRvbWFpbiwgdGhlIGN1
cnJlbnQgbWF4IHN1cHBvcnRlZCBsaW1pdCAgYW5kIGVhY2ggaGFzIG9uZSB0aW55IHRyYW5zYWN0
aW9uIGZyb20gdGhlIHBhc3QNCnRoaXMgd2lsbCBsZWFkIHRvIDEwMDB4IGFtcGxpZmljYXRpb24g
b2YgeGVuc3RvcmUgc2l6ZSBpbiB0aGUgZHVtcC4gSW4tbWVtb3J5IGlzIGZpbmUgYmVjYXVzZSBP
Q2FtbCB3aWxsIHNoYXJlIGNvbW1vbiB0cmVlIG5vZGVzIHRoYXQgYXJlIHVuY2hhbmdlZCkuDQpU
aGlzIHNob3VsZCBjb3JyZWN0bHkgcmVzdG9yZSBjb250ZW50IGJ1dCBoYXZlIGEgYmFkIGVmZmVj
dCBvbiBjb25mbGljdCBzZW1hbnRpY3M6IHlvdXIgbWlncmF0ZWQgdHJhbnNhY3Rpb25zIHdpbGwg
YWxsIHRoZW4gbGlrZWx5IGNvbmZsaWN0IGF0IHRoZSByb290LCBvciBuZWFyIHRoZSByb290IGFu
ZCBmYWlsIGFueXdheS4NCldoZXJlYXMgd2l0aG91dCBhIGxpdmUtdXBkYXRlIGFzIGxvbmcgYXMg
eW91IGRvIG5vdCBtb2RpZnkgYW55IG9mIHRoZSBvbGQgc3RhdGUgeW91IHdvdWxkIGdldCB0aGUg
Y29uZmxpY3QgbWFya2VyIGZ1cnRoZXIgZG93biB0aGUgdHJlZSBhbmQgbW9zdCBvZiB0aGUgdGlt
ZSBhYmxlIHRvIGF2b2lkIGNvbmZsaWN0cy4NCkkndmUgdHJpZWQgaW1wbGVtZW50aW5nIHRoaXMg
bGFzdCB5ZWFyOiBodHRwczovL2dpdGh1Yi5jb20vZWR3aW50b3Jvay94ZW4vcHVsbC8yL2NvbW1p
dHMvYTlmMDU3MTMxYjc1ZTFiZDJkY2I0OWM3OTU2MzBhYjU4NzViN2Y3NiNkaWZmLTBmNDgyNjQ3
MTc3NWQ3OGJmYzY5MjJjNjMxNTJlMjY4ZWYzODYxNzFlYmQ5ODUyMDhjYjgyZTIxYzYyMWU3NDlS
Mjg4LVIzNjUNCihpZ25vcmUgdGhlIGF3ZnVsIGluZGVudGF0aW9uIHRoYXQgY29kZSBoYXMgYmVl
biByZWJhc2VkIHdpdGggaWdub3JlX2FsbF9zcGFjZSBzbyBtYW55IHRpbWVzIGJldHdlZW4gZGlm
ZmVyZW50IGJyYW5jaGVzIG9mIFhlbiB0aGF0IHdoaXRlc3BhY2UgY29ycmVjdG5lc3MgaGFzIGJl
ZW4gbG9zdCkNCg0KSSd2ZSBnb3QgYSBmdXp6ZXIvdW5pdCB0ZXN0IGZvciBsaXZlLXVwZGF0ZSAo
c2VlIHhlbi1kZXZlbCksIGJ1dCBpdCBoYXMgdHJhbnNhY3Rpb25zIHR1cm5lZCBvZmYgY3VycmVu
dGx5IGJlY2F1c2UgSSBjb3VsZG4ndCBnZXQgaXQgdG8gd29yayByZWxpYWJseSwgaXQgYWx3YXlz
IGZvdW5kIGV4YW1wbGVzIHdoZXJlIHRoZSB0cmFuc2FjdGlvbiBjb25mbGljdCBzdGF0ZSB3YXMg
bm90IGlkZW50aWNhbCBwcmUvcG9zdCB1cGRhdGUuDQpJZiB3ZSBhYm9ydCBhbGwgdHJhbnNhY3Rp
b25zIGFmdGVyIG1pZ3JhdGlvbiBhcyBkaXNjdXNzZWQgcHJldmlvdXNseSB0aGVuIGl0IG1pZ2h0
IGJlIHBvc3NpYmxlIHRvIGdldCB0aGlzIHRvIHdvcmsgaWYgd2UgYWNjZXB0IHRoZSBzaXplIGV4
cGxvc2lvbiBhcyBhIHBvc3NpYmlsaXR5IGFuZCBkdW1wIHRyYW5zYWN0aW9uIHN0YXRlIHRvIC92
YXIvdG1wLCBub3QgdG8gL3RtcCAod2hpY2ggbWlnaHQgYmUgYSB0bXBmcyB0aGF0IGdpdmVzIHlv
dSBFTk9TUEMpLg0KDQpMaXZlIHVwZGF0ZXMgYXJlIGEgZmFpcmx5IG5pY2hlIHVzZSBjYXNlIGFu
ZCBJJ2QgbGlrZSB0byBzZWUgdGhlIGN1cnJlbnQgdmFyaWFudCB3aXRob3V0IHRyYW5zYWN0aW9u
cyBwcm92ZW4gdG8gd29yayBvbiBhbiBhY3R1YWwgWFNBIChsaWtlbHkgdGhlIG5leHQgb3hlbnN0
b3JlZCBYU0EgYWJvdXQgcXVldWUgbGltaXRzIGlmIHdlIGZpbmQgYSBzb2x1dGlvbiB0byB0aGF0
KSwNCmFuZCBvbmx5IGFmdGVyIHRoYXQgZGVwbG95IGxpdmUtdXBkYXRlIHN1cHBvcnQgd2l0aCB0
cmFuc2FjdGlvbnMuDQpXZSBhbHNvIGNvbXBsZXRlbHkgbGFjayBhbnkgdW5pdCB0ZXN0cyBmb3Ig
dHJhbnNhY3Rpb25zIChhc2lkZSBmcm9tIHRoZSBmdXp6ZXIgdGhhdCBJIHN0YXJ0ZWQgd3JpdGlu
Zywgd2hpY2ggZG9lcyBqdXN0IHNvbWUgdmVyeSBtaW5pbWFsIHN0YXRlIGNvbXBhcmlzb25zKSwg
d2UgZG8gbm90IGhhdmUgYSBmb3JtYWwgbW9kZWwgb24gaG93IHRyYW5zYWN0aW9ucw0KYW5kIHRy
YW5zYWN0aW9uIGNvbmZsaWN0cyBzaG91bGQgYmUgaGFuZGxlZCB0byBjaGVjayB3aGV0aGVyIHRy
YW5zYWN0aW9ucyBiZWhhdmUgY29ycmVjdGx5LCB0aG91Z2ggYSBmYWlybHkgZ29vZCBhcHByb21p
eGF0aW9uIGlzOiBydW4gMiBveGVuc3RvcmVkIG9uZSB3aXRoIGFuZCB3aXRob3V0IGxpdmUtdXBk
YXRlIGFuZCBjaGVjayB0aGF0IHRoZXkgcHJvZHVjZSBlcXVpdmFsZW50DQoobm90IG5lY2Vzc2Fy
aWx5IGlkZW50aWNhbCwgdHhpZCBjYW4gY2hhbmdlKSBhbnN3ZXJzLiBBcyBsb25nIGFzIHdlIGRv
IG5vdCBoYXZlIHRvIGNoYW5nZSB0aGUgdHJhbnNhY3Rpb24gc2VtYW50aWNzIG9yIGNvZGUgaW4g
YW55IHdheSB0byBzdXBwb3J0IGxpdmUgdXBkYXRlLg0KDQpCZXN0IHJlZ2FyZHMsDQotLUVkd2lu
DQoNCg0KDQoNCkp1ZXJnZW4NCg0KW0NBVVRJT04gLSBFWFRFUk5BTCBFTUFJTF0gRE8gTk9UIHJl
cGx5LCBjbGljayBsaW5rcywgb3Igb3BlbiBhdHRhY2htZW50cyB1bmxlc3MgeW91IGhhdmUgdmVy
aWZpZWQgdGhlIHNlbmRlciBhbmQga25vdyB0aGUgY29udGVudCBpcyBzYWZlLg0K

--_000_a46bb4d5bbd3b8ee4d633de4c9d10a597f4dc5d6camelcitrixcom_
Content-Type: text/html; charset="utf-8"
Content-ID: <1B3B3CA6C5DC8C4EBBE54F460BCFCD79@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64

PGh0bWwgZGlyPSJsdHIiPg0KPGhlYWQ+DQo8bWV0YSBodHRwLWVxdWl2PSJDb250ZW50LVR5cGUi
IGNvbnRlbnQ9InRleHQvaHRtbDsgY2hhcnNldD11dGYtOCI+DQo8L2hlYWQ+DQo8Ym9keSBzdHls
ZT0idGV4dC1hbGlnbjpsZWZ0OyBkaXJlY3Rpb246bHRyOyI+DQo8ZGl2Pk9uIFdlZCwgMjAyMS0w
NS0xOSBhdCAxMTowOSArMDIwMCwgSnVlcmdlbiBHcm9zcyB3cm90ZTo8L2Rpdj4NCjxibG9ja3F1
b3RlIHR5cGU9ImNpdGUiIHN0eWxlPSJtYXJnaW46MCAwIDAgLjhleDsgYm9yZGVyLWxlZnQ6MnB4
ICM3MjlmY2Ygc29saWQ7cGFkZGluZy1sZWZ0OjFleCI+DQo8cHJlPk9uIDE4LjA1LjIxIDIwOjEx
LCBKdWxpZW4gR3JhbGwgd3JvdGU6PC9wcmU+DQo8YmxvY2txdW90ZSB0eXBlPSJjaXRlIiBzdHls
ZT0ibWFyZ2luOjAgMCAwIC44ZXg7IGJvcmRlci1sZWZ0OjJweCAjNzI5ZmNmIHNvbGlkO3BhZGRp
bmctbGVmdDoxZXgiPg0KPHByZT5IaSBKdWVyZ2VuLDwvcHJlPg0KPHByZT48YnI+PC9wcmU+DQo8
cHJlPkkgaGF2ZSBzdGFydGVkIHRvIGxvb2sgYXQgcHJlc2VydmluZyB0cmFuc2FjdGlvbiBhY2Ny
b3NzIExpdmUtdXBkYXRlIGluIDwvcHJlPg0KPC9ibG9ja3F1b3RlPg0KPHByZT48YnI+PC9wcmU+
DQo8YmxvY2txdW90ZSB0eXBlPSJjaXRlIiBzdHlsZT0ibWFyZ2luOjAgMCAwIC44ZXg7IGJvcmRl
ci1sZWZ0OjJweCAjNzI5ZmNmIHNvbGlkO3BhZGRpbmctbGVmdDoxZXgiPg0KPHByZT5DIFhlbnN0
b3JlZC4gU28gZmFyLCBJIG1hbmFnZWQgdG8gdHJhbnNmZXIgdHJhbnNhY3Rpb24gdGhhdCByZWFk
L3dyaXRlIDwvcHJlPg0KPHByZT5leGlzdGluZyBub2Rlcy48L3ByZT4NCjxwcmU+PGJyPjwvcHJl
Pg0KPHByZT5Ob3csIEkgYW0gcnVubmluZyBpbnRvIHRyb3VibGUgdG8gdHJhbnNmZXIgbmV3L2Rl
bGV0ZWQgbm9kZSB3aXRoaW4gYSA8L3ByZT4NCjxwcmU+dHJhbnNhY3Rpb24gd2l0aCB0aGUgZXhp
c3RpbmcgbWlncmF0aW9uIGZvcm1hdC48L3ByZT4NCjxwcmU+PGJyPjwvcHJlPg0KPHByZT5DIFhl
bnN0b3JlZCB3aWxsIGtlZXAgdHJhY2sgb2Ygbm9kZXMgYWNjZXNzZWQgZHVyaW5nIHRoZSB0cmFu
c2FjdGlvbiBidXQgPC9wcmU+DQo8cHJlPm5vdCB0aGUgY2hpbGRyZW4gKEFGQUlDVCBmb3IgcGVy
Zm9ybWFuY2UgcmVhc29uKS48L3ByZT4NCjwvYmxvY2txdW90ZT4NCjxwcmU+PGJyPjwvcHJlPg0K
PHByZT5Ob3QgcGVyZm9ybWFuY2UgcmVhc29ucywgYnV0IGJlY2F1c2UgdGhlcmUgaXNuJ3QgYW55
IG5lZWQgZm9yIHRoYXQ6PC9wcmU+DQo8cHJlPjxicj48L3ByZT4NCjxwcmU+VGhlIGNoaWxkcmVu
IGFyZSBlaXRoZXIgdW5jaGFuZ2VkIChzbyB0aGUgbm9uLXRyYW5zYWN0aW9uIG5vZGUgcmVjb3Jk
czwvcHJlPg0KPHByZT5hcHBseSksIG9yIHRoZXkgd2lsbCBiZSBhbW9uZyB0aGUgdHJhY2tlZCBu
b2RlcyAodHJhbnNhY3Rpb24gbm9kZTwvcHJlPg0KPHByZT5yZWNvcmRzIGFwcGx5KS4gU28gaW4g
Ym90aCBjYXNlcyBhbGwgY2hpbGRyZW4gc2hvdWxkIGJlIGtub3duLjwvcHJlPg0KPHByZT48YnI+
PC9wcmU+DQo8cHJlPkluIGNhc2UgYSBjaGlsZCBoYXMgYmVlbiBkZWxldGVkIGluIHRoZSB0cmFu
c2FjdGlvbiwgdGhlIHN0cmVhbSBzaG91bGQ8L3ByZT4NCjxwcmU+Y29udGFpbiBhIG5vZGUgcmVj
b3JkIGZvciB0aGF0IGNoaWxkIHdpdGggdGhlIHRyYW5zYWN0aW9uLWlkIGFuZCB0aGU8L3ByZT4N
CjxwcmU+bnVtYmVyIG9mIHBlcm1pc3Npb25zIGJlaW5nIHplcm86IHNlZSBkb2NzL2Rlc2lnbnMv
eGVuc3RvcmUtbWlncmF0aW9uLm1kPC9wcmU+DQo8L2Jsb2NrcXVvdGU+DQo8ZGl2Pjxicj4NCjwv
ZGl2Pg0KPGRpdj48YnI+DQo8L2Rpdj4NCjxkaXY+VGhlIHByb2JsZW0gZm9yIG94ZW5zdG9yZWQg
aXMgdGhhdCB5b3UgbWlnaHQndmUgdGFrZW4gYSBzbmFwc2hvdCBpbiB0aGUgcGFzdCwgeW91ciBy
b290IGhhcyBtb3ZlZCBvbiwgYnV0IHlvdSBoYXZlIGluIHlvdXIgc25hcHNob3QgYSBsb3Qgb2Yg
bm9kZXMgdGhhdCBoYXZlIGJlZW4gZGVsZXRlZCBpbiB0aGUgbGF0ZXN0IHJvb3QuPC9kaXY+DQo8
ZGl2Pjxicj4NCjwvZGl2Pg0KPGRpdj5BIGJydXRlIGZvcmNlIHdheSBtaWdodCBiZSB0byBkaWZm
IHRoZSB0cmFuc2FjdGlvbidzIHN0YXRlIGFuZCB0aGUgbGF0ZXN0IHJvb3Qgc3RhdGUgYW5kIGR1
bXAgdGhlIGRlbHRhIGVudHJpZXMgYXMgYWRkaW5nL2RlbGV0aW5nIG5vZGVzIGluIHRoZSBtaWdy
YXRpb24gc3RyZWFtLiZuYnNwOzwvZGl2Pg0KPGRpdj5UaGlzIGNvdWxkIGxlYWQgdG8gZHVtcGlu
ZyBhIGxvdCBvZiBkdXBsaWNhdGUgc3RhdGUsIGFuZCByZXN1bHQgaW4gYW4gZXhwbG9zaW9uIG9m
IGZpbGUgc2l6ZSAoZS5nLiBpZiB5b3UgcnVuIDEwMDAgZG9tYWluLCB0aGUgY3VycmVudCBtYXgg
c3VwcG9ydGVkIGxpbWl0ICZuYnNwO2FuZCBlYWNoIGhhcyBvbmUgdGlueSB0cmFuc2FjdGlvbiBm
cm9tIHRoZSBwYXN0PC9kaXY+DQo8ZGl2PnRoaXMgd2lsbCBsZWFkIHRvIDEwMDB4IGFtcGxpZmlj
YXRpb24gb2YgeGVuc3RvcmUgc2l6ZSBpbiB0aGUgZHVtcC4gSW4tbWVtb3J5IGlzIGZpbmUgYmVj
YXVzZSBPQ2FtbCB3aWxsIHNoYXJlIGNvbW1vbiB0cmVlIG5vZGVzIHRoYXQgYXJlIHVuY2hhbmdl
ZCkuPC9kaXY+DQo8ZGl2PlRoaXMgc2hvdWxkIGNvcnJlY3RseSByZXN0b3JlIGNvbnRlbnQgYnV0
IGhhdmUgYSBiYWQgZWZmZWN0IG9uIGNvbmZsaWN0IHNlbWFudGljczogeW91ciBtaWdyYXRlZCB0
cmFuc2FjdGlvbnMgd2lsbCBhbGwgdGhlbiBsaWtlbHkgY29uZmxpY3QgYXQgdGhlIHJvb3QsIG9y
IG5lYXIgdGhlIHJvb3QgYW5kIGZhaWwgYW55d2F5LjwvZGl2Pg0KPGRpdj5XaGVyZWFzIHdpdGhv
dXQgYSBsaXZlLXVwZGF0ZSBhcyBsb25nIGFzIHlvdSBkbyBub3QgbW9kaWZ5IGFueSBvZiB0aGUg
b2xkIHN0YXRlIHlvdSB3b3VsZCBnZXQgdGhlIGNvbmZsaWN0IG1hcmtlciBmdXJ0aGVyIGRvd24g
dGhlIHRyZWUgYW5kIG1vc3Qgb2YgdGhlIHRpbWUgYWJsZSB0byBhdm9pZCBjb25mbGljdHMuPC9k
aXY+DQo8ZGl2PkkndmUgdHJpZWQgaW1wbGVtZW50aW5nIHRoaXMgbGFzdCB5ZWFyOiZuYnNwOzxh
IGhyZWY9Imh0dHBzOi8vZ2l0aHViLmNvbS9lZHdpbnRvcm9rL3hlbi9wdWxsLzIvY29tbWl0cy9h
OWYwNTcxMzFiNzVlMWJkMmRjYjQ5Yzc5NTYzMGFiNTg3NWI3Zjc2I2RpZmYtMGY0ODI2NDcxNzc1
ZDc4YmZjNjkyMmM2MzE1MmUyNjhlZjM4NjE3MWViZDk4NTIwOGNiODJlMjFjNjIxZTc0OVIyODgt
UjM2NSI+aHR0cHM6Ly9naXRodWIuY29tL2Vkd2ludG9yb2sveGVuL3B1bGwvMi9jb21taXRzL2E5
ZjA1NzEzMWI3NWUxYmQyZGNiNDljNzk1NjMwYWI1ODc1YjdmNzYjZGlmZi0wZjQ4MjY0NzE3NzVk
NzhiZmM2OTIyYzYzMTUyZTI2OGVmMzg2MTcxZWJkOTg1MjA4Y2I4MmUyMWM2MjFlNzQ5UjI4OC1S
MzY1PC9hPjwvZGl2Pg0KPGRpdj4oaWdub3JlIHRoZSBhd2Z1bCBpbmRlbnRhdGlvbiB0aGF0IGNv
ZGUgaGFzIGJlZW4gcmViYXNlZCB3aXRoIGlnbm9yZV9hbGxfc3BhY2Ugc28gbWFueSB0aW1lcyBi
ZXR3ZWVuIGRpZmZlcmVudCBicmFuY2hlcyBvZiBYZW4gdGhhdCB3aGl0ZXNwYWNlIGNvcnJlY3Ru
ZXNzIGhhcyBiZWVuIGxvc3QpPC9kaXY+DQo8ZGl2Pjxicj4NCjwvZGl2Pg0KPGRpdj4NCjxkaXY+
SSd2ZSBnb3QgYSBmdXp6ZXIvdW5pdCB0ZXN0IGZvciBsaXZlLXVwZGF0ZSAoc2VlIHhlbi1kZXZl
bCksIGJ1dCBpdCBoYXMgdHJhbnNhY3Rpb25zIHR1cm5lZCBvZmYgY3VycmVudGx5IGJlY2F1c2Ug
SSBjb3VsZG4ndCBnZXQgaXQgdG8gd29yayByZWxpYWJseSwgaXQgYWx3YXlzIGZvdW5kIGV4YW1w
bGVzIHdoZXJlIHRoZSB0cmFuc2FjdGlvbiBjb25mbGljdCBzdGF0ZSB3YXMgbm90IGlkZW50aWNh
bCBwcmUvcG9zdCB1cGRhdGUuPC9kaXY+DQo8ZGl2PklmIHdlIGFib3J0IGFsbCB0cmFuc2FjdGlv
bnMgYWZ0ZXIgbWlncmF0aW9uIGFzIGRpc2N1c3NlZCBwcmV2aW91c2x5IHRoZW4gaXQgbWlnaHQg
YmUgcG9zc2libGUgdG8gZ2V0IHRoaXMgdG8gd29yayBpZiB3ZSBhY2NlcHQgdGhlIHNpemUgZXhw
bG9zaW9uIGFzIGEgcG9zc2liaWxpdHkgYW5kIGR1bXAgdHJhbnNhY3Rpb24gc3RhdGUgdG8gL3Zh
ci90bXAsIG5vdCB0byAvdG1wICh3aGljaCBtaWdodCBiZSBhIHRtcGZzIHRoYXQgZ2l2ZXMgeW91
DQogRU5PU1BDKS48L2Rpdj4NCjxkaXY+PC9kaXY+DQo8L2Rpdj4NCjxkaXY+PGJyPg0KPC9kaXY+
DQo8ZGl2PkxpdmUgdXBkYXRlcyBhcmUgYSBmYWlybHkgbmljaGUgdXNlIGNhc2UgYW5kIEknZCBs
aWtlIHRvIHNlZSB0aGUgY3VycmVudCB2YXJpYW50IHdpdGhvdXQgdHJhbnNhY3Rpb25zIHByb3Zl
biB0byB3b3JrIG9uIGFuIGFjdHVhbCBYU0EgKGxpa2VseSB0aGUgbmV4dCBveGVuc3RvcmVkIFhT
QSBhYm91dCBxdWV1ZSBsaW1pdHMgaWYgd2UgZmluZCBhIHNvbHV0aW9uIHRvIHRoYXQpLDwvZGl2
Pg0KPGRpdj5hbmQgb25seSBhZnRlciB0aGF0IGRlcGxveSBsaXZlLXVwZGF0ZSBzdXBwb3J0IHdp
dGggdHJhbnNhY3Rpb25zLjwvZGl2Pg0KPGRpdj5XZSBhbHNvIGNvbXBsZXRlbHkgbGFjayBhbnkg
dW5pdCB0ZXN0cyBmb3IgdHJhbnNhY3Rpb25zIChhc2lkZSBmcm9tIHRoZSBmdXp6ZXIgdGhhdCBJ
IHN0YXJ0ZWQgd3JpdGluZywgd2hpY2ggZG9lcyBqdXN0IHNvbWUgdmVyeSBtaW5pbWFsIHN0YXRl
IGNvbXBhcmlzb25zKSwgd2UgZG8gbm90IGhhdmUgYSBmb3JtYWwgbW9kZWwgb24gaG93IHRyYW5z
YWN0aW9uczwvZGl2Pg0KPGRpdj5hbmQgdHJhbnNhY3Rpb24gY29uZmxpY3RzIHNob3VsZCBiZSBo
YW5kbGVkIHRvIGNoZWNrIHdoZXRoZXIgdHJhbnNhY3Rpb25zIGJlaGF2ZSBjb3JyZWN0bHksIHRo
b3VnaCBhIGZhaXJseSBnb29kIGFwcHJvbWl4YXRpb24gaXM6IHJ1biAyIG94ZW5zdG9yZWQgb25l
IHdpdGggYW5kIHdpdGhvdXQgbGl2ZS11cGRhdGUgYW5kIGNoZWNrIHRoYXQgdGhleSBwcm9kdWNl
IGVxdWl2YWxlbnQ8L2Rpdj4NCjxkaXY+KG5vdCBuZWNlc3NhcmlseSBpZGVudGljYWwsIHR4aWQg
Y2FuIGNoYW5nZSkgYW5zd2Vycy4gQXMgbG9uZyBhcyB3ZSBkbyBub3QgaGF2ZSB0byBjaGFuZ2Ug
dGhlIHRyYW5zYWN0aW9uIHNlbWFudGljcyBvciBjb2RlIGluIGFueSB3YXkgdG8gc3VwcG9ydCBs
aXZlIHVwZGF0ZS48L2Rpdj4NCjxkaXY+PGJyPg0KPC9kaXY+DQo8ZGl2PkJlc3QgcmVnYXJkcyw8
L2Rpdj4NCjxkaXY+LS1FZHdpbjwvZGl2Pg0KPGRpdj48YnI+DQo8L2Rpdj4NCjxibG9ja3F1b3Rl
IHR5cGU9ImNpdGUiIHN0eWxlPSJtYXJnaW46MCAwIDAgLjhleDsgYm9yZGVyLWxlZnQ6MnB4ICM3
MjlmY2Ygc29saWQ7cGFkZGluZy1sZWZ0OjFleCI+DQo8cHJlPjxicj48L3ByZT4NCjxwcmU+PGJy
PjwvcHJlPg0KPHByZT5KdWVyZ2VuPC9wcmU+DQo8IS0tIHRleHQvaHRtbCAtLT4NCjx0YWJsZSBz
dHlsZT0iYmFja2dyb3VuZC1jb2xvcjogI0ZFRDEyRTsgbWFyZ2luLWJvdHRvbTo4cHg7IHdpZHRo
OiAxMDAlOyB0ZXh0LWFsaWduOiBjZW50ZXIiPg0KPHRib2R5Pg0KPHRyPg0KPHRkIHN0eWxlPSJm
b250LXNpemU6IDEycHg7Ij48c3BhbiBzdHlsZT0iZm9udC1zaXplOiAxMnB4OyBmb250LWZhbWls
eTogQXJpYWw7Ij5bQ0FVVElPTiAtIEVYVEVSTkFMIEVNQUlMXSBETyBOT1QgcmVwbHksIGNsaWNr
IGxpbmtzLCBvciBvcGVuIGF0dGFjaG1lbnRzIHVubGVzcyB5b3UgaGF2ZSB2ZXJpZmllZCB0aGUg
c2VuZGVyIGFuZCBrbm93IHRoZSBjb250ZW50IGlzIHNhZmUuPC9zcGFuPjwvdGQ+DQo8L3RyPg0K
PC90Ym9keT4NCjwvdGFibGU+DQo8L2Jsb2NrcXVvdGU+DQo8L2JvZHk+DQo8L2h0bWw+DQo=

--_000_a46bb4d5bbd3b8ee4d633de4c9d10a597f4dc5d6camelcitrixcom_--


From xen-devel-bounces@lists.xenproject.org Wed May 19 14:03:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 14:03:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130176.243979 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljMmx-0006S5-Co; Wed, 19 May 2021 14:03:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130176.243979; Wed, 19 May 2021 14:03:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljMmx-0006Ry-9o; Wed, 19 May 2021 14:03:07 +0000
Received: by outflank-mailman (input) for mailman id 130176;
 Wed, 19 May 2021 14:03:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ufKr=KO=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1ljMmv-0006Rs-IG
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 14:03:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0a8854df-f6d0-4079-8833-320895bd9a7d;
 Wed, 19 May 2021 14:03:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9C452AFB1;
 Wed, 19 May 2021 14:03:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0a8854df-f6d0-4079-8833-320895bd9a7d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621432983; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=N3n94Kxto49aIydwtZ1YuEe2MS4KN56hEGTksL0LjdY=;
	b=YVNlgj45CvUJq9jL3eo0h4JKgNrp1Ww/ztsiWFiEagrBYVFtsNQnqX1ysL7Ku2CltUXo3r
	Y2oW0XNK8iWHFREu84lsGWoRwRcW5vB/DWgW5MkTUTSgBH2rIu4hVL1eL3PkqfyVkpVa73
	M9R68zjm3q5yjR1BVGqcn1rDL1W/Mos=
Message-ID: <671922d6fea6fe534de09d552044838df0b484c4.camel@suse.com>
Subject: Re: [PATCH v3 5/5] tools/ocaml: Fix redefinition errors
From: Dario Faggioli <dfaggioli@suse.com>
To: Julien Grall <julien@xen.org>, Costin Lupu <costin.lupu@cs.pub.ro>, 
	xen-devel@lists.xenproject.org
Cc: Christian Lindig <christian.lindig@citrix.com>, David Scott
	 <dave@recoil.org>, Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Date: Wed, 19 May 2021 16:03:02 +0200
In-Reply-To: <bb15b60d-ebbb-d0c1-9b95-605cf5ae5a41@xen.org>
References: <cover.1620633386.git.costin.lupu@cs.pub.ro>
	 <50763a92df0c58ed0d7749717b7ff5e2a039a4dd.1620633386.git.costin.lupu@cs.pub.ro>
	 <bb15b60d-ebbb-d0c1-9b95-605cf5ae5a41@xen.org>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-w+dynnTqWcY8kpkCqqOG"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-w+dynnTqWcY8kpkCqqOG
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

Hey,

So...

On Mon, 2021-05-17 at 19:16 +0100, Julien Grall wrote:
> On 10/05/2021 09:35, Costin Lupu wrote:
> > If PAGE_SIZE is already defined in the system (e.g. in
> > /usr/include/limits.h
> > header) then gcc will trigger a redefinition error because of -
> > Werror. This
> > patch replaces usage of PAGE_* macros with XC_PAGE_* macros in
> > order to avoid
> > confusion between control domain page granularity (PAGE_*
> > definitions) and
> > guest domain page granularity (which is what we are dealing with
> > here).
> >=20
> > Same issue applies for redefinitions of Val_none and Some_val
> > macros which
> > can be already define in the OCaml system headers (e.g.
> > /usr/lib/ocaml/caml/mlvalues.h).
> >=20
... At least this part is absolutely necessary for building Xen on
openSUSE Tumbleweed, therefore:

> > Signed-off-by: Costin Lupu <costin.lupu@cs.pub.ro>
>=20
> Reviewed-by: Julien Grall <jgrall@amazon.com>
>=20
Tested-by: Dario Faggioli <dfaggioli@suse.com>

Thanks and Regards
--=20
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-w+dynnTqWcY8kpkCqqOG
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmClGpYACgkQFkJ4iaW4
c+7SlhAAuO1vCJRxW044qTtSm4mHQ4mq8qw7LrNlm73PK23e8asr4ocLr+cG2AgY
pEEjCcpDtUZ6Hj8oPDpUpEuCOjL7q9rX1Q+fAWV6ifcHyigLb4sD31yAc7m0N8oT
Co4/3sioxANylWeFcWNoygwBVAZtzjuePtNqev92ZkXhHaW6sVFNriC1a1EePAm9
/BVWpi+7zvawZBn+ZIKiD1HYE/wM/cEY2LmPJL0gzMGnxv4d0ITEIVzTrEG8GC2T
NzDVzw8K3dBz3Ulz7/nJpvMCohY1hqRL5a8ixIJTMjv0N09URC/8HUThlm1L6JIb
5QIf618huy5bnHZ2i/YzjTk4DwRXl3n/Lgsb/8Fw4uTguMmRNsrjRF+KUhiW7bPz
A9Kwr72PFaJHNNB02lsH10zgbB/q228gVXoL0Vn0An017o9EPhfpgIvAvCQTA7I9
p0MLbXfQ4uV/0Ow+7RWInofckWyejr38yYfsXcWyH2k9dxudow6UMHZP6VO3KnxN
t7JBsO8ZQ0zx/1iWpBNE3GT+UrAN91OXgf9O6PMLfT6WmSyKA9gm26n3SxoiTa+d
TiuwWXWkY3iBkmfvPqIgDT6PdzZDpGwoEQY5TmGJFOSBKAn8oO1Dc15dmLIPk4Oi
0OkOHTxkaM0xUOxDZIZvlA2ZWh3pgBrGNpNH2bYoRe72MezRAfA=
=F9cj
-----END PGP SIGNATURE-----

--=-w+dynnTqWcY8kpkCqqOG--



From xen-devel-bounces@lists.xenproject.org Wed May 19 14:35:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 14:35:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130184.243997 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljNHs-0001E6-QU; Wed, 19 May 2021 14:35:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130184.243997; Wed, 19 May 2021 14:35:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljNHs-0001Dz-Mq; Wed, 19 May 2021 14:35:04 +0000
Received: by outflank-mailman (input) for mailman id 130184;
 Wed, 19 May 2021 14:35:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fOiY=KO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljNHr-0001Dt-EY
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 14:35:03 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ca89015-c81d-458a-ad65-f81a33a7c288;
 Wed, 19 May 2021 14:35:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2B375AEA6;
 Wed, 19 May 2021 14:35:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ca89015-c81d-458a-ad65-f81a33a7c288
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621434900; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=B34+PNn0NfMzZMKy6F3i6BO9UdAQaX51iUoim72t5tU=;
	b=ovRkkyp63RcIRlynvAO+YI12INTi+aHO6RnuyODinoSQ0gt5C6P0AkbFZt9KIbT4wwxeUy
	5zKtyfsN2PBEPIlomPnD+GVlCWfKSRqPOqDfwflD46VSb/6QauEfkDmlEXVccmQAdhcP3o
	HLNcNJCg09eJcRqgjCesY5u0AZFchN8=
Subject: Re: [PATCH v3 2/5] xen/x86: manually build xen.mb.efi binary
To: Daniel Kiper <daniel.kiper@oracle.com>
Cc: Bob Eshleman <bobbyeshleman@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <cover.1611273359.git.bobbyeshleman@gmail.com>
 <28d5536a2f7691e8f79d55f1470fa89ce4fae93d.1611273359.git.bobbyeshleman@gmail.com>
 <3c621726-31c4-6a79-a020-88c59644111b@suse.com>
 <74ea104d-3826-d80d-3af5-f444d065c73f@gmail.com>
 <a183a5f9-0f36-187d-fd06-8d6db99cbe43@suse.com>
 <20210517132039.6czppjfge27x4mwg@tomti.i.net-space.pl>
 <ee89a22d-5f46-51ed-4c46-63cfc60cbafc@suse.com>
 <20210518174633.spo5kmgcbuo6dg5k@tomti.i.net-space.pl>
 <51333867-d693-38e2-bd1c-fce28241a604@suse.com>
 <20210519124846.go3zyqzojsaj35in@tomti.i.net-space.pl>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c55f44dd-47bb-8e60-c1a3-446c561d6740@suse.com>
Date: Wed, 19 May 2021 16:35:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210519124846.go3zyqzojsaj35in@tomti.i.net-space.pl>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.05.2021 14:48, Daniel Kiper wrote:
> On Wed, May 19, 2021 at 11:29:43AM +0200, Jan Beulich wrote:
>> On 18.05.2021 19:46, Daniel Kiper wrote:
>>> On Mon, May 17, 2021 at 03:24:28PM +0200, Jan Beulich wrote:
>>>> On 17.05.2021 15:20, Daniel Kiper wrote:
>>>>> On Mon, May 17, 2021 at 08:48:32AM +0200, Jan Beulich wrote:
>>>>>> On 07.05.2021 22:26, Bob Eshleman wrote:
>>>>>>> What is your intuition WRT the idea that instead of trying to add a PE/COFF hdr
>>>>>>> in front of Xen's mb2 bin, we instead go the route of introducing valid mb2
>>>>>>> entry points into xen.efi?
>>>>>>
>>>>>> At the first glance I think this is going to be less intrusive, and hence
>>>>>> to be preferred. But of course I haven't experimented in any way ...
>>>>>
>>>>> When I worked on this a few years ago I tried that way. Sadly I failed
>>>>> because I was not able to produce a "linear" PE image using the
>>>>> binutils existing in those days.
>>>>
>>>> What is a "linear" PE image?
>>>
>>> The problem with Multiboot family protocols is that all code and data
>>> sections have to be glued together in the image and as such loaded into
>>> the memory (IIRC BSS is an exception but it has to live behind the
>>> image). So, you cannot use a PE image which has a different representation
>>> in file and in memory. IIRC by default at least code and data sections in
>>> xen.efi have different sizes in PE file and in memory. I tried to fix
>>> that using linker script and objcopy but it did not work. Sadly I do
>>> not remember the details but there is pretty good chance you can find
>>> relevant emails in Xen-devel archive with me explaining what kind of
>>> problems I met.
>>
>> Ah, this rings a bell. Even the .bss-is-last assumption doesn't hold,
>> because .reloc (for us as well as in general) comes later, but needs
>> loading (in the right place). Since even xen.gz isn't simply the
> 
> However, IIRC it is not used when Xen is loaded through the Multiboot2
> protocol. So, I think it may stay in the image as is and the Multiboot2
> header should not cover the .reloc section.
> 
> By the way, why do we need .reloc section in the PE image? Is not %rip
> relative addressing sufficient? IIRC the Linux kernel just contains
> a stub .reloc section. Could not we do the same?

%rip-relative addressing can (obviously, I think) help only for text.
But we also have data containing pointers, which need relocating.

>> compressed linker output, but a post-processed (by mkelf32) image,
>> maybe what we need is a build tool doing similar post-processing on
>> xen.efi? Otoh getting disk image and in-memory image aligned ought
> 
> Yep, this should work too.
> 
>> to be possible by setting --section-alignment= and --file-alignment=
>> to the same value (resulting in a much larger file) - adjusting file
> 
> IIRC this did not work for some reason. Maybe it would be better to
> enforce correct alignment and required padding using linker script.

I'm not convinced the linker script is the correct vehicle here. It
is mainly about placement in the address space (i.e. laying out how
things will end up in memory), not about file layout.

>> positions would effectively be what a post-processing tool would need
>> to do (like with mkelf32 perhaps we could then at least save the
>> first ~2Mb of space). Which would still leave .reloc to be dealt with
>> - maybe we could place this after .init, but still ahead of
>> __init_end (such that the memory would get freed late in the boot
>> process). Not sure whether EFI loaders would "like" such an unusual
>> placement.
> 
> Yeah, good question...
> 
>> Also not sure what to do with Dwarf debug info, which just recently
>> we managed to avoid needing to strip unconditionally.
> 
> I think debug info may stay as is. Just Multiboot2 header should not
> cover it if it is not needed.

You did say that .bss is expected to be last, which both .reloc and
debug info violate.

>>>>> Maybe
>>>>> newer binutils are more flexible and will be able to produce a PE image
>>>>> with properties required by Multiboot2 protocol.
>>>>
>>>> Isn't all you need the MB2 header within the first so many bytes of the
>>>> (disk) image? Or was it the image as loaded into memory? Both should be
>>>> possible to arrange for.
>>>
>>> IIRC Multiboot2 protocol requires the header in first 32 kiB of an image.
>>> So, this is not a problem.
>>
>> I was about to ask "Disk image or in-memory image?" But this won't
>> matter if the image as a whole got linearized, as long as the first
>> section doesn't start too high up. I notice that xen-syms doesn't fit
>> this requirement either, only the output of mkelf32 does. Which
>> suggests that there may not be a way around a post-processing tool.
> 
> Could not we drop 2 MiB padding at the beginning of xen-syms by changing
> some things in the linker script?

Not sure, but at the first glance I don't think so. If we want section
and file alignment to match, and if we want sections at 2Mb boundaries,
then the first section - since it cannot start at 0 - will need to be
at 2Mb. In xen-syms the linker manages to put it at 32kb, but I think
arranging for such also for PE output (despite both alignments set to
match) would require a linker change, perhaps even a new command line
option (because the two alignment requests can't be silently ignored).

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 19 14:38:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 14:38:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130192.244008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljNLE-0001uX-Df; Wed, 19 May 2021 14:38:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130192.244008; Wed, 19 May 2021 14:38:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljNLE-0001uQ-AY; Wed, 19 May 2021 14:38:32 +0000
Received: by outflank-mailman (input) for mailman id 130192;
 Wed, 19 May 2021 14:38:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljNLC-0001uG-Qi; Wed, 19 May 2021 14:38:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljNLC-0003b2-LA; Wed, 19 May 2021 14:38:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljNLC-0004Lx-D9; Wed, 19 May 2021 14:38:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljNLC-0003cn-Cf; Wed, 19 May 2021 14:38:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=T2bMwUmZadyeNjJtbEqjxXGYk8WZgCUaDPjzy8e8lDs=; b=SJWnsPmvToIfVogrTpha3n/AZl
	QxECfpa/AaDWGufQYWgMVChBjd9B/099ObAWSjjQVNpZbv18oujIcSeaJcukOSSL0NuDAoE/1dRM7
	rE+Ebp6WwT0XkDapz4G45b71Wd32PuSXT/yCRn5eB8HiyWaSrUpzGfkkf+7abXzz969Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [xen-unstable-smoke bisection] complete build-armhf
Message-Id: <E1ljNLC-0003cn-Cf@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 14:38:30 +0000

branch xen-unstable-smoke
xenbranch xen-unstable-smoke
job build-armhf
testid xen-build

Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
  Bug not present: caa9c4471d1d74b2d236467aaf7e63a806ac11a4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162094/


  commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
  Author: Julien Grall <jgrall@amazon.com>
  Date:   Tue May 18 14:34:22 2021 +0100
  
      tools/libs: guest: Use const whenever we point to literal strings
      
      literal strings are not meant to be modified. So we should use const
      *char rather than char * when we want to store a pointer to them.
      
      Signed-off-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
      Acked-by: Wei Liu <wl@xen.org>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/xen-unstable-smoke/build-armhf.xen-build.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/xen-unstable-smoke/build-armhf.xen-build --summary-out=tmp/162094.bisection-summary --basis-template=162023 --blessings=real,real-bisect,real-retry xen-unstable-smoke build-armhf xen-build
Searching for failure / basis pass:
 162089 fail [host=cubietruck-picasso] / 162023 [host=cubietruck-gleizes] 161985 [host=cubietruck-braque] 161982 [host=cubietruck-braque] 161980 [host=cubietruck-braque] 161937 ok.
Failure / basis pass flights: 162089 / 161937
Tree: qemuu git://xenbits.xen.org/qemu-xen.git
Tree: xen git://xenbits.xen.org/xen.git
Latest 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Basis pass 7ea428895af2840d85c524f0bd11a38aac308308 cb199cc7de987cfda4659fccf51059f210f6ad34
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/qemu-xen.git#7ea428895af2840d85c524f0bd11a38aac308308-7ea428895af2840d85c524f0bd11a38aac308308 git://xenbits.xen.org/xen.git#cb199cc7de987cfda4659fccf51059f210f6ad34-01d84420fb4a9be2ec474a7c1910bb22c28b53c8
Loaded 5001 nodes in revision graph
Searching for test results:
 161937 pass 7ea428895af2840d85c524f0bd11a38aac308308 cb199cc7de987cfda4659fccf51059f210f6ad34
 161980 [host=cubietruck-braque]
 161982 [host=cubietruck-braque]
 161985 [host=cubietruck-braque]
 162023 [host=cubietruck-gleizes]
 162036 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162055 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162061 pass 7ea428895af2840d85c524f0bd11a38aac308308 cb199cc7de987cfda4659fccf51059f210f6ad34
 162062 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162064 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162065 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162066 pass 7ea428895af2840d85c524f0bd11a38aac308308 27eb6833134d0f3ddfb02d09055776e837e9a747
 162068 fail 7ea428895af2840d85c524f0bd11a38aac308308 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
 162067 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162069 pass 7ea428895af2840d85c524f0bd11a38aac308308 8b9890e1c0f5b35c199f40eb4e6cd0ce6c34829b
 162072 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162073 pass 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
 162074 fail 7ea428895af2840d85c524f0bd11a38aac308308 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
 162075 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162077 pass 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
 162079 [host=cubietruck-braque]
 162080 fail 7ea428895af2840d85c524f0bd11a38aac308308 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
 162081 [host=cubietruck-braque]
 162083 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162085 [host=cubietruck-braque]
 162087 [host=cubietruck-braque]
 162088 pass 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
 162089 fail 7ea428895af2840d85c524f0bd11a38aac308308 01d84420fb4a9be2ec474a7c1910bb22c28b53c8
 162092 [host=cubietruck-braque]
 162094 fail 7ea428895af2840d85c524f0bd11a38aac308308 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
Searching for interesting versions
 Result found: flight 161937 (pass), for basis pass
 For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4, results HASH(0x5619d90a5008) HASH(0x5619d90bc8d8) HASH(0x5619d90a26d8) For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 8b9890e1c0f5b35c199f40eb4e6cd0ce6c34829b, results HASH(0x5619d90b11e0) For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 27eb6833134d0f3ddfb02d09055776e837e9a747, results HASH(0x\
 5619d90a6590) For basis failure, parent search stopping at 7ea428895af2840d85c524f0bd11a38aac308308 cb199cc7de987cfda4659fccf51059f210f6ad34, results HASH(0x5619d909a628) HASH(0x5619d90a4888) Result found: flight 162036 (fail), for basis failure (at ancestor ~449)
 Repro found: flight 162061 (pass), for basis pass
 Repro found: flight 162062 (fail), for basis failure
 0 revisions at 7ea428895af2840d85c524f0bd11a38aac308308 caa9c4471d1d74b2d236467aaf7e63a806ac11a4
No revisions left to test, checking graph state.
 Result found: flight 162073 (pass), for last pass
 Result found: flight 162074 (fail), for first failure
 Repro found: flight 162077 (pass), for last pass
 Repro found: flight 162080 (fail), for first failure
 Repro found: flight 162088 (pass), for last pass
 Repro found: flight 162094 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  xen git://xenbits.xen.org/xen.git
  Bug introduced:  8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
  Bug not present: caa9c4471d1d74b2d236467aaf7e63a806ac11a4
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/162094/


  commit 8fc4916daf2aac34088ebd5ec3d6fd707ac4221d
  Author: Julien Grall <jgrall@amazon.com>
  Date:   Tue May 18 14:34:22 2021 +0100
  
      tools/libs: guest: Use const whenever we point to literal strings
      
      literal strings are not meant to be modified. So we should use const
      *char rather than char * when we want to store a pointer to them.
      
      Signed-off-by: Julien Grall <jgrall@amazon.com>
      Reviewed-by: Anthony PERARD <anthony.perard@citrix.com>
      Acked-by: Wei Liu <wl@xen.org>

Revision graph left in /home/logs/results/bisect/xen-unstable-smoke/build-armhf.xen-build.{dot,ps,png,html,svg}.
----------------------------------------
162094: tolerable ALL FAIL

flight 162094 xen-unstable-smoke real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162094/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 build-armhf                   6 xen-build               fail baseline untested


jobs:
 build-armhf                                                  fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Wed May 19 15:39:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 15:39:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130200.244021 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOHp-0007dQ-Om; Wed, 19 May 2021 15:39:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130200.244021; Wed, 19 May 2021 15:39:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOHp-0007dJ-JO; Wed, 19 May 2021 15:39:05 +0000
Received: by outflank-mailman (input) for mailman id 130200;
 Wed, 19 May 2021 15:39:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1ljOHo-0007cS-Oi
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 15:39:04 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1ljOHo-0004bL-Mw
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 15:39:04 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1ljOHo-000111-Lp
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 15:39:04 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1ljOHm-0004F8-Un; Wed, 19 May 2021 16:39:02 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Subject:CC:To:Date:Message-ID:
	Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=KDO5B+1IHs/ZjRDt6ZAGhi+fYL+oqJkh2SN66X1QhiQ=; b=etEbEcRnoa7qAxhJ7GKwXqHmoY
	FAxcsLP4r9+nOmEdFvk6B11OFc0URGwZHoDQB3qh6T7Qr4Y1x8YYR5GY08fJNFBm0F/0Txab4+rD0
	7QyInrfXwsDvFKKariFs8B4MQfhUYgPIHPQL6XGoC2BXAmXQthWrjKsthXuapHthV4xU=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Transfer-Encoding: 7bit
Message-ID: <24741.12566.639691.461134@mariner.uk.xensource.com>
Date: Wed, 19 May 2021 16:39:02 +0100
To: xen-devel@lists.xenproject.org
CC: community.manager@xenproject.org
Subject: IRC networks
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

Summary:

We have for many years used the Freenode IRC network for real-time
chat about Xen.  Unfortunately, Freenode is undergoing a crisis.

There is a dispute between, on the one hand, Andrew Lee, and on the
other hand, all (or almost all) Freenode volunteer staff.  We must
make a decision.

I have read all the publicly available materials and asked around with
my contacts.  My conclusions:

 * We do not want to continue to use irc.freenode.*.
 * We might want to use libera.chat, but:
 * Our best option is probably to move to OFTC https://www.oftc.net/


Discussion:

Firstly, my starting point.

I have been on IRC since at least 1993.  Currently my main public
networks are OFTC and Freenode.

I do not have any personal involvement with public IRC networks.  Of
the principals in the current Freenode dispute, I have only heard of
one, who is a person I have experience of in a Debian context but have
not worked closely with.

George asked me informally to use my knowledge and contacts to shed
light on the situation.  I decided that, having done my research, I
would report more formally and publicly here rather than just
informally to George.


Historical background:

 * Freenode has had drama before.  In about 2001 OFTC split off from
   Freenode after an argument over governance.  IIRC there was drama
   again in 2006.  A significant proportion of the Free Software world,
   including Debian, now use OFTC.  Debian switched in 2006.

Facts that I'm (now) pretty sure of:

 * Freenode's actual servers run on donated services; that is,
   the hardware is owned by those donating the services, and the
   systems are managed by Freenode volunteers, known as "staff".

 * The freenode domain names are currently registered to a limited
   liability company owned by Andrew Lee (rasengan).

 * At least 10 Freenode staff have quit in protest, writing similar
   resignation letters protesting about Andrew Lee's actions [1].  It
   does not appear that Andrew Lee has the public support of any
   Freenode staff.

 * Andrew Lee claims that he "owns" Freenode.[2]

 * A large number of channel owners for particular Free Software
   projects who previously used Freenode have said they will switch
   away from Freenode.

Discussion and findings on Freenode:

There is, as might be expected, some murk about who said what to whom
when, what promises were made and/or broken, and so on.  The matter
was also complicated by the leaking earlier this week of draft(s) of
(at least one of) the Freenode staffers' resignation letters.

Andrew Lee has put forward a position statement [2].  A large part of
the thrust of that statement is allegations that the current head of
Freenode staff, tomaw, "forced out" the previous head, christel.  This
allegation is strongly disputed by all those current (resigning)
Freenode staff I have seen comment.  In any case it does not seem to
be particularly germane; in none of my reading did tomaw seem to be
playing any kind of leading role.  tomaw is not mentioned in the
resignation letters.

Some of the links led me to logs of discussions on #freenode.  I
read some of these in particular[3].  NB I haven't been able to verify
that these logs have not been tampered with.  Having said that and
taking the logs at face value, I found the rasengan writing there to
be disingenuous and obtuse.

Andrew Lee has been heavily involved in Bitcoin.  Bitcoin is a hive of
scum and villainy, a pyramid scheme, and an environmental disaster,
all rolled into one.  This does not make me think well of Lee.

Additionally, it seems that Andrew Lee has been involved in previous
governance drama involving a different IRC network, Snoonet.

I have come to the very firm conclusion that we should have nothing to
do with Andrew Lee, and avoid using services that he has some
effective control over.

Alternatives:

The departing Freenode staff are setting up a replacement,
"libera.chat".  This is operational but still suffering from teething
problems and of course has a significant load as it deals with an
influx of users on a new setup.

On the staff and trust question: As I say, I haven't heard of any of
the Freenode staff, with one exception.  Unfortunately the one
exception does not inspire confidence in me[4] - although NB that is
only one data point.

On the other hand, Debian has had many many years of drama-free
involvement with OFTC.  OFTC has a formal governance arrangement and
it is associated with Software in the Public Interest.  I notice that
the last few of OFTC's annual officer elections have been run partly by
Steve McIntyre.  Steve is a friend of mine (and he is a former Debian
Project Leader) and I take his involvement as a good sign.

I recommend that we switch to using OFTC as soon as possible.


Ian.


References:

Starting point for the resigning Freenode staff's side [1]:
  https://gist.github.com/joepie91/df80d8d36cd9d1bde46ba018af497409

Andrew Lee's side [2]:
  https://gist.github.com/realrasengan/88549ec34ee32d01629354e4075d2d48

[3]
https://paste.sr.ht/~ircwright/7e751d2162e4eb27cba25f6f8893c1f38930f7c4

[4] I won't give the name since I don't want to be shitposting.


From xen-devel-bounces@lists.xenproject.org Wed May 19 15:39:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 15:39:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130203.244032 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOIb-0008Bt-0D; Wed, 19 May 2021 15:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130203.244032; Wed, 19 May 2021 15:39:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOIa-0008Bm-TY; Wed, 19 May 2021 15:39:52 +0000
Received: by outflank-mailman (input) for mailman id 130203;
 Wed, 19 May 2021 15:39:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fOiY=KO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljOIZ-0008Be-Hn
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 15:39:51 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 73546a6d-b89d-4d01-b144-87c20a559093;
 Wed, 19 May 2021 15:39:50 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AB86BB01E;
 Wed, 19 May 2021 15:39:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 73546a6d-b89d-4d01-b144-87c20a559093
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621438789; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=wevaPow5YfXo4W6yF0r59ykU5BpQ+t1e1n7P7fEPngQ=;
	b=bazLB7xUhaavFfNGGvg1xlZkzkUVQ9wOE5MQAofz7SJML6Bwn1Lyb9LtsPiy3Wj7evK3jp
	Xr/cUUgF9+jLPJn+QddUYmDX03Tj0b/uV7WiJ1aKeFg19WjET5TtELVCECwnsEzHa7ixrI
	y9U65/Qtph7F17R4KMO/S0KaMNYXl20=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86: make hypervisor build with gcc11
Message-ID: <ca7a78e5-2ee9-4109-7905-3b9186475f3d@suse.com>
Date: Wed, 19 May 2021 17:39:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Gcc 11 looks to make incorrect assumptions about valid ranges that
pointers may be used for addressing when they are derived from e.g. a
plain constant. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100680.

Utilize RELOC_HIDE() to work around the issue, which for x86 manifests
in at least
- mpparse.c:efi_check_config(),
- tboot.c:tboot_probe(),
- tboot.c:tboot_gen_frametable_integrity(),
- x86_emulate.c:x86_emulate() (at -O2 only).
The last case is particularly odd not just because it only triggers at
higher optimization levels, but also because it only affects one of at
least three similar constructs. Various "note" diagnostics claim the
valid index range to be [0, 2⁶³-1].

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/tools/tests/x86_emulator/x86-emulate.c
+++ b/tools/tests/x86_emulator/x86-emulate.c
@@ -8,6 +8,13 @@
 
 #define ERR_PTR(val) NULL
 
+/* See gcc bug 100680, but here don't bother making this version dependent. */
+#define gcc11_wrap(x) ({                  \
+    unsigned long x_;                     \
+    __asm__ ( "" : "=g" (x_) : "0" (x) ); \
+    (typeof(x))x_;                        \
+})
+
 #define cpu_has_amd_erratum(nr) 0
 #define cpu_has_mpx false
 #define read_bndcfgu() 0
--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -726,7 +726,7 @@ union vex {
 #define copy_VEX(ptr, vex) ({ \
     if ( !mode_64bit() ) \
         (vex).reg |= 8; \
-    (ptr)[0 - PFX_BYTES] = ext < ext_8f08 ? 0xc4 : 0x8f; \
+    gcc11_wrap(ptr)[0 - PFX_BYTES] = ext < ext_8f08 ? 0xc4 : 0x8f; \
     (ptr)[1 - PFX_BYTES] = (vex).raw[0]; \
     (ptr)[2 - PFX_BYTES] = (vex).raw[1]; \
     container_of((ptr) + 1 - PFX_BYTES, typeof(vex), raw[0]); \
--- a/xen/include/asm-x86/fixmap.h
+++ b/xen/include/asm-x86/fixmap.h
@@ -78,7 +78,7 @@ extern void __set_fixmap(
 
 #define clear_fixmap(idx) __set_fixmap(idx, 0, 0)
 
-#define __fix_to_virt(x) (FIXADDR_TOP - ((x) << PAGE_SHIFT))
+#define __fix_to_virt(x) gcc11_wrap(FIXADDR_TOP - ((x) << PAGE_SHIFT))
 #define __virt_to_fix(x) ((FIXADDR_TOP - ((x)&PAGE_MASK)) >> PAGE_SHIFT)
 
 #define fix_to_virt(x)   ((void *)__fix_to_virt(x))
--- a/xen/include/xen/compiler.h
+++ b/xen/include/xen/compiler.h
@@ -140,6 +140,12 @@
     __asm__ ("" : "=r"(__ptr) : "0"(ptr));      \
     (typeof(ptr)) (__ptr + (off)); })
 
+#if CONFIG_GCC_VERSION >= 110000 /* See gcc bug 100680. */
+# define gcc11_wrap(x) RELOC_HIDE(x, 0)
+#else
+# define gcc11_wrap(x) (x)
+#endif
+
 #ifdef __GCC_ASM_FLAG_OUTPUTS__
 # define ASM_FLAG_OUT(yes, no) yes
 #else
--- a/xen/include/xen/pdx.h
+++ b/xen/include/xen/pdx.h
@@ -19,7 +19,7 @@ extern u64 pdx_region_mask(u64 base, u64
 extern void set_pdx_range(unsigned long smfn, unsigned long emfn);
 
 #define page_to_pdx(pg)  ((pg) - frame_table)
-#define pdx_to_page(pdx) (frame_table + (pdx))
+#define pdx_to_page(pdx) gcc11_wrap(frame_table + (pdx))
 
 bool __mfn_valid(unsigned long mfn);
 


From xen-devel-bounces@lists.xenproject.org Wed May 19 15:48:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 15:48:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130211.244044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOR1-0001Eh-S6; Wed, 19 May 2021 15:48:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130211.244044; Wed, 19 May 2021 15:48:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOR1-0001Ea-PA; Wed, 19 May 2021 15:48:35 +0000
Received: by outflank-mailman (input) for mailman id 130211;
 Wed, 19 May 2021 15:48:34 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fOiY=KO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljOR0-0001EE-GF
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 15:48:34 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 04dff2c3-9e11-4aea-9a04-23a82a85d2bb;
 Wed, 19 May 2021 15:48:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 817D6AEF8;
 Wed, 19 May 2021 15:48:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04dff2c3-9e11-4aea-9a04-23a82a85d2bb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621439312; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=9Wp1XDNH3tqp0mzl2nSP1h1Pe8UcGfBsHR7PAJ3dlQw=;
	b=bTvX/CXb1Ey/bPwWbAexDHxyJ4RYbqcOurLagfxT4FovkK1kArJWsscooIpgj0tPhxQDbm
	GjVEdW5Fal2SDMInBLiqBLCyO15U9ovU5Hmc0LleNlR3EMNYH4+sryW23/pHF/uB/lDvke
	aEgx9k5UiJwgSj08aHwoQe1QY93xJqM=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/tboot: include all valid frame table entries in S3
 integrity check
Message-ID: <e878fd86-2d82-ce3c-1dc4-d3a07025f1d4@suse.com>
Date: Wed, 19 May 2021 17:48:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The difference of two pdx_to_page() return values is a number of pages,
not the number of bytes covered by the corresponding frame table entries.

Fixes: 3cb68d2b59ab ("tboot: fix S3 issue for Intel Trusted Execution Technology.")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -323,12 +323,12 @@ static void tboot_gen_frametable_integri
         if ( nidx >= max_idx )
             break;
         vmac_update((uint8_t *)pdx_to_page(sidx * PDX_GROUP_COUNT),
-                       pdx_to_page(eidx * PDX_GROUP_COUNT)
-                       - pdx_to_page(sidx * PDX_GROUP_COUNT), &ctx);
+                    (eidx - sidx) * PDX_GROUP_COUNT * sizeof(*frame_table),
+                    &ctx);
     }
     vmac_update((uint8_t *)pdx_to_page(sidx * PDX_GROUP_COUNT),
-                   pdx_to_page(max_pdx - 1) + 1
-                   - pdx_to_page(sidx * PDX_GROUP_COUNT), &ctx);
+                (max_pdx - sidx * PDX_GROUP_COUNT) * sizeof(*frame_table),
+                &ctx);
 
     *mac = vmac(NULL, 0, nonce, NULL, &ctx);
 


From xen-devel-bounces@lists.xenproject.org Wed May 19 15:49:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 15:49:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130217.244054 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOS9-0001qs-6q; Wed, 19 May 2021 15:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130217.244054; Wed, 19 May 2021 15:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOS9-0001ql-3m; Wed, 19 May 2021 15:49:45 +0000
Received: by outflank-mailman (input) for mailman id 130217;
 Wed, 19 May 2021 15:49:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=fOiY=KO=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljOS7-0001qY-9w
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 15:49:43 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d0d02537-fffe-4c21-8b53-a65bf72ca2ff;
 Wed, 19 May 2021 15:49:42 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 76C3FABB1;
 Wed, 19 May 2021 15:49:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d0d02537-fffe-4c21-8b53-a65bf72ca2ff
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621439381; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=81zFPIuhUnEJqIsltZDYP1MSh2QxuDgB2YkRgK/qc6s=;
	b=b9yEdtU65m4bFJl0T0djMMSByCq+XrdYdzFRffdM6xo5QVCAZ/HP6WY/+sRK8nFHlJVlMx
	BwcRJV8IX8BFQld17X+41lwpMS/Bnb2P97FmMEb/fdtrSbWmTh90s9v3faLQzy0IhOREoG
	d6Brn74wXNg7nuroVS/FaHYv+QFCJDA=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/tboot: adjust UUID check
Message-ID: <422e39c9-0cba-0944-b813-dfe2578ad719@suse.com>
Date: Wed, 19 May 2021 17:49:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Replace a bogus cast, move the static variable into the only function
using it, and add __initconst. While there, also remove a pointless NULL
check.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/tboot.c
+++ b/xen/arch/x86/tboot.c
@@ -27,8 +27,6 @@ static vmac_t domain_mac;     /* MAC for
 static vmac_t xenheap_mac;    /* MAC for xen heap during S3 */
 static vmac_t frametable_mac; /* MAC for frame table during S3 */
 
-static const uuid_t tboot_shared_uuid = TBOOT_SHARED_UUID;
-
 /* used by tboot_protect_mem_regions() and/or tboot_parse_dmar_table() */
 static uint64_t __initdata txt_heap_base, __initdata txt_heap_size;
 static uint64_t __initdata sinit_base, __initdata sinit_size;
@@ -93,6 +91,7 @@ static void __init tboot_copy_memory(uns
 void __init tboot_probe(void)
 {
     tboot_shared_t *tboot_shared;
+    static const uuid_t __initconst tboot_shared_uuid = TBOOT_SHARED_UUID;
 
     /* Look for valid page-aligned address for shared page. */
     if ( !opt_tboot_pa || (opt_tboot_pa & ~PAGE_MASK) )
@@ -101,9 +100,7 @@ void __init tboot_probe(void)
     /* Map and check for tboot UUID. */
     set_fixmap(FIX_TBOOT_SHARED_BASE, opt_tboot_pa);
     tboot_shared = fix_to_virt(FIX_TBOOT_SHARED_BASE);
-    if ( tboot_shared == NULL )
-        return;
-    if ( memcmp(&tboot_shared_uuid, (uuid_t *)tboot_shared, sizeof(uuid_t)) )
+    if ( memcmp(&tboot_shared_uuid, &tboot_shared->uuid, sizeof(uuid_t)) )
         return;
 
     /* new tboot_shared (w/ GAS support, integrity, etc.) is not backwards


From xen-devel-bounces@lists.xenproject.org Wed May 19 16:04:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 16:04:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130227.244066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOgS-0004gZ-Ka; Wed, 19 May 2021 16:04:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130227.244066; Wed, 19 May 2021 16:04:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOgS-0004gS-HB; Wed, 19 May 2021 16:04:32 +0000
Received: by outflank-mailman (input) for mailman id 130227;
 Wed, 19 May 2021 16:04:31 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljOgR-0004gI-0E; Wed, 19 May 2021 16:04:31 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljOgQ-0005an-Pp; Wed, 19 May 2021 16:04:30 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljOgQ-0000PN-BK; Wed, 19 May 2021 16:04:30 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljOgQ-0008Vb-Ar; Wed, 19 May 2021 16:04:30 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=WSf/RxpApr16UUKzw+T6/GY2QMckI88CMF+kZAqTpdg=; b=YYeicJoGCFDA93OIaN36hpL0Q7
	kl49f7cbFBJo95DBABrF7QKSVGi74LOyMuFsm7JSGhtJPcEkdULvvcOu3Cri5HljbHSTBN9XhUMSM
	cNr3sc9D0qO+pHsZALFE40m5rIfgOI/xYvPUXa/caZChrp47IxVcFYA4eupHLgx8S7e4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162078-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162078: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-multivcpu:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 16:04:30 +0000

flight 162078 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162078/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-multivcpu  8 xen-boot                  fail pass in 162058
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 162058

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check fail in 162058 never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check fail in 162058 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162058
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162058
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162058
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162058
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162058
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162058
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162058
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162058
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162058
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162058
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162058
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162078  2021-05-19 05:04:34 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                fail    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Wed May 19 16:15:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 16:15:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130237.244080 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOrD-00069k-P5; Wed, 19 May 2021 16:15:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130237.244080; Wed, 19 May 2021 16:15:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOrD-00069d-Ld; Wed, 19 May 2021 16:15:39 +0000
Received: by outflank-mailman (input) for mailman id 130237;
 Wed, 19 May 2021 16:15:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IUdU=KO=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1ljOrC-00069X-6q
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 16:15:38 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 423b47e5-7ace-4820-bc6a-4e49962a6c25;
 Wed, 19 May 2021 16:15:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 423b47e5-7ace-4820-bc6a-4e49962a6c25
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621440937;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=/RvdvcoNzgaEU1wmQad6fXVJxD7hD5MTNpngWSv+Rl8=;
  b=Z+Vbfn+ev2uJeY4pfm/BXwtwfNjnKcyfMrae8upYxlwZhhQeij6VsmhR
   QMijYLqRyZmku/YPK7Wp1oLLwqTfGSrIGay/bMGQwfPkaPBMF/mR97oGR
   tWK2BD5y95nRFfGvLTj4ZUNaHyHnNgDu7rrH0f5a5PvjSEWvV+sN5q3en
   g=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: q1d+fYkCESONGoI8sfdNo+Pzitcgl8/dIz4k+MGVpMhxGBIpYzviGI14blLjTaym9OE3DOI7MY
 bRWgNlm+6MrpmQsiWAE9D0w0bemWXCSSgXiZ4M2hluwy0ko+KRx6PbpHV+fLgLsNNMOmZsYBx+
 mUQUeIGQFbG9bW50XjxMgn99nZjN8EH6re41ldvcHgDAaKCjE7M1igImGYVk7apcShlj1OdROr
 OiFGrT6E+WskSkPxQsabNS1wUeMmDEi6lNhiUY18HynxUSjA2UVM3iHdb9itm5EbBUxU81Oj5d
 j8g=
X-SBRS: 5.1
X-MesageID: 44254766
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-Data: A9a23:Esav5qq4Mr3wbhSS6VwRmzAFeLteBmJqZRIvgKrLsJaIsI4StFCzt
 garIBnQaPfcZGPyKtp3aYvg9k0A65Lcm9A1HFdl+CE1Fygb9JuZCYyVIHmrMnLJJKUvbq7GA
 +byyDXkBJppJpMJjk71atANlVEljufXAOKU5NfsYkidfyc9IMsaoU8ly7RRbrJA24DjWlvQ4
 IKq/aUzBXf+s9JKGjNMg068gEsHUMTa4Fv0aXRnOJinFHeH/5UkJMp3yZOZdhMUcaENdgKOf
 M7RzanRw4/s10xF5uVJMFrMWhZirrb6ZWBig5fNMkSoqkAqSicais7XOBeAAKv+Zvrgc91Zk
 b1wWZKMpQgBLJD+nL1CWDNiMB4vF5F315qdYiaRvpnGp6HGWyOEL/RGCUg3OcsT+/ptAHEI/
 vsdQNwPRknd3aTsmuv9E7Q8wJ56RCXoFNp3VnVI4jzeF/krB7zeRaHD/fdT3Ssqh9AIFvHbD
 yYcQWYzNkmcPEMRUrsRII4yx7evh0TgT2FV+QuYi6lp0jbTyiUkhdABN/KKI4fXFK25hH2wp
 33E13T0BAkAM96SwibD9Wij7sffkCW+VI8MGbmQ8v9xnEbV1mEVEAcRV1awvb++kEHWc9lYL
 kkJ/CsyvO43/UqiQdTndw21pmaeuRwRUMYWFPc1gDxh0YKNvVzfXDJdCGccOJp87afaWADGy
 HebouHtXGNkmoe2diO3yoa7iGnxZwYaeDpqiTA/cecV3zXyiNht1EuVH4cySPPdYs7dQ2+pm
 23TxMQqr/BD1ZdRhv3TEUXv3mr0zqUlWDLZ8ek+soiNwARjeMaBbpGk5ELX5PJNRGpyZgLa5
 yFa8yRyAfpnMH1sqMBvaL5XdF1Kz6zfWNE5vbKIN8NwnwlBA1b5IehtDMhWfS+FyPronAMFh
 2eP4WtsCGJ7ZST7N8ebnaroVJRCIVfc+STNCamPM4smjmlZXw6b5iB+DXN8LEi2yRVErE3LA
 r/GIZfEJStLUsxPkWvpL9rxJJd2n0jSM0uIHsulp/lmuJLDDEOopUAtYArWMr9htPvayOgXm
 v4GX/a3J9xkeLSWSgHc8JIJLEBMKn4+BJvsrNdQePLFKQ1jcFzNwdeIqV/9U+SJR5hoq9o=
IronPort-HdrOrdr: A9a23:ZsxnYqrfxCXETaOEi7umYyAaV5viL9V00zEX/kB9WHVpm5Oj+f
 xGzc516farslossSkb6Ky90KnpewK5yXbsibNhfItKLzOWx1dAS7sSrbcKogeQVREWk9Q96U
 4OSdkHNDSdNykZsS++2njELz9C+qjFzEnLv5ak854Fd2gDAMsMj3YbNu/YKDwNeOAvP+tlKH
 P23Lshm9PUQwVvUi3NPAhiYwGsnayvqLvWJTo9QzI34giHij2lrJTgFQKD4xsYWzRThZ8/7G
 nsiWXCl+eemsD+7iWZ+37Y7pxQltek4MBEHtawhs8cLSipohq0Zb5mR6aJsFkO0aSSARcR4Z
 3xSiUbToJOAkDqDziISNzWqlHdOQMVmjjfIJmj8CDeSILCNWgH4oF69Pxkm1PimjsdVZdHof
 52NiuixulqJAKFkyLn69fSURZ20kKyvHo5iOYWy2dSSI0EddZq3MEiFNM8KuZxIMvW0vFtLA
 BVNrCX2B+WSyLtU5nThBgi/DVtZAV6Iv6ieDlMhiW46UkjoJlJ9TpQ+CVEpAZ0yHsUcegy2w
 3rCNUbqI1z
X-IronPort-AV: E=Sophos;i="5.82,313,1613451600"; 
   d="scan'208";a="44254766"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=LuY7M+DFngoe90UJ+F6CrbE8gL9L9+lxI8+FiqT8Ls9WPQXIdlDqxtWT1HuVfqyUoHvelJAmL+yHLVqRu5yt72kmPF74hV7Phaohx5v9VLmMKztNfuDqec1EuILWtPfmsMGOELmkXO+WSlB+CozdqM8+WJhGs2wi3WhXrJC4y099J4bRcPGyyklelhJbu/6MqRviIbvjyLzTMESYvlwOExWSGOiSV/Y/vbRr7RiKqRJ3GitwX001MDGauU6umUYjYVehGrrkCbdv4hPOwKg3B139FMBXXIgTVnhfherizUuST/E3PJqzb067puYkRUC4IRwwCwht/0ZWRRRwueuBOw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/RvdvcoNzgaEU1wmQad6fXVJxD7hD5MTNpngWSv+Rl8=;
 b=hjJhs0ksR6zHCNjPKd02j6dvaKkvbDxau/4UQLzByO4PdnzTzJcOOL5tonQkexuJO9SGzNznqk+lTaGhRlbVmu6AvW4Q0x6YHFhtCURIlVtntB20HRy++DKBlrhMWvPQ63j4O0aZllV2SSTm4EG65ePGKxE8pz28Bi9vYRjx0HGG1Xz90hGv3Rbs0Z9gzJOWW2mAw3fk2DyKoEnT1gnOeHvPzMHnBBkEopczFB5ErBTQbuMWYJ4WR22pIaloaTFxTLAeG0vqQS23tFAEo3lcZygRvsCI7Kyg+b++qpG5+vPBwb6YHyocw4kVtP7jYKE3tpW1oKPEPSLGGZFZ1hmNTQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=/RvdvcoNzgaEU1wmQad6fXVJxD7hD5MTNpngWSv+Rl8=;
 b=Kc5UIfqzfhYai3qpz+ytpR3PTMmx3/x4A5mW8J8O7mvsB3Sn85h4pCjMsTpBM8N3Nni6WILInf2vDPtMo5GmKA46tGpno8K+gLNeQ4wqlQ7lzRD0sDqhGriA1G6EkrqKOUmCbSAN2ExOT5zrv+C19zztxhAqXNH8CcobfoNo+sk=
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <iwj@xenproject.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"community.manager@xenproject.org" <community.manager@xenproject.org>
Subject: Re: IRC networks
Thread-Topic: IRC networks
Thread-Index: AQHXTMUre4pjfhF+RUKgWz01JlgFDKrq+tiA
Date: Wed, 19 May 2021 16:15:34 +0000
Message-ID: <10B84448-21E3-41F2-AAD2-B9F6E9EB5DE8@citrix.com>
References: <24741.12566.639691.461134@mariner.uk.xensource.com>
In-Reply-To: <24741.12566.639691.461134@mariner.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 1d7e63f6-0299-47fa-2ca0-08d91ae154e3
x-ms-traffictypediagnostic: PH0PR03MB5941:
x-microsoft-antispam-prvs: <PH0PR03MB5941D51EC1CAD5B2ED7F8E36992B9@PH0PR03MB5941.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:8882;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: d+7R9GDYMZCuB9lTL20ny6/IrcseT8c6u4QHnOJOavCkZxwmX0+Pylg7vFyateoTvpOkCO5DbMTTBYrUEpOFyxWRNTwkz5wXEli9Qc2UOmydP1MCWp0NXEQpdpuIJ6tw6JUUHQsgVx/uEFtN6go6z1C5U2jFVYEcBf80F0GO4zwjHMzwMJvUl2J/JINKLIIFY40dN5B9L0EOWJBahHvESz2uMp3eCzPqUXMukQmz1jhworgt5KGjvgWxHr353k2GIi4W+KwH5lYJET1qCYvxAG58f61GlxkGjIhJT9nN7S1NYZUg5TmIfg9BOEtyl61Fy1ySTScNpAJkdg2PhyDQhWZO4+rVrAX3j0kw0uknE/7IapLgY65kfnQYfWcGRV5VVgHb0z5KER15sishvU7VEgVagtTbkE54NWxu+gF1CAEVg2Vl19CzUdXtNWxUsTzR1jG7e1Rov1FBlE5qusXm1ZiimYXksvOcXoMpvoC8N8Vs7WaXgYSrTBzBIYezpP2jvZmwSPKCan619ulFhBHTYT6z0CtrDes/XWofx+SkBsTi0n4EH4zuahSASpCa64bH2JlcVcC1mxGmp24m8bPnQOzVykPDekyqyJychQ+oUvTScTcHtMbxa2VWQVCWe/PPZTROefvX1V609emNXGUCt3xDEbrhxhrmcPbUiC/pOT0=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(376002)(396003)(39860400002)(366004)(346002)(83380400001)(3480700007)(66946007)(64756008)(66446008)(66476007)(66556008)(2906002)(6486002)(76116006)(6916009)(71200400001)(91956017)(4326008)(26005)(38100700002)(478600001)(186003)(5660300002)(86362001)(122000001)(53546011)(8936002)(8676002)(7116003)(6512007)(316002)(2616005)(33656002)(54906003)(4744005)(36756003)(6506007)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?utf-8?B?UEVyYmUxbXpqQkE1aXRUWkhrTGVZK2k2V1NMSHdhOWZuSUxoOHM4cmFVY255?=
 =?utf-8?B?ZndpcDRxYWZmOTJHRlJpZkMvOFdjQXAyaXNqOUFqVzRHUmZROVZpcnZ4cEJy?=
 =?utf-8?B?b1hCcDhuU3YvRjZIUWdNTHltSzN1di82cEZiK0w3bHB3ZjV6cmNFZ1lpQTJt?=
 =?utf-8?B?VituSEZPNnVYakx0ejdldkRSZ2dZNFIwbGpLUGU1MlB6VXpFWDYvYUl6N1lE?=
 =?utf-8?B?TGlvbG00QW1rT2Uwa2RseHI4K1IrU2toV1VZd2NzWFRGaHFROUFwVlZHZGZF?=
 =?utf-8?B?bkRPZjNZOE5OUW03ZWQ1WUJoeVN1bkh3cVRpenFFTGx2QUxYSFFxQ2tJL3o2?=
 =?utf-8?B?aFF5TzU1QldHUXdyM0VZOGVCNHZUdU0vUzJNbEsrRnNSeUdTdkRsMUJQaUd1?=
 =?utf-8?B?MS92NWJNMTJjN3lhN0VPdUk3T3BjU09taTRqb3pKMWlBZEc1K1hJZjlVOFNZ?=
 =?utf-8?B?MlVJMXNSMnhVNnI2NnVoQlcrRENwb01MVzA3ckpzeUFHSnVvc3R2d1JVWnNp?=
 =?utf-8?B?R1c0U0RwRkd6bWpRamRPNXBLQ1p0Wklzc051bXFyVHg3OThnNmJ0cXdnVng5?=
 =?utf-8?B?cm5zNENWdXpRRHBWRHVlODFxT2xMNjloSFgyeGRVVTU1N293dUNNRy9jN1pE?=
 =?utf-8?B?L3BOclhiQ1V2a0FXZjQ5Nm0wY3hqTEhBMWkwWDNsUUhiVisyT1BkRm5OY3R5?=
 =?utf-8?B?cnNOcVFoU0pNVzV6UjR1bmJGOS96aU5jSmhQOU8wcDFBRy9tNDRIRFdQRjNz?=
 =?utf-8?B?UVJVanJid0hweUpoYURCb1EwNlJFMXQ4T1AzTGcrQVg2Qlk2NFg1RURDK1ZO?=
 =?utf-8?B?VXF3TFdMZ0k5RXBIVzlTTnBrNFhiZndlUHN4SnZzWVFuZ2pZVHFsbFlWN0Nt?=
 =?utf-8?B?MFJqS1ZpYVh6ejlORWZVbHBLcHhJQVgzNEloTjNOTU1sMDFJd0U4cVA2UjhO?=
 =?utf-8?B?YjhaN21iS3pGMFc3endYeEtoZHo5OUxmd3lPMDZZcTRJV2RpWVg2N2VSRFJU?=
 =?utf-8?B?b0NFUEpUS1pQeC93aWpsU2Jtb2k2NkUwVDY0WW5jMkQyMnZYYlRLa0NHTk9K?=
 =?utf-8?B?aFlxdVFIb1BueTRYYWx5QngvbWpCMnBwdUlGbTlWbmtIYTB1d3BVb2tSdWJt?=
 =?utf-8?B?aDdsUTJQUm9hK0luVlFiQnVValB5Qyt5Y3oyOU9jKzZpd0IrczZCcG5sdHNa?=
 =?utf-8?B?dWRhdkhFWi82SjFMTkVsUHpHTjk5UFlwSFczemZEc2VzR3RzeGZCRHphR0RL?=
 =?utf-8?B?TFkyMUNQeVcrV1AwN1JOemgwclFDUC9ZQWY1YjYzemhCWnBzYnNjM0VJTDg2?=
 =?utf-8?B?VzE4aW5OVHRQSHdUdTRzK251V3Q4UDBmSzVrRTl0enpFZk1HeXNyYzBldm8w?=
 =?utf-8?B?Z0hlK3VuaDBlOXplK0dwN1hWLzZ6ak1SN3BVZ1B2c1IvdTJMTFF4VG91aGVO?=
 =?utf-8?B?OFF0clBSYmtlMDYrQm5QTDJSMUIzL2VxWkNXNWJVZzliTlUyRjFZdk1ra09x?=
 =?utf-8?B?N1FiNGQ3am9PVFRJUXJJRjVpOHNyZ0V6aW5pcFNPdjJXOXc4SkNoUSt3OGw5?=
 =?utf-8?B?SVhoM1F3TjA2Y3R4M0xaUWR1OVRYUVo0dGFvK1ZaZTVOSGNWUFljdVJFWVVh?=
 =?utf-8?B?YUF4RC8wQ0lobUZQUGJpQW5jTTY5a2pUUEUrbFZhZWVnQmx5YzczUGg0YWhO?=
 =?utf-8?B?K2ZNM05EbWRTdXdaQlRwbmlyM25WNUxPenhiY29McnpUWVNPWEZNRDRnK2tz?=
 =?utf-8?B?NnJpOEZhSTNzR1ROUTF5K1RRQUhJNzJBMGJQQms5eEoyOERIUlVCSkpiNUlr?=
 =?utf-8?B?bnVBSE5GMkd1dWgxalp2dz09?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <7618C7F3BDC1124FBE69200421585EF1@namprd03.prod.outlook.com>
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 1d7e63f6-0299-47fa-2ca0-08d91ae154e3
X-MS-Exchange-CrossTenant-originalarrivaltime: 19 May 2021 16:15:34.2887
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: coJoBaN+x5ZvK9UhSrBKKcPn+4W7zMzmHUF7gTaR5w6OHJ9BTt8/c2sP8AvIjLNu3UHBGiQB+ONFfc5YnPG2KkLRdqsbnVD8IgVwTtyUwhM=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB5941
X-OriginatorOrg: citrix.com

> On May 19, 2021, at 4:39 PM, Ian Jackson <iwj@xenproject.org> wrote:
> 
> I recommend that we switch to using OFTC as soon as possible.

Thanks, Ian.  I tend to agree with the recommendation.  I think unless someone wants to argue that we go to libera (or stick with freenode), we should go with that option.

Normally for a decision like this we'd wait 2 weeks for counter-arguments before making it official.  Does anyone want to argue that we should move up the timetable for some reason?

I've registered #xendevel on oftc; I'd encourage early adopters to give it a try.

 -George


From xen-devel-bounces@lists.xenproject.org Wed May 19 16:18:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 16:18:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130244.244091 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOtq-0006pX-BH; Wed, 19 May 2021 16:18:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130244.244091; Wed, 19 May 2021 16:18:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljOtq-0006pQ-86; Wed, 19 May 2021 16:18:22 +0000
Received: by outflank-mailman (input) for mailman id 130244;
 Wed, 19 May 2021 16:18:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljOto-0006pG-HG; Wed, 19 May 2021 16:18:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljOto-0005pt-8f; Wed, 19 May 2021 16:18:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljOtn-0000xY-W5; Wed, 19 May 2021 16:18:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljOtn-0006nn-VZ; Wed, 19 May 2021 16:18:19 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=hrqqIBHQfH9KG8kzB9XFaKUWNblLhl3gDf14to0C/hI=; b=Kzpg8vXqxmjF9Cu6imAuMrH4pG
	+TiqMprHHaMtWuWrlcXu9/XhP+wFcP6fUPGt23SideuBpVnYUrASk01aOb2FVOeo4UZb8uIrZ97rw
	KPPHIJqo28+kpL+f36dp7C4FnwaVdXKkVgTAUnbGtrl06nVEqdCeiwD0Ks4JvCQk02cY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162093-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162093: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=935abe1cc463917c697c1451ec8d313a5d75f7de
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 16:18:19 +0000

flight 162093 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162093/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  935abe1cc463917c697c1451ec8d313a5d75f7de
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162023  2021-05-18 13:00:27 Z    1 days
Failing since        162036  2021-05-18 16:00:26 Z    1 days   12 attempts
Testing same since   162093  2021-05-19 14:01:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   caa9c4471d..935abe1cc4  935abe1cc463917c697c1451ec8d313a5d75f7de -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 19 17:11:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 17:11:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130261.244104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljPip-00040Z-5Z; Wed, 19 May 2021 17:11:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130261.244104; Wed, 19 May 2021 17:11:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljPip-00040S-2P; Wed, 19 May 2021 17:11:03 +0000
Received: by outflank-mailman (input) for mailman id 130261;
 Wed, 19 May 2021 17:11:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljPin-00040K-Do
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 17:11:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljPil-0006hk-UB; Wed, 19 May 2021 17:10:59 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljPil-0005DR-Ne; Wed, 19 May 2021 17:10:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Hu9lQT0GuHInhC9q3DZD/hR540w21XkpC1TWm+UbIBU=; b=F12mz48ozWo1Qr6nvObc0JtnOE
	N9TsHRHcDppPa/oflIF557CdVNPEwDZLdZFNaAVeWR1PACRmZdpU5Ml3W4H3D7AviPgbhLkphGrwu
	+vEoxp0ARJtRtne05phgGDtcIXJVAD6Szi46gSz2HggsQPp2rqRWqulBwm3noEBolqXg=;
Subject: Re: Preserving transactions across Xenstored Live-Update
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
 <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
 <c14d7a27-b486-01c1-1a24-70f286c34431@xen.org>
 <b8413748-a889-8b0c-df93-2c93ed832369@xen.org>
 <95144b63-292b-3d60-b7d2-1847a1611fd6@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <911b7981-5c92-b224-1ce3-c312ebd423f7@xen.org>
Date: Wed, 19 May 2021 18:10:57 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <95144b63-292b-3d60-b7d2-1847a1611fd6@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Juergen,

On 19/05/2021 13:50, Juergen Gross wrote:
> On 19.05.21 14:33, Julien Grall wrote:
>>
>>
>> On 19/05/2021 13:32, Julien Grall wrote:
>>> Hi Juergen,
>>>
>>> On 19/05/2021 10:09, Juergen Gross wrote:
>>>> On 18.05.21 20:11, Julien Grall wrote:
>>>>>
>>>>> I have started to look at preserving transactions across Live-Update
>>>>> in C Xenstored. So far, I have managed to transfer transactions that
>>>>> read/write existing nodes.
>>>>>
>>>>> Now, I am running into trouble transferring new/deleted nodes within
>>>>> a transaction with the existing migration format.
>>>>>
>>>>> C Xenstored will keep track of nodes accessed during the transaction,
>>>>> but not of their children (AFAICT for performance reasons).
>>>>
>>>> Not performance reasons, but because there isn't any need for that:
>>>>
>>>> The children are either unchanged (so the non-transaction node records
>>>> apply), or they will be among the tracked nodes (transaction node
>>>> records apply). So in both cases all children should be known.
>>> In theory, opening a new transaction means you will not see any 
>>> modification in the global database until the transaction has been 
>>> committed. What you describe would break that because a client would 
>>> be able to see new nodes added outside of the transaction.
>>>
>>> However, C Xenstored implements neither of the two. Currently, when a 
>>> node is accessed within the transaction, we will also store the names 
>>> of the current children.
>>>
>>> To give an example with access to the global DB (prefixed with TID0) 
>>> and within a transaction (TID1)
>>>
>>>      1) TID0: MKDIR "data/bar"
>>>          2) Start transaction TID1
>>>      3) TID1: DIRECTORY "data"
>>>          -> This will cache the node data
>>>      4) TID0: MKDIR "data/foo"
>>>          -> This will create "foo" in the global database
>>>      5) TID1: MKDIR "data/fish"
>>>          -> This will create "fish" in the transaction
>>>      6) TID1: DIRECTORY "data"
>>>          -> This will only return "bar" and "fish"
>>>
>>> If we Live-Update between 4) and 5), then we should make sure that 
>>> "bar" cannot be seen in the listing by TID1.
>>
>> I meant "foo" here. Sorry for the confusion.
>>
>>>
>>> Therefore, I don't think we can restore the children using the global 
>>> node here. Instead, we need to find a way to transfer the list of known 
>>> children within the transaction.
>>>
>>> As a fun fact, C Xenstored implements transactions weirdly, so TID1 
>>> will be able to access "bar" if it knows the name, but not list it.
> 
> And this is the basic problem, I think.
> 
> C Xenstored should be repaired by adding all (remaining) children of a
> node into the TID's database when the list of children is modified
> either globally or in a transaction. A child having been added globally
> needs to be added as "deleted" into the TID's database.

IIUC, for every modification of the global database, we would need to 
walk every single transaction and check whether a parent was accessed. 
Am I correct?

If so, I don't think this is a workable solution, because of the cost it 
would add to executing a single command.

Is it something you plan to address differently with your rework of the DB?
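[For illustration, the visibility rules in the DIRECTORY example quoted above can be sketched as a toy model. This is a hypothetical sketch, not the actual C Xenstored data structures: `Store`, `mkdir`, and `directory` are made-up names, and the model only captures the "cache the children list on first access within a transaction" behaviour being discussed.]

```python
# Toy model of XenStore transaction visibility: a transaction snapshots a
# node's children the first time it accesses the node, so later global
# MKDIRs are not visible inside the transaction (hypothetical sketch).

class Store:
    def __init__(self):
        self.nodes = {"data": set()}  # global DB: path -> child names
        self.tx = {}                  # tid -> cached {path: child names}

    def mkdir(self, tid, parent, name):
        if tid == 0:
            # TID0 writes go straight to the global database.
            self.nodes.setdefault(parent, set()).add(name)
        else:
            # Inside a transaction: snapshot the parent on first touch,
            # then record the new child only in the transaction's view.
            view = self.tx.setdefault(tid, {})
            view.setdefault(parent, set(self.nodes[parent])).add(name)

    def directory(self, tid, path):
        if tid and path in self.tx.get(tid, {}):
            # The transaction already cached this node's children.
            return sorted(self.tx[tid][path])
        children = set(self.nodes[path])
        if tid:
            # First access within the transaction: cache the view.
            self.tx.setdefault(tid, {})[path] = set(children)
        return sorted(children)

s = Store()
s.mkdir(0, "data", "bar")                    # 1) TID0: MKDIR data/bar
assert s.directory(1, "data") == ["bar"]     # 3) TID1 caches the view
s.mkdir(0, "data", "foo")                    # 4) TID0: MKDIR data/foo
s.mkdir(1, "data", "fish")                   # 5) TID1: MKDIR data/fish
assert s.directory(1, "data") == ["bar", "fish"]  # "foo" stays invisible
```

In this model, a Live-Update between steps 4) and 5) would have to carry the cached `{"data": {"bar"}}` view of TID1 across, since it can no longer be reconstructed from the global database alone.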

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 17:52:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 17:52:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130286.244134 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljQN1-0000gm-R0; Wed, 19 May 2021 17:52:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130286.244134; Wed, 19 May 2021 17:52:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljQN1-0000gf-NS; Wed, 19 May 2021 17:52:35 +0000
Received: by outflank-mailman (input) for mailman id 130286;
 Wed, 19 May 2021 17:52:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ufKr=KO=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1ljQMz-0000gS-TA
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 17:52:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 49dd08ff-c781-420d-9e3c-3da7321213a6;
 Wed, 19 May 2021 17:52:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1F6D0AC47;
 Wed, 19 May 2021 17:52:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 49dd08ff-c781-420d-9e3c-3da7321213a6
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621446752; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zedULLW8DGxX1TLqyo9Mi46k+PxRWRbfziyEmVYvztA=;
	b=adc3KTvdkttFurZbadKEWBWose4cSZpPL4W5vQOLgSfYzK2VuHDRpqNcgX+10R+8ar6l8N
	Ezd/GBvSx1ya6YGNy3p2V7OasvdsnPjlbbnHUZdWJzwFp1ex2oDyNJCXSQbAZKDNOQdFZo
	8ScfT2YkOgS1Y0EDwt8L8OP68DDHdGo=
Message-ID: <b596d5ea2e96be5c6d627e14b87beb51ba4a094e.camel@suse.com>
Subject: Re: [PATCH v2 2/2] automation: fix dependencies on openSUSE
 Tumbleweed containers
From: Dario Faggioli <dfaggioli@suse.com>
To: Roger Pau Monné <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>, 
	Andrew Cooper <andrew.cooper3@citrix.com>
Date: Wed, 19 May 2021 19:52:30 +0200
In-Reply-To: <YKSv/BGxuy+OCn3t@Air-de-Roger>
References: <162135593827.20014.14959979363028895972.stgit@Wayrath>
	 <162135616513.20014.6303562342690753615.stgit@Wayrath>
	 <YKSv/BGxuy+OCn3t@Air-de-Roger>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-Z8btzhCDS64c9HjaHiXd"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-Z8btzhCDS64c9HjaHiXd
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2021-05-19 at 08:28 +0200, Roger Pau Monné wrote:
> On Tue, May 18, 2021 at 06:42:45PM +0200, Dario Faggioli wrote:
> > Fix the build inside our openSUSE Tumbleweed container by adding
> > libzstd headers. While there, remove the explicit dependency
> > for python and python3 as the respective -devel packages will pull
> > them in anyway.
> > 
> > Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> 
Thanks!

> Can you try to push an updated container to the registry?
>
Yeah, I tried, but I'm getting this:

STEP 8: COMMIT registry.gitlab.com/xen-project/xen/suse:opensuse-tumbleweed
--> 940c6edbff9
940c6edbff965135a25bc20f0e2a59cf6062b9e8bc3516858828cbb7bba92d8f
Getting image source signatures
Copying blob acc28ee93e9b [--------------------------------------] 8.0b / 3.5KiB
Copying blob 89c6eef91991 [--------------------------------------] 8.0b / 57.0MiB
Copying blob 20dabc80d591 [--------------------------------------] 8.0b / 90.6MiB
Copying blob 5ea007576ed8 [--------------------------------------] 8.0b / 2.0GiB
Error: error copying image to the remote destination: Error writing blob: Error initiating layer upload to /v2/xen-project/xen/suse/blobs/uploads/ in registry.gitlab.com: errors:
denied: requested access to the resource is denied
unauthorized: authentication required

make: *** [Makefile:15: suse/opensuse-tumbleweed] Error 125

So either I'm doing something wrong, or I was just misremembering and
don't have permission to do that... Can we check whether I do?

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-Z8btzhCDS64c9HjaHiXd
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmClUF4ACgkQFkJ4iaW4
c+4O/xAAwSdCZwqauw3kclkn03W5pkzgTdUyxwEJtENqywjoyNRauXX8rWE68DIe
rReTh2YWZfTWURxnk1XmhMN78TIBa/BsCmMe39hbEU6iesk9pUGTV9vamX+9+KMB
LlFbzT1wCxNNKVUxxeo49K7F3ZUL1wQTAO6nHlfDBEHTdRUF5x/Y1ncJ4YYrJ6vL
D/Qsq6tem2XoxffgX8pUwqCvQXRRecfp8GnA3l6G1GiZAImryPkGAtAufQXVQb3F
/EdjD7InES/zMK6HrftPyFWYw+DEctHiVaNTq+a1UQ1xlwABTtu+YOb12DF2Pml5
VgXi4IrkRQrx2W5iiWmXvzWd3s2ERs2xsOE6r7pcUp+bpmP1qsEn8ckjHgB4q7eh
M08Zf0OABGrXf8CU6H5TxPF1I07LGNK3QzN6COv/ZNj5U0/H/E35r/viagm4TsdN
5uhtPY9Ai9tAInB74B4RDYIzi8P6OiDAMw4p8skeHrKRILBVCUuL0PbtqsCQAexM
LxjJNRQaXTrtmODxDWwTkGXf1UC8Ww/h20eDHrXyMmX2fMXrx7RqwZShOOSOAS1H
YdCPpSlDLYa+Rs1oWBe2a/MzwkrMilNG6tleqya+j8yojLqPgD3/9T7j5uwOlKsw
Hjv0UKIS3fhuHo1bbuG0Ekw3pQerlr1kZXUr+0ptJwqYW3U+a/Q=
=Ji1a
-----END PGP SIGNATURE-----

--=-Z8btzhCDS64c9HjaHiXd--



From xen-devel-bounces@lists.xenproject.org Wed May 19 18:27:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 18:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130296.244145 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljQum-0004Ai-I6; Wed, 19 May 2021 18:27:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130296.244145; Wed, 19 May 2021 18:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljQum-0004Ab-F4; Wed, 19 May 2021 18:27:28 +0000
Received: by outflank-mailman (input) for mailman id 130296;
 Wed, 19 May 2021 18:27:27 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljQul-0004AV-4X
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 18:27:27 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljQuk-00083q-TW; Wed, 19 May 2021 18:27:26 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljQuk-0003jx-No; Wed, 19 May 2021 18:27:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=KER7IdtF5PMSqwmGU0+kGoiqT5QVxY45ekVVsbNdjFg=; b=hVF9SPrGXSwlG94RVrR0JYMuIx
	HkOO7Ukl+hqMol4V8AwZVITnsmCAL2Il1TTc+92+kvm+J1+YTE75/mw4de/gLmLERao3py5h1JRWx
	f15p1jLlL0l7ul/12dLb6l8NHzxr+TkAEhaBA1tiyDqQQk4sSn4EgIuMt9f5aSd3okVY=;
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
Date: Wed, 19 May 2021 19:27:24 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

On 19/05/2021 03:22, Penny Zheng wrote:
> Hi Julien

Hi Penny,

>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 4:58 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
>>> +beginning, shall never go to heap allocator or boot allocator for any use.
>>> +
>>> +Domains on Static Allocation is supported through device tree
>>> +property `xen,static-mem` specifying reserved RAM banks as this domain's
>> guest RAM.
>>
>> I would suggest using "physical RAM" when referring to the host memory.
>>
>>> +By default, they shall be mapped to the fixed guest RAM address
>>> +`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
>>
>> There are a few bits that need to be clarified or added to the description:
>>     1) "By default" suggests there is an alternative possibility.
>> However, I don't see any.
>>     2) Will the first region of xen,static-mem be mapped to GUEST_RAM0_BASE
>> and the second to GUEST_RAM1_BASE? What if a third region is specified?
>>     3) We don't guarantee the base address and the size of the banks.
>> Wouldn't it be better to let the admin select the region he/she wants?
>>     4) How do you determine the number of cells for the address and the size?
>>
> 
> The specific implementation of this part can be traced to the last commit:
> https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-11-penny.zheng@arm.com/

Right. My point is an admin should not have to read the code in order to 
figure out how the allocation works.

> 
> It will exhaust GUEST_RAM0_SIZE first, then move on to GUEST_RAM1_BASE.
> GUEST_RAM0 may take up several regions.

Can this be clarified in the commit message?

> Yes, I may add the 1:1 direct-map scenario here to explain the alternative possibility.

Ok. So I would suggest removing "by default" until the implementation 
for direct-map is added.

> For the third point, are you suggesting that we could provide an option that lets the user
> also define the guest memory base address/size?

When reading the document, I originally thought that the first region 
would be mapped to bank1, and then the second region mapped to bank2...

But from your reply, this is not the expected behavior. Therefore, 
please ignore my suggestion here.

> I'm confused about the fourth point: do you mean the address cells and size cells for xen,static-mem = <...>?

Yes. This should be clarified in the document. See below for why.

> It will be consistent with the ones defined in the parent node, domUx.

Hmmm... In the example you provided, the parent's values would be chosen.
However, from the example I would expect the #{address,size}-cells
properties in domU1 to be used. What did I miss?

>>> +Static Allocation is only supported on AArch64 for now.
>>
>> The code doesn't seem to be AArch64 specific. So why can't this be used for
>> 32-bit Arm?
>>
> 
> True, we have plans to also make it work on AArch32 in the future,
> because we are considering Xen on the Cortex-R52.

All the code seems to be implemented in generic Arm code. So isn't it 
already working?

>>>    static int __init early_scan_node(const void *fdt,
>>>                                      int node, const char *name, int depth,
>>>                                      u32 address_cells, u32 size_cells,
>>> @@ -345,6 +394,9 @@ static int __init early_scan_node(const void *fdt,
>>>            process_multiboot_node(fdt, node, name, address_cells, size_cells);
>>>        else if ( depth == 1 && device_tree_node_matches(fdt, node, "chosen") )
>>>            process_chosen_node(fdt, node, name, address_cells,
>>> size_cells);
>>> +    else if ( depth == 2 && fdt_get_property(fdt, node, "xen,static-mem",
>> NULL) )
>>> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
>>> +                              size_cells, &bootinfo.static_mem);
>>
>> I am a bit concerned to add yet another method to parse the DT and all the
>> extra code it will add like in patch #2.
>>
>>   From the host PoV, they are memory reserved for a specific purpose.
>> Would it be possible to consider the reserve-memory binding for that
>> purpose? This will happen outside of chosen, but we could use a phandle to
>> refer the region.
>>
> 
> Correct me if I've misunderstood: do you mean a device tree snippet like this:

Yes, this is what I had in mind. Although I have one small remark below.


> reserved-memory {
>     #address-cells = <2>;
>     #size-cells = <2>;
>     ranges;
>   
>     static-mem-domU1: static-mem@30000000 {

I think the node would need to contain a compatible (name to be defined).

>        reg = <0x0 0x30000000 0x0 0x20000000>;
>     };
> };
> 
> chosen {
>     ...
>     domU1 {
>         xen,static-mem = <&static-mem-domU1>;
>     };
>     ...
> };
> 
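For clarity, the two fragments above combined, with the compatible string Julien asks for added (the name "xen,static-memory" here is only a placeholder to illustrate the idea; no such binding is defined in this thread):

```dts
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    static-mem-domU1: static-mem@30000000 {
        /* placeholder compatible -- the actual name is still to be defined */
        compatible = "xen,static-memory";
        reg = <0x0 0x30000000 0x0 0x20000000>;
    };
};

chosen {
    domU1 {
        xen,static-mem = <&static-mem-domU1>;
    };
};
```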
>>>
>>>        if ( rc < 0 )
>>>            printk("fdt: node `%s': parsing failed\n", name); diff --git
>>> a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h index
>>> 5283244015..5e9f296760 100644
>>> --- a/xen/include/asm-arm/setup.h
>>> +++ b/xen/include/asm-arm/setup.h
>>> @@ -74,6 +74,8 @@ struct bootinfo {
>>>    #ifdef CONFIG_ACPI
>>>        struct meminfo acpi;
>>>    #endif
>>> +    /* Static Memory */
>>> +    struct meminfo static_mem;
>>>    };
>>>
>>>    extern struct bootinfo bootinfo;
>>>
>>
>> Cheers,
>>
>> --
>> Julien Grall

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 18:42:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 18:42:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130302.244155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljR9F-0006NX-Ra; Wed, 19 May 2021 18:42:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130302.244155; Wed, 19 May 2021 18:42:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljR9F-0006NQ-Oh; Wed, 19 May 2021 18:42:25 +0000
Received: by outflank-mailman (input) for mailman id 130302;
 Wed, 19 May 2021 18:42:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dLqQ=KO=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ljR9D-0006NI-Qe
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 18:42:24 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [85.215.255.21])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 35da6c87-dd7d-49df-ad4c-7f24c308a1bb;
 Wed, 19 May 2021 18:42:21 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.26.1 AUTH)
 with ESMTPSA id z0b24bx4JIgK004
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Wed, 19 May 2021 20:42:20 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 35da6c87-dd7d-49df-ad4c-7f24c308a1bb
ARC-Seal: i=1; a=rsa-sha256; t=1621449740; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=lcSWtzXaOvxj7MsIKsdfDKez0Fy0ECtz6xe+e4GJ09zaUx8O118pSoixtArzkMMSy2
    FX858hzjj7cvNJ9aoMJlhZKtvel8NIXjzHQondWTHA8qQbdYvVScfzovttlOnB2vhhwy
    /iGbZkX5r2/HfE/SrhYqoPMLUb+k/cl5hLq+h+wj/F6XAzq9gI2R1SRlqGjOloUsmbXa
    n7WYsR57+Wg2WjVi8dh8PPHmD1FjDpnVNpsa/nG0/oRD4UqnGDkFAfRivM7aMryU6/Ra
    RsTOwUG3e4lzSMSttZJvdGvrhJrXENmQBtOW3v+Ykv41gyzQsb+g1fOnyPUZJgxcBjIH
    yr/A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1621449740;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=mCJ9nZZdYMufSz5UqlzFY4JHtrSF7XvaEMFLbphEo6o=;
    b=U+PuczWDnfE4bSGv9ZMZFGHXg0bTyJWsS3wFPV+Fb65eVHlLheyAWJ5H9sjGEXiXao
    TMDRYyIE9E3FLf2OLswbk/qkaFu5Nqu+MmxFiVnMrfkyN78XcdsPTyDBlBzBbHqDs3uu
    d0SKoPezzU0MTGwS4RGNacfYJ9lrjHPyhqAtZktBoJUJNBf6qaCJHwK+9DyuQbj8RFqL
    kCB/JtBW0nyYVR5p+EjVJXWzEXsQ3Isc9GtWg9QF3A36z1GjSgAPQRtTO24Gn9fGeaLW
    H2gybu5xfGexipzDeC811xZPPczzGHa0d5fQy46y99wHPis6/LCs6y2dM2ijLW6/bGZ3
    n9iA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1621449740;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=mCJ9nZZdYMufSz5UqlzFY4JHtrSF7XvaEMFLbphEo6o=;
    b=b58mEcqHFuEe11s5RfkBkFxilYmI8YELhztA9HewVD+rBm8QG/HYQ9SL2rWtK5QOTp
    4DwBi6Rd8bIZvhRsbuW2/FuHatVqUnsg/+J1WfRQ0NtiQe2HN9Nr07fHqJ5tUehxK7Qn
    N/ZuTMQotDrsW27f1e1Tk2YuuUqakXCIecnmXJU0nUEdtx/xXceFQWxXDfoOK4qQxvYP
    DV+wGDqZRSVUOH90ZVc0ZCdsBY3cG27WPxdfZOr5/bF3ZMdXtCC1GFlh1SfppcVSFb4U
    AU7YKNNA1Rt3VdlibNkXM0yDRgbBKhqzDcmmIyJxoU+Pi72s2nSkdAZtCgY+8ft11s79
    gTiw==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Wed, 19 May 2021 20:42:05 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <20210519204205.5bf59d51.olaf@aepfle.de>
In-Reply-To: <7abb3c8f-4a9b-700b-5c0c-dc6f42336eab@suse.com>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
	<7abb3c8f-4a9b-700b-5c0c-dc6f42336eab@suse.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/KDp9mBry2JYq=FciRNN_/Dj";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/KDp9mBry2JYq=FciRNN_/Dj
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

On Mon, 17 May 2021 12:54:02 +0200,
Jan Beulich <jbeulich@suse.com> wrote:

> x86/Xen: swap NX determination and GDT setup on BSP
>
> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
> For this to work when NX is not available, x86_configure_nx() needs to
> be called first.


Thanks. I tried this patch on top of the SLE15-SP3 kernel branch.
Without the patch booting fails as reported.
With the patch the dom0 starts as expected.
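The ordering constraint Jan describes can be sketched as a toy model (the function names mirror the patch description, but the bodies are placeholders, not the real kernel code):

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: x86_configure_nx() must run before xen_setup_gdt(),
 * because the GDT setup adjusts page tables and that only works
 * once the NX decision has been made. */
static bool nx_configured;

static void x86_configure_nx(void)
{
    /* stand-in for deciding whether page tables may use the NX bit */
    nx_configured = true;
}

static void xen_load_gdt_boot(void)
{
    /* adjusting page tables is only safe after the NX decision */
    assert(nx_configured);
}

static void xen_setup_gdt(void)
{
    xen_load_gdt_boot();
}
```

The fix in the patch is simply to swap the two calls on the BSP so the assertion above would hold.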


Olaf

--Sig_/KDp9mBry2JYq=FciRNN_/Dj
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmClW/0ACgkQ86SN7mm1
DoCtrA//Yvc7TtGgzGuqFFkLaj4rD3zvgNWDhLF44/oFU23Ksxh7YoT5oaQSxiVy
JliWgplE11NiRv5RGZQ91T+s52Vaglk0DFhqKrBo35XPAauAuKraLlQ25W8VjSrr
80mehnsAuRzFexd9e7O1+gmUKz5TdW7Ys6MTn/L7zSVTQQaDdEOQtOdQfGHzkdfv
+L4EUA7x1XAvxIsd5VViATIZ2ieYAxLQE2ApgzppwowKSJki+V31QINMoPHPd3+P
A8OldTeygCfecWZNjELkuxULRPjTxCrbS9NIEZ6ubBEiRSMICnmafX73d9AcIrDh
XlSq6Wse7hH9R/+ZxlYeMZPQZI0kDEYyRrFE+5OK7WpWllDpm92XlbqvbtLBcbus
9h8qoYgDjJAgRhxWhhtBxyWuq6bfEQu+7GLA1YZeULSS29nKnCaasA9eMUdq3eA2
9ZQ3Ie+5jlAb2i2KQSIgr6XiD3gMALt6oDjX/bKrDkYFZZAjvbfFvDvkxvKYvmaM
6IhMGJBwIhRAlEQ2yS4B/VihvAoRxOJLzlnGrlvafvCbUD4bjYffstnXKhmQsc/L
TTthNL4j3l2sSdtG3sWqetQ16WMGva56Oq3LX746BQkiN3N6uy9k9/RT6JXAGXNm
G47AL5w/5kp6OWqnDJif+17b4QC1na6dMg1p0r+qqZ+EmC9DYao=
=oroi
-----END PGP SIGNATURE-----

--Sig_/KDp9mBry2JYq=FciRNN_/Dj--


From xen-devel-bounces@lists.xenproject.org Wed May 19 18:50:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 18:50:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130313.244175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRGq-0007zJ-ST; Wed, 19 May 2021 18:50:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130313.244175; Wed, 19 May 2021 18:50:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRGq-0007zC-Oo; Wed, 19 May 2021 18:50:16 +0000
Received: by outflank-mailman (input) for mailman id 130313;
 Wed, 19 May 2021 18:50:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=94wl=KO=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1ljRGp-0007z6-Qu
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 18:50:15 +0000
Received: from mail-pj1-x102b.google.com (unknown [2607:f8b0:4864:20::102b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5747aefe-4f4d-4ef9-93a5-09a2d3349c2d;
 Wed, 19 May 2021 18:50:14 +0000 (UTC)
Received: by mail-pj1-x102b.google.com with SMTP id k5so7803618pjj.1
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 11:50:14 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id 204sm126125pfy.56.2021.05.19.11.50.09
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 11:50:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5747aefe-4f4d-4ef9-93a5-09a2d3349c2d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=dYfUs37GH26f9qor1VLh+dBs069CWpkdAXD/ElhmJms=;
        b=TrWBAUFZb5IOcZqvzjceFmBdD//LhG3J/GS/NNJVT3gU+F4jePK2De3KUi6WMq1NHA
         zndANjSseMb1Pp2JRwuG0SKzESIJZgHwjEgadrwQvWeoPO53UCfAxaiTf1Zb6HKEWFg+
         KxvquRqXRHR2JXs6tVsspVphZHfh6QRuiwDBKvnzs/KhFOZgDEomwsEaIwbkEmyXnii1
         Zfxo7hizbKJjteQW7iRhmVRGFo9GFw0B11Cxe44wbA/ziDpsrbDsCg/BzVUrrcWoLnZb
         /YnwocBrRVTzDPZIHZwS44CV6X61m/f/ggI85MB9OD/nN5l0XNYHfp13C3qnJ0yxFZH1
         QMng==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=dYfUs37GH26f9qor1VLh+dBs069CWpkdAXD/ElhmJms=;
        b=QOekDqQawqSpHa1Pb1HsBndJx41oOWWcI0ORd6vndiKwbskwyhocv64VlP5g4mKiqv
         SvfrsJaO4P6bxT+K3GjO+N2CJqaRVvWEvacKtZObxWAegvINnL4d/X2Dh1Z7pxtQYXLX
         5zfFPzOQr99EL+ipqbwDz2N4kJW7KLx8AHm9B6xq1irTfNPAqNkt+m2E8wOqXe4TY/7e
         FXimYZfNorEkWlVMFSFIqBOqtCkltG7k6izFsaXfwiXGpauEkoqqNYtFwuAst05rDwdR
         NEvyY6mnvwO0I2hsw9x/BlNckyjykbm9a8zGUT45hvPmQPt9WayA9F+xnoAMwQsF2R69
         68Ww==
X-Gm-Message-State: AOAM533SQ9fCKCKky15eIxPj2oJS+Rk8zivlmEm3ClYHNmNdYxe4535P
	vUi2kuO9OCa0PK4hSqfQTdY=
X-Google-Smtp-Source: ABdhPJxg56VOtjzCy0nTcRzR5TrWV2NzF6pi2y8FSlVZjGvfF9DZQZKizJXSGDAHfIg8aWG5cZking==
X-Received: by 2002:a17:90b:1949:: with SMTP id nk9mr742999pjb.220.1621450213929;
        Wed, 19 May 2021 11:50:13 -0700 (PDT)
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-2-tientzu@chromium.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
Date: Wed, 19 May 2021 11:50:07 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518064215.2856977-2-tientzu@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 5/17/2021 11:42 PM, Claire Chang wrote:
> Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> initialization to make the code reusable.
> 
> Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++----------------------
>  1 file changed, 25 insertions(+), 26 deletions(-)
> 
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index 8ca7d505d61c..d3232fc19385 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -168,9 +168,30 @@ void __init swiotlb_update_mem_attributes(void)
>  	memset(vaddr, 0, bytes);
>  }
>  
> -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> +				    unsigned long nslabs, bool late_alloc)
>  {
> +	void *vaddr = phys_to_virt(start);
>  	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
> +
> +	mem->nslabs = nslabs;
> +	mem->start = start;
> +	mem->end = mem->start + bytes;
> +	mem->index = 0;
> +	mem->late_alloc = late_alloc;
> +	spin_lock_init(&mem->lock);
> +	for (i = 0; i < mem->nslabs; i++) {
> +		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> +		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> +		mem->slots[i].alloc_size = 0;
> +	}
> +
> +	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> +	memset(vaddr, 0, bytes);

You are doing an unconditional set_memory_decrypted() followed by a
memset here, and then:

> +}
> +
> +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> +{
>  	struct io_tlb_mem *mem;
>  	size_t alloc_size;
>  
> @@ -186,16 +207,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
>  	if (!mem)
>  		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
>  		      __func__, alloc_size, PAGE_SIZE);
> -	mem->nslabs = nslabs;
> -	mem->start = __pa(tlb);
> -	mem->end = mem->start + bytes;
> -	mem->index = 0;
> -	spin_lock_init(&mem->lock);
> -	for (i = 0; i < mem->nslabs; i++) {
> -		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> -		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> -		mem->slots[i].alloc_size = 0;
> -	}
> +
> +	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);

You convert this call site to swiotlb_init_io_tlb_mem(), but the code it
replaces did not do the set_memory_decrypted()+memset(). Is this okay,
or should swiotlb_init_io_tlb_mem() take an additional argument to do
this conditionally?
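The alternative being suggested can be sketched as a minimal stand-alone model (plain C, not the kernel code: the struct is reduced to the fields the discussion touches, IO_TLB_SHIFT's value of 11 mirrors the kernel, and the set_memory_decrypted()+memset() step is reduced to a flag):

```c
#include <assert.h>
#include <stdbool.h>

#define IO_TLB_SHIFT 11  /* matches the kernel's slab-size shift */

/* Simplified stand-in for struct io_tlb_mem (slot array omitted). */
struct io_tlb_mem {
    unsigned long nslabs;
    unsigned long start;
    unsigned long end;
    unsigned long index;
    bool late_alloc;
    bool decrypted;  /* models whether set_memory_decrypted() ran */
};

/* One possible answer to the question above: an extra flag lets the
 * caller decide whether the decrypt+memset step happens. */
static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem,
                                    unsigned long start,
                                    unsigned long nslabs,
                                    bool late_alloc,
                                    bool decrypt)
{
    unsigned long bytes = nslabs << IO_TLB_SHIFT;

    mem->nslabs = nslabs;
    mem->start = start;
    mem->end = mem->start + bytes;
    mem->index = 0;
    mem->late_alloc = late_alloc;
    mem->decrypted = false;

    if (decrypt) {
        /* in the kernel: set_memory_decrypted() + memset() */
        mem->decrypted = true;
    }
}
```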
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Wed May 19 18:54:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 18:54:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130320.244186 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRLH-0000EU-DY; Wed, 19 May 2021 18:54:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130320.244186; Wed, 19 May 2021 18:54:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRLH-0000EN-AV; Wed, 19 May 2021 18:54:51 +0000
Received: by outflank-mailman (input) for mailman id 130320;
 Wed, 19 May 2021 18:54:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=94wl=KO=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1ljRLF-0000EH-RR
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 18:54:49 +0000
Received: from mail-pl1-x62f.google.com (unknown [2607:f8b0:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 68511ef0-e4ae-4c9a-94b7-ebcb2989b1fe;
 Wed, 19 May 2021 18:54:49 +0000 (UTC)
Received: by mail-pl1-x62f.google.com with SMTP id z4so5388091plg.8
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 11:54:49 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id k15sm142717pfi.0.2021.05.19.11.54.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 11:54:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 68511ef0-e4ae-4c9a-94b7-ebcb2989b1fe
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=dOHil3dku6us1u8ptJ6JIh3xpDeGpcOMbsXrWxsBKoE=;
        b=cnnxKuD9wRMCCtazVzj6toxbgXcwxrb9Ouc86iDvINHgDyupM9mykeJyyckQXE4Rnq
         hq06c/AhNLxAl+uKAOVIR8t75aOIsL0QVqGA5chzbeSt0vvpUDh6dyPJSN8iQr/8cSTT
         naFa7nEaysxmImWzMWxEPel5LUj87LeUzIBfbiDL5YPxen+HitAXivixEaC0dK5umZ9C
         /hRQaspTbmz5yXsVRHAtEaCoC9OrLSx5wv4NZtIDg4qqr8EWekVxY4epEeIxfEclRFCW
         3sXGad/MPi4oLZncAM2dP7ovJhLYIIt2FuIu6GCpyimyCgdKa0aICR4C+b71QYuc2obh
         TPYA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=dOHil3dku6us1u8ptJ6JIh3xpDeGpcOMbsXrWxsBKoE=;
        b=RPPxfoLkre4EuUa0DayBT7RkCi/6FPLoZSSehqSdWE2MDkoy98++cvqj45PGEc/bjn
         jM1/b+F1HQ6j02EIgW+bdr6ksN+VwLJVy8e1zHZMOsQNfzfXmLrnjQxC8sVS0V2AM9h5
         sZOP1fxBjGriIPuYpyNzmyOKVaS6RF5F4/8SQBvK6h75zyQ2MlM4YPkeDor3JLg6x/hU
         xNYvdWIrDT1J3u2h1BUGBC0QWyp5W6Tz+QjYD6tkS4Du6w1xwHlBYn0Ktzt7GBKWSX+/
         X+3G0UPz5+EQrOoCA37A1+bPiSd6W+HiRSmauHuvhdV02SuUfrOXzIRiq13UOcIXR9Zs
         MVzA==
X-Gm-Message-State: AOAM532UrvKNHeHFtxJ0xCQxp+w64gdemjeYZmIlJmhZcOY6o6gYcsWA
	WI+RzQGaDMccuXJfLOInHIs=
X-Google-Smtp-Source: ABdhPJxW6B5ZCMhQ2r/PwVD83uNx0fbqHe93UAEa6y6ZOrEtjXLovPk1PFllt4xgN3GhSn3N4TP6jw==
X-Received: by 2002:a17:902:ab89:b029:ee:dc90:7008 with SMTP id f9-20020a170902ab89b02900eedc907008mr1143149plr.30.1621450488169;
        Wed, 19 May 2021 11:54:48 -0700 (PDT)
Subject: Re: [PATCH v7 04/15] swiotlb: Add restricted DMA pool initialization
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-5-tientzu@chromium.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <ae050c36-8179-e465-84f0-e46c361bd71e@gmail.com>
Date: Wed, 19 May 2021 11:54:42 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518064215.2856977-5-tientzu@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 5/17/2021 11:42 PM, Claire Chang wrote:
> Add the initialization function to create restricted DMA pools from
> matching reserved-memory nodes.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  include/linux/device.h  |  4 +++
>  include/linux/swiotlb.h |  3 +-
>  kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 82 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/device.h b/include/linux/device.h
> index 38a2071cf776..4987608ea4ff 100644
> --- a/include/linux/device.h
> +++ b/include/linux/device.h
> @@ -416,6 +416,7 @@ struct dev_links_info {
>   * @dma_pools:	Dma pools (if dma'ble device).
>   * @dma_mem:	Internal for coherent mem override.
>   * @cma_area:	Contiguous memory area for dma allocations
> + * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
>   * @archdata:	For arch-specific additions.
>   * @of_node:	Associated device tree node.
>   * @fwnode:	Associated device node supplied by platform firmware.
> @@ -521,6 +522,9 @@ struct device {
>  #ifdef CONFIG_DMA_CMA
>  	struct cma *cma_area;		/* contiguous memory area for dma
>  					   allocations */
> +#endif
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	struct io_tlb_mem *dma_io_tlb_mem;
>  #endif
>  	/* arch specific additions */
>  	struct dev_archdata	archdata;
> diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> index 216854a5e513..03ad6e3b4056 100644
> --- a/include/linux/swiotlb.h
> +++ b/include/linux/swiotlb.h
> @@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
>   *		range check to see if the memory was in fact allocated by this
>   *		API.
>   * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
> - *		@end. This is command line adjustable via setup_io_tlb_npages.
> + *		@end. For default swiotlb, this is command line adjustable via
> + *		setup_io_tlb_npages.
>   * @used:	The number of used IO TLB block.
>   * @list:	The free list describing the number of free entries available
>   *		from each index.
> diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> index b849b01a446f..1d8eb4de0d01 100644
> --- a/kernel/dma/swiotlb.c
> +++ b/kernel/dma/swiotlb.c
> @@ -39,6 +39,13 @@
>  #ifdef CONFIG_DEBUG_FS
>  #include <linux/debugfs.h>
>  #endif
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +#include <linux/io.h>
> +#include <linux/of.h>
> +#include <linux/of_fdt.h>
> +#include <linux/of_reserved_mem.h>
> +#include <linux/slab.h>
> +#endif
>  
>  #include <asm/io.h>
>  #include <asm/dma.h>
> @@ -690,3 +697,72 @@ static int __init swiotlb_create_default_debugfs(void)
>  late_initcall(swiotlb_create_default_debugfs);
>  
>  #endif
> +
> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
> +				    struct device *dev)
> +{
> +	struct io_tlb_mem *mem = rmem->priv;
> +	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
> +
> +	if (dev->dma_io_tlb_mem)
> +		return 0;
> +
> +	/*
> +	 * Since multiple devices can share the same pool, the private data,
> +	 * io_tlb_mem struct, will be initialized by the first device attached
> +	 * to it.
> +	 */
> +	if (!mem) {
> +		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
> +		if (!mem)
> +			return -ENOMEM;
> +
> +		if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
> +			kfree(mem);
> +			return -EINVAL;

This probably deserves a warning here indicating that the reserved
area must be accessible within the linear mapping, as I would expect a
lot of people to trip over that.
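
A minimal sketch of what that could look like against the hunk above
(the message wording and use of dev_err() are just illustrative, not a
requirement):

```diff
 		if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+			dev_err(dev, "Restricted DMA pool must be accessible within the linear mapping.\n");
 			kfree(mem);
 			return -EINVAL;
 		}
```

That way the -EINVAL is immediately attributable to the offending
reserved-memory node rather than failing silently.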

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:00:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:00:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130327.244197 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRQH-0000uZ-1j; Wed, 19 May 2021 19:00:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130327.244197; Wed, 19 May 2021 19:00:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRQG-0000uS-Ul; Wed, 19 May 2021 19:00:00 +0000
Received: by outflank-mailman (input) for mailman id 130327;
 Wed, 19 May 2021 18:59:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=94wl=KO=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1ljRQF-0000uM-6E
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 18:59:59 +0000
Received: from mail-pf1-x42e.google.com (unknown [2607:f8b0:4864:20::42e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id efdee6e5-0491-4271-ad82-2ec470acd004;
 Wed, 19 May 2021 18:59:58 +0000 (UTC)
Received: by mail-pf1-x42e.google.com with SMTP id c17so10557313pfn.6
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 11:59:58 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id 63sm140020pfz.26.2021.05.19.11.59.51
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 11:59:57 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: efdee6e5-0491-4271-ad82-2ec470acd004
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=M073CmbM6OvFOLG+SWp0SNpXEbUxhFfHxxoda72NgXE=;
        b=Ekc2ScXY1SHvF4H29yTRhgjiTSWV6Cyne34MQrajkIT3p2pu6mu773Qlyr0kpC2NNU
         P+GJiyNV1K2vRtkjtAhIgI9VHVYmFkAxziAc3YITYD5dCFwhGNrQytvT7wWGVjrLLVt/
         Z+eGZOsPbQoTabmseviirRgNiL7jb93KoBbw+F8LkLIJVv78TKzBCCrYFTn6M7Wbv/bo
         0UQmBpP3G5FMrLaiP4lB4mvY0KeluMFddI6VD1W8D7dp8q/8Zp42g/wrvlGw3mxI6q03
         5p/katWWYvUn05FJWNltsTeqIMenps29TVH4Onz8E4y1A5fSnj1TT7Ka2b9bvOBYDjcG
         zYkg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=M073CmbM6OvFOLG+SWp0SNpXEbUxhFfHxxoda72NgXE=;
        b=GGGYWZkPipCdRAsa8T1ey0mfx3dRK8pzTRtRGbDmFrGSPfa4+o4YR1Vf31LsUVexO2
         CAWMLwbFbj4D6atO51T+oEhsIfa+U/2dg2slJfYdDEo3wl/5KXKc6UJIX+qRIXPXJL1z
         zKf+wNnny3/tzEZ84O7ueL5SdZi6Ari5B7+iEbZK6GkHCi9Zd6j91X1gdOJDGZYL7aiD
         MNC20uHuT5QnZL/ZIdtvpbp0E60IqhIFKobOkv4H5JwHta02eHJ0dHFkZccEm31idOuV
         3oPNgmGdZ/4vygcTD4KzCiER+CCeBjeU+VnvnYBcWFQOYwGJvC3WNvDIvDLHRAPNbK2k
         0FFQ==
X-Gm-Message-State: AOAM533MHL++7NRjGd4pGbuxF/BFUWho4or2HJWy2arH82acH7CEEl3G
	eREvdIB4CPn/DOvP4RK2yFs=
X-Google-Smtp-Source: ABdhPJyp0yRJO2tDNgGd4CfLo9HXihdjW7DvkJwzlUWA3fY93WK5PBHIJbH2WL1TmMBXCaStbeZwNQ==
X-Received: by 2002:a65:48c2:: with SMTP id o2mr575344pgs.376.1621450797716;
        Wed, 19 May 2021 11:59:57 -0700 (PDT)
Subject: Re: [PATCH v7 11/15] dma-direct: Add a new wrapper
 __dma_direct_free_pages()
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-12-tientzu@chromium.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <8c274da9-db90-cb42-c9b2-815ee0c6fca3@gmail.com>
Date: Wed, 19 May 2021 11:59:50 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518064215.2856977-12-tientzu@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 5/17/2021 11:42 PM, Claire Chang wrote:
> Add a new wrapper __dma_direct_free_pages() that will be useful later
> for swiotlb_free().
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:00:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:00:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130328.244208 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRQT-0001x2-AK; Wed, 19 May 2021 19:00:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130328.244208; Wed, 19 May 2021 19:00:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRQT-0001wv-6t; Wed, 19 May 2021 19:00:13 +0000
Received: by outflank-mailman (input) for mailman id 130328;
 Wed, 19 May 2021 19:00:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=94wl=KO=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1ljRQR-0001wR-TO
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 19:00:11 +0000
Received: from mail-pj1-x102c.google.com (unknown [2607:f8b0:4864:20::102c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 66643df8-d6fa-44e2-aa66-eae52c3b25ed;
 Wed, 19 May 2021 19:00:11 +0000 (UTC)
Received: by mail-pj1-x102c.google.com with SMTP id
 h20-20020a17090aa894b029015db8f3969eso3302189pjq.3
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 12:00:11 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id y26sm89076pge.94.2021.05.19.12.00.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 12:00:09 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 66643df8-d6fa-44e2-aa66-eae52c3b25ed
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=bliAyPcCTKuHaZYRIw5XXZxaaB4NHvg1JF88TNVLe3Q=;
        b=TKwRdnu4+Sp1YkQdeC//ZaZe5gFK1OBbc6fiWqiSSxtaemxo8tsS6cyyKonvkggi45
         AkmXBkDwPjivTxMkW/6u87n74jtYP0vz5yoiKDWAMEYfh+QJdqbXNMoLboJ95qsaB+Sz
         evhXDQC1zkb888kY45tuzx3cOkf8aRLE0tysepxM3+bqwCkk54R2Q027+/7CcS0ORj65
         6mx2ltgEPNZ+I2TPARzQB2NU/eRypDG4kJedWyyBabBiqBkk+XEYxHjO+wV8NZqkj5Xb
         Wiz3FWUYSWAt8nsw3SelNmNlkX4sXNF7RuifyB1BY+CyHdw4EnNC9bgPveVhRQQuluiO
         dhqg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=bliAyPcCTKuHaZYRIw5XXZxaaB4NHvg1JF88TNVLe3Q=;
        b=GQKkGZvEBXbvdgSMooAX2Hwp42W3pYrsRIOLoe8UjkVDp0kHkCjSkVRnm9Q4ea9BVc
         v+Wa0wrPpAJNZstYHLHtzEcOznYkB1rLqUCDG3Zn2LRkdvsTFH78u1O/4LXGBVW5Z3VY
         fvAA3JC+xXWQrFOkjWnlQNANZtDCKWtL3tyEOCw0FHp2/rUeVhtLNwI+9JDSf7/GqOsd
         oxSMHButnsUd7waDL6ecaUY/+uz+QE4GkRAoQpsQpU2jcD056PpkgxjdZybaWSWK4fwh
         BfxeQyY0ZvpVF3ip21YaWj3nLrxH8SwUk9RubeHRS4nMPs8HN1zbhelnm6hX4iphZ9bf
         F11g==
X-Gm-Message-State: AOAM5338/HVJuxurh2QvA8CdU1PHeixDHnmPMnD1UMj5opioiOI+otD9
	4C58o/I81bqWKJv86OBn3WM=
X-Google-Smtp-Source: ABdhPJzSFi5vywroBZ7tSNHRQs1V9GPTz+dOSZzWMkXvwkz7CoWyRh73gQHWXYp/m+DT8rff9721jw==
X-Received: by 2002:a17:903:10a:b029:f4:109c:dc08 with SMTP id y10-20020a170903010ab02900f4109cdc08mr1027246plc.10.1621450810600;
        Wed, 19 May 2021 12:00:10 -0700 (PDT)
Subject: Re: [PATCH v7 03/15] swiotlb: Add DMA_RESTRICTED_POOL
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-4-tientzu@chromium.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <d9975ce8-7ae9-2ece-a1c5-a16d0aed8143@gmail.com>
Date: Wed, 19 May 2021 12:00:03 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518064215.2856977-4-tientzu@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit



On 5/17/2021 11:42 PM, Claire Chang wrote:
> Add a new kconfig symbol, DMA_RESTRICTED_POOL, for the restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:08:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:08:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130340.244219 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRYC-0002xF-60; Wed, 19 May 2021 19:08:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130340.244219; Wed, 19 May 2021 19:08:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRYC-0002x8-2A; Wed, 19 May 2021 19:08:12 +0000
Received: by outflank-mailman (input) for mailman id 130340;
 Wed, 19 May 2021 19:08:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljRYA-0002wy-SM; Wed, 19 May 2021 19:08:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljRYA-0000KM-Jj; Wed, 19 May 2021 19:08:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljRYA-0001n3-9t; Wed, 19 May 2021 19:08:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljRYA-0005MW-9R; Wed, 19 May 2021 19:08:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ugt7I3eZTztw158auMAguwKs+Wy7KHYMCTzNGMjrQXI=; b=49QRRmyfd9ONgmASjPO6meuu1R
	CwDirzRHRU48TUEW1Ml9EUesZxReUA8nLWs+cNeqrrF5X/ijB7Zsvt14utZ6N4242n0SfXk+V2i0t
	qRE0ZQSVzk0x7viRcQoj0FObH6PpzVf1pYdEm5YjMKKC7JcTA/h0lkNtK7YCV6q6Hseo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162082-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162082: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8ac91e6c6033ebc12c5c1e4aa171b81a662bd70f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 19:08:10 +0000

flight 162082 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162082/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start    fail in 161996 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2  13 debian-fixup               fail pass in 161996

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                8ac91e6c6033ebc12c5c1e4aa171b81a662bd70f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  291 days
Failing since        152366  2020-08-01 20:49:34 Z  290 days  490 attempts
Testing same since   161996  2021-05-18 07:32:22 Z    1 days    3 attempts

------------------------------------------------------------
6063 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1645713 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:18:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:18:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130351.244233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRi3-0004TJ-CO; Wed, 19 May 2021 19:18:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130351.244233; Wed, 19 May 2021 19:18:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRi3-0004TC-9Q; Wed, 19 May 2021 19:18:23 +0000
Received: by outflank-mailman (input) for mailman id 130351;
 Wed, 19 May 2021 19:18:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=94wl=KO=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1ljRi1-0004T6-J8
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 19:18:21 +0000
Received: from mail-pf1-x42c.google.com (unknown [2607:f8b0:4864:20::42c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1f521c52-7293-4dfb-8161-2c5561bae6c0;
 Wed, 19 May 2021 19:18:20 +0000 (UTC)
Received: by mail-pf1-x42c.google.com with SMTP id f22so2221733pfn.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 12:18:20 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id d131sm147671pfd.176.2021.05.19.12.18.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 12:18:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1f521c52-7293-4dfb-8161-2c5561bae6c0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=qkzCD1nWkBURpBrpJXY23W3vLoZSck6AY/wcT/olSRk=;
        b=Jr32Uvht2cSrNILgfPTD5ViC6pT8b+8Rv+juxOSQTQwrJ1CAD0MBJHubcHjyOKsUbI
         JfDjb6IlvhZ8t3+ym7sOQ4a9U6dHEsOXxzesVMPv6UDrlo59kYq82tWRbs9w2uqU+yYo
         8oA/biTXJEGtIK7ZUbiBmZQYbJzsutIezjLCjW+be0py6uPVuz9vfS/rc828YcCt+zVs
         WRnXVlWSjrk7BMwlJdqMQwTvyoJQltUeu2a7CMQB207kX6rGqLkbNNdIKCNuEPYfxlj8
         c8CbGpi6xXQEr0VSmTVe/i1v9YaB96qj2/4NW8B/lTGCTjbb7h0/a34Yz4Z5jXjVRuz+
         oRyg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=qkzCD1nWkBURpBrpJXY23W3vLoZSck6AY/wcT/olSRk=;
        b=uQzavruJFqy504LXT8xUipP0ipIEPU0YBtZJZQYFAQgRADuL3+s14cmtWRL5y4VN9D
         VrFw2DkvjIuQqWgLu14nBoiAdTAus17g/WgJhPbb+L/eR9pbrryLd9vGanW0dd8zLEng
         sNcrhGfywME3AxCVeDlLwefOP77LPq0np/wQy5xbV4mVI9LzhFvjPp63OAqsa3n/uBvq
         SbRseGONikZ8U4q2D0LAgPfDN/zxbA4Cjhw5BkIKLAtBaPQCwlZHD2IyuF7VrVMoBYMB
         pnoAIub3jNO3quMtS2SFu1c2zZK+a67qjJZDgUu/GMuh6yNQA1ABib5DZjWIhV0b6pHc
         Fbnw==
X-Gm-Message-State: AOAM530eKRGin8RXwQHu+hTLVlqHHArEOG7/iW38uYdcykTe0QPxQspz
	7xcSh6rOT2vD1uiXfblKJgM=
X-Google-Smtp-Source: ABdhPJyun5zEd3qUDcE/GtFSWzJiED1dVWEJkkT2Lvnwoihmboc879WuDVAjjHLr8vtaHKK8YVFCpg==
X-Received: by 2002:a65:4286:: with SMTP id j6mr636207pgp.261.1621451900054;
        Wed, 19 May 2021 12:18:20 -0700 (PDT)
Subject: Re: [PATCH v7 05/15] swiotlb: Add a new get_io_tlb_mem getter
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-6-tientzu@chromium.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <52714d95-3562-97fc-0dee-761adfc364cb@gmail.com>
Date: Wed, 19 May 2021 12:18:13 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518064215.2856977-6-tientzu@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 5/17/2021 11:42 PM, Claire Chang wrote:
> Add a new getter, get_io_tlb_mem, to help select the io_tlb_mem struct.
> The restricted DMA pool is preferred if available.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:19:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:19:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130355.244243 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRjS-00055x-Mq; Wed, 19 May 2021 19:19:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130355.244243; Wed, 19 May 2021 19:19:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRjS-00055q-Jr; Wed, 19 May 2021 19:19:50 +0000
Received: by outflank-mailman (input) for mailman id 130355;
 Wed, 19 May 2021 19:19:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=94wl=KO=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1ljRjR-00055i-6l
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 19:19:49 +0000
Received: from mail-pf1-x434.google.com (unknown [2607:f8b0:4864:20::434])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb305a74-8fc9-4194-b0b6-8a9666d7c7e4;
 Wed, 19 May 2021 19:19:48 +0000 (UTC)
Received: by mail-pf1-x434.google.com with SMTP id g18so8927326pfr.2
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 12:19:48 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id y66sm128104pgb.14.2021.05.19.12.19.42
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 12:19:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb305a74-8fc9-4194-b0b6-8a9666d7c7e4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=ghVDOoMvorTJdaMzHQG/RIv5BmHhyL6sw4dTSHsJEnU=;
        b=rcLYOAKu9u/Pk2N2pIINoWg7Qql5CHIaTXPW1rmbbrFNRBJ17e6P88xZ/Yr3+K40Vg
         KFK9I+oMG/IgtitN6mi8iqL0UX94lczwdh7i0nBMb8jue1bBvD+G4ynJruy5PLRKgM8v
         188V9tFxrPE9B0B1aPKzXgftN+LtzT071WIlfZD+1U+hRwMChhgRPMvMzKYVh5Q7yJyY
         X233KhJbv3QXKhh18IR7fU0ZBHQkRzKr/brYbrWfvUGYpz9WfdE1CfhVQ0XA+pTrnR5x
         VWSQyHCExyqYP4xFwbo2E8+nGlIJeknoGezPntMeUW6YyiidoBVXS6cxaKILk/O3+EJK
         3+Jg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=ghVDOoMvorTJdaMzHQG/RIv5BmHhyL6sw4dTSHsJEnU=;
        b=MuwPnJjtfDqLBb/SL8hCYOxA+pS3OYY9pFjDGOwSD80IAyrlN1xW1+1H673hUCDMtU
         Bz4Gxq4EH05ERPRLIQQKy7BVQ7J4FMx6qKlywFennT2fkRXaij9Zrf5ge4b6ltkoqIx/
         NuIprLNjgpb8Lkw/7NReLqntoEEopAZ/4+neiEIu+UFDNnzc1kb/HO938ef3dM88JZHP
         fKUuCXzeNSGs2EhSLDPAvHEOnAEoQchoiBJbW98yWKvMpeHU/l4h3+J2Jwpyyc4t3MBk
         o/ZdNjj7KVxpag5njaZHVgqkIJQ4BgVXC/3hIGiyfnkIejPWrJZ/qfOmKIbmwsY5KNPT
         f6Nw==
X-Gm-Message-State: AOAM53112IG4pZzzu3gVSo2kZCsdoQxRbByd4786ImASa1vL4W3616Ev
	WdsmBzsoRXcsQaVyesIVwvo=
X-Google-Smtp-Source: ABdhPJwjgDsBVD45dTtfq/CuwPFV8Nn9xvyX5u2jGlIQAuONAFWU5ZnSLNw+gRQXo9RnVxq5dGzq7g==
X-Received: by 2002:a63:4e01:: with SMTP id c1mr645397pgb.265.1621451987803;
        Wed, 19 May 2021 12:19:47 -0700 (PDT)
Subject: Re: [PATCH v7 06/15] swiotlb: Update is_swiotlb_buffer to add a
 struct device argument
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-7-tientzu@chromium.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <e825f332-eabe-4a82-1528-8bc9d1e60625@gmail.com>
Date: Wed, 19 May 2021 12:19:41 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518064215.2856977-7-tientzu@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 5/17/2021 11:42 PM, Claire Chang wrote:
> Update is_swiotlb_buffer to add a struct device argument. This will be
> useful later to allow for restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:20:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:20:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130359.244254 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRjk-0006Gv-VI; Wed, 19 May 2021 19:20:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130359.244254; Wed, 19 May 2021 19:20:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRjk-0006Go-S9; Wed, 19 May 2021 19:20:08 +0000
Received: by outflank-mailman (input) for mailman id 130359;
 Wed, 19 May 2021 19:20:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=94wl=KO=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1ljRjk-0006GT-Ff
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 19:20:08 +0000
Received: from mail-pg1-x536.google.com (unknown [2607:f8b0:4864:20::536])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 47675ec2-7ff0-43bf-a5ea-bf3f7540bb42;
 Wed, 19 May 2021 19:20:07 +0000 (UTC)
Received: by mail-pg1-x536.google.com with SMTP id i5so10189339pgm.0
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 12:20:07 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id m5sm109361pgl.75.2021.05.19.12.20.01
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 12:20:06 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 47675ec2-7ff0-43bf-a5ea-bf3f7540bb42
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=uMIDYXiQCge+lCPg7jWJZVlhfPYt5aTIXixKF3xFdtw=;
        b=Yg91ASxZAaNzg/r8+0Ydre6wM45Ae3A1iPokHrBnYytUGLfpFnHdf3IFz8HESX0/2+
         g821YtdozGCEBqe5IWPqRoORjgSuvSUsnUJvkFHQmi1Z8EyDysdL0ajJwNlL5xiTuXsv
         wFEKThjSNrz9xGLrgbp/gM/UJUw8aKgxniG4Yn3LJ2CAh4MiM/s09DXVGVwgC1M7Lc1O
         mqFqDRB1Z5q6tLXZhXMkex5oVmrXUvI0/xcrkwhwr72xMCAoZJcgyVejktim5H+sPgp9
         TLPqhl56m5cKFfFNduokObt8ZVxO6KXBoFBtUzi8i8SumVh20RnjYnxygdSMJ5gi3qzx
         uvlg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=uMIDYXiQCge+lCPg7jWJZVlhfPYt5aTIXixKF3xFdtw=;
        b=pbMc26j1DkYxpnoZ5SrjJ30I+8JAj5UtHE6fVpwUhysETIeBR1KHqhj7lqcZZfTZVd
         KVGXSGOpF+CurbHwLmr0uWhXWPgHZAYg4vjGprOjuMD/IDZD0R5m2ep0RfVyw/wBw5kn
         9vWV4OVF2Ukn4RiVA5kXEDBwHnhYXUO0drxYxvLrYS1s2GVvsc9pUrKr5dINKj79q/Y7
         nFfUMDZbA4od0baqQh1+eJWSoifs+XE3hb8IyPyJPBiErU96Nd9f6+JpUFzWGzSeZJlv
         RPY9rSidFDhhg254JVRoGlvIhQynNn1w+TTjnpBePfzaxOAGK6loO7EJwtOpJJQuJ+L4
         jKBw==
X-Gm-Message-State: AOAM532PxrBx10jXjLk7tfcJN5ZuDSJay08O3z2uE7esmlj49QBODYWM
	Hl0EXgFL6VD7C+pNcaOvt6o=
X-Google-Smtp-Source: ABdhPJylNyq7yX8wc3qiI92uMNP4dk54r6bNs8/LZR+DzeCwuyO2pJ1p5Agdk04xE/qvY5oPdEYXRQ==
X-Received: by 2002:aa7:90d4:0:b029:28e:b912:acf with SMTP id k20-20020aa790d40000b029028eb9120acfmr661343pfk.43.1621452007052;
        Wed, 19 May 2021 12:20:07 -0700 (PDT)
Subject: Re: [PATCH v7 07/15] swiotlb: Update is_swiotlb_active to add a
 struct device argument
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-8-tientzu@chromium.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <6cae5ffa-f31b-ba08-c2cf-4a3dd76afb3b@gmail.com>
Date: Wed, 19 May 2021 12:20:00 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518064215.2856977-8-tientzu@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 5/17/2021 11:42 PM, Claire Chang wrote:
> Update is_swiotlb_active to add a struct device argument. This will be
> useful later to allow for restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:24:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:24:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130367.244266 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRno-000763-H4; Wed, 19 May 2021 19:24:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130367.244266; Wed, 19 May 2021 19:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRno-00075w-E5; Wed, 19 May 2021 19:24:20 +0000
Received: by outflank-mailman (input) for mailman id 130367;
 Wed, 19 May 2021 19:24:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=94wl=KO=gmail.com=f.fainelli@srs-us1.protection.inumbo.net>)
 id 1ljRnn-00075q-MJ
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 19:24:19 +0000
Received: from mail-pf1-x42a.google.com (unknown [2607:f8b0:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id aebbb963-545d-4c9c-bd6c-9cca701e0564;
 Wed, 19 May 2021 19:24:19 +0000 (UTC)
Received: by mail-pf1-x42a.google.com with SMTP id x18so6303601pfi.9
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 12:24:19 -0700 (PDT)
Received: from [10.230.29.202] ([192.19.223.252])
 by smtp.gmail.com with ESMTPSA id gj21sm4690007pjb.49.2021.05.19.12.24.13
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 12:24:17 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aebbb963-545d-4c9c-bd6c-9cca701e0564
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=47JQ73C0NJdLVL+El8Yu9LZib8r+sSTJ+nf9zBKe9o0=;
        b=CToMN5sVOeaSvxsFwh30mnX1cEfbujdvLOsJoVGXyNK5xIClhHeY9+5gjOfAEdXX3G
         eU6qVJByg68kQ/w6JlnavFgIe+nfjuKKhbAsZxJHLIoUmLJC+FNQXwfuXltAASilweQx
         t+08hp5u1PLu1TqToQLVPh129PwSHFiJcf7M+I2aRakLaRJ2reqCHrxjRIjxuLTiBnzU
         obYFCXVysXAW0hEOj/ZFRzKi1W/0ELgGhGTgBKGpo2iX5m/PP/BptzsrBt+DoXhRmi5/
         WDDfuoHSDbRwV59E2nqJbeAwXtBih5h7WjHhxwWGM7Qsk3IgYQx+mCus3GhZ5jZ2T7eK
         2tug==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=47JQ73C0NJdLVL+El8Yu9LZib8r+sSTJ+nf9zBKe9o0=;
        b=H/qjl1bxUdwi8E2LVSV6U6h/r0SGr/0fosgKHoC/FWwareX/7fs9bS3PTl/Wr5zW7q
         rCD5WjjGd9ygjdAfMWBdDW3H5XrLfz48Mak5hQ7AjYMPJ91IaqfLm6qgqOTrNILK6A4e
         +xP/d3cE8i7Qt5fn6zZxnLt0zTc7PIaVEcClMFe2T/sKrIu8TbKdW+UvhjSUVGWZRQwN
         muRxeNJyKCeLtPmxh/ErAvYoL+KaPRI5FVwbQ2Uhu1p7LlH2ET55b4dKXeMRTDEfy5ZJ
         bEqJgGnjboDCukvRst/kNyWCEH0uHipYpem0bdVaZFjCtvcD8vkv3n7fL/QcFGCAEayU
         flcA==
X-Gm-Message-State: AOAM531yRfm+4MkzTN+uHNsiRmzEaSW1/QwL+xIvLvGF0SItqrDgnied
	NhLcifMKsnBMO50suKk7NAI=
X-Google-Smtp-Source: ABdhPJzJyhu29XiaIUqubYuVceyIJwVsMCRuIXu8+2duk9EuCth6A3fkL6TVyZT/YsIkYQB5Otc01Q==
X-Received: by 2002:a62:1d52:0:b029:2dd:ee:1439 with SMTP id d79-20020a621d520000b02902dd00ee1439mr573310pfd.57.1621452258195;
        Wed, 19 May 2021 12:24:18 -0700 (PDT)
Subject: Re: [PATCH v7 02/15] swiotlb: Refactor swiotlb_create_debugfs
To: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Christoph Hellwig <hch@lst.de>,
 Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-3-tientzu@chromium.org>
From: Florian Fainelli <f.fainelli@gmail.com>
Message-ID: <d4a3ee6d-55be-1a60-9092-66b444dc9dda@gmail.com>
Date: Wed, 19 May 2021 12:24:11 -0700
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Firefox/78.0 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518064215.2856977-3-tientzu@chromium.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit



On 5/17/2021 11:42 PM, Claire Chang wrote:
> Split the debugfs creation to make the code reusable for supporting
> different bounce buffer pools, e.g. restricted DMA pool.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>

Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
-- 
Florian


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:27:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:27:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130373.244277 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRqp-0007jv-1W; Wed, 19 May 2021 19:27:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130373.244277; Wed, 19 May 2021 19:27:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljRqo-0007jo-Ti; Wed, 19 May 2021 19:27:26 +0000
Received: by outflank-mailman (input) for mailman id 130373;
 Wed, 19 May 2021 19:27:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RCSE=KO=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1ljRqn-0007jg-Mf
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 19:27:26 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 192efbd5-9497-479b-acf9-47e2513f4bed;
 Wed, 19 May 2021 19:27:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 192efbd5-9497-479b-acf9-47e2513f4bed
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621452444;
  h=to:cc:references:from:subject:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=DvekPUmQwPfrv9nUusMjykgwzqhBcwFkWiiRwly1mNU=;
  b=AlcjPNyIcKmkqeGhuYaZRG6+pOLuj8vSetGatiuqqGhy/gCQ0tT50yWe
   h75rGQq2ko2vfF+TluH2D1Fym6xyN8cfLoobP9AqesVNB0nctCcU1Hxmb
   GZXmjhVBb0JW7AjJ+9t9ll8RoldojjNwsUk0Jec14D4Qcg86nflRMxJgn
   M=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: XrjIJCONLPUlen8Uyn5Z24sv1FpRPr8NK4lzNvBlhtNAOxpcpZVdN6rCFXug1y/Qhpk5NsgQBL
 xXK5umPpMfamT/0nysMQJTqPj/6Xqivzfe5ZmngXwTFEQ78Zq3AbFIWfB/VRVT4T843zA6zHPD
 zuKt5KXxNnGbSXgHwL9NrDx2OngCWUpu8SbS9N9E/iMOKsT0GaIqcTiOM6ycboxF4qaWW8/NPr
 23jds3o+502ATO2+8qNXzJjFUDLrcStTXz5kNEbN91sUG+5kUIe/h3ID9itWiJjGUtWZ+ghYX1
 kmE=
X-SBRS: 5.1
X-MesageID: 44271284
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,313,1613451600"; 
   d="scan'208";a="44271284"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FTD1HBdIfDQ+j+Mnc+KFmcIudVdYFlhssokU1K6IRsL8mE62y+QTXu7xe20xSXSsInSJDH9+gQYfgPZYM8kg8FD1RrqFnT67ncc2kACIJ3wMghhCIw8b95uB3G3Cu923QfxjH4NOgRV1pQwTWr08792AKa10o9QZrnMOBXRITAO8+Gtfq6380/kIny1ZhQJE91BAD0wa0feb7/EVKLh4HQc9W8Tn0kUAvDX4SiUbor/d/qpXULRP+Ca5E/wsBIJMRjXUGZIQtprG4PoihbiKiP4VuNrnKLeSA1qDj/jqn93K2d64q364acBpfKdT/beRck63nsF+mG3m4fw9hoFaqQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DvekPUmQwPfrv9nUusMjykgwzqhBcwFkWiiRwly1mNU=;
 b=J1OXckm9JRATfeGvhDM2hbBCdLKD3GoQoFUFxSVvh+sgts97UfxGkT1s4vPy87VeQVrjMzdxCQxm2v/raH3R+mTqWK1BW8NQgnukEtRG8ebdGjLfiH8CWmMR6Q5nUnb8hCeM2gMx94t1dEPOLCTJ2lp/kCrlijmFJKJIxLeLLEp2Ode52GH+E0K61Wbis2oczmRlecbmFiFsbIMRuu4a7CLDQf2p8V05SJNNsEoZyYfgKX8O12mupSAzw7VL4Xq+XiwMEVl0z9IU/AYWYPZonkfzPQDKAl66T3jgP2IRpGiMRSY0OvKTytIgGSW5ldjXrNVYcszwaygOcdW0iHy1Uw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DvekPUmQwPfrv9nUusMjykgwzqhBcwFkWiiRwly1mNU=;
 b=cD9kLPqAAJuOFl3UD/qZmJ/ERdOXoM+c7pO1e/hYR88dEf9WQxZgS86OyTYyVAGXql1oW64L+n78IqDjVnHvKGqRnpaNtRAjHuk/9IX9fdjv68wz8SwtE0I2UyIrv2O2n2R9+xGxAja4GS52lx0SYoTxET7G461BEhc8bkM6ISE=
To: Dario Faggioli <dfaggioli@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
	<roger.pau@citrix.com>
CC: <xen-devel@lists.xenproject.org>, Doug Goldstein <cardoe@cardoe.com>
References: <162135593827.20014.14959979363028895972.stgit@Wayrath>
 <162135616513.20014.6303562342690753615.stgit@Wayrath>
 <YKSv/BGxuy+OCn3t@Air-de-Roger>
 <b596d5ea2e96be5c6d627e14b87beb51ba4a094e.camel@suse.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: [PATCH v2 2/2] automation: fix dependencies on openSUSE
 Tumbleweed containers
Message-ID: <26642b6b-c988-406e-040e-905bdeae1b2f@citrix.com>
Date: Wed, 19 May 2021 20:25:43 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <b596d5ea2e96be5c6d627e14b87beb51ba4a094e.camel@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable
Content-Language: en-GB
X-ClientProxiedBy: LO2P265CA0379.GBRP265.PROD.OUTLOOK.COM
 (2603:10a6:600:a3::31) To BYAPR03MB3623.namprd03.prod.outlook.com
 (2603:10b6:a02:aa::12)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: b4301a55-5795-44e4-fed6-08d91afbe8ed
X-MS-TrafficTypeDiagnostic: BY5PR03MB5000:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BY5PR03MB50004D6A6A8064F571371986BA2B9@BY5PR03MB5000.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: b4301a55-5795-44e4-fed6-08d91afbe8ed
X-MS-Exchange-CrossTenant-AuthSource: BYAPR03MB3623.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 19 May 2021 19:25:49.9708
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /4BFDPZs4zktMWWN1WU9c5yKkqawWq+X5PNPwtxA/Qh35zohA4PbYyiX57X5kYQ+qqC0fFdVSdQKfLW0NHkY2ONKvtW73Tp0kMEBd3s2RnY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BY5PR03MB5000
X-OriginatorOrg: citrix.com

On 19/05/2021 18:52, Dario Faggioli wrote:
> On Wed, 2021-05-19 at 08:28 +0200, Roger Pau Monné wrote:
>> On Tue, May 18, 2021 at 06:42:45PM +0200, Dario Faggioli wrote:
>>> Fix the build inside our openSUSE Tumbleweed container by adding
>>> libzstd headers. While there, remove the explicit dependency
>>> for python and python3 as the respective -devel packages will pull
>>> them in anyway.
>>>
>>> Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
>> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
>>
> Thanks!
>
>> Can you try to push an updated container to the registry?
>>
> Yeah, I tried, but I'm getting this:
>
> STEP 8: COMMIT registry.gitlab.com/xen-project/xen/suse:opensuse-tumbleweed
> --> 940c6edbff9
> 940c6edbff965135a25bc20f0e2a59cf6062b9e8bc3516858828cbb7bba92d8f
> Getting image source signatures
> Copying blob acc28ee93e9b [--------------------------------------] 8.0b / 3.5KiB
> Copying blob 89c6eef91991 [--------------------------------------] 8.0b / 57.0MiB
> Copying blob 20dabc80d591 [--------------------------------------] 8.0b / 90.6MiB
> Copying blob 5ea007576ed8 [--------------------------------------] 8.0b / 2.0GiB
> Error: error copying image to the remote destination: Error writing blob: Error initiating layer upload to /v2/xen-project/xen/suse/blobs/uploads/ in registry.gitlab.com: errors:
> denied: requested access to the resource is denied
> unauthorized: authentication required
>
> make: *** [Makefile:15: suse/opensuse-tumbleweed] Error 125
>
> So, either I'm doing something wrong, or I was just misremembering and
> I don't have the permission to do that... Can we check if I do?

Hmm.

I built the container locally, which reused some of the layers you
pushed, and it all pushed successfully for me.

I've committed this series so xen.git matches reality. Let's see how the
updated container fares...

~Andrew



From xen-devel-bounces@lists.xenproject.org Wed May 19 19:46:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:46:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130387.244287 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljS90-0001f0-Nz; Wed, 19 May 2021 19:46:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130387.244287; Wed, 19 May 2021 19:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljS90-0001et-L2; Wed, 19 May 2021 19:46:14 +0000
Received: by outflank-mailman (input) for mailman id 130387;
 Wed, 19 May 2021 19:46:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljS8z-0001en-Go
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 19:46:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljS8z-0000xn-Bn; Wed, 19 May 2021 19:46:13 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljS8z-0001xs-56; Wed, 19 May 2021 19:46:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=pz9DbyOAp/jb08WHvI64yligNviNfZwfWLrleamqCI8=; b=ejrVrO7ryiu0CRG5Qy0goa/Af4
	mmoAa7XHPRhq9Szi/Tx3D5OOOd7/Qw3LA2tLvR1gkb0mNDTQVi1A0qjhgjF30xlWF2VM0lnMn5jsO
	xB/epcezT5+hq+Bwg48m/5rlCZwP5axU3ZsNpbkmuISMRWGDi3KcvmSKHXjC4jJwjqg4=;
Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <2f4eb08e-261b-70c4-bcbc-e08db36a50a9@xen.org>
Date: Wed, 19 May 2021 20:46:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/05/2021 04:16, Penny Zheng wrote:
> Hi Julien

Hi Penny,

> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 5:46 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
>>
>>
>>
>> On 18/05/2021 06:21, Penny Zheng wrote:
>>> In order to differentiate pages of static memory, from those allocated
>>> from heap, this patch introduces a new page flag PGC_reserved to tell.
>>>
>>> New struct reserved in struct page_info is to describe reserved page
>>> info, that is, which specific domain this page is reserved to.
>>> Helper page_get_reserved_owner and page_set_reserved_owner are
>>> designated to get/set reserved page's owner.
>>>
>>> Struct domain is enlarged to more than PAGE_SIZE, due to
>>> newly-imported struct reserved in struct page_info.
>>
>> struct domain may embed a pointer to a struct page_info but never directly
>> embed the structure. So can you clarify what you mean?
>>
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>>    xen/include/asm-arm/mm.h | 16 +++++++++++++++-
>>>    1 file changed, 15 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>> index
>>> 0b7de3102e..d8922fd5db 100644
>>> --- a/xen/include/asm-arm/mm.h
>>> +++ b/xen/include/asm-arm/mm.h
>>> @@ -88,7 +88,15 @@ struct page_info
>>>             */
>>>            u32 tlbflush_timestamp;
>>>        };
>>> -    u64 pad;
>>> +
>>> +    /* Page is reserved. */
>>> +    struct {
>>> +        /*
>>> +         * Reserved Owner of this page,
>>> +         * if this page is reserved to a specific domain.
>>> +         */
>>> +        struct domain *domain;
>>> +    } reserved;
>>
>> The space in page_info is quite tight, so I would like to avoid introducing new
>> fields unless we can't get away from it.
>>
>> In this case, it is not clear why we need to differentiate the "Owner"
>> vs the "Reserved Owner". It might be clearer if this change is folded in the
>> first user of the field.
>>
>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>
> 
> Yeah, I may delete this change. I introduced it while considering the
> functionality of rebooting a domain on static allocation.
> 
> A little more discussion on rebooting a domain on static allocation:
> the major use cases for domains on static allocation are systems with a
> totally pre-defined, static behavior all the time. There is no domain
> allocation at runtime, but domain rebooting still exists.

Hmmm... With this series it is still possible to allocate memory at 
runtime outside of the static allocation (see my comment on the design 
document [1]). So is it meant to be complete?

> 
> And when rebooting a domain on static allocation, all these reserved
> pages cannot go back to the heap when freed. So I am considering using
> one global `struct page_info*[DOMID]` value to store them.
>
> As Jan suggested, when a domain gets rebooted, its struct domain will not
> exist anymore. But I think the DOMID info could last.

You would need to make sure the domid is reserved. But I am not 
entirely convinced this is necessary here.

When recreating the domain, you need a way to know its configuration. 
Most likely this will come from the Device-Tree. At which point, you 
can also find the static region from there.

Cheers,

[1] <7ab73cb0-39d5-f8bf-660f-b3d77f3247bd@xen.org>

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 19:49:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 19:49:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130392.244299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljSCV-0002Iz-9j; Wed, 19 May 2021 19:49:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130392.244299; Wed, 19 May 2021 19:49:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljSCV-0002Is-5F; Wed, 19 May 2021 19:49:51 +0000
Received: by outflank-mailman (input) for mailman id 130392;
 Wed, 19 May 2021 19:49:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljSCU-0002Im-9l
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 19:49:50 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljSCU-00010Y-4Q; Wed, 19 May 2021 19:49:50 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljSCT-00028C-UW; Wed, 19 May 2021 19:49:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=GHeoP0Y3rWLOwI+tggXR5A1dWN2Rrxh0BUjn04x36zc=; b=k+xVproAt6YE0+UsRGvdJ+NB12
	5FzIAVhqzY6OcMhN3hQlr1rI7h9HZRwPIrauV4nQwV60U2F8f7fJBT92vV1nEprSSd+EgK86jOa7U
	Hx4InE75E1AkIV3e+6ZpGhyy0WhUTo9Pr0y/fJoulVmgYChB5x9zyQ1mRsOkRXWY70rU=;
Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
To: Jan Beulich <jbeulich@suse.com>, Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <9e22e4de-0d09-5195-bd8f-2ca326264807@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <765312c9-3b71-eb3a-5c8d-2ba0aa019595@xen.org>
Date: Wed, 19 May 2021 20:49:47 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <9e22e4de-0d09-5195-bd8f-2ca326264807@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 19/05/2021 10:49, Jan Beulich wrote:
> On 19.05.2021 05:16, Penny Zheng wrote:
>>> From: Julien Grall <julien@xen.org>
>>> Sent: Tuesday, May 18, 2021 5:46 PM
>>>
>>> On 18/05/2021 06:21, Penny Zheng wrote:
>>>> --- a/xen/include/asm-arm/mm.h
>>>> +++ b/xen/include/asm-arm/mm.h
>>>> @@ -88,7 +88,15 @@ struct page_info
>>>>             */
>>>>            u32 tlbflush_timestamp;
>>>>        };
>>>> -    u64 pad;
>>>> +
>>>> +    /* Page is reserved. */
>>>> +    struct {
>>>> +        /*
>>>> +         * Reserved Owner of this page,
>>>> +         * if this page is reserved to a specific domain.
>>>> +         */
>>>> +        struct domain *domain;
>>>> +    } reserved;
>>>
>>> The space in page_info is quite tight, so I would like to avoid introducing new
>>> fields unless we can't get away from it.
>>>
>>> In this case, it is not clear why we need to differentiate the "Owner"
>>> vs the "Reserved Owner". It might be clearer if this change is folded in the
>>> first user of the field.
>>>
>>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>>
>>
>> Yeah, I may delete this change. I introduced it while considering the
>> functionality of rebooting a domain on static allocation.
>>
>> A little more discussion on rebooting a domain on static allocation:
>> the major use cases for domains on static allocation are systems with a
>> totally pre-defined, static behavior all the time. There is no domain
>> allocation at runtime, but domain rebooting still exists.
>>
>> And when rebooting a domain on static allocation, all these reserved
>> pages cannot go back to the heap when freeing them. So I am considering
>> using one global `struct page_info *[DOMID]` value to store them.
> 
> Except such a separate array will consume quite a bit of space for
> no real gain: v.free has 32 bits of padding space right now on
> Arm64, so there's room for a domid_t there already. Even on Arm32
> this could be arranged for, as I doubt "order" needs to be 32 bits
> wide.

I agree we shouldn't need 32 bits to cover the "order". Although, I would 
like to see any user reading the field before introducing it.
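
As an illustrative sketch only (the field names and widths below are 
assumptions, not Xen's actual page_info definition), Jan's idea of 
fitting a domid_t into the existing padding of the "free" case could 
look like:

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t domid_t;

/*
 * Hypothetical layout: narrowing "order" to 16 bits leaves room in the
 * current padding of the "free" case for a domid_t, so no new
 * "reserved" struct (and no extra space) is needed. Field names are
 * illustrative only.
 */
struct free_info {
    uint16_t order;          /* 16 bits is ample for an allocation order. */
    domid_t  reserved_domid; /* Owner of a PGC_reserved page.             */
    uint32_t need_tlbflush;  /* Stand-in for the remaining existing bits. */
};
```

The point is that the reserved owner then costs no additional space in 
the union.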

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 20:02:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 20:02:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130400.244313 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljSOG-0004YV-Dn; Wed, 19 May 2021 20:02:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130400.244313; Wed, 19 May 2021 20:02:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljSOG-0004YO-Ac; Wed, 19 May 2021 20:02:00 +0000
Received: by outflank-mailman (input) for mailman id 130400;
 Wed, 19 May 2021 20:01:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljSOE-0004YI-Tm
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 20:01:58 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljSOD-0001Kk-Ra; Wed, 19 May 2021 20:01:57 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljSOD-00035k-LQ; Wed, 19 May 2021 20:01:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=Y3j6reANSiIScwr3eRjvR6Yhf3VR7aqJqtMfKYZyhKQ=; b=DFTngzctTIdvSuJvPCPRoBMMsn
	KSZztBhRaNZD/mJV9ocSLIUMKXT7cijxNYf1Re1UIl7p4PbyjBZa/ZhY03T0tIMYpdZL+7G21eINO
	y15c/GbE5nLmQ4i+kYCH3+TL2fPZY09dOfYrBJz7L2rFJwVy+PMC5ITF1WkAbt+1+dYA=;
Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
To: Penny Zheng <Penny.Zheng@arm.com>, Jan Beulich <jbeulich@suse.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
 <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <75275b2f-9de3-944a-d55c-a62bbbf1bb8c@xen.org>
 <VE1PR08MB5215CB5102529F32DC695CFDF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <953418db-5484-bb1e-e0dc-96798a284479@xen.org>
Date: Wed, 19 May 2021 21:01:55 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB5215CB5102529F32DC695CFDF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 19/05/2021 08:52, Penny Zheng wrote:
> Hi Julien

Hi Penny,

> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 8:13 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; Jan Beulich <jbeulich@suse.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>>
>> Hi Penny,
>>
>> On 18/05/2021 09:57, Penny Zheng wrote:
>>>> -----Original Message-----
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: Tuesday, May 18, 2021 3:35 PM
>>>> To: Penny Zheng <Penny.Zheng@arm.com>
>>>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>>>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-
>> devel@lists.xenproject.org;
>>>> sstabellini@kernel.org; julien@xen.org
>>>> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>>>>
>>>> On 18.05.2021 07:21, Penny Zheng wrote:
>>>>> --- a/xen/common/page_alloc.c
>>>>> +++ b/xen/common/page_alloc.c
>>>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>>>>        {
>>>>>            ASSERT(page_get_owner(&pg[i]) == NULL);
>>>>>            page_set_owner(&pg[i], d);
>>>>> +        /* use page_set_reserved_owner to set its reserved domain owner.
>>>> */
>>>>> +        if ( (pg[i].count_info & PGC_reserved) )
>>>>> +            page_set_reserved_owner(&pg[i], d);
>>>>
>>>> Now this is puzzling: What's the point of setting two owner fields to
>>>> the same value? I also don't recall you having introduced
>>>> page_set_reserved_owner() for x86, so how is this going to build there?
>>>>
>>>
>>> Thanks for pointing out that it will fail on x86.
>>> As for the same value: sure, I shall change it to a domid_t domid to
>>> record its reserved owner. The domid alone is enough to differentiate.
>>> And even when a domain gets rebooted, the struct domain may be
>>> destroyed, but the domid will stay the same.
>>> The major use cases for domains on static allocation assume the whole
>>> system is static, with no runtime creation.
>>
>> One may want to have static memory yet not care about the domid. So I
>> am not in favour of restricting the domid unless there is no other way.
>>
> 
> Is the use case you bring up here the static memory pool?

No. The use case I am talking about is a user who wants to give a 
specific memory region to the guest but doesn't care about which domid 
was allocated to the guest.

> 
> Right now, the use cases are mostly restricted to static systems.
> If we bring in runtime allocation, `xl` here, it will add a lot more complexity.
> But if the system has static behavior, the domid is also static.

I read this as the admin would have to specify the domain ID in the 
Device-Tree. Is that what you meant?

If so, then I don't see why we should mandate that. I would mind less if 
by static you mean the domid will be allocated by Xen and then not 
changed across reboot.

> 
> On rebooting a domain from the static memory pool, it brings up more discussion,
> like: do we intend to give the memory back to the static memory pool when rebooting?
> If so, the RAM could be allocated from a different place compared with the previous one.

You should have all the information in the Device-Tree to re-assign the 
correct regions. So why would it be allocated from a different place?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 20:10:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 20:10:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130407.244327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljSW5-0005kI-AL; Wed, 19 May 2021 20:10:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130407.244327; Wed, 19 May 2021 20:10:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljSW5-0005kB-5x; Wed, 19 May 2021 20:10:05 +0000
Received: by outflank-mailman (input) for mailman id 130407;
 Wed, 19 May 2021 20:10:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljSW3-0005Wk-TX
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 20:10:03 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljSW3-0001Sr-Nh; Wed, 19 May 2021 20:10:03 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljSW3-0003bs-HQ; Wed, 19 May 2021 20:10:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=jeeplVUrpI5vSqsEwgVxlFss+v6TihWakEDQEIh+8so=; b=t+XVoox0cV8fBC6k8eXBh9F5m1
	MkpzQVV/0yO1sUNA5YZdibjwucInneCAa6SRWGRPZDV2m/yCcGreM+8q1jKjLq8bWiiISpzOjbACM
	vkAGL4QMHGYwFJmw9+TxJjTkDlOgl/zuJIDlbR3TINHqntezX4zxYWNLErmvg1ZvL/OI=;
Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-11-penny.zheng@arm.com>
 <7e9bacde-8a1c-c9f8-a06d-2f39f2192315@xen.org>
 <VE1PR08MB5215B4D187DFE8AE20DF2B95F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <72a374ca-4d75-70b4-3ee9-ad1dbdefa2d6@xen.org>
Date: Wed, 19 May 2021 21:10:01 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB5215B4D187DFE8AE20DF2B95F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 19/05/2021 08:27, Penny Zheng wrote:
> Hi Julien
> 
>> -----Original Message-----
>> From: Julien Grall <julien@xen.org>
>> Sent: Tuesday, May 18, 2021 8:06 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
>>
>> Hi Penny,
>>
>> On 18/05/2021 06:21, Penny Zheng wrote:
>>> This commit introduces allocate_static_memory to allocate static
>>> memory as guest RAM for domain on Static Allocation.
>>>
>>> It uses alloc_domstatic_pages to allocate pre-defined static memory
>>> banks for this domain, and uses guest_physmap_add_page to set up P2M
>>> table, guest starting at fixed GUEST_RAM0_BASE, GUEST_RAM1_BASE.
>>>
>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>> ---
>>>    xen/arch/arm/domain_build.c | 157
>> +++++++++++++++++++++++++++++++++++-
>>>    1 file changed, 155 insertions(+), 2 deletions(-)
>>>
>>> diff --git a/xen/arch/arm/domain_build.c b/xen/arch/arm/domain_build.c
>>> index 30b55588b7..9f662313ad 100644
>>> --- a/xen/arch/arm/domain_build.c
>>> +++ b/xen/arch/arm/domain_build.c
>>> @@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct
>> domain *d,
>>>        return true;
>>>    }
>>>
>>> +/*
>>> + * #ram_index and #ram_addr refer to the index and starting address
>>> + * of the guest memory bank stored in kinfo->mem.
>>> + * Static memory at #smfn of #tot_size shall be mapped at #sgfn, and
>>> + * #sgfn will be the next guest address to map when returning.
>>> + */
>>> +static bool __init allocate_static_bank_memory(struct domain *d,
>>> +                                               struct kernel_info *kinfo,
>>> +                                               int ram_index,
>>
>> Please use unsigned.
>>
>>> +                                               paddr_t ram_addr,
>>> +                                               gfn_t* sgfn,
>>
>> I am confused, what is the difference between ram_addr and sgfn?
>>
> 
> We need to construct kinfo->mem (the guest RAM banks) here, and we are
> indexing into static_mem (the physical RAM banks). Multiple physical RAM
> banks make up one guest RAM bank (like GUEST_RAM0).
> 
> ram_addr here will either be GUEST_RAM0_BASE or GUEST_RAM1_BASE, for
> now. I kind of struggled with how to name it. And maybe it shall not be
> a parameter here.
> 
> Maybe I should switch..case.. on ram_index: if it is 0, it is
> GUEST_RAM0_BASE, and if it is 1, it is GUEST_RAM1_BASE.

You only need to set kinfo->mem.bank[ram_index].start once. This is when 
you know the bank is first used.

AFAICT, this function will map the memory for a range starting at 
``sgfn``. It doesn't feel like this belongs in the function.

The same remark is valid for kinfo->mem.nr_banks.
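
The power-of-two chunking performed by the loop quoted below can be 
exercised standalone. Note that get_allocation_size() is stubbed here as 
"the largest order that fits", which is an assumption for illustration 
(the real Xen helper also caps the order), and tot_size is assumed to be 
page-aligned:

```c
#include <assert.h>
#include <stdint.h>

#define PAGE_SHIFT 12

typedef uint64_t paddr_t;

/*
 * Stub: largest order o such that a block of 2^(PAGE_SHIFT + o) bytes
 * still fits in the remaining size. Illustration only.
 */
static unsigned int get_allocation_size(paddr_t size)
{
    unsigned int order = 0;

    while ( (paddr_t)1 << (PAGE_SHIFT + order + 1) <= size )
        order++;

    return order;
}

/* Count how many guest_physmap_add_page() calls the loop would issue. */
static unsigned int count_chunks(paddr_t tot_size)
{
    unsigned int chunks = 0;

    while ( tot_size > 0 )
    {
        unsigned int order = get_allocation_size(tot_size);

        tot_size -= (paddr_t)1 << (PAGE_SHIFT + order);
        chunks++;
    }

    return chunks;
}
```

For example, a 21 MiB bank splits into a 16 MiB, a 4 MiB and a 1 MiB 
chunk, i.e. three mapping calls.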

>>> +                                               mfn_t smfn,
>>> +                                               paddr_t tot_size) {
>>> +    int res;
>>> +    struct membank *bank;
>>> +    paddr_t _size = tot_size;
>>> +
>>> +    bank = &kinfo->mem.bank[ram_index];
>>> +    bank->start = ram_addr;
>>> +    bank->size = bank->size + tot_size;
>>> +
>>> +    while ( tot_size > 0 )
>>> +    {
>>> +        unsigned int order = get_allocation_size(tot_size);
>>> +
>>> +        res = guest_physmap_add_page(d, *sgfn, smfn, order);
>>> +        if ( res )
>>> +        {
>>> +            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
>>> +            return false;
>>> +        }
>>> +
>>> +        *sgfn = gfn_add(*sgfn, 1UL << order);
>>> +        smfn = mfn_add(smfn, 1UL << order);
>>> +        tot_size -= (1ULL << (PAGE_SHIFT + order));
>>> +    }
>>> +
>>> +    kinfo->mem.nr_banks = ram_index + 1;
>>> +    kinfo->unassigned_mem -= _size;
>>> +
>>> +    return true;
>>> +}
>>> +
>>>    static void __init allocate_memory(struct domain *d, struct kernel_info
>> *kinfo)
>>>    {
>>>        unsigned int i;
>>> @@ -480,6 +524,116 @@ fail:
>>>              (unsigned long)kinfo->unassigned_mem >> 10);
>>>    }
>>>
>>> +/* Allocate memory from static memory as RAM for one specific domain
>>> +d. */ static void __init allocate_static_memory(struct domain *d,
>>> +                                            struct kernel_info
>>> +*kinfo) {
>>> +    int nr_banks, _banks = 0;
>>
>> AFAICT, _banks is the index in the array. I think it would be clearer
>> if it is called 'bank' or 'idx'.
>>
> 
> Sure, I'll use 'bank' here.
> 
>>> +    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
>>> +    paddr_t bank_start, bank_size;
>>> +    gfn_t sgfn;
>>> +    mfn_t smfn;
>>> +
>>> +    kinfo->mem.nr_banks = 0;
>>> +    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
>>> +    nr_banks = d->arch.static_mem.nr_banks;
>>> +    ASSERT(nr_banks >= 0);
>>> +
>>> +    if ( kinfo->unassigned_mem <= 0 )
>>> +        goto fail;
>>> +
>>> +    while ( _banks < nr_banks )
>>> +    {
>>> +        bank_start = d->arch.static_mem.bank[_banks].start;
>>> +        smfn = maddr_to_mfn(bank_start);
>>> +        bank_size = d->arch.static_mem.bank[_banks].size;
>>
>> The variable names are slightly confusing because they don't tell
>> whether this is physical or guest RAM. You might want to consider
>> prefixing them with p (resp. g) for physical (resp. guest) RAM.
> 
> Sure, I'll rename to make it clearer.
> 
>>
>>> +
>>> +        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT, bank_start,
>> 0) )
>>> +        {
>>> +            printk(XENLOG_ERR
>>> +                    "%pd: cannot allocate static memory"
>>> +                    "(0x%"PRIx64" - 0x%"PRIx64")",
>>
>> bank_start and bank_size are both paddr_t. So this should be PRIpaddr.
> 
> Sure, I'll change
> 
>>
>>> +                    d, bank_start, bank_start + bank_size);
>>> +            goto fail;
>>> +        }
>>> +
>>> +        /*
>>> +         * By default, it shall be mapped to the fixed guest RAM address
>>> +         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
>>> +         * Starting from RAM0(GUEST_RAM0_BASE).
>>> +         */
>>
>> Ok. So you are first trying to exhaust guest bank 0 and then move to
>> bank 1. This wasn't entirely clear from the design document.
>>
>> I am fine with that, but in this case, the developer should not need
>> to know that (in fact this is not part of the ABI).
>>
>> Regarding this code, I am a bit concerned about the scalability if we introduce
>> a second bank.
>>
>> Can we have an array of the possible guest banks and increment the index
>> when exhausting the current bank?
>>
> 
> Correct me if I understand wrongly:
> 
> What you suggest here is that we make an array of guest banks, right
> now including GUEST_RAM0 and GUEST_RAM1. And if, later, we add more
> guest banks, it will not bring a scalability problem here, right?

Yes. This should also reduce the current complexity of the code.
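
A minimal sketch of that array-driven approach follows. The base/size 
values and the helper name are assumptions for illustration, not Xen's 
actual definitions; the point is that adding a GUEST_RAM2 later only 
means extending the table:

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Assumed guest memory map values, for illustration only. */
#define GUEST_RAM0_BASE 0x40000000ULL
#define GUEST_RAM0_SIZE 0xc0000000ULL
#define GUEST_RAM1_BASE 0x200000000ULL
#define GUEST_RAM1_SIZE 0xe00000000ULL

struct guest_bank { paddr_t base, size; };

static const struct guest_bank guest_banks[] = {
    { GUEST_RAM0_BASE, GUEST_RAM0_SIZE },
    { GUEST_RAM1_BASE, GUEST_RAM1_SIZE },
};

/*
 * Return how much of a physical bank of `size` bytes fits in the
 * current guest bank; advance to the next guest bank when the current
 * one is exhausted. The caller keeps *gbank and *used across physical
 * banks, so no per-bank special-casing is needed.
 */
static paddr_t place_in_guest_bank(unsigned int *gbank, paddr_t *used,
                                   paddr_t size)
{
    while ( *gbank < sizeof(guest_banks) / sizeof(guest_banks[0]) )
    {
        paddr_t room = guest_banks[*gbank].size - *used;

        if ( room > 0 )
        {
            paddr_t chunk = size < room ? size : room;

            *used += chunk;
            return chunk;
        }

        (*gbank)++;    /* Current guest bank exhausted: move on. */
        *used = 0;
    }

    return 0;          /* Ran out of guest banks. */
}
```

With such a loop the "first exhaust bank 0, then bank 1" policy falls 
out of the table order rather than being hard-coded.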

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 20:19:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 20:19:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130414.244337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljSf9-0006jZ-6u; Wed, 19 May 2021 20:19:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130414.244337; Wed, 19 May 2021 20:19:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljSf9-0006jS-3o; Wed, 19 May 2021 20:19:27 +0000
Received: by outflank-mailman (input) for mailman id 130414;
 Wed, 19 May 2021 20:19:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljSf7-0006jM-On
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 20:19:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljSf7-0001cP-Ie; Wed, 19 May 2021 20:19:25 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljSf7-0004Dg-CS; Wed, 19 May 2021 20:19:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=+Rnvnhd/0HT/XthA1LClHFuxX5pf6YhLMjjVwgpQ0co=; b=YpVT4QwcXe0365iutBgIGrlqcf
	jgXGqPCJhAYoPc+FcbOqSMGzl/ISco1JTKIAnoix7kmMMAqHzVFwOLvpbybQrB7M3RRHEBD6kzJBk
	kqNk1CM5FN9rKfnfifaJS16ygU4HjQ4jcC2N8w21KUcOcLoUNNzy+QyIP+UkFz3PV6CI=;
Subject: Re: Hand over of the Xen shared info page
To: Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrii Chepurnyi <Andrii_Chepurnyi@epam.com>,
 Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
 <8ff05bdf-a6c4-6b14-b39c-7d9b3bb9d279@xen.org>
 <1db54c363eae22613280e7181805abee396fe5e9.camel@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <8d1ecf6c-a0d1-d9bc-5daf-d02a34fff1e6@xen.org>
Date: Wed, 19 May 2021 21:19:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <1db54c363eae22613280e7181805abee396fe5e9.camel@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 14/05/2021 10:50, Anastasiia Lukianenko wrote:
> Hi Julien!

Hello,

> On Thu, 2021-05-13 at 09:37 +0100, Julien Grall wrote:
>>
>> On 13/05/2021 09:03, Anastasiia Lukianenko wrote:
>> The alternative is for U-boot to go through the DT and infer which
>> regions are free (IOW any region not described).
> 
> Thank you for your interest in the problem and advice on how to solve
> it. Could you please clarify how we could find free regions using the
> DT in U-boot?

I don't know U-boot code, so I can't tell whether what I suggest would work.

In theory, the device-tree should describe every region allocated in 
the address space. So if you parse the device-tree and create a list (or 
any data structure) with the regions, then any range not present in the 
list would be a free region you could use.

However, I realized a few days ago that the magic pages are not 
described in the DT. We probably want to fix that by marking the pages 
as "reserved" or creating a specific binding.

So you will need a specific quirk for them.
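
The "any range not described is free" idea can be sketched as a toy 
model (this is not U-Boot code; building the region list from the 
device-tree is not shown, and all values are invented):

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

struct region { uint64_t start, end; };   /* [start, end), sorted */

/*
 * Given a sorted list of regions the DT describes as used, return the
 * start of the first gap of at least `size` bytes below `limit`, or
 * UINT64_MAX if no such gap exists.
 */
static uint64_t find_free(const struct region *used, size_t n,
                          uint64_t limit, uint64_t size)
{
    uint64_t cursor = 0;

    for ( size_t i = 0; i < n; i++ )
    {
        /* Gap between the end of the last region and the start of this one. */
        if ( used[i].start > cursor && used[i].start - cursor >= size )
            return cursor;
        if ( used[i].end > cursor )
            cursor = used[i].end;
    }

    /* Tail gap up to the address-space limit. */
    return (limit > cursor && limit - cursor >= size) ? cursor : UINT64_MAX;
}
```

The quirk for the undescribed magic pages would then amount to appending 
them to the `used` list by hand before searching.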

I have posted some more ideas in a separate thread [1] related to 
FreeBSD support for Arm.

Cheers,

[1] 
https://lore.kernel.org/xen-devel/f7360dac-5d83-733b-7ec5-c73d4dc0350d@xen.org/

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 19 22:09:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 22:09:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130486.244403 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljUNq-00033M-NG; Wed, 19 May 2021 22:09:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130486.244403; Wed, 19 May 2021 22:09:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljUNq-00033F-Jr; Wed, 19 May 2021 22:09:42 +0000
Received: by outflank-mailman (input) for mailman id 130486;
 Wed, 19 May 2021 22:09:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljUNo-000334-W3; Wed, 19 May 2021 22:09:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljUNo-0003Qo-IT; Wed, 19 May 2021 22:09:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljUNo-0001jr-Ab; Wed, 19 May 2021 22:09:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljUNo-0005ih-9s; Wed, 19 May 2021 22:09:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6WWt+13s4iO9lVt0CAXYL3OFmzezVYbxS12yikElKOQ=; b=f5wmxgTNQp8eHuVmKzoTa4RHB7
	wFSfImbQyek9PKxyhwRaK3Da9I/yBFpLOL/d8o2BABYhPRV79wFRbAdM055oEMJaY1RkHVcjXe8ig
	eXdi/rcvH+Vlot9hmG/kUNVFIARaBbFzy8A3qPONwlGFu9WTrzIo2m5MgqpseLJ4k8KA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162096-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162096: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
X-Osstest-Versions-That:
    xen=935abe1cc463917c697c1451ec8d313a5d75f7de
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 22:09:40 +0000

flight 162096 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162096/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41
baseline version:
 xen                  935abe1cc463917c697c1451ec8d313a5d75f7de

Last test of basis   162093  2021-05-19 14:01:35 Z    0 days
Testing same since   162096  2021-05-19 19:01:37 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   935abe1cc4..aa77acc280  aa77acc28098d04945af998f3fc0dbd3759b5b41 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 19 22:24:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 22:24:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130494.244417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljUbf-0005Fd-0G; Wed, 19 May 2021 22:23:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130494.244417; Wed, 19 May 2021 22:23:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljUbe-0005FW-Sw; Wed, 19 May 2021 22:23:58 +0000
Received: by outflank-mailman (input) for mailman id 130494;
 Wed, 19 May 2021 22:23:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljUbd-0005FM-MB; Wed, 19 May 2021 22:23:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljUbd-0003fa-Az; Wed, 19 May 2021 22:23:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljUbd-0002IO-0O; Wed, 19 May 2021 22:23:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljUbc-0005OT-W8; Wed, 19 May 2021 22:23:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QZo/s/Bk1xi7I7LPmJiv5rYeo6SzFjUCJwnbXYx5GhQ=; b=NMZD3rSOUECm362wKq98Q4Q+5c
	NK05mXFNdxI6gpWsV5ZawxyYG6zfYXexhrN2Ftr+uzkpaYNGHTLrTXfn89jvc31zqRI+LVl+ZH0Pn
	bG6O32WyRTktUua4/bYcwfODIQU1BbOVtp1RbqQFZNXzzWRamOPFl8OoFzxswMXizTa0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162084-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162084: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-rtds:guest-stop:fail:allowable
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=e05d387ba736bcabe414b0aa05831d151ac40385
X-Osstest-Versions-That:
    linux=b82e5721a17325739adef83a029340a63b53d4ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 19 May 2021 22:23:56 +0000

flight 162084 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162084/

Failures :-/ but no regressions.

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds     17 guest-stop               fail REGR. vs. 161947

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 161947
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 161947
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 161947
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 161947
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 161947
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 161947
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 161947
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 161947
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 161947
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 161947
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 161947
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                e05d387ba736bcabe414b0aa05831d151ac40385
baseline version:
 linux                b82e5721a17325739adef83a029340a63b53d4ad

Last test of basis   161947  2021-05-14 08:11:45 Z    5 days
Testing same since   162084  2021-05-19 08:42:14 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alan Stern <stern@rowland.harvard.edu>
  Alex Deucher <alexander.deucher@amd.com>
  Alexander Aring <aahringo@redhat.com>
  Alexandre Belloni <alexandre.belloni@bootlin.com>
  Alexei Starovoitov <ast@kernel.org>
  Alexey Kardashevskiy <aik@ozlabs.ru>
  Andrew Morton <akpm@linux-foundation.org>
  Andrii Nakryiko <andrii@kernel.org>
  Anthony Wang <anthony1.wang@amd.com>
  Anup Patel <anup.patel@wdc.com>
  Archie Pusaka <apusaka@chromium.org>
  Ard Biesheuvel <ardb@kernel.org>
  Aurabindo Pillai <aurabindo.pillai@amd.com>
  Axel Rasmussen <axelrasmussen@google.com>
  Badhri Jagan Sridharan <badhri@google.com>
  Baoquan He <bhe@redhat.com>
  Baptiste Lepers <baptiste.lepers@gmail.com>
  Bart Van Assche <bvanassche@acm.org>
  Bence Csókás <bence98@sch.bme.hu>
  Bindu Ramamurthy <bindur12@amd.com>
  Bjorn Andersson <bjorn.andersson@linaro.org>
  Bjorn Helgaas <bhelgaas@google.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Chunfeng Yun <chunfeng.yun@mediatek.com>
  Colin Ian King <colin.king@canonical.com>
  Cong Wang <cong.wang@bytedance.com>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Lezcano <daniel.lezcano@linaro.org>
  Daniel Wheeler <daniel.wheeler@amd.com>
  Darrick J. Wong <darrick.wong@oracle.com>
  Dave Switzer <david.switzer@intel.com>
  David Bauer <mail@david-bauer.net>
  David S. Miller <davem@davemloft.net>
  David Teigland <teigland@redhat.com>
  David Ward <david.ward@gatech.edu>
  Dawid Lukwinski <dawid.lukwinski@intel.com>
  Dinghao Liu <dinghao.liu@zju.edu.cn>
  Dmitry Baryshkov <dmitry.baryshkov@linaro.org>
  Dmitry Osipenko <digetx@gmail.com>
  Du Cheng <ducheng2@gmail.com>
  Eddie James <eajames@linux.ibm.com>
  Edwin Peer <edwin.peer@broadcom.com>
  Emmanuel Grumbach <emmanuel.grumbach@intel.com>
  Eric Biggers <ebiggers@google.com>
  Eric Dumazet <edumazet@google.com>
  Felix Fietkau <nbd@nbd.name>
  Ferry Toth <ftoth@exalondelft.nl>
  Florian Fainelli <f.fainelli@gmail.com>
  Govindarajulu Varadarajan <gvaradar@cisco.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo A. R. Silva <gustavoars@kernel.org>
  Hans de Goede <hdegoede@redhat.com>
  Hao Chen <chenhao288@hisilicon.com>
  Hoang Le <hoang.h.le@dektech.com.au>
  Huazhong Tan <tanhuazhong@huawei.com>
  Hugh Dickins <hughd@google.com>
  Hulk Robot <hulkrobot@huawei.com>
  Ilias Apalodimas <ilias.apalodimas@linaro.org>
  Ilya Dryomov <idryomov@gmail.com>
  Ilya Lipnitskiy <ilya.lipnitskiy@gmail.com>
  Jack Wang <jinpu.wang@ionos.com>
  Jaegeuk Kim <jaegeuk@kernel.org>
  Jani Nikula <jani.nikula@intel.com>
  Jarkko Sakkinen <jarkko@kernel.org>
  Jaroslaw Gawin <jaroslawx.gawin@intel.com>
  Jason Self <jason@bluehome.net>
  Jean-Baptiste Maneyrol <jmaneyrol@invensense.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Jesper Dangaard Brouer <brouer@redhat.com>
  Jia-Ju Bai <baijiaju1990@gmail.com>
  Jian Shen <shenjian15@huawei.com>
  Joel Stanley <joel@jms.id.au>
  Joerg Roedel <jroedel@suse.de>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Jon Maloy <jmaloy@redhat.com>
  Jonathan Cameron <Jonathan.Cameron@huawei.com>
  Jonathan McDowell <noodles@earth.li>
  Jonathon Reinhart <Jonathon.Reinhart@gmail.com>
  Jouni Roivas <jouni.roivas@tuxera.com>
  Kai Vehmanen <kai.vehmanen@linux.intel.com>
  Kai-Heng Feng <kai.heng.feng@canonical.com>
  Kalle Valo <kvalo@codeaurora.org>
  Karsten Graul <kgraul@linux.ibm.com>
  Kees Cook <keescook@chromium.org>
  Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
  Krzysztof Kozlowski <krzk@kernel.org>
  Krzysztof Kozlowski <krzysztof.kozlowski@canonical.com>
  Kuninori Morimoto <kuninori.morimoto.gx@renesas.com>
  Lee Gibson <leegib@gmail.com>
  Lino Sanfilippo <LinoSanfilippo@gmx.de>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Lorenzo Pieralisi <lorenzo.pieralisi@arm.com>
  Lukasz Luba <lukasz.luba@arm.com>
  Lv Yunlong <lyl2019@mail.ustc.edu.cn>
  Maciej W. Rozycki <macro@orcam.me.uk>
  Maciej Żenczykowski <maze@google.com>
  Marc Kleine-Budde <mkl@pengutronix.de>
  Marc Zyngier <maz@kernel.org>
  Marcel Hamer <marcel@solidxs.se>
  Marcel Holtmann <marcel@holtmann.org>
  Marcelo Ricardo Leitner <marcelo.leitner@gmail.com>
  Marek Szyprowski <m.szyprowski@samsung.com>
  Mark Brown <broonie@kernel.org>
  Masahiro Yamada <masahiroy@kernel.org>
  Mateusz Palczewski <mateusz.palczewski@intel.com>
  Mathias Nyman <mathias.nyman@linux.intel.com>
  Matteo Croce <mcroce@linux.microsoft.com>
  Matthew Wilcox (Oracle) <willy@infradead.org>
  Maxim Schwalm <maxim.schwalm@gmail.com> # Asus TF700T
  Maximilian Luz <luzmaximilian@gmail.com>
  Miaohe Lin <linmiaohe@huawei.com>
  Michael Chan <michael.chan@broadcom.com>
  Michael Ellerman <mpe@ellerman.id.au>
  Michael Walle <michael@walle.cc>
  Mihai Moldovan <ionic@ionic.de>
  Mikhail Durnev <mikhail_durnev@mentor.com>
  Miklos Szeredi <mszeredi@redhat.com>
  Minas Harutyunyan <Minas.Harutyunyan@synopsys.com>
  Nicolas Pitre <nico@fluxnic.net>
  Nikola Livic <nlivic@gmail.com>
  Nikolay Aleksandrov <nikolay@nvidia.com>
  Nobuhiro Iwamatsu <nobuhiro1.iwamatsu@toshiba.co.jp>
  Odin Ugedal <odin@uged.al>
  Olga Kornievskaia <kolga@netapp.com>
  Oliver Neukum <oneukum@suse.com>
  Omar Sandoval <osandov@fb.com>
  Pablo Neira Ayuso <pablo@netfilter.org>
  Pali Rohár <pali@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Paolo Bonzini <pbonzini@redhat.com>
  Paul Menzel <pmenzel@molgen.mpg.de>
  Paweł Chmiel <pawel.mikolaj.chmiel@gmail.com>
  Peng Li <lipeng321@huawei.com>
  Peter Xu <peterx@redhat.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phil Elwell <phil@raspberrypi.com>
  Phil Sutter <phil@nwl.cc>
  Phillip Lougher <phillip@squashfs.org.uk>
  Pierre-Louis Bossart <pierre-louis.bossart@linux.intel.com>
  Quentin Perret <qperret@google.com>
  Rafael J. Wysocki <rafael.j.wysocki@intel.com>
  Ray Jui <ray.jui@broadcom.com>
  Robin Singh <robin.singh@amd.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sandeep Singh <sandeep.singh@amd.com>
  Sasha Levin <sashal@kernel.org>
  Sean Christopherson <seanjc@google.com>
  Sergei Trofimovich <slyfox@gentoo.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Srikar Dronamraju <srikar@linux.vnet.ibm.com>
  Stefan Assmann <sassmann@kpanic.de>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Sun Ke <sunke32@huawei.com>
  Suravee Suthikulpanit <suravee.suthikulpanit@amd.com>
  Svyatoslav Ryhel <clamor95@gmail.com> # Asus TF201
  Sylwester Nawrocki <s.nawrocki@samsung.com>
  syzbot+ffb0b3ffa6cfbc7d7b3f@syzkaller.appspotmail.com
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Thomas Bogendoerfer <tsbogend@alpha.franken.de>
  Thomas Gleixner <tglx@linutronix.de>
  Tomi Valkeinen <tomi.valkeinen@ideasonboard.com>
  Tong Zhang <ztong0001@gmail.com>
  Tony Lindgren <tony@atomide.com>
  Tony Nguyen <anthony.l.nguyen@intel.com>
  Torin Cooper-Bennun <torin@maxiluxsystems.com>
  Trond Myklebust <trond.myklebust@hammerspace.com>
  Ville Syrjälä <ville.syrjala@linux.intel.com>
  Vineet Gupta <vgupta@synopsys.com>
  Vladimir Isaev <isaev@synopsys.com>
  Vlastimil Babka <vbabka@suse.cz>
  Wesley Cheng <wcheng@codeaurora.org>
  Will Deacon <will@kernel.org>
  Wolfram Sang <wsa+renesas@sang-engineering.com>
  Wolfram Sang <wsa@kernel.org>
  Xin Long <lucien.xin@gmail.com>
  Yang Yingliang <yangyingliang@huawei.com>
  Yaqi Chen <chendotjs@gmail.com>
  Yonghong Song <yhs@fb.com>
  Yufeng Mo <moyufeng@huawei.com>
  Yunjian Wang <wangyunjian@huawei.com>
  Zhen Lei <thunder.leizhen@huawei.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   b82e5721a173..e05d387ba736  e05d387ba736bcabe414b0aa05831d151ac40385 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Wed May 19 23:11:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 23:11:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130508.244433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljVLV-0001hM-NS; Wed, 19 May 2021 23:11:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130508.244433; Wed, 19 May 2021 23:11:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljVLV-0001hF-K6; Wed, 19 May 2021 23:11:21 +0000
Received: by outflank-mailman (input) for mailman id 130508;
 Wed, 19 May 2021 23:11:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+fN8=KO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ljVLU-0001h9-KP
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 23:11:20 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id aab6d8f6-e046-487a-848a-5aae1e1d0b5a;
 Wed, 19 May 2021 23:11:19 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id A37C16135A;
 Wed, 19 May 2021 23:11:18 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: aab6d8f6-e046-487a-848a-5aae1e1d0b5a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621465878;
	bh=EvFrIr6fZy8YEMJUnzLDKEcfcg+OZxPugZIf/czZVko=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pL0XCRuTIW3VYsZLlwzkbwsPBtrYA8/P3wgI388dwFY/xiYNnZwUQuyzzeKCRsd3h
	 Kb2R+06vsDak29xzOov67/jxkBLDQQHvm9ybmiuRkxyAj3qu6fA+JASWqQCTeumv5J
	 qfLwOd5/9Zhfpc5q4mIC6ZOX+DbhBqLG63qdQ6x05MeCQN/2dwrXmlHAlHQea5Qcnj
	 Sdzb8tvfSuE6iGtsBbmWwp//uCGfXmGkv5M5rYPQZmgmTN7cXb1Af12kwTX48N31Xf
	 g1g6/BLvV1ixf4CP9HnRRBX+A9cNXOYjchARHYclEP1J1tc9G5gUwR5I2brX6XinTA
	 rgp5IXbnHgayA==
Date: Wed, 19 May 2021 16:11:18 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Anastasiia Lukianenko <anastasiia_lukianenko@epam.com>, 
    "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
    Andrii Chepurnyi <Andrii_Chepurnyi@epam.com>, 
    Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>, 
    Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: Hand over of the Xen shared info page
In-Reply-To: <8d1ecf6c-a0d1-d9bc-5daf-d02a34fff1e6@xen.org>
Message-ID: <alpine.DEB.2.21.2105191604130.14426@sstabellini-ThinkPad-T480s>
References: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com> <8ff05bdf-a6c4-6b14-b39c-7d9b3bb9d279@xen.org> <1db54c363eae22613280e7181805abee396fe5e9.camel@epam.com> <8d1ecf6c-a0d1-d9bc-5daf-d02a34fff1e6@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Wed, 19 May 2021, Julien Grall wrote:
> On 14/05/2021 10:50, Anastasiia Lukianenko wrote:
> > Hi Julien!
> 
> Hello,
> 
> > On Thu, 2021-05-13 at 09:37 +0100, Julien Grall wrote:
> > > 
> > > On 13/05/2021 09:03, Anastasiia Lukianenko wrote:
> > > The alternative is for U-boot to go through the DT and infer which
> > > regions are free (IOW any region not described).
> > 
> > Thank you for your interest in the problem and your advice on how to
> > solve it. Could you please clarify how we could find free regions
> > using the DT in U-boot?
> 
> I don't know U-boot code, so I can't tell whether what I suggest would work.
> 
> In theory, the device-tree should describe every region allocated in the
> address space. So if you parse the device-tree and build a list (or any
> data structure) of the regions, then any range not present in the list
> would be a free region you could use.

Yes, any "empty" memory region which is neither memory nor MMIO should
work.


> However, I realized a few days ago that the magic pages are not described in
> the DT. We probably want to fix that by marking the pages as "reserved" or by
> creating a specific binding.
> 
> So you will need a specific quirk for them.

It should also be possible to keep the shared info page allocated and
simply pass the address to the kernel by adding the right node to the
device tree. To do that, we would have to add a description of the magic
pages to the device tree, which I think would be good to have in any
case. In that case it would be best to add a proper binding for it under
the "xen,xen" node.
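[Editorial note: for illustration only, such a binding might look like the fragment below. The property name "xen,magic-pages" and all addresses are invented here; no such binding exists at the time of this thread, which is precisely what is being proposed.]

```dts
hypervisor {
	compatible = "xen,xen-4.15", "xen,xen";
	/* Existing region advertised for grant-table mappings. */
	reg = <0x0 0x38000000 0x0 0x1000000>;
	/* Hypothetical: guest-physical location of the magic pages
	 * (shared info, console, xenstore) placed by the toolstack. */
	xen,magic-pages = <0x0 0x39000000 0x0 0x4000>;
};
```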


From xen-devel-bounces@lists.xenproject.org Wed May 19 23:25:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 19 May 2021 23:25:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130515.244444 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljVZY-0003DQ-0Q; Wed, 19 May 2021 23:25:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130515.244444; Wed, 19 May 2021 23:25:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljVZX-0003DJ-Tg; Wed, 19 May 2021 23:25:51 +0000
Received: by outflank-mailman (input) for mailman id 130515;
 Wed, 19 May 2021 23:25:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+fN8=KO=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ljVZW-0003DD-FH
 for xen-devel@lists.xenproject.org; Wed, 19 May 2021 23:25:50 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0e94c275-4a81-4b8e-b87b-c313bc3f2397;
 Wed, 19 May 2021 23:25:49 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id CE45D611AD;
 Wed, 19 May 2021 23:25:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0e94c275-4a81-4b8e-b87b-c313bc3f2397
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621466749;
	bh=H4hqy0CIONNcsO7xABco8gJwT2V3piA9EG/FYUE9IWQ=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=anTIj30D6wsaa1VVWWnarvCF6yL9FeTXQ1NsTM4MTG/7Hpp1zMwkFemXSOk8QcMqB
	 meUcJ/7DwYvMww54pDDSzfcKWOatvdPP3jgT52aS6J3qNHTkEKuNtDgdlS6Ph0gBBd
	 P/Lc+pt75lXySgsdKW80rjhMIVuZMVBmQEFMmp0QKZyKVth4gJfTprQwZnYqdmejFX
	 M3dqkNvOdcNi/xFp1/kq4BF6hkH4bzgZC2hM2KTjyFD/fVbUFwZ4V9P3W0N6Fz6Tbp
	 WVmr0RFka8n1wgUqVEGPoiQK3VEGxLuP63vGvlK1TyJFOuOWBlz70Jl0T+Ne08o7Ck
	 6xLYaTF+6+4tA==
Date: Wed, 19 May 2021 16:25:48 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org, 
    Roger Pau Monné <royger@freebsd.org>, Mitchell Horne <mhorne@freebsd.org>, 
    Stefano Stabellini <sstabellini@kernel.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
In-Reply-To: <f7360dac-5d83-733b-7ec5-c73d4dc0350d@xen.org>
Message-ID: <alpine.DEB.2.21.2105191611540.14426@sstabellini-ThinkPad-T480s>
References: <YIptpndhk6MOJFod@Air-de-Roger> <YItwHirnih6iUtRS@mattapan.m5p.com> <YIu80FNQHKS3+jVN@Air-de-Roger> <YJDcDjjgCsQUdsZ7@mattapan.m5p.com> <YJURGaqAVBSYnMRf@Air-de-Roger> <YJYem5CW/97k/e5A@mattapan.m5p.com> <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org> <YJ3jlGSxs60Io+dp@mattapan.m5p.com> <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org> <YJ8hTE/JbJygtVAL@mattapan.m5p.com> <f7360dac-5d83-733b-7ec5-c73d4dc0350d@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Sat, 15 May 2021, Julien Grall wrote:
> (+ Andrew, + Stefano)
> 
> On 15/05/2021 02:18, Elliott Mitchell wrote:
> > On Fri, May 14, 2021 at 09:32:10AM +0100, Julien Grall wrote:
> > > On 14/05/2021 03:42, Elliott Mitchell wrote:
> > > > 
> > > > Issue is what is the intended use of the memory range allocated to
> > > > /hypervisor in the device-tree on ARM?  What do the Xen developers plan
> > > > for?  What is expected?
> > > 
> > >   From docs/misc/arm/device-tree/guest.txt:
> > > 
> > > "
> > > - reg: specifies the base physical address and size of a region in
> > >     memory where the grant table should be mapped to, using an
> > >     HYPERVISOR_memory_op hypercall. The memory region is large enough
> > >     to map the whole grant table (it is larger or equal to
> > >     gnttab_max_grant_frames()).
> > >     This property is unnecessary when booting Dom0 using ACPI.
> > > "
> > > 
> > > Effectively, this is a region of memory that is known to be
> > > unallocated. A guest need not use it if it has a better way to find
> > > unallocated space.
> > 
> > The use of "should" is generally considered strong encouragement to do
> > so.  A warning $something is lurking here and you may regret it if you
> > recklessly disobey this without knowing what is going on behind the
> > scenes.
> 
> I thought a bit more over night. The potential trouble I can think of for a
> domU is the magic pages are not described in the DT.
> 
> I think every other regions should be discoverable from the DT (at least for a
> domU).
> 
> > Whereas your language here suggests "can" is a better word since it is
> > simply a random unused address range.
> > 
> > 
> > > > Was the /hypervisor range intended *strictly* for mapping grant-tables?
> > > 
> > > It was introduced to tell the OS a place where the grant-table could be
> > > conveniently mapped.
> > 
> > Yet this is strange.  If any $random unused address range is acceptable,
> > why bother suggesting a particular one?  If this is really purely the
> > OS's choice, why is Xen bothering to suggest a range at all?
> 
> I have added Stefano who may have more historical context than what I wrote in
> my previous e-mail.

At the time when the binding was introduced, there was a long history of
issues in finding a "safe" place to map the grant table. Consider that
on x86 not all "empty" regions are really empty: there can be hidden
MMIO regions, and with PV guests there is not even a stage-2 address
translation to protect you.

To stay on the safe side, we wanted to have a way to provide a range
known to be safe so that Linux wouldn't have to worry about it.
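[Editorial note: concretely, the range under discussion is the one advertised in the guest's /hypervisor node. The fragment below follows the shape documented in docs/misc/arm/device-tree/guest.txt; the addresses are illustrative, not fixed by the ABI.]

```dts
hypervisor {
	compatible = "xen,xen-4.11", "xen,xen";
	/* Unallocated guest-physical range where the grant table
	 * "should" (or "can") be mapped via HYPERVISOR_memory_op. */
	reg = <0x0 0x38000000 0x0 0x1000000>;
	/* Event-channel upcall interrupt (PPI). */
	interrupts = <1 15 0xf08>;
};
```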


> > > > Was it intended for /hypervisor to grow over the
> > > > years as hardware got cheaper?
> > > I don't understand this question.
> > 
> > Going to the trouble of suggesting a range points to something going on.
> > I'm looking for an explanation since strange choices might hint at
> > something unpleasant lurking below and I should watch where I step.
> > 
> > 
> > > > Might it be better to deprecate the /hypervisor range and have domains
> > > > allocate any available address space for foreign mappings?
> > > 
> > > It may be easy for FreeBSD to find available address space but so far
> > > this has not been the case in Linux (I haven't checked the latest
> > > version though).
> > > 
> > > To be clear, an OS is free to not use the range provided in /hypervisor
> > > (maybe this is not clear enough in the spec?). This was mostly
> > > introduced to overcome some issues we saw in Linux when Xen on Arm was
> > > introduced.
> > 
> > Mind if I paraphrase this?
> > 
> > "this is a bring-up hack for Linux which hangs around since we haven't
> > felt any pressure to fix the underlying Linux issue"
> > 
> > Is that reasonable?
> 
> Yes. I have revisited the problem a few times and every time I got stuck
> because not all the I/O regions were reported to Linux. So Linux would not
> be able to find a safe unallocated space.

Exactly!



> > > > Should the FreeBSD implementation be treating grant tables as distinct
> > > > from other foreign mappings?
> > > 
> > > Both require unallocated address space to work. IIRC FreeBSD is able to
> > > find unallocated space easily, so I would recommend using it.
> > 
> > That is supposed to be the case, but it appears there is presently a bug
> > which has broken the functionality on ARM.
> 
> Do you mind sharing some details?
> 
> > As such, as a proper lazy developer if
> > I can abuse the /hypervisor address range for all foreign mappings, I
> > will.
> 
> Are you aiming to support dom0 now?

I guess it is not quite an abuse. This was meant to be a safe address
range to do special mappings, primarily for the grant table. Maybe it
could be used for other things too.

I would be open to the idea of extending the device tree description to
say that it can be used for other mappings as well.



> > My feeling is one of two things should happen with the /hypervisor
> > address range:
> > 
> > 1>  OSes could be encouraged to use it for all foreign mappings.  The
> > range should be dynamic in some fashion.  There could be a handy way to
> > allow configuring the amount of address space thus reserved.
> 
> In the context of XSA-300 and virtio on Xen on Arm, we discussed providing
> a region for foreign mappings. The main trouble here is figuring out the
> size; if you get it wrong then you may break all the PV drivers.
> 
> If the problem is finding space, then I would like to suggest a different
> approach (I think I may have discussed it with Andrew). Xen maintains the
> P2M for the guest and therefore knows where the unallocated space is. How
> about introducing a new hypercall to ask for "unallocated space"?
> 
> This would not work for older hypervisors, but you could use RAM instead
> (as Linux does). This has drawbacks (e.g. shattering superpages, reducing
> the amount of usable memory...), but for 1> you would also need a hack for
> older Xen.

I am starting to wonder if it would make sense to add a new device tree
binding to describe a larger free region for foreign mappings rather than
a hypercall. It could be several GBs or even TBs in size. I like the idea
of having it in device tree because after all this is static information.
I can see that a hypercall would also work and I am open to it, but if
possible I think it would be better not to extend the hypercall interface
when there is a good alternative.
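[Editorial note: for comparison, the hypercall alternative Julien mentions might look something like this from the guest side. Everything here is hypothetical — no such XENMEM sub-op exists at the time of this thread; the struct name and fields are invented, and the hypercall itself is mocked so the sketch is self-contained.]

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical argument structure for a "get unallocated space" memory op. */
struct xen_get_unallocated_space {
	uint64_t min_size;  /* IN:  minimum size of the region wanted       */
	uint64_t gfn_start; /* OUT: start of a guest-physical range that is */
	uint64_t size;      /*      guaranteed unpopulated in the P2M       */
};

/*
 * Mock of the hypercall: a real guest would issue HYPERVISOR_memory_op()
 * with a new sub-op.  Here we pretend Xen found a 1 GiB hole at 8 GiB.
 */
static int mock_get_unallocated_space(struct xen_get_unallocated_space *arg)
{
	if (arg->min_size > (1ULL << 30))
		return -1; /* no hole that large */
	arg->gfn_start = 8ULL << 30;
	arg->size = 1ULL << 30;
	return 0;
}
```

Because Xen maintains the guest's P2M, it can answer such a query authoritatively; the trade-off raised above is that a hypercall extends the stable interface, whereas a device-tree property conveys the same static information without one.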


> > 2>  The range should be declared deprecated.  Everyone should be put on
> > the same page that this was a quick hack for bringing up Xen/ARM/Linux,
> > and really it shouldn't have escaped.
> 
> How about relaxing the wording instead?

Yes, I agree!



From xen-devel-bounces@lists.xenproject.org Thu May 20 00:36:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 00:36:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130528.244455 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljWfk-0001qE-U2; Thu, 20 May 2021 00:36:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130528.244455; Thu, 20 May 2021 00:36:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljWfk-0001q7-R2; Thu, 20 May 2021 00:36:20 +0000
Received: by outflank-mailman (input) for mailman id 130528;
 Thu, 20 May 2021 00:36:20 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vYVO=KP=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1ljWfj-0001q1-PN
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 00:36:20 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2b4a3fff-4289-4f29-af53-69b76a58a8a9;
 Thu, 20 May 2021 00:36:18 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14K0T4KW168290;
 Thu, 20 May 2021 00:36:18 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2120.oracle.com with ESMTP id 38j68mk598-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 20 May 2021 00:36:17 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14K0PJVs080348;
 Thu, 20 May 2021 00:36:17 GMT
Received: from nam11-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam11lp2173.outbound.protection.outlook.com [104.47.58.173])
 by aserp3030.oracle.com with ESMTP id 38meeggxd8-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 20 May 2021 00:36:17 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BLAPR10MB5010.namprd10.prod.outlook.com (2603:10b6:208:30d::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Thu, 20 May
 2021 00:36:16 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4129.033; Thu, 20 May 2021
 00:36:15 +0000
Received: from [10.74.100.102] (160.34.88.102) by
 BYAPR21CA0016.namprd21.prod.outlook.com (2603:10b6:a03:114::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.12 via Frontend
 Transport; Thu, 20 May 2021 00:36:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2b4a3fff-4289-4f29-af53-69b76a58a8a9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=e7JL9BougwBdqLLM7IJdi3SWfPIdq1UHo9tiH7xcB8E=;
 b=0KTutHThn7dsOF4co0D4RmcwAAUVFzbBklu868lXmdGea07llJaCvTRbBIWlsLuz0m2f
 //UxptKhMK7CzU9MD228RPJZVI0eMjH5O4o5HH8E/f2liKkQTKrj0vk3s26U5FzSuFPF
 scOaXnkCfZh938tiuAcNkvCZI5TFZ9Z6gzHKF7EkAsvE0VuOakGhLACpAyUzF55GFtjZ
 QaVydHaTJ911EVNs0w+pknnZDJBkeb+rKBh80GRForY2e0oM58qtk68mB3lhOhNFndzn
 QGykGsIIjqpE21edXeR6NvyfLAqWubCJ7oesbJpJELS1ur0y84Wq3E2gyQoNu0r1o13D GA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cmrpvx8eRHSr6+9FnaKhAQJbTHUFj5uJvxecBYtMjm4zrbmTAdjY6m3y4NBf6x4duzgojeL84OCgcLKBBSVrAHep/S25GJvDWWDOHHxWaMAGUWpzVBza7Jcl7/ewHIzhLNphgH7uxAjXdRZzo2qbX791yK95OiPKOI/BOi+pHIlFnoJxr16d6PxhDZXmw1AWpNk4ar1I7vgAL9kJpqgkytN6YBRBGpWL520cyr1Ev/2DXcgIAH0r+K5grsHjULiTXEK39R2esbrhBw4deSiNFhTH1uBQoloDr6R5HIVSrTc8aoymHQVIMX0reZMmMby5IeZg7ZM5GWQuqWVXj1sJAg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=e7JL9BougwBdqLLM7IJdi3SWfPIdq1UHo9tiH7xcB8E=;
 b=aNQvqv0bVlIb3Y3qt73HsUU86Uy1+Qkx4JPBPA+fmEeCh03pB4U32UUIz5MGOX0pr6w4h/hAr8nbuc52pM7Zh6YbX36WUDi3h6Um0n3HKGGAzgfVVKjBK3v0XznL8V5k7+XLtFGJPT0M6Sotj+5bzqOB0CWEp3crkiHQF8xiH7SPhizzXJKSyrtyT9rTy0UibrsYolVdiib7xRmLfTVLMI/kfe7XsDjurQdwEy8pPKmIR3/GzPM9zQIT46NIY+Ry1M1/px83b8U/JskO8kd103Vev8ierlEQC/V1og/c3Pr2jXZEOcPvche+ocY536guPDnLxer/3OrtgURankuYaA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=e7JL9BougwBdqLLM7IJdi3SWfPIdq1UHo9tiH7xcB8E=;
 b=t8kPfHbPd1W1DlJvIiAljZ1XjaJo4eqsz17mtpiddGgrEordzUGKTK5yE9txWUIKxsyRzV7FbB9plLdi9eSOtUVwxzVtWkpkFvbga3/3bRo11QVfoTTdk/lh/Axf4+hpiLeLJm15aHSQANFjSfltO0cjz7CF1StQrcHR3/bfE3E=
Subject: Re: [PATCH v2 1/2] xen-pciback: redo VF placement in the virtual
 topology
To: Jan Beulich <jbeulich@suse.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>, Konrad Wilk <konrad.wilk@oracle.com>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
 <8def783b-404c-3452-196d-3f3fd4d72c9e@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <87d771dd-8b00-4101-b76b-21087ea1b1df@oracle.com>
Date: Wed, 19 May 2021 20:36:11 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <8def783b-404c-3452-196d-3f3fd4d72c9e@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.88.102]
X-ClientProxiedBy: BYAPR21CA0016.namprd21.prod.outlook.com
 (2603:10b6:a03:114::26) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8d902290-9f78-43b0-38c0-08d91b2746d2
X-MS-TrafficTypeDiagnostic: BLAPR10MB5010:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BLAPR10MB50107957122FD224933D43338A2A9@BLAPR10MB5010.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:1201;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8d902290-9f78-43b0-38c0-08d91b2746d2
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 00:36:15.6946
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: vycd1kLbPJ2Vm7g4RGd9x2uUFcCFxcDB3KglXGLU6dCWFls4wVGHt3rNHqHChrNwY5hF0C6V80Z9dIpoMemzs/TSNoM0gJ6F6bVP5x4O4u0=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR10MB5010
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9989 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 mlxlogscore=999 adultscore=0
 phishscore=0 malwarescore=0 bulkscore=0 spamscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105200001
X-Proofpoint-ORIG-GUID: tsgkU37CjacsXT6sLQo7P7srcts9V3AR
X-Proofpoint-GUID: tsgkU37CjacsXT6sLQo7P7srcts9V3AR
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9989 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 mlxlogscore=999
 priorityscore=1501 impostorscore=0 suspectscore=0 clxscore=1015
 adultscore=0 bulkscore=0 phishscore=0 spamscore=0 malwarescore=0
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105200001


On 5/18/21 12:13 PM, Jan Beulich wrote:
>  
> @@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struc
>  
>  	/*
>  	 * Keep multi-function devices together on the virtual PCI bus, except
> -	 * virtual functions.
> +	 * that we want to keep virtual functions at func 0 on their own. They
> +	 * aren't multi-function devices and hence their presence at func 0
> +	 * may cause guests to not scan the other functions.


So your reading of the original commit is that whatever the issue was,
only function zero was causing the problem? In other words, you are not
concerned that pci_scan_slot() may now look at function 1 and skip all
higher-numbered functions (assuming the problem is still there)?


-boris


From xen-devel-bounces@lists.xenproject.org Thu May 20 00:37:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 00:37:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130533.244466 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljWh8-0002PW-A4; Thu, 20 May 2021 00:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130533.244466; Thu, 20 May 2021 00:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljWh8-0002PP-72; Thu, 20 May 2021 00:37:46 +0000
Received: by outflank-mailman (input) for mailman id 130533;
 Thu, 20 May 2021 00:37:45 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vYVO=KP=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1ljWh7-0002PJ-1d
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 00:37:45 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 08c007f1-a27a-4b3b-aa3d-5bb7429f5cfc;
 Thu, 20 May 2021 00:37:44 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14K0T4BE079521;
 Thu, 20 May 2021 00:37:42 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 38j5qrb77b-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 20 May 2021 00:37:42 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14K0PNSK110291;
 Thu, 20 May 2021 00:37:41 GMT
Received: from nam12-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam12lp2175.outbound.protection.outlook.com [104.47.55.175])
 by userp3020.oracle.com with ESMTP id 38n4917d7v-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 20 May 2021 00:37:41 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BL3PR10MB5492.namprd10.prod.outlook.com (2603:10b6:208:33f::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Thu, 20 May
 2021 00:37:38 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4129.033; Thu, 20 May 2021
 00:37:38 +0000
Received: from [10.74.100.102] (160.34.88.102) by
 BYAPR21CA0009.namprd21.prod.outlook.com (2603:10b6:a03:114::19) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.7 via Frontend
 Transport; Thu, 20 May 2021 00:37:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 08c007f1-a27a-4b3b-aa3d-5bb7429f5cfc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=ciWryW7wUTAk64dSUZKF3sWgeCfje7xc/ZurcNjaL/4=;
 b=TtFlm6ARGePbut7rkWEjw356lZfoE6tE8TFkkE8Z0fyqMnQrStMhDcjekwqy9B6AzSsS
 IgWpxavrIMPjPbW/4WJLtz9A0pzbI8Zj55Ljd815PBIKeaXyAEdeNO9eRFJLsRtZaJ75
 dqArJNxkCgKJ5KHH2HkTpcj0fpw18HLSTT6z7PyJtmN1yeVGMKDCIK9n79d4t93Kh03w
 32n4YYka2ta9xpLk7joFrq39xYj/fduspau3Tv0J0fgpBTQFi70x9HgZM7uCi2hgmRQJ
 MwaiPnMwtSMjr6AzY09syvkeYQ1Jk58xBf5UiXxdCkuGanT8XTL1v4+ZlDJpPrkE0g49 XQ== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WyFQejNH3ng3eirCC+cm1R6vB3XVwZJDz1/md8ySuRaIFO4K0NNYgIoDkDI2dzLXVSwuDJKM945D1UxMvHkNPcwGQz4gHz5nlNv6glZLd/7d+Akvy8IWPun11HKUQsSOznAXOtNELWeGevsrlXjAHk0QcLD3gPC82LBS2g7uddnNmWQPs2y/wUAT9ohKmbPa0iXuzz7Pdt4BmUuQwgEI2JyFz7nZR4rwz6ZMfsXpWFoI0QxsePQyWPZQIlexuFMhzj7FPXRFuRkuHeNlPDmepMXj3M8X+u73CnLspL+/MgyN8eurp7nX51KQ0O4aB9Q9QN/E1a1K/s3/IvvygsICDQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ciWryW7wUTAk64dSUZKF3sWgeCfje7xc/ZurcNjaL/4=;
 b=NLYquX6CkddQpyGYWMcn3vuo9U1gl6cGa5dLtyI7avOi8v6/MWTy7WpGt4I/w1qyLWsXslo6HnILq7bQkgKG6m58SFFlvAQGpBu8nwTElD3Y3qqFNxHFccbhV9z/L6zzAQAQIZWxmVvh3bdLq+BPtOtTqdLNbSqBUFHykBucCqwI5G2CqXdeC4eN/9m8H2H3AuMhIr1M9aFtgN+wZI+ILWdEvvP9IyTeTXzpo/JxbxcCNJZR0BC3mMU7NqQFrOcCqJ5jX+Zu+E5VxKcSVUPx0PO7mnS/nGVp5Ucu1ebCDWetCL+Sn0m/maYZrs6MsEADZBD2I9eOimYij2xcN2+BSA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=ciWryW7wUTAk64dSUZKF3sWgeCfje7xc/ZurcNjaL/4=;
 b=DL1V/6vZFqR8cva274/CXe6U6QSOc00xc+uoJjeDbVDgPWlUyf7uF8b20VuFHGnSqYAdT9KnyJr3s3lUHJmPbyW0khctIiH174uKvCYG7+FovH9MdvqkjAW9X0/ubLHGE4NcJhqCNLcE/bPGRqLhdxiGmecbIwR/3adu8YWhYSM=
Subject: Re: [PATCH v2 2/2] xen-pciback: reconfigure also from backend watch
 handler
To: Jan Beulich <jbeulich@suse.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Juergen Gross <jgross@suse.com>, Konrad Wilk <konrad.wilk@oracle.com>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
 <2337cbd6-94b9-4187-9862-c03ea12e0c61@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <7ba55a34-f1ef-2c2a-86e1-d2b104b88e54@oracle.com>
Date: Wed, 19 May 2021 20:37:34 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <2337cbd6-94b9-4187-9862-c03ea12e0c61@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.88.102]
X-ClientProxiedBy: BYAPR21CA0009.namprd21.prod.outlook.com
 (2603:10b6:a03:114::19) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e7c3cfcb-54ca-4d8a-d6c6-08d91b277827
X-MS-TrafficTypeDiagnostic: BL3PR10MB5492:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BL3PR10MB5492E9E9BB2D75ED89E7F34F8A2A9@BL3PR10MB5492.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: e7c3cfcb-54ca-4d8a-d6c6-08d91b277827
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 00:37:38.3923
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: KM4P2WPpoEXy7J32ECXdlKKvRaD/9sUL6AjGsLOemISdR62x27FSZNQ2S7mZOglsSzqauhC/kSLuHu9Q/uoRIxg1e1uUjH9QhpFaPihssnc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL3PR10MB5492
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9989 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 spamscore=0 bulkscore=0
 suspectscore=0 mlxlogscore=999 adultscore=0 malwarescore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105200001
X-Proofpoint-GUID: VD7S5TTupQOwMdUknHP-If5-tN9loCKD
X-Proofpoint-ORIG-GUID: VD7S5TTupQOwMdUknHP-If5-tN9loCKD


On 5/18/21 12:14 PM, Jan Beulich wrote:
> When multiple PCI devices get assigned to a guest right at boot, libxl
> incrementally populates the backend tree. The writes for the first of
> the devices trigger the backend watch. In turn xen_pcibk_setup_backend()
> will set the XenBus state to Initialised, at which point no further
> reconfigures would happen unless a device got hotplugged. Arrange for
> reconfigure to also get triggered from the backend watch handler.
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Cc: stable@vger.kernel.org


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Thu May 20 03:10:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 03:10:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130554.244478 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljZ4a-0005gM-Em; Thu, 20 May 2021 03:10:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130554.244478; Thu, 20 May 2021 03:10:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljZ4a-0005gF-AT; Thu, 20 May 2021 03:10:08 +0000
Received: by outflank-mailman (input) for mailman id 130554;
 Thu, 20 May 2021 03:10:06 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljZ4Y-0005c9-EU; Thu, 20 May 2021 03:10:06 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljZ4Y-0006Cr-6K; Thu, 20 May 2021 03:10:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljZ4X-0002eK-RF; Thu, 20 May 2021 03:10:05 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljZ4X-000329-Qj; Thu, 20 May 2021 03:10:05 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5iQtIGqUaskJjTV6lb+YLrwHVGlzc/NHlfy1IjusT1U=; b=VpdNWHiwaeu2b3S1nzBtp+YgkX
	fXK5ta3vDZTJ4kR/ph7YqNtvYi5bog5UPnYZK08z8u5mF5vnmpuIUG3E0EpINm8zTa4A6DMt6upMc
	MiMtFKh7INEydYaNm3vRdBOHTzaKYc+ilrN2Zwbi6Fm3ZrycpVQHqrXkR4ZvM9i/Y01c=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162090-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162090: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c313e52e6459de2e9064767083a0c949c476e32b
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 May 2021 03:10:05 +0000

flight 162090 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162090/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c313e52e6459de2e9064767083a0c949c476e32b
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  272 days
Failing since        152659  2020-08-21 14:07:39 Z  271 days  498 attempts
Testing same since   162070  2021-05-19 01:38:54 Z    1 days    2 attempts

------------------------------------------------------------
506 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 155638 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 20 05:21:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 05:21:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130581.244510 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljb7W-0002Bx-3F; Thu, 20 May 2021 05:21:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130581.244510; Thu, 20 May 2021 05:21:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljb7W-0002Bq-03; Thu, 20 May 2021 05:21:18 +0000
Received: by outflank-mailman (input) for mailman id 130581;
 Thu, 20 May 2021 05:21:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WSyT=KP=epam.com=prvs=5774c33267=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1ljb7U-0002Bh-FD
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 05:21:16 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1b75f90d-76a6-4770-949d-7d15dee536a9;
 Thu, 20 May 2021 05:21:15 +0000 (UTC)
Received: from pps.filterd (m0174679.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14K5Fs4L024003; Thu, 20 May 2021 05:21:13 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2170.outbound.protection.outlook.com [104.47.17.170])
 by mx0a-0039f301.pphosted.com with ESMTP id 38ncmfre6d-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 20 May 2021 05:21:13 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB6068.eurprd03.prod.outlook.com (2603:10a6:208:166::10)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Thu, 20 May
 2021 05:21:07 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::3541:4069:60ca:de3d]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::3541:4069:60ca:de3d%6]) with mapi id 15.20.4129.033; Thu, 20 May 2021
 05:21:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b75f90d-76a6-4770-949d-7d15dee536a9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=DZBvgqB7Vhd+FeUtcnz5ilvqhUOlcnvgkCl4OvW/oNA=;
 b=qT96nyfhZqrN0tTa4IRrtBDzTLKwp3lRfykMGL5+/UTPzpUXIb3sDlP7u/+pMEC+mDObCRbI5K5nu14cMQ6eSWzz4qGj7zYLKv833T7tdQp6sWkGU9FwniQZljJOoFWTkBCzA9ZwcJFcFm6hShjuH0bK/KPW9oYcgqAN/kdi9HVdFpAUE0tbM56c6VsZ/7eyfuV8bwfxCv8kXXRQoXkoV3So/IIpa4DJDvFOl2xOALV/+NHYsax74apz2glNDXp4tGG6T0TVfn7MCWifyAWk3RdAON5jX+4f/UotjxtRpK8K8rek0IRqZ8h82XBwfmwU+5YmImshC3Qu1z2v5tsgIQ==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
CC: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Andrii
 Chepurnyi <Andrii_Chepurnyi@epam.com>
Subject: Re: Hand over of the Xen shared info page
Thread-Topic: Hand over of the Xen shared info page
Thread-Index: AQHXR86B8dxtD5lx60CgXee9ciHSoarhFtmAgAGmygCACItDgIAAMAgAgABnUwA=
Date: Thu, 20 May 2021 05:21:07 +0000
Message-ID: <4b686071-3260-87b1-d240-8d3c2b48f1f8@epam.com>
References: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
 <8ff05bdf-a6c4-6b14-b39c-7d9b3bb9d279@xen.org>
 <1db54c363eae22613280e7181805abee396fe5e9.camel@epam.com>
 <8d1ecf6c-a0d1-d9bc-5daf-d02a34fff1e6@xen.org>
 <alpine.DEB.2.21.2105191604130.14426@sstabellini-ThinkPad-T480s>
In-Reply-To: <alpine.DEB.2.21.2105191604130.14426@sstabellini-ThinkPad-T480s>
Accept-Language: en-US
Content-Language: en-US

Hi, all!

On 5/20/21 2:11 AM, Stefano Stabellini wrote:
> On Wed, 19 May 2021, Julien Grall wrote:
>> On 14/05/2021 10:50, Anastasiia Lukianenko wrote:
>>> Hi Julien!
>> Hello,
>>
>>> On Thu, 2021-05-13 at 09:37 +0100, Julien Grall wrote:
>>>> On 13/05/2021 09:03, Anastasiia Lukianenko wrote:
>>>> The alternative is for U-boot to go through the DT and infer which
>>>> regions are free (IOW any region not described).
>>> Thank you for interest in the problem and advice on how to solve it.
>>> Could you please clarify how we could find free regions using DT in U-
>>> boot?
>> I don't know U-boot code, so I can't tell whether what I suggest would work.
>>
>> In theory, the device-tree should described every region allocated in address
>> space. So if you parse the device-tree and create a list (or any
>> datastructure) with the regions, then any range not present in the list would
>> be free region you could use.
> Yes, any "empty" memory region which is neither memory nor MMIO should
> work.

You need to account for many things while creating the list of regions:
device register mappings, reserved nodes, memory nodes, device tree
overlay parsing, etc. It all seems to be a not-that-trivial task, and after
all, if implemented, it only lives in U-boot and you have to maintain that
code. So, if some other entity needs the same, you need to implement
that as well. And you can also imagine a system without a device tree at all...

So, I am not saying it is not possible to implement, but I just question whether
such an implementation is really a good way forward.

>
>
>> However, I realized a few days ago that the magic pages are not described in
>> the DT. We probably want to fix it by marking the page as "reserved" or create
>> a specific bindings.
>>
>> So you will need a specific quirk for them.
> It should also be possible to keep the shared info page allocated and
> simply pass the address to the kernel by adding the right node to device
> tree.
And then you need to modify your OS, and this is not only Linux...
> To do that, we would have to add a description of the magic pages
> to device tree, which I think would be good to have in any case. In that
> case it would be best to add a proper binding for it under the "xen,xen"
> node.

I would say that querying Xen for such a memory page seems much
cleaner and allows the guest OS either to continue using the existing
method with memory allocation or to use the page provided by Xen.


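[Editor's note] For the other option discussed above (keeping the shared info page allocated and advertising its address via the device tree, under the "xen,xen" node), the binding might look roughly like the fragment below. This is purely illustrative: no such binding existed at the time of this thread, and the `xen,shared-info` property name is invented here for the example.

```dts
/* Hypothetical fragment: the existing Xen hypervisor node, extended
 * with a made-up "xen,shared-info" property carrying the guest
 * physical address of the pre-allocated shared info page. */
hypervisor {
        compatible = "xen,xen-4.15", "xen,xen";
        reg = <0x0 0x38000000 0x0 0x1000>;   /* grant table space */
        xen,shared-info = <0x0 0x39000000>;  /* invented property */
};
```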
From xen-devel-bounces@lists.xenproject.org Thu May 20 05:36:52 2021
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation
Thread-Topic: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation
Thread-Index: AQHXS6PG30lajMBFKE+1h85t0yxrzqrpIB4AgAKNuoA=
Date: Thu, 20 May 2021 05:36:29 +0000
Message-ID:
 <VE1PR08MB52152E43D11EB446000F4563F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518050738.163156-1-penny.zheng@arm.com>
 <7ab73cb0-39d5-f8bf-660f-b3d77f3247bd@xen.org>
In-Reply-To: <7ab73cb0-39d5-f8bf-660f-b3d77f3247bd@xen.org>
Accept-Language: en-US
Content-Language: en-US

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Tuesday, May 18, 2021 7:48 PM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation
>
> Hi Penny,
>
> On 18/05/2021 06:07, Penny Zheng wrote:
> > Create one design doc for 1:1 direct-map and static allocation.
> > It is the first draft and aims to describe why and how we allocate
> > 1:1 direct-map(guest physical == physical) domains, and why and how we
> > let domains on static allocation.
> >
> > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > ---
> >   docs/designs/static_alloc_and_direct_map.md | 239
> ++++++++++++++++++
> >   1 file changed, 239 insertions(+)
> >   create mode 100644 docs/designs/static_alloc_and_direct_map.md
> >
> > diff --git a/docs/designs/static_alloc_and_direct_map.md
> > b/docs/designs/static_alloc_and_direct_map.md
> > new file mode 100644
> > index 0000000000..fdda162188
> > --- /dev/null
> > +++ b/docs/designs/static_alloc_and_direct_map.md
> > @@ -0,0 +1,239 @@
> > +# Preface
> > +
> > +The document is an early draft for 1:1 direct-map memory map (`guest
> > +physical == physical`) of domUs and Static Allocation.
> > +Since the implementation of these two features shares a lot, we would
> > +like to introduce both in one design.
> > +
> > +Right now, these two features are limited to ARM architecture.
> > +
> > +This design aims to describe why and how the guest would be created
> > +as 1:1 direct-map domain, and why and what the static allocation is.
> > +
> > +This document is partly based on Stefano Stabellini's patch serie v1:
> > +[direct-map DomUs](
> > +https://lists.xenproject.org/archives/html/xen-devel/2020-
> 04/msg00707.html).
>
> While for the reviewer this is a useful information to have, I am not sure a
> future reader needs to know all the history. So I would move this to the
> commit message.
>
> > +
> > +This is a first draft and some questions are still unanswered. When
> > +this is the case, it will be included under chapter `DISCUSSION`.
> > +
> > +# Introduction on Static Allocation
> > +
> > +Static allocation refers to system or sub-system(domains) for which
> > +memory areas are pre-defined by configuration using physical address
> ranges.
> > +
> > +## Background
> > +
> > +Cases where needs static allocation:
> > +
> > +  * Static allocation needed whenever a system has a pre-defined
> > +non-changing behaviour. This is usually the case in safety world
> > +where system must behave the same upon reboot, so memory resource
> for
> > +both XEN and domains should be static and pre-defined.
> > +
> > +  * Static allocation needed whenever a guest wants to allocate
> > +memory from refined memory ranges. For example, a system has one
> > +high-speed RAM region, and would like to assign it to one specific domain.
> > +
> > +  * Static allocation needed whenever a system needs a guest
> > +restricted to some known memory area due to hardware limitations
> > +reason. For example, some device can only do DMA to a specific part of
> the memory.
> > +
> > +Limitations:
> > +  * There is no consideration for PV devices at the moment.
> > +
> > +## Design on Static Allocation
> > +
> > +Static allocation refers to system or sub-system(domains) for which
> > +memory areas are pre-defined by configuration using physical address
> ranges.
> > +
> > +These pre-defined memory, -- Static Momery, as parts of RAM reserved
> > +in the
>
> s/Momery/Memory/
>

Thx.

> > +beginning, shall never go to heap allocator or boot allocator for any use.
>
> I think you mean "buddy" rather than "heap". Looking at your code, you are
> treating static memory region as domheap pages.
>
> > +
> > +### Static Allocation for Domains
> > +
> > +### New Deivce Tree Node: `xen,static_mem`
>
> S/Deivce/
>

Thx.

> > +
> > +Here introduces new `xen,static_mem` node to define static memory
> > +nodes for one specific domain.
> > +
> > +For domains on static allocation, users need to pre-define guest RAM
> > +regions in configuration, through `xen,static_mem` node under approriate
> `domUx` node.
> > +
> > +Here is one example:
> > +
> > +
> > +        domU1 {
> > +            compatible = "xen,domain";
> > +            #address-cells = <0x2>;
> > +            #size-cells = <0x2>;
> > +            cpus = <2>;
> > +            xen,static-mem = <0x0 0xa0000000 0x0 0x20000000>;
> > +            ...
> > +        };
> > +
> > +RAM at 0xa0000000 of 512 MB are static memory reserved for domU1 as
> its RAM.
> > +
> > +### New Page Flag: `PGC_reserved`
> > +
> > +In order to differentiate and manage pages reserved as static memory
> > +with those which are allocated from heap allocator for normal
> > +domains, we shall introduce a new page flag `PGC_reserved` to tell.
> > +
> > +Grant pages `PGC_reserved` when initializing static memory.
> > +
> > +### New linked page list: `reserved_page_list` in  `struct domain`
> > +
> > +Right now, for normal domains, on assigning pages to domain, pages
> > +allocated from heap allocator as guest RAM shall be inserted to one
> > +linked page list `page_list` for later managing and storing.
> > +
> > +So in order to tell, pages allocated from static memory, shall be
> > +inserted to a different linked page list `reserved_page_list`.
>
> You already have the flag ``PGC_reserved`` to indicate whether the memory
> is reserved or not. So why do you also need to link list it?
>

Yes, I introduced this linked list to try to fix issues with rebooting a domain on
static allocation.

And after taking your suggestion on getting static memory region info again from
the device tree configuration, I think this linked list is not necessary, and I will
delete it here and in the code too.

> > +
> > +Later, when domain get destroyed and memory relinquished, only pages
> > +in `page_list` go back to heap, and pages in `reserved_page_list` shall not.
>
> While going through the series, I could not find any code implementing this.
> However, this is not enough to prevent a page to go to the heap allocator
> because a domain can release memory at runtime using hypercalls like
> XENMEM_remove_from_physmap.
>
> One of the use case is when the guest decides to balloon out some memory.
> This will call free_domheap_pages().
>
> Effectively, you are treating static memory as domheap pages. So I think it
> would be better if you hook in free_domheap_pages() to decide which
> allocator is used.
>
> Now, if a guest can balloon out memory, it can also balloon in memory.
> There are two cases:
>     1) The region used to be RAM region statically allocated
>     2) The region used to be unallocated.
>
> I think for 1), we need to be able to re-use the page previously. For 2), it is
> not clear to me whether a guest with memory statically allocated should be
> allowed to allocate "dynamic" pages.
>

Yeah, I agree with you on hooking into free_domheap_pages(). I'm thinking that,
for pages with PGC_reserved set, we may create a new function
free_staticmem_pages() to free them.

As for ballooning out or in, it is not supported here. Domains on static
allocation and 1:1 direct-map are all based on dom0less right now, so no PV,
grant table, event channel, etc. is considered.

Right now, only device passthrough into the guest is supported.

> > +### Memory Allocation for Domains on Static Allocation
> > +
> > +RAM regions pre-defined as static memory for one specifc domain shall
> > +be parsed and reserved from the beginning. And they shall never go to
> > +any memory allocator for any use.
>
> Technically, you a
cmUgaW50cm9kdWNpbmcgYSBuZXcgYWxsb2NhdG9yLiBTbyBkbyB5b3UgbWVhbiB0aGV5IHNob3Vs
ZA0KPiBub3QgYmUgZ2l2ZW4gdG8gbmVpdGhlciB0aGUgYnVkZHkgYWxsb2NhdG9yIG5vciB0aGUg
Ym90IGFsbG9jYXRvcj8NCj4gDQoNClllcy4gVGhlc2UgcHJlLWRlZmluZWQgUkFNIHJlZ2lvbnMg
d2lsbCBub3QgYmUgZ2l2ZW4gdG8gYW55IGN1cnJlbnQNCm1lbW9yeSBhbGxvY2F0b3IuIElmIGJl
IGdpdmVuIHRoZXJlLCB0aGVyZSBpcyBubyBndWFyYW50ZWUgdGhhdCBpdCB3aWxsDQpub3QgYmUg
YWxsb2NhdGVkIGZvciBvdGhlciB1c2UuDQoNCkFuZCByaWdodCBub3csIGluIG15IGN1cnJlbnQg
ZGVzaWduLCB0aGVzZSBwcmUtZGVmaW5lZCBSQU0gcmVnaW9ucyBhcmUgZWl0aGVyIGZvcg0Kb25l
IHNwZWNpZmljIGRvbWFpbiBhcyBndWVzdCBSQU0gb3IgYXMgWEVOIGhlYXAuDQogIA0KPiA+ICsN
Cj4gPiArTGF0ZXIgd2hlbiBhbGxvY2F0aW5nIHN0YXRpYyBtZW1vcnkgZm9yIHRoaXMgc3BlY2lm
aWMgZG9tYWluLCBhZnRlcg0KPiA+ICthY3F1aXJpbmcgdGhvc2UgcmVzZXJ2ZWQgcmVnaW9ucywg
dXNlcnMgbmVlZCB0byBhIGRvIHNldCBvZg0KPiA+ICt2ZXJpZmljYXRpb24gYmVmb3JlIGFzc2ln
bmluZy4NCj4gPiArRm9yIGVhY2ggcGFnZSB0aGVyZSwgaXQgYXQgbGVhc3QgaW5jbHVkZXMgdGhl
IGZvbGxvd2luZyBzdGVwczoNCj4gPiArMS4gQ2hlY2sgaWYgaXQgaXMgaW4gZnJlZSBzdGF0ZSBh
bmQgaGFzIHplcm8gcmVmZXJlbmNlIGNvdW50Lg0KPiA+ICsyLiBDaGVjayBpZiB0aGUgcGFnZSBp
cyByZXNlcnZlZChgUEdDX3Jlc2VydmVkYCkuDQo+ID4gKw0KPiA+ICtUaGVuLCBhc3NpZ25pbmcg
dGhlc2UgcGFnZXMgdG8gdGhpcyBzcGVjaWZpYyBkb21haW4sIGFuZCBhbGwgcGFnZXMgZ28NCj4g
PiArdG8gb25lIG5ldyBsaW5rZWQgcGFnZSBsaXN0IGByZXNlcnZlZF9wYWdlX2xpc3RgLg0KPiA+
ICsNCj4gPiArQXQgbGFzdCwgc2V0IHVwIGd1ZXN0IFAyTSBtYXBwaW5nLiBCeSBkZWZhdWx0LCBp
dCBzaGFsbCBiZSBtYXBwZWQgdG8NCj4gPiArdGhlIGZpeGVkIGd1ZXN0IFJBTSBhZGRyZXNzIGBH
VUVTVF9SQU0wX0JBU0VgLCBgR1VFU1RfUkFNMV9CQVNFYCwNCj4gPiAranVzdCBsaWtlIG5vcm1h
bCBkb21haW5zLiBCdXQgbGF0ZXIgaW4gMToxIGRpcmVjdC1tYXAgZGVzaWduLCBpZg0KPiA+ICtg
ZGlyZWN0LW1hcGAgaXMgc2V0LCB0aGUgZ3Vlc3QgcGh5c2ljYWwgYWRkcmVzcyB3aWxsIGVxdWFs
IHRvIHBoeXNpY2FsDQo+IGFkZHJlc3MuDQo+ID4gKw0KPiA+ICsjIyMgU3RhdGljIEFsbG9jYXRp
b24gZm9yIFhlbiBpdHNlbGYNCj4gPiArDQo+ID4gKyMjIyBOZXcgRGVpdmNlIFRyZWUgTm9kZTog
YHhlbixyZXNlcnZlZF9oZWFwYA0KPiANCj4gcy9EZWl2Y2UvRGV2aWNlLw0KPiANCg0KVGh4Lg0K
DQo+ID4gKw0KPiA+ICtTdGF0aWMgbWVtb3J5IGZvciBYZW4gaGVhcCByZWZlcnMgdG8gcGFydHMg
b2YgUkFNIHJlc2VydmVkIGluIHRoZQ0KPiA+ICtiZWdpbm5pbmcgZm9yIFhlbiBoZWFwIG9ubHku
IFRoZSBtZW1vcnkgaXMgcHJlLWRlZmluZWQgdGhyb3VnaCBYRU4NCj4gPiArY29uZmlndXJhdGlv
biB1c2luZyBwaHlzaWNhbCBhZGRyZXNzIHJhbmdlcy4NCj4gPiArDQo+ID4gK1RoZSByZXNlcnZl
ZCBtZW1vcnkgZm9yIFhlbiBoZWFwIGlzIGFuIG9wdGlvbmFsIGZlYXR1cmUgYW5kIGNhbiBiZQ0K
PiA+ICtlbmFibGVkIGJ5IGFkZGluZyBhIGRldmljZSB0cmVlIHByb3BlcnR5IGluIHRoZSBgY2hv
c2VuYCBub2RlLg0KPiA+ICtDdXJyZW50bHksIHRoaXMgZmVhdHVyZSBpcyBvbmx5IHN1cHBvcnRl
ZCBvbiBBQXJjaDY0Lg0KPiA+ICsNCj4gPiArSGVyZSBpcyBvbmUgZXhhbXBsZToNCj4gPiArDQo+
ID4gKw0KPiA+ICsgICAgICAgIGNob3NlbiB7DQo+ID4gKyAgICAgICAgICAgIHhlbixyZXNlcnZl
ZC1oZWFwID0gPDB4MCAweDMwMDAwMDAwIDB4MCAweDQwMDAwMDAwPjsNCj4gPiArICAgICAgICAg
ICAgLi4uDQo+ID4gKyAgICAgICAgfTsNCj4gPiArDQo+ID4gK1JBTSBhdCAweDMwMDAwMDAwIG9m
IDFHIHNpemUgd2lsbCBiZSByZXNlcnZlZCBhcyBoZWFwIG1lbW9yeS4gTGF0ZXIsDQo+ID4gK2hl
YXAgYWxsb2NhdG9yIHdpbGwgYWxsb2NhdGUgbWVtb3J5IG9ubHkgZnJvbSB0aGlzIHNwZWNpZmlj
IHJlZ2lvbi4NCj4gDQo+IFRoaXMgc2VjdGlvbiBpcyBxdWl0ZSBjb25mdXNpbmcuIEkgdGhpbmsg
d2UgbmVlZCB0byBjbGVhcmx5IGRpZmZlcmVudGlhdGUgaGVhcCB2cw0KPiBhbGxvY2F0b3IuDQo+
IA0KPiBJbiBYZW4gd2UgaGF2ZSB0d28gaGVhcHM6DQo+ICAgICAxKSBYZW4gaGVhcDogSXQgaXMg
YWx3YXlzIG1hcHBlZCBpbiBYZW4gdmlydHVhbCBhZGRyZXNzIHNwYWNlLiBUaGlzIGlzDQo+IG1h
aW5seSB1c2VkIGZvciB4ZW4gaW50ZXJuYWwgYWxsb2NhdGlvbi4NCj4gICAgIDIpIERvbWFpbiBo
ZWFwOiBJdCBtYXkgbm90IGFsd2F5cyBiZSBtYXBwZWQgaW4gWGVuIHZpcnR1YWwgYWRkcmVzcyBz
cGFjZS4NCj4gVGhpcyBpcyBtYWlubHkgdXNlZCBmb3IgZG9tYWluIG1lbW9yeSBhbmQgbWFwcGVk
IG9uLWRlbWFuZC4NCj4gDQo+IEZvciBBcm02NCAoYW5kIHg4NiksIHR3byBoZWFwcyBhcmUgYWxs
b2NhdGVkIGZyb20gdGhlIHNhbWUgcmVnaW9uLiBCdXQgb24NCj4gQXJtMzIsIHRoZXkgYXJlIGRp
ZmZlcmVudC4NCj4gDQo+IFdlIGFsc28gaGF2ZSB0d28gYWxsb2NhdG9yOg0KPiAgICAgMSkgQm9v
dCBhbGxvY2F0b3I6IFRoaXMgaXMgdXNlZCBkdXJpbmcgYm9vdCBvbmx5LiBUaGVyZSBpcyBubyBj
b25jZXB0IG9mDQo+IGhlYXAgYXQgdGhpcyB0aW1lLg0KPiAgICAgMikgQnVkZHkgYWxsb2NhdG9y
OiBUaGlzIGlzIHRoZSBjdXJyZW50IHJ1bnRpbWUgYWxsb2NhdG9yLiBUaGlzIGNhbiBlaXRoZXIN
Cj4gYWxsb2NhdG9yIGZyb20gZWl0aGVyIGhlYXAuDQo+IA0KPiBBRkFJQ1QsIHRoaXMgZGVzaWdu
IGlzIGludHJvZHVjaW5nIGEgM3JkIGFsbG9jYXRvciB0aGF0IHdpbGwgcmV0dXJuIGRvbWFpbiBo
ZWFwDQo+IHBhZ2VzLg0KPiANCj4gTm93LCBiYWNrIHRvIHRoaXMgc2VjdGlvbiwgYXJlIHlvdSBz
YXlpbmcgeW91IHdpbGwgc2VwYXJhdGUgdGhlIHR3byBoZWFwcyBhbmQNCj4gZm9yY2UgdGhlIGJ1
ZGR5IGFsbG9jYXRvciB0byBhbGxvY2F0ZSB4ZW4gaGVhcCBwYWdlcyBmcm9tIGEgc3BlY2lmaWMg
cmVnaW9uPw0KPiANCj4gWy4uLl0NCg0KSSB3aWxsIHRyeSB0byBleHBsYWluIGNsZWFybHkgaGVy
ZS4gDQpUaGUgaW50ZW50aW9uIGJlaGluZCB0aGlzIHJlc2VydmVkIGhlYXAgaXMgdGhhdCBmb3Ig
c3VwcG9ydGluZyB0b3RhbCBzdGF0aWMgc3lzdGVtLCB3ZQ0Kbm90IG9ubHkgd2FudCB0byBwcmUt
ZGVmaW5lIG1lbW9yeSByZXNvdXJjZSBmb3IgZ3Vlc3RzLCBidXQgYWxzbyBmb3IgeGVuIHJ1bnRp
bWUNCmFsbG9jYXRpb24uIEFueSBydW50aW1lIGJlaGF2aW9yIGFyZSBtb3JlIHByZWRpY3RhYmxl
Lg0KDQpSaWdodCBub3csIG9uIEFBcmNoNjQsIGFsbCBSQU0sIGV4Y2VwdCByZXNlcnZlZCBtZW1v
cnksIHdpbGwgYmUgZ2l2ZW4gdG8gYnVkZHkNCmFsbG9jYXRvciBhcyBoZWFwLCAgbGlrZSB5b3Ug
c2FpZCwgZ3Vlc3QgUkFNIGZvciBub3JtYWwgZG9tYWluIHdpbGwgYmUgYWxsb2NhdGVkDQpmcm9t
IHRoZXJlLCB4bWFsbG9jIGV2ZW50dWFsbHkgaXMgZ2V0IG1lbW9yeSBmcm9tIHRoZXJlLCBldGMu
IFNvIHdlIHdhbnQgdG8gcmVmaW5lDQp0aGUgaGVhcCBoZXJlLCBub3QgaXRlcmF0aW5nIHRocm91
Z2ggYm9vdGluZm8ubWVtIHRvIHNldCB1cCBYRU4gaGVhcCwgYnV0IGxpa2UNCml0ZXJhdGluZyBi
b290aW5mby4gcmVzZXJ2ZWRfaGVhcCB0byBzZXQgdXAgWEVOIGhlYXAuDQoNClRydWUsIG9uIEFS
TTMyLCB4ZW4gaGVhcCBhbmQgZG9tYWluIGhlYXAgYXJlIHNlcGFyYXRlbHkgbWFwcGVkLCB3aGlj
aCBpcyBtb3JlDQpjb21wbGljYXRlZCBoZXJlLiBUaGF0J3Mgd2h5IEkgb25seSB0YWxraW5nIGFi
b3V0IGltcGxlbWVudGluZyB0aGVzZSBmZWF0dXJlcyBvbg0KQUFyY2g2NCBhcyBmaXJzdCBzdGVw
Lg0KDQo+IA0KPiA+ICsjIyMgTWVtb3J5IEFsbG9jYXRpb24gZm9yIDE6MSBkaXJlY3QtbWFwIERv
bWFpbg0KPiA+ICsNCj4gPiArSW1wbGVtZW50aW5nIG1lbW9yeSBhbGxvY2F0aW9uIGZvciAxOjEg
ZGlyZWN0LW1hcCBkb21haW4gaW5jbHVkZXMgdHdvDQo+IHBhcnRzOg0KPiA+ICtTdGF0aWMgQWxs
b2NhdGlvbiBmb3IgRG9tYWluIGFuZCAxOjEgZGlyZWN0LW1hcC4NCj4gPiArDQo+ID4gK1RoZSBm
aXJzdCBwYXJ0IGhhcyBiZWVuIGVsYWJvcmF0ZWQgYmVmb3JlIGluIGNoYXB0ZXIgYE1lbW9yeQ0K
PiA+ICtBbGxvY2F0aW9uIGZvciBEb21haW5zIG9uIFN0YXRpYyBBbGxvY2F0aW9uYC4gVGhlbiwg
dG8gZW5zdXJlIDE6MQ0KPiA+ICtkaXJlY3QtbWFwLCB3aGVuIHNldHRpbmcgdXAgZ3Vlc3QgUDJN
IG1hcHBpbmcsIGl0IG5lZWRzIHRvIG1ha2Ugc3VyZQ0KPiA+ICt0aGF0IGd1ZXN0IHBoeXNpY2Fs
IGFkZHJlc3MgZXF1YWwgdG8gcGh5c2ljYWwgYWRkcmVzcyhgZ2ZuID09IG1mbmApLg0KPiA+ICsN
Cj4gPiArKkRJU0NVU1NJT046DQo+ID4gKw0KPiA+ICsgICogSGVyZSBvbmx5IHN1cHBvcnRzIGJv
b3RpbmcgdXAgb25lIGRvbWFpbiBvbiBzdGF0aWMgYWxsb2NhdGlvbiBvcg0KPiA+ICtvbiAxOjEg
ZGlyZWN0LW1hcCB0aHJvdWdoIGRldmljZSB0cmVlLCBpcyBgeGxgIGFsc28gbmVlZGVkPw0KPiAN
Cj4gSSB0aGluayB0aGV5IGNhbiBiZSBzZXBhcmF0ZWQgZm9yIG5vdy4NCj4gDQoNCkFncmVlfn4N
Cg0KPiBDaGVlcnMsDQo+IA0KPiAtLQ0KPiBKdWxpZW4gR3JhbGwNCg==


From xen-devel-bounces@lists.xenproject.org Thu May 20 06:07:49 2021
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
Date: Thu, 20 May 2021 06:07:21 +0000
Message-ID:
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
In-Reply-To: <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, May 20, 2021 2:27 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
> 
> On 19/05/2021 03:22, Penny Zheng wrote:
> > Hi Julien
> 
> Hi Penny,
> 
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Tuesday, May 18, 2021 4:58 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>;
> >> xen-devel@lists.xenproject.org; sstabellini@kernel.org
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>
> >> Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static
> >> Allocation
> >>> +beginning, shall never go to heap allocator or boot allocator for any use.
> >>> +
> >>> +Domains on Static Allocation is supported through device tree
> >>> +property `xen,static-mem` specifying reserved RAM banks as this
> >>> +domain's
> >> guest RAM.
> >>
> >> I would suggest to use "physical RAM" when you refer to the host memory.
> >>
> >>> +By default, they shall be mapped to the fixed guest RAM address
> >>> +`GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> >>
> >> There are a few bits that need to be clarified or be part of the description:
> >>     1) "By default" suggests there is an alternative possibility.
> >> However, I don't see any.
> >>     2) Will the first region of xen,static-mem be mapped to
> >> GUEST_RAM0_BASE and the second to GUEST_RAM1_BASE? What if a third
> region is specified?
> >>     3) We don't guarantee the base address and the size of the banks.
> >> Wouldn't it be better to let the admin select the region he/she wants?
> >>     4) How do you determine the number of cells for the address and the size?
> >>
> >
> > The specific implementation on this part could be traced to the last
> > commit
> > https://patchew.org/Xen/20210518052113.725808-1-penny.zheng@arm.com/20210518052113.725808-11-penny.zheng@arm.com/
> 
> Right. My point is an admin should not have to read the code in order to figure
> out how the allocation works.
> 
> >
> > It will exhaust the GUEST_RAM0_SIZE, then seek to the GUEST_RAM1_BASE.
> > GUEST_RAM0 may take up several regions.
> 
> Can this be clarified in the commit message?
> 

Ok, I will expand the commit message to clarify it.

> > Yes, I may add the 1:1 direct-map scenario here to explain the
> > alternative possibility
> 
> Ok. So I would suggest to remove "by default" until the implementation for
> direct-map is added.
> 

Ok, will do. Thx.

> > For the third point, are you suggesting that we could provide an
> > option that lets the user also define guest memory base address/size?
> 
> When reading the document, I originally thought that the first region would be
> mapped to bank1, and then the second region mapped to bank2...
> 
> But from your reply, this is not the expected behavior. Therefore, please ignore
> my suggestion here.
> 
> > I'm confused on the fourth point, you mean the address cell and size cell for
> xen,static-mem = <...>?
> 
> Yes. This should be clarified in the document. See more below why.
> 
> > It will be consistent with the ones defined in the parent node, domUx.
> Hmmm... To take the example you provided, the parent would be chosen.
> However, from the example, I would expect the property #{address, size}-cells
> in domU1 to be used. What did I miss?
> 

Yeah, the property #{address, size}-cells in domU1 will be used. And the
parent node will be domU1.

The dtb property should look as follows:

        chosen {
            domU1 {
                compatible = "xen,domain";
                #address-cells = <0x2>;
                #size-cells = <0x2>;
                cpus = <2>;
                xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;

                ...
            };
        };

> > +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of 512MB size

> >>> +Static Allocation is only supported on AArch64 for now.
> >>
> >> The code doesn't seem to be AArch64 specific. So why can't this be
> >> used for 32-bit Arm?
> >>
> >
> > True, we have plans to make it also workable on AArch32 in the future,
> > because we have considered XEN on cortex-R52.
> 
> All the code seems to be implemented in arm generic code. So isn't it already
> working?
> 
> >>>    static int __init early_scan_node(const void *fdt,
> >>>                                      int node, const char *name, int depth,
> >>>                                      u32 address_cells, u32
> >>> size_cells, @@ -345,6 +394,9 @@ static int __init early_scan_node(const
> void *fdt,
> >>>            process_multiboot_node(fdt, node, name, address_cells, size_cells);
> >>>        else if ( depth == 1 && device_tree_node_matches(fdt, node,
> "chosen") )
> >>>            process_chosen_node(fdt, node, name, address_cells,
> >>> size_cells);
> >>> +    else if ( depth == 2 && fdt_get_property(fdt, node,
> >>> + "xen,static-mem",
> >> NULL) )
> >>> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
> >>> +                              size_cells, &bootinfo.static_mem);
> >>
> >> I am a bit concerned to add yet another method to parse the DT and
> >> all the extra code it will add like in patch #2.
> >>
> >>  From the host PoV, they are memory reserved for a specific purpose.
> >> Would it be possible to consider the reserved-memory binding for that
> >> purpose? This will happen outside of chosen, but we could use a
> >> phandle to refer to the region.
> >>
> >
> > Correct me if I understand wrongly, do you mean this is what the device tree
> snippet looks like:
> 
> Yes, this is what I had in mind. Although I have one small remark below.
> 
> 
> > reserved-memory {
> >     #address-cells = <2>;
> >     #size-cells = <2>;
> >     ranges;
> >
> >     static-mem-domU1: static-mem@0x30000000{
> 
> I think the node would need to contain a compatible (name to be defined).
> 

Ok, maybe, hmmm, how about "xen,static-memory"?

> >        reg = <0x0 0x30000000 0x0 0x20000000>;
> >     };
> > };
> >
> > Chosen {
> >   ...
> > domU1 {
> >     xen,static-mem = <&static-mem-domU1>; }; ...
> > };
> >
> >>>
> >>>        if ( rc < 0 )
> >>>            printk("fdt: node `%s': parsing failed\n", name); diff --git
> >>> a/xen/include/asm-arm/setup.h b/xen/include/asm-arm/setup.h index
> >>> 5283244015..5e9f296760 100644
> >>> --- a/xen/include/asm-arm/setup.h
> >>> +++ b/xen/include/asm-arm/setup.h
> >>> @@ -74,6 +74,8 @@ struct bootinfo {
> >>>    #ifdef CONFIG_ACPI
> >>>        struct meminfo acpi;
> >>>    #endif
> >>> +    /* Static Memory */
> >>> +    struct meminfo static_mem;
> >>>    };
> >>>
> >>>    extern struct bootinfo bootinfo;
> >>>
> >>
> >> Cheers,
> >>
> >> --
> >> Julien Grall
> 
> Cheers,
> 
> --
> Julien Grall

Cheers

Penny Zheng


From xen-devel-bounces@lists.xenproject.org Thu May 20 06:20:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 06:20:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130609.244543 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljc2T-0000ws-AE; Thu, 20 May 2021 06:20:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130609.244543; Thu, 20 May 2021 06:20:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljc2T-0000wl-79; Thu, 20 May 2021 06:20:09 +0000
Received: by outflank-mailman (input) for mailman id 130609;
 Thu, 20 May 2021 06:20:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yP7T=KP=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljc2S-0000wf-8Y
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 06:20:08 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.81]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b3142bfc-dfc6-4753-ad7c-8dbb38b82be9;
 Thu, 20 May 2021 06:20:07 +0000 (UTC)
Received: from DB6PR0802CA0036.eurprd08.prod.outlook.com (2603:10a6:4:a3::22)
 by AM9PR08MB7133.eurprd08.prod.outlook.com (2603:10a6:20b:41e::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Thu, 20 May
 2021 06:20:05 +0000
Received: from DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:4:a3:cafe::26) by DB6PR0802CA0036.outlook.office365.com
 (2603:10a6:4:a3::22) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.23 via Frontend
 Transport; Thu, 20 May 2021 06:20:05 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT051.mail.protection.outlook.com (10.152.21.19) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Thu, 20 May 2021 06:20:05 +0000
Received: ("Tessian outbound 3050e7a5b95d:v92");
 Thu, 20 May 2021 06:20:04 +0000
Received: from 71b7d577fada.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 1592454B-588D-4407-8B66-A17F2E3B96FD.1; 
 Thu, 20 May 2021 06:19:59 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 71b7d577fada.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 May 2021 06:19:59 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB3725.eurprd08.prod.outlook.com (2603:10a6:803:b6::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Thu, 20 May
 2021 06:19:55 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.035; Thu, 20 May 2021
 06:19:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b3142bfc-dfc6-4753-ad7c-8dbb38b82be9
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
Thread-Topic: [PATCH 03/10] xen/arm: introduce PGC_reserved
Thread-Index: AQHXS6W8Xfb+Jc0d2U+rpPnqyW8lIKro/c8AgAEW9PCAASMugIAArd6w
Date: Thu, 20 May 2021 06:19:55 +0000
Message-ID:
 <VE1PR08MB52155DD56E548E98AE937CE8F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <2f4eb08e-261b-70c4-bcbc-e08db36a50a9@xen.org>
In-Reply-To: <2f4eb08e-261b-70c4-bcbc-e08db36a50a9@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 8EBBB523F86A8445B971EEB7ED844952.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: b935bd22-a4bf-4285-2767-08d91b574ef4
x-ms-traffictypediagnostic: VI1PR08MB3725:|AM9PR08MB7133:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM9PR08MB7133E7163D83DEF4FC25470BF72A9@AM9PR08MB7133.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB3725
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	c269a0b8-2789-4615-5cd2-08d91b574987
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 06:20:05.0294
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: b935bd22-a4bf-4285-2767-08d91b574ef4
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT051.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM9PR08MB7133

SGkgSnVsaWVuDQoNCj4gLS0tLS1PcmlnaW5hbCBNZXNzYWdlLS0tLS0NCj4gRnJvbTogSnVsaWVu
IEdyYWxsIDxqdWxpZW5AeGVuLm9yZz4NCj4gU2VudDogVGh1cnNkYXksIE1heSAyMCwgMjAyMSAz
OjQ2IEFNDQo+IFRvOiBQZW5ueSBaaGVuZyA8UGVubnkuWmhlbmdAYXJtLmNvbT47IHhlbi1kZXZl
bEBsaXN0cy54ZW5wcm9qZWN0Lm9yZzsNCj4gc3N0YWJlbGxpbmlAa2VybmVsLm9yZw0KPiBDYzog
QmVydHJhbmQgTWFycXVpcyA8QmVydHJhbmQuTWFycXVpc0Bhcm0uY29tPjsgV2VpIENoZW4NCj4g
PFdlaS5DaGVuQGFybS5jb20+OyBuZCA8bmRAYXJtLmNvbT4NCj4gU3ViamVjdDogUmU6IFtQQVRD
SCAwMy8xMF0geGVuL2FybTogaW50cm9kdWNlIFBHQ19yZXNlcnZlZA0KPiANCj4gDQo+IA0KPiBP
biAxOS8wNS8yMDIxIDA0OjE2LCBQZW5ueSBaaGVuZyB3cm90ZToNCj4gPiBIaSBKdWxpZW4NCj4g
DQo+IEhpIFBlbm55LA0KPiANCj4gPg0KPiA+PiAtLS0tLU9yaWdpbmFsIE1lc3NhZ2UtLS0tLQ0K
PiA+PiBGcm9tOiBKdWxpZW4gR3JhbGwgPGp1bGllbkB4ZW4ub3JnPg0KPiA+PiBTZW50OiBUdWVz
ZGF5LCBNYXkgMTgsIDIwMjEgNTo0NiBQTQ0KPiA+PiBUbzogUGVubnkgWmhlbmcgPFBlbm55Llpo
ZW5nQGFybS5jb20+Ow0KPiA+PiB4ZW4tZGV2ZWxAbGlzdHMueGVucHJvamVjdC5vcmc7IHNzdGFi
ZWxsaW5pQGtlcm5lbC5vcmcNCj4gPj4gQ2M6IEJlcnRyYW5kIE1hcnF1aXMgPEJlcnRyYW5kLk1h
cnF1aXNAYXJtLmNvbT47IFdlaSBDaGVuDQo+ID4+IDxXZWkuQ2hlbkBhcm0uY29tPjsgbmQgPG5k
QGFybS5jb20+DQo+ID4+IFN1YmplY3Q6IFJlOiBbUEFUQ0ggMDMvMTBdIHhlbi9hcm06IGludHJv
ZHVjZSBQR0NfcmVzZXJ2ZWQNCj4gPj4NCj4gPj4NCj4gPj4NCj4gPj4gT24gMTgvMDUvMjAyMSAw
NjoyMSwgUGVubnkgWmhlbmcgd3JvdGU6DQo+ID4+PiBJbiBvcmRlciB0byBkaWZmZXJlbnRpYXRl
IHBhZ2VzIG9mIHN0YXRpYyBtZW1vcnksIGZyb20gdGhvc2UNCj4gPj4+IGFsbG9jYXRlZCBmcm9t
IGhlYXAsIHRoaXMgcGF0Y2ggaW50cm9kdWNlcyBhIG5ldyBwYWdlIGZsYWcgUEdDX3Jlc2VydmVk
DQo+IHRvIHRlbGwuDQo+ID4+Pg0KPiA+Pj4gTmV3IHN0cnVjdCByZXNlcnZlZCBpbiBzdHJ1Y3Qg
cGFnZV9pbmZvIGlzIHRvIGRlc2NyaWJlIHJlc2VydmVkIHBhZ2UNCj4gPj4+IGluZm8sIHRoYXQg
aXMsIHdoaWNoIHNwZWNpZmljIGRvbWFpbiB0aGlzIHBhZ2UgaXMgcmVzZXJ2ZWQgdG8uID4NCj4g
Pj4+IEhlbHBlciBwYWdlX2dldF9yZXNlcnZlZF9vd25lciBhbmQgcGFnZV9zZXRfcmVzZXJ2ZWRf
b3duZXIgYXJlDQo+ID4+PiBkZXNpZ25hdGVkIHRvIGdldC9zZXQgcmVzZXJ2ZWQgcGFnZSdzIG93
bmVyLg0KPiA+Pj4NCj4gPj4+IFN0cnVjdCBkb21haW4gaXMgZW5sYXJnZWQgdG8gbW9yZSB0aGFu
IFBBR0VfU0laRSwgZHVlIHRvDQo+ID4+PiBuZXdseS1pbXBvcnRlZCBzdHJ1Y3QgcmVzZXJ2ZWQg
aW4gc3RydWN0IHBhZ2VfaW5mby4NCj4gPj4NCj4gPj4gc3RydWN0IGRvbWFpbiBtYXkgZW1iZWQg
YSBwb2ludGVyIHRvIGEgc3RydWN0IHBhZ2VfaW5mbyBidXQgbmV2ZXINCj4gPj4gZGlyZWN0bHkg
ZW1iZWQgdGhlIHN0cnVjdHVyZS4gU28gY2FuIHlvdSBjbGFyaWZ5IHdoYXQgeW91IG1lYW4/DQo+
ID4+DQo+ID4+Pg0KPiA+Pj4gU2lnbmVkLW9mZi1ieTogUGVubnkgWmhlbmcgPHBlbm55LnpoZW5n
QGFybS5jb20+DQo+ID4+PiAtLS0NCj4gPj4+ICAgIHhlbi9pbmNsdWRlL2FzbS1hcm0vbW0uaCB8
IDE2ICsrKysrKysrKysrKysrKy0NCj4gPj4+ICAgIDEgZmlsZSBjaGFuZ2VkLCAxNSBpbnNlcnRp
b25zKCspLCAxIGRlbGV0aW9uKC0pDQo+ID4+Pg0KPiA+Pj4gZGlmZiAtLWdpdCBhL3hlbi9pbmNs
dWRlL2FzbS1hcm0vbW0uaCBiL3hlbi9pbmNsdWRlL2FzbS1hcm0vbW0uaA0KPiA+PiBpbmRleA0K
PiA+Pj4gMGI3ZGUzMTAyZS4uZDg5MjJmZDVkYiAxMDA2NDQNCj4gPj4+IC0tLSBhL3hlbi9pbmNs
dWRlL2FzbS1hcm0vbW0uaA0KPiA+Pj4gKysrIGIveGVuL2luY2x1ZGUvYXNtLWFybS9tbS5oDQo+
ID4+PiBAQCAtODgsNyArODgsMTUgQEAgc3RydWN0IHBhZ2VfaW5mbw0KPiA+Pj4gICAgICAgICAg
ICAgKi8NCj4gPj4+ICAgICAgICAgICAgdTMyIHRsYmZsdXNoX3RpbWVzdGFtcDsNCj4gPj4+ICAg
ICAgICB9Ow0KPiA+Pj4gLSAgICB1NjQgcGFkOw0KPiA+Pj4gKw0KPiA+Pj4gKyAgICAvKiBQYWdl
IGlzIHJlc2VydmVkLiAqLw0KPiA+Pj4gKyAgICBzdHJ1Y3Qgew0KPiA+Pj4gKyAgICAgICAgLyoN
Cj4gPj4+ICsgICAgICAgICAqIFJlc2VydmVkIE93bmVyIG9mIHRoaXMgcGFnZSwNCj4gPj4+ICsg
ICAgICAgICAqIGlmIHRoaXMgcGFnZSBpcyByZXNlcnZlZCB0byBhIHNwZWNpZmljIGRvbWFpbi4N
Cj4gPj4+ICsgICAgICAgICAqLw0KPiA+Pj4gKyAgICAgICAgc3RydWN0IGRvbWFpbiAqZG9tYWlu
Ow0KPiA+Pj4gKyAgICB9IHJlc2VydmVkOw0KPiA+Pg0KPiA+PiBUaGUgc3BhY2UgaW4gcGFnZV9p
bmZvIGlzIHF1aXRlIHRpZ2h0LCBzbyBJIHdvdWxkIGxpa2UgdG8gYXZvaWQNCj4gPj4gaW50cm9k
dWNpbmcgbmV3IGZpZWxkcyB1bmxlc3Mgd2UgY2FuJ3QgZ2V0IGF3YXkgZnJvbSBpdC4NCj4gPj4N
Cj4gPj4gSW4gdGhpcyBjYXNlLCBpdCBpcyBub3QgY2xlYXIgd2h5IHdlIG5lZWQgdG8gZGlmZmVy
ZW50aWF0ZSB0aGUgIk93bmVyIg0KPiA+PiB2cyB0aGUgIlJlc2VydmVkIE93bmVyIi4gSXQgbWln
aHQgYmUgY2xlYXJlciBpZiB0aGlzIGNoYW5nZSBpcyBmb2xkZWQNCj4gPj4gaW4gdGhlIGZpcnN0
IHVzZXIgb2YgdGhlIGZpZWxkLg0KPiA+Pg0KPiA+PiBBcyBhbiBhc2lkZSwgZm9yIDMyLWJpdCBB
cm0sIHlvdSBuZWVkIHRvIGFkZCBhIDQtYnl0ZSBwYWRkaW5nLg0KPiA+Pg0KPiA+DQo+ID4gWWVh
aCwgSSBtYXkgZGVsZXRlIHRoaXMgY2hhbmdlLiBJIGltcG9ydGVkIHRoaXMgY2hhbmdlIGFzIGNv
bnNpZGVyaW5nDQo+ID4gdGhlIGZ1bmN0aW9uYWxpdHkgb2YgcmVib290aW5nIGRvbWFpbiBvbiBz
dGF0aWMgYWxsb2NhdGlvbi4NCj4gPg0KPiA+IEEgbGl0dGxlIG1vcmUgZGlzY3Vzc2lvbiBvbiBy
ZWJvb3RpbmcgZG9tYWluIG9uIHN0YXRpYyBhbGxvY2F0aW9uLg0KPiA+IENvbnNpZGVyaW5nIHRo
ZSBtYWpvciB1c2VyIGNhc2VzIGZvciBkb21haW4gb24gc3RhdGljIGFsbG9jYXRpb24gYXJlDQo+
ID4gc3lzdGVtIGhhcyBhIHRvdGFsIHByZS1kZWZpbmVkLCBzdGF0aWMgYmVoYXZpb3IgYWxsIHRo
ZSB0aW1lLiBObw0KPiA+IGRvbWFpbiBhbGxvY2F0aW9uIG9uIHJ1bnRpbWUsIHdoaWxlIHRoZXJl
IHN0aWxsIGV4aXN0cyBkb21haW4gcmVib290aW5nLg0KPiANCj4gSG1tbS4uLiBXaXRoIHRoaXMg
c2VyaWVzIGl0IGlzIHN0aWxsIHBvc3NpYmxlIHRvIGFsbG9jYXRlIG1lbW9yeSBhdCBydW50aW1l
DQo+IG91dHNpZGUgb2YgdGhlIHN0YXRpYyBhbGxvY2F0aW9uIChzZWUgbXkgY29tbWVudCBvbiB0
aGUgZGVzaWduIGRvY3VtZW50IFsxXSkuDQo+IFNvIGlzIGl0IG1lYW50IHRvIGJlIGNvbXBsZXRl
Pw0KPiANCg0KSSdtIGd1ZXNzaW5nIHRoYXQgdGhlICJhbGxvY2F0ZSBtZW1vcnkgYXQgcnVudGlt
ZSBvdXRzaWRlIG9mIHRoZSBzdGF0aWMgYWxsb2NhdGlvbiIgaXMNCnJlZmVycmluZyB0byBYRU4g
aGVhcCBvbiBzdGF0aWMgYWxsb2NhdGlvbiwgdGhhdCBpcywgdXNlcnMgcHJlLWRlZmluaW5nIGhl
YXAgaW4gZGV2aWNlIHRyZWUNCmNvbmZpZ3VyYXRpb24gdG8gbGV0IHRoZSB3aG9sZSBzeXN0ZW0g
bW9yZSBzdGF0aWMgYW5kIHByZWRpY3RhYmxlLg0KDQpBbmQgSSd2ZSByZXBsaWVkIHlvdSBpbiB0
aGUgZGVzaWduIHRoZXJlLCBzb3JyeSBmb3IgdGhlIGxhdGUgcmVwbHkuIFNhdmUgeW91ciB0aW1l
LCBhbmQNCknigJlsbCBwYXN0ZSBoZXJlOg0KDQoiUmlnaHQgbm93LCBvbiBBQXJjaDY0LCBhbGwg
UkFNLCBleGNlcHQgcmVzZXJ2ZWQgbWVtb3J5LCB3aWxsIGJlIGZpbmFsbHkgZ2l2ZW4gdG8NCmJ1
ZGR5IGFsbG9jYXRvciBhcyBoZWFwLCAgbGlrZSB5b3Ugc2FpZCwgZ3Vlc3QgUkFNIGZvciBub3Jt
YWwgZG9tYWluIHdpbGwgYmUgYWxsb2NhdGVkDQpmcm9tIHRoZXJlLCB4bWFsbG9jIGV2ZW50dWFs
bHkgaXMgZ2V0IG1lbW9yeSBmcm9tIHRoZXJlLCBldGMuIFNvIHdlIHdhbnQgdG8gcmVmaW5lIHRo
ZSBoZWFwDQpoZXJlLCBub3QgaXRlcmF0aW5nIHRocm91Z2ggYGJvb3RpbmZvLm1lbWAgdG8gc2V0
IHVwIFhFTiBoZWFwLCBidXQgbGlrZSBpdGVyYXRpbmcNCmBib290aW5mby4gcmVzZXJ2ZWRfaGVh
cGAgdG8gc2V0IHVwIFhFTiBoZWFwLg0KDQpPbiBBUk0zMiwgeGVuIGhlYXAgYW5kIGRvbWFpbiBo
ZWFwIGFyZSBzZXBhcmF0ZWx5IG1hcHBlZCwgd2hpY2ggaXMgbW9yZSBjb21wbGljYXRlZA0KaGVy
ZS4gVGhhdCdzIHdoeSBJIG9ubHkgdGFsa2luZyBhYm91dCBpbXBsZW1lbnRpbmcgdGhlc2UgZmVh
dHVyZXMgb24gQUFyY2g2NCBhcyBmaXJzdCBzdGVwLiINCg0KIEFib3ZlIGltcGxlbWVudGF0aW9u
IHdpbGwgYmUgZGVsaXZlcmVkIHRocm91Z2ggYW5vdGhlciBwYXRjaCBTZXJpZS4gVGhpcyBwYXRj
aCBTZXJpZQ0KSXMgb25seSBmb2N1c2luZyBvbiBEb21haW4gb24gU3RhdGljIEFsbG9jYXRpb24u
IA0KDQo+ID4NCj4gPiBBbmQgd2hlbiByZWJvb3RpbmcgZG9tYWluIG9uIHN0YXRpYyBhbGxvY2F0
aW9uLCBhbGwgdGhlc2UgcmVzZXJ2ZWQNCj4gPiBwYWdlcyBjb3VsZCBub3QgZ28gYmFjayB0byBo
ZWFwIHdoZW4gZnJlZWluZyB0aGVtLiAgU28gSSBhbQ0KPiA+IGNvbnNpZGVyaW5nIHRvIHVzZSBv
bmUgZ2xvYmFsIGBzdHJ1Y3QgcGFnZV9pbmZvKltET01JRF1gIHZhbHVlIHRvIHN0b3JlLg0KPiA+
DQo+ID4gQXMgSmFuIHN1Z2dlc3RlZCwgd2hlbiBkb21haW4gZ2V0IHJlYm9vdGVkLCBzdHJ1Y3Qg
ZG9tYWluIHdpbGwgbm90IGV4aXN0DQo+IGFueW1vcmUuDQo+ID4gQnV0IEkgdGhpbmsgRE9NSUQg
aW5mbyBjb3VsZCBsYXN0Lg0KPiBZb3Ugd291bGQgbmVlZCB0byBtYWtlIHN1cmUgdGhlIGRvbWlk
IHRvIGJlIHJlc2VydmVkLiBCdXQgSSBhbSBub3QgZW50aXJlbHkNCj4gY29udmluY2VkIHRoaXMg
aXMgbmVjZXNzYXJ5IGhlcmUuDQo+IA0KPiBXaGVuIHJlY3JlYXRpbmcgdGhlIGRvbWFpbiwgeW91
IG5lZWQgYSB3YXkgdG8ga25vdyBpdHMgY29uZmlndXJhdGlvbi4NCj4gTW9zdGx5IGxpa2VseSB0
aGlzIHdpbGwgY29tZSBmcm9tIHRoZSBEZXZpY2UtVHJlZS4gQXQgd2hpY2ggcG9pbnQsIHlvdSBj
YW4gYWxzbw0KPiBmaW5kIHRoZSBzdGF0aWMgcmVnaW9uIGZyb20gdGhlcmUuDQo+IA0KDQpUcnVl
LCB0cnVlLiBJJ2xsIGRpZyBtb3JlIGluIHlvdXIgc3VnZ2VzdGlvbiwgdGh4LiDwn5iJDQoNCj4g
Q2hlZXJzLA0KPiANCj4gWzFdIDw3YWI3M2NiMC0zOWQ1LWY4YmYtNjYwZi1iM2Q3N2YzMjQ3YmRA
eGVuLm9yZz4NCj4gDQo+IC0tDQo+IEp1bGllbiBHcmFsbA0KDQpDaGVlcnMNCg0KUGVubnkNCg0K


From xen-devel-bounces@lists.xenproject.org Thu May 20 06:29:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 06:29:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130620.244554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcBV-0001lL-DQ; Thu, 20 May 2021 06:29:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130620.244554; Thu, 20 May 2021 06:29:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcBV-0001lE-AA; Thu, 20 May 2021 06:29:29 +0000
Received: by outflank-mailman (input) for mailman id 130620;
 Thu, 20 May 2021 06:29:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yP7T=KP=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljcBT-0001l8-Ob
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 06:29:27 +0000
Received: from EUR03-AM5-obe.outbound.protection.outlook.com (unknown
 [40.107.3.82]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e6cec95a-9ea4-4d64-a2b5-84eb979a19cb;
 Thu, 20 May 2021 06:29:24 +0000 (UTC)
Received: from AM5PR0601CA0056.eurprd06.prod.outlook.com (2603:10a6:206::21)
 by PAXPR08MB6720.eurprd08.prod.outlook.com (2603:10a6:102:130::9) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Thu, 20 May
 2021 06:29:22 +0000
Received: from AM5EUR03FT035.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:206:0:cafe::b6) by AM5PR0601CA0056.outlook.office365.com
 (2603:10a6:206::21) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.23 via Frontend
 Transport; Thu, 20 May 2021 06:29:22 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT035.mail.protection.outlook.com (10.152.16.119) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Thu, 20 May 2021 06:29:22 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Thu, 20 May 2021 06:29:21 +0000
Received: from e69d7be52dcb.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 39EC3DAE-0BEC-49F8-A151-BEC2027BFD9E.1; 
 Thu, 20 May 2021 06:29:16 +0000
Received: from EUR05-DB8-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id e69d7be52dcb.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 May 2021 06:29:16 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB4317.eurprd08.prod.outlook.com (2603:10a6:803:101::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.23; Thu, 20 May
 2021 06:29:10 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.035; Thu, 20 May 2021
 06:29:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e6cec95a-9ea4-4d64-a2b5-84eb979a19cb
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 10/10] xen/arm: introduce allocate_static_memory
Thread-Topic: [PATCH 10/10] xen/arm: introduce allocate_static_memory
Thread-Index: AQHXS6XHA2Y5LfY1BUmlVQXGKXjd9qrpJQIAgAE5LmCAAOBqgIAAq0Tw
Date: Thu, 20 May 2021 06:29:09 +0000
Message-ID:
 <VE1PR08MB521597DE1752C8A7B4D58019F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-11-penny.zheng@arm.com>
 <7e9bacde-8a1c-c9f8-a06d-2f39f2192315@xen.org>
 <VE1PR08MB5215B4D187DFE8AE20DF2B95F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <72a374ca-4d75-70b4-3ee9-ad1dbdefa2d6@xen.org>
In-Reply-To: <72a374ca-4d75-70b4-3ee9-ad1dbdefa2d6@xen.org>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 093885EA1275F943B8F43EC9EA3EA1A4.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 9a33aff5-8782-468f-6f39-08d91b589b18
x-ms-traffictypediagnostic: VI1PR08MB4317:|PAXPR08MB6720:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<PAXPR08MB6720C7714BF89DAC8E3BC312F72A9@PAXPR08MB6720.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:7219;OLM:7219;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
x-ms-exchange-antispam-messagedata:
 =?utf-8?B?V1ZFbzlPeGs5cThkbVlpKzc3WDd1LzN6U2lZb1ZTR3RwMWllb2hpTzRLUzl4?=
 =?utf-8?B?S1gvZ2xMSDhueFFxd2dHZ0VFbVhFMjVmZ0JwbTlSemV0R0dxYytuNnNzSjdB?=
 =?utf-8?B?b1N3WEFkRHd6dmVDRVJYdmREWExsRUo0L3FRNHFQdzF4QTZtK0haOWp4TzI3?=
 =?utf-8?B?SEFsM1VIUW9ralRMKy90RlNmM010Mi9KM2ZXMzFSUnp5OVQ1eVJrcnhJamZL?=
 =?utf-8?B?VjRZTjdxYWM1ekJmSFloMndxNEdzM1g1OFVmaE9QU0pLejNEQ21ZcnBXdklM?=
 =?utf-8?B?NWxIQ1kxQll6UmplQkNJam5wK2RIN3d3Nmo2cGc3RTJZRmsycEp1MW5YeVNy?=
 =?utf-8?B?bGJ2RWdNeUd0cXRYd1FnbFBaVHhQREFOREJIU0tlR2EvS242dUs5a2Yra0R1?=
 =?utf-8?B?bU5FTVJqREhIMlNZeE41S044SzRBWFFzY3FrQTlzZFA2ZnBaOVcxUHh2ZkNY?=
 =?utf-8?B?bXlKM3dTVWhUR1E0ZXVYVEJ6ZlNZeWI4TVZwK1dFVXIwbHVKcEVjVVhlN3ll?=
 =?utf-8?B?TEtTMXNBUFhyTmhzaFhqY1FIQ0hCcFlvYk9ZZkg3a0o4a1pYUk1adnVHb2VF?=
 =?utf-8?B?dGx3U3dqYmFSRHFiTDVxWkhCY24zRUlDdXlOVGVwOUswRnlvWVQ0ZE4wamJ2?=
 =?utf-8?B?UGhOSzFIWTdYK01LR0Q1bjU2NklIWHB5ME5MdE1FZ3NQMG1HRDNXSWlUSzF0?=
 =?utf-8?B?UzZ5eGFZTnFDbHZ0dUpYOU9LMXJDU29oNFBTVkV2NG80cFgzajdsd01vajJr?=
 =?utf-8?B?SWJUdURoaXN1V3VIYWJITkp4cWY4QmIrSk5vYUFMeitGNjRjUWhIT0UzQ2ht?=
 =?utf-8?B?WWxvTlgyMEtsY1JaNUNuOE9WY3MvZkloSHhYbG5QU3U4azBKd3BlY0xFc1Yr?=
 =?utf-8?B?WnlSL01xQ3h6MTZPODBVN0Zqb1l4SDNJY1ZLalRPYkxycFZwNE5Ed1doNW0y?=
 =?utf-8?B?YXZXNXN5TzRwQmRRRTgvUm45eTFhNDBrbnVrTXpjeVFZNWtZeEYvTEhLQ0ht?=
 =?utf-8?B?ZjM5VGhicm9qYVJRdlFCTmtzcHVSNk9hTmhOSlE0ZmJBMHcrQktVSEpYelZQ?=
 =?utf-8?B?NnlEUkRHdDhEUFZvMS9lZU5LQmg3TndzSTFlYVZFVHd3LzJtWTBvSSt2VG5P?=
 =?utf-8?B?YUlYdC8xSDVvNnJ4RUpYOVVBSlNTTm4xeUd5djdZOUN4cWVUUkthMlhvTUxY?=
 =?utf-8?B?OHlKY0gwNGFGcThaT3h5SUo2V3FVdUUxeTBZNm8wVmpyL1ZBdVREdGtpQm9j?=
 =?utf-8?B?MlpIWTBTbVB1Rm02eTJOZXNBTE9xOG5KTjNGUHZZK3BTenF6cnhrZGlTbG1N?=
 =?utf-8?B?dlpqamlCRlZqYXVGa2dLUk9jeHUvMVNtcTFUTDNoNUJnbFI4K3ZNZ3FUUUxT?=
 =?utf-8?B?UHJnQVBFdUlIVDRxNXU2Wk00RHZOYUZadDFYVzh0bzYyWXMvT0tYZERqcS9v?=
 =?utf-8?B?RW1ObVVYLyt5U1FGSFkwS2loVGxsMHZldjN2ZkZuSU5qV2JjMS9GWDAyZXFv?=
 =?utf-8?B?Tm14ODF1ZE1ENWJnVmV4VkFhcERwdzRZOTJSbWd0TVVZeGN4REU0N2hMUXdX?=
 =?utf-8?B?eWx0MGNmN25DdDlBdFY2TXlrQWlleVAwelNXOHVQY3p3N2tXTkJGd1l6eGVY?=
 =?utf-8?B?T1lyZ0NleHdBSCs3Zi90cExKT0ZkTTB0SmI4VDZTYmZwOHBPSERON0lmTzNR?=
 =?utf-8?B?UURNbmdnSkI2NE5wODAveUswK3poR2RrZ3FXVVdjVzhweU1paDBEZDBkUE5v?=
 =?utf-8?Q?qGd75+QrhfhJosU3eHRZmaerYRcTOgHrKy93Wub?=
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4317
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
 57774dc4-c829-4304-058b-08d91b5893af
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 06:29:22.2057
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 9a33aff5-8782-468f-6f39-08d91b589b18
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
 AM5EUR03FT035.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PAXPR08MB6720

Hi Julien

> -----Original Message-----
> From: Julien Grall <julien@xen.org>
> Sent: Thursday, May 20, 2021 4:10 AM
> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
>
>
>
> On 19/05/2021 08:27, Penny Zheng wrote:
> > Hi Julien
> >
> >> -----Original Message-----
> >> From: Julien Grall <julien@xen.org>
> >> Sent: Tuesday, May 18, 2021 8:06 PM
> >> To: Penny Zheng <Penny.Zheng@arm.com>;
> >> xen-devel@lists.xenproject.org; sstabellini@kernel.org
> >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> >> <Wei.Chen@arm.com>; nd <nd@arm.com>
> >> Subject: Re: [PATCH 10/10] xen/arm: introduce allocate_static_memory
> >>
> >> Hi Penny,
> >>
> >> On 18/05/2021 06:21, Penny Zheng wrote:
> >>> This commit introduces allocate_static_memory to allocate static
> >>> memory as guest RAM for domain on Static Allocation.
> >>>
> >>> It uses alloc_domstatic_pages to allocate pre-defined static memory
> >>> banks for this domain, and uses guest_physmap_add_page to set up P2M
> >>> table, guest starting at fixed GUEST_RAM0_BASE, GUEST_RAM1_BASE.
> >>>
> >>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> >>> ---
> >>>    xen/arch/arm/domain_build.c | 157
> >> +++++++++++++++++++++++++++++++++++-
> >>>    1 file changed, 155 insertions(+), 2 deletions(-)
> >>>
> >>> diff --git a/xen/arch/arm/domain_build.c
> >>> b/xen/arch/arm/domain_build.c index 30b55588b7..9f662313ad 100644
> >>> --- a/xen/arch/arm/domain_build.c
> >>> +++ b/xen/arch/arm/domain_build.c
> >>> @@ -437,6 +437,50 @@ static bool __init allocate_bank_memory(struct
> >> domain *d,
> >>>        return true;
> >>>    }
> >>>
> >>> +/*
> >>> + * #ram_index and #ram_index refer to the index and starting
> >>> +address of guest
> >>> + * memory kank stored in kinfo->mem.
> >>> + * Static memory at #smfn of #tot_size shall be mapped #sgfn, and
> >>> + * #sgfn will be next guest address to map when returning.
> >>> + */
> >>> +static bool __init allocate_static_bank_memory(struct domain *d,
> >>> +                                               struct kernel_info *kinfo,
> >>> +                                               int ram_index,
> >>
> >> Please use unsigned.
> >>
> >>> +                                               paddr_t ram_addr,
> >>> +                                               gfn_t* sgfn,
> >>
> >> I am confused, what is the difference between ram_addr and sgfn?
> >>
> >
> > We need to constructing kinfo->mem(guest RAM banks) here, and we are
> > indexing in static_mem(physical ram banks). Multiple physical ram
> > banks consist of one guest ram bank(like, GUEST_RAM0).
> >
> > ram_addr  here will either be GUEST_RAM0_BASE, or GUEST_RAM1_BASE,
> for
> > now. I kinds struggled in how to name it. And maybe it shall not be a
> > parameter here.
> >
> > Maybe I should switch.. case.. on the ram_index, if its 0, its
> > GUEST_RAM0_BASE, And if its 1, its GUEST_RAM1_BASE.
>
> You only need to set kinfo->mem.bank[ram_index].start once. This is when
> you know the bank is first used.
>
> AFAICT, this function will map the memory for a range start at ``sgfn``.
> It doesn't feel this belongs to the function.
>
> The same remark is valid for kinfo->mem.nr_banks.
>

Ok. I finally totally understand what you suggest here.
I'll try to let the action related to setting kinfo->mem.bank[ram_index].start/
kinfo->mem.bank[ram_index].size/ kinfo->mem. nr_banks out of this function,
and only keep the simple functionality of mapping the memory for a range start
at ``sgfn``.

> >>> +                                               mfn_t smfn,
> >>> +                                               paddr_t tot_size) {
> >>> +    int res;
> >>> +    struct membank *bank;
> >>> +    paddr_t _size = tot_size;
> >>> +
> >>> +    bank = &kinfo->mem.bank[ram_index];
> >>> +    bank->start = ram_addr;
> >>> +    bank->size = bank->size + tot_size;
> >>> +
> >>> +    while ( tot_size > 0 )
> >>> +    {
> >>> +        unsigned int order = get_allocation_size(tot_size);
> >>> +
> >>> +        res = guest_physmap_add_page(d, *sgfn, smfn, order);
> >>> +        if ( res )
> >>> +        {
> >>> +            dprintk(XENLOG_ERR, "Failed map pages to DOMU: %d", res);
> >>> +            return false;
> >>> +        }
> >>> +
> >>> +        *sgfn = gfn_add(*sgfn, 1UL << order);
> >>> +        smfn = mfn_add(smfn, 1UL << order);
> >>> +        tot_size -= (1ULL << (PAGE_SHIFT + order));
> >>> +    }
> >>> +
> >>> +    kinfo->mem.nr_banks = ram_index + 1;
> >>> +    kinfo->unassigned_mem -= _size;
> >>> +
> >>> +    return true;
> >>> +}
> >>> +
> >>>    static void __init allocate_memory(struct domain *d, struct
> >>> kernel_info
> >> *kinfo)
> >>>    {
> >>>        unsigned int i;
> >>> @@ -480,6 +524,116 @@ fail:
> >>>              (unsigned long)kinfo->unassigned_mem >> 10);
> >>>    }
> >>>
> >>> +/* Allocate memory from static memory as RAM for one specific
> >>> +domain d. */ static void __init allocate_static_memory(struct domain *d,
> >>> +                                                struct kernel_info
> >>> +*kinfo) {
> >>> +    int nr_banks, _banks = 0;
> >>
> >> AFAICT, _banks is the index in the array. I think it would be clearer
> >> if it is caller 'bank' or 'idx'.
> >>
> >
> > Sure, I’ll use the 'bank' here.
> >
> >>> +    size_t ram0_size = GUEST_RAM0_SIZE, ram1_size = GUEST_RAM1_SIZE;
> >>> +    paddr_t bank_start, bank_size;
> >>> +    gfn_t sgfn;
> >>> +    mfn_t smfn;
> >>> +
> >>> +    kinfo->mem.nr_banks = 0;
> >>> +    sgfn = gaddr_to_gfn(GUEST_RAM0_BASE);
> >>> +    nr_banks = d->arch.static_mem.nr_banks;
> >>> +    ASSERT(nr_banks >= 0);
> >>> +
> >>> +    if ( kinfo->unassigned_mem <= 0 )
> >>> +        goto fail;
> >>> +
> >>> +    while ( _banks < nr_banks )
> >>> +    {
> >>> +        bank_start = d->arch.static_mem.bank[_banks].start;
> >>> +        smfn = maddr_to_mfn(bank_start);
> >>> +        bank_size = d->arch.static_mem.bank[_banks].size;
> >>
> >> The variable name are slightly confusing because it doesn't tell
> >> whether this is physical are guest RAM. You might want to consider to
> >> prefix them with p (resp. g) for physical (resp. guest) RAM.
> >
> > Sure, I'll rename to make it more clearly.
> >
> >>
> >>> +
> >>> +        if ( !alloc_domstatic_pages(d, bank_size >> PAGE_SHIFT,
> >>> + bank_start,
> >> 0) )
> >>> +        {
> >>> +            printk(XENLOG_ERR
> >>> +                   "%pd: cannot allocate static memory"
> >>> +                   "(0x%"PRIx64" - 0x%"PRIx64")",
> >>
> >> bank_start and bank_size are both paddr_t. So this should be PRIpaddr.
> >
> > Sure, I'll change
> >
> >>
> >>> +                   d, bank_start, bank_start + bank_size);
> >>> +            goto fail;
> >>> +        }
> >>> +
> >>> +        /*
> >>> +         * By default, it shall be mapped to the fixed guest RAM address
> >>> +         * `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`.
> >>> +         * Starting from RAM0(GUEST_RAM0_BASE).
> >>> +         */
> >>
> >> Ok. So you are first trying to exhaust the guest bank 0 and then
> >> moved to bank 1. This wasn't entirely clear from the design document.
> >>
> >> I am fine with that, but in this case, the developper should not need
> >> to know that (in fact this is not part of the ABI).
> >>
> >> Regarding this code, I am a bit concerned about the scalability if we
> >> introduce a second bank.
> >>
> >> Can we have an array of the possible guest banks and increment the
> >> index when exhausting the current bank?
> >>
> >
> > Correct me if I understand wrongly,
> >
> > What you suggest here is that we make an array of guest banks, right
> > now, including
> > GUEST_RAM0 and GUEST_RAM1. And if later, adding more guest banks, it
> > will not bring scalability problem here, right?
>
> Yes. This should also reduce the current complexity of the code.
>
> Cheers,
>
> --
> Julien Grall

Cheers

Penny
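
[Editorial note] Julien's closing suggestion in the [PATCH 10/10] thread above (keep an array of the possible guest banks and advance the index once the current bank is exhausted) can be sketched in isolation. The structures and addresses below are made up for illustration; this is a hypothetical model, not the actual Xen code:

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified stand-ins for the Xen types. */
typedef uint64_t paddr_t;

struct membank {
    paddr_t start;
    paddr_t size;
};

/* Fixed guest RAM regions, mirroring the GUEST_RAM0/GUEST_RAM1 layout
 * (base/size values are made up for this sketch). */
#define NR_GUEST_BANKS 2

static const struct membank guest_banks[NR_GUEST_BANKS] = {
    { 0x40000000ULL,  0x40000000ULL  },  /* "RAM0" */
    { 0x200000000ULL, 0x200000000ULL },  /* "RAM1" */
};

/*
 * Walk the physical banks and give each a guest address, filling guest
 * bank 0 first and moving to the next guest bank once the current one
 * cannot hold the next physical bank.  Returns the number of physical
 * banks placed, or -1 if the guest address space is exhausted.
 */
static int place_static_banks(const struct membank *pbank, size_t nr_pbanks,
                              paddr_t gaddr_out[])
{
    size_t gidx = 0, i;
    paddr_t used = 0;  /* bytes consumed in the current guest bank */

    for ( i = 0; i < nr_pbanks; i++ )
    {
        /* Advance to the next guest bank until this physical bank fits. */
        while ( gidx < NR_GUEST_BANKS &&
                used + pbank[i].size > guest_banks[gidx].size )
        {
            gidx++;
            used = 0;
        }
        if ( gidx == NR_GUEST_BANKS )
            return -1;  /* out of guest address space */

        gaddr_out[i] = guest_banks[gidx].start + used;
        used += pbank[i].size;
    }
    return (int)i;
}
```

With this shape, adding a hypothetical GUEST_RAM2 later means appending one entry to `guest_banks` rather than adding another special case, which is the scalability point raised in the review.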


From xen-devel-bounces@lists.xenproject.org Thu May 20 06:39:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 06:39:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130627.244565 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcLF-0003D3-Bb; Thu, 20 May 2021 06:39:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130627.244565; Thu, 20 May 2021 06:39:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcLF-0003Cw-8E; Thu, 20 May 2021 06:39:33 +0000
Received: by outflank-mailman (input) for mailman id 130627;
 Thu, 20 May 2021 06:39:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dGsj=KP=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ljcLE-0003Cp-6b
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 06:39:32 +0000
Received: from mail-qk1-x733.google.com (unknown [2607:f8b0:4864:20::733])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e520b53-27af-48f8-a688-dd49eb0c160f;
 Thu, 20 May 2021 06:39:31 +0000 (UTC)
Received: by mail-qk1-x733.google.com with SMTP id o27so15147777qkj.9
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 23:39:31 -0700 (PDT)
Received: from mail-qt1-f173.google.com (mail-qt1-f173.google.com.
 [209.85.160.173])
 by smtp.gmail.com with ESMTPSA id m15sm1291215qtn.47.2021.05.19.23.39.30
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 23:39:30 -0700 (PDT)
Received: by mail-qt1-f173.google.com with SMTP id g8so4523807qtp.4
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 23:39:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e520b53-27af-48f8-a688-dd49eb0c160f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=e/NSxqpkIDrHAb8EdngT+J5y92xOhsZUELUZeumbo14=;
        b=XZhu4NB364LO5HgzFR5xaRq+2+Ny8I/tbuM557W/djcP2WZR5/YY1P3k16G0KqQZ5g
         9wOC75EFGAaEHt77aJ4y5qFZynUNK1/otklqj7so1biOf1FGROrwm/ILpXyxkwLvM+aC
         k3452c8bAQ6BEmJklQOi+zYSoBDbgxiFQQFO0=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=e/NSxqpkIDrHAb8EdngT+J5y92xOhsZUELUZeumbo14=;
        b=kIFewTmZjCmPhU5/es2xzjzAAHzrtH4r8aktVDK1FQmVfZ++fP3P00fNAj0HiJ5ctr
         PJyB7SRJwH2nBo0pejSwBSygw8n97EsYiSSwvbIn7B1fnXBicQy5xFEDGcdH/JvyNgau
         WxzBBD8MWcwQ/nA8effpOOkzUscaO7hTxXERqJas2zXOTanGIX1WEQcMscS8DBGWC3i0
         +N+jpfO3FHfwincFDK5HRmFNTzff5L7UNLlETUZSD0rmdBu+W4jEiBIF4/obE0RndStp
         KFGe0cvp4zAP6FAURRLwiYiKrSvwGPUAI9Ra0BQ4sKk7va1Pjyl1w00sGs2aN7Mq7MNC
         vJmw==
X-Gm-Message-State: AOAM530iNhLs3jV/SdmfvP67tgBEMrhwJ7VaZQYwlYfP8DJ7bpjNpWxr
	+mNXd9i3n7nVwwyGkR1aNoVw24s6eDTWbg==
X-Google-Smtp-Source: ABdhPJxPYyA1WiJdHgs6JUw6dzp4Q/hcjWQAiEXFjNhIJiqDqKgisBOq+LHqSoDkAFKzAbWQ07LY+Q==
X-Received: by 2002:ae9:c010:: with SMTP id u16mr3398113qkk.133.1621492770618;
        Wed, 19 May 2021 23:39:30 -0700 (PDT)
X-Received: by 2002:a05:6638:14ce:: with SMTP id l14mr3931980jak.90.1621492759440;
 Wed, 19 May 2021 23:39:19 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-5-tientzu@chromium.org> <ae050c36-8179-e465-84f0-e46c361bd71e@gmail.com>
In-Reply-To: <ae050c36-8179-e465-84f0-e46c361bd71e@gmail.com>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 20 May 2021 14:39:08 +0800
X-Gmail-Original-Message-ID: <CALiNf29jpi7=UKFPc7TK8wiNbq6TL=Uj-Mvmj9HRvOLnr-r_Ow@mail.gmail.com>
Message-ID: <CALiNf29jpi7=UKFPc7TK8wiNbq6TL=Uj-Mvmj9HRvOLnr-r_Ow@mail.gmail.com>
Subject: Re: [PATCH v7 04/15] swiotlb: Add restricted DMA pool initialization
To: Florian Fainelli <f.fainelli@gmail.com>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
	paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Thu, May 20, 2021 at 2:54 AM Florian Fainelli <f.fainelli@gmail.com> wrote:
>
>
>
> On 5/17/2021 11:42 PM, Claire Chang wrote:
> > Add the initialization function to create restricted DMA pools from
> > matching reserved-memory nodes.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > ---
> >  include/linux/device.h  |  4 +++
> >  include/linux/swiotlb.h |  3 +-
> >  kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
> >  3 files changed, 82 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/device.h b/include/linux/device.h
> > index 38a2071cf776..4987608ea4ff 100644
> > --- a/include/linux/device.h
> > +++ b/include/linux/device.h
> > @@ -416,6 +416,7 @@ struct dev_links_info {
> >   * @dma_pools:       Dma pools (if dma'ble device).
> >   * @dma_mem: Internal for coherent mem override.
> >   * @cma_area:        Contiguous memory area for dma allocations
> > + * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
> >   * @archdata:        For arch-specific additions.
> >   * @of_node: Associated device tree node.
> >   * @fwnode:  Associated device node supplied by platform firmware.
> > @@ -521,6 +522,9 @@ struct device {
> >  #ifdef CONFIG_DMA_CMA
> >       struct cma *cma_area;           /* contiguous memory area for dma
> >                                          allocations */
> > +#endif
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +     struct io_tlb_mem *dma_io_tlb_mem;
> >  #endif
> >       /* arch specific additions */
> >       struct dev_archdata     archdata;
> > diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
> > index 216854a5e513..03ad6e3b4056 100644
> > --- a/include/linux/swiotlb.h
> > +++ b/include/linux/swiotlb.h
> > @@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
> >   *           range check to see if the memory was in fact allocated by this
> >   *           API.
> >   * @nslabs:  The number of IO TLB blocks (in groups of 64) between @start and
> > - *           @end. This is command line adjustable via setup_io_tlb_npages.
> > + *           @end. For default swiotlb, this is command line adjustable via
> > + *           setup_io_tlb_npages.
> >   * @used:    The number of used IO TLB block.
> >   * @list:    The free list describing the number of free entries available
> >   *           from each index.
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index b849b01a446f..1d8eb4de0d01 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -39,6 +39,13 @@
> >  #ifdef CONFIG_DEBUG_FS
> >  #include <linux/debugfs.h>
> >  #endif
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +#include <linux/io.h>
> > +#include <linux/of.h>
> > +#include <linux/of_fdt.h>
> > +#include <linux/of_reserved_mem.h>
> > +#include <linux/slab.h>
> > +#endif
> >
> >  #include <asm/io.h>
> >  #include <asm/dma.h>
> > @@ -690,3 +697,72 @@ static int __init swiotlb_create_default_debugfs(void)
> >  late_initcall(swiotlb_create_default_debugfs);
> >
> >  #endif
> > +
> > +#ifdef CONFIG_DMA_RESTRICTED_POOL
> > +static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
> > +                                 struct device *dev)
> > +{
> > +     struct io_tlb_mem *mem = rmem->priv;
> > +     unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
> > +
> > +     if (dev->dma_io_tlb_mem)
> > +             return 0;
> > +
> > +     /*
> > +      * Since multiple devices can share the same pool, the private data,
> > +      * io_tlb_mem struct, will be initialized by the first device attached
> > +      * to it.
> > +      */
> > +     if (!mem) {
> > +             mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
> > +             if (!mem)
> > +                     return -ENOMEM;
> > +
> > +             if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
> > +                     kfree(mem);
> > +                     return -EINVAL;
>
> This could probably deserve a warning here to indicate that the reserved
> area must be accessible within the linear mapping as I would expect a
> lot of people to trip over that.

Ok. Will add it.
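
[Editorial note] The early-validation pattern being discussed (emit a diagnostic, free the partially initialized descriptor, return -EINVAL) might look roughly like the standalone mock below. The names and structures are hypothetical stand-ins, not the actual kernel code:

```c
#include <assert.h>
#include <errno.h>
#include <stdio.h>
#include <stdlib.h>

/* Toy stand-in for the pool descriptor (hypothetical). */
struct io_tlb_mem { unsigned long nslabs; };

static int page_is_highmem;  /* models the PageHighMem(...) check */

/*
 * Mock of the init path: allocate the descriptor, but reject (with a
 * diagnostic, per the review comment) reserved areas that would not be
 * accessible within the linear mapping.
 */
static int rmem_device_init_mock(unsigned long nslabs, struct io_tlb_mem **out)
{
    struct io_tlb_mem *mem = calloc(1, sizeof(*mem));

    if (!mem)
        return -ENOMEM;

    if (page_is_highmem) {
        /* The warning requested above: say *why* the init failed. */
        fprintf(stderr,
                "restricted DMA pool must be accessible within the linear mapping\n");
        free(mem);            /* undo the partial initialization */
        return -EINVAL;
    }

    mem->nslabs = nslabs;
    *out = mem;
    return 0;
}
```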

>
> Reviewed-by: Florian Fainelli <f.fainelli@gmail.com>
> --
> Florian
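
[Editorial note] The hunk above sizes the allocation with `kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL)`. The arithmetic behind `struct_size()` (header size plus `n` trailing flexible-array elements) can be re-implemented in plain userspace C as a rough sketch; the real kernel macro additionally saturates on overflow, and `struct slot` here is a made-up stand-in for the slot records:

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

/* Simplified slot record; the real slot structure has more fields. */
struct slot { size_t orig_addr; unsigned int list; };

struct pool {
    unsigned long nslabs;
    struct slot slots[];  /* C99 flexible array member */
};

/*
 * Userspace re-implementation of the size arithmetic behind the
 * kernel's struct_size(p, member, n): the fixed header plus n trailing
 * elements.  (The real macro also guards against multiply overflow.)
 * sizeof does not evaluate its operand, so using p here is fine even
 * before p is assigned.
 */
#define STRUCT_SIZE(p, member, n) \
    (sizeof(*(p)) + (n) * sizeof((p)->member[0]))

static struct pool *pool_alloc(unsigned long nslabs)
{
    struct pool *p = calloc(1, STRUCT_SIZE(p, slots, nslabs));

    if (p)
        p->nslabs = nslabs;
    return p;
}
```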


From xen-devel-bounces@lists.xenproject.org Thu May 20 06:41:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 06:41:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130635.244576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcN8-0004Zi-Py; Thu, 20 May 2021 06:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130635.244576; Thu, 20 May 2021 06:41:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcN8-0004Zb-My; Thu, 20 May 2021 06:41:30 +0000
Received: by outflank-mailman (input) for mailman id 130635;
 Thu, 20 May 2021 06:41:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RJtO=KP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ljcN7-0004ZT-9X
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 06:41:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f67741ca-acba-48c5-a5df-b5138bf8791b;
 Thu, 20 May 2021 06:41:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AC806B024;
 Thu, 20 May 2021 06:41:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f67741ca-acba-48c5-a5df-b5138bf8791b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621492886; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=mVXTagReuGycKErdPG7FDZ+tTCn3ID5QCn8f5wW5t/E=;
	b=nAgwpRA4hRG2dfTSd1M7F/fv+pZmGlEOYjWV4tlabM+8ojkqVM+MAvcUKwayEd9oEdE3yJ
	TkOgph35BcWOkRZxdBbmKAHUEfYtgWq8IMKGYRq2moh0pN60BhRvJg3946DKK9JIZ2KinM
	9zeSjQqPp93uuU86XlKP/D+J4QpXk4E=
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
 <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
 <c14d7a27-b486-01c1-1a24-70f286c34431@xen.org>
 <b8413748-a889-8b0c-df93-2c93ed832369@xen.org>
 <95144b63-292b-3d60-b7d2-1847a1611fd6@suse.com>
 <911b7981-5c92-b224-1ce3-c312ebd423f7@xen.org>
From: Juergen Gross <jgross@suse.com>
Subject: Re: Preserving transactions accross Xenstored Live-Update
Message-ID: <9b2b0fc1-a0a3-043d-8924-05c05a547e91@suse.com>
Date: Thu, 20 May 2021 08:41:25 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <911b7981-5c92-b224-1ce3-c312ebd423f7@xen.org>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="pbXZBJu5LoogYkwt1xYBJxbQEUecFhGOK"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--pbXZBJu5LoogYkwt1xYBJxbQEUecFhGOK
Content-Type: multipart/mixed; boundary="lyWCHCmrnenusREgf0Ao9orD9SJ3bR7pm";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Julien Grall <julien@xen.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Edwin Torok <edvin.torok@citrix.com>, "Doebel, Bjoern" <doebel@amazon.de>,
 raphning@amazon.co.uk, "Durrant, Paul" <pdurrant@amazon.co.uk>
Message-ID: <9b2b0fc1-a0a3-043d-8924-05c05a547e91@suse.com>
Subject: Re: Preserving transactions accross Xenstored Live-Update
References: <13bbb51e-f63d-a886-272f-e6a6252fb468@xen.org>
 <377d042d-40ec-dafc-3d03-370c4f5dbb4c@suse.com>
 <c14d7a27-b486-01c1-1a24-70f286c34431@xen.org>
 <b8413748-a889-8b0c-df93-2c93ed832369@xen.org>
 <95144b63-292b-3d60-b7d2-1847a1611fd6@suse.com>
 <911b7981-5c92-b224-1ce3-c312ebd423f7@xen.org>
In-Reply-To: <911b7981-5c92-b224-1ce3-c312ebd423f7@xen.org>

--lyWCHCmrnenusREgf0Ao9orD9SJ3bR7pm
Content-Type: multipart/mixed;
 boundary="------------823AEA93739DD27E2782A2E3"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------823AEA93739DD27E2782A2E3
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 19.05.21 19:10, Julien Grall wrote:
> Hi Juergen,
>=20
> On 19/05/2021 13:50, Juergen Gross wrote:
>> On 19.05.21 14:33, Julien Grall wrote:
>>>
>>>
>>> On 19/05/2021 13:32, Julien Grall wrote:
>>>> Hi Juergen,
>>>>
>>>> On 19/05/2021 10:09, Juergen Gross wrote:
>>>>> On 18.05.21 20:11, Julien Grall wrote:
>>>>>>
>>>>>> I have started to look at preserving transactions across Live-Update
>>>>>> in C Xenstored. So far, I managed to transfer transactions that
>>>>>> read/write existing nodes.
>>>>>>
>>>>>> Now, I am running into trouble transferring new/deleted nodes within
>>>>>> a transaction with the existing migration format.
>>>>>>
>>>>>> C Xenstored will keep track of nodes accessed during the transaction
>>>>>> but not the children (AFAICT for performance reasons).
>>>>>
>>>>> Not performance reasons, but because there isn't any need for that:
>>>>>
>>>>> The children are either unchanged (so the non-transaction node records
>>>>> apply), or they will be among the tracked nodes (transaction node
>>>>> records apply). So in both cases all children should be known.
>>>> In theory, opening a new transaction means you will not see any
>>>> modification in the global database until the transaction has been
>>>> committed. What you describe would break that because a client would
>>>> be able to see new nodes added outside of the transaction.
>>>>
>>>> However, C Xenstored implements neither of the two. Currently, when
>>>> a node is accessed within the transaction, we will also store the
>>>> names of the current children.
>>>>
>>>> To give an example with access to the global DB (prefixed with TID0)
>>>> and within a transaction (TID1):
>>>>
>>>>      1) TID0: MKDIR "data/bar"
>>>>      2) Start transaction TID1
>>>>      3) TID1: DIRECTORY "data"
>>>>         -> This will cache the node data
>>>>      4) TID0: MKDIR "data/foo"
>>>>         -> This will create "foo" in the global database
>>>>      5) TID1: MKDIR "data/fish"
>>>>         -> This will create "fish" in the transaction
>>>>      6) TID1: DIRECTORY "data"
>>>>         -> This will only return "bar" and "fish"
>>>>
>>>> If we Live-Update between 4) and 5), then we should make sure that
>>>> "bar" cannot be seen in the listing by TID1.
>>>
>>> I meant "foo" here. Sorry for the confusion.
>>>
>>>>
>>>> Therefore, I don't think we can restore the children using the
>>>> global node here. Instead we need to find a way to transfer the list
>>>> of known children within the transaction.
>>>>
>>>> As a fun fact, C Xenstored implements transactions weirdly, so TID1
>>>> will be able to access "bar" if it knows the name but not list it.
>>
>> And this is the basic problem, I think.
>>
>> C Xenstored should be repaired by adding all (remaining) children of a
>> node into the TID's database when the list of children is modified
>> either globally or in a transaction. A child having been added globally
>> needs to be added as "deleted" into the TID's database.
>
> IIUC, for every modification in the global database, we would need to
> walk every single transaction and check whether a parent was accessed.
> Am I correct?

Not really. When a node is being read during a transaction and it is
found in the global database only, its gen-count can be tested for
being older or newer than the transaction start. If it is newer, we can
traverse the path up to "/" and treat each parent the same way (so if
a parent is found in the transaction database, the presence of the
child can be verified, and if it is global only, the gen-count can be
tested against the transaction again).
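
For illustration, that walk could be sketched in C as below. This is a
simplified, hypothetical model, not the actual C Xenstored structures:
`struct node`, `in_ta_db` and `visible_in_transaction` are made-up names,
and the real daemon would additionally verify the child's presence in the
transaction copy rather than merely stop the walk.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Hypothetical, simplified node; not the real C Xenstored layout. */
struct node {
    struct node *parent;
    uint64_t generation;   /* bumped on every global modification */
    bool in_ta_db;         /* node already copied into the transaction DB? */
};

/*
 * A node found only in the global database is visible to a transaction
 * iff neither it nor any ancestor was modified globally after the
 * transaction started. The cost is bounded by the node's depth.
 */
static bool visible_in_transaction(const struct node *n,
                                   uint64_t ta_start_gen)
{
    for ( ; n != NULL; n = n->parent )
    {
        if ( n->in_ta_db )
            return true;   /* transaction-local copy governs visibility */
        if ( n->generation > ta_start_gen )
            return false;  /* created/changed after transaction start */
    }
    return true;           /* reached "/" without finding a newer node */
}
```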

> If so, I don't think this is a workable solution because of the cost to
> execute a single command.

My variant will affect transaction-internal reads only, and the
additional cost will be limited by the distance of the read node from
the root node.

> Is it something you plan to address differently with your rework of the DB?

Yes. I want to have the transaction specific variants of nodes linked to
the global ones, which solves this problem in an easy way.


Juergen

--------------823AEA93739DD27E2782A2E3
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------823AEA93739DD27E2782A2E3--

--lyWCHCmrnenusREgf0Ao9orD9SJ3bR7pm--

--pbXZBJu5LoogYkwt1xYBJxbQEUecFhGOK
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCmBJUFAwAAAAAACgkQsN6d1ii/Ey+H
QQf/ek+rawQt/Cf2PlUToX1R5xK/Pb9FPIlWCA1WjW5KTXwFyoo955g/OJOlSHlZZUIVAjd5wO4t
RunPAAU9wARy83x7LzZmAyoKJnGOSPsU+kpZslhn4XIqEpQhcB3mfURo7doxSKf5cyo9Pk1ZcI9h
WkEW4MpBRzTgkXTCvil4gWMMMoleiviF3/511Bs5IFGDQvpuCuOOuxdagUPT1AlwTHSJFLYbK/H+
lw7/utyvnYef9Vp3M32PvmNbOpx67aXcQ61jLmJAycbuZNsR2672AO6BgvxLL1blm808rXAlyuFG
ctOXgt/edB3AfjQJuzkF6qPcg/jjzPtF9dj0bCZZaQ==
=ogy7
-----END PGP SIGNATURE-----

--pbXZBJu5LoogYkwt1xYBJxbQEUecFhGOK--


From xen-devel-bounces@lists.xenproject.org Thu May 20 06:47:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 06:47:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130642.244587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcTI-0005IY-GT; Thu, 20 May 2021 06:47:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130642.244587; Thu, 20 May 2021 06:47:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcTI-0005IR-DP; Thu, 20 May 2021 06:47:52 +0000
Received: by outflank-mailman (input) for mailman id 130642;
 Thu, 20 May 2021 06:47:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dGsj=KP=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1ljcTG-0005IL-Vx
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 06:47:51 +0000
Received: from mail-pg1-x536.google.com (unknown [2607:f8b0:4864:20::536])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 04e0867d-a7d3-40ae-ae32-ee08de62c650;
 Thu, 20 May 2021 06:47:49 +0000 (UTC)
Received: by mail-pg1-x536.google.com with SMTP id k15so11146255pgb.10
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 23:47:49 -0700 (PDT)
Received: from mail-pf1-f182.google.com (mail-pf1-f182.google.com.
 [209.85.210.182])
 by smtp.gmail.com with ESMTPSA id r5sm5817767pjd.2.2021.05.19.23.47.48
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 19 May 2021 23:47:48 -0700 (PDT)
Received: by mail-pf1-f182.google.com with SMTP id e17so1039722pfl.5
 for <xen-devel@lists.xenproject.org>; Wed, 19 May 2021 23:47:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 04e0867d-a7d3-40ae-ae32-ee08de62c650
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=oIhFsW3yOUmKSioHfdtl49xT1nsW2ymAOSCthcMXiio=;
        b=lM9veaUv42y4nXibj1lGSdifgvjsNbUKUEol+uJziWw6kVRwnZkDD52tXqcK/vhasS
         PUDj92pN5vsIbUfabcqexhaZYlSojZJ1wq1mzK9ZTms/+1Va4jsCnNJ6mu4u4DoMsmtY
         16/lsdYHKt3Z41zBC1z7c65B0IdsqOKp+QHbc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=oIhFsW3yOUmKSioHfdtl49xT1nsW2ymAOSCthcMXiio=;
        b=K65/Ma0MtKtPthbBfRk7MNkZRqyuSv1a3RKOSHl6E6S5Fg3v+nRekqZqHNS4p2/UDX
         rt1h5tO/MdcQGZfWkd0lmUQn4JzuKJHZotNNBJmCh9NHBnj44DFWHKA9Z11jdXP1mLHE
         TCUz/bTzOmRDf+P5Ngh7x77a0fZFK5HOvyJ1Jp5vbNKLlfYarDTd4B7+ynnkiJR5D2gN
         c694LGiN+imZrbWSFDCvBdQtHl6Lfuat3iAQ7zms0Cs7OxQpAnUE4tsnUnn+kpXcgIq0
         QsIYUczlrq+YxtqVX1TZIk7dDZ6oCWvhIuE/v0f1u+w9JdEvO43wwxzhYjXZ8DDuPNZb
         Dayw==
X-Gm-Message-State: AOAM531K45sgJk6cp7LZgk3K/B2uzvdI20c1+7b6j6vEmTAVVeBX4AAt
	FJkcfhw++hlgKz3nJ+PDWh6T+lh8vBktYw==
X-Google-Smtp-Source: ABdhPJyF/QJOp/NTDxAyajeaHl1mJOdn6Hq/NRyQj8aGcj4BdB1Fzd6TjCMWgsLaTLY+WYPxnPo7kA==
X-Received: by 2002:a62:d108:0:b029:25d:497e:2dfd with SMTP id z8-20020a62d1080000b029025d497e2dfdmr3267362pfg.29.1621493268803;
        Wed, 19 May 2021 23:47:48 -0700 (PDT)
X-Received: by 2002:a5d:8c82:: with SMTP id g2mr3632711ion.34.1621492834046;
 Wed, 19 May 2021 23:40:34 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-2-tientzu@chromium.org> <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
In-Reply-To: <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 20 May 2021 14:40:23 +0800
X-Gmail-Original-Message-ID: <CALiNf2-9fRbH3Xs=fA+N1iRztFxeC0iTsyOSZFe=F42uwXS0Sg@mail.gmail.com>
Message-ID: <CALiNf2-9fRbH3Xs=fA+N1iRztFxeC0iTsyOSZFe=F42uwXS0Sg@mail.gmail.com>
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
To: Florian Fainelli <f.fainelli@gmail.com>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, 
	paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Thu, May 20, 2021 at 2:50 AM Florian Fainelli <f.fainelli@gmail.com> wrote:
>
>
>
> On 5/17/2021 11:42 PM, Claire Chang wrote:
> > Add a new function, swiotlb_init_io_tlb_mem, for the io_tlb_mem struct
> > initialization to make the code reusable.
> >
> > Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl.
> >
> > Signed-off-by: Claire Chang <tientzu@chromium.org>
> > ---
> >  kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++----------------------
> >  1 file changed, 25 insertions(+), 26 deletions(-)
> >
> > diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
> > index 8ca7d505d61c..d3232fc19385 100644
> > --- a/kernel/dma/swiotlb.c
> > +++ b/kernel/dma/swiotlb.c
> > @@ -168,9 +168,30 @@ void __init swiotlb_update_mem_attributes(void)
> >       memset(vaddr, 0, bytes);
> >  }
> >
> > -int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> > +static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
> > +                                 unsigned long nslabs, bool late_alloc)
> >  {
> > +     void *vaddr = phys_to_virt(start);
> >       unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
> > +
> > +     mem->nslabs = nslabs;
> > +     mem->start = start;
> > +     mem->end = mem->start + bytes;
> > +     mem->index = 0;
> > +     mem->late_alloc = late_alloc;
> > +     spin_lock_init(&mem->lock);
> > +     for (i = 0; i < mem->nslabs; i++) {
> > +             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > +             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > +             mem->slots[i].alloc_size = 0;
> > +     }
> > +
> > +     set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
> > +     memset(vaddr, 0, bytes);
>
> You are doing an unconditional set_memory_decrypted() followed by a
> memset here, and then:
>
> > +}
> > +
> > +int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> > +{
> >       struct io_tlb_mem *mem;
> >       size_t alloc_size;
> >
> > @@ -186,16 +207,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
> >       if (!mem)
> >               panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
> >                     __func__, alloc_size, PAGE_SIZE);
> > -     mem->nslabs = nslabs;
> > -     mem->start = __pa(tlb);
> > -     mem->end = mem->start + bytes;
> > -     mem->index = 0;
> > -     spin_lock_init(&mem->lock);
> > -     for (i = 0; i < mem->nslabs; i++) {
> > -             mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
> > -             mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
> > -             mem->slots[i].alloc_size = 0;
> > -     }
> > +
> > +     swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
>
> You convert this call site with swiotlb_init_io_tlb_mem() which did not
> do the set_memory_decrypted()+memset(). Is this okay or should
> swiotlb_init_io_tlb_mem() add an additional argument to do this
> conditionally?

I'm actually not sure if this is okay. If not, I will add an additional
argument for it.
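
For illustration, the extra-argument variant Florian suggests could look
roughly like the standalone sketch below. This is not the real swiotlb
code: the kernel's set_memory_decrypted() is replaced by a counting stub,
and the struct, function and `decrypt` flag names are hypothetical.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

/* Stand-in for the kernel call; here we only count invocations. */
static int decrypt_calls;
static void set_memory_decrypted_stub(void *vaddr, size_t bytes)
{
    (void)vaddr;
    (void)bytes;
    decrypt_calls++;
}

/* Hypothetical, reduced version of struct io_tlb_mem. */
struct io_tlb_mem_sketch {
    size_t nslabs;
    bool late_alloc;
};

/*
 * Shared init helper with an extra flag, so call sites that must not
 * touch memory attributes can skip the decrypt+clear step.
 */
static void init_io_tlb_mem(struct io_tlb_mem_sketch *mem, void *vaddr,
                            size_t nslabs, bool late_alloc, bool decrypt)
{
    size_t bytes = nslabs;  /* real code: nslabs << IO_TLB_SHIFT */

    mem->nslabs = nslabs;
    mem->late_alloc = late_alloc;
    if (decrypt) {
        set_memory_decrypted_stub(vaddr, bytes);
        memset(vaddr, 0, bytes);
    }
}
```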

> --
> Florian


From xen-devel-bounces@lists.xenproject.org Thu May 20 07:03:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 07:03:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130648.244597 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcib-0007b5-T9; Thu, 20 May 2021 07:03:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130648.244597; Thu, 20 May 2021 07:03:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljcib-0007ax-Q6; Thu, 20 May 2021 07:03:41 +0000
Received: by outflank-mailman (input) for mailman id 130648;
 Thu, 20 May 2021 07:03:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljcia-0007ar-S1
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 07:03:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f150f76a-edc3-493a-adca-638e7dc5a9cf;
 Thu, 20 May 2021 07:03:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2DEC0B016;
 Thu, 20 May 2021 07:03:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f150f76a-edc3-493a-adca-638e7dc5a9cf
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621494219; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=1L6iwsMcik0d8dtekBjdkf+tGhrTCEmOukCkZ9TBY0w=;
	b=kYIIA1eb1UoFFRGM0Hufu52lnvaU3dHqoJu+YJxjkwiopVTWnvLdx2HRHmAxmWsqukfDHo
	oNEVi3gq50xWhBKFx6w9B/CityWWaUXBUcvxRPplfILgW2T3VIJPiej5tXook/KtZ9959Z
	ZOKVF6CM8FLMQKMOjdhV/iSWYIzvrY0=
Subject: Re: regression in recent pvops kernels, dom0 crashes early
To: Olaf Hering <olaf@aepfle.de>
Cc: xen-devel@lists.xenproject.org
References: <20210513122457.4182eb7f.olaf@aepfle.de>
 <7abb3c8f-4a9b-700b-5c0c-dc6f42336eab@suse.com>
 <20210519204205.5bf59d51.olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <bb51ff7d-bd02-f039-dace-1c7f31fd2e1e@suse.com>
Date: Thu, 20 May 2021 09:03:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210519204205.5bf59d51.olaf@aepfle.de>
Content-Type: text/plain; charset=windows-1252
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.05.2021 20:42, Olaf Hering wrote:
> Am Mon, 17 May 2021 12:54:02 +0200
> schrieb Jan Beulich <jbeulich@suse.com>:
> 
>> x86/Xen: swap NX determination and GDT setup on BSP
>>
>> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
>> For this to work when NX is not available, x86_configure_nx() needs to
>> be called first.
> 
> 
> Thanks. I tried this patch on-top of the SLE15-SP3 kernel branch.
> Without the patch booting fails as reported.
> With the patch the dom0 starts as expected.

Just to be sure - you did not need the other patch that I said I suspect
is needed as a prereq?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 07:05:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 07:05:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130655.244608 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljckM-0008DH-9f; Thu, 20 May 2021 07:05:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130655.244608; Thu, 20 May 2021 07:05:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljckM-0008DA-6Y; Thu, 20 May 2021 07:05:30 +0000
Received: by outflank-mailman (input) for mailman id 130655;
 Thu, 20 May 2021 07:05:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljckK-0008D4-Ol
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 07:05:28 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e1c71415-5393-4c84-ac77-657a06c0accd;
 Thu, 20 May 2021 07:05:27 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A393FB008;
 Thu, 20 May 2021 07:05:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e1c71415-5393-4c84-ac77-657a06c0accd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621494326; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UN8EQdqlO1ZovXZd8DiiRg4AjMAaYh4SSksCxnz7QS4=;
	b=gtS2naKPs6s7aJQuHlG8F9jJ07JNHj2K1JbyhuJ9JyosuC3jJagSSY85M/lG6KYRVi+MmL
	mJbMbxpbMFpIaDDeSi7yKq8t7+I522uvJzen9M8T5wXTtD+BY87kVEPjcYohMMie37YktN
	pPk6147s81fK2pSGgvAHyL7GDAcWbkc=
Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
To: Julien Grall <julien@xen.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Penny Zheng <Penny.Zheng@arm.com>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <9e22e4de-0d09-5195-bd8f-2ca326264807@suse.com>
 <765312c9-3b71-eb3a-5c8d-2ba0aa019595@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <624d99d3-6299-f712-e667-5330ab4b1492@suse.com>
Date: Thu, 20 May 2021 09:05:26 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <765312c9-3b71-eb3a-5c8d-2ba0aa019595@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.05.2021 21:49, Julien Grall wrote:
> On 19/05/2021 10:49, Jan Beulich wrote:
>> On 19.05.2021 05:16, Penny Zheng wrote:
>>>> From: Julien Grall <julien@xen.org>
>>>> Sent: Tuesday, May 18, 2021 5:46 PM
>>>>
>>>> On 18/05/2021 06:21, Penny Zheng wrote:
>>>>> --- a/xen/include/asm-arm/mm.h
>>>>> +++ b/xen/include/asm-arm/mm.h
>>>>> @@ -88,7 +88,15 @@ struct page_info
>>>>>             */
>>>>>            u32 tlbflush_timestamp;
>>>>>        };
>>>>> -    u64 pad;
>>>>> +
>>>>> +    /* Page is reserved. */
>>>>> +    struct {
>>>>> +        /*
>>>>> +         * Reserved Owner of this page,
>>>>> +         * if this page is reserved to a specific domain.
>>>>> +         */
>>>>> +        struct domain *domain;
>>>>> +    } reserved;
>>>>
>>>> The space in page_info is quite tight, so I would like to avoid introducing new
>>>> fields unless we can't get away from it.
>>>>
>>>> In this case, it is not clear why we need to differentiate the "Owner"
>>>> vs the "Reserved Owner". It might be clearer if this change is folded in the
>>>> first user of the field.
>>>>
>>>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>>>
>>>
>>> Yeah, I may delete this change. I included it with the functionality
>>> of rebooting a domain on static allocation in mind.
>>>
>>> A little more discussion on rebooting a domain on static allocation:
>>> the major use cases for domains on static allocation are systems with
>>> a totally pre-defined, static behavior all the time. There is no
>>> domain allocation at runtime, while domain rebooting still happens.
>>>
>>> And when rebooting a domain on static allocation, all these reserved
>>> pages cannot go back to the heap when they are freed. So I am
>>> considering using one global `struct page_info *[DOMID]` array to
>>> store them.
>>
>> Except such a separate array will consume quite a bit of space for
>> no real gain: v.free has 32 bits of padding space right now on
>> Arm64, so there's room for a domid_t there already. Even on Arm32
>> this could be arranged for, as I doubt "order" needs to be 32 bits
>> wide.
> 
> I agree we shouldn't need 32-bit to cover the "order". Although, I would 
> like to see any user reading the field before introducing it.

Of course, but I thought the plan was to mark static pages with their
designated domid, which would happen prior to domain creation. The
reader of the field would then naturally appear during domain creation.
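
Jan's point about the existing slack can be sketched in a self-contained
way. This is illustrative only, not the real `struct page_info` from
xen/include/asm-arm/mm.h: the field names in `free_state` are guesses,
and only the size arithmetic matters. Narrowing "order" to 16 bits leaves
room for a domid_t inside the existing 8-byte union slot, so marking a
static page with its designated domid costs no extra space.

```c
#include <assert.h>
#include <stdint.h>

typedef uint16_t domid_t;               /* as in Xen's public headers */
#define DOMID_INVALID ((domid_t)0x7FF4) /* marker: no designated owner */

/*
 * Hypothetical free-state member: "order" narrowed to 16 bits so a
 * domid_t naming the page's designated static owner fits alongside.
 */
struct free_state {
    uint16_t order;          /* 16 bits is plenty for an allocation order */
    domid_t domid;           /* designated owner of a static page */
    uint32_t need_tlbflush;  /* stands in for the rest of the slot */
};

/* The union being reused stays exactly 8 bytes (the current "u64 pad"). */
union page_info_v {
    struct free_state free;
    uint64_t pad;
};
```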

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 07:36:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 07:36:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130676.244644 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljdDx-0003PP-5y; Thu, 20 May 2021 07:36:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130676.244644; Thu, 20 May 2021 07:36:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljdDx-0003PI-2j; Thu, 20 May 2021 07:36:05 +0000
Received: by outflank-mailman (input) for mailman id 130676;
 Thu, 20 May 2021 07:36:03 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljdDv-0003P8-EK; Thu, 20 May 2021 07:36:03 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljdDv-0002gc-AH; Thu, 20 May 2021 07:36:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljdDv-0004zt-0s; Thu, 20 May 2021 07:36:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljdDv-0003kz-0M; Thu, 20 May 2021 07:36:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9vWM+98QvISKdNQUBvK4yMYb5LU02Uk7WFUg73UrdNs=; b=pPh/72v9+mUtIi7duBXl57N+Hz
	LKgQC0BuwDlv3Fu1KkStyM0k7tF3UpyK9FJLSGJRE2RmBxg9ycbHKToIzMMNL79MpdhTKJ10Toa2V
	oJ9eZjPR5Mpje4TX0Jd6klRNxT3RhaU2NFMdRrNiXIymWoT5hXi0lD/6Ayg8oBu/338U=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162095-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162095: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=935abe1cc463917c697c1451ec8d313a5d75f7de
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 May 2021 07:36:03 +0000

flight 162095 xen-unstable real [real]
flight 162101 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162095/
http://logs.test-lab.xenproject.org/osstest/logs/162101/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           8 xen-boot            fail pass in 162101-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 162101 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162101 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162078
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162078
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162078
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162078
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162078
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162078
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162078
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162078
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162078
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162078
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162078
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  935abe1cc463917c697c1451ec8d313a5d75f7de
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162078  2021-05-19 05:04:34 Z    1 days
Testing same since   162095  2021-05-19 16:38:54 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   caa9c4471d..935abe1cc4  935abe1cc463917c697c1451ec8d313a5d75f7de -> master


From xen-devel-bounces@lists.xenproject.org Thu May 20 07:43:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 07:43:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130700.244676 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljdKr-0005s2-AL; Thu, 20 May 2021 07:43:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130700.244676; Thu, 20 May 2021 07:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljdKr-0005rv-6n; Thu, 20 May 2021 07:43:13 +0000
Received: by outflank-mailman (input) for mailman id 130700;
 Thu, 20 May 2021 07:43:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljdKq-0005rp-NF
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 07:43:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 934488d6-0b22-4aab-b263-f889b650697e;
 Thu, 20 May 2021 07:43:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 0B022B148;
 Thu, 20 May 2021 07:43:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 934488d6-0b22-4aab-b263-f889b650697e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621496591; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Fm7akdWL3Mft1M3L5KThdacJthcmV01J56Xd2xuO7so=;
	b=kIKysHJFAjOqZR+0HLtpbIKhUdZ8nIN//cmsY2w1EvdUzfTFlvhI/zm4gNx9D+izLpRkUm
	EJAx60oIq8RPFzUzrR9SInyTcjTPQ9XCRauvLHH7o2plmdraCHO5ovCJHNkiUHeHffhYEI
	QPU4PN0kp4e603rokI1Njzdn/cPrRt0=
Subject: Re: [PATCH v2 1/2] xen-pciback: redo VF placement in the virtual
 topology
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>, Konrad Wilk <konrad.wilk@oracle.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
 <8def783b-404c-3452-196d-3f3fd4d72c9e@suse.com>
 <87d771dd-8b00-4101-b76b-21087ea1b1df@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <214a6c61-5f6a-d841-312a-be2abb95f77a@suse.com>
Date: Thu, 20 May 2021 09:43:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <87d771dd-8b00-4101-b76b-21087ea1b1df@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.05.2021 02:36, Boris Ostrovsky wrote:
> 
> On 5/18/21 12:13 PM, Jan Beulich wrote:
>>  
>> @@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struc
>>  
>>  	/*
>>  	 * Keep multi-function devices together on the virtual PCI bus, except
>> -	 * virtual functions.
>> +	 * that we want to keep virtual functions at func 0 on their own. They
>> +	 * aren't multi-function devices and hence their presence at func 0
>> +	 * may cause guests to not scan the other functions.
> 
> 
> So your reading of the original commit is that whatever the issue it was, only function zero was causing the problem? In other words, you are not concerned that pci_scan_slot() may now look at function 1 and skip all higher-numbered function (assuming the problem is still there)?

I'm not sure I understand the question: Whether to look at higher numbered
slots is a function of slot 0's multi-function bit alone, aiui. IOW if
slot 1 is being looked at in the first place, slots 2-7 should also be
looked at.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 07:45:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 07:45:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130706.244687 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljdMo-0006X9-N4; Thu, 20 May 2021 07:45:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130706.244687; Thu, 20 May 2021 07:45:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljdMo-0006X2-Im; Thu, 20 May 2021 07:45:14 +0000
Received: by outflank-mailman (input) for mailman id 130706;
 Thu, 20 May 2021 07:45:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dJWQ=KP=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ljdMn-0006Wu-DZ
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 07:45:13 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.219])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7eae0b4e-14b5-41ad-af7b-84f93b0f038b;
 Thu, 20 May 2021 07:45:12 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.26.1 AUTH)
 with ESMTPSA id y090b8x4K7jA0cc
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 20 May 2021 09:45:10 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7eae0b4e-14b5-41ad-af7b-84f93b0f038b
ARC-Seal: i=1; a=rsa-sha256; t=1621496710; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=gGNfEGUreVRKgFFeL0cX1JaC42UZ4MUPeJkabnu3pB6bBlvj1PYSd7reRHKASLys0E
    4bMqmNmLVQQKKm4GFddvQPViuyeKezrmNLd7imEl/Wu3Bmi0WSA4U4dwxCWjq5zDvU9j
    QxF5iZAEN7Hm9JS5PsemI7zxZjcuj0MidDo+Yvf+TkArZL9qmo+Ai/5h1mOW0qB4iMVy
    rDtSJDQhRG6jnhQbQhV772u9pw9/bx9Q/5jqlzhH2KpNoz9xy+9s+XRU6T/vc/MfvqD3
    VuhROXgmPW6rGyVjsdNhFx7AK08QEeLn4mu7YNwooLBKLD40ngOp8TaLOTIKGHGFYB61
    TCUg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1621496710;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=0OfHzXvdFiyUIiicO0Kahi/YjMVyAnJSl9D97PORxXg=;
    b=IRCe8c+V6qAhYv+xmcF7gGwbfFhGp3VLtgwqo6LaubA/ZzqDpgneDIvBSr2OCyMrTb
    aWyX5uan1oxoX/CTw/ZBwjO0d3UiBnp3iywJxz6bQUF3ZbkfWlrbTe5BFFV9KDw1liqy
    kz9J3enUAtW/LsOtAggvO1vd6mZZhJ/jdPdkJUa4K0jZabNBKfgXzpPmMziLh8MD/9HW
    iF2wGUqRl1FDaV5v1LfsRRadgLO7IJ9LnX9cVMlGwKYwG/Ht/7ElWCBV7XXKSnMJVQSb
    InghQTJ7ayRESrJ31Bcw3Egqk1hng/5108YZPOh8Jyzug3yAf4+vcDS9bZXsHTWCPp9e
    NroA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1621496710;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=0OfHzXvdFiyUIiicO0Kahi/YjMVyAnJSl9D97PORxXg=;
    b=pc5nzVssaCQ3v3GnVrPU+BZ9GAcYGo6H4ao7DrrIY32R2p9ltO4dV7wJcw42+3lYtn
    gHyTCKlncAveU7JcfEWHbgp51CM9y2vxF5bSQfLcEvg3yyXkikLoq7xjcaqkqMeVPtLH
    LW08aPOoJt/FKifSYhlwplNofVM61+twlmzpRy+CPHlZYJxG7R5x3YlYg6pHwNjhiARS
    X4AOQGS6nIHhtWvuoJsN3akj7cuhbM1PkxEVgFsX6eYZdYOdWai/UjnB82xwR4PYI4IK
    CjPyvlw3Af5QYAkqtFJVmzxSECcB1TWFJL7bEpzOQwWgLBG/DHM5UJ9G3Fp9XVwaQ8Yh
    gBaQ==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Thu, 20 May 2021 09:45:03 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <20210520094503.606a1761.olaf@aepfle.de>
In-Reply-To: <bb51ff7d-bd02-f039-dace-1c7f31fd2e1e@suse.com>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
	<7abb3c8f-4a9b-700b-5c0c-dc6f42336eab@suse.com>
	<20210519204205.5bf59d51.olaf@aepfle.de>
	<bb51ff7d-bd02-f039-dace-1c7f31fd2e1e@suse.com>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/T04FX8zY4.2tjF1.8jNEBDL";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/T04FX8zY4.2tjF1.8jNEBDL
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 20 May 2021 09:03:34 +0200
schrieb Jan Beulich <jbeulich@suse.com>:

> Just to be sure - you did not need the other patch that I said I suspect
> is needed as a prereq?

Yes, I needed just this single patch which moves x86_configure_nx up.


Olaf

--Sig_/T04FX8zY4.2tjF1.8jNEBDL
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCmE38ACgkQ86SN7mm1
DoAR3Q//ezkM5a4jqG85ltBDLesCD7XCMJX9i7kj8V+oGyZj5x7+9C5Rz1I6BZxh
c1RsFfRjCIVHoOr9/6JsLcnM/DmhFuxiDOXgiD+UAiZle5oIAY3JV6ldCLR5GJII
xeGUAMiulnybtStsZ5BrWK7mTX6IjB/icXn8YirJeFKGxti8l2TlPee/5xdCO49s
Rs6HbI2eSyofS/rxVdkCBDlopciXQxYWXePfrsqtqZhdRnwT769RD9IXeC4y8P7d
rTi6nvs8tOnVJo7V2ZBoKViKkcF/zSSKDO3LToE4CoPdueRkpPCBJyeXzHXhKhDy
1PQzsp86IcpPchX6UcDyzGYEGHQeM99DG2VzZHz1Mm01fwaGWItawOE+rHvABKMt
HHtERfrXNQEr6iMXBQi6ogmplI8SXyMDG2v8G6e76qfSnN6UQHUJL+ZyIq9ZZlLl
Zf4vZw1320mvG+7sR4uGjULY2tnxs5hrONAsg8LiUDpwGauMMZ3B5Sqrv5bYsVVV
lmeRoVm8+KQkzzx4y87BVYoKVMGGPpESiXMo4dW4KH8GrK44e5a9C/av7H7vxodj
SQZWncF6kp26MFSIHqKkKZmkKaKiKxvMvzMJcgniQOj/ezR2QzXXRj1flS9rs3uh
M8zmsS/nqkytkfRZZoNxphnTKiAQVrplM97zBl7YI0juA9syPHk=
=anl8
-----END PGP SIGNATURE-----

--Sig_/T04FX8zY4.2tjF1.8jNEBDL--


From xen-devel-bounces@lists.xenproject.org Thu May 20 08:05:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 08:05:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130717.244698 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljdgB-00014y-JR; Thu, 20 May 2021 08:05:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130717.244698; Thu, 20 May 2021 08:05:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljdgB-00014p-FQ; Thu, 20 May 2021 08:05:15 +0000
Received: by outflank-mailman (input) for mailman id 130717;
 Thu, 20 May 2021 08:05:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljdgA-00014e-4C; Thu, 20 May 2021 08:05:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljdg9-0003hB-KO; Thu, 20 May 2021 08:05:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljdg9-00061P-8M; Thu, 20 May 2021 08:05:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljdg9-0008K3-7s; Thu, 20 May 2021 08:05:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XL9wlKOsEIORVhA2v01lP1HmbPjjxW4ILH9BeLoVWYs=; b=CqCXDQekxiLdNq2rRRjlbk1WdY
	Lv88FWo5+OzDUeQVe/9rZH15npqe+oxfWFkYBxmbcfDp9wMfalbF0Yfn0m9MlsokRnzkQGhOWbVPq
	IMggdZZZ7uOQbNttBhx+IkoitHgLy1p0aTSuGPsi5YsSa1Z1mL+gEqGEQk7zTypTWeHY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162100-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162100: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=0ad0204ce7f7b512ee349dfbf5cdd751ab0adc1c
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 May 2021 08:05:13 +0000

flight 162100 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162100/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              0ad0204ce7f7b512ee349dfbf5cdd751ab0adc1c
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  314 days
Failing since        151818  2020-07-11 04:18:52 Z  313 days  306 attempts
Testing same since   162100  2021-05-20 04:20:01 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 58184 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 20 08:40:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 08:40:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130728.244712 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljeEE-0005No-El; Thu, 20 May 2021 08:40:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130728.244712; Thu, 20 May 2021 08:40:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljeEE-0005Nh-BF; Thu, 20 May 2021 08:40:26 +0000
Received: by outflank-mailman (input) for mailman id 130728;
 Thu, 20 May 2021 08:40:24 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yP7T=KP=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljeEC-0005Na-An
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 08:40:24 +0000
Received: from EUR04-DB3-obe.outbound.protection.outlook.com (unknown
 [40.107.6.62]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 085aee99-1923-47d1-b6b4-20d16fea9a61;
 Thu, 20 May 2021 08:40:21 +0000 (UTC)
Received: from DB6PR0301CA0100.eurprd03.prod.outlook.com (2603:10a6:6:30::47)
 by AM0PR08MB5315.eurprd08.prod.outlook.com (2603:10a6:208:18e::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Thu, 20 May
 2021 08:40:20 +0000
Received: from DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:6:30:cafe::75) by DB6PR0301CA0100.outlook.office365.com
 (2603:10a6:6:30::47) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.23 via Frontend
 Transport; Thu, 20 May 2021 08:40:20 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 DB5EUR03FT027.mail.protection.outlook.com (10.152.20.121) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Thu, 20 May 2021 08:40:20 +0000
Received: ("Tessian outbound 0f1e4509c199:v92");
 Thu, 20 May 2021 08:40:19 +0000
Received: from 1f7cd618013a.1
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 A1A07728-D74C-489F-B569-8ACEE9411D98.1; 
 Thu, 20 May 2021 08:40:09 +0000
Received: from EUR03-DB5-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 1f7cd618013a.1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Thu, 20 May 2021 08:40:09 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB2672.eurprd08.prod.outlook.com (2603:10a6:802:1c::27)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4108.28; Thu, 20 May
 2021 08:40:07 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.035; Thu, 20 May 2021
 08:40:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 085aee99-1923-47d1-b6b4-20d16fea9a61
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=j9wtlUhWO35YfeOykc3hwJSJu2Gmu6QDo3KokY3O7i4=;
 b=ypFJFIdUKGE39AxMr7zKxKCCQ0Y2/xHDKAfWXkEFo7m1DCllEsjOOgCwyd2EypZqX7GIiSLp9KfLMruEIT+6cLM/Ss/UgwwaesFKPifpYCBEgXMCmBzHkem8b5G6B0+HMcQWKcq5J5karlQHXUsvJh2DN6ZvdnRND+D5vNT6/3c=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=j9wtlUhWO35YfeOykc3hwJSJu2Gmu6QDo3KokY3O7i4=;
 b=ypFJFIdUKGE39AxMr7zKxKCCQ0Y2/xHDKAfWXkEFo7m1DCllEsjOOgCwyd2EypZqX7GIiSLp9KfLMruEIT+6cLM/Ss/UgwwaesFKPifpYCBEgXMCmBzHkem8b5G6B0+HMcQWKcq5J5karlQHXUsvJh2DN6ZvdnRND+D5vNT6/3c=
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
Thread-Topic: [PATCH 03/10] xen/arm: introduce PGC_reserved
Thread-Index: AQHXS6W8Xfb+Jc0d2U+rpPnqyW8lIKro/c8AgAEW9PCAASMugIAArd6wgAAeryA=
Date: Thu, 20 May 2021 08:40:07 +0000
Message-ID:
 <VE1PR08MB5215E19BFE3E7F329229D8E7F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <2f4eb08e-261b-70c4-bcbc-e08db36a50a9@xen.org>
 <VE1PR08MB52155DD56E548E98AE937CE8F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
In-Reply-To:
 <VE1PR08MB52155DD56E548E98AE937CE8F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: CCD544482997E74FAD1A226D6DFF10FD.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 52698d92-1f68-4db6-9c74-08d91b6ae6b2
x-ms-traffictypediagnostic: VI1PR08MB2672:|AM0PR08MB5315:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM0PR08MB5315D0331B078610B418A279F72A9@AM0PR08MB5315.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:10000;OLM:10000;
X-MS-Exchange-SenderADCheck: 1
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2672
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	21e42153-eb96-4b3d-892b-08d91b6adf3c
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	xeRQ+vJUqXXHcyoFdN91d4O+/FNElBvFUIbBf3pjg6SBU7067Xyhy2HS8AfsSUtNVZJCL8N7lNnV3ZkzJ+T+K6aoLq9QOaqCVKvSsVe7x+F/oFqjrU7gZ68ZVmzMYjYcSamDFrqSau2uzhpWesxajm41Z+bWrvQK+vMsOWuiA+u5i3sFlc2z0EoAu1/H6kUiGUpvXS+6uCr5IocD16Ixthf1RekarErabtQ3hyxldsSuarSqm/FCx58ngEqSq/HdF94rg/WIk2HyW1HMM+1ZeHgtOLBC+dxhg6PT1deeHN2vqAVzZs3NDvtMjpSqLoIOb2b2cDgMvVZj9Q3U6hGV9qMbRrvAdb/1l6xFINj9/swhRuyDLESEMq/MLkiDrl+ee4KB3uOK0izDlY1tqdSG4ee7pDyVkxQC9j9Jsd79OVDtOmw/3vpES29kU254aLW/92DqPjPWur2GgE5Y3kJf8tqc81VQBt+3M6Pw3HZ7/w9gZ0mwA0J6/rPYzFCWVfjdjrgFIMemjYo+uFhiyS5RLm3N3tQGoXJTpWH38hIQSdFVWrtZi/6KfNTQQ0ZIKy8y60GWErSj+NWuWof62mP/5ia8l36sGISdg28wFD5saF5ycUnKm7AcCb7LQEgB6qYB/3PgHOQu9FdVNugBtqkGeqtVSam8fJBA7AVkX7P9a5M=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(376002)(346002)(396003)(136003)(39860400002)(36840700001)(46966006)(86362001)(52536014)(36860700001)(2906002)(70586007)(53546011)(70206006)(82310400003)(33656002)(5660300002)(55016002)(26005)(6506007)(356005)(9686003)(2940100002)(82740400003)(4326008)(81166007)(478600001)(336012)(186003)(47076005)(8676002)(110136005)(316002)(8936002)(54906003)(7696005)(83380400001);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 08:40:20.0494
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 52698d92-1f68-4db6-9c74-08d91b6ae6b2
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	DB5EUR03FT027.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR08MB5315

Hi julien

> -----Original Message-----
> From: Penny Zheng
> Sent: Thursday, May 20, 2021 2:20 PM
> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
>
> Hi Julien
>
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > Sent: Thursday, May 20, 2021 3:46 AM
> > To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > sstabellini@kernel.org
> > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > <Wei.Chen@arm.com>; nd <nd@arm.com>
> > Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
> >
> >
> > On 19/05/2021 04:16, Penny Zheng wrote:
> > > Hi Julien
> >
> > Hi Penny,
> >
> > >
> > >> -----Original Message-----
> > >> From: Julien Grall <julien@xen.org>
> > >> Sent: Tuesday, May 18, 2021 5:46 PM
> > >> To: Penny Zheng <Penny.Zheng@arm.com>;
> > >> xen-devel@lists.xenproject.org; sstabellini@kernel.org
> > >> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > >> <Wei.Chen@arm.com>; nd <nd@arm.com>
> > >> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
> > >>
> > >>
> > >>
> > >> On 18/05/2021 06:21, Penny Zheng wrote:
> > >>> In order to differentiate pages of static memory, from those
> > >>> allocated from heap, this patch introduces a new page flag
> > >>> PGC_reserved
> > to tell.
> > >>>
> > >>> New struct reserved in struct page_info is to describe reserved
> > >>> page info, that is, which specific domain this page is reserved
> > >>> to. > Helper page_get_reserved_owner and page_set_reserved_owner
> > >>> are designated to get/set reserved page's owner.
> > >>>
> > >>> Struct domain is enlarged to more than PAGE_SIZE, due to
> > >>> newly-imported struct reserved in struct page_info.
> > >>
> > >> struct domain may embed a pointer to a struct page_info but never
> > >> directly embed the structure. So can you clarify what you mean?
> > >>
> > >>>
> > >>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > >>> ---
> > >>>    xen/include/asm-arm/mm.h | 16 +++++++++++++++-
> > >>>    1 file changed, 15 insertions(+), 1 deletion(-)
> > >>>
> > >>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
> > >> index
> > >>> 0b7de3102e..d8922fd5db 100644
> > >>> --- a/xen/include/asm-arm/mm.h
> > >>> +++ b/xen/include/asm-arm/mm.h
> > >>> @@ -88,7 +88,15 @@ struct page_info
> > >>>             */
> > >>>            u32 tlbflush_timestamp;
> > >>>        };
> > >>> -    u64 pad;
> > >>> +
> > >>> +    /* Page is reserved. */
> > >>> +    struct {
> > >>> +        /*
> > >>> +         * Reserved Owner of this page,
> > >>> +         * if this page is reserved to a specific domain.
> > >>> +         */
> > >>> +        struct domain *domain;
> > >>> +    } reserved;
> > >>
> > >> The space in page_info is quite tight, so I would like to avoid
> > >> introducing new fields unless we can't get away from it.
> > >>
> > >> In this case, it is not clear why we need to differentiate the "Owner"
> > >> vs the "Reserved Owner". It might be clearer if this change is
> > >> folded in the first user of the field.
> > >>
> > >> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
> > >>
> > >
> > > Yeah, I may delete this change. I imported this change as
> > > considering the functionality of rebooting domain on static allocation.
> > >
> > > A little more discussion on rebooting domain on static allocation.
> > > Considering the major user cases for domain on static allocation are
> > > system has a total pre-defined, static behavior all the time. No
> > > domain allocation on runtime, while there still exists domain rebooting.
> >
> > Hmmm... With this series it is still possible to allocate memory at
> > runtime outside of the static allocation (see my comment on the design
> document [1]).
> > So is it meant to be complete?
> >
>
> I'm guessing that the "allocate memory at runtime outside of the static
> allocation" is referring to XEN heap on static allocation, that is, users pre-
> defining heap in device tree configuration to let the whole system more static
> and predictable.
>
> And I've replied you in the design there, sorry for the late reply. Save your time,
> and I’ll paste here:
>
> "Right now, on AArch64, all RAM, except reserved memory, will be finally
> given to buddy allocator as heap,  like you said, guest RAM for normal domain
> will be allocated from there, xmalloc eventually is get memory from there, etc.
> So we want to refine the heap here, not iterating through `bootinfo.mem` to
> set up XEN heap, but like iterating `bootinfo. reserved_heap` to set up XEN
> heap.
>
> On A
Uk0zMiwgeGVuIGhlYXAgYW5kIGRvbWFpbiBoZWFwIGFyZSBzZXBhcmF0ZWx5IG1hcHBlZCwgd2hp
Y2ggaXMgbW9yZQ0KPiBjb21wbGljYXRlZCBoZXJlLiBUaGF0J3Mgd2h5IEkgb25seSB0YWxraW5n
IGFib3V0IGltcGxlbWVudGluZyB0aGVzZSBmZWF0dXJlcw0KPiBvbiBBQXJjaDY0IGFzIGZpcnN0
IHN0ZXAuIg0KPiANCj4gIEFib3ZlIGltcGxlbWVudGF0aW9uIHdpbGwgYmUgZGVsaXZlcmVkIHRo
cm91Z2ggYW5vdGhlciBwYXRjaCBTZXJpZS4gVGhpcw0KPiBwYXRjaCBTZXJpZSBJcyBvbmx5IGZv
Y3VzaW5nIG9uIERvbWFpbiBvbiBTdGF0aWMgQWxsb2NhdGlvbi4NCj4gDQoNCk9oLCBTZWNvbmQg
dGhvdWdodCBvbiB0aGlzLiANCkFuZCBJIHRoaW5rIHlvdSBhcmUgcmVmZXJyaW5nIHRvIGJhbGxv
b24gaW4vb3V0IGhlcmUsIGhtbSwgYWxzbywgbGlrZQ0KSSByZXBsaWVkIHRoZXJlOg0KIkZvciBp
c3N1ZXMgb24gYmFsbG9vbmluZyBvdXQgb3IgaW4sIGl0IGlzIG5vdCBzdXBwb3J0ZWQgaGVyZS4N
CkRvbWFpbiBvbiBTdGF0aWMgQWxsb2NhdGlvbiBhbmQgMToxIGRpcmVjdC1tYXAgYXJlIGFsbCBi
YXNlZCBvbg0KZG9tMC1sZXNzIHJpZ2h0IG5vdywgc28gbm8gUFYsIGdyYW50IHRhYmxlLCBldmVu
dCBjaGFubmVsLCBldGMsIGNvbnNpZGVyZWQuDQoNClJpZ2h0IG5vdywgaXQgb25seSBzdXBwb3J0
cyBkZXZpY2UgZ290IHBhc3N0aHJvdWdoIGludG8gdGhlIGd1ZXN0LiINCg0KPiA+ID4NCj4gPiA+
IEFuZCB3aGVuIHJlYm9vdGluZyBkb21haW4gb24gc3RhdGljIGFsbG9jYXRpb24sIGFsbCB0aGVz
ZSByZXNlcnZlZA0KPiA+ID4gcGFnZXMgY291bGQgbm90IGdvIGJhY2sgdG8gaGVhcCB3aGVuIGZy
ZWVpbmcgdGhlbS4gIFNvIEkgYW0NCj4gPiA+IGNvbnNpZGVyaW5nIHRvIHVzZSBvbmUgZ2xvYmFs
IGBzdHJ1Y3QgcGFnZV9pbmZvKltET01JRF1gIHZhbHVlIHRvIHN0b3JlLg0KPiA+ID4NCj4gPiA+
IEFzIEphbiBzdWdnZXN0ZWQsIHdoZW4gZG9tYWluIGdldCByZWJvb3RlZCwgc3RydWN0IGRvbWFp
biB3aWxsIG5vdA0KPiA+ID4gZXhpc3QNCj4gPiBhbnltb3JlLg0KPiA+ID4gQnV0IEkgdGhpbmsg
RE9NSUQgaW5mbyBjb3VsZCBsYXN0Lg0KPiA+IFlvdSB3b3VsZCBuZWVkIHRvIG1ha2Ugc3VyZSB0
aGUgZG9taWQgdG8gYmUgcmVzZXJ2ZWQuIEJ1dCBJIGFtIG5vdA0KPiA+IGVudGlyZWx5IGNvbnZp
bmNlZCB0aGlzIGlzIG5lY2Vzc2FyeSBoZXJlLg0KPiA+DQo+ID4gV2hlbiByZWNyZWF0aW5nIHRo
ZSBkb21haW4sIHlvdSBuZWVkIGEgd2F5IHRvIGtub3cgaXRzIGNvbmZpZ3VyYXRpb24uDQo+ID4g
TW9zdGx5IGxpa2VseSB0aGlzIHdpbGwgY29tZSBmcm9tIHRoZSBEZXZpY2UtVHJlZS4gQXQgd2hp
Y2ggcG9pbnQsIHlvdQ0KPiA+IGNhbiBhbHNvIGZpbmQgdGhlIHN0YXRpYyByZWdpb24gZnJvbSB0
aGVyZS4NCj4gPg0KPiANCj4gVHJ1ZSwgdHJ1ZS4gSSdsbCBkaWcgbW9yZSBpbiB5b3VyIHN1Z2dl
c3Rpb24sIHRoeC4g8J+YiQ0KPiANCj4gPiBDaGVlcnMsDQo+ID4NCj4gPiBbMV0gPDdhYjczY2Iw
LTM5ZDUtZjhiZi02NjBmLWIzZDc3ZjMyNDdiZEB4ZW4ub3JnPg0KPiA+DQo+ID4gLS0NCj4gPiBK
dWxpZW4gR3JhbGwNCj4gDQo+IENoZWVycw0KPiANCj4gUGVubnkNCg0KQ2hlZXJzDQoNClBlbm55
DQoNCg==


From xen-devel-bounces@lists.xenproject.org Thu May 20 08:50:52 2021
Subject: Re: [PATCH 01/10] xen/arm: introduce domain on Static Allocation
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-2-penny.zheng@arm.com>
 <e1b90f06-92d2-11da-c556-4081907124b8@xen.org>
 <VE1PR08MB521519C6F09E92EDB9C9A1AEF72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <66e32065-ea2d-d000-1a70-e5598a182b6a@xen.org>
 <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <14fb6fe4-c293-6994-8cbc-872d3bd8a3ac@xen.org>
Date: Thu, 20 May 2021 09:50:42 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB5215C1F5041860102BBAD595F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 20/05/2021 07:07, Penny Zheng wrote:
>>> It will be consistent with the ones defined in the parent node, domUx.
>> Hmmm... To take the example you provided, the parent would be chosen.
>> However, from the example, I would expect the property #{address, size}-cells
>> in domU1 to be used. What did I miss?
>>
> 
> Yeah, the property #{address, size}-cells in domU1 will be used. And the parent
> node will be domU1.

You may have misunderstood what I meant. "domU1" is the node that 
contains the property "xen,static-mem". The parent node would be the one 
above (in our case "chosen").

> 
> The dtb property should look like as follows:
> 
>          chosen {
>              domU1 {
>                  compatible = "xen,domain";
>                  #address-cells = <0x2>;
>                  #size-cells = <0x2>;
>                  cpus = <2>;
>                  xen,static-mem = <0x0 0x30000000 0x0 0x20000000>;
> 
>                  ...
>              };
>          };
> 
>>> +DOMU1 on Static Allocation has reserved RAM bank at 0x30000000 of 512MB size
> 
>>>>> +Static Allocation is only supported on AArch64 for now.
>>>>
>>>> The code doesn't seem to be AArch64 specific. So why can't this be
>>>> used for 32-bit Arm?
>>>>
>>>
>>> True, we have plans to make it also workable in AArch32 in the future.
>>> Because we considered XEN on cortex-R52.
>>
>> All the code seems to be implemented in arm generic code. So isn't it already
>> working?
>>
>>>>>     static int __init early_scan_node(const void *fdt,
>>>>>                                       int node, const char *name, int depth,
>>>>>                                       u32 address_cells, u32
>>>>> size_cells, @@ -345,6 +394,9 @@ static int __init early_scan_node(const
>> void *fdt,
>>>>>             process_multiboot_node(fdt, node, name, address_cells, size_cells);
>>>>>         else if ( depth == 1 && device_tree_node_matches(fdt, node,
>> "chosen") )
>>>>>             process_chosen_node(fdt, node, name, address_cells,
>>>>> size_cells);
>>>>> +    else if ( depth == 2 && fdt_get_property(fdt, node,
>>>>> + "xen,static-mem",
>>>> NULL) )
>>>>> +        process_static_memory(fdt, node, "xen,static-mem", address_cells,
>>>>> +                              size_cells, &bootinfo.static_mem);
>>>>
>>>> I am a bit concerned to add yet another method to parse the DT and
>>>> all the extra code it will add like in patch #2.
>>>>
>>>>    From the host PoV, they are memory reserved for a specific purpose.
>>>> Would it be possible to consider the reserve-memory binding for that
>>>> purpose? This will happen outside of chosen, but we could use a
>>>> phandle to refer the region.
>>>>
>>>
>>> Correct me if I understand wrongly, do you mean what this device tree
>> snippet looks like:
>>
>> Yes, this is what I had in mind. Although I have one small remark below.
>>
>>
>>> reserved-memory {
>>>      #address-cells = <2>;
>>>      #size-cells = <2>;
>>>      ranges;
>>>
>>>      static-mem-domU1: static-mem@0x30000000{
>>
>> I think the node would need to contain a compatible (name to be defined).
>>
> 
> Ok, maybe, hmmm, how about "xen,static-memory"?

I would possibly add "domain" in the name to make clear this is domain 
memory. Stefano, what do you think?
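For illustration, the reserved-memory shape being discussed might look like the snippet below, reusing the addresses from the earlier example. The compatible string is a placeholder only (with "domain" in the name, per the suggestion above), not an agreed binding:

```dts
reserved-memory {
    #address-cells = <2>;
    #size-cells = <2>;
    ranges;

    static-mem-domU1: static-mem@30000000 {
        /* Placeholder compatible, name still to be defined. */
        compatible = "xen,domain-static-memory";
        reg = <0x0 0x30000000 0x0 0x20000000>;
    };
};

chosen {
    domU1 {
        compatible = "xen,domain";
        /* Refer to the region by phandle rather than re-describing it. */
        xen,static-mem = <&static-mem-domU1>;
    };
};
```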

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 20 08:59:29 2021
Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <2f4eb08e-261b-70c4-bcbc-e08db36a50a9@xen.org>
 <VE1PR08MB52155DD56E548E98AE937CE8F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <VE1PR08MB5215E19BFE3E7F329229D8E7F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <44b46f35-cc51-9274-77f2-cfd18c998a38@xen.org>
Date: Thu, 20 May 2021 09:59:23 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB5215E19BFE3E7F329229D8E7F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 20/05/2021 09:40, Penny Zheng wrote:
> Hi julien

Hi Penny,

> 
>> -----Original Message-----
>> From: Penny Zheng
>> Sent: Thursday, May 20, 2021 2:20 PM
>> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>> Subject: RE: [PATCH 03/10] xen/arm: introduce PGC_reserved
>>
>> Hi Julien
>>
>>> -----Original Message-----
>>> From: Julien Grall <julien@xen.org>
>>> Sent: Thursday, May 20, 2021 3:46 AM
>>> To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
>>> sstabellini@kernel.org
>>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>>> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
>>>
>>>
>>>
>>> On 19/05/2021 04:16, Penny Zheng wrote:
>>>> Hi Julien
>>>
>>> Hi Penny,
>>>
>>>>
>>>>> -----Original Message-----
>>>>> From: Julien Grall <julien@xen.org>
>>>>> Sent: Tuesday, May 18, 2021 5:46 PM
>>>>> To: Penny Zheng <Penny.Zheng@arm.com>;
>>>>> xen-devel@lists.xenproject.org; sstabellini@kernel.org
>>>>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>>>>> <Wei.Chen@arm.com>; nd <nd@arm.com>
>>>>> Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
>>>>>
>>>>>
>>>>>
>>>>> On 18/05/2021 06:21, Penny Zheng wrote:
>>>>>> In order to differentiate pages of static memory, from those
>>>>>> allocated from heap, this patch introduces a new page flag
>>>>>> PGC_reserved
>>> to tell.
>>>>>>
>>>>>> New struct reserved in struct page_info is to describe reserved
>>>>>> page info, that is, which specific domain this page is reserved
>>>>>> to. > Helper page_get_reserved_owner and page_set_reserved_owner
>>>>>> are designated to get/set reserved page's owner.
>>>>>>
>>>>>> Struct domain is enlarged to more than PAGE_SIZE, due to
>>>>>> newly-imported struct reserved in struct page_info.
>>>>>
>>>>> struct domain may embed a pointer to a struct page_info but never
>>>>> directly embed the structure. So can you clarify what you mean?
>>>>>
>>>>>>
>>>>>> Signed-off-by: Penny Zheng <penny.zheng@arm.com>
>>>>>> ---
>>>>>>     xen/include/asm-arm/mm.h | 16 +++++++++++++++-
>>>>>>     1 file changed, 15 insertions(+), 1 deletion(-)
>>>>>>
>>>>>> diff --git a/xen/include/asm-arm/mm.h b/xen/include/asm-arm/mm.h
>>>>> index
>>>>>> 0b7de3102e..d8922fd5db 100644
>>>>>> --- a/xen/include/asm-arm/mm.h
>>>>>> +++ b/xen/include/asm-arm/mm.h
>>>>>> @@ -88,7 +88,15 @@ struct page_info
>>>>>>              */
>>>>>>             u32 tlbflush_timestamp;
>>>>>>         };
>>>>>> -    u64 pad;
>>>>>> +
>>>>>> +    /* Page is reserved. */
>>>>>> +    struct {
>>>>>> +        /*
>>>>>> +         * Reserved Owner of this page,
>>>>>> +         * if this page is reserved to a specific domain.
>>>>>> +         */
>>>>>> +        struct domain *domain;
>>>>>> +    } reserved;
>>>>>
>>>>> The space in page_info is quite tight, so I would like to avoid
>>>>> introducing new fields unless we can't get away from it.
>>>>>
>>>>> In this case, it is not clear why we need to differentiate the "Owner"
>>>>> vs the "Reserved Owner". It might be clearer if this change is
>>>>> folded in the first user of the field.
>>>>>
>>>>> As an aside, for 32-bit Arm, you need to add a 4-byte padding.
>>>>>
>>>>
>>>> Yeah, I may delete this change. I imported this change as
>>>> considering the functionality of rebooting domain on static allocation.
>>>>
>>>> A little more discussion on rebooting domain on static allocation.
>>>> Considering the major user cases for domain on static allocation are
>>>> system has a total pre-defined, static behavior all the time. No
>>>> domain allocation on runtime, while there still exists domain rebooting.
>>>
>>> Hmmm... With this series it is still possible to allocate memory at
>>> runtime outside of the static allocation (see my comment on the design
>> document [1]).
>>> So is it meant to be complete?
>>>
>>
>> I'm guessing that the "allocate memory at runtime outside of the static
>> allocation" is referring to XEN heap on static allocation, that is, users pre-
>> defining heap in device tree configuration to let the whole system more static
>> and predictable.
>>
>> And I've replied you in the design there, sorry for the late reply. Save your time,
>> and I’ll paste here:
>>
>> "Right now, on AArch64, all RAM, except reserved memory, will be finally
>> given to buddy allocator as heap,  like you said, guest RAM for normal domain
>> will be allocated from there, xmalloc eventually is get memory from there, etc.
>> So we want to refine the heap here, not iterating through `bootinfo.mem` to
>> set up XEN heap, but like iterating `bootinfo. reserved_heap` to set up XEN
>> heap.
>>
>> On ARM32, xen heap and domain heap are separately mapped, which is more
>> complicated here. That's why I only talking about implementing these features
>> on AArch64 as first step."
>>
>>   Above implementation will be delivered through another patch Serie. This
>> patch Serie Is only focusing on Domain on Static Allocation.
>>
> 
> Oh, Second thought on this.
> And I think you are referring to balloon in/out here, hmm, also, like

Yes I am referring to balloon in/out.

> I replied there:
> "For issues on ballooning out or in, it is not supported here.

As long as you are not using the solution in production, then you are fine 
(see below)... But then we should make clear this feature is a tech preview.

> Domain on Static Allocation and 1:1 direct-map are all based on
> dom0-less right now, so no PV, grant table, event channel, etc, considered.
> 
> Right now, it only supports device got passthrough into the guest."

So we are not creating the hypervisor node in the DT for dom0less domU. 
However, the hypercalls are still accessible by a domU if it really
wants to use them.

Therefore, a guest can easily mess with your static configuration and 
predictability.

IMHO, this is a must to solve before "static memory" can be used in 
production.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:04:54 2021
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 04/10] xen/arm: static memory initialization
Thread-Topic: [PATCH 04/10] xen/arm: static memory initialization
Thread-Index: AQHXS6WzoKPo0e3K4EKgzRPDOGKP8Kro0+kAgAAm7iCAABMkgIADBSKg
Date: Thu, 20 May 2021 09:04:15 +0000
Message-ID:
 <VE1PR08MB52152D4CD1542D0B35CF95CDF72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-5-penny.zheng@arm.com>
 <dbffa647-37e2-93b6-4041-a1344aeb1837@suse.com>
 <VE1PR08MB5215E7203960F535BC857F5CF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <f9776245-0a33-4cb8-fd5a-f7be8ab38b78@suse.com>
In-Reply-To: <f9776245-0a33-4cb8-fd5a-f7be8ab38b78@suse.com>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VE1PR08MB4750
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	4f7ed585-e383-4adf-d2ce-08d91b6e3ea1
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 09:04:34.6023
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: e4c7848e-dedc-4186-b245-08d91b6e49b5
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT063.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB8PR08MB5452

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 6:43 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
>
> On 18.05.2021 11:51, Penny Zheng wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:16 PM
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> This patch introduces static memory initialization, during system
> >>> RAM boot
> >> up.
> >>>
> >>> New func init_staticmem_pages is the equivalent of init_heap_pages,
> >>> responsible for static memory initialization.
> >>>
> >>> Helper func free_staticmem_pages is the equivalent of
> >>> free_heap_pages, to free nr_pfns pages of static memory.
> >>> For each page, it includes the following steps:
> >>> 1. change page state from in-use(also initialization state) to free
> >>> state and grant PGC_reserved.
> >>> 2. set its owner NULL and make sure this page is not a guest frame
> >>> any more
> >>
> >> But isn't the goal (as per the previous patch) to associate such
> >> pages with a _specific_ domain?
> >>
> >
> > Free_staticmem_pages are alike free_heap_pages, are not used only for
> initialization.
> > Freeing used pages to unused are also included.
> > Here, setting its owner NULL is to set owner in used field NULL.
>
> I'm afraid I still don't understand.
>

When initializing the heap, xen uses free_heap_pages to do the initialization.
And also when a normal domain gets destroyed/rebooted, xen uses free_domheap_pages,
which calls free_heap_pages to free the pages.

So here, since free_staticmem_pages is the equivalent of free_heap_pages for static
memory, I'm also considering both scenarios here. If a domain gets destroyed/rebooted,
the page state is in-use (PGC_inuse), and page_info.v.inuse.domain is holding the
domain owner.
When freeing then, we need to switch the page state to free state (PGC_free) and set its in-use owner to NULL.

I'll clarify it more clearly in the commit message.

> > Still, I need to add more explanation here.
>
> Yes please.
>
> >>> @@ -1512,6 +1515,49 @@ static void free_heap_pages(
> >>>      spin_unlock(&heap_lock);
> >>>  }
> >>>
> >>> +/* Equivalent of free_heap_pages to free nr_pfns pages of static
> >>> +memory. */ static void free_staticmem_pages(struct page_info *pg,
> >> unsigned long nr_pfns,
> >>> +                                 bool need_scrub)
> >>
> >> Right now this function gets called only from an __init one. Unless
> >> it is intended to gain further callers, it should be marked __init itself then.
> >> Otherwise it should be made sure that other architectures don't
> >> include this (dead there) code.
> >>
> >
> > Sure, I'll add __init. Thx.
>
> Didn't I see a 2nd call to the function later in the series? That one doesn't
> permit the function to be __init, iirc.
>
> Jan

Cheers

Penny


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:17:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:17:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130792.244791 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljenZ-0004Ma-1c; Thu, 20 May 2021 09:16:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130792.244791; Thu, 20 May 2021 09:16:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljenY-0004MT-Uy; Thu, 20 May 2021 09:16:56 +0000
Received: by outflank-mailman (input) for mailman id 130792;
 Thu, 20 May 2021 09:16:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljenX-0004MI-3S; Thu, 20 May 2021 09:16:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljenW-0004uf-Tn; Thu, 20 May 2021 09:16:54 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljenW-0001QT-II; Thu, 20 May 2021 09:16:54 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljenW-0001HQ-Hq; Thu, 20 May 2021 09:16:54 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3U17yi2b5vhAseOYqKZh7RlLeuv/NA7lnIGxaTFJ5Q8=; b=OxFPM1uYatntvSa/1BU9uJpcvU
	kpdwHoGo5nmDVCVXQ3vnOdNr/umI5VQK+N862PsV38rouXgDu6eJCr8Nut2EdLnMucfCcLDwSbnBA
	vQhBeiz5O49yNng+tO8cbZULZAflcUfaTbKSw7x8oLsgt5lw11PnpZQP3EhiWVDYrK9M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162097-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162097: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c3d0e3fd41b7f0f5d5d5b6022ab7e813f04ea727
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 May 2021 09:16:54 +0000

flight 162097 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162097/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                c3d0e3fd41b7f0f5d5d5b6022ab7e813f04ea727
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  292 days
Failing since        152366  2020-08-01 20:49:34 Z  291 days  491 attempts
Testing same since   162097  2021-05-19 19:41:22 Z    0 days    1 attempts

------------------------------------------------------------
6063 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1645796 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:18:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:18:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130802.244806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljep4-00051y-KT; Thu, 20 May 2021 09:18:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130802.244806; Thu, 20 May 2021 09:18:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljep4-00051r-Ga; Thu, 20 May 2021 09:18:30 +0000
Received: by outflank-mailman (input) for mailman id 130802;
 Thu, 20 May 2021 09:18:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=0lSn=KP=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1ljep3-00051d-Jn
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:18:29 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f433512e-2848-471b-ab6e-4315d5e52256;
 Thu, 20 May 2021 09:18:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AEB14AD4B;
 Thu, 20 May 2021 09:18:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f433512e-2848-471b-ab6e-4315d5e52256
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621502307; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=R+rniqkUCnnA1A3M+xbkedH13kaeiqxG/AcGLxGQMCw=;
	b=M5BBFuxNoOChyXX8GREoLkeadL+zcCne5dinAEDt20P7+Q2vFfCGGw6NReRk8tzn90Tod3
	wyKEGVJNRydyMmWTnqGb6N8nLyyj34zwbTkdv4uOoTUUrGTYe1k9ATjadJjVrLDwPHAsh3
	kawh0UDZ1LjGw2C8ZHY03TBxAtEIjA0=
Message-ID: <c905e6872399dacf960d9890432ac0d53081e05c.camel@suse.com>
Subject: Re: [PATCH v2 2/2] automation: fix dependencies on openSUSE
 Tumbleweed containers
From: Dario Faggioli <dfaggioli@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Roger Pau
	=?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Doug Goldstein <cardoe@cardoe.com>
Date: Thu, 20 May 2021 11:18:21 +0200
In-Reply-To: <26642b6b-c988-406e-040e-905bdeae1b2f@citrix.com>
References: <162135593827.20014.14959979363028895972.stgit@Wayrath>
	 <162135616513.20014.6303562342690753615.stgit@Wayrath>
	 <YKSv/BGxuy+OCn3t@Air-de-Roger>
	 <b596d5ea2e96be5c6d627e14b87beb51ba4a094e.camel@suse.com>
	 <26642b6b-c988-406e-040e-905bdeae1b2f@citrix.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-ubIOhFvU1+SjXDW2c3Wo"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-ubIOhFvU1+SjXDW2c3Wo
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2021-05-19 at 20:25 +0100, Andrew Cooper wrote:
>
> I built the container locally, which reused some of the layers you
> pushed, and it all pushed successfully for me.
>
> I've committed this series so xen.git matches reality. Let's see how
> the updated container fares...
>
Well, something still looks off, although I don't know what.

In fact, we still see:
https://gitlab.com/xen-project/xen/-/jobs/1277448985

checking for deflateCopy in -lz... no
configure: error: Could not find zlib
configure: error: ./configure failed for tools

Which means we're missing libz.

In fact, if I pull the container that's currently in the registry, I
can see that:

dario@b17aaa86cacf:~> rpm -qa|grep zlib
zlib-devel-1.2.11-18.3.x86_64

However, if I build it locally, with:

dario@Solace:~/src/xen/xen.git/automation/build> make suse/opensuse-tumbleweed

And then, if I enter it and use it for building, the package is there and
configure works.

dario@51e463d1d47e:~> rpm -qa|grep libz
libzstd1-1.4.9-1.2.x86_64
libz1-1.2.11-18.3.x86_64
libzck1-1.1.11-1.1.x86_64
libzypp-17.25.10-1.1.x86_64
libzio1-1.06-4.26.x86_64
libzzip-0-13-0.13.72-1.2.x86_64
libzstd-devel-1.4.9-1.2.x86_64

dario@51e463d1d47e:~> ./configure
checking build system type... x86_64-pc-linux-gnu
checking host system type... x86_64-pc-linux-gnu
... ... ...
checking for pandoc... /usr/bin/pandoc
checking for perl... /usr/bin/perl
configure: creating ./config.status

So, what am I missing or doing wrong?
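For what it's worth, a way to narrow this down is to diff the package lists of the two images. This is only a sketch: in practice the two listings would come from `docker run <image> rpm -qa | sort` (the image names and the file contents below are stand-ins, not the real registry paths):

```shell
# Stand-in for: docker run <registry-image> rpm -qa | sort > registry.txt
sort > registry.txt <<'EOF'
zlib-devel-1.2.11-18.3.x86_64
EOF
# Stand-in for: docker run <locally-built-image> rpm -qa | sort > local.txt
sort > local.txt <<'EOF'
zlib-devel-1.2.11-18.3.x86_64
libz1-1.2.11-18.3.x86_64
EOF
# Lines only in the second (local) listing, i.e. missing from the registry image.
comm -13 registry.txt local.txt   # -> libz1-1.2.11-18.3.x86_64
```

Whatever `comm -13` prints is what the registry image lacks relative to the local build.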

Thanks and Regards
--
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-ubIOhFvU1+SjXDW2c3Wo
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmCmKV0ACgkQFkJ4iaW4
c+5J2g/+PENuILMIz/xyJxNWvIqc7Gn95SrjeH/ay6M8EqVYUtuwBN2dCHr+/Hiq
McjvhDSYGPgLu9FpxsYQLSkcaLiBSl8Fwhh+2Z+TZFWMBfoiY7bmpW/c4oJss/Nb
BJ5msa7sh08wTr2X8mI9YEPbr0DrgnvDv2E7JKSL6ASZfD4SwJO89hJM03I0vY2B
OxbeiYoBtsDBJ0tS3hUcxcX8t7Dn65BFGx1a1iEx82OFE7t89XS02yPDhy+qOgIv
HFk+QX4wsUZ38Z+qFU/DgFDSyYY03dhs8LlMKFk69J0hdm7bWTkEE68o8chAbSlK
QQyargBFBk6/yHLz/nolINrqG+KP0YEXOeXAYgPxO4XV5OkoIip+4djJrJwb6YXZ
Sn5A6F00AkgF4ThoiYOjV87X9ZbGuAk2D1/JikEKuH6X2d0otQNP/5lq7grjgwaJ
kreSFYXNxrxKShyCogCqBDlIuJi7zfTQrWziv4Ce/lsI2BukBfwRV26tq7E8sLOf
Mym58NqTP+fC7wwx/p2/uneJZtk4RITBFgyLmagktv/9WnyvXgph4qvu0y0XxDA4
CfG+CGL8ZrQbc+9UhsWCKaXSYRsOk5xtI3Brczn2quf3U5cGxnDyKFDBfrUFCdYg
CyrO2eyipJIKYrjVMavVQxRXv8VUWWzSYZlhKp39UC19of5+oPA=
=ErIj
-----END PGP SIGNATURE-----

--=-ubIOhFvU1+SjXDW2c3Wo--



From xen-devel-bounces@lists.xenproject.org Thu May 20 09:19:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:19:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130808.244817 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljept-0005dy-TB; Thu, 20 May 2021 09:19:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130808.244817; Thu, 20 May 2021 09:19:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljept-0005dr-QB; Thu, 20 May 2021 09:19:21 +0000
Received: by outflank-mailman (input) for mailman id 130808;
 Thu, 20 May 2021 09:19:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljeps-0005db-IC
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:19:20 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljeps-0004wh-AD; Thu, 20 May 2021 09:19:20 +0000
Received: from [54.239.6.190] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljeps-00025V-42; Thu, 20 May 2021 09:19:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=firLHKP4Voz3HrLDZC1dRE0JFIHHVE23ghAiJeGPvpk=; b=NL8SBtDqghWMVB6WoY/NdOEDVn
	nYTi/pw+/+5M5IsQa6WaxMwsZ/Tk+/03o7PnmgujZk6GBA+MseukGQAfFIRzZIUttwTJNlioIH2CJ
	Z675fhbVhpvQRWRLk7CEp05FToWld3jUuKO0rii8bsVIJBI4TANvViAKFEwL2Q+KLjAw=;
Subject: Re: [PATCH] Xen: Design doc for 1:1 direct-map and static allocation
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518050738.163156-1-penny.zheng@arm.com>
 <7ab73cb0-39d5-f8bf-660f-b3d77f3247bd@xen.org>
 <VE1PR08MB52152E43D11EB446000F4563F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <f0b798f4-8fa6-9906-9971-36fd5205ae74@xen.org>
Date: Thu, 20 May 2021 10:19:17 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB52152E43D11EB446000F4563F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 20/05/2021 06:36, Penny Zheng wrote:
> Hi Julien,

Hi Penny,

>>> +
>>> +Later, when domain get destroyed and memory relinquished, only pages
>>> +in `page_list` go back to heap, and pages in `reserved_page_list` shall not.
>>
>> While going through the series, I could not find any code implementing this.
>> However, this is not enough to prevent a page from going to the heap allocator
>> because a domain can release memory at runtime using hypercalls like
>> XENMEM_remove_from_physmap.
>>
>> One of the use cases is when the guest decides to balloon out some memory.
>> This will call free_domheap_pages().
>>
>> Effectively, you are treating static memory as domheap pages. So I think it
>> would be better if you hook in free_domheap_pages() to decide which
>> allocator is used.
>>
>> Now, if a guest can balloon out memory, it can also balloon in memory.
>> There are two cases:
>>      1) The region used to be RAM region statically allocated
>>      2) The region used to be unallocated.
>>
>> I think for 1), we need to be able to re-use the page previously. For 2), it is
>> not clear to me whether a guest with memory statically allocated should be
>> allowed to allocate "dynamic" pages.
>>
> 
> Yeah, I agree with you on hooking into free_domheap_pages(). I'm thinking
> that for pages with PGC_reserved set, we could create a new function,
> free_staticmem_pages(), to free them.
> 
> For issues on ballooning out or in, it is not supported here.

It is fine that the implementation doesn't cover this yet. However, I 
think the design document should take ballooning into account. This is 
because even if...

> Domains on Static Allocation and 1:1 direct-map are all based on dom0less right
> now, so no PV, grant table, event channel, etc. is considered.

... there is no PV support & co, a guest is still able to issue 
hypercalls (they are not hidden). Therefore your guest will be able to 
disturb your static allocation.

> 
> Right now, it only supports devices passed through into the guest.
> 
>>> +### Memory Allocation for Domains on Static Allocation
>>> +
>>> +RAM regions pre-defined as static memory for one specific domain shall
>>> +be parsed and reserved from the beginning. And they shall never go to
>>> +any memory allocator for any use.
>>
>> Technically, you are introducing a new allocator. So do you mean they should
>> be given to neither the buddy allocator nor the boot allocator?
>>
> 
> Yes. These pre-defined RAM regions will not be given to any current
> memory allocator. If they were, there would be no guarantee that they
> would not be allocated for other uses.
> 
> And right now, in my current design, these pre-defined RAM regions are used
> either as guest RAM for one specific domain or as Xen heap.
>    
>>> +
>>> +Later when allocating static memory for this specific domain, after
>>> +acquiring those reserved regions, users need to do a set of
>>> +verifications before assigning.
>>> +For each page there, it at least includes the following steps:
>>> +1. Check if it is in free state and has zero reference count.
>>> +2. Check if the page is reserved(`PGC_reserved`).
>>> +
>>> +Then, assigning these pages to this specific domain, and all pages go
>>> +to one new linked page list `reserved_page_list`.
>>> +
>>> +At last, set up guest P2M mapping. By default, it shall be mapped to
>>> +the fixed guest RAM address `GUEST_RAM0_BASE`, `GUEST_RAM1_BASE`,
>>> +just like normal domains. But later in 1:1 direct-map design, if
>>> +`direct-map` is set, the guest physical address will equal the physical
>>> +address.
>>> +
>>> +### Static Allocation for Xen itself
>>> +
>>> +### New Deivce Tree Node: `xen,reserved_heap`
>>
>> s/Deivce/Device/
>>
> 
> Thx.
> 
>>> +
>>> +Static memory for Xen heap refers to parts of RAM reserved in the
>>> +beginning for Xen heap only. The memory is pre-defined through XEN
>>> +configuration using physical address ranges.
>>> +
>>> +The reserved memory for Xen heap is an optional feature and can be
>>> +enabled by adding a device tree property in the `chosen` node.
>>> +Currently, this feature is only supported on AArch64.
>>> +
>>> +Here is one example:
>>> +
>>> +
>>> +        chosen {
>>> +            xen,reserved-heap = <0x0 0x30000000 0x0 0x40000000>;
>>> +            ...
>>> +        };
>>> +
>>> +RAM at 0x30000000 of 1G size will be reserved as heap memory. Later,
>>> +heap allocator will allocate memory only from this specific region.
>>
>> This section is quite confusing. I think we need to clearly differentiate heap vs
>> allocator.
>>
>> In Xen we have two heaps:
>>      1) Xen heap: It is always mapped in Xen virtual address space. This is
>> mainly used for xen internal allocation.
>>      2) Domain heap: It may not always be mapped in Xen virtual address space.
>> This is mainly used for domain memory and mapped on-demand.
>>
>> For Arm64 (and x86), two heaps are allocated from the same region. But on
>> Arm32, they are different.
>>
>> We also have two allocators:
>>      1) Boot allocator: This is used during boot only. There is no concept of
>> heap at this time.
>>      2) Buddy allocator: This is the current runtime allocator. It can
>> allocate from either heap.
>>
>> AFAICT, this design is introducing a 3rd allocator that will return domain heap
>> pages.
>>
>> Now, back to this section, are you saying you will separate the two heaps and
>> force the buddy allocator to allocate xen heap pages from a specific region?
>>
>> [...]
> 
> I will try to explain clearly here.
> The intention behind this reserved heap is that, to support a totally static
> system, we want to pre-define memory resources not only for guests but also
> for Xen runtime allocation. Any runtime behavior is then more predictable.
> 
> Right now, on AArch64, all RAM except reserved memory will be given to the
> buddy allocator as heap. Like you said, guest RAM for a normal domain will be
> allocated from there, xmalloc eventually gets memory from there, etc. So we
> want to refine the heap here: not iterating through bootinfo.mem to set up
> the Xen heap, but instead iterating through bootinfo.reserved_heap.

So effectively, you want to move to a split heap like on Arm32. Is that 
correct?

But let's take a step back from the actual code (these are implementation 
details). If the Device-Tree describes all the regions statically 
allocated to domains, why can't the memory used by the Xen heap be the 
leftover?

> 
> True, on ARM32, xen heap and domain heap are separately mapped, which is more
> complicated. That's why I'm only talking about implementing these features on
> AArch64 as a first step.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:27:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:27:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130818.244828 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljexg-0007AN-Pc; Thu, 20 May 2021 09:27:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130818.244828; Thu, 20 May 2021 09:27:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljexg-0007AG-KN; Thu, 20 May 2021 09:27:24 +0000
Received: by outflank-mailman (input) for mailman id 130818;
 Thu, 20 May 2021 09:27:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljexf-0007AA-3O
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:27:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00e0c74f-3340-4325-be5a-8eafe6f91952;
 Thu, 20 May 2021 09:27:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6CD17AE81;
 Thu, 20 May 2021 09:27:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00e0c74f-3340-4325-be5a-8eafe6f91952
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621502841; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7rqhgH9R9B73ijK+Qc/m24+QJNhL73dhbh8I92A83hI=;
	b=ssiW7HuR6VuDhLIudyXNbhAV165+/yUOINVmdBJPkP8WqLcNoiN8AOtKJBbZ+nQfCHpM1U
	xKHfeJP9KHfu5WpHvPwYsuz3GgupGVwlyWbNCbesGicGw2X/S/Nd0tJ6dvBooobkjHXvdy
	AhZha3fFAbxoREIWUt7brmFhc9heTJQ=
Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
To: Julien Grall <julien@xen.org>, Penny Zheng <Penny.Zheng@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <2f4eb08e-261b-70c4-bcbc-e08db36a50a9@xen.org>
 <VE1PR08MB52155DD56E548E98AE937CE8F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <VE1PR08MB5215E19BFE3E7F329229D8E7F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <44b46f35-cc51-9274-77f2-cfd18c998a38@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b53c5ef5-ccab-a472-1edc-63082d89f09a@suse.com>
Date: Thu, 20 May 2021 11:27:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <44b46f35-cc51-9274-77f2-cfd18c998a38@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.05.2021 10:59, Julien Grall wrote:
> On 20/05/2021 09:40, Penny Zheng wrote:
>> Oh, Second thought on this.
>> And I think you are referring to balloon in/out here, hmm, also, like
> 
> Yes I am referring to balloon in/out.
> 
>> I replied there:
>> "For issues on ballooning out or in, it is not supported here.
> 
> So long as you are not using the solution in prod, you are fine (see 
> below)... But then we should make clear this feature is a tech preview.
> 
>> Domain on Static Allocation and 1:1 direct-map are all based on
>> dom0-less right now, so no PV, grant table, event channel, etc, considered.
>>
>> Right now, it only supports device got passthrough into the guest."
> 
> So we are not creating the hypervisor node in the DT for dom0less domU. 
> However, the hypercalls are still accessible by a domU if it really
> wants to use them.
> 
> Therefore, a guest can easily mess up with your static configuration and 
> predictability.
> 
> IMHO, this is a must to solve before "static memory" can be used in 
> production.

I'm having trouble seeing why it can't be addressed right away: such
guests can balloon in only after they've ballooned out some pages,
and such balloon-in requests would be satisfied from the same static
memory that is associated with the guest anyway.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:27:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:27:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130819.244839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljexp-0007TT-0G; Thu, 20 May 2021 09:27:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130819.244839; Thu, 20 May 2021 09:27:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljexo-0007TM-TD; Thu, 20 May 2021 09:27:32 +0000
Received: by outflank-mailman (input) for mailman id 130819;
 Thu, 20 May 2021 09:27:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V2Ic=KP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ljexm-0007S8-Jx
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:27:31 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a450d468-e2e4-435a-8911-55d764a6b7c5;
 Thu, 20 May 2021 09:27:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a450d468-e2e4-435a-8911-55d764a6b7c5
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621502849;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=ymDKjaKBVx1WNdaDd2C7ms6i7omM7h+sumnYI/XkCZA=;
  b=NoRq0wetx7MGUJ1m9MKzXyn/SdBDuDYieJOxa16MwAsQV9JoTYKe1azn
   XDgwVLUEvpd25KxQKPSeIDqxSt5YpX3qz8pywO0E6I59eGhC2ZE7NGqcz
   AsTDW9ueEvRi+kbcReao4NnFAeCWvguC36eH7aSE+O6m0YRQdXuVODm3e
   o=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44596987
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,313,1613451600"; 
   d="scan'208";a="44596987"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=SS/IO9rv6Rkxsr4jZZFpuqlMTDL+W09+O4btq7MoBBj7qyUvhkDV4lHsgxASTw5ZcudX84+Eesu2ztFS1N25kk4PeQrWIDK8ZZZ8+TuVOoNLTMq1LP0B14ysFtSllsZcKHUpVVqMOxp5q8DtRVtAm9qlp85Yt9DkU/dFgURXZXx63pm6pW7mdXOnnRYn+eIT/babiKdUHyk7zvjimzlDqqR8CBtsPfscoZu+B6RNUF2MD0yhTW4JSYdI6LAiWYOMEIPBL48x6fmtaTw06YmYTPI7Y0Rp0zNanjMXPepipaD3QgGXbMSQTZygxQGfQc6IfAShKtIvXwnCu5qFNpuqLA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dkhkwhZQvzRV/SfNdKFoIZrmzPlqYdtF2VoACyZP/Dc=;
 b=TTVrkU06ei99FcwK/k2JPm+6mms98CLZtff9DD4ip8P25ewzb+fRWBRLGW9Ep1lBFXUr1vFybcDUdl7o/KwGR25ddbVuLvtvuP8MYUbCcmgaAD3F0oNWntw/o6lo2tbECaId+ExKaeogLuC/YZgPCH+Wa+Qys/Bg/fgGbfS22YV66PRArqv6w8SFiRHvTCKBsxgCCB7vetW012KsvKOttGwFKLPN78TgtQHOSKH+NyYR5rF7F/gpp2ggu9IcNW5XZR7LNOIF5nUY5OiGHQ6kojrqsrTyE7oWsyKQBrVMDuZdgGVSYADuD0s/0Dug1ycETSWq1Ze+wQKg02j3vbh/uQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=dkhkwhZQvzRV/SfNdKFoIZrmzPlqYdtF2VoACyZP/Dc=;
 b=qduoGhde+uSlVqahcK5zsrbpwCVnCyH6ybyebFk/8vZLiJD1xWL7Ss83uHHDUztJ37XOdNJ98St/kS9+3HdBcIlE2/1rHMsNyEfsbzjcv/46Ee1n/Ji7ypstRKO4hfobXbbHJhMWefje6wyA6D59yvL7Wb/diT65pogVzoN5fxo=
Date: Thu, 20 May 2021 11:27:20 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: <xen-devel@lists.xenproject.org>, Ian Jackson <iwj@xenproject.org>, Wei
 Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Julien Grall <julien@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v2] libelf: improve PVH elfnote parsing
Message-ID: <YKYreLP8N16vcIVB@Air-de-Roger>
References: <20210518144741.44395-1-roger.pau@citrix.com>
 <c645b764-00fe-2b90-3b31-7f2bb6f07c02@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <c645b764-00fe-2b90-3b31-7f2bb6f07c02@suse.com>
X-ClientProxiedBy: MRXP264CA0048.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::36) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f1f70597-df8c-448c-8c69-08d91b717ace
X-MS-TrafficTypeDiagnostic: DS7PR03MB5607:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DS7PR03MB56070FB917815CA7A94E9C5F8F2A9@DS7PR03MB5607.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-MS-Exchange-CrossTenant-Network-Message-Id: f1f70597-df8c-448c-8c69-08d91b717ace
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 09:27:25.7636
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: z+l5OnW6HY69yTxh0kY5gj87s679xmvqn62pDuMb+Rr1mt17DhApmF6/89CFwuKp2S8HpImst3fK2bWjHBadiA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DS7PR03MB5607
X-OriginatorOrg: citrix.com

On Wed, May 19, 2021 at 12:34:19PM +0200, Jan Beulich wrote:
> On 18.05.2021 16:47, Roger Pau Monne wrote:
> > @@ -425,8 +425,11 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
> >          return -1;
> >      }
> >  
> > -    /* Initial guess for virt_base is 0 if it is not explicitly defined. */
> > -    if ( parms->virt_base == UNSET_ADDR )
> > +    /*
> > +     * Initial guess for virt_base is 0 if it is not explicitly defined in the
> > +     * PV case. For PVH virt_base is forced to 0 because paging is disabled.
> > +     */
> > +    if ( parms->virt_base == UNSET_ADDR || hvm )
> >      {
> >          parms->virt_base = 0;
> >          elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
> 
> This message is wrong now if hvm is true but parms->virt_base != UNSET_ADDR.
> Best perhaps is to avoid emitting the message altogether when hvm is true.
> (Since you'll be touching it anyway, perhaps a good opportunity to do away
> with passing parms->virt_base to elf_msg(), and instead simply use a literal
> zero.)
> 
> > @@ -441,8 +444,10 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
> >       *
> >       * If we are using the modern ELF notes interface then the default
> >       * is 0.
> > +     *
> > +     * For PVH this is forced to 0, as it's already a legacy option for PV.
> >       */
> > -    if ( parms->elf_paddr_offset == UNSET_ADDR )
> > +    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
> >      {
> >          if ( parms->elf_note_start )
> 
> Don't you want "|| hvm" here as well, or alternatively suppress the
> fallback to the __xen_guest section in the PVH case (near the end of
> elf_xen_parse())?

The legacy __xen_guest section doesn't support PHYS32_ENTRY, so yes,
that part could be completely skipped when called from an HVM
container.

I think I will fix that in another patch though if you are OK, as
it's not strictly related to the calculation fixes done here.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:28:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:28:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130831.244849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljeyr-0008OR-Fr; Thu, 20 May 2021 09:28:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130831.244849; Thu, 20 May 2021 09:28:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljeyr-0008OK-Cx; Thu, 20 May 2021 09:28:37 +0000
Received: by outflank-mailman (input) for mailman id 130831;
 Thu, 20 May 2021 09:28:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljeyq-0008O7-Ay
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:28:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 855f4462-1201-4487-b0b2-553312c34c64;
 Thu, 20 May 2021 09:28:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 391FBB264;
 Thu, 20 May 2021 09:28:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 855f4462-1201-4487-b0b2-553312c34c64
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621502914; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Id3ljkLkZzG/SvPiXMFE+JlFX7yGWGR/3nBdO8XGeCk=;
	b=U2vhyOkZdBadeQr7Xeu7RaOLfkvYXqCjZo4mJf/3qhQ1yZe/TtEDkeTdBdCiVwggpMchhI
	74oInRisxdqsEjn2PtqF8nBcCgpt8KZzH8MlwNXRWVONaQ+rJnFFAbl82WXa1sIjJC4+8Y
	3POAxoXFZ10fLD/8D63XNI7P+cRLDsY=
Subject: Re: [PATCH v2] libelf: improve PVH elfnote parsing
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210518144741.44395-1-roger.pau@citrix.com>
 <c645b764-00fe-2b90-3b31-7f2bb6f07c02@suse.com>
 <YKYreLP8N16vcIVB@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <162f76e1-9da5-c750-2591-ea011b4b2842@suse.com>
Date: Thu, 20 May 2021 11:28:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YKYreLP8N16vcIVB@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.05.2021 11:27, Roger Pau Monné wrote:
> On Wed, May 19, 2021 at 12:34:19PM +0200, Jan Beulich wrote:
>> On 18.05.2021 16:47, Roger Pau Monne wrote:
>>> @@ -425,8 +425,11 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>>>          return -1;
>>>      }
>>>  
>>> -    /* Initial guess for virt_base is 0 if it is not explicitly defined. */
>>> -    if ( parms->virt_base == UNSET_ADDR )
>>> +    /*
>>> +     * Initial guess for virt_base is 0 if it is not explicitly defined in the
>>> +     * PV case. For PVH virt_base is forced to 0 because paging is disabled.
>>> +     */
>>> +    if ( parms->virt_base == UNSET_ADDR || hvm )
>>>      {
>>>          parms->virt_base = 0;
>>>          elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
>>
>> This message is wrong now if hvm is true but parms->virt_base != UNSET_ADDR.
>> Best perhaps is to avoid emitting the message altogether when hvm is true.
>> (Since you'll be touching it anyway, perhaps a good opportunity to do away
>> with passing parms->virt_base to elf_msg(), and instead simply use a literal
>> zero.)
>>
>>> @@ -441,8 +444,10 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>>>       *
>>>       * If we are using the modern ELF notes interface then the default
>>>       * is 0.
>>> +     *
>>> +     * For PVH this is forced to 0, as it's already a legacy option for PV.
>>>       */
>>> -    if ( parms->elf_paddr_offset == UNSET_ADDR )
>>> +    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
>>>      {
>>>          if ( parms->elf_note_start )
>>
>> Don't you want "|| hvm" here as well, or alternatively suppress the
>> fallback to the __xen_guest section in the PVH case (near the end of
>> elf_xen_parse())?
> 
> The legacy __xen_guest section doesn't support PHYS32_ENTRY, so yes,
> that part could be completely skipped when called from an HVM
> container.
> 
> I think I will fix that in another patch though if you are OK, as
> it's not strictly related to the calculation fixes done here.

That's fine; it wants to be a prereq to the one here then, though,
I think.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:32:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:32:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130841.244861 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljf2O-0001P4-0W; Thu, 20 May 2021 09:32:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130841.244861; Thu, 20 May 2021 09:32:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljf2N-0001Ox-Ta; Thu, 20 May 2021 09:32:15 +0000
Received: by outflank-mailman (input) for mailman id 130841;
 Thu, 20 May 2021 09:32:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljf2M-0001Or-K2
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:32:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 330c381a-872c-4e12-a4fd-8b77eee0aacd;
 Thu, 20 May 2021 09:32:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DEC22AD05;
 Thu, 20 May 2021 09:32:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 330c381a-872c-4e12-a4fd-8b77eee0aacd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621503133; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=hwuztHQr2Kb8ZQYj6qaMQveg8GSd7xgfZc3Cy1tcmsw=;
	b=dw0VRgAqBUHLJCEJXV0544dTD0M/JWe7DZo2yLbkcZjpKg6Pp/s7ady1Pnlv9TUhdHMKJQ
	0Fy0YmqUWJ0KlhuwXVpgMjqb+IRzyIex4TcNIm2IdjSpEe+tNjuhpP8cVf0CVbDzlgbb80
	1Gu+d1CbxLRToVMIDcgGnu9Z5uv+LxI=
Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "julien@xen.org" <julien@xen.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-5-penny.zheng@arm.com>
 <dbffa647-37e2-93b6-4041-a1344aeb1837@suse.com>
 <VE1PR08MB5215E7203960F535BC857F5CF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <f9776245-0a33-4cb8-fd5a-f7be8ab38b78@suse.com>
 <VE1PR08MB52152D4CD1542D0B35CF95CDF72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1a3b3024-5955-3c94-3af3-fd20e0d37a73@suse.com>
Date: Thu, 20 May 2021 11:32:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB52152D4CD1542D0B35CF95CDF72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.05.2021 11:04, Penny Zheng wrote:
> Hi Jan
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 6:43 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 04/10] xen/arm: static memory initialization
>>
>> On 18.05.2021 11:51, Penny Zheng wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: Tuesday, May 18, 2021 3:16 PM
>>>>
>>>> On 18.05.2021 07:21, Penny Zheng wrote:
>>>>> This patch introduces static memory initialization, during system
>>>>> RAM boot
>>>> up.
>>>>>
>>>>> New func init_staticmem_pages is the equivalent of init_heap_pages,
>>>>> responsible for static memory initialization.
>>>>>
>>>>> Helper func free_staticmem_pages is the equivalent of
>>>>> free_heap_pages, to free nr_pfns pages of static memory.
>>>>> For each page, it includes the following steps:
>>>>> 1. change page state from in-use(also initialization state) to free
>>>>> state and grant PGC_reserved.
>>>>> 2. set its owner NULL and make sure this page is not a guest frame
>>>>> any more
>>>>
>>>> But isn't the goal (as per the previous patch) to associate such
>>>> pages with a _specific_ domain?
>>>>
>>>
>>> free_staticmem_pages is, like free_heap_pages, not used only for
>>> initialization.
>>> Freeing used pages back to the unused state is also covered.
>>> Here, setting its owner to NULL means clearing the owner in the in-use field.
>>
>> I'm afraid I still don't understand.
>>
> 
> When initializing the heap, Xen uses free_heap_pages to do the initialization.
> And when a normal domain gets destroyed/rebooted, Xen uses free_domheap_pages,
> which calls free_heap_pages to free the pages.
> 
> So here, since free_staticmem_pages is the equivalent of free_heap_pages for
> static memory, I'm considering both scenarios here. If a domain gets
> destroyed/rebooted, the page is in the in-use state (PGC_inuse), and
> page_info.v.inuse.domain holds the owning domain.
> When freeing, we then need to switch the page to the free state (PGC_free) and set its in-use owner to NULL.

Perhaps my confusion comes from your earlier outline missing

3. re-associate the page with the domain (as represented in free
   pages)

The property of "designated for Dom<N>" should never go away, if I
understand the overall proposal correctly.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:42:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:42:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130848.244871 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljfCB-0002u1-0D; Thu, 20 May 2021 09:42:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130848.244871; Thu, 20 May 2021 09:42:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljfCA-0002tu-TE; Thu, 20 May 2021 09:42:22 +0000
Received: by outflank-mailman (input) for mailman id 130848;
 Thu, 20 May 2021 09:42:22 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=dJWQ=KP=aepfle.de=olaf@srs-us1.protection.inumbo.net>)
 id 1ljfC9-0002to-2z
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:42:21 +0000
Received: from mo4-p00-ob.smtp.rzone.de (unknown [81.169.146.162])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 38677db3-371f-4424-877f-42badcb9b1e6;
 Thu, 20 May 2021 09:42:20 +0000 (UTC)
Received: from sender by smtp.strato.de (RZmta 47.26.1 AUTH)
 with ESMTPSA id y090b8x4K9gI1WD
 (using TLSv1.3 with cipher TLS_AES_256_GCM_SHA384 (256 bits))
 (Client did not present a certificate);
 Thu, 20 May 2021 11:42:18 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 38677db3-371f-4424-877f-42badcb9b1e6
ARC-Seal: i=1; a=rsa-sha256; t=1621503738; cv=none;
    d=strato.com; s=strato-dkim-0002;
    b=mdUu44O058xdiGLZuflX23sF4F48yLP7lNYwxyAwPOqAUNK+GONo9p/SvYdHPjXwnt
    nUschUnqUeuaDWRZCQkW0UdCfVEgLVROPHwsu+mnd4eWBuqE4oXIvNoMF1drNiUM7BGm
    BzFONro4QvIOH+CEOmn0IRNM+b/E6wDbE7UeRps8UCgpDcs+DLsJkYQYxGZh/pFcYA25
    Zcs2gBYGLO1dJCSHdri/zDAmLMfmCxhd43OGv2iJ1iEQ7AuCSoiLglA0J6svd5Xxf5S1
    L0RQhMwvneaEol6iOXlvjY0kc/5ifRDgVsBchFBPg/8qy20eSeCtF9wcDt0aGnfiG9uY
    jxng==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; t=1621503738;
    s=strato-dkim-0002; d=strato.com;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=iO/vJOdJVgZVet5cNxZHIeNDk8t9dkkX/g1Ln/bCvZI=;
    b=XAmGeQiiyWcwVscVqJwUbNAox0IhbQ710tlVR/YH/dssaChDRR2GeWi8AoX49RVM7d
    uJBi5Pk87Hzna/Gq9inBiAAQGT0DHYvsgGmt98WYuKHdp4z2Eey8zSpolEkOXZSES3M1
    7c2rAGXZmJwpXlPIXPR5uozLZEuA9qnk4UPaeHKSo/gNHROGiZrK46lIegYCjahY/bRs
    V0uY6lcwtV+KSuOOXtutkuUjpK5Iv/h53wugKccYCBiVmMfaVxeyiyTLLZ9ChHFDTMOR
    1YAaetv7UYYrinEdgSEFfc4FbGG2SvY8yLOdYSMORW1X5XtAw9KDCnjOcVV5ZWn/PNTy
    ZmRA==
ARC-Authentication-Results: i=1; strato.com;
    dkim=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; t=1621503738;
    s=strato-dkim-0002; d=aepfle.de;
    h=References:In-Reply-To:Message-ID:Subject:Cc:To:From:Date:Cc:Date:
    From:Subject:Sender;
    bh=iO/vJOdJVgZVet5cNxZHIeNDk8t9dkkX/g1Ln/bCvZI=;
    b=qhUME75t2zbBmW5Wvkt/1FflvGYTDlhlBr2OJq8/qi85Rk2DWYRrEln3BtIDY6n3C1
    83W3InFHa7bFK6LYYPMJAvmqX6A5X7j22ofF/rrSYpUZg4M0iNJZdj+XniyUn58QyRlN
    NM8g6jVzvDqR9/NplNTQVxCt7n8nVBvaL0YDXOv9U0d2vd67RNPBc7UPDXup3ZZjN46b
    nxvuCswm9MEEfP3xuC59Sb9psDX03LDdnPwkL/te/iCTYWE3s+j2hQprHGb0YLS44cBR
    kwUJtYa2z5dTUPA72Ke4damFX2vtSIemVAGNvU0h1pvlvlm4iTv1Gq+MZHsBa9G4PA41
    ox1w==
Authentication-Results: strato.com;
    dkim=none
X-RZG-AUTH: ":P2EQZWCpfu+qG7CngxMFH1J+3q8wa/QLpd5ylWvMDX3y/OuD5rXVisF9Wx7WbE3s+BU2kLCYUBd7t4vRd/ulzKn4R+Wk"
X-RZG-CLASS-ID: mo00
Date: Thu, 20 May 2021 11:42:10 +0200
From: Olaf Hering <olaf@aepfle.de>
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel@lists.xenproject.org
Subject: Re: regression in recent pvops kernels, dom0 crashes early
Message-ID: <20210520114210.2d87f752.olaf@aepfle.de>
In-Reply-To: <20210520094503.606a1761.olaf@aepfle.de>
References: <20210513122457.4182eb7f.olaf@aepfle.de>
	<7abb3c8f-4a9b-700b-5c0c-dc6f42336eab@suse.com>
	<20210519204205.5bf59d51.olaf@aepfle.de>
	<bb51ff7d-bd02-f039-dace-1c7f31fd2e1e@suse.com>
	<20210520094503.606a1761.olaf@aepfle.de>
X-Mailer: Claws Mail 2021.04.23 (GTK+ 2.24.32; x86_64-suse-linux-gnu)
MIME-Version: 1.0
Content-Type: multipart/signed; boundary="Sig_/SOeGg0J1gBRJ3.7tORmW9aU";
 protocol="application/pgp-signature"; micalg=pgp-sha256

--Sig_/SOeGg0J1gBRJ3.7tORmW9aU
Content-Type: text/plain; charset=US-ASCII
Content-Transfer-Encoding: quoted-printable

Am Thu, 20 May 2021 09:45:03 +0200
schrieb Olaf Hering <olaf@aepfle.de>:

> Am Thu, 20 May 2021 09:03:34 +0200
> schrieb Jan Beulich <jbeulich@suse.com>:
> 
> > Just to be sure - you did not need the other patch that I said I suspect
> > is needed as a prereq?
> Yes, I needed just this single patch which moves x86_configure_nx up.

I tried the very same approach with the SLE12-SP4-LTSS branch, which also fixed dom0 boot.


Olaf

--Sig_/SOeGg0J1gBRJ3.7tORmW9aU
Content-Type: application/pgp-signature
Content-Description: Digitale Signatur von OpenPGP

-----BEGIN PGP SIGNATURE-----

iQIzBAEBCAAdFiEE97o7Um30LT3B+5b/86SN7mm1DoAFAmCmLvMACgkQ86SN7mm1
DoB/dRAAg8cAb9VTR8/tOMsK0+98rE1v/e5Iap3fVRKf4kOzlqa/QGyRxCIZvrrW
UQ1O7Nol/ldfWvHHC47MukXpgfpbJfZmgg9tXuRZNftez8mUXYBSe9IApx6yzZTO
DVemLrZqd6g6TAOTC5CnQH9N/3nz/oSUDfSjnS8Kay3vAOcByZQ4ySJ4POTZOxPt
eAskXnj7uF+JBvHkHzctg5jkEJnEqd8KfX20qKktyaRcB6PbgCihBwDKm153zPf8
nf9acIccabfGmT+gMRQPzqkX5U6ixXG2KXrhR1O7/QmIsAjClAWc8QIVA6a9/1gM
gskgVpWhAgk0aIzl2QUaLuxHhHXA/xVL/6ivMMTtjmTGUMeU5g/qo5wFjBI2xwHm
AXj9MZW9fw4XJNiwYqTgrMvK5vMbBZqXPmawJ9quIrcVb0XUtrtDpedSdTmbKsVP
t7AmQ3zpAYa8prpi5nB2ecyWtA6hFDaNewcQP/xP17HuzhYLevFVgmeBZuoM94sy
Nx2VloaCbBhUOU6OJag5HubWabUKHyiGI09VnXTnGznwExoJ9jFDgZFmlRCrdzsG
JSONrh+TtG2cS6wdPhW98pKQs9djsFjAM/7EoOsLBeal3Gvv7eb3AENYlR33ebac
nFWxM6XdtMU8qoRuTKKQMVZG/JwkALg8ClbRES1J8cSrZRy15VM=
=qIDr
-----END PGP SIGNATURE-----

--Sig_/SOeGg0J1gBRJ3.7tORmW9aU--


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:45:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:45:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130854.244883 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljfFN-0003am-FG; Thu, 20 May 2021 09:45:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130854.244883; Thu, 20 May 2021 09:45:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljfFN-0003af-Bo; Thu, 20 May 2021 09:45:41 +0000
Received: by outflank-mailman (input) for mailman id 130854;
 Thu, 20 May 2021 09:45:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljfFL-0003aY-P4
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:45:39 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljfFL-0005QE-Iy; Thu, 20 May 2021 09:45:39 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljfFL-0004CE-CY; Thu, 20 May 2021 09:45:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=BxH/GFn0fN7kXTcFiXRVG8uCM4g0mwU6q07w/M/m7Ig=; b=LQggWTT7QNPSJkC2L4mkw2h258
	ULqC1ytNIYweLOOH0uDGIzk4eq8TSPSfGzswD0cOxn8oVMQi8kekz3B80yoi4pn8OE7QakA+BMXkW
	1ld1H6ZzzmMjJ56sWY6OFpijZs1EJWuxMWJCy4ynRizhsXImMUQ3C/QrQ7AHzbR2lz9M=;
Subject: Re: [PATCH 03/10] xen/arm: introduce PGC_reserved
To: Jan Beulich <jbeulich@suse.com>, Penny Zheng <Penny.Zheng@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-4-penny.zheng@arm.com>
 <bc6a20ef-675d-bbd6-74f7-4ecc45805ee7@xen.org>
 <VE1PR08MB5215F3ECA8B5D9624E34A794F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <2f4eb08e-261b-70c4-bcbc-e08db36a50a9@xen.org>
 <VE1PR08MB52155DD56E548E98AE937CE8F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <VE1PR08MB5215E19BFE3E7F329229D8E7F72A9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <44b46f35-cc51-9274-77f2-cfd18c998a38@xen.org>
 <b53c5ef5-ccab-a472-1edc-63082d89f09a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <4b36ce2c-e60f-8974-fa5b-b8f2261e25ad@xen.org>
Date: Thu, 20 May 2021 10:45:37 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <b53c5ef5-ccab-a472-1edc-63082d89f09a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 20/05/2021 10:27, Jan Beulich wrote:
> On 20.05.2021 10:59, Julien Grall wrote:
>> On 20/05/2021 09:40, Penny Zheng wrote:
>>> Oh, Second thought on this.
>>> And I think you are referring to balloon in/out here, hmm, also, like
>>
>> Yes I am referring to balloon in/out.
>>
>>> I replied there:
>>> "For issues on ballooning out or in, it is not supported here.
>>
>> As long as you are not using the solution in prod, you are fine (see
>> below)... But then we should make clear this feature is a tech preview.
>>
>>> Domain on Static Allocation and 1:1 direct-map are all based on
>>> dom0-less right now, so no PV, grant table, event channel, etc, considered.
>>>
>>> Right now, it only supports device got passthrough into the guest."
>>
>> So we are not creating the hypervisor node in the DT for dom0less domU.
>> However, the hypercalls are still accessible by a domU if it really
>> wants to use them.
>>
>> Therefore, a guest can easily mess up with your static configuration and
>> predictability.
>>
>> IMHO, this is a must to solve before "static memory" can be used in
>> production.
> 
> I'm having trouble seeing why it can't be addressed right away: 

It can be solved right away. Dom0less domUs don't officially know they 
are running on Xen (they could brute-force it, though), so I think it 
would be fine to merge without it as a tech preview.

> Such
> guests can balloon in only after they've ballooned out some pages,
> and such balloon-in requests would be satisfied from the same static
> memory that is associated with the guest anyway.

This would require some bookkeeping to know that the page was used 
previously, but nothing very challenging.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:46:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:46:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130859.244894 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljfFt-00049k-On; Thu, 20 May 2021 09:46:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130859.244894; Thu, 20 May 2021 09:46:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljfFt-00049d-KQ; Thu, 20 May 2021 09:46:13 +0000
Received: by outflank-mailman (input) for mailman id 130859;
 Thu, 20 May 2021 09:46:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljfFr-00049M-EK
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:46:11 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljfFr-0005SW-9w; Thu, 20 May 2021 09:46:11 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljfFq-0004De-Qu; Thu, 20 May 2021 09:46:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=oMc5ISGUUD/1bW8gB68xeYoMO7OT5jBlt80GZRsh4M8=; b=teY/N1I6D6gQkj7LJY++PKgYB1
	yPhmkThv6ds32Rx5HmG6HwtQkd4tM2tznEVZg/iGBSB5kJOa4LdYKhsLrv6c0cDvMDx1EXMHHCSER
	mY4MJYNtTqI+ZvW7j5yKhFySWBmgjaeUWLE2hzddV0YAwJFGy9IB/vSJwqeGIO4slREU=;
Subject: Re: Hand over of the Xen shared info page
To: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>,
 Stefano Stabellini <sstabellini@kernel.org>
Cc: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrii Chepurnyi <Andrii_Chepurnyi@epam.com>
References: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
 <8ff05bdf-a6c4-6b14-b39c-7d9b3bb9d279@xen.org>
 <1db54c363eae22613280e7181805abee396fe5e9.camel@epam.com>
 <8d1ecf6c-a0d1-d9bc-5daf-d02a34fff1e6@xen.org>
 <alpine.DEB.2.21.2105191604130.14426@sstabellini-ThinkPad-T480s>
 <4b686071-3260-87b1-d240-8d3c2b48f1f8@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <0d502893-f433-30b9-72a5-6842274239f3@xen.org>
Date: Thu, 20 May 2021 10:46:07 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <4b686071-3260-87b1-d240-8d3c2b48f1f8@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 20/05/2021 06:21, Oleksandr Andrushchenko wrote:
> Hi, all!

Hi,


> On 5/20/21 2:11 AM, Stefano Stabellini wrote:
>> On Wed, 19 May 2021, Julien Grall wrote:
>>> On 14/05/2021 10:50, Anastasiia Lukianenko wrote:
>>>> Hi Julien!
>>> Hello,
>>>
>>>> On Thu, 2021-05-13 at 09:37 +0100, Julien Grall wrote:
>>>>> On 13/05/2021 09:03, Anastasiia Lukianenko wrote:
>>>>> The alternative is for U-boot to go through the DT and infer which
>>>>> regions are free (IOW any region not described).
>>>> Thank you for your interest in the problem and advice on how to solve it.
>>>> Could you please clarify how we could find free regions using the DT in
>>>> U-boot?
>>> I don't know U-boot code, so I can't tell whether what I suggest would work.
>>>
>>> In theory, the device-tree should describe every region allocated in the
>>> address space. So if you parse the device-tree and create a list (or any
>>> data structure) of the regions, then any range not present in the list
>>> would be a free region you could use.
>> Yes, any "empty" memory region which is neither memory nor MMIO should
>> work.
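The scheme Julien and Stefano describe above can be sketched as follows. This is a hypothetical illustration rather than actual U-boot code; the `struct region` type, the `find_free_gap()` helper, and all addresses are invented for the example. It assumes the occupied regions parsed from the DT have been sorted by start address and do not overlap:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/* One occupied region parsed from the device tree (hypothetical type). */
struct region {
    uint64_t start;
    uint64_t size;
};

/*
 * Find the first gap of at least 'want' bytes between occupied regions.
 * 'regs' must be sorted by start address and non-overlapping.
 * Returns the gap's start address, or 0 if no suitable gap exists
 * below 'limit'.
 */
static uint64_t find_free_gap(const struct region *regs, size_t n,
                              uint64_t want, uint64_t limit)
{
    uint64_t cursor = 0; /* lowest address not yet known to be occupied */

    for (size_t i = 0; i < n; i++) {
        /* Gap between the cursor and the next occupied region? */
        if (regs[i].start > cursor && regs[i].start - cursor >= want)
            return cursor;
        /* Advance the cursor past this region. */
        if (regs[i].start + regs[i].size > cursor)
            cursor = regs[i].start + regs[i].size;
    }
    /* Tail gap between the last region and the address-space limit. */
    return (limit > cursor && limit - cursor >= want) ? cursor : 0;
}
```

Any range the function returns is, by construction, not described by any DT node, which is exactly the "not present in the list" criterion above.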
> 
> You need to account for many things while creating the list of regions:
> device register mappings, reserved nodes, memory nodes, device tree
> overlay parsing, etc. It all seems to be a not-that-trivial task, and
> after all, if implemented, it only lives in U-boot and you have to
> maintain that code. So, if some other entity needs the same, you need
> to implement that as well.

Yes, there is some complexity. I have suggested other approaches in a 
separate thread. Did you look at them?

> And also you can imagine a system without device tree at all...
Xen doesn't provide a stable guest layout. Such a system would have to be 
tailored to a given guest configuration and Xen version (which can be custom)...

Therefore, hardcoding the value in the system (not in Xen headers!) is 
not going to make it much worse.

> So, I am not saying it is not possible to implement; I just question
> whether such an implementation is really a good way forward.
> 
>>
>>
>>> However, I realized a few days ago that the magic pages are not described in
>>> the DT. We probably want to fix that by marking the pages as "reserved" or
>>> creating a specific binding.
>>>
>>> So you will need a specific quirk for them.
>> It should also be possible to keep the shared info page allocated and
>> simply pass the address to the kernel by adding the right node to device
>> tree.
> And then you need to modify your OS, and not only Linux...
>> To do that, we would have to add a description of the magic pages
>> to device tree, which I think would be good to have in any case. In that
>> case it would be best to add a proper binding for it under the "xen,xen"
>> node.
> 
> I would say that querying Xen for such a memory page seems much
> cleaner, and it allows the guest OS either to continue using the existing
> method with memory allocation or to use the page provided by Xen.

IIUC your proposal, you are asking to add a hypercall to query which 
guest physical region can be used for the shared info page.

This may solve the problem you have at hand, but it is not scalable. 
There are a few other regions which require unallocated memory (e.g. the 
grant table, grant mappings, foreign memory mappings, ...).

So if we were going to involve Xen, then I think it is better to have a 
generic way to ask Xen for unallocated space.
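To illustrate the shape such a generic interface might take, here is a toy model. Everything in it is hypothetical: the `xen_mem_unallocated` argument struct, the `get_unallocated_space()` handler, and the `p2m_hole` bookkeeping are invented for this sketch and do not correspond to any actual Xen ABI. The host-side model simply hands out ranges from one large hole in the guest physical map:

```c
#include <assert.h>
#include <stdint.h>
#include <stddef.h>

/*
 * Hypothetical hypercall argument: the guest asks for 'nr_frames' of
 * unallocated guest-physical space and gets back a base GFN. This is
 * purely illustrative, not a real Xen interface.
 */
struct xen_mem_unallocated {
    uint64_t nr_frames;   /* IN: requested size in frames */
    uint64_t base_gfn;    /* OUT: start of the reserved range */
};

/* Toy model of the hypervisor's view of one unallocated P2M hole. */
struct p2m_hole {
    uint64_t next_gfn;    /* first GFN not yet handed out */
    uint64_t end_gfn;     /* exclusive end of the hole */
};

/* Reserve a range from the hole, bumping the cursor on success. */
static int get_unallocated_space(struct p2m_hole *hole,
                                 struct xen_mem_unallocated *arg)
{
    if (arg->nr_frames == 0 ||
        hole->end_gfn - hole->next_gfn < arg->nr_frames)
        return -1; /* a real implementation would return -ENOMEM */

    arg->base_gfn = hole->next_gfn;
    hole->next_gfn += arg->nr_frames;
    return 0;
}
```

The point of the sketch is that the same query serves every consumer of unallocated space (shared info page, grant table, foreign mappings), rather than one bespoke hypercall per use case.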

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 20 09:52:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 09:52:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130866.244904 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljfLW-0005bi-Cz; Thu, 20 May 2021 09:52:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130866.244904; Thu, 20 May 2021 09:52:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljfLW-0005bb-9w; Thu, 20 May 2021 09:52:02 +0000
Received: by outflank-mailman (input) for mailman id 130866;
 Thu, 20 May 2021 09:52:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ljfLV-0005bV-Co
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 09:52:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljfLU-0005Ye-1e; Thu, 20 May 2021 09:52:00 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ljfLT-0004xp-Qw; Thu, 20 May 2021 09:51:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=fCVVZ0KsF+1U4HjRTjT+kTrOtWdfFEKWtJAJWs8INVg=; b=wCXThbOU3HaftzT/izor9olwzb
	Ewk2ihhgU2a+yseo6+QHL6pIo68mloecduPIW4VIOcFZhH9IdiEV5QYOzJfRsmtKrV19lqK7Ys2yz
	ic67++co4GF6qh7duyd83QX4pvnS7rtphQUGzZFPTcNZ+jzq+9p0qb31As1Qcfse6m68=;
Subject: Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
To: Stefano Stabellini <sstabellini@kernel.org>
Cc: Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org,
 Roger Pau Monné <royger@freebsd.org>, Mitchell Horne <mhorne@freebsd.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>
References: <YIptpndhk6MOJFod@Air-de-Roger>
 <YItwHirnih6iUtRS@mattapan.m5p.com> <YIu80FNQHKS3+jVN@Air-de-Roger>
 <YJDcDjjgCsQUdsZ7@mattapan.m5p.com> <YJURGaqAVBSYnMRf@Air-de-Roger>
 <YJYem5CW/97k/e5A@mattapan.m5p.com> <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org>
 <YJ3jlGSxs60Io+dp@mattapan.m5p.com>
 <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org>
 <YJ8hTE/JbJygtVAL@mattapan.m5p.com>
 <f7360dac-5d83-733b-7ec5-c73d4dc0350d@xen.org>
 <alpine.DEB.2.21.2105191611540.14426@sstabellini-ThinkPad-T480s>
From: Julien Grall <julien@xen.org>
Message-ID: <b6fe6e06-517c-ee4c-5b71-a1bee4d4df13@xen.org>
Date: Thu, 20 May 2021 10:51:57 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <alpine.DEB.2.21.2105191611540.14426@sstabellini-ThinkPad-T480s>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Stefano,

On 20/05/2021 00:25, Stefano Stabellini wrote:
> On Sat, 15 May 2021, Julien Grall wrote:
>>> My feeling is one of two things should happen with the /hypervisor
>>> address range:
>>>
>>> 1>  OSes could be encouraged to use it for all foreign mappings.  The
>>> range should be dynamic in some fashion.  There could be a handy way to
>>> allow configuring the amount of address space thus reserved.
>>
>> In the context of XSA-300 and virtio on Xen on Arm, we discussed
>> providing a region for foreign mappings. The main trouble here is figuring
>> out the size: if you mess it up, then you may break all the PV drivers.
>>
>> If the problem is finding space, then I would like to suggest a different
>> approach (I think I may have discussed it with Andrew). Xen is maintaining
>> the P2M for the guest and therefore knows where the unallocated space is.
>> How about introducing a new hypercall to ask for "unallocated space"?
>>
>> This would not work for older hypervisors, but you could use the RAM instead
>> (as Linux does). This has drawbacks (e.g. shattering superpages, reducing
>> the amount of usable memory...), but for 1> you would also need a hack for
>> older Xen.
> 
> I am starting to wonder if it would make sense to add a new device tree
> binding to describe a larger free region for foreign mappings rather than a
> hypercall. It could be several GBs or even TBs in size. I like the idea
> of having it in device tree because, after all, this is static information.
> I can see that a hypercall would also work and I am open to it, but if
> possible I think it would be better not to extend the hypercall
> interface when there is a good alternative.

There are two issues with the Device-Tree approach:
   1) It doesn't work with ACPI.
   2) It is not clear how to define the size of the region. An admin 
doesn't have the right information at hand to know how many mappings 
will be done (a backend may map the same page multiple times).

The advantage of the hypercall solution is that the size is "virtually" 
unlimited and not tied to a specific firmware.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 20 11:42:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 11:42:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130881.244916 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljh4f-0007rQ-OG; Thu, 20 May 2021 11:42:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130881.244916; Thu, 20 May 2021 11:42:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljh4f-0007rJ-Ks; Thu, 20 May 2021 11:42:45 +0000
Received: by outflank-mailman (input) for mailman id 130881;
 Thu, 20 May 2021 11:42:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljh4e-0007rD-AN
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 11:42:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83d1b782-5a92-48f1-8209-4acb07cf9712;
 Thu, 20 May 2021 11:42:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D1F14ABE8;
 Thu, 20 May 2021 11:42:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83d1b782-5a92-48f1-8209-4acb07cf9712
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621510962; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=yL3Xk8Mq0X824q1qm2+RtG2tpaaondahQ7Jy4zmuZdM=;
	b=b5UXBNbM6FmAZ2EaTcRCMFAYF48xWTpnQHwepZj7lHl6W/WoWoWp1zoetB8s62+D9p6wxB
	GZLb+TPa/3N5hAKjMx+XbpqobGGygWsAbCgc5grFjRFeeZDrPnPy0orJFlR0RrGIqO/u7F
	gfL4PHqaLHl6umi+1I8mY8FAoB6jXZU=
To: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
Message-ID: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
Date: Thu, 20 May 2021 13:42:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
For this to work when NX is not available, x86_configure_nx() needs to
be called first.

Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/arch/x86/xen/enlighten_pv.c
+++ b/arch/x86/xen/enlighten_pv.c
@@ -1273,16 +1273,16 @@ asmlinkage __visible void __init xen_sta
 	/* Get mfn list */
 	xen_build_dynamic_phys_to_machine();
 
+	/* Work out if we support NX */
+	get_cpu_cap(&boot_cpu_data);
+	x86_configure_nx();
+
 	/*
 	 * Set up kernel GDT and segment registers, mainly so that
 	 * -fstack-protector code can be executed.
 	 */
 	xen_setup_gdt(0);
 
-	/* Work out if we support NX */
-	get_cpu_cap(&boot_cpu_data);
-	x86_configure_nx();
-
 	/* Determine virtual and physical address sizes */
 	get_cpu_address_sizes(&boot_cpu_data);
 


From xen-devel-bounces@lists.xenproject.org Thu May 20 11:46:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 11:46:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130887.244927 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljh82-00005R-8Y; Thu, 20 May 2021 11:46:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130887.244927; Thu, 20 May 2021 11:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljh82-00005J-4j; Thu, 20 May 2021 11:46:14 +0000
Received: by outflank-mailman (input) for mailman id 130887;
 Thu, 20 May 2021 11:46:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljh80-00005A-BX
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 11:46:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 452d03e5-8045-492b-bb8b-5b139634e41a;
 Thu, 20 May 2021 11:46:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 4EBBAAC4B;
 Thu, 20 May 2021 11:46:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 452d03e5-8045-492b-bb8b-5b139634e41a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621511170; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=A8q+fkSuWfl4pXkgwyhK6ygE0KXVq43LbF4JProMcWQ=;
	b=b6B1/lM0z/Cw4r6XtealbDwhnP3XuTEp9BtAR5mohAmbvgiAu2g9Fx7XGa0bz/96uIi+WY
	PMkIrGhM7UWTYnhbeZkoKSlgu+ZvsDKQO5HeH0pe4umPHJ35PY9UTbe4sD3ZUel46PCNe7
	GGA+FXlZiYnLJ+VkJvTAMzcVWeHALXo=
Subject: Re: [PATCH] xen-netback: correct success/error reporting for the
 SKB-with-fraglist case
To: paul@xen.org, Wei Liu <wl@xen.org>,
 "netdev@vger.kernel.org" <netdev@vger.kernel.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <4dd5b8ec-a255-7ab1-6dbf-52705acd6d62@suse.com>
 <67bc0728-761b-c3dd-bdd5-1a850ff79fbb@xen.org>
 <76c94541-21a8-7ae5-c4c4-48552f16c3fd@suse.com>
 <17e50fb5-31f7-60a5-1eec-10d18a40ad9a@xen.org>
 <57580966-3880-9e59-5d82-e1de9006aa0c@suse.com>
 <a26c1ecd-e303-3138-eb7e-96f0203ca888@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1a522244-4be8-5e33-77c7-4ea5cf130335@suse.com>
Date: Thu, 20 May 2021 13:46:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <a26c1ecd-e303-3138-eb7e-96f0203ca888@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.02.2021 17:23, Paul Durrant wrote:
> On 25/02/2021 14:00, Jan Beulich wrote:
>> On 25.02.2021 13:11, Paul Durrant wrote:
>>> On 25/02/2021 07:33, Jan Beulich wrote:
>>>> On 24.02.2021 17:39, Paul Durrant wrote:
>>>>> On 23/02/2021 16:29, Jan Beulich wrote:
>>>>>> When re-entering the main loop of xenvif_tx_check_gop() a 2nd time, the
>>>>>> special considerations for the head of the SKB no longer apply. Don't
>>>>>> mistakenly report ERROR to the frontend for the first entry in the list,
>>>>>> even if - from all I can tell - this shouldn't matter much as the overall
>>>>>> transmit will need to be considered failed anyway.
>>>>>>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>
>>>>>> --- a/drivers/net/xen-netback/netback.c
>>>>>> +++ b/drivers/net/xen-netback/netback.c
>>>>>> @@ -499,7 +499,7 @@ check_frags:
>>>>>>     				 * the header's copy failed, and they are
>>>>>>     				 * sharing a slot, send an error
>>>>>>     				 */
>>>>>> -				if (i == 0 && sharedslot)
>>>>>> +				if (i == 0 && !first_shinfo && sharedslot)
>>>>>>     					xenvif_idx_release(queue, pending_idx,
>>>>>>     							   XEN_NETIF_RSP_ERROR);
>>>>>>     				else
>>>>>>
>>>>>
>>>>> I think this will DTRT, but to my mind it would make more sense to clear
>>>>> 'sharedslot' before the 'goto check_frags' at the bottom of the function.
>>>>
>>>> That was my initial idea as well, but
>>>> - I think it is for a reason that the variable is "const".
>>>> - There is another use of it which would then instead need further
>>>>     amending (and which I believe is at least part of the reason for
>>>>     the variable to be "const").
>>>>
>>>
>>> Oh, yes. But now that I look again, don't you want:
>>>
>>> if (i == 0 && first_shinfo && sharedslot)
>>>
>>> ? (i.e no '!')
>>>
>>> The comment states that the error should be indicated when the first
>>> frag contains the header, in the case that the map succeeded but the
>>> prior copy from the same ref failed. This can only possibly be the case
>>> if this is the 'first_shinfo'.
>>
>> I don't think so, no - there's a difference between "first frag"
>> (at which point first_shinfo is NULL) and first frag list entry
>> (at which point first_shinfo is non-NULL).
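The distinction can be captured as a small truth table. The function below is a stripped-down stand-in for the patched condition, not the actual netback code; `report_error_for_slot()` is invented for this illustration, and `on_frag_list` stands in for `first_shinfo != NULL` in the real driver:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Stripped-down model of the decision in xenvif_tx_check_gop() after
 * the patch: report RSP_ERROR for entry 0 only on the first pass
 * (frags of the head SKB, first_shinfo still NULL) and only when the
 * header shared a slot with the first frag. On the second pass, over
 * the frag list, the special head-of-SKB case no longer applies.
 */
static bool report_error_for_slot(int i, bool on_frag_list, bool sharedslot)
{
    return i == 0 && !on_frag_list && sharedslot;
}
```

The second assertion below is the case the patch changes: before the fix, entry 0 of the frag list was mistakenly treated like the first frag of the head SKB.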
> 
> Yes, I realise I got it backwards. Confusing name but the comment above 
> its declaration does explain.
> 
>>
>>> (which is why I still think it is safe to unconst 'sharedslot' and
>>> clear it).
>>
>> And "no" here as well - this piece of code
>>
>> 		/* First error: if the header haven't shared a slot with the
>> 		 * first frag, release it as well.
>> 		 */
>> 		if (!sharedslot)
>> 			xenvif_idx_release(queue,
>> 					   XENVIF_TX_CB(skb)->pending_idx,
>> 					   XEN_NETIF_RSP_OKAY);
>>
>> specifically requires sharedslot to have the value that was
>> assigned to it at the start of the function (this property
>> doesn't go away when switching from fragments to frag list).
>> Note also how it uses XENVIF_TX_CB(skb)->pending_idx, i.e. the
>> value the local variable pending_idx was set from at the start
>> of the function.
>>
> 
> True, we do have to deal with freeing up the header if the first map 
> error comes on the frag list.
> 
> Reviewed-by: Paul Durrant <paul@xen.org>

Since I've not seen this go into 5.13-rc, may I ask what the disposition
of this is?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 11:57:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 11:57:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130895.244938 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhJ8-0001cC-9D; Thu, 20 May 2021 11:57:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130895.244938; Thu, 20 May 2021 11:57:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhJ8-0001c5-5o; Thu, 20 May 2021 11:57:42 +0000
Received: by outflank-mailman (input) for mailman id 130895;
 Thu, 20 May 2021 11:57:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RJtO=KP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ljhJ6-0001bz-50
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 11:57:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 72b26c6a-6ba7-4a2d-95b4-265fba9f409c;
 Thu, 20 May 2021 11:57:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 5A378ABCD;
 Thu, 20 May 2021 11:57:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 72b26c6a-6ba7-4a2d-95b4-265fba9f409c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621511858; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=90ynb7W4IbZViJolLygoEiEPp05KyvS2LpYo57sxRNM=;
	b=a17E57/IumMYM1MnncqoNtbYv1cpYWvynwtC5JSYx+yb1RcgA3SLoKL7Tw0qpUJHc0G649
	VeqlYGEQkO2m/3TgYqhK+WIGafl5YwY6AK7iajT3ThX/LHX0+SQEa/u3ApwCJWzwSk7shp
	19pQ8rcPdqFsTkK3DgkZKPTAnTIWOlE=
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
To: Jan Beulich <jbeulich@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
Date: Thu, 20 May 2021 13:57:37 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="0AEBa4MpbCXOV1cmQIET3rclFpHMIvf5n"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--0AEBa4MpbCXOV1cmQIET3rclFpHMIvf5n
Content-Type: multipart/mixed; boundary="GnAMB2g1ctDazWWWSslv2JHGDeH49994E";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
In-Reply-To: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>

--GnAMB2g1ctDazWWWSslv2JHGDeH49994E
Content-Type: multipart/mixed;
 boundary="------------9BB6C5810E23B1B91CE8BB5F"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------9BB6C5810E23B1B91CE8BB5F
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.05.21 13:42, Jan Beulich wrote:
> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
> For this to work when NX is not available, x86_configure_nx() needs to
> be called first.
>=20
> Reported-by: Olaf Hering <olaf@aepfle.de>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Juergen Gross <jgross@suse.com>


Juergen

--------------9BB6C5810E23B1B91CE8BB5F
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------9BB6C5810E23B1B91CE8BB5F--

--GnAMB2g1ctDazWWWSslv2JHGDeH49994E--

--0AEBa4MpbCXOV1cmQIET3rclFpHMIvf5n
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCmTrEFAwAAAAAACgkQsN6d1ii/Ey/H
swf7Bw4KRPCBXOLQRxHXECjX1GuU01V6QWanXxNdqUdrpP01B6GsRqJHbHXaxexcT9EaOielTULm
KzbQVpdlJW1JxRfwREzEsfomNPQXh9kSCC+JtPivHV1FyDvFfoFlQkDbVwus4GFRKzLoV+O4GwNM
OGwLcsa98MfOSemhCjDRhhpqqa10HhOrSbpekaC9KJJdDQa5wfMkdJm2nxbIIOeggtSzm1lWqkxX
jdNZBaODKj4smJBP6Hqh/3pdNSmbJUBVpnVsFS46OWhTTQbwBMLxX7PVObf75nSgRevKoQltZkxP
NSwyldrTbMa2KstBmc7GzL5o9qTnH7hufxJi0EmEdQ==
=p+vZ
-----END PGP SIGNATURE-----

--0AEBa4MpbCXOV1cmQIET3rclFpHMIvf5n--


From xen-devel-bounces@lists.xenproject.org Thu May 20 12:08:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 12:08:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130910.244948 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhTq-0003Ot-NQ; Thu, 20 May 2021 12:08:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130910.244948; Thu, 20 May 2021 12:08:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhTq-0003Om-JZ; Thu, 20 May 2021 12:08:46 +0000
Received: by outflank-mailman (input) for mailman id 130910;
 Thu, 20 May 2021 12:08:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljhTp-0003Og-Nu
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 12:08:45 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f416f45-3d21-43a2-88d9-7483e49946d1;
 Thu, 20 May 2021 12:08:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 45DBAABE8;
 Thu, 20 May 2021 12:08:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f416f45-3d21-43a2-88d9-7483e49946d1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621512524; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IJwQA4crkqD3kpXr035U2mAeY7TyM9BRVy0f7OQl3nM=;
	b=hsPI4l3dGClF8r3amtykNlYxaqFn9NNdZ6zzdflGj5vmLZYnjchXYRU5l76ruFdJvEgsqa
	/oeacMYK+J+DPscHt63cKtK0ZzMiydz9bGL8BcRcyibdEdx+vsqDx78CaLHjC+xgVCRp4E
	zZPUm4s+gs7i0GsSVMbb6qlQYF5cG+4=
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
To: Juergen Gross <jgross@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
Date: Thu, 20 May 2021 14:08:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.05.2021 13:57, Juergen Gross wrote:
> On 20.05.21 13:42, Jan Beulich wrote:
>> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
>> For this to work when NX is not available, x86_configure_nx() needs to
>> be called first.
>>
>> Reported-by: Olaf Hering <olaf@aepfle.de>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Reviewed-by: Juergen Gross <jgross@suse.com>

Thanks. I guess I forgot

Cc: stable@vger.kernel.org

If you agree, can you please add this before pushing to Linus?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 12:18:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 12:18:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130918.244960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhdP-0004tl-MA; Thu, 20 May 2021 12:18:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130918.244960; Thu, 20 May 2021 12:18:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhdP-0004te-I1; Thu, 20 May 2021 12:18:39 +0000
Received: by outflank-mailman (input) for mailman id 130918;
 Thu, 20 May 2021 12:18:38 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=RJtO=KP=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ljhdN-0004tS-VY
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 12:18:37 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 12a63026-878b-4a1c-a4bf-f847a4ad8bcc;
 Thu, 20 May 2021 12:18:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 08A69AD8A;
 Thu, 20 May 2021 12:18:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12a63026-878b-4a1c-a4bf-f847a4ad8bcc
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621513116; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=DEnIaLnARTFMf0ucnUE9Ck7bXS+MzTaCuja+6DqC3tc=;
	b=p0F0zqVhkj6JvuqFs3ZJs5R8Xq9YrWxGR9jBQd4NUwjYAlN6vfDBNR7kLFR157K0bNLjc3
	7nKmF0g53Dm4RK2ujjAjJzs/fMNhqcQFFgO1PInKsllIcHiTLxqjBI95KkyFr0ofNkFb7c
	FQJrHQjyberxyOsXefiUuGIJCt18fgE=
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
To: Jan Beulich <jbeulich@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <0f407fc3-8513-341c-ef40-27d45ea74e3d@suse.com>
Date: Thu, 20 May 2021 14:18:34 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="Q72iP5e8UkjNkT0fjKmBcbhoD46qs6rR8"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--Q72iP5e8UkjNkT0fjKmBcbhoD46qs6rR8
Content-Type: multipart/mixed; boundary="R2yJUS4kCF1sNHP6dXOyuqj0bUdSofBLn";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <0f407fc3-8513-341c-ef40-27d45ea74e3d@suse.com>
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
In-Reply-To: <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>

--R2yJUS4kCF1sNHP6dXOyuqj0bUdSofBLn
Content-Type: multipart/mixed;
 boundary="------------30DC238A4BA0140D098F400D"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------30DC238A4BA0140D098F400D
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.05.21 14:08, Jan Beulich wrote:
> On 20.05.2021 13:57, Juergen Gross wrote:
>> On 20.05.21 13:42, Jan Beulich wrote:
>>> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
>>> For this to work when NX is not available, x86_configure_nx() needs to
>>> be called first.
>>>
>>> Reported-by: Olaf Hering <olaf@aepfle.de>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
>
> Thanks. I guess I forgot
>
> Cc: stable@vger.kernel.org
>
> If you agree, can you please add this before pushing to Linus?

Yes, will do that.


Juergen


--------------30DC238A4BA0140D098F400D--

--R2yJUS4kCF1sNHP6dXOyuqj0bUdSofBLn--

--Q72iP5e8UkjNkT0fjKmBcbhoD46qs6rR8
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCmU5oFAwAAAAAACgkQsN6d1ii/Ey+j
kAf/eyhf0HkvKabgDFSpdhVuwaxThATuBr75ewpXtn2n9D0iSv5VdUPbzCMRKxsSKnI96tCJ8s6l
1R2Ml0cqIK7cChueaGFnspQT6yJrC12f9cz6BL+YYjgQuBTN849Y+gUTe2HLjKTZmBso0IpKfvOy
aaQ9euDeRiClkijxLEgpoUoP+zTjemVGgT7+Ob/aKjH5pwTsykGBxLpSO4AbAMa57YEpeTjEa7E9
zXFnNrHje9LTZx+iXNhKnhQoGAk8TNxDberVP9rGoPv75PiC9QR5h46mCm44AC1WI2bJQ/+CyIna
7mJFRN5vpfo0ANVL0bN6wa9V0bF/ztQmvd9BvMiHzA==
=VBU2
-----END PGP SIGNATURE-----

--Q72iP5e8UkjNkT0fjKmBcbhoD46qs6rR8--


From xen-devel-bounces@lists.xenproject.org Thu May 20 12:30:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 12:30:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130928.244971 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhos-00076W-PI; Thu, 20 May 2021 12:30:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130928.244971; Thu, 20 May 2021 12:30:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhos-00076P-LM; Thu, 20 May 2021 12:30:30 +0000
Received: by outflank-mailman (input) for mailman id 130928;
 Thu, 20 May 2021 12:30:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V2Ic=KP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ljhor-00076J-0Z
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 12:30:29 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c89876f-37e6-41c3-b55c-1262c17a5cb3;
 Thu, 20 May 2021 12:30:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c89876f-37e6-41c3-b55c-1262c17a5cb3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621513827;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=S6R9/4+ouCIYKdaAg20d3aEyrHyjFZM5VLIj6sTkVjA=;
  b=KN19loinZsyYzZqwwvk9rS/aR0O+gPSJ+zEUv/KIiLQUuByh7cKpAi6I
   l17c4IOhCO3VAkHwuk17o2KHrl1RSY4MXdl0sOQrC9YXHHSbPrXB/vePg
   gEJzxelO2AdEdrjW64VtLV1bUhvvw9VAbAO0/o0J0xeIwkzp0ln7ZfiEe
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ingEAIdoWJ4amoCfXnaXqwCfpqXOGuDL8Go/9mxVmtzuQsI9GYJ/2cean/rDTaSQhEVDI4u+J3
 OPBW8YLvwofcVf3dKj2LNyx9yrkx5tvsHyXPVZ4a9Av3UPIqTCOwtdxDWEt+dy+0HN7M4JuzN/
 VqXV8hZUKznBlTdB9JWuMhnR3udXoIUCfzkjIJacbZNqeXfj3ULTk0uWSbNdECdnr1Ios2ydZX
 oM1iR+lKC2Cpd3UfZdx+iuyprGBuy58WZ/3HnRmQcSsX/1CJ72J2ZebsZ8kNOKFPpNsbqs+rkM
 k7Q=
X-SBRS: 5.1
X-MesageID: 45773330
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-Data: A9a23:8xppoqgwj5qMiPuwrhCuMMnbX161fRAKZh0ujC45NGQN5FlHY01je
 htvWmGGPffcYjP0f4t0Oom0oEMH75aBz4cwSwtr/ywyHywb9cadCdqndUqhZCn6wu8v7a5EA
 2fyTvGacajYm1eF/k/F3oDJ9CI6iufQLlbFILas1hpZHGeIcw98z0I58wIFqtQw24HhXlrc4
 Y2aT/D3YzdJ5RYlagr41IrbwP9flKyaVOQw5wFWiVhj5TcyplFNZH4tDfjZw0jQG+G4KtWSV
 efbpIxVy0uCl/sb5nFJpZ6gGqECaua60QFjERO6UYD66vRJjnRaPqrWqJPwwKqY4tmEt4kZ9
 TlDiXC/YR14Evzuofk8bxBzOiB5F/UB5Z3jeVHq5KR/z2WeG5ft6/BnDUVwNowE4OdnR2pJ8
 JT0KhhUMErF3bjvhuvmGq8236zPL+GyVG8bkmtnwjzDS+4vXLjIQrnQ5M8e1zA17ixLNaqDO
 JVCMmE3BPjGS0x+Jg0sKoATpvmxqnWheBJI80io/4NitgA/yyQuieOwYbI5YOeiVchT20qVu
 G/C12D4GQ0BcsySzyKf9XChjfOJmjn0MKoTC7+Q5vNsmEeUxGEYFFsRT1TTifuzh1O6WtlfA
 1cJ4Sdopq83nGSpU938UhuQsHOC+BkGVLJ4CPYm4QuAzq7V5QexBWUeSDNFLts8u6ceWjgCx
 lKP2dTzClRSXKa9ECzHsO3O9HXrZHhTdzZqiTI4oRUt+YjP8aMKkzPzR/1KEamf1proOxWo6
 mXfxMQhvIn/nfLnxo3iowqe2WP998CUJuImzl+JBzr4t2uVcKbgN9TxswmDhRpVBNvBFjG8U
 G44d99yBQzkJb+KjjDFZOwQELyz6/+BPVUwanY0RMJ4qVxBF5O5FL28AQ2Sxm8yaK7omhezO
 ic/XD+9A7cJbROXgVdfOd7ZNijT5fGI+S7ZuhXogj1mPsAZSeN61HgxNBT4M57FyRd8+U3AB
 XtrWZn1VitLYUiW5BG3W/0cwdcWKtMWnjqOLa0XOy+PjOrPDFbIGOxtGAbfMYgEAFas/Vy9H
 yB3bJDRlX2ykYTWP0HqzGLkBQBbdSRkXciu9aS6tIere2JbJY3oMNeIqZsJcI15haVF0ODO+
 3C2QEhDz1Tjw3bALG23hrpLMdsDgb4XQaoHABER
IronPort-HdrOrdr: A9a23:ymSPMKxrJ6Um06LSKKtYKrPw5b1zdoMgy1knxilNoNJuEvBw9v
 re/sjzuiWYtN98Yh4dcJW7SdC9qBDnhP1ICOsqV4tKNTOO0FdAbrsSi7cKpQePJ8SXzIVgPM
 xbH5SWZueQMXFKyevCpCyCP/lI+qjjzImYwcrT1XVVdicvQL1h6goRMHf9LmRGACRLH5gBL7
 zZwsZcvTKvdU8aYa2Adx04Y9Q=
X-IronPort-AV: E=Sophos;i="5.82,313,1613451600"; 
   d="scan'208";a="45773330"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q7wkF3/MW7GkQuMsxmqZ7BuYd9ZxXKi0bEc+SWX3PevcmSru6Sfez0kyhZeam3i3Al7UaRnemSyiGbpFpD+Qqon22RVrKhF1VlvkikhR+Fl2ONzifYuATYd85ZgAsOH4tNZMD8B8IKEhxfIeOKHsCkKFUeyVlEbgtmbdDhg4TYPg8vjiyPNTEVE5q7I5dyinEaKnNXqI40Yb4B+qQBn36DUy/RQ3OnkYebEFp7/h1HLZw9gjy1exZCrNkYqL0DjAk1cThYNTbqFQraiP85OnusOJNeOT8rX6nxnPObn4xphRqw3yOQPhsiPhysE7DgGGEpBqj0+BAbN0eeOknTpbRg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WHxq2OPylvnc+qBbuMabDm0op4+VbArYNpTBZ2saP6I=;
 b=dVThx6r+9iDIWoX+Di9n4xUXP8Abcqlzxg7E7Cka8DapmGK/tSLVqRBZTgAS3JSVy5DX4M4oY9hJBWmhO1nOXQpHzdjs7xRDA3FHCQwgaP+ISXUVOiOiuR/kBmXyzrsWtcBqVYkOvwHMzzn+GyRxR2CvKGWf5fD3G//Yqqx9sQrGS9k3CXvYg2NEyJCUvnNHQN0BSF9zz32GqpnWty3RggwXiFgSeggCe72VtEZH3zIvHcprtFIcVgNOl7TY7DO/urJxMwvKDgrceObeawVl4AbnysOTw26vGAxi3i6XrrH6kkTxZ5WQpF7LNv8FkIexWiLTdiFkJd7vYchqIZ29vQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=WHxq2OPylvnc+qBbuMabDm0op4+VbArYNpTBZ2saP6I=;
 b=lfjq1AXBssrsXCUzjmB5Wdc1qmEBo11d4E8JbSFXsGlFThLPbnGkM0DXiONZ6ntzOkD6gmQZiV1LRK9iNLRK1S1e806BrFpR6wFv0Xx+EdqUrJSAzIsDeEmpLEYhsmn0E2kCvUOqXnSl9svAcYgPeG0rD7VwintNhfhiQXm8Dtk=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v3 0/2] libelf: small fixes for PVH
Date: Thu, 20 May 2021 14:30:10 +0200
Message-ID: <20210520123012.89855-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MRXP264CA0033.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 1584a063-ca12-4813-f9ba-08d91b8b09f8
X-MS-TrafficTypeDiagnostic: DM6PR03MB4218:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB42189054F6C828D3932100D18F2A9@DM6PR03MB4218.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: L9nL3Wyb91xTA3qbCewHlZUaniCgdzRDcZ0mvS282rNAGUS96W+kmb823rWFBHL0IB+9mrY/a9wQeLRAlGBcp0sBrR0cP5H+a+iiFiUt/Rmlyr7MlJSL8CogOnJ8VySRvaWbWhHxMAELdRdi057v1u88DnVDPRUTu9n+Jcb5vh8KWh/zQaVmviPoRq2rw7yYc23ygi/l4k/wS9GaqPSHSe313psqUvNExBa6680PbYPAVxPDJEN2WfX1+BYoIzqMlo+7V9Zri5CJ2aX0WRz1r+bjXAIMdIoIxKNsLDRRdeNANKR+sTWUZziGwmiW15wchdau09aMNfwqqy4JCdYo/wl1iZlr5ysEdwU0xqYB3Sx8r/C+JlgC26e1Omv5YPtnbnckSbGNseYXqjGQdmOPniHLjJIFg30kMhFNxLMGq9KWmlWXY6q6xo/yfwrkhQ52cHrq2nhuE8jX28mY5qZQtZSnegS6iT1ARrSgWz+ic9mEhe576NLGtNUdhuW9N5aqMI3ovBEVhF6OLCDG0Nx1jUBLA6X+/ogZnFd9ubhVKmGR0qiBxiWJT4sTN5bj6x/GFcoUpvZ/wrWcw8vLVxDjyfku5BPatngdcNe47uDSDdw=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(39860400002)(366004)(396003)(376002)(136003)(346002)(6496006)(16526019)(8676002)(5660300002)(8936002)(186003)(2906002)(2616005)(6666004)(66556008)(66476007)(956004)(66946007)(1076003)(4744005)(26005)(86362001)(478600001)(6916009)(6486002)(38100700002)(36756003)(316002)(4326008)(83380400001)(54906003);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?TTNHeUFHMUJUOHpQMitzZlhFdnNWTmNPbTlOV1V2WjZWRjRiczZPU2R5b2dX?=
 =?utf-8?B?VkV1YjU1OSt0TXhSZElTSXdTeCswbFU1Um9SVVQ4MlowclIwd0FGZUJpQ0kw?=
 =?utf-8?B?VEFmY2x3cG1uK2NWd045WFROeFFIYTNlbUJ6VVBQSTRqWDk4STZ5TEVoSVkx?=
 =?utf-8?B?dVhiUVRKTk8vOUlMM2pEWEtCZ3FrZEtRWG9Fa0lDcE5ieDd3a1NJVXBqdE9s?=
 =?utf-8?B?M2pCYzM3UkVyOHNxODFob0pEb0Jha00wbTRoMm1HUHEzODVMNEY4NnNmL092?=
 =?utf-8?B?M2dWbGF0OFdQRlZ5Yk00TGNWVUJrbSs4a1E2R3ZyMjZLZDE2eDBiQmwvWmZC?=
 =?utf-8?B?WUlGNzYwSytHcnFhbjRVbEFld1RTMlphQzVyV3pwcHN6V1gyNWkxMXhoWlVF?=
 =?utf-8?B?cGFuY2V2TjlWM2VLOUM1Y3RaRG15alJyeE02SU5YTUtkZlAxN3BZNXQrb1JE?=
 =?utf-8?B?K1JPbnlqNmZIVWVPY1B6MkRQZW5obThmbnFadDJDWTM4VzRtZGlQV2dtc2N2?=
 =?utf-8?B?dE9iNWlnSVc3eSszM0Y2M2kyMFZoc2lmRFUwRWN1K28xTU9ZY0IydEVQSnlo?=
 =?utf-8?B?VEQ4YnFTODhSbTJqUHd1SFJOOUNRSWEyTE51Q3p5V0krK1BkWVpWb2lDZlJt?=
 =?utf-8?B?UkZTYytUZitHWm1RSUlrMnZnM0VESWc3Si9Xc3oxMG1nWXQxUG5hTHRNeVFz?=
 =?utf-8?B?OWErdU5IOStsMW5UM0dUeGh3Nk95U09vb1YreDd0bVR1Y0ZxYm9KekpTNmsw?=
 =?utf-8?B?UGZiQm1TNHIxZzJwcUZhYW5pdkUyRU9JUVJDOTZ5N2QvTHZ4OUdHb1EzTUhW?=
 =?utf-8?B?eXRISm1CbnBMVUZKdkdrdmxINkZTVEdtWjRpMlVUMUg1NWVQMGd4MnRyejZQ?=
 =?utf-8?B?UTcxVFRlY1I5SVJYbWkrSXpuZnZxZTlMK3BMNkxjVnZKUmxVSHRNVzVPYlh1?=
 =?utf-8?B?MXl4WTBPVkNBR2h5N2UxMTNrVTVoT3d3VGNBVzVEeW1vVXV1REZRbTBZQkdx?=
 =?utf-8?B?cnJlLzBCdThNV2JYVWQrcE1Ra1BINnRiZ1hsSWpSL0JKcUVNdXVQdGNDVFk4?=
 =?utf-8?B?Q3lEeTFFL1VFU09SVTVZT1E1SE9mamE5UTNKd0ZreWNscytVMUprREJwR3hk?=
 =?utf-8?B?Nk4wb1VaaU93bjE2NTQ5WFpjWG5KU0wvdkNtSXhyc3dZN2xTY2h1SmMyQ3l1?=
 =?utf-8?B?OXZlTy9VNHlTZ3BXWXI4aU5iRnIxMFE2TmptQkJRUWo0TzFaZVZXcWQxNU03?=
 =?utf-8?B?RGhiZkdUZkhHMUZ4cDkybjJFYzdkcW01Mld0WUYvVU5YOEhaT2RnSkx6cmVp?=
 =?utf-8?B?dkRLUEd1NHc4ZUFvemtVY2xHNkpveWNBU2gwRk9YV2IrMjNab2ZUZ0xzNTdz?=
 =?utf-8?B?SDk2RW1KTXdYUE9nVEUxT3crcmIxOCsvOXpkRW9janJydE1NOWxqZkEwU0tu?=
 =?utf-8?B?UHJnQkEvVHdmdURUdDhHMDZsUXBOODFZY2pOUEw0eTBDdmdWcFZydzlpNGkx?=
 =?utf-8?B?L1l6Y3BNMkloQSsvOFlBb2k1MWwyOHB4NlF0UXRIRzZMSm0ybWJWdFh4aStt?=
 =?utf-8?B?ZCtrWDdxdGs2QStscnZCdFRVMER3dm52cGFxRnpzcFBZZVZLQTRHdTE4RGRU?=
 =?utf-8?B?RHh2L2p2dXJOY2ZzSk9udWdHaXdYc3ZqTUNacktFUmhvYVVhRWhLRCt0Tmd1?=
 =?utf-8?B?S29RY1daMmJVYXpMRmFjc3N3RzhrVWdTR2VDSXdXMXRWTzlHRnNXYXY3amo1?=
 =?utf-8?Q?f0njpybwj7Tuk49tjAC4RKuOjn9egklIOukLZ5N?=
X-MS-Exchange-CrossTenant-Network-Message-Id: 1584a063-ca12-4813-f9ba-08d91b8b09f8
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 12:30:23.2377
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: DDrVkTruokSL9KYxtYBFQsRxNzq9Me39syvq9qZ5Cxws2l5wdhrKbNJdK3LyAh6R5rOIFJs3a3Wj5bOa6YwmEQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4218
X-OriginatorOrg: citrix.com

Hello,

A couple of small fixes for PVH loading. The first one is likely not
very relevant, since PVH couldn't be booted anyway with the data in the
__xen_guest section, so it's mostly a cleanup.

The second patch fixes the checks for PVH loading, as in that case physical
addresses must always be used to perform the bounds calculations.

Thanks, Roger.

Roger Pau Monne (2):
  libelf: don't attempt to parse __xen_guest for PVH
  libelf: improve PVH elfnote parsing

 tools/fuzz/libelf/libelf-fuzzer.c   |  3 +-
 tools/libs/guest/xg_dom_elfloader.c |  6 ++--
 tools/libs/guest/xg_dom_hvmloader.c |  2 +-
 xen/arch/x86/hvm/dom0_build.c       |  2 +-
 xen/arch/x86/pv/dom0_build.c        |  2 +-
 xen/common/libelf/libelf-dominfo.c  | 49 +++++++++++++++++------------
 xen/include/xen/libelf.h            |  2 +-
 7 files changed, 39 insertions(+), 27 deletions(-)

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Thu May 20 12:30:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 12:30:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130929.244982 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhp4-0007QT-2a; Thu, 20 May 2021 12:30:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130929.244982; Thu, 20 May 2021 12:30:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhp3-0007QM-Un; Thu, 20 May 2021 12:30:41 +0000
Received: by outflank-mailman (input) for mailman id 130929;
 Thu, 20 May 2021 12:30:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V2Ic=KP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ljhp2-0007Pl-QU
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 12:30:40 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d2883251-3f26-41aa-86d1-c13d1837d1ef;
 Thu, 20 May 2021 12:30:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d2883251-3f26-41aa-86d1-c13d1837d1ef
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621513839;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=P40Hog//rApTFLCIQlAqgR9U8c+aP1G/ygiI3z8p7k0=;
  b=AeegSZUuGiBrUEAYekEQn69dGtYsj7CdOOKHSI2lSTaGHlzbpITeuewP
   RYJ5GM2EeT/ezaz4OZzpFIlauV7N+be9keMBaa7gZFXFb3lxEZO89SmBr
   eE8gfebyEVaCgH95H23T9cuFHmHPTUWii03TnjdbzZzKH74h7OUf3sPO2
   c=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: ymovIQQs5e0mFckladA2MZsPtjhMjBw94wJ3E9sQlrAIJ3s36NOWbMkbKwCKNHSps9op3cAgOa
 DhKzN3eu5f5KuRRcG4HcVnvdM7B2yIUB/fdDP3+/F39FBfnGYPUryOuSXk2v7YzJWkqc94mvvY
 IC+zF6sONS3oZKV5xPVMcvNnMPGvaBxwmfytsrefQC97rj5XYg2IXLY43xCYQ3jqfFHNnHjG9B
 RBDNLWTls0gr0UvE0CKCFgsMi8m/av3r+Ygq7RD85jBRRwfFZZvzcqgHKYN9GjShp1X6oRnh57
 +UY=
X-SBRS: 5.1
X-MesageID: 44609525
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-Data: A9a23:tRL2O6AxBJrg4hVW/zTjw5YqxClBgxIJ4kV8jS/XYbTApGgihTwBx
 2pMDDiHafjeNmCneNh/a4zk9kxXsJLXyNdlQQY4rX1jcSlH+JHPbTi7wuYcHM8wwunrFh8PA
 xA2M4GYRCwMZiaH4EjratANlFEkvU2ybuOU5NXsZ2YhH2eIdA970Ug6w7Ng09Y26TSEK1jlV
 e3a8pW31GCNg1aYAkpMg05UgEoy1BhakGpwUm0WPZinjneH/5UmJMt3yZWKB2n5WuFp8tuSH
 I4v+l0bElTxpH/BAvv9+lryn9ZjrrT6ZWBigVIOM0Sub4QrSoXfHc/XOdJFAXq7hQllkPgyl
 PgVqKfveDxqFYDMt+Eld15nASZhaPguFL/veRBTsOSWxkzCNXDt3+9vHAc9OohwFuRfWD8Us
 6ZCcXZUM0DF3bveLLGTE4GAguw5K8bmJsUHs2xIxjDFF/c2B5vERs0m4PcEgGlo150UQJ4yY
 eJDMDgzNw3DSSFIeWscB7UGs76Rol/GJmgwRFW9+vNsvjm7IBZK+KjgNp/Zd8KHQe1Rn12Ev
 STW8mLhGBYYOdeDjz2f/RqEh/DNtTP2XpoIE7+1/eIsh0ecrkQMDDUGWF39puO24ma8Ud9CL
 00f+gI1sLM/skesS7HVXQC8oXOClg4RXZxXCeJSwBqW1qPe7gKdB24FZj1MctorsIkxXzNC6
 7OSt4q3X3o16uTTEC/NsO3Nxd+vBcQLBSxeSHcZdlQ02JrIjYEciyrrCelxF7Hg27UZBgrML
 yC2QDkW3utJ1JRahvTjoDgrkBr2+MGRE1ddChH/GzL9t1koPOZJcqT1sQCz0BpWEGqOorBtV
 lAqnNKCpMQHEJ2AjiCEROhl8FqBvK3eaWO0bbKCBfAcG9WRF5yLJto4DNJWfh0B3iM4ldjBO
 h67hO+pzMUPVEZGlIcuC25LNyjP8UQHPYi9Ps04k/IXPckrHON51HgxNSZ8IFwBYGBzyPpia
 P93gO6HDGoACLQP8dZFb7xEjNcWKtQF7T6DFPjTkkX8uZLDNSH9dFvwGAbXBgzPxPjf+1u9H
 hc2H5bi9iizp8WuM3GLrtZLdQ5iwLpSLcmelvG7v9Wre2JOMGogF+XQ0fUmfYlklL5SjeDG4
 je2XUow9bY1rSSvxdmiApy7VI7SYA==
IronPort-HdrOrdr: A9a23:l4asdaEJfmp6Jzo7pLqEIceALOsnbusQ8zAXPo5KKSC9E/bo9f
 xG88536faZslkssTQb6Km90cq7MBDhHPxOi7X5VI3KNDUO+lHYSr2Ki7GN/9SJIUbDH4VmuZ
 uIHZIeNPTASXVCyePAzCbQKadE/PC3tI2ln+Xm9FEoZh1rbqwI1XYfNi+rVmB7Xgt+Prx8MJ
 aH/MJIqwGtdh0sH6CGOkU=
X-IronPort-AV: E=Sophos;i="5.82,313,1613451600"; 
   d="scan'208";a="44609525"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=nTlk9r4seE+eYt4symQZ80G+KoxqSpLW0PGRI3kXbMvTnTRtrGF+XCpOyXp4yq4qY7b9nb1bSzwcLS7YVXDS6xT9zVFHDe1cBLwz479jv+oV61WNXNu7g4wZOq6/bpPN3B6i2Si817+wJA+BzpShwnqrZWwnwEvrr0EEWp8b+gpvLxIEnOx6dWbjJ4oQMBEv1OnsULB1Gviu2zt0zKrWGoWWVaiZiaH81MX/D2RuvaPUgsBbFRM/sv09umU5U6yL7yaAK1MfvEQMhAeEeFzhRGcvLVE5nlhdVq/FuKTbjtWqtn0ytdVdfns0qwRoUu0KfEmPaGGrpVoUzQEmz3JX1w==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sPwA3OmQRR4Le01oTZkBmItUHWI1w1PcWSsZAP5z+lQ=;
 b=bq5YSQx+CSkSkcswmJFHA91PTNnV/dQp/QVG2BOmayP8UyKXor/I31xb1pCPQecKgLeNt66Bbd+RUWPJ0Fd2iN8hnov0kbCyrUZyvMENmx/uVhGMOVlcH5UjPWwkMBLdOJBdvbaIj1yvTdrB6La786ZVhNCTOFYDpXWUA1n+qbJ2LKo/KM8n6dgjh2o0xhFay79gjgXtxGBAHfee+r52LvB6qAnQUh+EyUxXFCr8wZwIRUIGJl6C5cB6XADS47wgQAn4PUN6P7oqU+fwCAbJPzgL9KuaXM+KiC5iJqNq/+tGwgN/6efsD7oXlS30SMJiiYkwC7PM5QBeGC7MXzdeyQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=sPwA3OmQRR4Le01oTZkBmItUHWI1w1PcWSsZAP5z+lQ=;
 b=K4crQvpwKBmEFdafEZ0mZLJRrz7BlL0DwYEFHAXuca66Zq/Elkr9FNcgck2dpmuY+X96ORvUa1WvFkz9T5uqkeDaJ/gXw2KL/51vsNBeuY2C5838q130lgJJ2FKmi8t5RB/MnF7a5F+BC6/B1Mkb5s6uYKkfrASSvssc6lrpMYk=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>, "George
 Dunlap" <george.dunlap@citrix.com>, Jan Beulich <jbeulich@suse.com>, "Julien
 Grall" <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: [PATCH v3 2/2] libelf: improve PVH elfnote parsing
Date: Thu, 20 May 2021 14:30:12 +0200
Message-ID: <20210520123012.89855-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210520123012.89855-1-roger.pau@citrix.com>
References: <20210520123012.89855-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Pass an hvm boolean parameter to the ELF note parsing and checking
routines, so that stricter checking can be done when libelf is
dealing with an HVM container.

elf_xen_note_check shouldn't return early unless PHYS32_ENTRY is set
and the container is of type HVM, or else the loader and version
checks would be skipped for kernels intended to be booted as PV that
also happen to have PHYS32_ENTRY set.

Adjust elf_xen_addr_calc_check so that the virtual addresses are
actually physical ones (by setting virt_base and elf_paddr_offset to
zero) when the container is of type HVM, as that container is always
started with paging disabled.
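
The effect on the address calculation can be illustrated with a
simplified model (field names follow struct elf_dom_parms from the
patch context; this is not the real elf_xen_addr_calc_check):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/*
 * Simplified model of the address fixup: for an HVM container both
 * virt_base and elf_paddr_offset are forced to 0, so the computed
 * "virtual" kernel start equals the physical load address.
 */
struct dom_parms {
    uint64_t virt_base;
    uint64_t elf_paddr_offset;
    uint64_t virt_kstart;
};

static void addr_calc(struct dom_parms *p, uint64_t pstart, bool hvm)
{
    if ( hvm )
        p->virt_base = p->elf_paddr_offset = 0;
    p->virt_kstart = pstart + (p->virt_base - p->elf_paddr_offset);
}
```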

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - Do not print messages about VIRT_BASE/ELF_PADDR_OFFSET unset on
   HVM.
 - Make it explicit that elf_paddr_offset will be set to 0 regardless
   of elf_note_start value for HVM.

Changes since v1:
 - Expand comments.
 - Do not set virt_entry to phys_entry unless it's an HVM container.
---
 tools/fuzz/libelf/libelf-fuzzer.c   |  3 +-
 tools/libs/guest/xg_dom_elfloader.c |  6 ++--
 tools/libs/guest/xg_dom_hvmloader.c |  2 +-
 xen/arch/x86/hvm/dom0_build.c       |  2 +-
 xen/arch/x86/pv/dom0_build.c        |  2 +-
 xen/common/libelf/libelf-dominfo.c  | 43 ++++++++++++++++++-----------
 xen/include/xen/libelf.h            |  2 +-
 7 files changed, 37 insertions(+), 23 deletions(-)

diff --git a/tools/fuzz/libelf/libelf-fuzzer.c b/tools/fuzz/libelf/libelf-fuzzer.c
index 1ba85717114..84fb84720fa 100644
--- a/tools/fuzz/libelf/libelf-fuzzer.c
+++ b/tools/fuzz/libelf/libelf-fuzzer.c
@@ -17,7 +17,8 @@ int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size)
         return -1;
 
     elf_parse_binary(elf);
-    elf_xen_parse(elf, &parms);
+    elf_xen_parse(elf, &parms, false);
+    elf_xen_parse(elf, &parms, true);
 
     return 0;
 }
diff --git a/tools/libs/guest/xg_dom_elfloader.c b/tools/libs/guest/xg_dom_elfloader.c
index 06e713fe111..ad71163dd92 100644
--- a/tools/libs/guest/xg_dom_elfloader.c
+++ b/tools/libs/guest/xg_dom_elfloader.c
@@ -135,7 +135,8 @@ static elf_negerrnoval xc_dom_probe_elf_kernel(struct xc_dom_image *dom)
      * or else we might be trying to load a plain ELF.
      */
     elf_parse_binary(&elf);
-    rc = elf_xen_parse(&elf, dom->parms);
+    rc = elf_xen_parse(&elf, dom->parms,
+                       dom->container_type == XC_DOM_HVM_CONTAINER);
     if ( rc != 0 )
         return rc;
 
@@ -166,7 +167,8 @@ static elf_negerrnoval xc_dom_parse_elf_kernel(struct xc_dom_image *dom)
 
     /* parse binary and get xen meta info */
     elf_parse_binary(elf);
-    if ( elf_xen_parse(elf, dom->parms) != 0 )
+    if ( elf_xen_parse(elf, dom->parms,
+                       dom->container_type == XC_DOM_HVM_CONTAINER) != 0 )
     {
         rc = -EINVAL;
         goto out;
diff --git a/tools/libs/guest/xg_dom_hvmloader.c b/tools/libs/guest/xg_dom_hvmloader.c
index ec6ebad7fd5..3a63b23ba39 100644
--- a/tools/libs/guest/xg_dom_hvmloader.c
+++ b/tools/libs/guest/xg_dom_hvmloader.c
@@ -73,7 +73,7 @@ static elf_negerrnoval xc_dom_probe_hvm_kernel(struct xc_dom_image *dom)
      * else we might be trying to load a PV kernel.
      */
     elf_parse_binary(&elf);
-    rc = elf_xen_parse(&elf, dom->parms);
+    rc = elf_xen_parse(&elf, dom->parms, true);
     if ( rc == 0 )
         return -EINVAL;
 
diff --git a/xen/arch/x86/hvm/dom0_build.c b/xen/arch/x86/hvm/dom0_build.c
index 878dc1d808e..c24b9efdb0a 100644
--- a/xen/arch/x86/hvm/dom0_build.c
+++ b/xen/arch/x86/hvm/dom0_build.c
@@ -561,7 +561,7 @@ static int __init pvh_load_kernel(struct domain *d, const module_t *image,
     elf_set_verbose(&elf);
 #endif
     elf_parse_binary(&elf);
-    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    if ( (rc = elf_xen_parse(&elf, &parms, true)) != 0 )
     {
         printk("Unable to parse kernel for ELFNOTES\n");
         return rc;
diff --git a/xen/arch/x86/pv/dom0_build.c b/xen/arch/x86/pv/dom0_build.c
index e0801a9e6d1..af47615b226 100644
--- a/xen/arch/x86/pv/dom0_build.c
+++ b/xen/arch/x86/pv/dom0_build.c
@@ -353,7 +353,7 @@ int __init dom0_construct_pv(struct domain *d,
         elf_set_verbose(&elf);
 
     elf_parse_binary(&elf);
-    if ( (rc = elf_xen_parse(&elf, &parms)) != 0 )
+    if ( (rc = elf_xen_parse(&elf, &parms, false)) != 0 )
         goto out;
 
     /* compatibility check */
diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index abea1011c18..24d1371dd7a 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -360,7 +360,7 @@ elf_errorstatus elf_xen_parse_guest_info(struct elf_binary *elf,
 /* sanity checks                                                            */
 
 static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
-                              struct elf_dom_parms *parms)
+                              struct elf_dom_parms *parms, bool hvm)
 {
     if ( (ELF_PTRVAL_INVALID(parms->elf_note_start)) &&
          (ELF_PTRVAL_INVALID(parms->guest_info)) )
@@ -382,7 +382,7 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
     }
 
     /* PVH only requires one ELF note to be set */
-    if ( parms->phys_entry != UNSET_ADDR32 )
+    if ( parms->phys_entry != UNSET_ADDR32 && hvm )
     {
         elf_msg(elf, "ELF: Found PVH image\n");
         return 0;
@@ -414,7 +414,7 @@ static elf_errorstatus elf_xen_note_check(struct elf_binary *elf,
 }
 
 static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
-                                   struct elf_dom_parms *parms)
+                                   struct elf_dom_parms *parms, bool hvm)
 {
     uint64_t virt_offset;
 
@@ -425,12 +425,15 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
         return -1;
     }
 
-    /* Initial guess for virt_base is 0 if it is not explicitly defined. */
-    if ( parms->virt_base == UNSET_ADDR )
+    /*
+     * Initial guess for virt_base is 0 if it is not explicitly defined in the
+     * PV case. For PVH virt_base is forced to 0 because paging is disabled.
+     */
+    if ( parms->virt_base == UNSET_ADDR || hvm )
     {
         parms->virt_base = 0;
-        elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
-                parms->virt_base);
+        if ( !hvm )
+            elf_msg(elf, "ELF: VIRT_BASE unset, using 0\n");
     }
 
     /*
@@ -441,23 +444,31 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
      *
      * If we are using the modern ELF notes interface then the default
      * is 0.
+     *
+     * For PVH this is forced to 0, as it's already a legacy option for PV.
      */
-    if ( parms->elf_paddr_offset == UNSET_ADDR )
+    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
     {
-        if ( parms->elf_note_start )
+        if ( parms->elf_note_start || hvm )
             parms->elf_paddr_offset = 0;
         else
             parms->elf_paddr_offset = parms->virt_base;
-        elf_msg(elf, "ELF_PADDR_OFFSET unset, using %#" PRIx64 "\n",
-                parms->elf_paddr_offset);
+        if ( !hvm )
+            elf_msg(elf, "ELF_PADDR_OFFSET unset, using %#" PRIx64 "\n",
+                    parms->elf_paddr_offset);
     }
 
     virt_offset = parms->virt_base - parms->elf_paddr_offset;
     parms->virt_kstart = elf->pstart + virt_offset;
     parms->virt_kend   = elf->pend   + virt_offset;
 
-    if ( parms->virt_entry == UNSET_ADDR )
-        parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
+    if ( parms->virt_entry == UNSET_ADDR || hvm )
+    {
+        if ( parms->phys_entry != UNSET_ADDR32 && hvm )
+            parms->virt_entry = parms->phys_entry;
+        else
+            parms->virt_entry = elf_uval(elf, elf->ehdr, e_entry);
+    }
 
     if ( parms->bsd_symtab )
     {
@@ -499,7 +510,7 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
 /* glue it all together ...                                                 */
 
 elf_errorstatus elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms)
+                  struct elf_dom_parms *parms, bool hvm)
 {
     ELF_HANDLE_DECL(elf_shdr) shdr;
     ELF_HANDLE_DECL(elf_phdr) phdr;
@@ -592,9 +603,9 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
         }
     }
 
-    if ( elf_xen_note_check(elf, parms) != 0 )
+    if ( elf_xen_note_check(elf, parms, hvm) != 0 )
         return -1;
-    if ( elf_xen_addr_calc_check(elf, parms) != 0 )
+    if ( elf_xen_addr_calc_check(elf, parms, hvm) != 0 )
         return -1;
     return 0;
 }
diff --git a/xen/include/xen/libelf.h b/xen/include/xen/libelf.h
index b73998150fc..be47b0cc366 100644
--- a/xen/include/xen/libelf.h
+++ b/xen/include/xen/libelf.h
@@ -454,7 +454,7 @@ int elf_xen_parse_note(struct elf_binary *elf,
 int elf_xen_parse_guest_info(struct elf_binary *elf,
                              struct elf_dom_parms *parms);
 int elf_xen_parse(struct elf_binary *elf,
-                  struct elf_dom_parms *parms);
+                  struct elf_dom_parms *parms, bool hvm);
 
 static inline void *elf_memcpy_unchecked(void *dest, const void *src, size_t n)
     { return memcpy(dest, src, n); }
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Thu May 20 12:30:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 12:30:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130930.244993 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhp7-0007kE-Ge; Thu, 20 May 2021 12:30:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130930.244993; Thu, 20 May 2021 12:30:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhp7-0007k7-DM; Thu, 20 May 2021 12:30:45 +0000
Received: by outflank-mailman (input) for mailman id 130930;
 Thu, 20 May 2021 12:30:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=V2Ic=KP=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1ljhp6-0007iu-4H
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 12:30:44 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c04ed44d-6193-42e6-9876-75e697ba86b7;
 Thu, 20 May 2021 12:30:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c04ed44d-6193-42e6-9876-75e697ba86b7
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, "Ian
 Jackson" <iwj@xenproject.org>, Jan Beulich <jbeulich@suse.com>, Julien Grall
	<julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: [PATCH v3 1/2] libelf: don't attempt to parse __xen_guest for PVH
Date: Thu, 20 May 2021 14:30:11 +0200
Message-ID: <20210520123012.89855-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210520123012.89855-1-roger.pau@citrix.com>
References: <20210520123012.89855-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

The legacy __xen_guest section doesn't support the PHYS32_ENTRY
elfnote, so it's pointless to attempt to parse the elfnotes from that
section when loading into an HVM container.
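
For illustration, the legacy section holds a comma-separated key=value
string, and PHYS32_ENTRY simply never appears among its keys (the
sample string and the helper below are hypothetical, not libelf code;
the real parser lives in elf_xen_parse_guest_info):

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/*
 * Check whether a key appears at a field boundary in a legacy
 * __xen_guest-style "KEY=value,KEY=value" string.
 */
static bool guest_info_has_key(const char *info, const char *key)
{
    size_t klen = strlen(key);
    const char *p = info;

    while ( (p = strstr(p, key)) != NULL )
    {
        /* Match only at a field start and followed by '=', ',' or end. */
        if ( (p == info || p[-1] == ',') &&
             (p[klen] == '=' || p[klen] == ',' || p[klen] == '\0') )
            return true;
        p += klen;
    }
    return false;
}
```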

Suggested-by: Jan Beulich <jbeulich@suse.com>
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
Changes since v2:
 - New in this version.
---
 xen/common/libelf/libelf-dominfo.c | 6 ++----
 1 file changed, 2 insertions(+), 4 deletions(-)

diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
index 69c94b6f3bb..abea1011c18 100644
--- a/xen/common/libelf/libelf-dominfo.c
+++ b/xen/common/libelf/libelf-dominfo.c
@@ -577,10 +577,8 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
 
     }
 
-    /*
-     * Finally fall back to the __xen_guest section.
-     */
-    if ( xen_elfnotes == 0 )
+    /* Finally fall back to the __xen_guest section for PV guests only. */
+    if ( xen_elfnotes == 0 && !hvm )
     {
         shdr = elf_shdr_by_name(elf, "__xen_guest");
         if ( ELF_HANDLE_VALID(shdr) )
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Thu May 20 12:35:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 12:35:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130951.245003 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhtu-0000eL-2R; Thu, 20 May 2021 12:35:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130951.245003; Thu, 20 May 2021 12:35:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhtt-0000eE-VA; Thu, 20 May 2021 12:35:41 +0000
Received: by outflank-mailman (input) for mailman id 130951;
 Thu, 20 May 2021 12:35:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljhts-0000e7-Hr
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 12:35:40 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bcdae2e4-fadc-42ba-9fbe-e2363bc36753;
 Thu, 20 May 2021 12:35:39 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E1741ABE8;
 Thu, 20 May 2021 12:35:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bcdae2e4-fadc-42ba-9fbe-e2363bc36753
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
Subject: Re: [PATCH v3 1/2] libelf: don't attempt to parse __xen_guest for PVH
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210520123012.89855-1-roger.pau@citrix.com>
 <20210520123012.89855-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <29ecaaaf-d096-e8af-fc6f-292ff2b3d5ae@suse.com>
Date: Thu, 20 May 2021 14:35:38 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210520123012.89855-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.05.2021 14:30, Roger Pau Monne wrote:
> The legacy __xen_guest section doesn't support the PHYS32_ENTRY
> elfnote, so it's pointless to attempt to parse the elfnotes from that
> section when called from an hvm container.
> 
> Suggested-by: Jan Beulich <jbeulich@suse.com>
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> Changes since v2:
>  - New in this version.
> ---
>  xen/common/libelf/libelf-dominfo.c | 6 ++----
>  1 file changed, 2 insertions(+), 4 deletions(-)
> 
> diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
> index 69c94b6f3bb..abea1011c18 100644
> --- a/xen/common/libelf/libelf-dominfo.c
> +++ b/xen/common/libelf/libelf-dominfo.c
> @@ -577,10 +577,8 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
>  
>      }
>  
> -    /*
> -     * Finally fall back to the __xen_guest section.
> -     */
> -    if ( xen_elfnotes == 0 )
> +    /* Finally fall back to the __xen_guest section for PV guests only. */
> +    if ( xen_elfnotes == 0 && !hvm )

Isn't this depending on the 2nd patch adding the function parameter?
IOW doesn't this break the build, even if just transiently? With the
respective hunk from patch 2 moved here:
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 12:37:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 12:37:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130957.245014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhvM-0001FZ-DR; Thu, 20 May 2021 12:37:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130957.245014; Thu, 20 May 2021 12:37:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhvM-0001FS-AF; Thu, 20 May 2021 12:37:12 +0000
Received: by outflank-mailman (input) for mailman id 130957;
 Thu, 20 May 2021 12:37:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljhvL-0001FF-Er
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 12:37:11 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 459601b0-e461-482e-99ba-234b5ae42aba;
 Thu, 20 May 2021 12:37:10 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EF813ABE8;
 Thu, 20 May 2021 12:37:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 459601b0-e461-482e-99ba-234b5ae42aba
Subject: Re: [PATCH v3 2/2] libelf: improve PVH elfnote parsing
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20210520123012.89855-1-roger.pau@citrix.com>
 <20210520123012.89855-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e9633187-42c1-e922-70b9-30c0b5a862b8@suse.com>
Date: Thu, 20 May 2021 14:37:09 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210520123012.89855-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.05.2021 14:30, Roger Pau Monne wrote:
> Pass an hvm boolean parameter to the elf note parsing and checking
> routines, so that better checking can be done when libelf is
> dealing with an HVM container.
> 
> elf_xen_note_check shouldn't return early unless PHYS32_ENTRY is set
> and the container is of type HVM, or else the loader and version
> checks would be avoided for kernels intended to be booted as PV but
> that also have PHYS32_ENTRY set.
> 
> Adjust elf_xen_addr_calc_check so that the virtual addresses are
> actually physical ones (by setting virt_base and elf_paddr_offset to
> zero) when the container is of type HVM, as that container is always
> started with paging disabled.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

With the one hunk moved to patch 1
Reviewed-by: Jan Beulich <jbeulich@suse.com>

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 12:37:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 12:37:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130959.245026 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhvg-0001kA-La; Thu, 20 May 2021 12:37:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130959.245026; Thu, 20 May 2021 12:37:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljhvg-0001k3-IR; Thu, 20 May 2021 12:37:32 +0000
Received: by outflank-mailman (input) for mailman id 130959;
 Thu, 20 May 2021 12:37:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WSyT=KP=epam.com=prvs=5774c33267=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1ljhvf-0001iW-BA
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 12:37:31 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 95300298-f3ef-44b6-b0f6-f1e624989bd1;
 Thu, 20 May 2021 12:37:30 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14KCabLA003314; Thu, 20 May 2021 12:37:28 GMT
Received: from eur05-db8-obe.outbound.protection.outlook.com
 (mail-db8eur05lp2107.outbound.protection.outlook.com [104.47.17.107])
 by mx0a-0039f301.pphosted.com with ESMTP id 38np2e0g12-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Thu, 20 May 2021 12:37:28 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5009.eurprd03.prod.outlook.com (2603:10a6:208:105::33)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.28; Thu, 20 May
 2021 12:37:26 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::3541:4069:60ca:de3d]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::3541:4069:60ca:de3d%6]) with mapi id 15.20.4129.033; Thu, 20 May 2021
 12:37:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 95300298-f3ef-44b6-b0f6-f1e624989bd1
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
CC: Anastasiia Lukianenko <Anastasiia_Lukianenko@epam.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Andrii
 Chepurnyi <Andrii_Chepurnyi@epam.com>
Subject: Re: Hand over of the Xen shared info page
Thread-Topic: Hand over of the Xen shared info page
Date: Thu, 20 May 2021 12:37:25 +0000
Message-ID: <343a2710-c0a2-5454-8e1c-2b0a7f263f01@epam.com>
References: <64bc6ab6ec387acebb40c1b4786dfda1050f9d50.camel@epam.com>
 <8ff05bdf-a6c4-6b14-b39c-7d9b3bb9d279@xen.org>
 <1db54c363eae22613280e7181805abee396fe5e9.camel@epam.com>
 <8d1ecf6c-a0d1-d9bc-5daf-d02a34fff1e6@xen.org>
 <alpine.DEB.2.21.2105191604130.14426@sstabellini-ThinkPad-T480s>
 <4b686071-3260-87b1-d240-8d3c2b48f1f8@epam.com>
 <0d502893-f433-30b9-72a5-6842274239f3@xen.org>
In-Reply-To: <0d502893-f433-30b9-72a5-6842274239f3@xen.org>
Accept-Language: en-US
Content-Language: en-US
Content-Type: text/plain; charset="utf-8"
Content-ID: <DC037EEF6B276640896AE1E36FC508D0@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Hi,

On 5/20/21 12:46 PM, Julien Grall wrote:
>
>
> On 20/05/2021 06:21, Oleksandr Andrushchenko wrote:
>> Hi, all!
>
> Hi,
>
>
>> On 5/20/21 2:11 AM, Stefano Stabellini wrote:
>>> On Wed, 19 May 2021, Julien Grall wrote:
>>>> On 14/05/2021 10:50, Anastasiia Lukianenko wrote:
>>>>> Hi Julien!
>>>> Hello,
>>>>
>>>>> On Thu, 2021-05-13 at 09:37 +0100, Julien Grall wrote:
>>>>>> On 13/05/2021 09:03, Anastasiia Lukianenko wrote:
>>>>>> The alternative is for U-boot to go through the DT and infer which
>>>>>> regions are free (IOW any region not described).
>>>>> Thank you for interest in the problem and advice on how to solve it.
>>>>> Could you please clarify how we could find free regions using DT in U-
>>>>> boot?
>>>> I don't know U-boot code, so I can't tell whether what I suggest would work.
>>>>
>>>> In theory, the device-tree should described every region allocated in address
>>>> space. So if you parse the device-tree and create a list (or any
>>>> datastructure) with the regions, then any range not present in the list would
>>>> be free region you could use.
>>> Yes, any "empty" memory region which is neither memory nor MMIO should
>>> work.
>>
>> You need to account on many things while creating the list of regions:
>>
>> device register mappings, reserved nodes, memory nodes, device tree
>>
>> overlays parsing etc. It all seem to be a not-that-trivial task and after
>>
>> all if implemented it only lives in U-boot and you have to maintain that
>>
>> code. So, if some other entity needs the same you need to implement
>>
>> that as well.
>
> Yes, there are some complexity. I have suggested other approach in a separate thread. Did you look at them?

Yes, so probably we can re-use the solution from that thread when it comes to some conclusion

and implementation.

>
>> And also you can imagine a system without device tree at all...
> Xen doesn't provide a stable guest layout. Such system would have to be tailored to a given guest configuration, Xen version (can be custom)...
>
> Therefore, hardcoding the value in the system (not in Xen headers!) is not going to make it much worse.
Agree to some extent
>
>> So, I am not saying it is not possible to implement, but I just question if
>>
>> such an implementation is really a good way forward.
>>
>>>
>>>
>>>> However, I realized a few days ago that the magic pages are not described in
>>>> the DT. We probably want to fix it by marking the page as "reserved" or create
>>>> a specific bindings.
>>>>
>>>> So you will need a specific quirk for them.
>>> It should also be possible to keep the shared info page allocated and
>>> simply pass the address to the kernel by adding the right node to device
>>> tree.
>> And then you need to modify your OS and this is not only Linux...
>>> To do that, we would have to add a description of the magic pages
>>> to device tree, which I think would be good to have in any case. In that
>>> case it would be best to add a proper binding for it under the "xen,xen"
>>> node.
>>
>> I would say that querying Xen for such a memory page seems much
>>
>> more cleaner and allows the guest OS either to continue using the existing
>>
>> method with memory allocation or using the page provided by Xen.
>
> IIUC your proposal, you are asking to add an hypercall to query which guest physical region can be used for the shared info page.
>
> This may solve the problem you have at hand, but this is not scalable. There are a few other regions which regions unallocated memory (e.g. grant table, grant mapping, foreign memory map,....).
>
> So if we were going to involve Xen, then I think it is better to have a generic way to ask Xen for unallocated space.
Agree
>
> Cheers,
>
Thank you,

Oleksandr


From xen-devel-bounces@lists.xenproject.org Thu May 20 13:00:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 13:00:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130977.245036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljiHm-00051N-Mi; Thu, 20 May 2021 13:00:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130977.245036; Thu, 20 May 2021 13:00:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljiHm-00051G-Ji; Thu, 20 May 2021 13:00:22 +0000
Received: by outflank-mailman (input) for mailman id 130977;
 Thu, 20 May 2021 13:00:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljiHk-000516-US; Thu, 20 May 2021 13:00:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljiHk-0000Q5-NK; Thu, 20 May 2021 13:00:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljiHk-0005ZZ-BA; Thu, 20 May 2021 13:00:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljiHk-0001dT-Af; Thu, 20 May 2021 13:00:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162099-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162099: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=d874bc081600528f0400977460b4f98f21e156a1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 May 2021 13:00:20 +0000

flight 162099 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162099/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                d874bc081600528f0400977460b4f98f21e156a1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  273 days
Failing since        152659  2020-08-21 14:07:39 Z  271 days  499 attempts
Testing same since   162099  2021-05-20 03:13:30 Z    0 days    1 attempts

------------------------------------------------------------
507 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 155961 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 20 13:34:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 13:34:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130986.245050 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljioq-0008El-Dn; Thu, 20 May 2021 13:34:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130986.245050; Thu, 20 May 2021 13:34:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljioq-0008Ee-Aa; Thu, 20 May 2021 13:34:32 +0000
Received: by outflank-mailman (input) for mailman id 130986;
 Thu, 20 May 2021 13:34:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljioo-0008EY-VW
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 13:34:30 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b51d228-83c9-4c38-8ff7-7711c2681e1a;
 Thu, 20 May 2021 13:34:29 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 227A7ACAD;
 Thu, 20 May 2021 13:34:29 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b51d228-83c9-4c38-8ff7-7711c2681e1a
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621517669; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=UkmRxEMjRg8EDY9RlqvwFs9Sc9dkUWwsLuUhnci+pxY=;
	b=FffhZbdD8UMsXj6TBfAm93pvLuk4ApEG2nqNdU53fViOpe5t9s5Q8na57O++Sx8rOdOz1u
	x1mRMpnBRMOdOu7obZiaL9wWr8XPq7Rwni9DlfjNmUr3suBoELAT7zmtSV8PJD0T1SB7mY
	crPpikRnzUiwysRJklX6H2aSmdJ90sE=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] x86emul: de-duplicate scatters to the same linear address
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Message-ID: <cf935e11-27c8-969f-9fb2-a5c0e85ccff1@suse.com>
Date: Thu, 20 May 2021 15:34:28 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The SDM specifically allows for earlier writes to fully overlapping
ranges to be dropped. If a guest did so, hvmemul_phys_mmio_access()
would crash it if varying data was written to the same address. Detect
overlaps early, as doing so in hvmemul_{linear,phys}_mmio_access() would
be quite a bit more difficult. To maintain proper faulting behavior,
instead of dropping earlier write instances of fully overlapping slots
altogether, write the data of the final of these slots multiple times.
(We also can't pull ahead the [single] write of the data of the last of
the slots, clearing all involved slots' op_mask bits together, as this
would yield incorrect results if there were intervening partially
overlapping ones.)

Note that due to cache slot use being linear address based, there's no
similar issue with multiple writes to the same physical address (mapped
through different linear addresses).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Maintain correct faulting behavior.

--- a/xen/arch/x86/x86_emulate/x86_emulate.c
+++ b/xen/arch/x86/x86_emulate/x86_emulate.c
@@ -10073,15 +10073,36 @@ x86_emulate(
 
         for ( i = 0; op_mask; ++i )
         {
-            long idx = b & 1 ? index.qw[i] : index.dw[i];
+            long idx = (b & 1 ? index.qw[i]
+                              : index.dw[i]) * (1 << state->sib_scale);
+            unsigned long offs = truncate_ea(ea.mem.off + idx);
+            unsigned int j, slot;
 
             if ( !(op_mask & (1 << i)) )
                 continue;
 
-            rc = ops->write(ea.mem.seg,
-                            truncate_ea(ea.mem.off +
-                                        idx * (1 << state->sib_scale)),
-                            (void *)mmvalp + i * op_bytes, op_bytes, ctxt);
+            /*
+             * hvmemul_linear_mmio_access() will find a cache slot based on
+             * linear address.  hvmemul_phys_mmio_access() will crash the
+             * domain if observing varying data getting written to the same
+             * cache slot.  Utilize that squashing earlier writes to fully
+             * overlapping addresses is permitted by the spec.  We can't,
+             * however, drop the writes altogether, to maintain correct
+             * faulting behavior.  Instead write the data from the last of
+             * the fully overlapping slots multiple times.
+             */
+            for ( j = (slot = i) + 1; j < n; ++j )
+            {
+                long idx2 = (b & 1 ? index.qw[j]
+                                   : index.dw[j]) * (1 << state->sib_scale);
+
+                if ( (op_mask & (1 << j)) &&
+                     truncate_ea(ea.mem.off + idx2) == offs )
+                    slot = j;
+            }
+
+            rc = ops->write(ea.mem.seg, offs,
+                            (void *)mmvalp + slot * op_bytes, op_bytes, ctxt);
             if ( rc != X86EMUL_OKAY )
             {
                 /* See comment in gather emulation. */
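The slot-selection logic above can be sketched in isolation. This is a minimal Python model (hypothetical names, not the actual emulator code) of the idea from the commit message: for each enabled slot, scan the remaining enabled slots for one targeting the same linear address, and write the data of the *last* such slot. Earlier fully overlapping writes are repeated with the final data rather than dropped, so per-slot faulting behavior is preserved.

```python
def scatter_writes(offsets, data, op_mask):
    """Model of de-duplicated scatter emulation.

    offsets[i] is the (already truncated) linear address of slot i,
    data[i] its payload, and op_mask a bitmask of enabled slots.
    Returns the (offset, value) pairs in the order they are written.
    """
    n = len(offsets)
    writes = []
    for i in range(n):
        if not (op_mask >> i) & 1:
            continue  # slot masked off, nothing to write
        slot = i
        # Pick the last enabled slot that fully overlaps slot i, so
        # every write to this address carries identical (final) data.
        for j in range(i + 1, n):
            if (op_mask >> j) & 1 and offsets[j] == offsets[i]:
                slot = j
        writes.append((offsets[i], data[slot]))
    return writes
```

With three enabled slots where slots 0 and 2 alias the same address, all writes to that address carry slot 2's data, while the write count (and hence any fault raised on an earlier slot) is unchanged.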


From xen-devel-bounces@lists.xenproject.org Thu May 20 13:55:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 13:55:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.130993.245062 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljj8t-000267-3X; Thu, 20 May 2021 13:55:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 130993.245062; Thu, 20 May 2021 13:55:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljj8t-000260-0S; Thu, 20 May 2021 13:55:15 +0000
Received: by outflank-mailman (input) for mailman id 130993;
 Thu, 20 May 2021 13:55:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Vutm=KP=citrix.com=anthony.perard@srs-us1.protection.inumbo.net>)
 id 1ljj8s-00025u-1n
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 13:55:14 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ddb2f5c-a05e-4062-a69f-86b4cc5cdf37;
 Thu, 20 May 2021 13:55:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ddb2f5c-a05e-4062-a69f-86b4cc5cdf37
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621518912;
  h=date:from:to:cc:subject:message-id:references:
   mime-version:in-reply-to;
  bh=yFlLm1I8zfA7TB+HeiSRjDEgBiNva7cVF9o20o+moaY=;
  b=HAjtWbtPFhM/CAAtVvWSx9FH721hEpdyL7UDBt7margA11L6CwJgIY1+
   LT0JwKmeUMswd8Br9alwQT8Nrm6PkNEYkQNV76yBn0cvK7kvX3g/G9QMu
   XYh9bTR3vAxJeeplqEPISODxGtgJDpXBl4NwjdLcqneiUZYW+4xTEDw0R
   w=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: vSvPhW+kc8LkUwrPyaaLzIOi1LKpa9J3uXbpgex8wzo28tfUnEbJEebzwBKO8sOXoYVs3UEI/9
 lgjZmrc5fa7LiVOyrti6CvhqLCDpFHoOuRt51Wa9K4+bIsGRmb3WsjYcs8TWWyy79wnQMqIcMG
 z2JmO/ANiSTXJIRLqW5xJ6gponXvFDLU0/ZstW6pLXppCaRSXwL7ZanLqHhjiNmRwgacyU+eRd
 TTYm0VEzJJbfa6rnTk4lVEM19VcD7wqiPSj24oS9e8opj1XfE/6DW3C96HkgHb2INbG/t7tFs9
 g94=
X-SBRS: 5.1
X-MesageID: 44341743
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-Data: A9a23:S1z0Ia4NojWC++t43B5FaQxRtFjHchMFZxGqfqrLsTDasY5as4F+v
 jYdUGmEP/feN2vzeIh+Pti39klV6JXUm9diTlY9rXswHi5G8cbLO4+Ufxz6V8+wwmwvb67FA
 +E2MISowBUcFyeEzvuVGuG66yY6jclkf5KkYAL+EnkZqTRMFWFx03qPp8Zj2tQx2YXgUlvT0
 T/Pi5a31GGNimYc3l08s8pvmDs31BglkGpF1rCWTakjUG72zxH5PrpGTU2CByKQrr1vNvy7X
 47+IISRpQs1yfuP5uSNyd4XemVSKlLb0JPnZnB+A8BOiTAazsA+PzpS2FPxpi67hh3Q9+2dx
 umhurSPCjUIEqLppd8RDUhSEHFCFvVsp4fudC3XXcy7lyUqclPpyvRqSko3IZcZ6qB8BmQmG
 f4wcW5XKErZ3qTvneL9ELAEascLdaEHOKsWvG1gyjfIS+4rW5nZT43B5MNC3Sd2jcdLdRrbT
 5ZFMmY2M0ibC/FJEg00DaM5u9/4v1WhbDZYgmPM4ps9yXeGmWSd15CyaYGIK7RmX/59n1maj
 nLL+XzjBRMXP8DZziCKmlqzgsffkCW9X5gdfJW0+Pdlj1yUwm07EwANWB2wpvzRol6zXZdTJ
 lIZ/gIqrLMu7wq7Q9/lRRq6rXWY+BkGVLJt//YSsV/XjPCOukDAWzhCFGcphMEaWNEeen8Y9
 3OGuu7SCWI+ur7FdVnF3OishGbnUcQKFlPudRPoXCNcvYO6+tBi30qSJjpwOPTr14WoQFkc1
 xjP/HBn3eRL5SIe//jjpTj6bySQSo8lp+LfzivQRH7tygpkaIO/a4Ws5DA3Bt4bd93AEDFtU
 JUe8vVyDdzi77nWzkRho81XRtlFAspp1xWG3DZS82EJrWjFxpJaVdk4DMtCyKJV3iAsImSBj
 Kj74lI52XOuFCL1PPUfj3yZUpt1pUQfKTgVfq+NNYcfCnSAXCSG4DtvdSatM5PFyxB3+ZzTz
 ayzLJb9ZV5HWP8P5GfnGI8gPUoDm3lWKZX7HsugkXxKENO2ORaodFvyGAvfNrxmtPvc/m04M
 b93bqO39vmWa8WmCgG/zGLZBQliwaQTbXwul/FqSw==
IronPort-HdrOrdr: A9a23:FFHTtKotqrgZXymw1BhZPsYaV5sLL9V00zEX/kB9WHVpmwKj5q
 WTdZUgpGTJYVMqMk3I9urhBEDtewK/yXcx2+gs1NSZLX3bURWTXfFfBOLZqlWKdkGQmNK1vp
 0QBJSWZueAbmRSvILW4BOzFt4hxNWLmZrY/Nv2/jNMZiUvQ54lwitFJiynMmtQLTM2eqYRJd
 69ze4CjwXlRlgtVOScIRA+Lpb+juyOtLnDJSQeDxoC7gSTiD+zrITxFQOVxH4lIk5y6IZn0U
 Pg1zbh7qGGtfymzxPHk1De9I5Xntz6o+EzePCku4w2Jjn0sAquee1aKtq/lQFwhcSJzko2m9
 /RpBpIBbUU15qoRBDQnTLdnzD61jAg8nnjzkLdr0fCjKXCNUAHIvsEvJledBTB7UomoZVb64
 Jkm0ykl7c/N2KyoMzGjeK4Hi1Cpw6MunwlnvcUj3tDFbcYU7NOkbc7lXklZavo2BiKnrzORo
 JVfbnhzecTe1ibYhnizxNS6c3pVH8rFgyLRVUDt6WuoklroEw=
X-IronPort-AV: E=Sophos;i="5.82,313,1613451600"; 
   d="scan'208";a="44341743"
Date: Thu, 20 May 2021 14:55:08 +0100
From: Anthony PERARD <anthony.perard@citrix.com>
To: Dario Faggioli <dfaggioli@suse.com>
CC: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Roger Pau Monne
	<roger.pau@citrix.com>
Subject: Re: QEMU backport necessary for building with "recent" toolchain (on
 openSUSE Tumbleweed)
Message-ID: <YKZqPMNawZUbR4eu@perard>
References: <f7738499f24f6682f4ae1c1c750e30f322dfdbf3.camel@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <f7738499f24f6682f4ae1c1c750e30f322dfdbf3.camel@suse.com>

On Tue, May 18, 2021 at 05:24:30PM +0200, Dario Faggioli wrote:
> Hello,
> 
> I think we need the following commit in our QEMU: bbd2d5a812077
> ("build: -no-pie is no functional linker flag").

Hi Dario,

I'm hoping to update qemu-xen to a newer version of QEMU (6.0), which
would include the fix, but that first needs a fix in libxl:
    "Fix libxl with QEMU 6.0 + remove some more deprecated usages."
So I would prefer to avoid adding more to the current branch.

In the meantime, the stable-4.15 branch already has the fix if you
need it.

Thanks,

-- 
Anthony PERARD


From xen-devel-bounces@lists.xenproject.org Thu May 20 14:39:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 14:39:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131003.245073 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljjpH-0006EN-L5; Thu, 20 May 2021 14:39:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131003.245073; Thu, 20 May 2021 14:39:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljjpH-0006EG-HQ; Thu, 20 May 2021 14:39:03 +0000
Received: by outflank-mailman (input) for mailman id 131003;
 Thu, 20 May 2021 14:39:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vYVO=KP=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1ljjpG-0006EA-Np
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 14:39:02 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9e733652-0f14-499a-8d14-80a977f850e3;
 Thu, 20 May 2021 14:39:01 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14KEFCCf187254;
 Thu, 20 May 2021 14:39:01 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 38j6xnmvvp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 20 May 2021 14:39:00 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14KEG5FD041644;
 Thu, 20 May 2021 14:38:59 GMT
Received: from nam10-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam10lp2108.outbound.protection.outlook.com [104.47.58.108])
 by aserp3020.oracle.com with ESMTP id 38nrxyvrk4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 20 May 2021 14:38:59 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BL0PR10MB2867.namprd10.prod.outlook.com (2603:10b6:208:77::17)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.25; Thu, 20 May
 2021 14:38:57 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4150.023; Thu, 20 May 2021
 14:38:57 +0000
Received: from [10.74.103.151] (160.34.89.151) by
 BYAPR01CA0055.prod.exchangelabs.com (2603:10b6:a03:94::32) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4150.23 via Frontend Transport; Thu, 20 May 2021 14:38:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9e733652-0f14-499a-8d14-80a977f850e3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=6zaSteqAwXZQKFyG6eNHEu8n1t0gNu+xC2iOMIy3VC0=;
 b=Iae5XO5sUzk7ub50UjHe+gJzsALoqv4dqmFFjTeaglikfOOWOvp8EIt7N172R97TqkMR
 Es3kbKkIFRnpE990yW7wcRDq44vbONfdRcYTV0YTyRJBIVlFXkWprIbSuwDLJCZ8t1rw
 5wYNHHm4af4NFhhZWwMTiWyTDVy1nhyXKi8qfk7aMXX/jlpB3MwDG6BywhGkHqWGLucC
 +KUUmXrhC0+xS4DGg/7DzNeEU3RGRZPLpljpL9PLPBTp7THxTnLLwQ5yNTdr3XRp2kFW
 Eme5I6iKZzSYTMvSVcvfHMcHnMqxZhwfuYW8L45hOV6kcJaXay3b+Dl2DWahvhLoNb0+ jg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Iw1Ghjld+bPSbqAotvfcpK44Dy5K3h6Q3sv/ehYce66T0w3qFZ1pQdCv1zHsaDyPgQpZwlD/5l0iu9RwuSpK5eT+c/t/YpmAqjvmnqxaHLUlKy3Tn2gWavMbdDBsgk7eRClfuY2oGvbTSeSQek+jnZP7uc1T/wjarW8PTACofKl4YF3FfXC8BLmRndFg/a2z4Rx0aCzZAKYIZkXJ6bmJKV2lNMTDaM9b3YhQEp3yNNmqcD1cyqwhfMPdEL1XjdcxWIcxMK7V/omiwuYW65LvFi5odHGNMwZrkzuL3BiPcYf06UaMwrdgX3lUXv+C+WmLPiAu6SdEehExQpNQpfQngg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6zaSteqAwXZQKFyG6eNHEu8n1t0gNu+xC2iOMIy3VC0=;
 b=lB1z3SLmQQIQ19C/kdwM+ePZnjFI76C58e+F22Qs97ukkO+wbPHO6Xh19FBABVclnTWYyeXT7Vxt55HXwmgaaDuIMIn9wfEaRylD6p1MIzQecCEjppOcjN48cTYghem+53QqJS8HA6gM8NDLDXrgYsDe8719+w/G0gN/uJ8okaaIEDVCDDeMay5Bh3t0VeOTcv7/2oyyI4OkMSQI24gxINbPYXpOMO2qRUzzzBSXXuhb4ADH/+byLhtmf913Y3MkL+5P4Sj6HUe+HCbDtPX39/QcA2OxNZQxfqunXkadtXCfPjfivH24o41MRCnTfMYSSVhcutB5RlEP9FhYweEJUg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=6zaSteqAwXZQKFyG6eNHEu8n1t0gNu+xC2iOMIy3VC0=;
 b=XtMzjF3zQGHYzpilCs+/xoVaT0EFPpTgC9FLXgXXZDyzBlnqD7lR02Kf+e5rUqdPiHOnX0/W2RP7TcsTwDvTY6Vor9Za2Gyp3xhN0lLuQvhgpODZXbjecxTSoPjtbJdLW4vX58NqoMr0aXSxZV0iOWHTuuGHSSKPVDVC9Sj5Dcs=
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=oracle.com;
Subject: Re: [PATCH v2 1/2] xen-pciback: redo VF placement in the virtual
 topology
To: Jan Beulich <jbeulich@suse.com>
Cc: Juergen Gross <jgross@suse.com>, Konrad Wilk <konrad.wilk@oracle.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
 <8def783b-404c-3452-196d-3f3fd4d72c9e@suse.com>
 <87d771dd-8b00-4101-b76b-21087ea1b1df@oracle.com>
 <214a6c61-5f6a-d841-312a-be2abb95f77a@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <75bfa4bd-cb4b-fb56-1600-6aebda4cf4cb@oracle.com>
Date: Thu, 20 May 2021 10:38:52 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <214a6c61-5f6a-d841-312a-be2abb95f77a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.89.151]
X-ClientProxiedBy: BYAPR01CA0055.prod.exchangelabs.com (2603:10b6:a03:94::32)
 To BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 4fcce813-bbf6-481f-b310-08d91b9cffc3
X-MS-TrafficTypeDiagnostic: BL0PR10MB2867:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BL0PR10MB286746A66465D9210A5E23C28A2A9@BL0PR10MB2867.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	L8JmZ7CrOyaEWNMHI5PjW64PPDKfro8X5NN+wP7EvFmVlmDtqkYERIguJ+BHPQKUcrnkDf0GOnUva9sBydqQVmo5YOcRK6W+8LnuuStUQBFhgIMogeT6pcMrTnJQKSZtSFb2BKNg6X52RxvMZc0Q3xQNiJMAmbFUcAG1auT7DvcsU5sb/oACi4DvreJXj2IXNkkosUvzZZUfI5wtiSGUD5w+4mMf2P6Mjl0dyyrN+ycNfY/FH+IpVL7zIcK40ljoatTib95wMTiOdKFiQ1fT8smUXkufA3YEzZjnAa6Ij52JJhMYlvpEYsMfahOBIU2UAHDpquoBKdrYyvIOghmx9ucqe0hrVz/prVeBwIqipmY0TsiZnn8qhQLarrTAebzs9OFnftkviOlh24RpN91OJOXT0dlwdmPKtwqx4kK2Ihl7IpczxIjHTCIecdK6OdF8hRHYiQho7XIq4w393w4dGIxBqtpHD6Wu6/11WCHx9D871Y/GhScH/+cLKCP/EjsVV2Kg1jx1jbKomruk90NPn6norvLZGmmlJnHbmVZmCGWMuzHtT7WLUz9bzDbmKgO+UweycB+ef8L4vfq/btP2Nvw0s1+kn4BuzCDlMCVWuezdc8+6CGqTRK9s+NUFu7lXtngBAmTesjmiGg3NUlKezAnxMrfIPilt8kn/YNPooAQ=
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BLAPR10MB5009.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(39860400002)(136003)(366004)(346002)(376002)(53546011)(38100700002)(31686004)(26005)(54906003)(316002)(16576012)(66476007)(66556008)(66946007)(8676002)(6486002)(16526019)(6916009)(5660300002)(86362001)(956004)(44832011)(31696002)(186003)(2616005)(8936002)(4326008)(36756003)(2906002)(478600001)(6666004)(43740500002);DIR:OUT;SFP:1101;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 4fcce813-bbf6-481f-b310-08d91b9cffc3
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 14:38:57.0903
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: uk9qg2Wfsvp2SwQZR/02avMsZougb1iMxm/Fy3yio24Vse7bg6xFrWVvZrIZROqjSZcb+Fvo1EX5Q+lG9SHx2lEVeQ2LQ6PmTNMmSzDewB4=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BL0PR10MB2867
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9989 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 suspectscore=0
 malwarescore=0 mlxlogscore=999 adultscore=0 mlxscore=0 spamscore=0
 phishscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105200101
X-Proofpoint-GUID: -ozUc_rhKmj9xWaFE67AZWUoGeyqsrA4
X-Proofpoint-ORIG-GUID: -ozUc_rhKmj9xWaFE67AZWUoGeyqsrA4
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9989 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 impostorscore=0 mlxscore=0
 mlxlogscore=999 adultscore=0 malwarescore=0 priorityscore=1501
 phishscore=0 suspectscore=0 lowpriorityscore=0 bulkscore=0 clxscore=1015
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105200101


On 5/20/21 3:43 AM, Jan Beulich wrote:
> On 20.05.2021 02:36, Boris Ostrovsky wrote:
>> On 5/18/21 12:13 PM, Jan Beulich wrote:
>>>  
>>> @@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struc
>>>  
>>>  	/*
>>>  	 * Keep multi-function devices together on the virtual PCI bus, except
>>> -	 * virtual functions.
>>> +	 * that we want to keep virtual functions at func 0 on their own. They
>>> +	 * aren't multi-function devices and hence their presence at func 0
>>> +	 * may cause guests to not scan the other functions.
>>
>> So your reading of the original commit is that, whatever the issue was, only function zero was causing the problem? In other words, you are not concerned that pci_scan_slot() may now look at function 1 and skip all higher-numbered functions (assuming the problem is still there)?
> I'm not sure I understand the question: Whether to look at higher numbered
> slots is a function of slot 0's multi-function bit alone, aiui. IOW if
> slot 1 is being looked at in the first place, slots 2-7 should also be
> looked at.


Wasn't the original patch describing the problem strictly as one of single-function devices, i.e. devices with the multi-function bit not set? That is, if all VFs (which are single-function devices) are placed in the same slot, then pci_scan_slot() would only look at function 0 and ignore anything higher-numbered.


My question is whether it would "only look at function 0 and ignore anything higher-numbered" or "only look at the lowest-numbered function and ignore anything higher-numbered".


-boris



From xen-devel-bounces@lists.xenproject.org Thu May 20 14:44:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 14:44:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131014.245084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljjuA-0007eG-6D; Thu, 20 May 2021 14:44:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131014.245084; Thu, 20 May 2021 14:44:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljjuA-0007e9-1y; Thu, 20 May 2021 14:44:06 +0000
Received: by outflank-mailman (input) for mailman id 131014;
 Thu, 20 May 2021 14:44:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljju9-0007dI-F8
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 14:44:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1e2d0e7a-eeaa-4b59-8c6c-11137c1865c3;
 Thu, 20 May 2021 14:44:04 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id BB192ABCD;
 Thu, 20 May 2021 14:44:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1e2d0e7a-eeaa-4b59-8c6c-11137c1865c3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621521843; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=qzR/ZFe8YSQ0NoyAIGH+NSWvidOWFN3SC4PNFwCv+ao=;
	b=uNmrZoHj57SZ0C83T24aSse+cEgk4tWNoHD7CKDr720T+l9iROe9r2DhO3nagSdC3pROLX
	fda+BhnPR0k3EmPKdjQZLkMx9L0ySq3NVE0XUy0pFrfUvWbSfkO+qcJc+nPN/xZN/mcnM0
	OfWaKTXV7UgcYC7O5QqGDbNMvwt51NM=
Subject: Re: [PATCH v2 1/2] xen-pciback: redo VF placement in the virtual
 topology
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>, Konrad Wilk <konrad.wilk@oracle.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
 <8def783b-404c-3452-196d-3f3fd4d72c9e@suse.com>
 <87d771dd-8b00-4101-b76b-21087ea1b1df@oracle.com>
 <214a6c61-5f6a-d841-312a-be2abb95f77a@suse.com>
 <75bfa4bd-cb4b-fb56-1600-6aebda4cf4cb@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <83aa295a-9329-f516-d439-75d198b92bf0@suse.com>
Date: Thu, 20 May 2021 16:44:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <75bfa4bd-cb4b-fb56-1600-6aebda4cf4cb@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.05.2021 16:38, Boris Ostrovsky wrote:
> 
> On 5/20/21 3:43 AM, Jan Beulich wrote:
>> On 20.05.2021 02:36, Boris Ostrovsky wrote:
>>> On 5/18/21 12:13 PM, Jan Beulich wrote:
>>>>  
>>>> @@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struc
>>>>  
>>>>  	/*
>>>>  	 * Keep multi-function devices together on the virtual PCI bus, except
>>>> -	 * virtual functions.
>>>> +	 * that we want to keep virtual functions at func 0 on their own. They
>>>> +	 * aren't multi-function devices and hence their presence at func 0
>>>> +	 * may cause guests to not scan the other functions.
>>>
>>> So your reading of the original commit is that, whatever the issue was, only function zero was causing the problem? In other words, you are not concerned that pci_scan_slot() may now look at function 1 and skip all higher-numbered functions (assuming the problem is still there)?
>> I'm not sure I understand the question: Whether to look at higher numbered
>> slots is a function of slot 0's multi-function bit alone, aiui. IOW if
>> slot 1 is being looked at in the first place, slots 2-7 should also be
>> looked at.
> 
> 
> Wasn't the original patch describing the problem strictly as one of single-function devices, i.e. devices with the multi-function bit not set? That is, if all VFs (which are single-function devices) are placed in the same slot, then pci_scan_slot() would only look at function 0 and ignore anything higher-numbered.
> 
> 
> My question is whether it would "only look at function 0 and ignore anything higher-numbered" or "only look at the lowest-numbered function and ignore anything higher-numbered".

The common scanning logic is to look at slot 0 first. If that's populated,
other slots get looked at only if slot 0 has the multi-function bit set.
If slot 0 is not populated, nothing is known about the other slots, and
hence they need to be scanned.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 14:46:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 14:46:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131020.245094 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljjw6-0008Gn-HR; Thu, 20 May 2021 14:46:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131020.245094; Thu, 20 May 2021 14:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljjw6-0008Gg-ET; Thu, 20 May 2021 14:46:06 +0000
Received: by outflank-mailman (input) for mailman id 131020;
 Thu, 20 May 2021 14:46:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljjw5-0008Ga-2w
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 14:46:05 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c030af74-c962-4348-896f-cfb218a95d3d;
 Thu, 20 May 2021 14:46:02 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id AA707ABCD;
 Thu, 20 May 2021 14:46:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c030af74-c962-4348-896f-cfb218a95d3d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621521961; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rAJ/Wdhb+P/6nTVsY6VoRSp6VLmlOkUGpAVBdyRH50E=;
	b=f3WU5037GxUFmMkoV4XMywWjvEeu6ufvsjSVrPesbJq0/FgBCacu6QI4j1TIUteP6jHhHD
	jPDjstO/EO3KVO67bS0nYM4Nw6U16Hv3NaHduc5Gl0xETEUj555swAAMEcZ68ypq3Gdaoh
	ZMO/LroEP0dnncQsVcz0KKhuMSKyRvA=
Subject: Re: [PATCH v2 1/2] xen-pciback: redo VF placement in the virtual
 topology
From: Jan Beulich <jbeulich@suse.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: Juergen Gross <jgross@suse.com>, Konrad Wilk <konrad.wilk@oracle.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
 <8def783b-404c-3452-196d-3f3fd4d72c9e@suse.com>
 <87d771dd-8b00-4101-b76b-21087ea1b1df@oracle.com>
 <214a6c61-5f6a-d841-312a-be2abb95f77a@suse.com>
 <75bfa4bd-cb4b-fb56-1600-6aebda4cf4cb@oracle.com>
 <83aa295a-9329-f516-d439-75d198b92bf0@suse.com>
Message-ID: <184ecd2b-b35f-8427-7ae2-e265750f4fae@suse.com>
Date: Thu, 20 May 2021 16:46:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <83aa295a-9329-f516-d439-75d198b92bf0@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 20.05.2021 16:44, Jan Beulich wrote:
> On 20.05.2021 16:38, Boris Ostrovsky wrote:
>>
>> On 5/20/21 3:43 AM, Jan Beulich wrote:
>>> On 20.05.2021 02:36, Boris Ostrovsky wrote:
>>>> On 5/18/21 12:13 PM, Jan Beulich wrote:
>>>>>  
>>>>> @@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struc
>>>>>  
>>>>>  	/*
>>>>>  	 * Keep multi-function devices together on the virtual PCI bus, except
>>>>> -	 * virtual functions.
>>>>> +	 * that we want to keep virtual functions at func 0 on their own. They
>>>>> +	 * aren't multi-function devices and hence their presence at func 0
>>>>> +	 * may cause guests to not scan the other functions.
>>>>
>>>> So your reading of the original commit is that, whatever the issue was, only function zero was causing the problem? In other words, you are not concerned that pci_scan_slot() may now look at function 1 and skip all higher-numbered functions (assuming the problem is still there)?
>>> I'm not sure I understand the question: Whether to look at higher numbered
>>> slots is a function of slot 0's multi-function bit alone, aiui. IOW if
>>> slot 1 is being looked at in the first place, slots 2-7 should also be
>>> looked at.
>>
>>
>> Wasn't the original patch describing the problem strictly as one of single-function devices, i.e. devices with the multi-function bit not set? That is, if all VFs (which are single-function devices) are placed in the same slot, then pci_scan_slot() would only look at function 0 and ignore anything higher-numbered.
>>
>>
>> My question is whether it would "only look at function 0 and ignore anything higher-numbered" or "only look at the lowest-numbered function and ignore anything higher-numbered".
> 
> The common scanning logic is to look at slot 0 first. If that's populated,
> other slots get looked at only if slot 0 has the multi-function bit set.
> If slot 0 is not populated, nothing is known about the other slots, and
> hence they need to be scanned.

In particular, Linux's next_fn() ends with

	/* dev may be NULL for non-contiguous multifunction devices */
	if (!dev || dev->multifunction)
		return (fn + 1) % 8;

	return 0;

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 14:57:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 14:57:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131028.245106 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljk7O-0001RA-JY; Thu, 20 May 2021 14:57:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131028.245106; Thu, 20 May 2021 14:57:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljk7O-0001R3-GW; Thu, 20 May 2021 14:57:46 +0000
Received: by outflank-mailman (input) for mailman id 131028;
 Thu, 20 May 2021 14:57:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=3HBq=KP=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljk7M-0001Qu-R6
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 14:57:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 246ef10a-802a-4653-908c-a78f08872746;
 Thu, 20 May 2021 14:57:44 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 25032ABC2;
 Thu, 20 May 2021 14:57:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 246ef10a-802a-4653-908c-a78f08872746
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621522663; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ST0sC9uQ5RgchluVr6oobddJnoUy7XzyvU3yLAJB2xU=;
	b=qpY1R+PqGfmIgFTc5q7q5fJiAd07H+9DwlKe3PY7TzsPsPEFKOTPbnpSER9+ttE5zbu9wi
	j9Y13fw+S3KdBe6bjQbJxbthGBpZGr/CeyDrE09CFaKDf+cHmcNWIvdqyBto5tQgsgyYWD
	d/hGNYtAFJgxqaw8pKCxJRjadAnBpyo=
Subject: Re: [PATCH 00/13] Intel Hardware P-States (HWP) support
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Community Manager <community.manager@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <572e0a5f-8ade-1a06-c7a2-cfcfc64c6fe6@suse.com>
Date: Thu, 20 May 2021 16:57:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-1-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:27, Jason Andryuk wrote:
> Open questions:
> 
> HWP defaults to enabled and running in balanced mode.  This is useful
> for testing, but should the old ACPI cpufreq driver remain the default?

As long as this new code is experimental, yes. But once it's deemed
stable and fully supported, I think on HWP-capable hardware the new
driver should be used by default.

> This series unilaterally enables Hardware Duty Cycling (HDC), which is
> another feature to autonomously power down things.  That is enabled if
> HWP is enabled.  Maybe that wants to be configurable?

Probably; not even sure what the default would want to be.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 20 16:22:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 16:22:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131037.245117 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljlQd-0001ex-Pf; Thu, 20 May 2021 16:21:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131037.245117; Thu, 20 May 2021 16:21:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljlQd-0001eq-Mg; Thu, 20 May 2021 16:21:43 +0000
Received: by outflank-mailman (input) for mailman id 131037;
 Thu, 20 May 2021 16:21:42 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VWqy=KP=kernel.org=sstabellini@srs-us1.protection.inumbo.net>)
 id 1ljlQc-0001ek-Mg
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 16:21:42 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c6911a5-8bc0-4ee0-a171-8eec5c704666;
 Thu, 20 May 2021 16:21:41 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id AD8D16128A;
 Thu, 20 May 2021 16:21:40 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c6911a5-8bc0-4ee0-a171-8eec5c704666
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621527700;
	bh=YMVfveNMim4AYPmvuG50EyQ/1XFPJJUMxIc5vnIzvtA=;
	h=Date:From:To:cc:Subject:In-Reply-To:References:From;
	b=pjjwSBNnda/hTfwhMvvj/dG2HD4u3a4nUzcnm58HAorcT/GWzbNGvvaWD5kYuTL0s
	 JohZuTCjAjR+sxCy2gosXMTJTiT90glDIPMwBw6hPyGSAlDRO/PKGbPYxux65oiw1E
	 /q4ZHGwMMMgJh33rs3Vl9AL+um+52IiF2ZUPlhHL1wpRTLjmOMHWYhXJ7ZRRnSboJp
	 UpJ4dl0n9zzW/Omtm7AzT3zM4mM866aavtJBG3iEmGW3dvXZkXCPQfqKxEs7bbWwST
	 HLOd5v09cDhgM1JqBcu6yafW7S86TVi7AYmHVYeXPdWepE9sD4os92HrHpjk61Lr/o
	 GyIPMOxKKwy4Q==
Date: Thu, 20 May 2021 09:21:40 -0700 (PDT)
From: Stefano Stabellini <sstabellini@kernel.org>
X-X-Sender: sstabellini@sstabellini-ThinkPad-T480s
To: Julien Grall <julien@xen.org>
cc: Stefano Stabellini <sstabellini@kernel.org>, 
    Elliott Mitchell <ehem+xen@m5p.com>, xen-devel@lists.xenproject.org, 
    Roger Pau Monné <royger@freebsd.org>, Mitchell Horne <mhorne@freebsd.org>, 
    Andrew Cooper <andrew.cooper3@citrix.com>
Subject: Re: Uses of /hypervisor memory range (was: FreeBSD/Xen/ARM issues)
In-Reply-To: <b6fe6e06-517c-ee4c-5b71-a1bee4d4df13@xen.org>
Message-ID: <alpine.DEB.2.21.2105200919100.14426@sstabellini-ThinkPad-T480s>
References: <YIptpndhk6MOJFod@Air-de-Roger> <YItwHirnih6iUtRS@mattapan.m5p.com> <YIu80FNQHKS3+jVN@Air-de-Roger> <YJDcDjjgCsQUdsZ7@mattapan.m5p.com> <YJURGaqAVBSYnMRf@Air-de-Roger> <YJYem5CW/97k/e5A@mattapan.m5p.com> <YJs/YAgB8molh7e5@mattapan.m5p.com>
 <54427968-9b13-36e6-0001-27fb49f85635@xen.org> <YJ3jlGSxs60Io+dp@mattapan.m5p.com> <93936406-574f-7fd0-53bf-3bafaa4b1947@xen.org> <YJ8hTE/JbJygtVAL@mattapan.m5p.com> <f7360dac-5d83-733b-7ec5-c73d4dc0350d@xen.org> <alpine.DEB.2.21.2105191611540.14426@sstabellini-ThinkPad-T480s>
 <b6fe6e06-517c-ee4c-5b71-a1bee4d4df13@xen.org>
User-Agent: Alpine 2.21 (DEB 202 2017-01-01)
MIME-Version: 1.0
Content-Type: text/plain; charset=US-ASCII

On Thu, 20 May 2021, Julien Grall wrote:
> On 20/05/2021 00:25, Stefano Stabellini wrote:
> > On Sat, 15 May 2021, Julien Grall wrote:
> > > > My feeling is one of two things should happen with the /hypervisor
> > > > address range:
> > > > 
> > > > 1>  OSes could be encouraged to use it for all foreign mappings.  The
> > > > range should be dynamic in some fashion.  There could be a handy way to
> > > > allow configuring the amount of address space thus reserved.
> > > 
> > > In the context of XSA-300 and virtio on Xen on Arm, we discussed
> > > providing a region for foreign mappings. The main trouble here is
> > > figuring out the size: if you get it wrong, you may break all the PV
> > > drivers.
> > > 
> > > If the problem is finding space, then I would like to suggest a
> > > different approach (I think I may have discussed it with Andrew). Xen
> > > maintains the P2M for the guest and therefore knows where the
> > > unallocated space is. How about introducing a new hypercall to ask for
> > > "unallocated space"?
> > > 
> > > This would not work for older hypervisors, but there you could use RAM
> > > instead (as Linux does). That has drawbacks (e.g. shattering
> > > superpages, reducing the amount of usable memory...), but for 1> you
> > > would also need a hack for older Xen.
> > 
> > I am starting to wonder if it would make sense to add a new device tree
> > binding to describe a larger free region for foreign mappings rather
> > than a hypercall. It could be several GBs or even TBs in size. I like
> > the idea of having it in device tree because, after all, this is static
> > information. I can see that a hypercall would also work and I am open to
> > it, but if possible I think it would be better not to extend the
> > hypercall interface when there is a good alternative.
> 
> There are two issues with the Device-Tree approach:
>   1) This doesn't work with ACPI.
>   2) It is not clear how to define the size of the region. An admin doesn't
> have the right information in hand to know how many mappings will be done
> (a backend may map the same page multiple times).
> 
> The advantage of the hypercall solution is that the size is "virtually"
> unlimited and not tied to a specific firmware.

Fair enough


From xen-devel-bounces@lists.xenproject.org Thu May 20 16:32:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 16:32:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131044.245128 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljlaX-0003Aw-UQ; Thu, 20 May 2021 16:31:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131044.245128; Thu, 20 May 2021 16:31:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljlaX-0003Ap-QZ; Thu, 20 May 2021 16:31:57 +0000
Received: by outflank-mailman (input) for mailman id 131044;
 Thu, 20 May 2021 16:31:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=vYVO=KP=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1ljlaW-0003Aj-Ju
 for xen-devel@lists.xenproject.org; Thu, 20 May 2021 16:31:56 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a94cd6f2-7950-42b6-b29a-d5cb34b5b8d7;
 Thu, 20 May 2021 16:31:55 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14KGAE0b147816;
 Thu, 20 May 2021 16:31:54 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 38j5qrdb2v-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 20 May 2021 16:31:54 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14KGVWu4134630;
 Thu, 20 May 2021 16:31:53 GMT
Received: from nam10-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam10lp2102.outbound.protection.outlook.com [104.47.58.102])
 by userp3020.oracle.com with ESMTP id 38n491wea6-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 20 May 2021 16:31:53 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BLAPR10MB5235.namprd10.prod.outlook.com (2603:10b6:208:325::19)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Thu, 20 May
 2021 16:31:50 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4150.023; Thu, 20 May 2021
 16:31:50 +0000
Received: from [10.74.103.151] (138.3.201.23) by
 SN4PR0801CA0002.namprd08.prod.outlook.com (2603:10b6:803:29::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.24 via Frontend
 Transport; Thu, 20 May 2021 16:31:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a94cd6f2-7950-42b6-b29a-d5cb34b5b8d7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=TdWsbshtvy8CTiEBqnJ9thX/c6XJnD8yNkyF2CmpKsg=;
 b=AGUhaJRbbL+LCKP2pGvcp9Ro6GK9UW/HgCKyN4shV6fzetImfi9U+KjohPv+rgU3pXPv
 51v4tI4SnkFElexCyxMSmbTedvOKWi620wwuwcE/z/Y0n0bY287IvEUrjaw1+B18VFPV
 QnpaHEbiKMN3BQgXu2bfluI15YC5jjiZyhXjDlQS5jnbD+31TqeWobKRmCGxhQvgnwUM
 CT/eU7I/Hop0j17bEla+O0DMgljP7jTW/rRk5NdC0FwVkamQxLcemC3ge5NGLzf/LcSM
 bGDdULryhMkcEmfBCUi/Qo4432dMdXafBE2b5Tk/oPfyr4cS8Np6MShoniF3LhSlI+BK mA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Q3zGizRzFlHexRXcAqMULNgcek0EoWzpOnSIbdWYkp0dBzkOgn8xYb0JARl/uXB+VPnldqdzwRYjimCCZi/kOyrqeloWk2fJM94U2AKEQ1JlysFNH+9VQeHItSA0GN5PTeqlvG0OZdG3hSFXTwLw+b0cGkEehrqyuxDGQWpzzvxpo37mQCkRPgY9ZL61fSOVGT1Ci805PIncTXoaALmbw4f9m0mYgA6ikyTJfZGbn7WTL57w0WPwgG+E98mIlFmGQ2q4GOO/3gwYQaR5SogI0noW6MFTN8GnvX5XVdNn6oaYqLQjcZij0P6T5EJGx4cEmlg87WmOpyWfADWIDusb3A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TdWsbshtvy8CTiEBqnJ9thX/c6XJnD8yNkyF2CmpKsg=;
 b=kw/J4vZFNiLMMzYHIMV+Jl+C0FJ4gddxtKCFAVUQ7vKcpvMhnoh04aJsk3NnpUnlbv1Vbm2MzwVZT/MQtuX43wmTZ0hzu4fiU5fT2No0a9KYQLlyjI1SruFLH9n+IUQ9dHjxB24uIUO1sGwYojqzx8/g12VubSp2NEYPDvaH0mYstZU9yE0id8IaW8oOKABkX7gDYyFKEmVcDbRlix8jcSBGWi0O+wgKt3FxxJcUincKjD96KCEsq5cr2M8rZPnJ4aNc9+IT3tAxNvHpKEmTEKGszkVNOqHVIJWd8mVXpJLuAg00mfSbI7LO+P0kQztS17iVA2CygdnU53GH4MgUuA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=TdWsbshtvy8CTiEBqnJ9thX/c6XJnD8yNkyF2CmpKsg=;
 b=sAHdq2wGRMGltPBaB5K2/oVJKn3d+yvkXaVSYaw9igaHOSXm0rSseDRR7w8ZMeo4JeJA7FdkYC8TkRR1GQrolRIXsDoEGVdYzmDtjONEBvQypM8gA6hCAo/ysLCbQ2yS7EK23QXXF9uszHIUUqstCeIbQf9UYGuoKBYVIrt8QXk=
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=oracle.com;
Subject: Re: [PATCH v2 1/2] xen-pciback: redo VF placement in the virtual
 topology
To: Jan Beulich <jbeulich@suse.com>
Cc: Juergen Gross <jgross@suse.com>, Konrad Wilk <konrad.wilk@oracle.com>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
 <8def783b-404c-3452-196d-3f3fd4d72c9e@suse.com>
 <87d771dd-8b00-4101-b76b-21087ea1b1df@oracle.com>
 <214a6c61-5f6a-d841-312a-be2abb95f77a@suse.com>
 <75bfa4bd-cb4b-fb56-1600-6aebda4cf4cb@oracle.com>
 <83aa295a-9329-f516-d439-75d198b92bf0@suse.com>
 <184ecd2b-b35f-8427-7ae2-e265750f4fae@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <36a58512-25da-2885-3e2e-1be75ef21071@oracle.com>
Date: Thu, 20 May 2021 12:31:46 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <184ecd2b-b35f-8427-7ae2-e265750f4fae@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.201.23]
X-ClientProxiedBy: SN4PR0801CA0002.namprd08.prod.outlook.com
 (2603:10b6:803:29::12) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 68833bfe-24b9-4f08-6ed0-08d91bacc507
X-MS-TrafficTypeDiagnostic: BLAPR10MB5235:
X-MS-Exchange-Transport-Forked: True
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 68833bfe-24b9-4f08-6ed0-08d91bacc507
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 20 May 2021 16:31:50.5062
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR10MB5235
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9989 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 spamscore=0 bulkscore=0
 suspectscore=0 mlxlogscore=999 adultscore=0 malwarescore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105200107
X-Proofpoint-GUID: isBr_AJNrQF4BE8Wcu5lYJwSE-dekiUt
X-Proofpoint-ORIG-GUID: isBr_AJNrQF4BE8Wcu5lYJwSE-dekiUt
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9989 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 clxscore=1015 impostorscore=0
 mlxscore=0 lowpriorityscore=0 malwarescore=0 mlxlogscore=999
 suspectscore=0 adultscore=0 priorityscore=1501 spamscore=0 phishscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105200106


On 5/20/21 10:46 AM, Jan Beulich wrote:
> On 20.05.2021 16:44, Jan Beulich wrote:
>> On 20.05.2021 16:38, Boris Ostrovsky wrote:
>>> On 5/20/21 3:43 AM, Jan Beulich wrote:
>>>> On 20.05.2021 02:36, Boris Ostrovsky wrote:
>>>>> On 5/18/21 12:13 PM, Jan Beulich wrote:
>>>>>>  
>>>>>> @@ -95,22 +95,25 @@ static int __xen_pcibk_add_pci_dev(struc
>>>>>>  
>>>>>>  	/*
>>>>>>  	 * Keep multi-function devices together on the virtual PCI bus, except
>>>>>> -	 * virtual functions.
>>>>>> +	 * that we want to keep virtual functions at func 0 on their own. They
>>>>>> +	 * aren't multi-function devices and hence their presence at func 0
>>>>>> +	 * may cause guests to not scan the other functions.
>>>>> So your reading of the original commit is that, whatever the issue was, only function zero was causing the problem? In other words, you are not concerned that pci_scan_slot() may now look at function 1 and skip all higher-numbered functions (assuming the problem is still there)?
>>>> I'm not sure I understand the question: Whether to look at higher-numbered
>>>> functions depends on function 0's multi-function bit alone, aiui. IOW if
>>>> function 1 is being looked at in the first place, functions 2-7 should
>>>> also be looked at.
>>>
>>> Wasn't the original patch describing a problem strictly as one for single-function devices, so the multi-function bit is not set? I.e. if all VFs (which are single-function devices) are placed in the same slot then pci_scan_slot() would only look at function 0 and ignore anything higher-numbered.
>>>
>>>
>>> My question is whether it would "only look at function 0 and ignore anything higher-numbered" or "only look at the lowest-numbered function and ignore anything higher-numbered".
>> The common scanning logic is to look at function 0 first. If that's
>> populated, the other functions get looked at only if function 0 has the
>> multi-function bit set. If function 0 is not populated, nothing is known
>> about the other functions, and hence they need to be scanned.
> In particular Linux'es next_fn() ends with
>
> 	/* dev may be NULL for non-contiguous multifunction devices */
> 	if (!dev || dev->multifunction)
> 		return (fn + 1) % 8;
>
> 	return 0;



Ah yes.


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>



From xen-devel-bounces@lists.xenproject.org Thu May 20 17:57:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 17:57:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131058.245147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljmup-0002I2-4f; Thu, 20 May 2021 17:56:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131058.245147; Thu, 20 May 2021 17:56:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljmup-0002Hv-1F; Thu, 20 May 2021 17:56:59 +0000
Received: by outflank-mailman (input) for mailman id 131058;
 Thu, 20 May 2021 17:56:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljmun-0002Hl-MF; Thu, 20 May 2021 17:56:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljmun-0005z2-Ht; Thu, 20 May 2021 17:56:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljmun-0003PA-4G; Thu, 20 May 2021 17:56:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljmun-0003k3-3m; Thu, 20 May 2021 17:56:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162102-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162102: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict:guest-start/debianhvm.repeat:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:allowable
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
X-Osstest-Versions-That:
    xen=935abe1cc463917c697c1451ec8d313a5d75f7de
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 May 2021 17:56:57 +0000

flight 162102 xen-unstable real [real]
flight 162105 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162102/
http://logs.test-lab.xenproject.org/osstest/logs/162105/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict 14 guest-start/debianhvm.repeat fail pass in 162105-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 162095

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162095
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162095
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162095
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162095
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162095
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162095
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162095
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162095
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162095
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162095
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162095
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41
baseline version:
 xen                  935abe1cc463917c697c1451ec8d313a5d75f7de

Last test of basis   162095  2021-05-19 16:38:54 Z    1 days
Testing same since   162102  2021-05-20 07:39:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Dario Faggioli <dfaggioli@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        fail    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   935abe1cc4..aa77acc280  aa77acc28098d04945af998f3fc0dbd3759b5b41 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 20 20:48:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 20 May 2021 20:48:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131074.245161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljpaN-0000eB-78; Thu, 20 May 2021 20:48:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131074.245161; Thu, 20 May 2021 20:48:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljpaN-0000e4-3q; Thu, 20 May 2021 20:48:03 +0000
Received: by outflank-mailman (input) for mailman id 131074;
 Thu, 20 May 2021 20:48:02 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljpaM-0000du-Ca; Thu, 20 May 2021 20:48:02 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljpaM-0000WX-5z; Thu, 20 May 2021 20:48:02 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljpaL-0000hl-PL; Thu, 20 May 2021 20:48:01 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljpaL-0007jT-Oo; Thu, 20 May 2021 20:48:01 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=gRZ5am/uDA/mLpl0T52Q/x6yrJAe/AwWfOEXxAs/Pn8=; b=Ssv/1b38GCCU+KxoZJCMhAQfy0
	IRP04uxFghbaTND8evaG2kPCNkgXCEjaC1OhigOOWC4t1285PNQdoMb6G3vQ6jCa5W0zbuHzCdOkF
	HIsJjY/R3UYucuVfdGCdVQBFsni+kvz0TLR62z2lF+boIuoiPj/iocHPAsuX9CEn8aHs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162103-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162103: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-credit1:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-shadow:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c3d0e3fd41b7f0f5d5d5b6022ab7e813f04ea727
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 20 May 2021 20:48:01 +0000

flight 162103 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162103/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start    fail in 162097 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1  13 debian-fixup     fail in 162097 pass in 162103
 test-arm64-arm64-xl-xsm      13 debian-fixup               fail pass in 162097
 test-amd64-amd64-xl-credit1  22 guest-start/debian.repeat  fail pass in 162097
 test-amd64-amd64-xl-shadow   22 guest-start/debian.repeat  fail pass in 162097
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 162097

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                c3d0e3fd41b7f0f5d5d5b6022ab7e813f04ea727
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  293 days
Failing since        152366  2020-08-01 20:49:34 Z  291 days  492 attempts
Testing same since   162097  2021-05-19 19:41:22 Z    1 days    2 attempts

------------------------------------------------------------
6063 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  fail    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   fail    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1645796 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 21 01:38:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 01:38:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131083.245175 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lju6r-0006cY-8B; Fri, 21 May 2021 01:37:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131083.245175; Fri, 21 May 2021 01:37:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lju6r-0006c6-0g; Fri, 21 May 2021 01:37:53 +0000
Received: by outflank-mailman (input) for mailman id 131083;
 Fri, 21 May 2021 01:37:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lju6q-0006bw-6M; Fri, 21 May 2021 01:37:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lju6p-00032T-QD; Fri, 21 May 2021 01:37:51 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lju6p-0004bg-9e; Fri, 21 May 2021 01:37:51 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lju6p-0006YC-99; Fri, 21 May 2021 01:37:51 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=RMvSTxk9TX4yAK/msZnZhzvivFA+Y1xU/sVoW5LD1io=; b=GkpCwN6DtzHll7vKaH4ZFMLRRa
	zjB8TWPOsUFHefNvI3rzwNWmZ8yA+7Kyq/b9oNaF6WtikGk3JOfwvH2EcdCbK7xMVu+l0/SAJqb/l
	1Sr/f0xBn0Y2lduqTxhd+Kxyc8ulCE1v1rXvwTJEKiQf2Y0wChfIm8WNwH+NDwT5gWDo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162104-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162104: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=fea2ad71c3e23f743701741346b51fdfbbff5ebf
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 01:37:51 +0000

flight 162104 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162104/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                fea2ad71c3e23f743701741346b51fdfbbff5ebf
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  273 days
Failing since        152659  2020-08-21 14:07:39 Z  272 days  500 attempts
Testing same since   162104  2021-05-20 13:09:43 Z    0 days    1 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 157234 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 21 03:24:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 03:24:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131094.245188 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljvm7-0008SC-5e; Fri, 21 May 2021 03:24:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131094.245188; Fri, 21 May 2021 03:24:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljvm7-0008S5-22; Fri, 21 May 2021 03:24:35 +0000
Received: by outflank-mailman (input) for mailman id 131094;
 Fri, 21 May 2021 03:24:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljvm5-0008Ru-SS; Fri, 21 May 2021 03:24:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljvm5-0005BS-GV; Fri, 21 May 2021 03:24:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ljvm5-0007Kg-5c; Fri, 21 May 2021 03:24:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ljvm5-0007Yf-4t; Fri, 21 May 2021 03:24:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d4oMmnLJRT2XFd2vBdgHTWddfLLVznRAvGXbPGXkPk8=; b=Jkf8w5/NwiQBShfgvGe3rtHpOU
	Rowo74QquygPbx09jP6xA6c3DTFGwadct7q2N1DUM+ccc0aRBlYvkKmDkKKk7xujLVcPB3P4nmbBk
	XGv6sWU3786pBnwYPxzCQwnJlDjRfWkJabewESpaYN0yH2nFuLEGiHa/b8m+6GDRV2MA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162106-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162106: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=f01da525b3de8e59b2656b55d40c60462098651f
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 03:24:33 +0000

flight 162106 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162106/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                f01da525b3de8e59b2656b55d40c60462098651f
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  293 days
Failing since        152366  2020-08-01 20:49:34 Z  292 days  493 attempts
Testing same since   162106  2021-05-20 21:10:47 Z    0 days    1 attempts

------------------------------------------------------------
6069 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1648797 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 21 05:26:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 05:26:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131103.245203 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljxg3-0002sz-WD; Fri, 21 May 2021 05:26:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131103.245203; Fri, 21 May 2021 05:26:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljxg3-0002ss-Sj; Fri, 21 May 2021 05:26:27 +0000
Received: by outflank-mailman (input) for mailman id 131103;
 Fri, 21 May 2021 05:26:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7IOp=KQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ljxg1-0002sm-U5
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 05:26:26 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c286a887-0914-4eae-8ffc-13b5c554babb;
 Fri, 21 May 2021 05:26:23 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E133AAC1A;
 Fri, 21 May 2021 05:26:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c286a887-0914-4eae-8ffc-13b5c554babb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621574783; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=zJLMNq1dYimxxdw70v+420DgM7LUa17lAK04OQM3TM8=;
	b=L2J2OI1AxpIpk+icFhpPJS4GmkdXfuyTYSZZ93VWWDem5XPXzJ4TZ9S10d1rKlsKJ7zxQc
	mf/ho2pAOg10chaNQyNUWNoWi8XGNt0MGBbqWzutrEZ+SPlnMtCsfkyTsmVgO204WxDCFB
	weNFV7wMMJtnFH4/4JukEBy1rvcXurM=
Subject: Re: [PATCH v2] libelf: improve PVH elfnote parsing
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <20210518144741.44395-1-roger.pau@citrix.com>
 <c645b764-00fe-2b90-3b31-7f2bb6f07c02@suse.com>
 <YKYreLP8N16vcIVB@Air-de-Roger>
 <162f76e1-9da5-c750-2591-ea011b4b2842@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <d4250baa-9680-cd48-3684-2b61b955713d@suse.com>
Date: Fri, 21 May 2021 07:26:21 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <162f76e1-9da5-c750-2591-ea011b4b2842@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="HdSGxOgLC3lGjgTwDp6KHEeuIMFHJI8dY"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--HdSGxOgLC3lGjgTwDp6KHEeuIMFHJI8dY
Content-Type: multipart/mixed; boundary="RLmkayiUrPnLMbGmBDWzKaZGYPWdByjPN";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: xen-devel@lists.xenproject.org, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <d4250baa-9680-cd48-3684-2b61b955713d@suse.com>
Subject: Re: [PATCH v2] libelf: improve PVH elfnote parsing
References: <20210518144741.44395-1-roger.pau@citrix.com>
 <c645b764-00fe-2b90-3b31-7f2bb6f07c02@suse.com>
 <YKYreLP8N16vcIVB@Air-de-Roger>
 <162f76e1-9da5-c750-2591-ea011b4b2842@suse.com>
In-Reply-To: <162f76e1-9da5-c750-2591-ea011b4b2842@suse.com>

--RLmkayiUrPnLMbGmBDWzKaZGYPWdByjPN
Content-Type: multipart/mixed;
 boundary="------------2ADF1466956EDA7E1D5545BB"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------2ADF1466956EDA7E1D5545BB
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.05.21 11:28, Jan Beulich wrote:
> On 20.05.2021 11:27, Roger Pau Monné wrote:
>> On Wed, May 19, 2021 at 12:34:19PM +0200, Jan Beulich wrote:
>>> On 18.05.2021 16:47, Roger Pau Monne wrote:
>>>> @@ -425,8 +425,11 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>>>>           return -1;
>>>>       }
>>>>
>>>> -    /* Initial guess for virt_base is 0 if it is not explicitly defined. */
>>>> -    if ( parms->virt_base == UNSET_ADDR )
>>>> +    /*
>>>> +     * Initial guess for virt_base is 0 if it is not explicitly defined in the
>>>> +     * PV case. For PVH virt_base is forced to 0 because paging is disabled.
>>>> +     */
>>>> +    if ( parms->virt_base == UNSET_ADDR || hvm )
>>>>       {
>>>>           parms->virt_base = 0;
>>>>           elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
>>>
>>> This message is wrong now if hvm is true but parms->virt_base != UNSET_ADDR.
>>> Best perhaps is to avoid emitting the message altogether when hvm is true.
>>> (Since you'll be touching it anyway, perhaps a good opportunity to do away
>>> with passing parms->virt_base to elf_msg(), and instead simply use a literal
>>> zero.)
>>>
>>>> @@ -441,8 +444,10 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
>>>>        *
>>>>        * If we are using the modern ELF notes interface then the default
>>>>        * is 0.
>>>> +     *
>>>> +     * For PVH this is forced to 0, as it's already a legacy option for PV.
>>>>        */
>>>> -    if ( parms->elf_paddr_offset == UNSET_ADDR )
>>>> +    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
>>>>       {
>>>>           if ( parms->elf_note_start )
>>>
>>> Don't you want "|| hvm" here as well, or alternatively suppress the
>>> fallback to the __xen_guest section in the PVH case (near the end of
>>> elf_xen_parse())?
>>
>> The legacy __xen_guest section doesn't support PHYS32_ENTRY, so yes,
>> that part could be completely skipped when called from an HVM
>> container.
>>
>> I think I will fix that in another patch though if you are OK, as
>> it's not strictly related to the calculation fixes done here.
>
> That's fine; it wants to be a prereq to the one here then, though,
> I think.

Would it be possible to add some comment to xen/include/public/elfnote.h
indicating which elfnotes are evaluated for which guest types, including
a hint as to which elfnotes _have_ been evaluated before this series? This
would help with cleaning up guests regarding the advertisement of elfnotes
(something I've been planning to do for the Linux kernel).


Juergen

--------------2ADF1466956EDA7E1D5545BB--

--RLmkayiUrPnLMbGmBDWzKaZGYPWdByjPN--


--HdSGxOgLC3lGjgTwDp6KHEeuIMFHJI8dY--


From xen-devel-bounces@lists.xenproject.org Fri May 21 05:27:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 05:27:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131108.245213 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljxgV-0003K8-Cf; Fri, 21 May 2021 05:26:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131108.245213; Fri, 21 May 2021 05:26:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljxgV-0003Jz-9h; Fri, 21 May 2021 05:26:55 +0000
Received: by outflank-mailman (input) for mailman id 131108;
 Fri, 21 May 2021 05:26:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=jA7z=KQ=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1ljxgU-0003Ds-Kb
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 05:26:54 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cbdedb3e-8093-43c0-ac34-6ad09071a3c3;
 Fri, 21 May 2021 05:26:53 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 3344AAC1A;
 Fri, 21 May 2021 05:26:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cbdedb3e-8093-43c0-ac34-6ad09071a3c3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621574812; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=h0Y29QIaGDhTJ8srlXj+KggabqhvgpqM7Cl5qz6AGjA=;
	b=YcXfUaKk8kBA6KxMspjLXpy2fLIGFObR5qdRGx9sgNd9rvOex4sW50kAoh02aXH+FK39XP
	FfX5LPkLPK2xB5ETinEmK24QBog80PwgCFbalyd7ZcNjLV6b+CNGJ3HQb25lF0EKX75dda
	DYqnl5sZCcHYcno5gydqNHh7jhzUQdI=
Message-ID: <8f780c50f5f672c65bc6a917460a5743e157707a.camel@suse.com>
Subject: Re: QEMU backport necessary for building with "recent" toolchain
 (on openSUSE Tumbleweed)
From: Dario Faggioli <dfaggioli@suse.com>
To: Anthony PERARD <anthony.perard@citrix.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>, Ian Jackson
 <iwj@xenproject.org>,  Wei Liu <wl@xen.org>, Roger Pau Monne
 <roger.pau@citrix.com>
Date: Fri, 21 May 2021 07:26:51 +0200
In-Reply-To: <YKZqPMNawZUbR4eu@perard>
References: <f7738499f24f6682f4ae1c1c750e30f322dfdbf3.camel@suse.com>
	 <YKZqPMNawZUbR4eu@perard>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-tuUp2rABFTmBvrlO40XX"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-tuUp2rABFTmBvrlO40XX
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Thu, 2021-05-20 at 14:55 +0100, Anthony PERARD wrote:
> On Tue, May 18, 2021 at 05:24:30PM +0200, Dario Faggioli wrote:
> > 
> > I think we need the following commit in our QEMU: bbd2d5a812077
> > ("build: -no-pie is no functional linker flag").
> 
> Hi Dario,
> 
Hi,

> I'm hoping to update qemu-xen to a newer version of QEMU (6.0) which
> would have the fix, but that needs a fix of libxl,
>     "Fix libxl with QEMU 6.0 + remove some more deprecated usages."
> So I would prefer to avoid adding more to the current branch.
> 
Sure, makes sense.

I wanted to bring it up in case it hadn't been noticed yet. If it has
been noticed and there's a plan, then we're good, I guess.

> The branch stable-4.15 already has the fix if you need it, in the
> meantime.
> 
Ok, thanks!

Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)


--=-tuUp2rABFTmBvrlO40XX--



From xen-devel-bounces@lists.xenproject.org Fri May 21 05:27:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 05:27:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131113.245225 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljxgs-0003s8-Mz; Fri, 21 May 2021 05:27:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131113.245225; Fri, 21 May 2021 05:27:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljxgs-0003ry-Jv; Fri, 21 May 2021 05:27:18 +0000
Received: by outflank-mailman (input) for mailman id 131113;
 Fri, 21 May 2021 05:27:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Y0JZ=KQ=amazon.com=prvs=768a5fbb6=anchalag@srs-us1.protection.inumbo.net>)
 id 1ljxgr-0003lt-E8
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 05:27:17 +0000
Received: from smtp-fw-80006.amazon.com (unknown [99.78.197.217])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6e96726c-88cc-436d-81cf-aa9999cb611b;
 Fri, 21 May 2021 05:27:16 +0000 (UTC)
Received: from pdx4-co-svc-p1-lb2-vlan2.amazon.com (HELO
 email-inbound-relay-1a-e34f1ddc.us-east-1.amazon.com) ([10.25.36.210])
 by smtp-border-fw-80006.pdx80.corp.amazon.com with ESMTP;
 21 May 2021 05:27:14 +0000
Received: from EX13MTAUWA001.ant.amazon.com
 (iad12-ws-svc-p26-lb9-vlan2.iad.amazon.com [10.40.163.34])
 by email-inbound-relay-1a-e34f1ddc.us-east-1.amazon.com (Postfix) with ESMTPS
 id 84D9AA1E62; Fri, 21 May 2021 05:27:07 +0000 (UTC)
Received: from EX13D07UWA004.ant.amazon.com (10.43.160.32) by
 EX13MTAUWA001.ant.amazon.com (10.43.160.58) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Fri, 21 May 2021 05:26:51 +0000
Received: from EX13MTAUWA001.ant.amazon.com (10.43.160.58) by
 EX13D07UWA004.ant.amazon.com (10.43.160.32) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Fri, 21 May 2021 05:26:51 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.160.118) with Microsoft SMTP
 Server id 15.0.1497.18 via Frontend Transport; Fri, 21 May 2021 05:26:51
 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id F1AC340124; Fri, 21 May 2021 05:26:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e96726c-88cc-436d-81cf-aa9999cb611b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1621574837; x=1653110837;
  h=date:from:to:cc:message-id:references:mime-version:
   in-reply-to:subject;
  bh=kDiMVanU/J46MBpWtTAkHD58jSy5mLoyYAH1XvH7tTU=;
  b=KaGi33Vipk1k55lu9qtcYFN200iDKBsinU+IO1Z1mRFTJec89utsK9Rd
   kOKY/Glg//qialmMWX3yL/tFGtCfbg8Ilfdqh2IrYQ9D7FnXPPmlPkm0U
   BjNt+tLTKKFRF0glSBapVEc01exvMisB3JywJ/xgdMWm3BGMwaZ+ojyzG
   Y=;
X-IronPort-AV: E=Sophos;i="5.82,313,1613433600"; 
   d="scan'208";a="2533323"
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend mode
Date: Fri, 21 May 2021 05:26:50 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: <boris.ostrovsky@oracle.com>
CC: "tglx@linutronix.de" <tglx@linutronix.de>, "mingo@redhat.com"
	<mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>, "hpa@zytor.com"
	<hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>, "linux-mm@kvack.org"
	<linux-mm@kvack.org>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>, "roger.pau@citrix.com"
	<roger.pau@citrix.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
	"davem@davemloft.net" <davem@davemloft.net>, "rjw@rjwysocki.net"
	<rjw@rjwysocki.net>, "len.brown@intel.com" <len.brown@intel.com>,
	"pavel@ucw.cz" <pavel@ucw.cz>, "peterz@infradead.org" <peterz@infradead.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"vkuznets@redhat.com" <vkuznets@redhat.com>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>,
	<Woodhouse@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>, David
	<dwmw@amazon.co.uk>, "benh@kernel.crashing.org" <benh@kernel.crashing.org>,
	<anchalag@amazon.com>, <aams@amazon.com>
Message-ID: <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <5f1e4772-7bd9-e6c0-3fe6-eef98bb72bd8@oracle.com>
 <20200921215447.GA28503@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <e3e447e5-2f7a-82a2-31c8-10c2ffcbfb2c@oracle.com>
 <20200922231736.GA24215@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <20200925190423.GA31885@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <274ddc57-5c98-5003-c850-411eed1aea4c@oracle.com>
 <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk

On Thu, Oct 01, 2020 at 08:43:58AM -0400, boris.ostrovsky@oracle.com wrote:
> 
> >>>>>>> Also, wrt KASLR stuff, that issue is still seen sometimes but I haven't had
> >>>>>>> bandwidth to dive deep into the issue and fix it.
> >>>> So what's the plan there? You first mentioned this issue early this year and judged by your response it is not clear whether you will ever spend time looking at it.
> >>>>
> >>> I do want to fix it and did do some debugging earlier this year just haven't
> >>> gotten back to it. Also, wanted to understand if the issue is a blocker to this
> >>> series?
> >>
> >> Integrating code with known bugs is less than ideal.
> >>
> > So for this series to be accepted, KASLR needs to be fixed along with other
> > comments of course?
> 
> 
> Yes, please.
> 
> 
> 
> >>> I had some theories when debugging around this like if the random base address picked by kaslr for the
> >>> resuming kernel mismatches the suspended kernel and just jogging my memory, I didn't find that as the case.
> >>> Another hunch was if physical address of registered vcpu info at boot is different from what suspended kernel
> >>> has and that can cause CPU's to get stuck when coming online.
> >>
> >> I'd think if this were the case you'd have 100% failure rate. And we are also re-registering vcpu info on xen restore and I am not aware of any failures due to KASLR.
> >>
> > What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
> > so Xen still remembers the old physical addresses for the VCPU information, created by the
> > booting kernel. But since the hibernation kernel may have different physical
> > addresses for VCPU info and if mismatch happens, it may cause issues with resume.
> > During hibernation, the VCPU info register hypercall is not invoked again.
> 
> 
> I still don't think that's the cause but it's certainly worth having a look.
> 
Hi Boris,
Apologies for picking this up again after last year.
I did a deep dive on the above statement, and that is indeed what is happening.
I did some debugging around KASLR and hibernation using reboot mode.
My debug prints show that whenever the vcpu_info* address assigned to a secondary vcpu
in xen_vcpu_setup() at boot differs from the one in the image, resume gets stuck for that
vcpu in bringup_cpu(). That means we have different addresses for
&per_cpu(xen_vcpu_info, cpu) at boot and after control jumps into the image.

I failed to get any prints after it got stuck in bringup_cpu(), and
I do not have an option to send a sysrq signal to the guest, or to get a kdump.
The address change is not observed in every hibernate-resume cycle, and I am not sure
whether this is a bug or expected behavior.
I am also contemplating the idea that it may be a bug in Xen code that only gets triggered
when KASLR is enabled, but I do not have substantial data to prove that.
Is it a coincidence that this always happens for the 1st vcpu?
Moreover, since the hypervisor is not aware that the guest is hibernated, and reboot mode
looks like a regular shutdown to dom0, is re-registering vcpu_info for secondary vcpus
even plausible? I could definitely use some advice to debug this further.

 
Some printk's from my debugging:

At Boot:

xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,

Image Loads:
It ends up in the condition:
 xen_vcpu_setup()
 {
 ...
     if (xen_hvm_domain()) {
         if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
             return 0;
     }
 ...
 }

xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0

This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
in the guest.

Thanks,
Anchal.
> 
> -boris
> 
> 


From xen-devel-bounces@lists.xenproject.org Fri May 21 06:34:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 06:34:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131127.245235 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljyjI-0002QB-Nf; Fri, 21 May 2021 06:33:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131127.245235; Fri, 21 May 2021 06:33:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljyjI-0002Q4-Ki; Fri, 21 May 2021 06:33:52 +0000
Received: by outflank-mailman (input) for mailman id 131127;
 Fri, 21 May 2021 06:33:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=eRHa=KQ=arm.com=michal.orzel@srs-us1.protection.inumbo.net>)
 id 1ljyjG-0002Px-NN
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 06:33:50 +0000
Received: from foss.arm.com (unknown [217.140.110.172])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTP
 id c1cba710-067a-49ca-97c6-4db6c1cebdf7;
 Fri, 21 May 2021 06:33:48 +0000 (UTC)
Received: from usa-sjc-imap-foss1.foss.arm.com (unknown [10.121.207.14])
 by usa-sjc-mx-foss1.foss.arm.com (Postfix) with ESMTP id C789211B3;
 Thu, 20 May 2021 23:33:47 -0700 (PDT)
Received: from [10.57.6.192] (unknown [10.57.6.192])
 by usa-sjc-imap-foss1.foss.arm.com (Postfix) with ESMTPSA id 7450B3F719;
 Thu, 20 May 2021 23:33:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c1cba710-067a-49ca-97c6-4db6c1cebdf7
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Julien Grall <julien@xen.org>, Jan Beulich <jbeulich@suse.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com, xen-devel@lists.xenproject.org
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
 <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
 <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
 <54e845e1-f283-d70c-a0c2-73e768e5a56e@suse.com>
 <b8a14892-0290-3aff-c4b5-6d363b884db7@xen.org>
From: Michal Orzel <michal.orzel@arm.com>
Message-ID: <f65babea-bd4f-f1fa-07db-78d83727b155@arm.com>
Date: Fri, 21 May 2021 08:33:39 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <b8a14892-0290-3aff-c4b5-6d363b884db7@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi guys,

On 17.05.2021 18:03, Julien Grall wrote:
> Hi Jan,
> 
> On 17/05/2021 08:01, Jan Beulich wrote:
>> On 12.05.2021 19:59, Julien Grall wrote:
>>> Hi,
>>>
>>> On 11/05/2021 07:37, Michal Orzel wrote:
>>>> On 05.05.2021 10:00, Jan Beulich wrote:
>>>>> On 05.05.2021 09:43, Michal Orzel wrote:
>>>>>> --- a/xen/include/public/arch-arm.h
>>>>>> +++ b/xen/include/public/arch-arm.h
>>>>>> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>>>>>>           /* Return address and mode */
>>>>>>        __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
>>>>>> -    uint32_t cpsr;                              /* SPSR_EL2 */
>>>>>> +    uint64_t cpsr;                              /* SPSR_EL2 */
>>>>>>           union {
>>>>>> -        uint32_t spsr_el1;       /* AArch64 */
>>>>>> +        uint64_t spsr_el1;       /* AArch64 */
>>>>>>            uint32_t spsr_svc;       /* AArch32 */
>>>>>>        };
>>>>>
>>>>> This change affects, besides domctl, also default_initialise_vcpu(),
>>>>> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
>>>>> only allows two unrelated VCPUOP_* to pass, but then I don't
>>>>> understand why arch_initialise_vcpu() doesn't simply return e.g.
>>>>> -EOPNOTSUPP. Hence I suspect I'm missing something.
>>>
>>> I think it is just an oversight from reviewing the following commit:
>>>
>>> commit 192df6f9122ddebc21d0a632c10da3453aeee1c2
>>> Author: Roger Pau Monné <roger.pau@citrix.com>
>>> Date:   Tue Dec 15 14:12:32 2015 +0100
>>>
>>>       x86: allow HVM guests to use hypercalls to bring up vCPUs
>>>
>>>       Allow the usage of the VCPUOP_initialise, VCPUOP_up, VCPUOP_down,
>>>       VCPUOP_is_up, VCPUOP_get_physid and VCPUOP_send_nmi hypercalls from HVM
>>>       guests.
>>>
>>>       This patch introduces a new structure (vcpu_hvm_context) that
>>> should be used
>>>       in conjuction with the VCPUOP_initialise hypercall in order to
>>> initialize
>>>       vCPUs for HVM guests.
>>>
>>>       Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>       Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>       Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>       Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>
>>> On Arm, the structure vcpu_guest_context is not exposed outside of Xen
>>> and the tools. Interestingly vcpu_guest_core_regs is but it should only
>>> be used within vcpu_guest_context.
>>>
>>> So as this is not used by stable ABI, it is fine to break it.
>>>
>>>>>
>>>> I agree that do_arm_vcpu_op only allows two VCPUOP* to pass and
>>>> arch_initialise_vcpu being called in case of VCPUOP_initialise
>>>> has no sense as VCPUOP_initialise is not supported on arm.
>>>> It makes sense that it should return -EOPNOTSUPP.
>>>> However do_arm_vcpu_op will not accept VCPUOP_initialise and will return
>>>> -EINVAL. So arch_initialise_vcpu for arm will not be called.
>>>> Do you think that changing this behaviour so that arch_initialise_vcpu returns
>>>> -EOPNOTSUPP should be part of this patch?
>>>
>>> I think this change is unrelated. So it should be handled in a follow-up
>>> patch.
>>
>> My only difference in viewing this is that I'd say the adjustment
>> would better be a prereq patch to this one, such that the one here
>> ends up being more obviously correct.
> 
> The function is already not reachable so I felt it was unfair to require the clean-up for merging this code.
> 
>> Also, if the function is
>> indeed not meant to be reachable, besides making it return
>> -EOPNOTSUPP (or alike) it should probably also have
>> ASSERT_UNREACHABLE() added.
> 
> +1 on the idea.
> 
> Cheers,
> 
FWICS, all the discussion is about creating a follow-up patch fixing the VCPUOP_initialise handling.
Is there anything left to do in this patch, or are you going to ack it?

Cheers,
Michal


From xen-devel-bounces@lists.xenproject.org Fri May 21 06:41:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 06:41:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131133.245247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljyqz-0003p8-Iv; Fri, 21 May 2021 06:41:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131133.245247; Fri, 21 May 2021 06:41:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljyqz-0003p1-Em; Fri, 21 May 2021 06:41:49 +0000
Received: by outflank-mailman (input) for mailman id 131133;
 Fri, 21 May 2021 06:41:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=3Lf9=KQ=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ljyqy-0003ov-3H
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 06:41:48 +0000
Received: from EUR05-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.21.81]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1ea25986-c241-4d9d-b514-8d9387bf1014;
 Fri, 21 May 2021 06:41:45 +0000 (UTC)
Received: from AM6P192CA0092.EURP192.PROD.OUTLOOK.COM (2603:10a6:209:8d::33)
 by AM4PR08MB2817.eurprd08.prod.outlook.com (2603:10a6:205:9::18) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.25; Fri, 21 May
 2021 06:41:42 +0000
Received: from AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:8d:cafe::a1) by AM6P192CA0092.outlook.office365.com
 (2603:10a6:209:8d::33) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.23 via Frontend
 Transport; Fri, 21 May 2021 06:41:42 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT037.mail.protection.outlook.com (10.152.17.241) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Fri, 21 May 2021 06:41:41 +0000
Received: ("Tessian outbound 504317ef584c:v92");
 Fri, 21 May 2021 06:41:41 +0000
Received: from 9cfdb7593ec0.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 58C94B55-E628-4F93-9CFE-28E502E7C5AC.1; 
 Fri, 21 May 2021 06:41:35 +0000
Received: from EUR05-AM6-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id 9cfdb7593ec0.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Fri, 21 May 2021 06:41:35 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB2863.eurprd08.prod.outlook.com (2603:10a6:802:1f::11)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Fri, 21 May
 2021 06:41:33 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::9d05:1301:2f9c:80c5%6]) with mapi id 15.20.4129.035; Fri, 21 May 2021
 06:41:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1ea25986-c241-4d9d-b514-8d9387bf1014
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=iu1gW/13rg6QUg9hXG/eAtr0kFQa59kQ3VdXnfkN6LU=;
 b=86mx60bVbzFxWLN5Om1fWJbutMH6o8x+asI1dJOYJj+SWysY9DR17kaIeqWVJtf5R/w1x7Wn4cBFCeP/36PYc6cldJ8nlRqjLQHQXA6jpA665mJ/spwRE2YZyplOdRAX7YNTllUdTKzGDfy+KodO/D4qv6McVe2xul2zvAIgcOU=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>, "julien@xen.org" <julien@xen.org>
Subject: RE: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Topic: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
Thread-Index: AQHXS6W/k/joZEkeaUiDjrfYH3jVEaro2ToAgAAR3sCAAC37gIAEZlqw
Date: Fri, 21 May 2021 06:41:32 +0000
Message-ID:
 <VE1PR08MB521538B39E6290BBA0842F97F7299@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
 <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4389b5be-7d23-31d7-67e0-0068cba79934@suse.com>
In-Reply-To: <4389b5be-7d23-31d7-67e0-0068cba79934@suse.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 4D7D3952DB29D044B8046A3D21197DBA.0
x-checkrecipientchecked: true
Authentication-Results-Original: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: ed7d618f-d683-4ee6-36da-08d91c237e66
x-ms-traffictypediagnostic: VI1PR08MB2863:|AM4PR08MB2817:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<AM4PR08MB28173ABABDF43E2404BCF81AF7299@AM4PR08MB2817.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:9508;OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB2863
Original-Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	ebb90b85-b707-42eb-d4da-08d91c237932
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 May 2021 06:41:41.9057
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: ed7d618f-d683-4ee6-36da-08d91c237e66
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT037.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM4PR08MB2817

Hi Jan

> -----Original Message-----
> From: Jan Beulich <jbeulich@suse.com>
> Sent: Tuesday, May 18, 2021 7:23 PM
> To: Penny Zheng <Penny.Zheng@arm.com>
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org; julien@xen.org
> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
> 
> On 18.05.2021 10:57, Penny Zheng wrote:
> >> From: Jan Beulich <jbeulich@suse.com>
> >> Sent: Tuesday, May 18, 2021 3:35 PM
> >>
> >> On 18.05.2021 07:21, Penny Zheng wrote:
> >>> --- a/xen/common/page_alloc.c
> >>> +++ b/xen/common/page_alloc.c
> >>> @@ -2447,6 +2447,9 @@ int assign_pages(
> >>>      {
> >>>          ASSERT(page_get_owner(&pg[i]) == NULL);
> >>>          page_set_owner(&pg[i], d);
> >>> +        /* use page_set_reserved_owner to set its reserved domain owner.
> >> */
> >>> +        if ( (pg[i].count_info & PGC_reserved) )
> >>> +            page_set_reserved_owner(&pg[i], d);
> >>
> >> Now this is puzzling: What's the point of setting two owner fields to
> >> the same value? I also don't recall you having introduced
> >> page_set_reserved_owner() for x86, so how is this going to build there?
> >>
> >
> > Thanks for pointing out that it will fail on x86.
> > As for the same value, sure, I shall change it to domid_t domid to record its
> reserved owner.
> > Only domid is enough for differentiate.
> > And even when domain get rebooted, struct domain may be destroyed, but
> > domid will stays The same.
> 
> Will it? Are you intending to put in place restrictions that make it impossible
> for the ID to get re-used by another domain?
> 
> > Major user cases for domain on static allocation are referring to the
> > whole system are static, No runtime creation.
> 
> Right, but that's not currently enforced afaics. If you would enforce it, it may
> simplify a number of things.
> 
> >>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
> >>>      return pg;
> >>>  }
> >>>
> >>> +/*
> >>> + * Allocate nr_pfns contiguous pages, starting at #start, of static
> >>> +memory,
> >>> + * then assign them to one specific domain #d.
> >>> + * It is the equivalent of alloc_domheap_pages for static memory.
> >>> + */
> >>> +struct page_info *alloc_domstatic_pages(
> >>> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
> >>> +        unsigned int memflags)
> >>> +{
> >>> +    struct page_info *pg = NULL;
> >>> +    unsigned long dma_size;
> >>> +
> >>> +    ASSERT(!in_irq());
> >>> +
> >>> +    if ( memflags & MEMF_no_owner )
> >>> +        memflags |= MEMF_no_refcount;
> >>> +
> >>> +    if ( !dma_bitsize )
> >>> +        memflags &= ~MEMF_no_dma;
> >>> +    else
> >>> +    {
> >>> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
> >>> +        /* Starting address shall meet the DMA limitation. */
> >>> +        if ( dma_size && start < dma_size )
> >>> +            return NULL;
> >>
> >> It is the entire range (i.e. in particular the last byte) which needs
> >> to meet such a restriction. I'm not convinced though that DMA width
> >> restrictions and static allocation are sensible to coexist.
> >>
> >
> > FWIT, if starting address meets the limitation, the last byte, greater
> > than starting address shall meet it too.
> 
> I'm afraid I don't know what you're meaning to tell me here.
> 

Referring to alloc_domheap_pages, if `dma_bitsize` is none-zero value,
it will use alloc_heap_pages to allocate pages from [dma_zone + 1,
zone_hi], `dma_zone + 1` pointing to address larger than 2^(dma_zone + 1).
So I was setting address limitation for the starting address.

> Jan

Cheers

Penny


From xen-devel-bounces@lists.xenproject.org Fri May 21 07:08:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 07:08:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131144.245258 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzGF-0006Tn-RJ; Fri, 21 May 2021 07:07:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131144.245258; Fri, 21 May 2021 07:07:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzGF-0006Tg-Mo; Fri, 21 May 2021 07:07:55 +0000
Received: by outflank-mailman (input) for mailman id 131144;
 Fri, 21 May 2021 07:07:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qZ6I=KQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljzGD-0006Ta-KG
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 07:07:53 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7ac3c737-07b5-47c3-a81d-3cdb8319c63c;
 Fri, 21 May 2021 07:07:52 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6C9A7ABED;
 Fri, 21 May 2021 07:07:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7ac3c737-07b5-47c3-a81d-3cdb8319c63c
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621580871; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JGorgyN/Y8P/wPLBJaVIuIEc9IeX+lrkCM5+Z6yZkrc=;
	b=iJCJuiBJH0xot7fAN8iWc3hXOdPkJFkcX9LRxxJssHPM8QiZD77f50LbNKbmKfDUQJi+89
	oibx22t/aCdB9BgaN2iZzkQcbe2B972T8p21Bp4OEQp8/OP8YrgWgwPzgjHgvm+PiYV28K
	o6oIkq9va5b11t0rzIeZ0gENGlXE6Ng=
Subject: Re: [PATCH v3 10/10] arm64: Change type of hsr, cpsr, spsr_el1 to
 uint64_t
To: Michal Orzel <michal.orzel@arm.com>
Cc: Stefano Stabellini <sstabellini@kernel.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Wei Liu <wl@xen.org>, Tamas K Lengyel <tamas@tklengyel.com>,
 Alexandru Isaila <aisaila@bitdefender.com>,
 Petre Pircalabu <ppircalabu@bitdefender.com>, bertrand.marquis@arm.com,
 wei.chen@arm.com, xen-devel@lists.xenproject.org,
 Julien Grall <julien@xen.org>
References: <20210505074308.11016-1-michal.orzel@arm.com>
 <20210505074308.11016-11-michal.orzel@arm.com>
 <c5676e69-a474-d1ad-c7e9-49c03be3ab66@suse.com>
 <1ff4f9fb-0eca-189a-2b47-b910dc6b3639@arm.com>
 <42a998be-2f99-a1b6-ace6-4c5d42af7046@xen.org>
 <54e845e1-f283-d70c-a0c2-73e768e5a56e@suse.com>
 <b8a14892-0290-3aff-c4b5-6d363b884db7@xen.org>
 <f65babea-bd4f-f1fa-07db-78d83727b155@arm.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c2d72d18-8266-2866-565a-f91ec4e22d84@suse.com>
Date: Fri, 21 May 2021 09:07:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <f65babea-bd4f-f1fa-07db-78d83727b155@arm.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.05.2021 08:33, Michal Orzel wrote:
> On 17.05.2021 18:03, Julien Grall wrote:
>> On 17/05/2021 08:01, Jan Beulich wrote:
>>> On 12.05.2021 19:59, Julien Grall wrote:
>>>> Hi,
>>>>
>>>> On 11/05/2021 07:37, Michal Orzel wrote:
>>>>> On 05.05.2021 10:00, Jan Beulich wrote:
>>>>>> On 05.05.2021 09:43, Michal Orzel wrote:
>>>>>>> --- a/xen/include/public/arch-arm.h
>>>>>>> +++ b/xen/include/public/arch-arm.h
>>>>>>> @@ -267,10 +267,10 @@ struct vcpu_guest_core_regs
>>>>>>>           /* Return address and mode */
>>>>>>>        __DECL_REG(pc64,         pc32);             /* ELR_EL2 */
>>>>>>> -    uint32_t cpsr;                              /* SPSR_EL2 */
>>>>>>> +    uint64_t cpsr;                              /* SPSR_EL2 */
>>>>>>>           union {
>>>>>>> -        uint32_t spsr_el1;       /* AArch64 */
>>>>>>> +        uint64_t spsr_el1;       /* AArch64 */
>>>>>>>            uint32_t spsr_svc;       /* AArch32 */
>>>>>>>        };
>>>>>>
>>>>>> This change affects, besides domctl, also default_initialise_vcpu(),
>>>>>> which Arm's arch_initialise_vcpu() calls. I realize do_arm_vcpu_op()
>>>>>> only allows two unrelated VCPUOP_* to pass, but then I don't
>>>>>> understand why arch_initialise_vcpu() doesn't simply return e.g.
>>>>>> -EOPNOTSUPP. Hence I suspect I'm missing something.
>>>>
>>>> I think it was just overlooked when reviewing the following commit:
>>>>
>>>> commit 192df6f9122ddebc21d0a632c10da3453aeee1c2
>>>> Author: Roger Pau Monné <roger.pau@citrix.com>
>>>> Date:   Tue Dec 15 14:12:32 2015 +0100
>>>>
>>>>       x86: allow HVM guests to use hypercalls to bring up vCPUs
>>>>
>>>>       Allow the usage of the VCPUOP_initialise, VCPUOP_up, VCPUOP_down,
>>>>       VCPUOP_is_up, VCPUOP_get_physid and VCPUOP_send_nmi hypercalls from HVM
>>>>       guests.
>>>>
>>>>       This patch introduces a new structure (vcpu_hvm_context) that
>>>>       should be used in conjunction with the VCPUOP_initialise hypercall
>>>>       in order to initialize vCPUs for HVM guests.
>>>>
>>>>       Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>>>       Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
>>>>       Reviewed-by: Jan Beulich <jbeulich@suse.com>
>>>>       Acked-by: Ian Campbell <ian.campbell@citrix.com>
>>>>
>>>> On Arm, the structure vcpu_guest_context is not exposed outside of Xen
>>>> and the tools. Interestingly, vcpu_guest_core_regs is, but it should only
>>>> be used within vcpu_guest_context.
>>>>
>>>> So, as this is not part of the stable ABI, it is fine to break it.
>>>>
>>>>>>
>>>>> I agree that do_arm_vcpu_op only allows two VCPUOP_* to pass, and
>>>>> calling arch_initialise_vcpu in the case of VCPUOP_initialise makes
>>>>> no sense, as VCPUOP_initialise is not supported on Arm.
>>>>> It makes sense for it to return -EOPNOTSUPP.
>>>>> However, do_arm_vcpu_op will not accept VCPUOP_initialise and will return
>>>>> -EINVAL, so arch_initialise_vcpu will not be called on Arm.
>>>>> Do you think that changing this behaviour so that arch_initialise_vcpu
>>>>> returns -EOPNOTSUPP should be part of this patch?
>>>>
>>>> I think this change is unrelated. So it should be handled in a follow-up
>>>> patch.
>>>
>>> My only difference in viewing this is that I'd say the adjustment
>>> would better be a prereq patch to this one, such that the one here
>>> ends up being more obviously correct.
>>
>> The function is already not reachable, so I felt it was unfair to require the clean-up before merging this code.
>>
>>> Also, if the function is
>>> indeed not meant to be reachable, besides making it return
>>> -EOPNOTSUPP (or alike) it should probably also have
>>> ASSERT_UNREACHABLE() added.
>>
>> +1 on the idea.
>>
>> Cheers,
>>
> FWICS, all the discussion is about creating a follow-up patch fixing the VCPUOP_initialise handling.
> Is there anything left to do in this patch, or are you going to ack it?

Afaic I'd find it quite helpful if that other patch was a prereq to this
one, making it more obvious that the change here is not going to break
anything. But it's Arm stuff, so Arm folks get the final say anyway.
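As a standalone illustration of the clean-up being discussed (a toy model, not the actual Xen code — the real dispatch lives in Xen's Arm tree, and the VCPUOP_* values are taken from the public vcpu.h interface), the current filtering and the suggested fallback could be sketched as:

```c
#include <assert.h>
#include <errno.h>

/* VCPUOP_* values as in Xen's public/vcpu.h. */
#define VCPUOP_initialise                     0
#define VCPUOP_register_runstate_memory_area  5
#define VCPUOP_register_vcpu_info            10

/* Model of do_arm_vcpu_op(): only two unrelated VCPUOP_* are let
 * through; everything else is rejected with -EINVAL, which is why
 * the initialise path below is unreachable on Arm today. */
static long do_arm_vcpu_op(int cmd)
{
    switch ( cmd )
    {
    case VCPUOP_register_vcpu_info:
    case VCPUOP_register_runstate_memory_area:
        return 0; /* would forward to the common do_vcpu_op() */
    default:
        return -EINVAL;
    }
}

/* The suggested clean-up: should arch_initialise_vcpu() ever be
 * reached, fail loudly instead of parsing a vcpu_guest_context. */
static int arch_initialise_vcpu(void)
{
    /* ASSERT_UNREACHABLE() in real Xen; modelled as a comment here. */
    return -EOPNOTSUPP;
}
```

Under this model, VCPUOP_initialise never reaches arch_initialise_vcpu(), so the uint64_t layout change cannot be observed through that path.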

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 21 07:09:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 07:09:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131150.245269 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzHX-00075Q-6G; Fri, 21 May 2021 07:09:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131150.245269; Fri, 21 May 2021 07:09:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzHX-00075J-1Q; Fri, 21 May 2021 07:09:15 +0000
Received: by outflank-mailman (input) for mailman id 131150;
 Fri, 21 May 2021 07:09:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qZ6I=KQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljzHW-00075B-Dw
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 07:09:14 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a34dee3b-569b-483e-9970-04c1d595f8fe;
 Fri, 21 May 2021 07:09:13 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A6094ABED;
 Fri, 21 May 2021 07:09:12 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a34dee3b-569b-483e-9970-04c1d595f8fe
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621580952; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TsMbFp7m4npdtuj+D1y0vmkSecJLSvOEkXcaz5d1N1U=;
	b=hYdHwiZqVUaRxid3TsxSWmZD9jkl8cnQAVNqphOoM1LEUeFh9LfOY0kEV0sPoFQSjtqmXz
	idnabPxg6Z8fplZN8gwE/mKGM/7usa5XsO6qO4hvcgjbTr1E6cOmv3sPfBdhba/XGjNHUO
	3WBdvDecK4oNmQYuIEOXUj9eMGC6qkg=
Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
To: Penny Zheng <Penny.Zheng@arm.com>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>,
 "julien@xen.org" <julien@xen.org>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-8-penny.zheng@arm.com>
 <7e4706dc-70ea-4dc9-3d70-f07396b462d8@suse.com>
 <VE1PR08MB521528492991FDFC87AC361BF72C9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <4389b5be-7d23-31d7-67e0-0068cba79934@suse.com>
 <VE1PR08MB521538B39E6290BBA0842F97F7299@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cc03d9d9-227a-0788-1a88-b35a77f5f18d@suse.com>
Date: Fri, 21 May 2021 09:09:12 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB521538B39E6290BBA0842F97F7299@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.05.2021 08:41, Penny Zheng wrote:
> Hi Jan
> 
>> -----Original Message-----
>> From: Jan Beulich <jbeulich@suse.com>
>> Sent: Tuesday, May 18, 2021 7:23 PM
>> To: Penny Zheng <Penny.Zheng@arm.com>
>> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
>> <Wei.Chen@arm.com>; nd <nd@arm.com>; xen-devel@lists.xenproject.org;
>> sstabellini@kernel.org; julien@xen.org
>> Subject: Re: [PATCH 07/10] xen/arm: intruduce alloc_domstatic_pages
>>
>> On 18.05.2021 10:57, Penny Zheng wrote:
>>>> From: Jan Beulich <jbeulich@suse.com>
>>>> Sent: Tuesday, May 18, 2021 3:35 PM
>>>>
>>>> On 18.05.2021 07:21, Penny Zheng wrote:
>>>>> --- a/xen/common/page_alloc.c
>>>>> +++ b/xen/common/page_alloc.c
>>>>> @@ -2447,6 +2447,9 @@ int assign_pages(
>>>>>      {
>>>>>          ASSERT(page_get_owner(&pg[i]) == NULL);
>>>>>          page_set_owner(&pg[i], d);
>>>>> +        /* use page_set_reserved_owner to set its reserved domain owner.
>>>> */
>>>>> +        if ( (pg[i].count_info & PGC_reserved) )
>>>>> +            page_set_reserved_owner(&pg[i], d);
>>>>
>>>> Now this is puzzling: What's the point of setting two owner fields to
>>>> the same value? I also don't recall you having introduced
>>>> page_set_reserved_owner() for x86, so how is this going to build there?
>>>>
>>>
>>> Thanks for pointing out that it will fail on x86.
>>> As for the same value, sure, I shall change it to a domid_t domid to
>>> record its reserved owner. The domid alone is enough to differentiate.
>>> And even when the domain gets rebooted, the struct domain may be
>>> destroyed, but the domid will stay the same.
>>
>> Will it? Are you intending to put in place restrictions that make it impossible
>> for the ID to get re-used by another domain?
>>
>>> The major use cases for domains on static allocation assume that the
>>> whole system is static, with no runtime domain creation.
>>
>> Right, but that's not currently enforced afaics. If you would enforce it, it may
>> simplify a number of things.
>>
>>>>> @@ -2509,6 +2512,56 @@ struct page_info *alloc_domheap_pages(
>>>>>      return pg;
>>>>>  }
>>>>>
>>>>> +/*
>>>>> + * Allocate nr_pfns contiguous pages, starting at #start, of static
>>>>> +memory,
>>>>> + * then assign them to one specific domain #d.
>>>>> + * It is the equivalent of alloc_domheap_pages for static memory.
>>>>> + */
>>>>> +struct page_info *alloc_domstatic_pages(
>>>>> +        struct domain *d, unsigned long nr_pfns, paddr_t start,
>>>>> +        unsigned int memflags)
>>>>> +{
>>>>> +    struct page_info *pg = NULL;
>>>>> +    unsigned long dma_size;
>>>>> +
>>>>> +    ASSERT(!in_irq());
>>>>> +
>>>>> +    if ( memflags & MEMF_no_owner )
>>>>> +        memflags |= MEMF_no_refcount;
>>>>> +
>>>>> +    if ( !dma_bitsize )
>>>>> +        memflags &= ~MEMF_no_dma;
>>>>> +    else
>>>>> +    {
>>>>> +        dma_size = 1ul << bits_to_zone(dma_bitsize);
>>>>> +        /* Starting address shall meet the DMA limitation. */
>>>>> +        if ( dma_size && start < dma_size )
>>>>> +            return NULL;
>>>>
>>>> It is the entire range (i.e. in particular the last byte) which needs
>>>> to meet such a restriction. I'm not convinced though that DMA width
>>>> restrictions and static allocation are sensible to coexist.
>>>>
>>>
>>> FWIT, if the starting address meets the limitation, then the last byte,
>>> being greater than the starting address, shall meet it too.
>>
>> I'm afraid I don't know what you're meaning to tell me here.
>>
> 
> Referring to alloc_domheap_pages, if `dma_bitsize` is a non-zero value,
> it will use alloc_heap_pages to allocate pages from [dma_zone + 1,
> zone_hi], with `dma_zone + 1` pointing to addresses larger than
> 2^(dma_zone + 1). So I was setting an address limitation on the starting
> address.

But does this zone concept apply to static pages at all?
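To make the range question concrete, here is a minimal sketch (not Xen code; 4K pages and all names are assumptions for illustration) of the lower-bound check under discussion. For a lower bound, checking the first byte suffices, since addresses within the range only grow; a genuine DMA-width (upper) limit would instead have to check the last byte.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t paddr_t;

/* Pages below dma_size are set aside for narrow-DMA allocations, so a
 * statically allocated range must sit entirely above that boundary.
 * start >= dma_size already implies the last byte is above it too. */
static bool static_range_above_dma(paddr_t start, unsigned long nr_pfns,
                                   paddr_t dma_size)
{
    paddr_t end = start + ((paddr_t)nr_pfns << 12); /* 4K pages assumed */

    if ( end < start ) /* address wrapped around: reject */
        return false;

    return !dma_size || start >= dma_size;
}
```

A range starting at the boundary passes, one starting below it fails, and a dma_size of zero disables the restriction entirely.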

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 21 07:18:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 07:18:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131158.245280 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzQI-00005a-W5; Fri, 21 May 2021 07:18:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131158.245280; Fri, 21 May 2021 07:18:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzQI-00005T-Ss; Fri, 21 May 2021 07:18:18 +0000
Received: by outflank-mailman (input) for mailman id 131158;
 Fri, 21 May 2021 07:18:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7IOp=KQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ljzQH-00005N-Fs
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 07:18:17 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2da2144f-c705-4dcb-95c2-846bdc2e2aac;
 Fri, 21 May 2021 07:18:16 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7CC82AB64;
 Fri, 21 May 2021 07:18:15 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2da2144f-c705-4dcb-95c2-846bdc2e2aac
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621581495; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=Q07r8cRs2qhSdSj5Uz/0nUX9useYysZDIgasLmFapNc=;
	b=HsLgQscw8bOBuxRsIq+a/NPY7sref+rQar3SvkYqPpLOGivQ/cAqxQYSr8qVN+1zRsrEm1
	YCJP9ewGJTKd3kLtp02aPto6etKRxh69MfCpyzNUVrSBucaDfRrh8jPzrjHcKLyxp7gWuY
	/xueaivcU4p239Bxl28n3n8vCMPkUP4=
To: Jan Beulich <jbeulich@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
Message-ID: <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
Date: Fri, 21 May 2021 09:18:14 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="jEqfWVILnjSRw4bhSA6PXBDXCw2lGHtM7"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--jEqfWVILnjSRw4bhSA6PXBDXCw2lGHtM7
Content-Type: multipart/mixed; boundary="wHXg1WDbQiEyhbh9OjEzRkdcpanyy5Jia";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
In-Reply-To: <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>

--wHXg1WDbQiEyhbh9OjEzRkdcpanyy5Jia
Content-Type: multipart/mixed;
 boundary="------------D97EB68808ED1E935BC05A77"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------D97EB68808ED1E935BC05A77
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.05.21 14:08, Jan Beulich wrote:
> On 20.05.2021 13:57, Juergen Gross wrote:
>> On 20.05.21 13:42, Jan Beulich wrote:
>>> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
>>> For this to work when NX is not available, x86_configure_nx() needs to
>>> be called first.
>>>
>>> Reported-by: Olaf Hering <olaf@aepfle.de>
>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>
>> Reviewed-by: Juergen Gross <jgross@suse.com>
>
> Thanks. I guess I forgot
>
> Cc: stable@vger.kernel.org
>
> If you agree, can you please add this before pushing to Linus?

Uh, I just had a look at why x86_configure_nx() was called after
xen_setup_gdt().

Upstream your patch will be fine, but before kernel 5.9 it will
break running as a 32-bit PV guest (see commit 36104cb9012a82e7).

So I will take your patch as is, but for kernels 5.8 and older I
recommend a different approach: directly setting the NX capability
after checking the CPUID bit, instead of letting get_cpu_cap() do
that.


Juergen

--------------D97EB68808ED1E935BC05A77
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------D97EB68808ED1E935BC05A77--

--wHXg1WDbQiEyhbh9OjEzRkdcpanyy5Jia--

--jEqfWVILnjSRw4bhSA6PXBDXCw2lGHtM7
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCnXrYFAwAAAAAACgkQsN6d1ii/Ey/O
DAf+N/VeKapYQchrAzUXKI5KOGlgWcfknU3s0OLQE5up82lAx9ahDHEKH8JMZJZpfSQXuH7ry5df
VvpiRa5FTODVsLcrVgbHzyEXVrR+xcbRiYI5Zvd+VZP6Q8sZaprA2acu9wttafYKPEuI8wF5lcHi
XWFqcfTQjoetk3urB7+5t0n2wd+Uttz1yAZGOZYQ2q62HS1rXgwkMfNAa1jVTYE4rxnXdyzy2gnA
8BhmcNmAq8msY+RMnCDzEGGPOJDeMeIvIBQzq7ReTyLiE8+zEZ3og4Vx1uv0HrdSvZBe8jFJObj+
yAoEEeAdvTtHJmeTktYCO7pik2+HQM6aV1n2di25bg==
=Ehxt
-----END PGP SIGNATURE-----

--jEqfWVILnjSRw4bhSA6PXBDXCw2lGHtM7--


From xen-devel-bounces@lists.xenproject.org Fri May 21 07:26:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 07:26:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131165.245291 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzXv-0001Y8-QB; Fri, 21 May 2021 07:26:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131165.245291; Fri, 21 May 2021 07:26:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzXv-0001Y1-MI; Fri, 21 May 2021 07:26:11 +0000
Received: by outflank-mailman (input) for mailman id 131165;
 Fri, 21 May 2021 07:26:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qZ6I=KQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1ljzXu-0001Xv-Fq
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 07:26:10 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f31b127d-d8d8-43de-bf31-c4e7d1435439;
 Fri, 21 May 2021 07:26:09 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id D1B53AB64;
 Fri, 21 May 2021 07:26:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f31b127d-d8d8-43de-bf31-c4e7d1435439
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621581968; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QgWvsxiemLxZi6Uo1RdnfIDagmZQ6WxkASMEMiRYIuw=;
	b=ApjhcA9Onsc9oxlCzlv1VwnpqrBBAlRvnZAy4KReIEJDcHvIuwESNFDFJRelvJaQpoin/f
	lBUmfBRQ4nqTEmqzBPd2mcrIfhfnTIIkrqqBffYNhkB7XzxywFBKOWKscjsUuF7Y0ZViHO
	xmwqgFAU9P1ZKwQ40yVFoshHwtMTF8A=
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
To: Juergen Gross <jgross@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
 <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <89c46d1a-9474-0f17-3fda-4809a14adb45@suse.com>
Date: Fri, 21 May 2021 09:26:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.05.2021 09:18, Juergen Gross wrote:
> On 20.05.21 14:08, Jan Beulich wrote:
>> On 20.05.2021 13:57, Juergen Gross wrote:
>>> On 20.05.21 13:42, Jan Beulich wrote:
>>>> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
>>>> For this to work when NX is not available, x86_configure_nx() needs to
>>>> be called first.
>>>>
>>>> Reported-by: Olaf Hering <olaf@aepfle.de>
>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>
>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>
>> Thanks. I guess I forgot
>>
>> Cc: stable@vger.kernel.org
>>
>> If you agree, can you please add this before pushing to Linus?
> 
> Uh, just had a look why x86_configure_nx() was called after
> xen_setup_gdt().
> 
> Upstream your patch will be fine, but before kernel 5.9 it will
> break running as a 32-bit PV guest (see commit 36104cb9012a82e7).

Oh, indeed. That commit then actually introduced the issue here,
and hence a Fixes: tag may be warranted.

> So I will take your patch as is, but for kernels 5.8 and older I
> recommend a different approach: directly setting the NX capability
> after checking the CPUID bit, instead of letting get_cpu_cap() do that.

Right - perhaps the only halfway viable option.

64-bit kernels predating 4f277295e54c may then also need that one,
but perhaps all stable ones already have it because it was tagged
for stable.
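The ordering dependency at the heart of this thread can be modelled in a few lines (a toy model, not the Linux code — the real functions are x86_configure_nx() and xen_setup_gdt() in the kernel's Xen PV boot path): mapping the boot GDT consults state that only becomes valid once NX support has been probed.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: setup_gdt() adjusts page tables using a mask that is
 * only correct after configure_nx() has probed CPUID for NX. */
static bool nx_probed;
static bool gdt_mapped_with_valid_mask;

static void configure_nx(void)
{
    /* Would probe CPUID and set the supported-PTE mask accordingly. */
    nx_probed = true;
}

static void setup_gdt(void)
{
    /* Records whether the mask consulted here was already valid. */
    gdt_mapped_with_valid_mask = nx_probed;
}
```

Calling setup_gdt() first (the old order) uses a stale mask; swapping the two calls, as the patch does, makes the GDT mapping see the probed state.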

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 21 07:45:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 07:45:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131172.245302 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzqp-0003oa-EE; Fri, 21 May 2021 07:45:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131172.245302; Fri, 21 May 2021 07:45:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ljzqp-0003oT-BD; Fri, 21 May 2021 07:45:43 +0000
Received: by outflank-mailman (input) for mailman id 131172;
 Fri, 21 May 2021 07:45:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7IOp=KQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1ljzqn-0003oN-SE
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 07:45:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c584965b-0326-4988-a95b-0ea1611794cd;
 Fri, 21 May 2021 07:45:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 9640BAACA;
 Fri, 21 May 2021 07:45:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c584965b-0326-4988-a95b-0ea1611794cd
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621583139; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=/sMvNp+9lKkN+WiHz3qNpqJVARtOU7zT7NAo9LzV8rU=;
	b=lXvVE1ZGhPjH2EZB2MVm75MauFOOxpZ0wp6bf7V04u5oo3V1ymGp9U0BYRQ4U8rLG1NNug
	RS/iYtICwtUO8pjCJCmifcHX8ajoe88VSy0eLBazBE2uCMZAtoIdIwLw0IKxlfPCtkEsLm
	pBYSf/KiPeiaCS+XJzHZzrKfiXWkIeU=
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
 <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
 <89c46d1a-9474-0f17-3fda-4809a14adb45@suse.com>
From: Juergen Gross <jgross@suse.com>
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
Message-ID: <2d019c04-415b-293b-052b-26b1ea3be189@suse.com>
Date: Fri, 21 May 2021 09:45:38 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <89c46d1a-9474-0f17-3fda-4809a14adb45@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="xYVPNHcK2UiZjHVM9n8b5M6RfVqE7AsuZ"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--xYVPNHcK2UiZjHVM9n8b5M6RfVqE7AsuZ
Content-Type: multipart/mixed; boundary="PwWjzX44LP6A6yeUvPoM1lSs3gpCYXqct";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <2d019c04-415b-293b-052b-26b1ea3be189@suse.com>
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
 <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
 <89c46d1a-9474-0f17-3fda-4809a14adb45@suse.com>
In-Reply-To: <89c46d1a-9474-0f17-3fda-4809a14adb45@suse.com>

--PwWjzX44LP6A6yeUvPoM1lSs3gpCYXqct
Content-Type: multipart/mixed;
 boundary="------------4B6D1E5FAC04B12CF593AF73"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------4B6D1E5FAC04B12CF593AF73
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 21.05.21 09:26, Jan Beulich wrote:
> On 21.05.2021 09:18, Juergen Gross wrote:
>> On 20.05.21 14:08, Jan Beulich wrote:
>>> On 20.05.2021 13:57, Juergen Gross wrote:
>>>> On 20.05.21 13:42, Jan Beulich wrote:
>>>>> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
>>>>> For this to work when NX is not available, x86_configure_nx() needs to
>>>>> be called first.
>>>>>
>>>>> Reported-by: Olaf Hering <olaf@aepfle.de>
>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>
>>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>>
>>> Thanks. I guess I forgot
>>>
>>> Cc: stable@vger.kernel.org
>>>
>>> If you agree, can you please add this before pushing to Linus?
>>
>> Uh, just had a look why x86_configure_nx() was called after
>> xen_setup_gdt().
>>
>> Upstream your patch will be fine, but before kernel 5.9 it will
>> break running as 32-bit PV guest (see commit 36104cb9012a82e7).
>
> Oh, indeed. That commit then actually introduced the issue here,
> and hence a Fixes: tag may be warranted.

Added it already. :-)

And I've limited the backport to happen not for 5.8 and older, of
course.


Juergen

--------------4B6D1E5FAC04B12CF593AF73
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------4B6D1E5FAC04B12CF593AF73--

--PwWjzX44LP6A6yeUvPoM1lSs3gpCYXqct--

--xYVPNHcK2UiZjHVM9n8b5M6RfVqE7AsuZ
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCnZSIFAwAAAAAACgkQsN6d1ii/Ey8x
Twf/Yn8DWeclME/OqMShr1eqPWdNHCYDZsdr0FnzeC8A6xIA+1IRp17aqBql8JOxbqzwpYJ5gh3H
JzFRs/1xMBwwJcbKhbXVunXHlODsRqzD0BQlukoPTmTBYlaKINzGgcVw9fvlkD9jOnhmL8eGgKkj
xN3XYeeiqx3byngrO6fMOENq+vJmKLVvTbZgpQrWo50JhcH6GiaUM09GrXX8ZSZgsCwEepoHdO93
dtK4H3VkqeRM9OhNGHnbQHxRKFzSyYOtiC/n28WN7W1eMV4CwHZrfMCDJZXB6DXZqHAqPSlmYHQQ
SsinAGpZM7/CpSZuhHLB6/I1xy5DYJE2iMqyGJIKtw==
=h3ze
-----END PGP SIGNATURE-----

--xYVPNHcK2UiZjHVM9n8b5M6RfVqE7AsuZ--


From xen-devel-bounces@lists.xenproject.org Fri May 21 08:23:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 08:23:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131184.245312 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk0RG-0008Kh-Kn; Fri, 21 May 2021 08:23:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131184.245312; Fri, 21 May 2021 08:23:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk0RG-0008Ka-Hj; Fri, 21 May 2021 08:23:22 +0000
Received: by outflank-mailman (input) for mailman id 131184;
 Fri, 21 May 2021 08:23:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk0RG-0008KQ-0z; Fri, 21 May 2021 08:23:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk0RF-0002lL-Qz; Fri, 21 May 2021 08:23:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk0RF-0006re-FY; Fri, 21 May 2021 08:23:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lk0RF-0004oY-F5; Fri, 21 May 2021 08:23:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cJXBvKo95d8z/PQrpvvBUy+VlbQf6rkj+SsvXZsICxc=; b=xMbT9oXhgZA/uQe1WuePhATEyy
	ZOti2MQk7E5ZZeH/30D3TVMvwqAYF4TtsuxTSDLsGxmZGGmgukbzND1/rqrAVE9U30NkQq1GIIOaN
	P7pPzNGzDlzbOAP/QiqgbXyXHYb/5yt4kEaOWvxEhLxj473HtKGbhV4xipXPN54kCgUQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162110-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162110: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=015fe0439f0592ca0b0274b306258a1e7aafe43c
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 08:23:21 +0000

flight 162110 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162110/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              015fe0439f0592ca0b0274b306258a1e7aafe43c
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  315 days
Failing since        151818  2020-07-11 04:18:52 Z  314 days  307 attempts
Testing same since   162110  2021-05-21 04:20:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 58452 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 21 08:29:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 08:29:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131192.245327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk0XX-0000d6-CI; Fri, 21 May 2021 08:29:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131192.245327; Fri, 21 May 2021 08:29:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk0XX-0000cz-8n; Fri, 21 May 2021 08:29:51 +0000
Received: by outflank-mailman (input) for mailman id 131192;
 Fri, 21 May 2021 08:29:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7IOp=KQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lk0XW-0000ct-2I
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 08:29:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0aa4128f-ea1e-4da7-9758-2edb9e19e37d;
 Fri, 21 May 2021 08:29:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2EAC8AAA6;
 Fri, 21 May 2021 08:29:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0aa4128f-ea1e-4da7-9758-2edb9e19e37d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621585788; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=DecKl4PtsMcOavJpQVs9JFoCwl4A/96tMO59OYiuSgI=;
	b=X0hW84LtZblqMjvCQKJkV/+JAPnaBRQhqH6f8wHJ/21esXnn+fjIWMd3KQqbjZul5umtXY
	0K4CEdEZh6YI8Jy5eB2Tzs1x/f1aokG0Rs/9K+3RV7dRzeCl+395kLlqjImplthjy+C8L2
	Ccv58Ky3B9EInGy0Shy8X00+tzxaC3M=
Subject: Re: [PATCH v2 0/2] xen-pciback: a fix and a workaround
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Konrad Wilk <konrad.wilk@oracle.com>
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <1b2e0512-ae80-3b15-0c0a-3578bb2f762f@suse.com>
Date: Fri, 21 May 2021 10:29:47 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="f0Pe6FeVN3AmBT0U8zIUfi1ts5alt0KdL"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--f0Pe6FeVN3AmBT0U8zIUfi1ts5alt0KdL
Content-Type: multipart/mixed; boundary="zGJ1Ntz7S26aEz8LQrjpkAvgjQRszdFCn";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Boris Ostrovsky <boris.ostrovsky@oracle.com>,
 Konrad Wilk <konrad.wilk@oracle.com>
Message-ID: <1b2e0512-ae80-3b15-0c0a-3578bb2f762f@suse.com>
Subject: Re: [PATCH v2 0/2] xen-pciback: a fix and a workaround
References: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>
In-Reply-To: <38774140-871d-59a4-cf49-9cb1cc78c381@suse.com>

--zGJ1Ntz7S26aEz8LQrjpkAvgjQRszdFCn
Content-Type: multipart/mixed;
 boundary="------------1316A6A41B3A0631BD406A7E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------1316A6A41B3A0631BD406A7E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 18.05.21 18:12, Jan Beulich wrote:
> The first change completes a several years old but still incomplete
> change. As mentioned there, reverting the original change may also
> be an option. The second change works around some odd libxl behavior,
> as described in [1]. As per a response to that mail addressing the
> issue in libxl may also be possible, but it's not clear to me who
> would get to doing so at which point in time. Hence the kernel side
> alternative is being proposed here.
>
> As to Konrad being on the Cc list: I find it puzzling that he's
> listed under "XEN PCI SUBSYSTEM", but pciback isn't considered part
> of this.
>
> 1: redo VF placement in the virtual topology
> 2: reconfigure also from backend watch handler

Series pushed to xen/tip.git for-linus-5.13b


Juergen

--------------1316A6A41B3A0631BD406A7E
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------1316A6A41B3A0631BD406A7E--

--zGJ1Ntz7S26aEz8LQrjpkAvgjQRszdFCn--

--f0Pe6FeVN3AmBT0U8zIUfi1ts5alt0KdL
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCnb3sFAwAAAAAACgkQsN6d1ii/Ey80
pwf+I5DmZN5BRKi/hdVW8Czx6uAbjjMXvjRtaEy3mgQo8qHkG+ZfTZwrsN/ofOyD+4q1HQJAnhEc
HPZnnuSPUTWud352p1d7hbHL5aGaHXCf5xLU67bmj8SOXcsB+LrQH4gZBrGmFNgG4fqsQgfpQWsd
HD1hWqpvr3pWRgl0hlgRAM3Bz6D/OgtUl9Yj2LvxEXw9wiW1D5IA/mUGjNE5P16104TTy0ke9Ffr
no1Uh7yVjRdwNgUmdJuvXZ99sNRkXZOuVnY2XtIlDWNnZyQBd+5wiuUT+CyHIYfCtl/aLiORtxfd
ARNzUl2+riIkNXJ+BTevfCVj2I5q7a7NmXQgaaqM7A==
=JQ/T
-----END PGP SIGNATURE-----

--f0Pe6FeVN3AmBT0U8zIUfi1ts5alt0KdL--


From xen-devel-bounces@lists.xenproject.org Fri May 21 08:30:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 08:30:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131197.245338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk0Y5-0001sh-Lv; Fri, 21 May 2021 08:30:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131197.245338; Fri, 21 May 2021 08:30:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk0Y5-0001sY-Iw; Fri, 21 May 2021 08:30:25 +0000
Received: by outflank-mailman (input) for mailman id 131197;
 Fri, 21 May 2021 08:30:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=7IOp=KQ=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lk0Y3-0001sO-Na
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 08:30:23 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7bf1221a-c816-4600-ad0c-fe3cc5400e8e;
 Fri, 21 May 2021 08:30:22 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C6099AACA;
 Fri, 21 May 2021 08:30:21 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7bf1221a-c816-4600-ad0c-fe3cc5400e8e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621585821; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=MWM8l6mAKrJ8U74Lc+HaaPPYjP+PYe/VBRl7eg9Z1WA=;
	b=YkHmCmZ49Nr4aYdWHWUitm1wad57DlozRydjD7S1MkI/NV/yRLp5O9qnntD8Ec9x6IIGH6
	SxxC/SKZP73B0zJjnxw08EBhMjcPmUePTsoEVOIATLX64XfGgW5lM4+D0D3VZV/FR2MIqW
	Gyrk5fmhfGlyjedlDMMU4Pyl4cao7EI=
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
To: Jan Beulich <jbeulich@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
From: Juergen Gross <jgross@suse.com>
Message-ID: <89335e44-64ae-be79-afc5-876d57b2ff5d@suse.com>
Date: Fri, 21 May 2021 10:30:21 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="bwjv468KiQEgKbbtHpv4qf4yT74eoMjHI"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--bwjv468KiQEgKbbtHpv4qf4yT74eoMjHI
Content-Type: multipart/mixed; boundary="9AEnW2FVjjtl94digKiu8gtdywboVhOue";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: Jan Beulich <jbeulich@suse.com>,
 Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <89335e44-64ae-be79-afc5-876d57b2ff5d@suse.com>
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
In-Reply-To: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>

--9AEnW2FVjjtl94digKiu8gtdywboVhOue
Content-Type: multipart/mixed;
 boundary="------------BFE3439117E7BA4DAC744DC9"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------BFE3439117E7BA4DAC744DC9
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 20.05.21 13:42, Jan Beulich wrote:
> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
> For this to work when NX is not available, x86_configure_nx() needs to
> be called first.
>=20
> Reported-by: Olaf Hering <olaf@aepfle.de>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Pushed to xen/tip.git for-linus-5.13b


Juergen


--------------BFE3439117E7BA4DAC744DC9
Content-Type: application/pgp-keys;
 name="OpenPGP_0xB0DE9DD628BF132F.asc"
Content-Transfer-Encoding: quoted-printable
Content-Description: OpenPGP public key
Content-Disposition: attachment;
 filename="OpenPGP_0xB0DE9DD628BF132F.asc"

-----BEGIN PGP PUBLIC KEY BLOCK-----

xsBNBFOMcBYBCACgGjqjoGvbEouQZw/ToiBg9W98AlM2QHV+iNHsEs7kxWhKMjrioyspZKOBy=
cWx
w3ie3j9uvg9EOB3aN4xiTv4qbnGiTr3oJhkB1gsb6ToJQZ8uxGq2kaV2KL9650I1SJvedYm8O=
f8Z
d621lSmoKOwlNClALZNew72NjJLEzTalU1OdT7/i1TXkH09XSSI8mEQ/ouNcMvIJNwQpd369y=
9bf
IhWUiVXEK7MlRgUG6MvIj6Y3Am/BBLUVbDa4+gmzDC9ezlZkTZG2t14zWPvxXP3FAp2pkW0xq=
G7/
377qptDmrk42GlSKN4z76ELnLxussxc7I2hx18NUcbP8+uty4bMxABEBAAHNHEp1ZXJnZW4gR=
3Jv
c3MgPGpnQHBmdXBmLm5ldD7CwHkEEwECACMFAlOMcBYCGwMHCwkIBwMCAQYVCAIJCgsEFgIDA=
QIe
AQIXgAAKCRCw3p3WKL8TL0KdB/93FcIZ3GCNwFU0u3EjNbNjmXBKDY4FUGNQH2lvWAUy+dnyT=
hpw
dtF/jQ6j9RwE8VP0+NXcYpGJDWlNb9/JmYqLiX2Q3TyevpB0CA3dbBQp0OW0fgCetToGIQrg0=
MbD
1C/sEOv8Mr4NAfbauXjZlvTj30H2jO0u+6WGM6nHwbh2l5O8ZiHkH32iaSTfN7Eu5RnNVUJbv=
oPH
Z8SlM4KWm8rG+lIkGurqqu5gu8q8ZMKdsdGC4bBxdQKDKHEFExLJK/nRPFmAuGlId1E3fe10v=
5QL
+qHI3EIPtyfE7i9Hz6rVwi7lWKgh7pe0ZvatAudZ+JNIlBKptb64FaiIOAWDCx1SzR9KdWVyZ=
2Vu
IEdyb3NzIDxqZ3Jvc3NAc3VzZS5jb20+wsB5BBMBAgAjBQJTjHCvAhsDBwsJCAcDAgEGFQgCC=
QoL
BBYCAwECHgECF4AACgkQsN6d1ii/Ey/HmQf/RtI7kv5A2PS4RF7HoZhPVPogNVbC4YA6lW7Dr=
Wf0
teC0RR3MzXfy6pJ+7KLgkqMlrAbN/8Dvjoz78X+5vhH/rDLa9BuZQlhFmvcGtCF8eR0T1v0nC=
/nu
AFVGy+67q2DH8As3KPu0344TBDpAvr2uYM4tSqxK4DURx5INz4ZZ0WNFHcqsfvlGJALDeE0Lh=
ITT
d9jLzdDad1pQSToCnLl6SBJZjDOX9QQcyUigZFtCXFst4dlsvddrxyqT1f17+2cFSdu7+ynLm=
XBK
7abQ3rwJY8SbRO2iRulogc5vr/RLMMlscDAiDkaFQWLoqHHOdfO9rURssHNN8WkMnQfvUewRz=
80h
SnVlcmdlbiBHcm9zcyA8amdyb3NzQG5vdmVsbC5jb20+wsB5BBMBAgAjBQJTjHDXAhsDBwsJC=
AcD
AgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey8PUQf/ehmgCI9jB9hlgexLvgOtf7PJn=
FOX
gMLdBQgBlVPO3/D9R8LtF9DBAFPNhlrsfIG/SqICoRCqUcJ96Pn3P7UUinFG/I0ECGF4EvTE1=
jnD
kfJZr6jrbjgyoZHiw/4BNwSTL9rWASyLgqlA8u1mf+c2yUwcGhgkRAd1gOwungxcwzwqgljf0=
N51
N5JfVRHRtyfwq/ge+YEkDGcTU6Y0sPOuj4Dyfm8fJzdfHNQsWq3PnczLVELStJNdapwPOoE+l=
otu
fe3AM2vAEYJ9rTz3Cki4JFUsgLkHFqGZarrPGi1eyQcXeluldO3m91NK/1xMI3/+8jbO0tsn1=
tqS
EUGIJi7ox80eSnVlcmdlbiBHcm9zcyA8amdyb3NzQHN1c2UuZGU+wsB5BBMBAgAjBQJTjHDrA=
hsD
BwsJCAcDAgEGFQgCCQoLBBYCAwECHgECF4AACgkQsN6d1ii/Ey+LhQf9GL45eU5vOowA2u5N3=
g3O
ZUEBmDHVVbqMtzwlmNC4k9Kx39r5s2vcFl4tXqW7g9/ViXYuiDXb0RfUpZiIUW89siKrkzmQ5=
dM7
wRqzgJpJwK8Bn2MIxAKArekWpiCKvBOB/Cc+3EXE78XdlxLyOi/NrmSGRIov0karw2RzMNOu5=
D+j
LRZQd1Sv27AR+IP3I8U4aqnhLpwhK7MEy9oCILlgZ1QZe49kpcumcZKORmzBTNh30FVKK1Evm=
V2x
AKDoaEOgQB4iFQLhJCdP1I5aSgM5IVFdn7v5YgEYuJYx37IoN1EblHI//x/e2AaIHpzK5h88N=
Eaw
QsaNRpNSrcfbFmAg987ATQRTjHAWAQgAyzH6AOODMBjgfWE9VeCgsrwH3exNAU32gLq2xvjpW=
nHI
s98ndPUDpnoxWQugJ6MpMncr0xSwFmHEgnSEjK/PAjppgmyc57BwKII3sV4on+gDVFJR6Y8ZR=
wgn
BC5mVM6JjQ5xDk8WRXljExRfUX9pNhdE5eBOZJrDRoLUmmjDtKzWaDhIg/+1Hzz93X4fCQkNV=
bVF
LELU9bMaLPBG/x5q4iYZ2k2ex6d47YE1ZFdMm6YBYMOljGkZKwYde5ldM9mo45mmwe0icXKLk=
pEd
IXKTZeKDO+Hdv1aqFuAcccTg9RXDQjmwhC3yEmrmcfl0+rPghO0Iv3OOImwTEe4co3c1mwARA=
QAB
wsBfBBgBAgAJBQJTjHAWAhsMAAoJELDendYovxMvQ/gH/1ha96vm4P/L+bQpJwrZ/dneZcmEw=
Tbe
8YFsw2V/Buv6Z4Mysln3nQK5ZadD534CF7TDVft7fC4tU4PONxF5D+/tvgkPfDAfF77zy2AH1=
vJz
Q1fOU8lYFpZXTXIHb+559UqvIB8AdgR3SAJGHHt4RKA0F7f5ipYBBrC6cyXJyyoprT10EMvU8=
VGi
wXvTyJz3fjoYsdFzpWPlJEBRMedCot60g5dmbdrZ5DWClAr0yau47zpWj3enf1tLWaqcsuylW=
svi
uGjKGw7KHQd3bxALOknAp4dN3QwBYCKuZ7AddY9yjynVaD5X7nF9nO5BjR/i1DG86lem3iBDX=
zXs
ZDn8R38=3D
=3D2wuH
-----END PGP PUBLIC KEY BLOCK-----

--------------BFE3439117E7BA4DAC744DC9--

--9AEnW2FVjjtl94digKiu8gtdywboVhOue--

--bwjv468KiQEgKbbtHpv4qf4yT74eoMjHI
Content-Type: application/pgp-signature; name="OpenPGP_signature.asc"
Content-Description: OpenPGP digital signature
Content-Disposition: attachment; filename="OpenPGP_signature"

-----BEGIN PGP SIGNATURE-----

wsB5BAABCAAjFiEEhRJncuj2BJSl0Jf3sN6d1ii/Ey8FAmCnb50FAwAAAAAACgkQsN6d1ii/Ey/M
Rgf9FU/1lDosBznSOVQSsyutBCFPlyCPWdYArEyd4BB79HjuOemRh/id+MEVZUu/O3cUKZGQTqOC
dqXdLbetu3WdkN0KMVYTydHuHetAgB/XBDUwrZ50vW1XQwFp09q9KVitR2daCGuQHru7OyeVSxmT
trI+mWNl5NVGPCHbiy+KbMkS50K2Ut0niSwNDeZDa0Qz+XjdJBp8QtfvYr0mjjiO9PN+bvIt+zOi
TR7/KIcznQnSktOyrhaZ6htjcs2PTUKJA64qVCx4FayWc2NfA+b1jMUYxq2j36oiPXUal4SsVl7p
59DG00Guvo3M1VSYxCuqAFKA4ju1sHit2shbrkFqLQ==
=54ey
-----END PGP SIGNATURE-----

--bwjv468KiQEgKbbtHpv4qf4yT74eoMjHI--


From xen-devel-bounces@lists.xenproject.org Fri May 21 09:10:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 09:10:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131209.245349 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk1Ai-00060v-OG; Fri, 21 May 2021 09:10:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131209.245349; Fri, 21 May 2021 09:10:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk1Ai-00060o-Ks; Fri, 21 May 2021 09:10:20 +0000
Received: by outflank-mailman (input) for mailman id 131209;
 Fri, 21 May 2021 09:10:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GxT2=KQ=ffwll.ch=daniel.vetter@srs-us1.protection.inumbo.net>)
 id 1lk1Ag-00060g-HI
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 09:10:18 +0000
Received: from mail-wm1-x32f.google.com (unknown [2a00:1450:4864:20::32f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2f1b1392-567d-4988-892b-7164b613c034;
 Fri, 21 May 2021 09:10:16 +0000 (UTC)
Received: by mail-wm1-x32f.google.com with SMTP id
 n17-20020a7bc5d10000b0290169edfadac9so6976584wmk.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 May 2021 02:10:16 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id y2sm13589457wmq.45.2021.05.21.02.10.14
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 21 May 2021 02:10:14 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2f1b1392-567d-4988-892b-7164b613c034
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=1cn49LnC9MKqFexYQ7Z0jLWqvfPvwRgzNVyBE6xCMzo=;
        b=is09Q9qxPVyW0NgJaiKqPOqAgDwUvOEAav7i+ZL2/Ccp4gvX0OeKSQfBkc348utKqf
         cC2qOlnY1+LC6R0B9IR0gNU/DFLJLLH4+dwh9lrZyfQQJQotuzewpZvfyI8GYOI7PGhl
         czy0z5y5GEqCFxZ4qScRCiwGNYe2HLNIwMtdQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=1cn49LnC9MKqFexYQ7Z0jLWqvfPvwRgzNVyBE6xCMzo=;
        b=pzdBTndyMo+cd7EvuIvBW2gXvg2T+HkiF23N7qAaV9DIXk3gxDg22G0Re7XFzoWzDB
         BukpfIcAo2b3xPnAjwtRanQnt6RtwrQo9AimXEIlNKPHFsp6RzNQEAGNP8d+/5itFvcC
         QZSdwFRyrwI0eUdFrNc5lKLOKFSuf2xHeAzDuv7t6qsqcCP/wRZpwbZV2lhPwRojlzOD
         3S/fISiLboQcxohuqfaSvbk0eFhw0iUBvHP+mVWGt51ZqoZ8PZwg+G+6GwWicC8FDYv3
         YrKYjIJMIjNOao3FwgkS6h2b+uncGYMNcrtMrypCaFVLX2PWT3ncbRauLdqh9TIIYQNx
         Qt/Q==
X-Gm-Message-State: AOAM533PFRTROvJnmrlPFCBwHPB+jQUerUysLn1Ia48IU/4d39LBPjJN
	0aPJQNQ8XPBRyxrEaEse+mXl/A==
X-Google-Smtp-Source: ABdhPJyT+6ranHRVQTb55siylh+EXYDlHb4SOwMXRTurfI70FKwfrCVWQUisCBlN9GVJKBZtwLJJ0A==
X-Received: by 2002:a05:600c:3510:: with SMTP id h16mr7502448wmq.38.1621588215236;
        Fri, 21 May 2021 02:10:15 -0700 (PDT)
From: Daniel Vetter <daniel.vetter@ffwll.ch>
To: DRI Development <dri-devel@lists.freedesktop.org>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
	Daniel Vetter <daniel.vetter@ffwll.ch>,
	Daniel Vetter <daniel.vetter@intel.com>,
	Joel Stanley <joel@jms.id.au>,
	Andrew Jeffery <andrew@aj.id.au>,
	=?UTF-8?q?Noralf=20Tr=C3=B8nnes?= <noralf@tronnes.org>,
	Linus Walleij <linus.walleij@linaro.org>,
	Emma Anholt <emma@anholt.net>,
	David Lechner <david@lechnology.com>,
	Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Maxime Ripard <mripard@kernel.org>,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Sam Ravnborg <sam@ravnborg.org>,
	Alex Deucher <alexander.deucher@amd.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	linux-aspeed@lists.ozlabs.org,
	linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org
Subject: [PATCH 11/11] drm/tiny: drm_gem_simple_display_pipe_prepare_fb is the default
Date: Fri, 21 May 2021 11:09:59 +0200
Message-Id: <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
X-Mailer: git-send-email 2.31.0
In-Reply-To: <20210521090959.1663703-1-daniel.vetter@ffwll.ch>
References: <20210521090959.1663703-1-daniel.vetter@ffwll.ch>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

Goes through all the drivers and deletes the default hook since it's
the default now.

Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Cc: Joel Stanley <joel@jms.id.au>
Cc: Andrew Jeffery <andrew@aj.id.au>
Cc: "Noralf Trønnes" <noralf@tronnes.org>
Cc: Linus Walleij <linus.walleij@linaro.org>
Cc: Emma Anholt <emma@anholt.net>
Cc: David Lechner <david@lechnology.com>
Cc: Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>
Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: Maxime Ripard <mripard@kernel.org>
Cc: Thomas Zimmermann <tzimmermann@suse.de>
Cc: Sam Ravnborg <sam@ravnborg.org>
Cc: Alex Deucher <alexander.deucher@amd.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: linux-aspeed@lists.ozlabs.org
Cc: linux-arm-kernel@lists.infradead.org
Cc: xen-devel@lists.xenproject.org
---
 drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c | 1 -
 drivers/gpu/drm/gud/gud_drv.c            | 1 -
 drivers/gpu/drm/mcde/mcde_display.c      | 1 -
 drivers/gpu/drm/pl111/pl111_display.c    | 1 -
 drivers/gpu/drm/tiny/hx8357d.c           | 1 -
 drivers/gpu/drm/tiny/ili9225.c           | 1 -
 drivers/gpu/drm/tiny/ili9341.c           | 1 -
 drivers/gpu/drm/tiny/ili9486.c           | 1 -
 drivers/gpu/drm/tiny/mi0283qt.c          | 1 -
 drivers/gpu/drm/tiny/repaper.c           | 1 -
 drivers/gpu/drm/tiny/st7586.c            | 1 -
 drivers/gpu/drm/tiny/st7735r.c           | 1 -
 drivers/gpu/drm/tve200/tve200_display.c  | 1 -
 drivers/gpu/drm/xen/xen_drm_front_kms.c  | 1 -
 14 files changed, 14 deletions(-)

diff --git a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
index 098f96d4d50d..827e62c1daba 100644
--- a/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
+++ b/drivers/gpu/drm/aspeed/aspeed_gfx_crtc.c
@@ -220,7 +220,6 @@ static const struct drm_simple_display_pipe_funcs aspeed_gfx_funcs = {
 	.enable		= aspeed_gfx_pipe_enable,
 	.disable	= aspeed_gfx_pipe_disable,
 	.update		= aspeed_gfx_pipe_update,
-	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank	= aspeed_gfx_enable_vblank,
 	.disable_vblank	= aspeed_gfx_disable_vblank,
 };
diff --git a/drivers/gpu/drm/gud/gud_drv.c b/drivers/gpu/drm/gud/gud_drv.c
index e8b672dc9832..1925df9c0fb7 100644
--- a/drivers/gpu/drm/gud/gud_drv.c
+++ b/drivers/gpu/drm/gud/gud_drv.c
@@ -364,7 +364,6 @@ static void gud_debugfs_init(struct drm_minor *minor)
 static const struct drm_simple_display_pipe_funcs gud_pipe_funcs = {
 	.check      = gud_pipe_check,
 	.update	    = gud_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_mode_config_funcs gud_mode_config_funcs = {
diff --git a/drivers/gpu/drm/mcde/mcde_display.c b/drivers/gpu/drm/mcde/mcde_display.c
index 4ddc55d58f38..ce12a36e2db4 100644
--- a/drivers/gpu/drm/mcde/mcde_display.c
+++ b/drivers/gpu/drm/mcde/mcde_display.c
@@ -1479,7 +1479,6 @@ static struct drm_simple_display_pipe_funcs mcde_display_funcs = {
 	.update = mcde_display_update,
 	.enable_vblank = mcde_display_enable_vblank,
 	.disable_vblank = mcde_display_disable_vblank,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 int mcde_display_init(struct drm_device *drm)
diff --git a/drivers/gpu/drm/pl111/pl111_display.c b/drivers/gpu/drm/pl111/pl111_display.c
index 6fd7f13f1aca..b5a8859739a2 100644
--- a/drivers/gpu/drm/pl111/pl111_display.c
+++ b/drivers/gpu/drm/pl111/pl111_display.c
@@ -440,7 +440,6 @@ static struct drm_simple_display_pipe_funcs pl111_display_funcs = {
 	.enable = pl111_display_enable,
 	.disable = pl111_display_disable,
 	.update = pl111_display_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static int pl111_clk_div_choose_div(struct clk_hw *hw, unsigned long rate,
diff --git a/drivers/gpu/drm/tiny/hx8357d.c b/drivers/gpu/drm/tiny/hx8357d.c
index da5df93450de..9b33c05732aa 100644
--- a/drivers/gpu/drm/tiny/hx8357d.c
+++ b/drivers/gpu/drm/tiny/hx8357d.c
@@ -184,7 +184,6 @@ static const struct drm_simple_display_pipe_funcs hx8357d_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode yx350hv15_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9225.c b/drivers/gpu/drm/tiny/ili9225.c
index 69265d8a3beb..976d3209f164 100644
--- a/drivers/gpu/drm/tiny/ili9225.c
+++ b/drivers/gpu/drm/tiny/ili9225.c
@@ -328,7 +328,6 @@ static const struct drm_simple_display_pipe_funcs ili9225_pipe_funcs = {
 	.enable		= ili9225_pipe_enable,
 	.disable	= ili9225_pipe_disable,
 	.update		= ili9225_pipe_update,
-	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode ili9225_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9341.c b/drivers/gpu/drm/tiny/ili9341.c
index ad9ce7b4f76f..37e0c33399c8 100644
--- a/drivers/gpu/drm/tiny/ili9341.c
+++ b/drivers/gpu/drm/tiny/ili9341.c
@@ -140,7 +140,6 @@ static const struct drm_simple_display_pipe_funcs ili9341_pipe_funcs = {
 	.enable = yx240qv29_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode yx240qv29_mode = {
diff --git a/drivers/gpu/drm/tiny/ili9486.c b/drivers/gpu/drm/tiny/ili9486.c
index 75aa1476c66c..e9a63f4b2993 100644
--- a/drivers/gpu/drm/tiny/ili9486.c
+++ b/drivers/gpu/drm/tiny/ili9486.c
@@ -153,7 +153,6 @@ static const struct drm_simple_display_pipe_funcs waveshare_pipe_funcs = {
 	.enable = waveshare_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode waveshare_mode = {
diff --git a/drivers/gpu/drm/tiny/mi0283qt.c b/drivers/gpu/drm/tiny/mi0283qt.c
index 82fd1ad3413f..023de49e7a8e 100644
--- a/drivers/gpu/drm/tiny/mi0283qt.c
+++ b/drivers/gpu/drm/tiny/mi0283qt.c
@@ -144,7 +144,6 @@ static const struct drm_simple_display_pipe_funcs mi0283qt_pipe_funcs = {
 	.enable = mi0283qt_enable,
 	.disable = mipi_dbi_pipe_disable,
 	.update = mipi_dbi_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode mi0283qt_mode = {
diff --git a/drivers/gpu/drm/tiny/repaper.c b/drivers/gpu/drm/tiny/repaper.c
index 2cee07a2e00b..007d9d59f01c 100644
--- a/drivers/gpu/drm/tiny/repaper.c
+++ b/drivers/gpu/drm/tiny/repaper.c
@@ -861,7 +861,6 @@ static const struct drm_simple_display_pipe_funcs repaper_pipe_funcs = {
 	.enable = repaper_pipe_enable,
 	.disable = repaper_pipe_disable,
 	.update = repaper_pipe_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static int repaper_connector_get_modes(struct drm_connector *connector)
diff --git a/drivers/gpu/drm/tiny/st7586.c b/drivers/gpu/drm/tiny/st7586.c
index 05db980cc047..1be55bed609a 100644
--- a/drivers/gpu/drm/tiny/st7586.c
+++ b/drivers/gpu/drm/tiny/st7586.c
@@ -268,7 +268,6 @@ static const struct drm_simple_display_pipe_funcs st7586_pipe_funcs = {
 	.enable		= st7586_pipe_enable,
 	.disable	= st7586_pipe_disable,
 	.update		= st7586_pipe_update,
-	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct drm_display_mode st7586_mode = {
diff --git a/drivers/gpu/drm/tiny/st7735r.c b/drivers/gpu/drm/tiny/st7735r.c
index ec9dc817a2cc..122320db5d38 100644
--- a/drivers/gpu/drm/tiny/st7735r.c
+++ b/drivers/gpu/drm/tiny/st7735r.c
@@ -136,7 +136,6 @@ static const struct drm_simple_display_pipe_funcs st7735r_pipe_funcs = {
 	.enable		= st7735r_pipe_enable,
 	.disable	= mipi_dbi_pipe_disable,
 	.update		= mipi_dbi_pipe_update,
-	.prepare_fb	= drm_gem_simple_display_pipe_prepare_fb,
 };
 
 static const struct st7735r_cfg jd_t18003_t01_cfg = {
diff --git a/drivers/gpu/drm/tve200/tve200_display.c b/drivers/gpu/drm/tve200/tve200_display.c
index 50e1fb71869f..17b8c8dd169d 100644
--- a/drivers/gpu/drm/tve200/tve200_display.c
+++ b/drivers/gpu/drm/tve200/tve200_display.c
@@ -316,7 +316,6 @@ static const struct drm_simple_display_pipe_funcs tve200_display_funcs = {
 	.enable = tve200_display_enable,
 	.disable = tve200_display_disable,
 	.update = tve200_display_update,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.enable_vblank = tve200_display_enable_vblank,
 	.disable_vblank = tve200_display_disable_vblank,
 };
diff --git a/drivers/gpu/drm/xen/xen_drm_front_kms.c b/drivers/gpu/drm/xen/xen_drm_front_kms.c
index 371202ebe900..cfda74490765 100644
--- a/drivers/gpu/drm/xen/xen_drm_front_kms.c
+++ b/drivers/gpu/drm/xen/xen_drm_front_kms.c
@@ -302,7 +302,6 @@ static const struct drm_simple_display_pipe_funcs display_funcs = {
 	.mode_valid = display_mode_valid,
 	.enable = display_enable,
 	.disable = display_disable,
-	.prepare_fb = drm_gem_simple_display_pipe_prepare_fb,
 	.check = display_check,
 	.update = display_update,
 };
-- 
2.31.0



From xen-devel-bounces@lists.xenproject.org Fri May 21 09:38:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 09:38:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131216.245360 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk1bW-0008TT-21; Fri, 21 May 2021 09:38:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131216.245360; Fri, 21 May 2021 09:38:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk1bV-0008TM-Ue; Fri, 21 May 2021 09:38:01 +0000
Received: by outflank-mailman (input) for mailman id 131216;
 Fri, 21 May 2021 09:38:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6S5r=KQ=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lk1bU-0008TE-2S
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 09:38:00 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 89c57c51-75d7-4d28-9551-cfc64209afba;
 Fri, 21 May 2021 09:37:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 89c57c51-75d7-4d28-9551-cfc64209afba
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621589878;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=V5EFvMg3jfMFvzguFgH186OthJYDl3cIR8GbLLffcHQ=;
  b=CF/MBeTeoUvpeXnzjfLDaYBpSYik4Xz5mtgKcWlDcUx8vXKyb4+eybC9
   ZQwcW2OPVTFJuSH1TeR+MHLKRH/mn64oKa0KmsC/Qf8cBwIvPGx2WVHDr
   qUwJCagllb3yoLAj8p6YuyVeJbV/Vhh9NaU4VtvWq/4N0ZLfwMHvQeA+s
   4=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: 0HkmY+Nz04qwXX8lzYq/3zOOEBcE4n8YQzDnK3bJo5tPjnKu0Kn3p5/x+z2TPO+nG4+vXA/i9O
 VPQisO5PibBX7YnlBwslovsxg268IkF2tjqTDyAtEbc5KM/mR/fS/j8+owBziwfYg0aR8N4SAK
 isKrdLJHa8z8Kew0DFBADv5DkSVyipuBiWvY5dqMagNZqEKGbBwKXjUe/UlfbQ51lGbANE0Hgf
 xOk9kXn9t5WXB2O958CwcfIVWK8mOK9piyBclL4opPcbWhpG5X9etbS4LjlemmJMfwfu0c67LG
 PWQ=
X-SBRS: 5.1
X-MesageID: 44329957
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-Data: A9a23:+fRLRqAZKNJwVRVW/wPjw5YqxClBgxIJ4kV8jS/XYbTApG8ihjQCm
 2IeCzvQM/+CN2X2LYsiO9m29xgH65aGzYU1QQY4rX1jcSlH+JHPbTi7wuYcHM8wwunrFh8PA
 xA2M4GYRCwMZiaH4EjratANlFEkvU2ybuOU5NXsZ2YhH2eIdA970Ug6w7Ng09Y26TSEK1jlV
 e3a8pW31GCNg1aYAkpMg05UgEoy1BhakGpwUm0WPZinjneH/5UmJMt3yZWKB2n5WuFp8tuSH
 I4v+l0bElTxpH/BAvv9+lryn9ZjrrT6ZWBigVIOM0Sub4QrSoXfHc/XOdJFAXq7hQllkPhzz
 vZ0hcCtTT0rHfLvhL4GUhdkCS5xaPguFL/veRBTsOSWxkzCNXDt3+9vHAc9OohwFuRfWD8Us
 6ZCcXZUM0DF3bveLLGTE4GAguw5K8bmJsUHs2xIxjDFF/c2B5vERs0m4PcEgGxq15ETQp4yY
 eJIVD5kdSjKWSRGP1kRC5gzxvX4nyLwJmgwRFW9+vNsvjm7IBZK+LriKt3OYfSRWN5Y2E2fo
 wru/W70HxUbP9y30iee/zSngeqntTP2XsceGaO18tZugUaP3SoDBRsOT1y5rPKlzEmkVLp3K
 lMW0jojq7Ao806mRcW7WAe3yFabujYMVtwWFPc1gCmP167V7gCxFmUCCDlbZ7QOr9QqTDYn0
 luImdLBBjF1trCRD3WH+d+pQSiaYHZPazVYPGldEFtDuYCLTJwPYgznTNBAKZ7pk9nPGxKv4
 CzQtykwu68cgptev0mkxmwrkw5At7CQEFRsvFSGDzr4hu9qTNT7PtT1sDA3+d4FfN7AFAjZ1
 JQRs5XGtIgz4YexeDthqQnnNJ+u/erNFDTBjVN1E5Al+lxBEFb4JtsJvlmSyKpzW/vomAMFg
 meI42u9B7cJZhNGiJObhKrrVqwXIVDIT4iNaxwtRoMmjmJNmOq7EMdGPhX4M4fFyxlErE3CE
 c7EIJzE4YgyUPs4pNZJewvt+eBynX1vrY8ibbv60w6mwdKjiI29Eu9eWGZimtsRsfPVyC2Io
 o03H5bblH1ivBjWP3C/HXg7dgtRcxDWxPne9qRqSwJ0Clo3QD1+U6eJn9vMueVNxsxoqwsBx
 VnkMmdww1vjn3zXbwKMb3FocrT0Wphj63k8OEQR0ZyAghDPva7HAH8jSqYK
IronPort-HdrOrdr: A9a23:apZfz6tG8gMNELDTEwKAWuMy7skCToMji2hC6mlwRA09TyXGra
 6TdaUguiMc1gx8ZJhBo7C90KnpewK7yXdQ2/htAV7EZnibhILIFvAZ0WKG+Vzd8kLFh4tgPM
 tbAsxD4ZjLfCdHZKXBkXqF+rQbsaG6GcmT7I+0pRodLnAJGtRdBkVCe32m+yVNNXl77PECZe
 OhD6R81l2dkSN9VLXLOpBJZZmNmzWl/6iWLyIuNloC0k2jnDmo4Ln1H1yzxREFSQ5Cxr8k7C
 zsjxH5zr/LiYD59jbsk0voq7hGktrozdVOQOaWjNIOFznqggG0IKx8Rry5uiwvqu3H0idrrD
 D1mWZkAy1P0QKUQonsyiGdnDUIkQxeqkMK8GXow0cK+qfCNXQH46Mrv/MqTvPbg3BQ9u2Unp
 g7hl5wGvJsfFr9dR/Glq/1vidR5wGJSEoZ4JouZkNkIP0jgZ9q3MEiFRBuYds99ByT0vFuLA
 A4NrCj2B8RSyLDU0zk
X-IronPort-AV: E=Sophos;i="5.82,313,1613451600"; 
   d="scan'208";a="44329957"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=doF0lzzlCpRNUGpy/rYm02zwxcb7rY2AVACd5Say7edAwbRoxVlTNiykRpI1ILal2T/tzcVmU6K6UFPpGBrh7ohrnNM9y5wKIerlzDhLLQwhw0H6r9tYqKW8fllWh+AMhPLTdsBAk/ZbWtieotdHwpQhjINBg7wCDPGUOOwgNVOjI+rx1mFwvts8vfVYceP367T+5g8vrrj0fkClxf9zgq1dS6Xo9T+NvjQVssJ+5VnpyXnVfyQwGRS27+U2nXxz7hDqKlN7nZ+5zueEMDavugoDozm9TLXASUHe3wqsUFoftUxFyHGyarceyPG0+YI9hQPgdqaNi01Es37Nknc07g==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wWwnRSBxKZqHRNCT6g9seyV9Rg0Xk/a5GNpEqkZ4XLk=;
 b=T9i184bPVQMWKv10sbA7anmfDEbEzACmq/NCPF9jIwcrm8rNSrCE5RxRcDtvK2PB22+pdMYKQ1DO+BmLf68hHgT5yweV+iY/l9N5ff9XHVASKoaFFaHdTMhSNM2tcvjFSg04soAYPyGUK10jsQr4/8rgD4Tjec9+0aQtriF96JWNmcf5hzfeNtz9QlgybRXvhVvnYz+P2rT4ptnQhKHietZxS05R1PuqVy4Hm6pWUiTRrm6ni1+NUzPyR5Jq/mY0oV3b1VHC0mGcqL/Bw24sMG2pSTlPZ5wQo/Cip+Xyzv+BMujFcYojj3U4K/RwEZE/nr/DCechDoURs2xojahTzQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=wWwnRSBxKZqHRNCT6g9seyV9Rg0Xk/a5GNpEqkZ4XLk=;
 b=Iv/OSN2u4qBaSq+FHuDkAQ9URON+0aXsmUUxigXJiwCBXEg5KpLSRbYMwIC+dCdKb/3YzGL5rz7VA5RfVJ1gxx3tcfR1pUe5/N7MLwh1CGOpyyS8Tz1DkvxSwjAL8uOwtXCa8dMY24eY3PqdJKYsycPtZmfdxR04+CQQZilheN8=
Date: Fri, 21 May 2021 11:37:47 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Juergen Gross <jgross@suse.com>
CC: Jan Beulich <jbeulich@suse.com>, <xen-devel@lists.xenproject.org>, "Ian
 Jackson" <iwj@xenproject.org>, Wei Liu <wl@xen.org>, Andrew Cooper
	<andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>,
	"Julien Grall" <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>
Subject: Re: [PATCH v2] libelf: improve PVH elfnote parsing
Message-ID: <YKd/awAn3UkZuszh@Air-de-Roger>
References: <20210518144741.44395-1-roger.pau@citrix.com>
 <c645b764-00fe-2b90-3b31-7f2bb6f07c02@suse.com>
 <YKYreLP8N16vcIVB@Air-de-Roger>
 <162f76e1-9da5-c750-2591-ea011b4b2842@suse.com>
 <d4250baa-9680-cd48-3684-2b61b955713d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d4250baa-9680-cd48-3684-2b61b955713d@suse.com>
X-ClientProxiedBy: MR2P264CA0038.FRAP264.PROD.OUTLOOK.COM (2603:10a6:500::26)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 692a0959-3367-4ab3-3cbb-08d91c3c1b5b
X-MS-TrafficTypeDiagnostic: DM5PR03MB2777:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB27779268066D624124CB35518F299@DM5PR03MB2777.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: +m6yz/Z5/0w7G/s/muRTo2fUpfFu+wsk9bcEgAzYKIHnkQ5XoHvH/McgcXAzdHSKo7KzJp5CoOv1mcPfTkEgq18XPxShKFhwLZCuJ5Ra0mXHvz3WSP+itFjO4ex9O9Fiv38Z3amQDO1iTCAG/zWVVzQ7wqdHfb9TkUC8SBNokesyt6xvIjE9JWZ2BncMqay5nMe2aI0LnKqhyWZRDIwzuqjwHiOGnPSleXb0vaJ3Se1V9yQOG5PfK2bJkOReNw1f2WTkYNKcC0TIhbsuSLoKOTFo7dG4TDN3ISvYkKAoiioiPbPx4w6mmqh/Ef+CxK4Q+u5ffrArzXyFW9Qgy26o0wpTR21Xuqb+NcYgPs86uycuD59O525J9Qs/xt0gGJirWgbHy5OYN8yYUuIoV9N7wA1P+A20RW3ALPED1C9uIydDNOjrVPsJu/pNPQ+IAkTJTnh4/0xUkHqTUPV9mQaBfXNdQksNmO3D0pdDtIxJ0KC72n+k92skBqBHMQTYykQKNacAujUlTvke+N/8fqAEy8eRrcOMNAAchexDMq5TzslxpoTQjFZFbdE9UY6PBp0uf037TVzgwGmABwADgBD8TVnOT9R1maKd/mOqS6uA7U8=
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(396003)(376002)(346002)(136003)(366004)(39860400002)(16526019)(8676002)(8936002)(54906003)(186003)(478600001)(86362001)(6496006)(33716001)(6916009)(26005)(6666004)(53546011)(9686003)(316002)(4326008)(956004)(2906002)(6486002)(83380400001)(85182001)(5660300002)(66476007)(66946007)(38100700002)(66556008);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 692a0959-3367-4ab3-3cbb-08d91c3c1b5b
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 May 2021 09:37:53.4097
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2777
X-OriginatorOrg: citrix.com

On Fri, May 21, 2021 at 07:26:21AM +0200, Juergen Gross wrote:
> On 20.05.21 11:28, Jan Beulich wrote:
> > On 20.05.2021 11:27, Roger Pau Monné wrote:
> > > On Wed, May 19, 2021 at 12:34:19PM +0200, Jan Beulich wrote:
> > > > On 18.05.2021 16:47, Roger Pau Monne wrote:
> > > > > @@ -425,8 +425,11 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
> > > > >           return -1;
> > > > >       }
> > > > > -    /* Initial guess for virt_base is 0 if it is not explicitly defined. */
> > > > > -    if ( parms->virt_base == UNSET_ADDR )
> > > > > +    /*
> > > > > +     * Initial guess for virt_base is 0 if it is not explicitly defined in the
> > > > > +     * PV case. For PVH virt_base is forced to 0 because paging is disabled.
> > > > > +     */
> > > > > +    if ( parms->virt_base == UNSET_ADDR || hvm )
> > > > >       {
> > > > >           parms->virt_base = 0;
> > > > >           elf_msg(elf, "ELF: VIRT_BASE unset, using %#" PRIx64 "\n",
> > > > 
> > > > This message is wrong now if hvm is true but parms->virt_base != UNSET_ADDR.
> > > > Best perhaps is to avoid emitting the message altogether when hvm is true.
> > > > (Since you'll be touching it anyway, perhaps a good opportunity to do
> > > > away with passing parms->virt_base to elf_msg(), and instead simply
> > > > use a literal zero.)
> > > > 
> > > > > @@ -441,8 +444,10 @@ static elf_errorstatus elf_xen_addr_calc_check(struct elf_binary *elf,
> > > > >        *
> > > > >        * If we are using the modern ELF notes interface then the default
> > > > >        * is 0.
> > > > > +     *
> > > > > +     * For PVH this is forced to 0, as it's already a legacy option for PV.
> > > > >        */
> > > > > -    if ( parms->elf_paddr_offset == UNSET_ADDR )
> > > > > +    if ( parms->elf_paddr_offset == UNSET_ADDR || hvm )
> > > > >       {
> > > > >           if ( parms->elf_note_start )
> > > > 
> > > > Don't you want "|| hvm" here as well, or alternatively suppress the
> > > > fallback to the __xen_guest section in the PVH case (near the end of
> > > > elf_xen_parse())?
> > > 
> > > The legacy __xen_guest section doesn't support PHYS32_ENTRY, so yes,
> > > that part could be completely skipped when called from an HVM
> > > container.
> > > 
> > > I think I will fix that in another patch though if you are OK, as
> > > it's not strictly related to the calculation fixes done here.
> > 
> > That's fine; it wants to be a prereq to the one here then, though,
> > I think.
> 
> Would it be possible to add some comment to xen/include/public/elfnote.h
> Indicating which elfnotes are evaluated for which guest types,

For PVH, after this patch the only mandatory note should be
PHYS32_ENTRY. BSD_SYMTAB is also parsed if found in that case.

> including
> a hint which elfnotes _have_ been evaluated before this series?

Before this patch almost all PV notes are parsed, and you have to
manage to set suitable values so that:

    if ( (parms->virt_kstart > parms->virt_kend) ||
         (parms->virt_entry < parms->virt_kstart) ||
         (parms->virt_entry > parms->virt_kend) ||
         (parms->virt_base > parms->virt_kstart) )
    {
        elf_err(elf, "ERROR: ELF start or entries are out of bounds\n");
        return -1;
    }

doesn't trigger. That can be done by setting virt_entry or the native
ELF entry point to a value that satisfies the condition, or by setting
VIRT_BASE to an offset that matches the offset added to the native
entry point in the ELF header.

I can try to write this up, although I think it can get messy. I'm
not sure what the best approach is here regarding recommended settings
to satisfy old Xen versions.

Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 21 10:43:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 10:43:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131223.245370 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk2cb-0006aZ-Rj; Fri, 21 May 2021 10:43:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131223.245370; Fri, 21 May 2021 10:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk2cb-0006aS-Oi; Fri, 21 May 2021 10:43:13 +0000
Received: by outflank-mailman (input) for mailman id 131223;
 Fri, 21 May 2021 10:43:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8sL1=KQ=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1lk2ca-0006aM-Dz
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 10:43:12 +0000
Received: from wnew3-smtp.messagingengine.com (unknown [64.147.123.17])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 23f00bdc-7a37-44e7-9975-26e58d976214;
 Fri, 21 May 2021 10:43:11 +0000 (UTC)
Received: from compute3.internal (compute3.nyi.internal [10.202.2.43])
 by mailnew.west.internal (Postfix) with ESMTP id CFA6C127F;
 Fri, 21 May 2021 06:43:08 -0400 (EDT)
Received: from mailfrontend2 ([10.202.2.163])
 by compute3.internal (MEProxy); Fri, 21 May 2021 06:43:09 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Fri, 21 May 2021 06:43:05 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 23f00bdc-7a37-44e7-9975-26e58d976214
Date: Fri, 21 May 2021 12:43:00 +0200
From: Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
To: Juergen Gross <jgross@suse.com>
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org,
	linux-block@vger.kernel.org, netdev@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	Boris Ostrovsky <boris.ostrovsky@oracle.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	Roger Pau Monné <roger.pau@citrix.com>,
	Jens Axboe <axboe@kernel.dk>,
	"David S. Miller" <davem@davemloft.net>,
	Jakub Kicinski <kuba@kernel.org>,
	Greg Kroah-Hartman <gregkh@linuxfoundation.org>,
	Jiri Slaby <jirislaby@kernel.org>
Subject: Re: [PATCH 0/8] xen: harden frontends against malicious backends
Message-ID: <YKeOtbXkFz7JTMn0@mail-itl>
References: <20210513100302.22027-1-jgross@suse.com>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="bxyuzBoNLlcC9wan"
Content-Disposition: inline
In-Reply-To: <20210513100302.22027-1-jgross@suse.com>


--bxyuzBoNLlcC9wan
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable

On Thu, May 13, 2021 at 12:02:54PM +0200, Juergen Gross wrote:
> Xen backends of para-virtualized devices can live in dom0 kernel, dom0
> user land, or in a driver domain. This means that a backend might
> reside in a less trusted environment than the Xen core components, so
> a backend should not be able to do harm to a Xen guest (it can still
> mess up I/O data, but it shouldn't be able to e.g. crash a guest by
> other means or cause a privilege escalation in the guest).
>
> Unfortunately many frontends in the Linux kernel are fully trusting
> their respective backends. This series is starting to fix the most
> important frontends: console, disk and network.
>
> It was discussed to handle this as a security problem, but the topic
> was discussed in public before, so it isn't a real secret.

Is it based on the patches we ship in Qubes[1], which I also sent here
some years ago[2]? I see a lot of similarities. If not, you may want
to compare them.

[1] https://github.com/QubesOS/qubes-linux-kernel/
[2] https://lists.xenproject.org/archives/html/xen-devel/2018-04/msg02336.html


> Juergen Gross (8):
>   xen: sync include/xen/interface/io/ring.h with Xen's newest version
>   xen/blkfront: read response from backend only once
>   xen/blkfront: don't take local copy of a request from the ring page
>   xen/blkfront: don't trust the backend response data blindly
>   xen/netfront: read response from backend only once
>   xen/netfront: don't read data from request on the ring page
>   xen/netfront: don't trust the backend response data blindly
>   xen/hvc: replace BUG_ON() with negative return value
>
>  drivers/block/xen-blkfront.c    | 118 +++++++++-----
>  drivers/net/xen-netfront.c      | 184 ++++++++++++++-------
>  drivers/tty/hvc/hvc_xen.c       |  15 +-
>  include/xen/interface/io/ring.h | 278 ++++++++++++++++++--------------
>  4 files changed, 369 insertions(+), 226 deletions(-)
>
> --
> 2.26.2
>
>

--
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--bxyuzBoNLlcC9wan
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCnjrUACgkQ24/THMrX
1yyaAgf/V30jyv6uv6+F7OW2zOfe72gfIS/EQrm6baOF7VkhumGU3/xVm5uGtf0c
MRInt992m2TocU3i807K9juNN42uowicJQMofvWIo0DmU+SFLO7skFDIy1doVZwf
V57we8V1xtULjiW9LFB5gtjyypfD9BnuP+UJczQ1GkvVW0tbrnt9yOnt/RkkbPTo
8Iv+fhPOv/nfH07j2IFmfKTVQXLgpIXEDQjRocpMU9aqx4QxXjLwrV8X5Kl/dDHU
YPTiLLy/lORMJ4YzapwnQSSrIt8ta/i5ZD8RzICPFDqDA9UoHwTXt8AbeBvM7wsm
ts5+9qugZ3Ea/gKhq2VN7t6OKAHw0Q==
=YEbr
-----END PGP SIGNATURE-----

--bxyuzBoNLlcC9wan--


From xen-devel-bounces@lists.xenproject.org Fri May 21 11:40:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 11:40:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131234.245382 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk3Vi-0003tD-CO; Fri, 21 May 2021 11:40:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131234.245382; Fri, 21 May 2021 11:40:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk3Vi-0003t6-9M; Fri, 21 May 2021 11:40:10 +0000
Received: by outflank-mailman (input) for mailman id 131234;
 Fri, 21 May 2021 11:40:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk3Vh-0003sw-2I; Fri, 21 May 2021 11:40:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk3Vg-00066L-Pd; Fri, 21 May 2021 11:40:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk3Vg-0000kg-EK; Fri, 21 May 2021 11:40:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lk3Vg-0003fC-Dm; Fri, 21 May 2021 11:40:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162108-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162108: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=972e848b53970d12cb2ca64687ef8ff797fb6236
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 11:40:08 +0000

flight 162108 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162108/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                972e848b53970d12cb2ca64687ef8ff797fb6236
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  274 days
Failing since        152659  2020-08-21 14:07:39 Z  272 days  501 attempts
Testing same since   162108  2021-05-21 01:53:46 Z    0 days    1 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 158487 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 21 13:13:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:13:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131250.245396 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk4xF-0003nv-J3; Fri, 21 May 2021 13:12:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131250.245396; Fri, 21 May 2021 13:12:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk4xF-0003no-Fy; Fri, 21 May 2021 13:12:41 +0000
Received: by outflank-mailman (input) for mailman id 131250;
 Fri, 21 May 2021 13:12:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=IAnY=KQ=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1lk4xD-0003ni-BK
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 13:12:39 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8b8b7158-31d1-461a-86fb-acd4d55e7e49;
 Fri, 21 May 2021 13:12:37 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14LCxBaY101291;
 Fri, 21 May 2021 13:12:35 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by userp2130.oracle.com with ESMTP id 38j5qrfn3w-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 21 May 2021 13:12:35 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14LD0ZLo052766;
 Fri, 21 May 2021 13:12:35 GMT
Received: from nam10-bn7-obe.outbound.protection.outlook.com
 (mail-bn7nam10lp2101.outbound.protection.outlook.com [104.47.70.101])
 by userp3030.oracle.com with ESMTP id 38megnhqnj-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Fri, 21 May 2021 13:12:35 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com (2603:10b6:208:321::10)
 by BLAPR10MB4978.namprd10.prod.outlook.com (2603:10b6:208:30e::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.26; Fri, 21 May
 2021 13:12:33 +0000
Received: from BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf]) by BLAPR10MB5009.namprd10.prod.outlook.com
 ([fe80::78a3:67d:a8ca:93cf%7]) with mapi id 15.20.4150.023; Fri, 21 May 2021
 13:12:33 +0000
Received: from [10.74.102.99] (160.34.88.99) by
 SN7PR04CA0198.namprd04.prod.outlook.com (2603:10b6:806:126::23) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.26 via Frontend
 Transport; Fri, 21 May 2021 13:12:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8b8b7158-31d1-461a-86fb-acd4d55e7e49
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=pd9iADF2AoIYmkR0KCUndINvq5YCd1FizfhYJYGi53k=;
 b=GArF92PezCxGYZXvmHgAszK+L9fxPBBWwHGwCxu6w4odBH0lWNSZcyQfWKihKL+J6Arl
 I4cXJtxkjht3KZu1ZKuGKA7phNwi+J+QADNKRBS/eft2gL/iIizeC0ZtcB3jfVE89WEC
 vXNvHAKWlQ1kIfHhWuUMXyPerrH3nQA4Yq9iEr0Rv2UpJ37m+tENt58NXn6zzT08G9kb
 jBEZkPRMzs7WFVtQggW1T7m9nq8jG0sy7gLNgrAj+tJRzI8fqfoM+fxEMTiLVOr6MoF2
 q+zyJiNaXxv0Oo+34ao/ArmJ2fO3f9+DuRj7lhKZRTZrHWDPBRjvg3eLEyyQNTg+MiTj qg== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=AunWRNxw+GgLTv7G2G8CP+iuTlwI1B929+4rzwnEQOssM1eyTNIAItZlHylO+yahhh+0ZNf31ke9OPp7Yo3biVVlvnqkegQZNjwpi7r6WXi8pFpp3dFADt03/mdN8AFlve6V7cpW2kL/6ZtD61uxBRbptyKjC3cat2VtgnQ4b2LirPwZDIMHWeHu9FMyRByjaCJAVbVo1WFBvCxMdKmLGmLqdrbENYcyIP8p9aHuMXDsG+ykc1Mo/UPEdefHPwlAUK4DO00k486M8VVi3ghMMNkLEGtq161DXOtP+g1U5GW9hJpUYOD0KhfvU5hlxpmSNXiZz/1ppPlcHefEKm7jgg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pd9iADF2AoIYmkR0KCUndINvq5YCd1FizfhYJYGi53k=;
 b=Khb+sFqwJ2BBvvOR0AOlDjP/xr3bYBf1XQQ0oOuhqU32vpb5zbEWLNl2dK4NRyEUfuV+zvpg7FDEwt+/9ZZEIk8ND0gR68QBv6sblzQAgQn0B6q6cSc2AHWmS2EA3t8r9rmbA6Ff3H7wu/QfPqlkgw+bDBSBKgnrafDk1O4D+E/NW1b2x8IBVu0CzNGCt6KIP3xcMJBm9nvEILGpuTlIsoyp8SOtYB2rXegdrziP88CXuG7PHbs9uKwqkndlZ3OBsiE5Zx8yDpMlWBDw3mZA3EbCXnxKK3uAOJlt605F2esBdWrNEMn4wMD22u7J9q9/IrCFTaMhBomwVmFH9Y6NFQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=pd9iADF2AoIYmkR0KCUndINvq5YCd1FizfhYJYGi53k=;
 b=KVdJ0S3R47LKgnm88A6TFZvJkoZOmAwYHuealcz06RM30LYjbeMEHQphrFYPeZBon7neEVGCOpt/7CJV+duBsVgHm28BRt6yHlvKwyq6VABaqaX2aXl3R8V+ScZQ8xvbPaT1snPX6SNgtBYcPOuf99sm6rXRslfd4NQywMatdY4=
Authentication-Results: lists.xenproject.org; dkim=none (message not signed)
 header.d=none;lists.xenproject.org; dmarc=none action=none
 header.from=oracle.com;
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
To: Juergen Gross <jgross@suse.com>, Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
 <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
 <89c46d1a-9474-0f17-3fda-4809a14adb45@suse.com>
 <2d019c04-415b-293b-052b-26b1ea3be189@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <0103d46f-4feb-e49d-f738-a2bf3d38c216@oracle.com>
Date: Fri, 21 May 2021 09:12:29 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <2d019c04-415b-293b-052b-26b1ea3be189@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.88.99]
X-ClientProxiedBy: SN7PR04CA0198.namprd04.prod.outlook.com
 (2603:10b6:806:126::23) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 75e14976-763c-4556-c0e9-08d91c5a1884
X-MS-TrafficTypeDiagnostic: BLAPR10MB4978:
X-Microsoft-Antispam-PRVS: 
	<BLAPR10MB4978436925EE7AED8C0539DF8A299@BLAPR10MB4978.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 75e14976-763c-4556-c0e9-08d91c5a1884
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 May 2021 13:12:33.5037
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 0nM9xmyn9dLvV2Ou9ZSP7SFwIK7OkwTt7OyYBsl5UvYLLNePoZ71UscViDQMvb7qo5MF+zdq7pa2kdyzqGWkiFOjFEg1H/Z8T4awnmkDt7o=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BLAPR10MB4978
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9990 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxlogscore=999 phishscore=0
 adultscore=0 malwarescore=0 bulkscore=0 mlxscore=0 spamscore=0
 suspectscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105210077
X-Proofpoint-GUID: FGtvg42JK3_Av3hOdqYkxJduk5Ai-vA6
X-Proofpoint-ORIG-GUID: FGtvg42JK3_Av3hOdqYkxJduk5Ai-vA6
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 clxscore=1015 impostorscore=0
 mlxscore=0 lowpriorityscore=0 malwarescore=0 mlxlogscore=999
 suspectscore=0 adultscore=0 priorityscore=1501 spamscore=0 phishscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105210077


On 5/21/21 3:45 AM, Juergen Gross wrote:
> On 21.05.21 09:26, Jan Beulich wrote:
>> On 21.05.2021 09:18, Juergen Gross wrote:
>>> On 20.05.21 14:08, Jan Beulich wrote:
>>>> On 20.05.2021 13:57, Juergen Gross wrote:
>>>>> On 20.05.21 13:42, Jan Beulich wrote:
>>>>>> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
>>>>>> For this to work when NX is not available, x86_configure_nx() needs 
> to
>>>>>> be called first.
>>>>>>
>>>>>> Reported-by: Olaf Hering <olaf@aepfle.de>
>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>
>>>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>>>
>>>> Thanks. I guess I forgot
>>>>
>>>> Cc: stable@vger.kernel.org
>>>>
>>>> If you agree, can you please add this before pushing to Linus?
>>>
>>> Uh, just had a look why x86_configure_nx() was called after
>>> xen_setup_gdt().
>>>
>>> Upstream your patch will be fine, but before kernel 5.9 it will
>>> break running as 32-bit PV guest (see commit 36104cb9012a82e7).
>>
>> Oh, indeed. That commit then actually introduced the issue here,
>> and hence a Fixes: tag may be warranted.
>
> Added it already. :-)
>
> And I've limited the backport to happen not for 5.8 and older, of
> course.


Did something change recently that made this a problem? That commit has been there for 3 years.


Didn't Olaf report this to be a problem only on SLES kernels?



-boris



From xen-devel-bounces@lists.xenproject.org Fri May 21 13:15:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:15:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131256.245407 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk50C-0004TK-48; Fri, 21 May 2021 13:15:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131256.245407; Fri, 21 May 2021 13:15:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk50C-0004TD-08; Fri, 21 May 2021 13:15:44 +0000
Received: by outflank-mailman (input) for mailman id 131256;
 Fri, 21 May 2021 13:15:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qZ6I=KQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lk50A-0004T7-LW
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 13:15:42 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4a2a318c-baf8-4cc2-882c-334c911da0da;
 Fri, 21 May 2021 13:15:41 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 1657DACB1;
 Fri, 21 May 2021 13:15:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a2a318c-baf8-4cc2-882c-334c911da0da
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621602941; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MWFhObEmM365zZvigirQaG6yDJLonTUk6shsrryPPLo=;
	b=MdCAKm9KwqBxjvolZ/p8gvCZ2dV+mIVqJjgO0udEYEWk1hlrMJ8D/x8XnzsDxE84p/NLjJ
	JwwohHKBH52lSNr4Afh/5TEBLr3Z4t/TwueLKL1eTGZ1ykPxpI5xUzZs3L7jDsdDTvoZAl
	7Z15CtM9EwnU3BfI1cicDbbDdrhp3J8=
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Juergen Gross <jgross@suse.com>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
 <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
 <89c46d1a-9474-0f17-3fda-4809a14adb45@suse.com>
 <2d019c04-415b-293b-052b-26b1ea3be189@suse.com>
 <0103d46f-4feb-e49d-f738-a2bf3d38c216@oracle.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c2733c58-4514-6df0-3f0b-0f8b65132017@suse.com>
Date: Fri, 21 May 2021 15:15:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <0103d46f-4feb-e49d-f738-a2bf3d38c216@oracle.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 21.05.2021 15:12, Boris Ostrovsky wrote:
> 
> On 5/21/21 3:45 AM, Juergen Gross wrote:
>> On 21.05.21 09:26, Jan Beulich wrote:
>>> On 21.05.2021 09:18, Juergen Gross wrote:
>>>> On 20.05.21 14:08, Jan Beulich wrote:
>>>>> On 20.05.2021 13:57, Juergen Gross wrote:
>>>>>> On 20.05.21 13:42, Jan Beulich wrote:
>>>>>>> xen_setup_gdt(), via xen_load_gdt_boot(), wants to adjust page tables.
>>>>>>> For this to work when NX is not available, x86_configure_nx() needs 
>> to
>>>>>>> be called first.
>>>>>>>
>>>>>>> Reported-by: Olaf Hering <olaf@aepfle.de>
>>>>>>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
>>>>>>
>>>>>> Reviewed-by: Juergen Gross <jgross@suse.com>
>>>>>
>>>>> Thanks. I guess I forgot
>>>>>
>>>>> Cc: stable@vger.kernel.org
>>>>>
>>>>> If you agree, can you please add this before pushing to Linus?
>>>>
>>>> Uh, just had a look why x86_configure_nx() was called after
>>>> xen_setup_gdt().
>>>>
>>>> Upstream your patch will be fine, but before kernel 5.9 it will
>>>> break running as 32-bit PV guest (see commit 36104cb9012a82e7).
>>>
>>> Oh, indeed. That commit then actually introduced the issue here,
>>> and hence a Fixes: tag may be warranted.
>>
>> Added it already. :-)
>>
>> And I've limited the backport to happen not for 5.8 and older, of
>> course.
> 
> 
> Did something change recently that made this a problem? That commit has been there for 3 years.

He happened to try on a system where NX was turned off in the BIOS. That's
not a setting you would usually find in use.


> Didn't Olaf report this to be a problem only on SLES kernels?

Which I assume have backports of the problematic change.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 21 13:17:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:17:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131262.245417 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk51m-00055R-FN; Fri, 21 May 2021 13:17:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131262.245417; Fri, 21 May 2021 13:17:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk51m-00055K-CQ; Fri, 21 May 2021 13:17:22 +0000
Received: by outflank-mailman (input) for mailman id 131262;
 Fri, 21 May 2021 13:17:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08+4=KQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1lk51l-00055E-Iw
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 13:17:21 +0000
Received: from mail-lf1-x135.google.com (unknown [2a00:1450:4864:20::135])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c679fe6-ac74-406e-96d4-05b03dc4e3dd;
 Fri, 21 May 2021 13:17:20 +0000 (UTC)
Received: by mail-lf1-x135.google.com with SMTP id c10so8638647lfm.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 May 2021 06:17:20 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id m23sm619959lfb.279.2021.05.21.06.17.18
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 21 May 2021 06:17:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3c679fe6-ac74-406e-96d4-05b03dc4e3dd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id;
        bh=OpIENZDAD+5nPWeUwsHpP2y2zUT9yqD/5d+tccVsuWo=;
        b=Jq2YzdDXvlWKafByJOrgVmcms5x7W3A/Lky66lo/aGaggjeVs+bPiDuo2rFunCgQK8
         V5QjAo+nyy5nHKMp6NjEl0tmcNN8HKeMhkOLjdDz6/hxhIu2kmjY/Z0TV4BoXEcD4JtZ
         S8DquBGUUtMF0eoKEmwRgY344iirpWe1Db5/m7caicrxuXy1Os0iPQuO32InxWXICCnV
         uTB8AQtcWUIS674FCMKXtvL5rfkgP2S2OIJRsejJVM1Tjgo/BsB1Y1FhI4QUIT+2AnL+
         VDyh48d9aDqxnDAbkRj0X2Jkd6mAZCZVNz41CVzT2Koqin6fHoCDqMKm/iqVZMFJfi+/
         xNHw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=OpIENZDAD+5nPWeUwsHpP2y2zUT9yqD/5d+tccVsuWo=;
        b=lnkXaU+N25XYyjG0QHQl3tNJ2KqW1XLjZuj0b7knCYOBSr67Y9N0L0683a77emKFpd
         fMY2OBm5YJljqQcV1eiMbwrkrxPuSRl6/0Wy4AV113r4AnSBXik9gmfwjuLf64NVzXyR
         OWU/oKJYySJu+y4PdqvO7Cmp0aHcw+n0gRWgJYSOdLUFYSmGBmwY+Rik3SnkS7hJaiph
         CjFIFpYjqGEiWlbN+u4FZEvqYfJgGe9vg5AR9U2dTnfpHeNL/3IqyO5vC8+7jyg38++y
         bQQoU7imIUnu2kfAinfsrqxYM8S4YI2sgqzm+OncQuTdOBPa6Z/YbrVaWcQb8tJ2gUXh
         OqCA==
X-Gm-Message-State: AOAM530yWqoTpaZtktb89W4V6CQR8g6wH054VznN2SQEE+GNv9bngeU4
	M/ELEGdgWm7K5zGYG5kTYgg=
X-Google-Smtp-Source: ABdhPJzpuQVGrOG940ZbE1w+sWsIHAfo7A+EDv70Yfvv2O4LPdtO0BgISlK5LJ9Fr4c2yt9bAG/n0w==
X-Received: by 2002:a05:6512:3b10:: with SMTP id f16mr2182383lfv.393.1621603039478;
        Fri, 21 May 2021 06:17:19 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: olekstysh@gmail.com,
	xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	Artem Mygaiev <joculator@gmail.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH V5 0/2] Virtio support for toolstack on Arm (Was "IOREQ feature (+ virtio-mmio) on Arm")
Date: Fri, 21 May 2021 16:16:43 +0300
Message-Id: <1621603005-5799-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

The purpose of this patch series is to add the missing virtio-mmio bits to the Xen toolstack on Arm.
Virtio support for the toolstack [1] was postponed because the main target was to upstream IOREQ/DM
support on Arm first. Now that IOREQ support is in, we can resume the Virtio enabling work.
You can find the previous discussion at [2].

The patch series [3] was reworked and rebased on a recent staging branch
(972ba1d x86/shadow: streamline shadow_get_page_from_l1e()) and tested on
a Renesas Salvator-X board with an H3 ES3.0 SoC (Arm64), with an updated virtio-mmio disk
backend [4] running in the driver domain and an unmodified Linux guest running the existing
virtio-blk (frontend) driver. No issues were observed; guest domain reboot/destroy
use-cases work properly.

Any feedback/help would be highly appreciated.

[1] 
https://lore.kernel.org/xen-devel/1610488352-18494-24-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1610488352-18494-25-git-send-email-olekstysh@gmail.com/
[2]
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02403.html
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02536.html
[3] https://github.com/otyshchenko1/xen/commits/libxl_virtio
[4] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml3


Julien Grall (1):
  libxl: Introduce basic virtio-mmio support on Arm

Oleksandr Tyshchenko (1):
  libxl: Add support for Virtio disk configuration

 docs/man/xl-disk-configuration.5.pod.in   |  27 +
 tools/include/libxl.h                     |   6 +
 tools/libs/light/libxl_arm.c              | 133 ++++-
 tools/libs/light/libxl_device.c           |  38 +-
 tools/libs/light/libxl_disk.c             |  99 +++-
 tools/libs/light/libxl_types.idl          |   4 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 881 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   1 +
 tools/xl/xl_block.c                       |  11 +
 xen/include/public/arch-arm.h             |   7 +
 13 files changed, 764 insertions(+), 448 deletions(-)

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri May 21 13:17:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:17:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131263.245428 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk51t-0005Op-Nc; Fri, 21 May 2021 13:17:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131263.245428; Fri, 21 May 2021 13:17:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk51t-0005Oi-KP; Fri, 21 May 2021 13:17:29 +0000
Received: by outflank-mailman (input) for mailman id 131263;
 Fri, 21 May 2021 13:17:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08+4=KQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1lk51s-0005O2-8z
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 13:17:28 +0000
Received: from mail-lj1-x236.google.com (unknown [2a00:1450:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e9a4774-88ca-4482-b3ad-aef95db03645;
 Fri, 21 May 2021 13:17:22 +0000 (UTC)
Received: by mail-lj1-x236.google.com with SMTP id s25so23890990ljo.11
 for <xen-devel@lists.xenproject.org>; Fri, 21 May 2021 06:17:22 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id m23sm619959lfb.279.2021.05.21.06.17.20
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 21 May 2021 06:17:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e9a4774-88ca-4482-b3ad-aef95db03645
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=4iz/FmyIftUl2ke8uiiDoohKHMKvZQ8NbKwXpiUQBho=;
        b=pbhiTrMOt4/xZ/XcrePve1J1yZWCJPAQ2fJg+gfNra4Er/JwQDMAhZgUTm00ULn909
         TZUGpL7qkhrzho5UUgf9M3ywR/qDSpvuKrh1Tjjw3ISZv+Jx//skB8ebHoWtupQ8Gq8A
         bLU+hy/dwH7A32NtraiGsPbdTqLGL5LAJm8YuYj0JEk3f/47ZJ4rry8TSh1mk4RYbQ51
         URmpfqQF8Cw2O6uzSYVMAl5fy3ipTlecqKcieNdOwrVuaV6xn+yT1zcMUeuJSxCTPF06
         +mL4H4UMA7pd0yhVWACN+gcmJh4nWnY3gE6OTzkzTZ6mg2wSRTsK1DYqqARKeT61zmQe
         6/DA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=4iz/FmyIftUl2ke8uiiDoohKHMKvZQ8NbKwXpiUQBho=;
        b=KncCPdTQDxPJ7RlAiTj4J9Zny99YYxEeF8c6hPbZvgP05fzOJb/PRQ7yzfOiNoaWA8
         QwCNFnnI1lXAwsQb0gm3ttYy7v4W8K918VnnWB+SUAEDCR8n7Q0bYlxP+GY6tucjaGFv
         tAQIAjSed9gqiKpG080kqeC/c2nv2BogO9z2b7NH5lKf6xM7mWEihmxnsq9O0c2MxxVo
         Un5nSFg+hylve/9j5nyOxzP9TSNwrrhU8vQb112jQ1/OIajIAf3Od6vCJ1UxQmSC4HL4
         aQLyK0iETlFExD9nufSYdcI6GxKFxoKUrAbLzj8+1p4otKScoEu2sGsiVbfYqEiiqHZg
         fg3A==
X-Gm-Message-State: AOAM53385iRtCQCkI+aRygqMjJMUVfpvsisK382HrLwZ201Obg2inrYa
	J53/TZ7REXr1K4OJD5oDZeo=
X-Google-Smtp-Source: ABdhPJwtNylLq5U68Qw/hh55wsJ8Y6UrnivtEgKkiULVeAk3HIfajEOeVDYmDl2S4XiFUljtRNDmwA==
X-Received: by 2002:a05:651c:1258:: with SMTP id h24mr6786794ljh.340.1621603041657;
        Fri, 21 May 2021 06:17:21 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: olekstysh@gmail.com,
	xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [PATCH V5 0/2] Virtio support for toolstack on Arm (Was "IOREQ feature (+ virtio-mmio) on Arm")
Date: Fri, 21 May 2021 16:16:45 +0300
Message-Id: <1621603005-5799-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1621603005-5799-1-git-send-email-olekstysh@gmail.com>
References: <1621603005-5799-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <julien.grall@arm.com>

Date: Wed, 29 Jul 2020 17:36:53 +0300
Subject: [PATCH V5 2/2] libxl: Introduce basic virtio-mmio support on Arm

This patch introduces helpers to allocate Virtio MMIO params
(IRQ and memory region) and create a specific device node in
the Guest device-tree with the allocated params. In order to deal
with multiple Virtio devices, reserve corresponding ranges.
For now, we reserve 1MB for memory regions and 10 SPIs.

As these helpers should be used for every Virtio device attached
to the Guest, call them for Virtio disk(s).

Please note, with statically allocated Virtio IRQs there is
a risk of a clash with the physical IRQs of passthrough devices.
For the first version, it's fine, but we should consider allocating
the Virtio IRQs automatically. Thankfully, we know in advance which
IRQs will be used for passthrough, so we are able to choose
non-clashing ones.

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - drop an extra "virtio" configuration option
   - update patch description
   - use CONTAINER_OF instead of own implementation
   - reserve ranges for Virtio MMIO params and put them
     in correct location
   - create helpers to allocate Virtio MMIO params, add
     corresponding sanity-checks
   - add comment why MMIO size 0x200 is chosen
   - update debug print
   - drop Wei's T-b
---
 tools/libs/light/libxl_arm.c  | 133 +++++++++++++++++++++++++++++++++++++++++-
 xen/include/public/arch-arm.h |   7 +++
 2 files changed, 138 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index e2901f1..a9f15ce 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,56 @@
 #include <assert.h>
 #include <xen/device_tree_defs.h>
 
+/*
+ * There is no clear requirement for the total size of the Virtio MMIO region.
+ * The control registers occupy 0x100 bytes and the device-specific
+ * configuration registers start at offset 0x100; their size depends on the
+ * device and the driver. Pick the biggest size currently known, to cover most
+ * devices (also consider allowing the user to configure the size via the
+ * config file for devices not conforming to the proposed value).
+ */
+#define VIRTIO_MMIO_DEV_SIZE   xen_mk_ullong(0x200)
+
+static uint64_t virtio_mmio_base;
+static uint32_t virtio_mmio_irq;
+
+static void init_virtio_mmio_params(void)
+{
+    virtio_mmio_base = GUEST_VIRTIO_MMIO_BASE;
+    virtio_mmio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST;
+}
+
+static uint64_t alloc_virtio_mmio_base(libxl__gc *gc)
+{
+    uint64_t base = virtio_mmio_base;
+
+    /* Make sure we have enough reserved resources */
+    if ((virtio_mmio_base + VIRTIO_MMIO_DEV_SIZE >
+        GUEST_VIRTIO_MMIO_BASE + GUEST_VIRTIO_MMIO_SIZE)) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO BASE 0x%"PRIx64"\n",
+            virtio_mmio_base);
+        return 0;
+    }
+    virtio_mmio_base += VIRTIO_MMIO_DEV_SIZE;
+
+    return base;
+}
+
+static uint32_t alloc_virtio_mmio_irq(libxl__gc *gc)
+{
+    uint32_t irq = virtio_mmio_irq;
+
+    /* Make sure we have enough reserved resources */
+    if (virtio_mmio_irq > GUEST_VIRTIO_MMIO_SPI_LAST) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO IRQ %u\n",
+            virtio_mmio_irq);
+        return 0;
+    }
+    virtio_mmio_irq++;
+
+    return irq;
+}
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -26,8 +76,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq = 0;
+    bool vuart_enabled = false, virtio_enabled = false;
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +89,35 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
+    /*
+     * Virtio MMIO params need not be unique across the whole system, but must
+     * be initialized afresh for every new guest.
+     */
+    init_virtio_mmio_params();
+    for (i = 0; i < d_config->num_disks; i++) {
+        libxl_device_disk *disk = &d_config->disks[i];
+
+        if (disk->virtio) {
+            disk->base = alloc_virtio_mmio_base(gc);
+            if (!disk->base)
+                return ERROR_FAIL;
+
+            disk->irq = alloc_virtio_mmio_irq(gc);
+            if (!disk->irq)
+                return ERROR_FAIL;
+
+            if (virtio_irq < disk->irq)
+                virtio_irq = disk->irq;
+            virtio_enabled = true;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params for Vdev %s: IRQ %u BASE 0x%"PRIx64,
+                disk->vdev, disk->irq, disk->base);
+        }
+    }
+
+    if (virtio_enabled)
+        nr_spis += (virtio_irq - 32) + 1;
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +137,13 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }
 
+        /* The same check as for vpl011 */
+        if (virtio_enabled &&
+           (irq >= GUEST_VIRTIO_MMIO_SPI_FIRST && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio MMIO IRQ range\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;
 
@@ -660,6 +746,38 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    /* Placeholder for virtio@ + a 64-bit number + \0 */
+    char buf[24];
+
+    snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base);
+    res = fdt_begin_node(fdt, buf);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, VIRTIO_MMIO_DEV_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -860,10 +978,14 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_build_info *info,
     int rc, res;
     size_t fdt_size = 0;
     int pfdt_size = 0;
+    unsigned int i;
 
     const libxl_version_info *vers;
     const struct arch_info *ainfo;
 
+    libxl_domain_config *d_config =
+        CONTAINER_OF(info, libxl_domain_config, b_info);
+
     vers = libxl_get_version_info(CTX);
     if (vers == NULL) return ERROR_FAIL;
 
@@ -963,6 +1085,13 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
+        for (i = 0; i < d_config->num_disks; i++) {
+            libxl_device_disk *disk = &d_config->disks[i];
+
+            if (disk->virtio)
+                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
+        }
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 713fd65..1c02248 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -394,6 +394,10 @@ typedef uint64_t xen_callback_t;
 
 /* Physical Address Space */
 
+/* Virtio MMIO mappings */
+#define GUEST_VIRTIO_MMIO_BASE   xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE   xen_mk_ullong(0x00100000)
+
 /*
  * vGIC mappings: Only one set of mapping is used by the guest.
  * Therefore they can overlap.
@@ -458,6 +462,9 @@ typedef uint64_t xen_callback_t;
 
 #define GUEST_VPL011_SPI        32
 
+#define GUEST_VIRTIO_MMIO_SPI_FIRST   33
+#define GUEST_VIRTIO_MMIO_SPI_LAST    43
+
 /* PSCI functions */
 #define PSCI_cpu_suspend 0
 #define PSCI_cpu_off     1
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri May 21 13:17:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:17:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131264.245440 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk51x-0005jB-8C; Fri, 21 May 2021 13:17:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131264.245440; Fri, 21 May 2021 13:17:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk51x-0005j4-4s; Fri, 21 May 2021 13:17:33 +0000
Received: by outflank-mailman (input) for mailman id 131264;
 Fri, 21 May 2021 13:17:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08+4=KQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1lk51v-0005O2-W9
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 13:17:32 +0000
Received: from mail-lj1-x235.google.com (unknown [2a00:1450:4864:20::235])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f938bd00-433f-4956-93d4-5463fcbd4e7c;
 Fri, 21 May 2021 13:17:22 +0000 (UTC)
Received: by mail-lj1-x235.google.com with SMTP id o8so23978400ljp.0
 for <xen-devel@lists.xenproject.org>; Fri, 21 May 2021 06:17:22 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id m23sm619959lfb.279.2021.05.21.06.17.19
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 21 May 2021 06:17:20 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f938bd00-433f-4956-93d4-5463fcbd4e7c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=dKUkXE/MAv3nl/93lZHxjieAXTr06nBtxVTTGn1DoaY=;
        b=o20eSxsIZvLSYpJYkzR4Nd0XRhMSn1zWv6TMDyEQs3aHgz1B74ckqKhcj3mX58ny1k
         le18CTtTl/fxOCUoQ8PIhHof096U0qGIYroEe8TAoZiBTEKoPYOmsc2T1G7iMghFrDEX
         n3LYaXpJ5ZHPmCAYSCVv3XcIv8P0+/Tg4t4415mnOyxt+x8IErqZYE38FBfT1c/Hc8H4
         IpcLkd3t6Dc6SXB3CDvBxdEdbE6vcSxim0GGQJWZNQzhWwOC0K6N/jBJ1RkJlX9zUFRn
         kHDZ9LpmPzf29z3dQ86cXhzbtKmyz9sQ0ioN6W9x9xbswA8ms0wGzXY1BbDBWa4Cj5YV
         0/DQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=dKUkXE/MAv3nl/93lZHxjieAXTr06nBtxVTTGn1DoaY=;
        b=MRjRjwQXcXgRlEFVMrWC34JIPFAJHBa0zwgLcjoEdDayGR+W1TNGXB4wUHJzeLLkLK
         yBLMYd4Tzs7DAATeb/XeN3CPBUpjh5rENazjCugHDwRKqmzKPXl9CkWNdQZd3PDj9un+
         3WAUFMYLaAFh9LD0Jjm7EeewFzi10Fdavl3Ki3gOLC+VyWk9Exu0GhLlpMcWv7GlLeIp
         5S6gpLigjGmFvwP7XhZNamRCZOBMIKORKSTE3KIDiOo34a3Pl6T1EPEZkqHNHGGWFNLW
         tyA64QgNLTJX8GEIvnqhbzpVDu5SrznXsFP6acNTzc5pI6sR3psLV93hnMPFOn38id3f
         NGTQ==
X-Gm-Message-State: AOAM5325vw9xOh2sIaVk2zsV0wB15mjWFCyGGWRvcp/YXxawXFc7uXJq
	n0pNCNz3PYL8f9Pj6MaOkyg=
X-Google-Smtp-Source: ABdhPJwe+tSXk840ngANuFYQO+J23sbhDvmjbQCIj0UpvdYAqcp8T+Ti40fJPDEYEKnUoWJBNhjwLg==
X-Received: by 2002:a05:651c:612:: with SMTP id k18mr7021299lje.339.1621603040689;
        Fri, 21 May 2021 06:17:20 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: olekstysh@gmail.com,
	xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [PATCH V5 0/2] Virtio support for toolstack on Arm (Was "IOREQ feature (+ virtio-mmio) on Arm")
Date: Fri, 21 May 2021 16:16:44 +0300
Message-Id: <1621603005-5799-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1621603005-5799-1-git-send-email-olekstysh@gmail.com>
References: <1621603005-5799-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Date: Thu, 20 May 2021 20:41:01 +0300
Subject: [PATCH V5 1/2] libxl: Add support for Virtio disk configuration

This patch adds basic support for configuring and assisting the virtio-disk
backend (emulator), which is intended to run outside of QEMU and can be
run in any domain.
Although the Virtio block device is quite different from the traditional
Xen PV block device (vbd) from the toolstack point of view:
 - the frontend is virtio-blk, which is not a Xenbus driver, so nothing
   written to Xenstore is fetched by the frontend (the vdev is not
   passed to the frontend, etc)
 - the ring-ref/event-channel are not used for the backend<->frontend
   communication; the proposed IPC for Virtio is IOREQ/DM
it is still a "block device" and ought to be integrated into the existing
"disk" handling. So, re-use (and adapt) the "disk" parsing/configuration
logic to deal with Virtio devices as well.

Besides introducing a new disk backend type (LIBXL_DISK_BACKEND_VIRTIO),
introduce a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the
current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk model.

In order to inform the toolstack that a Virtio disk needs to be used,
extend the "disk" configuration by introducing a new "virtio" flag.
An example of domain configuration:
disk = [ 'backend=DomD, phy:/dev/mmcblk1p3, xvda1, rw, virtio' ]

Please note, this patch is not enough for virtio-disk to work
on Xen (Arm), as for every Virtio device (including disk) we need
to allocate Virtio MMIO params (IRQ and memory region), pass
them to the backend, and also update the Guest device-tree with the
allocated params. The subsequent patch will add these missing bits.
For the current patch, the default "irq" and "base" are just written
to Xenstore.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - rebase according to the new argument for DEFINE_DEVICE_TYPE_STRUCT

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - update patch description
   - don't introduce new "vdisk" configuration option with own parsing logic,
     re-use Xen PV block "disk" parsing/configuration logic for the virtio-disk
   - introduce "virtio" flag and document its usage
   - add LIBXL_HAVE_DEVICE_DISK_VIRTIO
   - update libxlu_disk_l.[ch]
   - drop num_disks variable/MAX_VIRTIO_DISKS
   - drop Wei's T-b
---
 docs/man/xl-disk-configuration.5.pod.in   |  27 +
 tools/include/libxl.h                     |   6 +
 tools/libs/light/libxl_device.c           |  38 +-
 tools/libs/light/libxl_disk.c             |  99 +++-
 tools/libs/light/libxl_types.idl          |   4 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 881 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   1 +
 tools/xl/xl_block.c                       |  11 +
 11 files changed, 626 insertions(+), 446 deletions(-)

diff --git a/docs/man/xl-disk-configuration.5.pod.in b/docs/man/xl-disk-configuration.5.pod.in
index 71d0e86..9cc189f 100644
--- a/docs/man/xl-disk-configuration.5.pod.in
+++ b/docs/man/xl-disk-configuration.5.pod.in
@@ -344,8 +344,35 @@ can be used to disable "hole punching" for file based backends which
 were intentionally created non-sparse to avoid fragmentation of the
 file.
 
+=item B<virtio>
+
+=over 4
+
+=item Description
+
+Enables experimental Virtio support for disk
+
+=item Supported values
+
+absent, present
+
+=item Mandatory
+
+No
+
+=item Default value
+
+absent
+
 =back
 
+Besides forcing the toolstack to use a specific Xen Virtio backend
+implementation (for example, virtio-disk), this also affects the guest's view
+of the device and requires the virtio-blk driver to be used.
+Please note, the virtual device (vdev) is not passed to the guest in that
+case, but it still must be specified for internal purposes.
+
+=back
 
 =head1 COLO Parameters
 
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ae7fe27..58e14e6 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -503,6 +503,12 @@
 #define LIBXL_HAVE_X86_MSR_RELAXED 1
 
 /*
+ * LIBXL_HAVE_DEVICE_DISK_VIRTIO indicates that a 'virtio' field
+ * (of boolean type) is present in libxl_device_disk.
+ */
+#define LIBXL_HAVE_DEVICE_DISK_VIRTIO 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index 36c4e41..7c8cb53 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -292,6 +292,9 @@ static int disk_try_backend(disk_try_backend_args *a,
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
 
+    if (a->disk->virtio && backend != LIBXL_DISK_BACKEND_VIRTIO)
+        goto bad_virtio;
+
     switch (backend) {
     case LIBXL_DISK_BACKEND_PHY:
         if (a->disk->format != LIBXL_DISK_FORMAT_RAW) {
@@ -329,6 +332,29 @@ static int disk_try_backend(disk_try_backend_args *a,
         if (a->disk->script) goto bad_script;
         return backend;
 
+    case LIBXL_DISK_BACKEND_VIRTIO:
+        if (a->disk->format != LIBXL_DISK_FORMAT_RAW)
+            goto bad_format;
+
+        if (a->disk->script)
+            goto bad_script;
+
+        if (libxl_defbool_val(a->disk->colo_enable))
+            goto bad_colo;
+
+        if (a->disk->backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            LOG(DEBUG, "Disk vdev=%s, is using a storage driver domain, "
+                       "skipping physical device check", a->disk->vdev);
+            return backend;
+        }
+
+        if (libxl__try_phy_backend(a->stab.st_mode))
+            return backend;
+
+        LOG(DEBUG, "Disk vdev=%s, backend virtio unsuitable as phys path not a "
+                   "block device", a->disk->vdev);
+        return 0;
+
     default:
         LOG(DEBUG, "Disk vdev=%s, backend %d unknown", a->disk->vdev, backend);
         return 0;
@@ -352,6 +378,11 @@ static int disk_try_backend(disk_try_backend_args *a,
     LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with colo",
         a->disk->vdev, libxl_disk_backend_to_string(backend));
     return 0;
+
+ bad_virtio:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with virtio",
+        a->disk->vdev, libxl_disk_backend_to_string(backend));
+    return 0;
 }
 
 int libxl__backendpath_parse_domid(libxl__gc *gc, const char *be_path,
@@ -392,7 +423,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         }
         memset(&a.stab, 0, sizeof(a.stab));
     } else if ((disk->backend == LIBXL_DISK_BACKEND_UNKNOWN ||
-                disk->backend == LIBXL_DISK_BACKEND_PHY) &&
+                disk->backend == LIBXL_DISK_BACKEND_PHY ||
+                disk->backend == LIBXL_DISK_BACKEND_VIRTIO) &&
                disk->backend_domid == LIBXL_TOOLSTACK_DOMID &&
                !disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
@@ -408,7 +440,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         ok=
             disk_try_backend(&a, LIBXL_DISK_BACKEND_PHY) ?:
             disk_try_backend(&a, LIBXL_DISK_BACKEND_QDISK) ?:
-            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP);
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP) ?:
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_VIRTIO);
         if (ok)
             LOG(DEBUG, "Disk vdev=%s, using backend %s",
                        disk->vdev,
@@ -441,6 +474,7 @@ char *libxl__device_disk_string_of_backend(libxl_disk_backend backend)
         case LIBXL_DISK_BACKEND_QDISK: return "qdisk";
         case LIBXL_DISK_BACKEND_TAP: return "phy";
         case LIBXL_DISK_BACKEND_PHY: return "phy";
+        case LIBXL_DISK_BACKEND_VIRTIO: return "virtio_disk";
         default: return NULL;
     }
 }
diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index 411ffea..4332dab 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -174,6 +174,16 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
         disk->backend = LIBXL_DISK_BACKEND_QDISK;
     }
 
+    /* Force virtio_disk backend for Virtio devices */
+    if (disk->virtio) {
+        if (!(disk->backend == LIBXL_DISK_BACKEND_VIRTIO ||
+              disk->backend == LIBXL_DISK_BACKEND_UNKNOWN)) {
+            LOGD(ERROR, domid, "Backend for Virtio devices must be virtio_disk");
+            return ERROR_FAIL;
+        }
+        disk->backend = LIBXL_DISK_BACKEND_VIRTIO;
+    }
+
     rc = libxl__device_disk_set_backend(gc, disk);
     return rc;
 }
@@ -204,6 +214,9 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
         case LIBXL_DISK_BACKEND_QDISK:
             device->backend_kind = LIBXL__DEVICE_KIND_QDISK;
             break;
+        case LIBXL_DISK_BACKEND_VIRTIO:
+            device->backend_kind = LIBXL__DEVICE_KIND_VIRTIO_DISK;
+            break;
         default:
             LOGD(ERROR, domid, "Unrecognized disk backend type: %d",
                  disk->backend);
@@ -212,7 +225,8 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
 
     device->domid = domid;
     device->devid = devid;
-    device->kind  = LIBXL__DEVICE_KIND_VBD;
+    device->kind = disk->backend == LIBXL_DISK_BACKEND_VIRTIO ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
 
     return 0;
 }
@@ -330,7 +344,17 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
+            case LIBXL_DISK_BACKEND_VIRTIO:
+                dev = disk->pdev_path;
+
+                flexarray_append(back, "params");
+                flexarray_append(back, dev);
 
+                flexarray_append_pair(back, "base", GCSPRINTF("%lu", disk->base));
+                flexarray_append_pair(back, "irq", GCSPRINTF("%u", disk->irq));
+
+                assert(device->backend_kind == LIBXL__DEVICE_KIND_VIRTIO_DISK);
+                break;
             case LIBXL_DISK_BACKEND_TAP:
                 LOG(ERROR, "blktap is not supported");
                 rc = ERROR_FAIL;
@@ -532,6 +556,26 @@ static int libxl__disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
     }
     libxl_string_to_backend(ctx, tmp, &(disk->backend));
 
+    if (disk->backend == LIBXL_DISK_BACKEND_VIRTIO) {
+        disk->virtio = true;
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/base", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/base", libxl_path);
+            goto cleanup;
+        }
+        disk->base = strtoul(tmp, NULL, 10);
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/irq", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/irq", libxl_path);
+            goto cleanup;
+        }
+        disk->irq = strtoul(tmp, NULL, 10);
+    }
+
     disk->vdev = xs_read(ctx->xsh, XBT_NULL,
                          GCSPRINTF("%s/dev", libxl_path), &len);
     if (!disk->vdev) {
@@ -575,6 +619,41 @@ cleanup:
     return rc;
 }
 
+static int libxl_device_disk_get_path(libxl__gc *gc, uint32_t domid,
+                                      char **path)
+{
+    const char *dir;
+    int rc;
+
+    /*
+     * As we don't know exactly what device kind to be used here, guess it
+     * by checking the presence of the corresponding path in Xenstore.
+     * First, try to read path for vbd device (default) and if not exists
+     * read path for virtio_disk device. This will work as long as both Xen PV
+     * and Virtio disk devices are not assigned to the same guest.
+     */
+    *path = GCSPRINTF("%s/device/%s",
+                      libxl__xs_libxl_path(gc, domid),
+                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
+    if (rc)
+        return rc;
+
+    if (dir)
+        return 0;
+
+    *path = GCSPRINTF("%s/device/%s",
+                      libxl__xs_libxl_path(gc, domid),
+                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
+    if (rc)
+        return rc;
+
+    return 0;
+}
+
 int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
                               const char *vdev, libxl_device_disk *disk)
 {
@@ -588,10 +667,12 @@ int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
 
     libxl_device_disk_init(disk);
 
-    libxl_path = libxl__domain_device_libxl_path(gc, domid, devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+    rc = libxl_device_disk_get_path(gc, domid, &libxl_path);
+    if (rc)
+        return rc;
 
-    rc = libxl__disk_from_xenstore(gc, libxl_path, devid, disk);
+    rc = libxl__disk_from_xenstore(gc, GCSPRINTF("%s/%d", libxl_path, devid),
+                                   devid, disk);
 
     GC_FREE;
     return rc;
@@ -605,16 +686,19 @@ int libxl_device_disk_getinfo(libxl_ctx *ctx, uint32_t domid,
     char *fe_path, *libxl_path;
     char *val;
     int rc;
+    libxl__device_kind kind;
 
     diskinfo->backend = NULL;
 
     diskinfo->devid = libxl__device_disk_dev_number(disk->vdev, NULL, NULL);
 
-    /* tap devices entries in xenstore are written as vbd devices. */
+    /* tap devices entries in xenstore are written as vbd/virtio_disk devices. */
+    kind = disk->backend == LIBXL_DISK_BACKEND_VIRTIO ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
     fe_path = libxl__domain_device_frontend_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     libxl_path = libxl__domain_device_libxl_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     diskinfo->backend = xs_read(ctx->xsh, XBT_NULL,
                                 GCSPRINTF("%s/backend", libxl_path), NULL);
     if (!diskinfo->backend) {
@@ -1375,6 +1459,7 @@ LIBXL_DEFINE_DEVICE_LIST(disk)
 #define libxl__device_disk_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(disk, VBD, disks,
+    .get_path    = libxl_device_disk_get_path,
     .merge       = libxl_device_disk_merge,
     .dm_needed   = libxl_device_disk_dm_needed,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__disk_from_xenstore,
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index f45addd..d513dde 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -130,6 +130,7 @@ libxl_disk_backend = Enumeration("disk_backend", [
     (1, "PHY"),
     (2, "TAP"),
     (3, "QDISK"),
+    (4, "VIRTIO"),
     ])
 
 libxl_nic_type = Enumeration("nic_type", [
@@ -699,6 +700,9 @@ libxl_device_disk = Struct("device_disk", [
     ("is_cdrom", integer),
     ("direct_io_safe", bool),
     ("discard_enable", libxl_defbool),
+    ("virtio", bool),
+    ("irq", uint32),
+    ("base", uint64),
     # Note that the COLO configuration settings should be considered unstable.
     # They may change incompatibly in future versions of Xen.
     ("colo_enable", libxl_defbool),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index 4699c4a..fa406de 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -304,6 +304,8 @@ int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend
         *backend = LIBXL_DISK_BACKEND_TAP;
     } else if (!strcmp(s, "qdisk")) {
         *backend = LIBXL_DISK_BACKEND_QDISK;
+    } else if (!strcmp(s, "virtio_disk")) {
+        *backend = LIBXL_DISK_BACKEND_VIRTIO;
     } else if (!strcmp(s, "tap")) {
         p = strchr(s, ':');
         if (!p) {
diff --git a/tools/libs/util/libxlu_disk_l.c b/tools/libs/util/libxlu_disk_l.c
index 32d4b74..7abc699 100644
--- a/tools/libs/util/libxlu_disk_l.c
+++ b/tools/libs/util/libxlu_disk_l.c
@@ -549,8 +549,8 @@ static void yynoreturn yy_fatal_error ( const char* msg , yyscan_t yyscanner );
 	yyg->yy_hold_char = *yy_cp; \
 	*yy_cp = '\0'; \
 	yyg->yy_c_buf_p = yy_cp;
-#define YY_NUM_RULES 36
-#define YY_END_OF_BUFFER 37
+#define YY_NUM_RULES 37
+#define YY_END_OF_BUFFER 38
 /* This struct is not used in this scanner,
    but its presence is necessary. */
 struct yy_trans_info
@@ -558,74 +558,75 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static const flex_int16_t yy_acclist[575] =
+static const flex_int16_t yy_acclist[583] =
     {   0,
-       35,   35,   37,   33,   34,   36, 8193,   33,   34,   36,
-    16385, 8193,   33,   36,16385,   33,   34,   36,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   33,   34,
-       36,   33,   34,   36,   33,   34,   36,   33,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   35,   36,
-       36,   33,   33, 8193,   33, 8193,   33,16385, 8193,   33,
-     8193,   33,   33, 8224,   33,16416,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   35,
-     8193,   33, 8193,   33, 8193, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   28, 8224,   33,16416,   33,   33,   15,
-       33,   33,   33,   33,   33,   33,   33,   33,   33, 8217,
-     8224,   33,16409,16416,   33,   33,   31, 8224,   33,16416,
-       33, 8216, 8224,   33,16408,16416,   33,   33, 8219, 8224,
-       33,16411,16416,   33,   33,   33,   33,   33,   28, 8224,
-
-       33,   28, 8224,   33,   28,   33,   28, 8224,   33,    3,
-       33,   15,   33,   33,   33,   33,   33,   30, 8224,   33,
-    16416,   33,   33,   33, 8217, 8224,   33, 8217, 8224,   33,
-     8217,   33, 8217, 8224,   33,   33,   31, 8224,   33,   31,
-     8224,   33,   31,   33,   31, 8224, 8216, 8224,   33, 8216,
-     8224,   33, 8216,   33, 8216, 8224,   33, 8219, 8224,   33,
-     8219, 8224,   33, 8219,   33, 8219, 8224,   33,   33,   10,
-       33,   33,   28, 8224,   33,   28, 8224,   33,   28, 8224,
-       28,   33,   28,   33,    3,   33,   33,   33,   33,   33,
-       33,   33,   30, 8224,   33,   30, 8224,   33,   30,   33,
-
-       30, 8224,   33,   33,   29, 8224,   33,16416, 8217, 8224,
-       33, 8217, 8224,   33, 8217, 8224, 8217,   33, 8217,   33,
-       33,   31, 8224,   33,   31, 8224,   33,   31, 8224,   31,
-       33,   31, 8216, 8224,   33, 8216, 8224,   33, 8216, 8224,
-     8216,   33, 8216,   33, 8219, 8224,   33, 8219, 8224,   33,
-     8219, 8224, 8219,   33, 8219,   33,   33,   10,   23,   10,
-        7,   33,   33,   33,   33,   33,   33,   33,   13,   33,
-       30, 8224,   33,   30, 8224,   33,   30, 8224,   30,   33,
-       30,    2,   33,   29, 8224,   33,   29, 8224,   33,   29,
-       33,   29, 8224,   16,   33,   33,   11,   33,   22,   10,
-
-       10,   23,    7,   23,    7,   33,    8,   33,   33,   33,
-       33,    6,   33,   13,   33,    2,   23,    2,   33,   29,
-     8224,   33,   29, 8224,   33,   29, 8224,   29,   33,   29,
-       16,   33,   33,   11,   23,   11,   26, 8224,   33,16416,
-       22,   23,   22,    7,    7,   23,   33,    8,   23,    8,
-       33,   33,   33,   33,    6,   23,    6,    6,   23,    6,
-       23,   33,    2,    2,   23,   33,   33,   11,   11,   23,
-       26, 8224,   33,   26, 8224,   33,   26,   33,   26, 8224,
-       22,   23,   33,    8,    8,   23,   33,   33,   17,   18,
-        6,    6,   23,    6,    6,   33,   33,   14,   33,   26,
-
-     8224,   33,   26, 8224,   33,   26, 8224,   26,   33,   26,
-       33,   33,   33,   17,   23,   17,   18,   23,   18,    6,
-        6,   33,   33,   14,   33,   20,    9,   19,   17,   17,
-       23,   18,   18,   23,    6,    5,    6,   33,   21,   20,
-       23,   20,    9,   23,    9,   19,   23,   19,    4,    6,
-        5,    6,   33,   21,   23,   21,   20,   20,   23,    9,
-        9,   23,   19,   19,   23,    4,    6,   12,   33,   21,
-       21,   23,   12,   33
+       36,   36,   38,   34,   35,   37, 8193,   34,   35,   37,
+    16385, 8193,   34,   37,16385,   34,   35,   37,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   34,   35,
+       37,   34,   35,   37,   34,   35,   37,   34,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   36,   37,
+       37,   34,   34, 8193,   34, 8193,   34,16385, 8193,   34,
+     8193,   34,   34, 8225,   34,16417,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       36, 8193,   34, 8193,   34, 8193, 8225,   34, 8225,   34,
+     8225,   24,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34, 8225,   34, 8225,
+       34, 8225,   24,   34,   34,   29, 8225,   34,16417,   34,
+       34,   16,   34,   34,   34,   34,   34,   34,   34,   34,
+       34, 8218, 8225,   34,16410,16417,   34,   34,   32, 8225,
+       34,16417,   34, 8217, 8225,   34,16409,16417,   34,   34,
+     8220, 8225,   34,16412,16417,   34,   34,   34,   34,   34,
+
+       34,   29, 8225,   34,   29, 8225,   34,   29,   34,   29,
+     8225,   34,    3,   34,   16,   34,   34,   34,   34,   34,
+       31, 8225,   34,16417,   34,   34,   34, 8218, 8225,   34,
+     8218, 8225,   34, 8218,   34, 8218, 8225,   34,   34,   32,
+     8225,   34,   32, 8225,   34,   32,   34,   32, 8225, 8217,
+     8225,   34, 8217, 8225,   34, 8217,   34, 8217, 8225,   34,
+     8220, 8225,   34, 8220, 8225,   34, 8220,   34, 8220, 8225,
+       34,   34,   10,   34,   34,   34,   29, 8225,   34,   29,
+     8225,   34,   29, 8225,   29,   34,   29,   34,    3,   34,
+       34,   34,   34,   34,   34,   34,   31, 8225,   34,   31,
+
+     8225,   34,   31,   34,   31, 8225,   34,   34,   30, 8225,
+       34,16417, 8218, 8225,   34, 8218, 8225,   34, 8218, 8225,
+     8218,   34, 8218,   34,   34,   32, 8225,   34,   32, 8225,
+       34,   32, 8225,   32,   34,   32, 8217, 8225,   34, 8217,
+     8225,   34, 8217, 8225, 8217,   34, 8217,   34, 8220, 8225,
+       34, 8220, 8225,   34, 8220, 8225, 8220,   34, 8220,   34,
+       34,   10,   24,   10,   15,   34,    7,   34,   34,   34,
+       34,   34,   34,   34,   13,   34,   31, 8225,   34,   31,
+     8225,   34,   31, 8225,   31,   34,   31,    2,   34,   30,
+     8225,   34,   30, 8225,   34,   30,   34,   30, 8225,   17,
+
+       34,   34,   11,   34,   23,   10,   10,   24,   15,   34,
+        7,   24,    7,   34,    8,   34,   34,   34,   34,    6,
+       34,   13,   34,    2,   24,    2,   34,   30, 8225,   34,
+       30, 8225,   34,   30, 8225,   30,   34,   30,   17,   34,
+       34,   11,   24,   11,   27, 8225,   34,16417,   23,   24,
+       23,    7,    7,   24,   34,    8,   24,    8,   34,   34,
+       34,   34,    6,   24,    6,    6,   24,    6,   24,   34,
+        2,    2,   24,   34,   34,   11,   11,   24,   27, 8225,
+       34,   27, 8225,   34,   27,   34,   27, 8225,   23,   24,
+       34,    8,    8,   24,   34,   34,   18,   19,    6,    6,
+
+       24,    6,    6,   34,   34,   14,   34,   27, 8225,   34,
+       27, 8225,   34,   27, 8225,   27,   34,   27,   34,   34,
+       34,   18,   24,   18,   19,   24,   19,    6,    6,   34,
+       34,   14,   34,   21,    9,   20,   18,   18,   24,   19,
+       19,   24,    6,    5,    6,   34,   22,   21,   24,   21,
+        9,   24,    9,   20,   24,   20,    4,    6,    5,    6,
+       34,   22,   24,   22,   21,   21,   24,    9,    9,   24,
+       20,   20,   24,    4,    6,   12,   34,   22,   22,   24,
+       12,   34
     } ;
 
-static const flex_int16_t yy_accept[356] =
+static const flex_int16_t yy_accept[362] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
@@ -633,39 +634,40 @@ static const flex_int16_t yy_accept[356] =
        74,   76,   79,   81,   82,   83,   84,   87,   87,   88,
        89,   90,   91,   92,   93,   94,   95,   96,   97,   98,
        99,  100,  101,  102,  103,  104,  105,  106,  107,  108,
-      109,  110,  111,  113,  115,  116,  118,  120,  121,  122,
+      109,  110,  111,  112,  114,  116,  117,  119,  121,  122,
       123,  124,  125,  126,  127,  128,  129,  130,  131,  132,
       133,  134,  135,  136,  137,  138,  139,  140,  141,  142,
-      143,  144,  145,  146,  148,  150,  151,  152,  153,  154,
-
-      158,  159,  160,  162,  163,  164,  165,  166,  167,  168,
-      169,  170,  175,  176,  177,  181,  182,  187,  188,  189,
-      194,  195,  196,  197,  198,  199,  202,  205,  207,  209,
-      210,  212,  214,  215,  216,  217,  218,  222,  223,  224,
-      225,  228,  231,  233,  235,  236,  237,  240,  243,  245,
-      247,  250,  253,  255,  257,  258,  261,  264,  266,  268,
-      269,  270,  271,  272,  273,  276,  279,  281,  283,  284,
-      285,  287,  288,  289,  290,  291,  292,  293,  296,  299,
-      301,  303,  304,  305,  309,  312,  315,  317,  319,  320,
-      321,  322,  325,  328,  330,  332,  333,  336,  339,  341,
-
-      343,  344,  345,  348,  351,  353,  355,  356,  357,  358,
-      360,  361,  362,  363,  364,  365,  366,  367,  368,  369,
-      371,  374,  377,  379,  381,  382,  383,  384,  387,  390,
-      392,  394,  396,  397,  398,  399,  400,  401,  403,  405,
-      406,  407,  408,  409,  410,  411,  412,  413,  414,  416,
-      418,  419,  420,  423,  426,  428,  430,  431,  433,  434,
-      436,  437,  441,  443,  444,  445,  447,  448,  450,  451,
-      452,  453,  454,  455,  457,  458,  460,  462,  463,  464,
-      466,  467,  468,  469,  471,  474,  477,  479,  481,  483,
-      484,  485,  487,  488,  489,  490,  491,  492,  494,  495,
-
-      496,  497,  498,  500,  503,  506,  508,  510,  511,  512,
-      513,  514,  516,  517,  519,  520,  521,  522,  523,  524,
-      526,  527,  528,  529,  530,  532,  533,  535,  536,  538,
-      539,  540,  542,  543,  545,  546,  548,  549,  551,  553,
-      554,  556,  557,  558,  560,  561,  563,  564,  566,  568,
-      570,  571,  573,  575,  575
+      143,  144,  145,  146,  147,  148,  150,  152,  153,  154,
+
+      155,  156,  160,  161,  162,  164,  165,  166,  167,  168,
+      169,  170,  171,  172,  177,  178,  179,  183,  184,  189,
+      190,  191,  196,  197,  198,  199,  200,  201,  202,  205,
+      208,  210,  212,  213,  215,  217,  218,  219,  220,  221,
+      225,  226,  227,  228,  231,  234,  236,  238,  239,  240,
+      243,  246,  248,  250,  253,  256,  258,  260,  261,  264,
+      267,  269,  271,  272,  273,  274,  275,  276,  277,  280,
+      283,  285,  287,  288,  289,  291,  292,  293,  294,  295,
+      296,  297,  300,  303,  305,  307,  308,  309,  313,  316,
+      319,  321,  323,  324,  325,  326,  329,  332,  334,  336,
+
+      337,  340,  343,  345,  347,  348,  349,  352,  355,  357,
+      359,  360,  361,  362,  364,  365,  367,  368,  369,  370,
+      371,  372,  373,  374,  375,  377,  380,  383,  385,  387,
+      388,  389,  390,  393,  396,  398,  400,  402,  403,  404,
+      405,  406,  407,  409,  411,  413,  414,  415,  416,  417,
+      418,  419,  420,  421,  422,  424,  426,  427,  428,  431,
+      434,  436,  438,  439,  441,  442,  444,  445,  449,  451,
+      452,  453,  455,  456,  458,  459,  460,  461,  462,  463,
+      465,  466,  468,  470,  471,  472,  474,  475,  476,  477,
+      479,  482,  485,  487,  489,  491,  492,  493,  495,  496,
+
+      497,  498,  499,  500,  502,  503,  504,  505,  506,  508,
+      511,  514,  516,  518,  519,  520,  521,  522,  524,  525,
+      527,  528,  529,  530,  531,  532,  534,  535,  536,  537,
+      538,  540,  541,  543,  544,  546,  547,  548,  550,  551,
+      553,  554,  556,  557,  559,  561,  562,  564,  565,  566,
+      568,  569,  571,  572,  574,  576,  578,  579,  581,  583,
+      583
     } ;
 
 static const YY_CHAR yy_ec[256] =
@@ -708,216 +710,217 @@ static const YY_CHAR yy_meta[35] =
         1,    1,    1,    1
     } ;
 
-static const flex_int16_t yy_base[424] =
+static const flex_int16_t yy_base[430] =
     {   0,
-        0,    0,  901,  900,  902,  897,   33,   36,  905,  905,
-       45,   63,   31,   42,   51,   52,  890,   33,   65,   67,
-       69,   70,  889,   71,  888,   75,    0,  905,  893,  905,
-       91,   94,    0,    0,  103,  886,  112,    0,   89,   98,
-      113,   92,  114,   99,  100,   48,  121,  116,  119,   74,
-      124,  129,  123,  135,  132,  133,  137,  134,  138,  139,
-      141,    0,  155,    0,    0,  164,    0,    0,  849,  142,
-      152,  164,  140,  161,  165,  166,  167,  168,  169,  173,
-      174,  178,  176,  180,  184,  208,  189,  183,  192,  195,
-      215,  191,  193,  223,    0,    0,  905,  208,  204,  236,
-
-      219,  209,  238,  196,  237,  831,  242,  815,  241,  224,
-      243,  261,  244,  259,  277,  266,  286,  250,  288,  298,
-      249,  283,  274,  282,  294,  308,    0,  310,    0,  295,
-      305,  905,  308,  306,  313,  314,  342,  319,  316,  320,
-      331,    0,  349,    0,  342,  344,  356,    0,  358,    0,
-      365,    0,  367,    0,  354,  375,    0,  377,    0,  363,
-      356,  809,  327,  322,  384,    0,    0,    0,    0,  379,
-      905,  382,  384,  386,  390,  372,  392,  403,    0,  410,
-        0,  407,  413,  423,  426,    0,    0,    0,    0,  409,
-      424,  435,    0,    0,    0,    0,  437,    0,    0,    0,
-
-        0,  433,  444,    0,    0,    0,    0,  391,  440,  781,
-      905,  769,  439,  445,  444,  447,  449,  454,  453,  399,
-      464,    0,    0,    0,    0,  757,  465,  476,    0,  478,
-        0,  479,  476,  753,  462,  490,  749,  905,  745,  905,
-      483,  737,  424,  485,  487,  490,  500,  493,  905,  729,
-      905,  502,  518,    0,    0,    0,    0,  905,  498,  721,
-      905,  527,  713,    0,  705,  905,  495,  697,  905,  365,
-      521,  528,  530,  685,  905,  534,  540,  540,  657,  905,
-      537,  542,  650,  905,  553,    0,  557,    0,    0,  551,
-      641,  905,  558,  557,  633,  614,  613,  905,  547,  555,
-
-      563,  565,  569,  584,    0,    0,    0,    0,  583,  570,
-      585,  612,  905,  601,  905,  522,  580,  589,  594,  905,
-      600,  585,  563,  520,  905,  514,  905,  586,  486,  597,
-      480,  441,  905,  416,  905,  345,  905,  334,  905,  601,
-      254,  905,  242,  905,  200,  905,  151,  905,  905,  607,
-       86,  905,  905,  905,  620,  624,  627,  631,  635,  639,
-      643,  647,  651,  655,  659,  663,  667,  671,  675,  679,
-      683,  687,  691,  695,  699,  703,  707,  711,  715,  719,
-      723,  727,  731,  735,  739,  743,  747,  751,  755,  759,
-      763,  767,  771,  775,  779,  783,  787,  791,  795,  799,
-
-      803,  807,  811,  815,  819,  823,  827,  831,  835,  839,
-      843,  847,  851,  855,  859,  863,  867,  871,  875,  879,
-      883,  887,  891
+        0,    0,  912,  911,  913,  908,   33,   36,  916,  916,
+       45,   63,   31,   42,   51,   52,  901,   33,   65,   67,
+       69,   70,  900,   71,  899,   77,    0,  916,  904,  916,
+       93,   96,    0,    0,  105,  897,  114,    0,   91,  100,
+      115,   94,  116,  101,  102,   48,   74,  118,  121,  123,
+       78,  128,  131,  137,  124,  125,  133,  135,  136,  140,
+      142,  141,    0,  163,    0,    0,  166,    0,    0,  902,
+      143,  146,  163,  164,  166,  167,  149,  169,  170,  175,
+      179,  176,  182,  177,  184,  192,  212,  193,  186,  196,
+      187,  219,  201,  150,  199,  227,    0,    0,  916,  209,
+
+      212,  243,  224,  213,  245,  223,  198,  895,  231,  894,
+      244,  230,  243,  261,  255,  259,  279,  266,  288,  275,
+      291,  301,  268,  284,  298,  301,  285,  302,  311,    0,
+      314,    0,  311,  318,  916,  312,  317,  246,  232,  342,
+      320,  325,  323,  349,    0,  351,    0,  344,  349,  360,
+        0,  363,    0,  367,    0,  370,    0,  330,  377,    0,
+      379,    0,  365,  358,  899,  368,  329,  331,  381,    0,
+        0,    0,    0,  381,  916,  383,  385,  387,  391,  397,
+      393,  409,    0,  411,    0,  412,  414,  424,  427,    0,
+        0,    0,    0,  422,  425,  436,    0,    0,    0,    0,
+
+      438,    0,    0,    0,    0,  434,  445,    0,    0,    0,
+        0,  440,  442,  898,  916,  400,  897,  443,  448,  449,
+      451,  453,  458,  457,  413,  469,    0,    0,    0,    0,
+      896,  469,  480,    0,  482,    0,  483,  480,  895,  489,
+      497,  894,  916,  916,  851,  916,  490,  839,  478,  492,
+      494,  497,  507,  501,  916,  823,  916,  509,  525,    0,
+        0,    0,    0,  916,  505,  811,  916,  534,  783,    0,
+      771,  916,  518,  759,  916,  523,  528,  538,  540,  755,
+      916,  511,  540,  549,  751,  916,  544,  547,  747,  916,
+      560,    0,  562,    0,    0,  555,  739,  916,  484,  561,
+
+      731,  723,  715,  916,  449,  566,  564,  566,  576,  578,
+        0,    0,    0,    0,  584,  574,  586,  707,  916,  699,
+      916,  581,  587,  590,  597,  916,  687,  659,  652,  643,
+      916,  635,  916,  597,  616,  599,  614,  604,  916,  600,
+      916,  541,  916,  467,  916,  603,  455,  916,  404,  916,
+      385,  916,  328,  916,  916,  609,  203,  916,  916,  916,
+      622,  626,  629,  633,  637,  641,  645,  649,  653,  657,
+      661,  665,  669,  673,  677,  681,  685,  689,  693,  697,
+      701,  705,  709,  713,  717,  721,  725,  729,  733,  737,
+      741,  745,  749,  753,  757,  761,  765,  769,  773,  777,
+
+      781,  785,  789,  793,  797,  801,  805,  809,  813,  817,
+      821,  825,  829,  833,  837,  841,  845,  849,  853,  857,
+      861,  865,  869,  873,  877,  881,  885,  889,  893
     } ;
 
-static const flex_int16_t yy_def[424] =
+static const flex_int16_t yy_def[430] =
     {   0,
-      354,    1,  355,  355,  354,  356,  357,  357,  354,  354,
-      358,  358,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,   12,  359,  354,  356,  354,
-      360,  357,  361,  361,  362,   12,  356,  363,   12,   12,
+      360,    1,  361,  361,  360,  362,  363,  363,  360,  360,
+      364,  364,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  365,  360,  362,  360,
+      366,  363,  367,  367,  368,   12,  362,  369,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  359,  360,  361,  361,  364,  365,  365,  354,   12,
+       12,   12,  365,  366,  367,  367,  370,  371,  371,  360,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  362,   12,   12,   12,   12,
-       12,   12,   12,  364,  365,  365,  354,   12,   12,  366,
-
-       12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  367,   86,   86,  368,   12,  369,   12,   12,  370,
-       12,   12,   12,   12,   12,  371,  372,  366,  372,   12,
-       12,  354,   86,   12,   12,   12,  373,   12,   12,   12,
-      374,  375,  367,  375,   86,   86,  376,  377,  368,  377,
-      378,  379,  369,  379,   12,  380,  381,  370,  381,   12,
-       12,  382,   12,   12,  371,  372,  372,  383,  383,   12,
-      354,   86,   86,   86,   12,   12,   12,  384,  385,  373,
-      385,   12,   12,  386,  374,  375,  375,  387,  387,   86,
-       86,  376,  377,  377,  388,  388,  378,  379,  379,  389,
-
-      389,   12,  380,  381,  381,  390,  390,   12,   12,  391,
-      354,  392,   86,   12,   86,   86,   86,   12,   86,   12,
-      384,  385,  385,  393,  393,  394,   86,  395,  396,  386,
-      396,   86,   86,  397,   12,  398,  391,  354,  399,  354,
-       86,  400,   12,   86,   86,   86,  401,   86,  354,  402,
-      354,   86,  395,  396,  396,  403,  403,  354,   86,  404,
-      354,  405,  406,  406,  399,  354,   86,  407,  354,   12,
-       86,   86,   86,  408,  354,  408,  408,   86,  402,  354,
-       86,   86,  404,  354,  409,  410,  405,  410,  406,   86,
-      407,  354,   12,   86,  411,  412,  408,  354,  408,  408,
-
-       86,   86,   86,  409,  410,  410,  413,  413,   86,   12,
-       86,  414,  354,  415,  354,  408,  408,   86,   86,  354,
-      416,  417,  418,  414,  354,  415,  354,  408,  408,   86,
-      419,  420,  354,  421,  354,  422,  354,  408,  354,   86,
-      423,  354,  420,  354,  421,  354,  422,  354,  354,   86,
-      423,  354,  354,    0,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354
+       12,   12,   12,   12,   12,   12,  368,   12,   12,   12,
+       12,   12,   12,   12,   12,  370,  371,  371,  360,   12,
+
+       12,  372,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,  373,   87,   87,  374,   12,  375,   12,
+       12,  376,   12,   12,   12,   12,   12,   12,  377,  378,
+      372,  378,   12,   12,  360,   87,   12,   12,   12,  379,
+       12,   12,   12,  380,  381,  373,  381,   87,   87,  382,
+      383,  374,  383,  384,  385,  375,  385,   12,  386,  387,
+      376,  387,   12,   12,  388,   12,   12,   12,  377,  378,
+      378,  389,  389,   12,  360,   87,   87,   87,   12,   12,
+       12,  390,  391,  379,  391,   12,   12,  392,  380,  381,
+      381,  393,  393,   87,   87,  382,  383,  383,  394,  394,
+
+      384,  385,  385,  395,  395,   12,  386,  387,  387,  396,
+      396,   12,   12,  397,  360,   12,  398,   87,   12,   87,
+       87,   87,   12,   87,   12,  390,  391,  391,  399,  399,
+      400,   87,  401,  402,  392,  402,   87,   87,  403,   12,
+      404,  397,  360,  360,  405,  360,   87,  406,   12,   87,
+       87,   87,  407,   87,  360,  408,  360,   87,  401,  402,
+      402,  409,  409,  360,   87,  410,  360,  411,  412,  412,
+      405,  360,   87,  413,  360,   12,   87,   87,   87,  414,
+      360,  414,  414,   87,  408,  360,   87,   87,  410,  360,
+      415,  416,  411,  416,  412,   87,  413,  360,   12,   87,
+
+      417,  418,  414,  360,  414,  414,   87,   87,   87,  415,
+      416,  416,  419,  419,   87,   12,   87,  420,  360,  421,
+      360,  414,  414,   87,   87,  360,  422,  423,  424,  420,
+      360,  421,  360,  414,  414,   87,  425,  426,  360,  427,
+      360,  428,  360,  414,  360,   87,  429,  360,  426,  360,
+      427,  360,  428,  360,  360,   87,  429,  360,  360,    0,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360
     } ;
 
-static const flex_int16_t yy_nxt[940] =
+static const flex_int16_t yy_nxt[951] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   18,   19,   17,   17,
        17,   17,   20,   17,   21,   22,   23,   24,   25,   17,
        26,   17,   17,   17,   32,   32,   33,   32,   32,   33,
        36,   34,   36,   42,   34,   29,   29,   29,   30,   35,
-       50,   36,   37,   38,   43,   44,   39,   36,   79,   45,
+       50,   36,   37,   38,   43,   44,   39,   36,   80,   45,
        36,   36,   40,   29,   29,   29,   30,   35,   46,   48,
        37,   38,   41,   47,   36,   49,   36,   53,   36,   36,
-       36,   56,   58,   36,   36,   55,   82,   60,   51,  342,
-       54,   61,   52,   29,   64,   32,   32,   33,   36,   65,
-
-       70,   36,   34,   29,   29,   29,   30,   36,   36,   36,
-       29,   38,   66,   66,   66,   67,   66,   71,   74,   66,
-       68,   72,   36,   36,   73,   36,   77,   78,   36,   76,
-       36,   53,   36,   36,   75,   85,   80,   83,   36,   86,
-       84,   36,   36,   36,   36,   81,   36,   36,   36,   36,
-       36,   36,   93,   89,  337,   98,   88,   29,   64,  101,
-       90,   36,   91,   65,   92,   87,   29,   95,   89,   99,
-       36,  100,   96,   36,   36,   36,   36,   36,   36,  106,
-      105,   85,   36,   36,  102,   36,  107,   36,  103,   36,
-      109,  112,   36,   36,  104,  108,  115,  110,   36,  117,
-
-       36,   36,   36,  335,   36,   36,  122,  111,   29,   29,
-       29,   30,  118,   36,  116,   29,   38,   36,   36,  113,
-      114,  119,  120,  123,   36,   29,   95,  121,   36,  134,
-      131,   96,  130,   36,  125,  124,  126,  126,   66,  127,
-      126,  132,  133,  126,  129,  333,   36,   36,  135,  137,
-       36,   36,   36,  140,  139,   35,   35,  352,   36,   36,
-       85,  141,  141,   66,  142,  141,  160,  145,  141,  144,
-       35,   35,   89,  117,  155,   36,  146,  147,  147,   66,
-      148,  147,  162,   36,  147,  150,  151,  151,   66,  152,
-      151,   36,   36,  151,  154,  120,  161,   36,  156,  156,
-
-       66,  157,  156,   36,   36,  156,  159,  164,  171,  163,
-       29,  166,   29,  168,   36,   36,  167,  170,  169,   35,
-       35,  172,   36,   36,  173,   36,  213,  184,   36,   36,
-      175,   36,  174,   29,  186,  212,   36,  349,  183,  187,
-      177,  176,  178,  178,   66,  179,  178,  182,  348,  178,
-      181,   29,  188,   35,   35,   35,   35,  189,   29,  193,
-       29,  195,  190,   36,  194,   36,  196,   29,  198,   29,
-      200,  191,   36,  199,   36,  201,  219,   29,  204,   29,
-      206,   36,  202,  205,  209,  207,   29,  166,   36,  293,
-      208,  214,  167,   35,   35,   35,   35,   35,   35,   36,
-
-       36,   36,  249,  218,  220,   29,  222,  216,   36,  217,
-      235,  223,   29,  224,  215,  226,   36,  227,  225,  346,
-       35,   35,   36,  228,  228,   66,  229,  228,   29,  186,
-      228,  231,  232,   36,  187,  233,   35,   29,  193,   29,
-      198,  234,   36,  194,  344,  199,   29,  204,  236,   36,
-       35,  241,  205,  242,   36,   35,   35,  270,   35,   35,
-       35,   35,  247,   36,   35,   35,   29,  222,  244,  262,
-      248,   36,  223,  243,  245,  246,   35,  252,   29,  254,
-       29,  256,  258,  342,  255,  259,  257,   35,   35,  339,
-       35,   35,   69,  264,   35,   35,   35,   35,   35,   35,
-
-      267,   35,   35,  275,   35,   35,   35,   35,  271,   35,
-       35,  276,  277,   35,   35,  272,  278,  315,  273,  281,
-       29,  254,  290,  313,  282,  275,  255,  285,  285,   66,
-      286,  285,   35,   35,  285,  288,  295,  298,  296,   35,
-       35,   35,   35,  298,  301,  328,  299,  294,   35,   35,
-      275,   35,   35,   35,  303,   29,  305,  300,  275,   29,
-      307,  306,   35,   35,  302,  308,  337,   36,   35,   35,
-      309,  310,  320,  316,   35,   35,   35,   35,  322,   36,
-       35,   35,  317,  275,  319,  311,   29,  305,  335,  275,
-      318,  321,  306,  323,   35,   35,   35,   35,  330,  329,
-
-       35,   35,  331,  333,  327,   35,   35,  338,   35,   35,
-      353,  340,   35,   35,  350,  325,  275,  315,   35,   35,
-       27,   27,   27,   27,   29,   29,   29,   31,   31,   31,
-       31,   36,   36,   36,   36,   62,  313,   62,   62,   63,
-       63,   63,   63,   65,  269,   65,   65,   35,   35,   35,
-       35,   69,   69,  261,   69,   94,   94,   94,   94,   96,
-      251,   96,   96,  128,  128,  128,  128,  143,  143,  143,
-      143,  149,  149,  149,  149,  153,  153,  153,  153,  158,
-      158,  158,  158,  165,  165,  165,  165,  167,  298,  167,
-      167,  180,  180,  180,  180,  185,  185,  185,  185,  187,
-
-      292,  187,  187,  192,  192,  192,  192,  194,  240,  194,
-      194,  197,  197,  197,  197,  199,  289,  199,  199,  203,
-      203,  203,  203,  205,  284,  205,  205,  210,  210,  210,
-      210,  169,  280,  169,  169,  221,  221,  221,  221,  223,
-      269,  223,  223,  230,  230,  230,  230,  189,  266,  189,
-      189,  196,  211,  196,  196,  201,  261,  201,  201,  207,
-      251,  207,  207,  237,  237,  237,  237,  239,  239,  239,
-      239,  225,  240,  225,  225,  250,  250,  250,  250,  253,
-      253,  253,  253,  255,  238,  255,  255,  260,  260,  260,
-      260,  263,  263,  263,  263,  265,  265,  265,  265,  268,
-
-      268,  268,  268,  274,  274,  274,  274,  279,  279,  279,
-      279,  257,  211,  257,  257,  283,  283,  283,  283,  287,
-      287,  287,  287,  264,  138,  264,  264,  291,  291,  291,
-      291,  297,  297,  297,  297,  304,  304,  304,  304,  306,
-      136,  306,  306,  312,  312,  312,  312,  314,  314,  314,
-      314,  308,   97,  308,  308,  324,  324,  324,  324,  326,
-      326,  326,  326,  332,  332,  332,  332,  334,  334,  334,
-      334,  336,  336,  336,  336,  341,  341,  341,  341,  343,
-      343,  343,  343,  345,  345,  345,  345,  347,  347,  347,
-      347,  351,  351,  351,  351,   36,   30,   59,   57,   36,
-
-       30,  354,   28,   28,    5,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       36,   56,   58,   36,   53,   55,   36,   36,   51,   60,
+       54,   84,   52,   61,   62,   29,   65,   32,   32,   33,
+
+       36,   66,   71,   36,   34,   29,   29,   29,   30,   36,
+       36,   36,   29,   38,   67,   67,   67,   68,   67,   72,
+       75,   67,   69,   73,   36,   36,   74,   36,   78,   79,
+       36,   77,   36,   36,   36,   83,   76,   36,   81,   85,
+       36,   87,   36,   86,   36,   36,   36,   82,   89,   36,
+       36,   36,   36,   94,   90,   36,  100,   88,   36,   36,
+       92,   91,   93,  101,   90,   29,   65,   95,   29,   97,
+      102,   66,   36,   36,   98,   36,   36,  106,   36,   36,
+      125,  108,  107,  103,   36,   36,   36,   86,   36,  104,
+      105,   36,  109,   36,  111,   36,   36,  110,  112,  114,
+
+      117,   36,   36,  119,  120,   36,  348,   36,   36,  138,
+       36,  113,   29,   29,   29,   30,  124,  118,   36,   29,
+       38,   36,   36,  115,  116,  121,  122,  126,   36,   29,
+       97,  123,   36,   36,  134,   98,  127,  133,  140,   36,
+       36,   36,  128,  129,  129,   67,  130,  129,  135,  136,
+      129,  132,   36,   36,   36,   36,  137,  142,  181,  143,
+       86,  144,  144,   67,  145,  144,   35,   35,  144,  147,
+       35,   35,   90,  119,  180,   36,  149,   36,  148,  150,
+      150,   67,  151,  150,   36,  163,  150,  153,  154,  154,
+       67,  155,  154,   36,   36,  154,  157,  164,  122,  158,
+
+       36,  159,  159,   67,  160,  159,  165,   36,  159,  162,
+       36,   36,  167,   29,  170,  168,   29,  172,  166,  171,
+       36,  175,  173,   35,   35,  176,   36,   36,  177,   36,
+      188,  343,   36,  174,   36,  218,  178,  217,   36,   36,
+       36,  179,  182,  182,   67,  183,  182,  187,  186,  182,
+      185,   29,  190,   29,  192,   35,   35,  191,  206,  193,
+       35,   35,   29,  197,  194,   29,  199,   36,  198,   29,
+      202,  200,   29,  204,   36,  203,  195,   36,  205,   29,
+      208,   29,  210,   29,  170,  209,  213,  211,  341,  171,
+       36,  216,  212,  219,   35,   35,   35,   35,   35,   35,
+
+       36,  224,   36,  244,  223,  225,   36,  339,  221,   36,
+      222,   29,  227,   29,  229,  220,  255,  228,  232,  230,
+      231,   36,   36,   36,  233,  233,   67,  234,  233,   29,
+      190,  233,  236,   35,   35,  191,  238,   35,   29,  197,
+       29,  202,  239,   36,  198,  237,  203,   29,  208,   36,
+      241,   36,  281,  209,   35,  247,  248,   36,  358,  240,
+       35,   35,   35,   35,   35,   35,  253,   36,   35,   35,
+      355,   29,  227,  250,  254,  322,  249,  228,  251,  252,
+       35,  258,   29,  260,   29,  262,  264,   36,  261,  265,
+      263,   35,   35,   36,   35,   35,  268,  316,   36,   70,
+
+      270,   35,   35,   35,   35,   35,   35,  273,   35,   35,
+      281,  276,   35,   35,  304,  277,   35,   35,  282,  283,
+       35,   35,  278,  305,  284,  279,  287,   29,  260,   35,
+       35,  288,   36,  261,  291,  291,   67,  292,  291,   35,
+       35,  291,  294,  304,  354,  296,  301,  299,  302,   35,
+       35,   35,   35,  307,  300,   35,   35,  306,   35,  309,
+       35,   35,   29,  311,   29,  313,   35,   35,  312,  281,
+      314,  308,   35,   35,  315,   35,   35,   35,   35,  326,
+       29,  311,  328,   36,  281,  325,  312,   35,   35,  317,
+      281,  324,  327,  323,  329,   35,   35,   35,   35,  336,
+
+      281,   35,   35,  352,  334,  337,  335,  350,   35,   35,
+       35,   35,  359,  346,   35,   35,  356,  348,  344,  345,
+       35,   35,   27,   27,   27,   27,   29,   29,   29,   31,
+       31,   31,   31,   36,   36,   36,   36,   63,  321,   63,
+       63,   64,   64,   64,   64,   66,  319,   66,   66,   35,
+       35,   35,   35,   70,   70,  343,   70,   96,   96,   96,
+       96,   98,  341,   98,   98,  131,  131,  131,  131,  146,
+      146,  146,  146,  152,  152,  152,  152,  156,  156,  156,
+      156,  161,  161,  161,  161,  169,  169,  169,  169,  171,
+      339,  171,  171,  184,  184,  184,  184,  189,  189,  189,
+
+      189,  191,  333,  191,  191,  196,  196,  196,  196,  198,
+      331,  198,  198,  201,  201,  201,  201,  203,  281,  203,
+      203,  207,  207,  207,  207,  209,  321,  209,  209,  214,
+      214,  214,  214,  173,  319,  173,  173,  226,  226,  226,
+      226,  228,  275,  228,  228,  235,  235,  235,  235,  193,
+      267,  193,  193,  200,  257,  200,  200,  205,  304,  205,
+      205,  211,  298,  211,  211,  242,  242,  242,  242,  245,
+      245,  245,  245,  230,  246,  230,  230,  256,  256,  256,
+      256,  259,  259,  259,  259,  261,  295,  261,  261,  266,
+      266,  266,  266,  269,  269,  269,  269,  271,  271,  271,
+
+      271,  274,  274,  274,  274,  280,  280,  280,  280,  285,
+      285,  285,  285,  263,  290,  263,  263,  289,  289,  289,
+      289,  293,  293,  293,  293,  270,  286,  270,  270,  297,
+      297,  297,  297,  303,  303,  303,  303,  310,  310,  310,
+      310,  312,  275,  312,  312,  318,  318,  318,  318,  320,
+      320,  320,  320,  314,  272,  314,  314,  330,  330,  330,
+      330,  332,  332,  332,  332,  338,  338,  338,  338,  340,
+      340,  340,  340,  342,  342,  342,  342,  347,  347,  347,
+      347,  349,  349,  349,  349,  351,  351,  351,  351,  353,
+      353,  353,  353,  357,  357,  357,  357,  215,  267,  257,
+
+      246,  243,  215,  141,  139,   99,   36,   30,   59,   57,
+       36,   30,  360,   28,   28,    5,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360
     } ;
 
-static const flex_int16_t yy_chk[940] =
+static const flex_int16_t yy_chk[951] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -927,101 +930,102 @@ static const flex_int16_t yy_chk[940] =
        18,   14,   11,   11,   13,   14,   11,   46,   46,   14,
        15,   16,   11,   12,   12,   12,   12,   12,   14,   16,
        12,   12,   12,   15,   19,   16,   20,   20,   21,   22,
-       24,   22,   24,   50,   26,   21,   50,   26,   19,  351,
-       20,   26,   19,   31,   31,   32,   32,   32,   39,   31,
-
-       39,   42,   32,   35,   35,   35,   35,   40,   44,   45,
-       35,   35,   37,   37,   37,   37,   37,   39,   42,   37,
-       37,   40,   41,   43,   41,   48,   45,   45,   49,   44,
-       47,   47,   53,   51,   43,   53,   48,   51,   52,   54,
-       52,   55,   56,   58,   54,   49,   57,   59,   60,   73,
-       61,   70,   60,   61,  347,   70,   56,   63,   63,   73,
-       58,   71,   59,   63,   59,   55,   66,   66,   57,   71,
-       74,   72,   66,   72,   75,   76,   77,   78,   79,   78,
-       77,   79,   80,   81,   74,   83,   80,   82,   75,   84,
-       82,   85,   88,   85,   76,   81,   87,   83,   87,   89,
-
-       92,   89,   93,  345,   90,  104,   92,   84,   86,   86,
-       86,   86,   90,   99,   88,   86,   86,   98,  102,   86,
-       86,   91,   91,   93,   91,   94,   94,   91,  101,  104,
-      102,   94,  101,  110,   99,   98,  100,  100,  100,  100,
-      100,  103,  103,  100,  100,  343,  105,  103,  105,  107,
-      109,  107,  111,  110,  109,  113,  113,  341,  121,  118,
-      111,  112,  112,  112,  112,  112,  121,  113,  112,  112,
-      114,  114,  116,  116,  118,  116,  114,  115,  115,  115,
-      115,  115,  123,  123,  115,  115,  117,  117,  117,  117,
-      117,  124,  122,  117,  117,  119,  122,  119,  120,  120,
-
-      120,  120,  120,  125,  130,  120,  120,  125,  131,  124,
-      126,  126,  128,  128,  131,  134,  126,  130,  128,  133,
-      133,  133,  135,  136,  133,  139,  164,  140,  138,  140,
-      134,  164,  133,  141,  141,  163,  163,  338,  139,  141,
-      136,  135,  137,  137,  137,  137,  137,  138,  336,  137,
-      137,  143,  143,  145,  145,  146,  146,  143,  147,  147,
-      149,  149,  145,  155,  147,  161,  149,  151,  151,  153,
-      153,  146,  160,  151,  270,  153,  176,  156,  156,  158,
-      158,  176,  155,  156,  161,  158,  165,  165,  170,  270,
-      160,  170,  165,  172,  172,  173,  173,  174,  174,  175,
-
-      208,  177,  220,  175,  177,  178,  178,  173,  220,  174,
-      208,  178,  180,  180,  172,  182,  182,  183,  180,  334,
-      190,  190,  183,  184,  184,  184,  184,  184,  185,  185,
-      184,  184,  190,  243,  185,  191,  191,  192,  192,  197,
-      197,  202,  202,  192,  332,  197,  203,  203,  209,  209,
-      213,  213,  203,  214,  214,  215,  215,  243,  216,  216,
-      217,  217,  218,  218,  219,  219,  221,  221,  215,  235,
-      219,  235,  221,  214,  216,  217,  227,  227,  228,  228,
-      230,  230,  232,  331,  228,  233,  230,  233,  233,  329,
-      232,  232,  236,  236,  241,  241,  244,  244,  245,  245,
-
-      241,  246,  246,  247,  248,  248,  267,  267,  244,  259,
-      259,  247,  247,  252,  252,  245,  248,  326,  246,  252,
-      253,  253,  267,  324,  259,  316,  253,  262,  262,  262,
-      262,  262,  271,  271,  262,  262,  272,  276,  273,  272,
-      272,  273,  273,  277,  278,  316,  276,  271,  281,  281,
-      299,  278,  278,  282,  282,  285,  285,  277,  300,  287,
-      287,  285,  290,  290,  281,  287,  323,  293,  294,  294,
-      290,  293,  303,  299,  301,  301,  302,  302,  310,  310,
-      303,  303,  300,  317,  302,  294,  304,  304,  322,  328,
-      301,  309,  304,  311,  309,  309,  311,  311,  318,  317,
-
-      318,  318,  319,  321,  314,  319,  319,  328,  330,  330,
-      350,  330,  340,  340,  340,  312,  297,  296,  350,  350,
-      355,  355,  355,  355,  356,  356,  356,  357,  357,  357,
-      357,  358,  358,  358,  358,  359,  295,  359,  359,  360,
-      360,  360,  360,  361,  291,  361,  361,  362,  362,  362,
-      362,  363,  363,  283,  363,  364,  364,  364,  364,  365,
-      279,  365,  365,  366,  366,  366,  366,  367,  367,  367,
-      367,  368,  368,  368,  368,  369,  369,  369,  369,  370,
-      370,  370,  370,  371,  371,  371,  371,  372,  274,  372,
-      372,  373,  373,  373,  373,  374,  374,  374,  374,  375,
-
-      268,  375,  375,  376,  376,  376,  376,  377,  265,  377,
-      377,  378,  378,  378,  378,  379,  263,  379,  379,  380,
-      380,  380,  380,  381,  260,  381,  381,  382,  382,  382,
-      382,  383,  250,  383,  383,  384,  384,  384,  384,  385,
-      242,  385,  385,  386,  386,  386,  386,  387,  239,  387,
-      387,  388,  237,  388,  388,  389,  234,  389,  389,  390,
-      226,  390,  390,  391,  391,  391,  391,  392,  392,  392,
-      392,  393,  212,  393,  393,  394,  394,  394,  394,  395,
-      395,  395,  395,  396,  210,  396,  396,  397,  397,  397,
-      397,  398,  398,  398,  398,  399,  399,  399,  399,  400,
-
-      400,  400,  400,  401,  401,  401,  401,  402,  402,  402,
-      402,  403,  162,  403,  403,  404,  404,  404,  404,  405,
-      405,  405,  405,  406,  108,  406,  406,  407,  407,  407,
-      407,  408,  408,  408,  408,  409,  409,  409,  409,  410,
-      106,  410,  410,  411,  411,  411,  411,  412,  412,  412,
-      412,  413,   69,  413,  413,  414,  414,  414,  414,  415,
-      415,  415,  415,  416,  416,  416,  416,  417,  417,  417,
-      417,  418,  418,  418,  418,  419,  419,  419,  419,  420,
-      420,  420,  420,  421,  421,  421,  421,  422,  422,  422,
-      422,  423,  423,  423,  423,   36,   29,   25,   23,   17,
-
-        6,    5,    4,    3,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       24,   22,   24,   47,   47,   21,   26,   51,   19,   26,
+       20,   51,   19,   26,   26,   31,   31,   32,   32,   32,
+
+       39,   31,   39,   42,   32,   35,   35,   35,   35,   40,
+       44,   45,   35,   35,   37,   37,   37,   37,   37,   39,
+       42,   37,   37,   40,   41,   43,   41,   48,   45,   45,
+       49,   44,   50,   55,   56,   50,   43,   52,   48,   52,
+       53,   54,   57,   53,   58,   59,   54,   49,   56,   60,
+       62,   61,   71,   60,   61,   72,   71,   55,   77,   94,
+       59,   58,   59,   72,   57,   64,   64,   62,   67,   67,
+       73,   64,   73,   74,   67,   75,   76,   77,   78,   79,
+       94,   79,   78,   74,   80,   82,   84,   80,   81,   75,
+       76,   83,   81,   85,   83,   89,   91,   82,   84,   86,
+
+       88,   86,   88,   90,   91,   90,  357,  107,   95,  107,
+       93,   85,   87,   87,   87,   87,   93,   89,  100,   87,
+       87,  101,  104,   87,   87,   92,   92,   95,   92,   96,
+       96,   92,  106,  103,  104,   96,  100,  103,  109,  112,
+      109,  139,  101,  102,  102,  102,  102,  102,  105,  105,
+      102,  102,  113,  111,  105,  138,  106,  111,  139,  112,
+      113,  114,  114,  114,  114,  114,  115,  115,  114,  114,
+      116,  116,  118,  118,  138,  118,  116,  123,  115,  117,
+      117,  117,  117,  117,  120,  123,  117,  117,  119,  119,
+      119,  119,  119,  124,  127,  119,  119,  124,  121,  120,
+
+      121,  122,  122,  122,  122,  122,  125,  125,  122,  122,
+      126,  128,  127,  129,  129,  128,  131,  131,  126,  129,
+      133,  134,  131,  136,  136,  136,  137,  134,  136,  141,
+      143,  353,  143,  133,  142,  168,  136,  167,  167,  158,
+      168,  137,  140,  140,  140,  140,  140,  142,  141,  140,
+      140,  144,  144,  146,  146,  148,  148,  144,  158,  146,
+      149,  149,  150,  150,  148,  152,  152,  164,  150,  154,
+      154,  152,  156,  156,  163,  154,  149,  166,  156,  159,
+      159,  161,  161,  169,  169,  159,  164,  161,  351,  169,
+      174,  166,  163,  174,  176,  176,  177,  177,  178,  178,
+
+      179,  180,  181,  216,  179,  181,  180,  349,  177,  216,
+      178,  182,  182,  184,  184,  176,  225,  182,  187,  184,
+      186,  186,  225,  187,  188,  188,  188,  188,  188,  189,
+      189,  188,  188,  194,  194,  189,  195,  195,  196,  196,
+      201,  201,  206,  206,  196,  194,  201,  207,  207,  212,
+      213,  213,  305,  207,  218,  218,  219,  219,  347,  212,
+      220,  220,  221,  221,  222,  222,  223,  223,  224,  224,
+      344,  226,  226,  220,  224,  305,  219,  226,  221,  222,
+      232,  232,  233,  233,  235,  235,  237,  249,  233,  238,
+      235,  238,  238,  299,  237,  237,  240,  299,  240,  241,
+
+      241,  247,  247,  250,  250,  251,  251,  247,  252,  252,
+      253,  249,  254,  254,  282,  250,  265,  265,  253,  253,
+      258,  258,  251,  282,  254,  252,  258,  259,  259,  273,
+      273,  265,  276,  259,  268,  268,  268,  268,  268,  277,
+      277,  268,  268,  283,  342,  273,  278,  276,  279,  278,
+      278,  279,  279,  284,  277,  287,  287,  283,  288,  288,
+      284,  284,  291,  291,  293,  293,  296,  296,  291,  306,
+      293,  287,  300,  300,  296,  307,  307,  308,  308,  309,
+      310,  310,  316,  316,  322,  308,  310,  309,  309,  300,
+      323,  307,  315,  306,  317,  315,  315,  317,  317,  324,
+
+      334,  324,  324,  340,  322,  325,  323,  338,  325,  325,
+      336,  336,  356,  336,  346,  346,  346,  337,  334,  335,
+      356,  356,  361,  361,  361,  361,  362,  362,  362,  363,
+      363,  363,  363,  364,  364,  364,  364,  365,  332,  365,
+      365,  366,  366,  366,  366,  367,  330,  367,  367,  368,
+      368,  368,  368,  369,  369,  329,  369,  370,  370,  370,
+      370,  371,  328,  371,  371,  372,  372,  372,  372,  373,
+      373,  373,  373,  374,  374,  374,  374,  375,  375,  375,
+      375,  376,  376,  376,  376,  377,  377,  377,  377,  378,
+      327,  378,  378,  379,  379,  379,  379,  380,  380,  380,
+
+      380,  381,  320,  381,  381,  382,  382,  382,  382,  383,
+      318,  383,  383,  384,  384,  384,  384,  385,  303,  385,
+      385,  386,  386,  386,  386,  387,  302,  387,  387,  388,
+      388,  388,  388,  389,  301,  389,  389,  390,  390,  390,
+      390,  391,  297,  391,  391,  392,  392,  392,  392,  393,
+      289,  393,  393,  394,  285,  394,  394,  395,  280,  395,
+      395,  396,  274,  396,  396,  397,  397,  397,  397,  398,
+      398,  398,  398,  399,  271,  399,  399,  400,  400,  400,
+      400,  401,  401,  401,  401,  402,  269,  402,  402,  403,
+      403,  403,  403,  404,  404,  404,  404,  405,  405,  405,
+
+      405,  406,  406,  406,  406,  407,  407,  407,  407,  408,
+      408,  408,  408,  409,  266,  409,  409,  410,  410,  410,
+      410,  411,  411,  411,  411,  412,  256,  412,  412,  413,
+      413,  413,  413,  414,  414,  414,  414,  415,  415,  415,
+      415,  416,  248,  416,  416,  417,  417,  417,  417,  418,
+      418,  418,  418,  419,  245,  419,  419,  420,  420,  420,
+      420,  421,  421,  421,  421,  422,  422,  422,  422,  423,
+      423,  423,  423,  424,  424,  424,  424,  425,  425,  425,
+      425,  426,  426,  426,  426,  427,  427,  427,  427,  428,
+      428,  428,  428,  429,  429,  429,  429,  242,  239,  231,
+
+      217,  214,  165,  110,  108,   70,   36,   29,   25,   23,
+       17,    6,    5,    4,    3,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -1199,9 +1203,9 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
 #undef DPC /* needs to be defined differently the actual lexer */
 #define DPC ((DiskParseContext*)yyextra)
 
-#line 1202 "libxlu_disk_l.c"
+#line 1206 "libxlu_disk_l.c"
 
-#line 1204 "libxlu_disk_l.c"
+#line 1208 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1483,7 +1487,7 @@ YY_DECL
 #line 180 "libxlu_disk_l.l"
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1486 "libxlu_disk_l.c"
+#line 1490 "libxlu_disk_l.c"
 
 	while ( /*CONSTCOND*/1 )		/* loops until end-of-file is reached */
 		{
@@ -1515,14 +1519,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 355 )
+				if ( yy_current_state >= 361 )
 					yy_c = yy_meta[yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 354 );
+		while ( yy_current_state != 360 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1648,76 +1652,81 @@ YY_RULE_SETUP
 #line 201 "libxlu_disk_l.l"
 { libxl_defbool_set(&DPC->disk->discard_enable, false); }
 	YY_BREAK
-/* Note that the COLO configuration settings should be considered unstable.
-  * They may change incompatibly in future versions of Xen. */
 case 15:
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
+#line 202 "libxlu_disk_l.l"
+{ DPC->disk->virtio = 1; }
 	YY_BREAK
+/* Note that the COLO configuration settings should be considered unstable.
+  * They may change incompatibly in future versions of Xen. */
 case 16:
 YY_RULE_SETUP
 #line 205 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
+{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
 	YY_BREAK
 case 17:
-/* rule 17 can match eol */
 YY_RULE_SETUP
 #line 206 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
+{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
 	YY_BREAK
 case 18:
 /* rule 18 can match eol */
 YY_RULE_SETUP
 #line 207 "libxlu_disk_l.l"
-{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
+{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
 	YY_BREAK
 case 19:
 /* rule 19 can match eol */
 YY_RULE_SETUP
 #line 208 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
+{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
 	YY_BREAK
 case 20:
 /* rule 20 can match eol */
 YY_RULE_SETUP
 #line 209 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
 #line 210 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+	YY_BREAK
+case 22:
+/* rule 22 can match eol */
+YY_RULE_SETUP
+#line 211 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("hidden-disk", hidden_disk, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
-case 22:
+case 23:
 YY_RULE_SETUP
-#line 214 "libxlu_disk_l.l"
+#line 215 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
-case 23:
-/* rule 23 can match eol */
+case 24:
+/* rule 24 can match eol */
 YY_RULE_SETUP
-#line 218 "libxlu_disk_l.l"
+#line 219 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
 /* the "/.*" in these patterns ensures that they count as if they
    * matched the whole string, so these patterns take precedence */
-case 24:
+case 25:
 YY_RULE_SETUP
-#line 225 "libxlu_disk_l.l"
+#line 226 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
                     setformat(DPC, yytext);
                  }
 	YY_BREAK
-case 25:
+case 26:
 YY_RULE_SETUP
-#line 231 "libxlu_disk_l.l"
+#line 232 "libxlu_disk_l.l"
 {
                     char *newscript;
                     STRIP(':');
@@ -1731,30 +1740,22 @@ YY_RULE_SETUP
                     free(newscript);
                 }
 	YY_BREAK
-case 26:
+case 27:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
-{ DPC->had_depr_prefix=1; DEPRECATE(0); }
-	YY_BREAK
-case 27:
-YY_RULE_SETUP
 #line 245 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 28:
-*yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
-yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
-YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
 #line 246 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 29:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
-yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
+yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
 #line 247 "libxlu_disk_l.l"
@@ -1762,7 +1763,7 @@ YY_RULE_SETUP
 	YY_BREAK
 case 30:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
-yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
+yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
 #line 248 "libxlu_disk_l.l"
@@ -1770,26 +1771,34 @@ YY_RULE_SETUP
 	YY_BREAK
 case 31:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
-yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
+yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
 #line 249 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 32:
-/* rule 32 can match eol */
+*yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
+yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
+YY_DO_BEFORE_ACTION; /* set up yytext again */
+YY_RULE_SETUP
+#line 250 "libxlu_disk_l.l"
+{ DPC->had_depr_prefix=1; DEPRECATE(0); }
+	YY_BREAK
+case 33:
+/* rule 33 can match eol */
 YY_RULE_SETUP
-#line 251 "libxlu_disk_l.l"
+#line 252 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
 		}
 	YY_BREAK
 /* positional parameters */
-case 33:
-/* rule 33 can match eol */
+case 34:
+/* rule 34 can match eol */
 YY_RULE_SETUP
-#line 258 "libxlu_disk_l.l"
+#line 259 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1816,27 +1825,27 @@ YY_RULE_SETUP
     }
 }
 	YY_BREAK
-case 34:
+case 35:
 YY_RULE_SETUP
-#line 284 "libxlu_disk_l.l"
+#line 285 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
 }
 	YY_BREAK
-case 35:
+case 36:
 YY_RULE_SETUP
-#line 288 "libxlu_disk_l.l"
+#line 289 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
-case 36:
+case 37:
 YY_RULE_SETUP
-#line 291 "libxlu_disk_l.l"
+#line 292 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1839 "libxlu_disk_l.c"
+#line 1848 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -2104,7 +2113,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 355 )
+			if ( yy_current_state >= 361 )
 				yy_c = yy_meta[yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
@@ -2128,11 +2137,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 355 )
+		if ( yy_current_state >= 361 )
 			yy_c = yy_meta[yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
-	yy_is_jam = (yy_current_state == 354);
+	yy_is_jam = (yy_current_state == 360);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2941,4 +2950,4 @@ void yyfree (void * ptr , yyscan_t yyscanner)
 
 #define YYTABLES_NAME "yytables"
 
-#line 291 "libxlu_disk_l.l"
+#line 292 "libxlu_disk_l.l"
diff --git a/tools/libs/util/libxlu_disk_l.h b/tools/libs/util/libxlu_disk_l.h
index 6abeecf..df20fcc 100644
--- a/tools/libs/util/libxlu_disk_l.h
+++ b/tools/libs/util/libxlu_disk_l.h
@@ -694,7 +694,7 @@ extern int yylex (yyscan_t yyscanner);
 #undef yyTABLES_NAME
 #endif
 
-#line 291 "libxlu_disk_l.l"
+#line 292 "libxlu_disk_l.l"
 
 #line 699 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
diff --git a/tools/libs/util/libxlu_disk_l.l b/tools/libs/util/libxlu_disk_l.l
index 3bd639a..d68a59c 100644
--- a/tools/libs/util/libxlu_disk_l.l
+++ b/tools/libs/util/libxlu_disk_l.l
@@ -198,6 +198,7 @@ script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 direct-io-safe,? { DPC->disk->direct_io_safe = 1; }
 discard,?	{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
 no-discard,?	{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
+virtio,?	{ DPC->disk->virtio = 1; }
  /* Note that the COLO configuration settings should be considered unstable.
   * They may change incompatibly in future versions of Xen. */
 colo,?		{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
index 70eed43..50a4d45 100644
--- a/tools/xl/xl_block.c
+++ b/tools/xl/xl_block.c
@@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
         return 0;
     }
 
+    if (disk.virtio) {
+        fprintf(stderr, "block-attach is not supported for Virtio device\n");
+        return 1;
+    }
+
     if (libxl_device_disk_add(ctx, fe_domid, &disk, 0)) {
         fprintf(stderr, "libxl_device_disk_add failed.\n");
         return 1;
@@ -119,6 +124,12 @@ int main_blockdetach(int argc, char **argv)
         fprintf(stderr, "Error: Device %s not connected.\n", argv[optind+1]);
         return 1;
     }
+
+    if (disk.virtio) {
+        fprintf(stderr, "block-detach is not supported for Virtio device\n");
+        return 1;
+    }
+
     rc = !force ? libxl_device_disk_safe_remove(ctx, domid, &disk, 0) :
         libxl_device_disk_destroy(ctx, domid, &disk, 0);
     if (rc) {
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri May 21 13:18:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:18:11 +0000
Authentication-Results: suse.com; dkim=none (message not signed)
 header.d=none;suse.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH] x86/Xen: swap NX determination and GDT setup on BSP
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        Juergen Gross <jgross@suse.com>
References: <12a866b0-9e89-59f7-ebeb-a2a6cec0987a@suse.com>
 <65bbc317-893e-da41-97e0-c8f2e1feb3e2@suse.com>
 <f594a439-ec1d-34fa-3ccf-b162441fa0af@suse.com>
 <3953076f-c2fa-2e2a-4b07-fb610046a27d@suse.com>
 <89c46d1a-9474-0f17-3fda-4809a14adb45@suse.com>
 <2d019c04-415b-293b-052b-26b1ea3be189@suse.com>
 <0103d46f-4feb-e49d-f738-a2bf3d38c216@oracle.com>
 <c2733c58-4514-6df0-3f0b-0f8b65132017@suse.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <9a65639a-1407-02b6-737e-141dd427de3a@oracle.com>
Date: Fri, 21 May 2021 09:17:58 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.1
In-Reply-To: <c2733c58-4514-6df0-3f0b-0f8b65132017@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.88.99]
X-ClientProxiedBy: SA0PR13CA0017.namprd13.prod.outlook.com
 (2603:10b6:806:130::22) To BLAPR10MB5009.namprd10.prod.outlook.com
 (2603:10b6:208:321::10)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8f65d078-588c-40c1-b529-08d91c5adc25
X-MS-TrafficTypeDiagnostic: MN2PR10MB4397:
X-Microsoft-Antispam-PRVS: 
	<MN2PR10MB4397BF5C5730775E710613B68A299@MN2PR10MB4397.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:274;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	ys6VujfkoMfGfuKJnRlYEGtdqsAUKQCgoTlIinzk74z+ebxpq71nZS2ir7GHRpMhMnR/uuSkYq3q5Lxlqq5RyKxzwEKTeB2hpzpT6/0kENwWNWWBwUjvn+//EJOIiJvI/3FQUTIQTQYm6lNCRY4ftsvM1MRyM4xAyCVkyd/FcP/jZhV9st2mW22sUKS3WpQ1KO6vqqg+VJ5GSZJR03Z1afpRkAB+sJM6RfRp6MOi0SNzV7fGtM3hAHSnUXs42+CSv2uPUreQSZQYKkP6xdHu5iQAMsqa7+14WW334Z+TsV5+0di5INa/NfG4zB1YYS96kJHJy59QZ0duEvVAduzL4JWC9QVjw90NUT7v2RAg08OdRIP2ZC8c1lE4oz0cNtfZMwM9QXnIuLvFhzf9ceoT+WxkdZyjNM4YbOH6/4D9eHuhwx0Mqd3F35ORv6Uyivsqu/AdPiRLamWtQcNdYVqsh7ZWqhRUb8hiOSMgsGUQmnC8exnlN/jIJyTlmle+IGEMoICqi3ATO05TbRRfOLUdET7T5NZ7i6xnQgm+4jmjUNQfy6QzJVfv1js4yJEv4F4Zht0xWHtOGRTsWfMIIjfaC4NE2Da/eeqN8bUUdS3zZZHIfbX4vwCMWPh+Xe9wPuZUmKqGNEtYnNkswbfiTHOVR9hmEiRX4Ex1SUTeCw+4OllvsBhu7qUCL3Moaj5cjfQN
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BLAPR10MB5009.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(136003)(366004)(376002)(396003)(39860400002)(346002)(8936002)(83380400001)(6486002)(16576012)(478600001)(26005)(54906003)(956004)(316002)(2616005)(44832011)(6666004)(5660300002)(558084003)(2906002)(36756003)(16526019)(66946007)(66556008)(66476007)(38100700002)(186003)(8676002)(4326008)(53546011)(31686004)(31696002)(86362001)(6916009)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?d2VhR2U1ODUxcEJuUnlkTjBENHdJc2JOaDZtZ0YvK3F2clZ0U1ZqS05kOTUz?=
 =?utf-8?B?ZmtuZ2xoclJHMXB5bHNjU2NoTVcxY0RmYjljbkNKdG5FWnRyMUY5aldHcjBQ?=
 =?utf-8?B?VHN3Mk1BY0dDWC8yVkR4dEZDZlFtQmkyL2hWak9aMjdsRUduZ2ZrOGdvNTla?=
 =?utf-8?B?YnpKZzIrU0Z6TGpiZ0U0ZHZwZkZpRSs1N0Jkb1hMV2hUczdLZlozQmloekZS?=
 =?utf-8?B?b0VrVThKc1ZFTFd6YWh4aVpxUjJ5cGxZdHJlenBseU9HRHdvVmdZOUpMNUJB?=
 =?utf-8?B?K2lSUmx4Mi81eCtUREh2Z3lGY0N3RFR0Q1l4dytDSWJrQU9aa2hIcGJEa0NI?=
 =?utf-8?B?N0htOW8zazl5ckpMZllxSmU3eHRjRFUwMXdIS0ROUXRCeEJQT0ZVcjFlMzJi?=
 =?utf-8?B?NjhjWTZsMDFTQlVERHk3N0ZmQUtEcS81SHhxM2d6QVVXeGo0aDhQQWkzMEN0?=
 =?utf-8?B?QkNGcHRWbnNMWDZwc1BnQkF3U2loY0Z6SFRYWUFKbzZLcTdmTzRFaEFoSnlk?=
 =?utf-8?B?SU1YbzAxTEJNTkF2ZVJLZzRoRzR3eHhKU1NZajZ5YmlOY2J4NGlyTlR6N0lI?=
 =?utf-8?B?NkJ5YlFqbFN1endwTUQ0ZVRPdFRwVkZGeExFdFBYZ3Q4SCtBUitkM2x3Tkdy?=
 =?utf-8?B?NHh6c0cxTzl6djlIZGFSdUNyRGh4bWIyWUFHaFFvR0xkUmFyN2R3bWZsTVBo?=
 =?utf-8?B?NUJ2YkljL2RhSUczcjRyUUhpK3lkeXZKUnllTkFZN1BvS2IwK1hMTmdvUWt0?=
 =?utf-8?B?OTdzdXVUcTdVTnBqZE9SZVo0Lzc3V3FsQXdxRmI0QzU4cURGUGtreG10Y3Nk?=
 =?utf-8?B?MVg3eVVWS0l2SDk1bzVOQnczMW1hcmNDdnlzRndsRzdVWlpxQ3Nud2JJOTM2?=
 =?utf-8?B?WUl5OVNUU1RrUlFlYWJrcDRGU0V3RnN4K2dvQU03NlBqcDFnZE1zcUdkRjdm?=
 =?utf-8?B?MThEU2UzcURvUms1SXl2YkxEekdtTFBmNkZRSVBKQ2VCa25iTjdoQXBJRGlL?=
 =?utf-8?B?b2JjdUdSMUN5ZUEvbzJYYzhMc01tSXZ4R21DRmJEL1NjR09HNzFTaERnRXBK?=
 =?utf-8?B?TmJ3MjhUTlhGb1RjZVdhZXB5b2g4bG9zNlBObFhhbmpLTWIwODVGTTVoOTN6?=
 =?utf-8?B?L3B2dkxMS3I2WHJOWUhDbGJkRWU2RkNwdC9OYktvRU1mQityZHFLbGV3cGt6?=
 =?utf-8?B?YU5ZTjlKSHBRT0NHbU9TME0yWmIxYVNhK0hGSThnSXRJc25OcFZlSUlGSnE3?=
 =?utf-8?B?b2wyNkJDcERhRlpxT1hMcHUybVloRTd0MWt6VzFpSHdjWitNcXJQTmY0aVFT?=
 =?utf-8?B?emROZDZwRGg1R3diaFZIbGJkcTBndnZycTJ1Z2JUMENla3dBeUxRekgrMElv?=
 =?utf-8?B?dG13SVFYSk1JSTNiQlE1ZTR4TmFQTDhLeERwSitVU1hBbGlyQURlMkFzNndz?=
 =?utf-8?B?ZEJ5WlIwa3dPQVNid2draWdFeTVKNmYxVkVES2hqVk1RUGx2WVRNOElCOTJQ?=
 =?utf-8?B?dUVlcXZudVBwSm1OTG02emxsSk8xY210ajV0dm9ybHdaWEFFMmQ3RHhIZmRQ?=
 =?utf-8?B?UHJvb1l5U2tJaGljQ1A3alVkc3dZSVc2Z1BTZ0N1UjJjRHhLMlh1UmJBNGxl?=
 =?utf-8?B?bUxHVk41SmxaNlVMbkxTS291UG5qeEZYend6dzUzSWplUUdTNTY0VU5rNkNH?=
 =?utf-8?B?eERKc0tia2x6dVFTZ1Y4Y1ZXWisyeU5CSDZVZ043VXpyTGovSjd1UWN3RXZ0?=
 =?utf-8?Q?CDFXyXs63gN+kxhtbbKz9Vnu3Pdh4+SKlpALxXe?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8f65d078-588c-40c1-b529-08d91c5adc25
X-MS-Exchange-CrossTenant-AuthSource: BLAPR10MB5009.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 21 May 2021 13:18:01.6963
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 2W8ydE1nNOHuM968WAa9bbx7onStnt6WEEuX8DuWsYDzumsNdiSeDgsUELMQDeupbsiOT4wAgfa/vKKqqk7wIwgd7SLK3jL17hBHlmSoSLY=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: MN2PR10MB4397
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9990 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 spamscore=0 bulkscore=0
 suspectscore=0 mlxlogscore=999 adultscore=0 malwarescore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105210077
X-Proofpoint-GUID: -EKwBMJFOzi0QohzCyDr6C06h5CoYSdQ
X-Proofpoint-ORIG-GUID: -EKwBMJFOzi0QohzCyDr6C06h5CoYSdQ
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9990 signatures=668683
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 clxscore=1015 impostorscore=0
 mlxscore=0 lowpriorityscore=0 malwarescore=0 mlxlogscore=999
 suspectscore=0 adultscore=0 priorityscore=1501 spamscore=0 phishscore=0
 bulkscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105210077


On 5/21/21 9:15 AM, Jan Beulich wrote:
> On 21.05.2021 15:12, Boris Ostrovsky wrote:
>>
>> Did something change recently that made this a problem? That commit has been there for 3 years.
> He happened to try on a system where NX was turned off in the BIOS.


Yes, I missed that part.


-boris




From xen-devel-bounces@lists.xenproject.org Fri May 21 13:31:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:31:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131298.245462 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5Fa-0000rA-W9; Fri, 21 May 2021 13:31:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131298.245462; Fri, 21 May 2021 13:31:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5Fa-0000r3-Sp; Fri, 21 May 2021 13:31:38 +0000
Received: by outflank-mailman (input) for mailman id 131298;
 Fri, 21 May 2021 13:31:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=qZ6I=KQ=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lk5FY-0000qx-Or
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 13:31:36 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 973b7e06-c570-49aa-8e04-11fa01becfca;
 Fri, 21 May 2021 13:31:35 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 7D4DEAC7A;
 Fri, 21 May 2021 13:31:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 973b7e06-c570-49aa-8e04-11fa01becfca
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621603894; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cqYoKHr/OUSjW/0A9yXkvaNEkKfdi2ibW6kI/dsMNb8=;
	b=cvrGykJ7GBRIFe2WqEHRv6zGcU699awRp6T1im/y9c/GIu6naITbpZY8YMG4VsaEf2XRHt
	NwDaodBCGX2fv2jET8tdm4YJaRalVqM7h8sNaCpFD8npMirpr9rasly6g1ZSJgOHsk9hsu
	h7FVujJp3f2113Vm7t3v4iHQYlttCKw=
Subject: Re: [PATCH v3 1/2] libelf: don't attempt to parse __xen_guest for PVH
From: Jan Beulich <jbeulich@suse.com>
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210520123012.89855-1-roger.pau@citrix.com>
 <20210520123012.89855-2-roger.pau@citrix.com>
 <29ecaaaf-d096-e8af-fc6f-292ff2b3d5ae@suse.com>
Message-ID: <6dd3d6fe-04cc-4b9d-e92f-6d4c81785300@suse.com>
Date: Fri, 21 May 2021 15:31:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <29ecaaaf-d096-e8af-fc6f-292ff2b3d5ae@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 20.05.2021 14:35, Jan Beulich wrote:
> On 20.05.2021 14:30, Roger Pau Monne wrote:
>> The legacy __xen_guest section doesn't support the PHYS32_ENTRY
>> elfnote, so it's pointless to attempt to parse the elfnotes from that
>> section when called from an hvm container.
>>
>> Suggested-by: Jan Beulich <jbeulich@suse.com>
>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>> ---
>> Changes since v2:
>>  - New in this version.
>> ---
>>  xen/common/libelf/libelf-dominfo.c | 6 ++----
>>  1 file changed, 2 insertions(+), 4 deletions(-)
>>
>> diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
>> index 69c94b6f3bb..abea1011c18 100644
>> --- a/xen/common/libelf/libelf-dominfo.c
>> +++ b/xen/common/libelf/libelf-dominfo.c
>> @@ -577,10 +577,8 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
>>  
>>      }
>>  
>> -    /*
>> -     * Finally fall back to the __xen_guest section.
>> -     */
>> -    if ( xen_elfnotes == 0 )
>> +    /* Finally fall back to the __xen_guest section for PV guests only. */
>> +    if ( xen_elfnotes == 0 && !hvm )
> 
> Isn't this depending on the 2nd patch adding the function parameter?
> IOW doesn't this break the build, even if just transiently? With the
> respective hunk from patch 2 moved here
> Reviewed-by: Jan Beulich <jbeulich@suse.com>

With the intention of committing I did this hunk movement, noticing
that
- it's more than just one hunk,
- a tool stack maintainer ack is actually going to be needed (all
  respective hunks are now in the first patch).
I'll keep the massaged patches locally, until the missing ack arrives.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 21 13:33:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:33:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131304.245473 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5H9-0001Tj-Bs; Fri, 21 May 2021 13:33:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131304.245473; Fri, 21 May 2021 13:33:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5H9-0001Tc-7i; Fri, 21 May 2021 13:33:15 +0000
Received: by outflank-mailman (input) for mailman id 131304;
 Fri, 21 May 2021 13:33:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk5H8-0001TS-7u; Fri, 21 May 2021 13:33:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk5H8-0007z0-3a; Fri, 21 May 2021 13:33:14 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk5H7-00050x-Qr; Fri, 21 May 2021 13:33:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lk5H7-0006wt-Q0; Fri, 21 May 2021 13:33:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jmCZLawip62gSHYDFPUugO2WQZV8cjrk1jI6cR0REKo=; b=unpb1dpbQyid0Wyye8/SBbU/BI
	FTbbXRGdvgW0q2CDm6v3i/NYkcqLLsCWn36eT/lsvDpA+rT2AbzY+Gm1UOLkFtJLZwnYoZTSNyF7P
	FuA8jV1N+BsliVyDPBGHttzx9ymMiBAXrrY29Ftekx2md9fOZr87R57BG+nNHYzsN4V0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162111-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162111: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=04ae17218deec25c6f488609c5e2ca9c419d2c4b
X-Osstest-Versions-That:
    ovmf=15ee7b76891a78141e6e30ef3f8572e8d6b326d2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 13:33:13 +0000

flight 162111 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162111/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 04ae17218deec25c6f488609c5e2ca9c419d2c4b
baseline version:
 ovmf                 15ee7b76891a78141e6e30ef3f8572e8d6b326d2

Last test of basis   162071  2021-05-19 01:40:34 Z    2 days
Testing same since   162111  2021-05-21 07:11:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Garrett Kirkendall <garrett.kirkendall@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   15ee7b7689..04ae17218d  04ae17218deec25c6f488609c5e2ca9c419d2c4b -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 21 13:45:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 13:45:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131312.245487 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5TP-000319-Jd; Fri, 21 May 2021 13:45:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131312.245487; Fri, 21 May 2021 13:45:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5TP-000312-GA; Fri, 21 May 2021 13:45:55 +0000
Received: by outflank-mailman (input) for mailman id 131312;
 Fri, 21 May 2021 13:41:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=DwWI=KQ=lechnology.com=david@srs-us1.protection.inumbo.net>)
 id 1lk5PO-0002v2-PK
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 13:41:46 +0000
Received: from vern.gendns.com (unknown [98.142.107.122])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3feab211-ee3e-457d-8523-4b95c8442a83;
 Fri, 21 May 2021 13:41:45 +0000 (UTC)
Received: from [2600:1700:4830:165f::fb2] (port=60256)
 by vern.gendns.com with esmtpsa (TLS1.2) tls
 TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (Exim 4.94.2)
 (envelope-from <david@lechnology.com>)
 id 1lk5P6-0003HQ-MU; Fri, 21 May 2021 09:41:41 -0400
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3feab211-ee3e-457d-8523-4b95c8442a83
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=lechnology.com; s=default; h=Content-Transfer-Encoding:Content-Type:
	In-Reply-To:MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender
	:Reply-To:Content-ID:Content-Description:Resent-Date:Resent-From:
	Resent-Sender:Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:
	List-Unsubscribe:List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=9lP4cKqfXHid5MksziM3FSc3DFRksG808KiNvq/BVYE=; b=hmX2PqtAcvfv2cTaWaKqbBzYnw
	DxSzvIsTFnCd9Ad9cdgXcoZVPVp+Z0y4/scrHZ6lQs88LnWXAa1YT4bs9Ia475q2X9vr1EOloNLCE
	THXs6NcMbbEEyxq3reOMRol8oP6jgXycuu8nbCc+4m9ZxjpBaxuPlbxnbeqkFLH5sNb79jTHyusGn
	Aj53qnao2AuiRFV92GruAtnjQKtCSKCaf2iZEz2O8VAEap+BHfbNyR/Ot54sQ9hGJPQZPKsE8tb7U
	0yMzptP6/L6YqlsKaZ6pc6CV/OvWHV6P877WAEb11FA4+jFhLuJ5+9IAG3jPixiqPaX93WGLZqRzV
	iYx9lswA==;
Subject: Re: [PATCH 11/11] drm/tiny: drm_gem_simple_display_pipe_prepare_fb is
 the default
To: Daniel Vetter <daniel.vetter@ffwll.ch>,
 DRI Development <dri-devel@lists.freedesktop.org>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
 Daniel Vetter <daniel.vetter@intel.com>, Joel Stanley <joel@jms.id.au>,
 Andrew Jeffery <andrew@aj.id.au>, =?UTF-8?Q?Noralf_Tr=c3=b8nnes?=
 <noralf@tronnes.org>, Linus Walleij <linus.walleij@linaro.org>,
 Emma Anholt <emma@anholt.net>,
 Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Maxime Ripard <mripard@kernel.org>, Thomas Zimmermann <tzimmermann@suse.de>,
 Sam Ravnborg <sam@ravnborg.org>, Alex Deucher <alexander.deucher@amd.com>,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org,
 xen-devel@lists.xenproject.org
References: <20210521090959.1663703-1-daniel.vetter@ffwll.ch>
 <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
From: David Lechner <david@lechnology.com>
Message-ID: <15839b4b-04e5-c65a-8e7c-6e1bdce9757f@lechnology.com>
Date: Fri, 21 May 2021 08:41:34 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
MIME-Version: 1.0
In-Reply-To: <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-AntiAbuse: This header was added to track abuse, please include it with any abuse report
X-AntiAbuse: Primary Hostname - vern.gendns.com
X-AntiAbuse: Original Domain - lists.xenproject.org
X-AntiAbuse: Originator/Caller UID/GID - [47 12] / [47 12]
X-AntiAbuse: Sender Address Domain - lechnology.com
X-Get-Message-Sender-Via: vern.gendns.com: authenticated_id: davidmain+lechnology.com/only user confirmed/virtual account not confirmed
X-Authenticated-Sender: vern.gendns.com: davidmain@lechnology.com
X-Source: 
X-Source-Args: 
X-Source-Dir: 

On 5/21/21 4:09 AM, Daniel Vetter wrote:
> Goes through all the drivers and deletes the default hook since it's
> the default now.

Acked-by: David Lechner <david@lechnology.com>



From xen-devel-bounces@lists.xenproject.org Fri May 21 14:09:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 14:09:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131323.245500 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5qA-0005X2-IL; Fri, 21 May 2021 14:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131323.245500; Fri, 21 May 2021 14:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5qA-0005Wv-FU; Fri, 21 May 2021 14:09:26 +0000
Received: by outflank-mailman (input) for mailman id 131323;
 Fri, 21 May 2021 14:09:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=KNw9=KQ=tronnes.org=noralf@srs-us1.protection.inumbo.net>)
 id 1lk5q8-0005Wp-VP
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 14:09:25 +0000
Received: from smtp.domeneshop.no (unknown [2a01:5b40:0:3005::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a8d50dd-b3ba-4dba-a552-6ae1c2338342;
 Fri, 21 May 2021 14:09:23 +0000 (UTC)
Received: from [2a01:799:95f:4600:cca0:57ac:c55d:a485] (port=50828)
 by smtp.domeneshop.no with esmtpsa (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128)
 (Exim 4.92) (envelope-from <noralf@tronnes.org>)
 id 1lk5q6-0005ru-3Q; Fri, 21 May 2021 16:09:22 +0200
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a8d50dd-b3ba-4dba-a552-6ae1c2338342
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=tronnes.org
	; s=ds202012; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject:Sender:Reply-To:
	Content-ID:Content-Description:Resent-Date:Resent-From:Resent-Sender:
	Resent-To:Resent-Cc:Resent-Message-ID:List-Id:List-Help:List-Unsubscribe:
	List-Subscribe:List-Post:List-Owner:List-Archive;
	bh=mzxTAoQKZErI0PSlV6Tx+3ILAHO7jFNRjBMhpv39Rlg=; b=bILOGlhtsDznJmhl3TFXvnvplE
	Ixe5T9HBrMiDZAMMx686k9FpKn1MoqOli+IjA8R8W5lHgPq/6V5gQAy0cvXVppCuGncYVYXrfJ1iA
	mDTQj4RQ0x2cVmKxxqif50mnzTNJd1YW5eYwJwKp0Ls9DwseAh9+YEu/yXcVzyi/ZHKjxENmXmIsj
	jJua4+BnHtIVIU6l/fRg8ZFtLHYf2pkbwZwOlWiMbiYsR03CkDUc5iDBh+KZW/j2/rKY1cGCxELJy
	qUoNtLvmohAEWOPP9E7yLiBxq7gf6X4HjlOOtsTm5+Jf46IQdc8hXtE1zPrKNoHY/NCKNzIbW8tjL
	fJseoIJg==;
Subject: Re: [PATCH 11/11] drm/tiny: drm_gem_simple_display_pipe_prepare_fb is
 the default
To: Daniel Vetter <daniel.vetter@ffwll.ch>,
 DRI Development <dri-devel@lists.freedesktop.org>
Cc: Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
 Daniel Vetter <daniel.vetter@intel.com>, Joel Stanley <joel@jms.id.au>,
 Andrew Jeffery <andrew@aj.id.au>, Linus Walleij <linus.walleij@linaro.org>,
 Emma Anholt <emma@anholt.net>, David Lechner <david@lechnology.com>,
 Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>,
 Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
 Maxime Ripard <mripard@kernel.org>, Thomas Zimmermann <tzimmermann@suse.de>,
 Sam Ravnborg <sam@ravnborg.org>, Alex Deucher <alexander.deucher@amd.com>,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org,
 xen-devel@lists.xenproject.org
References: <20210521090959.1663703-1-daniel.vetter@ffwll.ch>
 <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
From: =?UTF-8?Q?Noralf_Tr=c3=b8nnes?= <noralf@tronnes.org>
Message-ID: <0b2b3fd7-7740-4c4e-78a5-142a6e9892ea@tronnes.org>
Date: Fri, 21 May 2021 16:09:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.1
MIME-Version: 1.0
In-Reply-To: <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit



On 21.05.2021 11.09, Daniel Vetter wrote:
> Goes through all the drivers and deletes the default hook since it's
> the default now.
> 
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>

Acked-by: Noralf Trønnes <noralf@tronnes.org>


From xen-devel-bounces@lists.xenproject.org Fri May 21 14:14:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 14:14:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131331.245515 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5uz-0006x0-7N; Fri, 21 May 2021 14:14:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131331.245515; Fri, 21 May 2021 14:14:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk5uz-0006wt-4F; Fri, 21 May 2021 14:14:25 +0000
Received: by outflank-mailman (input) for mailman id 131331;
 Fri, 21 May 2021 14:14:23 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Etof=KQ=epam.com=prvs=5775dbad49=oleksandr_andrushchenko@srs-us1.protection.inumbo.net>)
 id 1lk5ux-0006wl-AI
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 14:14:23 +0000
Received: from mx0a-0039f301.pphosted.com (unknown [148.163.133.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id add62575-3631-4132-ba4d-9b582952caab;
 Fri, 21 May 2021 14:14:22 +0000 (UTC)
Received: from pps.filterd (m0174676.ppops.net [127.0.0.1])
 by mx0a-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14LED83G001232; Fri, 21 May 2021 14:13:16 GMT
Received: from eur05-vi1-obe.outbound.protection.outlook.com
 (mail-vi1eur05lp2171.outbound.protection.outlook.com [104.47.17.171])
 by mx0a-0039f301.pphosted.com with ESMTP id 38pe3j81pv-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Fri, 21 May 2021 14:13:16 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com (2603:10a6:20b:153::17)
 by AM0PR03MB5699.eurprd03.prod.outlook.com (2603:10a6:208:173::28)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4129.26; Fri, 21 May
 2021 14:13:13 +0000
Received: from AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::3541:4069:60ca:de3d]) by AM0PR03MB6324.eurprd03.prod.outlook.com
 ([fe80::3541:4069:60ca:de3d%6]) with mapi id 15.20.4129.035; Fri, 21 May 2021
 14:13:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: add62575-3631-4132-ba4d-9b582952caab
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=synphXwSrLm3nRPt+NsSMiX7qe0j9811C0H3uUFQWJc=;
 b=nbWDGCP0Hn0kqaztMmur09A9hQrDbhQuH0svSqUg8zNNuvSsshQbThNVJjOmjcApv6nUeX5mr+20amGK7BCT3YjQ9Jigikj/afbFS5JwrW41OMqzfyU5i5dIYi5X0/Z8pccdAQbcnpE3U9MhAGrvzziUaUwpQ7lI0htWIJvFBPaGbYu0aJZG9iLGkIvLxntoPv+CH7nmFprFTkvx6EKtzKghVS8l4Wt5ZVU1IFVWINWdVjXu7Y/THEk6km9gdOJ6Di1ERUNDDECNi3ZajD9IUxf4gLWVkpXcvGTL6J4E/dminsi+FVIdDyqJD7Eo4PH3mEyWZs/0fCSM9A+QDBhtmA==
From: Oleksandr Andrushchenko <Oleksandr_Andrushchenko@epam.com>
To: Daniel Vetter <daniel.vetter@ffwll.ch>,
        DRI Development <dri-devel@lists.freedesktop.org>
CC: Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
        Daniel Vetter <daniel.vetter@intel.com>,
        Joel Stanley <joel@jms.id.au>,
        Andrew Jeffery <andrew@aj.id.au>,
        Noralf Trønnes <noralf@tronnes.org>,
        Linus Walleij <linus.walleij@linaro.org>,
        Emma Anholt <emma@anholt.net>,
        David Lechner <david@lechnology.com>,
        Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>,
        Maxime Ripard <mripard@kernel.org>,
        Thomas Zimmermann <tzimmermann@suse.de>,
        Sam Ravnborg <sam@ravnborg.org>,
        Alex Deucher <alexander.deucher@amd.com>,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        "linux-aspeed@lists.ozlabs.org" <linux-aspeed@lists.ozlabs.org>,
        "linux-arm-kernel@lists.infradead.org" <linux-arm-kernel@lists.infradead.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Subject: Re: [PATCH 11/11] drm/tiny: drm_gem_simple_display_pipe_prepare_fb is
 the default
Thread-Topic: [PATCH 11/11] drm/tiny: drm_gem_simple_display_pipe_prepare_fb
 is the default
Thread-Index: AQHXTiEf04OO3eCHAEO/U1ulzURrU6rt+psA
Date: Fri, 21 May 2021 14:13:13 +0000
Message-ID: <5c75b291-0b47-92de-e90d-2842b0e5f996@epam.com>
References: <20210521090959.1663703-1-daniel.vetter@ffwll.ch>
 <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
In-Reply-To: <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: ffwll.ch; dkim=none (message not signed)
 header.d=none;ffwll.ch; dmarc=none action=none header.from=epam.com;
x-originating-ip: [185.199.97.5]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 0bdf988e-a91a-452c-3aeb-08d91c62925a
x-ms-traffictypediagnostic: AM0PR03MB5699:
x-microsoft-antispam-prvs: 
 <AM0PR03MB56997BA8B412A973FBE143CCE7299@AM0PR03MB5699.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:962;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <C5B75DA063347C46B779C8F812FBED80@eurprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM0PR03MB6324.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 0bdf988e-a91a-452c-3aeb-08d91c62925a
X-MS-Exchange-CrossTenant-originalarrivaltime: 21 May 2021 14:13:13.6987
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: cEywJNdU8get7rLb9Yld0YaovUvaWTjG/DVX9Yzk4y0wjgYviEhHRz2i16qk5xKJ80NR93HtaD8Fu7Vp44rUPfHpuMxogbkWjjBRKQRZZd2jj3XUFN9kpMCKGXpwIG25
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5699
X-Proofpoint-GUID: HF7xGuFoEAPBAFQk1W4RxRM9bpA7WEMd
X-Proofpoint-ORIG-GUID: HF7xGuFoEAPBAFQk1W4RxRM9bpA7WEMd
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 lowpriorityscore=0 mlxscore=0
 mlxlogscore=955 adultscore=0 phishscore=0 suspectscore=0 bulkscore=0
 malwarescore=0 clxscore=1011 impostorscore=0 priorityscore=1501
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105210080

On 5/21/21 12:09 PM, Daniel Vetter wrote:
> Goes through all the drivers and deletes the default hook since it's
> the default now.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
Acked-by: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>


From xen-devel-bounces@lists.xenproject.org Fri May 21 14:32:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 14:32:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131342.245526 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk6CU-0000pP-Rk; Fri, 21 May 2021 14:32:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131342.245526; Fri, 21 May 2021 14:32:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk6CU-0000pI-O2; Fri, 21 May 2021 14:32:30 +0000
Received: by outflank-mailman (input) for mailman id 131342;
 Fri, 21 May 2021 14:32:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk6CT-0000p8-OW; Fri, 21 May 2021 14:32:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk6CT-0000dG-J6; Fri, 21 May 2021 14:32:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk6CT-0007Uh-97; Fri, 21 May 2021 14:32:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lk6CT-0008BH-8c; Fri, 21 May 2021 14:32:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X9N2Vz161bsjgMjy8KoRA4C++ZhXpPQ2sFcVA9jbOvQ=; b=XCf09wE91xZVWV+/agRH1rLcjs
	CBg2X8fepgYXI5Ga0yJOpv3EWgvif8IEFwPnbemaHeUtGqYOp+QeplTCay8l+f5ZaO7NlNyA2EWAC
	TClxCQGk2Jtc1w2SI83eGzozWUG1dEVuVxzCRz1aM0lToc9OVxUnO8i7YPeKqc6v5yis=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162107-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162107: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 14:32:29 +0000

flight 162107 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162107/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162102
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162102
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162102
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162102
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162102
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162102
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162102
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162102
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162102
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162102
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162102
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162107  2021-05-21 01:53:46 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri May 21 16:58:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 16:58:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131351.245540 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk8Tw-0005WV-Gy; Fri, 21 May 2021 16:58:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131351.245540; Fri, 21 May 2021 16:58:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lk8Tw-0005WO-Cc; Fri, 21 May 2021 16:58:40 +0000
Received: by outflank-mailman (input) for mailman id 131351;
 Fri, 21 May 2021 16:58:39 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk8Tv-0005WE-Mz; Fri, 21 May 2021 16:58:39 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk8Tv-0003XW-BC; Fri, 21 May 2021 16:58:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lk8Tu-0005sI-Ul; Fri, 21 May 2021 16:58:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lk8Tu-0001ao-UG; Fri, 21 May 2021 16:58:38 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EchusKDaoiuYobivbxg7c5sdWmriCiUyux0eD+YQ1j4=; b=wIMpuGYjjuLDpVj+5cj2NmMCjp
	rtERM9e3j6tTy7BXxkAXqxgcg3joESFGkD+Aj3ssOSBoA5QJ036AMsUC1luyJ+oUz981TplG/IjcB
	KBgvS06o6A5n/aarCkzKKO+JCxNfIqrKV1AGwvmkrW8rzrKiLOXkd/1Ah32KWzgUOsiE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162109-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162109: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ba816d3c265cfe9ed0ee8347eab63cf5ac3cf5dc
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 16:58:38 +0000

flight 162109 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162109/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                ba816d3c265cfe9ed0ee8347eab63cf5ac3cf5dc
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  293 days
Failing since        152366  2020-08-01 20:49:34 Z  292 days  494 attempts
Testing same since   162109  2021-05-21 03:29:04 Z    0 days    1 attempts

------------------------------------------------------------
6073 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1649350 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 21 19:46:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 19:46:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131363.245564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkB6D-0003x3-IF; Fri, 21 May 2021 19:46:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131363.245564; Fri, 21 May 2021 19:46:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkB6D-0003wu-FG; Fri, 21 May 2021 19:46:21 +0000
Received: by outflank-mailman (input) for mailman id 131363;
 Fri, 21 May 2021 19:46:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08+4=KQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1lkB6C-0003wW-5J
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 19:46:20 +0000
Received: from mail-lj1-x22e.google.com (unknown [2a00:1450:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6d4f6e20-0052-4c6a-8046-39a492ce9546;
 Fri, 21 May 2021 19:46:14 +0000 (UTC)
Received: by mail-lj1-x22e.google.com with SMTP id w15so25277282ljo.10
 for <xen-devel@lists.xenproject.org>; Fri, 21 May 2021 12:46:14 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id y15sm737337lje.74.2021.05.21.12.46.12
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 21 May 2021 12:46:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6d4f6e20-0052-4c6a-8046-39a492ce9546
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=UV26yBYRduGdA999eGiqvy5axNutGBZWQtuLH3UYFSw=;
        b=XAbkZ/mn4AotI/fpObiQKsd8oKwoS3HaGffQgZQo0e6xeLQU4J2NTlBoq4Z1deOLBA
         3WjEj8HidH4cCnoDSsYXY64ezGwFPAs6oB1rRO4RzCZDMXSkuzMLV7lfPh3kewuxKsb7
         sE2BD7uyL2k/gxphJQXyZ8thw2hJF0puOgB0UdNXF/w+R/AiIa+WCnd6jCHpYogHvQyu
         tnJ1Qfdu3/kfXksm7BE+nqDrJc4kF+SGpCCTQ/3xjjGQ0kRp/DwpPIqPzTUEsTCTmvRs
         YgqCKjt8RsXB0YNMbPDkXH0xDbhDEoYbtDBdsN1WKAWVj2k5hkB1je4umxenL0LaLS0P
         AWeA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=UV26yBYRduGdA999eGiqvy5axNutGBZWQtuLH3UYFSw=;
        b=qXhTON0ft77So1whFTCNgDzHto5tjWz0Fg1LwY2YWn56it6UDy8KaU8Z5mXnYEDi3m
         lgb23AvT3dXkIItbSBappCzsfK3+h9xrJCEvsV3uLiSJXsTcydwuLkELdBUK+koPxvFU
         YZl1lJyShBvWvdS3IuVW+LZA/ugds6VnW1QvY4WOLQjFxJxo+2tTjWskOx+oQeWUphtu
         ug3ABmyRTz1wK6wX5J98ScYiErJeatgy7PN+WD2f+6hUTBKMNATG3/XwdhHvILFHw51m
         I/c3IPSKb6UoTYya5vk+2ya2SU7wh1G68xYj3iQ4Q07XJUgFYY8DuKxKlfSrYEkEEYP4
         QPQA==
X-Gm-Message-State: AOAM531DUAMltied/BpeO1GV6M0nNoMhxfYelYM5ds4vcSwwWSqdvZJ+
	rTCzE95bavLV+DIPrvOwUdhToTLvEqa9kQ==
X-Google-Smtp-Source: ABdhPJxxBzNmjkT17vRcYiIVRBFNMMAFsZMz8cKn3Jn8kSAPp6fEHG5UPbOhKlD+iqbDMau24JwobA==
X-Received: by 2002:a2e:860f:: with SMTP id a15mr7916860lji.3.1621626372916;
        Fri, 21 May 2021 12:46:12 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <julien.grall@arm.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>,
	Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
Subject: [RESEND PATCH V5 2/2] libxl: Introduce basic virtio-mmio support on Arm
Date: Fri, 21 May 2021 22:46:01 +0300
Message-Id: <1621626361-29076-3-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1621626361-29076-1-git-send-email-olekstysh@gmail.com>
References: <1621626361-29076-1-git-send-email-olekstysh@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Julien Grall <julien.grall@arm.com>

This patch introduces helpers to allocate Virtio MMIO params
(IRQ and memory region) and to create the corresponding device node
in the Guest device tree with the allocated params. In order to deal
with multiple Virtio devices, reserve the corresponding ranges.
For now, we reserve 1MB for memory regions and 10 SPIs.

As these helpers should be used for every Virtio device attached
to the Guest, call them for the Virtio disk(s).

Please note, with statically allocated Virtio IRQs there is
a risk of a clash with the physical IRQs of passthrough devices.
For the first version this is fine, but we should consider allocating
the Virtio IRQs automatically. Thankfully, we know in advance which
IRQs will be used for passthrough, so we are able to choose
non-clashing ones.
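
The reservation scheme described above amounts to a simple bump allocator over
a fixed range. Here is a minimal standalone sketch of that idea, using
hypothetical base/IRQ constants purely for illustration (the real values are
the GUEST_VIRTIO_MMIO_* defines in xen/include/public/arch-arm.h, and the real
helpers below also take a libxl__gc for logging):

```c
#include <stdint.h>

/*
 * Hypothetical values mirroring the reservation described above:
 * a 1MB MMIO window and 10 SPIs, with 0x200 bytes per device.
 * These are NOT the actual Xen constants.
 */
#define VIRTIO_MMIO_BASE      0x2000000ULL
#define VIRTIO_MMIO_SIZE      0x100000ULL   /* 1MB reserved */
#define VIRTIO_MMIO_DEV_SIZE  0x200ULL      /* per-device region */
#define VIRTIO_SPI_FIRST      33u
#define VIRTIO_SPI_LAST       42u           /* 10 SPIs reserved */

static uint64_t next_base = VIRTIO_MMIO_BASE;
static uint32_t next_irq  = VIRTIO_SPI_FIRST;

/* Bump-allocate one MMIO window; 0 means the reserved range is exhausted. */
static uint64_t alloc_base(void)
{
    uint64_t base = next_base;

    if (next_base + VIRTIO_MMIO_DEV_SIZE >
        VIRTIO_MMIO_BASE + VIRTIO_MMIO_SIZE)
        return 0;
    next_base += VIRTIO_MMIO_DEV_SIZE;

    return base;
}

/* Bump-allocate one SPI; 0 means the reserved range is exhausted. */
static uint32_t alloc_irq(void)
{
    if (next_irq > VIRTIO_SPI_LAST)
        return 0;

    return next_irq++;
}
```

Each device attached to the guest simply takes the next 0x200-byte window and
the next SPI; exhausting either reserved range is reported as an allocation
failure rather than silently overlapping resources.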

Signed-off-by: Julien Grall <julien.grall@arm.com>
Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Please note, this is a split/cleanup/hardening of Julien's PoC:
"Add support for Guest IO forwarding to a device emulator"

Changes RFC -> V1:
   - was squashed with:
     "[RFC PATCH V1 09/12] libxl: Handle virtio-mmio irq in more correct way"
     "[RFC PATCH V1 11/12] libxl: Insert "dma-coherent" property into virtio-mmio device node"
     "[RFC PATCH V1 12/12] libxl: Fix duplicate memory node in DT"
   - move VirtIO MMIO #define-s to xen/include/public/arch-arm.h

Changes V1 -> V2:
   - update the author of a patch

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - no changes

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - drop an extra "virtio" configuration option
   - update patch description
   - use CONTAINER_OF instead of own implementation
   - reserve ranges for Virtio MMIO params and put them
     in correct location
   - create helpers to allocate Virtio MMIO params, add
     corresponding sanity-checks
   - add comment why MMIO size 0x200 is chosen
   - update debug print
   - drop Wei's T-b
---
---
 tools/libs/light/libxl_arm.c  | 133 +++++++++++++++++++++++++++++++++++++++++-
 xen/include/public/arch-arm.h |   7 +++
 2 files changed, 138 insertions(+), 2 deletions(-)

diff --git a/tools/libs/light/libxl_arm.c b/tools/libs/light/libxl_arm.c
index e2901f1..a9f15ce 100644
--- a/tools/libs/light/libxl_arm.c
+++ b/tools/libs/light/libxl_arm.c
@@ -8,6 +8,56 @@
 #include <assert.h>
 #include <xen/device_tree_defs.h>
 
+/*
+ * There is no clear requirement for the total size of a Virtio MMIO region.
+ * The control registers occupy 0x100 bytes and the device-specific
+ * configuration registers start at offset 0x100, but their size depends on
+ * the device and the driver. Pick the biggest size known at the moment to
+ * cover most of the devices (also consider allowing the user to configure
+ * the size via the config file for devices not conforming to this value).
+ */
+#define VIRTIO_MMIO_DEV_SIZE   xen_mk_ullong(0x200)
+
+static uint64_t virtio_mmio_base;
+static uint32_t virtio_mmio_irq;
+
+static void init_virtio_mmio_params(void)
+{
+    virtio_mmio_base = GUEST_VIRTIO_MMIO_BASE;
+    virtio_mmio_irq = GUEST_VIRTIO_MMIO_SPI_FIRST;
+}
+
+static uint64_t alloc_virtio_mmio_base(libxl__gc *gc)
+{
+    uint64_t base = virtio_mmio_base;
+
+    /* Make sure we have enough reserved resources */
+    if ((virtio_mmio_base + VIRTIO_MMIO_DEV_SIZE >
+        GUEST_VIRTIO_MMIO_BASE + GUEST_VIRTIO_MMIO_SIZE)) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO BASE 0x%"PRIx64"\n",
+            virtio_mmio_base);
+        return 0;
+    }
+    virtio_mmio_base += VIRTIO_MMIO_DEV_SIZE;
+
+    return base;
+}
+
+static uint32_t alloc_virtio_mmio_irq(libxl__gc *gc)
+{
+    uint32_t irq = virtio_mmio_irq;
+
+    /* Make sure we have enough reserved resources */
+    if (virtio_mmio_irq > GUEST_VIRTIO_MMIO_SPI_LAST) {
+        LOG(ERROR, "Ran out of reserved range for Virtio MMIO IRQ %u\n",
+            virtio_mmio_irq);
+        return 0;
+    }
+    virtio_mmio_irq++;
+
+    return irq;
+}
+
 static const char *gicv_to_string(libxl_gic_version gic_version)
 {
     switch (gic_version) {
@@ -26,8 +76,8 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
 {
     uint32_t nr_spis = 0;
     unsigned int i;
-    uint32_t vuart_irq;
-    bool vuart_enabled = false;
+    uint32_t vuart_irq, virtio_irq = 0;
+    bool vuart_enabled = false, virtio_enabled = false;
 
     /*
      * If pl011 vuart is enabled then increment the nr_spis to allow allocation
@@ -39,6 +89,35 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
         vuart_enabled = true;
     }
 
+    /*
+     * Virtio MMIO params are only unique within a guest, not across the
+     * whole system, so they must be (re)initialized for every new guest.
+     */
+    init_virtio_mmio_params();
+    for (i = 0; i < d_config->num_disks; i++) {
+        libxl_device_disk *disk = &d_config->disks[i];
+
+        if (disk->virtio) {
+            disk->base = alloc_virtio_mmio_base(gc);
+            if (!disk->base)
+                return ERROR_FAIL;
+
+            disk->irq = alloc_virtio_mmio_irq(gc);
+            if (!disk->irq)
+                return ERROR_FAIL;
+
+            if (virtio_irq < disk->irq)
+                virtio_irq = disk->irq;
+            virtio_enabled = true;
+
+            LOG(DEBUG, "Allocate Virtio MMIO params for Vdev %s: IRQ %u BASE 0x%"PRIx64,
+                disk->vdev, disk->irq, disk->base);
+        }
+    }
+
+    if (virtio_enabled)
+        nr_spis += (virtio_irq - 32) + 1;
+
     for (i = 0; i < d_config->b_info.num_irqs; i++) {
         uint32_t irq = d_config->b_info.irqs[i];
         uint32_t spi;
@@ -58,6 +137,13 @@ int libxl__arch_domain_prepare_config(libxl__gc *gc,
             return ERROR_FAIL;
         }
 
+        /* The same check as for vpl011 */
+        if (virtio_enabled &&
+           (irq >= GUEST_VIRTIO_MMIO_SPI_FIRST && irq <= virtio_irq)) {
+            LOG(ERROR, "Physical IRQ %u conflicting with Virtio MMIO IRQ range\n", irq);
+            return ERROR_FAIL;
+        }
+
         if (irq < 32)
             continue;
 
@@ -660,6 +746,38 @@ static int make_vpl011_uart_node(libxl__gc *gc, void *fdt,
     return 0;
 }
 
+static int make_virtio_mmio_node(libxl__gc *gc, void *fdt,
+                                 uint64_t base, uint32_t irq)
+{
+    int res;
+    gic_interrupt intr;
+    /* Placeholder for virtio@ + a 64-bit number + \0 */
+    char buf[24];
+
+    snprintf(buf, sizeof(buf), "virtio@%"PRIx64, base);
+    res = fdt_begin_node(fdt, buf);
+    if (res) return res;
+
+    res = fdt_property_compat(gc, fdt, 1, "virtio,mmio");
+    if (res) return res;
+
+    res = fdt_property_regs(gc, fdt, GUEST_ROOT_ADDRESS_CELLS, GUEST_ROOT_SIZE_CELLS,
+                            1, base, VIRTIO_MMIO_DEV_SIZE);
+    if (res) return res;
+
+    set_interrupt(intr, irq, 0xf, DT_IRQ_TYPE_EDGE_RISING);
+    res = fdt_property_interrupts(gc, fdt, &intr, 1);
+    if (res) return res;
+
+    res = fdt_property(fdt, "dma-coherent", NULL, 0);
+    if (res) return res;
+
+    res = fdt_end_node(fdt);
+    if (res) return res;
+
+    return 0;
+}
+
 static const struct arch_info *get_arch_info(libxl__gc *gc,
                                              const struct xc_dom_image *dom)
 {
@@ -860,10 +978,14 @@ static int libxl__prepare_dtb(libxl__gc *gc, libxl_domain_build_info *info,
     int rc, res;
     size_t fdt_size = 0;
     int pfdt_size = 0;
+    unsigned int i;
 
     const libxl_version_info *vers;
     const struct arch_info *ainfo;
 
+    libxl_domain_config *d_config =
+        CONTAINER_OF(info, libxl_domain_config, b_info);
+
     vers = libxl_get_version_info(CTX);
     if (vers == NULL) return ERROR_FAIL;
 
@@ -963,6 +1085,13 @@ next_resize:
         if (info->tee == LIBXL_TEE_TYPE_OPTEE)
             FDT( make_optee_node(gc, fdt) );
 
+        for (i = 0; i < d_config->num_disks; i++) {
+            libxl_device_disk *disk = &d_config->disks[i];
+
+            if (disk->virtio)
+                FDT( make_virtio_mmio_node(gc, fdt, disk->base, disk->irq) );
+        }
+
         if (pfdt)
             FDT( copy_partial_fdt(gc, fdt, pfdt) );
 
diff --git a/xen/include/public/arch-arm.h b/xen/include/public/arch-arm.h
index 713fd65..1c02248 100644
--- a/xen/include/public/arch-arm.h
+++ b/xen/include/public/arch-arm.h
@@ -394,6 +394,10 @@ typedef uint64_t xen_callback_t;
 
 /* Physical Address Space */
 
+/* Virtio MMIO mappings */
+#define GUEST_VIRTIO_MMIO_BASE   xen_mk_ullong(0x02000000)
+#define GUEST_VIRTIO_MMIO_SIZE   xen_mk_ullong(0x00100000)
+
 /*
  * vGIC mappings: Only one set of mapping is used by the guest.
  * Therefore they can overlap.
@@ -458,6 +462,9 @@ typedef uint64_t xen_callback_t;
 
 #define GUEST_VPL011_SPI        32
 
+#define GUEST_VIRTIO_MMIO_SPI_FIRST   33
+#define GUEST_VIRTIO_MMIO_SPI_LAST    43
+
 /* PSCI functions */
 #define PSCI_cpu_suspend 0
 #define PSCI_cpu_off     1
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri May 21 19:46:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 19:46:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131364.245576 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkB6H-0004GQ-Rc; Fri, 21 May 2021 19:46:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131364.245576; Fri, 21 May 2021 19:46:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkB6H-0004G7-Nw; Fri, 21 May 2021 19:46:25 +0000
Received: by outflank-mailman (input) for mailman id 131364;
 Fri, 21 May 2021 19:46:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08+4=KQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1lkB6G-0003wW-1r
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 19:46:24 +0000
Received: from mail-lj1-x235.google.com (unknown [2a00:1450:4864:20::235])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b02ae27-e08b-4dfa-ba9c-fc82f431d312;
 Fri, 21 May 2021 19:46:14 +0000 (UTC)
Received: by mail-lj1-x235.google.com with SMTP id e2so18974636ljk.4
 for <xen-devel@lists.xenproject.org>; Fri, 21 May 2021 12:46:14 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id y15sm737337lje.74.2021.05.21.12.46.10
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 21 May 2021 12:46:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b02ae27-e08b-4dfa-ba9c-fc82f431d312
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references;
        bh=sLzorQewo0N9P2yCwHLr70sAQFdfDtLleH6gHzW+iyU=;
        b=jkZjrxFO4WZ0HPjy+Otbsw14pHrQ3btnUlwq+XA7V2QJ/sgeBQB3OUJjZl5vYRNhXl
         hlvPCMkPNmLYH1Vlo0UR6xhIiLNqz3mAvUH5ux5PqHhSK235CS2aOg0vdPfcaG5C5kNZ
         R2F49/LMBeW+VvQQLwNiNgJYuQDOCY96yReUYWmkAvOrpyxfZwbzOEiXc6KrgvbiU8KN
         CHNGGAR5LtidBGzh2rBP4pEL/5bDNVCodvx6XWmoQk56MkmIoZQyRF980Lnb7XVoNJcR
         iIYxfZU9AK8XloPMZyuQt3yOIBmiqN5CVpywukS/41Yf1U3fchoQxdYdf7pACre412CB
         b0Og==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references;
        bh=sLzorQewo0N9P2yCwHLr70sAQFdfDtLleH6gHzW+iyU=;
        b=b2KjXRbmJZr3KxcT/DGG3NAkYnpZF8K3i1qw70orxs+5UncjstfIv9ssB+LWEpf+fQ
         VFXcXoTpOiHPz5D85LBTJU4oDt6/GhYiuJ38DIjC9ZWGmFKYpuyebpcmrnvx+G3oW3XU
         j4sqOWZknefimYVcCgkxMcFRupqlfwISmcTaAm0OJG+nC1ZR9I87mQUGLL90vLyFhOx5
         engtorc4F1zdLaqNzX4Rl64EQTJ2ksejf/2vvArGW2oh7DC5f4dTAu2ilMpLjcjzBw1c
         z2yUuV624IEvFuiJJIl9qYlZf9Vo9arKAWkCEzBZtkvKdIEDzlswQypPQy7xAR7bmBs3
         1IjQ==
X-Gm-Message-State: AOAM533oL5lpjQ61jUqzAELOkQ8rRsZnN0EjAHfNU1bUE+44Vw9zLbtv
	PjqUVCvBfaxcOtGgPMJuNA5EotFIxo7kdg==
X-Google-Smtp-Source: ABdhPJxCB1p3O2tD3jiPVfo1H0nqyfIYFgkEfZU8+fkNM9CTnSapxZjIPnVyXiw2nhTiVt4+9rhZoA==
X-Received: by 2002:a2e:1405:: with SMTP id u5mr7797367ljd.137.1621626371954;
        Fri, 21 May 2021 12:46:11 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [RESEND PATCH V5 1/2] libxl: Add support for Virtio disk configuration
Date: Fri, 21 May 2021 22:46:00 +0300
Message-Id: <1621626361-29076-2-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
In-Reply-To: <1621626361-29076-1-git-send-email-olekstysh@gmail.com>
References: <1621626361-29076-1-git-send-email-olekstysh@gmail.com>

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

This patch adds basic support for configuring and assisting the
virtio-disk backend (emulator), which is intended to run outside of
QEMU and can run in any domain.
Although the Virtio block device is quite different from the traditional
Xen PV block device (vbd) from the toolstack's point of view:
 - the frontend is virtio-blk, which is not a Xenbus driver, so nothing
   written to Xenstore is fetched by the frontend (the vdev is not
   passed to the frontend, etc.)
 - the ring-ref/event-channel are not used for backend<->frontend
   communication; the proposed IPC for Virtio is IOREQ/DM
it is still a "block device" and ought to be integrated into the
existing "disk" handling. So, re-use (and adapt) the "disk"
parsing/configuration logic to deal with Virtio devices as well.

Besides introducing a new disk backend type (LIBXL_DISK_BACKEND_VIRTIO),
introduce a new device kind (LIBXL__DEVICE_KIND_VIRTIO_DISK), as the
current one (LIBXL__DEVICE_KIND_VBD) doesn't fit the Virtio disk model.

In order to inform the toolstack that a Virtio disk needs to be used,
extend the "disk" configuration by introducing a new "virtio" flag.
An example of domain configuration:
disk = [ 'backend=DomD, phy:/dev/mmcblk1p3, xvda1, rw, virtio' ]

Please note, this patch is not enough for virtio-disk to work on
Xen (Arm): for every Virtio device (including the disk) we need to
allocate Virtio MMIO params (IRQ and memory region), pass them to the
backend, and update the guest device tree with the allocated params.
The subsequent patch will add these missing bits. For the current
patch, the default "irq" and "base" are just written to Xenstore.

Signed-off-by: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

---
Changes RFC -> V1:
   - no changes

Changes V1 -> V2:
   - rebase according to the new location of libxl_virtio_disk.c

Changes V2 -> V3:
   - no changes

Changes V3 -> V4:
   - rebase according to the new argument for DEFINE_DEVICE_TYPE_STRUCT

Changes V4 -> V5:
   - split the changes, change the order of the patches
   - update patch description
   - don't introduce new "vdisk" configuration option with own parsing logic,
     re-use Xen PV block "disk" parsing/configuration logic for the virtio-disk
   - introduce "virtio" flag and document it's usage
   - add LIBXL_HAVE_DEVICE_DISK_VIRTIO
   - update libxlu_disk_l.[ch]
   - drop num_disks variable/MAX_VIRTIO_DISKS
   - drop Wei's T-b
---
---
 docs/man/xl-disk-configuration.5.pod.in   |  27 +
 tools/include/libxl.h                     |   6 +
 tools/libs/light/libxl_device.c           |  38 +-
 tools/libs/light/libxl_disk.c             |  99 +++-
 tools/libs/light/libxl_types.idl          |   4 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 881 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   1 +
 tools/xl/xl_block.c                       |  11 +
 11 files changed, 626 insertions(+), 446 deletions(-)

diff --git a/docs/man/xl-disk-configuration.5.pod.in b/docs/man/xl-disk-configuration.5.pod.in
index 71d0e86..9cc189f 100644
--- a/docs/man/xl-disk-configuration.5.pod.in
+++ b/docs/man/xl-disk-configuration.5.pod.in
@@ -344,8 +344,35 @@ can be used to disable "hole punching" for file based backends which
 were intentionally created non-sparse to avoid fragmentation of the
 file.
 
+=item B<virtio>
+
+=over 4
+
+=item Description
+
+Enables experimental Virtio support for disk
+
+=item Supported values
+
+absent, present
+
+=item Mandatory
+
+No
+
+=item Default value
+
+absent
+
 =back
 
+Besides forcing the toolstack to use a specific Xen Virtio backend
+implementation (for example, virtio-disk), this also affects the guest's view
+of the device and requires the virtio-blk driver to be used.
+Please note, the virtual device (vdev) is not passed to the guest in that
+case, but it still must be specified for internal purposes.
+
+=back
 
 =head1 COLO Parameters
 
diff --git a/tools/include/libxl.h b/tools/include/libxl.h
index ae7fe27..58e14e6 100644
--- a/tools/include/libxl.h
+++ b/tools/include/libxl.h
@@ -503,6 +503,12 @@
 #define LIBXL_HAVE_X86_MSR_RELAXED 1
 
 /*
+ * LIBXL_HAVE_DEVICE_DISK_VIRTIO indicates that a 'virtio' field
+ * (of boolean type) is present in libxl_device_disk.
+ */
+#define LIBXL_HAVE_DEVICE_DISK_VIRTIO 1
+
+/*
  * libxl ABI compatibility
  *
  * The only guarantee which libxl makes regarding ABI compatibility
diff --git a/tools/libs/light/libxl_device.c b/tools/libs/light/libxl_device.c
index 36c4e41..7c8cb53 100644
--- a/tools/libs/light/libxl_device.c
+++ b/tools/libs/light/libxl_device.c
@@ -292,6 +292,9 @@ static int disk_try_backend(disk_try_backend_args *a,
     /* returns 0 (ie, DISK_BACKEND_UNKNOWN) on failure, or
      * backend on success */
 
+    if (a->disk->virtio && backend != LIBXL_DISK_BACKEND_VIRTIO)
+        goto bad_virtio;
+
     switch (backend) {
     case LIBXL_DISK_BACKEND_PHY:
         if (a->disk->format != LIBXL_DISK_FORMAT_RAW) {
@@ -329,6 +332,29 @@ static int disk_try_backend(disk_try_backend_args *a,
         if (a->disk->script) goto bad_script;
         return backend;
 
+    case LIBXL_DISK_BACKEND_VIRTIO:
+        if (a->disk->format != LIBXL_DISK_FORMAT_RAW)
+            goto bad_format;
+
+        if (a->disk->script)
+            goto bad_script;
+
+        if (libxl_defbool_val(a->disk->colo_enable))
+            goto bad_colo;
+
+        if (a->disk->backend_domid != LIBXL_TOOLSTACK_DOMID) {
+            LOG(DEBUG, "Disk vdev=%s, is using a storage driver domain, "
+                       "skipping physical device check", a->disk->vdev);
+            return backend;
+        }
+
+        if (libxl__try_phy_backend(a->stab.st_mode))
+            return backend;
+
+        LOG(DEBUG, "Disk vdev=%s, backend virtio unsuitable as phys path not a "
+                   "block device", a->disk->vdev);
+        return 0;
+
     default:
         LOG(DEBUG, "Disk vdev=%s, backend %d unknown", a->disk->vdev, backend);
         return 0;
@@ -352,6 +378,11 @@ static int disk_try_backend(disk_try_backend_args *a,
     LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with colo",
         a->disk->vdev, libxl_disk_backend_to_string(backend));
     return 0;
+
+ bad_virtio:
+    LOG(DEBUG, "Disk vdev=%s, backend %s not compatible with virtio",
+        a->disk->vdev, libxl_disk_backend_to_string(backend));
+    return 0;
 }
 
 int libxl__backendpath_parse_domid(libxl__gc *gc, const char *be_path,
@@ -392,7 +423,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         }
         memset(&a.stab, 0, sizeof(a.stab));
     } else if ((disk->backend == LIBXL_DISK_BACKEND_UNKNOWN ||
-                disk->backend == LIBXL_DISK_BACKEND_PHY) &&
+                disk->backend == LIBXL_DISK_BACKEND_PHY ||
+                disk->backend == LIBXL_DISK_BACKEND_VIRTIO) &&
                disk->backend_domid == LIBXL_TOOLSTACK_DOMID &&
                !disk->script) {
         if (stat(disk->pdev_path, &a.stab)) {
@@ -408,7 +440,8 @@ int libxl__device_disk_set_backend(libxl__gc *gc, libxl_device_disk *disk) {
         ok=
             disk_try_backend(&a, LIBXL_DISK_BACKEND_PHY) ?:
             disk_try_backend(&a, LIBXL_DISK_BACKEND_QDISK) ?:
-            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP);
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_TAP) ?:
+            disk_try_backend(&a, LIBXL_DISK_BACKEND_VIRTIO);
         if (ok)
             LOG(DEBUG, "Disk vdev=%s, using backend %s",
                        disk->vdev,
@@ -441,6 +474,7 @@ char *libxl__device_disk_string_of_backend(libxl_disk_backend backend)
         case LIBXL_DISK_BACKEND_QDISK: return "qdisk";
         case LIBXL_DISK_BACKEND_TAP: return "phy";
         case LIBXL_DISK_BACKEND_PHY: return "phy";
+        case LIBXL_DISK_BACKEND_VIRTIO: return "virtio_disk";
         default: return NULL;
     }
 }
diff --git a/tools/libs/light/libxl_disk.c b/tools/libs/light/libxl_disk.c
index 411ffea..4332dab 100644
--- a/tools/libs/light/libxl_disk.c
+++ b/tools/libs/light/libxl_disk.c
@@ -174,6 +174,16 @@ static int libxl__device_disk_setdefault(libxl__gc *gc, uint32_t domid,
         disk->backend = LIBXL_DISK_BACKEND_QDISK;
     }
 
+    /* Force virtio_disk backend for Virtio devices */
+    if (disk->virtio) {
+        if (!(disk->backend == LIBXL_DISK_BACKEND_VIRTIO ||
+              disk->backend == LIBXL_DISK_BACKEND_UNKNOWN)) {
+            LOGD(ERROR, domid, "Backend for Virtio devices must be virtio_disk");
+            return ERROR_FAIL;
+        }
+        disk->backend = LIBXL_DISK_BACKEND_VIRTIO;
+    }
+
     rc = libxl__device_disk_set_backend(gc, disk);
     return rc;
 }
@@ -204,6 +214,9 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
         case LIBXL_DISK_BACKEND_QDISK:
             device->backend_kind = LIBXL__DEVICE_KIND_QDISK;
             break;
+        case LIBXL_DISK_BACKEND_VIRTIO:
+            device->backend_kind = LIBXL__DEVICE_KIND_VIRTIO_DISK;
+            break;
         default:
             LOGD(ERROR, domid, "Unrecognized disk backend type: %d",
                  disk->backend);
@@ -212,7 +225,8 @@ static int libxl__device_from_disk(libxl__gc *gc, uint32_t domid,
 
     device->domid = domid;
     device->devid = devid;
-    device->kind  = LIBXL__DEVICE_KIND_VBD;
+    device->kind = disk->backend == LIBXL_DISK_BACKEND_VIRTIO ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
 
     return 0;
 }
@@ -330,7 +344,17 @@ static void device_disk_add(libxl__egc *egc, uint32_t domid,
 
                 assert(device->backend_kind == LIBXL__DEVICE_KIND_VBD);
                 break;
+            case LIBXL_DISK_BACKEND_VIRTIO:
+                dev = disk->pdev_path;
+
+                flexarray_append(back, "params");
+                flexarray_append(back, dev);
 
+                flexarray_append_pair(back, "base", GCSPRINTF("%lu", disk->base));
+                flexarray_append_pair(back, "irq", GCSPRINTF("%u", disk->irq));
+
+                assert(device->backend_kind == LIBXL__DEVICE_KIND_VIRTIO_DISK);
+                break;
             case LIBXL_DISK_BACKEND_TAP:
                 LOG(ERROR, "blktap is not supported");
                 rc = ERROR_FAIL;
@@ -532,6 +556,26 @@ static int libxl__disk_from_xenstore(libxl__gc *gc, const char *libxl_path,
     }
     libxl_string_to_backend(ctx, tmp, &(disk->backend));
 
+    if (disk->backend == LIBXL_DISK_BACKEND_VIRTIO) {
+        disk->virtio = true;
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/base", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/base", libxl_path);
+            goto cleanup;
+        }
+        disk->base = strtoul(tmp, NULL, 10);
+
+        tmp = libxl__xs_read(gc, XBT_NULL,
+                             GCSPRINTF("%s/irq", libxl_path));
+        if (!tmp) {
+            LOG(ERROR, "Missing xenstore node %s/irq", libxl_path);
+            goto cleanup;
+        }
+        disk->irq = strtoul(tmp, NULL, 10);
+    }
+
     disk->vdev = xs_read(ctx->xsh, XBT_NULL,
                          GCSPRINTF("%s/dev", libxl_path), &len);
     if (!disk->vdev) {
@@ -575,6 +619,41 @@ cleanup:
     return rc;
 }
 
+static int libxl_device_disk_get_path(libxl__gc *gc, uint32_t domid,
+                                      char **path)
+{
+    const char *dir;
+    int rc;
+
+    /*
+     * As we don't know exactly which device kind is used here, guess it by
+     * checking the presence of the corresponding path in Xenstore. First,
+     * try the path for the vbd device (default) and, if it doesn't exist,
+     * the path for the virtio_disk device. This works as long as Xen PV and
+     * Virtio disk devices are not assigned to the same guest.
+     */
+    *path = GCSPRINTF("%s/device/%s",
+                      libxl__xs_libxl_path(gc, domid),
+                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VBD));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
+    if (rc)
+        return rc;
+
+    if (dir)
+        return 0;
+
+    *path = GCSPRINTF("%s/device/%s",
+                      libxl__xs_libxl_path(gc, domid),
+                      libxl__device_kind_to_string(LIBXL__DEVICE_KIND_VIRTIO_DISK));
+
+    rc = libxl__xs_read_checked(gc, XBT_NULL, *path, &dir);
+    if (rc)
+        return rc;
+
+    return 0;
+}
+
 int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
                               const char *vdev, libxl_device_disk *disk)
 {
@@ -588,10 +667,12 @@ int libxl_vdev_to_device_disk(libxl_ctx *ctx, uint32_t domid,
 
     libxl_device_disk_init(disk);
 
-    libxl_path = libxl__domain_device_libxl_path(gc, domid, devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+    rc = libxl_device_disk_get_path(gc, domid, &libxl_path);
+    if (rc)
+        return rc;
 
-    rc = libxl__disk_from_xenstore(gc, libxl_path, devid, disk);
+    rc = libxl__disk_from_xenstore(gc, GCSPRINTF("%s/%d", libxl_path, devid),
+                                   devid, disk);
 
     GC_FREE;
     return rc;
@@ -605,16 +686,19 @@ int libxl_device_disk_getinfo(libxl_ctx *ctx, uint32_t domid,
     char *fe_path, *libxl_path;
     char *val;
     int rc;
+    libxl__device_kind kind;
 
     diskinfo->backend = NULL;
 
     diskinfo->devid = libxl__device_disk_dev_number(disk->vdev, NULL, NULL);
 
-    /* tap devices entries in xenstore are written as vbd devices. */
+    /* tap device entries in xenstore are written as vbd/virtio_disk devices. */
+    kind = disk->backend == LIBXL_DISK_BACKEND_VIRTIO ?
+        LIBXL__DEVICE_KIND_VIRTIO_DISK : LIBXL__DEVICE_KIND_VBD;
     fe_path = libxl__domain_device_frontend_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     libxl_path = libxl__domain_device_libxl_path(gc, domid, diskinfo->devid,
-                                                 LIBXL__DEVICE_KIND_VBD);
+                                                 kind);
     diskinfo->backend = xs_read(ctx->xsh, XBT_NULL,
                                 GCSPRINTF("%s/backend", libxl_path), NULL);
     if (!diskinfo->backend) {
@@ -1375,6 +1459,7 @@ LIBXL_DEFINE_DEVICE_LIST(disk)
 #define libxl__device_disk_update_devid NULL
 
 DEFINE_DEVICE_TYPE_STRUCT(disk, VBD, disks,
+    .get_path    = libxl_device_disk_get_path,
     .merge       = libxl_device_disk_merge,
     .dm_needed   = libxl_device_disk_dm_needed,
     .from_xenstore = (device_from_xenstore_fn_t)libxl__disk_from_xenstore,
diff --git a/tools/libs/light/libxl_types.idl b/tools/libs/light/libxl_types.idl
index f45addd..d513dde 100644
--- a/tools/libs/light/libxl_types.idl
+++ b/tools/libs/light/libxl_types.idl
@@ -130,6 +130,7 @@ libxl_disk_backend = Enumeration("disk_backend", [
     (1, "PHY"),
     (2, "TAP"),
     (3, "QDISK"),
+    (4, "VIRTIO"),
     ])
 
 libxl_nic_type = Enumeration("nic_type", [
@@ -699,6 +700,9 @@ libxl_device_disk = Struct("device_disk", [
     ("is_cdrom", integer),
     ("direct_io_safe", bool),
     ("discard_enable", libxl_defbool),
+    ("virtio", bool),
+    ("irq", uint32),
+    ("base", uint64),
     # Note that the COLO configuration settings should be considered unstable.
     # They may change incompatibly in future versions of Xen.
     ("colo_enable", libxl_defbool),
diff --git a/tools/libs/light/libxl_types_internal.idl b/tools/libs/light/libxl_types_internal.idl
index 3593e21..8f71980 100644
--- a/tools/libs/light/libxl_types_internal.idl
+++ b/tools/libs/light/libxl_types_internal.idl
@@ -32,6 +32,7 @@ libxl__device_kind = Enumeration("device_kind", [
     (14, "PVCALLS"),
     (15, "VSND"),
     (16, "VINPUT"),
+    (17, "VIRTIO_DISK"),
     ])
 
 libxl__console_backend = Enumeration("console_backend", [
diff --git a/tools/libs/light/libxl_utils.c b/tools/libs/light/libxl_utils.c
index 4699c4a..fa406de 100644
--- a/tools/libs/light/libxl_utils.c
+++ b/tools/libs/light/libxl_utils.c
@@ -304,6 +304,8 @@ int libxl_string_to_backend(libxl_ctx *ctx, char *s, libxl_disk_backend *backend
         *backend = LIBXL_DISK_BACKEND_TAP;
     } else if (!strcmp(s, "qdisk")) {
         *backend = LIBXL_DISK_BACKEND_QDISK;
+    } else if (!strcmp(s, "virtio_disk")) {
+        *backend = LIBXL_DISK_BACKEND_VIRTIO;
     } else if (!strcmp(s, "tap")) {
         p = strchr(s, ':');
         if (!p) {
diff --git a/tools/libs/util/libxlu_disk_l.c b/tools/libs/util/libxlu_disk_l.c
index 32d4b74..7abc699 100644
--- a/tools/libs/util/libxlu_disk_l.c
+++ b/tools/libs/util/libxlu_disk_l.c
@@ -549,8 +549,8 @@ static void yynoreturn yy_fatal_error ( const char* msg , yyscan_t yyscanner );
 	yyg->yy_hold_char = *yy_cp; \
 	*yy_cp = '\0'; \
 	yyg->yy_c_buf_p = yy_cp;
-#define YY_NUM_RULES 36
-#define YY_END_OF_BUFFER 37
+#define YY_NUM_RULES 37
+#define YY_END_OF_BUFFER 38
 /* This struct is not used in this scanner,
    but its presence is necessary. */
 struct yy_trans_info
@@ -558,74 +558,75 @@ struct yy_trans_info
 	flex_int32_t yy_verify;
 	flex_int32_t yy_nxt;
 	};
-static const flex_int16_t yy_acclist[575] =
+static const flex_int16_t yy_acclist[583] =
     {   0,
-       35,   35,   37,   33,   34,   36, 8193,   33,   34,   36,
-    16385, 8193,   33,   36,16385,   33,   34,   36,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   33,   34,
-       36,   33,   34,   36,   33,   34,   36,   33,   34,   36,
-       33,   34,   36,   33,   34,   36,   33,   34,   36,   33,
-       34,   36,   33,   34,   36,   33,   34,   36,   35,   36,
-       36,   33,   33, 8193,   33, 8193,   33,16385, 8193,   33,
-     8193,   33,   33, 8224,   33,16416,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   35,
-     8193,   33, 8193,   33, 8193, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33,   33,   33,   33,   33,   33,
-       33,   33,   33,   33,   33, 8224,   33, 8224,   33, 8224,
-       23,   33,   33,   28, 8224,   33,16416,   33,   33,   15,
-       33,   33,   33,   33,   33,   33,   33,   33,   33, 8217,
-     8224,   33,16409,16416,   33,   33,   31, 8224,   33,16416,
-       33, 8216, 8224,   33,16408,16416,   33,   33, 8219, 8224,
-       33,16411,16416,   33,   33,   33,   33,   33,   28, 8224,
-
-       33,   28, 8224,   33,   28,   33,   28, 8224,   33,    3,
-       33,   15,   33,   33,   33,   33,   33,   30, 8224,   33,
-    16416,   33,   33,   33, 8217, 8224,   33, 8217, 8224,   33,
-     8217,   33, 8217, 8224,   33,   33,   31, 8224,   33,   31,
-     8224,   33,   31,   33,   31, 8224, 8216, 8224,   33, 8216,
-     8224,   33, 8216,   33, 8216, 8224,   33, 8219, 8224,   33,
-     8219, 8224,   33, 8219,   33, 8219, 8224,   33,   33,   10,
-       33,   33,   28, 8224,   33,   28, 8224,   33,   28, 8224,
-       28,   33,   28,   33,    3,   33,   33,   33,   33,   33,
-       33,   33,   30, 8224,   33,   30, 8224,   33,   30,   33,
-
-       30, 8224,   33,   33,   29, 8224,   33,16416, 8217, 8224,
-       33, 8217, 8224,   33, 8217, 8224, 8217,   33, 8217,   33,
-       33,   31, 8224,   33,   31, 8224,   33,   31, 8224,   31,
-       33,   31, 8216, 8224,   33, 8216, 8224,   33, 8216, 8224,
-     8216,   33, 8216,   33, 8219, 8224,   33, 8219, 8224,   33,
-     8219, 8224, 8219,   33, 8219,   33,   33,   10,   23,   10,
-        7,   33,   33,   33,   33,   33,   33,   33,   13,   33,
-       30, 8224,   33,   30, 8224,   33,   30, 8224,   30,   33,
-       30,    2,   33,   29, 8224,   33,   29, 8224,   33,   29,
-       33,   29, 8224,   16,   33,   33,   11,   33,   22,   10,
-
-       10,   23,    7,   23,    7,   33,    8,   33,   33,   33,
-       33,    6,   33,   13,   33,    2,   23,    2,   33,   29,
-     8224,   33,   29, 8224,   33,   29, 8224,   29,   33,   29,
-       16,   33,   33,   11,   23,   11,   26, 8224,   33,16416,
-       22,   23,   22,    7,    7,   23,   33,    8,   23,    8,
-       33,   33,   33,   33,    6,   23,    6,    6,   23,    6,
-       23,   33,    2,    2,   23,   33,   33,   11,   11,   23,
-       26, 8224,   33,   26, 8224,   33,   26,   33,   26, 8224,
-       22,   23,   33,    8,    8,   23,   33,   33,   17,   18,
-        6,    6,   23,    6,    6,   33,   33,   14,   33,   26,
-
-     8224,   33,   26, 8224,   33,   26, 8224,   26,   33,   26,
-       33,   33,   33,   17,   23,   17,   18,   23,   18,    6,
-        6,   33,   33,   14,   33,   20,    9,   19,   17,   17,
-       23,   18,   18,   23,    6,    5,    6,   33,   21,   20,
-       23,   20,    9,   23,    9,   19,   23,   19,    4,    6,
-        5,    6,   33,   21,   23,   21,   20,   20,   23,    9,
-        9,   23,   19,   19,   23,    4,    6,   12,   33,   21,
-       21,   23,   12,   33
+       36,   36,   38,   34,   35,   37, 8193,   34,   35,   37,
+    16385, 8193,   34,   37,16385,   34,   35,   37,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   34,   35,
+       37,   34,   35,   37,   34,   35,   37,   34,   35,   37,
+       34,   35,   37,   34,   35,   37,   34,   35,   37,   34,
+       35,   37,   34,   35,   37,   34,   35,   37,   36,   37,
+       37,   34,   34, 8193,   34, 8193,   34,16385, 8193,   34,
+     8193,   34,   34, 8225,   34,16417,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       36, 8193,   34, 8193,   34, 8193, 8225,   34, 8225,   34,
+     8225,   24,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34,   34,   34,   34,
+       34,   34,   34,   34,   34,   34,   34, 8225,   34, 8225,
+       34, 8225,   24,   34,   34,   29, 8225,   34,16417,   34,
+       34,   16,   34,   34,   34,   34,   34,   34,   34,   34,
+       34, 8218, 8225,   34,16410,16417,   34,   34,   32, 8225,
+       34,16417,   34, 8217, 8225,   34,16409,16417,   34,   34,
+     8220, 8225,   34,16412,16417,   34,   34,   34,   34,   34,
+
+       34,   29, 8225,   34,   29, 8225,   34,   29,   34,   29,
+     8225,   34,    3,   34,   16,   34,   34,   34,   34,   34,
+       31, 8225,   34,16417,   34,   34,   34, 8218, 8225,   34,
+     8218, 8225,   34, 8218,   34, 8218, 8225,   34,   34,   32,
+     8225,   34,   32, 8225,   34,   32,   34,   32, 8225, 8217,
+     8225,   34, 8217, 8225,   34, 8217,   34, 8217, 8225,   34,
+     8220, 8225,   34, 8220, 8225,   34, 8220,   34, 8220, 8225,
+       34,   34,   10,   34,   34,   34,   29, 8225,   34,   29,
+     8225,   34,   29, 8225,   29,   34,   29,   34,    3,   34,
+       34,   34,   34,   34,   34,   34,   31, 8225,   34,   31,
+
+     8225,   34,   31,   34,   31, 8225,   34,   34,   30, 8225,
+       34,16417, 8218, 8225,   34, 8218, 8225,   34, 8218, 8225,
+     8218,   34, 8218,   34,   34,   32, 8225,   34,   32, 8225,
+       34,   32, 8225,   32,   34,   32, 8217, 8225,   34, 8217,
+     8225,   34, 8217, 8225, 8217,   34, 8217,   34, 8220, 8225,
+       34, 8220, 8225,   34, 8220, 8225, 8220,   34, 8220,   34,
+       34,   10,   24,   10,   15,   34,    7,   34,   34,   34,
+       34,   34,   34,   34,   13,   34,   31, 8225,   34,   31,
+     8225,   34,   31, 8225,   31,   34,   31,    2,   34,   30,
+     8225,   34,   30, 8225,   34,   30,   34,   30, 8225,   17,
+
+       34,   34,   11,   34,   23,   10,   10,   24,   15,   34,
+        7,   24,    7,   34,    8,   34,   34,   34,   34,    6,
+       34,   13,   34,    2,   24,    2,   34,   30, 8225,   34,
+       30, 8225,   34,   30, 8225,   30,   34,   30,   17,   34,
+       34,   11,   24,   11,   27, 8225,   34,16417,   23,   24,
+       23,    7,    7,   24,   34,    8,   24,    8,   34,   34,
+       34,   34,    6,   24,    6,    6,   24,    6,   24,   34,
+        2,    2,   24,   34,   34,   11,   11,   24,   27, 8225,
+       34,   27, 8225,   34,   27,   34,   27, 8225,   23,   24,
+       34,    8,    8,   24,   34,   34,   18,   19,    6,    6,
+
+       24,    6,    6,   34,   34,   14,   34,   27, 8225,   34,
+       27, 8225,   34,   27, 8225,   27,   34,   27,   34,   34,
+       34,   18,   24,   18,   19,   24,   19,    6,    6,   34,
+       34,   14,   34,   21,    9,   20,   18,   18,   24,   19,
+       19,   24,    6,    5,    6,   34,   22,   21,   24,   21,
+        9,   24,    9,   20,   24,   20,    4,    6,    5,    6,
+       34,   22,   24,   22,   21,   21,   24,    9,    9,   24,
+       20,   20,   24,    4,    6,   12,   34,   22,   22,   24,
+       12,   34
     } ;
 
-static const flex_int16_t yy_accept[356] =
+static const flex_int16_t yy_accept[362] =
     {   0,
         1,    1,    1,    2,    3,    4,    7,   12,   16,   19,
        21,   24,   27,   30,   33,   36,   39,   42,   45,   48,
@@ -633,39 +634,40 @@ static const flex_int16_t yy_accept[356] =
        74,   76,   79,   81,   82,   83,   84,   87,   87,   88,
        89,   90,   91,   92,   93,   94,   95,   96,   97,   98,
        99,  100,  101,  102,  103,  104,  105,  106,  107,  108,
-      109,  110,  111,  113,  115,  116,  118,  120,  121,  122,
+      109,  110,  111,  112,  114,  116,  117,  119,  121,  122,
       123,  124,  125,  126,  127,  128,  129,  130,  131,  132,
       133,  134,  135,  136,  137,  138,  139,  140,  141,  142,
-      143,  144,  145,  146,  148,  150,  151,  152,  153,  154,
-
-      158,  159,  160,  162,  163,  164,  165,  166,  167,  168,
-      169,  170,  175,  176,  177,  181,  182,  187,  188,  189,
-      194,  195,  196,  197,  198,  199,  202,  205,  207,  209,
-      210,  212,  214,  215,  216,  217,  218,  222,  223,  224,
-      225,  228,  231,  233,  235,  236,  237,  240,  243,  245,
-      247,  250,  253,  255,  257,  258,  261,  264,  266,  268,
-      269,  270,  271,  272,  273,  276,  279,  281,  283,  284,
-      285,  287,  288,  289,  290,  291,  292,  293,  296,  299,
-      301,  303,  304,  305,  309,  312,  315,  317,  319,  320,
-      321,  322,  325,  328,  330,  332,  333,  336,  339,  341,
-
-      343,  344,  345,  348,  351,  353,  355,  356,  357,  358,
-      360,  361,  362,  363,  364,  365,  366,  367,  368,  369,
-      371,  374,  377,  379,  381,  382,  383,  384,  387,  390,
-      392,  394,  396,  397,  398,  399,  400,  401,  403,  405,
-      406,  407,  408,  409,  410,  411,  412,  413,  414,  416,
-      418,  419,  420,  423,  426,  428,  430,  431,  433,  434,
-      436,  437,  441,  443,  444,  445,  447,  448,  450,  451,
-      452,  453,  454,  455,  457,  458,  460,  462,  463,  464,
-      466,  467,  468,  469,  471,  474,  477,  479,  481,  483,
-      484,  485,  487,  488,  489,  490,  491,  492,  494,  495,
-
-      496,  497,  498,  500,  503,  506,  508,  510,  511,  512,
-      513,  514,  516,  517,  519,  520,  521,  522,  523,  524,
-      526,  527,  528,  529,  530,  532,  533,  535,  536,  538,
-      539,  540,  542,  543,  545,  546,  548,  549,  551,  553,
-      554,  556,  557,  558,  560,  561,  563,  564,  566,  568,
-      570,  571,  573,  575,  575
+      143,  144,  145,  146,  147,  148,  150,  152,  153,  154,
+
+      155,  156,  160,  161,  162,  164,  165,  166,  167,  168,
+      169,  170,  171,  172,  177,  178,  179,  183,  184,  189,
+      190,  191,  196,  197,  198,  199,  200,  201,  202,  205,
+      208,  210,  212,  213,  215,  217,  218,  219,  220,  221,
+      225,  226,  227,  228,  231,  234,  236,  238,  239,  240,
+      243,  246,  248,  250,  253,  256,  258,  260,  261,  264,
+      267,  269,  271,  272,  273,  274,  275,  276,  277,  280,
+      283,  285,  287,  288,  289,  291,  292,  293,  294,  295,
+      296,  297,  300,  303,  305,  307,  308,  309,  313,  316,
+      319,  321,  323,  324,  325,  326,  329,  332,  334,  336,
+
+      337,  340,  343,  345,  347,  348,  349,  352,  355,  357,
+      359,  360,  361,  362,  364,  365,  367,  368,  369,  370,
+      371,  372,  373,  374,  375,  377,  380,  383,  385,  387,
+      388,  389,  390,  393,  396,  398,  400,  402,  403,  404,
+      405,  406,  407,  409,  411,  413,  414,  415,  416,  417,
+      418,  419,  420,  421,  422,  424,  426,  427,  428,  431,
+      434,  436,  438,  439,  441,  442,  444,  445,  449,  451,
+      452,  453,  455,  456,  458,  459,  460,  461,  462,  463,
+      465,  466,  468,  470,  471,  472,  474,  475,  476,  477,
+      479,  482,  485,  487,  489,  491,  492,  493,  495,  496,
+
+      497,  498,  499,  500,  502,  503,  504,  505,  506,  508,
+      511,  514,  516,  518,  519,  520,  521,  522,  524,  525,
+      527,  528,  529,  530,  531,  532,  534,  535,  536,  537,
+      538,  540,  541,  543,  544,  546,  547,  548,  550,  551,
+      553,  554,  556,  557,  559,  561,  562,  564,  565,  566,
+      568,  569,  571,  572,  574,  576,  578,  579,  581,  583,
+      583
     } ;
 
 static const YY_CHAR yy_ec[256] =
@@ -708,216 +710,217 @@ static const YY_CHAR yy_meta[35] =
         1,    1,    1,    1
     } ;
 
-static const flex_int16_t yy_base[424] =
+static const flex_int16_t yy_base[430] =
     {   0,
-        0,    0,  901,  900,  902,  897,   33,   36,  905,  905,
-       45,   63,   31,   42,   51,   52,  890,   33,   65,   67,
-       69,   70,  889,   71,  888,   75,    0,  905,  893,  905,
-       91,   94,    0,    0,  103,  886,  112,    0,   89,   98,
-      113,   92,  114,   99,  100,   48,  121,  116,  119,   74,
-      124,  129,  123,  135,  132,  133,  137,  134,  138,  139,
-      141,    0,  155,    0,    0,  164,    0,    0,  849,  142,
-      152,  164,  140,  161,  165,  166,  167,  168,  169,  173,
-      174,  178,  176,  180,  184,  208,  189,  183,  192,  195,
-      215,  191,  193,  223,    0,    0,  905,  208,  204,  236,
-
-      219,  209,  238,  196,  237,  831,  242,  815,  241,  224,
-      243,  261,  244,  259,  277,  266,  286,  250,  288,  298,
-      249,  283,  274,  282,  294,  308,    0,  310,    0,  295,
-      305,  905,  308,  306,  313,  314,  342,  319,  316,  320,
-      331,    0,  349,    0,  342,  344,  356,    0,  358,    0,
-      365,    0,  367,    0,  354,  375,    0,  377,    0,  363,
-      356,  809,  327,  322,  384,    0,    0,    0,    0,  379,
-      905,  382,  384,  386,  390,  372,  392,  403,    0,  410,
-        0,  407,  413,  423,  426,    0,    0,    0,    0,  409,
-      424,  435,    0,    0,    0,    0,  437,    0,    0,    0,
-
-        0,  433,  444,    0,    0,    0,    0,  391,  440,  781,
-      905,  769,  439,  445,  444,  447,  449,  454,  453,  399,
-      464,    0,    0,    0,    0,  757,  465,  476,    0,  478,
-        0,  479,  476,  753,  462,  490,  749,  905,  745,  905,
-      483,  737,  424,  485,  487,  490,  500,  493,  905,  729,
-      905,  502,  518,    0,    0,    0,    0,  905,  498,  721,
-      905,  527,  713,    0,  705,  905,  495,  697,  905,  365,
-      521,  528,  530,  685,  905,  534,  540,  540,  657,  905,
-      537,  542,  650,  905,  553,    0,  557,    0,    0,  551,
-      641,  905,  558,  557,  633,  614,  613,  905,  547,  555,
-
-      563,  565,  569,  584,    0,    0,    0,    0,  583,  570,
-      585,  612,  905,  601,  905,  522,  580,  589,  594,  905,
-      600,  585,  563,  520,  905,  514,  905,  586,  486,  597,
-      480,  441,  905,  416,  905,  345,  905,  334,  905,  601,
-      254,  905,  242,  905,  200,  905,  151,  905,  905,  607,
-       86,  905,  905,  905,  620,  624,  627,  631,  635,  639,
-      643,  647,  651,  655,  659,  663,  667,  671,  675,  679,
-      683,  687,  691,  695,  699,  703,  707,  711,  715,  719,
-      723,  727,  731,  735,  739,  743,  747,  751,  755,  759,
-      763,  767,  771,  775,  779,  783,  787,  791,  795,  799,
-
-      803,  807,  811,  815,  819,  823,  827,  831,  835,  839,
-      843,  847,  851,  855,  859,  863,  867,  871,  875,  879,
-      883,  887,  891
+        0,    0,  912,  911,  913,  908,   33,   36,  916,  916,
+       45,   63,   31,   42,   51,   52,  901,   33,   65,   67,
+       69,   70,  900,   71,  899,   77,    0,  916,  904,  916,
+       93,   96,    0,    0,  105,  897,  114,    0,   91,  100,
+      115,   94,  116,  101,  102,   48,   74,  118,  121,  123,
+       78,  128,  131,  137,  124,  125,  133,  135,  136,  140,
+      142,  141,    0,  163,    0,    0,  166,    0,    0,  902,
+      143,  146,  163,  164,  166,  167,  149,  169,  170,  175,
+      179,  176,  182,  177,  184,  192,  212,  193,  186,  196,
+      187,  219,  201,  150,  199,  227,    0,    0,  916,  209,
+
+      212,  243,  224,  213,  245,  223,  198,  895,  231,  894,
+      244,  230,  243,  261,  255,  259,  279,  266,  288,  275,
+      291,  301,  268,  284,  298,  301,  285,  302,  311,    0,
+      314,    0,  311,  318,  916,  312,  317,  246,  232,  342,
+      320,  325,  323,  349,    0,  351,    0,  344,  349,  360,
+        0,  363,    0,  367,    0,  370,    0,  330,  377,    0,
+      379,    0,  365,  358,  899,  368,  329,  331,  381,    0,
+        0,    0,    0,  381,  916,  383,  385,  387,  391,  397,
+      393,  409,    0,  411,    0,  412,  414,  424,  427,    0,
+        0,    0,    0,  422,  425,  436,    0,    0,    0,    0,
+
+      438,    0,    0,    0,    0,  434,  445,    0,    0,    0,
+        0,  440,  442,  898,  916,  400,  897,  443,  448,  449,
+      451,  453,  458,  457,  413,  469,    0,    0,    0,    0,
+      896,  469,  480,    0,  482,    0,  483,  480,  895,  489,
+      497,  894,  916,  916,  851,  916,  490,  839,  478,  492,
+      494,  497,  507,  501,  916,  823,  916,  509,  525,    0,
+        0,    0,    0,  916,  505,  811,  916,  534,  783,    0,
+      771,  916,  518,  759,  916,  523,  528,  538,  540,  755,
+      916,  511,  540,  549,  751,  916,  544,  547,  747,  916,
+      560,    0,  562,    0,    0,  555,  739,  916,  484,  561,
+
+      731,  723,  715,  916,  449,  566,  564,  566,  576,  578,
+        0,    0,    0,    0,  584,  574,  586,  707,  916,  699,
+      916,  581,  587,  590,  597,  916,  687,  659,  652,  643,
+      916,  635,  916,  597,  616,  599,  614,  604,  916,  600,
+      916,  541,  916,  467,  916,  603,  455,  916,  404,  916,
+      385,  916,  328,  916,  916,  609,  203,  916,  916,  916,
+      622,  626,  629,  633,  637,  641,  645,  649,  653,  657,
+      661,  665,  669,  673,  677,  681,  685,  689,  693,  697,
+      701,  705,  709,  713,  717,  721,  725,  729,  733,  737,
+      741,  745,  749,  753,  757,  761,  765,  769,  773,  777,
+
+      781,  785,  789,  793,  797,  801,  805,  809,  813,  817,
+      821,  825,  829,  833,  837,  841,  845,  849,  853,  857,
+      861,  865,  869,  873,  877,  881,  885,  889,  893
     } ;
 
-static const flex_int16_t yy_def[424] =
+static const flex_int16_t yy_def[430] =
     {   0,
-      354,    1,  355,  355,  354,  356,  357,  357,  354,  354,
-      358,  358,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,   12,  359,  354,  356,  354,
-      360,  357,  361,  361,  362,   12,  356,  363,   12,   12,
+      360,    1,  361,  361,  360,  362,  363,  363,  360,  360,
+      364,  364,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,   12,   12,   12,  365,  360,  362,  360,
+      366,  363,  367,  367,  368,   12,  362,  369,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  359,  360,  361,  361,  364,  365,  365,  354,   12,
+       12,   12,  365,  366,  367,  367,  370,  371,  371,  360,
        12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,   12,   12,   12,   12,  362,   12,   12,   12,   12,
-       12,   12,   12,  364,  365,  365,  354,   12,   12,  366,
-
-       12,   12,   12,   12,   12,   12,   12,   12,   12,   12,
-       12,  367,   86,   86,  368,   12,  369,   12,   12,  370,
-       12,   12,   12,   12,   12,  371,  372,  366,  372,   12,
-       12,  354,   86,   12,   12,   12,  373,   12,   12,   12,
-      374,  375,  367,  375,   86,   86,  376,  377,  368,  377,
-      378,  379,  369,  379,   12,  380,  381,  370,  381,   12,
-       12,  382,   12,   12,  371,  372,  372,  383,  383,   12,
-      354,   86,   86,   86,   12,   12,   12,  384,  385,  373,
-      385,   12,   12,  386,  374,  375,  375,  387,  387,   86,
-       86,  376,  377,  377,  388,  388,  378,  379,  379,  389,
-
-      389,   12,  380,  381,  381,  390,  390,   12,   12,  391,
-      354,  392,   86,   12,   86,   86,   86,   12,   86,   12,
-      384,  385,  385,  393,  393,  394,   86,  395,  396,  386,
-      396,   86,   86,  397,   12,  398,  391,  354,  399,  354,
-       86,  400,   12,   86,   86,   86,  401,   86,  354,  402,
-      354,   86,  395,  396,  396,  403,  403,  354,   86,  404,
-      354,  405,  406,  406,  399,  354,   86,  407,  354,   12,
-       86,   86,   86,  408,  354,  408,  408,   86,  402,  354,
-       86,   86,  404,  354,  409,  410,  405,  410,  406,   86,
-      407,  354,   12,   86,  411,  412,  408,  354,  408,  408,
-
-       86,   86,   86,  409,  410,  410,  413,  413,   86,   12,
-       86,  414,  354,  415,  354,  408,  408,   86,   86,  354,
-      416,  417,  418,  414,  354,  415,  354,  408,  408,   86,
-      419,  420,  354,  421,  354,  422,  354,  408,  354,   86,
-      423,  354,  420,  354,  421,  354,  422,  354,  354,   86,
-      423,  354,  354,    0,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354
+       12,   12,   12,   12,   12,   12,  368,   12,   12,   12,
+       12,   12,   12,   12,   12,  370,  371,  371,  360,   12,
+
+       12,  372,   12,   12,   12,   12,   12,   12,   12,   12,
+       12,   12,   12,  373,   87,   87,  374,   12,  375,   12,
+       12,  376,   12,   12,   12,   12,   12,   12,  377,  378,
+      372,  378,   12,   12,  360,   87,   12,   12,   12,  379,
+       12,   12,   12,  380,  381,  373,  381,   87,   87,  382,
+      383,  374,  383,  384,  385,  375,  385,   12,  386,  387,
+      376,  387,   12,   12,  388,   12,   12,   12,  377,  378,
+      378,  389,  389,   12,  360,   87,   87,   87,   12,   12,
+       12,  390,  391,  379,  391,   12,   12,  392,  380,  381,
+      381,  393,  393,   87,   87,  382,  383,  383,  394,  394,
+
+      384,  385,  385,  395,  395,   12,  386,  387,  387,  396,
+      396,   12,   12,  397,  360,   12,  398,   87,   12,   87,
+       87,   87,   12,   87,   12,  390,  391,  391,  399,  399,
+      400,   87,  401,  402,  392,  402,   87,   87,  403,   12,
+      404,  397,  360,  360,  405,  360,   87,  406,   12,   87,
+       87,   87,  407,   87,  360,  408,  360,   87,  401,  402,
+      402,  409,  409,  360,   87,  410,  360,  411,  412,  412,
+      405,  360,   87,  413,  360,   12,   87,   87,   87,  414,
+      360,  414,  414,   87,  408,  360,   87,   87,  410,  360,
+      415,  416,  411,  416,  412,   87,  413,  360,   12,   87,
+
+      417,  418,  414,  360,  414,  414,   87,   87,   87,  415,
+      416,  416,  419,  419,   87,   12,   87,  420,  360,  421,
+      360,  414,  414,   87,   87,  360,  422,  423,  424,  420,
+      360,  421,  360,  414,  414,   87,  425,  426,  360,  427,
+      360,  428,  360,  414,  360,   87,  429,  360,  426,  360,
+      427,  360,  428,  360,  360,   87,  429,  360,  360,    0,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360
     } ;
 
-static const flex_int16_t yy_nxt[940] =
+static const flex_int16_t yy_nxt[951] =
     {   0,
         6,    7,    8,    9,    6,    6,    6,    6,   10,   11,
        12,   13,   14,   15,   16,   17,   18,   19,   17,   17,
        17,   17,   20,   17,   21,   22,   23,   24,   25,   17,
        26,   17,   17,   17,   32,   32,   33,   32,   32,   33,
        36,   34,   36,   42,   34,   29,   29,   29,   30,   35,
-       50,   36,   37,   38,   43,   44,   39,   36,   79,   45,
+       50,   36,   37,   38,   43,   44,   39,   36,   80,   45,
        36,   36,   40,   29,   29,   29,   30,   35,   46,   48,
        37,   38,   41,   47,   36,   49,   36,   53,   36,   36,
-       36,   56,   58,   36,   36,   55,   82,   60,   51,  342,
-       54,   61,   52,   29,   64,   32,   32,   33,   36,   65,
-
-       70,   36,   34,   29,   29,   29,   30,   36,   36,   36,
-       29,   38,   66,   66,   66,   67,   66,   71,   74,   66,
-       68,   72,   36,   36,   73,   36,   77,   78,   36,   76,
-       36,   53,   36,   36,   75,   85,   80,   83,   36,   86,
-       84,   36,   36,   36,   36,   81,   36,   36,   36,   36,
-       36,   36,   93,   89,  337,   98,   88,   29,   64,  101,
-       90,   36,   91,   65,   92,   87,   29,   95,   89,   99,
-       36,  100,   96,   36,   36,   36,   36,   36,   36,  106,
-      105,   85,   36,   36,  102,   36,  107,   36,  103,   36,
-      109,  112,   36,   36,  104,  108,  115,  110,   36,  117,
-
-       36,   36,   36,  335,   36,   36,  122,  111,   29,   29,
-       29,   30,  118,   36,  116,   29,   38,   36,   36,  113,
-      114,  119,  120,  123,   36,   29,   95,  121,   36,  134,
-      131,   96,  130,   36,  125,  124,  126,  126,   66,  127,
-      126,  132,  133,  126,  129,  333,   36,   36,  135,  137,
-       36,   36,   36,  140,  139,   35,   35,  352,   36,   36,
-       85,  141,  141,   66,  142,  141,  160,  145,  141,  144,
-       35,   35,   89,  117,  155,   36,  146,  147,  147,   66,
-      148,  147,  162,   36,  147,  150,  151,  151,   66,  152,
-      151,   36,   36,  151,  154,  120,  161,   36,  156,  156,
-
-       66,  157,  156,   36,   36,  156,  159,  164,  171,  163,
-       29,  166,   29,  168,   36,   36,  167,  170,  169,   35,
-       35,  172,   36,   36,  173,   36,  213,  184,   36,   36,
-      175,   36,  174,   29,  186,  212,   36,  349,  183,  187,
-      177,  176,  178,  178,   66,  179,  178,  182,  348,  178,
-      181,   29,  188,   35,   35,   35,   35,  189,   29,  193,
-       29,  195,  190,   36,  194,   36,  196,   29,  198,   29,
-      200,  191,   36,  199,   36,  201,  219,   29,  204,   29,
-      206,   36,  202,  205,  209,  207,   29,  166,   36,  293,
-      208,  214,  167,   35,   35,   35,   35,   35,   35,   36,
-
-       36,   36,  249,  218,  220,   29,  222,  216,   36,  217,
-      235,  223,   29,  224,  215,  226,   36,  227,  225,  346,
-       35,   35,   36,  228,  228,   66,  229,  228,   29,  186,
-      228,  231,  232,   36,  187,  233,   35,   29,  193,   29,
-      198,  234,   36,  194,  344,  199,   29,  204,  236,   36,
-       35,  241,  205,  242,   36,   35,   35,  270,   35,   35,
-       35,   35,  247,   36,   35,   35,   29,  222,  244,  262,
-      248,   36,  223,  243,  245,  246,   35,  252,   29,  254,
-       29,  256,  258,  342,  255,  259,  257,   35,   35,  339,
-       35,   35,   69,  264,   35,   35,   35,   35,   35,   35,
-
-      267,   35,   35,  275,   35,   35,   35,   35,  271,   35,
-       35,  276,  277,   35,   35,  272,  278,  315,  273,  281,
-       29,  254,  290,  313,  282,  275,  255,  285,  285,   66,
-      286,  285,   35,   35,  285,  288,  295,  298,  296,   35,
-       35,   35,   35,  298,  301,  328,  299,  294,   35,   35,
-      275,   35,   35,   35,  303,   29,  305,  300,  275,   29,
-      307,  306,   35,   35,  302,  308,  337,   36,   35,   35,
-      309,  310,  320,  316,   35,   35,   35,   35,  322,   36,
-       35,   35,  317,  275,  319,  311,   29,  305,  335,  275,
-      318,  321,  306,  323,   35,   35,   35,   35,  330,  329,
-
-       35,   35,  331,  333,  327,   35,   35,  338,   35,   35,
-      353,  340,   35,   35,  350,  325,  275,  315,   35,   35,
-       27,   27,   27,   27,   29,   29,   29,   31,   31,   31,
-       31,   36,   36,   36,   36,   62,  313,   62,   62,   63,
-       63,   63,   63,   65,  269,   65,   65,   35,   35,   35,
-       35,   69,   69,  261,   69,   94,   94,   94,   94,   96,
-      251,   96,   96,  128,  128,  128,  128,  143,  143,  143,
-      143,  149,  149,  149,  149,  153,  153,  153,  153,  158,
-      158,  158,  158,  165,  165,  165,  165,  167,  298,  167,
-      167,  180,  180,  180,  180,  185,  185,  185,  185,  187,
-
-      292,  187,  187,  192,  192,  192,  192,  194,  240,  194,
-      194,  197,  197,  197,  197,  199,  289,  199,  199,  203,
-      203,  203,  203,  205,  284,  205,  205,  210,  210,  210,
-      210,  169,  280,  169,  169,  221,  221,  221,  221,  223,
-      269,  223,  223,  230,  230,  230,  230,  189,  266,  189,
-      189,  196,  211,  196,  196,  201,  261,  201,  201,  207,
-      251,  207,  207,  237,  237,  237,  237,  239,  239,  239,
-      239,  225,  240,  225,  225,  250,  250,  250,  250,  253,
-      253,  253,  253,  255,  238,  255,  255,  260,  260,  260,
-      260,  263,  263,  263,  263,  265,  265,  265,  265,  268,
-
-      268,  268,  268,  274,  274,  274,  274,  279,  279,  279,
-      279,  257,  211,  257,  257,  283,  283,  283,  283,  287,
-      287,  287,  287,  264,  138,  264,  264,  291,  291,  291,
-      291,  297,  297,  297,  297,  304,  304,  304,  304,  306,
-      136,  306,  306,  312,  312,  312,  312,  314,  314,  314,
-      314,  308,   97,  308,  308,  324,  324,  324,  324,  326,
-      326,  326,  326,  332,  332,  332,  332,  334,  334,  334,
-      334,  336,  336,  336,  336,  341,  341,  341,  341,  343,
-      343,  343,  343,  345,  345,  345,  345,  347,  347,  347,
-      347,  351,  351,  351,  351,   36,   30,   59,   57,   36,
-
-       30,  354,   28,   28,    5,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       36,   56,   58,   36,   53,   55,   36,   36,   51,   60,
+       54,   84,   52,   61,   62,   29,   65,   32,   32,   33,
+
+       36,   66,   71,   36,   34,   29,   29,   29,   30,   36,
+       36,   36,   29,   38,   67,   67,   67,   68,   67,   72,
+       75,   67,   69,   73,   36,   36,   74,   36,   78,   79,
+       36,   77,   36,   36,   36,   83,   76,   36,   81,   85,
+       36,   87,   36,   86,   36,   36,   36,   82,   89,   36,
+       36,   36,   36,   94,   90,   36,  100,   88,   36,   36,
+       92,   91,   93,  101,   90,   29,   65,   95,   29,   97,
+      102,   66,   36,   36,   98,   36,   36,  106,   36,   36,
+      125,  108,  107,  103,   36,   36,   36,   86,   36,  104,
+      105,   36,  109,   36,  111,   36,   36,  110,  112,  114,
+
+      117,   36,   36,  119,  120,   36,  348,   36,   36,  138,
+       36,  113,   29,   29,   29,   30,  124,  118,   36,   29,
+       38,   36,   36,  115,  116,  121,  122,  126,   36,   29,
+       97,  123,   36,   36,  134,   98,  127,  133,  140,   36,
+       36,   36,  128,  129,  129,   67,  130,  129,  135,  136,
+      129,  132,   36,   36,   36,   36,  137,  142,  181,  143,
+       86,  144,  144,   67,  145,  144,   35,   35,  144,  147,
+       35,   35,   90,  119,  180,   36,  149,   36,  148,  150,
+      150,   67,  151,  150,   36,  163,  150,  153,  154,  154,
+       67,  155,  154,   36,   36,  154,  157,  164,  122,  158,
+
+       36,  159,  159,   67,  160,  159,  165,   36,  159,  162,
+       36,   36,  167,   29,  170,  168,   29,  172,  166,  171,
+       36,  175,  173,   35,   35,  176,   36,   36,  177,   36,
+      188,  343,   36,  174,   36,  218,  178,  217,   36,   36,
+       36,  179,  182,  182,   67,  183,  182,  187,  186,  182,
+      185,   29,  190,   29,  192,   35,   35,  191,  206,  193,
+       35,   35,   29,  197,  194,   29,  199,   36,  198,   29,
+      202,  200,   29,  204,   36,  203,  195,   36,  205,   29,
+      208,   29,  210,   29,  170,  209,  213,  211,  341,  171,
+       36,  216,  212,  219,   35,   35,   35,   35,   35,   35,
+
+       36,  224,   36,  244,  223,  225,   36,  339,  221,   36,
+      222,   29,  227,   29,  229,  220,  255,  228,  232,  230,
+      231,   36,   36,   36,  233,  233,   67,  234,  233,   29,
+      190,  233,  236,   35,   35,  191,  238,   35,   29,  197,
+       29,  202,  239,   36,  198,  237,  203,   29,  208,   36,
+      241,   36,  281,  209,   35,  247,  248,   36,  358,  240,
+       35,   35,   35,   35,   35,   35,  253,   36,   35,   35,
+      355,   29,  227,  250,  254,  322,  249,  228,  251,  252,
+       35,  258,   29,  260,   29,  262,  264,   36,  261,  265,
+      263,   35,   35,   36,   35,   35,  268,  316,   36,   70,
+
+      270,   35,   35,   35,   35,   35,   35,  273,   35,   35,
+      281,  276,   35,   35,  304,  277,   35,   35,  282,  283,
+       35,   35,  278,  305,  284,  279,  287,   29,  260,   35,
+       35,  288,   36,  261,  291,  291,   67,  292,  291,   35,
+       35,  291,  294,  304,  354,  296,  301,  299,  302,   35,
+       35,   35,   35,  307,  300,   35,   35,  306,   35,  309,
+       35,   35,   29,  311,   29,  313,   35,   35,  312,  281,
+      314,  308,   35,   35,  315,   35,   35,   35,   35,  326,
+       29,  311,  328,   36,  281,  325,  312,   35,   35,  317,
+      281,  324,  327,  323,  329,   35,   35,   35,   35,  336,
+
+      281,   35,   35,  352,  334,  337,  335,  350,   35,   35,
+       35,   35,  359,  346,   35,   35,  356,  348,  344,  345,
+       35,   35,   27,   27,   27,   27,   29,   29,   29,   31,
+       31,   31,   31,   36,   36,   36,   36,   63,  321,   63,
+       63,   64,   64,   64,   64,   66,  319,   66,   66,   35,
+       35,   35,   35,   70,   70,  343,   70,   96,   96,   96,
+       96,   98,  341,   98,   98,  131,  131,  131,  131,  146,
+      146,  146,  146,  152,  152,  152,  152,  156,  156,  156,
+      156,  161,  161,  161,  161,  169,  169,  169,  169,  171,
+      339,  171,  171,  184,  184,  184,  184,  189,  189,  189,
+
+      189,  191,  333,  191,  191,  196,  196,  196,  196,  198,
+      331,  198,  198,  201,  201,  201,  201,  203,  281,  203,
+      203,  207,  207,  207,  207,  209,  321,  209,  209,  214,
+      214,  214,  214,  173,  319,  173,  173,  226,  226,  226,
+      226,  228,  275,  228,  228,  235,  235,  235,  235,  193,
+      267,  193,  193,  200,  257,  200,  200,  205,  304,  205,
+      205,  211,  298,  211,  211,  242,  242,  242,  242,  245,
+      245,  245,  245,  230,  246,  230,  230,  256,  256,  256,
+      256,  259,  259,  259,  259,  261,  295,  261,  261,  266,
+      266,  266,  266,  269,  269,  269,  269,  271,  271,  271,
+
+      271,  274,  274,  274,  274,  280,  280,  280,  280,  285,
+      285,  285,  285,  263,  290,  263,  263,  289,  289,  289,
+      289,  293,  293,  293,  293,  270,  286,  270,  270,  297,
+      297,  297,  297,  303,  303,  303,  303,  310,  310,  310,
+      310,  312,  275,  312,  312,  318,  318,  318,  318,  320,
+      320,  320,  320,  314,  272,  314,  314,  330,  330,  330,
+      330,  332,  332,  332,  332,  338,  338,  338,  338,  340,
+      340,  340,  340,  342,  342,  342,  342,  347,  347,  347,
+      347,  349,  349,  349,  349,  351,  351,  351,  351,  353,
+      353,  353,  353,  357,  357,  357,  357,  215,  267,  257,
+
+      246,  243,  215,  141,  139,   99,   36,   30,   59,   57,
+       36,   30,  360,   28,   28,    5,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360
     } ;
 
-static const flex_int16_t yy_chk[940] =
+static const flex_int16_t yy_chk[951] =
     {   0,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
         1,    1,    1,    1,    1,    1,    1,    1,    1,    1,
@@ -927,101 +930,102 @@ static const flex_int16_t yy_chk[940] =
        18,   14,   11,   11,   13,   14,   11,   46,   46,   14,
        15,   16,   11,   12,   12,   12,   12,   12,   14,   16,
        12,   12,   12,   15,   19,   16,   20,   20,   21,   22,
-       24,   22,   24,   50,   26,   21,   50,   26,   19,  351,
-       20,   26,   19,   31,   31,   32,   32,   32,   39,   31,
-
-       39,   42,   32,   35,   35,   35,   35,   40,   44,   45,
-       35,   35,   37,   37,   37,   37,   37,   39,   42,   37,
-       37,   40,   41,   43,   41,   48,   45,   45,   49,   44,
-       47,   47,   53,   51,   43,   53,   48,   51,   52,   54,
-       52,   55,   56,   58,   54,   49,   57,   59,   60,   73,
-       61,   70,   60,   61,  347,   70,   56,   63,   63,   73,
-       58,   71,   59,   63,   59,   55,   66,   66,   57,   71,
-       74,   72,   66,   72,   75,   76,   77,   78,   79,   78,
-       77,   79,   80,   81,   74,   83,   80,   82,   75,   84,
-       82,   85,   88,   85,   76,   81,   87,   83,   87,   89,
-
-       92,   89,   93,  345,   90,  104,   92,   84,   86,   86,
-       86,   86,   90,   99,   88,   86,   86,   98,  102,   86,
-       86,   91,   91,   93,   91,   94,   94,   91,  101,  104,
-      102,   94,  101,  110,   99,   98,  100,  100,  100,  100,
-      100,  103,  103,  100,  100,  343,  105,  103,  105,  107,
-      109,  107,  111,  110,  109,  113,  113,  341,  121,  118,
-      111,  112,  112,  112,  112,  112,  121,  113,  112,  112,
-      114,  114,  116,  116,  118,  116,  114,  115,  115,  115,
-      115,  115,  123,  123,  115,  115,  117,  117,  117,  117,
-      117,  124,  122,  117,  117,  119,  122,  119,  120,  120,
-
-      120,  120,  120,  125,  130,  120,  120,  125,  131,  124,
-      126,  126,  128,  128,  131,  134,  126,  130,  128,  133,
-      133,  133,  135,  136,  133,  139,  164,  140,  138,  140,
-      134,  164,  133,  141,  141,  163,  163,  338,  139,  141,
-      136,  135,  137,  137,  137,  137,  137,  138,  336,  137,
-      137,  143,  143,  145,  145,  146,  146,  143,  147,  147,
-      149,  149,  145,  155,  147,  161,  149,  151,  151,  153,
-      153,  146,  160,  151,  270,  153,  176,  156,  156,  158,
-      158,  176,  155,  156,  161,  158,  165,  165,  170,  270,
-      160,  170,  165,  172,  172,  173,  173,  174,  174,  175,
-
-      208,  177,  220,  175,  177,  178,  178,  173,  220,  174,
-      208,  178,  180,  180,  172,  182,  182,  183,  180,  334,
-      190,  190,  183,  184,  184,  184,  184,  184,  185,  185,
-      184,  184,  190,  243,  185,  191,  191,  192,  192,  197,
-      197,  202,  202,  192,  332,  197,  203,  203,  209,  209,
-      213,  213,  203,  214,  214,  215,  215,  243,  216,  216,
-      217,  217,  218,  218,  219,  219,  221,  221,  215,  235,
-      219,  235,  221,  214,  216,  217,  227,  227,  228,  228,
-      230,  230,  232,  331,  228,  233,  230,  233,  233,  329,
-      232,  232,  236,  236,  241,  241,  244,  244,  245,  245,
-
-      241,  246,  246,  247,  248,  248,  267,  267,  244,  259,
-      259,  247,  247,  252,  252,  245,  248,  326,  246,  252,
-      253,  253,  267,  324,  259,  316,  253,  262,  262,  262,
-      262,  262,  271,  271,  262,  262,  272,  276,  273,  272,
-      272,  273,  273,  277,  278,  316,  276,  271,  281,  281,
-      299,  278,  278,  282,  282,  285,  285,  277,  300,  287,
-      287,  285,  290,  290,  281,  287,  323,  293,  294,  294,
-      290,  293,  303,  299,  301,  301,  302,  302,  310,  310,
-      303,  303,  300,  317,  302,  294,  304,  304,  322,  328,
-      301,  309,  304,  311,  309,  309,  311,  311,  318,  317,
-
-      318,  318,  319,  321,  314,  319,  319,  328,  330,  330,
-      350,  330,  340,  340,  340,  312,  297,  296,  350,  350,
-      355,  355,  355,  355,  356,  356,  356,  357,  357,  357,
-      357,  358,  358,  358,  358,  359,  295,  359,  359,  360,
-      360,  360,  360,  361,  291,  361,  361,  362,  362,  362,
-      362,  363,  363,  283,  363,  364,  364,  364,  364,  365,
-      279,  365,  365,  366,  366,  366,  366,  367,  367,  367,
-      367,  368,  368,  368,  368,  369,  369,  369,  369,  370,
-      370,  370,  370,  371,  371,  371,  371,  372,  274,  372,
-      372,  373,  373,  373,  373,  374,  374,  374,  374,  375,
-
-      268,  375,  375,  376,  376,  376,  376,  377,  265,  377,
-      377,  378,  378,  378,  378,  379,  263,  379,  379,  380,
-      380,  380,  380,  381,  260,  381,  381,  382,  382,  382,
-      382,  383,  250,  383,  383,  384,  384,  384,  384,  385,
-      242,  385,  385,  386,  386,  386,  386,  387,  239,  387,
-      387,  388,  237,  388,  388,  389,  234,  389,  389,  390,
-      226,  390,  390,  391,  391,  391,  391,  392,  392,  392,
-      392,  393,  212,  393,  393,  394,  394,  394,  394,  395,
-      395,  395,  395,  396,  210,  396,  396,  397,  397,  397,
-      397,  398,  398,  398,  398,  399,  399,  399,  399,  400,
-
-      400,  400,  400,  401,  401,  401,  401,  402,  402,  402,
-      402,  403,  162,  403,  403,  404,  404,  404,  404,  405,
-      405,  405,  405,  406,  108,  406,  406,  407,  407,  407,
-      407,  408,  408,  408,  408,  409,  409,  409,  409,  410,
-      106,  410,  410,  411,  411,  411,  411,  412,  412,  412,
-      412,  413,   69,  413,  413,  414,  414,  414,  414,  415,
-      415,  415,  415,  416,  416,  416,  416,  417,  417,  417,
-      417,  418,  418,  418,  418,  419,  419,  419,  419,  420,
-      420,  420,  420,  421,  421,  421,  421,  422,  422,  422,
-      422,  423,  423,  423,  423,   36,   29,   25,   23,   17,
-
-        6,    5,    4,    3,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354,  354,
-      354,  354,  354,  354,  354,  354,  354,  354,  354
+       24,   22,   24,   47,   47,   21,   26,   51,   19,   26,
+       20,   51,   19,   26,   26,   31,   31,   32,   32,   32,
+
+       39,   31,   39,   42,   32,   35,   35,   35,   35,   40,
+       44,   45,   35,   35,   37,   37,   37,   37,   37,   39,
+       42,   37,   37,   40,   41,   43,   41,   48,   45,   45,
+       49,   44,   50,   55,   56,   50,   43,   52,   48,   52,
+       53,   54,   57,   53,   58,   59,   54,   49,   56,   60,
+       62,   61,   71,   60,   61,   72,   71,   55,   77,   94,
+       59,   58,   59,   72,   57,   64,   64,   62,   67,   67,
+       73,   64,   73,   74,   67,   75,   76,   77,   78,   79,
+       94,   79,   78,   74,   80,   82,   84,   80,   81,   75,
+       76,   83,   81,   85,   83,   89,   91,   82,   84,   86,
+
+       88,   86,   88,   90,   91,   90,  357,  107,   95,  107,
+       93,   85,   87,   87,   87,   87,   93,   89,  100,   87,
+       87,  101,  104,   87,   87,   92,   92,   95,   92,   96,
+       96,   92,  106,  103,  104,   96,  100,  103,  109,  112,
+      109,  139,  101,  102,  102,  102,  102,  102,  105,  105,
+      102,  102,  113,  111,  105,  138,  106,  111,  139,  112,
+      113,  114,  114,  114,  114,  114,  115,  115,  114,  114,
+      116,  116,  118,  118,  138,  118,  116,  123,  115,  117,
+      117,  117,  117,  117,  120,  123,  117,  117,  119,  119,
+      119,  119,  119,  124,  127,  119,  119,  124,  121,  120,
+
+      121,  122,  122,  122,  122,  122,  125,  125,  122,  122,
+      126,  128,  127,  129,  129,  128,  131,  131,  126,  129,
+      133,  134,  131,  136,  136,  136,  137,  134,  136,  141,
+      143,  353,  143,  133,  142,  168,  136,  167,  167,  158,
+      168,  137,  140,  140,  140,  140,  140,  142,  141,  140,
+      140,  144,  144,  146,  146,  148,  148,  144,  158,  146,
+      149,  149,  150,  150,  148,  152,  152,  164,  150,  154,
+      154,  152,  156,  156,  163,  154,  149,  166,  156,  159,
+      159,  161,  161,  169,  169,  159,  164,  161,  351,  169,
+      174,  166,  163,  174,  176,  176,  177,  177,  178,  178,
+
+      179,  180,  181,  216,  179,  181,  180,  349,  177,  216,
+      178,  182,  182,  184,  184,  176,  225,  182,  187,  184,
+      186,  186,  225,  187,  188,  188,  188,  188,  188,  189,
+      189,  188,  188,  194,  194,  189,  195,  195,  196,  196,
+      201,  201,  206,  206,  196,  194,  201,  207,  207,  212,
+      213,  213,  305,  207,  218,  218,  219,  219,  347,  212,
+      220,  220,  221,  221,  222,  222,  223,  223,  224,  224,
+      344,  226,  226,  220,  224,  305,  219,  226,  221,  222,
+      232,  232,  233,  233,  235,  235,  237,  249,  233,  238,
+      235,  238,  238,  299,  237,  237,  240,  299,  240,  241,
+
+      241,  247,  247,  250,  250,  251,  251,  247,  252,  252,
+      253,  249,  254,  254,  282,  250,  265,  265,  253,  253,
+      258,  258,  251,  282,  254,  252,  258,  259,  259,  273,
+      273,  265,  276,  259,  268,  268,  268,  268,  268,  277,
+      277,  268,  268,  283,  342,  273,  278,  276,  279,  278,
+      278,  279,  279,  284,  277,  287,  287,  283,  288,  288,
+      284,  284,  291,  291,  293,  293,  296,  296,  291,  306,
+      293,  287,  300,  300,  296,  307,  307,  308,  308,  309,
+      310,  310,  316,  316,  322,  308,  310,  309,  309,  300,
+      323,  307,  315,  306,  317,  315,  315,  317,  317,  324,
+
+      334,  324,  324,  340,  322,  325,  323,  338,  325,  325,
+      336,  336,  356,  336,  346,  346,  346,  337,  334,  335,
+      356,  356,  361,  361,  361,  361,  362,  362,  362,  363,
+      363,  363,  363,  364,  364,  364,  364,  365,  332,  365,
+      365,  366,  366,  366,  366,  367,  330,  367,  367,  368,
+      368,  368,  368,  369,  369,  329,  369,  370,  370,  370,
+      370,  371,  328,  371,  371,  372,  372,  372,  372,  373,
+      373,  373,  373,  374,  374,  374,  374,  375,  375,  375,
+      375,  376,  376,  376,  376,  377,  377,  377,  377,  378,
+      327,  378,  378,  379,  379,  379,  379,  380,  380,  380,
+
+      380,  381,  320,  381,  381,  382,  382,  382,  382,  383,
+      318,  383,  383,  384,  384,  384,  384,  385,  303,  385,
+      385,  386,  386,  386,  386,  387,  302,  387,  387,  388,
+      388,  388,  388,  389,  301,  389,  389,  390,  390,  390,
+      390,  391,  297,  391,  391,  392,  392,  392,  392,  393,
+      289,  393,  393,  394,  285,  394,  394,  395,  280,  395,
+      395,  396,  274,  396,  396,  397,  397,  397,  397,  398,
+      398,  398,  398,  399,  271,  399,  399,  400,  400,  400,
+      400,  401,  401,  401,  401,  402,  269,  402,  402,  403,
+      403,  403,  403,  404,  404,  404,  404,  405,  405,  405,
+
+      405,  406,  406,  406,  406,  407,  407,  407,  407,  408,
+      408,  408,  408,  409,  266,  409,  409,  410,  410,  410,
+      410,  411,  411,  411,  411,  412,  256,  412,  412,  413,
+      413,  413,  413,  414,  414,  414,  414,  415,  415,  415,
+      415,  416,  248,  416,  416,  417,  417,  417,  417,  418,
+      418,  418,  418,  419,  245,  419,  419,  420,  420,  420,
+      420,  421,  421,  421,  421,  422,  422,  422,  422,  423,
+      423,  423,  423,  424,  424,  424,  424,  425,  425,  425,
+      425,  426,  426,  426,  426,  427,  427,  427,  427,  428,
+      428,  428,  428,  429,  429,  429,  429,  242,  239,  231,
+
+      217,  214,  165,  110,  108,   70,   36,   29,   25,   23,
+       17,    6,    5,    4,    3,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360,
+      360,  360,  360,  360,  360,  360,  360,  360,  360,  360
     } ;
 
 #define YY_TRAILING_MASK 0x2000
@@ -1199,9 +1203,9 @@ static int vdev_and_devtype(DiskParseContext *dpc, char *str) {
 #undef DPC /* needs to be defined differently the actual lexer */
 #define DPC ((DiskParseContext*)yyextra)
 
-#line 1202 "libxlu_disk_l.c"
+#line 1206 "libxlu_disk_l.c"
 
-#line 1204 "libxlu_disk_l.c"
+#line 1208 "libxlu_disk_l.c"
 
 #define INITIAL 0
 #define LEXERR 1
@@ -1483,7 +1487,7 @@ YY_DECL
 #line 180 "libxlu_disk_l.l"
  /*----- the scanner rules which do the parsing -----*/
 
-#line 1486 "libxlu_disk_l.c"
+#line 1490 "libxlu_disk_l.c"
 
 	while ( /*CONSTCOND*/1 )		/* loops until end-of-file is reached */
 		{
@@ -1515,14 +1519,14 @@ yy_match:
 			while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 				{
 				yy_current_state = (int) yy_def[yy_current_state];
-				if ( yy_current_state >= 355 )
+				if ( yy_current_state >= 361 )
 					yy_c = yy_meta[yy_c];
 				}
 			yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
 			*yyg->yy_state_ptr++ = yy_current_state;
 			++yy_cp;
 			}
-		while ( yy_current_state != 354 );
+		while ( yy_current_state != 360 );
 
 yy_find_action:
 		yy_current_state = *--yyg->yy_state_ptr;
@@ -1648,76 +1652,81 @@ YY_RULE_SETUP
 #line 201 "libxlu_disk_l.l"
 { libxl_defbool_set(&DPC->disk->discard_enable, false); }
 	YY_BREAK
-/* Note that the COLO configuration settings should be considered unstable.
-  * They may change incompatibly in future versions of Xen. */
 case 15:
 YY_RULE_SETUP
-#line 204 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
+#line 202 "libxlu_disk_l.l"
+{ DPC->disk->virtio = 1; }
 	YY_BREAK
+/* Note that the COLO configuration settings should be considered unstable.
+  * They may change incompatibly in future versions of Xen. */
 case 16:
 YY_RULE_SETUP
 #line 205 "libxlu_disk_l.l"
-{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
+{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
 	YY_BREAK
 case 17:
-/* rule 17 can match eol */
 YY_RULE_SETUP
 #line 206 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
+{ libxl_defbool_set(&DPC->disk->colo_enable, false); }
 	YY_BREAK
 case 18:
 /* rule 18 can match eol */
 YY_RULE_SETUP
 #line 207 "libxlu_disk_l.l"
-{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
+{ STRIP(','); SAVESTRING("colo-host", colo_host, FROMEQUALS); }
 	YY_BREAK
 case 19:
 /* rule 19 can match eol */
 YY_RULE_SETUP
 #line 208 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
+{ STRIP(','); setcoloport(DPC, FROMEQUALS); }
 	YY_BREAK
 case 20:
 /* rule 20 can match eol */
 YY_RULE_SETUP
 #line 209 "libxlu_disk_l.l"
-{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+{ STRIP(','); SAVESTRING("colo-export", colo_export, FROMEQUALS); }
 	YY_BREAK
 case 21:
 /* rule 21 can match eol */
 YY_RULE_SETUP
 #line 210 "libxlu_disk_l.l"
+{ STRIP(','); SAVESTRING("active-disk", active_disk, FROMEQUALS); }
+	YY_BREAK
+case 22:
+/* rule 22 can match eol */
+YY_RULE_SETUP
+#line 211 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("hidden-disk", hidden_disk, FROMEQUALS); }
 	YY_BREAK
 /* the target magic parameter, eats the rest of the string */
-case 22:
+case 23:
 YY_RULE_SETUP
-#line 214 "libxlu_disk_l.l"
+#line 215 "libxlu_disk_l.l"
 { STRIP(','); SAVESTRING("target", pdev_path, FROMEQUALS); }
 	YY_BREAK
 /* unknown parameters */
-case 23:
-/* rule 23 can match eol */
+case 24:
+/* rule 24 can match eol */
 YY_RULE_SETUP
-#line 218 "libxlu_disk_l.l"
+#line 219 "libxlu_disk_l.l"
 { xlu__disk_err(DPC,yytext,"unknown parameter"); }
 	YY_BREAK
 /* deprecated prefixes */
 /* the "/.*" in these patterns ensures that they count as if they
    * matched the whole string, so these patterns take precedence */
-case 24:
+case 25:
 YY_RULE_SETUP
-#line 225 "libxlu_disk_l.l"
+#line 226 "libxlu_disk_l.l"
 {
                     STRIP(':');
                     DPC->had_depr_prefix=1; DEPRECATE("use `[format=]...,'");
                     setformat(DPC, yytext);
                  }
 	YY_BREAK
-case 25:
+case 26:
 YY_RULE_SETUP
-#line 231 "libxlu_disk_l.l"
+#line 232 "libxlu_disk_l.l"
 {
                     char *newscript;
                     STRIP(':');
@@ -1731,30 +1740,22 @@ YY_RULE_SETUP
                     free(newscript);
                 }
 	YY_BREAK
-case 26:
+case 27:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
 yyg->yy_c_buf_p = yy_cp = yy_bp + 8;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
-#line 244 "libxlu_disk_l.l"
-{ DPC->had_depr_prefix=1; DEPRECATE(0); }
-	YY_BREAK
-case 27:
-YY_RULE_SETUP
 #line 245 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 28:
-*yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
-yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
-YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
 #line 246 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 29:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
-yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
+yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
 #line 247 "libxlu_disk_l.l"
@@ -1762,7 +1763,7 @@ YY_RULE_SETUP
 	YY_BREAK
 case 30:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
-yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
+yyg->yy_c_buf_p = yy_cp = yy_bp + 6;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
 #line 248 "libxlu_disk_l.l"
@@ -1770,26 +1771,34 @@ YY_RULE_SETUP
 	YY_BREAK
 case 31:
 *yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
-yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
+yyg->yy_c_buf_p = yy_cp = yy_bp + 5;
 YY_DO_BEFORE_ACTION; /* set up yytext again */
 YY_RULE_SETUP
 #line 249 "libxlu_disk_l.l"
 { DPC->had_depr_prefix=1; DEPRECATE(0); }
 	YY_BREAK
 case 32:
-/* rule 32 can match eol */
+*yy_cp = yyg->yy_hold_char; /* undo effects of setting up yytext */
+yyg->yy_c_buf_p = yy_cp = yy_bp + 4;
+YY_DO_BEFORE_ACTION; /* set up yytext again */
+YY_RULE_SETUP
+#line 250 "libxlu_disk_l.l"
+{ DPC->had_depr_prefix=1; DEPRECATE(0); }
+	YY_BREAK
+case 33:
+/* rule 33 can match eol */
 YY_RULE_SETUP
-#line 251 "libxlu_disk_l.l"
+#line 252 "libxlu_disk_l.l"
 {
 		  xlu__disk_err(DPC,yytext,"unknown deprecated disk prefix");
 		  return 0;
 		}
 	YY_BREAK
 /* positional parameters */
-case 33:
-/* rule 33 can match eol */
+case 34:
+/* rule 34 can match eol */
 YY_RULE_SETUP
-#line 258 "libxlu_disk_l.l"
+#line 259 "libxlu_disk_l.l"
 {
     STRIP(',');
 
@@ -1816,27 +1825,27 @@ YY_RULE_SETUP
     }
 }
 	YY_BREAK
-case 34:
+case 35:
 YY_RULE_SETUP
-#line 284 "libxlu_disk_l.l"
+#line 285 "libxlu_disk_l.l"
 {
     BEGIN(LEXERR);
     yymore();
 }
 	YY_BREAK
-case 35:
+case 36:
 YY_RULE_SETUP
-#line 288 "libxlu_disk_l.l"
+#line 289 "libxlu_disk_l.l"
 {
     xlu__disk_err(DPC,yytext,"bad disk syntax"); return 0;
 }
 	YY_BREAK
-case 36:
+case 37:
 YY_RULE_SETUP
-#line 291 "libxlu_disk_l.l"
+#line 292 "libxlu_disk_l.l"
 YY_FATAL_ERROR( "flex scanner jammed" );
 	YY_BREAK
-#line 1839 "libxlu_disk_l.c"
+#line 1848 "libxlu_disk_l.c"
 			case YY_STATE_EOF(INITIAL):
 			case YY_STATE_EOF(LEXERR):
 				yyterminate();
@@ -2104,7 +2113,7 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 		while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 			{
 			yy_current_state = (int) yy_def[yy_current_state];
-			if ( yy_current_state >= 355 )
+			if ( yy_current_state >= 361 )
 				yy_c = yy_meta[yy_c];
 			}
 		yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
@@ -2128,11 +2137,11 @@ static int yy_get_next_buffer (yyscan_t yyscanner)
 	while ( yy_chk[yy_base[yy_current_state] + yy_c] != yy_current_state )
 		{
 		yy_current_state = (int) yy_def[yy_current_state];
-		if ( yy_current_state >= 355 )
+		if ( yy_current_state >= 361 )
 			yy_c = yy_meta[yy_c];
 		}
 	yy_current_state = yy_nxt[yy_base[yy_current_state] + yy_c];
-	yy_is_jam = (yy_current_state == 354);
+	yy_is_jam = (yy_current_state == 360);
 	if ( ! yy_is_jam )
 		*yyg->yy_state_ptr++ = yy_current_state;
 
@@ -2941,4 +2950,4 @@ void yyfree (void * ptr , yyscan_t yyscanner)
 
 #define YYTABLES_NAME "yytables"
 
-#line 291 "libxlu_disk_l.l"
+#line 292 "libxlu_disk_l.l"
diff --git a/tools/libs/util/libxlu_disk_l.h b/tools/libs/util/libxlu_disk_l.h
index 6abeecf..df20fcc 100644
--- a/tools/libs/util/libxlu_disk_l.h
+++ b/tools/libs/util/libxlu_disk_l.h
@@ -694,7 +694,7 @@ extern int yylex (yyscan_t yyscanner);
 #undef yyTABLES_NAME
 #endif
 
-#line 291 "libxlu_disk_l.l"
+#line 292 "libxlu_disk_l.l"
 
 #line 699 "libxlu_disk_l.h"
 #undef xlu__disk_yyIN_HEADER
diff --git a/tools/libs/util/libxlu_disk_l.l b/tools/libs/util/libxlu_disk_l.l
index 3bd639a..d68a59c 100644
--- a/tools/libs/util/libxlu_disk_l.l
+++ b/tools/libs/util/libxlu_disk_l.l
@@ -198,6 +198,7 @@ script=[^,]*,?	{ STRIP(','); SAVESTRING("script", script, FROMEQUALS); }
 direct-io-safe,? { DPC->disk->direct_io_safe = 1; }
 discard,?	{ libxl_defbool_set(&DPC->disk->discard_enable, true); }
 no-discard,?	{ libxl_defbool_set(&DPC->disk->discard_enable, false); }
+virtio,?	{ DPC->disk->virtio = 1; }
  /* Note that the COLO configuration settings should be considered unstable.
   * They may change incompatibly in future versions of Xen. */
 colo,?		{ libxl_defbool_set(&DPC->disk->colo_enable, true); }
diff --git a/tools/xl/xl_block.c b/tools/xl/xl_block.c
index 70eed43..50a4d45 100644
--- a/tools/xl/xl_block.c
+++ b/tools/xl/xl_block.c
@@ -50,6 +50,11 @@ int main_blockattach(int argc, char **argv)
         return 0;
     }
 
+    if (disk.virtio) {
+        fprintf(stderr, "block-attach is not supported for Virtio device\n");
+        return 1;
+    }
+
     if (libxl_device_disk_add(ctx, fe_domid, &disk, 0)) {
         fprintf(stderr, "libxl_device_disk_add failed.\n");
         return 1;
@@ -119,6 +124,12 @@ int main_blockdetach(int argc, char **argv)
         fprintf(stderr, "Error: Device %s not connected.\n", argv[optind+1]);
         return 1;
     }
+
+    if (disk.virtio) {
+        fprintf(stderr, "block-detach is not supported for Virtio device\n");
+        return 1;
+    }
+
     rc = !force ? libxl_device_disk_safe_remove(ctx, domid, &disk, 0) :
         libxl_device_disk_destroy(ctx, domid, &disk, 0);
     if (rc) {
-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri May 21 19:46:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 19:46:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131362.245554 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkB66-0003fn-AC; Fri, 21 May 2021 19:46:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131362.245554; Fri, 21 May 2021 19:46:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkB66-0003fg-7F; Fri, 21 May 2021 19:46:14 +0000
Received: by outflank-mailman (input) for mailman id 131362;
 Fri, 21 May 2021 19:46:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08+4=KQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1lkB65-0003fa-FS
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 19:46:13 +0000
Received: from mail-lj1-x236.google.com (unknown [2a00:1450:4864:20::236])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ba3a113d-3472-4e77-ac4c-eb3b4a4b5c78;
 Fri, 21 May 2021 19:46:12 +0000 (UTC)
Received: by mail-lj1-x236.google.com with SMTP id w15so25277182ljo.10
 for <xen-devel@lists.xenproject.org>; Fri, 21 May 2021 12:46:12 -0700 (PDT)
Received: from otyshchenko.router ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id y15sm737337lje.74.2021.05.21.12.46.09
 (version=TLS1_2 cipher=ECDHE-ECDSA-AES128-GCM-SHA256 bits=128/128);
 Fri, 21 May 2021 12:46:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba3a113d-3472-4e77-ac4c-eb3b4a4b5c78
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=YROQyG5sGznrcZjYxyIeawnoh81WCylTnmVNG9Qbuyg=;
        b=PJcqL6mxlZ0XxymAtE5EtZ0Xt8wWyauqXfMi1xzkJN8c6CQ7Kx2VU9YGqn8zlqeJ6n
         Kiw3G9TCpy4Q4ENmrb49rge6zXj0nNLFzR+28eit44gWz1vvRKoqeZ+KCNh5j+WRGrOp
         330Z1Blu/euuHJDFyHTLfjKWkzoMu/eBvUC3dyLWNbtEFsNL1NS1A+xWXyzIi1dum3Se
         z3IhLoukdylRg3JBM9/MdAqyv/zvOcDCEjWEamjz3hhMYhc8NLvt0pDJLWbRf88K3E+3
         ZMLBpNs/dbwbhK0wXy6vkOyRMIY37h0xvZl73TMMoknEQo5TyWxCHpBasa8OOAldSHFS
         Vfwg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=YROQyG5sGznrcZjYxyIeawnoh81WCylTnmVNG9Qbuyg=;
        b=GOkqFVkTsKk6Yp4hlCYtLWZHBIW2jz44V45/gjrN/37nrLm+jaKeC0zyv1Rf4UIYbE
         3dDiGWpzbSeAMrK0g3lghe9wwYtVsOzegDEeT6t9Od9mwjmQIz3LGgeFIP2a9ofu+iAi
         scS0WOGRTY+iN9+SWJk3VyhwU4SgMT9qj+66uTKSOzCIXvkIGkSmjYRgN15rBLP9q/oO
         22F6wEU7wu6iyKD7ayRW2YdgEgpeuKzXv4KXyQEaQ2n/xpgpsTxlGksr/2vQyGz1QtiJ
         udI+AczBJ81cn743sJU8y2aEYvTRG5c97QXdtuiKDlxW5mgIrLb4gqGoDCdVhUhyOxZh
         utLA==
X-Gm-Message-State: AOAM533tz0wJN1w8afnrswrHAsYsQJM+t/reUeI5EQPJOQpaWepge8KS
	KQocuIj7/KfiQC3g7Aa+ySAipbOxua1F/Q==
X-Google-Smtp-Source: ABdhPJwmJZ309jj2en0C2Rfuc/yHg/F5Ys8mqP8vcQTJbIqSmM1VP8YzO6/ks6+0H+gAeT0IqXkmSw==
X-Received: by 2002:a05:651c:50f:: with SMTP id o15mr7872705ljp.452.1621626370803;
        Fri, 21 May 2021 12:46:10 -0700 (PDT)
From: Oleksandr Tyshchenko <olekstysh@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>,
	Anthony PERARD <anthony.perard@citrix.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Julien Grall <julien@xen.org>,
	Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
	Bertrand Marquis <bertrand.marquis@arm.com>,
	Wei Chen <Wei.Chen@arm.com>,
	Kaly Xin <Kaly.Xin@arm.com>,
	Artem Mygaiev <joculator@gmail.com>,
	=?UTF-8?q?Alex=20Benn=C3=A9e?= <alex.bennee@linaro.org>
Subject: [RESEND PATCH V5 0/2] Virtio support for toolstack on Arm (Was "IOREQ feature (+ virtio-mmio) on Arm")
Date: Fri, 21 May 2021 22:45:59 +0300
Message-Id: <1621626361-29076-1-git-send-email-olekstysh@gmail.com>
X-Mailer: git-send-email 2.7.4
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>

Hello all.

The purpose of this patch series is to add the missing virtio-mmio bits to the Xen toolstack on Arm.
Virtio support for the toolstack [1] was postponed, as the main target was to upstream IOREQ/DM
support on Arm first. Now that IOREQ support is in, we can resume the Virtio
enabling work. You can find the previous discussion at [2].

Patch series [3] was reworked and rebased on the recent staging branch
(972ba1d x86/shadow: streamline shadow_get_page_from_l1e()) and tested on a
Renesas Salvator-X board + H3 ES3.0 SoC (Arm64) with an "updated" virtio-mmio disk backend [4]
running in the Driver domain and an unmodified Linux guest running on the existing
virtio-blk driver (frontend). No issues were observed. Guest domain 'reboot/destroy'
use-cases work properly.
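
For illustration, a guest configuration exercising the new option might look like the
sketch below. The `virtio` keyword is the one added to the disk parser by this series;
the image path and vdev name are placeholders, not values from the series itself:

```
# xl guest config fragment (illustrative only)
disk = [ 'format=raw, vdev=xvda, access=rw, target=/path/to/guest.img, virtio' ]
```

Note that, per the xl_block.c changes in this series, such a disk cannot be hot-plugged:
`xl block-attach` and `xl block-detach` reject disks with the virtio flag set.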

Any feedback/help would be highly appreciated.

[1] 
https://lore.kernel.org/xen-devel/1610488352-18494-24-git-send-email-olekstysh@gmail.com/
https://lore.kernel.org/xen-devel/1610488352-18494-25-git-send-email-olekstysh@gmail.com/
[2]
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02403.html
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02536.html
[3] https://github.com/otyshchenko1/xen/commits/libxl_virtio
[4] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml3

Julien Grall (1):
  libxl: Introduce basic virtio-mmio support on Arm

Oleksandr Tyshchenko (1):
  libxl: Add support for Virtio disk configuration

 docs/man/xl-disk-configuration.5.pod.in   |  27 +
 tools/include/libxl.h                     |   6 +
 tools/libs/light/libxl_arm.c              | 133 ++++-
 tools/libs/light/libxl_device.c           |  38 +-
 tools/libs/light/libxl_disk.c             |  99 +++-
 tools/libs/light/libxl_types.idl          |   4 +
 tools/libs/light/libxl_types_internal.idl |   1 +
 tools/libs/light/libxl_utils.c            |   2 +
 tools/libs/util/libxlu_disk_l.c           | 881 +++++++++++++++---------------
 tools/libs/util/libxlu_disk_l.h           |   2 +-
 tools/libs/util/libxlu_disk_l.l           |   1 +
 tools/xl/xl_block.c                       |  11 +
 xen/include/public/arch-arm.h             |   7 +
 13 files changed, 764 insertions(+), 448 deletions(-)

-- 
2.7.4



From xen-devel-bounces@lists.xenproject.org Fri May 21 19:51:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 19:51:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131383.245587 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkBB7-0006JN-KS; Fri, 21 May 2021 19:51:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131383.245587; Fri, 21 May 2021 19:51:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkBB7-0006JG-HR; Fri, 21 May 2021 19:51:25 +0000
Received: by outflank-mailman (input) for mailman id 131383;
 Fri, 21 May 2021 19:51:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=08+4=KQ=gmail.com=olekstysh@srs-us1.protection.inumbo.net>)
 id 1lkBB5-0006JA-US
 for xen-devel@lists.xenproject.org; Fri, 21 May 2021 19:51:23 +0000
Received: from mail-lj1-x22f.google.com (unknown [2a00:1450:4864:20::22f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b295818f-a8df-449e-b5ec-002336756546;
 Fri, 21 May 2021 19:51:23 +0000 (UTC)
Received: by mail-lj1-x22f.google.com with SMTP id b12so18080348ljp.1
 for <xen-devel@lists.xenproject.org>; Fri, 21 May 2021 12:51:23 -0700 (PDT)
Received: from [192.168.1.7] ([212.22.223.21])
 by smtp.gmail.com with ESMTPSA id e20sm732688ljn.91.2021.05.21.12.51.20
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Fri, 21 May 2021 12:51:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b295818f-a8df-449e-b5ec-002336756546
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-transfer-encoding:content-language;
        bh=dKUEZzl+nHepziUUmCpiGub2H3ufF84sG9duutsuvl4=;
        b=mck1o0E5XKe4kuCbu0BEC/SPoZVPbyJ1KAWEckI6N5ywKwR4SU2elHmIYGFKd4zm9h
         /4szJzv4HiR87DnFS9biv+GoZL1crPSs1N1M0ABqdLTZAIMVjQCrNPpLxEqyGSDx+7BV
         /6NcGLS/7NI/7BdMcBgQHCACuwCoB9xo/G9G898x8nr/MC217Njwo3ZTtqumaMxMyXno
         6joGV30ZjUL0uarjTcupUoBrQ0XWLf6pxXTRbvwFoZwE/VGQZRLA1XnYLpDztESnQ/mC
         9sOkyWCUIAK8bAv6PRSNfUTj1Y1iPQbpL9IL3r1FWByYGnKx6ddf/SkjLBttJDQKI+cM
         cdEw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-transfer-encoding
         :content-language;
        bh=dKUEZzl+nHepziUUmCpiGub2H3ufF84sG9duutsuvl4=;
        b=fyrxXfKvCYZ7u7f9E0Jn7Dm02RIio2HTiUUBlczhTF2SqJCIziwIsgdCAqIFqx6Vm9
         Ac3+5QjImcCuPoFM+6YQZEYRMOC259JpjMTpSrDKlxUztbEQ+FT0qlqzxLO8EUnSLsXL
         YBlzUGGS/qC9fjoLJ75yRqzLVsq2iU7+lqRcYVTG1n/R4ikw7CRv4hpU47BT63+GHBVa
         Xz3ScVgAaArdWw0FNwFawrI3tBw34Kd+rb+JAf0aIRZY8qiuOrynwEGuOPHeFE6Po0U1
         K4MYPlrYB/lrRNeAWn+LGNmkRC15eHOjiz3I+twdG0EK0zzvqHVxClMPLucA99V7tA1n
         t5IA==
X-Gm-Message-State: AOAM532n2qhW0EwGXJKxMb42qQsuTSyPETq2iF2gt5qDmT2D0WzeRo6t
	jGGdUrYbnB47gvDZnv68D2Y=
X-Google-Smtp-Source: ABdhPJyaR0WPyMrke7/wg84GfO7eX1zmtHItlUidCHE1nDlC6rPutiz9cTiZU0rotoRDpj2hb+JubQ==
X-Received: by 2002:a2e:3803:: with SMTP id f3mr7943361lja.230.1621626682026;
        Fri, 21 May 2021 12:51:22 -0700 (PDT)
Subject: Re: [PATCH V5 0/2] Virtio support for toolstack on Arm (Was "IOREQ
 feature (+ virtio-mmio) on Arm")
To: xen-devel@lists.xenproject.org
Cc: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>,
 Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Anthony PERARD <anthony.perard@citrix.com>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>,
 Volodymyr Babchuk <Volodymyr_Babchuk@epam.com>,
 Bertrand Marquis <bertrand.marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 Kaly Xin <Kaly.Xin@arm.com>, Artem Mygaiev <joculator@gmail.com>,
 =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>
References: <1621603005-5799-1-git-send-email-olekstysh@gmail.com>
From: Oleksandr <olekstysh@gmail.com>
Message-ID: <7e268505-dca9-82df-e459-d8c090b6601a@gmail.com>
Date: Fri, 21 May 2021 22:51:19 +0300
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:68.0) Gecko/20100101
 Thunderbird/68.10.0
MIME-Version: 1.0
In-Reply-To: <1621603005-5799-1-git-send-email-olekstysh@gmail.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: 7bit
Content-Language: en-US


On 21.05.21 16:16, Oleksandr Tyshchenko wrote:

Hello, all.

I pushed this series in the wrong way, so patches #1 and #2 appeared
with the same subject as the cover letter.
I have just resent the patch series properly. Please ignore the current
one. Sorry for the noise.

https://lore.kernel.org/xen-devel/1621626361-29076-1-git-send-email-olekstysh@gmail.com/

> From: Oleksandr Tyshchenko <oleksandr_tyshchenko@epam.com>
>
> Hello all.
>
> The purpose of this patch series is to add the missing virtio-mmio bits to the Xen toolstack on Arm.
> Virtio support for the toolstack [1] was postponed, as the main target was to upstream IOREQ/DM
> support on Arm first. Now that IOREQ support is in, we can resume the Virtio
> enabling work. You can find the previous discussion at [2].
>
> Patch series [3] was reworked and rebased on the recent staging branch
> (972ba1d x86/shadow: streamline shadow_get_page_from_l1e()) and tested on a
> Renesas Salvator-X board + H3 ES3.0 SoC (Arm64), with an "updated" virtio-mmio disk backend [4]
> running in the Driver domain and an unmodified Linux guest running on the existing
> virtio-blk driver (frontend). No issues were observed. The guest domain reboot/destroy
> use-cases work properly.
>
> Any feedback/help would be highly appreciated.
>
> [1]
> https://lore.kernel.org/xen-devel/1610488352-18494-24-git-send-email-olekstysh@gmail.com/
> https://lore.kernel.org/xen-devel/1610488352-18494-25-git-send-email-olekstysh@gmail.com/
> [2]
> https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02403.html
> https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg02536.html
> [3] https://github.com/otyshchenko1/xen/commits/libxl_virtio
> [4] https://github.com/xen-troops/virtio-disk/commits/ioreq_ml3
>
>
> Julien Grall (1):
>    libxl: Introduce basic virtio-mmio support on Arm
>
> Oleksandr Tyshchenko (1):
>    libxl: Add support for Virtio disk configuration
>
>   docs/man/xl-disk-configuration.5.pod.in   |  27 +
>   tools/include/libxl.h                     |   6 +
>   tools/libs/light/libxl_arm.c              | 133 ++++-
>   tools/libs/light/libxl_device.c           |  38 +-
>   tools/libs/light/libxl_disk.c             |  99 +++-
>   tools/libs/light/libxl_types.idl          |   4 +
>   tools/libs/light/libxl_types_internal.idl |   1 +
>   tools/libs/light/libxl_utils.c            |   2 +
>   tools/libs/util/libxlu_disk_l.c           | 881 +++++++++++++++---------------
>   tools/libs/util/libxlu_disk_l.h           |   2 +-
>   tools/libs/util/libxlu_disk_l.l           |   1 +
>   tools/xl/xl_block.c                       |  11 +
>   xen/include/public/arch-arm.h             |   7 +
>   13 files changed, 764 insertions(+), 448 deletions(-)
>
-- 
Regards,

Oleksandr Tyshchenko



From xen-devel-bounces@lists.xenproject.org Fri May 21 19:55:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 19:55:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131393.245598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkBFV-00071P-5c; Fri, 21 May 2021 19:55:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131393.245598; Fri, 21 May 2021 19:55:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkBFV-00071I-2b; Fri, 21 May 2021 19:55:57 +0000
Received: by outflank-mailman (input) for mailman id 131393;
 Fri, 21 May 2021 19:55:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkBFT-000718-Vk; Fri, 21 May 2021 19:55:55 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkBFT-0006bx-PH; Fri, 21 May 2021 19:55:55 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkBFT-0004Ph-EW; Fri, 21 May 2021 19:55:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkBFT-0001Mm-E3; Fri, 21 May 2021 19:55:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=137KHVGM4TaFymjaVA4I1+RBWP/sN74+i61p1i38vGg=; b=XvO3SE7pIoOpH3mYgDmX8Dj6gq
	GzIhnZD68WLlwY1m6/j+qAvCZb+20nJwcW+KSfmbgSEP2sTYRelZ1sVDERlG+rF1lVRHba+dbZAdo
	fOpOm2APSzks+aPwcjPr0LrLeT+mmVto7QRDpUQVxvY4eF6nUXx2YQNV/AqQp+sO1PcE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162113-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162113: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=1fb80369b72c6ba7f80b442e4acf771a6dd56ee7
X-Osstest-Versions-That:
    ovmf=04ae17218deec25c6f488609c5e2ca9c419d2c4b
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 19:55:55 +0000

flight 162113 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162113/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7
baseline version:
 ovmf                 04ae17218deec25c6f488609c5e2ca9c419d2c4b

Last test of basis   162111  2021-05-21 07:11:13 Z    0 days
Testing same since   162113  2021-05-21 13:41:10 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Li, Walon <walon.li@hpe.com>
  Walon Li <walon.li@hpe.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   04ae17218d..1fb80369b7  1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Fri May 21 20:49:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 20:49:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131401.245611 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkC5V-0003WV-8i; Fri, 21 May 2021 20:49:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131401.245611; Fri, 21 May 2021 20:49:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkC5V-0003WO-5f; Fri, 21 May 2021 20:49:41 +0000
Received: by outflank-mailman (input) for mailman id 131401;
 Fri, 21 May 2021 20:49:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkC5U-0003WD-0S; Fri, 21 May 2021 20:49:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkC5T-0007dl-MU; Fri, 21 May 2021 20:49:39 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkC5T-0007CX-BT; Fri, 21 May 2021 20:49:39 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkC5T-00018X-Az; Fri, 21 May 2021 20:49:39 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Tgte6d59cj5lotUcGPWAZhAFpMhNnp11TqXFriUSUzs=; b=36f7jYZyNwvHGl4aplg3sbuI7q
	WZtN2r0faG0g+6iM3P7u1kfgNZQnrR3ohpuPkhb9p6FdxNCV47RLnVlSl3sxUdrhjHnrwgWnPrR5N
	eOqkNziAlO0hH/gmZibAkNG12ZV27Bk26Fm86P83GZDEDx1LYTli7NHGbVWjL8haCXMs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162112-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162112: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=da9076f323d1470c65634893aa2427987699d4f1
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 20:49:39 +0000

flight 162112 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162112/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                da9076f323d1470c65634893aa2427987699d4f1
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  274 days
Failing since        152659  2020-08-21 14:07:39 Z  273 days  502 attempts
Testing same since   162112  2021-05-21 12:09:23 Z    0 days    1 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 158920 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 21 22:21:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 22:21:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131415.245635 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkDW9-000487-Bn; Fri, 21 May 2021 22:21:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131415.245635; Fri, 21 May 2021 22:21:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkDW9-000480-8d; Fri, 21 May 2021 22:21:17 +0000
Received: by outflank-mailman (input) for mailman id 131415;
 Fri, 21 May 2021 22:21:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkDW7-00047q-IT; Fri, 21 May 2021 22:21:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkDW7-0000zt-7n; Fri, 21 May 2021 22:21:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkDW6-0001K3-VO; Fri, 21 May 2021 22:21:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkDW6-0007XO-Uv; Fri, 21 May 2021 22:21:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=uBCrjOU0QqICds/oCOqgCCRr+wngS9F7rZ39A87lJXU=; b=umEqy1Ko6MmenVQ/ZjSncRUz1H
	vnjeVBfc2jCwu7DldqP74SH05pxvdvRRja9A59TdUfCaFpOZAEaMkjoe5wCiI/7Xm9ybl9hNQJ9GL
	CBZabUdiHk634tPOjIHNUoxlKPmweEeDB9jpSbd2dW8aM5Zv+VRUZyBbAlMKDBZEw6Qw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-qemuu-freebsd11-amd64
Message-Id: <E1lkDW6-0007XO-Uv@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 22:21:14 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-freebsd11-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1b507e55f8199eaad99744613823f6929e4d57c6
  Bug not present: 4083904bc9fe5da580f7ca397b1e828fbc322732
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160218/


  commit 1b507e55f8199eaad99744613823f6929e4d57c6
  Merge: 4083904bc9 8d17adf34f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Thu Mar 18 19:00:49 2021 +0000
  
      Merge remote-tracking branch 'remotes/berrange-gitlab/tags/dep-many-pull-request' into staging
      
      Remove many old deprecated features
      
      The following features have been deprecated for well over the two
      release cycles we promise
      
        ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
        ``-vnc acl`` (since 4.0.0)
        ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
        ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
        ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
        ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
        ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
        ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
        ``query-cpus`` (since 2.12.0)
        ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
        ``query-events`` (since 4.0)
        chardev client socket with ``wait`` option (since 4.0)
        ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
        ``ide-drive`` (since 4.2)
        ``scsi-disk`` (since 4.2)
      
      # gpg: Signature made Thu 18 Mar 2021 09:23:39 GMT
      # gpg:                using RSA key DAF3A6FDB26B62912D0E8E3FBE86EBB415104FDF
      # gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>" [full]
      # gpg:                 aka "Daniel P. Berrange <berrange@redhat.com>" [full]
      # Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF
      
      * remotes/berrange-gitlab/tags/dep-many-pull-request:
        block: remove support for using "file" driver with block/char devices
        block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
        block: remove dirty bitmaps 'status' field
        block: remove 'encryption_key_missing' flag from QAPI
        hw/scsi: remove 'scsi-disk' device
        hw/ide: remove 'ide-drive' device
        chardev: reject use of 'wait' flag for socket client chardevs
        machine: remove 'arch' field from 'query-cpus-fast' QMP command
        machine: remove 'query-cpus' QMP command
        migrate: remove QMP/HMP commands for speed, downtime and cache size
        monitor: remove 'query-events' QMP command
        monitor: raise error when 'pretty' option is used with HMP
        ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit e67d8e2928200e24ecb47c7be3ea8270077f2996
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:22:36 2021 +0000
  
      block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
      
      The same data is available in the 'BlockDeviceInfo' struct.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 81cbfd5088690c53541ffd0d74851c8ab055a829
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:19:54 2021 +0000
  
      block: remove dirty bitmaps 'status' field
      
      The same information is available via the 'recording' and 'busy' fields.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit ad1324e044240ae9fcf67e4c215481b7a35591b9
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:53:17 2021 +0000
  
      block: remove 'encryption_key_missing' flag from QAPI
      
      This has been hardcoded to "false" since 2.10.0, since secrets required
      to unlock block devices are now always provided up front instead of using
      interactive prompts.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 879be3af49132d232602e0ca783ec9b4112530fa
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/scsi: remove 'scsi-disk' device
      
      The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit b50101833987b47e0740f1621de48637c468c3d1
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/ide: remove 'ide-drive' device
      
      The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 24e13a4dc1eb1630eceffc7ab334145d902e763d
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:47:17 2021 +0000
  
      chardev: reject use of 'wait' flag for socket client chardevs
      
      This only makes sense conceptually when used with listener chardevs.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 445a5b4087567bf4d4ce76d394adf78d9d5c88a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:29:31 2021 +0000
  
      machine: remove 'arch' field from 'query-cpus-fast' QMP command
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:40:12 2021 +0000
  
      migrate: remove QMP/HMP commands for speed, downtime and cache size
      
      The generic 'migrate_set_parameters' command handles all types of param.
      
      Only the QMP commands were documented in the deprecations page, but the
      rationale for deprecating applies equally to HMP, and the replacements
      exist. Furthermore the HMP commands are just shims to the QMP commands,
      so removing the latter breaks the former unless they get re-implemented.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8becb36063fb14df1e3ae4916215667e2cb65fa2
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:35:15 2021 +0000
  
      monitor: remove 'query-events' QMP command
      
      The code comment suggests removing QAPIEvent_(str|lookup) symbols too;
      however, these are both auto-generated as standard for any enum in
      QAPI. As such, they'll exist whether we use them or not.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 283d845c9164f57f5dba020a4783bb290493802f
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:56:13 2021 +0000
  
      monitor: raise error when 'pretty' option is used with HMP
      
      This is only semantically useful for QMP.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 5994dcb8d8525ac044a31913c6bceeee788ec700
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:47:31 2021 +0000
  
      ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      The VNC ACL concept has been replaced by the pluggable "authz" framework
      which does not use monitor commands.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
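
[Editorial note: as a brief, hedged sketch of the migration-command removal
above, the old speed/downtime/cache-size commands map onto the generic
'migrate-set-parameters' command. The parameter names below
("max-bandwidth", "downtime-limit", "xbzrle-cache-size") are the documented
MigrationParameters fields; note the units may differ from the removed
commands (e.g. 'downtime-limit' is in milliseconds), so values are
illustrative assumptions only.]

```python
import json

def replacement_for(old_cmd, value):
    """Build the migrate-set-parameters QMP call replacing a removed command."""
    param = {
        "migrate_set_speed": "max-bandwidth",        # bytes/second
        "migrate_set_downtime": "downtime-limit",    # milliseconds
        "migrate-set-cache-size": "xbzrle-cache-size",  # bytes
    }[old_cmd]
    return json.dumps({
        "execute": "migrate-set-parameters",
        "arguments": {param: value},
    })

# e.g. the old 'migrate_set_speed 1000000' becomes:
print(replacement_for("migrate_set_speed", 1_000_000))
```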


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-freebsd11-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-freebsd11-amd64.guest-saverestore --summary-out=tmp/162117.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-qemuu-freebsd11-amd64 guest-saverestore
Searching for failure / basis pass:
 162112 fail [host=godello0] / 160125 ok.
Failure / basis pass flights: 162112 / 160125
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 da9076f323d1470c65634893aa2427987699d4f1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#030ba3097a6e7d08b99f8a9d19a322f61409c1f6-15ee7b76891a78141e6e30ef3f8572e8d6b326d2 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#b12498fc575f2ad30f09fe78badc7fef526e2d76-da9076f323d1470c65634893aa2427987699d4f1 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee git://xenbits.xen.org/xen.git#21657ad4f01a634beac570c64c0691e51b9cf366-aa77acc28098d04945af998f3fc0dbd3759b5b41
Loaded 44917 nodes in revision graph
Searching for test results:
 160125 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160134 fail irrelevant
 160148 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160157 fail irrelevant
 160158 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3754df04ec291b933c18285210793d02c9d9787a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160160 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160162 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f7dcd31885cbe801cac95536a279bbc7e55af4f6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160163 pass irrelevant
 160165 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 397fbb5b32558dd2b5cd35cb4d25126879384079 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160166 pass irrelevant
 160168 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160169 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160174 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 17422da082ffcecb38bd1f2e2de6d56a61e8cd9c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160178 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0f418a207696b37f05d38f978c8873ee0a4f9815 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160181 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160185 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160190 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 87a80dc4f2f5e51894db143685a5e39c8ce6f651 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160195 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160199 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160202 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160206 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160211 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160218 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1b507e55f8199eaad99744613823f6929e4d57c6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 15106f7dc3290ff3254611f265849a314a93eb0e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 da9076f323d1470c65634893aa2427987699d4f1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162115 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b12498fc575f2ad30f09fe78badc7fef526e2d76 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 162117 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 da9076f323d1470c65634893aa2427987699d4f1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
Searching for interesting versions
 Result found: flight 160125 (pass), for basis pass
 Result found: flight 162112 (fail), for basis failure
 Repro found: flight 162115 (pass), for basis pass
 Repro found: flight 162117 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 4083904bc9fe5da580f7ca397b1e828fbc322732 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
No revisions left to test, checking graph state.
 Result found: flight 160195 (pass), for last pass
 Result found: flight 160199 (fail), for first failure
 Repro found: flight 160202 (pass), for last pass
 Repro found: flight 160206 (fail), for first failure
 Repro found: flight 160211 (pass), for last pass
 Repro found: flight 160218 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  1b507e55f8199eaad99744613823f6929e4d57c6
  Bug not present: 4083904bc9fe5da580f7ca397b1e828fbc322732
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160218/


  commit 1b507e55f8199eaad99744613823f6929e4d57c6
  Merge: 4083904bc9 8d17adf34f
  Author: Peter Maydell <peter.maydell@linaro.org>
  Date:   Thu Mar 18 19:00:49 2021 +0000
  
      Merge remote-tracking branch 'remotes/berrange-gitlab/tags/dep-many-pull-request' into staging
      
      Remove many old deprecated features
      
      The following features have been deprecated for well over the two
      release cycles we promise
      
        ``-drive file=json:{...{'driver':'file'}}`` (since 3.0)
        ``-vnc acl`` (since 4.0.0)
        ``-mon ...,control=readline,pretty=on|off`` (since 4.1)
        ``migrate_set_downtime`` and ``migrate_set_speed`` (since 2.8.0)
        ``query-named-block-nodes`` result ``encryption_key_missing`` (since 2.10.0)
        ``query-block`` result ``inserted.encryption_key_missing`` (since 2.10.0)
        ``migrate-set-cache-size`` and ``query-migrate-cache-size`` (since 2.11.0)
        ``query-named-block-nodes`` and ``query-block`` result dirty-bitmaps[i].status (since 4.0)
        ``query-cpus`` (since 2.12.0)
        ``query-cpus-fast`` ``arch`` output member (since 3.0.0)
        ``query-events`` (since 4.0)
        chardev client socket with ``wait`` option (since 4.0)
        ``acl_show``, ``acl_reset``, ``acl_policy``, ``acl_add``, ``acl_remove`` (since 4.0.0)
        ``ide-drive`` (since 4.2)
        ``scsi-disk`` (since 4.2)
      
      # gpg: Signature made Thu 18 Mar 2021 09:23:39 GMT
      # gpg:                using RSA key DAF3A6FDB26B62912D0E8E3FBE86EBB415104FDF
      # gpg: Good signature from "Daniel P. Berrange <dan@berrange.com>" [full]
      # gpg:                 aka "Daniel P. Berrange <berrange@redhat.com>" [full]
      # Primary key fingerprint: DAF3 A6FD B26B 6291 2D0E  8E3F BE86 EBB4 1510 4FDF
      
      * remotes/berrange-gitlab/tags/dep-many-pull-request:
        block: remove support for using "file" driver with block/char devices
        block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
        block: remove dirty bitmaps 'status' field
        block: remove 'encryption_key_missing' flag from QAPI
        hw/scsi: remove 'scsi-disk' device
        hw/ide: remove 'ide-drive' device
        chardev: reject use of 'wait' flag for socket client chardevs
        machine: remove 'arch' field from 'query-cpus-fast' QMP command
        machine: remove 'query-cpus' QMP command
        migrate: remove QMP/HMP commands for speed, downtime and cache size
        monitor: remove 'query-events' QMP command
        monitor: raise error when 'pretty' option is used with HMP
        ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      Signed-off-by: Peter Maydell <peter.maydell@linaro.org>
  
  commit 8d17adf34f501ded65a106572740760f0a75577c
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 11:16:32 2021 +0000
  
      block: remove support for using "file" driver with block/char devices
      
      The 'host_device' and 'host_cdrom' drivers must be used instead.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit e67d8e2928200e24ecb47c7be3ea8270077f2996
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:22:36 2021 +0000
  
      block: remove 'dirty-bitmaps' field from 'BlockInfo' struct
      
      The same data is available in the 'BlockDeviceInfo' struct.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 81cbfd5088690c53541ffd0d74851c8ab055a829
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 19:19:54 2021 +0000
  
      block: remove dirty bitmaps 'status' field
      
      The same information is available via the 'recording' and 'busy' fields.
      
      Reviewed-by: Vladimir Sementsov-Ogievskiy <vsementsov@virtuozzo.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit ad1324e044240ae9fcf67e4c215481b7a35591b9
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:53:17 2021 +0000
  
      block: remove 'encryption_key_missing' flag from QAPI
      
      This has been hardcoded to "false" since 2.10.0, since secrets required
      to unlock block devices are now always provided up front instead of using
      interactive prompts.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 879be3af49132d232602e0ca783ec9b4112530fa
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/scsi: remove 'scsi-disk' device
      
      The 'scsi-hd' and 'scsi-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit b50101833987b47e0740f1621de48637c468c3d1
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:40:56 2021 +0000
  
      hw/ide: remove 'ide-drive' device
      
      The 'ide-hd' and 'ide-cd' devices provide suitable alternatives.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 24e13a4dc1eb1630eceffc7ab334145d902e763d
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:47:17 2021 +0000
  
      chardev: reject use of 'wait' flag for socket client chardevs
      
      This only makes sense conceptually when used with listener chardevs.
      
      Reviewed-by: Marc-André Lureau <marcandre.lureau@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 445a5b4087567bf4d4ce76d394adf78d9d5c88a5
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:29:31 2021 +0000
  
      machine: remove 'arch' field from 'query-cpus-fast' QMP command
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 18:40:12 2021 +0000
  
      migrate: remove QMP/HMP commands for speed, downtime and cache size
      
      The generic 'migrate_set_parameters' command handles all types of parameter.
      
      Only the QMP commands were documented in the deprecations page, but the
      rationale for deprecating applies equally to HMP, and the replacements
      exist. Furthermore the HMP commands are just shims to the QMP commands,
      so removing the latter breaks the former unless they get re-implemented.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 8becb36063fb14df1e3ae4916215667e2cb65fa2
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 13:35:15 2021 +0000
  
      monitor: remove 'query-events' QMP command
      
      The code comment suggests removing the QAPIEvent_(str|lookup) symbols too;
      however, these are both auto-generated as standard for any enum in
      QAPI. As such, they'll exist whether we use them or not.
      
      Reviewed-by: Eric Blake <eblake@redhat.com>
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 283d845c9164f57f5dba020a4783bb290493802f
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:56:13 2021 +0000
  
      monitor: raise error when 'pretty' option is used with HMP
      
      This is only semantically useful for QMP.
      
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
  
  commit 5994dcb8d8525ac044a31913c6bceeee788ec700
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Fri Feb 19 17:47:31 2021 +0000
  
      ui, monitor: remove deprecated VNC ACL option and HMP commands
      
      The VNC ACL concept has been replaced by the pluggable "authz" framework
      which does not use monitor commands.
      
      Reviewed-by: Thomas Huth <thuth@redhat.com>
      Reviewed-by: Dr. David Alan Gilbert <dgilbert@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.5025 to fit
pnmtopng: 208 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-freebsd11-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162117: tolerable FAIL

flight 162117 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162117/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail baseline untested


jobs:
 build-amd64                                                  pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Fri May 21 23:33:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 21 May 2021 23:33:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131426.245649 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkEdw-0002JI-4b; Fri, 21 May 2021 23:33:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131426.245649; Fri, 21 May 2021 23:33:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkEdw-0002JB-1X; Fri, 21 May 2021 23:33:24 +0000
Received: by outflank-mailman (input) for mailman id 131426;
 Fri, 21 May 2021 23:33:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkEdu-0002J1-8U; Fri, 21 May 2021 23:33:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkEdt-00027p-R4; Fri, 21 May 2021 23:33:21 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkEdt-000562-Fr; Fri, 21 May 2021 23:33:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkEdt-00065j-FN; Fri, 21 May 2021 23:33:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=PPrVplFnbZn583G78qlCiCltQRZ1otwtdjx93YcDxE8=; b=it/k8UbXdBjFy3jkH04xqAZ3t1
	4X/oUNEoa7tOOfYnocvuh5Yqu/bkyxRcYPP5T8OsrzBN9CxF2WzOWBQUHPBnlbFgwiiTpy3rxTSdw
	eTG5/+oWLjGC+PXKQvWO2M1N54Em54J30PMeSYoYDQaI9kBTKIkiHsWrmHVAw9OyMlGw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162114-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162114: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=011ff616ffe8df6b86ee54d14a43c8d1a96a6325
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 21 May 2021 23:33:21 +0000

flight 162114 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162114/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                011ff616ffe8df6b86ee54d14a43c8d1a96a6325
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  294 days
Failing since        152366  2020-08-01 20:49:34 Z  293 days  495 attempts
Testing same since   162114  2021-05-21 17:12:19 Z    0 days    1 attempts

------------------------------------------------------------
6077 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1650447 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 22 05:41:16 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162116-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162116: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 May 2021 05:41:02 +0000

flight 162116 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162116/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                3bbaed2cd0a02ee53958d3d2585e837bcf327278
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  274 days
Failing since        152659  2020-08-21 14:07:39 Z  273 days  503 attempts
Testing same since   162116  2021-05-21 21:07:19 Z    0 days    1 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159254 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 22 05:56:57 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162120-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162120: regressions - FAIL
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 May 2021 05:56:49 +0000

flight 162120 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162120/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d8c468d58c23cf4254fb98dddd12b4f0225b184a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  316 days
Failing since        151818  2020-07-11 04:18:52 Z  315 days  308 attempts
Testing same since   162120  2021-05-22 04:20:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 58637 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 22 07:23:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 May 2021 07:23:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131453.245690 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkLyu-0000UX-01; Sat, 22 May 2021 07:23:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131453.245690; Sat, 22 May 2021 07:23:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkLyt-0000UQ-TE; Sat, 22 May 2021 07:23:31 +0000
Received: by outflank-mailman (input) for mailman id 131453;
 Sat, 22 May 2021 07:23:30 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkLys-0000UG-6R; Sat, 22 May 2021 07:23:30 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkLyr-0008Mv-UQ; Sat, 22 May 2021 07:23:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkLyr-0003Ql-J2; Sat, 22 May 2021 07:23:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkLyr-00057V-IY; Sat, 22 May 2021 07:23:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=8PfrKfxP4O7jX8bLdguHR17y6Ncx9MP9niQ3aHgBu4c=; b=QTQ5yMbFBBxW5eXzlST2j4bPt2
	/OnpPif1ElvAjoJAT7irtpI2FJOjR8BT77HQwYzkoCjBcLWgCLE9rFUL5uTH2waU+XXjt/DuKt+vE
	h3upOlqORgITls7jiL3YK3Lxn95pwfqQDyIXZhzFO2LKomc+4n2D2Gm6tXI5hnCOoOww=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162118-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162118: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=45af60e7ced07ae3def41368c3d260dbf496fbce
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 May 2021 07:23:29 +0000

flight 162118 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162118/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10  fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                45af60e7ced07ae3def41368c3d260dbf496fbce
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  294 days
Failing since        152366  2020-08-01 20:49:34 Z  293 days  496 attempts
Testing same since   162118  2021-05-21 23:42:32 Z    0 days    1 attempts

------------------------------------------------------------
6079 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1650974 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 22 10:48:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 May 2021 10:48:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131469.245704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkPAZ-0001mf-Ro; Sat, 22 May 2021 10:47:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131469.245704; Sat, 22 May 2021 10:47:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkPAZ-0001mY-Oj; Sat, 22 May 2021 10:47:47 +0000
Received: by outflank-mailman (input) for mailman id 131469;
 Sat, 22 May 2021 10:47:46 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=uZFP=KR=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1lkPAY-0001mR-As
 for xen-devel@lists.xenproject.org; Sat, 22 May 2021 10:47:46 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a2d3db06-9813-40d2-bb27-4b2dbe71d2c9;
 Sat, 22 May 2021 10:47:45 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 62319ABB1;
 Sat, 22 May 2021 10:47:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a2d3db06-9813-40d2-bb27-4b2dbe71d2c9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621680464; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:  content-transfer-encoding:content-transfer-encoding;
	bh=YVW4fFsQp9kFBzeDykkxPl+Pl2140mXCCdzxA1+za9I=;
	b=AaNXEBTRFwWA9N+NX+QIk+DNpndes8olYZO18kkr8miV3r3jVORk8wYbrTesisIpL2xD3o
	4je9K3ghkEiqQ4R6WbzdLG2R9TWpv9/NC51+IQmojAOSMEqKqzU43YTlK1cW6Kl7+Ez8JV
	nfNUOrDoAqqIz4DkokjVNnB0N2aPppw=
From: Juergen Gross <jgross@suse.com>
To: torvalds@linux-foundation.org
Cc: linux-kernel@vger.kernel.org,
	xen-devel@lists.xenproject.org,
	boris.ostrovsky@oracle.com
Subject: [GIT PULL] xen: branch for v5.13-rc3
Date: Sat, 22 May 2021 12:47:43 +0200
Message-Id: <20210522104743.10801-1-jgross@suse.com>
X-Mailer: git-send-email 2.26.2
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Linus,

Please git pull the following tag:

 git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc3-tag

xen: branch for v5.13-rc3

It contains:
- a fix for a boot regression when running as PV guest on hardware without
  NX support
- a small series fixing a bug in the Xen pciback driver when configuring
  a PCI card with multiple virtual functions


Thanks.

Juergen

 arch/x86/xen/enlighten_pv.c      |  8 ++++----
 drivers/xen/xen-pciback/vpci.c   | 14 ++++++++------
 drivers/xen/xen-pciback/xenbus.c | 22 +++++++++++++++++-----
 3 files changed, 29 insertions(+), 15 deletions(-)

Jan Beulich (3):
      x86/Xen: swap NX determination and GDT setup on BSP
      xen-pciback: redo VF placement in the virtual topology
      xen-pciback: reconfigure also from backend watch handler


From xen-devel-bounces@lists.xenproject.org Sat May 22 12:29:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 May 2021 12:29:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131479.245715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkQkk-0002Tr-8L; Sat, 22 May 2021 12:29:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131479.245715; Sat, 22 May 2021 12:29:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkQkk-0002Tk-5F; Sat, 22 May 2021 12:29:14 +0000
Received: by outflank-mailman (input) for mailman id 131479;
 Sat, 22 May 2021 12:29:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkQki-0002Ta-4E; Sat, 22 May 2021 12:29:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkQkh-0005jC-SP; Sat, 22 May 2021 12:29:11 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkQkh-0002Ge-Hw; Sat, 22 May 2021 12:29:11 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkQkh-0002Cf-HR; Sat, 22 May 2021 12:29:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EPB+8nFalyf5ASBu/LAn/EzHqx9WWXF4oeM+JW+lwXg=; b=vJmZzj/e2LzGL/pkMAQdgrgxxu
	BbYnA0YDCl2C+KXCvMFw7ERcypjJvdCUyHydsku/5dQnKNBauwhatYWFqn0VNFuHj5KBUUCYqwhWP
	EP56fvNiNJvqLZuA1KObZpKICiMLS+gYrFy4gq+oic1Tz5o4HPH7oBqQvU6G+lvLwxco=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162119-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162119: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:xen-boot:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 May 2021 12:29:11 +0000

flight 162119 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162119/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 162107
 test-armhf-armhf-xl-rtds      8 xen-boot                   fail pass in 162107

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 162107 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 162107 never pass
 test-armhf-armhf-xl-rtds    15 migrate-support-check fail in 162107 never pass
 test-armhf-armhf-xl-rtds 16 saverestore-support-check fail in 162107 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162107
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162107
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162107
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162107
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162107
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162107
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162107
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162107
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162107
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162107
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162107
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162119  2021-05-22 01:52:52 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat May 22 15:17:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 May 2021 15:17:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131487.245730 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkTNI-0000a6-57; Sat, 22 May 2021 15:17:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131487.245730; Sat, 22 May 2021 15:17:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkTNI-0000Zz-12; Sat, 22 May 2021 15:17:12 +0000
Received: by outflank-mailman (input) for mailman id 131487;
 Sat, 22 May 2021 15:17:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkTNG-0000Zp-BS; Sat, 22 May 2021 15:17:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkTNG-000055-43; Sat, 22 May 2021 15:17:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkTNF-0002Zz-PU; Sat, 22 May 2021 15:17:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkTNF-0005aO-P2; Sat, 22 May 2021 15:17:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wqysEASriUyC1L2n69e8nqfwK2jdF89dVf96D2wBm1I=; b=yTtB3OeGrIKiMmoCmRlv5Pq27o
	D2oTa30MzUb5svLg7rTpI382YcGONnOwY4Kaxm9srDFmg4Ldks0JmKVv+8wailRMYnNeTsWD42b/j
	t6Xds/JhaEAHnmOvYp5O0W+xoH1x4Mlcb79V6Un9ae1eN+gIECJO91RCm8ZMOVhcSoLk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162121-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162121: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-saverestore.2:fail:heisenbug
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3bbaed2cd0a02ee53958d3d2585e837bcf327278
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 May 2021 15:17:09 +0000

flight 162121 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162121/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 17 guest-saverestore.2        fail pass in 162116

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                3bbaed2cd0a02ee53958d3d2585e837bcf327278
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  275 days
Failing since        152659  2020-08-21 14:07:39 Z  274 days  504 attempts
Testing same since   162116  2021-05-21 21:07:19 Z    0 days    2 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159254 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 22 17:36:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 May 2021 17:36:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131499.245743 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkVXZ-0005OV-GR; Sat, 22 May 2021 17:35:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131499.245743; Sat, 22 May 2021 17:35:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkVXZ-0005OO-DZ; Sat, 22 May 2021 17:35:57 +0000
Received: by outflank-mailman (input) for mailman id 131499;
 Sat, 22 May 2021 17:35:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=x/j/=KR=kernel.org=pr-tracker-bot@srs-us1.protection.inumbo.net>)
 id 1lkVXX-0005OI-OA
 for xen-devel@lists.xenproject.org; Sat, 22 May 2021 17:35:55 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ff590cad-e7ff-4c3b-81b6-f59705959183;
 Sat, 22 May 2021 17:35:55 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPS id 47A7C608FE;
 Sat, 22 May 2021 17:35:54 +0000 (UTC)
Received: from pdx-korg-docbuild-2.ci.codeaurora.org (localhost.localdomain
 [127.0.0.1])
 by pdx-korg-docbuild-2.ci.codeaurora.org (Postfix) with ESMTP id 3B2A760A56;
 Sat, 22 May 2021 17:35:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ff590cad-e7ff-4c3b-81b6-f59705959183
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1621704954;
	bh=bTe8gb7C1Tw72t73ARNFscP7oBuHF6wXsRvNXuesbuc=;
	h=Subject:From:In-Reply-To:References:Date:To:Cc:From;
	b=TVydP3L3qVfMH92AlExNK1ilr0HrCCpT9e6ZDsy+jZi/Oxu73WjpZpsEEDkplcZVj
	 X4o675xGJJu0bb6dS3/vFDJnm8XKrI9F+xoCf1Y1aQg/THD0DrY10x1i3yyVTxt4MO
	 aWFSXs6GCiNGRvUMqYT6ReN+pDDjaiiiHTVAC6AarxNfSbIajp9ImZqMGmJ28g6GNk
	 51abzT4AwXkondkvOGS7PWufaE8jlWpwGYDJzJ1VqmFQ8949P9Y/KYoZS6sWjp/Tiq
	 Dl1RPMCN32n5NSC82eFTSrPWBpGe3uHj6dJSOiiZK97y21duZWjt7JsnVyHgitKRDi
	 T3SoUu/BSRXqQ==
Subject: Re: [GIT PULL] xen: branch for v5.13-rc3
From: pr-tracker-bot@kernel.org
In-Reply-To: <20210522104743.10801-1-jgross@suse.com>
References: <20210522104743.10801-1-jgross@suse.com>
X-PR-Tracked-List-Id: <linux-kernel.vger.kernel.org>
X-PR-Tracked-Message-Id: <20210522104743.10801-1-jgross@suse.com>
X-PR-Tracked-Remote: git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc3-tag
X-PR-Tracked-Commit-Id: c81d3d24602540f65256f98831d0a25599ea6b87
X-PR-Merge-Tree: torvalds/linux.git
X-PR-Merge-Refname: refs/heads/master
X-PR-Merge-Commit-Id: 23d729263037eddd7413535c68ccf9472a197ccd
Message-Id: <162170495423.3077.3254151684906859811.pr-tracker-bot@kernel.org>
Date: Sat, 22 May 2021 17:35:54 +0000
To: Juergen Gross <jgross@suse.com>
Cc: torvalds@linux-foundation.org, linux-kernel@vger.kernel.org, xen-devel@lists.xenproject.org, boris.ostrovsky@oracle.com

The pull request you sent on Sat, 22 May 2021 12:47:43 +0200:

> git://git.kernel.org/pub/scm/linux/kernel/git/xen/tip.git for-linus-5.13b-rc3-tag

has been merged into torvalds/linux.git:
https://git.kernel.org/torvalds/c/23d729263037eddd7413535c68ccf9472a197ccd

Thank you!

-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/prtracker.html


From xen-devel-bounces@lists.xenproject.org Sat May 22 18:34:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 22 May 2021 18:34:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131507.245755 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkWSD-0002eh-V2; Sat, 22 May 2021 18:34:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131507.245755; Sat, 22 May 2021 18:34:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkWSD-0002ea-RJ; Sat, 22 May 2021 18:34:29 +0000
Received: by outflank-mailman (input) for mailman id 131507;
 Sat, 22 May 2021 18:34:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkWSD-0002eQ-E1; Sat, 22 May 2021 18:34:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkWSD-0003tp-87; Sat, 22 May 2021 18:34:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkWSC-0003fw-SF; Sat, 22 May 2021 18:34:28 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkWSC-0008Aq-Rj; Sat, 22 May 2021 18:34:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=okRkpGBWy6lhwaNh9fl9ytoz+jTnCop4uiOCwg7Blw4=; b=OVC5p8iA0XtSViActiL9emWA7H
	rXnLfi5gVxK7I1Hx0TUgIm5Erlch94WWLNwyRh16cngR5mBKuOqSnMgyzGaNZV+Rf90HqAQ0yZ7Oq
	A7O8ox9JdB0IvPEVfOUbpVn8O76GTg0vqcUrlGG+qT444aewA7bpvzBOeO1dlRsNvXvE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162122-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162122: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-localmigrate/x10:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-start/freebsd.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=45af60e7ced07ae3def41368c3d260dbf496fbce
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 May 2021 18:34:28 +0000

flight 162122 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162122/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 14 guest-start    fail in 162118 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-amd64-pvgrub 19 guest-localmigrate/x10 fail in 162118 pass in 162122
 test-arm64-arm64-libvirt-xsm 13 debian-fixup     fail in 162118 pass in 162122
 test-arm64-arm64-xl-credit2  13 debian-fixup     fail in 162118 pass in 162122
 test-arm64-arm64-xl-thunderx 13 debian-fixup               fail pass in 162118
 test-amd64-amd64-qemuu-freebsd12-amd64 21 guest-start/freebsd.repeat fail pass in 162118

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 linux                45af60e7ced07ae3def41368c3d260dbf496fbce
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  294 days
Failing since        152366  2020-08-01 20:49:34 Z  293 days  497 attempts
Testing same since   162118  2021-05-21 23:42:32 Z    0 days    2 attempts

------------------------------------------------------------
6079 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1650974 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 22 20:15:56 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162123-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162123: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b239a0365b9339ad5e276ed9cb4605963c2d939a
X-Osstest-Versions-That:
    linux=e05d387ba736bcabe414b0aa05831d151ac40385
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 22 May 2021 20:15:41 +0000

flight 162123 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162123/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds   18 guest-start/debian.repeat fail blocked in 162084
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162084
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162084
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162084
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162084
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162084
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162084
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162084
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162084
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162084
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162084
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162084
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                b239a0365b9339ad5e276ed9cb4605963c2d939a
baseline version:
 linux                e05d387ba736bcabe414b0aa05831d151ac40385

Last test of basis   162084  2021-05-19 08:42:14 Z    3 days
Testing same since   162123  2021-05-22 10:12:11 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alex Deucher <alexander.deucher@amd.com>
  Alexandru Elisei <alexandru.elisei@arm.com>
  Alexei Starovoitov <ast@kernel.org>
  Andrew Morton <akpm@linux-foundation.org>
  Andy Shevchenko <andriy.shevchenko@linux.intel.com>
  Anton Ivanov <anton.ivanov@cambridgegreys.com>
  Arnd Bergmann <arnd@arndb.de>
  Benjamin Tissoires <benjamin.tissoires@redhat.com>
  Bjorn Helgaas <bhelgaas@google.com>
  Bodo Stroesser <bostroesser@gmail.com>
  Daniel Thompson <daniel.thompson@linaro.org>
  David S. Miller <davem@davemloft.net>
  Dmitry Torokhov <dmitry.torokhov@gmail.com>
  Eric Dumazet <edumazet@google.com>
  Feilong Lin <linfeilong@huawei.com>
  Finn Behrens <me@kloenk.de>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guenter Roeck <linux@roeck-us.net>
  Gustavo Pimentel <gustavo.pimentel@synopsys.com>
  Hans de Goede <hdegoede@redhat.com>
  Hui Wang <hui.wang@canonical.com>
  Ilya Dryomov <idryomov@gmail.com>
  Ingo Molnar <mingo@kernel.org>
  Jakub Kicinski <kuba@kernel.org>
  James Smart <jsmart2021@gmail.com>
  Jason Self <jason@bluehome.net>
  Jason Wang <jasowang@redhat.com>
  Jeff Layton <jlayton@kernel.org>
  Jens Axboe <axboe@kernel.dk>
  Johannes Berg <johannes.berg@intel.com>
  Jon Hunter <jonathanh@nvidia.com>
  Josh Poimboeuf <jpoimboe@redhat.com>
  Justin Tee <justin.tee@broadcom.com>
  Kaixu Xia <kaixuxia@tencent.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linus Walleij <linus.walleij@linaro.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  louis.wang <liang26812@gmail.com>
  Magnus Karlsson <magnus.karlsson@intel.com>
  Manivannan Sadhasivam <manivannan.sadhasivam@linaro.org>
  Marc Zyngier <maz@kernel.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Masahiro Yamada <masahiroy@kernel.org>
  Nathan Chancellor <nathan@kernel.org>
  Palmer Dabbelt <palmerdabbelt@google.com>
  Pavel Begunkov <asml.silence@gmail.com>
  Richard Weinberger <richard@nod.at>
  Ritesh Raj Sarraf <rrs@debian.org>
  Rodrigo Siqueira <Rodrigo.Siqueira@amd.com>
  Russell King <rmk+kernel@armlinux.org.uk>
  Sasha Levin <sashal@kernel.org>
  Shuah Khan <skhan@linuxfoundation.org>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  Takashi Iwai <tiwai@suse.de>
  Vinod Koul <vkoul@kernel.org>
  Xuan Zhuo <xuanzhuo@linux.alibaba.com>
  yangerkun <yangerkun@huawei.com>
  Yannick Vignon <yannick.vignon@nxp.com>
  Zhang Zhengming <zhangzhengming@huawei.com>
  Zhiqiang Liu <liuzhiqiang26@huawei.com>
  Zqiang <qiang.zhang@windriver.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   e05d387ba736..b239a0365b93  b239a0365b9339ad5e276ed9cb4605963c2d939a -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sun May 23 02:01:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 02:01:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131528.245783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkdQV-0006bz-IZ; Sun, 23 May 2021 02:01:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131528.245783; Sun, 23 May 2021 02:01:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkdQV-0006br-BN; Sun, 23 May 2021 02:01:11 +0000
Received: by outflank-mailman (input) for mailman id 131528;
 Sun, 23 May 2021 02:01:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkdQU-0006bh-3q; Sun, 23 May 2021 02:01:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkdQT-0000kO-UP; Sun, 23 May 2021 02:01:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkdQT-0007Jk-J1; Sun, 23 May 2021 02:01:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkdQT-0001YB-IX; Sun, 23 May 2021 02:01:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XFZ7UoNGpaEJU6a59+3LMuUQODawb9OazzPRROtZHGQ=; b=WxobhV2gNs9PUjlHfOFsGOhXQh
	oyoPNVElthqz018rhwJupTY40VR5At+KV2cZLIUoPcaw7ZmmV2qpxWsYI0O0fJDclXrScDMsZtRFL
	zEPIx9KTqIGdnB7FlB6/njp7kktgaumIW4+ZrvQ3/+hxnpd7MhyXnwD0aPDNc3JNiINY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162124-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162124: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-libvirt-vhd:guest-start/debian.repeat:fail:heisenbug
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3bbaed2cd0a02ee53958d3d2585e837bcf327278
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 02:01:09 +0000

flight 162124 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162124/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-libvirt-vhd 19 guest-start/debian.repeat  fail pass in 162116

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                3bbaed2cd0a02ee53958d3d2585e837bcf327278
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  275 days
Failing since        152659  2020-08-21 14:07:39 Z  274 days  505 attempts
Testing same since   162116  2021-05-21 21:07:19 Z    1 days    3 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159254 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 23 04:35:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 04:35:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131536.245796 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkfp8-0003K5-Pb; Sun, 23 May 2021 04:34:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131536.245796; Sun, 23 May 2021 04:34:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkfp8-0003Jy-Mc; Sun, 23 May 2021 04:34:46 +0000
Received: by outflank-mailman (input) for mailman id 131536;
 Sun, 23 May 2021 04:34:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkfp7-0003Jo-C5; Sun, 23 May 2021 04:34:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkfp7-0003Ry-6r; Sun, 23 May 2021 04:34:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkfp6-0005hF-Ns; Sun, 23 May 2021 04:34:44 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkfp6-0007sr-NK; Sun, 23 May 2021 04:34:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=6lGJnvc2sB7VG9tuHxZHDOfrqgIdid8JZ+ycuXXV9aE=; b=T72WorrRrrm/15Ns9zQjsYOPtO
	58xfvtxIXvCyn8rOGOrNLzKBoszKM8knLFT9AbozFGiiKHbJUpuozGWxd63vvNEhxczq8J5GLF+0r
	UoqJkQwCeUFfT1tvXKs3Qfzy94aU/xv/sgRZHYpmSXC1PnWlhlPd8O+3oMv2RyR0LjXY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162125-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162125: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=4ff2473bdb4cf2bb7d208ccf4418d3d7e6b1652c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 04:34:44 +0000

flight 162125 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162125/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                4ff2473bdb4cf2bb7d208ccf4418d3d7e6b1652c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  295 days
Failing since        152366  2020-08-01 20:49:34 Z  294 days  498 attempts
Testing same since   162125  2021-05-22 18:42:16 Z    0 days    1 attempts

------------------------------------------------------------
6081 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1651425 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 23 07:02:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 07:02:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131547.245811 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lki8E-000079-CR; Sun, 23 May 2021 07:02:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131547.245811; Sun, 23 May 2021 07:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lki8E-00006z-92; Sun, 23 May 2021 07:02:38 +0000
Received: by outflank-mailman (input) for mailman id 131547;
 Sun, 23 May 2021 07:02:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=VPDj=KS=huawei.com=yuehaibing@srs-us1.protection.inumbo.net>)
 id 1lki8C-00006q-KZ
 for xen-devel@lists.xenproject.org; Sun, 23 May 2021 07:02:36 +0000
Received: from szxga06-in.huawei.com (unknown [45.249.212.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id d04ccc05-d41e-4eaf-9ec5-2584cfdbb027;
 Sun, 23 May 2021 07:02:31 +0000 (UTC)
Received: from dggems706-chm.china.huawei.com (unknown [172.30.72.58])
 by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4Fnrnh22sgzmYZJ;
 Sun, 23 May 2021 15:00:08 +0800 (CST)
Received: from dggema769-chm.china.huawei.com (10.1.198.211) by
 dggems706-chm.china.huawei.com (10.3.19.183) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id
 15.1.2176.2; Sun, 23 May 2021 15:02:26 +0800
Received: from localhost (10.174.179.215) by dggema769-chm.china.huawei.com
 (10.1.198.211) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.2; Sun, 23
 May 2021 15:02:26 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d04ccc05-d41e-4eaf-9ec5-2584cfdbb027
From: YueHaibing <yuehaibing@huawei.com>
To: <boris.ostrovsky@oracle.com>, <jgross@suse.com>, <sstabellini@kernel.org>,
	<yuehaibing@huawei.com>
CC: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Subject: [PATCH -next] xen/pcpu: Use DEVICE_ATTR_RW macro
Date: Sun, 23 May 2021 15:02:14 +0800
Message-ID: <20210523070214.34948-1-yuehaibing@huawei.com>
X-Mailer: git-send-email 2.10.2.windows.1
MIME-Version: 1.0
Content-Type: text/plain
X-Originating-IP: [10.174.179.215]
X-ClientProxiedBy: dggems704-chm.china.huawei.com (10.3.19.181) To
 dggema769-chm.china.huawei.com (10.1.198.211)
X-CFilter-Loop: Reflected

Use the DEVICE_ATTR_RW helper instead of plain DEVICE_ATTR,
which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
---
 drivers/xen/pcpu.c | 6 +++---
 1 file changed, 3 insertions(+), 3 deletions(-)

diff --git a/drivers/xen/pcpu.c b/drivers/xen/pcpu.c
index 1bcdd5227771..47aa3a1ccaf5 100644
--- a/drivers/xen/pcpu.c
+++ b/drivers/xen/pcpu.c
@@ -92,7 +92,7 @@ static int xen_pcpu_up(uint32_t cpu_id)
 	return HYPERVISOR_platform_op(&op);
 }
 
-static ssize_t show_online(struct device *dev,
+static ssize_t online_show(struct device *dev,
 			   struct device_attribute *attr,
 			   char *buf)
 {
@@ -101,7 +101,7 @@ static ssize_t show_online(struct device *dev,
 	return sprintf(buf, "%u\n", !!(cpu->flags & XEN_PCPU_FLAGS_ONLINE));
 }
 
-static ssize_t __ref store_online(struct device *dev,
+static ssize_t __ref online_store(struct device *dev,
 				  struct device_attribute *attr,
 				  const char *buf, size_t count)
 {
@@ -130,7 +130,7 @@ static ssize_t __ref store_online(struct device *dev,
 		ret = count;
 	return ret;
 }
-static DEVICE_ATTR(online, S_IRUGO | S_IWUSR, show_online, store_online);
+static DEVICE_ATTR_RW(online);
 
 static struct attribute *pcpu_dev_attrs[] = {
 	&dev_attr_online.attr,
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Sun May 23 08:00:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 08:00:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131556.245822 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkj2U-0006Hr-Vi; Sun, 23 May 2021 08:00:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131556.245822; Sun, 23 May 2021 08:00:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkj2U-0006Hk-SF; Sun, 23 May 2021 08:00:46 +0000
Received: by outflank-mailman (input) for mailman id 131556;
 Sun, 23 May 2021 08:00:45 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkj2T-0006Ha-KV; Sun, 23 May 2021 08:00:45 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkj2T-0007uJ-E6; Sun, 23 May 2021 08:00:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkj2T-0006KA-4w; Sun, 23 May 2021 08:00:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkj2T-0002YD-4T; Sun, 23 May 2021 08:00:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ifRxeCbk/RJwpiccrsyHr9WJiWVfnNUr7zQvBUYXGe4=; b=22wDHOceHYSDeMU4KiDgH5kYLP
	K8PNyiug90Q2yHglgieEKpOHrK7rpy48MAjVK8yXrQgn9pA0avOfjAyiKM05sVYWUzxZ8iK2mWVHr
	djtHI6RDoFgjDmipkORg3/uShvJTBsXpYYfh7Ngv3YWUCABYnS2nUdpfeclzpp1iyPj4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162128-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162128: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d8c468d58c23cf4254fb98dddd12b4f0225b184a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 08:00:45 +0000

flight 162128 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162128/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d8c468d58c23cf4254fb98dddd12b4f0225b184a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  317 days
Failing since        151818  2020-07-11 04:18:52 Z  316 days  309 attempts
Testing same since   162120  2021-05-22 04:20:08 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 58637 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 23 10:05:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 10:05:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131564.245836 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkkyv-0000F3-Ap; Sun, 23 May 2021 10:05:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131564.245836; Sun, 23 May 2021 10:05:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkkyv-0000Ew-7b; Sun, 23 May 2021 10:05:13 +0000
Received: by outflank-mailman (input) for mailman id 131564;
 Sun, 23 May 2021 10:05:12 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkkyu-0000Em-Nt; Sun, 23 May 2021 10:05:12 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkkyu-0001Xq-Fc; Sun, 23 May 2021 10:05:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkkyu-0004WU-6c; Sun, 23 May 2021 10:05:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkkyu-0003ci-60; Sun, 23 May 2021 10:05:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ul/wf4zYy8MDG/sXGglWks2KnUp3FRCc1zQIL6Wuh+g=; b=k04nD9lNtpkkuWc2XKQQwx+WVn
	0SUu2YuUFIop63QvmhPlBV3D6fJvhY6eeiGC3yaCrlK4gdl5JlDs3P8ZYmdaJy4bo4J6yBQYfL/1D
	A4ea47UWMROpwCRm3Qe2WMDleET+K8r78/R+O+A0ErENUibm3BnLeoorb/LBhwbmZrlY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162130-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 162130: all pass - PUSHED
X-Osstest-Versions-This:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
X-Osstest-Versions-That:
    xen=caa9c4471d1d74b2d236467aaf7e63a806ac11a4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 10:05:12 +0000

flight 162130 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162130/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41
baseline version:
 xen                  caa9c4471d1d74b2d236467aaf7e63a806ac11a4

Last test of basis   162086  2021-05-19 09:18:32 Z    4 days
Testing same since   162130  2021-05-23 09:20:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Dario Faggioli <dfaggioli@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   caa9c4471d..aa77acc280  aa77acc28098d04945af998f3fc0dbd3759b5b41 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 23 11:33:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 11:33:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131573.245849 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkmMY-0008Mb-Qa; Sun, 23 May 2021 11:33:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131573.245849; Sun, 23 May 2021 11:33:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkmMY-0008MU-Nc; Sun, 23 May 2021 11:33:42 +0000
Received: by outflank-mailman (input) for mailman id 131573;
 Sun, 23 May 2021 11:33:41 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkmMX-0008MK-Dt; Sun, 23 May 2021 11:33:41 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkmMX-0002ya-2m; Sun, 23 May 2021 11:33:41 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkmMW-0007qp-Lr; Sun, 23 May 2021 11:33:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkmMW-0004EA-LM; Sun, 23 May 2021 11:33:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=d14BzNjQkevznadh4RWNn4CCPhr404d34UiVhCxc5Oc=; b=FyhJnxffJCAqGSlUadmOSeZYwU
	0EXqvHrki1LhAObugMETAxAmrgXKNEVDsMNSc0s8OC3cGdfMDBk64Q3DqhpGubsFb6kvB4hvvJPaa
	ADo/Egu4errwbT1DrH/2XUWRi07LtL9vOxTKZjL1sq09R4NyXChJZzJkAuMI3ZzYkBCw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162126-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162126: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl-rtds:xen-boot:fail:heisenbug
    xen-unstable:test-amd64-coresched-amd64-xl:xen-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-multivcpu:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 11:33:40 +0000

flight 162126 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162126/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot         fail in 162119 pass in 162126
 test-armhf-armhf-xl-rtds      8 xen-boot         fail in 162119 pass in 162126
 test-amd64-coresched-amd64-xl  7 xen-install               fail pass in 162119
 test-amd64-amd64-xl-multivcpu 20 guest-localmigrate/x10    fail pass in 162119

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162119
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162119
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162119
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162119
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162119
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162119
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162119
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162119
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162119
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162119
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162119
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162126  2021-05-23 01:53:49 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                fail    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                fail    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 23 14:01:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 14:01:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131586.245863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkofU-00052Y-MQ; Sun, 23 May 2021 14:01:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131586.245863; Sun, 23 May 2021 14:01:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkofU-00052R-JP; Sun, 23 May 2021 14:01:24 +0000
Received: by outflank-mailman (input) for mailman id 131586;
 Sun, 23 May 2021 14:01:23 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkofT-00052H-OL; Sun, 23 May 2021 14:01:23 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkofT-0005RQ-EE; Sun, 23 May 2021 14:01:23 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkofT-0006wM-4w; Sun, 23 May 2021 14:01:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkofT-0007XR-23; Sun, 23 May 2021 14:01:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=ky2Rgjdnw4gRCM31DMaSa/XlRLhB6wNZcPBlCynv+6Q=; b=GfXXIiZtvWpYael0s/u44TnTz/
	SO+y/bpoPcTPs4Xx4scZqyr0y6sxWM577szXFyXPpS6zkf/OFeq/bIhZJseA3Sp/XDw4EnPa0uweU
	FDM8OhLWwgrsZwbkSUF9s6ijuNjNQqsg/wzr0aAkhq/wtKYnIloMtjHoAXg/2ltlCbG8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162131-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162131: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=cfa6ffb113f2c0d922034cc77c0d6c52eea05497
X-Osstest-Versions-That:
    ovmf=1fb80369b72c6ba7f80b442e4acf771a6dd56ee7
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 14:01:23 +0000

flight 162131 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162131/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 cfa6ffb113f2c0d922034cc77c0d6c52eea05497
baseline version:
 ovmf                 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7

Last test of basis   162113  2021-05-21 13:41:10 Z    2 days
Testing same since   162131  2021-05-23 12:12:23 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   1fb80369b7..cfa6ffb113  cfa6ffb113f2c0d922034cc77c0d6c52eea05497 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun May 23 14:05:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 14:05:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131593.245878 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkojq-0005j7-9O; Sun, 23 May 2021 14:05:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131593.245878; Sun, 23 May 2021 14:05:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkojq-0005j0-6B; Sun, 23 May 2021 14:05:54 +0000
Received: by outflank-mailman (input) for mailman id 131593;
 Sun, 23 May 2021 14:05:52 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkojo-0005il-U3; Sun, 23 May 2021 14:05:52 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkojo-0005WG-On; Sun, 23 May 2021 14:05:52 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkojo-000795-HS; Sun, 23 May 2021 14:05:52 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkojo-0001ik-Gv; Sun, 23 May 2021 14:05:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=5tVRxjO4rHFDmjekh7F4UAcSg1Dbc+S5zk3AxDL7rbI=; b=FWRo0waJrV8k3s6XlxZBty8PND
	KKoXFLIwT2GSy95X6TrNc1GDfwUzUtDaslwmHbWh3NVMSZ3pv629t1LbVxzyhzKN/H4sqncQ1l4ZJ
	hT1pQZRUbAlVFdH8x9D/9RoH6bW7DTp/rLxVMU21AIak1+3+KMPjQTET9A8cAU51b4q4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162127-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162127: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3bbaed2cd0a02ee53958d3d2585e837bcf327278
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 14:05:52 +0000

flight 162127 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162127/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                3bbaed2cd0a02ee53958d3d2585e837bcf327278
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  276 days
Failing since        152659  2020-08-21 14:07:39 Z  274 days  506 attempts
Testing same since   162116  2021-05-21 21:07:19 Z    1 days    4 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159254 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 23 17:44:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 17:44:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131605.245891 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lks94-00009I-Jt; Sun, 23 May 2021 17:44:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131605.245891; Sun, 23 May 2021 17:44:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lks94-00009B-GJ; Sun, 23 May 2021 17:44:10 +0000
Received: by outflank-mailman (input) for mailman id 131605;
 Sun, 23 May 2021 17:44:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lks93-000091-7c; Sun, 23 May 2021 17:44:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lks92-00015i-Vu; Sun, 23 May 2021 17:44:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lks92-0000n9-Je; Sun, 23 May 2021 17:44:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lks92-00010l-JA; Sun, 23 May 2021 17:44:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=GLBlJXuoXbp0lwrvbqqpX/2ng7tIjKNxaD9zMlIOO9w=; b=MX/LG4euxMbsfP/7zBTO36tUaL
	wCWeKH/P8pEGHqXALItPuBLvHkKkM+C1mHTg4wJIOgKEwYK+IbeCBF2Sx1pgRvNKh6qN+Ly1EavTY
	b5GAIsyXZ5abMPKYh+4T5WA6YG2u/SG0AWbt5+rYFJ6FhJ7JEgZnO6+za6nfYOPa3xhc=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162129-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162129: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=34c5c89890d6295621b6f09b18e7ead9046634bc
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 17:44:08 +0000

flight 162129 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162129/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                34c5c89890d6295621b6f09b18e7ead9046634bc
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  295 days
Failing since        152366  2020-08-01 20:49:34 Z  294 days  499 attempts
Testing same since   162129  2021-05-23 04:38:22 Z    0 days    1 attempts

------------------------------------------------------------
6083 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1651782 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 23 21:44:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 23 May 2021 21:44:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131618.245905 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkvt0-0004CT-UE; Sun, 23 May 2021 21:43:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131618.245905; Sun, 23 May 2021 21:43:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkvt0-0004CM-RM; Sun, 23 May 2021 21:43:50 +0000
Received: by outflank-mailman (input) for mailman id 131618;
 Sun, 23 May 2021 21:43:49 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkvsz-0004CC-Ia; Sun, 23 May 2021 21:43:49 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkvsz-0005HF-BH; Sun, 23 May 2021 21:43:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkvsy-0002XM-Vk; Sun, 23 May 2021 21:43:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkvsy-0004Me-Uz; Sun, 23 May 2021 21:43:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=b1AAT62jissxKVDjtUW3TDFfo+7Jq6Wyi5qHU1Y3Zik=; b=eha75jh0ybOoXuQDsOa0QvFOFD
	mhkzQpx98rk5YeYvKtPOR00exBUdjb2bq8DqZzR2yYWs3y0bfTzhct1h62KVFNg2Syc3n+sk6HTnR
	rW5bvUwWZEODxp5lWtCU9P08RtEsr7mDrW9KqhqC/UR7qlHw2oVBCqviKzVZjfnQbW8I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162132-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162132: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3bbaed2cd0a02ee53958d3d2585e837bcf327278
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 23 May 2021 21:43:48 +0000

flight 162132 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162132/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                3bbaed2cd0a02ee53958d3d2585e837bcf327278
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  276 days
Failing since        152659  2020-08-21 14:07:39 Z  275 days  507 attempts
Testing same since   162116  2021-05-21 21:07:19 Z    2 days    5 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159254 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 24 01:08:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 01:08:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131629.245919 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkz4R-0002pC-SI; Mon, 24 May 2021 01:07:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131629.245919; Mon, 24 May 2021 01:07:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lkz4R-0002p5-Ob; Mon, 24 May 2021 01:07:51 +0000
Received: by outflank-mailman (input) for mailman id 131629;
 Mon, 24 May 2021 01:07:50 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkz4Q-0002ov-4i; Mon, 24 May 2021 01:07:50 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkz4P-00065H-UK; Mon, 24 May 2021 01:07:49 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lkz4P-0001Hg-L6; Mon, 24 May 2021 01:07:49 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lkz4F-0001tz-86; Mon, 24 May 2021 01:07:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Cd12KrdmlDmhc4YKuBVX0C91bZgotli/isHGZhmfX5w=; b=BkTAzM9GUHBBERfvaWMGVs8FAa
	kBYfsw6S/HdmoTlgrnJ4snC4bi1DWkFB06M54K4ziamyP8iBhOdIl3Hl/cZ1flHJitxptgi4EU9AP
	fNnfgfmlBV6ac9wW6u2YIOvr7vuzCEpbZLSwUHGnGabt+4r/I1BwwlkyK5btJnyLZniw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162134-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162134: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6ebb6814a1ef9573d8488232b50dc53b394c025a
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 May 2021 01:07:39 +0000

flight 162134 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162134/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                6ebb6814a1ef9573d8488232b50dc53b394c025a
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  296 days
Failing since        152366  2020-08-01 20:49:34 Z  295 days  500 attempts
Testing same since   162134  2021-05-23 18:12:02 Z    0 days    1 attempts

------------------------------------------------------------
6084 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1652353 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 24 04:29:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 04:29:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131642.245934 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll2Dk-0004QS-BN; Mon, 24 May 2021 04:29:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131642.245934; Mon, 24 May 2021 04:29:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll2Dk-0004QL-6F; Mon, 24 May 2021 04:29:40 +0000
Received: by outflank-mailman (input) for mailman id 131642;
 Mon, 24 May 2021 04:29:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Omqp=KT=bugseng.com=roberto.bagnara@srs-us1.protection.inumbo.net>)
 id 1ll2Di-0004QF-Pv
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 04:29:38 +0000
Received: from spartacus.cs.unipr.it (unknown [160.78.167.140])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 84cc6cb0-8f19-4fa9-83b3-56d6194fd072;
 Mon, 24 May 2021 04:29:36 +0000 (UTC)
Received: from [192.168.1.137] (host-87-7-30-209.retail.telecomitalia.it
 [87.7.30.209])
 by spartacus.cs.unipr.it (Postfix) with ESMTPSA id 03E674E0C29
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 06:29:34 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 84cc6cb0-8f19-4fa9-83b3-56d6194fd072
From: Roberto Bagnara <roberto.bagnara@bugseng.com>
Subject: Invalid _Static_assert expanded from HASH_CALLBACKS_CHECK
To: xen-devel@lists.xenproject.org
Message-ID: <ccb37c2e-a3a6-a2e4-bf15-da81f97c94be@bugseng.com>
Date: Mon, 24 May 2021 06:29:34 +0200
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050929
 Thunderbird/1.0.7 Fedora/1.0.7-1.1.fc4 Mnenhy/0.7.3.0
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit


Hi there.

I stumbled upon parsing errors due to invalid uses of
_Static_assert expanded from HASH_CALLBACKS_CHECK, where
the tested expression is not a constant expression, as
mandated by the C standard.

Judging from the following comment, there is partial awareness
of the fact that this is an issue:

#ifndef __clang__ /* At least some versions dislike some of the uses. */
#define HASH_CALLBACKS_CHECK(mask) \
     BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)

Indeed, this is not a fault of Clang: the point is that some
of the expansions of this macro are not valid C.  Moreover,
the fact that GCC sometimes accepts them is not
something we can rely upon:

$ cat p.c
void f() {
static const int x = 3;
_Static_assert(x < 4, "");
}
$ gcc -c -O p.c
$ gcc -c p.c
p.c: In function ‘f’:
p.c:3:20: error: expression in static assertion is not constant
3 | _Static_assert(x < 4, "");
| ~^~
$

Finally, I think this can be easily avoided: instead
of initializing a static const with a constant expression
and then static-asserting the static const, just static-assert
the constant initializer.

Kind regards,

    Roberto Bagnara


From xen-devel-bounces@lists.xenproject.org Mon May 24 05:56:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 05:56:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131669.245944 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll3ZW-0004LS-Oa; Mon, 24 May 2021 05:56:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131669.245944; Mon, 24 May 2021 05:56:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll3ZW-0004LL-Lb; Mon, 24 May 2021 05:56:14 +0000
Received: by outflank-mailman (input) for mailman id 131669;
 Mon, 24 May 2021 05:56:14 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll3ZW-0004LB-27; Mon, 24 May 2021 05:56:14 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll3ZV-0003fN-Rw; Mon, 24 May 2021 05:56:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll3ZV-0006Yv-GV; Mon, 24 May 2021 05:56:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ll3ZV-0002Fq-Fk; Mon, 24 May 2021 05:56:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=VSdqr5baMGd4p/mFt7xc1wFjw8iOYEA7F29saGnU9Rg=; b=Ky0Y0ChCcnWLJxi809DHk3h2w4
	8LASht5zvebkZY2+rvueNza9Uew2Jr/wjlFMwAlE19gn/H+36jGxSCubawDaNjlb0tgRkGQmFA2Fe
	a2yY7LyrSag3yVz9PCRuTWbTddxTrXCmWwTDphCa+oXdkZWiBlw7Xc/sVfWBXYHH7xIs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162135-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162135: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-arm64-arm64-xl-credit2:xen-boot:fail:heisenbug
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3bbaed2cd0a02ee53958d3d2585e837bcf327278
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 May 2021 05:56:13 +0000

flight 162135 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162135/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2   8 xen-boot                   fail pass in 162132

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit2 15 migrate-support-check fail in 162132 never pass
 test-arm64-arm64-xl-credit2 16 saverestore-support-check fail in 162132 never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                3bbaed2cd0a02ee53958d3d2585e837bcf327278
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  276 days
Failing since        152659  2020-08-21 14:07:39 Z  275 days  508 attempts
Testing same since   162116  2021-05-21 21:07:19 Z    2 days    6 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159254 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 24 07:44:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 07:44:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131687.245994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll5Fz-0006Ha-Rh; Mon, 24 May 2021 07:44:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131687.245994; Mon, 24 May 2021 07:44:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll5Fz-0006HT-Oh; Mon, 24 May 2021 07:44:11 +0000
Received: by outflank-mailman (input) for mailman id 131687;
 Mon, 24 May 2021 07:44:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll5Fy-0006HJ-MQ; Mon, 24 May 2021 07:44:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll5Fy-0005Wn-FB; Mon, 24 May 2021 07:44:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll5Fy-00041L-6O; Mon, 24 May 2021 07:44:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ll5Fy-0001mI-5t; Mon, 24 May 2021 07:44:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=nOC6kSbi0wbPbYJLPaojml5NLh2rrc0kYT71T+bhfEI=; b=mq94WDkPPvOHdo0Q0r6C3a9+kQ
	YuM9+KuKwtYiCk9oDi8vrklBfRNehRIGucJhMGMdEHFAzc6XF843HHV8JekfhniVNJPvL5pnnPvgu
	GH5bKG3BZlwFuk9DDDxoeZ/4vVguqbYfKN4LnCrMXToP/MJCwFSGhcdbcKaJoqmD8Rg0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162138-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162138: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=d8c468d58c23cf4254fb98dddd12b4f0225b184a
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 May 2021 07:44:10 +0000

flight 162138 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162138/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              d8c468d58c23cf4254fb98dddd12b4f0225b184a
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  318 days
Failing since        151818  2020-07-11 04:18:52 Z  317 days  310 attempts
Testing same since   162120  2021-05-22 04:20:08 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 58637 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 24 09:00:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 09:00:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131699.246008 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll6Rm-0005hu-3V; Mon, 24 May 2021 09:00:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131699.246008; Mon, 24 May 2021 09:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll6Rm-0005hn-0L; Mon, 24 May 2021 09:00:26 +0000
Received: by outflank-mailman (input) for mailman id 131699;
 Mon, 24 May 2021 08:01:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yMb4=KT=darkstar.site=sakib@srs-us1.protection.inumbo.net>)
 id 1ll5WV-0000hD-Ba
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 08:01:15 +0000
Received: from pb-smtp2.pobox.com (unknown [64.147.108.71])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b74c592a-deb1-419a-afce-cfda55b4e452;
 Mon, 24 May 2021 08:01:13 +0000 (UTC)
Received: from pb-smtp2.pobox.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 4C462B33FA;
 Mon, 24 May 2021 04:01:13 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp2.nyi.icgroup.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 44D6EB33F9;
 Mon, 24 May 2021 04:01:13 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [95.67.114.216])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp2.pobox.com (Postfix) with ESMTPSA id 23594B33F8;
 Mon, 24 May 2021 04:01:11 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b74c592a-deb1-419a-afce-cfda55b4e452
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:mime-version:content-transfer-encoding;
	 s=sasl; bh=mCsJmtMIZvPu58czx8ZlXkLAmQ9FB2eDwle91D0METI=; b=QUDB
	7yljJ6uAuxl1APj85QsfTuCLWY8hWwQrQ3nCIF0xuECvL+qtOACg1Sian5u0gmz8
	Nj5jskZFrwuJ9RpDe5PdYZib13XXe5g8j8I1F+NAEXxU0YhCVysFnBqit99GDccH
	J8ykkIMs1e/Cd72Gaf4Ectq0DyzxVE7rXANL+CM=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wl@xen.org>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v1] libxl/arm: provide guests with random seed
Date: Mon, 24 May 2021 08:00:57 +0000
Message-Id: <20210524080057.1773-1-Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-Pobox-Relay-ID:
 3417F180-BC66-11EB-99A9-74DE23BA3BAF-90055647!pb-smtp2.pobox.com
Content-Transfer-Encoding: quoted-printable

Pass a random seed via the FDT, so that guests' CRNGs are better seeded
early at boot. Depending on its configuration, Linux can use the seed as
device randomness or just to initialize its CRNG quickly. In either case
this provides extra randomness to further harden the CRNG.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 tools/libxl/libxl_arm.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29056..05c58a428c 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -342,6 +342,12 @@ static int make_chosen_node(libxl__gc *gc, void *fdt, bool ramdisk,
         if (res) return res;
     }
 
+    uint8_t seed[128];
+    res = libxl__random_bytes(gc, seed, sizeof(seed));
+    if (res) return res;
+    res = fdt_property(fdt, "rng-seed", seed, sizeof(seed));
+    if (res) return res;
+
     res = fdt_end_node(fdt);
     if (res) return res;
 
-- 
2.25.1
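[Editorial note: for readers unfamiliar with what fdt_property() emits, the
flattened-device-tree encoding of a property record such as "rng-seed" is
straightforward. The sketch below is not libxl or libfdt code; it is a
minimal, self-contained illustration of the DTB structure-block layout
(big-endian FDT_PROP token, data length, offset of the property name in the
strings block, then the data zero-padded to a 4-byte boundary). The function
name fdt_append_prop is invented for this sketch.]

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define FDT_PROP 0x00000003u    /* structure-block token for a property */

/* Store a 32-bit value big-endian, as the flattened device tree requires. */
static void put_be32(uint8_t *p, uint32_t v)
{
    p[0] = v >> 24; p[1] = v >> 16; p[2] = v >> 8; p[3] = v;
}

/*
 * Append one property record to an FDT structure block at offset `off`:
 * FDT_PROP token, data length, name offset (into the strings block),
 * then the data itself, zero-padded to 32-bit alignment.
 * Returns the new write offset.
 */
static size_t fdt_append_prop(uint8_t *buf, size_t off, uint32_t nameoff,
                              const void *data, uint32_t len)
{
    put_be32(buf + off, FDT_PROP);  off += 4;
    put_be32(buf + off, len);       off += 4;
    put_be32(buf + off, nameoff);   off += 4;
    memcpy(buf + off, data, len);   off += len;
    while (off % 4)
        buf[off++] = 0;             /* pad data to a 4-byte boundary */
    return off;
}
```

A 128-byte seed, as in the patch, therefore costs 12 bytes of header plus
the (already aligned) payload inside the guest's device tree.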



From xen-devel-bounces@lists.xenproject.org Mon May 24 09:00:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 09:00:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131701.246014 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll6Rm-0005lU-ED; Mon, 24 May 2021 09:00:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131701.246014; Mon, 24 May 2021 09:00:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll6Rm-0005kd-8Z; Mon, 24 May 2021 09:00:26 +0000
Received: by outflank-mailman (input) for mailman id 131701;
 Mon, 24 May 2021 08:59:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=yMb4=KT=darkstar.site=sakib@srs-us1.protection.inumbo.net>)
 id 1ll6QV-0004xC-GZ
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 08:59:07 +0000
Received: from pb-smtp1.pobox.com (unknown [64.147.108.70])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 22e90eb8-1801-4bd7-b845-4aed584d810b;
 Mon, 24 May 2021 08:59:05 +0000 (UTC)
Received: from pb-smtp1.pobox.com (unknown [127.0.0.1])
 by pb-smtp1.pobox.com (Postfix) with ESMTP id 76323C5336;
 Mon, 24 May 2021 04:59:05 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp1.nyi.icgroup.com (unknown [127.0.0.1])
 by pb-smtp1.pobox.com (Postfix) with ESMTP id 6CD61C5335;
 Mon, 24 May 2021 04:59:05 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [95.67.114.216])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp1.pobox.com (Postfix) with ESMTPSA id AC917C5334;
 Mon, 24 May 2021 04:59:04 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22e90eb8-1801-4bd7-b845-4aed584d810b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:mime-version:content-transfer-encoding;
	 s=sasl; bh=N3y2IUOrV3tiaMzNDcpoZJubh8fsBaImgmUzGyl9YYg=; b=k0bv
	mGxaD3or+1r/4ZAdUHMIq/qVK51+Ra3roGoCKfOuK9Wie1Y71IwuuawCq24jMbxl
	fzQyLGFDR06/M/NX+7VtjR+XfpPgW/8L/3FxGmuyoWTQb50z7LIhqSbfR0DOeJ+U
	8jE2ZpsrrGf1o/OLi279T0EI7Xhys9CteC615EI=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wl@xen.org>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>
Subject: [XEN PATCH v1] libxl: use getrandom() syscall for random data extraction
Date: Mon, 24 May 2021 08:58:58 +0000
Message-Id: <20210524085858.1902-1-Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-Pobox-Relay-ID:
 4A7FC54E-BC6E-11EB-928E-D152C8D8090B-90055647!pb-smtp1.pobox.com
Content-Transfer-Encoding: quoted-printable

Simplify the libxl__random_bytes() routine by using the newer dedicated
getrandom() syscall. This not only substantially reduces the routine's
footprint; the syscall is also considered a safer and generally better
solution:

https://lwn.net/Articles/606141/

getrandom() is available on Linux, FreeBSD and NetBSD.

Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 tools/libxl/libxl_utils.c | 23 ++++-------------------
 1 file changed, 4 insertions(+), 19 deletions(-)

diff --git a/tools/libxl/libxl_utils.c b/tools/libxl/libxl_utils.c
index b039143b8a..f3e56a4026 100644
--- a/tools/libxl/libxl_utils.c
+++ b/tools/libxl/libxl_utils.c
@@ -16,6 +16,7 @@
 #include "libxl_osdeps.h" /* must come before any other headers */
 
 #include <ctype.h>
+#include <sys/random.h>
 
 #include "libxl_internal.h"
 #include "_paths.h"
@@ -1226,26 +1227,10 @@ void libxl_string_copy(libxl_ctx *ctx, char **dst, char * const*src)
  */
 int libxl__random_bytes(libxl__gc *gc, uint8_t *buf, size_t len)
 {
-    static const char *dev = "/dev/urandom";
-    int fd;
-    int ret;
-
-    fd = open(dev, O_RDONLY);
-    if (fd < 0) {
-        LOGE(ERROR, "failed to open \"%s\"", dev);
+    ssize_t ret = getrandom(buf, len, 0);
+    if (ret != len)
         return ERROR_FAIL;
-    }
-    ret = libxl_fd_set_cloexec(CTX, fd, 1);
-    if (ret) {
-        close(fd);
-        return ERROR_FAIL;
-    }
-
-    ret = libxl_read_exactly(CTX, fd, buf, len, dev, NULL);
-
-    close(fd);
-
-    return ret;
+    return 0;
 }
 
 int libxl__prepare_sockaddr_un(libxl__gc *gc,
-- 
2.25.1
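[Editorial note: the patch compares getrandom()'s single return value
against len. getrandom(2) guarantees no short read only for requests up to
256 bytes, and can be interrupted by a signal while blocking. A common
defensive pattern, shown below as a self-contained sketch rather than as
part of the patch (the helper name random_bytes is invented here), loops on
short reads and EINTR:]

```c
#include <errno.h>
#include <stddef.h>
#include <stdint.h>
#include <sys/random.h>

/*
 * Fill buf with len random bytes via getrandom(2).
 * Loops because getrandom() may return fewer bytes than requested for
 * large requests, and may fail with EINTR if interrupted by a signal.
 * Returns 0 on success, -1 on unrecoverable error.
 */
static int random_bytes(uint8_t *buf, size_t len)
{
    size_t got = 0;

    while (got < len) {
        ssize_t ret = getrandom(buf + got, len - got, 0);
        if (ret < 0) {
            if (errno == EINTR)
                continue;               /* interrupted: just retry */
            return -1;                  /* e.g. ENOSYS on old kernels */
        }
        got += (size_t)ret;
    }
    return 0;
}
```

For libxl's 128-byte seed the single-call form in the patch is within the
256-byte no-short-read guarantee, so the loop matters mainly for larger
requests or signal-heavy callers.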



From xen-devel-bounces@lists.xenproject.org Mon May 24 09:12:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 09:12:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131714.246029 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll6dL-0007lZ-G0; Mon, 24 May 2021 09:12:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131714.246029; Mon, 24 May 2021 09:12:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll6dL-0007lS-Cw; Mon, 24 May 2021 09:12:23 +0000
Received: by outflank-mailman (input) for mailman id 131714;
 Mon, 24 May 2021 09:12:22 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll6dK-0007lI-7O; Mon, 24 May 2021 09:12:22 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll6dK-0007Vs-0n; Mon, 24 May 2021 09:12:22 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ll6dJ-00081v-OF; Mon, 24 May 2021 09:12:21 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ll6dJ-0005Gr-Nj; Mon, 24 May 2021 09:12:21 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=qkFyYoPJ4DxHaJnlUtJmJSSRJzPIV5Ei8i0Zof0qlRw=; b=4YsgOBs4gLxDIXbSSGCH+ekpSl
	wqMhjc/jHlMgNlcPtOvLBpnsXPLzoO7aQdYVynJsB2rt+XKUk9ZQcS9Se+D9nZOirLeED66+JQj4O
	3q5hSPxbkChUFKbe/zt/AFc4DG4VEzjG/To+WdT+Kgy7Uvz+DNA5wkn2uIpdYcsQ9hBQ=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162136-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162136: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c4681547bcce777daf576925a966ffa824edd09d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 May 2021 09:12:21 +0000

flight 162136 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162136/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                c4681547bcce777daf576925a966ffa824edd09d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  296 days
Failing since        152366  2020-08-01 20:49:34 Z  295 days  501 attempts
Testing same since   162136  2021-05-24 01:10:36 Z    0 days    1 attempts

------------------------------------------------------------
6084 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1652359 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 24 10:10:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 10:10:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131724.246044 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll7XL-000543-6a; Mon, 24 May 2021 10:10:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131724.246044; Mon, 24 May 2021 10:10:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll7XL-00053w-32; Mon, 24 May 2021 10:10:15 +0000
Received: by outflank-mailman (input) for mailman id 131724;
 Mon, 24 May 2021 10:10:14 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=tM/V=KT=arm.com=penny.zheng@srs-us1.protection.inumbo.net>)
 id 1ll7XJ-00053q-Vu
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 10:10:14 +0000
Received: from EUR04-VI1-obe.outbound.protection.outlook.com (unknown
 [40.107.8.71]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d853f2f9-14ce-4504-917e-ed483488ba33;
 Mon, 24 May 2021 10:10:11 +0000 (UTC)
Received: from AM6P194CA0046.EURP194.PROD.OUTLOOK.COM (2603:10a6:209:84::23)
 by DB7PR08MB3676.eurprd08.prod.outlook.com (2603:10a6:10:4d::22) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.27; Mon, 24 May
 2021 10:10:09 +0000
Received: from AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
 (2603:10a6:209:84:cafe::b3) by AM6P194CA0046.outlook.office365.com
 (2603:10a6:209:84::23) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.23 via Frontend
 Transport; Mon, 24 May 2021 10:10:09 +0000
Received: from 64aa7808-outbound-1.mta.getcheckrecipient.com (63.35.35.123) by
 AM5EUR03FT013.mail.protection.outlook.com (10.152.16.140) with
 Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4129.25 via Frontend Transport; Mon, 24 May 2021 10:10:08 +0000
Received: ("Tessian outbound 3c287b285c95:v92");
 Mon, 24 May 2021 10:10:08 +0000
Received: from d58ca352fee6.2
 by 64aa7808-outbound-1.mta.getcheckrecipient.com id
 58B78053-CF44-4983-9605-7F314ED76FB2.1; 
 Mon, 24 May 2021 10:10:03 +0000
Received: from EUR01-VE1-obe.outbound.protection.outlook.com
 by 64aa7808-outbound-1.mta.getcheckrecipient.com with ESMTPS id d58ca352fee6.2
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384);
 Mon, 24 May 2021 10:10:03 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com (2603:10a6:803:10a::33)
 by VI1PR08MB4176.eurprd08.prod.outlook.com (2603:10a6:803:ec::25)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.23; Mon, 24 May
 2021 10:10:00 +0000
Received: from VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918]) by VE1PR08MB5215.eurprd08.prod.outlook.com
 ([fe80::2807:2ff9:e371:2918%7]) with mapi id 15.20.4150.027; Mon, 24 May 2021
 10:10:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d853f2f9-14ce-4504-917e-ed483488ba33
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=armh.onmicrosoft.com;
 s=selector2-armh-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=2qrLhbNtUuHXCY/13qfnE+S4fUhnoLYb40Q5r3kqjPI=;
 b=A0C1PlZjcALDIkn9D5nUX00McLMeUDs87KGsJyKWMW+LnK9iGmjt8Z2ATtFEcR/rrWs3d0gNxN+RPl2lQCfCYC0rQmmjvjpveE9x0OO845ohGwfpC8LHN4+wVxIcSi0kmaUjydNqdV4n36O4OcxBHF29WhOOUURI+pYQYkSfK14=
X-MS-Exchange-Authentication-Results: spf=pass (sender IP is 63.35.35.123)
 smtp.mailfrom=arm.com; lists.xenproject.org; dkim=pass (signature was
 verified) header.d=armh.onmicrosoft.com;lists.xenproject.org; dmarc=pass
 action=none header.from=arm.com;
Received-SPF: Pass (protection.outlook.com: domain of arm.com designates
 63.35.35.123 as permitted sender) receiver=protection.outlook.com;
 client-ip=63.35.35.123; helo=64aa7808-outbound-1.mta.getcheckrecipient.com;
X-CR-MTA-TID: 64aa7808
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=arm.com; dmarc=pass action=none header.from=arm.com; dkim=pass
 header.d=arm.com; arc=none
From: Penny Zheng <Penny.Zheng@arm.com>
To: Julien Grall <julien@xen.org>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, "sstabellini@kernel.org"
	<sstabellini@kernel.org>
CC: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
	nd <nd@arm.com>
Subject: RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
Thread-Topic: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
Thread-Index: AQHXS6W1I2rvO8FRzkm++uFejE9ZNKrpBh+AgAE7kqCACC9kgA==
Date: Mon, 24 May 2021 10:10:00 +0000
Message-ID:
 <VE1PR08MB521532DDD1CC4C8872AE08CFF7269@VE1PR08MB5215.eurprd08.prod.outlook.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-6-penny.zheng@arm.com>
 <e8e4148e-017b-955b-dd18-4576ce7c94ec@xen.org>
 <VE1PR08MB52156570D7795C3595674BE5F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
In-Reply-To:
 <VE1PR08MB52156570D7795C3595674BE5F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-ts-tracking-id: 5306F2AFB5E29C48A661B4F303BF7BC8.0
x-checkrecipientchecked: true
Authentication-Results-Original: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
x-originating-ip: [203.126.0.111]
x-ms-publictraffictype: Email
X-MS-Office365-Filtering-Correlation-Id: 22888666-a201-4f50-53a9-08d91e9c1c58
x-ms-traffictypediagnostic: VI1PR08MB4176:|DB7PR08MB3676:
x-ms-exchange-transport-forked: True
X-Microsoft-Antispam-PRVS:
	<DB7PR08MB36760892DF109FF9E9BA9FC6F7269@DB7PR08MB3676.eurprd08.prod.outlook.com>
x-checkrecipientrouted: true
nodisclaimer: true
x-ms-oob-tlc-oobclassifiers: OLM:2887;OLM:2887;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam-Untrusted: BCL:0;
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-MS-Exchange-Transport-CrossTenantHeadersStamped: VI1PR08MB4176
Original-Authentication-Results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=arm.com;
X-EOPAttributedMessage: 0
X-MS-Exchange-Transport-CrossTenantHeadersStripped:
 AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Office365-Filtering-Correlation-Id-Prvs:
	b6351386-4a59-413b-439a-08d91e9c173f
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	tz+EaMeeD47CWrn0yAW+jCcCiQ9OjMFriu7vhbmWYmtJUlqjAWS5BL4UDw0l4H7Hl/SSYczFRndlEHD+8BY9ihruy3gIc3u/3sDGn02WphUVZIWeKKVqGBG/Edzw2lyax4wkuNYbI+3rvrkA1rgxt0uPjAhhKcqUNotNtI11aLYN6YH1LogH0Rr5MkmzwDpcbbUuaGlQO4ZDp8CjoRvw1Jb5Mjl5FMGGF66t2AiArI8nPoZhbZzUkilethW4olXiDXk015+MfNIq7iSflDYmgZg5iQzQPS7pSOuK8eJLQ2C+rqbUjMRUZBF4/PcyY3jTl5KgF8WlesUYq3cieid7NUe2L7NalQRyHcQkPBLjACx+8YJd5YbpafEU0VKzH7Hs/ozeMU9GL/27/6N1gOUHnxYBulWnF4tlVZaRGzxvK0Q9aXK4vAjFY7RIrm6/kyiK4VE/gjE+DMjgwXn1fQrwQgsEppPO9NRSiEMENbzaqvK/B8zECKyaCe4n1HD0N5aX/ajZ7fS6G71Qnu/AKFgwax7wXkq+bjQsHqsdqnQ8TZvhiL0VcEq/d4eiPc45oCaPPLoP/ED1j/ABzfhIDRyjYNy45Msv3RFHfrVvtOxtOWf5jKJYvNadoVACdory+MiPRGRbDcKCo+GwFfgyCMWH3qL6bGkeYKpwBx19Lwg3JOs=
X-Forefront-Antispam-Report:
	CIP:63.35.35.123;CTRY:IE;LANG:en;SCL:1;SRV:;IPV:CAL;SFV:NSPM;H:64aa7808-outbound-1.mta.getcheckrecipient.com;PTR:ec2-63-35-35-123.eu-west-1.compute.amazonaws.com;CAT:NONE;SFS:(4636009)(346002)(136003)(396003)(376002)(39860400002)(36840700001)(46966006)(70586007)(356005)(8936002)(70206006)(82310400003)(9686003)(47076005)(83380400001)(86362001)(186003)(110136005)(36860700001)(5660300002)(478600001)(81166007)(8676002)(52536014)(82740400003)(7696005)(26005)(54906003)(336012)(53546011)(2906002)(55016002)(33656002)(6506007)(316002)(4326008);DIR:OUT;SFP:1101;
X-OriginatorOrg: arm.com
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2021 10:10:08.8258
 (UTC)
X-MS-Exchange-CrossTenant-Network-Message-Id: 22888666-a201-4f50-53a9-08d91e9c1c58
X-MS-Exchange-CrossTenant-Id: f34e5979-57d9-4aaa-ad4d-b122a662184d
X-MS-Exchange-CrossTenant-OriginalAttributedTenantConnectingIp: TenantId=f34e5979-57d9-4aaa-ad4d-b122a662184d;Ip=[63.35.35.123];Helo=[64aa7808-outbound-1.mta.getcheckrecipient.com]
X-MS-Exchange-CrossTenant-AuthSource:
	AM5EUR03FT013.eop-EUR03.prod.protection.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Anonymous
X-MS-Exchange-CrossTenant-FromEntityHeader: HybridOnPrem
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DB7PR08MB3676

Hi Julien

> -----Original Message-----
> From: Penny Zheng
> Sent: Wednesday, May 19, 2021 1:24 PM
> To: Julien Grall <julien@xen.org>; xen-devel@lists.xenproject.org;
> sstabellini@kernel.org
> Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> <Wei.Chen@arm.com>; nd <nd@arm.com>
> Subject: RE: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
>
> Hi Julien
>
> > -----Original Message-----
> > From: Julien Grall <julien@xen.org>
> > Sent: Tuesday, May 18, 2021 6:15 PM
> > To: Penny Zheng <Penny.Zheng@arm.com>; xen-devel@lists.xenproject.org;
> > sstabellini@kernel.org
> > Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>; Wei Chen
> > <Wei.Chen@arm.com>; nd <nd@arm.com>
> > Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
> >
> > Hi Penny,
> >
> > On 18/05/2021 06:21, Penny Zheng wrote:
> > > alloc_staticmem_pages is designated to allocate nr_pfns contiguous
> > > pages of static memory. And it is the equivalent of alloc_heap_pages
> > > for static memory.
> > > This commit only covers allocating at specified starting address.
> > >
> > > For each page, it shall check if the page is reserved
> > > (PGC_reserved) and free. It shall also do a set of necessary
> > > initialization, which are mostly the same ones in alloc_heap_pages,
> > > like, following the same cache-coherency policy and turning page
> > > status into PGC_state_used, etc.
> > >
> > > Signed-off-by: Penny Zheng <penny.zheng@arm.com>
> > > ---
> > >   xen/common/page_alloc.c | 64
> > +++++++++++++++++++++++++++++++++++++++++++++++
> > >   1 file changed, 64 insertions(+)
> > >
> > > diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c index
> > > 58b53c6ac2..adf2889e76 100644
> > > --- a/xen/common/page_alloc.c
> > > +++ b/xen/common/page_alloc.c
> > > @@ -1068,6 +1068,70 @@ static struct page_info *alloc_heap_pages(
> > >       return pg;
> > >   }
> > >
> > > +/*
> > > + * Allocate nr_pfns contiguous pages, starting at #start, of static memory.
> > > + * It is the equivalent of alloc_heap_pages for static memory  */
> > > +static struct page_info *alloc_staticmem_pages(unsigned long
> > > +nr_pfns,
> >
> > This wants to be nr_mfns.
> >
> > > +                                               paddr_t start,
> >
> > I would prefer if this helper takes an mfn_t in parameter.
> >
>
> Sure, I will change both.
>
> > > +                                               unsigned int
> > > +memflags) {
> > > +    bool need_tlbflush = false;
> > > +    uint32_t tlbflush_timestamp = 0;
> > > +    unsigned int i;
> > > +    struct page_info *pg;
> > > +    mfn_t s_mfn;
> > > +
> > > +    /* For now, it only supports allocating at specified address. */
> > > +    s_mfn = maddr_to_mfn(start);
> > > +    pg = mfn_to_page(s_mfn);
> >
> > We should avoid to make the assumption the start address will be valid.
> > So you want to call mfn_valid() first.
> >
> > At the same time, there is no guarantee that if the first page is
> > valid, then the next nr_pfns will be. So the check should be performed for all
> of them.
> >
>
> Ok. I'll do validation check on both of them.
>
> > > +    if ( !pg )
> > > +        return NULL;
> > > +
> > > +    for ( i = 0; i < nr_pfns; i++)
> > > +    {
> > > +        /*
> > > +         * Reference count must continuously be zero for free pages
> > > +         * of static memory(PGC_reserved).
> > > +         */
> > > +        ASSERT(pg[i].count_info & PGC_reserved);
> > > +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
> > > +        {
> > > +            printk(XENLOG_ERR
> > > +                    "Reference count must continuously be zero for free pages"
> > > +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
> > > +                    i, mfn_x(page_to_mfn(pg + i)),
> > > +                    pg[i].count_info, pg[i].tlbflush_timestamp);
> > > +            BUG();
> >
> > So we would crash Xen if the caller pass a wrong range. Is it what we want?
> >
> > Also, who is going to prevent concurrent access?
> >
>
> Sure, to fix concurrency issue, I may need to add one spinlock like `static
> DEFINE_SPINLOCK(staticmem_lock);`
>
> In current alloc_heap_pages, it will do similar check, that pages in free state
> MUST have zero reference count. I guess, if condition not met, there is no need
> to proceed.
>

Another thought on the concurrency problem: when constructing patch v2, do we
need to consider concurrency here at all?
heap_lock takes care of concurrent allocation on the one heap, but static memory
is always reserved for only one specific domain.

> > > +        }
> > > +
> > > +        if ( !(memflags & MEMF_no_tlbflush) )
> > > +            accumulate_tlbflush(&need_tlbflush, &pg[i],
> > > +                                &tlbflush_timestamp);
> > > +
> > > +        /*
> > > +         * Reserve flag PGC_reserved and change page state
> > > +         * to PGC_state_inuse.
> > > +         */
> > > +        pg[i].count_info = (pg[i].count_info & PGC_reserved) |
> > PGC_state_inuse;
> > > +        /* Initialise fields which have other uses for free pages. */
> > > +        pg[i].u.inuse.type_info = 0;
> > > +        page_set_owner(&pg[i], NULL);
> > > +
> > > +        /*
> > > +         * Ensure cache and RAM are consistent for platforms where the
> > > +         * guest can control its own visibility of/through the cache.
> > > +         */
> > > +        flush_page_to_ram(mfn_x(page_to_mfn(&pg[i])),
> > > +                            !(memflags & MEMF_no_icache_flush));
> > > +    }
> > > +
> > > +    if ( need_tlbflush )
> > > +        filtered_flush_tlb_mask(tlbflush_timestamp);
> > > +
> > > +    return pg;
> > > +}
> > > +
> > >   /* Remove any offlined page in the buddy pointed to by head. */
> > >   static int reserv
ZV9vZmZsaW5lZF9wYWdlKHN0cnVjdCBwYWdlX2luZm8gKmhlYWQpDQo+ID4gPiAgIHsNCj4gPiA+
DQo+ID4NCj4gPiBDaGVlcnMsDQo+ID4NCj4gPiAtLQ0KPiA+IEp1bGllbiBHcmFsbA0KPiANCj4g
Q2hlZXJzLA0KPiANCj4gUGVubnkgWmhlbmcNCg0KQ2hlZXJzDQoNClBlbm55DQo=


From xen-devel-bounces@lists.xenproject.org Mon May 24 10:24:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 10:24:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131731.246055 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll7lN-0006Yi-H0; Mon, 24 May 2021 10:24:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131731.246055; Mon, 24 May 2021 10:24:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ll7lN-0006Yb-Dw; Mon, 24 May 2021 10:24:45 +0000
Received: by outflank-mailman (input) for mailman id 131731;
 Mon, 24 May 2021 10:24:44 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1ll7lM-0006YV-J6
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 10:24:44 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ll7lM-0000KD-E9; Mon, 24 May 2021 10:24:44 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1ll7lM-0004TV-8X; Mon, 24 May 2021 10:24:44 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=rtuo2DCxsEZ49xpvK8D1WtcGheLua/e2XjF0pAI3r24=; b=pLofRS/t6yxMGRIn3LL1tynjHO
	zTuhKGXTli/GNBXwV8njhUImBHcwqFIIJkGALR7Vdo/9HS6xHUDirAxn/mlUBHWAK71pCWkE7fuwg
	aDSqWvW7HZJnyFTcQ7oz6qkk9popAdqsLycnu3ttO6fJBBxYN4gXp6I1vzsjOFeTCrQs=;
Subject: Re: [PATCH 05/10] xen/arm: introduce alloc_staticmem_pages
To: Penny Zheng <Penny.Zheng@arm.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 "sstabellini@kernel.org" <sstabellini@kernel.org>
Cc: Bertrand Marquis <Bertrand.Marquis@arm.com>, Wei Chen <Wei.Chen@arm.com>,
 nd <nd@arm.com>
References: <20210518052113.725808-1-penny.zheng@arm.com>
 <20210518052113.725808-6-penny.zheng@arm.com>
 <e8e4148e-017b-955b-dd18-4576ce7c94ec@xen.org>
 <VE1PR08MB52156570D7795C3595674BE5F72B9@VE1PR08MB5215.eurprd08.prod.outlook.com>
 <VE1PR08MB521532DDD1CC4C8872AE08CFF7269@VE1PR08MB5215.eurprd08.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d5da5783-be83-6162-e4ab-79326aac8edd@xen.org>
Date: Mon, 24 May 2021 11:24:42 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <VE1PR08MB521532DDD1CC4C8872AE08CFF7269@VE1PR08MB5215.eurprd08.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 24/05/2021 11:10, Penny Zheng wrote:
> Hi Julien

Hi Penny,

>>>> +    if ( !pg )
>>>> +        return NULL;
>>>> +
>>>> +    for ( i = 0; i < nr_pfns; i++)
>>>> +    {
>>>> +        /*
>>>> +         * Reference count must continuously be zero for free pages
>>>> +         * of static memory(PGC_reserved).
>>>> +         */
>>>> +        ASSERT(pg[i].count_info & PGC_reserved);
>>>> +        if ( (pg[i].count_info & ~PGC_reserved) != PGC_state_free )
>>>> +        {
>>>> +            printk(XENLOG_ERR
>>>> +                    "Reference count must continuously be zero for free pages"
>>>> +                    "pg[%u] MFN %"PRI_mfn" c=%#lx t=%#x\n",
>>>> +                    i, mfn_x(page_to_mfn(pg + i)),
>>>> +                    pg[i].count_info, pg[i].tlbflush_timestamp);
>>>> +            BUG();
>>>
>>> So we would crash Xen if the caller pass a wrong range. Is it what we want?
>>>
>>> Also, who is going to prevent concurrent access?
>>>
>>
>> Sure, to fix concurrency issue, I may need to add one spinlock like `static
>> DEFINE_SPINLOCK(staticmem_lock);`
>>
>> In current alloc_heap_pages, it will do similar check, that pages in free state
>> MUST have zero reference count. I guess, if condition not met, there is no need
>> to proceed.
>>
> 
> Another thought on concurrency problem, when constructing patch v2, do we need to
> consider concurrency here?
> heap_lock is to take care concurrent allocation on the one heap, but static memory is
> always reserved for only one specific domain.
In theory yes, but you are relying on the admin to correctly write the 
device-tree nodes.

You are probably not going to hit the problem today because the domains
are created one by one. But, as you may want to allocate memory at
runtime, it is quite important to protect the code from concurrent
access.

Here, you will likely want to use the heap_lock rather than a new lock.
That way you are also protected against concurrent access to count_info
from other parts of Xen.


Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 24 12:49:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 12:49:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131746.246071 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llA1A-00022q-3X; Mon, 24 May 2021 12:49:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131746.246071; Mon, 24 May 2021 12:49:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llA1A-00022j-0c; Mon, 24 May 2021 12:49:12 +0000
Received: by outflank-mailman (input) for mailman id 131746;
 Mon, 24 May 2021 12:49:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JxRd=KT=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1llA18-00022d-Hv
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 12:49:10 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 44020933-42a3-4b56-8c84-e982bb47f04f;
 Mon, 24 May 2021 12:49:08 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14OCiWQk172610;
 Mon, 24 May 2021 12:49:01 GMT
Received: from userp3030.oracle.com (userp3030.oracle.com [156.151.31.80])
 by aserp2130.oracle.com with ESMTP id 38pqfcaytm-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 24 May 2021 12:49:01 +0000
Received: from pps.filterd (userp3030.oracle.com [127.0.0.1])
 by userp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14OCj3r0097080;
 Mon, 24 May 2021 12:49:00 GMT
Received: from nam12-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam12lp2169.outbound.protection.outlook.com [104.47.59.169])
 by userp3030.oracle.com with ESMTP id 38pq2t96pf-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 24 May 2021 12:49:00 +0000
Received: from CH0PR10MB5020.namprd10.prod.outlook.com (2603:10b6:610:c0::22)
 by CH2PR10MB4133.namprd10.prod.outlook.com (2603:10b6:610:a6::7) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.27; Mon, 24 May
 2021 12:48:57 +0000
Received: from CH0PR10MB5020.namprd10.prod.outlook.com
 ([fe80::6cb6:faf9:b596:3b9a]) by CH0PR10MB5020.namprd10.prod.outlook.com
 ([fe80::6cb6:faf9:b596:3b9a%7]) with mapi id 15.20.4150.027; Mon, 24 May 2021
 12:48:57 +0000
Received: from [10.74.98.132] (160.34.88.132) by
 SA9PR13CA0037.namprd13.prod.outlook.com (2603:10b6:806:22::12) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.12 via Frontend Transport; Mon, 24 May 2021 12:48:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 44020933-42a3-4b56-8c84-e982bb47f04f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=HZm3Zn4yc2TvaiF4iqcad8aDOxnOocQRYrztVPiJ4U8=;
 b=VUSP8Qk9qxOfIe6WghE3kaW8NLr0it12hlTTqtmXnJ/PNCWiKBwnipS86ukwbnLRpmd+
 5A7npgxkSUo78JrVKCEX4F92xy8qD45C0bMzc06RJcGaLBblaRUekkqYCJYXj0Nv3vRs
 9tDU9brgpl+xIHYO2CCeLHRpmve3CW5bgjunj++gC7agSNPNTM7dCGrY+/YEQxwGJi1a
 s+sWGGgn0eDpkrUA9e/36JE+YV0k3XAZisyK0VY5EhhDvmP0llOqCnR4UN20w8Yitje1
 kMIjvvqaW67ycM+j6MdORuE9SdvwGvYR6EBww1lKDhkaPzhhCSbag7WakbnKfDKvImKk SA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eWqaQWmpaHE0pwX7k3K4bNFsYKDNshlb5Yc7I5582blSSlJyCfTIjROgggQKNDBamMZ1UqrY4JvQrL8s9mzw38tCjfVgehvBGj9kjtQgdeQQl2UECpiogamQbUPyDUGj+2knND+A2AqMYSluuwFpJyS3BfszBukYocsB/WR7c+sEvPYDsDF7G5gyVAe3C9GlN6O+4QaEAKpUBXaeHWByfxdccqaQ2dp/E+V1IxYnpPm5C8Q6vsX338LOsDZ3OkEAOD+50rY0Kg+1+Mole0Nc++WPPFiPEGPX3s49IYMmwhMVy/+qZ4mEtiECj3tpgmWRZlaGXQbexFIgUrxbn24KLg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HZm3Zn4yc2TvaiF4iqcad8aDOxnOocQRYrztVPiJ4U8=;
 b=At8/A6Rux06cXVHJMPxSXWkSu/NZ/9yeAH2MDDVWsi3/r+HVmNuQ57y8HbH1QazDsyBXXweRJ1uN9IUaGzBVHa7ZjfKUfDV69mg94xH+nokhqWKVAUc+ZMXkUkqUPuSl77DeXN5mW2IEdUUmaoGr/FBlHrUUbTIa32lkkfkwOQboWVe97diU+olvfgWoJZalBLY8Wv1rPzDcd2BUXW1ntPpmM2xI8AlHVsZRrDf2AN68jk8aBTmjqIrbf6Bwk85HyfnJC8JKQ4767BeZnn7gnrYrGblbOykfaaHluuLC2S1xzWYIaHOwRdDY29FIJXLPMpbW+B6XCrZGVZF0ktQQzA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=HZm3Zn4yc2TvaiF4iqcad8aDOxnOocQRYrztVPiJ4U8=;
 b=s2FHAmdqczzmhENZEYdRq9Df3tscgYyJFA2XUT5UPDi7Yx1K2cmMf4vImgQF+jc/J9buIEOWHR3gY6oua6GpgiKXc8Yn/q4XE7ihbO4wi5+3ytGDRE4BXVdz70Af31j2WQgpKUhww+RggPI58/bWpcvXE8p3SiEhyw70RteDM5I=
Authentication-Results: vger.kernel.org; dkim=none (message not signed)
 header.d=none;vger.kernel.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH -next] xen/pcpu: Use DEVICE_ATTR_RW macro
To: YueHaibing <yuehaibing@huawei.com>, jgross@suse.com,
        sstabellini@kernel.org
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <20210523070214.34948-1-yuehaibing@huawei.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <c27b37e3-7d4b-1ed1-2b8c-1fbde6e30c3b@oracle.com>
Date: Mon, 24 May 2021 08:48:53 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <20210523070214.34948-1-yuehaibing@huawei.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.88.132]
X-ClientProxiedBy: SA9PR13CA0037.namprd13.prod.outlook.com
 (2603:10b6:806:22::12) To CH0PR10MB5020.namprd10.prod.outlook.com
 (2603:10b6:610:c0::22)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: c420616f-ea22-46af-88da-08d91eb24b7a
X-MS-TrafficTypeDiagnostic: CH2PR10MB4133:
X-Microsoft-Antispam-PRVS: 
	<CH2PR10MB41331DBFFA0916826EB820C58A269@CH2PR10MB4133.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:962;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: c420616f-ea22-46af-88da-08d91eb24b7a
X-MS-Exchange-CrossTenant-AuthSource: CH0PR10MB5020.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2021 12:48:57.2195
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: LlwFnVuVrZFwTdXUMvWktk1XyM8NWD6Bi8hGDIhHdDYFpo9e5IE9CGnFv+O98H1TWgjsR5HPhN+tSeRU9M9LKupaH/Iafj22KKZaWhRNyJs=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR10MB4133
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9993 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 phishscore=0 malwarescore=0 spamscore=0
 adultscore=0 bulkscore=0 mlxlogscore=999 suspectscore=0 mlxscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105240083
X-Proofpoint-ORIG-GUID: 9N6i1w6HVHdfrVWS1x4Olg2NYGIUA5Gc
X-Proofpoint-GUID: 9N6i1w6HVHdfrVWS1x4Olg2NYGIUA5Gc
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9993 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 spamscore=0 mlxscore=0
 malwarescore=0 mlxlogscore=999 lowpriorityscore=0 impostorscore=0
 adultscore=0 phishscore=0 priorityscore=1501 clxscore=1011 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105240083


On 5/23/21 3:02 AM, YueHaibing wrote:
> Use DEVICE_ATTR_RW helper instead of plain DEVICE_ATTR,
> which makes the code a bit shorter and easier to read.
>
> Signed-off-by: YueHaibing <yuehaibing@huawei.com>


Do you think you can also make a similar change in drivers/xen/xen-balloon.c and drivers/xen/xenbus/xenbus_probe.c?



Thanks.

-boris



From xen-devel-bounces@lists.xenproject.org Mon May 24 12:54:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 12:54:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131752.246083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llA5u-0003Pi-Nt; Mon, 24 May 2021 12:54:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131752.246083; Mon, 24 May 2021 12:54:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llA5u-0003PV-Kw; Mon, 24 May 2021 12:54:06 +0000
Received: by outflank-mailman (input) for mailman id 131752;
 Mon, 24 May 2021 12:54:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1llA5t-0003O0-C7
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 12:54:05 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llA5s-0002oK-3d; Mon, 24 May 2021 12:54:04 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llA5r-00077Q-U5; Mon, 24 May 2021 12:54:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=AYRuHgm9Ode4vhzNE5GAcYKMADMPqsumL5rEhpavgaQ=; b=t5OOfiAuEdLKLWVZ82ntqRMNQs
	1c0dV+vLfQS3zcgdEiyzCrhfeD0vMYLEfwosCpCu9DeMbFVRzAyuhTaRFgqktGfKjD07TONHqE4JI
	upfJR8YWk9csBvrXQHvWnwEvT+FDbn0CxyCAtoMFtNrTR8pk102NBkiSSY9dihPo6L1U=;
Subject: Re: [XEN PATCH v1] libxl: use getrandom() syscall for random data
 extraction
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <20210524085858.1902-1-Sergiy_Kibrik@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <13bde708-1d87-a2c7-077f-f08db597ae15@xen.org>
Date: Mon, 24 May 2021 13:54:02 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210524085858.1902-1-Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 24/05/2021 09:58, Sergiy Kibrik wrote:
> Simplify libxl__random_bytes() routine by using a newer dedicated syscall.
> This allows not only to substantially reduce its footprint, but syscall also
> considered to be safer and generally better solution:
> 
> https://lwn.net/Articles/606141/
> 
> getrandom() available on Linux, FreeBSD and NetBSD.

 From the man page:

VERSIONS
       getrandom() was introduced in version 3.17 of the Linux kernel.
       Support was added to glibc in version 2.25.

If I am not mistaken glibc 2.25 was released in 2017. Also, the call was 
only introduced in FreeBSD 12.

So I think we want to check whether getrandom() can be used. We may also
want to consider falling back to reading /dev/urandom if the call returns
ENOSYS.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 24 13:03:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 13:03:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131760.246093 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llAEg-0004rG-Ko; Mon, 24 May 2021 13:03:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131760.246093; Mon, 24 May 2021 13:03:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llAEg-0004r9-Hn; Mon, 24 May 2021 13:03:10 +0000
Received: by outflank-mailman (input) for mailman id 131760;
 Mon, 24 May 2021 13:03:09 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llAEf-0004qi-F1; Mon, 24 May 2021 13:03:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llAEf-00030c-49; Mon, 24 May 2021 13:03:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llAEe-0002ct-TS; Mon, 24 May 2021 13:03:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llAEe-0000VL-T0; Mon, 24 May 2021 13:03:08 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=YhOcg2+cd8vmU9p9TrP1tAnrsGp3XiOhzb5rvaXhVZM=; b=KaRort0uhC+4dK1rNfTmzbW8ks
	Et+8hFkNmxG3yeWRvpit9eZGppidHxu8QcWRWYDwTI7eA2aDsxKY+k4Udpoi7RR7hU790PtOodgQ/
	kx5qEcQu6GAu+MQu7nqBQogHv08zjWRkkfMsj+vP+bRCRFyCj+QUJnoKRwS4P1i2cmfs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-i386-freebsd10-i386
Message-Id: <E1llAEe-0000VL-T0@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 May 2021 13:03:08 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-i386-freebsd10-i386
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160377/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-i386.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-i386.guest-saverestore --summary-out=tmp/162142.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-i386-freebsd10-i386 guest-saverestore
Searching for failure / basis pass:
 162135 fail [host=fiano0] / 160125 [host=albana1] 160119 [host=huxelrebe1] 160113 [host=elbling0] 160104 [host=chardonnay0] 160097 [host=albana0] 160091 [host=chardonnay1] 160088 [host=albana1] 160082 [host=albana1] 160079 [host=fiano1] 160070 [host=pinot1] 160066 ok.
Failure / basis pass flights: 162135 / 160066
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#4751a48aeb2ab828b0a5cbdc585fd3642967cda1-cfa6ffb113f2c0d922034cc77c0d6c52eea05497 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c74\
 37ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#3f8d1885e48e4d72eab0688f604de62e0aea7a38-3bbaed2cd0a02ee53958d3d2585e837bcf327278 git://xenbits.xen.org/osstest/seabios.git#b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee-b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee git://xenbits.xen.org/xen.git#9dc46386d89d83c73c41c2b19be83a73957c4393-aa77acc28098d04945af998f3fc0dbd3759b5b41
Loaded 72069 nodes in revision graph
Searching for test results:
 160048 []
 160050 []
 160057 []
 160062 []
 160064 []
 160066 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
 160070 [host=pinot1]
 160079 [host=fiano1]
 160082 [host=albana1]
 160088 [host=albana1]
 160091 [host=chardonnay1]
 160097 [host=albana0]
 160104 [host=chardonnay0]
 160113 [host=elbling0]
 160119 [host=huxelrebe1]
 160125 [host=albana1]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160221 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
 160287 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
 160293 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160302 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160310 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160318 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160330 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160341 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160345 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e8892db93f3fb6a7221f2d47f3c952a7e489737 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160346 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 9e7118023fda7c29016038e2292d4d14129b63da b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160347 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160349 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 773b0bc2838ede154c6de9d78401b91fafa91062 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 affc55e761ea4c96b9b2de582d813787a317aeda b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160351 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160354 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e7c6a8cf9f5c82aa152273e1c9e80d07b1b0c32c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160355 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2e8319d456724c3d8514d943dc4607e2f08e88a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160356 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160357 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 81cbfd5088690c53541ffd0d74851c8ab055a829 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160328 fail irrelevant
 160359 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 283d845c9164f57f5dba020a4783bb290493802f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160365 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
 160367 fail irrelevant
 160368 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 445a5b4087567bf4d4ce76d394adf78d9d5c88a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160371 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160372 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160373 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160374 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160375 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160377 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160361 fail irrelevant
 160392 fail irrelevant
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 []
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162140 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3f8d1885e48e4d72eab0688f604de62e0aea7a38 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9dc46386d89d83c73c41c2b19be83a73957c4393
 162142 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
Searching for interesting versions
 Result found: flight 160066 (pass), for basis pass
 Result found: flight 162116 (fail), for basis failure (at ancestor ~1)
 Repro found: flight 162140 (pass), for basis pass
 Repro found: flight 162142 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 160371 (pass), for last pass
 Result found: flight 160372 (fail), for first failure
 Repro found: flight 160373 (pass), for last pass
 Repro found: flight 160374 (fail), for first failure
 Repro found: flight 160375 (pass), for last pass
 Repro found: flight 160377 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160377/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.46102 to fit
pnmtopng: 228 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-i386-freebsd10-i386.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162142: tolerable FAIL

flight 162142 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162142/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-i386-freebsd10-i386 16 guest-saverestore     fail baseline untested


jobs:
 build-amd64                                                  pass    
 build-i386                                                   pass    
 test-amd64-i386-freebsd10-i386                               fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Mon May 24 13:03:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 13:03:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131762.246108 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llAEk-00059J-4n; Mon, 24 May 2021 13:03:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131762.246108; Mon, 24 May 2021 13:03:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llAEj-00059C-VJ; Mon, 24 May 2021 13:03:13 +0000
Received: by outflank-mailman (input) for mailman id 131762;
 Mon, 24 May 2021 13:03:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1llAEj-00058u-5j
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 13:03:13 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llAEh-00030j-Ue; Mon, 24 May 2021 13:03:11 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llAEh-00082C-Oz; Mon, 24 May 2021 13:03:11 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=QJ3IMcTv4kUxe9obVbYvMYWE9NpIOFpHsqzo25ed3xA=; b=AxpOyR4mBA8mMUdsQWbcaS6NSK
	Vzm0yxLwpv0k6SqfWOoVSgSb5c76EsNyN9g7nZew2SOxJxsL1oWg4FJOZ6QhsjndcWFSK7M7E44Uw
	a1ZtmlRFQbKsxcNhoCQ4nShkT2iAHoWYjM+9zp9hvnkiiAQBcy1sTcVLWF6ZD+lEQmkc=;
Subject: Re: [XEN PATCH v1] libxl/arm: provide guests with random seed
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>, xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <20210524080057.1773-1-Sergiy_Kibrik@epam.com>
From: Julien Grall <julien@xen.org>
Message-ID: <a3c51dea-80e5-df92-3757-72809ad96434@xen.org>
Date: Mon, 24 May 2021 14:03:10 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210524080057.1773-1-Sergiy_Kibrik@epam.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 24/05/2021 09:00, Sergiy Kibrik wrote:
> Pass random seed via FDT, so that guests' CRNGs are better seeded early at boot.
> Depending on its configuration Linux can use the seed as device randomness
> or to just quickly initialize CRNG.
> In either case this will provide extra randomness to further harden CRNG.
> 
> Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
> ---
>   tools/libxl/libxl_arm.c | 6 ++++++
>   1 file changed, 6 insertions(+)
> 
> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
> index 34f8a29056..05c58a428c 100644
> --- a/tools/libxl/libxl_arm.c
> +++ b/tools/libxl/libxl_arm.c
> @@ -342,6 +342,12 @@ static int make_chosen_node(libxl__gc *gc, void *fdt, bool ramdisk,
>           if (res) return res;
>       }
>   
> +    uint8_t seed[128];

I couldn't find any documentation for the property (although I have 
found code in Linux). Can you explain where the 128 comes from?

Also, local variables should be defined at the beginning of the function.

> +    res = libxl__random_bytes(gc, seed, sizeof(seed));
> +    if (res) return res;
> +    res = fdt_property(fdt, "rng-seed", seed, sizeof(seed));
> +    if (res) return res;
> +
>       res = fdt_end_node(fdt);
>       if (res) return res;

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Mon May 24 13:42:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 13:42:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131779.246119 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llAqh-00018B-5k; Mon, 24 May 2021 13:42:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131779.246119; Mon, 24 May 2021 13:42:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llAqh-000184-23; Mon, 24 May 2021 13:42:27 +0000
Received: by outflank-mailman (input) for mailman id 131779;
 Mon, 24 May 2021 13:42:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llAqf-00017u-P4; Mon, 24 May 2021 13:42:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llAqf-0003ep-GU; Mon, 24 May 2021 13:42:25 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llAqf-0004xj-4t; Mon, 24 May 2021 13:42:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llAqf-0003Q5-4L; Mon, 24 May 2021 13:42:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=oHVXhRfKBEfeBxgmm+Od5z/Tnvc/7ACN55CO5FchJpQ=; b=tNdMYwx7vSXKEo1+e4Zcq8T22t
	TUJy8KwqlQMe8tWuhgDINZTXUlP9Tt+m1omUnlywZoR//DbAfpkx4vb+CRkCyFd2Rwm+z8AJS5MGU
	paNrWE2Mskd9QFBgiaJYLkEabDyen2W7dpRyNajZrsBpdDufwxeJFhvoBjMhEuCIacGA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162137-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162137: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 May 2021 13:42:25 +0000

flight 162137 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162137/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162126
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162126
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162126
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162126
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162126
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162126
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162126
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162126
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162126
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162126
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162126
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162137  2021-05-24 01:51:42 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 24 14:34:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 14:34:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131791.246144 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBem-0006H0-GM; Mon, 24 May 2021 14:34:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131791.246144; Mon, 24 May 2021 14:34:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBem-0006Gt-CQ; Mon, 24 May 2021 14:34:12 +0000
Received: by outflank-mailman (input) for mailman id 131791;
 Mon, 24 May 2021 14:34:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SOm5=KT=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1llBel-0005zC-5X
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 14:34:11 +0000
Received: from mail-oi1-x22e.google.com (unknown [2607:f8b0:4864:20::22e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1c99d4de-3ba0-40f8-a9dd-0629675e6e51;
 Mon, 24 May 2021 14:34:07 +0000 (UTC)
Received: by mail-oi1-x22e.google.com with SMTP id c3so27217128oic.8
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 07:34:07 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id p22sm2840564oop.7.2021.05.24.07.34.06
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 07:34:07 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c99d4de-3ba0-40f8-a9dd-0629675e6e51
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=2sOrZe5bNOUYq7hgK+U1znFwa5rrkIKkNIboxtn+L9M=;
        b=liBlJJdT9IQFayphoQ9LiGylFPUQJJ9Hg3ZjBg+o3ODXfwDihgspY00McxaVXSPuHL
         9elBHtA8L/MPH7er9ChFvpGR7BIhImU/ltwyABaqJEmd9XFLDg/ZJQn0xpXF8aCa8L6M
         XNVj6SakXgxORS+T/Aw+xjtgle4zfwagVa8ikuJZGXMycauN2HD1+xDip6G8jerGlBrw
         oXWQ26wCtcmCCC3Uj7x9BCTC8pCZae6s0WhDXpAYEA+WHhVrWzJSQCAmFv0JNrQeZ7xI
         vJmTSyVLwnxup5vamxxQ3RlbXeHmt4Mvbn9CDx2g48M/e112BRaqwHu2PPAqH3pA7QFy
         Edsg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=2sOrZe5bNOUYq7hgK+U1znFwa5rrkIKkNIboxtn+L9M=;
        b=UAlI5l66mD2phq/rD6bw2M38DXB/pRfYo+GCX0haClg9nBr6oX+Pmz/Fg9jteZNSyg
         GI7NVhN+OSHhKk5CmeM2xux2NwIskmgjbrTc6zfxl2VYLidIUTbLH4Pf4gI5eTuZmSfV
         WQbaAUXLgz1oKQx0z1zWc6jYyJyT9C4zp4TpzZMum4WR7eJL3DTGKZV+qY5vQKt2pVO2
         VNeQeAQEzsfG8aix9Dx3fGx8gqW57/nZUYQOEPkEzWdO7DfnwO8pT+wbIAWlTXGmiElS
         OpEODE6zAiEw7zyeEOD/ylolDytD/kRp6mlaX06NNhXW/pQgZRpSX+H73pl4EnFCUQiK
         txYQ==
X-Gm-Message-State: AOAM5327NEUuUxLokXuxFI04Yk9Uj8RuQGiHyi3uhSRylMEALgpLvcOE
	jsvj2zf9YRDTDFEZegxFrhkquysxgQ14Iw==
X-Google-Smtp-Source: ABdhPJyhmWVfRrGIBU252wYHKyGOHu0QebZeNgiFtFSkhzEwr7qNY5bc4lnG0fnebbcYFIo7gPTrSw==
X-Received: by 2002:aca:494d:: with SMTP id w74mr11026457oia.47.1621866847260;
        Mon, 24 May 2021 07:34:07 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Alistair Francis <alistair.francis@wdc.com>
Subject: [PATCH v4 1/4] xen/char: Default HAS_NS16550 to y only for X86 and ARM
Date: Mon, 24 May 2021 08:34:25 -0600
Message-Id: <2a8329da33d2af0eab8581a01e3098e8360bda87.1621712830.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621712830.git.connojdavis@gmail.com>
References: <cover.1621712830.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Defaulting to yes only for X86 and ARM reduces the requirements
for a minimal build when porting new architectures.
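As an aside (a sketch of Kconfig semantics, not part of the patch
itself): Kconfig honours the first `default` line whose condition is
true, so a conditional default placed above an unconditional one takes
effect only where its condition holds:

```kconfig
config HAS_NS16550
	bool "NS16550 UART driver" if ARM
	default n if RISCV	# matches first on RISC-V, so the driver is off there
	default y		# every other architecture keeps the old default
```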

Signed-off-by: Connor Davis <connojdavis@gmail.com>
Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
---
 xen/drivers/char/Kconfig | 1 +
 1 file changed, 1 insertion(+)

diff --git a/xen/drivers/char/Kconfig b/xen/drivers/char/Kconfig
index b572305657..2ff5b288e2 100644
--- a/xen/drivers/char/Kconfig
+++ b/xen/drivers/char/Kconfig
@@ -1,5 +1,6 @@
 config HAS_NS16550
 	bool "NS16550 UART driver" if ARM
+	default n if RISCV
 	default y
 	help
 	  This selects the 16550-series UART support. For most systems, say Y.
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 14:34:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 14:34:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131792.246155 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBer-0006ai-PQ; Mon, 24 May 2021 14:34:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131792.246155; Mon, 24 May 2021 14:34:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBer-0006aZ-LL; Mon, 24 May 2021 14:34:17 +0000
Received: by outflank-mailman (input) for mailman id 131792;
 Mon, 24 May 2021 14:34:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SOm5=KT=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1llBeq-0005zC-5d
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 14:34:16 +0000
Received: from mail-ot1-x334.google.com (unknown [2607:f8b0:4864:20::334])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 74edadd2-c31c-444e-bd36-81b0bd7c7b2f;
 Mon, 24 May 2021 14:34:09 +0000 (UTC)
Received: by mail-ot1-x334.google.com with SMTP id
 u25-20020a0568302319b02902ac3d54c25eso25430402ote.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 07:34:09 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id p22sm2840564oop.7.2021.05.24.07.34.08
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 07:34:08 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 74edadd2-c31c-444e-bd36-81b0bd7c7b2f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=KDIsJWwedO49iEv9YVCuj+jt8d4VmFrWYbJWRHeDs+U=;
        b=Yh7vBBBU5OVB5ZSt8Zkl1ivh9VVDN84H7CvJXSSqYXgBNb9/qi/NFOcimfilfGx5DN
         rj2k1kDAP80gbtLGAQqltBvMeKD2610eSGLAShM4NbC5AgXNRyTbP0w29celQ65HIDkC
         c4UXNnOij/i9ZOl9GrwmP2yoqhYyShlP2+YzMkjpMkjVlC3/oE+VWj+aDIVEP9F+jE0A
         /DtKviQhW+WALICKHanjH6fwb/Ot6YA5980VlQNQmMzrqE0Q3JpovLBzYK0RYR65R5Zh
         uqvImHq0RQjgFG3kjeM1om8zzvdKzUkQXSBFi7EalKUZ/zX5rearK+386mplelwabxLX
         OQ2w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=KDIsJWwedO49iEv9YVCuj+jt8d4VmFrWYbJWRHeDs+U=;
        b=YO0oqy2LePJvoJ9jMV0m/4nooW/9hKs+PkJj6grie3JlfHcEUp5r2u5ZY09RYAzPoT
         cu6Li4Oyz6hqSUpM/v2zWq6QitFqqySlPZO6x/Lrc3Wo6TVcxN9vC6FZXBwpF/Y1EMlr
         Jdhvm2my/RIWX1DfZn2x/jr+re064+UpPGH4d7d0s8TLuQvZvbQ37iL1E3T4yzqozFgy
         D0iHVM9ulQZ/S30Lv+slne+DZqC9SN4eb5YlENfSye5o0AbIZURrQKaDuLbi5pPkvp1O
         GioRrKOmfLdMXrzMM06QtiOv/Wn77fH8DW+ZBdbsNREI626ftRxj+AfC7AHCgWD8H88j
         CC0w==
X-Gm-Message-State: AOAM532pbcgB42VVzZFTCSTqbUlJlYc4rAdlLiU+3jTUqV3agTyFpdjK
	F1E0bdqQYe3rsglZz0wfmcsPVE7cfJvjpw==
X-Google-Smtp-Source: ABdhPJx1K+5syu74ayMih8RaFCc1VU+YarG3lhylzhElWyrmdpxlimkDUFSXTgl1VRbzNh28REIuiA==
X-Received: by 2002:a9d:7cd8:: with SMTP id r24mr19213213otn.90.1621866848860;
        Mon, 24 May 2021 07:34:08 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>
Subject: [PATCH v4 2/4] xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
Date: Mon, 24 May 2021 08:34:26 -0600
Message-Id: <f76852db6601b1bf243781b0b8b16c7a6fdc8a01.1621712830.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621712830.git.connojdavis@gmail.com>
References: <cover.1621712830.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The variables iommu_enabled and iommu_dont_flush_iotlb are defined in
drivers/passthrough/iommu.c and are referenced in common code, which
causes the link to fail when !CONFIG_HAS_PASSTHROUGH.

Guard references to these variables in common code so that Xen
builds when !CONFIG_HAS_PASSTHROUGH.

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 xen/common/memory.c     | 10 ++++++++++
 xen/include/xen/iommu.h |  8 +++++++-
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/xen/common/memory.c b/xen/common/memory.c
index b5c70c4b85..72a6b70cb5 100644
--- a/xen/common/memory.c
+++ b/xen/common/memory.c
@@ -294,7 +294,9 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
     p2m_type_t p2mt;
 #endif
     mfn_t mfn;
+#ifdef CONFIG_HAS_PASSTHROUGH
     bool *dont_flush_p, dont_flush;
+#endif
     int rc;
 
 #ifdef CONFIG_X86
@@ -385,13 +387,17 @@ int guest_remove_page(struct domain *d, unsigned long gmfn)
      * Since we're likely to free the page below, we need to suspend
      * xenmem_add_to_physmap()'s suppressing of IOMMU TLB flushes.
      */
+#ifdef CONFIG_HAS_PASSTHROUGH
     dont_flush_p = &this_cpu(iommu_dont_flush_iotlb);
     dont_flush = *dont_flush_p;
     *dont_flush_p = false;
+#endif
 
     rc = guest_physmap_remove_page(d, _gfn(gmfn), mfn, 0);
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     *dont_flush_p = dont_flush;
+#endif
 
     /*
      * With the lack of an IOMMU on some platforms, domains with DMA-capable
@@ -839,11 +845,13 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
     xatp->gpfn += start;
     xatp->size -= start;
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     if ( is_iommu_enabled(d) )
     {
        this_cpu(iommu_dont_flush_iotlb) = 1;
        extra.ppage = &pages[0];
     }
+#endif
 
     while ( xatp->size > done )
     {
@@ -868,6 +876,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         }
     }
 
+#ifdef CONFIG_HAS_PASSTHROUGH
     if ( is_iommu_enabled(d) )
     {
         int ret;
@@ -894,6 +903,7 @@ int xenmem_add_to_physmap(struct domain *d, struct xen_add_to_physmap *xatp,
         if ( unlikely(ret) && rc >= 0 )
             rc = ret;
     }
+#endif
 
     return rc;
 }
diff --git a/xen/include/xen/iommu.h b/xen/include/xen/iommu.h
index 460755df29..904cdf725d 100644
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -51,9 +51,15 @@ static inline bool_t dfn_eq(dfn_t x, dfn_t y)
     return dfn_x(x) == dfn_x(y);
 }
 
-extern bool_t iommu_enable, iommu_enabled;
+extern bool_t iommu_enable;
 extern bool force_iommu, iommu_quarantine, iommu_verbose;
 
+#ifdef CONFIG_HAS_PASSTHROUGH
+extern bool iommu_enabled;
+#else
+#define iommu_enabled false
+#endif
+
 #ifdef CONFIG_X86
 extern enum __packed iommu_intremap {
    /*
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 14:34:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 14:34:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131790.246133 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBei-00060B-3q; Mon, 24 May 2021 14:34:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131790.246133; Mon, 24 May 2021 14:34:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBei-000604-0L; Mon, 24 May 2021 14:34:08 +0000
Received: by outflank-mailman (input) for mailman id 131790;
 Mon, 24 May 2021 14:34:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SOm5=KT=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1llBeg-0005zC-Cp
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 14:34:06 +0000
Received: from mail-oi1-x22d.google.com (unknown [2607:f8b0:4864:20::22d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ef3ee336-5f12-4f8c-b3c8-a34df3002263;
 Mon, 24 May 2021 14:34:05 +0000 (UTC)
Received: by mail-oi1-x22d.google.com with SMTP id d21so27198404oic.11
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 07:34:05 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id p22sm2840564oop.7.2021.05.24.07.34.03
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 07:34:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ef3ee336-5f12-4f8c-b3c8-a34df3002263
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=q1zjNygg3B7WteZ4Lbuck9mxhDDUSA/SZm+5egQkVsA=;
        b=AYpWY6MvjUwqiZfOme1wDIaeSZDRMf8LeXC2CgWsB2DQFV/nTAZkJer1Ow4PLz4qiP
         ZKGskVToy6hyiYcuBMmX0TrYp7DfZYSDU4320eBXZxScjR1qv+yR8Z7NhAlMOdQkJWlI
         EsmJQISeNEjMS1Ue7UmFfeWZAEiO+M/BLGUuKwCb/61mzVQCR9o33WTSIfcNAnWRvDk0
         LzQJ8I7Wk4/7jnMPEhN0FMpfRjNI1Z/QrmzOVUEiM1v2A8tUIBjyKn5anJL5fzUGvTZm
         nKc3SywYILL7y5MxJBJF1JwXUF7eh75b/cG1YN3/R01zq87L9ib6GjaiLyDiTkQPFvjv
         tlcw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=q1zjNygg3B7WteZ4Lbuck9mxhDDUSA/SZm+5egQkVsA=;
        b=S0kWx0f5/cNEHL+yQObSlQRPGfo0NY1RihtTO2gWCzhYEknhzoNt3QUET8bSx+HHZa
         6ZC9FyX4Wb6nSaAIhoBKFO5QQGQ5H7nsAhGiEidTJ2Ho9NHYot3KinGbBlrTEHkREJBc
         quLmaozlmg6FaV5cCK0dObH9LL44EQQZcYD/KZxUfFjZUv9tTdT8yGeP+9PYZkf65VVC
         MerlM8eluI0nUCwWWQxoB1BkaP5bJTMvP5lGkRVcB73BqPXAadryfoU/qUx1joAVWTRK
         ccUYixMQbtcV6BmSc8PNPj6dNWGbBBaLcP/09MMFVQDlHfPVFX8lIc6g+stid9vRk29G
         5meA==
X-Gm-Message-State: AOAM532N0hzawf6QngZlopK9Xa4sDe3+PDmlgLj4gftSHmiCL6q0PZTE
	Xec7ZI9C5NrbbBJ+drIxC1Bpa4m/0TQzKA==
X-Google-Smtp-Source: ABdhPJzA1mz2bVBRqZO1uu/tfVl5uInLR8dFA7CSAogpdfcg7TRo3KvL5giiQS6eyM3yy4t2T9es5g==
X-Received: by 2002:aca:2b17:: with SMTP id i23mr11116150oik.87.1621866844656;
        Mon, 24 May 2021 07:34:04 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>,
	Paul Durrant <paul@xen.org>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v4 0/4] Minimal build for RISCV
Date: Mon, 24 May 2021 08:34:24 -0600
Message-Id: <cover.1621712830.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Hi all,

This series introduces a minimal build for RISCV. It is based on Bobby's
previous work from last year[0] rebased onto current Xen.

This series provides the patches necessary to get a minimal build
working. The build is "minimal" in the sense that it supports building
only TARGET=head.o, and arch/riscv/head.S is just a simple while(1).

The first two patches modify non-RISCV code to enable building a
config with:

  !CONFIG_HAS_NS16550
  !CONFIG_HAS_PASSTHROUGH

respectively. The third patch adds the make/Kconfig boilerplate
alongside head.S and asm-riscv/config.h (head.S references the ENTRY
macro, which is defined in asm-riscv/config.h).

The last patch adds a Docker container for performing the build. After
creating the container locally, you can run the following to build from it:

  $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
  $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o

[0] https://lore.kernel.org/xen-devel/cover.1579615303.git.bobbyeshleman@gmail.com/

Thanks,
Connor

--
Changes since v3:
  - Dropped "xen: Fix build when !CONFIG_GRANT_TABLE" since this was
    applied by Jan
  - Adjusted Kconfig condition for building NS16550
  - Use bool rather than bool_t
  - Removed riscv memory map, as this should probably be done later once
    the frametable size is figured out
  - Consolidated 64-bit #defines in asm-riscv/config.h
  - Renamed riscv64_defconfig to tiny64_defconfig, added CONFIG_DEBUG
    and CONFIG_DEBUG_INFO
  - Fixed logic/alignment/whitespace issues in Kconfig files
  - Use upstream archlinux riscv64 cross-compiler packages instead of
    custom built toolchain in docker container

Changes since v2:
  - Reduced number of riscv files added to ease review

Changes since v1:
  - Dropped "xen/sched: Fix build when NR_CPUS == 1" since this was
    fixed for 4.15
  - Moved #ifdef-ary around iommu_enabled to iommu.h
  - Moved struct grant_table declaration above ifdef CONFIG_GRANT_TABLE
    instead of defining an empty struct when !CONFIG_GRANT_TABLE
--
Connor Davis (4):
  xen/char: Default HAS_NS16550 to y only for X86 and ARM
  xen/common: Guard iommu symbols with CONFIG_HAS_PASSTHROUGH
  xen: Add files needed for minimal riscv build
  automation: Add container for riscv64 builds

 automation/build/archlinux/riscv64.dockerfile | 19 ++++++++
 automation/scripts/containerize               |  1 +
 config/riscv.mk                               |  4 ++
 xen/Makefile                                  |  8 +++-
 xen/arch/riscv/Kconfig                        | 47 +++++++++++++++++++
 xen/arch/riscv/Kconfig.debug                  |  0
 xen/arch/riscv/Makefile                       |  0
 xen/arch/riscv/Rules.mk                       |  0
 xen/arch/riscv/arch.mk                        | 14 ++++++
 xen/arch/riscv/asm-offsets.c                  |  0
 xen/arch/riscv/configs/tiny64_defconfig       | 13 +++++
 xen/arch/riscv/head.S                         |  6 +++
 xen/common/memory.c                           | 10 ++++
 xen/drivers/char/Kconfig                      |  1 +
 xen/include/asm-riscv/config.h                | 47 +++++++++++++++++++
 xen/include/xen/iommu.h                       |  8 +++-
 16 files changed, 175 insertions(+), 3 deletions(-)
 create mode 100644 automation/build/archlinux/riscv64.dockerfile
 create mode 100644 config/riscv.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/asm-offsets.c
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/head.S
 create mode 100644 xen/include/asm-riscv/config.h


base-commit: d4fb5f166c2bfbaf9ba0de69da0d411288f437a9
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 14:34:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 14:34:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131793.246166 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBex-0006xB-2P; Mon, 24 May 2021 14:34:23 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131793.246166; Mon, 24 May 2021 14:34:23 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBew-0006x0-U5; Mon, 24 May 2021 14:34:22 +0000
Received: by outflank-mailman (input) for mailman id 131793;
 Mon, 24 May 2021 14:34:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SOm5=KT=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1llBev-0005zC-5n
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 14:34:21 +0000
Received: from mail-oi1-x234.google.com (unknown [2607:f8b0:4864:20::234])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b94264e8-3989-4a34-abd9-b21143b6886e;
 Mon, 24 May 2021 14:34:11 +0000 (UTC)
Received: by mail-oi1-x234.google.com with SMTP id d21so27198680oic.11
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 07:34:11 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id p22sm2840564oop.7.2021.05.24.07.34.09
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 07:34:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b94264e8-3989-4a34-abd9-b21143b6886e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Z03iMc+ZiqkEReP0ZHr/r/QRsiH4nGo1RN/qJx+zcmg=;
        b=PlzWCYrOTrOYfnUWI9YFPX7BO+qCPijYAuG8H5Wr40Y4V+Yd/aFM7Q8AOyrY8NHVhN
         mqIN7v8gkf+pdWQVJFlMviYtmWz40zZKBpc9s2vldsfhdVbyTGkbpDUHuyED78qO/iJd
         MFmLi2HL8AgUh1Be/Jqy6HWZQngz55KAuUV9yrXdH8z97I8OTISe79BqTilwy7Zt2++o
         vNBQ2nWW+whMEz710wigf3FVaehPdB9wX+nSyV3cmY+kjNIZnDxMT6A5+BQylfmql0og
         163um5lk8iCAqyjoA1mAgrbriFrTPdxsujgtkrlD9CBemaZRJEAjwhlSmCPUpPnDcwwo
         5jVg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Z03iMc+ZiqkEReP0ZHr/r/QRsiH4nGo1RN/qJx+zcmg=;
        b=FHo9mh79HRDBy49DF4WMJU8iqOjdYUCpqiQrTYI+Hs0z2PcMaw8O+8RDkVkGWtwHUp
         7Em8Gs0eXsEk5+nI7njglUHM+otKTzpCEWKfHW/U2aA81N8TPGKcX+xpPaaeLIxBJjd3
         3kzITAqbBx3KYn85Cdgj5utWqUu6VLWeCO+xi/OmI7xeXUC3/6FhDyb+iaNIFrH+ZQ1u
         s+luSnuydIstpFIgpNmQ+nnQBujyhw8lCUebAC1WOw1Lk3CrBZ4aOVA+BOOIS64ZFADz
         Zj4nh3XaDHDAxu/wT3hq+lS0ZvIV7aBZ0Xg69atrudkBNng3SzkIj0E2VdW3S8rkmF1W
         Nkgw==
X-Gm-Message-State: AOAM533B1XC62bXlBHHiqbZfBpZ3BAH1iWPaeanhxiyE/u/IH6L3hGjY
	Up+kLzaxVywFwSq0QiUT9ntPefWihFwvvg==
X-Google-Smtp-Source: ABdhPJzEjFTW4jW83mjIZnAMaQ84d3qND0LGuVdB83a6CH2v/PsZ0iPrvuly6aa0XB0twH39/VrqyA==
X-Received: by 2002:aca:4cc4:: with SMTP id z187mr10880438oia.157.1621866850389;
        Mon, 24 May 2021 07:34:10 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Julien Grall <julien@xen.org>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
Date: Mon, 24 May 2021 08:34:27 -0600
Message-Id: <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621712830.git.connojdavis@gmail.com>
References: <cover.1621712830.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the arch-specific makefiles and configs needed to build for
riscv, along with a minimal head.S that is a simple infinite loop.
head.o can be built with

$ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
$ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o

No other TARGET is supported at the moment.
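The xen/Makefile hunks below extend a sed-based mapping from
XEN_TARGET_ARCH to the arch source directory. The mapping itself can be
exercised standalone (a sketch only, using the same sed expressions as
the Makefile, with a hypothetical helper name; GNU sed is assumed for
the \| alternation):

```shell
#!/bin/sh
# Standalone sketch of the SRCARCH/TARGET_ARCH mapping:
# x86* -> x86, arm32/arm64 -> arm, and now riscv* -> riscv.
map_srcarch() {
    echo "$1" | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
                    -e s'/riscv.*/riscv/g'
}

map_srcarch riscv64   # riscv
map_srcarch x86_64    # x86
map_srcarch arm32     # arm
```

Any future riscv32 target would fall out of the same riscv.* rule
without further Makefile changes.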

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 config/riscv.mk                         |  4 +++
 xen/Makefile                            |  8 +++--
 xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
 xen/arch/riscv/Kconfig.debug            |  0
 xen/arch/riscv/Makefile                 |  0
 xen/arch/riscv/Rules.mk                 |  0
 xen/arch/riscv/arch.mk                  | 14 ++++++++
 xen/arch/riscv/asm-offsets.c            |  0
 xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
 xen/arch/riscv/head.S                   |  6 ++++
 xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
 11 files changed, 137 insertions(+), 2 deletions(-)
 create mode 100644 config/riscv.mk
 create mode 100644 xen/arch/riscv/Kconfig
 create mode 100644 xen/arch/riscv/Kconfig.debug
 create mode 100644 xen/arch/riscv/Makefile
 create mode 100644 xen/arch/riscv/Rules.mk
 create mode 100644 xen/arch/riscv/arch.mk
 create mode 100644 xen/arch/riscv/asm-offsets.c
 create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
 create mode 100644 xen/arch/riscv/head.S
 create mode 100644 xen/include/asm-riscv/config.h

diff --git a/config/riscv.mk b/config/riscv.mk
new file mode 100644
index 0000000000..2b2cc2e63a
--- /dev/null
+++ b/config/riscv.mk
@@ -0,0 +1,4 @@
+CONFIG_RISCV := y
+CONFIG_RISCV_$(XEN_OS) := y
+
+CONFIG_XEN_INSTALL_SUFFIX :=
diff --git a/xen/Makefile b/xen/Makefile
index 9f3be7766d..3a1ff0045b 100644
--- a/xen/Makefile
+++ b/xen/Makefile
@@ -26,7 +26,9 @@ MAKEFLAGS += -rR
 EFI_MOUNTPOINT ?= $(BOOT_DIR)/efi
 
 ARCH=$(XEN_TARGET_ARCH)
-SRCARCH=$(shell echo $(ARCH) | sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+SRCARCH=$(shell echo $(ARCH) | \
+          sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+              -e s'/riscv.*/riscv/g')
 
 # Don't break if the build process wasn't called from the top level
 # we need XEN_TARGET_ARCH to generate the proper config
@@ -35,7 +37,8 @@ include $(XEN_ROOT)/Config.mk
 # Set ARCH/SUBARCH appropriately.
 export TARGET_SUBARCH  := $(XEN_TARGET_ARCH)
 export TARGET_ARCH     := $(shell echo $(XEN_TARGET_ARCH) | \
-                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g')
+                            sed -e 's/x86.*/x86/' -e s'/arm\(32\|64\)/arm/g' \
+                                -e s'/riscv.*/riscv/g')
 
 # Allow someone to change their config file
 export KCONFIG_CONFIG ?= .config
@@ -335,6 +338,7 @@ _clean: delete-unfresh-files
 	$(MAKE) $(clean) xsm
 	$(MAKE) $(clean) crypto
 	$(MAKE) $(clean) arch/arm
+	$(MAKE) $(clean) arch/riscv
 	$(MAKE) $(clean) arch/x86
 	$(MAKE) $(clean) test
 	$(MAKE) -f $(BASEDIR)/tools/kconfig/Makefile.kconfig ARCH=$(ARCH) SRCARCH=$(SRCARCH) clean
diff --git a/xen/arch/riscv/Kconfig b/xen/arch/riscv/Kconfig
new file mode 100644
index 0000000000..bd8381c5e0
--- /dev/null
+++ b/xen/arch/riscv/Kconfig
@@ -0,0 +1,47 @@
+config RISCV
+	def_bool y
+
+config RISCV_64
+	def_bool y
+	select 64BIT
+
+config ARCH_DEFCONFIG
+	string
+	default "arch/riscv/configs/tiny64_defconfig"
+
+menu "Architecture Features"
+
+source "arch/Kconfig"
+
+endmenu
+
+menu "ISA Selection"
+
+choice
+	prompt "Base ISA"
+	default RISCV_ISA_RV64IMA if RISCV_64
+	help
+	  This selects the base ISA extensions that Xen will target.
+
+config RISCV_ISA_RV64IMA
+	bool "RV64IMA"
+	help
+	  Use the RV64I base ISA, plus the "M" and "A" extensions
+	  for integer multiply/divide and atomic instructions, respectively.
+
+endchoice
+
+config RISCV_ISA_C
+	bool "Compressed extension"
+	help
+	  Add "C" to the ISA subsets that the toolchain is allowed to
+	  emit when building Xen, which results in compressed instructions
+	  in the Xen binary.
+
+	  If unsure, say N.
+
+endmenu
+
+source "common/Kconfig"
+
+source "drivers/Kconfig"
diff --git a/xen/arch/riscv/Kconfig.debug b/xen/arch/riscv/Kconfig.debug
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Makefile b/xen/arch/riscv/Makefile
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/Rules.mk b/xen/arch/riscv/Rules.mk
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/arch.mk b/xen/arch/riscv/arch.mk
new file mode 100644
index 0000000000..53dadb8975
--- /dev/null
+++ b/xen/arch/riscv/arch.mk
@@ -0,0 +1,14 @@
+########################################
+# RISCV-specific definitions
+
+CFLAGS-$(CONFIG_RISCV_64) += -mabi=lp64
+
+riscv-march-$(CONFIG_RISCV_ISA_RV64IMA) := rv64ima
+riscv-march-$(CONFIG_RISCV_ISA_C)       := $(riscv-march-y)c
+
+# Note that -mcmodel=medany is used so that Xen can be mapped
+# into the upper half _or_ the lower half of the address space.
+# -mcmodel=medlow would force Xen into the lower half.
+
+CFLAGS += -march=$(riscv-march-y) -mstrict-align -mcmodel=medany
+CFLAGS += -I$(BASEDIR)/include
diff --git a/xen/arch/riscv/asm-offsets.c b/xen/arch/riscv/asm-offsets.c
new file mode 100644
index 0000000000..e69de29bb2
diff --git a/xen/arch/riscv/configs/tiny64_defconfig b/xen/arch/riscv/configs/tiny64_defconfig
new file mode 100644
index 0000000000..3c9a2ff941
--- /dev/null
+++ b/xen/arch/riscv/configs/tiny64_defconfig
@@ -0,0 +1,13 @@
+# CONFIG_SCHED_CREDIT is not set
+# CONFIG_SCHED_RTDS is not set
+# CONFIG_SCHED_NULL is not set
+# CONFIG_SCHED_ARINC653 is not set
+# CONFIG_TRACEBUFFER is not set
+# CONFIG_HYPFS is not set
+# CONFIG_GRANT_TABLE is not set
+# CONFIG_SPECULATIVE_HARDEN_ARRAY is not set
+
+CONFIG_RISCV_64=y
+CONFIG_DEBUG=y
+CONFIG_DEBUG_INFO=y
+CONFIG_EXPERT=y
diff --git a/xen/arch/riscv/head.S b/xen/arch/riscv/head.S
new file mode 100644
index 0000000000..0dbc27ba75
--- /dev/null
+++ b/xen/arch/riscv/head.S
@@ -0,0 +1,6 @@
+#include <asm/config.h>
+
+        .text
+
+ENTRY(start)
+        j  start
diff --git a/xen/include/asm-riscv/config.h b/xen/include/asm-riscv/config.h
new file mode 100644
index 0000000000..e2ae21de61
--- /dev/null
+++ b/xen/include/asm-riscv/config.h
@@ -0,0 +1,47 @@
+#ifndef __RISCV_CONFIG_H__
+#define __RISCV_CONFIG_H__
+
+#if defined(CONFIG_RISCV_64)
+# define LONG_BYTEORDER 3
+# define ELFSIZE 64
+# define MAX_VIRT_CPUS 128u
+#else
+# error "Unsupported RISCV variant"
+#endif
+
+#define BYTES_PER_LONG (1 << LONG_BYTEORDER)
+#define BITS_PER_LONG  (BYTES_PER_LONG << 3)
+#define POINTER_ALIGN  BYTES_PER_LONG
+
+#define BITS_PER_LLONG 64
+
+/* xen_ulong_t is always 64 bits */
+#define BITS_PER_XEN_ULONG 64
+
+#define CONFIG_RISCV_L1_CACHE_SHIFT 6
+#define CONFIG_PAGEALLOC_MAX_ORDER  18
+#define CONFIG_DOMU_MAX_ORDER       9
+#define CONFIG_HWDOM_MAX_ORDER      10
+
+#define OPT_CONSOLE_STR "dtuart"
+#define INVALID_VCPU_ID MAX_VIRT_CPUS
+
+/* Linkage for RISCV */
+#ifdef __ASSEMBLY__
+#define ALIGN .align 2
+
+#define ENTRY(name)                                \
+  .globl name;                                     \
+  ALIGN;                                           \
+  name:
+#endif
+
+#endif /* __RISCV_CONFIG_H__ */
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * indent-tabs-mode: nil
+ * End:
+ */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 14:34:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 14:34:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131795.246177 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBf1-0007Mi-CU; Mon, 24 May 2021 14:34:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131795.246177; Mon, 24 May 2021 14:34:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llBf1-0007MR-8V; Mon, 24 May 2021 14:34:27 +0000
Received: by outflank-mailman (input) for mailman id 131795;
 Mon, 24 May 2021 14:34:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=SOm5=KT=gmail.com=connojdavis@srs-us1.protection.inumbo.net>)
 id 1llBf0-0005zC-61
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 14:34:26 +0000
Received: from mail-ot1-x329.google.com (unknown [2607:f8b0:4864:20::329])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a12a2de7-aaba-43cb-bf74-89ba5790a82e;
 Mon, 24 May 2021 14:34:12 +0000 (UTC)
Received: by mail-ot1-x329.google.com with SMTP id
 66-20020a9d02c80000b02903615edf7c1aso3513289otl.13
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 07:34:12 -0700 (PDT)
Received: from localhost.localdomain (142-79-211-230.starry-inc.net.
 [142.79.211.230])
 by smtp.gmail.com with ESMTPSA id p22sm2840564oop.7.2021.05.24.07.34.11
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 07:34:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a12a2de7-aaba-43cb-bf74-89ba5790a82e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=RTdpcVmRCgop2NexjHiymnhaCvkDQbmbyYol6ymHZ0c=;
        b=DSO2OyD5D9kLgCVqNzrAi7JG6GGuxr2HFfxpL5r9b2U1dLCDw+K6qYCC5CA6jdM1QV
         KI0AtKZHzIbGrpzw/Z2yIt9hJe1t9Q0LJb+mo2MHSIs7abKua9k1oSktcEt4+D2qzgFa
         lG+iMQfclCjhF7I052CgVYvUBArqF4qtDTEtS2xGOe67G24wcX8dtX1uaLzlwDN2Gitk
         ddTg5Fnf1TXMXDh8DnLIAVVw/c1E1iwgDwQOJh5dqc8c+h88iWDVeusXfQU/zXNFASSE
         gsvIFMePrNIYTKIkkumGyURco4W3DZQ9r2Vm1tLtzsDx4qFf/LOfqtE9njEc1VIFzpwK
         oAQQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=RTdpcVmRCgop2NexjHiymnhaCvkDQbmbyYol6ymHZ0c=;
        b=EwrDXVKt/U8r59l6s11p1VK36r/AK6t5DhrABRWYyOV6lQ52bdKKScwSvggWOgmAdN
         UEjUBQnGBnyzOhP9vxtdT9ZMPk6pjRQd7a13Y6cwLKzZw02tmyJ9FMVGzS5HeaLoHFlL
         oUfgFMzxrfmlaxzwnBEwErWOvNm8dHjL23hBaRnP9Pk+R8lpD1yWELmn59k60a6kVgap
         BJXCuuIGUhYOBcnQWmxTOWihqqdhtqx6G3xq9fM39HtqK7hsNTyft3OyFy1fr4Q9FrUI
         qGIsmiECenFz9bLAwnUeZ03cCMx8JmSDeLqhM6NhOi4aSbuSfKIylLcIffIM4JWqJ4Pp
         UnQg==
X-Gm-Message-State: AOAM532xv+6C2P6rcc8+K4GPWmRuDDTxvaCqYcZ3icngbxb5XFUYFD99
	XkZ7ZBosO2MW3MQz3N0WB/gQohHwHOf+9g==
X-Google-Smtp-Source: ABdhPJzGTihKkS6tM2uyS4Otw8RMydzhTSScHWdE1C1EXcQp6qQnRPQ5ke+MftA703SlnjRukwOfxQ==
X-Received: by 2002:a9d:855:: with SMTP id 79mr18230805oty.36.1621866852191;
        Mon, 24 May 2021 07:34:12 -0700 (PDT)
From: Connor Davis <connojdavis@gmail.com>
To: xen-devel@lists.xenproject.org
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
	Alistair Francis <alistair23@gmail.com>,
	Connor Davis <connojdavis@gmail.com>,
	Doug Goldstein <cardoe@cardoe.com>
Subject: [PATCH v4 4/4] automation: Add container for riscv64 builds
Date: Mon, 24 May 2021 08:34:28 -0600
Message-Id: <e43a22c723b0320883e4f0dc3d08287937d29181.1621712830.git.connojdavis@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <cover.1621712830.git.connojdavis@gmail.com>
References: <cover.1621712830.git.connojdavis@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a container for cross-compiling Xen to riscv64. It includes only
the cross-compiler and the packages necessary for building Xen itself
(packages for tools, stubdoms, etc., can be added later).

Signed-off-by: Connor Davis <connojdavis@gmail.com>
---
 automation/build/archlinux/riscv64.dockerfile | 19 +++++++++++++++++++
 automation/scripts/containerize               |  1 +
 2 files changed, 20 insertions(+)
 create mode 100644 automation/build/archlinux/riscv64.dockerfile

diff --git a/automation/build/archlinux/riscv64.dockerfile b/automation/build/archlinux/riscv64.dockerfile
new file mode 100644
index 0000000000..ff8b2b955d
--- /dev/null
+++ b/automation/build/archlinux/riscv64.dockerfile
@@ -0,0 +1,19 @@
+FROM archlinux
+LABEL maintainer.name="The Xen Project" \
+      maintainer.email="xen-devel@lists.xenproject.org"
+
+# Packages needed for the build
+RUN pacman --noconfirm --needed -Syu \
+    base-devel \
+    git \
+    inetutils \
+    riscv64-linux-gnu-binutils \
+    riscv64-linux-gnu-gcc \
+    riscv64-linux-gnu-glibc
+
+# Add compiler path
+ENV CROSS_COMPILE=riscv64-linux-gnu-
+
+RUN useradd --create-home user
+USER user
+WORKDIR /build
diff --git a/automation/scripts/containerize b/automation/scripts/containerize
index b7c81559fb..59edf0ba40 100755
--- a/automation/scripts/containerize
+++ b/automation/scripts/containerize
@@ -26,6 +26,7 @@ BASE="registry.gitlab.com/xen-project/xen"
 case "_${CONTAINER}" in
     _alpine) CONTAINER="${BASE}/alpine:3.12" ;;
     _archlinux|_arch) CONTAINER="${BASE}/archlinux:current" ;;
+    _riscv64) CONTAINER="${BASE}/archlinux:riscv64" ;;
     _centos7) CONTAINER="${BASE}/centos:7" ;;
     _centos72) CONTAINER="${BASE}/centos:7.2" ;;
     _fedora) CONTAINER="${BASE}/fedora:29";;
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 15:51:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 15:51:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131827.246187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llCqw-0007Nr-2p; Mon, 24 May 2021 15:50:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131827.246187; Mon, 24 May 2021 15:50:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llCqv-0007Nk-Vl; Mon, 24 May 2021 15:50:49 +0000
Received: by outflank-mailman (input) for mailman id 131827;
 Mon, 24 May 2021 15:50:48 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oIFU=KT=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1llCqu-0007Nd-94
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 15:50:48 +0000
Received: from mx0b-00069f02.pphosted.com (unknown [205.220.177.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f271c372-0fb3-4366-9954-331ff2e3729a;
 Mon, 24 May 2021 15:50:47 +0000 (UTC)
Received: from pps.filterd (m0246630.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14OFVtat011754; Mon, 24 May 2021 15:49:46 GMT
Received: from oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 38qxvxra4a-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 24 May 2021 15:49:46 +0000
Received: from userp3020.oracle.com (userp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14OFjkZ9021181;
 Mon, 24 May 2021 15:49:44 GMT
Received: from nam11-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam11lp2176.outbound.protection.outlook.com [104.47.57.176])
 by userp3020.oracle.com with ESMTP id 38qbqrbn5j-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 24 May 2021 15:49:44 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BY5PR10MB4116.namprd10.prod.outlook.com (2603:10b6:a03:203::13)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.26; Mon, 24 May
 2021 15:49:42 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4150.027; Mon, 24 May 2021
 15:49:42 +0000
Received: from 0xbeefdead.lan (130.44.160.152) by
 BL1PR13CA0277.namprd13.prod.outlook.com (2603:10b6:208:2bc::12) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.11 via Frontend
 Transport; Mon, 24 May 2021 15:49:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f271c372-0fb3-4366-9954-331ff2e3729a
Date: Mon, 24 May 2021 11:49:34 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
        Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
        Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com,
        jgross@suse.com, Christoph Hellwig <hch@lst.de>,
        Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org,
        paulus@samba.org,
        "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
        sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
        grant.likely@arm.com, xypron.glpk@gmx.de,
        Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
        bauerman@linux.ibm.com, peterz@infradead.org,
        Greg KH <gregkh@linuxfoundation.org>,
        Saravana Kannan <saravanak@google.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
        heikki.krogerus@linux.intel.com,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Randy Dunlap <rdunlap@infradead.org>,
        Dan Williams <dan.j.williams@intel.com>,
        Bartosz Golaszewski <bgolaszewski@baylibre.com>,
        linux-devicetree <devicetree@vger.kernel.org>,
        lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
        xen-devel@lists.xenproject.org,
        Nicolas Boichat <drinkcat@chromium.org>,
        Jim Quinlan <james.quinlan@broadcom.com>,
        Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
        Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
        Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
        dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
        joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
        maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
        rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 04/15] swiotlb: Add restricted DMA pool initialization
Message-ID: <YKvLDlnns3TWEZ5l@0xbeefdead.lan>
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-5-tientzu@chromium.org>
 <CALiNf2_AWsnGqCnh02ZAGt+B-Ypzs1=-iOG2owm4GZHz2JAc4A@mail.gmail.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf2_AWsnGqCnh02ZAGt+B-Ypzs1=-iOG2owm4GZHz2JAc4A@mail.gmail.com>
X-Originating-IP: [130.44.160.152]
X-ClientProxiedBy: BL1PR13CA0277.namprd13.prod.outlook.com
 (2603:10b6:208:2bc::12) To BYAPR10MB2999.namprd10.prod.outlook.com
 (2603:10b6:a03:85::27)
MIME-Version: 1.0

On Tue, May 18, 2021 at 02:48:35PM +0800, Claire Chang wrote:
> I didn't move this to a separate file because I feel it might be
> confusing for swiotlb_alloc/free (and need more functions to be
> non-static).
> Maybe instead of moving to a separate file, we can try to come up with
> a better naming?

I think you are referring to:

rmem_swiotlb_setup

?

Which is ARM-specific, and yet sits inside the generic code?

<sigh>

Christoph wants to unify it across all the code so there is one single
source, but the "separate arch code out from generic code" principle
makes me want to move it out.

I agree that if we move it out of the generic code into arch-specific
code, we have to expose more of the swiotlb functions, which would undo
Christoph's cleanup work.

How about this: let's leave it as is for now, and when there are more
use-cases we can revisit it and decide then whether the code needs to move?
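For reference, rmem_swiotlb_setup is the reserved-memory hook this series
registers, and the kind of device-tree node it consumes looks roughly like
the sketch below. The "restricted-dma-pool" compatible string comes from the
binding added by this series; the addresses, sizes, and node names are made
up purely for illustration:

```dts
reserved-memory {
        #address-cells = <2>;
        #size-cells = <2>;
        ranges;

        /* Illustrative pool; base address and 4 MiB size are arbitrary. */
        restricted_dma: restricted-dma@50000000 {
                compatible = "restricted-dma-pool";
                reg = <0x0 0x50000000 0x0 0x400000>;
        };
};

/* A device opts in by pointing at the pool; "pcie" is a placeholder. */
pcie: pcie@0 {
        memory-region = <&restricted_dma>;
};
```

The debate above is only about where the C code that parses such nodes
should live, not about the binding itself.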



From xen-devel-bounces@lists.xenproject.org Mon May 24 15:53:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 15:53:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131835.246199 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llCtF-00083v-Jq; Mon, 24 May 2021 15:53:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131835.246199; Mon, 24 May 2021 15:53:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llCtF-00083o-FC; Mon, 24 May 2021 15:53:13 +0000
Received: by outflank-mailman (input) for mailman id 131835;
 Mon, 24 May 2021 15:53:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oIFU=KT=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1llCtD-00083C-Ic
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 15:53:11 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5b40676-a84f-4cfa-8a9c-40fea868600c;
 Mon, 24 May 2021 15:53:10 +0000 (UTC)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14OFVodS005312; Mon, 24 May 2021 15:51:29 GMT
Received: from oracle.com (userp3020.oracle.com [156.151.31.79])
 by mx0b-00069f02.pphosted.com with ESMTP id 38r267r8mt-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 24 May 2021 15:51:29 +0000
Received: from userp3020.oracle.com (userp3020.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14OFlg7X024516;
 Mon, 24 May 2021 15:51:28 GMT
Received: from nam04-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam08lp2048.outbound.protection.outlook.com [104.47.74.48])
 by userp3020.oracle.com with ESMTP id 38qbqrbpx4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 24 May 2021 15:51:27 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BYAPR10MB3096.namprd10.prod.outlook.com (2603:10b6:a03:151::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.26; Mon, 24 May
 2021 15:51:23 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4150.027; Mon, 24 May 2021
 15:51:23 +0000
Received: from 0xbeefdead.lan (130.44.160.152) by
 BL1PR13CA0025.namprd13.prod.outlook.com (2603:10b6:208:256::30) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.11 via Frontend
 Transport; Mon, 24 May 2021 15:51:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5b40676-a84f-4cfa-8a9c-40fea868600c
Date: Mon, 24 May 2021 11:51:15 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
        Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
        Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com,
        jgross@suse.com, Christoph Hellwig <hch@lst.de>,
        Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org,
        paulus@samba.org,
        "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
        sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
        grant.likely@arm.com, xypron.glpk@gmx.de,
        Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
        bauerman@linux.ibm.com, peterz@infradead.org,
        Greg KH <gregkh@linuxfoundation.org>,
        Saravana Kannan <saravanak@google.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
        heikki.krogerus@linux.intel.com,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Randy Dunlap <rdunlap@infradead.org>,
        Dan Williams <dan.j.williams@intel.com>,
        Bartosz Golaszewski <bgolaszewski@baylibre.com>,
        linux-devicetree <devicetree@vger.kernel.org>,
        lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
        xen-devel@lists.xenproject.org,
        Nicolas Boichat <drinkcat@chromium.org>,
        Jim Quinlan <james.quinlan@broadcom.com>,
        Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
        Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
        Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
        dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
        joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
        maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
        rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 05/15] swiotlb: Add a new get_io_tlb_mem getter
Message-ID: <YKvLc9onyqdsINP7@0xbeefdead.lan>
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-6-tientzu@chromium.org>
 <CALiNf28ke3c91Y7xaHUgvJePKXqYA7UmsYJV9yaeZc3-4Lzs8Q@mail.gmail.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf28ke3c91Y7xaHUgvJePKXqYA7UmsYJV9yaeZc3-4Lzs8Q@mail.gmail.com>
X-Originating-IP: [130.44.160.152]
X-ClientProxiedBy: BL1PR13CA0025.namprd13.prod.outlook.com
 (2603:10b6:208:256::30) To BYAPR10MB2999.namprd10.prod.outlook.com
 (2603:10b6:a03:85::27)
MIME-Version: 1.0

On Tue, May 18, 2021 at 02:51:52PM +0800, Claire Chang wrote:
> Still keep this function because directly using dev->dma_io_tlb_mem
> will cause issues for memory allocation for existing devices. The pool
> can't support atomic coherent allocation so we need to distinguish the
> per device pool and the default pool in swiotlb_alloc.

The explanation above should really be rolled into the commit message. You
could prefix it with "The reason it was done this way is that directly using .."




From xen-devel-bounces@lists.xenproject.org Mon May 24 15:54:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 15:54:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131840.246210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llCu9-0000Fg-TY; Mon, 24 May 2021 15:54:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131840.246210; Mon, 24 May 2021 15:54:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llCu9-0000FZ-QI; Mon, 24 May 2021 15:54:09 +0000
Received: by outflank-mailman (input) for mailman id 131840;
 Mon, 24 May 2021 15:54:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=oIFU=KT=oracle.com=konrad.wilk@srs-us1.protection.inumbo.net>)
 id 1llCu8-0000Db-4B
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 15:54:08 +0000
Received: from mx0a-00069f02.pphosted.com (unknown [205.220.165.32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd74501a-e2cc-452c-a69f-042f5a19cf0a;
 Mon, 24 May 2021 15:54:07 +0000 (UTC)
Received: from pps.filterd (m0246629.ppops.net [127.0.0.1])
 by mx0b-00069f02.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14OFW6We006046; Mon, 24 May 2021 15:53:36 GMT
Received: from oracle.com (aserp3030.oracle.com [141.146.126.71])
 by mx0b-00069f02.pphosted.com with ESMTP id 38r267r8ng-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 24 May 2021 15:53:35 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.podrdrct (8.16.0.36/8.16.0.36) with SMTP id 14OFp07H032145;
 Mon, 24 May 2021 15:53:34 GMT
Received: from nam04-bn8-obe.outbound.protection.outlook.com
 (mail-bn8nam08lp2046.outbound.protection.outlook.com [104.47.74.46])
 by aserp3030.oracle.com with ESMTP id 38pr0b4701-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Mon, 24 May 2021 15:53:34 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com (2603:10b6:a03:85::27)
 by BYAPR10MB3096.namprd10.prod.outlook.com (2603:10b6:a03:151::23)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.26; Mon, 24 May
 2021 15:53:32 +0000
Received: from BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d]) by BYAPR10MB2999.namprd10.prod.outlook.com
 ([fe80::8111:d8f1:c262:808d%6]) with mapi id 15.20.4150.027; Mon, 24 May 2021
 15:53:32 +0000
Received: from 0xbeefdead.lan (130.44.160.152) by
 MN2PR19CA0027.namprd19.prod.outlook.com (2603:10b6:208:178::40) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.24 via Frontend
 Transport; Mon, 24 May 2021 15:53:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd74501a-e2cc-452c-a69f-042f5a19cf0a
Date: Mon, 24 May 2021 11:53:23 -0400
From: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
To: Claire Chang <tientzu@chromium.org>
Cc: Florian Fainelli <f.fainelli@gmail.com>, Rob Herring <robh+dt@kernel.org>,
        mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
        Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
        boris.ostrovsky@oracle.com, jgross@suse.com,
        Christoph Hellwig <hch@lst.de>,
        Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org,
        paulus@samba.org,
        "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
        sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
        grant.likely@arm.com, xypron.glpk@gmx.de,
        Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
        bauerman@linux.ibm.com, peterz@infradead.org,
        Greg KH <gregkh@linuxfoundation.org>,
        Saravana Kannan <saravanak@google.com>,
        "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
        heikki.krogerus@linux.intel.com,
        Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
        Randy Dunlap <rdunlap@infradead.org>,
        Dan Williams <dan.j.williams@intel.com>,
        Bartosz Golaszewski <bgolaszewski@baylibre.com>,
        linux-devicetree <devicetree@vger.kernel.org>,
        lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
        xen-devel@lists.xenproject.org,
        Nicolas Boichat <drinkcat@chromium.org>,
        Jim Quinlan <james.quinlan@broadcom.com>,
        Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
        Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
        Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
        dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
        jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
        joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
        maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
        rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
Message-ID: <YKvL865kutnHqkVc@0xbeefdead.lan>
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-2-tientzu@chromium.org>
 <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
 <CALiNf2-9fRbH3Xs=fA+N1iRztFxeC0iTsyOSZFe=F42uwXS0Sg@mail.gmail.com>
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf2-9fRbH3Xs=fA+N1iRztFxeC0iTsyOSZFe=F42uwXS0Sg@mail.gmail.com>
X-Originating-IP: [130.44.160.152]
X-ClientProxiedBy: MN2PR19CA0027.namprd19.prod.outlook.com
 (2603:10b6:208:178::40) To BYAPR10MB2999.namprd10.prod.outlook.com
 (2603:10b6:a03:85::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 08e4fc1b-8f07-4c22-09a0-08d91ecc1491
X-MS-TrafficTypeDiagnostic: BYAPR10MB3096:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<BYAPR10MB3096B6F574AFAD453EFAC5A989269@BYAPR10MB3096.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:6790;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	TsJFGyxezPXIndmWi2tIGB7pMN9ztv7RcFh168v9DkAiVW2GKDctVYUGiOFhQ65EfuLsPAI/crH2cmy51WbjsE+UPrzhoqZY9akD49FZIsRBPacfO7wa2plu24YYqvOttqec+l2eEpgNCfxrV2My8AbdEF+loaulWwCHyOn846TrliNRs7X80aFOWzGw4rBKgzATs8gKfwYhb3wI0Imk5Y4wZyx0B0CuCx012vBA+S0JIkdP2xQcuvIzuSoKJCj3wqO2knz+4NY7suIQ0+1/+szYf/vLFfvy5XwswfhHg5JbYyNVLmaRZkr8G4+2iJUBC7FqWCx5fPpaAyusjIEeVbtL7JTGAL8QkZqNBEDqze5VtYo8O5fIhpYtcdMTF8lf+/AhItZMc66uEWaraKGoJKonCl6gXeKPujgZxwYZ0TSCuHoPdPvTOnFiTw3KJd3j5IKj/y8cd7+D/dhvEnpN3mjj2aws6VpeZmZKN+PqJD9wHv/+RiuZZw88ZvSBo6j8SBZDNX+qwpE+3xoKsYss89gA7e5iw3xLLGPfxqare4eKZ44CKeV7r6bQh5O0f9NBNeW0C222H6ws3WGueEz4j/Jrg5yFWuBkniBSux3rgeUoEjAp24+JF8dHNibzQpHgDF3VPEaWL3wA/mthvpfUig==
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BYAPR10MB2999.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(366004)(376002)(346002)(136003)(39860400002)(8936002)(8886007)(9686003)(4326008)(66946007)(66556008)(66476007)(52116002)(6666004)(5660300002)(36756003)(2906002)(6506007)(7366002)(6916009)(7406005)(8676002)(7696005)(7416002)(316002)(4744005)(956004)(186003)(26005)(16526019)(54906003)(38100700002)(38350700002)(86362001)(478600001)(55016002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?us-ascii?Q?yKZMqEftj8fUJPi6Gyydk5po/IDtKE3QYW3QlmDLM/6uQUS6+ICA21HhoxgS?=
 =?us-ascii?Q?jHHidKgvmQF/Nl8QSARzWqhX9WBedrpIvmjSFG0Rig0vOTBbCIiEzrJjwBr2?=
 =?us-ascii?Q?17RwKzfLDCfh1aHF1ZZvAffYkZ3JEOGtqVpr1PDS2Mct4inKjy2f4b2W6nBb?=
 =?us-ascii?Q?SPSoi+hmUoOyWw4a/bTFsBdMtILjK+Qo+jXVUJX2ik6qkmmBwZlkWrk4RtDA?=
 =?us-ascii?Q?JBiS3T2Z7MhOs0xnSWDMS2LQaTTodv/AaKsOcTAFx2mclhdJ7M4ooVxJX8Al?=
 =?us-ascii?Q?hu1Cu4e2dbTKd1E9crSyo1mlfTgh6Pvfz/MR67ErwCxjMP4iw1NfrFPDy41+?=
 =?us-ascii?Q?uVAI4SLPPSFDnMa05NusOWQRmDUzFJyaJ9TDeN4cAmySEAI4GmBC4WhFZvBe?=
 =?us-ascii?Q?iZLVXOtgmo7W7EjABhuFXFwt5gOL3dsawXN7xkQVccDrWzaV4I6xPYoyiyBl?=
 =?us-ascii?Q?/aBPELm/xb6BaBum6PGNxB8jkLL9X+bplmVOM++x2tmZiIryCJrZdAMS+x0D?=
 =?us-ascii?Q?vbTB+9rxYjEzDZ6kpMx/wSRreQ+GNs+KRF8Kfuh4+Lu1pCBUTHjMErUHiEyq?=
 =?us-ascii?Q?QumlmHebWtQa2+NEngx696gtrakLYFyhD9u9/FtmJrh8Fl743/tEANM3sFnf?=
 =?us-ascii?Q?jc/Bj8iq3zUviqaNWfQxv9wWvwaZi9d5/rbPTxyBZ/8tnTjzuZhEZeLkFo9b?=
 =?us-ascii?Q?aqMHMQxj7MV3OsKZZ2Z+fWJFuJiGFlxaVt4+2aKHBWZHPx1vfhET9LAom8tJ?=
 =?us-ascii?Q?9aZ8Q9ycO7qrWSWM9F+0UeRbbsQHvB8O14Ux5jpSt8T/xbS57V+Ku0AvtALo?=
 =?us-ascii?Q?IKyxBZiri/8wLBrS4sHR5bTZiA45+9Hkzb4zSh11tSHu4/Pie49bN7O8YCB/?=
 =?us-ascii?Q?Pt/weHq391Bs8WTdtMNQLegh9m1EoceInNv87VsmKVSZUKVpG33mWYxJFWCt?=
 =?us-ascii?Q?8/n3OTqlNAEtJtLBUmmQJpWNQh651apWmpmhYlZz4sBpKd31vwWaEmeqCJXj?=
 =?us-ascii?Q?pOreHHq5xq2OXKe1aM14dswYBBAnLOYH8fwErlnKE1VGTgS2kA5RgvCUTFNH?=
 =?us-ascii?Q?bksPyUmQPHvuajUQgBC8a2ZzDKfK5aatcPSS18ZUIvRXPCeVxEVFvBkGo0Gn?=
 =?us-ascii?Q?wgdFgSfytUyX/Phq2pKifHFpQn1a+0Pc+jSWN3SmW6S+K3haInt8K0R3IWiq?=
 =?us-ascii?Q?a47x0pHOP/cj77wIWDcwPHxWq13gTpFVVCPvnw+LDKTsxQ3iJsHzPAx9Ga6k?=
 =?us-ascii?Q?p6vbF9UEKrYNSIg83nDLsQEzqcPd/n0nECi3zijW7veVC/h2pA3664PQZfsm?=
 =?us-ascii?Q?zshbFkdLz+9Mtut9qMmvuDTe?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 08e4fc1b-8f07-4c22-09a0-08d91ecc1491
X-MS-Exchange-CrossTenant-AuthSource: BYAPR10MB2999.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 24 May 2021 15:53:31.8226
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TwmaXt8HqSELbqkBV+B/DF3SjeYdChpXLoD9nd5acJv8xcUKGm9Cdiv+uBhCXl66yAuGOgtW2aJp6yfIsmpQDw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BYAPR10MB3096
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9993 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 bulkscore=0 spamscore=0
 mlxlogscore=999 malwarescore=0 adultscore=0 phishscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105240095
X-Proofpoint-ORIG-GUID: EHAtG7WMwVF7Lhz1byowmXmOg4cDSTpJ
X-Proofpoint-GUID: EHAtG7WMwVF7Lhz1byowmXmOg4cDSTpJ

> > do the set_memory_decrypted()+memset(). Is this okay or should
> > swiotlb_init_io_tlb_mem() add an additional argument to do this
> > conditionally?
> 
> I'm actually not sure if this is okay. If not, I will add an additional
> argument for it.

Have you observed any issues so far? (I want to make sure my mental
cache of set_memory_decrypted's semantics is still correct.)
> 
> > --
> > Florian


From xen-devel-bounces@lists.xenproject.org Mon May 24 16:29:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 16:29:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131850.246221 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llDRn-0003wp-OI; Mon, 24 May 2021 16:28:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131850.246221; Mon, 24 May 2021 16:28:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llDRn-0003wi-Kj; Mon, 24 May 2021 16:28:55 +0000
Received: by outflank-mailman (input) for mailman id 131850;
 Mon, 24 May 2021 16:28:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YRiH=KT=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1llDRn-0003wc-43
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 16:28:55 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2077500b-ac23-4838-a498-f1eecc541bfc;
 Mon, 24 May 2021 16:28:54 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.94.2 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1llDRk-0009Qa-AT; Mon, 24 May 2021 16:28:52 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2077500b-ac23-4838-a498-f1eecc541bfc
Date: Mon, 24 May 2021 17:28:52 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	George Dunlap <george.dunlap@citrix.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau =?iso-8859-1?Q?Monn=E9?= <roger.pau@citrix.com>
Subject: Re: [PATCH] x86/shadow: fix DO_UNSHADOW()
Message-ID: <YKvURGHd+gLiGtti@deinos.phlegethon.org>
References: <cdee4753-674d-23a3-7b94-fed9f2bdd0c1@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <cdee4753-674d-23a3-7b94-fed9f2bdd0c1@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

At 14:36 +0200 on 19 May (1621434982), Jan Beulich wrote:
> When adding the HASH_CALLBACKS_CHECK() I failed to properly recognize
> the (somewhat unusually formatted) if() around the call to
> hash_domain_foreach(). Gcc 11 is absolutely right in pointing out the
> apparently misleading indentation. Besides adding the missing braces,
> also adjust the two oddly formatted if()-s in the macro.
> 
> Fixes: 90629587e16e ("x86/shadow: replace stale literal numbers in hash_{vcpu,domain}_foreach()")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Reviewed-by: Tim Deegan <tim@xen.org>


From xen-devel-bounces@lists.xenproject.org Mon May 24 16:46:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 16:46:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131857.246232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llDiQ-0006BS-QN; Mon, 24 May 2021 16:46:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131857.246232; Mon, 24 May 2021 16:46:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llDiQ-0006BL-Mu; Mon, 24 May 2021 16:46:06 +0000
Received: by outflank-mailman (input) for mailman id 131857;
 Mon, 24 May 2021 16:46:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llDiP-0006BB-DG; Mon, 24 May 2021 16:46:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llDiP-0007KX-61; Mon, 24 May 2021 16:46:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llDiO-0005HX-Tn; Mon, 24 May 2021 16:46:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llDiO-00037y-TJ; Mon, 24 May 2021 16:46:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=xkJZug8NnwANnxwO/Ovi+sE0+1SqfAa2MAsJ1ILvrVU=; b=uC5MX0w9/sxDXUd20FqCXk1l2w
	Nvc+yfC2ASfao8V4i7xuSSeWfLIZlX/UI1Zj8lJtzwAk363aJu1Kx+LAMabUtUxuM5R/44eHB4CKy
	/79YF5/62/YCkZ8aVu5yxIQzWkKBJlERFYGUpyyogBhnfm3eIxZFgsXd358zVmfrYRb8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162139-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162139: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=3bbaed2cd0a02ee53958d3d2585e837bcf327278
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 May 2021 16:46:04 +0000

flight 162139 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162139/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                3bbaed2cd0a02ee53958d3d2585e837bcf327278
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  277 days
Failing since        152659  2020-08-21 14:07:39 Z  276 days  509 attempts
Testing same since   162116  2021-05-21 21:07:19 Z    2 days    7 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159254 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 24 19:22:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 19:22:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131868.246246 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llG9X-0003Y2-0y; Mon, 24 May 2021 19:22:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131868.246246; Mon, 24 May 2021 19:22:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llG9W-0003Xv-Td; Mon, 24 May 2021 19:22:14 +0000
Received: by outflank-mailman (input) for mailman id 131868;
 Mon, 24 May 2021 19:22:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llG9V-0003Xl-Mv; Mon, 24 May 2021 19:22:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llG9V-0001j4-HX; Mon, 24 May 2021 19:22:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llG9V-0001NY-48; Mon, 24 May 2021 19:22:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llG9V-0003Qr-3a; Mon, 24 May 2021 19:22:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OSY0n2dGg9qQRg3mybXIdKLYl1xmAxELLAcGiFoswOc=; b=oBlCxXSsQCENPlG1ikdrzyAK4O
	162y5f9bsQ1BPsy5vQMQrZlTtdhWH4/aPC3KOwgBIShRFig8DqnlSEBRWbnaUDkwlWtKrRJffqzf/
	pIwsGlGjPj+gmng7u/NKELZkelJTwW8QZ5o5Gx4bBVt3j0vmmtrP3jNxp5tkXbbqMO4M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162141-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162141: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:guest-start:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    linux-linus:test-amd64-amd64-libvirt:guest-start/debian.repeat:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=c4681547bcce777daf576925a966ffa824edd09d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 24 May 2021 19:22:13 +0000

flight 162141 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162141/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-xsm      13 debian-fixup     fail in 162136 pass in 162141
 test-amd64-amd64-examine    4 memdisk-try-append fail in 162136 pass in 162141
 test-amd64-amd64-libvirt     20 guest-start/debian.repeat  fail pass in 162136

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                c4681547bcce777daf576925a966ffa824edd09d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  296 days
Failing since        152366  2020-08-01 20:49:34 Z  295 days  502 attempts
Testing same since   162136  2021-05-24 01:10:36 Z    0 days    2 attempts

------------------------------------------------------------
6084 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     fail    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1652359 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 24 20:37:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:37:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131877.246260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHKY-0001fB-M4; Mon, 24 May 2021 20:37:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131877.246260; Mon, 24 May 2021 20:37:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHKY-0001f4-IY; Mon, 24 May 2021 20:37:42 +0000
Received: by outflank-mailman (input) for mailman id 131877;
 Mon, 24 May 2021 20:37:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+P1=KT=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1llHKX-0001ey-4a
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 20:37:41 +0000
Received: from mail-qk1-x736.google.com (unknown [2607:f8b0:4864:20::736])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fb112d32-1625-435e-86d4-070dcf340050;
 Mon, 24 May 2021 20:37:39 +0000 (UTC)
Received: by mail-qk1-x736.google.com with SMTP id f18so28295191qko.7
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 13:37:39 -0700 (PDT)
Received: from localhost.localdomain (c-73-89-138-5.hsd1.vt.comcast.net.
 [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id t25sm5142847qkt.62.2021.05.24.13.37.38
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 13:37:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fb112d32-1625-435e-86d4-070dcf340050
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id;
        bh=fT2owAWSONQ5Td8PrGcFnFo4mj+tLu/Yq6/E3PpJKDM=;
        b=LMVvtw69MKVxK+kROImOEJaQegyq/o3anijqdLvUoo3LIy+h6+7oZ4L8PLDU5K2zN0
         KWajWfCxxq5j4YNcqzCWPUOo2YUcguATVRXGcsi5ZUtSI2LxjfuE8aQS4POE6v/3MdTc
         AhXKR2ep7bqtxiWQBbTFaujfzA8FjPqMeiTPrBcybXkmgCkjiJrZ7BWhJ9OFTUQ+VBsS
         l5w+w7Py1iXe7KcF9gF4AyyYkqNRGCgMSrvG0OM5gfT954viw6jEaGkvS4LgXGQRC8nB
         rFQwKlksuj1jFP4ipU/RX/kcp9pP6ebRpGy29yz25JjN0H6cPV71KvKLDLI34r0GZhyB
         HFRg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id;
        bh=fT2owAWSONQ5Td8PrGcFnFo4mj+tLu/Yq6/E3PpJKDM=;
        b=E6zR1XcmYxYGuSHAiUDx6NGkiR09VgPjBaNjwgsAVg9CVVKYHEmTSJhJaj8Cto3X2B
         KLOdtRk/7Ue3nwSZV1XCUaqUNf1OJXP7IN6+Aovvie4gyvIjqa8kAWv5phHde8su9tCx
         KjLOJOh7DbmRDqCcsLPOEAkN02g9YKQTpuCBEdK8gzp1PEEobJ6rcoKs4WJmyw6XxA+E
         aP2F5qS0DpjiiYwPlB8ETszaY6qfV5s8N8+XkXR/BynxUP+qkwIveK/TF3j7orUGrXtz
         rTNSy7RuckgxKiDFU5+gvWmM1uPIbnmjtdOrT8UsMkZvujVVj8+5t4AcbcLTU3ek/X0E
         kQew==
X-Gm-Message-State: AOAM533tc+SDafo5LEcWP1IDXARvIyRflmJnl4xwcfg6uSF4ySEf8QDh
	Xfg31CfIUOVxmZoIViSfLQU=
X-Google-Smtp-Source: ABdhPJxKKmRl70PFWvfEmJ085Ib8hp4wiOr/94ElFKb3+npLqJHzrtoVbUfm4lqTBF8aMVnggiIdXw==
X-Received: by 2002:a05:620a:1312:: with SMTP id o18mr27335892qkj.158.1621888659431;
        Mon, 24 May 2021 13:37:39 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 00/12] golang/xenlight: domain life cycle support
Date: Mon, 24 May 2021 16:36:41 -0400
Message-Id: <cover.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1

The primary goal of this patch series is to allow users of the xenlight
package to manage a full domain life cycle. In particular, it adds
support for receiving domain death events so that domain shutdown,
reboot, destruction, etc. can be handled, and it fixes issues found
when using the package to boot domains with various configurations.

The patches cover several areas (e.g. bug fixes, code style,
conveniences, new wrapper functions), but all of them are work towards
that final goal of letting a package user manage a full domain life cycle.
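One of the patches in the series ("add functional options to configure
Context") refers to the functional-options idiom. As a rough, self-contained
sketch of that pattern (the type, field, and option names below are
hypothetical illustrations, not the actual xenlight API):

```go
package main

import "fmt"

// Context stands in for a library context type; logLevel is a
// hypothetical field used only to illustrate the pattern.
type Context struct {
	logLevel string
}

// ContextOption configures a Context at creation time.
type ContextOption func(*Context)

// WithLogLevel is a hypothetical option overriding the default log level.
func WithLogLevel(level string) ContextOption {
	return func(ctx *Context) { ctx.logLevel = level }
}

// NewContext builds a Context with defaults, then applies any options
// the caller supplied, in order.
func NewContext(opts ...ContextOption) *Context {
	ctx := &Context{logLevel: "info"} // default configuration
	for _, opt := range opts {
		opt(ctx)
	}
	return ctx
}

func main() {
	ctx := NewContext(WithLogLevel("debug"))
	fmt.Println(ctx.logLevel) // prints "debug"
}
```

The appeal of this idiom is that new configuration knobs can be added
later without breaking existing callers of NewContext.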

Nick Rosbrook (12):
  golang/xenlight: update generated code
  golang/xenlight: fix StringList toC conversion
  golang/xenlight: fix string conversion in generated toC functions
  golang/xenlight: export keyed union interface types
  golang/xenlight: use struct pointers in keyed union fields
  golang/xenlight: rename Ctx receivers to ctx
  golang/xenlight: add logging conveniences for within xenlight
  golang/xenlight: add functional options to configure Context
  golang/xenlight: add DomainDestroy wrapper
  golang/xenlight: add SendTrigger wrapper
  golang/xenlight: do not negate ret when converting to Error
  golang/xenlight: add NotifyDomainDeath method to Context

 tools/golang/xenlight/gengotypes.py  |  11 +-
 tools/golang/xenlight/helpers.gen.go | 210 ++++++++++++--
 tools/golang/xenlight/types.gen.go   |  63 +++--
 tools/golang/xenlight/xenlight.go    | 398 ++++++++++++++++++++-------
 4 files changed, 521 insertions(+), 161 deletions(-)

-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:37:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:37:53 +0000
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 02/12] golang/xenlight: fix StringList toC conversion
Date: Mon, 24 May 2021 16:36:43 -0400
Message-Id: <0a15ed9eb6cd70416995f5d9805c98572eb6bd50.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

The current implementation of StringList.toC does not correctly account
for how libxl_string_list is expected to be laid out in C, which is clear
when one looks at libxl_string_list_length in libxl.c. In particular,
StringList.toC does not account for the extra memory that should be
allocated for the "sentinel" entry. And, when using the "slice trick" to
create a slice that can address C memory, the unsafe.Pointer conversion
should be on a C.libxl_string_list, not *C.libxl_string_list.

Fix these problems by (1) allocating an extra slot in the slice used to
address the C memory, and explicitly setting the last entry to nil so
the C memory is zeroed out, and (2) dereferencing csl in the
unsafe.Pointer conversion.
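For reference, libxl treats a libxl_string_list as a NULL-terminated
array of C strings, so its length is found by walking pointers until the
sentinel. The following is a pure-Go sketch (no cgo) of that convention;
stringListLength is an illustrative stand-in for the C function
libxl_string_list_length, not the real libxl API:

```go
package main

import "fmt"

// stringListLength mimics libxl_string_list_length: count entries
// until the nil sentinel that terminates the array.
func stringListLength(list []*string) int {
	n := 0
	for _, p := range list {
		if p == nil {
			break
		}
		n++
	}
	return n
}

func main() {
	a, b := "xl", "xenlight"
	// len(strings)+1 slots: the extra slot holds the nil sentinel,
	// which is why toC must allocate size = len(sl) + 1.
	list := []*string{&a, &b, nil}
	fmt.Println(stringListLength(list)) // 2
}
```

Without the extra slot, writing the sentinel would run past the end of
the malloc'd buffer, and length computations on the C side would read
uninitialized memory.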

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index b9189dec5c..13171d0ad1 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -491,13 +491,14 @@ func (sl *StringList) fromC(csl *C.libxl_string_list) error {
 
 func (sl StringList) toC(csl *C.libxl_string_list) error {
 	var char *C.char
-	size := len(sl)
+	size := len(sl) + 1
 	*csl = (C.libxl_string_list)(C.malloc(C.ulong(size) * C.ulong(unsafe.Sizeof(char))))
-	clist := (*[1 << 30]*C.char)(unsafe.Pointer(csl))[:size:size]
+	clist := (*[1 << 30]*C.char)(unsafe.Pointer(*csl))[:size:size]
 
 	for i, v := range sl {
 		clist[i] = C.CString(v)
 	}
+	clist[len(clist)-1] = nil
 
 	return nil
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:37:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:37:55 +0000
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 01/12] golang/xenlight: update generated code
Date: Mon, 24 May 2021 16:36:42 -0400
Message-Id: <29e665fc1c9313f5e221e9e5e15d7c2d9c1eb4a7.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

Re-generate code to reflect changes to libxl_types.idl from the
following commits:

0570d7f276 x86/msr: introduce an option for compatible MSR behavior selection
7e5cffcd1e viridian: allow vCPU hotplug for Windows VMs
9835246710 viridian: remove implicit limit of 64 VPs per partition

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/helpers.gen.go | 6 ++++++
 tools/golang/xenlight/types.gen.go   | 5 +++++
 2 files changed, 11 insertions(+)

diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 4c60d27a9c..b454b12d52 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -1113,6 +1113,9 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
 x.ArchArm.Vuart = VuartType(xc.arch_arm.vuart)
+if err := x.ArchX86.MsrRelaxed.fromC(&xc.arch_x86.msr_relaxed);err != nil {
+return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
+}
 x.Altp2M = Altp2MMode(xc.altp2m)
 x.VmtraceBufKb = int(xc.vmtrace_buf_kb)
 
@@ -1589,6 +1592,9 @@ default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 xc.arch_arm.gic_version = C.libxl_gic_version(x.ArchArm.GicVersion)
 xc.arch_arm.vuart = C.libxl_vuart_type(x.ArchArm.Vuart)
+if err := x.ArchX86.MsrRelaxed.toC(&xc.arch_x86.msr_relaxed); err != nil {
+return fmt.Errorf("converting field ArchX86.MsrRelaxed: %v", err)
+}
 xc.altp2m = C.libxl_altp2m_mode(x.Altp2M)
 xc.vmtrace_buf_kb = C.int(x.VmtraceBufKb)
 
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index cb13002fdb..f2ceceb61c 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -211,6 +211,8 @@ ViridianEnlightenmentSynic ViridianEnlightenment = 7
 ViridianEnlightenmentStimer ViridianEnlightenment = 8
 ViridianEnlightenmentHcallIpi ViridianEnlightenment = 9
 ViridianEnlightenmentExProcessorMasks ViridianEnlightenment = 10
+ViridianEnlightenmentNoVpLimit ViridianEnlightenment = 11
+ViridianEnlightenmentCpuHotplug ViridianEnlightenment = 12
 )
 
 type Hdtype int
@@ -513,6 +515,9 @@ ArchArm struct {
 GicVersion GicVersion
 Vuart VuartType
 }
+ArchX86 struct {
+MsrRelaxed Defbool
+}
 Altp2M Altp2MMode
 VmtraceBufKb int
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:37:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:37:57 +0000
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 03/12] golang/xenlight: fix string conversion in generated toC functions
Date: Mon, 24 May 2021 16:36:44 -0400
Message-Id: <06763aceff41167d3d3bbd603f729572c1f55c77.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

In gengotypes.py, the toC functions only set C string fields when
the Go strings are non-empty. However, to prevent segfaults in some
cases, these fields should always at least be set to nil so that the C
memory is zeroed out.

Update gengotypes.py so that the generated code always sets these fields
to nil first, and then checks whether the Go string is non-empty. Also
commit the new generated code.
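The zero-then-set pattern matters because C.malloc'd memory is not
zeroed, so a skipped field can hold garbage that libxl later tries to
free or dereference. A pure-Go sketch of the pattern (cVncInfo and its
*string field are illustrative stand-ins for the cgo struct and its
char* field):

```go
package main

import "fmt"

// cVncInfo stands in for a C struct whose string field may contain
// garbage when freshly malloc'd.
type cVncInfo struct {
	listen *string
}

// toC mirrors the generated conversion: clear the field first, then
// set it only when the Go string is non-empty.
func toC(goListen string, xc *cVncInfo) {
	xc.listen = nil // always zero the field, even for empty input
	if goListen != "" {
		xc.listen = &goListen
	}
}

func main() {
	var xc cVncInfo
	stale := "stale"
	xc.listen = &stale // simulate uninitialized/garbage C memory
	toC("", &xc)       // an empty Go string must still clear the field
	fmt.Println(xc.listen == nil) // true
}
```

Before this patch, the empty-string case left the field untouched,
which is exactly the path that could segfault.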

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/gengotypes.py  |   1 +
 tools/golang/xenlight/helpers.gen.go | 160 +++++++++++++++++++++++++++
 2 files changed, 161 insertions(+)

diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index 3e40c3d5dc..e6daa9b92f 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -527,6 +527,7 @@ def xenlight_golang_convert_to_C(ty = None, outer_name = None,
 
     elif gotypename == 'string':
         # Use the cgo helper for converting C strings.
+        s += '{0}.{1} = nil\n'.format(cvarname, cname)
         s += 'if {0}.{1} != "" {{\n'.format(govarname,goname)
         s += '{0}.{1} = C.CString({2}.{3})}}\n'.format(cvarname,cname,
                                                    govarname,goname)
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index b454b12d52..5222898fb8 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -154,8 +154,10 @@ C.libxl_vnc_info_dispose(xc)}
 if err := x.Enable.toC(&xc.enable); err != nil {
 return fmt.Errorf("converting field Enable: %v", err)
 }
+xc.listen = nil
 if x.Listen != "" {
 xc.listen = C.CString(x.Listen)}
+xc.passwd = nil
 if x.Passwd != "" {
 xc.passwd = C.CString(x.Passwd)}
 xc.display = C.int(x.Display)
@@ -216,11 +218,13 @@ return fmt.Errorf("converting field Enable: %v", err)
 }
 xc.port = C.int(x.Port)
 xc.tls_port = C.int(x.TlsPort)
+xc.host = nil
 if x.Host != "" {
 xc.host = C.CString(x.Host)}
 if err := x.DisableTicketing.toC(&xc.disable_ticketing); err != nil {
 return fmt.Errorf("converting field DisableTicketing: %v", err)
 }
+xc.passwd = nil
 if x.Passwd != "" {
 xc.passwd = C.CString(x.Passwd)}
 if err := x.AgentMouse.toC(&xc.agent_mouse); err != nil {
@@ -233,8 +237,10 @@ if err := x.ClipboardSharing.toC(&xc.clipboard_sharing); err != nil {
 return fmt.Errorf("converting field ClipboardSharing: %v", err)
 }
 xc.usbredirection = C.int(x.Usbredirection)
+xc.image_compression = nil
 if x.ImageCompression != "" {
 xc.image_compression = C.CString(x.ImageCompression)}
+xc.streaming_video = nil
 if x.StreamingVideo != "" {
 xc.streaming_video = C.CString(x.StreamingVideo)}
 
@@ -278,8 +284,10 @@ return fmt.Errorf("converting field Enable: %v", err)
 if err := x.Opengl.toC(&xc.opengl); err != nil {
 return fmt.Errorf("converting field Opengl: %v", err)
 }
+xc.display = nil
 if x.Display != "" {
 xc.display = C.CString(x.Display)}
+xc.xauthority = nil
 if x.Xauthority != "" {
 xc.xauthority = C.CString(x.Xauthority)}
 
@@ -337,6 +345,7 @@ return fmt.Errorf("converting field Uuid: %v", err)
 }
 xc.domid = C.libxl_domid(x.Domid)
 xc.ssidref = C.uint32_t(x.Ssidref)
+xc.ssid_label = nil
 if x.SsidLabel != "" {
 xc.ssid_label = C.CString(x.SsidLabel)}
 xc.running = C.bool(x.Running)
@@ -391,6 +400,7 @@ C.libxl_cpupoolinfo_dispose(xc)}
 }()
 
 xc.poolid = C.uint32_t(x.Poolid)
+xc.pool_name = nil
 if x.PoolName != "" {
 xc.pool_name = C.CString(x.PoolName)}
 xc.sched = C.libxl_scheduler(x.Sched)
@@ -458,9 +468,11 @@ if err != nil{
 C.libxl_channelinfo_dispose(xc)}
 }()
 
+xc.backend = nil
 if x.Backend != "" {
 xc.backend = C.CString(x.Backend)}
 xc.backend_id = C.uint32_t(x.BackendId)
+xc.frontend = nil
 if x.Frontend != "" {
 xc.frontend = C.CString(x.Frontend)}
 xc.frontend_id = C.uint32_t(x.FrontendId)
@@ -478,6 +490,7 @@ if !ok {
 return errors.New("wrong type for union key connection")
 }
 var pty C.libxl_channelinfo_connection_union_pty
+pty.path = nil
 if tmp.Path != "" {
 pty.path = C.CString(tmp.Path)}
 ptyBytes := C.GoBytes(unsafe.Pointer(&pty),C.sizeof_libxl_channelinfo_connection_union_pty)
@@ -563,24 +576,33 @@ C.libxl_version_info_dispose(xc)}
 
 xc.xen_version_major = C.int(x.XenVersionMajor)
 xc.xen_version_minor = C.int(x.XenVersionMinor)
+xc.xen_version_extra = nil
 if x.XenVersionExtra != "" {
 xc.xen_version_extra = C.CString(x.XenVersionExtra)}
+xc.compiler = nil
 if x.Compiler != "" {
 xc.compiler = C.CString(x.Compiler)}
+xc.compile_by = nil
 if x.CompileBy != "" {
 xc.compile_by = C.CString(x.CompileBy)}
+xc.compile_domain = nil
 if x.CompileDomain != "" {
 xc.compile_domain = C.CString(x.CompileDomain)}
+xc.compile_date = nil
 if x.CompileDate != "" {
 xc.compile_date = C.CString(x.CompileDate)}
+xc.capabilities = nil
 if x.Capabilities != "" {
 xc.capabilities = C.CString(x.Capabilities)}
+xc.changeset = nil
 if x.Changeset != "" {
 xc.changeset = C.CString(x.Changeset)}
 xc.virt_start = C.uint64_t(x.VirtStart)
 xc.pagesize = C.int(x.Pagesize)
+xc.commandline = nil
 if x.Commandline != "" {
 xc.commandline = C.CString(x.Commandline)}
+xc.build_id = nil
 if x.BuildId != "" {
 xc.build_id = C.CString(x.BuildId)}
 
@@ -650,8 +672,10 @@ if err := x.Oos.toC(&xc.oos); err != nil {
 return fmt.Errorf("converting field Oos: %v", err)
 }
 xc.ssidref = C.uint32_t(x.Ssidref)
+xc.ssid_label = nil
 if x.SsidLabel != "" {
 xc.ssid_label = C.CString(x.SsidLabel)}
+xc.name = nil
 if x.Name != "" {
 xc.name = C.CString(x.Name)}
 xc.domid = C.libxl_domid(x.Domid)
@@ -665,6 +689,7 @@ if err := x.Platformdata.toC(&xc.platformdata); err != nil {
 return fmt.Errorf("converting field Platformdata: %v", err)
 }
 xc.poolid = C.uint32_t(x.Poolid)
+xc.pool_name = nil
 if x.PoolName != "" {
 xc.pool_name = C.CString(x.PoolName)}
 if err := x.RunHotplugScripts.toC(&xc.run_hotplug_scripts); err != nil {
@@ -712,6 +737,7 @@ C.libxl_domain_restore_params_dispose(xc)}
 
 xc.checkpointed_stream = C.int(x.CheckpointedStream)
 xc.stream_version = C.uint32_t(x.StreamVersion)
+xc.colo_proxy_script = nil
 if x.ColoProxyScript != "" {
 xc.colo_proxy_script = C.CString(x.ColoProxyScript)}
 if err := x.UserspaceColoProxy.toC(&xc.userspace_colo_proxy); err != nil {
@@ -1312,6 +1338,7 @@ xc.shadow_memkb = C.uint64_t(x.ShadowMemkb)
 xc.iommu_memkb = C.uint64_t(x.IommuMemkb)
 xc.rtc_timeoffset = C.uint32_t(x.RtcTimeoffset)
 xc.exec_ssidref = C.uint32_t(x.ExecSsidref)
+xc.exec_ssid_label = nil
 if x.ExecSsidLabel != "" {
 xc.exec_ssid_label = C.CString(x.ExecSsidLabel)}
 if err := x.Localtime.toC(&xc.localtime); err != nil {
@@ -1323,6 +1350,7 @@ return fmt.Errorf("converting field DisableMigrate: %v", err)
 if err := x.Cpuid.toC(&xc.cpuid); err != nil {
 return fmt.Errorf("converting field Cpuid: %v", err)
 }
+xc.blkdev_start = nil
 if x.BlkdevStart != "" {
 xc.blkdev_start = C.CString(x.BlkdevStart)}
 if numVnumaNodes := len(x.VnumaNodes); numVnumaNodes > 0 {
@@ -1342,15 +1370,20 @@ if err := x.DeviceModelStubdomain.toC(&xc.device_model_stubdomain); err != nil {
 return fmt.Errorf("converting field DeviceModelStubdomain: %v", err)
 }
 xc.stubdomain_memkb = C.uint64_t(x.StubdomainMemkb)
+xc.stubdomain_kernel = nil
 if x.StubdomainKernel != "" {
 xc.stubdomain_kernel = C.CString(x.StubdomainKernel)}
+xc.stubdomain_ramdisk = nil
 if x.StubdomainRamdisk != "" {
 xc.stubdomain_ramdisk = C.CString(x.StubdomainRamdisk)}
+xc.device_model = nil
 if x.DeviceModel != "" {
 xc.device_model = C.CString(x.DeviceModel)}
 xc.device_model_ssidref = C.uint32_t(x.DeviceModelSsidref)
+xc.device_model_ssid_label = nil
 if x.DeviceModelSsidLabel != "" {
 xc.device_model_ssid_label = C.CString(x.DeviceModelSsidLabel)}
+xc.device_model_user = nil
 if x.DeviceModelUser != "" {
 xc.device_model_user = C.CString(x.DeviceModelUser)}
 if err := x.Extra.toC(&xc.extra); err != nil {
@@ -1397,17 +1430,22 @@ if err := x.ClaimMode.toC(&xc.claim_mode); err != nil {
 return fmt.Errorf("converting field ClaimMode: %v", err)
 }
 xc.event_channels = C.uint32_t(x.EventChannels)
+xc.kernel = nil
 if x.Kernel != "" {
 xc.kernel = C.CString(x.Kernel)}
+xc.cmdline = nil
 if x.Cmdline != "" {
 xc.cmdline = C.CString(x.Cmdline)}
+xc.ramdisk = nil
 if x.Ramdisk != "" {
 xc.ramdisk = C.CString(x.Ramdisk)}
+xc.device_tree = nil
 if x.DeviceTree != "" {
 xc.device_tree = C.CString(x.DeviceTree)}
 if err := x.Acpi.toC(&xc.acpi); err != nil {
 return fmt.Errorf("converting field Acpi: %v", err)
 }
+xc.bootloader = nil
 if x.Bootloader != "" {
 xc.bootloader = C.CString(x.Bootloader)}
 if err := x.BootloaderArgs.toC(&xc.bootloader_args); err != nil {
@@ -1432,6 +1470,7 @@ if !ok {
 return errors.New("wrong type for union key type")
 }
 var hvm C.libxl_domain_build_info_type_union_hvm
+hvm.firmware = nil
 if tmp.Firmware != "" {
 hvm.firmware = C.CString(tmp.Firmware)}
 hvm.bios = C.libxl_bios_type(tmp.Bios)
@@ -1465,6 +1504,7 @@ return fmt.Errorf("converting field ViridianEnable: %v", err)
 if err := tmp.ViridianDisable.toC(&hvm.viridian_disable); err != nil {
 return fmt.Errorf("converting field ViridianDisable: %v", err)
 }
+hvm.timeoffset = nil
 if tmp.Timeoffset != "" {
 hvm.timeoffset = C.CString(tmp.Timeoffset)}
 if err := tmp.Hpet.toC(&hvm.hpet); err != nil {
@@ -1481,10 +1521,13 @@ return fmt.Errorf("converting field NestedHvm: %v", err)
 if err := tmp.Altp2M.toC(&hvm.altp2m); err != nil {
 return fmt.Errorf("converting field Altp2M: %v", err)
 }
+hvm.system_firmware = nil
 if tmp.SystemFirmware != "" {
 hvm.system_firmware = C.CString(tmp.SystemFirmware)}
+hvm.smbios_firmware = nil
 if tmp.SmbiosFirmware != "" {
 hvm.smbios_firmware = C.CString(tmp.SmbiosFirmware)}
+hvm.acpi_firmware = nil
 if tmp.AcpiFirmware != "" {
 hvm.acpi_firmware = C.CString(tmp.AcpiFirmware)}
 hvm.hdtype = C.libxl_hdtype(tmp.Hdtype)
@@ -1497,6 +1540,7 @@ return fmt.Errorf("converting field Vga: %v", err)
 if err := tmp.Vnc.toC(&hvm.vnc); err != nil {
 return fmt.Errorf("converting field Vnc: %v", err)
 }
+hvm.keymap = nil
 if tmp.Keymap != "" {
 hvm.keymap = C.CString(tmp.Keymap)}
 if err := tmp.Sdl.toC(&hvm.sdl); err != nil {
@@ -1509,19 +1553,23 @@ if err := tmp.GfxPassthru.toC(&hvm.gfx_passthru); err != nil {
 return fmt.Errorf("converting field GfxPassthru: %v", err)
 }
 hvm.gfx_passthru_kind = C.libxl_gfx_passthru_kind(tmp.GfxPassthruKind)
+hvm.serial = nil
 if tmp.Serial != "" {
 hvm.serial = C.CString(tmp.Serial)}
+hvm.boot = nil
 if tmp.Boot != "" {
 hvm.boot = C.CString(tmp.Boot)}
 if err := tmp.Usb.toC(&hvm.usb); err != nil {
 return fmt.Errorf("converting field Usb: %v", err)
 }
 hvm.usbversion = C.int(tmp.Usbversion)
+hvm.usbdevice = nil
 if tmp.Usbdevice != "" {
 hvm.usbdevice = C.CString(tmp.Usbdevice)}
 if err := tmp.VkbDevice.toC(&hvm.vkb_device); err != nil {
 return fmt.Errorf("converting field VkbDevice: %v", err)
 }
+hvm.soundhw = nil
 if tmp.Soundhw != "" {
 hvm.soundhw = C.CString(tmp.Soundhw)}
 if err := tmp.XenPlatformPci.toC(&hvm.xen_platform_pci); err != nil {
@@ -1550,18 +1598,23 @@ if !ok {
 return errors.New("wrong type for union key type")
 }
 var pv C.libxl_domain_build_info_type_union_pv
+pv.kernel = nil
 if tmp.Kernel != "" {
 pv.kernel = C.CString(tmp.Kernel)}
 pv.slack_memkb = C.uint64_t(tmp.SlackMemkb)
+pv.bootloader = nil
 if tmp.Bootloader != "" {
 pv.bootloader = C.CString(tmp.Bootloader)}
 if err := tmp.BootloaderArgs.toC(&pv.bootloader_args); err != nil {
 return fmt.Errorf("converting field BootloaderArgs: %v", err)
 }
+pv.cmdline = nil
 if tmp.Cmdline != "" {
 pv.cmdline = C.CString(tmp.Cmdline)}
+pv.ramdisk = nil
 if tmp.Ramdisk != "" {
 pv.ramdisk = C.CString(tmp.Ramdisk)}
+pv.features = nil
 if tmp.Features != "" {
 pv.features = C.CString(tmp.Features)}
 if err := tmp.E820Host.toC(&pv.e820_host); err != nil {
@@ -1578,10 +1631,13 @@ var pvh C.libxl_domain_build_info_type_union_pvh
 if err := tmp.Pvshim.toC(&pvh.pvshim); err != nil {
 return fmt.Errorf("converting field Pvshim: %v", err)
 }
+pvh.pvshim_path = nil
 if tmp.PvshimPath != "" {
 pvh.pvshim_path = C.CString(tmp.PvshimPath)}
+pvh.pvshim_cmdline = nil
 if tmp.PvshimCmdline != "" {
 pvh.pvshim_cmdline = C.CString(tmp.PvshimCmdline)}
+pvh.pvshim_extra = nil
 if tmp.PvshimExtra != "" {
 pvh.pvshim_extra = C.CString(tmp.PvshimExtra)}
 pvhBytes := C.GoBytes(unsafe.Pointer(&pvh),C.sizeof_libxl_domain_build_info_type_union_pvh)
@@ -1635,6 +1691,7 @@ C.libxl_device_vfb_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 xc.devid = C.libxl_devid(x.Devid)
@@ -1644,6 +1701,7 @@ return fmt.Errorf("converting field Vnc: %v", err)
 if err := x.Sdl.toC(&xc.sdl); err != nil {
 return fmt.Errorf("converting field Sdl: %v", err)
 }
+xc.keymap = nil
 if x.Keymap != "" {
 xc.keymap = C.CString(x.Keymap)}
 
@@ -1689,10 +1747,12 @@ C.libxl_device_vkb_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 xc.devid = C.libxl_devid(x.Devid)
 xc.backend_type = C.libxl_vkb_backend(x.BackendType)
+xc.unique_id = nil
 if x.UniqueId != "" {
 xc.unique_id = C.CString(x.UniqueId)}
 xc.feature_disable_keyboard = C.bool(x.FeatureDisableKeyboard)
@@ -1758,14 +1818,18 @@ C.libxl_device_disk_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
+xc.pdev_path = nil
 if x.PdevPath != "" {
 xc.pdev_path = C.CString(x.PdevPath)}
+xc.vdev = nil
 if x.Vdev != "" {
 xc.vdev = C.CString(x.Vdev)}
 xc.backend = C.libxl_disk_backend(x.Backend)
 xc.format = C.libxl_disk_format(x.Format)
+xc.script = nil
 if x.Script != "" {
 xc.script = C.CString(x.Script)}
 xc.removable = C.int(x.Removable)
@@ -1781,13 +1845,17 @@ return fmt.Errorf("converting field ColoEnable: %v", err)
 if err := x.ColoRestoreEnable.toC(&xc.colo_restore_enable); err != nil {
 return fmt.Errorf("converting field ColoRestoreEnable: %v", err)
 }
+xc.colo_host = nil
 if x.ColoHost != "" {
 xc.colo_host = C.CString(x.ColoHost)}
 xc.colo_port = C.int(x.ColoPort)
+xc.colo_export = nil
 if x.ColoExport != "" {
 xc.colo_export = C.CString(x.ColoExport)}
+xc.active_disk = nil
 if x.ActiveDisk != "" {
 xc.active_disk = C.CString(x.ActiveDisk)}
+xc.hidden_disk = nil
 if x.HiddenDisk != "" {
 xc.hidden_disk = C.CString(x.HiddenDisk)}
 
@@ -1883,124 +1951,180 @@ C.libxl_device_nic_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 xc.devid = C.libxl_devid(x.Devid)
 xc.mtu = C.int(x.Mtu)
+xc.model = nil
 if x.Model != "" {
 xc.model = C.CString(x.Model)}
 if err := x.Mac.toC(&xc.mac); err != nil {
 return fmt.Errorf("converting field Mac: %v", err)
 }
+xc.ip = nil
 if x.Ip != "" {
 xc.ip = C.CString(x.Ip)}
+xc.bridge = nil
 if x.Bridge != "" {
 xc.bridge = C.CString(x.Bridge)}
+xc.ifname = nil
 if x.Ifname != "" {
 xc.ifname = C.CString(x.Ifname)}
+xc.script = nil
 if x.Script != "" {
 xc.script = C.CString(x.Script)}
 xc.nictype = C.libxl_nic_type(x.Nictype)
 xc.rate_bytes_per_interval = C.uint64_t(x.RateBytesPerInterval)
 xc.rate_interval_usecs = C.uint32_t(x.RateIntervalUsecs)
+xc.gatewaydev = nil
 if x.Gatewaydev != "" {
 xc.gatewaydev = C.CString(x.Gatewaydev)}
+xc.coloft_forwarddev = nil
 if x.ColoftForwarddev != "" {
 xc.coloft_forwarddev = C.CString(x.ColoftForwarddev)}
+xc.colo_sock_mirror_id = nil
 if x.ColoSockMirrorId != "" {
 xc.colo_sock_mirror_id = C.CString(x.ColoSockMirrorId)}
+xc.colo_sock_mirror_ip = nil
 if x.ColoSockMirrorIp != "" {
 xc.colo_sock_mirror_ip = C.CString(x.ColoSockMirrorIp)}
+xc.colo_sock_mirror_port = nil
 if x.ColoSockMirrorPort != "" {
 xc.colo_sock_mirror_port = C.CString(x.ColoSockMirrorPort)}
+xc.colo_sock_compare_pri_in_id = nil
 if x.ColoSockComparePriInId != "" {
 xc.colo_sock_compare_pri_in_id = C.CString(x.ColoSockComparePriInId)}
+xc.colo_sock_compare_pri_in_ip = nil
 if x.ColoSockComparePriInIp != "" {
 xc.colo_sock_compare_pri_in_ip = C.CString(x.ColoSockComparePriInIp)}
+xc.colo_sock_compare_pri_in_port = nil
 if x.ColoSockComparePriInPort != "" {
 xc.colo_sock_compare_pri_in_port = C.CString(x.ColoSockComparePriInPort)}
+xc.colo_sock_compare_sec_in_id = nil
 if x.ColoSockCompareSecInId != "" {
 xc.colo_sock_compare_sec_in_id = C.CString(x.ColoSockCompareSecInId)}
+xc.colo_sock_compare_sec_in_ip = nil
 if x.ColoSockCompareSecInIp != "" {
 xc.colo_sock_compare_sec_in_ip = C.CString(x.ColoSockCompareSecInIp)}
+xc.colo_sock_compare_sec_in_port = nil
 if x.ColoSockCompareSecInPort != "" {
 xc.colo_sock_compare_sec_in_port = C.CString(x.ColoSockCompareSecInPort)}
+xc.colo_sock_compare_notify_id = nil
 if x.ColoSockCompareNotifyId != "" {
 xc.colo_sock_compare_notify_id = C.CString(x.ColoSockCompareNotifyId)}
+xc.colo_sock_compare_notify_ip = nil
 if x.ColoSockCompareNotifyIp != "" {
 xc.colo_sock_compare_notify_ip = C.CString(x.ColoSockCompareNotifyIp)}
+xc.colo_sock_compare_notify_port = nil
 if x.ColoSockCompareNotifyPort != "" {
 xc.colo_sock_compare_notify_port = C.CString(x.ColoSockCompareNotifyPort)}
+xc.colo_sock_redirector0_id = nil
 if x.ColoSockRedirector0Id != "" {
 xc.colo_sock_redirector0_id = C.CString(x.ColoSockRedirector0Id)}
+xc.colo_sock_redirector0_ip = nil
 if x.ColoSockRedirector0Ip != "" {
 xc.colo_sock_redirector0_ip = C.CString(x.ColoSockRedirector0Ip)}
+xc.colo_sock_redirector0_port = nil
 if x.ColoSockRedirector0Port != "" {
 xc.colo_sock_redirector0_port = C.CString(x.ColoSockRedirector0Port)}
+xc.colo_sock_redirector1_id = nil
 if x.ColoSockRedirector1Id != "" {
 xc.colo_sock_redirector1_id = C.CString(x.ColoSockRedirector1Id)}
+xc.colo_sock_redirector1_ip = nil
 if x.ColoSockRedirector1Ip != "" {
 xc.colo_sock_redirector1_ip = C.CString(x.ColoSockRedirector1Ip)}
+xc.colo_sock_redirector1_port = nil
 if x.ColoSockRedirector1Port != "" {
 xc.colo_sock_redirector1_port = C.CString(x.ColoSockRedirector1Port)}
+xc.colo_sock_redirector2_id = nil
 if x.ColoSockRedirector2Id != "" {
 xc.colo_sock_redirector2_id = C.CString(x.ColoSockRedirector2Id)}
+xc.colo_sock_redirector2_ip = nil
 if x.ColoSockRedirector2Ip != "" {
 xc.colo_sock_redirector2_ip = C.CString(x.ColoSockRedirector2Ip)}
+xc.colo_sock_redirector2_port = nil
 if x.ColoSockRedirector2Port != "" {
 xc.colo_sock_redirector2_port = C.CString(x.ColoSockRedirector2Port)}
+xc.colo_filter_mirror_queue = nil
 if x.ColoFilterMirrorQueue != "" {
 xc.colo_filter_mirror_queue = C.CString(x.ColoFilterMirrorQueue)}
+xc.colo_filter_mirror_outdev = nil
 if x.ColoFilterMirrorOutdev != "" {
 xc.colo_filter_mirror_outdev = C.CString(x.ColoFilterMirrorOutdev)}
+xc.colo_filter_redirector0_queue = nil
 if x.ColoFilterRedirector0Queue != "" {
 xc.colo_filter_redirector0_queue = C.CString(x.ColoFilterRedirector0Queue)}
+xc.colo_filter_redirector0_indev = nil
 if x.ColoFilterRedirector0Indev != "" {
 xc.colo_filter_redirector0_indev = C.CString(x.ColoFilterRedirector0Indev)}
+xc.colo_filter_redirector0_outdev = nil
 if x.ColoFilterRedirector0Outdev != "" {
 xc.colo_filter_redirector0_outdev = C.CString(x.ColoFilterRedirector0Outdev)}
+xc.colo_filter_redirector1_queue = nil
 if x.ColoFilterRedirector1Queue != "" {
 xc.colo_filter_redirector1_queue = C.CString(x.ColoFilterRedirector1Queue)}
+xc.colo_filter_redirector1_indev = nil
 if x.ColoFilterRedirector1Indev != "" {
 xc.colo_filter_redirector1_indev = C.CString(x.ColoFilterRedirector1Indev)}
+xc.colo_filter_redirector1_outdev = nil
 if x.ColoFilterRedirector1Outdev != "" {
 xc.colo_filter_redirector1_outdev = C.CString(x.ColoFilterRedirector1Outdev)}
+xc.colo_compare_pri_in = nil
 if x.ColoComparePriIn != "" {
 xc.colo_compare_pri_in = C.CString(x.ColoComparePriIn)}
+xc.colo_compare_sec_in = nil
 if x.ColoCompareSecIn != "" {
 xc.colo_compare_sec_in = C.CString(x.ColoCompareSecIn)}
+xc.colo_compare_out = nil
 if x.ColoCompareOut != "" {
 xc.colo_compare_out = C.CString(x.ColoCompareOut)}
+xc.colo_compare_notify_dev = nil
 if x.ColoCompareNotifyDev != "" {
 xc.colo_compare_notify_dev = C.CString(x.ColoCompareNotifyDev)}
+xc.colo_sock_sec_redirector0_id = nil
 if x.ColoSockSecRedirector0Id != "" {
 xc.colo_sock_sec_redirector0_id = C.CString(x.ColoSockSecRedirector0Id)}
+xc.colo_sock_sec_redirector0_ip = nil
 if x.ColoSockSecRedirector0Ip != "" {
 xc.colo_sock_sec_redirector0_ip = C.CString(x.ColoSockSecRedirector0Ip)}
+xc.colo_sock_sec_redirector0_port = nil
 if x.ColoSockSecRedirector0Port != "" {
 xc.colo_sock_sec_redirector0_port = C.CString(x.ColoSockSecRedirector0Port)}
+xc.colo_sock_sec_redirector1_id = nil
 if x.ColoSockSecRedirector1Id != "" {
 xc.colo_sock_sec_redirector1_id = C.CString(x.ColoSockSecRedirector1Id)}
+xc.colo_sock_sec_redirector1_ip = nil
 if x.ColoSockSecRedirector1Ip != "" {
 xc.colo_sock_sec_redirector1_ip = C.CString(x.ColoSockSecRedirector1Ip)}
+xc.colo_sock_sec_redirector1_port = nil
 if x.ColoSockSecRedirector1Port != "" {
 xc.colo_sock_sec_redirector1_port = C.CString(x.ColoSockSecRedirector1Port)}
+xc.colo_filter_sec_redirector0_queue = nil
 if x.ColoFilterSecRedirector0Queue != "" {
 xc.colo_filter_sec_redirector0_queue = C.CString(x.ColoFilterSecRedirector0Queue)}
+xc.colo_filter_sec_redirector0_indev = nil
 if x.ColoFilterSecRedirector0Indev != "" {
 xc.colo_filter_sec_redirector0_indev = C.CString(x.ColoFilterSecRedirector0Indev)}
+xc.colo_filter_sec_redirector0_outdev = nil
 if x.ColoFilterSecRedirector0Outdev != "" {
 xc.colo_filter_sec_redirector0_outdev = C.CString(x.ColoFilterSecRedirector0Outdev)}
+xc.colo_filter_sec_redirector1_queue = nil
 if x.ColoFilterSecRedirector1Queue != "" {
 xc.colo_filter_sec_redirector1_queue = C.CString(x.ColoFilterSecRedirector1Queue)}
+xc.colo_filter_sec_redirector1_indev = nil
 if x.ColoFilterSecRedirector1Indev != "" {
 xc.colo_filter_sec_redirector1_indev = C.CString(x.ColoFilterSecRedirector1Indev)}
+xc.colo_filter_sec_redirector1_outdev = nil
 if x.ColoFilterSecRedirector1Outdev != "" {
 xc.colo_filter_sec_redirector1_outdev = C.CString(x.ColoFilterSecRedirector1Outdev)}
+xc.colo_filter_sec_rewriter0_queue = nil
 if x.ColoFilterSecRewriter0Queue != "" {
 xc.colo_filter_sec_rewriter0_queue = C.CString(x.ColoFilterSecRewriter0Queue)}
+xc.colo_checkpoint_host = nil
 if x.ColoCheckpointHost != "" {
 xc.colo_checkpoint_host = C.CString(x.ColoCheckpointHost)}
+xc.colo_checkpoint_port = nil
 if x.ColoCheckpointPort != "" {
 xc.colo_checkpoint_port = C.CString(x.ColoCheckpointPort)}
 
@@ -2053,6 +2177,7 @@ xc.power_mgmt = C.bool(x.PowerMgmt)
 xc.permissive = C.bool(x.Permissive)
 xc.seize = C.bool(x.Seize)
 xc.rdm_policy = C.libxl_rdm_reserve_policy(x.RdmPolicy)
+xc.name = nil
 if x.Name != "" {
 xc.name = C.CString(x.Name)}
 
@@ -2126,6 +2251,7 @@ xc.devid = C.libxl_devid(x.Devid)
 xc.version = C.int(x.Version)
 xc.ports = C.int(x.Ports)
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 
@@ -2223,6 +2349,7 @@ if err != nil{
 C.libxl_device_dtdev_dispose(xc)}
 }()
 
+xc.path = nil
 if x.Path != "" {
 xc.path = C.CString(x.Path)}
 
@@ -2259,6 +2386,7 @@ C.libxl_device_vtpm_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 xc.devid = C.libxl_devid(x.Devid)
@@ -2299,12 +2427,16 @@ C.libxl_device_p9_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
+xc.tag = nil
 if x.Tag != "" {
 xc.tag = C.CString(x.Tag)}
+xc.path = nil
 if x.Path != "" {
 xc.path = C.CString(x.Path)}
+xc.security_model = nil
 if x.SecurityModel != "" {
 xc.security_model = C.CString(x.SecurityModel)}
 xc.devid = C.libxl_devid(x.Devid)
@@ -2339,6 +2471,7 @@ C.libxl_device_pvcallsif_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 xc.devid = C.libxl_devid(x.Devid)
@@ -2399,9 +2532,11 @@ C.libxl_device_channel_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 xc.devid = C.libxl_devid(x.Devid)
+xc.name = nil
 if x.Name != "" {
 xc.name = C.CString(x.Name)}
 xc.connection = C.libxl_channel_connection(x.Connection)
@@ -2416,6 +2551,7 @@ if !ok {
 return errors.New("wrong type for union key connection")
 }
 var socket C.libxl_device_channel_connection_union_socket
+socket.path = nil
 if tmp.Path != "" {
 socket.path = C.CString(tmp.Path)}
 socketBytes := C.GoBytes(unsafe.Pointer(&socket),C.sizeof_libxl_device_channel_connection_union_socket)
@@ -2452,6 +2588,7 @@ if err != nil{
 C.libxl_connector_param_dispose(xc)}
 }()
 
+xc.unique_id = nil
 if x.UniqueId != "" {
 xc.unique_id = C.CString(x.UniqueId)}
 xc.width = C.uint32_t(x.Width)
@@ -2497,6 +2634,7 @@ C.libxl_device_vdispl_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 xc.devid = C.libxl_devid(x.Devid)
@@ -2608,6 +2746,7 @@ if err != nil{
 C.libxl_vsnd_stream_dispose(xc)}
 }()
 
+xc.unique_id = nil
 if x.UniqueId != "" {
 xc.unique_id = C.CString(x.UniqueId)}
 xc._type = C.libxl_vsnd_stream_type(x.Type)
@@ -2654,6 +2793,7 @@ if err != nil{
 C.libxl_vsnd_pcm_dispose(xc)}
 }()
 
+xc.name = nil
 if x.Name != "" {
 xc.name = C.CString(x.Name)}
 if err := x.Params.toC(&xc.params); err != nil {
@@ -2714,11 +2854,14 @@ C.libxl_device_vsnd_dispose(xc)}
 }()
 
 xc.backend_domid = C.libxl_domid(x.BackendDomid)
+xc.backend_domname = nil
 if x.BackendDomname != "" {
 xc.backend_domname = C.CString(x.BackendDomname)}
 xc.devid = C.libxl_devid(x.Devid)
+xc.short_name = nil
 if x.ShortName != "" {
 xc.short_name = C.CString(x.ShortName)}
+xc.long_name = nil
 if x.LongName != "" {
 xc.long_name = C.CString(x.LongName)}
 if err := x.Params.toC(&xc.params); err != nil {
@@ -3103,9 +3246,11 @@ if err != nil{
 C.libxl_diskinfo_dispose(xc)}
 }()
 
+xc.backend = nil
 if x.Backend != "" {
 xc.backend = C.CString(x.Backend)}
 xc.backend_id = C.uint32_t(x.BackendId)
+xc.frontend = nil
 if x.Frontend != "" {
 xc.frontend = C.CString(x.Frontend)}
 xc.frontend_id = C.uint32_t(x.FrontendId)
@@ -3149,9 +3294,11 @@ if err != nil{
 C.libxl_nicinfo_dispose(xc)}
 }()
 
+xc.backend = nil
 if x.Backend != "" {
 xc.backend = C.CString(x.Backend)}
 xc.backend_id = C.uint32_t(x.BackendId)
+xc.frontend = nil
 if x.Frontend != "" {
 xc.frontend = C.CString(x.Frontend)}
 xc.frontend_id = C.uint32_t(x.FrontendId)
@@ -3198,9 +3345,11 @@ if err != nil{
 C.libxl_vtpminfo_dispose(xc)}
 }()
 
+xc.backend = nil
 if x.Backend != "" {
 xc.backend = C.CString(x.Backend)}
 xc.backend_id = C.uint32_t(x.BackendId)
+xc.frontend = nil
 if x.Frontend != "" {
 xc.frontend = C.CString(x.Frontend)}
 xc.frontend_id = C.uint32_t(x.FrontendId)
@@ -3254,9 +3403,11 @@ xc._type = C.libxl_usbctrl_type(x.Type)
 xc.devid = C.libxl_devid(x.Devid)
 xc.version = C.int(x.Version)
 xc.ports = C.int(x.Ports)
+xc.backend = nil
 if x.Backend != "" {
 xc.backend = C.CString(x.Backend)}
 xc.backend_id = C.uint32_t(x.BackendId)
+xc.frontend = nil
 if x.Frontend != "" {
 xc.frontend = C.CString(x.Frontend)}
 xc.frontend_id = C.uint32_t(x.FrontendId)
@@ -3422,6 +3573,7 @@ if err != nil{
 C.libxl_connectorinfo_dispose(xc)}
 }()
 
+xc.unique_id = nil
 if x.UniqueId != "" {
 xc.unique_id = C.CString(x.UniqueId)}
 xc.width = C.uint32_t(x.Width)
@@ -3473,9 +3625,11 @@ if err != nil{
 C.libxl_vdisplinfo_dispose(xc)}
 }()
 
+xc.backend = nil
 if x.Backend != "" {
 xc.backend = C.CString(x.Backend)}
 xc.backend_id = C.uint32_t(x.BackendId)
+xc.frontend = nil
 if x.Frontend != "" {
 xc.frontend = C.CString(x.Frontend)}
 xc.frontend_id = C.uint32_t(x.FrontendId)
@@ -3611,9 +3765,11 @@ if err != nil{
 C.libxl_vsndinfo_dispose(xc)}
 }()
 
+xc.backend = nil
 if x.Backend != "" {
 xc.backend = C.CString(x.Backend)}
 xc.backend_id = C.uint32_t(x.BackendId)
+xc.frontend = nil
 if x.Frontend != "" {
 xc.frontend = C.CString(x.Frontend)}
 xc.frontend_id = C.uint32_t(x.FrontendId)
@@ -3664,9 +3820,11 @@ if err != nil{
 C.libxl_vkbinfo_dispose(xc)}
 }()
 
+xc.backend = nil
 if x.Backend != "" {
 xc.backend = C.CString(x.Backend)}
 xc.backend_id = C.uint32_t(x.BackendId)
+xc.frontend = nil
 if x.Frontend != "" {
 xc.frontend = C.CString(x.Frontend)}
 xc.frontend_id = C.uint32_t(x.FrontendId)
@@ -3902,6 +4060,7 @@ return fmt.Errorf("converting field Compression: %v", err)
 if err := x.Netbuf.toC(&xc.netbuf); err != nil {
 return fmt.Errorf("converting field Netbuf: %v", err)
 }
+xc.netbufscript = nil
 if x.Netbufscript != "" {
 xc.netbufscript = C.CString(x.Netbufscript)}
 if err := x.Diskbuf.toC(&xc.diskbuf); err != nil {
@@ -4035,6 +4194,7 @@ if !ok {
 return errors.New("wrong type for union key type")
 }
 var disk_eject C.libxl_event_type_union_disk_eject
+disk_eject.vdev = nil
 if tmp.Vdev != "" {
 disk_eject.vdev = C.CString(tmp.Vdev)}
 if err := tmp.Disk.toC(&disk_eject.disk); err != nil {
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:38:02 2021
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 04/12] golang/xenlight: export keyed union interface types
Date: Mon, 24 May 2021 16:36:45 -0400
Message-Id: <29a3fbc93262cb9b31f02d6c94c018b200dfa43e.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

For structs that have a keyed union, e.g. DomainBuildInfo, the TypeUnion
field must be exported so that package users can get/set the fields
within. This means that users are aware of the existence of the
interface type used in those fields (see [1]), so it is awkward that the
interface itself is not exported. However, the single method within the
interface must remain unexported so that users cannot mistakenly "implement"
those interfaces.

Since there seems to be no reason to do otherwise, export the keyed
union interface types.

[1] https://pkg.go.dev/xenbits.xenproject.org/git-http/xen.git/tools/golang/xenlight?tab=doc#DeviceUsbdev
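
The export-with-unexported-method pattern described above can be sketched as follows. The type, method, and field names are taken from the generated code in this series; main() and the example value are illustrative only:

```go
package main

import "fmt"

// The interface type is exported so users can read and assign TypeUnion
// fields, but its single method is unexported, so only types declared
// inside this package can implement it (a "sealed" interface).
type DomainBuildInfoTypeUnion interface {
	isDomainBuildInfoTypeUnion()
}

type DomainBuildInfoTypeUnionPvh struct {
	PvshimPath string
}

func (x DomainBuildInfoTypeUnionPvh) isDomainBuildInfoTypeUnion() {}

func main() {
	// Users can name the interface type and type-switch on its variants,
	// but cannot mistakenly add implementations from outside the package.
	var u DomainBuildInfoTypeUnion = DomainBuildInfoTypeUnionPvh{PvshimPath: "/pvshim"}
	switch v := u.(type) {
	case DomainBuildInfoTypeUnionPvh:
		fmt.Println(v.PvshimPath)
	}
}
```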

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/gengotypes.py |  6 +--
 tools/golang/xenlight/types.gen.go  | 58 ++++++++++++++---------------
 2 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index e6daa9b92f..3796632f7d 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -159,7 +159,7 @@ def xenlight_golang_define_union(ty = None, struct_name = '', union_name = ''):
     extras = []
 
     interface_name = '{0}_{1}_union'.format(struct_name, ty.keyvar.name)
-    interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
+    interface_name = xenlight_golang_fmt_name(interface_name)
 
     s += 'type {0} interface {{\n'.format(interface_name)
     s += 'is{0}()\n'.format(interface_name)
@@ -341,7 +341,7 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
     field_name = xenlight_golang_fmt_name('{0}_union'.format(keyname))
 
     interface_name = '{0}_{1}_union'.format(struct_name, keyname)
-    interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
+    interface_name = xenlight_golang_fmt_name(interface_name)
 
     cgo_keyname = keyname
     if cgo_keyname in go_keywords:
@@ -546,7 +546,7 @@ def xenlight_golang_union_to_C(ty = None, union_name = '',
     gokeytype = xenlight_golang_fmt_name(keytype)
 
     interface_name = '{0}_{1}_union'.format(struct_name, keyname)
-    interface_name = xenlight_golang_fmt_name(interface_name, exported=False)
+    interface_name = xenlight_golang_fmt_name(interface_name)
 
     cgo_keyname = keyname
     if cgo_keyname in go_keywords:
diff --git a/tools/golang/xenlight/types.gen.go b/tools/golang/xenlight/types.gen.go
index f2ceceb61c..a214dd9df6 100644
--- a/tools/golang/xenlight/types.gen.go
+++ b/tools/golang/xenlight/types.gen.go
@@ -337,18 +337,18 @@ State int
 Evtch int
 Rref int
 Connection ChannelConnection
-ConnectionUnion channelinfoConnectionUnion
+ConnectionUnion ChannelinfoConnectionUnion
 }
 
-type channelinfoConnectionUnion interface {
-ischannelinfoConnectionUnion()
+type ChannelinfoConnectionUnion interface {
+isChannelinfoConnectionUnion()
 }
 
 type ChannelinfoConnectionUnionPty struct {
 Path string
 }
 
-func (x ChannelinfoConnectionUnionPty) ischannelinfoConnectionUnion(){}
+func (x ChannelinfoConnectionUnionPty) isChannelinfoConnectionUnion(){}
 
 type Vminfo struct {
 Uuid Uuid
@@ -510,7 +510,7 @@ Apic Defbool
 DmRestrict Defbool
 Tee TeeType
 Type DomainType
-TypeUnion domainBuildInfoTypeUnion
+TypeUnion DomainBuildInfoTypeUnion
 ArchArm struct {
 GicVersion GicVersion
 Vuart VuartType
@@ -522,8 +522,8 @@ Altp2M Altp2MMode
 VmtraceBufKb int
 }
 
-type domainBuildInfoTypeUnion interface {
-isdomainBuildInfoTypeUnion()
+type DomainBuildInfoTypeUnion interface {
+isDomainBuildInfoTypeUnion()
 }
 
 type DomainBuildInfoTypeUnionHvm struct {
@@ -575,7 +575,7 @@ RdmMemBoundaryMemkb uint64
 McaCaps uint64
 }
 
-func (x DomainBuildInfoTypeUnionHvm) isdomainBuildInfoTypeUnion(){}
+func (x DomainBuildInfoTypeUnionHvm) isDomainBuildInfoTypeUnion(){}
 
 type DomainBuildInfoTypeUnionPv struct {
 Kernel string
@@ -588,7 +588,7 @@ Features string
 E820Host Defbool
 }
 
-func (x DomainBuildInfoTypeUnionPv) isdomainBuildInfoTypeUnion(){}
+func (x DomainBuildInfoTypeUnionPv) isDomainBuildInfoTypeUnion(){}
 
 type DomainBuildInfoTypeUnionPvh struct {
 Pvshim Defbool
@@ -597,7 +597,7 @@ PvshimCmdline string
 PvshimExtra string
 }
 
-func (x DomainBuildInfoTypeUnionPvh) isdomainBuildInfoTypeUnion(){}
+func (x DomainBuildInfoTypeUnionPvh) isDomainBuildInfoTypeUnion(){}
 
 type DeviceVfb struct {
 BackendDomid Domid
@@ -761,11 +761,11 @@ type DeviceUsbdev struct {
 Ctrl Devid
 Port int
 Type UsbdevType
-TypeUnion deviceUsbdevTypeUnion
+TypeUnion DeviceUsbdevTypeUnion
 }
 
-type deviceUsbdevTypeUnion interface {
-isdeviceUsbdevTypeUnion()
+type DeviceUsbdevTypeUnion interface {
+isDeviceUsbdevTypeUnion()
 }
 
 type DeviceUsbdevTypeUnionHostdev struct {
@@ -773,7 +773,7 @@ Hostbus byte
 Hostaddr byte
 }
 
-func (x DeviceUsbdevTypeUnionHostdev) isdeviceUsbdevTypeUnion(){}
+func (x DeviceUsbdevTypeUnionHostdev) isDeviceUsbdevTypeUnion(){}
 
 type DeviceDtdev struct {
 Path string
@@ -807,18 +807,18 @@ BackendDomname string
 Devid Devid
 Name string
 Connection ChannelConnection
-ConnectionUnion deviceChannelConnectionUnion
+ConnectionUnion DeviceChannelConnectionUnion
 }
 
-type deviceChannelConnectionUnion interface {
-isdeviceChannelConnectionUnion()
+type DeviceChannelConnectionUnion interface {
+isDeviceChannelConnectionUnion()
 }
 
 type DeviceChannelConnectionUnionSocket struct {
 Path string
 }
 
-func (x DeviceChannelConnectionUnionSocket) isdeviceChannelConnectionUnion(){}
+func (x DeviceChannelConnectionUnionSocket) isDeviceChannelConnectionUnion(){}
 
 type ConnectorParam struct {
 UniqueId string
@@ -1116,31 +1116,31 @@ Domid Domid
 Domuuid Uuid
 ForUser uint64
 Type EventType
-TypeUnion eventTypeUnion
+TypeUnion EventTypeUnion
 }
 
-type eventTypeUnion interface {
-iseventTypeUnion()
+type EventTypeUnion interface {
+isEventTypeUnion()
 }
 
 type EventTypeUnionDomainShutdown struct {
 ShutdownReason byte
 }
 
-func (x EventTypeUnionDomainShutdown) iseventTypeUnion(){}
+func (x EventTypeUnionDomainShutdown) isEventTypeUnion(){}
 
 type EventTypeUnionDiskEject struct {
 Vdev string
 Disk DeviceDisk
 }
 
-func (x EventTypeUnionDiskEject) iseventTypeUnion(){}
+func (x EventTypeUnionDiskEject) isEventTypeUnion(){}
 
 type EventTypeUnionOperationComplete struct {
 Rc int
 }
 
-func (x EventTypeUnionOperationComplete) iseventTypeUnion(){}
+func (x EventTypeUnionOperationComplete) isEventTypeUnion(){}
 
 type PsrCmtType int
 const(
@@ -1175,11 +1175,11 @@ PsrFeatTypeMba PsrFeatType = 2
 type PsrHwInfo struct {
 Id uint32
 Type PsrFeatType
-TypeUnion psrHwInfoTypeUnion
+TypeUnion PsrHwInfoTypeUnion
 }
 
-type psrHwInfoTypeUnion interface {
-ispsrHwInfoTypeUnion()
+type PsrHwInfoTypeUnion interface {
+isPsrHwInfoTypeUnion()
 }
 
 type PsrHwInfoTypeUnionCat struct {
@@ -1188,7 +1188,7 @@ CbmLen uint32
 CdpEnabled bool
 }
 
-func (x PsrHwInfoTypeUnionCat) ispsrHwInfoTypeUnion(){}
+func (x PsrHwInfoTypeUnionCat) isPsrHwInfoTypeUnion(){}
 
 type PsrHwInfoTypeUnionMba struct {
 CosMax uint32
@@ -1196,5 +1196,5 @@ ThrtlMax uint32
 Linear bool
 }
 
-func (x PsrHwInfoTypeUnionMba) ispsrHwInfoTypeUnion(){}
+func (x PsrHwInfoTypeUnionMba) isPsrHwInfoTypeUnion(){}
 
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:38:07 2021
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 05/12] golang/xenlight: use struct pointers in keyed union fields
Date: Mon, 24 May 2021 16:36:46 -0400
Message-Id: <ebeb085b9b4b5d3dddd66607b409590f5e7cdfc6.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

Currently, when marshaling Go types with keyed union fields, we assign the
value of the struct (e.g. DomainBuildInfoTypeUnionHvm) which implements the
interface of the keyed union field (e.g. DomainBuildInfoTypeUnion).
As-is, this means that if a populated DomainBuildInfo is marshaled to
e.g. JSON, unmarshaling back to DomainBuildInfo will fail.

When the encoding/json package unmarshals data into a Go type and
encounters a JSON object, it can decode that object into an empty
interface, a map, or a struct. It cannot, however, decode an object
into an interface with at least one method defined on it (e.g.
DomainBuildInfoTypeUnion). Before performing this check, though, the
decoder first checks whether the Go value is a pointer, and dereferences
it if so. It then uses the type of the dereferenced value as the
"target" type.

This means that if the TypeUnion field is populated with a
DomainBuildInfoTypeUnion, the decoder will see a non-empty interface and
fail. If the TypeUnion field is populated with a
*DomainBuildInfoTypeUnionHvm, it dereferences the pointer and sees a
struct instead, allowing decoding to continue normally.
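The behavior can be sketched with hypothetical stand-in types (typeUnion,
hvm, and info below are illustrative only, not the generated xenlight
names): a round trip fails when the one-method interface field holds a
struct value (or nothing), but succeeds when it is pre-populated with a
pointer to the implementing struct.

```go
package main

import (
	"encoding/json"
	"fmt"
)

// typeUnion stands in for a generated keyed-union interface; it has one
// method, so encoding/json refuses to decode an object into it directly.
type typeUnion interface{ isTypeUnion() }

// hvm stands in for a generated union member struct.
type hvm struct {
	Firmware string
}

func (hvm) isTypeUnion() {}

// info stands in for a generated type with a keyed union field.
type info struct {
	TypeUnion typeUnion
}

func main() {
	b, err := json.Marshal(info{TypeUnion: hvm{Firmware: "bios"}})
	if err != nil {
		panic(err)
	}

	// Decoding back into a zero-value info fails: the decoder sees a
	// non-empty interface as the target type.
	var byValue info
	fmt.Println(json.Unmarshal(b, &byValue) != nil)

	// Pre-populating the field with a *pointer* lets the decoder
	// dereference it, see a struct, and decode normally.
	byPointer := info{TypeUnion: &hvm{}}
	fmt.Println(json.Unmarshal(b, &byPointer), byPointer.TypeUnion.(*hvm).Firmware)
}
```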

Since there does not appear to be a strict need to avoid pointers in
these fields, update the code generation to set keyed union fields to
pointers to their implementing structs.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/gengotypes.py  |  4 +--
 tools/golang/xenlight/helpers.gen.go | 44 ++++++++++++++--------------
 2 files changed, 24 insertions(+), 24 deletions(-)

diff --git a/tools/golang/xenlight/gengotypes.py b/tools/golang/xenlight/gengotypes.py
index 3796632f7d..57f2576468 100644
--- a/tools/golang/xenlight/gengotypes.py
+++ b/tools/golang/xenlight/gengotypes.py
@@ -404,7 +404,7 @@ def xenlight_golang_union_from_C(ty = None, union_name = '', struct_name = ''):
         s += 'if err := {0}.fromC(xc);'.format(goname)
         s += 'err != nil {{\n return fmt.Errorf("converting field {0}: %v", err)\n}}\n'.format(goname)
 
-        s += 'x.{0} = {1}\n'.format(field_name, goname)
+        s += 'x.{0} = &{1}\n'.format(field_name, goname)
 
     # End switch statement
     s += 'default:\n'
@@ -571,7 +571,7 @@ def xenlight_golang_union_to_C(ty = None, union_name = '',
         gotype  = xenlight_golang_fmt_name(cgotype)
 
         field_name = xenlight_golang_fmt_name('{0}_union'.format(keyname))
-        s += 'tmp, ok := x.{0}.({1})\n'.format(field_name,gotype)
+        s += 'tmp, ok := x.{0}.(*{1})\n'.format(field_name,gotype)
         s += 'if !ok {\n'
         s += 'return errors.New("wrong type for union key {0}")\n'.format(keyname)
         s += '}\n'
diff --git a/tools/golang/xenlight/helpers.gen.go b/tools/golang/xenlight/helpers.gen.go
index 5222898fb8..8fc5ec1649 100644
--- a/tools/golang/xenlight/helpers.gen.go
+++ b/tools/golang/xenlight/helpers.gen.go
@@ -443,7 +443,7 @@ var connectionPty ChannelinfoConnectionUnionPty
 if err := connectionPty.fromC(xc);err != nil {
  return fmt.Errorf("converting field connectionPty: %v", err)
 }
-x.ConnectionUnion = connectionPty
+x.ConnectionUnion = &connectionPty
 case ChannelConnectionSocket:
 x.ConnectionUnion = nil
 case ChannelConnectionUnknown:
@@ -485,7 +485,7 @@ switch x.Connection{
 case ChannelConnectionUnknown:
 break
 case ChannelConnectionPty:
-tmp, ok := x.ConnectionUnion.(ChannelinfoConnectionUnionPty)
+tmp, ok := x.ConnectionUnion.(*ChannelinfoConnectionUnionPty)
 if !ok {
 return errors.New("wrong type for union key connection")
 }
@@ -1120,7 +1120,7 @@ var typeHvm DomainBuildInfoTypeUnionHvm
 if err := typeHvm.fromC(xc);err != nil {
  return fmt.Errorf("converting field typeHvm: %v", err)
 }
-x.TypeUnion = typeHvm
+x.TypeUnion = &typeHvm
 case DomainTypeInvalid:
 x.TypeUnion = nil
 case DomainTypePv:
@@ -1128,13 +1128,13 @@ var typePv DomainBuildInfoTypeUnionPv
 if err := typePv.fromC(xc);err != nil {
  return fmt.Errorf("converting field typePv: %v", err)
 }
-x.TypeUnion = typePv
+x.TypeUnion = &typePv
 case DomainTypePvh:
 var typePvh DomainBuildInfoTypeUnionPvh
 if err := typePvh.fromC(xc);err != nil {
  return fmt.Errorf("converting field typePvh: %v", err)
 }
-x.TypeUnion = typePvh
+x.TypeUnion = &typePvh
 default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 x.ArchArm.GicVersion = GicVersion(xc.arch_arm.gic_version)
@@ -1465,7 +1465,7 @@ xc.tee = C.libxl_tee_type(x.Tee)
 xc._type = C.libxl_domain_type(x.Type)
 switch x.Type{
 case DomainTypeHvm:
-tmp, ok := x.TypeUnion.(DomainBuildInfoTypeUnionHvm)
+tmp, ok := x.TypeUnion.(*DomainBuildInfoTypeUnionHvm)
 if !ok {
 return errors.New("wrong type for union key type")
 }
@@ -1593,7 +1593,7 @@ hvm.mca_caps = C.uint64_t(tmp.McaCaps)
 hvmBytes := C.GoBytes(unsafe.Pointer(&hvm),C.sizeof_libxl_domain_build_info_type_union_hvm)
 copy(xc.u[:],hvmBytes)
 case DomainTypePv:
-tmp, ok := x.TypeUnion.(DomainBuildInfoTypeUnionPv)
+tmp, ok := x.TypeUnion.(*DomainBuildInfoTypeUnionPv)
 if !ok {
 return errors.New("wrong type for union key type")
 }
@@ -1623,7 +1623,7 @@ return fmt.Errorf("converting field E820Host: %v", err)
 pvBytes := C.GoBytes(unsafe.Pointer(&pv),C.sizeof_libxl_domain_build_info_type_union_pv)
 copy(xc.u[:],pvBytes)
 case DomainTypePvh:
-tmp, ok := x.TypeUnion.(DomainBuildInfoTypeUnionPvh)
+tmp, ok := x.TypeUnion.(*DomainBuildInfoTypeUnionPvh)
 if !ok {
 return errors.New("wrong type for union key type")
 }
@@ -2283,7 +2283,7 @@ var typeHostdev DeviceUsbdevTypeUnionHostdev
 if err := typeHostdev.fromC(xc);err != nil {
  return fmt.Errorf("converting field typeHostdev: %v", err)
 }
-x.TypeUnion = typeHostdev
+x.TypeUnion = &typeHostdev
 default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 
@@ -2310,7 +2310,7 @@ xc.port = C.int(x.Port)
 xc._type = C.libxl_usbdev_type(x.Type)
 switch x.Type{
 case UsbdevTypeHostdev:
-tmp, ok := x.TypeUnion.(DeviceUsbdevTypeUnionHostdev)
+tmp, ok := x.TypeUnion.(*DeviceUsbdevTypeUnionHostdev)
 if !ok {
 return errors.New("wrong type for union key type")
 }
@@ -2508,7 +2508,7 @@ var connectionSocket DeviceChannelConnectionUnionSocket
 if err := connectionSocket.fromC(xc);err != nil {
  return fmt.Errorf("converting field connectionSocket: %v", err)
 }
-x.ConnectionUnion = connectionSocket
+x.ConnectionUnion = &connectionSocket
 case ChannelConnectionUnknown:
 x.ConnectionUnion = nil
 default:
@@ -2546,7 +2546,7 @@ break
 case ChannelConnectionPty:
 break
 case ChannelConnectionSocket:
-tmp, ok := x.ConnectionUnion.(DeviceChannelConnectionUnionSocket)
+tmp, ok := x.ConnectionUnion.(*DeviceChannelConnectionUnionSocket)
 if !ok {
 return errors.New("wrong type for union key connection")
 }
@@ -4107,7 +4107,7 @@ var typeDiskEject EventTypeUnionDiskEject
 if err := typeDiskEject.fromC(xc);err != nil {
  return fmt.Errorf("converting field typeDiskEject: %v", err)
 }
-x.TypeUnion = typeDiskEject
+x.TypeUnion = &typeDiskEject
 case EventTypeDomainCreateConsoleAvailable:
 x.TypeUnion = nil
 case EventTypeDomainDeath:
@@ -4117,13 +4117,13 @@ var typeDomainShutdown EventTypeUnionDomainShutdown
 if err := typeDomainShutdown.fromC(xc);err != nil {
  return fmt.Errorf("converting field typeDomainShutdown: %v", err)
 }
-x.TypeUnion = typeDomainShutdown
+x.TypeUnion = &typeDomainShutdown
 case EventTypeOperationComplete:
 var typeOperationComplete EventTypeUnionOperationComplete
 if err := typeOperationComplete.fromC(xc);err != nil {
  return fmt.Errorf("converting field typeOperationComplete: %v", err)
 }
-x.TypeUnion = typeOperationComplete
+x.TypeUnion = &typeOperationComplete
 default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 
@@ -4178,7 +4178,7 @@ xc.for_user = C.uint64_t(x.ForUser)
 xc._type = C.libxl_event_type(x.Type)
 switch x.Type{
 case EventTypeDomainShutdown:
-tmp, ok := x.TypeUnion.(EventTypeUnionDomainShutdown)
+tmp, ok := x.TypeUnion.(*EventTypeUnionDomainShutdown)
 if !ok {
 return errors.New("wrong type for union key type")
 }
@@ -4189,7 +4189,7 @@ copy(xc.u[:],domain_shutdownBytes)
 case EventTypeDomainDeath:
 break
 case EventTypeDiskEject:
-tmp, ok := x.TypeUnion.(EventTypeUnionDiskEject)
+tmp, ok := x.TypeUnion.(*EventTypeUnionDiskEject)
 if !ok {
 return errors.New("wrong type for union key type")
 }
@@ -4203,7 +4203,7 @@ return fmt.Errorf("converting field Disk: %v", err)
 disk_ejectBytes := C.GoBytes(unsafe.Pointer(&disk_eject),C.sizeof_libxl_event_type_union_disk_eject)
 copy(xc.u[:],disk_ejectBytes)
 case EventTypeOperationComplete:
-tmp, ok := x.TypeUnion.(EventTypeUnionOperationComplete)
+tmp, ok := x.TypeUnion.(*EventTypeUnionOperationComplete)
 if !ok {
 return errors.New("wrong type for union key type")
 }
@@ -4278,13 +4278,13 @@ var typeCat PsrHwInfoTypeUnionCat
 if err := typeCat.fromC(xc);err != nil {
  return fmt.Errorf("converting field typeCat: %v", err)
 }
-x.TypeUnion = typeCat
+x.TypeUnion = &typeCat
 case PsrFeatTypeMba:
 var typeMba PsrHwInfoTypeUnionMba
 if err := typeMba.fromC(xc);err != nil {
  return fmt.Errorf("converting field typeMba: %v", err)
 }
-x.TypeUnion = typeMba
+x.TypeUnion = &typeMba
 default:
 return fmt.Errorf("invalid union key '%v'", x.Type)}
 
@@ -4323,7 +4323,7 @@ xc.id = C.uint32_t(x.Id)
 xc._type = C.libxl_psr_feat_type(x.Type)
 switch x.Type{
 case PsrFeatTypeCat:
-tmp, ok := x.TypeUnion.(PsrHwInfoTypeUnionCat)
+tmp, ok := x.TypeUnion.(*PsrHwInfoTypeUnionCat)
 if !ok {
 return errors.New("wrong type for union key type")
 }
@@ -4334,7 +4334,7 @@ cat.cdp_enabled = C.bool(tmp.CdpEnabled)
 catBytes := C.GoBytes(unsafe.Pointer(&cat),C.sizeof_libxl_psr_hw_info_type_union_cat)
 copy(xc.u[:],catBytes)
 case PsrFeatTypeMba:
-tmp, ok := x.TypeUnion.(PsrHwInfoTypeUnionMba)
+tmp, ok := x.TypeUnion.(*PsrHwInfoTypeUnionMba)
 if !ok {
 return errors.New("wrong type for union key type")
 }
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:38:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:38:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131889.246326 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHL1-0004DV-UN; Mon, 24 May 2021 20:38:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131889.246326; Mon, 24 May 2021 20:38:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHL1-0004DI-Qf; Mon, 24 May 2021 20:38:11 +0000
Received: by outflank-mailman (input) for mailman id 131889;
 Mon, 24 May 2021 20:38:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+P1=KT=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1llHL0-0001ey-SK
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 20:38:10 +0000
Received: from mail-qt1-x836.google.com (unknown [2607:f8b0:4864:20::836])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5820f8d2-f153-4cae-a01a-6ee81b2b2caa;
 Mon, 24 May 2021 20:37:46 +0000 (UTC)
Received: by mail-qt1-x836.google.com with SMTP id i19so15076025qtw.9
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 13:37:46 -0700 (PDT)
Received: from localhost.localdomain (c-73-89-138-5.hsd1.vt.comcast.net.
 [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id t25sm5142847qkt.62.2021.05.24.13.37.44
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 13:37:45 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5820f8d2-f153-4cae-a01a-6ee81b2b2caa
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :in-reply-to:references;
        bh=x72Z2VWLOz6ahDBGysPBZw8qVE2KpRZgT2UaiLPYGoc=;
        b=Cw0ZSYaofRCWAfHDy+CfgskUQqQQ06W+8/xlsBgcRaS97XlztjW7YWKa1MHP8gpWoe
         s59XcFI0Wf7/Bd3r5dI5F9M4vXsX+3AIAZBnJ3tX9TCaEkBe5jVlCTnwvoSovbAwgh8+
         DgRUhTLxtgMDzHqmydBsweA1/1sAKzDxzblK8AUg1ikkmcxArOzva2irjDvDYtmb7hks
         8J4FZ8IAtooU8ooW1DBRlAnTBabW8MdwmxJfihKNQsn9F7065O+bNgiokjcKeFwYWRt+
         nHTE70wZN+gdWcUmFgRsa+vScoPQUCl3y56tL4de3Nvz6ej+h/AeMzuHI2ujREYbFfxC
         Dwfg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:in-reply-to:references;
        bh=x72Z2VWLOz6ahDBGysPBZw8qVE2KpRZgT2UaiLPYGoc=;
        b=gE9J4upgLHMMwgQQkZ2kffnuSzlAzJd2NqNOV+tilgM6HQYeOlQ7pflws3R7YSBNIG
         XGfLMrS00rZVyF96uUKv4hXMIrOX6jE7F4I4/HKFj8KjdiutXJmcCFiVUbSVsDr7mutB
         OxtN6WA7sVWuLnpu4YYwCmTzqp9HFA9h29q+6yPT8nT+sH2yps85qEGExLWk+yjknsUM
         X6Wne66GIDbEFUegNzuPUyUop3HFzWPxFb2cUNzhMrihRCsoCnRjVvJIKDT25upm3osa
         oteFNSWCQVuVbl9R5tqicWTwNmtZpbg/B9kV1pAPX4INS7ZgAq+4I6h0KQK3H0y1Hr6g
         cNcw==
X-Gm-Message-State: AOAM531J55qJS349PPYtw7qjCgZWzSRC9VMboJWZFTeQ1obDaykiQxED
	2CTj8fscKjZvnmRNsXF6mEQ=
X-Google-Smtp-Source: ABdhPJx6M1UNe75t87tnAzF5iaZq7xIMYUp7nUh2k24anPFVVcKb5qYcyaVrUQLtzGw5fXOK655yFA==
X-Received: by 2002:a05:622a:b:: with SMTP id x11mr28993947qtw.272.1621888665927;
        Mon, 24 May 2021 13:37:45 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 06/12] golang/xenlight: rename Ctx receivers to ctx
Date: Mon, 24 May 2021 16:36:47 -0400
Message-Id: <c1f7b48068d3855f48f818d93ddd23638a0f9f70.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

As a matter of style, it is strange to see capitalized receiver names,
due to the significance of capitalized symbols in Go (although there is
in fact nothing special about a capitalized receiver name). Fix this in
xenlight.go by running:

  gofmt -w -r 'Ctx -> ctx' xenlight.go

from tools/golang/xenlight. There is no functional change.
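The point that receiver names carry no special meaning can be sketched with
a minimal (hypothetical, not from xenlight) example: a capitalized receiver
name compiles and behaves identically to a lowercase one, since
capitalization only affects the visibility of package-level identifiers,
fields, and methods, not local names.

```go
package main

import "fmt"

type Context struct{ n int }

// "Ctx" here is just a local name for the receiver; capitalizing it does
// not export anything or change semantics, it merely reads oddly.
func (Ctx *Context) Bump() int {
	Ctx.n++
	return Ctx.n
}

func main() {
	c := &Context{}
	fmt.Println(c.Bump())
}
```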

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 154 +++++++++++++++---------------
 1 file changed, 77 insertions(+), 77 deletions(-)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index 13171d0ad1..fc3eb0bf3f 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -203,13 +203,13 @@ type Domid uint32
 // NameToDomid does not guarantee that the domid associated with name at
 // the time NameToDomid is called is the same as the domid associated with
 // name at the time NameToDomid returns.
-func (Ctx *Context) NameToDomid(name string) (Domid, error) {
+func (ctx *Context) NameToDomid(name string) (Domid, error) {
 	var domid C.uint32_t
 
 	cname := C.CString(name)
 	defer C.free(unsafe.Pointer(cname))
 
-	if ret := C.libxl_name_to_domid(Ctx.ctx, cname, &domid); ret != 0 {
+	if ret := C.libxl_name_to_domid(ctx.ctx, cname, &domid); ret != 0 {
 		return DomidInvalid, Error(ret)
 	}
 
@@ -223,8 +223,8 @@ func (Ctx *Context) NameToDomid(name string) (Domid, error) {
 // DomidToName does not guarantee that the name (if any) associated with domid
 // at the time DomidToName is called is the same as the name (if any) associated
 // with domid at the time DomidToName returns.
-func (Ctx *Context) DomidToName(domid Domid) string {
-	cname := C.libxl_domid_to_name(Ctx.ctx, C.uint32_t(domid))
+func (ctx *Context) DomidToName(domid Domid) string {
+	cname := C.libxl_domid_to_name(ctx.ctx, C.uint32_t(domid))
 	defer C.free(unsafe.Pointer(cname))
 
 	return C.GoString(cname)
@@ -594,10 +594,10 @@ func SchedulerFromString(name string) (s Scheduler, err error) {
 
 // libxl_cpupoolinfo * libxl_list_cpupool(libxl_ctx*, int *nb_pool_out);
 // void libxl_cpupoolinfo_list_free(libxl_cpupoolinfo *list, int nb_pool);
-func (Ctx *Context) ListCpupool() (list []Cpupoolinfo) {
+func (ctx *Context) ListCpupool() (list []Cpupoolinfo) {
 	var nbPool C.int
 
-	c_cpupool_list := C.libxl_list_cpupool(Ctx.ctx, &nbPool)
+	c_cpupool_list := C.libxl_list_cpupool(ctx.ctx, &nbPool)
 
 	defer C.libxl_cpupoolinfo_list_free(c_cpupool_list, nbPool)
 
@@ -617,10 +617,10 @@ func (Ctx *Context) ListCpupool() (list []Cpupoolinfo) {
 }
 
 // int libxl_cpupool_info(libxl_ctx *ctx, libxl_cpupoolinfo *info, uint32_t poolid);
-func (Ctx *Context) CpupoolInfo(Poolid uint32) (pool Cpupoolinfo, err error) {
+func (ctx *Context) CpupoolInfo(Poolid uint32) (pool Cpupoolinfo, err error) {
 	var c_cpupool C.libxl_cpupoolinfo
 
-	ret := C.libxl_cpupool_info(Ctx.ctx, &c_cpupool, C.uint32_t(Poolid))
+	ret := C.libxl_cpupool_info(ctx.ctx, &c_cpupool, C.uint32_t(Poolid))
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -638,7 +638,7 @@ func (Ctx *Context) CpupoolInfo(Poolid uint32) (pool Cpupoolinfo, err error) {
 //                          uint32_t *poolid);
 // FIXME: uuid
 // FIXME: Setting poolid
-func (Ctx *Context) CpupoolCreate(Name string, Scheduler Scheduler, Cpumap Bitmap) (err error, Poolid uint32) {
+func (ctx *Context) CpupoolCreate(Name string, Scheduler Scheduler, Cpumap Bitmap) (err error, Poolid uint32) {
 	poolid := C.uint32_t(C.LIBXL_CPUPOOL_POOLID_ANY)
 	name := C.CString(Name)
 	defer C.free(unsafe.Pointer(name))
@@ -653,7 +653,7 @@ func (Ctx *Context) CpupoolCreate(Name string, Scheduler Scheduler, Cpumap Bitma
 	}
 	defer C.libxl_bitmap_dispose(&cbm)
 
-	ret := C.libxl_cpupool_create(Ctx.ctx, name, C.libxl_scheduler(Scheduler),
+	ret := C.libxl_cpupool_create(ctx.ctx, name, C.libxl_scheduler(Scheduler),
 		cbm, &uuid, &poolid)
 	if ret != 0 {
 		err = Error(-ret)
@@ -666,8 +666,8 @@ func (Ctx *Context) CpupoolCreate(Name string, Scheduler Scheduler, Cpumap Bitma
 }
 
 // int libxl_cpupool_destroy(libxl_ctx *ctx, uint32_t poolid);
-func (Ctx *Context) CpupoolDestroy(Poolid uint32) (err error) {
-	ret := C.libxl_cpupool_destroy(Ctx.ctx, C.uint32_t(Poolid))
+func (ctx *Context) CpupoolDestroy(Poolid uint32) (err error) {
+	ret := C.libxl_cpupool_destroy(ctx.ctx, C.uint32_t(Poolid))
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -677,8 +677,8 @@ func (Ctx *Context) CpupoolDestroy(Poolid uint32) (err error) {
 }
 
 // int libxl_cpupool_cpuadd(libxl_ctx *ctx, uint32_t poolid, int cpu);
-func (Ctx *Context) CpupoolCpuadd(Poolid uint32, Cpu int) (err error) {
-	ret := C.libxl_cpupool_cpuadd(Ctx.ctx, C.uint32_t(Poolid), C.int(Cpu))
+func (ctx *Context) CpupoolCpuadd(Poolid uint32, Cpu int) (err error) {
+	ret := C.libxl_cpupool_cpuadd(ctx.ctx, C.uint32_t(Poolid), C.int(Cpu))
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -689,14 +689,14 @@ func (Ctx *Context) CpupoolCpuadd(Poolid uint32, Cpu int) (err error) {
 
 // int libxl_cpupool_cpuadd_cpumap(libxl_ctx *ctx, uint32_t poolid,
 //                                 const libxl_bitmap *cpumap);
-func (Ctx *Context) CpupoolCpuaddCpumap(Poolid uint32, Cpumap Bitmap) (err error) {
+func (ctx *Context) CpupoolCpuaddCpumap(Poolid uint32, Cpumap Bitmap) (err error) {
 	var cbm C.libxl_bitmap
 	if err = Cpumap.toC(&cbm); err != nil {
 		return
 	}
 	defer C.libxl_bitmap_dispose(&cbm)
 
-	ret := C.libxl_cpupool_cpuadd_cpumap(Ctx.ctx, C.uint32_t(Poolid), &cbm)
+	ret := C.libxl_cpupool_cpuadd_cpumap(ctx.ctx, C.uint32_t(Poolid), &cbm)
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -706,8 +706,8 @@ func (Ctx *Context) CpupoolCpuaddCpumap(Poolid uint32, Cpumap Bitmap) (err error
 }
 
 // int libxl_cpupool_cpuremove(libxl_ctx *ctx, uint32_t poolid, int cpu);
-func (Ctx *Context) CpupoolCpuremove(Poolid uint32, Cpu int) (err error) {
-	ret := C.libxl_cpupool_cpuremove(Ctx.ctx, C.uint32_t(Poolid), C.int(Cpu))
+func (ctx *Context) CpupoolCpuremove(Poolid uint32, Cpu int) (err error) {
+	ret := C.libxl_cpupool_cpuremove(ctx.ctx, C.uint32_t(Poolid), C.int(Cpu))
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -718,14 +718,14 @@ func (Ctx *Context) CpupoolCpuremove(Poolid uint32, Cpu int) (err error) {
 
 // int libxl_cpupool_cpuremove_cpumap(libxl_ctx *ctx, uint32_t poolid,
 //                                    const libxl_bitmap *cpumap);
-func (Ctx *Context) CpupoolCpuremoveCpumap(Poolid uint32, Cpumap Bitmap) (err error) {
+func (ctx *Context) CpupoolCpuremoveCpumap(Poolid uint32, Cpumap Bitmap) (err error) {
 	var cbm C.libxl_bitmap
 	if err = Cpumap.toC(&cbm); err != nil {
 		return
 	}
 	defer C.libxl_bitmap_dispose(&cbm)
 
-	ret := C.libxl_cpupool_cpuremove_cpumap(Ctx.ctx, C.uint32_t(Poolid), &cbm)
+	ret := C.libxl_cpupool_cpuremove_cpumap(ctx.ctx, C.uint32_t(Poolid), &cbm)
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -735,11 +735,11 @@ func (Ctx *Context) CpupoolCpuremoveCpumap(Poolid uint32, Cpumap Bitmap) (err er
 }
 
 // int libxl_cpupool_rename(libxl_ctx *ctx, const char *name, uint32_t poolid);
-func (Ctx *Context) CpupoolRename(Name string, Poolid uint32) (err error) {
+func (ctx *Context) CpupoolRename(Name string, Poolid uint32) (err error) {
 	name := C.CString(Name)
 	defer C.free(unsafe.Pointer(name))
 
-	ret := C.libxl_cpupool_rename(Ctx.ctx, name, C.uint32_t(Poolid))
+	ret := C.libxl_cpupool_rename(ctx.ctx, name, C.uint32_t(Poolid))
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -749,10 +749,10 @@ func (Ctx *Context) CpupoolRename(Name string, Poolid uint32) (err error) {
 }
 
 // int libxl_cpupool_cpuadd_node(libxl_ctx *ctx, uint32_t poolid, int node, int *cpus);
-func (Ctx *Context) CpupoolCpuaddNode(Poolid uint32, Node int) (Cpus int, err error) {
+func (ctx *Context) CpupoolCpuaddNode(Poolid uint32, Node int) (Cpus int, err error) {
 	ccpus := C.int(0)
 
-	ret := C.libxl_cpupool_cpuadd_node(Ctx.ctx, C.uint32_t(Poolid), C.int(Node), &ccpus)
+	ret := C.libxl_cpupool_cpuadd_node(ctx.ctx, C.uint32_t(Poolid), C.int(Node), &ccpus)
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -764,10 +764,10 @@ func (Ctx *Context) CpupoolCpuaddNode(Poolid uint32, Node int) (Cpus int, err er
 }
 
 // int libxl_cpupool_cpuremove_node(libxl_ctx *ctx, uint32_t poolid, int node, int *cpus);
-func (Ctx *Context) CpupoolCpuremoveNode(Poolid uint32, Node int) (Cpus int, err error) {
+func (ctx *Context) CpupoolCpuremoveNode(Poolid uint32, Node int) (Cpus int, err error) {
 	ccpus := C.int(0)
 
-	ret := C.libxl_cpupool_cpuremove_node(Ctx.ctx, C.uint32_t(Poolid), C.int(Node), &ccpus)
+	ret := C.libxl_cpupool_cpuremove_node(ctx.ctx, C.uint32_t(Poolid), C.int(Node), &ccpus)
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -779,8 +779,8 @@ func (Ctx *Context) CpupoolCpuremoveNode(Poolid uint32, Node int) (Cpus int, err
 }
 
 // int libxl_cpupool_movedomain(libxl_ctx *ctx, uint32_t poolid, uint32_t domid);
-func (Ctx *Context) CpupoolMovedomain(Poolid uint32, Id Domid) (err error) {
-	ret := C.libxl_cpupool_movedomain(Ctx.ctx, C.uint32_t(Poolid), C.uint32_t(Id))
+func (ctx *Context) CpupoolMovedomain(Poolid uint32, Id Domid) (err error) {
+	ret := C.libxl_cpupool_movedomain(ctx.ctx, C.uint32_t(Poolid), C.uint32_t(Id))
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -792,8 +792,8 @@ func (Ctx *Context) CpupoolMovedomain(Poolid uint32, Id Domid) (err error) {
 //
 // Utility functions
 //
-func (Ctx *Context) CpupoolFindByName(name string) (info Cpupoolinfo, found bool) {
-	plist := Ctx.ListCpupool()
+func (ctx *Context) CpupoolFindByName(name string) (info Cpupoolinfo, found bool) {
+	plist := ctx.ListCpupool()
 
 	for i := range plist {
 		if plist[i].PoolName == name {
@@ -805,14 +805,14 @@ func (Ctx *Context) CpupoolFindByName(name string) (info Cpupoolinfo, found bool
 	return
 }
 
-func (Ctx *Context) CpupoolMakeFree(Cpumap Bitmap) (err error) {
-	plist := Ctx.ListCpupool()
+func (ctx *Context) CpupoolMakeFree(Cpumap Bitmap) (err error) {
+	plist := ctx.ListCpupool()
 
 	for i := range plist {
 		var Intersection Bitmap
 		Intersection = Cpumap.And(plist[i].Cpumap)
 		if !Intersection.IsEmpty() {
-			err = Ctx.CpupoolCpuremoveCpumap(plist[i].Poolid, Intersection)
+			err = ctx.CpupoolCpuremoveCpumap(plist[i].Poolid, Intersection)
 			if err != nil {
 				return
 			}
@@ -940,8 +940,8 @@ func (bm Bitmap) String() (s string) {
 }
 
 //int libxl_get_max_cpus(libxl_ctx *ctx);
-func (Ctx *Context) GetMaxCpus() (maxCpus int, err error) {
-	ret := C.libxl_get_max_cpus(Ctx.ctx)
+func (ctx *Context) GetMaxCpus() (maxCpus int, err error) {
+	ret := C.libxl_get_max_cpus(ctx.ctx)
 	if ret < 0 {
 		err = Error(-ret)
 		return
@@ -951,8 +951,8 @@ func (Ctx *Context) GetMaxCpus() (maxCpus int, err error) {
 }
 
 //int libxl_get_online_cpus(libxl_ctx *ctx);
-func (Ctx *Context) GetOnlineCpus() (onCpus int, err error) {
-	ret := C.libxl_get_online_cpus(Ctx.ctx)
+func (ctx *Context) GetOnlineCpus() (onCpus int, err error) {
+	ret := C.libxl_get_online_cpus(ctx.ctx)
 	if ret < 0 {
 		err = Error(-ret)
 		return
@@ -962,8 +962,8 @@ func (Ctx *Context) GetOnlineCpus() (onCpus int, err error) {
 }
 
 //int libxl_get_max_nodes(libxl_ctx *ctx);
-func (Ctx *Context) GetMaxNodes() (maxNodes int, err error) {
-	ret := C.libxl_get_max_nodes(Ctx.ctx)
+func (ctx *Context) GetMaxNodes() (maxNodes int, err error) {
+	ret := C.libxl_get_max_nodes(ctx.ctx)
 	if ret < 0 {
 		err = Error(-ret)
 		return
@@ -973,9 +973,9 @@ func (Ctx *Context) GetMaxNodes() (maxNodes int, err error) {
 }
 
 //int libxl_get_free_memory(libxl_ctx *ctx, uint64_t *memkb);
-func (Ctx *Context) GetFreeMemory() (memkb uint64, err error) {
+func (ctx *Context) GetFreeMemory() (memkb uint64, err error) {
 	var cmem C.uint64_t
-	ret := C.libxl_get_free_memory(Ctx.ctx, &cmem)
+	ret := C.libxl_get_free_memory(ctx.ctx, &cmem)
 
 	if ret < 0 {
 		err = Error(-ret)
@@ -988,12 +988,12 @@ func (Ctx *Context) GetFreeMemory() (memkb uint64, err error) {
 }
 
 //int libxl_get_physinfo(libxl_ctx *ctx, libxl_physinfo *physinfo)
-func (Ctx *Context) GetPhysinfo() (physinfo *Physinfo, err error) {
+func (ctx *Context) GetPhysinfo() (physinfo *Physinfo, err error) {
 	var cphys C.libxl_physinfo
 	C.libxl_physinfo_init(&cphys)
 	defer C.libxl_physinfo_dispose(&cphys)
 
-	ret := C.libxl_get_physinfo(Ctx.ctx, &cphys)
+	ret := C.libxl_get_physinfo(ctx.ctx, &cphys)
 
 	if ret < 0 {
 		err = Error(ret)
@@ -1005,22 +1005,22 @@ func (Ctx *Context) GetPhysinfo() (physinfo *Physinfo, err error) {
 }
 
 //const libxl_version_info* libxl_get_version_info(libxl_ctx *ctx);
-func (Ctx *Context) GetVersionInfo() (info *VersionInfo, err error) {
+func (ctx *Context) GetVersionInfo() (info *VersionInfo, err error) {
 	var cinfo *C.libxl_version_info
 
-	cinfo = C.libxl_get_version_info(Ctx.ctx)
+	cinfo = C.libxl_get_version_info(ctx.ctx)
 
 	err = info.fromC(cinfo)
 
 	return
 }
 
-func (Ctx *Context) DomainInfo(Id Domid) (di *Dominfo, err error) {
+func (ctx *Context) DomainInfo(Id Domid) (di *Dominfo, err error) {
 	var cdi C.libxl_dominfo
 	C.libxl_dominfo_init(&cdi)
 	defer C.libxl_dominfo_dispose(&cdi)
 
-	ret := C.libxl_domain_info(Ctx.ctx, &cdi, C.uint32_t(Id))
+	ret := C.libxl_domain_info(ctx.ctx, &cdi, C.uint32_t(Id))
 
 	if ret != 0 {
 		err = Error(-ret)
@@ -1032,8 +1032,8 @@ func (Ctx *Context) DomainInfo(Id Domid) (di *Dominfo, err error) {
 	return
 }
 
-func (Ctx *Context) DomainUnpause(Id Domid) (err error) {
-	ret := C.libxl_domain_unpause(Ctx.ctx, C.uint32_t(Id), nil)
+func (ctx *Context) DomainUnpause(Id Domid) (err error) {
+	ret := C.libxl_domain_unpause(ctx.ctx, C.uint32_t(Id), nil)
 
 	if ret != 0 {
 		err = Error(-ret)
@@ -1042,8 +1042,8 @@ func (Ctx *Context) DomainUnpause(Id Domid) (err error) {
 }
 
 //int libxl_domain_pause(libxl_ctx *ctx, uint32_t domain);
-func (Ctx *Context) DomainPause(id Domid) (err error) {
-	ret := C.libxl_domain_pause(Ctx.ctx, C.uint32_t(id), nil)
+func (ctx *Context) DomainPause(id Domid) (err error) {
+	ret := C.libxl_domain_pause(ctx.ctx, C.uint32_t(id), nil)
 
 	if ret != 0 {
 		err = Error(-ret)
@@ -1052,8 +1052,8 @@ func (Ctx *Context) DomainPause(id Domid) (err error) {
 }
 
 //int libxl_domain_shutdown(libxl_ctx *ctx, uint32_t domid);
-func (Ctx *Context) DomainShutdown(id Domid) (err error) {
-	ret := C.libxl_domain_shutdown(Ctx.ctx, C.uint32_t(id), nil)
+func (ctx *Context) DomainShutdown(id Domid) (err error) {
+	ret := C.libxl_domain_shutdown(ctx.ctx, C.uint32_t(id), nil)
 
 	if ret != 0 {
 		err = Error(-ret)
@@ -1062,8 +1062,8 @@ func (Ctx *Context) DomainShutdown(id Domid) (err error) {
 }
 
 //int libxl_domain_reboot(libxl_ctx *ctx, uint32_t domid);
-func (Ctx *Context) DomainReboot(id Domid) (err error) {
-	ret := C.libxl_domain_reboot(Ctx.ctx, C.uint32_t(id), nil)
+func (ctx *Context) DomainReboot(id Domid) (err error) {
+	ret := C.libxl_domain_reboot(ctx.ctx, C.uint32_t(id), nil)
 
 	if ret != 0 {
 		err = Error(-ret)
@@ -1073,9 +1073,9 @@ func (Ctx *Context) DomainReboot(id Domid) (err error) {
 
 //libxl_dominfo * libxl_list_domain(libxl_ctx*, int *nb_domain_out);
 //void libxl_dominfo_list_free(libxl_dominfo *list, int nb_domain);
-func (Ctx *Context) ListDomain() (glist []Dominfo) {
+func (ctx *Context) ListDomain() (glist []Dominfo) {
 	var nbDomain C.int
-	clist := C.libxl_list_domain(Ctx.ctx, &nbDomain)
+	clist := C.libxl_list_domain(ctx.ctx, &nbDomain)
 	defer C.libxl_dominfo_list_free(clist, nbDomain)
 
 	if int(nbDomain) == 0 {
@@ -1095,11 +1095,11 @@ func (Ctx *Context) ListDomain() (glist []Dominfo) {
 //libxl_vcpuinfo *libxl_list_vcpu(libxl_ctx *ctx, uint32_t domid,
 //				int *nb_vcpu, int *nr_cpus_out);
 //void libxl_vcpuinfo_list_free(libxl_vcpuinfo *, int nr_vcpus);
-func (Ctx *Context) ListVcpu(id Domid) (glist []Vcpuinfo) {
+func (ctx *Context) ListVcpu(id Domid) (glist []Vcpuinfo) {
 	var nbVcpu C.int
 	var nrCpu C.int
 
-	clist := C.libxl_list_vcpu(Ctx.ctx, C.uint32_t(id), &nbVcpu, &nrCpu)
+	clist := C.libxl_list_vcpu(ctx.ctx, C.uint32_t(id), &nbVcpu, &nrCpu)
 	defer C.libxl_vcpuinfo_list_free(clist, nbVcpu)
 
 	if int(nbVcpu) == 0 {
@@ -1125,9 +1125,9 @@ func (ct ConsoleType) String() (str string) {
 
 //int libxl_console_get_tty(libxl_ctx *ctx, uint32_t domid, int cons_num,
 //libxl_console_type type, char **path);
-func (Ctx *Context) ConsoleGetTty(id Domid, consNum int, conType ConsoleType) (path string, err error) {
+func (ctx *Context) ConsoleGetTty(id Domid, consNum int, conType ConsoleType) (path string, err error) {
 	var cpath *C.char
-	ret := C.libxl_console_get_tty(Ctx.ctx, C.uint32_t(id), C.int(consNum), C.libxl_console_type(conType), &cpath)
+	ret := C.libxl_console_get_tty(ctx.ctx, C.uint32_t(id), C.int(consNum), C.libxl_console_type(conType), &cpath)
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -1140,9 +1140,9 @@ func (Ctx *Context) ConsoleGetTty(id Domid, consNum int, conType ConsoleType) (p
 
 //int libxl_primary_console_get_tty(libxl_ctx *ctx, uint32_t domid_vm,
 //					char **path);
-func (Ctx *Context) PrimaryConsoleGetTty(domid uint32) (path string, err error) {
+func (ctx *Context) PrimaryConsoleGetTty(domid uint32) (path string, err error) {
 	var cpath *C.char
-	ret := C.libxl_primary_console_get_tty(Ctx.ctx, C.uint32_t(domid), &cpath)
+	ret := C.libxl_primary_console_get_tty(ctx.ctx, C.uint32_t(domid), &cpath)
 	if ret != 0 {
 		err = Error(-ret)
 		return
@@ -1154,7 +1154,7 @@ func (Ctx *Context) PrimaryConsoleGetTty(domid uint32) (path string, err error)
 }
 
 // DeviceNicAdd adds a nic to a domain.
-func (Ctx *Context) DeviceNicAdd(domid Domid, nic *DeviceNic) error {
+func (ctx *Context) DeviceNicAdd(domid Domid, nic *DeviceNic) error {
 	var cnic C.libxl_device_nic
 
 	if err := nic.toC(&cnic); err != nil {
@@ -1162,7 +1162,7 @@ func (Ctx *Context) DeviceNicAdd(domid Domid, nic *DeviceNic) error {
 	}
 	defer C.libxl_device_nic_dispose(&cnic)
 
-	ret := C.libxl_device_nic_add(Ctx.ctx, C.uint32_t(domid), &cnic, nil)
+	ret := C.libxl_device_nic_add(ctx.ctx, C.uint32_t(domid), &cnic, nil)
 	if ret != 0 {
 		return Error(ret)
 	}
@@ -1171,7 +1171,7 @@ func (Ctx *Context) DeviceNicAdd(domid Domid, nic *DeviceNic) error {
 }
 
 // DeviceNicRemove removes a nic from a domain.
-func (Ctx *Context) DeviceNicRemove(domid Domid, nic *DeviceNic) error {
+func (ctx *Context) DeviceNicRemove(domid Domid, nic *DeviceNic) error {
 	var cnic C.libxl_device_nic
 
 	if err := nic.toC(&cnic); err != nil {
@@ -1179,7 +1179,7 @@ func (Ctx *Context) DeviceNicRemove(domid Domid, nic *DeviceNic) error {
 	}
 	defer C.libxl_device_nic_dispose(&cnic)
 
-	ret := C.libxl_device_nic_remove(Ctx.ctx, C.uint32_t(domid), &cnic, nil)
+	ret := C.libxl_device_nic_remove(ctx.ctx, C.uint32_t(domid), &cnic, nil)
 	if ret != 0 {
 		return Error(ret)
 	}
@@ -1188,7 +1188,7 @@ func (Ctx *Context) DeviceNicRemove(domid Domid, nic *DeviceNic) error {
 }
 
 // DevicePciAdd is used to passthrough a PCI device to a domain.
-func (Ctx *Context) DevicePciAdd(domid Domid, pci *DevicePci) error {
+func (ctx *Context) DevicePciAdd(domid Domid, pci *DevicePci) error {
 	var cpci C.libxl_device_pci
 
 	if err := pci.toC(&cpci); err != nil {
@@ -1196,7 +1196,7 @@ func (Ctx *Context) DevicePciAdd(domid Domid, pci *DevicePci) error {
 	}
 	defer C.libxl_device_pci_dispose(&cpci)
 
-	ret := C.libxl_device_pci_add(Ctx.ctx, C.uint32_t(domid), &cpci, nil)
+	ret := C.libxl_device_pci_add(ctx.ctx, C.uint32_t(domid), &cpci, nil)
 	if ret != 0 {
 		return Error(ret)
 	}
@@ -1205,7 +1205,7 @@ func (Ctx *Context) DevicePciAdd(domid Domid, pci *DevicePci) error {
 }
 
 // DevicePciRemove removes a PCI device from a domain.
-func (Ctx *Context) DevicePciRemove(domid Domid, pci *DevicePci) error {
+func (ctx *Context) DevicePciRemove(domid Domid, pci *DevicePci) error {
 	var cpci C.libxl_device_pci
 
 	if err := pci.toC(&cpci); err != nil {
@@ -1213,7 +1213,7 @@ func (Ctx *Context) DevicePciRemove(domid Domid, pci *DevicePci) error {
 	}
 	defer C.libxl_device_pci_dispose(&cpci)
 
-	ret := C.libxl_device_pci_remove(Ctx.ctx, C.uint32_t(domid), &cpci, nil)
+	ret := C.libxl_device_pci_remove(ctx.ctx, C.uint32_t(domid), &cpci, nil)
 	if ret != 0 {
 		return Error(ret)
 	}
@@ -1222,7 +1222,7 @@ func (Ctx *Context) DevicePciRemove(domid Domid, pci *DevicePci) error {
 }
 
 // DeviceUsbdevAdd adds a USB device to a domain.
-func (Ctx *Context) DeviceUsbdevAdd(domid Domid, usbdev *DeviceUsbdev) error {
+func (ctx *Context) DeviceUsbdevAdd(domid Domid, usbdev *DeviceUsbdev) error {
 	var cusbdev C.libxl_device_usbdev
 
 	if err := usbdev.toC(&cusbdev); err != nil {
@@ -1230,7 +1230,7 @@ func (Ctx *Context) DeviceUsbdevAdd(domid Domid, usbdev *DeviceUsbdev) error {
 	}
 	defer C.libxl_device_usbdev_dispose(&cusbdev)
 
-	ret := C.libxl_device_usbdev_add(Ctx.ctx, C.uint32_t(domid), &cusbdev, nil)
+	ret := C.libxl_device_usbdev_add(ctx.ctx, C.uint32_t(domid), &cusbdev, nil)
 	if ret != 0 {
 		return Error(ret)
 	}
@@ -1239,7 +1239,7 @@ func (Ctx *Context) DeviceUsbdevAdd(domid Domid, usbdev *DeviceUsbdev) error {
 }
 
 // DeviceUsbdevRemove removes a USB device from a domain.
-func (Ctx *Context) DeviceUsbdevRemove(domid Domid, usbdev *DeviceUsbdev) error {
+func (ctx *Context) DeviceUsbdevRemove(domid Domid, usbdev *DeviceUsbdev) error {
 	var cusbdev C.libxl_device_usbdev
 
 	if err := usbdev.toC(&cusbdev); err != nil {
@@ -1247,7 +1247,7 @@ func (Ctx *Context) DeviceUsbdevRemove(domid Domid, usbdev *DeviceUsbdev) error
 	}
 	defer C.libxl_device_usbdev_dispose(&cusbdev)
 
-	ret := C.libxl_device_usbdev_remove(Ctx.ctx, C.uint32_t(domid), &cusbdev, nil)
+	ret := C.libxl_device_usbdev_remove(ctx.ctx, C.uint32_t(domid), &cusbdev, nil)
 	if ret != 0 {
 		return Error(ret)
 	}
@@ -1256,7 +1256,7 @@ func (Ctx *Context) DeviceUsbdevRemove(domid Domid, usbdev *DeviceUsbdev) error
 }
 
 // DomainCreateNew creates a new domain.
-func (Ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
+func (ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
 	var cdomid C.uint32_t
 	var cconfig C.libxl_domain_config
 	err := config.toC(&cconfig)
@@ -1265,7 +1265,7 @@ func (Ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
 	}
 	defer C.libxl_domain_config_dispose(&cconfig)
 
-	ret := C.libxl_domain_create_new(Ctx.ctx, &cconfig, &cdomid, nil, nil)
+	ret := C.libxl_domain_create_new(ctx.ctx, &cconfig, &cdomid, nil, nil)
 	if ret != 0 {
 		return Domid(0), Error(ret)
 	}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:38:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:38:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131891.246337 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHL7-0004uO-Fv; Mon, 24 May 2021 20:38:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131891.246337; Mon, 24 May 2021 20:38:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHL7-0004uF-Ag; Mon, 24 May 2021 20:38:17 +0000
Received: by outflank-mailman (input) for mailman id 131891;
 Mon, 24 May 2021 20:38:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+P1=KT=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1llHL5-0001ey-SW
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 20:38:15 +0000
Received: from mail-qt1-x829.google.com (unknown [2607:f8b0:4864:20::829])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b97bdf1b-26ce-4ef3-95a2-ccf3d493893c;
 Mon, 24 May 2021 20:37:47 +0000 (UTC)
Received: by mail-qt1-x829.google.com with SMTP id v4so21607714qtp.1
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 13:37:47 -0700 (PDT)
Received: from localhost.localdomain (c-73-89-138-5.hsd1.vt.comcast.net.
 [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id t25sm5142847qkt.62.2021.05.24.13.37.46
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 13:37:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b97bdf1b-26ce-4ef3-95a2-ccf3d493893c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :in-reply-to:references;
        bh=9uXGDDpJ9CrrbQAMDJkXAgE2yF4sZn3Mfqg0mD2vt8w=;
        b=f+S1418ZVs4jNRq5JPcL3ZkKr/yrsAXbaZwrixfKE08F34Gf8M/kgrpE7MYUQNXeHS
         ZaMzEcsPVNwNb7usCAXyuhZlRRZNHXm8GLPSjDGo972/srrl+LbxcsZPSpE4dw2ZSbfq
         LsYRd4puWnoPH91/1WqxJqLr30HAO1osjX/By9pN8+930wD+DQu42h7hYkPZTptitofh
         xFTNoeDxfCZOZtdSy3PUxfd+eHU9ZaOQZwt6rpIsNtdjVZgEa7eLc0ufQ5Tg8Kp+gJsf
         O2OWXYtKDvkLzM2raQjQUkIZ1avB2+ywbnj7CctVWQi9OKHkSObtBF0m6U6pE5AANhez
         zcvA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:in-reply-to:references;
        bh=9uXGDDpJ9CrrbQAMDJkXAgE2yF4sZn3Mfqg0mD2vt8w=;
        b=NoJswnTlHHrxzCY3fiqm5Dz7THX2Vz+ylN8Bv+5o3xKjaTmcZ5kqfeUt78FIg1Hefj
         /mt1zsF395aZ52XunnoSn9garPjS27uI+jTXJFJnYTRB233NpmRQdl/Q6ycJQP1NMAol
         BMhIxu9E5UzUjPH8wG3Ea72EM4KjWG4ClRxj8DC6i9OGd3tzyVYp4UtN4+QBOnQdM4yk
         jPh+V7A3509KUHSGwrV2H2qKuVIb1v3cFevC1HyF+sAVXrq8gx63cJMw0VRjRikRhujr
         Wgf4mzblP/bKL7+fqzWF4fZ2ROPOJLJXHg2jdAygjIOgAph6QWamdBsjJc7LrVf9VXAa
         yroQ==
X-Gm-Message-State: AOAM530O3y6OwdP3X/mI2D+f+e3bxzYOt9kE9ONYXruy/LAAwyGz/YyU
	bwoYwsz6liaMH9MgNqGSNq8=
X-Google-Smtp-Source: ABdhPJz32pZMzPFi766pOO+4M3WoRNhbgUGhqo72JUmiexqunSwH+GtFEWRIQDP9xPyXbh3tDHoj6g==
X-Received: by 2002:ac8:7e93:: with SMTP id w19mr27948061qtj.314.1621888666941;
        Mon, 24 May 2021 13:37:46 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.prg,
	xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 07/12] golang/xenlight: add logging conveniences for within xenlight
Date: Mon, 24 May 2021 16:36:48 -0400
Message-Id: <452aac2489990ac0195c62d8cb820fbe5786c466.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

Add some logging methods to Context to provide easy use of the
Context's xentoollog_logger. These methods are not exported, but the
LogLevel type is exported so that a later commit can make the Context's
log level configurable.

Because cgo does not support calling C functions with variable
arguments, e.g. xtl_log, add an xtl_log_wrap function to the cgo preamble
that accepts an already-formatted string, and handle the formatting in
Go.
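The workaround can be illustrated in plain Go, without cgo: the variadic
formatting happens entirely on the Go side, and only a single fixed
string crosses the boundary. This is a minimal, standalone sketch —
formatForC is a hypothetical stand-in for the code that hands the result
to xtl_log_wrap:

```go
package main

import "fmt"

// formatForC collapses a variadic format call into one string, the same
// way the patch's Context.log does before passing the message to the
// fixed-arity C wrapper (xtl_log_wrap in the real code).
func formatForC(format string, a ...interface{}) string {
	return fmt.Sprintf(format, a...)
}

func main() {
	msg := formatForC("domain %d: %s", 4, "shutdown")
	fmt.Println(msg) // prints "domain 4: shutdown"
}
```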

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 45 +++++++++++++++++++++++++++++++
 1 file changed, 45 insertions(+)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index fc3eb0bf3f..f68d7b6e97 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -32,6 +32,15 @@ static const libxl_childproc_hooks childproc_hooks = { .chldowner = libxl_sigchl
 void xenlight_set_chldproc(libxl_ctx *ctx) {
 	libxl_childproc_setmode(ctx, &childproc_hooks, NULL);
 }
+
+void xtl_log_wrap(struct xentoollog_logger *logger,
+		  xentoollog_level level,
+		  int errnoval,
+		  const char *context,
+		  const char *msg)
+{
+    xtl_log(logger, level, errnoval, context, "%s", msg);
+}
 */
 import "C"
 
@@ -192,6 +201,42 @@ func (ctx *Context) Close() error {
 	return nil
 }
 
+// LogLevel represents an xentoollog_level, and can be used to configure the log
+// level of a Context's logger.
+type LogLevel int
+
+const (
+	//LogLevelNone     LogLevel = C.XTL_NONE
+	LogLevelDebug    LogLevel = C.XTL_DEBUG
+	LogLevelVerbose  LogLevel = C.XTL_VERBOSE
+	LogLevelDetail   LogLevel = C.XTL_DETAIL
+	LogLevelProgress LogLevel = C.XTL_PROGRESS
+	LogLevelInfo     LogLevel = C.XTL_INFO
+	LogLevelNotice   LogLevel = C.XTL_NOTICE
+	LogLevelWarn     LogLevel = C.XTL_WARN
+	LogLevelError    LogLevel = C.XTL_ERROR
+	LogLevelCritical LogLevel = C.XTL_CRITICAL
+	//LogLevelNumLevels LogLevel = C.XTL_NUM_LEVELS
+)
+
+func (ctx *Context) log(lvl LogLevel, errnoval int, format string, a ...interface{}) {
+	msg := C.CString(fmt.Sprintf(format, a...))
+	defer C.free(unsafe.Pointer(msg))
+	context := C.CString("xenlight")
+	defer C.free(unsafe.Pointer(context))
+
+	C.xtl_log_wrap((*C.xentoollog_logger)(unsafe.Pointer(ctx.logger)),
+		C.xentoollog_level(lvl), C.int(errnoval), context, msg)
+}
+
+func (ctx *Context) logd(format string, a ...interface{}) {
+	ctx.log(LogLevelDebug, -1, format, a...)
+}
+
+func (ctx *Context) logw(format string, a ...interface{}) {
+	ctx.log(LogLevelWarn, -1, format, a...)
+}
+
 /*
  * Types: Builtins
  */
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:38:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:38:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131897.246348 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHLC-0005XE-PT; Mon, 24 May 2021 20:38:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131897.246348; Mon, 24 May 2021 20:38:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHLC-0005X5-LU; Mon, 24 May 2021 20:38:22 +0000
Received: by outflank-mailman (input) for mailman id 131897;
 Mon, 24 May 2021 20:38:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+P1=KT=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1llHLA-0001ey-Sp
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 20:38:20 +0000
Received: from mail-qv1-xf33.google.com (unknown [2607:f8b0:4864:20::f33])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 350ac5e6-6dfb-4f62-a383-343ea6a0b2f6;
 Mon, 24 May 2021 20:37:48 +0000 (UTC)
Received: by mail-qv1-xf33.google.com with SMTP id c13so13706066qvx.5
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 13:37:48 -0700 (PDT)
Received: from localhost.localdomain (c-73-89-138-5.hsd1.vt.comcast.net.
 [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id t25sm5142847qkt.62.2021.05.24.13.37.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 13:37:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 350ac5e6-6dfb-4f62-a383-343ea6a0b2f6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :in-reply-to:references;
        bh=kEyMtRkVLFS8i1iO7mYtPK1CFn3AKiqVnXMVPcMO4BI=;
        b=Vrt/ol88f+vhW1GfAucOx70Qu+yZwIoMcbshMwrLnzRMhAgYMMK/Lv46VGXZpfqnP1
         5bS1YfA5QIfmt94DA8U5eGKapbh6dVmPmRpJ9eqltYO+zTibgImd5DT+d9ZbMSx15xhR
         wnQu8yiv2m1DoMGoNFRdoyqDyzVMtjDZr7ChQ3I6WsBoEnhjgCnzEvK2F8zvi+lOPZ83
         W8eqVuHxo1S8zOY7XH/fstFsFT3+IlbU59swx2mSkXsHx3ooI1w1D/cIQRDIZUfna2tf
         hy6GH4YM9V1Sxi6l8UplT1EpeTrmXVu1fMMgNm9n4o5yi2jYhXe2kwsKkwQ/gmXYLmhE
         /xPA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:in-reply-to:references;
        bh=kEyMtRkVLFS8i1iO7mYtPK1CFn3AKiqVnXMVPcMO4BI=;
        b=kbIT20CGEhNwhYmb1ZZ+UZ/yrzDGfzzYsHoqGm8VFWrh0LR5mFGZwqlxQtu7asBfau
         K0PsrH3oXHZXOyu9Vq2Nsejz/gGOeycjvJezLUt9vOSigDVgPshdE9ZDce83+W8u/FIC
         J+lmOOsP56Yr8cTAX8idGbciCsJqe7/bbYwiTIPpcPJXZo0aIXsaGm73Qyx80PdTljjW
         KXzlpNG9yGC+v45c4uxbENt2Q14gUdUaELDhghwSr2QJPgF8ZMOWbIhJBqGxkNdXYOzo
         XNXo0WUwahXA1UG9UnDsxlV6pYDUDsCPLvQ5UmvrJcdH/4nLszFX6efdScgyO9JulAe8
         Di/A==
X-Gm-Message-State: AOAM532qM9H6qHUKK/+XJ2eIdrnJU4jX20Rs1egteCbIfiVa7uOkAxDi
	J+PtzVnlT2xEdHIKVGxypDc=
X-Google-Smtp-Source: ABdhPJzv32CRG5XOYG3kL2lFnYIno7V3neguHQ9gP5UhEg2ReDfWkIk7Hkqp+XEdktaK/+wjnh6OoA==
X-Received: by 2002:a0c:b28c:: with SMTP id r12mr32851943qve.32.1621888667919;
        Mon, 24 May 2021 13:37:47 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.prg,
	xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 08/12] golang/xenlight: add functional options to configure Context
Date: Mon, 24 May 2021 16:36:49 -0400
Message-Id: <dc5cd6728e8477c9eb3ba75a55c7128da46a86ef.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

Add a ContextOption type to support functional options in NewContext.
Then, add a variadic ContextOption parameter to NewContext, which allows
callers to specify zero or more configuration options.

For now, just add the WithLogLevel option so that callers can set the
log level of the Context's xentoollog_logger. Future configuration
options can be created by adding an appropriate field to the
contextOptions struct and creating a With<OptionName> function that
returns a ContextOption.
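The functional-options shape the patch adopts can be exercised on its
own, without libxl. This is a simplified sketch whose names mirror the
patch (contextOptions, ContextOption, WithLogLevel), but with the
LogLevel constants reduced to stand-in values:

```go
package main

import "fmt"

type LogLevel int

// Stand-in values; the real constants map to xentoollog_level.
const (
	LogLevelError LogLevel = iota
	LogLevelDebug
)

type contextOptions struct {
	logLevel LogLevel
}

// ContextOption is used to configure options for a Context.
type ContextOption interface {
	apply(*contextOptions)
}

type funcContextOption struct {
	f func(*contextOptions)
}

func (fco *funcContextOption) apply(c *contextOptions) { fco.f(c) }

// WithLogLevel returns an option that overrides the default log level.
func WithLogLevel(level LogLevel) ContextOption {
	return &funcContextOption{func(co *contextOptions) { co.logLevel = level }}
}

// newOptions applies each provided option over the defaults, as
// NewContext does in the patch before creating the xtl logger.
func newOptions(opts ...ContextOption) *contextOptions {
	copts := &contextOptions{logLevel: LogLevelError} // defaults
	for _, opt := range opts {
		opt.apply(copts)
	}
	return copts
}

func main() {
	fmt.Println(newOptions().logLevel)                            // default level
	fmt.Println(newOptions(WithLogLevel(LogLevelDebug)).logLevel) // overridden
}
```

The indirection through an interface (rather than a bare `func(*contextOptions)`) leaves room to attach methods or metadata to options later without changing NewContext's signature.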

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 44 +++++++++++++++++++++++++++++--
 1 file changed, 42 insertions(+), 2 deletions(-)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index f68d7b6e97..65f93abe32 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -136,7 +136,7 @@ func sigchldHandler(ctx *Context) {
 }
 
 // NewContext returns a new Context.
-func NewContext() (ctx *Context, err error) {
+func NewContext(opts ...ContextOption) (ctx *Context, err error) {
 	ctx = &Context{}
 
 	defer func() {
@@ -146,8 +146,19 @@ func NewContext() (ctx *Context, err error) {
 		}
 	}()
 
+	// Set the default context options. These fields may
+	// be modified by the provided opts.
+	copts := &contextOptions{
+		logLevel: LogLevelError,
+	}
+
+	for _, opt := range opts {
+		opt.apply(copts)
+	}
+
 	// Create a logger
-	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr, C.XTL_ERROR, 0)
+	ctx.logger = C.xtl_createlogger_stdiostream(C.stderr,
+		C.xentoollog_level(copts.logLevel), 0)
 
 	// Allocate a context
 	ret := C.libxl_ctx_alloc(&ctx.ctx, C.LIBXL_VERSION, 0,
@@ -201,6 +212,35 @@ func (ctx *Context) Close() error {
 	return nil
 }
 
+type contextOptions struct {
+	logLevel LogLevel
+}
+
+// ContextOption is used to configure options for a Context.
+type ContextOption interface {
+	apply(*contextOptions)
+}
+
+type funcContextOption struct {
+	f func(*contextOptions)
+}
+
+func (fco *funcContextOption) apply(c *contextOptions) {
+	fco.f(c)
+}
+
+func newFuncContextOption(f func(*contextOptions)) *funcContextOption {
+	return &funcContextOption{f}
+}
+
+// WithLogLevel sets the log level for a Context's logger. The default level is
+// LogLevelError.
+func WithLogLevel(level LogLevel) ContextOption {
+	return newFuncContextOption(func(co *contextOptions) {
+		co.logLevel = level
+	})
+}
+
 // LogLevel represents an xentoollog_level, and can be used to configure the log
 // level of a Context's logger.
 type LogLevel int
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:38:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:38:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131899.246358 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHLH-00064S-4v; Mon, 24 May 2021 20:38:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131899.246358; Mon, 24 May 2021 20:38:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHLH-000644-0z; Mon, 24 May 2021 20:38:27 +0000
Received: by outflank-mailman (input) for mailman id 131899;
 Mon, 24 May 2021 20:38:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+P1=KT=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1llHLF-0001ey-T3
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 20:38:25 +0000
Received: from mail-qt1-x833.google.com (unknown [2607:f8b0:4864:20::833])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5402a5a-d6d8-4f73-9c70-67ac9f5ed901;
 Mon, 24 May 2021 20:37:49 +0000 (UTC)
Received: by mail-qt1-x833.google.com with SMTP id h24so10195773qtm.12
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 13:37:49 -0700 (PDT)
Received: from localhost.localdomain (c-73-89-138-5.hsd1.vt.comcast.net.
 [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id t25sm5142847qkt.62.2021.05.24.13.37.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 13:37:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5402a5a-d6d8-4f73-9c70-67ac9f5ed901
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :in-reply-to:references;
        bh=MJAWGHEUAw5eoKAlFYrwM/9yu/NhcTzkr9C5oJHPCDA=;
        b=Ev2HuCwLxs6mXamXhdwzSjERFJi8CEscqkB7Q+iZuYUeo6pGoJSW9HmzCpHffgdfb8
         xExpudG7W1CQCSxiKlftWO9Nv1o/GwOVcqQ2c65+VjPKZNjXoduf0qgMttTTukKbuT+k
         7pQAMPKD2lBNN/+XIn5Hr4palHG25SMc/KwFlYpcueuoR87YZlWeutC3dMqEuPhXBkAU
         3vziF5PMLpTYNA8LTvTmFfvnKsdxyphOWs8klTh7uFq9EqzMZtRY4beos6uEbDPi54vh
         FTbUUavmWoM3l93RCEXTt/43Zw5XfRrRY9mnTNGxOiv/aV3CB4tOD15sYAsFsUd/RWY8
         XQ3g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:in-reply-to:references;
        bh=MJAWGHEUAw5eoKAlFYrwM/9yu/NhcTzkr9C5oJHPCDA=;
        b=mHfoc8k8qgvZQ8a6TF/FsEW6ySxX2nHy3nDjd5QCD8l0djXBkylfQKfMgZsl6UhI0B
         AZSBrXx15LwGWIclbG/ARjbG9J10c5qjFIPH6Luj0G60BcX+ewDfOfiuN+gZrtL9fCQf
         edSmX+6PtdST9AR+7M+6Hc8WWNbB9REChiJp5RJsuzrWzFmt0iCy8ApcQzk5qdQUstaD
         vwPgqeRmGPstBwqo+/GdimkItQhIDLgJI6TZinMKsYiXHVdkQBrrXt9a78qJ3Y2yUZT3
         1jkjG5UgnRClhRLgm7IqoQ2EUbcs6ODpVh+wYvcqIiXYEFUg1lPNcHdxNqKBrDxvhBEl
         vkXw==
X-Gm-Message-State: AOAM53062sGdyx7XfZIJqCcO8Qm17S997Ow5adYrkIKOFC4KjwLXqY6C
	zBFgzjm4tdveFJ5SJvDrXLo=
X-Google-Smtp-Source: ABdhPJwG+1Ue8vohST1MgBlBsluetTjfaTiM/njhK9jyLVPzMh+gqG6MRmhARJDGBEAxzsrqxYRVww==
X-Received: by 2002:ac8:12c9:: with SMTP id b9mr16264534qtj.90.1621888668864;
        Mon, 24 May 2021 13:37:48 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.prg,
	xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 09/12] golang/xenlight: add DomainDestroy wrapper
Date: Mon, 24 May 2021 16:36:50 -0400
Message-Id: <82c68547f4cec1c82132cd6a867696f4b38dcd3d.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

Add a wrapper around libxl_domain_destroy.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index 65f93abe32..1e0ed109e4 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -1357,3 +1357,13 @@ func (ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
 
 	return Domid(cdomid), nil
 }
+
+// DomainDestroy destroys a domain given a domid.
+func (ctx *Context) DomainDestroy(domid Domid) error {
+	ret := C.libxl_domain_destroy(ctx.ctx, C.uint32_t(domid), nil)
+	if ret != 0 {
+		return Error(ret)
+	}
+
+	return nil
+}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:43:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:43:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131941.246369 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHPt-0000Gx-Rv; Mon, 24 May 2021 20:43:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131941.246369; Mon, 24 May 2021 20:43:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHPt-0000Gj-Oc; Mon, 24 May 2021 20:43:13 +0000
Received: by outflank-mailman (input) for mailman id 131941;
 Mon, 24 May 2021 20:43:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+P1=KT=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1llHLU-0001ey-TW
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 20:38:40 +0000
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 27c6af09-440c-442f-9373-fb53e860d068;
 Mon, 24 May 2021 20:37:52 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id h20so12792625qko.11
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 13:37:52 -0700 (PDT)
Received: from localhost.localdomain (c-73-89-138-5.hsd1.vt.comcast.net.
 [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id t25sm5142847qkt.62.2021.05.24.13.37.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 13:37:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 27c6af09-440c-442f-9373-fb53e860d068
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :in-reply-to:references;
        bh=qZQxVcviBuvDjUeKOepu8YHqa+eGuG2DXDt5FNQL7LQ=;
        b=G+pyi2ApCHcfc8UBweruANC8e0cOXX1CT03CKkAiDQRJHE7Klfn/pOpBYStmWWcyA4
         AaYspo8J8GV+nlWKzUOviIvEb+K/L3KaHitNBCX35AfW8kdilbhDTj2Ih+mtyOfGM8Yr
         F6nBzUQijOg+DmNcRbgoOg96MmWJ0EdBHAWkGpHLkVJTUYcMR6XCap7KSN2lpvXCeSkh
         I2wZq1GZ/PRiZQNe4/3wLXlfK7NgQOSFqaxwGopBrV+57r0/qvF4ZpsUEW+By46BlXxT
         dAIWqnDwKUkL/ZBMot4k6a94qk4V1uIM83QIstPfIjkNVGN8MYd8rASPEAQCNcKJUt1r
         LxWQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:in-reply-to:references;
        bh=qZQxVcviBuvDjUeKOepu8YHqa+eGuG2DXDt5FNQL7LQ=;
        b=lGYwDTmUVlw67BsPd9DybsYtrfI6U/n3xuFEsVsk5K9vgCoakI2stb8Zm/MFLnumYm
         qoeC0Y3y2SYn0tOGLNGvH/cPAuDVZF28ahnow891HhbK9nugTbT0CDtqKZhQPnWO4QlV
         Vk3O4ZKi9cJCqiEyr0nmLbCyj2ZOeInqegJriE+zK+0CYuTVnjFCh4eJXrxK8DiRCari
         HWyt0NnmXfY/PQYc+OItG3qB+EnULSAwqsZ5w0NdLsF3sC1Zj9AwUg0P9cLACjy+UyY8
         onHXzo2N/OFIIVLUbXiiEVzHMvBz5uZ3tmWBw5ehtMtoYhU2vBGDZhmbulH2NrlHvd2U
         +q7w==
X-Gm-Message-State: AOAM533EWehlpskrWWNCsvTzncjw+JjY+PlmMJzT7x04PvccDBnprQjq
	9cCJJfTewej3h3xTVVhPtu4=
X-Google-Smtp-Source: ABdhPJxK46rUoHOq90Ayw4jb8Va6QGKRHd2hRegjODp3MiUcX7+YcJZ+SyslYYQ7cj5J7aomMpbqPw==
X-Received: by 2002:a05:620a:1252:: with SMTP id a18mr30362134qkl.416.1621888672226;
        Mon, 24 May 2021 13:37:52 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.prg,
	xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 12/12] golang/xenlight: add NotifyDomainDeath method to Context
Date: Mon, 24 May 2021 16:36:53 -0400
Message-Id: <e415b0e26954cfc6689fbd3ba7d79fe664f3bb50.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

Add a helper function to wait for domain death events, and then write
the events to a provided channel. This handles the enabling/disabling of
the event type, freeing the event, and converting it to a Go type. The
caller can then handle the event however they need to. This function
will run until a provided context.Context is cancelled.

NotifyDomainDeath spawns two goroutines that return when the
context.Context is done. The first will make sure that the domain death
event is disabled, and that the corresponding event queue is cleared.
The second calls libxl_event_wait, and writes the event to the provided
channel.

With this, callers should be able to manage a full domain life cycle.
Extend the comment on DomainCreateNew so that package users know they
should use this method in conjunction with DomainCreateNew.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 83 ++++++++++++++++++++++++++++++-
 1 file changed, 82 insertions(+), 1 deletion(-)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index 6fb22665cc..8406883433 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -53,6 +53,7 @@ import "C"
  */
 
 import (
+	"context"
 	"fmt"
 	"os"
 	"os/signal"
@@ -1340,7 +1341,9 @@ func (ctx *Context) DeviceUsbdevRemove(domid Domid, usbdev *DeviceUsbdev) error
 	return nil
 }
 
-// DomainCreateNew creates a new domain.
+// DomainCreateNew creates a new domain. Callers of DomainCreateNew are
+// responsible for handling the death of the resulting domain. This should be
+// done using NotifyDomainDeath.
 func (ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
 	var cdomid C.uint32_t
 	var cconfig C.libxl_domain_config
@@ -1358,6 +1361,84 @@ func (ctx *Context) DomainCreateNew(config *DomainConfig) (Domid, error) {
 	return Domid(cdomid), nil
 }
 
+// NotifyDomainDeath registers an event handler for domain death events for a
+// given domid, and writes events received to ec. NotifyDomainDeath returns an
+// error if it cannot register the event handler, but other errors encountered
+// are just logged. The goroutines spawned by calling NotifyDomainDeath run
+// until the provided context.Context's Done channel is closed.
+func (ctx *Context) NotifyDomainDeath(c context.Context, domid Domid, ec chan<- Event) error {
+	var deathw *C.libxl_evgen_domain_death
+
+	ret := C.libxl_evenable_domain_death(ctx.ctx, C.uint32_t(domid), 0, &deathw)
+	if ret != 0 {
+		return Error(ret)
+	}
+
+	// Spawn a goroutine that is responsible for cleaning up when the
+	// passed context.Context's Done channel is closed.
+	go func() {
+		<-c.Done()
+
+		ctx.logd("cleaning up domain death event handler for domain %d", domid)
+
+		// Disable the event generation.
+		C.libxl_evdisable_domain_death(ctx.ctx, deathw)
+
+		// Make sure any events that were generated get cleaned up so they
+		// do not linger in the libxl event queue.
+		var evc *C.libxl_event
+		for {
+			ret := C.libxl_event_check(ctx.ctx, &evc, C.LIBXL_EVENTMASK_ALL, nil, nil)
+			if ret != 0 {
+				return
+			}
+			C.libxl_event_free(ctx.ctx, evc)
+		}
+	}()
+
+	go func() {
+		var (
+			ev  Event
+			evc *C.libxl_event
+		)
+
+		for {
+			select {
+			case <-c.Done():
+				return
+			default:
+				// Go on and check for an event...
+			}
+
+			ret := C.libxl_event_wait(ctx.ctx, &evc, C.LIBXL_EVENTMASK_ALL, nil, nil)
+			if ret != 0 {
+				ctx.logw("unexpected error waiting for event: %s", Error(ret))
+				continue
+			}
+
+			// Try to convert the event to Go, and then free the
+			// C.libxl_event no matter what.
+			err := ev.fromC(evc)
+			C.libxl_event_free(ctx.ctx, evc)
+			if err != nil {
+				ctx.logw("error converting event from C: %v", err)
+				continue
+			}
+
+			ctx.logd("received domain death event (domid=%v, type=%v)", ev.Domid, ev.Type)
+
+			// Write the event to the channel
+			select {
+			case ec <- ev:
+			case <-c.Done():
+				return
+			}
+		}
+	}()
+
+	return nil
+}
+
 // DomainDestroy destroys a domain given a domid.
 func (ctx *Context) DomainDestroy(domid Domid) error {
 	ret := C.libxl_domain_destroy(ctx.ctx, C.uint32_t(domid), nil)
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:43:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:43:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131957.246388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHQ0-0000h9-Na; Mon, 24 May 2021 20:43:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131957.246388; Mon, 24 May 2021 20:43:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHQ0-0000g7-GY; Mon, 24 May 2021 20:43:20 +0000
Received: by outflank-mailman (input) for mailman id 131957;
 Mon, 24 May 2021 20:43:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+P1=KT=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1llHLK-0001ey-TK
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 20:38:30 +0000
Received: from mail-qt1-x830.google.com (unknown [2607:f8b0:4864:20::830])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 78775fdc-6b0a-4a6a-bd06-66fc0f96909c;
 Mon, 24 May 2021 20:37:50 +0000 (UTC)
Received: by mail-qt1-x830.google.com with SMTP id t17so3827254qta.11
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 13:37:50 -0700 (PDT)
Received: from localhost.localdomain (c-73-89-138-5.hsd1.vt.comcast.net.
 [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id t25sm5142847qkt.62.2021.05.24.13.37.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 13:37:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78775fdc-6b0a-4a6a-bd06-66fc0f96909c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :in-reply-to:references;
        bh=Bztafs77V0wo2Yc5MZ62RAYiSnYmIu905Zy3uFkoAyc=;
        b=cdXgCVQ+7s6+xPZNj+r8PDZUWI1D+Vftou8HktB14FbDErTgnUErpGU4xv4Nd2u1gs
         ynZuviK53q1aM6RIaNFdfgGodt8Nou1pzDOFUnma5aoZ3GEh65gTRozhdUT7s9uLqARo
         XqMRljIvrUxuT1fWiAklcU3tHmG3Sdf/xHtVkrhxV4tOKThioNzjcT+5gQmo2cSAsJjk
         FISwHGqNCoSHsonhqt3SF655BVjXNTtTD+bBPcwe8tgpPk2F7IAFumqBTfdoReYOspa3
         YbifWz2w+OVqpvysiNBOiWEjFKzy1bGEMojWIdVSfoY+PKPYG+5xBClqGS8D7WwqBrNw
         qtVQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:in-reply-to:references;
        bh=Bztafs77V0wo2Yc5MZ62RAYiSnYmIu905Zy3uFkoAyc=;
        b=LXFPA7M78iw53fSnm0z+VOP0cRaZfcZXm3zp0CCzn7pj9Hj6jMm0kKIiqc6tIACtEf
         2ZoM3Wk3VIENoq45l7mwxnHrdxTBvfJ2LH9RJ7w2mjbt8XwQYjdh4HshtXiJKNzBkk+c
         a0e2W77xF5hVhRWyZmene9PkLraHrd1sCDzEloMoW6eDAFpVuZxEj98tDie3SyOr0djj
         AdXAf9d7VZ9SHQRlU5Td0VZ+6s6/uI2hrRK+XfVYnkF27G9GdOHigDDovFsuM73ArlPr
         BFHbYnsxNBiXBGoPvoe41SvW5zRspEQ4KhjVdIqKEubHRl4dM89wkF8dxjyEIv73Rgpp
         VoZQ==
X-Gm-Message-State: AOAM5329e8gQDBE8PO4APTPVAFldMIk1QjdIkXZ0vmGcULjjxPkJuBoj
	1Ic4ulrsgnyLrw0ngKH3mDs=
X-Google-Smtp-Source: ABdhPJyxooUFZcWV9TBoDhn+5+SpABivWW2mrNmvKSUR2/ceaeSNRpOwPHMdcSvtO7JEmUQ9xrGoYg==
X-Received: by 2002:ac8:4252:: with SMTP id r18mr25958491qtm.82.1621888669896;
        Mon, 24 May 2021 13:37:49 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 10/12] golang/xenlight: add SendTrigger wrapper
Date: Mon, 24 May 2021 16:36:51 -0400
Message-Id: <7788e3f5f1af622782ede1b879f4f02ec63fa546.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

Add a wrapper around libxl_send_trigger.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index 1e0ed109e4..d153feb851 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -1367,3 +1367,14 @@ func (ctx *Context) DomainDestroy(domid Domid) error {
 
 	return nil
 }
+
+// SendTrigger sends a Trigger to the domain specified by domid.
+func (ctx *Context) SendTrigger(domid Domid, trigger Trigger, vcpuid int) error {
+	ret := C.libxl_send_trigger(ctx.ctx, C.uint32_t(domid),
+		C.libxl_trigger(trigger), C.uint32_t(vcpuid), nil)
+	if ret != 0 {
+		return Error(ret)
+	}
+
+	return nil
+}
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Mon May 24 20:43:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 24 May 2021 20:43:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.131970.246410 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHQG-0001jV-3r; Mon, 24 May 2021 20:43:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 131970.246410; Mon, 24 May 2021 20:43:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llHQG-0001jO-0D; Mon, 24 May 2021 20:43:36 +0000
Received: by outflank-mailman (input) for mailman id 131970;
 Mon, 24 May 2021 20:43:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5+P1=KT=gmail.com=rosbrookn@srs-us1.protection.inumbo.net>)
 id 1llHLP-0001ey-TM
 for xen-devel@lists.xenproject.org; Mon, 24 May 2021 20:38:35 +0000
Received: from mail-qv1-xf32.google.com (unknown [2607:f8b0:4864:20::f32])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 00947046-2e8c-4f35-809d-b69a18d0ab17;
 Mon, 24 May 2021 20:37:51 +0000 (UTC)
Received: by mail-qv1-xf32.google.com with SMTP id ez19so14903082qvb.3
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 13:37:51 -0700 (PDT)
Received: from localhost.localdomain (c-73-89-138-5.hsd1.vt.comcast.net.
 [73.89.138.5])
 by smtp.gmail.com with ESMTPSA id t25sm5142847qkt.62.2021.05.24.13.37.50
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Mon, 24 May 2021 13:37:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 00947046-2e8c-4f35-809d-b69a18d0ab17
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :in-reply-to:references;
        bh=8RHEO4rjzSWCDCSeU8KH/ouMS7fzg0KSm8WpWB6iQ0M=;
        b=nLWFNrdnKIm7Ua2HOww9s19j0PP204z3qiMCFBp+83qlAImJwMTOuRRCJsRXJR5zDP
         ErdrQ0XGJpNnYYHP6bjIOeLKzUXQfnP7NYVbSKeXHFOcg0oWV9c9BlamjIfwYvplKB1t
         uD6XiotgzFevOJBGv6JpehpiSc8bMqyKeiSz7ZhddO/rmHvg9TpZoj9d7p1gDqAcJRyR
         nboeKBqFhx91jBLRutaAJjCUv5GWL+F26WDCRa0O5YsdgCOzYNTCtJ5+S0nj2gXABSw9
         CKDu1zE/S+Yaxi+IshtHMHT1lDBK/dSCeDHCrrgJatUfKcCpoBIRTDQGnn3r4R1pBa5p
         mhhQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:in-reply-to:references;
        bh=8RHEO4rjzSWCDCSeU8KH/ouMS7fzg0KSm8WpWB6iQ0M=;
        b=lzli4nSrlgK9rJS83lr7dOt241jqKIaHD3BGgKjNWyygtnn2504nxmoQjrV+L9TQeZ
         njHvm6Y2GHsGlzM/wdVQxuJIQrj0FYNwxHJLMtEbRVhmdwphZ5PV1py82vq/CpaMNL/T
         3MewJbl8NhvYvLtcN3l/FDcifeK+mXRf3nztL2FkeqLJUcBD981G/LnxZSwd05nsQdYq
         nBOuClGOaDjWJQz9TvyvEChoeIGMUGMpGFEXl3PG6oUSJwApw8uPn7CMlZgpLSgYv9rk
         14QX+U6Q0OIn5Bj1rbMl3kAThajze/EFIlX5Lh6XKz/qTSay+xDnFcIrcGwnCK53v03J
         3uPA==
X-Gm-Message-State: AOAM531HYLKZiZLgH5AVKdgHojHgtU4gxciMHNp7+Jl9K2SMkDocxzpv
	6VNXMVjfi4OLvr1dNnnB/BQ=
X-Google-Smtp-Source: ABdhPJwBi7YXGDE83pTP//+sHby1z1JIFNQPgNbLE9qNLpKIgm+Mq9a7Pnh1C7dJWjpJu4r1jDr2YQ==
X-Received: by 2002:a05:6214:11a4:: with SMTP id u4mr31962146qvv.55.1621888671080;
        Mon, 24 May 2021 13:37:51 -0700 (PDT)
From: Nick Rosbrook <rosbrookn@gmail.com>
X-Google-Original-From: Nick Rosbrook <rosbrookn@ainfosec.com>
To: xen-devel@lists.xenproject.org
Cc: Nick Rosbrook <rosbrookn@ainfosec.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Wei Liu <wl@xen.org>
Subject: [RESEND PATCH 11/12] golang/xenlight: do not negate ret when converting to Error
Date: Mon, 24 May 2021 16:36:52 -0400
Message-Id: <82bc8b720c3dfb178e52d10ddbebfa8dc5880e7b.1621887506.git.rosbrookn@ainfosec.com>
X-Mailer: git-send-email 2.17.1
In-Reply-To: <cover.1621887506.git.rosbrookn@ainfosec.com>
References: <cover.1621887506.git.rosbrookn@ainfosec.com>

There are several locations where the return code from calling into C is
negated when being converted to Error. This results in error strings
like "libxl error: <x>", rather than the correct message. Fix all
occurrances of this by running:

  gofmt -w -r 'Error(-ret) -> Error(ret)' xenlight.go

from tools/golang/xenlight.

Signed-off-by: Nick Rosbrook <rosbrookn@ainfosec.com>
---
 tools/golang/xenlight/xenlight.go | 46 +++++++++++++++----------------
 1 file changed, 23 insertions(+), 23 deletions(-)

diff --git a/tools/golang/xenlight/xenlight.go b/tools/golang/xenlight/xenlight.go
index d153feb851..6fb22665cc 100644
--- a/tools/golang/xenlight/xenlight.go
+++ b/tools/golang/xenlight/xenlight.go
@@ -668,7 +668,7 @@ func SchedulerFromString(name string) (s Scheduler, err error) {
 
 	ret := C.libxl_scheduler_from_string(cname, &cs)
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -707,7 +707,7 @@ func (ctx *Context) CpupoolInfo(Poolid uint32) (pool Cpupoolinfo, err error) {
 
 	ret := C.libxl_cpupool_info(ctx.ctx, &c_cpupool, C.uint32_t(Poolid))
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 	defer C.libxl_cpupoolinfo_dispose(&c_cpupool)
@@ -741,7 +741,7 @@ func (ctx *Context) CpupoolCreate(Name string, Scheduler Scheduler, Cpumap Bitma
 	ret := C.libxl_cpupool_create(ctx.ctx, name, C.libxl_scheduler(Scheduler),
 		cbm, &uuid, &poolid)
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -754,7 +754,7 @@ func (ctx *Context) CpupoolCreate(Name string, Scheduler Scheduler, Cpumap Bitma
 func (ctx *Context) CpupoolDestroy(Poolid uint32) (err error) {
 	ret := C.libxl_cpupool_destroy(ctx.ctx, C.uint32_t(Poolid))
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -765,7 +765,7 @@ func (ctx *Context) CpupoolDestroy(Poolid uint32) (err error) {
 func (ctx *Context) CpupoolCpuadd(Poolid uint32, Cpu int) (err error) {
 	ret := C.libxl_cpupool_cpuadd(ctx.ctx, C.uint32_t(Poolid), C.int(Cpu))
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -783,7 +783,7 @@ func (ctx *Context) CpupoolCpuaddCpumap(Poolid uint32, Cpumap Bitmap) (err error
 
 	ret := C.libxl_cpupool_cpuadd_cpumap(ctx.ctx, C.uint32_t(Poolid), &cbm)
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -794,7 +794,7 @@ func (ctx *Context) CpupoolCpuaddCpumap(Poolid uint32, Cpumap Bitmap) (err error
 func (ctx *Context) CpupoolCpuremove(Poolid uint32, Cpu int) (err error) {
 	ret := C.libxl_cpupool_cpuremove(ctx.ctx, C.uint32_t(Poolid), C.int(Cpu))
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -812,7 +812,7 @@ func (ctx *Context) CpupoolCpuremoveCpumap(Poolid uint32, Cpumap Bitmap) (err er
 
 	ret := C.libxl_cpupool_cpuremove_cpumap(ctx.ctx, C.uint32_t(Poolid), &cbm)
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -826,7 +826,7 @@ func (ctx *Context) CpupoolRename(Name string, Poolid uint32) (err error) {
 
 	ret := C.libxl_cpupool_rename(ctx.ctx, name, C.uint32_t(Poolid))
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -839,7 +839,7 @@ func (ctx *Context) CpupoolCpuaddNode(Poolid uint32, Node int) (Cpus int, err er
 
 	ret := C.libxl_cpupool_cpuadd_node(ctx.ctx, C.uint32_t(Poolid), C.int(Node), &ccpus)
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -854,7 +854,7 @@ func (ctx *Context) CpupoolCpuremoveNode(Poolid uint32, Node int) (Cpus int, err
 
 	ret := C.libxl_cpupool_cpuremove_node(ctx.ctx, C.uint32_t(Poolid), C.int(Node), &ccpus)
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -867,7 +867,7 @@ func (ctx *Context) CpupoolCpuremoveNode(Poolid uint32, Node int) (Cpus int, err
 func (ctx *Context) CpupoolMovedomain(Poolid uint32, Id Domid) (err error) {
 	ret := C.libxl_cpupool_movedomain(ctx.ctx, C.uint32_t(Poolid), C.uint32_t(Id))
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -1028,7 +1028,7 @@ func (bm Bitmap) String() (s string) {
 func (ctx *Context) GetMaxCpus() (maxCpus int, err error) {
 	ret := C.libxl_get_max_cpus(ctx.ctx)
 	if ret < 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 	maxCpus = int(ret)
@@ -1039,7 +1039,7 @@ func (ctx *Context) GetMaxCpus() (maxCpus int, err error) {
 func (ctx *Context) GetOnlineCpus() (onCpus int, err error) {
 	ret := C.libxl_get_online_cpus(ctx.ctx)
 	if ret < 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 	onCpus = int(ret)
@@ -1050,7 +1050,7 @@ func (ctx *Context) GetOnlineCpus() (onCpus int, err error) {
 func (ctx *Context) GetMaxNodes() (maxNodes int, err error) {
 	ret := C.libxl_get_max_nodes(ctx.ctx)
 	if ret < 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 	maxNodes = int(ret)
@@ -1063,7 +1063,7 @@ func (ctx *Context) GetFreeMemory() (memkb uint64, err error) {
 	ret := C.libxl_get_free_memory(ctx.ctx, &cmem)
 
 	if ret < 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -1108,7 +1108,7 @@ func (ctx *Context) DomainInfo(Id Domid) (di *Dominfo, err error) {
 	ret := C.libxl_domain_info(ctx.ctx, &cdi, C.uint32_t(Id))
 
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 
@@ -1121,7 +1121,7 @@ func (ctx *Context) DomainUnpause(Id Domid) (err error) {
 	ret := C.libxl_domain_unpause(ctx.ctx, C.uint32_t(Id), nil)
 
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 	}
 	return
 }
@@ -1131,7 +1131,7 @@ func (ctx *Context) DomainPause(id Domid) (err error) {
 	ret := C.libxl_domain_pause(ctx.ctx, C.uint32_t(id), nil)
 
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 	}
 	return
 }
@@ -1141,7 +1141,7 @@ func (ctx *Context) DomainShutdown(id Domid) (err error) {
 	ret := C.libxl_domain_shutdown(ctx.ctx, C.uint32_t(id), nil)
 
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 	}
 	return
 }
@@ -1151,7 +1151,7 @@ func (ctx *Context) DomainReboot(id Domid) (err error) {
 	ret := C.libxl_domain_reboot(ctx.ctx, C.uint32_t(id), nil)
 
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 	}
 	return
 }
@@ -1214,7 +1214,7 @@ func (ctx *Context) ConsoleGetTty(id Domid, consNum int, conType ConsoleType) (p
 	var cpath *C.char
 	ret := C.libxl_console_get_tty(ctx.ctx, C.uint32_t(id), C.int(consNum), C.libxl_console_type(conType), &cpath)
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 	defer C.free(unsafe.Pointer(cpath))
@@ -1229,7 +1229,7 @@ func (ctx *Context) PrimaryConsoleGetTty(domid uint32) (path string, err error)
 	var cpath *C.char
 	ret := C.libxl_primary_console_get_tty(ctx.ctx, C.uint32_t(domid), &cpath)
 	if ret != 0 {
-		err = Error(-ret)
+		err = Error(ret)
 		return
 	}
 	defer C.free(unsafe.Pointer(cpath))
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Tue May 25 01:52:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 01:52:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132023.246421 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llMF9-0001jX-Ev; Tue, 25 May 2021 01:52:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132023.246421; Tue, 25 May 2021 01:52:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llMF9-0001jP-9X; Tue, 25 May 2021 01:52:27 +0000
Received: by outflank-mailman (input) for mailman id 132023;
 Tue, 25 May 2021 01:52:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llMF8-0001jF-BY; Tue, 25 May 2021 01:52:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llMF8-0005yS-1O; Tue, 25 May 2021 01:52:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llMF7-0001al-KU; Tue, 25 May 2021 01:52:25 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llMF7-0004PJ-JG; Tue, 25 May 2021 01:52:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pQc/qU57t06uK6D5WVZ1rP41Qu1XTAQjKvT/br2uzmw=; b=NQs4ZJb7LFHj/h3N7l4BOWewaH
	bDhYtchrSozpV+3m8CW7QFLeEUO0F3xzWN3P4PZtj/mhI48472EKTfVzN7zicd2XwMPCJreeVf7FL
	RyWtgha1eYAlDumYi4u94Leh0hY5zqaLkDz3wL5sBjm/LdzO4mKUxqHOA/lqE/6AJiSA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162143-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162143: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=371ebfe28600fc5a435504b841cd401208a68f07
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 01:52:25 +0000

flight 162143 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162143/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                371ebfe28600fc5a435504b841cd401208a68f07
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  277 days
Failing since        152659  2020-08-21 14:07:39 Z  276 days  510 attempts
Testing same since   162143  2021-05-24 17:09:47 Z    0 days    1 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159341 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 25 03:09:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 03:09:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132033.246434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llNR4-0000FR-AM; Tue, 25 May 2021 03:08:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132033.246434; Tue, 25 May 2021 03:08:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llNR4-0000FK-7H; Tue, 25 May 2021 03:08:50 +0000
Received: by outflank-mailman (input) for mailman id 132033;
 Tue, 25 May 2021 03:08:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wpal=KU=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1llNR3-0000FE-EE
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 03:08:49 +0000
Received: from mail-qk1-x729.google.com (unknown [2607:f8b0:4864:20::729])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b2ce7f61-d28c-44b6-ab7b-e1e5aeb4a5c9;
 Tue, 25 May 2021 03:08:48 +0000 (UTC)
Received: by mail-qk1-x729.google.com with SMTP id 76so29069233qkn.13
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 20:08:48 -0700 (PDT)
Received: from mail-qt1-f178.google.com (mail-qt1-f178.google.com.
 [209.85.160.178])
 by smtp.gmail.com with ESMTPSA id e14sm1886525qkl.1.2021.05.24.20.08.43
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 May 2021 20:08:44 -0700 (PDT)
Received: by mail-qt1-f178.google.com with SMTP id k19so22216975qta.2
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 20:08:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b2ce7f61-d28c-44b6-ab7b-e1e5aeb4a5c9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=UUICKcc/sKdAjzKNgq2l5eqYx+k5eTShYrWAkwN1nE8=;
        b=TELw0+2UEzjgBQgY3N9cHf7cvJQg4x1GF5jHbOj/Xhbu2EVYn5hJJdXJJYpk6Mpk8G
         l2k+90bOTlHqSiuws+ej9C6F14blJi9bSItZInf5oUYtu07o9RuIUrgGf8YFYR9loGpy
         S0ow3K1ouDjzn61e3PW37PZcm998eK6VcTTFc=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=UUICKcc/sKdAjzKNgq2l5eqYx+k5eTShYrWAkwN1nE8=;
        b=ktD+3gATcLmZPtPI9IpBpDzT+0DPTfLLxu06BS4M7wKxWy3IBBfdV4gTNY4v2mZAtM
         olmbo+csnx1A+ryUbwPJMZYVllCzdUvq5qHKCiI0fPNfrPcGS0Q7jo+DepzpwNHvmdJ8
         1UH5E6rLj5ImJGv3yZd3yc0IYdWxTH3Dn+HyiFO7aYLWZlOuendq+pNKWsVpOkLQt6vZ
         Zix/AAWbDePclQyTaupKsfX/XjHDgjkb0Z5jFzkPaw5fEWykk+foNlfmnI/jlfNlK/l5
         BVX8+oNMbfAT25woxRS+tGAtgQt49HkWkU7w6/T3L1rhP4anSb6cusk6ZnXdrb6/+dvc
         Jeyw==
X-Gm-Message-State: AOAM532g1kKxFWpInEYyD53LLlth2TkvWJ3w8SvFyzJNcNvORQc3A/5S
	f8Gzx7a5KQNFpCWnq6JniRQRcBfgpYkYPg==
X-Google-Smtp-Source: ABdhPJyFHwnQMLAr3c6O6Sq2SD2TuFW7ecf0oESWTbw+0FTln5lHUh7IaNFpGQy9nIBOWxl+102N9A==
X-Received: by 2002:a37:74e:: with SMTP id 75mr32399312qkh.340.1621912126256;
        Mon, 24 May 2021 20:08:46 -0700 (PDT)
X-Received: by 2002:a05:6638:22b4:: with SMTP id z20mr26846805jas.128.1621912112776;
 Mon, 24 May 2021 20:08:32 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-5-tientzu@chromium.org> <CALiNf2_AWsnGqCnh02ZAGt+B-Ypzs1=-iOG2owm4GZHz2JAc4A@mail.gmail.com>
 <YKvLDlnns3TWEZ5l@0xbeefdead.lan>
In-Reply-To: <YKvLDlnns3TWEZ5l@0xbeefdead.lan>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 25 May 2021 11:08:21 +0800
X-Gmail-Original-Message-ID: <CALiNf2-M-CQdQaHiFTMfOkON6PEd0Yu_TvaCXKx9vXJ-7o5ffg@mail.gmail.com>
Message-ID: <CALiNf2-M-CQdQaHiFTMfOkON6PEd0Yu_TvaCXKx9vXJ-7o5ffg@mail.gmail.com>
Subject: Re: [PATCH v7 04/15] swiotlb: Add restricted DMA pool initialization
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com, 
	jgross@suse.com, Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, 
	benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Mon, May 24, 2021 at 11:49 PM Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
> On Tue, May 18, 2021 at 02:48:35PM +0800, Claire Chang wrote:
> > I didn't move this to a separate file because I feel it might be
> > confusing for swiotlb_alloc/free (and it would need more functions to be
> > non-static).
> > Maybe instead of moving it to a separate file, we can try to come up with
> > better naming?
>
> I think you are referring to:
>
> rmem_swiotlb_setup
>
> ?

Yes, and the following swiotlb_alloc/free.

>
> Which is ARM specific and inside the generic code?
>
> <sigh>
>
> Christoph wants to unify it all in the generic code so there is one single
> source, but the "you separate arch code out from generic" maxim
> makes me want to move it out.
>
> I agree that if you move it out from generic to arch-specific we have to
> expose more of the swiotlb functions, which would undo Christoph's
> cleanup work.
>
> How about this: let's leave it as is for now, and when there are more
> use cases we can revisit it and, if needed, move the code?
>
Ok! Sounds good!
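For context, the restricted DMA pool that rmem_swiotlb_setup binds to is declared as a device-tree reserved-memory node. A rough sketch of such a node follows; the node name, label, addresses, sizes, and the `&some_device` consumer are made up for illustration:

```dts
reserved-memory {
	#address-cells = <1>;
	#size-cells = <1>;
	ranges;

	/* Pool that restricted devices bounce their DMA through. */
	restricted_dma: restricted-dma-pool@50000000 {
		compatible = "restricted-dma-pool";
		reg = <0x50000000 0x400000>; /* 4 MiB; address is illustrative */
	};
};

/* A device opts in by referencing the pool: */
&some_device {
	memory-region = <&restricted_dma>;
};
```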


From xen-devel-bounces@lists.xenproject.org Tue May 25 03:09:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 03:09:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132034.246446 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llNRD-0000Xv-In; Tue, 25 May 2021 03:08:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132034.246446; Tue, 25 May 2021 03:08:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llNRD-0000Xm-F7; Tue, 25 May 2021 03:08:59 +0000
Received: by outflank-mailman (input) for mailman id 132034;
 Tue, 25 May 2021 03:08:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wpal=KU=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1llNRB-0000XI-Qm
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 03:08:57 +0000
Received: from mail-il1-x12b.google.com (unknown [2607:f8b0:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f4f89e21-2058-4b48-8b0c-866b7b8e71b4;
 Tue, 25 May 2021 03:08:57 +0000 (UTC)
Received: by mail-il1-x12b.google.com with SMTP id o9so26884709ilh.6
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 20:08:57 -0700 (PDT)
Received: from mail-il1-f178.google.com (mail-il1-f178.google.com.
 [209.85.166.178])
 by smtp.gmail.com with ESMTPSA id z8sm4598705ilq.30.2021.05.24.20.08.55
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 May 2021 20:08:55 -0700 (PDT)
Received: by mail-il1-f178.google.com with SMTP id b5so6922288ilc.12
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 20:08:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f4f89e21-2058-4b48-8b0c-866b7b8e71b4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=ibLnJElvlxlVpclpJPUNJ2cC/oiUOQSOujwiFR/Tp+s=;
        b=UieisilkjhqvCv5TM4Uhta7TWRN2Fo+qizEnyHuJxA4va8bXfSD+rLVykfuHhWprMs
         LshPVy4dFsoXIkmd56qov4ggYtlL759ArIJ4nJdiIlOu+d8oU3Tdh/1w/BJ1fDxfpbcA
         V3s9TTUmNV6cOGqUjcXpB0nmY9PGjsUH8vbqY=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=ibLnJElvlxlVpclpJPUNJ2cC/oiUOQSOujwiFR/Tp+s=;
        b=PWJbCHlauLA4u4girKWOySdVYZYv/qu1qpaTJp4nRuFFSIg3sMGrgT6K+aCJGuhgUP
         K0t+cvSZL69uYwKs/jYDGs9xChMn3VbpDLwRY5I1o5Gt91dpD8nMVKo8OmEfaZhBobdC
         L60L1qyyhhqpYYWx+5BnWVyRsk0MoRYpblwnDgMbQ/VNFBl9rErcqyyy06EvFpwKfmia
         kEfy9WbrhEEKNupEXE+5GD6fRgesYV7BOIP9ljK0phBFgm5Z/XkNUGF6V1onFzHQjP+w
         a6QDfqahhtgFTPJ3gOeHmVou+gsyPw/jJ7GN6PmtgH4zh/uU3ULGJ0Og37lZNY6cxRIG
         TAsw==
X-Gm-Message-State: AOAM532qSkzHxHYGeMqrHRNopnvoW0UsyBiSa7IFAGBFnm4xEcJD4seE
	/N/pysog613TUr6m+omALiTSjiF0Zg5BuQ==
X-Google-Smtp-Source: ABdhPJzs3vEWPqSOmGkQ+nFPQJ2FRCf6CxOSIF/NlT+PdgOAA8gGt3a20xinNb3EPEIw9ouI1DQ83g==
X-Received: by 2002:a05:6e02:1a67:: with SMTP id w7mr16335619ilv.137.1621912136278;
        Mon, 24 May 2021 20:08:56 -0700 (PDT)
X-Received: by 2002:a05:6e02:b:: with SMTP id h11mr18955732ilr.18.1621912124990;
 Mon, 24 May 2021 20:08:44 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-6-tientzu@chromium.org> <CALiNf28ke3c91Y7xaHUgvJePKXqYA7UmsYJV9yaeZc3-4Lzs8Q@mail.gmail.com>
 <YKvLc9onyqdsINP7@0xbeefdead.lan>
In-Reply-To: <YKvLc9onyqdsINP7@0xbeefdead.lan>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 25 May 2021 11:08:34 +0800
X-Gmail-Original-Message-ID: <CALiNf28=fn5r_O8ET0TNM6cS7WO0mwXiMzR5z=eJXmNKFWKdzA@mail.gmail.com>
Message-ID: <CALiNf28=fn5r_O8ET0TNM6cS7WO0mwXiMzR5z=eJXmNKFWKdzA@mail.gmail.com>
Subject: Re: [PATCH v7 05/15] swiotlb: Add a new get_io_tlb_mem getter
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com, 
	jgross@suse.com, Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>, 
	benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Mon, May 24, 2021 at 11:51 PM Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
> On Tue, May 18, 2021 at 02:51:52PM +0800, Claire Chang wrote:
> > I still keep this function because directly using dev->dma_io_tlb_mem
> > would cause issues with memory allocation for existing devices. The pool
> > can't support atomic coherent allocation, so we need to distinguish between
> > the per-device pool and the default pool in swiotlb_alloc.
>
> The above should really be rolled into the commit message. You can prefix
> it with "The reason it was done this way was because directly using .."
>

Will add it.


From xen-devel-bounces@lists.xenproject.org Tue May 25 03:14:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 03:14:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132047.246456 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llNWy-0002Gl-8X; Tue, 25 May 2021 03:14:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132047.246456; Tue, 25 May 2021 03:14:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llNWy-0002Ge-5U; Tue, 25 May 2021 03:14:56 +0000
Received: by outflank-mailman (input) for mailman id 132047;
 Tue, 25 May 2021 03:14:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Wpal=KU=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1llNWw-0002GY-K1
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 03:14:54 +0000
Received: from mail-pl1-x62f.google.com (unknown [2607:f8b0:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8c824c4e-0b7c-43d2-83ad-dc7a48f52b35;
 Tue, 25 May 2021 03:14:53 +0000 (UTC)
Received: by mail-pl1-x62f.google.com with SMTP id v12so15632176plo.10
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 20:14:53 -0700 (PDT)
Received: from mail-pg1-f180.google.com (mail-pg1-f180.google.com.
 [209.85.215.180])
 by smtp.gmail.com with ESMTPSA id s48sm11769117pfw.205.2021.05.24.20.14.52
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 24 May 2021 20:14:52 -0700 (PDT)
Received: by mail-pg1-f180.google.com with SMTP id f22so20665987pgb.9
 for <xen-devel@lists.xenproject.org>; Mon, 24 May 2021 20:14:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8c824c4e-0b7c-43d2-83ad-dc7a48f52b35
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=VHDFyefEfXv2T1h/kXDPOn5vVw8tSm/6FW05x+E2j40=;
        b=merHIeKNi5Uu6C9TAUujDNIN9FxrI34YUx7aW4WyFELIyYppIt99C4UwGvzWaArcmu
         9EaYZFbnrmtDjrMZ7c8349ke6DKJjWiZDCUD5epOu2PqMIr5kYM1CoXZ0XNE1vQ3QFiu
         bvf2bDyHaIMES3MNNrHcengJbyjgwpDl3VY20=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=VHDFyefEfXv2T1h/kXDPOn5vVw8tSm/6FW05x+E2j40=;
        b=Js231dBlhRiGjOhCB0xtNDckY6CR4/sSb50FwrGm+vD/Zalcm67TcSZ2mfr7AWpV3I
         UnmN2DwMIqcc/R6XYlnaLBBKZZmkIj0uJoHAu3zWjM00ao3/nG5k8OkhG8TWUC591ISM
         px2Q/vfk35yx+tshIBD1talopiyBigAP1GInUrkdCEPKqAE81+5eu+kNrG6wwWsamZi2
         hYLdTLl+eatoGILhJSqRN2DsKPvDcOXrTaOD67IcfNYvLFhINwtp9LfYpKWjZ4deTUag
         JLtFxJ+0BEEplQohcG1/lwr4rpYuXLF0OmlxZrghDX/3//xN/RbFmCm7DlVqtJaSx/qI
         kVAA==
X-Gm-Message-State: AOAM531B7+nQ4M66lWuG7CJl+DtGVPQrZLr8aJVNMtc70cPI8dCNvgOj
	WnrGymOuEOraLkVnsksZYLPKbVyt+5AMpw==
X-Google-Smtp-Source: ABdhPJyVB6HbPdbfwPGUg++s80co/GUFVU14nQLwF+soWxwWrwoFgdh3VEcI/qlLhdNXNxT3jRZ5Yw==
X-Received: by 2002:a17:90a:6f06:: with SMTP id d6mr28281937pjk.216.1621912492879;
        Mon, 24 May 2021 20:14:52 -0700 (PDT)
X-Received: by 2002:a6b:7b08:: with SMTP id l8mr16990516iop.50.1621912094090;
 Mon, 24 May 2021 20:08:14 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-2-tientzu@chromium.org> <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
 <CALiNf2-9fRbH3Xs=fA+N1iRztFxeC0iTsyOSZFe=F42uwXS0Sg@mail.gmail.com> <YKvL865kutnHqkVc@0xbeefdead.lan>
In-Reply-To: <YKvL865kutnHqkVc@0xbeefdead.lan>
From: Claire Chang <tientzu@chromium.org>
Date: Tue, 25 May 2021 11:08:03 +0800
X-Gmail-Original-Message-ID: <CALiNf2_iq3OS+95as4fj+AOMDVYgGL71A1811QLaZ=5T7TRjww@mail.gmail.com>
Message-ID: <CALiNf2_iq3OS+95as4fj+AOMDVYgGL71A1811QLaZ=5T7TRjww@mail.gmail.com>
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Florian Fainelli <f.fainelli@gmail.com>, Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, 
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Mon, May 24, 2021 at 11:53 PM Konrad Rzeszutek Wilk
<konrad.wilk@oracle.com> wrote:
>
> > > do the set_memory_decrypted()+memset(). Is this okay or should
> > > swiotlb_init_io_tlb_mem() add an additional argument to do this
> > > conditionally?
> >
> > I'm actually not sure if this is okay. If not, I will add an additional
> > argument for it.
>
> Any observations discovered? (Want to make sure my memory-cache has the
> correct semantics for set_memory_decrypted in mind).

It works fine on my arm64 device.

> >
> > > --
> > > Florian


From xen-devel-bounces@lists.xenproject.org Tue May 25 04:39:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 04:39:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132054.246468 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llOqs-0001Py-Df; Tue, 25 May 2021 04:39:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132054.246468; Tue, 25 May 2021 04:39:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llOqs-0001Pr-9N; Tue, 25 May 2021 04:39:34 +0000
Received: by outflank-mailman (input) for mailman id 132054;
 Tue, 25 May 2021 04:39:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llOqr-0001Ph-2H; Tue, 25 May 2021 04:39:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llOqq-0000qj-SM; Tue, 25 May 2021 04:39:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llOqq-00071q-C9; Tue, 25 May 2021 04:39:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llOqq-0008Pa-BZ; Tue, 25 May 2021 04:39:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pHBephRCGAfpmCpm6dInw+hwLZkW3o1dQS5WY75VQis=; b=oLHOxiRcy262GTPc7+7njtNY7i
	rbGizbLoFbLW/x7iXGBLUpCkBH6j8YBv8qBF/Tx/K6gvE3xNMZd4e0oPrC+Nf1TAEmTE7sN30MRot
	cMABFerXKtXVWxg4X1o8LRUTY2dxqz3dCeC41nSQ9mGfTWWTdwm/CvFJNIk1vvBq+Aqs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162144-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162144: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=1434a3127887a7e708be5f4edd5e36d64d8622f8
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 04:39:32 +0000

flight 162144 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162144/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                1434a3127887a7e708be5f4edd5e36d64d8622f8
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  297 days
Failing since        152366  2020-08-01 20:49:34 Z  296 days  503 attempts
Testing same since   162144  2021-05-24 19:40:52 Z    0 days    1 attempts

------------------------------------------------------------
6085 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1652767 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 25 06:17:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 06:17:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132065.246482 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llQNu-0002Tm-OD; Tue, 25 May 2021 06:17:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132065.246482; Tue, 25 May 2021 06:17:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llQNu-0002Tf-Ka; Tue, 25 May 2021 06:17:46 +0000
Received: by outflank-mailman (input) for mailman id 132065;
 Tue, 25 May 2021 06:17:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=RIGM=KU=huawei.com=yuehaibing@srs-us1.protection.inumbo.net>)
 id 1llQNs-0002TZ-QI
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 06:17:44 +0000
Received: from szxga04-in.huawei.com (unknown [45.249.212.190])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 22f3c710-b282-475d-a98e-b3f1b9b876a4;
 Tue, 25 May 2021 06:17:38 +0000 (UTC)
Received: from dggems703-chm.china.huawei.com (unknown [172.30.72.58])
 by szxga04-in.huawei.com (SkyGuard) with ESMTP id 4Fq3hM1jM5z1BQrK;
 Tue, 25 May 2021 14:14:43 +0800 (CST)
Received: from dggema769-chm.china.huawei.com (10.1.198.211) by
 dggems703-chm.china.huawei.com (10.3.19.180) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id
 15.1.2176.2; Tue, 25 May 2021 14:17:34 +0800
Received: from [10.174.179.215] (10.174.179.215) by
 dggema769-chm.china.huawei.com (10.1.198.211) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id
 15.1.2176.2; Tue, 25 May 2021 14:17:33 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 22f3c710-b282-475d-a98e-b3f1b9b876a4
Subject: Re: [PATCH -next] xen/pcpu: Use DEVICE_ATTR_RW macro
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>, <jgross@suse.com>,
	<sstabellini@kernel.org>
CC: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
References: <20210523070214.34948-1-yuehaibing@huawei.com>
 <c27b37e3-7d4b-1ed1-2b8c-1fbde6e30c3b@oracle.com>
From: YueHaibing <yuehaibing@huawei.com>
Message-ID: <e2f4309d-e53a-1511-a732-9cf5c5217d55@huawei.com>
Date: Tue, 25 May 2021 14:17:33 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.5.0
MIME-Version: 1.0
In-Reply-To: <c27b37e3-7d4b-1ed1-2b8c-1fbde6e30c3b@oracle.com>
Content-Type: text/plain; charset="utf-8"
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [10.174.179.215]
X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To
 dggema769-chm.china.huawei.com (10.1.198.211)
X-CFilter-Loop: Reflected



On 2021/5/24 20:48, Boris Ostrovsky wrote:
> 
> On 5/23/21 3:02 AM, YueHaibing wrote:
>> Use DEVICE_ATTR_RW helper instead of plain DEVICE_ATTR,
>> which makes the code a bit shorter and easier to read.
>>
>> Signed-off-by: YueHaibing <yuehaibing@huawei.com>
> 
> 
> Do you think you can also make similar change in drivers/xen/xen-balloon.c and drivers/xen/xenbus/xenbus_probe.c?
> 
Sure, will do that in v2.
> 
> 
> Thanks.
> 
> -boris
> 
> .
> 


From xen-devel-bounces@lists.xenproject.org Tue May 25 07:03:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 07:03:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132072.246493 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llR5j-0007CD-31; Tue, 25 May 2021 07:03:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132072.246493; Tue, 25 May 2021 07:03:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llR5j-0007C6-04; Tue, 25 May 2021 07:03:03 +0000
Received: by outflank-mailman (input) for mailman id 132072;
 Tue, 25 May 2021 07:03:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llR5i-0007C0-AT
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 07:03:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4142c3a0-7b83-4ed6-b517-c793a7c588f0;
 Tue, 25 May 2021 07:03:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53E81AE99;
 Tue, 25 May 2021 07:03:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4142c3a0-7b83-4ed6-b517-c793a7c588f0
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621926180; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VJ7IVAyhd+ypXbSEjQNSSNr9w7SRTpHwAFx4NKCZ1Ms=;
	b=EZuaBG9MGug1dsr5q5sfMwMvHhdFr+i5u6nd4kpDN7CgA7nB6h9+KKliqowY2uYgamJojq
	s+NiP0TCpMdUvCEud9E8RWc7647321rBxCvNLm9plEOWHQv46KVEYKBTTfW1xZvKgOmKKq
	cLPm8Spw3Z6jODeyMZTibTTLlbL3zEI=
Subject: Re: [PATCH v4 1/4] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Alistair Francis <alistair.francis@wdc.com>,
 xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <2a8329da33d2af0eab8581a01e3098e8360bda87.1621712830.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <97ef9c85-b5ba-42c6-88f0-6ac66063754c@suse.com>
Date: Tue, 25 May 2021 09:02:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <2a8329da33d2af0eab8581a01e3098e8360bda87.1621712830.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.05.2021 16:34, Connor Davis wrote:
> Defaulting to yes only for X86 and ARM reduces the requirements
> for a minimal build when porting new architectures.
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Tue May 25 07:06:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 07:06:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132078.246503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llR98-0007rN-J6; Tue, 25 May 2021 07:06:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132078.246503; Tue, 25 May 2021 07:06:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llR98-0007rG-G3; Tue, 25 May 2021 07:06:34 +0000
Received: by outflank-mailman (input) for mailman id 132078;
 Tue, 25 May 2021 07:06:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llR97-0007r4-RE
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 07:06:33 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 97be7c8d-a173-4ac3-8087-c17c9d985c58;
 Tue, 25 May 2021 07:06:33 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6C5F2AE99;
 Tue, 25 May 2021 07:06:32 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 97be7c8d-a173-4ac3-8087-c17c9d985c58
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621926392; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=5MVWrXTN9/FDv7kkR8qPnXjznWziHZao0rmhrij1nsg=;
	b=ciqZREOQ4qUReuWUFe+xT/wSL6wbNEnSW4wJimgR8+dBBsVaAzjDdrUlasPFHOP6UeGRxi
	3cZzbYE7AbdOk+2WM4no6QjPpO7FSsBj4fvtbJpSvmnvLVi32uiAZtaFVizj0ZSGwctzz7
	6Z1s/wkoHK+V37ZVBEDr9BzuF/lVwrs=
Subject: Re: [PATCH v4 1/4] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
From: Jan Beulich <jbeulich@suse.com>
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Alistair Francis <alistair.francis@wdc.com>,
 xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <2a8329da33d2af0eab8581a01e3098e8360bda87.1621712830.git.connojdavis@gmail.com>
 <97ef9c85-b5ba-42c6-88f0-6ac66063754c@suse.com>
Message-ID: <2d1e11a6-4923-9935-5576-002a9acb1510@suse.com>
Date: Tue, 25 May 2021 09:06:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <97ef9c85-b5ba-42c6-88f0-6ac66063754c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.05.2021 09:02, Jan Beulich wrote:
> On 24.05.2021 16:34, Connor Davis wrote:
>> Defaulting to yes only for X86 and ARM reduces the requirements
>> for a minimal build when porting new architectures.
>>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
> 
> Acked-by: Jan Beulich <jbeulich@suse.com>

Actually, just to make sure: This ...

> --- a/xen/drivers/char/Kconfig
> +++ b/xen/drivers/char/Kconfig
> @@ -1,5 +1,6 @@
>  config HAS_NS16550
>  	bool "NS16550 UART driver" if ARM
> +	default n if RISCV

... won't trigger a kconfig warning prior to the RISCV symbol getting
introduced, will it? (I was about to commit the change when I started
wondering.)

Jan
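[Editor's note: for reference, the construct in question looks roughly as below. This is a sketch, not the full Kconfig file, and whether a warning fires for the not-yet-defined RISCV symbol is exactly the open question; in Linux-derived kconfig, a symbol that only appears in an expression is ordinarily treated as undefined and silently evaluates to n, with warnings normally reserved for unknown symbols assigned in a .config.]

```kconfig
# Sketch of xen/drivers/char/Kconfig after the patch; the RISCV
# symbol is not defined anywhere in the tree at this point, and the
# remaining defaults of the real file are omitted here.
config HAS_NS16550
	bool "NS16550 UART driver" if ARM
	default n if RISCV
```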


From xen-devel-bounces@lists.xenproject.org Tue May 25 07:32:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 07:32:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132087.246519 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llRYO-0002XQ-OF; Tue, 25 May 2021 07:32:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132087.246519; Tue, 25 May 2021 07:32:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llRYO-0002XJ-L2; Tue, 25 May 2021 07:32:40 +0000
Received: by outflank-mailman (input) for mailman id 132087;
 Tue, 25 May 2021 07:32:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=CDQ4=KU=suse.com=jgross@srs-us1.protection.inumbo.net>)
 id 1llRYM-0002XD-Vs
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 07:32:39 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6207b7cf-dbbf-4445-a799-e11db0c2953f;
 Tue, 25 May 2021 07:32:37 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id EC8EFAEA6;
 Tue, 25 May 2021 07:32:36 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6207b7cf-dbbf-4445-a799-e11db0c2953f
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621927957; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=QvoPXNgD6HCBQBuP8ufVrFcviWUq9lvfxgeW6D9GpHQ=;
	b=P8nFyf0S7yJjjKYv6oUtJAZzE6q42aT3PjNXiq4uObz12EQaGhnmc56wGCLrdHLfws/oel
	C330pScEn46W5Vwpf+0NYx+j9XgGFEfhPIUOQhH0mf2pW+bF1rgJcFEjpC1AVRqelBgn/Q
	9CFZs49FuyUtDPQlzuSD4wcfwMFvVJk=
Subject: Re: [PATCH v2 0/6] tools/libs: add missing support of linear
 p2m_list, cleanup
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
References: <20210412152236.1975-1-jgross@suse.com>
 <b79c60e4-7c41-be9a-b0df-e9f9cf71eafa@suse.com>
Message-ID: <9cbac4d9-16d8-ff52-eb8f-8375cb88af93@suse.com>
Date: Tue, 25 May 2021 09:32:36 +0200
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.0
MIME-Version: 1.0
In-Reply-To: <b79c60e4-7c41-be9a-b0df-e9f9cf71eafa@suse.com>
Content-Type: multipart/signed; micalg=pgp-sha256;
 protocol="application/pgp-signature";
 boundary="u5G3R3gAkiK0fJcRYcH5NGX47UOp7UR92"

This is an OpenPGP/MIME signed message (RFC 4880 and 3156)
--u5G3R3gAkiK0fJcRYcH5NGX47UOp7UR92
Content-Type: multipart/mixed; boundary="LrgSeolZaKzXCi9bFdcJEAQcswtA4W63B";
 protected-headers="v1"
From: Juergen Gross <jgross@suse.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Christian Lindig <christian.lindig@citrix.com>, David Scott <dave@recoil.org>
Message-ID: <9cbac4d9-16d8-ff52-eb8f-8375cb88af93@suse.com>
Subject: Re: [PATCH v2 0/6] tools/libs: add missing support of linear
 p2m_list, cleanup
References: <20210412152236.1975-1-jgross@suse.com>
 <b79c60e4-7c41-be9a-b0df-e9f9cf71eafa@suse.com>
In-Reply-To: <b79c60e4-7c41-be9a-b0df-e9f9cf71eafa@suse.com>

--LrgSeolZaKzXCi9bFdcJEAQcswtA4W63B
Content-Type: multipart/mixed;
 boundary="------------6C914B3192B381E4EB82397E"
Content-Language: en-US

This is a multi-part message in MIME format.
--------------6C914B3192B381E4EB82397E
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Transfer-Encoding: quoted-printable

On 12.05.21 08:58, Juergen Gross wrote:
> Ping?

Now each patch has an Ack by Wei. Could the series either be applied or
receive further comments, please?


Juergen

> 
> On 12.04.21 17:22, Juergen Gross wrote:
>> There are some corners left which don't support the not so very new
>> linear p2m list of pv guests, which has been introduced in Linux kernel
>> 3.19 and which is mandatory for non-legacy versions of Xen since kernel
>> 4.14.
>>
>> This series adds support for the linear p2m list where it is missing
>> (colo support and "xl dump-core").
>>
>> In theory it should be possible to merge the p2m list mapping code
>> from migration handling and core dump handling, but this needs quite
>> some cleanup before this is possible.
>>
>> The first three patches of this series are fixing real problems, so
>> I've put them at the start of this series, especially in order to make
>> backports easier.
>>
>> The other three patches are only the first steps of cleanup. The main
>> work done here is to concentrate all p2m mapping in libxenguest instead
>> of having one implementation in each of libxenguest and libxenctrl.
>>
>> Merging the two implementations should be rather easy, but this will
>> require to touch many lines of code, as the migration handling variant
>> seems to be more mature, but it is using the migration stream specific
>> structures heavily. So I'd like to have some confirmation that my way
>> to clean this up is the right one.
>>
>> My idea would be to add the data needed for p2m mapping to struct
>> domain_info_context and replace the related fields in struct
>> xc_sr_context with a struct domain_info_context. Modifying the
>> interface of xc_core_arch_map_p2m() to take most current parameters
>> via struct domain_info_context would then enable migration coding to
>> use xc_core_arch_map_p2m() for mapping the p2m. xc_core_arch_map_p2m()
>> should look basically like the current migration p2m mapping code
>> afterwards.
>>
>> Any comments to that plan?
>>
>> Changes in V2:
>> - added missing #include in ocaml stub
>>
>> Juergen Gross (6):
>>    tools/libs/guest: fix max_pfn setting in map_p2m()
>>    tools/libs/ctrl: fix xc_core_arch_map_p2m() to support linear p2m
>>      table
>>    tools/libs/ctrl: use common p2m mapping code in xc_domain_resume_any()
>>    tools/libs: move xc_resume.c to libxenguest
>>    tools/libs: move xc_core* from libxenctrl to libxenguest
>>    tools/libs/guest: make some definitions private to libxenguest
>>
>>   tools/include/xenctrl.h                      |  63 ---
>>   tools/include/xenguest.h                     |  63 +++
>>   tools/libs/ctrl/Makefile                     |   4 -
>>   tools/libs/ctrl/xc_core_x86.c                | 223 ----------
>>   tools/libs/ctrl/xc_domain.c                  |   2 -
>>   tools/libs/ctrl/xc_private.h                 |  43 +-
>>   tools/libs/guest/Makefile                    |   4 +
>>   .../libs/{ctrl/xc_core.c => guest/xg_core.c} |   7 +-
>>   .../libs/{ctrl/xc_core.h => guest/xg_core.h} |  15 +-
>>   .../xc_core_arm.c => guest/xg_core_arm.c}    |  31 +-
>>   .../xc_core_arm.h => guest/xg_core_arm.h}    |   0
>>   tools/libs/guest/xg_core_x86.c               | 399 ++++++++++++++++++
>>   .../xc_core_x86.h => guest/xg_core_x86.h}    |   0
>>   tools/libs/guest/xg_dom_boot.c               |   2 +-
>>   tools/libs/guest/xg_domain.c                 |  19 +-
>>   tools/libs/guest/xg_offline_page.c           |   2 +-
>>   tools/libs/guest/xg_private.h                |  16 +-
>>   .../{ctrl/xc_resume.c => guest/xg_resume.c}  |  69 +--
>>   tools/libs/guest/xg_sr_save_x86_pv.c         |   2 +-
>>   tools/ocaml/libs/xc/xenctrl_stubs.c          |   1 +
>>   20 files changed, 545 insertions(+), 420 deletions(-)
>>   delete mode 100644 tools/libs/ctrl/xc_core_x86.c
>>   rename tools/libs/{ctrl/xc_core.c => guest/xg_core.c} (99%)
>>   rename tools/libs/{ctrl/xc_core.h => guest/xg_core.h} (92%)
>>   rename tools/libs/{ctrl/xc_core_arm.c => guest/xg_core_arm.c} (72%)
>>   rename tools/libs/{ctrl/xc_core_arm.h => guest/xg_core_arm.h} (100%)
>>   create mode 100644 tools/libs/guest/xg_core_x86.c
>>   rename tools/libs/{ctrl/xc_core_x86.h => guest/xg_core_x86.h} (100%)
>>   rename tools/libs/{ctrl/xc_resume.c => guest/xg_resume.c} (80%)
>>
> 


--------------6C914B3192B381E4EB82397E--

--LrgSeolZaKzXCi9bFdcJEAQcswtA4W63B--

--u5G3R3gAkiK0fJcRYcH5NGX47UOp7UR92--


From xen-devel-bounces@lists.xenproject.org Tue May 25 08:44:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 08:44:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132100.246529 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llSg0-00017B-G5; Tue, 25 May 2021 08:44:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132100.246529; Tue, 25 May 2021 08:44:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llSg0-000174-D3; Tue, 25 May 2021 08:44:36 +0000
Received: by outflank-mailman (input) for mailman id 132100;
 Tue, 25 May 2021 08:44:35 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llSfz-00016y-Me
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 08:44:35 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c42afd9b-235a-4f33-834b-5848992670a1;
 Tue, 25 May 2021 08:44:34 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id C35E3AEAE;
 Tue, 25 May 2021 08:44:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c42afd9b-235a-4f33-834b-5848992670a1
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621932273; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=+wMF+8OHJAPX0xw9IVTDsZ1Nv9fXbJdPoT6ypLs9eL4=;
	b=UwbrOqPJeo4a25lIwTfLRaopvEPTqiI50dvZ4l/i/264QEhTlKhzkPHsboyMg3ybzNfFeD
	QXF2M1QlU0VReVD0EtFB7h6QoTu/OIh/j0YaRIddUy4exdBvKPlmaC69767kJGhvK8iXwv
	wBolz+1CQQtbtMHNwm3M78kimsJELRk=
Subject: Re: [PATCH v4 2/4] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <f76852db6601b1bf243781b0b8b16c7a6fdc8a01.1621712830.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <3b872d59-4330-68ee-c62b-230f5d6cb2cf@suse.com>
Date: Tue, 25 May 2021 10:44:33 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <f76852db6601b1bf243781b0b8b16c7a6fdc8a01.1621712830.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.05.2021 16:34, Connor Davis wrote:
> The variables iommu_enabled and iommu_dont_flush_iotlb are defined in
> drivers/passthrough/iommu.c and are referenced in common code, which
> causes the link to fail when !CONFIG_HAS_PASSTHROUGH.
> 
> Guard references to these variables in common code so that xen
> builds when !CONFIG_HAS_PASSTHROUGH.
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>

Somewhat hesitantly
Acked-by: Jan Beulich <jbeulich@suse.com>
with ...

> --- a/xen/include/xen/iommu.h
> +++ b/xen/include/xen/iommu.h
> @@ -51,9 +51,15 @@ static inline bool_t dfn_eq(dfn_t x, dfn_t y)
>      return dfn_x(x) == dfn_x(y);
>  }
>  
> -extern bool_t iommu_enable, iommu_enabled;
> +extern bool_t iommu_enable;

... just "bool" used here, as I think I did say before. Can be taken
care of while committing, I think.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 25 08:49:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 08:49:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132107.246541 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llSkI-0001lk-4K; Tue, 25 May 2021 08:49:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132107.246541; Tue, 25 May 2021 08:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llSkI-0001ld-0F; Tue, 25 May 2021 08:49:02 +0000
Received: by outflank-mailman (input) for mailman id 132107;
 Tue, 25 May 2021 08:49:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llSkG-0001lX-SP
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 08:49:00 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 532b9c9a-cfb9-4ca5-82bc-23556e280172;
 Tue, 25 May 2021 08:49:00 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 6247DAEA8;
 Tue, 25 May 2021 08:48:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 532b9c9a-cfb9-4ca5-82bc-23556e280172
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621932539; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IMDhAc0r2bOfyLmnNhMnj8wp/38X8vrpzZWKMua1bAw=;
	b=GeSknq+g4TzaoWu4b0jvgZ+RVcm2jn0IrqbOKkSVP8QYgGH8JQoyawE5hlJKA6yRVQhumR
	lG3osOfgsVsqmntgypxou5QGH8ggZEObrBkcJSU2DQC9oDIXbZFy0YjhsMsQww7kkDQSQA
	nAvQPGs/H8GyfnMrMR5xbTvbasMhmkI=
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7d1b6d2a-641c-4508-9b29-b74db4769170@suse.com>
Date: Tue, 25 May 2021 10:48:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 24.05.2021 16:34, Connor Davis wrote:
> Add arch-specific makefiles and configs needed to build for
> riscv. Also add a minimal head.S that is a simple infinite loop.
> head.o can be built with
> 
> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
> 
> No other TARGET is supported at the moment.
> 
> Signed-off-by: Connor Davis <connojdavis@gmail.com>
> ---
>  config/riscv.mk                         |  4 +++
>  xen/Makefile                            |  8 +++--
>  xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
>  xen/arch/riscv/Kconfig.debug            |  0
>  xen/arch/riscv/Makefile                 |  0
>  xen/arch/riscv/Rules.mk                 |  0
>  xen/arch/riscv/arch.mk                  | 14 ++++++++
>  xen/arch/riscv/asm-offsets.c            |  0
>  xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>  xen/arch/riscv/head.S                   |  6 ++++
>  xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
>  11 files changed, 137 insertions(+), 2 deletions(-)
>  create mode 100644 config/riscv.mk
>  create mode 100644 xen/arch/riscv/Kconfig
>  create mode 100644 xen/arch/riscv/Kconfig.debug
>  create mode 100644 xen/arch/riscv/Makefile
>  create mode 100644 xen/arch/riscv/Rules.mk
>  create mode 100644 xen/arch/riscv/arch.mk
>  create mode 100644 xen/arch/riscv/asm-offsets.c
>  create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>  create mode 100644 xen/arch/riscv/head.S
>  create mode 100644 xen/include/asm-riscv/config.h

I think this wants to be accompanied by an addition to ./MAINTAINERS
right away, such that future RISC-V patches can be acked by the
respective designated maintainers, rather than falling under "THE REST".
The question is whether you / we have settled yet who's to become arch
maintainer there.

Jan
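[Editor's note: an addition to ./MAINTAINERS would follow that file's existing block format. The entry below is purely illustrative — the maintainer line is deliberately left open, since that is the unresolved question, and the file list assumes the paths introduced by this patch.]

```
RISC-V ARCHITECTURE
M:	(arch maintainer to be decided)
S:	Supported
F:	xen/arch/riscv/
F:	xen/include/asm-riscv/
```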


From xen-devel-bounces@lists.xenproject.org Tue May 25 08:58:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 08:58:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132116.246551 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llStq-0003Ju-4S; Tue, 25 May 2021 08:58:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132116.246551; Tue, 25 May 2021 08:58:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llStq-0003Jn-1R; Tue, 25 May 2021 08:58:54 +0000
Received: by outflank-mailman (input) for mailman id 132116;
 Tue, 25 May 2021 08:58:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llSto-0003Jh-HD
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 08:58:52 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cd202bfb-c0d6-4243-a979-5f899ab486fa;
 Tue, 25 May 2021 08:58:51 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id E935EAE1F;
 Tue, 25 May 2021 08:58:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd202bfb-c0d6-4243-a979-5f899ab486fa
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621933131; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=irAn1gzuscv/VwlzsNzq0Us/QytPEd7377xAvJnaDMQ=;
	b=We1PWTIcXT518mnqXgldwXeIJZToWDFxnhypNR1xpkbK5tJeGGXkei/YS/7UEb4G23bNlE
	7nfXkR4zzbqpJjwvthTAGs1wi6sLWxVMaSPS8kcLBrJKZC+q/8cagz5nU6vUI7RCVb7J5u
	B1M/uLCHkioc4f/9vnookGggPtO8Vn0=
Subject: Re: Invalid _Static_assert expanded from HASH_CALLBACKS_CHECK
To: Roberto Bagnara <roberto.bagnara@bugseng.com>
References: <ccb37c2e-a3a6-a2e4-bf15-da81f97c94be@bugseng.com>
Cc: xen-devel@lists.xenproject.org, Tim Deegan <tim@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <38898d21-fe76-36dc-f1e6-497b52c5c0b7@suse.com>
Date: Tue, 25 May 2021 10:58:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <ccb37c2e-a3a6-a2e4-bf15-da81f97c94be@bugseng.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 24.05.2021 06:29, Roberto Bagnara wrote:
> I stumbled upon parsing errors due to invalid uses of
> _Static_assert expanded from HASH_CALLBACKS_CHECK where
> the tested expression is not constant, as mandated by
> the C standard.
> 
> Judging from the following comment, there is partial awareness
> of the fact that this is an issue:
> 
> #ifndef __clang__ /* At least some versions dislike some of the uses. */
> #define HASH_CALLBACKS_CHECK(mask) \
>      BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
> 
> Indeed, this is not a fault of Clang: the point is that some
> of the expansions of this macro are not C.  Moreover,
> the fact that GCC sometimes accepts them is not
> something we can rely upon:
> 
> $ cat p.c
> void f() {
> static const int x = 3;
> _Static_assert(x < 4, "");
> }
> $ gcc -c -O p.c
> $ gcc -c p.c
> p.c: In function ‘f’:
> p.c:3:20: error: expression in static assertion is not constant
> 3 | _Static_assert(x < 4, "");
> | ~^~
> $

I'd nevertheless like to stick with this as long as it's not proven
otherwise by a future gcc.

> Finally, I think this can be easily avoided: instead
> of initializing a static const with a constant expression
> and then static-asserting the static const, just static-assert
> the constant initializer.

Well, yes, but the whole point of constructs like

    HASH_CALLBACKS_CHECK(callback_mask);
    hash_domain_foreach(d, callback_mask, callbacks, gmfn);

is to make very obvious that the checked mask and the used mask
match. Hence if anything I'd see us eliminate the static const
callback_mask variables altogether. I did avoid doing so in the
earlier change, following the assumption that the choice of
using a static const there was for a reason originally (my guess:
a combination of not wanting to use a #define and of having the
mask values live next to their corresponding arrays).

Cc-ing Tim as the maintainer, to possibly override my views.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 25 09:24:36 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 09:24:36 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132125.246563 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llTId-0006OS-5g; Tue, 25 May 2021 09:24:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132125.246563; Tue, 25 May 2021 09:24:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llTId-0006OL-2j; Tue, 25 May 2021 09:24:31 +0000
Received: by outflank-mailman (input) for mailman id 132125;
 Tue, 25 May 2021 09:24:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pLNi=KU=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1llTIa-0006OF-PS
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 09:24:29 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3f30c449-f908-4993-8ffc-66a6f84fe015;
 Tue, 25 May 2021 09:24:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3f30c449-f908-4993-8ffc-66a6f84fe015
Subject: Re: [PATCH v4 4/4] automation: Add container for riscv64 builds
To: Connor Davis <connojdavis@gmail.com>, <xen-devel@lists.xenproject.org>
CC: Bobby Eshleman <bobbyeshleman@gmail.com>, Alistair Francis
	<alistair23@gmail.com>, Doug Goldstein <cardoe@cardoe.com>
References: <cover.1621712830.git.connojdavis@gmail.com>
 <e43a22c723b0320883e4f0dc3d08287937d29181.1621712830.git.connojdavis@gmail.com>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <44872529-9e06-c17e-a635-5255d0825541@citrix.com>
Date: Tue, 25 May 2021 10:24:16 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <e43a22c723b0320883e4f0dc3d08287937d29181.1621712830.git.connojdavis@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-GB
MIME-Version: 1.0
X-OriginatorOrg: citrix.com

On 24/05/2021 15:34, Connor Davis wrote:
> Add a container for cross-compiling xen to riscv64.
> This just includes the cross-compiler and necessary packages for
> building xen itself (packages for tools, stubdoms, etc., can be
> added later).
>
> Signed-off-by: Connor Davis <connojdavis@gmail.com>

I've deployed this version of the container, which (unsurprisingly)
is rather smaller than the one with the locally built gcc toolchain.

Therefore, Acked-by: Andrew Cooper <andrew.cooper3@citrix.com> and I'll
commit this right away to make xen.git match reality.

~Andrew


From xen-devel-bounces@lists.xenproject.org Tue May 25 09:45:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 09:45:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132133.246578 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llTcR-0000Fs-W3; Tue, 25 May 2021 09:44:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132133.246578; Tue, 25 May 2021 09:44:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llTcR-0000Fl-Sj; Tue, 25 May 2021 09:44:59 +0000
Received: by outflank-mailman (input) for mailman id 132133;
 Tue, 25 May 2021 09:44:59 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llTcR-0000Fb-0N; Tue, 25 May 2021 09:44:59 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llTcQ-00072R-Rn; Tue, 25 May 2021 09:44:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llTcQ-00069u-Hy; Tue, 25 May 2021 09:44:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llTcQ-00072S-HV; Tue, 25 May 2021 09:44:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162147-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162147: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=2c1f5cb105b03a364f01752e1def7ece69d03845
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 09:44:58 +0000

flight 162147 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162147/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              2c1f5cb105b03a364f01752e1def7ece69d03845
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  319 days
Failing since        151818  2020-07-11 04:18:52 Z  318 days  311 attempts
Testing same since   162147  2021-05-25 04:20:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59279 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 25 10:01:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 10:01:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132143.246592 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llTsV-0002cA-I4; Tue, 25 May 2021 10:01:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132143.246592; Tue, 25 May 2021 10:01:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llTsV-0002c3-Ez; Tue, 25 May 2021 10:01:35 +0000
Received: by outflank-mailman (input) for mailman id 132143;
 Tue, 25 May 2021 10:01:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llTsU-0002bt-Kb; Tue, 25 May 2021 10:01:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llTsU-0007QR-ES; Tue, 25 May 2021 10:01:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llTsU-0006kO-58; Tue, 25 May 2021 10:01:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llTsU-0006f2-4b; Tue, 25 May 2021 10:01:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Zx8pwrrsAc51V0v6GbpKzL8IjCItVzD4IR6wHvSvZOU=; b=jdKKir1S9ySucW5GhP8ojfBGI0
	JAohXscsFvbnQzWWsnbISZs2ntLUD/Z1VxbIL6aJ349JGwqrE2BrBYQ+A7QxJrkVZCkX1PHMRpPK5
	DbMXY6sUdPs1CJi9wZTtZ+bvs9IM5nuEZZpe4uY79twR9uwrI5LAHrQXKnJtQhhHH2vk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162145-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162145: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 10:01:34 +0000

flight 162145 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162145/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162137
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162137
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162137
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162137
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162137
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162137
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162137
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162137
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162137
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162137
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162137
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162145  2021-05-25 01:54:50 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Tue May 25 10:35:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 10:35:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132151.246606 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llUOo-0005mo-6k; Tue, 25 May 2021 10:34:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132151.246606; Tue, 25 May 2021 10:34:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llUOo-0005mh-3k; Tue, 25 May 2021 10:34:58 +0000
Received: by outflank-mailman (input) for mailman id 132151;
 Tue, 25 May 2021 10:34:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llUOm-0005mX-Oy; Tue, 25 May 2021 10:34:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llUOm-0007vY-Eq; Tue, 25 May 2021 10:34:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llUOm-0008Tv-60; Tue, 25 May 2021 10:34:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llUOm-0003oX-5T; Tue, 25 May 2021 10:34:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=fdblZlWPkk7Hi0TsMOM1Odk3rJccyQksOAnp1zp5b6I=; b=epo1kwd9/igEIMblhbdu2495UO
	omiaXtFM9K0h5DPvJpSpHGrafSyk7lHhF+R0qMQf7WFcTkzsNLzF41L1sBbmBytLY9BTmYDIWXVAS
	JyP2sWlpO+sTkNgpnEQV++qRYzAHGTZ10ATrWr5+n7lNDQTPx9HmiCw/yX0COnK/RAh8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162149-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162149: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 10:34:56 +0000

flight 162149 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162149/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162096  2021-05-19 19:01:37 Z    5 days
Testing same since   162149  2021-05-25 08:01:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   aa77acc280..81acb1d7bd  81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 25 11:10:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 11:10:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132160.246620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llUww-0001Fd-0d; Tue, 25 May 2021 11:10:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132160.246620; Tue, 25 May 2021 11:10:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llUwv-0001FW-Ta; Tue, 25 May 2021 11:10:13 +0000
Received: by outflank-mailman (input) for mailman id 132160;
 Tue, 25 May 2021 11:10:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llUwu-0001FQ-K0
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 11:10:12 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4e61f08c-466f-49e1-81c1-d20ba199a94d;
 Tue, 25 May 2021 11:10:11 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id A6A50AF1E;
 Tue, 25 May 2021 11:10:10 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4e61f08c-466f-49e1-81c1-d20ba199a94d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621941010; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=4c9DPkZDWWjIpADE7/pAG+BfjKlKpEyZb+9oX4SaWDw=;
	b=TwnIHqoUXLdS5hFLXVi2ILIPQvzXD7b3EqmqdLDsl5FiXFFZUN1ol5zoXsRHx38Cjrhy26
	GztfRBnqN3kgyjeKMqLQ8OFrusmyVAKBW8rdHQd/jLbR5jpFEEafXrIw19IsF4JJxbDm/W
	B0UeFj6O07lqqlQG9S2M4PWsx+An0Hw=
Subject: Ping: [PATCH 0/3] firmware/shim: build adjustments
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Ian Jackson <iwj@xenproject.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
Message-ID: <eb87b360-5a3a-63d1-5992-a544cc861aa6@suse.com>
Date: Tue, 25 May 2021 13:10:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 30.04.2021 16:39, Jan Beulich wrote:
> Originally I meant to finally get v2 of "firmware/shim: honor symlinks
> during Xen tree setup" sorted. However, the suggestion to use find's
> -L option, while a suitable equivalent of the -xtype primary, has the
> same drawback: It doesn't distinguish between relative and absolute
> symlinks (and we specifically want to skip relative ones). Locally I'm
> using '(' -type f -o -lname '/*' ')' now, but -lname again being non-
> standard I didn't think it would even be worth submitting. While
> looking into that I did notice a few other anomalies, though, which
> this series tries to address.
> 
> I notice tools/firmware/xen-dir/ isn't included in "X86 ARCHITECTURE".
> I wonder whether that should be added.
> 
> 1: update linkfarm exclusions
> 2: drop XEN_CONFIG_EXPERT uses
> 3: UNSUPPORTED=n

Ping?

Jan
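
[Editorial note: the `find` predicate described in the quoted message can be illustrated with a minimal sketch. This assumes GNU find, which provides the non-standard `-lname` primary; the directory and file names are hypothetical, chosen only for illustration.]

```shell
# Select regular files, plus symlinks whose target is an absolute path.
# Relative symlinks are deliberately excluded, matching the intent
# described above: -lname '/*' tests the symlink's target string.
rm -rf /tmp/linkfarm-demo && mkdir /tmp/linkfarm-demo
cd /tmp/linkfarm-demo
touch regular.c
ln -s regular.c rel-link    # relative symlink: skipped
ln -s /etc abs-link         # absolute symlink: matched by -lname '/*'
find . '(' -type f -o -lname '/*' ')'
```

The parentheses group the two alternatives so the implicit -print applies to both; they are quoted to keep the shell from interpreting them.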


From xen-devel-bounces@lists.xenproject.org Tue May 25 11:14:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 11:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132166.246630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llV0a-0001sa-Hc; Tue, 25 May 2021 11:14:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132166.246630; Tue, 25 May 2021 11:14:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llV0a-0001sT-EQ; Tue, 25 May 2021 11:14:00 +0000
Received: by outflank-mailman (input) for mailman id 132166;
 Tue, 25 May 2021 11:13:59 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llV0Z-0001sN-8i
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 11:13:59 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 78aa0667-91da-408f-a020-b12d293453ee;
 Tue, 25 May 2021 11:13:58 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 606F4AEA6;
 Tue, 25 May 2021 11:13:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78aa0667-91da-408f-a020-b12d293453ee
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
From: Jan Beulich <jbeulich@suse.com>
Subject: Ping²: [PATCH] x86: fix build race when generating temporary
 object files (take 2)
To: Andrew Cooper <andrew.cooper3@citrix.com>,
 Roger Pau Monné <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, Ian Jackson <iwj@xenproject.org>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <c35ad629-0dea-688a-199d-895186aeffb2@suse.com>
Message-ID: <9c132221-74ba-8f18-3c89-3fbbdb3c58b5@suse.com>
Date: Tue, 25 May 2021 13:13:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <c35ad629-0dea-688a-199d-895186aeffb2@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.04.2021 11:54, Jan Beulich wrote:
> The original commit wasn't quite sufficient: Emptying DEPS is helpful
> only when nothing will get added to it subsequently. xen/Rules.mk will,
> after including the local Makefile, amend DEPS by dependencies for
> objects living in sub-directories though. For the purpose of suppressing
> dependencies of the makefiles on the .*.d2 files (and thus to avoid
> their re-generation) it is, however, not necessary at all to play with
> DEPS. Instead we can override DEPS_INCLUDE (which generally is a late-
> expansion variable).
> 
> Fixes: 761bb575ce97 ("x86: fix build race when generating temporary object files")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Ping (again)? I'll give it till Wednesday, and if I don't hear back,
I'll commit without any acks.

Ian, I'm also including you here because iirc the .*.d2 thing was an
invention of yours, so you may have an opinion.

Jan

> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -314,5 +314,5 @@ clean::
>  # Suppress loading of DEPS files for internal, temporary target files.  This
>  # then also suppresses re-generation of the respective .*.d2 files.
>  ifeq ($(filter-out .xen%.o,$(notdir $(MAKECMDGOALS))),)
> -DEPS:=
> +DEPS_INCLUDE:=
>  endif
> 
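The mechanism behind the fix can be sketched with an invented stand-alone Makefile (variable names mirror the discussion above, but the contents are hypothetical): because DEPS_INCLUDE is a recursively-expanded ("late-expansion") variable derived from DEPS, emptying DEPS_INCLUDE itself still takes effect even though DEPS gains entries after the local Makefile was read, which emptying DEPS alone could not achieve:

```shell
# Invented Makefile mimicking the DEPS/DEPS_INCLUDE relationship.
workdir=$(mktemp -d)
cat > "$workdir/Makefile" <<'EOF'
.RECIPEPREFIX = >
DEPS := .a.d
# late expansion: the right-hand side is evaluated only when used
DEPS_INCLUDE = $(addsuffix 2,$(DEPS))
# xen/Rules.mk amends DEPS after including the local Makefile
DEPS += sub/.b.d
ifeq ($(INTERNAL),y)
DEPS_INCLUDE :=
endif
all:
>@echo "would include: [$(DEPS_INCLUDE)]"
EOF
normal=$(cd "$workdir" && make -s)
internal=$(cd "$workdir" && make -s INTERNAL=y)
printf '%s\n%s\n' "$normal" "$internal"
```

The normal run reports both .d2 files (including the one added late), while the override leaves the list empty.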



From xen-devel-bounces@lists.xenproject.org Tue May 25 13:21:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 13:21:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132180.246642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llWzY-00052Q-Cg; Tue, 25 May 2021 13:21:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132180.246642; Tue, 25 May 2021 13:21:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llWzY-00052J-9d; Tue, 25 May 2021 13:21:04 +0000
Received: by outflank-mailman (input) for mailman id 132180;
 Tue, 25 May 2021 13:21:02 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llWzW-00052D-He
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 13:21:02 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 3efb7e8f-341f-4b0a-bce6-8cc4c7e328fb;
 Tue, 25 May 2021 13:21:01 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 2A62CAB71;
 Tue, 25 May 2021 13:21:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3efb7e8f-341f-4b0a-bce6-8cc4c7e328fb
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Roger Pau Monné <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/guest: fix build when HVM and !PV32
Message-ID: <d8230ebc-a3cb-6a87-33c3-ab27bfa17862@suse.com>
Date: Tue, 25 May 2021 15:20:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The commit referenced below still wasn't careful enough: with COMPAT, a
compat_handle_okay() definition will already be visible, which we first
need to get rid of before redefining it.

Fixes: bd1e7b47bac0 ("x86/shim: fix build when !PV32")
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/pv/shim.c
+++ b/xen/arch/x86/pv/shim.c
@@ -679,6 +679,7 @@ void pv_shim_inject_evtchn(unsigned int
 # include <compat/grant_table.h>
 #else
 # define compat_gnttab_setup_table gnttab_setup_table
+# undef compat_handle_okay
 # define compat_handle_okay guest_handle_okay
 #endif
 


From xen-devel-bounces@lists.xenproject.org Tue May 25 13:33:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 13:33:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132187.246652 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llXBE-0006Uc-HR; Tue, 25 May 2021 13:33:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132187.246652; Tue, 25 May 2021 13:33:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llXBE-0006UV-EP; Tue, 25 May 2021 13:33:08 +0000
Received: by outflank-mailman (input) for mailman id 132187;
 Tue, 25 May 2021 13:33:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llXBD-0006Tm-RB; Tue, 25 May 2021 13:33:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llXBD-0002TS-Mi; Tue, 25 May 2021 13:33:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llXBD-0001OI-DD; Tue, 25 May 2021 13:33:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llXBD-0007o2-C9; Tue, 25 May 2021 13:33:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162151-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162151: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3092006fc4e096a7eebb8042cb76d82b09ccece4
X-Osstest-Versions-That:
    xen=81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 13:33:07 +0000

flight 162151 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162151/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  3092006fc4e096a7eebb8042cb76d82b09ccece4
baseline version:
 xen                  81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f

Last test of basis   162149  2021-05-25 08:01:38 Z    0 days
Testing same since   162151  2021-05-25 11:00:26 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Connor Davis <connojdavis@gmail.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   81acb1d7bd..3092006fc4  3092006fc4e096a7eebb8042cb76d82b09ccece4 -> smoke


From xen-devel-bounces@lists.xenproject.org Tue May 25 14:12:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 14:12:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132205.246682 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llXnK-0002aB-Uq; Tue, 25 May 2021 14:12:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132205.246682; Tue, 25 May 2021 14:12:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llXnK-0002a4-QD; Tue, 25 May 2021 14:12:30 +0000
Received: by outflank-mailman (input) for mailman id 132205;
 Tue, 25 May 2021 14:12:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kR7Q=KU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1llXnJ-0002Zy-HW
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 14:12:29 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 56667df7-6de8-408d-a479-482e040df043;
 Tue, 25 May 2021 14:12:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 56667df7-6de8-408d-a479-482e040df043
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 46126971
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,328,1613451600"; 
   d="scan'208";a="46126971"
Date: Tue, 25 May 2021 16:12:10 +0200
From: Roger Pau Monné <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: [PATCH 1/3] firmware/shim: update linkfarm exclusions
Message-ID: <YK0FuouoB7XlaQst@Air-de-Roger>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <d6f37d26-a883-b194-07a9-1ab87d5961f7@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d6f37d26-a883-b194-07a9-1ab87d5961f7@suse.com>
X-ClientProxiedBy: MR2P264CA0014.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0

On Fri, Apr 30, 2021 at 04:43:59PM +0200, Jan Beulich wrote:
> Some intermediate files weren't considered at all at the time. Also
> after its introduction, various changes to the build environment have
> rendered the exclusion sets stale. For example, we now have some .*.cmd
> files in the build tree.  Combine all respective patterns into a single
> .* one, seeing that we don't have any actual source files matching this
> pattern in the tree. Add other patterns as well as individual files.
> Also introduce LINK_EXCLUDE_PATHS to deal with entire directories full
> of generated headers as well as a few specific files the names of which
> are too generic to list under LINK_EXCLUDES.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> 
> --- a/tools/firmware/xen-dir/Makefile
> +++ b/tools/firmware/xen-dir/Makefile
> @@ -15,9 +15,19 @@ DEP_DIRS=$(foreach i, $(LINK_DIRS), $(XE
>  DEP_FILES=$(foreach i, $(LINK_FILES), $(XEN_ROOT)/$(i))
>  
>  # Exclude some intermediate files and final build products
> -LINK_EXCLUDES := '*.[isoa]' '.*.d' '.*.d2' '.config'
> -LINK_EXCLUDES += '*.map' 'xen' 'xen.gz' 'xen.efi' 'xen-syms'
> -LINK_EXCLUDES += '.*.tmp'
> +LINK_EXCLUDES := '*.[isoa]' '*.bin' '*.chk' '*.lnk' '*.gz' '.*'
> +LINK_EXCLUDES += lexer.lex.? parser.tab.? conf
> +LINK_EXCLUDES += asm-offsets.h asm-macros.h compile.h '*-autogen.h'
> +LINK_EXCLUDES += mkelf32 mkreloc symbols config_data.S xen.lds efi.lds
> +LINK_EXCLUDES += '*.map' xen xen.gz xen.efi xen-syms check.efi
> +
> +# To exclude full subtrees or individual files of not sufficiently specific
> +# names, regular expressions are used:
> +LINK_EXCLUDE_PATHS := xen/include/compat/.*
> +LINK_EXCLUDE_PATHS += xen/include/config/.*
> +LINK_EXCLUDE_PATHS += xen/include/generated/.*
> +LINK_EXCLUDE_PATHS += xen/arch/x86/boot/reloc[.]S
> +LINK_EXCLUDE_PATHS += xen/arch/x86/boot/cmdline[.]S
>  
>  # This is all a giant mess and doesn't really work.
>  #
> @@ -32,9 +42,10 @@ LINK_EXCLUDES += '.*.tmp'
>  # support easy development of the shim, but has a side effect of clobbering
>  # the already-built shim.
>  #
> -# $(LINK_EXCLUDES) should be set such that a parallel build of shim and xen/
> -# doesn't cause a subsequent `make install` to decide to regenerate the
> -# linkfarm.  This means that all final build artefacts must be excluded.
> +# $(LINK_EXCLUDES) and $(LINK_EXCLUDE_DIRS) should be set such that a parallel
> +# build of shim and xen/ doesn't cause a subsequent `make install` to decide to
> +# to regenerate the linkfarm.  This means that all intermediate and final build
     ^ duplicated 'to'

Thanks, Roger.
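The two exclusion mechanisms in the patch above can be illustrated against an invented miniature tree (a sketch only; the real Makefile wires the patterns up differently): glob patterns in the LINK_EXCLUDES style match file names, while full-path regular expressions in the LINK_EXCLUDE_PATHS style cover whole generated-header subtrees:

```shell
# '*.[isoa]' and '.*' stand in for LINK_EXCLUDES globs; the generated/
# regex stands in for LINK_EXCLUDE_PATHS.  (-regex is a GNU find
# extension matching the whole path, not just the basename.)
root=$(mktemp -d)
mkdir -p "$root/xen/arch" "$root/xen/include/generated"
touch "$root/xen/arch/setup.c" "$root/xen/arch/setup.o" \
      "$root/xen/arch/.setup.o.cmd" "$root/xen/include/generated/autoconf.h"
kept=$(cd "$root" && find . -type f \
       ! -name '*.[isoa]' ! -name '.*' \
       ! -regex '\./xen/include/generated/.*' | sort)
printf '%s\n' "$kept"
```

Only xen/arch/setup.c survives: the object file, the dot-file, and the generated header are all filtered out.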


From xen-devel-bounces@lists.xenproject.org Tue May 25 14:16:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 14:16:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132212.246693 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llXqn-0003Jy-Dj; Tue, 25 May 2021 14:16:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132212.246693; Tue, 25 May 2021 14:16:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llXqn-0003Jr-A9; Tue, 25 May 2021 14:16:05 +0000
Received: by outflank-mailman (input) for mailman id 132212;
 Tue, 25 May 2021 14:16:03 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kR7Q=KU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1llXql-0003Jc-Ej
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 14:16:03 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f134f0ae-0563-403c-8202-0b2ee20bf971;
 Tue, 25 May 2021 14:16:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f134f0ae-0563-403c-8202-0b2ee20bf971
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: CTo68Y8WR0BBXJq54gF5ZuPw95gYJlYXYe7keAoYHCe6V1orD2XpVaSF0uvoG9klVEzahpGgRo
 RNoa+A0q0NLpwCZLQWOX761LqW/LxjQwnBsXIIuG5UESXmEFRz47aeFnYgsaEb3iPZX+mRaksq
 eP2/Y7Nidim+6C1PLQA1U854hBneuwzTm5S7UGR29sKSGvnGjOt52y19Zi0x4lMv5A0yLDg+2x
 vzfi8GoC5dpRWAqe7/URqCCiV8lr1YlrtykwGGZOJC/0l+CyJZ5ZCIMH9IfaXg5oq8/kWwPNtR
 /7s=
X-SBRS: 5.1
X-MesageID: 44953532
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-Data: A9a23:MDR9IKOpRhBsipfvrR3Ol8FynXyQoLVcMsEvi/4bfWQNrUokgjdWn
 TdKWjuAMvyJZmunfdklbY7g9hlVuMOBn9djHgto+SlhQUwRpJueD7x1DKtR0wB+jCHnZBg6h
 ynLQoCYdKjYdleF/VHydOCJQUBUjclkfJKlYAL/En03FVUMpBsJ00o5wrZk2NMw27BVPivW0
 T/Mi5yHULOa82Yc3lI8s8pvfzs24ZweEBtB1rAPTagjUG32zhH5P7pGTU2FFFPqQ5E8IwKPb
 72rIIdVXI/u10xF5tuNyt4Xe6CRK1LYFVDmZnF+A8BOjvXez8CbP2lS2Pc0MC9qZzu1c99Zy
 +dd67u3YA0VMrzAw8Y5UjZiFAN3IvgTkFPHCSDXXc27ykTHdz3nwul0DVFwNoodkgp1KTgQr
 7pCcmlLN03TwbjvqF64YrAEasALNs7kMZlZonh95TrYEewnUdbIRKCiCdpwgGth25sTRqy2i
 8wxQzEyTQj/Yj5zP1YRN5gnp+eMiXyjSmgNwL6SjfVuuDWCpOBr65DvOtfIft2BRe1Og12V4
 GnB+gzREhwccdCS1zeB2natnfPU2zP2XpoIE7+1/eIsh0ecrkQMDDUGWF39puO24ma/RNB3O
 0ES4jApr6U56AqsVNaVYvGjiCfa5FhGAYMWSrBqrlvUokbJ3+qHLms2XmBmb/UsiMMnSWcI8
 l6mpdTLDCM65dV5VkmhGqeoQSKaYHZPdD5YP3FVE2Pp8PG5/tho0U6nosJLVf7t14OlRVkc1
 hjX9HBWulkFsSIcO0xXF3jphCiw7rzAUwI4/AneWm/NAuhRP9X+PtXABbQ29599wGelorup5
 yJsdyu2trpm4XSxeMulGrtlIV1Rz6zZWAAweHY2d3Xby9hIx5JEVdoAiAyS2W8wbZdeEdMXS
 BS7VfxtCG97YyLxMP4fj3OZIMU216nwfekJpdiOMoImX3SFTyfarHAGTRPBhAjFzRlz+ZzTz
 L/GKK5A+15BUv85pNd3Ls9AuYIWKtcWlTKLGc+jl0z5uVdcDVbMIYo43JK1RrlRxIuPoRnP8
 sYZMM2Pyh5FV/b5bDWR+okWRW3m51BhbXwqg6S7rtK+Hzc=
IronPort-HdrOrdr: A9a23:EOlCDauHFnyFycMQmxPBoMG17skCToMji2hC6mlwRA09TyXGra
 6TdaUguiMc1gx8ZJhBo7C90KnpewK7yXdQ2/htAV7EZnibhILIFvAZ0WKG+Vzd8kLFh4tgPM
 tbAsxD4ZjLfCdHZKXBkXqF+rQbsaG6GcmT7I+0pRodLnAJGtRdBkVCe32m+yVNNXl77PECZe
 OhD6R81l2dkSN9VLXLOpBJZZmNmzWl/6iWLyIuNloC0k2jnDmo4Ln1H1yzxREFSQ5Cxr8k7C
 zsjxH5zr/LiYD59jbsk0voq7hGktrozdVOQOaWjNIOFznqggG0IKx8Rry5uiwvqu3H0idrrD
 D1mWZkAy1P0QKUQonsyiGdnDUIkQxeqkMK8GXow0cK+qfCNXQH46Mrv/MqTvPbg3BQ9u2Unp
 g7hl5wGvJsfFr9dR/Glq/1vidR5wGJSEoZ4JouZkNkIP0jgZ9q3MEiFRBuYds99ByT0vFuLA
 A4NrCj2B8RSyLDU0zk
X-IronPort-AV: E=Sophos;i="5.82,328,1613451600"; 
   d="scan'208";a="44953532"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=l9yf+J6O1sVEVyTai8LIy7O1zOFZIzG3C1dWdkzd00yiXcJlEusJpm4E8f9w+AO80rV16A97aXYD9OnWaoJyw5JTIyF++xebRf1cwJy/K3MyuJcHnNpnVVA0ekGXvWZx3LuLj9E5e0gTGDP27K8ZpdHXE6u26XmtXJRVKiC3izejRHK4XGQzpr8xucT8XFp8MlSGsImgH+PwbMoTGne/LBkD4p/lx6noTvPpbPzNX+o3oeCdaZUPSwWkwByNdpFYWhrh37mIwvDiNyJKveymavNW1rKfP8dS6w6014YKAL9UVyw7eukJqnooE0ZgLP396y667iEbToOjrUllCtgfhQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yPmoX+4s/u+OuGfu7XB3qwrOFAaaYZypgOrJaH/07vU=;
 b=SNrlEAZoeLSIqyQlz23cKs+ED1plh7R4OBrzd5PI8wbWPPFUNOJN/MarhVtqabJ+AxpsbgS3uUvf/8+xF4dx+1BWQrhYXdGGrKAf/MiNWKtuXFPed1CmxnPwBDYBCgkWyCR9oOnFHSWT7fl3uSFZhBkhlurAr+Nnlu66A0fi8YhTWi+gBb20d0o9h4+MtkBooT8iallp4FyZXMlrWLA9HsyTHZKvnyrik1JQ7i9a8wSCOUCh16KwpacLitKp2efn8ZrcX4QCayYJCeYIHrfKZN9jYydGI5lC6gNORUOnHm5TPSnyvX+USjIYA/i51oCeAbIT7WYi8qUGtPpaZn2Quw==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=yPmoX+4s/u+OuGfu7XB3qwrOFAaaYZypgOrJaH/07vU=;
 b=bZPLK5XAjNTxFNRKOyWgtgJhC+8GCP/lbTFc6rvA6kCx+Gq+Fh9dPXrvNHTmxNOssYJersqhzSDOpzwXrvtY2n/qgIZC/wHmOuSmo2j3cfDb9jDzeH80j2FaTZKwQf10R7SVtXiyAcbyDWUsXfg55js9Bjey1Ml5z5SVhCSo1UI=
Date: Tue, 25 May 2021 16:14:40 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>
Subject: Re: [PATCH 2/3] firmware/shim: drop XEN_CONFIG_EXPERT uses
Message-ID: <YK0GUBthlbzoNIK/@Air-de-Roger>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <56bb5e87-fe35-75a8-fe18-ecc959b21799@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <56bb5e87-fe35-75a8-fe18-ecc959b21799@suse.com>
MIME-Version: 1.0

On Fri, Apr 30, 2021 at 04:44:21PM +0200, Jan Beulich wrote:
> As of commit d155e4aef35c ("xen: Allow EXPERT mode to be selected from
> the menuconfig directly") EXPERT is a regular config option.

Might be worth mentioning that the default pvshim config already has
EXPERT selected.

> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <rogerpau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 25 14:39:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 14:39:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132221.246704 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYDB-0005hI-Fm; Tue, 25 May 2021 14:39:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132221.246704; Tue, 25 May 2021 14:39:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYDB-0005hB-Cb; Tue, 25 May 2021 14:39:13 +0000
Received: by outflank-mailman (input) for mailman id 132221;
 Tue, 25 May 2021 14:39:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kR7Q=KU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1llYDA-0005gm-NX
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 14:39:12 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d3b92a95-e061-416c-8f71-baa86da514bd;
 Tue, 25 May 2021 14:39:11 +0000 (UTC)
Date: Tue, 25 May 2021 16:39:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Ian Jackson
	<iwj@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>, "George
 Dunlap" <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>
Subject: Re: [PATCH 3/3] firmware/shim: UNSUPPORTED=n
Message-ID: <YK0MBTWXYJmihvUn@Air-de-Roger>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <dbfa9126-6809-64cf-5bd5-01b402616f11@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <dbfa9126-6809-64cf-5bd5-01b402616f11@suse.com>
X-ClientProxiedBy: PR3P192CA0025.EURP192.PROD.OUTLOOK.COM
 (2603:10a6:102:56::30) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0

On Fri, Apr 30, 2021 at 04:45:03PM +0200, Jan Beulich wrote:
> We shouldn't default to include any unsupported code in the shim. Mark
> the setting as off, replacing the ARGO specification. This points out
> anomalies with the scheduler configuration: Unsupported schedulers
> better don't default to Y in release builds (like is already the case
> for ARINC653). Without these adjustments, the shim would suddenly build
> with RTDS as its default scheduler.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

> ----
> I'm certainly open to consider alterations on the sched/Kconfig
> adjustments, but _something_ needs to be done there. In particular I'm
> puzzled to find the NULL scheduler marked unsupported. Clearly with
> the shim defaulting to it, it must be supported at least there.

Indeed, I think we should mark NULL as supported for the shim usage
(which is very specific anyway, because it manages a single domain).

> --- a/xen/arch/x86/configs/pvshim_defconfig
> +++ b/xen/arch/x86/configs/pvshim_defconfig
> @@ -15,7 +15,7 @@ CONFIG_SCHED_NULL=y
>  # CONFIG_KEXEC is not set
>  # CONFIG_XENOPROF is not set
>  # CONFIG_XSM is not set
> -# CONFIG_ARGO is not set
> +# CONFIG_UNSUPPORTED is not set
>  # CONFIG_SCHED_CREDIT is not set
>  # CONFIG_SCHED_CREDIT2 is not set
>  # CONFIG_SCHED_RTDS is not set
> --- a/xen/common/sched/Kconfig
> +++ b/xen/common/sched/Kconfig
> @@ -16,7 +16,7 @@ config SCHED_CREDIT2
>  
>  config SCHED_RTDS
>  	bool "RTDS scheduler support (UNSUPPORTED)" if UNSUPPORTED
> -	default y
> +	default DEBUG

I would also be fine with leaving the default as 'n' for unsupported
features.

>  	---help---
>  	  The RTDS scheduler is a soft and firm real-time scheduler for
>  	  multicore, targeted for embedded, automotive, graphics and gaming
> @@ -31,7 +31,7 @@ config SCHED_ARINC653
>  
>  config SCHED_NULL
>  	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
> -	default y
> +	default PV_SHIM || DEBUG

Do we need the pvshim_defconfig to set CONFIG_SCHED_NULL=y after this?
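
For readers following along, a minimal sketch of the Kconfig semantics
behind that question, reconstructed from the quoted hunks plus standard
Kconfig default-evaluation rules (option names taken from this thread;
not verified against the tree):

```kconfig
# Hypothetical fragment mirroring the patched xen/common/sched/Kconfig.
config SCHED_NULL
	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
	# The default expression is evaluated per configuration: any
	# config with PV_SHIM=y (or DEBUG=y) gets SCHED_NULL=y, unless
	# the prompt is visible (UNSUPPORTED=y) and the user disables it.
	default PV_SHIM || DEBUG
	---help---
	  Sketch only: with this default, an explicit CONFIG_SCHED_NULL=y
	  line in pvshim_defconfig becomes redundant but harmless, since
	  a defconfig entry merely pre-seeds the same value.
```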

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 25 14:46:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 14:46:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132228.246715 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYKU-00075b-AP; Tue, 25 May 2021 14:46:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132228.246715; Tue, 25 May 2021 14:46:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYKU-00075U-6h; Tue, 25 May 2021 14:46:46 +0000
Received: by outflank-mailman (input) for mailman id 132228;
 Tue, 25 May 2021 14:46:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kR7Q=KU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1llYKS-00075O-EK
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 14:46:44 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 50a1112b-7cc0-4514-a115-cd39386f2502;
 Tue, 25 May 2021 14:46:43 +0000 (UTC)
Date: Tue, 25 May 2021 16:46:34 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH] x86/guest: fix build when HVM and !PV32
Message-ID: <YK0NykdJeDc9Gm39@Air-de-Roger>
References: <d8230ebc-a3cb-6a87-33c3-ab27bfa17862@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <d8230ebc-a3cb-6a87-33c3-ab27bfa17862@suse.com>
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: e9734477-f99f-4757-46c8-08d91f8be6f0
X-MS-TrafficTypeDiagnostic: DM5PR03MB2636:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB2636344EFC1E91D1FBEE176B8F259@DM5PR03MB2636.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:136;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(346002)(39860400002)(366004)(396003)(376002)(136003)(26005)(16526019)(66946007)(478600001)(66476007)(86362001)(85182001)(66556008)(6486002)(4744005)(186003)(6916009)(2906002)(8936002)(316002)(38100700002)(8676002)(33716001)(5660300002)(54906003)(9686003)(956004)(6666004)(4326008)(6496006);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: e9734477-f99f-4757-46c8-08d91f8be6f0
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2021 14:46:38.5993
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: r/PDPJTPAdD8iNOMVEyU39YMbB4qQIChAHZ7DSWCYRs4mRSad2qZ5jFWONYk4Lh+igQmd16qWNwYwaMcoz98cg==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB2636
X-OriginatorOrg: citrix.com

On Tue, May 25, 2021 at 03:20:56PM +0200, Jan Beulich wrote:
> The commit referenced below still wasn't careful enough - with COMPAT we
> will have a compat_handle_okay() visible already, which we first need to
> get rid of.
> 
> Fixes: bd1e7b47bac0 ("x86/shim: fix build when !PV32")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 25 14:58:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 14:58:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132236.246725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYVP-00005R-Bl; Tue, 25 May 2021 14:58:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132236.246725; Tue, 25 May 2021 14:58:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYVP-00005J-8l; Tue, 25 May 2021 14:58:03 +0000
Received: by outflank-mailman (input) for mailman id 132236;
 Tue, 25 May 2021 14:58:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llYVN-000059-Ez; Tue, 25 May 2021 14:58:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llYVN-000418-7x; Tue, 25 May 2021 14:58:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llYVM-000669-UL; Tue, 25 May 2021 14:58:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llYVC-0003mM-Ve; Tue, 25 May 2021 14:58:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Ij/sX09I860h7trGgDKrQ3ChekB3fgMi2I5/5kQvpFY=; b=cVAofGfGiH5HxA4bkb+k6HknhY
	zHMPvTXaNaT9LgjtQAcYqQQKkFbRLqTBFkrjsP7PT5TNkRMT71A9KfA4ASKKE1zG7QV9ACqzocEOV
	wkeiFlrglbHFgg8X9bI5LqCQQVgop8kiVwOKrRNspfP/QqrnlcHGXjDertZ6+p12m3ss=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162146-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162146: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=0dab1d36f55c3ed649bb8e4c74b9269ef3a63049
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 14:57:50 +0000

flight 162146 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162146/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                0dab1d36f55c3ed649bb8e4c74b9269ef3a63049
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  278 days
Failing since        152659  2020-08-21 14:07:39 Z  277 days  511 attempts
Testing same since   162146  2021-05-25 01:55:32 Z    0 days    1 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159524 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 25 15:14:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 15:14:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132246.246739 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYlG-0002R8-W1; Tue, 25 May 2021 15:14:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132246.246739; Tue, 25 May 2021 15:14:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYlG-0002R1-Sy; Tue, 25 May 2021 15:14:26 +0000
Received: by outflank-mailman (input) for mailman id 132246;
 Tue, 25 May 2021 15:14:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llYlF-0002Qv-Ih
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 15:14:25 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 43a11048-d434-4ad1-98e9-2687378d547e;
 Tue, 25 May 2021 15:14:24 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id DF9B6AEB3;
 Tue, 25 May 2021 15:14:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 43a11048-d434-4ad1-98e9-2687378d547e
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621955664; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=N+xX92bmI+b/vVYYZvh4+HNVQN3U8vjaSOXXPhJLctE=;
	b=dwvhv6o6Bv/Len30Q2LGQvC1lCGrIiqOHYN4JDMAy2ql+CaanCpU65yp/GUhwwu8b4NJuR
	7DTdVC3PZPon0E0VYbJbjPOQHBzyiMgbpACpDUJ++Q2rXSbYIv/oYPhC/jQ4/kg3FinQiA
	llW3gQIwaR00zEwyriqifnC1mxzsE7U=
Subject: Re: [PATCH 2/3] firmware/shim: drop XEN_CONFIG_EXPERT uses
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <56bb5e87-fe35-75a8-fe18-ecc959b21799@suse.com>
 <YK0GUBthlbzoNIK/@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c8d03aa9-92b7-01a8-f043-f705ae25c0ea@suse.com>
Date: Tue, 25 May 2021 17:14:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YK0GUBthlbzoNIK/@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 25.05.2021 16:14, Roger Pau Monné wrote:
> On Fri, Apr 30, 2021 at 04:44:21PM +0200, Jan Beulich wrote:
>> As of commit d155e4aef35c ("xen: Allow EXPERT mode to be selected from
>> the menuconfig directly") EXPERT is a regular config option.
> 
> Might be worth mentioning that the default pvshim config already has
> EXPERT selected.

That's not really related, as what the patch is removing is stale
regardless of config, but I've added a remark.

>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <rogerpau@citrix.com>

Thanks.

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 25 15:21:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 15:21:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132253.246750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYsR-0003pR-Q1; Tue, 25 May 2021 15:21:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132253.246750; Tue, 25 May 2021 15:21:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llYsR-0003pK-N4; Tue, 25 May 2021 15:21:51 +0000
Received: by outflank-mailman (input) for mailman id 132253;
 Tue, 25 May 2021 15:21:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llYsQ-0003pD-J4
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 15:21:50 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1d1098a8-eeae-4015-a265-304030f3b4f9;
 Tue, 25 May 2021 15:21:49 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 53302AB71;
 Tue, 25 May 2021 15:21:48 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1d1098a8-eeae-4015-a265-304030f3b4f9
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621956108; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=OTo01Y6FZrlX65+6h/BXe1jMCyZ7OtMZf5FvKE92ORQ=;
	b=JW6ecItZpWcZWhvboF0cMGUrEBkhnp++Z3BQXrBihfkeXnfA24Hw7eSfXDab/T3D3ZKpoT
	+2yjydvAHPRDfMnbc8XNiBRmU9ESiL9QwnaYYF+RT3dPwb6yIvF0/vbcy6jMaUXpZ59QbM
	S/4pveBu3Ed5f0UOIPwNI03bX38blCA=
Subject: Re: [PATCH 3/3] firmware/shim: UNSUPPORTED=n
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Dario Faggioli <dfaggioli@suse.com>, George Dunlap <george.dunlap@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <dbfa9126-6809-64cf-5bd5-01b402616f11@suse.com>
 <YK0MBTWXYJmihvUn@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6b74b1de-5ec4-09dc-ba1e-821025402d36@suse.com>
Date: Tue, 25 May 2021 17:21:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YK0MBTWXYJmihvUn@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 25.05.2021 16:39, Roger Pau Monné wrote:
> On Fri, Apr 30, 2021 at 04:45:03PM +0200, Jan Beulich wrote:
>> We shouldn't include any unsupported code in the shim by default. Mark
>> the setting as off, replacing the ARGO specification. This points out
>> anomalies with the scheduler configuration: unsupported schedulers
>> had better not default to Y in release builds (as is already the case
>> for ARINC653). Without these adjustments, the shim would suddenly build
>> with RTDS as its default scheduler.
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>> ----
>> I'm certainly open to consider alterations on the sched/Kconfig
>> adjustments, but _something_ needs to be done there. In particular I'm
>> puzzled to find the NULL scheduler marked unsupported. Clearly with
>> the shim defaulting to it, it must be supported at least there.
> 
> Indeed, I think we should mark NULL as supported for the shim usage
> (which is very specific anyway, because it manages a single domain).

George, Dario, what is your position towards null's support status?

>> --- a/xen/common/sched/Kconfig
>> +++ b/xen/common/sched/Kconfig
>> @@ -16,7 +16,7 @@ config SCHED_CREDIT2
>>  
>>  config SCHED_RTDS
>>  	bool "RTDS scheduler support (UNSUPPORTED)" if UNSUPPORTED
>> -	default y
>> +	default DEBUG
> 
> I would also be fine with leaving the default as 'n' for unsupported
> features.

So would I be; I merely didn't want to make too big of a step by
going straight from y to n. George, Dario - you're the maintainers
of this code (and I'd need your ack anyway), do you have any
preference?

>> @@ -31,7 +31,7 @@ config SCHED_ARINC653
>>  
>>  config SCHED_NULL
>>  	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
>> -	default y
>> +	default PV_SHIM || DEBUG
> 
> Do we need the pvshim_defconfig to set CONFIG_SCHED_NULL=y after this?

I don't think so, the default will be y for it. Explicit settings
are needed only when we want a non-default value.
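
To illustrate how the default resolves (a sketch of the Kconfig
semantics, not the literal patch context):

    config SCHED_NULL
    	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
    	default PV_SHIM || DEBUG

With UNSUPPORTED=n the prompt is suppressed, so the default expression
alone decides: configurations with PV_SHIM=y (or DEBUG=y) end up with
SCHED_NULL=y, and all others with n.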

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 25 15:51:55 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 15:51:55 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132260.246761 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llZLJ-00071u-4a; Tue, 25 May 2021 15:51:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132260.246761; Tue, 25 May 2021 15:51:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llZLJ-00071n-1d; Tue, 25 May 2021 15:51:41 +0000
Received: by outflank-mailman (input) for mailman id 132260;
 Tue, 25 May 2021 15:51:39 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=kR7Q=KU=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1llZLG-00071h-Si
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 15:51:39 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 34dd1a88-7522-4a93-b6ae-46de993a35da;
 Tue, 25 May 2021 15:51:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 34dd1a88-7522-4a93-b6ae-46de993a35da
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1621957897;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=8FH28wRZv8jpY/Y4JSeZjHD48eipdGssRW8a5xyHOAU=;
  b=c113jeuxdBPcr6VeePiprJTATA79S++1uh1lnmWUYAAgVpqxJKzuhu/j
   jYrIZl+eDbHH5VDEONJcR+oamp47/9zvxeBUjpE7kxEMnl0p3d6ueByA0
   GNVDCa5gvwWRHPziwgIhGtBaA+zfRxhb8HYhqHot0JzBgaxr/r4eBsfWR
   k=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44593182
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,328,1613451600"; 
   d="scan'208";a="44593182"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=iy84syZp/MXGoMper3HSeKg8W+jfYxUhMNGSdeMQdXVUAsQ0OY/lLfiVQ68TaCd2Q2v0qT1U9xB5/ZBt7WF+xjkTwhL+7fqjjUvXtOS/J9+ITuwzFcADzigaq25iyI+JF8oMmU0/yTPqNBgVaSKa6r769S6PPjhQsLDMCvNWUFRFSUB6w/ixd8HpfDpKxN2smjM+a4//S1tw8Nv8irhjNytMmyW+YBdjgMbLIFWcSM1bUUso3WhlKJ9IRlWdDziQWjpN9I1NEKqeD5PVPkTSTEMd0l8y3rdJSZSSA6iYEYhoH7njZPA3iXihP3r0DcZAj4v1xS0pfxxd13UDyWjtYg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o2OcsxlyamJzPBy4XfIq8db55t4FiaySEwdt7FUUyOE=;
 b=Hq2PJ7bmhp774Yruho6RdqVPtIb3krfjjCEVvgV3H9x+QSrzSd4b1Ds8yahjjXl52qd4es6MDRJTSJepzAfq4BpfPaTbvS9o59ORQAYOg5rx37LbIKDAndMAOSKedKTVzVE19bkq65SypjotDPUbWR3umgKKT7nOXrPSNTQaHfXxtk0RnHKril/WFtRl564buwlhCYzwjEr1Gq41iNmaSeWPn5xiD2iDQ27CW/MxRCKJ+vOy9kcQO0/TRslQgRUtTwoz+GYNzOQeyRFQSCm0GatpsAzeJ0JY9t4NnvSnWqHtHkRdvlChdVT6c/D5/hEH1gur6xvJC7PLzYsD4ewzpA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=o2OcsxlyamJzPBy4XfIq8db55t4FiaySEwdt7FUUyOE=;
 b=jGrRGmQocNYmptCHAivWuFGgPPkoyIRYVMr253IcfWn8QNHLvEswZohfDt3mUDHV5lUGhOuRU3ekDJI7WmGCUZ2NzZ9LJHqI+x3vOV6aabzDAnwLnON7ZBunpzBhn0a5d20l5FPDhtbIR55IjeU1dVyn15q6AEqQX+bbGulvXug=
Date: Tue, 25 May 2021 17:51:18 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: Dario Faggioli <dfaggioli@suse.com>, George Dunlap
	<george.dunlap@citrix.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	Wei Liu <wl@xen.org>, Ian Jackson <iwj@xenproject.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH 3/3] firmware/shim: UNSUPPORTED=n
Message-ID: <YK0c9oPEYVUlNSU6@Air-de-Roger>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <dbfa9126-6809-64cf-5bd5-01b402616f11@suse.com>
 <YK0MBTWXYJmihvUn@Air-de-Roger>
 <6b74b1de-5ec4-09dc-ba1e-821025402d36@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <6b74b1de-5ec4-09dc-ba1e-821025402d36@suse.com>
X-ClientProxiedBy: MR2P264CA0139.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6e823103-7a3c-4dbb-579f-08d91f94f2fa
X-MS-TrafficTypeDiagnostic: DM6PR03MB4844:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4844DA8C1E6C138C2D0785838F259@DM6PR03MB4844.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(7916004)(4636009)(136003)(376002)(39860400002)(396003)(346002)(366004)(6486002)(478600001)(66946007)(66556008)(66476007)(33716001)(26005)(2906002)(956004)(6916009)(6496006)(9686003)(54906003)(6666004)(53546011)(8936002)(5660300002)(186003)(316002)(4326008)(85182001)(38100700002)(86362001)(4744005)(16526019)(8676002);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 6e823103-7a3c-4dbb-579f-08d91f94f2fa
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2021 15:51:24.2864
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qhs/nRSn97y4RY9iPLnIRq6b6sp5fsuai+/JNy9EB4U1vYvW67106oPVwOhzJ+q/6Yd/bPzuJYR64c3wanlVeQ==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4844
X-OriginatorOrg: citrix.com

On Tue, May 25, 2021 at 05:21:43PM +0200, Jan Beulich wrote:
> On 25.05.2021 16:39, Roger Pau Monné wrote:
> > On Fri, Apr 30, 2021 at 04:45:03PM +0200, Jan Beulich wrote:
> >> @@ -31,7 +31,7 @@ config SCHED_ARINC653
> >>  
> >>  config SCHED_NULL
> >>  	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
> >> -	default y
> >> +	default PV_SHIM || DEBUG
> > 
> > Do we need the pvshim_defconfig to set CONFIG_SCHED_NULL=y after this?
> 
> I don't think so, the default will be y for it. Explicit settings
> are needed only when we want a non-default value.

Right, I think I haven't expressed myself correctly. I wanted to point
out that I think CONFIG_SCHED_NULL=y is no longer needed in the
pvshim_defconfig.
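
For reference, the follow-up change would amount to roughly this (an
illustrative sketch; the exact defconfig path and surrounding lines are
assumptions):

    --- a/xen/arch/x86/configs/pvshim_defconfig
    +++ b/xen/arch/x86/configs/pvshim_defconfig
     CONFIG_PV_SHIM=y
    -CONFIG_SCHED_NULL=y

i.e. the explicit CONFIG_SCHED_NULL=y line can simply be dropped, since
"default PV_SHIM || DEBUG" already yields y whenever PV_SHIM is enabled.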

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Tue May 25 15:54:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 15:54:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132266.246773 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llZOE-0007fv-La; Tue, 25 May 2021 15:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132266.246773; Tue, 25 May 2021 15:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llZOE-0007fo-Hf; Tue, 25 May 2021 15:54:42 +0000
Received: by outflank-mailman (input) for mailman id 132266;
 Tue, 25 May 2021 15:54:41 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=n3nm=KU=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llZOD-0007fi-F5
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 15:54:41 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9318dbcd-b8c7-4a50-8af6-d90eaa465b3b;
 Tue, 25 May 2021 15:54:40 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 8E318AB71;
 Tue, 25 May 2021 15:54:39 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9318dbcd-b8c7-4a50-8af6-d90eaa465b3b
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1621958079; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=bOFxakO4RGNFsjlnSLdzXV3v8NGAhbbK08sIoC70Cbs=;
	b=HDRf4FntaJPDXTN3VAQv/YfBvL0X3deOu6Xiv1UkDK1oT2V1BwtIcPfdNyGhswzNWxpetE
	NuqYb/5rtIZPttIwTfODL3TWXUxbBtq/DnJI81xttMvZYvjzh+mkQTNwb9Ee3xobkne8t8
	JrKbJ61i4URv1oa6hkDC78OOYBuvQek=
Subject: Re: [PATCH 3/3] firmware/shim: UNSUPPORTED=n
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Dario Faggioli <dfaggioli@suse.com>,
 George Dunlap <george.dunlap@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Ian Jackson <iwj@xenproject.org>, Stefano Stabellini <sstabellini@kernel.org>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <dbfa9126-6809-64cf-5bd5-01b402616f11@suse.com>
 <YK0MBTWXYJmihvUn@Air-de-Roger>
 <6b74b1de-5ec4-09dc-ba1e-821025402d36@suse.com>
 <YK0c9oPEYVUlNSU6@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ad740a43-edfc-0662-d4f5-ea337bdabf19@suse.com>
Date: Tue, 25 May 2021 17:54:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YK0c9oPEYVUlNSU6@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 25.05.2021 17:51, Roger Pau Monné wrote:
> On Tue, May 25, 2021 at 05:21:43PM +0200, Jan Beulich wrote:
>> On 25.05.2021 16:39, Roger Pau Monné wrote:
>>> On Fri, Apr 30, 2021 at 04:45:03PM +0200, Jan Beulich wrote:
>>>> @@ -31,7 +31,7 @@ config SCHED_ARINC653
>>>>  
>>>>  config SCHED_NULL
>>>>  	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
>>>> -	default y
>>>> +	default PV_SHIM || DEBUG
>>>
>>> Do we need the pvshim_defconfig to set CONFIG_SCHED_NULL=y after this?
>>
>> I don't think so, the default will be y for it. Explicit settings
>> are needed only when we want a non-default value.
> 
> Right, I think I haven't expressed myself correctly. I wanted to point
> out that I think CONFIG_SCHED_NULL=y is no longer needed in the
> pvshim_defconfig.

Oh, I see - yes, that ought to work. I'll make such an adjustment for
v2 (unless I discover something standing in the way).

Jan


From xen-devel-bounces@lists.xenproject.org Tue May 25 16:06:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 16:06:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132274.246783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llZZ8-0001Ee-Nl; Tue, 25 May 2021 16:05:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132274.246783; Tue, 25 May 2021 16:05:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llZZ8-0001EX-Kg; Tue, 25 May 2021 16:05:58 +0000
Received: by outflank-mailman (input) for mailman id 132274;
 Tue, 25 May 2021 16:05:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=+lfR=KU=ffwll.ch=daniel@srs-us1.protection.inumbo.net>)
 id 1llZZ6-0001ER-P0
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 16:05:57 +0000
Received: from mail-wr1-x429.google.com (unknown [2a00:1450:4864:20::429])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ad65a4c6-14b2-4ab5-8fc6-5de64259b205;
 Tue, 25 May 2021 16:05:54 +0000 (UTC)
Received: by mail-wr1-x429.google.com with SMTP id z17so32818687wrq.7
 for <xen-devel@lists.xenproject.org>; Tue, 25 May 2021 09:05:54 -0700 (PDT)
Received: from phenom.ffwll.local ([2a02:168:57f4:0:efd0:b9e5:5ae6:c2fa])
 by smtp.gmail.com with ESMTPSA id h15sm11169638wmq.1.2021.05.25.09.05.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Tue, 25 May 2021 09:05:52 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ad65a4c6-14b2-4ab5-8fc6-5de64259b205
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=ffwll.ch; s=google;
        h=date:from:to:cc:subject:message-id:references:mime-version
         :content-disposition:content-transfer-encoding:in-reply-to;
        bh=f7DBrUo3wL4Qpwwbs5pU+Y2UR7h2QKSlSLVfoQUFNMI=;
        b=B3DENF8eoHhecWAFNQVLS2JEamnemU33gyvaAri/jUlxzebGZ65BPV0vBbEDgtL0i2
         ktvkA4hfm35xWOsU0Qpu7M/Khpm+8vhn3XskgiCo4rDbB3tRmN4WHjCEACwUXqYx4FeY
         Jtta5hNrcC+yqioS0IajWgJ26ctB5Cv70tzOE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:date:from:to:cc:subject:message-id:references
         :mime-version:content-disposition:content-transfer-encoding
         :in-reply-to;
        bh=f7DBrUo3wL4Qpwwbs5pU+Y2UR7h2QKSlSLVfoQUFNMI=;
        b=mR6TH5hXQ75ahc9/8AnjIFdk2lr/XNKDVuwClGvcE0rOjMilqAMm4sG92STPxRTbh5
         kN7Fg7H3tGkAmgawySR7b9vQGqOX+vc86P7TrOevdtVMyQDt6HPkgVBuJPvAZs97porf
         pMY6KhfotRQmR48Yne+ElsXx9Zx8lVgl8gLiE7ESFxfAylSqV2MLNzaAiE6pU7TMKU7S
         c1pD7/lgMWpOG4n86r5o3BVa8IekRnkWOk3dCbVqhJrd0vHu/eK7EtNV1LBO5DNsN6if
         /SYO7NK1mwsQc2048YMYiDY3ls/syjzWSamrYOOA1q1B40cR4Dd27dFfGoK5bEJeIW+2
         TSEQ==
X-Gm-Message-State: AOAM531MJ70EbVxiLSUNxz2zfIvv1nclUoTlh3M6J8HxETNIdR3098rp
	txPpnQoYY50gQ1Ap4ac2++7uxg==
X-Google-Smtp-Source: ABdhPJzmCQM+KdUAXONpPffkFRK0K5WklaT15aqWUiMtaXWooOfXUGpF52ocgwjNDKESzlRppx+OrQ==
X-Received: by 2002:a5d:638b:: with SMTP id p11mr28170569wru.90.1621958753771;
        Tue, 25 May 2021 09:05:53 -0700 (PDT)
Date: Tue, 25 May 2021 18:05:50 +0200
From: Daniel Vetter <daniel@ffwll.ch>
To: Noralf =?iso-8859-1?Q?Tr=F8nnes?= <noralf@tronnes.org>
Cc: Daniel Vetter <daniel.vetter@ffwll.ch>,
	DRI Development <dri-devel@lists.freedesktop.org>,
	Intel Graphics Development <intel-gfx@lists.freedesktop.org>,
	Daniel Vetter <daniel.vetter@intel.com>,
	Joel Stanley <joel@jms.id.au>, Andrew Jeffery <andrew@aj.id.au>,
	Linus Walleij <linus.walleij@linaro.org>,
	Emma Anholt <emma@anholt.net>, David Lechner <david@lechnology.com>,
	Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>,
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>,
	Maxime Ripard <mripard@kernel.org>,
	Thomas Zimmermann <tzimmermann@suse.de>,
	Sam Ravnborg <sam@ravnborg.org>,
	Alex Deucher <alexander.deucher@amd.com>,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	linux-aspeed@lists.ozlabs.org, linux-arm-kernel@lists.infradead.org,
	xen-devel@lists.xenproject.org
Subject: Re: [PATCH 11/11] drm/tiny: drm_gem_simple_display_pipe_prepare_fb
 is the default
Message-ID: <YK0gXjANguasJLu5@phenom.ffwll.local>
References: <20210521090959.1663703-1-daniel.vetter@ffwll.ch>
 <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
 <0b2b3fd7-7740-4c4e-78a5-142a6e9892ea@tronnes.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <0b2b3fd7-7740-4c4e-78a5-142a6e9892ea@tronnes.org>
X-Operating-System: Linux phenom 5.10.32scarlett+ 

On Fri, May 21, 2021 at 04:09:13PM +0200, Noralf Trønnes wrote:
> 
> 
> Den 21.05.2021 11.09, skrev Daniel Vetter:
> > Goes through all the drivers and deletes the default hook since it's
> > the default now.
> > 
> > Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> 
> Acked-by: Noralf Trønnes <noralf@tronnes.org>

Can you perhaps also look at the prep patch right before this one?
-Daniel
-- 
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch


From xen-devel-bounces@lists.xenproject.org Tue May 25 18:14:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 18:14:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132281.246795 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llbYq-0004CA-VG; Tue, 25 May 2021 18:13:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132281.246795; Tue, 25 May 2021 18:13:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llbYq-0004C3-S4; Tue, 25 May 2021 18:13:48 +0000
Received: by outflank-mailman (input) for mailman id 132281;
 Tue, 25 May 2021 18:13:47 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=t54O=KU=gmail.com=bobbyeshleman@srs-us1.protection.inumbo.net>)
 id 1llbYp-0004Bx-3s
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 18:13:47 +0000
Received: from mail-pf1-x432.google.com (unknown [2607:f8b0:4864:20::432])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 92100fca-13d5-4cd3-b4dc-76b749713528;
 Tue, 25 May 2021 18:13:46 +0000 (UTC)
Received: by mail-pf1-x432.google.com with SMTP id f22so15874728pfn.0
 for <xen-devel@lists.xenproject.org>; Tue, 25 May 2021 11:13:45 -0700 (PDT)
Received: from ?IPv6:2601:1c2:4f80:d230::1? ([2601:1c2:4f80:d230::1])
 by smtp.gmail.com with ESMTPSA id n8sm13774742pfu.111.2021.05.25.11.13.43
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Tue, 25 May 2021 11:13:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 92100fca-13d5-4cd3-b4dc-76b749713528
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:organization:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=WR6mTTTVXMLqVbQDN7NW3T5xTnC8y2i35vuOHkOhoyg=;
        b=CfJuS8idlpaud2LoV35hBSWgemSQsHzWc5gL/G9Ehbe2p3X6P9Q1boKvazOlD9X9HJ
         REleBvlUx7E8F7iBDnOXB7BR8SiBK0v2AcoVjSOhUvMex3xUlCBc3d4QpGfULAFr7VRD
         XAiVGMeOEvtDJRvN09FWzjKwtQU6G1kElN1FsoOcafrPWYL3IuAz2xM03BjGoVo7GtGM
         EZk0G/c4zaGd9/9/xiJCl9WaEYWxSF6u8bD9soDNY6c4qYDl1GdRvZBXVRR8Of3SedoI
         akCsHz1vEOxNGQQOrAZBenjddAf7kyBrNfVIUuMYYhTxzBD+Y0HfRDe+mCtLgIA6rqvw
         z2bQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:organization
         :message-id:date:user-agent:mime-version:in-reply-to
         :content-language:content-transfer-encoding;
        bh=WR6mTTTVXMLqVbQDN7NW3T5xTnC8y2i35vuOHkOhoyg=;
        b=UWcP2kujZqI1ObCluf8Qf+sdTM/DJc/3CvTUVBV8akpEGTdWOOP1lKJPcyt+6hbcjv
         8u/m4DTkRxi9H68qrfegLGuGGXe6xjBV54tHupj+CHkQNKpTPD7oKwf2PpImVaqOFMF3
         46mEVKgdszzDBXaOul54Mc1LTSfCqVMXm+QBIIyRhszAFohVEoe/OyW/xR4KfsbWsmus
         eknDde5qMuTP9NB6UEDmow0FR4b9OuurU82NeZm6F/XIJdz7ver/RAlflduocAGqOcuG
         4tmrCPA2er3f8oOmOGE81D7XpBOrlrC10CMMtLXaH90/L41r/5WIJeeYfLnyquZK/hdu
         P/sw==
X-Gm-Message-State: AOAM5310/TzyGl/vs3cVLYQa3fml6jk4AAkAm050Iq1ALo9KSKCvubxH
	k8nHi1zxXsmUO0a9kPfO7ErHeOWM3PcVtYHU
X-Google-Smtp-Source: ABdhPJwRuBXjX83mcUr1iyjBIx2c/PZA3tiBUUVxGwuMsJ9A51Bbr1HN6onDODU/bfW4tDtUnujbiw==
X-Received: by 2002:a65:564c:: with SMTP id m12mr20973639pgs.298.1621966424666;
        Tue, 25 May 2021 11:13:44 -0700 (PDT)
Subject: Re: [PATCH v4 3/4] xen: Add files needed for minimal riscv build
To: Jan Beulich <jbeulich@suse.com>, Connor Davis <connojdavis@gmail.com>
Cc: Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <88ca49cea8dc0c44604957d42722388bb3d9e3ff.1621712830.git.connojdavis@gmail.com>
 <7d1b6d2a-641c-4508-9b29-b74db4769170@suse.com>
From: Bob Eshleman <bobbyeshleman@gmail.com>
Organization: Vates SAS
Message-ID: <39a8a78c-3662-528f-fde4-d47427e64b15@gmail.com>
Date: Tue, 25 May 2021 11:13:42 -0700
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.6.1
MIME-Version: 1.0
In-Reply-To: <7d1b6d2a-641c-4508-9b29-b74db4769170@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 5/25/21 1:48 AM, Jan Beulich wrote:
> On 24.05.2021 16:34, Connor Davis wrote:
>> Add arch-specific makefiles and configs needed to build for
>> riscv. Also add a minimal head.S that is a simple infinite loop.
>> head.o can be built with
>>
>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen tiny64_defconfig
>> $ make XEN_TARGET_ARCH=riscv SUBSYSTEMS=xen -C xen TARGET=head.o
>>
>> No other TARGET is supported at the moment.
>>
>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>> ---
>>  config/riscv.mk                         |  4 +++
>>  xen/Makefile                            |  8 +++--
>>  xen/arch/riscv/Kconfig                  | 47 +++++++++++++++++++++++++
>>  xen/arch/riscv/Kconfig.debug            |  0
>>  xen/arch/riscv/Makefile                 |  0
>>  xen/arch/riscv/Rules.mk                 |  0
>>  xen/arch/riscv/arch.mk                  | 14 ++++++++
>>  xen/arch/riscv/asm-offsets.c            |  0
>>  xen/arch/riscv/configs/tiny64_defconfig | 13 +++++++
>>  xen/arch/riscv/head.S                   |  6 ++++
>>  xen/include/asm-riscv/config.h          | 47 +++++++++++++++++++++++++
>>  11 files changed, 137 insertions(+), 2 deletions(-)
>>  create mode 100644 config/riscv.mk
>>  create mode 100644 xen/arch/riscv/Kconfig
>>  create mode 100644 xen/arch/riscv/Kconfig.debug
>>  create mode 100644 xen/arch/riscv/Makefile
>>  create mode 100644 xen/arch/riscv/Rules.mk
>>  create mode 100644 xen/arch/riscv/arch.mk
>>  create mode 100644 xen/arch/riscv/asm-offsets.c
>>  create mode 100644 xen/arch/riscv/configs/tiny64_defconfig
>>  create mode 100644 xen/arch/riscv/head.S
>>  create mode 100644 xen/include/asm-riscv/config.h
> 
> I think this wants to be accompanied by an addition to ./MAINTAINERS
> right away, such that future RISC-V patches can be acked by the
> respective designated maintainers, rather than falling under "THE REST".
> Question is whether you / we have settled yet who's to become arch
> maintainer there.
> 
> Jan
> 

I'd like to volunteer myself for this, as I'm slated to continue
with the port indefinitely and would at least like to review
patches.  If Connor has the time, I think it makes sense for him
to be listed there too.

Until we have others (it's just the two of us right now), it'll
mostly be us reviewing each other's arch-specific work (in addition
to reviewers elsewhere in the Xen project, of course).

-Bobby

-- 
Bobby Eshleman
SE at Vates SAS


From xen-devel-bounces@lists.xenproject.org Tue May 25 19:27:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 19:27:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132290.246805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llchw-0002JQ-JL; Tue, 25 May 2021 19:27:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132290.246805; Tue, 25 May 2021 19:27:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llchw-0002JJ-GO; Tue, 25 May 2021 19:27:16 +0000
Received: by outflank-mailman (input) for mailman id 132290;
 Tue, 25 May 2021 19:27:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llchv-0002Ip-Lm; Tue, 25 May 2021 19:27:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llchv-0000es-Be; Tue, 25 May 2021 19:27:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llchu-0002tK-Vv; Tue, 25 May 2021 19:27:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llchu-0005oc-VP; Tue, 25 May 2021 19:27:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=i0Tg1IHn6HpUVlyXQOZ2lyaf9wVqoFpmVymBXw6uGgI=; b=MPaauQI1vxK2LPwdkUWRCxxGih
	yqwMZupVyOLNXWpCh41q11IVmzexoCmdAP4cufl7BP+Oluj/MfDK/5EFP/td4EeQS6Kg0M9CmatRf
	XmxcGsFB+N9vojTHAnItqoubUOpryifLpap+VogtSOGeGrCND20Q4hhdBqq0/byaSnjs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162148-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162148: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-credit2:guest-localmigrate/x10:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd12-amd64:guest-localmigrate/x10:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=a050a6d2b7e80ca52b2f4141eaf3420d201b72b3
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 19:27:14 +0000

flight 162148 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162148/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-xl-credit2  20 guest-localmigrate/x10   fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd12-amd64 19 guest-localmigrate/x10 fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                a050a6d2b7e80ca52b2f4141eaf3420d201b72b3
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  297 days
Failing since        152366  2020-08-01 20:49:34 Z  296 days  504 attempts
Testing same since   162148  2021-05-25 04:44:01 Z    0 days    1 attempts

------------------------------------------------------------
6086 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  fail    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1653364 lines long.)


From xen-devel-bounces@lists.xenproject.org Tue May 25 22:24:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 22:24:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132298.246820 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llfTQ-0001Ca-FM; Tue, 25 May 2021 22:24:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132298.246820; Tue, 25 May 2021 22:24:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llfTQ-0001CT-C8; Tue, 25 May 2021 22:24:28 +0000
Received: by outflank-mailman (input) for mailman id 132298;
 Tue, 25 May 2021 22:24:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=xBHr=KU=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1llfTO-0001CN-G7
 for xen-devel@lists.xenproject.org; Tue, 25 May 2021 22:24:26 +0000
Received: from aserp2120.oracle.com (unknown [141.146.126.78])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4643842d-de89-4d79-ae76-8f25486b4b07;
 Tue, 25 May 2021 22:24:25 +0000 (UTC)
Received: from pps.filterd (aserp2120.oracle.com [127.0.0.1])
 by aserp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14PMACHV127272;
 Tue, 25 May 2021 22:23:45 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2120.oracle.com with ESMTP id 38rne4306q-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 25 May 2021 22:23:44 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14PMA4nW154683;
 Tue, 25 May 2021 22:23:44 GMT
Received: from nam11-dm6-obe.outbound.protection.outlook.com
 (mail-dm6nam11lp2168.outbound.protection.outlook.com [104.47.57.168])
 by aserp3030.oracle.com with ESMTP id 38pr0c67b2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Tue, 25 May 2021 22:23:44 +0000
Received: from CH0PR10MB5020.namprd10.prod.outlook.com (2603:10b6:610:c0::22)
 by CH0PR10MB5147.namprd10.prod.outlook.com (2603:10b6:610:c2::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.25; Tue, 25 May
 2021 22:23:42 +0000
Received: from CH0PR10MB5020.namprd10.prod.outlook.com
 ([fe80::6cb6:faf9:b596:3b9a]) by CH0PR10MB5020.namprd10.prod.outlook.com
 ([fe80::6cb6:faf9:b596:3b9a%7]) with mapi id 15.20.4150.027; Tue, 25 May 2021
 22:23:42 +0000
Received: from [10.74.96.88] (138.3.201.24) by
 SN4PR0801CA0016.namprd08.prod.outlook.com (2603:10b6:803:29::26) with
 Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.20 via Frontend
 Transport; Tue, 25 May 2021 22:23:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4643842d-de89-4d79-ae76-8f25486b4b07
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=J8Tz9xArRZ/Kbo65Z1kBBvQ1yqDyan8L6k8VI6YD/XQ=;
 b=nT6AwjEud8byQRdb2QlfzVMPej7rHVTlcLEsyzm4MOpmjnld1X2O8P8tN1S7z7BfvQ2f
 FyvhXX/gS+X3V0sf6RSbkDn3928xYCNCh1Awpzf4Efw8QvxF9Dv3CAOrVCwUFdv1rmmr
 A27bL2VVh9Zo1vOFqndBHFn1swZ5v+516+44WQVKrY84jbqtT/DByjAqMVp1kpKSe5S7
 XQM/UDqc1lJYdX7FtLsee44dRqqSo6RTXAVLfMUyuxcEHCR807Pkip20q6cE9klzarHC
 CskkfB3s9+CbJQCug96h9A2rS1a4x+0AyZTY/c/lUaf44z0OaAUZwI3R7OY3pTjvq0GA GA== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=jxtDnDzY+GDYoJb6r912+uAniYtBKA2DRlYQ0Xcw3BRVnKo8SsqV22UvUwaoMHf+ozei/3dX/8pa1BxZQiAQmn0Shnte2rOiDXNpV4fjyX3z6hZ+YYXDmb8kd4jBta9kzxospbOM64+Ft1Ku/IB2YJATcqQdwdEuda2hpMm8EG1jUJX9DTHJmnSLn+YTXMHJoRbyASDiisyxc3JDNpPi1rR3kCdoo17OoJg0rJhx3T0vVrAJLyw35H9vI8CP1MesWgnYg49va3+y2NCunjwn7KDAEhL/6xxi4gr8wiF1hQ33sHFM1AUeFzb7/aiCRtuU0R9lGQ8gU2KMcjPXT8RTGg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=J8Tz9xArRZ/Kbo65Z1kBBvQ1yqDyan8L6k8VI6YD/XQ=;
 b=iS2opyuI0vQT+fgSrVAxbEWmUyJ3gUURVolc50Q0KwGnD7TjogYO/xZRrqUu0M9WuSPfsf9Ysyy/qF7m/NLtZ1Pf+aIXnGAloERiNzLbStw4zNJbg9jebuRFk6VIMRok8Yc+353sC20uwbt5/BeKuDsV1zRuvyhu726bSTe+o79Pd8l29KBU7KJLpYxWgbnUu+8htODpvSxWml7raq6XrRZE/z+vQ0XUAdy22iR48QogrYNUzvUarhpqTxQdJktyT0mbj6/yydeusqouImRdqcz+dGvp4R/x1u/r0cT0eRxAY22gMohhth7omE8vz64Yu+IO3BllO7+zaoOS1l5aag==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=J8Tz9xArRZ/Kbo65Z1kBBvQ1yqDyan8L6k8VI6YD/XQ=;
 b=S2JPFhKEQ7TkKjw7A5QNf8dVcAPtANEG2X75Cx9R/E99ndCaX1NqxFOPf1UqgKq2z6n7pjLaSdKLd5R9K9qdGn6BYHmcW/t8D0DL392NLwn+l2qoyVo1zC8/4J+IjclhXZotbqwkYHPiUTugqtQqcjEJLLYnyx7p46a0fpGYez8=
Authentication-Results: amazon.com; dkim=none (message not signed)
 header.d=none;amazon.com; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
Cc: "tglx@linutronix.de" <tglx@linutronix.de>,
        "mingo@redhat.com" <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>,
        "hpa@zytor.com" <hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
        "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
        "linux-mm@kvack.org" <linux-mm@kvack.org>,
        "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
        "roger.pau@citrix.com" <roger.pau@citrix.com>,
        "axboe@kernel.dk" <axboe@kernel.dk>,
        "davem@davemloft.net" <davem@davemloft.net>,
        "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
        "len.brown@intel.com" <len.brown@intel.com>,
        "pavel@ucw.cz" <pavel@ucw.cz>,
        "peterz@infradead.org" <peterz@infradead.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "vkuznets@redhat.com" <vkuznets@redhat.com>,
        "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
        "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
        Woodhouse@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com,
        David <dwmw@amazon.co.uk>,
        "benh@kernel.crashing.org" <benh@kernel.crashing.org>, aams@amazon.com
References: <5f1e4772-7bd9-e6c0-3fe6-eef98bb72bd8@oracle.com>
 <20200921215447.GA28503@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <e3e447e5-2f7a-82a2-31c8-10c2ffcbfb2c@oracle.com>
 <20200922231736.GA24215@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <20200925190423.GA31885@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <274ddc57-5c98-5003-c850-411eed1aea4c@oracle.com>
 <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
Date: Tue, 25 May 2021 18:23:35 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.201.24]
X-ClientProxiedBy: SN4PR0801CA0016.namprd08.prod.outlook.com
 (2603:10b6:803:29::26) To CH0PR10MB5020.namprd10.prod.outlook.com
 (2603:10b6:610:c0::22)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8a0e154d-1c51-485b-99b0-08d91fcbc079
X-MS-TrafficTypeDiagnostic: CH0PR10MB5147:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<CH0PR10MB5147C266742B68A582A728FC8A259@CH0PR10MB5147.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:9508;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	Rowv2lsYuAeV2l5mN3E2pAgnk8lpIaPIxUozimH9ekGa5GctrUJS+QDPZfkQFuxtmOFtOtVP0bP7ZsI8hoVMyDVDnL3++o5z3zyu2pB2HwnV99OQCMLV8j1IQDxdV8qERwWH2wQ6lf28dm5WsJ/Rih5MKtRw0GrH+VLB0Tm8g67yva9GNyk9x1onCxoTIoT4XcFQubtU4J4doCHuJhywbhQofyc3YoHft5MAXs/dzzG5E+Jr9rjxLGAe/pntk54AgiVgfumm130VIKe6t0TH4oNhjT1VxQsSabRcJ+9yYSFvx8Y/owoxHOGBEYwtzzK418c5sczZ8a+kSsNgtyaWGJinKps3fa1IMiFYKgT1MF47eW6F8Lx3Fkpq3ao8LmZpDHNGKgrcimpPUZMbujDzP81lAgqsFqUTKv5mBRFCTmQOdNPeNg4u/GENWjjF4Usx/d4CilPehhzWsgc/W6eIAG3UySPWTMMtRvvybRs6QocFAOhnhkkMtY9egA+nvl4qbXpqCSwvz0eVyXbb3Q7RmoMutG23KCCMF4GKkA2JQuJgxHiaJFeDNKI/WpEGVdWBQ6MjC3XXo61MsAyQaLzU8KchHABBU29s450KXaJsWNr+m/XJWTeXS04FqHU3doY7/+bWqPuluy/K1xKLocWRQH7bm8LvixmA8VNfS4foYL/3+qkrzdhNN70veeusmCh9
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:CH0PR10MB5020.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(396003)(39860400002)(136003)(366004)(376002)(346002)(36756003)(66476007)(8936002)(86362001)(66946007)(478600001)(38100700002)(6916009)(8676002)(6486002)(66556008)(7416002)(44832011)(54906003)(956004)(316002)(2906002)(186003)(2616005)(5660300002)(31696002)(16526019)(31686004)(16576012)(4326008)(53546011)(26005)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?dWxKdUtRdGNQcElqWEJOWXZQSzVWL1d6aVV4YjI4enZLYUxla2ZiVW9qdnhI?=
 =?utf-8?B?bzNWaWJSSG0xa2owdkZLVUV0Ykx5SHhCOWxUOTdZWDZqZU5kN1o2UDdtZWlx?=
 =?utf-8?B?RFRqWTdsaVZsWldyWlhTS0dIbzRJZDZEb29Zck5mSDV6aU5TeHBzYjJTMWgx?=
 =?utf-8?B?cVVRSEhtZEo1dHIvcXduMDcxdTlkR3N2RHdqdmJIeU1WOERCNmJxWHdhVWVB?=
 =?utf-8?B?cWlwRkZXdWpSWmVoMWV5QXFESUdFR20xZjN0WmkwSW1UajIwbGs4WU81ZE8z?=
 =?utf-8?B?WUhHTmR5aVVXaGVzcUhHN0RGRFI0bFdmYnJlTnpBYkVsMGF2VlVSa0ROQ3pH?=
 =?utf-8?B?ZjdnY0N4RkhHWFpyWFdsako3dDQzNkx6dVlveGVnOWlrdVNoc2h0OGdrTWh6?=
 =?utf-8?B?YUhIQllGS0ZJMUYvaDNhY2ZWcUV5T1R2RUwzbi9keE51cUVJMUg0RUJQWTJM?=
 =?utf-8?B?RUNVMXNoVUoxN1BnZXIza293cHI3TkxHd2NNQXNBQzIxZFZETFlNVW5sYjNm?=
 =?utf-8?B?QkdXaEN0QUs4d3hnczFSbGpML1cvb0RWdFI1K0RMNjhuMk9jQ0N6UDNaVHJB?=
 =?utf-8?B?bjdwOEtLSmQ2alRlR0liZ2VWVFNLTVMwK1Q4ek9scTExNGRGVXRFNFZOQkZO?=
 =?utf-8?B?Vmd4aFBtYUxrMnFzWWJLWmh2TTV6Z1ZoREtxaDBzbTZtTGMrZzRYcFBJR1NC?=
 =?utf-8?B?Mld3ZXhBb2lTSlRLUUdsTEUzNGg4c243TFFHTkk0Rzc0c1E1Q3RQaStBaEMr?=
 =?utf-8?B?QUlNOG5mVi9ZS2FRZ1c5UUhhZFlqa2l1Tk5JK0FGcndWblgxcDJiZE5vaE1y?=
 =?utf-8?B?Yll3a1l2VVQ4RHI3NHdYc3dFME9DZityWEw4aUV2ZlhBdldWREIvcDZhUjNU?=
 =?utf-8?B?Y0syM1VQVUF4ejBZNTBWQjZmZHduYi8wTjA4WE9ZbjBXZWRqZTdyT0RHZzFh?=
 =?utf-8?B?SmdaRzRJNnZpODNXQUxzVmoxbDlNZkZYd2FPSnd3MmZBaWRDK090bk1KejA0?=
 =?utf-8?B?RzlGQ0l3VUJnL0FCVnNHZkRlWHlrcldWdUJEWmZVWFAwV0NGa0tPNkRLUW5l?=
 =?utf-8?B?czJTNFkxcU15Z2JzNTlMSlNQVjIwcExzeml5K1hRUkJ6Q1EwRmw5YkNUazNO?=
 =?utf-8?B?UkkvRkN4WmR5Qi9hc3BYZGg1L2hHMElOYmRUUDZMajVjZUtOUU04R2V4cUk0?=
 =?utf-8?B?ZWVLSm1sd1FwLzNMa1Q4L2ZwOFg0Q3JhK0ZvdHBGODdsa01ONVJ4Qm41Q0hO?=
 =?utf-8?B?ZzRyK3YrTGgzTWRNTWthTXNHV1FnZWxrN0pLZUtuaS9zU2UyZlRKVkFSeDV4?=
 =?utf-8?B?cVI5eDA4ZktPdE8zcG15ckNxUmNxcUZLdFV3cnB4NHNoSTkvc2I4aTBhSEpQ?=
 =?utf-8?B?aXR5eEp3YmdxandnbnE4ekk1ZTJGS2xBR2MrQXNaeSsyUVhxSUM0RjBYR0NN?=
 =?utf-8?B?RnZFQ1ppeXRzZi9Nc2ZlZklUZlJCbEJSMzlhK0IzZk1XclVxOEVqR2lrQWdz?=
 =?utf-8?B?aE51eUtXZmFpdk45VUdjNjVFZUxhZm5CY1UxMHRrNUdWbi9XeHhFd1hId2Y5?=
 =?utf-8?B?a2V3S1NZaUxtbHVFYUs4alJCRW8yNXZUMUZXNGxjTDgvYWxjczZLa3JQcEJY?=
 =?utf-8?B?K2FzaUlic0p2NWJjQkN1Z0VpaDJ2eHBCQkpkeE9lcWlFQkRhZElpNUQ4WTVJ?=
 =?utf-8?B?YjJhcXc1cFFSN0dQNXFnMHVwZ2x2aFBPVCt0TitXVGlXeW5sN1M3NkhVWnJw?=
 =?utf-8?Q?iif8hnq/OiPu4QKdDPyr52oefA4heszcH9bLw2u?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a0e154d-1c51-485b-99b0-08d91fcbc079
X-MS-Exchange-CrossTenant-AuthSource: CH0PR10MB5020.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 25 May 2021 22:23:42.2361
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: qmp8vwhJ1y5mlyw+9XHyzQDwFwynlVwqLAZMtFdQH/j1OHaP2ejPPiIKyBcNvUkcjF7pB87IPnz2UdhVo2ouyQpfuFkoUWo9taFx/W8O8yI=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR10MB5147
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9995 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 bulkscore=0 spamscore=0
 mlxlogscore=999 malwarescore=0 adultscore=0 phishscore=0 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105250138
X-Proofpoint-ORIG-GUID: TiUw2lT5LCPLgo7aW3qi6VWKkVOaj727
X-Proofpoint-GUID: TiUw2lT5LCPLgo7aW3qi6VWKkVOaj727
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9995 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 bulkscore=0 phishscore=0
 mlxlogscore=999 spamscore=0 mlxscore=0 priorityscore=1501
 lowpriorityscore=0 impostorscore=0 adultscore=0 clxscore=1011
 malwarescore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105250138


On 5/21/21 1:26 AM, Anchal Agarwal wrote:
>>> What I meant there regarding VCPU info is that VCPU info is not unregistered during
>>> hibernation, so Xen still remembers the old physical addresses for the VCPU info
>>> structures created by the booting kernel. But the hibernation kernel may place VCPU
>>> info at different physical addresses, and if they mismatch, resume may break.
>>> The VCPU info registration hypercall is not invoked again during hibernation.
>>
>> I still don't think that's the cause but it's certainly worth having a look.
>>
> Hi Boris,
> Apologies for picking this up again after last year.
> I did a deep dive on the above statement, and that is indeed what is happening.
> I did some debugging around KASLR and hibernation using reboot mode.
> My debug prints show that whenever the vcpu_info* address assigned to a secondary vcpu
> in xen_vcpu_setup() at boot differs from the one in the image, resume gets stuck for
> that vcpu in bringup_cpu(). In other words, &per_cpu(xen_vcpu_info, cpu) differs
> between boot and the point where control jumps into the image.
>
> I failed to get any prints after it got stuck in bringup_cpu(), and I have no way to
> send a sysrq signal to the guest or capture a kdump.


xenctx and xen-hvmctx might be helpful.


> This address change is not observed in every hibernate-resume cycle, and I am not sure
> whether it is a bug or expected behavior.
> I am also contemplating the idea that it may be a bug in the Xen code triggered only
> when KASLR is enabled, but I do not have enough data to prove that.
> Is it a coincidence that this always happens for the 1st vcpu?
> Moreover, since the hypervisor is not aware that the guest is hibernated (in reboot
> mode it looks like a regular shutdown to dom0), is re-registering vcpu_info for the
> secondary vcpus even plausible?


I think I am missing how this is supposed to work (maybe we've talked about this but it's been many months since then). You hibernate the guest and it writes the state to swap. The guest is then shut down? And what's next? How do you wake it up?


-boris



> I could definitely use some advice on debugging this further.
>
> Some printk's from my debugging:
>
> At Boot:
>
> xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,
>
> Image Loads:
> It ends up in the condition:
>  xen_vcpu_setup()
>  {
>  ...
>  if (xen_hvm_domain()) {
>         if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
>                 return 0; 
>  }
>  ...
>  }
>
> xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0
>
> This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
> in the guest.
>
> Thanks,
> Anchal.
>> -boris
>>
>>


From xen-devel-bounces@lists.xenproject.org Tue May 25 23:47:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Tue, 25 May 2021 23:47:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132307.246831 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llglT-0008WL-JX; Tue, 25 May 2021 23:47:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132307.246831; Tue, 25 May 2021 23:47:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llglT-0008WE-GM; Tue, 25 May 2021 23:47:11 +0000
Received: by outflank-mailman (input) for mailman id 132307;
 Tue, 25 May 2021 23:47:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llglR-0008W4-VD; Tue, 25 May 2021 23:47:09 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llglR-00053R-L6; Tue, 25 May 2021 23:47:09 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llglR-0006eR-Ad; Tue, 25 May 2021 23:47:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llglR-0003gB-AA; Tue, 25 May 2021 23:47:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z3fretgk4eesxdp3/KHsV4dGnMoZ6m9q4V+do/I2AI4=; b=sUbXz6bIXn9ff/j723o1mmX30k
	fW6l6klGxgcWVvntemyMFfMYvYpOWYt/97E1VDnoDd8LrRqyZZtd0ngIjX2yMj6584h+1JhQ5YvSo
	4QOF9MtRcr61n33Nxli8IE4Q5ycVXqclFN+FO9wMm/p74MhUwsMtq2n2GnVWnzDMEk/A=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162150-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162150: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:regression
    xen-unstable:test-armhf-armhf-xl:xen-boot:fail:heisenbug
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Tue, 25 May 2021 23:47:09 +0000

flight 162150 xen-unstable real [real]
flight 162154 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162150/
http://logs.test-lab.xenproject.org/osstest/logs/162154/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail REGR. vs. 162145

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl           8 xen-boot            fail pass in 162154-retest

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl         15 migrate-support-check fail in 162154 never pass
 test-armhf-armhf-xl     16 saverestore-support-check fail in 162154 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162145
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162145
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162145
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162145
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162145
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162145
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162145
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162145
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162145
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162145
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162145
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162145  2021-05-25 01:54:50 Z    0 days
Testing same since   162150  2021-05-25 10:39:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          fail    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue May 25 09:08:43 2021 +0200

    x86/shadow: fix DO_UNSHADOW()
    
    When adding the HASH_CALLBACKS_CHECK() I failed to properly recognize
    the (somewhat unusually formatted) if() around the call to
    hash_domain_foreach(). Gcc 11 is absolutely right in pointing out the
    apparently misleading indentation. Besides adding the missing braces,
    also adjust the two oddly formatted if()-s in the macro.
    
    Fixes: 90629587e16e ("x86/shadow: replace stale literal numbers in hash_{vcpu,domain}_foreach()")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 26 01:23:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 01:23:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132317.246851 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lliGu-0006sZ-Pk; Wed, 26 May 2021 01:23:44 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132317.246851; Wed, 26 May 2021 01:23:44 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lliGu-0006sS-M8; Wed, 26 May 2021 01:23:44 +0000
Received: by outflank-mailman (input) for mailman id 132317;
 Wed, 26 May 2021 01:23:43 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lliGt-0006sI-4b; Wed, 26 May 2021 01:23:43 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lliGs-00045H-Kk; Wed, 26 May 2021 01:23:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lliGs-00029l-6Z; Wed, 26 May 2021 01:23:42 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lliGs-0006Vk-65; Wed, 26 May 2021 01:23:42 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=9yqwXCJ1yKIjDrfi8o17QrY1Q/yQhpiKg5YKjKZnPpI=; b=upEAnptTnhzJlhpd5seX+SiQ2k
	fppXSnqkrzwxKjBoLsVVR5n5kv9x2kqPUKRt000OBeHhkg8JEoitQO/qKSvot7QAjpUqun/OUep2H
	PCQH2qJTrwO3uqGVoHeSnZ65QkH+X4T4q8Qt13fVpjHhjeW39HoasMVC8aZlGH8lumPU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162152-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162152: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=0dab1d36f55c3ed649bb8e4c74b9269ef3a63049
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 01:23:42 +0000

flight 162152 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162152/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                0dab1d36f55c3ed649bb8e4c74b9269ef3a63049
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  278 days
Failing since        152659  2020-08-21 14:07:39 Z  277 days  512 attempts
Testing same since   162146  2021-05-25 01:55:32 Z    0 days    2 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 159524 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 26 04:32:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 04:32:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132332.246874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lllCp-00070v-SS; Wed, 26 May 2021 04:31:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132332.246874; Wed, 26 May 2021 04:31:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lllCp-00070c-MR; Wed, 26 May 2021 04:31:43 +0000
Received: by outflank-mailman (input) for mailman id 132332;
 Wed, 26 May 2021 04:31:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lllCo-00070S-8y; Wed, 26 May 2021 04:31:42 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lllCo-0007fW-2p; Wed, 26 May 2021 04:31:42 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lllCn-00037R-NU; Wed, 26 May 2021 04:31:41 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lllCn-00066u-N1; Wed, 26 May 2021 04:31:41 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j5gt7DWctR6J9NgOGpK97nXdisaDN+fs1i+er+vDlGA=; b=s4hjLOipb/RqezPhaPc68yFMoG
	dpkJDwjJixSBxaw9wUqXm7qdbXMg3alKw3ebXBFmVFcOTiDDs389SDdqKFwB+Y7ATs5ryACwabCJe
	TAgY5Tp/vpI01roIuBr3VvKzrwFbhxL/Ui0NQbIV96dHwVXtJG9M5wX4/lsRnqEfZW14=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162153-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162153: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ad9f25d338605d26acedcaf3ba5fab5ca26f1c10
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 04:31:41 +0000

flight 162153 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162153/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                ad9f25d338605d26acedcaf3ba5fab5ca26f1c10
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  298 days
Failing since        152366  2020-08-01 20:49:34 Z  297 days  505 attempts
Testing same since   162153  2021-05-25 19:42:13 Z    0 days    1 attempts

------------------------------------------------------------
6086 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1653453 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 26 04:41:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 04:41:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132340.246888 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lllM2-0008Rp-Qu; Wed, 26 May 2021 04:41:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132340.246888; Wed, 26 May 2021 04:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lllM2-0008Ri-MO; Wed, 26 May 2021 04:41:14 +0000
Received: by outflank-mailman (input) for mailman id 132340;
 Wed, 26 May 2021 04:41:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=D4oP=KV=amazon.com=prvs=773ce8620=anchalag@srs-us1.protection.inumbo.net>)
 id 1lllM1-0008Rc-Pu
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 04:41:13 +0000
Received: from smtp-fw-6002.amazon.com (unknown [52.95.49.90])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9b87a73e-c974-4bdd-b100-938674d53e82;
 Wed, 26 May 2021 04:41:12 +0000 (UTC)
Received: from iad12-co-svc-p1-lb1-vlan2.amazon.com (HELO
 email-inbound-relay-1d-e69428c4.us-east-1.amazon.com) ([10.43.8.2])
 by smtp-border-fw-6002.iad6.amazon.com with ESMTP; 26 May 2021 04:41:12 +0000
Received: from EX13MTAUWA001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan3.iad.amazon.com [10.40.159.166])
 by email-inbound-relay-1d-e69428c4.us-east-1.amazon.com (Postfix) with ESMTPS
 id 81C19C5C00; Wed, 26 May 2021 04:41:05 +0000 (UTC)
Received: from EX13D07UWA002.ant.amazon.com (10.43.160.77) by
 EX13MTAUWA001.ant.amazon.com (10.43.160.118) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Wed, 26 May 2021 04:40:39 +0000
Received: from EX13MTAUWA001.ant.amazon.com (10.43.160.58) by
 EX13D07UWA002.ant.amazon.com (10.43.160.77) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Wed, 26 May 2021 04:40:38 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.160.118) with Microsoft SMTP
 Server id 15.0.1497.18 via Frontend Transport; Wed, 26 May 2021 04:40:38
 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id B2A7240153; Wed, 26 May 2021 04:40:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9b87a73e-c974-4bdd-b100-938674d53e82
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1622004073; x=1653540073;
  h=date:from:to:cc:message-id:references:mime-version:
   in-reply-to:subject;
  bh=IdCtA2fDzgAuC9RDi6qBLpDtVfX4igt+SD3WtXyHdbQ=;
  b=Fd+HG8rJSZsqGzTkylvnBsvnrhU9Q2QUUGqnRp2yLjRFUIoSqNADDVw9
   eBKXvsNnVhWb+9rtEnoz8Scu95+YmyjjPbrOrZmMrrJv++cOprT53Voow
   nj13iHE0LULNR4palxFeYAr4770mGvrcXCnqN4ilB0NCr5aoB7Ol5xukB
   s=;
X-IronPort-AV: E=Sophos;i="5.82,330,1613433600"; 
   d="scan'208";a="114662918"
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend mode
Date: Wed, 26 May 2021 04:40:38 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: "tglx@linutronix.de" <tglx@linutronix.de>, "mingo@redhat.com"
	<mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>, "hpa@zytor.com"
	<hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>, "linux-mm@kvack.org"
	<linux-mm@kvack.org>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>, "roger.pau@citrix.com"
	<roger.pau@citrix.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
	"davem@davemloft.net" <davem@davemloft.net>, "rjw@rjwysocki.net"
	<rjw@rjwysocki.net>, "len.brown@intel.com" <len.brown@intel.com>,
	"pavel@ucw.cz" <pavel@ucw.cz>, "peterz@infradead.org" <peterz@infradead.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"vkuznets@redhat.com" <vkuznets@redhat.com>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>,
	<Woodhouse@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>, David
	<dwmw@amazon.co.uk>, "benh@kernel.crashing.org" <benh@kernel.crashing.org>,
	<Shah@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>, Amit
	<aams@amazon.de>,
	<Agarwal@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>, Anchal
	<anchalag@amazon.com>
Message-ID: <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <e3e447e5-2f7a-82a2-31c8-10c2ffcbfb2c@oracle.com>
 <20200922231736.GA24215@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <20200925190423.GA31885@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <274ddc57-5c98-5003-c850-411eed1aea4c@oracle.com>
 <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk

On Tue, May 25, 2021 at 06:23:35PM -0400, Boris Ostrovsky wrote:
> 
> On 5/21/21 1:26 AM, Anchal Agarwal wrote:
> >>> What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
> >>> so Xen still remembers the old physical addresses for the VCPU information, created by the
> >>> booting kernel. But since the hibernation kernel may have different physical
> >>> addresses for VCPU info and if mismatch happens, it may cause issues with resume.
> >>> During hibernation, the VCPU info register hypercall is not invoked again.
> >>
> >> I still don't think that's the cause but it's certainly worth having a look.
> >>
> > Hi Boris,
> > Apologies for picking this up after last year.
> > I did a deep dive on the above statement, and that is indeed what is happening.
> > I did some debugging around KASLR and hibernation using reboot mode.
> > I observed in my debug prints that whenever the vcpu_info* address assigned to a secondary vcpu
> > in xen_vcpu_setup at boot differs from the one in the image, resume gets stuck for that vcpu
> > in bringup_cpu(). That means we have different addresses for &per_cpu(xen_vcpu_info, cpu) at boot and after
> > control jumps into the image.
> >
> > I failed to get any prints after it got stuck in bringup_cpu() and
> > I do not have an option to send a sysrq signal to the guest or rather get a kdump.
> 
> 
> xenctx and xen-hvmctx might be helpful.
> 
> 
> > This change is not observed in every hibernate-resume cycle. I am not sure if this is a bug or
> > expected behavior.
> > I am also contemplating the idea that it may be a bug in Xen code that is triggered only when
> > KASLR is enabled, but I do not have substantial data to prove that.
> > Is it a coincidence that this always happens for the 1st vcpu?
> > Moreover, since the hypervisor is not aware that the guest is hibernated, and it looks like a regular shutdown to dom0 during reboot mode,
> > is re-registering vcpu_info for the secondary vcpus even plausible?
> 
> 
> I think I am missing how this is supposed to work (maybe we've talked about this but it's been many months since then). You hibernate the guest and it writes the state to swap. The guest is then shut down? And what's next? How do you wake it up?
> 
> 
> -boris
> 
To resume a guest, the guest boots up as a fresh guest and then software_resume()
is called; if it finds a stored hibernation image, it quiesces the devices and loads
the memory contents from the image. Control then transfers to the target kernel.
This further disables the non-boot cpus, and the syscore_suspend/resume callbacks are
invoked, which set up the shared_info, pvclock, grant tables, etc. Since the vcpu_info
pointer for each non-boot cpu is already registered, the hypercall is not issued again
when bringing up the non-boot cpus. This leads to the inconsistencies pointed
out earlier when KASLR is enabled.

Thanks,
Anchal
> 
> 
> >  I could definitely use some advice to debug this further.
> >
> >
> > Some printk's from my debugging:
> >
> > At Boot:
> >
> > xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,
> >
> > Image Loads:
> > It ends up in the condition:
> >  xen_vcpu_setup()
> >  {
> >  ...
> >  if (xen_hvm_domain()) {
> >         if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
> >                 return 0;
> >  }
> >  ...
> >  }
> >
> > xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0
> >
> > This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
> > in the guest.
> >
> > Thanks,
> > Anchal.
> >> -boris
> >>
> >>


From xen-devel-bounces@lists.xenproject.org Wed May 26 06:57:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 06:57:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132352.246899 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llnTy-0003f1-2a; Wed, 26 May 2021 06:57:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132352.246899; Wed, 26 May 2021 06:57:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llnTx-0003eu-V6; Wed, 26 May 2021 06:57:33 +0000
Received: by outflank-mailman (input) for mailman id 132352;
 Wed, 26 May 2021 06:57:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llnTw-0003ej-Ic; Wed, 26 May 2021 06:57:32 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llnTw-00024b-BY; Wed, 26 May 2021 06:57:32 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llnTw-0001vy-0o; Wed, 26 May 2021 06:57:32 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llnTw-0001Kc-0O; Wed, 26 May 2021 06:57:32 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=eXJFSY5VgnXJeuhbRGjMth1wn3imRwhi6dY81whE3E8=; b=AEU1/JqOdxgK1cd6RS6bNoLtCo
	Q5u6AP144ZnRTrqE71VRdHGsvL7gsVz+AqsPYwahxnCBCEv9JBUe4d3uhLQvbu07wwUsvnQj+dT6A
	Mevht8fAZw1W9I865KVx9XdoWEhsiUYVoSqb4Uta2VJCKvwJ2X7ekU3Jwr9Vdd0kOTkk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162159-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162159: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=b1164a8e68a5aecfc2ff420e4f31c35abbf78455
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 06:57:32 +0000

flight 162159 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162159/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              b1164a8e68a5aecfc2ff420e4f31c35abbf78455
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  320 days
Failing since        151818  2020-07-11 04:18:52 Z  319 days  312 attempts
Testing same since   162159  2021-05-26 04:20:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59516 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 26 07:37:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 07:37:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132360.246913 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llo6s-0007dQ-5t; Wed, 26 May 2021 07:37:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132360.246913; Wed, 26 May 2021 07:37:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llo6s-0007dJ-2s; Wed, 26 May 2021 07:37:46 +0000
Received: by outflank-mailman (input) for mailman id 132360;
 Wed, 26 May 2021 07:37:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llo6q-0007dD-8R
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 07:37:44 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8181a2fc-b05d-49af-9a8e-04814d7b975d;
 Wed, 26 May 2021 07:37:43 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id F14D9ADE2;
 Wed, 26 May 2021 07:37:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8181a2fc-b05d-49af-9a8e-04814d7b975d
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622014662; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DAgndc2MvHoRJlh2yskU6N8P+M2J8gBju5BwD3ep2bQ=;
	b=T5e4pvE0nuOx6tpWrbmtSHLUcwGI1ScYp4O4NIn5RsYpGz3nrp9oHO4IjzOKZ9JKKPjnWz
	HipBuPlDA57ewEEW2Es5glNi4VZCjkc0m19M9mHnCgsqz8m7UPA4Ttw+8o8ltrPHWAzSGu
	ET8HakWNR6PUEpt7z9CimIVu8o4sGB8=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2] firmware/shim: UNSUPPORTED=n
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
Message-ID: <72b98382-34ba-6e9d-c90e-c913dfe66258@suse.com>
Date: Wed, 26 May 2021 09:37:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

We shouldn't default to including any unsupported code in the shim. Mark
the setting as off, replacing the explicit ARGO setting. This points out
anomalies in the scheduler configuration: unsupported schedulers had
better not default to Y in release builds (as is already the case for
ARINC653). Without at least the SCHED_NULL adjustments, the shim would
suddenly build with RTDS as its default scheduler.

As a result, the SCHED_NULL setting can also be dropped from defconfig.

Clearly with the shim defaulting to it, SCHED_NULL must be supported at
least there.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Roger Pau Monné <roger.pau@citrix.com>
---
v2: Also drop SCHED_NULL setting from defconfig. Make SCHED_NULL the
    default when PV_SHIM_EXCLUSIVE.
---
I'm certainly open to considering alterations to the sched/Kconfig
adjustments, but _something_ needs to be done there. In particular I was
puzzled to find the NULL scheduler marked unsupported. Clearly, with the
shim defaulting to it, it must be supported at least there.

In a PV_SHIM (but perhaps !PV_SHIM_EXCLUSIVE) build whose build-time
default is not SCHED_NULL, I can't see how the null scheduler would
nevertheless get chosen as the default when actually running as shim.
Shouldn't that happen (in the absence of a command line override)?

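To make the Kconfig interaction concrete, here is a hedged C sketch of how the default-scheduler choice would resolve under the proposed "default SCHED_NULL_DEFAULT if PV_SHIM_EXCLUSIVE" line. This is illustrative only: the real resolution is done by the Kconfig machinery, and the function and parameter names below are made up for the sketch.

```c
#include <stdbool.h>
#include <string.h>

/* Simplified model of the "Default Scheduler?" choice: the first
 * "default" whose condition holds, and whose scheduler is actually
 * enabled in the build, wins; otherwise fall through to credit2. */
static const char *default_scheduler(bool pv_shim_exclusive,
                                     bool sched_null_enabled)
{
    /* Models: default SCHED_NULL_DEFAULT if PV_SHIM_EXCLUSIVE */
    if ( pv_shim_exclusive && sched_null_enabled )
        return "null";
    /* Models: default SCHED_CREDIT2_DEFAULT */
    return "credit2";
}
```

Under "default PV_SHIM || DEBUG" for SCHED_NULL, a PV_SHIM_EXCLUSIVE build gets the null scheduler enabled and selected by default; every other configuration keeps credit2.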
--- a/SUPPORT.md
+++ b/SUPPORT.md
@@ -351,9 +351,10 @@ Currently only single-vcpu domains are s
 A very simple, very static scheduling policy
 that always schedules the same vCPU(s) on the same pCPU(s).
 It is designed for maximum determinism and minimum overhead
-on embedded platforms.
+on embedded platforms and the x86 PV shim.
 
     Status: Experimental
+    Status, x86/shim: Supported
 
 ### NUMA scheduler affinity
 
--- a/xen/arch/x86/configs/pvshim_defconfig
+++ b/xen/arch/x86/configs/pvshim_defconfig
@@ -6,7 +6,6 @@ CONFIG_PV_SHIM=y
 CONFIG_PV_SHIM_EXCLUSIVE=y
 CONFIG_NR_CPUS=32
 CONFIG_EXPERT=y
-CONFIG_SCHED_NULL=y
 # Disable features not used by the PV shim
 # CONFIG_XEN_SHSTK is not set
 # CONFIG_GRANT_TABLE is not set
@@ -15,7 +14,7 @@ CONFIG_SCHED_NULL=y
 # CONFIG_KEXEC is not set
 # CONFIG_XENOPROF is not set
 # CONFIG_XSM is not set
-# CONFIG_ARGO is not set
+# CONFIG_UNSUPPORTED is not set
 # CONFIG_SCHED_CREDIT is not set
 # CONFIG_SCHED_CREDIT2 is not set
 # CONFIG_SCHED_RTDS is not set
--- a/xen/common/sched/Kconfig
+++ b/xen/common/sched/Kconfig
@@ -16,7 +16,7 @@ config SCHED_CREDIT2
 
 config SCHED_RTDS
 	bool "RTDS scheduler support (UNSUPPORTED)" if UNSUPPORTED
-	default y
+	default DEBUG
 	---help---
 	  The RTDS scheduler is a soft and firm real-time scheduler for
 	  multicore, targeted for embedded, automotive, graphics and gaming
@@ -31,7 +31,7 @@ config SCHED_ARINC653
 
 config SCHED_NULL
 	bool "Null scheduler support (UNSUPPORTED)" if UNSUPPORTED
-	default y
+	default PV_SHIM || DEBUG
 	---help---
 	  The null scheduler is a static, zero overhead scheduler,
 	  for when there always are less vCPUs than pCPUs, typically
@@ -39,6 +39,7 @@ config SCHED_NULL
 
 choice
 	prompt "Default Scheduler?"
+	default SCHED_NULL_DEFAULT if PV_SHIM_EXCLUSIVE
 	default SCHED_CREDIT2_DEFAULT
 
 	config SCHED_CREDIT_DEFAULT


From xen-devel-bounces@lists.xenproject.org Wed May 26 08:19:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 08:19:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132372.246935 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llolI-0003qk-Nt; Wed, 26 May 2021 08:19:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132372.246935; Wed, 26 May 2021 08:19:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llolI-0003qd-Kg; Wed, 26 May 2021 08:19:32 +0000
Received: by outflank-mailman (input) for mailman id 132372;
 Wed, 26 May 2021 08:19:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llolH-0003qX-1W
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 08:19:31 +0000
Received: from mx2.suse.de (unknown [195.135.220.15])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c39e15a-d7ee-4eee-b7b5-69cb8cd815b3;
 Wed, 26 May 2021 08:19:28 +0000 (UTC)
Received: from relay2.suse.de (unknown [195.135.221.27])
 by mx2.suse.de (Postfix) with ESMTP id 91AA1AEB9;
 Wed, 26 May 2021 08:19:27 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1c39e15a-d7ee-4eee-b7b5-69cb8cd815b3
X-Virus-Scanned: by amavisd-new at test-mx.suse.de
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622017167; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=eLD7q/khjMM3mnxf0HBfIXmUE/NijOxCH62Tx/Vfy3o=;
	b=NvqW5+4J/rNLD9uS2PR4JuHX/k4xtBU4J4rBie3a5WpT+8p+tTuOSBg1Qy2H+wDClWLZJt
	ADraAhgFw6pfVL0gkgWn5zIGFT4hXqwINZuLbhJ3+SquUqM3ftpFcV9jx5iHiqlJt26/l+
	FMWNzkZTY/iKQ2m6cJt17qFunpDyKxA=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v5] IOMMU: make DMA containment of quarantined devices
 optional
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Paul Durrant <paul@xen.org>,
 Kevin Tian <kevin.tian@intel.com>
Message-ID: <e1f30ef7-6631-609d-6948-e9b1f3fa3b37@suse.com>
Date: Wed, 26 May 2021 10:19:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Containing still-in-flight DMA was introduced to work around certain
devices / systems hanging hard upon hitting a "not-present" IOMMU fault.
Passing through (such) devices (on such systems) is inherently insecure
(as guests could easily arrange for IOMMU faults of any kind to occur).
Defaulting to a mode where admins may not even become aware of issues
with devices can be considered undesirable. Therefore convert this mode
of operation to an optional one, not one enabled by default.

This involves resurrecting code commit ea38867831da ("x86 / iommu: set
up a scratch page in the quarantine domain") did remove, in a slightly
extended and abstracted fashion. Here, instead of reintroducing a pretty
pointless use of "goto" in domain_context_unmap(), and instead of making
the function (at least temporarily) inconsistent, take the opportunity
and replace the other similarly pointless "goto" as well.

In order to key the re-instated bypasses off of there (not) being a root
page table this further requires moving the allocate_domain_resources()
invocation from reassign_device() to amd_iommu_setup_domain_device() (or
else reassign_device() would allocate a root page table anyway); this is
benign to the second caller of the latter function.

In VT-d's domain_context_unmap(), instead of adding yet another
"goto out" when all that's wanted is a "return", eliminate the "out"
label at the same time.

Take the opportunity and also limit the control to builds supporting
PCI.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v5: IOMMU_quarantine_fault -> IOMMU_quarantine_basic,
    IOMMU_quarantine_write_fault -> IOMMU_quarantine_scratch_page.
    Amend command line description to clarify tool stack based
    quarantining mode when "iommu=no-quarantine". Fully
    s/dummy/scratch/. Re-base.
v4: "full" -> "scratch_page". Duplicate Kconfig help text into command
    line doc. Re-base.
v3: IOMMU_quarantine_basic -> IOMMU_quarantine_fault,
    IOMMU_quarantine_full -> IOMMU_quarantine_write_fault. Kconfig
    option (choice) to select default. Limit to HAS_PCI.
v2: Don't use true/false. Introduce QUARANTINE_SKIP() (albeit I'm not
    really convinced this is an improvement). Add comment.

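As a rough illustration of the three resulting modes and their command-line spellings, here is a hedged standalone sketch (not the actual Xen parser, which walks the "iommu=" sub-option list generically and handles "no-" prefixes via parse_boolean()):

```c
#include <string.h>

enum iommu_quarantine_mode {
    IOMMU_quarantine_none         = 0, /* aka false */
    IOMMU_quarantine_basic        = 1, /* aka true */
    IOMMU_quarantine_scratch_page = 2,
};

/* Map a single "iommu=" sub-option token to a quarantine mode;
 * return -1 for tokens unrelated to quarantining. */
static int parse_quarantine_token(const char *tok)
{
    if ( !strcmp(tok, "no-quarantine") )
        return IOMMU_quarantine_none;
    if ( !strcmp(tok, "quarantine") )
        return IOMMU_quarantine_basic;
    if ( !strcmp(tok, "quarantine=scratch-page") )
        return IOMMU_quarantine_scratch_page;
    return -1;
}
```

The ordering of the enumerators matters for comparisons like "iommu_quarantine < IOMMU_quarantine_scratch_page" in the patch below, which skips the scratch-page setup for both "none" and "basic".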
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -1364,7 +1364,7 @@ detection of systems known to misbehave
 > Default: `new` unless directed-EOI is supported
 
 ### iommu
-    = List of [ <bool>, verbose, debug, force, required, quarantine,
+    = List of [ <bool>, verbose, debug, force, required, quarantine[=scratch-page],
                 sharept, intremap, intpost, crash-disable,
                 snoop, qinval, igfx, amd-iommu-perdev-intremap,
                 dom0-{passthrough,strict} ]
@@ -1402,11 +1402,32 @@ boolean (e.g. `iommu=no`) can override t
     will prevent Xen from booting if IOMMUs aren't discovered and enabled
     successfully.
 
-*   The `quarantine` boolean can be used to control Xen's behavior when
-    de-assigning devices from guests.  If enabled (the default), Xen always
+*   The `quarantine` option can be used to control Xen's behavior when
+    de-assigning devices from guests.
+
+    When a PCI device is assigned to an untrusted domain, it is possible
+    for that domain to program the device to DMA to an arbitrary address.
+    The IOMMU is used to protect the host from malicious DMA by making
+    sure that the device addresses can only target memory assigned to the
+    guest.  However, when the guest domain is torn down, assigning the
+    device back to the hardware domain would allow any in-flight DMA to
+    potentially target critical host data.  To avoid this, quarantining
+    should be enabled.  Quarantining can be done in two ways: In its basic
+    form, all in-flight DMA will simply be forced to encounter IOMMU
+    faults.  Since there are systems where doing so can cause host lockup,
+    an alternative form is available where writes to memory will be made
+    fault, but reads will be directed to a scratch page.  The implication
+    here is that such reads will go unnoticed, i.e. an admin may not
+    become aware of the underlying problem.
+
+    Therefore, if this option is set to true (the default), Xen always
     quarantines such devices; they must be explicitly assigned back to Dom0
-    before they can be used there again.  If disabled, Xen will only
-    quarantine devices the toolstack hass arranged for getting quarantined.
+    before they can be used there again.  If set to "scratch-page", still
+    active DMA reads will additionally be directed to a "scratch" page.  If
+    set to false, Xen will only quarantine devices the toolstack has arranged
+    for getting quarantined, and only in the "basic" form.
+
+    This option is only valid on builds supporting PCI.
 
 *   The `sharept` boolean controls whether the IOMMU pagetables are shared
     with the CPU-side HAP pagetables, or allocated separately.  Sharing
--- a/xen/drivers/passthrough/Kconfig
+++ b/xen/drivers/passthrough/Kconfig
@@ -39,3 +39,31 @@ endif
 
 config IOMMU_FORCE_PT_SHARE
 	bool
+
+choice
+	prompt "IOMMU device quarantining default behavior"
+	depends on HAS_PCI
+	default IOMMU_QUARANTINE_BASIC
+	---help---
+	  When a PCI device is assigned to an untrusted domain, it is possible
+	  for that domain to program the device to DMA to an arbitrary address.
+	  The IOMMU is used to protect the host from malicious DMA by making
+	  sure that the device addresses can only target memory assigned to the
+	  guest.  However, when the guest domain is torn down, assigning the
+	  device back to the hardware domain would allow any in-flight DMA to
+	  potentially target critical host data.  To avoid this, quarantining
+	  should be enabled.  Quarantining can be done in two ways: In its basic
+	  form, all in-flight DMA will simply be forced to encounter IOMMU
+	  faults.  Since there are systems where doing so can cause host lockup,
+	  an alternative form is available where writes to memory will be made
+	  fault, but reads will be directed to a scratch page.  The implication
+	  here is that such reads will go unnoticed, i.e. an admin may not
+	  become aware of the underlying problem.
+
+	config IOMMU_QUARANTINE_NONE
+		bool "none"
+	config IOMMU_QUARANTINE_BASIC
+		bool "basic"
+	config IOMMU_QUARANTINE_SCRATCH_PAGE
+		bool "scratch page"
+endchoice
--- a/xen/drivers/passthrough/amd/pci_amd_iommu.c
+++ b/xen/drivers/passthrough/amd/pci_amd_iommu.c
@@ -25,6 +25,9 @@
 #include "iommu.h"
 #include "../ats.h"
 
+/* dom_io is used as a sentinel for quarantined devices */
+#define QUARANTINE_SKIP(d) ((d) == dom_io && !dom_iommu(d)->arch.amd.root_table)
+
 static bool_t __read_mostly init_done;
 
 static const struct iommu_init_ops _iommu_init_ops;
@@ -81,19 +84,36 @@ int get_dma_requestor_id(uint16_t seg, u
     return req_id;
 }
 
-static void amd_iommu_setup_domain_device(
+static int __must_check allocate_domain_resources(struct domain *d)
+{
+    struct domain_iommu *hd = dom_iommu(d);
+    int rc;
+
+    spin_lock(&hd->arch.mapping_lock);
+    rc = amd_iommu_alloc_root(d);
+    spin_unlock(&hd->arch.mapping_lock);
+
+    return rc;
+}
+
+static int __must_check amd_iommu_setup_domain_device(
     struct domain *domain, struct amd_iommu *iommu,
     uint8_t devfn, struct pci_dev *pdev)
 {
     struct amd_iommu_dte *table, *dte;
     unsigned long flags;
-    int req_id, valid = 1;
+    int req_id, valid = 1, rc;
     u8 bus = pdev->bus;
-    const struct domain_iommu *hd = dom_iommu(domain);
+    struct domain_iommu *hd = dom_iommu(domain);
 
-    BUG_ON( !hd->arch.amd.root_table ||
-            !hd->arch.amd.paging_mode ||
-            !iommu->dev_table.buffer );
+    if ( QUARANTINE_SKIP(domain) )
+        return 0;
+
+    BUG_ON(!hd->arch.amd.paging_mode || !iommu->dev_table.buffer);
+
+    rc = allocate_domain_resources(domain);
+    if ( rc )
+        return rc;
 
     if ( iommu_hwdom_passthrough && is_hardware_domain(domain) )
         valid = 0;
@@ -151,6 +171,8 @@ static void amd_iommu_setup_domain_devic
 
         amd_iommu_flush_iotlb(devfn, pdev, INV_IOMMU_ALL_PAGES_ADDRESS, 0);
     }
+
+    return 0;
 }
 
 int __init acpi_ivrs_init(void)
@@ -222,18 +244,6 @@ int amd_iommu_alloc_root(struct domain *
     return 0;
 }
 
-static int __must_check allocate_domain_resources(struct domain *d)
-{
-    struct domain_iommu *hd = dom_iommu(d);
-    int rc;
-
-    spin_lock(&hd->arch.mapping_lock);
-    rc = amd_iommu_alloc_root(d);
-    spin_unlock(&hd->arch.mapping_lock);
-
-    return rc;
-}
-
 static int amd_iommu_domain_init(struct domain *d)
 {
     struct domain_iommu *hd = dom_iommu(d);
@@ -283,6 +293,9 @@ static void amd_iommu_disable_domain_dev
     int req_id;
     u8 bus = pdev->bus;
 
+    if ( QUARANTINE_SKIP(domain) )
+        return;
+
     BUG_ON ( iommu->dev_table.buffer == NULL );
     req_id = get_dma_requestor_id(iommu->seg, PCI_BDF2(bus, devfn));
     table = iommu->dev_table.buffer;
@@ -349,11 +362,10 @@ static int reassign_device(struct domain
         pdev->domain = target;
     }
 
-    rc = allocate_domain_resources(target);
+    rc = amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
     if ( rc )
         return rc;
 
-    amd_iommu_setup_domain_device(target, iommu, devfn, pdev);
     AMD_IOMMU_DEBUG("Re-assign %pp from dom%d to dom%d\n",
                     &pdev->sbdf, source->domain_id, target->domain_id);
 
@@ -460,8 +472,7 @@ static int amd_iommu_add_device(u8 devfn
         spin_unlock_irqrestore(&iommu->lock, flags);
     }
 
-    amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
-    return 0;
+    return amd_iommu_setup_domain_device(pdev->domain, iommu, devfn, pdev);
 }
 
 static int amd_iommu_remove_device(u8 devfn, struct pci_dev *pdev)
--- a/xen/drivers/passthrough/iommu.c
+++ b/xen/drivers/passthrough/iommu.c
@@ -31,9 +31,24 @@ bool_t __initdata iommu_enable = 1;
 bool_t __read_mostly iommu_enabled;
 bool_t __read_mostly force_iommu;
 bool_t __read_mostly iommu_verbose;
-bool __read_mostly iommu_quarantine = true;
 bool_t __read_mostly iommu_crash_disable;
 
+#define IOMMU_quarantine_none         0 /* aka false */
+#define IOMMU_quarantine_basic        1 /* aka true */
+#define IOMMU_quarantine_scratch_page 2
+#ifdef CONFIG_HAS_PCI
+uint8_t __read_mostly iommu_quarantine =
+# if defined(CONFIG_IOMMU_QUARANTINE_NONE)
+    IOMMU_quarantine_none;
+# elif defined(CONFIG_IOMMU_QUARANTINE_BASIC)
+    IOMMU_quarantine_basic;
+# elif defined(CONFIG_IOMMU_QUARANTINE_SCRATCH_PAGE)
+    IOMMU_quarantine_scratch_page;
+# endif
+#else
+# define iommu_quarantine IOMMU_quarantine_none
+#endif /* CONFIG_HAS_PCI */
+
 static bool __hwdom_initdata iommu_hwdom_none;
 bool __hwdom_initdata iommu_hwdom_strict;
 bool __read_mostly iommu_hwdom_passthrough;
@@ -64,8 +79,12 @@ static int __init parse_iommu_param(cons
         else if ( (val = parse_boolean("force", s, ss)) >= 0 ||
                   (val = parse_boolean("required", s, ss)) >= 0 )
             force_iommu = val;
+#ifdef CONFIG_HAS_PCI
         else if ( (val = parse_boolean("quarantine", s, ss)) >= 0 )
             iommu_quarantine = val;
+        else if ( ss == s + 23 && !strncmp(s, "quarantine=scratch-page", 23) )
+            iommu_quarantine = IOMMU_quarantine_scratch_page;
+#endif
 #ifdef CONFIG_X86
         else if ( (val = parse_boolean("igfx", s, ss)) >= 0 )
             iommu_igfx = val;
@@ -432,7 +451,7 @@ static int __init iommu_quarantine_init(
     dom_io->options |= XEN_DOMCTL_CDF_iommu;
 
     rc = iommu_domain_init(dom_io, 0);
-    if ( rc )
+    if ( rc || iommu_quarantine < IOMMU_quarantine_scratch_page )
         return rc;
 
     if ( !hd->platform_ops->quarantine_init )
--- a/xen/drivers/passthrough/vtd/iommu.c
+++ b/xen/drivers/passthrough/vtd/iommu.c
@@ -42,6 +42,9 @@
 #include "vtd.h"
 #include "../ats.h"
 
+/* dom_io is used as a sentinel for quarantined devices */
+#define QUARANTINE_SKIP(d) ((d) == dom_io && !dom_iommu(d)->arch.vtd.pgd_maddr)
+
 struct mapped_rmrr {
     struct list_head list;
     u64 base, end;
@@ -1328,6 +1331,9 @@ int domain_context_mapping_one(
     int rc, ret;
     bool_t flush_dev_iotlb;
 
+    if ( QUARANTINE_SKIP(domain) )
+        return 0;
+
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
     maddr = bus_to_context_maddr(iommu, bus);
@@ -1556,6 +1562,9 @@ int domain_context_unmap_one(
     int iommu_domid, rc, ret;
     bool_t flush_dev_iotlb;
 
+    if ( QUARANTINE_SKIP(domain) )
+        return 0;
+
     ASSERT(pcidevs_locked());
     spin_lock(&iommu->lock);
 
@@ -1617,7 +1626,7 @@ static int domain_context_unmap(struct d
 {
     struct acpi_drhd_unit *drhd;
     struct vtd_iommu *iommu;
-    int ret = 0;
+    int ret;
     u8 seg = pdev->seg, bus = pdev->bus, tmp_bus, tmp_devfn, secbus;
     int found = 0;
 
@@ -1632,14 +1641,12 @@ static int domain_context_unmap(struct d
         if ( iommu_debug )
             printk(VTDPREFIX "%pd:Hostbridge: skip %pp unmap\n",
                    domain, &PCI_SBDF3(seg, bus, devfn));
-        if ( !is_hardware_domain(domain) )
-            return -EPERM;
-        goto out;
+        return is_hardware_domain(domain) ? 0 : -EPERM;
 
     case DEV_TYPE_PCIe_BRIDGE:
     case DEV_TYPE_PCIe2PCI_BRIDGE:
     case DEV_TYPE_LEGACY_PCI_BRIDGE:
-        goto out;
+        return 0;
 
     case DEV_TYPE_PCIe_ENDPOINT:
         if ( iommu_debug )
@@ -1681,10 +1688,12 @@ static int domain_context_unmap(struct d
     default:
         dprintk(XENLOG_ERR VTDPREFIX, "%pd:unknown(%u): %pp\n",
                 domain, pdev->type, &PCI_SBDF3(seg, bus, devfn));
-        ret = -EINVAL;
-        goto out;
+        return -EINVAL;
     }
 
+    if ( QUARANTINE_SKIP(domain) )
+        return ret;
+
     /*
      * if no other devices under the same iommu owned by this domain,
     * clear iommu in iommu_bitmap and clear domain_id in domid_bitmap
@@ -1719,7 +1728,6 @@ static int domain_context_unmap(struct d
         iommu->domid_map[iommu_domid] = 0;
     }
 
-out:
     return ret;
 }
 
--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -52,7 +52,9 @@ static inline bool_t dfn_eq(dfn_t x, dfn
 }
 
 extern bool_t iommu_enable, iommu_enabled;
-extern bool force_iommu, iommu_quarantine, iommu_verbose;
+extern bool force_iommu, iommu_verbose;
+/* Boolean except for the specific purposes of drivers/passthrough/iommu.c. */
+extern uint8_t iommu_quarantine;
 
 #ifdef CONFIG_X86
 extern enum __packed iommu_intremap {


From xen-devel-bounces@lists.xenproject.org Wed May 26 08:27:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 08:27:43 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
Date: Wed, 26 May 2021 10:27:27 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, George Dunlap
	<george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>
Subject: Re: [PATCH v2] firmware/shim: UNSUPPORTED=n
Message-ID: <YK4Gbz0fat7DRY+o@Air-de-Roger>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <72b98382-34ba-6e9d-c90e-c913dfe66258@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <72b98382-34ba-6e9d-c90e-c913dfe66258@suse.com>
MIME-Version: 1.0

On Wed, May 26, 2021 at 09:37:37AM +0200, Jan Beulich wrote:
> We shouldn't default to include any unsupported code in the shim. Mark
> the setting as off, replacing the ARGO specification. This points out
> anomalies with the scheduler configuration: Unsupported schedulers
> better don't default to Y in release builds (like is already the case
> for ARINC653). Without at least the SCHED_NULL adjustments, the shim
> would suddenly build with RTDS as its default scheduler.
> 
> As a result, the SCHED_NULL setting can also be dropped from defconfig.
> 
> Clearly with the shim defaulting to it, SCHED_NULL must be supported at
> least there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
> ---
> v2: Also drop SCHED_NULL setting from defconfig. Make SCHED_NULL the
>     default when PV_SHIM_EXCLUSIVE.
> ---
> I'm certainly open to consider alterations on the sched/Kconfig
> adjustments, but _something_ needs to be done there. In particular I
> was puzzled to find the NULL scheduler marked unsupported. Clearly with
> the shim defaulting to it, it must be supported at least there.
> 
> In a PV_SHIM (but perhaps !PV_SHIM_EXCLUSIVE) build with the build-time
> default not being SCHED_NULL, when actually running as shim I can't seem
> to see how the null scheduler would get chosen as the default
> nevertheless. Shouldn't this happen (in the absence of a command line
> override)?

I wondered the same yesterday and came to the conclusion that it is
then the duty of the user to add pvshim_extra="sched=null" to the
config file.
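For reference, that workaround lives in the guest's xl configuration; a minimal sketch (option names as documented in xl.cfg, values illustrative):

```
# Run this PV guest under the shim and pass an extra option through
# to the shim's Xen command line, selecting the null scheduler there.
type = "pv"
pvshim = 1
pvshim_extra = "sched=null"
```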

I think it would indeed be useful to select the null scheduler at
runtime if booting in shim mode and null is present, but then we would
be ignoring the default scheduler selection made in Kconfig, which
could be confusing.

It's also confusing that the scheduler that gets set as the default
when shim exclusive is selected is not available to enable/disable:

[*] Credit scheduler support (NEW)
[*] Credit2 scheduler support (NEW)
    Default Scheduler? (Null Scheduler)  --->

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Wed May 26 08:31:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 08:31:53 +0000
Subject: Re: [PATCH v2] firmware/shim: UNSUPPORTED=n
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, Dario Faggioli <dfaggioli@suse.com>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
 <72b98382-34ba-6e9d-c90e-c913dfe66258@suse.com>
 <YK4Gbz0fat7DRY+o@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <50ca498e-a95f-f187-2fdb-70f5a1978bfe@suse.com>
Date: Wed, 26 May 2021 10:31:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YK4Gbz0fat7DRY+o@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 26.05.2021 10:27, Roger Pau Monné wrote:
> It's also confusing that the scheduler that gets set as the default
> when shim exclusive is selected is not available to enable/disable:
> 
> [*] Credit scheduler support (NEW)
> [*] Credit2 scheduler support (NEW)
>     Default Scheduler? (Null Scheduler)  --->
I don't view this as confusing - we want the null scheduler there in
this case, unconditionally. But yes, the prompt's condition could of
course also have "PV_SHIM || " added.
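Such an adjustment might look roughly like this in the scheduler Kconfig (a sketch only; the exact symbol names and surrounding entries in xen/common/sched/Kconfig may differ):

```
config SCHED_NULL
	bool "Null scheduler support (UNSUPPORTED)" if PV_SHIM || UNSUPPORTED
	default y if PV_SHIM_EXCLUSIVE
```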

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 26 09:15:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 09:15:03 +0000
From: Chen Huang <chenhuang5@huawei.com>
To: Michael Ellerman <mpe@ellerman.id.au>, Benjamin Herrenschmidt
	<benh@kernel.crashing.org>, Paul Mackerras <paulus@samba.org>, "Boris
 Ostrovsky" <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Mark Fasheh <mark@fasheh.com>,
	Joel Becker <jlbec@evilplan.org>, Joseph Qi <joseph.qi@linux.alibaba.com>,
	Nathan Lynch <nathanl@linux.ibm.com>, Andrew Donnellan <ajd@linux.ibm.com>,
	Alexey Kardashevskiy <aik@ozlabs.ru>, Andrew Morton
	<akpm@linux-foundation.org>, Stephen Rothwell <sfr@canb.auug.org.au>, "Jens
 Axboe" <axboe@kernel.dk>, Yang Yingliang <yangyingliang@huawei.com>,
	"Masahiro Yamada" <masahiroy@kernel.org>, Dan Carpenter
	<dan.carpenter@oracle.com>
CC: <linuxppc-dev@lists.ozlabs.org>, <linux-kernel@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>, <ocfs2-devel@oss.oracle.com>, Chen Huang
	<chenhuang5@huawei.com>
Subject: [PATCH -next 1/3] powerpc/rtas: Replaced simple_strtoull() with kstrtoull()
Date: Wed, 26 May 2021 09:20:18 +0000
Message-ID: <20210526092020.554341-1-chenhuang5@huawei.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The simple_strtoull() function is deprecated in some situations because
it does not check for range overflow; use kstrtoull() instead.

Signed-off-by: Chen Huang <chenhuang5@huawei.com>
---
 arch/powerpc/kernel/rtas-proc.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/arch/powerpc/kernel/rtas-proc.c b/arch/powerpc/kernel/rtas-proc.c
index 6857a5b0a1c3..117886782ebd 100644
--- a/arch/powerpc/kernel/rtas-proc.c
+++ b/arch/powerpc/kernel/rtas-proc.c
@@ -259,7 +259,6 @@ __initcall(proc_rtas_init);
 static int parse_number(const char __user *p, size_t count, u64 *val)
 {
 	char buf[40];
-	char *end;
 
 	if (count > 39)
 		return -EINVAL;
@@ -269,11 +268,7 @@ static int parse_number(const char __user *p, size_t count, u64 *val)
 
 	buf[count] = 0;
 
-	*val = simple_strtoull(buf, &end, 10);
-	if (*end && *end != '\n')
-		return -EINVAL;
-
-	return 0;
+	return kstrtoull(buf, 10, val);
 }
 
 /* ****************************************************************** */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed May 26 09:15:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 09:15:03 +0000
From: Chen Huang <chenhuang5@huawei.com>
To: Michael Ellerman <mpe@ellerman.id.au>, Benjamin Herrenschmidt
	<benh@kernel.crashing.org>, Paul Mackerras <paulus@samba.org>, "Boris
 Ostrovsky" <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Mark Fasheh <mark@fasheh.com>,
	Joel Becker <jlbec@evilplan.org>, Joseph Qi <joseph.qi@linux.alibaba.com>,
	Nathan Lynch <nathanl@linux.ibm.com>, Andrew Donnellan <ajd@linux.ibm.com>,
	Alexey Kardashevskiy <aik@ozlabs.ru>, Andrew Morton
	<akpm@linux-foundation.org>, Stephen Rothwell <sfr@canb.auug.org.au>, "Jens
 Axboe" <axboe@kernel.dk>, Yang Yingliang <yangyingliang@huawei.com>,
	"Masahiro Yamada" <masahiroy@kernel.org>, Dan Carpenter
	<dan.carpenter@oracle.com>
CC: <linuxppc-dev@lists.ozlabs.org>, <linux-kernel@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>, <ocfs2-devel@oss.oracle.com>, Chen Huang
	<chenhuang5@huawei.com>
Subject: [PATCH -next 3/3] ocfs2: Replaced simple_strtoull() with kstrtoull()
Date: Wed, 26 May 2021 09:20:20 +0000
Message-ID: <20210526092020.554341-3-chenhuang5@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210526092020.554341-1-chenhuang5@huawei.com>
References: <20210526092020.554341-1-chenhuang5@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain

The simple_strtoull() function is deprecated in some situations because
it does not check for range overflow; use kstrtoull() instead.

Signed-off-by: Chen Huang <chenhuang5@huawei.com>
---
 fs/ocfs2/cluster/heartbeat.c | 5 +++--
 1 file changed, 3 insertions(+), 2 deletions(-)

diff --git a/fs/ocfs2/cluster/heartbeat.c b/fs/ocfs2/cluster/heartbeat.c
index 1169c8dc9106..f89ffcbd585f 100644
--- a/fs/ocfs2/cluster/heartbeat.c
+++ b/fs/ocfs2/cluster/heartbeat.c
@@ -1596,12 +1596,13 @@ static ssize_t o2hb_region_start_block_store(struct config_item *item,
 	struct o2hb_region *reg = to_o2hb_region(item);
 	unsigned long long tmp;
 	char *p = (char *)page;
+	ssize_t ret;
 
 	if (reg->hr_bdev)
 		return -EINVAL;
 
-	tmp = simple_strtoull(p, &p, 0);
-	if (!p || (*p && (*p != '\n')))
+	ret = kstrtoull(p, 0, &tmp);
+	if (ret)
 		return -EINVAL;
 
 	reg->hr_start_block = tmp;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Wed May 26 09:15:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 09:15:03 +0000
From: Chen Huang <chenhuang5@huawei.com>
To: Michael Ellerman <mpe@ellerman.id.au>, Benjamin Herrenschmidt
	<benh@kernel.crashing.org>, Paul Mackerras <paulus@samba.org>, "Boris
 Ostrovsky" <boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>, Mark Fasheh <mark@fasheh.com>,
	Joel Becker <jlbec@evilplan.org>, Joseph Qi <joseph.qi@linux.alibaba.com>,
	Nathan Lynch <nathanl@linux.ibm.com>, Andrew Donnellan <ajd@linux.ibm.com>,
	Alexey Kardashevskiy <aik@ozlabs.ru>, Andrew Morton
	<akpm@linux-foundation.org>, Stephen Rothwell <sfr@canb.auug.org.au>, "Jens
 Axboe" <axboe@kernel.dk>, Yang Yingliang <yangyingliang@huawei.com>,
	"Masahiro Yamada" <masahiroy@kernel.org>, Dan Carpenter
	<dan.carpenter@oracle.com>
CC: <linuxppc-dev@lists.ozlabs.org>, <linux-kernel@vger.kernel.org>,
	<xen-devel@lists.xenproject.org>, <ocfs2-devel@oss.oracle.com>, Chen Huang
	<chenhuang5@huawei.com>
Subject: [PATCH -next 2/3] xen: balloon: Replaced simple_strtoull() with kstrtoull()
Date: Wed, 26 May 2021 09:20:19 +0000
Message-ID: <20210526092020.554341-2-chenhuang5@huawei.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210526092020.554341-1-chenhuang5@huawei.com>
References: <20210526092020.554341-1-chenhuang5@huawei.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain
X-Originating-IP: [10.175.112.125]
X-ClientProxiedBy: dggems703-chm.china.huawei.com (10.3.19.180) To
 dggema756-chm.china.huawei.com (10.1.198.198)
X-CFilter-Loop: Reflected

The simple_strtoull() function is deprecated, since it does not check
for range overflow. Use kstrtoull() instead.

Signed-off-by: Chen Huang <chenhuang5@huawei.com>
---
 drivers/xen/xen-balloon.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
index a8d24433c8e9..1fba838963d2 100644
--- a/drivers/xen/xen-balloon.c
+++ b/drivers/xen/xen-balloon.c
@@ -163,13 +163,16 @@ static ssize_t store_target_kb(struct device *dev,
 			       const char *buf,
 			       size_t count)
 {
-	char *endchar;
+	ssize_t ret;
 	unsigned long long target_bytes;
 
 	if (!capable(CAP_SYS_ADMIN))
 		return -EPERM;
 
-	target_bytes = simple_strtoull(buf, &endchar, 0) * 1024;
+	ret = kstrtoull(buf, 0, &target_bytes);
+	if (ret)
+		return ret;
+	target_bytes *= 1024;
 
 	balloon_set_new_target(target_bytes >> PAGE_SHIFT);
 
-- 
2.25.1
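
For readers outside the kernel tree, the behavioural difference the patch
addresses can be sketched in userspace: simple_strtoull() silently wraps on
overflow, whereas kstrtoull() rejects out-of-range input with -ERANGE. The
sketch below stands in strtoull()+errno for kstrtoull(); parse_target_kb()
is a hypothetical helper mirroring the patched store_target_kb() logic, not
code from the patch itself.

```c
/* Userspace sketch (NOT the kernel code) of the semantics the patch wants:
 * reject malformed input and range overflow instead of silently wrapping.
 * strtoull()+errno emulates kstrtoull() here. */
#include <errno.h>
#include <limits.h>
#include <stdlib.h>

/* Parse a count of KiB (base 0: decimal, 0x.., 0..) and return it in bytes.
 * Returns 0 on success, -EINVAL for malformed input, -ERANGE on overflow. */
static int parse_target_kb(const char *buf, unsigned long long *bytes)
{
    char *end;
    unsigned long long kb;

    errno = 0;
    kb = strtoull(buf, &end, 0);
    if (end == buf)
        return -EINVAL;                    /* no digits consumed */
    if (*end != '\0' && *end != '\n')
        return -EINVAL;                    /* trailing garbage */
    if (errno == ERANGE || kb > ULLONG_MAX / 1024)
        return -ERANGE;                    /* would overflow once scaled */

    *bytes = kb * 1024;                    /* KiB -> bytes, as the sysfs node expects */
    return 0;
}
```

With the old simple_strtoull() pattern an over-long input would wrap to an
arbitrary value and be passed straight to balloon_set_new_target(); with the
checked version the sysfs write fails with -ERANGE instead.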



From xen-devel-bounces@lists.xenproject.org Wed May 26 09:28:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 09:28:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132421.247002 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llppf-0004xb-Ki; Wed, 26 May 2021 09:28:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132421.247002; Wed, 26 May 2021 09:28:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llppf-0004xU-Hd; Wed, 26 May 2021 09:28:07 +0000
Received: by outflank-mailman (input) for mailman id 132421;
 Wed, 26 May 2021 09:28:05 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8HVS=KV=epam.com=prvs=578065f5e4=sergiy_kibrik@srs-us1.protection.inumbo.net>)
 id 1llppd-0004xO-OT
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 09:28:05 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d17c8acb-a437-4d8b-853d-77b250743f15;
 Wed, 26 May 2021 09:28:04 +0000 (UTC)
Received: from pps.filterd (m0174682.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14Q9P4xf004133; Wed, 26 May 2021 09:28:03 GMT
Received: from eur01-he1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2054.outbound.protection.outlook.com [104.47.0.54])
 by mx0b-0039f301.pphosted.com with ESMTP id 38sk9803a4-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 26 May 2021 09:28:02 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com (2603:10a6:20b:2d8::8)
 by AM9PR03MB7009.eurprd03.prod.outlook.com (2603:10a6:20b:286::21)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.20; Wed, 26 May
 2021 09:28:00 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::9151:70c8:8d48:5135]) by AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::9151:70c8:8d48:5135%2]) with mapi id 15.20.4173.020; Wed, 26 May 2021
 09:28:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d17c8acb-a437-4d8b-853d-77b250743f15
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: Julien Grall <julien@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Subject: RE: [XEN PATCH v1] libxl/arm: provide guests with random seed
Thread-Topic: [XEN PATCH v1] libxl/arm: provide guests with random seed
Thread-Index: AQHXUJ0pDoWarWZCUEmrgeaODNENvKr1enfA
Date: Wed, 26 May 2021 09:28:00 +0000
Message-ID: 
 <AM9PR03MB6836B02970F4F1CAEEAEAD78F0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
References: <20210524080057.1773-1-Sergiy_Kibrik@epam.com>
 <a3c51dea-80e5-df92-3757-72809ad96434@xen.org>
In-Reply-To: <a3c51dea-80e5-df92-3757-72809ad96434@xen.org>
Accept-Language: en-US
Content-Language: ru-RU
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [95.67.114.216]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com

Hi Julien,

> > diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c index
> > 34f8a29056..05c58a428c 100644
> > --- a/tools/libxl/libxl_arm.c
> > +++ b/tools/libxl/libxl_arm.c
> > @@ -342,6 +342,12 @@ static int make_chosen_node(libxl__gc *gc, void
> *fdt, bool ramdisk,
> >           if (res) return res;
> >       }
> >
> > +    uint8_t seed[128];
>
> I couldn't find any documentation for the property (although, I have found
> code in Linux). Can you explain where the 128 come from?

I didn't find documentation either, probably that part is un-documented yet.
This is kind of tradeoff between ChaCha20 key size of 32 (which is used in
guest Linux CRNG), and data size that host is expected to provide w/o being
blocked or delayed (which is 256 according to getrandom() man page). In case
of 128-bytes seed each byte of CRNG state will be mixed 4 times using bytes
from this seed.

> Also, local variables should be defined at the beginning of the function.
>

Will fix that.

Thank you for review,
  Sergiy
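
The size tradeoff discussed in the reply above can be sketched as follows.
The constant names and the fill_guest_seed() helper are illustrative
assumptions, not libxl code; getrandom() here requires Linux >= 3.17 with
glibc >= 2.25.

```c
/* Sketch of the 128-byte seed rationale from the thread above.
 * Constant names are invented for clarity; they are not from libxl. */
#include <sys/types.h>
#include <sys/random.h>             /* getrandom(); Linux >= 3.17, glibc >= 2.25 */

#define CHACHA20_KEY_SIZE      32   /* guest Linux CRNG state size, in bytes   */
#define GETRANDOM_NOBLOCK_MAX 256   /* getrandom(2): reads <= 256 bytes do not
                                       block once the pool is initialized      */
#define GUEST_SEED_SIZE       128   /* covers the CRNG state 128/32 = 4 times  */

/* Fill the seed buffer; returns bytes obtained, or -1 with errno set. */
static ssize_t fill_guest_seed(unsigned char seed[GUEST_SEED_SIZE])
{
    return getrandom(seed, GUEST_SEED_SIZE, 0);
}
```

So 128 sits between the 32-byte ChaCha20 key the guest CRNG actually keys
and the 256-byte limit up to which the host can supply data without being
blocked or delayed.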


From xen-devel-bounces@lists.xenproject.org Wed May 26 09:31:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 09:31:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132429.247013 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llpsy-0006Mw-8h; Wed, 26 May 2021 09:31:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132429.247013; Wed, 26 May 2021 09:31:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llpsy-0006Mp-55; Wed, 26 May 2021 09:31:32 +0000
Received: by outflank-mailman (input) for mailman id 132429;
 Wed, 26 May 2021 09:31:30 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8HVS=KV=epam.com=prvs=578065f5e4=sergiy_kibrik@srs-us1.protection.inumbo.net>)
 id 1llpsw-0006Mj-AB
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 09:31:30 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c1f7966-8f11-4347-9682-54a2695ca15a;
 Wed, 26 May 2021 09:31:29 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14Q9QNJr023256; Wed, 26 May 2021 09:31:28 GMT
Received: from eur04-he1-obe.outbound.protection.outlook.com
 (mail-he1eur04lp2051.outbound.protection.outlook.com [104.47.13.51])
 by mx0b-0039f301.pphosted.com with ESMTP id 38sk4br4pm-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 26 May 2021 09:31:28 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com (2603:10a6:20b:2d8::8)
 by AM0PR03MB4641.eurprd03.prod.outlook.com (2603:10a6:208:cb::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.20; Wed, 26 May
 2021 09:31:23 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::9151:70c8:8d48:5135]) by AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::9151:70c8:8d48:5135%2]) with mapi id 15.20.4173.020; Wed, 26 May 2021
 09:31:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c1f7966-8f11-4347-9682-54a2695ca15a
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: Julien Grall <julien@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Subject: RE: [XEN PATCH v1] libxl: use getrandom() syscall for random data
 extraction
Thread-Topic: [XEN PATCH v1] libxl: use getrandom() syscall for random data
 extraction
Thread-Index: AQHXUJviVlkCcTT7D0WghZSRk2JxUar1gejg
Date: Wed, 26 May 2021 09:31:23 +0000
Message-ID: 
 <AM9PR03MB68362CAC724A6BEE4A50B96AF0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
References: <20210524085858.1902-1-Sergiy_Kibrik@epam.com>
 <13bde708-1d87-a2c7-077f-f08db597ae15@xen.org>
In-Reply-To: <13bde708-1d87-a2c7-077f-f08db597ae15@xen.org>
Accept-Language: en-US
Content-Language: ru-RU
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [95.67.114.216]
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com

Hi Julien,

>
>  From the man:
>
> VERSIONS
>         getrandom() was introduced in version 3.17 of the Linux kernel.
>   Support was added to glibc in version 2.25.
>
> If I am not mistaken glibc 2.25 was released in 2017. Also, the call was only
> introduced in FreeBSD 12.
>
> So I think we want to check if getrandom() can be used. We may also want to
> consider to fallback to read /dev/urandom if the call return ENOSYS.
>

You mean its availability should be checked both at build and runtime?

--
regards,
  Sergiy
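
The suggestion quoted above (probe for getrandom() at build time, fall back
to reading /dev/urandom when the syscall returns ENOSYS at runtime) could
look roughly like the sketch below. get_random_bytes_compat() and the
HAVE_GETRANDOM guard are hypothetical, not libxl's API; a real build would
set such a macro from a configure probe rather than hard-coding it.

```c
/* Hedged sketch of a getrandom()-with-fallback helper; not libxl code.
 * Build-time availability is modelled by HAVE_GETRANDOM, which real code
 * would obtain from a configure check. */
#include <errno.h>
#include <fcntl.h>
#include <stddef.h>
#include <unistd.h>

#ifdef __linux__
#include <sys/random.h>
#define HAVE_GETRANDOM 1        /* assumption: glibc >= 2.25 */
#endif

/* Fill buf with len random bytes.  Returns 0 on success, -1 on failure. */
static int get_random_bytes_compat(unsigned char *buf, size_t len)
{
#ifdef HAVE_GETRANDOM
    ssize_t r = getrandom(buf, len, 0);
    if (r == (ssize_t)len)
        return 0;
    if (r >= 0 || errno != ENOSYS)
        return -1;              /* short read or a real error */
    /* ENOSYS: kernel < 3.17 -- fall through to /dev/urandom */
#endif
    {
        int fd = open("/dev/urandom", O_RDONLY);
        size_t got = 0;

        if (fd < 0)
            return -1;
        while (got < len) {
            ssize_t n = read(fd, buf + got, len - got);
            if (n <= 0) {
                close(fd);
                return -1;
            }
            got += (size_t)n;
        }
        close(fd);
        return 0;
    }
}
```

This covers both halves of the question: the preprocessor guard is the
build-time check, and the ENOSYS branch is the runtime one.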


From xen-devel-bounces@lists.xenproject.org Wed May 26 09:51:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 09:51:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132437.247023 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llqCM-0000CI-Ve; Wed, 26 May 2021 09:51:34 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132437.247023; Wed, 26 May 2021 09:51:34 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llqCM-0000CB-SZ; Wed, 26 May 2021 09:51:34 +0000
Received: by outflank-mailman (input) for mailman id 132437;
 Wed, 26 May 2021 09:51:33 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llqCL-0000C0-MN; Wed, 26 May 2021 09:51:33 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llqCL-0005rG-FT; Wed, 26 May 2021 09:51:33 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llqCL-0003A3-7q; Wed, 26 May 2021 09:51:33 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llqCL-0003ml-6y; Wed, 26 May 2021 09:51:33 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=EtPOBytZOiBUCP3aZrnKsyCpa5wke8/p6gntDlFHFVI=; b=MlC+aukp5GBlTBnd/1UXQGJWur
	Em8JCKbg5qbhFCSdGAO6Q5pvORD5ETB0xpbcbNR3r0T6Hvy31BaG2reb8//7irvI/KslLV1I5JRVG
	fBzNaV26JdABQqm/IOdl1mFLBIB4NfISpDB0SRIt5IqR2J3J5o5EVGqegai8oaRbXbTA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162156-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162156: regressions - FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:guest-start/debianhvm.repeat:fail:regression
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3092006fc4e096a7eebb8042cb76d82b09ccece4
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 09:51:33 +0000

flight 162156 xen-unstable real [real]
flight 162162 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162156/
http://logs.test-lab.xenproject.org/osstest/logs/162162/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 20 guest-start/debianhvm.repeat fail in 162162 REGR. vs. 162145

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162162-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162145
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162145
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162145
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162145
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162145
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162145
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162145
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162145
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162145
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162145
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162145
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  3092006fc4e096a7eebb8042cb76d82b09ccece4
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162145  2021-05-25 01:54:50 Z    1 days
Failing since        162150  2021-05-25 10:39:08 Z    0 days    2 attempts
Testing same since   162156  2021-05-26 00:06:46 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

------------------------------------------------------------
commit 3092006fc4e096a7eebb8042cb76d82b09ccece4
Author: Connor Davis <connojdavis@gmail.com>
Date:   Mon May 24 08:34:28 2021 -0600

    automation: Add container for riscv64 builds
    
    Add a container for cross-compiling xen to riscv64.
    This just includes the cross-compiler and necessary packages for
    building xen itself (packages for tools, stubdoms, etc., can be
    added later).
    
    Signed-off-by: Connor Davis <connojdavis@gmail.com>
    Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

commit 81acb1d7bdd5b1bf9c3422dcfeda616db2405d6f
Author: Jan Beulich <jbeulich@suse.com>
Date:   Tue May 25 09:08:43 2021 +0200

    x86/shadow: fix DO_UNSHADOW()
    
    When adding the HASH_CALLBACKS_CHECK() I failed to properly recognize
    the (somewhat unusually formatted) if() around the call to
    hash_domain_foreach()). Gcc 11 is absolutely right in pointing out the
    apparently misleading indentation. Besides adding the missing braces,
    also adjust the two oddly formatted if()-s in the macro.
    
    Fixes: 90629587e16e ("x86/shadow: replace stale literal numbers in hash_{vcpu,domain}_foreach()")
    Signed-off-by: Jan Beulich <jbeulich@suse.com>
    Reviewed-by: Luca Fancellu <luca.fancellu@arm.com>
    Reviewed-by: Tim Deegan <tim@xen.org>
(qemu changes not included)


From xen-devel-bounces@lists.xenproject.org Wed May 26 11:04:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 11:04:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132447.247041 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llrL9-0006zn-EY; Wed, 26 May 2021 11:04:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132447.247041; Wed, 26 May 2021 11:04:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llrL9-0006zg-BR; Wed, 26 May 2021 11:04:43 +0000
Received: by outflank-mailman (input) for mailman id 132447;
 Wed, 26 May 2021 11:04:42 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1llrL8-0006za-60
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 11:04:42 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1llrL8-0007A8-4K
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 11:04:42 +0000
Received: from iwj (helo=mariner.uk.xensource.com)
 by xenbits.xenproject.org with local-bsmtp (Exim 4.92)
 (envelope-from <iwj@xenproject.org>) id 1llrL8-0003nq-3I
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 11:04:42 +0000
Received: from iwj by mariner.uk.xensource.com with local (Exim 4.89)
 (envelope-from <iwj@xenproject.org>)
 id 1llrL4-0007Wl-Os; Wed, 26 May 2021 12:04:38 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=References:In-Reply-To:Subject:Cc:To:Date
	:Message-ID:Content-Transfer-Encoding:Content-Type:MIME-Version:From;
	bh=gq2I9hbRkeKKDL7TqQ0Q2XZpi3HNA0AmgyGee+LQdB4=; b=1GpJn2aRuLTF1kP+QY5W68MzWJ
	BbgpmItZYeyE6mO14bCc5+eq+7o61iRDj2XFqiZN9L01phgc7Yp2VP9wES536qH3o6IWGP9zWhUdV
	rprML24N5ZVn7YU9SOwl0YYKHVZyu6ijRpZ1/g9NOEhwSuYFs/Ls0Ny5C8az2LSuyrLU=;
From: Ian Jackson <iwj@xenproject.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Message-ID: <24750.11078.528225.772285@mariner.uk.xensource.com>
Date: Wed, 26 May 2021 12:04:38 +0100
To: George Dunlap <George.Dunlap@citrix.com>
Cc: "xen-devel\@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
    "community.manager\@xenproject.org" <community.manager@xenproject.org>
Subject: Re: IRC networks
In-Reply-To: <10B84448-21E3-41F2-AAD2-B9F6E9EB5DE8@citrix.com>
References: <24741.12566.639691.461134@mariner.uk.xensource.com>
	<10B84448-21E3-41F2-AAD2-B9F6E9EB5DE8@citrix.com>
X-Mailer: VM 8.2.0b under 24.5.1 (i686-pc-linux-gnu)

George Dunlap writes ("Re: IRC networks"):
> Thanks, Ian.  I tend to agree with the recommendation.  I think unless someone wants to argue that we go to libera (or stick with freenode), we should go with that option.  
> 
> Normally for a decision like this we’d wait 2 weeks for counter-arguments before making it official.  Does anyone want to argue that we should move up the timetable for some reason?
> 
> I’ve registered #xendevel on oftc; I’d encourage early adopters to give it a try.

Recent behaviour by the new de facto operators of Freenode has been
quite egregious.  In particular, it is now against the rules to set
your topic to direct your users to libera.chat, the replacement set up
by the resigning Freenode staff.  The server operators have been
taking over channels where projects are trying to migrate.  (I
think that probably applies to OFTC too.)

Also, the new de facto operators of Freenode are using user count to
justify their behaviour.

I am not prepared to be counted as a user of these terrible people,
and used by them to justify their awful actions.  I will be
disconnecting from Freenode as soon as I have sent a message to this
effect to all the Xen-related channels.

I appreciate that making this decision unilaterally for myself in
this way might be seen as jumping the gun on the community decision
process.

But I am not prepared to wait any longer.  Sorry.

Ian.


From xen-devel-bounces@lists.xenproject.org Wed May 26 11:05:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 11:05:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132452.247051 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llrLz-0007Xw-Ox; Wed, 26 May 2021 11:05:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132452.247051; Wed, 26 May 2021 11:05:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llrLz-0007Xp-Lv; Wed, 26 May 2021 11:05:35 +0000
Received: by outflank-mailman (input) for mailman id 132452;
 Wed, 26 May 2021 11:05:34 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llrLy-0007Xf-E1; Wed, 26 May 2021 11:05:34 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llrLy-0007Ar-Aa; Wed, 26 May 2021 11:05:34 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llrLy-0007KR-0W; Wed, 26 May 2021 11:05:34 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llrLx-0005Um-WC; Wed, 26 May 2021 11:05:34 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=FxzUYIH4pqWqq+EX5VlKtlI2xRsvqO4+PticrhQuu34=; b=Oyv9AwablJTs1lM4EAPV4Qzb+v
	qgU+r10Qn/8uwPzYQlDz4mRXNo8ml6XQ1NRawLGVMZBMV753bHOm9VWcve7yMJtsXpg6/zWtCyQZP
	Tu0MD096oFbslV2iuNdDxXjqvODdvVvAs27pfWyHXGI5lNRlqBWXnMDZznS6sBv59UN8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162161-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162161: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7793d19bac84cb84d919035faa9aa638f0a33370
X-Osstest-Versions-That:
    xen=3092006fc4e096a7eebb8042cb76d82b09ccece4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 11:05:33 +0000

flight 162161 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162161/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7793d19bac84cb84d919035faa9aa638f0a33370
baseline version:
 xen                  3092006fc4e096a7eebb8042cb76d82b09ccece4

Last test of basis   162151  2021-05-25 11:00:26 Z    1 days
Testing same since   162161  2021-05-26 08:01:40 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Roger Pau Monné <rogerpau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3092006fc4..7793d19bac  7793d19bac84cb84d919035faa9aa638f0a33370 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 26 11:28:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 11:28:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132462.247066 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llri7-0001Sw-LG; Wed, 26 May 2021 11:28:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132462.247066; Wed, 26 May 2021 11:28:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llri7-0001Sp-II; Wed, 26 May 2021 11:28:27 +0000
Received: by outflank-mailman (input) for mailman id 132462;
 Wed, 26 May 2021 11:28:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=PyTD=KV=citrix.com=george.dunlap@srs-us1.protection.inumbo.net>)
 id 1llri5-0001ST-LG
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 11:28:25 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1b59d9ad-291e-4e0c-af65-ed2f37cbcac9;
 Wed, 26 May 2021 11:28:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1b59d9ad-291e-4e0c-af65-ed2f37cbcac9
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622028503;
  h=from:to:cc:subject:date:message-id:references:
   in-reply-to:content-id:content-transfer-encoding:
   mime-version;
  bh=mhzlBimBWcs8Mhhv3EymKBxQKVkKqOw57uSN1oly9Dg=;
  b=WXhynHbCAHlB02Kb72SOML5+2od82DjwXV1TecimzV8MrquvFvoZyd/P
   1d9yK4tiBbaBJd4UkuU2J7xJP4sJciIq0BeQRFUMQ4Cvq59PaoSGRheIJ
   jCPTPLrKXRmhHd7dm/REuqczdzmOmHO0ZdFWOUPK8bxa+tteUD45FwgAB
   8=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: QmFZioIRceKdE8PxmSCA+K+OxhnFyvRHebkJuHB8UYZbNfaMAznaqE3nyNYScI7Ga45pznXp12
 En44MwY1DOAUQdKBpOgbb7qDr/omeQPloeog8wTJ/X94N8hgHJN83yVK27nV/vYzbLlxV/DY/C
 jTrMIe+flEQRMc5Yeo3pLNk/8X/3zS8DgbNtYk2amcN+5I9OL/Wf72N4hmux74gL2521YWP9P9
 UzTG8zNaTEkJVW4CyK5wKk/4SxHxQzE1ZovM4KllMWTH5pUNo53HZS/E5a9xvKM3RIdH8RaYF3
 RVI=
X-SBRS: 5.1
X-MesageID: 45032342
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:6/ELQqu/DN0Cm+vtiNfPKoHg7skCv4Mji2hC6mlwRA09TyXGra
 6TdaUguiMc1gx8ZJh5o6H9BEGBKUmskaKceeEqTPaftXrdyRSVxeZZnMvfKlzbamPDH4tmtJ
 uIHJIOcOEYYWIK7/oSpTPIburIo+P3s5xA592utEuFJDsCA8oLgmcJaTpzUHcGPjWubaBJSa
 Z0jfA3wAZIDE5nF/hTcUN1OdQryee78a7OUFojPVoK+QOOhTSn5PrRCB6DxCoTVDtJ3PML7X
 XFuxaR3NTgj9iLjjvnk0PD5ZVfn9XsjvFZAtaXt8QTIjLwzi61eYVaXaGYtjxdmpDr1L9qqq
 iJn/4TBbU015rjRBDtnfIr4Xi57N8a0Q6k9bZfuwq5nSW2fkNhNyMLv/MmTvKQ0TtQgDg76t
 MW44vRjeslMTrQ2Cv6/NTGTBdsiw69pmcji/caizhFXZIZc6I5l/1TwKvEeK1wbB4SxbpXW9
 WGNvusrsq+sGnqGUwxtVMfjOBEnk5DVituZ3Jy9fB9/wIm6EyR/nFojfD3xE1wga7VY6M0kN
 gsHJ4Y5o1zcg==
X-IronPort-AV: E=Sophos;i="5.82,331,1613451600"; 
   d="scan'208";a="45032342"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=J91sTNOBheHCssWFPwnGhVCZMF0NQDJ1lnJkM+kkAdhonulKANfkNVOoHMxD1brtxt6j2ikACgqz3h2Z2WhZDuGUhs+REx2ac3VPPa06hXzRuvG2Jg853LSO+g/O19GI5nHQAQ+6IVHTcoVuuL9aaWgtBM1x7t2AnvCnnOj1tImeX8vfZS+5rc8N+pW9Ut3M0MV9iv8z+YmKmh8zOCXOjNLoooEGmJSq+B81IAA5AznVjKeQcUqqHpCaYWbziNCDohyr9MEDue2u6bFgjjDlo7UoiBi8GelSwkyAMLoj8qiT1RkqvEXCQ0qlR9ndRL9ky0Wie3rZvsauASeT4DUl7A==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mhzlBimBWcs8Mhhv3EymKBxQKVkKqOw57uSN1oly9Dg=;
 b=AEhwRNCcn8Qaf5Eyl5e+9B3B1AIcJceiO+7Ho/830WTsVGFU7zhTxuMsqOJjdDLSp0BYzGVI/tHLOdRCOScCNwwbY78uInpwzxTv3olZHpKNPX9lao9yRv+JMboxcFfm9MgjcBehz+F90YhkFoLhFwNgXmYGD/+7WwKwhX3BeZWivm5e6GnZUGNPeTrS6xFh2RVLTEBh5gzG1aZUdLOllOpeVKWW7D6cbwblrCiTIdpc1N011ySpMlNQfjfiLMJ05wPTD6aOz8pEdyY0WdvJ3rlou3ZrGBzVAW5hI0WdCrm6hL3H7FBVt+EmY/8CQeoSVaxIqBetnTnlM/i/2eUE/g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=mhzlBimBWcs8Mhhv3EymKBxQKVkKqOw57uSN1oly9Dg=;
 b=bQ9PTsST/soz2CW/Icx8ov7KRuZ28aOefQNvNo2eMbVolFBV+lIcjjh19RcQnH1LCPCJJ3EiBaXYPVaG63syndrttEk0HgmoYwEenX4rbC1b+NeR7iFYaNOzFb6I0/XUTy3v2OmSShfbNYDFxyqTMmSur44tyKeb3983IgzqTn8=
From: George Dunlap <George.Dunlap@citrix.com>
To: Ian Jackson <iwj@xenproject.org>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"community.manager@xenproject.org" <community.manager@xenproject.org>
Subject: Re: IRC networks
Thread-Topic: IRC networks
Thread-Index: AQHXTMUre4pjfhF+RUKgWz01JlgFDKrq+tiAgAqpdACAAAaZgA==
Date: Wed, 26 May 2021 11:28:17 +0000
Message-ID: <728C3CFC-89BE-4084-A632-2E9502B28985@citrix.com>
References: <24741.12566.639691.461134@mariner.uk.xensource.com>
 <10B84448-21E3-41F2-AAD2-B9F6E9EB5DE8@citrix.com>
 <24750.11078.528225.772285@mariner.uk.xensource.com>
In-Reply-To: <24750.11078.528225.772285@mariner.uk.xensource.com>
Accept-Language: en-US
Content-Language: en-US
X-MS-Has-Attach:
X-MS-TNEF-Correlator:
x-mailer: Apple Mail (2.3654.60.0.2.21)
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: dc81b949-d5ac-4689-363c-08d920395ba4
x-ms-traffictypediagnostic: PH0PR03MB6365:
x-microsoft-antispam-prvs: <PH0PR03MB6365933F7C85DC353BFF030199249@PH0PR03MB6365.namprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:9508;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 8KQ40Tx+R0YqfsbeROY79A/6SOYQ6urUnsmZ4F1zrF9F5PuBlMxoPaZbtGi2UOddkfdc35BrVgPlWODnLibFleF0+DP+90a8h9msvACJEOt9v309BQccN5fTuBemGjoezr8yXHnTbUfazjmLKEXKTb5FFtdxvE5fzJAc5th/LG4hznuGdAP4AVsY+4SDWRAMu8pB44VhcFcd58jo+w05hmUm0HwOrdTVDq7izXF/21RWvwzM5F3mVLpDhBgLoQBPz5w1BFZ1AHcP5BsxM+6eTs3530ufYLLt1b5AtslQ1AIxJ7tJfeKvL8jyK8WAf/nRUD7kWAoOJct6i2LlIeJlnrAvVM5Dyozdy22/BpERkURTNBCgheJLlh5Cr9VALWRQLNUcZ8sVFk6vpVPAtZ7OKIoVYj2hOLBV9Xlh1Cu6u/lTj7NyrrXB17S/cejJ9d88TD//HXdYBkXDoy1/B9dMEP+27AxKDGqKg2AgyycqwpmGosKIVVYAAWQhR8Yy260zf+alQ2E0Pd+q5PdHM7hZPMlq533He6GwEUr/Tg7E3WcWsz50+U2hfgCq3xQIPKFNErUDW7hdDGT/49BySe3tHOKpwpERluxdzneNWkVymvnOFl3SWIEQn38sfU6yhyiKQ0AVO/gfjeNoeAQZyK0s53H+CD4Ke8L+PoqMvwuc7/Q=
x-forefront-antispam-report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:PH0PR03MB5669.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(366004)(346002)(376002)(39850400004)(396003)(186003)(2616005)(36756003)(6486002)(54906003)(53546011)(5660300002)(91956017)(6512007)(3480700007)(64756008)(66446008)(66556008)(2906002)(76116006)(6506007)(7116003)(26005)(8936002)(83380400001)(6916009)(38100700002)(33656002)(316002)(71200400001)(66946007)(86362001)(8676002)(66476007)(4326008)(478600001)(122000001)(45980500001);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: =?utf-8?B?QzRudWczQktpTEJoM0w4Zm1vSk50cERqdUJ2LzB1empYTzdqdmkwREFYaHlX?=
 =?utf-8?B?WmZVMXU0NzJWV3lMeFdRWUY1KzBIeHAwMms5bGhaM0V2d0JTR0pRQjR5a09D?=
 =?utf-8?B?ZFpjYVVNbzM0SDVQYkdaSnhaSzExVVkvTHFQL3I4dkgwOTBrOW1zRUw2NDcx?=
 =?utf-8?B?eEpGOS8zNUJlOUUvRjZmVWVsVXkrY2JGNVpDQXZyV2daRDJnRUIxdFVPOEZJ?=
 =?utf-8?B?ZGZabmRWbmt2QjV2Z0J5dTBqcFQ3bXI2QTByUjY1RFRQajMzUS9KNFZlTGpk?=
 =?utf-8?B?V1pDejBYVWM5QUpyWUZXRkRQUzJVa0VVN0UxOHBacjc4R3NqOXNIOEoxTGpD?=
 =?utf-8?B?WmJKTmFnVUo3d1VGUTI4MHNPdHZ0NjRZVU5Lc004SWJ4N0NHeXdiWFdQMUM2?=
 =?utf-8?B?MlRGNnZ3Y0ZuNDhONlh3N3NTcXhDN1Z3bGZMc3NlNjIwL1hUZmFGOTRCWVJw?=
 =?utf-8?B?dmREemE0Qy9xKyt5LzdFVkFUQnZRRUpWbGFxSTRUQTNqckdGbW50MjU1NTBI?=
 =?utf-8?B?Z0tZTTRoRytzMEFkQkE1dmlFaWlSWlBhSWRmalUzNHBwQmwxTklkZzB0aXBq?=
 =?utf-8?B?T05VcEptaTRMdjBCMFlrQjhwdUszemhSZ2k3ZHp4VzFtSE9qQjBVOVJ5MEg1?=
 =?utf-8?B?ZFFjbkhGV3JjU3lTMlJwdmdEY0Q4QVVHcXpzZFhIMDFCd3diSVlvVVNoQWtp?=
 =?utf-8?B?NVJ4TVNWbTd5NGdtMUJ3WjBVb1BVaERuODIycjV6YWNGNVVYNW5pZ1loUEdM?=
 =?utf-8?B?TnREaS8yUkl5K0dLRFB0ekFHd0wrUVJIWW1FNVJBdDFmdHFwNGtJcW00WFJI?=
 =?utf-8?B?VDhaa3JLRjhuZG1XWGZha1VweXlHM1lnaThEOXN1UWZXbWJSN21GSmE3TmdU?=
 =?utf-8?B?STBGQTYrS1RYcUxveFhMbjE1NkFSRnd4TDJ3eEMvTkpndkt2ZHV2OUJiTlg5?=
 =?utf-8?B?RjRPV3FWM2J3QWtRa21YQktSNk1yU3N1ZkVZeDQrN3YwVmlPNWZvQXZ0Z003?=
 =?utf-8?B?NFRwclJKa3RVa2lUb1BqT0ZWcURUakk0Z1RxaENTaWUrRmxLNTZHNzgzd2Rh?=
 =?utf-8?B?ZmVjZURDYlRKNm8ydXlVVDZISXJkVXU1eVUzcHg1M0xSV3h2ek1OTzlzcGZm?=
 =?utf-8?B?U3RKZ0JVR2xkR212VEd4WG9VZnc2WmNWNTJNYWdzVXpzM3l2NElLVThXTXJk?=
 =?utf-8?B?cUJMb3pPVjNXMVI5dGttNVNVc2tLU1BoNXExMkRxc0txQTFSMndKL0hYYmFm?=
 =?utf-8?B?RkVrWGtzMFBRYmlaV1NBeDJDT3p0THpVVk4yYXpMT1VucTY2MHhOMGtwR3Jk?=
 =?utf-8?B?VFJrVDVZdmdoZVJ3YkFOeE94cktsc2huMEVqcTQzWGFCVTBUMENKQWtDOUFM?=
 =?utf-8?B?M0dLWFVKRnN6Tml3QUtNbWFzWE1LQXdrcHlIa2w4K3hhQlFjODdzWDVvbkNZ?=
 =?utf-8?B?SWZjV1dyL1hUYm5reEJCL1N5YVF6VFFjZ1BEcUsrN3ZpQ1dSdzVJNlVXWkNK?=
 =?utf-8?B?czlwQW4yQW16TUdXNjZRanlhTy80WlN5NXkzekxJb3FwTUpNdnBHdGtiSlBt?=
 =?utf-8?B?aFJkZGIrSUNDT3VaaE9YWTdNWDZmYUVieEZOcnVwdTFNSWp3MnVpaDY4RVdq?=
 =?utf-8?B?bE9iZkpmVmZYd3ozenZmZ3NKSWpEQ2NWMGsrbWhXYXM1ZXVxYnZTRS8xSThX?=
 =?utf-8?B?RzZaQjZxZzdKdWdnbkNyY3FTYkhEWGhGeGk0cjJ3bTJxSXAveVNXYlY2KzBX?=
 =?utf-8?B?WmlXZmdlZzBTSnBsVSszVjRnVVR1Sy80ekNuNml1bUlsUmhqVk5ieTY2cVVU?=
 =?utf-8?B?QmtWL0dYVnZVZnFobFpmZz09?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-ID: <C9E627F48204FA48B0E897BA2DA5CD65@namprd03.prod.outlook.com>
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: PH0PR03MB5669.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: dc81b949-d5ac-4689-363c-08d920395ba4
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 May 2021 11:28:17.0646
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: nuCsYVGeHTl6E7QUgzOZBn5baMOERxLkGm73/OpbSKh/aniwrXY6cB0jbMv4UBclDNr5T5wot+s54nDy6vvoDZmV9pfQtdEJ+Jpftxzc3jw=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: PH0PR03MB6365
X-OriginatorOrg: citrix.com

> On May 26, 2021, at 12:04 PM, Ian Jackson <iwj@xenproject.org> wrote:
> 
> George Dunlap writes ("Re: IRC networks"):
>> Thanks, Ian.  I tend to agree with the recommendation.  I think unless someone wants to argue that we go to libera (or stick with freenode), we should go with that option.
>> 
>> Normally for a decision like this we’d wait 2 weeks for counter-arguments before making it official.  Does anyone want to argue that we should move up the timetable for some reason?
>> 
>> I’ve registered #xendevel on oftc; I’d encourage early adopters to give it a try.
> 
> Recent behaviour by the new de facto operators of Freenode has been
> quite egregious.  In particular, it is now against the rules to set
> your topic to direct your users to libera.chat, the replacement set up
> by the resigning Freenode staff.  The server operators have been
> taking over channels where the projects are trying to migrate.  (I
> think that probably applies to OFTC too.)
> 
> Also, the new de facto operators of Freenode are using user count to
> justify their behaviour.

Yes, I saw this too, and I agree.  I’ll try to get the xenproject webpage changed today if possible.

 -George


From xen-devel-bounces@lists.xenproject.org Wed May 26 12:13:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 12:13:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132471.247077 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llsPo-0006It-8f; Wed, 26 May 2021 12:13:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132471.247077; Wed, 26 May 2021 12:13:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llsPo-0006Im-5b; Wed, 26 May 2021 12:13:36 +0000
Received: by outflank-mailman (input) for mailman id 132471;
 Wed, 26 May 2021 12:13:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wTMo=KV=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1llsPn-0006Ie-G5
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 12:13:35 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 072cf1d1-6408-47f7-a08c-740e07ffb08b;
 Wed, 26 May 2021 12:13:34 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 7DB6A613D6;
 Wed, 26 May 2021 12:13:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 072cf1d1-6408-47f7-a08c-740e07ffb08b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1622031214;
	bh=6cgAAAry3KXjHYICFlfgug7HP7MtfiezqAbh5vyjbAk=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=dD4raOXl4m/1dmUYxCXd+9Zhh/XbrtQC52Qny8ZwVBaqmrNGpiCc5ZIU1b4WjGm8H
	 uJa3jIZybN6M6kTMpAfuccsLT4/wRQbg2MVSjmE1yL01mC3lcM16NVvL5naGjFRQ57
	 RUc8/qmG6+LKY4mt6jNuAHGXfU60PflHdIRn81RXVX/L8quJSpzl4wYs9fdYRMYSyz
	 TDHUTCxqGOaTMeMxI6d2iUlniRYHOiu06+32s75wchffQ8s3cbbJwvz7Jm6fWP5KI+
	 RznUS6G8WImqfu6sIHm02R4cmQSm1/c2dk4BLfXSL89hrwovsmQduosVw5UITnC7eb
	 z+DGv0xJWhG6g==
Date: Wed, 26 May 2021 13:13:23 +0100
From: Will Deacon <will@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 14/15] dt-bindings: of: Add restricted DMA pool
Message-ID: <20210526121322.GA19313@willie-the-truck>
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-15-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210518064215.2856977-15-tientzu@chromium.org>
User-Agent: Mutt/1.10.1 (2018-07-13)

Hi Claire,

On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> Introduce the new compatible string, restricted-dma-pool, for restricted
> DMA. One can specify the address and length of the restricted DMA memory
> region by restricted-dma-pool in the reserved-memory node.
> 
> Signed-off-by: Claire Chang <tientzu@chromium.org>
> ---
>  .../reserved-memory/reserved-memory.txt       | 27 +++++++++++++++++++
>  1 file changed, 27 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> index e8d3096d922c..284aea659015 100644
> --- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> +++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
> @@ -51,6 +51,23 @@ compatible (optional) - standard definition
>            used as a shared pool of DMA buffers for a set of devices. It can
>            be used by an operating system to instantiate the necessary pool
>            management subsystem if necessary.
> +        - restricted-dma-pool: This indicates a region of memory meant to be
> +          used as a pool of restricted DMA buffers for a set of devices. The
> +          memory region would be the only region accessible to those devices.
> +          When using this, the no-map and reusable properties must not be set,
> +          so the operating system can create a virtual mapping that will be used
> +          for synchronization. The main purpose for restricted DMA is to
> +          mitigate the lack of DMA access control on systems without an IOMMU,
> +          which could result in the DMA accessing the system memory at
> +          unexpected times and/or unexpected addresses, possibly leading to data
> +          leakage or corruption. The feature on its own provides a basic level
> +          of protection against the DMA overwriting buffer contents at
> +          unexpected times. However, to protect against general data leakage and
> +          system memory corruption, the system needs to provide way to lock down
> +          the memory access, e.g., MPU. Note that since coherent allocation
> +          needs remapping, one must set up another device coherent pool by
> +          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
> +          coherent allocation.
>          - vendor specific string in the form <vendor>,[<device>-]<usage>
>  no-map (optional) - empty property
>      - Indicates the operating system must not create a virtual mapping
> @@ -120,6 +137,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
>  			compatible = "acme,multimedia-memory";
>  			reg = <0x77000000 0x4000000>;
>  		};
> +
> +		restricted_dma_mem_reserved: restricted_dma_mem_reserved {
> +			compatible = "restricted-dma-pool";
> +			reg = <0x50000000 0x400000>;
> +		};

nit: You need to update the old text that states "This example defines 3
contiguous regions ...".

>  	};
>  
>  	/* ... */
> @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
>  		memory-region = <&multimedia_reserved>;
>  		/* ... */
>  	};
> +
> +	pcie_device: pcie_device@0,0 {
> +		memory-region = <&restricted_dma_mem_reserved>;
> +		/* ... */
> +	};

I still don't understand how this works for individual PCIe devices -- how
is dev->of_node set to point at the node you have above?

I tried adding the memory-region to the host controller instead, and then
I see it crop up in dmesg:

  | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved

but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
so the restricted DMA area is not used. In fact, swiotlb isn't used at all.

What am I missing to make this work with PCIe devices?

Thanks,

Will


From xen-devel-bounces@lists.xenproject.org Wed May 26 12:30:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 12:30:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132478.247088 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llsgQ-0000E2-Mx; Wed, 26 May 2021 12:30:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132478.247088; Wed, 26 May 2021 12:30:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llsgQ-0000Dv-JI; Wed, 26 May 2021 12:30:46 +0000
Received: by outflank-mailman (input) for mailman id 132478;
 Wed, 26 May 2021 12:30:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llsgQ-0000Dl-0c; Wed, 26 May 2021 12:30:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llsgP-0000L4-QG; Wed, 26 May 2021 12:30:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llsgP-0002Gk-Ci; Wed, 26 May 2021 12:30:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llsgP-0007Aj-CE; Wed, 26 May 2021 12:30:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=HJPtsN9UImXyLoB2XlVRbsUper/aHDIyGtLpibxNUls=; b=BeYlSqCYv5zGBoyUSZCF9Uwt7v
	7lnzhKBKe2+4AJ3XtZTUAaCBgbQQ2alvMFcul/8B+T1LhWO8f1+GdM6fz5jkEMv7YbWhVGECTTiS9
	BOkd/NhBvEU3u1RfkCKhO74q6NacTdS6WJmo+u+vaSyAfJmp1fA3txpHMNhcJNoF7OdI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162158-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162158: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=92f8c6fef13b31ba222c4d20ad8afd2b79c4c28e
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 12:30:45 +0000

flight 162158 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162158/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                92f8c6fef13b31ba222c4d20ad8afd2b79c4c28e
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  279 days
Failing since        152659  2020-08-21 14:07:39 Z  277 days  513 attempts
Testing same since   162158  2021-05-26 01:37:17 Z    0 days    1 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 161234 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 26 12:59:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 12:59:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132487.247104 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llt7t-0002dJ-32; Wed, 26 May 2021 12:59:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132487.247104; Wed, 26 May 2021 12:59:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llt7t-0002dC-05; Wed, 26 May 2021 12:59:09 +0000
Received: by outflank-mailman (input) for mailman id 132487;
 Wed, 26 May 2021 12:59:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llt7r-0002d6-Od
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 12:59:07 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 8d95842c-955c-406e-9398-81d2508af71c;
 Wed, 26 May 2021 12:59:06 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by smtp-out2.suse.de (Postfix) with ESMTP id 378841FD29;
 Wed, 26 May 2021 12:59:05 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 1445111A98;
 Wed, 26 May 2021 12:59:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d95842c-955c-406e-9398-81d2508af71c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622033945; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=jwQbCnoo88KOyKF0JjLRzmmEZgGZkhuakgk/vH3ZXoI=;
	b=bLiabYByA7DFE4/6NIgc86DpE4LdM41qd06Gqj7y8gU5nI6y5lUB87gdnwj4RMgfGwrroV
	cI7YkZ5Cd9QA85lWMVl0GRNldD4SOJ/rSbTFS08jEiLwOaSn1OxrL/Wei8r5YNxAXAKTD0
	0vT7oc7pF92JdoPsN+smz8+mB7PWS+c=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Olaf Hering <olaf@aepfle.de>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH] x86/AMD: expose SYSCFG, TOM, and TOM2 to Dom0
Message-ID: <c5764274-1257-809e-a2a7-d87b9d0fe675@suse.com>
Date: Wed, 26 May 2021 14:59:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Sufficiently old Linux (3.12-ish) accesses these MSRs in an unguarded
manner. Furthermore these MSRs, at least on Fam11 and older CPUs, are
also consulted by modern Linux, and their (bogus) built-in zapping of
#GP faults from MSR accesses leads to it effectively reading zero
instead of the intended values, which are relevant for PCI BAR placement
(which ought to all live in MMIO-type space, not in DRAM-type one).

For SYSCFG, only certain bits get exposed. In fact, whether to expose
MtrrVarDramEn is debatable: It controls use of not just TOM, but also
the IORRs. Introduce (consistently named) constants for the bits we're
interested in and use them in pre-existing code as well.

As a welcome side effect, verbosity of debug builds gets (perhaps
significantly) reduced.
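For readers skimming the diff below, the Dom0-visible view of SYSCFG that the guest_rdmsr() hunk produces can be sketched in isolation (a standalone illustration using the patch's bit constants, not code from the patch itself):

```c
#include <stdint.h>

/* SYSCFG (MSR 0xc0010010) bit constants as introduced by the patch. */
#define SYSCFG_MTRR_FIX_DRAM_EN     (1ULL << 18)
#define SYSCFG_MTRR_FIX_DRAM_MOD_EN (1ULL << 19)
#define SYSCFG_MTRR_VAR_DRAM_EN     (1ULL << 20)
#define SYSCFG_MTRR_TOM2_EN         (1ULL << 21)
#define SYSCFG_TOM2_FORCE_WB        (1ULL << 22)

/* What a Dom0 rdmsr of SYSCFG yields after the patch's masking:
 * everything except the four whitelisted bits reads as zero. */
static uint64_t dom0_visible_syscfg(uint64_t raw)
{
    return raw & (SYSCFG_TOM2_FORCE_WB | SYSCFG_MTRR_TOM2_EN |
                  SYSCFG_MTRR_VAR_DRAM_EN | SYSCFG_MTRR_FIX_DRAM_EN);
}
```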

Note that at least as far as those MSR accesses by Linux are concerned,
there's no similar issue for DomU-s, as the accesses sit behind PCI
device matching logic. The checked-for devices would never be exposed to
DomU-s in the first place. Nevertheless I think that at least for HVM we
should return sensible values, not 0 (as svm_msr_read_intercept() does
right now). The intended values may, however, need to be determined by
hvmloader, and then get made known to Xen.

Fixes: 322ec7c89f66 ("x86/pv: disallow access to unknown MSRs")
Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -468,14 +468,14 @@ static void check_syscfg_dram_mod_en(voi
 		return;
 
 	rdmsrl(MSR_K8_SYSCFG, syscfg);
-	if (!(syscfg & K8_MTRRFIXRANGE_DRAM_MODIFY))
+	if (!(syscfg & SYSCFG_MTRR_FIX_DRAM_MOD_EN))
 		return;
 
 	if (!test_and_set_bool(printed))
 		printk(KERN_ERR "MTRR: SYSCFG[MtrrFixDramModEn] not "
 			"cleared by BIOS, clearing this bit\n");
 
-	syscfg &= ~K8_MTRRFIXRANGE_DRAM_MODIFY;
+	syscfg &= ~SYSCFG_MTRR_FIX_DRAM_MOD_EN;
 	wrmsrl(MSR_K8_SYSCFG, syscfg);
 }
 
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -224,7 +224,7 @@ static void __init print_mtrr_state(cons
 		uint64_t syscfg, tom2;
 
 		rdmsrl(MSR_K8_SYSCFG, syscfg);
-		if (syscfg & (1 << 21)) {
+		if (syscfg & SYSCFG_MTRR_TOM2_EN) {
 			rdmsrl(MSR_K8_TOP_MEM2, tom2);
 			printk("%sTOM2: %012"PRIx64"%s\n", level, tom2,
 			       syscfg & (1 << 22) ? " (WB)" : "");
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -339,6 +339,19 @@ int guest_rdmsr(struct vcpu *v, uint32_t
         *val = msrs->tsc_aux;
         break;
 
+    case MSR_K8_SYSCFG:
+    case MSR_K8_TOP_MEM1:
+    case MSR_K8_TOP_MEM2:
+        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+            goto gp_fault;
+        if ( !is_hardware_domain(d) )
+            return X86EMUL_UNHANDLEABLE;
+        rdmsrl(msr, *val);
+        if ( msr == MSR_K8_SYSCFG )
+            *val &= (SYSCFG_TOM2_FORCE_WB | SYSCFG_MTRR_TOM2_EN |
+                     SYSCFG_MTRR_VAR_DRAM_EN | SYSCFG_MTRR_FIX_DRAM_EN);
+        break;
+
     case MSR_K8_HWCR:
         if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
             goto gp_fault;
--- a/xen/arch/x86/x86_64/mmconf-fam10h.c
+++ b/xen/arch/x86/x86_64/mmconf-fam10h.c
@@ -69,7 +69,7 @@ static void __init get_fam10h_pci_mmconf
 	rdmsrl(address, val);
 
 	/* TOP_MEM2 is not enabled? */
-	if (!(val & (1<<21))) {
+	if (!(val & SYSCFG_MTRR_TOM2_EN)) {
 		tom2 = 1ULL << 32;
 	} else {
 		/* TOP_MEM2 */
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -116,6 +116,13 @@
 #define  PASID_PASID_MASK                   0x000fffff
 #define  PASID_VALID                        (_AC(1, ULL) << 31)
 
+#define MSR_K8_SYSCFG                       0xc0010010
+#define  SYSCFG_MTRR_FIX_DRAM_EN            (_AC(1, ULL) << 18)
+#define  SYSCFG_MTRR_FIX_DRAM_MOD_EN        (_AC(1, ULL) << 19)
+#define  SYSCFG_MTRR_VAR_DRAM_EN            (_AC(1, ULL) << 20)
+#define  SYSCFG_MTRR_TOM2_EN                (_AC(1, ULL) << 21)
+#define  SYSCFG_TOM2_FORCE_WB               (_AC(1, ULL) << 22)
+
 #define MSR_K8_VM_CR                        0xc0010114
 #define  VM_CR_INIT_REDIRECTION             (_AC(1, ULL) <<  1)
 #define  VM_CR_SVM_DISABLE                  (_AC(1, ULL) <<  4)
@@ -279,10 +286,7 @@
 #define MSR_K8_TOP_MEM1			0xc001001a
 #define MSR_K7_CLK_CTL			0xc001001b
 #define MSR_K8_TOP_MEM2			0xc001001d
-#define MSR_K8_SYSCFG			0xc0010010
 
-#define K8_MTRRFIXRANGE_DRAM_ENABLE	0x00040000 /* MtrrFixDramEn bit    */
-#define K8_MTRRFIXRANGE_DRAM_MODIFY	0x00080000 /* MtrrFixDramModEn bit */
 #define K8_MTRR_RDMEM_WRMEM_MASK	0x18181818 /* Mask: RdMem|WrMem    */
 
 #define MSR_K7_HWCR			0xc0010015


From xen-devel-bounces@lists.xenproject.org Wed May 26 13:18:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 13:18:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132496.247116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lltQk-0004wr-QP; Wed, 26 May 2021 13:18:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132496.247116; Wed, 26 May 2021 13:18:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lltQk-0004wk-Me; Wed, 26 May 2021 13:18:38 +0000
Received: by outflank-mailman (input) for mailman id 132496;
 Wed, 26 May 2021 13:18:36 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lltQi-0004wd-NY
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 13:18:36 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 6e0c7ee0-f5eb-40ed-bc05-ec69dc22df22;
 Wed, 26 May 2021 13:18:35 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by smtp-out2.suse.de (Postfix) with ESMTP id D50D21FD29;
 Wed, 26 May 2021 13:18:34 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id C218711A98;
 Wed, 26 May 2021 13:18:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6e0c7ee0-f5eb-40ed-bc05-ec69dc22df22
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622035114; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=c/RaWJzeeSzUVsZjxQFr9uaJNCVGVhWTqv3Zw8iL1fc=;
	b=EapiWJsHHON8KctPld+xvh3y2XUGZgJNYPOzxkgKupLvUIZm8rBM/be0pI297i3ZIeR0qs
	Jcd+5DRL3q36lDb8dmmBR43hjcr05aPpn6rF8096poGbYKc0N8BOLBJWwQGq5nENaj2OJ/
	rlZtPA/UdpjF5d6TO33B8nZY1/kNoFA=
Subject: Re: [PATCH 01/13] cpufreq: Allow restricting to internal governors
 only
To: Jason Andryuk <jandryuk@gmail.com>
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-2-jandryuk@gmail.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <927b886a-9b0c-2162-763b-9c2147227b8c@suse.com>
Date: Wed, 26 May 2021 15:18:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-2-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:27, Jason Andryuk wrote:
> For hwp, the standard governors are not usable, and only the internal
> one is applicable.

So you say "one" here but use plural in the subject. Which one is
it (going to be)?

>  Add the cpufreq_governor_internal boolean to
> indicate when an internal governor, like hwp-internal, will be used.
> This is set during presmp_initcall, so that it can suppress governor

DYM s/is/will be/? Afaict this is going to happen later in the series.
Which is a good indication that such "hanging in the air" changes
aren't necessarily the best way of splitting a set of changes, ...

> --- a/xen/drivers/cpufreq/cpufreq.c
> +++ b/xen/drivers/cpufreq/cpufreq.c
> @@ -57,6 +57,7 @@ struct cpufreq_dom {
>  };
>  static LIST_HEAD_READ_MOSTLY(cpufreq_dom_list_head);
>  
> +bool __read_mostly cpufreq_governor_internal;

... also supported by you introducing a non-static variable without
any consumer outside of this file (and without any producer at all).

> @@ -122,6 +123,9 @@ int __init cpufreq_register_governor(struct cpufreq_governor *governor)
>      if (!governor)
>          return -EINVAL;
>  
> +    if (cpufreq_governor_internal && strstr(governor->name, "internal") == NULL)

I wonder whether designating any governors ending in "-internal"
wouldn't be less prone to possible future ambiguities.
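A suffix check along these lines might look like the following (an illustrative sketch; the helper name is made up and not part of the series):

```c
#include <stdbool.h>
#include <string.h>

/* Accept only governors whose name ends in "-internal" -- a suffix match,
 * unlike strstr(), which would also match a name that merely contains the
 * word "internal" anywhere. */
static bool is_internal_governor(const char *name)
{
    static const char suffix[] = "-internal";
    size_t len = strlen(name);
    size_t slen = sizeof(suffix) - 1;

    return len >= slen && strcmp(name + len - slen, suffix) == 0;
}
```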

> --- a/xen/include/acpi/cpufreq/cpufreq.h
> +++ b/xen/include/acpi/cpufreq/cpufreq.h
> @@ -115,6 +115,7 @@ extern struct cpufreq_governor cpufreq_gov_dbs;
>  extern struct cpufreq_governor cpufreq_gov_userspace;
>  extern struct cpufreq_governor cpufreq_gov_performance;
>  extern struct cpufreq_governor cpufreq_gov_powersave;
> +extern bool cpufreq_governor_internal;

Please separate from the governor declarations by a blank line.

Sorry, all quite nit-like remarks, but still ...

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 26 13:24:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 13:24:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132509.247136 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lltWG-0006c4-Ix; Wed, 26 May 2021 13:24:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132509.247136; Wed, 26 May 2021 13:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lltWG-0006bx-Fy; Wed, 26 May 2021 13:24:20 +0000
Received: by outflank-mailman (input) for mailman id 132509;
 Wed, 26 May 2021 13:24:19 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lltWF-0006br-MV
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 13:24:19 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a477c6bb-2de4-4ea5-bf25-0b6c1c525132;
 Wed, 26 May 2021 13:24:18 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by smtp-out2.suse.de (Postfix) with ESMTP id C3B2D1FD2A;
 Wed, 26 May 2021 13:24:17 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 9D37111A98;
 Wed, 26 May 2021 13:24:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a477c6bb-2de4-4ea5-bf25-0b6c1c525132
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622035457; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=EtfsgYNsPNzJpxDuCz6haBq85KulN4as1kYTkz2nBg8=;
	b=eaP8ax7g2oMaBP2fu3arukDLr7mAiE34LLpeODZkkiWDuLaYDF5fbgSXNngAB8DhO2Um35
	PUrrP+JrYK2QBMfWa3Wnw5n99kury/7d4WUMM/gyc2DIxfFmGaqGr54yKKFL7/ECUTLd9A
	f1zHzE73krU6VpyHaUkZ8ZfwYz8vmPg=
Subject: Re: [PATCH 02/13] cpufreq: Add perf_freq to cpuinfo
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-3-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6f3d3833-8540-ca92-8d1c-e4b7bd2217ce@suse.com>
Date: Wed, 26 May 2021 15:24:13 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-3-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:27, Jason Andryuk wrote:
> acpi-cpufreq scales the aperf/mperf measurements by max_freq, but HWP
> needs to scale by base frequency.  Setting max_freq to base_freq
> "works" but the code is not obvious, and returning values to userspace
> is tricky.  Add an additional perf_freq member which is used for scaling
> aperf/mperf measurements.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> ---
> I don't like this, but it seems the best way to re-use the common
> aperf/mperf code.  The other option would be to add wrappers that then
> do the acpi vs. hwp scaling.

Not sure I understand what you mean by "wrappers". I would assume that
for hwp you simply install a different getavg hook? Or else I guess
I'd need to see at least an outline of what you see as the alternative.

Jan
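(For reference, the common aperf/mperf averaging that either option would feed boils down to the following; the parameter name and exact formula are inferred from the quoted description, not taken from the series:)

```c
#include <stdint.h>

/* Estimated average frequency over a sampling interval, derived from the
 * APERF/MPERF delta ratio.  scale_freq is max_freq for acpi-cpufreq and
 * would be base_freq (the proposed perf_freq) for HWP. */
static uint64_t avg_freq(uint64_t aperf_delta, uint64_t mperf_delta,
                         uint64_t scale_freq)
{
    if (!mperf_delta)
        return 0; /* avoid dividing by zero on a stuck counter */
    return scale_freq * aperf_delta / mperf_delta;
}
```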


From xen-devel-bounces@lists.xenproject.org Wed May 26 13:46:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 13:46:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132518.247147 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llts3-0000TJ-Ct; Wed, 26 May 2021 13:46:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132518.247147; Wed, 26 May 2021 13:46:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llts3-0000TC-9u; Wed, 26 May 2021 13:46:51 +0000
Received: by outflank-mailman (input) for mailman id 132518;
 Wed, 26 May 2021 13:46:50 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llts2-0000T6-IN
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 13:46:50 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9194cb9a-5b12-452a-b352-1b2048d5c4ca;
 Wed, 26 May 2021 13:46:49 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by smtp-out1.suse.de (Postfix) with ESMTP id 11316218C1;
 Wed, 26 May 2021 13:27:14 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id E5D3711A98;
 Wed, 26 May 2021 13:27:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9194cb9a-5b12-452a-b352-1b2048d5c4ca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622035634; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CX/wrSTqHD/qc2J/s31eFXwhF2y71gJjkVHsKXIRBuY=;
	b=oqatW4EKVQ7t4b9SAAQbSGSH3qLCyA6bc6I1C4Jpq6MRCLAmo7eF1DE57MwFz17IEgttpF
	zoLbBCMp4NtQXNCYurJ6t9PtVqa3GVy7FGblKuuXJ8vyxbTCZk4yMoDOMo5FGFnKKVVLEf
	k5CPajSPCrKz41o54KbQsQ3ViCvZDbw=
Subject: Re: [PATCH 03/13] cpufreq: Export intel_feature_detect
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-4-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <68c32d6e-8c6f-35da-c9cd-a560d3d6895b@suse.com>
Date: Wed, 26 May 2021 15:27:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-4-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> Export feature_detect as intel_feature_detect so it can be re-used by
> HWP.
> 
> Signed-off-by: Jason Andryuk <jandryuk@gmail.com>

Acked-by: Jan Beulich <jbeulich@suse.com>
albeit ...

> --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
> +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
> @@ -340,7 +340,7 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
>      return extract_freq(get_cur_val(cpumask_of(cpu)), data);
>  }
>  
> -static void feature_detect(void *info)
> +void intel_feature_detect(void *info)
>  {
>      struct cpufreq_policy *policy = info;

... because of this (requiring the hwp code to stay in sync with
possible changes here, without the compiler being able to point
out inconsistencies) I'm not overly happy with such a change. Yet
I guess this isn't the first case we have in the code base.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 26 14:10:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 14:10:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132525.247158 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lluEt-0003Yo-B2; Wed, 26 May 2021 14:10:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132525.247158; Wed, 26 May 2021 14:10:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lluEt-0003Yh-76; Wed, 26 May 2021 14:10:27 +0000
Received: by outflank-mailman (input) for mailman id 132525;
 Wed, 26 May 2021 14:10:26 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=nl8y=KV=huawei.com=yuehaibing@srs-us1.protection.inumbo.net>)
 id 1lluEs-0003Yb-9r
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 14:10:26 +0000
Received: from szxga06-in.huawei.com (unknown [45.249.212.32])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ea4827a7-1acd-4e4c-abda-8d08324150fe;
 Wed, 26 May 2021 14:10:24 +0000 (UTC)
Received: from dggems702-chm.china.huawei.com (unknown [172.30.72.59])
 by szxga06-in.huawei.com (SkyGuard) with ESMTP id 4Fqt7z5r8zzmZLL;
 Wed, 26 May 2021 22:07:59 +0800 (CST)
Received: from dggema769-chm.china.huawei.com (10.1.198.211) by
 dggems702-chm.china.huawei.com (10.3.19.179) with Microsoft SMTP Server
 (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256) id
 15.1.2176.2; Wed, 26 May 2021 22:10:20 +0800
Received: from localhost (10.174.179.215) by dggema769-chm.china.huawei.com
 (10.1.198.211) with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256_P256) id 15.1.2176.2; Wed, 26
 May 2021 22:10:20 +0800
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ea4827a7-1acd-4e4c-abda-8d08324150fe
From: YueHaibing <yuehaibing@huawei.com>
To: <boris.ostrovsky@oracle.com>, <jgross@suse.com>, <sstabellini@kernel.org>,
	<yuehaibing@huawei.com>
CC: <xen-devel@lists.xenproject.org>, <linux-kernel@vger.kernel.org>
Subject: [PATCH -next] xen: Use DEVICE_ATTR_*() macro
Date: Wed, 26 May 2021 22:10:19 +0800
Message-ID: <20210526141019.13752-1-yuehaibing@huawei.com>
X-Mailer: git-send-email 2.10.2.windows.1
MIME-Version: 1.0
Content-Type: text/plain
X-Originating-IP: [10.174.179.215]
X-ClientProxiedBy: dggems701-chm.china.huawei.com (10.3.19.178) To
 dggema769-chm.china.huawei.com (10.1.198.211)
X-CFilter-Loop: Reflected

Use DEVICE_ATTR_*() helper instead of plain DEVICE_ATTR(),
which makes the code a bit shorter and easier to read.

Signed-off-by: YueHaibing <yuehaibing@huawei.com>
---
 drivers/xen/pcpu.c                |  6 +++---
 drivers/xen/xen-balloon.c         | 28 +++++++++++-----------------
 drivers/xen/xenbus/xenbus_probe.c | 15 +++++++--------
 3 files changed, 21 insertions(+), 28 deletions(-)

diff --git a/drivers/xen/pcpu.c b/drivers/xen/pcpu.c
index 1bcdd5227771..47aa3a1ccaf5 100644
--- a/drivers/xen/pcpu.c
+++ b/drivers/xen/pcpu.c
@@ -92,7 +92,7 @@ static int xen_pcpu_up(uint32_t cpu_id)
 	return HYPERVISOR_platform_op(&op);
 }
 
-static ssize_t show_online(struct device *dev,
+static ssize_t online_show(struct device *dev,
 			   struct device_attribute *attr,
 			   char *buf)
 {
@@ -101,7 +101,7 @@ static ssize_t show_online(struct device *dev,
 	return sprintf(buf, "%u\n", !!(cpu->flags & XEN_PCPU_FLAGS_ONLINE));
 }
 
-static ssize_t __ref store_online(struct device *dev,
+static ssize_t __ref online_store(struct device *dev,
 				  struct device_attribute *attr,
 				  const char *buf, size_t count)
 {
@@ -130,7 +130,7 @@ static ssize_t __ref store_online(struct device *dev,
 		ret = count;
 	return ret;
 }
-static DEVICE_ATTR(online, S_IRUGO | S_IWUSR, show_online, store_online);
+static DEVICE_ATTR_RW(online);
 
 static struct attribute *pcpu_dev_attrs[] = {
 	&dev_attr_online.attr,
diff --git a/drivers/xen/xen-balloon.c b/drivers/xen/xen-balloon.c
index a8d24433c8e9..8cd583db20b1 100644
--- a/drivers/xen/xen-balloon.c
+++ b/drivers/xen/xen-balloon.c
@@ -134,13 +134,13 @@ void xen_balloon_init(void)
 EXPORT_SYMBOL_GPL(xen_balloon_init);
 
 #define BALLOON_SHOW(name, format, args...)				\
-	static ssize_t show_##name(struct device *dev,			\
+	static ssize_t name##_show(struct device *dev,			\
 				   struct device_attribute *attr,	\
 				   char *buf)				\
 	{								\
 		return sprintf(buf, format, ##args);			\
 	}								\
-	static DEVICE_ATTR(name, S_IRUGO, show_##name, NULL)
+	static DEVICE_ATTR_RO(name)
 
 BALLOON_SHOW(current_kb, "%lu\n", PAGES2KB(balloon_stats.current_pages));
 BALLOON_SHOW(low_kb, "%lu\n", PAGES2KB(balloon_stats.balloon_low));
@@ -152,16 +152,15 @@ static DEVICE_ULONG_ATTR(retry_count, 0444, balloon_stats.retry_count);
 static DEVICE_ULONG_ATTR(max_retry_count, 0644, balloon_stats.max_retry_count);
 static DEVICE_BOOL_ATTR(scrub_pages, 0644, xen_scrub_pages);
 
-static ssize_t show_target_kb(struct device *dev, struct device_attribute *attr,
+static ssize_t target_kb_show(struct device *dev, struct device_attribute *attr,
 			      char *buf)
 {
 	return sprintf(buf, "%lu\n", PAGES2KB(balloon_stats.target_pages));
 }
 
-static ssize_t store_target_kb(struct device *dev,
+static ssize_t target_kb_store(struct device *dev,
 			       struct device_attribute *attr,
-			       const char *buf,
-			       size_t count)
+			       const char *buf, size_t count)
 {
 	char *endchar;
 	unsigned long long target_bytes;
@@ -176,22 +175,19 @@ static ssize_t store_target_kb(struct device *dev,
 	return count;
 }
 
-static DEVICE_ATTR(target_kb, S_IRUGO | S_IWUSR,
-		   show_target_kb, store_target_kb);
+static DEVICE_ATTR_RW(target_kb);
 
-
-static ssize_t show_target(struct device *dev, struct device_attribute *attr,
-			      char *buf)
+static ssize_t target_show(struct device *dev, struct device_attribute *attr,
+			   char *buf)
 {
 	return sprintf(buf, "%llu\n",
 		       (unsigned long long)balloon_stats.target_pages
 		       << PAGE_SHIFT);
 }
 
-static ssize_t store_target(struct device *dev,
+static ssize_t target_store(struct device *dev,
 			    struct device_attribute *attr,
-			    const char *buf,
-			    size_t count)
+			    const char *buf, size_t count)
 {
 	char *endchar;
 	unsigned long long target_bytes;
@@ -206,9 +202,7 @@ static ssize_t store_target(struct device *dev,
 	return count;
 }
 
-static DEVICE_ATTR(target, S_IRUGO | S_IWUSR,
-		   show_target, store_target);
-
+static DEVICE_ATTR_RW(target);
 
 static struct attribute *balloon_attrs[] = {
 	&dev_attr_target_kb.attr,
diff --git a/drivers/xen/xenbus/xenbus_probe.c b/drivers/xen/xenbus/xenbus_probe.c
index 97f0d234482d..33d09b3f6211 100644
--- a/drivers/xen/xenbus/xenbus_probe.c
+++ b/drivers/xen/xenbus/xenbus_probe.c
@@ -207,7 +207,7 @@ void xenbus_otherend_changed(struct xenbus_watch *watch,
 EXPORT_SYMBOL_GPL(xenbus_otherend_changed);
 
 #define XENBUS_SHOW_STAT(name)						\
-static ssize_t show_##name(struct device *_dev,				\
+static ssize_t name##_show(struct device *_dev,				\
 			   struct device_attribute *attr,		\
 			   char *buf)					\
 {									\
@@ -215,14 +215,14 @@ static ssize_t show_##name(struct device *_dev,				\
 									\
 	return sprintf(buf, "%d\n", atomic_read(&dev->name));		\
 }									\
-static DEVICE_ATTR(name, 0444, show_##name, NULL)
+static DEVICE_ATTR_RO(name)
 
 XENBUS_SHOW_STAT(event_channels);
 XENBUS_SHOW_STAT(events);
 XENBUS_SHOW_STAT(spurious_events);
 XENBUS_SHOW_STAT(jiffies_eoi_delayed);
 
-static ssize_t show_spurious_threshold(struct device *_dev,
+static ssize_t spurious_threshold_show(struct device *_dev,
 				       struct device_attribute *attr,
 				       char *buf)
 {
@@ -231,9 +231,9 @@ static ssize_t show_spurious_threshold(struct device *_dev,
 	return sprintf(buf, "%d\n", dev->spurious_threshold);
 }
 
-static ssize_t set_spurious_threshold(struct device *_dev,
-				      struct device_attribute *attr,
-				      const char *buf, size_t count)
+static ssize_t spurious_threshold_store(struct device *_dev,
+					struct device_attribute *attr,
+					const char *buf, size_t count)
 {
 	struct xenbus_device *dev = to_xenbus_device(_dev);
 	unsigned int val;
@@ -248,8 +248,7 @@ static ssize_t set_spurious_threshold(struct device *_dev,
 	return count;
 }
 
-static DEVICE_ATTR(spurious_threshold, 0644, show_spurious_threshold,
-		   set_spurious_threshold);
+static DEVICE_ATTR_RW(spurious_threshold);
 
 static struct attribute *xenbus_attrs[] = {
 	&dev_attr_event_channels.attr,
-- 
2.17.1
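[Archive note: for readers unfamiliar with the helper macros this patch
switches to, DEVICE_ATTR_RW(name) builds dev_attr_<name> by token-pasting
the attribute name together with the <name>_show / <name>_store handlers,
which is why each handler is renamed first.  A simplified stand-in
(MY_ATTR_RW and struct my_attr are illustrative, not the real kernel
definitions) shows the convention:]

```c
/*
 * Simplified stand-in for the kernel's DEVICE_ATTR_RW(): the macro
 * pastes <name> into dev_attr_<name> and wires up <name>_show and
 * <name>_store.  MY_ATTR_RW and struct my_attr are illustrative only.
 */
struct my_attr {
    const char *name;
    int (*show)(void);
    int (*store)(int);
};

#define MY_ATTR_RW(_name) \
    struct my_attr dev_attr_##_name = { #_name, _name##_show, _name##_store }

/* Handlers must follow the <name>_show / <name>_store naming pattern. */
static int online_show(void) { return 1; }
static int online_store(int v) { return v; }

static MY_ATTR_RW(online);
```

[This is why the patch can delete the explicit permission bits: the _RW,
_RO, and _WO variants of the helper imply 0644, 0444, and 0200
respectively.]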



From xen-devel-bounces@lists.xenproject.org Wed May 26 14:12:43 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 14:12:43 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132531.247169 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lluH4-0004AQ-ND; Wed, 26 May 2021 14:12:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132531.247169; Wed, 26 May 2021 14:12:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lluH4-0004AJ-K5; Wed, 26 May 2021 14:12:42 +0000
Received: by outflank-mailman (input) for mailman id 132531;
 Wed, 26 May 2021 14:12:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqRz=KV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lluH3-0004AD-Du
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 14:12:41 +0000
Received: from mail-lf1-x12b.google.com (unknown [2a00:1450:4864:20::12b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c64919c0-62b5-4f4a-aa49-ab2e3880485d;
 Wed, 26 May 2021 14:12:40 +0000 (UTC)
Received: by mail-lf1-x12b.google.com with SMTP id b26so2761657lfq.4
 for <xen-devel@lists.xenproject.org>; Wed, 26 May 2021 07:12:40 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c64919c0-62b5-4f4a-aa49-ab2e3880485d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=xOdUrHgzqsp1HZ0QrFZmpPzALlbxU/wNHYDvsX5bHK8=;
        b=Ck/c2wM600RTobbsJFz5z3CPRR4f0/DGN0TFzj1Zok6aK0WPBJv1I17+gQsiIWLiAd
         0Sv41yHRF5xFGvoiog3vpHEy3Kd274a0auIl8BVyIKYSp+/X+2v97HiTV+XjyrpOzVMh
         c+s4UUCILKLq/uARYHcAXCegwJcCehaunrGgzhMq3rlv/rXw3CSxYIWMAUXkmy72AnBf
         OGndZ+/9/+5mwbqtSo5I+28EICjXncg2dkrvDy17cwdlGpsUip89RqbYzdcfZl23CjR0
         V/TVWWHIINmlwdoKpReUoIDynOmuhUjO+iq8Bo509m8bDuCNLIVt0YMULGjcwf63G03b
         9k5w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=xOdUrHgzqsp1HZ0QrFZmpPzALlbxU/wNHYDvsX5bHK8=;
        b=c00kylmL0Fdg/NLuHNGf5UG9zkdrg5Lt2oMn8RWuodB4i8EtarpfdoLAVYxdSXd0G+
         X6bHx48INO1Wq3YMJEFKHDCe646KbVj68hV8GMmuarAzjopRFuei4HNwTZuazoX+n6/H
         EoqLeYAtxc1rB5RJCzeER2gNcJltY/ihTqVwx+SHRLvm+9O2F6xDRzYjY5WFFsdFrUWa
         cOcI6pFJoJJXqZzufMXYuUJgrUJr/C6gUbhP02yVfkYVeb34fahXisJ2Sh72Acd2rbkd
         LJdTfcYfkMW2wgfa3X00w8uRZJ0uH1q+cRy12P8n2tM4PlqdrENMO2/ZqEqULtmcNwdz
         hevw==
X-Gm-Message-State: AOAM533IH+l2caAWoB5b9T5K599KC01nYXYKrwjOtEhrN3LjLiaZlV9u
	kEBH8Tnzpyn4R0C/gwA5o28Kg2bv+sSnhKX2r0c=
X-Google-Smtp-Source: ABdhPJxiPvHexGKtrCXzK6nJWEA/ieEaRekMbTE4/Wqsb+1pqfB6MfBxj3G8HTS5ChATQr4//46bpvF7USQjbP4rqkk=
X-Received: by 2002:a05:6512:3c91:: with SMTP id h17mr2232729lfv.562.1622038359489;
 Wed, 26 May 2021 07:12:39 -0700 (PDT)
MIME-Version: 1.0
References: <20210503192810.36084-1-jandryuk@gmail.com> <20210503192810.36084-2-jandryuk@gmail.com>
 <927b886a-9b0c-2162-763b-9c2147227b8c@suse.com>
In-Reply-To: <927b886a-9b0c-2162-763b-9c2147227b8c@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 May 2021 10:12:27 -0400
Message-ID: <CAKf6xptZ=tHUUX+NXMfUPz_=wJJz6_FbEG6BraXRgcRokK5bcg@mail.gmail.com>
Subject: Re: [PATCH 01/13] cpufreq: Allow restricting to internal governors only
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, May 26, 2021 at 9:18 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.05.2021 21:27, Jason Andryuk wrote:
> > For hwp, the standard governors are not usable, and only the internal
> > one is applicable.
>
> So you say "one" here but use plural in the subject. Which one is
> it (going to be)?

hwp only uses a single governor, but this is common code.  AMD or ARM
could require their own internal governors, which is why the subject
uses the plural.

> >  Add the cpufreq_governor_internal boolean to
> > indicate when an internal governor, like hwp-internal, will be used.
> > This is set during presmp_initcall, so that it can suppress governor
>
> DYM s/is/will be/? Afaict this is going to happen later in the series.
> Which is a good indication that such "hanging in the air" changes
> aren't necessarily the best way of splitting a set of changes, ...

In terms of the patch series, yes, "will be".  The use of "is" was
meant to direct how the feature is used.  Yes, it is "hanging in the
air", but I was trying to explain the "why" and "how" of using it.

I was trying to split this preparatory change from the actual hwp
introduction.  I suppose it could be ordered after hwp, and the extra,
unusable governors would be advertised until then.

> > --- a/xen/drivers/cpufreq/cpufreq.c
> > +++ b/xen/drivers/cpufreq/cpufreq.c
> > @@ -57,6 +57,7 @@ struct cpufreq_dom {
> >  };
> >  static LIST_HEAD_READ_MOSTLY(cpufreq_dom_list_head);
> >
> > +bool __read_mostly cpufreq_governor_internal;
>
> ... also supported by you introducing a non-static variable without
> any consumer outside of this file (and without any producer at all).
>
> > @@ -122,6 +123,9 @@ int __init cpufreq_register_governor(struct cpufreq_governor *governor)
> >      if (!governor)
> >          return -EINVAL;
> >
> > +    if (cpufreq_governor_internal && strstr(governor->name, "internal") == NULL)
>
> I wonder whether designating any governors ending in "-internal"
> wouldn't be less prone for possible future ambiguities.

Yes, that would be good.
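A suffix check along the lines Jan suggests might look like this (a
sketch only; governor_is_internal is a hypothetical helper, not
existing Xen code):

```c
#include <stdbool.h>
#include <string.h>

/*
 * Sketch of matching a trailing "-internal" suffix, instead of
 * strstr() matching "internal" anywhere in the governor name.
 * governor_is_internal() is a hypothetical helper.
 */
static bool governor_is_internal(const char *name)
{
    static const char suffix[] = "-internal";
    size_t len = strlen(name);
    size_t slen = sizeof(suffix) - 1;

    /* Accept only names that end in "-internal". */
    return len >= slen && !strcmp(name + len - slen, suffix);
}
```

With something like this, cpufreq_register_governor() would not be
fooled by a future governor that merely contains "internal" somewhere
in its name.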

> > --- a/xen/include/acpi/cpufreq/cpufreq.h
> > +++ b/xen/include/acpi/cpufreq/cpufreq.h
> > @@ -115,6 +115,7 @@ extern struct cpufreq_governor cpufreq_gov_dbs;
> >  extern struct cpufreq_governor cpufreq_gov_userspace;
> >  extern struct cpufreq_governor cpufreq_gov_performance;
> >  extern struct cpufreq_governor cpufreq_gov_powersave;
> > +extern bool cpufreq_governor_internal;
>
> Please separate from the governor declarations by a blank line.

Sure.

> Sorry, all quite nit-like remarks, but still ...

It's fine.  Would a design session be useful to discuss hwp?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 26 14:20:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 14:20:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132539.247181 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lluOA-00051y-IC; Wed, 26 May 2021 14:20:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132539.247181; Wed, 26 May 2021 14:20:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lluOA-00051P-CL; Wed, 26 May 2021 14:20:02 +0000
Received: by outflank-mailman (input) for mailman id 132539;
 Wed, 26 May 2021 14:20:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqRz=KV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lluO8-0004ty-64
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 14:20:00 +0000
Received: from mail-lj1-x234.google.com (unknown [2a00:1450:4864:20::234])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6353e0e5-2e27-49af-b3fa-cd64bca7f290;
 Wed, 26 May 2021 14:19:59 +0000 (UTC)
Received: by mail-lj1-x234.google.com with SMTP id v5so1850718ljg.12
 for <xen-devel@lists.xenproject.org>; Wed, 26 May 2021 07:19:58 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6353e0e5-2e27-49af-b3fa-cd64bca7f290
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=D4hFRvfhndbQ0kDZcfbLCf6Wo7jdMEsACo6DVcaA1gg=;
        b=nqvAn2KiezJBX+v6TEIiWmWQktpeX1QaiuImizwFyEg5qIxf2TY6OKd6jLXSJwa7/t
         8nXPvium9kBZu9eUKrNgPFjJ8jC0DK3hUfADxkwwEZO/RFoKEVJKgJGz4zq+/34TQQKe
         ixYVULUiSav1YQHvS8S3s9UrL9iakQTPISQyugQP6RjS8lX79xGv5JgfaYGFnXMmNIQ+
         pGCNNBo3Xx7Kr0ki4wNORTVZuagZ46Oyx9rYX5/tKomhdesR7OM5kYovkLD5UC0uGqxI
         yFw82xze5ZxNep+eaGY2AnBiXRuhud+keTpxLodlYj05BVQKW3HrYr9dTJ80l5ic5lG3
         3r3A==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=D4hFRvfhndbQ0kDZcfbLCf6Wo7jdMEsACo6DVcaA1gg=;
        b=ZakS5Ux3IQ7zv+1z0PkNQMWe34wtLrIasZ1YAKONb9fHJShggoVAQSg7vhnP4Y5/7y
         GS8rYVnoTqwXNz11l8J7Ha6b1IWzU8AhhdQuKhV8RshEF95b4pvLKhxyc/TBhjNCNK22
         QJoNvmpxV/REc3gRacgmXyNUvGWFmbbTfXK5/aedOwXIw/eqPZq8iIpcWUVBpWdD58vR
         dgAO9q+bFzn0WzkstPdNir8CLTkowWaDjL/zP1RkAPWmuWdsoAjkMLfBnWsMKc5ls9j4
         UfmO6ix3fiGd4lg5Bx5XgZrG6+9lp/xQ0jiapJ/3M6hmag5jhQRw/wrdST0U/lpqhbOX
         OrYQ==
X-Gm-Message-State: AOAM533mka1fIGrM3Q8MxPzJoXarfTYgQsdp9AShWcsav3elM38zKUOO
	8g/lCKYA+yCNxtMHDbKsZUZAQfXkYFVOcU4Flpc=
X-Google-Smtp-Source: ABdhPJyJ+tggwotGdFo6o3KFtsI+wNVRN8GDWuYyaKW3RACTfl7+yrDuPjolnsA4AH6gvxvlqgBa78SirESMBmE+BRs=
X-Received: by 2002:a05:651c:1a7:: with SMTP id c7mr2395805ljn.77.1622038797873;
 Wed, 26 May 2021 07:19:57 -0700 (PDT)
MIME-Version: 1.0
References: <20210503192810.36084-1-jandryuk@gmail.com> <20210503192810.36084-3-jandryuk@gmail.com>
 <6f3d3833-8540-ca92-8d1c-e4b7bd2217ce@suse.com>
In-Reply-To: <6f3d3833-8540-ca92-8d1c-e4b7bd2217ce@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 May 2021 10:19:46 -0400
Message-ID: <CAKf6xpsqeNPtGQ-4f7oc5idZTHurMWxen2H94LNzHdkXGmC7uw@mail.gmail.com>
Subject: Re: [PATCH 02/13] cpufreq: Add perf_freq to cpuinfo
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, May 26, 2021 at 9:24 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.05.2021 21:27, Jason Andryuk wrote:
> > acpi-cpufreq scales the aperf/mperf measurements by max_freq, but HWP
> > needs to scale by base frequency.  Setting max_freq to base_freq
> > "works" but the code is not obvious, and returning values to userspace
> > is tricky.  Add an additional perf_freq member which is used for scaling
> > aperf/mperf measurements.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
> > ---
> > I don't like this, but it seems the best way to re-use the common
> > aperf/mperf code.  The other option would be to add wrappers that then
> > do the acpi vs. hwp scaling.
>
> Not sure I understand what you mean by "wrappers". I would assume that
> for hwp you simply install a different getavg hook? Or else I guess
> I'd need to see at least an outline of what you see as the alternative.

Something like a common get_measured_perf() returning just the
aperf/mperf ratio with cpufreq_driver specific getavg() scaling as
appropriate.
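
As a rough sketch of that split (all names and the fixed-point
RATIO_SCALE are hypothetical, not the actual Xen interfaces):

```c
#include <stdint.h>

/*
 * Sketch: a common helper returns the raw aperf/mperf ratio in fixed
 * point, and each driver's getavg() applies its own reference
 * frequency (max_freq for acpi-cpufreq, base frequency for HWP).
 * All names and RATIO_SCALE are hypothetical.
 */
#define RATIO_SCALE 1000ULL

static uint64_t measured_perf_ratio(uint64_t aperf, uint64_t mperf)
{
    /* Guard against a zero mperf sample; treat it as a ratio of 1.0. */
    return mperf ? aperf * RATIO_SCALE / mperf : RATIO_SCALE;
}

static uint64_t acpi_getavg(uint64_t aperf, uint64_t mperf, uint64_t max_freq)
{
    return max_freq * measured_perf_ratio(aperf, mperf) / RATIO_SCALE;
}

static uint64_t hwp_getavg(uint64_t aperf, uint64_t mperf, uint64_t base_freq)
{
    return base_freq * measured_perf_ratio(aperf, mperf) / RATIO_SCALE;
}
```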

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 26 14:44:50 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 14:44:50 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132548.247191 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llum3-00081W-KM; Wed, 26 May 2021 14:44:43 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132548.247191; Wed, 26 May 2021 14:44:43 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llum3-00081P-GZ; Wed, 26 May 2021 14:44:43 +0000
Received: by outflank-mailman (input) for mailman id 132548;
 Wed, 26 May 2021 14:44:42 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqRz=KV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1llum2-00081J-Ly
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 14:44:42 +0000
Received: from mail-lj1-x232.google.com (unknown [2a00:1450:4864:20::232])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5e6838ac-72a5-43da-8a49-68949570e4bb;
 Wed, 26 May 2021 14:44:41 +0000 (UTC)
Received: by mail-lj1-x232.google.com with SMTP id b12so2006098ljp.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 May 2021 07:44:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5e6838ac-72a5-43da-8a49-68949570e4bb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=DmGzLwWpMtm0N3bYg4lgK+/MnGvyjKcNNzXT2Jsec8U=;
        b=oK9p4uaO8rtf2B1z1LPohkBBsqYgYizsoCSTMPPnGXnRowp4cSSjpbLP6gf0qUQFfJ
         uyw7/h+ftCp+mxtaUSFQJqoDyvdrEODXKBR6fxxRWvHyqtixAY8rL15N6zUYqzoTfxPE
         cAFBQplRyMOlHqlfZBKrFiRkfVXAvXv0f5MKpT+fy5k6KFP5lwrtVpHJ0pLLbXvlrmSn
         zYR6YKiwesDZu73moOp94z6EjlV6eVnxVP0fCO/ktXcI6a3jxohsXlWD4qe65Q/5Sphy
         lNxEJ2ccqHRHNP4WOlbhg/8YR+0JO7DkQv5Lly8vDPdPn4x+PhYTFRXFJHtYLyqj4JAz
         hcYA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=DmGzLwWpMtm0N3bYg4lgK+/MnGvyjKcNNzXT2Jsec8U=;
        b=sGLsls0iDJsOSK+oVF4Ygb/+nXLrPazCo7tw/MflmYvkexkjsVUh6Wrr8LrZ1wucgS
         78g6Gclp3kVCxMCZrdY2DemipM2594MLJ4VQDPMXCthP624yn59Zwj/X2x+kKNa2vnmV
         HyG0tU4114nLBSdZ2eTR1EkXQq7g/29cim2L801bk8+XDBLQaB5+sROtZNjEnUAFIpYX
         4lVzMTzqkbYwtexQJCUtIQaiq9JJaIoesCVM8ZN8kVH4m7kT3ZtdqJXd+NIlUIUXjs9T
         b2Aa2b7UPtUfK634ik+P/SH+Ajy+NVtTS2M6EWTUsROAHnfQG/kqbOj6NE3LiTO0OZ+S
         d+Rw==
X-Gm-Message-State: AOAM531dI0CwgRuebwgvbofs8vr+NOI9Hv7o6VF2s0I4vykrfXJdb794
	2pPOK6PpicbpG5pkBLXu8hdVkewqja1sm4F68/U=
X-Google-Smtp-Source: ABdhPJwYyoTPpDN0XuFHa+XLU7foSWuX37LIl8ZNLYF0dx019FjBmT7yHeC4GHPMMR173lVNjGdm4yLwpbnjaJRks3k=
X-Received: by 2002:a05:651c:232:: with SMTP id z18mr2578241ljn.489.1622040280623;
 Wed, 26 May 2021 07:44:40 -0700 (PDT)
MIME-Version: 1.0
References: <20210503192810.36084-1-jandryuk@gmail.com> <20210503192810.36084-4-jandryuk@gmail.com>
 <68c32d6e-8c6f-35da-c9cd-a560d3d6895b@suse.com>
In-Reply-To: <68c32d6e-8c6f-35da-c9cd-a560d3d6895b@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 May 2021 10:44:28 -0400
Message-ID: <CAKf6xpuni2=Ud9hojAn2U_aBEVQHNU7KkR9sG8WM6RMCYOnf7Q@mail.gmail.com>
Subject: Re: [PATCH 03/13] cpufreq: Export intel_feature_detect
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, May 26, 2021 at 9:27 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.05.2021 21:28, Jason Andryuk wrote:
> > Export feature_detect as intel_feature_detect so it can be re-used by
> > HWP.
> >
> > Signed-off-by: Jason Andryuk <jandryuk@gmail.com>
>
> Acked-by: Jan Beulich <jbeulich@suse.com>
> albeit ...
>
> > --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
> > +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
> > @@ -340,7 +340,7 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
> >      return extract_freq(get_cur_val(cpumask_of(cpu)), data);
> >  }
> >
> > -static void feature_detect(void *info)
> > +void intel_feature_detect(void *info)
> >  {
> >      struct cpufreq_policy *policy = info;
>
> ... because of this (requiring the hwp code to stay in sync with
> possible changes here, without the compiler being able to point
> out inconsistencies) I'm not overly happy with such a change. Yet
> I guess this isn't the first case we have in the code base.

For acpi-cpufreq, this is called by on_selected_cpus(), but hwp calls
this directly.  You could do something like:

void intel_feature_detect(struct cpufreq_policy *policy)
{
    /* current feature_detect() */
}

static void feature_detect(void *info)
{
    struct cpufreq_policy *policy = info;

    intel_feature_detect(policy);
}

Would you prefer that?

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 26 15:00:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 15:00:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132560.247207 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llv0w-0001sc-1T; Wed, 26 May 2021 15:00:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132560.247207; Wed, 26 May 2021 15:00:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llv0v-0001sV-T2; Wed, 26 May 2021 15:00:05 +0000
Received: by outflank-mailman (input) for mailman id 132560;
 Wed, 26 May 2021 15:00:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llv0u-0001YW-7B
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 15:00:04 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8495a6e8-5733-493d-841d-52bf67dd7ffb;
 Wed, 26 May 2021 15:00:02 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by smtp-out1.suse.de (Postfix) with ESMTP id 4A16B218B1;
 Wed, 26 May 2021 15:00:01 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 0A07F11A98;
 Wed, 26 May 2021 15:00:01 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8495a6e8-5733-493d-841d-52bf67dd7ffb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622041201; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Uu12p+w5DaHnHdpZ5rnZx79/9gKgy3RKsXqKxmhmL8g=;
	b=RQeXFfgVGCBR/NBobEKza8y+2SmanEmqWhx3wQ9mRazdt5R2qLFR9QLTc7Srq3BG60gULS
	Gb9R7hz2XiiRXWkopy++LND0/C1BoUbAUG5x3iaOZ0uEz5JE3E4Xl05wlqev2xaGFEMzAr
	lCtciTdryBzBC7sU9Jn2Vb357fDpu64=
Subject: Re: [PATCH 04/13] cpufreq: Add Hardware P-State (HWP) driver
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-5-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <1747789a-ab6c-cdae-ed35-a6b81ac580a9@suse.com>
Date: Wed, 26 May 2021 16:59:57 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-5-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> If HWP Energy_Performance_Preference isn't supported, the code falls
> back to IA32_ENERGY_PERF_BIAS.  Right now, we don't check
> CPUID.06H:ECX.SETBH[bit 3] before using that MSR.

May I ask what problem there is doing so?

>  The SDM reads like
> it'll be available, and I assume it was available by the time Skylake
> introduced HWP.

The SDM documents the MSR's presence back to at least Nehalem, but ties
it to the CPUID bit even there.
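
A minimal sketch of gating on that bit (the helper name and the split
between executing CPUID and testing the bit are illustrative; the ECX
value would come from executing CPUID leaf 06H):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Sketch (hypothetical helper): gate use of IA32_ENERGY_PERF_BIAS on
 * CPUID.06H:ECX.SETBH (bit 3), as the SDM ties the MSR to that bit.
 * The caller passes the ECX output of CPUID leaf 06H.
 */
#define CPUID6_ECX_SETBH (1u << 3)

static bool epb_supported(uint32_t cpuid6_ecx)
{
    return cpuid6_ecx & CPUID6_ECX_SETBH;
}
```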

> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -1355,6 +1355,15 @@ Specify whether guests are to be given access to physical port 80
>  (often used for debugging purposes), to override the DMI based
>  detection of systems known to misbehave upon accesses to that port.
>  
> +### hwp (x86)
> +> `= <boolean>`
> +
> +> Default: `true`
> +
> +Specifies whether Xen uses Hardware-Controlled Performance States (HWP)
> +on supported Intel hardware.  HWP is a Skylake+ feature which provides
> +better CPU power management.

Is there a particular reason giving this a top-level option rather
than a sub-option of cpufreq=?

> --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
> +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
> @@ -641,9 +641,12 @@ static int __init cpufreq_driver_init(void)
>      int ret = 0;
>  
>      if ((cpufreq_controller == FREQCTL_xen) &&
> -        (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL))
> -        ret = cpufreq_register_driver(&acpi_cpufreq_driver);
> -    else if ((cpufreq_controller == FREQCTL_xen) &&
> +        (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL)) {
> +        if (hwp_available())
> +            ret = hwp_register_driver();
> +        else
> +            ret = cpufreq_register_driver(&acpi_cpufreq_driver);
> +    } else if ((cpufreq_controller == FREQCTL_xen) &&

I'd prefer if you did this with slightly less code churn, e.g.
(considering that the vendor check isn't really necessary afaict)

    if ((cpufreq_controller == FREQCTL_xen) && hwp_available())
        ret = hwp_register_driver();
    else if ((cpufreq_controller == FREQCTL_xen) &&
             (boot_cpu_data.x86_vendor == X86_VENDOR_INTEL))
        ret = cpufreq_register_driver(&acpi_cpufreq_driver);
    ...

> --- /dev/null
> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> @@ -0,0 +1,533 @@
> +/*
> + * hwp.c cpufreq driver to run Intel Hardware P-States (HWP)
> + *
> + * Copyright (C) 2021 Jason Andryuk <jandryuk@gmail.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License as published by
> + * the Free Software Foundation; either version 2 of the License, or (at
> + * your option) any later version.
> + *
> + * This program is distributed in the hope that it will be useful, but
> + * WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU
> + * General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License along
> + * with this program; If not, see <http://www.gnu.org/licenses/>.
> + */
> +
> +#include <xen/cpumask.h>
> +#include <xen/init.h>
> +#include <xen/param.h>
> +#include <xen/xmalloc.h>
> +#include <asm/msr.h>
> +#include <asm/io.h>

Nit: Please swap these two for being properly sorted (alphabetically).

> +#include <acpi/cpufreq/cpufreq.h>
> +
> +static bool feature_hwp;
> +static bool feature_hwp_notification;
> +static bool feature_hwp_activity_window;
> +static bool feature_hwp_energy_perf;
> +static bool feature_hwp_pkg_level_ctl;
> +static bool feature_hwp_peci;
> +
> +static bool feature_hdc;
> +static bool feature_fast_msr;
> +
> +bool opt_hwp = true;

Please add __read_mostly or even __initdata as applicable.

> +struct hwp_drv_data
> +{
> +    union
> +    {
> +        uint64_t hwp_caps;
> +        struct
> +        {
> +            uint64_t hw_highest:8;
> +            uint64_t hw_guaranteed:8;
> +            uint64_t hw_most_efficient:8;
> +            uint64_t hw_lowest:8;
> +            uint64_t hw_reserved:32;

I'd like to suggest converting the hw_ prefixes here into ...

> +        };

... naming of the field ("hw") here.

> +    };
> +    union hwp_request curr_req;
> +    uint16_t activity_window;
> +    uint8_t minimum;
> +    uint8_t maximum;
> +    uint8_t desired;
> +    uint8_t energy_perf;
> +};
> +struct hwp_drv_data *hwp_drv_data[NR_CPUS];

New NR_CPUS-dimensioned arrays need explicit justification. From
what I can tell, I can't see why this couldn't be per-CPU data instead.

Also - static?

> +#define hwp_err(...)     printk(XENLOG_ERR __VA_ARGS__)
> +#define hwp_info(...)    printk(XENLOG_INFO __VA_ARGS__)
> +#define hwp_verbose(...)                   \
> +({                                         \
> +    if ( cpufreq_verbose )                 \
> +    {                                      \
> +        printk(XENLOG_DEBUG __VA_ARGS__);  \
> +    }                                      \
> +})
> +#define hwp_verbose_cont(...)              \
> +({                                         \
> +    if ( cpufreq_verbose )                 \
> +    {                                      \
> +        printk(             __VA_ARGS__);  \
> +    }                                      \
> +})

Please omit this as not properly working (we don't have Linux's
printk continuation logic) and as not actually used anywhere. For
hwp_verbose() please omit the unnecessary braces.

> +static int hwp_governor(struct cpufreq_policy *policy,
> +                        unsigned int event)
> +{
> +    int ret;
> +
> +    if ( policy == NULL )
> +        return -EINVAL;
> +
> +    switch (event)

Style: Missing blanks.

> +    {
> +    case CPUFREQ_GOV_START:
> +        ret = 0;
> +        break;
> +    case CPUFREQ_GOV_STOP:
> +        ret = -EINVAL;
> +        break;

Any particular need for this, when you have ...

> +    case CPUFREQ_GOV_LIMITS:
> +        ret = 0;
> +        break;
> +    default:
> +        ret = -EINVAL;

... this (albeit with a missing "break")? Similarly, any particular
reason not to fold the other two?

> +bool hwp_available(void)

The only caller of this function is __init, so the function here should
be, too.

> +{
> +    uint32_t eax;

This could well be unsigned int afaict - see ./CODING_STYLE.

> +    uint64_t val;
> +    bool use_hwp;
> +
> +    if ( boot_cpu_data.cpuid_level < CPUID_PM_LEAF )
> +    {
> +        hwp_verbose("cpuid_level (%u) lacks HWP support\n", boot_cpu_data.cpuid_level);
> +
> +        return false;
> +    }
> +
> +    eax = cpuid_eax(CPUID_PM_LEAF);
> +    feature_hwp                 = !!(eax & CPUID6_EAX_HWP);
> +    feature_hwp_notification    = !!(eax & CPUID6_EAX_HWP_Notification);
> +    feature_hwp_activity_window = !!(eax & CPUID6_EAX_HWP_Activity_Window);
> +    feature_hwp_energy_perf     =
> +        !!(eax & CPUID6_EAX_HWP_Energy_Performance_Preference);
> +    feature_hwp_pkg_level_ctl   =
> +        !!(eax & CPUID6_EAX_HWP_Package_Level_Request);
> +    feature_hwp_peci            = !!(eax & CPUID6_EAX_HWP_PECI);

Please avoid !! unless it's really needed (i.e. when the conversion to
bool isn't implicit anyway). Also elsewhere.

> +    hwp_verbose("HWP: %d notify: %d act_window: %d energy_perf: %d pkg_level: %d peci: %d\n",
> +                feature_hwp, feature_hwp_notification,
> +                feature_hwp_activity_window, feature_hwp_energy_perf,
> +                feature_hwp_pkg_level_ctl, feature_hwp_peci);
> +
> +    if ( !feature_hwp )
> +    {
> +        hwp_verbose("Hardware does not support HWP\n");

This is redundant with the hwp_verbose immediately ahead.

> +        return false;
> +    }
> +
> +    if ( boot_cpu_data.cpuid_level < 0x16 )
> +    {
> +        hwp_info("HWP disabled: cpuid_level %x < 0x16 lacks CPU freq info\n",
> +                 boot_cpu_data.cpuid_level);
> +
> +        return false;

While we commonly insist on blank lines ahead of the main return
statement of a function, we don't normally have such extra blank
lines in cases like this one.

> +    }

Perhaps worth folding with the earlier CPUID level check?

> +    hwp_verbose("HWP: FAST_IA32_HWP_REQUEST %ssupported\n",
> +                eax & CPUID6_EAX_FAST_HWP_MSR ? "" : "not ");
> +    if ( eax & CPUID6_EAX_FAST_HWP_MSR )
> +    {
> +        if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY, val) )
> +            hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY)\n");
> +
> +        hwp_verbose("HWP: MSR_FAST_UNCORE_MSRS_CAPABILITY: %016lx\n", val);

Missing "else" above here?

> +        if (val & FAST_IA32_HWP_REQUEST )

Style: Missing blank.

> +static void hdc_set_pkg_hdc_ctl(bool val)
> +{
> +    uint64_t msr;
> +
> +    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
> +    {
> +        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");

I'm not convinced of the need of having such log messages after
failed rdmsr/wrmsr, but I'm definitely against them getting logged
unconditionally. In verbose mode, maybe.

> +        return;
> +    }
> +
> +    msr = val ? IA32_PKG_HDC_CTL_HDC_PKG_Enable : 0;

If you don't use the prior value, why did you read it? But I
think you really mean to set/clear just bit 0.

> +static void hdc_set_pm_ctl1(bool val)
> +{
> +    uint64_t msr;
> +
> +    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
> +    {
> +        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
> +
> +        return;
> +    }
> +
> +    msr = val ? IA32_PM_CTL1_HDC_Allow_Block : 0;

Same here then, and ...

> +static void hwp_fast_uncore_msrs_ctl(bool val)
> +{
> +    uint64_t msr;
> +
> +    if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL, msr) )
> +        hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL)\n");
> +
> +    msr = val;

... here (where you imply bit 0 instead of using a proper
constant).

Also for all three functions I'm not convinced their names are
in good sync with their parameters being boolean.

> +static void hwp_read_capabilities(void *info)
> +{
> +    struct cpufreq_policy *policy = info;
> +    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
> +
> +    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
> +    {
> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
> +                policy->cpu);
> +
> +        return;
> +    }
> +
> +    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
> +    {
> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
> +
> +        return;
> +    }
> +}

This function doesn't indicate failure to its caller(s), so am I
to understand that failure to read either of the MSRs is actually
benign to the driver?

> +static void hwp_init_msrs(void *info)
> +{
> +    struct cpufreq_policy *policy = info;
> +    uint64_t val;
> +
> +    /* Package level MSR, but we don't have a good idea of packages here, so
> +     * just do it everytime. */

Style: Comment format (also elsewhere).

> +    if ( rdmsr_safe(MSR_IA32_PM_ENABLE, val) )
> +    {
> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_PM_ENABLE)\n", policy->cpu);
> +
> +        return;
> +    }
> +
> +    hwp_verbose("CPU%u: MSR_IA32_PM_ENABLE: %016lx\n", policy->cpu, val);
> +    if ( val != IA32_PM_ENABLE_HWP_ENABLE )
> +    {
> +        val = IA32_PM_ENABLE_HWP_ENABLE;

You should neither depend on reserved bits being zero, nor discard any
non-zero value here, I think.

> +        if ( wrmsr_safe(MSR_IA32_PM_ENABLE, val) )
> +            hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_PM_ENABLE, %lx)\n",
> +                    policy->cpu, val);
> +    }
> +
> +    hwp_read_capabilities(info);

Please pass properly typed arguments (and have properly typed parameters)
wherever possible - here: policy. The exception are callback functions
and alike, where the function type may have to match a sufficiently
generic one.

> +static int hwp_cpufreq_verify(struct cpufreq_policy *policy)
> +{
> +    unsigned int cpu = policy->cpu;
> +    struct hwp_drv_data *data = hwp_drv_data[cpu];
> +
> +    if ( !feature_hwp_energy_perf && data->energy_perf )
> +    {
> +        if ( data->energy_perf > 15 )
> +        {
> +            hwp_err("energy_perf %d exceeds IA32_ENERGY_PERF_BIAS range 0-15\n",
> +                    data->energy_perf);
> +
> +            return -EINVAL;
> +        }
> +    }
> +
> +    if ( !feature_hwp_activity_window && data->activity_window )
> +    {
> +        hwp_err("HWP activity window not supported.\n");

As in the majority of log messages you have, please omit full stops.

> +static void hwp_energy_perf_bias(void *info)
> +{
> +    uint64_t msr;
> +    struct hwp_drv_data *data = info;
> +    uint8_t val = data->energy_perf;
> +
> +    ASSERT(val <= 15);
> +
> +    if ( rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS, msr) )
> +    {
> +        hwp_err("error rdmsr_safe(MSR_IA32_ENERGY_PERF_BIAS)\n");
> +
> +        return;
> +    }
> +
> +    msr &= ~(0xf);

Unnecessary parentheses.

> +static void hwp_write_request(void *info)
> +{
> +    struct cpufreq_policy *policy = info;
> +    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
> +    union hwp_request hwp_req = data->curr_req;
> +
> +    BUILD_BUG_ON(sizeof(union hwp_request) != sizeof(uint64_t));

ITYM

    BUILD_BUG_ON(sizeof(hwp_req) != sizeof(hwp_req.raw));

here?

> +    if ( wrmsr_safe(MSR_IA32_HWP_REQUEST, hwp_req.raw) )
> +    {
> +        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_HWP_REQUEST, %lx)\n",
> +                policy->cpu, hwp_req.raw);
> +        rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw);

What if this one fails, too? data->curr_req.raw then pretty certainly
ends up stale.

> +static int hwp_cpufreq_target(struct cpufreq_policy *policy,
> +                              unsigned int target_freq, unsigned int relation)
> +{
> +    unsigned int cpu = policy->cpu;
> +    struct hwp_drv_data *data = hwp_drv_data[cpu];
> +    union hwp_request hwp_req;
> +
> +    /* Zero everything to ensure reserved bits are zero... */
> +    hwp_req.raw = 0;
> +    /* .. and update from there */
> +    hwp_req.min_perf = data->minimum;
> +    hwp_req.max_perf = data->maximum;
> +    hwp_req.desired = data->desired;

We typically prefer to use initializers to achieve the same effect.
Since the bitfields part is in an unnamed struct, old gcc would
prohibit use of an initializer for all of the assignments, but at
least "raw" can be set in the initializer.

> +    if ( feature_hwp_energy_perf )
> +        hwp_req.energy_perf = data->energy_perf;
> +    if ( feature_hwp_activity_window )
> +        hwp_req.activity_window = data->activity_window;
> +
> +    if ( hwp_req.raw == data->curr_req.raw )
> +        return 0;
> +
> +    data->curr_req.raw = hwp_req.raw;

I think you can omit .raw on both sides.

> +    hwp_verbose("CPU%u: wrmsr HWP_REQUEST %016lx\n", cpu, hwp_req.raw);
> +    on_selected_cpus(cpumask_of(cpu), hwp_write_request, policy, 1);
> +
> +    if ( !feature_hwp_energy_perf && data->energy_perf )
> +    {
> +        on_selected_cpus(cpumask_of(cpu), hwp_energy_perf_bias,
> +                         data, 1);
> +    }

Like elsewhere, please omit unnecessary braces.

> +static int hwp_cpufreq_cpu_init(struct cpufreq_policy *policy)
> +{
> +    unsigned int cpu = policy->cpu;
> +    struct hwp_drv_data *data;
> +
> +    if ( cpufreq_opt_governor )
> +    {
> +        printk(XENLOG_WARNING
> +               "HWP: governor \"%s\" is incompatible with hwp. Using default \"%s\"\n",
> +               cpufreq_opt_governor->name, hwp_cpufreq_governor.name);
> +    }

Same here (and perhaps elsewhere) again.

> +    policy->governor = &hwp_cpufreq_governor;
> +
> +    data = xzalloc(typeof(*data));

Commonly we specify the type explicitly in such cases, rather than using
typeof(). I will admit though that I'm not entirely certain which one's
better. But consistency across the code base is perhaps preferable for
the time being.

> +    if ( !data )
> +        return -ENOMEM;

Is it correct to have set the governor before this error check?

> +    hwp_drv_data[cpu] = data;
> +
> +    on_selected_cpus(cpumask_of(cpu), hwp_init_msrs, policy, 1);
> +
> +    data->minimum = data->hw_lowest;
> +    data->maximum = data->hw_highest;
> +    data->desired = 0; /* default to HW autonomous */
> +    if ( feature_hwp_energy_perf )
> +        data->energy_perf = 0x80;
> +    else
> +        data->energy_perf = 7;

Where's this 7 coming from? (You do mention the 0x80 at least in the
description.)

> +static int hwp_cpufreq_cpu_exit(struct cpufreq_policy *policy)
> +{
> +    unsigned int cpu = policy->cpu;
> +
> +    xfree(hwp_drv_data[cpu]);
> +    hwp_drv_data[cpu] = NULL;

Please don't open-code XFREE().

> +int hwp_register_driver(void)

__init

> +{
> +    int ret;
> +
> +    ret = cpufreq_register_driver(&hwp_cpufreq_driver);
> +
> +    return ret;

Preferably the body would consist of just a single return statement.

> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -101,6 +101,12 @@
>  #define MSR_RTIT_ADDR_A(n)                 (0x00000580 + (n) * 2)
>  #define MSR_RTIT_ADDR_B(n)                 (0x00000581 + (n) * 2)
>  
> +#define MSR_FAST_UNCORE_MSRS_CTL            0x00000657
> +#define  FAST_IA32_HWP_REQUEST_MSR_ENABLE   (_AC(1, ULL) <<  0)
> +
> +#define MSR_FAST_UNCORE_MSRS_CAPABILITY     0x0000065f
> +#define  FAST_IA32_HWP_REQUEST              (_AC(1, ULL) <<  0)
> +
>  #define MSR_U_CET                           0x000006a0
>  #define MSR_S_CET                           0x000006a2
>  #define  CET_SHSTK_EN                       (_AC(1, ULL) <<  0)
> @@ -112,10 +118,20 @@
>  #define MSR_PL3_SSP                         0x000006a7
>  #define MSR_INTERRUPT_SSP_TABLE             0x000006a8
>  
> +#define MSR_IA32_PM_ENABLE                  0x00000770
> +#define  IA32_PM_ENABLE_HWP_ENABLE          (_AC(1, ULL) <<  0)

Please have a blank line after here.

> +#define MSR_IA32_HWP_CAPABILITIES           0x00000771
> +#define MSR_IA32_HWP_REQUEST                0x00000774
> +
>  #define MSR_PASID                           0x00000d93
>  #define  PASID_PASID_MASK                   0x000fffff
>  #define  PASID_VALID                        (_AC(1, ULL) << 31)
>  
> +#define MSR_IA32_PKG_HDC_CTL                0x00000db0
> +#define  IA32_PKG_HDC_CTL_HDC_PKG_Enable    (_AC(1, ULL) <<  0)

I don't think "HDC" twice is helpful?

Also please use all upper case names (actually also for the
CPUID constants higher up).

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 26 15:10:20 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 15:10:20 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132567.247218 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llvAQ-0002lr-VL; Wed, 26 May 2021 15:09:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132567.247218; Wed, 26 May 2021 15:09:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llvAQ-0002lk-SG; Wed, 26 May 2021 15:09:54 +0000
Received: by outflank-mailman (input) for mailman id 132567;
 Wed, 26 May 2021 15:09:53 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llvAP-0002lZ-IQ
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 15:09:53 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 21b8d34e-57f5-4826-9a3f-41e77e0fbd84;
 Wed, 26 May 2021 15:09:52 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by smtp-out1.suse.de (Postfix) with ESMTP id EC6C0218D0;
 Wed, 26 May 2021 15:09:51 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id D7D2611A98;
 Wed, 26 May 2021 15:09:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 21b8d34e-57f5-4826-9a3f-41e77e0fbd84
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622041791; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QLZ+/VjMHIzSsHqjlYN8Qsxcpte4s5NfwIoid2vqaKM=;
	b=f07cChPWTGvSF0haladmTLDgs5659K2fRbC4m2DSOrS4YAMTE2T92xPnqlgs5+sEAGuoNy
	VW83GIOgQGunfEhAxyceHT9MJTijTHmlvoXfEozl0AeAfU6iT3AUGpqfmpQCNJDHctTFBE
	rQIgGKIBYeV1GcWE91ImJ6XM8iHRGyU=
Subject: Re: [PATCH 01/13] cpufreq: Allow restricting to internal governors
 only
To: Jason Andryuk <jandryuk@gmail.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-2-jandryuk@gmail.com>
 <927b886a-9b0c-2162-763b-9c2147227b8c@suse.com>
 <CAKf6xptZ=tHUUX+NXMfUPz_=wJJz6_FbEG6BraXRgcRokK5bcg@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <0701d667-b8f6-5691-5a40-e0e8eff0debb@suse.com>
Date: Wed, 26 May 2021 17:09:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <CAKf6xptZ=tHUUX+NXMfUPz_=wJJz6_FbEG6BraXRgcRokK5bcg@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.05.2021 16:12, Jason Andryuk wrote:
> On Wed, May 26, 2021 at 9:18 AM Jan Beulich <jbeulich@suse.com> wrote:
>> Sorry, all quite nit-like remarks, but still ...
> 
> It's fine.  Would a design session be useful to discuss hwp?

Is there anything beyond patch review that's necessary there? I'm
also not really set up to usefully join design sessions, I'm afraid.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 26 15:11:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 15:11:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132575.247237 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llvCQ-0004AR-FO; Wed, 26 May 2021 15:11:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132575.247237; Wed, 26 May 2021 15:11:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llvCQ-0004AK-B4; Wed, 26 May 2021 15:11:58 +0000
Received: by outflank-mailman (input) for mailman id 132575;
 Wed, 26 May 2021 15:11:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=ZGBu=KV=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1llvCP-0004AA-30
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 15:11:57 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52aee2e4-7622-4a7d-945b-85364e833a7c;
 Wed, 26 May 2021 15:11:56 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by smtp-out2.suse.de (Postfix) with ESMTP id 86A7D1FD2A;
 Wed, 26 May 2021 15:11:55 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 662C511A98;
 Wed, 26 May 2021 15:11:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52aee2e4-7622-4a7d-945b-85364e833a7c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622041915; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=gTKMvUe9ERASua/BgbH1Bv67rkqrl0Ql9ZSgM5GmWYU=;
	b=fxfwCc+uFdvhjLdOJGm1J4xDJ5gz+ZkeJIUCF9tazpLx2oHiXGTN/hy2S8qpF+QG5WVxvx
	Z7V8zOsq9tJ1Gl1n29Nfxv05M0kVHa0ntZBAtcqZyCoq0XmfpBPOPSC6VtJJ+oDohofMZp
	KrFWknqmnRq1XK1n0LOyPTYSboVB1lY=
Subject: Re: [PATCH 03/13] cpufreq: Export intel_feature_detect
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, xen-devel <xen-devel@lists.xenproject.org>
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-4-jandryuk@gmail.com>
 <68c32d6e-8c6f-35da-c9cd-a560d3d6895b@suse.com>
 <CAKf6xpuni2=Ud9hojAn2U_aBEVQHNU7KkR9sG8WM6RMCYOnf7Q@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8525296a-607b-a0b2-94b3-67706764a9c4@suse.com>
Date: Wed, 26 May 2021 17:11:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <CAKf6xpuni2=Ud9hojAn2U_aBEVQHNU7KkR9sG8WM6RMCYOnf7Q@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.05.2021 16:44, Jason Andryuk wrote:
> On Wed, May 26, 2021 at 9:27 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 03.05.2021 21:28, Jason Andryuk wrote:
>>> --- a/xen/arch/x86/acpi/cpufreq/cpufreq.c
>>> +++ b/xen/arch/x86/acpi/cpufreq/cpufreq.c
>>> @@ -340,7 +340,7 @@ static unsigned int get_cur_freq_on_cpu(unsigned int cpu)
>>>      return extract_freq(get_cur_val(cpumask_of(cpu)), data);
>>>  }
>>>
>>> -static void feature_detect(void *info)
>>> +void intel_feature_detect(void *info)
>>>  {
>>>      struct cpufreq_policy *policy = info;
>>
>> ... because of this (requiring the hwp code to stay in sync with
>> possible changes here, without the compiler being able to point
>> out inconsistencies) I'm not overly happy with such a change. Yet
>> I guess this isn't the first case we have in the code base.
> 
> For acpi-cpufreq, this is called by on_selected_cpus(), but hwp calls
> this directly.  You could do something like:
> 
> void intel_feature_detect(struct cpufreq_policy *policy)
> {
>     /* current feature_detect() */
> }
> 
> static void feature_detect(void *info)
>     struct cpufreq_policy *policy = info;
> 
>     intel_feature_detect(policy);
> }
> 
> Would you prefer that?

Would feel less fragile, yes. And no need for the intermediate "policy"
variable, afaics.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 26 15:22:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 15:22:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132587.247259 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llvMQ-00062s-NW; Wed, 26 May 2021 15:22:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132587.247259; Wed, 26 May 2021 15:22:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llvMQ-00062l-Jm; Wed, 26 May 2021 15:22:18 +0000
Received: by outflank-mailman (input) for mailman id 132587;
 Wed, 26 May 2021 15:22:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1llvMP-00062M-Kf
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 15:22:17 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llvML-0003JB-RQ; Wed, 26 May 2021 15:22:13 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llvML-0007OT-LN; Wed, 26 May 2021 15:22:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=zoAuU7FmjXI3cpHS6x/57rn0nuIppVk1pzVGGjtL/V0=; b=QtPZ6TOWQQIVeXARZxJpaCXnoq
	OkPUsoH4inb6iEmh0O4SSuKWlWQo6pkLEBi0qApti48QSeYrYg6YVG6I9vEG+NaMGgeR02epDSQCS
	1JS8WJiNrdHI3+4+Wu2mMMTg/PFrGY1LOFgRROENUJUEgzqTgnUJAbj3T4aEOfOnNDT8=;
Subject: Re: [PATCH v2 0/2] Use const whenever we point to literal strings
 (take 1)
To: xen-devel@lists.xenproject.org
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Jan Beulich <jbeulich@suse.com>, Stefano Stabellini
 <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <20210518140134.31541-1-julien@xen.org>
From: Julien Grall <julien@xen.org>
Message-ID: <d6a96741-6b78-882e-4dcf-cb2439846927@xen.org>
Date: Wed, 26 May 2021 16:22:11 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210518140134.31541-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 18/05/2021 15:01, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Hi all,
> 
> By default, both Clang and GCC will happily compile C code where
> non-const char * point to literal strings. This means the following
> code will be accepted:
> 
>      char *str = "test";
> 
>      str[0] = 'a';
> 
> Literal strings will reside in rodata, so they are not modifiable.
> This will result to an permission fault at runtime if the permissions
> are enforced in the page-tables (this is the case in Xen).
> 
> I am not aware of code trying to modify literal strings in Xen.
> However, there is a frequent use of non-const char * to point to
> literal strings. Given the size of the codebase, there is a risk
> to involuntarily introduce code that will modify literal strings.
> 
> Therefore it would be better to enforce using const when pointing
> to such strings. Both GCC and Clang provide an option to warn
> for such cases (see -Wwrite-strings) and therefore could be used
> by Xen.
> 
> This series doesn't yet make use of -Wwrite-strings because
> the tree is not fully converted. Instead, it contains some easy
> and non-controversial use of const in the code.
> 
> Julien Grall (2):
>    xen/char: console: Use const whenever we point to literal strings
>    tools/console: Use const whenever we point to literal strings

I have committed the two patches.

> 
>   tools/console/client/main.c |  4 ++--
>   tools/console/daemon/io.c   | 15 ++++++++-------
>   xen/drivers/char/console.c  |  7 ++++---
>   3 files changed, 14 insertions(+), 12 deletions(-)
> 

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 26 15:22:22 2021
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH] xen/grant-table: Simplify the update to the per-vCPU maptrack freelist
Date: Wed, 26 May 2021 16:21:52 +0100
Message-Id: <20210526152152.26251-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Since XSA-288 (commit 02cbeeb62075 "gnttab: split maptrack lock to make
it fulfill its purpose again"), v->maptrack_head and v->maptrack_tail
are accessed with the lock v->maptrack_freelist_lock held.

Therefore it is no longer necessary to update the fields using cmpxchg()
or to read them atomically.

Note that there are two cases where v->maptrack_tail is accessed without
the lock. They both happen in _get_maptrack_handle() when the current
vCPU list is empty, so there is no possible race.

The code is now reworked to remove any use of cmpxchg() and read_atomic()
when accessing the fields v->maptrack_{head, tail}.

Take the opportunity to add a comment on top of the lock definition
explaining what it protects.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---

I am not sure whether we should try to protect the remaining unprotected
accesses with the lock, or maybe just add a comment?
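
As a rough sketch of the simplification (assumed names, a simplified LIFO
freelist, and a pthread mutex standing in for Xen's spinlock; the real code
additionally keeps one spare entry so it can append at the tail), plain
reads and writes suffice once every access happens under the same lock:

```c
#include <pthread.h>

#define TAIL (~0u)   /* stands in for MAPTRACK_TAIL */

struct freelist {
    pthread_mutex_t lock;    /* stands in for maptrack_freelist_lock */
    unsigned int head;       /* stands in for v->maptrack_head */
    unsigned int next[8];    /* next[i]: entry following handle i */
};

/* Pop a handle; TAIL means the list is empty. */
static unsigned int get_handle(struct freelist *f)
{
    unsigned int head;

    pthread_mutex_lock(&f->lock);
    head = f->head;                 /* plain read: the lock is held */
    if ( head != TAIL )
        f->head = f->next[head];    /* plain write: no cmpxchg() loop */
    pthread_mutex_unlock(&f->lock);

    return head;
}

/* Push a handle back; for simplicity this pushes on the front,
 * whereas the real code appends at the tail. */
static void put_handle(struct freelist *f, unsigned int handle)
{
    pthread_mutex_lock(&f->lock);
    f->next[handle] = f->head;
    f->head = handle;
    pthread_mutex_unlock(&f->lock);
}
```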
---
 xen/common/grant_table.c | 60 +++++++++++++++-------------------------
 xen/include/xen/sched.h  |  5 +++-
 2 files changed, 27 insertions(+), 38 deletions(-)

diff --git a/xen/common/grant_table.c b/xen/common/grant_table.c
index ab30e2e8cfb6..cac9d1d73446 100644
--- a/xen/common/grant_table.c
+++ b/xen/common/grant_table.c
@@ -543,34 +543,26 @@ double_gt_unlock(struct grant_table *lgt, struct grant_table *rgt)
 static inline grant_handle_t
 _get_maptrack_handle(struct grant_table *t, struct vcpu *v)
 {
-    unsigned int head, next, prev_head;
+    unsigned int head, next;
 
     spin_lock(&v->maptrack_freelist_lock);
 
-    do {
-        /* No maptrack pages allocated for this VCPU yet? */
-        head = read_atomic(&v->maptrack_head);
-        if ( unlikely(head == MAPTRACK_TAIL) )
-        {
-            spin_unlock(&v->maptrack_freelist_lock);
-            return INVALID_MAPTRACK_HANDLE;
-        }
-
-        /*
-         * Always keep one entry in the free list to make it easier to
-         * add free entries to the tail.
-         */
-        next = read_atomic(&maptrack_entry(t, head).ref);
-        if ( unlikely(next == MAPTRACK_TAIL) )
-        {
-            spin_unlock(&v->maptrack_freelist_lock);
-            return INVALID_MAPTRACK_HANDLE;
-        }
+    /* No maptrack pages allocated for this VCPU yet? */
+    head = v->maptrack_head;
+    if ( unlikely(head == MAPTRACK_TAIL) )
+        goto out;
 
-        prev_head = head;
-        head = cmpxchg(&v->maptrack_head, prev_head, next);
-    } while ( head != prev_head );
+    /*
+     * Always keep one entry in the free list to make it easier to
+     * add free entries to the tail.
+     */
+    next = read_atomic(&maptrack_entry(t, head).ref);
+    if ( unlikely(next == MAPTRACK_TAIL) )
+        head = MAPTRACK_TAIL;
+    else
+        v->maptrack_head = next;
 
+out:
     spin_unlock(&v->maptrack_freelist_lock);
 
     return head;
@@ -623,7 +615,7 @@ put_maptrack_handle(
 {
     struct domain *currd = current->domain;
     struct vcpu *v;
-    unsigned int prev_tail, cur_tail;
+    unsigned int prev_tail;
 
     /* 1. Set entry to be a tail. */
     maptrack_entry(t, handle).ref = MAPTRACK_TAIL;
@@ -633,11 +625,8 @@ put_maptrack_handle(
 
     spin_lock(&v->maptrack_freelist_lock);
 
-    cur_tail = read_atomic(&v->maptrack_tail);
-    do {
-        prev_tail = cur_tail;
-        cur_tail = cmpxchg(&v->maptrack_tail, prev_tail, handle);
-    } while ( cur_tail != prev_tail );
+    prev_tail = v->maptrack_tail;
+    v->maptrack_tail = handle;
 
     /* 3. Update the old tail entry to point to the new entry. */
     write_atomic(&maptrack_entry(t, prev_tail).ref, handle);
@@ -650,7 +639,7 @@ get_maptrack_handle(
     struct grant_table *lgt)
 {
     struct vcpu          *curr = current;
-    unsigned int          i, head;
+    unsigned int          i;
     grant_handle_t        handle;
     struct grant_mapping *new_mt = NULL;
 
@@ -686,7 +675,7 @@ get_maptrack_handle(
             maptrack_entry(lgt, handle).ref = MAPTRACK_TAIL;
             curr->maptrack_tail = handle;
             if ( curr->maptrack_head == MAPTRACK_TAIL )
-                write_atomic(&curr->maptrack_head, handle);
+                curr->maptrack_head = handle;
             spin_unlock(&curr->maptrack_freelist_lock);
         }
         return steal_maptrack_handle(lgt, curr);
@@ -716,13 +705,10 @@ get_maptrack_handle(
     lgt->maptrack_limit += MAPTRACK_PER_PAGE;
 
     spin_unlock(&lgt->maptrack_lock);
-    spin_lock(&curr->maptrack_freelist_lock);
-
-    do {
-        new_mt[i - 1].ref = read_atomic(&curr->maptrack_head);
-        head = cmpxchg(&curr->maptrack_head, new_mt[i - 1].ref, handle + 1);
-    } while ( head != new_mt[i - 1].ref );
 
+    spin_lock(&curr->maptrack_freelist_lock);
+    new_mt[i - 1].ref = curr->maptrack_head;
+    curr->maptrack_head = handle + 1;
     spin_unlock(&curr->maptrack_freelist_lock);
 
     return handle;
diff --git a/xen/include/xen/sched.h b/xen/include/xen/sched.h
index 3982167144c6..bd1cb08266d8 100644
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -255,7 +255,10 @@ struct vcpu
     /* VCPU paused by system controller. */
     int              controller_pause_count;
 
-    /* Grant table map tracking. */
+    /*
+     * Grant table map tracking. The lock maptrack_freelist_lock protects
+     * accesses to maptrack_head and maptrack_tail.
+     */
     spinlock_t       maptrack_freelist_lock;
     unsigned int     maptrack_head;
     unsigned int     maptrack_tail;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 26 15:30:17 2021
Subject: Re: [PATCH 05/13] xenpm: Change get-cpufreq-para output for internal
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-6-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a8180fa6-9b7d-52cd-c055-71ca28b08325@suse.com>
Date: Wed, 26 May 2021 17:21:17 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-6-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> --- a/tools/misc/xenpm.c
> +++ b/tools/misc/xenpm.c
> @@ -711,6 +711,7 @@ void start_gather_func(int argc, char *argv[])
>  /* print out parameters about cpu frequency */
>  static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
>  {
> +    bool internal = strstr(p_cpufreq->scaling_governor, "internal");

As suggested for the hypervisor, perhaps better to check for names
ending in "-internal"?
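
Such a suffix check could look something like this (hypothetical helper,
not part of the patch under review):

```c
#include <stdbool.h>
#include <string.h>

/* Hypothetical helper: true iff name ends in "-internal". */
static bool is_internal_governor(const char *name)
{
    static const char suffix[] = "-internal";
    size_t len = strlen(name);

    /* sizeof(suffix) - 1 excludes the terminating NUL. */
    return len >= sizeof(suffix) - 1 &&
           !strcmp(name + len - (sizeof(suffix) - 1), suffix);
}
```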

> @@ -720,10 +721,19 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
>          printf(" %d", p_cpufreq->affected_cpus[i]);
>      printf("\n");
>  
> -    printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
> -           p_cpufreq->cpuinfo_max_freq,
> -           p_cpufreq->cpuinfo_min_freq,
> -           p_cpufreq->cpuinfo_cur_freq);
> +    if ( internal )
> +    {
> +        printf("cpuinfo frequency    : base [%u] turbo [%u]\n",
> +               p_cpufreq->cpuinfo_min_freq,
> +               p_cpufreq->cpuinfo_max_freq);
> +    }
> +    else
> +    {
> +        printf("cpuinfo frequency    : max [%u] min [%u] cur [%u]\n",
> +               p_cpufreq->cpuinfo_max_freq,
> +               p_cpufreq->cpuinfo_min_freq,
> +               p_cpufreq->cpuinfo_cur_freq);
> +    }

Since the file adopts hypervisor coding style, the unnecessary
braces would again be better omitted.

Jan


From xen-devel-bounces@lists.xenproject.org Wed May 26 15:36:50 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162160-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162160: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ad9f25d338605d26acedcaf3ba5fab5ca26f1c10
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 15:36:44 +0000

flight 162160 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162160/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start    fail in 162153 REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start    fail in 162153 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl          13 debian-fixup     fail in 162153 pass in 162160
 test-arm64-arm64-xl-credit2  13 debian-fixup               fail pass in 162153
 test-arm64-arm64-xl-credit1  13 debian-fixup               fail pass in 162153

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                ad9f25d338605d26acedcaf3ba5fab5ca26f1c10
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  298 days
Failing since        152366  2020-08-01 20:49:34 Z  297 days  506 attempts
Testing same since   162153  2021-05-25 19:42:13 Z    0 days    2 attempts

------------------------------------------------------------
6086 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1653453 lines long.)


From xen-devel-bounces@lists.xenproject.org Wed May 26 15:53:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 15:53:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132617.247299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llvqh-0002Kk-Q8; Wed, 26 May 2021 15:53:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132617.247299; Wed, 26 May 2021 15:53:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llvqh-0002Kd-MX; Wed, 26 May 2021 15:53:35 +0000
Received: by outflank-mailman (input) for mailman id 132617;
 Wed, 26 May 2021 15:53:34 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wTMo=KV=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1llvqg-0002KX-4W
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 15:53:34 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 864be4d1-254e-4c9b-a5e6-1c89d23ff4e3;
 Wed, 26 May 2021 15:53:33 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 063E7611CD;
 Wed, 26 May 2021 15:53:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 864be4d1-254e-4c9b-a5e6-1c89d23ff4e3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1622044412;
	bh=OnL/4Jq+LKq6vWvDBS0xA9CvVYxTMOTBjml/zh+3Ar0=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=B4UuFsxu1r3HigmEjVwQF/otX4DrMd38lSSUyIbdumWlwAz4mqWFVwTfX3tV+cLTK
	 ezumxJCPuSoiNTejbR/1l7Q+EyQuGnIdMF026YW9wZDBrp0Etki3m3VqLD3t52Z+ML
	 aIGQuwBPRYJXDmtAfPisyQiC2L9qX6xiRg823CR+7DK1ZJFKUWOyliFfy6nu3kEBra
	 72iE6bgTyqyaVJ438e+wldUwZTA5bJjB+laj7DX9kULouD8NbVtF/6oVvQWdjQCza8
	 vTg4CtjTc7cs0vgmNuLf7hWhlxI/bnFdiE1zs5zIETRjnoBXmZKSOwwvGM2DpAYygJ
	 B11ILWs8v9zvg==
Date: Wed, 26 May 2021 16:53:21 +0100
From: Will Deacon <will@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 14/15] dt-bindings: of: Add restricted DMA pool
Message-ID: <20210526155321.GA19633@willie-the-truck>
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-15-tientzu@chromium.org>
 <20210526121322.GA19313@willie-the-truck>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210526121322.GA19313@willie-the-truck>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> >  		memory-region = <&multimedia_reserved>;
> >  		/* ... */
> >  	};
> > +
> > +	pcie_device: pcie_device@0,0 {
> > +		memory-region = <&restricted_dma_mem_reserved>;
> > +		/* ... */
> > +	};
> 
> I still don't understand how this works for individual PCIe devices -- how
> is dev->of_node set to point at the node you have above?
> 
> I tried adding the memory-region to the host controller instead, and then
> I see it crop up in dmesg:
> 
>   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> 
> but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> 
> What am I missing to make this work with PCIe devices?

Aha, looks like we're just missing the logic to inherit the DMA
configuration. The diff below gets things working for me.

Will

--->8

diff --git a/drivers/of/address.c b/drivers/of/address.c
index c562a9ff5f0b..bf499fdd6e93 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -1113,25 +1113,25 @@ bool of_dma_is_coherent(struct device_node *np)
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
 
-int of_dma_set_restricted_buffer(struct device *dev)
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
 {
-	struct device_node *node;
 	int count, i;
 
-	if (!dev->of_node)
+	if (!np)
 		return 0;
 
-	count = of_property_count_elems_of_size(dev->of_node, "memory-region",
+	count = of_property_count_elems_of_size(np, "memory-region",
 						sizeof(phandle));
 	for (i = 0; i < count; i++) {
-		node = of_parse_phandle(dev->of_node, "memory-region", i);
+		struct device_node *node;
+
+		node = of_parse_phandle(np, "memory-region", i);
 		/* There might be multiple memory regions, but only one
-		 * restriced-dma-pool region is allowed.
+		 * restricted-dma-pool region is allowed.
 		 */
 		if (of_device_is_compatible(node, "restricted-dma-pool") &&
 		    of_device_is_available(node))
-			return of_reserved_mem_device_init_by_idx(
-				dev, dev->of_node, i);
+			return of_reserved_mem_device_init_by_idx(dev, np, i);
 	}
 
 	return 0;
diff --git a/drivers/of/device.c b/drivers/of/device.c
index d8d865223e51..2defdca418ec 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -166,7 +166,7 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
 	if (!iommu)
-		return of_dma_set_restricted_buffer(dev);
+		return of_dma_set_restricted_buffer(dev, np);
 
 	return 0;
 }
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index 9fc874548528..8fde97565d11 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,14 +163,15 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
-int of_dma_set_restricted_buffer(struct device *dev);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
-static inline int of_dma_set_restricted_buffer(struct device *dev)
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
 {
 	return -ENODEV;
 }


From xen-devel-bounces@lists.xenproject.org Wed May 26 16:11:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 16:11:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132624.247309 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llw8A-00056E-9f; Wed, 26 May 2021 16:11:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132624.247309; Wed, 26 May 2021 16:11:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llw8A-000567-6g; Wed, 26 May 2021 16:11:38 +0000
Received: by outflank-mailman (input) for mailman id 132624;
 Wed, 26 May 2021 16:11:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1llw88-000561-Ra
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 16:11:36 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llw87-0004fU-6J; Wed, 26 May 2021 16:11:35 +0000
Received: from 54-240-197-235.amazon.com ([54.240.197.235]
 helo=ufe34d9ed68d054.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_256_GCM_SHA384:256) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llw86-0003AK-T2; Wed, 26 May 2021 16:11:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Message-Id:Date:Subject:Cc:To:From;
	bh=yrQbQMaqTvUXp0yu7HpETMtIwdzmnlOinZ/SB1SX3yg=; b=ioFK1A6SBdCuJWICOD5twMwhhV
	X4lO53Nvhdb3lw4MXFVRySyx484FDltxvE1TakZPLAgE35lz9h6EQpuqN1BCFKYOjeC3fMZ9PPCrp
	2oaKnK3OsxXgVe1TCdcB+ZHgayP295zbTBX2zVFFOKsQkjbtmQMFaDs71VGysWg1UQEw=;
From: Julien Grall <julien@xen.org>
To: xen-devel@lists.xenproject.org
Cc: julien@xen.org,
	Julien Grall <jgrall@amazon.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <george.dunlap@citrix.com>,
	Ian Jackson <iwj@xenproject.org>,
	Jan Beulich <jbeulich@suse.com>,
	Stefano Stabellini <sstabellini@kernel.org>,
	Wei Liu <wl@xen.org>
Subject: [PATCH v2] xen/page_alloc: Remove dead code in alloc_domheap_pages()
Date: Wed, 26 May 2021 17:11:29 +0100
Message-Id: <20210526161129.28572-1-julien@xen.org>
X-Mailer: git-send-email 2.17.1

From: Julien Grall <jgrall@amazon.com>

Since commit 1aac966e24e9 "xen: support RAM at addresses 0 and 4096",
bits_to_zone() will never return 0 and it is expected that we have
a minimum of 2 zones.

Therefore the check in alloc_domheap_pages() is unnecessary and can
be removed. However, for sanity, it is replaced with an ASSERT().

Also take the opportunity to switch from min_t() to min() as
bits_to_zone() cannot return a negative value. The macro is tweaked
to make this clearer.

This bug was discovered and resolved using Coverity Static Analysis
Security Testing (SAST) by Synopsys, Inc.

Signed-off-by: Julien Grall <jgrall@amazon.com>

---
    Changes in v2:
        - Remove BUILD_BUG_ON()
        - Switch from min_t() to min()
---
 xen/common/page_alloc.c | 7 ++++---
 1 file changed, 4 insertions(+), 3 deletions(-)

diff --git a/xen/common/page_alloc.c b/xen/common/page_alloc.c
index ace6333c18ea..958ba0cd9256 100644
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -441,7 +441,7 @@ mfn_t __init alloc_boot_pages(unsigned long nr_pfns, unsigned long pfn_align)
 #define MEMZONE_XEN 0
 #define NR_ZONES    (PADDR_BITS - PAGE_SHIFT + 1)
 
-#define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 1 : ((b) - PAGE_SHIFT))
+#define bits_to_zone(b) (((b) < (PAGE_SHIFT + 1)) ? 1U : ((b) - PAGE_SHIFT))
 #define page_to_zone(pg) (is_xen_heap_page(pg) ? MEMZONE_XEN :  \
                           (flsl(mfn_x(page_to_mfn(pg))) ? : 1))
 
@@ -2336,8 +2336,9 @@ struct page_info *alloc_domheap_pages(
 
     bits = domain_clamp_alloc_bitsize(memflags & MEMF_no_owner ? NULL : d,
                                       bits ? : (BITS_PER_LONG+PAGE_SHIFT));
-    if ( (zone_hi = min_t(unsigned int, bits_to_zone(bits), zone_hi)) == 0 )
-        return NULL;
+
+    zone_hi = min(bits_to_zone(bits), zone_hi);
+    ASSERT(zone_hi != 0);
 
     if ( memflags & MEMF_no_owner )
         memflags |= MEMF_no_refcount;
-- 
2.17.1



From xen-devel-bounces@lists.xenproject.org Wed May 26 16:17:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 16:17:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132631.247320 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llwE8-0005oG-Ue; Wed, 26 May 2021 16:17:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132631.247320; Wed, 26 May 2021 16:17:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llwE8-0005o9-Rc; Wed, 26 May 2021 16:17:48 +0000
Received: by outflank-mailman (input) for mailman id 132631;
 Wed, 26 May 2021 16:17:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1llwE7-0005o3-7L
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 16:17:47 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llwE6-0004m2-6e; Wed, 26 May 2021 16:17:46 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llwE6-0003TH-0E; Wed, 26 May 2021 16:17:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=6D2khYCGcXGlAAyl9wXAEwQtH497WAI1bltKzTZdh+o=; b=vAyWuGf/oDtnm0H4PesELsvIq7
	d0ooTnQ8XT9VXhIp7PUDt4pnZAdQtXQ0SDAk+YOjCfjaxLzvpTTuOj9/6heXyPXj/hZlJXBNovrEv
	wXjHjP356Xysb4k0T6SD6GDLFKkUE59KABx3d1xNxU/fbbFILdZYMJVl7BxHkscSaRgI=;
Subject: Re: [XEN PATCH v1] libxl: use getrandom() syscall for random data
 extraction
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <20210524085858.1902-1-Sergiy_Kibrik@epam.com>
 <13bde708-1d87-a2c7-077f-f08db597ae15@xen.org>
 <AM9PR03MB68362CAC724A6BEE4A50B96AF0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <6f6c29e1-09dc-d7ea-6126-4649100c149d@xen.org>
Date: Wed, 26 May 2021 17:17:44 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <AM9PR03MB68362CAC724A6BEE4A50B96AF0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi,

On 26/05/2021 10:31, Sergiy Kibrik wrote:
> Hi Julien,
> 
>>
>>   From the man:
>>
>> VERSIONS
>>          getrandom() was introduced in version 3.17 of the Linux kernel.
>>    Support was added to glibc in version 2.25.
>>
>> If I am not mistaken glibc 2.25 was released in 2017. Also, the call was only
>> introduced in FreeBSD 12.
>>
>> So I think we want to check if getrandom() can be used. We may also want to
>> consider to fallback to read /dev/urandom if the call return ENOSYS.
>>
> 
> You mean its availability should be checked both at build time and at runtime?

Correct. You can have a libc supporting getrandom() but a kernel that
doesn't provide the syscall.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 26 16:19:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 16:19:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132637.247331 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llwFL-0006Ol-9V; Wed, 26 May 2021 16:19:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132637.247331; Wed, 26 May 2021 16:19:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llwFL-0006Oe-6P; Wed, 26 May 2021 16:19:03 +0000
Received: by outflank-mailman (input) for mailman id 132637;
 Wed, 26 May 2021 16:19:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1llwFJ-0006OS-SA
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 16:19:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llwFI-0004mr-V4; Wed, 26 May 2021 16:19:00 +0000
Received: from [54.239.6.185] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1llwFI-0003YL-PY; Wed, 26 May 2021 16:19:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=qOH8JQ+l9ePPzjcF+2tQRoy8AMjHGYyfSRfbRnD6RJU=; b=3c9xuQQu8iqUR9BPr0oiOTFPlE
	RGr6w9yPzSCGIibw+0+WneLhQ9/IYQApKG5kjPxh4IKZvWSLRloOuJtZIyF0abvtRvp0BGWYFk1PW
	LW1nt78eYsKwHn4RdFGCDWJgMhqRLZLByqGs7P+rXOm152NCmslqxoPY7YPlrPLYibhg=;
Subject: Re: [XEN PATCH v1] libxl/arm: provide guests with random seed
To: Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
References: <20210524080057.1773-1-Sergiy_Kibrik@epam.com>
 <a3c51dea-80e5-df92-3757-72809ad96434@xen.org>
 <AM9PR03MB6836B02970F4F1CAEEAEAD78F0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
From: Julien Grall <julien@xen.org>
Message-ID: <d85d9c25-aff7-43ff-6ae5-04ab394dcd0d@xen.org>
Date: Wed, 26 May 2021 17:18:58 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <AM9PR03MB6836B02970F4F1CAEEAEAD78F0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit



On 26/05/2021 10:28, Sergiy Kibrik wrote:
> Hi Julien,

Hi Sergiy,

>>> diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c index
>>> 34f8a29056..05c58a428c 100644
>>> --- a/tools/libxl/libxl_arm.c
>>> +++ b/tools/libxl/libxl_arm.c
>>> @@ -342,6 +342,12 @@ static int make_chosen_node(libxl__gc *gc, void
>> *fdt, bool ramdisk,
>>>            if (res) return res;
>>>        }
>>>
>>> +    uint8_t seed[128];
>>
>> I couldn't find any documentation for the property (although, I have found
>> code in Linux). Can you explain where the 128 come from?
>   
> I didn't find documentation either; probably that part is just not documented yet.
> This is kind of a tradeoff between the ChaCha20 key size of 32 (which is used in the guest Linux CRNG) and the data size that the host is expected to provide without being blocked or delayed
> (which is 256 according to the getrandom() man page). With a 128-byte seed, each byte of CRNG state will be mixed 4 times using bytes from this seed.

Ok. Can the reasoning be documented in the commit message (with a short 
summary in the code)? This would be helpful if in the future someone 
decides to change the size of the seed.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Wed May 26 16:45:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 16:45:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132645.247343 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llweO-00019B-55; Wed, 26 May 2021 16:44:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132645.247343; Wed, 26 May 2021 16:44:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llweO-000194-1m; Wed, 26 May 2021 16:44:56 +0000
Received: by outflank-mailman (input) for mailman id 132645;
 Wed, 26 May 2021 16:44:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqRz=KV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1llweM-00018u-PB
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 16:44:54 +0000
Received: from mail-lf1-x12d.google.com (unknown [2a00:1450:4864:20::12d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8d084766-a88d-4c71-b77a-decb12bf3ab9;
 Wed, 26 May 2021 16:44:53 +0000 (UTC)
Received: by mail-lf1-x12d.google.com with SMTP id a5so2816471lfm.0
 for <xen-devel@lists.xenproject.org>; Wed, 26 May 2021 09:44:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8d084766-a88d-4c71-b77a-decb12bf3ab9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=l7a8ebK5xXDBzz85eTvWQtSsDhN/B+UPY91CR7nbAek=;
        b=jbMVmxWNjO4SR9rmXolvcNvb5t5FZL5P9dz+foLPSZGJg3CX6fNDG7dNOt1sXd2Cdm
         BANKoh/1PXo+PZO9FYW6TWAM2uiY1Yqae2GBUnoUwWkF9yyz0L7Oel5TIBlncFlgn25C
         DsjbEEoLw1B11LMYMwRMqoz7ROaXeo/1+VV51nViWmNAUSpc2hNGVbMb3wazloIsEujj
         ZxutNdhsnFzxfuzC4ieFgQaDGiUtLvYaAnNj//QHjoZYAAku27L4l7mB6DIfjM/x/4xn
         dWz1z0GuGSP46Iaj2rFhb1KX80QEGfyiGkNBD7uMv1Pl49qVIfSq6WHClSTOHuPVhFcv
         0a1g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=l7a8ebK5xXDBzz85eTvWQtSsDhN/B+UPY91CR7nbAek=;
        b=RhMLDV8Qh6s/e6yHPYxR6mLQU5KH6FuNukyKbwZKMsiE3tn8NPr6R5pYlXcBA5ct0u
         xumFtzPONrBCrO/PSs5DE67tpULcZDpFiNu9LC+eAGkmGAiEnh8tjrZSGw/ai1pwUi7U
         FJfBD2iq1bjlCxE8L8teX4T/q63GWxzJV+Lx6M3dbcIQsgNvcXa4qPZkECglYgzP5mjx
         WAz6Lbz7OUDoM0cgZ4m+I/wNr0Gf7/jxrOZWn79IhvO1zVMHKsMqbdGinuTdXJHpyL5w
         WHXhyzvlILwsfnNyqhpIBLZTOmmVQYodU1YNsbT7eJpygz0rkZz/rjT1Bc/fthYb1Uhm
         /uaA==
X-Gm-Message-State: AOAM531b1+nUp1qgaUA/7+udKoDJ9CQoZl9ba+XNtKIFUP9YuDc1N3kH
	kteCAC839sfe69xFTuunyzVHCZ7BwzBQTxFgw7I=
X-Google-Smtp-Source: ABdhPJwLLldtbvjR8nVWso7kqTUf/b6wd/+b9K6ktzHe9O7K1/jEPBXtoK18HfeeUh1t2PyUxEBvm0P7SlqIWcMiC7w=
X-Received: by 2002:a05:6512:38cd:: with SMTP id p13mr2612709lft.419.1622047492718;
 Wed, 26 May 2021 09:44:52 -0700 (PDT)
MIME-Version: 1.0
References: <20210503192810.36084-1-jandryuk@gmail.com> <20210503192810.36084-2-jandryuk@gmail.com>
 <927b886a-9b0c-2162-763b-9c2147227b8c@suse.com> <CAKf6xptZ=tHUUX+NXMfUPz_=wJJz6_FbEG6BraXRgcRokK5bcg@mail.gmail.com>
 <0701d667-b8f6-5691-5a40-e0e8eff0debb@suse.com>
In-Reply-To: <0701d667-b8f6-5691-5a40-e0e8eff0debb@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 May 2021 12:44:41 -0400
Message-ID: <CAKf6xpsY-ER=i4-Ubf8oJFyCQJx=oj011mKhBCHMktz+y6s=hA@mail.gmail.com>
Subject: Re: [PATCH 01/13] cpufreq: Allow restricting to internal governors only
To: Jan Beulich <jbeulich@suse.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, May 26, 2021 at 11:09 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 26.05.2021 16:12, Jason Andryuk wrote:
> > On Wed, May 26, 2021 at 9:18 AM Jan Beulich <jbeulich@suse.com> wrote:
> >> Sorry, all quite nit-like remarks, but still ...
> >
> > It's fine.  Would a design session be useful to discuss hwp?
>
> Is there anything beyond patch review that's necessary there? I'm
> also not really set up to usefully join design sessions, I'm afraid.

It's not necessary, but I figured I'd offer since Xen Summit is going on.

The part where it might be useful is the hypercall interface.  I
basically exposed all the MSR configuration.  I couldn't think of a
useful abstraction, and this way the full range of configuration is
available.  Additionally, there are some convenience presets to make
it a little more user friendly.

Thank you for reviewing.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 26 16:58:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 16:58:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132652.247354 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llwr6-0002bi-B9; Wed, 26 May 2021 16:58:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132652.247354; Wed, 26 May 2021 16:58:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llwr6-0002bb-86; Wed, 26 May 2021 16:58:04 +0000
Received: by outflank-mailman (input) for mailman id 132652;
 Wed, 26 May 2021 16:58:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=u4Cs=KV=invisiblethingslab.com=marmarek@srs-us1.protection.inumbo.net>)
 id 1llwr5-0002bV-2f
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 16:58:03 +0000
Received: from wout5-smtp.messagingengine.com (unknown [64.147.123.21])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 41afbbfb-03c6-4007-bda8-37d85af4429b;
 Wed, 26 May 2021 16:58:02 +0000 (UTC)
Received: from compute1.internal (compute1.nyi.internal [10.202.2.41])
 by mailout.west.internal (Postfix) with ESMTP id 87D921388;
 Wed, 26 May 2021 12:58:00 -0400 (EDT)
Received: from mailfrontend1 ([10.202.2.162])
 by compute1.internal (MEProxy); Wed, 26 May 2021 12:58:00 -0400
Received: from mail-itl (ip5b434f04.dynamic.kabel-deutschland.de [91.67.79.4])
 by mail.messagingengine.com (Postfix) with ESMTPA;
 Wed, 26 May 2021 12:57:59 -0400 (EDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 41afbbfb-03c6-4007-bda8-37d85af4429b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=
	messagingengine.com; h=cc:content-type:date:from:message-id
	:mime-version:subject:to:x-me-proxy:x-me-proxy:x-me-sender
	:x-me-sender:x-sasl-enc; s=fm2; bh=pKuNCDmr5syPGyt1UBmJ7UEIer4kS
	YVDOklaFBonYqM=; b=mZhy4buIcSFdu+aFWZIlmgB3vPx84vtAQnslt+8dPRHSC
	5tOqC/wwqyrOB8fsgt7/k9oOpOQHwZMDl1wcJqA/fk/OWDM9Y+95g2sR0SlTFmXu
	4ThgBzpguVNdGt/Ni3BDsZ9VS38i+a/46eIHMYc5OCcq1EPWVdNEABS9ha5ZN43R
	GFk6lXmNJRfLDITEV9p8jBeHFvR//YnX8mb8cBhQZm6dQC2gUCdDF5uG7hp7TW0R
	gAyV9ibpjNjOZPN7NBEJ0b1LNTSmMm65EjwjA6p5UmsRG8WfTuga+OzIA4BJGSx7
	36FxvoVxMhiefZiDF88KHWOjNMVD2dU/3lY7CItnw==
X-ME-Sender: <xms:F36uYKl0hW0ZpNa_2-KjRf_e6Nsza_7qkccLYWbwc2rpRg50CgUbrQ>
    <xme:F36uYB2eZOrGQ4pSZ0RvzrGDzZ-UZcHpKVNsHeAVgjYIUBrurHo85zZuEaJzd0viY
    VtG_Qr-bxEm4g>
X-ME-Proxy: <xmx:F36uYIq4ARNSB27UzHWBrJI2mqiYAmlV_9B9hE7B0hqHkv2eik9FbQ>
    <xmx:F36uYOn6orO1ox7O39xfV1wKQHTOkUklg1j_cren0034iWzniYCRog>
    <xmx:F36uYI3v4YKFOFvk2VllOH9YU8qbk3GGUa7V_YU3DdD5buIgK944sA>
    <xmx:GH6uYJiXIQIvZvZTSVoND_b1zd6r2gZ8y3FuruAyf0uJHbY84jlohg>
Date: Wed, 26 May 2021 18:57:55 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
	Doug Goldstein <cardoe@cardoe.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: building docker image in gitlab-ci
Message-ID: <YK5+FGpjgeCUMg6Q@mail-itl>
MIME-Version: 1.0
Content-Type: multipart/signed; micalg=pgp-sha256;
	protocol="application/pgp-signature"; boundary="+7Kp4yILsOXaDwqL"
Content-Disposition: inline


--+7Kp4yILsOXaDwqL
Content-Type: text/plain; protected-headers=v1; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: quoted-printable
Date: Wed, 26 May 2021 18:57:55 +0200
From: Marek =?utf-8?Q?Marczykowski-G=C3=B3recki?= <marmarek@invisiblethingslab.com>
To: Stefano Stabellini <sstabellini@kernel.org>,
	Doug Goldstein <cardoe@cardoe.com>
Cc: xen-devel <xen-devel@lists.xenproject.org>
Subject: building docker image in gitlab-ci

Hi,

Regarding our chat a few minutes ago - this is the part of the .gitlab-ci.yml
that builds and pushes the container that is later used for other tests:

    variables:
      CONTAINER_TEST_IMAGE: ...

    build:
      stage: build
      image: docker/compose:latest
      services:
        - docker:dind
      script:
        - cd ci
        - docker-compose build
        - docker tag <whatever-docker-compose-produced> $CONTAINER_TEST_IMAGE
        - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
        - docker push $CONTAINER_TEST_IMAGE


I use docker-compose here, but it doesn't really matter. It pushes to
the container registry conveniently provided by GitLab too :)

In my case I trigger it via a push to a specific branch, but
connecting it to a schedule should be trivial too.
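Connecting the same job to a scheduled pipeline could look roughly like the sketch below; `rules:` and the predefined `$CI_PIPELINE_SOURCE` variable are standard GitLab CI, while the branch name `containers` is just an illustrative placeholder:

```yaml
build:
  stage: build
  rules:
    # run on pushes to the (hypothetical) "containers" branch ...
    - if: '$CI_COMMIT_BRANCH == "containers"'
    # ... or whenever a pipeline schedule fires
    - if: '$CI_PIPELINE_SOURCE == "schedule"'
```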

-- 
Best Regards,
Marek Marczykowski-Górecki
Invisible Things Lab

--+7Kp4yILsOXaDwqL
Content-Type: application/pgp-signature; name="signature.asc"

-----BEGIN PGP SIGNATURE-----

iQEzBAEBCAAdFiEEhrpukzGPukRmQqkK24/THMrX1ywFAmCufhMACgkQ24/THMrX
1yx4EQf+LIeByJXOTqdZABk5J7hmbshBD48X4HpEySoDbvUT5aTGz/e5uJ/voK0l
O/sO4Sodr1eRms0KoB6qi32PWmjBoqQv+K8dX+qtfGZYP2fOjuSf9BAfF0dl7/aI
kEHoofDkYuIkgUcLL/a8gE4BcPbyssBQCJtGr15CmMvWgcj5E1qkpMBjnqEC/HW7
5Q5+v/a4ksbJ3CXe8nn81mb2Vho/kgs6gVzo/n81igMb7hTUEkp7Ou/2IKFFsCl6
6QoatBkpFpNrlnprYcGLOdZ09ScgxkfnpiJOSceeXii1+/VJNzuxv9u5JGX6l3qY
a1IHSN5O7r4TR1AjORf1ixfjPUz3pg==
=ydnB
-----END PGP SIGNATURE-----

--+7Kp4yILsOXaDwqL--


From xen-devel-bounces@lists.xenproject.org Wed May 26 17:24:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 17:24:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132659.247364 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llxGF-0005mZ-GR; Wed, 26 May 2021 17:24:03 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132659.247364; Wed, 26 May 2021 17:24:03 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llxGF-0005mS-DM; Wed, 26 May 2021 17:24:03 +0000
Received: by outflank-mailman (input) for mailman id 132659;
 Wed, 26 May 2021 17:24:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=JqRz=KV=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1llxGD-0005mK-LK
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 17:24:01 +0000
Received: from mail-lj1-x22b.google.com (unknown [2a00:1450:4864:20::22b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e7add9d-a836-44b3-8762-feaa079bf9c4;
 Wed, 26 May 2021 17:24:00 +0000 (UTC)
Received: by mail-lj1-x22b.google.com with SMTP id a4so2689712ljd.5
 for <xen-devel@lists.xenproject.org>; Wed, 26 May 2021 10:24:00 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e7add9d-a836-44b3-8762-feaa079bf9c4
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=ptbYf6+VtJZPBOlExjwvlvDDRPFS5hF2G30LNOycenM=;
        b=NLr+x7NpFSYauH+pJzUK+pbPs1Gb99ObTSjoaPFEuuiFDHlxxld42kIqbwveHzQ1Mr
         3RPc5S/1L6XFosUyu5SindGfWSDD8nSzA2JHRZCmbry2XBf18cftDNhlIdcpAmMsZpkW
         f1ZzRKbJa+ylA0Yn+0qtqHytbklEIBrgEmmmf5iUHH9Z4U4Bk6XYbIPtJFLaWlRI6UOK
         xfklCTZumL9pk5lBrx57w9EceGfTUzFzqexVuTHfK0eBzfADMtJAVBGVom0onKQam9kp
         FwmgJMS4mR0WUZPyd7OfM4/Za4C1xE20S6MJ9loAbz+58EDkuP1qmlIJik20Jv/YWDx8
         9JRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=ptbYf6+VtJZPBOlExjwvlvDDRPFS5hF2G30LNOycenM=;
        b=QQOjlhqnEpDHOZfQfrvb6TfUHWVDQKef4xOT+t5F6YOddTi5w4AlRvoubGKFqwW69u
         dskT4bnAQZCVWZTl50H7jEAkhCrsoJ3zddRWOzXvqM6ePZfNGuBtXUi7W0P4FvUeZUye
         Vo1SizZX+ipnemV5WP+Dkc3kuQkbcKzg+GwtnYAhpAbupG1wpPu17g/3khhT3rSskSbW
         mTx7Jv/liy7chtUtmeiTqFj4OG12hA7T6F+qtIJGEWXYFWGLLV8iI6ZcYYrKdQV5dimY
         ZZd527K2tPmhCqapV21xX3x6kbXDr97sqqdQRQufX05LbSxb9fnq3PnkE7yyqos3vOvy
         KOoA==
X-Gm-Message-State: AOAM5323+/rbEvfr3TAIHeIKPZPgk62ZciT5fA8NjWMqXhYDzfS2MraT
	/ccW2ZPFKHOnyXlJv+5cJYlLE3JxUduPibcyFu0=
X-Google-Smtp-Source: ABdhPJwcPMD6iJ3xDap8EDEamqU9zO97yN9kyZAHpx5m5BIzxWdXvqrQiXVXSVDr6gqVQR6WqvZDSUu4DK6Uu6ZKMtA=
X-Received: by 2002:a05:651c:1a7:: with SMTP id c7mr3020551ljn.77.1622049839778;
 Wed, 26 May 2021 10:23:59 -0700 (PDT)
MIME-Version: 1.0
References: <ca7a78e5-2ee9-4109-7905-3b9186475f3d@suse.com>
In-Reply-To: <ca7a78e5-2ee9-4109-7905-3b9186475f3d@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Wed, 26 May 2021 13:23:48 -0400
Message-ID: <CAKf6xptQaLw74y5pYuSo9ZXVDoFmCaNByNuKi6vwZ4LnSDoa=w@mail.gmail.com>
Subject: Re: [PATCH] x86: make hypervisor build with gcc11
To: Jan Beulich <jbeulich@suse.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, 
	Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, May 19, 2021 at 11:40 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> Gcc 11 looks to make incorrect assumptions about valid ranges that
> pointers may be used for addressing when they are derived from e.g. a
> plain constant. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100680.
>
> Utilize RELOC_HIDE() to work around the issue, which for x86 manifests
> in at least
> - mpparse.c:efi_check_config(),
> - tboot.c:tboot_probe(),
> - tboot.c:tboot_gen_frametable_integrity(),
> - x86_emulate.c:x86_emulate() (at -O2 only).
> The last case is particularly odd not just because it only triggers at
> higher optimization levels, but also because it only affects one of at
> least three similar constructs. Various "note" diagnostics claim the
> valid index range to be [0, 2⁶³-1].
>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
Tested-by: Jason Andryuk <jandryuk@gmail.com>

Needed on Fedora 34.

Thanks,
Jason


From xen-devel-bounces@lists.xenproject.org Wed May 26 18:31:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 18:31:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132668.247376 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyJ9-00045M-P4; Wed, 26 May 2021 18:31:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132668.247376; Wed, 26 May 2021 18:31:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyJ9-00045F-L8; Wed, 26 May 2021 18:31:07 +0000
Received: by outflank-mailman (input) for mailman id 132668;
 Wed, 26 May 2021 18:31:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UhT1=KV=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1llyJ7-000459-Ui
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 18:31:06 +0000
Received: from userp2130.oracle.com (unknown [156.151.31.86])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d86a9360-563f-4fe7-bca0-6ccd6ef1794e;
 Wed, 26 May 2021 18:31:01 +0000 (UTC)
Received: from pps.filterd (userp2130.oracle.com [127.0.0.1])
 by userp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14QIDeev049286;
 Wed, 26 May 2021 18:30:20 GMT
Received: from userp3020.oracle.com (userp3020.oracle.com [156.151.31.79])
 by userp2130.oracle.com with ESMTP id 38q3q91gqp-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 26 May 2021 18:30:19 +0000
Received: from pps.filterd (userp3020.oracle.com [127.0.0.1])
 by userp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14QIUITa143253;
 Wed, 26 May 2021 18:30:19 GMT
Received: from nam10-mw2-obe.outbound.protection.outlook.com
 (mail-mw2nam10lp2103.outbound.protection.outlook.com [104.47.55.103])
 by userp3020.oracle.com with ESMTP id 38qbqtjejk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 26 May 2021 18:30:18 +0000
Received: from CH0PR10MB5020.namprd10.prod.outlook.com (2603:10b6:610:c0::22)
 by CH2PR10MB4198.namprd10.prod.outlook.com (2603:10b6:610:ab::14)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.26; Wed, 26 May
 2021 18:29:59 +0000
Received: from CH0PR10MB5020.namprd10.prod.outlook.com
 ([fe80::6cb6:faf9:b596:3b9a]) by CH0PR10MB5020.namprd10.prod.outlook.com
 ([fe80::6cb6:faf9:b596:3b9a%7]) with mapi id 15.20.4150.027; Wed, 26 May 2021
 18:29:59 +0000
Received: from [10.74.102.235] (160.34.88.235) by
 SA0PR11CA0008.namprd11.prod.outlook.com (2603:10b6:806:d3::13) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4150.27 via Frontend Transport; Wed, 26 May 2021 18:29:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d86a9360-563f-4fe7-bca0-6ccd6ef1794e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=XA6fdOjYg15DN5gdSLxbx6Is83fSLIQL0nPQobxX5iE=;
 b=bOzHPelmBxprgLSKHTr/fZoPmqDa0l2QbCCeeZ7uAESE/XKPXR6ULGKln/TnbX53oRMw
 E8yYxthzpTKRLJsPhd2IbMrURDOsKoicSU/8OaXjTxI7uCedJD+nbgfJjqlD0RCUKhkz
 9Rc9RwHoLo2vGEehhMVCQhmRuAWMRVpW6dX/oY1HkAI5Eb9wNmmN2Qz/97ErtcL0ZjrN
 vO7Mvz58afUwQr+6NWC1GSRokkMyCFOPnhMj9tdcNymp0su3mcn9Xh6YAsw5d0DBrql5
 PaxjSqUzxTpJtEXWedn2ny1PSC7RgonYPRpluE1XpCNaGKw78btKetKJGhUfJIjAKEUZ fw== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=F91Px+Sn+Rx7KkTfcE9QWa+miYmbiFFjZXobe9bdUAap34WHHoxl7lfyXPMzrj69KHC7tfheBoZg9Iz+XXmk9tH3iMaCI4ZLJE59TOS1OWZUnzRpVTwYfldPzyWjb0PoW+o6+71DosH2GEoBGOHRD7o+Nc8chpMlhLTWrijv/VyXcq+Rqr/278b/W8PoDudPMB4MPThKSpkpSmA2WvbhuYA2u4GehI0ktjaq24t5mVMt4bkZLtVQ7ToSMgSUeFCDFU7hxmUXzl6zhXhfgqaAsF76/GpcQd5qxhpZv7ysNl1OVmS6bmKC56V8eMQg0HmD4gr99GXyHDDv1DqqOnaErw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XA6fdOjYg15DN5gdSLxbx6Is83fSLIQL0nPQobxX5iE=;
 b=h5YsDmP2klRF+hgMUfdCThFunR7TzY3S54+PAQFgX57Ueka6LQu/XLZoSuuk7yTVZnkaOqBDatI1JLVG+h2Tvr8eMeY1QBdPoAVJyN2rqNEEGGjmEg0RjkTtVoq+TSJIJ7o1KVVjCZPMFRfDGiNFQnnf07aVs0YaBdx89oGcMJkn8XKoRM8k1ftC+mV9iBAtQC71N4fp3URyoEtHnMhvuXw+f/XcQvuiBU4wwMpuZSqKW5udhmsnMdkcNWm1ea6Y6MvZDF+d2Ht9mAEWQU8cH86+fGtvizaw6nRNjuEZWR8cKWZC01n0EwTjZckNoMcuyMFcn+tqHvthf8cUWSWj/g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=XA6fdOjYg15DN5gdSLxbx6Is83fSLIQL0nPQobxX5iE=;
 b=VQlG9AbAYZNNRx7UhqL+KnrNWTvIny+sOwNlZ/qJtfMJfBwooEFYZjqRiI2ksfzrG6Ru/vOrX2MXd8zbZ//C9hz9uunyqSmq1I9wox18AbK7KDJIfrwGyB6H3xHcFwV9QqVaoabVRztXdKbVRky3oKTTwa5EJ0xS1vKRamPZemQ=
Authentication-Results: dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com;
 dkim=none (message not signed)
 header.d=none;dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com; dmarc=none
 action=none header.from=oracle.com;
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend
 mode
To: Anchal Agarwal <anchalag@amazon.com>
Cc: "tglx@linutronix.de" <tglx@linutronix.de>,
        "mingo@redhat.com" <mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>,
        "hpa@zytor.com" <hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
        "linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>,
        "linux-mm@kvack.org" <linux-mm@kvack.org>,
        "sstabellini@kernel.org" <sstabellini@kernel.org>,
        "konrad.wilk@oracle.com" <konrad.wilk@oracle.com>,
        "roger.pau@citrix.com" <roger.pau@citrix.com>,
        "axboe@kernel.dk" <axboe@kernel.dk>,
        "davem@davemloft.net" <davem@davemloft.net>,
        "rjw@rjwysocki.net" <rjw@rjwysocki.net>,
        "len.brown@intel.com" <len.brown@intel.com>,
        "pavel@ucw.cz" <pavel@ucw.cz>,
        "peterz@infradead.org" <peterz@infradead.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "vkuznets@redhat.com" <vkuznets@redhat.com>,
        "netdev@vger.kernel.org" <netdev@vger.kernel.org>,
        "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
        Woodhouse@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com,
        David <dwmw@amazon.co.uk>,
        "benh@kernel.crashing.org" <benh@kernel.crashing.org>,
        Shah@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com,
        Amit
 <aams@amazon.de>,
        Agarwal@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
References: <e3e447e5-2f7a-82a2-31c8-10c2ffcbfb2c@oracle.com>
 <20200922231736.GA24215@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <20200925190423.GA31885@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <274ddc57-5c98-5003-c850-411eed1aea4c@oracle.com>
 <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
 <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <33380567-f86c-5d85-a79e-c1cd889f8ec2@oracle.com>
Date: Wed, 26 May 2021 14:29:53 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [160.34.88.235]
X-ClientProxiedBy: SA0PR11CA0008.namprd11.prod.outlook.com
 (2603:10b6:806:d3::13) To CH0PR10MB5020.namprd10.prod.outlook.com
 (2603:10b6:610:c0::22)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: d8547f8d-1d7b-45d9-e57c-08d9207444e5
X-MS-TrafficTypeDiagnostic: CH2PR10MB4198:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: 
	<CH2PR10MB4198D636FB4AA9B68EF4FE298A249@CH2PR10MB4198.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: d8547f8d-1d7b-45d9-e57c-08d9207444e5
X-MS-Exchange-CrossTenant-AuthSource: CH0PR10MB5020.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 May 2021 18:29:59.6118
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 6Kvz4DQEvSLVF5/Qt8cLlGV3BqyKl7jb7RtaRicSCZ8xr84irSI0Ul34j++9eHoFDVRugiYISj4OW5C0DmALYIfdTAX95lxkxwYOwpIs2Sc=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH2PR10MB4198
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9996 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 mlxscore=0 phishscore=0 bulkscore=0
 mlxlogscore=999 malwarescore=0 spamscore=0 suspectscore=0 adultscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105260125
X-Proofpoint-GUID: RpVKjGUs8Bdpy3b6pvQ1sYv1dVa-Z3UO
X-Proofpoint-ORIG-GUID: RpVKjGUs8Bdpy3b6pvQ1sYv1dVa-Z3UO
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9996 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 mlxscore=0 clxscore=1011
 malwarescore=0 bulkscore=0 impostorscore=0 phishscore=0 spamscore=0
 adultscore=0 priorityscore=1501 mlxlogscore=999 lowpriorityscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105260124


On 5/26/21 12:40 AM, Anchal Agarwal wrote:
> On Tue, May 25, 2021 at 06:23:35PM -0400, Boris Ostrovsky wrote:
>> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
>>
>>
>>
>> On 5/21/21 1:26 AM, Anchal Agarwal wrote:
>>>>> What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
>>>>> so Xen still remembers the old physical addresses for the VCPU information, created by the
>>>>> booting kernel. But the hibernation kernel may have different physical
>>>>> addresses for VCPU info, and if a mismatch happens, it may cause issues with resume.
>>>>> During hibernation, the VCPU info register hypercall is not invoked again.
>>>> I still don't think that's the cause but it's certainly worth having a look.
>>>>
>>> Hi Boris,
>>> Apologies for picking this up after last year.
>>> I did a deep dive on the above statement, and that is indeed what is happening.
>>> I did some debugging around KASLR and hibernation using reboot mode.
>>> I observed in my debug prints that whenever the vcpu_info* address for a secondary vcpu assigned
>>> in xen_vcpu_setup() at boot differs from what is in the image, resume gets stuck for that vcpu
>>> in bringup_cpu(). That means we have different addresses for &per_cpu(xen_vcpu_info, cpu) at boot and after
>>> control jumps into the image.
>>>
>>> I failed to get any prints after it got stuck in bringup_cpu() and
>>> I do not have an option to send a sysrq signal to the guest or rather get a kdump.
>>
>> xenctx and xen-hvmctx might be helpful.
>>
>>
>>> This address change is not observed in every hibernate-resume cycle. I am not sure if this is a bug or
>>> expected behavior.
>>> Also, I am contemplating the idea that it may be a bug in Xen code triggered only when
>>> KASLR is enabled, but I do not have substantial data to prove that.
>>> Is it a coincidence that this always happens for the 1st vcpu?
>>> Moreover, since the hypervisor is not aware that the guest is hibernated, and it looks like a regular shutdown to dom0 during reboot mode,
>>> is re-registering vcpu_info for secondary vcpus even plausible?
>>
>> I think I am missing how this is supposed to work (maybe we've talked about this but it's been many months since then). You hibernate the guest and it writes the state to swap. The guest is then shut down? And what's next? How do you wake it up?
>>
>>
>> -boris
>>
> To resume a guest, the guest boots up as a fresh guest and then software_resume()
> is called, which, if it finds a stored hibernation image, quiesces the devices and loads
> the memory contents from the image. The control then transfers to the target kernel.
> This further disables non-boot CPUs; syscore_suspend/resume callbacks are invoked which set up
> the shared_info, pvclock, grant tables, etc. Since the vcpu_info pointer for each
> non-boot CPU is already registered, the hypercall does not happen again when
> bringing up the non-boot CPUs. This leads to inconsistencies, as pointed
> out earlier, when KASLR is enabled.


I'd think the 'if' condition in the code fragment below should always fail, since the hypervisor is creating a new guest, so the hypercall would be issued, just like in the save/restore case.


Do you call xen_vcpu_info_reset() on resume? That will re-initialize per_cpu(xen_vcpu). Maybe you need to add a call to it in xen_syscore_resume().
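For illustration only, a minimal sketch of what that might look like (untested; the existing body of xen_syscore_resume() and the exact iteration are assumptions here, not a patch):

```c
/* Hypothetical sketch: reset the cached per_cpu(xen_vcpu) pointers on
 * resume so that subsequent xen_vcpu_setup() calls re-issue the
 * registration hypercall instead of taking the early-return path. */
static void xen_syscore_resume(void)
{
	int cpu;

	/* ... existing resume work ... */

	for_each_possible_cpu(cpu)
		xen_vcpu_info_reset(cpu);
}
```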


-boris


>
> Thanks,
> Anchal
>>
>>>  I could definitely use some advice to debug this further.
>>>
>>>
>>> Some printk's from my debugging:
>>>
>>> At Boot:
>>>
>>> xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,
>>>
>>> Image Loads:
>>> It ends up in the condition:
>>>  xen_vcpu_setup()
>>>  {
>>>  ...
>>>  if (xen_hvm_domain()) {
>>>         if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
>>>                 return 0;
>>>  }
>>>  ...
>>>  }
>>>
>>> xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0
>>>
>>> This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
>>> in the guest.
>>>
>>> Thanks,
>>> Anchal.
>>>> -boris
>>>>
>>>>


From xen-devel-bounces@lists.xenproject.org Wed May 26 18:31:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 18:31:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132669.247387 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyJE-0004Lq-1j; Wed, 26 May 2021 18:31:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132669.247387; Wed, 26 May 2021 18:31:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyJD-0004Lf-Th; Wed, 26 May 2021 18:31:11 +0000
Received: by outflank-mailman (input) for mailman id 132669;
 Wed, 26 May 2021 18:31:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llyJC-0004LP-9l; Wed, 26 May 2021 18:31:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llyJC-00075j-5n; Wed, 26 May 2021 18:31:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llyJB-0004oN-SR; Wed, 26 May 2021 18:31:09 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llyJB-0002k2-Ru; Wed, 26 May 2021 18:31:09 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=X2nzqcstRCURBZ28eXE3vl+WJuMz8JdHZXMmkwFgLKc=; b=NL5cZzvcGjBJfsF1bolPR943IU
	fHSpqPlo2D7fhjrmB3sUOtSAlSXLrBLSreG6VpvWMvP3RE4gXfnZ9P3R4IuS6b8NjWBDe3KwCFhUQ
	dTBK6tpCZZy2SnrQHyU0OdDUC4FzPjfEtEhmSAFQopwge6Fxlx/oJhIZvoihpIM5FFZ4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162171-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162171: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7c110dd335a17be52549dc4b9dfbfba8165ade40
X-Osstest-Versions-That:
    xen=7793d19bac84cb84d919035faa9aa638f0a33370
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 18:31:09 +0000

flight 162171 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162171/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  7c110dd335a17be52549dc4b9dfbfba8165ade40
baseline version:
 xen                  7793d19bac84cb84d919035faa9aa638f0a33370

Last test of basis   162161  2021-05-26 08:01:40 Z    0 days
Testing same since   162171  2021-05-26 16:00:27 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Julien Grall <jgrall@amazon.com>
  Wei Liu <wl@xen.org>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7793d19bac..7c110dd335  7c110dd335a17be52549dc4b9dfbfba8165ade40 -> smoke


From xen-devel-bounces@lists.xenproject.org Wed May 26 18:35:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 18:35:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132681.247400 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyN8-0005OO-J5; Wed, 26 May 2021 18:35:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132681.247400; Wed, 26 May 2021 18:35:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyN8-0005OH-Fu; Wed, 26 May 2021 18:35:14 +0000
Received: by outflank-mailman (input) for mailman id 132681;
 Wed, 26 May 2021 18:35:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=UhT1=KV=oracle.com=boris.ostrovsky@srs-us1.protection.inumbo.net>)
 id 1llyN6-0005OB-L5
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 18:35:12 +0000
Received: from userp2120.oracle.com (unknown [156.151.31.85])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c7494e9a-6c50-4942-bb81-f630442d4a24;
 Wed, 26 May 2021 18:35:11 +0000 (UTC)
Received: from pps.filterd (userp2120.oracle.com [127.0.0.1])
 by userp2120.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14QIDVud009775;
 Wed, 26 May 2021 18:35:01 GMT
Received: from aserp3020.oracle.com (aserp3020.oracle.com [141.146.126.70])
 by userp2120.oracle.com with ESMTP id 38ptkpa0ef-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 26 May 2021 18:35:01 +0000
Received: from pps.filterd (aserp3020.oracle.com [127.0.0.1])
 by aserp3020.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14QIV2jo133988;
 Wed, 26 May 2021 18:35:00 GMT
Received: from nam02-bn1-obe.outbound.protection.outlook.com
 (mail-bn1nam07lp2042.outbound.protection.outlook.com [104.47.51.42])
 by aserp3020.oracle.com with ESMTP id 38rehdvqut-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Wed, 26 May 2021 18:35:00 +0000
Received: from CH0PR10MB5020.namprd10.prod.outlook.com (2603:10b6:610:c0::22)
 by CH0PR10MB5147.namprd10.prod.outlook.com (2603:10b6:610:c2::18)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4173.21; Wed, 26 May
 2021 18:34:58 +0000
Received: from CH0PR10MB5020.namprd10.prod.outlook.com
 ([fe80::6cb6:faf9:b596:3b9a]) by CH0PR10MB5020.namprd10.prod.outlook.com
 ([fe80::6cb6:faf9:b596:3b9a%7]) with mapi id 15.20.4150.027; Wed, 26 May 2021
 18:34:58 +0000
Received: from [10.74.102.235] (138.3.201.43) by
 BL0PR03CA0010.namprd03.prod.outlook.com (2603:10b6:208:2d::23) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.20 via Frontend Transport; Wed, 26 May 2021 18:34:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c7494e9a-6c50-4942-bb81-f630442d4a24
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=subject : to : cc :
 references : from : message-id : date : in-reply-to : content-type :
 content-transfer-encoding : mime-version; s=corp-2020-01-29;
 bh=QairHb/9rF4TAODfQZwal4qGpowzag+oYTLrPS0g8Bs=;
 b=nxjRzMMu0c3In5BnWCkBzGymodhjtFuHWlMI5N+H5Zyio62Ilbgfhy1JDpvnOstAHJ6Z
 ho7kz2dWIyQT5vThf+aV5BbjiZEIEpUxE+2E9DmSs5XCRwYlUEAi1FQRLuysuv6BlrcM
 3dXTxN6wGUE4ZOIE7P/JJpH4h5nX5DhP3qnmRObe9OB7WZQnaF3YZiLQnB8CmweazjHv
 FpvSJGJcR0eQdB3G3KucSTBKNgYbCN24fE4VLxw0AOjGMYNjO+GRoEg2nj1HEpQIBTta
 M7Z0er9Bq+skPTEfMT5NHF8NLZG457SxjHBmARIvEihYEQ1GiodSQgPOUufhrOu4IOHq fw== 
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=FsLKvJXlwFrzRMkeVo0dTyGwDKf/Ta4iX+GJa4Hs7nKNAjs9ZG6z4qnuFYJWDpA5T+pq212aC1mbKt26DtqnncVta2GeHGo0eJnAn0eNiu1dlO2w7Q//zAwbM6m+FCUPQTi4eaDKGIQsXkh79VvUAkbiTkBF5XIRZOOLAuNheSxs8a8IbyX3IdDkk0tOOXvXPO7b1qTZQOElmD2qoUyLLOoHOzIhsPCSkmyqf1WHGFgDO09TKHeeqEqqfsbrsxgdt/tRm8OYBPadZrw3MmSYou2OeEl8dkD0+A9Vhx4Gw+O0zr15uPXX0Ll0oLqb+EJKUivKGR3txsfPtVbqIox/Yg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QairHb/9rF4TAODfQZwal4qGpowzag+oYTLrPS0g8Bs=;
 b=k3kRkVErn/1rWwBb5WuNRsFP4SS7WzI8IebuO9IO+5XUpbOOnsl8U18Iqi+E6qr2Fic4UpPul49cjPqLIIUHrkvYyp2ajI2szwAzgqw4anE/4WNYfHlb0sWsNhoOx+whR/X7N2NsPT6/PLTkB2WjdiTRdK4Hk6VARs+CWP9m23KPAk/60xJJILhJp77fUpexqYR/Cq1axm106r0f6Hns6QMpA+BfBWqTRddBy8cIHU46QJiktauRyoxj95A5+laJ1TTgmLpG8r/CpNJtUg7Z1XLzV0nxwQAWER3aIcHnlcppvJjFW+BP8Vj1ROF9nEVU14m61Kl4zzOHNU7nHa/YSQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=oracle.com; dmarc=pass action=none header.from=oracle.com;
 dkim=pass header.d=oracle.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=oracle.onmicrosoft.com; s=selector2-oracle-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=QairHb/9rF4TAODfQZwal4qGpowzag+oYTLrPS0g8Bs=;
 b=D7wn4BbmTWonBLAnQWjc3fIGl5ZWEqVv9XehKGHLlC6ilNTY8hTS6e112s3J94x78d0ATQviFflP7riAifK8vphG/hZHLAvVae+7ywz21QmfB0MD3g3sDVmqF+oOZ+QueX1/p6qiySJkihuNKhXCPrVuxHfP0EobSECZdxktB10=
Authentication-Results: vger.kernel.org; dkim=none (message not signed)
 header.d=none;vger.kernel.org; dmarc=none action=none header.from=oracle.com;
Subject: Re: [PATCH -next] xen: Use DEVICE_ATTR_*() macro
To: YueHaibing <yuehaibing@huawei.com>, jgross@suse.com,
        sstabellini@kernel.org
Cc: xen-devel@lists.xenproject.org, linux-kernel@vger.kernel.org
References: <20210526141019.13752-1-yuehaibing@huawei.com>
From: Boris Ostrovsky <boris.ostrovsky@oracle.com>
Message-ID: <4ad216c5-516d-48c6-5653-2f28e4e85b5c@oracle.com>
Date: Wed, 26 May 2021 14:34:55 -0400
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
In-Reply-To: <20210526141019.13752-1-yuehaibing@huawei.com>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 7bit
Content-Language: en-US
X-Originating-IP: [138.3.201.43]
X-ClientProxiedBy: BL0PR03CA0010.namprd03.prod.outlook.com
 (2603:10b6:208:2d::23) To CH0PR10MB5020.namprd10.prod.outlook.com
 (2603:10b6:610:c0::22)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8a175909-7710-419e-81c9-08d92074f738
X-MS-TrafficTypeDiagnostic: CH0PR10MB5147:
X-Microsoft-Antispam-PRVS: 
	<CH0PR10MB5147D8568281268A822A99EF8A249@CH0PR10MB5147.namprd10.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:341;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: 
	iHoluXVsI78OyZSlQIvP8KG8+n+e4bW/FD1gkKF6i36Vj+KmsAQ9yOLwkoWDe4DYJDKEpNf2J5VHNszZyFjsV6J0a3sBVxipgBvZNUl/Wn8Kl7GqaR+i2u8+a/xGm8PZg6pfxDueWJsbHmGSPHOskrvSkBFgVuVjDRElH9FsFXYoMOjPmNL/JBXu6Dr9oLkfpGhIsplUAjo6+9tpgj+PV+9ekoPBaCk9WYRaLr+UL33zf/GtBrEcOddxPOAdsIMEPRVzGUvLgnsdpOMrXDF6J77biXoh2sPaxqxVgS8pOpjrZVmqTIXIB4Gubi+Xv+iaYkGWKzfCcuxDfdnbSLFRjrXoe/u62pdO5+6eYpJMSDOi8eMY4g0puAD7BCxEZx2Q3+mYldiXoJPQc/9TEv21DGwOaMmnWlyVQb06G841ZsKcw5TpU1uGbkRhzRoPsy8J7meloAEGHOw3Vbh9mvunQ44+7uvUCbA/w95eSDkuVZ3RVeLdwDe2GyCAGCZm+zBlYxFkvoXU4q1nViAJ3u5JPgVclVZi7Ud54Y5qB/7DKSGhPlydDSEVjcONbwC1m/dqvGu8dlC59HLiCbfTXOkS8pgS0/+hSlDvXkFLfy6YQwwTEcq+810ylGtwHytjxLfhscJwHNMqhWNuXriOok7dh9S13wPDBjUCIbBUkguroifHtxNq4ueRNG2p5H4qF+gH
X-Forefront-Antispam-Report: 
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:CH0PR10MB5020.namprd10.prod.outlook.com;PTR:;CAT:NONE;SFS:(366004)(396003)(376002)(39860400002)(136003)(346002)(478600001)(6486002)(38100700002)(8936002)(31686004)(4326008)(956004)(16576012)(186003)(2616005)(5660300002)(44832011)(31696002)(16526019)(316002)(86362001)(8676002)(2906002)(66476007)(66556008)(558084003)(66946007)(26005)(53546011)(36756003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: 
	=?utf-8?B?Ui92NlVEV1Y4MGVKVzRieEM0d3A3SVA2Tk5lbkRDR05KalhDK2MyK3RjWmxX?=
 =?utf-8?B?Z0lZZGhDam5CUndESHlxL3JBRGVXVDVIN253cE01VU5zeVFrWTg5YVp5cUNV?=
 =?utf-8?B?dGV4YnV6TkRCa21DNFFycWhEU1Jxb0cxQ09kTnJ3dngwdlRTbElvclQ2d1NW?=
 =?utf-8?B?czI4VlViMG53WGhtcmtHeTZ1aWJkc0ZscDg3ZDlnNnVPOHRNN2ZIbVlSTUcw?=
 =?utf-8?B?S1gzaXEwSG5jQ2E3ejA3djNiU2tWa2ZCbS9LU3llT2tCUFh3Zml6R1BFRFo0?=
 =?utf-8?B?Qm1yd3RwajRCMUNqM3lGd3hxY2ZJeHJVdnpZR3NBNlRTT2g3ZDFsMjExbjFX?=
 =?utf-8?B?NWdhZUlCR1RiVTlVWml1aEtNMmJVRUEvRTJQMmEzelBVYXZiRzBUdUlkZnA0?=
 =?utf-8?B?WFdhVW9XQ1Q5L0UzYXBCQnpGMUg5TnJtNE52REhmaHM0SG9sWTIvTVhMREps?=
 =?utf-8?B?TVdEMjlycVBRdWo4WURpcGpwVEFKeTJBbTA0QkFVbWtkVnpJTGxmcTJXeG5q?=
 =?utf-8?B?RDcwSDZOby96WVd5UEFLUEs0Y1J2OFJEZk1XVVRjNjUxeGVnUWNiNElsMkVj?=
 =?utf-8?B?RVZiL0dERzRTMDJ0Y2hHRWJTN0F0M3doNGNmZ3hpeVNSSDluWHJSTjJxWXBR?=
 =?utf-8?B?RjRIOENzbHk2QVI3UVdaUUljYTJpREdLN215V3gwR0pZM0ZzRmcrb20yYTNM?=
 =?utf-8?B?ZStoM2NZYjhDTXR1RDl1eWJhL2t2ekNCWXBhOGdBcXl4ZklaY1BOK2VObm4y?=
 =?utf-8?B?TXM1UUpSTytjcEF3RlhPWTNGN2pRNlVlZkJOSFFzU1JILzRiNlNQd3d4WTF2?=
 =?utf-8?B?bDBSNFUxY0hRcmRSOHZ3YjR2bzU3YlNOLzg0N2kvcDZYMXQ3L2pyNWVlamNi?=
 =?utf-8?B?MGdrM2taTHBjUnB4T0lYaG5Db2JRMGI0QjhyR3RFcm9NVXBZYzBINzZiSW00?=
 =?utf-8?B?Q0N0K0tlT0kwWndhZGZyc2VpaHluRnE1WDVzSk92WnNBOVRmMExqZUVlTDBy?=
 =?utf-8?B?eGJQdWRRRkw3eUZvaTdCUWVwZHBnS1NLTjJQYUF6aFp5N0REWXlLSHNPc0lr?=
 =?utf-8?B?MlNVRHpVVGJYWHpIUG8zSTNsRCttNnBXQnZYdEsvQW9veituRTFxdXc1N1JJ?=
 =?utf-8?B?K0FyQjRjb2R0OFJSeGN0WllCOVBiVW5XYkh1V3Y4VzVkY1FLandVN1B5VkVZ?=
 =?utf-8?B?Ullmd1Q3SStGaHlSZndwQmVLek5SWGlsb2VWRDRleG11b2FaYis2aFlUQk5Y?=
 =?utf-8?B?ZkFaZUh4akpqZngyd0VKN1hPR2NPcVp3ZnJGRC9kRE53SHpXWTFDdHNTVjVF?=
 =?utf-8?B?Q004NG1SZ3pBdGtUMExxekhmQWVqdmNjL2RaZHl3UFlxRXZ0eTJsdW1rcTZU?=
 =?utf-8?B?bVd4Y1RITTlCSE9vUzNhMzllaHNWeG5pL0JaTFRkblh6SHFHQzFESmpTVU1p?=
 =?utf-8?B?UWdka2xsa2N5N2ZFcDlxU2ljalM1WTZGV2xzdWI2b2M3bFJvbVkvd2hLd1Ex?=
 =?utf-8?B?eCs3QkpSc3RGZjVaMkRSOTI0S0NUT1YwTUtqTmhvMWZHcUxSaDJLeXB0eTZr?=
 =?utf-8?B?dUdPSlE5UUd3LzlNNmFyb3JNbERkSTVNbVE5c3JLTzNlZmkwMTAvOGo0MDd6?=
 =?utf-8?B?NWVCRlF6WU1ubldzMUJNd05Fb0daaVkyK0JWZWNpamV3WUNJV1dsWEZKcG96?=
 =?utf-8?B?TXJycEZyTlIyUVJEMGF1cWNxb0lXOVk1eE00QXQyQUhiRjhVUm9xdlZYdC8w?=
 =?utf-8?Q?/DCvGB3bL0m+dyl92iGLBou2nxMjqk1f1kdCVVm?=
X-OriginatorOrg: oracle.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 8a175909-7710-419e-81c9-08d92074f738
X-MS-Exchange-CrossTenant-AuthSource: CH0PR10MB5020.namprd10.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 26 May 2021 18:34:58.6871
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 4e2c6054-71cb-48f1-bd6c-3a9705aca71b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: fmRKSM5HQZMLOT5hYV29V73PgDF14snxtO3lnL3T2K1vYzeM3DOq6Si4WnD2CiZ8V+rVd7TH8Bc6Ut4r4Si7ZstkN6N0iaP20Ecy982MM2k=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: CH0PR10MB5147
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9996 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 suspectscore=0 spamscore=0 adultscore=0
 mlxscore=0 mlxlogscore=999 malwarescore=0 phishscore=0 bulkscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105260125
X-Proofpoint-GUID: PXTojEHe3qxMV25vs4whMcfkxlq6gtfC
X-Proofpoint-ORIG-GUID: PXTojEHe3qxMV25vs4whMcfkxlq6gtfC
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9996 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 impostorscore=0 mlxscore=0
 suspectscore=0 bulkscore=0 adultscore=0 mlxlogscore=999 phishscore=0
 malwarescore=0 clxscore=1015 lowpriorityscore=0 spamscore=0
 priorityscore=1501 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105260124


On 5/26/21 10:10 AM, YueHaibing wrote:
> Use DEVICE_ATTR_*() helper instead of plain DEVICE_ATTR(),
> which makes the code a bit shorter and easier to read.
>
> Signed-off-by: YueHaibing <yuehaibing@huawei.com>


Reviewed-by: Boris Ostrovsky <boris.ostrovsky@oracle.com>





From xen-devel-bounces@lists.xenproject.org Wed May 26 18:36:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 18:36:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132691.247412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyOl-00063z-41; Wed, 26 May 2021 18:36:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132691.247412; Wed, 26 May 2021 18:36:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyOl-00063s-0I; Wed, 26 May 2021 18:36:55 +0000
Received: by outflank-mailman (input) for mailman id 132691;
 Wed, 26 May 2021 18:36:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8HVS=KV=epam.com=prvs=578065f5e4=sergiy_kibrik@srs-us1.protection.inumbo.net>)
 id 1llyOj-00063j-IH
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 18:36:53 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 229035c7-9a1e-4482-b199-b7d3406db46b;
 Wed, 26 May 2021 18:36:52 +0000 (UTC)
Received: from pps.filterd (m0174681.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14QIUQT6032086; Wed, 26 May 2021 18:36:50 GMT
Received: from eur01-he1-obe.outbound.protection.outlook.com
 (mail-he1eur01lp2059.outbound.protection.outlook.com [104.47.0.59])
 by mx0b-0039f301.pphosted.com with ESMTP id 38sqgxh4bs-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 26 May 2021 18:36:50 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com (2603:10a6:20b:2d8::8)
 by AM0PR03MB3761.eurprd03.prod.outlook.com (2603:10a6:208:45::12)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.27; Wed, 26 May
 2021 18:36:47 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::9151:70c8:8d48:5135]) by AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::9151:70c8:8d48:5135%2]) with mapi id 15.20.4173.020; Wed, 26 May 2021
 18:36:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 229035c7-9a1e-4482-b199-b7d3406db46b
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=Dx9++DZa268rFEuNJMxOnsU/Jhq6muEcAnhauupCpz78gErxhZNLNtxmAIsmU37vP92Q+OKvDZnJQ0iq1yYcJuisiUiARieHDNkCNHx0n6TmTtPcJ1pNlskLP0CeLUaFO4Qb1Oih4SuiijjxKGtbySHUOmbui/dK6IeYtsSEM7llBACIrPhZ4Ou/pWE2Wsgpjd8eDGVry8sTpUT44ZyWrIJT8nWYDaEMNr7Zq4lyoVISyr6CcjS2+QXA3TVb2h+ExFjFZc/w7ZzWSWfo7qNxJwvq3UK46V20woD5/ykskFhUS86aK2z8m6UcHvB8+qT/gAZWtnEG7xGxH5FhjVGtfA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oZBXk8OvgpaKHkCnTUQDjLbAC5s1kAFtzw2yZftH5k4=;
 b=RkaWTG86X4zgQdWiP3t71lLAv8HlCGt1WuDNcFLaHmAcv+IguCPzc0VIpHtvY5FJ501hdJXt5qtFM43+9CSiFnDdiF3TRgI1iaWmNtKogoUiCzYwz7ywlldLmEETSB2kfG3Bz54z/GxEUmpy64ra/GqeTx2VyrLLUnPq73MRYSbPgQGqGzQGSYqHRA3sUAWzea/0kEBuDepwuuSZpH8F4eA4L1NAchoSBn9a1gFy7cGGyn7RtngxDKyYcYzTelTYpysmWzskpcsxdtztgfGAv/RGKThaJlfTgQbbYgT/pLX2Om804MTP94637P1RonbjKLm8sLOtX9UtCy/WrucTmA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=oZBXk8OvgpaKHkCnTUQDjLbAC5s1kAFtzw2yZftH5k4=;
 b=sQsMLIaXhpjdXM9sPqOVbKn0KEkgVmtof/cCxg5GZdJAPhGxuaEljdjFOVcxSgrQSPX7PQeHo92bYDmjHHol9pUJ451REM43nohnhYjGaK7CUwu3TiAM0UfFPA1No1zUFn1drK8yCTmGwTaDwFNi9iq5ZC1ucoIn9q4oytX0cU+Ufw+pdN2/gwW6Zq0nw+DLMEz8WjTYGsWBZvHwDa2koF+hLqB8ffA+9KGBTlCI95s9frjodtnZgnCBcVPv7GVdgv5ltH+tz7B+qI+Sh1FoZjVtnuxigN2Lut5wt0veqx8Q5cujfajo6TBaXAYq0pGVhT46yHguZAzT7I6JfPFlHA==
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: Julien Grall <julien@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Subject: RE: [XEN PATCH v1] libxl/arm: provide guests with random seed
Thread-Topic: [XEN PATCH v1] libxl/arm: provide guests with random seed
Thread-Index: AQHXUJ0pDoWarWZCUEmrgeaODNENvKr1enfAgAB5+ACAACY8wA==
Date: Wed, 26 May 2021 18:36:47 +0000
Message-ID: 
 <AM9PR03MB6836B2CC4729BE5694311918F0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
References: <20210524080057.1773-1-Sergiy_Kibrik@epam.com>
 <a3c51dea-80e5-df92-3757-72809ad96434@xen.org>
 <AM9PR03MB6836B02970F4F1CAEEAEAD78F0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
 <d85d9c25-aff7-43ff-6ae5-04ab394dcd0d@xen.org>
In-Reply-To: <d85d9c25-aff7-43ff-6ae5-04ab394dcd0d@xen.org>
Accept-Language: en-US
Content-Language: ru-RU
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [95.67.114.216]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 3e73e630-c08e-4b27-5c2e-08d92075383f
x-ms-traffictypediagnostic: AM0PR03MB3761:
x-microsoft-antispam-prvs: 
 <AM0PR03MB376189927D338E2B8358AF38F0249@AM0PR03MB3761.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:7691;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-microsoft-antispam-message-info: 
 kxRADPKfYGNpfI8GVvQK3Z3O/qOYE+IUQA+Z20oaA68hwmxTa1HBLY1qBfPQckKXj8gteLc4f4kr7lSAyI5bTR4cyfjnZkgIxaTWx9qUVhatanHcbvWUVRWrNzlCOb5B5SKqngkdDso9jR5T4Ml5HWdjHRku0mS2NKVAFMCws3t/dQjSrdlUeoH5qkPBfp8JpHv/5Kveu6ZK7FNO3jpDGOhudMRAKkMtSXgy3SS+4HQ3jUmgz6Z0vEEKi2XdZBiPjgX3bgYeuTarhs8HK2BUTheKhB6ggfT4LRsYt6H34Kq4I8LENZd75Ok6TWCUgtNab8uCDJhFTyzMPZnCwDJL8U5UHMXF57isXQ67M8EkefXGQsdVJPgacfrV+8TN0VVoklirU5tS+0dWUZo8vBpe9vYtLWXTOl00Yts5hDFbwaBG0G5Kzclv3udAhk7kx8gTwHEeFL5mrOd7+spjgOCVexZW1Gyj6nGLkErU/QtBvrLT3tava6gapm8ay9/4I9C32AM6rVWsx+yNTl7bvzeJfK92h5xOIBdRj+if7gsDjaM+40+Tp1Y0zKGbRvBDW7pA2Rj16pcyhEXQ4svNGhkRL210ZPtkORhjcXjmJHJ5rJo=
x-forefront-antispam-report: 
 CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:AM9PR03MB6836.eurprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(376002)(396003)(136003)(346002)(39840400004)(366004)(66476007)(6506007)(26005)(83380400001)(66946007)(66556008)(316002)(71200400001)(76116006)(33656002)(478600001)(55016002)(64756008)(66446008)(52536014)(110136005)(54906003)(2906002)(9686003)(7696005)(5660300002)(38100700002)(558084003)(122000001)(8936002)(4326008)(86362001)(186003)(8676002);DIR:OUT;SFP:1101;
x-ms-exchange-antispam-messagedata: 
 =?utf-8?B?L3ZGM2VvOEx4eGZ3NzFiNXlFU2ZJMlJTRkRLRmlobExORjZFTFZQaFNaTFZQ?=
 =?utf-8?B?cEdsclYvdHN3V2l5UjB1eStwWERCNGZEQWluY2s3aWxkSWVyN1JGY1l3YkNR?=
 =?utf-8?B?VFV0K2txR0RkNWhBTVV0TXd0L3ExaXlFZFdqSnhnTVVNZExVV1VIbEQ5SlVI?=
 =?utf-8?B?MDhqanJwT0V2VkY3WlNHKys3Wk14dUpGTzlRbUh6M0RnZXBsRGVWS0pOSERs?=
 =?utf-8?B?RGF2SWgrTTh1cTBaOWxJSFpzVFk3bFlhZ0d6NXNIWWxBdWtUZ25yT0kyZUdO?=
 =?utf-8?B?bHBLQmhDOG9XRDBLbVJPQjBRak5OODZsTzV2OTNxQzFtY2JjRXdzZitLSkFv?=
 =?utf-8?B?Vm1lbDRhSHNTRlpaRTgzZElNZkJOaktsTHB1ZVB6SWNoU0dZV3RvZit0amJI?=
 =?utf-8?B?dGFtZjhwRGxVd0EwNU0rTmExMGhpZHdqdkZWRFJBTXYwckpXdDlWRTJ1MEtH?=
 =?utf-8?B?RENkVlVmK3dNblFkc3dzalBmN1ZJRUVkL0xzN1FZYUtyQWFDang3Q1lRQklR?=
 =?utf-8?B?NHhRbW13RyttZ2JyRUp2a0VFSGpPaVZEdTczeFRYdmExL2pqRXFPd0hRLzZS?=
 =?utf-8?B?TE5uRi9wVXVrSDRIN3pGKzJVQ3VQcS9yL05neWdxTW1aNkJFcUtmeEkwelBa?=
 =?utf-8?B?RG9ISTNPaENaUHB3Slo5TjdITm5Ra0d5SzZUYUVMTW9Mck50b2VmV3JoZC9K?=
 =?utf-8?B?TCs0VEJHS2llZHlNbW5hTXNiU2sxMHRwVEc3MVhYc1pRSkRyU3BUYkluYWpQ?=
 =?utf-8?B?UHh2R1JoekcwREhMc1MxcERueUVRWllLLzlJeUZua2R1aTNqZG9vSFFSY3Vu?=
 =?utf-8?B?S09BczhaajdHRG5RRWd6aDRuNWh6dWlxVVA0ZU5RK24xQmJuZysvblBZbXhZ?=
 =?utf-8?B?dTBFdGFYczMrTUxZS0t4WGZjY21JTHl0dHhaMXczL2QvOHFEWUZhcytnalpz?=
 =?utf-8?B?c0ozcmhJb0dEWWJ3czh5NlNrZDhEb0dJZ1d2anJBRGNtL2IrZDQzNmRXWWJp?=
 =?utf-8?B?RFl0ZHA0bnZIMGdsTloweWdmTG5IaUgxVGVPeDk2NnJLRDIyWFFvMm11TmdS?=
 =?utf-8?B?Y2h5VnRmbTlLUCtVWjI2VzB4eW9lak9wNVVuaTNPeExWQk1IY2tBL2x5Uyt3?=
 =?utf-8?B?VTB0STQyNTBIdktzbTl5S1dYQWNqUGhmcWd0RnorOGF0dWFiazJHUkZLdUhD?=
 =?utf-8?B?TDFpdjJNSkMrU3N6NW5KK2xCQVBmL3YxTVlGd1RTSVMyTnZTVko3U2I2S3hs?=
 =?utf-8?B?YmUrYmJZa0hnRUtQM3o0WXJJbHdqOXpHKytUcW5zYmdRSFVVQlp2cmZxWVdR?=
 =?utf-8?B?VC9wTmtFM2xFdFhrNG5CWjVzT2lrWUgvOS9xRDRzeGZrdVFlYkxwVTlQdnRL?=
 =?utf-8?B?Q0pFUWZwTGZvNGdpYk1VbHRHRWV6aFR3S0JkZWRsWGc0bVBBNitSS0VYT24z?=
 =?utf-8?B?U1N3eTdMb2FBTm9kWUp5dnltOWZTSVYxNXEyRlRyZVEwVGxkWFM5UFlJZjRR?=
 =?utf-8?B?UDU5Y2ppWTAyY1BhTWJ5R0tsWTY3ZWlqTjdtTU8xS0doUG1pT1FNR0gvZnhk?=
 =?utf-8?B?SzkvV05OQmlhcWdObU5IZVhrOUZJSlRjUCs3UkRlZHBVSFR5czdsVkNKN1FH?=
 =?utf-8?B?U1laSnZCVEFUUytWOFpMeGpVOTVqYjRKcHB1UG5YUFJmNnNZQnFzeDAyUW9U?=
 =?utf-8?B?Y1lEMGl2NFpXMXdkMVJzeU0rY0s3WUZvYVZZZzRWNFVTa2p6UzVDMWhWYVh6?=
 =?utf-8?Q?9Wa6j/JJJFVV+lU8iu9RnkwdTOUgJdZuPDUFxwt?=
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM9PR03MB6836.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 3e73e630-c08e-4b27-5c2e-08d92075383f
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 May 2021 18:36:47.5828
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: km11elCz4zj2MulwnquUw9npJ3Z1FvqCqck72bcDRy5pT0/kbWM9Kog5pOKzjigi4Xwm4cCW00WrNTQ4UO19Cw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB3761
X-Proofpoint-GUID: owOXevxruiuaSCjtx-t8QfgF1zpj7ZMk
X-Proofpoint-ORIG-GUID: owOXevxruiuaSCjtx-t8QfgF1zpj7ZMk
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 mlxlogscore=867
 impostorscore=0 malwarescore=0 mlxscore=0 bulkscore=0 lowpriorityscore=0
 suspectscore=0 clxscore=1015 adultscore=0 phishscore=0 priorityscore=1501
 spamscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105260125

> Ok. Can the reasoning be documented in the commit message (with a short
> summary in the code)? This would be helpful if in the future one decide to
> change the size of the seed.
>

Sure, I'll do that.

   -Sergiy


From xen-devel-bounces@lists.xenproject.org Wed May 26 18:41:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 18:41:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132699.247423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyTC-0007RO-ND; Wed, 26 May 2021 18:41:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132699.247423; Wed, 26 May 2021 18:41:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llyTC-0007RH-Ja; Wed, 26 May 2021 18:41:30 +0000
Received: by outflank-mailman (input) for mailman id 132699;
 Wed, 26 May 2021 18:41:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=8HVS=KV=epam.com=prvs=578065f5e4=sergiy_kibrik@srs-us1.protection.inumbo.net>)
 id 1llyTB-0007RB-6t
 for xen-devel@lists.xenproject.org; Wed, 26 May 2021 18:41:29 +0000
Received: from mx0b-0039f301.pphosted.com (unknown [148.163.137.242])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7c5432a0-c52c-4203-a727-ec312935f6bb;
 Wed, 26 May 2021 18:41:28 +0000 (UTC)
Received: from pps.filterd (m0174683.ppops.net [127.0.0.1])
 by mx0b-0039f301.pphosted.com (8.16.0.43/8.16.0.43) with SMTP id
 14QITuEI014761; Wed, 26 May 2021 18:41:27 GMT
Received: from eur05-am6-obe.outbound.protection.outlook.com
 (mail-am6eur05lp2112.outbound.protection.outlook.com [104.47.18.112])
 by mx0b-0039f301.pphosted.com with ESMTP id 38spq71fr1-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=NOT);
 Wed, 26 May 2021 18:41:27 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com (2603:10a6:20b:2d8::8)
 by AM0PR03MB5474.eurprd03.prod.outlook.com (2603:10a6:208:16c::22)
 with Microsoft SMTP Server (version=TLS1_2,
 cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id 15.20.4150.27; Wed, 26 May
 2021 18:41:23 +0000
Received: from AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::9151:70c8:8d48:5135]) by AM9PR03MB6836.eurprd03.prod.outlook.com
 ([fe80::9151:70c8:8d48:5135%2]) with mapi id 15.20.4173.020; Wed, 26 May 2021
 18:41:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c5432a0-c52c-4203-a727-ec312935f6bb
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=dxiWHLqHGttREz4XhzG19Dr7b0dmOENIoWPdY4piqvjWqgVtq6aoLyUoT1Aepu2zGISvXslXikzUzJCKnVbF3kjRTQnFg+cqLSuOJfmp0pR8qQepljpWE+TSlTPwhXpF7Lhj79lM0BTJrz4lIDEvcMQI+zQOG/3ALlggU+tX+IiCtuTkW1osIwpQ5q/H3Cec91y9LH7F6t94eFOKs7VxJUrx3XfRGO9r6pZvVx9mr8pMKScWI1Q4eGMPznn1J3GAV4g0U4SNzgFgBDKZSirsoAo3D0jioSB7E7V8anYQfQPsWk/43EmyYnZdUrokvVtClNIvP/lDjmheWOAGxnn7Gw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uzRpJyWZiGcsJseEafjmvR0U81BdsxSH+wsqwxY6dCE=;
 b=hG94zFjrKAFnACBDDmrhEl45AQ4LLHPKoI8mvBgq0OHljCn0gixPQtSGLeMh7qh8329P6+W3Y9aCGFF/SJ2b0S0u9ub3WTGwWTt7micHbdSYjpnzoWXOM4CL8NxUfBEwL052WVolVcQxu/Rio0ljw/br3dKly6X1YT2w/HCtXr5PfhXfFaMBEcR+n8md3Iiac1M4zPp3Td9RPkZ0+dM4f7F9I0b2jNf7atGsTEcB38HPmDzJSbxYEGy+fRz4GyeYEhYjg9Mp8d6c/dhbyXDOFGkHxpqNyXggz6kA81+RfT1aklE3GFmlrVbgRU96MSaGqE9DtA7z9MPMHrf1XAK5UA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=epam.com; dmarc=pass action=none header.from=epam.com;
 dkim=pass header.d=epam.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=epam.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=uzRpJyWZiGcsJseEafjmvR0U81BdsxSH+wsqwxY6dCE=;
 b=G4WSKKaiKhndrS7ByW/qZKaET6/RCcPfuLrOi9/dXEHyOQz2ZXFeZ9xVuj1fszAbIOvo4C811QgXOvOE17T3V2RYeqcyozqKwuCfFVZhKblaiu61/VjVOBOkM3s6UsCOMk8mSYu64RR4cT4uJusTpfXwlhH9o1vnq7xHoOPxN56dV2VlrMFSUatvOoUDSlaNxItKh4jo5s0EIgv/Few3SsbnajlAiowVfh62ca3WKzNlT1tbolSFG77jh4ZyIVZLn2Tru7cc16805qMErIvJUBzvv4Hvl85tqAoY+ZnGKP0j3S9rRgmLokFWbT1pwg+a5sr5cafLMSpPOre+KjTzfA==
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: Julien Grall <julien@xen.org>,
        "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>
CC: Ian Jackson <ian.jackson@eu.citrix.com>, Wei Liu <wl@xen.org>
Subject: RE: [XEN PATCH v1] libxl: use getrandom() syscall for random data
 extraction
Thread-Topic: [XEN PATCH v1] libxl: use getrandom() syscall for random data
 extraction
Thread-Index: AQHXUJviVlkCcTT7D0WghZSRk2JxUar1gejggAByMQCAACcHIA==
Date: Wed, 26 May 2021 18:41:22 +0000
Message-ID: 
 <AM9PR03MB68360EE9DC1E8C52A1902BECF0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
References: <20210524085858.1902-1-Sergiy_Kibrik@epam.com>
 <13bde708-1d87-a2c7-077f-f08db597ae15@xen.org>
 <AM9PR03MB68362CAC724A6BEE4A50B96AF0249@AM9PR03MB6836.eurprd03.prod.outlook.com>
 <6f6c29e1-09dc-d7ea-6126-4649100c149d@xen.org>
In-Reply-To: <6f6c29e1-09dc-d7ea-6126-4649100c149d@xen.org>
Accept-Language: en-US
Content-Language: ru-RU
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
authentication-results: xen.org; dkim=none (message not signed)
 header.d=none;xen.org; dmarc=none action=none header.from=epam.com;
x-originating-ip: [95.67.114.216]
x-ms-publictraffictype: Email
x-ms-office365-filtering-correlation-id: 5ccccbbe-0f41-4835-cad1-08d92075dc6a
x-ms-traffictypediagnostic: AM0PR03MB5474:
x-microsoft-antispam-prvs: 
 <AM0PR03MB5474FCCC1BEEEDC23A6EFF2BF0249@AM0PR03MB5474.eurprd03.prod.outlook.com>
x-ms-oob-tlc-oobclassifiers: OLM:2331;
x-ms-exchange-senderadcheck: 1
x-microsoft-antispam: BCL:0;
x-ms-exchange-transport-forked: True
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: base64
MIME-Version: 1.0
X-OriginatorOrg: epam.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-AuthSource: AM9PR03MB6836.eurprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 5ccccbbe-0f41-4835-cad1-08d92075dc6a
X-MS-Exchange-CrossTenant-originalarrivaltime: 26 May 2021 18:41:22.9908
 (UTC)
X-MS-Exchange-CrossTenant-fromentityheader: Hosted
X-MS-Exchange-CrossTenant-id: b41b72d0-4e9f-4c26-8a69-f949f367c91d
X-MS-Exchange-CrossTenant-mailboxtype: HOSTED
X-MS-Exchange-CrossTenant-userprincipalname: Peq1kzBDlDjNh4s88L1gwzi3vBH2K1xPdpAZwStmhuvA6k7fTThrK8ZaD8lV/3E4usVbuJfEsb4PysIH8EuS0w==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: AM0PR03MB5474
X-Proofpoint-GUID: JWGj9qlf9_qgl5oTiI11ynrw6hauSyHx
X-Proofpoint-ORIG-GUID: JWGj9qlf9_qgl5oTiI11ynrw6hauSyHx
X-Proofpoint-Spam-Details: rule=outbound_notspam policy=outbound score=0 malwarescore=0
 impostorscore=0 mlxscore=0 suspectscore=0 adultscore=0 phishscore=0
 spamscore=0 mlxlogscore=855 clxscore=1015 bulkscore=0 priorityscore=1501
 lowpriorityscore=0 classifier=spam adjust=0 reason=mlx scancount=1
 engine=8.12.0-2104190000 definitions=main-2105260125

PiA+IFlvdSBtZWFuIGl0cyBhdmFpbGFiaWxpdHkgc2hvdWxkIGJlIGNoZWNrZWQgYm90aCBhdCBi
dWlsZCBhbmQgcnVudGltZT8NCj4gDQo+IENvcnJlY3QuIFlvdSBjYW4gaGF2ZSBhIGxpYmMgc3Vw
b3J0aW5nIGdldHJhbmRvbSgpIGJ1dCBhIGtlcm5lbCB0aGF0IGRvZXNuJ3QNCj4gcHJvdmlkZSB0
aGUgc3lzY2FsbC4NCj4gDQoNCkFncmVlLCBJIHNoYWxsIGNoZWNrIHRoaXMuDQoNCiAgLVNlcmdp
eQ0K


From xen-devel-bounces@lists.xenproject.org Wed May 26 19:16:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 19:16:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132706.247434 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llz0g-0002Gr-Ex; Wed, 26 May 2021 19:16:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132706.247434; Wed, 26 May 2021 19:16:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1llz0g-0002Gk-Ba; Wed, 26 May 2021 19:16:06 +0000
Received: by outflank-mailman (input) for mailman id 132706;
 Wed, 26 May 2021 19:16:05 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llz0f-0002Ga-A3; Wed, 26 May 2021 19:16:05 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llz0f-0007ru-5n; Wed, 26 May 2021 19:16:05 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1llz0e-00072i-QL; Wed, 26 May 2021 19:16:04 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1llz0e-00073O-Pq; Wed, 26 May 2021 19:16:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=LBg1EwnNxUJdLbSiFtOgCsudfM1zo5Rmy0xx/oK1wWU=; b=jeiW7ScbZTSHYGqfFe7U9naZsi
	JEToX+tWbPTKIuhJ5FTgFKmYZJ8jHRlS31kfwX7m1FR2uVaUgpGVHZ0KadilbXAgMNO83Sv7mFNcu
	c96d9PS1HGYq4immhngJYPK5d4MGBPovN1VUO/yDMMNEpixv8qwItSFI+8uOXhXjUu+w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162164-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162164: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-examine:memdisk-try-append:fail:heisenbug
    xen-unstable:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=3092006fc4e096a7eebb8042cb76d82b09ccece4
X-Osstest-Versions-That:
    xen=aa77acc28098d04945af998f3fc0dbd3759b5b41
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 19:16:04 +0000

flight 162164 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162164/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail in 162156 pass in 162164
 test-amd64-amd64-examine      4 memdisk-try-append         fail pass in 162156
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162156

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162145
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162145
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162145
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162145
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162145
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162145
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162145
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162145
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162145
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162145
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162145
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  3092006fc4e096a7eebb8042cb76d82b09ccece4
baseline version:
 xen                  aa77acc28098d04945af998f3fc0dbd3759b5b41

Last test of basis   162145  2021-05-25 01:54:50 Z    1 days
Failing since        162150  2021-05-25 10:39:08 Z    1 days    3 attempts
Testing same since   162156  2021-05-26 00:06:46 Z    0 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   aa77acc280..3092006fc4  3092006fc4e096a7eebb8042cb76d82b09ccece4 -> master


From xen-devel-bounces@lists.xenproject.org Wed May 26 22:35:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Wed, 26 May 2021 22:35:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132728.247474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm27R-0003KJ-0H; Wed, 26 May 2021 22:35:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132728.247474; Wed, 26 May 2021 22:35:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm27Q-0003KC-Si; Wed, 26 May 2021 22:35:16 +0000
Received: by outflank-mailman (input) for mailman id 132728;
 Wed, 26 May 2021 22:35:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm27P-0003K2-Lf; Wed, 26 May 2021 22:35:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm27P-0002kt-Cs; Wed, 26 May 2021 22:35:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm27P-0008K0-63; Wed, 26 May 2021 22:35:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lm27P-0001UW-5Z; Wed, 26 May 2021 22:35:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Message-Id:Subject:To;
	bh=BP+kwjaowXJkvKL2Wedhir6DNksjGrGt5Rq/vtzK2b4=; b=4iSOHPc7FKFEhD40zA5dAZedBK
	RnI8t8nU8jizWBdQaqE08FTwfpDbpwySXjpGZnJQNlFkMg+1ADCTlkWDmAsaigTQ4bzovV71Wh+rR
	VJLJXBK6wW2ht9JmYxW1NEez1Ik2KkDE5iIwm3io1+kQbb+CRuDbzXmoUuVnf3VHj0z8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Subject: [qemu-mainline bisection] complete test-amd64-amd64-qemuu-freebsd12-amd64
Message-Id: <E1lm27P-0001UW-5Z@osstest.test-lab.xenproject.org>
From: osstest service owner <osstest-admin@xenproject.org>
Date: Wed, 26 May 2021 22:35:15 +0000

branch xen-unstable
xenbranch xen-unstable
job test-amd64-amd64-qemuu-freebsd12-amd64
testid guest-saverestore

Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160441/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
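[Editor's note, not part of the osstest report: the commit above removes the 'query-cpus' QMP command in favour of 'query-cpus-fast'. As a minimal sketch of what the change means on the wire for a QMP client, the snippet below builds both command messages; the exact return-field renames (e.g. "thread_id" vs "thread-id") are assumptions based on the QMP schema around QEMU 6.0, not something stated in this report.]

```python
import json

# Hypothetical QMP client fragment (illustrative only, not from the report).
# 'query-cpus' (removed by commit 8af54b91) could interrupt vCPU execution
# to collect register state; 'query-cpus-fast' avoids that side effect but
# renames some return fields, so callers must be updated, not just the
# command name.
old_cmd = {"execute": "query-cpus"}        # removed command
new_cmd = {"execute": "query-cpus-fast"}   # side-effect-free replacement

def encode(cmd):
    """Serialize a QMP command as a newline-terminated JSON line,
    which is how commands are sent over the QMP socket."""
    return json.dumps(cmd) + "\n"

wire = encode(new_cmd)
# Round-trip check: the encoded line parses back to the same command.
assert json.loads(wire) == {"execute": "query-cpus-fast"}
```

A client that only swapped the command string but still read the old field names would fail in exactly the way a guest save/restore test harness might, which is consistent with the bisection result above.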


For bisection revision-tuple graph see:
   http://logs.test-lab.xenproject.org/osstest/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-freebsd12-amd64.guest-saverestore.html
Revision IDs in each graph node refer, respectively, to the Trees above.

----------------------------------------
Running cs-bisection-step --graph-out=/home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-freebsd12-amd64.guest-saverestore --summary-out=tmp/162173.bisection-summary --basis-template=152631 --blessings=real,real-bisect,real-retry qemu-mainline test-amd64-amd64-qemuu-freebsd12-amd64 guest-saverestore
Searching for failure / basis pass:
 162158 fail [host=elbling0] / 160125 [host=elbling1] 160119 [host=godello0] 160113 [host=albana1] 160104 [host=godello1] 160097 [host=chardonnay0] 160091 [host=huxelrebe1] 160088 [host=pinot1] 160082 [host=fiano0] 160079 [host=chardonnay1] 160070 [host=fiano1] 160066 [host=pinot0] 160002 [host=albana1] 159947 [host=chardonnay1] 159926 [host=chardonnay0] 159911 [host=fiano1] 159898 [host=huxelrebe1] 159888 [host=godello0] 159878 [host=fiano0] 159869 [host=fiano1] 159860 [host=godello1] 159853 [host=elbling1] 159848 [host=godello0] 159842 [host=fiano1] 159834 [host=albana0] 159828 ok.
Failure / basis pass flights: 162158 / 159828
(tree with no url: minios)
Tree: linux git://xenbits.xen.org/linux-pvops.git
Tree: linuxfirmware git://xenbits.xen.org/osstest/linux-firmware.git
Tree: ovmf git://xenbits.xen.org/osstest/ovmf.git
Tree: qemu git://xenbits.xen.org/qemu-xen-traditional.git
Tree: qemuu git://git.qemu.org/qemu.git
Tree: seabios git://xenbits.xen.org/osstest/seabios.git
Tree: xen git://xenbits.xen.org/xen.git
Latest c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 92f8c6fef13b31ba222c4d20ad8afd2b79c4c28e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
Basis pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
Generating revisions with ./adhoc-revtuple-generator  git://xenbits.xen.org/linux-pvops.git#c3038e718a19fc596f7b1baba0f83d5146dc7784-c3038e718a19fc596f7b1baba0f83d5146dc7784 git://xenbits.xen.org/osstest/linux-firmware.git#c530a75c1e6a472b0eb9558310b518f0dfcd8860-c530a75c1e6a472b0eb9558310b518f0dfcd8860 git://xenbits.xen.org/osstest/ovmf.git#ef91b07388e1c0a50c604e5350eeda98428ccea6-cfa6ffb113f2c0d922034cc77c0d6c52eea05497 git://xenbits.xen.org/qemu-xen-traditional.git#3d273dd05e51e5a1ffba3d98c7437ee84e8f8764-3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 git://git.qemu.org/qemu.git#cb90ecf9349198558569f6c86c4c27d215406095-92f8c6fef13b31ba222c4d20ad8afd2b79c4c28e git://xenbits.xen.org/osstest/seabios.git#ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e-b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee git://xenbits.xen.org/xen.git#243036df0d55673de59c214e240b9b914d278b65-aa77acc28098d04945af998f3fc0dbd3759b5b41
Loaded 79680 nodes in revision graph
Searching for test results:
 159828 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 159834 [host=albana0]
 159842 [host=fiano1]
 159848 [host=godello0]
 159853 [host=elbling1]
 159860 [host=godello1]
 159869 [host=fiano1]
 159878 [host=fiano0]
 159888 [host=godello0]
 159898 [host=huxelrebe1]
 159911 [host=fiano1]
 159926 [host=chardonnay0]
 159947 [host=chardonnay1]
 160002 [host=albana1]
 160048 []
 160050 []
 160057 []
 160062 []
 160064 []
 160066 [host=pinot0]
 160070 [host=fiano1]
 160079 [host=chardonnay1]
 160082 [host=fiano0]
 160088 [host=pinot1]
 160091 [host=huxelrebe1]
 160097 [host=chardonnay0]
 160104 [host=godello1]
 160113 [host=albana1]
 160119 [host=godello0]
 160125 [host=elbling1]
 160134 fail irrelevant
 160147 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2e1293cbaac75e84f541f9acfa8e26749f4c3562 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160167 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ca318882714080fb81fe9eb89a7b7934efc5bfae 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 bdee969c0e65d4d509932b1d70e3a3b2ffbff6d5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160328 fail irrelevant
 160378 blocked c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 160380 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 160382 fail irrelevant
 160384 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 f2a9a6c2a86570ccbf8c5c30cbb8bf723168c459 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160386 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 eb07bfb09ef5483ad58ed0eba713f32fb0c909f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8a40754bca14df63c6d2ffe473b68a270dc50679 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160388 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7286d62d4e259be8cecf3dc2deea80ecc14489a5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160389 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 69259911f948ad2755bd1f2c999dd60ac322c890 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160390 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e71c36557ed41017e634ae392fa80f03ced7fa1 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dae3c3e8b257cd27d6b35a467a34bf79a6650340
 160361 fail irrelevant
 160391 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2255564fd21059960966b47212def9069cb56077 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160393 fail irrelevant
 160396 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2e51b27fed31eb7b2a2cb4245806c8c7859207f7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0693602a23276b076a679b1e7ed9125a444336b6 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160401 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8b858f9998a9d59a9a7188f2c5c6ffb99eff6115 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160402 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 30ca7eddc486646fa19c9619fcf233ceaa65e28c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160403 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 2615a5e433aeb812c300d3a48e1a88e1303e2339 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160405 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 51204c2f188ec1e2a38f14718d38a3772f850a4b b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160407 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 773b0bc2838ede154c6de9d78401b91fafa91062 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 5e8892db93f3fb6a7221f2d47f3c952a7e489737 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160408 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8e6bc6cdc82d45f203bc9fc4342c0452214c74fe b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 160409 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 757acb9a8295e8be4a37b2cfc1cd947e357fd29c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 14b95b3b8546db201e7efd0636ae0e215fae98f3
 160411 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 9abda42bf2f5aa6ef403d3140fd3d7d88e8064e9 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 893103e286ac1c500d2ad113f55c41edb35e047c
 160412 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6f34661b6c97a37a5efc27d31c037ddeda4547e2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 0570d7f276dd20a3adee80ca44a5fe7daf7566cd
 160413 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 edd46cd407ea4a0adaa8d6ca86f550c2a4d5c507 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 a557b00469bca61a058fc1db4855503cac1c3219 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 4e01c48886d9fbfee3bf7e481c4529a176331c78
 160415 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 1941858448e76f83eb00614c4f34ac29e9a8e792 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 0570d7f276dd20a3adee80ca44a5fe7daf7566cd
 160416 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 edd46cd407ea4a0adaa8d6ca86f550c2a4d5c507 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 65a9d3807e9a0ffd9f9719416a07be41b6f39e94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e4bdcc8aef6707027168ea29caed844a7da67b4d
 160392 fail irrelevant
 160417 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 94fa95c8746c553324e8b69ea4a74af670075324 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e4341623a3b87e7eca87d42b7b88da967cd21c49 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 60c0444fae2148452f9ed0b7c49af1fa41f8f522
 160420 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 160421 fail irrelevant
 160424 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 4751a48aeb2ab828b0a5cbdc585fd3642967cda1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d1929069e355afb809a50a7f6b6affdea399cc8c b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 368096b9c4a273be58dd897e996e3e010bcfc21b
 160426 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b6d5996706ddb6082e3ea8de79849bfecf2aaa15 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6e31b3a5c34c6e5be7ef60773e607f189eaa15f3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b4011741e6b39a8fd0ed5aded96c16c45ead5888
 160427 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160428 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 030ba3097a6e7d08b99f8a9d19a322f61409c1f6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ee2e67da8f882fcdef2c49fcc58e9962aa695f5a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160429 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f9c53a69edeb94ae8c65276b885c1a7efe4f613a 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 571d413b5da6bc6f1c2aaca8484717642255ddb0 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160431 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 283d845c9164f57f5dba020a4783bb290493802f b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160433 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8becb36063fb14df1e3ae4916215667e2cb65fa2 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160435 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160437 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160438 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160439 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160440 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160441 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8af54b9172ff3b9bbdbb3191ed84994d275a0d81 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
 160418 fail irrelevant
 160448 fail irrelevant
 160477 fail irrelevant
 160501 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160522 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7b9a3c9f94bcac23c534bc9f42a9e914b433b299 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160541 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ec2e6e016d24bd429792d08cf607e4c5350dcdaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee e680cc48b7184d3489873d6776f84ba1fc238ced
 160563 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 b33cf5bfcb4c941370739dfbbe1532ff508fd29d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 7993b0f83fe5c3f8555e79781d5d098f99751a94 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cead8c0d17462f3a1150b5657d3f4eaa88faf1cb
 160619 fail irrelevant
 160632 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 62bad17dcae18f55cb3bdc19909543dfdf928a2b 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6ee55e1d10c25c2f6bf5ce2084ad2327e17affa5 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 90629587e16e2efdb61da77f25c25fba3c4a5fd7
 160650 fail irrelevant
 160736 fail irrelevant
 160748 fail irrelevant
 160779 fail irrelevant
 160801 fail irrelevant
 160827 fail irrelevant
 160851 fail irrelevant
 160883 fail irrelevant
 160916 fail irrelevant
 160980 fail irrelevant
 161050 fail irrelevant
 161088 fail irrelevant
 161121 fail irrelevant
 161147 fail irrelevant
 161171 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 2ad22420a710dc07e3b644f91a5b55c09c39ecf3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 264aa183ad85b2779b27d1312724a291259ccc9f
 161191 fail irrelevant
 161210 fail irrelevant
 161232 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee b53173e7cdafb7a318a239d557478fd73734a86a
 161256 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161276 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161290 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 99e7e48cc7117c37fc1c08a741872d0875595796 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8fe9f1f891eff4e37f82622b7480ee748bf4af74 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee dd22a64de7e02b48312839a15179528c8f7db5c6
 161308 fail irrelevant
 161334 fail irrelevant
 161364 fail irrelevant
 161388 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 9f6cd4983715cb31f0ea540e6bbb63f799a35d8a
 161401 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 d3b0d007a135284981fa750612a47234b83976f9 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 b1cffefa1b163bce9aebc3416f562c1d3886eeaa b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aaa3eafb3ba8b544d19ca41cda1477640b22b8fc
 161419 fail irrelevant
 161434 fail irrelevant
 161444 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161455 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f2f4c6be2dba3f8e97ac544b9c3da71e9f81b294 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 ffa090bc56e73e287a63261e70ac02c0970be61a b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee bea65a212c0581520203b6ad0d07615693f42f73
 161472 fail irrelevant
 161481 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5396354b868bd6652600a654bba7df16701ac1cb 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 11e7f0fe72ca0060762d18268e0388731fe8ccb6
 161495 fail irrelevant
 161514 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5b90b8abb4049e2d98040f548ad23b6ab22d5d19 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0cef06d18762374c94eb4d511717a4735d668a24 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 972ba1d1d4bcb77018b50fd2bb63c0e628859ed3
 161540 fail irrelevant
 161554 fail irrelevant
 161571 fail irrelevant
 161587 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161604 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 8f860d2633baf9c2b6261f703f86e394c6bc22ca b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161616 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 53c5433e84e8935abed8e91d4a2eb813168a0ecf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161631 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1e6b0394d6c001802dc454ecff19076aaa80f51c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 15106f7dc3290ff3254611f265849a314a93eb0e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 1f8ee4cb430e5a9da37096574c41632cf69a0bc7
 161766 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 e93d8bcf9dbd5b8dd3b9ddbb1ece6a37e608f300 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee d26c277826dbbd64b3e3cb57159e1ecbfad33bc8
 161780 fail irrelevant
 161812 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d45a5270d075ea589f0b0ddcf963a5fea1f500ac b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 8cccd6438e86112ab383e41b433b5a7e73be9621
 161826 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161839 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 09fc903c5ac042e2e1eb54e58ea7f207ed12ee16
 161853 []
 161856 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161862 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 7a2b787880bddbb3bd68b18efe1d6fe339df6ff1
 161876 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161886 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161890 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 d90f154867ec0ec22fd719164b88716e8fd48672 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161896 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 f297b7f20010711e36e981fe45645302cc9d109d 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 74e31681ba05ed1876818df30c581bc530554fb3 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee a7da84c457b05479ab423a2e589c5f46c7da0ed7
 161907 fail irrelevant
 161915 fail irrelevant
 161924 fail irrelevant
 161938 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161941 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 5531fd48ded1271b8775725355ab83994e4bc77c 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 dab59ce031228066eb95a9c518846fcacfb0dbbf b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 43d4cc7d36503bcc3aa2aa6ebea2b7912808f254
 161950 fail irrelevant
 161955 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161961 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161963 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161967 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6d34aa9969ff85ca6eaeb4dc1988a4d4e13e7d79 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161971 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161976 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 32928415e36b3e234efb5c24143e06060a68fba3 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 6005ee07c380cbde44292f5f6c96e7daa70f4f7d b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee cb199cc7de987cfda4659fccf51059f210f6ad34
 161981 fail irrelevant
 161986 fail irrelevant
 162019 fail irrelevant
 162070 fail irrelevant
 162090 fail irrelevant
 162104 fail irrelevant
 162099 fail irrelevant
 162108 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 15ee7b76891a78141e6e30ef3f8572e8d6b326d2 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 972e848b53970d12cb2ca64687ef8ff797fb6236 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162112 fail irrelevant
 162116 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162121 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162124 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162127 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 1fb80369b72c6ba7f80b442e4acf771a6dd56ee7 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162132 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162135 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162139 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 3bbaed2cd0a02ee53958d3d2585e837bcf327278 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162143 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 371ebfe28600fc5a435504b841cd401208a68f07 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162146 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162152 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 0dab1d36f55c3ed649bb8e4c74b9269ef3a63049 b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162158 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 92f8c6fef13b31ba222c4d20ad8afd2b79c4c28e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
 162168 pass c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 ef91b07388e1c0a50c604e5350eeda98428ccea6 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cb90ecf9349198558569f6c86c4c27d215406095 ef88eeaf052c8a7d28c5f85e790c5e45bcffa45e 243036df0d55673de59c214e240b9b914d278b65
 162173 fail c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 cfa6ffb113f2c0d922034cc77c0d6c52eea05497 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 92f8c6fef13b31ba222c4d20ad8afd2b79c4c28e b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee aa77acc28098d04945af998f3fc0dbd3759b5b41
Searching for interesting versions
 Result found: flight 159828 (pass), for basis pass
 Result found: flight 162158 (fail), for basis failure
 Repro found: flight 162168 (pass), for basis pass
 Repro found: flight 162173 (fail), for basis failure
 0 revisions at c3038e718a19fc596f7b1baba0f83d5146dc7784 c530a75c1e6a472b0eb9558310b518f0dfcd8860 9fd7e88c23f6fb056d25fbc3f8e2e7c1a53859d1 3d273dd05e51e5a1ffba3d98c7437ee84e8f8764 cbde7be900d2a2279cbc4becb91d1ddd6a014def b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee 21657ad4f01a634beac570c64c0691e51b9cf366
No revisions left to test, checking graph state.
 Result found: flight 160435 (pass), for last pass
 Result found: flight 160437 (fail), for first failure
 Repro found: flight 160438 (pass), for last pass
 Repro found: flight 160439 (fail), for first failure
 Repro found: flight 160440 (pass), for last pass
 Repro found: flight 160441 (fail), for first failure

*** Found and reproduced problem changeset ***

  Bug is in tree:  qemuu git://git.qemu.org/qemu.git
  Bug introduced:  8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Bug not present: cbde7be900d2a2279cbc4becb91d1ddd6a014def
  Last fail repro: http://logs.test-lab.xenproject.org/osstest/logs/160441/


  commit 8af54b9172ff3b9bbdbb3191ed84994d275a0d81
  Author: Daniel P. Berrangé <berrange@redhat.com>
  Date:   Mon Feb 22 12:54:55 2021 +0000
  
      machine: remove 'query-cpus' QMP command
      
      The newer 'query-cpus-fast' command avoids side effects on the guest
      execution. Note that some of the field names are different in the
      'query-cpus-fast' command.
      
      Reviewed-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Tested-by: Wainer dos Santos Moschetta <wainersm@redhat.com>
      Signed-off-by: Daniel P. Berrangé <berrange@redhat.com>
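  [Editor's note: for context, the replacement command named in the commit
  above is issued over QEMU's QMP socket as a JSON request. The reply below
  is illustrative only — the exact fields and values vary by QEMU version
  and target architecture; see the QEMU QMP reference for the authoritative
  schema.]

  ```json
  { "execute": "query-cpus-fast" }
  { "return": [
      { "cpu-index": 0,
        "qom-path": "/machine/unattached/device[0]",
        "thread-id": 25627,
        "target": "x86_64",
        "props": { "core-id": 0, "socket-id": 0, "thread-id": 0 } }
  ] }
  ```

  Unlike the removed `query-cpus`, this command does not interrupt guest
  vCPU execution to gather its data, which is why management tools were
  expected to migrate to it.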

neato: graph is too large for cairo-renderer bitmaps. Scaling by 0.363742 to fit
pnmtopng: 231 colors found
Revision graph left in /home/logs/results/bisect/qemu-mainline/test-amd64-amd64-qemuu-freebsd12-amd64.guest-saverestore.{dot,ps,png,html,svg}.
----------------------------------------
162173: tolerable FAIL

flight 162173 qemu-mainline real-bisect [real]
http://logs.test-lab.xenproject.org/osstest/logs/162173/

Failures :-/ but no regressions.

Tests which did not succeed,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail baseline untested


jobs:
 build-amd64                                                  pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary



From xen-devel-bounces@lists.xenproject.org Thu May 27 00:35:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 00:35:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132739.247488 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm3zu-0006MH-Px; Thu, 27 May 2021 00:35:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132739.247488; Thu, 27 May 2021 00:35:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm3zu-0006MA-Lj; Thu, 27 May 2021 00:35:38 +0000
Received: by outflank-mailman (input) for mailman id 132739;
 Thu, 27 May 2021 00:35:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm3zt-0006M0-E8; Thu, 27 May 2021 00:35:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm3zt-0005Kk-5p; Thu, 27 May 2021 00:35:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm3zs-0006Lo-TJ; Thu, 27 May 2021 00:35:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lm3zs-0004MB-Sq; Thu, 27 May 2021 00:35:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=izet8KZzoK0a2m3Lbrq5kp+j3z/U7plzQa2Rq+y+OIg=; b=zRt6Q8TR+2kACw+WRhQoYYaPek
	lzyAVZgL7ji0WlvBbi9rR9Z9EePMy2aGc8Hqnkp3OIaazO1jjbTVtZ0x7O+/DB6xMIkxGoSEIP6Qg
	SweY2OTyFrtF4w5XwK2G1thdFjxc/d+6PgrHRV0z/MXHLTYy1Oqco3skU54gUW/ucmd0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162166-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162166: regressions - FAIL
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=67154cff6258e46b05acc9f797e3328ed839b0e2
X-Osstest-Versions-That:
    linux=b239a0365b9339ad5e276ed9cb4605963c2d939a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 00:35:36 +0000

flight 162166 linux-5.4 real [real]
flight 162174 linux-5.4 real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162166/
http://logs.test-lab.xenproject.org/osstest/logs/162174/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 162123

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162123
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162123
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162123
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162123
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162123
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162123
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162123
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162123
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162123
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162123
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162123
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162123
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                67154cff6258e46b05acc9f797e3328ed839b0e2
baseline version:
 linux                b239a0365b9339ad5e276ed9cb4605963c2d939a

Last test of basis   162123  2021-05-22 10:12:11 Z    4 days
Testing same since   162166  2021-05-26 10:42:44 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alex Deucher <alexander.deucher@amd.com>
  Anirudh Rayabharam <mail@anirudhrb.com>
  Atul Gopinathan <atulgopinathan@gmail.com>
  Bart Van Assche <bvanassche@acm.org>
  Ben Chuang <benchuanggli@gmail.com>
  Changfeng <Changfeng.Zhu@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Beer <dlbeer@gmail.com>
  Daniel Cordova A <danesc87@gmail.com>
  Daniel Wagner <dwagner@suse.de>
  Darrick J. Wong <djwong@kernel.org>
  David Sterba <dsterba@suse.com>
  Du Cheng <ducheng2@gmail.com>
  Elia Devito <eliadevito@gmail.com>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guchun Chen <guchun.chen@amd.com>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Hou Pu <houpu.main@gmail.com>
  Hui Wang <hui.wang@canonical.com>
  Hulk Robot <hulkrobot@huawei.com>
  Igor Matheus Andrade Torrente <igormtorrente@gmail.com>
  Jacek Anaszewski <jacek.anaszewski@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Jan Kratochvil <jan.kratochvil@redhat.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jason Self <jason@bluehome.net>
  Jiri Slaby <jirislaby@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josef Bacik <josef@toxicpanda.com>
  Juergen Gross <jgross@suse.com>
  Leon Romanovsky <leonro@nvidia.com>
  Liming Sun <limings@nvidia.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Maciej W. Rozycki <macro@orcam.me.uk>
  Maor Gottlieb <maorg@nvidia.com>
  Marcel Holtmann <marcel@holtmann.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Oleg Nesterov <oleg@redhat.com>
  Pedro Alves <palves@redhat.com>
  PeiSen Hou <pshou@realtek.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phillip Potter <phil@philpotter.co.uk>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Sasha Levin <sashal@kernel.org>
  Shay Drory <shayd@nvidia.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Marchi <simon.marchi@efficios.com>
  Stafford Horne <shorne@gmail.com>
  Steve French <stfrench@microsoft.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  syzbot <syzbot+1f29e126cf461c4de3b3@syzkaller.appspotmail.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Theodore Ts'o <tytso@mit.edu>
  Tom Seewald <tseewald@gmail.com>
  Tyler Hicks <code@tyhicks.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Zhen Lei <thunder.leizhen@huawei.com>
  Zqiang <qiang.zhang@windriver.com>
  Zubin Mithra <zsm@chromium.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 2123 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 27 01:01:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 01:01:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132748.247505 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm4PD-00077N-24; Thu, 27 May 2021 01:01:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132748.247505; Thu, 27 May 2021 01:01:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm4PC-00076U-VF; Thu, 27 May 2021 01:01:46 +0000
Received: by outflank-mailman (input) for mailman id 132748;
 Thu, 27 May 2021 01:01:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=xUuB=KW=gmail.com=persaur@srs-us1.protection.inumbo.net>)
 id 1lm4PB-0006XI-Qr
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 01:01:45 +0000
Received: from mail-qt1-x82a.google.com (unknown [2607:f8b0:4864:20::82a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da9e5a95-7e13-40f6-9ef1-7ace80d61878;
 Thu, 27 May 2021 01:01:44 +0000 (UTC)
Received: by mail-qt1-x82a.google.com with SMTP id v4so2369631qtp.1
 for <xen-devel@lists.xenproject.org>; Wed, 26 May 2021 18:01:44 -0700 (PDT)
Received: from smtpclient.apple ([2607:fb90:7bcd:c108:e4aa:408a:f729:1f63])
 by smtp.gmail.com with ESMTPSA id j62sm405368qkf.125.2021.05.26.18.01.43
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Wed, 26 May 2021 18:01:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da9e5a95-7e13-40f6-9ef1-7ace80d61878
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=content-transfer-encoding:from:mime-version:subject:message-id:date
         :to;
        bh=UXUbGB+CdOX9UuOaqQylR62vI8ma7gNyTi1KjayPRRA=;
        b=OzHzTLCDHw41SUUci2vi+fRloV0E5BxwlCawkyTog/ITuB9Tiq+aL1HI7XJGCpWEMr
         180P6AmAPbTSDZfZR0S44npvtguaXKV8tipyOKolBWWRmp7lyeFDm4WniGggcw42h31K
         zDCpeYaCSYh+YFmvFBKWOjPVKVSC0GRoAaFnhsC7hS+twR/LtGutXG9z+TZDrx+Y5d7T
         AeU/c3KGHec25gIoiCb4UB4ec9GC1E0COfsFkBuZDtf9XngBelPW95SNDARYXBAKMcVl
         hbsM7pAhyXxtu/b063u/cA3xxKdnpUU9VtVGpXLlTQIuPPSqRXQKJ2cAhle5kqV3mzct
         f8Mg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:content-transfer-encoding:from:mime-version
         :subject:message-id:date:to;
        bh=UXUbGB+CdOX9UuOaqQylR62vI8ma7gNyTi1KjayPRRA=;
        b=k51HqyO5b3cbEKE0Ehgcht+WZcItz7QnTKA7XGRrGHG3j6Ip6k01m1kYn+2MqDAx6X
         jzCrFOPYz6ZT/7Ip/5LiW1RvmQUyKrnaqCm+wzSikP3NZVFooCfPF6P8WJvmh24DNPka
         dasLgwYSD0DyQf/g+Qd+YdS7My2Jx/CZHSwt07Ql4fQmHz+4zCHpD6r981+MT5QXsdjG
         PUFZugYNFKekny5V4nxgPy8rA37h5QWZllTKIUtzJyxIpDC06uEN5dnfTKzht2iGPNeg
         lnYY3T9p/+v6bDtW+6b4aC3ufjxf/oVA7Ryt19j4tTvggvS4EXdqy0qUQvdu17DvC5KE
         XnMg==
X-Gm-Message-State: AOAM530Bc8l9qWkyYh806/VVteRQY5VrxziTr2jsd+7vHrRKAG6ccQEF
	/dn1pYMr6aOOT6xpH8QJs2bseH99pKU=
X-Google-Smtp-Source: ABdhPJx91+Qr+CMWwroLT30kojAeEC+CO5z02DM3DOKn2LdV9VSw7dYVtKtKlSHz3gRIccB2IqLVlg==
X-Received: by 2002:ac8:5a0a:: with SMTP id n10mr899975qta.232.1622077304193;
        Wed, 26 May 2021 18:01:44 -0700 (PDT)
Content-Type: multipart/alternative; boundary=Apple-Mail-3207986E-FD44-4192-86D0-1F861B3932F8
Content-Transfer-Encoding: 7bit
From: Rich Persaud <persaur@gmail.com>
Mime-Version: 1.0 (1.0)
Subject: [events] BigBlueButton enhancements 
Message-Id: <56E50BB1-A836-4273-A513-E96D691E81C1@gmail.com>
Date: Wed, 26 May 2021 21:01:01 -0400
To: xen-devel@lists.xenproject.org
X-Mailer: iPhone Mail (18F72)


--Apple-Mail-3207986E-FD44-4192-86D0-1F861B3932F8
Content-Type: text/plain;
	charset=utf-8
Content-Transfer-Encoding: 8bit

LPC and LF have issued an RFQ that includes upstream usability enhancements for BBB and integration with Matrix chat.

Some feature gaps (page 7, HTML5 client) may be familiar to Xen Summit participants. If Xen Summit has additional enhancement requests for 2022, these could be coordinated with LF/LPC, as a follow-on project.

https://www.linuxplumbersconf.org/event/11/attachments/732/1341/LPC2021-RFQ01.pdf

Rich

--Apple-Mail-3207986E-FD44-4192-86D0-1F861B3932F8--


From xen-devel-bounces@lists.xenproject.org Thu May 27 05:28:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 05:28:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132794.247630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm8ZA-00053e-7V; Thu, 27 May 2021 05:28:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132794.247630; Thu, 27 May 2021 05:28:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm8ZA-00053V-1z; Thu, 27 May 2021 05:28:20 +0000
Received: by outflank-mailman (input) for mailman id 132794;
 Thu, 27 May 2021 05:28:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm8Z8-00053L-TQ; Thu, 27 May 2021 05:28:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm8Z8-0007um-PF; Thu, 27 May 2021 05:28:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm8Z8-0005Z5-GY; Thu, 27 May 2021 05:28:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lm8Z8-0001g8-G8; Thu, 27 May 2021 05:28:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=K4ZGR+lCVYIULGOhAUEPJq5xffkWyqmxv8JpYZrORK0=; b=Izqt+fputoyou0Zxo/ULNclbAV
	INw446t/6bQvXh00gbO4gqexqbEd11Ef+mAWJhqTS42Q6Y5+Gx+ANzul9LNJS5DAOfkiXhjcqR08k
	njvqX0/0zQB+hKE4RqfxkPZPw15SCzd+5r2LZJse0HRTGOn8bg0/H9rNoxm3BG1tIbwg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162167-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162167: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-armhf-armhf-xl-credit1:guest-start:fail:heisenbug
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=92f8c6fef13b31ba222c4d20ad8afd2b79c4c28e
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 05:28:18 +0000

flight 162167 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162167/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-credit1  14 guest-start                fail pass in 162158

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-xl-rtds 18 guest-start/debian.repeat fail in 162158 like 152631
 test-armhf-armhf-xl-credit1 15 migrate-support-check fail in 162158 never pass
 test-armhf-armhf-xl-credit1 16 saverestore-support-check fail in 162158 never pass
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                92f8c6fef13b31ba222c4d20ad8afd2b79c4c28e
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  279 days
Failing since        152659  2020-08-21 14:07:39 Z  278 days  514 attempts
Testing same since   162158  2021-05-26 01:37:17 Z    1 days    2 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  fail    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images



Not pushing.

(No revision log; it would be 161234 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 27 05:54:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 05:54:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132805.247650 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm8yG-0008Dw-Hq; Thu, 27 May 2021 05:54:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132805.247650; Thu, 27 May 2021 05:54:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm8yG-0008Dp-Dx; Thu, 27 May 2021 05:54:16 +0000
Received: by outflank-mailman (input) for mailman id 132805;
 Thu, 27 May 2021 05:54:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lm8yF-0008Dj-88
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 05:54:15 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 39f3508b-7916-4481-bb2f-6ea2897ef43f;
 Thu, 27 May 2021 05:54:10 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 36E57218D6;
 Thu, 27 May 2021 05:54:09 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 1BA4411A98;
 Thu, 27 May 2021 05:54:09 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 39f3508b-7916-4481-bb2f-6ea2897ef43f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622094849; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=cenKBmpg3IsK1rgGAIPzboMPz4WWFEeK3q+oV059KHM=;
	b=eXJClXW74ampbup5vQmG5n9z2tdXGPb3KrBkLaSy2W+mS7Y+joYRnfZGZ64diLtpFVcl3V
	0Bdv+D7+5c5sLYz2MKo4vCRs+7D2UR21YtJj884J5GGp0a50sJvDGBdWxveA2JyAfTHVUq
	ggXjpbYXQ+pxW40EZU2Ymh18K11eFeI=
Subject: Re: [PATCH 05/13] xenpm: Change get-cpufreq-para output for internal
From: Jan Beulich <jbeulich@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-6-jandryuk@gmail.com>
 <a8180fa6-9b7d-52cd-c055-71ca28b08325@suse.com>
Message-ID: <e6566c5d-f5ac-5724-cb26-a74b362da9e4@suse.com>
Date: Thu, 27 May 2021 07:54:05 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <a8180fa6-9b7d-52cd-c055-71ca28b08325@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.05.2021 17:21, Jan Beulich wrote:
> On 03.05.2021 21:28, Jason Andryuk wrote:
>> --- a/tools/misc/xenpm.c
>> +++ b/tools/misc/xenpm.c
>> @@ -711,6 +711,7 @@ void start_gather_func(int argc, char *argv[])
>>  /* print out parameters about cpu frequency */
>>  static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
>>  {
>> +    bool internal = strstr(p_cpufreq->scaling_governor, "internal");
> 
> Like suggested for the hypervisor, perhaps better check for names
> ending in "-internal"?

Thinking about it more, the adjustments you make aren't necessarily
applicable to other hypothetical internal governors, are they? In
which case you'd rather want to check for hwp specifically.
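For illustration, a minimal sketch of the two checks being weighed: a
"-internal" suffix match versus naming hwp explicitly. The governor name
"hwp-internal" is an assumption made for this sketch, not taken from the
patch:

```c
#include <assert.h>
#include <stdbool.h>
#include <string.h>

/* Match names ending in "-internal", rather than any name merely
 * containing the substring "internal" (which strstr() would accept). */
static bool governor_is_internal(const char *name)
{
    static const char suffix[] = "-internal";
    size_t len = strlen(name), slen = sizeof(suffix) - 1;

    return len >= slen && !strcmp(name + len - slen, suffix);
}

/* Or, alternatively, check for the hwp governor specifically, since the
 * output adjustments may not apply to other internal governors.
 * "hwp-internal" is a hypothetical name for this sketch. */
static bool governor_is_hwp(const char *name)
{
    return !strcmp(name, "hwp-internal");
}
```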

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 05:57:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 05:57:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132812.247663 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm90v-0000QK-WC; Thu, 27 May 2021 05:57:01 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132812.247663; Thu, 27 May 2021 05:57:01 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm90v-0000QD-TB; Thu, 27 May 2021 05:57:01 +0000
Received: by outflank-mailman (input) for mailman id 132812;
 Thu, 27 May 2021 05:57:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm90v-0000Q3-82; Thu, 27 May 2021 05:57:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm90v-0008Nf-2e; Thu, 27 May 2021 05:57:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lm90u-0006qN-Qg; Thu, 27 May 2021 05:57:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lm90u-0008KT-QA; Thu, 27 May 2021 05:57:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=lD+ZpQsUp2aIoNy4jOo8GqiDpu5SZJohp2COmv0hQYM=; b=vjpabn68B40u5jAYeaOpM4ToF2
	Was8OFo9wR7CJh2bJWf+CWpoE8Q1G2yId1dRYvVcJQhNkwnyneJTHjMwbuDkHfVBwSg04nGBy/Zuy
	NrP+bALBf9UvswTpzu/q00o35pmImcS0fZgKvODHl21KU8kUOlGA9OEag7luxMDELH44=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162169-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 162169: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    seabios=6eff8085980dba0938cea0193b8a0fd3c6b0c4ca
X-Osstest-Versions-That:
    seabios=b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 05:57:00 +0000

flight 162169 seabios real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162169/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 159942
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 159942
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 159942
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 159942
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 159942
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 seabios              6eff8085980dba0938cea0193b8a0fd3c6b0c4ca
baseline version:
 seabios              b0d61ecef66eb05bd7a4eb7ada88ec5dab06dfee

Last test of basis   159942  2021-03-11 16:39:34 Z   76 days
Testing same since   162169  2021-05-26 14:09:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Sergei Trofimovich <slyfox@gentoo.org>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   b0d61ec..6eff808  6eff8085980dba0938cea0193b8a0fd3c6b0c4ca -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 27 06:49:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 06:49:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132830.247699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm9px-0005SG-F6; Thu, 27 May 2021 06:49:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132830.247699; Thu, 27 May 2021 06:49:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lm9px-0005S9-By; Thu, 27 May 2021 06:49:45 +0000
Received: by outflank-mailman (input) for mailman id 132830;
 Thu, 27 May 2021 06:49:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=6svk=KW=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lm9pv-0005S3-TG
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 06:49:43 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2c56180c-6d3c-4cdc-b64e-332c96a5d916;
 Thu, 27 May 2021 06:49:42 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 817651FD2E;
 Thu, 27 May 2021 06:49:41 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 2BD3411A98;
 Thu, 27 May 2021 06:49:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c56180c-6d3c-4cdc-b64e-332c96a5d916
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622098181; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=hnnDfhpNM1bhZGYvaec0cOaHN0XeXmaI2oMqG3Z58sw=;
	b=BurW7Ax2LDSdGBZIVsDwwSCu0yXTMB1b4IrMKfKIYBOWs+VM14FAEyd8BnvpB84C4tfT/p
	XHB/53kfGlXpMm/Tldudm5LFiKbvJhHrpDQeyL1wumUg2We3KPBcWvN1pMTIrKZhHcyhj3
	h12hCKPutFTEVk+GCewLw8bcqW93zag=
Message-ID: <b1f53cd19ed65eec756d20fdec45c2c5cf79d0d8.camel@suse.com>
Subject: Re: [PATCH v2] firmware/shim: UNSUPPORTED=n
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Roger
	Pau =?ISO-8859-1?Q?Monn=E9?=
	 <roger.pau@citrix.com>, George Dunlap <george.dunlap@citrix.com>
Date: Thu, 27 May 2021 08:49:40 +0200
In-Reply-To: <72b98382-34ba-6e9d-c90e-c913dfe66258@suse.com>
References: <19695ffc-34d8-b682-b092-668f872d4e57@suse.com>
	 <72b98382-34ba-6e9d-c90e-c913dfe66258@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-p3zACkEDhn8ruykc7ySY"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0


--=-p3zACkEDhn8ruykc7ySY
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Wed, 2021-05-26 at 09:37 +0200, Jan Beulich wrote:
> We shouldn't default to include any unsupported code in the shim. Mark
> the setting as off, replacing the ARGO specification. This points out
> anomalies with the scheduler configuration: Unsupported schedulers
> better don't default to Y in release builds (like is already the case
> for ARINC653). Without at least the SCHED_NULL adjustments, the shim
> would suddenly build with RTDS as its default scheduler.
> 
> As a result, the SCHED_NULL setting can also be dropped from defconfig.
> 
> Clearly with the shim defaulting to it, SCHED_NULL must be supported at
> least there.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>
>
Reviewed-by: Dario Faggioli <dfaggioli@suse.com>

> ---
> v2: Also drop SCHED_NULL setting from defconfig. Make SCHED_NULL the
>     default when PV_SHIM_EXCLUSIVE.
> ---
> I'm certainly open to consider alterations on the sched/Kconfig
> adjustments, but _something_ needs to be done there. In particular I
> was puzzled to find the NULL scheduler marked unsupported. Clearly with
> the shim defaulting to it, it must be supported at least there.
> 
I think the null scheduler both should and can be supported. There's an
outstanding bug [1], which we may want to finally fix before declaring
it as such. More importantly, IMO, we should add an OSSTest test for it.

The latter may be tricky, as the hypervisor configuration of such a test
needs to be host specific. In fact, we need to know how many CPUs we
have on the host, and configure Xen to give only a subset of them to
dom0 (or offline a few, after boot), as well as avoid running on 1-CPU
(and probably also 2-CPU) hosts... or we won't be able to start guests
and/or do local migration jobs.

I can try to put something together, but I no longer have an OSSTest
development environment up and running, so it may take a couple of
iterations...

[1]
https://lists.xenproject.org/archives/html/xen-devel/2021-01/msg01634.html

> --- a/xen/common/sched/Kconfig
> +++ b/xen/common/sched/Kconfig
> @@ -16,7 +16,7 @@ config SCHED_CREDIT2
>  
>  config SCHED_RTDS
>         bool "RTDS scheduler support (UNSUPPORTED)" if UNSUPPORTED
> -       default y
> +       default DEBUG
>         ---help---
>           The RTDS scheduler is a soft and firm real-time scheduler for
>           multicore, targeted for embedded, automotive, graphics and gaming
>
This is fine for me, for now.

That being said, if I remember correctly, it should have all the
features that we said we wanted it to have to declare it supported... Is
anyone among the embedded/safety/automotive/edge users and downstreams
interested in helping make RTDS a first-class citizen? And, if yes,
what's the path toward that?

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-p3zACkEDhn8ruykc7ySY
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmCvQQQACgkQFkJ4iaW4
c+5UeBAAxlkUHK0z0T95InbKBme00f7wm2alfj+/IL9g/kmSGG8GsacQm/fwMBGW
a38i9AATsA1nEW2VZXXAFKEF0PMug2s5q4M8Y2AsRcU7t4UW3grvhXfU6ipBA5bN
0fgPpWGTFLRrr5NXEryHCUp2hQwCBezkX0u6LUWmbUfahcCx9JN+TZVc/eojBFPm
xXmN5NG4U5cNmnXO99XnVGsePYnUuU0AuLfVxAPT9uJh5OOOF4XtAQBXS1CbFCQ1
Oij23qXzVrPQAQG1KKHxenH68SZFnxz/cXzb8OUqC1GU49yufLL0NLkTC6hApUE9
NF1lMydzR8WLHu0XAZULf95Pk0ToPgb+X/BcIpiYMHeEYPpZxlsD8bLAr0A9yVKK
ziGIYREXhTcrJvUQNQZPIcauJ3ZMH1r3tGS0N+aQhU1nezRI0zmL9/rIcgvW86aH
/+TY+INBM32A+6mySOfBkeiPjpXnw7jbDz2Oj/attzlHs18bhqVAHFTNkzhzaqgC
VDG5653h6GCCKC+RTxIZyWHKUab3l3pwORxydIbiccJgdNavj2ZhMLo8nv/5YVqy
3T4zuMlDNY5TKVUXZudfpbn93gqPrAv1dvMjNyh5zCUyUx3BdmN1SV1SfSP6i+kB
VNHgGXzOo3VbyHUqH4atY5QKeoxzbCS+5pmtA3zZhebv8thPZLo=
=SFvu
-----END PGP SIGNATURE-----

--=-p3zACkEDhn8ruykc7ySY--



From xen-devel-bounces@lists.xenproject.org Thu May 27 07:16:05 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 07:16:05 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132842.247725 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAFL-0000D0-Ny; Thu, 27 May 2021 07:15:59 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132842.247725; Thu, 27 May 2021 07:15:59 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAFL-0000Ct-Kj; Thu, 27 May 2021 07:15:59 +0000
Received: by outflank-mailman (input) for mailman id 132842;
 Thu, 27 May 2021 07:15:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmAFJ-0000Cn-Tz
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 07:15:57 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ba0f1b88-0bec-4086-b892-35dc8f9ff801;
 Thu, 27 May 2021 07:15:57 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 3E5922190A;
 Thu, 27 May 2021 07:15:56 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 0380611A98;
 Thu, 27 May 2021 07:15:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ba0f1b88-0bec-4086-b892-35dc8f9ff801
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622099756; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=GSFiw99vkDCfksJCNjMxSg92+xJbb+84PggazZo4UNQ=;
	b=UaStFBVOwTw2qcPAOAW7vraK2N7AzDnpdx9IlNm++93MrHbQnf7BhQmtxsysPkePL/Fd4p
	efP49B6rmJDSKSd0uLKSuwCZCExtu9P4/rs4Ht+3yjMLPmWL8b1SfdODyxxk2Vz4FXdgUi
	2dkOszpTLvUBOHRE/xqgTnTI3LAfxQ0=
Subject: Re: [PATCH v2] xen/page_alloc: Remove dead code in
 alloc_domheap_pages()
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210526161129.28572-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b449be48-cded-b1a2-5086-d4d6856ed5dc@suse.com>
Date: Thu, 27 May 2021 09:15:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210526161129.28572-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.05.2021 18:11, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Since commit 1aac966e24e9 "xen: support RAM at addresses 0 and 4096",
> bits_to_zone() will never return 0 and it is expected that we have
> minimum 2 zones.
> 
> Therefore the check in alloc_domheap_pages() is unnecessary and can
> be removed. However, for sanity, it is replaced with an ASSERT().
> 
> Also take the opportunity to switch from min_t() to min() as
> bits_to_zone() cannot return a negative value. The macro is tweaked
> to make it clearer.
> 
> This bug was discovered and resolved using Coverity Static Analysis
> Security Testing (SAST) by Synopsys, Inc.
> 
> Signed-off-by: Julien Grall <jgrall@amazon.com>
> 
> ---
>     Changes in v2:
>         - Remove BUILD_BUG_ON()
>         - Switch from min_t() to min()

Since this fulfills the dependencies put in place at the time, the
Reviewed-by: Jan Beulich <jbeulich@suse.com>
continues to apply here. Thanks for making the adjustments.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 07:23:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 07:23:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132848.247736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAMU-0001cu-G8; Thu, 27 May 2021 07:23:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132848.247736; Thu, 27 May 2021 07:23:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAMU-0001cn-Cs; Thu, 27 May 2021 07:23:22 +0000
Received: by outflank-mailman (input) for mailman id 132848;
 Thu, 27 May 2021 07:23:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmAMT-0001cf-C6
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 07:23:21 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9c779728-4cec-4917-9c06-c070ab235e5a;
 Thu, 27 May 2021 07:23:20 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6D38A2190A;
 Thu, 27 May 2021 07:23:19 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 36E9511A98;
 Thu, 27 May 2021 07:23:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9c779728-4cec-4917-9c06-c070ab235e5a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622100199; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pkFIpzBnw3DwNtJs0V+aqBKdsK8tpoUaHxyqzYMjdW4=;
	b=QRafSssdVJf3pCgwoKfrjLmc2aI+1lUpUEqxLIPI92NtdOJx1F9krFa2A0VmNGcLL27PgC
	kGDJi/La/OOJmilw91lsamHdT3AHbSgolk5dutjYRxkv6i2XJ3U+pa5HcMcov05b6/otrh
	ki0DtFHqX8dIG+JK3baoyvgrJYCLH9g=
Subject: Re: [PATCH 04/13] cpufreq: Add Hardware P-State (HWP) driver
From: Jan Beulich <jbeulich@suse.com>
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-5-jandryuk@gmail.com>
 <1747789a-ab6c-cdae-ed35-a6b81ac580a9@suse.com>
Message-ID: <62a49a25-678c-f854-69e7-4af123937791@suse.com>
Date: Thu, 27 May 2021 09:23:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <1747789a-ab6c-cdae-ed35-a6b81ac580a9@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.05.2021 16:59, Jan Beulich wrote:
> On 03.05.2021 21:28, Jason Andryuk wrote:
>> +static void hdc_set_pkg_hdc_ctl(bool val)
>> +{
>> +    uint64_t msr;
>> +
>> +    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
>> +    {
>> +        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
> 
> I'm not convinced of the need of having such log messages after
> failed rdmsr/wrmsr, but I'm definitely against them getting logged
> unconditionally. In verbose mode, maybe.

Perhaps not even there, considering that recovery from faults gets
logged already anyway (see extable_fixup()), and is suitably
restricted to debug builds.
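One way to gate such messages, as a sketch (opt_verbose, hwp_verbose and
the message counter are invented for this example, not existing Xen
interfaces):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdio.h>

static bool opt_verbose;       /* e.g. set from a "verbose" option */
static unsigned int messages;  /* counts emitted messages, for the sketch */

/* Emit the rdmsr/wrmsr failure message only in verbose mode, rather
 * than unconditionally; hwp_verbose() is a hypothetical helper
 * modelled on the hwp_err() used in the patch. */
#define hwp_verbose(fmt, ...) do {              \
    if ( opt_verbose )                          \
    {                                           \
        printf("HWP: " fmt, ##__VA_ARGS__);     \
        messages++;                             \
    }                                           \
} while ( 0 )
```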

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 07:45:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 07:45:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132859.247759 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAhb-00041U-F4; Thu, 27 May 2021 07:45:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132859.247759; Thu, 27 May 2021 07:45:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAhb-00041N-BM; Thu, 27 May 2021 07:45:11 +0000
Received: by outflank-mailman (input) for mailman id 132859;
 Thu, 27 May 2021 07:45:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmAha-00041H-6k
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 07:45:10 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 15d569ad-29e7-4430-9a58-aa02a1bc0360;
 Thu, 27 May 2021 07:45:09 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6895C2190E;
 Thu, 27 May 2021 07:45:08 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 2591B11A98;
 Thu, 27 May 2021 07:45:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 15d569ad-29e7-4430-9a58-aa02a1bc0360
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622101508; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Ai0T7TEfJbqOfEVDdu1teUGkhqtkTKGVD8vLRQR8XYs=;
	b=iScg6lOSprfL+UF5wUInARzNGsf0XFxZMFU8BfzD9OWiy2lahSRixD5D83/PQz2isBdK6G
	KRe6tp9FBiRPVSjRd5+MGGcHkLwIRvIAj2K4SBTOGkj+zqsjb1fH/hO6Gs+f4cJTgksdTu
	2lp49SNJM+RgRWNeg2H5hjNylk5y7l4=
Subject: Re: [PATCH 04/13] cpufreq: Add Hardware P-State (HWP) driver
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-5-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <7fde649c-5271-72a9-5af8-b6323725d49d@suse.com>
Date: Thu, 27 May 2021 09:45:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-5-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> If HWP Energy_Performance_Preference isn't supported, the code falls
> back to IA32_ENERGY_PERF_BIAS.  Right now, we don't check
> CPUID.06H:ECX.SETBH[bit 3] before using that MSR.  The SDM reads like
> it'll be available, and I assume it was available by the time Skylake
> introduced HWP.

Upon more detailed reading of the respective SDM sections, I only
see two options: Either you fail driver initialization if the bit
is clear, or you correctly deal with the bit being clear. If Xen
runs virtualized itself, the combination of CPUID bits set may
not match that of any bare metal hardware that exists.
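The two options above can be modelled as a small decision helper (a
sketch; the enum and function names are invented, and only the
CPUID.06H:ECX bit-3 constant comes from the SDM):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* CPUID.06H:ECX bit 3 (SETBH) advertises IA32_ENERGY_PERF_BIAS. */
#define CPUID6_ECX_SETBH  (1u << 3)

enum epp_choice { USE_EPP, USE_EPB, INIT_FAIL };

/* Prefer HWP's Energy_Performance_Preference; fall back to the EPB MSR
 * only when CPUID says it exists; otherwise fail driver initialisation
 * rather than touch a possibly-absent MSR (relevant when Xen itself
 * runs virtualized and sees an unusual CPUID combination). */
static enum epp_choice choose_perf_pref(bool hwp_epp, uint32_t cpuid6_ecx)
{
    if ( hwp_epp )
        return USE_EPP;
    if ( cpuid6_ecx & CPUID6_ECX_SETBH )
        return USE_EPB;
    return INIT_FAIL;
}
```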

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 08:03:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 08:03:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132871.247783 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAyv-0006wj-KX; Thu, 27 May 2021 08:03:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132871.247783; Thu, 27 May 2021 08:03:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAyv-0006ul-FH; Thu, 27 May 2021 08:03:05 +0000
Received: by outflank-mailman (input) for mailman id 132871;
 Thu, 27 May 2021 08:03:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmAyu-0006o2-E7
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 08:03:04 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 526b6eb6-ecf7-4237-827e-579e905d04cc;
 Thu, 27 May 2021 08:03:03 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7D8C92190C;
 Thu, 27 May 2021 08:03:02 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 5D1B611A98;
 Thu, 27 May 2021 08:03:02 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 526b6eb6-ecf7-4237-827e-579e905d04cc
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622102582; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=IkW3zAf2inbBA6AE3IseVexUnjorfSHLh1YaVYufHYo=;
	b=FTOiNWCHeCHfjyO+XMWLXPJUGwO3WRAMLp69IJj/oYMD8tyycszXJz0iSjGY+m/yEIYYie
	iyv4lzu14xGIkW65JSsOcxuyFLJhSkjMq4v8iQRQC4Brp5tlHSfZRHZwKLJ3XSI9Lb7v+A
	utvcYCIu2iG+u1rjVtHR3emGu5a65lA=
Subject: Re: [PATCH 08/13] xenpm: Print HWP parameters
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-9-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <a8482e4b-1d0b-fd18-bb80-3ce4fc2459b7@suse.com>
Date: Thu, 27 May 2021 10:02:58 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-9-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> --- a/tools/misc/xenpm.c
> +++ b/tools/misc/xenpm.c
> @@ -708,6 +708,43 @@ void start_gather_func(int argc, char *argv[])
>      pause();
>  }
>  
> +static void calculate_hwp_activity_window(const xc_hwp_para_t *hwp,
> +                                          unsigned int *activity_window,
> +                                          const char **units)
> +{
> +    unsigned int mantissa = hwp->activity_window & 0x7f;
> +    unsigned int exponent = ( hwp->activity_window >> 7 ) & 0x7;

Excess blanks inside the parentheses.

> +    unsigned int multiplier = 1;
> +
> +    if ( hwp->activity_window == 0 )
> +    {
> +        *units = "hardware selected";
> +        *activity_window = 0;
> +
> +        return;
> +    }
> +
> +    if ( exponent >= 6 )
> +    {
> +        *units = "s";
> +        exponent -= 6;
> +    }
> +    else if ( exponent >= 3 )
> +    {
> +        *units = "ms";
> +        exponent -= 3;
> +    }
> +    else
> +    {
> +        *units = "us";
> +    }
> +
> +    for ( unsigned int i = 0; i < exponent; i++ )

This requires the compiler to default to C99 mode, which I don't
think we enforce just yet.
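For reference, a pre-C99-compatible shape of the same loop simply hoists the induction variable to the top of the block (the function name here is made up; only the loop body mirrors the quoted code):

```c
/*
 * Sketch of a C89-compatible rewrite of the quoted loop: declaring
 * the induction variable inside the for() header requires C99, so
 * it is declared at the top of the block instead.
 */
static unsigned int pow10_u(unsigned int exponent)
{
    unsigned int multiplier = 1;
    unsigned int i;

    for ( i = 0; i < exponent; i++ )
        multiplier *= 10;

    return multiplier;
}
```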

> +        multiplier *= 10;
> +
> +    *activity_window = mantissa * multiplier;
> +}
> +
>  /* print out parameters about cpu frequency */
>  static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
>  {
> @@ -777,6 +814,40 @@ static void print_cpufreq_para(int cpuid, struct xc_get_cpufreq_para *p_cpufreq)
>                 p_cpufreq->scaling_cur_freq);
>      }
>  
> +    if ( strcmp(p_cpufreq->scaling_governor, "hwp-internal") == 0 )
> +    {
> +        const xc_hwp_para_t *hwp = &p_cpufreq->u.hwp_para;
> +
> +        printf("hwp variables        :\n");
> +        printf("  hardware limits    : lowest [%u] most_efficient [%u]\n",
> +               hwp->hw_lowest, hwp->hw_most_efficient);
> +        printf("  hardware limits    : guaranteed [%u] highest [%u]\n",
> +               hwp->hw_guaranteed, hwp->hw_highest);
> +        printf("  configured limits  : min [%u] max [%u] energy_perf [%u]\n",
> +               hwp->minimum, hwp->maximum, hwp->energy_perf);
> +
> +        if ( hwp->hw_feature & XEN_SYSCTL_HWP_FEAT_ENERGY_PERF )
> +        {
> +            printf("  configured limits  : energy_perf [%u%s]\n",
> +                   hwp->energy_perf,
> +                   hwp->energy_perf ? "" : " hw autonomous");
> +        }
> +
> +        if ( hwp->hw_feature & XEN_SYSCTL_HWP_FEAT_ACT_WINDOW )
> +        {
> +            unsigned int activity_window;
> +            const char *units;
> +
> +            calculate_hwp_activity_window(hwp, &activity_window, &units);
> +            printf("  configured limits  : activity_window [%u %s]\n",
> +                   activity_window, units);
> +        }
> +
> +        printf("  configured limits  : desired [%u%s]\n",
> +               hwp->desired,
> +               hwp->desired ? "" : " hw autonomous");
> +    }

I suppose output readability would improve if you didn't repeat
"hardware limits :" and "configured limits :" on continuation-like
lines, but rather simply indented by enough spaces.

Also, please again omit the unnecessary pair of braces.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 08:03:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 08:03:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132870.247776 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAyv-0006oF-4A; Thu, 27 May 2021 08:03:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132870.247776; Thu, 27 May 2021 08:03:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmAyv-0006o8-0x; Thu, 27 May 2021 08:03:05 +0000
Received: by outflank-mailman (input) for mailman id 132870;
 Thu, 27 May 2021 08:03:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmAyt-0006nw-Mu
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 08:03:03 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 12a44cbd-a727-40dc-98c4-8af5f820f7d2;
 Thu, 27 May 2021 08:03:01 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 982242190B;
 Thu, 27 May 2021 08:03:00 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id CB24F11CD5;
 Thu, 27 May 2021 07:55:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 12a44cbd-a727-40dc-98c4-8af5f820f7d2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622102580; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Q8lYDPUfaNwpwzT9KadN9y6xkH7cv7N12qFSTtd2XeI=;
	b=NxTDOWhzzmtAjvRM2nG2cdOzvgq4DvL1Lw5h8vbW2pZ/l9kCc0dyXeceHKtE5EeabahmLm
	fTgo7RGt3wPzfvFzMKHARA+RMY654qzhJ6iylOL8AEQWIb3+kVkVpnauOyw8PbCBdwCxPl
	PkgU7JitPugRS24e7JcmrqpyW1QwlHs=
Subject: Re: [PATCH 06/13] cpufreq: Export HWP parameters to userspace
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-7-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e54c3aef-4c44-4302-f7f4-4f4733e33780@suse.com>
Date: Thu, 27 May 2021 09:55:16 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-7-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> --- a/xen/arch/x86/acpi/cpufreq/hwp.c
> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> @@ -523,6 +523,30 @@ static const struct cpufreq_driver __initconstrel hwp_cpufreq_driver =
>      .update = hwp_cpufreq_update,
>  };
>  
> +int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para)
> +{
> +    unsigned int cpu = policy->cpu;
> +    struct hwp_drv_data *data = hwp_drv_data[cpu];

const, perhaps also for the first function parameter, and in
general (throughout the series) whenever an item is not meant to
be modified.

> +    if ( data == NULL )
> +        return -EINVAL;
> +
> +    hwp_para->hw_feature        =
> +        feature_hwp_activity_window ? XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  : 0 |
> +        feature_hwp_energy_perf     ? XEN_SYSCTL_HWP_FEAT_ENERGY_PERF : 0;

This needs parentheses added, as | has higher precedence than ?:.
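To make the hazard concrete, here is a self-contained comparison using stand-in constants (the FEAT_* macros below are placeholders for the XEN_SYSCTL_HWP_FEAT_* values, not the real definitions):

```c
#define FEAT_ACT_WINDOW  (1 << 0)  /* stand-in for XEN_SYSCTL_HWP_FEAT_ACT_WINDOW */
#define FEAT_ENERGY_PERF (1 << 1)  /* stand-in for XEN_SYSCTL_HWP_FEAT_ENERGY_PERF */

/*
 * Without parentheses the quoted expression parses as
 *   act ? FEAT_ACT_WINDOW : ((0 | ep) ? FEAT_ENERGY_PERF : 0)
 * because | binds more tightly than ?:, so when both features are
 * present the energy_perf bit is silently dropped.
 */
static unsigned int features_wrong(int act, int ep)
{
    return act ? FEAT_ACT_WINDOW : 0 |
           ep  ? FEAT_ENERGY_PERF : 0;
}

/* Parenthesized form, combining both flags as intended. */
static unsigned int features_fixed(int act, int ep)
{
    return (act ? FEAT_ACT_WINDOW : 0) |
           (ep  ? FEAT_ENERGY_PERF : 0);
}
```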

> --- a/xen/drivers/acpi/pmstat.c
> +++ b/xen/drivers/acpi/pmstat.c
> @@ -290,6 +290,12 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
>              &op->u.get_para.u.ondemand.sampling_rate,
>              &op->u.get_para.u.ondemand.up_threshold);
>      }
> +
> +    if ( !strncasecmp(op->u.get_para.scaling_governor,
> +                      "hwp-internal", CPUFREQ_NAME_LEN) )
> +    {
> +        ret = get_hwp_para(policy, &op->u.get_para.u.hwp_para);
> +    }
>      op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);

Nit: Unnecessary parentheses again, and with the leading blank line
you also want a trailing one. (As an aside I'm also not overly happy
to see the call keyed to the governor name. Is there really no other
indication that hwp is in use?)

> --- a/xen/include/acpi/cpufreq/cpufreq.h
> +++ b/xen/include/acpi/cpufreq/cpufreq.h
> @@ -246,4 +246,7 @@ int write_userspace_scaling_setspeed(unsigned int cpu, unsigned int freq);
>  void cpufreq_dbs_timer_suspend(void);
>  void cpufreq_dbs_timer_resume(void);
>  
> +/********************** hwp hypercall helper *************************/
> +int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para);

While I can see that the excessive number of stars matches what
we have elsewhere in the header, I still wonder whether you need
to go this far for a single declaration. If you want to stick to
this, then to match the rest of the file you should follow the
comment with a blank line.

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -35,7 +35,7 @@
>  #include "domctl.h"
>  #include "physdev.h"
>  
> -#define XEN_SYSCTL_INTERFACE_VERSION 0x00000013
> +#define XEN_SYSCTL_INTERFACE_VERSION 0x00000014

As long as the size of struct xen_get_cpufreq_para doesn't change
(which from the description I infer it doesn't), I don't think you
need to bump the version - your change is a pure addition to the
interface.

> @@ -301,6 +301,23 @@ struct xen_ondemand {
>      uint32_t up_threshold;
>  };
>  
> +struct xen_hwp_para {
> +    uint16_t activity_window; /* 7bit mantissa and 3bit exponent */

If you go this far with commenting, you should also make the further
aspects clear: which bits these are, and that the exponent takes 10
as its base (in most other cases one would expect 2).
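A decode sketch making the layout explicit (the helper name is made up; the bit positions and base-10 exponent match the decoding done in the quoted xenpm code, with the result expressed in microseconds as that code assumes for exponents below 3):

```c
#include <stdint.h>

/*
 * activity_window layout: bits 6:0 hold the mantissa, bits 9:7 the
 * exponent, and the decoded value is mantissa * 10^exponent in
 * microseconds - note base 10, not the base 2 one might expect.
 */
static unsigned int act_window_us(uint16_t raw)
{
    unsigned int mantissa = raw & 0x7f;
    unsigned int exponent = (raw >> 7) & 0x7;
    unsigned int value = mantissa;

    while ( exponent-- )
        value *= 10;

    return value;
}
```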

> +#define XEN_SYSCTL_HWP_FEAT_ENERGY_PERF (1 << 0) /* energy_perf range 0-255 if
> +                                                    1. Otherwise 0-15 */
> +#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1) /* activity_window supported
> +                                                    if 1 */

Style: Comment formatting. You may want to move the comments onto
separate lines ahead of what they comment.
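I.e. something along these lines (values copied from the quoted hunk; the wording of the comments is just a suggestion):

```c
/* If set, energy_perf covers the range 0-255; otherwise 0-15. */
#define XEN_SYSCTL_HWP_FEAT_ENERGY_PERF (1 << 0)
/* If set, activity_window is supported. */
#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1)
```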

> +    uint8_t hw_feature; /* bit flags for features */
> +    uint8_t hw_lowest;
> +    uint8_t hw_most_efficient;
> +    uint8_t hw_guaranteed;
> +    uint8_t hw_highest;

Any particular reason for the recurring hw_ prefixes?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 08:16:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 08:16:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132886.247801 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmBBp-0000Z6-TU; Thu, 27 May 2021 08:16:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132886.247801; Thu, 27 May 2021 08:16:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmBBp-0000Yz-Pq; Thu, 27 May 2021 08:16:25 +0000
Received: by outflank-mailman (input) for mailman id 132886;
 Thu, 27 May 2021 08:16:24 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmBBo-0000Yp-Bq; Thu, 27 May 2021 08:16:24 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmBBo-0002wM-5j; Thu, 27 May 2021 08:16:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmBBn-0006Q7-T9; Thu, 27 May 2021 08:16:23 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmBBn-0003zb-Sc; Thu, 27 May 2021 08:16:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=URMF2PIdoR8hJG0l+Ensy5jhFzOAW0pr1HD7aTCGn9c=; b=orHRIvaY0jAMn9N/SI0/oUT2R8
	UhnWme698Y5arG1x5Lu2UiM/E/kZ/tovGYAsJ+ZR2n8a4ZBIPOz99rlAXICjNhngYjs9fEkegQTQC
	BFHrp+7rUF3MZvv5TdLZha2xWRboE4dnrPUzfU/NQto2CC5Tddx6L5nJF+VLdqa6c3/8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162170-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162170: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:heisenbug
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:xen-boot:fail:heisenbug
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=ad9f25d338605d26acedcaf3ba5fab5ca26f1c10
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 08:16:23 +0000

flight 162170 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162170/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start    fail in 162153 REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start    fail in 162153 REGR. vs. 152332
 test-arm64-arm64-xl          14 guest-start    fail in 162160 REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit2  13 debian-fixup               fail pass in 162153
 test-arm64-arm64-xl-credit1  13 debian-fixup               fail pass in 162153
 test-arm64-arm64-xl          13 debian-fixup               fail pass in 162160
 test-arm64-arm64-libvirt-xsm 13 debian-fixup               fail pass in 162160
 test-armhf-armhf-libvirt      8 xen-boot                   fail pass in 162160

Tests which did not succeed, but are not blocking:
 test-armhf-armhf-libvirt 16 saverestore-support-check fail in 162160 like 152332
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check fail in 162160 never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check fail in 162160 never pass
 test-armhf-armhf-libvirt    15 migrate-support-check fail in 162160 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                ad9f25d338605d26acedcaf3ba5fab5ca26f1c10
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  299 days
Failing since        152366  2020-08-01 20:49:34 Z  298 days  507 attempts
Testing same since   162153  2021-05-25 19:42:13 Z    1 days    3 attempts

------------------------------------------------------------
6086 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     fail    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1653453 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 27 08:33:52 2021
Date: Thu, 27 May 2021 10:33:32 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Olaf Hering
	<olaf@aepfle.de>
Subject: Re: [PATCH] x86/AMD: expose SYSCFG, TOM, and TOM2 to Dom0
Message-ID: <YK9ZXJuPk1G5SGnK@Air-de-Roger>
References: <c5764274-1257-809e-a2a7-d87b9d0fe675@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <c5764274-1257-809e-a2a7-d87b9d0fe675@suse.com>

On Wed, May 26, 2021 at 02:59:00PM +0200, Jan Beulich wrote:
> Sufficiently old Linux (3.12-ish) accesses these MSRs in an unguarded
> manner. Furthermore these MSRs, at least on Fam11 and older CPUs, are
> also consulted by modern Linux, and their (bogus) built-in zapping of
> #GP faults from MSR accesses leads to it effectively reading zero
> instead of the intended values, which are relevant for PCI BAR placement
> (which ought to all live in MMIO-type space, not in DRAM-type one).
> 
> For SYSCFG, only certain bits get exposed. In fact, whether to expose
> MtrrVarDramEn is debatable: It controls use of not just TOM, but also
> the IORRs. Introduce (consistently named) constants for the bits we're
> interested in and use them in pre-existing code as well.

For consistency I think we should also allow the hardware domain access
to the IORR MSRs (c001001{6,9}).

> As a welcome side effect, verbosity on/of debug builds gets (perhaps
> significantly) reduced.
> 
> Note that at least as far as those MSR accesses by Linux are concerned,
> there's no similar issue for DomU-s, as the accesses sit behind PCI
> device matching logic. The checked for devices would never be exposed to
> DomU-s in the first place. Nevertheless I think that at least for HVM we
> should return sensible values, not 0 (as svm_msr_read_intercept() does
> right now). The intended values may, however, need to be determined by
> hvmloader, and then get made known to Xen.

Could we maybe come up with a fixed memory layout that hvmloader would
have to respect?

Ie: DRAM from 0 to 3G, MMIO from 3G to 4G, and then the remaining
DRAM from 4G in a contiguous single block?

hvmloader would have to place BARs that don't fit in the 3G-4G hole at
the end of DRAM (ie: after TOM2).
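For illustration, with such a fixed layout the TOM/TOM2 values would follow
purely from the amount of guest RAM. A sketch, where the helper names and the
exact 3G split point are assumptions on my part, not an agreed interface:

```c
#include <stdint.h>

#define GIB (1ULL << 30)
#define LOW_MMIO_BASE (3 * GIB) /* assumed DRAM/MMIO split at 3G */

/* TOM: end of DRAM below the MMIO hole. */
static inline uint64_t fixed_layout_tom(uint64_t dram_bytes)
{
    return dram_bytes < LOW_MMIO_BASE ? dram_bytes : LOW_MMIO_BASE;
}

/*
 * TOM2: end of the single contiguous DRAM block starting at 4G.
 * Returns 0 to stand for "no high DRAM, TOM2 disabled".
 */
static inline uint64_t fixed_layout_tom2(uint64_t dram_bytes)
{
    uint64_t high = dram_bytes > LOW_MMIO_BASE ? dram_bytes - LOW_MMIO_BASE
                                               : 0;

    return high ? 4 * GIB + high : 0;
}
```

A 6G guest would then see TOM at 3G and TOM2 at 7G, with BARs that don't fit
in the 3G-4G hole placed above TOM2.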

> 
> Fixes: 322ec7c89f66 ("x86/pv: disallow access to unknown MSRs")
> Reported-by: Olaf Hering <olaf@aepfle.de>
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/cpu/amd.c
> +++ b/xen/arch/x86/cpu/amd.c
> @@ -468,14 +468,14 @@ static void check_syscfg_dram_mod_en(voi
>  		return;
>  
>  	rdmsrl(MSR_K8_SYSCFG, syscfg);
> -	if (!(syscfg & K8_MTRRFIXRANGE_DRAM_MODIFY))
> +	if (!(syscfg & SYSCFG_MTRR_FIX_DRAM_MOD_EN))
>  		return;
>  
>  	if (!test_and_set_bool(printed))
>  		printk(KERN_ERR "MTRR: SYSCFG[MtrrFixDramModEn] not "
>  			"cleared by BIOS, clearing this bit\n");
>  
> -	syscfg &= ~K8_MTRRFIXRANGE_DRAM_MODIFY;
> +	syscfg &= ~SYSCFG_MTRR_FIX_DRAM_MOD_EN;
>  	wrmsrl(MSR_K8_SYSCFG, syscfg);
>  }
>  
> --- a/xen/arch/x86/cpu/mtrr/generic.c
> +++ b/xen/arch/x86/cpu/mtrr/generic.c
> @@ -224,7 +224,7 @@ static void __init print_mtrr_state(cons
>  		uint64_t syscfg, tom2;
>  
>  		rdmsrl(MSR_K8_SYSCFG, syscfg);
> -		if (syscfg & (1 << 21)) {
> +		if (syscfg & SYSCFG_MTRR_TOM2_EN) {
>  			rdmsrl(MSR_K8_TOP_MEM2, tom2);
>  			printk("%sTOM2: %012"PRIx64"%s\n", level, tom2,
>  			       syscfg & (1 << 22) ? " (WB)" : "");
> --- a/xen/arch/x86/msr.c
> +++ b/xen/arch/x86/msr.c
> @@ -339,6 +339,19 @@ int guest_rdmsr(struct vcpu *v, uint32_t
>          *val = msrs->tsc_aux;
>          break;
>  
> +    case MSR_K8_SYSCFG:
> +    case MSR_K8_TOP_MEM1:
> +    case MSR_K8_TOP_MEM2:
> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> +            goto gp_fault;
> +        if ( !is_hardware_domain(d) )
> +            return X86EMUL_UNHANDLEABLE;

It might be clearer to also handle the !is_hardware_domain case here,
instead of deferring to svm_msr_read_intercept:

if ( is_hardware_domain(d) )
    rdmsrl(msr, *val);
else
    *val = 0;

...

> +        rdmsrl(msr, *val);
> +        if ( msr == MSR_K8_SYSCFG )
> +            *val &= (SYSCFG_TOM2_FORCE_WB | SYSCFG_MTRR_TOM2_EN |
> +                     SYSCFG_MTRR_VAR_DRAM_EN | SYSCFG_MTRR_FIX_DRAM_EN);
> +        break;
> +
>      case MSR_K8_HWCR:
>          if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
>              goto gp_fault;
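As a self-contained illustration of what the SYSCFG hunk above exposes (bit
positions copied from the patch's msr-index.h additions; the helper name is
mine, not the patch's):

```c
#include <stdint.h>

/* SYSCFG bits the patch lets Dom0 see; all other bits read as 0. */
#define SYSCFG_MTRR_FIX_DRAM_EN  (1ULL << 18)
#define SYSCFG_MTRR_VAR_DRAM_EN  (1ULL << 20)
#define SYSCFG_MTRR_TOM2_EN      (1ULL << 21)
#define SYSCFG_TOM2_FORCE_WB     (1ULL << 22)

static inline uint64_t syscfg_dom0_view(uint64_t raw)
{
    return raw & (SYSCFG_TOM2_FORCE_WB | SYSCFG_MTRR_TOM2_EN |
                  SYSCFG_MTRR_VAR_DRAM_EN | SYSCFG_MTRR_FIX_DRAM_EN);
}
```

Notably, MtrrFixDramModEn (bit 19) is filtered out, matching the
check_syscfg_dram_mod_en() clearing earlier in the patch.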
> --- a/xen/arch/x86/x86_64/mmconf-fam10h.c
> +++ b/xen/arch/x86/x86_64/mmconf-fam10h.c
> @@ -69,7 +69,7 @@ static void __init get_fam10h_pci_mmconf
>  	rdmsrl(address, val);
>  
>  	/* TOP_MEM2 is not enabled? */
> -	if (!(val & (1<<21))) {
> +	if (!(val & SYSCFG_MTRR_TOM2_EN)) {
>  		tom2 = 1ULL << 32;
>  	} else {
>  		/* TOP_MEM2 */
> --- a/xen/include/asm-x86/msr-index.h
> +++ b/xen/include/asm-x86/msr-index.h
> @@ -116,6 +116,13 @@
>  #define  PASID_PASID_MASK                   0x000fffff
>  #define  PASID_VALID                        (_AC(1, ULL) << 31)
>  
> +#define MSR_K8_SYSCFG                       0xc0010010
> +#define  SYSCFG_MTRR_FIX_DRAM_EN            (_AC(1, ULL) << 18)
> +#define  SYSCFG_MTRR_FIX_DRAM_MOD_EN        (_AC(1, ULL) << 19)
> +#define  SYSCFG_MTRR_VAR_DRAM_EN            (_AC(1, ULL) << 20)
> +#define  SYSCFG_MTRR_TOM2_EN                (_AC(1, ULL) << 21)
> +#define  SYSCFG_TOM2_FORCE_WB               (_AC(1, ULL) << 22)
> +
>  #define MSR_K8_VM_CR                        0xc0010114
>  #define  VM_CR_INIT_REDIRECTION             (_AC(1, ULL) <<  1)
>  #define  VM_CR_SVM_DISABLE                  (_AC(1, ULL) <<  4)
> @@ -279,10 +286,7 @@
>  #define MSR_K8_TOP_MEM1			0xc001001a
>  #define MSR_K7_CLK_CTL			0xc001001b
>  #define MSR_K8_TOP_MEM2			0xc001001d
> -#define MSR_K8_SYSCFG			0xc0010010
>  
> -#define K8_MTRRFIXRANGE_DRAM_ENABLE	0x00040000 /* MtrrFixDramEn bit    */
> -#define K8_MTRRFIXRANGE_DRAM_MODIFY	0x00080000 /* MtrrFixDramModEn bit */
>  #define K8_MTRR_RDMEM_WRMEM_MASK	0x18181818 /* Mask: RdMem|WrMem    */

That last one seems to be unused; I wonder if you could also drop it as
part of this cleanup?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 27 08:33:52 2021
Subject: Re: [PATCH 09/13] xen: Add SET_CPUFREQ_HWP xen_sysctl_pm_op
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-10-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <fe5d6d76-83a8-aabd-0148-949726a45ad0@suse.com>
Date: Thu, 27 May 2021 10:33:45 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-10-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> --- a/xen/arch/x86/acpi/cpufreq/hwp.c
> +++ b/xen/arch/x86/acpi/cpufreq/hwp.c
> @@ -547,6 +547,120 @@ int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para)
>      return 0;
>  }
>  
> +int set_hwp_para(struct cpufreq_policy *policy,
> +                 struct xen_set_hwp_para *set_hwp)
> +{
> +    unsigned int cpu = policy->cpu;
> +    struct hwp_drv_data *data = hwp_drv_data[cpu];
> +
> +    if ( data == NULL )
> +        return -EINVAL;
> +
> +    /* Validate all parameters first */
> +    if ( set_hwp->set_params & ~XEN_SYSCTL_HWP_SET_PARAM_MASK )
> +    {
> +        hwp_err("Invalid bits in hwp set_params %u\n",
> +                set_hwp->set_params);
> +
> +        return -EINVAL;
> +    }
> +
> +    if ( set_hwp->activity_window & ~XEN_SYSCTL_HWP_ACT_WINDOW_MASK )
> +    {
> +        hwp_err("Invalid bits in activity window %u\n",
> +                set_hwp->activity_window);
> +
> +        return -EINVAL;
> +    }
> +
> +    if ( !feature_hwp_energy_perf &&
> +         set_hwp->set_params & XEN_SYSCTL_HWP_SET_ENERGY_PERF &&

Please add parentheses around the operands of & here and ...

> +         set_hwp->energy_perf > 0xf )
> +    {
> +        hwp_err("energy_perf %u out of range for IA32_ENERGY_PERF_BIAS\n",
> +                set_hwp->energy_perf);
> +
> +        return -EINVAL;
> +    }
> +
> +    if ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_DESIRED &&

... here.

> +         set_hwp->desired != 0 &&
> +         ( set_hwp->desired < data->hw_lowest ||
> +           set_hwp->desired > data->hw_highest ) )

Excess blanks inside the inner pair of parentheses.

> +    {
> +        hwp_err("hwp desired %u is out of range (%u ... %u)\n",
> +                set_hwp->desired, data->hw_lowest, data->hw_highest);
> +
> +        return -EINVAL;
> +    }

None of these -EINVAL should be accompanied by a hwp_err, imo.

> +    /*
> +     * minimum & maximum are not validated as hardware doesn't seem to care
> +     * and the SDM says CPUs will clip internally.
> +     */
> +
> +    /* Apply presets */
> +    switch ( set_hwp->set_params & XEN_SYSCTL_HWP_SET_PRESET_MASK )
> +    {
> +    case XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE:
> +        data->minimum = data->hw_lowest;
> +        data->maximum = data->hw_lowest;
> +        data->activity_window = 0;
> +        if ( feature_hwp_energy_perf )
> +            data->energy_perf = 0xff;
> +        else
> +            data->energy_perf = 0xf;

There may want to be constants #define-d for these, and ...

> +        data->desired = 0;
> +        break;
> +    case XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE:
> +        data->minimum = data->hw_highest;
> +        data->maximum = data->hw_highest;
> +        data->activity_window = 0;
> +        data->energy_perf = 0;
> +        data->desired = 0;
> +        break;
> +    case XEN_SYSCTL_HWP_SET_PRESET_BALANCE:
> +        data->minimum = data->hw_lowest;
> +        data->maximum = data->hw_highest;
> +        data->activity_window = 0;
> +        data->energy_perf = 0x80;
> +        if ( feature_hwp_energy_perf )
> +            data->energy_perf = 0x80;
> +        else
> +            data->energy_perf = 0x7;

... since these aren't the sole instances of this kind of magic
number, there surely want to be #define-s for them (such that
the connection between the two [or more?] instances becomes
visible). Actually, the same applies to the 0xf further up, which
has a second use yet a few more lines up.
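Something along these lines, perhaps (the names are purely illustrative):

```c
/*
 * Energy/performance preference values recurring in set_hwp_para().
 * IA32_HWP_REQUEST carries an 8-bit EPP field (0 = max performance,
 * 0xff = max energy saving); the IA32_ENERGY_PERF_BIAS fallback field
 * is only 4 bits wide (0..0xf).
 */
#define HWP_EPP_PERFORMANCE       0x00
#define HWP_EPP_BALANCE           0x80
#define HWP_EPP_POWERSAVE         0xff

#define ENERGY_PERF_BIAS_BALANCE  0x7
#define ENERGY_PERF_BIAS_MAX      0xf
```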

> +        data->desired = 0;
> +        break;
> +    case XEN_SYSCTL_HWP_SET_PRESET_NONE:
> +        break;
> +    default:
> +        printk("HWP: Invalid preset value: %u\n",
> +               set_hwp->set_params & XEN_SYSCTL_HWP_SET_PRESET_MASK);
> +
> +        return -EINVAL;
> +    }

For the entire switch() - please have blank lines between (non-fall-
through, which here is all of them) case blocks.

> --- a/xen/drivers/acpi/pmstat.c
> +++ b/xen/drivers/acpi/pmstat.c
> @@ -318,6 +318,24 @@ static int set_cpufreq_gov(struct xen_sysctl_pm_op *op)
>      return __cpufreq_set_policy(old_policy, &new_policy);
>  }
>  
> +static int set_cpufreq_hwp(struct xen_sysctl_pm_op *op)
> +{
> +    struct cpufreq_policy *policy;
> +
> +    if ( !cpufreq_governor_internal )
> +        return -EINVAL;
> +
> +    policy = per_cpu(cpufreq_cpu_policy, op->cpuid);
> +
> +    if ( !policy || !policy->governor )
> +        return -EINVAL;
> +
> +    if ( strncasecmp(policy->governor->name, "hwp-internal", CPUFREQ_NAME_LEN) )

I think this recurring string literal also wants to at least gain
a #define.
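E.g. a #define plus a small helper would keep the comparison in one place
(macro and function names here are only suggestions):

```c
#include <strings.h>

#define HWP_GOV_INTERNAL_NAME "hwp-internal" /* suggested name */

static int is_hwp_governor(const char *name)
{
    /*
     * Bound the comparison by the literal's size (including the NUL),
     * analogous to what the strncasecmp() in set_cpufreq_hwp() does
     * with CPUFREQ_NAME_LEN.
     */
    return !strncasecmp(name, HWP_GOV_INTERNAL_NAME,
                        sizeof(HWP_GOV_INTERNAL_NAME));
}
```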

> @@ -465,6 +483,12 @@ int do_pm_op(struct xen_sysctl_pm_op *op)
>          break;
>      }
>  
> +    case SET_CPUFREQ_HWP:
> +    {
> +        ret = set_cpufreq_hwp(op);
> +        break;
> +    }
> +
>      case SET_CPUFREQ_PARA:
>      {
>          ret = set_cpufreq_para(op);

I think you want to insert somewhere below this one and, despite all
the odd precedents, omit the stray braces.

> --- a/xen/include/acpi/cpufreq/cpufreq.h
> +++ b/xen/include/acpi/cpufreq/cpufreq.h
> @@ -248,5 +248,7 @@ void cpufreq_dbs_timer_resume(void);
>  
>  /********************** hwp hypercall helper *************************/
>  int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para);
> +int set_hwp_para(struct cpufreq_policy *policy,
> +                 struct xen_set_hwp_para *set_hwp);

This renders the comment stale - the patch introducing it probably
can use plural right away.

> --- a/xen/include/public/sysctl.h
> +++ b/xen/include/public/sysctl.h
> @@ -318,6 +318,36 @@ struct xen_hwp_para {
>      uint8_t energy_perf;
>  };
>  
> +/* set multiple values simultaneously when set_args bit is set */
> +struct xen_set_hwp_para {
> +    uint16_t set_params; /* bitflags for valid values */
> +#define XEN_SYSCTL_HWP_SET_DESIRED              (1U << 0)
> +#define XEN_SYSCTL_HWP_SET_ENERGY_PERF          (1U << 1)
> +#define XEN_SYSCTL_HWP_SET_ACT_WINDOW           (1U << 2)
> +#define XEN_SYSCTL_HWP_SET_MINIMUM              (1U << 3)
> +#define XEN_SYSCTL_HWP_SET_MAXIMUM              (1U << 4)
> +#define XEN_SYSCTL_HWP_SET_PRESET_MASK          (0xf000)
> +#define XEN_SYSCTL_HWP_SET_PRESET_NONE          (0x0000)
> +#define XEN_SYSCTL_HWP_SET_PRESET_BALANCE       (0x1000)
> +#define XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE     (0x2000)
> +#define XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE   (0x3000)

Personally I'd prefer unnecessary parentheses (like around single
tokens) to be omitted.

> +#define XEN_SYSCTL_HWP_SET_PARAM_MASK ((uint16_t)( \

What's the reason for this cast? Wherever possible #define-d
constants should be suitable for use in preprocessor conditionals.

> +                                  XEN_SYSCTL_HWP_SET_PRESET_MASK | \
> +                                  XEN_SYSCTL_HWP_SET_DESIRED     | \
> +                                  XEN_SYSCTL_HWP_SET_ENERGY_PERF | \
> +                                  XEN_SYSCTL_HWP_SET_ACT_WINDOW  | \
> +                                  XEN_SYSCTL_HWP_SET_MINIMUM     | \
> +                                  XEN_SYSCTL_HWP_SET_MAXIMUM     ))
> +
> +    uint16_t activity_window; /* 7bit mantissa and 3bit exponent */

Since the other respective comment is to be extended, perhaps here
you can simply refer to that one?

> +#define XEN_SYSCTL_HWP_ACT_WINDOW_MASK          (0x03ff)
> +    uint8_t minimum;
> +    uint8_t maximum;
> +    uint8_t desired;
> +    uint8_t energy_perf; /* 0-255 or 0-15 depending on HW support */
> +};
> +
> +

No double blank lines please.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 08:41:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 08:41:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132912.247845 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmBaH-0004vs-Nl; Thu, 27 May 2021 08:41:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132912.247845; Thu, 27 May 2021 08:41:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmBaH-0004vl-Ku; Thu, 27 May 2021 08:41:41 +0000
Received: by outflank-mailman (input) for mailman id 132912;
 Thu, 27 May 2021 08:41:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmBaG-0004vM-8G
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 08:41:40 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 174ac669-8388-429c-9d62-f859a9daa367;
 Thu, 27 May 2021 08:41:39 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 844C81FD2E;
 Thu, 27 May 2021 08:41:38 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 5CB2511A98;
 Thu, 27 May 2021 08:41:38 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 174ac669-8388-429c-9d62-f859a9daa367
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622104898; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=v7ywD3BnX4zshb6hrMRP6r3vcXF6xNj4e2+3UTSsh0k=;
	b=eYU/pSBqBQwegHeDDOviNt2W8ElwV+6zmT0afiDmf43aqf7DY9MZl6PBkyzH8AIIKtIfvl
	v2T5ZpySgHmK0JoGgcGPCKjW73oWk7+mokWOGPI+LRTjOZDa4QMq5Rzz+tC1QVf3lRVUDq
	f9ZnIVG2CnBb9CzKSI6U/pKPQR2b6UM=
Subject: Re: [PATCH 11/13] xenpm: Factor out a non-fatal cpuid_parse variant
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-12-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <61982f4e-beeb-c37e-817a-f4492e848587@suse.com>
Date: Thu, 27 May 2021 10:41:34 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-12-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> Allow cpuid_parse to be re-used without terminating xenpm.  HWP
> will re-use it to optionally parse a cpuid.

I'm not convinced of the need for this patch: The next one could
easily put an isdigit() check in place to tell cpuid from other
arguments.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 08:48:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 08:48:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132920.247863 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmBgp-0005eC-L0; Thu, 27 May 2021 08:48:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132920.247863; Thu, 27 May 2021 08:48:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmBgp-0005e5-Fr; Thu, 27 May 2021 08:48:27 +0000
Received: by outflank-mailman (input) for mailman id 132920;
 Thu, 27 May 2021 08:48:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ln4B=KW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmBgn-0005dz-KM
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 08:48:25 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id eb2aa845-59ba-4295-bba0-b619b18ac5c7;
 Thu, 27 May 2021 08:48:24 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: eb2aa845-59ba-4295-bba0-b619b18ac5c7
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622105304;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=Beyw+hEbY9USWAxOm7hXs8j+8jjcDpFd9xQhJ9P4pFE=;
  b=HZH7C0LfWwwQaWP1OHMJmKTNdOEjJytaUIrMurA9kfuFO8RXAP3cTe25
   W5K/1VRzWIRAe9oCMCI5t7rxfVzpHFvsSn5tWsxpFQ81YTsSyH9Ig7A6V
   OoezmEKnBaheaupw62Ze0zFSY9ciNnJmHhua8wAXAJsxfF7/T+BJzmxdJ
   o=;
Authentication-Results: esa3.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com

X-SBRS: 5.1
X-MesageID: 44750877
X-Ironport-Server: esa3.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.82,334,1613451600"; 
   d="scan'208";a="44750877"
Date: Thu, 27 May 2021 10:48:16 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, "Stefano Stabellini" <sstabellini@kernel.org>, Wei Liu
	<wl@xen.org>
Subject: Re: [PATCH] x86: make hypervisor build with gcc11
Message-ID: <YK9c0DaIEo7uZ5Gk@Air-de-Roger>
References: <ca7a78e5-2ee9-4109-7905-3b9186475f3d@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ca7a78e5-2ee9-4109-7905-3b9186475f3d@suse.com>
X-ClientProxiedBy: PR3P193CA0029.EURP193.PROD.OUTLOOK.COM
 (2603:10a6:102:50::34) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: f7bf46f9-12bd-4bed-6337-08d920ec2de3
X-MS-TrafficTypeDiagnostic: DM6PR03MB4681:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB4681BFFFCC1413657856EE188F239@DM6PR03MB4681.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-OriginatorOrg: citrix.com

On Wed, May 19, 2021 at 05:39:50PM +0200, Jan Beulich wrote:
> Gcc 11 looks to make incorrect assumptions about valid ranges that
> pointers may be used for addressing when they are derived from e.g. a
> plain constant. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100680.
> 
> Utilize RELOC_HIDE() to work around the issue, which for x86 manifests
> in at least
> - mpparse.c:efi_check_config(),
> - tboot.c:tboot_probe(),
> - tboot.c:tboot_gen_frametable_integrity(),
> - x86_emulate.c:x86_emulate() (at -O2 only).
> The last case is particularly odd not just because it only triggers at
> higher optimization levels, but also because it only affects one of at
> least three similar constructs. Various "note" diagnostics claim the
> valid index range to be [0, 2⁶³-1].
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

This is all quite ugly, but I don't have any recommendation short of
getting gcc fixed (or being able to disable those heuristics).

> 
> --- a/tools/tests/x86_emulator/x86-emulate.c
> +++ b/tools/tests/x86_emulator/x86-emulate.c
> @@ -8,6 +8,13 @@
>  
>  #define ERR_PTR(val) NULL
>  
> +/* See gcc bug 100680, but here don't bother making this version dependent. */

Might be worth also referencing 99578, which seems to be the parent
bug? (as 100680 has been closed as a duplicate)

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 27 08:52:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 08:52:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132928.247874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmBlC-00070k-6L; Thu, 27 May 2021 08:52:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132928.247874; Thu, 27 May 2021 08:52:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmBlC-00070d-2H; Thu, 27 May 2021 08:52:58 +0000
Received: by outflank-mailman (input) for mailman id 132928;
 Thu, 27 May 2021 08:52:56 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=+dwL=KW=darkstar.site=sakib@srs-us1.protection.inumbo.net>)
 id 1lmBlA-00070X-DZ
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 08:52:56 +0000
Received: from pb-smtp2.pobox.com (unknown [64.147.108.71])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 1113c13b-c9a8-4107-834e-9619cb4dc8b0;
 Thu, 27 May 2021 08:52:54 +0000 (UTC)
Received: from pb-smtp2.pobox.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 90AF6CEDCB;
 Thu, 27 May 2021 04:52:54 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from pb-smtp2.nyi.icgroup.com (unknown [127.0.0.1])
 by pb-smtp2.pobox.com (Postfix) with ESMTP id 86F59CEDCA;
 Thu, 27 May 2021 04:52:54 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
Received: from localhost (unknown [95.67.114.216])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by pb-smtp2.pobox.com (Postfix) with ESMTPSA id C2A06CEDC9;
 Thu, 27 May 2021 04:52:53 -0400 (EDT)
 (envelope-from sakib@darkstar.site)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1113c13b-c9a8-4107-834e-9619cb4dc8b0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed; d=pobox.com; h=from:to:cc
	:subject:date:message-id:mime-version:content-transfer-encoding;
	 s=sasl; bh=pSkglQJM+7qceceFKKwFTWVbzJdX1y2GYzYnRFShvQs=; b=VdZR
	TxT1GbOMZuEC3XnM3lC/+uRDjNcy8WKRV68el2EiGzrWgqLyn4762nB/q17AzZeW
	1tRM5qdoiIHBBSp4JxUYde6Cm081/YUHwplJbnqyYcZkVu72iYllj7FVDuaPiVVH
	EfFUchIj7/FwIpmwNagz/EELW7ye7sQgWuL/E4Q=
From: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
To: xen-devel@lists.xenproject.org
Cc: Ian Jackson <ian.jackson@eu.citrix.com>,
	Wei Liu <wl@xen.org>,
	Sergiy Kibrik <Sergiy_Kibrik@epam.com>,
	Julien Grall <julien@xen.org>
Subject: [XEN PATCH v2] libxl/arm: provide guests with random seed
Date: Thu, 27 May 2021 08:52:33 +0000
Message-Id: <20210527085233.69917-1-Sergiy_Kibrik@epam.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
X-Pobox-Relay-ID:
 ECA8B4A0-BEC8-11EB-ADE8-FD8818BA3BAF-90055647!pb-smtp2.pobox.com
Content-Transfer-Encoding: quoted-printable

Pass 128 bytes of random seed via FDT, so that guests' CRNGs are better
seeded early at boot. This is larger than the ChaCha20 key size of 32, so
each byte of CRNG state will be mixed 4 times using this seed. There does
not seem to be an advantage in a larger seed though.

Depending on its configuration Linux can use the seed as device randomness
or to just quickly initialize its CRNG.
In either case this will provide extra randomness to further harden the CRNG.

CC: Julien Grall <julien@xen.org>
Signed-off-by: Sergiy Kibrik <Sergiy_Kibrik@epam.com>
---
 tools/libxl/libxl_arm.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/tools/libxl/libxl_arm.c b/tools/libxl/libxl_arm.c
index 34f8a29056..d3a4a72fb7 100644
--- a/tools/libxl/libxl_arm.c
+++ b/tools/libxl/libxl_arm.c
@@ -304,6 +304,9 @@ static int make_chosen_node(libxl__gc *gc, void *fdt, bool ramdisk,
 {
     int res;
 
+    /* 1024 bit enough to mix Linux CRNG state several times */
+    uint8_t seed[128];
+
     /* See linux Documentation/devicetree/... */
     res = fdt_begin_node(fdt, "chosen");
     if (res) return res;
@@ -342,6 +345,11 @@ static int make_chosen_node(libxl__gc *gc, void *fdt, bool ramdisk,
         if (res) return res;
     }
 
+    res = libxl__random_bytes(gc, seed, sizeof(seed));
+    if (res) return res;
+    res = fdt_property(fdt, "rng-seed", seed, sizeof(seed));
+    if (res) return res;
+
     res = fdt_end_node(fdt);
     if (res) return res;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Thu May 27 09:46:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 09:46:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132944.247908 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmCaU-0003X0-IR; Thu, 27 May 2021 09:45:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132944.247908; Thu, 27 May 2021 09:45:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmCaU-0003Wt-Fd; Thu, 27 May 2021 09:45:58 +0000
Received: by outflank-mailman (input) for mailman id 132944;
 Thu, 27 May 2021 09:45:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmCaT-0003Wn-Kg
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 09:45:57 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e7d14f91-6856-4b89-9659-88547c01e8f3;
 Thu, 27 May 2021 09:45:56 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B23E42190B;
 Thu, 27 May 2021 09:45:55 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 83F2611A98;
 Thu, 27 May 2021 09:45:55 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e7d14f91-6856-4b89-9659-88547c01e8f3
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622108755; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ZMfg0ZqAjEjaoCXvWTP+GjBsqLLG3fjL71ifeFrWL6I=;
	b=sp1a2HC7UbuUGf5kQsnOB9sQ0TqJzNc+AWUsNqDtyqLsd8/aFXk6twDGiwEaRrz3jtZYd1
	aZGFo7mlbi5PXXWBDGtxfyP79mtqzdz11a9Ky2ACz8TWl1bwNYcA0h1t515u75kTDUTNb9
	jm7RwEmS7gY8UYR5DS3nO6bwvXzO+fs=
Subject: Re: [PATCH 10/13] libxc: Add xc_set_cpufreq_hwp
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-11-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <8871a5b3-c03d-8438-b228-b1699c5b2747@suse.com>
Date: Thu, 27 May 2021 11:45:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-11-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> --- a/tools/libs/ctrl/xc_pm.c
> +++ b/tools/libs/ctrl/xc_pm.c
> @@ -330,6 +330,24 @@ int xc_set_cpufreq_para(xc_interface *xch, int cpuid,
>      return xc_sysctl(xch, &sysctl);
>  }
>  
> +int xc_set_cpufreq_hwp(xc_interface *xch, int cpuid,
> +                       xc_set_hwp_para_t *set_hwp)

Besides general considerations, for xenpm to legitimately pass
the same struct instance into this function multiple times, the
last parameter wants to be pointer-to-const, declaring the intent
of the function to leave the struct unaltered.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 09:46:27 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 09:46:27 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132949.247920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmCax-00046W-0l; Thu, 27 May 2021 09:46:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132949.247920; Thu, 27 May 2021 09:46:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmCaw-00046O-Te; Thu, 27 May 2021 09:46:26 +0000
Received: by outflank-mailman (input) for mailman id 132949;
 Thu, 27 May 2021 09:46:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmCav-00041T-LM
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 09:46:25 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9eb8b082-cef8-4f80-97fe-e8d01bdf6b68;
 Thu, 27 May 2021 09:46:24 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8F3341FD2A;
 Thu, 27 May 2021 09:46:23 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 65C3711A98;
 Thu, 27 May 2021 09:46:23 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9eb8b082-cef8-4f80-97fe-e8d01bdf6b68
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622108783; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QId8BT6DxwCBl1FQmEvd9QHkNqykfSxBrDNbOJEgaEc=;
	b=BrJ14tw6LA0TGoEGyrn/LA6JzZwU1ngVfz9ggwQS62E3KAVpxGf+YCP4KNjR2hPM08CUFP
	G1A8JWgaBV3UdOcAvV9/fu7PaurzhLsZH+WbVoBhat+Jsq7wgGH9+ZoEpEoESQk/BkeEbj
	fdXkIgW+u2WMT5C9u36UG+YP6MB9//A=
Subject: Re: [PATCH 12/13] xenpm: Add set-cpufreq-hwp subcommand
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-13-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <641bb656-ab47-5125-3660-fb9aa342800c@suse.com>
Date: Thu, 27 May 2021 11:46:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-13-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> @@ -1309,6 +1328,226 @@ void disable_turbo_mode(int argc, char *argv[])
>                  errno, strerror(errno));
>  }
>  
> +/*
> + * Parse activity_window:NNN{us,ms,s} and validate range.
> + *
> + * Activity window is a 7bit mantissa (0-127) with a 3bit exponent (0-7) base
> + * 10 in microseconds.  So the range is 1 microsecond to 1270 seconds.  A value
> + * of 0 lets the hardware autonomously select the window.
> + *
> + * Return 0 on success
> + *       -1 on error
> + *        1 Not activity_window. i.e. try parsing as another argument
> + */
> +static int parse_activity_window(xc_set_hwp_para_t *set_hwp, char *p)
> +{
> +    char *param = NULL, *val = NULL, *suffix = NULL;
> +    unsigned int u;
> +    unsigned int exponent = 0;
> +    unsigned int multiplier = 1;
> +    int ret;
> +
> +    ret = sscanf(p, "%m[a-z_A-Z]:%ms", &param, &val);

I have to confess that I first needed to look up the availability of the
m modifier. It looks to be in POSIX.1-2008, but not in any version of
ISO C (C11 or older). I'm therefore not sure you can legitimately use
it; I've not been able to spot any pre-existing uses throughout tools/.
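
FWIW, a portable sketch avoiding the m modifier might use width-limited
conversions into fixed-size buffers instead (the helper name, the 31-char
width, and the '-'-accepting character class are illustrative, not taken
from the patch):

```c
#include <stdio.h>

/* Hedged sketch (not the patch's code): split "name:value" using
 * width-limited conversions rather than the POSIX.1-2008-only 'm'
 * (assignment-allocation) modifier, and accept '-' in option names.
 * Both buffers must be at least 32 bytes to match the 31-char widths. */
static int split_param(const char *p, char param[32], char val[32])
{
    return sscanf(p, "%31[a-zA-Z-]:%31s", param, val) == 2 ? 0 : -1;
}
```

This trades the dynamic allocation (and the free() bookkeeping) for a
fixed upper bound on parameter-name length, which seems acceptable for a
command line parser.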

Also, following the naming of this tool's other options (including the
new set-cpufreq-hwp subcommand you add here), options should use -
instead of _ (and the pattern here, as well as in the other similar
sscanf() further down, then wants adjusting accordingly).

> +    if ( ret != 2 )
> +    {
> +        return -1;

No error message at all in this case?

> +    }
> +
> +    if ( strncasecmp(param, "act", 3) != 0 )
> +    {
> +        ret = 1;
> +
> +        goto out;
> +    }
> +
> +    free(param);
> +    param = NULL;
> +
> +    ret = sscanf(val, "%u%ms", &u, &suffix);

Can't you parse this right in the first sscanf()?

> +    if ( ret != 1 && ret != 2 )
> +    {
> +        fprintf(stderr, "invalid activity window: %s\n", val);
> +
> +        ret = -1;
> +
> +        goto out;
> +    }
> +
> +    if ( ret == 2 && suffix )

The help text doesn't clarify what an omitted suffix means; it's
unambiguous only when the value is zero. (While looking at that I
also started wondering whether the range there starting at 0us is
actually appropriate - the range really starts at 1us afaict, with
0 having special meaning.)

> +    {
> +        if ( strcasecmp(suffix, "s") == 0 )
> +        {
> +            multiplier = 1000 * 1000;
> +            exponent = 6;
> +        }
> +        else if ( strcasecmp(suffix, "ms") == 0 )
> +        {
> +            multiplier = 1000;
> +            exponent = 3;
> +        }
> +        else if ( strcasecmp(suffix, "us") == 0 )
> +        {
> +            multiplier = 1;
> +            exponent = 0;
> +        }
> +        else
> +        {
> +            fprintf(stderr, "invalid activity window units: %s\n", suffix);

I think you generally want to quote %s in such cases, to make clear
what is actually part of the malformed string.

> +            ret = -1;
> +            goto out;
> +        }
> +    }
> +
> +    if ( u > 1270 * 1000 * 1000 / multiplier )
> +    {
> +        fprintf(stderr, "activity window %s too large\n", val);
> +
> +        ret = -1;
> +        goto out;
> +    }
> +
> +    /* looking for 7 bits of mantissa and 3 bits of exponent */
> +    while ( u > 127 )

Prior to this loop, don't you need to multiply by "multiplier"?

> +    {
> +        u /= 10;

Fractions get silently truncated - this may want spelling out in
the help text.

> +        exponent += 1;
> +    }
> +
> +    set_hwp->activity_window = ( exponent & 0x7 ) << 7 | ( u & 0x7f );

Excess blanks inside parentheses again.
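
For reference, the encoding the comment block describes (window =
mantissa * 10^exponent microseconds, 7-bit mantissa, 3-bit exponent) can
be sketched purely in microseconds; the unit suffix can then be folded in
either by multiplying the value or, equivalently, by pre-seeding the
exponent, since both multipliers are powers of ten (helper name is
illustrative, not the patch's code):

```c
#include <stdint.h>

/* Illustrative sketch of the activity window encoding: the caller is
 * assumed to have range-checked the input (at most 1270 * 10^6 us),
 * as the patch does, so the exponent cannot exceed 7. */
static uint16_t encode_window_us(unsigned long us)
{
    unsigned int exponent = 0;

    /* Normalize to a 7-bit mantissa; fractions are truncated. */
    while ( us > 127 )
    {
        us /= 10;
        exponent++;
    }

    return (exponent & 0x7) << 7 | (us & 0x7f);
}
```

E.g. 1000us normalizes to mantissa 100 with exponent 1, so 100 * 10^1 us.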

> +static int parse_hwp_opts(xc_set_hwp_para_t *set_hwp, int *cpuid,
> +                          int argc, char *argv[])
> +{
> +    int i = 0;
> +
> +    if ( argc < 1 )
> +        return -1;
> +
> +    if ( parse_cpuid_non_fatal(argv[i], cpuid) == 0 )
> +    {
> +        i++;
> +    }

Unnecessary braces again, the more that you ...

> +    if ( i == argc )
> +        return -1;

... don't have any here.

> +    if ( strcasecmp(argv[i], "powersave") == 0 )
> +    {
> +        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_POWERSAVE;
> +        i++;
> +    }
> +    else if ( strcasecmp(argv[i], "performance") == 0 )
> +    {
> +        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_PERFORMANCE;
> +        i++;
> +    }
> +    else if ( strcasecmp(argv[i], "balance") == 0 )
> +    {
> +        set_hwp->set_params = XEN_SYSCTL_HWP_SET_PRESET_BALANCE;
> +        i++;
> +    }
> +
> +    for ( ; i < argc; i++)
> +    {
> +        unsigned int val;
> +        char *param;
> +        int ret;
> +
> +        ret = parse_activity_window(set_hwp, argv[i]);
> +        switch ( ret )
> +        {
> +        case -1:
> +            return -1;
> +        case 0:
> +            continue;
> +            break;

Why "break" after "continue"? I can see compilers legitimately warning
in such a case.

> +        case 1:

This might better be "default:", or could be omitted altogether.
Alternatively you may want to keep it and add a "default:" with assert().

> +            /* try other parsing */
> +            break;
> +        }
> +
> +        /* sscanf can't handle split on ':' for "%ms:%u'  */
> +        ret = sscanf(argv[i], "%m[a-zA-Z_]:%u", &param, &val);
> +        if ( ret != 2 )
> +        {
> +            fprintf(stderr, "%s is an invalid hwp parameter.\n", argv[i]);

Outside of this function you omit full stops from error messages.
Elsewhere in the tool full stops are also absent except in two or
three deprecation warnings. Hence I think you want to drop them
from messages in this function.

> +            return -1;
> +        }
> +
> +        if ( val > 255 )
> +        {
> +            fprintf(stderr, "%s value %u is out of range.\n", param, val);
> +            return -1;
> +        }
> +
> +        if ( strncasecmp(param, "min", 3) == 0 )
> +        {
> +            set_hwp->minimum = val;
> +            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_MINIMUM;
> +        }
> +        else if ( strncasecmp(param, "max", 3) == 0 )
> +        {
> +            set_hwp->maximum = val;
> +            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_MAXIMUM;
> +        }
> +        else if ( strncasecmp(param, "des", 3) == 0 )
> +        {
> +            set_hwp->desired = val;
> +            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_DESIRED;
> +        }
> +        else if ( strncasecmp(param, "ene", 3) == 0 )
> +        {
> +            set_hwp->energy_perf = val;
> +            set_hwp->set_params |= XEN_SYSCTL_HWP_SET_ENERGY_PERF;
> +        }

While I can see the point of comparing just 3 characters, it would be
better not to also accept longer but typoed strings.

> +        else
> +        {
> +            fprintf(stderr, "%s is an invalid parameter\n.", param);
> +            return -1;
> +        }
> +
> +        free(param);
> +    }
> +
> +    return 0;

Should you perhaps return an error here if set_hwp->set_params is
still zero?

> +}
> +
> +static void hwp_set_func(int argc, char *argv[])
> +{
> +    xc_set_hwp_para_t set_hwp = {};
> +    int cpuid = -1;
> +    int i = 0;
> +
> +    if ( parse_hwp_opts(&set_hwp, &cpuid, argc, argv) )
> +    {
> +        fprintf(stderr, "Missing, excess, or invalid argument(s)\n");

Isn't this redundant with earlier logged messages, which are also
more specific (with the one exception noted)?

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 09:48:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 09:48:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132955.247931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmCcg-0004mD-Da; Thu, 27 May 2021 09:48:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132955.247931; Thu, 27 May 2021 09:48:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmCcg-0004m6-9z; Thu, 27 May 2021 09:48:14 +0000
Received: by outflank-mailman (input) for mailman id 132955;
 Thu, 27 May 2021 09:48:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmCce-0004m0-JZ
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 09:48:12 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f656b06c-e4f9-42b8-b98d-8674a95abfa7;
 Thu, 27 May 2021 09:48:12 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 301211FD2A;
 Thu, 27 May 2021 09:48:11 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 0C2C111A98;
 Thu, 27 May 2021 09:48:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f656b06c-e4f9-42b8-b98d-8674a95abfa7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622108891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Q3RBQZA7XATzEM1PM6aRkX9Xckj5rXUyPGUuRjETGGs=;
	b=a0pXL2zqqGjBmoc8IFxIIk8NPK1rC9MiQzIXjDsRjA+DkTQy5QVSGUKaKUnPv4mhAYdS/Q
	EfgY/CPy+4+c1mediCsfkxgJSn/A9kuCAmBkpS+sMJDBddaWzZRrWZsq6b9E7JXb+GFkdS
	5UK9WX5m8IGJMLAusdHzWIl2QEj2aUI=
Subject: Re: [PATCH 13/13] CHANGELOG: Add Intel HWP entry
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Ian Jackson <iwj@xenproject.org>,
 Community Manager <community.manager@xenproject.org>,
 xen-devel@lists.xenproject.org
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-14-jandryuk@gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <cae56f66-97e0-7fb1-df82-46daaf6b4538@suse.com>
Date: Thu, 27 May 2021 11:48:10 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210503192810.36084-14-jandryuk@gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 03.05.2021 21:28, Jason Andryuk wrote:
> --- a/CHANGELOG.md
> +++ b/CHANGELOG.md
> @@ -5,6 +5,8 @@ Notable changes to Xen will be documented in this file.
>  The format is based on [Keep a Changelog](https://keepachangelog.com/en/1.0.0/)
>  
>  ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
> +### Added / support upgraded
> + - Intel Hardware P-States (HWP) cpufreq driver

Please add a blank line between the ## and ### lines, to match what's
already there for earlier versions.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 09:50:53 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 09:50:53 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132964.247942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmCfE-0006AT-QI; Thu, 27 May 2021 09:50:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132964.247942; Thu, 27 May 2021 09:50:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmCfE-0006AM-NN; Thu, 27 May 2021 09:50:52 +0000
Received: by outflank-mailman (input) for mailman id 132964;
 Thu, 27 May 2021 09:50:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmCfE-0006AG-8J
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 09:50:52 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0bfd6ba1-dd7d-4596-9894-92cbd6323e99;
 Thu, 27 May 2021 09:50:51 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 474742190F;
 Thu, 27 May 2021 09:50:50 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 110CB11A98;
 Thu, 27 May 2021 09:50:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0bfd6ba1-dd7d-4596-9894-92cbd6323e99
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622109050; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rzzDsjsEgjnFSsWpRC2O8ErBfZLfcCVGraGfck+zcSQ=;
	b=Ie7NPLuTgvg1f5yI3dNxBet8vkw2Hebqk60i8v6KoG0kCYzQppfwSWVbhANpeBmKSKIg+h
	4om0t6TsUyNLbtZQ5R0io390KMVlBrLomnE+ETZnQeoRcLKfRV5fNJ4h6ULxOZfy5yHjU4
	OkHrT2+ejl3s0ntkYgoSxjh44w0/Asg=
Subject: Re: [PATCH] x86: make hypervisor build with gcc11
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <ca7a78e5-2ee9-4109-7905-3b9186475f3d@suse.com>
 <YK9c0DaIEo7uZ5Gk@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c72414eb-dece-92e5-2f7d-0c80bb9a9982@suse.com>
Date: Thu, 27 May 2021 11:50:46 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YK9c0DaIEo7uZ5Gk@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.05.2021 10:48, Roger Pau Monné wrote:
> On Wed, May 19, 2021 at 05:39:50PM +0200, Jan Beulich wrote:
>> Gcc 11 looks to make incorrect assumptions about valid ranges that
>> pointers may be used for addressing when they are derived from e.g. a
>> plain constant. See https://gcc.gnu.org/bugzilla/show_bug.cgi?id=100680.
>>
>> Utilize RELOC_HIDE() to work around the issue, which for x86 manifests
>> in at least
>> - mpparse.c:efi_check_config(),
>> - tboot.c:tboot_probe(),
>> - tboot.c:tboot_gen_frametable_integrity(),
>> - x86_emulate.c:x86_emulate() (at -O2 only).
>> The last case is particularly odd not just because it only triggers at
>> higher optimization levels, but also because it only affects one of at
>> least three similar constructs. Various "note" diagnostics claim the
>> valid index range to be [0, 2⁶³-1].
>>
>> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

> This is all quite ugly, but I don't have any recommendation short of
> getting gcc fixed (or being able to disable those heuristics).

Indeed.

>> --- a/tools/tests/x86_emulator/x86-emulate.c
>> +++ b/tools/tests/x86_emulator/x86-emulate.c
>> @@ -8,6 +8,13 @@
>>  
>>  #define ERR_PTR(val) NULL
>>  
>> +/* See gcc bug 100680, but here don't bother making this version dependent. */
> 
> Might be worth also referencing 99578 which seems to be the parent
> bug? (as 100680 has been closed as a duplicate)

Anyone going there will immediately find the xref to that supposed
parent bug. Personally I'm not convinced of it truly being a
duplicate.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 10:42:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 10:42:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132985.247973 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmDSi-0002gT-1v; Thu, 27 May 2021 10:42:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132985.247973; Thu, 27 May 2021 10:42:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmDSh-0002gM-Ut; Thu, 27 May 2021 10:41:59 +0000
Received: by outflank-mailman (input) for mailman id 132985;
 Thu, 27 May 2021 10:41:58 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmDSg-0002gG-El
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 10:41:58 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id f5a414f9-2136-4554-9411-4974fa7b27ca;
 Thu, 27 May 2021 10:41:57 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 39BC52190B;
 Thu, 27 May 2021 10:41:56 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 13EA611A98;
 Thu, 27 May 2021 10:41:56 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f5a414f9-2136-4554-9411-4974fa7b27ca
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622112116; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/QiFQpJzn8z0jngnLO0NpnXw5ZlFo2Hax+6X6e/vYpY=;
	b=n79tettru6RTN7r9rbbx7T4/gxjUB2pigbB5bxVA9N6Q16tEG+4OhrRWmi7tYtnSlFGJtE
	jJrWtwPpnWcXrfiUcpGjwdjjhBzbSwV5E9xOAqcMsBL8wLYnMglNhZfZndzqWZWkbd3crf
	xCHEdf/Nr2O5jwBniOFGD6hRL0TOm8E=
Subject: Re: [PATCH] x86/AMD: expose SYSCFG, TOM, and TOM2 to Dom0
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Olaf Hering <olaf@aepfle.de>
References: <c5764274-1257-809e-a2a7-d87b9d0fe675@suse.com>
 <YK9ZXJuPk1G5SGnK@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b6693807-95cb-7925-587d-1e1e2db8c798@suse.com>
Date: Thu, 27 May 2021 12:41:51 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YK9ZXJuPk1G5SGnK@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.05.2021 10:33, Roger Pau Monné wrote:
> On Wed, May 26, 2021 at 02:59:00PM +0200, Jan Beulich wrote:
>> Sufficiently old Linux (3.12-ish) accesses these MSRs in an unguarded
>> manner. Furthermore these MSRs, at least on Fam11 and older CPUs, are
>> also consulted by modern Linux, and their (bogus) built-in zapping of
>> #GP faults from MSR accesses leads to it effectively reading zero
>> instead of the intended values, which are relevant for PCI BAR placement
>> (which ought to all live in MMIO-type space, not in DRAM-type one).
>>
>> For SYSCFG, only certain bits get exposed. In fact, whether to expose
>> MtrrVarDramEn is debatable: It controls use of not just TOM, but also
>> the IORRs. Introduce (consistently named) constants for the bits we're
>> interested in and use them in pre-existing code as well.
> 
> I think we should also allow access to the IORRs MSRs for coherency
> (c001001{6,9}) for the hardware domain.

Hmm, originally I was under the impression that these could conceivably
be written by OSes, and hence would want dealing with separately. But
upon re-reading I see that they are supposed to be set by the BIOS alone.
So yes, let me add them for read access, taking care of the limitation
that I had to spell out.

This raises the question then though whether to also include SMMAddr
and SMMMask in the set - the former does get accessed by Linux as well,
and was one of the reasons for needing 6eef0a99262c ("x86/PV:
conditionally avoid raising #GP for early guest MSR reads").

Especially for SMMAddr, and maybe also for IORR_BASE, returning zero
for DomU-s might be acceptable. The respective masks, however, can
imo not sensibly be returned as zero. Hence even there I'd leave DomU
side handling (see below) for a later time.

>> As a welcome side effect, verbosity on/of debug builds gets (perhaps
>> significantly) reduced.
>>
>> Note that at least as far as those MSR accesses by Linux are concerned,
>> there's no similar issue for DomU-s, as the accesses sit behind PCI
>> device matching logic. The checked for devices would never be exposed to
>> DomU-s in the first place. Nevertheless I think that at least for HVM we
>> should return sensible values, not 0 (as svm_msr_read_intercept() does
>> right now). The intended values may, however, need to be determined by
>> hvmloader, and then get made known to Xen.
> 
> Could we maybe come up with a fixed memory layout that hvmloader had
> to respect?
> 
> Ie: DRAM from 0 to 3G, MMIO from 3G to 4G, and then the remaining
> DRAM from 4G in a contiguous single block?
> 
> hvmloader would have to place BARs that don't fit in the 3G-4G hole at
> the end of DRAM (ie: after TOM2).

Such a fixed scheme may be too limiting, I'm afraid.

>> --- a/xen/arch/x86/msr.c
>> +++ b/xen/arch/x86/msr.c
>> @@ -339,6 +339,19 @@ int guest_rdmsr(struct vcpu *v, uint32_t
>>          *val = msrs->tsc_aux;
>>          break;
>>  
>> +    case MSR_K8_SYSCFG:
>> +    case MSR_K8_TOP_MEM1:
>> +    case MSR_K8_TOP_MEM2:
>> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
>> +            goto gp_fault;
>> +        if ( !is_hardware_domain(d) )
>> +            return X86EMUL_UNHANDLEABLE;
> 
> It might be clearer to also handle the !is_hardware_domain case here,
> instead of deferring to svm_msr_read_intercept:
> 
> if ( is_hardware_domain(d) )
>     rdmsrl(msr, *val);
> else
>     *val = 0;

As said in the post-commit-message remark, I don't think returning 0
here is appropriate. I'd be willing to move DomU handling here, but
only once it's sane.

>> @@ -279,10 +286,7 @@
>>  #define MSR_K8_TOP_MEM1			0xc001001a
>>  #define MSR_K7_CLK_CTL			0xc001001b
>>  #define MSR_K8_TOP_MEM2			0xc001001d
>> -#define MSR_K8_SYSCFG			0xc0010010
>>  
>> -#define K8_MTRRFIXRANGE_DRAM_ENABLE	0x00040000 /* MtrrFixDramEn bit    */
>> -#define K8_MTRRFIXRANGE_DRAM_MODIFY	0x00080000 /* MtrrFixDramModEn bit */
>>  #define K8_MTRR_RDMEM_WRMEM_MASK	0x18181818 /* Mask: RdMem|WrMem    */
> 
> That last one seem to be unused, I wonder if you could also drop it as
> part of this cleanup?

It's for an unrelated set of MSRs, so I thought I'd leave it alone
here. Otoh the value is at best bogus anyway, as it assumes both
halves of fixed-size MTRRs get accessed separately. So I guess
since you ask for it, I'll drop it at this occasion.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 11:01:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:01:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.132995.247994 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmDlO-00051A-Qh; Thu, 27 May 2021 11:01:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 132995.247994; Thu, 27 May 2021 11:01:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmDlO-000513-Ng; Thu, 27 May 2021 11:01:18 +0000
Received: by outflank-mailman (input) for mailman id 132995;
 Thu, 27 May 2021 11:01:18 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ln4B=KW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmDlN-00050x-Ti
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:01:18 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id c17bf8b3-342b-4a0c-a253-dc22831ef760;
 Thu, 27 May 2021 11:01:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c17bf8b3-342b-4a0c-a253-dc22831ef760
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622113276;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=RqNFXvxyFdbL17SPtPEiEkNYQeKvfyRwO4Xv1F2F2m8=;
  b=iXjgPPXg7Os9kYEJO8duVuKUw0thqVHP5sY8TnfyMwmF/LEpQjLiDsZf
   kvOdOmUyupz9DrytUYr2vRtGaIIpP/JNc1+L0g9l20P5eyeQrBMtmR/V9
   0iwMIltPa9KnMVamJY8FKkiM/4CAhSrshfVhmcdb+aVQqtPEbqRofSczb
   4=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: tazONBhR5eMUobFwgRRgP4qfpiTY6E2N2WO0J+pW4QNCE0UJDVwVQ2kx1zou/WwABmBD0pS4pl
 l91eA9oGZcYB3hzs/npLmehCfkrKxbVfXWynevo/SwRH+9D6vH0E/zVCbMxQw7sGMQ+IjmJhsw
 SU6XAH+syOoKkOZGRD0rXZGAKjHcU8GBOD+oZwQUz5WvN2t8Zz0v2cFiszDfbUntL73rFyQ9eD
 5S6dS3CXf03O6RcMgpLM8CiBYMVxtwB0gQeJbvf/AMi8FQzmdLFuGiri3610ju0QtbcBIsQTjV
 nzE=
X-SBRS: 5.1
X-MesageID: 46306345
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:qxN1Pq64iOA8urb+CgPXwDLXdLJyesId70hD6qkQc3FomwKj9/
 xG/c5rsSMc7Qx6ZJhOo7+90cW7L080lqQFhLX5X43SPzUO0VHARO1fBOPZqAEIcBeOlNK1u5
 0AT0B/YueAcGSTj6zBkXWF+wBL+qj5zEiq792usUuEVWtRGsZdB58SMHfhLqVxLjM2Y6YRJd
 6nyedsgSGvQngTZtTTPAh+YwCSz+e77a4PeHQ9dmYa1DU=
X-IronPort-AV: E=Sophos;i="5.82,334,1613451600"; 
   d="scan'208";a="46306345"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=KvlutgYzS68JsIgLEEarRyjEyNYbBTbGrEir6rQ2u+uSlFx9sIk0hwPNCPhmwHYQmJ7Rs+ewapuMey7uCDgFbrZprzsRzvO7Wj/kkn3BvcY9EzpSFL61SavdrVm3VlYpBrmcDDBmhieeohTHMcL+c2pMuXKbcM3dmT55Zdkns1OkKBttm+eFWkmLnvpwvjQ1/YiOtLBC9U5wD4UCwR5aHDsuuAeircbPFgVV9HSSIohLJjLeayh0JT/fP1yglIYV1q3ln3L+p/b/gjZVoPZjlS4AuyIa2XoelH2ZKddKsKvr6r1h26WPCDjxKdk99wpNmdZoHe5/bcZxJweQ+k1oMQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SZWohDrMGQcVduKlbqEw/e22lgLvj4pTiJyrD26Vs48=;
 b=K38a5NcNvI/V+mvqWcObfSvfUTv2b1kTRJuQLUwmKoBlHUcLSFO3l4uLr2SFnIABw65Mi1nhWt7p587FFfG9V0ks7pEvqx//1e0nyFOqKffU+WvAumIPKBH+FDbknREoCwBYBNNh5ZbHrZ6bgaYixqwOKcdkgwmTjJ3rRDxvQsS0erecQpjWcQD6fQdQnKA4heqW3Ziwk30jWiyeAtJD3IxAlO86gpe32WSeLZV1Acuj9Oo1G56f3bowDMGmWLpodoeFw70vHwAVupmFyDcQBek+B8z/eJPQiQMFsPlCciASfiDAE2fgsJLc1fRRhes7djXOfutn1uVENLM/OGGWWg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=SZWohDrMGQcVduKlbqEw/e22lgLvj4pTiJyrD26Vs48=;
 b=a8+TvhynfTLt4oY+w09UjBmfmipWQVqxO+zTnqZN3YpZjAMDmD3nPQvMhes+mynmulSpvZR9QiagvJQJkrTzxZ2cH7Nmwzyi/HvwkPfJtukHJCovhBZBZ8yzgRVjskPJGAzdYLQK0q7WyYu6EOqC6lZj1JBvvAl+qWR1ke1Uj2w=
Date: Thu, 27 May 2021 13:01:07 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>, Kevin Tian <kevin.tian@intel.com>
Subject: Re: [PATCH v5 2/6] evtchn: convert domain event lock to an r/w one
Message-ID: <YK978wmwAZqQDEQZ@Air-de-Roger>
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
 <5f5fc6a7-6e27-8275-0f05-11ba5454156a@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <5f5fc6a7-6e27-8275-0f05-11ba5454156a@suse.com>
X-ClientProxiedBy: MR1P264CA0080.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:3f::26) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 8ba94242-4725-4eb1-9c8a-08d920febe08
X-MS-TrafficTypeDiagnostic: DM6PR03MB4476:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB44763A2F8C7C7BF629E775938F239@DM6PR03MB4476.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:5797;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 8ba94242-4725-4eb1-9c8a-08d920febe08
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 May 2021 11:01:13.3813
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: lwmcoYBlwxvs19kJ/yF8f2VgyJOhs5+TT4dfd1EN5Raf22fCeVr/OOffYXAn1u50g8WVHOyS8S5+SuNviKCIYA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB4476
X-OriginatorOrg: citrix.com

On Wed, Jan 27, 2021 at 09:16:07AM +0100, Jan Beulich wrote:
> Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
> across pCPU-s) and the ones in EOI handling in PCI pass-through code,
> serializing perhaps an entire domain isn't helpful when no state (which
> isn't e.g. further protected by the per-channel lock) changes.

I'm unsure this move is good from a performance PoV, as the operations
switched to use the lock in read mode are a very small subset, and the
remaining operations then take a performance penalty compared to using
a plain spin lock.

> Unfortunately this implies dropping of lock profiling for this lock,
> until r/w locks may get enabled for such functionality.
> 
> While ->notify_vcpu_id is now meant to be consistently updated with the
> per-channel lock held, an extension applies to ECS_PIRQ: The field is
> also guaranteed to not change with the per-domain event lock held for
> writing. Therefore the link_pirq_port() call from evtchn_bind_pirq()
> could in principle be moved out of the per-channel locked regions, but
> this further code churn didn't seem worth it.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> @@ -1510,9 +1509,10 @@ int evtchn_destroy(struct domain *d)
>  {
>      unsigned int i;
>  
> -    /* After this barrier no new event-channel allocations can occur. */
> +    /* After this kind-of-barrier no new event-channel allocations can occur. */
>      BUG_ON(!d->is_dying);
> -    spin_barrier(&d->event_lock);
> +    read_lock(&d->event_lock);
> +    read_unlock(&d->event_lock);

Don't you want to use write mode here to ensure there are no readers
that took the lock before is_dying was set, and thus could be making
wrong assumptions?

As I understand it, the point of the barrier here is to ensure there
are no lock holders carried over from before is_dying was set.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 27 11:17:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:17:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133006.248011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmE0e-0006cz-DN; Thu, 27 May 2021 11:17:04 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133006.248011; Thu, 27 May 2021 11:17:04 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmE0e-0006cs-AU; Thu, 27 May 2021 11:17:04 +0000
Received: by outflank-mailman (input) for mailman id 133006;
 Thu, 27 May 2021 11:17:03 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmE0d-0006cm-4b
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:17:03 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a6533f1b-3c6e-44e3-bc90-e1f143d4bbcf;
 Thu, 27 May 2021 11:17:01 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id BF169218DD;
 Thu, 27 May 2021 11:17:00 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 8A11611A98;
 Thu, 27 May 2021 11:17:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a6533f1b-3c6e-44e3-bc90-e1f143d4bbcf
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622114220; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0WtvNwAECKle8+6jyQuA7PmY2BIdWUsLQwGx1tLe9Ns=;
	b=JYIEv7kr8MQ3szYMRnCH1wQVG5B7Bvhd34qGXC+FRF6tqTP0E4e86vnKSQ/H96iSLXWedV
	D0+a/drrixhYgVeOxcjGmNzC2pLYkk2V4DzS/m7SO+oDNavD023Y7j2TzDNwCPyBtz6P2v
	qIpWzHyGC0viE1OapAOJECnlS+ReyWc=
Subject: Re: [PATCH v5 2/6] evtchn: convert domain event lock to an r/w one
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 Kevin Tian <kevin.tian@intel.com>
References: <306e62e8-9070-2db9-c959-858465c50c1d@suse.com>
 <5f5fc6a7-6e27-8275-0f05-11ba5454156a@suse.com>
 <YK978wmwAZqQDEQZ@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d2bebb20-0216-e17d-e7c3-6085ea300e26@suse.com>
Date: Thu, 27 May 2021 13:16:55 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YK978wmwAZqQDEQZ@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.05.2021 13:01, Roger Pau Monné wrote:
> On Wed, Jan 27, 2021 at 09:16:07AM +0100, Jan Beulich wrote:
>> Especially for the use in evtchn_move_pirqs() (called when moving a vCPU
>> across pCPU-s) and the ones in EOI handling in PCI pass-through code,
>> serializing perhaps an entire domain isn't helpful when no state (which
>> isn't e.g. further protected by the per-channel lock) changes.
> 
> I'm unsure this move is good from a performance PoV, as the operations
> switched to use the lock in read mode are a very small subset, and the
> remaining operations then take a performance penalty compared to using
> a plain spin lock.

Well, yes, unfortunately review of earlier versions has resulted in
there being quite a few fewer read_lock() uses now than I had
(mistakenly) used originally. There are a few worthwhile conversions,
but on the whole maybe I should indeed drop this change.

>> @@ -1510,9 +1509,10 @@ int evtchn_destroy(struct domain *d)
>>  {
>>      unsigned int i;
>>  
>> -    /* After this barrier no new event-channel allocations can occur. */
>> +    /* After this kind-of-barrier no new event-channel allocations can occur. */
>>      BUG_ON(!d->is_dying);
>> -    spin_barrier(&d->event_lock);
>> +    read_lock(&d->event_lock);
>> +    read_unlock(&d->event_lock);
> 
> Don't you want to use write mode here to ensure there are no readers
> that took the lock before is_dying was set, and thus could be making
> wrong assumptions?
> 
> As I understand it, the point of the barrier here is to ensure there
> are no lock holders carried over from before is_dying was set.

The purpose is, as the comment says, no new event channel allocations.
Those happen under write lock, so a read-lock-based barrier is enough
here afaict.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 11:27:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:27:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133014.248024 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEAR-00084j-Ao; Thu, 27 May 2021 11:27:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133014.248024; Thu, 27 May 2021 11:27:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEAR-00084c-7o; Thu, 27 May 2021 11:27:11 +0000
Received: by outflank-mailman (input) for mailman id 133014;
 Thu, 27 May 2021 11:27:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmEAP-00084W-Um
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:27:09 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 83b0ebba-2998-43f0-8cb3-3684e685cd9d;
 Thu, 27 May 2021 11:27:09 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 234541FD2A;
 Thu, 27 May 2021 11:27:08 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id DA06E11A98;
 Thu, 27 May 2021 11:27:07 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83b0ebba-2998-43f0-8cb3-3684e685cd9d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622114828; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=BR10Lv3o2g8TOqjBMtoXg4piCGPt5hqgiPINwMFsx8k=;
	b=FEQvsTRF3Cn66NpPKXDNvqzgQUJPqcaz4IpsFdnLwypg/2DPh3rRW5Llrd7SO5EM7NNx1g
	6kctuHTRvuu8H3kXg2llE0TprEVN0BOTeInQIR/FVZdxALVVdq58C6JSKbdmQlgQBLKGLr
	CzV0+F0aIdx0FXBvvWAj48oVVSguvvQ=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v6 0/3] evtchn: (not so) recent XSAs follow-on
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
Message-ID: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
Date: Thu, 27 May 2021 13:27:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

These are grouped into a series largely because of their origin,
not so much because there are (heavy) dependencies among them.
The main change from v5 is the dropping of three more patches,
and re-basing.

1: slightly defer lock acquire where possible
2: add helper for port_is_valid() + evtchn_from_port()
3: type adjustments

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 11:28:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:28:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133020.248036 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEBS-0000DU-KV; Thu, 27 May 2021 11:28:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133020.248036; Thu, 27 May 2021 11:28:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEBS-0000DN-HZ; Thu, 27 May 2021 11:28:14 +0000
Received: by outflank-mailman (input) for mailman id 133020;
 Thu, 27 May 2021 11:28:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmEBR-0000DD-6l
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:28:13 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9f8fd59-2afa-49d5-b336-e0008c844830;
 Thu, 27 May 2021 11:28:12 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 9829D218DD;
 Thu, 27 May 2021 11:28:11 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 6653611A98;
 Thu, 27 May 2021 11:28:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9f8fd59-2afa-49d5-b336-e0008c844830
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622114891; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=b6aTSYEJ4zMIzcsHnssUxQ3Bp2aABIpTEIa3bDkYDGg=;
	b=acmKoGYDeHF6DyeYESAaaMb1IsZc3GKsB6z74gGCyYraRgBjoalxJAAq/olGOXV3UARR7v
	mBckfAt+971c2kTSy84zXDvvaKDqZQRTdWh53ADL6NedLlQ5QeU+yUzG9WWfbP0d8OcZ1h
	iVArkE/bcz5klrhJAx9n5n8vN5ap8a8=
Subject: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
Message-ID: <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
Date: Thu, 27 May 2021 13:28:07 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

port_is_valid() and evtchn_from_port() are fine to use without holding
any locks. Accordingly acquire the per-domain lock slightly later in
evtchn_close() and evtchn_bind_vcpu().

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v6: Re-base for re-ordering / shrinking of series.
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -606,17 +606,14 @@ int evtchn_close(struct domain *d1, int
     int            port2;
     long           rc = 0;
 
- again:
-    spin_lock(&d1->event_lock);
-
     if ( !port_is_valid(d1, port1) )
-    {
-        rc = -EINVAL;
-        goto out;
-    }
+        return -EINVAL;
 
     chn1 = evtchn_from_port(d1, port1);
 
+ again:
+    spin_lock(&d1->event_lock);
+
     /* Guest cannot close a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(chn1)) && guest )
     {
@@ -1041,16 +1038,13 @@ long evtchn_bind_vcpu(unsigned int port,
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
         return -ENOENT;
 
-    spin_lock(&d->event_lock);
-
     if ( !port_is_valid(d, port) )
-    {
-        rc = -EINVAL;
-        goto out;
-    }
+        return -EINVAL;
 
     chn = evtchn_from_port(d, port);
 
+    spin_lock(&d->event_lock);
+
     /* Guest cannot re-bind a Xen-attached event channel. */
     if ( unlikely(consumer_is_xen(chn)) )
     {



From xen-devel-bounces@lists.xenproject.org Thu May 27 11:28:41 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:28:41 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133022.248047 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEBs-0000le-W9; Thu, 27 May 2021 11:28:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133022.248047; Thu, 27 May 2021 11:28:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEBs-0000lX-SH; Thu, 27 May 2021 11:28:40 +0000
Received: by outflank-mailman (input) for mailman id 133022;
 Thu, 27 May 2021 11:28:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmEBr-0000lM-OX
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:28:39 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ccfcc1ee-1dbb-455d-bd5f-db6eeff6947e;
 Thu, 27 May 2021 11:28:38 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 063A51FD2E;
 Thu, 27 May 2021 11:28:38 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id CC96211A98;
 Thu, 27 May 2021 11:28:37 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccfcc1ee-1dbb-455d-bd5f-db6eeff6947e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622114918; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=CyjWzZjN/Vq2WXc19AjsyaFDZn+HLKqQXYu8edCK+Vc=;
	b=mvj530uTOp1qx/bWOOlC07BLw+nhdeqY1ck6hybsZwIxQnyejbDdrXT5FUMKG63fsSekXR
	R8f533asQK2vnvpwpKqIAvJggbkJaIH8j35D1TdHnJKE9kvZZvMzAuz6I54GPWjSZ3VJn5
	tW+LPqPLAv/40pBUBG3xHx9UgC0/4Og=
Subject: [PATCH v6 2/3] evtchn: add helper for port_is_valid() +
 evtchn_from_port()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
Message-ID: <76106d2d-6219-bbb1-ee06-601da6f40673@suse.com>
Date: Thu, 27 May 2021 13:28:37 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The combination is pretty common, so adding a simple local helper seems
worthwhile. Make it const- and type-correct, in turn requiring the
two called functions to also be const-correct (and at this occasion
also make them type-correct).

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Julien Grall <jgrall@amazon.com>
---
v6: Re-base, also for re-ordering / shrinking of series.
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -147,6 +147,12 @@ static bool virq_is_global(unsigned int
     return true;
 }
 
+static struct evtchn *_evtchn_from_port(const struct domain *d,
+                                        evtchn_port_t port)
+{
+    return port_is_valid(d, port) ? evtchn_from_port(d, port) : NULL;
+}
+
 static void free_evtchn_bucket(struct domain *d, struct evtchn *bucket)
 {
     if ( !bucket )
@@ -319,7 +325,6 @@ static long evtchn_alloc_unbound(evtchn_
     return rc;
 }
 
-
 static void double_evtchn_lock(struct evtchn *lchn, struct evtchn *rchn)
 {
     ASSERT(lchn != rchn);
@@ -365,9 +370,9 @@ static long evtchn_bind_interdomain(evtc
         ERROR_EXIT(lport);
     lchn = evtchn_from_port(ld, lport);
 
-    if ( !port_is_valid(rd, rport) )
+    rchn = _evtchn_from_port(rd, rport);
+    if ( !rchn )
         ERROR_EXIT_DOM(-EINVAL, rd);
-    rchn = evtchn_from_port(rd, rport);
     if ( (rchn->state != ECS_UNBOUND) ||
          (rchn->u.unbound.remote_domid != ld->domain_id) )
         ERROR_EXIT_DOM(-EINVAL, rd);
@@ -602,15 +607,12 @@ static long evtchn_bind_pirq(evtchn_bind
 int evtchn_close(struct domain *d1, int port1, bool guest)
 {
     struct domain *d2 = NULL;
-    struct evtchn *chn1, *chn2;
-    int            port2;
+    struct evtchn *chn1 = _evtchn_from_port(d1, port1), *chn2;
     long           rc = 0;
 
-    if ( !port_is_valid(d1, port1) )
+    if ( !chn1 )
         return -EINVAL;
 
-    chn1 = evtchn_from_port(d1, port1);
-
  again:
     spin_lock(&d1->event_lock);
 
@@ -698,10 +700,8 @@ int evtchn_close(struct domain *d1, int
             goto out;
         }
 
-        port2 = chn1->u.interdomain.remote_port;
-        BUG_ON(!port_is_valid(d2, port2));
-
-        chn2 = evtchn_from_port(d2, port2);
+        chn2 = _evtchn_from_port(d2, chn1->u.interdomain.remote_port);
+        BUG_ON(!chn2);
         BUG_ON(chn2->state != ECS_INTERDOMAIN);
         BUG_ON(chn2->u.interdomain.remote_dom != d1);
 
@@ -739,15 +739,13 @@ int evtchn_close(struct domain *d1, int
 
 int evtchn_send(struct domain *ld, unsigned int lport)
 {
-    struct evtchn *lchn, *rchn;
+    struct evtchn *lchn = _evtchn_from_port(ld, lport), *rchn;
     struct domain *rd;
     int            rport, ret = 0;
 
-    if ( !port_is_valid(ld, lport) )
+    if ( !lchn )
         return -EINVAL;
 
-    lchn = evtchn_from_port(ld, lport);
-
     evtchn_read_lock(lchn);
 
     /* Guest cannot send via a Xen-attached event channel. */
@@ -967,15 +965,15 @@ int evtchn_status(evtchn_status_t *statu
     if ( d == NULL )
         return -ESRCH;
 
-    spin_lock(&d->event_lock);
-
-    if ( !port_is_valid(d, port) )
+    chn = _evtchn_from_port(d, port);
+    if ( !chn )
     {
-        rc = -EINVAL;
-        goto out;
+        rcu_unlock_domain(d);
+        return -EINVAL;
     }
 
-    chn = evtchn_from_port(d, port);
+    spin_lock(&d->event_lock);
+
     if ( consumer_is_xen(chn) )
     {
         rc = -EACCES;
@@ -1038,11 +1036,10 @@ long evtchn_bind_vcpu(unsigned int port,
     if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
         return -ENOENT;
 
-    if ( !port_is_valid(d, port) )
+    chn = _evtchn_from_port(d, port);
+    if ( !chn )
         return -EINVAL;
 
-    chn = evtchn_from_port(d, port);
-
     spin_lock(&d->event_lock);
 
     /* Guest cannot re-bind a Xen-attached event channel. */
@@ -1088,13 +1085,11 @@ long evtchn_bind_vcpu(unsigned int port,
 int evtchn_unmask(unsigned int port)
 {
     struct domain *d = current->domain;
-    struct evtchn *evtchn;
+    struct evtchn *evtchn = _evtchn_from_port(d, port);
 
-    if ( unlikely(!port_is_valid(d, port)) )
+    if ( unlikely(!evtchn) )
         return -EINVAL;
 
-    evtchn = evtchn_from_port(d, port);
-
     evtchn_read_lock(evtchn);
 
     evtchn_port_unmask(d, evtchn);
@@ -1177,14 +1172,12 @@ static long evtchn_set_priority(const st
 {
     struct domain *d = current->domain;
     unsigned int port = set_priority->port;
-    struct evtchn *chn;
+    struct evtchn *chn = _evtchn_from_port(d, port);
     long ret;
 
-    if ( !port_is_valid(d, port) )
+    if ( !chn )
         return -EINVAL;
 
-    chn = evtchn_from_port(d, port);
-
     evtchn_read_lock(chn);
 
     ret = evtchn_port_set_priority(d, chn, set_priority->priority);
@@ -1410,10 +1403,10 @@ void free_xen_event_channel(struct domai
 
 void notify_via_xen_event_channel(struct domain *ld, int lport)
 {
-    struct evtchn *lchn, *rchn;
+    struct evtchn *lchn = _evtchn_from_port(ld, lport), *rchn;
     struct domain *rd;
 
-    if ( !port_is_valid(ld, lport) )
+    if ( !lchn )
     {
         /*
          * Make sure ->is_dying is read /after/ ->valid_evtchns, pairing
@@ -1424,8 +1417,6 @@ void notify_via_xen_event_channel(struct
         return;
     }
 
-    lchn = evtchn_from_port(ld, lport);
-
     if ( !evtchn_read_trylock(lchn) )
         return;
 
@@ -1580,12 +1571,14 @@ static void domain_dump_evtchn_info(stru
 
     spin_lock(&d->event_lock);
 
-    for ( port = 1; port_is_valid(d, port); ++port )
+    for ( port = 1; ; ++port )
     {
-        const struct evtchn *chn;
+        const struct evtchn *chn = _evtchn_from_port(d, port);
         char *ssid;
 
-        chn = evtchn_from_port(d, port);
+        if ( !chn )
+            break;
+
         if ( chn->state == ECS_FREE )
             continue;
 
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -120,7 +120,7 @@ static inline void evtchn_read_unlock(st
     read_unlock(&evtchn->lock);
 }
 
-static inline bool_t port_is_valid(struct domain *d, unsigned int p)
+static inline bool port_is_valid(const struct domain *d, evtchn_port_t p)
 {
     if ( p >= read_atomic(&d->valid_evtchns) )
         return false;
@@ -135,7 +135,8 @@ static inline bool_t port_is_valid(struc
     return true;
 }
 
-static inline struct evtchn *evtchn_from_port(struct domain *d, unsigned int p)
+static inline struct evtchn *evtchn_from_port(const struct domain *d,
+                                              evtchn_port_t p)
 {
     if ( p < EVTCHNS_PER_BUCKET )
         return &d->evtchn[array_index_nospec(p, EVTCHNS_PER_BUCKET)];



From xen-devel-bounces@lists.xenproject.org Thu May 27 11:29:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:29:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133027.248057 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmECE-0001Km-8p; Thu, 27 May 2021 11:29:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133027.248057; Thu, 27 May 2021 11:29:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmECE-0001Kf-58; Thu, 27 May 2021 11:29:02 +0000
Received: by outflank-mailman (input) for mailman id 133027;
 Thu, 27 May 2021 11:29:01 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmECD-0001H9-9o
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:29:01 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id ccd41df2-1a00-4c88-b47c-5a3e3295610c;
 Thu, 27 May 2021 11:29:00 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B92B2218DD;
 Thu, 27 May 2021 11:28:59 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 89EBA11A98;
 Thu, 27 May 2021 11:28:59 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ccd41df2-1a00-4c88-b47c-5a3e3295610c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622114939; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Qa0Zu1mlw1DByVU5m9GtwBNK3+ZFmTcifmkaJak5uN8=;
	b=d+b+azCuQyhsO/FD1UhTHtvaamexyHVttDUtgvuCVkKpDRVdJJn1InNe1ALl1V5+1vrRgY
	Zwg+6cEZjmwGb2H83o+yRriuFsXz1aqqPvBJAUCTFC+w7H1bCRhU1X1G9enZgB3c0jMM1v
	GTCXDSE2vdFln/wWaohvCRlqP8xvQnk=
Subject: [PATCH v6 3/3] evtchn: type adjustments
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Julien Grall <julien@xen.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
Message-ID: <8f7b57da-cd10-5f96-62fe-1a6e28c8981f@suse.com>
Date: Thu, 27 May 2021 13:28:59 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

First of all, avoid "long" when "int" suffices, in particular when
merely conveying error codes. 32-bit values are slightly cheaper to
deal with on x86, and their processing is at least no more expensive on
Arm. Where possible, use evtchn_port_t for port numbers and unsigned int
for other unsigned quantities in adjacent code. In evtchn_set_priority(),
eliminate a local variable altogether instead of changing its type.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v6: Re-base for re-ordering / shrinking of series.
v4: New.

--- a/xen/common/event_channel.c
+++ b/xen/common/event_channel.c
@@ -284,13 +284,12 @@ void evtchn_free(struct domain *d, struc
     xsm_evtchn_close_post(chn);
 }
 
-static long evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
+static int evtchn_alloc_unbound(evtchn_alloc_unbound_t *alloc)
 {
     struct evtchn *chn;
     struct domain *d;
-    int            port;
+    int            port, rc;
     domid_t        dom = alloc->dom;
-    long           rc;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -342,13 +341,13 @@ static void double_evtchn_unlock(struct
     evtchn_write_unlock(rchn);
 }
 
-static long evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
+static int evtchn_bind_interdomain(evtchn_bind_interdomain_t *bind)
 {
     struct evtchn *lchn, *rchn;
     struct domain *ld = current->domain, *rd;
-    int            lport, rport = bind->remote_port;
+    int            lport, rc;
+    evtchn_port_t  rport = bind->remote_port;
     domid_t        rdom = bind->remote_dom;
-    long           rc;
 
     if ( (rd = rcu_lock_domain_by_any_id(rdom)) == NULL )
         return -ESRCH;
@@ -484,12 +483,12 @@ int evtchn_bind_virq(evtchn_bind_virq_t
 }
 
 
-static long evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
+static int evtchn_bind_ipi(evtchn_bind_ipi_t *bind)
 {
     struct evtchn *chn;
     struct domain *d = current->domain;
-    int            port, vcpu = bind->vcpu;
-    long           rc = 0;
+    int            port, rc = 0;
+    unsigned int   vcpu = bind->vcpu;
 
     if ( domain_vcpu(d, vcpu) == NULL )
         return -ENOENT;
@@ -543,16 +542,16 @@ static void unlink_pirq_port(struct evtc
 }
 
 
-static long evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
+static int evtchn_bind_pirq(evtchn_bind_pirq_t *bind)
 {
     struct evtchn *chn;
     struct domain *d = current->domain;
     struct vcpu   *v = d->vcpu[0];
     struct pirq   *info;
-    int            port = 0, pirq = bind->pirq;
-    long           rc;
+    int            port = 0, rc;
+    unsigned int   pirq = bind->pirq;
 
-    if ( (pirq < 0) || (pirq >= d->nr_pirqs) )
+    if ( pirq >= d->nr_pirqs )
         return -EINVAL;
 
     if ( !is_hvm_domain(d) && !pirq_access_permitted(d, pirq) )
@@ -608,7 +607,7 @@ int evtchn_close(struct domain *d1, int
 {
     struct domain *d2 = NULL;
     struct evtchn *chn1 = _evtchn_from_port(d1, port1), *chn2;
-    long           rc = 0;
+    int            rc = 0;
 
     if ( !chn1 )
         return -EINVAL;
@@ -959,7 +958,7 @@ int evtchn_status(evtchn_status_t *statu
     domid_t          dom = status->dom;
     int              port = status->port;
     struct evtchn   *chn;
-    long             rc = 0;
+    int              rc = 0;
 
     d = rcu_lock_domain_by_any_id(dom);
     if ( d == NULL )
@@ -1025,11 +1024,11 @@ int evtchn_status(evtchn_status_t *statu
 }
 
 
-long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id)
+int evtchn_bind_vcpu(evtchn_port_t port, unsigned int vcpu_id)
 {
     struct domain *d = current->domain;
     struct evtchn *chn;
-    long           rc = 0;
+    int           rc = 0;
     struct vcpu   *v;
 
     /* Use the vcpu info to prevent speculative out-of-bound accesses */
@@ -1168,12 +1167,11 @@ int evtchn_reset(struct domain *d, bool
     return rc;
 }
 
-static long evtchn_set_priority(const struct evtchn_set_priority *set_priority)
+static int evtchn_set_priority(const struct evtchn_set_priority *set_priority)
 {
     struct domain *d = current->domain;
-    unsigned int port = set_priority->port;
-    struct evtchn *chn = _evtchn_from_port(d, port);
-    long ret;
+    struct evtchn *chn = _evtchn_from_port(d, set_priority->port);
+    int ret;
 
     if ( !chn )
         return -EINVAL;
@@ -1189,7 +1187,7 @@ static long evtchn_set_priority(const st
 
 long do_event_channel_op(int cmd, XEN_GUEST_HANDLE_PARAM(void) arg)
 {
-    long rc;
+    int rc;
 
     switch ( cmd )
     {
--- a/xen/include/xen/event.h
+++ b/xen/include/xen/event.h
@@ -54,7 +54,7 @@ void send_guest_pirq(struct domain *, co
 int evtchn_send(struct domain *d, unsigned int lport);
 
 /* Bind a local event-channel port to the specified VCPU. */
-long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id);
+int evtchn_bind_vcpu(evtchn_port_t port, unsigned int vcpu_id);
 
 /* Bind a VIRQ. */
 int evtchn_bind_virq(evtchn_bind_virq_t *bind, evtchn_port_t port);
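
One side effect of the pirq type change above is worth spelling out: once
pirq is unsigned int, the `pirq < 0` half of the old range check becomes
dead, which is why the patch drops it. A standalone sketch (illustrative
bound, not Xen's per-domain nr_pirqs handling):

```c
#include <assert.h>

#define NR_PIRQS 32u

/* With a signed pirq the bounds check needs two comparisons ... */
static int pirq_ok_signed(int pirq)
{
    return !(pirq < 0 || pirq >= (int)NR_PIRQS);
}

/* ... with an unsigned pirq a single comparison suffices: a caller
 * passing a negative value sees it wrap to a large unsigned value,
 * which fails the same upper-bound test. */
static int pirq_ok_unsigned(unsigned int pirq)
{
    return pirq < NR_PIRQS;
}
```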



From xen-devel-bounces@lists.xenproject.org Thu May 27 11:35:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:35:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133045.248072 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEIB-0002yX-7H; Thu, 27 May 2021 11:35:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133045.248072; Thu, 27 May 2021 11:35:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEIB-0002yQ-4F; Thu, 27 May 2021 11:35:11 +0000
Received: by outflank-mailman (input) for mailman id 133045;
 Thu, 27 May 2021 11:35:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Nal+=KW=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1lmEI9-0002yK-Fn
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:35:09 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9157bd06-f5ff-422e-81b0-d12d985a0491;
 Thu, 27 May 2021 11:35:08 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id 2886A6113B;
 Thu, 27 May 2021 11:35:00 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9157bd06-f5ff-422e-81b0-d12d985a0491
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1622115307;
	bh=awWFyuDtGzTSFYuKnqnAwD8I2u8LvxPLkIedgwXpacY=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=p4bPfFCD5PrcmECQ/JmYfkoIQvGIdlvd3Xmt5WsLHiji5zjF5hUpY/7jV/AEVHJHW
	 CB6f9oBS2X3aNGgQ3+WcdNwOGV1gPRmBycO/zJJUoapgziqu+7IL5STMx1dAhYz3+T
	 0bM7UEMZILv76zqCaWO+pWXqjNmfb4VQ6CH4Qz9dn7I0OepvY2P5/VxhAaiFsn1JPd
	 ce1hSAg7oNQpoYW+JbRsJjBqIARt3uf0TgyQ8bjr7saHdzXTBfM/jxaDHBHbhfTQ4Z
	 nDFzJaE8tdL5MVg6406RffF7r/1Of9XzA/MNxWJkmUtb6zscbeJZBFAuG32Qt3uWbG
	 DCkceG/OoqRLg==
Date: Thu, 27 May 2021 12:34:57 +0100
From: Will Deacon <will@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
	peterz@infradead.org, benh@kernel.crashing.org,
	joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
	chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
	Frank Rowand <frowand.list@gmail.com>, mingo@kernel.org,
	sstabellini@kernel.org, Saravana Kannan <saravanak@google.com>,
	mpe@ellerman.id.au,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Christoph Hellwig <hch@lst.de>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
	linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
	Thierry Reding <treding@nvidia.com>,
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com,
	linux-devicetree <devicetree@vger.kernel.org>,
	Jianxiong Gao <jxgao@google.com>, Daniel Vetter <daniel@ffwll.ch>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	maarten.lankhorst@linux.intel.com, airlied@linux.ie,
	Dan Williams <dan.j.williams@intel.com>,
	linuxppc-dev@lists.ozlabs.org, jani.nikula@linux.intel.com,
	Rob Herring <robh+dt@kernel.org>, rodrigo.vivi@intel.com,
	Bjorn Helgaas <bhelgaas@google.com>, boris.ostrovsky@oracle.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	jgross@suse.com, Nicolas Boichat <drinkcat@chromium.org>,
	Greg KH <gregkh@linuxfoundation.org>,
	Randy Dunlap <rdunlap@infradead.org>,
	lkml <linux-kernel@vger.kernel.org>,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, xypron.glpk@gmx.de,
	Robin Murphy <robin.murphy@arm.com>, bauerman@linux.ibm.com
Subject: Re: [PATCH v7 14/15] dt-bindings: of: Add restricted DMA pool
Message-ID: <20210527113456.GA22019@willie-the-truck>
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-15-tientzu@chromium.org>
 <20210526121322.GA19313@willie-the-truck>
 <20210526155321.GA19633@willie-the-truck>
 <CALiNf2_sVXnb97++yWusB5PWz8Pzfn9bCKZc6z3tY4bx6-nW8w@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf2_sVXnb97++yWusB5PWz8Pzfn9bCKZc6z3tY4bx6-nW8w@mail.gmail.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu, May 27, 2021 at 07:29:20PM +0800, Claire Chang wrote:
> On Wed, May 26, 2021 at 11:53 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> > > On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > > > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> > > >             memory-region = <&multimedia_reserved>;
> > > >             /* ... */
> > > >     };
> > > > +
> > > > +   pcie_device: pcie_device@0,0 {
> > > > +           memory-region = <&restricted_dma_mem_reserved>;
> > > > +           /* ... */
> > > > +   };
> > >
> > > I still don't understand how this works for individual PCIe devices -- how
> > > is dev->of_node set to point at the node you have above?
> > >
> > > I tried adding the memory-region to the host controller instead, and then
> > > I see it crop up in dmesg:
> > >
> > >   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> > >
> > > but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> > > so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> > >
> > > What am I missing to make this work with PCIe devices?
> >
> > Aha, looks like we're just missing the logic to inherit the DMA
> > configuration. The diff below gets things working for me.
> 
> I guess what was missing is the reg property in the pcie_device node.
> Will update the example dts.

Thanks. I still think something like my diff makes sense, if you wouldn't mind including
it, as it allows restricted DMA to be used for situations where the PCIe
topology is not static.

Perhaps we should prefer dev->of_node if it exists, but fall back to the
host bridge's parent node otherwise?

Will
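
The fallback sketched here, preferring the device's own DT node and
otherwise inheriting from the host bridge's parent, can be modelled in
isolation; the structs below are hypothetical stand-ins for the kernel's
struct device / struct device_node, not the actual driver-core API:

```c
#include <assert.h>
#include <stddef.h>

/* Hypothetical stand-ins for struct device_node / struct device. */
struct dt_node {
    const char *name;
};

struct dev {
    struct dt_node *of_node;         /* per-device DT node, may be NULL */
    struct dev *host_bridge_parent;  /* platform device behind the bridge */
};

/* Prefer the device's own node; otherwise use the host bridge's parent
 * node, which is where properties such as memory-region would live when
 * the PCIe topology is not described per-device. */
static struct dt_node *dma_config_node(struct dev *d)
{
    if (d->of_node)
        return d->of_node;
    if (d->host_bridge_parent)
        return d->host_bridge_parent->of_node;
    return NULL;
}
```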


From xen-devel-bounces@lists.xenproject.org Thu May 27 11:37:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:37:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133051.248083 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEK9-0003ah-JX; Thu, 27 May 2021 11:37:13 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133051.248083; Thu, 27 May 2021 11:37:13 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEK9-0003aa-GS; Thu, 27 May 2021 11:37:13 +0000
Received: by outflank-mailman (input) for mailman id 133051;
 Thu, 27 May 2021 11:37:12 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmEK8-0003aU-0Y
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:37:12 +0000
Received: from mail-io1-xd34.google.com (unknown [2607:f8b0:4864:20::d34])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7f929f86-6d82-4020-9b7f-87c0fa0d12f6;
 Thu, 27 May 2021 11:37:11 +0000 (UTC)
Received: by mail-io1-xd34.google.com with SMTP id a8so63783ioa.12
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 04:37:11 -0700 (PDT)
Received: from mail-io1-f42.google.com (mail-io1-f42.google.com.
 [209.85.166.42])
 by smtp.gmail.com with ESMTPSA id o2sm1045641ilt.73.2021.05.27.04.37.10
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 04:37:10 -0700 (PDT)
Received: by mail-io1-f42.google.com with SMTP id b81so113883iof.2
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 04:37:10 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7f929f86-6d82-4020-9b7f-87c0fa0d12f6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=XepQkdRFyf5QwVg1KdPLbhegPooaNHd/1JiifJ+YyHM=;
        b=J39qajy3UJ3qAzWqGeZGEnwMoSI9LW5BjibS8fd+Acudu4FxBAXa2OhDGjuMcgqW0/
         wo/z5L5WO21v6biwP7ZKEgtIJf223PVujpTOF4zDsL3aPOmxJx+dxaS3A0oSwB4zv/i6
         Jb0bh8tNiRStD2FZiusMVkFxa8G1teBBpM9vg=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=XepQkdRFyf5QwVg1KdPLbhegPooaNHd/1JiifJ+YyHM=;
        b=gQ04b9oWd428YCzvNP367p7os2ct5sn8WAASh6QPxkDtKx0dpWqEcF2yxEUF0ppSg1
         r8TM8DjXppLUTXVS9nmV6SS/TRCT1H4zyQHspHjGbK5Mjk3GMLar7+n3kTqjMqmFPqfi
         7NRlo7R6Ki70xg/OsFEL1eR35WWmnQIPLt/GbqmcXoUvaCSlceGM35dPXJJUeZ5CNQIy
         7wa7zyZaWOqzaIUNPJUZNQEOiQ98yyBdpdtX4M+xq2VprGlRN91NFsJEHrQeZtAjieWf
         pNx2ejycehkHEYNxqOmRvkuLhOQ+k+brVC4G/ur027AtW+2Ui3iAKDgYwDxxiavOVIcL
         XN0g==
X-Gm-Message-State: AOAM532FHm5JA7z+blcucSsI4AKZqWWq8mmVjoISQK/Ck7+Cz61cYQJw
	v1axTE+rnU4lD9mEaoQ5dcn1XyzSr4Ecqw==
X-Google-Smtp-Source: ABdhPJyAJNQAdlO7tYVzqg06vwfMTIMGmYBihPYTT1l9TgwbtaDkTDEk3xWqR3ZOJ48nwEmKfaM5VQ==
X-Received: by 2002:a02:a492:: with SMTP id d18mr3084006jam.28.1622115430727;
        Thu, 27 May 2021 04:37:10 -0700 (PDT)
X-Received: by 2002:a05:6e02:e42:: with SMTP id l2mr2536928ilk.189.1622114971302;
 Thu, 27 May 2021 04:29:31 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-15-tientzu@chromium.org> <20210526121322.GA19313@willie-the-truck>
 <20210526155321.GA19633@willie-the-truck>
In-Reply-To: <20210526155321.GA19633@willie-the-truck>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 27 May 2021 19:29:20 +0800
X-Gmail-Original-Message-ID: <CALiNf2_sVXnb97++yWusB5PWz8Pzfn9bCKZc6z3tY4bx6-nW8w@mail.gmail.com>
Message-ID: <CALiNf2_sVXnb97++yWusB5PWz8Pzfn9bCKZc6z3tY4bx6-nW8w@mail.gmail.com>
Subject: Re: [PATCH v7 14/15] dt-bindings: of: Add restricted DMA pool
To: Will Deacon <will@kernel.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, Christoph Hellwig <hch@lst.de>, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Wed, May 26, 2021 at 11:53 PM Will Deacon <will@kernel.org> wrote:
>
> On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> > On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> > >             memory-region = <&multimedia_reserved>;
> > >             /* ... */
> > >     };
> > > +
> > > +   pcie_device: pcie_device@0,0 {
> > > +           memory-region = <&restricted_dma_mem_reserved>;
> > > +           /* ... */
> > > +   };
> >
> > I still don't understand how this works for individual PCIe devices -- how
> > is dev->of_node set to point at the node you have above?
> >
> > I tried adding the memory-region to the host controller instead, and then
> > I see it crop up in dmesg:
> >
> >   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> >
> > but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> > so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> >
> > What am I missing to make this work with PCIe devices?
>
> Aha, looks like we're just missing the logic to inherit the DMA
> configuration. The diff below gets things working for me.

I guess what was missing is the reg property in the pcie_device node.
Will update the example dts.


From xen-devel-bounces@lists.xenproject.org Thu May 27 11:47:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 11:47:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133061.248099 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEUS-00054z-OM; Thu, 27 May 2021 11:47:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133061.248099; Thu, 27 May 2021 11:47:52 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmEUS-00054s-LJ; Thu, 27 May 2021 11:47:52 +0000
Received: by outflank-mailman (input) for mailman id 133061;
 Thu, 27 May 2021 11:47:51 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmEUR-00054m-CY
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 11:47:51 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id a091b69d-fe3d-41a7-9949-31783f024f11;
 Thu, 27 May 2021 11:47:50 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6A76C2190B;
 Thu, 27 May 2021 11:47:49 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 3C70711A98;
 Thu, 27 May 2021 11:47:49 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a091b69d-fe3d-41a7-9949-31783f024f11
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622116069; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ibvBJ8N9gYkmQfCGXm38qHnMjooDW0kvei05CtgEHck=;
	b=PmClcs6MmtUC5Wvg8JD11blA3FjooZ9IxoJaNFUXNuWQBbOlapQw6daJX30ygtTTw5TJZG
	e/SNKhZFJ9iE1W9Gl2BcXmBWwcrjgXT9HhYbxskpe5v/rGr9lswha+SxfJ6AQZH2Owe/Nw
	+Pujx4muTqLLGmg20il0wBatmp5fSFs=
Subject: Re: [PATCH 7/7] video/vesa: adjust (not just) command line option
 handling
From: Jan Beulich <jbeulich@suse.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <6d6da76c-ccc8-afa2-bd06-5ae132c354f2@suse.com>
 <7e3f69d7-23e8-397d-72b6-8c489d80ea45@suse.com>
 <3e04b606-4e4f-e181-d3be-bcf99a2c8fa2@citrix.com>
 <3aadebaa-5a0b-e21b-c86a-289c2fae5d44@suse.com>
Message-ID: <e4230879-48b2-651a-b0fa-0e4fbedc04f1@suse.com>
Date: Thu, 27 May 2021 13:47:43 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <3aadebaa-5a0b-e21b-c86a-289c2fae5d44@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.04.2021 16:04, Jan Beulich wrote:
> On 27.04.2021 15:49, Andrew Cooper wrote:
>> However, is there really any value in these options?  I can't see a case
>> where their use will result in a less broken system.
> 
> Well, if we mis-detect VRAM size, the respective option might indeed
> help. I'm less certain of the utility of the mapping option, the more
> that now there's no possible (and implicit) effect on MTRRs anymore.

Actually I was wrong in referring to an implied effect on MTRRs - that
would come from "vesa-ram=". "vesa-map=" may help if we mis-detected the
space we actually need for the mode. However, we'd then be broken
elsewhere as well, so I guess I'll add a patch removing "vesa-map="
too.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 12:29:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:29:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133076.248126 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmF8l-0000or-HS; Thu, 27 May 2021 12:29:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133076.248126; Thu, 27 May 2021 12:29:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmF8l-0000ok-EO; Thu, 27 May 2021 12:29:31 +0000
Received: by outflank-mailman (input) for mailman id 133076;
 Thu, 27 May 2021 12:29:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmF8k-0000oe-Iw
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:29:30 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cd4d9319-174f-405d-90c9-e07212ec942f;
 Thu, 27 May 2021 12:29:29 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6E11A218DD;
 Thu, 27 May 2021 12:29:28 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 519A811A98;
 Thu, 27 May 2021 12:29:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cd4d9319-174f-405d-90c9-e07212ec942f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118568; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=SCWyRvE5JWWIQWwVbsGK4b5GWaY4O0M7tRlh4MgyuhQ=;
	b=Ybv5HTYupYzbJGPQQZqnq2teQyO+mx4GOh52O4k5zJPr+8SgDHx6GFMIpilgmRfZhCP/Qy
	xtTklwQgNbFVm58D9mjm9T1TpPohwyLwsC9kYwWYgUBYyM410XktLFlk+wlQ7zTPevTwMe
	HgOWNilQRCqsVVY0zQc/nkhaBhaZifk=
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 00/12] x86: memcpy() / memset() (non-)ERMS flavors plus
 fallout
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Message-ID: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Date: Thu, 27 May 2021 14:29:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

While performance varies quite a bit between older (pre-ERMS) and newer
(ERMS) hardware, so far we've been going with just a single flavor of
each of these two functions, and oddly enough with flavors not
consistent with one another. Using plain memcpy() / memset() on MMIO
(the video frame buffer) is generally okay, but the ERMS variant of
memcpy() turned out to regress (boot) performance in a way easily
visible to the human eye. Hence, as a prerequisite step, this series
switches the frame buffer (and VGA) mapping to be write-combining,
independent of firmware arrangements (of MTRRs in particular).

v2, besides addressing review feedback (where it wasn't instead
responded to verbally), extends the series to also cover
- controlling gcc's inlining of __builtin_mem{cpy,set}(),
- page clearing and scrubbing.

01: x86: introduce ioremap_wc()
02: x86: re-work memset()
03: x86: re-work memcpy()
04: x86: control memset() and memcpy() inlining
05: x86: introduce "hot" and "cold" page clearing functions
06: page-alloc: make scrub_one_page() static
07: mm: allow page scrubbing routine(s) to be arch controlled
08: x86: move .text.kexec
09: video/vesa: unmap frame buffer when relinquishing console
10: video/vesa: drop "vesa-mtrr" command line option
12: video/vesa: adjust (not just) command line option handling

Side note: While strictly speaking the xen/drivers/video/ changes fall
under REST maintainership, with that code getting built for x86 only
I'm restricting Cc-s to x86 maintainers.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 12:30:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:30:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133082.248140 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFA0-000264-Uq; Thu, 27 May 2021 12:30:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133082.248140; Thu, 27 May 2021 12:30:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFA0-00025x-Rp; Thu, 27 May 2021 12:30:48 +0000
Received: by outflank-mailman (input) for mailman id 133082;
 Thu, 27 May 2021 12:30:48 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFA0-00025q-8f
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:30:48 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 76fa961c-7bdd-4e98-88bb-af7727108a6c;
 Thu, 27 May 2021 12:30:46 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id F071C1FD2E;
 Thu, 27 May 2021 12:30:45 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id D3B6311A98;
 Thu, 27 May 2021 12:30:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76fa961c-7bdd-4e98-88bb-af7727108a6c
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118645; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QtM/uNwkuYbgus7NQ6D0a7urDOwmeq1CTr/If2lDLg8=;
	b=BwJjBdlen2P96tXJpurtSF9hnf4oafFPaJ3hQfyOo4MQsvSQ2jyNgRSoCj6Iah4dBMarmZ
	8Zi9Mc1bu/lWclw9Fs80aUVW/FuxFPqyv8IGGafuzN9526Rcq+FDJGjRGzWKhPFZA/3h7h
	QFj/W0iFWGVH1R3LKwRYgQuCd9DqSPM=
Subject: [PATCH v2 01/12] x86: introduce ioremap_wc()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <20abac99-609c-f4f6-1242-c79919f4c317@suse.com>
Date: Thu, 27 May 2021 14:30:41 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

In order for a to-be-introduced ERMS form of memcpy() to not regress
boot performance on certain systems when video output is active, we
first need to arrange for avoiding further dependency on firmware
setting up MTRRs in a way we can actually further modify. On many
systems, due to the continuously growing amounts of installed memory,
MTRRs get configured with at least one huge WB range, and with MMIO
ranges below 4Gb then forced to UC via overlapping MTRRs. mtrr_add(), as
it is today, can't deal with such a setup. Hence on such systems we
presently leave the frame buffer mapped UC, leading to significantly
reduced performance when using REP STOSB / REP MOVSB.

On post-PentiumII hardware (i.e. any that's capable of running 64-bit
code), an effective memory type of WC can be achieved without MTRRs, by
simply referencing the respective PAT entry from the PTEs. While this
will leave the switch to ERMS forms of memset() and memcpy() with
largely unchanged performance, the change here on its own improves
performance on affected systems quite significantly: Measuring just the
individual affected memcpy() invocations yielded a speedup by a factor
of over 250 on my initial (Skylake) test system. memset() isn't getting
improved by as much there, but still by a factor of about 20.

While adding {__,}PAGE_HYPERVISOR_WC, also add {__,}PAGE_HYPERVISOR_WT
to, at the very least, make clear what PTE flags this memory type uses.
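For background (an illustration, not part of the patch): which PAT entry
a mapping uses is determined by three PTE bits - PWT (bit 3), PCD
(bit 4), and PAT (bit 7) - combined into an index; which memory type an
index then yields depends on how the PAT MSR has been programmed. A
minimal sketch of the index selection, using hypothetical standalone
names rather than Xen's:

```c
#include <assert.h>

/* x86 PTE flag bits involved in memory-type (PAT index) selection. */
#define PTE_PWT (1u << 3)
#define PTE_PCD (1u << 4)
#define PTE_PAT (1u << 7)

/*
 * The PAT index is PAT:PCD:PWT. Hence a WC mapping as introduced here
 * (PAT set, PCD/PWT clear) selects entry 4, while UC (PCD|PWT) selects
 * entry 3 and WT (just PWT) selects entry 1.
 */
static unsigned int pat_index(unsigned int flags)
{
    return (!!(flags & PTE_PAT) << 2) |
           (!!(flags & PTE_PCD) << 1) |
            !!(flags & PTE_PWT);
}
```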

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Mark ioremap_wc() __init.
---
TBD: If the VGA range is WC in the fixed range MTRRs, reusing the low
     1st Mb mapping (like ioremap() does) would be an option.

--- a/xen/arch/x86/mm.c
+++ b/xen/arch/x86/mm.c
@@ -5881,6 +5881,20 @@ void __iomem *ioremap(paddr_t pa, size_t
     return (void __force __iomem *)va;
 }
 
+void __iomem *__init ioremap_wc(paddr_t pa, size_t len)
+{
+    mfn_t mfn = _mfn(PFN_DOWN(pa));
+    unsigned int offs = pa & (PAGE_SIZE - 1);
+    unsigned int nr = PFN_UP(offs + len);
+    void *va;
+
+    WARN_ON(page_is_ram_type(mfn_x(mfn), RAM_TYPE_CONVENTIONAL));
+
+    va = __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR_WC, VMAP_DEFAULT);
+
+    return (void __force __iomem *)(va + offs);
+}
+
 int create_perdomain_mapping(struct domain *d, unsigned long va,
                              unsigned int nr, l1_pgentry_t **pl1tab,
                              struct page_info **ppg)
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -9,9 +9,9 @@
 #include <xen/param.h>
 #include <xen/xmalloc.h>
 #include <xen/kernel.h>
+#include <xen/mm.h>
 #include <xen/vga.h>
 #include <asm/io.h>
-#include <asm/page.h>
 #include "font.h"
 #include "lfb.h"
 
@@ -103,7 +103,7 @@ void __init vesa_init(void)
     lfbp.text_columns = vlfb_info.width / font->width;
     lfbp.text_rows = vlfb_info.height / font->height;
 
-    lfbp.lfb = lfb = ioremap(lfb_base(), vram_remap);
+    lfbp.lfb = lfb = ioremap_wc(lfb_base(), vram_remap);
     if ( !lfb )
         return;
 
@@ -179,8 +179,7 @@ void __init vesa_mtrr_init(void)
 
 static void lfb_flush(void)
 {
-    if ( vesa_mtrr == 3 )
-        __asm__ __volatile__ ("sfence" : : : "memory");
+    __asm__ __volatile__ ("sfence" : : : "memory");
 }
 
 void __init vesa_endboot(bool_t keep)
--- a/xen/drivers/video/vga.c
+++ b/xen/drivers/video/vga.c
@@ -79,7 +79,7 @@ void __init video_init(void)
     {
     case XEN_VGATYPE_TEXT_MODE_3:
         if ( page_is_ram_type(paddr_to_pfn(0xB8000), RAM_TYPE_CONVENTIONAL) ||
-             ((video = ioremap(0xB8000, 0x8000)) == NULL) )
+             ((video = ioremap_wc(0xB8000, 0x8000)) == NULL) )
             return;
         outw(0x200a, 0x3d4); /* disable cursor */
         columns = vga_console_info.u.text_mode_3.columns;
@@ -164,7 +164,11 @@ void __init video_endboot(void)
     {
     case XEN_VGATYPE_TEXT_MODE_3:
         if ( !vgacon_keep )
+        {
             memset(video, 0, columns * lines * 2);
+            iounmap(video);
+            video = ZERO_BLOCK_PTR;
+        }
         break;
     case XEN_VGATYPE_VESA_LFB:
     case XEN_VGATYPE_EFI_LFB:
--- a/xen/include/asm-x86/mm.h
+++ b/xen/include/asm-x86/mm.h
@@ -615,6 +615,8 @@ void destroy_perdomain_mapping(struct do
                                unsigned int nr);
 void free_perdomain_mappings(struct domain *);
 
+void __iomem *ioremap_wc(paddr_t, size_t);
+
 extern int memory_add(unsigned long spfn, unsigned long epfn, unsigned int pxm);
 
 void domain_set_alloc_bitsize(struct domain *d);
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -349,8 +349,10 @@ void efi_update_l4_pgtable(unsigned int
 #define __PAGE_HYPERVISOR_RX      (_PAGE_PRESENT | _PAGE_ACCESSED)
 #define __PAGE_HYPERVISOR         (__PAGE_HYPERVISOR_RX | \
                                    _PAGE_DIRTY | _PAGE_RW)
+#define __PAGE_HYPERVISOR_WT      (__PAGE_HYPERVISOR | _PAGE_PWT)
 #define __PAGE_HYPERVISOR_UCMINUS (__PAGE_HYPERVISOR | _PAGE_PCD)
 #define __PAGE_HYPERVISOR_UC      (__PAGE_HYPERVISOR | _PAGE_PCD | _PAGE_PWT)
+#define __PAGE_HYPERVISOR_WC      (__PAGE_HYPERVISOR | _PAGE_PAT)
 #define __PAGE_HYPERVISOR_SHSTK   (__PAGE_HYPERVISOR_RO | _PAGE_DIRTY)
 
 #define MAP_SMALL_PAGES _PAGE_AVAIL0 /* don't use superpages mappings */
--- a/xen/include/asm-x86/x86_64/page.h
+++ b/xen/include/asm-x86/x86_64/page.h
@@ -154,6 +154,10 @@ static inline intpte_t put_pte_flags(uns
                                  _PAGE_GLOBAL | _PAGE_NX)
 #define PAGE_HYPERVISOR_UC      (__PAGE_HYPERVISOR_UC | \
                                  _PAGE_GLOBAL | _PAGE_NX)
+#define PAGE_HYPERVISOR_WC      (__PAGE_HYPERVISOR_WC | \
+                                 _PAGE_GLOBAL | _PAGE_NX)
+#define PAGE_HYPERVISOR_WT      (__PAGE_HYPERVISOR_WT | \
+                                 _PAGE_GLOBAL | _PAGE_NX)
 
 #endif /* __X86_64_PAGE_H__ */
 



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:31:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:31:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133087.248151 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFAN-0002aF-73; Thu, 27 May 2021 12:31:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133087.248151; Thu, 27 May 2021 12:31:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFAN-0002a8-3R; Thu, 27 May 2021 12:31:11 +0000
Received: by outflank-mailman (input) for mailman id 133087;
 Thu, 27 May 2021 12:31:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFAM-0002a0-7c
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:31:10 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0de3975c-099d-42a7-9c1f-15a18aaeef7d;
 Thu, 27 May 2021 12:31:09 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 6EDFC218DD;
 Thu, 27 May 2021 12:31:08 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 4435511A98;
 Thu, 27 May 2021 12:31:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0de3975c-099d-42a7-9c1f-15a18aaeef7d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118668; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=0I4gNzPs7QkUzQaZgNM9qNeznBTvZe1/bKfap9B3u8w=;
	b=j6t6sDTyYXBiRPX9kFPTiT8c3JUIc5bsqpm1+JIVm53FA0U71KrhORmXGelcmk0EqVwdYp
	nXTtw0qpBqQQLM9Jc3pOPA4F8HKPuJPAiEGiTf2rOJY5wk/JTwP5t90qCucd0YtiMOKLfV
	bCg7y+ALsbYh4O4yh8IgCiIcty//0Lo=
Subject: [PATCH v2 02/12] x86: re-work memset()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <e43a7b27-db34-303a-6483-6d90781c6978@suse.com>
Date: Thu, 27 May 2021 14:31:08 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Move the function to its own assembly file. Having it in C just for the
entire body to be an asm() isn't really helpful. Then have two flavors:
A "basic" version using qword steps for the bulk of the operation, and an
ERMS version for modern hardware, to be substituted in via alternatives
patching.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
We may want to consider branching over the REP STOSQ as well, if the
number of qwords turns out to be zero.
We may also want to consider using non-REP STOS{L,W,B} for the tail.
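To illustrate the intended non-ERMS flow in C (a model only, under the
assumption that REP STOSQ / REP STOSB behave like the loops below;
memset_model() is a hypothetical name, not part of the patch):

```c
#include <stdint.h>
#include <string.h>
#include <stddef.h>

/*
 * C model of the "basic" flavor: replicate the fill byte into a qword
 * pattern, store qwords for the bulk (REP STOSQ), then store the
 * remaining 0-7 bytes individually (REP STOSB).
 */
static void *memset_model(void *s, int c, size_t n)
{
    uint64_t pattern = 0x0101010101010101ull * (uint8_t)c;
    unsigned char *p = s;
    size_t qwords = n >> 3;

    while ( qwords-- )
    {
        memcpy(p, &pattern, sizeof(pattern)); /* stands in for one STOSQ */
        p += sizeof(pattern);
    }
    for ( n &= 7; n; --n )
        *p++ = (uint8_t)c;                    /* stands in for one STOSB */

    return s;
}
```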

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -43,6 +43,7 @@ obj-$(CONFIG_INDIRECT_THUNK) += indirect
 obj-y += ioport_emulate.o
 obj-y += irq.o
 obj-$(CONFIG_KEXEC) += machine_kexec.o
+obj-y += memset.o
 obj-y += mm.o x86_64/mm.o
 obj-$(CONFIG_HVM) += monitor.o
 obj-y += mpparse.o
--- /dev/null
+++ b/xen/arch/x86/memset.S
@@ -0,0 +1,31 @@
+#include <asm/asm_defns.h>
+
+.macro memset
+        and     $7, %edx
+        shr     $3, %rcx
+        movzbl  %sil, %esi
+        mov     $0x0101010101010101, %rax
+        imul    %rsi, %rax
+        mov     %rdi, %rsi
+        rep stosq
+        or      %edx, %ecx
+        jz      0f
+        rep stosb
+0:
+        mov     %rsi, %rax
+        ret
+.endm
+
+.macro memset_erms
+        mov     %esi, %eax
+        mov     %rdi, %rsi
+        rep stosb
+        mov     %rsi, %rax
+        ret
+.endm
+
+ENTRY(memset)
+        mov     %rdx, %rcx
+        ALTERNATIVE memset, memset_erms, X86_FEATURE_ERMS
+        .type memset, @function
+        .size memset, . - memset
--- a/xen/arch/x86/string.c
+++ b/xen/arch/x86/string.c
@@ -22,19 +22,6 @@ void *(memcpy)(void *dest, const void *s
     return dest;
 }
 
-void *(memset)(void *s, int c, size_t n)
-{
-    long d0, d1;
-
-    asm volatile (
-        "rep stosb"
-        : "=&c" (d0), "=&D" (d1)
-        : "a" (c), "1" (s), "0" (n)
-        : "memory");
-
-    return s;
-}
-
 void *(memmove)(void *dest, const void *src, size_t n)
 {
     long d0, d1, d2;



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:31:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:31:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133092.248161 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFAf-0003AQ-EC; Thu, 27 May 2021 12:31:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133092.248161; Thu, 27 May 2021 12:31:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFAf-0003AJ-BH; Thu, 27 May 2021 12:31:29 +0000
Received: by outflank-mailman (input) for mailman id 133092;
 Thu, 27 May 2021 12:31:28 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFAe-00030D-2g
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:31:28 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id bae8eeac-2622-49e8-a362-505f87bbb378;
 Thu, 27 May 2021 12:31:26 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 7BFA8218DD;
 Thu, 27 May 2021 12:31:25 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 587CF11A98;
 Thu, 27 May 2021 12:31:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bae8eeac-2622-49e8-a362-505f87bbb378
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118685; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=8iVlQanP290BLy02LDzP89TcI1dKH7MaEcZKdyou000=;
	b=aOK0zKMm3VR+7w75/vqdiEKsBmWa3gLhkabEA/9yX93rxodzhVfg622fFN8c6avMoqKQG3
	MzksE9EVTGJAYdyoxdB1hhdCrZR5tFU5qjyXwcGg0pFkvVstG+jbVh9wC35S42GolIqJ7E
	matnOXfWokSWyjJDYatwFvPi36yMnyo=
Subject: [PATCH v2 03/12] x86: re-work memcpy()
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <b715c6aa-579a-fede-fe0e-d7c170ec3bce@suse.com>
Date: Thu, 27 May 2021 14:31:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Move the function to its own assembly file. Having it in C just for the
entire body to be an asm() isn't really helpful. Then have two flavors:
A "basic" version using qword steps for the bulk of the operation, and an
ERMS version for modern hardware, to be substituted in via alternatives
patching.

Alternatives patching, however, requires an extra precaution: It uses
memcpy() itself, and hence the function may patch itself. Luckily the
patched-in code only replaces the prolog of the original function. Make
sure this remains this way.

Additionally, alternatives patching, while supposedly safe by virtue of
enforcing a control flow change when modifying already prefetched code,
may not really be. Afaict a request is pending to drop the first of the
two options in the SDM's "Handling Self- and Cross-Modifying Code"
section. Insert a serializing instruction there. To avoid having to
introduce a local variable, also switch text_poke() to return void:
Neither of its callers cares about the returned value.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
We may want to consider branching over the REP MOVSQ as well, if the
number of qwords turns out to be zero.
We may also want to consider using non-REP MOVS{L,W,B} for the tail.

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -43,6 +43,7 @@ obj-$(CONFIG_INDIRECT_THUNK) += indirect
 obj-y += ioport_emulate.o
 obj-y += irq.o
 obj-$(CONFIG_KEXEC) += machine_kexec.o
+obj-y += memcpy.o
 obj-y += memset.o
 obj-y += mm.o x86_64/mm.o
 obj-$(CONFIG_HVM) += monitor.o
--- a/xen/arch/x86/alternative.c
+++ b/xen/arch/x86/alternative.c
@@ -164,12 +164,14 @@ void init_or_livepatch add_nops(void *in
  * executing.
  *
  * "noinline" to cause control flow change and thus invalidate I$ and
- * cause refetch after modification.
+ * cause refetch after modification.  While the SDM continues to suggest this
+ * is sufficient, it may not be - issue a serializing insn afterwards as well.
  */
-static void *init_or_livepatch noinline
+static void init_or_livepatch noinline
 text_poke(void *addr, const void *opcode, size_t len)
 {
-    return memcpy(addr, opcode, len);
+    memcpy(addr, opcode, len);
+    cpuid_eax(0);
 }
 
 /*
--- /dev/null
+++ b/xen/arch/x86/memcpy.S
@@ -0,0 +1,21 @@
+#include <asm/asm_defns.h>
+
+ENTRY(memcpy)
+        mov     %rdx, %rcx
+        mov     %rdi, %rax
+        /*
+         * We need to be careful here: memcpy() is involved in alternatives
+         * patching, so the code doing the actual copying (i.e. past setting
+         * up registers) may not be subject to patching (unless further
+         * precautions were taken).
+         */
+        ALTERNATIVE "and $7, %edx; shr $3, %rcx", \
+                    "rep movsb; ret", X86_FEATURE_ERMS
+        rep movsq
+        or      %edx, %ecx
+        jz      1f
+        rep movsb
+1:
+        ret
+        .type memcpy, @function
+        .size memcpy, . - memcpy
--- a/xen/arch/x86/string.c
+++ b/xen/arch/x86/string.c
@@ -7,21 +7,6 @@
 
 #include <xen/lib.h>
 
-void *(memcpy)(void *dest, const void *src, size_t n)
-{
-    long d0, d1, d2;
-
-    asm volatile (
-        "   rep ; movs"__OS" ; "
-        "   mov %k4,%k3      ; "
-        "   rep ; movsb        "
-        : "=&c" (d0), "=&D" (d1), "=&S" (d2)
-        : "0" (n/BYTES_PER_LONG), "r" (n%BYTES_PER_LONG), "1" (dest), "2" (src)
-        : "memory" );
-
-    return dest;
-}
-
 void *(memmove)(void *dest, const void *src, size_t n)
 {
     long d0, d1, d2;



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:31:48 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:31:48 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133097.248173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFAy-0003jm-Mc; Thu, 27 May 2021 12:31:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133097.248173; Thu, 27 May 2021 12:31:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFAy-0003jf-Jc; Thu, 27 May 2021 12:31:48 +0000
Received: by outflank-mailman (input) for mailman id 133097;
 Thu, 27 May 2021 12:31:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFAx-0003iF-DD
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:31:47 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 10a0f0d3-1038-4943-a11f-f1120438c9b8;
 Thu, 27 May 2021 12:31:46 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 8BEE71FD2F;
 Thu, 27 May 2021 12:31:45 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 66A9411A98;
 Thu, 27 May 2021 12:31:45 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10a0f0d3-1038-4943-a11f-f1120438c9b8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118705; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QcysIdV+/q7pgDThgMz+fBurt337dZex+nTBiQnp9G4=;
	b=TDq+Sg+A4U8B3V4H5lq5PraXQrPLCy0lNJIXEiHPUKfVPqPRcqZlaFl2KgbmRrx5yFDZPm
	BBunpHeBZj/w92ikA999jOuWKqW7OeEhsy7CNnLVXSH5FaJZsi8KgeS5G3D3lX/UejSyPZ
	h6VokZGVyR0VW0XIbjsfL+CwwwFEh0Y=
Subject: [PATCH v2 04/12] x86: control memset() and memcpy() inlining
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <654762fa-a7e2-3121-21d5-b992e8428d0e@suse.com>
Date: Thu, 27 May 2021 14:31:44 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Stop the compiler from inlining non-trivial memset() and memcpy() (for
memset() see e.g. map_vcpu_info() or kimage_load_segments() for
examples). This way we even keep the compiler from using REP STOSQ /
REP MOVSQ when we'd prefer REP STOSB / REP MOVSB (when ERMS is
available).

With gcc10 this yields a modest .text size reduction (release build) of
around 2k.

Unfortunately these options aren't understood by the clang versions I
have readily available for testing with; I'm unaware of equivalents.

Note also that using cc-option-add is not an option here, or at least I
couldn't make things work with it (in case the option was not supported
by the compiler): The embedded comma in the option looks to be getting
in the way.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
---
The boundary values are of course up for discussion - I wasn't really
certain whether to use 16 or 32; I'd be less certain about using yet
larger values.

Similarly whether to permit the compiler to emit REP STOSQ / REP MOVSQ
for known size, properly aligned blocks is up for discussion.
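The probing that $(call cc-option,...) performs can be reproduced by
hand, which is how the clang situation above was checked; a sketch (the
compiler name and flag value are examples):

```shell
# Probe whether the compiler accepts the strategy flag, the way
# cc-option does: compile an empty input with -Werror so an unknown
# option becomes a hard error rather than a warning.
if gcc -Werror -mmemset-strategy=libcall:-1:noalign \
       -c -x c /dev/null -o /dev/null 2>/dev/null; then
    echo supported
else
    echo unsupported
fi
```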

--- a/xen/arch/x86/arch.mk
+++ b/xen/arch/x86/arch.mk
@@ -51,6 +51,9 @@ CFLAGS-$(CONFIG_INDIRECT_THUNK) += -fno-
 $(call cc-option-add,CFLAGS-stack-boundary,CC,-mpreferred-stack-boundary=3)
 export CFLAGS-stack-boundary
 
+CFLAGS += $(call cc-option,$(CC),-mmemcpy-strategy=unrolled_loop:16:noalign$(comma)libcall:-1:noalign)
+CFLAGS += $(call cc-option,$(CC),-mmemset-strategy=unrolled_loop:16:noalign$(comma)libcall:-1:noalign)
+
 ifeq ($(CONFIG_UBSAN),y)
 # Don't enable alignment sanitisation.  x86 has efficient unaligned accesses,
 # and various things (ACPI tables, hypercall pages, stubs, etc) are wont-fix.



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:32:11 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:32:11 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133101.248184 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFBL-0004Kx-0B; Thu, 27 May 2021 12:32:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133101.248184; Thu, 27 May 2021 12:32:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFBK-0004Kq-SY; Thu, 27 May 2021 12:32:10 +0000
Received: by outflank-mailman (input) for mailman id 133101;
 Thu, 27 May 2021 12:32:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFBK-0004Ka-9F
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:32:10 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 52219094-a289-4d72-8109-0927f6bf7cb5;
 Thu, 27 May 2021 12:32:09 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6C6801FD2E;
 Thu, 27 May 2021 12:32:08 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 4FDF911A98;
 Thu, 27 May 2021 12:32:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52219094-a289-4d72-8109-0927f6bf7cb5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118728; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=DqA5J7pXICS+Sn1QhqjKtNkkQJN6ELi2AF/cQu5/0QU=;
	b=F+U4y9zJy9LtFGcmwvmfuiJyCOs9OMaGG9NNoJOAP/eVamVH4GC9ohTVPXDVaPxKbuU4nD
	F4RlAGdhLjOo1YBWLKhrlxUPHuhKokFRpH9VuOsd6VWo0E2BmyJBEj3l9Tjz3JggZd9Mlk
	Hcm9yh4ILFLbotmGxP/uPJwB+bIQL+w=
Subject: [PATCH v2 05/12] x86: introduce "hot" and "cold" page clearing
 functions
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <33d79032-d598-2a7c-f361-6d765fd6a54b@suse.com>
Date: Thu, 27 May 2021 14:32:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The present clear_page_sse2() is useful when a page isn't going to be
touched again soon, or when we want to limit churn on the caches.
Amend it to alternatively use CLZERO, which has been found to be quite
a bit faster, at least on Zen2 hardware. Note that using CLZERO requires
knowing the cache line size, and hence a feature dependency on CLFLUSH
gets introduced.
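
As the paragraph above notes, the CLZERO path needs the cache line size,
which CPUID leaf 1 reports in EBX[15:8] in units of 8 bytes when CLFLUSH
is available. A minimal user-space sketch of that decoding (the helper
name is hypothetical, not part of the patch):

```c
#include <stdint.h>

/*
 * Hypothetical helper: decode the CLFLUSH line size from the EBX output
 * of CPUID leaf 1.  Bits 15:8 hold the size in units of 8 bytes; a value
 * of zero means the size (and hence CLZERO's stride) cannot be derived.
 */
static unsigned int clflush_line_size(uint32_t ebx)
{
    return ((ebx >> 8) & 0xff) * 8;
}
```

On most current x86 CPUs the field reads 8, i.e. a 64-byte cache line.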

For cases where latency is the most important aspect, or when it is
expected that sufficiently large parts of a page will get accessed again
soon after the clearing, introduce a "hot" alternative. Again use
alternatives patching to select between a "legacy" and an ERMS variant.

Don't switch any callers just yet - this will be the subject of
subsequent changes.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
---
Note: Ankur indicates that for ~L3-size or larger regions MOVNT/CLZERO
      is better even latency-wise.

--- a/xen/arch/x86/clear_page.S
+++ b/xen/arch/x86/clear_page.S
@@ -1,8 +1,9 @@
         .file __FILE__
 
-#include <asm/page.h>
+#include <asm/asm_defns.h>
+#include <xen/page-size.h>
 
-ENTRY(clear_page_sse2)
+        .macro clear_page_sse2
         mov     $PAGE_SIZE/32, %ecx
         xor     %eax,%eax
 
@@ -16,3 +17,45 @@ ENTRY(clear_page_sse2)
 
         sfence
         ret
+        .endm
+
+        .macro clear_page_clzero
+        mov     %rdi, %rax
+        mov     $PAGE_SIZE/64, %ecx
+        .globl clear_page_clzero_post_count
+clear_page_clzero_post_count:
+
+0:      clzero
+        sub     $-64, %rax
+        .globl clear_page_clzero_post_neg_size
+clear_page_clzero_post_neg_size:
+        sub     $1, %ecx
+        jnz     0b
+
+        sfence
+        ret
+        .endm
+
+ENTRY(clear_page_cold)
+        ALTERNATIVE clear_page_sse2, clear_page_clzero, X86_FEATURE_CLZERO
+        .type clear_page_cold, @function
+        .size clear_page_cold, . - clear_page_cold
+
+        .macro clear_page_stosb
+        mov     $PAGE_SIZE, %ecx
+        xor     %eax,%eax
+        rep stosb
+        ret
+        .endm
+
+        .macro clear_page_stosq
+        mov     $PAGE_SIZE/8, %ecx
+        xor     %eax, %eax
+        rep stosq
+        ret
+        .endm
+
+ENTRY(clear_page_hot)
+        ALTERNATIVE clear_page_stosq, clear_page_stosb, X86_FEATURE_ERMS
+        .type clear_page_hot, @function
+        .size clear_page_hot, . - clear_page_hot
--- a/xen/arch/x86/cpu/common.c
+++ b/xen/arch/x86/cpu/common.c
@@ -56,6 +56,9 @@ static unsigned int forced_caps[NCAPINTS
 
 DEFINE_PER_CPU(bool, full_gdt_loaded);
 
+extern uint32_t clear_page_clzero_post_count[];
+extern int8_t clear_page_clzero_post_neg_size[];
+
 void __init setup_clear_cpu_cap(unsigned int cap)
 {
 	const uint32_t *dfs;
@@ -331,8 +334,38 @@ void __init early_cpu_init(void)
 
 	edx &= ~cleared_caps[cpufeat_word(X86_FEATURE_FPU)];
 	ecx &= ~cleared_caps[cpufeat_word(X86_FEATURE_SSE3)];
-	if (edx & cpufeat_mask(X86_FEATURE_CLFLUSH))
-		c->x86_cache_alignment = ((ebx >> 8) & 0xff) * 8;
+	if (edx & cpufeat_mask(X86_FEATURE_CLFLUSH)) {
+		unsigned int size = ((ebx >> 8) & 0xff) * 8;
+
+		c->x86_cache_alignment = size;
+
+		/*
+		 * Patch in parameters of clear_page_cold()'s CLZERO
+		 * alternative. Note that for now we cap this at 128 bytes.
+		 * Larger cache line sizes would still be dealt with
+		 * correctly, but would result in redundant work being done.
+		 */
+		if (size > 128)
+			size = 128;
+		if (size && !(size & (size - 1))) {
+			/*
+			 * Need to play some games to keep the compiler from
+			 * recognizing the negative array index as being out
+			 * of bounds. The labels in assembler code really are
+			 * _after_ the locations to be patched, so the
+			 * negative index is intentional.
+			 */
+			uint32_t *pcount = clear_page_clzero_post_count;
+			int8_t *neg_size = clear_page_clzero_post_neg_size;
+
+			OPTIMIZER_HIDE_VAR(pcount);
+			OPTIMIZER_HIDE_VAR(neg_size);
+			pcount[-1] = PAGE_SIZE / size;
+			neg_size[-1] = -size;
+		}
+		else
+			setup_clear_cpu_cap(X86_FEATURE_CLZERO);
+	}
 	/* Leaf 0x1 capabilities filled in early for Xen. */
 	c->x86_capability[cpufeat_word(X86_FEATURE_FPU)] = edx;
 	c->x86_capability[cpufeat_word(X86_FEATURE_SSE3)] = ecx;
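
The capping and power-of-two logic in the hunk above can be condensed
into a small user-space sketch (the function name is hypothetical); it
computes the two values patched in behind the clear_page_clzero labels:

```c
#include <stdint.h>

#define PAGE_SIZE 4096u

/*
 * Hypothetical condensation of the patching logic above: cap the cache
 * line size at 128 bytes, insist on a power of two, and compute the loop
 * count plus the negated stride written behind the two labels.  Returns
 * 0 when CLZERO would have to be disabled instead.
 */
static int clzero_params(unsigned int size, uint32_t *count, int8_t *neg_size)
{
    if (size > 128)
        size = 128;
    if (!size || (size & (size - 1)))
        return 0;                   /* zero or not a power of two */
    *count = PAGE_SIZE / size;      /* iterations of the CLZERO loop */
    *neg_size = (int8_t)-(int)size; /* -128..-64, fits the patched imm8 */
    return 1;
}
```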
--- a/xen/include/asm-x86/asm-defns.h
+++ b/xen/include/asm-x86/asm-defns.h
@@ -20,6 +20,10 @@
     .byte 0x0f, 0x01, 0xdd
 .endm
 
+.macro clzero
+    .byte 0x0f, 0x01, 0xfc
+.endm
+
 .macro INDIRECT_BRANCH insn:req arg:req
 /*
  * Create an indirect branch.  insn is one of call/jmp, arg is a single
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -232,10 +232,11 @@ typedef struct { u64 pfn; } pagetable_t;
 #define pagetable_from_paddr(p) pagetable_from_pfn((p)>>PAGE_SHIFT)
 #define pagetable_null()        pagetable_from_pfn(0)
 
-void clear_page_sse2(void *);
+void clear_page_hot(void *);
+void clear_page_cold(void *);
 void copy_page_sse2(void *, const void *);
 
-#define clear_page(_p)      clear_page_sse2(_p)
+#define clear_page(_p)      clear_page_cold(_p)
 #define copy_page(_t, _f)   copy_page_sse2(_t, _f)
 
 /* Convert between Xen-heap virtual addresses and machine addresses. */
--- a/xen/tools/gen-cpuid.py
+++ b/xen/tools/gen-cpuid.py
@@ -182,6 +182,10 @@ def crunch_numbers(state):
         # the first place.
         APIC: [X2APIC, TSC_DEADLINE, EXTAPIC],
 
+        # The CLZERO insn requires a means to determine the cache line size,
+        # which is tied to the CLFLUSH insn.
+        CLFLUSH: [CLZERO],
+
         # AMD built MMXExtentions and 3DNow as extentions to MMX.
         MMX: [MMXEXT, _3DNOW],
 



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:32:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:32:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133111.248195 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFBv-00057U-E3; Thu, 27 May 2021 12:32:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133111.248195; Thu, 27 May 2021 12:32:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFBv-00057H-AF; Thu, 27 May 2021 12:32:47 +0000
Received: by outflank-mailman (input) for mailman id 133111;
 Thu, 27 May 2021 12:32:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFBu-000573-8W
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:32:46 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf02cc0e-f7ce-432b-8bba-6a8fe52d17c5;
 Thu, 27 May 2021 12:32:45 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id A0E4A1FD2E;
 Thu, 27 May 2021 12:32:44 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 6CF1011A98;
 Thu, 27 May 2021 12:32:44 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf02cc0e-f7ce-432b-8bba-6a8fe52d17c5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118764; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=TZmIcIJiPml4pYGzZ8PXew8/Uidlmz9UaHgSxe2VY6k=;
	b=BFs8RLNJEVICqaTsdEvhcEPVbK+oospJS0U72GU1SqP5Gb+ztJ048ziABRVXRsl4GbWOqn
	yRGbmCmQtyNNrsrjxMWiKQTVHlMe0K8KiaGia8I5JfoMmNszbu+qnEQe7KJfZQes19jXDp
	8vVcOkfp7kUlHg5hP+ohkjgwFDl/PQI=
Subject: [PATCH v2 06/12] page-alloc: make scrub_on_page() static
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <6f24db03-8c96-3a81-a073-657743e8faee@suse.com>
Date: Thu, 27 May 2021 14:32:40 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Before starting to alter its properties, restrict the function's
visibility. The only external user is mem-paging, which we can
accommodate by different means.

Also move the function up in its source file, so we won't need to
forward-declare it. Constify its parameter at the same time.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/mm/mem_paging.c
+++ b/xen/arch/x86/mm/mem_paging.c
@@ -316,9 +316,6 @@ static int evict(struct domain *d, gfn_t
     ret = p2m_set_entry(p2m, gfn, INVALID_MFN, PAGE_ORDER_4K,
                         p2m_ram_paged, a);
 
-    /* Clear content before returning the page to Xen */
-    scrub_one_page(page);
-
     /* Track number of paged gfns */
     atomic_inc(&d->paged_pages);
 
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -136,6 +136,7 @@
 #include <xen/numa.h>
 #include <xen/nodemask.h>
 #include <xen/event.h>
+#include <xen/vm_event.h>
 #include <public/sysctl.h>
 #include <public/sched.h>
 #include <asm/page.h>
@@ -757,6 +758,21 @@ static void page_list_add_scrub(struct p
 #endif
 #define SCRUB_BYTE_PATTERN   (SCRUB_PATTERN & 0xff)
 
+static void scrub_one_page(const struct page_info *pg)
+{
+    if ( unlikely(pg->count_info & PGC_broken) )
+        return;
+
+#ifndef NDEBUG
+    /* Avoid callers relying on allocations returning zeroed pages. */
+    unmap_domain_page(memset(__map_domain_page(pg),
+                             SCRUB_BYTE_PATTERN, PAGE_SIZE));
+#else
+    /* For a production build, clear_page() is the fastest way to scrub. */
+    clear_domain_page(_mfn(page_to_mfn(pg)));
+#endif
+}
+
 static void poison_one_page(struct page_info *pg)
 {
 #ifdef CONFIG_SCRUB_DEBUG
@@ -2431,10 +2447,12 @@ void free_domheap_pages(struct page_info
             /*
              * Normally we expect a domain to clear pages before freeing them,
              * if it cares about the secrecy of their contents. However, after
-             * a domain has died we assume responsibility for erasure. We do
-             * scrub regardless if option scrub_domheap is set.
+             * a domain has died or if it has mem-paging enabled we assume
+             * responsibility for erasure. We do scrub regardless if option
+             * scrub_domheap is set.
              */
-            scrub = d->is_dying || scrub_debug || opt_scrub_domheap;
+            scrub = d->is_dying || mem_paging_enabled(d) ||
+                    scrub_debug || opt_scrub_domheap;
         }
         else
         {
@@ -2519,21 +2537,6 @@ static __init int pagealloc_keyhandler_i
 __initcall(pagealloc_keyhandler_init);
 
 
-void scrub_one_page(struct page_info *pg)
-{
-    if ( unlikely(pg->count_info & PGC_broken) )
-        return;
-
-#ifndef NDEBUG
-    /* Avoid callers relying on allocations returning zeroed pages. */
-    unmap_domain_page(memset(__map_domain_page(pg),
-                             SCRUB_BYTE_PATTERN, PAGE_SIZE));
-#else
-    /* For a production build, clear_page() is the fastest way to scrub. */
-    clear_domain_page(_mfn(page_to_mfn(pg)));
-#endif
-}
-
 static void dump_heap(unsigned char key)
 {
     s_time_t      now = NOW();
--- a/xen/include/asm-x86/mem_paging.h
+++ b/xen/include/asm-x86/mem_paging.h
@@ -24,12 +24,6 @@
 
 int mem_paging_memop(XEN_GUEST_HANDLE_PARAM(xen_mem_paging_op_t) arg);
 
-#ifdef CONFIG_MEM_PAGING
-# define mem_paging_enabled(d) vm_event_check_ring((d)->vm_event_paging)
-#else
-# define mem_paging_enabled(d) false
-#endif
-
 #endif /*__ASM_X86_MEM_PAGING_H__ */
 
 /*
--- a/xen/include/xen/mm.h
+++ b/xen/include/xen/mm.h
@@ -498,8 +498,6 @@ static inline unsigned int get_order_fro
     return order;
 }
 
-void scrub_one_page(struct page_info *);
-
 #ifndef arch_free_heap_page
 #define arch_free_heap_page(d, pg) \
     page_list_del(pg, page_to_list(d, pg))
--- a/xen/include/xen/sched.h
+++ b/xen/include/xen/sched.h
@@ -1117,6 +1117,12 @@ static always_inline bool is_iommu_enabl
     return evaluate_nospec(d->options & XEN_DOMCTL_CDF_iommu);
 }
 
+#ifdef CONFIG_MEM_PAGING
+# define mem_paging_enabled(d) vm_event_check_ring((d)->vm_event_paging)
+#else
+# define mem_paging_enabled(d) false
+#endif
+
 extern bool sched_smt_power_savings;
 extern bool sched_disable_smt_switching;
 



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:33:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:33:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133119.248206 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFCc-0005mn-NB; Thu, 27 May 2021 12:33:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133119.248206; Thu, 27 May 2021 12:33:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFCc-0005mg-Ju; Thu, 27 May 2021 12:33:30 +0000
Received: by outflank-mailman (input) for mailman id 133119;
 Thu, 27 May 2021 12:33:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFCa-0005mO-Ts
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:33:28 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id ffebd46d-b651-435a-8e82-416844cedd35;
 Thu, 27 May 2021 12:33:27 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id E44A8218DD;
 Thu, 27 May 2021 12:33:26 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id A6A2911A98;
 Thu, 27 May 2021 12:33:26 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: ffebd46d-b651-435a-8e82-416844cedd35
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118806; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gn1qu776spR85ydugwRCif9Q5VssQAZffidjn3a689E=;
	b=XgUiD8My1RPv5xCY9pnl60B8sVur91hT2Wl3haNQebANPXCAJnJRn3Fv+H/uCjkbbUubvs
	lYxNcGFH4UEroTCF97IBD01ncn+K2ehjMaHpio+iBahPNfPSKcv8wdB5YFu2X11hAgyMW6
	jaUE+7NDqwikjVTCrNubwdRcLtiyu8g=
Subject: [PATCH v2 07/12] mm: allow page scrubbing routine(s) to be arch
 controlled
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <49c46d4d-4eaa-16a8-ccc8-c873b0b1d092@suse.com>
Date: Thu, 27 May 2021 14:33:23 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Especially when dealing with large amounts of memory, memset() may not
be very efficient; this can be bad enough that even for debug builds a
custom function is warranted. We additionally want to distinguish "hot"
and "cold" cases.

Keep the default fallback to clear_page_*() in common code; this may
want to be revisited down the road.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
---
The choice between hot and cold in scrub_one_page()'s callers is
certainly up for discussion / improvement.
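
The "cold" variant introduced below uses non-temporal (movnti) stores so
the scrub pattern bypasses the cache hierarchy. A user-space sketch of
the same idea with SSE2 intrinsics (x86-64 only; the function name is
hypothetical and this is not the hypervisor's actual routine):

```c
#include <emmintrin.h>      /* _mm_stream_si64(), _mm_sfence() */
#include <stddef.h>

#define PAGE_SIZE     4096
#define SCRUB_PATTERN 0xc2c2c2c2c2c2c2c2ULL

/*
 * Hypothetical "cold" scrub: non-temporal 8-byte stores write the debug
 * pattern without displacing useful cache contents; the trailing SFENCE
 * orders the weakly-ordered stores, mirroring the movnti loop added in
 * scrub_page_cold above.
 */
static void scrub_page_cold_sketch(void *page)
{
    long long *p = page;
    size_t i;

    for (i = 0; i < PAGE_SIZE / sizeof(*p); i++)
        _mm_stream_si64(p + i, (long long)SCRUB_PATTERN);
    _mm_sfence();
}
```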

--- a/xen/arch/x86/Makefile
+++ b/xen/arch/x86/Makefile
@@ -55,6 +55,7 @@ obj-y += percpu.o
 obj-y += physdev.o
 obj-$(CONFIG_COMPAT) += x86_64/physdev.o
 obj-y += psr.o
+obj-bin-$(CONFIG_DEBUG) += scrub_page.o
 obj-y += setup.o
 obj-y += shutdown.o
 obj-y += smp.o
--- /dev/null
+++ b/xen/arch/x86/scrub_page.S
@@ -0,0 +1,41 @@
+        .file __FILE__
+
+#include <asm/asm_defns.h>
+#include <xen/page-size.h>
+#include <xen/scrub.h>
+
+ENTRY(scrub_page_cold)
+        mov     $PAGE_SIZE/32, %ecx
+        mov     $SCRUB_PATTERN, %rax
+
+0:      movnti  %rax,   (%rdi)
+        movnti  %rax,  8(%rdi)
+        movnti  %rax, 16(%rdi)
+        movnti  %rax, 24(%rdi)
+        add     $32, %rdi
+        sub     $1, %ecx
+        jnz     0b
+
+        sfence
+        ret
+        .type scrub_page_cold, @function
+        .size scrub_page_cold, . - scrub_page_cold
+
+        .macro scrub_page_stosb
+        mov     $PAGE_SIZE, %ecx
+        mov     $SCRUB_BYTE_PATTERN, %eax
+        rep stosb
+        ret
+        .endm
+
+        .macro scrub_page_stosq
+        mov     $PAGE_SIZE/8, %ecx
+        mov     $SCRUB_PATTERN, %rax
+        rep stosq
+        ret
+        .endm
+
+ENTRY(scrub_page_hot)
+        ALTERNATIVE scrub_page_stosq, scrub_page_stosb, X86_FEATURE_ERMS
+        .type scrub_page_hot, @function
+        .size scrub_page_hot, . - scrub_page_hot
--- a/xen/common/page_alloc.c
+++ b/xen/common/page_alloc.c
@@ -124,6 +124,7 @@
 #include <xen/types.h>
 #include <xen/lib.h>
 #include <xen/sched.h>
+#include <xen/scrub.h>
 #include <xen/spinlock.h>
 #include <xen/mm.h>
 #include <xen/param.h>
@@ -750,27 +751,31 @@ static void page_list_add_scrub(struct p
         page_list_add(pg, &heap(node, zone, order));
 }
 
-/* SCRUB_PATTERN needs to be a repeating series of bytes. */
-#ifndef NDEBUG
-#define SCRUB_PATTERN        0xc2c2c2c2c2c2c2c2ULL
-#else
-#define SCRUB_PATTERN        0ULL
+/*
+ * While in debug builds we want callers to avoid relying on allocations
+ * returning zeroed pages, for a production build, clear_page_*() is the
+ * fastest way to scrub.
+ */
+#ifndef CONFIG_DEBUG
+# undef  scrub_page_hot
+# define scrub_page_hot clear_page_hot
+# undef  scrub_page_cold
+# define scrub_page_cold clear_page_cold
 #endif
-#define SCRUB_BYTE_PATTERN   (SCRUB_PATTERN & 0xff)
 
-static void scrub_one_page(const struct page_info *pg)
+static void scrub_one_page(const struct page_info *pg, bool cold)
 {
+    void *ptr;
+
     if ( unlikely(pg->count_info & PGC_broken) )
         return;
 
-#ifndef NDEBUG
-    /* Avoid callers relying on allocations returning zeroed pages. */
-    unmap_domain_page(memset(__map_domain_page(pg),
-                             SCRUB_BYTE_PATTERN, PAGE_SIZE));
-#else
-    /* For a production build, clear_page() is the fastest way to scrub. */
-    clear_domain_page(_mfn(page_to_mfn(pg)));
-#endif
+    ptr = __map_domain_page(pg);
+    if ( cold )
+        scrub_page_cold(ptr);
+    else
+        scrub_page_hot(ptr);
+    unmap_domain_page(ptr);
 }
 
 static void poison_one_page(struct page_info *pg)
@@ -1046,12 +1051,14 @@ static struct page_info *alloc_heap_page
     if ( first_dirty != INVALID_DIRTY_IDX ||
          (scrub_debug && !(memflags & MEMF_no_scrub)) )
     {
+        bool cold = d && d != current->domain;
+
         for ( i = 0; i < (1U << order); i++ )
         {
             if ( test_and_clear_bit(_PGC_need_scrub, &pg[i].count_info) )
             {
                 if ( !(memflags & MEMF_no_scrub) )
-                    scrub_one_page(&pg[i]);
+                    scrub_one_page(&pg[i], cold);
 
                 dirty_cnt++;
             }
@@ -1308,7 +1315,7 @@ bool scrub_free_pages(void)
                 {
                     if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
                     {
-                        scrub_one_page(&pg[i]);
+                        scrub_one_page(&pg[i], true);
                         /*
                          * We can modify count_info without holding heap
                          * lock since we effectively locked this buddy by
@@ -1947,7 +1954,7 @@ static void __init smp_scrub_heap_pages(
         if ( !mfn_valid(_mfn(mfn)) || !page_state_is(pg, free) )
             continue;
 
-        scrub_one_page(pg);
+        scrub_one_page(pg, true);
     }
 }
 
--- a/xen/include/asm-arm/page.h
+++ b/xen/include/asm-arm/page.h
@@ -135,6 +135,12 @@ extern size_t dcache_line_bytes;
 
 #define copy_page(dp, sp) memcpy(dp, sp, PAGE_SIZE)
 
+#define clear_page_hot  clear_page
+#define clear_page_cold clear_page
+
+#define scrub_page_hot(page) memset(page, SCRUB_BYTE_PATTERN, PAGE_SIZE)
+#define scrub_page_cold      scrub_page_hot
+
 static inline size_t read_dcache_line_bytes(void)
 {
     register_t ctr;
--- a/xen/include/asm-x86/page.h
+++ b/xen/include/asm-x86/page.h
@@ -239,6 +239,11 @@ void copy_page_sse2(void *, const void *
 #define clear_page(_p)      clear_page_cold(_p)
 #define copy_page(_t, _f)   copy_page_sse2(_t, _f)
 
+#ifdef CONFIG_DEBUG
+void scrub_page_hot(void *);
+void scrub_page_cold(void *);
+#endif
+
 /* Convert between Xen-heap virtual addresses and machine addresses. */
 #define __pa(x)             (virt_to_maddr(x))
 #define __va(x)             (maddr_to_virt(x))
--- /dev/null
+++ b/xen/include/xen/scrub.h
@@ -0,0 +1,24 @@
+#ifndef __XEN_SCRUB_H__
+#define __XEN_SCRUB_H__
+
+#include <xen/const.h>
+
+/* SCRUB_PATTERN needs to be a repeating series of bytes. */
+#ifdef CONFIG_DEBUG
+# define SCRUB_PATTERN       _AC(0xc2c2c2c2c2c2c2c2,ULL)
+#else
+# define SCRUB_PATTERN       _AC(0,ULL)
+#endif
+#define SCRUB_BYTE_PATTERN   (SCRUB_PATTERN & 0xff)
+
+#endif /* __XEN_SCRUB_H__ */
+
+/*
+ * Local variables:
+ * mode: C
+ * c-file-style: "BSD"
+ * c-basic-offset: 4
+ * tab-width: 4
+ * indent-tabs-mode: nil
+ * End:
+ */



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:34:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:34:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133128.248217 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFDE-0006Rv-Vy; Thu, 27 May 2021 12:34:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133128.248217; Thu, 27 May 2021 12:34:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFDE-0006Ro-Su; Thu, 27 May 2021 12:34:08 +0000
Received: by outflank-mailman (input) for mailman id 133128;
 Thu, 27 May 2021 12:34:07 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFDD-0006Pc-8Z
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:34:07 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 9815c3a5-9e12-4398-ac35-e8ac1248d20e;
 Thu, 27 May 2021 12:34:06 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id C04991FD2E;
 Thu, 27 May 2021 12:34:05 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id A413711A98;
 Thu, 27 May 2021 12:34:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9815c3a5-9e12-4398-ac35-e8ac1248d20e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118845; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=T37zxeTBRZgcSqOKYX7WewAeIo59kPVTD7P0XmcfTcQ=;
	b=CEkK6A2AMkUaWy8Wpu9K+D3Mohi0H8TEqiEQ6bDFXg/HHxe6WoKkzElzBjjdtn9P0GsmRT
	BVnYJt9bcBwt6ByA8kPrJNHevC4rwR291N02RXw7NY+C34GRfp23NlbvNPY2TzBzNstqLy
	dY4pMCyVCkI0DzGydfYGdIuj/QcAqjM=
Subject: [PATCH v2 08/12] x86: move .text.kexec
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <f964e5bb-6b84-40d6-d247-7655ef09d47b@suse.com>
Date: Thu, 27 May 2021 14:34:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

The source file requests page alignment; avoid a padding hole by placing
it right after .text.entry. On average this yields a .text size
reduction of 2k.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.

--- a/xen/arch/x86/xen.lds.S
+++ b/xen/arch/x86/xen.lds.S
@@ -83,10 +83,11 @@ SECTIONS
        . = ALIGN(PAGE_SIZE);
        _etextentry = .;
 
+       *(.text.kexec)          /* Page aligned in the object file. */
+
        *(.text.cold)
        *(.text.unlikely)
        *(.fixup)
-       *(.text.kexec)
        *(.gnu.warning)
        _etext = .;             /* End of text section */
   } PHDR(text) = 0x9090



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:34:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:34:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133132.248228 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFDh-000718-8R; Thu, 27 May 2021 12:34:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133132.248228; Thu, 27 May 2021 12:34:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFDh-000711-5S; Thu, 27 May 2021 12:34:37 +0000
Received: by outflank-mailman (input) for mailman id 133132;
 Thu, 27 May 2021 12:34:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFDg-00070k-Qk
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:34:36 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b16f1a14-e0ad-4a58-9e46-3a98dbaac28e;
 Thu, 27 May 2021 12:34:36 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 7CB041FD2E;
 Thu, 27 May 2021 12:34:35 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 627DF11A98;
 Thu, 27 May 2021 12:34:35 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b16f1a14-e0ad-4a58-9e46-3a98dbaac28e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118875; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=s14UwhbZNQTt7lOAtjre/DnjC1roVy/CdMyBN/BeKjU=;
	b=c3NCNrrsRqryCplBfqnWzNqBgavC9PaVCwK7GfhTDGMxDhSLEwXp6j42Re6gisC5CXdV5G
	nHHia1Z9bK1qnveiJ5pdPiqgXvsbjZmSebQ3ECl+3f0j5BJ3LLeQ69mpVZiNOVCcpbzDz2
	fYRLuwaSIHJCo6qg/1b2fWwvYqgCkY0=
Subject: [PATCH v2 09/12] video/vesa: unmap frame buffer when relinquishing
 console
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <843f440e-039f-ca0b-6ac1-a4d50559d5bc@suse.com>
Date: Thu, 27 May 2021 14:34:29 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

There's no point in keeping the VA space occupied when no further output
will occur.

Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/xen/drivers/video/lfb.c
+++ b/xen/drivers/video/lfb.c
@@ -168,4 +168,5 @@ void lfb_free(void)
     xfree(lfb.lbuf);
     xfree(lfb.text_buf);
     xfree(lfb.line_len);
+    lfb.lfbp.lfb = ZERO_BLOCK_PTR;
 }
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -197,5 +197,7 @@ void __init vesa_endboot(bool_t keep)
                    vlfb_info.width * bpp);
         lfb_flush();
         lfb_free();
+        iounmap(lfb);
+        lfb = ZERO_BLOCK_PTR;
     }
 }



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:35:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:35:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133139.248238 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFEW-0007gr-Hh; Thu, 27 May 2021 12:35:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133139.248238; Thu, 27 May 2021 12:35:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFEW-0007gk-Em; Thu, 27 May 2021 12:35:28 +0000
Received: by outflank-mailman (input) for mailman id 133139;
 Thu, 27 May 2021 12:35:27 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFEV-0007ge-Bf
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:35:27 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 832eeafd-88b1-4951-95ea-f636978a74e7;
 Thu, 27 May 2021 12:35:26 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A29BE218DD;
 Thu, 27 May 2021 12:35:25 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 72E3B11A98;
 Thu, 27 May 2021 12:35:25 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 832eeafd-88b1-4951-95ea-f636978a74e7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118925; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Gqg+pjRYzvrnv7dX+llGzQkhdttXtK16oe9Z8AMg9RY=;
	b=taBtAtRvmeS/XJaxTuCZlR6WQZ2Hi+LLSwxGHHoRDfnuZV5ysOnnt3FvDK80R8EO5vgleO
	y0gQ2oIpA3hpmJVITyX4M7jqy0aBgCCbcopTv8QNuNIA5geExhxMysP/zSu/dyFXn59p1V
	3HmNMpi0ABiccNfotuUqqf3Jceedcbo=
Subject: [PATCH v2 10/12] video/vesa: drop "vesa-mtrr" command line option
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <10607087-9216-c5fb-08cd-29090a80e402@suse.com>
Date: Thu, 27 May 2021 14:35:21 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Now that we use ioremap_wc() for mapping the frame buffer, there's no
need for this option anymore. As noted in the change introducing the
use of ioremap_wc(), mtrr_add() didn't work in certain cases anyway.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>

--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -6,9 +6,10 @@ The format is based on [Keep a Changelog
 
 ## [unstable UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=staging) - TBD
 
-### Removed
+### Removed / support downgraded
  - XENSTORED_ROOTDIR environment variable from configuartion files and
    initscripts, due to being unused.
+ - dropped support for the (x86-only) "vesa-mtrr" command line option
 
 ## [4.15.0 UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.15.0) - TBD
 
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2369,9 +2369,6 @@ cache-warming. 1ms (1000) has been measu
 ### vesa-map
 > `= <integer>`
 
-### vesa-mtrr
-> `= <integer>`
-
 ### vesa-ram
 > `= <integer>`
 
--- a/xen/arch/x86/setup.c
+++ b/xen/arch/x86/setup.c
@@ -1816,8 +1816,6 @@ void __init noreturn __start_xen(unsigne
 
     local_irq_enable();
 
-    vesa_mtrr_init();
-
     early_msi_init();
 
     iommu_setup();    /* setup iommu if available */
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -145,38 +145,6 @@ void __init vesa_init(void)
     video_puts = lfb_redraw_puts;
 }
 
-#include <asm/mtrr.h>
-
-static unsigned int vesa_mtrr;
-integer_param("vesa-mtrr", vesa_mtrr);
-
-void __init vesa_mtrr_init(void)
-{
-    static const int mtrr_types[] = {
-        0, MTRR_TYPE_UNCACHABLE, MTRR_TYPE_WRBACK,
-        MTRR_TYPE_WRCOMB, MTRR_TYPE_WRTHROUGH };
-    unsigned int size_total;
-    int rc, type;
-
-    if ( !lfb || (vesa_mtrr == 0) || (vesa_mtrr >= ARRAY_SIZE(mtrr_types)) )
-        return;
-
-    type = mtrr_types[vesa_mtrr];
-    if ( !type )
-        return;
-
-    /* Find the largest power-of-two */
-    size_total = vram_total;
-    while ( size_total & (size_total - 1) )
-        size_total &= size_total - 1;
-
-    /* Try and find a power of two to add */
-    do {
-        rc = mtrr_add(lfb_base(), size_total, type, 1);
-        size_total >>= 1;
-    } while ( (size_total >= PAGE_SIZE) && (rc == -EINVAL) );
-}
-
 static void lfb_flush(void)
 {
     __asm__ __volatile__ ("sfence" : : : "memory");
--- a/xen/include/asm-x86/setup.h
+++ b/xen/include/asm-x86/setup.h
@@ -25,10 +25,8 @@ void init_IRQ(void);
 
 #ifdef CONFIG_VIDEO
 void vesa_init(void);
-void vesa_mtrr_init(void);
 #else
 static inline void vesa_init(void) {};
-static inline void vesa_mtrr_init(void) {};
 #endif
 
 int construct_dom0(



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:36:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:36:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133147.248250 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFF9-0008IV-SJ; Thu, 27 May 2021 12:36:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133147.248250; Thu, 27 May 2021 12:36:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFF9-0008IO-Oc; Thu, 27 May 2021 12:36:07 +0000
Received: by outflank-mailman (input) for mailman id 133147;
 Thu, 27 May 2021 12:36:06 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFF8-0008Bu-BM
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:36:06 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id e29f1ad8-fc4f-433a-905e-58fd79203c5f;
 Thu, 27 May 2021 12:35:55 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 94EED218DD;
 Thu, 27 May 2021 12:35:54 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 7A26411A98;
 Thu, 27 May 2021 12:35:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e29f1ad8-fc4f-433a-905e-58fd79203c5f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622118954; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=JCk3FBjjK8M9z1jIUmNcbjwmgv39UQ+Tt+/agq7sJXk=;
	b=CwLdHgXTlzL4WPEEbXsEsixG1mmn4UZqKLjoaWkaBGf3DSCk/P8CmLo224FZA/2vi9m6ls
	TKO+TG4q4vYjIJZnsZCCWvsU3GZ7KKCwlyQCQChSb+INw2xvUcfU4+wCVyxrun6EvuRpIE
	QZ9vBsNlwEcQYdXTTEgfI0aQHwokkME=
Subject: [PATCH v2 11/12] video/vesa: drop "vesa-remap" command line option
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <bc1411cc-f920-a002-d64d-84ce1ebd080e@suse.com>
Date: Thu, 27 May 2021 14:35:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

If we get mode dimensions wrong, having the remapping size controllable
via command line option isn't going to help much. Drop the option.

While adjusting this also
- add __initdata to the variable,
- use ROUNDUP() instead of open-coding it.

Requested-by: Andrew Cooper <andrew.cooper3@citrix.com>
Signed-off-by: Jan Beulich <jbeulich@suse.com>

--- a/CHANGELOG.md
+++ b/CHANGELOG.md
@@ -9,7 +9,7 @@ The format is based on [Keep a Changelog
 ### Removed / support downgraded
  - XENSTORED_ROOTDIR environment variable from configuartion files and
    initscripts, due to being unused.
- - dropped support for the (x86-only) "vesa-mtrr" command line option
+ - dropped support for the (x86-only) "vesa-mtrr" and "vesa-remap" command line options
 
 ## [4.15.0 UNRELEASED](https://xenbits.xen.org/gitweb/?p=xen.git;a=shortlog;h=RELEASE-4.15.0) - TBD
 
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2366,9 +2366,6 @@ PCPUs when using the credit1 scheduler.
 of a VCPU between CPUs, and reduces the implicit overheads such as
 cache-warming. 1ms (1000) has been measured as a good value.
 
-### vesa-map
-> `= <integer>`
-
 ### vesa-ram
 > `= <integer>`
 
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -26,8 +26,7 @@ static bool_t vga_compat;
 static unsigned int vram_total;
 integer_param("vesa-ram", vram_total);
 
-static unsigned int vram_remap;
-integer_param("vesa-map", vram_remap);
+static unsigned int __initdata vram_remap;
 
 static int font_height;
 static int __init parse_font_height(const char *s)
@@ -79,12 +78,8 @@ void __init vesa_early_init(void)
      *                 use for vesafb.  With modern cards it is no
      *                 option to simply use vram_total as that
      *                 wastes plenty of kernel address space. */
-    vram_remap = (vram_remap ?
-                  (vram_remap << 20) :
-                  ((vram_vmode + (1 << L2_PAGETABLE_SHIFT) - 1) &
-                   ~((1 << L2_PAGETABLE_SHIFT) - 1)));
-    vram_remap = max_t(unsigned int, vram_remap, vram_vmode);
-    vram_remap = min_t(unsigned int, vram_remap, vram_total);
+    vram_remap = ROUNDUP(vram_vmode, 1 << L2_PAGETABLE_SHIFT);
+    vram_remap = min(vram_remap, vram_total);
 }
 
 void __init vesa_init(void)



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:36:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:36:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133157.248261 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFFu-0000Z2-A9; Thu, 27 May 2021 12:36:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133157.248261; Thu, 27 May 2021 12:36:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFFu-0000Yv-6d; Thu, 27 May 2021 12:36:54 +0000
Received: by outflank-mailman (input) for mailman id 133157;
 Thu, 27 May 2021 12:36:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFFs-0000Rp-EK
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:36:52 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e121168b-91ff-47b9-8d50-55f2917047ce;
 Thu, 27 May 2021 12:36:44 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 68EB02190B;
 Thu, 27 May 2021 12:36:43 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 4E09411A98;
 Thu, 27 May 2021 12:36:43 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e121168b-91ff-47b9-8d50-55f2917047ce
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622119003; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=MhpIoP1aaH4G2df5l8aytfLGQDfICiVkm9QqY6PH9WQ=;
	b=JOkSOdwks15QM8gFY5bUcX9p3IPMpCQ2Nr+zN+ZuOWWti45KYkFv/DaKE6PUR+KCLrdJrq
	1fZgcETH4CPMQiJPL3ikpklKHnV3JsHwhbfEIh8f6XniuuVTTv0T+ppDpy5Z71nUJ/GMdt
	pnQNmCZ8H3aqC7xb4uTVuyatTW/Tnmk=
Subject: [PATCH v2 12/12] video/vesa: adjust (not just) command line option
 handling
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Message-ID: <b7e01b63-c0e3-7cf9-8599-4e3453b4047c@suse.com>
Date: Thu, 27 May 2021 14:36:39 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Document the remaining option. Add section annotation to the variable
holding the parsed value as well as a few adjacent ones. Adjust the
types of font_height and vga_compat.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
Acked-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
v2: Re-base over added earlier patch.

--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2369,6 +2369,11 @@ cache-warming. 1ms (1000) has been measu
 ### vesa-ram
 > `= <integer>`
 
+> Default: `0`
+
+This allows overriding the amount of video RAM, in MiB, determined to be
+present.
+
 ### vga
 > `= ( ask | current | text-80x<rows> | gfx-<width>x<height>x<depth> | mode-<mode> )[,keep]`
 
--- a/xen/drivers/video/vesa.c
+++ b/xen/drivers/video/vesa.c
@@ -19,16 +19,16 @@
 
 static void lfb_flush(void);
 
-static unsigned char *lfb;
-static const struct font_desc *font;
-static bool_t vga_compat;
+static unsigned char *__read_mostly lfb;
+static const struct font_desc *__initdata font;
+static bool __initdata vga_compat;
 
-static unsigned int vram_total;
+static unsigned int __initdata vram_total;
 integer_param("vesa-ram", vram_total);
 
 static unsigned int __initdata vram_remap;
 
-static int font_height;
+static unsigned int __initdata font_height;
 static int __init parse_font_height(const char *s)
 {
     if ( simple_strtoul(s, &s, 10) == 8 && (*s++ == 'x') )



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:49:08 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:49:08 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133176.248285 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFRe-0002FC-I8; Thu, 27 May 2021 12:49:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133176.248285; Thu, 27 May 2021 12:49:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFRe-0002F5-FD; Thu, 27 May 2021 12:49:02 +0000
Received: by outflank-mailman (input) for mailman id 133176;
 Thu, 27 May 2021 12:49:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lmFRd-0002Ez-3t
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:49:01 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmFRc-0007nA-DB; Thu, 27 May 2021 12:49:00 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmFRc-00085G-6u; Thu, 27 May 2021 12:49:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=X4Zh/dC30NMhdSNeODepjxj3SVeV7Cnxn6uvTd/yxow=; b=gIrhj1CIYxMfzdVwi/f9hevkb+
	KJw1GBezu4xU+UIIeDWDhJebrB2v9M6gKaUd1AWaxHboj9IczSnvEcuMxjZo9FtKRcwj0L8X4/9+P
	7D/+YCtU5DkNTRRjDtVmyN8UZ9ukugNt1SbgmPJYsp63GWWgGNwUxl5dBz+5E6zirofs=;
Subject: Re: [PATCH v2 01/12] x86: introduce ioremap_wc()
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <20abac99-609c-f4f6-1242-c79919f4c317@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b8035805-4f44-18ce-f4cb-4ce1d3c594fc@xen.org>
Date: Thu, 27 May 2021 13:48:58 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20abac99-609c-f4f6-1242-c79919f4c317@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 27/05/2021 13:30, Jan Beulich wrote:
> In order for a to-be-introduced ERMS form of memcpy() to not regress
> boot performance on certain systems when video output is active, we
> first need to arrange for avoiding further dependency on firmware
> setting up MTRRs in a way we can actually further modify. On many
> systems, due to the continuously growing amounts of installed memory,
> MTRRs get configured with at least one huge WB range, and with MMIO
> ranges below 4Gb then forced to UC via overlapping MTRRs. mtrr_add(), as
> it is today, can't deal with such a setup. Hence on such systems we
> presently leave the frame buffer mapped UC, leading to significantly
> reduced performance when using REP STOSB / REP MOVSB.
> 
> On post-PentiumII hardware (i.e. any that's capable of running 64-bit
> code), an effective memory type of WC can be achieved without MTRRs, by
> simply referencing the respective PAT entry from the PTEs. While this
> will leave the switch to ERMS forms of memset() and memcpy() with
> largely unchanged performance, the change here on its own improves
> performance on affected systems quite significantly: Measuring just the
> individual affected memcpy() invocations yielded a speedup by a factor
> of over 250 on my initial (Skylake) test system. memset() isn't getting
> improved by as much there, but still by a factor of about 20.
> 
> While adding {__,}PAGE_HYPERVISOR_WC, also add {__,}PAGE_HYPERVISOR_WT
> to, at the very least, make clear what PTE flags this memory type uses.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: Mark ioremap_wc() __init.
> ---
> TBD: If the VGA range is WC in the fixed range MTRRs, reusing the low
>       1st Mb mapping (like ioremap() does) would be an option.
> 
> --- a/xen/arch/x86/mm.c
> +++ b/xen/arch/x86/mm.c
> @@ -5881,6 +5881,20 @@ void __iomem *ioremap(paddr_t pa, size_t
>       return (void __force __iomem *)va;
>   }
>   
> +void __iomem *__init ioremap_wc(paddr_t pa, size_t len)
> +{
> +    mfn_t mfn = _mfn(PFN_DOWN(pa));
> +    unsigned int offs = pa & (PAGE_SIZE - 1);
> +    unsigned int nr = PFN_UP(offs + len);
> +    void *va;
> +
> +    WARN_ON(page_is_ram_type(mfn_x(mfn), RAM_TYPE_CONVENTIONAL));
> +
> +    va = __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR_WC, VMAP_DEFAULT);
> +
> +    return (void __force __iomem *)(va + offs);
> +}

Arm already provides ioremap_wc() as a wrapper around ioremap_attr(). 
Can this be moved to common code to avoid the duplication?

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 27 12:49:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:49:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133178.248297 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFS0-0002h9-UE; Thu, 27 May 2021 12:49:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133178.248297; Thu, 27 May 2021 12:49:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFS0-0002h2-Ov; Thu, 27 May 2021 12:49:24 +0000
Received: by outflank-mailman (input) for mailman id 133178;
 Thu, 27 May 2021 12:49:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFS0-0002ge-2a
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:49:24 +0000
Received: from mail-pf1-x42d.google.com (unknown [2607:f8b0:4864:20::42d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2cb6396f-143e-4d30-9ff0-52ed772ddf1a;
 Thu, 27 May 2021 12:49:23 +0000 (UTC)
Received: by mail-pf1-x42d.google.com with SMTP id f22so537372pfn.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:49:23 -0700 (PDT)
Received: from mail-pf1-f173.google.com (mail-pf1-f173.google.com.
 [209.85.210.173])
 by smtp.gmail.com with ESMTPSA id x19sm1765169pgj.66.2021.05.27.05.49.21
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:49:21 -0700 (PDT)
Received: by mail-pf1-f173.google.com with SMTP id p39so490407pfw.8
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:49:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2cb6396f-143e-4d30-9ff0-52ed772ddf1a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=t9nQbEpq5kzp2vshpX/2KDY0MEdzNS47SlLJX4RAhzA=;
        b=XnDncQEfy0WK93TP5b5iaOvQy6p3tli5hK7U2lnqkRYfpimdp+9WScU0B/bVffV6lu
         KHvuv/cILlR7hwxjPwG4fSm8TLb0M0uiiQtfQGCy1lJiDeCNSlZdr9ro+4UoS7qK25VN
         Vwx6crrIG7NXb3xhmTS0AaKdDSprMjL64yAgA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=t9nQbEpq5kzp2vshpX/2KDY0MEdzNS47SlLJX4RAhzA=;
        b=GF+M6rtH0gCPc7HlgOvao3urIcOcRuPlUmEPPdeYP77P/QuszLc71gQs6+GSJPKV4B
         35MaKvUjGMbl8yZsGu/j20F+k3soRfcavB1rdMdXe24xsooFzj+khqAcVWx69kJ5MMZg
         BR7bNvLBieTEQHMB7E2J+Y6YST0cFr2DA6z700rJ2Lw2Yox6R6u3UNxYAIAP5e6l+hFJ
         nwEM8p0ykJ7s/wnE5maoHs/kpMtBTu0Zzl7lmw1ewqbi+ntQe8pt0b7oElnLHpOdEr6k
         EU9ksdW/v8KP4/9wLzta0vnH4gQXIlc4hAEzmP81L8d/OpJ3wE5+OMlvDsvMlTOWJxhi
         CK+A==
X-Gm-Message-State: AOAM530HIiHZ+wWqU5/0zC6DXhEldfp4sMZLUOcmmVZKV2BOnAHzUPQK
	xDV2ep6n846TObO2+eOhyn68zXyeFPUE0w==
X-Google-Smtp-Source: ABdhPJwcOoooqpJVylmkhZjQnlm1irgkO2N17Sw+W5Zt9spC+SnUo20D/T5CGiKAr1ze/jetG7EZdQ==
X-Received: by 2002:a63:e015:: with SMTP id e21mr3574227pgh.442.1622119762184;
        Thu, 27 May 2021 05:49:22 -0700 (PDT)
X-Received: by 2002:a92:2907:: with SMTP id l7mr2908573ilg.64.1622119750871;
 Thu, 27 May 2021 05:49:10 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-15-tientzu@chromium.org> <20210526121322.GA19313@willie-the-truck>
 <20210526155321.GA19633@willie-the-truck> <CALiNf2_sVXnb97++yWusB5PWz8Pzfn9bCKZc6z3tY4bx6-nW8w@mail.gmail.com>
 <20210527113456.GA22019@willie-the-truck>
In-Reply-To: <20210527113456.GA22019@willie-the-truck>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 27 May 2021 20:48:59 +0800
X-Gmail-Original-Message-ID: <CALiNf2_Qk5DmZSJO+jv=m5V-VFtmL9j0v66UY6qKmM-2pr3tRQ@mail.gmail.com>
Message-ID: <CALiNf2_Qk5DmZSJO+jv=m5V-VFtmL9j0v66UY6qKmM-2pr3tRQ@mail.gmail.com>
Subject: Re: [PATCH v7 14/15] dt-bindings: of: Add restricted DMA pool
To: Will Deacon <will@kernel.org>
Cc: heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com, 
	peterz@infradead.org, benh@kernel.crashing.org, 
	joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org, 
	chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org, 
	Frank Rowand <frowand.list@gmail.com>, mingo@kernel.org, sstabellini@kernel.org, 
	Saravana Kannan <saravanak@google.com>, mpe@ellerman.id.au, 
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, Christoph Hellwig <hch@lst.de>, 
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com, linux-pci@vger.kernel.org, 
	xen-devel@lists.xenproject.org, Thierry Reding <treding@nvidia.com>, 
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com, 
	linux-devicetree <devicetree@vger.kernel.org>, Jianxiong Gao <jxgao@google.com>, 
	Daniel Vetter <daniel@ffwll.ch>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	maarten.lankhorst@linux.intel.com, airlied@linux.ie, 
	Dan Williams <dan.j.williams@intel.com>, linuxppc-dev@lists.ozlabs.org, 
	jani.nikula@linux.intel.com, Rob Herring <robh+dt@kernel.org>, rodrigo.vivi@intel.com, 
	Bjorn Helgaas <bhelgaas@google.com>, boris.ostrovsky@oracle.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, jgross@suse.com, 
	Nicolas Boichat <drinkcat@chromium.org>, Greg KH <gregkh@linuxfoundation.org>, 
	Randy Dunlap <rdunlap@infradead.org>, lkml <linux-kernel@vger.kernel.org>, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	xypron.glpk@gmx.de, Robin Murphy <robin.murphy@arm.com>, bauerman@linux.ibm.com
Content-Type: text/plain; charset="UTF-8"

On Thu, May 27, 2021 at 7:35 PM Will Deacon <will@kernel.org> wrote:
>
> On Thu, May 27, 2021 at 07:29:20PM +0800, Claire Chang wrote:
> > On Wed, May 26, 2021 at 11:53 PM Will Deacon <will@kernel.org> wrote:
> > >
> > > On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> > > > On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > > > > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> > > > >             memory-region = <&multimedia_reserved>;
> > > > >             /* ... */
> > > > >     };
> > > > > +
> > > > > +   pcie_device: pcie_device@0,0 {
> > > > > +           memory-region = <&restricted_dma_mem_reserved>;
> > > > > +           /* ... */
> > > > > +   };
> > > >
> > > > I still don't understand how this works for individual PCIe devices -- how
> > > > is dev->of_node set to point at the node you have above?
> > > >
> > > > I tried adding the memory-region to the host controller instead, and then
> > > > I see it crop up in dmesg:
> > > >
> > > >   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> > > >
> > > > but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> > > > so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> > > >
> > > > What am I missing to make this work with PCIe devices?
> > >
> > > Aha, looks like we're just missing the logic to inherit the DMA
> > > configuration. The diff below gets things working for me.
> >
> > I guess what was missing is the reg property in the pcie_device node.
> > Will update the example dts.
>
> Thanks. I still think something like my diff makes sense, if you wouldn't mind including
> it, as it allows restricted DMA to be used for situations where the PCIe
> topology is not static.
>
> Perhaps we should prefer dev->of_node if it exists, but then use the node
> of the host bridge's parent node otherwise?

Sure. Let me add it in the next version.

>
> Will

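[Archive editor's note: for reference, a sketch of what the updated example dts might look like once the missing `reg` property is added, as discussed above. The exact `reg` cell encoding shown here (bus 0, device 0, function 0) is an assumption for illustration, not taken from the actual patch.]

```dts
/* Hypothetical completion of the binding example: a PCIe endpoint at
 * device 0, function 0 that references the restricted DMA pool. The
 * reg value encodes bus/device/function per the PCI bus binding and
 * is assumed here for illustration. */
pcie_device: pcie_device@0,0 {
        reg = <0x0 0x0 0x0 0x0 0x0>;
        memory-region = <&restricted_dma_mem_reserved>;
        /* ... */
};
```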

From xen-devel-bounces@lists.xenproject.org Thu May 27 12:53:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:53:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133188.248307 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFW9-0004Fz-DQ; Thu, 27 May 2021 12:53:41 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133188.248307; Thu, 27 May 2021 12:53:41 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFW9-0004Fs-AM; Thu, 27 May 2021 12:53:41 +0000
Received: by outflank-mailman (input) for mailman id 133188;
 Thu, 27 May 2021 12:53:40 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=Nal+=KW=kernel.org=will@srs-us1.protection.inumbo.net>)
 id 1lmFW8-0004Fg-CS
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:53:40 +0000
Received: from mail.kernel.org (unknown [198.145.29.99])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c214be2-be28-42e9-9a8a-0703fac292a8;
 Thu, 27 May 2021 12:53:39 +0000 (UTC)
Received: by mail.kernel.org (Postfix) with ESMTPSA id B984D6109F;
 Thu, 27 May 2021 12:53:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c214be2-be28-42e9-9a8a-0703fac292a8
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/simple; d=kernel.org;
	s=k20201202; t=1622120018;
	bh=mnHenbEzMbQdhDrGu816+dyeKu927RlHNDgNBf8bfNk=;
	h=Date:From:To:Cc:Subject:References:In-Reply-To:From;
	b=T2Q+xyFZywvya0fPBfvK4sRdbR/bde8Rq8P4FN21AACPVaQSQsdEc0x2+0v5r+abP
	 3/8E7TEJSHcMhmUgcHo1bhdAA9AfzEMLppzT6/0P8DePbBmTljtDkX4NEtYRaHLEM1
	 Gnn3YP4aa748IT6RrSs7VU9Pqy8c4Xvink86bWS+UgwCuEehm3hOhTqBJaoQ8WPXJq
	 ulYWtbHvYOw84jAR1SH3F2sv+RGThyTfyzG8N3saQSkF5BsBVg+8LW53nliUGD6EFQ
	 5/RBGn2iLBX38fmxUgOxq5fMNgVhtClQMCVzqiJ/tPC3LfS/b2F3cV49KGjPHavT/f
	 X+zziZDTtPkzw==
Date: Thu, 27 May 2021 13:53:28 +0100
From: Will Deacon <will@kernel.org>
To: Claire Chang <tientzu@chromium.org>
Cc: heikki.krogerus@linux.intel.com, thomas.hellstrom@linux.intel.com,
	peterz@infradead.org, benh@kernel.crashing.org,
	joonas.lahtinen@linux.intel.com, dri-devel@lists.freedesktop.org,
	chris@chris-wilson.co.uk, grant.likely@arm.com, paulus@samba.org,
	Frank Rowand <frowand.list@gmail.com>, mingo@kernel.org,
	sstabellini@kernel.org, Saravana Kannan <saravanak@google.com>,
	mpe@ellerman.id.au,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	Christoph Hellwig <hch@lst.de>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>, bskeggs@redhat.com,
	linux-pci@vger.kernel.org, xen-devel@lists.xenproject.org,
	Thierry Reding <treding@nvidia.com>,
	intel-gfx@lists.freedesktop.org, matthew.auld@intel.com,
	linux-devicetree <devicetree@vger.kernel.org>,
	Jianxiong Gao <jxgao@google.com>, Daniel Vetter <daniel@ffwll.ch>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	maarten.lankhorst@linux.intel.com, airlied@linux.ie,
	Dan Williams <dan.j.williams@intel.com>,
	linuxppc-dev@lists.ozlabs.org, jani.nikula@linux.intel.com,
	Rob Herring <robh+dt@kernel.org>, rodrigo.vivi@intel.com,
	Bjorn Helgaas <bhelgaas@google.com>, boris.ostrovsky@oracle.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	jgross@suse.com, Nicolas Boichat <drinkcat@chromium.org>,
	Greg KH <gregkh@linuxfoundation.org>,
	Randy Dunlap <rdunlap@infradead.org>,
	lkml <linux-kernel@vger.kernel.org>,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, xypron.glpk@gmx.de,
	Robin Murphy <robin.murphy@arm.com>, bauerman@linux.ibm.com
Subject: Re: [PATCH v7 14/15] dt-bindings: of: Add restricted DMA pool
Message-ID: <20210527125328.GA22352@willie-the-truck>
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-15-tientzu@chromium.org>
 <20210526121322.GA19313@willie-the-truck>
 <20210526155321.GA19633@willie-the-truck>
 <CALiNf2_sVXnb97++yWusB5PWz8Pzfn9bCKZc6z3tY4bx6-nW8w@mail.gmail.com>
 <20210527113456.GA22019@willie-the-truck>
 <CALiNf2_Qk5DmZSJO+jv=m5V-VFtmL9j0v66UY6qKmM-2pr3tRQ@mail.gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <CALiNf2_Qk5DmZSJO+jv=m5V-VFtmL9j0v66UY6qKmM-2pr3tRQ@mail.gmail.com>
User-Agent: Mutt/1.10.1 (2018-07-13)

On Thu, May 27, 2021 at 08:48:59PM +0800, Claire Chang wrote:
> On Thu, May 27, 2021 at 7:35 PM Will Deacon <will@kernel.org> wrote:
> >
> > On Thu, May 27, 2021 at 07:29:20PM +0800, Claire Chang wrote:
> > > On Wed, May 26, 2021 at 11:53 PM Will Deacon <will@kernel.org> wrote:
> > > >
> > > > On Wed, May 26, 2021 at 01:13:22PM +0100, Will Deacon wrote:
> > > > > On Tue, May 18, 2021 at 02:42:14PM +0800, Claire Chang wrote:
> > > > > > @@ -138,4 +160,9 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
> > > > > >             memory-region = <&multimedia_reserved>;
> > > > > >             /* ... */
> > > > > >     };
> > > > > > +
> > > > > > +   pcie_device: pcie_device@0,0 {
> > > > > > +           memory-region = <&restricted_dma_mem_reserved>;
> > > > > > +           /* ... */
> > > > > > +   };
> > > > >
> > > > > I still don't understand how this works for individual PCIe devices -- how
> > > > > is dev->of_node set to point at the node you have above?
> > > > >
> > > > > I tried adding the memory-region to the host controller instead, and then
> > > > > I see it crop up in dmesg:
> > > > >
> > > > >   | pci-host-generic 40000000.pci: assigned reserved memory node restricted_dma_mem_reserved
> > > > >
> > > > > but none of the actual PCI devices end up with 'dma_io_tlb_mem' set, and
> > > > > so the restricted DMA area is not used. In fact, swiotlb isn't used at all.
> > > > >
> > > > > What am I missing to make this work with PCIe devices?
> > > >
> > > > Aha, looks like we're just missing the logic to inherit the DMA
> > > > configuration. The diff below gets things working for me.
> > >
> > > I guess what was missing is the reg property in the pcie_device node.
> > > Will update the example dts.
> >
> > Thanks. I still think something like my diff makes sense, if you wouldn't mind including
> > it, as it allows restricted DMA to be used for situations where the PCIe
> > topology is not static.
> >
> > Perhaps we should prefer dev->of_node if it exists, but then use the node
> > of the host bridge's parent node otherwise?
> 
> Sure. Let me add it in the next version.

Brill, thanks! I'll take it for a spin once it lands on the list.

Will
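
[Archive editor's note: the fallback Will describes above (prefer dev->of_node when it exists, otherwise use the OF node of the host bridge's parent) could look roughly like the following kernel-style sketch. The function name restricted_dma_get_node() is hypothetical; this only illustrates the lookup order under discussion, not the actual diff.]

```c
#include <linux/device.h>
#include <linux/of.h>
#include <linux/pci.h>

/*
 * Hypothetical sketch of the lookup order discussed in this thread:
 * use the device's own of_node when present; for a PCI device without
 * one, fall back to the OF node of the host bridge's parent so that
 * restricted DMA can work when the PCIe topology is not static.
 * Illustration only, not the actual patch.
 */
static struct device_node *restricted_dma_get_node(struct device *dev)
{
	struct device_node *node = dev->of_node;

	if (!node && dev_is_pci(dev)) {
		struct pci_host_bridge *bridge =
			pci_find_host_bridge(to_pci_dev(dev)->bus);

		if (bridge && bridge->dev.parent)
			node = bridge->dev.parent->of_node;
	}

	return node;
}
```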


From xen-devel-bounces@lists.xenproject.org Thu May 27 12:55:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:55:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133196.248319 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFXO-0004ra-PZ; Thu, 27 May 2021 12:54:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133196.248319; Thu, 27 May 2021 12:54:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFXO-0004rT-M3; Thu, 27 May 2021 12:54:58 +0000
Received: by outflank-mailman (input) for mailman id 133196;
 Thu, 27 May 2021 12:54:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmFXN-0004rJ-29; Thu, 27 May 2021 12:54:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmFXM-0007tF-OL; Thu, 27 May 2021 12:54:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmFXM-0004BX-BC; Thu, 27 May 2021 12:54:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmFXM-00066X-Ai; Thu, 27 May 2021 12:54:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Z0wJFsDKQeikxCP9reVYsiNdAlemnTS9cmIGqadZsHA=; b=sV91zPTkKQt/6KpK95A2yBrf0S
	A+cWQlAbGzGZcf00INgzpIMJcrcRIA2mA3yxNTgEgC5YsuFBAebGUg7KTay+Ys4Me/M9du0Wt6vSN
	m6eZ15bUy+ZQ6XBt8N3ZOnVv/FGimOG6n8hM+xkEmgcraTjbkmZGvjaNj4Ux/2wYygY0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162172-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162172: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-debianhvm-amd64:guest-localmigrate/x10:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7c110dd335a17be52549dc4b9dfbfba8165ade40
X-Osstest-Versions-That:
    xen=3092006fc4e096a7eebb8042cb76d82b09ccece4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 12:54:56 +0000

flight 162172 xen-unstable real [real]
flight 162222 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162172/
http://logs.test-lab.xenproject.org/osstest/logs/162222/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-amd64 18 guest-localmigrate/x10 fail pass in 162222-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162164
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162164
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162164
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162164
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162164
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162164
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162164
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162164
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162164
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162164
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162164
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7c110dd335a17be52549dc4b9dfbfba8165ade40
baseline version:
 xen                  3092006fc4e096a7eebb8042cb76d82b09ccece4

Last test of basis   162164  2021-05-26 09:53:43 Z    1 days
Testing same since   162172  2021-05-26 19:39:08 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Roger Pau Monné <rogerpau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3092006fc4..7c110dd335  7c110dd335a17be52549dc4b9dfbfba8165ade40 -> master


From xen-devel-bounces@lists.xenproject.org Thu May 27 12:58:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:58:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133211.248357 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbG-0005u8-QD; Thu, 27 May 2021 12:58:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133211.248357; Thu, 27 May 2021 12:58:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbG-0005u1-N8; Thu, 27 May 2021 12:58:58 +0000
Received: by outflank-mailman (input) for mailman id 133211;
 Thu, 27 May 2021 12:58:57 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFbF-0005tv-62
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:58:57 +0000
Received: from mail-pl1-x630.google.com (unknown [2607:f8b0:4864:20::630])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id afa3eacc-7175-4cc7-9ea2-7dac1923583f;
 Thu, 27 May 2021 12:58:56 +0000 (UTC)
Received: by mail-pl1-x630.google.com with SMTP id 69so2295623plc.5
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:58:56 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id e7sm2084005pfl.171.2021.05.27.05.58.48
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:58:54 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afa3eacc-7175-4cc7-9ea2-7dac1923583f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=kA+3MDeoUW6g7BAIQhmMZU6jpRRh7iFmQAMuxd9InLo=;
        b=N/RAF9cNvTcRd3iZmeYm87pVaO7FRy7mf9/WxIyCft3/KEf6OSpP1KzfVNdxCB6y/v
         f9rFlfhM52r7diCBgVLUZz2KQytMc21laqchIO1r5VInxr59Sd+7C4c0jiL2C5LzFLDG
         Gn+MKH5XFhuZrfNlpbnQvXijnbOOZUN/Rm2WA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=kA+3MDeoUW6g7BAIQhmMZU6jpRRh7iFmQAMuxd9InLo=;
        b=NnI4kCz/fd2ekkFOzD5++9BfHDqCRpZDhvubk238Iw/s3SixkglnXXh5SkXY4s7rXc
         BkI7PsAxgakplcXS6boBhSKRu993RVpVlNKRWtGFPr4y1e+Ys1g/8/ghLhs02ir9zoIl
         L1RM7dVjT+O75tkgh84MEp1z/XKXlH1qqXAadzp4fk9z/yl3ObldNm3rP+3FHreC0TUO
         GukWcj9/uzGI6B5DQboU9F6L+8k0/w36qjOhWgXVs1JwtX/vLc0/zKQ94cqe7TS0JCAZ
         MyF7LQzNMqlfx88d86p6Z/on2WK3ROS7bcmZcDHQYQbS5hZOZSDAzKIaeOfdljtmTbCL
         GwdA==
X-Gm-Message-State: AOAM531yndg0xPjq8GTcCgoBhz4KteLSMhXgtOEXOUPhQZH1ne5xBluj
	OC/OwN7mJrNYdGNehcsgjKb7fQ==
X-Google-Smtp-Source: ABdhPJxa0KAbUWt2GVQAhxJJasbt6KcrMvbewz9fOfHqfZGpT+rFwlvdT6CpO+Xk/o30NYlBhsWhJg==
X-Received: by 2002:a17:90a:b22:: with SMTP id 31mr9588913pjq.179.1622120335442;
        Thu, 27 May 2021 05:58:55 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 00/15] Restricted DMA
Date: Thu, 27 May 2021 20:58:30 +0800
Message-Id: <20210527125845.1852284-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

This series implements mitigations for the lack of DMA access control on
systems without an IOMMU, where a device could otherwise access system
memory at unexpected times and/or unexpected addresses, possibly leading
to data leakage or corruption.

For example, we plan to use the PCIe bus for Wi-Fi, and that PCIe bus is
not behind an IOMMU. Since PCIe, by design, gives the device full access to
system memory, a vulnerability in the Wi-Fi firmware could easily escalate
to a full system exploit (see the remote Wi-Fi exploits [1a] and [1b],
which demonstrate a full exploit chain, as well as [2] and [3]).

To mitigate the security concerns, we introduce restricted DMA. Restricted
DMA uses the existing swiotlb to bounce streaming DMA in and out of a
specially allocated region, and allocates coherent DMA memory from the same
region. On its own, the feature provides a basic level of protection
against the DMA overwriting buffer contents at unexpected times. However,
to protect against general data leakage and system memory corruption, the
system also needs a way to restrict the DMA to a predefined memory region
(this is usually done at the firmware level, e.g. with the MPU in ATF on
some Arm platforms [4]).

[1a] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_4.html
[1b] https://googleprojectzero.blogspot.com/2017/04/over-air-exploiting-broadcoms-wi-fi_11.html
[2] https://blade.tencent.com/en/advisories/qualpwn/
[3] https://www.bleepingcomputer.com/news/security/vulnerabilities-found-in-highly-popular-firmware-for-wifi-chips/
[4] https://github.com/ARM-software/arm-trusted-firmware/blob/master/plat/mediatek/mt8183/drivers/emi_mpu/emi_mpu.c#L132
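For reference, the new binding added later in the series lets a platform
carve out such a region in the device tree. A minimal sketch (node name,
addresses and sizes here are made up for illustration, not taken from the
patches) could look like:

```
	reserved-memory {
		#address-cells = <2>;
		#size-cells = <2>;
		ranges;

		/* Hypothetical example: label, base address and size
		 * are illustrative only. */
		wifi_restricted_dma: restricted-dma-pool@50000000 {
			compatible = "restricted-dma-pool";
			reg = <0x0 0x50000000 0x0 0x400000>;
		};
	};
```

A device node would then reference the pool via a memory-region phandle,
e.g. memory-region = <&wifi_restricted_dma>;, so that all of that device's
streaming DMA is bounced through, and its coherent memory allocated from,
this region.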

v8:
- Fix reserved-memory.txt and add the reg property in example.
- Fix sizeof for of_property_count_elems_of_size in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Apply Will's suggestion to try the OF node having DMA configuration in
  drivers/of/address.c#of_dma_set_restricted_buffer.
- Fix typo in the comment of drivers/of/address.c#of_dma_set_restricted_buffer.
- Add error message for PageHighMem in
  kernel/dma/swiotlb.c#rmem_swiotlb_device_init and move it to
  rmem_swiotlb_setup.
- Fix the message string in rmem_swiotlb_setup.

v7:
Fix debugfs, PageHighMem and comment style in rmem_swiotlb_device_init
https://lore.kernel.org/patchwork/cover/1431031/

v6:
Address the comments in v5
https://lore.kernel.org/patchwork/cover/1423201/

v5:
Rebase on latest linux-next
https://lore.kernel.org/patchwork/cover/1416899/

v4:
- Fix spinlock bad magic
- Use rmem->name for debugfs entry
- Address the comments in v3
https://lore.kernel.org/patchwork/cover/1378113/

v3:
Using only one reserved memory region for both streaming DMA and memory
allocation.
https://lore.kernel.org/patchwork/cover/1360992/

v2:
Building on top of swiotlb.
https://lore.kernel.org/patchwork/cover/1280705/

v1:
Using dma_map_ops.
https://lore.kernel.org/patchwork/cover/1271660/

Claire Chang (15):
  swiotlb: Refactor swiotlb init functions
  swiotlb: Refactor swiotlb_create_debugfs
  swiotlb: Add DMA_RESTRICTED_POOL
  swiotlb: Add restricted DMA pool initialization
  swiotlb: Add a new get_io_tlb_mem getter
  swiotlb: Update is_swiotlb_buffer to add a struct device argument
  swiotlb: Update is_swiotlb_active to add a struct device argument
  swiotlb: Bounce data from/to restricted DMA pool if available
  swiotlb: Move alloc_size to find_slots
  swiotlb: Refactor swiotlb_tbl_unmap_single
  dma-direct: Add a new wrapper __dma_direct_free_pages()
  swiotlb: Add restricted DMA alloc/free support.
  dma-direct: Allocate memory from restricted DMA pool if available
  dt-bindings: of: Add restricted DMA pool
  of: Add plumbing for restricted DMA pool

 .../reserved-memory/reserved-memory.txt       |  36 ++-
 drivers/gpu/drm/i915/gem/i915_gem_internal.c  |   2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c         |   2 +-
 drivers/iommu/dma-iommu.c                     |  12 +-
 drivers/of/address.c                          |  25 ++
 drivers/of/device.c                           |   3 +
 drivers/of/of_private.h                       |   5 +
 drivers/pci/xen-pcifront.c                    |   2 +-
 drivers/xen/swiotlb-xen.c                     |   2 +-
 include/linux/device.h                        |   4 +
 include/linux/swiotlb.h                       |  41 ++-
 kernel/dma/Kconfig                            |  14 +
 kernel/dma/direct.c                           |  63 +++--
 kernel/dma/direct.h                           |   9 +-
 kernel/dma/swiotlb.c                          | 242 +++++++++++++-----
 15 files changed, 362 insertions(+), 100 deletions(-)

-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:59:07 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:59:07 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133212.248368 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbP-0006Fg-45; Thu, 27 May 2021 12:59:07 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133212.248368; Thu, 27 May 2021 12:59:07 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbP-0006FV-00; Thu, 27 May 2021 12:59:07 +0000
Received: by outflank-mailman (input) for mailman id 133212;
 Thu, 27 May 2021 12:59:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFbN-0006Dj-I3
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:59:05 +0000
Received: from mail-pg1-x52a.google.com (unknown [2607:f8b0:4864:20::52a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 266a6cdf-f83c-415f-b166-7766a8c43566;
 Thu, 27 May 2021 12:59:04 +0000 (UTC)
Received: by mail-pg1-x52a.google.com with SMTP id v14so3630654pgi.6
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:59:04 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id o17sm1723677pjp.33.2021.05.27.05.58.56
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:59:03 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 266a6cdf-f83c-415f-b166-7766a8c43566
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=TTF4vDbD9vQ1KC1jLpuRC6SYDBZGWhbxeMIB/CCcXbM=;
        b=Ql518PoKOfY5r4efBKvH9DmAfXHTzOTlD7cQCPHv4pwiDJMplS9HGrBounYOm/uHrf
         /J12Wcg/AjGjzAJHhjxJJIQiFQkUIqzSfolO8gn7+iXITH1Ou9qg/tKyKRsiVDL/Q70X
         wZNWq4zxBO11T+ZAQUrcXnBAK+2h+MWriKags=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=TTF4vDbD9vQ1KC1jLpuRC6SYDBZGWhbxeMIB/CCcXbM=;
        b=PQzpJPE0MPYlO6CHj2DdEEDdPR+qQkZmePFhXZf6IGirGeQQd7J3S5r+rZk74x6vV5
         69LatfpG0ANC5z2e1wD6n3Ih5opI7s9e57fFj1NZJMAgcRigE2a3HtsU8HQAOZU4bNwM
         zyQ9ngpD5/ZTSWbQdvSqa6vm4i3aXP9mg2BsJpDzwpR9hYo10zccixzK8XjxR5tu/xeB
         HqMkpVRgFcIeKGRZfhRAl5CiCHZeZYSLR2/ewmyjfDR1pAGb/eNGNVyjOhAkQFklZAIK
         DlYRj/RjtrwacrY5EU8UUe5QUKD6LBBXySB4FYp0rW3he5kutCNbULRFwa4npSxz4M+I
         uDOw==
X-Gm-Message-State: AOAM53164mMw4M1udrRb5FhMglqsBVcTE18683s4ZNxgrU2nGjh3v0w9
	yUSXId7qi5MU5YxnMEbaEnk6UQ==
X-Google-Smtp-Source: ABdhPJyp2dRY1SU48NJaQCKs+dJX/2XsN+WDKmaiBpTycmpdiWBGW8lmfuldLoK+B+T5ST+S6ZG9/g==
X-Received: by 2002:a63:f248:: with SMTP id d8mr3579481pgk.219.1622120343937;
        Thu, 27 May 2021 05:59:03 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 01/15] swiotlb: Refactor swiotlb init functions
Date: Thu, 27 May 2021 20:58:31 +0800
Message-Id: <20210527125845.1852284-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, swiotlb_init_io_tlb_mem, that performs the common
io_tlb_mem struct initialization, so the code can be reused.

Note that we now also call set_memory_decrypted in swiotlb_init_with_tbl.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 51 ++++++++++++++++++++++----------------------
 1 file changed, 25 insertions(+), 26 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..d3232fc19385 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -168,9 +168,30 @@ void __init swiotlb_update_mem_attributes(void)
 	memset(vaddr, 0, bytes);
 }
 
-int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+static void swiotlb_init_io_tlb_mem(struct io_tlb_mem *mem, phys_addr_t start,
+				    unsigned long nslabs, bool late_alloc)
 {
+	void *vaddr = phys_to_virt(start);
 	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
+
+	mem->nslabs = nslabs;
+	mem->start = start;
+	mem->end = mem->start + bytes;
+	mem->index = 0;
+	mem->late_alloc = late_alloc;
+	spin_lock_init(&mem->lock);
+	for (i = 0; i < mem->nslabs; i++) {
+		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
+		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
+		mem->slots[i].alloc_size = 0;
+	}
+
+	set_memory_decrypted((unsigned long)vaddr, bytes >> PAGE_SHIFT);
+	memset(vaddr, 0, bytes);
+}
+
+int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
+{
 	struct io_tlb_mem *mem;
 	size_t alloc_size;
 
@@ -186,16 +207,8 @@ int __init swiotlb_init_with_tbl(char *tlb, unsigned long nslabs, int verbose)
 	if (!mem)
 		panic("%s: Failed to allocate %zu bytes align=0x%lx\n",
 		      __func__, alloc_size, PAGE_SIZE);
-	mem->nslabs = nslabs;
-	mem->start = __pa(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
+
+	swiotlb_init_io_tlb_mem(mem, __pa(tlb), nslabs, false);
 
 	io_tlb_default_mem = mem;
 	if (verbose)
@@ -282,7 +295,6 @@ swiotlb_late_init_with_default_size(size_t default_size)
 int
 swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 {
-	unsigned long bytes = nslabs << IO_TLB_SHIFT, i;
 	struct io_tlb_mem *mem;
 
 	if (swiotlb_force == SWIOTLB_NO_FORCE)
@@ -297,20 +309,7 @@ swiotlb_late_init_with_tbl(char *tlb, unsigned long nslabs)
 	if (!mem)
 		return -ENOMEM;
 
-	mem->nslabs = nslabs;
-	mem->start = virt_to_phys(tlb);
-	mem->end = mem->start + bytes;
-	mem->index = 0;
-	mem->late_alloc = 1;
-	spin_lock_init(&mem->lock);
-	for (i = 0; i < mem->nslabs; i++) {
-		mem->slots[i].list = IO_TLB_SEGSIZE - io_tlb_offset(i);
-		mem->slots[i].orig_addr = INVALID_PHYS_ADDR;
-		mem->slots[i].alloc_size = 0;
-	}
-
-	set_memory_decrypted((unsigned long)tlb, bytes >> PAGE_SHIFT);
-	memset(tlb, 0, bytes);
+	swiotlb_init_io_tlb_mem(mem, virt_to_phys(tlb), nslabs, true);
 
 	io_tlb_default_mem = mem;
 	swiotlb_print_info();
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:59:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:59:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133217.248379 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbg-0006tt-Df; Thu, 27 May 2021 12:59:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133217.248379; Thu, 27 May 2021 12:59:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbg-0006tm-A8; Thu, 27 May 2021 12:59:24 +0000
Received: by outflank-mailman (input) for mailman id 133217;
 Thu, 27 May 2021 12:59:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFbe-0006ac-5s
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:59:22 +0000
Received: from mail-pl1-x62f.google.com (unknown [2607:f8b0:4864:20::62f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b9e83e38-63e4-44b7-9fac-aa81e9d146ff;
 Thu, 27 May 2021 12:59:13 +0000 (UTC)
Received: by mail-pl1-x62f.google.com with SMTP id u7so2294266plq.4
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:59:13 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id r10sm2150201pga.48.2021.05.27.05.59.05
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:59:12 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b9e83e38-63e4-44b7-9fac-aa81e9d146ff
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=FBa4WmaBqBXHvL8MxFx8fqbv9G1UK9TFNo0t6ZyBRps=;
        b=H8QKHfGZsWxCHEy5cA1tQcGKb44EP5b2zAEgNghokxkbg+whP74qNT9pCynan1YB4C
         ODR1VCmnmaAcY81RiH4rrdh2ogtLXiV4eZN6Qr0E0GHPsI2d+SkbVqT8pWRXMc0PJZWW
         P9APiIrxJsEIWE0eMgqeltDq17QR2UGVMSiSA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=FBa4WmaBqBXHvL8MxFx8fqbv9G1UK9TFNo0t6ZyBRps=;
        b=cRyxKVKtKWdmbdINqbXL6qfKUDxgmZXU5/kk2AkTXyV9e/I0fGFxM9zk6tlgiEm6jh
         L3/JGmD/c3tAvsf4UzWEOZqEG+5ljluFM3UKf5k5AYhHSTNczI97k7Y/oETBwW8yCZGM
         RtVaDh9dXjylQKd0gWL+WsdkpLNPQ8UhtZ4mJZ0QZmy06/q2PpTDaMnmQbuX8n8vZ8a8
         MltHejvLWqkcVyHv9jpTZOxR9aMkK/hqOYh/aJgo9KLUROoTxFJohpQjWbMzRIwKBhz5
         0vQ1FpeG8p4rSsWemX7v6dd4Uz59RTC5yfqG9tcmUsQiAQj3FWW/pHF5jcpxRKRsV35E
         0X1A==
X-Gm-Message-State: AOAM531Ncb1aPr76QRjN96JrmnVUAH1bWcv+YmsMboOwxyzvHdyO8x1y
	kgo3v4L3V26Xp/pii0h8xc4Zaw==
X-Google-Smtp-Source: ABdhPJwy7gpYKip34XTwi+RiwqWb6k5jmIQ4n8vVEEugdkMYxDsCVe/LaGvOwoAOLDbCetdBwhFgLQ==
X-Received: by 2002:a17:90b:190b:: with SMTP id mp11mr9025962pjb.77.1622120353025;
        Thu, 27 May 2021 05:59:13 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 02/15] swiotlb: Refactor swiotlb_create_debugfs
Date: Thu, 27 May 2021 20:58:32 +0800
Message-Id: <20210527125845.1852284-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Split the debugfs creation to make the code reusable for supporting
different bounce buffer pools, e.g. restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d3232fc19385..b849b01a446f 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -64,6 +64,7 @@
 enum swiotlb_force swiotlb_force;
 
 struct io_tlb_mem *io_tlb_default_mem;
+static struct dentry *debugfs_dir;
 
 /*
  * Max segment that we can provide which (if pages are contingous) will
@@ -662,18 +663,30 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
 #ifdef CONFIG_DEBUG_FS
 
-static int __init swiotlb_create_debugfs(void)
+static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
-
 	if (!mem)
-		return 0;
-	mem->debugfs = debugfs_create_dir("swiotlb", NULL);
+		return;
+
+	mem->debugfs = debugfs_create_dir(name, debugfs_dir);
 	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
 	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
+}
+
+static int __init swiotlb_create_default_debugfs(void)
+{
+	struct io_tlb_mem *mem = io_tlb_default_mem;
+
+	if (mem) {
+		swiotlb_create_debugfs(mem, "swiotlb");
+		debugfs_dir = mem->debugfs;
+	} else {
+		debugfs_dir = debugfs_create_dir("swiotlb", NULL);
+	}
+
 	return 0;
 }
 
-late_initcall(swiotlb_create_debugfs);
+late_initcall(swiotlb_create_default_debugfs);
 
 #endif
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:59:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:59:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133218.248390 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbk-0007IQ-Kv; Thu, 27 May 2021 12:59:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133218.248390; Thu, 27 May 2021 12:59:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbk-0007IG-Ha; Thu, 27 May 2021 12:59:28 +0000
Received: by outflank-mailman (input) for mailman id 133218;
 Thu, 27 May 2021 12:59:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFbj-0006ac-5w
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:59:27 +0000
Received: from mail-pg1-x52a.google.com (unknown [2607:f8b0:4864:20::52a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 604db2b4-a7af-4f19-9aa1-0108a4c225df;
 Thu, 27 May 2021 12:59:22 +0000 (UTC)
Received: by mail-pg1-x52a.google.com with SMTP id 29so3618019pgu.11
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:59:22 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id m14sm2012639pff.17.2021.05.27.05.59.14
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:59:21 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 604db2b4-a7af-4f19-9aa1-0108a4c225df
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=v5uDNYG4BjZ/8joAD3lImi71MufKa1LW825i4nd7Nkw=;
        b=E61Ri/F39WkjFWJqAXjGFvQPED/h2WAUJXs5JDDaCoUA1omO3EVyPXt2Y2hwAfXcjs
         MTmmUwzdXmE7AXrGKnYXFwjizScT15arM61lsHXOY8uoU6WtnTGi/ZNZ6/f2x5v6f4yj
         nZWUxmDPBg+tGToE82wIiDUeuR5LEBQHZ4KOI=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=v5uDNYG4BjZ/8joAD3lImi71MufKa1LW825i4nd7Nkw=;
        b=XmO2Q7c/kWF+M11cNkeZyydli5DBgx6+8TbJtRdsGxc241FWyI+W8APm92drREmsgY
         SmSwyewn4Ffda8Y0jufjxRGxHCgqQMRXE6LlwlcPRtCA+PkxVdg7kWM+eIymqBlUlLsz
         lT4F1QHvgbOpJ27a4tYrpCyZ11CJPudi2iXMUwO0oQIlEGEm1tR/TW8LPGyya7oTTQt8
         kvg0sKUES5cygvGDumBSxUNiKfV3R7xKop9/wcRT3GBgLm02jQnyHoxStCYOZWABTiU3
         euOonbH0o6py24Kx5K8gBM0Y728D/dLTOEiGjc1SLGc1F52TEEsMpP0SwTJjSZ6Z/4Tm
         yg/g==
X-Gm-Message-State: AOAM533ELK5GQHWa6mkViBW+NP5llWnBc5T7YF18Gd2HIYQ1hMgPOaxz
	/3mI4ruqMeI/r6K9/pZB2njxow==
X-Google-Smtp-Source: ABdhPJxENRpAFf3CkWut6vuXYNlsvVQFU1tdVBb0uZip+87V32FWXRIoYRditOw7sBaKZrr0nh2u9g==
X-Received: by 2002:a05:6a00:134b:b029:2bf:2c30:ebbd with SMTP id k11-20020a056a00134bb02902bf2c30ebbdmr3639373pfu.74.1622120361576;
        Thu, 27 May 2021 05:59:21 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 03/15] swiotlb: Add DMA_RESTRICTED_POOL
Date: Thu, 27 May 2021 20:58:33 +0800
Message-Id: <20210527125845.1852284-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new kconfig symbol, DMA_RESTRICTED_POOL, for restricted DMA pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/Kconfig | 14 ++++++++++++++
 1 file changed, 14 insertions(+)

diff --git a/kernel/dma/Kconfig b/kernel/dma/Kconfig
index 77b405508743..3e961dc39634 100644
--- a/kernel/dma/Kconfig
+++ b/kernel/dma/Kconfig
@@ -80,6 +80,20 @@ config SWIOTLB
 	bool
 	select NEED_DMA_MAP_STATE
 
+config DMA_RESTRICTED_POOL
+	bool "DMA Restricted Pool"
+	depends on OF && OF_RESERVED_MEM
+	select SWIOTLB
+	help
+	  This enables support for restricted DMA pools which provide a level of
+	  DMA memory protection on systems with limited hardware protection
+	  capabilities, such as those lacking an IOMMU.
+
+	  For more information see
+	  <Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt>
+	  and <kernel/dma/swiotlb.c>.
+	  If unsure, say "n".
+
 #
 # Should be selected if we can mmap non-coherent mappings to userspace.
 # The only thing that is really required is a way to set an uncached bit
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:59:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133220.248401 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbp-0007fQ-1M; Thu, 27 May 2021 12:59:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133220.248401; Thu, 27 May 2021 12:59:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFbo-0007fI-Td; Thu, 27 May 2021 12:59:32 +0000
Received: by outflank-mailman (input) for mailman id 133220;
 Thu, 27 May 2021 12:59:32 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFbo-0006ac-67
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:59:32 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 37934301-fdfc-413f-be64-21feac0c4318;
 Thu, 27 May 2021 12:59:31 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id
 b15-20020a17090a550fb029015dad75163dso370535pji.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:59:31 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id b1sm2283938pgf.84.2021.05.27.05.59.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:59:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 37934301-fdfc-413f-be64-21feac0c4318
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=T4AlVMpaxoBirwNTcqLmDmgFXiPVS0VLyb+9i3p+imI=;
        b=CvfRxJ5V9gKk6u6YLWcJx5gnvIl9KADGaY68BVyoOTlxsZIYBwmCIUy6cy/EkJ5V39
         BIErMCSgOF8GthMo65IN/6rS1CSzvq2iqmoBrtsHAQg9YFDtsGD6qxVIcoystXKVmg4Y
         0Pj4mBD1pyn4QIUFHHuIi/lzWBR/yBkH4GJaE=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=T4AlVMpaxoBirwNTcqLmDmgFXiPVS0VLyb+9i3p+imI=;
        b=d+F8SdBxi7bcK3pmbnYzZ4zEbjJoIuYcTTkfD427zPGty4IFM/gwU5aVsZpZceQp99
         HncTwks0EQysN1gGfdIFf0iR2Bzt/oX2VnOsx11ycm5kN47TLXGSadgqIPkMrfd0Y2V4
         IP1EvjF8ePjc8c8dlCzoQO8faJXSBED7SmfW2YPrSEeRqHviYoZsNUhLxcZyVkBQ4ktN
         WQBxB1dQcWO/rh0kwFlh1i/vEbDHWZIgNbl2Q1UBjz6oMr+xpslVTj2SrZxhgP4w293m
         s3JNoyylZmen3W5HGaaQ/NdPDwS4SIKD6NZxl2U5470TDAFsC+ZROiwcuxiNUPdeEmhM
         ugGw==
X-Gm-Message-State: AOAM530kAS2JGiMDqcRJzHFaB1O/f2frDn88sX3WupmtAmdHFTMvK0aK
	UTksN27BdtoP/zm90fjpUQ1EyQ==
X-Google-Smtp-Source: ABdhPJzsUXxHElPiQq44JQkAo3Fb/eFl0eY1YUlEp1O6N7E/B92qoxEbXYhTocMtdUka3Uh42209KQ==
X-Received: by 2002:a17:902:b181:b029:fc:c069:865c with SMTP id s1-20020a170902b181b02900fcc069865cmr3099177plr.28.1622120370579;
        Thu, 27 May 2021 05:59:30 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 04/15] swiotlb: Add restricted DMA pool initialization
Date: Thu, 27 May 2021 20:58:34 +0800
Message-Id: <20210527125845.1852284-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the initialization function to create restricted DMA pools from
matching reserved-memory nodes.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/device.h  |  4 +++
 include/linux/swiotlb.h |  3 +-
 kernel/dma/swiotlb.c    | 76 +++++++++++++++++++++++++++++++++++++++++
 3 files changed, 82 insertions(+), 1 deletion(-)

diff --git a/include/linux/device.h b/include/linux/device.h
index 959cb9d2c9ab..e78e1ce0b1b1 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,6 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
+ * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -521,6 +522,9 @@ struct device {
 #ifdef CONFIG_DMA_CMA
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
+#endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
 	struct dev_archdata	archdata;
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..03ad6e3b4056 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -72,7 +72,8 @@ extern enum swiotlb_force swiotlb_force;
  *		range check to see if the memory was in fact allocated by this
  *		API.
  * @nslabs:	The number of IO TLB blocks (in groups of 64) between @start and
- *		@end. This is command line adjustable via setup_io_tlb_npages.
+ *		@end. For default swiotlb, this is command line adjustable via
+ *		setup_io_tlb_npages.
  * @used:	The number of used IO TLB block.
  * @list:	The free list describing the number of free entries available
  *		from each index.
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b849b01a446f..d99b403144a8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -39,6 +39,13 @@
 #ifdef CONFIG_DEBUG_FS
 #include <linux/debugfs.h>
 #endif
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+#include <linux/io.h>
+#include <linux/of.h>
+#include <linux/of_fdt.h>
+#include <linux/of_reserved_mem.h>
+#include <linux/slab.h>
+#endif
 
 #include <asm/io.h>
 #include <asm/dma.h>
@@ -690,3 +697,72 @@ static int __init swiotlb_create_default_debugfs(void)
 late_initcall(swiotlb_create_default_debugfs);
 
 #endif
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
+				    struct device *dev)
+{
+	struct io_tlb_mem *mem = rmem->priv;
+	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
+
+	if (dev->dma_io_tlb_mem)
+		return 0;
+
+	/*
+	 * Since multiple devices can share the same pool, the private data,
+	 * io_tlb_mem struct, will be initialized by the first device attached
+	 * to it.
+	 */
+	if (!mem) {
+		mem = kzalloc(struct_size(mem, slots, nslabs), GFP_KERNEL);
+		if (!mem)
+			return -ENOMEM;
+
+		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+
+		rmem->priv = mem;
+
+		if (IS_ENABLED(CONFIG_DEBUG_FS))
+			swiotlb_create_debugfs(mem, rmem->name);
+	}
+
+	dev->dma_io_tlb_mem = mem;
+
+	return 0;
+}
+
+static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
+					struct device *dev)
+{
+	if (dev)
+		dev->dma_io_tlb_mem = NULL;
+}
+
+static const struct reserved_mem_ops rmem_swiotlb_ops = {
+	.device_init = rmem_swiotlb_device_init,
+	.device_release = rmem_swiotlb_device_release,
+};
+
+static int __init rmem_swiotlb_setup(struct reserved_mem *rmem)
+{
+	unsigned long node = rmem->fdt_node;
+
+	if (of_get_flat_dt_prop(node, "reusable", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,cma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "linux,dma-default", NULL) ||
+	    of_get_flat_dt_prop(node, "no-map", NULL))
+		return -EINVAL;
+
+	if (PageHighMem(pfn_to_page(PHYS_PFN(rmem->base)))) {
+		pr_err("Restricted DMA pool must be accessible within the linear mapping.");
+		return -EINVAL;
+	}
+
+	rmem->ops = &rmem_swiotlb_ops;
+	pr_info("Reserved memory: created restricted DMA pool at %pa, size %ld MiB\n",
+		&rmem->base, (unsigned long)rmem->size / SZ_1M);
+	return 0;
+}
+
+RESERVEDMEM_OF_DECLARE(dma, "restricted-dma-pool", rmem_swiotlb_setup);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:59:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:59:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133226.248412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFby-0008Ib-C8; Thu, 27 May 2021 12:59:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133226.248412; Thu, 27 May 2021 12:59:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFby-0008Hv-7i; Thu, 27 May 2021 12:59:42 +0000
Received: by outflank-mailman (input) for mailman id 133226;
 Thu, 27 May 2021 12:59:40 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFbw-0008B4-Dr
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:59:40 +0000
Received: from mail-pj1-x1036.google.com (unknown [2607:f8b0:4864:20::1036])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id da616947-6eb9-4f3e-98fd-4805e49e6ddb;
 Thu, 27 May 2021 12:59:39 +0000 (UTC)
Received: by mail-pj1-x1036.google.com with SMTP id
 gb21-20020a17090b0615b029015d1a863a91so2296399pjb.2
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:59:39 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id o2sm1885997pfu.80.2021.05.27.05.59.31
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:59:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: da616947-6eb9-4f3e-98fd-4805e49e6ddb
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=n2YnMeaa21fO8GV3DprpFoE7sFhCpYEOshpAHonnB4Q=;
        b=CWN//86M7pfZwpqjk6Pz1G7HqyEoQV2evBjGZ2yUSjg4QlRX/avwG27ptvNDpwyHCw
         +v7KXNRKRgPheqlBZeQIo496WAVDVK7ga9R6f5STmR5c/U2eiCxGBrlIMJe+YfycLoea
         t6vNAhsA0F2FaNz1gwrhV3oLti0cIUc2nP7Cs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=n2YnMeaa21fO8GV3DprpFoE7sFhCpYEOshpAHonnB4Q=;
        b=A/0dkHEizizQ43OvBORTp2ZRtipSJGuEizwOZ+f9aL5zuhd8yAOumqJYtmvyqRAaJD
         SnehyQND1NXbq3X2NYkjR2kfCQjHDBbY7xbptbrK02jDkEFuBzKI6w4nfjY6tTbAsYzn
         eEQVzZj9LSb89GGy9CT5qwQggvXS08hKyMBTXJm1C+gaE482K82++AjWDiIziNaxinDF
         2yw1jhlZbdnydei1f/8Su2EsleQzwN5h7c28k/YsT7XFdOLw9Zi6Ya9Blyss4OIPd+e0
         sm7YZqaEq8RQh2GAg2kkVDHKF3ZUwPZO3aucFkOF4ebuvdFGmUU0rDwuSKb1S8dsSsr0
         mB2w==
X-Gm-Message-State: AOAM533IgGfQ8Oi/Db7rW9cc/ar0/67O8L1gblFUb+9VTQ2/a2qc4zRg
	hFEnkX6zjNcOVe3M0ReP7gHjAg==
X-Google-Smtp-Source: ABdhPJy5Kn+yvqLcXzfzyvGlttPOTv6zxSUWOgHVrqd8CjcCzdgLfUd5wPMKRSdGsOL4+oXiuJ5KrA==
X-Received: by 2002:a17:90a:46cc:: with SMTP id x12mr9547674pjg.52.1622120379071;
        Thu, 27 May 2021 05:59:39 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 05/15] swiotlb: Add a new get_io_tlb_mem getter
Date: Thu, 27 May 2021 20:58:35 +0800
Message-Id: <20210527125845.1852284-6-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new getter, get_io_tlb_mem, to help select the io_tlb_mem struct.
The restricted DMA pool is preferred if available.

A getter is used here, rather than assigning the active pool to
dev->dma_io_tlb_mem, because using dev->dma_io_tlb_mem directly might
cause memory allocation issues for existing devices: the pool can't
support atomic coherent allocation, so swiotlb_alloc needs to
distinguish it from the default swiotlb pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 03ad6e3b4056..b469f04cca26 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -2,6 +2,7 @@
 #ifndef __LINUX_SWIOTLB_H
 #define __LINUX_SWIOTLB_H
 
+#include <linux/device.h>
 #include <linux/dma-direction.h>
 #include <linux/init.h>
 #include <linux/types.h>
@@ -102,6 +103,16 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
+static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev && dev->dma_io_tlb_mem)
+		return dev->dma_io_tlb_mem;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+
+	return io_tlb_default_mem;
+}
+
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
 	struct io_tlb_mem *mem = io_tlb_default_mem;
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 12:59:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 12:59:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133233.248423 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFc9-0000fA-Km; Thu, 27 May 2021 12:59:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133233.248423; Thu, 27 May 2021 12:59:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFc9-0000ev-HO; Thu, 27 May 2021 12:59:53 +0000
Received: by outflank-mailman (input) for mailman id 133233;
 Thu, 27 May 2021 12:59:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFc8-0008B4-6r
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 12:59:52 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c588dbfe-736e-4951-b15b-2e43ce61c759;
 Thu, 27 May 2021 12:59:48 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id f8so413211pjh.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:59:48 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id o5sm1947925pfp.196.2021.05.27.05.59.40
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:59:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c588dbfe-736e-4951-b15b-2e43ce61c759
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=SRSZmEUB055aoXyARExKzd2TpwKQHnstrT5lWFQ1tNw=;
        b=cNn9BMhhKGHjhwPPG2UgABfE69X+p0W4TmaTAB5q8Z4M5InvkWs8ATmOd5U8N6qWxU
         bxxYuDcvkgM0Xfxs9klHmfv1f/HVtMfBRvpREzONkBnjg/VPWHTAwBepUFiPDAMGuCVL
         v48zrB6kkCg3HK6W5PGMid66190hqov2+sC6I=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=SRSZmEUB055aoXyARExKzd2TpwKQHnstrT5lWFQ1tNw=;
        b=WpP5CO3x3NwfJTNwUZAKb7RKoQBtP5p5E+X17vYLUE/C19m2wbw3MOo+FUynVX2jBb
         YIPYT0zcwilOrsv0IhuzizhNn4Iww5rXouwJHOGPZkIVt/ehs6k1EGvI23l6+eTTwDS4
         RJY8YFQ1vBLBVcmnqL7XEnoi3CDbAp/INttvGRdbkb84qV0aZnFNxkXIBXuMd2LeCfEE
         N4s9m48BV1HDw/XHMyd91X8gTgsiTp721FMoZLpVCIduIre3Ft0DN8atFvqXSG8zWSHW
         V63CUi69X2znAEZjisAHc4+EAKsWGofutw+KtieSruK84STNvgtLXa4+MThsl8BycKoa
         8rcg==
X-Gm-Message-State: AOAM531lAw9CYEC/nBdYcbOpp2NUQRvvH1CAeFlApyBvAlORg0mPGBHU
	HKwmQnqsnqAMZUr5NUNTDqi0Yw==
X-Google-Smtp-Source: ABdhPJzNxyImuZ9vLBAvulnmvXHqnDmLE3mk3E5uMUl1wrRQ78i9a00FaZ7G/OE27BAd+P8QYyZIUw==
X-Received: by 2002:a17:903:22d1:b029:f2:67be:82fb with SMTP id y17-20020a17090322d1b02900f267be82fbmr3220638plg.15.1622120387710;
        Thu, 27 May 2021 05:59:47 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 06/15] swiotlb: Update is_swiotlb_buffer to add a struct device argument
Date: Thu, 27 May 2021 20:58:36 +0800
Message-Id: <20210527125845.1852284-7-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_buffer to take a struct device argument. This will be
useful later to allow for restricted DMA pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/iommu/dma-iommu.c | 12 ++++++------
 drivers/xen/swiotlb-xen.c |  2 +-
 include/linux/swiotlb.h   |  6 +++---
 kernel/dma/direct.c       |  6 +++---
 kernel/dma/direct.h       |  6 +++---
 5 files changed, 16 insertions(+), 16 deletions(-)

diff --git a/drivers/iommu/dma-iommu.c b/drivers/iommu/dma-iommu.c
index 7bcdd1205535..a5df35bfd150 100644
--- a/drivers/iommu/dma-iommu.c
+++ b/drivers/iommu/dma-iommu.c
@@ -504,7 +504,7 @@ static void __iommu_dma_unmap_swiotlb(struct device *dev, dma_addr_t dma_addr,
 
 	__iommu_dma_unmap(dev, dma_addr, size);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 
@@ -575,7 +575,7 @@ static dma_addr_t __iommu_dma_map_swiotlb(struct device *dev, phys_addr_t phys,
 	}
 
 	iova = __iommu_dma_map(dev, phys, aligned_size, prot, dma_mask);
-	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(phys))
+	if (iova == DMA_MAPPING_ERROR && is_swiotlb_buffer(dev, phys))
 		swiotlb_tbl_unmap_single(dev, phys, org_size, dir, attrs);
 	return iova;
 }
@@ -781,7 +781,7 @@ static void iommu_dma_sync_single_for_cpu(struct device *dev,
 	if (!dev_is_dma_coherent(dev))
 		arch_sync_dma_for_cpu(phys, size, dir);
 
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_cpu(dev, phys, size, dir);
 }
 
@@ -794,7 +794,7 @@ static void iommu_dma_sync_single_for_device(struct device *dev,
 		return;
 
 	phys = iommu_iova_to_phys(iommu_get_dma_domain(dev), dma_handle);
-	if (is_swiotlb_buffer(phys))
+	if (is_swiotlb_buffer(dev, phys))
 		swiotlb_sync_single_for_device(dev, phys, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -815,7 +815,7 @@ static void iommu_dma_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(sg_phys(sg), sg->length, dir);
 
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_cpu(dev, sg_phys(sg),
 						    sg->length, dir);
 	}
@@ -832,7 +832,7 @@ static void iommu_dma_sync_sg_for_device(struct device *dev,
 		return;
 
 	for_each_sg(sgl, sg, nelems, i) {
-		if (is_swiotlb_buffer(sg_phys(sg)))
+		if (is_swiotlb_buffer(dev, sg_phys(sg)))
 			swiotlb_sync_single_for_device(dev, sg_phys(sg),
 						       sg->length, dir);
 
diff --git a/drivers/xen/swiotlb-xen.c b/drivers/xen/swiotlb-xen.c
index 24d11861ac7d..0c4fb34f11ab 100644
--- a/drivers/xen/swiotlb-xen.c
+++ b/drivers/xen/swiotlb-xen.c
@@ -100,7 +100,7 @@ static int is_xen_swiotlb_buffer(struct device *dev, dma_addr_t dma_addr)
 	 * in our domain. Therefore _only_ check address within our domain.
 	 */
 	if (pfn_valid(PFN_DOWN(paddr)))
-		return is_swiotlb_buffer(paddr);
+		return is_swiotlb_buffer(dev, paddr);
 	return 0;
 }
 
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index b469f04cca26..2a6cca07540b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -113,9 +113,9 @@ static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
 	return io_tlb_default_mem;
 }
 
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
@@ -127,7 +127,7 @@ bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
-static inline bool is_swiotlb_buffer(phys_addr_t paddr)
+static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index f737e3347059..84c9feb5474a 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -343,7 +343,7 @@ void dma_direct_sync_sg_for_device(struct device *dev,
 	for_each_sg(sgl, sg, nents, i) {
 		phys_addr_t paddr = dma_to_phys(dev, sg_dma_address(sg));
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_device(dev, paddr, sg->length,
 						       dir);
 
@@ -369,7 +369,7 @@ void dma_direct_sync_sg_for_cpu(struct device *dev,
 		if (!dev_is_dma_coherent(dev))
 			arch_sync_dma_for_cpu(paddr, sg->length, dir);
 
-		if (unlikely(is_swiotlb_buffer(paddr)))
+		if (unlikely(is_swiotlb_buffer(dev, paddr)))
 			swiotlb_sync_single_for_cpu(dev, paddr, sg->length,
 						    dir);
 
@@ -504,7 +504,7 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 bool dma_direct_need_sync(struct device *dev, dma_addr_t dma_addr)
 {
 	return !dev_is_dma_coherent(dev) ||
-		is_swiotlb_buffer(dma_to_phys(dev, dma_addr));
+	       is_swiotlb_buffer(dev, dma_to_phys(dev, dma_addr));
 }
 
 /**
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 50afc05b6f1d..13e9e7158d94 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -56,7 +56,7 @@ static inline void dma_direct_sync_single_for_device(struct device *dev,
 {
 	phys_addr_t paddr = dma_to_phys(dev, addr);
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_device(dev, paddr, size, dir);
 
 	if (!dev_is_dma_coherent(dev))
@@ -73,7 +73,7 @@ static inline void dma_direct_sync_single_for_cpu(struct device *dev,
 		arch_sync_dma_for_cpu_all();
 	}
 
-	if (unlikely(is_swiotlb_buffer(paddr)))
+	if (unlikely(is_swiotlb_buffer(dev, paddr)))
 		swiotlb_sync_single_for_cpu(dev, paddr, size, dir);
 
 	if (dir == DMA_FROM_DEVICE)
@@ -113,7 +113,7 @@ static inline void dma_direct_unmap_page(struct device *dev, dma_addr_t addr,
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC))
 		dma_direct_sync_single_for_cpu(dev, addr, size, dir);
 
-	if (unlikely(is_swiotlb_buffer(phys)))
+	if (unlikely(is_swiotlb_buffer(dev, phys)))
 		swiotlb_tbl_unmap_single(dev, phys, size, dir, attrs);
 }
 #endif /* _KERNEL_DMA_DIRECT_H */
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:00:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:00:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133237.248433 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFcI-0001Oi-Vh; Thu, 27 May 2021 13:00:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133237.248433; Thu, 27 May 2021 13:00:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFcI-0001OD-SQ; Thu, 27 May 2021 13:00:02 +0000
Received: by outflank-mailman (input) for mailman id 133237;
 Thu, 27 May 2021 13:00:02 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFcI-0008B4-6v
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:00:02 +0000
Received: from mail-pf1-x436.google.com (unknown [2607:f8b0:4864:20::436])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c0c42614-a3d2-47ed-a196-38eebc41ff34;
 Thu, 27 May 2021 12:59:56 +0000 (UTC)
Received: by mail-pf1-x436.google.com with SMTP id x18so506569pfi.9
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 05:59:56 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id z6sm1943314pgp.89.2021.05.27.05.59.49
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 05:59:55 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c0c42614-a3d2-47ed-a196-38eebc41ff34
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Mi4fzGLZAevgWkOrJd5AqwgkgacGpqhm6BO8mxVKx0M=;
        b=cC0gm2frLrJhtkllrartdi/IJ10NPfAYEcwXPEVpQV2EiZmW5wSJj1wWFHn6TpRrf2
         HaqrP/OROCg0MWl2jqREsiM0RPu9M17H/istHD5+5fpOYHJ0bBQL7eAhc04shOaWsUE6
         dFPx9rNBosC2Q2M9oPdfJQj93f1q/HMozBcKA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Mi4fzGLZAevgWkOrJd5AqwgkgacGpqhm6BO8mxVKx0M=;
        b=jAtSNVSPn+4JuNnyT19bVnZwJOWXMTZhm8jb271AaKsfoN8zcwlzOUUnbVBk6eCZMt
         IMAaILs205GWvEiPcUa/wQBv7scDrBAFGhewLSpagCJy8piRZZ/ySYHgj+qLIfuLdfoQ
         QnKBcN+HLWmcEG+RssH9i/528twoZLQ3qhHb3tSUFklOIA8D8L8UwYJN4Tz+7YSSWKnV
         dpS8W09R/1F7grAfgfk6pzKhyTCvhNm7Ou/CSt3UIFiF2csJoLr9epb4dLz+JxZdHhV6
         HbTRJKtY0T1hoqnih9eP7TIZv0bNDeQ3lVdIqPYUVuI1HvxOHKKPz0K6dR6DrL7dOzEk
         lckw==
X-Gm-Message-State: AOAM533RZV9n94CeMPYyWrT32bIRCZaRrFxwvgkY5zFHeOXDgR5qGWsU
	SXBwvMYcmyTf3BnzG5APLUfqjQ==
X-Google-Smtp-Source: ABdhPJzPlZfJRITEebp1ZlzGxNW3wGUcc2uIEGsv6usjxbdSMuBRFFThGIAIIadsB2Kn5HYh1wQOKg==
X-Received: by 2002:a05:6a00:bcd:b029:2e5:694c:1c96 with SMTP id x13-20020a056a000bcdb02902e5694c1c96mr3610267pfu.53.1622120396288;
        Thu, 27 May 2021 05:59:56 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 07/15] swiotlb: Update is_swiotlb_active to add a struct device argument
Date: Thu, 27 May 2021 20:58:37 +0800
Message-Id: <20210527125845.1852284-8-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Update is_swiotlb_active to take a struct device argument. This will be
useful later to allow for restricted DMA pools.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/gpu/drm/i915/gem/i915_gem_internal.c | 2 +-
 drivers/gpu/drm/nouveau/nouveau_ttm.c        | 2 +-
 drivers/pci/xen-pcifront.c                   | 2 +-
 include/linux/swiotlb.h                      | 4 ++--
 kernel/dma/direct.c                          | 2 +-
 kernel/dma/swiotlb.c                         | 4 ++--
 6 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/gpu/drm/i915/gem/i915_gem_internal.c b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
index ce6b664b10aa..7d48c433446b 100644
--- a/drivers/gpu/drm/i915/gem/i915_gem_internal.c
+++ b/drivers/gpu/drm/i915/gem/i915_gem_internal.c
@@ -42,7 +42,7 @@ static int i915_gem_object_get_pages_internal(struct drm_i915_gem_object *obj)
 
 	max_order = MAX_ORDER;
 #ifdef CONFIG_SWIOTLB
-	if (is_swiotlb_active()) {
+	if (is_swiotlb_active(NULL)) {
 		unsigned int max_segment;
 
 		max_segment = swiotlb_max_segment();
diff --git a/drivers/gpu/drm/nouveau/nouveau_ttm.c b/drivers/gpu/drm/nouveau/nouveau_ttm.c
index 65430912ff72..d0e998b9e2e8 100644
--- a/drivers/gpu/drm/nouveau/nouveau_ttm.c
+++ b/drivers/gpu/drm/nouveau/nouveau_ttm.c
@@ -270,7 +270,7 @@ nouveau_ttm_init(struct nouveau_drm *drm)
 	}
 
 #if IS_ENABLED(CONFIG_SWIOTLB) && IS_ENABLED(CONFIG_X86)
-	need_swiotlb = is_swiotlb_active();
+	need_swiotlb = is_swiotlb_active(NULL);
 #endif
 
 	ret = ttm_device_init(&drm->ttm.bdev, &nouveau_bo_driver, drm->dev->dev,
diff --git a/drivers/pci/xen-pcifront.c b/drivers/pci/xen-pcifront.c
index b7a8f3a1921f..6d548ce53ce7 100644
--- a/drivers/pci/xen-pcifront.c
+++ b/drivers/pci/xen-pcifront.c
@@ -693,7 +693,7 @@ static int pcifront_connect_and_init_dma(struct pcifront_device *pdev)
 
 	spin_unlock(&pcifront_dev_lock);
 
-	if (!err && !is_swiotlb_active()) {
+	if (!err && !is_swiotlb_active(NULL)) {
 		err = pci_xen_swiotlb_init_late();
 		if (err)
 			dev_err(&pdev->xdev->dev, "Could not setup SWIOTLB!\n");
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 2a6cca07540b..c530c976d18b 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -123,7 +123,7 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
-bool is_swiotlb_active(void);
+bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
@@ -143,7 +143,7 @@ static inline size_t swiotlb_max_mapping_size(struct device *dev)
 	return SIZE_MAX;
 }
 
-static inline bool is_swiotlb_active(void)
+static inline bool is_swiotlb_active(struct device *dev)
 {
 	return false;
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 84c9feb5474a..7a88c34d0867 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -495,7 +495,7 @@ int dma_direct_supported(struct device *dev, u64 mask)
 size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
-	if (is_swiotlb_active() &&
+	if (is_swiotlb_active(dev) &&
 	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d99b403144a8..b2b6503ecd88 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -662,9 +662,9 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 	return ((size_t)IO_TLB_SIZE) * IO_TLB_SEGSIZE;
 }
 
-bool is_swiotlb_active(void)
+bool is_swiotlb_active(struct device *dev)
 {
-	return io_tlb_default_mem != NULL;
+	return get_io_tlb_mem(dev) != NULL;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
-- 
2.31.1.818.g46aad6cb9e-goog
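[Editor's note] The shape of this change — resolving the io_tlb_mem from the device rather than the global default — can be sketched in userspace as follows. The struct layouts, field name `dma_io_tlb_mem`, and address values here are simplified stand-ins for illustration, not the kernel's real definitions.

```c
#include <stdbool.h>
#include <stddef.h>

/* Simplified stand-ins for the kernel structures (illustrative only). */
struct io_tlb_mem {
	unsigned long start;	/* physical start of the bounce-buffer pool */
	unsigned long end;	/* physical end (exclusive) */
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;	/* per-device restricted pool, if any */
};

static struct io_tlb_mem *io_tlb_default_mem;

/* Prefer the device's own pool; fall back to the global default. */
static struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
{
	if (dev && dev->dma_io_tlb_mem)
		return dev->dma_io_tlb_mem;
	return io_tlb_default_mem;
}

/* After the patch: the answer depends on which device is asking. */
static bool is_swiotlb_buffer(struct device *dev, unsigned long paddr)
{
	struct io_tlb_mem *mem = get_io_tlb_mem(dev);

	return mem && paddr >= mem->start && paddr < mem->end;
}

static bool is_swiotlb_active(struct device *dev)
{
	return get_io_tlb_mem(dev) != NULL;
}
```

With this shape, the NULL-device callers converted above (i915, nouveau, xen-pcifront) simply fall through to the default pool, preserving the old behavior.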



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:02:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:02:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133254.248447 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFeU-0003Bl-JL; Thu, 27 May 2021 13:02:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133254.248447; Thu, 27 May 2021 13:02:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFeU-0003Be-GJ; Thu, 27 May 2021 13:02:18 +0000
Received: by outflank-mailman (input) for mailman id 133254;
 Thu, 27 May 2021 13:02:17 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TI9I=KW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lmFeT-0003BU-BF
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:02:17 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1403423d-dd02-4bfb-83e6-4cc5acf9a994;
 Thu, 27 May 2021 13:02:16 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id D40FB68AFE; Thu, 27 May 2021 15:02:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 1403423d-dd02-4bfb-83e6-4cc5acf9a994
Date: Thu, 27 May 2021 15:02:11 +0200
From: Christoph Hellwig <hch@lst.de>
To: Florian Fainelli <f.fainelli@gmail.com>
Cc: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com,
	Tom Lendacky <thomas.lendacky@amd.com>
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
Message-ID: <20210527130211.GA24344@lst.de>
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-2-tientzu@chromium.org> <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Wed, May 19, 2021 at 11:50:07AM -0700, Florian Fainelli wrote:
> You convert this call site with swiotlb_init_io_tlb_mem() which did not
> do the set_memory_decrypted()+memset(). Is this okay or should
> swiotlb_init_io_tlb_mem() add an additional argument to do this
> conditionally?

The zeroing is useful and was missing before.  I think having a clean
state here is the right thing.

Not sure about the set_memory_decrypted; swiotlb_update_mem_attributes
kinda suggests it is too early to set the memory decrypted.

Adding Tom, who should know about all this.


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:03:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:03:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133264.248459 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFfm-0003rc-Ue; Thu, 27 May 2021 13:03:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133264.248459; Thu, 27 May 2021 13:03:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFfm-0003rV-RE; Thu, 27 May 2021 13:03:38 +0000
Received: by outflank-mailman (input) for mailman id 133264;
 Thu, 27 May 2021 13:03:36 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFcr-0008B4-81
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:00:37 +0000
Received: from mail-pf1-x430.google.com (unknown [2607:f8b0:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 26ed9e2f-dcab-4dd2-8b89-9bd645ed4578;
 Thu, 27 May 2021 13:00:14 +0000 (UTC)
Received: by mail-pf1-x430.google.com with SMTP id q25so538850pfn.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:00:14 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id m84sm1905689pfd.41.2021.05.27.06.00.06
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:00:13 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 26ed9e2f-dcab-4dd2-8b89-9bd645ed4578
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=w2PiQmD1cuLTolB92EeZQujQ/2l5n9gaMPeHb2aHq3I=;
        b=h56T1h46NuCHAXLEDOrFtGzPeGkXpqjaVqU2Id3/ScnxpcTFZFB+NgniQLXbdnzuUx
         QD0APsw9ZjoeJMpeIHj0tuMfdo+wG/iO7GBpb84AqyFHYbVnubqVV+LlaPHYPYnvVgbR
         tJ7ATJhWs9UvVNPUNH5hquxvKLueKDOrXLp9s=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=w2PiQmD1cuLTolB92EeZQujQ/2l5n9gaMPeHb2aHq3I=;
        b=FyfcJkYGfYUh9QDTmex4khUPGIakqklvXPKJNSf0WinsowiYBCVwBuk10+4ptQv/Zp
         RB5y+3A3V7KITbW/s+47ftG+G0qwYWnhV36t1CIDhNj/jtKNIXAZvW/J3tZqs/JbmKnV
         ziuHYIExvpQtnxqYsId+tLumYrPyjl3VjTistTtx6eXFY7/E5fAHuGkvGz21IFlqPMoi
         y2RlG+d9ngAofsDsQmeOI0CpELf/6ciiW4pS9Q4I7RLamOHU9wR8KNY+A5Blta5ObHhX
         YVN9Xze6IBbGK7NFQw0oUuS7vh3Xja05VdidQhkLdekPwrcQhXnUiGrZSt9qlfMZP36o
         uoPQ==
X-Gm-Message-State: AOAM531ePA/yaZCuSLvj+d1MjpBitHbxVUEwsW41AJNZ/UcTFvgpLEgp
	f8u3zEqQ4Im8WH9lwTSuVUSA9g==
X-Google-Smtp-Source: ABdhPJwPKvUueO3/tWMtqAGArtGFSFkKVOYPoppJ9Pk4xAfchfGgkSrJtv/ImJCVwl16f+ZRC2hwVw==
X-Received: by 2002:a63:5c01:: with SMTP id q1mr3600625pgb.447.1622120414085;
        Thu, 27 May 2021 06:00:14 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 09/15] swiotlb: Move alloc_size to find_slots
Date: Thu, 27 May 2021 20:58:39 +0800
Message-Id: <20210527125845.1852284-10-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Move the maintenance of alloc_size into find_slots for better code
reusability later.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index fa7f23fffc81..88b3471ac6a8 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -482,8 +482,11 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 	return -1;
 
 found:
-	for (i = index; i < index + nslots; i++)
+	for (i = index; i < index + nslots; i++) {
 		mem->slots[i].list = 0;
+		mem->slots[i].alloc_size =
+			alloc_size - ((i - index) << IO_TLB_SHIFT);
+	}
 	for (i = index - 1;
 	     io_tlb_offset(i) != IO_TLB_SEGSIZE - 1 &&
 	     mem->slots[i].list; i--)
@@ -538,11 +541,8 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	 * This is needed when we sync the memory.  Then we sync the buffer if
 	 * needed.
 	 */
-	for (i = 0; i < nr_slots(alloc_size + offset); i++) {
+	for (i = 0; i < nr_slots(alloc_size + offset); i++)
 		mem->slots[index + i].orig_addr = slot_addr(orig_addr, i);
-		mem->slots[index + i].alloc_size =
-			alloc_size - (i << IO_TLB_SHIFT);
-	}
 	tlb_addr = slot_addr(mem->start, index) + offset;
 	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
 	    (dir == DMA_TO_DEVICE || dir == DMA_BIDIRECTIONAL))
-- 
2.31.1.818.g46aad6cb9e-goog
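[Editor's note] The bookkeeping being moved can be sketched in isolation: after the patch, find_slots records the remaining alloc_size for every slot it claims, so the mapping path only has to fill in orig_addr. The fixed IO_TLB_SHIFT of 11 (2 KiB slots) matches the kernel, but the slot array, the always-allocate-from-index-0 behavior, and the omitted free-list scan are simplifications for illustration.

```c
#include <stddef.h>

#define IO_TLB_SHIFT	11		/* 2 KiB per slot, as in the kernel */
#define IO_TLB_SIZE	(1 << IO_TLB_SHIFT)
#define NSLOTS		8

struct slot {
	int list;		/* 0 = claimed */
	size_t alloc_size;	/* bytes of the allocation from this slot on */
};

static struct slot slots[NSLOTS];

/*
 * Simplified find_slots(): always allocates from index 0 and skips the
 * real free-list search. The point is the loop body — each claimed slot
 * records how much of the allocation remains from that slot onward,
 * decreasing by one slot size per step.
 */
static int find_slots(size_t alloc_size)
{
	int index = 0;
	int nslots = (int)((alloc_size + IO_TLB_SIZE - 1) >> IO_TLB_SHIFT);

	for (int i = index; i < index + nslots; i++) {
		slots[i].list = 0;
		slots[i].alloc_size =
			alloc_size - (size_t)((i - index) << IO_TLB_SHIFT);
	}
	return index;
}
```

For a 5000-byte allocation this claims three slots whose alloc_size values step down by 2048 bytes each (5000, 2952, 904), which is exactly the invariant the deleted loop in swiotlb_tbl_map_single used to maintain.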



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:03:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:03:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133268.248470 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFft-0004C0-77; Thu, 27 May 2021 13:03:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133268.248470; Thu, 27 May 2021 13:03:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFft-0004Br-3s; Thu, 27 May 2021 13:03:45 +0000
Received: by outflank-mailman (input) for mailman id 133268;
 Thu, 27 May 2021 13:03:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFdB-0008B4-8w
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:00:57 +0000
Received: from mail-qv1-xf29.google.com (unknown [2607:f8b0:4864:20::f29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 64418cba-e69e-4154-a1a2-66151aaf7c83;
 Thu, 27 May 2021 13:00:31 +0000 (UTC)
Received: by mail-qv1-xf29.google.com with SMTP id a7so3810qvf.11
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:00:31 -0700 (PDT)
Received: from mail-qt1-f179.google.com (mail-qt1-f179.google.com.
 [209.85.160.179])
 by smtp.gmail.com with ESMTPSA id x18sm1272488qkx.118.2021.05.27.06.00.30
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:00:30 -0700 (PDT)
Received: by mail-qt1-f179.google.com with SMTP id s12so108292qta.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:00:30 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64418cba-e69e-4154-a1a2-66151aaf7c83
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=tgPMjyY5jAkXYAxU2M6Xs7JOfdHxgDuWupSEgAOvuBU=;
        b=HyAvfwEBrieOghAofexhSawgJJukm2YnQ2fAt2CFdEKEbDS1O9qhSu7HLdewyRfCVz
         gEA6t08wyk35Bk0UPING6Rrv8blJD9d3z2B2GuwI2wJTrYZe4msm0UUHcQ98yGMgAfAw
         jsaa2ecYO9Gel1C3uwINW+EKRzQM0HGE52Gr4=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=tgPMjyY5jAkXYAxU2M6Xs7JOfdHxgDuWupSEgAOvuBU=;
        b=s3Mzxwgtxtg/H3ih7I04JsAwh06JL2U/t4VLtMkPJJ7akvR9/yMAg5vmlLeh4uHnQT
         yNIkcw5OxqDvb2UefBtFbODVRg5D39OwJLCJgPjn0dfH+VDmylIIgm+KLsl+URHmUutg
         QUQrt+Hi7VAp57AIIJ4IkFB+vBFpw+tQlWKPoyky5zUGRDsyPylQQvIiOfNcumTzxSeQ
         nicxeoQMExSGFX9dEDEia18fMVZByTbRtLwrXhkOxr+oxzFz2J+WzsU1prkBD/1k170H
         mu0sbefGBecbeROzjMy1b+dHlDuEzsEZP9gWJKlXTIpdxC2EOQ5Q7oXx+47eM6Sk2EpJ
         Lydg==
X-Gm-Message-State: AOAM532Bivw01+VUQjgdMNKQKpNO5AbnpDUDtoexcQolfcd68v1vyX15
	CkOPjoKe81aumj9HDKUkMRttfUxCAkbPfw==
X-Google-Smtp-Source: ABdhPJwfmBtSnmT66s8oSVxbl0kteAwOB5krJM5yJKY38FIq21juIdo6PVBTWgfVFxZK0uJ/eIVb9g==
X-Received: by 2002:a0c:d784:: with SMTP id z4mr2319584qvi.27.1622120430641;
        Thu, 27 May 2021 06:00:30 -0700 (PDT)
X-Received: by 2002:a02:a505:: with SMTP id e5mr3414377jam.10.1622120419476;
 Thu, 27 May 2021 06:00:19 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
In-Reply-To: <20210518064215.2856977-1-tientzu@chromium.org>
From: Claire Chang <tientzu@chromium.org>
Date: Thu, 27 May 2021 21:00:07 +0800
X-Gmail-Original-Message-ID: <CALiNf2-dUFSCOz4=jmEm8ZcX+zQXKzo6yPg31iLLLG3FAr+g1w@mail.gmail.com>
Message-ID: <CALiNf2-dUFSCOz4=jmEm8ZcX+zQXKzo6yPg31iLLLG3FAr+g1w@mail.gmail.com>
Subject: Re: [PATCH v7 00/15] Restricted DMA
To: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, 
	Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>, 
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com, jgross@suse.com, 
	Christoph Hellwig <hch@lst.de>, Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

v8 here: https://lore.kernel.org/patchwork/cover/1437112/


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:03:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:03:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133270.248474 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFft-0004G3-Ie; Thu, 27 May 2021 13:03:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133270.248474; Thu, 27 May 2021 13:03:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFft-0004Es-DF; Thu, 27 May 2021 13:03:45 +0000
Received: by outflank-mailman (input) for mailman id 133270;
 Thu, 27 May 2021 13:03:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFcm-0008B4-83
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:00:32 +0000
Received: from mail-pl1-x62c.google.com (unknown [2607:f8b0:4864:20::62c])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6aa84d79-370f-4126-b495-7b1f156bc2f5;
 Thu, 27 May 2021 13:00:06 +0000 (UTC)
Received: by mail-pl1-x62c.google.com with SMTP id h12so2284627plf.11
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:00:06 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id m1sm2027143pgd.78.2021.05.27.05.59.58
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:00:04 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6aa84d79-370f-4126-b495-7b1f156bc2f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=B7RrpH6OgXx/lTZyexKL+IWeAI7yB0yfCFBVtG2zWbw=;
        b=kcN4x88wnIqA2eddyBkI1s+5ccQseApdZLb8/RSr2+Fa520MxJZZ6NkX9r/ML2uBrj
         kvMXO+zNUanE1yYV9wWE+HheUK89EpCu8v6POAAGHEkAn5qXZCd1I7/7KAUnWJ2R0GFj
         AiEWOWj6q+T+/qXKxk0sPdE2cDhBD8s6mX/Vs=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=B7RrpH6OgXx/lTZyexKL+IWeAI7yB0yfCFBVtG2zWbw=;
        b=IhNRDs/J/nyPwygPZYYin9Zlz7QIGL7OKgwZcl597P5oMDKg8C9Oh0yqOax3muZiSO
         PQJqR5eKjl2P9CqR+nqHjT/zAaDugC7rCwqm69C3Cq15dYIjP4lzcXZ/aCGzSpWrm0Jm
         iu8ZOMqjMfRy02CAebU8MF79nvUOr5kFimvYpJh17neAASzPcwzB7sKIhk7p8gRBmS9P
         h28L0Q+nkqXSH6zm1WvpaDGSr9x0IoFZmMSlU2aISEGuJqmTKWMr1hePx4h+qb2dtCzK
         1CbT0wGkk/L6LXIkzrtCbtPq/uljbPaBmxAikRx0Raa8twkGMBZ09KalAYOcMm2RLH/7
         TuNw==
X-Gm-Message-State: AOAM532wH1wFTuHWcQosFXfATnWBhNXzuS/hy4rLK7oeY943tPqkvrW0
	T/4DmVa8yb+La0CBV67/pTD9CQ==
X-Google-Smtp-Source: ABdhPJx/S80brWa6yq+WbXDwFafH2VwKxtCuF8GRMQTIN21BHKQJi5x8f1RJPDSguTNDbXCIi1shzA==
X-Received: by 2002:a17:90b:33d1:: with SMTP id lk17mr3678617pjb.154.1622120405435;
        Thu, 27 May 2021 06:00:05 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 08/15] swiotlb: Bounce data from/to restricted DMA pool if available
Date: Thu, 27 May 2021 20:58:38 +0800
Message-Id: <20210527125845.1852284-9-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Regardless of the swiotlb setting, the restricted DMA pool is preferred if
available.

The restricted DMA pools provide a basic level of protection against DMA
overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.

Note that is_dev_swiotlb_force() doesn't check whether
swiotlb_force == SWIOTLB_FORCE. Otherwise the memory allocation behavior
with the default swiotlb would be changed by the following patch
("dma-direct: Allocate memory from restricted DMA pool if available").

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h | 13 +++++++++++++
 kernel/dma/direct.c     |  3 ++-
 kernel/dma/direct.h     |  3 ++-
 kernel/dma/swiotlb.c    |  8 ++++----
 4 files changed, 21 insertions(+), 6 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index c530c976d18b..0c5a18d9cf89 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -120,6 +120,15 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (dev->dma_io_tlb_mem)
+		return true;
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
+	return false;
+}
+
 void __init swiotlb_exit(void);
 unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
@@ -131,6 +140,10 @@ static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
 	return false;
 }
+static inline bool is_dev_swiotlb_force(struct device *dev)
+{
+	return false;
+}
 static inline void swiotlb_exit(void)
 {
 }
diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 7a88c34d0867..078f7087e466 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -496,7 +496,8 @@ size_t dma_direct_max_mapping_size(struct device *dev)
 {
 	/* If SWIOTLB is active, use its maximum mapping size */
 	if (is_swiotlb_active(dev) &&
-	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE))
+	    (dma_addressing_limited(dev) || swiotlb_force == SWIOTLB_FORCE ||
+	     is_dev_swiotlb_force(dev)))
 		return swiotlb_max_mapping_size(dev);
 	return SIZE_MAX;
 }
diff --git a/kernel/dma/direct.h b/kernel/dma/direct.h
index 13e9e7158d94..f94813674e23 100644
--- a/kernel/dma/direct.h
+++ b/kernel/dma/direct.h
@@ -87,7 +87,8 @@ static inline dma_addr_t dma_direct_map_page(struct device *dev,
 	phys_addr_t phys = page_to_phys(page) + offset;
 	dma_addr_t dma_addr = phys_to_dma(dev, phys);
 
-	if (unlikely(swiotlb_force == SWIOTLB_FORCE))
+	if (unlikely(swiotlb_force == SWIOTLB_FORCE) ||
+	    is_dev_swiotlb_force(dev))
 		return swiotlb_map(dev, phys, size, dir, attrs);
 
 	if (unlikely(!dma_capable(dev, dma_addr, size, true))) {
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index b2b6503ecd88..fa7f23fffc81 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -347,7 +347,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -429,7 +429,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -506,7 +506,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -557,7 +557,7 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 			      size_t mapping_size, enum dma_data_direction dir,
 			      unsigned long attrs)
 {
-	struct io_tlb_mem *mem = io_tlb_default_mem;
+	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:03:46 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:03:46 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133271.248490 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFfu-0004h4-RR; Thu, 27 May 2021 13:03:46 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133271.248490; Thu, 27 May 2021 13:03:46 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFfu-0004fV-N2; Thu, 27 May 2021 13:03:46 +0000
Received: by outflank-mailman (input) for mailman id 133271;
 Thu, 27 May 2021 13:03:45 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFft-0004HJ-Mn
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:03:45 +0000
Received: from mail-pf1-x435.google.com (unknown [2607:f8b0:4864:20::435])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a47b63ad-8d60-4159-92cd-eadc6b6b963a;
 Thu, 27 May 2021 13:03:45 +0000 (UTC)
Received: by mail-pf1-x435.google.com with SMTP id 22so509319pfv.11
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:03:44 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id w15sm2015155pjy.1.2021.05.27.06.03.34
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:03:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a47b63ad-8d60-4159-92cd-eadc6b6b963a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=m7+uI7rGPgfSKn+aVdUzTy5/dZn0TD9snxYJvtDtWr8=;
        b=ftmREmcxrJM5ghdWu3uu6RoVOQmn/FmRG0fXdD0sAjbx0bczGkzwvRTurlvaYuIvJW
         x3lcL0ttraMgqRb6RQUndSyEBiGIY3jtWjTqxgCUrCG/1jwY14bstcfPs5KZ9RhvRj5W
         T2wJrb10vnObIKglZGitAlnSSqiKkQCQp8AYQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=m7+uI7rGPgfSKn+aVdUzTy5/dZn0TD9snxYJvtDtWr8=;
        b=AJ60oWOW8piSmO39i7ejv1tqCzZqTuSBfDixlrQz4BElRNB50M8Wu8U8bF7fkFRuHK
         duYlI/V9jYR4QkgiR4fZiFw9X9NZ3F5qj0btcvJQE+fKbXGsDNwTevvOK98pBj/QR6m+
         I+qFDPTtlfQhbAztebHWcdodhFgTM06dXazBIrO5M6KKE3kgDCtVOc5cQut7sTlgEhyD
         wYZFvuDr3r2XRvb5NmwqnjC9XnNcgIsQFWtjDNJPn3HI9r0LSP3YQA9vqCw5iv/BiCWH
         cHxW/Bdllp7NSwCaog6L6AaQPOeSs0ahvPf+g+vVhSV/YAEgTUQp8kt0L8AP8efKch9k
         lBSQ==
X-Gm-Message-State: AOAM531hm508AyoEdvnrqNmUmiIZIFh+pqVNdYhy7M7800rVvTYLdhi+
	vHMemR6o1b3T+Osw8I6UClduHw==
X-Google-Smtp-Source: ABdhPJwFMAyQyClkgtD2WU3AOT0NSiYpDWGkYsiRLX3TSIJ7iJhjuPVvZAr31+YEfks0HMDDaEOArA==
X-Received: by 2002:a05:6a00:216a:b029:2df:3461:4ac3 with SMTP id r10-20020a056a00216ab02902df34614ac3mr3590982pff.80.1622120624345;
        Thu, 27 May 2021 06:03:44 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 11/15] dma-direct: Add a new wrapper __dma_direct_free_pages()
Date: Thu, 27 May 2021 21:03:25 +0800
Message-Id: <20210527130329.1853588-1-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new wrapper __dma_direct_free_pages() that will be useful later
for swiotlb_free().

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 14 ++++++++++----
 1 file changed, 10 insertions(+), 4 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index 078f7087e466..eb4098323bbc 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -75,6 +75,12 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 		min_not_zero(dev->coherent_dma_mask, dev->bus_dma_limit);
 }
 
+static void __dma_direct_free_pages(struct device *dev, struct page *page,
+				    size_t size)
+{
+	dma_free_contiguous(dev, page, size);
+}
+
 static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 		gfp_t gfp)
 {
@@ -237,7 +243,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 			return NULL;
 	}
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -273,7 +279,7 @@ void dma_direct_free(struct device *dev, size_t size,
 	else if (IS_ENABLED(CONFIG_ARCH_HAS_DMA_CLEAR_UNCACHED))
 		arch_dma_clear_uncached(cpu_addr, size);
 
-	dma_free_contiguous(dev, dma_direct_to_page(dev, dma_addr), size);
+	__dma_direct_free_pages(dev, dma_direct_to_page(dev, dma_addr), size);
 }
 
 struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
@@ -310,7 +316,7 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	*dma_handle = phys_to_dma_direct(dev, page_to_phys(page));
 	return page;
 out_free_pages:
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 	return NULL;
 }
 
@@ -329,7 +335,7 @@ void dma_direct_free_pages(struct device *dev, size_t size,
 	if (force_dma_unencrypted(dev))
 		set_memory_encrypted((unsigned long)vaddr, 1 << page_order);
 
-	dma_free_contiguous(dev, page, size);
+	__dma_direct_free_pages(dev, page, size);
 }
 
 #if defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_DEVICE) || \
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:03:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:03:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133275.248503 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFg5-0005Kp-AA; Thu, 27 May 2021 13:03:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133275.248503; Thu, 27 May 2021 13:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFg5-0005KW-6Q; Thu, 27 May 2021 13:03:57 +0000
Received: by outflank-mailman (input) for mailman id 133275;
 Thu, 27 May 2021 13:03:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFd6-0008B4-8f
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:00:52 +0000
Received: from mail-pj1-x1036.google.com (unknown [2607:f8b0:4864:20::1036])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3e7a7143-5b5f-42e2-894b-9d300fe461f9;
 Thu, 27 May 2021 13:00:28 +0000 (UTC)
Received: by mail-pj1-x1036.google.com with SMTP id
 b15-20020a17090a550fb029015dad75163dso372328pji.0
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:00:28 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id 66sm2009117pgj.9.2021.05.27.06.00.15
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:00:22 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3e7a7143-5b5f-42e2-894b-9d300fe461f9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=gCZEoBF15DvIVGSc+3j8Uk27721eHnYQC5cQOW6yZk4=;
        b=FaFXCFWJLCD/gigBOGfY4F270yO0Alvb7EMZVKQJ9L8utcEPvn3XtnKI3BEMvmJtEK
         gYgjai80HZHDiRLWnq4ZnWFUPWMggGA/Wo+QuTb4wDh5MX+6KsOXKwWCEnZzHAaiuA3T
         2/Aqi7L4LhIRGkedSPRMYz9qyuKketwP3p78k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=gCZEoBF15DvIVGSc+3j8Uk27721eHnYQC5cQOW6yZk4=;
        b=Hymu/xwylAU+dLhd+O9BiDQMjPpxNveb59vsOiwP56T+5tcD+nKYgZFr5lSbDVfJPf
         IdTO2YMabxETffgHfuwlNm22391IMroXvxjTWwxo3k77ze4TRdQuEoFrrfK50g3F36CL
         qOvdDpm6792I7MpvAFqwSppiIjKTZz5wp3MgrZVzpXKBmv6SzANGvSrqC/mvHAf/qkpe
         m7nJVYm7Z/nrSTC07uYOEjNoupTQlVGk4XXfdn0lUsoNtWPDfD4tA8yNd/ikZBwqnp7f
         bfLoiFmq4LXW0eRNn+v1nViZ2IwLSvJCNlkAjTY+nPm6XOwSZqiYFM3rbevSQhTV+vuQ
         KMvg==
X-Gm-Message-State: AOAM532vknpBfikTAk0Pz2js6L/vn4mfyr6+IyL2BQKYjg6SLkPO6k8C
	JF1T7KcKIhvbGjdiEw9q9EFF4A==
X-Google-Smtp-Source: ABdhPJyRdfD5aP84oP+O3UZ9D5kP+7R8Jjf69TCVtMPqoqtl6dZXUjZXcnaxnn4OXnc5bmDzFDOgsA==
X-Received: by 2002:a17:90a:74f:: with SMTP id s15mr4091622pje.90.1622120423164;
        Thu, 27 May 2021 06:00:23 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 10/15] swiotlb: Refactor swiotlb_tbl_unmap_single
Date: Thu, 27 May 2021 20:58:40 +0800
Message-Id: <20210527125845.1852284-11-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add a new function, release_slots, to make the code reusable for supporting
different bounce buffer pools, e.g. the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/swiotlb.c | 35 ++++++++++++++++++++---------------
 1 file changed, 20 insertions(+), 15 deletions(-)

diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 88b3471ac6a8..c4fc2e444e7a 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -550,27 +550,15 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 	return tlb_addr;
 }
 
-/*
- * tlb_addr is the physical address of the bounce buffer to unmap.
- */
-void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
-			      size_t mapping_size, enum dma_data_direction dir,
-			      unsigned long attrs)
+static void release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(hwdev);
+	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
 	unsigned long flags;
-	unsigned int offset = swiotlb_align_offset(hwdev, tlb_addr);
+	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
 	int nslots = nr_slots(mem->slots[index].alloc_size + offset);
 	int count, i;
 
-	/*
-	 * First, sync the memory before unmapping the entry
-	 */
-	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
-	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
-		swiotlb_bounce(hwdev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
-
 	/*
 	 * Return the buffer to the free list by setting the corresponding
 	 * entries to indicate the number of contiguous entries available.
@@ -605,6 +593,23 @@ void swiotlb_tbl_unmap_single(struct device *hwdev, phys_addr_t tlb_addr,
 	spin_unlock_irqrestore(&mem->lock, flags);
 }
 
+/*
+ * tlb_addr is the physical address of the bounce buffer to unmap.
+ */
+void swiotlb_tbl_unmap_single(struct device *dev, phys_addr_t tlb_addr,
+			      size_t mapping_size, enum dma_data_direction dir,
+			      unsigned long attrs)
+{
+	/*
+	 * First, sync the memory before unmapping the entry
+	 */
+	if (!(attrs & DMA_ATTR_SKIP_CPU_SYNC) &&
+	    (dir == DMA_FROM_DEVICE || dir == DMA_BIDIRECTIONAL))
+		swiotlb_bounce(dev, tlb_addr, mapping_size, DMA_FROM_DEVICE);
+
+	release_slots(dev, tlb_addr);
+}
+
 void swiotlb_sync_single_for_device(struct device *dev, phys_addr_t tlb_addr,
 		size_t size, enum dma_data_direction dir)
 {
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:03:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:03:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133276.248509 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFg5-0005Pk-Qr; Thu, 27 May 2021 13:03:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133276.248509; Thu, 27 May 2021 13:03:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFg5-0005Nx-H4; Thu, 27 May 2021 13:03:57 +0000
Received: by outflank-mailman (input) for mailman id 133276;
 Thu, 27 May 2021 13:03:55 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFg3-0005ED-M5
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:03:55 +0000
Received: from mail-pf1-x430.google.com (unknown [2607:f8b0:4864:20::430])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 9d859fc2-d424-4032-9379-b632696215ba;
 Thu, 27 May 2021 13:03:54 +0000 (UTC)
Received: by mail-pf1-x430.google.com with SMTP id y15so501191pfn.13
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:03:54 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id p11sm2096443pjo.19.2021.05.27.06.03.46
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:03:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 9d859fc2-d424-4032-9379-b632696215ba
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=Hq3dk3sdVo1Aiyz3oEGqQdxJF9bgRO7on7tVtGrwREA=;
        b=Fc1gvJ3n84xz3JxOO0N55rsBPx0BRwCnJqQGnPap/zK0pI1U/L+kL/kcQzWLQ84Riq
         0j/pqlcmWQYQydyVIRDN+PwShThBXOPgJ9Tt5RavIwKiqavfO6y+wHM9/KEWCh+zM9+b
         bTIS7XzvK0G3Tb5c5LYB9BTAV5sVenH8dr8s8=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=Hq3dk3sdVo1Aiyz3oEGqQdxJF9bgRO7on7tVtGrwREA=;
        b=ngM91AlHPnHI/iOLRJuXQwOGKwaMHxcQ0RwhuHPdzSDfIuevYl+BNhaGsLJZ5twjE3
         tquxy9Xf7+h1ZU9gTC7X7uM7l/ydwc0KTzVzH71ylA9/9T6Fbd4kgp/C1UHykPGSLaeR
         1I7eYeDsszYkj1nXuMp6Df/BOtabYHLZ9ujIbEO+d/qCnyT6ykE+yYaMRKdI9+2iMkj9
         bP2VXqVyG2QAVcE1PuQcr2+zhTwOkBRR2CNxlGwqxRpLIShe9FCM91jjjXlICSGXztiB
         9/EvxZUcinypythNkzhe8/LgFQ6A8HKBIWJD6DhDVIfmS5LvppXz0VcxADF2AAlpgAwq
         HoNw==
X-Gm-Message-State: AOAM530gLDlR9WPiBEctR6+AfIhJvHNC2Ga7lxPzALqXPa1O6J5vEdfd
	l/iwwIRaWjbg5HhAWVLLBPKgrw==
X-Google-Smtp-Source: ABdhPJwek6kyZnVLp6SkoYltLcwOXMIMY+AUL7OjyvyvB2fw/uVSfHyOeaCWcshB7EYERK97204FCg==
X-Received: by 2002:aa7:8588:0:b029:28e:dfa1:e31a with SMTP id w8-20020aa785880000b029028edfa1e31amr3587263pfn.77.1622120633858;
        Thu, 27 May 2021 06:03:53 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 12/15] swiotlb: Add restricted DMA alloc/free support.
Date: Thu, 27 May 2021 21:03:26 +0800
Message-Id: <20210527130329.1853588-2-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org>
References: <20210527130329.1853588-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Add the functions swiotlb_{alloc,free} to support memory allocation
from the restricted DMA pool.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 include/linux/swiotlb.h |  4 ++++
 kernel/dma/swiotlb.c    | 35 +++++++++++++++++++++++++++++++++--
 2 files changed, 37 insertions(+), 2 deletions(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 0c5a18d9cf89..e8cf49bd90c5 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -134,6 +134,10 @@ unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size);
+bool swiotlb_free(struct device *dev, struct page *page, size_t size);
+#endif /* CONFIG_DMA_RESTRICTED_POOL */
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index c4fc2e444e7a..648bfdde4b0c 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -457,8 +457,9 @@ static int find_slots(struct device *dev, phys_addr_t orig_addr,
 
 	index = wrap = wrap_index(mem, ALIGN(mem->index, stride));
 	do {
-		if ((slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
-		    (orig_addr & iotlb_align_mask)) {
+		if (orig_addr &&
+		    (slot_addr(tbl_dma_addr, index) & iotlb_align_mask) !=
+			    (orig_addr & iotlb_align_mask)) {
 			index = wrap_index(mem, index + 1);
 			continue;
 		}
@@ -704,6 +705,36 @@ late_initcall(swiotlb_create_default_debugfs);
 #endif
 
 #ifdef CONFIG_DMA_RESTRICTED_POOL
+struct page *swiotlb_alloc(struct device *dev, size_t size)
+{
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
+	phys_addr_t tlb_addr;
+	int index;
+
+	if (!mem)
+		return NULL;
+
+	index = find_slots(dev, 0, size);
+	if (index == -1)
+		return NULL;
+
+	tlb_addr = slot_addr(mem->start, index);
+
+	return pfn_to_page(PFN_DOWN(tlb_addr));
+}
+
+bool swiotlb_free(struct device *dev, struct page *page, size_t size)
+{
+	phys_addr_t tlb_addr = page_to_phys(page);
+
+	if (!is_swiotlb_buffer(dev, tlb_addr))
+		return false;
+
+	release_slots(dev, tlb_addr);
+
+	return true;
+}
+
 static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 				    struct device *dev)
 {
-- 
2.31.1.818.g46aad6cb9e-goog


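As a rough illustration of what swiotlb_alloc()/swiotlb_free() layer on top of, the slot bookkeeping can be modeled in userspace. This sketch is not the kernel code: the pool size, the simple first-fit search, and the find_slots()/release_slots() signatures here are illustrative stand-ins for the kernel's wrapping, stride-aligned search over io_tlb_mem.

```c
/*
 * Userspace sketch (NOT the kernel code): models the slot bookkeeping
 * that swiotlb_alloc()/swiotlb_free() rely on. The kernel uses
 * IO_TLB_SIZE (2 KiB) slots and a more elaborate wrapping/alignment
 * search; this is a plain first-fit scan for illustration.
 */
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define SLOT_SIZE  2048        /* mirrors IO_TLB_SIZE */
#define NR_SLOTS   32          /* illustrative pool size */

static bool slot_used[NR_SLOTS];

/*
 * First-fit search for enough contiguous free slots; returns the
 * starting index or -1, loosely mirroring find_slots(dev, 0, size)
 * being called with orig_addr == 0 for pool allocations.
 */
static int find_slots(size_t size)
{
    size_t nslots = (size + SLOT_SIZE - 1) / SLOT_SIZE;

    for (size_t i = 0; i + nslots <= NR_SLOTS; i++) {
        size_t j;

        for (j = 0; j < nslots && !slot_used[i + j]; j++)
            ;
        if (j == nslots) {
            for (j = 0; j < nslots; j++)
                slot_used[i + j] = true;
            return (int)i;
        }
    }
    return -1;
}

/* Counterpart of release_slots(): mark the slot run free again. */
static void release_slots(int index, size_t size)
{
    size_t nslots = (size + SLOT_SIZE - 1) / SLOT_SIZE;

    for (size_t j = 0; j < nslots; j++)
        slot_used[index + j] = false;
}
```

In this model, as in the patch, the allocation side hands back the start of the slot run and the free side only succeeds for addresses that actually fall inside the pool (the kernel gates that with is_swiotlb_buffer()).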

From xen-devel-bounces@lists.xenproject.org Thu May 27 13:04:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:04:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133296.248525 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFgY-0007Is-V7; Thu, 27 May 2021 13:04:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133296.248525; Thu, 27 May 2021 13:04:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFgY-0007Il-Rc; Thu, 27 May 2021 13:04:26 +0000
Received: by outflank-mailman (input) for mailman id 133296;
 Thu, 27 May 2021 13:04:25 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFgX-0005ED-NI
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:04:25 +0000
Received: from mail-pg1-x52b.google.com (unknown [2607:f8b0:4864:20::52b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id fe37d330-38dd-471c-8995-62dc68b67365;
 Thu, 27 May 2021 13:04:03 +0000 (UTC)
Received: by mail-pg1-x52b.google.com with SMTP id e22so3628023pgv.10
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:04:03 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id n1sm1934646pjv.40.2021.05.27.06.03.55
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:04:02 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fe37d330-38dd-471c-8995-62dc68b67365
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=MYsbpvk1elzDffjY4lVqlSBet3lNsQF2QtHREtnNje4=;
        b=IyvYtT1AB6E4xwxKSFDYkZ5n98xR0l1LZiUkJW0MW5CgH/i5aGIsnRF/JPxxLV994w
         6zBxh1X3gHavrt/AsvfReKnILPfzC/l6kkFOeK3VrBIogUw16ui2Ro+3Hw3QPFy2pWGM
         +/sNzLuWK+E6zh1p0Gxt9RmiN+wGdEuT6cNDQ=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=MYsbpvk1elzDffjY4lVqlSBet3lNsQF2QtHREtnNje4=;
        b=MqJGD+vBDelX55SvvdU4DoBwpTFbwBdYHQfzgHUg83IIGGkGOu9nwD2XxERVEdBReJ
         6x2QZnlBCe9jVf8ZviTBU2ySCQMeIlYlOAYwl2ilefARiMB7yVVEWVsMcoZAeETQooaF
         H0Bdi3NLR2ZC0kXgnaHyehznLIjL21XOLaYM5bgIcrxuth2nZj4Kukfh4KtfC/zh8oWo
         U+fFndDTX0JctkIVJwej16N58A1efEw/V/7m+5Dy7bxYgeT9JNNcovIajducGnjWMD5D
         lkzpBNdvMaVyuqFpjPQAvA2nhgn32+ef+tPcetNGl5TX7m9Tv8iKaUvw85Ml5br+bLvI
         Kn/w==
X-Gm-Message-State: AOAM532H/GlXmAcnQqDTznkZzuViENO86xE+xTg7lt0wvK818jZgvQHa
	y0h56BHnaPrdemutDFuDRv8ZFw==
X-Google-Smtp-Source: ABdhPJxiOdBGPcWtjBuPQLZxflfz/miVWdNfGLJRIpkM261jkyasjOonUIh30AqgExiX3771SeFH4Q==
X-Received: by 2002:a63:4d11:: with SMTP id a17mr3584089pgb.266.1622120642890;
        Thu, 27 May 2021 06:04:02 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 13/15] dma-direct: Allocate memory from restricted DMA pool if available
Date: Thu, 27 May 2021 21:03:27 +0800
Message-Id: <20210527130329.1853588-3-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org>
References: <20210527130329.1853588-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

The restricted DMA pool is preferred if available.

Restricted DMA pools provide a basic level of protection against the
DMA overwriting buffer contents at unexpected times. However, to protect
against general data leakage and system memory corruption, the system
needs to provide a way to lock down the memory access, e.g., an MPU.

Note that since coherent allocation needs remapping, one must set up
another device coherent pool via shared-dma-pool and use
dma_alloc_from_dev_coherent instead for atomic coherent allocation.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 kernel/dma/direct.c | 38 +++++++++++++++++++++++++++++---------
 1 file changed, 29 insertions(+), 9 deletions(-)

diff --git a/kernel/dma/direct.c b/kernel/dma/direct.c
index eb4098323bbc..0d521f78c7b9 100644
--- a/kernel/dma/direct.c
+++ b/kernel/dma/direct.c
@@ -78,6 +78,10 @@ static bool dma_coherent_ok(struct device *dev, phys_addr_t phys, size_t size)
 static void __dma_direct_free_pages(struct device *dev, struct page *page,
 				    size_t size)
 {
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	if (swiotlb_free(dev, page, size))
+		return;
+#endif
 	dma_free_contiguous(dev, page, size);
 }
 
@@ -92,7 +96,17 @@ static struct page *__dma_direct_alloc_pages(struct device *dev, size_t size,
 
 	gfp |= dma_direct_optimal_gfp_mask(dev, dev->coherent_dma_mask,
 					   &phys_limit);
-	page = dma_alloc_contiguous(dev, size, gfp);
+
+#ifdef CONFIG_DMA_RESTRICTED_POOL
+	page = swiotlb_alloc(dev, size);
+	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
+		__dma_direct_free_pages(dev, page, size);
+		page = NULL;
+	}
+#endif
+
+	if (!page)
+		page = dma_alloc_contiguous(dev, size, gfp);
 	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
 		dma_free_contiguous(dev, page, size);
 		page = NULL;
@@ -148,7 +162,7 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 		gfp |= __GFP_NOWARN;
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		page = __dma_direct_alloc_pages(dev, size, gfp & ~__GFP_ZERO);
 		if (!page)
 			return NULL;
@@ -161,18 +175,23 @@ void *dma_direct_alloc(struct device *dev, size_t size,
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev))
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev))
 		return arch_dma_alloc(dev, size, dma_handle, gfp, attrs);
 
 	/*
 	 * Remapping or decrypting memory may block. If either is required and
 	 * we can't block, allocate the memory from the atomic pools.
+	 * If restricted DMA (i.e., is_dev_swiotlb_force) is required, one must
+	 * set up another device coherent pool by shared-dma-pool and use
+	 * dma_alloc_from_dev_coherent instead.
 	 */
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
 	    !gfpflags_allow_blocking(gfp) &&
 	    (force_dma_unencrypted(dev) ||
-	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev))))
+	     (IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
+	      !dev_is_dma_coherent(dev))) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	/* we always manually zero the memory once we are done */
@@ -253,15 +272,15 @@ void dma_direct_free(struct device *dev, size_t size,
 	unsigned int page_order = get_order(size);
 
 	if ((attrs & DMA_ATTR_NO_KERNEL_MAPPING) &&
-	    !force_dma_unencrypted(dev)) {
+	    !force_dma_unencrypted(dev) && !is_dev_swiotlb_force(dev)) {
 		/* cpu_addr is a struct page cookie, not a kernel address */
 		dma_free_contiguous(dev, cpu_addr, size);
 		return;
 	}
 
 	if (!IS_ENABLED(CONFIG_ARCH_HAS_DMA_SET_UNCACHED) &&
-	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) &&
-	    !dev_is_dma_coherent(dev)) {
+	    !IS_ENABLED(CONFIG_DMA_DIRECT_REMAP) && !dev_is_dma_coherent(dev) &&
+	    !is_dev_swiotlb_force(dev)) {
 		arch_dma_free(dev, size, cpu_addr, dma_addr, attrs);
 		return;
 	}
@@ -289,7 +308,8 @@ struct page *dma_direct_alloc_pages(struct device *dev, size_t size,
 	void *ret;
 
 	if (IS_ENABLED(CONFIG_DMA_COHERENT_POOL) &&
-	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp))
+	    force_dma_unencrypted(dev) && !gfpflags_allow_blocking(gfp) &&
+	    !is_dev_swiotlb_force(dev))
 		return dma_direct_alloc_from_pool(dev, size, dma_handle, gfp);
 
 	page = __dma_direct_alloc_pages(dev, size, gfp);
-- 
2.31.1.818.g46aad6cb9e-goog


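The fallback order this patch adds to __dma_direct_alloc_pages() can be sketched in plain C. Everything below is an illustrative model, not kernel code: struct dev and the two allocator stubs stand in for struct device, swiotlb_alloc(), and dma_alloc_contiguous(), and the addresses are made up for the test.

```c
/*
 * Userspace sketch of the allocation order: try the per-device
 * restricted pool first, discard the result if it fails the
 * coherent-mask check (the patch frees it via __dma_direct_free_pages(),
 * here we simply drop it), then fall back to the contiguous allocator.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct dev {
    uint64_t coherent_dma_mask;
    uint64_t pool_base;        /* restricted pool base, 0 = no pool */
};

/* Stand-in for dma_coherent_ok(): is the address reachable by the device? */
static bool dma_coherent_ok(const struct dev *d, uint64_t phys)
{
    return phys <= d->coherent_dma_mask;
}

/* Stand-in for swiotlb_alloc(): hands out the pool base if a pool exists. */
static uint64_t restricted_alloc(const struct dev *d)
{
    return d->pool_base;
}

/* Stand-in for dma_alloc_contiguous(): always succeeds at a low address. */
static uint64_t contiguous_alloc(void)
{
    return 0x1000;
}

static uint64_t alloc_pages(const struct dev *d)
{
    uint64_t phys = restricted_alloc(d);

    /* Restricted-pool memory outside the device's mask is unusable. */
    if (phys && !dma_coherent_ok(d, phys))
        phys = 0;
    if (!phys)
        phys = contiguous_alloc();
    if (phys && !dma_coherent_ok(d, phys))
        return 0;
    return phys;
}
```

The design choice mirrored here is that the restricted pool is preferred but never mandatory for correctness of the coherent-mask check: a pool placed beyond the device's DMA mask simply causes a fall-through to the normal path.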

From xen-devel-bounces@lists.xenproject.org Thu May 27 13:04:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:04:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133299.248536 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFgj-0007ia-7r; Thu, 27 May 2021 13:04:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133299.248536; Thu, 27 May 2021 13:04:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFgj-0007iR-3h; Thu, 27 May 2021 13:04:37 +0000
Received: by outflank-mailman (input) for mailman id 133299;
 Thu, 27 May 2021 13:04:35 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFgh-0005ED-Nf
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:04:35 +0000
Received: from mail-pg1-x529.google.com (unknown [2607:f8b0:4864:20::529])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8da5f5d4-b8fc-426f-8d4a-7c3b8316b614;
 Thu, 27 May 2021 13:04:12 +0000 (UTC)
Received: by mail-pg1-x529.google.com with SMTP id l70so3663155pga.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:04:12 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id w197sm1969835pfc.5.2021.05.27.06.04.04
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:04:11 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8da5f5d4-b8fc-426f-8d4a-7c3b8316b614
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=t98IQ0DRt9rdYTgtaruQg3/PAS3qE8eaMmCBaIqRh14=;
        b=GAjg2iwD8z4KXC0wmxl+b72WGOgo3axuh1N2ckUZTa0L2t7lEjUG/rd9vUciIgjM6/
         BBoe8qhQS9McLcHa3Vql2S8tpN9ukDLaEDnz4WKO3KFwq/co1Y7IP35DmK8g86ONb0cC
         2QkhzvjQas2dIg52wH3GUtMDzBV3jAGcZRtko=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=t98IQ0DRt9rdYTgtaruQg3/PAS3qE8eaMmCBaIqRh14=;
        b=Q3lJ/n7s7NRJsRcX5covPzBvlRBpktRhFvStBQjgKSLprWxAv0w7A2b4qnxN2TdSDb
         t9TWHt51N7S+Ipky2YFoyRc475Y3du7SNyheonQO8Wz4/ZY2/7/zZIGwk4+Ui373skJT
         DTieml/eD0ObIV/Z4nVEGC3Wq3Pt3VUxhPa2y+hRQgW+jnBOP9qHsEL+9ILMGO/8hfsU
         JjlDj5SovDsSOTuuSu322oXzfAD5rufVt5ZmZeDAwwbylerhbQrAhF1fOEANSwzty7rr
         A19dyzQm7YeTYb7emTysghQ8tkZhC+rDCYYHszpQCwQyO2vL0gPug4Lz/31Oy96HJ6Ha
         0Ibg==
X-Gm-Message-State: AOAM5305e+02akOqwYoiYWO5fIVeJdOTGQyj0l3S2tLebYkPsPgLNdNX
	1EPXUNZ4G2cRpgNufbKP+vdkrA==
X-Google-Smtp-Source: ABdhPJz5SQWTqZi0xc7pup6EKCV3Dg34qwCaD2h6fUXISa0uHIOvWKamMjceM9SgliQmqLBbNqrp+Q==
X-Received: by 2002:a62:cd49:0:b029:2d8:ae8d:6a9f with SMTP id o70-20020a62cd490000b02902d8ae8d6a9fmr3674431pfg.50.1622120651592;
        Thu, 27 May 2021 06:04:11 -0700 (PDT)
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 14/15] dt-bindings: of: Add restricted DMA pool
Date: Thu, 27 May 2021 21:03:28 +0800
Message-Id: <20210527130329.1853588-4-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org>
References: <20210527130329.1853588-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

Introduce a new compatible string, restricted-dma-pool, for restricted
DMA. The address and length of the restricted DMA memory region can be
specified via a restricted-dma-pool entry in the reserved-memory node.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 .../reserved-memory/reserved-memory.txt       | 36 +++++++++++++++++--
 1 file changed, 33 insertions(+), 3 deletions(-)

diff --git a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
index e8d3096d922c..46804f24df05 100644
--- a/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
+++ b/Documentation/devicetree/bindings/reserved-memory/reserved-memory.txt
@@ -51,6 +51,23 @@ compatible (optional) - standard definition
           used as a shared pool of DMA buffers for a set of devices. It can
           be used by an operating system to instantiate the necessary pool
           management subsystem if necessary.
+        - restricted-dma-pool: This indicates a region of memory meant to be
+          used as a pool of restricted DMA buffers for a set of devices. The
+          memory region would be the only region accessible to those devices.
+          When using this, the no-map and reusable properties must not be set,
+          so the operating system can create a virtual mapping that will be used
+          for synchronization. The main purpose for restricted DMA is to
+          mitigate the lack of DMA access control on systems without an IOMMU,
+          which could result in the DMA accessing the system memory at
+          unexpected times and/or unexpected addresses, possibly leading to data
+          leakage or corruption. The feature on its own provides a basic level
+          of protection against the DMA overwriting buffer contents at
+          unexpected times. However, to protect against general data leakage and
+          system memory corruption, the system needs to provide a way to lock down
+          the memory access, e.g., MPU. Note that since coherent allocation
+          needs remapping, one must set up another device coherent pool by
+          shared-dma-pool and use dma_alloc_from_dev_coherent instead for atomic
+          coherent allocation.
         - vendor specific string in the form <vendor>,[<device>-]<usage>
 no-map (optional) - empty property
     - Indicates the operating system must not create a virtual mapping
@@ -85,10 +102,11 @@ memory-region-names (optional) - a list of names, one for each corresponding
 
 Example
 -------
-This example defines 3 contiguous regions are defined for Linux kernel:
+This example defines 4 contiguous regions for the Linux kernel:
 one default of all device drivers (named linux,cma@72000000 and 64MiB in size),
-one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB), and
-one for multimedia processing (named multimedia-memory@77000000, 64MiB).
+one dedicated to the framebuffer device (named framebuffer@78000000, 8MiB),
+one for multimedia processing (named multimedia-memory@77000000, 64MiB), and
+one for the restricted DMA pool (named restricted_dma_reserved@0x50000000, 64MiB).
 
 / {
 	#address-cells = <1>;
@@ -120,6 +138,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 			compatible = "acme,multimedia-memory";
 			reg = <0x77000000 0x4000000>;
 		};
+
+		restricted_dma_reserved: restricted_dma_reserved {
+			compatible = "restricted-dma-pool";
+			reg = <0x50000000 0x4000000>;
+		};
 	};
 
 	/* ... */
@@ -138,4 +161,11 @@ one for multimedia processing (named multimedia-memory@77000000, 64MiB).
 		memory-region = <&multimedia_reserved>;
 		/* ... */
 	};
+
+	pcie_device: pcie_device@0,0 {
+		reg = <0x83010000 0x0 0x00000000 0x0 0x00100000
+		       0x83010000 0x0 0x00100000 0x0 0x00100000>;
+		memory-region = <&restricted_dma_reserved>;
+		/* ... */
+	};
 };
-- 
2.31.1.818.g46aad6cb9e-goog


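The constraints stated in the binding text (no-map and reusable must not be set on a restricted-dma-pool region) can be captured by a small validity check. This is a userspace sketch under stated assumptions, not the kernel's rmem_swiotlb_device_init(); the page-alignment requirement on base and size is what an implementation would likely enforce, and struct rmem here is a simplified stand-in for struct reserved_mem.

```c
/*
 * Userspace sketch: validate a candidate restricted-dma-pool region
 * against the binding rules. Illustrative only; field names and the
 * 4 KiB page size are assumptions for this model.
 */
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

struct rmem {
    uint64_t base, size;
    bool no_map;       /* "no-map" property present */
    bool reusable;     /* "reusable" property present */
};

/* Returns true if the region is acceptable as a restricted DMA pool. */
static bool restricted_pool_ok(const struct rmem *r)
{
    /*
     * The OS must be able to create a kernel mapping for bounce-buffer
     * synchronization, so neither no-map nor reusable may be set.
     */
    if (r->no_map || r->reusable)
        return false;
    /* A real implementation would likely also demand page alignment. */
    return (r->base % 4096 == 0) && (r->size % 4096 == 0) && r->size > 0;
}
```

Applied to the example in the patch, the restricted_dma_reserved region at 0x50000000 with size 0x4000000 passes this check.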

From xen-devel-bounces@lists.xenproject.org Thu May 27 13:06:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:06:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133310.248550 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFiT-0000QJ-Pi; Thu, 27 May 2021 13:06:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133310.248550; Thu, 27 May 2021 13:06:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFiT-0000QC-MP; Thu, 27 May 2021 13:06:25 +0000
Received: by outflank-mailman (input) for mailman id 133310;
 Thu, 27 May 2021 13:06:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lmFiT-0000Py-7q
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:06:25 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmFiR-0008CS-CC; Thu, 27 May 2021 13:06:23 +0000
Received: from [54.239.6.186] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmFiR-000129-5t; Thu, 27 May 2021 13:06:23 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=bYm8+YoUMszRol+8NW8IYOBhNQiC/4d4yosbw40g3sA=; b=OVgsfzlXzZkMIIgEdg/fJgEe9F
	oK2xb4XwX7tk8jaIZQxr/5I1sqnTYUQZ1sJHqezrsAXj5J+l0oZ17NAQJ0irMAH49MvRncmC5pn1C
	X4tdG1UEmWi+hdns1FYUkAaJ2wtVyLGEZ5LM8h+BDFOaXOPBWpbLFeh8FksYF7ykAHy0=;
Subject: Re: [PATCH v2 07/12] mm: allow page scrubbing routine(s) to be arch
 controlled
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <49c46d4d-4eaa-16a8-ccc8-c873b0b1d092@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <b1c10ad9-2cef-031d-39c2-8d2013b3e0b5@xen.org>
Date: Thu, 27 May 2021 14:06:20 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <49c46d4d-4eaa-16a8-ccc8-c873b0b1d092@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 27/05/2021 13:33, Jan Beulich wrote:
> Especially when dealing with large amounts of memory, memset() may not
> be very efficient; this can be bad enough that even for debug builds a
> custom function is warranted. We additionally want to distinguish "hot"
> and "cold" cases.

Do you have any benchmark showing the performance improvement?

> 
> Keep the default fallback to clear_page_*() in common code; this may
> want to be revisited down the road.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v2: New.
> ---
> The choice between hot and cold in scrub_one_page()'s callers is
> certainly up for discussion / improvement.

To get the discussion started, can you explain how you made the decision
between hot/cold? This should also be written down in the commit
message.

> 
> --- a/xen/arch/x86/Makefile
> +++ b/xen/arch/x86/Makefile
> @@ -55,6 +55,7 @@ obj-y += percpu.o
>   obj-y += physdev.o
>   obj-$(CONFIG_COMPAT) += x86_64/physdev.o
>   obj-y += psr.o
> +obj-bin-$(CONFIG_DEBUG) += scrub_page.o
>   obj-y += setup.o
>   obj-y += shutdown.o
>   obj-y += smp.o
> --- /dev/null
> +++ b/xen/arch/x86/scrub_page.S
> @@ -0,0 +1,41 @@
> +        .file __FILE__
> +
> +#include <asm/asm_defns.h>
> +#include <xen/page-size.h>
> +#include <xen/scrub.h>
> +
> +ENTRY(scrub_page_cold)
> +        mov     $PAGE_SIZE/32, %ecx
> +        mov     $SCRUB_PATTERN, %rax
> +
> +0:      movnti  %rax,   (%rdi)
> +        movnti  %rax,  8(%rdi)
> +        movnti  %rax, 16(%rdi)
> +        movnti  %rax, 24(%rdi)
> +        add     $32, %rdi
> +        sub     $1, %ecx
> +        jnz     0b
> +
> +        sfence
> +        ret
> +        .type scrub_page_cold, @function
> +        .size scrub_page_cold, . - scrub_page_cold
> +
> +        .macro scrub_page_stosb
> +        mov     $PAGE_SIZE, %ecx
> +        mov     $SCRUB_BYTE_PATTERN, %eax
> +        rep stosb
> +        ret
> +        .endm
> +
> +        .macro scrub_page_stosq
> +        mov     $PAGE_SIZE/8, %ecx
> +        mov     $SCRUB_PATTERN, %rax
> +        rep stosq
> +        ret
> +        .endm
> +
> +ENTRY(scrub_page_hot)
> +        ALTERNATIVE scrub_page_stosq, scrub_page_stosb, X86_FEATURE_ERMS
> +        .type scrub_page_hot, @function
> +        .size scrub_page_hot, . - scrub_page_hot

 From the commit message, it is not clear how the implementation for
hot/cold was chosen. Can you outline in the commit message what the
assumptions are for each helper?

This will be helpful for anyone who may notice a regression, or even for
other arches if they need to implement it.
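For reference, the debug-build scrub semantics under discussion, filling freed pages with a non-zero byte pattern so callers cannot rely on zeroed allocations, can be sketched in userspace. The helpers below are illustrative models only: the real x86 scrub_page_cold() uses non-temporal movnti stores plus an sfence, and scrub_page_hot() uses rep stos, while both are approximated here with memset.

```c
/*
 * Userspace sketch of the hot/cold scrub split. The pattern constants
 * match the quoted Xen code; the store strategy differences (cached
 * rep stos vs non-temporal movnti) are not modeled here.
 */
#include <assert.h>
#include <string.h>

#define PAGE_SIZE          4096
#define SCRUB_PATTERN      0xc2c2c2c2c2c2c2c2ULL  /* repeating byte series */
#define SCRUB_BYTE_PATTERN (SCRUB_PATTERN & 0xff)

/* Models the "hot" path: data likely to be touched again soon. */
static void scrub_page_hot(void *page)
{
    memset(page, SCRUB_BYTE_PATTERN, PAGE_SIZE);
}

/* Models the "cold" path; the x86 version bypasses the cache. */
static void scrub_page_cold(void *page)
{
    memset(page, SCRUB_BYTE_PATTERN, PAGE_SIZE);
}
```

The behavioral contract is identical for both paths, every byte of the page ends up as 0xc2 in debug builds, which is why the hot/cold choice is purely a performance question.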

> --- a/xen/common/page_alloc.c
> +++ b/xen/common/page_alloc.c
> @@ -124,6 +124,7 @@
>   #include <xen/types.h>
>   #include <xen/lib.h>
>   #include <xen/sched.h>
> +#include <xen/scrub.h>
>   #include <xen/spinlock.h>
>   #include <xen/mm.h>
>   #include <xen/param.h>
> @@ -750,27 +751,31 @@ static void page_list_add_scrub(struct p
>           page_list_add(pg, &heap(node, zone, order));
>   }
>   
> -/* SCRUB_PATTERN needs to be a repeating series of bytes. */
> -#ifndef NDEBUG
> -#define SCRUB_PATTERN        0xc2c2c2c2c2c2c2c2ULL
> -#else
> -#define SCRUB_PATTERN        0ULL
> +/*
> + * While in debug builds we want callers to avoid relying on allocations
> + * returning zeroed pages, for a production build, clear_page_*() is the
> + * fastest way to scrub.
> + */
> +#ifndef CONFIG_DEBUG
> +# undef  scrub_page_hot
> +# define scrub_page_hot clear_page_hot
> +# undef  scrub_page_cold
> +# define scrub_page_cold clear_page_cold
>   #endif
> -#define SCRUB_BYTE_PATTERN   (SCRUB_PATTERN & 0xff)
>   
> -static void scrub_one_page(const struct page_info *pg)
> +static void scrub_one_page(const struct page_info *pg, bool cold)
>   {
> +    void *ptr;
> +
>       if ( unlikely(pg->count_info & PGC_broken) )
>           return;
>   
> -#ifndef NDEBUG
> -    /* Avoid callers relying on allocations returning zeroed pages. */
> -    unmap_domain_page(memset(__map_domain_page(pg),
> -                             SCRUB_BYTE_PATTERN, PAGE_SIZE));
> -#else
> -    /* For a production build, clear_page() is the fastest way to scrub. */
> -    clear_domain_page(_mfn(page_to_mfn(pg)));
> -#endif
> +    ptr = __map_domain_page(pg);
> +    if ( cold )
> +        scrub_page_cold(ptr);
> +    else
> +        scrub_page_hot(ptr);
> +    unmap_domain_page(ptr);
>   }
>   
>   static void poison_one_page(struct page_info *pg)
> @@ -1046,12 +1051,14 @@ static struct page_info *alloc_heap_page
>       if ( first_dirty != INVALID_DIRTY_IDX ||
>            (scrub_debug && !(memflags & MEMF_no_scrub)) )
>       {
> +        bool cold = d && d != current->domain;

So the assumption is that if the domain is not running, then the content 
is not in the cache. Is that correct?

> +
>           for ( i = 0; i < (1U << order); i++ )
>           {
>               if ( test_and_clear_bit(_PGC_need_scrub, &pg[i].count_info) )
>               {
>                   if ( !(memflags & MEMF_no_scrub) )
> -                    scrub_one_page(&pg[i]);
> +                    scrub_one_page(&pg[i], cold);
>   
>                   dirty_cnt++;
>               }
> @@ -1308,7 +1315,7 @@ bool scrub_free_pages(void)
>                   {
>                       if ( test_bit(_PGC_need_scrub, &pg[i].count_info) )
>                       {
> -                        scrub_one_page(&pg[i]);
> +                        scrub_one_page(&pg[i], true);
>                           /*
>                            * We can modify count_info without holding heap
>                            * lock since we effectively locked this buddy by
> @@ -1947,7 +1954,7 @@ static void __init smp_scrub_heap_pages(
>           if ( !mfn_valid(_mfn(mfn)) || !page_state_is(pg, free) )
>               continue;
>   
> -        scrub_one_page(pg);
> +        scrub_one_page(pg, true);
>       }
>   }
>   
> --- a/xen/include/asm-arm/page.h
> +++ b/xen/include/asm-arm/page.h
> @@ -135,6 +135,12 @@ extern size_t dcache_line_bytes;
>   
>   #define copy_page(dp, sp) memcpy(dp, sp, PAGE_SIZE)
>   
> +#define clear_page_hot  clear_page
> +#define clear_page_cold clear_page
> +
> +#define scrub_page_hot(page) memset(page, SCRUB_BYTE_PATTERN, PAGE_SIZE)
> +#define scrub_page_cold      scrub_page_hot
> +
>   static inline size_t read_dcache_line_bytes(void)
>   {
>       register_t ctr;
> --- a/xen/include/asm-x86/page.h
> +++ b/xen/include/asm-x86/page.h
> @@ -239,6 +239,11 @@ void copy_page_sse2(void *, const void *
>   #define clear_page(_p)      clear_page_cold(_p)
>   #define copy_page(_t, _f)   copy_page_sse2(_t, _f)
>   
> +#ifdef CONFIG_DEBUG
> +void scrub_page_hot(void *);
> +void scrub_page_cold(void *);
> +#endif
> +
>   /* Convert between Xen-heap virtual addresses and machine addresses. */
>   #define __pa(x)             (virt_to_maddr(x))
>   #define __va(x)             (maddr_to_virt(x))
> --- /dev/null
> +++ b/xen/include/xen/scrub.h
> @@ -0,0 +1,24 @@
> +#ifndef __XEN_SCRUB_H__
> +#define __XEN_SCRUB_H__
> +
> +#include <xen/const.h>
> +
> +/* SCRUB_PATTERN needs to be a repeating series of bytes. */
> +#ifdef CONFIG_DEBUG
> +# define SCRUB_PATTERN       _AC(0xc2c2c2c2c2c2c2c2,ULL)
> +#else
> +# define SCRUB_PATTERN       _AC(0,ULL)
> +#endif
> +#define SCRUB_BYTE_PATTERN   (SCRUB_PATTERN & 0xff)
> +
> +#endif /* __XEN_SCRUB_H__ */
> +
> +/*
> + * Local variables:
> + * mode: C
> + * c-file-style: "BSD"
> + * c-basic-offset: 4
> + * tab-width: 4
> + * indent-tabs-mode: nil
> + * End:
> + */
> 
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:09:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:09:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133337.248561 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFlO-0001B1-8L; Thu, 27 May 2021 13:09:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133337.248561; Thu, 27 May 2021 13:09:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFlO-0001Au-56; Thu, 27 May 2021 13:09:26 +0000
Received: by outflank-mailman (input) for mailman id 133337;
 Thu, 27 May 2021 13:09:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmFlM-0001Am-M4
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:09:24 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5d77f7bb-cd7e-4b05-8793-c849af92b23e;
 Thu, 27 May 2021 13:09:23 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A78C8218DD;
 Thu, 27 May 2021 13:09:22 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 7DBDC11A98;
 Thu, 27 May 2021 13:09:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5d77f7bb-cd7e-4b05-8793-c849af92b23e
Subject: Re: [PATCH v2 01/12] x86: introduce ioremap_wc()
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <20abac99-609c-f4f6-1242-c79919f4c317@suse.com>
 <b8035805-4f44-18ce-f4cb-4ce1d3c594fc@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d7019879-037b-7945-4a8a-5a8252e5922a@suse.com>
Date: Thu, 27 May 2021 15:09:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <b8035805-4f44-18ce-f4cb-4ce1d3c594fc@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.05.2021 14:48, Julien Grall wrote:
> On 27/05/2021 13:30, Jan Beulich wrote:
>> --- a/xen/arch/x86/mm.c
>> +++ b/xen/arch/x86/mm.c
>> @@ -5881,6 +5881,20 @@ void __iomem *ioremap(paddr_t pa, size_t
>>       return (void __force __iomem *)va;
>>   }
>>   
>> +void __iomem *__init ioremap_wc(paddr_t pa, size_t len)
>> +{
>> +    mfn_t mfn = _mfn(PFN_DOWN(pa));
>> +    unsigned int offs = pa & (PAGE_SIZE - 1);
>> +    unsigned int nr = PFN_UP(offs + len);
>> +    void *va;
>> +
>> +    WARN_ON(page_is_ram_type(mfn_x(mfn), RAM_TYPE_CONVENTIONAL));
>> +
>> +    va = __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR_WC, VMAP_DEFAULT);
>> +
>> +    return (void __force __iomem *)(va + offs);
>> +}
> 
> Arm is already providing ioremap_wc() which is a wrapper to 
> ioremap_attr().

I did notice this, yes.

> Can this be moved to the common code to avoid duplication?

If by "this" you mean ioremap_attr(), then I wasn't convinced we want
a function of this name on x86. In particular you may note that
x86's ioremap() is sort of the equivalent of Arm's ioremap_nocache(),
but it differs from the new ioremap_wc() by more than just the
PTE attributes.

Also I was specifically asked to make ioremap_wc() __init; ioremap()
cannot be, because of at least the use from pci_vtd_quirk().

Plus I'd need to clean up Arm's lack of __iomem if I wanted to fold
things. Or wait - it's the declaration and definition that are out of
sync there, i.e. a pre-existing issue.

Bottom line - while I did consider folding, I don't think that's
feasible at this point in time.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:13:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:13:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133344.248575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFp0-0002XQ-R8; Thu, 27 May 2021 13:13:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133344.248575; Thu, 27 May 2021 13:13:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFp0-0002XD-NR; Thu, 27 May 2021 13:13:10 +0000
Received: by outflank-mailman (input) for mailman id 133344;
 Thu, 27 May 2021 13:13:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=spZp=KW=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lmFgm-0005ED-Nu
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:04:40 +0000
Received: from mail-pf1-x433.google.com (unknown [2607:f8b0:4864:20::433])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 803279d5-934d-40d8-bef2-a2cb7f54f081;
 Thu, 27 May 2021 13:04:20 +0000 (UTC)
Received: by mail-pf1-x433.google.com with SMTP id q25so548388pfn.1
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 06:04:20 -0700 (PDT)
Received: from localhost ([2401:fa00:95:205:a93:378d:9a9e:3b70])
 by smtp.gmail.com with UTF8SMTPSA id o3sm1937619pgh.22.2021.05.27.06.04.12
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Thu, 27 May 2021 06:04:19 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 803279d5-934d-40d8-bef2-a2cb7f54f081
From: Claire Chang <tientzu@chromium.org>
To: Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>
Cc: benh@kernel.crashing.org,
	paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org,
	Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com,
	xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>,
	mingo@kernel.org,
	bauerman@linux.ibm.com,
	peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>,
	linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	tfiga@chromium.org,
	bskeggs@redhat.com,
	bhelgaas@google.com,
	chris@chris-wilson.co.uk,
	tientzu@chromium.org,
	daniel@ffwll.ch,
	airlied@linux.ie,
	dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com,
	jxgao@google.com,
	joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com,
	rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: [PATCH v8 15/15] of: Add plumbing for restricted DMA pool
Date: Thu, 27 May 2021 21:03:29 +0800
Message-Id: <20210527130329.1853588-5-tientzu@chromium.org>
X-Mailer: git-send-email 2.31.1.818.g46aad6cb9e-goog
In-Reply-To: <20210527130329.1853588-1-tientzu@chromium.org>
References: <20210527130329.1853588-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

If a device is not behind an IOMMU, we look up the device node and set
up the restricted DMA when a restricted-dma-pool is present.

Signed-off-by: Claire Chang <tientzu@chromium.org>
---
 drivers/of/address.c    | 33 +++++++++++++++++++++++++++++++++
 drivers/of/device.c     |  3 +++
 drivers/of/of_private.h |  6 ++++++
 3 files changed, 42 insertions(+)

diff --git a/drivers/of/address.c b/drivers/of/address.c
index aca94c348bd4..6cc7eaaf7e11 100644
--- a/drivers/of/address.c
+++ b/drivers/of/address.c
@@ -8,6 +8,7 @@
 #include <linux/logic_pio.h>
 #include <linux/module.h>
 #include <linux/of_address.h>
+#include <linux/of_reserved_mem.h>
 #include <linux/pci.h>
 #include <linux/pci_regs.h>
 #include <linux/sizes.h>
@@ -1112,6 +1113,38 @@ bool of_dma_is_coherent(struct device_node *np)
 }
 EXPORT_SYMBOL_GPL(of_dma_is_coherent);
 
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np)
+{
+	struct device_node *node, *of_node = dev->of_node;
+	int count, i;
+
+	count = of_property_count_elems_of_size(of_node, "memory-region",
+						sizeof(u32));
+	/*
+	 * If dev->of_node doesn't exist or doesn't contain memory-region, try
+	 * the OF node having DMA configuration.
+	 */
+	if (count <= 0) {
+		of_node = np;
+		count = of_property_count_elems_of_size(
+			of_node, "memory-region", sizeof(u32));
+	}
+
+	for (i = 0; i < count; i++) {
+		node = of_parse_phandle(of_node, "memory-region", i);
+		/*
+		 * There might be multiple memory regions, but only one
+		 * restricted-dma-pool region is allowed.
+		 */
+		if (of_device_is_compatible(node, "restricted-dma-pool") &&
+		    of_device_is_available(node))
+			return of_reserved_mem_device_init_by_idx(dev, of_node,
+								  i);
+	}
+
+	return 0;
+}
+
 /**
  * of_mmio_is_nonposted - Check if device uses non-posted MMIO
  * @np:	device node
diff --git a/drivers/of/device.c b/drivers/of/device.c
index c5a9473a5fb1..2defdca418ec 100644
--- a/drivers/of/device.c
+++ b/drivers/of/device.c
@@ -165,6 +165,9 @@ int of_dma_configure_id(struct device *dev, struct device_node *np,
 
 	arch_setup_dma_ops(dev, dma_start, size, iommu, coherent);
 
+	if (!iommu)
+		return of_dma_set_restricted_buffer(dev, np);
+
 	return 0;
 }
 EXPORT_SYMBOL_GPL(of_dma_configure_id);
diff --git a/drivers/of/of_private.h b/drivers/of/of_private.h
index d717efbd637d..8fde97565d11 100644
--- a/drivers/of/of_private.h
+++ b/drivers/of/of_private.h
@@ -163,12 +163,18 @@ struct bus_dma_region;
 #if defined(CONFIG_OF_ADDRESS) && defined(CONFIG_HAS_DMA)
 int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map);
+int of_dma_set_restricted_buffer(struct device *dev, struct device_node *np);
 #else
 static inline int of_dma_get_range(struct device_node *np,
 		const struct bus_dma_region **map)
 {
 	return -ENODEV;
 }
+static inline int of_dma_set_restricted_buffer(struct device *dev,
+					       struct device_node *np)
+{
+	return -ENODEV;
+}
 #endif
 
 #endif /* _LINUX_OF_PRIVATE_H */
-- 
2.31.1.818.g46aad6cb9e-goog



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:23:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:23:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133355.248598 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFyn-0004Cq-0h; Thu, 27 May 2021 13:23:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133355.248598; Thu, 27 May 2021 13:23:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmFym-0004Cj-Tm; Thu, 27 May 2021 13:23:16 +0000
Received: by outflank-mailman (input) for mailman id 133355;
 Thu, 27 May 2021 13:23:15 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ln4B=KW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmFyk-00047U-NL
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:23:14 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 2ea5061f-4aa9-45f8-8b17-4b9dccad93c2;
 Thu, 27 May 2021 13:23:13 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2ea5061f-4aa9-45f8-8b17-4b9dccad93c2
Date: Thu, 27 May 2021 15:23:02 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Olaf Hering
	<olaf@aepfle.de>
Subject: Re: [PATCH] x86/AMD: expose SYSCFG, TOM, and TOM2 to Dom0
Message-ID: <YK+dNgom3cVzkcFF@Air-de-Roger>
References: <c5764274-1257-809e-a2a7-d87b9d0fe675@suse.com>
 <YK9ZXJuPk1G5SGnK@Air-de-Roger>
 <b6693807-95cb-7925-587d-1e1e2db8c798@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <b6693807-95cb-7925-587d-1e1e2db8c798@suse.com>
X-ClientProxiedBy: MR2P264CA0169.FRAP264.PROD.OUTLOOK.COM (2603:10a6:501::8)
 To DS7PR03MB5608.namprd03.prod.outlook.com (2603:10b6:5:2c9::18)
MIME-Version: 1.0

On Thu, May 27, 2021 at 12:41:51PM +0200, Jan Beulich wrote:
> On 27.05.2021 10:33, Roger Pau Monné wrote:
> > On Wed, May 26, 2021 at 02:59:00PM +0200, Jan Beulich wrote:
> >> Sufficiently old Linux (3.12-ish) accesses these MSRs in an unguarded
> >> manner. Furthermore these MSRs, at least on Fam11 and older CPUs, are
> >> also consulted by modern Linux, and their (bogus) built-in zapping of
> >> #GP faults from MSR accesses leads to it effectively reading zero
> >> instead of the intended values, which are relevant for PCI BAR placement
> >> (which ought to all live in MMIO-type space, not in DRAM-type one).
> >>
> >> For SYSCFG, only certain bits get exposed. In fact, whether to expose
> >> MtrrVarDramEn is debatable: It controls use of not just TOM, but also
> >> the IORRs. Introduce (consistently named) constants for the bits we're
> >> interested in and use them in pre-existing code as well.
> > 
> > I think we should also allow access to the IORRs MSRs for coherency
> > (c001001{6,9}) for the hardware domain.
> 
> Hmm, originally I was under the impression that these could conceivably
> be written by OSes, and hence would want dealing with separately. But
> upon re-reading I see that they are supposed to be set by the BIOS alone.
> So yes, let me add them for read access, taking care of the limitation
> that I had to spell out.
> 
> This raises the question then though whether to also include SMMAddr
> and SMMMask in the set - the former does get accessed by Linux as well,
> and was one of the reasons for needing 6eef0a99262c ("x86/PV:
> conditionally avoid raising #GP for early guest MSR reads").

That seems fine, we might also want SMM_BASE?

> 
> Especially for SMMAddr, and maybe also for IORR_BASE, returning zero
> for DomU-s might be acceptable. The respective masks, however, can
> imo not sensibly be returned as zero. Hence even there I'd leave DomU
> side handling (see below) for a later time.

Sure. I think for consistency we should however enable reading the
hardware IORR MSRs for the hardware domain, or else returning
MtrrVarDramEn set is likely to cause trouble as the guest could assume
IORRs to be unconditionally present.

> >> As a welcome side effect, verbosity on/of debug builds gets (perhaps
> >> significantly) reduced.
> >>
> >> Note that at least as far as those MSR accesses by Linux are concerned,
> >> there's no similar issue for DomU-s, as the accesses sit behind PCI
> >> device matching logic. The checked for devices would never be exposed to
> >> DomU-s in the first place. Nevertheless I think that at least for HVM we
> >> should return sensible values, not 0 (as svm_msr_read_intercept() does
> >> right now). The intended values may, however, need to be determined by
> >> hvmloader, and then get made known to Xen.
> > 
> > Could we maybe come up with a fixed memory layout that hvmloader had
> > to respect?
> > 
> > Ie: DRAM from 0 to 3G, MMIO from 3G to 4G, and then the remaining
> > DRAM from 4G in a contiguous single block?
> > 
> > hvmloader would have to place BARs that don't fit in the 3G-4G hole at
> > the end of DRAM (ie: after TOM2).
> 
> Such a fixed scheme may be too limiting, I'm afraid.

Maybe. I guess a possible broken scenario would be for a guest to be
set up with a set of 32-bit BARs that cannot possibly fit in the 3-4G
hole, but I think that's unlikely.

> 
> >> --- a/xen/arch/x86/msr.c
> >> +++ b/xen/arch/x86/msr.c
> >> @@ -339,6 +339,19 @@ int guest_rdmsr(struct vcpu *v, uint32_t
> >>          *val = msrs->tsc_aux;
> >>          break;
> >>  
> >> +    case MSR_K8_SYSCFG:
> >> +    case MSR_K8_TOP_MEM1:
> >> +    case MSR_K8_TOP_MEM2:
> >> +        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
> >> +            goto gp_fault;
> >> +        if ( !is_hardware_domain(d) )
> >> +            return X86EMUL_UNHANDLEABLE;
> > 
> > It might be clearer to also handle the !is_hardware_domain case here,
> > instead of deferring to svm_msr_read_intercept:
> > 
> > if ( is_hardware_domain(d) )
> >     rdmsrl(msr, *val);
> > else
> >     *val = 0;
> 
> As said in the post-commit-message remark, I don't think returning 0
> here is appropriate. I'd be willing to move DomU handling here, but
> only once it's sane.

Hm, OK. IMO it's fine to move the current domU behavior here without
fixing it in the same patch, and do the fixing afterwards.

At least we would have all the handling done in a single place, and
you are certainly not making the domU case any worse than it
currently is.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:24:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:24:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133363.248609 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG0J-00057C-IN; Thu, 27 May 2021 13:24:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133363.248609; Thu, 27 May 2021 13:24:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG0J-000575-FM; Thu, 27 May 2021 13:24:51 +0000
Received: by outflank-mailman (input) for mailman id 133363;
 Thu, 27 May 2021 13:24:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TI9I=KW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lmG0H-00056o-SV
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:24:49 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b15416cc-e576-4c59-9fb3-7c124fb9c86d;
 Thu, 27 May 2021 13:24:48 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id A8FB568AFE; Thu, 27 May 2021 15:24:42 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b15416cc-e576-4c59-9fb3-7c124fb9c86d
Date: Thu, 27 May 2021 15:24:42 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 02/15] swiotlb: Refactor swiotlb_create_debugfs
Message-ID: <20210527132442.GA26160@lst.de>
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-3-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210518064215.2856977-3-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, May 18, 2021 at 02:42:02PM +0800, Claire Chang wrote:
>  struct io_tlb_mem *io_tlb_default_mem;
> +static struct dentry *debugfs_dir;
>  
>  /*
>   * Max segment that we can provide which (if pages are contiguous) will
> @@ -662,18 +663,30 @@ EXPORT_SYMBOL_GPL(is_swiotlb_active);
>  
>  #ifdef CONFIG_DEBUG_FS
>  
> +static void swiotlb_create_debugfs(struct io_tlb_mem *mem, const char *name)
>  {
>  	if (!mem)
> +		return;

I don't think this check makes much sense here.

> +}
> +
> +static int __init swiotlb_create_default_debugfs(void)
> +{
> +	struct io_tlb_mem *mem = io_tlb_default_mem;
> +
> +	if (mem) {
> +		swiotlb_create_debugfs(mem, "swiotlb");
> +		debugfs_dir = mem->debugfs;
> +	} else {
> +		debugfs_dir = debugfs_create_dir("swiotlb", NULL);
> +	}

This also looks rather strange.  I'd much rather move the
directory creation out of swiotlb_create_debugfs.  E.g. something like:

static void swiotlb_create_debugfs_files(struct io_tlb_mem *mem)
{
	debugfs_create_ulong("io_tlb_nslabs", 0400, mem->debugfs, &mem->nslabs);
	debugfs_create_ulong("io_tlb_used", 0400, mem->debugfs, &mem->used);
}

static int __init swiotlb_init_debugfs(void)
{
	debugfs_dir = debugfs_create_dir("swiotlb", NULL);
	if (io_tlb_default_mem) {
		io_tlb_default_mem->debugfs = debugfs_dir;
		swiotlb_create_debugfs_files(io_tlb_default_mem);
	}
	return 0;
}
late_initcall(swiotlb_init_debugfs);

...

static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
                                    struct device *dev)
{
	...
		mem->debugfs = debugfs_create_dir(rmem->name, debugfs_dir);
	swiotlb_create_debugfs_files(mem);
}


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:25:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:25:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133365.248620 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG0b-0005c5-QN; Thu, 27 May 2021 13:25:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133365.248620; Thu, 27 May 2021 13:25:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG0b-0005by-NN; Thu, 27 May 2021 13:25:09 +0000
Received: by outflank-mailman (input) for mailman id 133365;
 Thu, 27 May 2021 13:25:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TI9I=KW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lmG0a-0005ZD-DO
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:25:08 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 01542514-232d-4ba6-8568-c8c5d5653e68;
 Thu, 27 May 2021 13:25:07 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 0E5A268BFE; Thu, 27 May 2021 15:25:05 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 01542514-232d-4ba6-8568-c8c5d5653e68
Date: Thu, 27 May 2021 15:25:04 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 03/15] swiotlb: Add DMA_RESTRICTED_POOL
Message-ID: <20210527132504.GB26160@lst.de>
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-4-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210518064215.2856977-4-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Tue, May 18, 2021 at 02:42:03PM +0800, Claire Chang wrote:
> Add a new kconfig symbol, DMA_RESTRICTED_POOL, for restricted DMA pool.

Please merge this with the actual code that is getting added.


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:25:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:25:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133371.248630 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG0w-0006Dp-2l; Thu, 27 May 2021 13:25:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133371.248630; Thu, 27 May 2021 13:25:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG0v-0006Dg-Va; Thu, 27 May 2021 13:25:29 +0000
Received: by outflank-mailman (input) for mailman id 133371;
 Thu, 27 May 2021 13:25:29 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9d2D=KW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lmG0u-0006B1-Ty
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:25:28 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bb615e5d-b821-413b-be36-3da504582752;
 Thu, 27 May 2021 13:25:28 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bb615e5d-b821-413b-be36-3da504582752
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622121928;
  h=from:to:cc:subject:date:message-id:mime-version:
   content-transfer-encoding;
  bh=YCHtb/6ui0SaeIuOK0c9WWVJKbyc0qDmna7ajwXvw9g=;
  b=ezzgrihvNbdBpoDhbPKPnAWk8/fRSKCgiZTt9KKsXfBpwgieWkOHTSkG
   Ks9H00VkDVLkI/a/3nG8ieeVH83suW1RZy+vytm28JZADnkYB99Cw/rdR
   LLF80UK+nMVOdbkumoXjXPZt+OOWRlND+NmpaQbQ54BVbdIUrTrdVMRwx
   8=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: 7qKzuQr2rJ/5+9//z8wBMURnR/9DrLUIQfYb5OgKQMtKuIIb3b5X9EKeTNgRVBqx20uGv1YTUb
 q4/JXjLLfcKxaTSzfnYCV4+Gr09KpMyBTGFOGIb6PtT4beFWx2aH7CWW3ZPt1yqRM3UfNKzaYs
 zyguDWDPVonWM0GiveNWFN6DCwUcAIhoDi2wgw/ORjLjRTGZsI+JFVtoSaQjc5Eq/xVAKRqnRW
 GKBRDIQTXVR1EBLCR4nIYK3TRHcjpZN3FZUAewvcLvuvW3SgWQkXt29TORcGMpqAREroKBV6Fd
 WW4=
X-SBRS: 5.1
X-MesageID: 44523219
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:wEDGxqgATlI/e5L5FSTElFI5C3BQXiAji2hC6mlwRA09TyX5ra
 2TdTogtSMc6QxhPE3I/OrrBEDuexzhHPJOj7X5Xo3SOTUO2lHYT72KhLGKq1Hd8kXFndK1vp
 0QEZSWZueQMbB75/yKnTVREbwbsaW6GHbDv5ag859vJzsaFZ2J921Ce2Gm+tUdfng8OXI+fq
 DsgPZvln6bVlk8SN+0PXUBV/irnaywqHq3CSR2fiLO8WO1/EuV1II=
X-IronPort-AV: E=Sophos;i="5.82,334,1613451600"; 
   d="scan'208";a="44523219"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 0/3] x86: Fixes to TSX handling
Date: Thu, 27 May 2021 14:25:16 +0100
Message-ID: <20210527132519.21730-1-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

Fix the handling of the HLE/RTM CPUID bits for guests.

Andrew Cooper (3):
  x86/cpuid: Rework HLE and RTM handling
  x86/tsx: Minor cleanup and improvements
  x86/tsx: Deprecate vpmu=rtm-abort and use tsx=<bool> instead

 docs/misc/xen-command-line.pandoc           |  40 +++++------
 tools/libs/guest/xg_cpuid_x86.c             |   2 +
 xen/arch/x86/cpu/intel.c                    |   3 -
 xen/arch/x86/cpu/vpmu.c                     |   4 +-
 xen/arch/x86/cpuid.c                        |  26 +++----
 xen/arch/x86/hvm/vmx/vmx.c                  |   2 +-
 xen/arch/x86/msr.c                          |   2 +-
 xen/arch/x86/spec_ctrl.c                    |   5 +-
 xen/arch/x86/tsx.c                          | 103 ++++++++++++++++++++++++----
 xen/include/asm-x86/cpufeature.h            |   1 +
 xen/include/asm-x86/processor.h             |   1 +
 xen/include/asm-x86/vpmu.h                  |   1 -
 xen/include/public/arch-x86/cpufeatureset.h |   4 +-
 13 files changed, 131 insertions(+), 63 deletions(-)

-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:25:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:25:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133372.248642 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG11-0006YF-AZ; Thu, 27 May 2021 13:25:35 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133372.248642; Thu, 27 May 2021 13:25:35 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG11-0006Y4-79; Thu, 27 May 2021 13:25:35 +0000
Received: by outflank-mailman (input) for mailman id 133372;
 Thu, 27 May 2021 13:25:33 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9d2D=KW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lmG0z-0006Vp-Bx
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:25:33 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d5590923-66fe-478c-957a-1b070d481308;
 Thu, 27 May 2021 13:25:31 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d5590923-66fe-478c-957a-1b070d481308
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622121931;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=m9QjH0cMMuHCQVPpDVT4R/qor7cqmBRJec5W9WeQMWI=;
  b=CCvrJWHEFqL/KPm9u7kpq+Uv9f3MXekSOuzCymZoEeV3n21th/xziL9C
   GfV3bC8fwZvyA3+LPM9+Mu53rWCLBPwDaZGdwjfyR0a3P15xr5EPc7HlZ
   0xVC8JFrLiccPLjEXhElrJ0GrUIcBqt3/LRjCS0CIdpT8XIFLja8faFYr
   o=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: RPC8Wl/AUgpsC6I+ScZyKvRYyEfw9QAESKL4QptfhG5ddSJOw/WTQQdTUsQxCkrGLfc4kbJblf
 v4bBHOObommyCNSwlKbezknANoDRFYsmrV9NQbLMUW+Zp3YAMhqGDa3ZqpZdNvy4KG7gQ3Ov6T
 YM2Bk2tXhOfGa8D6X4IddPhhc0vh18AbRt4kv3TjDYhIm+t9dwl1R2d8d0P5azv7CxmCbWlLhN
 6pFViN5H6FpMm9HXB2yKQk88jwwV9VJcJtMN+5Jtkik2cy4NjY3CKL8JL6qXOUmXa6gHIuczce
 K5w=
X-SBRS: 5.1
X-MesageID: 44745591
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:s3Gyoq/gv3zs8gQ3zrZuk+DgI+orL9Y04lQ7vn2YSXRuHPBw8P
 re5cjztCWE7gr5N0tBpTntAsW9qDbnhPtICOoqTNCftWvdyQiVxehZhOOIqVDd8m/Fh4pgPM
 9bAtBD4bbLbGSS4/yU3ODBKadD/OW6
X-IronPort-AV: E=Sophos;i="5.82,334,1613451600"; 
   d="scan'208";a="44745591"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/3] x86/cpuid: Rework HLE and RTM handling
Date: Thu, 27 May 2021 14:25:17 +0100
Message-ID: <20210527132519.21730-2-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210527132519.21730-1-andrew.cooper3@citrix.com>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

The TAA mitigation offered the option to hide the HLE and RTM CPUID bits,
which has caused some migration compatibility problems.

These two bits are special.  Annotate them with ! to emphasise this point.

Hardware Lock Elision (HLE) may or may not be visible in CPUID, but is
disabled in microcode on all CPUs, and has been removed from the architecture.
Do not advertise it to VMs by default.

Restricted Transactional Memory (RTM) may or may not be visible in CPUID, and
may or may not be configured in force-abort mode.  Have tsx_init() note
whether RTM has been configured into force-abort mode, so
guest_common_feature_adjustments() can conditionally hide it from VMs by
default.

The host policy values for HLE/RTM may or may not be set, depending on any
previous running kernel's choice of visibility, and Xen's choice.  TSX is
available on any CPU which enumerates a TSX-hiding mechanism, so instead of
doing a two-step (clobber any hiding, scan CPUID, then set the visibility),
just force visibility of the bits in the first place.

With the HLE/RTM bits now unilaterally visible in the host policy,
xc_cpuid_apply_policy() can construct a more appropriate policy out of thin
air for pre-4.13 VMs with no CPUID data in their migration stream, and
specifically one where HLE/RTM doesn't potentially disappear behind the back
of a running VM.

Fixes: 8c4330818f6 ("x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel")
Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 tools/libs/guest/xg_cpuid_x86.c             |  2 ++
 xen/arch/x86/cpuid.c                        | 24 ++++++++++------------
 xen/arch/x86/spec_ctrl.c                    |  3 ---
 xen/arch/x86/tsx.c                          | 31 +++++++++++++++++++++++++++--
 xen/include/asm-x86/processor.h             |  1 +
 xen/include/public/arch-x86/cpufeatureset.h |  4 ++--
 6 files changed, 44 insertions(+), 21 deletions(-)

diff --git a/tools/libs/guest/xg_cpuid_x86.c b/tools/libs/guest/xg_cpuid_x86.c
index 1ebc108213..ec5a47fde4 100644
--- a/tools/libs/guest/xg_cpuid_x86.c
+++ b/tools/libs/guest/xg_cpuid_x86.c
@@ -511,6 +511,8 @@ int xc_cpuid_apply_policy(xc_interface *xch, uint32_t domid, bool restore,
          * so migrated-in VM's don't risk seeing features disappearing.
          */
         p->basic.rdrand = test_bit(X86_FEATURE_RDRAND, host_featureset);
+        p->feat.hle = test_bit(X86_FEATURE_HLE, host_featureset);
+        p->feat.rtm = test_bit(X86_FEATURE_RTM, host_featureset);
 
         if ( di.hvm )
         {
diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 752bf244ea..55a7b16342 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -375,6 +375,16 @@ static void __init guest_common_default_feature_adjustments(uint32_t *fs)
          boot_cpu_data.x86 == 6 && boot_cpu_data.x86_model == 0x3a &&
          cpu_has_rdrand && !is_forced_cpu_cap(X86_FEATURE_RDRAND) )
         __clear_bit(X86_FEATURE_RDRAND, fs);
+
+    /*
+     * On certain hardware, speculative or errata workarounds can result in
+     * TSX being placed in "force-abort" mode, where it doesn't actually
+     * function as expected, but is technically compatible with the ISA.
+     *
+     * Do not advertise RTM to guests by default if it won't actually work.
+     */
+    if ( rtm_disabled )
+        __clear_bit(X86_FEATURE_RTM, fs);
 }
 
 static void __init guest_common_feature_adjustments(uint32_t *fs)
@@ -652,20 +662,6 @@ void recalculate_cpuid_policy(struct domain *d)
             __clear_bit(X86_FEATURE_SYSCALL, max_fs);
     }
 
-    /*
-     * On hardware with MSR_TSX_CTRL, the admin may have elected to disable
-     * TSX and hide the feature bits.  Migrating-in VMs may have been booted
-     * pre-mitigation when the TSX features were visbile.
-     *
-     * This situation is compatible (albeit with a perf hit to any TSX code in
-     * the guest), so allow the feature bits to remain set.
-     */
-    if ( cpu_has_tsx_ctrl )
-    {
-        __set_bit(X86_FEATURE_HLE, max_fs);
-        __set_bit(X86_FEATURE_RTM, max_fs);
-    }
-
     /* Clamp the toolstacks choices to reality. */
     for ( i = 0; i < ARRAY_SIZE(fs); i++ )
         fs[i] &= max_fs[i];
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index cd05f42394..f2782b2d55 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -1158,9 +1158,6 @@ void __init init_speculation_mitigations(void)
          ((hw_smt_enabled && opt_smt) ||
           !boot_cpu_has(X86_FEATURE_SC_VERW_IDLE)) )
     {
-        setup_clear_cpu_cap(X86_FEATURE_HLE);
-        setup_clear_cpu_cap(X86_FEATURE_RTM);
-
         opt_tsx = 0;
         tsx_init();
     }
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index 39e483640a..e09e819dce 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -15,6 +15,7 @@
  */
 int8_t __read_mostly opt_tsx = -1;
 int8_t __read_mostly cpu_has_tsx_ctrl = -1;
+bool __read_mostly rtm_disabled;
 
 static int __init parse_tsx(const char *s)
 {
@@ -45,6 +46,30 @@ void tsx_init(void)
             rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
         cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
+
+        /*
+         * The TSX features (HLE/RTM) are handled specially.  They both
+         * enumerate features but, on certain parts, have mechanisms to be
+         * hidden without disrupting running software.
+         *
+         * At the moment, we're running in an unknown context (WRT hiding -
+         * particularly if another fully fledged kernel ran before us) and
+         * depending on user settings, may elect to continue hiding them from
+         * native CPUID instructions.
+         *
+         * Xen doesn't use TSX itself, but uses cpu_has_{hle,rtm} for various
+         * system reasons, mostly errata detection, so the meaning is more
+         * useful as "TSX infrastructure available", as opposed to "features
+         * advertised and working".
+         *
+         * Force the features to be visible in Xen's view if we see any of the
+         * infrastructure capable of hiding them.
+         */
+        if ( cpu_has_tsx_ctrl )
+        {
+            setup_force_cpu_cap(X86_FEATURE_HLE);
+            setup_force_cpu_cap(X86_FEATURE_RTM);
+        }
     }
 
     if ( cpu_has_tsx_ctrl )
@@ -53,9 +78,11 @@ void tsx_init(void)
 
         rdmsrl(MSR_TSX_CTRL, val);
 
+        /* Check bottom bit only.  Higher bits are various sentinels. */
+        rtm_disabled = !(opt_tsx & 1);
+
         val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
-        /* Check bottom bit only.  Higher bits are various sentinals. */
-        if ( !(opt_tsx & 1) )
+        if ( rtm_disabled )
             val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
 
         wrmsrl(MSR_TSX_CTRL, val);
diff --git a/xen/include/asm-x86/processor.h b/xen/include/asm-x86/processor.h
index 83143d4df8..bc4dc69253 100644
--- a/xen/include/asm-x86/processor.h
+++ b/xen/include/asm-x86/processor.h
@@ -627,6 +627,7 @@ static inline uint8_t get_cpu_family(uint32_t raw, uint8_t *model,
 }
 
 extern int8_t opt_tsx, cpu_has_tsx_ctrl;
+extern bool rtm_disabled;
 void tsx_init(void);
 
 enum ap_boot_method {
diff --git a/xen/include/public/arch-x86/cpufeatureset.h b/xen/include/public/arch-x86/cpufeatureset.h
index c42f56bdd4..b65af42436 100644
--- a/xen/include/public/arch-x86/cpufeatureset.h
+++ b/xen/include/public/arch-x86/cpufeatureset.h
@@ -197,14 +197,14 @@ XEN_CPUFEATURE(FSGSBASE,      5*32+ 0) /*A  {RD,WR}{FS,GS}BASE instructions */
 XEN_CPUFEATURE(TSC_ADJUST,    5*32+ 1) /*S  TSC_ADJUST MSR available */
 XEN_CPUFEATURE(SGX,           5*32+ 2) /*   Software Guard extensions */
 XEN_CPUFEATURE(BMI1,          5*32+ 3) /*A  1st bit manipulation extensions */
-XEN_CPUFEATURE(HLE,           5*32+ 4) /*A  Hardware Lock Elision */
+XEN_CPUFEATURE(HLE,           5*32+ 4) /*!a Hardware Lock Elision */
 XEN_CPUFEATURE(AVX2,          5*32+ 5) /*A  AVX2 instructions */
 XEN_CPUFEATURE(FDP_EXCP_ONLY, 5*32+ 6) /*!  x87 FDP only updated on exception. */
 XEN_CPUFEATURE(SMEP,          5*32+ 7) /*S  Supervisor Mode Execution Protection */
 XEN_CPUFEATURE(BMI2,          5*32+ 8) /*A  2nd bit manipulation extensions */
 XEN_CPUFEATURE(ERMS,          5*32+ 9) /*A  Enhanced REP MOVSB/STOSB */
 XEN_CPUFEATURE(INVPCID,       5*32+10) /*H  Invalidate Process Context ID */
-XEN_CPUFEATURE(RTM,           5*32+11) /*A  Restricted Transactional Memory */
+XEN_CPUFEATURE(RTM,           5*32+11) /*!A Restricted Transactional Memory */
 XEN_CPUFEATURE(PQM,           5*32+12) /*   Platform QoS Monitoring */
 XEN_CPUFEATURE(NO_FPU_SEL,    5*32+13) /*!  FPU CS/DS stored as zero */
 XEN_CPUFEATURE(MPX,           5*32+14) /*s  Memory Protection Extensions */
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:25:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:25:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133373.248653 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG15-0006tZ-Jx; Thu, 27 May 2021 13:25:39 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133373.248653; Thu, 27 May 2021 13:25:39 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG15-0006tO-Fn; Thu, 27 May 2021 13:25:39 +0000
Received: by outflank-mailman (input) for mailman id 133373;
 Thu, 27 May 2021 13:25:38 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9d2D=KW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lmG14-0006Vp-AW
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:25:38 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id acc3c51a-8119-4370-96de-e858d9f5694d;
 Thu, 27 May 2021 13:25:33 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: acc3c51a-8119-4370-96de-e858d9f5694d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622121933;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=qWEul1M27l1E3zd/VjLDe6HsUX+pKdhbkISaVGQCUfo=;
  b=HolpBQfTK1UEM8/8ZI4HFZN6XpB9QmVxpn4yfGimtFk8BI3GygFdFQmh
   hwLLims9h7Mmb3SC/lcA9eq0d/drTVKHXHNQEit+s3zcqXdhPFcu6J/ON
   wSUbepkPmlIOhx4bdN1VtPDngV8IRgTpnkQfXAZhA5QKvaHj0smBSPfoV
   U=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: OslH3wq/eT+OxgiMFSRtAJC3H5CmagPR9LvlkqcbGzgH9f+mTgUOmEoSiBhs1AYFkqnU+8Gu/X
 3qG1i8rXG362h/VvhDCECbEzNgt2WHdNjDEjGSQwpye0r/EMkPZXVAeENdcX/zsVwo41IHFZas
 qLPJUM6Sm0k7RVeU7hN7IUY/wLZ9gS8K2p/RwocQP0lZXFiSOSG+AqL1djctO5tsi/PbpX5c93
 E1X7H4Q7F91DO6nQVZlUmnw2xz6mIqoxgajbnbLOM56yUI8B0WgDzRRANPJ1ZUh8e0wBrGhDyV
 nbM=
X-SBRS: 5.1
X-MesageID: 44745595
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:fb7g9KBnIig7KSXlHemW55DYdb4zR+YMi2TC1yhKKCC9Ffbo7/
 xG/c5rrCMc5wxhO03I9eruBEDEewK5yXcX2/h2AV7BZniFhILAFugLhuGOrwEIWReOkdK1vZ
 0QCJSWY+eRMbEVt6jHCXGDYrMd/OU=
X-IronPort-AV: E=Sophos;i="5.82,334,1613451600"; 
   d="scan'208";a="44745595"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 2/3] x86/tsx: Minor cleanup and improvements
Date: Thu, 27 May 2021 14:25:18 +0100
Message-ID: <20210527132519.21730-3-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210527132519.21730-1-andrew.cooper3@citrix.com>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

 * Introduce cpu_has_arch_caps and replace boot_cpu_has(X86_FEATURE_ARCH_CAPS)
 * Read CPUID data into the appropriate boot_cpu_data.x86_capability[]
   element, as subsequent changes are going to need more cpu_has_* logic.
 * Use the hi/lo MSR helpers, which substantially improves code generation.

No practical change.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 xen/arch/x86/cpuid.c             |  2 +-
 xen/arch/x86/hvm/vmx/vmx.c       |  2 +-
 xen/arch/x86/msr.c               |  2 +-
 xen/arch/x86/spec_ctrl.c         |  2 +-
 xen/arch/x86/tsx.c               | 21 ++++++++++++---------
 xen/include/asm-x86/cpufeature.h |  1 +
 6 files changed, 17 insertions(+), 13 deletions(-)

diff --git a/xen/arch/x86/cpuid.c b/xen/arch/x86/cpuid.c
index 55a7b16342..f3c8950aa3 100644
--- a/xen/arch/x86/cpuid.c
+++ b/xen/arch/x86/cpuid.c
@@ -747,7 +747,7 @@ int init_domain_cpuid_policy(struct domain *d)
      * so dom0 can turn off workarounds as appropriate.  Temporary, until the
      * domain policy logic gains a better understanding of MSRs.
      */
-    if ( is_hardware_domain(d) && boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
+    if ( is_hardware_domain(d) && cpu_has_arch_caps )
         p->feat.arch_caps = true;
 
     d->arch.cpuid = p;
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 1450fd1991..7e3e67fdc3 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -2566,7 +2566,7 @@ static bool __init has_if_pschange_mc(void)
     if ( cpu_has_hypervisor )
         return false;
 
-    if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
+    if ( cpu_has_arch_caps )
         rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
     if ( caps & ARCH_CAPS_IF_PSCHANGE_MC_NO )
diff --git a/xen/arch/x86/msr.c b/xen/arch/x86/msr.c
index c3a988bd11..374f92b2c5 100644
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -136,7 +136,7 @@ int init_domain_msr_policy(struct domain *d)
      * so dom0 can turn off workarounds as appropriate.  Temporary, until the
      * domain policy logic gains a better understanding of MSRs.
      */
-    if ( is_hardware_domain(d) && boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
+    if ( is_hardware_domain(d) && cpu_has_arch_caps )
     {
         uint64_t val;
 
diff --git a/xen/arch/x86/spec_ctrl.c b/xen/arch/x86/spec_ctrl.c
index f2782b2d55..739b7913ff 100644
--- a/xen/arch/x86/spec_ctrl.c
+++ b/xen/arch/x86/spec_ctrl.c
@@ -885,7 +885,7 @@ void __init init_speculation_mitigations(void)
     bool cpu_has_bug_taa;
     uint64_t caps = 0;
 
-    if ( boot_cpu_has(X86_FEATURE_ARCH_CAPS) )
+    if ( cpu_has_arch_caps )
         rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
     hw_smt_enabled = check_smt_enabled();
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index e09e819dce..98ecb71a4a 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -34,15 +34,18 @@ void tsx_init(void)
 {
     /*
      * This function is first called between microcode being loaded, and CPUID
-     * being scanned generally.  Calculate from raw data whether MSR_TSX_CTRL
-     * is available.
+     * being scanned generally.  Read into boot_cpu_data.x86_capability[] for
+     * the cpu_has_* bits we care about using here.
      */
     if ( unlikely(cpu_has_tsx_ctrl < 0) )
     {
         uint64_t caps = 0;
 
-        if ( boot_cpu_data.cpuid_level >= 7 &&
-             (cpuid_count_edx(7, 0) & cpufeat_mask(X86_FEATURE_ARCH_CAPS)) )
+        if ( boot_cpu_data.cpuid_level >= 7 )
+            boot_cpu_data.x86_capability[cpufeat_word(X86_FEATURE_ARCH_CAPS)]
+                = cpuid_count_edx(7, 0);
+
+        if ( cpu_has_arch_caps )
             rdmsrl(MSR_ARCH_CAPABILITIES, caps);
 
         cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
@@ -74,18 +77,18 @@ void tsx_init(void)
 
     if ( cpu_has_tsx_ctrl )
     {
-        uint64_t val;
+        uint32_t hi, lo;
 
-        rdmsrl(MSR_TSX_CTRL, val);
+        rdmsr(MSR_TSX_CTRL, lo, hi);
 
         /* Check bottom bit only.  Higher bits are various sentinels. */
         rtm_disabled = !(opt_tsx & 1);
 
-        val &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
+        lo &= ~(TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR);
         if ( rtm_disabled )
-            val |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
+            lo |= TSX_CTRL_RTM_DISABLE | TSX_CTRL_CPUID_CLEAR;
 
-        wrmsrl(MSR_TSX_CTRL, val);
+        wrmsr(MSR_TSX_CTRL, lo, hi);
     }
     else if ( opt_tsx >= 0 )
         printk_once(XENLOG_WARNING
diff --git a/xen/include/asm-x86/cpufeature.h b/xen/include/asm-x86/cpufeature.h
index 33b2257888..9f5ae3aa0d 100644
--- a/xen/include/asm-x86/cpufeature.h
+++ b/xen/include/asm-x86/cpufeature.h
@@ -133,6 +133,7 @@
 #define cpu_has_avx512_vp2intersect boot_cpu_has(X86_FEATURE_AVX512_VP2INTERSECT)
 #define cpu_has_tsx_force_abort boot_cpu_has(X86_FEATURE_TSX_FORCE_ABORT)
 #define cpu_has_serialize       boot_cpu_has(X86_FEATURE_SERIALIZE)
+#define cpu_has_arch_caps       boot_cpu_has(X86_FEATURE_ARCH_CAPS)
 
 /* CPUID level 0x00000007:1.eax */
 #define cpu_has_avx_vnni        boot_cpu_has(X86_FEATURE_AVX_VNNI)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:25:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:25:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133375.248664 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG1B-0007NE-2n; Thu, 27 May 2021 13:25:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133375.248664; Thu, 27 May 2021 13:25:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG1A-0007Mu-Uy; Thu, 27 May 2021 13:25:44 +0000
Received: by outflank-mailman (input) for mailman id 133375;
 Thu, 27 May 2021 13:25:43 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9d2D=KW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lmG19-0006Vp-Ai
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:25:43 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id cf710b61-4d57-4556-b0a8-9d504384264d;
 Thu, 27 May 2021 13:25:34 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf710b61-4d57-4556-b0a8-9d504384264d
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622121933;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:mime-version:content-transfer-encoding;
  bh=hbSlZ0t4S9qZJpcVYcxxqV9AHueCrBm5p1+6QD2gkF4=;
  b=csYiggvlHewb61k3daWwtqDdhjw4KFDx9PFeGnnpHI6UfkzHLkd0UPOJ
   G0mQPlqcEXjzbPBZLxaKjIzWVUWO/fpssFFNyxb87fNQt+h1FcrQBXqFK
   kqi2XzBBTub5ShVWT4i/yaUoO0E2rn8hy+2ozVNlbq9ZDv3VkXEHRnV55
   w=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=none (message not signed) header.i=none
IronPort-SDR: ctaQJrHedHNfZbi6TXiP7BmYX0i1rU9TXyaIQ/8KlLN8FshKN2kUXpPMnC9bm7KSL/Pxc1gFpL
 /25S6z2vUk5qBFuqjpppiefS7ySJ6/FWs+0L0DcrRuKsMYY8XqO9nAbHvZHuoJBLq4EGG4p1gr
 Z6FLRC05zgQLO/7B5sVfZjDRlQOoZF6CFNuOFGTtjpBn5jfNPnfehX7hhfGpH0hNaYsQizx22v
 JXoJkfzWljgcev3I62BY2uGZhMpeLdSAeCZrvWYVBdafbJHC9vkbPYBn7Vp6J1jfa4fq4wwxnq
 Ptg=
X-SBRS: 5.1
X-MesageID: 44745603
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:VsOdxaGa6A9bvP6FpLqE0seALOsnbusQ8zAXP0AYc31om6uj5r
 iTdZUgpGbJYVkqKRIdcLy7V5VoBEmskaKdgrNhW4tKPjOW2ldARbsKheCJrlHd8m/Fh4lgPM
 9bAtND4bbLbWSS4/yV3ODBKadE/OW6
X-IronPort-AV: E=Sophos;i="5.82,334,1613451600"; 
   d="scan'208";a="44745603"
From: Andrew Cooper <andrew.cooper3@citrix.com>
To: Xen-devel <xen-devel@lists.xenproject.org>
CC: Andrew Cooper <andrew.cooper3@citrix.com>, Jan Beulich
	<JBeulich@suse.com>, =?UTF-8?q?Roger=20Pau=20Monn=C3=A9?=
	<roger.pau@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 3/3] x86/tsx: Deprecate vpmu=rtm-abort and use tsx=<bool> instead
Date: Thu, 27 May 2021 14:25:19 +0100
Message-ID: <20210527132519.21730-4-andrew.cooper3@citrix.com>
X-Mailer: git-send-email 2.11.0
In-Reply-To: <20210527132519.21730-1-andrew.cooper3@citrix.com>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit

This reuses the rtm_disable infrastructure, so CPUID derivation works properly
when TSX is disabled in favour of a working performance counter 3 (PCR3).

vpmu= is not a supported feature, and having this functionality under tsx=
centralises all TSX handling.

Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
---
CC: Jan Beulich <JBeulich@suse.com>
CC: Roger Pau Monné <roger.pau@citrix.com>
CC: Wei Liu <wl@xen.org>
---
 docs/misc/xen-command-line.pandoc | 40 +++++++++++++++---------------
 xen/arch/x86/cpu/intel.c          |  3 ---
 xen/arch/x86/cpu/vpmu.c           |  4 +--
 xen/arch/x86/tsx.c                | 51 +++++++++++++++++++++++++++++++++++++--
 xen/include/asm-x86/vpmu.h        |  1 -
 5 files changed, 70 insertions(+), 29 deletions(-)

diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
index c32a397a12..a6facc33ea 100644
--- a/docs/misc/xen-command-line.pandoc
+++ b/docs/misc/xen-command-line.pandoc
@@ -2296,14 +2296,21 @@ pages) must also be specified via the tbuf_size parameter.
 
 Controls for the use of Transactional Synchronization eXtensions.
 
-On Intel parts released in Q3 2019 (with updated microcode), and future parts,
-a control has been introduced which allows TSX to be turned off.
+Several microcode updates are relevant:
 
-On systems with the ability to turn TSX off, this boolean offers system wide
-control of whether TSX is enabled or disabled.
+ * March 2019, fixing the TSX memory ordering errata on all TSX-enabled CPUs
+   to date.  Introduced MSR_TSX_FORCE_ABORT on SKL/SKX/KBL/WHL/CFL parts.  The
+   errata workaround uses Performance Counter 3, so the user can select
+   between working TSX and working perfcounters.
 
-On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
-logic applies:
+ * November 2019, fixing the TSX Async Abort speculative vulnerability.
+   Introduced MSR_TSX_CTRL on all TSX-enabled MDS_NO parts to date,
+   CLX/WHL-R/CFL-R, with the controls becoming architectural moving forward
+   and formally retiring HLE from the architecture.  The user can disable TSX
+   to mitigate TAA, and elect to hide the HLE/RTM CPUID bits.
+
+On systems with the ability to turn TSX off, this boolean offers system
+wide control of whether TSX is enabled or disabled.
 
  * An explicit `tsx=` choice is honoured, even if it is `true` and would
    result in a vulnerable system.
@@ -2311,10 +2318,14 @@ logic applies:
  * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
    mitigated by disabling TSX, as this is the lowest overhead option.
 
- * If the use of TSX is important, the more expensive TAA mitigations can be
+   If the use of TSX is important, the more expensive TAA mitigations can be
    opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
    active by default.
 
+ * When no explicit `tsx=` option is given, parts susceptible to the memory
+   ordering errata default to `true` to enable working TSX.  Alternatively,
+   selecting `tsx=0` will disable TSX and restore PCR3 to a working state.
+
 ### ucode
 > `= List of [ <integer> | scan=<bool>, nmi=<bool>, allow-same=<bool> ]`
 
@@ -2456,20 +2467,7 @@ provide access to a wealth of low level processor information.
 
 *   The `arch` option allows access to the pre-defined architectural events.
 
-*   The `rtm-abort` boolean controls a trade-off between working Restricted
-    Transactional Memory, and working performance counters.
-
-    All processors released to date (Q1 2019) supporting Transactional Memory
-    Extensions suffer an erratum which has been addressed in microcode.
-
-    Processors based on the Skylake microarchitecture with up-to-date
-    microcode internally use performance counter 3 to work around the erratum.
-    A consequence is that the counter gets reprogrammed whenever an `XBEGIN`
-    instruction is executed.
-
-    An alternative mode exists where PCR3 behaves as before, at the cost of
-    `XBEGIN` unconditionally aborting.  Enabling `rtm-abort` mode will
-    activate this alternative mode.
+*   The `rtm-abort` boolean has been superseded.  Use `tsx=0` instead.
 
 *Warning:*
 As the virtualisation is not 100% safe, don't use the vpmu flag on
diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
index 37439071d9..abf8e206d7 100644
--- a/xen/arch/x86/cpu/intel.c
+++ b/xen/arch/x86/cpu/intel.c
@@ -356,9 +356,6 @@ static void Intel_errata_workarounds(struct cpuinfo_x86 *c)
 	    (c->x86_model == 29 || c->x86_model == 46 || c->x86_model == 47))
 		__set_bit(X86_FEATURE_CLFLUSH_MONITOR, c->x86_capability);
 
-	if (cpu_has_tsx_force_abort && opt_rtm_abort)
-		wrmsrl(MSR_TSX_FORCE_ABORT, TSX_FORCE_ABORT_RTM);
-
 	probe_c3_errata(c);
 }
 
diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
index d8659c63f8..16e91a3694 100644
--- a/xen/arch/x86/cpu/vpmu.c
+++ b/xen/arch/x86/cpu/vpmu.c
@@ -49,7 +49,6 @@ CHECK_pmu_params;
 static unsigned int __read_mostly opt_vpmu_enabled;
 unsigned int __read_mostly vpmu_mode = XENPMU_MODE_OFF;
 unsigned int __read_mostly vpmu_features = 0;
-bool __read_mostly opt_rtm_abort;
 
 static DEFINE_SPINLOCK(vpmu_lock);
 static unsigned vpmu_count;
@@ -79,7 +78,8 @@ static int __init parse_vpmu_params(const char *s)
         else if ( !cmdline_strcmp(s, "arch") )
             vpmu_features |= XENPMU_FEATURE_ARCH_ONLY;
         else if ( (val = parse_boolean("rtm-abort", s, ss)) >= 0 )
-            opt_rtm_abort = val;
+            printk(XENLOG_WARNING
+                   "'rtm-abort=<bool>' superseded.  Use 'tsx=<bool>' instead\n");
         else
             rc = -EINVAL;
 
diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
index 98ecb71a4a..338191df7f 100644
--- a/xen/arch/x86/tsx.c
+++ b/xen/arch/x86/tsx.c
@@ -6,7 +6,9 @@
  * Valid values:
  *   1 => Explicit tsx=1
  *   0 => Explicit tsx=0
- *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
+ *  -1 => Default, altered to 0/1 (if unspecified) by:
+ *                 - TAA heuristics/settings for speculative safety
+ *                 - "TSX vs PCR3" select for TSX memory ordering safety
  *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
  *
  * This is arranged such that the bottom bit encodes whether TSX is actually
@@ -50,6 +52,26 @@ void tsx_init(void)
 
         cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
 
+        if ( cpu_has_tsx_force_abort )
+        {
+            /*
+             * On an early TSX-enabled Skylake part subject to the memory
+             * ordering erratum, with at least the March 2019 microcode.
+             */
+
+            /*
+             * If no explicit tsx= option is provided, pick a default.
+             *
+             * This deliberately overrides the implicit opt_tsx=-3 from
+             * `spec-ctrl=0` because:
+             * - parse_spec_ctrl() ran before any CPU details were known.
+             * - We now know we're running on a CPU not affected by TAA (as
+             *   TSX_FORCE_ABORT is enumerated).
+             */
+            if ( opt_tsx < 0 )
+                opt_tsx = 1;
+        }
+
         /*
          * The TSX features (HLE/RTM) are handled specially.  They both
          * enumerate features but, on certain parts, have mechanisms to be
@@ -75,6 +97,12 @@ void tsx_init(void)
         }
     }
 
+    /*
+     * Note: MSR_TSX_CTRL is enumerated on TSX-enabled MDS_NO and later parts.
+     * MSR_TSX_FORCE_ABORT is enumerated on TSX-enabled pre-MDS_NO Skylake
+     * parts only.  The two features exist on disjoint sets of CPUs, and are not
+     * offered to guests by hypervisors.
+     */
     if ( cpu_has_tsx_ctrl )
     {
         uint32_t hi, lo;
@@ -90,9 +118,28 @@ void tsx_init(void)
 
         wrmsr(MSR_TSX_CTRL, lo, hi);
     }
+    else if ( cpu_has_tsx_force_abort )
+    {
+        /*
+         * On an early TSX-enabled Skylake part subject to the memory ordering
+         * erratum, with at least the March 2019 microcode.
+         */
+        uint32_t hi, lo;
+
+        rdmsr(MSR_TSX_FORCE_ABORT, lo, hi);
+
+        /* Check bottom bit only.  Higher bits are various sentinels. */
+        rtm_disabled = !(opt_tsx & 1);
+
+        lo &= ~TSX_FORCE_ABORT_RTM;
+        if ( rtm_disabled )
+            lo |= TSX_FORCE_ABORT_RTM;
+
+        wrmsr(MSR_TSX_FORCE_ABORT, lo, hi);
+    }
     else if ( opt_tsx >= 0 )
         printk_once(XENLOG_WARNING
-                    "MSR_TSX_CTRL not available - Ignoring tsx= setting\n");
+                    "TSX controls not available - Ignoring tsx= setting\n");
 }
 
 /*
diff --git a/xen/include/asm-x86/vpmu.h b/xen/include/asm-x86/vpmu.h
index 55f85ba00f..4b0a6ba3da 100644
--- a/xen/include/asm-x86/vpmu.h
+++ b/xen/include/asm-x86/vpmu.h
@@ -126,7 +126,6 @@ static inline int vpmu_do_rdmsr(unsigned int msr, uint64_t *msr_content)
 
 extern unsigned int vpmu_mode;
 extern unsigned int vpmu_features;
-extern bool opt_rtm_abort;
 
 /* Context switch */
 static inline void vpmu_switch_from(struct vcpu *prev)
-- 
2.11.0



From xen-devel-bounces@lists.xenproject.org Thu May 27 13:27:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:27:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133398.248678 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG2g-0000Mo-GF; Thu, 27 May 2021 13:27:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133398.248678; Thu, 27 May 2021 13:27:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG2g-0000Mh-DE; Thu, 27 May 2021 13:27:18 +0000
Received: by outflank-mailman (input) for mailman id 133398;
 Thu, 27 May 2021 13:27:17 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TI9I=KW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lmG2f-0000MW-0K
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:27:17 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 30e63615-ac97-4f59-bdd0-d117e000b202;
 Thu, 27 May 2021 13:27:16 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id A928168AFE; Thu, 27 May 2021 15:27:11 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 30e63615-ac97-4f59-bdd0-d117e000b202
Date: Thu, 27 May 2021 15:27:11 +0200
From: Christoph Hellwig <hch@lst.de>
To: Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>
Cc: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
	Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>, boris.ostrovsky@oracle.com,
	jgross@suse.com, Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>,
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com,
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk,
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie,
	dri-devel@lists.freedesktop.org, intel-gfx@lists.freedesktop.org,
	jani.nikula@linux.intel.com, Jianxiong Gao <jxgao@google.com>,
	joonas.lahtinen@linux.intel.com, linux-pci@vger.kernel.org,
	maarten.lankhorst@linux.intel.com, matthew.auld@intel.com,
	rodrigo.vivi@intel.com, thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 04/15] swiotlb: Add restricted DMA pool
 initialization
Message-ID: <20210527132711.GC26160@lst.de>
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-5-tientzu@chromium.org> <CALiNf2_AWsnGqCnh02ZAGt+B-Ypzs1=-iOG2owm4GZHz2JAc4A@mail.gmail.com> <YKvLDlnns3TWEZ5l@0xbeefdead.lan>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <YKvLDlnns3TWEZ5l@0xbeefdead.lan>
User-Agent: Mutt/1.5.17 (2007-11-01)

On Mon, May 24, 2021 at 11:49:34AM -0400, Konrad Rzeszutek Wilk wrote:
> rmem_swiotlb_setup
> 
> ?
> 
> Which is ARM specific and inside the generic code?

I don't think it is arm specific at all.  It is OF specific, but just
about every platform but x86 uses OF.  And I can think of an ACPI version
of this as well.


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:27:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:27:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133399.248689 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG2q-0000iW-OQ; Thu, 27 May 2021 13:27:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133399.248689; Thu, 27 May 2021 13:27:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG2q-0000iN-LD; Thu, 27 May 2021 13:27:28 +0000
Received: by outflank-mailman (input) for mailman id 133399;
 Thu, 27 May 2021 13:27:27 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TI9I=KW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lmG2p-0000hT-Cy
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:27:27 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 59dba942-5b9d-4893-8fa6-70d71b53c717;
 Thu, 27 May 2021 13:27:26 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id BAAFB68AFE; Thu, 27 May 2021 15:27:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 59dba942-5b9d-4893-8fa6-70d71b53c717
Date: Thu, 27 May 2021 15:27:23 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 04/15] swiotlb: Add restricted DMA pool
 initialization
Message-ID: <20210527132723.GD26160@lst.de>
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-5-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210518064215.2856977-5-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

I'd still much prefer to always have the pointer in struct device.
Especially as we're also looking into things like a global 64-bit bounce
buffer.  Something like this untested patch ontop of your series:


diff --git a/drivers/base/core.c b/drivers/base/core.c
index 628e33939aca..3cb95fa29f70 100644
--- a/drivers/base/core.c
+++ b/drivers/base/core.c
@@ -29,6 +29,7 @@
 #include <linux/sched/mm.h>
 #include <linux/sysfs.h>
 #include <linux/dma-map-ops.h> /* for dma_default_coherent */
+#include <linux/swiotlb.h>
 
 #include "base.h"
 #include "power/power.h"
@@ -2814,6 +2815,9 @@ void device_initialize(struct device *dev)
     defined(CONFIG_ARCH_HAS_SYNC_DMA_FOR_CPU_ALL)
 	dev->dma_coherent = dma_default_coherent;
 #endif
+#ifdef CONFIG_SWIOTLB
+	dev->dma_io_tlb_mem = &io_tlb_default_mem;
+#endif
 }
 EXPORT_SYMBOL_GPL(device_initialize);
 
diff --git a/include/linux/device.h b/include/linux/device.h
index 4987608ea4ff..6aca6fa0facc 100644
--- a/include/linux/device.h
+++ b/include/linux/device.h
@@ -416,7 +416,7 @@ struct dev_links_info {
  * @dma_pools:	Dma pools (if dma'ble device).
  * @dma_mem:	Internal for coherent mem override.
  * @cma_area:	Contiguous memory area for dma allocations
- * @dma_io_tlb_mem: Internal for swiotlb io_tlb_mem override.
+ * @dma_io_tlb_mem: Pointer to the swiotlb pool used.  Not for driver use.
  * @archdata:	For arch-specific additions.
  * @of_node:	Associated device tree node.
  * @fwnode:	Associated device node supplied by platform firmware.
@@ -523,7 +523,7 @@ struct device {
 	struct cma *cma_area;		/* contiguous memory area for dma
 					   allocations */
 #endif
-#ifdef CONFIG_DMA_RESTRICTED_POOL
+#ifdef CONFIG_SWIOTLB
 	struct io_tlb_mem *dma_io_tlb_mem;
 #endif
 	/* arch specific additions */
diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index e8cf49bd90c5..c153cd0c469c 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -95,6 +95,7 @@ struct io_tlb_mem {
 	spinlock_t lock;
 	struct dentry *debugfs;
 	bool late_alloc;
+	bool force_swiotlb;
 	struct io_tlb_slot {
 		phys_addr_t orig_addr;
 		size_t alloc_size;
@@ -103,30 +104,16 @@ struct io_tlb_mem {
 };
 extern struct io_tlb_mem *io_tlb_default_mem;
 
-static inline struct io_tlb_mem *get_io_tlb_mem(struct device *dev)
-{
-#ifdef CONFIG_DMA_RESTRICTED_POOL
-	if (dev && dev->dma_io_tlb_mem)
-		return dev->dma_io_tlb_mem;
-#endif /* CONFIG_DMA_RESTRICTED_POOL */
-
-	return io_tlb_default_mem;
-}
-
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 
 	return mem && paddr >= mem->start && paddr < mem->end;
 }
 
 static inline bool is_dev_swiotlb_force(struct device *dev)
 {
-#ifdef CONFIG_DMA_RESTRICTED_POOL
-	if (dev->dma_io_tlb_mem)
-		return true;
-#endif /* CONFIG_DMA_RESTRICTED_POOL */
-	return false;
+	return dev->dma_io_tlb_mem->force_swiotlb;
 }
 
 void __init swiotlb_exit(void);
@@ -134,10 +121,8 @@ unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(struct device *dev);
 void __init swiotlb_adjust_size(unsigned long size);
-#ifdef CONFIG_DMA_RESTRICTED_POOL
 struct page *swiotlb_alloc(struct device *dev, size_t size);
 bool swiotlb_free(struct device *dev, struct page *page, size_t size);
-#endif /* CONFIG_DMA_RESTRICTED_POOL */
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
 static inline bool is_swiotlb_buffer(struct device *dev, phys_addr_t paddr)
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index d3fa4669229b..69d62e18f493 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -347,7 +347,7 @@ void __init swiotlb_exit(void)
 static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size,
 			   enum dma_data_direction dir)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	int index = (tlb_addr - mem->start) >> IO_TLB_SHIFT;
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
@@ -429,7 +429,7 @@ static unsigned int wrap_index(struct io_tlb_mem *mem, unsigned int index)
 static int find_slots(struct device *dev, phys_addr_t orig_addr,
 		size_t alloc_size)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long boundary_mask = dma_get_seg_boundary(dev);
 	dma_addr_t tbl_dma_addr =
 		phys_to_dma_unencrypted(dev, mem->start) & boundary_mask;
@@ -510,7 +510,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 		size_t mapping_size, size_t alloc_size,
 		enum dma_data_direction dir, unsigned long attrs)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned int offset = swiotlb_align_offset(dev, orig_addr);
 	unsigned int i;
 	int index;
@@ -553,7 +553,7 @@ phys_addr_t swiotlb_tbl_map_single(struct device *dev, phys_addr_t orig_addr,
 
 static void release_slots(struct device *dev, phys_addr_t tlb_addr)
 {
-	struct io_tlb_mem *mem = get_io_tlb_mem(dev);
+	struct io_tlb_mem *mem = dev->dma_io_tlb_mem;
 	unsigned long flags;
 	unsigned int offset = swiotlb_align_offset(dev, tlb_addr);
 	int index = (tlb_addr - offset - mem->start) >> IO_TLB_SHIFT;
@@ -670,7 +670,7 @@ size_t swiotlb_max_mapping_size(struct device *dev)
 
 bool is_swiotlb_active(struct device *dev)
 {
-	return get_io_tlb_mem(dev) != NULL;
+	return dev->dma_io_tlb_mem;
 }
 EXPORT_SYMBOL_GPL(is_swiotlb_active);
 
@@ -741,7 +741,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 	struct io_tlb_mem *mem = rmem->priv;
 	unsigned long nslabs = rmem->size >> IO_TLB_SHIFT;
 
-	if (dev->dma_io_tlb_mem)
+	if (dev->dma_io_tlb_mem != io_tlb_default_mem)
 		return 0;
 
 	/*
@@ -760,6 +760,7 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 		}
 
 		swiotlb_init_io_tlb_mem(mem, rmem->base, nslabs, false);
+		mem->force_swiotlb = true;
 
 		rmem->priv = mem;
 
@@ -768,15 +769,13 @@ static int rmem_swiotlb_device_init(struct reserved_mem *rmem,
 	}
 
 	dev->dma_io_tlb_mem = mem;
-
 	return 0;
 }
 
 static void rmem_swiotlb_device_release(struct reserved_mem *rmem,
 					struct device *dev)
 {
-	if (dev)
-		dev->dma_io_tlb_mem = NULL;
+	dev->dma_io_tlb_mem = io_tlb_default_mem;
 }
 
 static const struct reserved_mem_ops rmem_swiotlb_ops = {
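The patch above relies on an invariant: every device's dma_io_tlb_mem points at io_tlb_default_mem unless a restricted pool has been attached, so helpers like is_dev_swiotlb_force() can dereference it unconditionally. A minimal userspace sketch of that invariant (the struct layouts mirror the patch, but the harness is hypothetical):

```c
#include <stdbool.h>
#include <stddef.h>

/* Stand-ins mirroring the kernel structures touched by the patch. */
struct io_tlb_mem {
	bool force_swiotlb;
};

static struct io_tlb_mem default_mem;          /* io_tlb_default_mem */
static struct io_tlb_mem restricted_mem = {    /* a restricted DMA pool */
	.force_swiotlb = true,
};

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* With the default-pool invariant, no NULL check is needed here. */
static bool is_dev_swiotlb_force(struct device *dev)
{
	return dev->dma_io_tlb_mem->force_swiotlb;
}
```

In the patch, a device starts out pointing at the default pool and only rmem_swiotlb_device_init() retargets it; rmem_swiotlb_device_release() restores the default pool rather than NULL, which is what keeps the dereference safe.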


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:28:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:28:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133408.248699 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG3n-0001aP-33; Thu, 27 May 2021 13:28:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133408.248699; Thu, 27 May 2021 13:28:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG3n-0001aI-06; Thu, 27 May 2021 13:28:27 +0000
Received: by outflank-mailman (input) for mailman id 133408;
 Thu, 27 May 2021 13:28:26 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TI9I=KW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lmG3m-0001a3-BD
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:28:26 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 91edc2c0-5209-4ece-ac62-b4bdcddc9512;
 Thu, 27 May 2021 13:28:25 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 290AE68AFE; Thu, 27 May 2021 15:28:23 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 91edc2c0-5209-4ece-ac62-b4bdcddc9512
Date: Thu, 27 May 2021 15:28:22 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 07/15] swiotlb: Update is_swiotlb_active to add a
 struct device argument
Message-ID: <20210527132822.GE26160@lst.de>
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-8-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210518064215.2856977-8-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

> +	if (is_swiotlb_active(NULL)) {

Passing a NULL argument to this doesn't make sense; they all should have
a struct device at hand, you'll just need to dig for it.

And this function is supposed to go away anyway, but until then we
need to do this properly.
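For context on why NULL is no longer acceptable: after the per-device rework in this series, is_swiotlb_active() reads the device's own dma_io_tlb_mem field directly, so a NULL dev would oops instead of querying the default pool. A minimal userspace sketch (stand-in structs; the harness is hypothetical):

```c
#include <stdbool.h>
#include <stddef.h>

struct io_tlb_mem { int nslabs; };

struct device {
	struct io_tlb_mem *dma_io_tlb_mem;
};

/* Mirrors the reworked helper: it dereferences dev unconditionally,
 * so callers must dig up the real struct device instead of passing NULL. */
static bool is_swiotlb_active(struct device *dev)
{
	return dev->dma_io_tlb_mem != NULL;
}
```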


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:30:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:30:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133420.248710 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG5U-0002zX-Ew; Thu, 27 May 2021 13:30:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133420.248710; Thu, 27 May 2021 13:30:12 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG5U-0002zQ-Bd; Thu, 27 May 2021 13:30:12 +0000
Received: by outflank-mailman (input) for mailman id 133420;
 Thu, 27 May 2021 13:30:11 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lmG5S-0002zI-Ur
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:30:10 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmG5S-0000FM-R9; Thu, 27 May 2021 13:30:10 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmG5S-0002vw-KM; Thu, 27 May 2021 13:30:10 +0000
Subject: Re: [PATCH v2 01/12] x86: introduce ioremap_wc()
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <20abac99-609c-f4f6-1242-c79919f4c317@suse.com>
 <b8035805-4f44-18ce-f4cb-4ce1d3c594fc@xen.org>
 <d7019879-037b-7945-4a8a-5a8252e5922a@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <7d32ff1c-315c-f2da-bc1b-06fb2233fe55@xen.org>
Date: Thu, 27 May 2021 14:30:08 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <d7019879-037b-7945-4a8a-5a8252e5922a@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 27/05/2021 14:09, Jan Beulich wrote:
> On 27.05.2021 14:48, Julien Grall wrote:
>> On 27/05/2021 13:30, Jan Beulich wrote:
>>> --- a/xen/arch/x86/mm.c
>>> +++ b/xen/arch/x86/mm.c
>>> @@ -5881,6 +5881,20 @@ void __iomem *ioremap(paddr_t pa, size_t
>>>        return (void __force __iomem *)va;
>>>    }
>>>    
>>> +void __iomem *__init ioremap_wc(paddr_t pa, size_t len)
>>> +{
>>> +    mfn_t mfn = _mfn(PFN_DOWN(pa));
>>> +    unsigned int offs = pa & (PAGE_SIZE - 1);
>>> +    unsigned int nr = PFN_UP(offs + len);
>>> +    void *va;
>>> +
>>> +    WARN_ON(page_is_ram_type(mfn_x(mfn), RAM_TYPE_CONVENTIONAL));
>>> +
>>> +    va = __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR_WC, VMAP_DEFAULT);
>>> +
>>> +    return (void __force __iomem *)(va + offs);
>>> +}
>>
>> Arm is already providing ioremap_wc() which is a wrapper to
>> ioremap_attr().
> 
> I did notice this, yes.
> 
>> Can this be moved to the common code to avoid duplication?
> 
> If by "this" you mean ioremap_attr(), then I wasn't convinced we want
> a function of this name on x86.

I am open to other names.

> In particular you may note that
> x86'es ioremap() is sort of the equivalent of Arm's ioremap_nocache(),
> but is different from the new ioremap_wc() by more than just the
> different PTE attributes.

That's because ioremap() will not vmap() the first MB, am I correct? If
so, I am not sure why you want to do that in ioremap() but not in
ioremap_wc(). Wouldn't this result in accessing the memory with
mismatched attributes?

> Also I was specifically asked to make ioremap_wc() __init; ioremap()
> cannot be, because of at least the use from pci_vtd_quirk().

I am not sure this is relevant to the conversation here. I am sure there
are other functions that would benefit from being __init on one arch but
can't be on the other. Yet, common code can be beneficial.

> 
> Plus I'd need to clean up Arm's lack of __iomem if I wanted to fold
> things. 

__iomem is a NOP on Xen. So while the annotation may not be consistently
used, I don't see the clean-up as a requirement for consolidating the code...

> Or wait - it's declaration and definition which are out of
> sync there, i.e. a pre-existing issue.

We don't usually add __init on both the declaration and definition. So 
why would it be necessary to add __iomem in both cases?

Cheers,

-- 
Julien Grall
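The offset and page-count arithmetic in the quoted ioremap_wc() can be sanity-checked in isolation. A userspace sketch with a fake __vmap() (the fake_vmap harness is mine; the PFN arithmetic mirrors the patch):

```c
#include <stddef.h>
#include <stdint.h>

#define PAGE_SHIFT  12
#define PAGE_SIZE   (1UL << PAGE_SHIFT)
#define PFN_DOWN(x) ((x) >> PAGE_SHIFT)
#define PFN_UP(x)   (((x) + PAGE_SIZE - 1) >> PAGE_SHIFT)

static unsigned long last_mfn, last_nr;   /* what the fake vmap was asked for */

static void *fake_vmap(unsigned long mfn, unsigned int nr)
{
	last_mfn = mfn;
	last_nr = nr;
	return (void *)(uintptr_t)0x100000;   /* pretend mapping base */
}

/* Same page/offset arithmetic as the proposed x86 ioremap_wc(). */
static void *ioremap_wc_sketch(uint64_t pa, size_t len)
{
	unsigned long mfn = PFN_DOWN(pa);
	unsigned int offs = pa & (PAGE_SIZE - 1);
	unsigned int nr = PFN_UP(offs + len);
	char *va = fake_vmap(mfn, nr);

	return va ? va + offs : NULL;
}
```

For example, a 0x2000-byte request at pa 0x1234 straddles three pages, so three frames must be mapped and the returned pointer keeps the sub-page offset 0x234.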


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:30:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:30:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133425.248721 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG69-0003cR-RC; Thu, 27 May 2021 13:30:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133425.248721; Thu, 27 May 2021 13:30:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG69-0003cK-OJ; Thu, 27 May 2021 13:30:53 +0000
Received: by outflank-mailman (input) for mailman id 133425;
 Thu, 27 May 2021 13:30:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TI9I=KW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lmG68-0003c6-Am
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:30:52 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3c14ae65-8a4e-4fcb-b721-a66cf2a15239;
 Thu, 27 May 2021 13:30:51 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id DA26F68AFE; Thu, 27 May 2021 15:30:46 +0200 (CEST)
X-Inumbo-ID: 3c14ae65-8a4e-4fcb-b721-a66cf2a15239
Date: Thu, 27 May 2021 15:30:46 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v7 13/15] dma-direct: Allocate memory from restricted
 DMA pool if available
Message-ID: <20210527133046.GF26160@lst.de>
References: <20210518064215.2856977-1-tientzu@chromium.org> <20210518064215.2856977-14-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210518064215.2856977-14-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	if (swiotlb_free(dev, page, size))
> +		return;
> +#endif

Please avoid the ifdefs by either stubbing out the function as a no-op
or using IS_ENABLED().

> +#ifdef CONFIG_DMA_RESTRICTED_POOL
> +	page = swiotlb_alloc(dev, size);
> +	if (page && !dma_coherent_ok(dev, page_to_phys(page), size)) {
> +		__dma_direct_free_pages(dev, page, size);
> +		page = NULL;
> +	}
> +#endif

Same here; for the stub it would just return NULL.
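The stub pattern being asked for can be sketched like this (the function names mirror the quoted patch; the ifdef-free caller is a hypothetical illustration):

```c
#include <stdbool.h>
#include <stddef.h>

struct device;
struct page;

#ifdef CONFIG_DMA_RESTRICTED_POOL
struct page *swiotlb_alloc(struct device *dev, size_t size);
bool swiotlb_free(struct device *dev, struct page *page, size_t size);
#else
/* Stubs: with the pool compiled out, alloc finds nothing and free
 * frees nothing, so callers need no #ifdef of their own. */
static inline struct page *swiotlb_alloc(struct device *dev, size_t size)
{
	(void)dev; (void)size;
	return NULL;
}
static inline bool swiotlb_free(struct device *dev, struct page *page,
				size_t size)
{
	(void)dev; (void)page; (void)size;
	return false;
}
#endif

/* Caller side, now ifdef-free: */
static bool free_via_pool_first(struct device *dev, struct page *page,
				size_t size)
{
	if (swiotlb_free(dev, page, size))
		return true;   /* page came from the restricted pool */
	/* ... otherwise fall through to the normal free path ... */
	return false;
}
```

The same effect can be had with `if (IS_ENABLED(CONFIG_DMA_RESTRICTED_POOL))` around the call, which additionally keeps both branches visible to the compiler.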


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:32:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:32:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133436.248736 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG7U-0004Ii-8W; Thu, 27 May 2021 13:32:16 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133436.248736; Thu, 27 May 2021 13:32:16 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmG7U-0004Ib-5F; Thu, 27 May 2021 13:32:16 +0000
Received: by outflank-mailman (input) for mailman id 133436;
 Thu, 27 May 2021 13:32:15 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=TI9I=KW=lst.de=hch@srs-us1.protection.inumbo.net>)
 id 1lmG7T-0004IS-Qw
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:32:15 +0000
Received: from verein.lst.de (unknown [213.95.11.211])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id b9cb9524-32bd-4b41-9727-bbc7553e21c7;
 Thu, 27 May 2021 13:32:14 +0000 (UTC)
Received: by verein.lst.de (Postfix, from userid 2407)
 id 9130268AFE; Thu, 27 May 2021 15:32:12 +0200 (CEST)
X-Inumbo-ID: b9cb9524-32bd-4b41-9727-bbc7553e21c7
Date: Thu, 27 May 2021 15:32:12 +0200
From: Christoph Hellwig <hch@lst.de>
To: Claire Chang <tientzu@chromium.org>
Cc: Rob Herring <robh+dt@kernel.org>, mpe@ellerman.id.au,
	Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>,
	Frank Rowand <frowand.list@gmail.com>,
	Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
	boris.ostrovsky@oracle.com, jgross@suse.com,
	Christoph Hellwig <hch@lst.de>,
	Marek Szyprowski <m.szyprowski@samsung.com>,
	benh@kernel.crashing.org, paulus@samba.org,
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
	sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
	grant.likely@arm.com, xypron.glpk@gmx.de,
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org,
	bauerman@linux.ibm.com, peterz@infradead.org,
	Greg KH <gregkh@linuxfoundation.org>,
	Saravana Kannan <saravanak@google.com>,
	"Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
	heikki.krogerus@linux.intel.com,
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
	Randy Dunlap <rdunlap@infradead.org>,
	Dan Williams <dan.j.williams@intel.com>,
	Bartosz Golaszewski <bgolaszewski@baylibre.com>,
	linux-devicetree <devicetree@vger.kernel.org>,
	lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
	xen-devel@lists.xenproject.org,
	Nicolas Boichat <drinkcat@chromium.org>,
	Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
	bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
	daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
	jxgao@google.com, joonas.lahtinen@linux.intel.com,
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
	matthew.auld@intel.com, rodrigo.vivi@intel.com,
	thomas.hellstrom@linux.intel.com
Subject: Re: [PATCH v8 00/15] Restricted DMA
Message-ID: <20210527133212.GA27432@lst.de>
References: <20210527125845.1852284-1-tientzu@chromium.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <20210527125845.1852284-1-tientzu@chromium.org>
User-Agent: Mutt/1.5.17 (2007-11-01)

I just finished reviewing v7, sorry.  Let me find some time to see what
difference this version makes.


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:46:52 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 13:46:52 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133470.248750 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmGLX-00063j-Ea; Thu, 27 May 2021 13:46:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133470.248750; Thu, 27 May 2021 13:46:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmGLX-00063c-BR; Thu, 27 May 2021 13:46:47 +0000
Received: by outflank-mailman (input) for mailman id 133470;
 Thu, 27 May 2021 13:46:46 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ln4B=KW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmGLW-00063W-FE
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 13:46:46 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 1c81b972-0b8d-42fc-8f08-2c70236488b8;
 Thu, 27 May 2021 13:46:45 +0000 (UTC)
X-Inumbo-ID: 1c81b972-0b8d-42fc-8f08-2c70236488b8
Date: Thu, 27 May 2021 15:46:36 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
Message-ID: <YK+ivBEno2iuHeoE@Air-de-Roger>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
MIME-Version: 1.0

On Thu, May 27, 2021 at 01:28:07PM +0200, Jan Beulich wrote:
> port_is_valid() and evtchn_from_port() are fine to use without holding
> any locks. Accordingly acquire the per-domain lock slightly later in
> evtchn_close() and evtchn_bind_vcpu().
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.
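The reordering being acked can be sketched in miniature: because `port_is_valid()` and `evtchn_from_port()` are safe lock-free, the per-domain lock only needs to be taken once the port is known to exist. The following stand-alone sketch uses stub locking primitives and a counter purely for illustration; none of these helper names or the simplified teardown are the actual Xen implementation.

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_PORTS 16

struct evtchn { int state; };

static struct evtchn channels[MAX_PORTS];
static unsigned int valid_ports = 8;
static int lock_acquisitions;   /* counts how often the lock is taken */

/* Stand-ins for the real primitives (hypothetical, for the sketch only). */
static bool port_is_valid(unsigned int port) { return port < valid_ports; }
static struct evtchn *evtchn_from_port(unsigned int port) { return &channels[port]; }
static void domain_lock(void)   { lock_acquisitions++; }
static void domain_unlock(void) { }

/* Sketch of the reordering: validate the port lock-free, and only
 * acquire the per-domain lock once the port is known to be valid. */
static int evtchn_close_sketch(unsigned int port)
{
    struct evtchn *chn;

    if ( !port_is_valid(port) )
        return -22;             /* -EINVAL; no lock was ever taken */

    chn = evtchn_from_port(port);

    domain_lock();              /* acquired later than before the patch */
    chn->state = 0;             /* placeholder for the real teardown */
    domain_unlock();

    return 0;
}
```

The payoff is that the error path for an invalid port never touches the lock at all.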


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:52:17 2021
Date: Thu, 27 May 2021 15:52:01 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, Andrew
 Cooper <andrew.cooper3@citrix.com>, George Dunlap
	<George.Dunlap@eu.citrix.com>, Ian Jackson <iwj@xenproject.org>, Julien Grall
	<julien@xen.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v6 3/3] evtchn: type adjustments
Message-ID: <YK+kAYv0YENvuuqQ@Air-de-Roger>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <8f7b57da-cd10-5f96-62fe-1a6e28c8981f@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <8f7b57da-cd10-5f96-62fe-1a6e28c8981f@suse.com>
X-OriginatorOrg: citrix.com

On Thu, May 27, 2021 at 01:28:59PM +0200, Jan Beulich wrote:
> First of all avoid "long" when "int" suffices, i.e. in particular when
> merely conveying error codes. 32-bit values are slightly cheaper to
> deal with on x86, and their processing is at least no more expensive on
> Arm. Where possible use evtchn_port_t for port numbers and unsigned int
> for other unsigned quantities in adjacent code. In evtchn_set_priority()
> eliminate a local variable altogether instead of changing its type.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>

Acked-by: Roger Pau Monné <roger.pau@citrix.com>

Just one style nit below.

> -long evtchn_bind_vcpu(unsigned int port, unsigned int vcpu_id)
> +int evtchn_bind_vcpu(evtchn_port_t port, unsigned int vcpu_id)
>  {
>      struct domain *d = current->domain;
>      struct evtchn *chn;
> -    long           rc = 0;
> +    int           rc = 0;
                     ^ I think you are missing one space here; other
                       functions in the same file don't align at the *.

Thanks, Roger.
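The type adjustment under review can be shown in a tiny self-contained sketch: error codes fit comfortably in `int`, and port numbers use `evtchn_port_t` (a `uint32_t` in Xen's public headers). The function body below is a hypothetical placeholder, not the real `evtchn_bind_vcpu()` logic.

```c
#include <stdint.h>

typedef uint32_t evtchn_port_t;   /* as in Xen's public event_channel.h */

/* Sketch of the adjusted signature: "int" suffices for conveying an
 * error code, so "long" is unnecessarily wide.  The body here is a
 * stand-in, not the actual binding logic. */
static int evtchn_bind_vcpu_sketch(evtchn_port_t port, unsigned int vcpu_id)
{
    int rc = 0;    /* was "long rc = 0;" before the patch */

    (void)vcpu_id;
    if ( port == 0 )
        rc = -22;  /* -EINVAL */

    return rc;
}
```

On LP64 targets this narrows the return value from 8 to 4 bytes, which is the "slightly cheaper on x86, no worse on Arm" point from the commit message.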


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:55:21 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162191-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162191: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=b1164a8e68a5aecfc2ff420e4f31c35abbf78455
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 13:55:18 +0000

flight 162191 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162191/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              b1164a8e68a5aecfc2ff420e4f31c35abbf78455
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  321 days
Failing since        151818  2020-07-11 04:18:52 Z  320 days  313 attempts
Testing same since   162159  2021-05-26 04:20:08 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu<tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59516 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 27 13:58:49 2021
Subject: Re: [PATCH v2 07/12] mm: allow page scrubbing routine(s) to be arch
 controlled
To: Julien Grall <julien@xen.org>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <49c46d4d-4eaa-16a8-ccc8-c873b0b1d092@suse.com>
 <b1c10ad9-2cef-031d-39c2-8d2013b3e0b5@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e805e525-f024-8b5f-3814-0c1346a227f8@suse.com>
Date: Thu, 27 May 2021 15:58:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <b1c10ad9-2cef-031d-39c2-8d2013b3e0b5@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.05.2021 15:06, Julien Grall wrote:
> On 27/05/2021 13:33, Jan Beulich wrote:
>> Especially when dealing with large amounts of memory, memset() may not
>> be very efficient; this can be bad enough that even for debug builds a
>> custom function is warranted. We additionally want to distinguish "hot"
>> and "cold" cases.
> 
> Do you have any benchmark showing the performance improvement?

This is based on the numbers provided at
https://lists.xen.org/archives/html/xen-devel/2021-04/msg00716.html,
with some of the prior discussion in the thread rooted at
https://lists.xen.org/archives/html/xen-devel/2021-04/msg00425.html

I'm afraid I lack ideas on how to sensibly measure _all_ of the
effects (i.e. including the amount of disturbing of caches).

>> ---
>> The choice between hot and cold in scrub_one_page()'s callers is
>> certainly up for discussion / improvement.
> 
> To get the discussion started, can you explain how you made the decision 
> between hot/cold? This will also want to be written down in the commit 
> message.

Well, the initial trivial heuristic is "allocation for oneself" vs
"allocation for someone else, or freeing, or scrubbing", i.e. whether
it would be likely that the page will soon be accessed again (or for
the first time).

>> --- /dev/null
>> +++ b/xen/arch/x86/scrub_page.S
>> @@ -0,0 +1,41 @@
>> +        .file __FILE__
>> +
>> +#include <asm/asm_defns.h>
>> +#include <xen/page-size.h>
>> +#include <xen/scrub.h>
>> +
>> +ENTRY(scrub_page_cold)
>> +        mov     $PAGE_SIZE/32, %ecx
>> +        mov     $SCRUB_PATTERN, %rax
>> +
>> +0:      movnti  %rax,   (%rdi)
>> +        movnti  %rax,  8(%rdi)
>> +        movnti  %rax, 16(%rdi)
>> +        movnti  %rax, 24(%rdi)
>> +        add     $32, %rdi
>> +        sub     $1, %ecx
>> +        jnz     0b
>> +
>> +        sfence
>> +        ret
>> +        .type scrub_page_cold, @function
>> +        .size scrub_page_cold, . - scrub_page_cold
>> +
>> +        .macro scrub_page_stosb
>> +        mov     $PAGE_SIZE, %ecx
>> +        mov     $SCRUB_BYTE_PATTERN, %eax
>> +        rep stosb
>> +        ret
>> +        .endm
>> +
>> +        .macro scrub_page_stosq
>> +        mov     $PAGE_SIZE/8, %ecx
>> +        mov     $SCRUB_PATTERN, %rax
>> +        rep stosq
>> +        ret
>> +        .endm
>> +
>> +ENTRY(scrub_page_hot)
>> +        ALTERNATIVE scrub_page_stosq, scrub_page_stosb, X86_FEATURE_ERMS
>> +        .type scrub_page_hot, @function
>> +        .size scrub_page_hot, . - scrub_page_hot
> 
>  From the commit message, it is not clear how the implementation for 
> hot/cold was chosen. Can you outline in the commit message what are the 
> assumption for each helper?

I've added 'The goal is for accesses of "cold" pages to not
disturb caches (albeit finding a good balance between this
and the higher latency looks to be difficult).'

>> @@ -1046,12 +1051,14 @@ static struct page_info *alloc_heap_page
>>       if ( first_dirty != INVALID_DIRTY_IDX ||
>>            (scrub_debug && !(memflags & MEMF_no_scrub)) )
>>       {
>> +        bool cold = d && d != current->domain;
> 
> So the assumption is if the domain is not running, then the content is 
> not in the cache. Is that correct?

Not exactly: For one, instead of "not running" it is "is not the current
domain", i.e. there may still be vCPU-s of the domain running elsewhere.
And for the cache the question isn't so much of "is in cache", but to
avoid needlessly bringing contents into the cache when the data is
unlikely to be used again soon.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 14:10:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 14:10:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133504.248806 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmGiV-0002is-Fj; Thu, 27 May 2021 14:10:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133504.248806; Thu, 27 May 2021 14:10:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmGiV-0002il-C1; Thu, 27 May 2021 14:10:31 +0000
Received: by outflank-mailman (input) for mailman id 133504;
 Thu, 27 May 2021 14:10:30 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=2Vta=KW=aculab.com=david.laight@srs-us1.protection.inumbo.net>)
 id 1lmGiU-0002ib-Ek
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 14:10:30 +0000
Received: from eu-smtp-delivery-151.mimecast.com (unknown [185.58.85.151])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTP
 id 78e1a745-4556-422c-9f63-46d0d9b93036;
 Thu, 27 May 2021 14:10:29 +0000 (UTC)
Received: from AcuMS.aculab.com (156.67.243.121 [156.67.243.121]) (Using
 TLS) by relay.mimecast.com with ESMTP id
 uk-mta-140-jheLYrfmNJS7VYboBqC6Hg-1; Thu, 27 May 2021 15:10:23 +0100
Received: from AcuMS.Aculab.com (fd9f:af1c:a25b:0:994c:f5c2:35d6:9b65) by
 AcuMS.aculab.com (fd9f:af1c:a25b:0:994c:f5c2:35d6:9b65) with Microsoft SMTP
 Server (TLS) id 15.0.1497.2; Thu, 27 May 2021 15:10:21 +0100
Received: from AcuMS.Aculab.com ([fe80::994c:f5c2:35d6:9b65]) by
 AcuMS.aculab.com ([fe80::994c:f5c2:35d6:9b65%12]) with mapi id
 15.00.1497.015; Thu, 27 May 2021 15:10:21 +0100
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 78e1a745-4556-422c-9f63-46d0d9b93036
X-MC-Unique: jheLYrfmNJS7VYboBqC6Hg-1
From: David Laight <David.Laight@ACULAB.COM>
To: 'Chen Huang' <chenhuang5@huawei.com>, Michael Ellerman
	<mpe@ellerman.id.au>, Benjamin Herrenschmidt <benh@kernel.crashing.org>,
	"Paul Mackerras" <paulus@samba.org>, Boris Ostrovsky
	<boris.ostrovsky@oracle.com>, Juergen Gross <jgross@suse.com>, Stefano
 Stabellini <sstabellini@kernel.org>, Mark Fasheh <mark@fasheh.com>, Joel
 Becker <jlbec@evilplan.org>, Joseph Qi <joseph.qi@linux.alibaba.com>, Nathan
 Lynch <nathanl@linux.ibm.com>, "Andrew Donnellan" <ajd@linux.ibm.com>, Alexey
 Kardashevskiy <aik@ozlabs.ru>, "Andrew Morton" <akpm@linux-foundation.org>,
	Stephen Rothwell <sfr@canb.auug.org.au>, Jens Axboe <axboe@kernel.dk>, Yang
 Yingliang <yangyingliang@huawei.com>, Masahiro Yamada <masahiroy@kernel.org>,
	Dan Carpenter <dan.carpenter@oracle.com>
CC: "linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"ocfs2-devel@oss.oracle.com" <ocfs2-devel@oss.oracle.com>
Subject: RE: [PATCH -next 2/3] xen: balloon: Replaced simple_strtoull() with
 kstrtoull()
Thread-Topic: [PATCH -next 2/3] xen: balloon: Replaced simple_strtoull() with
 kstrtoull()
Thread-Index: AQHXUg74rK/OAxNe1kmGDaODmfSqv6r3X4NA
Date: Thu, 27 May 2021 14:10:21 +0000
Message-ID: <0f03f9b9ff41460db2935e077f7f80c7@AcuMS.aculab.com>
References: <20210526092020.554341-1-chenhuang5@huawei.com>
 <20210526092020.554341-2-chenhuang5@huawei.com>
In-Reply-To: <20210526092020.554341-2-chenhuang5@huawei.com>
Accept-Language: en-GB, en-US
X-MS-Has-Attach: 
X-MS-TNEF-Correlator: 
x-ms-exchange-transport-fromentityheader: Hosted
x-originating-ip: [10.202.205.107]
MIME-Version: 1.0
Authentication-Results: relay.mimecast.com;
	auth=pass smtp.auth=C51A453 smtp.mailfrom=david.laight@aculab.com
X-Mimecast-Spam-Score: 0
X-Mimecast-Originator: aculab.com
Content-Language: en-US
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: quoted-printable

From: Chen Huang
> Sent: 26 May 2021 10:20
> 
> The simple_strtoull() function is deprecated in some situations, since
> it does not check for the range overflow, use kstrtoull() instead.
> 
...
> -	target_bytes = simple_strtoull(buf, &endchar, 0) * 1024;
> +	ret = kstrtoull(buf, 0, &target_bytes);
> +	if (ret)
> +		return ret;
> +	target_bytes *= 1024;

I'd have thought it was more important to check *endchar
than overflow.
If you are worried about overflow you need a range check
before the multiply.

	David

-
Registered Address Lakeside, Bramley Road, Mount Farm, Milton Keynes, MK1 1PT, UK
Registration No: 1397386 (Wales)



From xen-devel-bounces@lists.xenproject.org Thu May 27 14:39:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 14:39:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133515.248829 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHAL-0005He-Qo; Thu, 27 May 2021 14:39:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133515.248829; Thu, 27 May 2021 14:39:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHAL-0005HX-N0; Thu, 27 May 2021 14:39:17 +0000
Received: by outflank-mailman (input) for mailman id 133515;
 Thu, 27 May 2021 14:39:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=pgLj=KW=oracle.com=dan.carpenter@srs-us1.protection.inumbo.net>)
 id 1lmHAK-0005HR-NC
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 14:39:16 +0000
Received: from aserp2130.oracle.com (unknown [141.146.126.79])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cc27f675-8eed-4b78-a79d-85df67cc8ca1;
 Thu, 27 May 2021 14:39:15 +0000 (UTC)
Received: from pps.filterd (aserp2130.oracle.com [127.0.0.1])
 by aserp2130.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14REZINU091897;
 Thu, 27 May 2021 14:37:54 GMT
Received: from aserp3030.oracle.com (aserp3030.oracle.com [141.146.126.71])
 by aserp2130.oracle.com with ESMTP id 38pqfcmea7-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 27 May 2021 14:37:54 +0000
Received: from pps.filterd (aserp3030.oracle.com [127.0.0.1])
 by aserp3030.oracle.com (8.16.0.42/8.16.0.42) with SMTP id 14REaAXE101211;
 Thu, 27 May 2021 14:37:54 GMT
Received: from pps.reinject (localhost [127.0.0.1])
 by aserp3030.oracle.com with ESMTP id 38pr0dn4mk-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 27 May 2021 14:37:54 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 14REbrPF103775;
 Thu, 27 May 2021 14:37:53 GMT
Received: from pps.reinject (localhost [127.0.0.1])
 by aserp3030.oracle.com with ESMTP id 38pr0dn4mb-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 27 May 2021 14:37:53 +0000
Received: from aserp3030.oracle.com (aserp3030.oracle.com [127.0.0.1])
 by pps.reinject (8.16.0.36/8.16.0.36) with SMTP id 14REbqlO103757;
 Thu, 27 May 2021 14:37:52 GMT
Received: from aserv0121.oracle.com (aserv0121.oracle.com [141.146.126.235])
 by aserp3030.oracle.com with ESMTP id 38pr0dn4m2-1
 (version=TLSv1.2 cipher=ECDHE-RSA-AES256-GCM-SHA384 bits=256 verify=OK);
 Thu, 27 May 2021 14:37:52 +0000
Received: from abhmp0004.oracle.com (abhmp0004.oracle.com [141.146.116.10])
 by aserv0121.oracle.com (8.14.4/8.13.8) with ESMTP id 14REbffR028351;
 Thu, 27 May 2021 14:37:41 GMT
Received: from kadam (/41.212.42.34) by default (Oracle Beehive Gateway v4.0)
 with ESMTP ; Thu, 27 May 2021 14:37:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cc27f675-8eed-4b78-a79d-85df67cc8ca1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=oracle.com; h=date : from : to : cc
 : subject : message-id : references : mime-version : content-type :
 in-reply-to; s=corp-2020-01-29;
 bh=2cWNSzPHxT4+TtO20cktydNM136a4XNJfF9qm2ldDgc=;
 b=COG/l0eFrIGy2v8yfYCjPo096ZmT871QOihIxtXKWxdX/gRd2i64dkO9nJR9mFsFcg8s
 3d+wIDdy60rKaLYivKQpZPfVzToBYwUnV9vAVW/pcIyHdLiAaVgsi9Tu2onm4ZFfp54w
 joeJwhrG6Fwb/1rf4pDvTdhkcGQ9RsDP692VcxB62fISCWEaAg3BMGUSqkN0ZnTHNiec
 UnOo8ZapdzcIPJFw5rBK/0kdw1QF/1LSdXdflbdWDpDM28s/pv52J23rsgzrFEPPFqHD
 7QxtJIdiBoTY9vGbkbOzZGabxFxQ4Yl2Kd0Oh+0S40V99J/6uxTA0OyyOm+am9CAIJcD Fg== 
Date: Thu, 27 May 2021 17:37:30 +0300
From: Dan Carpenter <dan.carpenter@oracle.com>
To: David Laight <David.Laight@ACULAB.COM>
Cc: "'Chen Huang'" <chenhuang5@huawei.com>,
        Michael Ellerman <mpe@ellerman.id.au>,
        Benjamin Herrenschmidt <benh@kernel.crashing.org>,
        Paul Mackerras <paulus@samba.org>,
        Boris Ostrovsky <boris.ostrovsky@oracle.com>,
        Juergen Gross <jgross@suse.com>,
        Stefano Stabellini <sstabellini@kernel.org>,
        Mark Fasheh <mark@fasheh.com>, Joel Becker <jlbec@evilplan.org>,
        Joseph Qi <joseph.qi@linux.alibaba.com>,
        Nathan Lynch <nathanl@linux.ibm.com>,
        Andrew Donnellan <ajd@linux.ibm.com>,
        Alexey Kardashevskiy <aik@ozlabs.ru>,
        Andrew Morton <akpm@linux-foundation.org>,
        Stephen Rothwell <sfr@canb.auug.org.au>, Jens Axboe <axboe@kernel.dk>,
        Yang Yingliang <yangyingliang@huawei.com>,
        Masahiro Yamada <masahiroy@kernel.org>,
        "linuxppc-dev@lists.ozlabs.org" <linuxppc-dev@lists.ozlabs.org>,
        "linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
        "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
        "ocfs2-devel@oss.oracle.com" <ocfs2-devel@oss.oracle.com>
Subject: Re: [PATCH -next 2/3] xen: balloon: Replaced simple_strtoull() with
 kstrtoull()
Message-ID: <20210527143729.GL24442@kadam>
References: <20210526092020.554341-1-chenhuang5@huawei.com>
 <20210526092020.554341-2-chenhuang5@huawei.com>
 <0f03f9b9ff41460db2935e077f7f80c7@AcuMS.aculab.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <0f03f9b9ff41460db2935e077f7f80c7@AcuMS.aculab.com>
User-Agent: Mutt/1.9.4 (2018-02-28)
X-Proofpoint-ORIG-GUID: xI1PN-Oc27c8aWvBSQ4m3wXmnwlPjzpN
X-Proofpoint-GUID: xI1PN-Oc27c8aWvBSQ4m3wXmnwlPjzpN
X-Proofpoint-Virus-Version: vendor=nai engine=6200 definitions=9996 signatures=668682
X-Proofpoint-Spam-Details: rule=notspam policy=default score=0 bulkscore=0 spamscore=0 mlxscore=0
 malwarescore=0 mlxlogscore=999 lowpriorityscore=0 impostorscore=0
 adultscore=0 phishscore=0 priorityscore=1501 clxscore=1011 suspectscore=0
 classifier=spam adjust=0 reason=mlx scancount=1 engine=8.12.0-2104190000
 definitions=main-2105270095

On Thu, May 27, 2021 at 02:10:21PM +0000, David Laight wrote:
> From: Chen Huang
> > Sent: 26 May 2021 10:20
> > 
> > The simple_strtoull() function is deprecated in some situations, since
> > it does not check for the range overflow, use kstrtoull() instead.
> > 
> ...
> > -	target_bytes = simple_strtoull(buf, &endchar, 0) * 1024;
> > +	ret = kstrtoull(buf, 0, &target_bytes);
> > +	if (ret)
> > +		return ret;
> > +	target_bytes *= 1024;
> 
> I'd have thought it was more important to check *endchar
> than overflow.

That's one of the differences between simple_strtoull() and kstrtoull().
The simple_strtoull() will accept a string like "123ABC", but kstrtoull()
will only accept NUL terminated numbers or a newline followed by a NUL
terminator.  Which is fine in this context because users will be doing
"echo 1234 > /sys/foo".

> If you are worried about overflow you need a range check
> before the multiply.

This is probably a case where if the users cause an integer overflow
then they get what they deserve.

regards,
dan carpenter


From xen-devel-bounces@lists.xenproject.org Thu May 27 14:41:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 14:41:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133521.248839 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHCE-0006aF-5R; Thu, 27 May 2021 14:41:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133521.248839; Thu, 27 May 2021 14:41:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHCE-0006a8-2O; Thu, 27 May 2021 14:41:14 +0000
Received: by outflank-mailman (input) for mailman id 133521;
 Thu, 27 May 2021 14:41:12 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A0TX=KW=amd.com=thomas.lendacky@srs-us1.protection.inumbo.net>)
 id 1lmHCC-0006a2-30
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 14:41:12 +0000
Received: from NAM02-BN1-obe.outbound.protection.outlook.com (unknown
 [40.107.212.56]) by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 83f73ff8-38c1-4a3b-890c-461675fd55d8;
 Thu, 27 May 2021 14:41:10 +0000 (UTC)
Received: from DM5PR12MB1355.namprd12.prod.outlook.com (2603:10b6:3:6e::7) by
 DM6PR12MB4218.namprd12.prod.outlook.com (2603:10b6:5:21b::16) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.24; Thu, 27 May 2021 14:41:08 +0000
Received: from DM5PR12MB1355.namprd12.prod.outlook.com
 ([fe80::b914:4704:ad6f:aba9]) by DM5PR12MB1355.namprd12.prod.outlook.com
 ([fe80::b914:4704:ad6f:aba9%12]) with mapi id 15.20.4173.022; Thu, 27 May
 2021 14:41:08 +0000
Received: from office-linux.texastahm.com (67.79.209.213) by
 SN2PR01CA0050.prod.exchangelabs.com (2603:10b6:800::18) with Microsoft SMTP
 Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.22 via Frontend Transport; Thu, 27 May 2021 14:41:04 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 83f73ff8-38c1-4a3b-890c-461675fd55d8
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=a5Qkid7FttMYKKHYpUGdBU+35V4v+SsRNsnuT4KRcIxWmjURLSiJhV9KUNRs240BnwURE1lRi47gpOsPbOsoebE3T56cqXAdgiCT+fx2KymwxFZ1LE8I2UAxnkpKa9HX28sax9Jrcm0xSx+cTkLWlhp+Ey3Jpz307gLGa8bcZWe5URxMT9w/bjuRk4FESXGP7OD/2wkbRrr2moxKTcs3m65CU22s6uqLDe5Y/qZmePTDvPIDjkxBK827NFqAr6IxyM4UA4OLyIjfrit49TrRfTuj+JdDzNcOM54wWoxH6j3PRg7/nWJjOe6bXRUuQBqQHWFOaVV7VMjaK2oTNg5KpA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xov16ex2LLNz59XcqP8ZntDrIieWMcGionlMx8pusEY=;
 b=NVhmAjt1LefX6M7BHKuU+NBv/+rPRp7mehKbMgD7Vo8QiapJo1wU7pzItmAroymAmuWmZbLIoGo1v9v/yID6cJLuLRuE66bRyBLGcxVOMj0ICLwQNArdnn+tIuzXTEtrUJna5q5b40Gg1hilSowkndsei0MOMkPuAGswBCooBY9ehcs0iv+OJlFr3pYyoKhAwMd02EqgWiboCbwzwQgPWLYD4xWTKYGERouAwJJTVuKyE974PJpyBZVK5EPYcUTFUkwkEeFamtw9SswgAzezsWe1vCD9I6Uu8+IqHGaZLn8Qa/w0dfGR3MunzK5Zi6z5vbhKmN+k+3Yawg5ugCB4ZA==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=amd.com; dmarc=pass action=none header.from=amd.com; dkim=pass
 header.d=amd.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=amd.com; s=selector1;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=xov16ex2LLNz59XcqP8ZntDrIieWMcGionlMx8pusEY=;
 b=WMnXzm73hbEdQkv9+AAe+P/IpiJ2/2NFybvRfzC8WnYGVfU2s+kuP5a6/0LfGLvH7MnrUc3CO+ODX3GNtyrpvclzfwJALOnWGHsg31xfygktZ4wxstidddi+ggkxlgzKI/MsdTkzJZGdOIn6AisWXY3LVl4ToaYPSvjcMELD0mw=
Authentication-Results: linux.intel.com; dkim=none (message not signed)
 header.d=none;linux.intel.com; dmarc=none action=none header.from=amd.com;
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
To: Christoph Hellwig <hch@lst.de>, Florian Fainelli <f.fainelli@gmail.com>
Cc: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Marek Szyprowski <m.szyprowski@samsung.com>,
 benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-2-tientzu@chromium.org>
 <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
 <20210527130211.GA24344@lst.de>
From: Tom Lendacky <thomas.lendacky@amd.com>
Message-ID: <bab261b4-f801-05af-8fd9-c440ed219591@amd.com>
Date: Thu, 27 May 2021 09:41:02 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <20210527130211.GA24344@lst.de>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
X-Originating-IP: [67.79.209.213]
X-ClientProxiedBy: SN2PR01CA0050.prod.exchangelabs.com (2603:10b6:800::18) To
 DM5PR12MB1355.namprd12.prod.outlook.com (2603:10b6:3:6e::7)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 59e14bbc-1eea-4b9f-e0b6-08d9211d768f
X-MS-TrafficTypeDiagnostic: DM6PR12MB4218:
X-Microsoft-Antispam-PRVS:
	<DM6PR12MB4218889520CE03D8A0062CAAEC239@DM6PR12MB4218.namprd12.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info:
	nZNB84YC4kztOdLzWwMIlAuRl+h2KTNM+1A6fbzJA0nn9U3XrzLsdsg5fPGYLYXKx6BpkcxDV/PnT5Ym8OKRkAC0PkUfImgHMBoqNsExvyly+UmVkcW5UhrZRh4m7mC4TM/H784Uu41+EJZBAymgGY0SOBhF2a5m7wjXykLewN7O5UhL5Eokq6HcCehl9IOXO2rn8wkquQmMPSfJgFdghwToh8urF5O/q1hLoIeEzsPLt7qPDr4QG+mzBrMlQ5Faf15YqTi0Ajn1URT4xnVabof/ysvP4a73X2R5RsAJXocyU2CyWZOT2IO02O0KIiQ8bOKNl5OmHjK9PQZCucn4x4Y83ibfder6hoJx3+C6s4L8KJZk5gErfn1bslS9AgrJ+uiMLoLqgcyTGqfBhIc0I6jsmlYifXpA46e+mOWbtMEhNRIw9wJRjOxtH7lF+7fncpH/EjenpZKDSdyqdJjUtZmdOAX9q7ImvJWx6n7KzNtn+CaObMHxcwUdXOBgp+HfTVKk2SbGKKA2I6tMZQGqOaakYmNfT3HSAE0CNVGJ0Nxaav1m7zoG6+P7X40RPeQ2rhCnYWb7HBy+mbEMFp6PPKg+iCoJ5PRwz+8ZQYOEtSvHCypshtkyhG97zbLj+68F/InSYrrqudvTKYNYYNvpubjb0kmyV/Say81zdfegQYU=
X-Forefront-Antispam-Report:
	CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DM5PR12MB1355.namprd12.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(396003)(136003)(346002)(376002)(366004)(39860400002)(38100700002)(4744005)(7406005)(7366002)(7416002)(6512007)(2906002)(53546011)(86362001)(8936002)(478600001)(2616005)(16526019)(956004)(316002)(5660300002)(36756003)(83380400001)(31686004)(186003)(6506007)(6486002)(54906003)(66476007)(110136005)(26005)(31696002)(4326008)(66556008)(66946007)(8676002)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData:
	=?utf-8?B?OGgyZW1PNU5qT3ZSeisxckZFc0tldW91cTExWEx4RVFXbjBLS2JVTitudGd5?=
 =?utf-8?B?ZXB0bDY0SVM3cTZrd1JGeFo0SDRjMTlEVkJmcnZ0MzI0UnZSWVZRQlZEY0dV?=
 =?utf-8?B?VjRaSlkyNm1KMEsxcFg4VGZEWVA2T25tYW4wajdHR0YxNGJBRjV4eXZYRUpT?=
 =?utf-8?B?QWkybUc3UVlnNTBHS3M0NUlhU0MvWDRsRldpck1iSmh2aHFNalM2MXBqcmpa?=
 =?utf-8?B?QUU1YnZvWjJlR0lYRVpvbjhlSmVTS1RqVERibnl4M1Y4eDR5QUlJWlpLMUor?=
 =?utf-8?B?ak5NYldoR0dxczZZNmFwSEtOMHFKcHlmZTVmTEl0TVNmQkl4ZjRncXFRN0tB?=
 =?utf-8?B?TWltUFlTcjVRYUU4d1pUMXBSakkzc2IrMkR2eHBEL2J1N1NtMzFJTytsWTMw?=
 =?utf-8?B?NVVOdFp4TitQVlhDZit2ZEIvM294SHZjeVZCOVZqbTk2SmNYeXV5bCtKa2pD?=
 =?utf-8?B?UEFqSW50ck5BdDRuQUNwUFF3aGVZUDRFUUxEeFlET1FYaC9GU2dSNVUyZzVW?=
 =?utf-8?B?a3NmY3duR2tRYko0NGd4Rzc2NzVGSkJMSWJGMnFFSk41Y1NYb2llSUszdUMv?=
 =?utf-8?B?anZZZjJWSGNoeXJXajZjQlJNcXQ5VzBPVmZ4clc4eEJBT0dGblRZLytia0NP?=
 =?utf-8?B?MURQUEZwVDJSalBsRlFuYUJabC9MakVubG9tVjVJOVdkMnl5ZWtNSU5TOGdM?=
 =?utf-8?B?Z0ducUxiM1daNStlM0tFZzU0emh2YnQxSjkxZmd1cy9LV1R4WmxGUlA0NlhO?=
 =?utf-8?B?SnoyOElpaDhzUHFZdDE4UVpMSmtmdkFpY2pvTksrTUxWejJqV0Vob2NjMUVq?=
 =?utf-8?B?ZUxxdlQraGd0YTZPekNOQ3N4UTNwV0huUG9Mam9rM1hnajc4VE5qT2xnWUNn?=
 =?utf-8?B?VWNBTkxULzlJN3BHNWVnTUc5cDV4S3dmdU9YR3VveWZURnQ1TmJxZjE1aDhy?=
 =?utf-8?B?UXFtNnRPOFladWZab1JwWmloN2plYTdUc2NqaWlFTFZMTEdKTERkZnZSTy8z?=
 =?utf-8?B?blcrbWU1RTVsWFlHNXhFRlUwZFFpdDduWTJNcDRPMGVtVTJ6dVdHQjFYNFVP?=
 =?utf-8?B?djc4NjdKeVc3czFvanFROFFqdU9VOWE5Wmc0d01QTE9GeHAvVFY3SnMvdW5z?=
 =?utf-8?B?Qi9VN2lVUnNsb2pxOWliOXlmcDh6eG9iZUtRZzNXZmFXRFlvUlpNM1FHaGVx?=
 =?utf-8?B?V2hCbUhpWkQwRmdyelozSzFyMDlRNnJveXBCU1c3MlgxYTlTMWIybzc2UjAy?=
 =?utf-8?B?dW9RSnlOakVsblZDN0F6dzdMOW9lNDQxbngrMGpvT3NVTDZpTDVEeXV0T01L?=
 =?utf-8?B?ODM3azY4TU93UE1CMHZ3V3pnUTZycmF6YmliVTZORXBiZ3hJeXJLcnpiTmtM?=
 =?utf-8?B?RHEwalZtaU8rTTY3ajlaaHpmSjJTZEREdGFPcHlBMzJrS2tQZWtDajY0WmVB?=
 =?utf-8?B?K2EzbFZtQUtJWk9rRytLVG12Z2x3WVowRk8vRUdjYzZKUjJIK0tEVWNhcXJG?=
 =?utf-8?B?MndLQlk0cVljRW9NTkU5SWlScTFld3Fld3MrR1VmQXkvTmI2ZTNvZzhqYlhj?=
 =?utf-8?B?ZjNFZzdOV093akNmUlcvcklnYURkQUdWbHZBeXhibnpRSWhGVHJ0TTJmbk42?=
 =?utf-8?B?WTRTUWhzc3YvMFJmbnhTS3RIZDgwajhiVkk5OEx5V3hGRXpySmNCZDdVcGJU?=
 =?utf-8?B?WU9HbGRQN3E5dkFET0kzUUtEZVZDazdIaVl5TkVDMGlzSkZpZEdjb2NEQ2hz?=
 =?utf-8?Q?FkiPGixk4soS+HSivd/GXKKe6qNXTD5QcRXLY01?=
X-OriginatorOrg: amd.com
X-MS-Exchange-CrossTenant-Network-Message-Id: 59e14bbc-1eea-4b9f-e0b6-08d9211d768f
X-MS-Exchange-CrossTenant-AuthSource: DM5PR12MB1355.namprd12.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 May 2021 14:41:08.0642
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 3dd8961f-e488-4e60-8e11-a82d994e183d
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 5CsF8+AyOlbzvidWQoLfPambEK4AEjMpHIQTAAL4vJ0v4GhJJhQZeJ2+YilHLtstBzE3OoZMtjG1uZ8CgLZkHw==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR12MB4218

On 5/27/21 8:02 AM, Christoph Hellwig wrote:
> On Wed, May 19, 2021 at 11:50:07AM -0700, Florian Fainelli wrote:
>> You convert this call site with swiotlb_init_io_tlb_mem() which did not
>> do the set_memory_decrypted()+memset(). Is this okay or should
>> swiotlb_init_io_tlb_mem() add an additional argument to do this
>> conditionally?
> 
> The zeroing is useful and was missing before.  I think having a clean
> state here is the right thing.
> 
> Not sure about the set_memory_decrypted, swiotlb_update_mem_attributes
> kinda suggests it is too early to set the memory decrypted.
> 
> Adding Tom who should know about all this.

The reason for adding swiotlb_update_mem_attributes() was because having
the call to set_memory_decrypted() in swiotlb_init_with_tbl() triggered a
BUG_ON() related to interrupts not being enabled yet during boot. So that
call had to be delayed until interrupts were enabled.

Thanks,
Tom

> 


From xen-devel-bounces@lists.xenproject.org Thu May 27 14:42:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 14:42:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133525.248850 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHD0-0007AI-F5; Thu, 27 May 2021 14:42:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133525.248850; Thu, 27 May 2021 14:42:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHD0-0007AB-C3; Thu, 27 May 2021 14:42:02 +0000
Received: by outflank-mailman (input) for mailman id 133525;
 Thu, 27 May 2021 14:42:00 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmHCy-00079r-DV; Thu, 27 May 2021 14:42:00 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmHCy-0001ZY-85; Thu, 27 May 2021 14:42:00 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmHCx-0001Jt-U8; Thu, 27 May 2021 14:42:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmHCx-0001UG-Tb; Thu, 27 May 2021 14:41:59 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UXdPvTuXeYZHPAc2z+C5lmJLr2yqrSw19M4jsUCxnqg=; b=sNKicIFH1y9ODLyY4bMs1ESDbS
	G2rOpMeD/2yFa+oNrEobBPLGZoYapX60wlH3D9ph/bRP3K1Hi7Pj3IAWQphRZYmLGDP9P6yiOMZAv
	V+6COROBg8DjrEspUIcQDSf3zqnYZXrwGO9PoYO9sk7mfkHAk5r3Qu73Ox+Rr6Ss13fg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162175-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162175: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=67154cff6258e46b05acc9f797e3328ed839b0e2
X-Osstest-Versions-That:
    linux=b239a0365b9339ad5e276ed9cb4605963c2d939a
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 14:41:59 +0000

flight 162175 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162175/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162123
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162123
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162123
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162123
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162123
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162123
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162123
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162123
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162123
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162123
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162123
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                67154cff6258e46b05acc9f797e3328ed839b0e2
baseline version:
 linux                b239a0365b9339ad5e276ed9cb4605963c2d939a

Last test of basis   162123  2021-05-22 10:12:11 Z    5 days
Testing same since   162166  2021-05-26 10:42:44 Z    1 day     2 attempts

------------------------------------------------------------
People who touched revisions under test:
  "Eric W. Biederman" <ebiederm@xmission.com>
  Alex Deucher <alexander.deucher@amd.com>
  Anirudh Rayabharam <mail@anirudhrb.com>
  Atul Gopinathan <atulgopinathan@gmail.com>
  Bart Van Assche <bvanassche@acm.org>
  Ben Chuang <benchuanggli@gmail.com>
  Changfeng <Changfeng.Zhu@amd.com>
  Christoph Hellwig <hch@lst.de>
  Christophe JAILLET <christophe.jaillet@wanadoo.fr>
  Dan Carpenter <dan.carpenter@oracle.com>
  Daniel Beer <dlbeer@gmail.com>
  Daniel Cordova A <danesc87@gmail.com>
  Daniel Wagner <dwagner@suse.de>
  Darrick J. Wong <djwong@kernel.org>
  David Sterba <dsterba@suse.com>
  Du Cheng <ducheng2@gmail.com>
  Elia Devito <eliadevito@gmail.com>
  Eric Biggers <ebiggers@google.com>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Guchun Chen <guchun.chen@amd.com>
  Guenter Roeck <linux@roeck-us.net>
  Hans de Goede <hdegoede@redhat.com>
  Hou Pu <houpu.main@gmail.com>
  Hui Wang <hui.wang@canonical.com>
  Hulk Robot <hulkrobot@huawei.com>
  Igor Matheus Andrade Torrente <igormtorrente@gmail.com>
  Jacek Anaszewski <jacek.anaszewski@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Jan Kratochvil <jan.kratochvil@redhat.com>
  Jason Gunthorpe <jgg@nvidia.com>
  Jason Self <jason@bluehome.net>
  Jiri Slaby <jirislaby@kernel.org>
  Jon Hunter <jonathanh@nvidia.com>
  Josef Bacik <josef@toxicpanda.com>
  Juergen Gross <jgross@suse.com>
  Leon Romanovsky <leonro@nvidia.com>
  Liming Sun <limings@nvidia.com>
  Linus Torvalds <torvalds@linux-foundation.org>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Luiz Augusto von Dentz <luiz.von.dentz@intel.com>
  Maciej W. Rozycki <macro@orcam.me.uk>
  Maor Gottlieb <maorg@nvidia.com>
  Marcel Holtmann <marcel@holtmann.org>
  Martin K. Petersen <martin.petersen@oracle.com>
  Mike Snitzer <snitzer@redhat.com>
  Mikulas Patocka <mpatocka@redhat.com>
  Oleg Nesterov <oleg@redhat.com>
  Pedro Alves <palves@redhat.com>
  PeiSen Hou <pshou@realtek.com>
  Peter Zijlstra (Intel) <peterz@infradead.org>
  Phillip Potter <phil@philpotter.co.uk>
  Ronnie Sahlberg <lsahlber@redhat.com>
  Sasha Levin <sashal@kernel.org>
  Shay Drory <shayd@nvidia.com>
  Shuah Khan <skhan@linuxfoundation.org>
  Simon Marchi <simon.marchi@efficios.com>
  Stafford Horne <shorne@gmail.com>
  Steve French <stfrench@microsoft.com>
  Sudeep Holla <sudeep.holla@arm.com>
  Sudip Mukherjee <sudip.mukherjee@codethink.co.uk>
  syzbot <syzbot+1f29e126cf461c4de3b3@syzkaller.appspotmail.com>
  Takashi Iwai <tiwai@suse.de>
  Takashi Sakamoto <o-takashi@sakamocchi.jp>
  Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
  Theodore Ts'o <tytso@mit.edu>
  Tom Seewald <tseewald@gmail.com>
  Tyler Hicks <code@tyhicks.com>
  Ulf Hansson <ulf.hansson@linaro.org>
  Zhen Lei <thunder.leizhen@huawei.com>
  Zqiang <qiang.zhang@windriver.com>
  Zubin Mithra <zsm@chromium.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   b239a0365b93..67154cff6258  67154cff6258e46b05acc9f797e3328ed839b0e2 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Thu May 27 14:57:18 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 14:57:18 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133539.248865 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHRg-0000S6-0n; Thu, 27 May 2021 14:57:12 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133539.248865; Thu, 27 May 2021 14:57:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHRf-0000Rz-TJ; Thu, 27 May 2021 14:57:11 +0000
Received: by outflank-mailman (input) for mailman id 133539;
 Thu, 27 May 2021 14:57:10 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmHRe-0000Rt-Hw
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 14:57:10 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 897c66c2-5982-4cc6-b08a-9f631f976dd1;
 Thu, 27 May 2021 14:57:09 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id A83962190B;
 Thu, 27 May 2021 14:57:08 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 8785111A98;
 Thu, 27 May 2021 14:57:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 897c66c2-5982-4cc6-b08a-9f631f976dd1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622127428; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=edcpG4jE4xc4ni/h6DdRIUcGzf0HxMC9hpuohG03o20=;
	b=t6DSoQuL5DpOBCpotCCpswGH/tcK3Zxyc6QR5jzKtnhnuol8sw722nghm0GgA+5I5DUFtL
	VLgxe57DS/TDTFDBbE75tBBXBzbpBDlBziUozNCZ4Q0EN6t7ILZlqQR87w4qsoYa5woNKw
	mONmev4Ah/F2Wykcr4uaqTYcxZ7GuOU=
Subject: Re: [PATCH] x86/AMD: expose SYSCFG, TOM, and TOM2 to Dom0
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Olaf Hering <olaf@aepfle.de>
References: <c5764274-1257-809e-a2a7-d87b9d0fe675@suse.com>
 <YK9ZXJuPk1G5SGnK@Air-de-Roger>
 <b6693807-95cb-7925-587d-1e1e2db8c798@suse.com>
 <YK+dNgom3cVzkcFF@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ca774a12-c054-3383-5f38-2c09b66be681@suse.com>
Date: Thu, 27 May 2021 16:57:04 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YK+dNgom3cVzkcFF@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 27.05.2021 15:23, Roger Pau Monné wrote:
> On Thu, May 27, 2021 at 12:41:51PM +0200, Jan Beulich wrote:
>> On 27.05.2021 10:33, Roger Pau Monné wrote:
>>> On Wed, May 26, 2021 at 02:59:00PM +0200, Jan Beulich wrote:
>>>> Sufficiently old Linux (3.12-ish) accesses these MSRs in an unguarded
>>>> manner. Furthermore these MSRs, at least on Fam11 and older CPUs, are
>>>> also consulted by modern Linux, and their (bogus) built-in zapping of
>>>> #GP faults from MSR accesses leads to it effectively reading zero
>>>> instead of the intended values, which are relevant for PCI BAR placement
>>>> (which ought to all live in MMIO-type space, not in DRAM-type one).
>>>>
>>>> For SYSCFG, only certain bits get exposed. In fact, whether to expose
>>>> MtrrVarDramEn is debatable: It controls use of not just TOM, but also
>>>> the IORRs. Introduce (consistently named) constants for the bits we're
>>>> interested in and use them in pre-existing code as well.
>>>
>>> I think we should also allow access to the IORRs MSRs for coherency
>>> (c001001{6,9}) for the hardware domain.
>>
>> Hmm, originally I was under the impression that these could conceivably
>> be written by OSes, and hence would need to be dealt with separately. But
>> upon re-reading I see that they are supposed to be set by the BIOS alone.
>> So yes, let me add them for read access, taking care of the limitation
>> that I had to spell out.
>>
>> This raises the question then though whether to also include SMMAddr
>> and SMMMask in the set - the former does get accessed by Linux as well,
>> and was one of the reasons for needing 6eef0a99262c ("x86/PV:
>> conditionally avoid raising #GP for early guest MSR reads").
> 
> That seems fine, we might also want SMM_BASE?

That's pretty unrelated to the topic here - there's no memory type
or DRAM vs MMIO decision associated with that register. I'm also
having trouble seeing what an OS would want to use SMM's CS value
for.

>> Especially for SMMAddr, and maybe also for IORR_BASE, returning zero
>> for DomU-s might be acceptable. The respective masks, however, can
>> imo not sensibly be returned as zero. Hence even there I'd leave DomU
>> side handling (see below) for a later time.
> 
> Sure. I think for consistency we should however enable reading the
> hardware IORR MSRs for the hardware domain, or else returning
> MtrrVarDramEn set is likely to cause trouble as the guest could assume
> IORRs to be unconditionally present.

Well, yes, I've already added the IORRs, as I was under the impression
that we had agreed to expose them to Dom0.

>>>> As a welcome side effect, verbosity on/of debug builds gets (perhaps
>>>> significantly) reduced.
>>>>
>>>> Note that at least as far as those MSR accesses by Linux are concerned,
>>>> there's no similar issue for DomU-s, as the accesses sit behind PCI
>>>> device matching logic. The checked for devices would never be exposed to
>>>> DomU-s in the first place. Nevertheless I think that at least for HVM we
>>>> should return sensible values, not 0 (as svm_msr_read_intercept() does
>>>> right now). The intended values may, however, need to be determined by
>>>> hvmloader, and then get made known to Xen.
>>>
>>> Could we maybe come up with a fixed memory layout that hvmloader had
>>> to respect?
>>>
>>> Ie: DRAM from 0 to 3G, MMIO from 3G to 4G, and then the remaining
>>> DRAM from 4G in a contiguous single block?
>>>
>>> hvmloader would have to place BARs that don't fit in the 3G-4G hole at
>>> the end of DRAM (ie: after TOM2).
>>
>> Such a fixed scheme may be too limiting, I'm afraid.
> 
> Maybe, I guess a possible broken scenario would be for a guest to be
> setup with a set of 32bit BARs that cannot possibly fit in the 3-4G
> hole, but I think that's unlikely.

Can't one size the VRAM of the emulated VGA (almost) arbitrarily? I
wouldn't be surprised if it can't be placed above 4GB.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 14:57:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 14:57:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133540.248876 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHRq-0000lX-88; Thu, 27 May 2021 14:57:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133540.248876; Thu, 27 May 2021 14:57:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHRq-0000lQ-4k; Thu, 27 May 2021 14:57:22 +0000
Received: by outflank-mailman (input) for mailman id 133540;
 Thu, 27 May 2021 14:57:21 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmHRp-0000l0-DE
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 14:57:21 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id e0638399-f3d0-4ae1-8ede-6f49afdf1a75;
 Thu, 27 May 2021 14:57:20 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id BBAAC1FD2E;
 Thu, 27 May 2021 14:57:19 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 9AAF611A98;
 Thu, 27 May 2021 14:57:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: e0638399-f3d0-4ae1-8ede-6f49afdf1a75
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622127439; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=UVxxdyQjpZSI02rN+RmLnbxOL0hPk4E5s0ox/k9JMP0=;
	b=VeegR4AuuhMM/cYbgg9FqFPw/F6sVMSBKtQZz5/kaHJBqyR3GFenGlcdHYciSiDErvXvhw
	pKQoeQfG1Ph90UbyR+EvItfiL4C8LelEC2q/csvj5hc9PIoVTs23XKJuMc5HrrDCLnJ7Vw
	PqdvtV+5jLOA43EygDui3tYFb81PDk0=
Subject: Re: [PATCH v2 01/12] x86: introduce ioremap_wc()
To: Julien Grall <julien@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <8f56a8f4-0482-932f-96a9-c791bebb4610@suse.com>
 <20abac99-609c-f4f6-1242-c79919f4c317@suse.com>
 <b8035805-4f44-18ce-f4cb-4ce1d3c594fc@xen.org>
 <d7019879-037b-7945-4a8a-5a8252e5922a@suse.com>
 <7d32ff1c-315c-f2da-bc1b-06fb2233fe55@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <e24036b4-a40c-4b1e-7aaa-11007a6878b9@suse.com>
Date: Thu, 27 May 2021 16:57:19 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <7d32ff1c-315c-f2da-bc1b-06fb2233fe55@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.05.2021 15:30, Julien Grall wrote:
> On 27/05/2021 14:09, Jan Beulich wrote:
>> On 27.05.2021 14:48, Julien Grall wrote:
>>> On 27/05/2021 13:30, Jan Beulich wrote:
>>>> --- a/xen/arch/x86/mm.c
>>>> +++ b/xen/arch/x86/mm.c
>>>> @@ -5881,6 +5881,20 @@ void __iomem *ioremap(paddr_t pa, size_t
>>>>        return (void __force __iomem *)va;
>>>>    }
>>>>    
>>>> +void __iomem *__init ioremap_wc(paddr_t pa, size_t len)
>>>> +{
>>>> +    mfn_t mfn = _mfn(PFN_DOWN(pa));
>>>> +    unsigned int offs = pa & (PAGE_SIZE - 1);
>>>> +    unsigned int nr = PFN_UP(offs + len);
>>>> +    void *va;
>>>> +
>>>> +    WARN_ON(page_is_ram_type(mfn_x(mfn), RAM_TYPE_CONVENTIONAL));
>>>> +
>>>> +    va = __vmap(&mfn, nr, 1, 1, PAGE_HYPERVISOR_WC, VMAP_DEFAULT);
>>>> +
>>>> +    return (void __force __iomem *)(va + offs);
>>>> +}
>>>
>>> Arm is already providing ioremap_wc() which is a wrapper to
>>> ioremap_attr().
>>
>> I did notice this, yes.
>>
>>> Can this be moved to the common code to avoid duplication?
>>
>> If by "this" you mean ioremap_attr(), then I wasn't convinced we want
>> a function of this name on x86.
> 
> I am open to other name.

My remark wasn't so much about the name, but about there being a
"more capable" backing function for a number of wrappers.

>> In particular you may note that
>> x86'es ioremap() is sort of the equivalent of Arm's ioremap_nocache(),
>> but is different from the new ioremap_wc() by more than just the
>> different PTE attributes.
> That's because ioremap() will not vmap() the first MB, am I correct? If 
> so, I am not sure why you want to do that in ioremap() but not 
> ioremap_wc(). Wouldn't this result access the memory with mismatched 
> attributes?

UC and WC aren't really conflicting cache attributes - they both
fall in the "uncachable" category. In fact I have a TBD in the
post-commit-message area regarding this very aspect of possibly
reusing the low 1Mb mapping.

>> Plus I'd need to clean up Arm's lack of __iomem if I wanted to fold
>> things. 
> 
> __iomem is NOP on Xen. So while the annotation may not be consistently 
> used, I don't see the clean-up a requirement to consolidate the code...
> 
>> Or wait - it's declaration and definition which are out of
>> sync there, i.e. a pre-existing issue.
> 
> We don't usually add __init on both the declaration and definition. So 
> why would it be necessary to add __iomem in both cases?

__init is an attribute that is meaningful only for functions and
only on their definitions (because it controls what section the
code gets emitted to by the compiler, while it is of no interest
at all to any caller of the function, as far as the compiler is
concerned). __iomem, otoh, is a modifier for pointer types, so
doesn't apply to the function as a whole but to its return types.
Such types (when they're not NOP) need to be consistent between
declaration and definition. You can try this with an about
arbitrary (but valid) __attribute__(()) of your choice and with a
not overly old compiler - you should see it complain about such
inconsistencies.

Jan


From xen-devel-bounces@lists.xenproject.org Thu May 27 15:03:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 15:03:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133553.248887 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHYB-0002WA-V3; Thu, 27 May 2021 15:03:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133553.248887; Thu, 27 May 2021 15:03:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHYB-0002W3-S9; Thu, 27 May 2021 15:03:55 +0000
Received: by outflank-mailman (input) for mailman id 133553;
 Thu, 27 May 2021 15:03:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmHYA-0002Vx-HL
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 15:03:54 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id a5545120-af40-4aaa-86f4-36c1496d1953;
 Thu, 27 May 2021 15:03:53 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id D23011FD2E;
 Thu, 27 May 2021 15:03:52 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id B8C6C11A98;
 Thu, 27 May 2021 15:03:52 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: a5545120-af40-4aaa-86f4-36c1496d1953
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622127832; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=pOt4l9d9iMF6Oaid7J1bJSViGCYSYwUECToINe9lubI=;
	b=r4ia2xp9eP80L/0Bxwm8A9jXFddSOtZwm7sOmYFgedsUXWFs1XRrQ2iIsEpCZNJb+uvpvf
	SyAw+B2xrtOegqycUuZcIgZhKDXNdB3UfnVN2pNk2M8G6NtXLFBU7j+8hUz84kuKkXEH8K
	CBa81A31vodm/8YA+TTm8yt7vqkqNCI=
Subject: Re: [PATCH 1/3] x86/cpuid: Rework HLE and RTM handling
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
 <20210527132519.21730-2-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <97d2924e-d312-7f02-2203-77f1bed0df33@suse.com>
Date: Thu, 27 May 2021 17:03:48 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210527132519.21730-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.05.2021 15:25, Andrew Cooper wrote:
> The TAA mitigation offered the option to hide the HLE and RTM CPUID bits,
> which has caused some migration compatibility problems.
> 
> These two bits are special.  Annotate them with ! to emphasise this point.
> 
> Hardware Lock Elision (HLE) may or may not be visible in CPUID, but is
> disabled in microcode on all CPUs, and has been removed from the architecture.
> Do not advertise it to VMs by default.
> 
> Restricted Transactional Memory (RTM) may or may not be visible in CPUID, and
> may or may not be configured in force-abort mode.  Have tsx_init() note
> whether RTM has been configured into force-abort mode, so
> guest_common_feature_adjustments() can conditionally hide it from VMs by
> default.
> 
> The host policy values for HLE/RTM may or may not be set, depending on any
> previous running kernel's choice of visibility, and Xen's choice.  TSX is
> available on any CPU which enumerates a TSX-hiding mechanism, so instead of
> doing a two-step to clobber any hiding, scan CPUID, then set the visibility,
> just force visibility of the bits in the first place.
> 
> With the HLE/RTM bits now unilaterally visible in the host policy,
> xc_cpuid_apply_policy() can construct a more appropriate policy out of thin
> air for pre-4.13 VMs with no CPUID data in their migration stream, and
> specifically one where HLE/RTM doesn't potentially disappear behind the back
> of a running VM.
> 
> Fixes: 8c4330818f6 ("x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Thu May 27 15:06:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 15:06:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133558.248897 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHaa-0003AE-C4; Thu, 27 May 2021 15:06:24 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133558.248897; Thu, 27 May 2021 15:06:24 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHaa-0003A7-8n; Thu, 27 May 2021 15:06:24 +0000
Received: by outflank-mailman (input) for mailman id 133558;
 Thu, 27 May 2021 15:06:22 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmHaY-0003A1-Ja
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 15:06:22 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id dd97c960-f88d-41ad-b0b9-81af9cebaf2f;
 Thu, 27 May 2021 15:06:22 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 306171FD2E;
 Thu, 27 May 2021 15:06:21 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id EEC6911A98;
 Thu, 27 May 2021 15:06:20 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd97c960-f88d-41ad-b0b9-81af9cebaf2f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622127981; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=2MG+X7HMN6JVp/a7DMrpxb+F+vbWGuxR0Undhpjchoc=;
	b=ALUrMSyyL6Xj+Bdl6gbJtOIbByn5ttrvmz473hBLuIwefjXF1uVqKeXL9/CE6EFeECEBSw
	tHh4+3D9FEsvQV0XV2ZyanAaY1Z68vydA3UmrcLr9Budw6f1dGRf2KEuc+a63hT6O8k+BJ
	TmH0GWGYcH6zvod8J0b4IwiOzid+SQI=
Subject: Re: [PATCH 2/3] x86/tsx: Minor cleanup and improvements
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
 <20210527132519.21730-3-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <ec271eb5-291d-2026-2bd9-5a12190e29be@suse.com>
Date: Thu, 27 May 2021 17:06:20 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210527132519.21730-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.05.2021 15:25, Andrew Cooper wrote:
>  * Introduce cpu_has_arch_caps and replace boot_cpu_has(X86_FEATURE_ARCH_CAPS)
>  * Read CPUID data into the appropriate boot_cpu_data.x86_capability[]
>    element, as subsequent changes are going to need more cpu_has_* logic.
>  * Use the hi/lo MSR helpers, which substantially improves code generation.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Thu May 27 15:11:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 15:11:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133567.248909 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHfS-0004Xi-0H; Thu, 27 May 2021 15:11:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133567.248909; Thu, 27 May 2021 15:11:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHfR-0004Xb-Sf; Thu, 27 May 2021 15:11:25 +0000
Received: by outflank-mailman (input) for mailman id 133567;
 Thu, 27 May 2021 15:11:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=HjtO=KW=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmHfQ-0004XV-V4
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 15:11:24 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id d9b6d61e-3915-4756-a3f1-26aeb549e764;
 Thu, 27 May 2021 15:11:23 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id ADEFD1FD2F;
 Thu, 27 May 2021 15:11:22 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 7B5E711A98;
 Thu, 27 May 2021 15:11:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: d9b6d61e-3915-4756-a3f1-26aeb549e764
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622128282; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Wkwbi+PB7BAz1+KVXLTqHm7BunOvpihQ5ejo2Lv2Dqo=;
	b=o2+ph9guFwFdKDsFlNf56JQXyY52wCR/PA1RUDFCQW5uIPNnj4xaalzGfzRG2H4+Kx5AGE
	eXlM7wQYugpm7KLY/MNO4BR7m9LTGRszTeKi8/ZY+CMO+baJG6vFcqLIH1zANoTL9qJOT6
	u7ZTsedpdEVe0ElgwcCp/7G42M7EbEs=
Subject: Re: [PATCH 3/3] x86/tsx: Deprecate vpmu=rtm-abort and use tsx=<bool>
 instead
To: Andrew Cooper <andrew.cooper3@citrix.com>
Cc: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 Wei Liu <wl@xen.org>, Xen-devel <xen-devel@lists.xenproject.org>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
 <20210527132519.21730-4-andrew.cooper3@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <de901446-021f-2e7f-ee9a-034f36778f37@suse.com>
Date: Thu, 27 May 2021 17:11:18 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210527132519.21730-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.05.2021 15:25, Andrew Cooper wrote:
> This reuses the rtm_disable infrastructure, so CPUID derivation works properly
> when TSX is disabled in favour of working PCR3.
> 
> vpmu= is not a supported feature, and having this functionality under tsx=
> centralises all TSX handling.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>



From xen-devel-bounces@lists.xenproject.org Thu May 27 15:20:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 15:20:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133574.248920 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHod-00061P-VD; Thu, 27 May 2021 15:20:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133574.248920; Thu, 27 May 2021 15:20:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmHod-00061I-Rb; Thu, 27 May 2021 15:20:55 +0000
Received: by outflank-mailman (input) for mailman id 133574;
 Thu, 27 May 2021 15:20:55 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ln4B=KW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmHod-00061C-3G
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 15:20:55 +0000
Received: from esa6.hc3370-68.iphmx.com (unknown [216.71.155.175])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2c02a242-ec5c-4fe4-bae7-bcf7322f51f3;
 Thu, 27 May 2021 15:20:53 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2c02a242-ec5c-4fe4-bae7-bcf7322f51f3
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622128853;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=WPD7pNQEqa/siY58nkVGlldYh+C1L4iLrnhhDsefgYE=;
  b=LguKQH2RaqTyD3eSTYd+Edb3amB5QBueiSpWcCngskMMIefXx98dUWfd
   AJNsM+jp5WhdEtoLWj2tsPVmwmbWQZR6w0G/J4a02uSGnu3SUaUeq1aaV
   mCnI6MR1PK5CH6kdrFNQ58z2CS3FX6XX0c6gS6Umvf9WTKAkGLwM0/Yx3
   I=;
Authentication-Results: esa6.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: cwLL+AXADZOTc+lstAszYwWiaq0GqGwJBXqlm0XFPsxTJ8HTaiaHPspHHXUiJDRlcW6CE02v4C
 quVb6GWRyAHff/ZILgA/9HuyUw0ph+qtCSVoJK4OTrL8mUsfnGEP5lUi/hhxOu0uq+qRx8C56C
 CuwRkYmsGPity+pB6tnvcs3VilFHgz8hD8RA89ykgTUO2KW4WnvwDwDD01M1bTwt+3aKOMTKG/
 AWaFI7G78hETqEx43chFe1KWSvz70cEv7gfHRY/LmF4F2vlDziXcIB4BGmZcW0PcRf1g86v+YK
 MOA=
X-SBRS: 5.1
X-MesageID: 44864868
X-Ironport-Server: esa6.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:/LzkwKvghcrPG2K7m9/jSS5b7skDRdV00zEX/kB9WHVpm5qj5r
 mTdZMgpHrJYVcqKRMdcbLpAsO9qBznmKKdjbN8AV7AZniEhILLFuBfBNDZslvd8kTFn4Y36U
 4HScdD4bbLbWSS4/yV3OEWeexQuOVuXMqT9IPjJ9YGd3AMV51d
X-IronPort-AV: E=Sophos;i="5.82,334,1613451600"; 
   d="scan'208";a="44864868"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=h8opeJVxUGtClTjpm1QzlLg4SjlNR8fOJYbLG32wdWqHFBpP4OlJCcIfJ4wDoWjnd2rOBeK6vauwCIguhA2rplUZIHJRH7rC/uZxeZOfmBBYHfMg1y4FjsPS+cZ1S5r5O7rH8rrHz9Fewv9vImivvHsN3CAG8mOvkmfZjFFCNBslwUpmJ5ZI29FsQOSmqVhYB6t4SvUvr+RD3LTNb/GQD8pzAfzOGVMiRPpRAGb0B4sSo1e8MWpOBphYvvF89/3mo301Hu07w3g8WJUia+7SKMOLIazfs8LN2ZSd0ERnbEw/+u8SA20O70nm8VhqztyCaxq8veDZ32idz6sZ6SlBZg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b9kti0sSeNFYtvJ4yA4eu33vG6aQbwWAzTH/n6j6Z6c=;
 b=Xh5HmVvXGaolSm/FAAx1JzyfHlk7hlzxIGpi/Rpa3tesPx+WSIyceiHnm/JzWTw+vmflJWBcLm0FvrkN/lQH9BcaSNld8ypjKDfyhR75edKiIRfZp7ddGqO9bQ1pKlBrWqdbG39zVUOxDXTrOt7roGPOKIhisez79RyViL+YgPSJvlebYAuTJPPfaGRgz9hLdDhR1AcFVKgn/oK57xaPKgEUPfyrF/T1Ywii4fn9GIgi73egDNS2jgruXQSxCdv8nzFyu4oQ3avAigo8H+dkCVJxPl9+Pf8Vhkt/0nVbCRk/7RWuoL0bCzmqo1kMI4E0ZbrT9O7i/WFlgNPpDPp0Fg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=b9kti0sSeNFYtvJ4yA4eu33vG6aQbwWAzTH/n6j6Z6c=;
 b=ftGmjebP5hLvH55pHdrAmYLCLJab2KD4U4kglnrZR9r92201aXQ/po97Pb/ux3L2TG+VMTN/dR19H8/881GUgTfTW25DgvzxNAY/Xq1+j07sgf2yhnxYGj+8346huTmgTvt6uO1SYuQMp+GZ/k9A5dkum5AnQiEZMM+r/B/106I=
Date: Thu, 27 May 2021 17:20:46 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 1/3] x86/cpuid: Rework HLE and RTM handling
Message-ID: <YK+4zhqJdyZwLwNx@Air-de-Roger>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
 <20210527132519.21730-2-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210527132519.21730-2-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MR2P264CA0009.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:1::21) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: ab2d52f2-b951-41f6-54a7-08d9212302c4
X-MS-TrafficTypeDiagnostic: DM6PR03MB5340:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB53406357F691A1E27F4831EA8F239@DM6PR03MB5340.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:10000;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Microsoft-Antispam-Message-Info: RKBsdU8dZWeUK46rBkvrZNRL6NA2jsMUwUpbnM2pdMgXAPYtJkTv0PSkzzDKSON/1ou5oFhHiESr1ZjyehvOodY2F0mfp2vVm3YdTZ+g7yoD+tKlxnbxw3Jb+FY1zumwwSvT9+fzZIlkVoCgMvnd1DgclNg/p90vZc1aNUJNEjTq0iqUX5/AboFrHyA7NAPb5KcmX6x/LTwW63c5PZcgWPIyc2zkcSfo7CS3+eB2J6enR+/uak0GzNNmbOJDKayI+Y95qkj73efi2MHFHxejWVSiTNiv0tDzKpQQ5KsP+Cof3cxu5K3TnC7JZuIu7LEAaLvQFQeZrckjMm1jy/UjqrBK3O/2zbO8U+FkKvajxce37rdYekCp4gcyTFZRbdf0wBdyS194QFw0T4Lplks4o0OkvzpdnKOZSqQqdTH7X6G5plFl5VAhkDs08qxDG2KzqOKRBIc7XLIacX0sfXqFuxJWNpeHoSLLEBfIDutwRQRfER1aNmvQld1/X3Ft8VvUYkUW4AagWnrpKVnaXba3OqTGGGcnHUAaGB3Kn0ptl7Syzn3CT+ObucSYaz7NTYYlIJt8j5inoCHa8y6kBH0nNYMMVFgA0mvNJKoagTzxIuB2VG+qVUwtpMGyuB22ANJCDDGZLKY0r67VcvjiDh07yA==
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:DS7PR03MB5608.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(7916004)(136003)(346002)(396003)(366004)(39860400002)(376002)(6862004)(85182001)(38100700002)(2906002)(9686003)(33716001)(6496006)(6666004)(86362001)(478600001)(8936002)(16526019)(6636002)(956004)(5660300002)(316002)(83380400001)(186003)(6486002)(54906003)(66476007)(26005)(4326008)(66556008)(66946007)(8676002)(67856001);DIR:OUT;SFP:1101;
X-MS-Exchange-AntiSpam-MessageData: =?utf-8?B?K3pRditTTkUyUHJUR1k2REpBQTd2VjVCREJsOU9KaHBqZ2hsQ3RqbmFXSDZX?=
 =?utf-8?B?UFZPNnZHaHcxOGo0M0FXZHVvQTFYaGFjOHduWklKM01IOWNnb0oyZnpXQkFu?=
 =?utf-8?B?L3k4cGQvNml3WXRpQ0UvbFBiditlcGJHNHg0RlBWNG5GeUNSSHhFVDl6TlpQ?=
 =?utf-8?B?ejRuR0N5em8zZTZFQjBWM2FEZG0yWVNLa1pnQ05rYUdYVVk0WHNoSnNnZFRF?=
 =?utf-8?B?N05OSTFkK3RBM2VjRy9WWXJ0ZmQ2cFNnMkVoSDJlZ1luS2Mwcm80Yjh6dzA0?=
 =?utf-8?B?a08vWE5DTDRTRHZmSE9XUEo5T0pJUDBmNlZZR2FCRVM4K28valg0bjdIRWRO?=
 =?utf-8?B?WDhXS0RNQlh0L250TCtLaUNiVllQTVFja2F1d3d5dVU1Q1ZPakJJdjJScHRL?=
 =?utf-8?B?ZGpLMk5ydkdrU2dRZFhKcE1FYUV1YzhVOUtKK01JdEdrUkZ3VmF1NTV2NXAy?=
 =?utf-8?B?L3RLU3BuNVJIOXVDQVlTendlWTdaVXA3d010NWRpMTBPa3JCNWVCcllTTkNl?=
 =?utf-8?B?ek9UZEo1Q3ZXTzVpQWNIb3FBbHRndXBNS0VPaTNDV2NpSEsrcFR0THozbUF0?=
 =?utf-8?B?aThoZWsvM0lBVlV3enU1bzdQeUVhd011dmwrRC95UUtFZmpTWVR3Q20rVmUr?=
 =?utf-8?B?cmtsUnV5c2dKK1NKRzBmOTBlNlVzOGVSR05STTdpcXFhL1pCSFpMMGlhMktS?=
 =?utf-8?B?VUc1NnRFUlZhSU9zSERJbXBMVFUrNERRSUV1UXdBYVlLN1BoL1NSaEJGb0xF?=
 =?utf-8?B?TTN1bm5ZSjJKZE9OeHNiZ2haOWdyeFhIdDlHdC9wajR3NjlWSHhXdXJ4Nkow?=
 =?utf-8?B?R2JjditNS25UMzBDaUFUT2VsYVBCZGFWN2xxUGNBRlprMDhyTThPNnAwQzNO?=
 =?utf-8?B?L2xJRjRTRGN0c3k4T0txSS83VzBDaGdtdkk1dFg3dVl1Mm96NWJydlhCYXJs?=
 =?utf-8?B?bStJWE5EWG9YUS9XYjA3RGl0VXFtUzJhWmlBNERWWkh3MUJ2dWVBYUcwL2FD?=
 =?utf-8?B?ZHpRRWQraXBKVDFVRVhXVS94cFZqTWFLK09XNnduV3Y3MWNMUkgySE81OExI?=
 =?utf-8?B?dm4yeURKc1lrNk1rWTBZMUIvQ0pXd2N1bUhTNlhuYVdCZFliQWRLWWo0TnZW?=
 =?utf-8?B?NmNuQi9OT3VaUk9EV25EWlJRNW9kVE9uSm1RbHZZd2NWeTJHT05ac3RjTS9L?=
 =?utf-8?B?SnVKdUdycGZlSlhreHg0THh1d0NxZHNDM25qT2FhNEgvZzhFU3ptN2ZrZG9y?=
 =?utf-8?B?dFJITXptR3YxMzUreC92MFp2MXpMRjdTRW15M0FudWV6eWVYR1dlU0tCSksx?=
 =?utf-8?B?WGFURWpOd3Rsc09ybi9uTXQ0RzBLaE9IdG5hT29SUzY4b3pseEJKOG9DYVZH?=
 =?utf-8?B?U1cwa3YrSWh6RVY5eG1lT25jRFNqaWtGQk1tVkthZGlybUFQY3FzaGJSVTJR?=
 =?utf-8?B?UmhFYi96ZEtBeUV0cjdtZit5cEl5WWNwVVlvaHUrUXZSS2Y4eGR4SFQ2RjhZ?=
 =?utf-8?B?MWEzbWdmZk1aUWtoeFpNc3VNZXVLUSswRzJ4Zm5YOUM2RWs4Z3MwejlLS1Rs?=
 =?utf-8?B?aHRENE5YdVNCd1F5bTJZLzlnWkVoUDl4dEt6akRpbm1ETUE4QUFNQjhoY0FL?=
 =?utf-8?B?WGVQYmZhUVFhQUs5MWluVSt6aG5PbFdrS1lpb2xjRDAxM1VtWWN1ekttK1Mx?=
 =?utf-8?B?TTNpOEhBRkU4RXZrMFhNS3BGa1ptc05LeXhXWFNuSC9oWFZlVGhoSUw3Wkxl?=
 =?utf-8?Q?6n3gHeh0fTsab5naWvpyUTtTV14ADliLwlD5mSw?=
X-MS-Exchange-CrossTenant-Network-Message-Id: ab2d52f2-b951-41f6-54a7-08d9212302c4
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 May 2021 15:20:50.6068
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: a6oMSbNPhgJJG/SW4BUliYqH1WlyR463RIAYfMiw9ZM+PjOCRVznCS5PCLTinYe156+9txrX13rMBLui0a7tkA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB5340
X-OriginatorOrg: citrix.com

On Thu, May 27, 2021 at 02:25:17PM +0100, Andrew Cooper wrote:
> The TAA mitigation offered the option to hide the HLE and RTM CPUID bits,
> which has caused some migration compatibility problems.
> 
> These two bits are special.  Annotate them with ! to emphasise this point.
> 
> Hardware Lock Elision (HLE) may or may not be visible in CPUID, but is
> disabled in microcode on all CPUs, and has been removed from the architecture.
> Do not advertise it to VMs by default.
> 
> Restricted Transactional Memory (RTM) may or may not be visible in CPUID, and
> may or may not be configured in force-abort mode.  Have tsx_init() note
> whether RTM has been configured into force-abort mode, so
> guest_common_feature_adjustments() can conditionally hide it from VMs by
> default.
> 
> The host policy values for HLE/RTM may or may not be set, depending on any
> previous running kernel's choice of visibility, and Xen's choice.  TSX is
> available on any CPU which enumerates a TSX-hiding mechanism, so instead of
> doing a two-step to clobber any hiding, scan CPUID, then set the visibility,
> just force visibility of the bits in the first place.
> 
> With the HLE/RTM bits now unilaterally visible in the host policy,
> xc_cpuid_apply_policy() can construct a more appropriate policy out of thin
> air for pre-4.13 VMs with no CPUID data in their migration stream, and
> specifically one where HLE/RTM doesn't potentially disappear behind the back
> of a running VM.
> 
> Fixes: 8c4330818f6 ("x86/spec-ctrl: Mitigate the TSX Asynchronous Abort sidechannel")
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 27 15:41:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 15:41:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133583.248931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmI84-0008RB-OW; Thu, 27 May 2021 15:41:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133583.248931; Thu, 27 May 2021 15:41:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmI84-0008R4-LR; Thu, 27 May 2021 15:41:00 +0000
Received: by outflank-mailman (input) for mailman id 133583;
 Thu, 27 May 2021 15:40:58 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ln4B=KW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmI82-0008Qy-Qq
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 15:40:58 +0000
Received: from esa3.hc3370-68.iphmx.com (unknown [216.71.145.155])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 86cc71c7-7fd2-4750-956d-704b7a03c23a;
 Thu, 27 May 2021 15:40:57 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 86cc71c7-7fd2-4750-956d-704b7a03c23a
Date: Thu, 27 May 2021 17:40:45 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 2/3] x86/tsx: Minor cleanup and improvements
Message-ID: <YK+9fchtWnsslsPn@Air-de-Roger>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
 <20210527132519.21730-3-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210527132519.21730-3-andrew.cooper3@citrix.com>
MIME-Version: 1.0

On Thu, May 27, 2021 at 02:25:18PM +0100, Andrew Cooper wrote:
>  * Introduce cpu_has_arch_caps and replace boot_cpu_has(X86_FEATURE_ARCH_CAPS)
>  * Read CPUID data into the appropriate boot_cpu_data.x86_capability[]
>    element, as subsequent changes are going to need more cpu_has_* logic.
>  * Use the hi/lo MSR helpers, which substantially improves code generation.
> 
> No practical change.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks, Roger.
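The hi/lo MSR helpers mentioned in the patch description read a 64-bit MSR as
two 32-bit halves, so callers that only need one half never assemble (and then
re-split) a 64-bit value. A stand-in sketch — the rdmsr instruction needs ring
0, so it is stubbed here with a fixed value, and all names are illustrative:

```c
#include <assert.h>
#include <stdint.h>

/* Stand-in for the rdmsr instruction (ring 0 only on real hardware):
 * returns a fixed 64-bit value split into EDX:EAX halves. */
static void stub_rdmsr(uint32_t msr, uint32_t *lo, uint32_t *hi)
{
    (void)msr;
    *lo = 0x89abcdefu;
    *hi = 0x01234567u;
}

/* Lo-half helper: no 64-bit assembly needed, which is the code
 * generation win the patch refers to. */
static uint32_t rdmsr_lo(uint32_t msr)
{
    uint32_t lo, hi;
    stub_rdmsr(msr, &lo, &hi);
    return lo;
}

/* Full 64-bit read, built from the same two halves. */
static uint64_t rdmsr64(uint32_t msr)
{
    uint32_t lo, hi;
    stub_rdmsr(msr, &lo, &hi);
    return ((uint64_t)hi << 32) | lo;
}
```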


From xen-devel-bounces@lists.xenproject.org Thu May 27 16:32:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 16:32:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133590.248942 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmIve-0005Xp-NE; Thu, 27 May 2021 16:32:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133590.248942; Thu, 27 May 2021 16:32:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmIve-0005Xi-Jr; Thu, 27 May 2021 16:32:14 +0000
Received: by outflank-mailman (input) for mailman id 133590;
 Thu, 27 May 2021 16:32:13 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=A0TX=KW=amd.com=thomas.lendacky@srs-us1.protection.inumbo.net>)
 id 1lmIvd-0005Xc-9h
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 16:32:13 +0000
Received: from NAM10-MW2-obe.outbound.protection.outlook.com (unknown
 [40.107.94.49]) by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af50b313-d7b2-40d1-b117-ebf72b14a4b1;
 Thu, 27 May 2021 16:32:11 +0000 (UTC)
Received: from DM5PR12MB1355.namprd12.prod.outlook.com (2603:10b6:3:6e::7) by
 DM5PR12MB2504.namprd12.prod.outlook.com (2603:10b6:4:b5::19) with
 Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4150.23; Thu, 27 May 2021 16:32:07 +0000
Received: from DM5PR12MB1355.namprd12.prod.outlook.com
 ([fe80::b914:4704:ad6f:aba9]) by DM5PR12MB1355.namprd12.prod.outlook.com
 ([fe80::b914:4704:ad6f:aba9%12]) with mapi id 15.20.4173.022; Thu, 27 May
 2021 16:32:07 +0000
Received: from office-linux.texastahm.com (67.79.209.213) by
 SN7P220CA0025.NAMP220.PROD.OUTLOOK.COM (2603:10b6:806:123::30) with Microsoft
 SMTP Server (version=TLS1_2, cipher=TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384) id
 15.20.4173.20 via Frontend Transport; Thu, 27 May 2021 16:32:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af50b313-d7b2-40d1-b117-ebf72b14a4b1
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
From: Tom Lendacky <thomas.lendacky@amd.com>
To: Christoph Hellwig <hch@lst.de>, Florian Fainelli <f.fainelli@gmail.com>
Cc: Claire Chang <tientzu@chromium.org>, Rob Herring <robh+dt@kernel.org>,
 mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>,
 Will Deacon <will@kernel.org>, Frank Rowand <frowand.list@gmail.com>,
 Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, boris.ostrovsky@oracle.com,
 jgross@suse.com, Marek Szyprowski <m.szyprowski@samsung.com>,
 benh@kernel.crashing.org, paulus@samba.org,
 "list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>,
 sstabellini@kernel.org, Robin Murphy <robin.murphy@arm.com>,
 grant.likely@arm.com, xypron.glpk@gmx.de, Thierry Reding
 <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com,
 peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>,
 Saravana Kannan <saravanak@google.com>,
 "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>,
 heikki.krogerus@linux.intel.com,
 Andy Shevchenko <andriy.shevchenko@linux.intel.com>,
 Randy Dunlap <rdunlap@infradead.org>, Dan Williams
 <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>,
 linux-devicetree <devicetree@vger.kernel.org>,
 lkml <linux-kernel@vger.kernel.org>, linuxppc-dev@lists.ozlabs.org,
 xen-devel@lists.xenproject.org, Nicolas Boichat <drinkcat@chromium.org>,
 Jim Quinlan <james.quinlan@broadcom.com>, tfiga@chromium.org,
 bskeggs@redhat.com, bhelgaas@google.com, chris@chris-wilson.co.uk,
 daniel@ffwll.ch, airlied@linux.ie, dri-devel@lists.freedesktop.org,
 intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com,
 jxgao@google.com, joonas.lahtinen@linux.intel.com,
 linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com,
 matthew.auld@intel.com, rodrigo.vivi@intel.com,
 thomas.hellstrom@linux.intel.com
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-2-tientzu@chromium.org>
 <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
 <20210527130211.GA24344@lst.de>
 <bab261b4-f801-05af-8fd9-c440ed219591@amd.com>
Message-ID: <e59d4799-a6ff-6d13-0fed-087fc3482587@amd.com>
Date: Thu, 27 May 2021 11:32:01 -0500
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <bab261b4-f801-05af-8fd9-c440ed219591@amd.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
MIME-Version: 1.0

On 5/27/21 9:41 AM, Tom Lendacky wrote:
> On 5/27/21 8:02 AM, Christoph Hellwig wrote:
>> On Wed, May 19, 2021 at 11:50:07AM -0700, Florian Fainelli wrote:
>>> You convert this call site with swiotlb_init_io_tlb_mem() which did not
>>> do the set_memory_decrypted()+memset(). Is this okay or should
>>> swiotlb_init_io_tlb_mem() add an additional argument to do this
>>> conditionally?
>>
>> The zeroing is useful and was missing before.  I think having a clean
>> state here is the right thing.
>>
>> Not sure about the set_memory_decrypted, swiotlb_update_mem_attributes
>> kinda suggests it is too early to set the memory decrypted.
>>
>> Adding Tom, who should know about all this.
> 
> The reason for adding swiotlb_update_mem_attributes() was because having
> the call to set_memory_decrypted() in swiotlb_init_with_tbl() triggered a
> BUG_ON() related to interrupts not being enabled yet during boot. So that
> call had to be delayed until interrupts were enabled.

I pulled down and tested the patch set and booted with SME enabled. The
following was seen during the boot:

[    0.134184] BUG: Bad page state in process swapper  pfn:108002
[    0.134196] page:(____ptrval____) refcount:0 mapcount:-128 mapping:0000000000000000 index:0x0 pfn:0x108002
[    0.134201] flags: 0x17ffffc0000000(node=0|zone=2|lastcpupid=0x1fffff)
[    0.134208] raw: 0017ffffc0000000 ffff88847f355e28 ffff88847f355e28 0000000000000000
[    0.134210] raw: 0000000000000000 0000000000000001 00000000ffffff7f 0000000000000000
[    0.134212] page dumped because: nonzero mapcount
[    0.134213] Modules linked in:
[    0.134218] CPU: 0 PID: 0 Comm: swapper Not tainted 5.13.0-rc2-sos-custom #3
[    0.134221] Hardware name: ...
[    0.134224] Call Trace:
[    0.134233]  dump_stack+0x76/0x94
[    0.134244]  bad_page+0xa6/0xf0
[    0.134252]  __free_pages_ok+0x331/0x360
[    0.134256]  memblock_free_all+0x158/0x1c1
[    0.134267]  mem_init+0x1f/0x14c
[    0.134273]  start_kernel+0x290/0x574
[    0.134279]  secondary_startup_64_no_verify+0xb0/0xbb

I see this about 40 times during the boot, each with a different PFN. The
system boots (which seemed odd), but I don't know if there will be side
effects to this (I didn't stress the system).

As suggested by Florian, I modified the code to add a flag that skips the
set_memory_decrypted() when invoked from swiotlb_init_with_tbl(), and that
eliminated the bad page state BUG.
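The flag approach can be sketched as follows. Everything here is a simplified
stand-in, not the actual kernel patch: the real set_memory_decrypted() changes
page attributes (the stub below only counts calls), and the helper name and
page size are illustrative.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

static int decrypt_calls;

/* Counting stub for the kernel's set_memory_decrypted(). */
static int set_memory_decrypted(void *vaddr, size_t nr_pages)
{
    (void)vaddr; (void)nr_pages;
    decrypt_calls++;
    return 0;
}

/* Sketch of the shared init helper with the extra flag: the early-boot
 * path (swiotlb_init_with_tbl) passes decrypt == false and defers the
 * attribute change to swiotlb_update_mem_attributes(); late callers
 * decrypt immediately.  The memset runs either way, so the buffer
 * always starts from a clean state. */
static void init_io_tlb_mem(void *tlb, size_t nr_pages, bool decrypt)
{
    if (decrypt)
        set_memory_decrypted(tlb, nr_pages);
    memset(tlb, 0, nr_pages * 4096);
}
```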

Thanks,
Tom

> 
> Thanks,
> Tom
> 
>>


From xen-devel-bounces@lists.xenproject.org Thu May 27 16:41:34 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 16:41:34 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133597.248953 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmJ4T-000710-LQ; Thu, 27 May 2021 16:41:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133597.248953; Thu, 27 May 2021 16:41:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmJ4T-00070t-Hv; Thu, 27 May 2021 16:41:21 +0000
Received: by outflank-mailman (input) for mailman id 133597;
 Thu, 27 May 2021 16:41:20 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmJ4S-00070j-J5; Thu, 27 May 2021 16:41:20 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmJ4S-00047r-FI; Thu, 27 May 2021 16:41:20 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmJ4S-0006bi-7l; Thu, 27 May 2021 16:41:20 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmJ4S-0005Xn-78; Thu, 27 May 2021 16:41:20 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162232-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162232: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=722f59d38c710a940ab05e542a83020eb5546dea
X-Osstest-Versions-That:
    xen=7c110dd335a17be52549dc4b9dfbfba8165ade40
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 16:41:20 +0000

flight 162232 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162232/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  722f59d38c710a940ab05e542a83020eb5546dea
baseline version:
 xen                  7c110dd335a17be52549dc4b9dfbfba8165ade40

Last test of basis   162171  2021-05-26 16:00:27 Z    1 days
Testing same since   162232  2021-05-27 13:02:02 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision:

To xenbits.xen.org:/home/xen/git/xen.git
   7c110dd335..722f59d38c  722f59d38c710a940ab05e542a83020eb5546dea -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 27 17:24:26 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 17:24:26 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133605.248967 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmJk4-0002pr-1x; Thu, 27 May 2021 17:24:20 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133605.248967; Thu, 27 May 2021 17:24:20 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmJk3-0002pk-UW; Thu, 27 May 2021 17:24:19 +0000
Received: by outflank-mailman (input) for mailman id 133605;
 Thu, 27 May 2021 17:24:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ln4B=KW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmJk2-0002pe-Ei
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 17:24:18 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3cc73b4c-5d89-4d15-b2a7-6769b630be9a;
 Thu, 27 May 2021 17:24:16 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3cc73b4c-5d89-4d15-b2a7-6769b630be9a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622136256;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=/GGbbTeOcMIKp3C3aoNY8msHhW/owdVlh7u/E4EwR5g=;
  b=TVLVfORoR6y5bJGcZbaqGZazLPiiDCfgWlJLncT+OH+oV4U0s2NuUfCL
   XTTdFTLdGaqv51vAqSloiGpR3gdhuVMtUupPsp8HgmcRhQJSgiQKyhbg5
   qkORCODtuwzozBy8nau7S/JXWWoFWCgCrvEUdCCfjY3diuGuSxngLWUj/
   0=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 45163423
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,228,1616472000"; 
   d="scan'208";a="45163423"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=EGHQPzIMkeUwX12ZCmgIP23T65pDkQzxz3MaibYtlEEw2/ZRdU16fSVxlbGtteTgB3a3cvRndUoYNlmSIPa/S6VmvZay5vO4JUHFGoV7Lt3Hmjs5M9ouYEX6BZoSfabuxSowcGyi6N7tpXICzym5a0y7+f3HS+o7Skyqt962TSsT4zmkaveP/XJ0FilKGbDa3IR/CHZm8Rw3KVkGqvvLeps9kfLsGyIknvfdlrUZXUSlgwcWkPOsQFzo/n3V/Y5tZTSTcqMSGkMgeLHC2J45UdoNiyr/JrF0LiTzwSL162VgI8b46P34d6NWOHeRIjpZj/IB5ueKlzB1SbUuocHRgg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8TSa8Bj66WshHEcSwn6Xn7Gvf4WKJGEibxkzR9PXQjg=;
 b=AnWaBL9SuBcI2VMM4wZfvYcVXErH76xbKQIdhFdDbG0874hfACbfM8jmVY1CXjtzppVDbjgzBUm6YPVkpI1BfPNxO6mEcNeH+Og0yuCBFx6+C/+LIFq7pMtc4qahi0oQdZx92YRw5bozm+9jM29NWhuCh5E0rlJ60UHxZlMitCCbRvsk1zRtjIenYLPD3MHY+Ryu5TmIOVVE5tialvX4yyEspga1+Yl0zOJGe7wYHSHj9uGcHHxfJ3WOsRu3YeTbtvnqS7hga2eHi3qM0GVS+soHCUwbBBCA9q3GvgzZKaVAzgAmYISEXJ+OFvZldtxNCiR+0M8gXgvOjsC+jCckyg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=8TSa8Bj66WshHEcSwn6Xn7Gvf4WKJGEibxkzR9PXQjg=;
 b=E2CZEI3AT7Lvh2iZ6qOJr6mxa9+AoV9msPOAZoOYYEFbVHhYvwbRHa5KnqICr8oHkZ79fUWx8xyw9soyd71ynGOF0MZQPq8uy1BIQH7Cl2nMPg+JN0GzSBU/YsYuZLO2oVV8D9kb/npMiiqviq6NdroUpLQWNPd3ISuXJb21X6k=
Date: Thu, 27 May 2021 19:24:06 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Andrew Cooper <andrew.cooper3@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
Subject: Re: [PATCH 3/3] x86/tsx: Deprecate vpmu=rtm-abort and use tsx=<bool>
 instead
Message-ID: <YK/VtuUatxX6lQuo@Air-de-Roger>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
 <20210527132519.21730-4-andrew.cooper3@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <20210527132519.21730-4-andrew.cooper3@citrix.com>
X-ClientProxiedBy: MRXP264CA0019.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::31) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 02366dbe-e751-493c-5e95-08d921343e1d
X-MS-TrafficTypeDiagnostic: DM5PR03MB3212:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM5PR03MB3212A7E09553334E2CBBD9F88F239@DM5PR03MB3212.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4125;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 02366dbe-e751-493c-5e95-08d921343e1d
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 May 2021 17:24:11.5297
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM5PR03MB3212
X-OriginatorOrg: citrix.com

On Thu, May 27, 2021 at 02:25:19PM +0100, Andrew Cooper wrote:
> This reuses the rtm_disable infrastructure, so CPUID derivation works properly
> when TSX is disabled in favour of working PCR3.
> 
> vpmu= is not a supported feature, and having this functionality under tsx=
> centralises all TSX handling.
> 
> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>

Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

> ---
> CC: Jan Beulich <JBeulich@suse.com>
> CC: Roger Pau Monné <roger.pau@citrix.com>
> CC: Wei Liu <wl@xen.org>
> ---
>  docs/misc/xen-command-line.pandoc | 40 +++++++++++++++---------------
>  xen/arch/x86/cpu/intel.c          |  3 ---
>  xen/arch/x86/cpu/vpmu.c           |  4 +--
>  xen/arch/x86/tsx.c                | 51 +++++++++++++++++++++++++++++++++++++--
>  xen/include/asm-x86/vpmu.h        |  1 -
>  5 files changed, 70 insertions(+), 29 deletions(-)
> 
> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
> index c32a397a12..a6facc33ea 100644
> --- a/docs/misc/xen-command-line.pandoc
> +++ b/docs/misc/xen-command-line.pandoc
> @@ -2296,14 +2296,21 @@ pages) must also be specified via the tbuf_size parameter.
>  
>  Controls for the use of Transactional Synchronization eXtensions.
>  
> -On Intel parts released in Q3 2019 (with updated microcode), and future parts,
> -a control has been introduced which allows TSX to be turned off.
> +Several microcode updates are relevant:
>  
> -On systems with the ability to turn TSX off, this boolean offers system wide
> -control of whether TSX is enabled or disabled.
> + * March 2019, fixing the TSX memory ordering errata on all TSX-enabled CPUs
> +   to date.  Introduced MSR_TSX_FORCE_ABORT on SKL/SKX/KBL/WHL/CFL parts.  The
> +   errata workaround uses Performance Counter 3, so the user can select
> +   between working TSX and working perfcounters.
>  
> -On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
> -logic applies:
> + * November 2019, fixing the TSX Async Abort speculative vulnerability.
> +   Introduced MSR_TSX_CTRL on all TSX-enabled MDS_NO parts to date,
> +   CLX/WHL-R/CFL-R, with the controls becoming architectural moving forward
> +   and formally retiring HLE from the architecture.  The user can disable TSX
> +   to mitigate TAA, and elect to hide the HLE/RTM CPUID bits.
> +
> +On systems with the ability to disable TSX, this boolean offers system
> +wide control of whether TSX is enabled or disabled.
>  
>   * An explicit `tsx=` choice is honoured, even if it is `true` and would
>     result in a vulnerable system.
> @@ -2311,10 +2318,14 @@ logic applies:
>   * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
>     mitigated by disabling TSX, as this is the lowest overhead option.
>  
> - * If the use of TSX is important, the more expensive TAA mitigations can be
> +   If the use of TSX is important, the more expensive TAA mitigations can be
>     opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
>     active by default.
>  
> + * When no explicit `tsx=` option is given, parts susceptible to the memory
> +   ordering errata default to `true` to enable working TSX.  Alternatively,
> +   selecting `tsx=0` will disable TSX and restore PCR3 to a working state.
> +
>  ### ucode
>  > `= List of [ <integer> | scan=<bool>, nmi=<bool>, allow-same=<bool> ]`
>  
> @@ -2456,20 +2467,7 @@ provide access to a wealth of low level processor information.
>  
>  *   The `arch` option allows access to the pre-defined architectural events.
>  
> -*   The `rtm-abort` boolean controls a trade-off between working Restricted
> -    Transactional Memory, and working performance counters.
> -
> -    All processors released to date (Q1 2019) supporting Transactional Memory
> -    Extensions suffer an erratum which has been addressed in microcode.
> -
> -    Processors based on the Skylake microarchitecture with up-to-date
> -    microcode internally use performance counter 3 to work around the erratum.
> -    A consequence is that the counter gets reprogrammed whenever an `XBEGIN`
> -    instruction is executed.
> -
> -    An alternative mode exists where PCR3 behaves as before, at the cost of
> -    `XBEGIN` unconditionally aborting.  Enabling `rtm-abort` mode will
> -    activate this alternative mode.
> +*   The `rtm-abort` boolean has been superseded.  Use `tsx=0` instead.
>  
>  *Warning:*
>  As the virtualisation is not 100% safe, don't use the vpmu flag on
> diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
> index 37439071d9..abf8e206d7 100644
> --- a/xen/arch/x86/cpu/intel.c
> +++ b/xen/arch/x86/cpu/intel.c
> @@ -356,9 +356,6 @@ static void Intel_errata_workarounds(struct cpuinfo_x86 *c)
>  	    (c->x86_model == 29 || c->x86_model == 46 || c->x86_model == 47))
>  		__set_bit(X86_FEATURE_CLFLUSH_MONITOR, c->x86_capability);
>  
> -	if (cpu_has_tsx_force_abort && opt_rtm_abort)
> -		wrmsrl(MSR_TSX_FORCE_ABORT, TSX_FORCE_ABORT_RTM);
> -
>  	probe_c3_errata(c);
>  }
>  
> diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
> index d8659c63f8..16e91a3694 100644
> --- a/xen/arch/x86/cpu/vpmu.c
> +++ b/xen/arch/x86/cpu/vpmu.c
> @@ -49,7 +49,6 @@ CHECK_pmu_params;
>  static unsigned int __read_mostly opt_vpmu_enabled;
>  unsigned int __read_mostly vpmu_mode = XENPMU_MODE_OFF;
>  unsigned int __read_mostly vpmu_features = 0;
> -bool __read_mostly opt_rtm_abort;
>  
>  static DEFINE_SPINLOCK(vpmu_lock);
>  static unsigned vpmu_count;
> @@ -79,7 +78,8 @@ static int __init parse_vpmu_params(const char *s)
>          else if ( !cmdline_strcmp(s, "arch") )
>              vpmu_features |= XENPMU_FEATURE_ARCH_ONLY;
>          else if ( (val = parse_boolean("rtm-abort", s, ss)) >= 0 )
> -            opt_rtm_abort = val;
> +            printk(XENLOG_WARNING
> +                   "'rtm-abort=<bool>' superseded.  Use 'tsx=<bool>' instead\n");
>          else
>              rc = -EINVAL;
>  
> diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
> index 98ecb71a4a..338191df7f 100644
> --- a/xen/arch/x86/tsx.c
> +++ b/xen/arch/x86/tsx.c
> @@ -6,7 +6,9 @@
>   * Valid values:
>   *   1 => Explicit tsx=1
>   *   0 => Explicit tsx=0
> - *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
> + *  -1 => Default, altered to 0/1 (if unspecified) by:
> + *                 - TAA heuristics/settings for speculative safety
> + *                 - "TSX vs PCR3" select for TSX memory ordering safety
>   *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
>   *
>   * This is arranged such that the bottom bit encodes whether TSX is actually
> @@ -50,6 +52,26 @@ void tsx_init(void)
>  
>          cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
>  
> +        if ( cpu_has_tsx_force_abort )
> +        {
> +            /*
> +             * On an early TSX-enabled Skylake part subject to the memory
> +             * ordering erratum, with at least the March 2019 microcode.
> +             */
> +
> +            /*
> +             * If no explicit tsx= option is provided, pick a default.
> +             *
> +             * This deliberately overrides the implicit opt_tsx=-3 from
> +             * `spec-ctrl=0` because:
> +             * - parse_spec_ctrl() ran before any CPU details were known.
> +             * - We now know we're running on a CPU not affected by TAA (as
> +             *   TSX_FORCE_ABORT is enumerated).
> +             */
> +            if ( opt_tsx < 0 )
> +                opt_tsx = 1;
> +        }
> +
>          /*
>           * The TSX features (HLE/RTM) are handled specially.  They both
>           * enumerate features but, on certain parts, have mechanisms to be
> @@ -75,6 +97,12 @@ void tsx_init(void)
>          }
>      }
>  
> +    /*
> +     * Note: MSR_TSX_CTRL is enumerated on TSX-enabled MDS_NO and later parts.
> +     * MSR_TSX_FORCE_ABORT is enumerated on TSX-enabled pre-MDS_NO Skylake
> +     * parts only.  The two features are on a disjoint set of CPUs, and not
> +     * offered to guests by hypervisors.
> +     */
>      if ( cpu_has_tsx_ctrl )
>      {
>          uint32_t hi, lo;
> @@ -90,9 +118,28 @@ void tsx_init(void)
>  
>          wrmsr(MSR_TSX_CTRL, lo, hi);
>      }
> +    else if ( cpu_has_tsx_force_abort )
> +    {
> +        /*
> +         * On an early TSX-enabled Skylake part subject to the memory ordering
> +         * erratum, with at least the March 2019 microcode.
> +         */
> +        uint32_t hi, lo;
> +
> +        rdmsr(MSR_TSX_FORCE_ABORT, lo, hi);
> +
> +        /* Check bottom bit only.  Higher bits are various sentinels. */
> +        rtm_disabled = !(opt_tsx & 1);

I think you also calculate rtm_disabled in the previous if case
(cpu_has_tsx_ctrl), maybe that could be pulled out?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 27 17:51:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 17:51:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133613.248978 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmKAb-00062Q-DY; Thu, 27 May 2021 17:51:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133613.248978; Thu, 27 May 2021 17:51:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmKAb-00062J-AQ; Thu, 27 May 2021 17:51:45 +0000
Received: by outflank-mailman (input) for mailman id 133613;
 Thu, 27 May 2021 17:51:43 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=Ln4B=KW=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmKAZ-00062D-GA
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 17:51:43 +0000
Received: from esa2.hc3370-68.iphmx.com (unknown [216.71.145.153])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 02ae2b51-a4c0-4b52-a49c-53f4d5fec426;
 Thu, 27 May 2021 17:51:42 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 02ae2b51-a4c0-4b52-a49c-53f4d5fec426
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622137901;
  h=date:from:to:cc:subject:message-id:references:
   content-transfer-encoding:in-reply-to:mime-version;
  bh=RplwToqZnieUaG74kOJgoIkY7Bw1MiVGskyzvpop7V4=;
  b=c/LSWVZmlaJWGM3/GbFOAoTa4YF4t286RzvmU+nDXJAbd50ZqObhCuUY
   iXVQIwd8HNFD/tWmWbeGDn+/Ffs43/RNWR68EbXUDRslTUklqzkyTPUfa
   fLiC3euGLYo9gdX0NcvJWYUItXKZEMgvkgJJms9yqVgPgqb7gcTCNCV+V
   I=;
Authentication-Results: esa2.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 44773758
X-Ironport-Server: esa2.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,228,1616472000"; 
   d="scan'208";a="44773758"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=WBWWRWf7a5dRszMN1saMfbfvqYWDMculWjSlZmO/7WTEvA/BJWtBj3Q6aQWXPrCbWfCkw3c/qQ8og/6AdDY9aVsJYaj7oR7MVL/bOC/lGCneip/NVQDOuRPZxWvnypKhUATMIJSEL5E6TTrCfNq99H3n+1GZIk8vlYw2zJNQP5xwakUyJHUw1J7PanzJnfrALDIFBWyVWlJZJiQaFbwc/9V+ABStEmZmya2UYOZ4iTWyMiD02U3hCDK7sBKpptdvaG/5XGkzRHfnP38QkyILl6UFFmxBTUvpsJ/1ERmPGi/+9K+ZwGuRsPk2fvza88clbgJ7Tnc1NH+RBZlgHtsxwg==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GXk/lIH67X4AS0ZicYstO/5YdJ2JQuV3H4o1mIQpy9k=;
 b=Dpno08pgCNchkPfsKMV2P9AO21nLJncgW5yrfjBtPazYnPG2jHYw33u35mi25lYg0SWLBmxCvYWuFLzc3otHDUn34hB6fkTt6xwAxWT4pCb3Nn91OCOjLMXB1x/8V6tjllOwEAbTgu6EW1dKr+V+7NOqr9br44wAnpZWZThb3arUicXLMYDJw0kO44xfT1H9GjFFiz9aRmOnTl6F+kd6U6L4Hgp983vE3j8DRpHkWDNL8Qc236LTayWzzR7zjI+IpQctMkjt4k+yRSZnkLDda55uXsaP2CICAptwZNUvrmiL1d7YCIb1ldlrtIi1RjFtyB61bUoHYb/8ZAvw956k4g==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=GXk/lIH67X4AS0ZicYstO/5YdJ2JQuV3H4o1mIQpy9k=;
 b=OtwJdtkB//RaRemWNhy0b58WaDGkiiW2zF3CY1paIW2BnHIuWc5klVu0OsPrmgwqPxBNcsil23npUNb3vnUk/mIiJ9eaIzSv6fgO8MqWKlGk+XHgWk9FzEeDrPb1AvVKugrxudRGr0xUmNiMOtyIumVWRxV+C1gur761ANcsJso=
Date: Thu, 27 May 2021 19:51:32 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Jan Beulich <jbeulich@suse.com>
CC: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>, "Andrew
 Cooper" <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Olaf Hering
	<olaf@aepfle.de>
Subject: Re: [PATCH] x86/AMD: expose SYSCFG, TOM, and TOM2 to Dom0
Message-ID: <YK/cJM2fpzSq77Gy@Air-de-Roger>
References: <c5764274-1257-809e-a2a7-d87b9d0fe675@suse.com>
 <YK9ZXJuPk1G5SGnK@Air-de-Roger>
 <b6693807-95cb-7925-587d-1e1e2db8c798@suse.com>
 <YK+dNgom3cVzkcFF@Air-de-Roger>
 <ca774a12-c054-3383-5f38-2c09b66be681@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <ca774a12-c054-3383-5f38-2c09b66be681@suse.com>
X-ClientProxiedBy: MR1P264CA0078.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:501:3f::24) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 2d245ae4-d9dd-47bf-799e-08d921381397
X-MS-TrafficTypeDiagnostic: DM4PR03MB5968:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM4PR03MB59680B4B8C89FCD548B1B4468F239@DM4PR03MB5968.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:8882;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 2d245ae4-d9dd-47bf-799e-08d921381397
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 May 2021 17:51:38.1330
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB5968
X-OriginatorOrg: citrix.com

On Thu, May 27, 2021 at 04:57:04PM +0200, Jan Beulich wrote:
> On 27.05.2021 15:23, Roger Pau Monné wrote:
> > On Thu, May 27, 2021 at 12:41:51PM +0200, Jan Beulich wrote:
> >> On 27.05.2021 10:33, Roger Pau Monné wrote:
> >>> On Wed, May 26, 2021 at 02:59:00PM +0200, Jan Beulich wrote:
> >>>> Sufficiently old Linux (3.12-ish) accesses these MSRs in an unguarded
> >>>> manner. Furthermore these MSRs, at least on Fam11 and older CPUs, are
> >>>> also consulted by modern Linux, and their (bogus) built-in zapping of
> >>>> #GP faults from MSR accesses leads to it effectively reading zero
> >>>> instead of the intended values, which are relevant for PCI BAR placement
> >>>> (which ought to all live in MMIO-type space, not in DRAM-type one).
> >>>>
> >>>> For SYSCFG, only certain bits get exposed. In fact, whether to expose
> >>>> MtrrVarDramEn is debatable: It controls use of not just TOM, but also
> >>>> the IORRs. Introduce (consistently named) constants for the bits we're
> >>>> interested in and use them in pre-existing code as well.
> >>>
> >>> I think we should also allow access to the IORRs MSRs for coherency
> >>> (c001001{6,9}) for the hardware domain.
> >>
> >> Hmm, originally I was under the impression that these could conceivably
> >> be written by OSes, and hence would want dealing with separately. But
> >> upon re-reading I see that they are supposed to be set by the BIOS alone.
> >> So yes, let me add them for read access, taking care of the limitation
> >> that I had to spell out.
> >>
> >> This raises the question then though whether to also include SMMAddr
> >> and SMMMask in the set - the former does get accessed by Linux as well,
> >> and was one of the reasons for needing 6eef0a99262c ("x86/PV:
> >> conditionally avoid raising #GP for early guest MSR reads").
> > 
> > That seems fine, we might also want SMM_BASE?
> 
> That's pretty unrelated to the topic here - there's no memory type
> or DRAM vs MMIO decision associated with that register. I'm also
> having trouble seeing what an OS would want to use SMM's CS value
> for.

Right, I think I read the text too fast. I don't think we do need it.

> >> Especially for SMMAddr, and maybe also for IORR_BASE, returning zero
> >> for DomU-s might be acceptable. The respective masks, however, can
> >> imo not sensibly be returned as zero. Hence even there I'd leave DomU
> >> side handling (see below) for a later time.
> > 
> > Sure. I think for consistency we should however enable reading the
> > hardware IORR MSRs for the hardware domain, or else returning
> > MtrrVarDramEn set is likely to cause trouble as the guest could assume
> > IORRs to be unconditionally present.
> 
> Well, yes, I've added IORRs already, as I was under the impression
> that we were agreeing already that we want to expose them to Dom0.

Indeed, I think that's fine.

> >>>> As a welcome side effect, verbosity on/of debug builds gets (perhaps
> >>>> significantly) reduced.
> >>>>
> >>>> Note that at least as far as those MSR accesses by Linux are concerned,
> >>>> there's no similar issue for DomU-s, as the accesses sit behind PCI
> >>>> device matching logic. The checked for devices would never be exposed to
> >>>> DomU-s in the first place. Nevertheless I think that at least for HVM we
> >>>> should return sensible values, not 0 (as svm_msr_read_intercept() does
> >>>> right now). The intended values may, however, need to be determined by
> >>>> hvmloader, and then get made known to Xen.
> >>>
> >>> Could we maybe come up with a fixed memory layout that hvmloader had
> >>> to respect?
> >>>
> >>> Ie: DRAM from 0 to 3G, MMIO from 3G to 4G, and then the remaining
> >>> DRAM from 4G in a contiguous single block?
> >>>
> >>> hvmloader would have to place BARs that don't fit in the 3G-4G hole at
> >>> the end of DRAM (ie: after TOM2).
> >>
> >> Such a fixed scheme may be too limiting, I'm afraid.
> > 
> > Maybe, I guess a possible broken scenario would be for a guest to be
> > setup with a set of 32bit BARs that cannot possibly fit in the 3-4G
> > hole, but I think that's unlikely.
> 
> Can't one (almost) arbitrarily size the amount of VRAM of the emulated
> VGA? I wouldn't be surprised if this couldn't be placed above 4GiB.

I would say it's fine to require such a big region to be placed in a
64bit BAR and hence above the 4G boundary, but anyway, we don't need to
discuss this now.

Also I'm not sure how big a VRAM region QEMU really allows.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Thu May 27 17:58:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 17:58:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133618.248988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmKHE-0006nd-6L; Thu, 27 May 2021 17:58:36 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133618.248988; Thu, 27 May 2021 17:58:36 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmKHE-0006nW-3U; Thu, 27 May 2021 17:58:36 +0000
Received: by outflank-mailman (input) for mailman id 133618;
 Thu, 27 May 2021 17:55:16 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=5/6H=KW=linuxfoundation.org=dgiles@srs-us1.protection.inumbo.net>)
 id 1lmKE0-0006jp-O6
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 17:55:16 +0000
Received: from mail-ed1-x52f.google.com (unknown [2a00:1450:4864:20::52f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 57f32af1-bf08-41f4-b642-69feeb514de6;
 Thu, 27 May 2021 17:55:15 +0000 (UTC)
Received: by mail-ed1-x52f.google.com with SMTP id t3so1833590edc.7
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 10:55:15 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 57f32af1-bf08-41f4-b642-69feeb514de6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linuxfoundation.org; s=google;
        h=mime-version:from:date:message-id:subject:to;
        bh=NAQh85zqlrqU3U8Zet/U5O161m4aLohptXwLJ00vp7M=;
        b=LEZ5WDtaYWfo4c5/fjjSdp8o/RM39v3dXUbduKnqVKj0W4BsEaW2HGs1teXN46+DW5
         2deKKjTLmvZDv+kpPZqbSs4+cb7pdCp73DkxAoCrgf1yj2ElHLcd4NwNyDK8qFnwNZMH
         q0IYfR0+pnMB/zHhWVieulior6q/xK3diG0qA=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:from:date:message-id:subject:to;
        bh=NAQh85zqlrqU3U8Zet/U5O161m4aLohptXwLJ00vp7M=;
        b=YyRX63yb4Z1tW4NtZ2XJaJfA4nvZ86kbwGAJ18uJ9ACvGAFjMRtaplO7ivpJMEPkne
         Ix79e7UqhQzKjrUURo9efIlViliGDQySxjDei7KuzU4qxmG2xDlKp8Pi3BEK8pWQVMYT
         HqMlShPxUF7XjiEUm+mulr9yTao6btSlqIHXPZmI798cV7nbhVpIX6c6JALs4IVpdsy/
         P2N/qr9lrC5G2In3KO/V9/JfiP7GEMX8XYZnPG90GjvzzkkDjumgErCubS7h/r3D5v/d
         B/rgJ3qEunXpMjyTqrvojVUzdqil4ivcA97XFI/ymqq92FEo+H5uHoYDfYxj0US93JTC
         TZqQ==
X-Gm-Message-State: AOAM531LzfWx43RR9fsjDPbRbSN69gPlJ1lyDj6Z+tdwtqERzYcn/fc9
	FJETj0GqH2K/srCDbp/2QHhesb7P++N6pA8FQuWGr5djvI9ccA==
X-Google-Smtp-Source: ABdhPJyNe1DId2Qqz0Zt91JvVDkPgAInkZ2THy9mjnoTgqpTFEk58e5V2gz2I5olzrfEkwn7FJJucKd4gN638pg1WT4=
X-Received: by 2002:a05:6402:1344:: with SMTP id y4mr1860287edw.285.1622138114211;
 Thu, 27 May 2021 10:55:14 -0700 (PDT)
MIME-Version: 1.0
From: Deb Giles <dgiles@linuxfoundation.org>
Date: Thu, 27 May 2021 12:54:37 -0500
Message-ID: <CAFcyGRrEHTd-ZMsSkML67Om-uukiSfhYLyp0Ddza7pmo1Jz-2w@mail.gmail.com>
Subject: Design Session Notes
To: xen-devel@lists.xenproject.org
Content-Type: multipart/mixed; boundary="000000000000133f0205c35376c8"

--000000000000133f0205c35376c8
Content-Type: multipart/alternative; boundary="000000000000133f0105c35376c6"

--000000000000133f0105c35376c6
Content-Type: text/plain; charset="UTF-8"

Hi Everyone,

Attached are the notes from the Design Session on the VirtIO
Cross-Project BoF (Birds of a Feather) for Xen and Guest OS (Linux,
Windows, FreeBSD) developers.

Thanks,

Deb

Deb Giles
Event Manager
The Linux Foundation
+1.503.807.1876 (Time Zone: Central Time)

*Schedule a Meeting with Me* <https://calendly.com/debgiles>

--000000000000133f0105c35376c6--
--000000000000133f0205c35376c8
Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document; 
	name="xl interface.docx"
Content-Disposition: attachment; filename="xl interface.docx"
Content-Transfer-Encoding: base64
Content-ID: <f_kp77046z0>
X-Attachment-Id: f_kp77046z0

--000000000000133f0205c35376c8--


From xen-devel-bounces@lists.xenproject.org Thu May 27 18:33:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 18:33:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133630.249000 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmKos-0002ZV-4M; Thu, 27 May 2021 18:33:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133630.249000; Thu, 27 May 2021 18:33:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmKos-0002ZO-11; Thu, 27 May 2021 18:33:22 +0000
Received: by outflank-mailman (input) for mailman id 133630;
 Thu, 27 May 2021 18:33:20 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=9d2D=KW=citrix.com=andrew.cooper3@srs-us1.protection.inumbo.net>)
 id 1lmKoq-0002ZI-3F
 for xen-devel@lists.xenproject.org; Thu, 27 May 2021 18:33:20 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id af3feb38-e744-47c6-a3bf-bc5459fa4308;
 Thu, 27 May 2021 18:33:17 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: af3feb38-e744-47c6-a3bf-bc5459fa4308
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622140397;
  h=subject:to:cc:references:from:message-id:date:
   in-reply-to:content-transfer-encoding:mime-version;
  bh=AkYpiD/AH03QGFAYAXF+d6oaRKX/Osg7RjDDbjC6rq4=;
  b=G6s6PLi7VNoFu0BrbGgSiAfsOVeaPKThcdc/6lPrFiU7OHKBickr3tRQ
   r3b5xqrE7G8/nMCshd3Qa4koz4kyZnDFRs/lDrWmwzNHQlEX+1/7E4ytC
   fEGvH30oqJv+sFiGWF+GC6ZRXjrtHQlaXGDdut7nEeiz/lbQsLp7x/VTL
   c=;
Authentication-Results: esa5.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: zqe7dMuXvp04NXhAim1ACvqh9JQTXOGKuhJSoLdsP83fHClRcQM2y+E+7TO0Lj1pBNDQSAbciS
 oHF2cHFO8MpybRxSZPCo8IdRHMI1nEtvuDBiWiHZWpmic4c9OKS6WlQWLLOvPR67Y2GokY47Vy
 lZg5qpIELD5Rd3ccj2v96wNIx1xGiNzuoNTmOJ9kSFWZJCG1uXbnQIN4xuy/Kh4BM0NO/0OfoJ
 SAmA5Mwkxq63PGtiB6lNArW4q1Du8QTvzdkBCxrH3wZS5fUQJDT9AE814UGJBbFxNIKlnF39kg
 8as=
X-SBRS: 5.1
X-MesageID: 44554905
X-Ironport-Server: esa5.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:wg7wpKxkSUIUooPQYVG+KrPw1r1zdoMgy1knxilNoHxuH/BwWf
 rPoB17726RtN91YhsdcL+7V5VoLUmzyXcX2/h1AV7BZniEhILAFugLgbcKqweKJ8SUzJ8+6U
 4PSclD4N2bNykGsS75ijPIb+rJFrO8gd+VbeS19QYScelzAZsQiDuQkmygYzZLrA8tP+teKL
 OsovBpihCHYnotYsGyFhA+LpL+T42iruOeXfYebSRXkDWzsQ==
X-IronPort-AV: E=Sophos;i="5.83,228,1616472000"; 
   d="scan'208";a="44554905"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=BlOuFGECeNXXnodsAuPL0ck6u+yJ5A3ncLqLz49CzAtYCDhosO+8qCyfV9atjCZCyeTuLUbfB9X0gFpe3e0HjTvosIFRv5XOHEdzgSGLDdBHZ8v9D7/4HUpLN52hGsnEc/66ouJjCtxJJR1yRWoLaxIvSqTYRmpRLU3N/mjy08ryS0Swr7ETNVdD2tcCIk192T9AOTHEgxl67uYXdfpow1xAaw5QWzXsr5FYyJfwpdwyIS+3K+Y3SVYpq0+YaViEEmXNg5rkK+sOU56J/oDKZbH8VGIec+x/2H6ZFnPg5+OVZEgFM924xqhqRQ+1j+V8rr2PJkzcEInKSjBgN/fCSQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=w1teCrWfmmiQ8bQ1vlAiJ+AFkxAwdFFR4+rekFGhQ2E=;
 b=A/OHQrS3QVpuWxctGMiy+IJL8JO2m0X2L++VN4gp3q4g0vakSkvZedbsRxPajwtAR6iPKP9yzwBQuLoKsO+rEMhOaf4oyxaAWR+37ejWOcpx3AX6xVoipy4JUqKfWIsnoXKcTaGF522oPn/obaMTgNjIpgPeCtAKeVhzU4wZ6sZGTBCg5/GuRdFEgukE8vCkih1wUaur7OA1CbeKwfm60sIUD0J99PMbQNVbk6+8mvAg+O5wC3KizSmV4fFaVw2Wd6/SQIyGsVKmy36i5OyaEKFKj5V8WIdVN9WYC6LUOIDS6XSaU7AY9sdj+y4mRfUPu5VI34KBA3DXlNoeM3zvSQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=w1teCrWfmmiQ8bQ1vlAiJ+AFkxAwdFFR4+rekFGhQ2E=;
 b=BpxVlg6m/Pmq0pqWeF7FssJscsCF4JEeTTwQXzOb0lpRGSrEkY0CTYPeygKfmJqM4Cj+K7wpYkoBnJ9mKZtZy+7QzzJcJM0W99bIAjbjkvzwCvZfmLDUjojhzl6DOKACR8mKup16afR5q0qNSU7Xga+Kc/3Oa9Jn1HYU/korlGY=
Subject: Re: [PATCH 3/3] x86/tsx: Deprecate vpmu=rtm-abort and use tsx=<bool>
 instead
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
CC: Xen-devel <xen-devel@lists.xenproject.org>, Jan Beulich
	<JBeulich@suse.com>, Wei Liu <wl@xen.org>
References: <20210527132519.21730-1-andrew.cooper3@citrix.com>
 <20210527132519.21730-4-andrew.cooper3@citrix.com>
 <YK/VtuUatxX6lQuo@Air-de-Roger>
From: Andrew Cooper <andrew.cooper3@citrix.com>
Message-ID: <01c81aac-c349-1650-c147-140f77ad6f1a@citrix.com>
Date: Thu, 27 May 2021 19:33:08 +0100
User-Agent: Mozilla/5.0 (X11; Linux x86_64; rv:78.0) Gecko/20100101
 Thunderbird/78.8.1
In-Reply-To: <YK/VtuUatxX6lQuo@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: 8bit
Content-Language: en-GB
X-ClientProxiedBy: LO4P123CA0031.GBRP123.PROD.OUTLOOK.COM
 (2603:10a6:600:151::18) To BN7PR03MB3618.namprd03.prod.outlook.com
 (2603:10b6:406:c3::27)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 6563b488-8b31-4de8-5dcc-08d9213de34d
X-MS-TrafficTypeDiagnostic: BN6PR03MB2865:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <BN6PR03MB2865CB134EED059DC33DA166BA239@BN6PR03MB2865.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:4941;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-Forefront-Antispam-Report: CIP:255.255.255.255;CTRY:;LANG:en;SCL:1;SRV:;IPV:NLI;SFV:NSPM;H:BN7PR03MB3618.namprd03.prod.outlook.com;PTR:;CAT:NONE;SFS:(4636009)(136003)(346002)(376002)(396003)(366004)(39860400002)(31696002)(2616005)(6636002)(54906003)(66476007)(478600001)(6862004)(6486002)(53546011)(6666004)(36756003)(4326008)(31686004)(16576012)(8936002)(83380400001)(2906002)(316002)(16526019)(38100700002)(66556008)(37006003)(956004)(66946007)(26005)(5660300002)(186003)(8676002)(86362001)(10944003)(45980500001)(43740500002);DIR:OUT;SFP:1101;
X-MS-Exchange-CrossTenant-Network-Message-Id: 6563b488-8b31-4de8-5dcc-08d9213de34d
X-MS-Exchange-CrossTenant-AuthSource: BN7PR03MB3618.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 27 May 2021 18:33:14.2554
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: 7DtOABnXdZWXBBDi7MQcEr7tZGIyEI9Mulj3dQIMaWXnLAnmk+7KAysainNU1KC3gI2OWarrdZY7U7xrA5NfoeBhnwJ2/Ofpa/rLvQt7o2A=
X-MS-Exchange-Transport-CrossTenantHeadersStamped: BN6PR03MB2865
X-OriginatorOrg: citrix.com

On 27/05/2021 18:24, Roger Pau Monné wrote:
> On Thu, May 27, 2021 at 02:25:19PM +0100, Andrew Cooper wrote:
>> This reuses the rtm_disable infrastructure, so CPUID derivation works properly
>> when TSX is disabled in favour of working PCR3.
>>
>> vpmu= is not a supported feature, and having this functionality under tsx=
>> centralises all TSX handling.
>>
>> Signed-off-by: Andrew Cooper <andrew.cooper3@citrix.com>
> Reviewed-by: Roger Pau Monné <roger.pau@citrix.com>

Thanks.

>
>> ---
>> CC: Jan Beulich <JBeulich@suse.com>
>> CC: Roger Pau Monné <roger.pau@citrix.com>
>> CC: Wei Liu <wl@xen.org>
>> ---
>>  docs/misc/xen-command-line.pandoc | 40 +++++++++++++++---------------
>>  xen/arch/x86/cpu/intel.c          |  3 ---
>>  xen/arch/x86/cpu/vpmu.c           |  4 +--
>>  xen/arch/x86/tsx.c                | 51 +++++++++++++++++++++++++++++++++++++--
>>  xen/include/asm-x86/vpmu.h        |  1 -
>>  5 files changed, 70 insertions(+), 29 deletions(-)
>>
>> diff --git a/docs/misc/xen-command-line.pandoc b/docs/misc/xen-command-line.pandoc
>> index c32a397a12..a6facc33ea 100644
>> --- a/docs/misc/xen-command-line.pandoc
>> +++ b/docs/misc/xen-command-line.pandoc
>> @@ -2296,14 +2296,21 @@ pages) must also be specified via the tbuf_size parameter.
>>  
>>  Controls for the use of Transactional Synchronization eXtensions.
>>  
>> -On Intel parts released in Q3 2019 (with updated microcode), and future parts,
>> -a control has been introduced which allows TSX to be turned off.
>> +Several microcode updates are relevant:
>>  
>> -On systems with the ability to turn TSX off, this boolean offers system wide
>> -control of whether TSX is enabled or disabled.
>> + * March 2019, fixing the TSX memory ordering errata on all TSX-enabled CPUs
>> +   to date.  Introduced MSR_TSX_FORCE_ABORT on SKL/SKX/KBL/WHL/CFL parts.  The
>> +   errata workaround uses Performance Counter 3, so the user can select
>> +   between working TSX and working perfcounters.
>>  
>> -On parts vulnerable to CVE-2019-11135 / TSX Asynchronous Abort, the following
>> -logic applies:
>> + * November 2019, fixing the TSX Async Abort speculative vulnerability.
>> +   Introduced MSR_TSX_CTRL on all TSX-enabled MDS_NO parts to date,
>> +   CLX/WHL-R/CFL-R, with the controls becoming architectural moving forward
>> +   and formally retiring HLE from the architecture.  The user can disable TSX
>> +   to mitigate TAA, and elect to hide the HLE/RTM CPUID bits.
>> +
>> +On systems with the ability to disable TSX, this boolean offers system
>> +wide control of whether TSX is enabled or disabled.
>>  
>>   * An explicit `tsx=` choice is honoured, even if it is `true` and would
>>     result in a vulnerable system.
>> @@ -2311,10 +2318,14 @@ logic applies:
>>   * When no explicit `tsx=` choice is given, parts vulnerable to TAA will be
>>     mitigated by disabling TSX, as this is the lowest overhead option.
>>  
>> - * If the use of TSX is important, the more expensive TAA mitigations can be
>> +   If the use of TSX is important, the more expensive TAA mitigations can be
>>     opted in to with `smt=0 spec-ctrl=md-clear`, at which point TSX will remain
>>     active by default.
>>  
>> + * When no explicit `tsx=` option is given, parts susceptible to the memory
>> +   ordering errata default to `true` to enable working TSX.  Alternatively,
>> +   selecting `tsx=0` will disable TSX and restore PCR3 to a working state.
>> +
>>  ### ucode
>>  > `= List of [ <integer> | scan=<bool>, nmi=<bool>, allow-same=<bool> ]`
>>  
>> @@ -2456,20 +2467,7 @@ provide access to a wealth of low level processor information.
>>  
>>  *   The `arch` option allows access to the pre-defined architectural events.
>>  
>> -*   The `rtm-abort` boolean controls a trade-off between working Restricted
>> -    Transactional Memory, and working performance counters.
>> -
>> -    All processors released to date (Q1 2019) supporting Transactional Memory
>> -    Extensions suffer an erratum which has been addressed in microcode.
>> -
>> -    Processors based on the Skylake microarchitecture with up-to-date
>> -    microcode internally use performance counter 3 to work around the erratum.
>> -    A consequence is that the counter gets reprogrammed whenever an `XBEGIN`
>> -    instruction is executed.
>> -
>> -    An alternative mode exists where PCR3 behaves as before, at the cost of
>> -    `XBEGIN` unconditionally aborting.  Enabling `rtm-abort` mode will
>> -    activate this alternative mode.
>> +*   The `rtm-abort` boolean has been superseded.  Use `tsx=0` instead.
>>  
>>  *Warning:*
>>  As the virtualisation is not 100% safe, don't use the vpmu flag on
>> diff --git a/xen/arch/x86/cpu/intel.c b/xen/arch/x86/cpu/intel.c
>> index 37439071d9..abf8e206d7 100644
>> --- a/xen/arch/x86/cpu/intel.c
>> +++ b/xen/arch/x86/cpu/intel.c
>> @@ -356,9 +356,6 @@ static void Intel_errata_workarounds(struct cpuinfo_x86 *c)
>>  	    (c->x86_model == 29 || c->x86_model == 46 || c->x86_model == 47))
>>  		__set_bit(X86_FEATURE_CLFLUSH_MONITOR, c->x86_capability);
>>  
>> -	if (cpu_has_tsx_force_abort && opt_rtm_abort)
>> -		wrmsrl(MSR_TSX_FORCE_ABORT, TSX_FORCE_ABORT_RTM);
>> -
>>  	probe_c3_errata(c);
>>  }
>>  
>> diff --git a/xen/arch/x86/cpu/vpmu.c b/xen/arch/x86/cpu/vpmu.c
>> index d8659c63f8..16e91a3694 100644
>> --- a/xen/arch/x86/cpu/vpmu.c
>> +++ b/xen/arch/x86/cpu/vpmu.c
>> @@ -49,7 +49,6 @@ CHECK_pmu_params;
>>  static unsigned int __read_mostly opt_vpmu_enabled;
>>  unsigned int __read_mostly vpmu_mode = XENPMU_MODE_OFF;
>>  unsigned int __read_mostly vpmu_features = 0;
>> -bool __read_mostly opt_rtm_abort;
>>  
>>  static DEFINE_SPINLOCK(vpmu_lock);
>>  static unsigned vpmu_count;
>> @@ -79,7 +78,8 @@ static int __init parse_vpmu_params(const char *s)
>>          else if ( !cmdline_strcmp(s, "arch") )
>>              vpmu_features |= XENPMU_FEATURE_ARCH_ONLY;
>>          else if ( (val = parse_boolean("rtm-abort", s, ss)) >= 0 )
>> -            opt_rtm_abort = val;
>> +            printk(XENLOG_WARNING
>> +                   "'rtm-abort=<bool>' superseded.  Use 'tsx=<bool>' instead\n");
>>          else
>>              rc = -EINVAL;
>>  
>> diff --git a/xen/arch/x86/tsx.c b/xen/arch/x86/tsx.c
>> index 98ecb71a4a..338191df7f 100644
>> --- a/xen/arch/x86/tsx.c
>> +++ b/xen/arch/x86/tsx.c
>> @@ -6,7 +6,9 @@
>>   * Valid values:
>>   *   1 => Explicit tsx=1
>>   *   0 => Explicit tsx=0
>> - *  -1 => Default, implicit tsx=1, may change to 0 to mitigate TAA
>> + *  -1 => Default, altered to 0/1 (if unspecified) by:
>> + *                 - TAA heuristics/settings for speculative safety
>> + *                 - "TSX vs PCR3" select for TSX memory ordering safety
>>   *  -3 => Implicit tsx=1 (feed-through from spec-ctrl=0)
>>   *
>>   * This is arranged such that the bottom bit encodes whether TSX is actually
>> @@ -50,6 +52,26 @@ void tsx_init(void)
>>  
>>          cpu_has_tsx_ctrl = !!(caps & ARCH_CAPS_TSX_CTRL);
>>  
>> +        if ( cpu_has_tsx_force_abort )
>> +        {
>> +            /*
>> +             * On an early TSX-enabled Skylake part subject to the memory
>> +             * ordering erratum, with at least the March 2019 microcode.
>> +             */
>> +
>> +            /*
>> +             * If no explicit tsx= option is provided, pick a default.
>> +             *
>> +             * This deliberately overrides the implicit opt_tsx=-3 from
>> +             * `spec-ctrl=0` because:
>> +             * - parse_spec_ctrl() ran before any CPU details were known.
>> +             * - We now know we're running on a CPU not affected by TAA (as
>> +             *   TSX_FORCE_ABORT is enumerated).
>> +             */
>> +            if ( opt_tsx < 0 )
>> +                opt_tsx = 1;
>> +        }
>> +
>>          /*
>>           * The TSX features (HLE/RTM) are handled specially.  They both
>>           * enumerate features but, on certain parts, have mechanisms to be
>> @@ -75,6 +97,12 @@ void tsx_init(void)
>>          }
>>      }
>>  
>> +    /*
>> +     * Note: MSR_TSX_CTRL is enumerated on TSX-enabled MDS_NO and later parts.
>> +     * MSR_TSX_FORCE_ABORT is enumerated on TSX-enabled pre-MDS_NO Skylake
>> +     * parts only.  The two features are on a disjoint set of CPUs, and not
>> +     * offered to guests by hypervisors.
>> +     */
>>      if ( cpu_has_tsx_ctrl )
>>      {
>>          uint32_t hi, lo;
>> @@ -90,9 +118,28 @@ void tsx_init(void)
>>  
>>          wrmsr(MSR_TSX_CTRL, lo, hi);
>>      }
>> +    else if ( cpu_has_tsx_force_abort )
>> +    {
>> +        /*
>> +         * On an early TSX-enabled Skylake part subject to the memory ordering
>> +         * erratum, with at least the March 2019 microcode.
>> +         */
>> +        uint32_t hi, lo;
>> +
>> +        rdmsr(MSR_TSX_FORCE_ABORT, lo, hi);
>> +
>> +        /* Check bottom bit only.  Higher bits are various sentinels. */
>> +        rtm_disabled = !(opt_tsx & 1);
> I think you also calculate rtm_disabled in the previous if case
> (cpu_has_tsx_ctrl), maybe that could be pulled out?

rtm_disabled needs to stay false when neither cpu_has_tsx_ctrl nor
cpu_has_tsx_force_abort is available.

Otherwise we'll default to disabling the CPUID bits even on systems where
we can't actually control TSX behaviour.

~Andrew


From xen-devel-bounces@lists.xenproject.org Thu May 27 18:36:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 18:36:37 +0000
MIME-Version: 1.0
From: Deb Giles <dgiles@linuxfoundation.org>
Date: Thu, 27 May 2021 13:35:54 -0500
Message-ID: <CAFcyGRqc67xZ0eKZ+F7BAa_bR4yw7npecJYUre2eQCPJsuY-Pg@mail.gmail.com>
Subject: Design Session: Using Gitlab MR for Xen
To: xen-devel@lists.xenproject.org
Content-Type: multipart/mixed; boundary="000000000000b7457b05c3540969"

--000000000000b7457b05c3540969
Content-Type: multipart/alternative; boundary="000000000000b7457905c3540967"

--000000000000b7457905c3540967
Content-Type: text/plain; charset="UTF-8"

Hello,

Attached are the notes and public chat record for the Design Session: Using
Gitlab MR for Xen.

Thanks,

Deb

Deb Giles
Event Manager
The Linux Foundation
+1.503.807.1876 (Time Zone: Central Time)

*Schedule a Meeting with Me* <https://calendly.com/debgiles>


--000000000000b7457905c3540967--
--000000000000b7457b05c3540969
Content-Type: text/plain; charset="UTF-8"; 
	name="bbb-Shanghai[public-chat]_2021-5-27_13-33.txt"
Content-Disposition: attachment; 
	filename="bbb-Shanghai[public-chat]_2021-5-27_13-33.txt"
Content-Transfer-Encoding: base64
Content-ID: <f_kp78lndt1>
X-Attachment-Id: f_kp78lndt1

CiAgICAgICAgICAgIFsxMTo0NV0gV2VsY29tZSB0byBTaGFuZ2hhaSFGb3IgaGVscCBvbiB1c2lu
ZyBCaWdCbHVlQnV0dG9uIHNlZSB0aGVzZSAoc2hvcnQpIHR1dG9yaWFsIHZpZGVvcy5UbyBqb2lu
IHRoZSBhdWRpbyBicmlkZ2UgY2xpY2sgdGhlIHBob25lIGJ1dHRvbi4gIFVzZSBhIGhlYWRzZXQg
dG8gYXZvaWQgY2F1c2luZyBiYWNrZ3JvdW5kIG5vaXNlIGZvciBvdGhlcnMuVGhpcyBzZXJ2ZXIg
aXMgcnVubmluZyBCaWdCbHVlQnV0dG9uLgpbMTE6NDVdIFRvIGludml0ZSBzb21lb25lIHRvIHRo
ZSBtZWV0aW5nLCBzZW5kIHRoZW0gdGhpcyBsaW5rOiBodHRwczovL3hlbnN1bW1pdC5saW51eGZv
dW5kYXRpb24ub3JnL2Ivc2hhbmdoYWkKWzExOjQ4XSBEZWIgR2lsZXMgLSBMRiBFdmVudHMgVGVh
bTogSWYgeW91J2QgbGlrZSBzY3JlZW4gc2hhcmUgYWJpbGl0aWVzLCBqdXN0IGxldCBtZSBrbm93
IGFuZCBJIGNhbiBtYWtlIHlvdSBwcmVzZW50ZXIKWzExOjUwXSBwZW5ueSB6aGVuZzogc28gbWFu
eSBhcm0gZm9sa3MgbG9sClsxMTo1OF0gSnVlcmdlbiBHcm9zcyAoamdyb3NzKTogRGViLCBjYW4g
eW91IHBsZWFzZSBtYWtlIG1lIHByZXNlbnRlcj8KWzExOjU5XSBEZWIgR2lsZXMgLSBMRiBFdmVu
dHMgVGVhbTogWW91J3JlIGFsbCBzZXQhClsxMjowMF0gUm9nZXIgUGF1IE1vbm5lOiBJIHdhbnRl
ZCB0byBhc2sgd2hldGhlciB0aGUgTGludXgga2VybmVsL1FFTVUgY29kZSBjb3VsZCBiZSBhY2Nl
cHRlZCB3aXRob3V0IGFueSBzcGVjIGNoYW5nZT8KWzEyOjA3XSBPbGVrc2FuZHJUOiBJbml0aWFs
bHkgSSBjcmVhdGVkIG5ldyAidmRpc2siIHByb3BlcnR5IGZvciB2aXJ0aW8gZGlzaywgYnV0IHdh
cyBhc2tlZCB0byByZXVzZSBleGlzdGluZyAiZGlzayIgY29uZmlndXJhdGlvbiBpbiBsaWJ4bCBp
ZiBwb3NzaWJsZSkgIGh0dHBzOi8vbGlzdHMueGVucHJvamVjdC5vcmcvYXJjaGl2ZXMvaHRtbC94
ZW4tZGV2ZWwvMjAyMS0wNS9tc2cwMTE3NC5odG1sClsxMjoxM10gc3RhY2t0cnVzdDogSW1wb3J0
YW50IGZvciBBdXRvbW90aXZlLCBGdVNBLCBzZWN1cml0eQpbMTI6MTldIENocmlzdG9waGVyIENs
YXJrOiBodHRwczovL29wZW54dC5hdGxhc3NpYW4ubmV0L3dpa2kvc3BhY2VzL0RDL3BhZ2VzLzEz
MzM0MjgyMjUvQW5hbHlzaXMrb2YrQXJnbythcythK3RyYW5zcG9ydCttZWRpdW0rZm9yK1ZpcnRJ
TwpbMTI6MjBdIENocmlzdG9waGVyIENsYXJrOiBodHRwczovL29wZW54dC5hdGxhc3NpYW4ubmV0
L3dpa2kvc3BhY2VzL0RDL3BhZ2VzLzE2OTYxNjk5ODUvVmlydElPLUFyZ28lMkJEZXZlbG9wbWVu
dCUyQlBoYXNlJTJCMQpbMTI6MjBdIENocmlzdG9waGVyIENsYXJrOiBodHRwczovL29wZW54dC5h
dGxhc3NpYW4ubmV0L3dpa2kvc3BhY2VzL0RDL3BhZ2VzLzEzNDg3NjM2OTgvVmlydElPJTJCQXJn
bwpbMTI6MjRdIERhbWllbiBUaGVub3QgKGR0aGVub3QpOiBTaGFyZWQgbWVtb3J5IHdvdWxkIGJl
IGludGVyZXN0aW5nIGZvciBEUFUgdG9vClsxMjoyNV0gQ2hyaXN0b3BoZXIgQ2xhcms6IGh0dHBz
Oi8vd3d3LnBsYXRmb3Jtc2VjdXJpdHlzdW1taXQuY29tLzIwMTgvc3BlYWtlci9wcmF0dC8KWzEy
OjI3XSBDaHJpc3RvcGhlciBDbGFyazogYWNrIG9uIGRpZmZlcmVudCBwZXJmIGNoYXJhY3Rlcmlz
dGljcyBmb3IgZGlmZmVyZW50IGhhcmR3YXJlIGFyY2hpdGVjdHVyZXMKWzEyOjI4XSBEZW1pIE1h
cmllIE9iZW5vdXI6IFJJU0MtViBvbmx5IGhhcyAzMi1iaXQgYXRvbWljcwpbMTI6MjhdIERlbWkg
TWFyaWUgT2Jlbm91cjogTG9uZy10ZXJtIGZpeCBpcyBwcm9iYWJseSB0byBjaGFuZ2UgdGhlIGd1
ZXN0LXZpc2libGUgQUJJClsxMjozMF0gRGVtaSBNYXJpZSBPYmVub3VyOiBUbyBub3QgcmVxdWly
ZSBhdG9taWNzIGluIHNoYXJlZCBtZW1vcnkKWzEyOjMwXSBKYXNvbiBBbmRyeXVrOiBAQW5kcmV3
LCB5b3UgdXNlIG5ldGZyb250L2JhY2ssIGJ1dCB5b3VyIGRvbTAgbmljIHNjYXR0ZXIvZ2F0aGVy
cyBvdmVyIHRoZSBndWVzdCdzIGdyYW50ZWQgIG5ldHdvcmsgZnJhbWVzIGRpcmVjdGx5PwpbMTI6
MzBdIEFuZHJldyBDb29wZXI6IHllYWgKWzEyOjMxXSBBbmRyZXcgQ29vcGVyOiBhIGNvbnNlcXVl
bmNlIGlzIHRoYXQgZG9tMCBuZXZlciB0b3VjaGVzIHRoZSBtYXBwaW5nLCBzbyBuZXZlciBzZXRz
IHRoZSBBIGJpdCwgc28geW91IGNhbiBza2lwIHRoZSBUTEIgZmx1c2ggb24gdGhlIHVubWFwIHNp
ZGUKWzEyOjMxXSBBbmRyZXcgQ29vcGVyOiBhbmQgdGhpcyBpcyBncmVhdCBmb3IgcGVyZm9ybWFu
Y2UKWzEyOjMxXSBKYXNvbiBBbmRyeXVrOiBGYW5jeSA6KQpbMTI6MzNdIFJvZ2VyIFBhdSBNb25u
ZTogbm90ZSB5b3UgbmVlZCB0byBwYXRjaCBYZW4gdG8gbm90IGRvIHRoZSBmbHVzaCBvbiB1bm1h
cCBpZiB0aGUgQSBiaXQgaXNuJ3Qgc2V0IDopClsxMjozN10gQW5kcmV3IENvb3BlcjogaW4gc29t
ZSBjb3Bpb3VzIGZyZWUgdGltZSwgd2UgbmVlZCB0byB1cHN0cmVhbSBhIHZlcnNpb24gb2YgdGhh
dCB3aGljaCBkb2Vzbid0IHRydXN0IGRvbTAgdG8gYmUgbm9uLW1hbGljaW91cwpbMTI6MzhdIFN0
ZWZhbm8gU3RhYmVsbGluaTogVGhpcyBpcyBXaW5kUml2ZXIgS1ZNdG9vbHMgdGhhdCBkb2VzIHNo
YXJlZCBtZW1vcnkgZm9yIHZpcnRpbyBodHRwczovL2dpdGh1Yi5jb20vT3BlbkFNUC9rdm10b29s
ClsxMjozOV0gUm9nZXIgUGF1IE1vbm5lOiBJIHRoaW5rIGJvdGgKWzEyOjM5XSBEYW5pZWwgUCBT
bWl0aDogb2gsIGkgbWlzdW5kZXJzdG9vZCwgQHN0ZWZhbm8gaSB0aG91Z2h0IHlvdSBzYWlkIHRo
ZXJlIHdhcyBhIG1vdmUgdG8gYSBjb3B5IG1lY2hhbmlzbQpbMTI6NDFdIFN0ZWZhbm8gU3RhYmVs
bGluaTogQGRhbmllbCwgeWVzIEkgbWVhbnQgY29weSB0byBhIHByZS1zaGFyZWQgbWVtb3J5IHJl
Z2lvbgpbMTI6NDJdIFN0ZWZhbm8gU3RhYmVsbGluaTogc29ycnkgZm9yIHRoZSBjb25mdXNpb24K
WzEyOjQyXSBEYW5pZWwgUCBTbWl0aDogb2theSwgbm8gd29ycmllcy4gdGhhbmsgeW91IQpbMTI6
NDJdIFdlaSBMaXU6IFRoYXQncyBpbiB0aGUgbmV3ZXIgc3BlYyByaWdodD8KWzEyOjQ0XSBEZWIg
R2lsZXMgLSBMRiBFdmVudHMgVGVhbTogVGhpcyBpcyBhIDEgbWludXRlIHdhcm5pbmcgYmVmb3Jl
IHRoZSBuZXh0IGRlc2lnbiBzZXNzaW9uIGlzIHNjaGVkdWxlZCB0byBzdGFydApbMTI6NDVdIENo
cmlzdG9waGVyIENsYXJrOiBUaGFua3MgZm9yIHRoZSBkaXNjdXNzaW9uClsxMjo0Nl0gRGFuaWVs
IFAgU21pdGg6IHRoYW5rIHlvdSBqdWVyZ2VuIGZvciBob3N0aW5nIQpbMTI6NDddIERlYiBHaWxl
cyAtIExGIEV2ZW50cyBUZWFtOiBXZSBjYW4gaWYgeW91J2QgbGlrZSEKWzEyOjQ3XSBEZWIgR2ls
ZXMgLSBMRiBFdmVudHMgVGVhbTogVGhlZHkndmUgYmVlbiBzYXZlZApbMTI6NDhdIFN0ZWZhbm8g
U3RhYmVsbGluaTogQGRhbmllbCB5b3UgaGF2ZSBtYWlsClsxMjo0OV0ganVsaWVuLWY6IEdpdExh
YiBkb2VzIGhhdmUgYW4gQVBJLCBub3Qgc3VyZSBob3cgdXNlZnVsIGZvciBNZXJnZSBSZXF1ZXN0
cyBpdCBpcyB0aG91Z2gKWzEyOjUwXSBTdGVmYW5vIFN0YWJlbGxpbmk6IE1SIGFyZSBub3QgdGhl
IGlzc3VlLiBXZSBjb3VsZCB1c2UgdGhlbSB0b2RheS4gVGhlIGltcG9ydGFudCBpc3N1ZSB3aGVy
ZSBkbyB0aGUgcmV2aWV3cyBnby4KWzEyOjUxXSBCb2JieSBFc2hsZW1hbjogKzEgdGhlIGZpcnN0
IHNldHVwIGlzIG5vdCBpbnR1aXRpdmUgYXQgYWxsClsxMjo1MV0ganVsaWVuLWY6IFllYWgsIEkg
d2FzIHRhbGtpbmcgYWJvdXQgTVIgcmV2aWV3cyA6LSkKWzEyOjUxXSBPbGl2aWVyIExhbWJlcnQ6
IGhhaGEgeWVhaCBCb2JieSBJIHJlbWVtYmVyIGl0IHRvb2sgeW91IHNvbWUgdGltZSB0byBiZSBz
ZXR1cApbMTI6NTJdIFNjb3R0IERhdmlzOiBDb21tZW50cyBpbiB0aGUgTVIgd291bGQgd29yayBm
b3IgcmV2aWV3cywgbm8/ClsxMjo1Ml0gU3RlZmFubyBTdGFiZWxsaW5pOiB5ZXMgYnV0IHRoZXkg
ZG9uJ3Qgd29yayB3ZWxsIGlmIHNvbWVvbmUgZWxzZSBpcyByZXBseWluZyB0byBlbWFpbHMKWzEy
OjUyXSBJYW4gSmFja3NvbjogTVIgY29tbWVudHMgYXJlbid0IGF0dGFjaGVkIHRvIHBhcnRpY3Vs
YXIgY29tbWl0cy4KWzEyOjUzXSBTYW11ZWwgVmVyc2NoZWxkZTogSWFuOiBJIHRoaW5rIHRoZXkg
YXJlIGlmIHlvdSBjb21tZW50IGZyb20gdGhlIGNvbW1pdCBpdHNlbGYKWzEyOjUzXSBJYW4gSmFj
a3NvbjogVGhlIGZvcmdlcyAoZ2l0bGFiIGdpdGh1YikgbGV0IHlvdSBkbyByZXZpZXcgbGluZS1i
eS1saW5lIGluIGNvZGUgdmlhIHRoZWlyIFVJIGJ1dCBpdCBpcyByZWFsbHkgd2ViIHVpIG9ubHkg
YW5kIGhhcyBpdHMgb3duIHByb2JsZW1zLgpbMTI6NTRdIFN0ZWZhbm8gU3RhYmVsbGluaTogdGhl
IGlzc3VlIGlzIHRoYXQgd2UgcmFuIGEgdGVzdCBhbmQgd2Fzbid0IGludGVncmF0aW5nIHZlcnkg
d2VsbCB3aXRoIHRoZSBlbWFpbCB3b3JrZmxvdy4gSSBjYW4ndCByZW1lbWJlciBleGFjdGx5IHdo
eS4gV2FzIGl0IGJlY2F1c2UgdGhlIG5vdGlmaWNhdGlvbiBlbWFpbHMgZGlkbid0IHJldGFpbiB0
aGUgYXV0aG9yIG9mIHRoZSBjb21tZW50PwpbMTI6NTRdIEJlcnRyYW5kIE1hcnF1aXM6IGZvbGxv
d2luZyBwZW5kaW5nIHBhdGNoZXMgaXMgYWxzbyBhIGxvdCBlYXNpZXIgaW4gYSB3ZWIgdWksIHRo
ZSBmaWx0ZXJpbmcgaXMgYSBsb3QgZWFzaWVyClsxMjo1NV0gT2xpdmllciBMYW1iZXJ0OiBAU3Rl
ZmFubyA6IHdlbGwgdGhlIHRlc3Qgd2FzIGhhbGYgYmFrZWQgSSB3b3VsZCBzYXkKWzEyOjU1XSBP
bGl2aWVyIExhbWJlcnQ6IHRoYXQncyB3aHkgd2Ugc2hvdWxkIGRvIGFuIG9uLWJvYXJkaW5nIHRv
IGJlIHN1cmUgaXQncyB0cnVseSBkb25lICJ0aGUgYmVzdCB3YXkiIHBvc3NpYmxlClsxMjo1NV0g
T2xpdmllciBMYW1iZXJ0OiB3aXRoIHBlb3BsZSB3aXRoIGEgYml0IG9mIEdpdGxhYiBleHBlcmll
bmNlClsxMjo1NV0gT2xpdmllciBMYW1iZXJ0OiBhbmQgc2VlIHRoZW4gaWYgaXQncyBPSyBvciBu
b3QKWzEyOjU2XSBKdWxpZW4gR3JhbGw6IFdoeSBhcmUgd2Ugc3RpY2tpbmcgdG8gZ2l0bGFiPwpb
MTI6NTZdIFN0ZWZhbm8gU3RhYmVsbGluaTogaXQgaXMgb3BlbiBzb3VyY2UgYW5kIHdlIGFscmVh
ZHkgdXNlIHRoZSBDSS1sb29wIHRoZXJlClsxMjo1Nl0gSnVsaWVuIEdyYWxsOiBPciBpcyB0aGUg
cHJvYmxlbSBiZWNhdXNlIG9mIHRoZSB3ZWIgVUkgaW4gZ2VuZXJhbD8KWzEyOjU2XSBCZXJ0cmFu
ZCBNYXJxdWlzOiBnaXQgcHVzaCBpcyBtb3JlIHNpbXBsZSB0aGVuIGdpdCBzZW5kLWVtYWlsIDot
KQpbMTI6NTZdIEJlcnRyYW5kIE1hcnF1aXM6IGZyb20gd2hhdCBpIHJlY2FsbCB0aGUgcHJvYmxl
bSB3YXMgZHVlIHRvIHRoZSBmYWN0IHRoYXQgaXQgd2FzIG5vdCBwb3NzaWJsZSB0byBkbyBib3Ro
IFVJIGFuZCBtYWlsIHJldmlld3MgaW4gcGFyYWxsZWwKWzEyOjU2XSBKdWxpZW4gR3JhbGw6IFN0
ZWZhbm86IFJpZ2h0LCBJIGFtIHRyeWluZyB0byBmaWd1cmUgb3V0IGlmIHBlb3BsZSBhcmUgYWdh
aW5zdCB3ZWIgVUkgaW4gZ2VuZXJhbCBvciBqdXN0IGdpdGxhYi4KWzEyOjU3XSBCZXJ0cmFuZCBN
YXJxdWlzOiBkZWZpbml0ZWx5IGdpdGxhYiBpcyBub3QgdGhlIG9ubHkgYW5zd2VyLCB0aGVyZSBt
aWdodCBiZSBvdGhlcnMKWzEyOjU3XSBDaHJpc3RvcGhlciBDbGFyazogSSBsaWtlIGVtYWlsLiBQ
bGVhc2UgZXhjdXNlIG1lIGZvciBhIHdoaWxlIHdoaWxlIEkgdGVuZCB0byBteSBsYXduLgpbMTI6
NTddIEJlcnRyYW5kIE1hcnF1aXM6IGFkdmFudGFnZSBvZiBnaXRsYWIgaXMgQ0kgcGFydCB3ZSBh
bHJlYWR5IGhhdmUgaSB3b3VsZCBzYXkKWzEyOjU3XSBqdWxpZW4tZjogSU1ITywgeW91IHNob3Vs
ZCBub3QgdHJ5IHRvIHBvcnQgeW91ciBleGlzdGluZyBlbWFpbCB3b3JrZmxvdyB0byBHaXRMYWIs
IGJ1dCBhZGQgYSBuZXcgd29ya2Zsb3cgb24gR2l0TGFiIGZvciBzaW1wbGUgY2hhbmdlcyB0byBo
ZWxwIGF0dHJhY3QgbmV3IGNvbnRyaWJ1dG9ycwpbMTI6NThdIE9saXZpZXIgTGFtYmVydDogKzEK
WzEyOjU4XSBBbnRob255IFBFUkFSRCAoYW50aG9ueXBlcik6IGdpdCByYW5nZS1kaWZmIDstKQpb
MTI6NThdIEJlcnRyYW5kIE1hcnF1aXM6IGZvbGxvd2luZyB0aHJlYWRzIGFuZCBjb21tZW50cyBp
cyBuaWNlIGluIGdpdGxhYiBpIGZpbmQgYmVjYXVzZSB5b3UgY2FuIGFuc3dlciB0byBhIHNwZWNp
ZmljIGNvbW1lbnQKWzEyOjU5XSBEZW1pIE1hcmllIE9iZW5vdXI6IElmIHJlZ2lzdGVyaW5nIGFu
IGFjY291bnQgb24gR2l0TGFiIGF1dG9tYXRpY2FsbHkgZ2VuZXJhdGVkIGNyZWRlbnRpYWxzIGZv
ciBgZ2l0IHNlbmQtZW1haWwsIHRoYXQgd291bGQgcmVhbGx5IGhlbHAKWzEyOjU5XSBTYW11ZWwg
VmVyc2NoZWxkZTogWW91IGNhbiBjb21wYXJlIGJldHdlZW4gc3VjY2Vzc2l2ZSBpdGVyYXRpb25z
IG9mIGEgZ2l2ZW4gY29tbWl0IGRpcmVjdGx5IGluIHRoZSB3ZWIgVUkKWzEyOjU5XSBCZXJ0cmFu
ZCBNYXJxdWlzOiArMQpbMTI6NTldIERlbWkgTWFyaWUgT2Jlbm91cjogQ2FuIHdlIGFzayB0aGUg
R0hDIChHbGFzZ293IEhhc2tlbGwgQ29tcGlsZXIpIHRlYW0gaG93IHRoZXkgZG8gY29kZSByZXZp
ZXc/ClsxMjo1OV0gQmVydHJhbmQgTWFycXVpczogZnVsbHkgYWdyZWUgd2l0aCBJYW4gaGVyZQpb
MTM6MDBdIERlbWkgTWFyaWUgT2Jlbm91cjogVGhleSBoYXZlIGEgbWFzc2l2ZSBhbW91bnQgb2Yg
ZXhwZXJpZW5jZSB1c2luZyBHaXRMYWIgZm9yIHRoaXMsIGhhdmluZyBwcmV2aW91c2x5IHVzZWQg
UGhhYnJpY2F0b3IKWzEzOjAwXSBEZW1pIE1hcmllIE9iZW5vdXI6IFRoZXkgaGF2ZSBtYW5hZ2Vk
IHRvIHVzZSBHaXRMYWIgdG8gcmV2aWV3IGNvbW1pdC1ieS1jb21taXQuClsxMzowMV0gSmVwcGU6
IEkgc2VlIGFuIGlzc3VlIHdpdGggYmFyIHRvIGVudHJ5IG91dGxpbmVkIGluIHRoaXMgZGlzY3Vz
c2lvbi4gUGVyaGFwcyB0aGVyZSBjYW4gYmUgYSBsYW5kaW5nIHBhZ2UgYXQgeGVuIHByb2plY3Qg
dGhhdCBzYXlzICJoZXkgdGhhbmtzIGZvciBjb250cmlidXRpbmcsIHdlIHdvdWxkIGxvdmUgeW91
ciBwYXRjaC4uLiBoZXJlIGlzIGhvdyB0byBzdWJtaXQsIGdpdGxhYiBoYXMgbGltaXRhdGlvbnMg
Zm9yIHVzIHNhZGx5IiBhbmQgdGhlbiBwcm92aWRlIC4uLiBYIChhbiBleHBsYW5hdGlvbiBvZiBl
bWFpbCBzdWJtaXNzaW9uIG9yLi4uIGEgaW4gYmV0d2VlbiBzb2x1dGlvbiB0aGF0IGlzIGhvcGVm
dWxseSBlYXNpZXIgdGhlbiBlbWFpbCBmb3IgYSBuZXcgIGNvbnRyaWJ1dG9yPykKWzEzOjAxXSBH
ZW9yZ2UgRHVubGFwIChnd2QpOiBKZXBwZTogV2UgaGF2ZSBvbmUuICB0aGUgcHJvYmxlbSBpcyBp
J3RzIHN1cGVyIGNvbXBsaWNhdGVkClsxMzowMV0gRGVtaSBNYXJpZSBPYmVub3VyOiBUaGF0IGlz
IHN0aWxsIGEgaGlnaCBiYXJyaWVyIHRvIGVudHJ5LCBoZW5jZSBteSBzdWdnZXN0aW9uIG9mIGF1
dG9tYXRpY2FsbHkgc2VuZGluZyB0aGUgZW1haWwuClsxMzowMV0gR2VvcmdlIER1bmxhcCAoZ3dk
KTogSU5oZXJlbnRseSBjb21wbGljYXRlZDsgaXQgY2FuJ3QgYmUgbWFkZSBzaW1wbGVyLgpbMTM6
MDJdIFNhbXVlbCBWZXJzY2hlbGRlOiBEZW1pOiBJIHRoaW5rIEdpdGxhYidzIFVJIGhhcyBtb3N0
IG9uZSBuZWVkcyB0byByZXZpZXcgY29tbWl0IGJ5IGNvbW1pdCwgd2l0aCBzdWNjZXNzaXZlIGlu
dGVyYXRpb25zIChmb3JjZSBwdXNoZXMpIGZvciB0aGUgc2FtZSBjb21taXQgc2VyaWVzLiBUaGUg
bWFpbiBpc3N1ZSBpcyB0aGF0IGl0J3MgYSBkaWZmZXJlbnQgcHJvY2VzcyB0aGFuIHRoZSBjdXJy
ZW50IHByb2Nlc3MuIEl0IHdvdWxkIGJlIGVhc2llciBpZiB3ZSB3ZXJlIHN0YXJ0aW5nIGZyb20g
bm90aGluZy4KWzEzOjAyXSBPbGl2aWVyIExhbWJlcnQ6IGluZGVlZCwgYmVuZGluZyBwcmV2aW91
cy9leGlzdGluZyB3b3JrZmxvdyB3b24ndCByZWFsbHkgZml0ClsxMzowMl0gSmVwcGU6IEBHZW9y
Z2UgQWgsIEkgc2VlLiA7KQpbMTM6MDRdIG1hcm1hcmVrOiByZXZpZXdhYmxlLmlvIGlzICpyZWFs
bHkqIHBvd2VyZnVsIGZvciBoYW5kbGluZyBzZXZlcmFsIHJldmlzaW9ucyBvZiB0aGUgc2VyaWVz
LCBhbmQgdHJhY2tpbmcgc3RhdGUgb2YgZWFjaCBjb21taXQgb3IgZXZlbiBjaHVuazsgYnV0IGl0
J3MgeWV0IGFub3RoZXIgd2ViIHVpClsxMzowNF0gRGVtaSBNYXJpZSBPYmVub3VyOiBDYW4gd2Ug
aGF2ZSBhIGJvdCB0aGF0IGF1dG9tYXRpY2FsbHkgc3RyaXBzIHRob3NlIGRpc2NsYWltZXJzPwpb
MTM6MDVdIEdlb3JnZSBEdW5sYXAgKGd3ZCk6IFRoYXQgbWlnaHQgYnJlYWsgREtJTQpbMTM6MDVd
IERlbWkgTWFyaWUgT2Jlbm91cjogSXQgd291bGQ7IHdlIHdvdWxkIG5lZWQgdG8gcmV3cml0ZSBG
cm9tIGhlYWRlcnMKWzEzOjA1XSBEZW1pIE1hcmllIE9iZW5vdXI6IEJ1dCB0aGF0IGlzIGJhc2lj
YWxseSBhIGdpdmVuIG5vd2FkYXlzLgpbMTM6MDVdIE9saXZpZXIgTGFtYmVydDogeWVhaCB3ZSBo
YWQgdGhlIHNhbWUgaXNzdWUgaGVyZSB0byBmaW5kIHRoZXJlIHdhcyBhIHYyIG9uIGEgcGF0Y2gg
c2VyaWVzClsxMzowNl0gQW5kcmV3IENvb3BlcjogaHR0cHM6Ly9sb3JlLmtlcm5lbC5vcmcveGVu
LWRldmVsLwpbMTM6MDddIFdlaSBMaXU6IFRyeSBiNC4gSXQgaXMgcmVhbGx5IGdvb2QuClsxMzow
OF0gT2xpdmllciBMYW1iZXJ0OiBTaG91bGQgd2UgdHJ5IHRvIHN0YXJ0IHVzaW5nIHRoZSBmdWxs
IHdlYiBVSSB3b3JrZmxvdyBmb3IgInNtYWxsIiBNUnMgZmlyc3Q/ClsxMzowOF0gT2xpdmllciBM
YW1iZXJ0OiBhbmQgc2VlIHdoZXJlIGl0J3MgZ29pbmcgZnJvbCB0aGVyZQpbMTM6MDhdIE9saXZp
ZXIgTGFtYmVydDogKmZyb20KWzEzOjA5XSBEb3VnIEdvbGRzdGVpbjogU29ycnkuIEkgbWlzc2Vk
IHRoZSBlYXJseSBwYXJ0IG9mIHRoZSBjb252ZXJzYXRpb24uClsxMzowOV0gT2xpdmllciBMYW1i
ZXJ0OiBoZXkgRG91ZyA6KQpbMTM6MDldIERvdWcgR29sZHN0ZWluOiBJJ20gd2F0Y2hpbmcgdGhl
IHRleHQgY29udm8gYnV0IEkgaGFkIGEgd29yayBtZWV0aW5nIEkgY291bGRuJ3QgZ2V0IG91dCBv
Zi4KWzEzOjEwXSBEZW1pIE1hcmllIE9iZW5vdXI6IFdoYXQgYWJvdXQgU291cmNlSHV0PwpbMTM6
MTBdIEplcHBlOiBIb3cgbWFueSB0b29scyBhcmUgYXZhaWxhYmxlIHRvIG1hbmFnZSBodWdlIGNv
ZGUgYmFzZXM/IEl0IHNlZW1zIHRvIG1lIChhcyBhbiBvdXRzaWRlciBpbiB0aGlzKS4gUGVyaGFw
cywgb25lIGNhbiBsb29rIGF0IHNldmVyYWwgIE9TUyBhbmQgZmluZCBvbmUgdGhhdCB3b3JrcyB3
aXRoIHNtYWxsIGNvbnRyaWJ1dG9ycz8gVGhlbiBvbmUgbWF5IHZvdGUgb24gYSBzb2x1dGlvbj8g
T3IgaXMgdGhhdCBub3QgcG9zc2libGUgZm9yIGEgdGVjaG5pY2FsIHJlYXNvbi4KWzEzOjEwXSBP
bGl2aWVyIExhbWJlcnQ6IEkgdGhpbmsgaXQncyBhIG1hdHRlciBvZiBsZWF2aW5nIHRoZSBlbWFp
bCBjbGllbnQgYXQgc29tZSBwb2ludC4KWzEzOjEwXSBEb3VnIEdvbGRzdGVpbjogVWx0aW1hdGVs
eSB3aGF0IHdlIGRvIEkgd291bGQgcmVhbGx5IGxpa2UgdG8gc2VlIHVzIGFkb3B0IGEgcHJvY2Vz
cyB0aGF0J3Mgc2ltaWxhciB0byBhbm90aGVyIHByb2plY3QuIEFub3RoZXIgbGFyZ2VyIHByb2pl
Y3QuClsxMzoxMV0gRG91ZyBHb2xkc3RlaW46IEkgZG9uJ3Qgd2FudCB0byBzZWUgdXMgb2ZmIG9u
IGFuIGlzbGFuZCB3aGVyZSB3ZSBoYXZlIHRvIG1haW50YWluIGEgbG90IG9mIGV4dHJhIHNpZGUg
cHJvamVjdHMuClsxMzoxMV0gR2VvcmdlIER1bmxhcCAoZ3dkKTogRG91ZzogRGlkIHlvdSBoYXZl
IGEgbGFyZ2UgcHJvamVjdCBpbiBtaW5kPwpbMTM6MTJdIERvdWcgR29sZHN0ZWluOiBOb3Qgc3Bl
Y2lmaWNhbGx5LiBCdXQgcHJvamVjdHMgbGlrZSBRRU1VIGFyZSBnb29kIHRvIGxvb2sgdG93YXJk
cy4KWzEzOjEyXSBJYW4gSmFja3NvbjogSSBhYnNvbHV0ZWx5IGFncmVlIHdpdGggd2hhdCBBbmR5
IGlzIHNheWluZwpbMTM6MTJdIFN0ZWZhbm8gU3RhYmVsbGluaTogUUVNVSBpcyBzdGlsbCBkb2lu
ZyBlbWFpbCByZXZpZXdzLCBidXQgdGhleSBzd2l0Y2hlZCB0byBNUnMgZm9yIHB1bGwgcmVxdWVz
dHMgKG5vdCByZXZpZXdzKQpbMTM6MTJdIFdlaSBMaXU6IENsb3VkIEh5cGVydmlzb3IgdXNlcyBH
aXRIdWIsIGFuZCBhdCB0aGUgc2FtZSB0aW1lIGFkaGVyZSB0byBoaWdoIHN0YW5kYXJkICBmb3Ig
Y29tbWl0IG1lc3NhZ2VzLgpbMTM6MTNdIFdlaSBMaXU6IEl0IGNhbiBiZSBkb25lLgpbMTM6MTNd
IGp1bGllbi1mOiBUaGF0J3MgcGVyZmVjdGx5IHBvc3NpYmxlIHdpdGggR2l0TGFiClsxMzoxM10g
U2FtdWVsIFZlcnNjaGVsZGU6IFllcywgdGhpcyBtZWFucyBubyAiZml4dXAgZml4dXAiIGNvbW1p
dHMgaW4gbWVyZ2UgcmVxdWVzdHMsIGJ1dCBpbmNyZW1lbnRhbCByZWZpbmluZyBvZiBjb21taXRz
LgpbMTM6MTNdIGp1bGllbi1mOiBZb3UgY2FuIHJlYmFzZSBhbmQgbWVyZ2UgdGhlIGNvbW1pdHMK
WzEzOjEzXSBKZXBwZTogQ2FuIGEgc29sdXRpb24gYmUgZm91bmQgd2hlcmUgZXh0cmVtZWx5IHZl
cmJvc2UgaGlzdG9yeSBpcyBiYWtlZCBpbj8KWzEzOjEzXSBTdGVmYW5vIFN0YWJlbGxpbmk6IEkg
YWdyZWUgd2l0aCBBbmR5ClsxMzoxM10gR2VvcmdlIER1bmxhcCAoZ3dkKTogV2VpOiBZb3UgbWVh
biB0aGUgY29tbWl0IG1lc3NhZ2VzIGhhdmUgYSBzdW1tYXJ5IG9mIGFsbCB0aGUgInBhdGhzIG5v
dCB0YWtlbiIgYW5kIHdoeT8KWzEzOjE0XSBXZWkgTGl1OiBOby4gSSBtZWFuIGNvbW1pdCBtZXNz
YWdlcyBhcmUgY2xvc2UgdG8gd2hhdCBMaW51eCBrZXJuZWwgLyBYZW4gaGF2ZSBmb3IgYXJjaGFl
b2xvZ3kgcHVycG9zZXMuClsxMzoxNF0gT2xpdmllciBMYW1iZXJ0OiBJIHRoaW5rIHRoZSBvbmx5
IHdheSB0byBwcm92ZSBpdCBpcyB0byBhY3R1YWxseSB0cnkgd2hpbGUgaGF2aW5nIHBlb3BsZSBh
cm91bmQgdG8gaGVscCBvbiB1c2luZyBpdCA6KQpbMTM6MTRdIEdlb3JnZSBEdW5sYXAgKGd3ZCk6
IEkganVzdCBzcGVudCBhIHdlZWsgZG9pbmcgZGF0YSBtaW5pbmcgb24gdGhlIDE3LXllYXIgaGlz
dG9yeSBvZiB4ZW4tZGV2ZWwsIHNvIEkgZGVmaW5pdGVseSBhcHByZWNpYXRlIHRoZSBvcGVuIGZv
cm1hdC4KWzEzOjE1XSBXZWkgTGl1OiBJdCBpcyBqdXN0IHRoYXQgcGVvcGxlIG5lZWQgdG8gYmUg
dmlnaWxhbnQgYWJvdXQgbm90IG1lcmdpbmcgdGhpbmdzIHdpdGggY3JhcHB5IGNvbW1pdCBtZXNz
YWdlcy4KWzEzOjE1XSBEb3VnIEdvbGRzdGVpbjogQWdyZWVkIFdlaS4KWzEzOjE1XSBCZXJ0cmFu
ZCBNYXJxdWlzOiBnaXRsYWIgb3Igb3RoZXIgdGhpbmdzIGxpa2UgdGhhdCBhcmUgc3RpbGwgZ2Vu
ZXJhdGluZyBtYWlscyB3aGljaCBjb3VsZCBiZSBhcmNoaXZlZApbMTM6MTVdIEdlb3JnZSBEdW5s
YXAgKGd3ZCk6IEJ1dCB0aGF0J3Mgbm90IHdoYXQgQW5keSBpcyB0YWxraW5nIGFib3V0OyBpdCdz
IGFib3V0IGdvaW5nIGJhY2sgMTAgeWVhcnMgYWdvIGFuZCBzZWVpbmcgYWxsIHRoZSAicGF0aHMg
bm90IHRha2VuIiBkaXNjdXNzaW9uIGFuZCB3aHkgdGhleSB3ZXJlbid0IHRha2VuLgpbMTM6MTVd
IEJvYmJ5IEVzaGxlbWFuOiBSdXN0YyBpcyBhIGdyZWF0ICBleGFtcGxlIG9mIGJpZyAvIGNvbXBs
aWNhdGVkIHByb2plY3QgdXNpbmcgTVIgcHJvY2VzcwpbMTM6MTZdIFdlaSBMaXU6IFllcywgUnVz
dCBhcyB3ZWxsClsxMzoxN10gU2FtdWVsIFZlcnNjaGVsZGU6IENhbid0IHlvdSBleHBvcnQgZXZl
cnkgZGF0YSBmcm9tIGdpdGxhYj8KWzEzOjE3XSBqdWxpZW4tZjogTm90IHN1cmUgYWJvdXQgcmV2
aWV3IGNvbW1lbnRz4oCmClsxMzoxN10gU2NvdHQgRGF2aXM6ICsxIGZvciBiZWluZyBhYmxlIHRv
IG1hbmFnZSBjb2RlIHdpdGggTVJzIGFuZCBoYXZlIHRoZW0gYXV0b21hdGljYWxseSBnZXQgc2Vu
t to email list for discussion to occur
[13:17] Doug Goldstein: Require a PR to have a thread on xen-devel
[13:17] Olivier Lambert: you can export but IDK on which format exactly
[13:17] julien-f: I don't treat them as permanent data, if something is necessary, it should either be a comment code or a commit message
[13:18] Olivier Lambert: but yeah you can get the sources of GL and import on your own instance
[13:18] julien-f: @Scott that could work yes :-)
[13:18] Stefano Stabellini: right that would be easy to do: gitlab for MR and tests but mailing list for reviews. That would be OK.
[13:18] Anthony PERARD (anthonyper): I've just found out how QEMU does work around email submission: https://wiki.qemu.org/Contribute/SubmitAPatch#If_you_cannot_send_patch_emails
[13:18] George Dunlap (gwd): I mean long-term we could host our own gitlab instance, and then the project at least as a whole would have the whole history
[13:18] julien-f: @Stefano not sure it really helps new contributors
[13:19] julien-f: the main pain point appears to be review
[13:20] Bertrand Marquis: what is the interest of pushing to MR and doing review by mail after ?
[13:21] Stefano Stabellini: one of the points was tracking and that would help
[13:21] julien-f: Exactly :-p
[13:21] Scott Davis: submitter can just add the acked by to the MR
[13:21] Edwin Török: https://gerrit.googlesource.com/plugins/reviewnotes/%2B/master/src/main/resources/Documentation/refs-notes-review.md
[13:21] Edwin Török: https://gerrit.googlesource.com/plugins/reviewnotes/%2B/master/src/main/resources/Documentation/refs-notes-review.md
[13:21] Julien Grall: The main issue is not quoting but disclaimer. Technically we should not answer to such e-mail (they are meant to be private).
[13:22] Stefano Stabellini: QEMU;s solution might work: https://wiki.qemu.org/Contribute/SubmitAPatch#If_you_cannot_send_patch_emails
[13:23] Ian Jackson: Most of the disclaimers just say it "may" be private or some such.  That's not a problem.
[13:23] Bertrand Marquis: in gerrit you can modify the commit message directly in the UI
[13:23] Bertrand Marquis: this prevents v58 of patches
[13:23] Samuel Verschelde: Having used gerrit in the past I love it despite its awful UI and the way it forces you to refine until it's perfect
[13:23] Bertrand Marquis: maybe this can also be done in some other UI
[13:23] Olivier Lambert: yeah there's some bot on Gitlab that could do that
[13:23] Samuel Verschelde: (second part of my comment is a plus in fact)
[13:23] Julien Grall: Ian: Note sure we should take the chance if the patch is merged.
[13:24] Julien Grall: s/Note/not/
[13:24] Olivier Lambert: I agree with Edwin: we need to start first on something on the side.
[13:24] Olivier Lambert: consider it as a "new way"
[13:24] Bertrand Marquis: we have a gitlab
[13:24] Bertrand Marquis: we use it for Fusa or PCI right now
[13:26] Bertrand Marquis: following comments among patch versions in gitlab is definitely nice
[13:26] Julien Grall: +1 with what Ian says. The tracking would be easier.
[13:26] Bobby Eshleman: Being able to resolve comments is indeed very nice.  Combing through email to double check each comment has been addressed in the revision is a pain.
[13:27] julien-f: +1
[13:27] Olivier Lambert: +1 too :)
[13:27] julien-f: Start simple and fix things as you go
[13:28] Scott Davis: the advantage is not having to setup git-send-email
[13:28] Bertrand Marquis: also the history, you keep track of patches pending for review
[13:28] Scott Davis: I just did that and it took the better part of a workday to figure it all out for my first patch sub
[13:29] Jeppe: Oh no!
[13:29] Jeppe: lol
[13:29] julien-f: Exactly!
It's a new workflow
[13:29] Olivier Lambert: yeah indeed
[13:29] Scott Davis: though part of that was just finding all the riught documentation
[13:29] Ian Jackson: Right.  We do need a small git-send-email robot then
[13:29] Ian Jackson: Is anyone volunteering to set one up?
[13:29] Jeppe: (I thought only I test the most difficult first) lol
[13:29] Ian Jackson: Could be a bit shoddy and ad-hoc for now
[13:30] Deb Giles - LF Events Team: This is a 1 minute warning to wrap up this design session before closing remarks begin in Budapest
[13:30] Deb Giles - LF Events Team: https://xensummit.linuxfoundation.org/b/budapest
[13:30] Samuel Verschelde: :)
[13:30] Olivier Lambert: Thanks Ian :)
[13:31] Doug Goldstein: Ian, search if someone has already made one.
[13:31] Bertrand Marquis: thanks for the discussion
[13:31] julien-f: Thanks all!
[13:31] Jeppe: TY
[13:31] Olivier Lambert: Thanks!
[13:31] Doug Goldstein: You might be able to reuse some code.
[13:31] Bobby Eshleman: Great convo!  Thanks all
[13:32] Edwin Török: btw it'd be useful if this public chat got saved together with the shared notes
[13:32] Edwin Török: they both contain useful information, would that be possible Deb?
[13:33] Deb Giles - LF Events Team: Yes, I can save the public chat too
--000000000000b7457b05c3540969
Content-Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document; 
	name="Design Session - Using Gitlab MR for Xen.docx"
Content-Disposition: attachment; 
	filename="Design Session - Using Gitlab MR for Xen.docx"
Content-Transfer-Encoding: base64
Content-ID: <f_kp78lndd0>
X-Attachment-Id: f_kp78lndd0

--000000000000b7457b05c3540969--


From xen-devel-bounces@lists.xenproject.org Thu May 27 18:48:50 2021
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
To: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
Date: Thu, 27 May 2021 19:48:41 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 7bit

Hi Jan,

On 27/05/2021 12:28, Jan Beulich wrote:
> port_is_valid() and evtchn_from_port() are fine to use without holding
> any locks. Accordingly acquire the per-domain lock slightly later in
> evtchn_close() and evtchn_bind_vcpu().

So I agree that port_is_valid() and evtchn_from_port() are fine to use 
without holding any locks in evtchn_bind_vcpu(). However, it is 
misleading to say there is no problem with evtchn_close().

evtchn_close() can be called with current != d, and therefore there is a 
risk that port_is_valid() may first return true and then false, because 
d->valid_evtchns is decremented in evtchn_destroy().

Thankfully the memory is still there, so the current code is okayish and 
I could reluctantly accept this behavior being spread. However, I don't 
think it should be left undocumented: it deserves a comment both in the 
code (maybe on top of port_is_valid()?) and in the commit message.

> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> ---
> v6: Re-base for re-ordering / shrinking of series.
> v4: New.
> 
> --- a/xen/common/event_channel.c
> +++ b/xen/common/event_channel.c
> @@ -606,17 +606,14 @@ int evtchn_close(struct domain *d1, int
>       int            port2;
>       long           rc = 0;
>   
> - again:
> -    spin_lock(&d1->event_lock);
> -
>       if ( !port_is_valid(d1, port1) )
> -    {
> -        rc = -EINVAL;
> -        goto out;
> -    }
> +        return -EINVAL;
>   
>       chn1 = evtchn_from_port(d1, port1);
>   
> + again:
> +    spin_lock(&d1->event_lock);
> +
>       /* Guest cannot close a Xen-attached event channel. */
>       if ( unlikely(consumer_is_xen(chn1)) && guest )
>       {
> @@ -1041,16 +1038,13 @@ long evtchn_bind_vcpu(unsigned int port,
>       if ( (v = domain_vcpu(d, vcpu_id)) == NULL )
>           return -ENOENT;
>   
> -    spin_lock(&d->event_lock);
> -
>       if ( !port_is_valid(d, port) )
> -    {
> -        rc = -EINVAL;
> -        goto out;
> -    }
> +        return -EINVAL;
>   
>       chn = evtchn_from_port(d, port);
>   
> +    spin_lock(&d->event_lock);
> +
>       /* Guest cannot re-bind a Xen-attached event channel. */
>       if ( unlikely(consumer_is_xen(chn)) )
>       {
> 

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Thu May 27 18:50:45 2021
MIME-Version: 1.0
References: <20210503192810.36084-1-jandryuk@gmail.com> <20210503192810.36084-5-jandryuk@gmail.com>
 <1747789a-ab6c-cdae-ed35-a6b81ac580a9@suse.com>
In-Reply-To: <1747789a-ab6c-cdae-ed35-a6b81ac580a9@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Thu, 27 May 2021 14:50:29 -0400
Message-ID: <CAKf6xps4NuBxMpgCo_duWU1ZXB8x8B8uszb3uNyb6kABxUhNHA@mail.gmail.com>
Subject: Re: [PATCH 04/13] cpufreq: Add Hardware P-State (HWP) driver
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, George Dunlap <george.dunlap@citrix.com>, 
	Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>, 
	Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>, 
	=?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Wed, May 26, 2021 at 11:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.05.2021 21:28, Jason Andryuk wrote:
> > If HWP Energy_Performance_Preference isn't supported, the code falls
> > back to IA32_ENERGY_PERF_BIAS.  Right now, we don't check
> > CPUID.06H:ECX.SETBH[bit 3] before using that MSR.
>
> May I ask what problem there is doing so?
>
> >  The SDM reads like
> > it'll be available, and I assume it was available by the time Skylake
> > introduced HWP.
>
> The SDM documents the MSR's presence back to at least Nehalem, but ties
> it to the CPUID bit even there.

Your point about inconsistent virtualized environments is something I
hadn't considered.  I will add a check, but I will disable hwp in
that case.  While it could run without an energy/performance
preference, HWP doesn't seem useful in that scenario.
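For illustration, the availability check under discussion can be sketched
as a pure helper (the helper name, and how the CPUID leaf-6 ECX value
would be obtained, are hypothetical, not from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* CPUID.06H:ECX.SETBH[bit 3] advertises IA32_ENERGY_PERF_BIAS support. */
#define CPUID6_ECX_SETBH (1u << 3)

/*
 * Hypothetical helper: decide whether the EPB fallback may be used,
 * given the ECX output of CPUID leaf 6.  If this returns false, hwp
 * would be disabled rather than run without an energy/performance
 * preference.
 */
static bool epb_available(uint32_t cpuid6_ecx)
{
    return cpuid6_ecx & CPUID6_ECX_SETBH;
}
```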

> > --- a/docs/misc/xen-command-line.pandoc
> > +++ b/docs/misc/xen-command-line.pandoc
> > @@ -1355,6 +1355,15 @@ Specify whether guests are to be given access to physical port 80
> >  (often used for debugging purposes), to override the DMI based
> >  detection of systems known to misbehave upon accesses to that port.
> >
> > +### hwp (x86)
> > +> `= <boolean>`
> > +
> > +> Default: `true`
> > +
> > +Specifies whether Xen uses Hardware-Controlled Performance States (HWP)
> > +on supported Intel hardware.  HWP is a Skylake+ feature which provides
> > +better CPU power management.
>
> Is there a particular reason giving this a top-level option rather
> than a sub-option of cpufreq=?

I will investigate.

Below, I'm trimming everything where I will just make the changes
according to your earlier email.

> > +    };
> > +    union hwp_request curr_req;
> > +    uint16_t activity_window;
> > +    uint8_t minimum;
> > +    uint8_t maximum;
> > +    uint8_t desired;
> > +    uint8_t energy_perf;
> > +};
> > +struct hwp_drv_data *hwp_drv_data[NR_CPUS];
>
> New NR_CPUS-dimensioned arrays need explicit justification. From
> what I get I can't see why this couldn't be per-CPU data instead.
>
> Also - static?

I followed acpi-cpufreq with the NR_CPUS array.  per-cpu makes sense
and I'll investigate.

> > +    hwp_verbose("HWP: FAST_IA32_HWP_REQUEST %ssupported\n",
> > +                eax & CPUID6_EAX_FAST_HWP_MSR ? "" : "not ");
> > +    if ( eax & CPUID6_EAX_FAST_HWP_MSR )
> > +    {
> > +        if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY, val) )
> > +            hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY)\n");
> > +
> > +        hwp_verbose("HWP: MSR_FAST_UNCORE_MSRS_CAPABILITY: %016lx\n", val);
>
> Missing "else" above here?

Are unbalanced braces acceptable or must they be balanced?  Is this acceptable:
if ()
  one;
else {
  two;
  three;
}

> > +static void hdc_set_pkg_hdc_ctl(bool val)
> > +{
> > +    uint64_t msr;
> > +
> > +    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
> > +    {
> > +        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
>
> I'm not convinced of the need of having such log messages after
> failed rdmsr/wrmsr, but I'm definitely against them getting logged
> unconditionally. In verbose mode, maybe.

We should not fail the rdmsr if our earlier cpuid check passed.  So in
that respect these messages can be removed.  The benefit here is that
you can see which MSR failed.  If you relied on extable_fixup, you
would have to cross-reference manually.  How will administrators know
that hwp isn't working properly if there are no messages?

For HWP in general, the SDM says to check CPUID for the availability
of MSRs.  Therefore rdmsr/wrmsr shouldn't fail.  During development, I
saw wrmsr failures with "Valid Maximum" and other "Valid" bits
set.  I think that was because I hadn't set up the Package Request
MSR.  That has been fixed by not using those bits.  Anyway,
rdmsr/wrmsr shouldn't fail, so how much code should be put into
checking for those failures which shouldn't happen?

My sample size is 2 models of processor, so verbose reporting of
errors makes some sense during wider deployment and testing.

> > +        return;
> > +    }
> > +
> > +    msr = val ? IA32_PKG_HDC_CTL_HDC_PKG_Enable : 0;
>
> If you don't use the prior value, why did you read it? But I
> think you really mean to set/clear just bit 0.

During development I printed the initial values.  I removed the
printing before submission but not the reading.

In the SDM, it says "Bits 63:1 are reserved and must be zero", so I
intended to only write 0 or 1.  Below, you remark on not discarding
reserved bits. So you want all of these to be read-modify-write
operations?
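As a sketch of the read-modify-write alternative being discussed, the
value computation could look like this (a hypothetical pure helper;
real code would bracket it with rdmsr_safe/wrmsr_safe):

```c
#include <stdbool.h>
#include <stdint.h>

#define IA32_PKG_HDC_CTL_HDC_PKG_ENABLE (1ULL << 0)

/*
 * Hypothetical RMW helper: flip only bit 0, preserving whatever the
 * hardware reported in bits 63:1, instead of writing a bare 0 or 1.
 */
static uint64_t hdc_ctl_update(uint64_t old, bool enable)
{
    if ( enable )
        return old | IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
    return old & ~IA32_PKG_HDC_CTL_HDC_PKG_ENABLE;
}
```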

> > +static void hdc_set_pm_ctl1(bool val)
> > +{
> > +    uint64_t msr;
> > +
> > +    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
> > +    {
> > +        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
> > +
> > +        return;
> > +    }
> > +
> > +    msr = val ? IA32_PM_CTL1_HDC_Allow_Block : 0;
>
> Same here then, and ...
>
> > +static void hwp_fast_uncore_msrs_ctl(bool val)
> > +{
> > +    uint64_t msr;
> > +
> > +    if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL, msr) )
> > +        hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL)\n");
> > +
> > +    msr = val;
>
> ... here (where you imply bit 0 instead of using a proper
> constant).
>
> Also for all three functions I'm not convinced their names are
> in good sync with their parameters being boolean.

Would you prefer something named closer to the bit names like:
hdc_set_pkg_hdc_ctl -> hdc_set_pkg_enable
hdc_set_pm_ctl1 -> hdc_set_allow_block
hwp_fast_uncore_msrs_ctl -> hwp_fast_request_enable

> > +static void hwp_read_capabilities(void *info)
> > +{
> > +    struct cpufreq_policy *policy = info;
> > +    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
> > +
> > +    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
> > +    {
> > +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
> > +                policy->cpu);
> > +
> > +        return;
> > +    }
> > +
> > +    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
> > +    {
> > +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
> > +
> > +        return;
> > +    }
> > +}
>
> This function doesn't indicate failure to its caller(s), so am I
> to understand that failure to read either of the MSRs is actually
> benign to the driver?

They really shouldn't fail since the CPUID check passed during
initialization.  If you can't read/write MSRs, something is broken and
the driver cannot function.  The machine would still run, but HWP
would be uncontrollable.  How much care should be given to
"impossible" situations?

> > +    if ( rdmsr_safe(MSR_IA32_PM_ENABLE, val) )
> > +    {
> > +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_PM_ENABLE)\n", policy->cpu);
> > +
> > +        return;
> > +    }
> > +
> > +    hwp_verbose("CPU%u: MSR_IA32_PM_ENABLE: %016lx\n", policy->cpu, val);
> > +    if ( val != IA32_PM_ENABLE_HWP_ENABLE )
> > +    {
> > +        val = IA32_PM_ENABLE_HWP_ENABLE;
>
> You should neither depend on reserved bits being zero, nor discard any
> non-zero value here, I think.

Ok.

> > +static void hwp_write_request(void *info)
> > +{
> > +    struct cpufreq_policy *policy = info;
> > +    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
> > +    union hwp_request hwp_req = data->curr_req;
> > +
> > +    BUILD_BUG_ON(sizeof(union hwp_request) != sizeof(uint64_t));
>
> ITYM
>
>     BUILD_BUG_ON(sizeof(hwp_req) != sizeof(hwp_req.raw));
>
> here?

Originally, I wanted this to live next to the union definition.
However, BUILD_BUG_ON needs to live in a function, so I placed it here
where it is used.

I'd prefer
    BUILD_BUG_ON(sizeof(hwp_req) != sizeof(uint64_t))

to make the comparison to 64-bit, the size of the MSR, explicit.
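As an aside, C11's _Static_assert is allowed at file scope, so the same
64-bit comparison could sit next to the union definition (the field
layout below is a simplified stand-in, not the patch's exact MSR bit
layout):

```c
#include <stdint.h>

/* Simplified stand-in for the patch's union hwp_request. */
union hwp_request {
    struct {
        uint64_t min_perf:8;
        uint64_t max_perf:8;
        uint64_t desired:8;
        uint64_t energy_perf:8;
        uint64_t activity_window:10;
        uint64_t reserved:22;
    };
    uint64_t raw;
};

/* The explicit comparison against the 64-bit MSR width, at file scope. */
_Static_assert(sizeof(union hwp_request) == sizeof(uint64_t),
               "hwp_request must exactly cover the 64-bit MSR");
```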

> > +    if ( wrmsr_safe(MSR_IA32_HWP_REQUEST, hwp_req.raw) )
> > +    {
> > +        hwp_err("CPU%u: error wrmsr_safe(MSR_IA32_HWP_REQUEST, %lx)\n",
> > +                policy->cpu, hwp_req.raw);
> > +        rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw);
>
> What if this one fails, too? data->curr_req.raw then pretty certainly
> ends up stale.

It would be stale, but I think this is unlikely.  rdmsr should not
fail given our earlier CPUID checks.  wrmsr could fail if you set
something wrong.  During testing I set some of the valid
"Maximum/Minimum" bits (now unused) that caused wrmsr to fault when
some other MSR (maybe IA32_HWP_REQUEST_PKG) wasn't set.

> > +    policy->governor = &hwp_cpufreq_governor;
> > +
> > +    data = xzalloc(typeof(*data));
>
> Commonly we specify the type explicitly in such cases, rather than using
> typeof(). I will admit though that I'm not entirely certain which one's
> better. But consistency across the code base is perhaps preferable for
> the time being.

I thought typeof(*data) is always preferable since it will always be
the matching type.  I can change it, but I feel like it's a step
backwards.

> > +    if ( !data )
> > +        return -ENOMEM;
>
> Is it correct to have set the governor before this error check?

I will re-order.

> > +    hwp_drv_data[cpu] = data;
> > +
> > +    on_selected_cpus(cpumask_of(cpu), hwp_init_msrs, policy, 1);
> > +
> > +    data->minimum = data->hw_lowest;
> > +    data->maximum = data->hw_highest;
> > +    data->desired = 0; /* default to HW autonomous */
> > +    if ( feature_hwp_energy_perf )
> > +        data->energy_perf = 0x80;
> > +    else
> > +        data->energy_perf = 7;
>
> Where's this 7 coming from? (You do mention the 0x80 at least in the
> description.)

When HWP energy performance preference is unavailable, it falls back
to IA32_ENERGY_PERF_BIAS with a 0-15 range.  Per the SDM Vol3 14.3.4,
"A value of 7 roughly translates into a hint to balance performance
with energy consumption."  I will add a comment.
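The fallback choice can be sketched as follows (hypothetical helper
name; 0x80 is the midpoint of HWP's 0-255 EPP range, 7 is the SDM Vol3
14.3.4 "balance performance with energy consumption" hint for the 0-15
EPB range):

```c
#include <stdbool.h>
#include <stdint.h>

/*
 * Hypothetical helper: pick the default energy/performance value.
 * HWP EPP spans 0-255 (0x80 = balanced); the IA32_ENERGY_PERF_BIAS
 * fallback spans 0-15 (7 = balanced, per SDM Vol3 14.3.4).
 */
static uint8_t default_energy_perf(bool has_hwp_energy_perf)
{
    return has_hwp_energy_perf ? 0x80 : 7;
}
```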

> > --- a/xen/include/asm-x86/msr-index.h
> > +++ b/xen/include/asm-x86/msr-index.h
> > +#define MSR_IA32_HWP_CAPABILITIES           0x00000771
> > +#define MSR_IA32_HWP_REQUEST                0x00000774
> > +
> >  #define MSR_PASID                           0x00000d93
> >  #define  PASID_PASID_MASK                   0x000fffff
> >  #define  PASID_VALID                        (_AC(1, ULL) << 31)
> >
> > +#define MSR_IA32_PKG_HDC_CTL                0x00000db0
> > +#define  IA32_PKG_HDC_CTL_HDC_PKG_Enable    (_AC(1, ULL) <<  0)
>
> I don't think "HDC" twice is helpful?

I took the MSR name "IA32_PKG_HDC_CTL" and bit name "HDC_PKG_Enable"
from the SDM.  I intentionally left the case to match the SDM.

> Also please use all upper case names (actually also for the
> CPUID constants higher up).

Again, I took them straight from the SDM.

I will make them all upper case and switch
IA32_PKG_HDC_CTL_HDC_PKG_Enable to IA32_PKG_HDC_CTL_PKG_Enable.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Thu May 27 19:27:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 19:27:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133663.249056 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmLfF-0001Fd-Bj; Thu, 27 May 2021 19:27:29 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133663.249056; Thu, 27 May 2021 19:27:29 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmLfF-0001FW-8A; Thu, 27 May 2021 19:27:29 +0000
Received: by outflank-mailman (input) for mailman id 133663;
 Thu, 27 May 2021 19:27:28 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmLfE-0001FM-1G; Thu, 27 May 2021 19:27:28 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmLfD-0006yb-PJ; Thu, 27 May 2021 19:27:27 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmLfD-00064N-Cf; Thu, 27 May 2021 19:27:27 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmLfD-0005Ah-CC; Thu, 27 May 2021 19:27:27 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=2yVenAwRoDkaeiZ1pkyP8qIv93ISUuFPFv4vLJcV5hI=; b=wv46LdyWoHHofsSfjyOgc7VwZm
	wDMygmDYBHESLnpJCYXibk2ppK0F0zpNCURjvhO3AaDUlCFhh2+NXXkiQQXOxZeaW/FUsySoDQr5v
	TlfrnAofNgD3bqs0dvY3b8++caPP1cJGRRAEUxKsxVrlk0fXTcEEAIZa4jAQCaD0iOhI=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162197-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162197: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=2ab2dad01f6dc3667c0d53d2b1ba46b511031207
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 19:27:27 +0000

flight 162197 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162197/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                2ab2dad01f6dc3667c0d53d2b1ba46b511031207
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  280 days
Failing since        152659  2020-08-21 14:07:39 Z  279 days  515 attempts
Testing same since   162197  2021-05-27 05:30:44 Z    0 days    1 attempts

------------------------------------------------------------
510 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 161741 lines long.)


From xen-devel-bounces@lists.xenproject.org Thu May 27 20:58:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 20:58:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133673.249070 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmN4s-0001Tl-Bn; Thu, 27 May 2021 20:58:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133673.249070; Thu, 27 May 2021 20:58:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmN4s-0001Te-88; Thu, 27 May 2021 20:58:02 +0000
Received: by outflank-mailman (input) for mailman id 133673;
 Thu, 27 May 2021 20:58:01 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmN4r-0001TU-9G; Thu, 27 May 2021 20:58:01 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmN4r-00004t-4D; Thu, 27 May 2021 20:58:01 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmN4q-0001xN-T6; Thu, 27 May 2021 20:58:00 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmN4q-0007yh-Sa; Thu, 27 May 2021 20:58:00 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=4il0ZNitqhtkXOi7JaxzIaYHT6iUQgFS57LFOWR7GnA=; b=YXwO3t1nrEmL/nXwlQLxxzv4gU
	X36wDclDCz9yVpJL+e8XOuZ+gvfZOBIqAb3SWijIxqxzOCqBVJ7s9vv0S1IjB7vyoD/Wb4e/y25vg
	/MPeAWB5ChzTK4u8CjH1e238+LycZxYrAhOndE0Vlkb+zuyXKUBqQsu4nL5XEzejOao8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162217-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162217: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=e1999b264f1f9d7230edf2448f757c73da567832
X-Osstest-Versions-That:
    ovmf=cfa6ffb113f2c0d922034cc77c0d6c52eea05497
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 20:58:00 +0000

flight 162217 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162217/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 e1999b264f1f9d7230edf2448f757c73da567832
baseline version:
 ovmf                 cfa6ffb113f2c0d922034cc77c0d6c52eea05497

Last test of basis   162131  2021-05-23 12:12:23 Z    4 days
Testing same since   162217  2021-05-27 10:11:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andreas Sandberg <andreas.sandberg@arm.com>
  Joey Gouly <joey.gouly@arm.com>
  Sami Mujawar <sami.mujawar@arm.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   cfa6ffb113..e1999b264f  e1999b264f1f9d7230edf2448f757c73da567832 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Thu May 27 21:14:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 21:14:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133680.249084 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmNKc-0003rU-Rv; Thu, 27 May 2021 21:14:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133680.249084; Thu, 27 May 2021 21:14:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmNKc-0003rN-Nk; Thu, 27 May 2021 21:14:18 +0000
Received: by outflank-mailman (input) for mailman id 133680;
 Thu, 27 May 2021 21:14:17 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmNKb-0003rD-Kx; Thu, 27 May 2021 21:14:17 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmNKb-0000NW-H0; Thu, 27 May 2021 21:14:17 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmNKb-0002cr-91; Thu, 27 May 2021 21:14:17 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmNKb-0006AD-8Y; Thu, 27 May 2021 21:14:17 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=blyq9gZGpuwXakleaXAfk1LNkNyJ/jnOc3To7ikMcAk=; b=EmyNxjuLFAFnXFwDD3BexvPem7
	Ij27BG4x7okvjMNr1TX6CtqFtpA4sbhkwM0qf0ow4i8U6/H8154vUA/9PjnnXeOutO2yFo3vfIohf
	eDDPSfTxtDb/C2ESOV+kcpQe+5syK5pYX+Ohno6DuF1sEmLENL12TUlgO+6c38vtGc6I=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162239-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162239: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=9fdcf851689cb2a9501d3947cb5d767d9c7797e8
X-Osstest-Versions-That:
    xen=722f59d38c710a940ab05e542a83020eb5546dea
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 21:14:17 +0000

flight 162239 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162239/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  9fdcf851689cb2a9501d3947cb5d767d9c7797e8
baseline version:
 xen                  722f59d38c710a940ab05e542a83020eb5546dea

Last test of basis   162232  2021-05-27 13:02:02 Z    0 days
Testing same since   162239  2021-05-27 19:02:50 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   722f59d38c..9fdcf85168  9fdcf851689cb2a9501d3947cb5d767d9c7797e8 -> smoke


From xen-devel-bounces@lists.xenproject.org Thu May 27 22:39:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Thu, 27 May 2021 22:39:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133690.249098 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmOeW-00036l-0X; Thu, 27 May 2021 22:38:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133690.249098; Thu, 27 May 2021 22:38:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmOeV-00036e-TE; Thu, 27 May 2021 22:38:55 +0000
Received: by outflank-mailman (input) for mailman id 133690;
 Thu, 27 May 2021 22:38:54 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmOeU-00036U-1N; Thu, 27 May 2021 22:38:54 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmOeT-0001jO-Rj; Thu, 27 May 2021 22:38:53 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmOeT-0008AL-Fy; Thu, 27 May 2021 22:38:53 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmOeT-00071s-FG; Thu, 27 May 2021 22:38:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7PwFv5YjkwyFEoi9Po8f1QSmgrVKJmlXfYYK3BeBInA=; b=ujzFj6WOL/BCoHNTJ3e/wqXqtX
	7D9agt8ycX+v+g4/WYPvo9l4XdJ63NZga+lWw+xZ5GNYMaleDg819lM+Gp53QhdxrWa+5907rvzd/
	fmUc1Doh4n4XejIt2FIVX2rKUGKFOVQzQdvoER+ov4+kCl/VIjAv1tjOVLp/uUl1Eizs=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162209-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162209: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-qemuu-freebsd11-amd64:guest-localmigrate/x10:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=d7c5303fbc8ac874ae3e597a5a0d3707dc0230b4
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Thu, 27 May 2021 22:38:53 +0000

flight 162209 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162209/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          14 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-qemuu-freebsd11-amd64 19 guest-localmigrate/x10 fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                d7c5303fbc8ac874ae3e597a5a0d3707dc0230b4
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  300 days
Failing since        152366  2020-08-01 20:49:34 Z  299 days  508 attempts
Testing same since   162209  2021-05-27 08:19:29 Z    0 days    1 attempts

------------------------------------------------------------
6103 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1658499 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 28 00:39:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 00:39:09 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133717.249130 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmQWb-0007gQ-Ge; Fri, 28 May 2021 00:38:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133717.249130; Fri, 28 May 2021 00:38:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmQWb-0007gJ-Bg; Fri, 28 May 2021 00:38:53 +0000
Received: by outflank-mailman (input) for mailman id 133717;
 Fri, 28 May 2021 00:38:51 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=dbG1=KX=linaro.org=linus.walleij@srs-us1.protection.inumbo.net>)
 id 1lmQWZ-0007fv-7Q
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 00:38:51 +0000
Received: from mail-lj1-x22d.google.com (unknown [2a00:1450:4864:20::22d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 0b0d2d0d-937c-4ecb-970c-107aa2c8a533;
 Fri, 28 May 2021 00:38:50 +0000 (UTC)
Received: by mail-lj1-x22d.google.com with SMTP id 131so3154344ljj.3
 for <xen-devel@lists.xenproject.org>; Thu, 27 May 2021 17:38:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0b0d2d0d-937c-4ecb-970c-107aa2c8a533
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc:content-transfer-encoding;
        bh=3tZvKVXXK1CrsVCEdwk4kRijURbW1gkRo3MklbEt43g=;
        b=dzDstQmFnkHnT2yUIYKLBRRBS9fEvHqlCfVDyb072nsd2XLJcbMC9YytfAlVCe02vE
         w9imrgViRD9UmJFiNPktvwlhL+FSug8rzH1rndJtVlz3QQe+LaHVPjU6VmarHmQVjLz9
         xf8j7GTU197/jPwSsOeFHBpLr83hJSyRTW1TXntHZd/bB1uAuI+WQ+bhSHrv/SWtMmVd
         kyaH7en3LnMLvCu2BrfR+25SZJ37jTDiSZTB3f4yqmQeYCEoZnJSe4DbXuwQ/ieUoyZH
         Yg2q/hN5vESri9Wyb+J0nlja9X++NyijNUy+ePQmji3JkVQ2yCofSythjNO8OPePTRya
         28LQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc:content-transfer-encoding;
        bh=3tZvKVXXK1CrsVCEdwk4kRijURbW1gkRo3MklbEt43g=;
        b=Gyc4xsBZ+UrI3QjwjwRFyTsqxsBBckomYzuTda0zZGqKCciiEuZfV/HZ8Xf54R8fbL
         clvOvACJ5zy86EuxqzSsPQkevlc+scXBSM+kGSvsipWHY+/HUo6vtWdaioeqKnT0Pa0H
         hFIRp+lvcWRIzv+DcgpwlZLiJQ/WbYMAc3XI89La7Q4rZquMo5ySWI4Jn64rsJ5Df96p
         L+7o6AkD6BfUGs15PKcAM9YpyEptl8OKISDxmPbb8b/I66FHVJRXHwpvlelt8o2tfY0Q
         6WAX4FKzm2Fqywfe9KP246oqw0Gq8EwXPJnaJht6+H7Q0aK2wSYxM7p4HrPFBpGl9DKG
         INBw==
X-Gm-Message-State: AOAM532ruzWV3RCxF3zo/5J6XkJIen9nAiX9VeKi5D1lpiDdGpMqczdB
	S3/TOA1Cz6Ox2nP6LEmLPEjHyCeyV5N4Srx4OTc4KA==
X-Google-Smtp-Source: ABdhPJz7njZd2vaMPN20MaL5HfLauAdefifx8indoEE9qbs7jFDoqG1iIPLF/BJ6sjZMAVRNssGVEUTUE+E+NW+cvHo=
X-Received: by 2002:a05:651c:4c6:: with SMTP id e6mr4571492lji.326.1622162328960;
 Thu, 27 May 2021 17:38:48 -0700 (PDT)
MIME-Version: 1.0
References: <20210521090959.1663703-1-daniel.vetter@ffwll.ch> <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
In-Reply-To: <20210521090959.1663703-11-daniel.vetter@ffwll.ch>
From: Linus Walleij <linus.walleij@linaro.org>
Date: Fri, 28 May 2021 02:38:38 +0200
Message-ID: <CACRpkdbZf_cTMppxfC4mM6XZ2YySH7dQ0NCY6v_pfwsdRzLPKA@mail.gmail.com>
Subject: Re: [PATCH 11/11] drm/tiny: drm_gem_simple_display_pipe_prepare_fb is
 the default
To: Daniel Vetter <daniel.vetter@ffwll.ch>
Cc: DRI Development <dri-devel@lists.freedesktop.org>, 
	Intel Graphics Development <intel-gfx@lists.freedesktop.org>, Daniel Vetter <daniel.vetter@intel.com>, 
	Joel Stanley <joel@jms.id.au>, Andrew Jeffery <andrew@aj.id.au>, Noralf Trønnes <noralf@tronnes.org>, 
	Emma Anholt <emma@anholt.net>, David Lechner <david@lechnology.com>, 
	Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>, 
	Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>, Maxime Ripard <mripard@kernel.org>, 
	Thomas Zimmermann <tzimmermann@suse.de>, Sam Ravnborg <sam@ravnborg.org>, 
	Alex Deucher <alexander.deucher@amd.com>, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, 
	linux-aspeed <linux-aspeed@lists.ozlabs.org>, 
	Linux ARM <linux-arm-kernel@lists.infradead.org>, xen-devel@lists.xenproject.org
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Fri, May 21, 2021 at 11:10 AM Daniel Vetter <daniel.vetter@ffwll.ch> wrote:

> Goes through all the drivers and deletes the default hook since it's
> the default now.
>
> Signed-off-by: Daniel Vetter <daniel.vetter@intel.com>
> Cc: Joel Stanley <joel@jms.id.au>
> Cc: Andrew Jeffery <andrew@aj.id.au>
> Cc: "Noralf Trønnes" <noralf@tronnes.org>
> Cc: Linus Walleij <linus.walleij@linaro.org>
> Cc: Emma Anholt <emma@anholt.net>
> Cc: David Lechner <david@lechnology.com>
> Cc: Kamlesh Gurudasani <kamlesh.gurudasani@gmail.com>
> Cc: Oleksandr Andrushchenko <oleksandr_andrushchenko@epam.com>
> Cc: Daniel Vetter <daniel.vetter@ffwll.ch>
> Cc: Maxime Ripard <mripard@kernel.org>
> Cc: Thomas Zimmermann <tzimmermann@suse.de>
> Cc: Sam Ravnborg <sam@ravnborg.org>
> Cc: Alex Deucher <alexander.deucher@amd.com>
> Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
> Cc: linux-aspeed@lists.ozlabs.org
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: xen-devel@lists.xenproject.org

Acked-by: Linus Walleij <linus.walleij@linaro.org>

Yours,
Linus Walleij


From xen-devel-bounces@lists.xenproject.org Fri May 28 02:14:19 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 02:14:19 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133741.249159 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmS0c-0007Bc-Ur; Fri, 28 May 2021 02:13:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133741.249159; Fri, 28 May 2021 02:13:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmS0c-0007BV-RI; Fri, 28 May 2021 02:13:58 +0000
Received: by outflank-mailman (input) for mailman id 133741;
 Fri, 28 May 2021 02:13:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmS0b-0007BL-0U; Fri, 28 May 2021 02:13:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmS0a-0003Bq-Pd; Fri, 28 May 2021 02:13:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmS0a-0001QI-Dz; Fri, 28 May 2021 02:13:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmS0a-0002Fs-Bq; Fri, 28 May 2021 02:13:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Eb2rQaBV76HBF1DQPR5BRMjYpFEyNye+RvOPM3o5MZo=; b=n/bWtzvVmfvNZl1SGWJcFGKSDK
	eg5UZHXrCU0OPOEYY6ilQnaHy+Cm0bkNGaW7pqiPB8q3pif7BbwP8MkOuoWrBuGzbSoCawKRna5bX
	B7+DDq55VsKqUqmyganI0IwsavJYT42l0lRPA+8Cr73WnPBp4M9lgnFNOW2SDzVC/414=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162230-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162230: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=7c110dd335a17be52549dc4b9dfbfba8165ade40
X-Osstest-Versions-That:
    xen=7c110dd335a17be52549dc4b9dfbfba8165ade40
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 May 2021 02:13:56 +0000

flight 162230 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162230/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162172
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162172
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162172
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162172
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162172
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162172
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162172
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162172
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162172
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162172
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162172
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  7c110dd335a17be52549dc4b9dfbfba8165ade40
baseline version:
 xen                  7c110dd335a17be52549dc4b9dfbfba8165ade40

Last test of basis   162230  2021-05-27 12:57:28 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Fri May 28 06:15:42 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 06:15:42 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133751.249173 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmVmJ-0003FF-UK; Fri, 28 May 2021 06:15:27 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133751.249173; Fri, 28 May 2021 06:15:27 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmVmJ-0003F8-QY; Fri, 28 May 2021 06:15:27 +0000
Received: by outflank-mailman (input) for mailman id 133751;
 Fri, 28 May 2021 06:15:26 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmVmI-0003Ey-H9; Fri, 28 May 2021 06:15:26 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmVmI-0007i5-As; Fri, 28 May 2021 06:15:26 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmVmH-0005jU-VG; Fri, 28 May 2021 06:15:26 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmVmH-0002q0-Uk; Fri, 28 May 2021 06:15:25 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=AecW++AWMorhVOCFYnSlqjmlRldCqkpxxu7cMOmkff4=; b=XAXxA599PtoOWvICTMD9h94EPz
	Iy5/iXyVrptMoT5TF4rNTUvcyY3PCpSJvcKZuieWY3xFU/bPxw8MJVPdZibyv3lK1caE0fV63gn8g
	yr6y3wkhWS/Y/V/O/X5kZd1cY/wzIP8TYFN6AZgrEfg0NXOo1OOaH35VVzZtiTB1sykM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162240-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162240: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c8616fc7670b884de5f74d2767aade224c1c5c3a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 May 2021 06:15:25 +0000

flight 162240 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162240/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c8616fc7670b884de5f74d2767aade224c1c5c3a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  280 days
Failing since        152659  2020-08-21 14:07:39 Z  279 days  516 attempts
Testing same since   162240  2021-05-27 19:38:27 Z    0 days    1 attempts

------------------------------------------------------------
514 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 162988 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 28 06:30:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 06:30:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133760.249187 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmW0X-0005YZ-Ey; Fri, 28 May 2021 06:30:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133760.249187; Fri, 28 May 2021 06:30:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmW0X-0005YS-Bh; Fri, 28 May 2021 06:30:09 +0000
Received: by outflank-mailman (input) for mailman id 133760;
 Fri, 28 May 2021 06:30:08 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmW0W-0005YI-Ea; Fri, 28 May 2021 06:30:08 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmW0W-0007wh-8m; Fri, 28 May 2021 06:30:08 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmW0V-0006Wv-VK; Fri, 28 May 2021 06:30:08 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmW0V-0004Ut-Ul; Fri, 28 May 2021 06:30:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3fWqfgqHcK9o62mPKMOYGWsb4KYGMewvhbX8GoLVjO0=; b=ksv9O3hXQ6+W/CbP7MQ9YK9NvD
	E+T5p6W3RULlvMSGRn3OMWIxfM4ORHfnfNNe+yjsag1srv0XUw0XF2M/Emx3vg8g0DtON2yVl6w0U
	FCMquvSrZWIHn9xcN648L+mDStxf/mtCPH7DDEUK+gNUsEr00RQgTXs0gxNobRFdpd5Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162243-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162243: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=70f53b1c04cfed8529c87c7be8ca4c76d1123a30
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 May 2021 06:30:07 +0000

flight 162243 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162243/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              70f53b1c04cfed8529c87c7be8ca4c76d1123a30
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  322 days
Failing since        151818  2020-07-11 04:18:52 Z  321 days  314 attempts
Testing same since   162243  2021-05-28 04:20:06 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59708 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 28 06:35:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 06:35:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133768.249200 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmW5x-0006Fk-4B; Fri, 28 May 2021 06:35:45 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133768.249200; Fri, 28 May 2021 06:35:45 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmW5x-0006Fd-12; Fri, 28 May 2021 06:35:45 +0000
Received: by outflank-mailman (input) for mailman id 133768;
 Fri, 28 May 2021 06:35:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmW5v-0006FX-VD
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 06:35:44 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 238af3f9-f427-44db-843d-e5bf59d30e44;
 Fri, 28 May 2021 06:35:42 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 63456218B0;
 Fri, 28 May 2021 06:35:41 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 1D4BE11A98;
 Fri, 28 May 2021 06:35:41 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 238af3f9-f427-44db-843d-e5bf59d30e44
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622183741; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=VvIfXPA9lEsjZ+XzJdoKb/OSD7M9TWEcPYCqnEflyIM=;
	b=BLQ+QPjJqCZ0T3mSwnohIuXUIVFDsEjas8f3MUfY33P15KDOktrruHaYHxF1ruZjG2kXFU
	ibPktXeFLBJBHOja7yriz2uH/9KhhnVNh3uRpGmfH2RI+XlKMFcCQn8EdvSvbXYBKf7k6a
	DKAWQJOlWA6c/xRNFIG+UIfMdNkOLqI=
Subject: Re: [PATCH 04/13] cpufreq: Add Hardware P-State (HWP) driver
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>, xen-devel <xen-devel@lists.xenproject.org>
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-5-jandryuk@gmail.com>
 <1747789a-ab6c-cdae-ed35-a6b81ac580a9@suse.com>
 <CAKf6xps4NuBxMpgCo_duWU1ZXB8x8B8uszb3uNyb6kABxUhNHA@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <2c3400e8-8236-8558-08e4-37c8b1494de7@suse.com>
Date: Fri, 28 May 2021 08:35:36 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <CAKf6xps4NuBxMpgCo_duWU1ZXB8x8B8uszb3uNyb6kABxUhNHA@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 27.05.2021 20:50, Jason Andryuk wrote:
> On Wed, May 26, 2021 at 11:00 AM Jan Beulich <jbeulich@suse.com> wrote:
>>
>> On 03.05.2021 21:28, Jason Andryuk wrote:
>>> If HWP Energy_Performance_Preference isn't supported, the code falls
>>> back to IA32_ENERGY_PERF_BIAS.  Right now, we don't check
>>> CPUID.06H:ECX.SETBH[bit 3] before using that MSR.
>>
>> May I ask what problem there is doing so?
>>
>>>  The SDM reads like
>>> it'll be available, and I assume it was available by the time Skylake
>>> introduced HWP.
>>
>> The SDM documents the MSR's presence back to at least Nehalem, but ties
>> it to the CPUID bit even there.
> 
> Your point about inconsistent virtualized environments is something I
> hadn't considered.  I will add a check, but I will disable hwp in
> that case.  While it could just run without an energy/performance
> preference, HWP doesn't seem useful in that scenario.

Makes sense. Of course I wouldn't expect hypervisors to populate much
of CPUID leaf 6 anyway, as is the case for Xen itself.

>>> +    hwp_verbose("HWP: FAST_IA32_HWP_REQUEST %ssupported\n",
>>> +                eax & CPUID6_EAX_FAST_HWP_MSR ? "" : "not ");
>>> +    if ( eax & CPUID6_EAX_FAST_HWP_MSR )
>>> +    {
>>> +        if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY, val) )
>>> +            hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CAPABILITY)\n");
>>> +
>>> +        hwp_verbose("HWP: MSR_FAST_UNCORE_MSRS_CAPABILITY: %016lx\n", val);
>>
>> Missing "else" above here?
> 
> Are unbalanced braces acceptable or must they be balanced?  Is this acceptable:
> if ()
>   one;
> else {
>   two;
>   three;
> }

Yes, it is. But I don't see how the question relates to my comment.
All that needs to go in the else's body is the hwp_verbose().

>>> +static void hdc_set_pkg_hdc_ctl(bool val)
>>> +{
>>> +    uint64_t msr;
>>> +
>>> +    if ( rdmsr_safe(MSR_IA32_PKG_HDC_CTL, msr) )
>>> +    {
>>> +        hwp_err("error rdmsr_safe(MSR_IA32_PKG_HDC_CTL)\n");
>>
>> I'm not convinced of the need of having such log messages after
>> failed rdmsr/wrmsr, but I'm definitely against them getting logged
>> unconditionally. In verbose mode, maybe.
> 
> We should not fail the rdmsr if our earlier cpuid check passed.  So in
> that respect these messages can be removed.  The benefit here is that
> you can see which MSR failed.  If you relied on extable_fixup, you
> would have to cross reference manually.  How will administrators know
> hwp isn't working properly if there are no messages?

This same question would go for various other MSR accesses which
might fail, but which aren't accompanied by an explicit log message.
I wouldn't mind a single summary message reporting if e.g. HWP
setup failed. More detailed analysis of such would be more of a
developer's than an administrator's job then anyway.

> For HWP in general, the SDM says to check CPUID for the availability
> of MSRs.  Therefore rdmsr/wrmsr shouldn't fail.  During development, I
> saw wrmsr failures with "Valid Maximum" and other "Valid" bits
> set.  I think that was because I hadn't set up the Package Request
> MSR.  That has been fixed by not using those bits.  Anyway,
> rdmsr/wrmsr shouldn't fail, so how much code should be put into
> checking for those failures which shouldn't happen?

If there's any risk of accesses causing a fault, using *msr_safe()
is of course the right choice. Beyond that you will need to consider
what the consequences are. Sadly this needs doing on every single
case individually. See "Handling unexpected conditions" section of
./CODING_STYLE for guidelines (even if the specific constructs
aren't in use here).

>>> +        return;
>>> +    }
>>> +
>>> +    msr = val ? IA32_PKG_HDC_CTL_HDC_PKG_Enable : 0;
>>
>> If you don't use the prior value, why did you read it? But I
>> think you really mean to set/clear just bit 0.
> 
> During development I printed the initial values.  I removed the
> printing before submission but not the reading.
> 
> In the SDM, it says "Bits 63:1 are reserved and must be zero", so I
> intended to only write 0 or 1.  Below, you remark on not discarding
> reserved bits. So you want all of these to be read-modify-write
> operations?

Yes, this is the only way to be forward compatible.

>>> +static void hdc_set_pm_ctl1(bool val)
>>> +{
>>> +    uint64_t msr;
>>> +
>>> +    if ( rdmsr_safe(MSR_IA32_PM_CTL1, msr) )
>>> +    {
>>> +        hwp_err("error rdmsr_safe(MSR_IA32_PM_CTL1)\n");
>>> +
>>> +        return;
>>> +    }
>>> +
>>> +    msr = val ? IA32_PM_CTL1_HDC_Allow_Block : 0;
>>
>> Same here then, and ...
>>
>>> +static void hwp_fast_uncore_msrs_ctl(bool val)
>>> +{
>>> +    uint64_t msr;
>>> +
>>> +    if ( rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL, msr) )
>>> +        hwp_err("error rdmsr_safe(MSR_FAST_UNCORE_MSRS_CTL)\n");
>>> +
>>> +    msr = val;
>>
>> ... here (where you imply bit 0 instead of using a proper
>> constant).
>>
>> Also for all three functions I'm not convinced their names are
>> in good sync with their parameters being boolean.
> 
> Would you prefer something named closer to the bit names like:
> hdc_set_pkg_hdc_ctl -> hdc_set_pkg_enable
> hdc_set_pm_ctl1 -> hdc_set_allow_block
> hwp_fast_uncore_msrs_ctl -> hwp_fast_request_enable

My primary request is for function name, purpose, and parameter(s)
to be in line. So e.g. if you left the parameters as boolean, then
I think your suggested name changes make sense. Alternatively you
could make the functions e.g. be full register updating ones, with
suitable parameters, which could be a mask-to-set and mask-to-clear.

>>> +static void hwp_read_capabilities(void *info)
>>> +{
>>> +    struct cpufreq_policy *policy = info;
>>> +    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
>>> +
>>> +    if ( rdmsr_safe(MSR_IA32_HWP_CAPABILITIES, data->hwp_caps) )
>>> +    {
>>> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_CAPABILITIES)\n",
>>> +                policy->cpu);
>>> +
>>> +        return;
>>> +    }
>>> +
>>> +    if ( rdmsr_safe(MSR_IA32_HWP_REQUEST, data->curr_req.raw) )
>>> +    {
>>> +        hwp_err("CPU%u: error rdmsr_safe(MSR_IA32_HWP_REQUEST)\n", policy->cpu);
>>> +
>>> +        return;
>>> +    }
>>> +}
>>
>> This function doesn't indicate failure to its caller(s), so am I
>> to understand that failure to read either of the MSRs is actually
>> benign to the driver?
> 
> They really shouldn't fail since the CPUID check passed during
> initialization.  If you can't read/write MSRs, something is broken and
> the driver cannot function.  The machine would still run, but HWP
> would be uncontrollable.  How much care should be given to
> "impossible" situations?

See above. The main point is to avoid crashing. In the specific
case here I think you could simply drop both the log messages and
the early return, assuming the caller can deal fine with the zero
value(s) that rdmsr_safe() will substitute for the MSR value(s).
Bailing early, otoh, calls for handing back an error indicator
imo.

>>> +static void hwp_write_request(void *info)
>>> +{
>>> +    struct cpufreq_policy *policy = info;
>>> +    struct hwp_drv_data *data = hwp_drv_data[policy->cpu];
>>> +    union hwp_request hwp_req = data->curr_req;
>>> +
>>> +    BUILD_BUG_ON(sizeof(union hwp_request) != sizeof(uint64_t));
>>
>> ITYM
>>
>>     BUILD_BUG_ON(sizeof(hwp_req) != sizeof(hwp_req.raw));
>>
>> here?
> 
> Originally, I wanted this to live next to the union definition.
> However, BUILD_BUG_ON needs to live in a function, so I placed it here
> where it is used.
> 
> I'd prefer
>     BUILD_BUG_ON(sizeof(hwp_req) != sizeof(uint64_t))
> 
> to make the comparison to 64-bit, the size of the MSR, explicit.

Well, then "raw" could still be wrong in principle, but the whole
point of having "raw" is to have it match the MSR size. So while I
could live with your alternative, I continue to think my suggestion
is the more appropriate form of the check.

>>> +    policy->governor = &hwp_cpufreq_governor;
>>> +
>>> +    data = xzalloc(typeof(*data));
>>
>> Commonly we specify the type explicitly in such cases, rather than using
>> typeof(). I will admit though that I'm not entirely certain which one's
>> better. But consistency across the code base is perhaps preferable for
>> the time being.
> 
> I thought typeof(*data) is always preferable since it will always be
> the matching type.  I can change it, but I feel like it's a step
> backwards.

There's exactly one similar use in the entire code base. The original
idea with xmalloc() was that one would explicitly specify the intended
type, such that without looking elsewhere one can see what exactly is
to be allocated. One could further rely on the compiler warning if
there was a disagreement between declaration and assignment.

If instead we were to use typeof() everywhere, there'd be a fair
amount of redundancy between the LHS of the assignment and the operand
of typeof(), which would call for eliminating it (by abstracting it away).

>>> +    if ( feature_hwp_energy_perf )
>>> +        data->energy_perf = 0x80;
>>> +    else
>>> +        data->energy_perf = 7;
>>
>> Where's this 7 coming from? (You do mention the 0x80 at least in the
>> description.)
> 
> When HWP energy performance preference is unavailable, it falls back
> to IA32_ENERGY_PERF_BIAS with a 0-15 range.  Per the SDM Vol3 14.3.4,
> "A value of 7 roughly translates into a hint to balance performance
> with energy consumption."  I will add a comment.

Actually, as per a comment on a later patch, I'm instead expecting
you to add a #define, the name of which will serve as comment.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 06:56:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 06:56:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133778.249212 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWPh-00005x-Ps; Fri, 28 May 2021 06:56:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133778.249212; Fri, 28 May 2021 06:56:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWPh-00005q-Mo; Fri, 28 May 2021 06:56:09 +0000
Received: by outflank-mailman (input) for mailman id 133778;
 Fri, 28 May 2021 06:56:08 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmWPg-00005k-Mu
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 06:56:08 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id faa2a749-8b44-45a6-b7da-c11bc5685dda;
 Fri, 28 May 2021 06:56:06 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 71043218B3;
 Fri, 28 May 2021 06:56:05 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 4527811A98;
 Fri, 28 May 2021 06:56:05 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: faa2a749-8b44-45a6-b7da-c11bc5685dda
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622184965; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=tLGueST+/+NzoysaKWzkvZCiMsHiJjBkl7QtcnPhGnw=;
	b=YE9vFZ4AybrgKwKGyXo/xRBlNSHoSDy4VdR8wBphMkNOzir43E8bt8mV4bbmC5gbxRgBaB
	Ah8yIHRme73maC+yjlzfi6+/zakH9EekQ9wQFcp9s9WT8Gncg/HwBKfPe4VzvpZ5RHgA6u
	o3L7AsPmleXapy/5nRbYXS23f2R0JMc=
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Subject: [PATCH v2 0/2] x86/AMD: MSR handling adjustments
Message-ID: <ebc58213-f68a-e060-83f5-c9c89a87f074@suse.com>
Date: Fri, 28 May 2021 08:56:01 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

1: expose SYSCFG, TOM, TOM2, and IORRs to Dom0
2: drop MSR_K7_HWCR

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 06:57:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 06:57:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133782.249223 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWQY-0000fT-4O; Fri, 28 May 2021 06:57:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133782.249223; Fri, 28 May 2021 06:57:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWQY-0000fM-0l; Fri, 28 May 2021 06:57:02 +0000
Received: by outflank-mailman (input) for mailman id 133782;
 Fri, 28 May 2021 06:57:00 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmWQW-0000fE-Pa
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 06:57:00 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 7041c680-dd6b-41aa-ac9c-1065f0187103;
 Fri, 28 May 2021 06:56:59 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 762E91FD2F;
 Fri, 28 May 2021 06:56:58 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 57F0611A98;
 Fri, 28 May 2021 06:56:58 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7041c680-dd6b-41aa-ac9c-1065f0187103
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622185018; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FVRc2ayGAxL2f+DfsGuwiPUo2MOa0/B/xTB2k4DtwXQ=;
	b=I1Pb6sLBAGJ4Lf8nCplvGEnD3NwP8YBcMrz0z57f17xwr1nkj3WcWigkRDbhl5O47q2I1l
	P12QbqprWN2xCmAiF9XP/cDeKybhyvpNYIIz1YAQ/iiKoFN3eG7Y4/mySImKxr/lx+so7f
	5x54SuSPg+etT4QD+JyZfuECgVnqMKU=
Subject: [PATCH v2 1/2] x86/AMD: expose SYSCFG, TOM, TOM2, and IORRs to Dom0
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <ebc58213-f68a-e060-83f5-c9c89a87f074@suse.com>
Message-ID: <c6952f53-aecb-542d-94a5-a1547dd4d6c4@suse.com>
Date: Fri, 28 May 2021 08:56:54 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <ebc58213-f68a-e060-83f5-c9c89a87f074@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Sufficiently old Linux (3.12-ish) accesses these MSRs (with the
exception of IORRs) in an unguarded manner. Furthermore these same MSRs,
at least on Fam11 and older CPUs, are also consulted by modern Linux,
and their (bogus) built-in zapping of #GP faults from MSR accesses leads
to it effectively reading zero instead of the intended values, which are
relevant for PCI BAR placement (which ought to all live in MMIO-type
space, not in DRAM-type one).

For SYSCFG, only certain bits get exposed. Since MtrrVarDramEn also
covers the IORRs, expose them as well. Introduce (consistently named)
constants for the bits we're interested in and use them in pre-existing
code as well. While there also drop the unused and somewhat questionable
K8_MTRR_RDMEM_WRMEM_MASK. To complete the set of memory type and DRAM vs
MMIO controlling MSRs, also expose TSEG_{BASE,MASK} (the former also
gets read by Linux, dealing with which was already the subject of
6eef0a99262c ["x86/PV: conditionally avoid raising #GP for early guest
MSR reads"]).

As a welcome side effect, verbosity on debug builds gets (perhaps
significantly) reduced.

Note that at least as far as those MSR accesses by Linux are concerned,
there's no similar issue for DomU-s, as the accesses sit behind PCI
device matching logic. The checked for devices would never be exposed to
DomU-s in the first place. Nevertheless I think that at least for HVM we
should return sensible values, not 0 (as svm_msr_read_intercept() does
right now). The intended values may, however, need to be determined by
hvmloader, and then get made known to Xen.

Fixes: 322ec7c89f66 ("x86/pv: disallow access to unknown MSRs")
Reported-by: Olaf Hering <olaf@aepfle.de>
Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: Also permit IORRs and their TSEG counterparts to be read. Drop
    K8_MTRR_RDMEM_WRMEM_MASK.
---
TBD: For PVH, we might grant Dom0 direct read access to the MSR (maybe
     except SYSCFG).

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -468,14 +468,14 @@ static void check_syscfg_dram_mod_en(voi
 		return;
 
 	rdmsrl(MSR_K8_SYSCFG, syscfg);
-	if (!(syscfg & K8_MTRRFIXRANGE_DRAM_MODIFY))
+	if (!(syscfg & SYSCFG_MTRR_FIX_DRAM_MOD_EN))
 		return;
 
 	if (!test_and_set_bool(printed))
 		printk(KERN_ERR "MTRR: SYSCFG[MtrrFixDramModEn] not "
 			"cleared by BIOS, clearing this bit\n");
 
-	syscfg &= ~K8_MTRRFIXRANGE_DRAM_MODIFY;
+	syscfg &= ~SYSCFG_MTRR_FIX_DRAM_MOD_EN;
 	wrmsrl(MSR_K8_SYSCFG, syscfg);
 }
 
--- a/xen/arch/x86/cpu/mtrr/generic.c
+++ b/xen/arch/x86/cpu/mtrr/generic.c
@@ -224,7 +224,7 @@ static void __init print_mtrr_state(cons
 		uint64_t syscfg, tom2;
 
 		rdmsrl(MSR_K8_SYSCFG, syscfg);
-		if (syscfg & (1 << 21)) {
+		if (syscfg & SYSCFG_MTRR_TOM2_EN) {
 			rdmsrl(MSR_K8_TOP_MEM2, tom2);
 			printk("%sTOM2: %012"PRIx64"%s\n", level, tom2,
 			       syscfg & (1 << 22) ? " (WB)" : "");
--- a/xen/arch/x86/msr.c
+++ b/xen/arch/x86/msr.c
@@ -339,6 +339,25 @@ int guest_rdmsr(struct vcpu *v, uint32_t
         *val = msrs->tsc_aux;
         break;
 
+    case MSR_K8_SYSCFG:
+    case MSR_K8_TOP_MEM1:
+    case MSR_K8_TOP_MEM2:
+    case MSR_K8_IORR_BASE0:
+    case MSR_K8_IORR_MASK0:
+    case MSR_K8_IORR_BASE1:
+    case MSR_K8_IORR_MASK1:
+    case MSR_K8_TSEG_BASE:
+    case MSR_K8_TSEG_MASK:
+        if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
+            goto gp_fault;
+        if ( !is_hardware_domain(d) )
+            return X86EMUL_UNHANDLEABLE;
+        rdmsrl(msr, *val);
+        if ( msr == MSR_K8_SYSCFG )
+            *val &= (SYSCFG_TOM2_FORCE_WB | SYSCFG_MTRR_TOM2_EN |
+                     SYSCFG_MTRR_VAR_DRAM_EN | SYSCFG_MTRR_FIX_DRAM_EN);
+        break;
+
     case MSR_K8_HWCR:
         if ( !(cp->x86_vendor & (X86_VENDOR_AMD | X86_VENDOR_HYGON)) )
             goto gp_fault;
--- a/xen/arch/x86/x86_64/mmconf-fam10h.c
+++ b/xen/arch/x86/x86_64/mmconf-fam10h.c
@@ -69,7 +69,7 @@ static void __init get_fam10h_pci_mmconf
 	rdmsrl(address, val);
 
 	/* TOP_MEM2 is not enabled? */
-	if (!(val & (1<<21))) {
+	if (!(val & SYSCFG_MTRR_TOM2_EN)) {
 		tom2 = 1ULL << 32;
 	} else {
 		/* TOP_MEM2 */
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -116,6 +116,21 @@
 #define  PASID_PASID_MASK                   0x000fffff
 #define  PASID_VALID                        (_AC(1, ULL) << 31)
 
+#define MSR_K8_SYSCFG                       0xc0010010
+#define  SYSCFG_MTRR_FIX_DRAM_EN            (_AC(1, ULL) << 18)
+#define  SYSCFG_MTRR_FIX_DRAM_MOD_EN        (_AC(1, ULL) << 19)
+#define  SYSCFG_MTRR_VAR_DRAM_EN            (_AC(1, ULL) << 20)
+#define  SYSCFG_MTRR_TOM2_EN                (_AC(1, ULL) << 21)
+#define  SYSCFG_TOM2_FORCE_WB               (_AC(1, ULL) << 22)
+
+#define MSR_K8_IORR_BASE0                   0xc0010016
+#define MSR_K8_IORR_MASK0                   0xc0010017
+#define MSR_K8_IORR_BASE1                   0xc0010018
+#define MSR_K8_IORR_MASK1                   0xc0010019
+
+#define MSR_K8_TSEG_BASE                    0xc0010112 /* AMD doc: SMMAddr */
+#define MSR_K8_TSEG_MASK                    0xc0010113 /* AMD doc: SMMMask */
+
 #define MSR_K8_VM_CR                        0xc0010114
 #define  VM_CR_INIT_REDIRECTION             (_AC(1, ULL) <<  1)
 #define  VM_CR_SVM_DISABLE                  (_AC(1, ULL) <<  4)
@@ -279,11 +294,6 @@
 #define MSR_K8_TOP_MEM1			0xc001001a
 #define MSR_K7_CLK_CTL			0xc001001b
 #define MSR_K8_TOP_MEM2			0xc001001d
-#define MSR_K8_SYSCFG			0xc0010010
-
-#define K8_MTRRFIXRANGE_DRAM_ENABLE	0x00040000 /* MtrrFixDramEn bit    */
-#define K8_MTRRFIXRANGE_DRAM_MODIFY	0x00080000 /* MtrrFixDramModEn bit */
-#define K8_MTRR_RDMEM_WRMEM_MASK	0x18181818 /* Mask: RdMem|WrMem    */
 
 #define MSR_K7_HWCR			0xc0010015
 #define MSR_K8_HWCR			0xc0010015



From xen-devel-bounces@lists.xenproject.org Fri May 28 06:57:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 06:57:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133786.249233 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWR2-0001Ix-Ia; Fri, 28 May 2021 06:57:32 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133786.249233; Fri, 28 May 2021 06:57:32 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWR2-0001Iq-Fc; Fri, 28 May 2021 06:57:32 +0000
Received: by outflank-mailman (input) for mailman id 133786;
 Fri, 28 May 2021 06:57:31 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmWR1-0001Ig-OV
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 06:57:31 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 64b18428-a21e-467c-a817-0dd37c5f0d25;
 Fri, 28 May 2021 06:57:31 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6B93B1FD2F;
 Fri, 28 May 2021 06:57:30 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 4F6C511A98;
 Fri, 28 May 2021 06:57:30 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 64b18428-a21e-467c-a817-0dd37c5f0d25
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622185050; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=ak8IN3tVVod3ou9ji6B4NVo7m0cuBbHRqWFoAb1mI0s=;
	b=XkvoW5xFonH7DFz/0V2StfxJJAB8PS2xOP6jcXMSHYH21sEeClWrm1iS2VYrQgtl0Gbavm
	hVddnQ6uvWmwSfiEyEYJkKG6yXGo2WkozkU/JmoX89lrW1CO4r8bJNlKGrPY0L0PQcDpT+
	lRKrtCTOKyD3s2RnGNRZnxK17aJvzdc=
Subject: [PATCH v2 2/2] x86/AMD: drop MSR_K7_HWCR
From: Jan Beulich <jbeulich@suse.com>
To: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
References: <ebc58213-f68a-e060-83f5-c9c89a87f074@suse.com>
Message-ID: <3d1680ea-1089-3d25-db0d-06b5eab3f7a4@suse.com>
Date: Fri, 28 May 2021 08:57:30 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <ebc58213-f68a-e060-83f5-c9c89a87f074@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

We don't support any K7 (32-bit only) hardware anymore, and the MSR is
equally accessible under the MSR_K8_HWCR name. Using the K7 name was
particularly odd on Hygon, as well as in a Fam0F-specific piece of code.

Signed-off-by: Jan Beulich <jbeulich@suse.com>
---
v2: New.
---
Of course there are more MSR_K7_* left - we'll have to convert them over
time.

--- a/xen/arch/x86/cpu/amd.c
+++ b/xen/arch/x86/cpu/amd.c
@@ -694,9 +694,9 @@ static void init_amd(struct cpuinfo_x86
 	 * Errata 122 for all steppings (F+ have it disabled by default)
 	 */
 	if (c->x86 == 15) {
-		rdmsrl(MSR_K7_HWCR, value);
+		rdmsrl(MSR_K8_HWCR, value);
 		value |= 1 << 6;
-		wrmsrl(MSR_K7_HWCR, value);
+		wrmsrl(MSR_K8_HWCR, value);
 	}
 
 	/*
@@ -928,9 +928,9 @@ static void init_amd(struct cpuinfo_x86
 	}
 
 	if (cpu_has(c, X86_FEATURE_EFRO)) {
-		rdmsr(MSR_K7_HWCR, l, h);
+		rdmsr(MSR_K8_HWCR, l, h);
 		l |= (1 << 27); /* Enable read-only APERF/MPERF bit */
-		wrmsr(MSR_K7_HWCR, l, h);
+		wrmsr(MSR_K8_HWCR, l, h);
 	}
 
 	/* Prevent TSC drift in non single-processor, single-core platforms. */
--- a/xen/arch/x86/cpu/hygon.c
+++ b/xen/arch/x86/cpu/hygon.c
@@ -70,9 +70,9 @@ static void init_hygon(struct cpuinfo_x8
 		__set_bit(X86_FEATURE_ARAT, c->x86_capability);
 
 	if (cpu_has(c, X86_FEATURE_EFRO)) {
-		rdmsrl(MSR_K7_HWCR, value);
+		rdmsrl(MSR_K8_HWCR, value);
 		value |= (1 << 27); /* Enable read-only APERF/MPERF bit */
-		wrmsrl(MSR_K7_HWCR, value);
+		wrmsrl(MSR_K8_HWCR, value);
 	}
 
 	amd_log_freq(c);
--- a/xen/include/asm-x86/msr-index.h
+++ b/xen/include/asm-x86/msr-index.h
@@ -295,7 +295,6 @@
 #define MSR_K7_CLK_CTL			0xc001001b
 #define MSR_K8_TOP_MEM2			0xc001001d
 
-#define MSR_K7_HWCR			0xc0010015
 #define MSR_K8_HWCR			0xc0010015
 #define K8_HWCR_MON_MWAIT_USER_EN	(1ULL << 10)
 #define K8_HWCR_MCi_STATUS_WREN		(1ULL << 18)



From xen-devel-bounces@lists.xenproject.org Fri May 28 07:00:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 07:00:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133797.249245 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWTa-0002lR-1N; Fri, 28 May 2021 07:00:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133797.249245; Fri, 28 May 2021 07:00:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWTZ-0002lK-US; Fri, 28 May 2021 07:00:09 +0000
Received: by outflank-mailman (input) for mailman id 133797;
 Fri, 28 May 2021 07:00:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmWTY-0002lC-7e
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 07:00:08 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c908a0cd-0051-431d-afaa-8a717acad818;
 Fri, 28 May 2021 07:00:07 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6A19F1FD2E;
 Fri, 28 May 2021 07:00:06 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 350D211A98;
 Fri, 28 May 2021 07:00:06 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c908a0cd-0051-431d-afaa-8a717acad818
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622185206; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=yZzaowl1OEBJ94ow4jIwZaUa26psa43EJrZpg+OiR3Q=;
	b=VfQgA14M09eM7yFKaAa/nhGLsd7GhHVNtLUgtT59g1opTwjBYyZFhSPi3F1JqhRZylHy8S
	yiWbvpymjrjkupBVOoZgPW77G0H4LstXfp/zyHjXG7l28eWZi/AsUIx3FAqhGLp6tIKLwJ
	+n7pOaOk9sA0boecrsy78kPjj3rhXDo=
Subject: Ping: [PATCH] x86/tboot: include all valid frame table entries in S3
 integrity check
From: Jan Beulich <jbeulich@suse.com>
To: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <e878fd86-2d82-ce3c-1dc4-d3a07025f1d4@suse.com>
Message-ID: <a2899b2c-dafb-08c2-e5cd-ba92cd0b6032@suse.com>
Date: Fri, 28 May 2021 09:00:02 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <e878fd86-2d82-ce3c-1dc4-d3a07025f1d4@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.05.2021 17:48, Jan Beulich wrote:
> The difference of two pdx_to_page() return values is a number of pages,
> not the number of bytes covered by the corresponding frame table entries.
> 
> Fixes: 3cb68d2b59ab ("tboot: fix S3 issue for Intel Trusted Execution Technology.")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/tboot.c
> +++ b/xen/arch/x86/tboot.c
> @@ -323,12 +323,12 @@ static void tboot_gen_frametable_integri
>          if ( nidx >= max_idx )
>              break;
>          vmac_update((uint8_t *)pdx_to_page(sidx * PDX_GROUP_COUNT),
> -                       pdx_to_page(eidx * PDX_GROUP_COUNT)
> -                       - pdx_to_page(sidx * PDX_GROUP_COUNT), &ctx);
> +                    (eidx - sidx) * PDX_GROUP_COUNT * sizeof(*frame_table),
> +                    &ctx);
>      }
>      vmac_update((uint8_t *)pdx_to_page(sidx * PDX_GROUP_COUNT),
> -                   pdx_to_page(max_pdx - 1) + 1
> -                   - pdx_to_page(sidx * PDX_GROUP_COUNT), &ctx);
> +                (max_pdx - sidx * PDX_GROUP_COUNT) * sizeof(*frame_table),
> +                &ctx);
>  
>      *mac = vmac(NULL, 0, nonce, NULL, &ctx);
>  
> 



From xen-devel-bounces@lists.xenproject.org Fri May 28 07:00:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 07:00:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133799.249256 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWTp-00038v-AU; Fri, 28 May 2021 07:00:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133799.249256; Fri, 28 May 2021 07:00:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWTp-00038m-6r; Fri, 28 May 2021 07:00:25 +0000
Received: by outflank-mailman (input) for mailman id 133799;
 Fri, 28 May 2021 07:00:23 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmWTn-00035u-Sy
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 07:00:23 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 756355e4-969b-4804-b985-65f9b19098f5;
 Fri, 28 May 2021 07:00:23 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 6C49B1FD2E;
 Fri, 28 May 2021 07:00:22 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 4955B11A98;
 Fri, 28 May 2021 07:00:22 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 756355e4-969b-4804-b985-65f9b19098f5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622185222; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=keg0tAiXIrXO/9uFKfQsZx/6TkaJWl/wX7wIHwGo+RE=;
	b=jR48u4JAqP/N95Kewkw1jomm45BkJqV2bxMKFK/Oy3sckuzOw9Wa/W8PBwyBY8UBndGl+F
	fyF+61RIFwfJ6E2JqVDCLTkIm3ZGdlMiC6ngEj5MkZ3+LAKirwlDF79NN0ZEOe4x3RshU4
	7hzj7T42+vwVisiMXybOPKIKXEKyc+o=
Subject: Ping: [PATCH] x86/tboot: adjust UUID check
From: Jan Beulich <jbeulich@suse.com>
To: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>
References: <422e39c9-0cba-0944-b813-dfe2578ad719@suse.com>
Message-ID: <1651edfd-4854-46ed-4701-6f82c2534e00@suse.com>
Date: Fri, 28 May 2021 09:00:22 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <422e39c9-0cba-0944-b813-dfe2578ad719@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 19.05.2021 17:49, Jan Beulich wrote:
> Replace a bogus cast, move the static variable into the only function
> using it, and add __initconst. While there, also remove a pointless NULL
> check.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 
> --- a/xen/arch/x86/tboot.c
> +++ b/xen/arch/x86/tboot.c
> @@ -27,8 +27,6 @@ static vmac_t domain_mac;     /* MAC for
>  static vmac_t xenheap_mac;    /* MAC for xen heap during S3 */
>  static vmac_t frametable_mac; /* MAC for frame table during S3 */
>  
> -static const uuid_t tboot_shared_uuid = TBOOT_SHARED_UUID;
> -
>  /* used by tboot_protect_mem_regions() and/or tboot_parse_dmar_table() */
>  static uint64_t __initdata txt_heap_base, __initdata txt_heap_size;
>  static uint64_t __initdata sinit_base, __initdata sinit_size;
> @@ -93,6 +91,7 @@ static void __init tboot_copy_memory(uns
>  void __init tboot_probe(void)
>  {
>      tboot_shared_t *tboot_shared;
> +    static const uuid_t __initconst tboot_shared_uuid = TBOOT_SHARED_UUID;
>  
>      /* Look for valid page-aligned address for shared page. */
>      if ( !opt_tboot_pa || (opt_tboot_pa & ~PAGE_MASK) )
> @@ -101,9 +100,7 @@ void __init tboot_probe(void)
>      /* Map and check for tboot UUID. */
>      set_fixmap(FIX_TBOOT_SHARED_BASE, opt_tboot_pa);
>      tboot_shared = fix_to_virt(FIX_TBOOT_SHARED_BASE);
> -    if ( tboot_shared == NULL )
> -        return;
> -    if ( memcmp(&tboot_shared_uuid, (uuid_t *)tboot_shared, sizeof(uuid_t)) )
> +    if ( memcmp(&tboot_shared_uuid, &tboot_shared->uuid, sizeof(uuid_t)) )
>          return;
>  
>      /* new tboot_shared (w/ GAS support, integrity, etc.) is not backwards
> 



From xen-devel-bounces@lists.xenproject.org Fri May 28 07:22:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 07:22:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133817.249267 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWoZ-0005iO-0H; Fri, 28 May 2021 07:21:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133817.249267; Fri, 28 May 2021 07:21:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmWoY-0005iH-T8; Fri, 28 May 2021 07:21:50 +0000
Received: by outflank-mailman (input) for mailman id 133817;
 Fri, 28 May 2021 07:21:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmWoX-0005iB-KT
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 07:21:49 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 53687a03-1455-4353-bc47-1652cc8b435b;
 Fri, 28 May 2021 07:21:48 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 63B0E1FD2F;
 Fri, 28 May 2021 07:21:47 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 2798911A98;
 Fri, 28 May 2021 07:21:47 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 53687a03-1455-4353-bc47-1652cc8b435b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622186507; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=/k9p3WclJ76DHTPc78htGDW+dmcq4nCb/eORYnSOm2k=;
	b=IpXWyfTtJEELs8BswrVnUl/yu7s+fRkk8125LHM9D0OK9pynqS8Ff76hWiMQm+lkmNk0nh
	Qx43Nj+QNCNeSGe4IQF/7OQI6IAbTdp/JmoTW0L6oo8uemZfo2WO2KM3KTqcDSxswRn5gT
	n4TNnCyR42cXuFBlrbBRkcKuG3LRwuU=
Subject: Re: [PATCH v3 1/2] libelf: don't attempt to parse __xen_guest for PVH
From: Jan Beulich <jbeulich@suse.com>
To: Ian Jackson <iwj@xenproject.org>, Wei Liu <wl@xen.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, xen-devel@lists.xenproject.org,
 Roger Pau Monne <roger.pau@citrix.com>
References: <20210520123012.89855-1-roger.pau@citrix.com>
 <20210520123012.89855-2-roger.pau@citrix.com>
 <29ecaaaf-d096-e8af-fc6f-292ff2b3d5ae@suse.com>
 <6dd3d6fe-04cc-4b9d-e92f-6d4c81785300@suse.com>
Message-ID: <e6145998-0736-a213-36b2-ace27c3844be@suse.com>
Date: Fri, 28 May 2021 09:21:42 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <6dd3d6fe-04cc-4b9d-e92f-6d4c81785300@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 21.05.2021 15:31, Jan Beulich wrote:
> On 20.05.2021 14:35, Jan Beulich wrote:
>> On 20.05.2021 14:30, Roger Pau Monne wrote:
>>> The legacy __xen_guest section doesn't support the PHYS32_ENTRY
>>> elfnote, so it's pointless to attempt to parse the elfnotes from that
>>> section when called from an hvm container.
>>>
>>> Suggested-by: Jan Beulich <jbeulich@suse.com>
>>> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
>>> ---
>>> Changes since v2:
>>>  - New in this version.
>>> ---
>>>  xen/common/libelf/libelf-dominfo.c | 6 ++----
>>>  1 file changed, 2 insertions(+), 4 deletions(-)
>>>
>>> diff --git a/xen/common/libelf/libelf-dominfo.c b/xen/common/libelf/libelf-dominfo.c
>>> index 69c94b6f3bb..abea1011c18 100644
>>> --- a/xen/common/libelf/libelf-dominfo.c
>>> +++ b/xen/common/libelf/libelf-dominfo.c
>>> @@ -577,10 +577,8 @@ elf_errorstatus elf_xen_parse(struct elf_binary *elf,
>>>  
>>>      }
>>>  
>>> -    /*
>>> -     * Finally fall back to the __xen_guest section.
>>> -     */
>>> -    if ( xen_elfnotes == 0 )
>>> +    /* Finally fall back to the __xen_guest section for PV guests only. */
>>> +    if ( xen_elfnotes == 0 && !hvm )
>>
>> Isn't this depending on the 2nd patch adding the function parameter?
>> IOW doesn't this break the build, even if just transiently? With the
>> respective hunk from patch 2 moved here
>> Reviewed-by: Jan Beulich <jbeulich@suse.com>
> 
> With the intention of committing I did this hunk movement, noticing
> that
> - it's more than just one hunk,
> - a tool stack maintainer ack is actually going to be needed (all
>   respective hunks are now in the first patch).
> I'll keep the massaged patches locally, until the missing ack arrives.

I've timed out waiting for an ack and committed the patches, considering
the tool stack parts of them are mechanical enough.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 07:42:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 07:42:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133827.249289 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmX7y-00088N-MJ; Fri, 28 May 2021 07:41:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133827.249289; Fri, 28 May 2021 07:41:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmX7y-00088G-J5; Fri, 28 May 2021 07:41:54 +0000
Received: by outflank-mailman (input) for mailman id 133827;
 Fri, 28 May 2021 07:41:53 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmX7x-000887-Ly
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 07:41:53 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c312cf2a-0f9d-4e48-a459-40f23c47ba95;
 Fri, 28 May 2021 07:41:52 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E8FF81FD2F;
 Fri, 28 May 2021 07:41:51 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id A388E11A98;
 Fri, 28 May 2021 07:41:51 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c312cf2a-0f9d-4e48-a459-40f23c47ba95
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622187711; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=NPdJQ8YPDL8kMV256eAWoFl9ZI+yCo1/juFVkI6RjKM=;
	b=KCxZmXHSvaI64T+6b4NvxQtxZk7ULPHXvkQ4W4tEzeNQYCMq7NepEec3j6LKAN4gEIQxjT
	psEVNVAN3y64OgxG+ej0yocdAfokoZrUTTiOWXto8zIkM1p2w/Mzf4kZhu3mUwH5zrbUaN
	aN3Q2V+61HFwlLBwBU6aY2KhCpl3xDQ=
Subject: Re: [PATCH v4 2/4] xen/common: Guard iommu symbols with
 CONFIG_HAS_PASSTHROUGH
From: Jan Beulich <jbeulich@suse.com>
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Paul Durrant <paul@xen.org>,
 xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <f76852db6601b1bf243781b0b8b16c7a6fdc8a01.1621712830.git.connojdavis@gmail.com>
 <3b872d59-4330-68ee-c62b-230f5d6cb2cf@suse.com>
Message-ID: <c415c691-1f59-b13d-00ec-80fee777da79@suse.com>
Date: Fri, 28 May 2021 09:41:47 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <3b872d59-4330-68ee-c62b-230f5d6cb2cf@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.05.2021 10:44, Jan Beulich wrote:
> On 24.05.2021 16:34, Connor Davis wrote:
>> --- a/xen/include/xen/iommu.h
>> +++ b/xen/include/xen/iommu.h
>> @@ -51,9 +51,15 @@ static inline bool_t dfn_eq(dfn_t x, dfn_t y)
>>      return dfn_x(x) == dfn_x(y);
>>  }
>>  
>> -extern bool_t iommu_enable, iommu_enabled;
>> +extern bool_t iommu_enable;
> 
> ... just "bool" used here, as I think I did say before. Can be taken
> care of while committing, I think.

Actually, while preparing to commit this, I realized this would
better be

--- a/xen/include/xen/iommu.h
+++ b/xen/include/xen/iommu.h
@@ -51,8 +51,12 @@ static inline bool_t dfn_eq(dfn_t x, dfn_t y)
     return dfn_x(x) == dfn_x(y);
 }
 
+#ifdef CONFIG_HAS_PASSTHROUGH
 extern bool_t iommu_enable, iommu_enabled;
 extern bool force_iommu, iommu_quarantine, iommu_verbose;
+#else
+#define iommu_enabled false
+#endif
 
 #ifdef CONFIG_X86
 extern enum __packed iommu_intremap {

Which is what I'm about to commit.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 07:45:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 07:45:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133834.249305 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmXBC-0000QX-6j; Fri, 28 May 2021 07:45:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133834.249305; Fri, 28 May 2021 07:45:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmXBC-0000QQ-3g; Fri, 28 May 2021 07:45:14 +0000
Received: by outflank-mailman (input) for mailman id 133834;
 Fri, 28 May 2021 07:45:13 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmXBB-0000QI-7B
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 07:45:13 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 4a6a1a11-55ab-4426-b2fe-6408d46ea676;
 Fri, 28 May 2021 07:45:12 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 711701FD2F;
 Fri, 28 May 2021 07:45:11 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 2598D11A98;
 Fri, 28 May 2021 07:45:11 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4a6a1a11-55ab-4426-b2fe-6408d46ea676
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622187911; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=7O5RqbP5KTlJMn4F/atKoTHnyU2SqrZ4GYekbT3GH3o=;
	b=eVriGgKeYi13uNxqoVsn00/63yPqAyos4BDbqwHqzIL5K1Fff01CoGNuaEl8WShPrgqiFg
	uhwwSgJ+j6P3AfiyHIr3naUz0z9W6TGAwoHcsiChyo6fQ8J3RGuQJlsEJRtBYWH6+TKEUz
	/UOcW9euPsj75MaZoLCEosVFIpSCPq8=
Subject: Re: [PATCH v4 1/4] xen/char: Default HAS_NS16550 to y only for X86
 and ARM
From: Jan Beulich <jbeulich@suse.com>
To: Connor Davis <connojdavis@gmail.com>
Cc: Bobby Eshleman <bobbyeshleman@gmail.com>,
 Alistair Francis <alistair23@gmail.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>,
 Wei Liu <wl@xen.org>, Alistair Francis <alistair.francis@wdc.com>,
 xen-devel@lists.xenproject.org
References: <cover.1621712830.git.connojdavis@gmail.com>
 <2a8329da33d2af0eab8581a01e3098e8360bda87.1621712830.git.connojdavis@gmail.com>
 <97ef9c85-b5ba-42c6-88f0-6ac66063754c@suse.com>
 <2d1e11a6-4923-9935-5576-002a9acb1510@suse.com>
Message-ID: <d65a2207-c03c-72d6-8387-fae6b8884a64@suse.com>
Date: Fri, 28 May 2021 09:45:06 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <2d1e11a6-4923-9935-5576-002a9acb1510@suse.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 25.05.2021 09:06, Jan Beulich wrote:
> On 25.05.2021 09:02, Jan Beulich wrote:
>> On 24.05.2021 16:34, Connor Davis wrote:
>>> Defaulting to yes only for X86 and ARM reduces the requirements
>>> for a minimal build when porting new architectures.
>>>
>>> Signed-off-by: Connor Davis <connojdavis@gmail.com>
>>> Reviewed-by: Alistair Francis <alistair.francis@wdc.com>
>>
>> Acked-by: Jan Beulich <jbeulich@suse.com>
> 
> Actually, just to make sure: This ...
> 
>> --- a/xen/drivers/char/Kconfig
>> +++ b/xen/drivers/char/Kconfig
>> @@ -1,5 +1,6 @@
>>  config HAS_NS16550
>>  	bool "NS16550 UART driver" if ARM
>> +	default n if RISCV
> 
> ... won't trigger a kconfig warning prior to the RISCV symbol getting
> introduced, will it? (I was about to commit the change when I started
> wondering.)

Not having heard back, I've now verified this myself. I've nevertheless
decided to hold off committing this until it can be committed together
with at least patch 3 of this series, so there wouldn't be a seemingly
stray reference to RISCV.
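
For reference, kconfig evaluates a reference to an undefined symbol in an
expression as the constant n and emits no warning for it, which is why the
hunk is safe to apply before the RISCV symbol exists. A minimal sketch of
the resulting option (the surrounding lines are illustrative, not the full
xen/drivers/char/Kconfig contents):

```
config HAS_NS16550
	bool "NS16550 UART driver" if ARM
	# RISCV is not yet defined anywhere in the tree; kconfig quietly
	# treats the undefined symbol as n, so this line is inert and
	# warning-free - merely a stray-looking reference.
	default n if RISCV
	default y
```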

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 08:31:25 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 08:31:25 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133842.249316 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmXtf-0005tS-U5; Fri, 28 May 2021 08:31:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133842.249316; Fri, 28 May 2021 08:31:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmXtf-0005tL-Qs; Fri, 28 May 2021 08:31:11 +0000
Received: by outflank-mailman (input) for mailman id 133842;
 Fri, 28 May 2021 08:31:11 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WKFi=KX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmXte-0005tF-Sc
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 08:31:11 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 10ffbe53-448e-4aee-a3a5-b308e4a7e72e;
 Fri, 28 May 2021 08:31:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 10ffbe53-448e-4aee-a3a5-b308e4a7e72e
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622190668;
  h=date:from:to:cc:subject:message-id:references:
   in-reply-to:mime-version;
  bh=VASdSap5MshOa2kPMnw39inUZ6CqQQNx0VkaoKgoeuQ=;
  b=VYW4asmStOwi7qj6e0MfDbh89Fxr6L9vwk8Cp+YypU4YhGJmUjdSyRAb
   eq/sWNYBgB+4lR+FAk0g5kOsVQedNnfD7H1RQs7/8fT786XqluQ2nq9rn
   bi47LGHgHgGP6omqOowkJ+G3wyFvOQ7UXOtmaekBAxSiMxFDyCF4hoLdl
   Q=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
X-SBRS: 5.1
X-MesageID: 45209600
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,229,1616472000"; 
   d="scan'208";a="45209600"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=OOMHDnXVMujszFhUsckOL6aeH6aNaDFunT75TvsnszVHEeJandrsn2yJ1r9Nn0k59mtljh7WYXxshUsCNa5hqEvEUTMIQPBtkLukAGQ0F0qY1dOadeOl5to1TN/5nOy4aarum0qsjm0pnF/KXboXpLKbAqID6ImE0JrmH7Sh9ipaxXDkwPYkN2ha16sVHEkT1ztTz9XK0ukevalqpCwkdW1RD/HmmXx/AwMxRcXPBarwYyZrCTPAenPqYsALDUeKVv5oIpBdqHEIvLusBKB1/obO30MJjaJ58I0akC/M/AQNLIM/eu8zOhV4qtmKasulPF1eKLqZagX90/IJULsAsw==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3zRKekr4dj4xhs0PEWnqIXTmpu6fWZgo0v5jM1EFdXo=;
 b=Ym+ryaiuywXgCvQUaTQKQT3GBoSQlbym1hbPbcCzDoW6vlVKmWF5XCtIJ/t+sztAUM4oI2bt1q/sGdBSol77oxc0CNVdL8x65Xy3RtbuxZcQIrUtLyR3LcJ4BxP2woRNzLAdf4vragQ/KU/Te+BD90u1TscYrXV+R4MIlEX//wziXfyNFFBuzS/r4Fv7w2nYFxWixDlCOCNuDwNN4jxVwndDARu7/AB7kv5/W0Dc/9nB0s6TEzFBdpY+QqK3rlgA3eIwnpxRUvTx0RM2iXVjx3PRmjCz/h8HLVrnUvMEK9yeEyv9VWCj9yWX64AI1HSBgxOkOdpGWwhyU3uYw3Vx2Q==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=3zRKekr4dj4xhs0PEWnqIXTmpu6fWZgo0v5jM1EFdXo=;
 b=wwlYwhoGy79hnS09n7/ux1RxdkMXsiFNozN2bIy/5Z9RnRH50U/734ekeTbVhL/yJ6tszsZa2/VWPlG3DKhk3TLS8h42x8+9Kql7+LR4/+EOjyhV2mJGYmhnAVud70M+RWNRKL+A7aZ7X7G68Z4zPz69exGUutUTaz0q6o7J3+o=
Date: Fri, 28 May 2021 10:30:59 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
Message-ID: <YLCqQz9xS4HEpabG@Air-de-Roger>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
 <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
X-ClientProxiedBy: MRXP264CA0025.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:14::13) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 75d232ff-af1e-4e66-5227-08d921b2eecd
X-MS-TrafficTypeDiagnostic: DM6PR03MB3481:
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB34819DE135A459A56C30C5688F229@DM6PR03MB3481.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:3276;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 75d232ff-af1e-4e66-5227-08d921b2eecd
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 May 2021 08:31:04.5886
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: izIRkhGBj3SbNTQnXmBWjKwArTxzls/MQ9+6Xw4MC23/oyn9hXP7Ui3tYDiRGtAv98qcCrcTsOCglY9p2eTcow==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3481
X-OriginatorOrg: citrix.com

On Thu, May 27, 2021 at 07:48:41PM +0100, Julien Grall wrote:
> Hi Jan,
> 
> On 27/05/2021 12:28, Jan Beulich wrote:
> > port_is_valid() and evtchn_from_port() are fine to use without holding
> > any locks. Accordingly acquire the per-domain lock slightly later in
> > evtchn_close() and evtchn_bind_vcpu().
> 
> So I agree that port_is_valid() and evtchn_from_port() are fine to use
> without holding any locks in evtchn_bind_vcpu(). However, this is misleading
> to say there is no problem with evtchn_close().
> 
> evtchn_close() can be called with current != d and therefore, there is a

The only instance where evtchn_close() is called with current != d and
the domain could be unpaused is in free_xen_event_channel() AFAICT.

> risk that port_is_valid() may be valid and then invalid because
> d->valid_evtchns is decremented in evtchn_destroy().

Hm, I guess you could indeed have parallel calls to
free_xen_event_channel() and evtchn_destroy(), such that
free_xen_event_channel() races with valid_evtchns getting
decremented?

All callers of free_xen_event_channel() are internal to the hypervisor,
so with proper ordering of the operations this could perhaps be avoided,
albeit that's not easy to assert.

> Thankfully the memory is still there. So the current code is okayish and I
> could reluctantly accept this behavior to be spread. However, I don't think
> this should be left uncommented in both the code (maybe on top of
> port_is_valid()?) and the commit message.

Indeed, I think we need to expand the comment on port_is_valid() to
clarify all this. I'm not sure I properly understand myself when it's
fine to use port_is_valid() without holding the per-domain event lock.

evtchn_close() shouldn't be a very common operation anyway, so holding
the per-domain lock slightly longer doesn't seem that critical to me.

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 28 09:59:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 09:59:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133855.249327 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZH4-00050u-Cl; Fri, 28 May 2021 09:59:26 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133855.249327; Fri, 28 May 2021 09:59:26 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZH4-00050n-93; Fri, 28 May 2021 09:59:26 +0000
Received: by outflank-mailman (input) for mailman id 133855;
 Fri, 28 May 2021 09:59:25 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=ymrK=KX=bugseng.com=roberto.bagnara@srs-us1.protection.inumbo.net>)
 id 1lmZH3-00050h-OZ
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 09:59:25 +0000
Received: from spartacus.cs.unipr.it (unknown [160.78.167.140])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id fdd7dd18-0484-4648-9303-58af12f09001;
 Fri, 28 May 2021 09:59:23 +0000 (UTC)
Received: from [192.168.43.199] (unknown [151.38.82.198])
 by spartacus.cs.unipr.it (Postfix) with ESMTPSA id EB6854E0ACB
 for <xen-devel@lists.xenproject.org>; Fri, 28 May 2021 11:59:21 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: fdd7dd18-0484-4648-9303-58af12f09001
Subject: Re: Invalid _Static_assert expanded from HASH_CALLBACKS_CHECK
To: xen-devel@lists.xenproject.org
References: <ccb37c2e-a3a6-a2e4-bf15-da81f97c94be@bugseng.com>
 <38898d21-fe76-36dc-f1e6-497b52c5c0b7@suse.com>
From: Roberto Bagnara <roberto.bagnara@bugseng.com>
Message-ID: <88f2f029-ad2b-4f0d-c683-7ae9d7c92dc6@bugseng.com>
Date: Fri, 28 May 2021 11:59:13 +0200
User-Agent: Mozilla/5.0 (X11; U; Linux i686; en-US; rv:1.7.12) Gecko/20050929
 Thunderbird/1.0.7 Fedora/1.0.7-1.1.fc4 Mnenhy/0.7.3.0
MIME-Version: 1.0
In-Reply-To: <38898d21-fe76-36dc-f1e6-497b52c5c0b7@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 8bit

Hi Jan.

Please see below.

On 25/05/21 10:58, Jan Beulich wrote:
> On 24.05.2021 06:29, Roberto Bagnara wrote:
>> I stumbled upon parsing errors due to invalid uses of
>> _Static_assert expanded from HASH_CALLBACKS_CHECK where
>> the tested expression is not constant, as mandated by
>> the C standard.
>>
>> Judging from the following comment, there is partial awareness
>> of the fact this is an issue:
>>
>> #ifndef __clang__ /* At least some versions dislike some of the uses. */
>> #define HASH_CALLBACKS_CHECK(mask) \
>>       BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
>>
>> Indeed, this is not a fault of Clang: the point is that some
>> of the expansions of this macro are not C.  Moreover,
>> the fact that GCC sometimes accepts them is not
>> something we can rely upon:
>>
>> $ cat p.c
>> void f() {
>> static const int x = 3;
>> _Static_assert(x < 4, "");
>> }
>> $ gcc -c -O p.c
>> $ gcc -c p.c
>> p.c: In function ‘f’:
>> p.c:3:20: error: expression in static assertion is not constant
>> 3 | _Static_assert(x < 4, "");
>> | ~^~
>> $
> 
> I'd nevertheless like to stick to this as long as not proven
> otherwise by future gcc.

Just two observations:

1) Violating the C standard makes MISRA compliance significantly
    more difficult.  In addition, it also complicates compiler
    qualification, for those who are required to do it.

2) GCC is already proving otherwise: if you try compiling
    without optimization, compilation fails.

Kind regards,

    Roberto


From xen-devel-bounces@lists.xenproject.org Fri May 28 10:02:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 10:02:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133861.249338 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZKA-0006QA-R6; Fri, 28 May 2021 10:02:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133861.249338; Fri, 28 May 2021 10:02:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZKA-0006Q3-Nz; Fri, 28 May 2021 10:02:38 +0000
Received: by outflank-mailman (input) for mailman id 133861;
 Fri, 28 May 2021 10:02:37 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmZK9-0006Pp-7V; Fri, 28 May 2021 10:02:37 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmZK9-0003kq-2m; Fri, 28 May 2021 10:02:37 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmZK8-0001Zc-Nt; Fri, 28 May 2021 10:02:36 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmZK8-0008Ld-NQ; Fri, 28 May 2021 10:02:36 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=0GDSD2xQqbbt2iCyBEpXRWghEmvUy9nKYaoKdMgMwyk=; b=0XsueD6bp/PYLmnOyqCy/6yJJz
	tCmlf09LZkPh7MzdITPuyaTsb12QxMFhB8pqCZ/NFb29sFOiUFQFSbhg0P/qixF3sRs4ty1nhlbI5
	G/hcor4dNr1VU3i8GOkqcR32NHp77jIj2+Zahgf3dOMEE6rXG7ji5FwpR+aYSbZNSt/Q=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162241-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162241: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=97e5bf604b7a0d6e1b3e00fe31d5fd4b9bffeaae
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 May 2021 10:02:36 +0000

flight 162241 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162241/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                97e5bf604b7a0d6e1b3e00fe31d5fd4b9bffeaae
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  300 days
Failing since        152366  2020-08-01 20:49:34 Z  299 days  509 attempts
Testing same since   162241  2021-05-27 23:12:27 Z    0 days    1 attempts

------------------------------------------------------------
6105 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1658927 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 28 10:08:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 10:08:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133869.249352 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZPJ-00078h-HZ; Fri, 28 May 2021 10:07:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133869.249352; Fri, 28 May 2021 10:07:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZPJ-00078a-Ec; Fri, 28 May 2021 10:07:57 +0000
Received: by outflank-mailman (input) for mailman id 133869;
 Fri, 28 May 2021 10:07:56 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmZPI-00078U-74
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 10:07:56 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e59f9ad-a74f-461a-9295-ab9b8eb7908b;
 Fri, 28 May 2021 10:07:55 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id 61A3A218B0;
 Fri, 28 May 2021 10:07:54 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 4E43911A98;
 Fri, 28 May 2021 10:07:54 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e59f9ad-a74f-461a-9295-ab9b8eb7908b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622196474; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=uxygQ/7cUMd/a4ps4mLRCf93NjR5L7PGlVUYjXzH/Mw=;
	b=UIFzD18SUrp0CpzAhsVbfHNRGwI08R9Dm5/aRcZlk58W0KGfFMc5xwMZ51Qs/g0nh7MLrC
	jq4JhoXqcGYpZXbUXoAhUut8QCBDavQPGWgISDGKobzrEKzvHqjurutN+ZS8ZNe+y4IZlW
	1ZSkRcdBJ4Ze7LEJywHXAGnPLo+Au8U=
Subject: Re: Invalid _Static_assert expanded from HASH_CALLBACKS_CHECK
To: Roberto Bagnara <roberto.bagnara@bugseng.com>
References: <ccb37c2e-a3a6-a2e4-bf15-da81f97c94be@bugseng.com>
 <38898d21-fe76-36dc-f1e6-497b52c5c0b7@suse.com>
 <88f2f029-ad2b-4f0d-c683-7ae9d7c92dc6@bugseng.com>
Cc: xen-devel@lists.xenproject.org
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <d41dd2c3-e204-ea53-f32e-6d6fd8f7615c@suse.com>
Date: Fri, 28 May 2021 12:07:50 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <88f2f029-ad2b-4f0d-c683-7ae9d7c92dc6@bugseng.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 28.05.2021 11:59, Roberto Bagnara wrote:
> On 25/05/21 10:58, Jan Beulich wrote:
>> On 24.05.2021 06:29, Roberto Bagnara wrote:
>>> I stumbled upon parsing errors due to invalid uses of
>>> _Static_assert expanded from HASH_CALLBACKS_CHECK where
>>> the tested expression is not constant, as mandated by
>>> the C standard.
>>>
>>> Judging from the following comment, there is partial awareness
>>> of the fact this is an issue:
>>>
>>> #ifndef __clang__ /* At least some versions dislike some of the uses. */
>>> #define HASH_CALLBACKS_CHECK(mask) \
>>>       BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
>>>
>>> Indeed, this is not a fault of Clang: the point is that some
>>> of the expansions of this macro are not C.  Moreover,
>>> the fact that GCC sometimes accepts them is not
>>> something we can rely upon:
>>>
>>> $ cat p.c
>>> void f() {
>>> static const int x = 3;
>>> _Static_assert(x < 4, "");
>>> }
>>> $ gcc -c -O p.c
>>> $ gcc -c p.c
>>> p.c: In function ‘f’:
>>> p.c:3:20: error: expression in static assertion is not constant
>>> 3 | _Static_assert(x < 4, "");
>>> | ~^~
>>> $
>>
>> I'd nevertheless like to stick to this as long as not proven
>> otherwise by future gcc.
> 
> Just two observations:
> 
> 1) Violating the C standard makes MISRA compliance significantly
>     more difficult.  In addition, it also complicates compiler
>     qualification, for those who are required to do it.
> 
> 2) GCC is already proving otherwise: if you try compiling
>     without optimization, compilation fails.

I'm afraid we have other issues when building without optimization.

In any event - feel free to contribute a patch. As said, I'm not
the maintainer of that piece of code, and you may well find him
agreeing with such a change. He hasn't replied yet to the earlier
mail, which would be a prerequisite to me possibly making a patch
myself.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 10:14:14 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 10:14:14 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133876.249363 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZVI-0000Bv-CP; Fri, 28 May 2021 10:14:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133876.249363; Fri, 28 May 2021 10:14:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZVI-0000Bm-8M; Fri, 28 May 2021 10:14:08 +0000
Received: by outflank-mailman (input) for mailman id 133876;
 Fri, 28 May 2021 10:14:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmZVH-0000Bc-FK; Fri, 28 May 2021 10:14:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmZVH-0003wi-8r; Fri, 28 May 2021 10:14:07 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmZVH-0001vx-06; Fri, 28 May 2021 10:14:07 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmZVG-0004wm-Vv; Fri, 28 May 2021 10:14:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=54a7NDvJiUE9oBxdIjape00bgBjzzHil7VhbxYF8gAg=; b=SSV1/ks/54VYk4wsgS/0++panX
	CxybXys2gzWFArCGnr1CDkKYtH2zGPrqhVBBL7jM0evRsJi9Ae+23CS1CP6UJfAWIM+Yj2rQrsxqC
	yFkUxNP8J2aLtZ1dP6vj5YJ2lScmfK+dX/WRAv61W0Ss350Hu17X/wX9KuRDubtVpOYg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162245-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162245: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
X-Osstest-Versions-That:
    xen=9fdcf851689cb2a9501d3947cb5d767d9c7797e8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 May 2021 10:14:06 +0000

flight 162245 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162245/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695
baseline version:
 xen                  9fdcf851689cb2a9501d3947cb5d767d9c7797e8

Last test of basis   162239  2021-05-27 19:02:50 Z    0 days
Testing same since   162245  2021-05-28 08:01:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9fdcf85168..683d899e4b  683d899e4bffca35c5b192ea0662362b0270a695 -> smoke


From xen-devel-bounces@lists.xenproject.org Fri May 28 10:23:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 10:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133885.249377 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZeE-0001ct-9H; Fri, 28 May 2021 10:23:22 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133885.249377; Fri, 28 May 2021 10:23:22 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmZeE-0001cm-5O; Fri, 28 May 2021 10:23:22 +0000
Received: by outflank-mailman (input) for mailman id 133885;
 Fri, 28 May 2021 10:23:21 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmZeD-0001cg-HZ
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 10:23:21 +0000
Received: from smtp-out1.suse.de (unknown [195.135.220.28])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7a46a3ef-3fd3-426f-b862-e8254683bd2f;
 Fri, 28 May 2021 10:23:20 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out1.suse.de (Postfix) with ESMTPS id B1BAF218B0;
 Fri, 28 May 2021 10:23:19 +0000 (UTC)
Received: from director2.suse.de (director2.suse-dmz.suse.de [192.168.254.72])
 by imap.suse.de (Postfix) with ESMTPSA id 8169311A98;
 Fri, 28 May 2021 10:23:19 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7a46a3ef-3fd3-426f-b862-e8254683bd2f
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622197399; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=Jje4WFECR2yDCHwR5cyInBEfgUwMw9ZzVmyalZLt7js=;
	b=tTNaRCno8M6wrKT5UJBBJi8x4DZWBWWPN4iT5hV+iLb26kzawVTB/hstOaOVsH42Yx0MgK
	GzDD/b3WNFMR21gtwFaVlFaasPxjvPC1/FkmCv6ihDPbejn5BDNs96aIvj2gWDVMCX5ub3
	ABbYlB6FRTcERJ51RCktGq2QrLnghQI=
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
 <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
 <YLCqQz9xS4HEpabG@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <27d54d81-bec8-5bc7-39cd-60e9761e726b@suse.com>
Date: Fri, 28 May 2021 12:23:15 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YLCqQz9xS4HEpabG@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 28.05.2021 10:30, Roger Pau Monné wrote:
> On Thu, May 27, 2021 at 07:48:41PM +0100, Julien Grall wrote:
>> On 27/05/2021 12:28, Jan Beulich wrote:
>>> port_is_valid() and evtchn_from_port() are fine to use without holding
>>> any locks. Accordingly acquire the per-domain lock slightly later in
>>> evtchn_close() and evtchn_bind_vcpu().
>>
>> So I agree that port_is_valid() and evtchn_from_port() are fine to use
>> without holding any locks in evtchn_bind_vcpu(). However, this is misleading
>> to say there is no problem with evtchn_close().
>>
>> evtchn_close() can be called with current != d and therefore, there is a
> 
> The only instance where evtchn_close is called with current != d and
> the domain could be unpaused is in free_xen_event_channel AFAICT.

As long as the domain is not paused, ->valid_evtchns can't ever
decrease: The only point where this gets done is in evtchn_destroy().
Hence ...

>> risk that port_is_valid() may be valid and then invalid because
>> d->valid_evtchns is decremented in evtchn_destroy().
> 
> Hm, I guess you could indeed have parallel calls to
> free_xen_event_channel and evtchn_destroy in a way that
> free_xen_event_channel could race with valid_evtchns getting
> decreased?

... I don't see this as relevant.

>> Thankfully the memory is still there. So the current code is okayish and I
>> could reluctantly accept this behavior to be spread. However, I don't think
>> this should be left uncommented in both the code (maybe on top of
>> port_is_valid()?) and the commit message.
> 
> Indeed, I think we need some expansion of the comment in port_is_valid
> to clarify all this. I'm not sure I understand it properly myself when
> it's fine to use port_is_valid without holding the per domain event
> lock.

Because of the above property plus the fact that even if
->valid_evtchns decreases, the underlying struct evtchn instance
will remain valid (i.e. won't get de-allocated, which happens only
in evtchn_destroy_final()), it is always fine to use it without
lock. With this I'm having trouble seeing what would need adding
to port_is_valid()'s commentary.

The only thing which shouldn't happen anywhere is following a
port_is_valid() check which has returned false by code assuming
the port is going to remain invalid. But I think that's obvious
when you don't hold any suitable lock.

I do intend to follow Julien's request to explain things more for
evtchn_close().

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 10:49:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 10:49:03 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133892.249388 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lma2y-0003xq-Dj; Fri, 28 May 2021 10:48:56 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133892.249388; Fri, 28 May 2021 10:48:56 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lma2y-0003xj-AB; Fri, 28 May 2021 10:48:56 +0000
Received: by outflank-mailman (input) for mailman id 133892;
 Fri, 28 May 2021 10:48:55 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lma2x-0003xd-CO
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 10:48:55 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lma2v-0004WK-Mz; Fri, 28 May 2021 10:48:53 +0000
Received: from [54.239.6.187] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lma2v-0005GG-Gh; Fri, 28 May 2021 10:48:53 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=ZJqmUNj1//MsNBfD3g6CRdpHlKU20FoByOfAIatVwEY=; b=uCZsp7CITRez2OimLvHz8MMD/7
	KLmX9x7Topsx/0F4J0R7EEF3vhI4uDYKEf7Lu48Rac+zewO+DFdztuKuJRyhWfbllMwOI5F6Zefpz
	PmgCDYHowDHTRx8t/ypq8iY/RhV3eB9jUhEXPMGLki7NcDnU8XwZyWC+e8v2N4QMCrm8=;
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
To: Jan Beulich <jbeulich@suse.com>, =?UTF-8?Q?Roger_Pau_Monn=c3=a9?=
 <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
 <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
 <YLCqQz9xS4HEpabG@Air-de-Roger>
 <27d54d81-bec8-5bc7-39cd-60e9761e726b@suse.com>
From: Julien Grall <julien@xen.org>
Message-ID: <079f2f2a-0797-b650-ff47-7e595ab29589@xen.org>
Date: Fri, 28 May 2021 11:48:51 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <27d54d81-bec8-5bc7-39cd-60e9761e726b@suse.com>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Jan,

On 28/05/2021 11:23, Jan Beulich wrote:
> On 28.05.2021 10:30, Roger Pau Monné wrote:
>> On Thu, May 27, 2021 at 07:48:41PM +0100, Julien Grall wrote:
>>> On 27/05/2021 12:28, Jan Beulich wrote:
>>>> port_is_valid() and evtchn_from_port() are fine to use without holding
>>>> any locks. Accordingly acquire the per-domain lock slightly later in
>>>> evtchn_close() and evtchn_bind_vcpu().
>>>
>>> So I agree that port_is_valid() and evtchn_from_port() are fine to use
>>> without holding any locks in evtchn_bind_vcpu(). However, this is misleading
>>> to say there is no problem with evtchn_close().
>>>
>>> evtchn_close() can be called with current != d and therefore, there is a
>>
>> The only instance where evtchn_close is called with current != d and
>> the domain could be unpaused is in free_xen_event_channel AFAICT.
> 
> As long as the domain is not paused, ->valid_evtchns can't ever
> decrease: The only point where this gets done is in evtchn_destroy().
> Hence ...
> 
>>> risk that port_is_valid() may be valid and then invalid because
>>> d->valid_evtchns is decremented in evtchn_destroy().
>>
>> Hm, I guess you could indeed have parallel calls to
>> free_xen_event_channel and evtchn_destroy in a way that
>> free_xen_event_channel could race with valid_evtchns getting
>> decreased?
> 
> ... I don't see this as relevant.
> 
>>> Thankfully the memory is still there. So the current code is okayish and I
>>> could reluctantly accept this behavior to be spread. However, I don't think
>>> this should be left uncommented in both the code (maybe on top of
>>> port_is_valid()?) and the commit message.
>>
>> Indeed, I think we need some expansion of the comment in port_is_valid
>> to clarify all this. I'm not sure I understand it properly myself when
>> it's fine to use port_is_valid without holding the per domain event
>> lock.
> 
> Because of the above property plus the fact that even if
> ->valid_evtchns decreases, the underlying struct evtchn instance
> will remain valid (i.e. won't get de-allocated, which happens only
> in evtchn_destroy_final()), it is always fine to use it without
> lock. With this I'm having trouble seeing what would need adding
> to port_is_valid()'s commentary.

Let's take the example of free_xen_event_channel(). The function checks 
whether the port is valid. If it is, then evtchn_close() will be called.

At this point, it would be fair for a developer to assume that 
port_is_valid() will also return true in evtchn_close().

To push this to the extreme: if free_xen_event_channel() were the only 
caller of evtchn_close(), one could argue that the check in 
evtchn_close() could be a BUG_ON().

However, this can't be done, because it would sooner or later turn into an XSA.

Effectively, it means that port_is_valid() *cannot* be used in an 
ASSERT()/BUG_ON(), and every caller should check the return value even 
if the port was previously validated.

So I think a comment on top of port_is_valid() would be really useful 
before someone rediscovers this the hard way.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 28 13:20:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 13:20:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133910.249402 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmcP5-0000oh-PH; Fri, 28 May 2021 13:19:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133910.249402; Fri, 28 May 2021 13:19:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmcP5-0000oZ-JZ; Fri, 28 May 2021 13:19:55 +0000
Received: by outflank-mailman (input) for mailman id 133910;
 Fri, 28 May 2021 13:19:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=QboB=KX=gmail.com=jandryuk@srs-us1.protection.inumbo.net>)
 id 1lmcP4-0000oT-OJ
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 13:19:54 +0000
Received: from mail-lf1-x132.google.com (unknown [2a00:1450:4864:20::132])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f23221fd-77a2-44e9-97c8-cc365e133578;
 Fri, 28 May 2021 13:19:53 +0000 (UTC)
Received: by mail-lf1-x132.google.com with SMTP id a5so5365328lfm.0
 for <xen-devel@lists.xenproject.org>; Fri, 28 May 2021 06:19:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f23221fd-77a2-44e9-97c8-cc365e133578
MIME-Version: 1.0
References: <20210503192810.36084-1-jandryuk@gmail.com> <20210503192810.36084-7-jandryuk@gmail.com>
 <e54c3aef-4c44-4302-f7f4-4f4733e33780@suse.com>
In-Reply-To: <e54c3aef-4c44-4302-f7f4-4f4733e33780@suse.com>
From: Jason Andryuk <jandryuk@gmail.com>
Date: Fri, 28 May 2021 09:19:39 -0400
Message-ID: <CAKf6xptN13CW78XajgyE0G8t2NjFVka8tzNO2oofjcw7tT7n8g@mail.gmail.com>
Subject: Re: [PATCH 06/13] cpufreq: Export HWP parameters to userspace
To: Jan Beulich <jbeulich@suse.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, =?UTF-8?Q?Roger_Pau_Monn=C3=A9?= <roger.pau@citrix.com>, 
	Wei Liu <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>, 
	Julien Grall <julien@xen.org>, Stefano Stabellini <sstabellini@kernel.org>, 
	xen-devel <xen-devel@lists.xenproject.org>
Content-Type: text/plain; charset="UTF-8"

On Thu, May 27, 2021 at 4:03 AM Jan Beulich <jbeulich@suse.com> wrote:
>
> On 03.05.2021 21:28, Jason Andryuk wrote:
> > --- a/xen/drivers/acpi/pmstat.c
> > +++ b/xen/drivers/acpi/pmstat.c
> > @@ -290,6 +290,12 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
> >              &op->u.get_para.u.ondemand.sampling_rate,
> >              &op->u.get_para.u.ondemand.up_threshold);
> >      }
> > +
> > +    if ( !strncasecmp(op->u.get_para.scaling_governor,
> > +                      "hwp-internal", CPUFREQ_NAME_LEN) )
> > +    {
> > +        ret = get_hwp_para(policy, &op->u.get_para.u.hwp_para);
> > +    }
> >      op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);
>
> Nit: Unnecessary parentheses again, and with the leading blank line
> you also want a trailing one. (As an aside I'm also not overly happy
> to see the call keyed to the governor name. Is there really no other
> indication that hwp is in use?)

This is preceded by similar checks for "userspace" and "ondemand", so
it is following existing code.  Unlike other governors, hwp-internal
is static.  It could be exported if you want to switch to comparing
with cpufreq_driver.

> > --- a/xen/include/acpi/cpufreq/cpufreq.h
> > +++ b/xen/include/acpi/cpufreq/cpufreq.h
> > @@ -246,4 +246,7 @@ int write_userspace_scaling_setspeed(unsigned int cpu, unsigned int freq);
> >  void cpufreq_dbs_timer_suspend(void);
> >  void cpufreq_dbs_timer_resume(void);
> >
> > +/********************** hwp hypercall helper *************************/
> > +int get_hwp_para(struct cpufreq_policy *policy, struct xen_hwp_para *hwp_para);
>
> While I can see that the excessive number of stars matches what
> we have elsewhere in the header, I still wonder if you need to go
> this far for a single declaration. If you want to stick to this,
> then to match the rest of the file you want to follow the comment
> by a blank line.

Will remove.

> > --- a/xen/include/public/sysctl.h
> > +++ b/xen/include/public/sysctl.h
> > @@ -301,6 +301,23 @@ struct xen_ondemand {
> >      uint32_t up_threshold;
> >  };
> >
> > +struct xen_hwp_para {
> > +    uint16_t activity_window; /* 7bit mantissa and 3bit exponent */
>
> If you go this far with commenting, you should also make the further
> aspects clear: Which bits these are, and that the exponent is taking
> 10 as the base (in most other cases one would expect 2).

Yes, this is much more useful.

> > +#define XEN_SYSCTL_HWP_FEAT_ENERGY_PERF (1 << 0) /* energy_perf range 0-255 if
> > +                                                    1. Otherwise 0-15 */
> > +#define XEN_SYSCTL_HWP_FEAT_ACT_WINDOW  (1 << 1) /* activity_window supported
> > +                                                    if 1 */
>
> Style: Comment formatting. You may want to move the comment on separate
> lines ahead of what they comment.
>
> > +    uint8_t hw_feature; /* bit flags for features */
> > +    uint8_t hw_lowest;
> > +    uint8_t hw_most_efficient;
> > +    uint8_t hw_guaranteed;
> > +    uint8_t hw_highest;
>
> Any particular reason for the recurring hw_ prefixes?

The idea was to differentiate values provided by CPU hardware from
user-configured values.

Regards,
Jason


From xen-devel-bounces@lists.xenproject.org Fri May 28 13:29:15 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 13:29:15 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133917.249412 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmcY3-0002JA-K9; Fri, 28 May 2021 13:29:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133917.249412; Fri, 28 May 2021 13:29:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmcY3-0002J3-HD; Fri, 28 May 2021 13:29:11 +0000
Received: by outflank-mailman (input) for mailman id 133917;
 Fri, 28 May 2021 13:29:10 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=wdiM=KX=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lmcY2-0002Ix-BR
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 13:29:10 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 0abaf67a-2cf1-44e9-b49f-91729776ce0b;
 Fri, 28 May 2021 13:29:09 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 3366B1FD2F;
 Fri, 28 May 2021 13:29:08 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 029EB11A98;
 Fri, 28 May 2021 13:29:07 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id agsiOyPwsGDDSQAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Fri, 28 May 2021 13:29:07 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 0abaf67a-2cf1-44e9-b49f-91729776ce0b
Subject: Re: [PATCH] xen/grant-table: Simplify the update to the per-vCPU
 maptrack freelist
To: Julien Grall <julien@xen.org>
Cc: Julien Grall <jgrall@amazon.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <george.dunlap@citrix.com>, Ian Jackson <iwj@xenproject.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210526152152.26251-1-julien@xen.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <6748164b-ad38-d7d0-6abe-b5e393f7b9f3@suse.com>
Date: Fri, 28 May 2021 15:29:03 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210526152152.26251-1-julien@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 26.05.2021 17:21, Julien Grall wrote:
> From: Julien Grall <jgrall@amazon.com>
> 
> Since XSA-288 (commit 02cbeeb62075 "gnttab: split maptrack lock to make

XSA-228 I suppose?

> it fulfill its purpose again"), v->maptrack_head and v->maptrack_tail
> are with the lock v->maptrack_freelist_lock held.

Nit: missing "accessed" or alike?

> Therefore it is not necessary to update the fields using cmpxchg()
> and also read them atomically.

Ah yes, very good observation. Should have noticed this back at the
time, for an immediate follow-up change.

> Note that there are two cases where v->maptrack_tail is accessed without
> the lock. They both happen in _get_maptrack_handle() when the current vCPU
> list is empty. Therefore there is no possible race.

I think you mean the other function here, without a leading underscore
in its name. And if you want to explain the absence of a race, wouldn't
you then better also mention that the list can get initially filled
only on the local vCPU?

> I am not sure whether we should try to protect the remaining unprotected
> access with the lock or maybe add a comment?

As per above I don't view adding locking as sensible. If you feel like
adding a helpful comment, perhaps. I will admit that it took me more
than just a moment to recall that "local vCPU only" argument.

> --- a/xen/common/grant_table.c
> +++ b/xen/common/grant_table.c
> @@ -543,34 +543,26 @@ double_gt_unlock(struct grant_table *lgt, struct grant_table *rgt)
>  static inline grant_handle_t
>  _get_maptrack_handle(struct grant_table *t, struct vcpu *v)
>  {
> -    unsigned int head, next, prev_head;
> +    unsigned int head, next;
>  
>      spin_lock(&v->maptrack_freelist_lock);
>  
> -    do {
> -        /* No maptrack pages allocated for this VCPU yet? */
> -        head = read_atomic(&v->maptrack_head);
> -        if ( unlikely(head == MAPTRACK_TAIL) )
> -        {
> -            spin_unlock(&v->maptrack_freelist_lock);
> -            return INVALID_MAPTRACK_HANDLE;

Where did this and ...

> -        }
> -
> -        /*
> -         * Always keep one entry in the free list to make it easier to
> -         * add free entries to the tail.
> -         */
> -        next = read_atomic(&maptrack_entry(t, head).ref);
> -        if ( unlikely(next == MAPTRACK_TAIL) )
> -        {
> -            spin_unlock(&v->maptrack_freelist_lock);
> -            return INVALID_MAPTRACK_HANDLE;

... this use of INVALID_MAPTRACK_HANDLE go? It is at present merely a
coincidence that INVALID_MAPTRACK_HANDLE == MAPTRACK_TAIL. If you
want to fold them, you will need to do so properly (by eliminating
one of the two constants). But I think they're separate on purpose.

> -        }
> +    /* No maptrack pages allocated for this VCPU yet? */
> +    head = v->maptrack_head;
> +    if ( unlikely(head == MAPTRACK_TAIL) )
> +        goto out;
>  
> -        prev_head = head;
> -        head = cmpxchg(&v->maptrack_head, prev_head, next);
> -    } while ( head != prev_head );
> +    /*
> +     * Always keep one entry in the free list to make it easier to
> +     * add free entries to the tail.
> +     */
> +    next = read_atomic(&maptrack_entry(t, head).ref);

Since the lock protects the entire free list, why do you need to
keep read_atomic() here?

> +    if ( unlikely(next == MAPTRACK_TAIL) )
> +        head = MAPTRACK_TAIL;
> +    else
> +        v->maptrack_head = next;
>  
> +out:

Please indent labels by at least one blank, to avoid issues with
diff's -p option. In fact if you didn't introduce a goto here in
the first place, there'd be less code churn overall, as you'd
need to alter the indentation of fewer lines.

> @@ -623,7 +615,7 @@ put_maptrack_handle(
>  {
>      struct domain *currd = current->domain;
>      struct vcpu *v;
> -    unsigned int prev_tail, cur_tail;
> +    unsigned int prev_tail;
>  
>      /* 1. Set entry to be a tail. */
>      maptrack_entry(t, handle).ref = MAPTRACK_TAIL;
> @@ -633,11 +625,8 @@ put_maptrack_handle(
>  
>      spin_lock(&v->maptrack_freelist_lock);
>  
> -    cur_tail = read_atomic(&v->maptrack_tail);
> -    do {
> -        prev_tail = cur_tail;
> -        cur_tail = cmpxchg(&v->maptrack_tail, prev_tail, handle);
> -    } while ( cur_tail != prev_tail );
> +    prev_tail = v->maptrack_tail;
> +    v->maptrack_tail = handle;
>  
>      /* 3. Update the old tail entry to point to the new entry. */
>      write_atomic(&maptrack_entry(t, prev_tail).ref, handle);

Since the write_atomic() here can then also be converted, may I
ask that you then rename the local variable to just "tail" as
well?

> --- a/xen/include/xen/sched.h
> +++ b/xen/include/xen/sched.h
> @@ -255,7 +255,10 @@ struct vcpu
>      /* VCPU paused by system controller. */
>      int              controller_pause_count;
>  
> -    /* Grant table map tracking. */
> +    /*
> +     * Grant table map tracking. The lock maptrack_freelist_lock protect

Nit: protects

> +     * the access to maptrack_head and maptrack_tail.
> +     */

I'm inclined to suggest this doesn't need spelling out, considering ...

>      spinlock_t       maptrack_freelist_lock;
>      unsigned int     maptrack_head;
>      unsigned int     maptrack_tail;

... both the name of the lock and its placement next to the two
fields it protects. Also as per the docs change of the XSA-228 change,
the lock protects more than just these two fields, so the comment may
be misleading the way you have it now.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 13:31:51 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 13:31:51 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133923.249424 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmcab-0003cy-3P; Fri, 28 May 2021 13:31:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133923.249424; Fri, 28 May 2021 13:31:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmcaa-0003cr-VD; Fri, 28 May 2021 13:31:48 +0000
Received: by outflank-mailman (input) for mailman id 133923;
 Fri, 28 May 2021 13:31:47 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WKFi=KX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmcaZ-0003cl-7s
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 13:31:47 +0000
Received: from esa5.hc3370-68.iphmx.com (unknown [216.71.155.168])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id dd6dd223-130b-40ec-a9a1-4f0be38b0e18;
 Fri, 28 May 2021 13:31:46 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: dd6dd223-130b-40ec-a9a1-4f0be38b0e18
Date: Fri, 28 May 2021 15:31:37 +0200
From: Roger Pau =?utf-8?B?TW9ubsOp?= <roger.pau@citrix.com>
To: Julien Grall <julien@xen.org>
CC: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	<xen-devel@lists.xenproject.org>, Andrew Cooper <andrew.cooper3@citrix.com>,
	George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
	<iwj@xenproject.org>, Wei Liu <wl@xen.org>, Stefano Stabellini
	<sstabellini@kernel.org>
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
Message-ID: <YLDwuQrJsYU9PAFT@Air-de-Roger>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
 <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
 <YLCqQz9xS4HEpabG@Air-de-Roger>
 <27d54d81-bec8-5bc7-39cd-60e9761e726b@suse.com>
 <079f2f2a-0797-b650-ff47-7e595ab29589@xen.org>
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
Content-Transfer-Encoding: 8bit
In-Reply-To: <079f2f2a-0797-b650-ff47-7e595ab29589@xen.org>
MIME-Version: 1.0

On Fri, May 28, 2021 at 11:48:51AM +0100, Julien Grall wrote:
> Hi Jan,
> 
> On 28/05/2021 11:23, Jan Beulich wrote:
> > On 28.05.2021 10:30, Roger Pau Monné wrote:
> > > On Thu, May 27, 2021 at 07:48:41PM +0100, Julien Grall wrote:
> > > > On 27/05/2021 12:28, Jan Beulich wrote:
> > > > > port_is_valid() and evtchn_from_port() are fine to use without holding
> > > > > any locks. Accordingly acquire the per-domain lock slightly later in
> > > > > evtchn_close() and evtchn_bind_vcpu().
> > > > 
> > > > So I agree that port_is_valid() and evtchn_from_port() are fine to use
> > > > without holding any locks in evtchn_bind_vcpu(). However, it is misleading
> > > > to say there is no problem with evtchn_close().
> > > > 
> > > > evtchn_close() can be called with current != d and therefore there is a
> > > 
> > > The only instance where evtchn_close() is called with current != d and the
> > > domain could be unpaused is in free_xen_event_channel() AFAICT.
> > 
> > As long as the domain is not paused, ->valid_evtchns can't ever
> > decrease: The only point where this gets done is in evtchn_destroy().
> > Hence ...
> > 
> > > > risk that port_is_valid() may first return true and then false because
> > > > d->valid_evtchns is decremented in evtchn_destroy().
> > > 
> > > Hm, I guess you could indeed have parallel calls to
> > > free_xen_event_channel and evtchn_destroy in a way that
> > > free_xen_event_channel could race with valid_evtchns getting
> > > decreased?
> > 
> > ... I don't see this as relevant.
> > 
> > > > Thankfully the memory is still there. So the current code is okayish and I
> > > > could reluctantly accept this behavior being spread. However, I don't think
> > > > this should be left uncommented in either the code (maybe on top of
> > > > port_is_valid()?) or the commit message.
> > > 
> > > Indeed, I think we need some expansion of the comment in port_is_valid
> > > to clarify all this. I'm not sure I properly understand myself when
> > > it's fine to use port_is_valid() without holding the per-domain event
> > > lock.
> > 
> > Because of the above property plus the fact that even if
> > ->valid_evtchns decreases, the underlying struct evtchn instance
> > will remain valid (i.e. won't get de-allocated, which happens only
> > in evtchn_destroy_final()), it is always fine to use it without
> > lock. With this I'm having trouble seeing what would need adding
> > to port_is_valid()'s commentary.
> 
> Let's take the example of free_xen_event_channel(). The function checks
> whether the port is valid. If it is, then evtchn_close() will be called.
> 
> At this point, it would be fair for a developer to assume that
> port_is_valid() will also return true in evtchn_close().
> 
> To push to the extreme, if free_xen_event_channel() was the only caller of
> evtchn_close(), one could argue that the check in evtchn_close() could be a
> BUG_ON().
> 
> However, it can't be, because that would sooner or later turn into an XSA.
> 
> Effectively, it means that port_is_valid() *cannot* be used in an
> ASSERT()/BUG_ON(), and every caller should check the return value even if
> the port was previously validated.

We already have cases of port_is_valid() being used in ASSERTs (in the
shim) and a BUG_ON (with the domain event lock held in evtchn_close).

> So I think a comment on top of port_is_valid() would be really useful before
> someone rediscovers this the hard way.

I think I'm being extremely dull here, sorry. From your text I
understand that the value returned by port_is_valid() could be stale by
the time the caller reads it?

I think there's some condition that makes this value stale, and it's
not the common case?

Thanks, Roger.


From xen-devel-bounces@lists.xenproject.org Fri May 28 13:39:45 2021
Subject: Re: [PATCH 06/13] cpufreq: Export HWP parameters to userspace
To: Jason Andryuk <jandryuk@gmail.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>,
 =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>, Wei Liu
 <wl@xen.org>, George Dunlap <george.dunlap@citrix.com>,
 Ian Jackson <iwj@xenproject.org>, Julien Grall <julien@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>,
 xen-devel <xen-devel@lists.xenproject.org>
References: <20210503192810.36084-1-jandryuk@gmail.com>
 <20210503192810.36084-7-jandryuk@gmail.com>
 <e54c3aef-4c44-4302-f7f4-4f4733e33780@suse.com>
 <CAKf6xptN13CW78XajgyE0G8t2NjFVka8tzNO2oofjcw7tT7n8g@mail.gmail.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <f267ad7b-f2e7-665c-d989-45759cf18a02@suse.com>
Date: Fri, 28 May 2021 15:39:31 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <CAKf6xptN13CW78XajgyE0G8t2NjFVka8tzNO2oofjcw7tT7n8g@mail.gmail.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit

On 28.05.2021 15:19, Jason Andryuk wrote:
> On Thu, May 27, 2021 at 4:03 AM Jan Beulich <jbeulich@suse.com> wrote:
>> On 03.05.2021 21:28, Jason Andryuk wrote:
>>> --- a/xen/drivers/acpi/pmstat.c
>>> +++ b/xen/drivers/acpi/pmstat.c
>>> @@ -290,6 +290,12 @@ static int get_cpufreq_para(struct xen_sysctl_pm_op *op)
>>>              &op->u.get_para.u.ondemand.sampling_rate,
>>>              &op->u.get_para.u.ondemand.up_threshold);
>>>      }
>>> +
>>> +    if ( !strncasecmp(op->u.get_para.scaling_governor,
>>> +                      "hwp-internal", CPUFREQ_NAME_LEN) )
>>> +    {
>>> +        ret = get_hwp_para(policy, &op->u.get_para.u.hwp_para);
>>> +    }
>>>      op->u.get_para.turbo_enabled = cpufreq_get_turbo_status(op->cpuid);
>>
>> Nit: Unnecessary parentheses again, and with the leading blank line
>> you also want a trailing one. (As an aside I'm also not overly happy
>> to see the call keyed to the governor name. Is there really no other
>> indication that hwp is in use?)
> 
> This is preceded by similar checks for "userspace" and "ondemand", so
> it is following existing code.  Unlike other governors, hwp-internal
> is static.  It could be exported if you want to switch to comparing
> with cpufreq_driver.

Hmm, well, then feel free to keep the logic as you have it, except
please don't take the presence of unnecessary braces as an excuse to add
more.

>>> +    uint8_t hw_feature; /* bit flags for features */
>>> +    uint8_t hw_lowest;
>>> +    uint8_t hw_most_efficient;
>>> +    uint8_t hw_guaranteed;
>>> +    uint8_t hw_highest;
>>
>> Any particular reason for the recurring hw_ prefixes?
> 
> The idea was to differentiate values provided by CPU hardware from
> user-configured values.

I think that follows from their names already, even without the prefix.
I'd prefer it if you dropped them, but I'll try not to insist (I may
comment on this again in later versions, in case I've forgotten by then).

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 13:42:04 2021
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>, Julien Grall <julien@xen.org>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
 <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
 <YLCqQz9xS4HEpabG@Air-de-Roger>
 <27d54d81-bec8-5bc7-39cd-60e9761e726b@suse.com>
 <079f2f2a-0797-b650-ff47-7e595ab29589@xen.org>
 <YLDwuQrJsYU9PAFT@Air-de-Roger>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <68f032b6-62a0-c472-8b10-ec26d0407c93@suse.com>
Date: Fri, 28 May 2021 15:41:53 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YLDwuQrJsYU9PAFT@Air-de-Roger>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit

On 28.05.2021 15:31, Roger Pau Monné wrote:
> I think I'm being extremely dull here, sorry. From your text I
> understand that the value returned by port_is_valid() could be stale by
> the time the caller reads it?
> 
> I think there's some condition that makes this value stale, and it's
> not the common case?

It's evtchn_destroy() running in parallel which can have this effect.

Jan


From xen-devel-bounces@lists.xenproject.org Fri May 28 13:53:28 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162242-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162242: tolerable FAIL - PUSHED
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 May 2021 13:53:21 +0000

flight 162242 xen-unstable real [real]
flight 162248 xen-unstable real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162242/
http://logs.test-lab.xenproject.org/osstest/logs/162248/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162248-retest

Regressions which are regarded as allowable (not blocking):
 test-armhf-armhf-xl-rtds    18 guest-start/debian.repeat fail REGR. vs. 162230

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162230
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162230
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162230
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162230
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162230
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162230
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162230
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162230
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162230
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162230
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162230
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  9fdcf851689cb2a9501d3947cb5d767d9c7797e8
baseline version:
 xen                  7c110dd335a17be52549dc4b9dfbfba8165ade40

Last test of basis   162230  2021-05-27 12:57:28 Z    1 days
Testing same since   162242  2021-05-28 02:17:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   7c110dd335..9fdcf85168  9fdcf851689cb2a9501d3947cb5d767d9c7797e8 -> master


From xen-devel-bounces@lists.xenproject.org Fri May 28 14:26:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 14:26:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133958.249491 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmdRK-0002V7-DU; Fri, 28 May 2021 14:26:18 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133958.249491; Fri, 28 May 2021 14:26:18 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmdRK-0002V0-AI; Fri, 28 May 2021 14:26:18 +0000
Received: by outflank-mailman (input) for mailman id 133958;
 Fri, 28 May 2021 14:26:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lmdRI-0002Uu-Sx
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 14:26:16 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmdRG-0008B9-T8; Fri, 28 May 2021 14:26:14 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmdRG-00065y-LE; Fri, 28 May 2021 14:26:14 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=tFXYmzd3ob8Pm3+whOyKVcgWLeJIn7L1AD0iKVPLCdo=; b=yuZo5r1NeAEw/BMzWi6fb9u28b
	VA4/SZIL9mUBFbW2ZSSk5WQkexRevyrJmRVm3N+xRtECCfnRyrqFNRrklzrxAlcBFSEbSENj5oNcx
	TLpzIHvuSDpD0/3vnxjgxuxL4fnYqRtZnvfsz/C8X/5EA6i9jzhJwHa1VGDIeABU37tA=;
Subject: Re: [PATCH v6 1/3] evtchn: slightly defer lock acquire where possible
To: =?UTF-8?Q?Roger_Pau_Monn=c3=a9?= <roger.pau@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>,
 "xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
 Andrew Cooper <andrew.cooper3@citrix.com>,
 George Dunlap <George.Dunlap@eu.citrix.com>, Ian Jackson
 <iwj@xenproject.org>, Wei Liu <wl@xen.org>,
 Stefano Stabellini <sstabellini@kernel.org>
References: <01bbf3d4-ca6a-e837-91fe-b34aa014564c@suse.com>
 <5939858e-1c7c-5658-bc2d-0c9024c74040@suse.com>
 <938eb888-ec15-feb1-19f7-b90dfee822ae@xen.org>
 <YLCqQz9xS4HEpabG@Air-de-Roger>
 <27d54d81-bec8-5bc7-39cd-60e9761e726b@suse.com>
 <079f2f2a-0797-b650-ff47-7e595ab29589@xen.org>
 <YLDwuQrJsYU9PAFT@Air-de-Roger>
From: Julien Grall <julien@xen.org>
Message-ID: <af864de5-d79d-4ed7-3778-bae6455185e4@xen.org>
Date: Fri, 28 May 2021 15:26:12 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YLDwuQrJsYU9PAFT@Air-de-Roger>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit

Hi Roger,

On 28/05/2021 14:31, Roger Pau Monné wrote:
> On Fri, May 28, 2021 at 11:48:51AM +0100, Julien Grall wrote:
>> Hi Jan,
>>
>> On 28/05/2021 11:23, Jan Beulich wrote:
>>> On 28.05.2021 10:30, Roger Pau Monné wrote:
>>>> On Thu, May 27, 2021 at 07:48:41PM +0100, Julien Grall wrote:
>>>>> On 27/05/2021 12:28, Jan Beulich wrote:
>>>>>> port_is_valid() and evtchn_from_port() are fine to use without holding
>>>>>> any locks. Accordingly acquire the per-domain lock slightly later in
>>>>>> evtchn_close() and evtchn_bind_vcpu().
>>>>>
>>>>> So I agree that port_is_valid() and evtchn_from_port() are fine to use
>>>>> without holding any locks in evtchn_bind_vcpu(). However, this is misleading
>>>>> to say there is no problem with evtchn_close().
>>>>>
>>>>> evtchn_close() can be called with current != d and therefore, there is a
>>>>
>>>> The only instances evtchn_close is called with current != d and the
>>>> domain could be unpaused is in free_xen_event_channel AFAICT.
>>>
>>> As long as the domain is not paused, ->valid_evtchns can't ever
>>> decrease: The only point where this gets done is in evtchn_destroy().
>>> Hence ...
>>>
>>>>> risk that port_is_valid() may be valid and then invalid because
>>>>> d->valid_evtchns is decremented in evtchn_destroy().
>>>>
>>>> Hm, I guess you could indeed have parallel calls to
>>>> free_xen_event_channel and evtchn_destroy in a way that
>>>> free_xen_event_channel could race with valid_evtchns getting
>>>> decreased?
>>>
>>> ... I don't see this as relevant.
>>>
>>>>> Thankfully the memory is still there. So the current code is okayish and I
>>>>> could reluctantly accept this behavior to be spread. However, I don't think
>>>>> this should be left uncommented in both the code (maybe on top of
>>>>> port_is_valid()?) and the commit message.
>>>>
>>>> Indeed, I think we need some expansion of the comment in port_is_valid
>>>> to clarify all this. I'm not sure I understand it properly myself when
>>>> it's fine to use port_is_valid without holding the per domain event
>>>> lock.
>>>
>>> Because of the above property plus the fact that even if
>>> ->valid_evtchns decreases, the underlying struct evtchn instance
>>> will remain valid (i.e. won't get de-allocated, which happens only
>>> in evtchn_destroy_final()), it is always fine to use it without
>>> lock. With this I'm having trouble seeing what would need adding
>>> to port_is_valid()'s commentary.
>>
>> Let's take the example of free_xen_event_channel(). The function checks
>> whether the port is valid. If it is, then evtchn_close() will be called.
>>
>> At this point, it would be fair for a developer to assume that
>> port_is_valid() will also return true in evtchn_close().
>>
>> To push to the extreme, if free_xen_event_channel() was the only caller of
>> evtchn_close(), one could argue that the check in evtchn_close() could be a
>> BUG_ON().
>>
>> However, this can't be the case, because it would sooner or later turn into an XSA.
>>
>> Effectively, it means that is_port_valid() *cannot* be used in an
>> ASSERT()/BUG_ON() and every caller should check the return even if the port
>> was previously validated.
> 
> We already have cases of is_port_valid being used in ASSERTs (in the
> shim) and a BUG_ON (with the domain event lock held in evtchn_close).

I was likely a bit too restrictive in my remark. The BUG_ON() in 
evtchn_close() is fine because it is used on a (in theory) open port 
with both domains' event locks held.

This would be more of a problem if we had something like:

if ( !port_is_valid(d1, port1) )
   return -EINVAL;

/* .... */

BUG_ON(!port_is_valid(d1, port1));

> 
>> So I think a comment on top of is_port_valid() would be really useful before
>> someone rediscover it the hard way.
> 
> I think I'm being extremely dull here, sorry. From your text I
> understand that the value returned by is_port_valid could be stale by
> the time the user reads it?
> 
> I think there's some condition that makes this value stale, and it's
> not the common case?

It is any code that races with evtchn_destroy().

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 28 14:37:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 14:37:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133984.249520 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmdcD-0005Bw-3i; Fri, 28 May 2021 14:37:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133984.249520; Fri, 28 May 2021 14:37:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmdcD-0005Bp-0p; Fri, 28 May 2021 14:37:33 +0000
Received: by outflank-mailman (input) for mailman id 133984;
 Fri, 28 May 2021 14:37:31 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XCBB=KX=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lmdcB-0005Bd-OP
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 14:37:31 +0000
Received: from mail-wr1-x42b.google.com (unknown [2a00:1450:4864:20::42b])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd4f5837-bec9-4d3c-a07e-6e66eed003f6;
 Fri, 28 May 2021 14:37:30 +0000 (UTC)
Received: by mail-wr1-x42b.google.com with SMTP id g17so3475242wrs.13
 for <xen-devel@lists.xenproject.org>; Fri, 28 May 2021 07:37:30 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id v12sm7818133wru.73.2021.05.28.07.37.28
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 May 2021 07:37:28 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 6EBCF1FF7E;
 Fri, 28 May 2021 15:37:27 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd4f5837-bec9-4d3c-a07e-6e66eed003f6
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=user-agent:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=yb962kXjxFQoOV5zXryyLTrwVUs3L42/5UgEjwUTIy0=;
        b=cV61npoaqMUSo85o0e4d2X7HzeO/uwQ9riCNEjajYfyHuglaVIOAAyMAmN6Y35YECH
         PT7os7q3sb4yly3lB8pFs7qZo+aYMu/1oSwuUBHkCPyGLYyUaz5OMOnTY+VQrBTshY+j
         8iCfBE08KnAxXiH+mDYArqYGDvyjrXzkVP9y3eRvgln6ZkBPu/p00I6HgCmiwuwBij5P
         k8qS9PJ1drkIkWqaVsAIsxs4PJwx1Ktb0q+Mtxtn86KE0Gi31Oo+OBqcSSZ+vquM7SGh
         NQebsjCdH1yHEjGgIU04ZKyfbgzNcceKvKLIeKcEhRdg28IXhkj2gZpypFizmoTMtAgJ
         5Ojg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:user-agent:from:to:cc:subject:date:message-id
         :mime-version:content-transfer-encoding;
        bh=yb962kXjxFQoOV5zXryyLTrwVUs3L42/5UgEjwUTIy0=;
        b=NAmjQaXcXzjqYVv/ik25j2d5HTEBxhFe9ZryqnGE1swEmGMTQJnVe6I11jDU8GdhmC
         TeGjWql+V5gWQS5sJJFrcc1TfQKvrxf1fRE21XDwXfqQUOSfPlQzuXisGEtevVTnhzSW
         cusMSEiFbzn1/Ei5NFak8ISfcAfCNod90NzIAwHaDn8UbpiZ8U7oNlIXK78Y7xrI7D03
         Q5HdC6BL5vDFOhQn+pB0XQFzs253depmCVVA1vMF1vffsSlRJ/M6kxXYFLe+V4GZCzW5
         3Fta+elImKyvMSqprfe2P8b60mTgWosbt93xczppHw5FavQtKj+gSVUxsWy7i/KhFjfD
         Z2GA==
X-Gm-Message-State: AOAM531RESRMe+G+ZVHl4XklL8Mr4SuvolI8HNPtT4zF7EbWvSKyD6fC
	udL23NaeyE/Bzfh8QtY2fp52hQ==
X-Google-Smtp-Source: ABdhPJxi2pof58uMldIFpTp+SWl56DBDbsvnQUM01uGNNwICZGQ+ZdX89CN5TqitnS/36ShFNgrNIQ==
X-Received: by 2002:a5d:59ae:: with SMTP id p14mr9116575wrr.214.1622212649670;
        Fri, 28 May 2021 07:37:29 -0700 (PDT)
User-agent: mu4e 1.5.13; emacs 28.0.50
From: Alex =?utf-8?Q?Benn=C3=A9e?= <alex.bennee@linaro.org>
To: Debian Install System Team <debian-boot@lists.debian.org>,
 pkg-grub-devel@alioth-lists.debian.net,
 pkg-xen-devel@lists.alioth.debian.org, xen-devel@lists.xenproject.org
Cc: Steve McIntyre <93sam@debian.org>, Julien Grall <julien@xen.org>,
 Ruchika Gupta <ruchika.gupta@linaro.org>, Ilias Apalodimas
 <ilias.apalodimas@linaro.org>, qemu-arm <qemu-arm@nongnu.org>
Subject: What's missing for arm64 Xen boot with FDT via Grub in Debian
 Bullseye?
Date: Fri, 28 May 2021 13:49:14 +0100
Message-ID: <87mtse2ac8.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Hi,

I'm currently trying to pull together the threads for booting Xen on
Debian. I'm currently doing this within QEMU's TCG emulation and the
"virt" machine model:

  -machine type=virt,virtualization=on,gic-version=3 \
  -cpu max,pauth-impdef=on

with the firmware on my Ubuntu machine:

  -drive if=pflash,file=/usr/share/AAVMF/AAVMF_CODE.fd,format=raw,readonly=on -drive if=pflash,file=$HOME/images/AAVMF_VARS.fd,format=raw

(qemu-efi-aarch64 Version: 0~20180205.c0d9813c-2ubuntu0.3)

When booting this way I get the Grub menu and Xen is loaded by Grub but
falls over later:

  (XEN) MODULE[0]: 00000000f5869000 - 00000000f59b60c8 Xen
  (XEN) MODULE[1]: 000000013857d000 - 0000000138580000 Device Tree
  (XEN) MODULE[2]: 00000000f73a1000 - 00000000f8da0780 Kernel
  (XEN) MODULE[3]: 00000000f59b7000 - 00000000f739f99b Ramdisk
  (XEN)
  (XEN) CMDLINE[00000000f73a1000]:chosen placeholder root=UUID=435201aa-c5cf-4e7a-8107-5eef28844188 ro console=hvc0
  (XEN)
  (XEN) Command line: placeholder dom0_mem=2G loglvl=all guest_loglvl=all no-real-mode edd=off
  (XEN) parameter "placeholder" unknown!
  (XEN) parameter "no-real-mode" unknown!
  (XEN) parameter "edd" unknown!
  (XEN) Domain heap initialised
  (XEN) Booting using Device Tree
  (XEN) Platform: Generic System
  (XEN)
  (XEN) ****************************************
  (XEN) Panic on CPU 0:
  (XEN) Unable to find a compatible timer in the device tree
  (XEN) ****************************************

It seems like there are bits of the DT missing. I can however
successfully boot Xen with the Linux guest using the guest-loader device
and bypassing the firmware/boot code step. This gives:

  (XEN) MODULE[0]: 0000000040200000 - 000000004034d0c8 Xen
  (XEN) MODULE[1]: 0000000048000000 - 0000000048100000 Device Tree
  (XEN) MODULE[2]: 0000000046000000 - 0000000046eb2200 Kernel
  (XEN)
  (XEN) CMDLINE[0000000046000000]:chosen root=/dev/sda2 console=hvc0 earlyprintk=xen
  (XEN)
  (XEN) Command line: dom0_mem=4G dom0_max_vcpus=4
  (XEN) Domain heap initialised
  (XEN) Booting using Device Tree
  (XEN) Platform: Generic System
  (XEN) Taking dtuart configuration from /chosen/stdout-path
  (XEN) Looking for dtuart at "/pl011@9000000", options ""
   Xen 4.15.1-pre
  (XEN) Xen version 4.15.1-pre (alex.bennee@) (aarch64-linux-gnu-gcc (Debian 8.3.0-2) 8.3.0) debug=y Tue May 18 09:34:55 UTC 2021
  (XEN) Latest ChangeSet:
  (XEN) build-id: a50d8f03a1a15662ac7c4e5f73f2f544a6739df2
  (XEN) Processor: 411fd070: "ARM Limited", variant: 0x1, part 0xd07, rev 0x0
  (XEN) 64-bit Execution:
  (XEN)   Processor Features: 0000000001000222 0000000000000000
  (XEN)     Exception Levels: EL3:No EL2:64+32 EL1:64+32 EL0:64+32
  (XEN)     Extensions: FloatingPoint AdvancedSIMD GICv3-SysReg
  (XEN)   Debug Features: 0000000010305106 0000000000000000
  (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
  (XEN)   Memory Model Features: 0000000000001124 0000000000000000
  (XEN)   ISA Features:  0000000000011120 0000000000000000
  (XEN) 32-bit Execution:
  (XEN)   Processor Features: 00000131:10011001
  (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
  (XEN)     Extensions: GenericTimer
  (XEN)   Debug Features: 03010066
  (XEN)   Auxiliary Features: 00000000
  (XEN)   Memory Model Features: 10101105 40000000 01260000 02102211
  (XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
  (XEN) Using SMC Calling Convention v1.0
  (XEN) Using PSCI v0.2
  (XEN) SMP: Allowing 8 CPUs
  (XEN) enabled workaround for: ARM erratum 832075
  (XEN) enabled workaround for: ARM erratum 834220
  (XEN) enabled workaround for: ARM erratum 1319367
  (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 62500 KHz
  (XEN) GICv3 initialization:
  (XEN)       gic_dist_addr=0x00000008000000
  (XEN)       gic_maintenance_irq=25
  (XEN)       gic_rdist_stride=0
  (XEN)       gic_rdist_regions=1
  (XEN)       redistributor regions:
  (XEN)         - region 0: 0x000000080a0000 - 0x00000009000000
  (XEN) GICv3: 256 lines, (IID 0000043b).
  (XEN) GICv3: CPU0: Found redistributor in region 0 @000000004001c000

Attempting to boot with acpi=on still sees Grub attempt to use DT to
boot the hypervisor. However selecting the kernel directly boots with
ACPI (which is a shame as I'd like to see what FDT it gets presented
with).
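
For what it's worth, one way to inspect the DT that QEMU itself
generates for the virt machine is its documented dumpdtb machine
property (this assumes dtc is installed; file names here are arbitrary):

```shell
# Have QEMU write out the generated device tree and exit,
# using the same machine options as the failing boot.
qemu-system-aarch64 \
    -machine virt,virtualization=on,gic-version=3,dumpdtb=virt.dtb \
    -cpu max -display none

# Decompile it for reading; the arch timer should show up as a
# node with compatible = "arm,armv8-timer".
dtc -I dtb -O dts -o virt.dts virt.dtb
grep -n 'armv8-timer' virt.dts
```

Comparing that output against whatever Grub ends up handing to Xen
might show which nodes go missing along the way.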

The full command line for booting via the guest-loader is:

  ./qemu-system-aarch64 \
    -machine virt,virtualization=3Don,gic-version=3D3 \
    -cpu max,pauth-impdef=3Don \
    -serial mon:stdio \
    -netdev user,id=3Dnet1,hostfwd=3Dtcp::2222-:22 \
    -device virtio-net-pci,netdev=3Dnet1 \
    -device virtio-scsi-pci \
    -drive file=3D/dev/zvol/hackpool-0/debian-buster-arm64,id=3Dhd0,index=
=3D0,if=3Dnone,format=3Draw \
    -device scsi-hd,drive=3Dhd0 \
    -display none \
    -m 16384 \
    -kernel ~/lsrc/xen/xen.build.arm64-xen/xen/xen \
    -append "dom0_mem=3D4G dom0_max_vcpus=3D4" \
    -device guest-loader,addr=3D0x46000000,kernel=3D$HOME/lsrc/linux.git/bu=
ilds/arm64/arch/arm64/boot/Image,bootargs=3D"root=3D/dev/sda2 console=3Dhvc=
0 earlyprintk=3Dxen" \
    -smp 8

So some questions:

  - is Xen on arm64 tested on Debian Bullseye? If so what platform?
  - how do I tell Grub to do a straight FDT boot with the DT from the firmware?
  - are there any missing pieces I should be aware of?

I appreciate that ACPI is the preferred enterprise way of booting but at
the moment I think FDT is probably preferred because:

  - lack of real HW with decent ACPI (my MachiatoBin only boots with DT)
  - I want to try additional hypervisors that don't have ACPI-aware implementations

That said if I can get an ACPI version of Xen booting via Grub that
would be an improvement.

-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Fri May 28 15:13:00 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 15:13:00 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133993.249531 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmeAP-0000t2-21; Fri, 28 May 2021 15:12:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133993.249531; Fri, 28 May 2021 15:12:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmeAO-0000sv-VE; Fri, 28 May 2021 15:12:52 +0000
Received: by outflank-mailman (input) for mailman id 133993;
 Fri, 28 May 2021 15:12:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W+lD=KX=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lmeAN-0000sp-UO
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 15:12:51 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 61c4e699-1336-41c1-9105-de823348d1f9;
 Fri, 28 May 2021 15:12:51 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id F03601FD2F;
 Fri, 28 May 2021 15:12:49 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 1D59411906;
 Fri, 28 May 2021 15:12:49 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id uUSnBHEIsWB6CQAALh3uQQ
 (envelope-from <dfaggioli@suse.com>); Fri, 28 May 2021 15:12:49 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 61c4e699-1336-41c1-9105-de823348d1f9
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622214769; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=EnDoJ8l4mQ5d/oocwttKAMYLIuqh+t7dfloUxS95UlA=;
	b=XuRHGe1F9Xacu6JYqT0BZL4nXcd9HIGD2lpZh3DCv6dcnJw9RPLwdr9Ni9NIWKPj7JB8su
	AZnUtC87mcs3vyCndWofRuJ9shdZnraDsxOECO9LiJdKQbHitgJ5zoQUOk4idqVGrRpVI/
	VAj5oWDHHgVifKfzPHAXl9mubMSYt58=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622214769; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding;
	bh=EnDoJ8l4mQ5d/oocwttKAMYLIuqh+t7dfloUxS95UlA=;
	b=XuRHGe1F9Xacu6JYqT0BZL4nXcd9HIGD2lpZh3DCv6dcnJw9RPLwdr9Ni9NIWKPj7JB8su
	AZnUtC87mcs3vyCndWofRuJ9shdZnraDsxOECO9LiJdKQbHitgJ5zoQUOk4idqVGrRpVI/
	VAj5oWDHHgVifKfzPHAXl9mubMSYt58=
Subject: [PATCH] credit2: make sure we pick a runnable unit from the runq if
 there is one
From: Dario Faggioli <dfaggioli@suse.com>
To: xen-devel@lists.xenproject.org
Cc: =?utf-8?q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Dion Kant <g.w.kant@hunenet.nl>, George Dunlap <george.dunlap@citrix.com>,
 Jan Beulich <jbeulich@suse.com>,
 =?utf-8?q?Micha=C5=82_Leszczy=C5=84ski?= <michal.leszczynski@cert.pl>,
 Dion Kant <g.w.kant@hunenet.nl>
Date: Fri, 28 May 2021 17:12:48 +0200
Message-ID: <162221476843.1378.16573083798333423966.stgit@Wayrath>
User-Agent: StGit/0.23
MIME-Version: 1.0
Content-Type: text/plain; charset="utf-8"
Content-Transfer-Encoding: 8bit
Authentication-Results: imap.suse.de;
	none
X-Spam-Level: **
X-Spam-Score: 2.00
X-Spamd-Result: default: False [2.00 / 100.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 DKIM_SIGNED(0.00)[suse.com:s=susede1];
	 RCPT_COUNT_SEVEN(0.00)[7];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MIME_TRACE(0.00)[0:+];
	 MID_RHS_NOT_FQDN(0.50)[];
	 RCVD_COUNT_TWO(0.00)[2];
	 SUSPICIOUS_RECIPS(1.50)[]
X-Spam-Flag: NO

A !runnable unit (temporarily) present in the runq may cause us to
stop scanning the runq itself too early. Of course, we don't run any
non-runnable vCPUs, but we end the scan and fall back to picking the
idle unit. In other words, this prevents us from finding and picking
the actual unit that we're meant to start running (which might be
further ahead in the runq).

Depending on the vCPU pinning configuration, this may leave such a
unit stuck in the runq for a long time, causing malfunction inside
the guest.

Fix this by checking runnable/non-runnable status up-front, in the runq
scanning function.

Reported-by: Michał Leszczyński <michal.leszczynski@cert.pl>
Reported-by: Dion Kant <g.w.kant@hunenet.nl>
Signed-off-by: Dario Faggioli <dfaggioli@suse.com>
---
Cc: George Dunlap <george.dunlap@citrix.com>
Cc: Jan Beulich <jbeulich@suse.com>
Cc: Michał Leszczyński <michal.leszczynski@cert.pl>
Cc: Dion Kant <g.w.kant@hunenet.nl>
---
This is a bugfix and it solves the following problems, reported in
various ways:
* https://lists.xen.org/archives/html/xen-devel/2020-05/msg01985.html
* https://lists.xenproject.org/archives/html/xen-devel/2020-10/msg01561.html
* https://bugzilla.opensuse.org/show_bug.cgi?id=1179246

Hence, it should be backported, I'd say as far as possible... At least
to all the releases that have Credit2 as the default scheduler.

I will look further into this, and I think I can provide the backports
myself.

A *huge* thank you to Dion Kant, who arranged for me to use a box
where the bug was particularly easy to reproduce, and who kept it
available for all the time it took me to finally work on this
properly and nail it! :-)
---
 xen/common/sched/credit2.c |    7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/xen/common/sched/credit2.c b/xen/common/sched/credit2.c
index eb5e5a78c5..f5c1e5b944 100644
--- a/xen/common/sched/credit2.c
+++ b/xen/common/sched/credit2.c
@@ -3463,6 +3463,10 @@ runq_candidate(struct csched2_runqueue_data *rqd,
                         (unsigned char *)&d);
         }
 
+        /* Skip non runnable units that we (temporarily) have in the runq */
+        if ( unlikely(!unit_runnable_state(svc->unit)) )
+            continue;
+
         /* Only consider vcpus that are allowed to run on this processor. */
         if ( !cpumask_test_cpu(cpu, svc->unit->cpu_hard_affinity) )
             continue;
@@ -3496,8 +3500,7 @@ runq_candidate(struct csched2_runqueue_data *rqd,
          * some budget, then choose it.
          */
         if ( (yield || svc->credit > snext->credit) &&
-             (!has_cap(svc) || unit_grab_budget(svc)) &&
-             unit_runnable_state(svc->unit) )
+             (!has_cap(svc) || unit_grab_budget(svc)) )
             snext = svc;
 
         /* In any case, if we got this far, break. */




From xen-devel-bounces@lists.xenproject.org Fri May 28 15:18:23 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 15:18:23 +0000
Received: from list by lists.xenproject.org with outflank-mailman.133999.249542 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmeFf-0001ah-OH; Fri, 28 May 2021 15:18:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 133999.249542; Fri, 28 May 2021 15:18:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmeFf-0001aa-K5; Fri, 28 May 2021 15:18:19 +0000
Received: by outflank-mailman (input) for mailman id 133999;
 Fri, 28 May 2021 15:18:18 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W+lD=KX=suse.com=dfaggioli@srs-us1.protection.inumbo.net>)
 id 1lmeFe-0001aU-Ht
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 15:18:18 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c27dea2c-51ef-4644-ba42-18836297bab1;
 Fri, 28 May 2021 15:18:17 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id E03C41FD2F;
 Fri, 28 May 2021 15:18:16 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 072E811906;
 Fri, 28 May 2021 15:18:15 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id jNhaObcJsWC+DAAALh3uQQ
 (envelope-from <dfaggioli@suse.com>); Fri, 28 May 2021 15:18:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c27dea2c-51ef-4644-ba42-18836297bab1
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622215096; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=we9YjhvjOoTuBvT2KSjS85nvFYPt8UnNItAKK7Irsmk=;
	b=WmDX6XUFF7mbuEykS+nioj3EfAsKpP+J+BPtSdRqI1fb02Uv5I0Pxf3BYdpP8JjL0VdtO/
	NLBUyuatiLwfH8BMevN4RaG1VBv3yv8hhrdt4Lr8y5lqCUQaf5Qf/wbHmezmSube4OIf8J
	wXB5JW9vDElnjCZTYK9Mf9m8O6qSyCY=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622215096; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 in-reply-to:in-reply-to:references:references;
	bh=we9YjhvjOoTuBvT2KSjS85nvFYPt8UnNItAKK7Irsmk=;
	b=WmDX6XUFF7mbuEykS+nioj3EfAsKpP+J+BPtSdRqI1fb02Uv5I0Pxf3BYdpP8JjL0VdtO/
	NLBUyuatiLwfH8BMevN4RaG1VBv3yv8hhrdt4Lr8y5lqCUQaf5Qf/wbHmezmSube4OIf8J
	wXB5JW9vDElnjCZTYK9Mf9m8O6qSyCY=
Message-ID: <ab4e6ff41a08a874bba805873c94e0d8fa1d0986.camel@suse.com>
Subject: Re: Ping: [Bugfix PATCH for-4.15] xen: credit2: fix per-entity load
 tracking when continuing running
From: Dario Faggioli <dfaggioli@suse.com>
To: Jan Beulich <jbeulich@suse.com>, George Dunlap <george.dunlap@citrix.com>
Cc: xen-devel@lists.xenproject.org
Date: Fri, 28 May 2021 17:18:15 +0200
In-Reply-To: <a3f31cde-f1e5-e643-28bc-cdb2b36f372d@suse.com>
References: <161615605709.5036.4052641880659992679.stgit@Wayrath>
	 <a3f31cde-f1e5-e643-28bc-cdb2b36f372d@suse.com>
Content-Type: multipart/signed; micalg="pgp-sha256";
	protocol="application/pgp-signature"; boundary="=-+F0ocCRSGkHJGFd8OTX0"
User-Agent: Evolution 3.40.1 (by Flathub.org) 
MIME-Version: 1.0
Authentication-Results: imap.suse.de;
	none
X-Spam-Level: 
X-Spam-Score: -2.10
X-Spamd-Result: default: False [-2.10 / 100.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[3];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 MIME_GOOD(-0.20)[multipart/signed,text/plain];
	 DKIM_SIGNED(0.00)[suse.com:s=susede1];
	 SIGNED_PGP(-2.00)[];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MIME_TRACE(0.00)[0:+,1:+,2:~];
	 RCVD_COUNT_TWO(0.00)[2];
	 MID_RHS_MATCH_FROM(0.00)[]
X-Spam-Flag: NO


--=-+F0ocCRSGkHJGFd8OTX0
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: quoted-printable

On Tue, 2021-04-27 at 10:35 +0200, Jan Beulich wrote:
> On 19.03.2021 13:14, Dario Faggioli wrote:
> >
> > ---
> > Cc: George Dunlap <george.dunlap@citrix.com>
> > Cc: Ian Jackson <iwj@xenproject.org>
> > ---
> > Despite the limited effect, it's a bug. So:
> > - it should be backported;
> > - I think it should be included in 4.15. The risk is pretty low, for
> >   the same reasons already explained when describing its limited
> > impact.
> 
> I'm a little puzzled to find this is still in my waiting-to-go-in
> folder, for not having had an ack (or otherwise). George?
> 
Yeah, and it probably still is, so... George? :-D

BTW, I'm dropping IanJ as, quite obviously, this won't go in 4.15. :-P
It should however (after it goes in) be backported (to 4.15 and
probably even earlier... I can have a look myself if it helps).

Thanks and Regards
-- 
Dario Faggioli, Ph.D
http://about.me/dario.faggioli
Virtualization Software Engineer
SUSE Labs, SUSE https://www.suse.com/
-------------------------------------------------------------------
<<This happens because _I_ choose it to happen!>> (Raistlin Majere)

--=-+F0ocCRSGkHJGFd8OTX0
Content-Type: application/pgp-signature; name="signature.asc"
Content-Description: This is a digitally signed message part
Content-Transfer-Encoding: 7bit

-----BEGIN PGP SIGNATURE-----

iQIzBAABCAAdFiEES5ssOj3Vhr0WPnOLFkJ4iaW4c+4FAmCxCbcACgkQFkJ4iaW4
c+4ipA/+IEaOlOIl/41XnhBGn5YKD0iwoEZR/XNjs9wz1IUrd49+S3aOkH5kcTLE
joI+E4TvAsRp+QBNAdRl2AKOCbv1B+mYqLzPJARzx0/pYX5+0tt1SMvvWh88kJ24
1S8iEl5i2+KS6htytaFvs1BGXzQn5X3B8HBh8EpKwy6+Bh8gU6husf40pgpGIOMC
8tesaRP0GDITz88rwb807Axfelt8Pca0QKWe6z4XXdsqimCJd1IgGzsKGQKdzlFa
bxaNU9A4FAcRUmXSAcd+Mv+OPNoQU13rtqpiIkxMgEVDSBr2xuLjR7jUIZ3Ci65x
V3cCqAonNkOvLMlbN9YNtXPXfUFkHQkFDzYholcwhzMDOPNh/G/b6dtPo7ZM0LoY
d8Twr2UGClY6tDpOPfFTLy/5TaakorRO7uaG8FE/UYBbYdXF5xH1GOklvdYRfLe0
+w2AOT9/lw6luA3+0IWi9EgFMwGvKoUKpUWW/ftQTwtYCKV1S3tYVmMpN/c4YVWA
9QljNDEGNv9jswTiFA++Hdp+4QcmWcB3SF7aa0kk+ClmFdik5Y+Q+ChUVfnYAp5L
4Iw4BJbKfb/mB9KAsUYk3EU/xT/dp+qJDH4G2ziJ+4YIM/18cnZ+QMNoJFsuYxHa
BZSvcsfD4Vi+7l8gOYu7wK527NDmR3fe8bMh5jZgBnCGhpsGSdE=
=JRaH
-----END PGP SIGNATURE-----

--=-+F0ocCRSGkHJGFd8OTX0--



From xen-devel-bounces@lists.xenproject.org Fri May 28 15:28:44 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 15:28:44 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134005.249553 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmePZ-00032d-NZ; Fri, 28 May 2021 15:28:33 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134005.249553; Fri, 28 May 2021 15:28:33 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmePZ-00032W-JH; Fri, 28 May 2021 15:28:33 +0000
Received: by outflank-mailman (input) for mailman id 134005;
 Fri, 28 May 2021 15:28:32 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>) id 1lmePY-00032Q-F6
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 15:28:32 +0000
Received: from xenbits.xenproject.org ([104.239.192.120])
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmePU-0000l4-6J; Fri, 28 May 2021 15:28:28 +0000
Received: from [54.239.6.188] (helo=a483e7b01a66.ant.amazon.com)
 by xenbits.xenproject.org with esmtpsa
 (TLS1.3:ECDHE_RSA_AES_128_GCM_SHA256:128) (Exim 4.92)
 (envelope-from <julien@xen.org>)
 id 1lmePT-0002Gs-Vj; Fri, 28 May 2021 15:28:28 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed; d=xen.org;
	s=20200302mail; h=Content-Transfer-Encoding:Content-Type:In-Reply-To:
	MIME-Version:Date:Message-ID:From:References:Cc:To:Subject;
	bh=R/moMEWBQRAgkgeccBxxZhe4pxMQezg1ErEy4FNVmVM=; b=1+LWuObCW3ZRGkbMxmFe9e6X4x
	5ERaKvgkGNJkji2RPYd8fm+KB3Awjq8rzwh7GU9ep6vMteAtP+p/qy5fW1Xd4nBVUvHSnTj+uXwFO
	adcZ4/geumUKmltrOF6ov/EVb+7L+6v4ksND5SCYBBbUuAdOqMF3YE2qRf53XK0hhHsQ=;
Subject: Re: What's missing for arm64 Xen boot with FDT via Grub in Debian
 Bullseye?
To: =?UTF-8?Q?Alex_Benn=c3=a9e?= <alex.bennee@linaro.org>,
 Debian Install System Team <debian-boot@lists.debian.org>,
 pkg-grub-devel@alioth-lists.debian.net,
 pkg-xen-devel@lists.alioth.debian.org, xen-devel@lists.xenproject.org
Cc: Steve McIntyre <93sam@debian.org>,
 Ruchika Gupta <ruchika.gupta@linaro.org>,
 Ilias Apalodimas <ilias.apalodimas@linaro.org>,
 qemu-arm <qemu-arm@nongnu.org>
References: <87mtse2ac8.fsf@linaro.org>
From: Julien Grall <julien@xen.org>
Message-ID: <0df82c26-078a-83de-952c-cbad06b3ad2d@xen.org>
Date: Fri, 28 May 2021 16:28:25 +0100
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:78.0)
 Gecko/20100101 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <87mtse2ac8.fsf@linaro.org>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-GB
Content-Transfer-Encoding: 8bit



On 28/05/2021 13:49, Alex Bennée wrote:
> 
> Hi,

Hi Alex,

> I'm currently trying to pull together the threads for booting Xen on
> Debian. I'm currently doing this within QEMU's TCG emulation and the
> "virt" machine model:
> 
>    -machine type=virt,virtualization=on,gic-version=3 \
>    -cpu max,pauth-impdef=on
> 
> with the firmware on my Ubuntu machine:
> 
>    -drive if=pflash,file=/usr/share/AAVMF/AAVMF_CODE.fd,format=raw,readonly=on -drive if=pflash,file=$HOME/images/AAVMF_VARS.fd,format=raw
> 
> (qemu-efi-aarch64 Version: 0~20180205.c0d9813c-2ubuntu0.3)
> 
> When booting this way I get the Grub menu and Xen is loaded by Grub but
> falls over later:
> 
>    (XEN) MODULE[0]: 00000000f5869000 - 00000000f59b60c8 Xen
>    (XEN) MODULE[1]: 000000013857d000 - 0000000138580000 Device Tree
>    (XEN) MODULE[2]: 00000000f73a1000 - 00000000f8da0780 Kernel
>    (XEN) MODULE[3]: 00000000f59b7000 - 00000000f739f99b Ramdisk
>    (XEN)
>    (XEN) CMDLINE[00000000f73a1000]:chosen placeholder root=UUID=435201aa-c5cf-4e7a-8107-5eef28844188 ro console=hvc0
>    (XEN)
>    (XEN) Command line: placeholder dom0_mem=2G loglvl=all guest_loglvl=all no-real-mode edd=off
>    (XEN) parameter "placeholder" unknown!
>    (XEN) parameter "no-real-mode" unknown!
>    (XEN) parameter "edd" unknown!
>    (XEN) Domain heap initialised
>    (XEN) Booting using Device Tree
>    (XEN) Platform: Generic System
>    (XEN)
>    (XEN) ****************************************
>    (XEN) Panic on CPU 0:
>    (XEN) Unable to find a compatible timer in the device tree
>    (XEN) ****************************************
> 
> It seems like there are bits of the DT missing. I can however
> successfully boot Xen with the Linux guest using the guest-loader device
> and bypassing the firmware/boot code step. This gives:
> 
>    (XEN) MODULE[0]: 0000000040200000 - 000000004034d0c8 Xen
>    (XEN) MODULE[1]: 0000000048000000 - 0000000048100000 Device Tree
>    (XEN) MODULE[2]: 0000000046000000 - 0000000046eb2200 Kernel
>    (XEN)
>    (XEN) CMDLINE[0000000046000000]:chosen root=/dev/sda2 console=hvc0 earlyprintk=xen
>    (XEN)
>    (XEN) Command line: dom0_mem=4G dom0_max_vcpus=4
>    (XEN) Domain heap initialised
>    (XEN) Booting using Device Tree
>    (XEN) Platform: Generic System
>    (XEN) Taking dtuart configuration from /chosen/stdout-path
>    (XEN) Looking for dtuart at "/pl011@9000000", options ""
>     Xen 4.15.1-pre
>    (XEN) Xen version 4.15.1-pre (alex.bennee@) (aarch64-linux-gnu-gcc (Debian 8.3.0-2) 8.3.0) debug=y Tue May 18 09:34:55 UTC 2021
>    (XEN) Latest ChangeSet:
>    (XEN) build-id: a50d8f03a1a15662ac7c4e5f73f2f544a6739df2
>    (XEN) Processor: 411fd070: "ARM Limited", variant: 0x1, part 0xd07, rev 0x0
>    (XEN) 64-bit Execution:
>    (XEN)   Processor Features: 0000000001000222 0000000000000000
>    (XEN)     Exception Levels: EL3:No EL2:64+32 EL1:64+32 EL0:64+32
>    (XEN)     Extensions: FloatingPoint AdvancedSIMD GICv3-SysReg
>    (XEN)   Debug Features: 0000000010305106 0000000000000000
>    (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
>    (XEN)   Memory Model Features: 0000000000001124 0000000000000000
>    (XEN)   ISA Features:  0000000000011120 0000000000000000
>    (XEN) 32-bit Execution:
>    (XEN)   Processor Features: 00000131:10011001
>    (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
>    (XEN)     Extensions: GenericTimer
>    (XEN)   Debug Features: 03010066
>    (XEN)   Auxiliary Features: 00000000
>    (XEN)   Memory Model Features: 10101105 40000000 01260000 02102211
>    (XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
>    (XEN) Using SMC Calling Convention v1.0
>    (XEN) Using PSCI v0.2
>    (XEN) SMP: Allowing 8 CPUs
>    (XEN) enabled workaround for: ARM erratum 832075
>    (XEN) enabled workaround for: ARM erratum 834220
>    (XEN) enabled workaround for: ARM erratum 1319367
>    (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 62500 KHz
>    (XEN) GICv3 initialization:
>    (XEN)       gic_dist_addr=0x00000008000000
>    (XEN)       gic_maintenance_irq=25
>    (XEN)       gic_rdist_stride=0
>    (XEN)       gic_rdist_regions=1
>    (XEN)       redistributor regions:
>    (XEN)         - region 0: 0x000000080a0000 - 0x00000009000000
>    (XEN) GICv3: 256 lines, (IID 0000043b).
>    (XEN) GICv3: CPU0: Found redistributor in region 0 @000000004001c000
> 
> Attempting to boot with acpi=on still sees Grub attempt to use DT to
> boot the hypervisor. However selecting the kernel directly boots with
> ACPI (which is a shame as I'd like to see what FDT it gets presented
> with).

ACPI is not built by default in Xen on Arm. You will need to select it 
from Kconfig and rebuild the hypervisor.
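In case it helps, enabling it might look roughly like this (a sketch of the build steps only; the exact menu location is an assumption, and on Arm the ACPI option is gated behind UNSUPPORTED):

```shell
# From the top of a Xen source tree: enable ACPI support in the
# hypervisor's Kconfig (it is off by default on Arm), then rebuild.
make -C xen menuconfig    # enable UNSUPPORTED features, then ACPI
make -C xen -j"$(nproc)"  # rebuilds the hypervisor binary at xen/xen
```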

> 
> The full command line for booting via the guest-loader is:
> 
>    ./qemu-system-aarch64 \
>      -machine virt,virtualization=on,gic-version=3 \
>      -cpu max,pauth-impdef=on \
>      -serial mon:stdio \
>      -netdev user,id=net1,hostfwd=tcp::2222-:22 \
>      -device virtio-net-pci,netdev=net1 \
>      -device virtio-scsi-pci \
>      -drive file=/dev/zvol/hackpool-0/debian-buster-arm64,id=hd0,index=0,if=none,format=raw \
>      -device scsi-hd,drive=hd0 \
>      -display none \
>      -m 16384 \
>      -kernel ~/lsrc/xen/xen.build.arm64-xen/xen/xen \
>      -append "dom0_mem=4G dom0_max_vcpus=4" \
>      -device guest-loader,addr=0x46000000,kernel=$HOME/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image,bootargs="root=/dev/sda2 console=hvc0 earlyprintk=xen" \
>      -smp 8
> 
> So some questions:
> 
>    - is Xen on arm64 tested on Debian Bullseye? If so what platform?

I am using Debian Bullseye on QEMU and also the FVP. We are also using 
Debian in Osstest for all the testing (possibly an older version of 
Debian).

>    - how do I tell Grub to do a straight FDT boot with the DT from the firmware?

Is the firmware actually providing a DT? You could try to boot Xen from 
UEFI directly to confirm that.

However, I vaguely recall that GRUB may only pass ACPI if it is provided.

>    - are there any missing pieces I should be aware of?

Other than re-building Xen with ACPI=y, I am not aware of any issues 
using Xen with Debian Bullseye.

Cheers,

-- 
Julien Grall


From xen-devel-bounces@lists.xenproject.org Fri May 28 15:45:01 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 15:45:01 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134015.249564 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmefN-0005PY-7r; Fri, 28 May 2021 15:44:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134015.249564; Fri, 28 May 2021 15:44:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmefN-0005PR-4s; Fri, 28 May 2021 15:44:53 +0000
Received: by outflank-mailman (input) for mailman id 134015;
 Fri, 28 May 2021 15:44:52 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=YhPb=KX=xen.org=tim@srs-us1.protection.inumbo.net>)
 id 1lmefM-0005PL-0V
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 15:44:52 +0000
Received: from deinos.phlegethon.org (unknown [2001:41d0:8:b1d7::1])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f2761a70-10d4-4bab-aca1-1b49010752fc;
 Fri, 28 May 2021 15:44:50 +0000 (UTC)
Received: from tjd by deinos.phlegethon.org with local (Exim 4.94.2 (FreeBSD))
 (envelope-from <tim@xen.org>)
 id 1lmefH-000AHT-8X; Fri, 28 May 2021 15:44:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f2761a70-10d4-4bab-aca1-1b49010752fc
Date: Fri, 28 May 2021 16:44:47 +0100
From: Tim Deegan <tim@xen.org>
To: Jan Beulich <jbeulich@suse.com>
Cc: Roberto Bagnara <roberto.bagnara@bugseng.com>,
	xen-devel@lists.xenproject.org
Subject: Re: Invalid _Static_assert expanded from HASH_CALLBACKS_CHECK
Message-ID: <YLEP73On6EBjv3Ks@deinos.phlegethon.org>
References: <ccb37c2e-a3a6-a2e4-bf15-da81f97c94be@bugseng.com>
 <38898d21-fe76-36dc-f1e6-497b52c5c0b7@suse.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=iso-8859-1
Content-Disposition: inline
In-Reply-To: <38898d21-fe76-36dc-f1e6-497b52c5c0b7@suse.com>
X-SA-Known-Good: Yes
X-SA-Exim-Connect-IP: <locally generated>
X-SA-Exim-Mail-From: tim@xen.org
X-SA-Exim-Scanned: No (on deinos.phlegethon.org); SAEximRunCond expanded to false

Hi,

At 10:58 +0200 on 25 May (1621940330), Jan Beulich wrote:
> On 24.05.2021 06:29, Roberto Bagnara wrote:
> > I stumbled upon parsing errors due to invalid uses of
> > _Static_assert expanded from HASH_CALLBACKS_CHECK where
> > the tested expression is not constant, as mandated by
> > the C standard.
> > 
> > Judging from the following comment, there is partial awareness
> > of the fact this is an issue:
> > 
> > #ifndef __clang__ /* At least some versions dislike some of the uses. */
> > #define HASH_CALLBACKS_CHECK(mask) \
> >      BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
> > 
> > Indeed, this is not a fault of Clang: the point is that some
> > of the expansions of this macro are not C.  Moreover,
> > the fact that GCC sometimes accepts them is not
> > something we can rely upon:

Well, that is unfortunate - especially since the older ad-hoc
compile-time assertion macros handled this kind of thing pretty well.
Why when I were a lad &c &c. :)

> > Finally, I think this can be easily avoided: instead
> > of initializing a static const with a constant expression
> > and then static-asserting the static const, just static-assert
> > the constant initializer.
> 
> Well, yes, but the whole point of constructs like
> 
>     HASH_CALLBACKS_CHECK(callback_mask);
>     hash_domain_foreach(d, callback_mask, callbacks, gmfn);
> 
> is to make very obvious that the checked mask and the used mask
> match. Hence if anything I'd see us eliminate the static const
> callback_mask variables altogether.

That seems like a good approach.
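For illustration, the pattern Roberto describes and the fix might look like this (a minimal sketch with hypothetical names; NR_CALLBACKS and CALLBACK_MASK are not the actual Xen identifiers):

```c
#define NR_CALLBACKS 4

/* Invalid in strict C: a 'static const' object is not an integer
 * constant expression, so it cannot appear in _Static_assert:
 *
 *   static const unsigned callback_mask = (1u << 2) | (1u << 3);
 *   _Static_assert(callback_mask <= (1u << NR_CALLBACKS) - 1, "...");
 *
 * Valid: assert directly on the constant initializer instead. */
#define CALLBACK_MASK ((1u << 2) | (1u << 3))

_Static_assert(CALLBACK_MASK <= (1u << NR_CALLBACKS) - 1,
               "callback mask wider than the callbacks array");
```

Making the mask a macro rather than a static const variable also fits Jan's point: the same constant is then visibly used in both the check and the call site.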

Cheers,

Tim.


From xen-devel-bounces@lists.xenproject.org Fri May 28 16:47:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 16:47:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134023.249575 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmfdJ-0003Mt-0G; Fri, 28 May 2021 16:46:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134023.249575; Fri, 28 May 2021 16:46:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmfdI-0003Mm-RO; Fri, 28 May 2021 16:46:48 +0000
Received: by outflank-mailman (input) for mailman id 134023;
 Fri, 28 May 2021 16:46:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmfdH-0003Mc-HA; Fri, 28 May 2021 16:46:47 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmfdH-0002Y2-CJ; Fri, 28 May 2021 16:46:47 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmfdH-0004de-1d; Fri, 28 May 2021 16:46:47 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmfdH-0006Kk-19; Fri, 28 May 2021 16:46:47 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=wVarjcw3nbjEvFfZIhrDWdxTmlHjr+z6Nlo0GrdHuiY=; b=d7L+tgTk4TCOhaQOpd8WbMpAt5
	AjA3C1Mm0+7XMFjEAOOWb7cLuy4RF0so7nj6JpkD8sXGSkZH2/DCyAcP1X/iMk+g1CtQ9xWweLHJj
	pzD/Nhq8lNs7Ub4ibj+pSyUzD6a3tSayGh3VzreRpMBlwd84R4y9hN29d9JLqPm54PhY=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162244-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162244: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:heisenbug
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=c8616fc7670b884de5f74d2767aade224c1c5c3a
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Fri, 28 May 2021 16:46:47 +0000

flight 162244 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162244/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10     fail pass in 162240

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                c8616fc7670b884de5f74d2767aade224c1c5c3a
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  281 days
Failing since        152659  2020-08-21 14:07:39 Z  280 days  517 attempts
Testing same since   162240  2021-05-27 19:38:27 Z    0 days    2 attempts

------------------------------------------------------------
514 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 162988 lines long.)


From xen-devel-bounces@lists.xenproject.org Fri May 28 17:30:37 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 17:30:37 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134031.249588 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmgJY-0008BP-T4; Fri, 28 May 2021 17:30:28 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134031.249588; Fri, 28 May 2021 17:30:28 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmgJY-0008BI-Pl; Fri, 28 May 2021 17:30:28 +0000
Received: by outflank-mailman (input) for mailman id 134031;
 Fri, 28 May 2021 17:30:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=XCBB=KX=linaro.org=alex.bennee@srs-us1.protection.inumbo.net>)
 id 1lmgJY-0008BC-3U
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 17:30:28 +0000
Received: from mail-wr1-x42a.google.com (unknown [2a00:1450:4864:20::42a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 696e3ce1-600b-4d71-853c-a824defd5f12;
 Fri, 28 May 2021 17:30:26 +0000 (UTC)
Received: by mail-wr1-x42a.google.com with SMTP id x7so3989389wrt.12
 for <xen-devel@lists.xenproject.org>; Fri, 28 May 2021 10:30:26 -0700 (PDT)
Received: from zen.linaroharston ([51.148.130.216])
 by smtp.gmail.com with ESMTPSA id h8sm7742481wrw.85.2021.05.28.10.30.24
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Fri, 28 May 2021 10:30:24 -0700 (PDT)
Received: from zen (localhost [127.0.0.1])
 by zen.linaroharston (Postfix) with ESMTP id 150BF1FF7E;
 Fri, 28 May 2021 18:30:24 +0100 (BST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 696e3ce1-600b-4d71-853c-a824defd5f12
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=linaro.org; s=google;
        h=references:user-agent:from:to:cc:subject:date:in-reply-to
         :message-id:mime-version:content-transfer-encoding;
        bh=k1WoWq9vd1G/lpEOGpkVB3uP8cOZ77dM5fz+wQD+33c=;
        b=qa2TPFD4hnikwg9Hk+AJ1QPiUskqKjrwAmVlkgi+zdKl4kpqWP67j0xDO5ChCaeWsw
         m6fDuBtTHkVQGHBdq1/TF2hZRoKZ5S9qdL3NZYmiCWJhb4JXiAtRcTQHHX5NTvjgIpTq
         ELUp1iGxKOKS18vrLTQ7j9rN+GxPuTO/WSoY5Jwg7ELARqyfHwSLbuOfLfO8wEy87H18
         vWkkEzjCl83x7tHEvOqS9wturHY6DfFQidsGUvH7Ejdsq6tmU9aNuJ7iSc9Ihso6LeFE
         5VTMO3Fjk8nFEz9zTWEk4v7eelORC1dCq+l8LMgnqSfvPIdkbrEp5mYWXY1JUyy6Cns3
         qZFg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:references:user-agent:from:to:cc:subject:date
         :in-reply-to:message-id:mime-version:content-transfer-encoding;
        bh=k1WoWq9vd1G/lpEOGpkVB3uP8cOZ77dM5fz+wQD+33c=;
        b=h5K0NgqHUgKAyWXL+WBx0WnUOLV5kv1AQe6ISnH0acbANm/CoptJ0ovVnTnI0A9z3j
         ef9h1sVOCWCAebV2nVn8KmMHZLU1lcWCsNRE8esVoBQ+Mc4Q9411wt6PnDA/eEFPW70s
         61ZU0pkb9oz0GfTIsp4FBCR67NXO/9p8qb966uj9CQdV8+yCWW+/2UpOJ9R79AiEU6+m
         clgnrNu6Phhqq+oeQDXefnBOiu3SeeQ8xq57SSGmLjW/CwahN9cF81sqKWpgv30Pf1Sm
         fM0oeUV7swbmzC1W9TC0GFsgLExopWiaPEBFMrmCq2qFBGAhV9uAJzT6dP3hjz2ykHUj
         T+8A==
X-Gm-Message-State: AOAM530EXA594WDku0hBs7ruafCCHWyXx2A4owjYcTBfRYZKkHfx9Lok
	nEAlSU4PSOVGS8o/lZyrrDCUbw==
X-Google-Smtp-Source: ABdhPJyjoBgCJrMdTo1RrwmbCDjMeCcUrsS197mjQ3RR37X1FTRrPPxxP3kvhBxOcMHOSt46RlOS0g==
X-Received: by 2002:adf:fa52:: with SMTP id y18mr9726582wrr.355.1622223025936;
        Fri, 28 May 2021 10:30:25 -0700 (PDT)
References: <87mtse2ac8.fsf@linaro.org>
 <0df82c26-078a-83de-952c-cbad06b3ad2d@xen.org>
User-agent: mu4e 1.5.13; emacs 28.0.50
From: Alex Bennée <alex.bennee@linaro.org>
To: Julien Grall <julien@xen.org>
Cc: Debian Install System Team <debian-boot@lists.debian.org>,
 pkg-grub-devel@alioth-lists.debian.net,
 pkg-xen-devel@lists.alioth.debian.org, xen-devel@lists.xenproject.org,
 Steve McIntyre <93sam@debian.org>, Ruchika Gupta
 <ruchika.gupta@linaro.org>, Ilias Apalodimas
 <ilias.apalodimas@linaro.org>, qemu-arm <qemu-arm@nongnu.org>
Subject: Re: What's missing for arm64 Xen boot with FDT via Grub in Debian
 Bullseye?
Date: Fri, 28 May 2021 18:27:52 +0100
In-reply-to: <0df82c26-078a-83de-952c-cbad06b3ad2d@xen.org>
Message-ID: <878s3y22bz.fsf@linaro.org>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Transfer-Encoding: quoted-printable


Julien Grall <julien@xen.org> writes:

> On 28/05/2021 13:49, Alex Bennée wrote:
>> Hi,
>
> Hi Alex,
>
>> I'm currently trying to pull together the threads for booting Xen on
>> Debian. I'm currently doing this within QEMU's TCG emulation and the
>> "virt" machine model:
>>    -machine type=virt,virtualization=on,gic-version=3 \
>>    -cpu max,pauth-impdef=on
>> with the firmware on my Ubuntu machine:
>>    -drive
>> if=pflash,file=/usr/share/AAVMF/AAVMF_CODE.fd,format=raw,readonly=on
>> -drive if=pflash,file=$HOME/images/AAVMF_VARS.fd,format=raw
>> (qemu-efi-aarch64 Version: 0~20180205.c0d9813c-2ubuntu0.3)
>> When booting this way I get the Grub menu and Xen is loaded by Grub
>> but
>> falls over later:
>>    (XEN) MODULE[0]: 00000000f5869000 - 00000000f59b60c8 Xen
>>    (XEN) MODULE[1]: 000000013857d000 - 0000000138580000 Device Tree
>>    (XEN) MODULE[2]: 00000000f73a1000 - 00000000f8da0780 Kernel
>>    (XEN) MODULE[3]: 00000000f59b7000 - 00000000f739f99b Ramdisk
>>    (XEN)
>>    (XEN) CMDLINE[00000000f73a1000]:chosen placeholder root=UUID=435201aa-c5cf-4e7a-8107-5eef28844188 ro console=hvc0
>>    (XEN)
>>    (XEN) Command line: placeholder dom0_mem=2G loglvl=all guest_loglvl=all no-real-mode edd=off
>>    (XEN) parameter "placeholder" unknown!
>>    (XEN) parameter "no-real-mode" unknown!
>>    (XEN) parameter "edd" unknown!
>>    (XEN) Domain heap initialised
>>    (XEN) Booting using Device Tree
>>    (XEN) Platform: Generic System
>>    (XEN)
>>    (XEN) ****************************************
>>    (XEN) Panic on CPU 0:
>>    (XEN) Unable to find a compatible timer in the device tree
>>    (XEN) ****************************************
>> It seems like there are bits of the DT missing. I can however
>> successfully boot Xen with the Linux guest using the guest-loader device
>> and bypassing the firmware/boot code step. This gives:
>>    (XEN) MODULE[0]: 0000000040200000 - 000000004034d0c8 Xen
>>    (XEN) MODULE[1]: 0000000048000000 - 0000000048100000 Device Tree
>>    (XEN) MODULE[2]: 0000000046000000 - 0000000046eb2200 Kernel
>>    (XEN)
>>    (XEN) CMDLINE[0000000046000000]:chosen root=/dev/sda2 console=hvc0 earlyprintk=xen
>>    (XEN)
>>    (XEN) Command line: dom0_mem=4G dom0_max_vcpus=4
>>    (XEN) Domain heap initialised
>>    (XEN) Booting using Device Tree
>>    (XEN) Platform: Generic System
>>    (XEN) Taking dtuart configuration from /chosen/stdout-path
>>    (XEN) Looking for dtuart at "/pl011@9000000", options ""
>>     Xen 4.15.1-pre
>>    (XEN) Xen version 4.15.1-pre (alex.bennee@) (aarch64-linux-gnu-gcc (Debian 8.3.0-2) 8.3.0) debug=y Tue May 18 09:34:55 UTC 2021
>>    (XEN) Latest ChangeSet:
>>    (XEN) build-id: a50d8f03a1a15662ac7c4e5f73f2f544a6739df2
>>    (XEN) Processor: 411fd070: "ARM Limited", variant: 0x1, part 0xd07, rev 0x0
>>    (XEN) 64-bit Execution:
>>    (XEN)   Processor Features: 0000000001000222 0000000000000000
>>    (XEN)     Exception Levels: EL3:No EL2:64+32 EL1:64+32 EL0:64+32
>>    (XEN)     Extensions: FloatingPoint AdvancedSIMD GICv3-SysReg
>>    (XEN)   Debug Features: 0000000010305106 0000000000000000
>>    (XEN)   Auxiliary Features: 0000000000000000 0000000000000000
>>    (XEN)   Memory Model Features: 0000000000001124 0000000000000000
>>    (XEN)   ISA Features:  0000000000011120 0000000000000000
>>    (XEN) 32-bit Execution:
>>    (XEN)   Processor Features: 00000131:10011001
>>    (XEN)     Instruction Sets: AArch32 A32 Thumb Thumb-2 Jazelle
>>    (XEN)     Extensions: GenericTimer
>>    (XEN)   Debug Features: 03010066
>>    (XEN)   Auxiliary Features: 00000000
>>    (XEN)   Memory Model Features: 10101105 40000000 01260000 02102211
>>    (XEN)  ISA Features: 02101110 13112111 21232042 01112131 00011142 00011121
>>    (XEN) Using SMC Calling Convention v1.0
>>    (XEN) Using PSCI v0.2
>>    (XEN) SMP: Allowing 8 CPUs
>>    (XEN) enabled workaround for: ARM erratum 832075
>>    (XEN) enabled workaround for: ARM erratum 834220
>>    (XEN) enabled workaround for: ARM erratum 1319367
>>    (XEN) Generic Timer IRQ: phys=30 hyp=26 virt=27 Freq: 62500 KHz
>>    (XEN) GICv3 initialization:
>>    (XEN)       gic_dist_addr=0x00000008000000
>>    (XEN)       gic_maintenance_irq=25
>>    (XEN)       gic_rdist_stride=0
>>    (XEN)       gic_rdist_regions=1
>>    (XEN)       redistributor regions:
>>    (XEN)         - region 0: 0x000000080a0000 - 0x00000009000000
>>    (XEN) GICv3: 256 lines, (IID 0000043b).
>>    (XEN) GICv3: CPU0: Found redistributor in region 0 @000000004001c000
>> Attempting to boot with acpi=on still sees Grub attempt to use DT to
>> boot the hypervisor. However selecting the kernel directly boots with
>> ACPI (which is a shame as I'd like to see what FDT it gets presented
>> with).
>
> ACPI is not built by default in Xen on Arm. You will need to select it
> from Kconfig and rebuild the hypervisor.

OK, so I think what was happening is that Grub was always passing ACPI,
and with Xen not built for it there was simply no DTB to process. Testing
with an ACPI build works for Bullseye.
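
For the record, that rebuild amounts to flipping one Kconfig symbol. A
sketch of the resulting xen/.config fragment (assumption: on Arm the ACPI
option is still experimental and hidden behind EXPERT, as in current
xen.git; verify with `make -C xen menuconfig`):

```shell
# xen/.config fragment (sketch): ACPI support for the arm64 hypervisor.
CONFIG_EXPERT=y   # assumption: the ACPI option is gated behind expert mode
CONFIG_ACPI=y
```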

Testing on Buster, Xen does boot but hangs at:

  (XEN) *** LOADING DOMAIN 0 ***
  (XEN) Loading d0 kernel from boot module @ 00000000f7d2d000
  (XEN) Loading ramdisk from boot module @ 00000000f304b000
  (XEN) Allocating 1:1 mappings totalling 2048MB for dom0:
  (XEN) BANK[0] 0x00000040000000-0x000000c0000000 (2048MB)
  (XEN) Grant table range: 0x000000f2ef5000-0x000000f2f35000
  (XEN) Allocating PPI 16 for event channel interrupt
  (XEN) Loading zImage from 00000000f7d2d000 to 0000000040080000-0000000041264780
  (XEN) Loading d0 initrd from 00000000f304b000 to 0x0000000048200000-0x000000004cee0c00
  (XEN) Loading d0 DTB to 0x0000000048000000-0x000000004800027f
  (XEN) Initial low memory virq threshold set at 0x4000 pages.
  (XEN) Scrubbing Free RAM in background
  (XEN) Std. Loglevel: All
  (XEN) Guest Loglevel: All
  (XEN) *** Serial input to DOM0 (type 'CTRL-a' three times to switch input)
  (XEN) Freed 380kB init memory.


I'll need to update my build environment and rebuild for Bullseye to
check that I can still start a new domain.

>
>> The full command line for booting via the guest-loader is:
>>    ./qemu-system-aarch64 \
>>      -machine virt,virtualization=on,gic-version=3 \
>>      -cpu max,pauth-impdef=on \
>>      -serial mon:stdio \
>>      -netdev user,id=net1,hostfwd=tcp::2222-:22 \
>>      -device virtio-net-pci,netdev=net1 \
>>      -device virtio-scsi-pci \
>>      -drive file=/dev/zvol/hackpool-0/debian-buster-arm64,id=hd0,index=0,if=none,format=raw \
>>      -device scsi-hd,drive=hd0 \
>>      -display none \
>>      -m 16384 \
>>      -kernel ~/lsrc/xen/xen.build.arm64-xen/xen/xen \
>>      -append "dom0_mem=4G dom0_max_vcpus=4" \
>>      -device guest-loader,addr=0x46000000,kernel=$HOME/lsrc/linux.git/builds/arm64/arch/arm64/boot/Image,bootargs="root=/dev/sda2 console=hvc0 earlyprintk=xen" \
>>      -smp 8
>> So some questions:
>>    - is Xen on arm64 tested on Debian Bullseye? If so what platform?
>
> I am using Debian Bullseye on QEMU and also the FVP. We are also using
> Debian in Osstest for all the testing (possibly an older version of
> Debian).
>
>>    - how do I tell Grub to do a straight FDT boot with the DT from the firmware?
>
> Is the firmware actually providing a DT? You could try to boot Xen
> from UEFI directly to confirm that.
>
> However, I vaguely recall that GRUB may only pass ACPI if it is provided.
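
A quick way to answer the "is the firmware actually providing a DT?"
question from inside a Linux guest booted the same way is to look at the
standard sysfs nodes (a sketch; nothing Xen-specific is assumed):

```shell
# The kernel exposes /sys/firmware/devicetree only when it consumed a DTB,
# and /sys/firmware/acpi only when it found ACPI tables.
if [ -d /sys/firmware/devicetree/base ]; then echo "DT: present"; else echo "DT: absent"; fi
if [ -d /sys/firmware/acpi/tables ]; then echo "ACPI: present"; else echo "ACPI: absent"; fi
```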
>
>>    - are there any missing pieces I should be aware of?
>
> Other than re-building Xen with ACPI=y, I am not aware of any issues
> using Xen with Debian Bullseye.
>
> Cheers,


-- 
Alex Bennée


From xen-devel-bounces@lists.xenproject.org Fri May 28 17:39:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 17:39:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134040.249600 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmgSf-0000Zk-NU; Fri, 28 May 2021 17:39:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134040.249600; Fri, 28 May 2021 17:39:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmgSf-0000Za-KC; Fri, 28 May 2021 17:39:53 +0000
Received: by outflank-mailman (input) for mailman id 134040;
 Fri, 28 May 2021 17:39:52 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WKFi=KX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmgSe-0000ZU-5f
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 17:39:52 +0000
Received: from esa4.hc3370-68.iphmx.com (unknown [216.71.155.144])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id afdb6b55-a002-4e91-a80a-a657e596fcd2;
 Fri, 28 May 2021 17:39:50 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: afdb6b55-a002-4e91-a80a-a657e596fcd2
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622223590;
  h=from:to:cc:subject:date:message-id:
   content-transfer-encoding:mime-version;
  bh=5BtckwRaIBXJdw/3h71NRjQvhbftsf/iUxEvDmUVBig=;
  b=cfkKFb7VWl1VgJBu0mXkNvOYBiynzuFXp9fEeRYY1NIhCDSKB1k//nEr
   vEvMv54UpfeBjGaFqSs6aoGwce/T55H1+K0QXSWlUzfj6hnuEYP2v40K0
   lVvepnscuYSJOlaxCa7Su8nCyYeFCS2BXH35mxUVYmwQcXyYPU+V5GNY1
   U=;
Authentication-Results: esa4.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: drCTXFmKwDD3qPKHW77HJ5YBDDWOCXK+qFfQhvsio+QDX5w1LRBR/FuEmKIh6Oa3+ZYRoHrWPv
 Sara1o6jG/SkrMIQ5jqsGELvwK7z3QqqgZlYrnt5Xv7Sqa1CFMMGXfdD+aL9twWLQOIkXcL5+O
 7gPHcRflczF9jcquYIwg75wQTpLEY6pzts9D0811EdbY/qkJotkUna+WqWYrj4Y38dKg2ec2+S
 6squxvXCTP/8Iyoyj+3gSeWvnkVuhHLS+MFhgEyGnVJLKwmLck6boEBA/PG9uUTYd98FTtF4yW
 PcA=
X-SBRS: 5.1
X-MesageID: 46432759
X-Ironport-Server: esa4.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
IronPort-HdrOrdr: A9a23:5wR76av03py60ZZwBHc0K/aR7skCK4Mji2hC6mlwRA09TyXBrb
 HWoB1p726NtN9xYgBUpTnkAsO9qBznhPtICOUqU4tKGTOW3ldAT7sSmbcKoQeQfxEWn9Q1vc
 wMH8dD4Z/LfD9HZK3BgDVQZuxQouVvh5rY5ts2oU0NceggUdAa0+4wMHfgLmRGADBcA5w3DZ
 yd4dcCiQaBVB0sH7KGL0hAZvPEodLTkpLgfFohPD4IrCezrR7A0s+ML/CjtC1uGw+nBY1SuF
 QtvTaJm5lK6ZyAu2PhP0y/1eUopDOgp+EzdfBlxKUuW0XRYrTEXvUeZ5SS+C0wqPuirE0nis
 XIvn4bTrdOwmKUY2W8uxeoxAX6yjYp7BbZuC2lvUc=
X-IronPort-AV: E=Sophos;i="5.83,230,1616472000"; 
   d="scan'208";a="46432759"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=cq9n60GHK/TrsJYLnUf21bTbKmbkIezdlsm4rp3aLhIKVF2JKls0xNK4uR5ntfhU49t9P/H6PkGpQUnO0kAicia8STHkTvs3zV4+xeEsgUUiWA/JeNu2DmALNQSq5ammTjW0IerCzr/IVJIUDzuP/EdYDo81mVFpKeErawVZihh9mvfTsy1jlnZpNLNs9N2ZjocJvYTo2VVy/p4tG2hGqJ7gRCm4dlPRPBNWsEtzfguKdJr6jrbVFreDmG/Wwv7n48IHF6OsrUXqGgLQp+5Hg85z0wbjAJdmOzd2Ovze2IG3+bFKAVFB7pWEUSOfzbIqkSQn++IH3sm0gaPTIzw3iQ==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qFYCEX5tVKfuG4zjygC3ffVJw++0wA2LgWtWO7+wEEM=;
 b=gvDWsVffGYvMYr+OlJJCWveTIeK4skACKWASYO+PMdmI2WKq3pziAlbTc2bWOVhF7Ftb7qRe/qxIC56H3qRJmGza1JH2uMsWDeijATLPSOqZQ4fTyBVhdMLSXuKZhJMV36xUAEGo2oxHtucrSfp4FtP1b9nfO2knVYgfuhcqPkd1QWpgzF7EQXWhx9hlNeTNKTDviZqKoRNM9fU0d3RkrKVsrsDgiRJ/eQSAXVDC4oxY6Yc2LWKNSRegUVTWVW0Hj05ydMUCB4jTmtz8+jXZadSTsHsWnKhfuEYosE+yAYSxTVYoqQAUzxwWLtb82qPEYRP1yWvBbn2JI/DPJkdGjg==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=qFYCEX5tVKfuG4zjygC3ffVJw++0wA2LgWtWO7+wEEM=;
 b=MHvBNu27lO/rGML5Jzz17iIDG/UAVz+XcMeVwjnVquyACn+jZT9dfdI5atfD3/4UiyOouc3KNoCBmJ41jAkW0DAVX34wtuFmeuS5JvjQjsYM8/Vn+6bxRzLJ5UEKNkuvyP7rnDa7tHP46QJ9El0CvbMN6uoiMuEjztif2c4wep0=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, George Dunlap
	<george.dunlap@citrix.com>
Subject: [PATCH 0/3] x86/ept: force WB to foreign and grant mappings
Date: Fri, 28 May 2021 19:39:32 +0200
Message-ID: <20210528173935.29919-1-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MR2P264CA0137.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:30::29) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: 0951607f-c908-4e82-53f6-08d921ff9541
X-MS-TrafficTypeDiagnostic: DM6PR03MB3945:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM6PR03MB39451EA9FF93693CB0DAE0848F229@DM6PR03MB3945.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: 0951607f-c908-4e82-53f6-08d921ff9541
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 May 2021 17:39:45.5144
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: /vOrAMgwUx/k1/lH4Ra9nYzdYBA9dhgg12yOIDRP6hxsPpD86/3dzpcVo+sZFVL4K3b07Dw9R6CFEl8FuX2d+A==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM6PR03MB3945
X-OriginatorOrg: citrix.com


Hello,

The aim of this series is to force the cache attribute of foreign and
grant mappings to WB for HVM/PVH guests. This is required because such
mappings will likely be placed in unpopulated ranges of the p2m, and
those ranges are usually UC in the guest MTRR state.

Having the guest set the correct MTRR attributes for those mappings is
also unlikely, because the number of MTRR ranges is finite.
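
The situation described above can be illustrated with a toy model of
x86 variable-range MTRR classification (this is a simplified sketch,
not Xen code, and it ignores fixed-range MTRRs and the full overlap
precedence rules from the SDM): an address not covered by any variable
range falls back to the default type, which is commonly UC for the
unpopulated space above a guest's RAM — exactly where foreign and
grant mappings tend to land.

```python
# Toy model of variable-range MTRR classification (simplified; real
# hardware also has fixed ranges and richer overlap precedence rules).
UC, WB = 0, 6  # x86 MTRR memory type encodings

def mtrr_type(ranges, default_type, addr):
    """ranges: list of (base, size, type); UC wins on overlaps."""
    matches = [t for base, size, t in ranges if base <= addr < base + size]
    if not matches:
        return default_type  # uncovered addresses use the default type
    return UC if UC in matches else matches[0]

# A guest typically covers its RAM as WB and leaves the default as UC:
ranges = [(0x0, 0x8000_0000, WB)]                    # first 2 GiB of RAM
assert mtrr_type(ranges, UC, 0x1000_0000) == WB      # regular RAM: WB
assert mtrr_type(ranges, UC, 0x1_0000_0000) == UC    # map above RAM: UC
```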

Roger Pau Monne (3):
  x86/mtrr: remove stale function prototype
  x86/mtrr: move epte_get_entry_emt to p2m-ept.c
  x86/ept: force WB cache attributes for grant and foreign maps

 xen/arch/x86/hvm/mtrr.c           | 107 +---------------------
 xen/arch/x86/hvm/vmx/vmx.c        |   6 +-
 xen/arch/x86/mm/p2m-ept.c         | 145 ++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/vmx/vmx.h |   2 +
 xen/include/asm-x86/mtrr.h        |   7 +-
 5 files changed, 147 insertions(+), 120 deletions(-)

-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 28 17:39:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 17:39:58 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>
Subject: [PATCH 1/3] x86/mtrr: remove stale function prototype
Date: Fri, 28 May 2021 19:39:33 +0200
Message-ID: <20210528173935.29919-2-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210528173935.29919-1-roger.pau@citrix.com>
References: <20210528173935.29919-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

Fixes: 1c84d04673 ('VMX: remove the problematic set_uc_mode logic')
Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/include/asm-x86/mtrr.h | 2 --
 1 file changed, 2 deletions(-)

diff --git a/xen/include/asm-x86/mtrr.h b/xen/include/asm-x86/mtrr.h
index 4be704cb6a..24e5de5c22 100644
--- a/xen/include/asm-x86/mtrr.h
+++ b/xen/include/asm-x86/mtrr.h
@@ -78,8 +78,6 @@ extern u32 get_pat_flags(struct vcpu *v, u32 gl1e_flags, paddr_t gpaddr,
 extern int epte_get_entry_emt(struct domain *, unsigned long gfn, mfn_t mfn,
                               unsigned int order, uint8_t *ipat,
                               bool_t direct_mmio);
-extern void ept_change_entry_emt_with_range(
-    struct domain *d, unsigned long start_gfn, unsigned long end_gfn);
 extern unsigned char pat_type_2_pte_flags(unsigned char pat_type);
 extern int hold_mtrr_updates_on_aps;
 extern void mtrr_aps_sync_begin(void);
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 28 17:40:03 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 17:40:03 +0000
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jan Beulich <jbeulich@suse.com>,
	Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, George Dunlap
	<george.dunlap@citrix.com>
Subject: [PATCH 2/3] x86/mtrr: move epte_get_entry_emt to p2m-ept.c
Date: Fri, 28 May 2021 19:39:34 +0200
Message-ID: <20210528173935.29919-3-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210528173935.29919-1-roger.pau@citrix.com>
References: <20210528173935.29919-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0

This is an EPT-specific function, so it shouldn't live in the generic
MTRR code. The move is also needed for future work that will require
passing a p2m_type_t parameter to epte_get_entry_emt, and making that
type visible to the mtrr users would be cumbersome and unnecessary.

Moving epte_get_entry_emt out of mtrr.c requires making the helper
that gets the MTRR type of an address from the mtrr state public.
While there, rename the function to use the mtrr prefix, like other
mtrr related functions.

While there, also fix some of the function parameter types.

No functional change intended.
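
For reference, the guest/host memory type combination performed at the
end of epte_get_entry_emt (unchanged by this move) can be sketched as
follows. This is a simplified model for illustration only, not Xen
code; the constants mirror the x86 MTRR_TYPE_* encodings:

```python
# Simplified model of how epte_get_entry_emt() combines the guest and
# host MTRR memory types into the effective EPT memory type.
UC, WC, WT, WP, WB = 0, 1, 4, 5, 6  # x86 MTRR memory type encodings

def combine_mtrr_types(gmtrr, hmtrr):
    """Resolve a (guest, host) MTRR type pair to the effective type."""
    if gmtrr == hmtrr:              # matching types: use them directly
        return gmtrr
    if UC in (gmtrr, hmtrr):        # UC always wins
        return UC
    if gmtrr == WB:                 # WB yields to the other type
        return hmtrr
    if hmtrr == WB:
        return gmtrr
    if {gmtrr, hmtrr} == {WT, WP}:  # the only clean mixed resolution
        return WP
    return UC                       # combinations involving WC become UC

assert combine_mtrr_types(WB, WT) == WT
assert combine_mtrr_types(WT, WP) == WP
assert combine_mtrr_types(WC, WT) == UC
```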

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/mtrr.c           | 107 +---------------------------
 xen/arch/x86/hvm/vmx/vmx.c        |   4 +-
 xen/arch/x86/mm/p2m-ept.c         | 114 ++++++++++++++++++++++++++++--
 xen/include/asm-x86/hvm/vmx/vmx.h |   2 +
 xen/include/asm-x86/mtrr.h        |   5 +-
 5 files changed, 118 insertions(+), 114 deletions(-)

diff --git a/xen/arch/x86/hvm/mtrr.c b/xen/arch/x86/hvm/mtrr.c
index 82ded1635c..4a9f3177ed 100644
--- a/xen/arch/x86/hvm/mtrr.c
+++ b/xen/arch/x86/hvm/mtrr.c
@@ -194,8 +194,7 @@ void hvm_vcpu_cacheattr_destroy(struct vcpu *v)
  * May return a negative value when order > 0, indicating to the caller
  * that the respective mapping needs splitting.
  */
-static int get_mtrr_type(const struct mtrr_state *m,
-                         paddr_t pa, unsigned int order)
+int mtrr_get_type(const struct mtrr_state *m, paddr_t pa, unsigned int order)
 {
    uint8_t     overlap_mtrr = 0;
    uint8_t     overlap_mtrr_pos = 0;
@@ -323,7 +322,7 @@ static uint8_t effective_mm_type(struct mtrr_state *m,
      * just use it
      */ 
     if ( gmtrr_mtype == NO_HARDCODE_MEM_TYPE )
-        mtrr_mtype = get_mtrr_type(m, gpa, 0);
+        mtrr_mtype = mtrr_get_type(m, gpa, 0);
     else
         mtrr_mtype = gmtrr_mtype;
 
@@ -350,7 +349,7 @@ uint32_t get_pat_flags(struct vcpu *v,
     guest_eff_mm_type = effective_mm_type(g, pat, gpaddr, 
                                           gl1e_flags, gmtrr_mtype);
     /* 2. Get the memory type of host physical address, with MTRR */
-    shadow_mtrr_type = get_mtrr_type(&mtrr_state, spaddr, 0);
+    shadow_mtrr_type = mtrr_get_type(&mtrr_state, spaddr, 0);
 
     /* 3. Find the memory type in PAT, with host MTRR memory type
      * and guest effective memory type.
@@ -789,106 +788,6 @@ void memory_type_changed(struct domain *d)
     }
 }
 
-int epte_get_entry_emt(struct domain *d, unsigned long gfn, mfn_t mfn,
-                       unsigned int order, uint8_t *ipat, bool_t direct_mmio)
-{
-    int gmtrr_mtype, hmtrr_mtype;
-    struct vcpu *v = current;
-    unsigned long i;
-
-    *ipat = 0;
-
-    if ( v->domain != d )
-        v = d->vcpu ? d->vcpu[0] : NULL;
-
-    /* Mask, not add, for order so it works with INVALID_MFN on unmapping */
-    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
-                                 mfn_x(mfn) | ((1UL << order) - 1)) )
-    {
-        if ( !order || rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
-                                               mfn_x(mfn) | ((1UL << order) - 1)) )
-        {
-            *ipat = 1;
-            return MTRR_TYPE_UNCACHABLE;
-        }
-        /* Force invalid memory type so resolve_misconfig() will split it */
-        return -1;
-    }
-
-    if ( !mfn_valid(mfn) )
-    {
-        *ipat = 1;
-        return MTRR_TYPE_UNCACHABLE;
-    }
-
-    if ( !direct_mmio && !is_iommu_enabled(d) && !cache_flush_permitted(d) )
-    {
-        *ipat = 1;
-        return MTRR_TYPE_WRBACK;
-    }
-
-    for ( i = 0; i < (1ul << order); i++ )
-    {
-        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
-        {
-            if ( order )
-                return -1;
-            *ipat = 1;
-            return MTRR_TYPE_WRBACK;
-        }
-    }
-
-    if ( direct_mmio )
-        return MTRR_TYPE_UNCACHABLE;
-
-    gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, _gfn(gfn), order);
-    if ( gmtrr_mtype >= 0 )
-    {
-        *ipat = 1;
-        return gmtrr_mtype != PAT_TYPE_UC_MINUS ? gmtrr_mtype
-                                                : MTRR_TYPE_UNCACHABLE;
-    }
-    if ( gmtrr_mtype == -EADDRNOTAVAIL )
-        return -1;
-
-    gmtrr_mtype = v ? get_mtrr_type(&v->arch.hvm.mtrr, gfn << PAGE_SHIFT, order)
-                    : MTRR_TYPE_WRBACK;
-    hmtrr_mtype = get_mtrr_type(&mtrr_state, mfn_x(mfn) << PAGE_SHIFT, order);
-    if ( gmtrr_mtype < 0 || hmtrr_mtype < 0 )
-        return -1;
-
-    /* If both types match we're fine. */
-    if ( likely(gmtrr_mtype == hmtrr_mtype) )
-        return hmtrr_mtype;
-
-    /* If either type is UC, we have to go with that one. */
-    if ( gmtrr_mtype == MTRR_TYPE_UNCACHABLE ||
-         hmtrr_mtype == MTRR_TYPE_UNCACHABLE )
-        return MTRR_TYPE_UNCACHABLE;
-
-    /* If either type is WB, we have to go with the other one. */
-    if ( gmtrr_mtype == MTRR_TYPE_WRBACK )
-        return hmtrr_mtype;
-    if ( hmtrr_mtype == MTRR_TYPE_WRBACK )
-        return gmtrr_mtype;
-
-    /*
-     * At this point we have disagreeing WC, WT, or WP types. The only
-     * combination that can be cleanly resolved is WT:WP. The ones involving
-     * WC need to be converted to UC, both due to the memory ordering
-     * differences and because WC disallows reads to be cached (WT and WP
-     * permit this), while WT and WP require writes to go straight to memory
-     * (WC can buffer them).
-     */
-    if ( (gmtrr_mtype == MTRR_TYPE_WRTHROUGH &&
-          hmtrr_mtype == MTRR_TYPE_WRPROT) ||
-         (gmtrr_mtype == MTRR_TYPE_WRPROT &&
-          hmtrr_mtype == MTRR_TYPE_WRTHROUGH) )
-        return MTRR_TYPE_WRPROT;
-
-    return MTRR_TYPE_UNCACHABLE;
-}
-
 /*
  * Local variables:
  * mode: C
diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 7e3e67fdc3..0d4b47681b 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -417,12 +417,12 @@ static int vmx_domain_initialise(struct domain *d)
 static void domain_creation_finished(struct domain *d)
 {
     gfn_t gfn = gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE);
-    uint8_t ipat;
+    bool ipat;
 
     if ( !has_vlapic(d) || mfn_eq(apic_access_mfn, INVALID_MFN) )
         return;
 
-    ASSERT(epte_get_entry_emt(d, gfn_x(gfn), apic_access_mfn, 0, &ipat,
+    ASSERT(epte_get_entry_emt(d, gfn, apic_access_mfn, 0, &ipat,
                               true) == MTRR_TYPE_WRBACK);
     ASSERT(ipat);
 
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index a3beaf91e2..f1d1d07e92 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -20,6 +20,7 @@
 #include <public/hvm/dm_op.h>
 #include <asm/altp2m.h>
 #include <asm/current.h>
+#include <asm/iocap.h>
 #include <asm/paging.h>
 #include <asm/types.h>
 #include <asm/domain.h>
@@ -485,6 +486,108 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
     return rc;
 }
 
+int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
+                       unsigned int order, bool *ipat, bool direct_mmio)
+{
+    int gmtrr_mtype, hmtrr_mtype;
+    struct vcpu *v = current;
+    unsigned long i;
+
+    *ipat = false;
+
+    if ( v->domain != d )
+        v = d->vcpu ? d->vcpu[0] : NULL;
+
+    /* Mask, not add, for order so it works with INVALID_MFN on unmapping */
+    if ( rangeset_overlaps_range(mmio_ro_ranges, mfn_x(mfn),
+                                 mfn_x(mfn) | ((1UL << order) - 1)) )
+    {
+        if ( !order || rangeset_contains_range(mmio_ro_ranges, mfn_x(mfn),
+                                               mfn_x(mfn) | ((1UL << order) - 1)) )
+        {
+            *ipat = true;
+            return MTRR_TYPE_UNCACHABLE;
+        }
+        /* Force invalid memory type so resolve_misconfig() will split it */
+        return -1;
+    }
+
+    if ( !mfn_valid(mfn) )
+    {
+        *ipat = true;
+        return MTRR_TYPE_UNCACHABLE;
+    }
+
+    if ( !direct_mmio && !is_iommu_enabled(d) && !cache_flush_permitted(d) )
+    {
+        *ipat = true;
+        return MTRR_TYPE_WRBACK;
+    }
+
+    for ( i = 0; i < (1ul << order); i++ )
+    {
+        if ( is_special_page(mfn_to_page(mfn_add(mfn, i))) )
+        {
+            if ( order )
+                return -1;
+            *ipat = true;
+            return MTRR_TYPE_WRBACK;
+        }
+    }
+
+    if ( direct_mmio )
+        return MTRR_TYPE_UNCACHABLE;
+
+    gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, gfn, order);
+    if ( gmtrr_mtype >= 0 )
+    {
+        *ipat = true;
+        return gmtrr_mtype != PAT_TYPE_UC_MINUS ? gmtrr_mtype
+                                                : MTRR_TYPE_UNCACHABLE;
+    }
+    if ( gmtrr_mtype == -EADDRNOTAVAIL )
+        return -1;
+
+    gmtrr_mtype = v ? mtrr_get_type(&v->arch.hvm.mtrr,
+                                    gfn_x(gfn) << PAGE_SHIFT, order)
+                    : MTRR_TYPE_WRBACK;
+    hmtrr_mtype = mtrr_get_type(&mtrr_state, mfn_x(mfn) << PAGE_SHIFT,
+                                order);
+    if ( gmtrr_mtype < 0 || hmtrr_mtype < 0 )
+        return -1;
+
+    /* If both types match we're fine. */
+    if ( likely(gmtrr_mtype == hmtrr_mtype) )
+        return hmtrr_mtype;
+
+    /* If either type is UC, we have to go with that one. */
+    if ( gmtrr_mtype == MTRR_TYPE_UNCACHABLE ||
+         hmtrr_mtype == MTRR_TYPE_UNCACHABLE )
+        return MTRR_TYPE_UNCACHABLE;
+
+    /* If either type is WB, we have to go with the other one. */
+    if ( gmtrr_mtype == MTRR_TYPE_WRBACK )
+        return hmtrr_mtype;
+    if ( hmtrr_mtype == MTRR_TYPE_WRBACK )
+        return gmtrr_mtype;
+
+    /*
+     * At this point we have disagreeing WC, WT, or WP types. The only
+     * combination that can be cleanly resolved is WT:WP. The ones involving
+     * WC need to be converted to UC, both due to the memory ordering
+     * differences and because WC disallows reads to be cached (WT and WP
+     * permit this), while WT and WP require writes to go straight to memory
+     * (WC can buffer them).
+     */
+    if ( (gmtrr_mtype == MTRR_TYPE_WRTHROUGH &&
+          hmtrr_mtype == MTRR_TYPE_WRPROT) ||
+         (gmtrr_mtype == MTRR_TYPE_WRPROT &&
+          hmtrr_mtype == MTRR_TYPE_WRTHROUGH) )
+        return MTRR_TYPE_WRPROT;
+
+    return MTRR_TYPE_UNCACHABLE;
+}
+
 /*
  * Resolve deliberately mis-configured (EMT field set to an invalid value)
  * entries in the page table hierarchy for the given GFN:
@@ -519,7 +622,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
 
         if ( level == 0 || is_epte_superpage(&e) )
         {
-            uint8_t ipat = 0;
+            bool ipat;
 
             if ( e.emt != MTRR_NUM_TYPES )
                 break;
@@ -535,7 +638,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                         e.emt = 0;
                     if ( !is_epte_valid(&e) || !is_epte_present(&e) )
                         continue;
-                    e.emt = epte_get_entry_emt(p2m->domain, gfn + i,
+                    e.emt = epte_get_entry_emt(p2m->domain, _gfn(gfn + i),
                                                _mfn(e.mfn), 0, &ipat,
                                                e.sa_p2mt == p2m_mmio_direct);
                     e.ipat = ipat;
@@ -553,7 +656,8 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
             }
             else
             {
-                int emt = epte_get_entry_emt(p2m->domain, gfn, _mfn(e.mfn),
+                int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn),
+                                             _mfn(e.mfn),
                                              level * EPT_TABLE_ORDER, &ipat,
                                              e.sa_p2mt == p2m_mmio_direct);
                 bool_t recalc = e.recalc;
@@ -788,8 +892,8 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
 
     if ( mfn_valid(mfn) || p2m_allows_invalid_mfn(p2mt) )
     {
-        uint8_t ipat = 0;
-        int emt = epte_get_entry_emt(p2m->domain, gfn, mfn,
+        bool ipat;
+        int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
                                      i * EPT_TABLE_ORDER, &ipat,
                                      p2mt == p2m_mmio_direct);
 
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index 534e9fc221..f668ee1f09 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -599,6 +599,8 @@ void ept_p2m_uninit(struct p2m_domain *p2m);
 
 void ept_walk_table(struct domain *d, unsigned long gfn);
 bool_t ept_handle_misconfig(uint64_t gpa);
+int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
+                       unsigned int order, bool *ipat, bool direct_mmio);
 void setup_ept_dump(void);
 void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
 /* Locate an alternate p2m by its EPTP */
diff --git a/xen/include/asm-x86/mtrr.h b/xen/include/asm-x86/mtrr.h
index 24e5de5c22..e0fd1005ce 100644
--- a/xen/include/asm-x86/mtrr.h
+++ b/xen/include/asm-x86/mtrr.h
@@ -72,12 +72,11 @@ extern int mtrr_add_page(unsigned long base, unsigned long size,
                          unsigned int type, char increment);
 extern int mtrr_del(int reg, unsigned long base, unsigned long size);
 extern int mtrr_del_page(int reg, unsigned long base, unsigned long size);
+extern int mtrr_get_type(const struct mtrr_state *m, paddr_t pa,
+                         unsigned int order);
 extern void mtrr_centaur_report_mcr(int mcr, u32 lo, u32 hi);
 extern u32 get_pat_flags(struct vcpu *v, u32 gl1e_flags, paddr_t gpaddr,
                   paddr_t spaddr, uint8_t gmtrr_mtype);
-extern int epte_get_entry_emt(struct domain *, unsigned long gfn, mfn_t mfn,
-                              unsigned int order, uint8_t *ipat,
-                              bool_t direct_mmio);
 extern unsigned char pat_type_2_pte_flags(unsigned char pat_type);
 extern int hold_mtrr_updates_on_aps;
 extern void mtrr_aps_sync_begin(void);
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 28 17:40:06 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 17:40:06 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134043.249633 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmgSs-00027j-Of; Fri, 28 May 2021 17:40:06 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134043.249633; Fri, 28 May 2021 17:40:06 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmgSs-00027V-KV; Fri, 28 May 2021 17:40:06 +0000
Received: by outflank-mailman (input) for mailman id 134043;
 Fri, 28 May 2021 17:40:05 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=WKFi=KX=citrix.com=roger.pau@srs-us1.protection.inumbo.net>)
 id 1lmgSr-0001si-9c
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 17:40:05 +0000
Received: from esa1.hc3370-68.iphmx.com (unknown [216.71.145.142])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 76e5c0c5-a728-4b27-b30c-372172b9795a;
 Fri, 28 May 2021 17:40:03 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 76e5c0c5-a728-4b27-b30c-372172b9795a
DKIM-Signature: v=1; a=rsa-sha256; c=simple/simple;
  d=citrix.com; s=securemail; t=1622223603;
  h=from:to:cc:subject:date:message-id:in-reply-to:
   references:content-transfer-encoding:mime-version;
  bh=9nedISlwrSegLGQxOvVojP0HNEh86KYexV0VnKl/dG8=;
  b=FUJX7JZdG9gd2p1alHgC9aB55BfKKsmdyp3vd4zuIIYJHeLVSf0iiE71
   VRThPvCbf/FGle+eL9s7LA55w0RLAQsCVxV/fsBjwwjrGRIDC8eV9/y0H
   dmIm4rYKTVTyYQO+O4Z/+6mu1Gp1ZIRVXEX6F5D3xgn5pudNzrKYYvMct
   k=;
Authentication-Results: esa1.hc3370-68.iphmx.com; dkim=pass (signature verified) header.i=@citrix.onmicrosoft.com
IronPort-SDR: +A6iCFZLI6rXnxmFGJOfd3k8JDSXNOVdeDCEXUdwGYwtgSr6OVJ63IB2y9kTZzLm4sktvEoB42
 AbzRV5Wc9bFpVbi/Q9T+s4DlEqC0mcDTy6jhBpnGQ0T1bM4Q0e2vdatMiTT+rPKxE26UF60mVd
 2eaMqOolIxZvLy2rBemlUZi2tyn9PN7LDBWozyj96Ama/jA2FG7MYl+V+02YjAHaVU1fMxqOHP
 clbf0fQ2bDuPSxRQvsQUZYt+qgwCsWjnwq3VT/PMDZ5kYqWAkgKmrOXlc3eB+YxUW95piKyeSH
 mIM=
X-SBRS: 5.1
X-MesageID: 45249119
X-Ironport-Server: esa1.hc3370-68.iphmx.com
X-Remote-IP: 162.221.158.21
X-Policy: $RELAYED
X-IronPort-AV: E=Sophos;i="5.83,230,1616472000"; 
   d="scan'208";a="45249119"
ARC-Seal: i=1; a=rsa-sha256; s=arcselector9901; d=microsoft.com; cv=none;
 b=eq0HHQa16ty6I6Sw1KK7gCgNcq4GyIBfkIH7izqrshKu4ASDTD2YlZsrIVPQ6vT1JwdiltXscFW4EoD8B/lBcRhZC+fntCxwHMhmVpqazSNPUTvle66bfoGZFf+PBrSJxV2RsyTR0SNRJ3BaXCIYhjWb9/PLXvgcT9ZS6ahqe0a70/FIo+/aE17uAuamKvqLcsGowbBFdv6y9P+e3DD+9ZHoHTOGY0wZhN98/2K8ZMcHRV4v2wOjPGvS6NtbBdzbHbNxLEoD7HdvZU7IfFjwNP3nMIBkXUKkNZdyjrbVwPNU56/EdFhj8J8Q04uCAIlGCyxcS1W4xO+pMFQN2AW5mA==
ARC-Message-Signature: i=1; a=rsa-sha256; c=relaxed/relaxed; d=microsoft.com;
 s=arcselector9901;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AtL+Hp/29nuKWkROvVXgdUUpYBKLh+7LK1UvsLfBAnw=;
 b=Ivw8QTXbgd70tuzaa37WLHViUjyfKa6m0Xvdj+Jn427ltUk5cE22rSFbv/khbz0jPgmfZoYPeNfqqzG9HdfeWQ8UXa6CwHGBoBWBeFhUVczmQ/4B6Sx1LCuYdLBHq7JqOstLcA7UXZ+I8mYV0/mQbolTzlR3VXwe7FHa28NUTd9jmfOg9S9HVnmPaojkoARXxz/sHPXcrQOso2ncD31RqhlBgzVB09CsBVgvyAdzImR+zr7iU18avUfKKHpb2WEDuYSfj7szvdolubW7O0d2d4SPDp/wXXVyLGz9M3HlswKtaov04c94q36sXuMFcOYfqfv1gDKRE5UZjtb1vtnDkQ==
ARC-Authentication-Results: i=1; mx.microsoft.com 1; spf=pass
 smtp.mailfrom=citrix.com; dmarc=pass action=none header.from=citrix.com;
 dkim=pass header.d=citrix.com; arc=none
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
 d=citrix.onmicrosoft.com; s=selector2-citrix-onmicrosoft-com;
 h=From:Date:Subject:Message-ID:Content-Type:MIME-Version:X-MS-Exchange-SenderADCheck;
 bh=AtL+Hp/29nuKWkROvVXgdUUpYBKLh+7LK1UvsLfBAnw=;
 b=UJFZGiApEH1vy8jgKaaJZ+4rdKdmjxOjojLnKlEZMMTN8In71yspd7/+zwVhtLdfcZ2azEu+czUq0guRWd/JGC8CdmzQhX6y0/3mzhpf0Ap2oCs/WbuxCTGt8OGTxxxZir6bPFORz5sA222biacpCqU0FzmcE38H+i0VMVn2Pas=
From: Roger Pau Monne <roger.pau@citrix.com>
To: <xen-devel@lists.xenproject.org>
CC: Roger Pau Monne <roger.pau@citrix.com>, Jun Nakajima
	<jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>, Jan Beulich
	<jbeulich@suse.com>, Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu
	<wl@xen.org>, George Dunlap <george.dunlap@citrix.com>
Subject: [PATCH 3/3] x86/ept: force WB cache attributes for grant and foreign maps
Date: Fri, 28 May 2021 19:39:35 +0200
Message-ID: <20210528173935.29919-4-roger.pau@citrix.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20210528173935.29919-1-roger.pau@citrix.com>
References: <20210528173935.29919-1-roger.pau@citrix.com>
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit
X-ClientProxiedBy: MRXP264CA0003.FRAP264.PROD.OUTLOOK.COM
 (2603:10a6:500:15::15) To DS7PR03MB5608.namprd03.prod.outlook.com
 (2603:10b6:5:2c9::18)
MIME-Version: 1.0
X-MS-Exchange-MessageSentRepresentingType: 1
X-MS-PublicTrafficType: Email
X-MS-Office365-Filtering-Correlation-Id: a4af593d-a375-4a60-cea1-08d921ff9e03
X-MS-TrafficTypeDiagnostic: DM4PR03MB5966:
X-LD-Processed: 335836de-42ef-43a2-b145-348c2ee9ca5b,ExtAddr
X-MS-Exchange-Transport-Forked: True
X-Microsoft-Antispam-PRVS: <DM4PR03MB5966DBCFCA649CB96378924C8F229@DM4PR03MB5966.namprd03.prod.outlook.com>
X-MS-Oob-TLC-OOBClassifiers: OLM:7691;
X-MS-Exchange-SenderADCheck: 1
X-Microsoft-Antispam: BCL:0;
X-MS-Exchange-CrossTenant-Network-Message-Id: a4af593d-a375-4a60-cea1-08d921ff9e03
X-MS-Exchange-CrossTenant-AuthSource: DS7PR03MB5608.namprd03.prod.outlook.com
X-MS-Exchange-CrossTenant-AuthAs: Internal
X-MS-Exchange-CrossTenant-OriginalArrivalTime: 28 May 2021 17:40:00.3203
 (UTC)
X-MS-Exchange-CrossTenant-FromEntityHeader: Hosted
X-MS-Exchange-CrossTenant-Id: 335836de-42ef-43a2-b145-348c2ee9ca5b
X-MS-Exchange-CrossTenant-MailboxType: HOSTED
X-MS-Exchange-CrossTenant-UserPrincipalName: TdcYqbP2mgA9Fh5hUhfZm4Tx+YmL8FNLGFav7GYHvq1HX0EN9wCvr99EbgxiEdgDEz3eAMm6Zh/ETzQEM0XNrA==
X-MS-Exchange-Transport-CrossTenantHeadersStamped: DM4PR03MB5966
X-OriginatorOrg: citrix.com

Force WB type for grants and foreign pages. Those are usually mapped
over unpopulated physical ranges in the p2m, and such ranges would
usually be UC in the MTRR state, which is unlikely to be the correct
cache attribute. It's also cumbersome (or even impossible) for the
guest to set the MTRR type for all those mappings to WB, as MTRR
ranges are finite.

Note that on AMD we cannot force a cache attribute because of the lack
of an ignore-PAT equivalent, so the behavior here slightly diverges
between AMD and Intel (i.e. EPT vs NPT/shadow).

Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>
---
 xen/arch/x86/hvm/vmx/vmx.c        |  2 +-
 xen/arch/x86/mm/p2m-ept.c         | 35 ++++++++++++++++++++++++++-----
 xen/include/asm-x86/hvm/vmx/vmx.h |  2 +-
 3 files changed, 32 insertions(+), 7 deletions(-)

diff --git a/xen/arch/x86/hvm/vmx/vmx.c b/xen/arch/x86/hvm/vmx/vmx.c
index 0d4b47681b..e09b7e3af9 100644
--- a/xen/arch/x86/hvm/vmx/vmx.c
+++ b/xen/arch/x86/hvm/vmx/vmx.c
@@ -423,7 +423,7 @@ static void domain_creation_finished(struct domain *d)
         return;
 
     ASSERT(epte_get_entry_emt(d, gfn, apic_access_mfn, 0, &ipat,
-                              true) == MTRR_TYPE_WRBACK);
+                              p2m_mmio_direct) == MTRR_TYPE_WRBACK);
     ASSERT(ipat);
 
     if ( set_mmio_p2m_entry(d, gfn, apic_access_mfn, PAGE_ORDER_4K) )
diff --git a/xen/arch/x86/mm/p2m-ept.c b/xen/arch/x86/mm/p2m-ept.c
index f1d1d07e92..59c0325473 100644
--- a/xen/arch/x86/mm/p2m-ept.c
+++ b/xen/arch/x86/mm/p2m-ept.c
@@ -487,11 +487,12 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
 }
 
 int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
-                       unsigned int order, bool *ipat, bool direct_mmio)
+                       unsigned int order, bool *ipat, p2m_type_t type)
 {
     int gmtrr_mtype, hmtrr_mtype;
     struct vcpu *v = current;
     unsigned long i;
+    bool direct_mmio = type == p2m_mmio_direct;
 
     *ipat = false;
 
@@ -535,9 +536,33 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
         }
     }
 
-    if ( direct_mmio )
+    switch ( type )
+    {
+    case p2m_mmio_direct:
         return MTRR_TYPE_UNCACHABLE;
 
+    case p2m_grant_map_ro:
+    case p2m_grant_map_rw:
+    case p2m_map_foreign:
+        /*
+         * Force WB type for grants and foreign pages. Those are usually mapped
+         * over unpopulated physical ranges in the p2m, and those would usually
+         * be UC in the MTRR state, which is unlikely to be the correct cache
+         * attribute. It's also cumbersome (or even impossible) for the guest
+         * to be setting the MTRR type for all those mappings as WB, as MTRR
+         * ranges are finite.
+         *
+         * Note that on AMD we cannot force a cache attribute because of the
+         * lack of ignore PAT equivalent, so the behavior here slightly
+         * diverges. See p2m_type_to_flags for the AMD attributes.
+         */
+        *ipat = true;
+        return MTRR_TYPE_WRBACK;
+
+    default:
+        break;
+    }
+
     gmtrr_mtype = hvm_get_mem_pinned_cacheattr(d, gfn, order);
     if ( gmtrr_mtype >= 0 )
     {
@@ -640,7 +665,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                         continue;
                     e.emt = epte_get_entry_emt(p2m->domain, _gfn(gfn + i),
                                                _mfn(e.mfn), 0, &ipat,
-                                               e.sa_p2mt == p2m_mmio_direct);
+                                               e.sa_p2mt);
                     e.ipat = ipat;
 
                     nt = p2m_recalc_type(e.recalc, e.sa_p2mt, p2m, gfn + i);
@@ -659,7 +684,7 @@ static int resolve_misconfig(struct p2m_domain *p2m, unsigned long gfn)
                 int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn),
                                              _mfn(e.mfn),
                                              level * EPT_TABLE_ORDER, &ipat,
-                                             e.sa_p2mt == p2m_mmio_direct);
+                                             e.sa_p2mt);
                 bool_t recalc = e.recalc;
 
                 if ( recalc && p2m_is_changeable(e.sa_p2mt) )
@@ -895,7 +920,7 @@ ept_set_entry(struct p2m_domain *p2m, gfn_t gfn_, mfn_t mfn,
         bool ipat;
         int emt = epte_get_entry_emt(p2m->domain, _gfn(gfn), mfn,
                                      i * EPT_TABLE_ORDER, &ipat,
-                                     p2mt == p2m_mmio_direct);
+                                     p2mt);
 
         if ( emt >= 0 )
             new_entry.emt = emt;
diff --git a/xen/include/asm-x86/hvm/vmx/vmx.h b/xen/include/asm-x86/hvm/vmx/vmx.h
index f668ee1f09..0deb507490 100644
--- a/xen/include/asm-x86/hvm/vmx/vmx.h
+++ b/xen/include/asm-x86/hvm/vmx/vmx.h
@@ -600,7 +600,7 @@ void ept_p2m_uninit(struct p2m_domain *p2m);
 void ept_walk_table(struct domain *d, unsigned long gfn);
 bool_t ept_handle_misconfig(uint64_t gpa);
 int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
-                       unsigned int order, bool *ipat, bool direct_mmio);
+                       unsigned int order, bool *ipat, p2m_type_t type);
 void setup_ept_dump(void);
 void p2m_init_altp2m_ept(struct domain *d, unsigned int i);
 /* Locate an alternate p2m by its EPTP */
-- 
2.31.1



From xen-devel-bounces@lists.xenproject.org Fri May 28 21:50:47 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Fri, 28 May 2021 21:50:47 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134121.249679 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmkNC-0002SV-7f; Fri, 28 May 2021 21:50:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134121.249679; Fri, 28 May 2021 21:50:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmkNC-0002SO-4Y; Fri, 28 May 2021 21:50:30 +0000
Received: by outflank-mailman (input) for mailman id 134121;
 Fri, 28 May 2021 21:50:28 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=W8ur=KX=amazon.com=prvs=7756afe29=anchalag@srs-us1.protection.inumbo.net>)
 id 1lmkNA-0002SI-KJ
 for xen-devel@lists.xenproject.org; Fri, 28 May 2021 21:50:28 +0000
Received: from smtp-fw-80007.amazon.com (unknown [99.78.197.218])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 8e4e52f1-809f-4863-a3bd-90925a9c0585;
 Fri, 28 May 2021 21:50:27 +0000 (UTC)
Received: from pdx4-co-svc-p1-lb2-vlan3.amazon.com (HELO
 email-inbound-relay-1a-67b371d8.us-east-1.amazon.com) ([10.25.36.214])
 by smtp-border-fw-80007.pdx80.corp.amazon.com with ESMTP;
 28 May 2021 21:50:25 +0000
Received: from EX13MTAUEE001.ant.amazon.com
 (iad55-ws-svc-p15-lb9-vlan2.iad.amazon.com [10.40.159.162])
 by email-inbound-relay-1a-67b371d8.us-east-1.amazon.com (Postfix) with ESMTPS
 id 1C50FA1E03; Fri, 28 May 2021 21:50:17 +0000 (UTC)
Received: from EX13D08UEE004.ant.amazon.com (10.43.62.182) by
 EX13MTAUEE001.ant.amazon.com (10.43.62.200) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Fri, 28 May 2021 21:50:09 +0000
Received: from EX13MTAUWA001.ant.amazon.com (10.43.160.58) by
 EX13D08UEE004.ant.amazon.com (10.43.62.182) with Microsoft SMTP Server (TLS)
 id 15.0.1497.18; Fri, 28 May 2021 21:50:09 +0000
Received: from dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com
 (172.22.96.68) by mail-relay.amazon.com (10.43.160.118) with Microsoft SMTP
 Server id 15.0.1497.18 via Frontend Transport; Fri, 28 May 2021 21:50:09
 +0000
Received: by dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com (Postfix,
 from userid 4335130)
 id D5E6A4001D; Fri, 28 May 2021 21:50:08 +0000 (UTC)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 8e4e52f1-809f-4863-a3bd-90925a9c0585
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
  d=amazon.com; i=@amazon.com; q=dns/txt; s=amazon201209;
  t=1622238628; x=1653774628;
  h=date:from:to:cc:message-id:references:mime-version:
   in-reply-to:subject;
  bh=rc/9y3ypxvjAsFbfUCSM+1dHQ6N3pxPlJofRB4Y3Fno=;
  b=vADY5aULc1CBxb4qITHynMW6v7IB1p4JkM+DvtPSCQsatKoDJlvqznKH
   khf8on5AhYrS8yyM+OYgPd3DoWcMnGOxwtK11x/8eva+fVxSvFo8wdq3e
   4sDG6VbwiXB1Q8KzHf7FW0X18EqxxefxOJq9mppe+24tOuKDAYTEh0pgk
   I=;
X-IronPort-AV: E=Sophos;i="5.83,231,1616457600"; 
   d="scan'208";a="3874630"
Subject: Re: [PATCH v3 01/11] xen/manage: keep track of the on-going suspend mode
Date: Fri, 28 May 2021 21:50:08 +0000
From: Anchal Agarwal <anchalag@amazon.com>
To: Boris Ostrovsky <boris.ostrovsky@oracle.com>
CC: "tglx@linutronix.de" <tglx@linutronix.de>, "mingo@redhat.com"
	<mingo@redhat.com>, "bp@alien8.de" <bp@alien8.de>, "hpa@zytor.com"
	<hpa@zytor.com>, "jgross@suse.com" <jgross@suse.com>,
	"linux-pm@vger.kernel.org" <linux-pm@vger.kernel.org>, "linux-mm@kvack.org"
	<linux-mm@kvack.org>, "sstabellini@kernel.org" <sstabellini@kernel.org>,
	"konrad.wilk@oracle.com" <konrad.wilk@oracle.com>, "roger.pau@citrix.com"
	<roger.pau@citrix.com>, "axboe@kernel.dk" <axboe@kernel.dk>,
	"davem@davemloft.net" <davem@davemloft.net>, "rjw@rjwysocki.net"
	<rjw@rjwysocki.net>, "len.brown@intel.com" <len.brown@intel.com>,
	"pavel@ucw.cz" <pavel@ucw.cz>, "peterz@infradead.org" <peterz@infradead.org>,
	"xen-devel@lists.xenproject.org" <xen-devel@lists.xenproject.org>,
	"vkuznets@redhat.com" <vkuznets@redhat.com>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>, "linux-kernel@vger.kernel.org"
	<linux-kernel@vger.kernel.org>, <dwmw@amazon.co.uk>,
	<benh@kernel.crashing.org>, <aams@amazon.com>,
	<anchalag@amazon.com>
Message-ID: <20210528215008.GA19622@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
References: <20200925190423.GA31885@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <274ddc57-5c98-5003-c850-411eed1aea4c@oracle.com>
 <20200925222826.GA11755@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <cc738014-6a79-a5ae-cb2a-a02ff15b4582@oracle.com>
 <20200930212944.GA3138@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <8cd59d9c-36b1-21cf-e59f-40c5c20c65f8@oracle.com>
 <20210521052650.GA19056@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <0b1f0772-d1b1-0e59-8e99-368e54d40fbf@oracle.com>
 <20210526044038.GA16226@dev-dsk-anchalag-2a-9c2d1d96.us-west-2.amazon.com>
 <33380567-f86c-5d85-a79e-c1cd889f8ec2@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Disposition: inline
In-Reply-To: <33380567-f86c-5d85-a79e-c1cd889f8ec2@oracle.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Precedence: Bulk

On Wed, May 26, 2021 at 02:29:53PM -0400, Boris Ostrovsky wrote:
> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> 
> 
> 
> On 5/26/21 12:40 AM, Anchal Agarwal wrote:
> > On Tue, May 25, 2021 at 06:23:35PM -0400, Boris Ostrovsky wrote:
> >> CAUTION: This email originated from outside of the organization. Do not click links or open attachments unless you can confirm the sender and know the content is safe.
> >>
> >>
> >>
> >> On 5/21/21 1:26 AM, Anchal Agarwal wrote:
> >>>>> What I meant there wrt VCPU info was that VCPU info is not unregistered during hibernation,
> >>>>> so Xen still remembers the old physical addresses for the VCPU information, created by the
> >>>>> booting kernel. But since the hibernation kernel may have different physical
> >>>>> addresses for VCPU info and if mismatch happens, it may cause issues with resume.
> >>>>> During hibernation, the VCPU info register hypercall is not invoked again.
> >>>> I still don't think that's the cause but it's certainly worth having a look.
> >>>>
> >>> Hi Boris,
> >>> Apologies for picking this up after last year.
> >>> I did some dive deep on the above statement and that is indeed the case that's happening.
> >>> I did some debugging around KASLR and hibernation using reboot mode.
> >>> I observed in my debug prints that whenever vcpu_info* address for secondary vcpu assigned
> >>> in xen_vcpu_setup at boot is different than what is in the image, resume gets stuck for that vcpu
> >>> in bringup_cpu(). That means we have different addresses for &per_cpu(xen_vcpu_info, cpu) at boot and after
> >>> control jumps into the image.
> >>>
> >>> I failed to get any prints after it got stuck in bringup_cpu() and
> >>> I do not have an option to send a sysrq signal to the guest or rather get a kdump.
> >>
> >> xenctx and xen-hvmctx might be helpful.
> >>
> >>
> >>> This change is not observed in every hibernate-resume cycle. I am not sure if this is a bug or an
> >>> expected behavior.
> >>> Also, I am contemplating the idea that it may be a bug in xen code getting triggered only when
> >>> KASLR is enabled but I do not have substantial data to prove that.
> >>> Is this a coincidence that this always happens for 1st vcpu?
> >>> Moreover, since hypervisor is not aware that guest is hibernated and it looks like a regular shutdown to dom0 during reboot mode,
> >>> will re-registering vcpu_info for secondary vcpu's even plausible?
> >>
> >> I think I am missing how this is supposed to work (maybe we've talked about this but it's been many months since then). You hibernate the guest and it writes the state to swap. The guest is then shut down? And what's next? How do you wake it up?
> >>
> >>
> >> -boris
> >>
> > To resume a guest, the guest boots up as a fresh guest, and then software_resume()
> > is called; if it finds a stored hibernation image, it quiesces the devices and loads
> > the memory contents from the image. Control then transfers to the target kernel.
> > This further disables the non-boot cpus; syscore_suspend/resume callbacks are invoked, which set up
> > the shared_info, pvclock, grant tables, etc. Since the vcpu_info pointer for each
> > non-boot cpu is already registered, the hypercall does not happen again when
> > bringing up the non-boot cpus. This leads to the inconsistencies pointed
> > out earlier when KASLR is enabled.
> 
> 
> I'd think the 'if' condition in the code fragment below should always fail, since the hypervisor is creating a new guest, resulting in the hypercall. Just like in the case of save/restore.
>
That only fails during boot, not after control jumps into the image. The
non-boot cpus are taken offline (freeze_secondary_cpus) and then brought back online via the
cpu hotplug path. In that case xen_vcpu_setup() does not invoke the hypercall again.
> 
> Do you call xen_vcpu_info_reset() on resume? That will re-initialize per_cpu(xen_vcpu). Maybe you need to add this to xen_syscore_resume().
> 
Yes, coincidentally I did. The registration of vcpu_info then fails with error -22 (-EINVAL),
basically because nobody unregistered the vcpus and Xen does not know that the guest hibernated
in the first place.

Moreover, syscore_resume is also called on the hibernation path, i.e. after the image is
created: everything is resumed and thawed back before the final write of the image,
followed by a machine shutdown. So syscore_resume should invoke xen_vcpu_info_reset
only when it is actually resuming from the image. Luckily, I was able to use the in_suspend
variable to detect that.
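A rough sketch of that gating, purely illustrative and not compilable as-is: resuming_from_image() is a hypothetical predicate standing in for the in_suspend check, and, as noted above, the subsequent re-registration still fails with -22, so this alone is not a fix:

```c
/* Illustrative sketch only -- not upstream code.  resuming_from_image()
 * is a hypothetical predicate derived from kernel/power's in_suspend
 * state, so the reset runs only on the true resume-from-image path and
 * not on the image-creation pass of the same callback. */
static void xen_syscore_resume(void)
{
	int cpu;

	if (resuming_from_image()) {
		/* Drop the stale per-vcpu pointers inherited from the
		 * image; Xen still rejects the subsequent
		 * re-registration with -EINVAL, as described above. */
		for_each_possible_cpu(cpu)
			xen_vcpu_info_reset(cpu);
	}

	/* ... existing shared_info / pvclock / grant table restore ... */
}
```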

Another line of thought is to do something like what kexec does to get around this problem:
abuse soft_reset and issue it during syscore_resume, or maybe before the image gets loaded.
I haven't experimented with that yet, as I assume there has to be a way to re-register vcpus during resume.

Thanks,
Anchal
> 
> -boris
> 
> 
> >
> > Thanks,
> > Anchal
> >>
> >>>  I could definitely use some advice to debug this further.
> >>>
> >>>
> >>> Some printk's from my debugging:
> >>>
> >>> At Boot:
> >>>
> >>> xen_vcpu_setup: xen_have_vcpu_info_placement=1 cpu=1, vcpup=0xffff9e548fa560e0, info.mfn=3996246 info.offset=224,
> >>>
> >>> Image Loads:
> >>> It ends up in the condition:




> >>>  xen_vcpu_setup()
> >>>  {
> >>>  ...
> >>>  if (xen_hvm_domain()) {
> >>>         if (per_cpu(xen_vcpu, cpu) == &per_cpu(xen_vcpu_info, cpu))
> >>>                 return 0;
> >>>  }
> >>>  ...
> >>>  }
> >>>
> >>> xen_vcpu_setup: checking mfn on resume cpu=1, info.mfn=3934806 info.offset=224, &per_cpu(xen_vcpu_info, cpu)=0xffff9d7240a560e0
> >>>
> >>> This is tested on c4.2xlarge [8vcpu 15GB mem] instance with 5.10 kernel running
> >>> in the guest.
> >>>
> >>> Thanks,
> >>> Anchal.
> >>>> -boris
> >>>>
> >>>>


From xen-devel-bounces@lists.xenproject.org Sat May 29 00:12:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 00:12:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134132.249691 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmma1-0007Go-8l; Sat, 29 May 2021 00:11:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134132.249691; Sat, 29 May 2021 00:11:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmma1-0007Gh-5T; Sat, 29 May 2021 00:11:53 +0000
Received: by outflank-mailman (input) for mailman id 134132;
 Sat, 29 May 2021 00:11:51 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmmZz-0007GX-3f; Sat, 29 May 2021 00:11:51 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmmZy-000251-RW; Sat, 29 May 2021 00:11:50 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmmZy-0001K2-IB; Sat, 29 May 2021 00:11:50 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmmZy-0008I2-Hh; Sat, 29 May 2021 00:11:50 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=sXn369TP40cUH/N6cGTG1qr/fCuY4NqjLu/YsRyL954=; b=kC6OvwWYECrYZaYsj9siRL+Iel
	NFIcXJeLtQBppSCL1mXRWsP14+VXeEDaLWwf9Zbaj3tUKdw2iAuWVZOJDGQ5ZqrOd1Pbnl2NEu5xV
	08YL2t8uugCSLC56mtOy3V1675O2DwmEJ7d20lGOMHaAQr8FOKv7SvpKsRjxGgmmKgVM=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162246-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162246: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=97e5bf604b7a0d6e1b3e00fe31d5fd4b9bffeaae
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 00:11:50 +0000

flight 162246 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162246/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162241

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                97e5bf604b7a0d6e1b3e00fe31d5fd4b9bffeaae
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  301 days
Failing since        152366  2020-08-01 20:49:34 Z  300 days  510 attempts
Testing same since   162241  2021-05-27 23:12:27 Z    1 days    2 attempts

------------------------------------------------------------
6105 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1658927 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 29 03:54:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 03:54:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134142.249705 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmq3e-0007jF-W0; Sat, 29 May 2021 03:54:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134142.249705; Sat, 29 May 2021 03:54:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmq3e-0007j8-R7; Sat, 29 May 2021 03:54:42 +0000
Received: by outflank-mailman (input) for mailman id 134142;
 Sat, 29 May 2021 03:54:40 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmq3c-0007ix-QJ; Sat, 29 May 2021 03:54:40 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmq3c-0002xo-LN; Sat, 29 May 2021 03:54:40 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmq3c-0005Nq-9O; Sat, 29 May 2021 03:54:40 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmq3c-0000hY-7J; Sat, 29 May 2021 03:54:40 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=x6wxAGIiELT+YOVHgXYtL5eLNCN1lmeVIHFtqqBlBZc=; b=cifH/qKPkbJinD4XAyyL4S5nsh
	XaiV0VcotD2C5RgsKwaM2zNDxbQEddJpb9B/uHv+iOeDbTSE/+e9y3NiK7g/dr8Dv1nNpr08hdhVB
	j+T45pyVxXLci1nsDxv4NYglCcU8phWeadi/RfYIirXYQR+w+Z0TCmdX7Jc/qOoCZT94=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162247-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-5.4 test] 162247: tolerable FAIL - PUSHED
X-Osstest-Failures:
    linux-5.4:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-5.4:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    linux-5.4:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-5.4:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-5.4:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=103f1dbea1ae44731edca02cd7fcfa4a33742cd2
X-Osstest-Versions-That:
    linux=67154cff6258e46b05acc9f797e3328ed839b0e2
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 03:54:40 +0000

flight 162247 linux-5.4 real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162247/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162175
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162175
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162175
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162175
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162175
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162175
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162175
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162175
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162175
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162175
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162175
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 linux                103f1dbea1ae44731edca02cd7fcfa4a33742cd2
baseline version:
 linux                67154cff6258e46b05acc9f797e3328ed839b0e2

Last test of basis   162175  2021-05-27 00:41:39 Z    2 days
Testing same since   162247  2021-05-28 11:41:53 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Alexei Starovoitov <ast@kernel.org>
  Arnaldo Carvalho de Melo <acme@redhat.com>
  Daniel Borkmann <daniel@iogearbox.net>
  Dave Rigby <d.rigby@me.com>
  David S. Miller <davem@davemloft.net>
  Dongliang Mu <mudongliangabcd@gmail.com>
  Felipe Balbi <balbi@kernel.org>
  Florian Fainelli <f.fainelli@gmail.com>
  Greg Kroah-Hartman <gregkh@linuxfoundation.org>
  Hulk Robot <hulkrobot@huawei.com>
  Jack Pham <jackp@codeaurora.org>
  Jan Kratochvil <jan.kratochvil@redhat.com>
  Jiri Olsa <jolsa@redhat.com>
  Jon Hunter <jonathanh@nvidia.com>
  Linux Kernel Functional Testing <lkft@linaro.org>
  Shuah Khan <skhan@linuxfoundation.org>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

hint: The 'hooks/update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-receive' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
hint: The 'hooks/post-update' hook was ignored because it's not set as executable.
hint: You can disable this warning with `git config advice.ignoredHook false`.
To xenbits.xen.org:/home/xen/git/linux-pvops.git
   67154cff6258..103f1dbea1ae  103f1dbea1ae44731edca02cd7fcfa4a33742cd2 -> tested/linux-5.4


From xen-devel-bounces@lists.xenproject.org Sat May 29 09:02:30 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 09:02:30 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134158.249734 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmurJ-00039K-V3; Sat, 29 May 2021 09:02:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134158.249734; Sat, 29 May 2021 09:02:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmurJ-00039D-Rx; Sat, 29 May 2021 09:02:17 +0000
Received: by outflank-mailman (input) for mailman id 134158;
 Sat, 29 May 2021 09:02:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmurI-000391-6R; Sat, 29 May 2021 09:02:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmurI-0000bp-2F; Sat, 29 May 2021 09:02:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmurH-0003RK-JD; Sat, 29 May 2021 09:02:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmurH-00065i-Ii; Sat, 29 May 2021 09:02:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=tz/2EicNWG2Y4Kh4BFeQOxPGQhP+ZA2Ypba8RD2ptyc=; b=FyNQ3STpSfGm4V/LfRqrvaRxAa
	i+aWCQjrQawhilmDWxFXA0Y4+HKygXad8Cbzqu6V5KcaIuCBtpYZAu8PwW/IzdZttaWdF38Isbaud
	T7yrDHO5iL311d/Z8wLDpNHhqcRVtPxkOaqNBD64M9MQMN75RmU6ZSXOAUox7Pj4RO+Y=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162250-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162250: tolerable FAIL - PUSHED
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
X-Osstest-Versions-That:
    xen=9fdcf851689cb2a9501d3947cb5d767d9c7797e8
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 09:02:15 +0000

flight 162250 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162250/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162242
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162242
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162242
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162242
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162242
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162242
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162242
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162242
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162242
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162242
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162242
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695
baseline version:
 xen                  9fdcf851689cb2a9501d3947cb5d767d9c7797e8

Last test of basis   162242  2021-05-28 02:17:13 Z    1 days
Testing same since   162250  2021-05-28 14:08:04 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   9fdcf85168..683d899e4b  683d899e4bffca35c5b192ea0662362b0270a695 -> master


From xen-devel-bounces@lists.xenproject.org Sat May 29 12:25:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 12:25:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134171.249749 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmy1l-0006JP-KX; Sat, 29 May 2021 12:25:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134171.249749; Sat, 29 May 2021 12:25:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmy1l-0006JI-Gh; Sat, 29 May 2021 12:25:17 +0000
Received: by outflank-mailman (input) for mailman id 134171;
 Sat, 29 May 2021 12:25:16 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmy1k-0006Is-8e; Sat, 29 May 2021 12:25:16 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmy1k-00041U-2R; Sat, 29 May 2021 12:25:16 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmy1j-0005hF-M1; Sat, 29 May 2021 12:25:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmy1j-0000o4-LU; Sat, 29 May 2021 12:25:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=vYYIX5/8kUdnjzP1w+7hNaxEF6ABEOAKn05UJZK6ESE=; b=LQnw6/EYcCdorIEjshLSohm861
	aMrPWuHCGLrI79+L1KE3m9Dt6Niej59q/tyJMsbBq/JWIWANp7psz1Ynw4e9h64HcK8SnCaBkjdBz
	+VsGfN20DoJ8C5vxL+3a9Zm1JJAM8ryb41ttwW/ox5kIkveApgD/gV+OYGsxKu80KftU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162254-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162254: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=70f53b1c04cfed8529c87c7be8ca4c76d1123a30
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 12:25:15 +0000

flight 162254 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162254/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              70f53b1c04cfed8529c87c7be8ca4c76d1123a30
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  323 days
Failing since        151818  2020-07-11 04:18:52 Z  322 days  315 attempts
Testing same since   162243  2021-05-28 04:20:06 Z    1 days    2 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59708 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 29 12:41:54 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 12:41:54 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134179.249763 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmyHk-00006N-4P; Sat, 29 May 2021 12:41:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134179.249763; Sat, 29 May 2021 12:41:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lmyHk-00006G-0s; Sat, 29 May 2021 12:41:48 +0000
Received: by outflank-mailman (input) for mailman id 134179;
 Sat, 29 May 2021 12:41:47 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmyHi-000066-UN; Sat, 29 May 2021 12:41:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmyHi-0004I9-NS; Sat, 29 May 2021 12:41:46 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lmyHi-0006JX-Cr; Sat, 29 May 2021 12:41:46 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lmyHi-0006vT-CN; Sat, 29 May 2021 12:41:46 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=o26lO6ZbAaibri2aetgwWMSTW8UcLsroRgX5EUg8YNg=; b=rUPXjGjrJkvlgxnvJ7UbY767N5
	M2HKgoPfMHO40vIC1gWlTgG3HPg2VYnBlkjIm8AsAGcG23n1Q3aBchp/hEpXXQCQYe+nWr/4j7d7+
	WIj36f7BvSVTDj7VQ0OnCUjxVCkIEOHqNBbgGIbpZSwKSZezCCOwxj87ijIZ7o24P7Eg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162252-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162252: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=7258034ab40e6927acbd005feb295eb3acf972bb
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 12:41:46 +0000

flight 162252 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162252/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                7258034ab40e6927acbd005feb295eb3acf972bb
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  282 days
Failing since        152659  2020-08-21 14:07:39 Z  280 days  518 attempts
Testing same since   162252  2021-05-28 17:09:47 Z    0 days    1 attempts

------------------------------------------------------------
514 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 163083 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 29 15:57:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 15:57:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134189.249777 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln1Kv-0000ZU-DJ; Sat, 29 May 2021 15:57:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134189.249777; Sat, 29 May 2021 15:57:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln1Kv-0000ZN-A2; Sat, 29 May 2021 15:57:17 +0000
Received: by outflank-mailman (input) for mailman id 134189;
 Sat, 29 May 2021 15:57:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln1Kt-0000ZD-TQ; Sat, 29 May 2021 15:57:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln1Kt-0007Uq-Ku; Sat, 29 May 2021 15:57:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln1Kt-0007aj-9B; Sat, 29 May 2021 15:57:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ln1Kt-0006ES-8f; Sat, 29 May 2021 15:57:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=pwhKNdYw6Ux9erm3in3G8/2YvlAwyhSX5A3o/RCYrAw=; b=nuVaLx/ozl6es3/TELGgIy9HIY
	Y/HgmsChUYMKyzAC61CNZo8rzqvdimbFJTpeLUJHOTn3goHDTG112IskSUAXltB5nXTyXbC6Qa6TX
	jLY7dbBgBFTkSW3bDKhTZpQAdDepW+/bi+0V5iYppLqzMWY7a8qmwhjuOnDx0KTlY9/M=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162253-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162253: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=5ff2756afde08b266fbb673849899fec694f39f1
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 15:57:15 +0000

flight 162253 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162253/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                5ff2756afde08b266fbb673849899fec694f39f1
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  301 days
Failing since        152366  2020-08-01 20:49:34 Z  300 days  511 attempts
Testing same since   162253  2021-05-29 00:42:31 Z    0 days    1 attempts

------------------------------------------------------------
6107 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1660171 lines long.)


From xen-devel-bounces@lists.xenproject.org Sat May 29 19:13:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 19:13:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134201.249790 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln4Op-0002Ce-PF; Sat, 29 May 2021 19:13:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134201.249790; Sat, 29 May 2021 19:13:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln4Op-0002CX-MJ; Sat, 29 May 2021 19:13:31 +0000
Received: by outflank-mailman (input) for mailman id 134201;
 Sat, 29 May 2021 19:13:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln4On-0002CN-Sd; Sat, 29 May 2021 19:13:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln4On-0002mM-LY; Sat, 29 May 2021 19:13:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln4On-0000mM-CC; Sat, 29 May 2021 19:13:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ln4On-0000uR-Bk; Sat, 29 May 2021 19:13:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=x+r5YfVl21rE72etvMgCPS7dqN91Wv1+XJASZm7I4ds=; b=uWR9W7jJFQmKybQGHRISJ4/26S
	sfdDbvPeCzscXaDflHn7DpQxrW7CLxYft70zVOlSIS1bt/GVBDYdtsTVNtqmGgzs6TnlB2QvOF4UB
	OjOQkEGWF2Sr82tDfbFChmJ6e6Z8jKUQSAhT+QYVkAGmbcoiauN8M+MFnrZccQsMZwZ8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162256-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162256: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=b8ed8c0fb2c9a5e97d0286a23827f2b5356ca552
X-Osstest-Versions-That:
    ovmf=e1999b264f1f9d7230edf2448f757c73da567832
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 19:13:29 +0000

flight 162256 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162256/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 b8ed8c0fb2c9a5e97d0286a23827f2b5356ca552
baseline version:
 ovmf                 e1999b264f1f9d7230edf2448f757c73da567832

Last test of basis   162217  2021-05-27 10:11:38 Z    2 days
Testing same since   162256  2021-05-29 11:11:12 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Laszlo Ersek <lersek@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   e1999b264f..b8ed8c0fb2  b8ed8c0fb2c9a5e97d0286a23827f2b5356ca552 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sat May 29 20:15:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 20:15:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134209.249805 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln5Mp-0008Ij-DF; Sat, 29 May 2021 20:15:31 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134209.249805; Sat, 29 May 2021 20:15:31 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln5Mp-0008Ic-AF; Sat, 29 May 2021 20:15:31 +0000
Received: by outflank-mailman (input) for mailman id 134209;
 Sat, 29 May 2021 20:15:29 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln5Mn-0008IM-Rt; Sat, 29 May 2021 20:15:29 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln5Mn-0003p0-Ga; Sat, 29 May 2021 20:15:29 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln5Mn-0003el-3V; Sat, 29 May 2021 20:15:29 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ln5Mn-0003lh-33; Sat, 29 May 2021 20:15:29 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=3zySDlHSAMO8czt8+8hOiVixL4NU4Mj0GeHB3hrYTW0=; b=cnwBP5sTC3sTNX1kxiVWeQenXx
	3fDsZatAahT94QDa8efplC+WlR0s0ClH4spCQyMQka37k6GIIJRfz9cisCZ6Tgusyy8p+o8OJO/yt
	t4NcTSs/pWDJ6c0hmGDMxPRAlwLQLLGcOGCMlOU05e9LPZDZN8TbzIr8gDCLkW0yeL5w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162255-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162255: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
X-Osstest-Versions-That:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 20:15:29 +0000

flight 162255 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162255/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162250
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162250
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162250
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162250
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162250
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162250
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162250
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162250
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162250
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162250
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162250
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695
baseline version:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695

Last test of basis   162255  2021-05-29 09:05:27 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sat May 29 23:44:35 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sat, 29 May 2021 23:44:35 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134220.249818 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln8ct-0001kn-KH; Sat, 29 May 2021 23:44:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134220.249818; Sat, 29 May 2021 23:44:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln8ct-0001kg-HM; Sat, 29 May 2021 23:44:19 +0000
Received: by outflank-mailman (input) for mailman id 134220;
 Sat, 29 May 2021 23:44:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln8ct-0001kW-1O; Sat, 29 May 2021 23:44:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln8cs-0007As-Qx; Sat, 29 May 2021 23:44:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln8cs-00058B-Dx; Sat, 29 May 2021 23:44:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ln8cs-0003sD-DT; Sat, 29 May 2021 23:44:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=v+BuGVsSUm610WpV3zO76f3W/NUTaQ0HLfmrq+ATeII=; b=nVtC6g33/j3RnT/qNUTUGBRCRA
	feAfVBVkWLpb5+YDQkQGl1i53povkvS+sLMhdp/5Usp9vnWfQFAm7xmk2W3BbNSdTFK+EEGYR9Iz+
	nfGJpgtIz/QrhVR8Eh5a9PTKHc3P+AROPyJ2JlkBHwhj+NjShIFPnRsF54XcUhRNzjoo=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162257-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162257: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=62c0ac5041e9130b041adfa13a41583d3c3ddd24
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sat, 29 May 2021 23:44:18 +0000

flight 162257 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162257/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                62c0ac5041e9130b041adfa13a41583d3c3ddd24
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  282 days
Failing since        152659  2020-08-21 14:07:39 Z  281 days  519 attempts
Testing same since   162257  2021-05-29 12:55:13 Z    0 days    1 attempts

------------------------------------------------------------
515 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 163638 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 30 00:44:28 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 00:44:28 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134229.249832 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln9Yt-000867-Hi; Sun, 30 May 2021 00:44:15 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134229.249832; Sun, 30 May 2021 00:44:15 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1ln9Yt-000860-Ei; Sun, 30 May 2021 00:44:15 +0000
Received: by outflank-mailman (input) for mailman id 134229;
 Sun, 30 May 2021 00:44:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln9Yr-00085q-Re; Sun, 30 May 2021 00:44:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln9Yr-0000JP-MJ; Sun, 30 May 2021 00:44:13 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1ln9Yr-0006e2-BR; Sun, 30 May 2021 00:44:13 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1ln9Yr-0000w7-Av; Sun, 30 May 2021 00:44:13 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=QAWRM65+BBBXuMc7/3YyE0HpUWTTsFm6b7Q/RDVS6uI=; b=iI3yWe/eCVpB5JTr9IHDPYMsns
	UIcyvO0liNPlmHvojVpGqAFiMGGzl2dGYDuCwe/1Ql91CgtPpL+kVRGXDjKBvZcewBK36pZ0Sn2R+
	5hmBBXmrqEb4w0NacoqiT33tXzKlkw7Z4lXQzARfbf3+wI14pHtzFJ/pju2inSxZIEUg=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162259-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162259: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=adfa3327d4fc25d5eff5fedcdb11ecde52a995cc
X-Osstest-Versions-That:
    ovmf=b8ed8c0fb2c9a5e97d0286a23827f2b5356ca552
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 00:44:13 +0000

flight 162259 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162259/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc
baseline version:
 ovmf                 b8ed8c0fb2c9a5e97d0286a23827f2b5356ca552

Last test of basis   162256  2021-05-29 11:11:12 Z    0 days
Testing same since   162259  2021-05-29 19:42:34 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Brijesh Singh <brijesh.singh@amd.com>
  Laszlo Ersek <lersek@redhat.com>
  Lendacky, Thomas <thomas.lendacky@amd.com>
  Tom Lendacky <thomas.lendacky@amd.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   b8ed8c0fb2..adfa3327d4  adfa3327d4fc25d5eff5fedcdb11ecde52a995cc -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Sun May 30 02:23:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 02:23:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134237.249847 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnB6F-0006TR-PA; Sun, 30 May 2021 02:22:47 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134237.249847; Sun, 30 May 2021 02:22:47 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnB6F-0006TJ-K0; Sun, 30 May 2021 02:22:47 +0000
Received: by outflank-mailman (input) for mailman id 134237;
 Sun, 30 May 2021 02:22:46 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnB6E-0006T9-0e; Sun, 30 May 2021 02:22:46 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnB6D-0007dW-ME; Sun, 30 May 2021 02:22:45 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnB6D-0001PN-AQ; Sun, 30 May 2021 02:22:45 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnB6D-0005F1-9t; Sun, 30 May 2021 02:22:45 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=7L/VOTFuU73zBOHsdzVy2r4F6pr8wrHbcofgi2dt5yo=; b=rp7FcBygVI+GIRKWBOmBn4t72i
	wQKcyekPcFxvA0mrlMRWWcsGRQdMOjh1Wsg+H+f80OunokI0SnMQsMiKzY4LlySofgewEGQTgkfwp
	91Bs0DRDN9ZeTAABuwihP5wO+hcvbBMrl6QnbilSpHCsj0sbIkshvLpw4PQ+MFu3X1To=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162258-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162258: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=6799d4f2da496cab9b3fd26283a8ce3639b1a88d
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 02:22:45 +0000

flight 162258 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162258/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                6799d4f2da496cab9b3fd26283a8ce3639b1a88d
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  302 days
Failing since        152366  2020-08-01 20:49:34 Z  301 days  512 attempts
Testing same since   162258  2021-05-29 16:01:15 Z    0 days    1 attempts

------------------------------------------------------------
6112 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1661604 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 30 07:37:09 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 07:37:09 +0000
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162260-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162260: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:debian-hvm-install:fail:heisenbug
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=62c0ac5041e9130b041adfa13a41583d3c3ddd24
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 07:36:53 +0000

flight 162260 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162260/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail in 162257 REGR. vs. 152631

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 12 debian-hvm-install fail pass in 162257

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 qemuu                62c0ac5041e9130b041adfa13a41583d3c3ddd24
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  282 days
Failing since        152659  2020-08-21 14:07:39 Z  281 days  520 attempts
Testing same since   162257  2021-05-29 12:55:13 Z    0 days    2 attempts

------------------------------------------------------------
515 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 163638 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 30 08:56:10 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 08:56:10 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134258.249874 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnHEm-0000G7-5a; Sun, 30 May 2021 08:56:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134258.249874; Sun, 30 May 2021 08:56:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnHEm-0000G0-2U; Sun, 30 May 2021 08:56:00 +0000
Received: by outflank-mailman (input) for mailman id 134258;
 Sun, 30 May 2021 08:55:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnHEk-0000Fq-Da; Sun, 30 May 2021 08:55:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnHEk-0006Zi-4X; Sun, 30 May 2021 08:55:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnHEj-0007d1-RR; Sun, 30 May 2021 08:55:57 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnHEj-0006OO-Qt; Sun, 30 May 2021 08:55:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=jT24UlsPZrs/PLcCU00g0AUESIUruUvFMjH/4Ne9piw=; b=oZvHNkcFpUz+d95Ju01tNmSG4I
	rFojeUIANDU6ygtEaMz87xDAI0c7+0LYYgjRuXHRMa/RVkJmyP7aldTv/lFp0hdu6ZFEolvkao7GK
	g+V9HIftZ5bt5zgTjRAAfgPrfvDKumGBEUUnHqQmkTZIOj6KAiJudV1FCdleNE0uOi3w=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162263-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162263: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=70f53b1c04cfed8529c87c7be8ca4c76d1123a30
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 08:55:57 +0000

flight 162263 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162263/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              70f53b1c04cfed8529c87c7be8ca4c76d1123a30
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  324 days
Failing since        151818  2020-07-11 04:18:52 Z  323 days  316 attempts
Testing same since   162243  2021-05-28 04:20:06 Z    2 days    3 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59708 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 30 10:19:31 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 10:19:31 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134268.249889 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnIXP-0007vH-Dy; Sun, 30 May 2021 10:19:19 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134268.249889; Sun, 30 May 2021 10:19:19 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnIXP-0007vA-AW; Sun, 30 May 2021 10:19:19 +0000
Received: by outflank-mailman (input) for mailman id 134268;
 Sun, 30 May 2021 10:19:18 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnIXO-0007v0-Pa; Sun, 30 May 2021 10:19:18 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnIXO-00088d-HE; Sun, 30 May 2021 10:19:18 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnIXO-0002Vy-8T; Sun, 30 May 2021 10:19:18 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnIXO-000059-7u; Sun, 30 May 2021 10:19:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UIjazsDUxpvVPdlogBgfcYwSR5z1AabvaZxDW3xX7wI=; b=sHEzyOtQ79UqRGlh9/u+bauFGX
	ckdd8wsYBmguY/MtAyG9QEcVjo6KtE6Gcppt3mqwOw3+8ciwWmLO0qkk6N4tVvD5Pv8evw1SKdFCJ
	TTBJKcrsyI+uc6akvONI2GCLFCDT4Kd8YCQgviZtWK38aC/Eov1LZ5KMHcgPc7n2EpLA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162265-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-coverity test] 162265: all pass - PUSHED
X-Osstest-Versions-This:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
X-Osstest-Versions-That:
    xen=3092006fc4e096a7eebb8042cb76d82b09ccece4
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 10:19:18 +0000

flight 162265 xen-unstable-coverity real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162265/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695
baseline version:
 xen                  3092006fc4e096a7eebb8042cb76d82b09ccece4

Last test of basis   162163  2021-05-26 09:18:28 Z    4 days
Testing same since   162265  2021-05-30 09:19:35 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Andrew Cooper <andrew.cooper3@citrix.com>
  Connor Davis <connojdavis@gmail.com>
  Jan Beulich <jbeulich@suse.com>
  Jason Andryuk <jandryuk@gmail.com>
  Julien Grall <jgrall@amazon.com>
  Roger Pau Monné <roger.pau@citrix.com>
  Roger Pau Monné <rogerpau@citrix.com>
  Wei Liu <wl@xen.org>

jobs:
 coverity-amd64                                               pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   3092006fc4..683d899e4b  683d899e4bffca35c5b192ea0662362b0270a695 -> coverity-tested/smoke


From xen-devel-bounces@lists.xenproject.org Sun May 30 12:14:38 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 12:14:38 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134279.249903 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnKKn-00020P-V2; Sun, 30 May 2021 12:14:25 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134279.249903; Sun, 30 May 2021 12:14:25 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnKKn-00020I-Rx; Sun, 30 May 2021 12:14:25 +0000
Received: by outflank-mailman (input) for mailman id 134279;
 Sun, 30 May 2021 12:14:25 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnKKn-000208-6u; Sun, 30 May 2021 12:14:25 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnKKm-0001aS-TZ; Sun, 30 May 2021 12:14:24 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnKKm-0006fe-Jg; Sun, 30 May 2021 12:14:24 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnKKm-0007Qs-JD; Sun, 30 May 2021 12:14:24 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Rig/+G11VKKi6QYLT4aXqUYul+8e2v3CAmaryZ9+1NE=; b=lgM7srDkc6uG0nQpC07q9RumgV
	EGAo7PrIV/CyLwclWYumnZXFluUvMFXG00bZT21mvVYfmq3+iwtfcooqInub7jXF0NPz25c+Mjpnn
	i66y7ji+LA+Q9dbSgEV8wEdiOxD5bBuoEbFk8I22kUhx5wJRewiglSjUfeEBuEWqxzt8=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162261-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162261: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:heisenbug
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
X-Osstest-Versions-That:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 12:14:24 +0000

flight 162261 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162261/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat  fail pass in 162255

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162255
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162255
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162255
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162255
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162255
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162255
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162255
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162255
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162255
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162255
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162255
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695
baseline version:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695

Last test of basis   162261  2021-05-30 01:52:48 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Sun May 30 15:59:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 15:59:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134314.249917 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnNq6-0004q6-8x; Sun, 30 May 2021 15:58:58 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134314.249917; Sun, 30 May 2021 15:58:58 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnNq6-0004pz-5p; Sun, 30 May 2021 15:58:58 +0000
Received: by outflank-mailman (input) for mailman id 134314;
 Sun, 30 May 2021 15:58:56 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnNq4-0004pp-Gv; Sun, 30 May 2021 15:58:56 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnNq4-0005Fw-7C; Sun, 30 May 2021 15:58:56 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnNq3-0006lB-Sc; Sun, 30 May 2021 15:58:55 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnNq3-0005vj-S9; Sun, 30 May 2021 15:58:55 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XZixkcpV6LxTckEeAgvGCiIR+U+fNdN/uwhsNXj94zg=; b=0ckPWTQXs1P8r8TJUFHrDBuTGH
	kC5dvrLHOgOBI6jika9zXlEcXZe1Jl/Hl7Xmyx+yA7TUnnw6KFX2DfQSXeULk4MPDxF47XLPzQeE1
	OXg85jmL91TyOkuVzwe/CLA8o99nkQFEFHltydbKOQ5hbw2biJ2splRnioHhTMTvRfCk=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162262-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162262: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-armhf-armhf-xl-arndale:xen-boot:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=df8c66c4cfb91f2372d138b9b714f6df6f506966
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 15:58:55 +0000

flight 162262 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162262/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-armhf-armhf-xl-arndale   8 xen-boot                 fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                df8c66c4cfb91f2372d138b9b714f6df6f506966
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  302 days
Failing since        152366  2020-08-01 20:49:34 Z  301 days  513 attempts
Testing same since   162262  2021-05-30 02:26:01 Z    0 days    1 attempts

------------------------------------------------------------
6125 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  fail    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1664518 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134298.249960 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE5-0005tk-Tv; Sun, 30 May 2021 17:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134298.249960; Sun, 30 May 2021 17:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE5-0005qz-Fh; Sun, 30 May 2021 17:27:49 +0000
Received: by outflank-mailman (input) for mailman id 134298;
 Sun, 30 May 2021 15:06:59 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN1n-0000PC-0d
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:06:59 +0000
Received: from mail-pg1-x532.google.com (unknown [2607:f8b0:4864:20::532])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id c687c071-24f7-40c4-8a2b-d5ee5086fc57;
 Sun, 30 May 2021 15:06:44 +0000 (UTC)
Received: by mail-pg1-x532.google.com with SMTP id n12so3188759pgs.13
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:44 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.42
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:43 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: c687c071-24f7-40c4-8a2b-d5ee5086fc57
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=as2MfWVTTjebtE6pObnR6ZA7Z7Mm2lquU8h3QkjiG+Y=;
        b=GpZ60FI6fTr2Eb0z+JTjvboH6NeKZstEkZ9Z3PcaEev5NTP5cyblGYKdaVvXLEr73E
         WX+AEtVQzFVfJaLV0rYdWNfG4LrBbdsrnRbFm4RpbKaCmNw1mQZDhRlTndRUyrZnQNhw
         sbNHxW6s7/Twp5y6+uUKltkXMZDm6+ka+AqcBJmYLAdpfVO1jFJa79kQ4XH1KSZwaAWT
         va/SSdgTJhh2i8jjzh4r1zZBeqmPTKcKLpJ/SXZH/mTj4+QEpCrFvfzA4muUqpzsyXKB
         ZZdJclz9Jcoqd0dAOmUrQ8hcyE2iaZOIYsoE5BbOWtOfJ+QgEo4cSySeRczXEFgKMmmr
         lbag==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=as2MfWVTTjebtE6pObnR6ZA7Z7Mm2lquU8h3QkjiG+Y=;
        b=n8adN8vu1puzjwjSigo68Qk7fNE2KaKLuMRanSkVF0q3d4audcDim3McWi6Zn5iFkE
         5X5L0R6j15FFXhCMgqX7ZovtrEPYxrvyoj/FjEK37JzRFrk+6xP31/9LnTUuAljERYiH
         IgEl8fc0eQQ4TVlp38wfpWN6NLBiQgnBnV8zOgxhUcgqFq1EkArny/xALgvda+ojCrUr
         e3/yYKrh2KYN39P5UHhHaW7p8vvgT2pPzvbT2xtzxF425ZqQzbxBJ5kc8RJSnbdioLu+
         ksRtScHL7i8Qn0IO91wz2Y5GjUl9Cxnji4E1DVpZ0LOO9CIL0awUAa1qzqIZXMo91tGx
         i+Rg==
X-Gm-Message-State: AOAM533CHSCmj9zFXhJHFIOXlCbTXRGjUrmIxP4ZmS/XOX/M7Xsa++gd
	oxk/BoW7hTXOXcOVw0RpSTs=
X-Google-Smtp-Source: ABdhPJz/1IJuFU3tpWooRg4ISBSdPCyLtODAH6GZPX1+lCpyZCLB2ut70Lz6laAnP0+zEj7rLpgQiw==
X-Received: by 2002:aa7:84d9:0:b029:2dc:9bd3:91f3 with SMTP id x25-20020aa784d90000b02902dc9bd391f3mr13280563pfn.0.1622387203493;
        Sun, 30 May 2021 08:06:43 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 04/11] HV: Add Write/Read MSR registers via ghcb
Date: Sun, 30 May 2021 11:06:21 -0400
Message-Id: <20210530150628.2063957-5-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V provides a GHCB protocol to write the Synthetic Interrupt
Controller MSR registers; these registers are emulated by the
hypervisor rather than by the paravisor running in VMPL0.

Hyper-V requires the SINTx MSR registers to be written twice (once via
the GHCB and once via a wrmsr instruction emulated by the paravisor,
which includes the proxy bit 21). The guest OS ID MSR also needs to be
set via the GHCB.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 arch/x86/hyperv/hv_init.c       |  26 +------
 arch/x86/hyperv/ivm.c           | 125 ++++++++++++++++++++++++++++++++
 arch/x86/include/asm/mshyperv.h |  78 +++++++++++++++++++-
 arch/x86/kernel/cpu/mshyperv.c  |   3 +
 drivers/hv/hv.c                 | 114 +++++++++++++++++++----------
 include/asm-generic/mshyperv.h  |   4 +-
 6 files changed, 287 insertions(+), 63 deletions(-)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index dc74d01cb859..e35a3c1556b8 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -472,6 +472,9 @@ void __init hyperv_init(void)
 
 		ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
 		*ghcb_base = ghcb_va;
+
+		/* Hyper-V requires the guest OS ID to be written via the GHCB in an SNP IVM. */
+		hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, guest_id);
 	}
 
 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
@@ -564,6 +567,7 @@ void hyperv_cleanup(void)
 
 	/* Reset our OS id */
 	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
+	hv_ghcb_msr_write(HV_X64_MSR_GUEST_OS_ID, 0);
 
 	/*
 	 * Reset hypercall page reference before reset the page,
@@ -645,28 +649,6 @@ bool hv_is_hibernation_supported(void)
 }
 EXPORT_SYMBOL_GPL(hv_is_hibernation_supported);
 
-enum hv_isolation_type hv_get_isolation_type(void)
-{
-	if (!(ms_hyperv.priv_high & HV_ISOLATION))
-		return HV_ISOLATION_TYPE_NONE;
-	return FIELD_GET(HV_ISOLATION_TYPE, ms_hyperv.isolation_config_b);
-}
-EXPORT_SYMBOL_GPL(hv_get_isolation_type);
-
-bool hv_is_isolation_supported(void)
-{
-	return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE;
-}
-EXPORT_SYMBOL_GPL(hv_is_isolation_supported);
-
-DEFINE_STATIC_KEY_FALSE(isolation_type_snp);
-
-bool hv_isolation_type_snp(void)
-{
-	return static_branch_unlikely(&isolation_type_snp);
-}
-EXPORT_SYMBOL_GPL(hv_isolation_type_snp);
-
 /* Bit mask of the extended capability to query: see HV_EXT_CAPABILITY_xxx */
 bool hv_query_ext_cap(u64 cap_query)
 {
diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index fad1d3024056..fd6dd804beef 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -6,12 +6,137 @@
  *  Tianyu Lan <Tianyu.Lan@microsoft.com>
  */
 
+#include <linux/types.h>
+#include <linux/bitfield.h>
 #include <linux/hyperv.h>
 #include <linux/types.h>
 #include <linux/bitfield.h>
 #include <asm/io.h>
+#include <asm/svm.h>
+#include <asm/sev-es.h>
 #include <asm/mshyperv.h>
 
+union hv_ghcb {
+	struct ghcb ghcb;
+} __packed __aligned(PAGE_SIZE);
+
+void hv_ghcb_msr_write(u64 msr, u64 value)
+{
+	union hv_ghcb *hv_ghcb;
+	void **ghcb_base;
+	unsigned long flags;
+
+	if (!ms_hyperv.ghcb_base)
+		return;
+
+	local_irq_save(flags);
+	ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+	hv_ghcb = (union hv_ghcb *)*ghcb_base;
+	if (!hv_ghcb) {
+		local_irq_restore(flags);
+		return;
+	}
+
+	memset(hv_ghcb, 0x00, HV_HYP_PAGE_SIZE);
+
+	hv_ghcb->ghcb.protocol_version = 1;
+	hv_ghcb->ghcb.ghcb_usage = 0;
+
+	ghcb_set_sw_exit_code(&hv_ghcb->ghcb, SVM_EXIT_MSR);
+	ghcb_set_rcx(&hv_ghcb->ghcb, msr);
+	ghcb_set_rax(&hv_ghcb->ghcb, lower_32_bits(value));
+	ghcb_set_rdx(&hv_ghcb->ghcb, value >> 32);
+	ghcb_set_sw_exit_info_1(&hv_ghcb->ghcb, 1);
+	ghcb_set_sw_exit_info_2(&hv_ghcb->ghcb, 0);
+
+	VMGEXIT();
+
+	if ((hv_ghcb->ghcb.save.sw_exit_info_1 & 0xffffffff) == 1)
+		pr_warn("Failed to write MSR 0x%llx via GHCB.\n", msr);
+
+	local_irq_restore(flags);
+}
+
+void hv_ghcb_msr_read(u64 msr, u64 *value)
+{
+	union hv_ghcb *hv_ghcb;
+	void **ghcb_base;
+	unsigned long flags;
+
+	if (!ms_hyperv.ghcb_base)
+		return;
+
+	local_irq_save(flags);
+	ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+	hv_ghcb = (union hv_ghcb *)*ghcb_base;
+	if (!hv_ghcb) {
+		local_irq_restore(flags);
+		return;
+	}
+
+	memset(hv_ghcb, 0x00, HV_HYP_PAGE_SIZE);
+	hv_ghcb->ghcb.protocol_version = 1;
+	hv_ghcb->ghcb.ghcb_usage = 0;
+
+	ghcb_set_sw_exit_code(&hv_ghcb->ghcb, SVM_EXIT_MSR);
+	ghcb_set_rcx(&hv_ghcb->ghcb, msr);
+	ghcb_set_sw_exit_info_1(&hv_ghcb->ghcb, 0);
+	ghcb_set_sw_exit_info_2(&hv_ghcb->ghcb, 0);
+
+	VMGEXIT();
+
+	if ((hv_ghcb->ghcb.save.sw_exit_info_1 & 0xffffffff) == 1)
+		pr_warn("Failed to read MSR 0x%llx via GHCB.\n", msr);
+	else
+		*value = (u64)lower_32_bits(hv_ghcb->ghcb.save.rax)
+			| ((u64)lower_32_bits(hv_ghcb->ghcb.save.rdx) << 32);
+	local_irq_restore(flags);
+}
+
+void hv_sint_rdmsrl_ghcb(u64 msr, u64 *value)
+{
+	hv_ghcb_msr_read(msr, value);
+}
+EXPORT_SYMBOL_GPL(hv_sint_rdmsrl_ghcb);
+
+void hv_sint_wrmsrl_ghcb(u64 msr, u64 value)
+{
+	hv_ghcb_msr_write(msr, value);
+
+	/* Also write the proxy bit via the wrmsrl instruction. */
+	if (msr >= HV_X64_MSR_SINT0 && msr <= HV_X64_MSR_SINT15)
+		wrmsrl(msr, value | 1 << 20);
+}
+EXPORT_SYMBOL_GPL(hv_sint_wrmsrl_ghcb);
+
+void hv_signal_eom_ghcb(void)
+{
+	hv_sint_wrmsrl_ghcb(HV_X64_MSR_EOM, 0);
+}
+EXPORT_SYMBOL_GPL(hv_signal_eom_ghcb);
+
+enum hv_isolation_type hv_get_isolation_type(void)
+{
+	if (!(ms_hyperv.priv_high & HV_ISOLATION))
+		return HV_ISOLATION_TYPE_NONE;
+	return FIELD_GET(HV_ISOLATION_TYPE, ms_hyperv.isolation_config_b);
+}
+EXPORT_SYMBOL_GPL(hv_get_isolation_type);
+
+bool hv_is_isolation_supported(void)
+{
+	return hv_get_isolation_type() != HV_ISOLATION_TYPE_NONE;
+}
+EXPORT_SYMBOL_GPL(hv_is_isolation_supported);
+
+DEFINE_STATIC_KEY_FALSE(isolation_type_snp);
+
+bool hv_isolation_type_snp(void)
+{
+	return static_branch_unlikely(&isolation_type_snp);
+}
+EXPORT_SYMBOL_GPL(hv_isolation_type_snp);
+
 /*
  * hv_mark_gpa_visibility - Set pages visible to host via hvcall.
  *
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 6af9d55ffe3b..eec7f3357d51 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -30,6 +30,63 @@ static inline u64 hv_get_register(unsigned int reg)
 	return value;
 }
 
+#define hv_get_sint_reg(val, reg) {		\
+	if (hv_isolation_type_snp())		\
+		hv_get_##reg##_ghcb(&val);	\
+	else					\
+		rdmsrl(HV_X64_MSR_##reg, val);	\
+	}
+
+#define hv_set_sint_reg(val, reg) {		\
+	if (hv_isolation_type_snp())		\
+		hv_set_##reg##_ghcb(val);	\
+	else					\
+		wrmsrl(HV_X64_MSR_##reg, val);	\
+	}
+
+
+#define hv_get_simp(val) hv_get_sint_reg(val, SIMP)
+#define hv_get_siefp(val) hv_get_sint_reg(val, SIEFP)
+
+#define hv_set_simp(val) hv_set_sint_reg(val, SIMP)
+#define hv_set_siefp(val) hv_set_sint_reg(val, SIEFP)
+
+#define hv_get_synic_state(val) {			\
+	if (hv_isolation_type_snp())			\
+		hv_get_synic_state_ghcb(&val);		\
+	else						\
+		rdmsrl(HV_X64_MSR_SCONTROL, val);	\
+	}
+#define hv_set_synic_state(val) {			\
+	if (hv_isolation_type_snp())			\
+		hv_set_synic_state_ghcb(val);		\
+	else						\
+		wrmsrl(HV_X64_MSR_SCONTROL, val);	\
+	}
+
+#define hv_get_vp_index(index) rdmsrl(HV_X64_MSR_VP_INDEX, index)
+
+#define hv_signal_eom() {			 \
+	if (hv_isolation_type_snp() &&		 \
+	    old_msg_type != HVMSG_TIMER_EXPIRED) \
+		hv_signal_eom_ghcb();		 \
+	else					 \
+		wrmsrl(HV_X64_MSR_EOM, 0);	 \
+	}
+
+#define hv_get_synint_state(int_num, val) {		\
+	if (hv_isolation_type_snp())			\
+		hv_get_synint_state_ghcb(int_num, &val);\
+	else						\
+		rdmsrl(HV_X64_MSR_SINT0 + int_num, val);\
+	}
+#define hv_set_synint_state(int_num, val) {		\
+	if (hv_isolation_type_snp())			\
+		hv_set_synint_state_ghcb(int_num, val);	\
+	else						\
+		wrmsrl(HV_X64_MSR_SINT0 + int_num, val);\
+	}
+
 #define hv_get_raw_timer() rdtsc_ordered()
 
 void hyperv_vector_handler(struct pt_regs *regs);
@@ -197,6 +254,25 @@ int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry);
 int hv_mark_gpa_visibility(u16 count, const u64 pfn[], u32 visibility);
 int hv_set_mem_host_visibility(void *kbuffer, size_t size,
 			       enum vmbus_page_visibility visibility);
+void hv_sint_wrmsrl_ghcb(u64 msr, u64 value);
+void hv_sint_rdmsrl_ghcb(u64 msr, u64 *value);
+void hv_signal_eom_ghcb(void);
+void hv_ghcb_msr_write(u64 msr, u64 value);
+void hv_ghcb_msr_read(u64 msr, u64 *value);
+
+#define hv_get_synint_state_ghcb(int_num, val)			\
+	hv_sint_rdmsrl_ghcb(HV_X64_MSR_SINT0 + int_num, val)
+#define hv_set_synint_state_ghcb(int_num, val) \
+	hv_sint_wrmsrl_ghcb(HV_X64_MSR_SINT0 + int_num, val)
+
+#define hv_get_SIMP_ghcb(val) hv_sint_rdmsrl_ghcb(HV_X64_MSR_SIMP, val)
+#define hv_set_SIMP_ghcb(val) hv_sint_wrmsrl_ghcb(HV_X64_MSR_SIMP, val)
+
+#define hv_get_SIEFP_ghcb(val) hv_sint_rdmsrl_ghcb(HV_X64_MSR_SIEFP, val)
+#define hv_set_SIEFP_ghcb(val) hv_sint_wrmsrl_ghcb(HV_X64_MSR_SIEFP, val)
+
+#define hv_get_synic_state_ghcb(val) hv_sint_rdmsrl_ghcb(HV_X64_MSR_SCONTROL, val)
+#define hv_set_synic_state_ghcb(val) hv_sint_wrmsrl_ghcb(HV_X64_MSR_SCONTROL, val)
 #else /* CONFIG_HYPERV */
 static inline void hyperv_init(void) {}
 static inline void hyperv_setup_mmu_ops(void) {}
@@ -213,9 +289,9 @@ static inline int hyperv_flush_guest_mapping_range(u64 as,
 {
 	return -1;
 }
+static inline void hv_signal_eom_ghcb(void) { };
 #endif /* CONFIG_HYPERV */
 
-
 #include <asm-generic/mshyperv.h>
 
 #endif
diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index d1ca36224657..c19e14ec8eec 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -325,6 +325,9 @@ static void __init ms_hyperv_init_platform(void)
 
 		pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n",
 			ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b);
+
+		if (hv_get_isolation_type() == HV_ISOLATION_TYPE_SNP)
+			static_branch_enable(&isolation_type_snp);
 	}
 
 	if (ms_hyperv.hints & HV_X64_ENLIGHTENED_VMCS_RECOMMENDED) {
diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index e83507f49676..28faa8364952 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -136,17 +136,24 @@ int hv_synic_alloc(void)
 		tasklet_init(&hv_cpu->msg_dpc,
 			     vmbus_on_msg_dpc, (unsigned long) hv_cpu);
 
-		hv_cpu->synic_message_page =
-			(void *)get_zeroed_page(GFP_ATOMIC);
-		if (hv_cpu->synic_message_page == NULL) {
-			pr_err("Unable to allocate SYNIC message page\n");
-			goto err;
-		}
+		/*
+		 * SynIC message and event pages are allocated by the paravisor.
+		 * Skip allocating these pages here.
+		 */
+		if (!hv_isolation_type_snp()) {
+			hv_cpu->synic_message_page =
+				(void *)get_zeroed_page(GFP_ATOMIC);
+			if (hv_cpu->synic_message_page == NULL) {
+				pr_err("Unable to allocate SYNIC message page\n");
+				goto err;
+			}
 
-		hv_cpu->synic_event_page = (void *)get_zeroed_page(GFP_ATOMIC);
-		if (hv_cpu->synic_event_page == NULL) {
-			pr_err("Unable to allocate SYNIC event page\n");
-			goto err;
+			hv_cpu->synic_event_page =
+				(void *)get_zeroed_page(GFP_ATOMIC);
+			if (hv_cpu->synic_event_page == NULL) {
+				pr_err("Unable to allocate SYNIC event page\n");
+				goto err;
+			}
 		}
 
 		hv_cpu->post_msg_page = (void *)get_zeroed_page(GFP_ATOMIC);
@@ -173,10 +180,17 @@ void hv_synic_free(void)
 	for_each_present_cpu(cpu) {
 		struct hv_per_cpu_context *hv_cpu
 			= per_cpu_ptr(hv_context.cpu_context, cpu);
+		free_page((unsigned long)hv_cpu->post_msg_page);
+
+		/*
+		 * SynIC message and event pages are allocated by the paravisor.
+		 * Skip freeing these pages here.
+		 */
+		if (hv_isolation_type_snp())
+			continue;
 
 		free_page((unsigned long)hv_cpu->synic_event_page);
 		free_page((unsigned long)hv_cpu->synic_message_page);
-		free_page((unsigned long)hv_cpu->post_msg_page);
 	}
 
 	kfree(hv_context.hv_numa_map);
@@ -199,26 +213,43 @@ void hv_synic_enable_regs(unsigned int cpu)
 	union hv_synic_scontrol sctrl;
 
 	/* Setup the Synic's message page */
-	simp.as_uint64 = hv_get_register(HV_REGISTER_SIMP);
+	hv_get_simp(simp.as_uint64);
 	simp.simp_enabled = 1;
-	simp.base_simp_gpa = virt_to_phys(hv_cpu->synic_message_page)
-		>> HV_HYP_PAGE_SHIFT;
 
-	hv_set_register(HV_REGISTER_SIMP, simp.as_uint64);
+	if (hv_isolation_type_snp()) {
+		hv_cpu->synic_message_page
+			= ioremap_cache(simp.base_simp_gpa << HV_HYP_PAGE_SHIFT,
+					HV_HYP_PAGE_SIZE);
+		if (!hv_cpu->synic_message_page)
+			pr_err("Failed to map SynIC message page.\n");
+	} else {
+		simp.base_simp_gpa = virt_to_phys(hv_cpu->synic_message_page)
+			>> HV_HYP_PAGE_SHIFT;
+	}
+
+	hv_set_simp(simp.as_uint64);
 
 	/* Setup the Synic's event page */
-	siefp.as_uint64 = hv_get_register(HV_REGISTER_SIEFP);
+	hv_get_siefp(siefp.as_uint64);
 	siefp.siefp_enabled = 1;
-	siefp.base_siefp_gpa = virt_to_phys(hv_cpu->synic_event_page)
-		>> HV_HYP_PAGE_SHIFT;
 
-	hv_set_register(HV_REGISTER_SIEFP, siefp.as_uint64);
+	if (hv_isolation_type_snp()) {
+		hv_cpu->synic_event_page =
+			ioremap_cache(siefp.base_siefp_gpa << HV_HYP_PAGE_SHIFT,
+				      HV_HYP_PAGE_SIZE);
+
+		if (!hv_cpu->synic_event_page)
+			pr_err("Failed to map SynIC event page.\n");
+	} else {
+		siefp.base_siefp_gpa = virt_to_phys(hv_cpu->synic_event_page)
+			>> HV_HYP_PAGE_SHIFT;
+	}
+	hv_set_siefp(siefp.as_uint64);
 
 	/* Setup the shared SINT. */
 	if (vmbus_irq != -1)
 		enable_percpu_irq(vmbus_irq, 0);
-	shared_sint.as_uint64 = hv_get_register(HV_REGISTER_SINT0 +
-					VMBUS_MESSAGE_SINT);
+	hv_get_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
 
 	shared_sint.vector = vmbus_interrupt;
 	shared_sint.masked = false;
@@ -233,14 +264,12 @@ void hv_synic_enable_regs(unsigned int cpu)
 #else
 	shared_sint.auto_eoi = 0;
 #endif
-	hv_set_register(HV_REGISTER_SINT0 + VMBUS_MESSAGE_SINT,
-				shared_sint.as_uint64);
+	hv_set_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
 
 	/* Enable the global synic bit */
-	sctrl.as_uint64 = hv_get_register(HV_REGISTER_SCONTROL);
+	hv_get_synic_state(sctrl.as_uint64);
 	sctrl.enable = 1;
-
-	hv_set_register(HV_REGISTER_SCONTROL, sctrl.as_uint64);
+	hv_set_synic_state(sctrl.as_uint64);
 }
 
 int hv_synic_init(unsigned int cpu)
@@ -262,32 +291,39 @@ void hv_synic_disable_regs(unsigned int cpu)
 	union hv_synic_siefp siefp;
 	union hv_synic_scontrol sctrl;
 
-	shared_sint.as_uint64 = hv_get_register(HV_REGISTER_SINT0 +
-					VMBUS_MESSAGE_SINT);
-
+	hv_get_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
 	shared_sint.masked = 1;
+	hv_set_synint_state(VMBUS_MESSAGE_SINT, shared_sint.as_uint64);
+
 
 	/* Need to correctly cleanup in the case of SMP!!! */
 	/* Disable the interrupt */
-	hv_set_register(HV_REGISTER_SINT0 + VMBUS_MESSAGE_SINT,
-				shared_sint.as_uint64);
+	hv_get_simp(simp.as_uint64);
 
-	simp.as_uint64 = hv_get_register(HV_REGISTER_SIMP);
+	/*
+	 * In an Isolation VM, the SIMP and SIEFP pages are allocated
+	 * by the paravisor. These pages will also be used by the kdump
+	 * kernel, so only reset the enable bit here and keep the page
+	 * addresses.
+	 */
 	simp.simp_enabled = 0;
-	simp.base_simp_gpa = 0;
+	if (!hv_isolation_type_snp())
+		simp.base_simp_gpa = 0;
 
-	hv_set_register(HV_REGISTER_SIMP, simp.as_uint64);
+	hv_set_simp(simp.as_uint64);
 
-	siefp.as_uint64 = hv_get_register(HV_REGISTER_SIEFP);
+	hv_get_siefp(siefp.as_uint64);
 	siefp.siefp_enabled = 0;
-	siefp.base_siefp_gpa = 0;
 
-	hv_set_register(HV_REGISTER_SIEFP, siefp.as_uint64);
+	if (!hv_isolation_type_snp())
+		siefp.base_siefp_gpa = 0;
+
+	hv_set_siefp(siefp.as_uint64);
 
 	/* Disable the global synic bit */
-	sctrl.as_uint64 = hv_get_register(HV_REGISTER_SCONTROL);
+	hv_get_synic_state(sctrl.as_uint64);
 	sctrl.enable = 0;
-	hv_set_register(HV_REGISTER_SCONTROL, sctrl.as_uint64);
+	hv_set_synic_state(sctrl.as_uint64);
 
 	if (vmbus_irq != -1)
 		disable_percpu_irq(vmbus_irq);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 2914e27b0429..63e0de2a7c54 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -23,6 +23,7 @@
 #include <linux/bitops.h>
 #include <linux/cpumask.h>
 #include <asm/ptrace.h>
+#include <asm/mshyperv.h>
 #include <asm/hyperv-tlfs.h>
 
 struct ms_hyperv_info {
@@ -51,6 +52,7 @@ extern struct ms_hyperv_info ms_hyperv;
 
 extern u64 hv_do_hypercall(u64 control, void *inputaddr, void *outputaddr);
 extern u64 hv_do_fast_hypercall8(u16 control, u64 input8);
+extern bool hv_isolation_type_snp(void);
 
 /* Helper functions that provide a consistent pattern for checking Hyper-V hypercall status. */
 static inline int hv_result(u64 status)
@@ -145,7 +147,7 @@ static inline void vmbus_signal_eom(struct hv_message *msg, u32 old_msg_type)
 		 * possibly deliver another msg from the
 		 * hypervisor
 		 */
-		hv_set_register(HV_REGISTER_EOM, 0);
+		hv_signal_eom();
 	}
 }
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134306.249988 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE8-0006UH-4d; Sun, 30 May 2021 17:27:52 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134306.249988; Sun, 30 May 2021 17:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE7-0006R6-JB; Sun, 30 May 2021 17:27:51 +0000
Received: by outflank-mailman (input) for mailman id 134306;
 Sun, 30 May 2021 15:07:19 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN27-0000PC-18
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:07:19 +0000
Received: from mail-pj1-x1032.google.com (unknown [2607:f8b0:4864:20::1032])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f6e39b98-5c8d-4af8-a799-88933df614f0;
 Sun, 30 May 2021 15:06:50 +0000 (UTC)
Received: by mail-pj1-x1032.google.com with SMTP id
 lx17-20020a17090b4b11b029015f3b32b8dbso7260069pjb.0
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:50 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.48
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:49 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f6e39b98-5c8d-4af8-a799-88933df614f0
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=8QttpWnu2vUzDHi+PZwCcYLiEemtutjgmMI/iodwPTk=;
        b=W8ClChNy/zGX4qaFJXG3v36CEkmNBgz6cFpUbxLY1MtMXiB3DOS0OEuklRIaJsUjfV
         CqNiLrgh8G3MDnqTm2tqCifdbbiQ71hz1sqU7R2dyMZh7qX5GiL+3tYuOVCF6CIkTiBu
         3WfmClEYRfWncVxeql61AC3bv02+aSKH2VTtSAeNX9rR2YLgnBdcXZAj+2VP8OnvpswI
         6IrYrAMKWnDk1x5midWNyrQPk4MmHUTraFR4iwofk71RKNS/2odDBr6VNdmfIBv6hgpz
         ljYhxqjLMoxzQDxzPB8RVmA3zmgl1Y0LYpfklpdG8vApkDP75Koqa6Bu6f2ftX6QddwC
         sV/g==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=8QttpWnu2vUzDHi+PZwCcYLiEemtutjgmMI/iodwPTk=;
        b=GZDC3q7X4298aDVxCCsz4O7VvtqMsAtUoZ2sTfMGwbHT2UVjNi38mvSqFhGzLS9Ymt
         qSquu3Nc2NqqsFbpAoCAgi5wxNskhLF1b90aPRf/2v8DfffLxUSchTRq9R+FH69DbLm8
         cihhWtXuL6l2UG+hH29Q7obmLTASmwRvF2S8TTPTDA9KEXn265e0f8BrpybHEG1IS4w8
         HnWVE3Xp3q5XObYezXGzn3hbxXLBo/Oztv7SE8PxEcQk1wl8qVmlEJOyASx6PXpz9UAb
         R08aXQzUpNyZMfbMZ07HtlfccrJrXAKb4uPOqWLygkoaNk7DkrhSZKyI1B4XjRvskG4e
         dpng==
X-Gm-Message-State: AOAM531fifVeMtENLDD6uR1wL3EUlsekQYzRddCYY8LAtdFV/GnNbckO
	EriV2pzLetj7P9ayyNYXGxU=
X-Google-Smtp-Source: ABdhPJwOFhoVhELuBsoR0x6Bkk9R5eQShuRJkYgZiLaNrPrGgc4hr2L1c/qfttmLx9Y24CFWGvaVqQ==
X-Received: by 2002:a17:902:d2d1:b029:ef:8d29:3a64 with SMTP id n17-20020a170902d2d1b02900ef8d293a64mr16523183plc.38.1622387209666;
        Sun, 30 May 2021 08:06:49 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 08/11] swiotlb: Add bounce buffer remap address setting function
Date: Sun, 30 May 2021 11:06:25 -0400
Message-Id: <20210530150628.2063957-9-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

In Hyper-V Isolation VMs with AMD SEV-SNP, the bounce buffer (shared
memory) needs to be accessed via an extra address space (e.g. addresses
above bit 39). Hyper-V code may remap that extra address space outside
of swiotlb, and swiotlb_bounce() then needs to use the remapped virtual
address to copy data from/to the bounce buffer. Add a new interface,
swiotlb_set_bounce_remap(), to set that address.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 include/linux/swiotlb.h |  5 +++++
 kernel/dma/swiotlb.c    | 14 +++++++++++++-
 2 files changed, 18 insertions(+), 1 deletion(-)

diff --git a/include/linux/swiotlb.h b/include/linux/swiotlb.h
index 216854a5e513..43f53cf52f48 100644
--- a/include/linux/swiotlb.h
+++ b/include/linux/swiotlb.h
@@ -113,8 +113,13 @@ unsigned int swiotlb_max_segment(void);
 size_t swiotlb_max_mapping_size(struct device *dev);
 bool is_swiotlb_active(void);
 void __init swiotlb_adjust_size(unsigned long size);
+void swiotlb_set_bounce_remap(unsigned char *vaddr);
 #else
 #define swiotlb_force SWIOTLB_NO_FORCE
+static inline void swiotlb_set_bounce_remap(unsigned char *vaddr)
+{
+}
+
 static inline bool is_swiotlb_buffer(phys_addr_t paddr)
 {
 	return false;
diff --git a/kernel/dma/swiotlb.c b/kernel/dma/swiotlb.c
index 8ca7d505d61c..fbc827ab5fb4 100644
--- a/kernel/dma/swiotlb.c
+++ b/kernel/dma/swiotlb.c
@@ -70,6 +70,7 @@ struct io_tlb_mem *io_tlb_default_mem;
  * not be bounced (unless SWIOTLB_FORCE is set).
  */
 static unsigned int max_segment;
+static unsigned char *swiotlb_bounce_remap_addr;
 
 static unsigned long default_nslabs = IO_TLB_DEFAULT_SIZE >> IO_TLB_SHIFT;
 
@@ -334,6 +335,11 @@ void __init swiotlb_exit(void)
 	io_tlb_default_mem = NULL;
 }
 
+void swiotlb_set_bounce_remap(unsigned char *vaddr)
+{
+	swiotlb_bounce_remap_addr = vaddr;
+}
+
 /*
  * Bounce: copy the swiotlb buffer from or back to the original dma location
  */
@@ -345,7 +351,13 @@ static void swiotlb_bounce(struct device *dev, phys_addr_t tlb_addr, size_t size
 	phys_addr_t orig_addr = mem->slots[index].orig_addr;
 	size_t alloc_size = mem->slots[index].alloc_size;
 	unsigned long pfn = PFN_DOWN(orig_addr);
-	unsigned char *vaddr = phys_to_virt(tlb_addr);
+	unsigned char *vaddr;
+
+	if (swiotlb_bounce_remap_addr)
+		vaddr = swiotlb_bounce_remap_addr + tlb_addr -
+			io_tlb_default_mem->start;
+	else
+		vaddr = phys_to_virt(tlb_addr);
 
 	if (orig_addr == INVALID_PHYS_ADDR)
 		return;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134302.249974 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE6-0006A1-UH; Sun, 30 May 2021 17:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134302.249974; Sun, 30 May 2021 17:27:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE6-00067a-Ee; Sun, 30 May 2021 17:27:50 +0000
Received: by outflank-mailman (input) for mailman id 134302;
 Sun, 30 May 2021 15:07:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN1x-0000PC-0s
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:07:09 +0000
Received: from mail-pf1-x42f.google.com (unknown [2607:f8b0:4864:20::42f])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 224a67ce-d472-48cf-9c86-1f2904e7ae0a;
 Sun, 30 May 2021 15:06:47 +0000 (UTC)
Received: by mail-pf1-x42f.google.com with SMTP id p39so7076627pfw.8
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:47 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.45
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:46 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 224a67ce-d472-48cf-9c86-1f2904e7ae0a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=QKK0SjDRQsBHXdgV5ktTvZVGUyDxuCI7H9YXqu8ylp4=;
        b=c4gjM4Jy3YZ+/DG7gJXR32TyHNrd2AT2w1NRjMRL/cVwnE9Khie59tuur1IINr0LLL
         6PZbttSNvgl9BLFIsGHuSc+FJJmIp6Abf0SCFmm2OMKXGKumPVltWHy67nL9KV9jEyvK
         PEkAMyNbfg9acDIg1EYYlwzHuyceS5JTu6FgLVMgyuSfaM2bY1ImRe4AQpVCfcxArLyf
         2orge+sbm53BUlc2H/Pqht+gxOC5rv9V1Jvz+CQB0vMOQlMPoDaTCkjFsnL76KSZ3ys3
         7eaQ9eSv/+WkbiIZxtGq9L5QDqfVWwVAegCdGoAEYHKpEA2U7AjeuHqOCq8YttRFBnB5
         XuRQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=QKK0SjDRQsBHXdgV5ktTvZVGUyDxuCI7H9YXqu8ylp4=;
        b=UeGcZf69dt4TSublkvcWVJ4+3lsJafXmHVhuiQSRE21ZBuIFpVpnqiorVlOnRq/ngG
         bym/7ZTQBM2Xc2iBGJaehwZAiryTtHJqEc7deqoJrw6+wxUMfibXVuAdSUT6J6Ss9VSx
         JShQE0by3QPSxif9MHsamVnP+pf+0vbNl6cWi+qpi54UV4qOKEf45hJopuj/zzelm8al
         OCFOB+TKg474poYLb9+B/oEt9uu4FXZkHY9l4XjNbY4leN0o2mzF88HuQEJx8SNkRAv9
         izNfHebwVMjPRR9G3DApUeYe8TXNkMbhKchg/wU4diA2IPFS6GyO0JAg5R73FMKTCQWT
         UNFg==
X-Gm-Message-State: AOAM531WSWiLPDD/9YND/QIvJ/zJScbJEwK+BUIqNSuIWRk7lr+rdMza
	Yl0YVzA++TRDppAxHpy3Ohg=
X-Google-Smtp-Source: ABdhPJw+q9VKcKHyDWVpvxugITxjif6TDbX7YEFx343/WR9bqFMb8Oo7wu2PPv+4Bif5UQ80/Bo2xg==
X-Received: by 2002:a62:ae14:0:b029:2e9:c6ef:9856 with SMTP id q20-20020a62ae140000b02902e9c6ef9856mr2736868pff.77.1622387206687;
        Sun, 30 May 2021 08:06:46 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 06/11] HV/Vmbus: Add SNP support for VMbus channel initiate message
Date: Sun, 30 May 2021 11:06:23 -0400
Message-Id: <20210530150628.2063957-7-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

The monitor pages in the CHANNELMSG_INITIATE_CONTACT message are shared
with the host, so it's necessary to use a hvcall to make them visible
to the host. In an Isolation VM with AMD SEV-SNP, the access address
must be in the extra address space above the shared GPA boundary, so
remap these pages to the extra address (pa + shared_gpa_boundary).

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/hv/connection.c   | 62 +++++++++++++++++++++++++++++++++++++++
 drivers/hv/hyperv_vmbus.h |  1 +
 2 files changed, 63 insertions(+)

diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index 186fd4c8acd4..389adc92f958 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -104,6 +104,12 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version)
 
 	msg->monitor_page1 = virt_to_phys(vmbus_connection.monitor_pages[0]);
 	msg->monitor_page2 = virt_to_phys(vmbus_connection.monitor_pages[1]);
+
+	if (hv_is_isolation_supported()) {
+		msg->monitor_page1 += ms_hyperv.shared_gpa_boundary;
+		msg->monitor_page2 += ms_hyperv.shared_gpa_boundary;
+	}
+
 	msg->target_vcpu = hv_cpu_number_to_vp_number(VMBUS_CONNECT_CPU);
 
 	/*
@@ -148,6 +154,29 @@ int vmbus_negotiate_version(struct vmbus_channel_msginfo *msginfo, u32 version)
 		return -ECONNREFUSED;
 	}
 
+	if (hv_is_isolation_supported()) {
+		vmbus_connection.monitor_pages_va[0]
+			= vmbus_connection.monitor_pages[0];
+		vmbus_connection.monitor_pages[0]
+			= ioremap_cache(msg->monitor_page1, HV_HYP_PAGE_SIZE);
+		if (!vmbus_connection.monitor_pages[0])
+			return -ENOMEM;
+
+		vmbus_connection.monitor_pages_va[1]
+			= vmbus_connection.monitor_pages[1];
+		vmbus_connection.monitor_pages[1]
+			= ioremap_cache(msg->monitor_page2, HV_HYP_PAGE_SIZE);
+		if (!vmbus_connection.monitor_pages[1]) {
+			vunmap(vmbus_connection.monitor_pages[0]);
+			return -ENOMEM;
+		}
+
+		memset(vmbus_connection.monitor_pages[0], 0x00,
+		       HV_HYP_PAGE_SIZE);
+		memset(vmbus_connection.monitor_pages[1], 0x00,
+		       HV_HYP_PAGE_SIZE);
+	}
+
 	return ret;
 }
 
@@ -159,6 +188,7 @@ int vmbus_connect(void)
 	struct vmbus_channel_msginfo *msginfo = NULL;
 	int i, ret = 0;
 	__u32 version;
+	u64 pfn[2];
 
 	/* Initialize the vmbus connection */
 	vmbus_connection.conn_state = CONNECTING;
@@ -216,6 +246,16 @@ int vmbus_connect(void)
 		goto cleanup;
 	}
 
+	if (hv_is_isolation_supported()) {
+		pfn[0] = virt_to_hvpfn(vmbus_connection.monitor_pages[0]);
+		pfn[1] = virt_to_hvpfn(vmbus_connection.monitor_pages[1]);
+		if (hv_mark_gpa_visibility(2, pfn,
+				VMBUS_PAGE_VISIBLE_READ_WRITE)) {
+			ret = -EFAULT;
+			goto cleanup;
+		}
+	}
+
 	msginfo = kzalloc(sizeof(*msginfo) +
 			  sizeof(struct vmbus_channel_initiate_contact),
 			  GFP_KERNEL);
@@ -282,6 +322,8 @@ int vmbus_connect(void)
 
 void vmbus_disconnect(void)
 {
+	u64 pfn[2];
+
 	/*
 	 * First send the unload request to the host.
 	 */
@@ -301,6 +343,26 @@ void vmbus_disconnect(void)
 		vmbus_connection.int_page = NULL;
 	}
 
+	if (hv_is_isolation_supported()) {
+		if (vmbus_connection.monitor_pages_va[0]) {
+			vunmap(vmbus_connection.monitor_pages[0]);
+			vmbus_connection.monitor_pages[0]
+				= vmbus_connection.monitor_pages_va[0];
+			vmbus_connection.monitor_pages_va[0] = NULL;
+		}
+
+		if (vmbus_connection.monitor_pages_va[1]) {
+			vunmap(vmbus_connection.monitor_pages[1]);
+			vmbus_connection.monitor_pages[1]
+				= vmbus_connection.monitor_pages_va[1];
+			vmbus_connection.monitor_pages_va[1] = NULL;
+		}
+
+		pfn[0] = virt_to_hvpfn(vmbus_connection.monitor_pages[0]);
+		pfn[1] = virt_to_hvpfn(vmbus_connection.monitor_pages[1]);
+		hv_mark_gpa_visibility(2, pfn, VMBUS_PAGE_NOT_VISIBLE);
+	}
+
 	hv_free_hyperv_page((unsigned long)vmbus_connection.monitor_pages[0]);
 	hv_free_hyperv_page((unsigned long)vmbus_connection.monitor_pages[1]);
 	vmbus_connection.monitor_pages[0] = NULL;
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 42f3d9d123a1..40bc0eff6665 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -240,6 +240,7 @@ struct vmbus_connection {
 	 * is child->parent notification
 	 */
 	struct hv_monitor_page *monitor_pages[2];
+	void *monitor_pages_va[2];
 	struct list_head chn_msg_list;
 	spinlock_t channelmsg_lock;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134290.249931 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE4-0005PZ-5U; Sun, 30 May 2021 17:27:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134290.249931; Sun, 30 May 2021 17:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE4-0005PS-0T; Sun, 30 May 2021 17:27:48 +0000
Received: by outflank-mailman (input) for mailman id 134290;
 Sun, 30 May 2021 15:06:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN1T-0000PC-5X
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:06:39 +0000
Received: from mail-pl1-x62a.google.com (unknown [2607:f8b0:4864:20::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd541d85-59a6-4776-a575-8ed425acaf7a;
 Sun, 30 May 2021 15:06:38 +0000 (UTC)
Received: by mail-pl1-x62a.google.com with SMTP id x10so3026416plg.3
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:37 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.35
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:36 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd541d85-59a6-4776-a575-8ed425acaf7a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=FswRSQfjjaKx7u9LgIeEdiIRei+KQQ1o84JB2HXUqUs=;
        b=LVIdMBZyqnI96Q0Z0mysjEWnSTb1JW5nO8mVMTciG0aK/WD06/PLYNCl5nc+yK0XlS
         2WtJh9tLLiNzy5J3OgcTqz2Jd8R+Qo2XZ8PhM1LGbgk34btlpzEZSypPA3XtbSEfna7F
         0Xy/ByKjJYToboKNSYv39OSVL+1wl/6lNTlTcOxXyL9Czp3My1t9DZhs63AP25Sxxf87
         UAeJF9Q9PWAi/L9NXDXqX7YL7RDddpROE5Dwkr0XfidvpyFaGBUOGxVoC/o97y6s3Bgv
         uNtBz9O1DqUp24qMiBjYRplPEPbpwBaGprGuV8wqmdUN8j9x5KqKXakb0r59LpirJ6/C
         y0Cw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:mime-version
         :content-transfer-encoding;
        bh=FswRSQfjjaKx7u9LgIeEdiIRei+KQQ1o84JB2HXUqUs=;
        b=bit/9Dmvjh/S5TiGIxgTCqJI0XyW+TMIROWEEuKxpefFU80lcLVi4r7RgJ6Dcfwtn9
         8hixz/bZHbE2OCuHDmVEEOWJw/j5BNRnoPKmGLo5Xd5TubBUNGmL4WHs51+Bc+exloDH
         BEOS1nJf6zADs6+v0HaFN0rlceStk57cHsR+15mR6PQ8BiP6FZCYRBW55C7i9pbHDaqY
         U1VUyFNkuHKrH2ASxDCbkU85FEwvglzIC+NS65sg9qTomLkJSottxxvvftGJFzwaTrjG
         ef0KzXQ05xkYPcy69zEPV9RPiiw12C3Qj5HooHja8mGelVYvPHSugdK5vvcOZL22lfOs
         VxHA==
X-Gm-Message-State: AOAM533XQ2ASwg9oV1FRYakI4P3cejc2llnHO0Z+LY2XcjjGrdJc1vRl
	OqGwf6gniKE9L8uVU73d5OA=
X-Google-Smtp-Source: ABdhPJzWgX0reZL8RY1aCiYCdkLb1bxDmfsOiZs5Da5qdPNbLz2+PAq3adwYvehGmjtmOXLHMuRRYw==
X-Received: by 2002:a17:90a:fe84:: with SMTP id co4mr14479308pjb.0.1622387197133;
        Sun, 30 May 2021 08:06:37 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 00/11] x86/Hyper-V: Add Hyper-V Isolation VM support
Date: Sun, 30 May 2021 11:06:17 -0400
Message-Id: <20210530150628.2063957-1-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V provides two kinds of Isolation VMs: VBS (Virtualization-Based
Security) and AMD SEV-SNP unenlightened Isolation VMs. This patchset
adds support for these Isolation VMs in Linux.

The memory of these VMs is encrypted and the host can't access guest
memory directly. Hyper-V provides a new host-visibility hvcall, and the
guest needs to call it to mark memory visible to the host before
sharing that memory with the host. For security, network/storage stack
memory should not be shared with the host, so bounce buffers are
required.

The VMbus channel ring buffer already plays the bounce-buffer role,
because all data to/from the host is copied between the ring buffer
and IO stack memory. So mark the VMbus channel ring buffer visible.

There are two exceptions: packets sent by vmbus_sendpacket_pagebuffer()
and vmbus_sendpacket_mpb_desc(). These packets contain IO stack memory
addresses that the host will access directly, so add bounce buffer
allocation support in VMbus for these packets.

For an SNP Isolation VM, the guest needs to access shared memory via
the extra address space specified by the Hyper-V CPUID leaf
HYPERV_CPUID_ISOLATION_CONFIG. The physical address used to access the
shared memory is the bounce buffer's GPA plus the shared_gpa_boundary
reported by that CPUID leaf.

Changes since v2:
       - Remove the "not allow UIO driver in Isolation VM" patch
       - Use vmap_pfn() instead of ioremap_page_range() in order to
       avoid exposing the symbol ioremap_page_range()
       - Call the Hyper-V set-memory-host-visibility hvcall in
       set_memory_encrypted()/decrypted()
       - Enable swiotlb force mode instead of adding Hyper-V dma
       map/unmap hooks
       - Fix code style

Tianyu Lan (11):
  x86/HV: Initialize GHCB page in Isolation VM
  x86/HV: Initialize shared memory boundary in the Isolation VM.
  x86/Hyper-V: Add new hvcall guest address host visibility support
  HV: Add Write/Read MSR registers via ghcb
  HV: Add ghcb hvcall support for SNP VM
  HV/Vmbus: Add SNP support for VMbus channel initiate message
  HV/Vmbus: Initialize VMbus ring buffer for Isolation VM
  swiotlb: Add bounce buffer remap address setting function
  HV/IOMMU: Enable swiotlb bounce buffer for Isolation VM
  HV/Netvsc: Add Isolation VM support for netvsc driver
  HV/Storvsc: Add Isolation VM support for storvsc driver

 arch/x86/hyperv/Makefile           |   2 +-
 arch/x86/hyperv/hv_init.c          |  70 +++++--
 arch/x86/hyperv/ivm.c              | 300 +++++++++++++++++++++++++++++
 arch/x86/include/asm/hyperv-tlfs.h |  24 +++
 arch/x86/include/asm/mshyperv.h    |  85 +++++++-
 arch/x86/kernel/cpu/mshyperv.c     |   5 +
 arch/x86/mm/pat/set_memory.c       |  10 +-
 arch/x86/xen/pci-swiotlb-xen.c     |   3 +-
 drivers/hv/Kconfig                 |   1 +
 drivers/hv/channel.c               |  48 ++++-
 drivers/hv/connection.c            |  68 ++++++-
 drivers/hv/hv.c                    | 122 ++++++++----
 drivers/hv/hyperv_vmbus.h          |   3 +
 drivers/hv/ring_buffer.c           |  84 ++++++--
 drivers/hv/vmbus_drv.c             |   3 +
 drivers/iommu/hyperv-iommu.c       |  81 ++++++++
 drivers/net/hyperv/hyperv_net.h    |   6 +
 drivers/net/hyperv/netvsc.c        | 125 +++++++++++-
 drivers/net/hyperv/rndis_filter.c  |   3 +
 drivers/scsi/storvsc_drv.c         |  63 +++++-
 include/asm-generic/hyperv-tlfs.h  |   1 +
 include/asm-generic/mshyperv.h     |  18 +-
 include/linux/hyperv.h             |  16 ++
 include/linux/swiotlb.h            |   5 +
 kernel/dma/swiotlb.c               |  14 +-
 25 files changed, 1062 insertions(+), 98 deletions(-)
 create mode 100644 arch/x86/hyperv/ivm.c

-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134308.249999 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE9-0006qp-8S; Sun, 30 May 2021 17:27:53 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134308.249999; Sun, 30 May 2021 17:27:53 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE8-0006eq-Gh; Sun, 30 May 2021 17:27:52 +0000
Received: by outflank-mailman (input) for mailman id 134308;
 Sun, 30 May 2021 15:07:24 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN2C-0000PC-1P
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:07:24 +0000
Received: from mail-pj1-x1033.google.com (unknown [2607:f8b0:4864:20::1033])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id b91ecb2c-bd2d-466d-a484-57d57bc10950;
 Sun, 30 May 2021 15:06:51 +0000 (UTC)
Received: by mail-pj1-x1033.google.com with SMTP id ot16so5187758pjb.3
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:51 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.49
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:50 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: b91ecb2c-bd2d-466d-a484-57d57bc10950
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=EtCkaCE4/EzFHv3S6hPZuMsiJS31BeqUj9mfIppTr8A=;
        b=BCfbHHh7cyudMiVzOhaWtM71E5It7nVh3XODt6DlztlKwSR8hmKzGs/GHbE9k5ukZn
         cvwUBE5ZURUeWzebOEMn+N+iyqbQf4jk5jAG3OEblpaI62E5gwLiBEvwNG3m6sjyUZ/x
         hRItnVqfcYlZWv6o/PEot2mwHpeCnoNP0dA2FhRjTcNveOVwR98Xi7VhiU05+2rHeKsG
         dCGrpoRct9XXG+mj101SDS97GDP//b5DIZwBmvoBkTvUK4hiwfNAb8emrWGxAk2fR8HL
         u18x+mH2iH03GzLJkM+cxIuDbKfAhp1Vh6iQqsI/53FhLbFKbvqa90/OlAKUI3PIs6w5
         FTNQ==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=EtCkaCE4/EzFHv3S6hPZuMsiJS31BeqUj9mfIppTr8A=;
        b=cQ4kCSD4G/v95nY6ibXjli2AXet3HiuIqv3/wVvprWaY25GpLdBIGHShGE7xU9w+PU
         BFn9+bhjtjshV1w5hRZEMnEod0PzpXAmhlGMehkwcKTLqgC68HV7L2IF0FoGdroV3xtp
         aNxRLOPQo2vT0nzZxjuvHZWnPFVEOT4ueKZf4I9blzLGtZ/6PCc/Dot8Ynn2TZb4c753
         +3Vxh7jYSP/4jufkWJgEop4SEezwIcM4yX1vK1T1GjnS8w+4uY6nNmS7dXX4+0Nq8rEB
         TRKCgHvNCtEiBV1+UYHe2AEKM9oM6uZD1TpTAb/0sCJZieAaobNUtYEXzx5wRlZyGJgs
         HWAg==
X-Gm-Message-State: AOAM531SBKV/PZG1k77WW42LGVxQUzBFSmp+u4Tgmpdg44hkDWcD7fm9
	wcsfteYjV/FIkTeRSq4xyYE=
X-Google-Smtp-Source: ABdhPJxBmJKYzQHIxtFyRq8cxb9YKkXh8aR0S9biw6tunPzBCDJlQ572ZHUOD2JLd5vyGyCMqeAmvA==
X-Received: by 2002:a17:90b:400c:: with SMTP id ie12mr6846115pjb.107.1622387210845;
        Sun, 30 May 2021 08:06:50 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 09/11] HV/IOMMU: Enable swiotlb bounce buffer for Isolation VM
Date: Sun, 30 May 2021 11:06:26 -0400
Message-Id: <20210530150628.2063957-10-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V Isolation VMs require bounce buffer support to copy data
from/to encrypted memory, so enable swiotlb force mode to use the
swiotlb bounce buffer for DMA transactions.

In an Isolation VM with AMD SEV, the bounce buffer needs to be accessed
via the extra address space above shared_gpa_boundary (e.g. the 39th
address bit) reported by the Hyper-V CPUID leaf ISOLATION_CONFIG. The
physical address used for access is the original physical address plus
shared_gpa_boundary. In the AMD SEV-SNP spec, shared_gpa_boundary is
called the virtual top of memory (vTOM): memory addresses below vTOM
are automatically treated as private, while memory above vTOM is
treated as shared.

ioremap_cache() can't be used in hyperv_iommu_swiotlb_init(), which
runs too early, so remap the bounce buffer in
hyperv_iommu_swiotlb_later_init() instead.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 arch/x86/xen/pci-swiotlb-xen.c |  3 +-
 drivers/hv/vmbus_drv.c         |  3 ++
 drivers/iommu/hyperv-iommu.c   | 81 ++++++++++++++++++++++++++++++++++
 include/linux/hyperv.h         |  1 +
 4 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/arch/x86/xen/pci-swiotlb-xen.c b/arch/x86/xen/pci-swiotlb-xen.c
index 54f9aa7e8457..43bd031aa332 100644
--- a/arch/x86/xen/pci-swiotlb-xen.c
+++ b/arch/x86/xen/pci-swiotlb-xen.c
@@ -4,6 +4,7 @@
 
 #include <linux/dma-map-ops.h>
 #include <linux/pci.h>
+#include <linux/hyperv.h>
 #include <xen/swiotlb-xen.h>
 
 #include <asm/xen/hypervisor.h>
@@ -91,6 +92,6 @@ int pci_xen_swiotlb_init_late(void)
 EXPORT_SYMBOL_GPL(pci_xen_swiotlb_init_late);
 
 IOMMU_INIT_FINISH(pci_xen_swiotlb_detect,
-		  NULL,
+		  hyperv_swiotlb_detect,
 		  pci_xen_swiotlb_init,
 		  NULL);
diff --git a/drivers/hv/vmbus_drv.c b/drivers/hv/vmbus_drv.c
index 92cb3f7d21d9..5e3bb76d4dee 100644
--- a/drivers/hv/vmbus_drv.c
+++ b/drivers/hv/vmbus_drv.c
@@ -23,6 +23,7 @@
 #include <linux/cpu.h>
 #include <linux/sched/task_stack.h>
 
+#include <linux/dma-map-ops.h>
 #include <linux/delay.h>
 #include <linux/notifier.h>
 #include <linux/ptrace.h>
@@ -2080,6 +2081,7 @@ struct hv_device *vmbus_device_create(const guid_t *type,
 	return child_device_obj;
 }
 
+static u64 vmbus_dma_mask = DMA_BIT_MASK(64);
 /*
  * vmbus_device_register - Register the child device
  */
@@ -2120,6 +2122,7 @@ int vmbus_device_register(struct hv_device *child_device_obj)
 	}
 	hv_debug_add_dev_dir(child_device_obj);
 
+	child_device_obj->device.dma_mask = &vmbus_dma_mask;
 	return 0;
 
 err_kset_unregister:
diff --git a/drivers/iommu/hyperv-iommu.c b/drivers/iommu/hyperv-iommu.c
index e285a220c913..2604619c6fa3 100644
--- a/drivers/iommu/hyperv-iommu.c
+++ b/drivers/iommu/hyperv-iommu.c
@@ -13,14 +13,22 @@
 #include <linux/irq.h>
 #include <linux/iommu.h>
 #include <linux/module.h>
+#include <linux/hyperv.h>
+#include <linux/io.h>
 
 #include <asm/apic.h>
 #include <asm/cpu.h>
 #include <asm/hw_irq.h>
 #include <asm/io_apic.h>
+#include <asm/iommu.h>
+#include <asm/iommu_table.h>
 #include <asm/irq_remapping.h>
 #include <asm/hypervisor.h>
 #include <asm/mshyperv.h>
+#include <asm/swiotlb.h>
+#include <linux/dma-map-ops.h>
+#include <linux/dma-direct.h>
+#include <linux/set_memory.h>
 
 #include "irq_remapping.h"
 
@@ -36,6 +44,8 @@
 static cpumask_t ioapic_max_cpumask = { CPU_BITS_NONE };
 static struct irq_domain *ioapic_ir_domain;
 
+static unsigned long hyperv_io_tlb_start, hyperv_io_tlb_size;
+
 static int hyperv_ir_set_affinity(struct irq_data *data,
 		const struct cpumask *mask, bool force)
 {
@@ -337,4 +347,75 @@ static const struct irq_domain_ops hyperv_root_ir_domain_ops = {
 	.free = hyperv_root_irq_remapping_free,
 };
 
+void __init hyperv_iommu_swiotlb_init(void)
+{
+	unsigned long bytes, io_tlb_nslabs;
+	void *vstart;
+
+	/* Allocate Hyper-V swiotlb  */
+	bytes = 200 * 1024 * 1024;
+	vstart = memblock_alloc_low(PAGE_ALIGN(bytes), PAGE_SIZE);
+	io_tlb_nslabs = bytes >> IO_TLB_SHIFT;
+	hyperv_io_tlb_size = bytes;
+
+	if (!vstart) {
+		pr_warn("Failed to allocate swiotlb buffer.\n");
+		return;
+	}
+
+	hyperv_io_tlb_start = virt_to_phys(vstart);
+	if (!hyperv_io_tlb_start)
+		panic("%s: Failed to allocate %lu bytes align=0x%lx.\n",
+		      __func__, PAGE_ALIGN(bytes), PAGE_SIZE);
+
+	if (swiotlb_init_with_tbl(vstart, io_tlb_nslabs, 1))
+		panic("%s: Cannot allocate SWIOTLB buffer.\n", __func__);
+
+	swiotlb_set_max_segment(HV_HYP_PAGE_SIZE);
+}
+
+int __init hyperv_swiotlb_detect(void)
+{
+	if (hypervisor_is_type(X86_HYPER_MS_HYPERV)
+	    && hv_is_isolation_supported()) {
+		/*
+		 * Enable swiotlb force mode in Isolation VMs to
+		 * use the swiotlb bounce buffer for DMA transactions.
+		 */
+		swiotlb_force = SWIOTLB_FORCE;
+		return 1;
+	}
+
+	return 0;
+}
+
+void __init hyperv_iommu_swiotlb_later_init(void)
+{
+	void *hyperv_io_tlb_remap;
+	int ret;
+
+	/* Make the bounce buffer visible to the host and remap it above the shared GPA boundary. */
+	if (hv_isolation_type_snp()) {
+		ret = set_memory_decrypted((unsigned long)
+				phys_to_virt(hyperv_io_tlb_start),
+				HVPFN_UP(hyperv_io_tlb_size));
+		if (ret)
+			panic("%s: Failed to mark Hyper-V swiotlb buffer visible to host. err=%d\n",
+			      __func__, ret);
+
+		hyperv_io_tlb_remap = ioremap_cache(hyperv_io_tlb_start
+				+ ms_hyperv.shared_gpa_boundary,
+				hyperv_io_tlb_size);
+		if (!hyperv_io_tlb_remap)
+			panic("Failed to remap io tlb.\n");
+
+		memset(hyperv_io_tlb_remap, 0x00, hyperv_io_tlb_size);
+		swiotlb_set_bounce_remap(hyperv_io_tlb_remap);
+	}
+}
+
+IOMMU_INIT_FINISH(hyperv_swiotlb_detect,
+		  NULL, hyperv_iommu_swiotlb_init,
+		  hyperv_iommu_swiotlb_later_init);
+
 #endif
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 06eccaba10c5..babbe19f57e2 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1759,6 +1759,7 @@ int hyperv_write_cfg_blk(struct pci_dev *dev, void *buf, unsigned int len,
 int hyperv_reg_block_invalidate(struct pci_dev *dev, void *context,
 				void (*block_invalidate)(void *context,
 							 u64 block_mask));
+int __init hyperv_swiotlb_detect(void);
 
 struct hyperv_pci_block_ops {
 	int (*read_block)(struct pci_dev *dev, void *buf, unsigned int buf_len,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134296.249951 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE5-0005jb-B1; Sun, 30 May 2021 17:27:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134296.249951; Sun, 30 May 2021 17:27:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE5-0005fF-1m; Sun, 30 May 2021 17:27:49 +0000
Received: by outflank-mailman (input) for mailman id 134296;
 Sun, 30 May 2021 15:06:54 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN1i-0000PC-0X
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:06:54 +0000
Received: from mail-pl1-x636.google.com (unknown [2607:f8b0:4864:20::636])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 566bc030-44b1-4ba4-a923-ef979bb302fb;
 Sun, 30 May 2021 15:06:42 +0000 (UTC)
Received: by mail-pl1-x636.google.com with SMTP id q16so3943650pls.6
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:42 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.40
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:41 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 566bc030-44b1-4ba4-a923-ef979bb302fb
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 03/11] x86/Hyper-V: Add new hvcall guest address host visibility support
Date: Sun, 30 May 2021 11:06:20 -0400
Message-Id: <20210530150628.2063957-4-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Add support for the new hvcall that sets guest address host visibility.
Mark the vmbus ring buffer visible to the host when the gpadl buffer is
created, and mark it not visible again when the gpadl buffer is torn down.

Co-developed-by: Sunil Muthuswamy <sunilmut@microsoft.com>
Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 arch/x86/hyperv/Makefile           |   2 +-
 arch/x86/hyperv/ivm.c              | 106 +++++++++++++++++++++++++++++
 arch/x86/include/asm/hyperv-tlfs.h |  24 +++++++
 arch/x86/include/asm/mshyperv.h    |   4 +-
 arch/x86/mm/pat/set_memory.c       |  10 ++-
 drivers/hv/channel.c               |  38 ++++++++++-
 include/asm-generic/hyperv-tlfs.h  |   1 +
 include/linux/hyperv.h             |  10 +++
 8 files changed, 190 insertions(+), 5 deletions(-)
 create mode 100644 arch/x86/hyperv/ivm.c

diff --git a/arch/x86/hyperv/Makefile b/arch/x86/hyperv/Makefile
index 48e2c51464e8..5d2de10809ae 100644
--- a/arch/x86/hyperv/Makefile
+++ b/arch/x86/hyperv/Makefile
@@ -1,5 +1,5 @@
 # SPDX-License-Identifier: GPL-2.0-only
-obj-y			:= hv_init.o mmu.o nested.o irqdomain.o
+obj-y			:= hv_init.o mmu.o nested.o irqdomain.o ivm.o
 obj-$(CONFIG_X86_64)	+= hv_apic.o hv_proc.o
 
 ifdef CONFIG_X86_64
diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
new file mode 100644
index 000000000000..fad1d3024056
--- /dev/null
+++ b/arch/x86/hyperv/ivm.c
@@ -0,0 +1,106 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Hyper-V Isolation VM interface with paravisor and hypervisor
+ *
+ * Author:
+ *  Tianyu Lan <Tianyu.Lan@microsoft.com>
+ */
+
+#include <linux/hyperv.h>
+#include <linux/types.h>
+#include <linux/bitfield.h>
+#include <asm/io.h>
+#include <asm/mshyperv.h>
+
+/*
+ * hv_mark_gpa_visibility - Set pages visible to host via hvcall.
+ *
+ * In Isolation VM, all guest memory is encrypted and invisible to the
+ * host, so the guest needs to make memory visible to the host via this
+ * hvcall before sharing it with the host.
+ */
+int hv_mark_gpa_visibility(u16 count, const u64 pfn[], u32 visibility)
+{
+	struct hv_gpa_range_for_visibility **input_pcpu, *input;
+	u16 pages_processed;
+	u64 hv_status;
+	unsigned long flags;
+
+	/* no-op if partition isolation is not enabled */
+	if (!hv_is_isolation_supported())
+		return 0;
+
+	if (count > HV_MAX_MODIFY_GPA_REP_COUNT) {
+		pr_err("Hyper-V: GPA count:%d exceeds supported:%lu\n", count,
+			HV_MAX_MODIFY_GPA_REP_COUNT);
+		return -EINVAL;
+	}
+
+	local_irq_save(flags);
+	input_pcpu = (struct hv_gpa_range_for_visibility **)
+			this_cpu_ptr(hyperv_pcpu_input_arg);
+	input = *input_pcpu;
+	if (unlikely(!input)) {
+		local_irq_restore(flags);
+		return -EINVAL;
+	}
+
+	input->partition_id = HV_PARTITION_ID_SELF;
+	input->host_visibility = visibility;
+	input->reserved0 = 0;
+	input->reserved1 = 0;
+	memcpy((void *)input->gpa_page_list, pfn, count * sizeof(*pfn));
+	hv_status = hv_do_rep_hypercall(
+			HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY, count,
+			0, input, &pages_processed);
+	local_irq_restore(flags);
+
+	if (!(hv_status & HV_HYPERCALL_RESULT_MASK))
+		return 0;
+
+	return hv_status & HV_HYPERCALL_RESULT_MASK;
+}
+EXPORT_SYMBOL(hv_mark_gpa_visibility);
+
+/*
+ * hv_set_mem_host_visibility - Set specified memory visible to host.
+ *
+ * In Isolation VM, all guest memory is encrypted and invisible to the
+ * host, so the guest needs to make memory visible to the host via hvcall
+ * before sharing it with the host. This function is a wrapper around
+ * hv_mark_gpa_visibility() that takes a memory base and size.
+ */
+int hv_set_mem_host_visibility(void *kbuffer, size_t size,
+			       enum vmbus_page_visibility visibility)
+{
+	int pagecount = size >> HV_HYP_PAGE_SHIFT;
+	u64 *pfn_array;
+	int ret = 0;
+	int i, pfn;
+
+	if (!hv_is_isolation_supported())
+		return 0;
+
+	pfn_array = vzalloc(HV_HYP_PAGE_SIZE);
+	if (!pfn_array)
+		return -ENOMEM;
+
+	for (i = 0, pfn = 0; i < pagecount; i++) {
+		pfn_array[pfn] = virt_to_hvpfn(kbuffer + i * HV_HYP_PAGE_SIZE);
+		pfn++;
+
+		if (pfn == HV_MAX_MODIFY_GPA_REP_COUNT || i == pagecount - 1) {
+			ret |= hv_mark_gpa_visibility(pfn, pfn_array, visibility);
+			pfn = 0;
+
+			if (ret)
+				goto err_free_pfn_array;
+		}
+	}
+
+ err_free_pfn_array:
+	vfree(pfn_array);
+	return ret;
+}
+EXPORT_SYMBOL_GPL(hv_set_mem_host_visibility);
+
diff --git a/arch/x86/include/asm/hyperv-tlfs.h b/arch/x86/include/asm/hyperv-tlfs.h
index 606f5cc579b2..632281b91b44 100644
--- a/arch/x86/include/asm/hyperv-tlfs.h
+++ b/arch/x86/include/asm/hyperv-tlfs.h
@@ -262,6 +262,17 @@ enum hv_isolation_type {
 #define HV_X64_MSR_TIME_REF_COUNT	HV_REGISTER_TIME_REF_COUNT
 #define HV_X64_MSR_REFERENCE_TSC	HV_REGISTER_REFERENCE_TSC
 
+/* Hyper-V GPA map flags */
+#define HV_MAP_GPA_PERMISSIONS_NONE            0x0
+#define HV_MAP_GPA_READABLE                    0x1
+#define HV_MAP_GPA_WRITABLE                    0x2
+
+enum vmbus_page_visibility {
+	VMBUS_PAGE_NOT_VISIBLE = 0,
+	VMBUS_PAGE_VISIBLE_READ_ONLY = 1,
+	VMBUS_PAGE_VISIBLE_READ_WRITE = 3
+};
+
 /*
  * Declare the MSR used to setup pages used to communicate with the hypervisor.
  */
@@ -561,4 +572,17 @@ enum hv_interrupt_type {
 
 #include <asm-generic/hyperv-tlfs.h>
 
+/* All input parameters should be in single page. */
+#define HV_MAX_MODIFY_GPA_REP_COUNT		\
+	((PAGE_SIZE / sizeof(u64)) - 2)
+
+/* HvCallModifySparseGpaPageHostVisibility hypercall */
+struct hv_gpa_range_for_visibility {
+	u64 partition_id;
+	u32 host_visibility:2;
+	u32 reserved0:30;
+	u32 reserved1;
+	u64 gpa_page_list[HV_MAX_MODIFY_GPA_REP_COUNT];
+} __packed;
+
 #endif
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index aeacca7c4da8..6af9d55ffe3b 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -194,7 +194,9 @@ struct irq_domain *hv_create_pci_msi_domain(void);
 int hv_map_ioapic_interrupt(int ioapic_id, bool level, int vcpu, int vector,
 		struct hv_interrupt_entry *entry);
 int hv_unmap_ioapic_interrupt(int ioapic_id, struct hv_interrupt_entry *entry);
-
+int hv_mark_gpa_visibility(u16 count, const u64 pfn[], u32 visibility);
+int hv_set_mem_host_visibility(void *kbuffer, size_t size,
+			       enum vmbus_page_visibility visibility);
 #else /* CONFIG_HYPERV */
 static inline void hyperv_init(void) {}
 static inline void hyperv_setup_mmu_ops(void) {}
diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
index 156cd235659f..a82975600107 100644
--- a/arch/x86/mm/pat/set_memory.c
+++ b/arch/x86/mm/pat/set_memory.c
@@ -29,6 +29,8 @@
 #include <asm/proto.h>
 #include <asm/memtype.h>
 #include <asm/set_memory.h>
+#include <asm/hyperv-tlfs.h>
+#include <asm/mshyperv.h>
 
 #include "../mm_internal.h"
 
@@ -1986,8 +1988,14 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
 	int ret;
 
 	/* Nothing to do if memory encryption is not active */
-	if (!mem_encrypt_active())
+	if (hv_is_isolation_supported()) {
+		return hv_set_mem_host_visibility((void *)addr,
+				numpages * HV_HYP_PAGE_SIZE,
+				enc ? VMBUS_PAGE_NOT_VISIBLE
+				: VMBUS_PAGE_VISIBLE_READ_WRITE);
+	} else if (!mem_encrypt_active()) {
 		return 0;
+	}
 
 	/* Should not be working on unaligned addresses */
 	if (WARN_ONCE(addr & ~PAGE_MASK, "misaligned address: %#lx\n", addr))
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index f3761c73b074..01048bb07082 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -17,6 +17,7 @@
 #include <linux/hyperv.h>
 #include <linux/uio.h>
 #include <linux/interrupt.h>
+#include <linux/set_memory.h>
 #include <asm/page.h>
 #include <asm/mshyperv.h>
 
@@ -465,7 +466,7 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
 	struct list_head *curr;
 	u32 next_gpadl_handle;
 	unsigned long flags;
-	int ret = 0;
+	int ret = 0, index;
 
 	next_gpadl_handle =
 		(atomic_inc_return(&vmbus_connection.next_gpadl_handle) - 1);
@@ -474,6 +475,13 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
 	if (ret)
 		return ret;
 
+	ret = set_memory_decrypted((unsigned long)kbuffer,
+				   HVPFN_UP(size));
+	if (ret) {
+		pr_warn("Failed to set host visibility.\n");
+		return ret;
+	}
+
 	init_completion(&msginfo->waitevent);
 	msginfo->waiting_channel = channel;
 
@@ -539,6 +547,15 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
 	/* At this point, we received the gpadl created msg */
 	*gpadl_handle = gpadlmsg->gpadl;
 
+	if (type == HV_GPADL_BUFFER)
+		index = 0;
+	else
+		index = channel->gpadl_range[1].gpadlhandle ? 2 : 1;
+
+	channel->gpadl_range[index].size = size;
+	channel->gpadl_range[index].buffer = kbuffer;
+	channel->gpadl_range[index].gpadlhandle = *gpadl_handle;
+
 cleanup:
 	spin_lock_irqsave(&vmbus_connection.channelmsg_lock, flags);
 	list_del(&msginfo->msglistentry);
@@ -549,6 +566,11 @@ static int __vmbus_establish_gpadl(struct vmbus_channel *channel,
 	}
 
 	kfree(msginfo);
+
+	if (ret)
+		set_memory_encrypted((unsigned long)kbuffer,
+				     HVPFN_UP(size));
+
 	return ret;
 }
 
@@ -811,7 +833,7 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
 	struct vmbus_channel_gpadl_teardown *msg;
 	struct vmbus_channel_msginfo *info;
 	unsigned long flags;
-	int ret;
+	int ret, i;
 
 	info = kzalloc(sizeof(*info) +
 		       sizeof(struct vmbus_channel_gpadl_teardown), GFP_KERNEL);
@@ -859,6 +881,18 @@ int vmbus_teardown_gpadl(struct vmbus_channel *channel, u32 gpadl_handle)
 	spin_unlock_irqrestore(&vmbus_connection.channelmsg_lock, flags);
 
 	kfree(info);
+
+	/* Find gpadl buffer virtual address and size. */
+	for (i = 0; i < VMBUS_GPADL_RANGE_COUNT; i++)
+		if (channel->gpadl_range[i].gpadlhandle == gpadl_handle)
+			break;
+
+	if (set_memory_encrypted((unsigned long)channel->gpadl_range[i].buffer,
+			HVPFN_UP(channel->gpadl_range[i].size)))
+		pr_warn("Failed to set mem host visibility.\n");
+
+	channel->gpadl_range[i].gpadlhandle = 0;
+
 	return ret;
 }
 EXPORT_SYMBOL_GPL(vmbus_teardown_gpadl);
diff --git a/include/asm-generic/hyperv-tlfs.h b/include/asm-generic/hyperv-tlfs.h
index 515c3fb06ab3..8a0219255545 100644
--- a/include/asm-generic/hyperv-tlfs.h
+++ b/include/asm-generic/hyperv-tlfs.h
@@ -158,6 +158,7 @@ struct ms_hyperv_tsc_page {
 #define HVCALL_RETARGET_INTERRUPT		0x007e
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_SPACE 0x00af
 #define HVCALL_FLUSH_GUEST_PHYSICAL_ADDRESS_LIST 0x00b0
+#define HVCALL_MODIFY_SPARSE_GPA_PAGE_HOST_VISIBILITY 0x00db
 
 /* Extended hypercalls */
 #define HV_EXT_CALL_QUERY_CAPABILITIES		0x8001
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index 2e859d2f9609..06eccaba10c5 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -809,6 +809,14 @@ struct vmbus_device {
 
 #define VMBUS_DEFAULT_MAX_PKT_SIZE 4096
 
+struct vmbus_gpadl_range {
+	u32 gpadlhandle;
+	u32 size;
+	void *buffer;
+};
+
+#define VMBUS_GPADL_RANGE_COUNT		3
+
 struct vmbus_channel {
 	struct list_head listentry;
 
@@ -829,6 +837,8 @@ struct vmbus_channel {
 	struct completion rescind_event;
 
 	u32 ringbuffer_gpadlhandle;
+	/* GPADL_RING and Send/Receive GPADL_BUFFER. */
+	struct vmbus_gpadl_range gpadl_range[VMBUS_GPADL_RANGE_COUNT];
 
 	/* Allocated memory for ring buffer */
 	struct page *ringbuffer_page;
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134311.250025 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPEB-0007Ke-TD; Sun, 30 May 2021 17:27:55 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134311.250025; Sun, 30 May 2021 17:27:55 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPEA-0007Du-O4; Sun, 30 May 2021 17:27:54 +0000
Received: by outflank-mailman (input) for mailman id 134311;
 Sun, 30 May 2021 15:13:09 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN2H-0000PC-1Z
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:07:29 +0000
Received: from mail-pj1-x1030.google.com (unknown [2607:f8b0:4864:20::1030])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 5a39f5d7-2b84-4e76-bdca-43fbe04b93fb;
 Sun, 30 May 2021 15:06:52 +0000 (UTC)
Received: by mail-pj1-x1030.google.com with SMTP id f8so5216697pjh.0
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:52 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.51
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:51 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 5a39f5d7-2b84-4e76-bdca-43fbe04b93fb
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 10/11] HV/Netvsc: Add Isolation VM support for netvsc driver
Date: Sun, 30 May 2021 11:06:27 -0400
Message-Id: <20210530150628.2063957-11-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

In Isolation VM, all memory shared with the host must be marked visible
to the host via hvcall. vmbus_establish_gpadl() already does this for
the netvsc rx/tx ring buffers, but the page buffers used by
vmbus_sendpacket_pagebuffer() still need to be handled. Use the DMA API
to map/unmap this memory when sending/receiving packets; the Hyper-V
DMA ops callback will use swiotlb to allocate a bounce buffer and copy
data from/to it.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/net/hyperv/hyperv_net.h   |   6 ++
 drivers/net/hyperv/netvsc.c       | 125 ++++++++++++++++++++++++++++--
 drivers/net/hyperv/rndis_filter.c |   3 +
 include/linux/hyperv.h            |   5 ++
 4 files changed, 133 insertions(+), 6 deletions(-)

diff --git a/drivers/net/hyperv/hyperv_net.h b/drivers/net/hyperv/hyperv_net.h
index b11aa68b44ec..c2fbb9d4df2c 100644
--- a/drivers/net/hyperv/hyperv_net.h
+++ b/drivers/net/hyperv/hyperv_net.h
@@ -164,6 +164,7 @@ struct hv_netvsc_packet {
 	u32 total_bytes;
 	u32 send_buf_index;
 	u32 total_data_buflen;
+	struct hv_dma_range *dma_range;
 };
 
 #define NETVSC_HASH_KEYLEN 40
@@ -1074,6 +1075,7 @@ struct netvsc_device {
 
 	/* Receive buffer allocated by us but manages by NetVSP */
 	void *recv_buf;
+	void *recv_original_buf;
 	u32 recv_buf_size; /* allocated bytes */
 	u32 recv_buf_gpadl_handle;
 	u32 recv_section_cnt;
@@ -1082,6 +1084,8 @@ struct netvsc_device {
 
 	/* Send buffer allocated by us */
 	void *send_buf;
+	void *send_original_buf;
+	u32 send_buf_size;
 	u32 send_buf_gpadl_handle;
 	u32 send_section_cnt;
 	u32 send_section_size;
@@ -1729,4 +1733,6 @@ struct rndis_message {
 #define RETRY_US_HI	10000
 #define RETRY_MAX	2000	/* >10 sec */
 
+void netvsc_dma_unmap(struct hv_device *hv_dev,
+		      struct hv_netvsc_packet *packet);
 #endif /* _HYPERV_NET_H */
diff --git a/drivers/net/hyperv/netvsc.c b/drivers/net/hyperv/netvsc.c
index 7bd935412853..a01740c6c6b8 100644
--- a/drivers/net/hyperv/netvsc.c
+++ b/drivers/net/hyperv/netvsc.c
@@ -153,8 +153,21 @@ static void free_netvsc_device(struct rcu_head *head)
 	int i;
 
 	kfree(nvdev->extension);
-	vfree(nvdev->recv_buf);
-	vfree(nvdev->send_buf);
+
+	if (nvdev->recv_original_buf) {
+		vunmap(nvdev->recv_buf);
+		vfree(nvdev->recv_original_buf);
+	} else {
+		vfree(nvdev->recv_buf);
+	}
+
+	if (nvdev->send_original_buf) {
+		vunmap(nvdev->send_buf);
+		vfree(nvdev->send_original_buf);
+	} else {
+		vfree(nvdev->send_buf);
+	}
+
 	kfree(nvdev->send_section_map);
 
 	for (i = 0; i < VRSS_CHANNEL_MAX; i++) {
@@ -338,8 +351,10 @@ static int netvsc_init_buf(struct hv_device *device,
 	struct net_device *ndev = hv_get_drvdata(device);
 	struct nvsp_message *init_packet;
 	unsigned int buf_size;
+	unsigned long *pfns;
 	size_t map_words;
 	int i, ret = 0;
+	void *vaddr;
 
 	/* Get receive buffer area. */
 	buf_size = device_info->recv_sections * device_info->recv_section_size;
@@ -375,6 +390,21 @@ static int netvsc_init_buf(struct hv_device *device,
 		goto cleanup;
 	}
 
+	if (hv_isolation_type_snp()) {
+		pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
+			       GFP_KERNEL);
+		for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
+			pfns[i] = virt_to_hvpfn(net_device->recv_buf + i * HV_HYP_PAGE_SIZE) +
+				(ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
+
+		vaddr = vmap_pfn(pfns, buf_size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
+		kfree(pfns);
+		if (!vaddr)
+			goto cleanup;
+		net_device->recv_original_buf = net_device->recv_buf;
+		net_device->recv_buf = vaddr;
+	}
+
 	/* Notify the NetVsp of the gpadl handle */
 	init_packet = &net_device->channel_init_pkt;
 	memset(init_packet, 0, sizeof(struct nvsp_message));
@@ -477,6 +507,23 @@ static int netvsc_init_buf(struct hv_device *device,
 		goto cleanup;
 	}
 
+	if (hv_isolation_type_snp()) {
+		pfns = kcalloc(buf_size / HV_HYP_PAGE_SIZE, sizeof(unsigned long),
+			       GFP_KERNEL);
+
+		for (i = 0; i < buf_size / HV_HYP_PAGE_SIZE; i++)
+			pfns[i] = virt_to_hvpfn(net_device->send_buf + i * HV_HYP_PAGE_SIZE)
+				+ (ms_hyperv.shared_gpa_boundary >> HV_HYP_PAGE_SHIFT);
+
+		vaddr = vmap_pfn(pfns, buf_size / HV_HYP_PAGE_SIZE, PAGE_KERNEL_IO);
+		kfree(pfns);
+		if (!vaddr)
+			goto cleanup;
+
+		net_device->send_original_buf = net_device->send_buf;
+		net_device->send_buf = vaddr;
+	}
+
 	/* Notify the NetVsp of the gpadl handle */
 	init_packet = &net_device->channel_init_pkt;
 	memset(init_packet, 0, sizeof(struct nvsp_message));
@@ -767,7 +814,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
 
 	/* Notify the layer above us */
 	if (likely(skb)) {
-		const struct hv_netvsc_packet *packet
+		struct hv_netvsc_packet *packet
 			= (struct hv_netvsc_packet *)skb->cb;
 		u32 send_index = packet->send_buf_index;
 		struct netvsc_stats *tx_stats;
@@ -783,6 +830,7 @@ static void netvsc_send_tx_complete(struct net_device *ndev,
 		tx_stats->bytes += packet->total_bytes;
 		u64_stats_update_end(&tx_stats->syncp);
 
+		netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
 		napi_consume_skb(skb, budget);
 	}
 
@@ -947,6 +995,63 @@ static void netvsc_copy_to_send_buf(struct netvsc_device *net_device,
 		memset(dest, 0, padding);
 }
 
+void netvsc_dma_unmap(struct hv_device *hv_dev,
+		      struct hv_netvsc_packet *packet)
+{
+	u32 page_count = packet->cp_partial ?
+		packet->page_buf_cnt - packet->rmsg_pgcnt :
+		packet->page_buf_cnt;
+	int i;
+
+	if (!packet->dma_range)
+		return;
+
+	for (i = 0; i < page_count; i++)
+		dma_unmap_single(&hv_dev->device, packet->dma_range[i].dma,
+				 packet->dma_range[i].mapping_size,
+				 DMA_TO_DEVICE);
+
+	kfree(packet->dma_range);
+}
+
+int netvsc_dma_map(struct hv_device *hv_dev,
+		   struct hv_netvsc_packet *packet,
+		   struct hv_page_buffer *pb)
+{
+	u32 page_count =  packet->cp_partial ?
+		packet->page_buf_cnt - packet->rmsg_pgcnt :
+		packet->page_buf_cnt;
+	dma_addr_t dma;
+	int i;
+
+	packet->dma_range = kcalloc(page_count,
+				    sizeof(*packet->dma_range),
+				    GFP_KERNEL);
+	if (!packet->dma_range)
+		return -ENOMEM;
+
+	for (i = 0; i < page_count; i++) {
+		char *src = phys_to_virt((pb[i].pfn << HV_HYP_PAGE_SHIFT)
+					 + pb[i].offset);
+		u32 len = pb[i].len;
+
+		dma = dma_map_single(&hv_dev->device, src, len,
+				     DMA_TO_DEVICE);
+		if (dma_mapping_error(&hv_dev->device, dma)) {
+			kfree(packet->dma_range);
+			return -ENOMEM;
+		}
+
+		packet->dma_range[i].dma = dma;
+		packet->dma_range[i].mapping_size = len;
+		pb[i].pfn = dma >> HV_HYP_PAGE_SHIFT;
+		pb[i].offset = offset_in_hvpage(dma);
+		pb[i].len = len;
+	}
+
+	return 0;
+}
+
 static inline int netvsc_send_pkt(
 	struct hv_device *device,
 	struct hv_netvsc_packet *packet,
@@ -987,14 +1092,22 @@ static inline int netvsc_send_pkt(
 
 	trace_nvsp_send_pkt(ndev, out_channel, rpkt);
 
+	packet->dma_range = NULL;
 	if (packet->page_buf_cnt) {
 		if (packet->cp_partial)
 			pb += packet->rmsg_pgcnt;
 
+		ret = netvsc_dma_map(ndev_ctx->device_ctx, packet, pb);
+		if (ret)
+			return ret;
+
 		ret = vmbus_sendpacket_pagebuffer(out_channel,
-						  pb, packet->page_buf_cnt,
-						  &nvmsg, sizeof(nvmsg),
-						  req_id);
+					  pb, packet->page_buf_cnt,
+					  &nvmsg, sizeof(nvmsg),
+					  req_id);
+
+		if (ret)
+			netvsc_dma_unmap(ndev_ctx->device_ctx, packet);
 	} else {
 		ret = vmbus_sendpacket(out_channel,
 				       &nvmsg, sizeof(nvmsg),
diff --git a/drivers/net/hyperv/rndis_filter.c b/drivers/net/hyperv/rndis_filter.c
index 983bf362466a..448c1ee39246 100644
--- a/drivers/net/hyperv/rndis_filter.c
+++ b/drivers/net/hyperv/rndis_filter.c
@@ -293,6 +293,8 @@ static void rndis_filter_receive_response(struct net_device *ndev,
 	u32 *req_id = &resp->msg.init_complete.req_id;
 	struct rndis_device *dev = nvdev->extension;
 	struct rndis_request *request = NULL;
+	struct hv_device *hv_dev = ((struct net_device_context *)
+			netdev_priv(ndev))->device_ctx;
 	bool found = false;
 	unsigned long flags;
 
@@ -361,6 +363,7 @@ static void rndis_filter_receive_response(struct net_device *ndev,
 			}
 		}
 
+		netvsc_dma_unmap(hv_dev, &request->pkt);
 		complete(&request->wait_event);
 	} else {
 		netdev_err(ndev,
diff --git a/include/linux/hyperv.h b/include/linux/hyperv.h
index babbe19f57e2..90abff664495 100644
--- a/include/linux/hyperv.h
+++ b/include/linux/hyperv.h
@@ -1616,6 +1616,11 @@ struct hyperv_service_callback {
 	void (*callback)(void *context);
 };
 
+struct hv_dma_range {
+	dma_addr_t dma;
+	u32 mapping_size;
+};
+
 #define MAX_SRV_VER	0x7ffffff
 extern bool vmbus_prep_negotiate_resp(struct icmsg_hdr *icmsghdrp, u8 *buf, u32 buflen,
 				const int *fw_version, int fw_vercnt,
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134304.249980 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE7-0006Mx-HY; Sun, 30 May 2021 17:27:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134304.249980; Sun, 30 May 2021 17:27:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE6-0006FV-Uv; Sun, 30 May 2021 17:27:50 +0000
Received: by outflank-mailman (input) for mailman id 134304;
 Sun, 30 May 2021 15:07:14 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN22-0000PC-0y
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:07:14 +0000
Received: from mail-pl1-x62a.google.com (unknown [2607:f8b0:4864:20::62a])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 827f820c-da9e-4b91-bf18-71398a90a062;
 Sun, 30 May 2021 15:06:48 +0000 (UTC)
Received: by mail-pl1-x62a.google.com with SMTP id v12so3926075plo.10
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:48 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.47
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:47 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 827f820c-da9e-4b91-bf18-71398a90a062
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=XbGhpS5yStXGWn3Ym/sECphkTnEdMSaFIpokEER49XU=;
        b=JC3yoSpI7HeBKP3hWsg8wjl3AFxs/Vv/OxS4bnwIFqCjOJl15TuGLGknha8Pn1n/ah
         0ZZUsS5bf4zVtnbK+OU9GfbzU8fFLPbSqbNhb187rqSV3zfZXEB6WH/++hB0KUJdUCem
         IU226b2zVIlLikxKj/uD8rBCpSUIvBorTTx2zMaiyobmmX4TyUZq+n2YR9OK3/T1fvuy
         FxivfaQTU/OoxpBt93x4SvNEzAgmbLv2krRO5irGI/Qt3eTDcCbT+MybWGjezFDPkS8C
         t2tDouUeiWVJhg0oX2dmFl8aqN6cluwzNiwnY2oQHOGtPovXxeiOgpEInpePVYD0962N
         TKIw==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=XbGhpS5yStXGWn3Ym/sECphkTnEdMSaFIpokEER49XU=;
        b=lZyQNiwDALGW2eZvjXPZQq3INKIeYvKI1YN3iHojNYl/nKrd+qehe6KNij3mROYUON
         UScpMLXpTT+ly5D2t1Io6/RMEnPGq+PIj1kdllBJSpftGa1KJSNSwiihtx3MOzN63f0Z
         gk1HrJWSuwUws6Bt319kO/foi3ojedo0fn2QiBgfkuGRgx+M+qwFZ68S/CssgFpxAzO8
         ZX+URTH90VsViXkbyXdKi0Fc6PPcYX9DG3arh/8qZ9G1dUK+hsgVlOBFZFQcq5fWm+bo
         0XMiHCXkatxn9GYabxZdM3iLQY+zVGKRwqfmaegYrVJVMfrvjgi2HFUUIM7OkjdmuU2z
         Y6bg==
X-Gm-Message-State: AOAM532mLy39iWdvUnprM2oUFawIU3CUfEpSGkPatCsQvwes50sWGObs
	baAHA9Sc5gT26sRW5v8S/9U=
X-Google-Smtp-Source: ABdhPJxYzw948CYn/vrnLtLj9td0ihxzeOeJLO8aPceNZ6uTQIjukJfgPqOdArA0dsf3M/WWZk7Qdg==
X-Received: by 2002:a17:90a:49cc:: with SMTP id l12mr14900755pjm.212.1622387208287;
        Sun, 30 May 2021 08:06:48 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 07/11] HV/Vmbus: Initialize VMbus ring buffer for Isolation VM
Date: Sun, 30 May 2021 11:06:24 -0400
Message-Id: <20210530150628.2063957-8-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

VMbus ring buffers are shared with the host, and in an Isolation VM with
SNP support they must be accessed through the extra address space above
the shared_gpa_boundary. This patch maps the ring buffer in that extra
address space via vmap_pfn(). The host-visibility hvcall smears the data
in the ring buffer, so reset the ring buffer memory to zero after making
the visibility hvcall.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/hv/Kconfig        |  1 +
 drivers/hv/channel.c      | 10 +++++
 drivers/hv/hyperv_vmbus.h |  2 +
 drivers/hv/ring_buffer.c  | 84 ++++++++++++++++++++++++++++++---------
 4 files changed, 79 insertions(+), 18 deletions(-)

diff --git a/drivers/hv/Kconfig b/drivers/hv/Kconfig
index 66c794d92391..a8386998be40 100644
--- a/drivers/hv/Kconfig
+++ b/drivers/hv/Kconfig
@@ -7,6 +7,7 @@ config HYPERV
 	depends on X86 && ACPI && X86_LOCAL_APIC && HYPERVISOR_GUEST
 	select PARAVIRT
 	select X86_HV_CALLBACK_VECTOR
+	select VMAP_PFN
 	help
 	  Select this option to run Linux as a Hyper-V client operating
 	  system.
diff --git a/drivers/hv/channel.c b/drivers/hv/channel.c
index 01048bb07082..7350da9dbe97 100644
--- a/drivers/hv/channel.c
+++ b/drivers/hv/channel.c
@@ -707,6 +707,16 @@ static int __vmbus_open(struct vmbus_channel *newchannel,
 	if (err)
 		goto error_clean_ring;
 
+	err = hv_ringbuffer_post_init(&newchannel->outbound,
+				      page, send_pages);
+	if (err)
+		goto error_free_gpadl;
+
+	err = hv_ringbuffer_post_init(&newchannel->inbound,
+				      &page[send_pages], recv_pages);
+	if (err)
+		goto error_free_gpadl;
+
 	/* Create and init the channel open message */
 	open_info = kzalloc(sizeof(*open_info) +
 			   sizeof(struct vmbus_channel_open_channel),
diff --git a/drivers/hv/hyperv_vmbus.h b/drivers/hv/hyperv_vmbus.h
index 40bc0eff6665..15cd23a561f3 100644
--- a/drivers/hv/hyperv_vmbus.h
+++ b/drivers/hv/hyperv_vmbus.h
@@ -172,6 +172,8 @@ extern int hv_synic_cleanup(unsigned int cpu);
 /* Interface */
 
 void hv_ringbuffer_pre_init(struct vmbus_channel *channel);
+int hv_ringbuffer_post_init(struct hv_ring_buffer_info *ring_info,
+		struct page *pages, u32 page_cnt);
 
 int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,
 		       struct page *pages, u32 pagecnt, u32 max_pkt_size);
diff --git a/drivers/hv/ring_buffer.c b/drivers/hv/ring_buffer.c
index 2aee356840a2..d4f93fca1108 100644
--- a/drivers/hv/ring_buffer.c
+++ b/drivers/hv/ring_buffer.c
@@ -17,6 +17,8 @@
 #include <linux/vmalloc.h>
 #include <linux/slab.h>
 #include <linux/prefetch.h>
+#include <linux/io.h>
+#include <asm/mshyperv.h>
 
 #include "hyperv_vmbus.h"
 
@@ -179,43 +181,89 @@ void hv_ringbuffer_pre_init(struct vmbus_channel *channel)
 	mutex_init(&channel->outbound.ring_buffer_mutex);
 }
 
-/* Initialize the ring buffer. */
-int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,
-		       struct page *pages, u32 page_cnt, u32 max_pkt_size)
+int hv_ringbuffer_post_init(struct hv_ring_buffer_info *ring_info,
+		       struct page *pages, u32 page_cnt)
 {
+	u64 physic_addr = page_to_pfn(pages) << PAGE_SHIFT;
+	unsigned long *pfns_wraparound;
+	void *vaddr;
 	int i;
-	struct page **pages_wraparound;
 
-	BUILD_BUG_ON((sizeof(struct hv_ring_buffer) != PAGE_SIZE));
+	if (!hv_isolation_type_snp())
+		return 0;
+
+	physic_addr += ms_hyperv.shared_gpa_boundary;
 
 	/*
 	 * First page holds struct hv_ring_buffer, do wraparound mapping for
 	 * the rest.
 	 */
-	pages_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(struct page *),
+	pfns_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(unsigned long),
 				   GFP_KERNEL);
-	if (!pages_wraparound)
+	if (!pfns_wraparound)
 		return -ENOMEM;
 
-	pages_wraparound[0] = pages;
+	pfns_wraparound[0] = physic_addr >> PAGE_SHIFT;
 	for (i = 0; i < 2 * (page_cnt - 1); i++)
-		pages_wraparound[i + 1] = &pages[i % (page_cnt - 1) + 1];
-
-	ring_info->ring_buffer = (struct hv_ring_buffer *)
-		vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP, PAGE_KERNEL);
-
-	kfree(pages_wraparound);
+		pfns_wraparound[i + 1] = (physic_addr >> PAGE_SHIFT) +
+			i % (page_cnt - 1) + 1;
 
-
-	if (!ring_info->ring_buffer)
+	vaddr = vmap_pfn(pfns_wraparound, page_cnt * 2 - 1, PAGE_KERNEL_IO);
+	kfree(pfns_wraparound);
+	if (!vaddr)
 		return -ENOMEM;
 
-	ring_info->ring_buffer->read_index =
-		ring_info->ring_buffer->write_index = 0;
+	/* Clean memory after setting host visibility. */
+	memset((void *)vaddr, 0x00, page_cnt * PAGE_SIZE);
+
+	ring_info->ring_buffer = (struct hv_ring_buffer *)vaddr;
+	ring_info->ring_buffer->read_index = 0;
+	ring_info->ring_buffer->write_index = 0;
 
 	/* Set the feature bit for enabling flow control. */
 	ring_info->ring_buffer->feature_bits.value = 1;
 
+	return 0;
+}
+
+/* Initialize the ring buffer. */
+int hv_ringbuffer_init(struct hv_ring_buffer_info *ring_info,
+		       struct page *pages, u32 page_cnt, u32 max_pkt_size)
+{
+	int i;
+	struct page **pages_wraparound;
+
+	BUILD_BUG_ON((sizeof(struct hv_ring_buffer) != PAGE_SIZE));
+
+	if (!hv_isolation_type_snp()) {
+		/*
+		 * First page holds struct hv_ring_buffer, do wraparound mapping for
+		 * the rest.
+		 */
+		pages_wraparound = kcalloc(page_cnt * 2 - 1, sizeof(struct page *),
+					   GFP_KERNEL);
+		if (!pages_wraparound)
+			return -ENOMEM;
+
+		pages_wraparound[0] = pages;
+		for (i = 0; i < 2 * (page_cnt - 1); i++)
+			pages_wraparound[i + 1] = &pages[i % (page_cnt - 1) + 1];
+
+		ring_info->ring_buffer = (struct hv_ring_buffer *)
+			vmap(pages_wraparound, page_cnt * 2 - 1, VM_MAP, PAGE_KERNEL);
+
+		kfree(pages_wraparound);
+
+		if (!ring_info->ring_buffer)
+			return -ENOMEM;
+
+		ring_info->ring_buffer->read_index =
+			ring_info->ring_buffer->write_index = 0;
+
+		/* Set the feature bit for enabling flow control. */
+		ring_info->ring_buffer->feature_bits.value = 1;
+	}
+
 	ring_info->ring_size = page_cnt << PAGE_SHIFT;
 	ring_info->ring_size_div10_reciprocal =
 		reciprocal_value(ring_info->ring_size / 10);
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:56 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:56 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134300.249968 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE6-00061L-HG; Sun, 30 May 2021 17:27:50 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134300.249968; Sun, 30 May 2021 17:27:50 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE5-0005xd-Uq; Sun, 30 May 2021 17:27:49 +0000
Received: by outflank-mailman (input) for mailman id 134300;
 Sun, 30 May 2021 15:07:04 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN1s-0000PC-0k
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:07:04 +0000
Received: from mail-pj1-x1030.google.com (unknown [2607:f8b0:4864:20::1030])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id f36adaac-9dab-42d3-8e39-a3f4c8e9df73;
 Sun, 30 May 2021 15:06:46 +0000 (UTC)
Received: by mail-pj1-x1030.google.com with SMTP id
 o17-20020a17090a9f91b029015cef5b3c50so7198667pjp.4
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:46 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.43
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:44 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: f36adaac-9dab-42d3-8e39-a3f4c8e9df73
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=z6S1GEUANkCSJDbGlD0AFfyq9MRalRtHEN/uug/GOu0=;
        b=bjkBFLIGpECWm12ZAUvidwjhy290SoG0X7BEjCajWvEiZptzu5I/ZT/UvY4n5d9vQK
         vTBSEZNwaH7n99NZzNVTQPnrt4hta7LrzuCGnHpiON9AUMkhumyiiozan8c/a5HWL8/Y
         MhU50l/wXGpatWp4kkoMPFXW5XREua8Ne8mEhC3qJtFrgXbkxzC7Xu3wI79epYQyGnHQ
         hv9OhB2IW6o2mfT96TOFoaY8GP/A/mlm3P19ebZlMOImyCrfxx7/c9J+oKIGXsseFTI2
         7eqK6e1F+iZOZgxUCpjNiMRUDE9IWIfbm4trwT0KL+Ea6sMa04paSo8i/kuyuz3ciGU2
         qirg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=z6S1GEUANkCSJDbGlD0AFfyq9MRalRtHEN/uug/GOu0=;
        b=dRzmTYZQ9361XVCpcxW5VBFbrgGH9pTTbplZW5+7xHBGT6MHj8cYY/BV2auogYi6LQ
         t0FrezhcuQJ2lLeudLTQ0cF6QLq/EXIXNR6aPxXNHeH0NuXv1ikqkQqeZaF5XLkYQ5TC
         arInUEND2QrwBy1YJvqNi7n1bEB2Bkt/zawf1tZUrMRXVS0ISiWWM6SQ5BnvM666S2CR
         1ht3ipCSiuUGeHUsQ14pkwgVc3Ku+KT7H4ksUdsYzT7gJ7Cu4w3tB8KWWLeOPM/vZSkG
         PFuBtRMBi/5EX2icLi+qXQTAm3ah7Umc29umFlwMr7Wz8hpJ7mtpgNT7MM4qBwY9JB+h
         Ad4g==
X-Gm-Message-State: AOAM533DwhZwXhVtprIvC++wcYVBShmCKEGTdbr3KDTbIecgHo/jLcbw
	mQdXw0y0IBIpCaMwRrrQ/sY=
X-Google-Smtp-Source: ABdhPJyBzi0D5XZ+1bVzKaDd93rKStOmEBliWJn7djh8UK0R2zDyliKB0crRLpyjBYOBktS3q/+xFg==
X-Received: by 2002:a17:90b:1b48:: with SMTP id nv8mr15017912pjb.39.1622387205419;
        Sun, 30 May 2021 08:06:45 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 05/11] HV: Add ghcb hvcall support for SNP VM
Date: Sun, 30 May 2021 11:06:22 -0400
Message-Id: <20210530150628.2063957-6-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V provides a GHCB-based hypercall mechanism for issuing the VMBus
HVCALL_SIGNAL_EVENT and HVCALL_POST_MESSAGE hypercalls in an SNP
Isolation VM. Add support for it.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 arch/x86/hyperv/ivm.c           | 69 +++++++++++++++++++++++++++++++++
 arch/x86/include/asm/mshyperv.h |  1 +
 drivers/hv/connection.c         |  6 ++-
 drivers/hv/hv.c                 |  8 +++-
 4 files changed, 82 insertions(+), 2 deletions(-)

diff --git a/arch/x86/hyperv/ivm.c b/arch/x86/hyperv/ivm.c
index fd6dd804beef..e687fca68ba3 100644
--- a/arch/x86/hyperv/ivm.c
+++ b/arch/x86/hyperv/ivm.c
@@ -18,8 +18,77 @@
 
 union hv_ghcb {
 	struct ghcb ghcb;
+	struct {
+		u64 hypercalldata[509];
+		u64 outputgpa;
+		union {
+			union {
+				struct {
+					u32 callcode        : 16;
+					u32 isfast          : 1;
+					u32 reserved1       : 14;
+					u32 isnested        : 1;
+					u32 countofelements : 12;
+					u32 reserved2       : 4;
+					u32 repstartindex   : 12;
+					u32 reserved3       : 4;
+				};
+				u64 asuint64;
+			} hypercallinput;
+			union {
+				struct {
+					u16 callstatus;
+					u16 reserved1;
+					u32 elementsprocessed : 12;
+					u32 reserved2         : 20;
+				};
+				u64 asuint64;
+			} hypercalloutput;
+		};
+		u64 reserved2;
+	} hypercall;
 } __packed __aligned(PAGE_SIZE);
 
+u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size)
+{
+	union hv_ghcb *hv_ghcb;
+	void **ghcb_base;
+	unsigned long flags;
+
+	if (!ms_hyperv.ghcb_base)
+		return -EFAULT;
+
+	local_irq_save(flags);
+	ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+	hv_ghcb = (union hv_ghcb *)*ghcb_base;
+	if (!hv_ghcb) {
+		local_irq_restore(flags);
+		return -EFAULT;
+	}
+
+	memset(hv_ghcb, 0x00, HV_HYP_PAGE_SIZE);
+	hv_ghcb->ghcb.protocol_version = 1;
+	hv_ghcb->ghcb.ghcb_usage = 1;
+
+	hv_ghcb->hypercall.outputgpa = (u64)output;
+	hv_ghcb->hypercall.hypercallinput.asuint64 = 0;
+	hv_ghcb->hypercall.hypercallinput.callcode = control;
+
+	if (input_size)
+		memcpy(hv_ghcb->hypercall.hypercalldata, input, input_size);
+
+	VMGEXIT();
+
+	hv_ghcb->ghcb.ghcb_usage = 0xffffffff;
+	memset(hv_ghcb->ghcb.save.valid_bitmap, 0,
+	       sizeof(hv_ghcb->ghcb.save.valid_bitmap));
+
+	local_irq_restore(flags);
+
+	return hv_ghcb->hypercall.hypercalloutput.callstatus;
+}
+EXPORT_SYMBOL_GPL(hv_ghcb_hypercall);
+
 void hv_ghcb_msr_write(u64 msr, u64 value)
 {
 	union hv_ghcb *hv_ghcb;
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index eec7f3357d51..51dfbd040930 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -259,6 +259,7 @@ void hv_sint_rdmsrl_ghcb(u64 msr, u64 *value);
 void hv_signal_eom_ghcb(void);
 void hv_ghcb_msr_write(u64 msr, u64 value);
 void hv_ghcb_msr_read(u64 msr, u64 *value);
+u64 hv_ghcb_hypercall(u64 control, void *input, void *output, u32 input_size);
 
 #define hv_get_synint_state_ghcb(int_num, val)			\
 	hv_sint_rdmsrl_ghcb(HV_X64_MSR_SINT0 + int_num, val)
diff --git a/drivers/hv/connection.c b/drivers/hv/connection.c
index 311cd005b3be..186fd4c8acd4 100644
--- a/drivers/hv/connection.c
+++ b/drivers/hv/connection.c
@@ -445,6 +445,10 @@ void vmbus_set_event(struct vmbus_channel *channel)
 
 	++channel->sig_events;
 
-	hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event);
+	if (hv_isolation_type_snp())
+		hv_ghcb_hypercall(HVCALL_SIGNAL_EVENT, &channel->sig_event,
+				NULL, sizeof(u64));
+	else
+		hv_do_fast_hypercall8(HVCALL_SIGNAL_EVENT, channel->sig_event);
 }
 EXPORT_SYMBOL_GPL(vmbus_set_event);
diff --git a/drivers/hv/hv.c b/drivers/hv/hv.c
index 28faa8364952..03bcc831b034 100644
--- a/drivers/hv/hv.c
+++ b/drivers/hv/hv.c
@@ -97,7 +97,13 @@ int hv_post_message(union hv_connection_id connection_id,
 	aligned_msg->payload_size = payload_size;
 	memcpy((void *)aligned_msg->payload, payload, payload_size);
 
-	status = hv_do_hypercall(HVCALL_POST_MESSAGE, aligned_msg, NULL);
+	if (hv_isolation_type_snp())
+		status = hv_ghcb_hypercall(HVCALL_POST_MESSAGE,
+				(void *)aligned_msg, NULL,
+				sizeof(struct hv_input_post_message));
+	else
+		status = hv_do_hypercall(HVCALL_POST_MESSAGE,
+				aligned_msg, NULL);
 
 	/* Preemption must remain disabled until after the hypercall
 	 * so some other thread can't get scheduled onto this cpu and
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134292.249936 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE4-0005SQ-Er; Sun, 30 May 2021 17:27:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134292.249936; Sun, 30 May 2021 17:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE4-0005Rs-9v; Sun, 30 May 2021 17:27:48 +0000
Received: by outflank-mailman (input) for mailman id 134292;
 Sun, 30 May 2021 15:06:44 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN1Y-0000PC-09
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:06:44 +0000
Received: from mail-pj1-x102e.google.com (unknown [2607:f8b0:4864:20::102e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bd11f8a9-b1f9-444a-b7fb-b7a2652d1e5e;
 Sun, 30 May 2021 15:06:39 +0000 (UTC)
Received: by mail-pj1-x102e.google.com with SMTP id
 jz2-20020a17090b14c2b0290162cf0b5a35so1479380pjb.5
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:39 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.37
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:38 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bd11f8a9-b1f9-444a-b7fb-b7a2652d1e5e
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=N7+X8C2xvctqats8In2MEizKCmhW7oGccQhxaGVYYMQ=;
        b=q7tWZton5iDKuT87fqyJEg2jMh65azm2dNFTU3sW+h6mvDH0LxlutXdO1Li/Bjijeg
         PwLdnmy0UWtYFAeQwSDrKPo7iqtlkOoQXcYZDnVe9dfNlYbKWOkVo5B0Qg8ecs08m3j/
         /uwMeD/kq1dIPgdCOE3VSDHtE4JUs4zmtILg/nC6KaBgzXjX970xYYivzVks4euWQADa
         lO90RfeNp6TullZFr9iAm5vXL/6z7Ihlfg8y/XtCyp4U5qWC3X75hxRMxGkME2ndXSTn
         HfLDHt92ew6o18+Ht4Tek8bvQWh2N+fa5+JDSf/+mdfR6e2Uq+pk5PFeDT+DkVytt5Rb
         musg==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=N7+X8C2xvctqats8In2MEizKCmhW7oGccQhxaGVYYMQ=;
        b=uCq1Lr4eEUC6pH+JBvcmtKdfzuJ7vUTelT+Xuno7Mx0iiMH/KtaQ3GcJNckpr8HT7s
         HM54xZHiZrR0/uk1q3dDdb6yaAp7DPdGIKXUmpJIfB3Rh7G7xhKNeZXFwXVTJ00f1BV9
         XCYC+ZiEP/WIHBCotQ4jh7VAupmsMHSHroFHqsGK/k2gNEsIjDnMic5r/HWH4n/wGo5h
         W4m8Y+KvdFclIiqltJA9gLnrP01ldsrgMX6fwj0QyzVhl3QiA5vhNi+8+mFx5v18XCXT
         EzaoyVl7cwZ0hO7qKoi1aIWJetPzXYIkuKY5kb9kMybFa9DvFcMDheowO4X8dCbFmWJN
         MmIg==
X-Gm-Message-State: AOAM532Ow4wBusdE2pYSfczL+ttB/aWywkU0wn4Lndk5KsOuAU8mjysA
	0hdWld6F+mvGrk3WjjOichM=
X-Google-Smtp-Source: ABdhPJwq2lgstXGT117iNMqQIPLAUzLBXpYXE32leEdMzHtS6k6jpdCmJdRCvRkfB+UaPhbQDgK8HQ==
X-Received: by 2002:a17:902:bb92:b029:f4:4a28:3ed0 with SMTP id m18-20020a170902bb92b02900f44a283ed0mr16822273pls.48.1622387198908;
        Sun, 30 May 2021 08:06:38 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 01/11] x86/HV: Initialize GHCB page in Isolation VM
Date: Sun, 30 May 2021 11:06:18 -0400
Message-Id: <20210530150628.2063957-2-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V exposes the GHCB page via the SEV-ES GHCB MSR so that an SNP
guest can communicate with the hypervisor. Map the GHCB page for all
CPUs so they can read/write MSR registers and submit hvcall requests
via the GHCB.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 arch/x86/hyperv/hv_init.c       | 60 ++++++++++++++++++++++++++++++---
 arch/x86/include/asm/mshyperv.h |  2 ++
 include/asm-generic/mshyperv.h  |  2 ++
 3 files changed, 60 insertions(+), 4 deletions(-)

diff --git a/arch/x86/hyperv/hv_init.c b/arch/x86/hyperv/hv_init.c
index bb0ae4b5c00f..dc74d01cb859 100644
--- a/arch/x86/hyperv/hv_init.c
+++ b/arch/x86/hyperv/hv_init.c
@@ -60,6 +60,9 @@ static int hv_cpu_init(unsigned int cpu)
 	struct hv_vp_assist_page **hvp = &hv_vp_assist_page[smp_processor_id()];
 	void **input_arg;
 	struct page *pg;
+	u64 ghcb_gpa;
+	void *ghcb_va;
+	void **ghcb_base;
 
 	/* hv_cpu_init() can be called with IRQs disabled from hv_resume() */
 	pg = alloc_pages(irqs_disabled() ? GFP_ATOMIC : GFP_KERNEL, hv_root_partition ? 1 : 0);
@@ -106,6 +109,17 @@ static int hv_cpu_init(unsigned int cpu)
 		wrmsrl(HV_X64_MSR_VP_ASSIST_PAGE, val);
 	}
 
+	if (ms_hyperv.ghcb_base) {
+		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
+
+		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
+		if (!ghcb_va)
+			return -ENOMEM;
+
+		ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+		*ghcb_base = ghcb_va;
+	}
+
 	return 0;
 }
 
@@ -201,6 +215,7 @@ static int hv_cpu_die(unsigned int cpu)
 	unsigned long flags;
 	void **input_arg;
 	void *pg;
+	void **ghcb_va = NULL;
 
 	local_irq_save(flags);
 	input_arg = (void **)this_cpu_ptr(hyperv_pcpu_input_arg);
@@ -214,6 +229,13 @@ static int hv_cpu_die(unsigned int cpu)
 		*output_arg = NULL;
 	}
 
+	if (ms_hyperv.ghcb_base) {
+		ghcb_va = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+		if (*ghcb_va)
+			iounmap(*ghcb_va);
+		*ghcb_va = NULL;
+	}
+
 	local_irq_restore(flags);
 
 	free_pages((unsigned long)pg, hv_root_partition ? 1 : 0);
@@ -369,6 +391,9 @@ void __init hyperv_init(void)
 	u64 guest_id, required_msrs;
 	union hv_x64_msr_hypercall_contents hypercall_msr;
 	int cpuhp, i;
+	u64 ghcb_gpa;
+	void *ghcb_va;
+	void **ghcb_base;
 
 	if (x86_hyper_type != X86_HYPER_MS_HYPERV)
 		return;
@@ -429,9 +454,24 @@ void __init hyperv_init(void)
 			VMALLOC_END, GFP_KERNEL, PAGE_KERNEL_ROX,
 			VM_FLUSH_RESET_PERMS, NUMA_NO_NODE,
 			__builtin_return_address(0));
-	if (hv_hypercall_pg == NULL) {
-		wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
-		goto remove_cpuhp_state;
+	if (hv_hypercall_pg == NULL)
+		goto clean_guest_os_id;
+
+	if (hv_isolation_type_snp()) {
+		ms_hyperv.ghcb_base = alloc_percpu(void *);
+		if (!ms_hyperv.ghcb_base)
+			goto clean_guest_os_id;
+
+		rdmsrl(MSR_AMD64_SEV_ES_GHCB, ghcb_gpa);
+		ghcb_va = ioremap_cache(ghcb_gpa, HV_HYP_PAGE_SIZE);
+		if (!ghcb_va) {
+			free_percpu(ms_hyperv.ghcb_base);
+			ms_hyperv.ghcb_base = NULL;
+			goto clean_guest_os_id;
+		}
+
+		ghcb_base = (void **)this_cpu_ptr(ms_hyperv.ghcb_base);
+		*ghcb_base = ghcb_va;
 	}
 
 	rdmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
@@ -502,7 +542,8 @@ void __init hyperv_init(void)
 	hv_query_ext_cap(0);
 	return;
 
-remove_cpuhp_state:
+clean_guest_os_id:
+	wrmsrl(HV_X64_MSR_GUEST_OS_ID, 0);
 	cpuhp_remove_state(cpuhp);
 free_vp_assist_page:
 	kfree(hv_vp_assist_page);
@@ -531,6 +572,9 @@ void hyperv_cleanup(void)
 	 */
 	hv_hypercall_pg = NULL;
 
+	if (ms_hyperv.ghcb_base)
+		free_percpu(ms_hyperv.ghcb_base);
+
 	/* Reset the hypercall page */
 	hypercall_msr.as_uint64 = 0;
 	wrmsrl(HV_X64_MSR_HYPERCALL, hypercall_msr.as_uint64);
@@ -615,6 +659,14 @@ bool hv_is_isolation_supported(void)
 }
 EXPORT_SYMBOL_GPL(hv_is_isolation_supported);
 
+DEFINE_STATIC_KEY_FALSE(isolation_type_snp);
+
+bool hv_isolation_type_snp(void)
+{
+	return static_branch_unlikely(&isolation_type_snp);
+}
+EXPORT_SYMBOL_GPL(hv_isolation_type_snp);
+
 /* Bit mask of the extended capability to query: see HV_EXT_CAPABILITY_xxx */
 bool hv_query_ext_cap(u64 cap_query)
 {
diff --git a/arch/x86/include/asm/mshyperv.h b/arch/x86/include/asm/mshyperv.h
index 67ff0d637e55..aeacca7c4da8 100644
--- a/arch/x86/include/asm/mshyperv.h
+++ b/arch/x86/include/asm/mshyperv.h
@@ -11,6 +11,8 @@
 #include <asm/paravirt.h>
 #include <asm/mshyperv.h>
 
+DECLARE_STATIC_KEY_FALSE(isolation_type_snp);
+
 typedef int (*hyperv_fill_flush_list_func)(
 		struct hv_guest_mapping_flush_list *flush,
 		void *data);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 9a000ba2bb75..3ae56a29594f 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -35,6 +35,7 @@ struct ms_hyperv_info {
 	u32 max_lp_index;
 	u32 isolation_config_a;
 	u32 isolation_config_b;
+	void  __percpu **ghcb_base;
 };
 extern struct ms_hyperv_info ms_hyperv;
 
@@ -224,6 +225,7 @@ bool hv_is_hyperv_initialized(void);
 bool hv_is_hibernation_supported(void);
 enum hv_isolation_type hv_get_isolation_type(void);
 bool hv_is_isolation_supported(void);
+bool hv_isolation_type_snp(void);
 void hyperv_cleanup(void);
 bool hv_query_ext_cap(u64 cap_query);
 #else /* CONFIG_HYPERV */
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:58 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:58 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134294.249943 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE4-0005bV-SL; Sun, 30 May 2021 17:27:48 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134294.249943; Sun, 30 May 2021 17:27:48 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE4-0005ZH-L1; Sun, 30 May 2021 17:27:48 +0000
Received: by outflank-mailman (input) for mailman id 134294;
 Sun, 30 May 2021 15:06:49 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN1d-0000PC-0Q
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:06:49 +0000
Received: from mail-pj1-x102e.google.com (unknown [2607:f8b0:4864:20::102e])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 715a7924-f9c2-431e-a015-efffa53ebaf7;
 Sun, 30 May 2021 15:06:41 +0000 (UTC)
Received: by mail-pj1-x102e.google.com with SMTP id k5so5198069pjj.1
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:41 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.39
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:39 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 715a7924-f9c2-431e-a015-efffa53ebaf7
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=TDI4PXumIZgxBePNd+ivE0VBcCC0VeGNDBRFqyds1eM=;
        b=UAse3Bpl0FAVPqqhfAnA0zSsTcxrZbreVrQAzodtanCUm4szgJIEyR5+ecqtW5IpAX
         wvxV8auZOSKfuO8TZ4M8rO/crEbVTyzAM6eyp3NoWTLuD78xXlOdpNL1GTvOv05hTOvw
         IJKNjFYyz12ZXEp+aAdaQoUfunooYgNvY+tYgiYWzR7jphXnzg+LZ0lYvuQrvoitNAPI
         3dFo+NHKOib8S6vhPQHgJFZuloR8beErGmtrAztyIobPte+3TQ3S9eDUgC3gZiqCjgrA
         2cjMlhmbzneoHRsMOP9QRH/YU9YL8UrYLz+ibUTPFCQMKMdfS1dZjF7iQIxOaa2us5Nb
         DDlA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=TDI4PXumIZgxBePNd+ivE0VBcCC0VeGNDBRFqyds1eM=;
        b=NFXKSE1A9WyATGna6I4J8ZtmQHef2NiMzsJyAuTKidFy5e11/6PSD40hmoHI293QbK
         TcyK/9vBWQqNyJ4JyFmOEPVpGNz1hDCrzpwRsFjffIdHWBenupEACjo9uYWIOIHelH+S
         nnJbg5jCGItedIR9S5JFNquxISDMyO8iWnuRX3kLacu4RwLcfvAI7XSoPFJEm0Ba+GSU
         m0qN+ZDZG1VxiEof+gdiYi081awFvK5fLlirQuGND8YEWUlEtmbvrFPSd8hakbM2A+Vs
         ONUM2NjzmW9q5p5NljIAF3CBOGGAJbWgLJpVD8iNcW8A4hLY79We3E5rX+IYb9w4BnHN
         BKdg==
X-Gm-Message-State: AOAM53101X46OeU2MBrv8EDhqq6yCe7XZTsO+xPELCFCnfuhJncLMhfn
	MAdNFK2qIhE8YPOIL7zQHAk=
X-Google-Smtp-Source: ABdhPJw6VTPU9WI3kHLEFrQaIMTpdWgtR9lc2XZhC9GhjxtOkcyL5lpx/QGe3K50jPAfDu6vcR2b5Q==
X-Received: by 2002:a17:90a:80c5:: with SMTP id k5mr6957940pjw.129.1622387200366;
        Sun, 30 May 2021 08:06:40 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 02/11] x86/HV: Initialize shared memory boundary in the Isolation VM.
Date: Sun, 30 May 2021 11:06:19 -0400
Message-Id: <20210530150628.2063957-3-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

Hyper-V also exposes the shared memory boundary via the
HYPERV_CPUID_ISOLATION_CONFIG cpuid leaf; store it in the
shared_gpa_boundary field of the ms_hyperv struct. This prepares
for sharing memory with the host in SNP guests.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 arch/x86/kernel/cpu/mshyperv.c |  2 ++
 include/asm-generic/mshyperv.h | 12 +++++++++++-
 2 files changed, 13 insertions(+), 1 deletion(-)

diff --git a/arch/x86/kernel/cpu/mshyperv.c b/arch/x86/kernel/cpu/mshyperv.c
index 22f13343b5da..d1ca36224657 100644
--- a/arch/x86/kernel/cpu/mshyperv.c
+++ b/arch/x86/kernel/cpu/mshyperv.c
@@ -320,6 +320,8 @@ static void __init ms_hyperv_init_platform(void)
 	if (ms_hyperv.priv_high & HV_ISOLATION) {
 		ms_hyperv.isolation_config_a = cpuid_eax(HYPERV_CPUID_ISOLATION_CONFIG);
 		ms_hyperv.isolation_config_b = cpuid_ebx(HYPERV_CPUID_ISOLATION_CONFIG);
+		ms_hyperv.shared_gpa_boundary =
+			(u64)1 << ms_hyperv.shared_gpa_boundary_bits;
 
 		pr_info("Hyper-V: Isolation Config: Group A 0x%x, Group B 0x%x\n",
 			ms_hyperv.isolation_config_a, ms_hyperv.isolation_config_b);
diff --git a/include/asm-generic/mshyperv.h b/include/asm-generic/mshyperv.h
index 3ae56a29594f..2914e27b0429 100644
--- a/include/asm-generic/mshyperv.h
+++ b/include/asm-generic/mshyperv.h
@@ -34,8 +34,18 @@ struct ms_hyperv_info {
 	u32 max_vp_index;
 	u32 max_lp_index;
 	u32 isolation_config_a;
-	u32 isolation_config_b;
+	union {
+		u32 isolation_config_b;
+		struct {
+			u32 cvm_type : 4;
+			u32 Reserved11 : 1;
+			u32 shared_gpa_boundary_active : 1;
+			u32 shared_gpa_boundary_bits : 6;
+			u32 Reserved12 : 20;
+		};
+	};
 	void  __percpu **ghcb_base;
+	u64 shared_gpa_boundary;
 };
 extern struct ms_hyperv_info ms_hyperv;
 
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 17:27:59 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 17:27:59 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134310.250011 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPEA-00074j-BZ; Sun, 30 May 2021 17:27:54 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134310.250011; Sun, 30 May 2021 17:27:54 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnPE9-00070q-Ji; Sun, 30 May 2021 17:27:53 +0000
Received: by outflank-mailman (input) for mailman id 134310;
 Sun, 30 May 2021 15:13:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GvOc=KZ=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnN2M-0000PC-1b
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 15:07:34 +0000
Received: from mail-pg1-x531.google.com (unknown [2607:f8b0:4864:20::531])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id cf0b8c41-fb1f-455e-ad3b-9c9f328fc9cd;
 Sun, 30 May 2021 15:06:54 +0000 (UTC)
Received: by mail-pg1-x531.google.com with SMTP id 29so6414704pgu.11
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 08:06:54 -0700 (PDT)
Received: from ubuntu-Virtual-Machine.corp.microsoft.com
 ([2001:4898:80e8:9:dc2d:80ab:c3f3:1524])
 by smtp.gmail.com with ESMTPSA id b15sm8679688pfi.100.2021.05.30.08.06.52
 (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
 Sun, 30 May 2021 08:06:53 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: cf0b8c41-fb1f-455e-ad3b-9c9f328fc9cd
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=from:to:cc:subject:date:message-id:in-reply-to:references
         :mime-version:content-transfer-encoding;
        bh=M1xVjAHigQsVBryVftpCCjvg81+wOTUQfr50wSa0z0o=;
        b=sNrdoZzEG12p5juMA3jBGdhwekq2aogyv4cWLt3Q9FFTt7/sBUDijWvM58X66976BO
         kJv1GfqQauUiQ6UDwB4KHSHKbqJdsTij6qKiS0I1JyaL90iuEOJ7P6aw8mucrerUw4kZ
         ynS7ThiT7E4/MDPzHnf4KZGFeGtrUHktekJ3Mt5GvomdItXJaCyBS0do306K1hhxXMvs
         vsFRVkYt5+hsjyFrhhoRTY51WzHW2c3gzdOCyN9MeS3XIsh08U9XZ1UHO+ZUZlEe0ZZh
         UW0FBTRMxa8o7owQlLms0NBS25OrbKF7Oojybmwzo1E4d4fQhPlIDfgLBDz6BGXt3erV
         1vmA==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:from:to:cc:subject:date:message-id:in-reply-to
         :references:mime-version:content-transfer-encoding;
        bh=M1xVjAHigQsVBryVftpCCjvg81+wOTUQfr50wSa0z0o=;
        b=abzUiziIt+zvbmZG4ET23op6nwaH/O8QIGGtMfUpPCq7XslgMeu5102GFsxO+MBbTE
         bzpjzTupkGw8r0uN7OmC3fi+g4bRALtWdd2UK9tQvh7ioasDxNL8AHbK6VpXyBK0/Drj
         1ZLUnymTOTZMI2F8ZrnrKznDXFiHYhmGK53hYimpXysLBOhP0m7NK7h8UusmX0LjpZVj
         N2MpjrOxrvBqUiH0n4W+0FyfzI4lJ82J4GmOZvz0Ph7MfBMdoLncKi7rvPRMab95p6Kz
         r7jLa36ZigaqTSU4clIdISqjnUtrjz4ZUcISdnwNH4b3opgJVRsRD/lZMI7yR/XpZIFl
         f+6g==
X-Gm-Message-State: AOAM533fZP4349E1T0OT30m013VkqgY93Hn/G30IuB2uVijq+ufS+VHb
	PjHmbLMuvOBSUT/nnAsH+II=
X-Google-Smtp-Source: ABdhPJziJtspWVCmOHJqaVbfIhz+FLD7bwtV5SoxX2/+91foGt0Wvcd+JVLZUDMoLpg47iIDkrNTHQ==
X-Received: by 2002:a05:6a00:1c6b:b029:2e2:caff:13fa with SMTP id s43-20020a056a001c6bb02902e2caff13famr12601991pfw.59.1622387213561;
        Sun, 30 May 2021 08:06:53 -0700 (PDT)
From: Tianyu Lan <ltykernel@gmail.com>
To: kys@microsoft.com,
	haiyangz@microsoft.com,
	sthemmin@microsoft.com,
	wei.liu@kernel.org,
	decui@microsoft.com,
	tglx@linutronix.de,
	mingo@redhat.com,
	bp@alien8.de,
	x86@kernel.org,
	hpa@zytor.com,
	arnd@arndb.de,
	dave.hansen@linux.intel.com,
	luto@kernel.org,
	peterz@infradead.org,
	akpm@linux-foundation.org,
	kirill.shutemov@linux.intel.com,
	rppt@kernel.org,
	hannes@cmpxchg.org,
	cai@lca.pw,
	krish.sadhukhan@oracle.com,
	saravanand@fb.com,
	Tianyu.Lan@microsoft.com,
	konrad.wilk@oracle.com,
	hch@lst.de,
	m.szyprowski@samsung.com,
	robin.murphy@arm.com,
	boris.ostrovsky@oracle.com,
	jgross@suse.com,
	sstabellini@kernel.org,
	joro@8bytes.org,
	will@kernel.org,
	xen-devel@lists.xenproject.org,
	davem@davemloft.net,
	kuba@kernel.org,
	jejb@linux.ibm.com,
	martin.petersen@oracle.com
Cc: iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org,
	linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org,
	linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org,
	vkuznets@redhat.com,
	thomas.lendacky@amd.com,
	brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: [RFC PATCH V3 11/11] HV/Storvsc: Add Isolation VM support for storvsc driver
Date: Sun, 30 May 2021 11:06:28 -0400
Message-Id: <20210530150628.2063957-12-ltykernel@gmail.com>
X-Mailer: git-send-email 2.25.1
In-Reply-To: <20210530150628.2063957-1-ltykernel@gmail.com>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Tianyu Lan <Tianyu.Lan@microsoft.com>

In an Isolation VM, all memory shared with the host must be marked
visible to the host via a hypercall. vmbus_establish_gpadl() already
does this for the storvsc rx/tx ring buffers, but the page buffers
used by vmbus_sendpacket_mpb_desc() still need to be handled. Use the
DMA API to map/unmap this memory when sending/receiving packets; the
Hyper-V DMA ops callbacks will use swiotlb to allocate bounce buffers
and copy data to/from them.

Signed-off-by: Tianyu Lan <Tianyu.Lan@microsoft.com>
---
 drivers/scsi/storvsc_drv.c | 63 +++++++++++++++++++++++++++++++++++---
 1 file changed, 58 insertions(+), 5 deletions(-)

diff --git a/drivers/scsi/storvsc_drv.c b/drivers/scsi/storvsc_drv.c
index 403753929320..32da419c134e 100644
--- a/drivers/scsi/storvsc_drv.c
+++ b/drivers/scsi/storvsc_drv.c
@@ -21,6 +21,8 @@
 #include <linux/device.h>
 #include <linux/hyperv.h>
 #include <linux/blkdev.h>
+#include <linux/io.h>
+#include <linux/dma-mapping.h>
 #include <scsi/scsi.h>
 #include <scsi/scsi_cmnd.h>
 #include <scsi/scsi_host.h>
@@ -427,6 +429,8 @@ struct storvsc_cmd_request {
 	u32 payload_sz;
 
 	struct vstor_packet vstor_packet;
+	u32 hvpg_count;
+	struct hv_dma_range *dma_range;
 };
 
 
@@ -1267,6 +1271,7 @@ static void storvsc_on_channel_callback(void *context)
 	struct hv_device *device;
 	struct storvsc_device *stor_device;
 	struct Scsi_Host *shost;
+	int i;
 
 	if (channel->primary_channel != NULL)
 		device = channel->primary_channel->device_obj;
@@ -1321,6 +1326,17 @@ static void storvsc_on_channel_callback(void *context)
 				request = (struct storvsc_cmd_request *)scsi_cmd_priv(scmnd);
 			}
 
+			if (request->dma_range) {
+				for (i = 0; i < request->hvpg_count; i++)
+					dma_unmap_page(&device->device,
+							request->dma_range[i].dma,
+							request->dma_range[i].mapping_size,
+							request->vstor_packet.vm_srb.data_in
+							     == READ_TYPE ?
+							DMA_FROM_DEVICE : DMA_TO_DEVICE);
+				kfree(request->dma_range);
+			}
+
 			storvsc_on_receive(stor_device, packet, request);
 			continue;
 		}
@@ -1817,7 +1833,9 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 		unsigned int hvpgoff, hvpfns_to_add;
 		unsigned long offset_in_hvpg = offset_in_hvpage(sgl->offset);
 		unsigned int hvpg_count = HVPFN_UP(offset_in_hvpg + length);
+		dma_addr_t dma;
 		u64 hvpfn;
+		u32 size;
 
 		if (hvpg_count > MAX_PAGE_BUFFER_COUNT) {
 
@@ -1831,6 +1849,13 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 		payload->range.len = length;
 		payload->range.offset = offset_in_hvpg;
 
+		cmd_request->dma_range = kcalloc(hvpg_count,
+				 sizeof(*cmd_request->dma_range),
+				 GFP_ATOMIC);
+		if (!cmd_request->dma_range) {
+			ret = -ENOMEM;
+			goto free_payload;
+		}
 
 		for (i = 0; sgl != NULL; sgl = sg_next(sgl)) {
 			/*
@@ -1854,9 +1879,30 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 			 * last sgl should be reached at the same time that
 			 * the PFN array is filled.
 			 */
-			while (hvpfns_to_add--)
-				payload->range.pfn_array[i++] =	hvpfn++;
+			while (hvpfns_to_add--) {
+				size = min(HV_HYP_PAGE_SIZE - offset_in_hvpg,
+					   (unsigned long)length);
+				dma = dma_map_page(&dev->device,
+							 pfn_to_page(hvpfn++),
+							 offset_in_hvpg, size,
+							 scmnd->sc_data_direction);
+				if (dma_mapping_error(&dev->device, dma)) {
+					ret = -ENOMEM;
+					goto free_dma_range;
+				}
+
+				if (offset_in_hvpg) {
+					payload->range.offset = dma & ~HV_HYP_PAGE_MASK;
+					offset_in_hvpg = 0;
+				}
+
+				cmd_request->dma_range[i].dma = dma;
+				cmd_request->dma_range[i].mapping_size = size;
+				payload->range.pfn_array[i++] = dma >> HV_HYP_PAGE_SHIFT;
+				length -= size;
+			}
 		}
+		cmd_request->hvpg_count = hvpg_count;
 	}
 
 	cmd_request->payload = payload;
@@ -1867,13 +1913,20 @@ static int storvsc_queuecommand(struct Scsi_Host *host, struct scsi_cmnd *scmnd)
 	put_cpu();
 
 	if (ret == -EAGAIN) {
-		if (payload_sz > sizeof(cmd_request->mpb))
-			kfree(payload);
 		/* no more space */
-		return SCSI_MLQUEUE_DEVICE_BUSY;
+		ret = SCSI_MLQUEUE_DEVICE_BUSY;
+		goto free_dma_range;
 	}
 
 	return 0;
+
+free_dma_range:
+	kfree(cmd_request->dma_range);
+
+free_payload:
+	if (payload_sz > sizeof(cmd_request->mpb))
+		kfree(payload);
+	return ret;
 }
 
 static struct scsi_host_template scsi_driver = {
-- 
2.25.1



From xen-devel-bounces@lists.xenproject.org Sun May 30 18:26:16 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 18:26:16 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134402.250063 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnQ8Y-0001ST-2t; Sun, 30 May 2021 18:26:10 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134402.250063; Sun, 30 May 2021 18:26:10 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnQ8X-0001SM-Va; Sun, 30 May 2021 18:26:09 +0000
Received: by outflank-mailman (input) for mailman id 134402;
 Sun, 30 May 2021 18:26:08 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=v38d=KZ=alien8.de=bp@srs-us1.protection.inumbo.net>)
 id 1lnQ8M-0001SE-Kr
 for xen-devel@lists.xenproject.org; Sun, 30 May 2021 18:26:08 +0000
Received: from mail.skyhub.de (unknown [2a01:4f8:190:11c2::b:1457])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 6586ffd5-35ff-402a-8630-b6535a8ef4d5;
 Sun, 30 May 2021 18:25:55 +0000 (UTC)
Received: from zn.tnic (p200300ec2f2460006cdc9875a311c8ed.dip0.t-ipconnect.de
 [IPv6:2003:ec:2f24:6000:6cdc:9875:a311:c8ed])
 (using TLSv1.2 with cipher ECDHE-RSA-AES256-GCM-SHA384 (256/256 bits))
 (No client certificate requested)
 by mail.skyhub.de (SuperMail on ZX Spectrum 128k) with ESMTPSA id 297BA1EC04DA;
 Sun, 30 May 2021 20:25:54 +0200 (CEST)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 6586ffd5-35ff-402a-8630-b6535a8ef4d5
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=alien8.de; s=dkim;
	t=1622399154;
	h=from:from:reply-to:subject:subject:date:date:message-id:message-id:
	 to:to:cc:cc:mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:in-reply-to:in-reply-to:  references:references;
	bh=ZTX0veBS1A2m8n8rpJ5g64mOUcWUApPggEQNAUBCm9A=;
	b=IAw0F2TyYVY+rP1eI5AmqzSeMaa7HppyT4ap29b/IT/iNKMfdosPs+xMPG3f3mF2O4Wki1
	qhqGsUCN8l5tGn5bKA48frRbVNMVM2kOQP558FnSnfXnQJ2b7GfkbAAMJMi5biEwUyaXxK
	zI/Js2FGqWivkGhb4JVQazJsFli3sWY=
Date: Sun, 30 May 2021 20:25:47 +0200
From: Borislav Petkov <bp@alien8.de>
To: Tianyu Lan <ltykernel@gmail.com>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
	wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
	mingo@redhat.com, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	akpm@linux-foundation.org, kirill.shutemov@linux.intel.com,
	rppt@kernel.org, hannes@cmpxchg.org, cai@lca.pw,
	krish.sadhukhan@oracle.com, saravanand@fb.com,
	Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com, hch@lst.de,
	m.szyprowski@samsung.com, robin.murphy@arm.com,
	boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
	joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
	davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
	martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
	linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
	netdev@vger.kernel.org, vkuznets@redhat.com,
	thomas.lendacky@amd.com, brijesh.singh@amd.com,
	sunilmut@microsoft.com
Subject: Re: [RFC PATCH V3 03/11] x86/Hyper-V: Add new hvcall guest address
 host visibility support
Message-ID: <YLPYqxF7urDIAd9z@zn.tnic>
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-4-ltykernel@gmail.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8
Content-Disposition: inline
In-Reply-To: <20210530150628.2063957-4-ltykernel@gmail.com>

On Sun, May 30, 2021 at 11:06:20AM -0400, Tianyu Lan wrote:
> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
> index 156cd235659f..a82975600107 100644
> --- a/arch/x86/mm/pat/set_memory.c
> +++ b/arch/x86/mm/pat/set_memory.c
> @@ -29,6 +29,8 @@
>  #include <asm/proto.h>
>  #include <asm/memtype.h>
>  #include <asm/set_memory.h>
> +#include <asm/hyperv-tlfs.h>
> +#include <asm/mshyperv.h>
>  
>  #include "../mm_internal.h"
>  
> @@ -1986,8 +1988,14 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
>  	int ret;
>  
>  	/* Nothing to do if memory encryption is not active */
> -	if (!mem_encrypt_active())
> +	if (hv_is_isolation_supported()) {
> +		return hv_set_mem_host_visibility((void *)addr,
> +				numpages * HV_HYP_PAGE_SIZE,
> +				enc ? VMBUS_PAGE_NOT_VISIBLE
> +				: VMBUS_PAGE_VISIBLE_READ_WRITE);

Put all this gunk in a hv-specific function somewhere in hv-land which
you only call from here. This way you probably won't even need to export
hv_set_mem_host_visibility() and so on...

Thx.

-- 
Regards/Gruss,
    Boris.

https://people.kernel.org/tglx/notes-about-netiquette


From xen-devel-bounces@lists.xenproject.org Sun May 30 19:29:49 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Sun, 30 May 2021 19:29:49 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134409.250074 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnR7x-0007Rd-TB; Sun, 30 May 2021 19:29:37 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134409.250074; Sun, 30 May 2021 19:29:37 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnR7x-0007RN-P0; Sun, 30 May 2021 19:29:37 +0000
Received: by outflank-mailman (input) for mailman id 134409;
 Sun, 30 May 2021 19:29:36 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnR7w-0007RD-0h; Sun, 30 May 2021 19:29:36 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnR7v-0001Wf-J8; Sun, 30 May 2021 19:29:35 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnR7v-0008PK-4V; Sun, 30 May 2021 19:29:35 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnR7v-0000Sd-40; Sun, 30 May 2021 19:29:35 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=OMN2QN8T6UouoT2YTRx1mP6mjweAgXJnAzqMua4Z0xI=; b=MHIOsY8+ftPWt6570kc9QO/6ct
	0wOuZ4Gzph2MO/ubo7ZhgNFsEKBOCmhwNuV1nQznXmloDeiMeQronaM71nzs9oWZnnjXRpf0o+4eU
	9o1FHbJjfqRUiEF+ymCrHIVf2MYYAwFiUfTTnlPLASv/IZ3bovo3PTqZhMekb7R2uWAw=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162264-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162264: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=62c0ac5041e9130b041adfa13a41583d3c3ddd24
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 19:29:35 +0000

flight 162264 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162264/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                62c0ac5041e9130b041adfa13a41583d3c3ddd24
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  283 days
Failing since        152659  2020-08-21 14:07:39 Z  282 days  521 attempts
Testing same since   162257  2021-05-29 12:55:13 Z    1 days    3 attempts

------------------------------------------------------------
515 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 163638 lines long.)


From xen-devel-bounces@lists.xenproject.org Sun May 30 23:54:58 2021
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162266-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162266: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:guest-start:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:guest-start:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=b90e90f40b4ff23c753126008bf4713a42353af6
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Sun, 30 May 2021 23:54:35 +0000

flight 162266 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162266/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 14 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                b90e90f40b4ff23c753126008bf4713a42353af6
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  303 days
Failing since        152366  2020-08-01 20:49:34 Z  302 days  514 attempts
Testing same since   162266  2021-05-30 16:12:07 Z    0 days    1 attempts

------------------------------------------------------------
6126 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1665404 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 31 03:27:39 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 03:27:39 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134429.250102 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnYaH-0006yZ-Ph; Mon, 31 May 2021 03:27:21 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134429.250102; Mon, 31 May 2021 03:27:21 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnYaH-0006y7-IR; Mon, 31 May 2021 03:27:21 +0000
Received: by outflank-mailman (input) for mailman id 134429;
 Mon, 31 May 2021 03:27:19 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnYaF-0006xx-JN; Mon, 31 May 2021 03:27:19 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnYaF-0007iY-9u; Mon, 31 May 2021 03:27:19 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnYaE-0002AQ-U6; Mon, 31 May 2021 03:27:19 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnYaE-0000Df-S0; Mon, 31 May 2021 03:27:18 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=Cr1lFLwE9/0bx7iSrvz+P84TkbvCtnjAdqrPimHBG+o=; b=KlkMFch6+Pql3153JoBuCS3zGo
	0ntNxXShAiA3f9Hz6etsxUIzEUagjBdDm0bfteBrOz3LZcGBwRiguVS8ZsklYdnpCTWG1h08YzqnG
	6tVQFprNu/teZM9eozj33oo/+E7shWFIoZE/40XnKPiL1B5yFxYQAOn2Wb0j0ORTa57E=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162267-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162267: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=f9dc72de91d2915b808e82da34bf613afa5cce43
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 03:27:18 +0000

flight 162267 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162267/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                f9dc72de91d2915b808e82da34bf613afa5cce43
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  283 days
Failing since        152659  2020-08-21 14:07:39 Z  282 days  522 attempts
Testing same since   162267  2021-05-30 19:39:50 Z    0 days    1 attempts

------------------------------------------------------------
518 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 164188 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 31 04:08:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 04:08:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134439.250116 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnZEE-0002qR-3M; Mon, 31 May 2021 04:08:38 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134439.250116; Mon, 31 May 2021 04:08:38 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnZED-0002qK-WB; Mon, 31 May 2021 04:08:37 +0000
Received: by outflank-mailman (input) for mailman id 134439;
 Mon, 31 May 2021 04:08:37 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=GHRc=K2=gmail.com=ltykernel@srs-us1.protection.inumbo.net>)
 id 1lnZED-0002qC-0a
 for xen-devel@lists.xenproject.org; Mon, 31 May 2021 04:08:37 +0000
Received: from mail-pg1-x534.google.com (unknown [2607:f8b0:4864:20::534])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 85774ad6-c798-40a0-8f16-738fcf3b111b;
 Mon, 31 May 2021 04:08:36 +0000 (UTC)
Received: by mail-pg1-x534.google.com with SMTP id v14so7385840pgi.6
 for <xen-devel@lists.xenproject.org>; Sun, 30 May 2021 21:08:36 -0700 (PDT)
Received: from ?IPv6:2404:f801:0:5:8000::4b1? ([2404:f801:9000:1a:efea::4b1])
 by smtp.gmail.com with ESMTPSA id
 v4sm4275222pfn.41.2021.05.30.21.08.23
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Sun, 30 May 2021 21:08:35 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 85774ad6-c798-40a0-8f16-738fcf3b111b
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=gmail.com; s=20161025;
        h=subject:to:cc:references:from:message-id:date:user-agent
         :mime-version:in-reply-to:content-language:content-transfer-encoding;
        bh=B5E+Z+YC3ztWK1Gz5JJ/ZjFYfV7x7WXsq6pvj4QOEV8=;
        b=qpIh0hZZdMUbO/xaXOluESyDyKh1wuXDzVMU8DbvilQdbzUAKKvtHMi2tZHdZZl6FD
         pzHWR1qJ9SA0OUooLhgYA/FHAGzMTwF85gH2zMJXOjHY8akNBlPm2ZAFPL/BsX6AECn0
         g+snaripSV02Ww2TMRpmrTPml1udHoN6L4hr2lbSznbW+9RR1UtQlYNf3WxE5hsOWUtQ
         ktECgJ1y4SLarJMnSz0M8Q7rpMoLOq1xtTjLcSnDW6jj68DsuSfsO/MEvpsp4a3+Pm5b
         T6ij0KxeYE/ElhOhht4tRwlDS7A1Bc2B1m97gM084MLnflqwYjE/MK2P0UCAVk17tm4z
         0i8w==
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:subject:to:cc:references:from:message-id:date
         :user-agent:mime-version:in-reply-to:content-language
         :content-transfer-encoding;
        bh=B5E+Z+YC3ztWK1Gz5JJ/ZjFYfV7x7WXsq6pvj4QOEV8=;
        b=Z27qPc3OW7/POFyHrYdBF2wZtGJH9rA/9fcXrlff6NfCB20C2hxuZWyYrEYifrqfm+
         RGQWbjo1jyskLXfYF0LCsskfxnOAXJa5pRlTaz6U1fflvuvp8Ae6CnuCxz9JwzfZNTLi
         8iXu8c22I8+sikjHOhWHJ+U57XzJkDwiPsk6C6LF1ZFEy6aqiROYeZOCZPut+rwILEXr
         nfNUMx4JhVYJaJZtC+cfKlg2YK9v/xPkPlaGR5iqqVrvEJx/3lwovYvr6I+eXRTph4RO
         KbclmYJjx9vCw3HvQs1ihd/uQl+Dm0b0NY9wE75Et8fFCZRuo1M/DEKTgCqvKo8P1ilG
         It6g==
X-Gm-Message-State: AOAM5323Umc+bR0RwLhVnsPURV6woSuSEYwKqV2+nyG+4aeNevWcRIGc
	uO+V3WCB2s8UUtYAa8jcII4=
X-Google-Smtp-Source: ABdhPJx+HANXtJX9yvLkBX/HVsORb2hbS4daFpETnK5c9NyvtRnVmWON27YgYVQ2Kp8imni+2oGhjA==
X-Received: by 2002:a63:5a43:: with SMTP id k3mr21331103pgm.308.1622434115432;
        Sun, 30 May 2021 21:08:35 -0700 (PDT)
Subject: Re: [RFC PATCH V3 03/11] x86/Hyper-V: Add new hvcall guest address
 host visibility support
To: Borislav Petkov <bp@alien8.de>
Cc: kys@microsoft.com, haiyangz@microsoft.com, sthemmin@microsoft.com,
 wei.liu@kernel.org, decui@microsoft.com, tglx@linutronix.de,
 mingo@redhat.com, x86@kernel.org, hpa@zytor.com, arnd@arndb.de,
 dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
 akpm@linux-foundation.org, kirill.shutemov@linux.intel.com, rppt@kernel.org,
 hannes@cmpxchg.org, cai@lca.pw, krish.sadhukhan@oracle.com,
 saravanand@fb.com, Tianyu.Lan@microsoft.com, konrad.wilk@oracle.com,
 hch@lst.de, m.szyprowski@samsung.com, robin.murphy@arm.com,
 boris.ostrovsky@oracle.com, jgross@suse.com, sstabellini@kernel.org,
 joro@8bytes.org, will@kernel.org, xen-devel@lists.xenproject.org,
 davem@davemloft.net, kuba@kernel.org, jejb@linux.ibm.com,
 martin.petersen@oracle.com, iommu@lists.linux-foundation.org,
 linux-arch@vger.kernel.org, linux-hyperv@vger.kernel.org,
 linux-kernel@vger.kernel.org, linux-scsi@vger.kernel.org,
 netdev@vger.kernel.org, vkuznets@redhat.com, thomas.lendacky@amd.com,
 brijesh.singh@amd.com, sunilmut@microsoft.com
References: <20210530150628.2063957-1-ltykernel@gmail.com>
 <20210530150628.2063957-4-ltykernel@gmail.com> <YLPYqxF7urDIAd9z@zn.tnic>
From: Tianyu Lan <ltykernel@gmail.com>
Message-ID: <3716c9e0-2508-3553-5a70-e4b3f5f4c82e@gmail.com>
Date: Mon, 31 May 2021 12:08:21 +0800
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YLPYqxF7urDIAd9z@zn.tnic>
Content-Type: text/plain; charset=utf-8; format=flowed
Content-Language: en-US
Content-Transfer-Encoding: 7bit

Hi Borislav:
	Thanks for your review.

On 5/31/2021 2:25 AM, Borislav Petkov wrote:
> On Sun, May 30, 2021 at 11:06:20AM -0400, Tianyu Lan wrote:
>> diff --git a/arch/x86/mm/pat/set_memory.c b/arch/x86/mm/pat/set_memory.c
>> index 156cd235659f..a82975600107 100644
>> --- a/arch/x86/mm/pat/set_memory.c
>> +++ b/arch/x86/mm/pat/set_memory.c
>> @@ -29,6 +29,8 @@
>>   #include <asm/proto.h>
>>   #include <asm/memtype.h>
>>   #include <asm/set_memory.h>
>> +#include <asm/hyperv-tlfs.h>
>> +#include <asm/mshyperv.h>
>>   
>>   #include "../mm_internal.h"
>>   
>> @@ -1986,8 +1988,14 @@ static int __set_memory_enc_dec(unsigned long addr, int numpages, bool enc)
>>   	int ret;
>>   
>>   	/* Nothing to do if memory encryption is not active */
>> -	if (!mem_encrypt_active())
>> +	if (hv_is_isolation_supported()) {
>> +		return hv_set_mem_host_visibility((void *)addr,
>> +				numpages * HV_HYP_PAGE_SIZE,
>> +				enc ? VMBUS_PAGE_NOT_VISIBLE
>> +				: VMBUS_PAGE_VISIBLE_READ_WRITE);
> 
> Put all this gunk in a hv-specific function somewhere in hv-land which
> you only call from here. This way you probably won't even need to export
> hv_set_mem_host_visibility() and so on...
> 

Good idea. Will update. Thanks.




From xen-devel-bounces@lists.xenproject.org Mon May 31 06:46:21 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 06:46:21 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134449.250127 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnbgY-00018V-JM; Mon, 31 May 2021 06:46:02 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134449.250127; Mon, 31 May 2021 06:46:02 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnbgY-00018O-GH; Mon, 31 May 2021 06:46:02 +0000
Received: by outflank-mailman (input) for mailman id 134449;
 Mon, 31 May 2021 06:46:00 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LUpO=K2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lnbgW-00018I-Ka
 for xen-devel@lists.xenproject.org; Mon, 31 May 2021 06:46:00 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 7c24a3ac-4b8d-46bf-914e-8420d47e461a;
 Mon, 31 May 2021 06:45:58 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DAD601FD2F;
 Mon, 31 May 2021 06:45:57 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id AF015118DD;
 Mon, 31 May 2021 06:45:57 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id 9qCUKSWGtGAUBAAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Mon, 31 May 2021 06:45:57 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 7c24a3ac-4b8d-46bf-914e-8420d47e461a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622443557; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rFERr/SKsSWC0NTHViDSjyKZ4DXOlLCMrJuF9hKQ508=;
	b=akBQMsIXsGpFdzPUpFnNPYyiUxJCmu7Hfbi2Cz+dHnB8UIYTytWG2lhKDqyR5rhuOSilBj
	a0iqPGNQbX9uUyqKQ5cFQZTGEt9hYAKAK/OFHldMrNQgg77GHzf3sVpaxdI3V2b7i20eRq
	nBTOhxA6LXsGUOlxHMDAl1b2GSNvR2E=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622443557; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=rFERr/SKsSWC0NTHViDSjyKZ4DXOlLCMrJuF9hKQ508=;
	b=akBQMsIXsGpFdzPUpFnNPYyiUxJCmu7Hfbi2Cz+dHnB8UIYTytWG2lhKDqyR5rhuOSilBj
	a0iqPGNQbX9uUyqKQ5cFQZTGEt9hYAKAK/OFHldMrNQgg77GHzf3sVpaxdI3V2b7i20eRq
	nBTOhxA6LXsGUOlxHMDAl1b2GSNvR2E=
Subject: Re: Invalid _Static_assert expanded from HASH_CALLBACKS_CHECK
To: Tim Deegan <tim@xen.org>
Cc: Roberto Bagnara <roberto.bagnara@bugseng.com>,
 xen-devel@lists.xenproject.org
References: <ccb37c2e-a3a6-a2e4-bf15-da81f97c94be@bugseng.com>
 <38898d21-fe76-36dc-f1e6-497b52c5c0b7@suse.com>
 <YLEP73On6EBjv3Ks@deinos.phlegethon.org>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <11b5b29e-0baf-9f50-6d9e-985e791148b2@suse.com>
Date: Mon, 31 May 2021 08:45:56 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <YLEP73On6EBjv3Ks@deinos.phlegethon.org>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Authentication-Results: imap.suse.de;
	none
X-Spam-Level: 
X-Spam-Score: 0.00
X-Spamd-Result: default: False [0.00 / 100.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[3];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 DKIM_SIGNED(0.00)[suse.com:s=susede1];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MIME_TRACE(0.00)[0:+];
	 RCVD_COUNT_TWO(0.00)[2];
	 MID_RHS_MATCH_FROM(0.00)[]
X-Spam-Flag: NO

On 28.05.2021 17:44, Tim Deegan wrote:
> Hi,
> 
> At 10:58 +0200 on 25 May (1621940330), Jan Beulich wrote:
>> On 24.05.2021 06:29, Roberto Bagnara wrote:
>>> I stumbled upon parsing errors due to invalid uses of
>>> _Static_assert expanded from HASH_CALLBACKS_CHECK where
>>> the tested expression is not constant, as mandated by
>>> the C standard.
>>>
>>> Judging from the following comment, there is partial awareness
>>> of the fact this is an issue:
>>>
>>> #ifndef __clang__ /* At least some versions dislike some of the uses. */
>>> #define HASH_CALLBACKS_CHECK(mask) \
>>>      BUILD_BUG_ON((mask) > (1U << ARRAY_SIZE(callbacks)) - 1)
>>>
>>> Indeed, this is not a fault of Clang: the point is that some
>>> of the expansions of this macro are not C.  Moreover,
>>> the fact that GCC sometimes accepts them is not
>>> something we can rely upon:
> 
> Well, that is unfortunate - especially since the older ad-hoc
> compile-time assertion macros handled this kind of thing pretty well.
> Why when I were a lad &c &c. :)

So I have to admit I don't understand: The commit introducing
HASH_CALLBACKS_CHECK() (90629587e16e "x86/shadow: replace stale
literal numbers in hash_{vcpu,domain}_foreach()") did not replace
any prior compile-time checking. Hence I wonder what you're
referring to (and thus what alternative ways of dealing with the
situation I might presently be overlooking).

>>> Finally, I think this can be easily avoided: instead
>>> of initializing a static const with a constant expression
>>> and then static-asserting the static const, just static-assert
>>> the constant initializer.
>>
>> Well, yes, but the whole point of constructs like
>>
>>     HASH_CALLBACKS_CHECK(callback_mask);
>>     hash_domain_foreach(d, callback_mask, callbacks, gmfn);
>>
>> is to make very obvious that the checked mask and the used mask
>> match. Hence if anything I'd see us eliminate the static const
>> callback_mask variables altogether.
> 
> That seems like a good approach.

Okay, I'll make a patch then.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 31 06:48:40 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 06:48:40 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134455.250138 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnbj6-0001kk-2F; Mon, 31 May 2021 06:48:40 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134455.250138; Mon, 31 May 2021 06:48:40 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnbj5-0001kd-VA; Mon, 31 May 2021 06:48:39 +0000
Received: by outflank-mailman (input) for mailman id 134455;
 Mon, 31 May 2021 06:48:39 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LUpO=K2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lnbj5-0001kU-B8
 for xen-devel@lists.xenproject.org; Mon, 31 May 2021 06:48:39 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id bc9d7e05-6c68-445f-a1b2-0e2c00d8dbe2;
 Mon, 31 May 2021 06:48:38 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id DA6841FD2E;
 Mon, 31 May 2021 06:48:37 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id 2B8B0118DD;
 Mon, 31 May 2021 06:48:37 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id FlftCMWGtGAtBQAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Mon, 31 May 2021 06:48:37 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: bc9d7e05-6c68-445f-a1b2-0e2c00d8dbe2
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622443717; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FzItFjU0DW/KZKPmp5D37yuIYUfQma0ENnM1MHwo2z4=;
	b=UXoHatDlwOoB5iZUYn+khS2+GyBkMcuU29TmNhHj29MuUl6qLHZV5zZUWTMpNqGsvKfIdF
	jwLxQjTrSYkGOdFMQk1Jkg+BsXE2QzkHIqb1BBpR1lbQEbHIGu04bCMMUeSfEvFCbA36Wv
	0gcCYg6+YScVNltbgS3q1Wpj6D5duO0=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622443717; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=FzItFjU0DW/KZKPmp5D37yuIYUfQma0ENnM1MHwo2z4=;
	b=UXoHatDlwOoB5iZUYn+khS2+GyBkMcuU29TmNhHj29MuUl6qLHZV5zZUWTMpNqGsvKfIdF
	jwLxQjTrSYkGOdFMQk1Jkg+BsXE2QzkHIqb1BBpR1lbQEbHIGu04bCMMUeSfEvFCbA36Wv
	0gcCYg6+YScVNltbgS3q1Wpj6D5duO0=
Subject: Re: [PATCH 1/3] x86/mtrr: remove stale function prototype
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 xen-devel@lists.xenproject.org
References: <20210528173935.29919-1-roger.pau@citrix.com>
 <20210528173935.29919-2-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <095c2f58-712d-5f99-d716-7d70290326fa@suse.com>
Date: Mon, 31 May 2021 08:48:35 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210528173935.29919-2-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Authentication-Results: imap.suse.de;
	none
X-Spam-Level: 
X-Spam-Score: 0.00
X-Spamd-Result: default: False [0.00 / 100.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 RCPT_COUNT_THREE(0.00)[4];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 DKIM_SIGNED(0.00)[suse.com:s=susede1];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MIME_TRACE(0.00)[0:+];
	 RCVD_COUNT_TWO(0.00)[2];
	 MID_RHS_MATCH_FROM(0.00)[]
X-Spam-Flag: NO

On 28.05.2021 19:39, Roger Pau Monne wrote:
> Fixes: 1c84d04673 ('VMX: remove the problematic set_uc_mode logic')
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Acked-by: Jan Beulich <jbeulich@suse.com>


From xen-devel-bounces@lists.xenproject.org Mon May 31 07:00:12 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 07:00:12 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134463.250148 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnbuC-000435-4d; Mon, 31 May 2021 07:00:08 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134463.250148; Mon, 31 May 2021 07:00:08 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnbuC-00042y-1V; Mon, 31 May 2021 07:00:08 +0000
Received: by outflank-mailman (input) for mailman id 134463;
 Mon, 31 May 2021 07:00:06 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LUpO=K2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lnbuA-0003yt-Cd
 for xen-devel@lists.xenproject.org; Mon, 31 May 2021 07:00:06 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 4dcd6c52-21ce-4b46-b782-bfbe5b96b61d;
 Mon, 31 May 2021 07:00:05 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 389CE1FD2E;
 Mon, 31 May 2021 07:00:04 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id D989F118DD;
 Mon, 31 May 2021 07:00:02 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id uknuMnKJtGDDCgAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Mon, 31 May 2021 07:00:02 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 4dcd6c52-21ce-4b46-b782-bfbe5b96b61d
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622444404; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vVpU7SBVsS5vWuSVPztB+SALCWr1e/9I2kyLjmaAvjc=;
	b=F/0NvfdyiSpc5+bjlr7g7wQK5Tf+RQjb932quydP7hbIdFfE50JxWZ2GxI7oiDmlJHfCtR
	VuanfQzE0j9U++GgdOq/swm0DZfELBXdcHWy+sIwaYi0Hk4DjDyGyAqkS8q+bNwQjhbWUw
	LNv/7e2da+QgpLk5FEpt28cSSZK6NtY=
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622444403; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=vVpU7SBVsS5vWuSVPztB+SALCWr1e/9I2kyLjmaAvjc=;
	b=s4XEIBzZ1Z0sAxw8Q32W07M/Zi6caKYtqiTAIvSEdTDmDLWRSp7MbQHojrL7jsZ5gAlA8u
	gE0yNKd56WTilETXXTlET4OAW39QYi2Bbo1VzTVh6d8xy+0+uFiaSyp2C6yJKG7cT99EEj
	tMCGK87LqcQpVqPmwtkLrfTIndfccKE=
Subject: Re: [PATCH 2/3] x86/mtrr: move epte_get_entry_emt to p2m-ept.c
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20210528173935.29919-1-roger.pau@citrix.com>
 <20210528173935.29919-3-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <b4b369e4-d1d9-98b6-de29-b6a6c6c6454f@suse.com>
Date: Mon, 31 May 2021 09:00:00 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210528173935.29919-3-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 8bit
Authentication-Results: imap.suse.de;
	none
X-Spam-Level: 
X-Spam-Score: 0.00
X-Spamd-Result: default: False [0.00 / 100.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 DBL_FAIL(0.00)[citrix.com:server fail];
	 DKIM_SIGNED(0.00)[suse.com:s=susede1];
	 RCPT_COUNT_SEVEN(0.00)[7];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MIME_TRACE(0.00)[0:+];
	 RCVD_COUNT_TWO(0.00)[2];
	 MID_RHS_MATCH_FROM(0.00)[]
X-Spam-Flag: NO

On 28.05.2021 19:39, Roger Pau Monne wrote:
> This is an EPT specific function, so it shouldn't live in the generic
> mtrr file. Such movement is also needed for future work that will
> require passing a p2m_type_t parameter to epte_get_entry_emt, and
> making that type visible to the mtrr users is cumbersome and
> unneeded.
> 
> Moving epte_get_entry_emt out of mtrr.c requires making public the
> helper that gets the MTRR type of an address from the mtrr state.
> While there, rename the function to start with the mtrr prefix, like
> other mtrr-related functions.
> 
> While there, also fix some of the types of the function parameters.
> 
> No functional change intended.
> 
> Signed-off-by: Roger Pau Monné <roger.pau@citrix.com>

Reviewed-by: Jan Beulich <jbeulich@suse.com>

> --- a/xen/arch/x86/hvm/vmx/vmx.c
> +++ b/xen/arch/x86/hvm/vmx/vmx.c
> @@ -417,12 +417,12 @@ static int vmx_domain_initialise(struct domain *d)
>  static void domain_creation_finished(struct domain *d)
>  {
>      gfn_t gfn = gaddr_to_gfn(APIC_DEFAULT_PHYS_BASE);
> -    uint8_t ipat;
> +    bool ipat;
>  
>      if ( !has_vlapic(d) || mfn_eq(apic_access_mfn, INVALID_MFN) )
>          return;
>  
> -    ASSERT(epte_get_entry_emt(d, gfn_x(gfn), apic_access_mfn, 0, &ipat,
> +    ASSERT(epte_get_entry_emt(d, gfn, apic_access_mfn, 0, &ipat,
>                                true) == MTRR_TYPE_WRBACK);
>      ASSERT(ipat);

This being the only reason for the function to not be static in
p2m-ept.c, I wonder whether it's really worthwhile to keep these
assertions. But that's certainly not directly related to the change
at hand.

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 31 07:20:22 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 07:20:22 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134470.250160 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lncDe-0006PC-RI; Mon, 31 May 2021 07:20:14 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134470.250160; Mon, 31 May 2021 07:20:14 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lncDe-0006P5-O8; Mon, 31 May 2021 07:20:14 +0000
Received: by outflank-mailman (input) for mailman id 134470;
 Mon, 31 May 2021 07:20:13 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lncDd-0006Ov-5l; Mon, 31 May 2021 07:20:13 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lncDc-0003by-S6; Mon, 31 May 2021 07:20:12 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lncDc-0005zt-IC; Mon, 31 May 2021 07:20:12 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lncDc-00028I-Hh; Mon, 31 May 2021 07:20:12 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=z/if+E/yiAJ+LWNC/Th9AZVa9B/rxlANenkVE9XmBa8=; b=VqDzYVycZCUSzI9qM8WtOFBK3s
	C7hGNMAN1c7GoGjdCJ0/e1Qe5Xw6cxshmQN3x9TxH60YyEPrsU5ZnisctL8ShuPh4C8fLecqdwTAU
	dL4oTLJvFU7ocwxgF46JOZSRKQo24rERi/NO/fT1inyZuBTWnds0vCgTHnD5AxzUTpkU=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162268-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162268: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-amd64-examine:memdisk-try-append:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8124c8a6b35386f73523d27eacb71b5364a68c4c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 07:20:12 +0000

flight 162268 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162268/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-amd64-examine      4 memdisk-try-append       fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                8124c8a6b35386f73523d27eacb71b5364a68c4c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  303 days
Failing since        152366  2020-08-01 20:49:34 Z  302 days  515 attempts
Testing same since   162268  2021-05-31 00:10:38 Z    0 days    1 attempts

------------------------------------------------------------
6126 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1665410 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 31 07:21:32 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 07:21:32 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134479.250174 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lncEs-00072m-DS; Mon, 31 May 2021 07:21:30 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134479.250174; Mon, 31 May 2021 07:21:30 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lncEs-00072f-AX; Mon, 31 May 2021 07:21:30 +0000
Received: by outflank-mailman (input) for mailman id 134479;
 Mon, 31 May 2021 07:21:29 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <SRS0=LUpO=K2=suse.com=jbeulich@srs-us1.protection.inumbo.net>)
 id 1lncEr-00072Z-TB
 for xen-devel@lists.xenproject.org; Mon, 31 May 2021 07:21:29 +0000
Received: from smtp-out2.suse.de (unknown [195.135.220.29])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 2e354651-da27-4db1-a19a-8011f6e29a20;
 Mon, 31 May 2021 07:21:28 +0000 (UTC)
Received: from imap.suse.de (imap-alt.suse-dmz.suse.de [192.168.254.47])
 (using TLSv1.2 with cipher ECDHE-ECDSA-AES128-GCM-SHA256 (128/128 bits))
 (No client certificate requested)
 by smtp-out2.suse.de (Postfix) with ESMTPS id 500891FD2F;
 Mon, 31 May 2021 07:21:27 +0000 (UTC)
Received: from imap3-int (imap-alt.suse-dmz.suse.de [192.168.254.47])
 by imap.suse.de (Postfix) with ESMTP id CFC5F118DD;
 Mon, 31 May 2021 07:21:26 +0000 (UTC)
Received: from director2.suse.de ([192.168.254.72]) by imap3-int with ESMTPSA
 id Q7WDMXaOtGBuFAAALh3uQQ
 (envelope-from <jbeulich@suse.com>); Mon, 31 May 2021 07:21:26 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 2e354651-da27-4db1-a19a-8011f6e29a20
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed; d=suse.com; s=susede1;
	t=1622445687; h=from:from:reply-to:date:date:message-id:message-id:to:to:cc:cc:
	 mime-version:mime-version:content-type:content-type:
	 content-transfer-encoding:content-transfer-encoding:
	 in-reply-to:in-reply-to:references:references;
	bh=QnUkhMnbahQ+gLAChDjtDb3gGewkxqS5oqBPL8xt238=;
	b=a7p1KFDMbAjZUfoeYF/svLrSDYr8ladOlnSPFs53UBsUZUPOx/5+0tVMoMcvAl7566EyQj
	MpwBn45PRiAcMzQ1Hvibf46DBigBycgmrUzXv195/6GaY3OV4UuywuXn1n/A3Oal5m/iqP
	7FxIKp/59lXPSDehOiensAVCeOJkGAo=
Subject: Re: [PATCH 3/3] x86/ept: force WB cache attributes for grant and
 foreign maps
To: Roger Pau Monne <roger.pau@citrix.com>
Cc: Jun Nakajima <jun.nakajima@intel.com>, Kevin Tian <kevin.tian@intel.com>,
 Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
 George Dunlap <george.dunlap@citrix.com>, xen-devel@lists.xenproject.org
References: <20210528173935.29919-1-roger.pau@citrix.com>
 <20210528173935.29919-4-roger.pau@citrix.com>
From: Jan Beulich <jbeulich@suse.com>
Message-ID: <c3aeb303-760b-fe6a-d51e-6271eaf37d80@suse.com>
Date: Mon, 31 May 2021 09:21:25 +0200
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:78.0) Gecko/20100101
 Thunderbird/78.10.2
MIME-Version: 1.0
In-Reply-To: <20210528173935.29919-4-roger.pau@citrix.com>
Content-Type: text/plain; charset=utf-8
Content-Language: en-US
Content-Transfer-Encoding: 7bit
Authentication-Results: imap.suse.de;
	none
X-Spam-Level: 
X-Spam-Score: 0.00
X-Spamd-Result: default: False [0.00 / 100.00];
	 ARC_NA(0.00)[];
	 RCVD_VIA_SMTP_AUTH(0.00)[];
	 FROM_HAS_DN(0.00)[];
	 TO_DN_SOME(0.00)[];
	 TO_MATCH_ENVRCPT_ALL(0.00)[];
	 MIME_GOOD(-0.10)[text/plain];
	 DKIM_SIGNED(0.00)[suse.com:s=susede1];
	 RCPT_COUNT_SEVEN(0.00)[7];
	 RCVD_NO_TLS_LAST(0.10)[];
	 FROM_EQ_ENVFROM(0.00)[];
	 MIME_TRACE(0.00)[0:+];
	 RCVD_COUNT_TWO(0.00)[2];
	 MID_RHS_MATCH_FROM(0.00)[]
X-Spam-Flag: NO

On 28.05.2021 19:39, Roger Pau Monne wrote:
> --- a/xen/arch/x86/mm/p2m-ept.c
> +++ b/xen/arch/x86/mm/p2m-ept.c
> @@ -487,11 +487,12 @@ static int ept_invalidate_emt_range(struct p2m_domain *p2m,
>  }
>  
>  int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
> -                       unsigned int order, bool *ipat, bool direct_mmio)
> +                       unsigned int order, bool *ipat, p2m_type_t type)
>  {
>      int gmtrr_mtype, hmtrr_mtype;
>      struct vcpu *v = current;
>      unsigned long i;
> +    bool direct_mmio = type == p2m_mmio_direct;

I don't think this variable is worth retaining/introducing:

> @@ -535,9 +536,33 @@ int epte_get_entry_emt(struct domain *d, gfn_t gfn, mfn_t mfn,
>          }
>      }
>  
> -    if ( direct_mmio )

With this gone, there's exactly one further use left. Preferably
with this adjustment (which I'd be fine to make while committing, as
long as you and/or the maintainers agree):
Reviewed-by: Jan Beulich <jbeulich@suse.com>

> +    switch ( type )
> +    {
> +    case p2m_mmio_direct:
>          return MTRR_TYPE_UNCACHABLE;

As a largely unrelated note: we really want to find a way to return
WC here for e.g. the frame buffer of graphics cards, especially since
hvm_get_mem_pinned_cacheattr() gets invoked only below from here
(unlike at the initial introduction of the function, where it was
called ahead of the direct_mmio check, but still after the
mfn_valid() check, so the results were inconsistent anyway). Perhaps
we should obtain the host MTRR setting for the page (or range) in
question.

As to hvm_get_mem_pinned_cacheattr(): XEN_DOMCTL_pin_mem_cacheattr
is documented as intended for use on RAM only anyway ...

Jan


From xen-devel-bounces@lists.xenproject.org Mon May 31 08:12:17 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 08:12:17 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134489.250185 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnd1p-00048W-5o; Mon, 31 May 2021 08:12:05 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134489.250185; Mon, 31 May 2021 08:12:05 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnd1p-00048P-2N; Mon, 31 May 2021 08:12:05 +0000
Received: by outflank-mailman (input) for mailman id 134489;
 Mon, 31 May 2021 08:12:04 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnd1o-00048F-3N; Mon, 31 May 2021 08:12:04 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnd1n-000553-PX; Mon, 31 May 2021 08:12:03 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnd1n-00086z-Ff; Mon, 31 May 2021 08:12:03 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnd1n-00085k-F9; Mon, 31 May 2021 08:12:03 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=UUwH7b6EwRlvKL1PrRED2/g9ipx7XuXKQLO1eB3xJ1k=; b=O1GMD2Zr8EfSDkwM4koX5pT1P1
	TWZyghvXSD74zdeZcgy6QTTTYIw/EgAyprs/yUkV8cjgZf6xyENzH8FKMhRAeBBWwMKxEF8AtzzVU
	0zrTSyULhmLmNzG3GJc3AoRlRgkl0OktFkk4+u24WOhp14naxNJCWOB96HjefsDYUjRE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162272-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [libvirt test] 162272: regressions - FAIL
X-Osstest-Failures:
    libvirt:build-armhf-libvirt:libvirt-build:fail:regression
    libvirt:build-amd64-libvirt:libvirt-build:fail:regression
    libvirt:build-arm64-libvirt:libvirt-build:fail:regression
    libvirt:build-i386-libvirt:libvirt-build:fail:regression
    libvirt:test-amd64-amd64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-vhd:build-check(1):blocked:nonblocking
    libvirt:test-amd64-amd64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-pair:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:build-check(1):blocked:nonblocking
    libvirt:test-amd64-i386-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-qcow2:build-check(1):blocked:nonblocking
    libvirt:test-arm64-arm64-libvirt-xsm:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt:build-check(1):blocked:nonblocking
    libvirt:test-armhf-armhf-libvirt-raw:build-check(1):blocked:nonblocking
X-Osstest-Versions-This:
    libvirt=70f53b1c04cfed8529c87c7be8ca4c76d1123a30
X-Osstest-Versions-That:
    libvirt=2c846fa6bcc11929c9fb857a22430fb9945654ad
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 08:12:03 +0000

flight 162272 libvirt real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162272/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 build-armhf-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-amd64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-arm64-libvirt           6 libvirt-build            fail REGR. vs. 151777
 build-i386-libvirt            6 libvirt-build            fail REGR. vs. 151777

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt      1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-amd64-libvirt-vhd  1 build-check(1)               blocked  n/a
 test-amd64-amd64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt       1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-pair  1 build-check(1)               blocked  n/a
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 1 build-check(1) blocked n/a
 test-amd64-i386-libvirt-xsm   1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt      1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-qcow2  1 build-check(1)               blocked  n/a
 test-arm64-arm64-libvirt-xsm  1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt      1 build-check(1)               blocked  n/a
 test-armhf-armhf-libvirt-raw  1 build-check(1)               blocked  n/a

version targeted for testing:
 libvirt              70f53b1c04cfed8529c87c7be8ca4c76d1123a30
baseline version:
 libvirt              2c846fa6bcc11929c9fb857a22430fb9945654ad

Last test of basis   151777  2020-07-10 04:19:19 Z  325 days
Failing since        151818  2020-07-11 04:18:52 Z  324 days  317 attempts
Testing same since   162243  2021-05-28 04:20:06 Z    3 days    4 attempts

------------------------------------------------------------
People who touched revisions under test:
  Adolfo Jayme Barrientos <fitoschido@gmail.com>
  Aleksandr Alekseev <alexander.alekseev@virtuozzo.com>
  Aleksei Zakharov <zaharov@selectel.ru>
  Andika Triwidada <andika@gmail.com>
  Andrea Bolognani <abologna@redhat.com>
  Balázs Meskó <meskobalazs@mailbox.org>
  Barrett Schonefeld <bschoney@utexas.edu>
  Bastian Germann <bastiangermann@fishpost.de>
  Bastien Orivel <bastien.orivel@diateam.net>
  BiaoXiang Ye <yebiaoxiang@huawei.com>
  Bihong Yu <yubihong@huawei.com>
  Binfeng Wu <wubinfeng@huawei.com>
  Bjoern Walk <bwalk@linux.ibm.com>
  Boris Fiuczynski <fiuczy@linux.ibm.com>
  Brian Turek <brian.turek@gmail.com>
  Bruno Haible <bruno@clisp.org>
  Chris Mayo <aklhfex@gmail.com>
  Christian Ehrhardt <christian.ehrhardt@canonical.com>
  Christian Schoenebeck <qemu_oss@crudebyte.com>
  Cole Robinson <crobinso@redhat.com>
  Collin Walling <walling@linux.ibm.com>
  Cornelia Huck <cohuck@redhat.com>
  Cédric Bosdonnat <cbosdonnat@suse.com>
  Côme Borsoi <fedora@borsoi.fr>
  Daniel Henrique Barboza <danielhb413@gmail.com>
  Daniel Letai <dani@letai.org.il>
  Daniel P. Berrange <berrange@redhat.com>
  Daniel P. Berrangé <berrange@redhat.com>
  Dmytro Linkin <dlinkin@nvidia.com>
  Eiichi Tsukata <eiichi.tsukata@nutanix.com>
  Eric Farman <farman@linux.ibm.com>
  Erik Skultety <eskultet@redhat.com>
  Fabian Affolter <mail@fabian-affolter.ch>
  Fabian Freyer <fabian.freyer@physik.tu-berlin.de>
  Fangge Jin <fjin@redhat.com>
  Farhan Ali <alifm@linux.ibm.com>
  Fedora Weblate Translation <i18n@lists.fedoraproject.org>
  gongwei <gongwei@smartx.com>
  Guoyi Tu <tu.guoyi@h3c.com>
  Göran Uddeborg <goeran@uddeborg.se>
  Halil Pasic <pasic@linux.ibm.com>
  Han Han <hhan@redhat.com>
  Hao Wang <wanghao232@huawei.com>
  Hela Basa <r45xveza@pm.me>
  Helmut Grohne <helmut@subdivi.de>
  Ian Wienand <iwienand@redhat.com>
  Jakob Meng <jakobmeng@web.de>
  Jamie Strandboge <jamie@canonical.com>
  Jamie Strandboge <jamie@ubuntu.com>
  Jan Kuparinen <copper_fin@hotmail.com>
  Jean-Baptiste Holcroft <jean-baptiste@holcroft.fr>
  Jianan Gao <jgao@redhat.com>
  Jim Fehlig <jfehlig@suse.com>
  Jin Yan <jinyan12@huawei.com>
  Jiri Denemark <jdenemar@redhat.com>
  John Ferlan <jferlan@redhat.com>
  Jonathan Watt <jwatt@jwatt.org>
  Jonathon Jongsma <jjongsma@redhat.com>
  Julio Faracco <jcfaracco@gmail.com>
  Ján Tomko <jtomko@redhat.com>
  Kashyap Chamarthy <kchamart@redhat.com>
  Kevin Locke <kevin@kevinlocke.name>
  Kristina Hanicova <khanicov@redhat.com>
  Laine Stump <laine@redhat.com>
  Laszlo Ersek <lersek@redhat.com>
  Liao Pingfang <liao.pingfang@zte.com.cn>
  Lin Ma <lma@suse.com>
  Lin Ma <lma@suse.de>
  Lin Ma <morecache@gmail.com>
  Luke Yue <lukedyue@gmail.com>
  Luyao Zhong <luyao.zhong@intel.com>
  Marc Hartmayer <mhartmay@linux.ibm.com>
  Marc-André Lureau <marcandre.lureau@redhat.com>
  Marek Marczykowski-Górecki <marmarek@invisiblethingslab.com>
  Markus Schade <markus.schade@hetzner.com>
  Martin Kletzander <mkletzan@redhat.com>
  Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
  Matt Coleman <matt@datto.com>
  Matt Coleman <mcoleman@datto.com>
  Mauro Matteo Cascella <mcascell@redhat.com>
  Meina Li <meili@redhat.com>
  Michal Privoznik <mprivozn@redhat.com>
  Michał Smyk <fedora@smyk.it>
  Milo Casagrande <milo@milo.name>
  Moshe Levi <moshele@nvidia.com>
  Muha Aliss <muhaaliss@gmail.com>
  Neal Gompa <ngompa13@gmail.com>
  Nick Shyrokovskiy <nshyrokovskiy@gmail.com>
  Nickys Music Group <nickys.music.group@gmail.com>
  Nico Pache <npache@redhat.com>
  Nikolay Shirokovskiy <nshirokovskiy@virtuozzo.com>
  Olaf Hering <olaf@aepfle.de>
  Olesya Gerasimenko <gammaray@basealt.ru>
  Orion Poplawski <orion@nwra.com>
  Pany <geekpany@gmail.com>
  Patrick Magauran <patmagauran.j@gmail.com>
  Paulo de Rezende Pinatti <ppinatti@linux.ibm.com>
  Pavel Hrdina <phrdina@redhat.com>
  Peng Liang <liangpeng10@huawei.com>
  Peter Krempa <pkrempa@redhat.com>
  Pino Toscano <ptoscano@redhat.com>
  Pino Toscano <toscano.pino@tiscali.it>
  Piotr Drąg <piotrdrag@gmail.com>
  Prathamesh Chavan <pc44800@gmail.com>
  Ricky Tigg <ricky.tigg@gmail.com>
  Roman Bogorodskiy <bogorodskiy@gmail.com>
  Roman Bolshakov <r.bolshakov@yadro.com>
  Ryan Gahagan <rgahagan@cs.utexas.edu>
  Ryan Schmidt <git@ryandesign.com>
  Sam Hartman <hartmans@debian.org>
  Scott Shambarger <scott-libvirt@shambarger.net>
  Sebastian Mitterle <smitterl@redhat.com>
  SeongHyun Jo <caelus9536@gmail.com>
  Shalini Chellathurai Saroja <shalini@linux.ibm.com>
  Shaojun Yang <yangshaojun@phytium.com.cn>
  Shi Lei <shi_lei@massclouds.com>
  simmon <simmon@nplob.com>
  Simon Gaiser <simon@invisiblethingslab.com>
  Stefan Bader <stefan.bader@canonical.com>
  Stefan Berger <stefanb@linux.ibm.com>
  Stefan Berger <stefanb@linux.vnet.ibm.com>
  Szymon Scholz <szymonscholz@gmail.com>
  Thomas Huth <thuth@redhat.com>
  Tim Wiederhake <twiederh@redhat.com>
  Tomáš Golembiovský <tgolembi@redhat.com>
  Tomáš Janoušek <tomi@nomi.cz>
  Tuguoyi <tu.guoyi@h3c.com>
  Ville Skyttä <ville.skytta@iki.fi>
  Wang Xin <wangxinxin.wang@huawei.com>
  WangJian <wangjian161@huawei.com>
  Weblate <noreply@weblate.org>
  Yalei Li <274268859@qq.com>
  Yalei Li <liyl43@chinatelecom.cn>
  Yang Hang <yanghang44@huawei.com>
  Yanqiu Zhang <yanqzhan@redhat.com>
  Yaroslav Kargin <ykargin@virtuozzo.com>
  Yi Li <yili@winhong.com>
  Yi Wang <wang.yi59@zte.com.cn>
  Yuri Chornoivan <yurchor@ukr.net>
  Zheng Chuan <zhengchuan@huawei.com>
  zhenwei pi <pizhenwei@bytedance.com>
  Zhenyu Zheng <zheng.zhenyu@outlook.com>

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          fail    
 build-arm64-libvirt                                          fail    
 build-armhf-libvirt                                          fail    
 build-i386-libvirt                                           fail    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           blocked 
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            blocked 
 test-amd64-amd64-libvirt-xsm                                 blocked 
 test-arm64-arm64-libvirt-xsm                                 blocked 
 test-amd64-i386-libvirt-xsm                                  blocked 
 test-amd64-amd64-libvirt                                     blocked 
 test-arm64-arm64-libvirt                                     blocked 
 test-armhf-armhf-libvirt                                     blocked 
 test-amd64-i386-libvirt                                      blocked 
 test-amd64-amd64-libvirt-pair                                blocked 
 test-amd64-i386-libvirt-pair                                 blocked 
 test-arm64-arm64-libvirt-qcow2                               blocked 
 test-armhf-armhf-libvirt-raw                                 blocked 
 test-amd64-amd64-libvirt-vhd                                 blocked 


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 59708 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 31 09:59:33 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 09:59:33 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134497.250198 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnehZ-00058d-6l; Mon, 31 May 2021 09:59:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134497.250198; Mon, 31 May 2021 09:59:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnehZ-00058W-3m; Mon, 31 May 2021 09:59:17 +0000
Received: by outflank-mailman (input) for mailman id 134497;
 Mon, 31 May 2021 09:59:16 +0000
Received: from all-amaz-eas1.inumbo.com ([34.197.232.57]
 helo=us1-amaz-eas2.inumbo.com)
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=66J8=K2=linux.intel.com=lukasz.hawrylko@srs-us1.protection.inumbo.net>)
 id 1lnehY-00058Q-DU
 for xen-devel@lists.xenproject.org; Mon, 31 May 2021 09:59:16 +0000
Received: from mga07.intel.com (unknown [134.134.136.100])
 by us1-amaz-eas2.inumbo.com (Halon) with ESMTPS
 id 388a28c4-8426-43a3-b6e5-e69daeb303ee;
 Mon, 31 May 2021 09:59:15 +0000 (UTC)
Received: from orsmga001.jf.intel.com ([10.7.209.18])
 by orsmga105.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 31 May 2021 02:59:13 -0700
Received: from aagrawa3-mobl.ger.corp.intel.com ([10.252.54.84])
 by orsmga001-auth.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 31 May 2021 02:59:11 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 388a28c4-8426-43a3-b6e5-e69daeb303ee
IronPort-SDR: bpjS/XG5xE+WJww/JwIWLtusDSw734BZY39I5z6NFdWwxaK9GnX7yHGAYizJOGjjHrHAtphKUw
 rrFxSfN/SBzQ==
X-IronPort-AV: E=McAfee;i="6200,9189,10000"; a="267213160"
X-IronPort-AV: E=Sophos;i="5.83,236,1616482800"; 
   d="scan'208";a="267213160"
IronPort-SDR: 1S3jLxaYzrnNm1Xrk7ZRx7yrzCFd3o9GwY2zvfbBUHJsdCSYU2mRvPVE69B1guHHNM7pomJMKP
 VlCgRzNWqagg==
X-IronPort-AV: E=Sophos;i="5.83,236,1616482800"; 
   d="scan'208";a="478856957"
Message-ID: <c20ea146c3f9bd201f3b7275f185d2da10bac614.camel@linux.intel.com>
Subject: Re: [PATCH] x86/tboot: adjust UUID check
From: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>
Date: Mon, 31 May 2021 11:59:02 +0200
In-Reply-To: <422e39c9-0cba-0944-b813-dfe2578ad719@suse.com>
References: <422e39c9-0cba-0944-b813-dfe2578ad719@suse.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.38.3 (3.38.3-1.fc33) 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Wed, 2021-05-19 at 17:49 +0200, Jan Beulich wrote:
> Replace a bogus cast, move the static variable into the only function
> using it, and add __initconst. While there, also remove a pointless NULL
> check.
> 
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 

Reviewed-by: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>



From xen-devel-bounces@lists.xenproject.org Mon May 31 10:08:45 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 10:08:45 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134506.250210 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lneqg-0006fo-5C; Mon, 31 May 2021 10:08:42 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134506.250210; Mon, 31 May 2021 10:08:42 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lneqg-0006fh-11; Mon, 31 May 2021 10:08:42 +0000
Received: by outflank-mailman (input) for mailman id 134506;
 Mon, 31 May 2021 10:08:41 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=66J8=K2=linux.intel.com=lukasz.hawrylko@srs-us1.protection.inumbo.net>)
 id 1lneqe-0006fb-Vz
 for xen-devel@lists.xenproject.org; Mon, 31 May 2021 10:08:41 +0000
Received: from mga06.intel.com (unknown [134.134.136.31])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 52bfeb64-71de-4edd-a6a9-559eecf96ec9;
 Mon, 31 May 2021 10:08:39 +0000 (UTC)
Received: from fmsmga005.fm.intel.com ([10.253.24.32])
 by orsmga104.jf.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 31 May 2021 03:08:38 -0700
Received: from aagrawa3-mobl.ger.corp.intel.com ([10.252.54.84])
 by fmsmga005-auth.fm.intel.com with ESMTP/TLS/ECDHE-RSA-AES256-GCM-SHA384;
 31 May 2021 03:08:36 -0700
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 52bfeb64-71de-4edd-a6a9-559eecf96ec9
IronPort-SDR: cd4EUfr5fNdshyEuriAtjJXv/qJu0xQZBF7441QtMunJYoA+aawAJOQ2wxtcyij9uNVWvFhsgd
 m9hrKXy525+A==
X-IronPort-AV: E=McAfee;i="6200,9189,10000"; a="264524367"
X-IronPort-AV: E=Sophos;i="5.83,236,1616482800"; 
   d="scan'208";a="264524367"
IronPort-SDR: iQKKQqF+4cJqCigL/1uv8fc9RZGMx2Ws9bsQ6Ks7DRaDMih2zxMmBLdLuzXteoPyG9eYqt3516
 SavWstDbdyhQ==
X-IronPort-AV: E=Sophos;i="5.83,236,1616482800"; 
   d="scan'208";a="635118865"
Message-ID: <bfd679ad4bfcdad6990887e6e85b4767c013ef4d.camel@linux.intel.com>
Subject: Re: [PATCH] x86/tboot: include all valid frame table entries in S3
 integrity check
From: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>
To: Jan Beulich <jbeulich@suse.com>, "xen-devel@lists.xenproject.org"
	 <xen-devel@lists.xenproject.org>
Cc: Andrew Cooper <andrew.cooper3@citrix.com>, Wei Liu <wl@xen.org>,
	Roger Pau Monné <roger.pau@citrix.com>
Date: Mon, 31 May 2021 12:08:33 +0200
In-Reply-To: <e878fd86-2d82-ce3c-1dc4-d3a07025f1d4@suse.com>
References: <e878fd86-2d82-ce3c-1dc4-d3a07025f1d4@suse.com>
Content-Type: text/plain; charset="UTF-8"
User-Agent: Evolution 3.38.3 (3.38.3-1.fc33) 
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit

On Wed, 2021-05-19 at 17:48 +0200, Jan Beulich wrote:
> The difference of two pdx_to_page() return values is a number of pages,
> not the number of bytes covered by the corresponding frame table entries.
> 
> Fixes: 3cb68d2b59ab ("tboot: fix S3 issue for Intel Trusted Execution Technology.")
> Signed-off-by: Jan Beulich <jbeulich@suse.com>
> 

Reviewed-by: Lukasz Hawrylko <lukasz.hawrylko@linux.intel.com>




From xen-devel-bounces@lists.xenproject.org Mon May 31 11:55:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 11:55:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134518.250232 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lngVh-00006X-3c; Mon, 31 May 2021 11:55:09 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134518.250232; Mon, 31 May 2021 11:55:09 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lngVh-00006Q-0N; Mon, 31 May 2021 11:55:09 +0000
Received: by outflank-mailman (input) for mailman id 134518;
 Mon, 31 May 2021 11:55:07 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lngVf-00006G-IA; Mon, 31 May 2021 11:55:07 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lngVe-0000LQ-F1; Mon, 31 May 2021 11:55:06 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lngVe-0003ob-3O; Mon, 31 May 2021 11:55:06 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lngVe-0005og-2u; Mon, 31 May 2021 11:55:06 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=XeV0nmJCqxXsRLm7BQy3T9hfZyAIKncWZD9LLhq5xmc=; b=pTdndoJE6DiY26w7hRdQtCyC1w
	RcCNu8MVu8F8WqVyPRWHVgIwQnru+BHAj4in/N1VcOVT1odgN6ee3nn5vtuYzUDt+6dzJ736d9PKe
	mKlNIkFtJVN1uxD5pXZVafB+Zr2/c5BrsvxQ9v2k5mazbcZZq8xZDH7VTLv96L75xTSA=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162269-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable test] 162269: tolerable FAIL
X-Osstest-Failures:
    xen-unstable:test-arm64-arm64-xl-credit1:xen-boot:fail:heisenbug
    xen-unstable:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:guest-start/debian.repeat:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    xen-unstable:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    xen-unstable:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    xen-unstable:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    xen-unstable:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
X-Osstest-Versions-That:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 11:55:06 +0000

flight 162269 xen-unstable real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162269/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-arm64-arm64-xl-credit1   8 xen-boot                   fail pass in 162261

Tests which did not succeed, but are not blocking:
 test-arm64-arm64-xl-credit1 15 migrate-support-check fail in 162261 never pass
 test-arm64-arm64-xl-credit1 16 saverestore-support-check fail in 162261 never pass
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 162261
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162261
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162261
 test-amd64-i386-xl-qemut-ws16-amd64 19 guest-stop             fail like 162261
 test-armhf-armhf-xl-rtds     18 guest-start/debian.repeat    fail  like 162261
 test-amd64-i386-xl-qemut-win7-amd64 19 guest-stop             fail like 162261
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 162261
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162261
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 162261
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 162261
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162261
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162261
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass

version targeted for testing:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695
baseline version:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695

Last test of basis   162269  2021-05-31 01:52:43 Z    0 days
Testing same since                          (not found)         0 attempts

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64-xtf                                              pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-prev                                             pass    
 build-i386-prev                                              pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-xtf-amd64-amd64-1                                       pass    
 test-xtf-amd64-amd64-2                                       pass    
 test-xtf-amd64-amd64-3                                       pass    
 test-xtf-amd64-amd64-4                                       pass    
 test-xtf-amd64-amd64-5                                       pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         pass    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-i386-freebsd10-amd64                              pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      pass    
 test-amd64-i386-freebsd10-i386                               pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-livepatch                                   pass    
 test-amd64-i386-livepatch                                    pass    
 test-amd64-amd64-migrupgrade                                 pass    
 test-amd64-i386-migrupgrade                                  pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     fail    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Published tested tree is already up to date.



From xen-devel-bounces@lists.xenproject.org Mon May 31 12:36:24 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 12:36:24 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134528.250247 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnh9V-0004Kg-J2; Mon, 31 May 2021 12:36:17 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134528.250247; Mon, 31 May 2021 12:36:17 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnh9V-0004KZ-F4; Mon, 31 May 2021 12:36:17 +0000
Received: by outflank-mailman (input) for mailman id 134528;
 Mon, 31 May 2021 12:36:15 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnh9T-0004KP-Qj; Mon, 31 May 2021 12:36:15 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnh9T-00011x-LD; Mon, 31 May 2021 12:36:15 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnh9T-00064f-B8; Mon, 31 May 2021 12:36:15 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnh9T-000181-AZ; Mon, 31 May 2021 12:36:15 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=cLQ45jhAmpWmeA85MhRWWnkqrumUWouR2vBzZRuTB/g=; b=2H4jAMOe1PTANE4oKdRUpJ6oeY
	iu6lDekel1ozNqMs/WGCvmSPXsShwdMXJdKzJyOSLQfRSZWLoev4g1qBLf6V7Nt4zM+snuLnyyJSg
	DkjWOwr7yVelLSGqFl+an8POf9vSBgP+WpodQJYp/dovSCS0AjAliWCTZvRRTyP60cT0=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162271-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [ovmf test] 162271: all pass - PUSHED
X-Osstest-Versions-This:
    ovmf=fe5da0927aad98f3c005088197fa30c1b8f9d3e8
X-Osstest-Versions-That:
    ovmf=adfa3327d4fc25d5eff5fedcdb11ecde52a995cc
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 12:36:15 +0000

flight 162271 ovmf real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162271/

Perfect :-)
All tests in this flight passed as required
version targeted for testing:
 ovmf                 fe5da0927aad98f3c005088197fa30c1b8f9d3e8
baseline version:
 ovmf                 adfa3327d4fc25d5eff5fedcdb11ecde52a995cc

Last test of basis   162259  2021-05-29 19:42:34 Z    1 days
Testing same since   162271  2021-05-31 03:40:13 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jason Lou <yun.lou@intel.com>
  Lou, Yun <Yun.Lou@intel.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/ovmf.git
   adfa3327d4..fe5da0927a  fe5da0927aad98f3c005088197fa30c1b8f9d3e8 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 31 14:03:13 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 14:03:13 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134538.250260 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lniVQ-0004KP-2k; Mon, 31 May 2021 14:03:00 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134538.250260; Mon, 31 May 2021 14:03:00 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lniVP-0004KI-W2; Mon, 31 May 2021 14:02:59 +0000
Received: by outflank-mailman (input) for mailman id 134538;
 Mon, 31 May 2021 14:02:58 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lniVO-0004K8-K5; Mon, 31 May 2021 14:02:58 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lniVO-0002d2-Cx; Mon, 31 May 2021 14:02:58 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lniVO-0001Uj-3L; Mon, 31 May 2021 14:02:58 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lniVO-00080D-2r; Mon, 31 May 2021 14:02:58 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=j06xoxL0xQ1qS0BanDu89K9mQtzVnk8Y0+cu5JRq8X0=; b=YwWOdyvfKNiGFYr8mq3M/7BoKF
	ffz5RrrzhZymo+K16Zz5ADTPyj2sHajq9lS08C6S3FqpH0CQvIQdX1Plpk6CdjPfgqvzkKIfLB4XD
	YRds9qHEFU47/OQZhR9ov1x+h45ckxSa7BDS89I/T06xB4UTBpAfc8GnfwqkO35gie/k=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162275-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [xen-unstable-smoke test] 162275: tolerable all pass - PUSHED
X-Osstest-Failures:
    xen-unstable-smoke:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    xen-unstable-smoke:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    xen=57f68dfd2d111a2ad381df740543c901b41f2299
X-Osstest-Versions-That:
    xen=683d899e4bffca35c5b192ea0662362b0270a695
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 14:02:58 +0000

flight 162275 xen-unstable-smoke real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162275/

Failures :-/ but no regressions.

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass

version targeted for testing:
 xen                  57f68dfd2d111a2ad381df740543c901b41f2299
baseline version:
 xen                  683d899e4bffca35c5b192ea0662362b0270a695

Last test of basis   162245  2021-05-28 08:01:34 Z    3 days
Testing same since   162275  2021-05-31 11:01:38 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Jan Beulich <jbeulich@suse.com>
  Roger Pau Monné <roger.pau@citrix.com>

jobs:
 build-arm64-xsm                                              pass    
 build-amd64                                                  pass    
 build-armhf                                                  pass    
 build-amd64-libvirt                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-amd64-libvirt                                     pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/xen.git
   683d899e4b..57f68dfd2d  57f68dfd2d111a2ad381df740543c901b41f2299 -> smoke


From xen-devel-bounces@lists.xenproject.org Mon May 31 15:00:57 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 15:00:57 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134546.250275 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnjPP-0001oY-24; Mon, 31 May 2021 15:00:51 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134546.250275; Mon, 31 May 2021 15:00:51 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnjPO-0001oR-V2; Mon, 31 May 2021 15:00:50 +0000
Received: by outflank-mailman (input) for mailman id 134546;
 Mon, 31 May 2021 15:00:50 +0000
Received: from us1-rack-iad1.inumbo.com ([172.99.69.81])
 by lists.xenproject.org with esmtp (Exim 4.92) (envelope-from
 <SRS0=MO34=K2=chromium.org=tientzu@srs-us1.protection.inumbo.net>)
 id 1lnjPO-0001oL-59
 for xen-devel@lists.xenproject.org; Mon, 31 May 2021 15:00:50 +0000
Received: from mail-io1-xd2d.google.com (unknown [2607:f8b0:4864:20::d2d])
 by us1-rack-iad1.inumbo.com (Halon) with ESMTPS
 id 3b7aae5b-cef8-4667-a731-c0efff13338a;
 Mon, 31 May 2021 15:00:49 +0000 (UTC)
Received: by mail-io1-xd2d.google.com with SMTP id k16so12179311ios.10
 for <xen-devel@lists.xenproject.org>; Mon, 31 May 2021 08:00:49 -0700 (PDT)
Received: from mail-il1-f176.google.com (mail-il1-f176.google.com.
 [209.85.166.176])
 by smtp.gmail.com with ESMTPSA id c4sm7783974ioo.50.2021.05.31.08.00.48
 for <xen-devel@lists.xenproject.org>
 (version=TLS1_3 cipher=TLS_AES_128_GCM_SHA256 bits=128/128);
 Mon, 31 May 2021 08:00:48 -0700 (PDT)
Received: by mail-il1-f176.google.com with SMTP id h11so10211861ili.9
 for <xen-devel@lists.xenproject.org>; Mon, 31 May 2021 08:00:48 -0700 (PDT)
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
X-Inumbo-ID: 3b7aae5b-cef8-4667-a731-c0efff13338a
DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=chromium.org; s=google;
        h=mime-version:references:in-reply-to:from:date:message-id:subject:to
         :cc;
        bh=wYKtm4IVSfVTnhnKghH2jNnLIsKH6Y+eNOaILu7ARSY=;
        b=AqMEgYi53g30HYY0imJtzNViu05fOgAo/xQtrrm9XrM3n3KYgdCy6sM36TaTmgv0v6
         XvIYBkRFfNiS9tcDp32kkkQRj9L5R+h86b+EJM1hUjC5ImiLdtusiOsddmyVTjrm3mI6
         HVyluqo0OJwH1PN9FVBpuFPsEkrQe6YRuMK7k=
X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20161025;
        h=x-gm-message-state:mime-version:references:in-reply-to:from:date
         :message-id:subject:to:cc;
        bh=wYKtm4IVSfVTnhnKghH2jNnLIsKH6Y+eNOaILu7ARSY=;
        b=tnl52tjSX2M2+IUNGKShNUI71exrJsAx4gcZeVwkPXnZcOylvbOvXWLD5wrdAN8BOm
         AH6jf7zDSktZBF8NIwgv+hNtC6u85Hb3dwW8p31FXeO5PZ5+fUdKW3c4UkMThpkkGM4W
         Ly0AidGLWhB9zEqVZ7JX/YLy6a7fARVtlEZfDpvELJxxp9NwaK84t5jKJ4sdsbs2j0UR
         zj3pr+DZuhagLyxnvsmY851R8bjyvyauDeaidVlvg6QEPH4S72JeCDKK6C5+1awmIjGf
         0EBBUA85lMBJAWNAapGNQbE8IQgWPSy9IoNrUiCi48nnIdQyfaivMgVh+9N1o3qF4mDx
         L5Nw==
X-Gm-Message-State: AOAM532GNPYUhsKsj5OcQ4dGi9+rYORpbHOtQLWyNvBlWdlht3G7+KBw
	KfRUfd5Ii4DNj0uCpmwtVO4ReXEOeG1rVA==
X-Google-Smtp-Source: ABdhPJw1XzUiPclkUrYExykMtqvpkS+QCl+agE9ipL+bbJetlIEsD8gkb0XnT19xntTR7Jo/1MFrtQ==
X-Received: by 2002:a6b:5803:: with SMTP id m3mr8678129iob.68.1622473248431;
        Mon, 31 May 2021 08:00:48 -0700 (PDT)
X-Received: by 2002:a6b:690c:: with SMTP id e12mr17489820ioc.69.1622473237437;
 Mon, 31 May 2021 08:00:37 -0700 (PDT)
MIME-Version: 1.0
References: <20210518064215.2856977-1-tientzu@chromium.org>
 <20210518064215.2856977-2-tientzu@chromium.org> <170a54f2-be20-ec29-1d7f-3388e5f928c6@gmail.com>
 <20210527130211.GA24344@lst.de> <bab261b4-f801-05af-8fd9-c440ed219591@amd.com>
 <e59d4799-a6ff-6d13-0fed-087fc3482587@amd.com>
In-Reply-To: <e59d4799-a6ff-6d13-0fed-087fc3482587@amd.com>
From: Claire Chang <tientzu@chromium.org>
Date: Mon, 31 May 2021 23:00:26 +0800
X-Gmail-Original-Message-ID: <CALiNf29=suiQbDL28tBUXt6-E+-JJC_76X9Uxcdk2s+MSXrp2g@mail.gmail.com>
Message-ID: <CALiNf29=suiQbDL28tBUXt6-E+-JJC_76X9Uxcdk2s+MSXrp2g@mail.gmail.com>
Subject: Re: [PATCH v7 01/15] swiotlb: Refactor swiotlb init functions
To: Tom Lendacky <thomas.lendacky@amd.com>
Cc: Christoph Hellwig <hch@lst.de>, Florian Fainelli <f.fainelli@gmail.com>, Rob Herring <robh+dt@kernel.org>, 
	mpe@ellerman.id.au, Joerg Roedel <joro@8bytes.org>, Will Deacon <will@kernel.org>, 
	Frank Rowand <frowand.list@gmail.com>, Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>, 
	boris.ostrovsky@oracle.com, jgross@suse.com, 
	Marek Szyprowski <m.szyprowski@samsung.com>, benh@kernel.crashing.org, paulus@samba.org, 
	"list@263.net:IOMMU DRIVERS" <iommu@lists.linux-foundation.org>, sstabellini@kernel.org, 
	Robin Murphy <robin.murphy@arm.com>, grant.likely@arm.com, xypron.glpk@gmx.de, 
	Thierry Reding <treding@nvidia.com>, mingo@kernel.org, bauerman@linux.ibm.com, 
	peterz@infradead.org, Greg KH <gregkh@linuxfoundation.org>, 
	Saravana Kannan <saravanak@google.com>, "Rafael J . Wysocki" <rafael.j.wysocki@intel.com>, 
	heikki.krogerus@linux.intel.com, 
	Andy Shevchenko <andriy.shevchenko@linux.intel.com>, Randy Dunlap <rdunlap@infradead.org>, 
	Dan Williams <dan.j.williams@intel.com>, Bartosz Golaszewski <bgolaszewski@baylibre.com>, 
	linux-devicetree <devicetree@vger.kernel.org>, lkml <linux-kernel@vger.kernel.org>, 
	linuxppc-dev@lists.ozlabs.org, xen-devel@lists.xenproject.org, 
	Nicolas Boichat <drinkcat@chromium.org>, Jim Quinlan <james.quinlan@broadcom.com>, 
	Tomasz Figa <tfiga@chromium.org>, bskeggs@redhat.com, 
	Bjorn Helgaas <bhelgaas@google.com>, chris@chris-wilson.co.uk, 
	Daniel Vetter <daniel@ffwll.ch>, airlied@linux.ie, dri-devel@lists.freedesktop.org, 
	intel-gfx@lists.freedesktop.org, jani.nikula@linux.intel.com, 
	Jianxiong Gao <jxgao@google.com>, joonas.lahtinen@linux.intel.com, 
	linux-pci@vger.kernel.org, maarten.lankhorst@linux.intel.com, 
	matthew.auld@intel.com, rodrigo.vivi@intel.com, 
	thomas.hellstrom@linux.intel.com
Content-Type: text/plain; charset="UTF-8"

On Fri, May 28, 2021 at 12:32 AM Tom Lendacky <thomas.lendacky@amd.com> wrote:
>
> On 5/27/21 9:41 AM, Tom Lendacky wrote:
> > On 5/27/21 8:02 AM, Christoph Hellwig wrote:
> >> On Wed, May 19, 2021 at 11:50:07AM -0700, Florian Fainelli wrote:
> >>> You convert this call site with swiotlb_init_io_tlb_mem() which did not
> >>> do the set_memory_decrypted()+memset(). Is this okay or should
> >>> swiotlb_init_io_tlb_mem() add an additional argument to do this
> >>> conditionally?
> >>
> >> The zeroing is useful and was missing before.  I think having a clean
> >> state here is the right thing.
> >>
> >> Not sure about the set_memory_decrypted, swiotlb_update_mem_attributes
> >> kinda suggests it is too early to set the memory decrypted.
> >>
> >> Adding Tom who should know about all this.
> >
> > The reason for adding swiotlb_update_mem_attributes() was because having
> > the call to set_memory_decrypted() in swiotlb_init_with_tbl() triggered a
> > BUG_ON() related to interrupts not being enabled yet during boot. So that
> > call had to be delayed until interrupts were enabled.
>
> I pulled down and tested the patch set and booted with SME enabled. The
> following was seen during the boot:
>
> [    0.134184] BUG: Bad page state in process swapper  pfn:108002
> [    0.134196] page:(____ptrval____) refcount:0 mapcount:-128 mapping:0000000000000000 index:0x0 pfn:0x108002
> [    0.134201] flags: 0x17ffffc0000000(node=0|zone=2|lastcpupid=0x1fffff)
> [    0.134208] raw: 0017ffffc0000000 ffff88847f355e28 ffff88847f355e28 0000000000000000
> [    0.134210] raw: 0000000000000000 0000000000000001 00000000ffffff7f 0000000000000000
> [    0.134212] page dumped because: nonzero mapcount
> [    0.134213] Modules linked in:
> [    0.134218] CPU: 0 PID: 0 Comm: swapper Not tainted 5.13.0-rc2-sos-custom #3
> [    0.134221] Hardware name: ...
> [    0.134224] Call Trace:
> [    0.134233]  dump_stack+0x76/0x94
> [    0.134244]  bad_page+0xa6/0xf0
> [    0.134252]  __free_pages_ok+0x331/0x360
> [    0.134256]  memblock_free_all+0x158/0x1c1
> [    0.134267]  mem_init+0x1f/0x14c
> [    0.134273]  start_kernel+0x290/0x574
> [    0.134279]  secondary_startup_64_no_verify+0xb0/0xbb
>
> I see this about 40 times during the boot, each with a different PFN. The
> system boots (which seemed odd), but I don't know if there will be side
> effects to this (I didn't stress the system).
>
> I modified the code to add a flag to not do the set_memory_decrypted(), as
> suggested by Florian, when invoked from swiotlb_init_with_tbl(), and that
> eliminated the bad page state BUG.

Thanks. Will add a flag to skip set_memory_decrypted() in v9.

>
> Thanks,
> Tom
>
> >
> > Thanks,
> > Tom
> >
> >>


From xen-devel-bounces@lists.xenproject.org Mon May 31 16:23:29 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 16:23:29 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134554.250286 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnkh5-0001Yh-Ry; Mon, 31 May 2021 16:23:11 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134554.250286; Mon, 31 May 2021 16:23:11 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnkh5-0001Ya-Nr; Mon, 31 May 2021 16:23:11 +0000
Received: by outflank-mailman (input) for mailman id 134554;
 Mon, 31 May 2021 16:23:10 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnkh4-0001YQ-T6; Mon, 31 May 2021 16:23:10 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnkh4-0005PG-Mk; Mon, 31 May 2021 16:23:10 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnkh4-0008Sc-BU; Mon, 31 May 2021 16:23:10 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnkh4-0000wm-Ao; Mon, 31 May 2021 16:23:10 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=NXTM8aktSOVpkYtifGxAMzMUs5/r1XHBXZs90FoVcbg=; b=gYJ1aLIKHh/EGbM00GhWYZ66Ks
	Q15LS6kgNinv1miOEN3gb/vEk9tRUZ4tAOx6avYUPabUl6aXQ1mS0vGfjdpsHNEHcwwV096vukX/+
	xn+dyyJM/wz5e4ZvbRWE4EDz3Dsc+u4r1hNqG8XSQ30rPL1soR1NOpmik9PTEwnz/5pE=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162270-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [qemu-mainline test] 162270: regressions - FAIL
X-Osstest-Failures:
    qemu-mainline:test-amd64-amd64-qemuu-freebsd11-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-freebsd12-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-i386:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-freebsd10-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ovmf-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-qemuu-nested-intel:debian-hvm-install/l1/l2:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-win7-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-i386-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-saverestore:fail:regression
    qemu-mainline:test-amd64-amd64-xl-rtds:guest-localmigrate/x10:fail:allowable
    qemu-mainline:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-xl-pvshim:guest-start:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-credit1:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-thunderx:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-libvirt-xsm:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-amd64-amd64-libvirt-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    qemu-mainline:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-vhd:saverestore-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    qemu-mainline:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    qemuu=52848929b70dcf92a68aedcfd90207be81ba3274
X-Osstest-Versions-That:
    qemuu=1d806cef0e38b5db8347a8e12f214d543204a314
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 16:23:10 +0000

flight 162270 qemu-mainline real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162270/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-amd64-qemuu-freebsd11-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-freebsd12-amd64 16 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-freebsd10-i386 16 guest-saverestore      fail REGR. vs. 152631
 test-amd64-i386-freebsd10-amd64 16 guest-saverestore     fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ovmf-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-qemuu-nested-intel 20 debian-hvm-install/l1/l2 fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-win7-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-i386-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631
 test-amd64-amd64-xl-qemuu-ws16-amd64 15 guest-saverestore fail REGR. vs. 152631

Regressions which are regarded as allowable (not blocking):
 test-amd64-amd64-xl-rtds     20 guest-localmigrate/x10   fail REGR. vs. 152631

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152631
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152631
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152631
 test-amd64-i386-xl-pvshim    14 guest-start                  fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt-xsm  15 migrate-support-check        fail   never pass
 test-amd64-i386-libvirt      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-credit2  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl          16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-credit1  16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-xsm      15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-xsm      16 saverestore-support-check    fail   never pass
 test-arm64-arm64-xl-thunderx 15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-thunderx 16 saverestore-support-check    fail   never pass
 test-arm64-arm64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-arm64-arm64-libvirt-xsm 16 saverestore-support-check    fail   never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt-vhd 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-vhd      14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-vhd      15 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass

version targeted for testing:
 qemuu                52848929b70dcf92a68aedcfd90207be81ba3274
baseline version:
 qemuu                1d806cef0e38b5db8347a8e12f214d543204a314

Last test of basis   152631  2020-08-20 09:07:46 Z  284 days
Failing since        152659  2020-08-21 14:07:39 Z  283 days  523 attempts
Testing same since   162270  2021-05-31 03:30:37 Z    0 days    1 attempts

------------------------------------------------------------
519 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          pass    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           pass    
 test-amd64-coresched-i386-xl                                 pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 fail    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 pass    
 test-amd64-i386-libvirt-xsm                                  pass    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      pass    
 test-amd64-i386-xl-xsm                                       pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       fail    
 test-amd64-amd64-qemuu-freebsd12-amd64                       fail    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         fail    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  pass    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  pass    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          fail    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      pass    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         pass    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 pass    
 test-amd64-amd64-amd64-pvgrub                                pass    
 test-amd64-amd64-i386-pvgrub                                 pass    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    pass    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       pass    
 test-amd64-amd64-xl-rtds                                     fail    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    pass    
 test-arm64-arm64-xl-thunderx                                 pass    
 test-amd64-amd64-libvirt-vhd                                 pass    
 test-armhf-armhf-xl-vhd                                      pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 164273 lines long.)


From xen-devel-bounces@lists.xenproject.org Mon May 31 19:25:02 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 19:25:02 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134568.250299 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnnWr-0001Da-L5; Mon, 31 May 2021 19:24:49 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134568.250299; Mon, 31 May 2021 19:24:49 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnnWr-0001DT-HF; Mon, 31 May 2021 19:24:49 +0000
Received: by outflank-mailman (input) for mailman id 134568;
 Mon, 31 May 2021 19:24:48 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnnWq-0001DJ-M4; Mon, 31 May 2021 19:24:48 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnnWq-0008PA-GI; Mon, 31 May 2021 19:24:48 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnnWq-0001BL-60; Mon, 31 May 2021 19:24:48 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnnWq-0000Au-5W; Mon, 31 May 2021 19:24:48 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=IyVQKdLgJAtNi3Oof+UVAgn5Ar21mfLs+rV2TTnbugE=; b=bjLHHg0rBDUpAPZeFSZnayV/Q8
	XfhUOKvYQ9o/YFCu0bAij+lDFiVju9Pv1WX/ObgLmwhKPzX6ZD6EvpXT7I/HDBF09bt1i24TXpJkh
	XDxw17CBV1DJ/XbEl/cDS0wqeWAa0muWkW3tecycyG/AjXd69BzbsjOpeA3mUecvBpW4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162273-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [seabios test] 162273: tolerable FAIL - PUSHED
X-Osstest-Failures:
    seabios:test-amd64-amd64-xl-qemuu-debianhvm-amd64:guest-start/debianhvm.repeat:fail:heisenbug
    seabios:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    seabios:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-i386-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    seabios:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    seabios:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
X-Osstest-Versions-This:
    seabios=81433aa8a19b36f9e3d50697608c93d8a28bf772
X-Osstest-Versions-That:
    seabios=6eff8085980dba0938cea0193b8a0fd3c6b0c4ca
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 19:24:48 +0000

flight 162273 seabios real [real]
flight 162278 seabios real-retest [real]
http://logs.test-lab.xenproject.org/osstest/logs/162273/
http://logs.test-lab.xenproject.org/osstest/logs/162278/

Failures :-/ but no regressions.

Tests which are failing intermittently (not blocking):
 test-amd64-amd64-xl-qemuu-debianhvm-amd64 20 guest-start/debianhvm.repeat fail pass in 162278-retest

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 162169
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 162169
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 162169
 test-amd64-i386-xl-qemuu-win7-amd64 19 guest-stop             fail like 162169
 test-amd64-i386-xl-qemuu-ws16-amd64 19 guest-stop             fail like 162169
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass

version targeted for testing:
 seabios              81433aa8a19b36f9e3d50697608c93d8a28bf772
baseline version:
 seabios              6eff8085980dba0938cea0193b8a0fd3c6b0c4ca

Last test of basis   162169  2021-05-26 14:09:53 Z    5 days
Testing same since   162273  2021-05-31 05:41:03 Z    0 days    1 attempts

------------------------------------------------------------
People who touched revisions under test:
  Gerd Hoffmann <kraxel@redhat.com>

jobs:
 build-amd64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            pass    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  pass    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    fail    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     pass    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         pass    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-i386-qemuu-rhel6hvm-intel                         pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              pass    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Pushing revision :

To xenbits.xen.org:/home/xen/git/osstest/seabios.git
   6eff808..81433aa  81433aa8a19b36f9e3d50697608c93d8a28bf772 -> xen-tested-master


From xen-devel-bounces@lists.xenproject.org Mon May 31 20:21:04 2021
Return-path: <xen-devel-bounces@lists.xenproject.org>
Envelope-to: archives@lists.xen.org
Delivery-date: Mon, 31 May 2021 20:21:04 +0000
Received: from list by lists.xenproject.org with outflank-mailman.134579.250314 (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnoPB-00070O-TZ; Mon, 31 May 2021 20:20:57 +0000
X-Outflank-Mailman: Message body and most headers restored to incoming version
Received: by outflank-mailman (output) from mailman id 134579.250314; Mon, 31 May 2021 20:20:57 +0000
Received: from localhost ([127.0.0.1] helo=lists.xenproject.org)
	by lists.xenproject.org with esmtp (Exim 4.92)
	(envelope-from <xen-devel-bounces@lists.xenproject.org>)
	id 1lnoPB-00070H-Po; Mon, 31 May 2021 20:20:57 +0000
Received: by outflank-mailman (input) for mailman id 134579;
 Mon, 31 May 2021 20:20:57 +0000
Received: from mail.xenproject.org ([104.130.215.37])
 by lists.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnoPB-000707-4c; Mon, 31 May 2021 20:20:57 +0000
Received: from host146.205.237.98.conversent.net ([205.237.98.146]
 helo=infra.test-lab.xenproject.org)
 by mail.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnoPA-00010r-V6; Mon, 31 May 2021 20:20:57 +0000
Received: from [172.16.148.1] (helo=osstest.test-lab.xenproject.org)
 by infra.test-lab.xenproject.org with esmtp (Exim 4.92)
 (envelope-from <osstest-admin@xenproject.org>)
 id 1lnoPA-0003Bz-LR; Mon, 31 May 2021 20:20:56 +0000
Received: from osstest by osstest.test-lab.xenproject.org with local (Exim
 4.92) (envelope-from <osstest-admin@xenproject.org>)
 id 1lnoPA-0006Uh-L1; Mon, 31 May 2021 20:20:56 +0000
X-BeenThere: xen-devel@lists.xenproject.org
List-Id: Xen developer discussion <xen-devel.lists.xenproject.org>
List-Unsubscribe: <https://lists.xenproject.org/mailman/options/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=unsubscribe>
List-Post: <mailto:xen-devel@lists.xenproject.org>
List-Help: <mailto:xen-devel-request@lists.xenproject.org?subject=help>
List-Subscribe: <https://lists.xenproject.org/mailman/listinfo/xen-devel>,
 <mailto:xen-devel-request@lists.xenproject.org?subject=subscribe>
Errors-To: xen-devel-bounces@lists.xenproject.org
Precedence: list
Sender: "Xen-devel" <xen-devel-bounces@lists.xenproject.org>
DKIM-Signature: v=1; a=rsa-sha256; q=dns/txt; c=relaxed/relaxed;
	d=xenproject.org; s=20200302mail; h=Date:From:Subject:MIME-Version:
	Content-Transfer-Encoding:Content-Type:Message-ID:To;
	bh=e/XtzFgL59SrBpg9W4j3wJv2vd7VImXVvrC9JHR1x0Y=; b=UPDIXOD/WaGCVbzsj2/nH7Ykth
	Ndh/0wJ1mEzv3xc2Ug0juzw9J0PEViDjcveV9R5x29XGkJc1ZXnjRb7k5+uGi7mERu03/5/7z3Gk/
	gNPyTl5wgE1SLDFdT9Hbwx+xZqHi1oWoZRuMeS8rNCDIBoRCBGbo/ljCghlN4ZB4gQk4=;
To: xen-devel@lists.xenproject.org,
    osstest-admin@xenproject.org
Message-ID: <osstest-162274-mainreport@xen.org>
Content-Type: text/plain; charset="UTF-8"
Content-Transfer-Encoding: 8bit
MIME-Version: 1.0
Subject: [linux-linus test] 162274: regressions - FAIL
X-Osstest-Failures:
    linux-linus:test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-intel:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-ws16-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-debianhvm-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemuu-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-qemut-rhel6hvm-amd:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-raw:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-pvshim:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-shadow:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-debianhvm-i386-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-ovmf-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-freebsd10-i386:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemuu-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-xl-qemut-win7-amd64:xen-install:fail:regression
    linux-linus:test-amd64-i386-examine:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-xsm:xen-install:fail:regression
    linux-linus:test-amd64-coresched-i386-xl:xen-install:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/src_host:fail:regression
    linux-linus:test-amd64-i386-libvirt-pair:xen-install/dst_host:fail:regression
    linux-linus:test-amd64-amd64-libvirt-vhd:guest-start:fail:regression
    linux-linus:test-arm64-arm64-xl:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-thunderx:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit2:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-libvirt-xsm:debian-fixup:fail:regression
    linux-linus:test-arm64-arm64-xl-credit1:debian-fixup:fail:regression
    linux-linus:test-amd64-amd64-amd64-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-i386-pvgrub:guest-stop:fail:regression
    linux-linus:test-amd64-amd64-xl-qcow2:guest-start:fail:regression
    linux-linus:test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm:xen-install:fail:regression
    linux-linus:test-armhf-armhf-xl-vhd:guest-start:fail:regression
    linux-linus:test-amd64-amd64-xl-qemut-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemut-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-amd64-amd64-qemuu-nested-amd:debian-hvm-install/l1/l2:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-win7-amd64:guest-stop:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-xl-qemuu-ws16-amd64:guest-stop:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:migrate-support-check:fail:nonblocking
    linux-linus:test-arm64-arm64-xl-seattle:saverestore-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-multivcpu:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit1:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-rtds:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-arndale:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-credit2:saverestore-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-libvirt-raw:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:migrate-support-check:fail:nonblocking
    linux-linus:test-armhf-armhf-xl-cubietruck:saverestore-support-check:fail:nonblocking
X-Osstest-Versions-This:
    linux=8124c8a6b35386f73523d27eacb71b5364a68c4c
X-Osstest-Versions-That:
    linux=deacdb3e3979979016fcd0ffd518c320a62ad166
From: osstest service owner <osstest-admin@xenproject.org>
Date: Mon, 31 May 2021 20:20:56 +0000

flight 162274 linux-linus real [real]
http://logs.test-lab.xenproject.org/osstest/logs/162274/

Regressions :-(

Tests which did not succeed and are blocking,
including tests which could not be run:
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-xsm        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-intel  7 xen-install      fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-ws16-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-pair         10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-pair         11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-i386-xl            7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-debianhvm-amd64  7 xen-install  fail REGR. vs. 152332
 test-amd64-i386-libvirt       7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-qemuu-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-qemut-rhel6hvm-amd  7 xen-install        fail REGR. vs. 152332
 test-amd64-i386-xl-raw        7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-pvshim     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-freebsd10-amd64  7 xen-install           fail REGR. vs. 152332
 test-amd64-i386-xl-shadow     7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-ovmf-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-freebsd10-i386  7 xen-install            fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-amd64-i386-xl-qemuu-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-xl-qemut-win7-amd64  7 xen-install       fail REGR. vs. 152332
 test-amd64-i386-examine       6 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-xsm   7 xen-install              fail REGR. vs. 152332
 test-amd64-coresched-i386-xl  7 xen-install              fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 10 xen-install/src_host     fail REGR. vs. 152332
 test-amd64-i386-libvirt-pair 11 xen-install/dst_host     fail REGR. vs. 152332
 test-amd64-amd64-libvirt-vhd 13 guest-start              fail REGR. vs. 152332
 test-arm64-arm64-xl          13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-xsm      13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-thunderx 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit2  13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-libvirt-xsm 13 debian-fixup             fail REGR. vs. 152332
 test-arm64-arm64-xl-credit1  13 debian-fixup             fail REGR. vs. 152332
 test-amd64-amd64-amd64-pvgrub 20 guest-stop              fail REGR. vs. 152332
 test-amd64-amd64-i386-pvgrub 20 guest-stop               fail REGR. vs. 152332
 test-amd64-amd64-xl-qcow2    13 guest-start              fail REGR. vs. 152332
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm 7 xen-install fail REGR. vs. 152332
 test-armhf-armhf-xl-vhd      13 guest-start              fail REGR. vs. 152332

Tests which did not succeed, but are not blocking:
 test-amd64-amd64-xl-qemut-win7-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-xl-qemut-ws16-amd64 19 guest-stop            fail like 152332
 test-amd64-amd64-qemuu-nested-amd 20 debian-hvm-install/l1/l2 fail like 152332
 test-amd64-amd64-xl-qemuu-win7-amd64 19 guest-stop            fail like 152332
 test-armhf-armhf-libvirt     16 saverestore-support-check    fail  like 152332
 test-armhf-armhf-libvirt-raw 15 saverestore-support-check    fail  like 152332
 test-amd64-amd64-xl-qemuu-ws16-amd64 19 guest-stop            fail like 152332
 test-arm64-arm64-xl-seattle  15 migrate-support-check        fail   never pass
 test-arm64-arm64-xl-seattle  16 saverestore-support-check    fail   never pass
 test-amd64-amd64-libvirt     15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-xsm 15 migrate-support-check        fail   never pass
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm 13 migrate-support-check fail never pass
 test-armhf-armhf-xl          15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl          16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-multivcpu 15 migrate-support-check        fail  never pass
 test-armhf-armhf-xl-multivcpu 16 saverestore-support-check    fail  never pass
 test-armhf-armhf-xl-credit1  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit1  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-rtds     15 migrate-support-check        fail   never pass
 test-armhf-armhf-libvirt     15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-rtds     16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-arndale  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-arndale  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-xl-credit2  15 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-credit2  16 saverestore-support-check    fail   never pass
 test-armhf-armhf-libvirt-raw 14 migrate-support-check        fail   never pass
 test-armhf-armhf-xl-cubietruck 15 migrate-support-check        fail never pass
 test-armhf-armhf-xl-cubietruck 16 saverestore-support-check    fail never pass

version targeted for testing:
 linux                8124c8a6b35386f73523d27eacb71b5364a68c4c
baseline version:
 linux                deacdb3e3979979016fcd0ffd518c320a62ad166

Last test of basis   152332  2020-07-31 19:41:23 Z  304 days
Failing since        152366  2020-08-01 20:49:34 Z  302 days  516 attempts
Testing same since   162268  2021-05-31 00:10:38 Z    0 days    2 attempts

------------------------------------------------------------
6126 people touched revisions under test,
not listing them all

jobs:
 build-amd64-xsm                                              pass    
 build-arm64-xsm                                              pass    
 build-i386-xsm                                               pass    
 build-amd64                                                  pass    
 build-arm64                                                  pass    
 build-armhf                                                  pass    
 build-i386                                                   pass    
 build-amd64-libvirt                                          pass    
 build-arm64-libvirt                                          pass    
 build-armhf-libvirt                                          pass    
 build-i386-libvirt                                           pass    
 build-amd64-pvops                                            pass    
 build-arm64-pvops                                            pass    
 build-armhf-pvops                                            pass    
 build-i386-pvops                                             pass    
 test-amd64-amd64-xl                                          pass    
 test-amd64-coresched-amd64-xl                                pass    
 test-arm64-arm64-xl                                          fail    
 test-armhf-armhf-xl                                          pass    
 test-amd64-i386-xl                                           fail    
 test-amd64-coresched-i386-xl                                 fail    
 test-amd64-amd64-libvirt-qemuu-debianhvm-amd64-xsm           pass    
 test-amd64-i386-libvirt-qemuu-debianhvm-amd64-xsm            fail    
 test-amd64-amd64-xl-qemut-stubdom-debianhvm-amd64-xsm        pass    
 test-amd64-i386-xl-qemut-stubdom-debianhvm-amd64-xsm         fail    
 test-amd64-amd64-xl-qemut-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemut-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-xl-qemuu-debianhvm-i386-xsm                 pass    
 test-amd64-i386-xl-qemuu-debianhvm-i386-xsm                  fail    
 test-amd64-amd64-libvirt-xsm                                 pass    
 test-arm64-arm64-libvirt-xsm                                 fail    
 test-amd64-i386-libvirt-xsm                                  fail    
 test-amd64-amd64-xl-xsm                                      pass    
 test-arm64-arm64-xl-xsm                                      fail    
 test-amd64-i386-xl-xsm                                       fail    
 test-amd64-amd64-qemuu-nested-amd                            fail    
 test-amd64-amd64-xl-pvhv2-amd                                pass    
 test-amd64-i386-qemut-rhel6hvm-amd                           fail    
 test-amd64-i386-qemuu-rhel6hvm-amd                           fail    
 test-amd64-amd64-dom0pvh-xl-amd                              pass    
 test-amd64-amd64-xl-qemut-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemut-debianhvm-amd64                     fail    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64                    pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64                     fail    
 test-amd64-i386-freebsd10-amd64                              fail    
 test-amd64-amd64-qemuu-freebsd11-amd64                       pass    
 test-amd64-amd64-qemuu-freebsd12-amd64                       pass    
 test-amd64-amd64-xl-qemuu-ovmf-amd64                         pass    
 test-amd64-i386-xl-qemuu-ovmf-amd64                          fail    
 test-amd64-amd64-xl-qemut-win7-amd64                         fail    
 test-amd64-i386-xl-qemut-win7-amd64                          fail    
 test-amd64-amd64-xl-qemuu-win7-amd64                         fail    
 test-amd64-i386-xl-qemuu-win7-amd64                          fail    
 test-amd64-amd64-xl-qemut-ws16-amd64                         fail    
 test-amd64-i386-xl-qemut-ws16-amd64                          fail    
 test-amd64-amd64-xl-qemuu-ws16-amd64                         fail    
 test-amd64-i386-xl-qemuu-ws16-amd64                          fail    
 test-armhf-armhf-xl-arndale                                  pass    
 test-amd64-amd64-xl-credit1                                  pass    
 test-arm64-arm64-xl-credit1                                  fail    
 test-armhf-armhf-xl-credit1                                  pass    
 test-amd64-amd64-xl-credit2                                  pass    
 test-arm64-arm64-xl-credit2                                  fail    
 test-armhf-armhf-xl-credit2                                  pass    
 test-armhf-armhf-xl-cubietruck                               pass    
 test-amd64-amd64-xl-qemuu-dmrestrict-amd64-dmrestrict        pass    
 test-amd64-i386-xl-qemuu-dmrestrict-amd64-dmrestrict         fail    
 test-amd64-amd64-examine                                     pass    
 test-arm64-arm64-examine                                     pass    
 test-armhf-armhf-examine                                     pass    
 test-amd64-i386-examine                                      fail    
 test-amd64-i386-freebsd10-i386                               fail    
 test-amd64-amd64-qemuu-nested-intel                          pass    
 test-amd64-amd64-xl-pvhv2-intel                              pass    
 test-amd64-i386-qemut-rhel6hvm-intel                         fail    
 test-amd64-i386-qemuu-rhel6hvm-intel                         fail    
 test-amd64-amd64-dom0pvh-xl-intel                            pass    
 test-amd64-amd64-libvirt                                     pass    
 test-armhf-armhf-libvirt                                     pass    
 test-amd64-i386-libvirt                                      fail    
 test-amd64-amd64-xl-multivcpu                                pass    
 test-armhf-armhf-xl-multivcpu                                pass    
 test-amd64-amd64-pair                                        pass    
 test-amd64-i386-pair                                         fail    
 test-amd64-amd64-libvirt-pair                                pass    
 test-amd64-i386-libvirt-pair                                 fail    
 test-amd64-amd64-amd64-pvgrub                                fail    
 test-amd64-amd64-i386-pvgrub                                 fail    
 test-amd64-amd64-xl-pvshim                                   pass    
 test-amd64-i386-xl-pvshim                                    fail    
 test-amd64-amd64-pygrub                                      pass    
 test-amd64-amd64-xl-qcow2                                    fail    
 test-armhf-armhf-libvirt-raw                                 pass    
 test-amd64-i386-xl-raw                                       fail    
 test-amd64-amd64-xl-rtds                                     pass    
 test-armhf-armhf-xl-rtds                                     pass    
 test-arm64-arm64-xl-seattle                                  pass    
 test-amd64-amd64-xl-qemuu-debianhvm-amd64-shadow             pass    
 test-amd64-i386-xl-qemuu-debianhvm-amd64-shadow              fail    
 test-amd64-amd64-xl-shadow                                   pass    
 test-amd64-i386-xl-shadow                                    fail    
 test-arm64-arm64-xl-thunderx                                 fail    
 test-amd64-amd64-libvirt-vhd                                 fail    
 test-armhf-armhf-xl-vhd                                      fail    


------------------------------------------------------------
sg-report-flight on osstest.test-lab.xenproject.org
logs: /home/logs/logs
images: /home/logs/images

Logs, config files, etc. are available at
    http://logs.test-lab.xenproject.org/osstest/logs

Explanation of these reports, and of osstest in general, is at
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README.email;hb=master
    http://xenbits.xen.org/gitweb/?p=osstest.git;a=blob;f=README;hb=master

Test harness code can be found at
    http://xenbits.xen.org/gitweb?p=osstest.git;a=summary


Not pushing.

(No revision log; it would be 1665410 lines long.)